High-resolution path-integral development of financial options

1.1. Background
1.2. Our Approach
1.3. Outline of Paper
2.1. Eurodollars
2.2. Futures
2.3. Options on Futures
2.4. Front/Back Month Contracts
2.5. Stochastic Volatility
3.1. Random walk model
3.2. Black-Scholes (BS) Theory
3.3. Some Key Issues in Derivation of BS
3.4. Models
3.4.1. Various Diffusions
4.1. Statistical Mechanics of Large Systems
4.2. Correlations
5.1. ASA Outline
5.1.1. General description
5.1.2. Mathematical outline
5.1.3. Reannealing
5.1.4. Quenching
5.2. x-Indicator of Market Contexts
6.1. PATHINT Outline
6.2. Development of Long-Time Probabilities
6.3. Dependence of Probabilities on and
6.4. Two-Factor Volatility and PATHINT Modifications
7.1. Primary Use of Probabilities For European Options
7.2. PATHINT Baselined to CRR and BS
7.3. Two-Factor PATHINT Baselined to One-Factor PATHINT
Figure 1
Figure 2
Figure 3
Table 1

Lester Ingber

Lester Ingber Research
POB 06440 Sears Tower, Chicago, IL 60606
DRW Investments LLC
311 S Wacker Dr, Ste 900, Chicago, IL 60606

ingber@ingber.com, ingber@alumni.caltech.edu

+1.312.542.1006 Voice
+1.801.881.1088 Fax


The Black-Scholes theory of option pricing has been considered for many years as an important but very approximate zeroth-order description of actual market behavior. We generalize the functional form of the diffusion of these systems and also consider multi-factor models including stochastic volatility. Daily Eurodollar futures prices and implied volatilities are fit to determine exponents of functional behavior of diffusions using methods of global optimization, Adaptive Simulated Annealing (ASA), to generate tight fits across moving time windows of Eurodollar contracts. These short-time fitted distributions are then developed into long-time distributions using a robust non-Monte Carlo path-integral algorithm, PATHINT, to generate prices and derivatives commonly used by option traders.

Keywords: options; eurodollar; volatility; path integral; optimization; statistical mechanics

PACS Nos.: 05.10.-a, 02.70.-c, 82.20.wt, 89.90.+n


1.1. Background

There always is much interest in developing more sophisticated pricing models for financial instruments. In particular, there currently is much interest in improving option pricing models, especially with respect to stochastic variables [1-4].
The standard Black-Scholes (BS) theory assumes a lognormal distribution of market prices, i.e., a diffusion linearly proportional to the market price. However, many texts include outlines of more general diffusions proportional to an arbitrary power of the market price [5].
The above aspects of stochastic volatility and of more general functional dependencies of diffusions are most often “swept under the rug” of a simple lognormal form. Experienced traders often use their own intuition to put volatility “smiles” into the BS theoretical constant coefficient in the BS lognormal distribution to compensate for these aspects.
It is generally acknowledged that, since the market crash of 1987, markets have been increasingly difficult to describe using the BS model, and so better modelling and computational techniques should be used by traders [6], although in practice simple BS models are the rule rather than the exception simply because they are easy to use [7]. To a large extent, previous modelling that has included stochastic volatility and multiple factors has been driven more by the desire either to delve into mathematics tangential to these issues, or to deal only with models that can accommodate closed-form algebraic expressions. We do not see much of the philosophy in the literature that has long driven the natural sciences: to respect first raw data, secondly models of raw data, and finally the use of numerical techniques that do not excessively distort models for the sake of ease of analysis and speed of computation. Indeed, very often the reverse set of priorities is seen in mathematical finance.

1.2. Our Approach

We have addressed the above issues in detail within the framework of a previously developed statistical mechanics of financial markets (SMFM) [8-13].
Our approach requires three sensible parts. Part one is the formulation of the model, which to some extent also involves specification of the specific market(s) data to be addressed. Part two is the fitting of the model to specific market data. Part three is the use of the resulting model to calculate option prices and their Greeks (partial derivatives of the prices with respect to their independent variables), which are used as risk parameters by traders. Each part requires some specific numerical tuning to the market under consideration.
The first part was to develop the algebraic model to replace/generalize BS, including the possibility of also addressing how to handle data regions not previously observed in trading. This is not absurd; current BS models perform integrals that must include much influence from fat tails covering data regions never seen or ever likely to be seen in real-world markets. There are some issues as to whether we should take seriously the notion that the market is strongly driven by some element of a “self-fulfilling prophecy” by the BS model [14], but in any case our models have parameters to handle a wide range of possible cases that might arise.
We have developed two parallel tracks starting with part one, a one-factor and a two-factor model. The two-factor model includes stochastic volatility. At first we sensed the need to develop this two-factor model, and we now see that this is at the least an important benchmark against which to judge the worth of the one-factor model.
The second part was to fit the actual raw data so we can come up with real distributions. Some tests illustrated that standard quasi-linear fitting routines could not get the proper fits, and so we used a more powerful global optimization algorithm, Adaptive Simulated Annealing (ASA) [15]. Tuning and selection of the time periods to perform the fits to the data were not trivial aspects of this research. Practical decisions had to be made on the time span of data to be fit and on how to aggregate the fits to get sensible “fair values” for reasonable standard deviations of the exponents in the diffusions.
The third part was to develop Greeks and risk parameters from these distributions without making premature approximations just to ease the analysis. Perhaps someday, simple approximations and intuitions similar to what traders now use for BS models will be available for these models, but we do not think the best approach is to start out with such approximations until we first see proper calculations, especially in this uncharted territory. When it seemed that Cox-Ross-Rubinstein (CRR) standard tree codes (discretized approximations to partial differential equations) [16] were not stable for general exponents, i.e., for other than the lognormal case, we turned to a PATHINT code developed a decade ago for some hard nonlinear multifactor problems [17], e.g., combat analyses [18], neuroscience [19,20], and potentially chaotic systems [21,22]. In 1990 and 1991 papers on financial applications, it was mentioned how these techniques could be used for stochastic interest rates and bonds [9,10]. The modifications required here for the one-factor European and then American cases went surprisingly smoothly; we still had to tune the meshes, etc. The two-factor model presented a technical problem to the algorithm, which we have reasonably handled using a combination of selection of the model in part one and a reasonable approach to developing the meshes.
More detailed numerical calculations than contained in this paper will be presented in a subsequent paper [23].

1.3. Outline of Paper

Section 1 is this introduction. Section 2 describes the nature of Eurodollar (ED) futures data and the evidence for stochastic volatility. Section 3 outlines the algebra of modelling options, including the standard BS theory and our generalizations. Section 4 outlines the three equivalent mathematical representations used by SMFM; this is required to understand the development of the short-time distribution that defines the cost function we derive for global optimization, as well as the numerical methods we have developed to calculate the long-time evolution of these short-time distributions. Section 5 outlines ASA and explains its use to fit short-time probability distributions defined by our models to the Eurodollar data; we offer the fitted exponent in the diffusion as a new important technical indicator of market behavior. Section 6 outlines PATHINT and explains its use to develop long-time probability distributions from the fitted short-time probability distributions, for both the one-factor and two-factor tracks. Section 7 describes how we use these long-time probability distributions to calculate European and American option prices and Greeks; here we give numerical tests of our approach against BS and CRR algorithms. Section 8 is our conclusion.


2.1. Eurodollars

Eurodollars are fixed-rate time deposits held primarily by overseas banks, but denominated in US dollars. They are not subject to US banking regulations and therefore tend to have a tighter bid-ask spread than deposits held in the United States [24].

2.2. Futures

The three-month Eurodollar futures contract is one of the most actively traded futures markets in the world. The contract is quoted as an index, where the yield is equal to the Eurodollar price subtracted from 100. This yield is equal to the fixed rate of interest paid by Eurodollar time deposits upon maturity and is expressed as an annualized interest rate based on a 360-day year. The Eurodollar futures are cash settled based on the 90-day London Interbank Offered Rate (LIBOR). A “notional” principal amount of $1 million is used to determine the change in the total interest payable on a hypothetical underlying time deposit, but is never actually paid or received [24].
Currently a total of 40 quarterly Eurodollar futures contracts (ten years' worth) are listed, with expirations annually in March, June, September and December.
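The quoting arithmetic just described can be sketched in a few lines. This is a hedged illustration: the sample price is our own, and the conversion uses only the conventions stated above (index = 100 minus yield, $1M notional, 90 days on a 360-day year).

```python
# Sketch of the Eurodollar quoting arithmetic described above.  The yield is
# 100 minus the index price; the hypothetical $1M notional over 90 days on a
# 360-day year converts a one-basis-point price move into a fixed dollar
# settlement change.  The sample price 94.50 is illustrative.

def implied_yield(price):
    """Annualized yield (percent) implied by the futures index price."""
    return 100.0 - price

def basis_point_value(notional=1_000_000, days=90, year=360):
    """Dollar value of a 0.01 (one basis point) move in the index."""
    return notional * days / (10_000 * year)

print(implied_yield(94.50))   # -> 5.5 (percent, annualized)
print(basis_point_value())    # -> 25.0 dollars per basis point
```

The fixed $25-per-basis-point value is why the cash settlement never requires the notional principal itself to change hands.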

2.3. Options on Futures

The options traded on the Eurodollar futures include not only 18 months of options expiring at the same time as the underlying future, but also various short-dated options which themselves expire up to one year prior to the expiration of the underlying futures contract.

2.4. Front/Back Month Contracts

For purposes of risk minimization, as discussed in a previous paper [4], traders put on spreads across a variety of option contracts. One common example is to trade the spread on contracts expiring one year apart, where the future closer to expiration is referred to as the front month contract, and the future expiring one year later is called the back month. The availability of short-dated or “mid-curve” options, which are based on an underlying back month futures contract but expire at the same time as the front month, allows one to trade the volatility ratios of the front and back month futures contracts without having to take the time differences in option expirations into consideration. We studied the volatilities of these types of front and back month contracts. Here, we give analyses with respect only to quarterly data longer than six months from expiration.

2.5. Stochastic Volatility

Below we develop two-factor models to address stochastic volatility. In a previous paper, we have performed empirical studies of Eurodollar futures to support the necessity of dealing with these issues [4].


3.1. Random walk model

The use of Brownian motion as a model for financial systems is generally attributed to Bachelier [25], though he incorrectly intuited that the noise scaled linearly instead of as the square root relative to the random log-price variable. Einstein is generally credited with using the correct mathematical description in a larger physical context of statistical systems. However, several studies imply that changing prices of many markets do not follow a random walk, that they may have long-term dependences in price correlations, and that they may not be efficient in quickly arbitraging new information [26-28]. A random walk for returns, rate of change of prices over prices, is described by a Langevin equation with simple additive noise η, typically representing the continual random influx of information into the market,

$$\dot\Gamma = \gamma_1 + \gamma_2 \eta\,,$$
$$\langle \eta(t) \rangle_\eta = 0\,,\qquad \langle \eta(t)\,\eta(t') \rangle_\eta = \delta(t - t')\,,$$

where γ1 and γ2 are constants, and Γ is the logarithm of (scaled) price. Price, although the most dramatic observable, may not be the only appropriate dependent variable or order parameter for the system of markets [29]. This possibility has also been called the “semistrong form of the efficient market hypothesis” [26].
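The additive-noise random walk for the log-price can be sketched as a discrete-time simulation. This is an illustrative sketch, not a fitted market model; the drift and noise amplitudes below (standing in for γ1 and γ2) are our own choices.

```python
import math
import random

# A minimal discrete-time sketch of the random-walk model above:
# Gamma(t+dt) = Gamma(t) + gamma1*dt + gamma2*sqrt(dt)*eta, with eta ~ N(0,1)
# standing in for delta-correlated white noise.  The values of gamma1 and
# gamma2 are illustrative, not fitted market parameters.

def simulate_log_price(gamma1, gamma2, T=1.0, n_steps=100, rng=None):
    """Terminal value of the log-price Gamma after one simulated path."""
    rng = rng or random.Random(0)
    dt = T / n_steps
    gamma = 0.0  # log of (scaled) price, starting at ln 1 = 0
    for _ in range(n_steps):
        gamma += gamma1 * dt + gamma2 * math.sqrt(dt) * rng.gauss(0.0, 1.0)
    return gamma

# Over many paths the terminal mean approaches gamma1*T and the variance
# approaches gamma2**2 * T, the signature of simple additive diffusion.
rng = random.Random(42)
paths = [simulate_log_price(0.05, 0.2, rng=rng) for _ in range(5000)]
mean = sum(paths) / len(paths)
var = sum((p - mean) ** 2 for p in paths) / len(paths)
print(round(mean, 2), round(var, 2))
```

The linear-in-time variance growth is exactly the square-root noise scaling that Bachelier missed.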
The generalization of this approach to include multivariate nonlinear nonequilibrium markets led to a model of statistical mechanics of financial markets (SMFM) [8].

3.2. Black-Scholes (BS) Theory

The standard partial-differential equation used to formulate most variants of Black-Scholes (BS) models describing the market value of an option, V, is

$$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\,\frac{\partial V}{\partial S} - rV = 0\,,$$

where S is the asset price, σ is the standard deviation, or volatility, of S, and r is the short-term interest rate. The solution depends on boundary conditions, subject to a number of interpretations, some requiring minor transformations of the basic BS equation or its solution. For example, the basic equation can apply to a number of one-dimensional models of interpretations of prices given to V, e.g., puts or calls, and to S, e.g., stocks or futures, dividends, etc.
For instance, if V is set to C, a call on an European option with exercise price X and maturity at T, the solution is

$$C(S, t) = S\,N(d_1) - X e^{-r(T-t)}\,N(d_2)\,,$$
$$d_1 = \frac{\ln(S/X) + (r + \tfrac{1}{2}\sigma^2)(T - t)}{\sigma (T - t)^{1/2}}\,,\qquad d_2 = d_1 - \sigma (T - t)^{1/2}\,.$$

In practice, the volatility σ is the least known parameter in this equation, and its estimation is generally the most important part of pricing options. Usually the volatility is given on a yearly basis, baselined to some standard, e.g., 252 trading days per year, or 360 or 365 calendar days. Therefore, all values of volatility given in the graphs in this paper, based on daily data, would be annualized by multiplying the standard deviations of the yields by √252 = 15.87. We have used this factor to present our implied volatilities as daily movements.
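The closed-form call price and the annualization convention above can be sketched directly; the parameter values below are standard textbook inputs, not market data from this paper.

```python
from math import erf, exp, log, sqrt

# Hedged sketch of the standard closed-form Black-Scholes call price quoted
# above (X = strike, tau = T - t in years), together with the sqrt(252)
# conversion between annual and daily volatility mentioned in the text.
# Parameter values are illustrative.

def N(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, X, r, sigma, tau):
    """European call value; sigma is the annualized volatility."""
    d1 = (log(S / X) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    return S * N(d1) - X * exp(-r * tau) * N(d2)

def daily_vol(annual_vol, trading_days=252):
    """Annualized volatility -> daily movement, the convention used for the
    implied volatilities presented in this paper."""
    return annual_vol / sqrt(trading_days)

c = bs_call(S=100.0, X=100.0, r=0.05, sigma=0.20, tau=1.0)
print(round(c, 2))                # a standard textbook check value, 10.45
print(round(daily_vol(0.20), 4))  # 0.20 annual vol is about 0.0126 daily
```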

3.3. Some Key Issues in Derivation of BS

The basic BS model considers a portfolio in terms of delta (Δ),

$$\Pi = V - \Delta S\,,$$

in a market with Gaussian-Markovian (“white”) noise X and drift μ,

$$dS = \sigma S\, dX + \mu S\, dt\,,$$
where V(S, t) inherits a random process from S,

$$dV = \sigma S\,\frac{\partial V}{\partial S}\, dX + \left( \mu S\,\frac{\partial V}{\partial S} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + \frac{\partial V}{\partial t} \right) dt\,.$$

This yields

$$d\Pi = dV - \Delta\, dS = \sigma S \left( \frac{\partial V}{\partial S} - \Delta \right) dX + \left( \mu S\,\frac{\partial V}{\partial S} - \mu \Delta S + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + \frac{\partial V}{\partial t} \right) dt\,.$$
The expected risk-neutral return of Π is

$$d\Pi = r\Pi\, dt = r(V - \Delta S)\, dt\,.$$
Options V on futures F can be derived, e.g., using simple transformations to take cost of carry into consideration, such as

$$F = S e^{r(T-t)}\,,$$

and setting

$$V(F, t) = V(S, t)\,.$$

The corresponding BS equation for futures F is

$$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 F^2 \frac{\partial^2 V}{\partial F^2} - rV = 0\,.$$
At least two advantages are present if Δ is chosen such that

$$\Delta = \frac{\partial V}{\partial S}\,.$$

Then, the portfolio can be instantaneously “risk-neutral,” in terms of zeroing the coefficient of X, as well as independent of the direction of market, in terms of zeroing the coefficient of μ. For the above example of V = C,

$$\Delta = \frac{\partial C}{\partial S} = N(d_1)\,.$$
Other trading strategies based on this simple model use similar constructs as risk parameters, e.g., gamma (Γ), theta (Θ), vega (Υ), rho (ρ) [5],

$$\Gamma = \frac{\partial^2 \Pi}{\partial S^2}\,,\qquad \Theta = \frac{\partial \Pi}{\partial t}\,,\qquad \Upsilon = \frac{\partial \Pi}{\partial \sigma}\,,\qquad \rho = \frac{\partial \Pi}{\partial r}\,.$$

The BS equation, Eq. (2), may be written as

$$\Theta + rS\Delta + \frac{1}{2}\sigma^2 S^2 \Gamma = r\Pi\,.$$
3.4. Models

Our two-factor model includes stochastic volatility σ of the underlying S,

$$dS = \mu\, S\, dt + \sigma\, F(S, S_0, S_\infty, x, y)\, dz_S\,,$$
$$d\sigma = \nu\, dt + \epsilon\, dz_\sigma\,,$$
$$F(S, S_0, S_\infty, x, y) = \begin{cases} S\,, & S < S_0\,, \\ S^x S_0^{1-x}\,, & S_0 \le S \le S_\infty\,, \\ S^y S_\infty^{x-y} S_0^{1-x}\,, & S > S_\infty\,, \end{cases}$$

where S_0 and S_∞ are selected to lie outside the data region used to fit the other parameters, e.g., S_0 = 1 and S_∞ = 20 for fits to Eurodollar futures, which historically have a very tight range relative to other markets. We have used the Black-Scholes form F = S inside S < S_0 to obtain the usual benefits, e.g., no negative prices as the distribution is naturally excluded from S < 0, and preservation of put-call parity. Put-call parity for European options is derived quite independent of any mathematical model of options [5]. In its simplest form, it is given by

$$c + X e^{-r(T-t)} = p + S\,,$$
where c (p) is the fair price of a call (put), X is the strike price, r is the risk-free interest rate, t is the present time, T is the time of expiration, and S is the underlying market. We have taken y = 0, a normal distribution, to reflect total ignorance of markets outside the range of S > S_∞. The one-factor model just assumes a constant σ. It is often noted that BS models incorrectly include untenable contributions from large-S regions because of their fat tails [30]. (If we wished to handle negative interest rates, i.e., ED prices > 100, we would shift the S = 0 axis to some S < 0 value.)
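The model-independence of put-call parity can be checked numerically by instantiating it with any one concrete pricing model. Below is a hedged sketch using standard Black-Scholes prices; all parameter values are illustrative.

```python
from math import erf, exp, log, sqrt

# Numerical check of the put-call parity relation above,
#     c + X*exp(-r*(T-t)) = p + S,
# which holds for European options independent of the pricing model.  Here
# we instantiate it with standard Black-Scholes call and put prices as one
# concrete model; the inputs are illustrative.

def N(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call_put(S, X, r, sigma, tau):
    d1 = (log(S / X) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    c = S * N(d1) - X * exp(-r * tau) * N(d2)
    p = X * exp(-r * tau) * N(-d2) - S * N(-d1)
    return c, p

S, X, r, sigma, tau = 96.0, 95.0, 0.05, 0.15, 0.5
c, p = bs_call_put(S, X, r, sigma, tau)
gap = c + X * exp(-r * tau) - (p + S)
print(abs(gap) < 1e-10)  # parity holds to floating-point accuracy
```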
We found that the abrupt, albeit continuous, changes across S_0, especially for x ≤ 0, did not cause any similar effects in the distributions evolved using these diffusions, as reported below.
The formula for pricing an option P, derived in a Black-Scholes generalized framework after factoring out interest-rate discounting, is equivalent to using the form



We experimented with some alternative functional forms, primarily to apply some smooth cutoffs across the above three regions of S. For example, we used a function F̄ designed to revert to the lognormal Black-Scholes model in several limits,



However, our fits were most sensitive to the data when we permitted the central region to be pure S^x, using F above.
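As a concrete sketch, the three-region diffusion factor described above can be coded directly. The piecewise shape follows the text (F = S below S0, pure S^x in the central data region, an S^y tail with y = 0 above S∞), while the continuity-matching constants and the sample exponent x = 0.7 are our own illustrative choices, not the paper's fitted values.

```python
# Sketch of the generalized diffusion factor described in this section: the
# Black-Scholes form F = S below S0, a pure power law S**x in the central
# (data) region, and an S**y tail above S_inf (y = 0 giving a normal
# distribution there).  The matching constants are chosen here so that F is
# continuous across S0 and S_inf; the sample exponent x = 0.7 is illustrative.

def F(S, S0=1.0, S_inf=20.0, x=0.7, y=0.0):
    if S < S0:
        return S                                   # lognormal-like region, excludes S < 0
    if S <= S_inf:
        return S**x * S0**(1.0 - x)                # fitted power-law region
    return S**y * S_inf**(x - y) * S0**(1.0 - x)   # tail region, y = 0 => normal

# Continuity at the region boundaries:
eps = 1e-9
print(abs(F(1.0 - eps) - F(1.0)) < 1e-6)    # True
print(abs(F(20.0) - F(20.0 + eps)) < 1e-6)  # True
```

Continuity across S0 and S∞ matters because, as noted below, the evolved distributions show no artifacts from these (continuous) changes in slope.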

3.4.1. Various Diffusions

Fig. 1 gives examples of F(S, S_0, S_∞, x, y) for x in {-1, 0, 1, 2}. The other parameters are S = 5, S_0 = 0.5, S_∞ = 20, y = 0.


4.1. Statistical Mechanics of Large Systems

Aggregation problems in nonlinear nonequilibrium systems typically are “solved” (accommodated) by having new entities/languages developed at these disparate scales in order to efficiently pass information back and forth. This is quite different from the nature of quasi-equilibrium quasi-linear systems, where thermodynamic or cybernetic approaches are possible. These approaches typically fail for nonequilibrium nonlinear systems.
Many systems are aptly modeled in terms of multivariate differential rate-equations, known as Langevin equations,

$$\dot M^G = f^G + \hat g^G_j \eta^j\,,\qquad (G = 1, \ldots, \Lambda)\,,\qquad (j = 1, \ldots, N)\,,$$
$$\langle \eta^j(t) \rangle_\eta = 0\,,\qquad \langle \eta^j(t)\, \eta^{j'}(t') \rangle_\eta = \delta^{jj'} \delta(t - t')\,,$$

where f^G and ĝ^G_j are generally nonlinear functions of mesoscopic order parameters M^G, j is a microscopic index indicating the source of fluctuations, and N ≥ Λ. The Einstein convention of summing over repeated indices is used. Vertical bars on an index, e.g., |j|, imply no sum is to be taken on repeated indices.
Via a somewhat lengthy, albeit instructive calculation, outlined in several other papers [8,10,31], involving an intermediate derivation of a corresponding Fokker-Planck or Schrödinger-type equation for the conditional probability distribution P[M(t)|M(t_0)], the Langevin rate Eq. (20) is developed into the more useful probability distribution for M^G at a long-time macroscopic time event t = (u+1)θ + t_0, in terms of a Stratonovich path-integral over mesoscopic Gaussian conditional probabilities [32-36]. Here, macroscopic variables are defined as the long-time limit of the evolving mesoscopic system.
The corresponding Schrödinger-type equation is [34,35]

$$\frac{\partial P}{\partial t} = \frac{1}{2}(g^{GG'}P)_{,GG'} - (g^G P)_{,G} + VP\,,$$
$$g^{GG'} = \hat g^G_j \hat g^{G'}_j\,,\qquad g^G = f^G + \frac{1}{2}\hat g^{G'}_j \hat g^G_{j,G'}\,,\qquad [\ldots]_{,G} = \partial[\ldots]/\partial M^G\,.$$

This is properly referred to as a Fokker-Planck equation when V = 0. Note that although the partial differential Eq. (21) contains information regarding M^G as in the stochastic differential Eq. (20), all references to j have been properly averaged over. I.e., ĝ^G_j in Eq. (20) is an entity with parameters in both microscopic and mesoscopic spaces, but M is a purely mesoscopic variable, and this is more clearly reflected in Eq. (21).
The path integral representation is given in terms of the “Feynman” Lagrangian L,

$$P[M_t | M_{t_0}]\, dM(t) = \int \cdots \int DM\, \exp\!\left(-\min \int_{t_0}^{t} dt'\, L\right) \delta[M(t_0) = M_0]\, \delta[M(t) = M_t]\,,$$
$$L(\dot M^G, M^G, t) = \frac{1}{2}(\dot M^G - h^G)\, g_{GG'}\, (\dot M^{G'} - h^{G'}) + \frac{1}{2} h^G_{\ ;G} + \frac{R}{6} - V\,,$$
$$h^G = g^G - \frac{1}{2} g^{-1/2} (g^{1/2} g^{GG'})_{,G'}\,,\qquad g_{GG'} = (g^{GG'})^{-1}\,,\qquad g = \det(g_{GG'})\,.$$
Mesoscopic variables have been defined as M^G in the Langevin and Fokker-Planck representations, in terms of their development from the microscopic system labeled by j. The Riemannian curvature term R arises from nonlinear g_{GG'}, which is a bona fide metric of this space [34]. Even if a stationary solution, i.e., Ṁ^G = 0, is ultimately sought, a necessarily prior stochastic treatment of Ṁ^G terms gives rise to these Riemannian “corrections.” Even for a constant metric, the term h^G_{;G} contributes to L for a nonlinear mean h^G. V may include terms such as Σ_T J_{TG} M^G, where the Lagrange multipliers J_{TG} are constraints on M^G, which are advantageously modeled as extrinsic sources in this representation; they too may be time-dependent.
For our purposes, the above Feynman Lagrangian defines a kernel of the short-time conditional probability distribution, in the curved space defined by the metric, in the limit of continuous time, whose iteration yields the solution of the previous partial differential equation, Eq. (21). This differs from the Lagrangian which satisfies the requirement that the action is stationary to the first order in dt (the WKBJ approximation), but which does not include the first-order correction to the WKBJ approximation as does the Feynman Lagrangian. This latter Lagrangian differs from the Feynman Lagrangian essentially by replacing R/6 above by R/12 [37]. In this sense, the WKBJ Lagrangian is more useful for some theoretical discussions [38]. However, the use of the Feynman Lagrangian coincides with the numerical method we present here using the PATHINT code.
Using the variational principle, J_{TG} may also be used to constrain M^G to regions where they are empirically bound. More complicated constraints may be affixed to L using methods of optimal control theory [39]. With respect to a steady state P̄, when it exists, the information gain Υ in state P is defined by

$$\Upsilon[P] = \int \cdots \int DM'\, P\, \ln(P/\bar P)\,.$$
In the economics literature, there appears to be sentiment to define Eq. (20) by the Ito, rather than the Stratonovich, prescription. It is true that Ito integrals have Martingale properties not possessed by Stratonovich integrals [40], which leads to risk-neutral theorems for markets [41,42], but the nature of the proper mathematics should eventually be determined by proper aggregation of relatively microscopic models of markets. It should be noted that virtually all investigations of other physical systems, which are also continuous time models of discrete processes, conclude that the Stratonovich interpretation coincides with reality, when multiplicative noise with zero correlation time, modeled in terms of white noise η^j, is properly considered as the limit of real noise with finite correlation time [43]. The path integral succinctly demonstrates the difference between the two: The Ito prescription corresponds to the prepoint discretization of L, wherein θṀ(t) → M_{ρ+1} − M_ρ and M(t) → M_ρ. The Stratonovich prescription corresponds to the midpoint discretization of L, wherein θṀ(t) → M_{ρ+1} − M_ρ and M(t) → ½(M_{ρ+1} + M_ρ). In terms of the functions appearing in the Fokker-Planck Eq. (21), the Ito prescription of the prepoint discretized Lagrangian, L_I, is relatively simple, albeit deceptively so because of its nonstandard calculus,

$$L_I(\dot M^G, M^G, t) = \frac{1}{2}(\dot M^G - g^G)\, g_{GG'}\, (\dot M^{G'} - g^{G'}) - V\,.$$
In the absence of a nonphenomenological microscopic theory, the difference between an Ito prescription and a Stratonovich prescription is simply a transformed drift [37].
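The transformed-drift statement can be checked numerically. Below is a hedged sketch (our own illustrative parameters, not from the paper) simulating dS = σS dz with a prepoint (Ito) Euler scheme versus a midpoint (Stratonovich, Heun-type) scheme; the two disagree on E[ln S_T] by the induced drift σ²T/2.

```python
import math
import random

# Numerical illustration of the prepoint/midpoint distinction discussed
# above, for the multiplicative-noise process dS = sigma*S dz.  A prepoint
# (Ito) Euler scheme and a midpoint (Stratonovich, Heun-type) scheme differ
# by a transformed drift: E[ln S_T] is about -sigma**2*T/2 under Ito and
# about 0 under Stratonovich.  All parameter values are illustrative.

def terminal_log(sigma, T, n_steps, rng, midpoint):
    dt = T / n_steps
    s = 1.0
    for _ in range(n_steps):
        dw = math.sqrt(dt) * rng.gauss(0.0, 1.0)
        if midpoint:
            # evaluate the noise coefficient at the midpoint of the step
            s_pred = s + sigma * s * dw
            s += sigma * 0.5 * (s + s_pred) * dw
        else:
            # prepoint evaluation
            s += sigma * s * dw
    return math.log(s)

rng = random.Random(7)
n_paths, sigma, T = 4000, 0.5, 1.0
ito = sum(terminal_log(sigma, T, 50, rng, False) for _ in range(n_paths)) / n_paths
strat = sum(terminal_log(sigma, T, 50, rng, True) for _ in range(n_paths)) / n_paths
print(round(ito, 2), round(strat, 2))  # near -0.12 and 0.00, respectively
```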
There are several other advantages to Eq. (22) over Eq. (20). Extrema and most probable states of M^G, ≪M^G≫, are simply derived by a variational principle, similar to conditions sought in previous studies [44]. In the Stratonovich prescription, necessary, albeit not sufficient, conditions are given by

$$\delta_G L = L_{,G} - L_{,\dot G : t} = 0\,,$$
$$L_{,\dot G : t} = L_{,\dot G t} + \dot M^{G'} L_{,\dot G G'} + \ddot M^{G'} L_{,\dot G \dot G'}\,.$$
For stationary states, Ṁ^G = 0, and ∂L̄/∂M̄^G = 0 defines M̄^G, where the bars identify stationary variables; in this case, the macroscopic variables are equal to their mesoscopic counterparts. [Note that L̄ is not the stationary solution of the system, e.g., to Eq. (21) with ∂P/∂t = 0. However, in some cases [45], L̄ is a definite aid to finding such stationary states.] Many times only properties of stationary states are examined, but here a temporal dependence is included. E.g., the Ṁ^G terms in L permit steady states and their fluctuations to be investigated in a nonequilibrium context. Note that Eq. (25) must be derived from the path integral, Eq. (22), which is at least one reason to justify its development.

4.2. Correlations

Correlations between variables are modeled explicitly in the Lagrangian as a parameter usually designated ρ (not to be confused with the Rho Greek calculated for options). This section uses a simple two-factor model to develop the correspondence between the correlation ρ in the Lagrangian and that among the commonly written Wiener distributions dz.
Consider coupled stochastic differential equations

$$dr = f^r\, dt + \hat g^r\, dz_r\,,$$
$$dl = f^l\, dt + \hat g^l\, dz_l\,,$$
$$\langle dz_i \rangle = 0\,,\qquad \langle dz_i(t)\, dz_j(t') \rangle = \begin{cases} dt\,\delta(t - t')\,, & i = j\,, \\ \rho\, dt\,\delta(t - t')\,, & i \ne j\,, \end{cases}$$

where ⟨.⟩ denotes expectations.
These can be rewritten as Langevin equations (in the Itô prepoint discretization),

$$\dot r = f^r + \hat g^r \left( \gamma^+ p^1 + \mathrm{sgn}(\rho)\, \gamma^- p^2 \right)\,,$$
$$\dot l = f^l + \hat g^l \left( \mathrm{sgn}(\rho)\, \gamma^- p^1 + \gamma^+ p^2 \right)\,,$$
$$\gamma^\pm = \frac{1}{\sqrt 2}\left[ 1 \pm (1 - \rho^2)^{1/2} \right]^{1/2}\,,$$

where p^1 and p^2 are independent N[0,1] Gaussian distributions.
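The construction of correlated Wiener increments from independent unit Gaussians can be verified numerically. The sketch below uses a simple Cholesky-type decomposition, our own illustrative choice; any decomposition reproducing the covariance matrix (including the symmetric γ± form with its sgn factors) is statistically equivalent.

```python
import math
import random

# A minimal check of building two correlated increments from independent
# N(0,1) Gaussians.  We use the Cholesky-type decomposition
#     z1 = n1,  z2 = rho*n1 + sqrt(1 - rho**2)*n2
# (our illustrative choice); any decomposition reproducing the covariance
# matrix is equivalent.  The sample correlation should recover rho.

def correlated_pair(rho, rng):
    n1, n2 = rng.gauss(0.0, 1.0), rng.gauss(0.0, 1.0)
    return n1, rho * n1 + math.sqrt(1.0 - rho**2) * n2

rng = random.Random(3)
rho = 0.7
samples = [correlated_pair(rho, rng) for _ in range(50000)]
m1 = sum(a for a, _ in samples) / len(samples)
m2 = sum(b for _, b in samples) / len(samples)
cov = sum((a - m1) * (b - m2) for a, b in samples) / len(samples)
v1 = sum((a - m1) ** 2 for a, _ in samples) / len(samples)
v2 = sum((b - m2) ** 2 for _, b in samples) / len(samples)
corr = cov / math.sqrt(v1 * v2)
print(round(corr, 2))  # recovers rho = 0.7
```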
The equivalent short-time probability distribution, P, for the above set of equations is

$$P[r, l; t + \Delta t \,|\, r, l; t] = g^{1/2} (2\pi\, \Delta t)^{-1} \exp(-L\, \Delta t)\,,$$
$$L = \frac{1}{2} \begin{pmatrix} \dot r - f^r & \dot l - f^l \end{pmatrix} g \begin{pmatrix} \dot r - f^r \\ \dot l - f^l \end{pmatrix}\,,\qquad g = \det(g_{ij})\,.$$

g, the metric in {r, l}-space, is the inverse of the covariance matrix,

$$(g_{ij})^{-1} = \begin{pmatrix} (\hat g^r)^2 & \rho\, \hat g^r \hat g^l \\ \rho\, \hat g^r \hat g^l & (\hat g^l)^2 \end{pmatrix}\,.$$
The above also corrects previous papers which inadvertently dropped the sgn factors in the above [9,10,17].


5.1. ASA Outline

The algorithm now called Adaptive Simulated Annealing (ASA) [46] fits short-time probability distributions to observed data, using a maximum likelihood technique on the Lagrangian. This algorithm has been developed to fit observed data to a theoretical cost function over a D-dimensional parameter space [46], adapting for varying sensitivities of parameters during the fit. The ASA code can be obtained at no charge, via WWW from http://www.ingber.com/ or via FTP from ftp.ingber.com [15].

5.1.1. General description

Simulated annealing (SA) was developed in 1983 to deal with highly nonlinear problems [47], as an extension of a Monte-Carlo importance-sampling technique developed in 1953 for chemical physics problems. It helps to visualize the problems presented by such complex systems as a geographical terrain. For example, consider a mountain range, with two “parameters,” e.g., along the North-South and East-West directions. We wish to find the lowest valley in this terrain. SA approaches this problem similarly to using a bouncing ball that can bounce over mountains from valley to valley. We start at a high “temperature,” where the temperature is an SA parameter that mimics the effect of a fast moving particle in a hot object like a hot molten metal, thereby permitting the ball to make very high bounces, able to bounce over any mountain to access any valley, given enough bounces. As the temperature is made relatively colder, the ball cannot bounce so high, and it also can settle to become trapped in relatively smaller ranges of valleys.
We imagine that our mountain range is aptly described by a “cost function.” We define probability distributions of the two directional parameters, called generating distributions since they generate possible valleys or states we are to explore. We define another distribution, called the acceptance distribution, which depends on the difference of cost functions of the present generated valley we are to explore and the last saved lowest valley. The acceptance distribution decides probabilistically whether to stay in a new lower valley or to bounce out of it. All the generating and acceptance distributions depend on temperatures.
In 1984 [48], it was established that SA possessed a proof that, by carefully controlling the rates of cooling of temperatures, it could statistically find the best minimum, e.g., the lowest valley of our example above. This was good news for people trying to solve hard problems which could not be solved by other algorithms. The bad news was that the guarantee was only good if they were willing to run SA forever. In 1987, a method of fast annealing (FA) was developed [49], which permitted lowering the temperature exponentially faster, thereby statistically guaranteeing that the minimum could be found in some finite time. However, that time still could be quite long. Shortly thereafter, Very Fast Simulated Reannealing (VFSR) was developed in 1987 [46], now called Adaptive Simulated Annealing (ASA), which is exponentially faster than FA.
ASA has been applied to many problems by many people in many disciplines [50-52]. The feedback of many users regularly scrutinizing the source code ensures its soundness as it becomes more flexible and powerful.

5.1.2. Mathematical outline

ASA considers a parameter α^i_k in dimension i generated at annealing-time k with the range

$$\alpha^i_k \in [A_i, B_i]\,,$$
calculated with the random variable y^i,

$$\alpha^i_{k+1} = \alpha^i_k + y^i (B_i - A_i)\,,\qquad y^i \in [-1, 1]\,.$$
The generating function g_T(y) is defined,

$$g_T(y) = \prod_{i=1}^{D} \frac{1}{2(|y^i| + T_i)\ln(1 + 1/T_i)} \equiv \prod_{i=1}^{D} g^i_T(y^i)\,,$$
where the subscript i on T_i specifies the parameter index, and the k-dependence in T_i(k) for the annealing schedule has been dropped for brevity. Its cumulative probability distribution is

$$G_T(y) = \int_{-1}^{y^1} \cdots \int_{-1}^{y^D} dy'^1 \cdots dy'^D\, g_T(y') \equiv \prod_{i=1}^{D} G^i_T(y^i)\,,$$
$$G^i_T(y^i) = \frac{1}{2} + \frac{\mathrm{sgn}(y^i)}{2}\, \frac{\ln(1 + |y^i|/T_i)}{\ln(1 + 1/T_i)}\,.$$

y^i is generated from a u^i from the uniform distribution

$$u^i \in U[0, 1]\,,$$
$$y^i = \mathrm{sgn}(u^i - \tfrac{1}{2})\, T_i \left[ (1 + 1/T_i)^{|2u^i - 1|} - 1 \right]\,.$$
It is straightforward to calculate that for an annealing schedule for T_i,

$$T_i(k) = T_{0i} \exp(-c_i k^{1/D})\,,$$

a global minimum statistically can be obtained, i.e.,

$$\sum_{k_0}^{\infty} g_k \approx \sum_{k_0}^{\infty} \left[ \prod_{i=1}^{D} \frac{1}{2 |y^i| c_i} \right] \frac{1}{k} = \infty\,.$$
Control can be taken over c_i, such that

$$T_{fi} = T_{0i} \exp(-m_i)\quad \text{when}\quad k_f = \exp n_i\,,$$
$$c_i = m_i \exp(-n_i/D)\,,$$

where m_i and n_i can be considered “free” parameters to help tune ASA for specific problems.
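The generating step and schedule can be sketched in a few lines; the temperatures and schedule constants below are illustrative tuning choices, not values used in this paper's fits.

```python
import math
import random

# Sketch of the ASA generating step and annealing schedule: a uniform u in
# [0,1] maps to y in [-1,1] with a temperature-dependent spread, and the
# per-dimension temperature decays as T0*exp(-c*k**(Q/D)).  T0, c, D, Q and
# the sample temperatures are illustrative tuning choices.

def generate_y(T, rng):
    """Sample y = sgn(u - 1/2) * T * ((1 + 1/T)**|2u-1| - 1), in [-1, 1]."""
    u = rng.random()
    return math.copysign(1.0, u - 0.5) * T * ((1.0 + 1.0 / T) ** abs(2.0 * u - 1.0) - 1.0)

def temperature(k, T0=1.0, c=1.0, D=2, Q=1.0):
    """Annealing schedule; a quenching factor Q > 1 speeds cooling at the
    cost of the statistical sampling proof (see the quenching subsection)."""
    return T0 * math.exp(-c * k ** (Q / D))

rng = random.Random(1)
hot = sorted(abs(generate_y(1.0, rng)) for _ in range(20001))
cold = sorted(abs(generate_y(1e-4, rng)) for _ in range(20001))
# Median |y| is wide at high temperature, tight at low temperature, while the
# full range [-1, 1] always remains reachable (the heavy tails).
print(round(hot[10000], 2), round(cold[10000], 3))
```

The heavy tails at all temperatures are what let ASA escape local minima even late in the anneal, while the shrinking median focuses the search.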
ASA has over 100 OPTIONS available for tuning. A few important ones were used in this project.

5.1.3. Reannealing

Whenever doing a multi-dimensional search in the course of a complex nonlinear physical problem, inevitably one must deal with different changing sensitivities of the α^i in the search. At any given annealing-time, the range over which the relatively insensitive parameters are being searched can be “stretched out” relative to the ranges of the more sensitive parameters. This can be accomplished by periodically rescaling the annealing-time k, essentially reannealing, every hundred or so acceptance-events (or at some user-defined modulus of the number of accepted or generated states), in terms of the sensitivities s_i calculated at the most current minimum value of the cost function, C,

$$s_i = \partial C / \partial \alpha^i\,.$$
In terms of the largest s_i = s_max, a default rescaling is performed for each k_i of each parameter dimension, whereby a new index k'_i is calculated from each k_i,

$$k_i \to k'_i\,,$$
$$T'_{ik'} = T_{ik}\, \frac{s_{\max}}{s_i}\,,\qquad k'_i = \left( \frac{\ln(T_{i0}/T_{ik'})}{c_i} \right)^D\,.$$

T_{i0} is set to unity to begin the search, which is ample to span each parameter dimension.

5.1.4. Quenching

Another adaptive feature of ASA is its ability to perform quenching in a methodical fashion. This is applied by noting that the temperature schedule above can be redefined as

$$T_i(k_i) = T_{0i} \exp(-c_i k_i^{Q_i/D})\,,\qquad c_i = m_i \exp(-n_i Q_i / D)\,,$$

in terms of the “quenching factor” Q_i. The sampling proof fails if Q_i > 1 as

$$\sum_k \prod_i \frac{1}{k^{Q_i/D}} = \sum_k \frac{1}{k^{Q_i}} < \infty\,.$$
This simple calculation shows how the “curse of dimensionality” arises, and also gives a possible way of living with this disease. In ASA, the influence of large dimensions becomes clearly focussed on the exponential of the power of k being 1/D, as the annealing required to properly sample the space becomes prohibitively slow. So, if resources cannot be committed to properly sample the space, then for some systems perhaps the next best procedure may be to turn on quenching, whereby Q_i can become on the order of the number of dimensions.
The scale of the power of 1/D temperature schedule used for the acceptance function can be altered in a similar fashion. However, this does not affect the annealing proof of ASA, and so this may be used without damaging the sampling property.

5.2. x-Indicator of Market Contexts

Our studies of contexts of markets well recognized by option traders to have significantly different volatility behavior show that the exponents x are reasonably faithful indicators defining these different contexts.
We feel the two-factor model is more accurate because the data indeed demonstrate stochastic volatility [4]. We also note that the two-factor x fit by ASA has been stable across the last few years. This is not true of the one-factor ASA-fitted x. In those one-factor studies we did not fit the Black-Scholes σ as a parameter, but rather calculated it as an historical volatility during all runs. Some results of two-factor studies and one-factor studies using a Black-Scholes σ have been reported elsewhere [13].
Since σ is not widely traded and arbitraged, to fit the two-factor model we calculate this quantity as an historical volatility for both its prepoint and postpoint values. Some previous studies used a scaled implied volatility (which is calculated from a BS model). We use a standard deviation σ,



In the one-factor model, it does not make good numerical sense to have two free parameters in one term, i.e., σ and x, as these cannot be fit very well within the variance of the data. Instead, one method is to take guidance from the two-factor results, to set a scale for an effective σ, and then fit the parameter x. Another method is to apply the above StdDev as a proxy for σ. Some motivation for this approach is given by considering collapsing a two-factor stochastic volatility model into a one-factor model: the one-factor model now has an integral over the stochastic σ process in its diffusion term. This integral is what we are approximating by using the standard deviation of a moving window of the data.
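A minimal sketch of such a moving-window proxy follows; the window length and the synthetic price series are our illustrative choices, and the paper's precise StdDev definition is given in its own equations.

```python
import math
import random

# One simple proxy for the historical volatility discussed above: the
# standard deviation of daily price differences over a moving window.  The
# window length and the synthetic price series are illustrative; the paper's
# precise StdDev definition (scaled by the fitted diffusion) may differ.

def moving_stddev(prices, window=20):
    """Rolling sample standard deviation of one-day differences dS."""
    diffs = [b - a for a, b in zip(prices, prices[1:])]
    out = []
    for i in range(window, len(diffs) + 1):
        chunk = diffs[i - window:i]
        m = sum(chunk) / window
        out.append(math.sqrt(sum((d - m) ** 2 for d in chunk) / (window - 1)))
    return out

# Synthetic check: a random walk with known daily noise scale 0.05 should be
# recovered, on average, by the windowed estimator.
rng = random.Random(11)
prices = [94.0]
for _ in range(500):
    prices.append(prices[-1] + 0.05 * rng.gauss(0.0, 1.0))
est = moving_stddev(prices)
avg = sum(est) / len(est)
print(round(avg, 3))  # close to the true 0.05
```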


6.1. PATHINT Outline

The fits described above clearly demonstrate the need to incorporate stochastic volatility in option pricing models. If one-factor fits are desired, e.g., for efficiency of calculation, then at the least the exponent of price x should be permitted to freely adapt to the data. In either case, it is necessary to develop a full set of Greeks for trading. To meet these needs, we have used a path-integral code, PATHINT, described below, with great success. At this time, the two-factor code takes too long to run for daily use, but it proves to be a good weekly baseline for the one-factor code.
The PATHINT algorithm develops the long-time probability distribution from the Lagrangian fit by the first optimization code. A robust and accurate histogram-based (non-Monte Carlo) path-integral algorithm to calculate the long-time probability distribution has been developed to handle nonlinear Lagrangians [18-20,22,53-55].
The histogram procedure recognizes that the distribution can be numerically approximated to a high degree of accuracy as a sum of rectangles at points Mi of height Pi and width ΔMi. For convenience, just consider a one-dimensional system. The above path-integral representation can be rewritten, for each of its intermediate integrals, as



which yields




This kernel is a banded matrix representing the Gaussian nature of the short-time probability centered about the (varying) drift.
Care must be used in developing the mesh in ΔMG, which is strongly dependent on the diagonal elements of the diffusion matrix, e.g.,



Presently, this constrains the dependence of the covariance of each variable to be a nonlinear function of that variable, albeit arbitrarily nonlinear, in order to present a straightforward rectangular underlying mesh. Below we address how we have handled this problem in our two-factor stochastic-volatility model.
Fitting data with the short-time probability distribution, effectively using an integral over this epoch, permits the use of coarser meshes than in the corresponding stochastic differential equation. The coarser resolution is appropriate, and typically required, for numerical solution of the time-dependent path integral: By considering the contributions to the first and second moments of ΔMG for small time slices θ, conditions on the time and variable meshes can be derived [53]. The time slice essentially is determined by θ ≤ L̄⁻¹, where L̄ is the “static” Lagrangian with dMG/dt = 0, throughout the ranges of MG giving the most important contributions to the probability distribution P. The variable mesh, a function of MG, is optimally chosen such that ΔMG is measured by the covariance gGG, or ΔMG ≈ (gGG θ)^(1/2).
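The mesh conditions can be read off numerically; this is an illustrative transcription (function names hypothetical), not the production PATHINT mesh code:

```python
import math

def pathint_meshes(g_diag, L_static):
    """Sketch of the PATHINT mesh conditions: the time slice is set by
    theta <= 1/L_static, where L_static is the 'static' Lagrangian with
    dM/dt = 0, and the variable mesh by dM ~ sqrt(g * theta), with g the
    diagonal diffusion of each variable."""
    theta = 1.0 / L_static
    dM = [math.sqrt(g * theta) for g in g_diag]
    return theta, dM
```

In practice L_static and the diffusions are evaluated over the ranges of MG that contribute most to the distribution, and the tightest resulting mesh is used.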
If the histogram method is further developed into a trapezoidal integration, then more accuracy can be expected at more complex boundaries [54]. Such problems do not arise here, and 6-7 significant-figure accuracy is easily achieved, provided great care is taken to develop the mesh as described above. For example, after setting the initial-condition discretized delta function in MG at the prepoint of an interval, the mesh going forward in MG is simply calculated stepwise using



However, going backwards in MG, an iterative procedure was used at each step, starting with an estimate from the prepoint and going forward again, until there was no mismatch. That such care is required for the mesh was observed in the original Wehner-Wolfer paper [53].
It is important to stress that very good numerical accuracy is required to get the very good Greeks needed for real-world trading. Many authors develop very efficient numerical schemes to get reasonable prices to 2 or 3 significant figures, but these methods often do not deliver enough significant figures to get good precision for the Greeks. Typical Monte Carlo methods are notorious for giving such poor results after very long computer runs. In particular, we do not believe that the good Greeks required for trading can be obtained by using meshes obtained by other, simpler numerical algorithms [56].
The PATHINT algorithm in its present form can “theoretically” handle any n-factor model subject to its diffusion-mesh constraints. In practice, the calculation of 3-factor and 4-factor models likely will wait until giga-hertz speeds and giga-byte RAM are commonplace.

6.2. Development of Long-Time Probabilities

The noise determined empirically as the diffusion of the data is the same, independent of x, within our approach. Therefore, we scale the different exponents such that the diffusions, the squares of the “basis-point volatilities” (BPV), are equivalent. Then, there is not a very drastic change in option prices for different exponents x for the strike X set to the underlying S, the at-the-money (ATM) strike. This is not the case for out-of-the-money (OTM) or in-the-money (ITM) strikes, i.e., when exercising the strike would generate loss or profit, respectively. This implies that current pricing models are not radically mispricing the markets, but there still are significant changes in the Greeks using more sophisticated models.
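The BPV-equivalence scaling can be made concrete. Here F_mid is a hypothetical single-region form S^x S0^(1-x), used purely for illustration; the paper's full piecewise F(S, S0, S, x, y) is defined earlier and is not reproduced:

```python
def F_mid(S, S0, x):
    # hypothetical mid-region diffusion scale S^x * S0^(1-x); the full
    # piecewise F(S, S0, S, x, y) of the paper is not reproduced here
    return S ** x * S0 ** (1 - x)

def bpv_scaled_sigma(sigma_bs, S, F_value):
    """Scale sigma so models with different exponents x share the same
    at-the-money basis-point volatility: sigma_x * F = sigma_BS * S."""
    return sigma_bs * S / F_value
```

For x = 1 (lognormal) F = S and the scaling is the identity; for x = 0 (normal) the diffusion constant is scaled up by S/S0.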

6.3. Dependence of Probabilities on x and σ

Fig. 2 gives examples of the short-time distribution evolved out to T = 0.5 year for x in {-1, 0, 1, 2}, with 500 intermediate epochs/foldings, and BS σ = 0.0075. Each calculation scales σ by multiplying by S/F(S, S0, S, x, y).
Fig. 3 gives an example of a two-factor distribution evolved out to T = 0.5 year for x = 0.7.

6.4. Two-Factor Volatility and PATHINT Modifications

In our two-factor model, the mesh of S would depend on σ, causing problems in any PATHINT grid to be developed in the S-σ plane.
For some time we have considered how to handle this generic problem for n-factor multivariate systems with truly multivariate diffusions within the framework of PATHINT. In one case, we have taken advantage of the Riemannian invariance of the probability distribution, as discussed above, to transform to a system where the diffusions have only “diagonal” multiplicative dependence [19]. However, this leads to cumbersome numerical problems with the transformed boundary conditions [20]. Another method, not yet fully tested, is to develop a tiling of diagonal meshes for each factor i that often are suitable for off-diagonal regions in an n-factor system, e.g.,



where the mesh of variable i at a given point labeled by k is an exponentiation of 2, labeled by mki; the integral power mki is determined such that it gives a good approximation to the diagonal mesh given by the one-factor PATHINT mesh conditions, in terms of some minimal mesh ΔM0i, throughout the regions of the Lagrangian giving the most important contributions to the distribution as predetermined by a scan of the system. This tiling of the kernel is to be used together with interpolation of intermediate distributions.
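A sketch of this dyadic mesh choice (names hypothetical): pick the integer power m so that ΔM0 · 2^m best matches the diagonal one-factor mesh at a given point k:

```python
import math

def dyadic_mesh_power(dM_diag, dM0):
    """Return the integer m and the mesh dM0 * 2**m closest (in log2)
    to the desired diagonal mesh dM_diag, so that tiles of different
    resolution still align on a common dyadic grid."""
    m = round(math.log2(dM_diag / dM0))
    return m, dM0 * 2 ** m
```

Because all tile meshes are powers of 2 times the same minimal mesh, neighboring tiles of different resolution share grid points, which is what makes interpolation of intermediate distributions between tiles tractable.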
The results of our study here are that, after the at-the-money BPV are scaled to be equivalent, there is not a very drastic change in the one-factor ATM Greeks developed below. Therefore, while we have not at all changed the functional dependence of the Lagrangian on S and σ, we have determined our meshes using a diffusion for the S equation of σ0 F(S, S0, S, x, y), where σ0 is determined by the same BPV-equivalent condition as imposed on the one-factor models. This seems to work very well, especially since we have taken our σ equation to be normal with a limited range of influence in the calculations. Future work has yet to establish a more definitive distribution for σ.


7.1. Primary Use of Probabilities For European Options

We can use PATHINT to develop the distribution of the option value back in time from expiration. This is the standard approach used by CRR, explicit and implicit Crank-Nicolson models, etc. [57].
For European options, we also take advantage of the accuracy of PATHINT enhanced by normalizing the distribution as well as the kernel at each iteration (though in these studies this was not required after normalizing the kernel). Therefore, we have calculated our European option prices and Greeks using the most elementary and intuitive definition of the option’s price V [58], which is the expected value



where X is the strike price, and the expected value ⟨·⟩ is taken with respect to the risk-neutral distribution of the underlying market S. It should be noted that, while the standard approach of developing the option price delivers at the present time a range of underlying values for a given strike, our approach delivers a more practical complete range of strikes (as many as 50-60 for Eurodollar options) for a given underlying at the present time, resulting in greatly enhanced numerical efficiency. The risk-neutral distribution is effectively calculated by taking the drift as the cost-of-carry b times S, using the above arguments leading to the BS formula. We have designed our codes to use the parameters risk-free rate r and cost-of-carry b such that



which is similar to how generalized European BS codes use b and r [59].
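This expected-value definition can be checked in the lognormal (x = 1) special case by integrating the discounted payoff against a risk-neutral lognormal density with cost-of-carry b and discount rate r, and comparing to the generalized BS closed form. The sketch below assumes a plain lognormal terminal distribution, not the PATHINT-developed one:

```python
import math

def _N(d):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(d / math.sqrt(2.0)))

def bs_call(S, X, T, r, b, sigma):
    """Generalized Black-Scholes call with cost-of-carry b."""
    d1 = (math.log(S / X) + (b + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * math.exp((b - r) * T) * _N(d1) - X * math.exp(-r * T) * _N(d2)

def european_call_expectation(S, X, T, r, b, sigma, n=20000):
    """exp(-r*T) * E[max(S_T - X, 0)] by midpoint quadrature over the
    standard normal variable z, with lognormal S_T drifting at rate b."""
    lo, hi = -8.0, 8.0
    dz = (hi - lo) / n
    total = 0.0
    for i in range(n):
        z = lo + (i + 0.5) * dz
        phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
        ST = S * math.exp((b - 0.5 * sigma ** 2) * T + sigma * math.sqrt(T) * z)
        total += max(ST - X, 0.0) * phi * dz
    return math.exp(-r * T) * total
```

With the Table 1 parameters (T = 0.5, r = 0.05, b = 0, σ = 0.10, ATM), the two values agree to several significant figures.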
Using this approach, the European price VE is calculated as



The American price VA must be calculated using a different kernel, going back in time from expiration, using as “initial conditions” the option values used in the above average. This kernel is the transposed matrix used for the European case, and it includes additional drift and “potential” terms due to the need to develop this back in time. This can be understood as requiring the adjoint partial differential equation or a postpoint Lagrangian in real time.
That is, a forward equation for the conditional probability distribution P[M(t+dt), t+dt | M(t), t] is



where the partial derivatives with respect to MG act on the postpoint MG(t+dt). A backward equation for the conditional probability distribution P[M(t+dt), t+dt | M(t), t] is



where the partial derivatives with respect to MG act on the prepoint MG(t). The forward equation has a particularly simple form in the mathematically equivalent prepoint path-integral representation.
Above, we have described how the forward distribution at present time T0 is evolved using PATHINT to the time of expiration, P(T), e.g., using a path-integral kernel K folded over n epochs, where it is folded with the function O,



to determine the European value at the present time of the calls and puts at different strike values X. An equivalent calculation can be performed by using the backward equation, expressed in terms of the “equivalent” kernel K acting on O,



It is convenient to use the simple prepoint representation for the Lagrangian, so the backward kernel is first re-expressed as a forward kernel by bringing the diffusions and drifts inside the partial derivatives, giving a transformed adjoint kernel K.
The above mathematics is easily tested by calculating European options going forwards and backwards. For American options, while performing the backwards calculation, at each point of the mesh the option price evolving backwards from T is tested and calculated as



The Greeks {Δ, Γ, Θ} are directly taken off the final developed option at the present time, since the MG mesh is available for all derivatives. We get excellent results for all Greeks. Note that for CRR trees, only one point of the mesh is at the present time, so Δ requires moving back one epoch and Γ requires moving back two epochs, unless the present time is pushed back additional epochs, etc.

7.2. PATHINT Baselined to CRR and BS

The CRR method is a simple binomial tree which in a specific limit approaches the BS partial differential equation. It has the virtues of being fast and of readily accommodating European and American calculations. However, it suffers a number of well-known numerical problems, e.g., a systematic bias in the tree approximation and an oscillatory error as a function of the number of intermediate epochs/iterations in its time mesh. Some Greeks like {Δ, Γ, Θ} can be taken directly off the tree used for pricing with reasonable approximations (at epochs just before the actual current time). The first problem for American options can be alleviated somewhat by using the variant method [5],



The second problem can be alleviated somewhat by averaging runs of n iterations with runs of n+1 iterations [60]. This four-fold increase of runs is rarely used, though perhaps it should be more often. Furthermore, if increased accuracy in price is needed in order to take numerical derivatives, typically 200-300 iterations should be used for expirations some months away, not the 30-70 too often used in practice.
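A minimal CRR sketch (a generic textbook binomial tree with cost-of-carry b, not the production code of this paper), illustrating the averaging of n and n+1 step runs:

```python
import math

def crr_call(S, X, T, r, b, sigma, n, american=False):
    """Cox-Ross-Rubinstein binomial call with cost-of-carry b."""
    dt = T / n
    u = math.exp(sigma * math.sqrt(dt))
    d = 1.0 / u
    p = (math.exp(b * dt) - d) / (u - d)
    disc = math.exp(-r * dt)
    # terminal payoffs over j up-moves out of n
    V = [max(S * u ** j * d ** (n - j) - X, 0.0) for j in range(n + 1)]
    for i in range(n - 1, -1, -1):
        V = [disc * (p * V[j + 1] + (1.0 - p) * V[j]) for j in range(i + 1)]
        if american:  # test early exercise at each node of the mesh
            V = [max(V[j], S * u ** j * d ** (i - j) - X) for j in range(i + 1)]
    return V[0]

def crr_call_averaged(S, X, T, r, b, sigma, n, american=False):
    """Average runs of n and n+1 iterations to damp the oscillatory error."""
    return 0.5 * (crr_call(S, X, T, r, b, sigma, n, american)
                  + crr_call(S, X, T, r, b, sigma, n + 1, american))
```

The American branch is the same backward induction with the early-exercise test applied at each node, as in the backward PATHINT calculation described above.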
When taking numerical derivatives, there can arise a need to tune the increments taken for the differentials. For some Greeks like Δ and Γ, the size of the best differentials to use may vary with strikes that have different sensitivities to parts of the underlying distribution. One method of building in some adaptive flexibility across many such strikes is to increase the order of terms used in taking numerical derivatives. (This was not required for the results presented here.) For example, it is straightforward to verify that, while the central difference



is typically good to o((dx)³),



is typically good to o((dx)⁵). Similarly, while



is typically good to o((dx)⁴),



is typically good to o((dx)⁶).
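These stencils can be written out directly; the comments below quote the error in the computed derivative itself (O(h²) for three-point, O(h⁴) for five-point), the counterpart of the numerator truncations quoted above:

```python
def d1_central(f, x, h):
    """Three-point central first derivative, error O(h^2)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def d1_five_point(f, x, h):
    """Five-point central first derivative, error O(h^4)."""
    return (-f(x + 2 * h) + 8 * f(x + h) - 8 * f(x - h) + f(x - 2 * h)) / (12.0 * h)

def d2_central(f, x, h):
    """Three-point central second derivative, error O(h^2)."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / (h * h)

def d2_five_point(f, x, h):
    """Five-point central second derivative, error O(h^4)."""
    return (-f(x + 2 * h) + 16 * f(x + h) - 30.0 * f(x)
            + 16 * f(x - h) - f(x - 2 * h)) / (12.0 * h * h)
```

For Greeks like Δ and Γ, f is the option price as a function of the underlying and h is the tuned differential increment.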
Table 1 gives an example of baselining our one-factor PATHINT code to the CRR and BS codes, using the above safeguards, for an option whose American price is the same as its European counterpart, a typical circumstance [59]. In the literature, the CRR code is most often taken as the benchmark for American calculations. We took the number of intermediate epochs/points to be 300 for each calculation. Parameters used for this particular ATM call are T = 0.5 years, r = 0.05, b = 0, σ = 0.10.

Table 1.

Tests with American CRR and American PATHINT lead to results with the same degrees of accuracy.

7.3. Two-Factor PATHINT Baselined to One-Factor PATHINT

Previous papers and tests have demonstrated that the two-factor PATHINT code performs as expected. The code was developed such that only a few lines need to be changed to run any n-factor problem, i.e., after coding the Lagrangian specific to a given system. Tests were performed by combining two one-factor problems, and there is no loss of accuracy. However, here we are making some additional mesh approximations, as discussed above, to accommodate σ in the S diffusion. This seems quite reasonable, but there is no sure test of the accuracy. We indeed see that the ATM results are very close across x, similar to our ATM comparisons between BS and our one-factor PATHINT results for various x, when scaling is performed to have all models use the same BPV (using the σ0 procedure for the mesh as described above for the two-factor model).
The logical extension of the Greeks for the two-factor model is to develop derivatives of price with respect to ρ and ε in the σ volatility equation. However, we did not find a bona fide two-factor proxy for the one-factor Υ, the derivative of price with respect to the one-factor σ constant. We get very good ATM Υ comparisons between BS and our one-factor models with various x. We tried simply multiplying the noise in the two-factor stochastic volatility in the price equation by a parameter with deviations from 1 to get numerical derivatives of PATHINT solutions, and this gave reasonably good agreement with the ATM BPV-scaled BS Υ, to within a couple of significant figures. Perhaps this is not too surprising, especially given the substantial correlation ρ between the price and volatility equations, which we do not neglect.


The results of our study are that, after the at-the-money basis-point volatilities are scaled to be equivalent, there is only a very small change in option prices for different exponents x, for both the one-factor and two-factor models. There still are significant differences in the Greeks using more sophisticated models, especially for out-of-the-money options. This implies that current pricing models are not radically mispricing the markets.
Our studies point to contexts of markets well recognized by option traders to have significantly different volatility behavior. Suppression of stochastic volatility in the one-factor model simply leaks out into stochasticity of parameters in the model, especially into x, unless additional numerical methods are employed, e.g., using an adaptive standard deviation. Our studies show that the two-factor exponents x are reasonably faithful indicators defining these different contexts. The x-exponents in the two-factor fits are quite stable. As such, especially the two-factor x can be considered a “context indicator” over a longer time scale than other indicators typically used by traders. The two-factor fits also exhibit differences due to the σ parameters, including the ρ correlations, in accord with the sense traders have about the nature of changing volatilities across this time period.
Modern methods of developing multivariate nonlinear multiplicative Gaussian-Markovian systems are quite important, as there are many such systems and the mathematics must be diligently exercised if such models are to faithfully represent the data they describe. Similarly, sophisticated numerical techniques, e.g., global optimization and path integration, are important tools to carry out the modeling and fitting to data without compromising the model, e.g., by unwarranted quasi-linear approximations. Three quite different systems have benefited from this approach:
The large-scale modeling of neocortical interactions has benefited from the use of intuitive constructs that yet are faithful to the complex algebra describing this multiple-scaled complex system. For example, canonical-momenta indicators have been successfully applied to multivariate financial markets.
It is clear that the ASA optimization and PATHINT path-integral tools are very useful for developing the algebra of statistical mechanics for a large class of nonlinear stochastic systems encountered in finance. However, it also is clear that each system typically presents its own unique character, and this must be included in any such analysis. A virtue of this statistical mechanics approach and these associated tools is that they appear to be flexible and robust enough to handle quite different systems.


I thank Donald Wilson for his support and Jennifer Wilson for collaboration running ASA and PATHINT calculations. Implied volatility and yield data was extracted from the MIM database of Logical Information Machines (LIM).



L. Ederington and W. Guan, Is implied volatility an informationally efficient and effective predictor of future volatility?, U Oklahoma, Norman, OK, (1998).


G. Bakshi, C. Cao, and Z. Chen, Pricing and hedging long-term options, Pennsylvania State U, University Park, PA, (1998).


L. Ingber, Some Applications of Statistical Mechanics of Financial Markets, LIR-98-1-SASMFM, Lester Ingber Research, Chicago, IL, (1998).


L. Ingber and J.K. Wilson, Volatility of volatility of financial markets, Mathl. Computer Modelling 29 (5), 39-57 (1999).


J.C. Hull, Options, Futures, and Other Derivatives, Third Edition, Prentice Hall, Upper Saddle River, NJ, (1997).


J.C. Jackwerth, Recovering risk aversion from option prices and realized returns, London Business School, London, UK, (1998).


F.A. Longstaff and E.S. Schwartz, Valuing American options by simulation: A simple least-squares approach, Capital Management Sciences, Los Angeles, CA, (1998).


L. Ingber, Statistical mechanics of nonlinear nonequilibrium financial markets, Math. Modelling (6), 343-361 (1984).


L. Ingber, Statistical mechanical aids to calculating term structure models, Phys. Rev. A 42 (12), 7057-7064 (1990).


L. Ingber, M.F. Wehner, G.M. Jabbour, and T.M. Barnhill, Application of statistical mechanics methodology to term-structure bond-pricing models, Mathl. Comput. Modelling 15 (11), 77-98 (1991).


L. Ingber, Statistical mechanics of nonlinear nonequilibrium financial markets: Applications to optimized trading, Mathl. Computer Modelling 23 (7), 101-121 (1996).


L. Ingber, Canonical momenta indicators of financial markets and neocortical EEG, in Progress in Neural Information Processing, (Edited by S.-I. Amari, L. Xu, I. King, and K.-S. Leung), pp. 777-784, Springer, New York, (1996).


L. Ingber and J.K. Wilson, Statistical mechanics of financial markets: Exponential modifications to Black-Scholes, Mathl. Computer Modelling 31 (8/9), 167-192 (2000).


C. Azariadis, Self-fulfilling prophecies, J. Econ. Theory 25, 380-396 (1981).


L. Ingber, Adaptive Simulated Annealing (ASA), Global optimization C-code, Caltech Alumni Association, Pasadena, CA, (1993).


J. C. Cox, S. A. Ross, and M. Rubinstein, Option pricing: A simplified approach, J. Fin. Econ. 7, 229-263 (1979).


L. Ingber, Data mining and knowledge discovery via statistical mechanics in nonlinear stochastic systems, Mathl. Computer Modelling 27 (3), 9-31 (1998).


L. Ingber, H. Fujio, and M.F. Wehner, Mathematical comparison of combat computer models to exercise data, Mathl. Comput. Modelling 15 (1), 65-90 (1991).


L. Ingber, Statistical mechanics of neocortical interactions: Path-integral evolution of short-term memory, Phys. Rev. E 49 (5B), 4652-4664 (1994).


L. Ingber and P.L. Nunez, Statistical mechanics of neocortical interactions: High resolution path-integral calculation of short-term memory, Phys. Rev. E 51 (5), 5074-5083 (1995).


L. Ingber, R. Srinivasan, and P.L. Nunez, Path-integral evolution of chaos embedded in noise: Duffing neocortical analog, Mathl. Computer Modelling 23 (3), 43-53 (1996).


L. Ingber, Path-integral evolution of multivariate systems with moderate noise, Phys. Rev. E 51 (2), 1616-1619 (1995).


L. Ingber and R. Mondescu, (in preparation), (2000).


Federal Reserve Bank, Instruments of the Money Markets, Seventh Edition, Federal Reserve Bank of Richmond, Richmond, VA, (1993).


L. Bachelier, Théorie de la Spéculation, Annales de l’Ecole Normale Supérieure 17, 21-86 (1900).


M. C. Jensen, Some anomalous evidence regarding market efficiency, an editorial introduction, J. Finan. Econ. 6, 95-101 (1978).


B. B. Mandelbrot, When can price be arbitraged efficiently? A limit to the validity of the random walk and martingale models, Rev. Econ. Statist. 53, 225-236 (1971).


S. J. Taylor, Tests of the random walk hypothesis against a price-trend hypothesis, J. Finan. Quant. Anal. 17, 37-61 (1982).


P. Brown, A. W. Kleidon, and T. A. Marsh, New evidence on the nature of size-related anomalies in stock prices, J. Fin. Econ. 12, 33-56 (1983).


F.J. Fabozzi, Treasury Securities and Derivatives, Fabozzi Associates, New Hope, PA, (1998).


L. Ingber, Statistical mechanics of neocortical interactions: A scaling paradigm applied to electroencephalography, Phys. Rev. A 44 (6), 4017-4060 (1991).


K.S. Cheng, Quantization of a general dynamical system by Feynman’s path integration formulation, J. Math. Phys. 13, 1723-1726 (1972).


H. Dekker, Functional integration and the Onsager-Machlup Lagrangian for continuous Markov processes in Riemannian geometries, Phys. Rev. A 19, 2102-2111 (1979).


R. Graham, Path-integral methods in nonequilibrium thermodynamics and statistics, in Stochastic Processes in Nonequilibrium Systems, (Edited by L. Garrido, P. Seglar, and P.J. Shepherd), pp. 82-138, Springer, New York, NY, (1978).


F. Langouche, D. Roekaerts, and E. Tirapegui, Discretization problems of functional integrals in phase space, Phys. Rev. D 20, 419-432 (1979).


F. Langouche, D. Roekaerts, and E. Tirapegui, Short derivation of Feynman Lagrangian for general diffusion process, J. Phys. A 13, 449-452 (1980).


F. Langouche, D. Roekaerts, and E. Tirapegui, Functional Integration and Semiclassical Expansions, Reidel, Dordrecht, The Netherlands, (1982).


M. Rosa-Clot and S. Taddei, A path integral approach to derivative security pricing: I. Formalism and analytic results, INFN, Firenze, Italy, (1999).


P. Hagedorn, Non-Linear Oscillations, Oxford Univ., New York, NY, (1981).


B. Oksendal, Stochastic Differential Equations, Springer, New York, NY, (1998).


J.M. Harrison and D. Kreps, Martingales and arbitrage in multiperiod securities markets, J. Econ. Theory 20, 381-408 (1979).


S.R. Pliska, Introduction to Mathematical Finance, Blackwell, Oxford, UK, (1997).


C.W. Gardiner, Handbook of Stochastic Methods for Physics, Chemistry and the Natural Sciences, Springer-Verlag, Berlin, Germany, (1983).


R. C. Merton, An intertemporal capital asset pricing model, Econometrica 41, 867-887 (1973).


L. Ingber, Statistical mechanics of neocortical interactions: Stability and duration of the 7±2 rule of short-term-memory capacity, Phys. Rev. A 31, 1183-1186 (1985).


L. Ingber, Very fast simulated re-annealing, Mathl. Comput. Modelling 12 (8), 967-973 (1989).


S. Kirkpatrick, C.D. Gelatt, Jr., and M.P. Vecchi, Optimization by simulated annealing, Science 220 (4598), 671-680 (1983).


S. Geman and D. Geman, Stochastic relaxation, Gibbs distribution and the Bayesian restoration in images, IEEE Trans. Patt. Anal. Mac. Int. (6), 721-741 (1984).


H. Szu and R. Hartley, Fast simulated annealing, Phys. Lett. A 122 (3-4), 157-162 (1987).


M. Wofsey, Technology: Shortcut tests validity of complicated formulas, The Wall Street Journal 222 (60), B1 (1993).


L. Ingber, Simulated annealing: Practice versus theory, Mathl. Comput. Modelling 18 (11), 29-57 (1993).


L. Ingber, Adaptive simulated annealing (ASA): Lessons learned, Control and Cybernetics 25 (1), 33-54 (1996).


M.F. Wehner and W.G. Wolfer, Numerical evaluation of path-integral solutions to Fokker-Planck equations. I., Phys. Rev. A 27, 2663-2670 (1983).


M.F. Wehner and W.G. Wolfer, Numerical evaluation of path-integral solutions to Fokker-Planck equations. II. Restricted stochastic processes, Phys. Rev. A 28, 3003-3011 (1983).


M.F. Wehner and W.G. Wolfer, Numerical evaluation of path integral solutions to Fokker-Planck equations. III. Time and functionally dependent coefficients, Phys. Rev. A 35, 1795-1801 (1987).


M. Rosa-Clot and S. Taddei, A path integral approach to derivative security pricing: II. Numerical methods, INFN, Firenze, Italy, (1999).


P. Wilmott, S. Howison, and J. Dewynne, The Mathematics of Financial Derivatives, Cambridge U Press, Cambridge, (1995).


L. Ingber, A simple options training model, Mathl. Computer Modelling 30 (5-6), 167-182 (1999).


E.G. Haug, The Complete Guide to Option Pricing Formulas, McGraw-Hill, New York, NY, (1997).


M. Broadie and J. Detemple, Recent advances in numerical methods for pricing derivative securities, in Numerical Methods in Finance, (Edited by L.C.G Rogers and D. Talay), pp. 43-66, Cambridge University Press, Cambridge, UK, (1997).


FIG. 1. (a) F(S, S0, S, x, y) for x = 1, the Black-Scholes case. The other parameters are S = 5, S0 = 0.5, S = 20, y = 0. (b) F(S, S0, S, x, y) for x = 0, the normal distribution. (c) F(S, S0, S, x, y) for x = -1. (d) F(S, S0, S, x, y) for x = 2.
FIG. 2. The short-time probability distribution at time T = 0.5 years for x = 1, the (truncated) Black-Scholes distribution. The short-time probability distribution at time T = 0.5 years for x = 0, the normal distribution. The short-time probability distribution at time T = 0.5 years for x = -1. The short-time probability distribution at time T = 0.5 years for x = 2.
FIG. 3. A two-factor distribution evolved out to T = 0.5 year for x = 0.7.


Table 1. Calculations of prices and Greeks are given for closed-form BS (only valid for European options), binomial tree CRR European, CRR American, CRR variant, and PATHINT. As verified by calculation, the American option would not be exercised early, so the PATHINT results are identical to those of the European option. The CRR American differs somewhat from the CRR European due to the discrete nature of the calculation. All CRR calculations include averaging over 300 and 301 iterations to minimize oscillatory errors.

Figure 1

Image FIGS/F4.ps.png

Figure 2

Image FIGS/dist4.ps.png

Figure 3

Image FIGS/dist2f.ps.png

Table 1

Image pngdir/paper2122.png
