## Micah S. Allred and Dr. David Clark, Mathematics

Since its popularization by Mandelbrot and Van Ness in the late 1960s, fractional Brownian motion (fBm) has found a great many applications in fields such as option pricing, signal processing, internet traffic modeling, hydrology, and geology. The process extends Brownian motion to allow long memory; that is, each increment is influenced by the evolution of increments in the past. Contrast this with Brownian motion, the continuous-time analog of coin flipping, in which each flip of the coin is completely independent and unaffected by past flips. Since many real processes exhibit dependent, rather than independent, increments, the usefulness of fBm as a theoretical tool is clear.
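To make the long-memory property precise, fBm with Hurst parameter $H \in (0,1)$ can be defined as the centered Gaussian process $B_H$ with covariance

$$
\mathbb{E}\left[B_H(s)\,B_H(t)\right] \;=\; \frac{1}{2}\left(|s|^{2H} + |t|^{2H} - |t-s|^{2H}\right).
$$

For $H = 1/2$ this reduces to the Brownian covariance $\min(s,t)$ and the increments are independent; for $H > 1/2$ the increments are positively correlated, giving the long memory described above.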

However, the widespread application of fBm has made a discrete approximation to this continuous-time process critical for simulation. The most recently published method, from Yin's "New Methods for Simulation of Fractional Brownian Motion," which appeared in the Journal of Computational Physics in 1996, is a spectral method. It eliminates the problems associated with earlier methods, namely increments that are either non-stationary (their statistical distribution changes over time) or contain unnecessarily high frequencies. The spectral method applies a complex-valued Fast Fourier Transform to a series involving the power spectral density function. While accurate in a statistical sense, the implementation of this method is complicated and perhaps more computationally intensive than other methods.
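Yin's exact construction is more involved, but the flavor of an FFT-based generator with stationary increments can be sketched with the related circulant-embedding (Davies–Harte) approach. This is an illustrative sketch, not Yin's algorithm; the function names are my own.

```python
import numpy as np

def fgn_fft(n, H, rng=None):
    """Sketch of an FFT-based generator of stationary fractional
    Gaussian noise (the increments of fBm) via circulant embedding.
    Illustrative only -- not Yin's exact construction."""
    rng = np.random.default_rng(rng)
    k = np.arange(n + 1)
    # autocovariance of fGn: gamma(k) = 0.5(|k-1|^2H - 2|k|^2H + |k+1|^2H)
    gamma = 0.5 * (np.abs(k - 1)**(2*H) - 2.0*k**(2*H) + (k + 1.0)**(2*H))
    # first row of a 2n x 2n circulant matrix embedding the covariance
    row = np.concatenate([gamma, gamma[-2:0:-1]])
    lam = np.fft.fft(row).real            # circulant eigenvalues
    if (lam < -1e-12).any():
        raise ValueError("embedding not nonnegative definite; increase n")
    m = row.size                          # m = 2n
    w = rng.standard_normal(m) + 1j * rng.standard_normal(m)
    y = np.fft.fft(np.sqrt(np.clip(lam, 0.0, None) / m) * w)
    return y.real[:n]                     # one sample path of fGn

def fbm_fft(n, H, rng=None):
    """Cumulative sum of fGn gives a discrete fBm path."""
    return np.cumsum(fgn_fft(n, H, rng))
```

A single complex FFT of length 2n produces one path; for H = 1/2 the eigenvalues are all 1 and the output reduces to ordinary white noise.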

After a suggestion from my advisor at the University of Utah's summer program in mathematics, I implemented another method of generating fBm for my honors thesis: take the Cholesky decomposition of the covariance matrix (a positive definite matrix) to obtain a square root of the matrix, then multiply it by a column vector of normally distributed random numbers. This method was very straightforward to implement; in fact, MATLAB provides the decomposition as a built-in function.

The central question of this project was whether the Cholesky decomposition represents a computationally superior method for generating fBm. Its ease of implementation is clear, but after some evaluation, its computational superiority is still unclear. Burden and Faires calculate the Cholesky decomposition as requiring approximately n³/6 multiplications and additions. The Fast Fourier Transform method appears to require on the order of n² multiplications. While the difference in these counts may seem stark, the FFT method requires complex multiplications and the evaluation of approximately 4n cosine functions. Since the two methods are not directly comparable on paper, the final step of this project will be to measure the run time of each method over a range of values of n. It may be necessary to do this on several platforms, since different programs may use different algorithms for computing powers, square roots, cosines, etc., all of which could affect the run time. One method may have the advantage for smaller values of n while the other is better for larger n. I anticipate finishing this final part in the next two to three weeks. If there is a significant difference in run times, this result may be publishable.
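The planned run-time comparison could be sketched along these lines. This times only the dominant kernels of each method (the Cholesky factorization and a length-2n complex FFT) using modern NumPy, so the constants will differ from the hand-counted operation totals above; the function names are mine.

```python
import timeit
import numpy as np

def cholesky_time(n, H=0.7):
    """Average time to factor the n x n fBm covariance matrix."""
    t = np.linspace(1.0 / n, 1.0, n)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2*H) + u**(2*H) - np.abs(s - u)**(2*H))
    return timeit.timeit(lambda: np.linalg.cholesky(cov), number=5) / 5

def fft_time(n):
    """Average time for the complex FFT at the heart of the spectral method."""
    z = np.random.default_rng(0).standard_normal(2 * n) * (1 + 0j)
    return timeit.timeit(lambda: np.fft.fft(z), number=5) / 5

for n in (256, 512, 1024, 2048):
    print(f"n = {n:5d}   cholesky {cholesky_time(n):.2e} s   "
          f"fft {fft_time(n):.2e} s")
```

Sweeping n over several decades on each platform of interest would reveal any crossover point between the two methods.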

A sizable portion of the background research for this project was done while still working on my honors thesis. The major challenge since the completion of the thesis has been the evaluation of the power spectral density function. It is not readily evaluated in the form stated in Yin's paper, and Yin's source for that definition is of little help. After some discussion, Dr. Clark and I tentatively decided that the essence of the function could be captured in the first 100 or so terms of the series. In counting the required computations, however, I accounted for it as n terms rather than 100.