
Archive for May, 2011

Monte Carlo Integration

In mathematics, Monte Carlo integration is numerical integration using random numbers. That is, Monte Carlo integration methods are algorithms for the approximate evaluation of definite integrals, usually multidimensional ones. The usual algorithms evaluate the integrand on a regular grid; Monte Carlo methods, however, randomly choose the points at which the integrand is evaluated. This is particularly helpful for complicated functions and high-dimensional integrals.

To evaluate the integral of a function f over a volume V, we sample it at N random points:

    ∫ f dV ≈ V ⟨f⟩ ± V √( (⟨f²⟩ − ⟨f⟩²) / N )

Here the angle brackets denote taking the arithmetic mean over the N sample points:

    ⟨f⟩ = (1/N) Σᵢ f(xᵢ),    ⟨f²⟩ = (1/N) Σᵢ f(xᵢ)²

The fundamental disadvantage of simple Monte Carlo integration is that its accuracy improves only as the square root of N, the number of sampled points. Each new point sampled adds linearly to an accumulated sum that will become the function average, and also linearly to an accumulated sum of squares that will become the variance (see chapter 7 of Numerical Recipes).
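The accumulation of a running sum and a running sum of squares can be sketched in a few lines of Python (the function name `mc_integrate` and the test integrand are my own illustrative choices, not from the post):

```python
import random

def mc_integrate(f, a, b, n, seed=0):
    """Simple Monte Carlo estimate of the integral of f over [a, b].

    Returns (estimate, one-sigma error estimate). Each sample adds
    linearly to a running sum (-> the mean) and to a running sum of
    squares (-> the variance).
    """
    rng = random.Random(seed)      # fixed seed for reproducibility
    s = s2 = 0.0
    for _ in range(n):
        fx = f(rng.uniform(a, b))
        s += fx                    # accumulated sum -> function average
        s2 += fx * fx              # accumulated sum of squares -> variance
    mean = s / n
    var = s2 / n - mean * mean
    vol = b - a                    # "volume" of the 1-D domain
    return vol * mean, vol * (var / n) ** 0.5

# Example: the integral of x^2 over [0, 1] is 1/3.
estimate, error = mc_integrate(lambda x: x * x, 0.0, 1.0, 100_000)
```

With 100,000 samples the estimate lands within a few error bars of 1/3; because the error shrinks only as 1/√N, halving it requires roughly four times as many samples.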

The sample points should be selected at random. Assume we draw a set of uniformly distributed random numbers:

Numerical example: calculate π

We want to calculate the number π to a certain level of accuracy. There are several approaches; here I take the integral of the unit circle over the first quadrant and multiply it by four: π = 4 ∫₀¹ √(1 − x²) dx.

Recipe: 1) generate N random numbers between zero and one, 2) evaluate the function f for each of them, and 3) average the results (the equation above). The plot below shows the error, averaged over 100 iterations, for each number of samples. For comparison, I include the simple Simpson integration method as well. The uncertainty is larger than for the traditional integration method (1/N in this case), and it decreases only slowly with an increasing number of samples.
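The three-step recipe above can be sketched directly in Python (the function name `mc_pi` is my own, not from the post):

```python
import math
import random

def mc_pi(n, seed=1):
    """Estimate pi as 4 times the integral of sqrt(1 - x^2) over [0, 1]."""
    rng = random.Random(seed)      # fixed seed for reproducibility
    # 1) generate n random numbers between zero and one,
    # 2) evaluate f(x) = sqrt(1 - x^2) for each of them,
    # 3) average and multiply by four.
    total = sum(math.sqrt(1.0 - rng.random() ** 2) for _ in range(n))
    return 4.0 * total / n

approx = mc_pi(200_000)
```

With a few hundred thousand samples the estimate agrees with π to two or three decimal places, consistent with the slow 1/√N convergence discussed above.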

As discussed above, the error decreases only as the square root of N. Monte Carlo integration starts to show advantages over traditional integration methods only in high dimensions. There are practical limits to increasing N: the slow convergence makes the calculations computationally expensive. The CPU run time for the above calculations is shown below.

In one dimension (the present example), the run time grows linearly. In two dimensions, it grows with the square of N! That severely limits the number of samples in high dimensions. Recall that we selected a set of uniformly distributed random numbers. However, if the function has a sharp peak in a certain range and takes small values elsewhere, we need to somehow sample those regions more frequently to enhance the accuracy. That is the topic of the next post: importance sampling.
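To see why uniform sampling struggles with a sharply peaked integrand, the sketch below (my own illustrative example, not from the post) compares the estimated uncertainty for a gentle integrand and for a narrow Gaussian on the same interval:

```python
import math
import random

def mc_estimate(f, n, seed=0):
    """Uniform-sampling Monte Carlo estimate of the integral of f over [0, 1].

    Returns (estimate, one-sigma error estimate).
    """
    rng = random.Random(seed)      # fixed seed for reproducibility
    samples = [f(rng.random()) for _ in range(n)]
    mean = sum(samples) / n
    var = sum(v * v for v in samples) / n - mean * mean
    return mean, (var / n) ** 0.5

def smooth(x):
    return x * x                   # gentle integrand

def peaked(x):
    # narrow Gaussian centered at 0.5, width 0.01
    return math.exp(-((x - 0.5) ** 2) / (2 * 0.01 ** 2))

est_s, err_s = mc_estimate(smooth, 10_000)
est_p, err_p = mc_estimate(peaked, 10_000)
# The relative uncertainty is far larger for the peaked function:
# most uniform samples land where it is nearly zero and contribute
# little information about the peak.
```

Importance sampling fixes this by drawing the samples from a distribution concentrated where the integrand is large.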



tgif, an alternative for xfig

Annotating PS or EPS figures is an everyday task for many students and scientists. For many years I used xfig. It is nice and powerful, and you can install it from the default repository, so there was no need to look for an alternative. Recently, however, a problem appeared: xfig could no longer import EPS files. It turned out that the problem was due to a security patch in a Ghostscript file.

Well, I had to annotate some EPS files, so I had to find an alternative. After a short search, I tried tgif. You can install it on Debian Squeeze from the standard repository, and it worked fine. The menus are easy to work with, and its import/export also works well. Despite the name, tgif has nothing to do with the GIF image format.
