Archive for June, 2011

The photo above was captured on the evening of June 15, 2011, while I was waiting for the lunar eclipse. A Canon EOS 400D camera was attached to the Celestron Nexstar 4-inch Maksutov telescope in prime focus mode.


Importance sampling means choosing a good distribution from which to simulate one’s random variables. It involves multiplying the integrand by 1 (usually dressed up in a “tricky fashion”) to yield an expectation of a quantity that varies less than the original integrand over the region of integration.

We now return to it in a slightly more formal way. Suppose that an integrand f can be written as the product of a function h that is almost constant and another, positive, function g. Then its integral over a multidimensional volume V is

∫ f dV = ∫ (f/g) g dV = ∫ h g dV

Earlier, we interpreted this equation as suggesting a change of variable to G, the indefinite integral of g. That made g dV a perfect differential. We then proceeded to use the basic theorem of Monte Carlo integration. A more general interpretation of the above equation is that we can integrate f by instead sampling h, not with uniform probability density dV, but rather with nonuniform density g dV. In this second interpretation, the first one follows as a special case.

In the first interpretation, random points are chosen from a uniform distribution of the new variable y = G(x) in the domain A < y < B. The new integrand, f/g, is close to unity, so its variance (i.e. the value of sigma) is much smaller than the one obtained when the original function is sampled from a uniform distribution. Sampling from a non-uniform distribution should therefore be more efficient than a crude Monte Carlo calculation without importance sampling.
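
To make the two interpretations concrete, the corresponding estimators can be written as follows, here in one dimension for simplicity (standard importance-sampling notation; the symbols a, b, N, x_i and y_i are mine, not from the post):

```latex
% First interpretation: change variables to y = G(x), so dy = g dx, and
% draw y_i uniformly in (A, B) = (G(a), G(b)) (basic Monte Carlo theorem):
\int_a^b f\,dx \;=\; \int_A^B \frac{f}{g}\,dy
   \;\approx\; \frac{B-A}{N}\sum_{i=1}^{N} \frac{f(x_i)}{g(x_i)},
   \qquad x_i = G^{-1}(y_i).

% Second interpretation: draw x_i directly from the normalized density
% p(x) = g(x) / \int g\,dx and weight each sample by f/g:
\int_a^b f\,dx \;\approx\; \frac{\int_a^b g\,dx}{N}\sum_{i=1}^{N}\frac{f(x_i)}{g(x_i)}.
```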

Numerical example

We want to evaluate the integral of y=x^2 in the range between zero and one. We take g(x)=x, which samples larger values of x more often. The indefinite integral of g is G(x)=x^2/2, and its inverse is x=√(2u).

Recipe: 1) take N uniform samples u between zero and one half, the range of U = G(x); 2) build a set of non-uniform samples such that dU = g dV is uniformly sampled. To this end, one has to use the inverse of G: if u is drawn from the uniform distribution, then x = G^-1(u) = √(2u) is distributed with a density proportional to g; 3) evaluate the average of h = f/g for the new sample and multiply it by the length of the U interval (here one half), as in the basic Monte Carlo theorem.
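
A minimal Python sketch of these three steps for f(x)=x^2 and g(x)=x (the function names, seed and sample sizes are mine; this illustrates the recipe, it is not the code used for the plots below):

```python
import numpy as np

rng = np.random.default_rng(0)     # fixed seed, just for reproducibility

def importance_estimate(n):
    """Integral of f(x) = x**2 on [0, 1] with the weight g(x) = x."""
    a, b = 0.0, 0.5                # range of U = G(x) = x**2 / 2
    u = rng.uniform(a, b, n)       # step 1: uniform samples of U
    x = np.sqrt(2.0 * u)           # step 2: x = G^-1(u) is distributed ~ g
    h = x                          # step 3: h = f/g = x**2 / x
    return (b - a) * h.mean()      # basic Monte Carlo theorem in the U domain

def uniform_estimate(n):
    """Crude Monte Carlo with uniform samples, for comparison."""
    x = rng.uniform(0.0, 1.0, n)
    return (x ** 2).mean()

print(importance_estimate(10_000))   # both should be close to 1/3
print(uniform_estimate(10_000))
```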

The distribution of the new variable, dU=gdV, is like this:

Now we have to perform a normal Monte Carlo integration of the function h in the U domain. The results are shown in the next plot. For comparison, the results of a Monte Carlo integration with uniform samples and of Simpson’s rule are also shown.

Here, each point shows the average of the absolute errors over a hundred iterations (except for Simpson’s method, which is deterministic). The importance-sampling results have a systematically smaller variance than those obtained with uniform samples. Hence, the practical trick is to find the best g function.
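
The averaging procedure behind the plot can be sketched along these lines (my own code, assuming the hundred repetitions described above; the deterministic Simpson's-rule reference is left out):

```python
import numpy as np

rng = np.random.default_rng(1)
EXACT = 1.0 / 3.0                      # integral of x**2 on [0, 1]

def importance_estimate(n):
    u = rng.uniform(0.0, 0.5, n)       # uniform in U = x**2 / 2
    x = np.sqrt(2.0 * u)               # samples distributed ~ g(x) = x
    return 0.5 * x.mean()              # (B - A) * <f/g>, with f/g = x

def uniform_estimate(n):
    x = rng.uniform(0.0, 1.0, n)
    return (x ** 2).mean()

for n in (10, 100, 1_000, 10_000):
    err_is = np.mean([abs(importance_estimate(n) - EXACT) for _ in range(100)])
    err_mc = np.mean([abs(uniform_estimate(n) - EXACT) for _ in range(100)])
    print(f"N = {n:6d}   importance: {err_is:.2e}   uniform: {err_mc:.2e}")
```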

g MUST be a good approximation for f

In the following two examples, I show plots similar to the one above, but for two other functions. In the first case, y=√x, importance sampling does not help: the variance is comparable to that of the Monte Carlo method with uniformly distributed samples.

The next example shows that selecting a wrong g function can make the situation even worse. We use the same function, g(x)=x, for the integration of the unit circle (see the previous post). In other words, we sample the region near one more often than the region near zero, while the value of the function near zero is larger than near one. As a result, we get a worse variance than with a completely uniform sample.
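
A quick numerical check of this effect might look as follows (a sketch; I assume the quarter-circle integrand √(1 − x^2) on [0, 1] from the earlier post, and the names and seed are mine):

```python
import numpy as np

rng = np.random.default_rng(2)
EXACT = np.pi / 4.0                    # integral of sqrt(1 - x**2) on [0, 1]

def circle_importance(n):
    """Quarter-circle integrand with the mismatched weight g(x) = x."""
    u = rng.uniform(0.0, 0.5, n)       # uniform in U = x**2 / 2
    x = np.sqrt(2.0 * u)               # samples concentrated near x = 1
    h = np.sqrt(1.0 - x ** 2) / x      # f/g blows up near x = 0
    return 0.5 * h.mean()

def circle_uniform(n):
    x = rng.uniform(0.0, 1.0, n)
    return np.sqrt(1.0 - x ** 2).mean()

n = 1_000
err_is = np.mean([abs(circle_importance(n) - EXACT) for _ in range(100)])
err_mc = np.mean([abs(circle_uniform(n) - EXACT) for _ in range(100)])
print(err_is, err_mc)   # the mismatched g typically shows the larger average error
```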
