
The diffusion equation

Assuming a constant diffusion coefficient, D, we use the Crank-Nicolson method (second-order accurate in time and space):

u[n+1,j] - u[n,j] = (λ/2) [ (u[n+1,j+1] - 2u[n+1,j] + u[n+1,j-1]) + (u[n,j+1] - 2u[n,j] + u[n,j-1]) ]

where λ = D Δt/Δx² (the lamb of the code below).

A linear system of equations, A.U[n+1] = B.U[n], has to be solved in each time step.

As an example, we take a Gaussian pulse and study the variation of the density with time. Such an example occurs in several fields of physics, e.g., quantum mechanics.

As stated above, we have two arrays, A and B. We form them once and then calculate inverse(A).B:

#-----------------------------------------------
# populate the coefficient arrays
#-----------------------------------------------
from numpy import arange, zeros, dot
from scipy.linalg import inv
from pylab import plot, draw

coefa = zeros((nx_step, nx_step))       # matrix A
coefb = zeros((nx_step, nx_step))       # matrix B

for j in arange(0, nx_step):            # A: 2*lamb + 2 on the diagonal, -lamb off it
    coefa[j, (j-1) % nx_step] = -lamb   # the % nx_step indices give periodic boundaries
    coefa[j, j % nx_step] = 2.*lamb + 2.
    coefa[j, (j+1) % nx_step] = -lamb
#-----------------------------------------------
lamb = -lamb                            # B has the opposite off-diagonal sign
for j in arange(0, nx_step):
    coefb[j, (j-1) % nx_step] = -lamb
    coefb[j, j % nx_step] = 2.*lamb + 2.
    coefb[j, (j+1) % nx_step] = -lamb
#-----------------------------------------------
coef = dot(inv(coefa), coefb)           # inverse(A).B, formed once

for i in arange(1, nt_step):            #----------- main loop ---------
    ans = dot(coef, uo)                 # U[n+1] = inverse(A).B U[n]
    uo = ans
    plot(uoz, 'k')                      # initial profile in black
    plot(ans, 'r')                      # current profile in red
    draw()
    t[i] = t[i-1] + dt
    print(dt*nt_step - t[i], '      ', ans.sum())   # remaining time, total density
    rho[:, i] = ans

The result is shown in the following figure: as expected, the peak of the density profile decreases with time. If one continues the simulation long enough, the density distribution becomes completely uniform.



Sometimes we have to prepare a scatter plot of two parameters. When the number of elements in each parameter is large, e.g., several thousand, and the points are concentrated, it is very difficult to use a standard scatter plot: due to over-population, one cannot say where the density has its maximum. In this post, I explain alternative ways to prepare such plots.

The scatter plot below has more than two million points. It is absolutely impossible to say where the peak of the distribution is.

One way to cope with the problem is to overplot the contours of the density on the scatter plot.

Using the np.histogram2d function of NumPy, one can create a map of the two-dimensional density function.

This is not always the best idea, though. Sometimes it might be better to show the 2D density map itself (below). The colorbar indicates the number of points in each bin.

Now, one might complain that due to the large central concentration, the halo of the distribution is not visible in the 2D density map. There are two solutions for the issue: either we change the color table, or we overplot the contours on the 2D density plot (below). As you can see, we can easily show the values of the contours as well.

The Python code to create this plot is the following:

import numpy as np
import matplotlib.pyplot as plt

fig = plt.figure()
ax = fig.add_subplot(111)
# aa and bb are the two data arrays (see above)
H, xedges, yedges = np.histogram2d(aa, bb, range=[[293., 1454.], [464., 1896.]], bins=(50, 50))
extent = [yedges[0], yedges[-1], xedges[0], xedges[-1]]
plt.subplots_adjust(bottom=0.15, left=0.15)
levels = (1.0e4, 1.0e3, 1.0e2, 2.0e1)
cset = plt.contour(H, levels, origin='lower',
                   colors=['black', 'green', 'blue', 'red'],
                   linewidths=(1.9, 1.6, 1.5, 1.4), extent=extent)
plt.clabel(cset, inline=1, fontsize=10, fmt='%1.0i')
for c in cset.collections:
    c.set_linestyle('solid')

Yet another alternative is to show only the density contours and forget about the rest (below).


Importance sampling means choosing a good distribution from which to simulate one's random variables. It involves multiplying the integrand by 1 (usually dressed up in a "tricky fashion") to yield an expectation of a quantity that varies less than the original integrand over the region of integration.

We now return to it in a slightly more formal way. Suppose that an integrand f can be written as the product of a function h that is almost constant and another, positive, function g. Then its integral over a multidimensional volume V is

∫ f dV = ∫ (f/g) g dV = ∫ h g dV

Earlier, we interpreted this equation as suggesting a change of variable to G, the indefinite integral of g. That made g dV a perfect differential, and we then proceeded to use the basic theorem of Monte Carlo integration. A more general interpretation of the above equation is that we can integrate f by sampling h, not with uniform probability density dV, but rather with the nonuniform density g dV. In this second interpretation, the first interpretation follows as a special case.

In this case, random points are chosen from a uniform distribution in the domain A < y < B. The new integrand, f/g, is close to unity, so the variance (i.e., the value of sigma) for this function is much smaller than the one obtained when evaluating the function by sampling from a uniform distribution. Sampling from a non-uniform distribution for this function should therefore be more efficient than a crude Monte Carlo calculation without importance sampling.

Numerical example

We want to evaluate the integral of y = x² in the range between zero and one. We take g(x) = x, which samples larger values more often. The indefinite integral of g is G(x) = x²/2, whose inverse is G⁻¹(u) = √(2u).

Recipe: 1) take N uniform samples between zero and one; 2) build a set of non-uniform samples such that dU = g dV is uniformly sampled. To this end, one has to use the inverse of G: if u is drawn from a uniform distribution, then x = G⁻¹(u) follows the desired distribution. 3) evaluate the average of h = f/g for the new sample.

The distribution of the new variable, dU = g dV, looks like this:

Now, we have to perform a normal Monte Carlo integration of the function h in the U domain. The results are shown in the next plot. For comparison, the result of a Monte Carlo integration with uniform sampling and of Simpson's rule are also shown.

Here, each point shows the average of the absolute errors after a hundred iterations (except for Simpson's method). The results of importance sampling have a systematically smaller variance than the ones with uniform samples. Hence, the practical trick is to find the best g function.

g MUST be a good approximation for f

In the following two examples, I show plots similar to the one above but for two other functions. In the first case, y = √x, importance sampling does not help; the variance is comparable to that of the Monte Carlo method with uniformly distributed samples.

The next example shows that selecting a wrong g function can make the situation even worse. We use the same function, g(x) = x, for the integration of the unit circle (see the previous post). In other words, we sample regions near one more often than regions near zero, while the value of the function near zero is larger than near one. So we get a worse variance than with a completely uniform sample.


Monte Carlo Integration

In mathematics, Monte Carlo integration is numerical integration using random numbers. That is, Monte Carlo integration methods are algorithms for the approximate evaluation of definite integrals, usually multidimensional ones. The usual algorithms evaluate the integrand on a regular grid; Monte Carlo methods, however, randomly choose the points at which the integrand is evaluated. This is helpful for complicated functions or large numbers of dimensions.

To evaluate the integral of a function f over a volume V, we sample it at N random points x1, ..., xN:

∫ f dV ≈ V ⟨f⟩ ± V √( (⟨f²⟩ - ⟨f⟩²) / N )

Here the angle brackets denote the arithmetic mean over the N sample points, ⟨f⟩ = (1/N) Σ f(xi).

The fundamental disadvantage of simple Monte Carlo integration is that its accuracy increases only as the square root of N, the number of sampled points. Each new point sampled adds linearly to an accumulated sum that will become the function average, and also linearly to an accumulated sum of squares that will become the variance (see chapter 7 of Numerical Recipes).

The sampled points should be selected at random. Assume we take a set of uniform random numbers with a distribution like the one below:

Numerical example: calculate π

We want to calculate π to a certain level of accuracy. There are several approaches; I take the integral of the unit circle in the first quadrant and multiply it by four: π = 4 ∫₀¹ √(1 - x²) dx.

Recipe: 1) generate N random numbers between zero and one, 2) evaluate the function f for each of them, and 3) perform the averaging (above equation). The plot below shows the average error after 100 iterations for each number of samples. For comparison, I put in the simple Simpson integration method as well. The uncertainty is larger than for the traditional integration method (1/N in this case), and it goes down only slowly with an increasing number of samples.

As discussed above, the accuracy improves only as the square root of N. Monte Carlo integration starts to show advantages over traditional integration methods only in large dimensions, and there are limits on how far N can be increased: the slow convergence makes the calculations computationally expensive. The CPU run time for the above calculations is shown below.

In one dimension (the present example), the run time grows linearly with N. In two dimensions, it grows with the square of N! That severely limits the number of samples in large dimensions. Recall that we selected a set of uniformly distributed random numbers. However, if the function has a sharp peak in a certain range and takes small values elsewhere, we need to somehow sample those regions more frequently to enhance the accuracy. That is the topic of the next post: importance sampling.


A word on PDEs

So far, we have discussed very simple problems. From now on, we want to simulate some classic problems, e.g., the diffusion equation. For this reason, I would like to have a very short discussion of PDEs. I refer the interested reader to chapter 19 of Numerical Recipes.

Partial differential equations are at the heart of many, if not most, computer analyses and simulations of continuous physical systems, such as fluids, electromagnetic fields, etc. In most mathematics books, partial differential equations (PDEs) are classified into three categories, hyperbolic, parabolic, and elliptic, on the basis of their characteristics, or curves of information propagation. The prototypical example of a hyperbolic equation is the one-dimensional wave equation

∂²u/∂t² = v² ∂²u/∂x²

where v = constant is the velocity of wave propagation. The prototypical parabolic equation is the diffusion equation

∂u/∂t = ∂/∂x ( D ∂u/∂x )

where D is the diffusion coefficient.

The prototypical elliptic equation is the Poisson equation

∂²u/∂x² + ∂²u/∂y² = ρ(x, y)

where the source term ρ is given. If the source term is equal to zero, the equation is Laplace's equation.

From a computational point of view, the classification into these three canonical types is not very meaningful. The first two equations both define initial value or Cauchy problems: if information on u (perhaps including time derivative information) is given at some initial time t0 for all x, then the equations describe how u(x, t) propagates itself forward in time. In other words, these two equations describe time evolution. The goal of a numerical code should be to track that time evolution with some desired accuracy. In initial value problems, one's principal computational concern must be the stability of the algorithm. Many reasonable-looking algorithms for initial value problems just don't work: they are numerically unstable.

By contrast, the third equation directs us to find a single “static” function u(x, y) which satisfies the equation within some (x, y) region of interest, and which — one must also specify — has some desired behavior on the boundary of the region of interest. These problems are called boundary value problems. In contrast to initial value problems, stability is relatively easy to achieve for boundary value problems. Thus, the efficiency of the algorithms, both in computational load and storage requirements, becomes the principal concern.


Flare diagram from the GOES x-ray data. The largest peak is the second X-class flare in the current solar cycle (24).

Explosive events on the solar surface release energy in various forms, e.g., accelerated particles, bulk mass motion, and radiated emission in various spectral ranges.  A solar flare is an explosion on the Sun that happens when energy stored in twisted magnetic fields (usually above sunspots) is suddenly released.  Flares produce a burst of radiation across the electromagnetic spectrum, from radio waves to x-rays and gamma-rays. In a typical flare, the brightness increases for several minutes (the flash phase), followed by a slow decay (the decay phase) which lasts up to an hour.  Formation of a flare is usually accompanied by a significant re-arrangement of the magnetic field configuration.

The maximum temperature in the core of a flare event can reach 10 million Kelvin! This causes bursts of radiation in gamma-rays and x-rays, the extreme ultraviolet, and microwaves. The underlying physical process is bremsstrahlung of electrons with energies of 10-100 keV (an electron with 100 keV of energy travels at about one third of the speed of light).

Scientists classify solar flares according to their x-ray brightness in the wavelength range 1 to 8 Angstroms. There are three categories: X-class flares are big; they are major events that can trigger planet-wide radio blackouts and long-lasting radiation storms. M-class flares are medium-sized; they can cause brief radio blackouts that affect Earth's polar regions. Minor radiation storms sometimes follow an M-class flare. Compared to X- and M-class events, C-class flares are small, with few noticeable consequences here on Earth. Large flares may be visible in white light as well!

Each x-ray flare category has nine subdivisions, e.g., C1 to C9, M1 to M9, and X1 to X9. In this figure, the three indicated flares registered (from left to right) X2, M5, and X6. The X6 flare triggered a radiation storm around Earth nicknamed the Bastille Day event.

------------------------------------------------------
Class    Peak flux I between 1 and 8 Angstroms (W/m^2)
------------------------------------------------------
B        I < 10^-6
C        10^-6 <= I < 10^-5
M        10^-5 <= I < 10^-4
X        I >= 10^-4
------------------------------------------------------

The image below shows a large flare recorded with the EIT telescope on the SOHO spacecraft during the previous solar cycle. We expect to see more solar flares in the coming 4-5 years.

X28 flare in EIT 195 -- The Sun unleashed a powerful flare on 4 November 2003 that could be the most powerful ever witnessed, and probably as strong as anything detected since satellites were able to record these events in the mid-1970s. The still and video clip from the Extreme ultraviolet Imaging Telescope in the 195 A emission line captured the event. The two strongest flares on record, in 1989 and 2001, were rated at X20. This one was stronger, scientists say. But because it saturated the x-ray detector aboard NOAA's GOES satellite that monitors the Sun, it is not possible to tell exactly how large it was. The consensus among scientists puts it somewhere around X28.


In a set of posts, I show the solution of a few simple 1D and 2D problems in physics. The numerical solutions and plots are all done using Python. As we go on, the problems will become more and more difficult. After all, I did all this just for fun!

As the simplest problem, I discuss a mass-and-spring system in one dimension with friction. As you can see in the plot below, the system oscillates and the amplitude of the oscillation decreases gradually. That is due to the loss of energy (the work done by the friction force). The rate of decrease of the amplitude is determined by the friction coefficient. I would like to note that although this problem has an analytical solution (Halliday, Resnick, Walker, chapters 7 and 8), I used a numerical scheme to solve it (see the Python code below).

Variation of the amplitude of oscillations.

The next plot shows the variation of the potential, kinetic, and total energies. As expected, when, e.g., the kinetic energy is zero, the potential energy has a maximum.

Note that I only used the simplest form of the drag force (-kv). In reality, one may consider terms of higher order (a sketch of such a term follows after the code). Alternatively, one might take the friction coefficient, k, as a function of velocity. And here is the code. I didn't manage to format it the way it should look; in case you have an idea, let me know.


"""
Program to solve a 1D mass-and-spring system in the presence of friction

Equation of motion:

    m dv/dt = -k x - k_d v

https://micropore.wordpress.com/
"""

import pylab
from pylab import *   # zeros, arange, plot, xlabel, ... (this also pulls in numpy names)

#-----------------------
# initial condition, constants
#-----------------------
h0 = .50     # initial deviation from equilibrium in m
v0 = 1.0     # m/s
t0 = 0.0     # s

#-----------------------
# constants
#-----------------------
time = 10.        # duration in sec
dt = 1.0e-3       # time step in sec
nstep = int(time/dt) # number of time steps

k = 10.0         # spring constant in N/m
m = 1.0          # mass in kg
k1_drag = 0.5    # kg/s

eta = 0.8        # v_in/v_out in an inelastic encounter (not used below)
Energy = 0.5 * m * pow(v0,2) + 0.5 * k * pow(h0,2)

#-----------------------
# array definitions
#-----------------------
h = zeros((nstep,), dtype=float32)  # height in meters
v = zeros((nstep,), dtype=float32)  # velocity in m/sec
a = zeros((nstep,), dtype=float32)  # acceleration in m/sec^2
t = zeros((nstep,), dtype=float32)  # time in sec
K = zeros((nstep,), dtype=float32)  # kinetic energy
V = zeros((nstep,), dtype=float32)  # potential energy
En = zeros((nstep,), dtype=float32) # total energy

#----------- initial conditions ---------
i = 0
v[0] = v0
h[0] = h0
K[0] = 0.5 * m * pow(v0,2)
V[0] = 0.5 * k * pow(h0,2)
t[0] = t0
En[0] = K[0] + V[0]

for i in arange(1, nstep):   #----------- main loop (explicit Euler scheme) ---------
    a[i-1] = -(k/m) * h[i-1] - (k1_drag/m) * v[i-1]   # spring force plus linear drag
    v[i] = v[i-1] + a[i-1] * dt
    dE = -k1_drag * abs(v[i-1]) * abs(v[i-1]) * dt    # work done by the drag force
    Energy = Energy + dE                              # running energy budget
    h[i] = h[i-1] + v[i-1] * dt
    t[i] = t[i-1] + dt
    K[i] = 0.5 * m * pow(v[i], 2)                     # kinetic energy
    V[i] = 0.5 * k * pow(h[i], 2)                     # potential energy
    En[i] = K[i] + V[i]                               # total mechanical energy

tz = [0, time]
hz = [0, 0]
#------------------------------------------------------
pylab.figure()
plot(t, h, tz, hz, 'k')
xlabel('Time (s)', fontsize=18)
ylabel('Amplitude (m)', fontsize=18)
tex = r'$F\,=\,-kx\, -\, k_dv$'
text(5, .4, tex, fontsize=24)
savefig('spring_oscil.png', dpi=100)
#------------------------------------------------------
pylab.figure()
plot(t, K, t, V, t, En, 'r', tz, hz, 'k')
xlabel('Time (s)', fontsize=18)
ylabel('Energy (J)', fontsize=18)
legend(('Kinetic', 'Potential', 'Total'), loc='upper right', shadow=True)
savefig('spring_energy.png', dpi=100)
show()

