
Archive for the ‘Physics’ Category

For nearly 400 years after Galileo and Newton built the first telescopes, image quality was improved through better optical quality of lenses and mirrors, better polishing and coating techniques, and advanced optical design. For instance, none of the present-day professional telescopes is either Newtonian or Galilean in design. The root-mean-square (rms) deviation of the finished optical surface from its intended shape is of the order of a small fraction of the optical wavelength, say 20 nm or so. Achieving such a fine optical surface is challenging by itself. Nonetheless, the final spatial resolution of large ground-based telescopes is ALWAYS governed by turbulence in the atmosphere, known as seeing.

What is seeing?

Seeing is the collective effect of distortions in the wavefront as it passes through the Earth's atmosphere. It causes blurring, image motion, and the like, smearing the spatial resolution to the order of one or two arc seconds (1 radian = 206265 arc seconds). The physical mechanism behind seeing is turbulence in the atmosphere, driven by temperature gradients that generate convection cells. There is turbulence day and night, at low altitudes and at high altitudes. A good fraction of the seeing is due to ground-layer turbulence, which acts as the boundary condition of the atmosphere: the first, say, 100 m above the ground generates a significant fraction of the seeing. The familiar twinkling of stars in the night sky is solely due to seeing: the angular size of a star is much smaller than the seeing limit, so its intensity fluctuates. In contrast, most planets have a large angular diameter of ten or more arc seconds and do not twinkle.

The theoretical resolution of a telescope is estimated by the Rayleigh criterion and is about 1.22 λ/D radians, where λ is the wavelength and D the diameter of the telescope objective (both in meters). For a two-meter telescope, the theoretical resolution is about 0.07 arc second, far smaller than, say, one arc second, the seeing limit for a lucky observer. Seeing has frustrated astronomers for decades. Even amateur astronomers experience a shimmering, water-like effect when they observe the Moon at high magnification with small telescopes. It is like watching the Moon through a layer of water.
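
As a quick sanity check, here is a minimal Python sketch of these numbers (the 500 nm wavelength and the 10 cm amateur aperture are my assumptions, not from the text):

# Rayleigh criterion: theoretical resolution vs. the seeing limit
RAD2ARCSEC = 206265.0                      # arc seconds per radian

def rayleigh_limit(wavelength_m, diameter_m):
    """Theoretical resolution 1.22*lambda/D, converted to arc seconds."""
    return 1.22 * wavelength_m / diameter_m * RAD2ARCSEC

print(rayleigh_limit(500e-9, 2.0))         # ~0.06 arcsec for a 2 m telescope
print(rayleigh_limit(500e-9, 0.10))        # ~1.3 arcsec: a 10 cm aperture already sits at the seeing limit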


A passive approach is to build telescopes at high altitude to skip a significant fraction of the Earth's atmosphere, as with the Keck or VLT telescopes. At a height of 5000 m above sea level, about half of the atmosphere is "gone". The atmosphere, however, extends over a hundred kilometers or so. To eliminate the effect completely, one has to launch the telescope into space, like the famous Hubble Space Telescope in low-Earth orbit or the upcoming James Webb Space Telescope.


Adaptive Optics

A breakthrough emerged in the 1980s when correlation trackers were first employed on astronomical telescopes. Although a correlation tracker did not sharpen the unsharp images, it did fix the location of stars in the focal plane. It used the cross-correlation of the current image of a lock point (a bright star used as a target) with the image recorded just a millisecond earlier. The measured offset was then converted to a voltage following a calibration scheme, and a tip/tilt mirror applied the correction in a closed loop, so that before the seeing could move the star again, the image was displaced back to the "correct" position. To achieve this, the correction rate must exceed the seeing frequency, which is about 100 Hz. As a result, kHz systems were used in correlation trackers.
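
To illustrate the idea, here is a minimal sketch (my own toy code, not an actual tracker implementation) that estimates the shift between two consecutive frames of the lock point from the peak of their cross-correlation:

# estimate the displacement of a lock-point image between two frames
# via FFT cross-correlation (toy sketch of the correlation-tracker idea)
import numpy as np

def estimate_shift(previous, current):
    """Return the (dy, dx) shift that maps `previous` onto `current`."""
    cc = np.fft.ifft2(np.fft.fft2(current) * np.conj(np.fft.fft2(previous)))
    dy, dx = np.unravel_index(np.argmax(np.abs(cc)), cc.shape)
    # wrap shifts larger than half the frame back to negative values
    ny, nx = cc.shape
    if dy > ny // 2:
        dy -= ny
    if dx > nx // 2:
        dx -= nx
    return dy, dx

# in a real tracker, (dy, dx) is converted to tip/tilt mirror voltages
# through a calibration, at a ~kHz loop rate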

Adaptive optics (AO) is the natural successor of the correlation tracker. It employs a deformable mirror to correct the optical aberrations of the wavefront. At first glance, a costly deformable mirror looks like a flat mirror. It consists of several tens or hundreds of small mirror segments, each controlled by a few actuators from behind. The joint action of all the segments deforms the mirror so as to match the aberration imprinted on the wavefront and "undo" the perturbations. It is a challenging task both from a manufacturing and from a computational point of view: one needs a dedicated mid-size machine to close the loop at a frequency much higher than the seeing frequency. Adaptive optics can correct several tens or more aberration modes, such as defocus, coma, and astigmatism. As a result, AO-corrected images gain a lot of sharpness and contrast compared to those of a standard telescope without AO.
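
As a toy illustration of modal correction (my own sketch; real systems use measured influence functions of the actuators), one can fit a few low-order modes to a wavefront map by least squares; the fitted part is what the deformable mirror would "undo":

# toy modal wavefront fit: tip/tilt, defocus, and astigmatism terms
# (a sketch of the idea; not how a production AO system is coded)
import numpy as np

n = 64
y, x = np.mgrid[-1:1:n*1j, -1:1:n*1j]            # normalized pupil coordinates
pupil = x**2 + y**2 <= 1.0                       # circular aperture mask

# low-order modes: tip, tilt, defocus, two astigmatisms
modes = np.stack([x, y, 2*(x**2 + y**2) - 1, x**2 - y**2, 2*x*y])

# synthetic aberrated wavefront: mostly defocus plus some astigmatism
wavefront = 0.3*modes[2] + 0.1*modes[3]

A = modes[:, pupil].T                            # (pixels, modes) design matrix
coeffs, *_ = np.linalg.lstsq(A, wavefront[pupil], rcond=None)
residual = wavefront[pupil] - A @ coeffs         # wavefront after "correction"
print(coeffs.round(3), residual.std())           # recovered amplitudes, ~0 residual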

You can imagine that the AO business is not in the realm of amateur astronomy. There are, however, tip/tilt systems that fix the image motion and can be purchased, like the one SBIG offers. I do not think that an AO system will be realized in a mid-size amateur telescope anytime soon. For large professional telescopes, though, their implementation is a must, I would say. The multi-million-euro cost of AO systems has impeded their installation on many aging and even brand-new telescopes.

Read Full Post »

A particle of mass m is confined to a one-dimensional region 0 ≤ x ≤ a. At the beginning, the normalized wave function is

Ψ(x, t=0) = √(8/(5a)) [1 + cos(πx/a)] sin(πx/a).

a) What is the wave function at a later time t=t0?

b)  What is the average energy of the system at t=0 and t=t0?

c)  Find the probability that the particle is found in the left half of the box (0≤x≤a/2) at t=t0.
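
A sketch of the standard route (the full worked solution is linked below): expand Ψ in the box eigenstates φn(x) = √(2/a) sin(nπx/a) with energies En = n²π²ħ²/(2ma²). Using cos(πx/a) sin(πx/a) = ½ sin(2πx/a), the initial state is

Ψ(x,0) = √(4/5) φ1(x) + √(1/5) φ2(x),

so each term simply acquires a phase:

Ψ(x,t0) = √(4/5) φ1(x) e^(−iE1 t0/ħ) + √(1/5) φ2(x) e^(−iE2 t0/ħ).

The average energy is therefore time-independent, ⟨E⟩ = (4/5) E1 + (1/5) E2 = (8/5) E1 = 4π²ħ²/(5ma²). For part (c), integrating |Ψ(x,t0)|² over 0 ≤ x ≤ a/2 gives 1/2 plus an oscillating cross term, P = 1/2 + (16/(15π)) cos(3E1 t0/ħ).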

Solutions

Read Full Post »

Ψ(x,t) is a solution of the Schrödinger equation for a free particle of mass m in one dimension, with Ψ(x,0) = A exp(−x²/a²).

a)  Find the probability amplitude in the momentum space at time t=0.

b)  Find Ψ(x,t).
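
A sketch of the route (details in the linked solution): the momentum amplitude is the Fourier transform of Ψ(x,0). For a Gaussian it is again a Gaussian,

Φ(p,0) = (1/√(2πħ)) ∫ Ψ(x,0) e^(−ipx/ħ) dx = (A a/√(2ħ)) e^(−p²a²/(4ħ²)).

Each momentum component then evolves with the free-particle phase e^(−ip²t/(2mħ)), so

Ψ(x,t) = (1/√(2πħ)) ∫ Φ(p,0) e^(−ip²t/(2mħ)) e^(ipx/ħ) dp,

a Gaussian integral that yields a spreading wave packet of width a(t) = a √(1 + (2ħt/(ma²))²).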

Solution

Read Full Post »

Energy of earthquakes

The expression "Richter magnitude scale" refers to a way of assigning a single number to quantify the energy released by an earthquake. In this system, invented by Charles Richter in 1935, one magnitude step corresponds roughly to a factor of 32 in energy. That means an earthquake of magnitude 6.0 releases about 32 times more energy than an earthquake of magnitude 5.0. The plot below shows the amount of energy released by earthquakes of different magnitudes.

In the above image, the energy is expressed in joules (left axis) and in tons of TNT equivalent (right axis). The black horizontal line shows the estimated energy of the first nuclear bomb dropped on Hiroshima; it roughly corresponds to a magnitude-six earthquake. Note that a large fraction of the energy of an earthquake is absorbed in the deep layers of the earth.
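
These numbers can be reproduced with the Gutenberg-Richter energy relation, log10 E[J] ≈ 1.5 M + 4.8 (my choice of the standard formula; the post itself only quotes the factor-32 rule):

# energy released by an earthquake of magnitude M
# (Gutenberg-Richter relation; 1 ton of TNT = 4.184e9 J)
def quake_energy_joule(magnitude):
    return 10.0 ** (1.5 * magnitude + 4.8)

for m in (5.0, 6.0, 9.0):
    e = quake_energy_joule(m)
    print(m, e, e / 4.184e9)       # magnitude, joules, tons of TNT

# one magnitude step is a factor 10**1.5 ~ 32 in energy;
# M = 6.0 gives ~6e13 J ~ 15 kilotons of TNT, about the Hiroshima bomb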

Read Full Post »

Consider the one-dimensional time-independent Schrödinger equation for some arbitrary potential V(x). Prove that if a solution Ψ(x) has the property that Ψ(x) → 0 as x → ±∞, then the solution must be nondegenerate and therefore real, apart from a possible overall phase factor.
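
A sketch of the usual Wronskian argument (the full proof is in the linked solution): suppose Ψ1 and Ψ2 both solve the equation with the same energy E. Then

Ψ1″/Ψ1 = Ψ2″/Ψ2 = 2m(V−E)/ħ²,

so Ψ1″Ψ2 − Ψ2″Ψ1 = 0, i.e., the Wronskian W = Ψ1′Ψ2 − Ψ2′Ψ1 is constant. Since both solutions vanish as x → ±∞, W = 0 everywhere, which integrates to Ψ2 = cΨ1: the level is nondegenerate. Because Ψ* solves the same equation with the same E, nondegeneracy forces Ψ* = cΨ with |c| = 1, so Ψ is real apart from an overall phase.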

Solution

Read Full Post »

Solar battery chargers

Recently, quite a lot of solar battery chargers have appeared. Devices like the one in the following image are able, in theory, to recharge mobile phones, iPods, etc. when they themselves are charged, and to recharge themselves using sunlight.

In this particular example, the company provides the following specifications: the device has a charge capacity of 3.5 Ah (ampere-hours) at a potential difference of 3.7 V. That means it contains 12.95 Wh of energy when fully charged. Just for comparison, a small car battery holds about 40 Ah at a potential difference of 12 V, i.e., 480 Wh. The energy stored in the device is thus about 2.7% of the energy stored in a car battery, which is quite a bit actually.

Needless to say, you can charge it via an AC adapter or the USB cable of your laptop and use it to charge your mobile phone when required. The cycle life according to the webpage is 500. Typical lithium-ion polymer batteries actually have a cycle life of 1000 or more; perhaps the shorter figure quoted on the webpage reflects irregular solar charging rather than standard mains charging.

Can it really charge itself via solar radiation? Let me simply evaluate how much solar energy it can absorb, according to the data given on the webpage. The solar constant, the energy a unit area at the top of the Earth's atmosphere receives per second, is about S = 1400 W per square meter. At tropical latitudes, the power received on the ground can exceed 1000 W/m². At middle latitudes like central Europe, on a summer day, S ≈ 800 W/m²; in winter it is usually about 500 to 600 W/m² on a sunny day. When it is overcast and very dark, it drops to values comparable to 1 W/m².

Now let us calculate how long it takes to collect 12.95 Wh from the received solar radiation on an average day. The collecting area, according to the webpage, is 11.5 cm × 6.0 cm, and the efficiency is 17%. If I take S = 300 W/m² for a relatively bright day, then the rate of collecting energy is

area × solar flux × efficiency = 69 cm² × 300 W/m² × 0.17 ≈ 0.35 W.

Therefore, to collect 12.95 Wh, i.e., to recharge the battery via sunlight, one has to keep the instrument in sunshine for about 37 hours. In tropical regions, this time can be a factor of 2 or 3 shorter.
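
The same estimate in a few lines of Python, using the webpage numbers quoted above:

# how long must the panel sit in the sun to recharge the battery?
capacity_wh = 3.5 * 3.7                  # 3.5 Ah at 3.7 V = 12.95 Wh
area_m2 = 0.115 * 0.060                  # 11.5 cm x 6.0 cm collecting area
efficiency = 0.17
flux_w_m2 = 300.0                        # a relatively bright mid-latitude day

power_w = area_m2 * flux_w_m2 * efficiency
print(power_w)                           # ~0.35 W collected
print(capacity_wh / power_w)             # ~37 hours of sunshine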

This simple calculation shows that if you own such an instrument, you need patience to get it charged with its own solar panel. In practice, the charging time can be even longer because of cloudy skies and the decline of the panel efficiency with age.

Do you save money if you buy it?

Perhaps not, especially if you live at high latitudes. The instrument can, in theory, deliver 500 cycles of 12.95 Wh, which amounts to about 6.5 kWh over its lifetime. The electricity price, in expensive cases, is about half a dollar per kWh. That means you save in total about 3.2 US dollars, while the instrument costs 40+ dollars. This particular device is more a (big) spare battery than a solar charger.
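
The payback estimate, as the same kind of sketch:

# lifetime electricity savings vs. purchase price
cycles = 500
capacity_kwh = 12.95 / 1000.0
price_per_kwh = 0.50                           # an expensive tariff, in dollars
print(cycles * capacity_kwh)                   # ~6.5 kWh over the device lifetime
print(cycles * capacity_kwh * price_per_kwh)   # ~3.2 dollars saved vs. a 40+ dollar device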

There are better options. Consider a similar instrument with an area two to three times larger; that proportionally reduces the recharging time. In addition, I suspect systems with rechargeable AA batteries live longer: you can exchange the batteries when they are dead, and the solar panels themselves usually last much longer (10+ years).

Read Full Post »

The diffusion equation

Assuming a constant diffusion coefficient D, the equation reads ∂u/∂t = D ∂²u/∂x². We discretize it with the Crank-Nicolson method (second-order accurate in time and space):

u[n+1,j] − u[n,j] = (λ/2) {(u[n+1,j+1] − 2u[n+1,j] + u[n+1,j−1]) + (u[n,j+1] − 2u[n,j] + u[n,j−1])},   where λ = D Δt/Δx².

A linear system of equations, A·U[n+1] = B·U[n], has to be solved at each time step.

As an example, we take a Gaussian pulse and study the variation of the density with time. Such examples occur in several fields of physics, e.g., quantum mechanics.

As stated above, we have two matrices, A and B. We form them once, factorize A, and then solve the system at each time step:

#-----------------------------------------------
# Crank-Nicolson solver for the 1-D diffusion equation,
# periodic boundary conditions (note the modulo indices)
#-----------------------------------------------
import numpy as np
from scipy.linalg import lu_factor, lu_solve
import matplotlib.pyplot as plt

nx_step = 200                          # number of grid points
nt_step = 500                          # number of time steps
dx, dt, D = 0.05, 0.01, 1.0            # grid spacing, time step, diffusion coefficient
lamb = D * dt / dx**2                  # lambda = D dt / dx^2

x = dx * np.arange(nx_step)
uo = np.exp(-((x - x.mean()) / 0.5)**2)   # initial Gaussian pulse
uoz = uo.copy()                           # keep the initial profile for the plot
t = np.zeros(nt_step)
rho = np.zeros((nx_step, nt_step))
rho[:, 0] = uo

#-----------------------------------------------
# populate the coefficient arrays
#-----------------------------------------------
coefa = np.zeros((nx_step, nx_step))   # A: implicit (left-hand) side
coefb = np.zeros((nx_step, nx_step))   # B: explicit (right-hand) side
for j in range(nx_step):
    coefa[j, (j - 1) % nx_step] = -lamb
    coefa[j, j] = 2.*lamb + 2.
    coefa[j, (j + 1) % nx_step] = -lamb

    coefb[j, (j - 1) % nx_step] = lamb
    coefb[j, j] = -2.*lamb + 2.
    coefb[j, (j + 1) % nx_step] = lamb
#-----------------------------------------------
lu = lu_factor(coefa)                  # factorize A once, reuse every step

for i in range(1, nt_step):            # ----------- main loop ---------
    uo = lu_solve(lu, coefb.dot(uo))   # solve A.U[n+1] = B.U[n]
    t[i] = t[i - 1] + dt
    print(dt*nt_step - t[i], '      ', uo.sum())   # time left; total density is conserved
    rho[:, i] = uo

plt.plot(x, uoz, 'k', x, rho[:, -1], 'r')   # initial (black) vs. final (red) profile
plt.show()

The result is shown in the following figure: as expected, the peak of the density profile decreases with time. If one continues the simulation long enough, the density distribution becomes completely uniform.

Read Full Post »
