
For nearly 400 years after Galileo and Newton built the first telescopes, image quality was improved through better optical quality of lenses and mirrors, better polishing and coating techniques, and advanced optical design. For instance, none of the present-day professional telescopes is Newtonian or Galilean in design. The root mean square (rms) deviation of a modern optical surface from its ideal shape is of the order of a small fraction of the optical wavelength, say 20 nm or so. Achieving such a fine optical surface is challenging by itself. Nonetheless, the final spatial resolution of large ground-based telescopes is ALWAYS governed by the turbulence in the atmosphere, known as seeing.

What is seeing?

Seeing is the collective effect of distortions in a wavefront passing through the Earth's atmosphere. It causes blurring, image motion, and related effects, resulting in a smeared spatial resolution of the order of one or two arc seconds (1 radian = 206265 arc seconds). The physical mechanism behind seeing is turbulence in the atmosphere, driven by temperature gradients which generate convection cells. There is turbulence day and night, at low altitudes and at high altitudes. A good fraction of the seeing is due to ground-layer turbulence, which acts as the boundary layer of the atmosphere: the first, say, 100 m above the ground generates a significant fraction of the seeing. The famous twinkling of stars in the night sky is solely due to seeing: the angular size of a star is far smaller than the seeing limit, so its intensity fluctuates. In contrast, the brighter planets have large angular diameters of ten or more arc seconds and do not twinkle.

The theoretical resolution of a telescope is estimated by the Rayleigh criterion and is about 1.22 λ/D (in radians), where λ is the wavelength and D the diameter of the telescope objective, both in meters. For a two-meter telescope, the theoretical resolution is about 0.07 arc second, far smaller than, say, one arc second, the seeing limit for a lucky observer. Seeing has frustrated astronomers for decades. Even amateur astronomers experience it when they observe the Moon at high magnification with a small telescope: it is like watching the Moon through a layer of running water.
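
As a quick sanity check, the Rayleigh limit quoted above is a one-liner (the function name is my own):

```python
import math

def rayleigh_limit_arcsec(wavelength_m, diameter_m):
    """Diffraction-limited resolution 1.22 * lambda / D, converted to arc seconds."""
    return 1.22 * wavelength_m / diameter_m * 206265.0

# a two-meter telescope observing visible light at 550 nm
print(round(rayleigh_limit_arcsec(550e-9, 2.0), 3))  # → 0.069
```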


A passive approach is to build telescopes at high altitude to skip a significant fraction of the Earth's atmosphere, like the Keck or VLT telescopes. At a height of 5000 m above sea level, about half of the atmosphere (by mass) is "gone". The atmosphere, however, extends over a hundred kilometers or so. To completely eliminate the effect, one has to launch the telescope into space, like the famous Hubble Space Telescope in low-Earth orbit or the upcoming James Webb Space Telescope.


Adaptive Optics

A breakthrough emerged in the 1980s when correlation trackers were first employed on astronomical telescopes. Although a correlation tracker does not sharpen the blurred images, it does fix the location of stars in the focal plane. It uses the cross-correlation of the current image of a lock point (a bright star used as a target) with the image recorded just a few milliseconds earlier. The measured displacement is converted to a voltage following a calibration scheme, and a tip/tilt mirror then applies the correction in a closed-loop system, such that the image is moved back to the "correct" position before the seeing displaces the star again. To achieve this, the correction speed should exceed the seeing frequency, which is about 100 Hz. As a result, kHz systems were used in correlation trackers.
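
The core of the cross-correlation idea can be sketched in a few lines of NumPy. This is a toy version with an invented synthetic star; real trackers reach sub-pixel accuracy on dedicated hardware:

```python
import numpy as np

def estimate_shift(reference, current):
    """Estimate the (dy, dx) displacement of `current` relative to `reference`
    from the peak of their FFT-based cross-correlation."""
    corr = np.fft.ifft2(np.conj(np.fft.fft2(reference)) * np.fft.fft2(current)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # shifts beyond half the frame wrap around to negative values
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return int(dy), int(dx)

# a synthetic "star" (a Gaussian spot), displaced by (3, -2) pixels
y, x = np.mgrid[0:64, 0:64]
star = np.exp(-((x - 32) ** 2 + (y - 32) ** 2) / 20.0)
moved = np.roll(np.roll(star, 3, axis=0), -2, axis=1)
print(estimate_shift(star, moved))  # → (3, -2)
```

In a closed loop, the returned displacement would be converted to tip/tilt mirror voltages and applied before the next frame arrives.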

Adaptive optics (AO) is the natural successor of the correlation tracker. It employs a deformable mirror to correct the optical aberrations of the wavefront. A costly deformable mirror looks like a flat mirror at first glance. It consists of tens or hundreds of small mirror segments, each controlled via a few actuators from behind. The joint action of all the small segments shapes the deformable mirror so that it counteracts the aberration applied to the wavefront and "undoes" those perturbations. It is a challenging task both from a manufacturing and from a computational point of view: one needs a dedicated mid-size machine to close the loop at a frequency much higher than the seeing frequency. Adaptive optics can correct tens or more modes of aberration such as defocus, coma, and astigmatism. As a result, AO-corrected images gain a lot of sharpness and contrast compared to images from a standard telescope without AO.

You can imagine that the AO business is not in the realm of amateur astronomy. There are, however, tip/tilt systems to fix the image motion which can be purchased, like the one SBIG offers. I do not think that an AO system can be realized in a mid-size amateur telescope anytime soon. Their implementation for large professional telescopes is a must, I would say. The multi-million-Euro cost of AO systems has impeded their installation on many aging and brand-new telescopes.


What is cryptocurrency, Bitcoin and co? Do you need to learn about Bitcoin?
Perhaps you can decide after reading the following short paragraphs.

In an exclusive interview with NewsBTC, Yoni Assia, the CEO of the trading platform eToro, discussed the evolution of Bitcoin. According to him, in August the parent company of the New York Stock Exchange, Microsoft, and Starbucks formed an initiative called Bakkt to improve the usability and adoption of cryptocurrencies. The Japanese and South Korean governments disclosed that they intend to strictly regulate cryptocurrency exchanges as financial institutions, and the government of China has spent over $3 billion to finance blockchain initiatives.


A cryptocurrency is a collection of concepts and technologies that form the basis of a digital money ecosystem. The first blockchain was conceptualized by Satoshi Nakamoto in 2008 for Bitcoin; the original paper is available online. The current market cap of all cryptocurrencies is less than $200 billion, with Bitcoin covering over half of it, followed by Ethereum and other coins. In total there are over a thousand different coins as of 2018.

The block time is the average time it takes for the network to generate one extra block in the blockchain. The block time for Ethereum is set to between 14 and 15 seconds, while for Bitcoin it is 10 minutes. This means a transaction can typically be confirmed faster with Ethereum than with Bitcoin. By storing data across its peer-to-peer network, the blockchain eliminates a number of risks that come with data being held centrally. New Bitcoins are created as a reward for a process known as mining. According to Wikipedia, Bitcoin had about 5 million unique users in 2017.
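
To get a feeling for what mining means, here is a toy proof-of-work sketch in Python. The block text is invented, and a difficulty of four leading zeros is tiny compared to the real Bitcoin network:

```python
import hashlib

def mine(block_data, difficulty):
    """Toy proof-of-work: find a nonce such that the SHA-256 hash of the block
    starts with `difficulty` zero hex digits."""
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith("0" * difficulty):
            return nonce, digest
        nonce += 1

nonce, digest = mine("block 1: Alice pays Bob 0.1 BTC", 4)
print(nonce, digest[:12])  # the hash begins with four zeros
```

Raising the difficulty by one hex digit multiplies the expected work by 16; the Bitcoin network adjusts its difficulty so that one block is found every ten minutes on average.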

After early "proof-of-concept" transactions, the first major users of Bitcoin were black markets, such as Silk Road. During its 30 months of existence, starting in February 2011, Silk Road exclusively accepted Bitcoin as payment, transacting 9.9 million Bitcoins, worth about $214 million (2011-2012). It is a general trend that criminals adopt new technologies early.

The price profile

In 2012, Bitcoin prices started at about one Euro and grew to about ten Euro over the year. In 2013, prices started at 10 Euro and rose to 560 Euro by 1 January 2014. In 2015, prices started at 260 Euro and rose to about 400 Euro over the year. In 2016, prices rose to over 900 Euro by 1 January 2017, and then to 16,500 Euro on 17 December 2017. Since then, the price has collapsed in several steps. In early 2018, Google and Facebook announced that they would ban advertisements for cryptocurrency. Throughout the rest of the first half of 2018, Bitcoin's price fluctuated between 5,848 and 10,000 Euro. On 18 August 2018, Bitcoin's price was about 5,750 Euro. The daily volume of Bitcoin exchange is usually several billion Euro, while the market cap is currently about 96 billion Euro.

The prices of Bitcoin and other cryptocurrencies are volatile compared to gold or standard currencies. About 20% of all Bitcoins are believed to be lost; the lost coins would have a market value of about $20 billion at July 2018 prices. Approximately 1 million Bitcoins have been stolen, which would have a value of about $7 billion at July 2018 prices. The bulk of Bitcoin is traded in US Dollars or Japanese Yen.


The crypto wallet

Before getting started, you have to make sure that you are doing everything safely and properly. There have been a number of major hacking incidents in the past, so your first step is to find a proper cryptocurrency wallet to store your currency securely. Remember that there is no central bank to store your assets: you yourself have to take care of them in your wallet. Bitcoin is transferred in a peer-to-peer system; there is no central bank or server.

Bitcoin uses public-key cryptography, in which two cryptographic keys, one public and one private, are generated. At its most basic, a wallet is a collection of these keys. A wallet is to cryptocurrency what a browser is to HTML. You need a wallet to perform the standard operations: buy/sell, send/receive, and store. Some wallets also allow you to exchange one cryptocurrency for another (see this and this).

Bitcoin Core is perhaps the best-known implementation or client. It is an open-source program available on GitHub. If you prefer to run a full node, Bitcoin Core requires a one-time download of about 210 GB of data, plus a further 5-10 GB per month. By default, you will need to store all of that data, but if you enable pruning, you can store as little as 6 GB in total without sacrificing any security.
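
Pruning is enabled with a single line in Bitcoin Core's bitcoin.conf file; the value is the target size of stored block data in MiB. A sketch matching the roughly 6 GB figure above; check the Bitcoin Core documentation for the config file location on your system:

```ini
# bitcoin.conf: discard old block files, keeping roughly the latest 6 GB
prune=6000
```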

There are tons of wallets available as desktop programs, mobile apps, and web applications. Mobile wallets are user-friendly but otherwise not the safest place to store your digital assets.

To buy Bitcoin or other coins, you need to verify your identity at one of the many exchanges. The verification procedure usually takes a few days. You can buy Bitcoin with your bank account, credit card, PayPal, and more. Nowadays even Bitcoin vouchers are available in supermarkets. Some of the exchanges, like Coinbase, Bitstamp, eToro, and Kraken, allow you to have an online wallet on their webpage. However, if you prefer to stay anonymous, you need a desktop wallet independent from your online or mobile-app wallets.

Monitoring

There are certain places like Coin360 where you can keep an eye on developments in the market. Another example is the CryptoCompare webpage. There are tons of other pages providing near real-time market data. There are also dedicated mobile apps just to update users on the latest changes in the crypto market, including rising/falling prices, 24 h volume (total transaction volume per day), and market cap (total value of a certain type of coin).

Dominance percentage from CoinMarketCap.

Bitcoin and other digital coins are interesting tools but perhaps not for everybody in their current state. It is worth studying their structure, however, as more and more companies adopt them. If you would like to know more about regulatory issues, for instance whether a Bitcoin is a property, a currency, or a commodity, there are webpages like Cointelegraph which you can visit.

If someone has no interest in learning about Bitcoin and co, it is better to stay away. Learning about cryptocurrency is similar to sex education for teenagers: one has to know about it in advance.

A typical problem in natural science and engineering is to select one model among many: the one that best fits the observed data. The following figures show fits of a linear, a quadratic, and a cubic polynomial to the same test data (the test data were parabolic, with noise added on both axes).

The question is: which model best fits the observed data?

A classical approach is to use the reduced chi-square, which takes into account the degrees of freedom of each fit. In Bayesian statistics, we can use the Bayes factor for decision making. The aim of the Bayes factor is to quantify the support for one model over another. When we want to choose between two models based on observed data, we calculate the evidence of each model, also called the marginalized likelihood; the Bayes factor is the ratio of the two evidences.

K = P(D|M1) / P(D|M2)

A value of K larger than one means support for model M1. There are, however, more detailed guidelines on how to interpret the Bayes factor. Kass and Raftery (1995) presented the following table:

K            Strength of evidence
1 to 3       not worth more than a bare mention
3 to 20      positive
20 to 150    strong
> 150        very strong

In Python, this can be done in different ways. One option is to use the parallel-tempering sampler (PTSampler) in emcee. Another is PyMultiNest, an advanced package which performs importance nested sampling.

In our example, the PT sampler finds strong support for the parabolic model compared to the cubic model, and at the same time very strong support for the parabolic model against the linear model. Please note that the Bayes factor does not tell which model is correct; it just quantitatively estimates which model is preferred given the observed data.
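
As a quick, rough alternative to a full evidence calculation, one can use the Schwarz (BIC) approximation, where ln K for model 1 over model 2 is roughly (BIC_2 - BIC_1)/2. The sketch below fits polynomials with NumPy to synthetic parabolic data; the data, noise level, and seed are my own choices, not those of the original figures:

```python
import numpy as np

rng = np.random.default_rng(42)
x = np.linspace(-5, 5, 80)
y = 2.0 - 0.5 * x + 0.3 * x**2 + rng.normal(0.0, 0.5, x.size)  # parabola + noise

def bic(degree):
    """BIC of a least-square polynomial fit: n*ln(RSS/n) + k*ln(n)."""
    coeffs = np.polyfit(x, y, degree)
    rss = np.sum((y - np.polyval(coeffs, x)) ** 2)
    n, k = x.size, degree + 1
    return n * np.log(rss / n) + k * np.log(n)

bic_lin, bic_quad, bic_cub = bic(1), bic(2), bic(3)
# ln K ~ (BIC_other - BIC_quadratic) / 2: positive values favour the parabola
print("ln K (quadratic vs linear):", 0.5 * (bic_lin - bic_quad))
print("ln K (quadratic vs cubic): ", 0.5 * (bic_cub - bic_quad))
```

The cubic fits the data essentially as well as the quadratic, but its extra parameter is penalized, so the parabola wins; against the linear model the support is overwhelming, in line with the sampler result above.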

There are several different methods to smooth a noisy signal. In this post I compare three common smoothing methods, namely a median filter, a Gaussian filter, and Radial Basis Function (RBF) smoothing. RBF is a powerful tool not only for multivariate data smoothing, but also for interpolation, regression, etc. The following figure shows the excellent performance of RBF compared to the median and Gaussian filters. The synthetic data were modified with Gaussian noise. I have used the 'quintic' kernel in this example.

Comparison of the RBF smoothing with the median and Gaussian filtering in a one-dimensional example.

The core program is fairly easy, as RBF interpolation is built into SciPy (note that the class is called Rbf, not RBF):

from scipy.interpolate import Rbf
rbf = Rbf(x, y, function='quintic', smooth=0.1)
s = rbf(x)
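
For comparison, the median and Gaussian filters used in the figure are one-liners in scipy.ndimage. A minimal sketch on synthetic noisy data; the signal, noise level, and filter widths are made up for illustration:

```python
import numpy as np
from scipy.ndimage import median_filter, gaussian_filter1d

rng = np.random.default_rng(0)
x = np.linspace(0, 4 * np.pi, 200)
y = np.sin(x) + rng.normal(0.0, 0.2, x.size)  # synthetic signal plus Gaussian noise

y_med = median_filter(y, size=9)         # median filter with a 9-point window
y_gauss = gaussian_filter1d(y, sigma=3)  # Gaussian filter, sigma of 3 samples

# both reduce the scatter around the true signal
print(np.std(y - np.sin(x)), np.std(y_gauss - np.sin(x)))
```

The median filter is more robust against outliers, while the Gaussian filter gives a smoother curve at the cost of rounding off sharp features.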

In an earlier post, I discussed a least-square fit with error on the y-axis (for statistical fits, check the PyMC and EMCEE posts). In almost any fit, having an estimate of the fit uncertainty is a must. The better we know the noise characteristics of the experiment, the better we can estimate the uncertainty of the fit parameters. In this post, I show a more serious example in which we have errors on both axes.

Unlike the previous example, we do not use the curve_fit module of SciPy. Instead, there is another dedicated module for orthogonal distance regression (ODR). The program, with some comments, is shown below:

import numpy as np
import matplotlib.pyplot as plt
from scipy import odr

def func(p, x):
    a, b, c = p
    return a * x * x + b * x + c

# Model object
quad_model = odr.Model(func)

# test data and error (sx, sy are the known error standard deviations)
sx, sy = 1.0, 1.0
x0 = np.linspace(-10, 10, 100)
y0 = -0.07 * x0 * x0 + 0.5 * x0 + 2.
x = x0 + np.random.normal(0.0, sx, len(x0))
y = y0 + np.random.normal(0.0, sy, len(x0))

# Create a RealData object
data = odr.RealData(x, y, sx=sx, sy=sy)

# Set up ODR with the model and data
myodr = odr.ODR(data, quad_model, beta0=[0., 1., 1.])

# Run the regression
out = myodr.run()

# print fit parameters and 1-sigma estimates
popt = out.beta
perr = out.sd_beta
print('fit parameter 1-sigma error')
print('-----------------------------------')
for i in range(len(popt)):
    print(str(popt[i]) + ' +- ' + str(perr[i]))

# prepare confidence level curves
nstd = 5.  # to draw 5-sigma intervals
popt_up = popt + nstd * perr
popt_dw = popt - nstd * perr

x_fit = np.linspace(min(x), max(x), 100)
fit = func(popt, x_fit)
fit_up = func(popt_up, x_fit)
fit_dw = func(popt_dw, x_fit)

# plot
fig, ax = plt.subplots(1)
plt.rcParams['font.size'] = 20
ax.errorbar(x, y, yerr=sy, xerr=sx, ecolor='k', fmt='none', label='data')
ax.set_xlabel('x', fontsize=18)
ax.set_ylabel('y', fontsize=18)
ax.set_title('fit with error on both axes', fontsize=18)
ax.plot(x_fit, fit, 'r', lw=2, label='best fit curve')
ax.plot(x0, y0, 'k--', lw=2, label='True curve')
ax.fill_between(x_fit, fit_up, fit_dw, alpha=.25, label='5-sigma interval')
ax.legend(loc='lower right', fontsize=18)
plt.show()

Please note that Python is case sensitive, so do not change the upper/lower case in the above commands. A general comment: you can easily replace the second-order function of this example with any desired function. The method we used to estimate the uncertainties of the fit parameters is the standard one, using the diagonal elements of the covariance matrix.

In earlier posts, I discussed statistical fits with PyMC and EMCEE. The advantage of statistical methods is that they are not sensitive to the form of the chi-square function. This is important in cases where the merit function does not have a well-defined minimum. The advantage of chi-square methods is that they are generally much faster. In this post, I show a typical example of a least-square fit with measurement errors. As usual, we are interested in estimating the fit parameters as well as their uncertainties.

As you see in the above example, we fit a simple function with measured y-errors, estimate the fit parameters and their uncertainties, and plot a confidence band over a given range. The program is shown below:


import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

def func(x, a, b, c):
    return a * x * x + b * x + c

# test data and error
sigma_y = 1.0
x = np.linspace(-10, 10, 100)
y0 = -0.07 * x * x + 0.5 * x + 2.
y = y0 + np.random.normal(0.0, sigma_y, len(x))

# curve fit [with y-error]; sigma is the standard deviation of the y data
popt, pcov = curve_fit(func, x, y, sigma=np.full(len(x), sigma_y),
                       absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))

# print fit parameters and 1-sigma estimates
print('fit parameter 1-sigma error')
print('-----------------------------------')
for i in range(len(popt)):
    print(str(popt[i]) + ' +- ' + str(perr[i]))

# prepare confidence level curves
nstd = 5.  # to draw 5-sigma intervals
popt_up = popt + nstd * perr
popt_dw = popt - nstd * perr

fit = func(x, *popt)
fit_up = func(x, *popt_up)
fit_dw = func(x, *popt_dw)

# plot
fig, ax = plt.subplots(1)
plt.rcParams['xtick.labelsize'] = 18
plt.rcParams['ytick.labelsize'] = 18
plt.rcParams['font.size'] = 20
ax.errorbar(x, y, yerr=sigma_y, xerr=0, ecolor='k', fmt='none', label='data')
ax.set_xlabel('x', fontsize=18)
ax.set_ylabel('y', fontsize=18)
ax.set_title('fit with only Y-error', fontsize=18)
ax.plot(x, fit, 'r', lw=2, label='best fit curve')
ax.plot(x, y0, 'k--', lw=2, label='True curve')
ax.fill_between(x, fit_up, fit_dw, alpha=.25, label='5-sigma interval')
ax.legend(loc='lower right', fontsize=18)
plt.show()

Please note that passing the measurement error is optional. If you do not have y-errors, simply omit the sigma argument in the fit procedure:

# curve fit [without y-error]
popt, pcov = curve_fit(func, x, y)

You still get an estimate for the uncertainty of the fit parameters, although it is less reliable. In the next post, I show an example of a least-square fit with errors on both axes.

It is a common practice in programming to use one variable as a template for another and then later change the value of the new variable, something like

a = 1
b = a
b = 3

At this point, variable a still has its old value; it was not replaced by 3. So far so good. This works in Python like in other programming languages, but what about compound objects (compound objects contain other objects: lists, dictionaries, tuples)? They will NOT behave like this.

Let's have a closer look at the problem. We can check the memory address of a given variable like this:

print(id(a), id(b))

The id() function returns the memory address of its argument, so one can instantly check whether two variables point to the same address or not.
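
For a simple value, a quick check confirms the rebinding behaviour described above:

```python
a = 1000
b = a
print(id(a) == id(b))  # True: b is just a second name for the same object
b = 3
print(id(a) == id(b))  # False: b now points to a different object
print(a)               # 1000: a is untouched
```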

In the case of compound variables, the situation needs care, because plain assignment only creates a new name (an alias) for the same object:

>>> x1 = ["a", "b"]
>>> x2 = x1
>>> print(x1)
['a', 'b']
>>> print(x2)
['a', 'b']
>>> print(id(x1), id(x2))
43746416 43746416

>>> x2 = ["c", "d"]
>>> print(x1)
['a', 'b']
>>> print(x2)
['c', 'd']
>>> print(id(x1), id(x2))
43746416 43875200

x1 and x2 originally point to the same memory address; assigning a brand-new list to x2 simply rebinds the name to a new object, so x1 is unaffected. Note that x2 = x1 never copied the list: it only created a second name for the same object. So if instead we change a single element of x2, x1 is modified as well:

>>> x1 = ["a", "b"]
>>> x2 = x1
>>> x2[1] = "h"
>>> print(x1)
['a', 'h']
>>> print(x2)
['a', 'h']


Be careful!

We actually had two names for one object; we never assigned a new object to x2. To avoid this problem, make a real copy. For a flat list, x1.copy() or list(x1) (a shallow copy) is enough; for nested objects, use a deep copy:

from copy import deepcopy

x2 = deepcopy(x1)
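
For a nested list, the difference is easy to demonstrate:

```python
from copy import deepcopy

x1 = ["a", ["b", "c"]]   # a nested list
x2 = deepcopy(x1)        # a fully independent copy, inner list included
x2[1][0] = "h"
print(x1)  # ['a', ['b', 'c']] -- the original is untouched
print(x2)  # ['a', ['h', 'c']]
```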


Hopefully this helps to prevent confusing errors in your Python code.