In an earlier post, I discussed statistical fits with PyMC and emcee. The advantage of statistical methods is that they are not sensitive to the form of the chi-square function, which matters in cases where the merit function does not have a well-defined minimum. The advantage of chi-square methods is that they are generally much faster. In this post, I show a typical example of a least-square fit with measurement errors. As usual, we are interested in estimating the fit parameters as well as their uncertainties.

As you see in the example above, we fit a simple function with measured y-errors, estimate the fit parameters and their uncertainties, and plot a confidence band for a given range. The program is shown below:


import numpy as np
from pylab import *
from scipy.optimize import curve_fit

def func(x, a, b, c):
    return a * x * x + b * x + c

# test data and error
x = np.linspace(-10, 10, 100)
y0 = -0.07 * x * x + 0.5 * x + 2.
noise = np.random.normal(0.0, 1.0, len(x))
y = y0 + noise

# curve fit [with only y-error]
# sigma is the 1-sigma uncertainty of each data point (here 1.0)
yerr = np.ones(len(x))
popt, pcov = curve_fit(func, x, y, sigma=yerr, absolute_sigma=True)
perr = np.sqrt(np.diag(pcov))

# print fit parameters and 1-sigma estimates
print('fit parameter  1-sigma error')
for i in range(len(popt)):
    print(str(popt[i]) + ' +- ' + str(perr[i]))

# prepare confidence level curves
nstd = 5. # to draw 5-sigma intervals
popt_up = popt + nstd * perr
popt_dw = popt - nstd * perr

fit = func(x, *popt)
fit_up = func(x, *popt_up)
fit_dw = func(x, *popt_dw)

fig, ax = plt.subplots(1)
rcParams['xtick.labelsize'] = 18
rcParams['ytick.labelsize'] = 18
rcParams['font.size'] = 20
errorbar(x, y, yerr=1.0, xerr=0, ecolor='k', fmt='none', label='data')

xlabel('x', fontsize=18)
ylabel('y', fontsize=18)
title('fit with only Y-error', fontsize=18)
plot(x, fit, 'r', lw=2, label='best fit curve')
plot(x, y0, 'k--', lw=2, label='True curve')
ax.fill_between(x, fit_up, fit_dw, alpha=.25, label='5-sigma interval')
legend(loc='lower right', fontsize=18)

Please note that using the measurement error is optional. If you do not have y-errors, simply omit the sigma argument in the fit call:

# curve fit [with only y-error]
popt, pcov = curve_fit(func, x, y)

You still get an estimate for the uncertainty of the fit parameters, although it is less reliable. In the next post, I show an example of a least-square fit with errors on both axes.
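Fitting with errors on both axes can be done with orthogonal distance regression via scipy.odr. A minimal sketch (the x-error of 0.1 and the test data are assumed values for illustration, mirroring the example above):

```python
import numpy as np
from scipy.odr import ODR, Model, RealData

def f(beta, x):
    # scipy.odr expects f(beta, x) with all parameters in one vector
    a, b, c = beta
    return a * x * x + b * x + c

# test data, as in the curve_fit example
rng = np.random.default_rng(1)
x = np.linspace(-10, 10, 100)
y = -0.07 * x * x + 0.5 * x + 2. + rng.normal(0., 1., 100)

# sx, sy are the assumed 1-sigma errors on x and y
data = RealData(x, y, sx=0.1, sy=1.0)
odr = ODR(data, Model(f), beta0=[-0.1, 0.4, 2.])
out = odr.run()

print(out.beta)     # fitted parameters
print(out.sd_beta)  # their standard errors
```

Unlike curve_fit, ODR minimizes the orthogonal distance to the curve, so both sx and sy enter the fit.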

It is common practice in programming to use one variable as a template for another and then later change the value of the new variable, something like

a = 5
b = a
b = 3

at this point, variable a still has its old value; it was not replaced by 3. So far so good. This works in Python as in other programming languages, but what about compound objects (objects that contain other objects: lists, dictionaries, tuples)? Compound objects will NOT behave like this.

Let's have a closer look at the problem. We can check the memory address of a given variable like this:

print(id(a), id(b))

The id() function returns the identity (in CPython, the memory address) of its argument, so one can instantly check whether two names point to the same object or not.
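For simple immutable values this behaves as expected. A minimal sketch (the concrete values are arbitrary, chosen for illustration):

```python
a = 5000
b = a                    # b now refers to the very same object as a
print(id(a) == id(b))    # True: two names, one object
b = 3                    # rebinding b points it at a new object
print(id(a) == id(b))    # False
print(a)                 # a is unchanged: 5000
```

Rebinding a name never affects the other names that still point to the old object.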

In the case of compound variables, the situation deserves a closer look, because assignment does not copy the object at all; it only creates a second name for it:

>>> x1 = ["a", "b"]
>>> x2 = x1
>>> print(x1)
['a', 'b']
>>> print(x2)
['a', 'b']
>>> print(id(x1), id(x2))
43746416 43746416

>>> x2 = ["c", "d"]
>>> print(x1)
['a', 'b']
>>> print(x2)
['c', 'd']
>>> print(id(x1), id(x2))
43746416 43875200

x1 and x2 originally point to the same memory address, but as soon as x2 is rebound to a new list, it gets a new address. Rebinding is harmless; the danger lies in modifying the shared object in place. If we change only one element of x2 instead, x1 is modified as well:

>>> x1 = ["a", "b"]
>>> x2 = x1
>>> x2[1] = "h"
>>> print(x1)
['a', 'h']
>>> print(x2)
['a', 'h']

Be careful!

We actually had two names for one object; the assignment did not create a new object for x2. To avoid this problem, use a deep copy:

from copy import deepcopy

x2 = deepcopy(x1)
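The distinction matters most for nested containers: copy.copy (a shallow copy) duplicates only the outer list, while the inner lists stay shared. A small sketch contrasting the two (the example list is made up for illustration):

```python
from copy import copy, deepcopy

x1 = [["a", "b"], ["c"]]
shallow = copy(x1)       # new outer list, but inner lists are shared
deep = deepcopy(x1)      # recursively copies the inner lists too

x1[0][0] = "z"           # mutate a nested element in place
print(shallow[0])        # ['z', 'b'] -- the inner list is shared
print(deep[0])           # ['a', 'b'] -- fully independent
```

For flat lists of immutable elements a shallow copy is enough; for nested structures, deepcopy is the safe choice.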


Hopefully this helps you avoid confusing errors in your Python code.

Debian, Fedora, Kubuntu

I have used Debian since Sarge. Back then, mounting a USB disk was far from trivial; it is much more user-friendly now. For a while between 2014 and 2016, I used Fedora and Kubuntu instead. Both are more appropriate if you have a modern laptop, as they are faster at incorporating new kernels into their distributions (compare a life cycle of 6 months vs. 2 years). This particularly affects things like sound, LAN, and wireless. Without an internet connection, it is difficult to update the system in the first place if it cannot recognize your network interface. I should mention that, with some geeky tricks, a workaround is usually doable, as I do not buy bleeding-edge new laptops. A more standard solution, however, is to live with either Fedora or Kubuntu and install Debian a year later: by then, the latest (testing) kernels are usually modern enough to support the main functions like networking.

Doing science and mathematical calculations, it is frustrating to see a system crash every now and then for nonsense reasons: Trash crashed, Copy crashed, and so on. I have never experienced such things on a stable Debian system, so the least I do is spend a week getting everything working with Debian before giving up on it.

There are also more fundamental differences in the philosophy of Debian versus Fedora or Kubuntu. While you get bug fixes and minimal updates for stable software, an update never dramatically changes a package; if you want new functionality, it does not arrive before a major release. Debian is very stable, which makes it very attractive for systems running heavy jobs.

Unlike the Venus transit, which is a rare event, there is a Mercury transit every 7.5 years on average. Between 1601 and 2400, there are only 14 Venus transits. If you did not see the last one on June 06, 2012, it is too late: the next Venus transit occurs on Dec 11, 2117! Therefore, we should focus on the Mercury transit.

The next transit of Mercury occurs on May 09, 2016. It starts at about 11:12 UT and ends at about 18:42 UT. The transit is visible in most of Asia, Europe, Africa, and America. Note that the apparent size of the solar disk is about half a degree, while the apparent size of the Mercury disk is only 8 arc seconds (1 degree = 60 arc minutes = 3600 arc seconds).
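To get a feeling for the scale, a quick back-of-the-envelope calculation of the diameter and area ratios, using the 8 arc second figure quoted above:

```python
# apparent diameters in arc seconds
sun_diameter = 0.5 * 3600      # ~0.5 degree converted to arc seconds
mercury_diameter = 8.0         # value quoted in the text

diameter_ratio = mercury_diameter / sun_diameter
area_ratio = diameter_ratio ** 2   # fraction of the solar disk covered

print(f"{diameter_ratio:.4f}")  # 0.0044
print(f"{area_ratio:.2e}")      # 1.98e-05
```

So Mercury's silhouette spans less than half a percent of the solar diameter, which is why a telescope (with a proper solar filter) is needed to observe the transit.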

The remaining Mercury transits in this century are the following (times in UT):

Date            start    max      end
Nov 13, 2032    06:41    08:54    11:07
Nov 07, 2039    07:17    08:46    10:15
May 07, 2049    11:03    14:24    17:44
Nov 09, 2052    23:53    02:29    05:06
May 10, 2062    18:16    21:36    00:57
Nov 11, 2065    17:24    20:06    22:48
Nov 14, 2078    11:42    13:41    15:39
Nov 07, 2085    11:42    13:34    15:26
May 08, 2095    17:20    21:05    00:50
Nov 10, 2098    04:35    07:16    09:57

Losmandy G11 mount

I have used the famous Losmandy G-11 mount for a few years. It is a super heavy mount; you need quite a bit of force to move it around. The mount was serviced recently and, with a rough polar alignment, it allowed us to take exposures of up to five minutes at a focal length of 300 mm.

Close-up view of the Losmandy G-11 mount.

The Losmandy G-11 mount.

The ball head to connect DSLR cameras.

The ball head allows you to point the camera in any direction of the sky very easily. Depending on the weight of the camera and lens, you should consider a ball with a diameter of at least 30-40 mm for a 5 kg load.