The core of the program is short, because SciPy ships a ready-made radial basis function class:

from scipy.interpolate import Rbf

rbf = Rbf(x, y, function='quintic', smooth=0.1)

s = rbf(x)
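Since the snippet above assumes `x` and `y` already exist, here is a self-contained sketch with hypothetical sample data (the sine curve and seed are my own choices, not from the original):

```python
import numpy as np
from scipy.interpolate import Rbf

# hypothetical sample: noisy measurements of a sine curve
rng = np.random.default_rng(42)
x = np.linspace(0, 10, 20)
y = np.sin(x) + 0.05 * rng.normal(size=len(x))

# quintic radial basis functions with mild smoothing
rbf = Rbf(x, y, function='quintic', smooth=0.1)

# evaluate the smooth interpolant on a finer grid
x_fine = np.linspace(0, 10, 200)
s = rbf(x_fine)
```

With `smooth=0` the interpolant would pass exactly through the data points; a small positive `smooth` trades exact interpolation for a smoother curve.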

Unlike the previous example, we do not use the curve_fit function of SciPy. Instead, there is a dedicated module for orthogonal distance regression (odr). The program, with some comments, is shown below:

import numpy as np

from pylab import *

from scipy.optimize import curve_fit

from scipy import odr

def func(p, x):

    a, b, c = p

    return a * x * x + b * x + c

# Model object

quad_model = odr.Model(func)

# test data and error

x0 = np.linspace(-10, 10, 100)

y0 = -0.07 * x0 * x0 + 0.5 * x0 + 2.

noise_x = np.random.normal(0.0, 1.0, len(x0))

noise_y = np.random.normal(0.0, 1.0, len(x0))

y = y0 + noise_y

x = x0 + noise_x

# Create a RealData object (sx, sy are the 1-sigma uncertainties of x and y, here 1.0)

data = odr.RealData(x, y, sx=np.ones_like(x), sy=np.ones_like(y))

# Set up ODR with the model and data.

odr_inst = odr.ODR(data, quad_model, beta0=[0., 1., 1.])

# Run the regression.

out = odr_inst.run()

#print fit parameters and 1-sigma estimates

popt = out.beta

perr = out.sd_beta

print('fit parameter 1-sigma error')

print('-----------------------------------')

for i in range(len(popt)):

    print(str(popt[i]) + ' +- ' + str(perr[i]))

# prepare confidence level curves

nstd = 5. # to draw 5-sigma intervals

popt_up = popt + nstd * perr

popt_dw = popt - nstd * perr

x_fit = np.linspace(min(x), max(x), 100)

fit = func(popt, x_fit)

fit_up = func(popt_up, x_fit)

fit_dw = func(popt_dw, x_fit)

#plot

fig, ax = plt.subplots(1)

rcParams['font.size'] = 20

errorbar(x, y, yerr=np.abs(noise_y), xerr=np.abs(noise_x), ecolor='k', fmt='none', label='data')

xlabel('x', fontsize=18)

ylabel('y', fontsize=18)

title('fit with errors on both axes', fontsize=18)

plot(x_fit, fit, 'r', lw=2, label='best fit curve')

plot(x0, y0, 'k--', lw=2, label='True curve')

ax.fill_between(x_fit, fit_up, fit_dw, alpha=.25, label='5-sigma interval')

legend(loc='lower right', fontsize=18)

show()

Please note that Python is case sensitive, so do not change the upper/lower case of the commands above. A general comment: you can easily replace the second-order function of this example with any desired model function. The uncertainties of the fit parameters are estimated in the standard way, from the square roots of the diagonal elements of the covariance matrix.
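To illustrate swapping in another model, here is a minimal sketch with a hypothetical exponential-decay function (the model, synthetic data, and seed are my own assumptions; only the `(p, x)` signature and the Model/RealData/ODR calls come from the example above):

```python
import numpy as np
from scipy import odr

# hypothetical alternative model: a * exp(-b * x) + c
def exp_func(p, x):
    a, b, c = p
    return a * np.exp(-b * x) + c

exp_model = odr.Model(exp_func)

# synthetic data with small errors on both axes
rng = np.random.default_rng(0)
x = np.linspace(0, 5, 50)
y = 3.0 * np.exp(-1.2 * x) + 0.5 + rng.normal(0, 0.05, len(x))

data = odr.RealData(x, y, sx=np.full(len(x), 0.01), sy=np.full(len(x), 0.05))
result = odr.ODR(data, exp_model, beta0=[1., 1., 0.]).run()

popt, perr = result.beta, result.sd_beta
```

The only change compared with the quadratic example is the body of the model function; the rest of the machinery stays the same.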

As you see in the above example, we fit a simple function to data with a measured y-error, estimate the fit parameters and their uncertainties, and plot a confidence band for a given range. The program is shown below:

import numpy as np

from pylab import *

from scipy.optimize import curve_fit

def func(x, a, b, c):

    return a * x * x + b * x + c

# test data and error

x = np.linspace(-10, 10, 100)

y0 = -0.07 * x * x + 0.5 * x + 2.

noise = np.random.normal(0.0, 1.0, len(x))

y = y0 + noise

# curve fit [with only y-error]; sigma is the 1-sigma uncertainty of each y (here 1.0)

popt, pcov = curve_fit(func, x, y, sigma=np.ones_like(y))

perr = np.sqrt(np.diag(pcov))

#print fit parameters and 1-sigma estimates

print('fit parameter 1-sigma error')

print('-----------------------------------')

for i in range(len(popt)):

    print(str(popt[i]) + ' +- ' + str(perr[i]))

# prepare confidence level curves

nstd = 5. # to draw 5-sigma intervals

popt_up = popt + nstd * perr

popt_dw = popt - nstd * perr

fit = func(x, *popt)

fit_up = func(x, *popt_up)

fit_dw = func(x, *popt_dw)

#plot

fig, ax = plt.subplots(1)

rcParams['xtick.labelsize'] = 18

rcParams['ytick.labelsize'] = 18

rcParams['font.size'] = 20

errorbar(x, y, yerr=np.abs(noise), xerr=0, ecolor='k', fmt='none', label='data')

xlabel('x', fontsize=18)

ylabel('y', fontsize=18)

title('fit with only Y-error', fontsize=18)

plot(x, fit, 'r', lw=2, label='best fit curve')

plot(x, y0, 'k--', lw=2, label='True curve')

ax.fill_between(x, fit_up, fit_dw, alpha=.25, label='5-sigma interval')

legend(loc='lower right', fontsize=18)

show()

Please note that supplying the measurement error is optional. If you do not have y-errors, simply omit the sigma argument in the fit call:

# curve fit [without y-error]

popt, pcov = curve_fit(func, x, y)

You still get an estimate for the uncertainty of the fit parameters, although it is less reliable. In the next post, I show an example of a least-squares fit with errors on both axes.
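A self-contained sketch of this no-sigma case (synthetic data and seed of my own choosing): the same quadratic is fit, and the 1-sigma parameter errors are still read off the diagonal of the covariance matrix, which SciPy then scales by the residual variance.

```python
import numpy as np
from scipy.optimize import curve_fit

def func(x, a, b, c):
    return a * x * x + b * x + c

# synthetic data: a known quadratic plus unit Gaussian noise
rng = np.random.default_rng(1)
x = np.linspace(-10, 10, 100)
y = -0.07 * x * x + 0.5 * x + 2. + rng.normal(0, 1.0, len(x))

# no sigma given: pcov is scaled by the residual variance of the fit
popt, pcov = curve_fit(func, x, y)
perr = np.sqrt(np.diag(pcov))
```

With 100 points and unit noise, the recovered quadratic coefficient lands very close to the true value of -0.07.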

a = 5

b = a

b = 3

At this point, variable **a** still has its old value; it was not replaced by 3. So far so good. This works in Python like in other programming languages. But what about compound objects (objects that contain other objects: lists, dictionaries, tuples)? Compound objects do NOT behave like this.

Let's have a closer look at the problem. We can check the memory address of a given variable like this:

print(id(a), id(b))

The id() function returns the memory address of its argument, so one can instantly check whether two names point to the same object or not.
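The simple-assignment case above can be checked directly with id(), shown here with a list to make the addresses easy to compare:

```python
a = [1, 2]
b = a                    # no copy: b is another name for the same object
print(id(a) == id(b))    # True, both names share one address

b = [3, 4]               # rebinding b creates a brand-new object
print(id(a) == id(b))    # False, the addresses now differ
```

Rebinding a name never touches the old object; it only points the name somewhere new.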

In the case of compound variables, the situation is different, because assignment only creates a second name for the same object (an alias), not a copy:

>>> x1 = ["a", "b"]

>>> x2 = x1

>>> print(x1)

['a', 'b']

>>> print(x2)

['a', 'b']

>>> print(id(x1), id(x2))

43746416 43746416

>>> x2 = ["c", "d"]

>>> print(x1)

['a', 'b']

>>> print(x2)

['c', 'd']

>>> print(id(x1), id(x2))

43746416 43875200

**x1** and **x2** originally point to the same memory address, but as soon as one name is reassigned to a new list, it is bound to a new object at a new address, and the other name is unaffected. Note that `x2 = x1` never copied the list: both names referred to one and the same object. So if, instead of reassigning, we change only one element of **x2**, **x1** is modified as well:

>>> x1 = ["a", "b"]

>>> x2 = x1

>>> x2[1] = "h"

>>> print(x1)

['a', 'h']

>>> print(x2)

['a', 'h']


We actually had two names for one object; we never assigned a new object to **x2**. For a flat list, a shallow copy (`x2 = x1.copy()` or `x2 = x1[:]`) already avoids this problem; for nested objects, use a deep copy:

from copy import deepcopy

x2 = deepcopy(x1)
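To see why a nested list needs deepcopy while a flat list does not, compare a shallow copy with a deep copy of the same nested list (a minimal sketch):

```python
from copy import copy, deepcopy

nested = [["a", "b"], ["c", "d"]]
shallow = copy(nested)      # new outer list, but the inner lists are shared
deep = deepcopy(nested)     # everything is duplicated recursively

nested[0][0] = "X"          # mutate an inner element of the original

print(shallow[0][0])        # 'X' -- the shallow copy sees the mutation
print(deep[0][0])           # 'a' -- the deep copy is fully independent
```

A shallow copy duplicates only the outermost container; deepcopy walks the whole structure.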

Hopefully this helps prevent confusing errors in your Python code.

Doing science and mathematical calculations, it is frustrating to see a system crash every now and then for nonsensical reasons: Trash crashed, Copy crashed, and so on. I have never experienced such things on a stable Debian system, so the least I could do was spend a week getting everything working with Debian before giving it up.

There are also more fundamental differences in philosophy between Debian and Fedora or Kubuntu. You get bug fixes and minimal updates for stable software, but an update never changes a package dramatically, so new functionality does not arrive before a major release. Debian is very stable, and that makes it very attractive for systems running heavy jobs.
