Test vectors x and y are created. The input x runs from 0 to 100 in steps of 1, and the output y corresponds to x/(25+x) with some added noise; the signal reaches about 0.8 at x = 100.
Now, when using Newton's method, we have a residual vector y - x/(k+x) of length 101, some of whose entries are positive and some negative. The function fit sums these residuals over all the terms, so finding the root of fit places the curve in the middle of the data points: the best fit of the form x/(k+x), with k near 25. (SciPy also provides many ready-made functions for finding best fits.)
In many cases, Newton's method will not converge, for example when the derivative keeps changing sign. In those cases, you can always plot the function over a grid of points, read off an approximate root (a sign change), and use that as the starting value in Newton's method (the argument x0).
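The grid-scan strategy just described can be sketched as follows. The function f here is a hypothetical example (its root is at the square root of 10), chosen only to illustrate locating a sign change:

```python
import numpy as np

# Hypothetical example function; its root is at sqrt(10) ~ 3.162.
def f(x):
    return x**2 - 10

# Evaluate f on a coarse grid and look for a sign change; the grid
# point just before the change is a good x0 for Newton's method.
grid = np.linspace(0, 10, 101)
vals = f(grid)
sign_change = np.where(np.diff(np.sign(vals)) != 0)[0]
x0_guess = grid[sign_change[0]]
print("starting guess:", x0_guess)
```

The coarse grid only needs to bracket the root; Newton's method then refines the guess quickly.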
from __future__ import print_function, division
import numpy as np
import matplotlib.pyplot as plt
def newton(f, tol=1E-10, x0=1, N=50, h=0.2):
    """Find a root of f by Newton's method, using a central
    difference of width h to approximate the derivative."""
    for i in range(N):
        df_dx = (f(x0+h/2)-f(x0-h/2))/h
        x1 = x0 - f(x0)/df_dx
        error = abs(x1-x0)
        print("i = %d,x1 = %.4f, error = %.4e" % (i,x1,error))
        if error < tol: break
        x0 = x1
    return x1

print('Test data x and y')
x = np.arange(101)
r = 0.2*np.random.rand(len(x))-0.1    # uniform noise in [-0.1, 0.1)
y = x/(25+x) + r

def fit(k):
    # Sum of the residuals y - x/(k+x); its root gives the k that
    # places the curve in the middle of the data points.
    return np.sum(y - x/(k+x))

k0 = newton(fit)
print('k0 =', k0)

plt.title(r'Fitting y = x/($k_0$+x)')
plt.plot(x, y, 'b', x, x/(k0+x), 'm')
plt.legend(('actual', 'fit'), loc=4)
plt.show()
#Test data x and y
#i = 0,x1 = 11.2652, error = 1.0265e+01
#i = 1,x1 = 22.3195, error = 1.1054e+01
#i = 2,x1 = 25.9416, error = 3.6221e+00
#i = 3,x1 = 26.1740, error = 2.3239e-01
#i = 4,x1 = 26.1749, error = 8.3630e-04
#i = 5,x1 = 26.1749, error = 1.3040e-08
#i = 6,x1 = 26.1749, error = 3.9080e-14
#k0 = 26.1748507877
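As noted above, SciPy ships ready-made fitting routines. A least-squares sketch using scipy.optimize.curve_fit on the same model (the seed and the starting guess p0 are choices of this example, not part of the original listing):

```python
import numpy as np
from scipy.optimize import curve_fit

# The model from the text: y = x/(k+x), with k the single free parameter.
def model(x, k):
    return x / (k + x)

# Regenerate the noisy test data (seeded here so the run is repeatable).
rng = np.random.default_rng(0)
x = np.arange(101)
y = x / (25 + x) + 0.2 * rng.random(len(x)) - 0.1

# curve_fit performs a least-squares fit and returns the best-fit
# parameters together with their covariance matrix.
popt, pcov = curve_fit(model, x, y, p0=[10.0])
print("k =", popt[0])
```

Unlike the root-of-residual-sum approach above, curve_fit minimizes the sum of squared residuals, so the two estimates of k will generally differ slightly.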