How to make scipy newton_krylov use a different derivative approximation method

Thread starter: Ultrinik (Guest)
After reading the documentation, it seems that it uses forward differences as its approximation method, but I can't see any direct way to make it use another method or a custom one.
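For reference, the forward-difference approximation the documentation refers to estimates the directional derivative J(x) v ≈ (F(x + ω v) − F(x)) / ω. A minimal standalone sketch (not SciPy's internal code; the step scaling is simplified, whereas SciPy scales it via `rdiff`):

```python
import numpy as np

def forward_diff_jv(F, x, v, omega=1e-7):
    # Forward-difference approximation of the directional derivative:
    # J(x) @ v ~= (F(x + omega*v) - F(x)) / omega
    return (F(x + omega * v) - F(x)) / omega

# Check against a function with a known Jacobian: F(x) = A @ x, so J = A.
rng = np.random.default_rng(0)
A = rng.uniform(-1, 1, size=(4, 4))
F = lambda x: A @ x
x = rng.uniform(-1, 1, size=4)
v = rng.uniform(-1, 1, size=4)

print(np.allclose(forward_diff_jv(F, x, v), A @ v, atol=1e-5))  # True
```

For a linear F the forward difference is exact up to floating-point roundoff, which makes it a convenient sanity check.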

Using the tools from the documentation, I tried to make it use a custom method, and wrote the following implementation to test whether the results were the same:

Code:
import numpy as np
from scipy.optimize import newton_krylov
from scipy.sparse.linalg import LinearOperator

# Function
def uniform_problem(x, A, b):
    return b - A@x

size = 12

A = np.random.uniform(-1, 1, size=(size, size))
b = np.random.uniform(-1, 1, size=(size, ))

xr = np.random.uniform(-1, 1, size=(size,))  # root
x0 = np.random.uniform(-1, 1, size=(size,))  # initial guess

F = lambda x: uniform_problem(x, A, b) - uniform_problem(xr, A, b)

# Arbitrary parameters
max_iter = 10
tol = 1e-3
h = 1e-4

repeats = 5000

Code:
def get_jacobian_vector_product_fdf(F, x, v, h=1e-5):
    # Forward-difference approximation of the Jacobian-vector product J(x) @ v
    step = h * v
    return (F(x + step) - F(x)) / h

error1 = 0
for i in range(repeats):
    x = x0.copy()

    lambdaJv = lambda v: get_jacobian_vector_product_fdf(F, x, v, h)
    linear_operator = LinearOperator((size, size), matvec=lambdaJv)

    solution1 = newton_krylov(F, x, method="gmres", inner_maxiter=max_iter, iter=max_iter, callback=None, f_tol=tol, rdiff=h, inner_M=linear_operator)
    error1 += np.linalg.norm(F(solution1))
error1 /= repeats
print(error1)  # approx 1.659173186802721

I compared it to the default one:

Code:
error2 = 0
for i in range(repeats):
    x = x0.copy()
    
    solution2 = newton_krylov(F, x, method="gmres", inner_maxiter=max_iter, iter=max_iter, callback=None, f_tol=tol, rdiff=h)
    error2 += np.linalg.norm(F(solution2))
error2 /= repeats
print(error2)  # approx 0.024629534404425796

Code:
print(error1/error2) # Orders of magnitude of difference

I expected to get the same results, but they are clearly different. I think I'm misunderstanding what these tools from the documentation actually do.
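One possible source of the discrepancy, worth double-checking against the docs: `inner_M` is documented as a preconditioner for the inner Krylov iteration, i.e. an approximation of the inverse Jacobian, not a replacement for the Jacobian-vector product itself. Since the test problem is linear, F(x) = (b − Ax) − (b − Axr), its Jacobian is exactly −A, so the custom matvec can at least be verified in isolation. A sketch reusing the question's setup (with a seeded generator added for reproducibility):

```python
import numpy as np

# Setup from the question: F(x) = uniform_problem(x) - uniform_problem(xr),
# which is linear in x with Jacobian J = -A.
rng = np.random.default_rng(1)
size = 12
A = rng.uniform(-1, 1, size=(size, size))
b = rng.uniform(-1, 1, size=(size,))
xr = rng.uniform(-1, 1, size=(size,))

def uniform_problem(x, A, b):
    return b - A @ x

F = lambda x: uniform_problem(x, A, b) - uniform_problem(xr, A, b)

def get_jacobian_vector_product_fdf(F, x, v, h=1e-5):
    step = h * v
    return (F(x + step) - F(x)) / h

x = rng.uniform(-1, 1, size=(size,))
v = rng.uniform(-1, 1, size=(size,))

# The custom matvec does approximate J @ v = -A @ v ...
print(np.allclose(get_jacobian_vector_product_fdf(F, x, v), -A @ v, atol=1e-4))  # True

# ... but passing it as inner_M asks GMRES to apply it as a preconditioner
# (an approximation of J^{-1}), which is a very different role.
```

So the operator itself is a correct Jacobian approximation; it is the parameter it is passed through that gives it a different meaning.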