Utilities for gradients and Hessians#

A number of utilities to help with the computation of gradients and Hessians.


template<typename Func>
sparsity_pattern pagmo::estimate_sparsity(Func f, const vector_double &x, double dx = 1e-8)#

Heuristic to estimate the sparsity pattern.

A numerical estimation of the sparsity pattern of some callable object is made by evaluating it around a given input point and detecting which components of the output change when the input is perturbed.

The callable function f must have the prototype:

vector_double f(const vector_double &)

otherwise compiler errors will be generated.

Note that estimate_sparsity may fail to detect the real sparsity pattern, as it only considers one variation around the input point. It is useful, though, in tests or in cases where it is not possible to write the sparsity pattern analytically, or where the user is confident the estimate will be correct.

Parameters
  • f – instance of the callable object.

  • x – decision vector around which the sparsity is estimated.

  • dx – to detect the sparsity, each component of the input decision vector x will be changed by \(\max(|x_i|, 1) \cdot dx\).

Throws

std::invalid_argument – if f returns fitness vectors of different sizes when perturbing x.

Returns

the sparsity_pattern of f as detected around x.
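A minimal usage sketch (assuming the utilities are provided by the pagmo/utils/gradients_and_hessians.hpp header; the callable and the expected pattern below are illustrative only):

#include <iostream>

#include <pagmo/types.hpp>
#include <pagmo/utils/gradients_and_hessians.hpp>

using namespace pagmo;

int main()
{
    // Callable with two fitness components: the first depends on both
    // inputs, the second only on x[1].
    auto f = [](const vector_double &x) -> vector_double {
        return {x[0] + x[1], x[1] * x[1]};
    };

    // Estimate the sparsity pattern around the point (1, 1).
    sparsity_pattern sp = estimate_sparsity(f, {1., 1.});

    // The heuristic should report the pairs (0,0), (0,1) and (1,1);
    // (1,0) is absent because perturbing x[0] leaves the second
    // output component unchanged.
    for (const auto &p : sp) {
        std::cout << "(" << p.first << ", " << p.second << ")\n";
    }
}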


template<typename Func>
vector_double pagmo::estimate_gradient(Func f, const vector_double &x, double dx = 1e-8)#

Numerical computation of the gradient (low-order)

The gradient of some callable function is estimated numerically around a given input point.

The callable function f must have the prototype:

vector_double f(const vector_double &)

otherwise compiler errors will be generated. The gradient returned will be dense and contain, in the lexicographic order requested by pagmo::problem::gradient(), \(\frac{df_i}{dx_j}\).

The numerical approximation of each derivative is made by central difference, according to the formula:

\[ \frac{df}{dx} \approx \frac{f(x+dx) - f(x-dx)}{2dx} + O(dx^2) \]

The overall cost, in terms of calls to f, will thus be \(2n\), where \(n\) is the size of x.
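As an illustrative sketch of the scheme (not pagmo's actual implementation), a single entry \(\frac{df_i}{dx_j}\) could be computed as follows, with the step scaled by \(\max(|x_j|, 1)\) as described below for the dx parameter:

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Illustrative only: low-order central difference of the i-th output of f
// with respect to the j-th input, with the step scaled as max(|x[j]|, 1) * dx.
double central_diff(const std::function<std::vector<double>(const std::vector<double> &)> &f,
                    std::vector<double> x, std::size_t i, std::size_t j, double dx = 1e-8)
{
    const double h = std::max(std::abs(x[j]), 1.) * dx;
    x[j] += h;
    const double fp = f(x)[i]; // f_i(x + h e_j)
    x[j] -= 2. * h;
    const double fm = f(x)[i]; // f_i(x - h e_j)
    return (fp - fm) / (2. * h);
}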

Note

The gradient returned is dense: elements equal to zero are not excluded.

Parameters
  • f – instance of the callable object.

  • x – decision vector around which the gradient is estimated.

  • dx – to compute the numerical derivative, each component of the input decision vector x will be varied by \(\max(|x_i|,1) \cdot dx\).

Throws

std::invalid_argument – if f returns vectors of different sizes when perturbing x.

Returns

the gradient of f approximated around x in the format required by pagmo::problem::gradient().
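A typical use is to implement the gradient() method of a user-defined problem via finite differences. A hedged sketch (the UDP below is hypothetical, and the header path is assumed to be pagmo/utils/gradients_and_hessians.hpp):

#include <cmath>
#include <utility>

#include <pagmo/types.hpp>
#include <pagmo/utils/gradients_and_hessians.hpp>

using namespace pagmo;

// Hypothetical single-objective UDP: f(x) = x0^2 + exp(x1).
struct my_udp {
    vector_double fitness(const vector_double &x) const
    {
        return {x[0] * x[0] + std::exp(x[1])};
    }
    std::pair<vector_double, vector_double> get_bounds() const
    {
        return {{-1., -1.}, {1., 1.}};
    }
    // Provide the gradient numerically by forwarding the fitness to estimate_gradient.
    vector_double gradient(const vector_double &x) const
    {
        return estimate_gradient([this](const vector_double &v) { return fitness(v); }, x);
    }
};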


template<typename Func>
vector_double pagmo::estimate_gradient_h(Func f, const vector_double &x, double dx = 1e-2)#

Numerical computation of the gradient (high-order)

The gradient of some callable function is estimated numerically around a given input point.

The callable function f must have the prototype:

vector_double f(const vector_double &)

otherwise compiler errors will be generated. The gradient returned will be dense and contain, in the lexicographic order requested by pagmo::problem::gradient(), \(\frac{df_i}{dx_j}\).

The numerical approximation of each derivative is made by central difference, according to the formula:

\[ \frac{df}{dx} \approx \frac 32 m_1 - \frac 35 m_2 +\frac 1{10} m_3 + O(dx^6) \]

where:

\[ m_i = \frac{f(x + i dx) - f(x-i dx)}{2i dx} \]

The overall cost, in terms of calls to f, will thus be \(6n\), where \(n\) is the size of x.
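As an illustrative sketch of the combination above (again, not pagmo's actual implementation), a single entry of the gradient could be computed as:

#include <algorithm>
#include <cmath>
#include <cstddef>
#include <functional>
#include <vector>

// Illustrative only: high-order central difference of the i-th output of f
// with respect to the j-th input, combining m_1, m_2 and m_3 with the
// weights 3/2, -3/5 and 1/10 given above.
double central_diff_h(const std::function<std::vector<double>(const std::vector<double> &)> &f,
                      const std::vector<double> &x, std::size_t i, std::size_t j, double dx = 1e-2)
{
    const double h = std::max(std::abs(x[j]), 1.) * dx;
    auto m = [&](double k) {
        std::vector<double> xp = x, xm = x;
        xp[j] += k * h;
        xm[j] -= k * h;
        return (f(xp)[i] - f(xm)[i]) / (2. * k * h);
    };
    return 1.5 * m(1) - 0.6 * m(2) + 0.1 * m(3);
}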

Note

The gradient returned is dense: elements equal to zero are not excluded.

Parameters
  • f – instance of the callable object.

  • x – decision vector around which the gradient is estimated.

  • dx – to compute the numerical derivative, each component of the input decision vector x will be varied by \(\max(|x_i|,1) \cdot dx\).

Throws

std::invalid_argument – if f returns vectors of different sizes when perturbing x.

Returns

the gradient of f approximated around x in the format required by pagmo::problem::gradient().
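
A brief usage sketch comparing the high-order estimate with the analytic gradient (the callable is illustrative, and the header path is assumed as above):

#include <cmath>
#include <iostream>

#include <pagmo/types.hpp>
#include <pagmo/utils/gradients_and_hessians.hpp>

using namespace pagmo;

int main()
{
    // Single-objective callable: f(x) = sin(x0) + x0 * x1.
    auto f = [](const vector_double &x) -> vector_double {
        return {std::sin(x[0]) + x[0] * x[1]};
    };

    const vector_double x{0.3, 2.};

    // High-order numerical gradient: 6 calls to f per component of x.
    const vector_double g = estimate_gradient_h(f, x);

    // Analytic gradient for comparison: (cos(x0) + x1, x0).
    std::cout << g[0] << " vs " << std::cos(x[0]) + x[1] << "\n";
    std::cout << g[1] << " vs " << x[0] << "\n";
}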