pygmo.estimate_sparsity(callable, x, dx=1e-8)

Performs a numerical estimation of the sparsity pattern of some callable object by numerically computing it around the input point x and detecting the components that change.

The callable must accept an iterable as input and return an array-like object.

Note that estimate_sparsity may fail to detect the real sparsity, as it considers only a single variation around the input point. It is useful, though, in tests, or in cases where it is not possible to write the sparsity analytically, or where the user is confident the estimate will be correct.
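To illustrate both the detection logic described above and its limitation, here is a minimal plain-Python sketch (a hypothetical re-implementation for illustration only, not pygmo's actual code): each component of x is perturbed in turn by $$\max(|x_j|,1) dx$$ and the outputs that change are recorded.

```python
def estimate_sparsity_sketch(fun, x, dx=1e-8):
    # Perturb each component of x by max(|x_j|, 1) * dx and record
    # which outputs change (sketch of the detection logic above).
    f0 = list(fun(x))
    pattern = []
    for j in range(len(x)):
        xp = list(x)
        xp[j] += max(abs(x[j]), 1.0) * dx
        fj = list(fun(xp))
        pattern.extend((i, j) for i in range(len(f0)) if fj[i] != f0[i])
    return sorted(pattern)  # lexicographic (i, j) pairs

def my_fun(x):
    return [x[0] + x[3], x[2], x[1]]

print(estimate_sparsity_sketch(my_fun, [0.1] * 4))
# [(0, 0), (0, 3), (1, 2), (2, 1)]

# A single variation can miss dependencies: x[0]*x[1] is constant along
# both axes at the origin, so nothing is detected there.
print(estimate_sparsity_sketch(lambda x: [x[0] * x[1]], [0.0, 0.0]))
# []
```

The second call shows the caveat in practice: the product term has a genuine dependency on both variables, but a single perturbation at the origin leaves the output unchanged, so the estimated pattern is empty.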

Parameters
• callable (a callable object) – The function whose sparsity we want to estimate (typically a fitness).

• x (array-like object) – decision vector to use when testing for sparsity.

• dx (float) – To detect the sparsity, each component of x will be changed by $$\max(|x_i|,1) dx$$.

Raises
• unspecified – any exception thrown by the callable object when called on x.

• TypeError – if x cannot be converted to a vector of floats or callable is not callable.

Returns

the sparsity_pattern of callable detected around x

Return type

2D NumPy uint64 array

Examples

>>> import pygmo as pg
>>> def my_fun(x):
...     return [x[0]+x[3], x[2], x[1]]
>>> pg.estimate_sparsity(callable = my_fun, x = [0.1,0.1,0.1,0.1], dx = 1e-8)
array([[0, 0],
       [0, 3],
       [1, 2],
       [2, 1]], dtype=uint64)


pygmo.estimate_gradient(callable, x, dx=1e-8)

Performs a numerical estimation of the gradient of some callable object by numerically computing it around the input point x.

The callable must accept an iterable as input and return an array-like object. The gradient returned will be dense and contain, in the lexicographic order requested by gradient(), $$\frac{df_i}{dx_j}$$.

The numerical approximation of each derivative is made by central difference, according to the formula:

$\frac{df}{dx} \approx \frac{f(x+dx) - f(x-dx)}{2dx} + O(dx^2)$

The overall cost, in terms of calls to callable, will thus be $$2 n$$, where $$n$$ is the size of x.
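Assuming the step size $$\max(|x_j|,1) dx$$ from the dx parameter and the flattened lexicographic layout described above, the central-difference rule can be sketched in plain Python (a hypothetical illustration, not pygmo's implementation):

```python
def estimate_gradient_sketch(fun, x, dx=1e-8):
    # Central difference per component: df_i/dx_j ~ (f(x+h) - f(x-h)) / (2h),
    # with h = max(|x_j|, 1) * dx.  Result is flattened in (i, j) order.
    n = len(x)
    m = len(fun(x))                 # one extra call, just to size the output
    grad = [0.0] * (m * n)
    for j in range(n):
        h = max(abs(x[j]), 1.0) * dx
        xp, xm = list(x), list(x)
        xp[j] += h
        xm[j] -= h
        fp, fm = fun(xp), fun(xm)   # 2 calls per component -> 2n in total
        for i in range(m):
            grad[i * n + j] = (fp[i] - fm[i]) / (2 * h)
    return grad

def my_fun(x):
    return [x[0] + x[3], x[2], x[1]]

print(estimate_gradient_sketch(my_fun, [0] * 4))
# [1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0, 0.0]
```

Since my_fun is linear, the central difference is exact here and the flat layout matches the documented example: entry i*n + j holds $$\frac{df_i}{dx_j}$$.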

Parameters
• callable (a callable object) – The function whose gradient we want to estimate (typically a fitness).

• x (array-like object) – decision vector around which the gradient is computed.

• dx (float) – Each component of x will be varied by $$\max(|x_i|,1) dx$$.

Raises
• unspecified – any exception thrown by the callable object when called on x.

• TypeError – if x cannot be converted to a vector of floats or callable is not callable.

Returns

the dense gradient of callable computed around x

Return type

1D NumPy float array

Examples

>>> import pygmo as pg
>>> def my_fun(x):
...     return [x[0]+x[3], x[2], x[1]]
>>> pg.estimate_gradient(callable = my_fun, x = [0]*4, dx = 1e-8)
array([1., 0., 0., 1., 0., 0., 1., 0., 0., 1., 0., 0.])


pygmo.estimate_gradient_h(callable, x, dx=1e-2)

Performs a numerical estimation of the gradient of some callable object by numerically computing it around the input point x, using a higher-order central-difference scheme.

The callable must accept an iterable as input and return an array-like object. The gradient returned will be dense and contain, in the lexicographic order requested by gradient(), $$\frac{df_i}{dx_j}$$.

The numerical approximation of each derivative is made by central difference, according to the formula:

$\frac{df}{dx} \approx \frac{3}{2} m_1 - \frac{3}{5} m_2 + \frac{1}{10} m_3 + O(dx^6)$

where:

$m_i = \frac{f(x + i dx) - f(x-i dx)}{2i dx}$

The overall cost, in terms of calls to callable, will thus be $$6 n$$, where $$n$$ is the size of x.
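Under the same assumptions as before (step size $$\max(|x_j|,1) dx$$, flattened lexicographic output), the higher-order rule above can be sketched as follows; this is a hypothetical illustration of the quoted formula, not pygmo's implementation:

```python
def estimate_gradient_h_sketch(fun, x, dx=1e-2):
    # Sixth-order rule: df/dx ~ 3/2 m1 - 3/5 m2 + 1/10 m3, where
    # m_k = (f(x + k*h) - f(x - k*h)) / (2*k*h) and h = max(|x_j|, 1) * dx.
    n = len(x)
    m = len(fun(x))
    weights = {1: 1.5, 2: -0.6, 3: 0.1}
    grad = [0.0] * (m * n)
    for j in range(n):
        h = max(abs(x[j]), 1.0) * dx
        for k, w in weights.items():    # 6 calls per component -> 6n in total
            xp, xm = list(x), list(x)
            xp[j] += k * h
            xm[j] -= k * h
            fp, fm = fun(xp), fun(xm)
            for i in range(m):
                grad[i * n + j] += w * (fp[i] - fm[i]) / (2 * k * h)
    return grad

def my_fun(x):
    return [x[0] + x[3], x[2], x[1]]

print(estimate_gradient_h_sketch(my_fun, [0] * 4))
```

For this linear test function each $$m_i$$ equals the true slope, and the weights sum to one, so the result matches the plain central-difference estimate despite the much larger default step.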

Parameters
• callable (a callable object) – The function whose gradient we want to estimate (typically a fitness).

• x (array-like object) – decision vector around which the gradient is computed.

• dx (float) – Each component of x will be varied by $$\max(|x_i|,1) dx$$.

Raises
• unspecified – any exception thrown by the callable object when called on x.

• TypeError – if x cannot be converted to a vector of floats or callable is not callable.

Returns

the dense gradient of callable computed around x

Return type

1D NumPy float array

Examples

>>> import pygmo as pg
>>> def my_fun(x):
...     return [x[0]+x[3], x[2], x[1]]
>>> pg.estimate_gradient_h(callable = my_fun, x = [0]*4, dx = 1e-2)
array([1., 0., 0., 1., 0., 0., 1., 0., 0., 1., 0., 0.])