# List of algorithms

## Algorithms implemented in Python

class pygmo.scipy_optimize(args=(), method: typing.Optional[str] = None, tol: typing.Optional[float] = None, callback: typing.Optional[typing.Callable[[typing.Any], typing.Any]] = None, options: typing.Optional[typing.MutableMapping[str, typing.Any]] = None, selection: pygmo.core.s_policy = select_best(rate=1))#

This class is a user defined algorithm (UDA) providing a wrapper around the function scipy.optimize.minimize().

This wraps several well-known local optimization algorithms:

• Nelder-Mead

• Powell

• CG

• BFGS

• Newton-CG

• L-BFGS-B

• TNC

• COBYLA

• SLSQP

• trust-constr

• dogleg

• trust-ncg

• trust-exact

• trust-krylov

These methods are mostly variants of gradient descent. Some of them require a gradient and will throw an error if invoked on a problem that does not provide one. Constraints are only supported by the COBYLA, SLSQP and trust-constr methods.

Example:

>>> import pygmo as pg
>>> prob = pg.problem(pg.rosenbrock(10))
>>> pop = pg.population(prob=prob, size=1, seed=0)
>>> pop.champion_f[0]
929975.7994682974
>>> scp = pg.algorithm(pg.scipy_optimize(method="L-BFGS-B"))
>>> result = scp.evolve(pop).champion_f
>>> result[0]
1.13770...
>>> pop.problem.get_fevals()
55
>>> pop.problem.get_gevals()
54


The constructor initializes a wrapper instance for a specific algorithm. Construction arguments are those options of scipy.optimize.minimize() that are not problem-specific. Problem-specific options, for example the bounds, constraints and the existence of a gradient and Hessian, are deduced from the problem in the population given to the evolve function.

Parameters
• args – optional - extra arguments for fitness callable

• method – optional - string specifying the method to be used by scipy. From scipy docs: “If not given, chosen to be one of BFGS, L-BFGS-B, SLSQP, depending if the problem has constraints or bounds.”

• tol – optional - tolerance for termination

• callback – optional - callable that is called in each iteration, independent from the fitness function

• options – optional - dict of solver-specific options

• selection – optional - s_policy to select candidate for local optimization

Raises

ValueError – If method is not one of Nelder-Mead, Powell, CG, BFGS, Newton-CG, L-BFGS-B, TNC, COBYLA, SLSQP, trust-constr, dogleg, trust-ncg, trust-exact, trust-krylov or None.

evolve(population)#

Call scipy.optimize.minimize with a random member of the population as start value.

The problem is extracted from the population and its fitness function gives the objective value for the optimization process.

Parameters

population – The population containing the problem and a set of initial solutions.

Returns

The changed population.

Raises
• ValueError – If the problem has constraints, but during construction a method was selected that cannot deal with them.

• ValueError – If the problem contains multiple objectives

• ValueError – If the problem is stochastic

• unspecified – any exception thrown by the member functions of the problem

get_name() str#

Returns the method name if one was selected, scipy.optimize.minimize otherwise

set_verbosity(level: int) None#

Modifies the ‘disp’ parameter in the options dict, which prints out a final convergence message.

Parameters

level – Every verbosity level above zero prints out a convergence message.

Raises

ValueError – If options dict was given in instance constructor and has options conflicting with verbosity level

## Algorithms exposed from C++

class pygmo.null_algorithm#

The null algorithm.

An algorithm used in the default-initialization of pygmo.algorithm and of the meta-algorithms.

class pygmo.gaco(gen=1, ker=63, q=1.0, oracle=0., acc=0.01, threshold=1, n_gen_mark=7, impstop=100000, evalstop=100000, focus=0., memory=False, seed=random)#

Extended Ant Colony Optimization algorithm (gaco).

Ant colony optimization is a class of optimization algorithms modeled on the actions of an ant colony. Artificial ‘ants’ (e.g. simulation agents) locate optimal solutions by moving through a parameter space representing all possible solutions. Real ants lay down pheromones directing each other to resources while exploring their environment. The simulated ‘ants’ similarly record their positions and the quality of their solutions, so that in later simulation iterations more ants locate better solutions.

In pygmo we propose a version of this algorithm called extended ACO, originally described by Schlueter et al. Extended ACO generates future generations of ants using a multi-kernel Gaussian distribution based on three parameters (i.e., pheromone values) which are computed depending on the quality of each previous solution. The solutions are ranked through an oracle penalty method.
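The sampling idea can be illustrated with a toy sketch (pure Python with hypothetical names; this is not pagmo's actual implementation): each new ant is drawn from a mixture of Gaussians centered on previously stored solutions, with better solutions carrying larger weights.

```python
import random

def sample_new_ant(kernel, weights, sigma):
    """Sample one new solution from a multi-kernel Gaussian: pick a
    stored solution with probability proportional to its weight,
    then perturb each coordinate with Gaussian noise."""
    mean = random.choices(kernel, weights=weights, k=1)[0]
    return [g + random.gauss(0.0, s) for g, s in zip(mean, sigma)]

random.seed(0)
kernel = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]   # stored solutions, best first
weights = [0.6, 0.3, 0.1]                       # better solutions weigh more
sigma = [0.1, 0.1]                              # per-coordinate spread
ant = sample_new_ant(kernel, weights, sigma)
print(ant)                                      # a new 2-dimensional candidate
```

In the real algorithm the weights and standard deviations are the pheromone values, updated from the quality of past solutions; here they are fixed constants for illustration.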

This algorithm can be applied to box-bounded single-objective, constrained and unconstrained optimization, with both continuous and integer variables.

Note

The ACO version implemented in PaGMO is an extension of Schlueter’s originally proposed extended ACO algorithm. The main difference between the implemented version and the original one lies in how two of the three pheromone values are computed (in particular, the weights and the standard deviations).

1. Schlueter, et al. (2009). Extended ant colony optimization for non-convex mixed integer non-linear programming. Computers & Operations Research.

Parameters
Raises
• OverflowError – if gen or seed are negative or greater than an implementation-defined value

• ValueError – if acc is not >=0, focus is not >=0 or q is not >=0; if threshold is not in [1, gen] when gen != 0 and memory is False; or if threshold is not >= 1 when gen != 0 and memory is True

See also the docs of the C++ class pagmo::gaco.

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve() and printed to screen. The log frequency depends on the verbosity parameter (by default nothing is logged) which can be set calling the method set_verbosity() on an algorithm constructed with a gaco. A verbosity of N implies a log line each N generations.

Returns

at each logged epoch, the values Gen, Fevals, Best, Kernel, Oracle, dx, dp

Return type

list

Examples

>>> import pygmo as pg
>>> prob = pg.problem(pg.rosenbrock(dim = 2))
>>> pop = pg.population(prob, size=13, seed=23)
>>> algo = pg.algorithm(pg.gaco(10, 13, 1.0, 1e9, 0.0, 1, 7, 100000, 100000, 0.0, False, 23))
>>> algo.set_verbosity(1)
>>> pop = algo.evolve(pop)
Gen:        Fevals:          Best:        Kernel:        Oracle:            dx:            dp:
1              0        179.464             13          1e+09        13.1007         649155
2             13        166.317             13        179.464        5.11695        15654.1
3             26        3.81781             13        166.317        5.40633        2299.95
4             39        3.81781             13        3.81781        2.11767        385.781
5             52        2.32543             13        3.81781        1.30415        174.982
6             65        2.32543             13        2.32543        4.58441         43.808
7             78        1.17205             13        2.32543        1.18585        21.6315
8             91        1.17205             13        1.17205       0.806727        12.0702
9            104        1.17205             13        1.17205       0.806727        12.0702
10            130       0.586187             13       0.586187       0.806727        12.0702
>>> uda = algo.extract(pg.gaco)
>>> uda.get_log()
[(1, 0, 179.464, 13, 1e+09, 13.1007, 649155), (2, 15, 166.317, 13, 179.464, ...


See also the docs of the relevant C++ method pagmo::gaco::get_log().

get_seed()#

This method will return the random seed used internally by this uda.

Returns

the random seed of the population

Return type

int

set_bfe(b)#

Set the batch function evaluation scheme.

This method will set the batch function evaluation scheme to be used for gaco.

Parameters

b (bfe) – the batch function evaluation object

Raises

unspecified – any exception thrown by the underlying C++ method

class pygmo.maco(gen=1, ker=63, q=1.0, threshold=1, n_gen_mark=7, evalstop=100000, focus=0., memory=False, seed=random)#

Multi-objective Ant Colony Optimizer (MACO).

Parameters
Raises
• OverflowError – if gen or seed are negative or greater than an implementation-defined value

• ValueError – if focus is < 0, threshold is not in [0, gen] when gen is > 0 and memory is False, or if threshold is not >= 1 when gen is > 0 and memory is True

See also the docs of the C++ class pagmo::maco.

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve() and printed to screen. The log frequency depends on the verbosity parameter (by default nothing is logged) which can be set calling the method set_verbosity() on an algorithm constructed with a maco. A verbosity of N implies a log line each N generations.

Returns

at each logged epoch, the values Gen, Fevals, ideal_point, where:

• Gen (int), generation number

• Fevals (int), number of functions evaluation made

• ideal_point (1D numpy array), the ideal point of the current population (cropped to max 5 dimensions only in the screen output)

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(maco(gen=100))
>>> algo.set_verbosity(20)
>>> pop = population(zdt(1), 63)
>>> pop = algo.evolve(pop)
Gen:        Fevals:        ideal1:        ideal2:
1              0      0.0422249        2.72416
21           1260    0.000622664        1.27304
41           2520    0.000100557       0.542994
61           3780    8.06766e-06       0.290677
81           5040    8.06766e-06       0.290677
>>> uda = algo.extract(maco)
>>> uda.get_log()
[(1, 0, array([0.04222492, 2.72415949])), (21, 1260, array([6.22663991e-04, ...


See also the docs of the relevant C++ method pagmo::maco::get_log().

get_seed()#

This method will return the random seed used internally by this uda.

Returns

the random seed of the population

Return type

int

set_bfe(b)#

Set the batch function evaluation scheme.

This method will set the batch function evaluation scheme to be used for maco.

Parameters

b (bfe) – the batch function evaluation object

Raises

unspecified – any exception thrown by the underlying C++ method

class pygmo.gwo(gen=1, seed=random)#

Grey Wolf Optimizer (gwo).

Grey Wolf Optimizer is an optimization algorithm based on the leadership hierarchy and hunting mechanism of grey wolves, proposed by Seyedali Mirjalili, Seyed Mohammad Mirjalili and Andrew Lewis in 2014.

This algorithm is a classic example of a highly criticizable line of search that led, in the first decades of our millennium, to the development of an entire zoo of metaphors inspiring optimization heuristics. In our opinion they, as is the case for the grey wolf optimizer, are often but small variations of already existing heuristics rebranded with unnecessary and convoluted biological metaphors. In the case of GWO this is particularly evident as the position update rule is shockingly trivial and can also easily be seen as a product of an evolutionary metaphor or a particle swarm one. Such an update rule is also not particularly effective and results in rather poor performance most of the time. Reading the original peer-reviewed paper, where the poor algorithmic performance is hidden by the methodological flaws of the benchmark presented, one is left with a bitter opinion of the whole peer-review system.

This algorithm can be applied to box-bounded single-objective, constrained and unconstrained optimization, with continuous variables.

Parameters
Raises
• OverflowError – if gen or seed are negative or greater than an implementation-defined value

• ValueError – if gen is not >=3

See also the docs of the C++ class pagmo::gwo.

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve() and printed to screen. The log frequency depends on the verbosity parameter (by default nothing is logged) which can be set calling the method set_verbosity() on an algorithm constructed with a gwo. A verbosity of N implies a log line each N generations.

Returns

at each logged epoch, the values Gen, Alpha, Beta, Delta

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(gwo(gen=10))
>>> algo.set_verbosity(2)
>>> prob = problem(rosenbrock(dim=2))
>>> pop = population(prob, size=13, seed=23)
>>> pop = algo.evolve(pop)
Gen:         Alpha:          Beta:         Delta:
1        179.464        3502.82        3964.75
3        6.82024        30.2149        61.1906
5       0.321879        2.39373        3.46188
7       0.134441       0.342357       0.439651
9       0.100281       0.211849       0.297448
>>> uda = algo.extract(gwo)
>>> uda.get_log()
[(1, 179.46420983829944, 3502.8158822203472, 3964.7542658046486), ...


See also the docs of the relevant C++ method pagmo::gwo::get_log().

get_seed()#

This method will return the random seed used internally by this uda.

Returns

the random seed of the population

Return type

int

class pygmo.bee_colony(gen=1, limit=1, seed=random)#

Artificial Bee Colony.

Parameters
Raises
• OverflowError – if gen, limit or seed is negative or greater than an implementation-defined value

• ValueError – if limit is not greater than 0

See also the docs of the C++ class pagmo::bee_colony.

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve(). The log frequency depends on the verbosity parameter (by default nothing is logged) which can be set calling the method set_verbosity() on an algorithm constructed with a bee_colony. A verbosity of N implies a log line each N generations.

Returns

at each logged epoch, the values Gen, Fevals, Current best, Best

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(bee_colony(gen = 500, limit = 20))
>>> algo.set_verbosity(100)
>>> prob = problem(rosenbrock(10))
>>> pop = population(prob, 20)
>>> pop = algo.evolve(pop)
Gen:        Fevals:          Best: Current Best:
1             40         261363         261363
101           4040        112.237        267.969
201           8040        20.8885        265.122
301          12040        20.6076        20.6076
401          16040         18.252        140.079
>>> uda = algo.extract(bee_colony)
>>> uda.get_log()
[(1, 40, 183727.83934515435, 183727.83934515435), ...


See also the docs of the relevant C++ method pagmo::bee_colony::get_log().

get_seed()#

This method will return the random seed used internally by this uda.

Returns

the random seed of the population

Return type

int

class pygmo.de(gen=1, F=0.8, CR=0.9, variant=2, ftol=1e-6, xtol=1e-6, seed=random)#

Differential Evolution

Parameters
Raises
• OverflowError – if gen, variant or seed is negative or greater than an implementation-defined value

• ValueError – if F, CR are not in [0,1] or variant is not in [1, 10]

The following variants (mutation variants) are available to create a new candidate individual:

• 1 - best/1/exp

• 2 - rand/1/exp

• 3 - rand-to-best/1/exp

• 4 - best/2/exp

• 5 - rand/2/exp

• 6 - best/1/bin

• 7 - rand/1/bin

• 8 - rand-to-best/1/bin

• 9 - best/2/bin

• 10 - rand/2/bin
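As an illustration of how one such variant combines mutation and crossover, here is a minimal pure-Python sketch of variant 7 (rand/1/bin); names and structure are illustrative, not pagmo's implementation:

```python
import random

def rand_1_bin(pop, i, F, CR):
    """DE sketch for variant 7 (rand/1/bin): build a mutant as
    x_r1 + F * (x_r2 - x_r3) from three distinct random individuals,
    then mix mutant genes into the target with probability CR
    (binomial crossover), forcing at least one mutant gene."""
    r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    mutant = [a + F * (b - c) for a, b, c in zip(pop[r1], pop[r2], pop[r3])]
    j_rand = random.randrange(len(pop[i]))  # guaranteed mutant gene
    return [m if (random.random() < CR or j == j_rand) else t
            for j, (t, m) in enumerate(zip(pop[i], mutant))]

random.seed(42)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(6)]
trial = rand_1_bin(pop, 0, F=0.8, CR=0.9)
print(trial)  # trial vector to compare against pop[0]
```

In the full algorithm the trial vector replaces the target individual only if its fitness is better; the "exp" variants differ in using an exponential (contiguous) crossover instead of the binomial one.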

See also the docs of the C++ class pagmo::de.

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve(). The log frequency depends on the verbosity parameter (by default nothing is logged) which can be set calling the method set_verbosity() on an algorithm constructed with a de. A verbosity of N implies a log line each N generations.

Returns

at each logged epoch, the values Gen, Fevals, Best, dx, df, where:

• Gen (int), generation number

• Fevals (int), number of functions evaluation made

• Best (float), the best fitness function currently in the population

• dx (float), the norm of the distance to the population mean of the mutant vectors

• df (float), the population flatness evaluated as the distance between the fitness of the best and of the worst individual

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(de(gen = 500))
>>> algo.set_verbosity(100)
>>> prob = problem(rosenbrock(10))
>>> pop = population(prob, 20)
>>> pop = algo.evolve(pop)
Gen:        Fevals:          Best:            dx:            df:
1             20         162446        65.2891    1.78686e+06
101           2020        198.402         8.4454        572.161
201           4020        21.1155        2.60629        24.5152
301           6020        6.67069        0.51811        1.99744
401           8020        3.60022       0.583444       0.554511
Exit condition -- generations = 500
>>> uda = algo.extract(de)
>>> uda.get_log()
[(1, 20, 162446.0185265718, 65.28911664703388, 1786857.8926660626), ...


See also the docs of the relevant C++ method pagmo::de::get_log().

get_seed()#

This method will return the random seed used internally by this uda.

Returns

the random seed of the population

Return type

int

class pygmo.sea(gen=1, seed=random)#

(N+1)-ES simple evolutionary algorithm.

Parameters
• gen (int) – number of generations to consider (each generation will compute the objective function once)

• seed (int) – seed used by the internal random number generator

Raises
• OverflowError – if gen or seed are negative or greater than an implementation-defined value

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

See also the docs of the C++ class pagmo::sea.
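As a rough illustration of the (N+1)-ES idea, the following pure-Python sketch creates one mutated offspring per generation and reinserts it only if it improves on the worst individual. This is a simplified sketch under stated assumptions (uniform per-gene mutation, elitist reinsertion), not pagmo's exact operators:

```python
import random

def sea_sketch(fitness, bounds, pop, gens):
    """Minimal (N+1)-ES sketch: per generation, copy a random parent,
    mutate some genes uniformly within the bounds, and let the child
    replace the worst individual if it is better (elitist reinsertion)."""
    lo, hi = bounds
    for _ in range(gens):
        child = list(random.choice(pop))
        for j in range(len(child)):
            if random.random() < 1.0 / len(child):  # ~1 gene mutated on average
                child[j] = random.uniform(lo[j], hi[j])
        worst = max(range(len(pop)), key=lambda k: fitness(pop[k]))
        if fitness(child) < fitness(pop[worst]):
            pop[worst] = child
    return min(pop, key=fitness)

random.seed(1)
sphere = lambda x: sum(v * v for v in x)   # toy objective
bounds = ([-5.0] * 3, [5.0] * 3)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
initial = min(sphere(p) for p in pop)
best = sea_sketch(sphere, bounds, pop, gens=200)
print(initial, sphere(best))
```

Because reinsertion is elitist, the best fitness in the population can only improve (or stay the same) over the generations.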

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve() and printed to screen. The log frequency depends on the verbosity parameter (by default nothing is logged) which can be set calling the method set_verbosity() on an algorithm constructed with a sea. A verbosity larger than 1 will produce a log with one entry each verbosity fitness evaluations. A verbosity equal to 1 will produce a log with one entry at each improvement of the fitness.

Returns

at each logged epoch, the values Gen, Fevals, Best, Improvement, Mutations

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(sea(500))
>>> algo.set_verbosity(50)
>>> prob = problem(schwefel(dim = 20))
>>> pop = population(prob, 20)
>>> pop = algo.evolve(pop)
Gen:        Fevals:          Best:   Improvement:     Mutations:
1              1        6363.44        2890.49              2
1001           1001        1039.92       -562.407              3
2001           2001        358.966         -632.6              2
3001           3001         106.08       -995.927              3
4001           4001         83.391         -266.8              1
5001           5001        62.4994       -1018.38              3
6001           6001        39.2851       -732.695              2
7001           7001        37.2185       -518.847              1
8001           8001        20.9452        -450.75              1
9001           9001        17.9193       -270.679              1
>>> uda = algo.extract(sea)
>>> uda.get_log()
[(1, 1, 6363.442036625835, 2890.4854414320716, 2), (1001, 1001, ...


See also the docs of the relevant C++ method pagmo::sea::get_log().

get_seed()#

This method will return the random seed used internally by this uda.

Returns

the random seed of the population

Return type

int

class pygmo.sga(gen=1, cr=.90, eta_c=1., m=0.02, param_m=1., param_s=2, crossover='exponential', mutation='polynomial', selection='tournament', seed=random)#

A Simple Genetic Algorithm

New in version 2.2.

Approximately during the same decades in which Evolutionary Strategies (see sea) were studied, a different group led by John Holland, and later by his student David Goldberg, introduced and studied an algorithmic framework called “genetic algorithms” that was, essentially, leveraging the same idea but also introducing crossover as a genetic operator. This led to a few decades of confusion and discussion on what was an evolutionary strategy and what a genetic algorithm, and on whether crossover was a useful operator or mutation-only algorithms were to be preferred.

In pygmo we provide a rather classical implementation of a genetic algorithm, letting the user choose between selected crossover types, selection schemes and mutation types.

The pseudo code of our version is:

> Start from a population (pop) of dimension N
> while i < gen
> > Selection: create a new population (pop2) with N individuals selected from pop (with repetition allowed)
> > Crossover: create a new population (pop3) with N individuals obtained applying crossover to pop2
> > Mutation:  create a new population (pop4) with N individuals obtained applying mutation to pop3
> > Evaluate all new chromosomes in pop4
> > Reinsertion: set pop to contain the best N individuals taken from pop and pop4


The various blocks of pygmo genetic algorithm are listed below:

Selection: two selection methods are provided: tournament and truncated. Tournament selection works by selecting each offspring as the one having the minimal fitness in a random group of size param_s. Truncated selection, instead, works by selecting the best param_s chromosomes in the entire population over and over. We have deliberately not implemented the popular roulette wheel selection as we are of the opinion that such a scheme does not generalize well, being highly sensitive to fitness scaling.
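A minimal pure-Python sketch of the two selection schemes (illustrative only, not pygmo's implementation):

```python
import random

def tournament_select(pop, fitness, param_s):
    """Pick param_s random individuals and return the fittest one."""
    group = random.sample(pop, param_s)
    return min(group, key=fitness)

def truncated_select(pop, fitness, param_s, n):
    """Cycle over the best param_s individuals until n parents are chosen."""
    elite = sorted(pop, key=fitness)[:param_s]
    return [elite[i % param_s] for i in range(n)]

random.seed(3)
fitness = lambda x: x * x           # toy 1-D objective
pop = [random.uniform(-10, 10) for _ in range(8)]
parent = tournament_select(pop, fitness, param_s=2)
parents = truncated_select(pop, fitness, param_s=2, n=8)
print(parent, parents[:2])
```

Note how truncated selection produces only param_s distinct parents, repeated as needed, while tournament selection lets weaker individuals through occasionally, preserving diversity.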

Crossover: four different crossover schemes are provided: single, exponential, binomial and sbx. The single-point crossover works by selecting a random point in the parent chromosome and, with probability cr, inserting the partner chromosome thereafter. The exponential crossover is taken from the differential evolution algorithm, implemented in pygmo as de. It essentially selects a random point in the parent chromosome and inserts, in each successive gene, the partner values with probability cr until it stops. The binomial crossover inserts each gene from the partner with probability cr. The simulated binary crossover (called sbx) is taken from the NSGA-II algorithm, implemented in pygmo as nsga2, and makes use of an additional parameter called the distribution index eta_c.
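The exponential and binomial schemes can be sketched as follows (illustrative pure Python, not pygmo's implementation):

```python
import random

def exponential_crossover(parent, partner, cr):
    """Start at a random gene and copy consecutive partner genes
    (wrapping around) while random() < cr."""
    child = list(parent)
    n = len(parent)
    j = random.randrange(n)
    for _ in range(n):
        child[j] = partner[j]       # at least one gene is always copied
        j = (j + 1) % n
        if random.random() >= cr:
            break
    return child

def binomial_crossover(parent, partner, cr):
    """Each gene independently comes from the partner with probability cr."""
    return [q if random.random() < cr else p for p, q in zip(parent, partner)]

random.seed(7)
p1, p2 = [0, 0, 0, 0, 0], [1, 1, 1, 1, 1]
e_child = exponential_crossover(p1, p2, cr=0.9)
b_child = binomial_crossover(p1, p2, cr=0.5)
print(e_child, b_child)
```

With 0/1 parents the difference is easy to see: the exponential child carries a contiguous (cyclically wrapping) run of partner genes, while the binomial child mixes them independently gene by gene.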

Mutation: three different mutation schemes are provided: uniform, gaussian and polynomial. Uniform mutation simply resamples genes at random from the bounds. Gaussian mutation samples around each gene using a normal distribution with standard deviation proportional to param_m and the bounds width. The last scheme is the polynomial mutation from Deb.
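A sketch of the uniform and gaussian schemes (illustrative pure Python with bound clipping added for the gaussian case; not pygmo's exact implementation):

```python
import random

def uniform_mutation(x, lo, hi, m):
    """Each gene is resampled uniformly within the bounds with probability m."""
    return [random.uniform(l, h) if random.random() < m else g
            for g, l, h in zip(x, lo, hi)]

def gaussian_mutation(x, lo, hi, m, param_m):
    """Each gene is perturbed, with probability m, by Gaussian noise whose
    width is param_m times the width of the bounds; clipped to stay feasible."""
    out = []
    for g, l, h in zip(x, lo, hi):
        if random.random() < m:
            g = min(h, max(l, g + random.gauss(0.0, param_m * (h - l))))
        out.append(g)
    return out

random.seed(5)
x = [0.0, 0.0, 0.0]
lo, hi = [-1.0] * 3, [1.0] * 3
u_child = uniform_mutation(x, lo, hi, m=0.5)
g_child = gaussian_mutation(x, lo, hi, m=0.5, param_m=0.1)
print(u_child, g_child)
```

Uniform mutation forgets the current gene value entirely, while gaussian mutation stays in its neighborhood, which is why param_m acts as a step-size control for the latter.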

Reinsertion: the only reinsertion strategy provided is what we call pure elitism. After each generation all parents and children are put in the same pool and only the best are passed to the next generation.

Parameters
• gen (int) – number of generations.

• cr (float) – crossover probability.

• eta_c (float) – distribution index for sbx crossover. This parameter is inactive if other types of crossover are selected.

• m (float) – mutation probability.

• param_m (float) – distribution index (polynomial mutation), gaussian width (gaussian mutation) or inactive (uniform mutation)

• param_s (int) – the number of best individuals to use in truncated selection or the size of the tournament in tournament selection.

• crossover (str) – the crossover strategy. One of exponential, binomial, single or sbx

• mutation (str) – the mutation strategy. One of gaussian, polynomial or uniform.

• selection (str) – the selection strategy. One of tournament or truncated.

• seed (int) – seed used by the internal random number generator

Raises
• OverflowError – if gen or seed are negative or greater than an implementation-defined value

• ValueError – if cr is not in [0,1], if eta_c is not in [1,100], if m is not in [0,1], if mutation is not one of gaussian, uniform or polynomial, if selection is not one of tournament or truncated, if crossover is not one of exponential, binomial, sbx or single, if param_m is not in [0,1] and mutation is not polynomial, or if param_m is not in [1,100] and mutation is polynomial

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

See also the docs of the C++ class pagmo::sga.

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve() and printed to screen. The log frequency depends on the verbosity parameter (by default nothing is logged) which can be set calling the method set_verbosity() on an algorithm constructed with a sga. A verbosity larger than 1 will produce a log with one entry each verbosity fitness evaluations. A verbosity equal to 1 will produce a log with one entry at each improvement of the fitness.

Returns

at each logged epoch, the values Gen, Fevals, Best, Improvement, where:

• Gen (int), generation number

• Fevals (int), number of functions evaluation made

• Best (float), the best fitness function found so far

• Improvement (float), improvement made by the last generation

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(sga(gen = 500))
>>> algo.set_verbosity(50)
>>> prob = problem(schwefel(dim = 20))
>>> pop = population(prob, 20)
>>> pop = algo.evolve(pop)
Gen:        Fevals:          Best:   Improvement:     Mutations:
1              1        6363.44        2890.49              2
1001           1001        1039.92       -562.407              3
2001           2001        358.966         -632.6              2
3001           3001         106.08       -995.927              3
4001           4001         83.391         -266.8              1
5001           5001        62.4994       -1018.38              3
6001           6001        39.2851       -732.695              2
7001           7001        37.2185       -518.847              1
8001           8001        20.9452        -450.75              1
9001           9001        17.9193       -270.679              1
>>> uda = algo.extract(sga)
>>> uda.get_log()
[(1, 1, 6363.442036625835, 2890.4854414320716, 2), (1001, 1001, ...


See also the docs of the relevant C++ method pagmo::sga::get_log().

get_seed()#

This method will return the random seed used internally by this uda.

Returns

the random seed of the population

Return type

int

class pygmo.sade(gen=1, variant=2, variant_adptv=1, ftol=1e-6, xtol=1e-6, memory=False, seed=random)#

Self-adaptive Differential Evolution (jDE and iDE).

Parameters
• gen (int) – number of generations

• variant (int) – mutation variant (default variant is 2: /rand/1/exp)

• variant_adptv (int) – F and CR parameter adaptation scheme to be used (one of 1..2)

• ftol (float) – stopping criterion on the f tolerance (default is 1e-6)

• xtol (float) – stopping criterion on the x tolerance (default is 1e-6)

• memory (bool) – when true the adapted parameters CR and F are not reset between successive calls to the evolve method

• seed (int) – seed used by the internal random number generator (default is random)

Raises
• OverflowError – if gen, variant, variant_adptv or seed is negative or greater than an implementation-defined value

• ValueError – if variant is not in [1,18] or variant_adptv is not in [0,1]

The following variants (mutation variants) are available to create a new candidate individual:

• 1 - best/1/exp

• 2 - rand/1/exp

• 3 - rand-to-best/1/exp

• 4 - best/2/exp

• 5 - rand/2/exp

• 6 - best/1/bin

• 7 - rand/1/bin

• 8 - rand-to-best/1/bin

• 9 - best/2/bin

• 10 - rand/2/bin

• 11 - rand/3/exp

• 12 - rand/3/bin

• 13 - best/3/exp

• 14 - best/3/bin

• 15 - rand-to-current/2/exp

• 16 - rand-to-current/2/bin

• 17 - rand-to-best-and-current/2/exp

• 18 - rand-to-best-and-current/2/bin

The following adaptation schemes are available:

• 1 - jDE

• 2 - iDE

See also the docs of the C++ class pagmo::sade.

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve(). The log frequency depends on the verbosity parameter (by default nothing is logged) which can be set calling the method set_verbosity() on an algorithm constructed with a sade. A verbosity of N implies a log line each N generations.

Returns

at each logged epoch, the values Gen, Fevals, Best, F, CR, dx, df, where:

• Gen (int), generation number

• Fevals (int), number of functions evaluation made

• Best (float), the best fitness function currently in the population

• F (float), the value of the adapted parameter F used to create the best so far

• CR (float), the value of the adapted parameter CR used to create the best so far

• dx (float), the norm of the distance to the population mean of the mutant vectors

• df (float), the population flatness evaluated as the distance between the fitness of the best and of the worst individual

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(sade(gen = 500))
>>> algo.set_verbosity(100)
>>> prob = problem(rosenbrock(10))
>>> pop = population(prob, 20)
>>> pop = algo.evolve(pop)
Gen:        Fevals:          Best:             F:            CR:            dx:            df:
1             20         297060       0.690031       0.294769        44.1494    2.30584e+06
101           2020        97.4258        0.58354       0.591527        13.3115        441.545
201           4020        8.79247         0.6678        0.53148        17.8822        121.676
301           6020        6.84774       0.494549        0.98105        12.2781        40.9626
401           8020         4.7861       0.428741       0.743813        12.2938        39.7791
Exit condition -- generations = 500
>>> uda = algo.extract(sade)
>>> uda.get_log()
[(1, 20, 297059.6296130389, 0.690031071850855, 0.29476914701127666, 44.14940516578547, 2305836.7422693395), ...


See also the docs of the relevant C++ method pagmo::sade::get_log().

get_seed()#

This method will return the random seed used internally by this uda.

Returns

the random seed of the population

Return type

int

class pygmo.de1220(gen=1, allowed_variants=[2, 3, 7, 10, 13, 14, 15, 16], variant_adptv=1, ftol=1e-6, xtol=1e-6, memory=False, seed=random)#

Self-adaptive Differential Evolution, pygmo flavour (pDE). The adaptation of the mutation variant is added to sade

Parameters
• gen (int) – number of generations

• allowed_variants (array-like object) – allowed mutation variants, each one being a number in [1, 18]

• variant_adptv (int) – F and CR parameter adaptation scheme to be used (one of 1..2)

• ftol (float) – stopping criterion on the f tolerance (default is 1e-6)

• xtol (float) – stopping criterion on the x tolerance (default is 1e-6)

• memory (bool) – when true the adapted parameters CR and F are not reset between successive calls to the evolve method

• seed (int) – seed used by the internal random number generator (default is random)

Raises
• OverflowError – if gen, variant_adptv or seed is negative or greater than an implementation-defined value

• ValueError – if any variant id in allowed_variants is not in [1,18] or variant_adptv is not in [0,1]

The following variants (mutation variants) can be put into allowed_variants:

• 1 - best/1/exp

• 2 - rand/1/exp

• 3 - rand-to-best/1/exp

• 4 - best/2/exp

• 5 - rand/2/exp

• 6 - best/1/bin

• 7 - rand/1/bin

• 8 - rand-to-best/1/bin

• 9 - best/2/bin

• 10 - rand/2/bin

• 11 - rand/3/exp

• 12 - rand/3/bin

• 13 - best/3/exp

• 14 - best/3/bin

• 15 - rand-to-current/2/exp

• 16 - rand-to-current/2/bin

• 17 - rand-to-best-and-current/2/exp

• 18 - rand-to-best-and-current/2/bin

The following adaptation schemes for the parameters F and CR are available:

• 1 - jDE

• 2 - iDE

See also the docs of the C++ class pagmo::de1220.

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve(). The log frequency depends on the verbosity parameter (by default nothing is logged) which can be set calling the method set_verbosity() on an algorithm constructed with a de1220. A verbosity of N implies a log line each N generations.

Returns

at each logged epoch, the values Gen, Fevals, Best, F, CR, Variant, dx, df, where:

• Gen (int), generation number

• Fevals (int), number of function evaluations made

• Best (float), the best fitness function currently in the population

• F (float), the value of the adapted parameter F used to create the best so far

• CR (float), the value of the adapted parameter CR used to create the best so far

• Variant (int), the mutation variant used to create the best so far

• dx (float), the norm of the distance to the population mean of the mutant vectors

• df (float), the population flatness evaluated as the distance between the fitness of the best and of the worst individual

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(de1220(gen = 500))
>>> algo.set_verbosity(100)
>>> prob = problem(rosenbrock(10))
>>> pop = population(prob, 20)
>>> pop = algo.evolve(pop)
Gen:        Fevals:          Best:             F:            CR:       Variant:            dx:            df:
1             20         285653        0.55135       0.441551             16        43.9719    2.02379e+06
101           2020        12.2721       0.127285      0.0792493             14        3.22986        106.764
201           4020        5.72927       0.148337       0.777806             14        2.72177        4.10793
301           6020        4.85084        0.12193       0.996191              3        2.95555        3.85027
401           8020        4.20638       0.235997       0.996259              3        3.60338        4.49432
Exit condition -- generations = 500
>>> uda = algo.extract(de1220)
>>> uda.get_log()
[(1, 20, 285652.7928977573, 0.551350234239449, 0.4415510963067054, 16, 43.97185788345982, 2023791.5123259544), ...


See also the docs of the relevant C++ method pagmo::de1220::get_log().

get_seed()#

This method will return the random seed used internally by this uda.

Returns

the random seed of the population

Return type

int

class pygmo.cmaes(gen=1, cc=-1, cs=-1, c1=-1, cmu=-1, sigma0=0.5, ftol=1e-6, xtol=1e-6, memory=False, force_bounds=False, seed=random)#

Covariance Matrix Adaptation Evolution Strategy (CMA-ES).

Parameters
• gen (int) – number of generations

• cc (float) – backward time horizon for the evolution path (by default is automatically assigned)

• cs (float) – makes partly up for the small variance loss in case the indicator is zero (by default is automatically assigned)

• c1 (float) – learning rate for the rank-one update of the covariance matrix (by default is automatically assigned)

• cmu (float) – learning rate for the rank-mu update of the covariance matrix (by default is automatically assigned)

• sigma0 (float) – initial step-size

• ftol (float) – stopping criterion on the f tolerance

• xtol (float) – stopping criterion on the x tolerance

• memory (bool) – when true the adapted parameters are not reset between successive calls to the evolve method

• force_bounds (bool) – when true the box bounds are enforced. The fitness will never be evaluated outside the bounds, but the covariance matrix adaptation mechanism will be degraded

• seed (int) – seed used by the internal random number generator (default is random)

Raises
• OverflowError – if gen is negative or greater than an implementation-defined value

• ValueError – if cc, cs, c1, cmu are not in [0,1] or -1

See also the docs of the C++ class pagmo::cmaes.
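
For intuition about the role of sigma0 and step-size adaptation, here is a deliberately simplified relative of CMA-ES: a (1+1)-ES with the classic 1/5th-success rule. This is not what pygmo.cmaes implements (CMA-ES additionally adapts a full covariance matrix and uses a population), but it illustrates why the initial step-size matters and how it is adapted online:

```python
import math
import random

def one_plus_one_es(f, x0, sigma0=0.5, iters=200, seed=0):
    """Toy (1+1)-ES with the 1/5th-success step-size rule.

    A single parent is perturbed by an isotropic Gaussian of scale
    sigma; sigma grows on success and shrinks on failure so that
    roughly 1/5 of the moves are accepted.
    """
    rng = random.Random(seed)
    x, sigma = list(x0), sigma0
    fx = f(x)
    for _ in range(iters):
        y = [xi + sigma * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        if fy <= fx:                 # success: accept and grow the step
            x, fx = y, fy
            sigma *= math.exp(1.0 / 3.0)
        else:                        # failure: shrink the step
            sigma *= math.exp(-1.0 / 12.0)
    return x, fx

sphere = lambda v: sum(c * c for c in v)
x, fx = one_plus_one_es(sphere, [3.0, -2.0])
assert fx < sphere([3.0, -2.0])
```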

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve(). The log frequency depends on the verbosity parameter (by default nothing is logged), which can be set by calling the method set_verbosity() on an algorithm constructed with a cmaes. A verbosity of N implies a log line every N generations.

Returns

at each logged epoch, the values Gen, Fevals, Best, dx, df, sigma, where:

• Gen (int), generation number

• Fevals (int), number of function evaluations made

• Best (float), the best fitness function currently in the population

• dx (float), the norm of the distance to the population mean of the mutant vectors

• df (float), the population flatness evaluated as the distance between the fitness of the best and of the worst individual

• sigma (float), the current step-size

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(cmaes(gen = 500))
>>> algo.set_verbosity(100)
>>> prob = problem(rosenbrock(10))
>>> pop = population(prob, 20)
>>> pop = algo.evolve(pop)
Gen:        Fevals:          Best:            dx:            df:         sigma:
1              0         173924        33.6872    3.06519e+06            0.5
101           2000        92.9612       0.583942        156.921      0.0382078
201           4000        8.79819       0.117574          5.101      0.0228353
301           6000        4.81377      0.0698366        1.34637      0.0297664
401           8000        1.04445      0.0568541       0.514459      0.0649836
Exit condition -- generations = 500
>>> uda = algo.extract(cmaes)
>>> uda.get_log()
[(1, 0, 173924.2840042722, 33.68717961390855, 3065192.3843070837, 0.5), ...


See also the docs of the relevant C++ method pagmo::cmaes::get_log().

get_seed()#

This method will return the random seed used internally by this uda.

Returns

the random seed of the population

Return type

int

class pygmo.moead(gen=1, weight_generation='grid', decomposition='tchebycheff', neighbours=20, CR=1, F=0.5, eta_m=20, realb=0.9, limit=2, preserve_diversity=True, seed=random)#

Multi-Objective Evolutionary Algorithm by Decomposition (the DE variant)

Parameters
• gen (int) – number of generations

• weight_generation (str) – method used to generate the weights, one of “grid”, “low discrepancy” or “random”

• decomposition (str) – method used to decompose the objectives, one of “tchebycheff”, “weighted” or “bi”

• neighbours (int) – size of the weight’s neighborhood

• CR (float) – crossover parameter in the Differential Evolution operator

• F (float) – parameter for the Differential Evolution operator

• eta_m (float) – distribution index used by the polynomial mutation

• realb (float) – chance that the neighbourhood is considered at each generation, rather than the whole population (only if preserve_diversity is true)

• limit (int) – maximum number of copies reinserted in the population (only if preserve_diversity is true)

• preserve_diversity (bool) – when true activates diversity preservation mechanisms

• seed (int) – seed used by the internal random number generator (default is random)

Raises
• OverflowError – if gen, neighbours, seed or limit are negative or greater than an implementation-defined value

• ValueError – if decomposition is not one of ‘tchebycheff’, ‘weighted’ or ‘bi’, if weight_generation is not one of ‘random’, ‘low discrepancy’ or ‘grid’, if CR, F or realb are not in [0., 1.], if eta_m is negative, or if neighbours is not >= 2

See also the docs of the C++ class pagmo::moead.
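
The 'tchebycheff' decomposition named above scalarises an objective vector against a weight vector and the ideal point; minimising it for many different weights traces out the Pareto front. A minimal sketch (illustrative only; pygmo's internal handling may differ):

```python
def tchebycheff(f, w, z):
    """Tchebycheff decomposition of objective vector f with weight
    vector w and ideal point z: the worst weighted deviation from
    the ideal point across all objectives."""
    return max(wi * abs(fi - zi) for wi, fi, zi in zip(w, f, z))

# Under equal weights and ideal point (0, 0), the balanced point
# scores better (lower) than the lopsided one.
assert tchebycheff([1.0, 1.0], [0.5, 0.5], [0.0, 0.0]) < \
       tchebycheff([0.2, 2.0], [0.5, 0.5], [0.0, 0.0])
```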

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve(). The log frequency depends on the verbosity parameter (by default nothing is logged), which can be set by calling the method set_verbosity() on an algorithm constructed with a moead. A verbosity of N implies a log line every N generations.

Returns

at each logged epoch, the values Gen, Fevals, ADF, ideal_point, where:

• Gen (int), generation number

• Fevals (int), number of function evaluations made

• ADF (float), Average Decomposed Fitness, that is the average, across all decomposed problems, of the single-objective decomposed fitness along the corresponding direction

• ideal_point (array), the ideal point of the current population (cropped to max 5 dimensions only in the screen output)

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(moead(gen = 500))
>>> algo.set_verbosity(100)
>>> prob = problem(zdt())
>>> pop = population(prob, 40)
>>> pop = algo.evolve(pop)
Gen:        Fevals:           ADF:        ideal1:        ideal2:
1              0        32.5747     0.00190532        2.65685
101           4000        5.67751    2.56736e-09       0.468789
201           8000        5.38297    2.56736e-09      0.0855025
301          12000        5.05509    9.76581e-10      0.0574796
401          16000        5.13126    9.76581e-10      0.0242256
>>> uda = algo.extract(moead)
>>> uda.get_log()
[(1, 0, 32.574745630075874, array([  1.90532430e-03,   2.65684834e+00])), ...


See also the docs of the relevant C++ method pagmo::moead::get_log().

get_seed()#

This method will return the random seed used internally by this uda.

Returns

the random seed of the population

Return type

int

class pygmo.moead_gen(gen=1, weight_generation='grid', decomposition='tchebycheff', neighbours=20, CR=1, F=0.5, eta_m=20, realb=0.9, limit=2, preserve_diversity=True, seed=random)#

Multi-Objective Evolutionary Algorithm by Decomposition (the DE variant)

Parameters
• gen (int) – number of generations

• weight_generation (str) – method used to generate the weights, one of “grid”, “low discrepancy” or “random”

• decomposition (str) – method used to decompose the objectives, one of “tchebycheff”, “weighted” or “bi”

• neighbours (int) – size of the weight’s neighborhood

• CR (float) – crossover parameter in the Differential Evolution operator

• F (float) – parameter for the Differential Evolution operator

• eta_m (float) – distribution index used by the polynomial mutation

• realb (float) – chance that the neighbourhood is considered at each generation, rather than the whole population (only if preserve_diversity is true)

• limit (int) – maximum number of copies reinserted in the population (only if preserve_diversity is true)

• preserve_diversity (bool) – when true activates diversity preservation mechanisms

• seed (int) – seed used by the internal random number generator (default is random)

Raises
• OverflowError – if gen, neighbours, seed or limit are negative or greater than an implementation-defined value

• ValueError – if decomposition is not one of ‘tchebycheff’, ‘weighted’ or ‘bi’, if weight_generation is not one of ‘random’, ‘low discrepancy’ or ‘grid’, if CR, F or realb are not in [0., 1.], if eta_m is negative, or if neighbours is not >= 2

See also the docs of the C++ class pagmo::moead_gen.

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve(). The log frequency depends on the verbosity parameter (by default nothing is logged), which can be set by calling the method set_verbosity() on an algorithm constructed with a moead_gen. A verbosity of N implies a log line every N generations.

Returns

at each logged epoch, the values Gen, Fevals, ADF, ideal_point, where:

• Gen (int), generation number

• Fevals (int), number of function evaluations made

• ADF (float), Average Decomposed Fitness, that is the average, across all decomposed problems, of the single-objective decomposed fitness along the corresponding direction

• ideal_point (array), the ideal point of the current population (cropped to max 5 dimensions only in the screen output)

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(moead_gen(gen = 500))
>>> algo.set_verbosity(100)
>>> prob = problem(zdt())
>>> pop = population(prob, 40)
>>> pop = algo.evolve(pop)
Gen:        Fevals:           ADF:        ideal1:        ideal2:
1              0        32.5747     0.00190532        2.65685
101           4000        5.67751    2.56736e-09       0.468789
201           8000        5.38297    2.56736e-09      0.0855025
301          12000        5.05509    9.76581e-10      0.0574796
401          16000        5.13126    9.76581e-10      0.0242256
>>> uda = algo.extract(moead_gen)
>>> uda.get_log()
[(1, 0, 32.574745630075874, array([  1.90532430e-03,   2.65684834e+00])), ...


See also the docs of the relevant C++ method pagmo::moead_gen::get_log().

get_seed()#

This method will return the random seed used internally by this uda.

Returns

the random seed of the population

Return type

int

set_bfe(b)#

Set the batch function evaluation scheme.

This method will set the batch function evaluation scheme to be used for moead_gen.

Parameters

b (bfe) – the batch function evaluation object

Raises

unspecified – any exception thrown by the underlying C++ method

Compass Search

Parameters
Raises
• OverflowError – if max_fevals is negative or greater than an implementation-defined value

• ValueError – if start_range is not in (0, 1], if stop_range is not in (start_range, 1] or if reduction_coeff is not in (0,1)

See also the docs of the C++ class pagmo::compass_search.
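
The algorithm itself is easy to sketch: poll each coordinate direction by plus/minus the current range, move on the first improvement, and shrink the range by reduction_coeff when every poll fails. A minimal unconstrained version (pygmo's implementation additionally handles box bounds and expresses the range relative to the bounds width; the parameter names mirror the constructor but this is a sketch only):

```python
def compass_search(f, x, start_range=0.1, stop_range=1e-5, reduction_coeff=0.5):
    """Minimal compass-search sketch for an unconstrained problem."""
    r, fx = start_range, f(x)
    while r > stop_range:
        improved = False
        for i in range(len(x)):
            for step in (r, -r):            # poll +/- r along coordinate i
                y = list(x)
                y[i] += step
                fy = f(y)
                if fy < fx:                 # accept the first improvement
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if not improved:                    # no poll improved: shrink range
            r *= reduction_coeff
    return x, fx

x, fx = compass_search(lambda v: (v[0] - 1) ** 2 + v[1] ** 2, [0.0, 0.0])
assert fx < 1e-6
```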

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve() and printed to screen. The log frequency depends on the verbosity parameter (by default nothing is logged), which can be set by calling the method set_verbosity() on an algorithm constructed with a compass_search. A verbosity larger than 0 implies one log line at each improvement of the fitness or change in the search range.

Returns

at each logged epoch, the values Fevals, Best, Range, where:

• Fevals (int), number of function evaluations made

• Best (float), the best fitness function currently in the population

• Range (float), the range used to vary the chromosome (relative to the box bounds width)

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(compass_search(max_fevals = 500))
>>> algo.set_verbosity(1)
>>> prob = problem(hock_schittkowski_71())
>>> pop = population(prob, 1)
>>> pop = algo.evolve(pop)
Fevals:          Best:      Violated:    Viol. Norm:         Range:
4        110.785              1        2.40583            0.5
12        110.785              1        2.40583           0.25
20        110.785              1        2.40583          0.125
22        91.0454              1        1.01855          0.125
25        96.2795              1       0.229446          0.125
33        96.2795              1       0.229446         0.0625
41        96.2795              1       0.229446        0.03125
45         94.971              1       0.127929        0.03125
53         94.971              1       0.127929       0.015625
56        95.6252              1      0.0458521       0.015625
64        95.6252              1      0.0458521      0.0078125
68        95.2981              1      0.0410151      0.0078125
76        95.2981              1      0.0410151     0.00390625
79        95.4617              1     0.00117433     0.00390625
87        95.4617              1     0.00117433     0.00195312
95        95.4617              1     0.00117433    0.000976562
103        95.4617              1     0.00117433    0.000488281
111        95.4617              1     0.00117433    0.000244141
115        95.4515              0              0    0.000244141
123        95.4515              0              0     0.00012207
131        95.4515              0              0    6.10352e-05
139        95.4515              0              0    3.05176e-05
143        95.4502              0              0    3.05176e-05
151        95.4502              0              0    1.52588e-05
159        95.4502              0              0    7.62939e-06
Exit condition -- range: 7.62939e-06 <= 1e-05
>>> uda = algo.extract(compass_search)
>>> uda.get_log()
[(4, 110.785345345, 1, 2.405833534534, 0.5), (12, 110.785345345, 1, 2.405833534534, 0.25) ...


See also the docs of the relevant C++ method pagmo::compass_search::get_log().

property replacement#

Individual replacement policy.

This attribute represents the policy that is used in the evolve() method to select the individual that will be replaced by the optimised individual. The attribute can be either a string or an integer.

If the attribute is a string, it must be one of "best", "worst" and "random":

• "best" will select the best individual in the population,

• "worst" will select the worst individual in the population,

• "random" will randomly choose one individual in the population.

set_random_sr_seed() can be used to seed the random number generator used by the "random" policy.

If the attribute is an integer, it represents the index (in the population) of the individual that will be replaced by the optimised individual.

Returns

the individual replacement policy or index

Return type

int or str

Raises
• OverflowError – if the attribute is set to an integer which is negative or too large

• ValueError – if the attribute is set to an invalid string

• TypeError – if the attribute is set to a value of an invalid type

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

property selection#

Individual selection policy.

This attribute represents the policy that is used in the evolve() method to select the individual that will be optimised. The attribute can be either a string or an integer.

If the attribute is a string, it must be one of "best", "worst" and "random":

• "best" will select the best individual in the population,

• "worst" will select the worst individual in the population,

• "random" will randomly choose one individual in the population.

set_random_sr_seed() can be used to seed the random number generator used by the "random" policy.

If the attribute is an integer, it represents the index (in the population) of the individual that is selected for optimisation.

Returns

the individual selection policy or index

Return type

int or str

Raises
• OverflowError – if the attribute is set to an integer which is negative or too large

• ValueError – if the attribute is set to an invalid string

• TypeError – if the attribute is set to a value of an invalid type

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

set_random_sr_seed(seed)#

Set the seed for the "random" selection/replacement policies.

Parameters

seed (int) – the value that will be used to seed the random number generator used by the "random" selection/replacement policies (see selection and replacement)

Raises
• OverflowError – if the attribute is set to an integer which is negative or too large

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

Simulated Annealing (Corana’s version)

Parameters
Raises
• OverflowError – if n_T_adj, n_range_adj or bin_size are negative or greater than an implementation-defined value

• ValueError – if Ts is not in (0, inf), if Tf is not in (0, inf), if Tf > Ts or if start_range is not in (0,1]

• ValueError – if n_T_adj is not strictly positive or if n_range_adj is not strictly positive

See also the docs of the C++ class pagmo::simulated_annealing.
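
The core loop is a Metropolis acceptance rule under a decreasing temperature. A bare-bones sketch with Gaussian moves and geometric cooling; note that Corana's version, as implemented in pagmo, additionally adapts a per-component step range (start_range) and performs n_T_adj temperature adjustments, which this illustration omits:

```python
import math
import random

def simulated_annealing(f, x, Ts=10.0, Tf=1e-3, n_steps=2000, scale=0.5, seed=0):
    """Bare-bones simulated annealing: Gaussian proposal moves,
    Metropolis acceptance, geometric cooling from Ts down to Tf."""
    rng = random.Random(seed)
    cool = (Tf / Ts) ** (1.0 / n_steps)      # geometric cooling factor
    T, fx = Ts, f(x)
    best, fbest = list(x), fx
    for _ in range(n_steps):
        y = [xi + scale * rng.gauss(0.0, 1.0) for xi in x]
        fy = f(y)
        # downhill moves are always accepted; uphill moves with
        # Boltzmann probability exp(-(fy - fx) / T)
        if fy <= fx or rng.random() < math.exp((fx - fy) / T):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = list(x), fx
        T *= cool
    return best, fbest

best, fbest = simulated_annealing(lambda v: sum(c * c for c in v), [5.0, 5.0])
assert fbest < sum(c * c for c in [5.0, 5.0])
```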

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve() and printed to screen. The log frequency depends on the verbosity parameter (by default nothing is logged), which can be set by calling the method set_verbosity() on an algorithm constructed with a simulated_annealing. A verbosity larger than 0 will produce a log with one entry every verbosity fitness evaluations.

Returns

at each logged epoch, the values Fevals, Best, Current, Mean range, Temperature, where:

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(simulated_annealing(Ts=10., Tf=1e-5, n_T_adj = 100))
>>> algo.set_verbosity(5000)
>>> prob = problem(rosenbrock(dim = 10))
>>> pop = population(prob, 1)
>>> pop = algo.evolve(pop)
Fevals:          Best:       Current:    Mean range:   Temperature:
57           5937           5937           0.48             10
10033        9.50937        28.6775      0.0325519        2.51189
15033        7.87389        14.3951      0.0131132        1.25893
20033        7.87389        8.68616      0.0120491       0.630957
25033        2.90084        4.43344     0.00676893       0.316228
30033       0.963616        1.36471     0.00355931       0.158489
35033       0.265868        0.63457     0.00202753      0.0794328
40033        0.13894       0.383283     0.00172611      0.0398107
45033       0.108051       0.169876    0.000870499      0.0199526
50033      0.0391731      0.0895308     0.00084195           0.01
55033      0.0217027      0.0303561    0.000596116     0.00501187
60033     0.00670073     0.00914824    0.000342754     0.00251189
65033      0.0012298     0.00791511    0.000275182     0.00125893
70033     0.00112816     0.00396297    0.000192117    0.000630957
75033    0.000183055     0.00139717    0.000135137    0.000316228
80033    0.000174868     0.00192479    0.000109781    0.000158489
85033       7.83e-05    0.000494225    8.20723e-05    7.94328e-05
90033    5.35153e-05    0.000120148    5.76009e-05    3.98107e-05
95033    5.35153e-05    9.10958e-05    3.18624e-05    1.99526e-05
99933    2.34849e-05    8.72206e-05    2.59215e-05    1.14815e-05
>>> uda = algo.extract(simulated_annealing)
>>> uda.get_log()
[(57, 5936.999957947842, 5936.999957947842, 0.47999999999999987, 10.0), (10033, ...


See also the docs of the relevant C++ method pagmo::simulated_annealing::get_log().

get_seed()#

This method will return the random seed used internally by this uda.

Returns

the random seed of the population

Return type

int

property replacement#

Individual replacement policy.

This attribute represents the policy that is used in the evolve() method to select the individual that will be replaced by the optimised individual. The attribute can be either a string or an integer.

If the attribute is a string, it must be one of "best", "worst" and "random":

• "best" will select the best individual in the population,

• "worst" will select the worst individual in the population,

• "random" will randomly choose one individual in the population.

set_random_sr_seed() can be used to seed the random number generator used by the "random" policy.

If the attribute is an integer, it represents the index (in the population) of the individual that will be replaced by the optimised individual.

Returns

the individual replacement policy or index

Return type

int or str

Raises
• OverflowError – if the attribute is set to an integer which is negative or too large

• ValueError – if the attribute is set to an invalid string

• TypeError – if the attribute is set to a value of an invalid type

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

property selection#

Individual selection policy.

This attribute represents the policy that is used in the evolve() method to select the individual that will be optimised. The attribute can be either a string or an integer.

If the attribute is a string, it must be one of "best", "worst" and "random":

• "best" will select the best individual in the population,

• "worst" will select the worst individual in the population,

• "random" will randomly choose one individual in the population.

set_random_sr_seed() can be used to seed the random number generator used by the "random" policy.

If the attribute is an integer, it represents the index (in the population) of the individual that is selected for optimisation.

Returns

the individual selection policy or index

Return type

int or str

Raises
• OverflowError – if the attribute is set to an integer which is negative or too large

• ValueError – if the attribute is set to an invalid string

• TypeError – if the attribute is set to a value of an invalid type

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

set_random_sr_seed(seed)#

Set the seed for the "random" selection/replacement policies.

Parameters

seed (int) – the value that will be used to seed the random number generator used by the "random" selection/replacement policies (see selection and replacement)

Raises
• OverflowError – if the attribute is set to an integer which is negative or too large

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

class pygmo.pso(gen=1, omega=0.7298, eta1=2.05, eta2=2.05, max_vel=0.5, variant=5, neighb_type=2, neighb_param=4, memory=False, seed=random)#

Particle Swarm Optimization

Parameters
Raises
• OverflowError – if gen or seed is negative or greater than an implementation-defined value

• ValueError – if omega is not in the [0,1] interval, if eta1, eta2 are not in the [0,4] interval, if max_vel is not in (0,1]

• ValueError – if variant is not one of 1 .. 6, if neighb_type is not one of 1 .. 4 or if neighb_param is zero

The following variants can be selected via the variant parameter:

 1 - Canonical (with inertia weight)
 2 - Same social and cognitive rand.
 3 - Same rand. for all components
 4 - Only one rand.
 5 - Canonical (with constriction fact.)
 6 - Fully Informed (FIPS)

The following topologies are selected by neighb_type:

 1 - gbest
 2 - lbest
 3 - Von Neumann
 4 - Adaptive random

The topology determines (together with the neighb_param parameter) which particles need to be considered when computing the social component of the velocity update.
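
The velocity update at the heart of all variants combines inertia, a cognitive pull towards the particle's own best position, and a social pull towards the neighbourhood best, clamped by max_vel. A sketch of the canonical variant with inertia weight (variant 1); the parameter names mirror the constructor, but this is an illustration rather than pygmo's code:

```python
import random

def pso_velocity(v, x, pbest, gbest, omega=0.7298, eta1=2.05, eta2=2.05,
                 max_vel=0.5, width=1.0, rng=random.random):
    """One velocity update for a single particle (inertia-weight PSO).

    v, x: current velocity and position; pbest: the particle's own
    best position; gbest: the neighbourhood best.  Each component is
    clamped to max_vel times the box width.
    """
    vmax = max_vel * width
    new_v = []
    for vi, xi, pi, gi in zip(v, x, pbest, gbest):
        vi = (omega * vi
              + eta1 * rng() * (pi - xi)     # cognitive component
              + eta2 * rng() * (gi - xi))    # social component
        new_v.append(max(-vmax, min(vmax, vi)))   # clamp to max_vel
    return new_v

v = pso_velocity([0.1, -0.2], [0.5, 0.5], [0.4, 0.6], [0.0, 1.0])
assert all(abs(c) <= 0.5 for c in v)
```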

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve() and printed to screen. The log frequency depends on the verbosity parameter (by default nothing is logged), which can be set by calling the method set_verbosity() on an algorithm constructed with a pso. A verbosity of N implies a log line every N generations.

Returns

at each logged epoch, the values Gen, Fevals, gbest, Mean Vel., Mean lbest, Avg. Dist., where:

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(pso(gen = 500))
>>> algo.set_verbosity(50)
>>> prob = problem(rosenbrock(10))
>>> pop = population(prob, 20)
>>> pop = algo.evolve(pop)
Gen:        Fevals:         gbest:     Mean Vel.:    Mean lbest:    Avg. Dist.:
1             40        72473.3       0.173892         677427       0.281744
51           1040        135.867      0.0183806        748.001       0.065826
101           2040        12.6726     0.00291046        84.9531      0.0339452
151           3040         8.4405    0.000852588        33.5161      0.0191379
201           4040        7.56943    0.000778264         28.213     0.00789202
251           5040         6.8089     0.00435521        22.7988     0.00107112
301           6040         6.3692    0.000289725        17.3763     0.00325571
351           7040        6.09414    0.000187343        16.8875     0.00172307
401           8040        5.78415    0.000524536        16.5073     0.00234197
451           9040         5.4662     0.00018305        16.2339    0.000958182
>>> uda = algo.extract(pso)
>>> uda.get_log()
[(1,40,72473.32713790605,0.1738915144248373,677427.3504996448,0.2817443174278134), (51,1040,...


See also the docs of the relevant C++ method pagmo::pso::get_log().

get_seed()#

This method will return the random seed used internally by this uda.

Returns

the random seed of the population

Return type

int

class pygmo.pso_gen(gen=1, omega=0.7298, eta1=2.05, eta2=2.05, max_vel=0.5, variant=5, neighb_type=2, neighb_param=4, memory=False, seed=random)#

Particle Swarm Optimization (generational) is identical to pso, but updates the velocities of all particles before the new particle positions are computed (taking into consideration all updated particle velocities). Each particle is thus evaluated with the same seed within a generation, as opposed to standard PSO, which evaluates a single particle at a time. Consequently, the generational PSO algorithm is suited for stochastic optimization problems.

Parameters
Raises
• OverflowError – if gen or seed is negative or greater than an implementation-defined value

• ValueError – if omega is not in the [0,1] interval, if eta1, eta2 are not in the [0,4] interval, if max_vel is not in (0,1]

• ValueError – if variant is not one of 1 .. 6, if neighb_type is not one of 1 .. 4 or if neighb_param is zero

The following variants can be selected via the variant parameter:

 1 - Canonical (with inertia weight)
 2 - Same social and cognitive rand.
 3 - Same rand. for all components
 4 - Only one rand.
 5 - Canonical (with constriction fact.)
 6 - Fully Informed (FIPS)

The following topologies are selected by neighb_type:

 1 - gbest
 2 - lbest
 3 - Von Neumann
 4 - Adaptive random

The topology determines (together with the neighb_param parameter) which particles need to be considered when computing the social component of the velocity update.

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve() and printed to screen. The log frequency depends on the verbosity parameter (by default nothing is logged), which can be set by calling the method set_verbosity() on an algorithm constructed with a pso_gen. A verbosity of N implies a log line every N generations.

Returns

at each logged epoch, the values Gen, Fevals, gbest, Mean Vel., Mean lbest, Avg. Dist., where:

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(pso_gen(gen = 500))
>>> algo.set_verbosity(50)
>>> prob = problem(rosenbrock(10))
>>> pop = population(prob, 20)
>>> pop = algo.evolve(pop)
Gen:        Fevals:         gbest:     Mean Vel.:    Mean lbest:    Avg. Dist.:
1             40        72473.3       0.173892         677427       0.281744
51           1040        135.867      0.0183806        748.001       0.065826
101           2040        12.6726     0.00291046        84.9531      0.0339452
151           3040         8.4405    0.000852588        33.5161      0.0191379
201           4040        7.56943    0.000778264         28.213     0.00789202
251           5040         6.8089     0.00435521        22.7988     0.00107112
301           6040         6.3692    0.000289725        17.3763     0.00325571
351           7040        6.09414    0.000187343        16.8875     0.00172307
401           8040        5.78415    0.000524536        16.5073     0.00234197
451           9040         5.4662     0.00018305        16.2339    0.000958182
>>> uda = algo.extract(pso_gen)
>>> uda.get_log()
[(1,40,72473.32713790605,0.1738915144248373,677427.3504996448,0.2817443174278134), (51,1040,...


See also the docs of the relevant C++ method pagmo::pso_gen::get_log().

get_seed()#

This method will return the random seed used internally by this uda.

Returns

the random seed of the population

Return type

int

set_bfe(b)#

Set the batch function evaluation scheme.

This method will set the batch function evaluation scheme to be used for pso_gen.

Parameters

b (bfe) – the batch function evaluation object

Raises

unspecified – any exception thrown by the underlying C++ method

class pygmo.nsga2(gen=1, cr=0.95, eta_c=10., m=0.01, eta_m=50., seed=random)#

Non-dominated Sorting Genetic Algorithm (NSGA-II).

Parameters
Raises
• OverflowError – if gen or seed are negative or greater than an implementation-defined value

• ValueError – if either cr is not in [0, 1), eta_c is not in [0, 100), m is not in [0, 1], or eta_m is not in [0, 100)

See also the docs of the C++ class pagmo::nsga2.
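
One of NSGA-II's two ranking criteria, the crowding distance, is simple enough to sketch. Boundary points of a non-dominated front get infinite distance; interior points accumulate, per objective, the normalised gap between their two neighbours (illustrative only; pygmo's implementation lives in C++):

```python
def crowding_distance(front):
    """Crowding distance of each point in a non-dominated front,
    given as a list of objective vectors (minimisation)."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for k in range(m):
        # indices sorted by the k-th objective
        order = sorted(range(n), key=lambda i: front[i][k])
        lo, hi = front[order[0]][k], front[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")
        if hi > lo:
            for j in range(1, n - 1):
                gap = front[order[j + 1]][k] - front[order[j - 1]][k]
                dist[order[j]] += gap / (hi - lo)
    return dist

# The extremes of the front are protected; the middle point gets a
# finite distance used to break ties within the same Pareto rank.
d = crowding_distance([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
assert d[0] == d[2] == float("inf")
```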

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve() and printed to screen. The log frequency depends on the verbosity parameter (by default nothing is logged), which can be set by calling the method set_verbosity() on an algorithm constructed with an nsga2. A verbosity of N implies a log line every N generations.

Returns

at each logged epoch, the values Gen, Fevals, ideal_point, where:

• Gen (int), generation number

• Fevals (int), number of function evaluations made

• ideal_point (1D numpy array), the ideal point of the current population (cropped to max 5 dimensions only in the screen output)

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(nsga2(gen=100))
>>> algo.set_verbosity(20)
>>> pop = population(zdt(1), 40)
>>> pop = algo.evolve(pop)
Gen:        Fevals:        ideal1:        ideal2:
1              0      0.0033062        2.44966
21            800    0.000275601       0.893137
41           1600    3.15834e-05        0.44117
61           2400     2.3664e-05       0.206365
81           3200     2.3664e-05       0.133305
>>> uda = algo.extract(nsga2)
>>> uda.get_log()
[(1, 0, array([ 0.0033062 ,  2.44965599])), (21, 800, array([  2.75601086e-04 ...


See also the docs of the relevant C++ method pagmo::nsga2::get_log().

get_seed()#

This method will return the random seed used internally by this UDA.

Returns

the random seed of the population

Return type

int

set_bfe(b)#

Set the batch function evaluation scheme.

This method will set the batch function evaluation scheme to be used for nsga2.

Parameters

b (bfe) – the batch function evaluation object

Raises

unspecified – any exception thrown by the underlying C++ method

class pygmo.nspso(gen=1, omega=0.6, c1=0.01, c2=0.5, chi=0.5, v_coeff=0.5, leader_selection_range=2, diversity_mechanism='crowding distance', memory=False, seed=random)#

Non dominated Sorting Particle Swarm Optimization (NSPSO).

Parameters
Raises
• OverflowError – if gen or seed are negative or greater than an implementation-defined value

• ValueError – if omega < 0 or omega > 1, if c1 <= 0, c2 <= 0 or chi <= 0, if v_coeff <= 0 or v_coeff > 1, if leader_selection_range > 100, or if diversity_mechanism is not one of "crowding distance", "niche count" or "max min"

See also the docs of the C++ class pagmo::nspso.
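The default 'crowding distance' diversity mechanism can be illustrated with a small self-contained sketch. This is the textbook NSGA-II-style crowding measure, not pygmo's exact C++ implementation; all names are illustrative:

```python
def crowding_distance(fits):
    """Crowding distance of each point in a list of objective vectors:
    boundary points (per objective) get infinity, interior points
    accumulate the normalised side lengths of the cuboid spanned by
    their nearest neighbours along each objective."""
    n = len(fits)
    d = [0.0] * n
    for m in range(len(fits[0])):
        order = sorted(range(n), key=lambda i: fits[i][m])
        lo, hi = fits[order[0]][m], fits[order[-1]][m]
        d[order[0]] = d[order[-1]] = float("inf")
        if hi == lo:
            continue  # degenerate objective: all values equal
        for k in range(1, n - 1):
            d[order[k]] += (fits[order[k + 1]][m] - fits[order[k - 1]][m]) / (hi - lo)
    return d
```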

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve() and printed to screen. The log frequency depends on the verbosity parameter (by default nothing is logged), which can be set by calling the method set_verbosity() on an algorithm constructed with an nspso. A verbosity of N implies a log line every N generations.

Returns

at each logged epoch, the values Gen, Fevals, ideal_point, where:

• Gen (int), generation number

• Fevals (int), number of function evaluations made

• ideal_point (1D numpy array), the ideal point of the current population (cropped to max 5 dimensions only in the screen output)

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(nspso(gen=100))
>>> algo.set_verbosity(20)
>>> pop = population(zdt(1), 40)
>>> pop = algo.evolve(pop)
Gen:        Fevals:        ideal1:        ideal2:
1             40       0.019376        2.75209
21            840              0        1.97882
41           1640              0        1.88428
61           2440              0        1.88428
81           3240              0        1.88428
>>> uda = algo.extract(nspso)
>>> uda.get_log()
[(1, 40, array([0.04843319, 2.98129814])), (21, 840, array([0., 1.68331679])) ...


See also the docs of the relevant C++ method pagmo::nspso::get_log().

get_seed()#

This method will return the random seed used internally by this UDA.

Returns

the random seed of the population

Return type

int

set_bfe(b)#

Set the batch function evaluation scheme.

This method will set the batch function evaluation scheme to be used for nspso.

Parameters

b (bfe) – the batch function evaluation object

Raises

unspecified – any exception thrown by the underlying C++ method

class pygmo.mbh(algo=None, stop=5, perturb=0.01, seed=None)#

Monotonic Basin Hopping (generalized).

Monotonic basin hopping, or simply, basin hopping, is an algorithm rooted in the idea of mapping the objective function $$f(\mathbf x_0)$$ into the local minima found starting from $$\mathbf x_0$$. This simple idea allows a substantial increase of efficiency in solving problems, such as the Lennard-Jones cluster or the MGA-1DSM interplanetary trajectory problem, which are conjectured to have a so-called funnel structure.

In pygmo we provide an original generalization of this concept resulting in a meta-algorithm that operates on any pygmo.population using any suitable user-defined algorithm (UDA). When a population containing a single individual is used and coupled with a local optimizer, the original method is recovered. The pseudo code of our generalized version is:

> Select a pygmo population
> Select a UDA
> Store best individual
> while i < stop_criteria
> > Perturb the population in a selected neighbourhood
> > Evolve the population using the algorithm
> > if the best individual is improved
> > > update best individual
> > > i = 0
> > else
> > > increment i

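The generalized MBH loop can be sketched in a few lines of pure Python. The trial counter resets whenever the best individual improves and increments otherwise (as reflected by the Trial column in the example log further below); evolve and perturb stand in for the inner UDA and the neighbourhood perturbation, and are illustrative user-supplied callables, not the pygmo API:

```python
def basin_hop(evolve, perturb, x, f, stop=5):
    """Minimal sketch of the generalized MBH loop.

    evolve(x) -> (x_new, f_new) stands in for one run of the inner
    algorithm; perturb(x) -> x stands in for the neighbourhood
    perturbation. Stops after `stop` consecutive unsuccessful trials."""
    best_x, best_f = x, f
    i = 0
    while i < stop:
        cand_x, cand_f = evolve(perturb(best_x))
        if cand_f < best_f:           # best individual improved ...
            best_x, best_f = cand_x, cand_f
            i = 0                     # ... so the trial counter resets
        else:
            i += 1                    # unsuccessful trial
    return best_x, best_f
```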

pygmo.mbh is a user-defined algorithm (UDA) that can be used to construct pygmo.algorithm objects.

See: https://arxiv.org/pdf/cond-mat/9803344.pdf for the paper introducing the basin hopping idea for a Lennard-Jones cluster optimization.

See also the docs of the C++ class pagmo::mbh.

Parameters
Raises
• ValueError – if perturb (or one of its components, if perturb is an array) is not in the (0,1] range

• unspecified – any exception thrown by the constructor of pygmo.algorithm, or by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve(). The log frequency depends on the verbosity parameter (by default nothing is logged) which can be set calling set_verbosity() on an algorithm constructed with an mbh. A verbosity level N > 0 will log one line at the end of each call to the inner algorithm.

Returns

at each call of the inner algorithm, the values Fevals, Best, Violated, Viol. Norm and Trial, where:

• Fevals (int), the number of fitness evaluations made

• Best (float), the objective function value of the best individual currently in the population

• Violated (int), the number of constraints currently violated by the best solution

• Viol. Norm (float), the norm of the violation (already discounted by the constraints tolerance)

• Trial (int), the trial number (which determines when the algorithm stops)

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(mbh(algorithm(de(gen = 10))))
>>> algo.set_verbosity(3)
>>> prob = problem(cec2013(prob_id = 1, dim = 20))
>>> pop = population(prob, 20)
>>> pop = algo.evolve(pop)
Fevals:          Best:      Violated:    Viol. Norm:         Trial:
440        25162.3              0              0              0
880          14318              0              0              0
1320        11178.2              0              0              0
1760        6613.71              0              0              0
2200        6613.71              0              0              1
2640        6124.62              0              0              0
3080        6124.62              0              0              1


See also the docs of the relevant C++ method pagmo::mbh::get_log().

get_perturb()#

Get the perturbation vector.

Returns

the perturbation vector

Return type

1D NumPy float array

get_seed()#

Get the seed value that was used for the construction of this mbh.

Returns

the seed value

Return type

int

get_verbosity()#

Get the verbosity level value that was used for the construction of this mbh.

Returns

the verbosity level

Return type

int

property inner_algorithm#

Inner algorithm of the meta-algorithm.

This read-only property gives direct access to the algorithm stored within this meta-algorithm.

Returns

a reference to the inner algorithm

Return type

algorithm

set_perturb(perturb)#

Set the perturbation vector.

Parameters

perturb (array-like object) – the perturbation to be applied to each component

Raises

unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

class pygmo.cstrs_self_adaptive(iters=1, algo=None, seed=None)#

Self-adaptive constraints handling.

This meta-algorithm implements a constraint handling technique that allows the use of any user-defined algorithm (UDA) able to deal with single-objective unconstrained problems, on single-objective constrained problems. The technique self-adapts its parameters during each successive call to the inner UDA basing its decisions on the entire underlying population. The resulting approach is an alternative to using the meta-problem unconstrain to transform the constrained fitness into an unconstrained fitness.

The self-adaptive constraints handling meta-algorithm is largely based on the ideas of Farmani and Wright, but it extends their use to any algorithm, in particular to non-generational, population-based, evolutionary approaches where a steady-state reinsertion is used (i.e., as soon as an individual is found fit it is immediately reinserted into the population and will influence the next offspring's genetic material).

Each decision vector is assigned an infeasibility measure $$\iota$$ which accounts for the normalized violation of all the constraints (discounted by the constraints tolerance as returned by pygmo.problem.c_tol). The normalization factor used, $$c_{j_{max}}$$, is the maximum violation of the $$j$$-th constraint across the population.
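A minimal sketch of one plausible reading of this infeasibility measure follows. The averaging over constraints and the data layout are illustrative assumptions, not pygmo's exact implementation:

```python
def infeasibility(viols, c_tol):
    """Sketch of an infeasibility measure: per-constraint violations
    are discounted by the tolerance, normalised by the largest
    violation of that constraint across the population (c_j_max), and
    averaged. viols[i][j] is the raw violation of constraint j by
    individual i (illustrative layout)."""
    n_con = len(c_tol)
    # violations discounted by the constraints tolerance
    disc = [[max(0.0, v - t) for v, t in zip(row, c_tol)] for row in viols]
    # per-constraint normalisation factors c_j_max
    c_max = [max(row[j] for row in disc) for j in range(n_con)]
    iota = []
    for row in disc:
        # constraints satisfied by the whole population (c_j_max == 0) are ignored
        terms = [row[j] / c_max[j] for j in range(n_con) if c_max[j] > 0]
        iota.append(sum(terms) / len(terms) if terms else 0.0)
    return iota
```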

As in the original paper, three individuals in the evolving population are then used to penalize the single objective.

$\begin{split}\begin{array}{rl} \check X & \mbox{: the best decision vector} \\ \hat X & \mbox{: the worst decision vector} \\ \breve X & \mbox{: the decision vector with the highest objective} \end{array}\end{split}$

The best and worst decision vectors are defined accounting for their infeasibilities and for the value of the objective function. Using the above definitions the overall pseudo code can be summarized as follows:

> Select a pygmo.population (related to a single-objective constrained problem)
> Select a UDA (able to solve single-objective unconstrained problems)
> while i < iter
> > Compute the normalization factors (will depend on the current population)
> > Compute the best, worst, highest (will depend on the current population)
> > Evolve the population using the UDA and a penalized objective
> > Reinsert the best decision vector from the previous evolution


pygmo.cstrs_self_adaptive is a user-defined algorithm (UDA) that can be used to construct pygmo.algorithm objects.

Note

Self-adaptive constraints handling implements an internal cache to avoid the re-evaluation of the fitness for decision vectors already evaluated. This makes the final counter of fitness evaluations somewhat unpredictable. The number of function evaluations will be bounded by iters times the fevals made by one call to the inner UDA. The internal cache is reset at each iteration, but its size can grow without limit during each call to the inner UDA's evolve method.

Note

Several modifications were made to the original Farmani and Wright ideas to allow their approach to work on corner cases and with any UDA. Most notably, a violation of the $$j$$-th constraint is ignored if all the decision vectors in the population satisfy that particular constraint (i.e., if $$c_{j_{max}} = 0$$).

Note

The performance of cstrs_self_adaptive is highly dependent on the particular inner algorithm employed, and in particular on its parameters (generations / iterations).

Farmani, Raziyeh, and Jonathan A. Wright. “Self-adaptive fitness formulation for constrained optimization.” IEEE Transactions on Evolutionary Computation 7.5 (2003): 445-455.

See also the docs of the C++ class pagmo::cstrs_self_adaptive.

Parameters
• iters (int) – number of iterations (i.e., calls to the inner algorithm's evolve() method)

• algo – an algorithm or a user-defined algorithm, either C++ or Python (if algo is None, a de algorithm will be used in its stead)

• seed (int) – seed used by the internal random number generator (if seed is None, a randomly-generated value will be used in its stead)

Raises
• OverflowError – if iters is negative or greater than an implementation-defined value

• unspecified – any exception thrown by the constructor of pygmo.algorithm, or by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve(). The log frequency depends on the verbosity parameter (by default nothing is logged), which can be set by calling set_verbosity() on an algorithm constructed with a cstrs_self_adaptive. A verbosity level of N > 0 will log one line every N iterations.

Returns

at each call of the inner algorithm, the values Iters, Fevals, Best, Infeasibility, Violated, Viol. Norm and N. Feasible, where:

• Iters (int), the number of iterations made (i.e. calls to the evolve method of the inner algorithm)

• Fevals (int), the number of fitness evaluations made

• Best (float), the objective function value of the best individual currently in the population

• Infeasibility (float), the aggregated (and normalized) infeasibility value of Best

• Violated (int), the number of constraints currently violated by the best solution

• Viol. Norm (float), the norm of the violation (discounted already by the constraints tolerance)

• N. Feasible (int), the number of feasible individuals currently in the population.

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(cstrs_self_adaptive(iters = 20, algo = de(10)))
>>> algo.set_verbosity(3)
>>> prob = problem(cec2006(prob_id = 1))
>>> pop = population(prob, 20)
>>> pop = algo.evolve(pop)
Iter:        Fevals:          Best: Infeasibility:      Violated:    Viol. Norm:   N. Feasible:
1              0       -96.5435        0.34607              4        177.705              0 i
4            600       -96.5435       0.360913              4        177.705              0 i
7           1200       -96.5435        0.36434              4        177.705              0 i
10           1800       -96.5435       0.362307              4        177.705              0 i
13           2400       -23.2502       0.098049              4        37.1092              0 i
16           3000       -23.2502       0.071571              4        37.1092              0 i
19           3600       -23.2502       0.257604              4        37.1092              0 i
>>> uda = algo.extract(cstrs_self_adaptive)
>>> uda.get_log()
[(1, 0, -96.54346700540063, 0.34606950943401493, 4, 177.70482046341274, 0), (4, 600, ...


See also the docs of the relevant C++ method pagmo::cstrs_self_adaptive::get_log().

property inner_algorithm#

Inner algorithm of the meta-algorithm.

This read-only property gives direct access to the algorithm stored within this meta-algorithm.

Returns

a reference to the inner algorithm

Return type

algorithm

class pygmo.nlopt(solver='cobyla')#

NLopt algorithms.

This user-defined algorithm wraps a selection of solvers from the NLopt library, focusing on local optimisation (both gradient-based and derivative-free). The complete list of supported NLopt algorithms is:

• COBYLA,

• BOBYQA,

• NEWUOA + bound constraints,

• PRAXIS,

• Nelder-Mead simplex,

• sbplx,

• MMA (Method of Moving Asymptotes),

• CCSAQ,

• SLSQP,

• low-storage BFGS,

• preconditioned truncated Newton,

• shifted limited-memory variable-metric,

• augmented Lagrangian algorithm.

The desired NLopt solver is selected upon construction of an nlopt algorithm. Various properties of the solver (e.g., the stopping criteria) can be configured via class attributes. Multiple stopping criteria can be active at the same time: the optimisation will stop as soon as at least one stopping criterion is satisfied. By default, only the xtol_rel stopping criterion is active (see xtol_rel).
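The "stop as soon as at least one criterion is satisfied" semantics can be sketched in pure Python. The parameter names mirror the attributes documented for this class, but the function itself is only an illustration, not part of the pygmo or NLopt API:

```python
def should_stop(fevals, elapsed, f_old, f_new, *,
                maxeval=0, maxtime=0.0, ftol_rel=0.0, stopval=float("-inf")):
    """Illustration of OR-combined stopping criteria: each active
    criterion is checked independently and the run stops as soon as
    any one of them is satisfied. A zero / -inf default disables the
    corresponding criterion."""
    if maxeval > 0 and fevals >= maxeval:
        return True                               # evaluation budget exhausted
    if maxtime > 0 and elapsed >= maxtime:
        return True                               # time budget exhausted
    if ftol_rel > 0 and abs(f_new - f_old) <= ftol_rel * abs(f_new):
        return True                               # relative objective change small enough
    if f_new <= stopval:
        return True                               # target objective value reached
    return False
```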

All NLopt solvers support only single-objective optimisation, and, as usual in pygmo, minimisation is always assumed. The gradient-based algorithms require the optimisation problem to provide a gradient. Some solvers support equality and/or inequality constraints. The constraints’ tolerances will be set to those specified in the problem being optimised (see pygmo.problem.c_tol).

In order to support pygmo’s population-based optimisation model, the evolve() method will select a single individual from the input population to be optimised by the NLopt solver. If the optimisation produces a better individual (as established by compare_fc()), the optimised individual will be inserted back into the population. The selection and replacement strategies can be configured via the selection and replacement attributes.

Note

This user-defined algorithm is available only if pygmo was compiled with the PAGMO_WITH_NLOPT option enabled (see the installation instructions).

The NLopt website contains a detailed description of each supported solver.

This constructor will initialise an nlopt object which will use the NLopt algorithm specified by the input string solver, the "best" individual selection strategy and the "best" individual replacement strategy. solver is translated to an NLopt algorithm type according to the following translation table:

solver string                NLopt algorithm

"cobyla"                     NLOPT_LN_COBYLA
"bobyqa"                     NLOPT_LN_BOBYQA
"newuoa"                     NLOPT_LN_NEWUOA
"newuoa_bound"               NLOPT_LN_NEWUOA_BOUND
"praxis"                     NLOPT_LN_PRAXIS
"neldermead"                 NLOPT_LN_NELDERMEAD
"sbplx"                      NLOPT_LN_SBPLX
"mma"                        NLOPT_LD_MMA
"ccsaq"                      NLOPT_LD_CCSAQ
"slsqp"                      NLOPT_LD_SLSQP
"lbfgs"                      NLOPT_LD_LBFGS
"tnewton_precond_restart"    NLOPT_LD_TNEWTON_PRECOND_RESTART
"tnewton_precond"            NLOPT_LD_TNEWTON_PRECOND
"tnewton_restart"            NLOPT_LD_TNEWTON_RESTART
"tnewton"                    NLOPT_LD_TNEWTON
"var2"                       NLOPT_LD_VAR2
"var1"                       NLOPT_LD_VAR1
"auglag"                     NLOPT_AUGLAG
"auglag_eq"                  NLOPT_AUGLAG_EQ

The parameters of the selected solver can be configured via the attributes of this class.

See also the docs of the C++ class pagmo::nlopt.


Parameters

solver (str) – the name of the NLopt algorithm that will be used by this nlopt object

Raises
• RuntimeError – if the NLopt version is not at least 2

• ValueError – if solver is not one of the allowed algorithm names

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

Examples

>>> from pygmo import *
>>> nl = nlopt('slsqp')
>>> nl.xtol_rel = 1E-6 # Change the default value of the xtol_rel stopping criterion
>>> nl.xtol_rel
1E-6
>>> algo = algorithm(nl)
>>> algo.set_verbosity(1)
>>> prob = problem(luksan_vlcek1(20))
>>> prob.c_tol = [1E-6] * 18 # Set constraints tolerance to 1E-6
>>> pop = population(prob, 20)
>>> pop = algo.evolve(pop)
objevals:       objval:      violated:    viol. norm:
1       95959.4             18        538.227 i
2       89282.7             18        5177.42 i
3         75580             18        464.206 i
4         75580             18        464.206 i
5       77737.6             18        1095.94 i
6         41162             18        350.446 i
7         41162             18        350.446 i
8         67881             18        362.454 i
9       30502.2             18        249.762 i
10       30502.2             18        249.762 i
11       7266.73             18        95.5946 i
12        4510.3             18        42.2385 i
13       2400.66             18        35.2507 i
14       34051.9             18        749.355 i
15       1657.41             18        32.1575 i
16       1657.41             18        32.1575 i
17       1564.44             18        12.5042 i
18       275.987             14        6.22676 i
19       232.765             12         12.442 i
20       161.892             15        4.00744 i
21       161.892             15        4.00744 i
22       17.6821             11        1.78909 i
23       7.71103              5       0.130386 i
24       6.24758              4     0.00736759 i
25       6.23325              1    5.12547e-05 i
26        6.2325              0              0
27       6.23246              0              0
28       6.23246              0              0
29       6.23246              0              0
30       6.23246              0              0

Optimisation return status: NLOPT_XTOL_REACHED (value = 4, Optimization stopped because xtol_rel or xtol_abs was reached)

property ftol_abs#

ftol_abs stopping criterion.

The ftol_abs stopping criterion instructs the solver to stop when an optimization step (or an estimate of the optimum) changes the function value by less than ftol_abs. Defaults to 0 (that is, this stopping criterion is disabled by default).

Returns

the value of the ftol_abs stopping criterion

Return type

float

Raises
• ValueError – if, when setting this property, a NaN is passed

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

property ftol_rel#

ftol_rel stopping criterion.

The ftol_rel stopping criterion instructs the solver to stop when an optimization step (or an estimate of the optimum) changes the objective function value by less than ftol_rel multiplied by the absolute value of the function value. Defaults to 0 (that is, this stopping criterion is disabled by default).

Returns

the value of the ftol_rel stopping criterion

Return type

float

Raises
• ValueError – if, when setting this property, a NaN is passed

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

get_last_opt_result()#

Get the result of the last optimisation.

Returns

the NLopt return code for the last optimisation run, or NLOPT_SUCCESS if no optimisations have been run yet

Return type

int

Raises

unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

get_log()#

Optimisation log.

The optimisation log is a collection of log data lines. A log data line is a tuple consisting of:

• the number of objective function evaluations made so far,

• the objective function value for the current decision vector,

• the number of constraints violated by the current decision vector,

• the constraints violation norm for the current decision vector,

• a boolean flag signalling the feasibility of the current decision vector.

Returns

the optimisation log

Return type

list

Raises

unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

get_solver_name()#

Get the name of the NLopt solver used during construction.

Returns

the name of the NLopt solver used during construction

Return type

str

Raises

unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

property local_optimizer#

Local optimizer.

Some NLopt algorithms rely on other NLopt algorithms as local/subsidiary optimizers. This property, of type nlopt, allows setting such a local optimizer. By default, no local optimizer is specified, and the property is set to None.

Note

At the present time, only the "auglag" and "auglag_eq" solvers make use of a local optimizer. Setting a local optimizer on any other solver will have no effect.

Note

The objective function, bounds, and nonlinear-constraint parameters of the local optimizer are ignored (as they are provided by the parent optimizer). Conversely, the stopping criteria should be specified in the local optimizer. The verbosity of the local optimizer is also forcibly set to zero during the optimisation.

Returns

the local optimizer, or None if not set

Return type

nlopt

Raises

unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.), when setting the property

property maxeval#

maxeval stopping criterion.

The maxeval stopping criterion instructs the solver to stop when the number of function evaluations exceeds maxeval. Defaults to 0 (that is, this stopping criterion is disabled by default).

Returns

the value of the maxeval stopping criterion

Return type

int

Raises

unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

property maxtime#

maxtime stopping criterion.

The maxtime stopping criterion instructs the solver to stop when the optimization time (in seconds) exceeds maxtime. Defaults to 0 (that is, this stopping criterion is disabled by default).

Returns

the value of the maxtime stopping criterion

Return type

float

Raises

unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

property replacement#

Individual replacement policy.

This attribute represents the policy that is used in the evolve() method to select the individual that will be replaced by the optimised individual. The attribute can be either a string or an integer.

If the attribute is a string, it must be one of "best", "worst" and "random":

• "best" will select the best individual in the population,

• "worst" will select the worst individual in the population,

• "random" will randomly choose one individual in the population.

set_random_sr_seed() can be used to seed the random number generator used by the "random" policy.

If the attribute is an integer, it represents the index (in the population) of the individual that will be replaced by the optimised individual.

Returns

the individual replacement policy or index

Return type

str or int

Raises
• OverflowError – if the attribute is set to an integer which is negative or too large

• ValueError – if the attribute is set to an invalid string

• TypeError – if the attribute is set to a value of an invalid type

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)
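The string/index semantics described above can be sketched with an illustrative helper (single-objective minimisation assumed; this function is not part of the pygmo API):

```python
import random

def resolve_individual(policy, fitnesses, rng=random):
    """Map a 'best'/'worst'/'random' string or an explicit index to
    the position of the individual it designates in the population."""
    if policy == "best":
        return min(range(len(fitnesses)), key=fitnesses.__getitem__)
    if policy == "worst":
        return max(range(len(fitnesses)), key=fitnesses.__getitem__)
    if policy == "random":
        return rng.randrange(len(fitnesses))
    if isinstance(policy, int) and 0 <= policy < len(fitnesses):
        return policy                 # explicit population index
    raise ValueError(f"invalid policy: {policy!r}")
```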

property selection#

Individual selection policy.

This attribute represents the policy that is used in the evolve() method to select the individual that will be optimised. The attribute can be either a string or an integer.

If the attribute is a string, it must be one of "best", "worst" and "random":

• "best" will select the best individual in the population,

• "worst" will select the worst individual in the population,

• "random" will randomly choose one individual in the population.

set_random_sr_seed() can be used to seed the random number generator used by the "random" policy.

If the attribute is an integer, it represents the index (in the population) of the individual that is selected for optimisation.

Returns

the individual selection policy or index

Return type

str or int

Raises
• OverflowError – if the attribute is set to an integer which is negative or too large

• ValueError – if the attribute is set to an invalid string

• TypeError – if the attribute is set to a value of an invalid type

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

set_random_sr_seed(seed)#

Set the seed for the "random" selection/replacement policies.

Parameters

seed (int) – the value that will be used to seed the random number generator used by the "random" selection/replacement policies (see selection and replacement)

Raises
• OverflowError – if seed is negative or greater than an implementation-defined value

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

property stopval#

stopval stopping criterion.

The stopval stopping criterion instructs the solver to stop when an objective value less than or equal to stopval is found. Defaults to the C constant -HUGE_VAL (that is, this stopping criterion is disabled by default).

Returns

the value of the stopval stopping criterion

Return type

float

Raises
• ValueError – if, when setting this property, a NaN is passed

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

property xtol_abs#

xtol_abs stopping criterion.

The xtol_abs stopping criterion instructs the solver to stop when an optimization step (or an estimate of the optimum) changes every parameter by less than xtol_abs. Defaults to 0 (that is, this stopping criterion is disabled by default).

Returns

the value of the xtol_abs stopping criterion

Return type

float

Raises
• ValueError – if, when setting this property, a NaN is passed

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

property xtol_rel#

xtol_rel stopping criterion.

The xtol_rel stopping criterion instructs the solver to stop when an optimization step (or an estimate of the optimum) changes every parameter by less than xtol_rel multiplied by the absolute value of the parameter. Defaults to 1E-8.

Returns

the value of the xtol_rel stopping criterion

Return type

float

Raises
• ValueError – if, when setting this property, a NaN is passed

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)
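The xtol_rel condition can be sketched as follows (an illustration of the semantics described for this property, not NLopt's exact code):

```python
def xtol_rel_reached(x_old, x_new, xtol_rel=1e-8):
    """True when the last optimization step changed every parameter by
    less than xtol_rel times the parameter's magnitude."""
    return all(abs(a - b) < xtol_rel * abs(b) for a, b in zip(x_old, x_new))
```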

class pygmo.ipopt#

Ipopt.

New in version 2.2.

This class is a user-defined algorithm (UDA) that wraps the Ipopt (Interior Point OPTimizer) solver, a software package for large-scale nonlinear optimization. Ipopt is a powerful solver able to handle constrained nonlinear optimization problems robustly and efficiently, even at high dimensionalities.

Ipopt supports only single-objective minimisation, and it requires the availability of the gradient in the optimisation problem. If possible, for best results the Hessians should be provided as well (but Ipopt can estimate the Hessians numerically if needed).

In order to support pygmo’s population-based optimisation model, the evolve() method will select a single individual from the input population to be optimised. If the optimisation produces a better individual (as established by compare_fc()), the optimised individual will be inserted back into the population. The selection and replacement strategies can be configured via the selection and replacement attributes.

Ipopt supports a large number of options for the configuration of the optimisation run. The options are divided into three categories:

• string options,

• integer options,

• numeric (floating-point) options.

The full list of options is available on the Ipopt website. pygmo.ipopt allows any Ipopt option to be configured via methods such as set_string_options(), set_string_option(), set_integer_options(), etc., which need to be used before invoking the evolve() method.

If the user does not set any option, pygmo.ipopt uses Ipopt’s default values for the options (see the documentation), with the following modifications:

• if the "print_level" integer option is not set by the user, it will be set to 0 by pygmo.ipopt (this will suppress most screen output produced by the solver - note that we support an alternative form of logging via the pygmo.algorithm.set_verbosity() machinery);

• if the "hessian_approximation" string option is not set by the user and the optimisation problem does not provide the Hessians, then the option will be set to "limited-memory" by pygmo.ipopt. This makes it possible to optimise problems without Hessians out-of-the-box (i.e., Ipopt will approximate numerically the Hessians for you);

• if the "constr_viol_tol" numeric option is not set by the user and the optimisation problem is constrained, then pygmo.ipopt will compute the minimum value min_tol in the vector returned by pygmo.problem.c_tol for the optimisation problem at hand. If min_tol is nonzero, then the "constr_viol_tol" Ipopt option will be set to min_tol, otherwise the default Ipopt value (1E-4) will be used for the option. This ensures that, if the constraint tolerance is not explicitly set by the user, a solution deemed feasible by Ipopt is also deemed feasible by pygmo (but the opposite is not necessarily true).

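The "constr_viol_tol" rule in the last bullet above can be sketched as follows (an illustrative helper only, not pygmo's actual code):

```python
def effective_constr_viol_tol(c_tol, ipopt_default=1e-4):
    """Use the smallest problem constraint tolerance when it is
    nonzero; otherwise fall back to Ipopt's own default value."""
    min_tol = min(c_tol)
    return min_tol if min_tol > 0 else ipopt_default
```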
Note

This user-defined algorithm is available only if pygmo was compiled with the PAGMO_WITH_IPOPT option enabled (see the installation instructions).

Note

Ipopt is not thread-safe, and thus it cannot be used in a pygmo.thread_island.

See also the docs of the C++ class pagmo::ipopt.

Examples

>>> from pygmo import *
>>> ip = ipopt()
>>> ip.set_numeric_option("tol",1E-9) # Change the relative convergence tolerance
>>> ip.get_numeric_options()
{'tol': 1e-09}
>>> algo = algorithm(ip)
>>> algo.set_verbosity(1)
>>> prob = problem(luksan_vlcek1(20))
>>> prob.c_tol = [1E-6] * 18 # Set constraints tolerance to 1E-6
>>> pop = population(prob, 20)
>>> pop = algo.evolve(pop)

******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
******************************************************************************

objevals:        objval:      violated:    viol. norm:
1         201174             18         1075.3 i
2         209320             18        691.814 i
3        36222.3             18        341.639 i
4        11158.1             18        121.097 i
5        4270.38             18        46.4742 i
6        2054.03             18        20.7306 i
7        705.959             18        5.43118 i
8        37.8304             18        1.52099 i
9        2.89066             12       0.128862 i
10       0.300807              3      0.0165902 i
11     0.00430279              3    0.000496496 i
12    7.54121e-06              2    9.70735e-06 i
13    4.34249e-08              0              0
14    3.71925e-10              0              0
15    3.54406e-13              0              0
16    2.37071e-18              0              0

Optimisation return status: Solve_Succeeded (value = 0)

get_integer_options()#

Get integer options.

Returns

a name-value dictionary of optimisation integer options

Return type

dict

Examples

>>> from pygmo import *
>>> ip = ipopt()
>>> ip.set_integer_option("print_level",3)
>>> ip.get_integer_options()
{'print_level': 3}

get_last_opt_result()#

Get the result of the last optimisation.

Returns

the Ipopt return code for the last optimisation run, or Ipopt::Solve_Succeeded if no optimisations have been run yet

Return type

int

Raises

unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

Examples

>>> from pygmo import *
>>> ip = ipopt()
>>> ip.get_last_opt_result()
0

get_log()#

Optimisation log.

The optimisation log is a collection of log data lines. A log data line is a tuple consisting of:

• the number of objective function evaluations made so far,

• the objective function value for the current decision vector,

• the number of constraints violated by the current decision vector,

• the constraints violation norm for the current decision vector,

• a boolean flag signalling the feasibility of the current decision vector.

Returns

the optimisation log

Return type

list

Raises

unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

Warning

The number of constraints violated, the constraints violation norm and the feasibility flag stored in the log are all determined via the facilities and the tolerances specified within pygmo.problem. That is, they might not necessarily be consistent with Ipopt’s notion of feasibility. See the explanation of how the "constr_viol_tol" numeric option is handled in pygmo.ipopt.
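The bookkeeping described in the warning can be sketched as follows. This is a simplified illustration, not pygmo's actual implementation: it treats every constraint as an inequality c(x) <= tol, and `violation_info` is a hypothetical name:

```python
import math

# Sketch: count constraints violated beyond their tolerance and compute
# the Euclidean norm of the excesses, mirroring the "violated" and
# "viol. norm" columns of the optimisation log.
def violation_info(constraints, c_tol):
    excess = [max(c - tol, 0.0) for c, tol in zip(constraints, c_tol)]
    n_violated = sum(1 for e in excess if e > 0.0)
    viol_norm = math.sqrt(sum(e * e for e in excess))
    return n_violated, viol_norm
```

With nonzero tolerances, a decision vector can be feasible for pygmo while Ipopt (using its own "constr_viol_tol") would still classify it differently, which is exactly the inconsistency the warning describes.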

Note

Ipopt supports its own logging format and protocol, including the ability to print to screen and write to file. Ipopt’s screen logging is disabled by default (i.e., the Ipopt verbosity setting is set to 0 - see pygmo.ipopt). On-screen logging can be re-enabled via the "print_level" integer option.

get_numeric_options()#

Get numeric options.

Returns

a name-value dictionary of optimisation numeric options

Return type

dict

Examples

>>> from pygmo import *
>>> ip = ipopt()
>>> ip.set_numeric_option("tol",1E-4)
>>> ip.get_numeric_options()
{'tol': 0.0001}

get_string_options()#

Get string options.

Returns

a name-value dictionary of optimisation string options

Return type

dict

Examples

>>> from pygmo import *
>>> ip = ipopt()
>>> ip.set_string_option("hessian_approximation","limited-memory")
>>> ip.get_string_options()
{'hessian_approximation': 'limited-memory'}

property replacement#

Individual replacement policy.

This attribute represents the policy that is used in the evolve() method to select the individual that will be replaced by the optimised individual. The attribute can be either a string or an integer.

If the attribute is a string, it must be one of "best", "worst" and "random":

• "best" will select the best individual in the population,

• "worst" will select the worst individual in the population,

• "random" will randomly choose one individual in the population.

set_random_sr_seed() can be used to seed the random number generator used by the "random" policy.

If the attribute is an integer, it represents the index (in the population) of the individual that will be replaced by the optimised individual.
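The policy resolution described above can be sketched in plain Python. This is a hedged illustration for a single-objective minimisation (lower fitness is better), not pygmo's code, and `resolve_policy` is a hypothetical name:

```python
import random

# Sketch: resolve a replacement/selection policy ("best", "worst",
# "random", or an explicit index) to an index into the population,
# given one fitness value per individual (lower = better).
def resolve_policy(policy, fitnesses, rng=random):
    if policy == "best":
        return min(range(len(fitnesses)), key=fitnesses.__getitem__)
    if policy == "worst":
        return max(range(len(fitnesses)), key=fitnesses.__getitem__)
    if policy == "random":
        return rng.randrange(len(fitnesses))
    if isinstance(policy, int) and 0 <= policy < len(fitnesses):
        return policy  # explicit individual index
    raise ValueError(f"invalid policy: {policy!r}")
```

The same resolution applies to both the replacement and the selection attribute; only the individual it designates differs (replaced vs. optimised).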

Returns

the individual replacement policy or index

Return type

int or str

Raises
• OverflowError – if the attribute is set to an integer which is negative or too large

• ValueError – if the attribute is set to an invalid string

• TypeError – if the attribute is set to a value of an invalid type

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

reset_integer_options()#

Clear all integer options.

Examples

>>> from pygmo import *
>>> ip = ipopt()
>>> ip.set_integer_option("print_level",3)
>>> ip.get_integer_options()
{'print_level': 3}
>>> ip.reset_integer_options()
>>> ip.get_integer_options()
{}

reset_numeric_options()#

Clear all numeric options.

Examples

>>> from pygmo import *
>>> ip = ipopt()
>>> ip.set_numeric_option("tol",1E-4)
>>> ip.get_numeric_options()
{'tol': 0.0001}
>>> ip.reset_numeric_options()
>>> ip.get_numeric_options()
{}

reset_string_options()#

Clear all string options.

Examples

>>> from pygmo import *
>>> ip = ipopt()
>>> ip.set_string_option("hessian_approximation","limited-memory")
>>> ip.get_string_options()
{'hessian_approximation': 'limited-memory'}
>>> ip.reset_string_options()
>>> ip.get_string_options()
{}

property selection#

Individual selection policy.

This attribute represents the policy that is used in the evolve() method to select the individual that will be optimised. The attribute can be either a string or an integer.

If the attribute is a string, it must be one of "best", "worst" and "random":

• "best" will select the best individual in the population,

• "worst" will select the worst individual in the population,

• "random" will randomly choose one individual in the population.

set_random_sr_seed() can be used to seed the random number generator used by the "random" policy.

If the attribute is an integer, it represents the index (in the population) of the individual that is selected for optimisation.

Returns

the individual selection policy or index

Return type

int or str

Raises
• OverflowError – if the attribute is set to an integer which is negative or too large

• ValueError – if the attribute is set to an invalid string

• TypeError – if the attribute is set to a value of an invalid type

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

set_integer_option(name, value)#

Set integer option.

This method will set the optimisation integer option name to value. The optimisation options are passed to the Ipopt API when calling the evolve() method.

Parameters
• name (str) – the name of the option

• value (int) – the value of the option

Raises

unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

Examples

>>> from pygmo import *
>>> ip = ipopt()
>>> ip.set_integer_option("print_level",3)
>>> algorithm(ip)
Algorithm name: Ipopt: Interior Point Optimization [deterministic]
C++ class name: ...

Extra info:
Last optimisation return code: Solve_Succeeded (value = 0)
Verbosity: 0
Individual selection policy: best
Individual replacement policy: best
Integer options: {print_level : 3}

set_integer_options(opts)#

Set integer options.

This method will set the optimisation integer options contained in opts. It is equivalent to calling set_integer_option() passing all the name-value pairs in opts as arguments.

Parameters

opts (dict of str-int pairs) – the name-value map that will be used to set the options

Raises

unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

Examples

>>> from pygmo import *
>>> ip = ipopt()
>>> ip.set_integer_options({"filter_reset_trigger":4, "print_level":3})
>>> algorithm(ip)
Algorithm name: Ipopt: Interior Point Optimization [deterministic]
C++ class name: ...

Extra info:
Last optimisation return code: Solve_Succeeded (value = 0)
Verbosity: 0
Individual selection policy: best
Individual replacement policy: best
Integer options: {filter_reset_trigger : 4,  print_level : 3}

set_numeric_option(name, value)#

Set numeric option.

This method will set the optimisation numeric option name to value. The optimisation options are passed to the Ipopt API when calling the evolve() method.

Parameters
• name (str) – the name of the option

• value (float) – the value of the option

Raises

unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

Examples

>>> from pygmo import *
>>> ip = ipopt()
>>> ip.set_numeric_option("tol",1E-6)
>>> algorithm(ip)
Algorithm name: Ipopt: Interior Point Optimization [deterministic]
C++ class name: ...

Extra info:
Last optimisation return code: Solve_Succeeded (value = 0)
Verbosity: 0
Individual selection policy: best
Individual replacement policy: best
Numeric options: {tol : 1E-6}

set_numeric_options(opts)#

Set numeric options.

This method will set the optimisation numeric options contained in opts. It is equivalent to calling set_numeric_option() passing all the name-value pairs in opts as arguments.

Parameters

opts (dict of str-float pairs) – the name-value map that will be used to set the options

Raises

unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

Examples

>>> from pygmo import *
>>> ip = ipopt()
>>> ip.set_numeric_options({"tol":1E-4, "constr_viol_tol":1E-3})
>>> algorithm(ip)
Algorithm name: Ipopt: Interior Point Optimization [deterministic]
C++ class name: ...

Extra info:
Last optimisation return code: Solve_Succeeded (value = 0)
Verbosity: 0
Individual selection policy: best
Individual replacement policy: best
Numeric options: {constr_viol_tol : 1E-3,  tol : 1E-4}

set_random_sr_seed(seed)#

Set the seed for the "random" selection/replacement policies.

Parameters

seed (int) – the value that will be used to seed the random number generator used by the "random" selection/replacement policies (see selection and replacement)

Raises
• OverflowError – if the attribute is set to an integer which is negative or too large

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

set_string_option(name, value)#

Set string option.

This method will set the optimisation string option name to value. The optimisation options are passed to the Ipopt API when calling the evolve() method.

Parameters
• name (str) – the name of the option

• value (str) – the value of the option

Raises

unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

Examples

>>> from pygmo import *
>>> ip = ipopt()
>>> ip.set_string_option("hessian_approximation","limited-memory")
>>> algorithm(ip)
Algorithm name: Ipopt: Interior Point Optimization [deterministic]
C++ class name: ...

Extra info:
Last optimisation return code: Solve_Succeeded (value = 0)
Verbosity: 0
Individual selection policy: best
Individual replacement policy: best
String options: {hessian_approximation : limited-memory}

set_string_options(opts)#

Set string options.

This method will set the optimisation string options contained in opts. It is equivalent to calling set_string_option() passing all the name-value pairs in opts as arguments.

Parameters

opts (dict of str-str pairs) – the name-value map that will be used to set the options

Raises

unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

Examples

>>> from pygmo import *
>>> ip = ipopt()
>>> ip.set_string_options({"hessian_approximation":"limited-memory", "limited_memory_initialization":"scalar1"})
>>> algorithm(ip)
Algorithm name: Ipopt: Interior Point Optimization [deterministic]
C++ class name: ...

Extra info:
Last optimisation return code: Solve_Succeeded (value = 0)
Verbosity: 0
Individual selection policy: best
Individual replacement policy: best
String options: {hessian_approximation : limited-memory,  limited_memory_initialization : scalar1}


class pygmo.ihs(gen=1, phmcr=0.85, ppar_min=0.35, ppar_max=0.99, bw_min=1e-5, bw_max=1., seed=random)#

Harmony search (HS) is a metaheuristic algorithm said to mimic the improvisation process of musicians. In the metaphor, each musician (i.e., each variable) plays (i.e., generates) a note (i.e., a value) in the search for the best harmony (i.e., the global optimum) all together.

This pygmo UDA implements the so-called improved harmony search algorithm (IHS), in which the probability of picking the variables from the decision vector and the amount of mutation to which they are subject vary (respectively linearly and exponentially) at each call of the evolve() method.
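The two schedules described above can be sketched in plain Python. This is a minimal sketch under the stated assumption (ppar interpolated linearly, bw exponentially, between their bounds over the run), not pagmo's exact code:

```python
import math

# Sketch of the IHS parameter schedules: with t the current iteration
# out of gen, the pitch adjustment rate grows linearly from ppar_min to
# ppar_max, while the distance bandwidth decays exponentially from
# bw_max down to bw_min.
def ihs_params(t, gen, ppar_min, ppar_max, bw_min, bw_max):
    frac = t / gen  # fraction of the run completed, t in [0, gen]
    ppar = ppar_min + (ppar_max - ppar_min) * frac
    bw = bw_max * math.exp(frac * math.log(bw_min / bw_max))
    return ppar, bw
```

This matches the shape of the ppar and bw columns in the log of the example below: ppar climbs roughly linearly while bw drops by an order of magnitude every fixed number of evaluations.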

In this algorithm the number of fitness function evaluations is equal to the number of iterations. All the individuals in the input population participate in the evolution. A new individual is generated at every iteration, replacing the current worst individual of the population if it is better.
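That iteration structure can be illustrated on a toy 1-D minimisation problem. This is a deliberately simplified sketch with hypothetical names (the real operators act per variable and use the ppar/bw schedules), not the pagmo implementation:

```python
import random

# Toy HS main loop: each iteration improvises one candidate (either a
# perturbed note from memory, or a brand-new random note) and replaces
# the current worst individual if the candidate is better.
def toy_hs(fitness, lb, ub, pop_size=10, iters=200, phmcr=0.85, bw=0.1, rng=None):
    rng = rng or random.Random(0)
    pop = [rng.uniform(lb, ub) for _ in range(pop_size)]
    for _ in range(iters):
        if rng.random() < phmcr:
            # choose a note from memory and pitch-adjust it
            x = rng.choice(pop) + rng.uniform(-bw, bw)
        else:
            # play a completely new note
            x = rng.uniform(lb, ub)
        x = min(max(x, lb), ub)  # keep within the box bounds
        worst = max(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(x) < fitness(pop[worst]):
            pop[worst] = x  # at most one replacement per iteration
    return min(pop, key=fitness)
```

Since only the worst individual is ever replaced, and only by a better candidate, the best fitness in the population is monotonically non-increasing, which is the behaviour visible in the log of the example below.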

Warning

The HS algorithm has been criticized, not for its performance, but for the use of a metaphor that adds nothing to existing ones. The HS algorithm essentially applies mutation and crossover operators to a background population and as such should have been developed in the context of Evolutionary Strategies or Genetic Algorithms and studied in that context. The musicians metaphor only obscures its internal functioning, erroneously making theoretical results from ES and GA seem inapplicable to HS.

Note

The original IHS algorithm was designed to solve unconstrained, deterministic single-objective problems. In pygmo, the algorithm was extended to also tackle multi-objective and constrained (box and nonlinear) problems. This extension is original to pygmo.

Parameters
• gen (int) – number of generations to consider (each generation will compute the objective function once)

• phmcr (float) – probability of choosing from memory (similar to a crossover probability)

• ppar_min (float) – minimum pitch adjustment rate (similar to a mutation rate)

• ppar_max (float) – maximum pitch adjustment rate (similar to a mutation rate)

• bw_min (float) – minimum distance bandwidth (similar to a mutation width)

• bw_max (float) – maximum distance bandwidth (similar to a mutation width)

• seed (int) – seed used by the internal random number generator

Raises
• OverflowError – if gen or seed are negative or greater than an implementation-defined value

• ValueError – if phmcr, ppar_min or ppar_max are not in the ]0,1[ interval, if ppar_min > ppar_max or bw_min > bw_max, or if bw_min is negative.

• unspecified – any exception thrown by failures at the intersection between C++ and Python (e.g., type conversion errors, mismatched function signatures, etc.)

See also the docs of the C++ class pagmo::ihs.

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve() and printed to screen. The log frequency depends on the verbosity parameter (by default nothing is logged), which can be set calling the method set_verbosity() on an algorithm constructed with an ihs. A verbosity larger than 1 will produce a log with one entry every verbosity fitness evaluations.

Returns

at each logged epoch, the values Fevals, ppar, bw, dx, df, Violated, Viol. Norm, ideal_point, where:

• Fevals (int), number of function evaluations made.

• ppar (float), the pitch adjustment rate.

• bw (float), the distance bandwidth.

• dx (float), the population flatness evaluated as the distance between the decision vectors of the best and of the worst individual (or -1 in a multiobjective case).

• df (float), the population flatness evaluated as the distance between the fitness of the best and of the worst individual (or -1 in a multiobjective case).

• Violated (int), the number of constraints violated by the current decision vector.

• Viol. Norm (float), the constraints violation norm for the current decision vector.

• ideal_point (1D numpy array), the ideal point of the current population (cropped to max 5 dimensions only in the screen output)

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(ihs(20000))
>>> algo.set_verbosity(2000)
>>> prob = problem(hock_schittkowski_71())
>>> prob.c_tol = [1e-1]*2
>>> pop = population(prob, 20)
>>> pop = algo.evolve(pop)
Fevals:          ppar:            bw:            dx:            df:      Violated:    Viol. Norm:        ideal1:
1       0.350032       0.999425        4.88642        14.0397              0              0        43.2982
2001       0.414032       0.316046        5.56101        25.7009              0              0        33.4251
4001       0.478032      0.0999425          5.036        26.9657              0              0        19.0052
6001       0.542032      0.0316046        3.77292        23.9992              0              0        19.0052
8001       0.606032     0.00999425        3.97937        16.0803              0              0        18.1803
10001       0.670032     0.00316046        1.15023        1.57947              0              0        17.8626
12001       0.734032    0.000999425       0.017882      0.0185438              0              0        17.5894
14001       0.798032    0.000316046     0.00531358      0.0074745              0              0        17.5795
16001       0.862032    9.99425e-05     0.00270865     0.00155563              0              0        17.5766
18001       0.926032    3.16046e-05     0.00186637     0.00167523              0              0        17.5748
>>> uda = algo.extract(ihs)
>>> uda.get_log()
[(1, 0.35003234534534, 0.9994245193792801, 4.886415773459253, 14.0397487316794, ...


See also the docs of the relevant C++ method pagmo::ihs::get_log().

get_seed()#

This method will return the random seed used internally by this UDA.

Returns

the random seed used by the algorithm

Return type

int

class pygmo.xnes(gen=1, eta_mu=-1, eta_sigma=-1, eta_b=-1, sigma0=-1, ftol=1e-6, xtol=1e-6, memory=False, force_bounds=False, seed=random)#

Exponential Natural Evolution Strategies (xNES).

Parameters
• gen (int) – number of generations

• eta_mu (float) – learning rate for mean update (if -1 will be automatically selected to be 1)

• eta_sigma (float) – learning rate for step-size update (if -1 will be automatically selected)

• eta_b (float) – learning rate for the covariance matrix update (if -1 will be automatically selected)

• sigma0 (float) – the initial search width will be sigma0 * (ub - lb) (if -1 will be automatically selected to be 1)

• ftol (float) – stopping criterion on the fitness tolerance

• xtol (float) – stopping criterion on the decision-vector tolerance

• memory (bool) – when true the adapted parameters are not reset between successive calls to the evolve method

• force_bounds (bool) – when true the box bounds are enforced. The fitness will never be called outside the bounds, but the covariance matrix adaptation mechanism will be degraded

• seed (int) – seed used by the internal random number generator (default is random)

Raises
• OverflowError – if gen is negative or greater than an implementation-defined value

• ValueError – if eta_mu, eta_sigma, eta_b, sigma0 are not in ]0,1] or -1

See also the docs of the C++ class pagmo::xnes.
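The two exit conditions controlled by ftol and xtol can be sketched in plain Python. This is an illustration of the criteria as described (flatness in fitness or in decision space), not pagmo's exact code, and `should_stop` is a hypothetical name:

```python
# Sketch of the xtol/ftol exit conditions: stop when the spread between
# the best and worst individual falls below the tolerance, either in
# fitness space (ftol) or in decision space (xtol).
def should_stop(best_x, worst_x, best_f, worst_f, ftol=1e-6, xtol=1e-6):
    df = abs(worst_f - best_f)
    dx = max(abs(b - w) for b, w in zip(best_x, worst_x))
    return df < ftol or dx < xtol
```

These are the df and dx quantities reported in the log of get_log() below; when neither drops below its tolerance, the run terminates on the generation count instead (as in the "Exit condition -- generations = 500" line of the example).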

get_log()#

Returns a log containing relevant parameters recorded during the last call to evolve(). The log frequency depends on the verbosity parameter (by default nothing is logged), which can be set calling the method set_verbosity() on an algorithm constructed with an xnes. A verbosity of N implies a log line every N generations.

Returns

at each logged epoch, the values Gen, Fevals, Best, dx, df, sigma, where:

• Gen (int), generation number

• Fevals (int), number of function evaluations made

• Best (float), the best fitness function currently in the population

• dx (float), the norm of the distance to the population mean of the mutant vectors

• df (float), the population flatness evaluated as the distance between the fitness of the best and of the worst individual

• sigma (float), the current step-size

Return type

list

Examples

>>> from pygmo import *
>>> algo = algorithm(xnes(gen = 500))
>>> algo.set_verbosity(100)
>>> prob = problem(rosenbrock(10))
>>> pop = population(prob, 20)
>>> pop = algo.evolve(pop)
Gen:        Fevals:          Best:            dx:            df:         sigma:
1              0         173924        33.6872    3.06519e+06            0.5
101           2000        92.9612       0.583942        156.921      0.0382078
201           4000        8.79819       0.117574          5.101      0.0228353
301           6000        4.81377      0.0698366        1.34637      0.0297664
401           8000        1.04445      0.0568541       0.514459      0.0649836
Exit condition -- generations = 500
>>> uda = algo.extract(xnes)
>>> uda.get_log()
[(1, 0, 173924.2840042722, 33.68717961390855, 3065192.3843070837, 0.5), ...


See also the docs of the relevant C++ method pagmo::xnes::get_log().

get_seed()#

This method will return the random seed used internally by this UDA.

Returns

the random seed used by the algorithm

Return type

int