Algorithms

A Quick Look

Algorithms in PyGMO are objects, constructed and then used to optimize a problem via their evolve method. Users can implement their own algorithms in Python (in which case they need to derive from PyGMO.algorithm.base); the Adding a new algorithm tutorial shows how. We also provide a number of algorithms considered useful for general purposes. Each algorithm can be associated only with problems of certain types: (Continuous, Integer or Mixed Integer)-(Constrained, Unconstrained)-(Single, Multi-objective).
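
A minimal sketch of this workflow (using the built-in Schwefel test problem; names and sizes are illustrative):

    from PyGMO import algorithm, problem, population

    prob = problem.schwefel(10)    # 10-dimensional box-constrained test problem
    algo = algorithm.de(gen=500)   # Differential Evolution for 500 generations
    pop = population(prob, 20)     # random initial population of 20 individuals
    pop = algo.evolve(pop)         # evolve returns the evolved population
    print(pop.champion.f)          # best fitness found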

Heuristic Optimization

Common Name Name in PyGMO Type Comments
Differential Evolution (DE) PyGMO.algorithm.de C-U-S The original algorithm
Self-adaptive DE (jDE) PyGMO.algorithm.jde C-U-S Self-adaptive F, CR
DE with p-best crossover (mde_pbx) PyGMO.algorithm.mde_pbx C-U-S Self-adaptive F, CR
Self-adaptive DE (de_1220) PyGMO.algorithm.de_1220 C-U-S Our own brew: self-adaptive F, CR and variant
Particle Swarm Optimization (PSO) PyGMO.algorithm.pso C-U-S The PSO algorithm (canonical, with constriction factor, FIPS, etc.)
Particle Swarm Optimization (PSO) PyGMO.algorithm.pso_gen C-U-S Generational (also problems deriving from base_stochastic)
Simple Genetic Algorithm GRAY (SGA_GRAY) PyGMO.algorithm.sga_gray C-U-S Simple genetic algorithm with gray binary encoding
Simple Genetic Algorithm (SGA) PyGMO.algorithm.sga MI-U-S  
Vector Evaluated Genetic Algorithm (VEGA) PyGMO.algorithm.vega MI-U-M VEGA algorithm, multi-objective extension of SGA
(N+1)-EA Evol. Algorithm (SEA) PyGMO.algorithm.sea I-U-M The multi-objective extension uses the crowding distance operator
Non-dominated Sorting GA (NSGA2) PyGMO.algorithm.nsga_II C-U-M NSGA-II
S-Metric Selection EMOA (SMS-EMOA) PyGMO.algorithm.sms_emoa C-U-M Relies on the hypervolume computation.
Corana’s Simulated Annealing (SA) PyGMO.algorithm.sa_corana C-U-S  
Parallel Decomposition (PADE) PyGMO.algorithm.pade C-U-M Parallel Decomposition (based on the MOEA/D framework)
Non-dominated Sorting PSO (NSPSO) PyGMO.algorithm.nspso C-U-M Multi-Objective PSO
Strength Pareto EA 2 (SPEA2) PyGMO.algorithm.spea2 C-U-M Strength Pareto Evolutionary Algorithm 2
Artificial Bee Colony (ABC) PyGMO.algorithm.bee_colony C-U-S  
Improved Harmony Search (IHS) PyGMO.algorithm.ihs MI-U-M Integer and Multi-objective parts not tested yet
Monte Carlo Search (MC) PyGMO.algorithm.monte_carlo MI-C-S  
Monte Carlo Search (MC) PyGMO.algorithm.py_example MI-C-S Written directly in Python
Covariance Matrix Adaptation-ES PyGMO.algorithm.py_cmaes C-U-S Written directly in Python
Covariance Matrix Adaptation-ES PyGMO.algorithm.cmaes C-U-S  

Meta-algorithms

Common Name Name in PyGMO Type Comments
Monotonic Basin Hopping (MBH) PyGMO.algorithm.mbh N/A  
Multistart (MS) PyGMO.algorithm.ms N/A  
Augmented Lagrangian (AL) PyGMO.algorithm.nlopt_auglag C-C-S Requires PyGMO to be compiled with nlopt option. Minimization assumed
Augmented Lagrangian (AL) PyGMO.algorithm.nlopt_auglag_eq C-C-S Requires PyGMO to be compiled with nlopt option. Minimization assumed
Cstrs co-evolution PyGMO.algorithm.cstrs_co_evolution C-C-S Minimization assumed
Cstrs Self-Adaptive PyGMO.algorithm.cstrs_self_adaptive C-C-S Minimization assumed
Cstrs Immune System PyGMO.algorithm.cstrs_immune_system C-C-S Immune system constraints handling technique
Cstrs CORE PyGMO.algorithm.cstrs_core C-C-S CORE constraints handling technique (repairing technique)

Local optimization

Common Name Name in PyGMO Type Comments
Compass Search (CS) PyGMO.algorithm.cs C-U-S  
Nelder-Mead simplex PyGMO.algorithm.scipy_fmin C-U-S SciPy required. Minimization assumed
Nelder-Mead simplex PyGMO.algorithm.gsl_nm C-U-S Requires PyGMO to be compiled with GSL option. Minimization assumed
Nelder-Mead simplex variant 2 PyGMO.algorithm.gsl_nm2 C-U-S Requires PyGMO to be compiled with GSL option. Minimization assumed
Nelder-Mead simplex variant 2r PyGMO.algorithm.gsl_nm2rand C-U-S Requires PyGMO to be compiled with GSL option. Minimization assumed
Subplex (a Nelder-Mead variant) PyGMO.algorithm.nlopt_sbplx C-C-S Requires PyGMO to be compiled with nlopt option. Minimization assumed
L-BFGS-B PyGMO.algorithm.scipy_l_bfgs_b C-U-S SciPy required. Minimization assumed
BFGS PyGMO.algorithm.gsl_bfgs2 C-U-S Requires PyGMO to be compiled with GSL option. Minimization assumed
BFGS 2 PyGMO.algorithm.gsl_bfgs C-U-S Requires PyGMO to be compiled with GSL option. Minimization assumed
Sequential Least SQuares Prog. PyGMO.algorithm.scipy_slsqp C-C-S SciPy required. Minimization assumed
Sequential Least SQuares Prog. PyGMO.algorithm.nlopt_slsqp C-C-S Requires PyGMO to be compiled with nlopt option. Minimization assumed
Truncated Newton Method PyGMO.algorithm.scipy_tnc C-U-S SciPy required. Minimization assumed
Conjugate Gradient (fr) PyGMO.algorithm.gsl_fr C-U-S Requires PyGMO to be compiled with GSL option. Minimization assumed
Conjugate Gradient (pr) PyGMO.algorithm.gsl_pr C-U-S Requires PyGMO to be compiled with GSL option. Minimization assumed
COBYLA PyGMO.algorithm.scipy_cobyla C-C-S SciPy required. Minimization assumed
COBYLA PyGMO.algorithm.nlopt_cobyla C-C-S Requires PyGMO to be compiled with nlopt option. Minimization assumed
BOBYQA PyGMO.algorithm.nlopt_bobyqa C-C-S Requires PyGMO to be compiled with nlopt option. Minimization assumed
Method of Moving Asymptotes PyGMO.algorithm.nlopt_mma C-C-S Requires PyGMO to be compiled with nlopt option. Minimization assumed
SNOPT PyGMO.algorithm.snopt C-C-S Requires PyGMO to be compiled with snopt option. Minimization assumed
IPOPT PyGMO.algorithm.ipopt C-C-S Requires PyGMO to be compiled with ipopt option. Minimization assumed

Detailed Documentation

class PyGMO.algorithm.base

All PyGMO algorithms derive from this class

base.evolve((_base)arg1, (population)arg2) → population:

Returns the evolved population


class PyGMO.algorithm.de
de.__init__(gen=100, f=0.8, cr=0.9, variant=2, ftol=1e-06, xtol=1e-06, screen_output=False)

Constructs a Differential Evolution algorithm:

USAGE: algorithm.de(gen=100, f=0.8, cr=0.9, variant=2, ftol=1e-6, xtol=1e-6, screen_output = False)

  • gen: number of generations

  • f: weighting factor in [0,1] (if -1 self-adaptation is used)

  • cr: crossover rate in [0,1] (if -1 self-adaptation is used)

  • variant: algorithmic variant to use (one of [1 .. 10])
    1. DE/best/1/exp
    2. DE/rand/1/exp
    3. DE/rand-to-best/1/exp
    4. DE/best/2/exp
    5. DE/rand/2/exp
    6. DE/best/1/bin
    7. DE/rand/1/bin
    8. DE/rand-to-best/1/bin
    9. DE/best/2/bin
    10. DE/rand/2/bin
  • ftol: stop criteria on f

  • xtol: stop criteria on x
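
A sketch of how these parameters combine (problem.rosenbrock is a built-in test problem; the values below are illustrative, not recommended settings):

    from PyGMO import algorithm, problem, population

    prob = problem.rosenbrock(5)
    # variant 7 = DE/rand/1/bin; f=-1 switches on self-adaptation of the weighting factor
    algo = algorithm.de(gen=1000, f=-1, cr=0.9, variant=7)
    pop = algo.evolve(population(prob, 25))
    print(pop.champion.x, pop.champion.f)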


class PyGMO.algorithm.jde
jde.__init__(gen=100, variant=2, variant_adptv=1, ftol=1e-06, xtol=1e-06, memory=False, screen_output=False)

Constructs a jDE algorithm (self-adaptive DE)

REF: “Self-adaptive differential evolution algorithm in constrained real-parameter optimization” J Brest, V Zumer, MS Maucec - Evolutionary Computation, 2006. http://dsp.szu.edu.cn/DSP2006/research/publication/yan/WebEdit/UploadFile/Self-adaptive%20Differential%20Evolution%20Algorithm%20for%20Constrained%20Real-Parameter%20Optimization.pdf

USAGE: algorithm.jde(gen=100, variant=2, variant_adptv=1, ftol=1e-6, xtol=1e-6, memory = False, screen_output = False)

  • gen: number of generations

  • variant: algorithmic variant to use (one of [1 .. 18])

    1. best/1/exp
    2. rand/1/exp
    3. rand-to-best/1/exp
    4. best/2/exp
    5. rand/2/exp
    6. best/1/bin
    7. rand/1/bin
    8. rand-to-best/1/bin
    9. best/2/bin
    10. rand/2/bin
    11. best/3/exp
    12. best/3/bin
    13. rand/3/exp
    14. rand/3/bin
    15. rand-to-current/2/exp
    16. rand-to-current/2/bin
    17. rand-to-best-and-current/2/exp
    18. rand-to-best-and-current/2/bin

  • variant_adptv: adaptive scheme to use (one of [1..2])
    1. random param mutation
    2. param mutation follows rand/3 scheme
  • ftol: stop criteria on f

  • xtol: stop criteria on x

  • memory: if True the algorithm internal state is saved and used for the next call

  • screen_output: activates screen output of the algorithm (do not use in an archipelago, otherwise the screen will be flooded with different island outputs)
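
The memory flag makes successive calls to evolve continue from the adapted internal state, which allows generation-by-generation monitoring. A sketch:

    from PyGMO import algorithm, problem, population

    algo = algorithm.jde(gen=10, memory=True)      # adapted F and CR persist between calls
    pop = population(problem.rosenbrock(10), 30)
    for i in range(20):
        pop = algo.evolve(pop)                     # each call resumes the self-adaptation
        print(i, pop.champion.f)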


class PyGMO.algorithm.mde_pbx
mde_pbx.__init__(gen=100, qperc=0.15, nexp=1.5, ftol=1e-06, xtol=1e-06, screen_output=False)

Constructs a mde_pbx algorithm (self-adaptive DE)

REF: “An Adaptive Differential Evolution Algorithm With Novel Mutation and Crossover Strategies for Global Numerical Optimization” - IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, Vol. 42, No. 2, April 2012

USAGE: algorithm.mde_pbx(gen=100, qperc=0.15, nexp=1.5, ftol=1e-6, xtol=1e-6, screen_output = False)

  • gen: number of generations
  • qperc: percentage of the population from which the best vector is chosen
  • nexp: exponent for the powermean
  • ftol: stop criteria on f
  • xtol: stop criteria on x
  • screen_output: activates screen output of the algorithm (do not use in an archipelago, otherwise the screen will be flooded with different island outputs)

class PyGMO.algorithm.de_1220
de_1220.__init__(gen=100, variant_adptv=1, allowed_variants=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10], memory=False, ftol=1e-06, xtol=1e-06, screen_output=False)

Constructs a Differential Evolution algorithm (our own brew). Self-adaptation is applied to F, CR and the mutation variant.

USAGE: algorithm.de_1220(gen=100, variant_adptv=1, allowed_variants = [i for i in range(1,19)], memory = False, ftol=1e-6, xtol=1e-6, screen_output = False)

  • gen: number of generations

  • variant_adptv: adaptive scheme to use (one of [1..2])
    1. random param mutation
    2. param mutation follows relative DE scheme
  • allowed_variants: a list of the algorithmic variants to mix and self-adapt. Allowed variants are:

    1. best/1/exp
    2. rand/1/exp
    3. rand-to-best/1/exp
    4. best/2/exp
    5. rand/2/exp
    6. best/1/bin
    7. rand/1/bin
    8. rand-to-best/1/bin
    9. best/2/bin
    10. rand/2/bin
    11. best/3/exp
    12. best/3/bin
    13. rand/3/exp
    14. rand/3/bin
    15. rand-to-current/2/exp
    16. rand-to-current/2/bin
    17. rand-to-best-and-current/2/exp
    18. rand-to-best-and-current/2/bin

  • ftol: stop criteria on f

  • xtol: stop criteria on x

  • memory: if True the algorithm internal state is saved and used for the next call


class PyGMO.algorithm.pso
pso.__init__(gen=1, omega=0.7298, eta1=2.05, eta2=2.05, vcoeff=0.5, variant=5, neighb_type=2, neighb_param=4)

Constructs a Particle Swarm Optimization algorithm (steady-state). The position update is applied immediately after the velocity update.

REF (for variants 5-6): http://cswww.essex.ac.uk/staff/rpoli/papers/PoliKennedyBlackwellSI2007.pdf

REF (for variants 1-4): Kennedy, J.; Eberhart, R. (1995). “Particle Swarm Optimization”. Proceedings of IEEE International Conference on Neural Networks. IV. pp. 1942-1948.

USAGE: algorithm.pso(gen=1, omega = 0.7298, eta1 = 2.05, eta2 = 2.05, vcoeff = 0.5, variant = 5, neighb_type = 2, neighb_param = 4)

  • gen: number of generations

  • omega: constriction factor (or particle inertia weight) in [0,1]

  • eta1: Cognitive component in [0,4]

  • eta2: Social component in [0,4]

  • vcoeff: Maximum velocity coefficient (w.r.t. the box-bounds width) in [0,1]

  • variant: algorithmic variant to use (one of [1 .. 6])
    1. PSO canonical (with inertia weight)
    2. PSO canonical (with inertia weight and equal random weights of social and cognitive components)
    3. PSO variant (with inertia weight, same random number for all components)
    4. PSO variant (with inertia weight, same random number for all components and equal weights of social and cognitive components)
    5. PSO canonical (with constriction factor)
    6. Fully Informed Particle Swarm (FIPS)

  • neighb_type: defines the particle neighbourhood (used for the social component)
    1. gbest neighbourhood topology (fully connected)
    2. lbest neighbourhood topology (ring)
    3. Von-Neumann neighbourhood topology (square lattice)
    4. Randomly-varying neighbourhood topology
  • neighb_param: if the lbest topology is selected, it represents each particle’s indegree (also outdegree) in the swarm topology. Particles have neighbours up to a radius of k = neighb_param / 2 in the ring. If the randomly-varying neighbourhood topology is selected, neighb_param represents each particle’s maximum outdegree in the swarm topology. The minimum outdegree is 1 (the particle always connects back to itself).
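
For instance, a constriction-factor PSO over an lbest ring where each particle sees its four nearest neighbours could be built as follows (a sketch; values are illustrative):

    from PyGMO import algorithm

    # variant 5 = canonical PSO with constriction factor,
    # neighb_type 2 = lbest ring, neighb_param 4 => neighbours within radius k = 2
    algo = algorithm.pso(gen=200, variant=5, neighb_type=2, neighb_param=4)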


class PyGMO.algorithm.pso_gen
pso_gen.__init__(gen=1, omega=0.7298, eta1=2.05, eta2=2.05, vcoeff=0.5, variant=5, neighb_type=2, neighb_param=4)

Constructs a Particle Swarm Optimization (generational). The position update is applied only at the end of an entire loop over the population (swarm). Use this version for stochastic problems.

USAGE: algorithm.pso_gen(gen=1, omega = 0.7298, eta1 = 2.05, eta2 = 2.05, vcoeff = 0.5, variant = 5, neighb_type = 2, neighb_param = 4)

  • gen: number of generations

  • omega: constriction factor (or particle inertia weight) in [0,1]

  • eta1: Cognitive component in [0,4]

  • eta2: Social component in [0,4]

  • vcoeff: Maximum velocity coefficient (w.r.t. the box-bounds width) in [0,1]

  • variant: algorithmic variant to use (one of [1 .. 6])
    1. PSO canonical (with inertia weight)
    2. PSO canonical (with inertia weight and equal random weights of social and cognitive components)
    3. PSO variant (with inertia weight, same random number for all components)
    4. PSO variant (with inertia weight, same random number for all components and equal weights of social and cognitive components)
    5. PSO canonical (with constriction factor)
    6. Fully Informed Particle Swarm (FIPS)

  • neighb_type: defines the particle neighbourhood (used for the social component)
    1. gbest neighbourhood topology (fully connected)
    2. lbest neighbourhood topology (ring)
    3. Von-Neumann neighbourhood topology (square lattice)
    4. Randomly-varying neighbourhood topology
  • neighb_param: if the lbest topology is selected, it represents each particle’s indegree (also outdegree) in the swarm topology. Particles have neighbours up to a radius of k = neighb_param / 2 in the ring. If the randomly-varying neighbourhood topology is selected, neighb_param represents each particle’s maximum outdegree in the swarm topology. The minimum outdegree is 1 (the particle always connects back to itself).


class PyGMO.algorithm.sea
sea.__init__(gen=100, limit=20)

Constructs a simple (N+1)-EA: A Simple Evolutionary Algorithm

USAGE: algorithm.sea(gen = 100, limit = 20)

REF: Oliveto, Pietro S., Jun He, and Xin Yao. “Time complexity of evolutionary algorithms for combinatorial optimization: A decade of results.” International Journal of Automation and Computing 4.3 (2007): 281-293.

  • gen: number of ‘generations’ (each generation is one function evaluation)

class PyGMO.algorithm.sga
sga.__init__(gen=1, cr=0.95, m=0.02, elitism=1, mutation=PyGMO.algorithm._algorithm._sga_mutation_type.GAUSSIAN, width=0.1, selection=PyGMO.algorithm._algorithm._sga_selection_type.ROULETTE, crossover=PyGMO.algorithm._algorithm._sga_crossover_type.EXPONENTIAL)

Constructs a Simple Genetic Algorithm (generational)

USAGE: algorithm.sga(self, gen=1, cr=.95, m=.02, elitism=1, mutation=sga.mutation.GAUSSIAN, width = 0.1, selection=sga.selection.ROULETTE, crossover=sga.crossover.EXPONENTIAL)

  • gen: number of generations

  • cr: crossover factor in [0,1]

  • m: mutation probability (for each component) in [0,1]

  • elitism: number of generations after which the best individual is reinserted

  • mutation: mutation type (one of [RANDOM, GAUSSIAN])

  • width: the mutation width (for GAUSSIAN mutation this is the standard deviation, normalized with the box-bounds width)

  • selection: selection strategy (one of [ROULETTE, BEST20])

  • crossover: crossover strategy (one of [BINOMIAL, EXPONENTIAL])

mutation.RANDOM

Random mutation (width is set by the width argument in PyGMO.algorithm.sga)

mutation.GAUSSIAN

Gaussian mutation (bell shape standard deviation is set by the width argument in PyGMO.algorithm.sga multiplied by the box-bounds width)

selection.ROULETTE

Roulette selection mechanism

selection.BEST20

Best 20% individuals are inserted over and over again

crossover.BINOMIAL

Binomial crossover

crossover.EXPONENTIAL

Exponential crossover
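
The enumerations above are exposed as attributes of the class, so a fully explicit construction reads (a sketch; parameter values are illustrative):

    from PyGMO import algorithm

    algo = algorithm.sga(gen=100, cr=0.9, m=0.05, elitism=1,
                         mutation=algorithm.sga.mutation.GAUSSIAN, width=0.1,
                         selection=algorithm.sga.selection.ROULETTE,
                         crossover=algorithm.sga.crossover.BINOMIAL)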


class PyGMO.algorithm.vega
vega.__init__(gen=1, cr=0.95, m=0.02, elitism=1, mutation=PyGMO.algorithm._algorithm._vega_mutation_type.GAUSSIAN, width=0.1, crossover=PyGMO.algorithm._algorithm._vega_crossover_type.EXPONENTIAL)

Constructs a Vector evaluated genetic algorithm

USAGE: algorithm.vega(self, gen=1, cr=.95, m=.02, elitism=1, mutation=vega.mutation.GAUSSIAN, width = 0.1, crossover=vega.crossover.EXPONENTIAL)

  • gen: number of generations

  • cr: crossover factor in [0,1]

  • m: mutation probability (for each component) in [0,1]

  • elitism: number of generations after which the best individual is reinserted

  • mutation: mutation type (one of [RANDOM, GAUSSIAN])

  • width: the mutation width (for GAUSSIAN mutation this is the standard deviation, normalized with the box-bounds width)

  • crossover: crossover strategy (one of [BINOMIAL, EXPONENTIAL])

mutation.RANDOM

Random mutation (width is set by the width argument in PyGMO.algorithm.vega)

mutation.GAUSSIAN

Gaussian mutation (bell shape standard deviation is set by the width argument in PyGMO.algorithm.vega multiplied by the box-bounds width)

crossover.BINOMIAL

Binomial crossover

crossover.EXPONENTIAL

Exponential crossover


class PyGMO.algorithm.sga_gray
sga_gray.__init__(gen=1, cr=0.95, m=0.02, elitism=1, mutation=PyGMO.algorithm._algorithm._gray_mutation_type.UNIFORM, selection=PyGMO.algorithm._algorithm._gray_selection_type.ROULETTE, crossover=PyGMO.algorithm._algorithm._gray_crossover_type.SINGLE_POINT)

Constructs a Simple Genetic Algorithm with gray binary encoding (generational)

USAGE: algorithm.sga_gray(self, gen=1, cr=.95, m=.02, elitism=1, mutation=sga.mutation.UNIFORM, selection=sga.selection.ROULETTE, crossover=sga.crossover.SINGLE_POINT)

  • gen: Number of generations to evolve.
  • cr: crossover factor in [0,1]
  • m: mutation probability (of each encoded bit) in [0,1]
  • elitism: number of generations after which the best individual is reinserted
  • mutation: mutation type (one of [UNIFORM])
  • selection: selection strategy (one of [ROULETTE, BEST20])
  • crossover: crossover strategy (one of [SINGLE_POINT])
mutation.UNIFORM

Uniform mutation

selection.ROULETTE

Roulette selection mechanism

selection.BEST20

Best 20% individuals are inserted over and over again

crossover.SINGLE_POINT

Single point crossover


class PyGMO.algorithm.nsga_II
nsga_II.__init__(gen=100, cr=0.95, eta_c=10, m=0.01, eta_m=10)

Constructs a Non-dominated Sorting Genetic Algorithm (NSGA_II)

USAGE: algorithm.nsga_II(self, gen=100, cr = 0.95, eta_c = 10, m = 0.01, eta_m = 10)

  • gen: number of generations
  • cr: crossover factor [0,1[
  • eta_c: Distribution index for crossover
  • m: mutation probability [0,1]
  • eta_m: Distribution index for mutation
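
A typical multi-objective run looks like the sketch below (problem.zdt is assumed to be available in your build; NSGA-II works on the population as a whole, so choose a population size compatible with its selection scheme, e.g. a multiple of 4):

    from PyGMO import algorithm, problem, population

    prob = problem.zdt(1)                    # bi-objective ZDT1 test problem
    algo = algorithm.nsga_II(gen=250)
    pop = algo.evolve(population(prob, 100))
    # pop now approximates the Pareto front; each individual carries a vector fitness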

class PyGMO.algorithm.sms_emoa
sms_emoa.__init__(hv_algorithm=None, gen=100, sel_m=2, cr=0.95, eta_c=10, m=0.01, eta_m=10)

Constructs a S-Metric Selection Evolutionary Multiobjective Optimiser Algorithm (SMS-EMOA)

USAGE: algorithm.sms_emoa(self, gen=100, sel_m = 2, cr = 0.95, eta_c = 10, m = 0.01, eta_m = 10)

  • hv_algorithm: hypervolume algorithm object used for the computation of the hypervolume. By default it is chosen dynamically
  • gen: number of generations
  • sel_m: selection method for points in dominated fronts. 1 - always use least contributor, 2 - use domination count for fronts > 1
  • cr: crossover factor [0,1]
  • eta_c: Distribution index for crossover
  • m: mutation probability [0,1]
  • eta_m: Distribution index for mutation

class PyGMO.algorithm.pade
pade.__init__(gen=10, decomposition='tchebycheff', weights='grid', solver=None, threads=8, T=8, z=[])

Constructs a Parallel Decomposition Algorithm (PaDe).

For each element of the population a different single-objective problem is generated using a decomposition method. These single-objective problems are then solved in an island model. At the end of the evolution the population is set to the best individual from each single-objective island. This algorithm, original to PaGMO, builds upon the MOEA/D framework.

USAGE: algorithm.pade(self, gen=10, decomposition = ‘tchebycheff’, weights = ‘grid’, solver = None, threads = 8, T = 8, z = [])

  • gen: number of generations
  • threads: the maximum number of single-objective problems to solve at the same time
  • solver: the algorithm to use to solve the single-objective problems
  • T: the size of the population on each subproblem (must be an even number)
  • decomposition: the decomposition method to use, one of (‘weighted’, ‘tchebycheff’ or ‘bi’)
  • weights: weight generation method, one of (‘grid’, ‘low_discrepancy’, ‘random’)
  • z: the reference point (used with Tchebycheff and BI decomposition methods)
RANDOM

Random generation of the weight vector

GRID

Weight vectors are generated to equally divide the search space (requires a particular population size)

LOW_DISCREPANCY

Weight vectors are generated using a low-discrepancy sequence
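
A sketch combining these options (the choice of jde as the single-objective solver is illustrative; for many-objective problems the ‘grid’ method constrains the admissible population sizes, as noted above):

    from PyGMO import algorithm, problem, population

    solver = algorithm.jde(gen=100)          # solves each decomposed subproblem
    algo = algorithm.pade(gen=10, decomposition='tchebycheff', weights='grid',
                          solver=solver, threads=8, T=8)
    pop = algo.evolve(population(problem.zdt(1), 40))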


class PyGMO.algorithm.nspso
nspso.__init__(gen=100, minW=0.4, maxW=1.0, C1=2.0, C2=2.0, CHI=1.0, v_coeff=0.5, leader_selection_range=5, diversity_mechanism='crowding distance')

Constructs a Multi Objective PSO

USAGE: algorithm.nspso(self, gen=100, minW = 0.4, maxW = 1.0, C1 = 2.0, C2 = 2.0, CHI = 1.0, v_coeff = 0.5, leader_selection_range = 5, diversity_mechanism = ‘crowding distance’)
  • gen: number of generations
  • minW: minimum particles’ inertia weight (the inertia weight is decreased throughout the run between maxW and minW)
  • maxW: maximum particles’ inertia weight (the inertia weight is decreased throughout the run between maxW and minW)
  • C1: magnitude of the force, applied to the particle’s velocity, in the direction of its previous best position
  • C2: magnitude of the force, applied to the particle’s velocity, in the direction of its global best (leader)
  • CHI: velocity scaling factor
  • v_coeff: velocity coefficient (determining the maximum allowed particle velocity)
  • leader_selection_range: the leader of each particle is selected among the best leader_selection_range% individuals
  • diversity_mechanism: one of ‘crowding distance’, ‘niche count’ or ‘maxmin’. Defines the diversity mechanism used
CROWDING_DISTANCE

Individuals with a better crowding distance are preferred

NICHE_COUNT

Individuals with a better niche count are preferred

MAXMIN

The MaxMin method is used to obtain the non-dominated set and to maintain diversity


class PyGMO.algorithm.spea2
spea2.__init__(gen=100, cr=0.95, eta_c=10, m=0.01, eta_m=50, archive_size=0)

Constructs a Strength Pareto Evolutionary Algorithm 2

USAGE: algorithm.spea2(gen=100, cr = 0.95, eta_c = 10, m = 0.01, eta_m = 50, archive_size = 0)

  • gen: Number of generations to evolve.
  • cr: Crossover probability
  • eta_c: Distribution index for crossover
  • m: Mutation probability
  • eta_m: Distribution index for mutation
  • archive_size: the size of the non-dominated archive. If archive_size=0 the archive size is set equal to the population size. The population returned by evolve has a size equal to archive_size

class PyGMO.algorithm.sa_corana
sa_corana.__init__(iter=10000, Ts=10, Tf=0.1, steps=1, bin_size=20, range=1)

Constructs Corana’s Simulated Annealing

USAGE: algorithm.sa_corana(iter = 10000, Ts = 10, Tf = .1, steps = 1, bin_size = 20, range = 1)

NOTE: as this version of simulated annealing loops through the chromosome, iter needs to be selected large enough to allow the temperature schedule to actually make sense. For example, if your problem has D dimensions, then in order to have at least N temperature adjustments (from Ts to Tf) one should select iter = D * N * steps * bin_size.

  • iter: number of total iterations
  • Ts: starting temperature
  • Tf: final temperature ( < Ts)
  • steps: number of step adjustments
  • bin_size: size of the bin used to evaluate the step adjustment
  • range: initial size of the neighbourhood (in [0,1])

class PyGMO.algorithm.bee_colony
bee_colony.__init__(gen=100, limit=20)

Constructs an Artificial Bee Colony Algorithm

USAGE: algorithm.bee_colony(gen = 100, limit = 20)

  • gen: number of ‘generations’ (each generation makes 2*NP function evaluations, where NP is the population size)

  • limit: number of tries after which a source of food is dropped if not improved


class PyGMO.algorithm.ms
ms.__init__(algorithm=None, iter=1)

Constructs a Multistart Algorithm

USAGE: algorithm.ms(algorithm = algorithm.de(), iter = 1)

NOTE: starting from pop1, at each iteration a random pop2 is evolved with the selected algorithm and its final best individual replaces the worst individual of pop1

  • algorithm: PyGMO algorithm to be multistarted
  • iter: number of multistarts
screen_output

When True, the algorithm produces output on screen

algorithm

Algorithm to be multistarted
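
A sketch of multistarting DE (each iteration evolves a fresh random population and keeps the best result):

    from PyGMO import algorithm, problem, population

    algo = algorithm.ms(algorithm=algorithm.de(gen=100), iter=10)
    pop = algo.evolve(population(problem.rosenbrock(10), 20))
    print(pop.champion.f)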


class PyGMO.algorithm.mbh
mbh.__init__(algorithm=None, stop=5, perturb=0.05, screen_output=False)

Constructs a Monotonic Basin Hopping Algorithm (generalized to accept any algorithm)

USAGE: algorithm.mbh(algorithm = algorithm.cs(), stop = 5, perturb = 5e-2);

NOTE: Starting from pop, algorithm is applied to the perturbed pop, returning pop2. If pop2 is better than pop, then pop = pop2 and a counter is reset to zero. If pop2 is not better, the counter is incremented. If the counter becomes larger than stop, the optimization is terminated

  • algorithm: ‘local’ optimiser

  • stop: number of no improvements before halting the optimization

  • perturb: non-dimensional perturbation width (can be a list, in which case it has to have the same dimension as the problem mbh is applied to)

  • screen_output: activates screen output of the algorithm (do not use in an archipelago, otherwise the screen will be flooded with different island outputs)

screen_output

When True, the algorithm produces output on screen

algorithm

Algorithm to perform mbh ‘local’ search
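
A sketch pairing MBH with compass search as the ‘local’ optimiser (values are illustrative):

    from PyGMO import algorithm, problem, population

    local = algorithm.cs(max_eval=5000)
    algo = algorithm.mbh(algorithm=local, stop=5, perturb=0.05)
    pop = algo.evolve(population(problem.rosenbrock(10), 1))   # MBH is typically run on a single individual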


class PyGMO.algorithm.cstrs_co_evolution
cstrs_co_evolution.__init__(original_algo=None, original_algo_penalties=None, pop_penalties_size=30, gen=20, method=PyGMO.algorithm._algorithm._co_evo_method_type.SIMPLE, pen_lower_bound=0.0, pen_upper_bound=100000.0, f_tol=1e-15, x_tol=1e-15)

Constructs a co-evolution adaptive penalty algorithm for constrained optimization.

USAGE: algorithm.cstrs_co_evolution(original_algo = algorithm.jde(), original_algo_penalties = algorithm.jde(), pop_penalties_size = 30, gen = 20, method = cstrs_co_evolution.method.SIMPLE, pen_lower_bound = 0, pen_upper_bound = 100000, f_tol = 1e-15, x_tol = 1e-15)

  • original_algo: optimizer to use as ‘original’ optimization method

  • original_algo_penalties: optimizer to use as ‘original’ optimization method for the population encoding the penalty coefficients

  • pop_penalties_size: size of the population encoding the penalty parameters.

  • gen: number of generations.

  • method: cstrs_co_evolution.method.SIMPLE by default, the method used for the population encoding the penalty coefficients.

    Three possibilities are available: SIMPLE, SPLIT_NEQ_EQ and SPLIT_CONSTRAINTS. SIMPLE is the original version of the Coello/He implementation (one penalty coefficient weights the sum of the constraint violations, one the number of violated constraints). SPLIT_NEQ_EQ splits the equality and inequality constraints into two different sets of penalty weights, containing respectively the inequality and equality weights. SPLIT_CONSTRAINTS splits the constraints into M sets of weights, with M the number of constraints.

  • pen_lower_bound: the lower boundary used for penalty.

  • pen_upper_bound: the upper boundary used for penalty.

  • f_tol: 1e-15 by default. The stopping criterion on the f tolerance.

  • x_tol: 1e-15 by default. The stopping criterion on the x tolerance.


class PyGMO.algorithm.cstrs_immune_system
cstrs_immune_system.__init__(algorithm=None, algorithm_immune=None, gen=1, select_method=PyGMO.algorithm._algorithm._immune_select_method_type.BEST_ANTIBODY, inject_method=PyGMO.algorithm._algorithm._immune_inject_method_type.CHAMPION, distance_method=PyGMO.algorithm._algorithm._immune_distance_method_type.EUCLIDEAN, phi=0.5, gamma=0.5, sigma=0.3333333333333333, f_tol=1e-15, x_tol=1e-15)

Constructs an immune system algorithm for constrained optimization.

USAGE: algorithm.cstrs_immune_system(algorithm = algorithm.jde(), algorithm_immune = algorithm.jde(), gen = 1, select_method = cstrs_immune_system.select_method.BEST_ANTIBODY, inject_method = cstrs_immune_system.inject_method.CHAMPION, distance_method = cstrs_immune_system.distance_method.EUCLIDEAN, phi = 0.5, gamma = 0.5, sigma = 1./3., f_tol = 1e-15, x_tol = 1e-15)

  • algorithm: optimizer to use as ‘original’ optimization method. Its number of generations should be set to 1.
  • algorithm_immune: optimizer to use as ‘original’ optimization method for the evolution of the immune system.
  • gen: number of generations.
  • select_method: cstrs_immune_system.select_method.BEST_ANTIBODY by default, the method used for selecting the antibodies.
  • inject_method: cstrs_immune_system.inject_method.CHAMPION by default, the method used for reinjecting the antibodies. Two possibilities are available: CHAMPION and BEST25.
  • distance_method: cstrs_immune_system.distance_method.EUCLIDEAN by default, the method used for computing the distance to the antigens population.
  • phi: 0.5 by default. The feasible fraction selection to compute the mean value.
  • gamma: 0.5 by default. The number of antigens selected / number of total antigens.
  • sigma: 1/3 by default. The number of antibodies / number of antigens.
  • f_tol: 1e-15 by default. The stopping criterion on the f tolerance.
  • x_tol: 1e-15 by default. The stopping criterion on the x tolerance.
screen_output

When True, the algorithm produces output on screen


class PyGMO.algorithm.cstrs_core
cstrs_core.__init__(algorithm=None, repair_algorithm=None, gen=1, repair_frequency=10, repair_ratio=1.0, f_tol=1e-15, x_tol=1e-15)

Constructs a CORE (Constrained Optimization by Random Evolution) algorithm for constrained optimization (it belongs to the family of repairing techniques).

USAGE: algorithm.cstrs_core(algorithm = algorithm.jde(), repair_algorithm = algorithm.jde(), gen = 1, repair_frequency = 10, repair_ratio = 1., f_tol = 1e-15, x_tol = 1e-15)

  • algorithm: optimizer to use as ‘original’ optimization method. Its number of generations should be set to 1.
  • repair_algorithm: optimizer to use as ‘repairing’ algorithm. It should be able to deal with a population of size 1.
  • gen: number of generations.
  • repair_frequency: infeasible individuals are repaired every repair_frequency generations.
  • repair_ratio: ratio of repaired individuals over infeasible individuals (a ratio of 1 repairs all of them).
  • f_tol: 1e-15 by default. The stopping criterion on the f tolerance.
  • x_tol: 1e-15 by default. The stopping criterion on the x tolerance.
screen_output

When True, the algorithm produces output on screen


class PyGMO.algorithm.cs
cs.__init__(max_eval=1, stop_range=0.01, start_range=0.1, reduction_coeff=0.5)

Constructs a Compass Search Algorithm

USAGE: algorithm.cs(max_eval = 1, stop_range = 0.01, start_range = 0.1, reduction_coeff = 0.5);

  • max_eval: maximum number of function evaluations

  • stop_range: when the range is reduced to a value smaller than stop_range cs stops

  • start_range: starting range (non-dimensional wrt ub-lb)

  • reduction_coeff: the range is multiplied by reduction_coeff whenever no improvement is made across one chromosome


class PyGMO.algorithm.ihs
ihs.__init__(iter=100, hmcr=0.85, par_min=0.35, par_max=0.99, bw_min=1e-05, bw_max=1)

Constructs an Improved Harmony Search Algorithm

USAGE: algorithm.ihs(iter = 100, hmcr = 0.85, par_min = 0.35, par_max = 0.99, bw_min = 1E-5, bw_max = 1);

  • iter: number of iterations (improvisations)
  • hmcr: rate of choosing from memory (in ]0,1[)
  • par_min: minimum pitch adjustment rate (in ]0,1[)
  • par_max: maximum pitch adjustment rate (in ]0,1[, > par_min)
  • bw_min: minimum distance bandwidth
  • bw_max: maximum distance bandwidth (> bw_min)

class PyGMO.algorithm.monte_carlo
monte_carlo.__init__(iter=10000)

Constructs a Monte Carlo Algorithm

USAGE: algorithm.monte_carlo(iter = 10000)

NOTE: At the end of each iteration, the randomly generated point substitutes the worst individual in the population if better

  • iter: number of Monte Carlo runs

class PyGMO.algorithm.py_example
py_example.__init__(iter=10)

Constructs a Monte-Carlo (random sampling) algorithm

USAGE: algorithm.py_example(iter = 10)

NOTE: At the end of each iteration, the randomly generated point substitutes the worst individual in the population if better

  • iter: number of random samples

class PyGMO.algorithm.py_cmaes
py_cmaes.__init__(gen=500, cc=-1, cs=-1, c1=-1, cmu=-1, sigma0=0.5, ftol=1e-06, xtol=1e-06, memory=False, screen_output=False)

Constructs a Covariance Matrix Adaptation Evolutionary Strategy (Python)

USAGE: algorithm.py_cmaes(gen = 500, cc = -1, cs = -1, c1 = -1, cmu = -1, sigma0=0.5, ftol = 1e-6, xtol = 1e-6, memory = False, screen_output = False)

NOTE: In our variant of the algorithm, particle memory is used to extract the elite, and reinsertion is made aggressively (the worst individual is always replaced). The bounds of the problem are also enforced so that the PaGMO machinery can work. Fine control over each iteration can be achieved by calling the algorithm with gen=1 and memory=True (the algorithm state is stored, so cmaes continues at the next call without re-initializing its state).

  • gen: number of generations
  • cc: time constant for C cumulation (in [0,1]) if -1 automatic values are set
  • cs: time constant for sigma cumulation (in [0,1]) if -1 automatic values are set
  • c1: learning rate for rank-1 update (in [0,1]) if -1 automatic values are set
  • cmu: learning rate for rank-mu update (in [0,1]) if -1 automatic values are set
  • sigma0: starting step (std)
  • xtol: stopping criteria on the x tolerance
  • ftol: stopping criteria on the f tolerance
  • memory: when True the algorithm preserves memory of covariance, step and more between successive runs
  • screen_output: activates screen_output (output at each generation)
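
The fine-grained control mentioned in the note looks like this in practice (a sketch):

    from PyGMO import algorithm, problem, population

    algo = algorithm.py_cmaes(gen=1, memory=True)   # one generation per call, state preserved
    pop = population(problem.rosenbrock(10), 20)
    for i in range(200):
        pop = algo.evolve(pop)                      # resumes from the stored covariance and step
        print(i, pop.champion.f)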

class PyGMO.algorithm.cmaes
cmaes.__init__(gen=500, cc=-1, cs=-1, c1=-1, cmu=-1, sigma0=0.5, ftol=1e-06, xtol=1e-06, memory=False, screen_output=False)

Constructs a Covariance Matrix Adaptation Evolutionary Strategy (C++)

USAGE: algorithm.cmaes(gen = 500, cc = -1, cs = -1, c1 = -1, cmu = -1, sigma0=0.5, ftol = 1e-6, xtol = 1e-6, memory = False, screen_output = False)

NOTE: In our variant of the algorithm, particle memory is used to extract the elite, and reinsertion is made aggressively (the worst individual is always replaced). The bounds of the problem are also enforced so that the PaGMO machinery can work. Fine control over each iteration can be achieved by calling the algorithm with memory=True and gen=1.

  • gen: number of generations
  • cc: time constant for C cumulation (in [0,1]) if -1 automatic values are set
  • cs: time constant for sigma cumulation (in [0,1]) if -1 automatic values are set
  • c1: learning rate for rank-1 update (in [0,1]) if -1 automatic values are set
  • cmu: learning rate for rank-mu update (in [0,1]) if -1 automatic values are set
  • sigma0: starting step (std)
  • xtol: stopping criteria on the x tolerance
  • ftol: stopping criteria on the f tolerance
  • memory: if True the algorithm internal state is saved and used for the next call
  • screen_output: activates screen output of the algorithm (do not use in an archipelago, otherwise the screen will be flooded with different island outputs)

class PyGMO.algorithm.scipy_fmin
scipy_fmin.__init__(maxiter=1, xtol=0.0001, ftol=0.0001, maxfun=None, disp=False)

Constructs a Nelder-Mead Simplex algorithm (SciPy)

USAGE: algorithm.scipy_fmin(maxiter = 1, xtol = 0.0001, ftol = 0.0001, maxfun = None, disp = False)

  • maxiter: Maximum number of iterations to perform
  • xtol: Relative error in xopt acceptable for convergence
  • ftol: Relative error in func(xopt) acceptable for convergence
  • maxfun: Maximum number of function evaluations to make
  • disp: Set to True to print convergence messages

class PyGMO.algorithm.scipy_l_bfgs_b
scipy_l_bfgs_b.__init__(maxfun=1, m=10, factr=10000000.0, pgtol=1e-05, epsilon=1e-08, screen_output=False)

Constructs a L-BFGS-B algorithm (SciPy)

NOTE: gradient is numerically approximated

USAGE: algorithm.scipy_l_bfgs_b(maxfun = 1, m = 10, factr = 10000000.0, pgtol = 1e-05, epsilon = 1e-08, screen_output = False)

  • maxfun: maximum number of function evaluations

  • m: the maximum number of variable metric corrections used to define the limited-memory matrix (the limited-memory BFGS method does not store the full Hessian but uses this many terms in an approximation to it)

  • factr: the iteration stops when (f{k} - f{k+1}) / max{|f{k}|, |f{k+1}|, 1} <= factr * epsmch, where epsmch is the machine precision, which is automatically generated by the code. Typical values for factr: 1e12 for low accuracy; 1e7 for moderate accuracy; 10.0 for extremely high accuracy.

  • pgtol: the iteration will stop when max{|proj g{i}|, i = 1, ..., n} <= pgtol, where proj g{i} is the i-th component of the projected gradient.

  • epsilon: step size used when approx_grad is true, for numerically calculating the gradient

  • screen_output: Set to True to print iterations


class PyGMO.algorithm.scipy_slsqp
scipy_slsqp.__init__(max_iter=100, acc=1e-08, epsilon=1.4901161193847656e-08, screen_output=False)

Constructs a Sequential Least SQuares Programming algorithm

NOTE: gradient is numerically approximated

USAGE: algorithm.scipy_slsqp(max_iter = 100, acc = 1e-08, epsilon = 1.49e-08, screen_output = False)

  • max_iter: The maximum number of iterations.
  • acc: Requested accuracy.
  • epsilon: The step size for finite-difference derivative estimates.
  • screen_output: Set to True to print iterations
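
Since the SciPy wrappers are ordinary PyGMO algorithms, they can be chained after a global method to refine its result; a sketch:

    from PyGMO import algorithm, problem, population

    pop = population(problem.rosenbrock(10), 20)
    pop = algorithm.de(gen=200).evolve(pop)                           # global exploration
    pop = algorithm.scipy_slsqp(max_iter=200, acc=1e-8).evolve(pop)   # local refinement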

class PyGMO.algorithm.scipy_tnc
scipy_tnc.__init__(maxfun=15000, xtol=-1, ftol=-1, pgtol=1e-05, epsilon=1e-08, screen_output=False)

Constructs a Truncated Newton Method algorithm (SciPy)

NOTE: gradient is numerically approximated

USAGE: algorithm.scipy_tnc(maxfun = 15000, xtol = -1, ftol = -1, pgtol = 1e-05, epsilon = 1e-08, screen_output = False)

  • maxfun: Maximum number of function evaluations.

  • xtol: Precision goal for the value of x in the stopping criterion (after applying x scaling factors). If xtol < 0.0, xtol is set to sqrt(machine_precision). Defaults to -1.

  • ftol: Precision goal for the value of f in the stopping criterion. If ftol < 0.0, ftol is set to 0.0. Defaults to -1.

  • pgtol: Precision goal for the value of the projected gradient in the stopping criterion (after applying x scaling factors). If pgtol < 0.0, pgtol is set to 1e-2 * sqrt(accuracy). Setting it to 0.0 is not recommended. Defaults to -1.

  • epsilon: The step size in a finite-difference approximation of the objective function gradient

  • screen_output: Set to True to print iterations

screen_output

When True, the algorithm produces output on screen


class PyGMO.algorithm.scipy_cobyla
scipy_cobyla.__init__(max_fun=1, rho_end=1e-05, screen_output=False)

Constructs a Constrained Optimization BY Linear Approximation (COBYLA) algorithm (SciPy)

NOTE: equality constraints are transformed into two inequality constraints automatically

USAGE: algorithm.scipy_cobyla(max_fun = 1, rho_end = 1E-5, screen_output = False)

  • max_fun: Maximum number of function evaluations.
  • rho_end: Final accuracy in the optimization (not precisely guaranteed). This is a lower bound on the size of the trust region.
  • screen_output: Set to True to print iterations
screen_output

When True, the algorithm produces output on screen


class PyGMO.algorithm.nlopt_cobyla
nlopt_cobyla.__init__(max_iter=100, ftol=1e-06, xtol=1e-06)

Constructs a Constrained Optimization BY Linear Approximation (COBYLA) algorithm (NLOPT)

USAGE: algorithm.nlopt_cobyla(max_iter = 100, ftol = 1e-6, xtol = 1e-6)

  • max_iter: stop-criteria (number of iterations)
  • ftol: stop-criteria (absolute on the obj-fun)
  • xtol: stop-criteria (absolute on the chromosome)

class PyGMO.algorithm.nlopt_bobyqa
nlopt_bobyqa.__init__(max_iter=100, ftol=1e-06, xtol=1e-06)

Constructs a BOBYQA algorithm (Bound Optimization BY Quadratic Approximation) (NLOPT)

USAGE: algorithm.nlopt_bobyqa(max_iter = 100, ftol = 1e-6, xtol = 1e-6)

  • max_iter: stop-criteria (number of iterations)
  • ftol: stop-criteria (absolute on the obj-fun)
  • xtol: stop-criteria (absolute on the chromosome)

class PyGMO.algorithm.nlopt_sbplx
nlopt_sbplx.__init__(max_iter=100, ftol=1e-06, xtol=1e-06)

Constructs a Subplex (a variant of Nelder-Mead that uses Nelder-Mead on a sequence of subspaces) (NLOPT)

USAGE: algorithm.nlopt_sbplx(max_iter = 100, ftol = 1e-6, xtol = 1e-6)

  • max_iter: stop-criteria (number of iterations)
  • ftol: stop-criteria (absolute on the obj-fun)
  • xtol: stop-criteria (absolute on the chromosome)

class PyGMO.algorithm.nlopt_mma
nlopt_mma.__init__(max_iter=100, ftol=1e-06, xtol=1e-06)

Constructs a Method of Moving Asymptotes (MMA) algorithm (NLOPT)

USAGE: algorithm.nlopt_mma(max_iter = 100, ftol = 1e-6, xtol = 1e-6)

  • max_iter: stop-criteria (number of iterations)
  • ftol: stop-criteria (absolute on the obj-fun)
  • xtol: stop-criteria (absolute on the chromosome)

class PyGMO.algorithm.nlopt_auglag
nlopt_auglag.__init__(aux_algo_id=1, max_iter=100, ftol=1e-06, xtol=1e-06, aux_max_iter=100, aux_ftol=1e-06, aux_xtol=1e-06)

Constructs an Augmented Lagrangian Algorithm (NLOPT)

USAGE: algorithm.nlopt_auglag(aux_algo_id = 1, max_iter = 100, ftol = 1e-6, xtol = 1e-6, aux_max_iter = 100, aux_ftol = 1e-6, aux_xtol = 1e-6)

  • aux_algo_id: auxiliary optimizer id

    1: SBPLX 2: COBYLA 3: BOBYQA 4: Low Storage BFGS

  • max_iter: stop-criteria (number of iterations)

  • ftol: stop-criteria (absolute on the obj-fun)

  • xtol: stop-criteria (absolute on the chromosome)

  • aux_max_iter: stop-criteria for the auxiliary optimizer (number of iterations)

  • aux_ftol: stop-criteria for the auxiliary optimizer (absolute on the obj-fun)

  • aux_xtol: stop-criteria for the auxiliary optimizer (absolute on the chromosome)


class PyGMO.algorithm.nlopt_auglag_eq
nlopt_auglag_eq.__init__(aux_algo_id=1, max_iter=100, ftol=1e-06, xtol=1e-06, aux_max_iter=100, aux_ftol=1e-06, aux_xtol=1e-06)

Constructs an Augmented Lagrangian Algorithm (using penalties only for the equalities) (NLOPT)

USAGE: algorithm.nlopt_auglag_eq(aux_algo_id = 1, max_iter = 100, ftol = 1e-6, xtol = 1e-6, aux_max_iter = 100, aux_ftol = 1e-6, aux_xtol = 1e-6)

  • aux_algo_id: auxiliary (local) optimizer id

    1: COBYLA 2: MMA

  • max_iter: stop-criteria (number of iterations)

  • ftol: stop-criteria (absolute on the obj-fun)

  • xtol: stop-criteria (absolute on the chromosome)

  • aux_max_iter: stop-criteria for the auxiliary optimizer (number of iterations)

  • aux_ftol: stop-criteria for the auxiliary optimizer (absolute on the obj-fun)

  • aux_xtol: stop-criteria for the auxiliary optimizer (absolute on the chromosome)


class PyGMO.algorithm.nlopt_slsqp
nlopt_slsqp.__init__(max_iter=100, ftol=1e-06, xtol=1e-06)

Constructs a Sequential Least SQuares Programming algorithm (SLSQP) algorithm (NLOPT)

USAGE: algorithm.nlopt_slsqp(max_iter = 100, ftol = 1e-6, xtol = 1e-6)

  • max_iter: stop-criteria (number of iterations)
  • ftol: stop-criteria (absolute on the obj-fun)
  • xtol: stop-criteria (absolute on the chromosome)

class PyGMO.algorithm.gsl_nm2rand
gsl_nm2rand.__init__(max_iter=100, step_size=1e-08, tol=1e-08)

Constructs a Nelder-Mead algorithm (Variant2 + randomly oriented initial simplex) (GSL)

USAGE: algorithm.gsl_nm2rand(max_iter = 100, step_size = 1e-8, tol = 1e-8);

  • max_iter: maximum number of iterations
  • step_size: size of the first trial step.
  • tol: accuracy of the line minimisation.

class PyGMO.algorithm.gsl_nm2
gsl_nm2.__init__(max_iter=100, step_size=1e-08, tol=1e-08)

Constructs a Nelder-Mead algorithm (Variant2) (GSL)

USAGE: algorithm.gsl_nm2(max_iter = 100, step_size = 1e-8, tol = 1e-8)

  • max_iter: maximum number of iterations
  • step_size: size of the first trial step.
  • tol: accuracy of the line minimisation.

class PyGMO.algorithm.gsl_nm
gsl_nm.__init__(max_iter=100, step_size=1e-08, tol=1e-08)

Constructs a Nelder-Mead Algorithm (GSL)

USAGE: algorithm.gsl_nm(max_iter = 100, step_size = 1e-8, tol = 1e-8);

  • max_iter: maximum number of iterations
  • step_size: size of the first trial step.
  • tol: accuracy of the line minimisation.

class PyGMO.algorithm.gsl_pr
gsl_pr.__init__(max_iter=100, step_size=1e-08, tol=1e-08, grad_step_size=0.01, grad_tol=0.0001)

Constructs a Polak-Ribiere conjugate gradient (GSL)

USAGE: algorithm.gsl_pr(max_iter = 100, step_size = 1e-8, tol = 1e-8, grad_step_size = 0.01, grad_tol = 0.0001);

  • max_iter: maximum number of iterations
  • step_size: size of the first trial step.
  • tol: accuracy of the line minimisation.
  • grad_step_size: step size for the numerical computation of the gradient.
  • grad_tol: tolerance when testing the norm of the gradient as stopping criterion.

class PyGMO.algorithm.gsl_fr
gsl_fr.__init__(max_iter=100, step_size=1e-08, tol=1e-08, grad_step_size=0.01, grad_tol=0.0001)

Constructs a Fletcher-Reeves conjugate gradient (GSL)

USAGE: algorithm.gsl_fr(max_iter = 100, step_size = 1e-8, tol = 1e-8, grad_step_size = 0.01, grad_tol = 0.0001)

  • max_iter: maximum number of iterations
  • step_size: size of the first trial step.
  • tol: accuracy of the line minimisation.
  • grad_step_size: step size for the numerical computation of the gradient.
  • grad_tol: tolerance when testing the norm of the gradient as stopping criterion.

class PyGMO.algorithm.gsl_bfgs2
gsl_bfgs2.__init__(max_iter=100, step_size=1e-08, tol=1e-08, grad_step_size=0.01, grad_tol=0.0001)

Constructs a BFGS2 Algorithm (GSL)

NOTE: in GSL, BFGS2 is a more efficient version of BFGS

USAGE: algorithm.gsl_bfgs2(max_iter = 100, step_size = 1e-8, tol = 1e-8, grad_step_size = 0.01, grad_tol = 0.0001);

  • max_iter: maximum number of iterations
  • step_size: size of the first trial step.
  • tol: accuracy of the line minimisation.
  • grad_step_size: step size for the numerical computation of the gradient.
  • grad_tol: tolerance when testing the norm of the gradient as stopping criterion.

class PyGMO.algorithm.gsl_bfgs
gsl_bfgs.__init__(max_iter=100, step_size=1e-08, tol=1e-08, grad_step_size=0.01, grad_tol=0.0001)

Constructs a BFGS Algorithm (GSL)

USAGE: algorithm.gsl_bfgs(max_iter = 100, step_size = 1e-8, tol = 1e-8, grad_step_size = 0.01, grad_tol = 0.0001)

  • max_iter: maximum number of iterations
  • step_size: size of the first trial step.
  • tol: accuracy of the line minimisation.
  • grad_step_size: step size for the numerical computation of the gradient.
  • grad_tol: tolerance when testing the norm of the gradient as stopping criterion.

class PyGMO.algorithm.snopt
snopt.__init__(major_iter=100, feas_tol=1e-08, opt_tol=1e-06, screen_output=False)

Constructs a SNOPT Algorithm

USAGE: algorithm.snopt(major_iter = 100, feas_tol = 1e-8, opt_tol = 1e-6, screen_output = False);

  • major_iter: Maximum number of major iterations
  • feas_tol: Feasibility tolerance
  • opt_tol: Optimality tolerance
  • screen_output: Activates output on screen
screen_output

When True, the algorithm produces output on screen


class PyGMO.algorithm.ipopt
ipopt.__init__(max_iter=100, constr_viol_tol=1e-08, dual_inf_tol=1e-08, compl_inf_tol=1e-08, nlp_scaling_method=True, obj_scaling_factor=1.0, mu_init=0.1, screen_output=False)

Constructs an Interior Point OPTimization Algorithm (IPOPT)

USAGE: algorithm.ipopt(max_iter = 100, constr_viol_tol = 1e-08, dual_inf_tol = 1e-08, compl_inf_tol = 1e-08, nlp_scaling_method = True, obj_scaling_factor = 1.0, mu_init = 0.1, screen_output = False);

  • max_iter: Maximum number of major iterations
  • constr_viol_tol: Constraint violation tolerance
  • dual_inf_tol: Dual infeasibility tolerance
  • compl_inf_tol: Complementarity infeasibility tolerance
  • nlp_scaling_method: Select whether the “gradient-based” scaling of the NLP should be used
  • obj_scaling_factor: Scaling factor for the objective function
  • mu_init: Initial value for the barrier parameter
  • screen_output: Activates output on screen
screen_output

When True, the algorithm produces output on screen


class PyGMO.algorithm.cstrs_self_adaptive
cstrs_self_adaptive.__init__(algorithm=None, max_iter=100, f_tol=1e-15, x_tol=1e-15)

Constructs a Self-Adaptive Fitness constraints handling Meta Algorithm.

The key idea of this constraint handling technique is to represent the constraint violation by a single infeasibility measure, and to adapt dynamically the penalization of infeasible solutions.

USAGE: algorithm.cstrs_self_adaptive(algorithm = algorithm.jde(), max_iter = 100, f_tol = 1e-15, x_tol = 1e-15);

  • algorithm: original optimizer
  • max_iter: stop-criteria (number of iterations)
  • f_tol: 1e-15 by default. The stopping criterion on the f tolerance.
  • x_tol: 1e-15 by default. The stopping criterion on the x tolerance.
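
A sketch of wrapping an unconstrained optimizer with this meta-algorithm (the constrained benchmark chosen here is an illustrative assumption; any constrained single-objective PyGMO problem works):

    from PyGMO import algorithm, problem, population

    prob = problem.cec2006(4)      # a constrained benchmark (assumes the CEC 2006 suite is in your build)
    algo = algorithm.cstrs_self_adaptive(algorithm=algorithm.jde(gen=100), max_iter=50)
    pop = algo.evolve(population(prob, 40))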