Multi-objective optimization utilities#
- pygmo.fast_non_dominated_sorting(points)#
Runs the fast non-dominated sorting algorithm on the input points.
- Parameters
points (2d-array-like object) – the input points
- Raises
ValueError – if points is malformed
TypeError – if points cannot be converted to a vector of vectors of floats
- Returns
(ndf, dl, dc, ndr), where ndf (list of 1D NumPy int arrays) are the non-dominated fronts, dl (list of 1D NumPy int arrays) are the domination lists, dc (1D NumPy int array) are the domination counts and ndr (1D NumPy int array) are the non-domination ranks
- Return type
tuple
Examples
>>> import pygmo as pg
>>> ndf, dl, dc, ndr = pg.fast_non_dominated_sorting(points = [[0,1],[-1,3],[2.3,-0.2],[1.1,-0.12],[1.1, 2.12],[-1.1,-1.1]])
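A brief sketch of how the four returned objects are typically used (names follow the example above; exact dtypes and array formatting depend on the pygmo and NumPy versions, so no output is shown):
>>> best_front = ndf[0]  # indices of the points in the best (rank-0) front
>>> point_rank = ndr[0]  # non-domination rank assigned to the first input point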
- pygmo.nadir(points)#
Computes the nadir point of a set of points, i.e., objective vectors. The nadir point is the point that has, in each component, the maximum value of that objective over the non-dominated front of the input points.
Complexity is \(\mathcal{O}(MN^2)\) where \(M\) is the number of objectives and \(N\) is the number of points.
- Parameters
points (2d-array-like object) – the input points
- Raises
ValueError – if points is malformed
TypeError – if points cannot be converted to a vector of vectors of floats
- Returns
the nadir point
- Return type
1D NumPy float array
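A minimal usage sketch (the five points below are mutually non-dominated, so the nadir point is simply their component-wise maximum; the exact array formatting shown depends on the NumPy version):
>>> import pygmo as pg
>>> pg.nadir(points = [[0,5],[1,4],[2,3],[3,2],[4,1]])
array([4., 5.])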
- pygmo.ideal(points)#
Computes the ideal point of a set of points, i.e., objective vectors. The ideal point is the point that has, in each component, the minimum value attained by that objective over the input points.
Complexity is \(\mathcal{O}(MN)\) where \(M\) is the number of objectives and \(N\) is the number of points.
- Parameters
points (2d-array-like object) – the input points
- Raises
ValueError – if points is malformed
TypeError – if points cannot be converted to a vector of vectors of floats
- Returns
the ideal point
- Return type
1D NumPy float array
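A minimal usage sketch (the ideal point is the component-wise minimum of the input objective vectors; the exact array formatting shown depends on the NumPy version):
>>> import pygmo as pg
>>> pg.ideal(points = [[0,5],[1,4],[2,3],[3,2],[4,1]])
array([0., 1.])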
- pygmo.pareto_dominance(obj1, obj2)#
Returns True if obj1 Pareto dominates obj2, False otherwise. Minimization is assumed.
Each pair of corresponding elements in obj1 and obj2 is compared: if all elements in obj1 are less than or equal to the corresponding elements in obj2, and at least one is strictly smaller, True is returned. Otherwise, False is returned.
- Parameters
obj1 (array-like object) – the first list of objectives
obj2 (array-like object) – the second list of objectives
- Raises
ValueError – if the dimensions of obj1 and obj2 are different
TypeError – if obj1 or obj2 cannot be converted to a vector of floats
- Returns
True if obj1 Pareto dominates obj2, False otherwise
- Return type
bool
Examples
>>> import pygmo as pg
>>> pg.pareto_dominance(obj1 = [1,2], obj2 = [2,2])
True
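A companion sketch of the non-comparable case (illustrative values): each vector is better in one objective, so neither dominates the other and both calls return False.
>>> import pygmo as pg
>>> pg.pareto_dominance(obj1 = [1,2], obj2 = [0,3])
False
>>> pg.pareto_dominance(obj1 = [0,3], obj2 = [1,2])
False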
- pygmo.non_dominated_front_2d(points)#
Finds the non-dominated front of a set of two-dimensional objectives. Complexity is \(\mathcal{O}(N \log N)\), and is thus lower than the complexity of calling fast_non_dominated_sorting().
See: Jensen, Mikkel T. “Reducing the run-time complexity of multiobjective EAs: The NSGA-II and other algorithms.” IEEE Transactions on Evolutionary Computation 7.5 (2003): 503-515.
- Parameters
points (2d-array-like object) – the input points
- Raises
ValueError – if points contains anything other than two-dimensional objectives
TypeError – if points cannot be converted to a vector of vectors of floats
- Returns
the non-dominated front (as indices into points)
- Return type
1D NumPy int array
Examples
>>> import pygmo as pg
>>> pg.non_dominated_front_2d(points = [[0,5],[1,4],[2,3],[3,2],[4,1],[2,2]])
array([0, 1, 5, 4], dtype=uint64)
- pygmo.crowding_distance(points)#
An implementation of the crowding distance. Complexity is \(\mathcal{O}(M N \log N)\) where \(M\) is the number of objectives and \(N\) is the number of individuals. The function assumes points contains a non-dominated front. Failure to meet this condition will result in undefined behaviour.
See: Deb, Kalyanmoy, et al. “A fast elitist non-dominated sorting genetic algorithm for multi-objective optimization: NSGA-II.” Parallel problem solving from nature PPSN VI. Springer Berlin Heidelberg, 2000.
- Parameters
points (2d-array-like object) – the input points
- Raises
ValueError – if points does not contain at least two points, or is malformed
TypeError – if points cannot be converted to a vector of vectors of floats
- Returns
the crowding distances
- Return type
1D NumPy float array
Examples
>>> import pygmo as pg
>>> pg.crowding_distance(points = [[0,5],[1,4],[2,3],[3,2],[4,1]])
array([inf, 1., 1., 1., inf])
- pygmo.sort_population_mo(points)#
Sorts a multi-objective, unconstrained population (intended here as a 2D array-like object containing objective vectors) with respect to the following strict ordering:
\(f_1 \prec f_2\) if the non domination ranks are such that \(i_1 < i_2\). In case \(i_1 = i_2\), then \(f_1 \prec f_2\) if the crowding distances are such that \(d_1 > d_2\).
Complexity is \(\mathcal{O}(M N^2)\) where \(M\) is the size of the objective vector and \(N\) is the number of individuals.
Note
This function will also work for single-objective optimization, i.e., with objective vectors of size 1; in that case, however, it is more efficient to sort using Python's built-in sorting methods.
- Parameters
points (2d-array-like object) – the input objective vectors
- Raises
unspecified – all exceptions thrown by pygmo.fast_non_dominated_sorting() and pygmo.crowding_distance()
TypeError – if points cannot be converted to a vector of vectors of floats
- Returns
the indices of the sorted objective vectors.
- Return type
1D NumPy int array
Examples
>>> import pygmo as pg
>>> pop = pg.population(prob = pg.dtlz(prob_id = 3, dim=10, fdim=4), size = 20)
>>> pg.sort_population_mo(points = pop.get_f())
array([ 4, 7, 14, 15, 16, 18, 9, 13, 5, 3, 6, 2, 12, 0, 1, 19, 17, 8, 10, 11])
- pygmo.select_best_N_mo(points, N)#
Returns (unordered) the best N individuals out of a multi-objective, unconstrained population (intended here as a 2D array-like object containing objective vectors). The strict ordering used is the same as that defined in sort_population_mo().
Complexity is \(\mathcal{O}(M N^2)\) where \(M\) is the number of objectives and \(N\) is the number of individuals.
While the complexity is the same as that of sort_population_mo(), this function is to be preferred when possible in that it avoids computing the crowding distance for all individuals, and only computes it for the last non-dominated front that contains individuals included in the best N.
If N is zero, an empty array will be returned.
- Parameters
points (2d-array-like object) – the input objective vectors
N (int) – the size of the returned list of best individuals.
- Raises
unspecified – all exceptions thrown by pygmo.fast_non_dominated_sorting() and pygmo.crowding_distance()
TypeError – if points cannot be converted to a vector of vectors of floats
- Returns
the indices of the N best objective vectors.
- Return type
1D NumPy int array
Examples
>>> import pygmo as pg
>>> pop = pg.population(prob = pg.dtlz(prob_id = 3, dim=10, fdim=4), size = 20)
>>> pg.select_best_N_mo(points = pop.get_f(), N = 13)
array([ 2, 3, 4, 5, 6, 7, 9, 12, 13, 14, 15, 16, 18])
- pygmo.decompose_objectives(objs, weights, ref_point, method)#
Decomposes a vector of objectives.
A vector of objectives is reduced to one only objective using a decomposition technique.
Three different possibilities for method are made available:
weighted decomposition,
Tchebycheff decomposition,
boundary interception method (with penalty constraint).
In the case of \(n\) objectives, we indicate with \(\mathbf f(\mathbf x) = [f_1(\mathbf x), \ldots, f_n(\mathbf x)]\) the vector containing the original multiple objectives, with \(\boldsymbol \lambda = (\lambda_1, \ldots, \lambda_n)\) an \(n\)-dimensional weight vector and with \(\mathbf z^* = (z^*_1, \ldots, z^*_n)\) an \(n\)-dimensional reference point. We also assume \(\lambda_i > 0, \forall i = 1, \ldots, n\) and \(\sum_i \lambda_i = 1\).
The resulting single objective is thus defined as:
weighted decomposition: \(f_d(\mathbf x) = \boldsymbol \lambda \cdot \mathbf f\)
Tchebycheff decomposition: \(f_d(\mathbf x) = \max_{1 \leq i \leq n} \lambda_i \vert f_i(\mathbf x) - z^*_i \vert\)
boundary interception method (with penalty constraint): \(f_d(\mathbf x) = d_1 + \theta d_2\)
where \(d_1 = (\mathbf f - \mathbf z^*) \cdot \hat {\mathbf i}_{\lambda}\), \(d_2 = \vert (\mathbf f - \mathbf z^*) - d_1 \hat {\mathbf i}_{\lambda}\vert\), and \(\hat {\mathbf i}_{\lambda} = \frac{\boldsymbol \lambda}{\vert \boldsymbol \lambda \vert}\)
Note that while ref_point is required, it does not impact the calculation for the weighted method as shown above.
- Parameters
objs (array-like object) – the objective vectors
weights (array-like object) – the weights \(\boldsymbol \lambda\)
ref_point (array-like object) – the reference point \(\mathbf z^*\). It is not used if method is "weighted"
method (str) – the decomposition method: one of "weighted", "tchebycheff" or "bi"
- Raises
ValueError – if objs, weights and ref_point have different sizes, or if method is not one of "weighted", "tchebycheff" or "bi"
TypeError – if weights or ref_point or objs cannot be converted to a vector of floats
- Returns
a one dimensional array containing the decomposed objective.
- Return type
1D NumPy float array
Examples
>>> import pygmo as pg
>>> pg.decompose_objectives(objs = [1,2,3], weights = [0.1,0.1,0.8], ref_point=[5,5,5], method = "weighted")
array([ 2.7])
>>> pg.decompose_objectives(objs = [1,2,3], weights = [0.1,0.1,0.8], ref_point=[0,0,0], method = "weighted")
array([ 2.7])
>>> pg.decompose_objectives(objs = [1,2,3], weights = [0.1,0.1,0.8], ref_point=[5,5,5], method = "tchebycheff")
array([ 1.6])
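These values can be checked by hand against the formulas above: the weighted decomposition gives \(0.1 \cdot 1 + 0.1 \cdot 2 + 0.8 \cdot 3 = 2.7\) regardless of ref_point, while the Tchebycheff decomposition with \(\mathbf z^* = (5,5,5)\) gives \(\max(0.1 \cdot \vert 1-5 \vert, 0.1 \cdot \vert 2-5 \vert, 0.8 \cdot \vert 3-5 \vert) = \max(0.4, 0.3, 1.6) = 1.6\).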
- pygmo.decomposition_weights(n_f, n_w, method, seed)#
Generates the requested number of weight vectors to be used to decompose a multi-objective problem. Three methods are available:
"grid"
generates weights on an uniform grid. This method may only be used when the number of requested weights to be genrated is such that a uniform grid is indeed possible. In two dimensions this is always the case, but in larger dimensions uniform grids are possible only in special cases"random"
generates weights randomly distributing them uniformly on the simplex (weights are such that \(\sum_i \lambda_i = 1\))"low discrepancy"
generates weights using a low-discrepancy sequence to, eventually, obtain a better coverage of the Pareto front. Halton sequence is used since low dimensionalities are expected in the number of objectives (i.e. less than 20), hence Halton sequence is deemed as appropriate.
Note
All methods are guaranteed to generate weights on the simplex (\(\sum_i \lambda_i = 1\)). All weight generation methods are guaranteed to generate the canonical weights [1,0,0,…], [0,1,0,…], … first.
- Parameters
n_f (int) – the number of objectives
n_w (int) – the number of weight vectors to generate
method (str) – the weight generation method: one of "grid", "random" or "low discrepancy"
seed (int) – the random seed
- Raises
OverflowError – if n_f, n_w or seed are negative or greater than an implementation-defined value
ValueError – if n_f and n_w are not compatible with the selected weight generation method, or if method is not one of "grid", "random" or "low discrepancy"
- Returns
the generated weights
- Return type
2D NumPy float array
Examples
>>> import pygmo as pg
>>> pg.decomposition_weights(n_f = 2, n_w = 6, method = "low discrepancy", seed = 33)
array([[ 1.   ,  0.   ],
       [ 0.   ,  1.   ],
       [ 0.25 ,  0.75 ],
       [ 0.75 ,  0.25 ],
       [ 0.125,  0.875],
       [ 0.625,  0.375]])
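A small hedged sketch of the simplex guarantee stated in the note above (the parameter values are illustrative and the generated weights depend on the seed and the pygmo version; only the property that each row sums to one is checked):
>>> import pygmo as pg
>>> w = pg.decomposition_weights(n_f = 3, n_w = 10, method = "random", seed = 42)
>>> # every generated weight vector lies on the simplex: its components sum to one
>>> all(abs(sum(row) - 1.0) < 1e-9 for row in w)
True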