emukit.core.optimization package¶
Submodules¶
- class emukit.core.optimization.acquisition_optimizer.AcquisitionOptimizerBase(space)¶
Bases:
ABC
Base class for acquisition optimizers
- optimize(acquisition, context=None)¶
Optimizes the acquisition function.
- Parameters
  - acquisition (Acquisition) – The acquisition function to be optimized
  - context (Optional[Dict[str, Any]]) – Optimization context. Determines whether any variable values should be fixed during the optimization
- Return type
  Tuple[ndarray, ndarray]
- Returns
  Tuple of (location of maximum, acquisition value at maximizer)
- class emukit.core.optimization.anchor_points_generator.AnchorPointsGenerator(space, num_samples)¶
Bases:
object
Anchor points are the points from which the optimization of the acquisition function is initialized.
This base class is for generating such points; subclasses implement the different strategies for how the points should be selected
- get_anchor_point_scores(X)¶
This abstract method should contain the logic to ascribe scores to different points in the input domain. Points with higher scores will be chosen over points with lower scores.
- Parameters
  - X (ndarray) – (n_samples x n_inputs_dims) array containing the points at which to evaluate the anchor point scores
- Return type
  ndarray
- Returns
  Array containing the score for each input point
- get(num_anchor=5, context_manager=None)¶
- Parameters
  - num_anchor (int) – Number of points to return
  - context_manager (Optional[ContextManager]) – Describes any fixed parameters in the optimization
- Return type
  ndarray
- Returns
  A (num_anchor x n_dims) array containing the anchor points
- class emukit.core.optimization.anchor_points_generator.ObjectiveAnchorPointsGenerator(space, acquisition, num_samples=1000)¶
Bases:
AnchorPointsGenerator
This anchor points generator chooses points where the acquisition function is highest
- get_anchor_point_scores(X)¶
- Parameters
  - X (ndarray) – The samples at which to evaluate the criterion
- Return type
  ndarray
- Returns
  Array containing the acquisition value for each sample
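The idea behind this generator can be summarised in a few lines. Below is a minimal, self-contained sketch (not emukit's implementation; `pick_anchor_points` and its parameters are illustrative): sample the domain uniformly, score each sample with the acquisition function, and keep the highest-scoring points as anchors.

```python
import numpy as np

def pick_anchor_points(acquisition, lower, upper, num_samples=1000, num_anchor=5, rng=None):
    """Sketch: uniformly sample candidates, score them, keep the top num_anchor."""
    rng = np.random.default_rng(rng)
    X = rng.uniform(lower, upper, size=(num_samples, len(lower)))  # candidate pool
    scores = acquisition(X)                                        # one score per row
    best = np.argsort(scores)[-num_anchor:][::-1]                  # indices of top scores
    return X[best]                                                 # (num_anchor x n_dims)

# Toy acquisition that is highest near the origin
anchors = pick_anchor_points(lambda X: -np.sum(X ** 2, axis=1),
                             lower=[-1.0, -1.0], upper=[1.0, 1.0],
                             num_samples=500, num_anchor=3, rng=0)
```

The returned anchors then serve as starting points for a local optimizer of the acquisition.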
- class emukit.core.optimization.context_manager.ContextManager(space, context)¶
Bases:
object
Handles the context variables in the optimizer
- expand_vector(x)¶
Expand a context-free parameter vector with the values of the context variables.
- Parameters
  - x (ndarray) – Context-free parameter values as a 2d array
- Return type
  ndarray
- Returns
  Parameter values with the context values inserted
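The expansion step amounts to re-inserting fixed values at their original column positions. A minimal sketch (illustrative only; `expand_vector` here is a standalone function, not the emukit method):

```python
import numpy as np

def expand_vector(x_free, context_idx, context_values, n_dims):
    """Sketch: insert fixed context values into a context-free parameter array."""
    x_free = np.atleast_2d(x_free)
    free_idx = [i for i in range(n_dims) if i not in context_idx]
    X = np.empty((x_free.shape[0], n_dims))
    X[:, free_idx] = x_free             # optimizer-controlled values
    X[:, context_idx] = context_values  # fixed context values
    return X

# Fix dimension 1 to 0.5 in a 3-dimensional space
full = expand_vector(np.array([[0.1, 0.9]]), context_idx=[1],
                     context_values=[0.5], n_dims=3)
# full → [[0.1, 0.5, 0.9]]
```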
- class emukit.core.optimization.gradient_acquisition_optimizer.GradientAcquisitionOptimizer(space, num_samples=1000, num_anchor=1)¶
Bases:
AcquisitionOptimizerBase
Optimizes the acquisition function using a quasi-Newton method (L-BFGS). Can be used for continuous acquisition functions.
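The overall pattern can be sketched with SciPy's L-BFGS-B: start from one or more anchor points and minimize the negated acquisition (SciPy minimizes, so maximization flips the sign). This is a simplified illustration, not emukit's internal code; `maximize_acquisition` and the toy acquisition are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

def maximize_acquisition(acquisition, bounds, anchors):
    """Sketch: run L-BFGS-B from each anchor on the negated acquisition."""
    best_x, best_val = None, -np.inf
    for x0 in anchors:
        res = minimize(lambda x: -acquisition(x), x0, method="L-BFGS-B", bounds=bounds)
        if -res.fun > best_val:
            best_x, best_val = res.x, -res.fun
    return best_x, best_val

# Toy acquisition with its maximum at (0.3, 0.7)
acq = lambda x: -((x[0] - 0.3) ** 2 + (x[1] - 0.7) ** 2)
x_opt, val = maximize_acquisition(acq, bounds=[(0, 1), (0, 1)],
                                  anchors=[np.array([0.5, 0.5])])
```

Multiple anchors guard against the optimizer stalling in a poor local optimum of a multi-modal acquisition.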
- class emukit.core.optimization.local_search_acquisition_optimizer.LocalSearchAcquisitionOptimizer(space, num_steps=10, num_init_points=5, std_dev=0.02, num_continuous=4)¶
Bases:
AcquisitionOptimizerBase
Optimizes the acquisition function by multiple local searches starting at random points. Each local optimization iteratively evaluates the one-exchange neighbourhoods. Can be used for discrete and continuous acquisition functions.
This kind of optimization is also known as Variable Neighbourhood Search (e.g. see https://en.wikipedia.org/wiki/Variable_neighborhood_search). Neighbourhood definitions and default parameters are based on the search used in SMAC [1].
Warning
The local search heuristic implemented here currently differs from SMAC [1]. Here the neighbourhood of a point is evaluated completely and the search continues at the best neighbour (best-improvement heuristic). SMAC instead samples neighbours iteratively and continues at the first one that is better than the current point (first-improvement heuristic). This implementation can therefore be time-consuming for large neighbourhoods (e.g. parameters with hundreds of categories).
- One-exchange neighbourhood is defined for the following parameter types:
- Categorical parameter with one-hot encoding
All other categories
- Categorical parameter with ordinal encoding
Only the preceding and following categories
- Continuous parameter
Gaussian samples (default: 4) around the current value. The standard deviation (default: 0.02) is scaled by the parameter's value range.
- Discrete parameter
The preceding and following discrete values.
- [1] Hutter, Frank, Holger H. Hoos, and Kevin Leyton-Brown. “Sequential model-based optimization for general algorithm configuration.” International Conference on Learning and Intelligent Optimization. Springer, Berlin, Heidelberg, 2011.
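The best-improvement loop over a one-exchange neighbourhood is easy to see on a single ordinal parameter, where the neighbourhood is just the preceding and the following level. A minimal sketch (illustrative only; `local_search_ordinal` is not part of emukit):

```python
def local_search_ordinal(score, start, n_levels, max_steps=100):
    """Sketch: move to the best neighbour until no neighbour improves."""
    current = start
    for _ in range(max_steps):
        # One-exchange neighbourhood of an ordinal value: one step down, one step up
        neighbours = [v for v in (current - 1, current + 1) if 0 <= v < n_levels]
        best = max(neighbours, key=score)
        if score(best) <= score(current):
            return current  # local optimum: no neighbour improves
        current = best
    return current

# Unimodal score over 10 ordinal levels, peaked at level 7
best_level = local_search_ordinal(lambda v: -(v - 7) ** 2, start=2, n_levels=10)
# best_level → 7
```

The full optimizer runs this kind of search from several random starting points and combines the neighbourhood definitions above across all parameter types.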
- class emukit.core.optimization.multi_source_acquisition_optimizer.MultiSourceAcquisitionOptimizer(acquisition_optimizer, space)¶
Bases:
AcquisitionOptimizerBase
Optimizes the acquisition function by finding the optimum input location at each information source, then picking the information source where the value of the acquisition at the optimum input location is highest.
- optimize(acquisition, context=None)¶
Computes the location and source of the next point to evaluate by finding the optimum input location at each information source, then picking the information source where the acquisition value at its optimum location is highest.
- Parameters
  - acquisition (Acquisition) – The acquisition function to be optimized
  - context (Optional[Dict[str, Any]]) – Contains variables to fix during optimization of the acquisition function. The dictionary key is the parameter name and the value is the value to fix the parameter to.
- Return type
  Tuple[ndarray, ndarray]
- Returns
  A tuple of (location of maximum, acquisition value at maximum)
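The per-source selection logic can be sketched as follows. This is a simplified illustration under assumed names (`optimize_over_sources`, `optimize_at_source` are not emukit APIs): optimize with each source fixed, then keep the source whose optimum has the highest acquisition value.

```python
import numpy as np

def optimize_over_sources(acquisition, optimize_at_source, sources):
    """Sketch: pick the information source with the best per-source optimum."""
    results = []
    for s in sources:
        x_opt, val = optimize_at_source(acquisition, s)  # optimum with source s fixed
        results.append((x_opt, val, s))
    x_best, val_best, s_best = max(results, key=lambda r: r[1])
    return np.append(x_best, s_best), val_best  # location with source index appended

# Toy setup: at every source s the best x is 0.5; the value depends only on s
def toy_optimize(acq, s):
    return np.array([0.5]), acq(np.array([0.5]), s)

x_and_source, value = optimize_over_sources(
    lambda x, s: 1.0 - (s - 1) ** 2, toy_optimize, sources=[0, 1, 2])
# x_and_source → [0.5, 1.0]; value → 1.0
```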
- class emukit.core.optimization.optimizer.Optimizer(bounds)¶
Bases:
object
Class for a general acquisition optimizer.
- class emukit.core.optimization.optimizer.OptLbfgs(bounds, max_iterations=1000)¶
Bases:
Optimizer
Wrapper for L-BFGS-B that can use either true or approximated gradients.
- emukit.core.optimization.optimizer.apply_optimizer(optimizer, x0, space, f=None, df=None, f_df=None, context_manager=None)¶
Optimizes f using the optimizer supplied, deals with potential context variables.
- Parameters
  - optimizer (Optimizer) – The optimizer object that will perform the optimization
  - x0 (ndarray) – Initial point for a local optimizer (x0 can be defined with or without the context included)
  - f (Optional[Callable]) – The function to optimize
  - df (Optional[Callable]) – Gradient of the function to optimize
  - f_df (Optional[Callable]) – Returns both the function to optimize and its gradient
  - context_manager (Optional[ContextManager]) – If provided, x0 (and the optimizer) operates in the space without the context
  - space (ParameterSpace) – Parameter space describing the input domain, including any context variables
- Return type
  Tuple[ndarray, ndarray]
- Returns
  Location of the optimum and the value at the optimum
- class emukit.core.optimization.optimizer.OptimizationWithContext(x0, f, df=None, f_df=None, context_manager=None)¶
Bases:
object
- f_no_context(x)¶
Wrapper around the optimization objective function that handles adding the context variables to x.
- Parameters
  - x (ndarray) – Input without context variables
- Return type
  ndarray
- df_no_context(x)¶
Wrapper around the derivative of the optimization objective function that handles adding the context variables to x.
- Parameters
  - x (ndarray) – Input without context variables
- Return type
  ndarray
- class emukit.core.optimization.optimizer.OptTrustRegionConstrained(bounds, constraints, max_iterations=1000)¶
Bases:
Optimizer
Wrapper for the Trust-Region Constrained algorithm, which can handle non-linear constraints.
- class emukit.core.optimization.random_search_acquisition_optimizer.RandomSearchAcquisitionOptimizer(space, num_eval_points=10)¶
Bases:
AcquisitionOptimizerBase
Optimizes the acquisition function by evaluating at random points. Can be used for discrete and continuous acquisition functions.
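Random search reduces to evaluating the acquisition on a batch of uniform samples and returning the best one. A minimal sketch (illustrative; `random_search_maximize` is not the emukit API):

```python
import numpy as np

def random_search_maximize(acquisition, lower, upper, num_eval_points=10, rng=None):
    """Sketch: draw uniform random points and return the best one found."""
    rng = np.random.default_rng(rng)
    X = rng.uniform(lower, upper, size=(num_eval_points, len(lower)))
    values = acquisition(X)
    i = int(np.argmax(values))
    return X[i:i + 1], values[i]   # keep a 2d shape for the location

x_max, value = random_search_maximize(lambda X: -np.sum((X - 0.5) ** 2, axis=1),
                                      lower=[0.0, 0.0], upper=[1.0, 1.0],
                                      num_eval_points=200, rng=1)
```

Because it needs no gradients, this strategy works for both discrete and continuous parameters, at the cost of scaling poorly with dimension.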