emukit.multi_fidelity.models package

Submodules

Contains linear models

class emukit.multi_fidelity.models.linear_model.GPyLinearMultiFidelityModel(X, Y, kernel, n_fidelities, likelihood=None)

Bases: GP

A thin wrapper around GPy.core.GP that does some input checking and provides a default likelihood.
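emukit's multi-fidelity models encode the fidelity of each point by appending a zero-based fidelity index as the last column of the input array (the library also ships conversion helpers in emukit.multi_fidelity.convert_lists_to_array for this). A minimal numpy sketch of that convention, without emukit itself — the helper name stack_with_fidelity is illustrative, not part of the library:

```python
import numpy as np

def stack_with_fidelity(x_list):
    """Stack per-fidelity input arrays and append the fidelity index
    as the last column (mirrors emukit's input convention)."""
    blocks = []
    for fidelity, x in enumerate(x_list):
        idx = np.full((x.shape[0], 1), fidelity, dtype=float)
        blocks.append(np.hstack([x, idx]))
    return np.vstack(blocks)

x_low = np.random.rand(12, 2)   # 12 low-fidelity points, 2 input dims
x_high = np.random.rand(4, 2)   # 4 high-fidelity points
X = stack_with_fidelity([x_low, x_high])
print(X.shape)  # (16, 3): last column holds the fidelity index
```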

Contains code for the non-linear multi-fidelity model.

It is based on this paper:

Nonlinear information fusion algorithms for data-efficient multi-fidelity modelling. P. Perdikaris, M. Raissi, A. Damianou, N. D. Lawrence and G. E. Karniadakis (2017) https://royalsocietypublishing.org/doi/10.1098/rspa.2016.0751

emukit.multi_fidelity.models.non_linear_multi_fidelity_model.make_non_linear_kernels(base_kernel_class, n_fidelities, n_input_dims, ARD=False)

This function takes a base kernel class and constructs the structured multi-fidelity kernels

At the first level the kernel is simply:

k_{base}(x, x')

At subsequent levels the kernels are of the form:

k_{base}(x, x') k_{base}(y_{i-1}, y_{i-1}') + k_{base}(x, x')

Parameters:
  • base_kernel_class (Type[Kern]) – GPy class of the base kernel type used to construct the kernel at each fidelity

  • n_fidelities (int) – Number of fidelities in the model. A kernel will be returned for each fidelity

  • n_input_dims (int) – The dimensionality of the input.

  • ARD (bool) – If True, uses different lengthscales for different dimensions. Otherwise the same lengthscale is used for all dimensions. Default False.

Return type:

List

Returns:

A list of kernels with one entry for each fidelity starting from lowest to highest fidelity.
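The structure described by the formula above can be illustrated directly: at the second level, a kernel on x is multiplied by a kernel on the previous level's output y_{i-1}, and a further kernel on x is added. A numpy sketch with a squared-exponential base kernel, evaluated pointwise for clarity (this is not the GPy implementation that make_non_linear_kernels returns):

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """Squared-exponential base kernel k_base(a, b)."""
    return np.exp(-0.5 * ((a - b) / lengthscale) ** 2)

def level2_kernel(x, x2, y_prev, y_prev2):
    """k_base(x, x') * k_base(y_{i-1}, y_{i-1}') + k_base(x, x')."""
    return rbf(x, x2) * rbf(y_prev, y_prev2) + rbf(x, x2)

# At identical inputs and identical previous-level outputs,
# each rbf factor is 1, so the composite kernel evaluates to 2.
k = level2_kernel(0.3, 0.3, 1.7, 1.7)
print(k)  # 2.0
```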

class emukit.multi_fidelity.models.non_linear_multi_fidelity_model.NonLinearMultiFidelityModel(X_init, Y_init, n_fidelities, kernels, n_samples=100, verbose=False, optimization_restarts=5)

Bases: IModel, IDifferentiable

Non-linear Model for multiple fidelities. This implementation of the model only handles 1-dimensional outputs.

The theory implies the training points should be nested, such that any point in a higher fidelity also exists in all lower fidelities; in practice the model will still work if this constraint is ignored.
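A nested design can be checked before training: every high-fidelity input should also appear in each lower-fidelity set. A small numpy sketch of such a check (the helper is illustrative, not part of emukit):

```python
import numpy as np

def is_nested(x_low, x_high, tol=1e-12):
    """True if every row of x_high also appears (within tol) in x_low."""
    return all(
        np.any(np.all(np.abs(x_low - row) <= tol, axis=1))
        for row in x_high
    )

x_low = np.array([[0.0], [0.25], [0.5], [0.75], [1.0]])
x_high = np.array([[0.0], [0.5], [1.0]])   # subset of the low-fidelity design
print(is_nested(x_low, x_high))  # True
```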

set_data(X, Y)

Updates training data in the model.

Parameters:
  • X (ndarray) – New training features.

  • Y (ndarray) – New training targets.

Return type:

None

property X

Input array of size (n_points x n_input_dims) across every fidelity, in the original input domain; it excludes inputs to models that come from the output of the previous level.

property Y

Output array of size (n_points x n_outputs) across every fidelity level.

property n_samples

predict(X)

Predicts mean and variance at fidelity given by the last column of X

Note that the posterior isn’t Gaussian and so this function doesn’t tell us everything about our posterior distribution.

Parameters:

X (ndarray) – Input locations with fidelity index appended.

Return type:

Tuple[ndarray, ndarray]

Returns:

mean and variance of posterior distribution at X.
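The note that the posterior is not Gaussian follows from how the model propagates uncertainty between levels: predictions at a lower fidelity are pushed through the nonlinear next-level model as samples (the n_samples constructor argument controls how many), and the reported mean and variance are moments of the resulting non-Gaussian distribution. A toy numpy illustration of the idea, with an arbitrary nonlinear map standing in for the next-level model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Suppose the low-fidelity posterior at some x is N(mu, sigma^2) ...
mu, sigma = 0.5, 0.2
samples = rng.normal(mu, sigma, size=100)

# ... and the next level applies a nonlinear map g to it.
g = lambda y: y ** 2 + np.sin(4 * y)

pushed = g(samples)
mean, var = pushed.mean(), pushed.var()
# pushed is no longer Gaussian; mean and var summarize but do not
# fully describe the predictive distribution.
print(mean, var)
```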

get_prediction_gradients(X)

Computes the gradients of the predictive mean and variance with respect to X.

Parameters:

X (ndarray) – Input locations with fidelity index appended.

Return type:

Tuple[ndarray, ndarray]

Returns:

Tuple of (mean gradient, variance gradient). Gradients have shape (n_points x (d-1)) because the gradient with respect to the fidelity index is not returned.
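A finite-difference comparison against predict is a common way to sanity-check gradients like these. A generic sketch of that check, with a toy function standing in for the model's predictive mean:

```python
import numpy as np

def mean_fn(x):
    """Stand-in for a model's predictive mean (toy function)."""
    return np.sin(x[0]) + x[1] ** 2

def mean_grad(x):
    """Analytic gradient of mean_fn."""
    return np.array([np.cos(x[0]), 2.0 * x[1]])

def fd_grad(f, x, eps=1e-6):
    """Central finite differences, one input dimension at a time."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        step = np.zeros_like(x)
        step[i] = eps
        g[i] = (f(x + step) - f(x - step)) / (2 * eps)
    return g

x = np.array([0.4, -1.3])
print(np.allclose(mean_grad(x), fd_grad(mean_fn, x), atol=1e-5))  # True
```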

optimize()

Optimize the full model.

Return type:

None

get_f_minimum()

Get the minimum of the top fidelity model.

Return type:

ndarray

Module contents