emukit.benchmarking.loop_benchmarking package

Submodules

class emukit.benchmarking.loop_benchmarking.benchmark_plot.BenchmarkPlot(benchmark_results, loop_colours=None, loop_line_styles=None, x_axis_metric_name=None, metrics_to_plot=None)

Bases: object

Creates a plot comparing the results from the different loops used during benchmarking

make_plot()

Make one plot for each metric measured, comparing the different loop results against each other

Return type

None

save_plot(file_name)

Save plot to file

Parameters

file_name (str) –

Return type

None
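A minimal sketch of what such a comparison plot looks like, using plain matplotlib and fabricated benchmark data rather than emukit itself: the dictionary layout, loop names, and file path below are all hypothetical stand-ins for a BenchmarkResult's contents.

```python
import os
import tempfile

import matplotlib
matplotlib.use("Agg")  # render without a display
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical benchmark data: results[loop_name][metric_name] is a
# (n_repeats, n_iterations) array, mimicking BenchmarkResult's layout
rng = np.random.default_rng(0)
results = {
    "random_search": {"minimum_observed_value": rng.random((5, 20))},
    "bayes_opt": {"minimum_observed_value": rng.random((5, 20))},
}

fig, ax = plt.subplots()
for loop_name, metrics in results.items():
    values = metrics["minimum_observed_value"]
    # One line per loop: the mean over repeats against iteration number
    ax.plot(np.arange(values.shape[1]), values.mean(axis=0), label=loop_name)
ax.set_xlabel("iteration")
ax.set_ylabel("minimum_observed_value")
ax.legend()

# Analogous to BenchmarkPlot.save_plot(file_name)
file_name = os.path.join(tempfile.gettempdir(), "benchmark_plot.png")
fig.savefig(file_name)
```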

class emukit.benchmarking.loop_benchmarking.benchmark_result.BenchmarkResult(loop_names, n_repeats, metric_names)

Bases: object

add_results(loop_name, i_repeat, metric_name, metric_values)

Add results for a specific loop, metric and repeat combination

Parameters
  • loop_name (str) – Name of loop

  • i_repeat (int) – Index of repeat

  • metric_name (str) – Name of metric

  • metric_values (ndarray) – Metric values to add

Return type

None

extract_metric_as_array(loop_name, metric_name)

Returns results over all repeats and iterations for a specific metric and loop name pair

Parameters
  • loop_name (str) – Name of loop to return results for

  • metric_name (str) – Name of metric to extract

Return type

ndarray

Returns

2-d numpy array of shape (n_repeats, n_iterations)
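The storage contract above can be sketched with a small dict-backed class (illustrative only, not the emukit implementation): one list of per-repeat value arrays per loop/metric pair, stacked into a (n_repeats, n_iterations) array on extraction.

```python
import numpy as np

class SimpleBenchmarkResult:
    """Dict-backed sketch of BenchmarkResult's add/extract contract."""

    def __init__(self, loop_names, n_repeats, metric_names):
        # One slot per repeat for every (loop, metric) pair
        self._results = {
            loop: {metric: [None] * n_repeats for metric in metric_names}
            for loop in loop_names
        }

    def add_results(self, loop_name, i_repeat, metric_name, metric_values):
        self._results[loop_name][metric_name][i_repeat] = np.asarray(metric_values)

    def extract_metric_as_array(self, loop_name, metric_name):
        # Stack per-repeat 1-d arrays into shape (n_repeats, n_iterations)
        return np.vstack(self._results[loop_name][metric_name])

result = SimpleBenchmarkResult(["random_search"], 2, ["mse"])
result.add_results("random_search", 0, "mse", np.array([0.5, 0.4, 0.3]))
result.add_results("random_search", 1, "mse", np.array([0.6, 0.2, 0.1]))
print(result.extract_metric_as_array("random_search", "mse").shape)  # (2, 3)
```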

class emukit.benchmarking.loop_benchmarking.benchmarker.Benchmarker(loops_with_names, test_function, parameter_space, metrics, initial_design=None)

Bases: object

run_benchmark(n_initial_data=10, n_iterations=10, n_repeats=10)

Runs the benchmark. For each repeat, a fresh initial data set is generated, every loop is created and run for the specified number of iterations, and the results are collected.

Parameters
  • n_initial_data (int) – Number of points in the initial data set

  • n_iterations (int) – Number of iterations to run the loop for

  • n_repeats (int) – Number of times to run each loop with a different initial data set

Return type

BenchmarkResult

Returns

An instance of BenchmarkResult that contains all the tracked metrics for each loop

class emukit.benchmarking.loop_benchmarking.metrics.Metric

Bases: object

evaluate(loop, loop_state)

Evaluates the metric and stores the result in the loop state; implemented by subclasses.

Return type

None

reset()

Resets any internal metric state between benchmark repeats; implemented by subclasses that need it.

Return type

None

class emukit.benchmarking.loop_benchmarking.metrics.MeanSquaredErrorMetric(x_test, y_test, name='mean_squared_error')

Bases: Metric

Mean squared error metric stored in loop state metric dictionary with key “mean_squared_error”.

evaluate(loop, loop_state)

Calculate and store mean squared error

Parameters
  • loop (OuterLoop) – Outer loop

  • loop_state (LoopState) – Object containing history of the loop that we add results to

Return type

ndarray
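A sketch of the quantity this metric computes, with fabricated data: the test set and the "model prediction" below are stand-ins (the real metric queries the loop's surrogate model for a mean prediction at x_test).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical held-out test set
x_test = rng.random((50, 2))
y_test = np.sin(x_test.sum(axis=1, keepdims=True))

# Stand-in for the loop's model prediction at the test inputs
y_mean = y_test + 0.1 * rng.standard_normal(y_test.shape)

# Mean squared error over the held-out test set
mse = np.mean(np.square(y_test - y_mean))
print(mse)
```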

class emukit.benchmarking.loop_benchmarking.metrics.MinimumObservedValueMetric(name='minimum_observed_value')

Bases: Metric

The result is stored in the “metrics” dictionary in the loop state with the key “minimum_observed_value”

evaluate(loop, loop_state)

Evaluates minimum observed value

Parameters
  • loop (OuterLoop) – Outer loop

  • loop_state (LoopState) – Object containing history of the loop that we add results to

Return type

ndarray
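The computation is simply the smallest function value observed so far. A minimal sketch with fabricated observations standing in for loop_state.Y:

```python
import numpy as np

# loop_state.Y holds all function values observed so far;
# the metric records the smallest one at the current iteration
Y = np.array([[3.0], [1.5], [2.2], [0.7]])
minimum_observed = np.min(Y)
print(minimum_observed)  # 0.7
```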

class emukit.benchmarking.loop_benchmarking.metrics.TimeMetric(name='time')

Bases: Metric

Time taken between each iteration of the loop

evaluate(loop, loop_state)

Returns the difference between the current time and the time at which the reset method was last called

Return type

ndarray

reset()

Resets the start time

Return type

None
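The reset/evaluate pattern can be sketched with the standard library's time module (illustrative only; the class name below is a hypothetical stand-in, not emukit's implementation):

```python
import time

class SimpleTimeMetric:
    """Sketch of TimeMetric's reset/evaluate pattern."""

    def __init__(self):
        self.start_time = None

    def reset(self):
        # Record the reference time, e.g. at the start of an iteration
        self.start_time = time.time()

    def evaluate(self):
        # Elapsed wall-clock time since the last reset
        return time.time() - self.start_time

metric = SimpleTimeMetric()
metric.reset()
time.sleep(0.01)
elapsed = metric.evaluate()
```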

class emukit.benchmarking.loop_benchmarking.metrics.CumulativeCostMetric(name='cumulative_costs')

Bases: Metric

Accumulates the cost of each function evaluation. The result is stored in the “metrics” dictionary in the loop state with the key “cumulative_costs”

evaluate(loop, loop_state)

Computes the cumulative cost of all function evaluations after the last observed iteration

Parameters
  • loop (OuterLoop) – Outer loop

  • loop_state (LoopState) – Object containing history of the loop that we add results to

Return type

ndarray

Returns

cumulative cost

reset()

Resets the cumulative cost and the internal counter back to 0

Return type

None
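The bookkeeping described above can be sketched as a running sum plus an index of the last evaluation already counted (illustrative only; the class name and the explicit costs argument are stand-ins for the cost history kept in the loop state):

```python
import numpy as np

class SimpleCumulativeCostMetric:
    """Sketch of cumulative-cost accumulation across evaluate calls."""

    def __init__(self):
        self.reset()

    def reset(self):
        # Cumulative cost and internal counter both back to 0
        self.cumulative_cost = 0.0
        self.last_observed_index = 0

    def evaluate(self, costs):
        # costs: 1-d array of per-evaluation costs recorded so far;
        # only evaluations since the last call are added
        new_costs = costs[self.last_observed_index:]
        self.cumulative_cost += float(np.sum(new_costs))
        self.last_observed_index = len(costs)
        return self.cumulative_cost

metric = SimpleCumulativeCostMetric()
costs = np.array([1.0, 2.0, 3.0])
print(metric.evaluate(costs))                   # 6.0
print(metric.evaluate(np.append(costs, 4.0)))   # 10.0
```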

class emukit.benchmarking.loop_benchmarking.random_search.RandomSearch(space, x_init=None, y_init=None, cost_init=None)

Bases: OuterLoop
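Random search proposes each new evaluation point uniformly at random from the parameter space. A minimal numpy sketch of that strategy, assuming a simple 2-d box-bounded space and a hypothetical objective (neither taken from emukit):

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 2-d box-bounded parameter space
lower, upper = np.array([0.0, 0.0]), np.array([1.0, 1.0])

def objective(x):
    # Hypothetical objective, minimised at (0.5, 0.5)
    return np.sum((x - 0.5) ** 2)

# Random search: sample candidates uniformly, keep every observation
X = lower + (upper - lower) * rng.random((20, 2))
Y = np.array([objective(x) for x in X])

# Best point found so far
x_best, y_best = X[np.argmin(Y)], Y.min()
print(x_best, y_best)
```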

Module contents