emukit.benchmarking.loop_benchmarking package
Submodules
- class emukit.benchmarking.loop_benchmarking.benchmark_plot.BenchmarkPlot(benchmark_results, loop_colours=None, loop_line_styles=None, x_axis_metric_name=None, metrics_to_plot=None)
Bases: object
Creates a plot comparing the results from the different loops used during benchmarking
- make_plot()
Make one plot for each metric measured, comparing the different loop results against each other
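A benchmark plot typically draws, for each loop, the mean of a metric across repeats at every iteration (often with a spread band). The following is an illustrative sketch of that aggregation step only; the `aggregate_metric` function and the nested-list result format are assumptions for demonstration, not emukit internals.

```python
# Hypothetical sketch: aggregate one metric over repeats, the way a
# benchmark comparison plot would, computing the per-iteration mean and
# standard deviation. `results` is an illustrative nested list
# (one inner list of metric values per repeat), not emukit's format.

def aggregate_metric(results):
    n_repeats = len(results)
    n_iters = len(results[0])
    means, stds = [], []
    for i in range(n_iters):
        column = [results[r][i] for r in range(n_repeats)]
        mean = sum(column) / n_repeats
        var = sum((v - mean) ** 2 for v in column) / n_repeats
        means.append(mean)
        stds.append(var ** 0.5)
    return means, stds

means, stds = aggregate_metric([[3.0, 2.0, 1.0], [5.0, 2.0, 1.0]])
print(means)  # [4.0, 2.0, 1.0]
```

One mean/std curve per loop, plotted against iteration number, is what makes the loops directly comparable.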
- class emukit.benchmarking.loop_benchmarking.benchmark_result.BenchmarkResult(loop_names, n_repeats, metric_names)
Bases: object
- add_results(loop_name, i_repeat, metric_name, metric_values)
Add results for a specific loop, metric and repeat combination
- extract_metric_as_array(loop_name, metric_name)
Returns results over all repeats and iterations for a specific metric and loop name pair
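The storage pattern these two methods imply can be sketched as results keyed by (loop name, repeat index, metric name), extracted as one row per repeat and one column per iteration. The class below is a minimal illustration of that pattern, assuming list-based storage; it is not emukit's implementation.

```python
# Minimal sketch of the BenchmarkResult storage pattern: results keyed
# by (loop_name, i_repeat, metric_name), extractable as a
# repeats-by-iterations table. Names and internals are illustrative.

class ResultStore:
    def __init__(self, loop_names, n_repeats, metric_names):
        self._results = {
            (loop, r, metric): []
            for loop in loop_names
            for r in range(n_repeats)
            for metric in metric_names
        }
        self.n_repeats = n_repeats

    def add_results(self, loop_name, i_repeat, metric_name, metric_values):
        self._results[(loop_name, i_repeat, metric_name)] = list(metric_values)

    def extract_metric_as_array(self, loop_name, metric_name):
        # One row per repeat, one column per iteration.
        return [self._results[(loop_name, r, metric_name)]
                for r in range(self.n_repeats)]

store = ResultStore(["gp_ei"], 2, ["minimum_observed_value"])
store.add_results("gp_ei", 0, "minimum_observed_value", [3.0, 1.0])
store.add_results("gp_ei", 1, "minimum_observed_value", [2.5, 0.5])
print(store.extract_metric_as_array("gp_ei", "minimum_observed_value"))
# [[3.0, 1.0], [2.5, 0.5]]
```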
- class emukit.benchmarking.loop_benchmarking.benchmarker.Benchmarker(loops_with_names, test_function, parameter_space, metrics, initial_design=None)
Bases: object
- run_benchmark(n_initial_data=10, n_iterations=10, n_repeats=10)
Runs the benchmarking. For each repeat, an initial data set is generated, then every loop is created, run for the specified number of iterations, and its results are collected.
- Return type:
BenchmarkResult
- Returns:
An instance of BenchmarkResult that contains all the tracked metrics for each loop
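The control flow run_benchmark describes can be sketched as nested loops over repeats and benchmarked methods, collecting one metric value per iteration. The function below is an illustrative skeleton of that flow, with a `step` callback standing in for an actual optimization step; none of these names are emukit APIs.

```python
# Illustrative skeleton (not emukit's implementation) of the benchmarking
# control flow: every loop is run n_repeats times for n_iterations each,
# and metric values are recorded per (loop, repeat).

def run_benchmark_sketch(loop_names, n_repeats, n_iterations, step):
    """step(loop_name, iteration) -> metric value for that iteration."""
    results = {}
    for loop_name in loop_names:
        for i_repeat in range(n_repeats):
            values = [step(loop_name, i) for i in range(n_iterations)]
            results[(loop_name, i_repeat)] = values
    return results

res = run_benchmark_sketch(["random", "gp_ei"], 2, 3,
                           lambda name, i: float(i))
print(res[("random", 0)])  # [0.0, 1.0, 2.0]
```

In the real Benchmarker, `step` corresponds to one iteration of an OuterLoop and the recorded values come from the supplied metrics.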
- class emukit.benchmarking.loop_benchmarking.metrics.MeanSquaredErrorMetric(x_test, y_test, name='mean_squared_error')
Bases: Metric
Mean squared error metric stored in loop state metric dictionary with key “mean_squared_error”.
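The quantity this metric tracks is the mean squared error of the current model's predictions at the held-out test points. A minimal sketch of that computation, in plain Python for illustration (emukit computes it from the model's predictions as an ndarray):

```python
# Sketch of the quantity MeanSquaredErrorMetric records: the mean of the
# squared differences between predicted and true test values.

def mean_squared_error(y_pred, y_test):
    return sum((p - t) ** 2 for p, t in zip(y_pred, y_test)) / len(y_test)

print(mean_squared_error([1.0, 2.0], [1.0, 4.0]))  # 2.0
```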
- class emukit.benchmarking.loop_benchmarking.metrics.MinimumObservedValueMetric(name='minimum_observed_value')
Bases: Metric
The minimum objective value observed so far. The result is stored in the “metrics” dictionary in the loop state with the key “minimum_observed_value”
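Recorded at every iteration, this metric traces the running minimum of the observed objective values. A sketch of that trace, using plain Python lists for illustration:

```python
# Sketch of what MinimumObservedValueMetric yields over a run: at each
# iteration, the best (lowest) objective value observed so far.

def minimum_observed(y_values):
    best, trace = float("inf"), []
    for y in y_values:
        best = min(best, y)
        trace.append(best)
    return trace

print(minimum_observed([3.0, 5.0, 1.0, 2.0]))  # [3.0, 3.0, 1.0, 1.0]
```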
- class emukit.benchmarking.loop_benchmarking.metrics.TimeMetric(name='time')
Bases: Metric
Time taken between each iteration of the loop
- evaluate(loop, loop_state)
Returns the difference between the current time and the time at which the reset method was last called
- Return type:
ndarray
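The reset/evaluate pattern described above can be sketched as storing a start timestamp on reset and reporting the elapsed wall-clock time on each evaluate call. The class and attribute names below are illustrative, not emukit's exact internals:

```python
import time

# Sketch of the TimeMetric pattern: reset() remembers the current time,
# evaluate() returns the seconds elapsed since that reset.

class ElapsedTimer:
    def reset(self):
        self.start_time = time.time()

    def evaluate(self):
        return time.time() - self.start_time

timer = ElapsedTimer()
timer.reset()
elapsed = timer.evaluate()  # small non-negative float
```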
- class emukit.benchmarking.loop_benchmarking.metrics.CumulativeCostMetric(name='cumulative_costs')
Bases: Metric
Accumulates the cost of each function evaluation. The result is stored in the “metrics” dictionary in the loop state with the key “cumulative_costs”
- evaluate(loop, loop_state)
Computes the cumulative cost of all function evaluations after the last observed iteration
- Return type:
ndarray
- Returns:
Cumulative cost
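The accumulation this metric performs is a running sum of the per-evaluation costs, recorded after each new evaluation. A sketch of that running total, in plain Python for illustration:

```python
# Sketch of the accumulation CumulativeCostMetric performs: a running
# total of the cost of every function evaluation seen so far.

def cumulative_costs(costs):
    total, trace = 0.0, []
    for c in costs:
        total += c
        trace.append(total)
    return trace

print(cumulative_costs([1.5, 0.5, 2.0]))  # [1.5, 2.0, 4.0]
```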