bayesvalidrox.surrogate_models.engine.Engine

class bayesvalidrox.surrogate_models.engine.Engine(MetaMod, Model, ExpDes)

Bases: object

__init__(MetaMod, Model, ExpDes)

Methods

__init__(MetaMod, Model, ExpDes)

choose_next_sample([sigma2, n_candidates, var])

Runs optimal sequential design.

dual_annealing(method, Bounds, sigma2Dict, ...)

Exploration algorithm to find the optimum parameter space.

eval_metamodel([samples, nsamples, ...])

Evaluates metamodel at the requested samples.

run_util_func(method, candidates, index[, ...])

Runs the utility function based on the given method.

start_engine()

Do all the preparations that need to be run before the actual training.

tradeoff_weights(tradeoff_scheme, old_EDX, ...)

Calculates weights for exploration scores based on the requested scheme: None, equal, epsilon-decreasing and adaptive.

train_normal([parallel, verbose, save])

Trains surrogate on static samples only.

train_seq_design([parallel, verbose])

Starts the adaptive sequential design for refining the surrogate model by selecting training points in a sequential manner.

train_sequential([parallel, verbose])

Train the surrogate in a sequential manner.

util_AlphOptDesign(candidates[, var])

Enriches the experimental design with a new sample chosen via the requested alphabetic optimality criterion by exploring the space with a number of candidate sampling points.

util_BayesianActiveDesign(y_hat, std, sigma2Dict)

Computes scores based on Bayesian active design criterion (var).

util_BayesianDesign(X_can, X_MC, sigma2Dict)

Computes scores based on Bayesian sequential design criterion (var).

util_VarBasedDesign(X_can, index[, util_func])

Computes the exploitation scores based on active learning MacKay (ALM) and active learning Cohn (ALC). Paper: Sequential Design with Mutual Information for Computer Experiments (MICE): Emulation of a Tsunami Model by Beck and Guillas (2016).

choose_next_sample(sigma2=None, n_candidates=5, var='DKL')

Runs optimal sequential design.

Parameters

sigma2 : dict, optional

A dictionary containing the measurement errors (sigma^2). The default is None.

n_candidates : int, optional

Number of candidate samples. The default is 5.

var : string, optional

Utility function. The default is ‘DKL’.

Raises

NameError

Wrong utility function.

Returns

Xnew : array of shape (n_samples, n_params)

Selected new training point(s).

dual_annealing(method, Bounds, sigma2Dict, var, Run_No, verbose=False)

Exploration algorithm to find the optimum parameter space.

Parameters

method : string

Exploitation method: VarOptDesign, BayesActDesign and BayesOptDesign.

Bounds : list of tuples

List of lower and upper boundaries of parameters.

sigma2Dict : dict

A dictionary containing the measurement errors (sigma^2).

var : string

Utility function.

Run_No : int

Run number.

verbose : bool, optional

Print out a summary. The default is False.

Returns

Run_No : int

Run number.

array

Optimal candidate.

eval_metamodel(samples=None, nsamples=None, sampling_method='random', return_samples=False, parallel=False)

Evaluates metamodel at the requested samples. One can also generate nsamples.

Parameters

samples : array of shape (n_samples, n_params), optional

Samples to evaluate metamodel at. The default is None.

nsamples : int, optional

Number of samples to generate if samples is not provided. The default is None.

sampling_method : str, optional

Type of sampling if samples is not provided. The default is ‘random’.

return_samples : bool, optional

Return the generated samples if samples is not provided. The default is False.

parallel : bool, optional

Set to True if the evaluations should be done in parallel. The default is False.

Returns

mean_pred : dict

Mean of the predictions.

std_pred : dict

Standard deviation of the predictions.

run_util_func(method, candidates, index, sigma2Dict=None, var=None, X_MC=None)

Runs the utility function based on the given method.

Parameters

method : string

Exploitation method: VarOptDesign, BayesActDesign and BayesOptDesign.

candidates : array of shape (n_samples, n_params)

All candidate parameter sets.

index : int

ExpDesign index.

sigma2Dict : dict, optional

A dictionary containing the measurement errors (sigma^2). The default is None.

var : string, optional

Utility function. The default is None.

X_MC : array, optional

Monte-Carlo samples used by the Bayesian design criteria. The default is None.

Returns

index : int

ExpDesign index (as passed in).

List

Scores.

start_engine() → None

Do all the preparations that need to be run before the actual training.

Returns

None

tradeoff_weights(tradeoff_scheme, old_EDX, old_EDY)

Calculates weights for exploration scores based on the requested scheme: None, equal, epsilon-decreasing and adaptive.

None: No exploration.

equal: Same weights for exploration and exploitation scores.

epsilon-decreasing: Start with more exploration and increase the influence of exploitation along the way with an exponential decay function.

adaptive: An adaptive method based on:

Liu, Haitao, Jianfei Cai, and Yew-Soon Ong. “An adaptive sampling approach for Kriging metamodeling by maximizing expected prediction error.” Computers & Chemical Engineering 106 (2017): 171-182.

Parameters

tradeoff_scheme : string

Trade-off scheme for exploration and exploitation scores.

old_EDX : array of shape (n_samples, n_params)

Old experimental design (training points).

old_EDY : dict

Old model responses (targets).

Returns

exploration_weight : float

Exploration weight.

exploitation_weight : float

Exploitation weight.
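For illustration, the non-adaptive schemes can be sketched as below. This is not the library's implementation: the function name, the inputs n_old (points already in the design) and n_total (planned budget), and the decay rate 3.0 are assumptions for this sketch; the actual method derives its state from old_EDX and old_EDY.

```python
import math

def tradeoff_weights_sketch(scheme, n_old, n_total):
    """Illustrative exploration/exploitation weights (not library code)."""
    if scheme is None:
        return 0.0, 1.0          # no exploration at all
    if scheme == 'equal':
        return 0.5, 0.5          # same weight for both scores
    if scheme == 'epsilon-decreasing':
        # start exploration-heavy, decay exponentially toward exploitation
        epsilon = math.exp(-3.0 * n_old / max(n_total, 1))
        return epsilon, 1.0 - epsilon
    raise NameError(f"Unknown trade-off scheme: {scheme}")

# usage: weights shift toward exploitation as the design grows
w_explore_early, _ = tradeoff_weights_sketch('epsilon-decreasing', 0, 20)
w_explore_late, _ = tradeoff_weights_sketch('epsilon-decreasing', 15, 20)
```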

train_normal(parallel=False, verbose=False, save=False) → None

Trains the surrogate on static samples only. Samples are taken from the experimental design and the specified model is run on them. Alternatively, the samples can be read in from a provided HDF5 file.

Returns

None

train_seq_design(parallel=False, verbose=False)

Starts the adaptive sequential design for refining the surrogate model by selecting training points in a sequential manner.

Returns

MetaModel : object

Meta model object.

train_sequential(parallel=False, verbose=False) → None

Train the surrogate in a sequential manner. First build and train everything on the static samples, then iterate, choosing more samples and refitting the surrogate on them.

Returns

None

util_AlphOptDesign(candidates, var='D-Opt')

Enriches the experimental design with a new sample chosen via the requested alphabetic optimality criterion by exploring the space with a number of candidate sampling points.

Ref: Hadigol, M., & Doostan, A. (2018). Least squares polynomial chaos expansion: A review of sampling strategies., Computer Methods in Applied Mechanics and Engineering, 332, 382-407.

Parameters

candidates : int

Number of candidate points to be searched

var : string

Alphabetic optimality criterion. The default is ‘D-Opt’.

Returns

X_new : array of shape (1, n_params)

The new sampling location in the input space.
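The idea behind D-optimality can be illustrated with a minimal sketch: among candidate points, prefer the one that maximizes the log-determinant of the information matrix of the enlarged design. The function name and the plain linear basis [1, x] are assumptions for this sketch; the library works with its polynomial-chaos basis instead.

```python
import numpy as np

def d_opt_score_sketch(X_old, x_can):
    """Illustrative D-optimality score (not library code):
    log-determinant of the information matrix Phi^T Phi after
    adding candidate x_can to the existing design X_old."""
    X = np.vstack([X_old, x_can])
    # basis matrix for a plain linear model [1, x_1, ..., x_d]
    Phi = np.hstack([np.ones((X.shape[0], 1)), X])
    sign, logdet = np.linalg.slogdet(Phi.T @ Phi)
    return logdet if sign > 0 else -np.inf

# usage: a point that spreads out the design beats a near-duplicate
X_old = np.array([[0., 0.], [1., 1.]])
score_spread = d_opt_score_sketch(X_old, np.array([[0.5, -0.5]]))
score_dup = d_opt_score_sketch(X_old, np.array([[0., 0.]]))
```

The D-criterion minimizes the volume of the confidence ellipsoid of the regression coefficients, which is why duplicated (uninformative) points score poorly.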

util_BayesianActiveDesign(y_hat, std, sigma2Dict, var='DKL')

Computes scores based on Bayesian active design criterion (var).

It is based on the following paper: Oladyshkin, Sergey, Farid Mohammadi, Ilja Kroeker, and Wolfgang Nowak. “Bayesian3 active learning for the Gaussian process emulator using information theory.” Entropy 22, no. 8 (2020): 890.

Parameters

y_hat : dict

Mean metamodel predictions.

std : dict

Standard deviation of the metamodel predictions.

sigma2Dict : dict

A dictionary containing the measurement errors (sigma^2).

var : string, optional

BAL design criterion. The default is ‘DKL’.

Returns

float

Score.
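As a minimal sketch of the DKL criterion: the Kullback-Leibler divergence between posterior and prior can be estimated from Monte-Carlo model predictions under a Gaussian likelihood via likelihood weighting. The function name and all input names here are assumptions for this sketch, not the library API.

```python
import numpy as np

def dkl_score_sketch(y_hat_mc, data, sigma2):
    """Illustrative DKL (information-gain) score (not library code).

    y_hat_mc : array (n_mc, n_obs), Monte-Carlo model predictions
    data     : array (n_obs,), observed data
    sigma2   : measurement error variance (scalar)
    """
    # Gaussian log-likelihood of each MC sample
    resid = y_hat_mc - data
    loglik = -0.5 * np.sum(resid**2 / sigma2 + np.log(2*np.pi*sigma2), axis=1)
    # shift for numerical stability, then evidence Z = E_prior[L]
    shift = loglik.max()
    w = np.exp(loglik - shift)
    log_evidence = shift + np.log(w.mean())
    # DKL(posterior || prior) = E_post[log L] - log Z,
    # with self-normalized posterior weights w / sum(w)
    post_w = w / w.sum()
    return float(np.sum(post_w * loglik) - log_evidence)

# usage: a more precise measurement (smaller sigma^2) is more informative
rng = np.random.default_rng(0)
y_mc = rng.normal(size=(2000, 1))      # prior predictive samples
data = np.array([0.5])
dkl_tight = dkl_score_sketch(y_mc, data, 0.01)
dkl_loose = dkl_score_sketch(y_mc, data, 10.0)
```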

util_BayesianDesign(X_can, X_MC, sigma2Dict, var='DKL')

Computes scores based on Bayesian sequential design criterion (var).

Parameters

X_can : array of shape (n_samples, n_params)

Candidate samples.

X_MC : array

Monte-Carlo parameter samples.

sigma2Dict : dict

A dictionary containing the measurement errors (sigma^2).

var : string, optional

Bayesian design criterion. The default is ‘DKL’.

Returns

float

Score.

util_VarBasedDesign(X_can, index, util_func='Entropy')

Computes the exploitation scores based on active learning MacKay (ALM) and active learning Cohn (ALC). Paper: Sequential Design with Mutual Information for Computer Experiments (MICE): Emulation of a Tsunami Model by Beck and Guillas (2016).

Parameters

X_can : array of shape (n_samples, n_params)

Candidate samples.

index : int

Model output index.

util_func : string, optional

Exploitation utility function. The default is ‘Entropy’.

Returns

float

Score.
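A minimal sketch of the ALM idea: score each candidate by the surrogate's predictive variance, or equivalently (for ranking) by the Gaussian differential entropy for the ‘Entropy’ option. The function name and the input std_pred are assumptions for this sketch, not the library API.

```python
import numpy as np

def alm_score_sketch(std_pred):
    """Illustrative ALM exploitation scores (not library code).

    std_pred : per-candidate predictive standard deviation of the surrogate.
    Returns (variance, entropy); both rank candidates identically, since
    the Gaussian entropy 0.5*log(2*pi*e*sigma^2) is monotone in sigma^2.
    """
    variance = np.asarray(std_pred, dtype=float) ** 2    # ALM score
    entropy = 0.5 * np.log(2 * np.pi * np.e * variance)  # 'Entropy' variant
    return variance, entropy

# usage: the candidate with the largest predictive uncertainty wins
var, ent = alm_score_sketch([0.1, 0.5, 0.2])
best = int(np.argmax(ent))
```

ALM therefore needs no extra model runs, whereas ALC-style criteria additionally quantify how adding the candidate would reduce variance elsewhere in the input space.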