bayesvalidrox.surrogate_models.gaussian_process_sklearn.GPESkl

class bayesvalidrox.surrogate_models.gaussian_process_sklearn.GPESkl(input_obj, meta_model_type='GPE', gpe_reg_method='lbfgs', autoSelect=False, kernel_type='RBF', isotropy=True, noisy=False, nugget=1e-09, n_bootstrap_itrs=1, dim_red_method='no', verbose=False)

Bases: MetaModel

GP MetaModel using the Scikit-Learn library

This class trains a surrogate model of type Gaussian Process Regression. It accepts an input object (input_obj) containing the specification of the distributions for uncertain parameters and a model object with instructions on how to run the computational model.
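
A minimal instantiation sketch; `inputs` stands for an already-configured bayesvalidrox Input object (its setup is omitted here):

>>> from bayesvalidrox.surrogate_models.gaussian_process_sklearn import GPESkl
>>> # `inputs` is assumed to be a configured Input object describing
>>> # the uncertain model parameters.
>>> gpe = GPESkl(input_obj=inputs, kernel_type='Matern',  # 'RBF', 'Matern' or 'RQ'
...              isotropy=False,  # one length scale per input dimension
...              noisy=True,      # add a WhiteKernel and optimize the noise level
...              nugget=1e-9)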

Attributes

input_obj: obj

Input object with the information on the model input parameters.

_meta_model_type: str

Surrogate model type, in this case GPE. Default is 'GPE'.

gpe_reg_method: str

GPE regression method to compute the kernel hyperparameters. The following regression method is available for the Scikit-Learn library:

1. LBFGS

Default is 'lbfgs'.

autoSelect: bool

Flag to loop through the available kernels and select the best one based on the BME criterion. Default is False.

kernel_type: str

Type of kernel to use and train for. The following Scikit-Learn kernels are available:

1. RBF: squared exponential kernel
2. Matern: Matern kernel
3. RQ: rational quadratic kernel

Default is 'RBF'. A sketch of how these kernel options map onto Scikit-Learn objects follows this attribute list.

isotropy: bool

Flag to train an isotropic kernel (one length scale for all input parameters) or an anisotropic kernel (one length scale for each input dimension). True for an isotropic kernel, False for an anisotropic kernel. Default is True.

noisy: bool

Consider a WhiteKernel for regularization purposes and optimize the noise hyperparameter. Default is False.

nugget: float

Constant value added to the kernel matrix for regularization purposes (not optimized). Default is 1e-9.

bootstrap_method: str

Bootstrapping method. Options are 'normal' and 'fast'. The default is 'fast', meaning that in each iteration except the first, only the coefficients are recalculated with the ordinary least squares method.

n_bootstrap_itrs: int

Number of iterations for the bootstrap sampling. The default is 1.

dim_red_method: str

Dimensionality reduction method for the output space. The available method is based on principal component analysis (PCA). The default is 'no'. There are two ways to select the number of components: via a threshold on the percentage of explained variance (between 0 and 100, Option A), or by directly prescribing the number of components (Option B):

>>> MetaModelOpts = MetaModel()
>>> MetaModelOpts.dim_red_method = 'PCA'
>>> MetaModelOpts.var_pca_threshold = 99.999  # Option A
>>> MetaModelOpts.n_pca_components = 12 # Option B

verbose: bool

Prints summary of the regression results. Default is False.
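
For orientation, a rough sketch of how the kernel-related attributes plausibly translate into Scikit-Learn objects; the exact construction inside GPESkl may differ:

>>> from sklearn.gaussian_process import GaussianProcessRegressor
>>> from sklearn.gaussian_process.kernels import Matern, WhiteKernel
>>> kernel = Matern(length_scale=[1.0, 1.0, 1.0])  # anisotropic: one scale per dimension
>>> kernel = kernel + WhiteKernel()                # only when noisy=True
>>> gpr = GaussianProcessRegressor(kernel=kernel, alpha=1e-9)  # alpha acts as the nugget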

__init__(input_obj, meta_model_type='GPE', gpe_reg_method='lbfgs', autoSelect=False, kernel_type='RBF', isotropy=True, noisy=False, nugget=1e-09, n_bootstrap_itrs=1, dim_red_method='no', verbose=False)

Methods

__init__(input_obj[, meta_model_type, ...])

adaptive_regression(X, y, varIdx[, verbose])

Adaptively fits the GPE model by comparing different Kernel options

add_InputSpace()

Instantiates the experimental design object.

build_kernels()

Initializes the different possible kernels, and selects the ones to train for, depending on the input options.

build_metamodel()

Builds the parts of the metamodel that are needed before fitting.

check_is_gaussian(n_bootstrap_itrs)

Checks whether the metamodel returns a mean and a standard deviation.

compute_moments()

Computes the first two moments of the metamodel.

copy_meta_model_opts()

Convenience function to copy the metamodel options.

eval_metamodel(samples[, b_i])

Evaluates GP metamodel at the requested samples.

fit(X, y[, parallel, verbose, b_i])

Fits the surrogate to the given data (samples X, outputs y).

pca_transformation(target, n_pca_components)

Transforms the targets (outputs) via Principal Component Analysis.

scale_x(X, transform_obj)

Transforms the inputs based on the scaling done during training.

transform_x(X[, transform_type])

Scales the inputs (X) during training using either normalization ([0, 1]) or standardization (N(0, 1)).

adaptive_regression(X, y, varIdx, verbose=False)

Adaptively fits the GPE model by comparing different Kernel options

Parameters

X: array of shape (n_samples, ndim)

Training set. These samples should be already transformed.

y: array of shape (n_samples,)

Target values, i.e. simulation results for the Experimental design.

varIdx: int

Index of the output.

verbose: bool, optional

Print out summary. The default is False.

Returns

returnVars: dict

Dictionary containing the fitted estimator and the BME score.
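
A usage sketch with hypothetical, already-transformed training data (`X_scaled`, `y_out`); the exact keys of the returned dictionary are not documented here:

>>> ret = gpe.adaptive_regression(X_scaled, y_out, varIdx=0, verbose=True)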

add_InputSpace()

Instantiates the experimental design object.

Returns

None.

class auto_vivification

Bases: dict

Implementation of Perl's autovivification feature.

Source: https://stackoverflow.com/a/651879/18082457
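
For illustration, missing keys are created on first access, so nested assignment works without initializing the intermediate dictionaries:

>>> d = auto_vivification()
>>> d['outputs']['Z'] = 1.0  # the inner dict is created automatically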

build_kernels()

Initializes the different possible kernels, and selects the ones to train for, depending on the input options.

ToDo: Add additional kernels.
ToDo: Add option to include a user-defined kernel.

Returns

List

The kernels to iterate over.

List

The names of the kernels to iterate over.
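
A usage sketch, assuming the two returned lists are unpacked in order:

>>> kernels, kernel_names = gpe.build_kernels()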

build_metamodel() None

Builds the parts of the metamodel that are needed before fitting. This is executed outside of any loops related to e.g. bootstrapping or transformations such as PCA.

Returns

None

check_is_gaussian(n_bootstrap_itrs) bool

Checks whether the metamodel returns a mean and a standard deviation.

Returns

bool

True, since a GPE provides both a mean and a standard deviation.

compute_moments()

Computes the first two moments of the metamodel.

Returns

means: dict

The first moment (mean) of the surrogate.

stds: dict

The second moment (standard deviation) of the surrogate.
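
A usage sketch; both return values are dictionaries, presumably keyed by output name:

>>> means, stds = gpe.compute_moments()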

copy_meta_model_opts()

Convenience function to copy the metamodel options.

Returns

metamod_copy: object

The copied object.

eval_metamodel(samples, b_i=0)

Evaluates GP metamodel at the requested samples.

Parameters

samples: array of shape (n_samples, ndim)

Samples to evaluate the metamodel at.

b_i: int, optional

Bootstrap iteration index. The default is 0.

Returns

mean_pred: dict

Mean of the predictions.

std_pred: dict

Standard deviation of the predictions.
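
A usage sketch; `samples` is a hypothetical array of shape (n_samples, ndim):

>>> mean_pred, std_pred = gpe.eval_metamodel(samples)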

fit(X: array, y: dict, parallel=False, verbose=False, b_i=0)

Fits the surrogate to the given data (samples X, outputs y). Note here that the samples X should be the transformed samples provided by the experimental design if the transformation is used there.

Parameters

X: 2D list or np.array of shape (#samples, #dim)

The parameter value combinations that the model was evaluated at.

y: dict of 2D lists or arrays of shape (#samples, #timesteps)

The respective model evaluations.

parallel: bool

Set to True to run the training in parallel for various keys. The default is False.

verbose: bool

Set to True to obtain more information during runtime. The default is False.

b_i: int, optional

Bootstrap iteration index. The default is 0.

Returns

None.
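
A minimal training sketch with synthetic data; the output key 'Z' and all shapes are illustrative assumptions:

>>> import numpy as np
>>> X = np.random.uniform(0, 1, size=(50, 3))  # 50 samples, 3 input dimensions
>>> y = {'Z': np.random.randn(50, 10)}         # one output with 10 time steps
>>> gpe.fit(X, y, verbose=True)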

pca_transformation(target, n_pca_components)

Transforms the targets (outputs) via Principal Component Analysis. The number of features is set by self.n_pca_components. If this is not given, self.var_pca_threshold is used as a threshold.

ToDo: Check the inputs needed for this class; there is an error when PCA is used.
ToDo: From the y_transformation() function, a dictionary is being sent instead of an array for target.

Parameters

target: array of shape (n_samples,)

Target values.

n_pca_components: int

Number of principal components to use. If not given, self.var_pca_threshold is used as a threshold instead.

Returns

pca: obj

Fitted sklearn PCA object.

OutputMatrix: array of shape (n_samples,)

Transformed target values.

n_pca_components: int

Number of selected principal components.
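
A usage sketch following the documented return order; `y_array` is a hypothetical array of model outputs:

>>> pca, y_transformed, n_comp = gpe.pca_transformation(y_array, n_pca_components=12)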

static scale_x(X: array, transform_obj: object)

Transforms the inputs based on the scaling done during training.

Parameters

X: 2D list or np.array of shape (#samples, #dim)

The parameter value combinations to evaluate the model with.

transform_obj: Scikit-Learn object

Class instance to transform the inputs.

Returns

np.array of shape (#samples, #dim)

Transformed input sets.

static transform_x(X: array, transform_type='norm')

Scales the inputs (X) during training using either normalization ([0, 1]) or standardization (N(0, 1)). If None, the inputs are not scaled.

Parameters

X: 2D list or np.array of shape (#samples, #dim)

The parameter value combinations to train the model with.

transform_type: str

Transformation to apply to the input parameters. The default is 'norm'.

Returns

np.array of shape (#samples, #dim)

Transformed input parameters.

obj: Scaler object

Transformation object, for future transformations during surrogate evaluation.
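
A sketch tying the two static methods together: fit the scaler on the training inputs, then reuse it for new samples at evaluation time (`X_train` and `X_new` are hypothetical arrays):

>>> X_train_scaled, scaler = GPESkl.transform_x(X_train, transform_type='norm')
>>> X_new_scaled = GPESkl.scale_x(X_new, scaler)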