uncertainty_wizard.models package

Module contents

Uncertainty wizard models and corresponding utilities

class uncertainty_wizard.models.LazyEnsemble(num_models: int, model_save_path: str, delete_existing: bool = True, expect_model: bool = False, default_num_processes: int = 1)

Bases: _UwizModel

LazyEnsembles are uncertainty wizard's implementation of Deep Ensembles: multiple atomic models are trained for the same problem, and the output distribution (and thus the uncertainty) is then inferred by predicting on all atomic models.

**Multi-Processing**

This ensemble implementation is lazy as it does not keep the atomic models in memory (or, even worse, in the tf graph). Instead, atomic models are persisted on the file system and only loaded when needed, then discarded immediately afterwards. To further increase performance, in particular on high-performance GPU-powered hardware setups where training a single model instance does not use the full GPU resources, LazyEnsemble allows creating multiple concurrent tensorflow sessions, each running a dedicated model in parallel. The number of processes to use can be specified on essentially any LazyEnsemble function.

Models are loaded into a context, e.g. a gpu configuration that was set up before the model was loaded. The default context, if multiple processes are used, sets the GPU usage to dynamic memory growth. Pay attention: by using too many processes, it is easy to exhaust your system's resources. We thus recommend setting the number of processes conservatively, observing the system load, and increasing the number of processes if possible. The default contexts are uwiz.models.ensemble_utils.DynamicGpuGrowthContextManager if multiprocessing is enabled, and uwiz.models.ensemble_utils.NoneContextManager otherwise.

Note: Multi-processing can be disabled by setting the number of processes to 0. Predictions will then be made in the main process on the main tensorflow session. Attention: in this case, the tensorflow session will be cleared after every model execution!

**The LazyEnsemble Interface & Workflow**

LazyEnsemble exposes three central functions: create, modify and consume. In general, each of these functions expects a picklable function as input which creates, modifies or consumes a plain keras model. Please refer to the documentation of the specific methods for details. Furthermore, LazyEnsemble exposes utility methods wrapping the above-listed methods, e.g. fit and predict_quantified, which expect numpy array inputs and automatically serialize and deserialize them to be used in parallel processes.
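A minimal end-to-end sketch is shown below. It assumes MNIST as a stand-in dataset and uses 'mean_softmax' as the quantifier alias; check the quantifier documentation for the aliases available for ensembles. Paths and hyperparameters are illustrative only.

    import tensorflow as tf
    import uncertainty_wizard as uwiz

    # Hypothetical save path and model count; adjust to your setup.
    ensemble = uwiz.models.LazyEnsemble(
        num_models=5,
        model_save_path="/tmp/demo_ensemble",
        default_num_processes=2,
    )

    def create_and_train(model_id: int):
        # Must be picklable: load the data inside the function instead
        # of capturing it from the surrounding scope.
        (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
        model = tf.keras.Sequential([
            tf.keras.layers.Flatten(input_shape=(28, 28)),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(10, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
        history = model.fit(x_train / 255.0, y_train, epochs=2, verbose=0)
        return model, history.history  # (model, picklable report)

    ensemble.create(create_function=create_and_train)

    _, (x_test, _) = tf.keras.datasets.mnist.load_data()
    predictions, confidences = ensemble.predict_quantified(
        x_test / 255.0, quantifier="mean_softmax"
    )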

**Stability of Lazy Ensembles**

To optimize GPU use, LazyEnsemble relies on some of tensorflow's experimental features and is thus, by extension, also to be considered experimental.

consume(consume_function: Callable[[int, Model], T], num_processes: Optional[int] = None, context: Optional[Callable[[int], EnsembleContextManager]] = None, models: Optional[Iterable[int]] = None) List[T]

This function uses the atomic models in the ensemble without changing them. At its core stands a consume_function: this custom function takes as input the id of the model to be consumed (which may be ignored) and the model instance, and is expected to return a picklable consumption result. You should refrain from returning extremely large consumption results, as they are kept in memory and may occupy too many system resources. In such a case, you may want to persist the results yourself and return None as the consumption result instead.

Attention: While this function can be used for predictions, you’d probably prefer to use ensemble.quantify_predictions(…) instead, which wraps this function and allows applying quantifiers for overall prediction inference and uncertainty quantification.

Parameters:
  • consume_function – A picklable function to consume atomic models, as explained in the description above.

  • num_processes – The number of processes to use. Default: The default or value specified when creating the lazy ensemble.

  • context – A contextmanager which prepares a newly created process for execution (e.g. by configuring the gpus). See the class docstring for an explanation of the default values.

  • models – The ids of the atomic models to be consumed. If None (default), all models will be consumed.

Returns:

The consumption results returned by the consume_function executions.
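For illustration, a consume_function might evaluate every atomic model on held-out data. This is a sketch reusing the MNIST stand-in from above; it assumes the atomic models were compiled with a loss.

    def evaluate_model(model_id, model):
        # Load the test data inside the function; only the small,
        # picklable result travels back to the parent process.
        _, (x_test, y_test) = tf.keras.datasets.mnist.load_data()
        loss = model.evaluate(x_test / 255.0, y_test, verbose=0)
        return model_id, loss

    # One entry per atomic model.
    results = ensemble.consume(consume_function=evaluate_model)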

create(create_function: Callable[[int], Tuple[Model, T]], num_processes: Optional[int] = None, context: Optional[Callable[[int, dict], EnsembleContextManager]] = None, models: Optional[Iterable[int]] = None) List[T]

This function takes care of the creation of new atomic models for this ensemble instance. At its core stands a create_function: this custom function takes as input the id of the model to be generated (which may be ignored), and is expected to return the newly created keras model and some custom, picklable creation report (e.g. the fit history). If not required, the returned report may be None. You should refrain from returning extremely large report objects, as they are kept in memory and may occupy too many system resources.

Parameters:
  • create_function – A picklable function to create new atomic models, as explained in the description above.

  • num_processes – The number of processes to use. Default: The default or value specified when creating the lazy ensemble.

  • context – A contextmanager which prepares a newly created process for execution (e.g. by configuring the gpus). See the class docstring for an explanation of the default values.

  • models – The ids of the atomic models to be created. If None (default), all models will be created.

Returns:

The reports returned by the create_function executions.
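The following sketch shows a create_function that only builds and compiles the atomic models, deferring training to fit or modify. Returning None as the report is allowed; tf.keras.utils.set_random_seed requires tensorflow 2.7 or newer.

    def build_untrained(model_id: int):
        # Vary the random initialization per atomic model via its id.
        tf.keras.utils.set_random_seed(model_id)
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(32, activation="relu", input_shape=(10,)),
            tf.keras.layers.Dense(2, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
        return model, None  # no creation report needed

    ensemble.create(create_function=build_untrained)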

fit(x: Optional[ndarray] = None, y: Optional[ndarray] = None, batch_size: Optional[int] = None, epochs: int = 1, verbose: int = 1, callbacks=None, validation_split: float = 0.0, validation_data: Optional[Tuple[ndarray, ndarray]] = None, shuffle: bool = True, class_weight: Optional[Dict[int, float]] = None, sample_weight: Optional[ndarray] = None, initial_epoch: int = 0, steps_per_epoch: Optional[int] = None, validation_steps: Optional[int] = None, validation_freq: int = 1, pickle_arrays=True, num_processes=None, context: Optional[Callable[[int], EnsembleContextManager]] = None)

Easy access to the keras fit function. As the inputs are pickled and distributed to the processes, only numpy arrays are accepted for the data params and no callbacks can be provided.

If this is too restrictive for your use case, consider using model.modify to set up your fit process and generate the datasets / callbacks right in the map_function.

Parameters:
  • x – See tf.keras.Model.fit documentation.

  • y – See tf.keras.Model.fit documentation.

  • batch_size – See tf.keras.Model.fit documentation.

  • epochs – See tf.keras.Model.fit documentation.

  • verbose – See tf.keras.Model.fit documentation.

  • callbacks – See tf.keras.Model.fit documentation.

  • validation_split – See tf.keras.Model.fit documentation.

  • validation_data – See tf.keras.Model.fit documentation.

  • shuffle – See tf.keras.Model.fit documentation.

  • class_weight – See tf.keras.Model.fit documentation.

  • sample_weight – See tf.keras.Model.fit documentation.

  • initial_epoch – See tf.keras.Model.fit documentation.

  • steps_per_epoch – See tf.keras.Model.fit documentation.

  • validation_steps – See tf.keras.Model.fit documentation.

  • validation_freq – See tf.keras.Model.fit documentation.

  • pickle_arrays – If true, the arrays are stored to the file system and deserialized in every child process to save memory.

  • num_processes – The number of processes to use. Default: The default or value specified when creating the lazy ensemble.

  • context – A contextmanager which prepares a newly created process for execution (e.g. by configuring the gpus). See the class docstring for an explanation of the default values.

Returns:

The fit histories of the atomic models.
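Assuming the atomic models have already been created and compiled (as in the create sketch above), a call might look as follows; the toy data is hypothetical.

    import numpy as np

    x = np.random.rand(128, 10).astype("float32")
    y = np.random.randint(0, 2, size=128)
    histories = ensemble.fit(x, y, epochs=3, batch_size=32, verbose=0)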

modify(map_function: Callable[[int, Model], Tuple[Model, T]], num_processes: Optional[int] = None, context: Optional[Callable[[int], EnsembleContextManager]] = None, models: Optional[Iterable[int]] = None) List[T]

This function takes care of modifications to previously generated atomic models of this ensemble instance. At its core stands a map_function: this custom function takes as input the id of the model to be modified (which may be ignored) and the model instance, and is expected to return the modified (or replaced) keras model and some custom, picklable modification report (e.g. the fit history). If not required, the returned report may be None. You should refrain from returning extremely large report objects, as they are kept in memory and may occupy too many system resources.

Attention: Whenever possible, try to reduce the number of calls to this function. For example, it is often possible to train models as part of the ‘create’ call. This results in the creation of fewer processes and thus faster overall performance.

Parameters:
  • map_function – A picklable function to modify atomic models, as explained in the description above.

  • num_processes – The number of processes to use. Default: The default or value specified when creating the lazy ensemble.

  • context – A contextmanager which prepares a newly created process for execution (e.g. by configuring the gpus). See the class docstring for an explanation of the default values.

  • models – The ids of the atomic models to be modified. If None (default), all models will be modified.

Returns:

The reports returned by the map_function executions.
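A sketch of a map_function that fine-tunes every atomic model with a lower learning rate, again loading the data inside the picklable function:

    def fine_tune(model_id, model):
        # Re-compile with a smaller learning rate, then train one more epoch.
        model.compile(
            optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),
            loss="sparse_categorical_crossentropy",
        )
        (x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
        history = model.fit(x_train / 255.0, y_train, epochs=1, verbose=0)
        return model, history.history  # (modified model, picklable report)

    reports = ensemble.modify(map_function=fine_tune)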

predict_quantified(x: ndarray, quantifier: Union[Quantifier, Iterable[Union[str, Quantifier]]], batch_size: int = 32, verbose: int = 0, steps=None, as_confidence: Union[None, bool] = None, num_processes=None, context=None, models: Optional[Iterable[int]] = None, return_alias_dict: bool = False)

Utility function to make quantified predictions on numpy arrays. Note: the numpy arrays are replicated in every created process and can thus quickly consume a lot of memory.

Parameters:
  • x – An (unbatched) numpy array, to be used in tf.keras.Model.predict

  • quantifier – A single or a collection of (sampling expecting) uwiz.quantifiers

  • batch_size – The batch size to use in tf.keras.Model.predict

  • verbose – Not yet supported.

  • steps – The number of steps to use in tf.keras.Model.predict

  • as_confidence – If true, uncertainties are multiplied by (-1); if false, confidences are multiplied by (-1). Default: no transformation.

  • num_processes – The number of processes to use. Default: The default or value specified when creating the lazy ensemble.

  • context – A contextmanager which prepares a newly created process for execution (e.g. by configuring the gpus). See the class docstring for an explanation of the default values.

  • models – A list of model indices to use for prediction. Default: None (all models).

  • return_alias_dict – If true, the result is returned as a dictionary with the quantifier aliases as keys.

Returns:

If return_alias_dict=True, a dict with all quantifier aliases as keys and (predictions, uncertainties_or_confidences) tuples as values. Otherwise (the default), a tuple (predictions, uncertainties_or_confidences) if a single quantifier was passed as string or instance, or a collection of such tuples if an iterable of quantifiers was passed.
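Two sketched calls, assuming 'var_ratio' and 'pred_entropy' are among the installed quantifier aliases:

    # Single quantifier: returns a (predictions, uncertainties) tuple.
    preds, unc = ensemble.predict_quantified(x_test, quantifier="var_ratio")

    # Several quantifiers at once, returned as a dict keyed by alias.
    results = ensemble.predict_quantified(
        x_test,
        quantifier=["var_ratio", "pred_entropy"],
        num_processes=2,
        return_alias_dict=True,
    )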

quantify_predictions(quantifier: Union[Quantifier, Iterable[Quantifier]], consume_function: Callable[[int, Model], Any], as_confidence: Optional[bool] = None, num_processes: Optional[int] = None, context: Optional[Callable[[int], EnsembleContextManager]] = None, models: Optional[Iterable[int]] = None, return_alias_dict: bool = False)

A utility function to make predictions on all atomic models and then infer the overall predictions and uncertainties (or confidences) from those predictions.

The test data is expected to be loaded directly in the consume_function. This function, which gets the atomic model id and the atomic model as inputs, is expected to return the predictions, i.e., the results of a model.predict(…) call.

Parameters:
  • quantifier – A single or a collection of (sampling expecting) uwiz.quantifiers

  • consume_function – A picklable function to make predictions on atomic models, as explained in the description above.

  • as_confidence – If true, uncertainties are multiplied by (-1); if false, confidences are multiplied by (-1). Default: no transformation.

  • num_processes – The number of processes to use. Default: The default or value specified when creating the lazy ensemble.

  • context – A contextmanager which prepares a newly created process for execution (e.g. by configuring the gpus). See the class docstring for an explanation of the default values.

  • models – A list of model indices to use for prediction. Default: None (all models).

  • return_alias_dict – If true, the result is returned as a dictionary with the quantifier aliases as keys.

Returns:

If return_alias_dict=True, a dict with all quantifier aliases as keys and (predictions, uncertainties_or_confidences) tuples as values. Otherwise (the default), a tuple (predictions, uncertainties_or_confidences) if a single quantifier was passed as string or instance, or a collection of such tuples if an iterable of quantifiers was passed.
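A sketch of a consume_function for this method: it loads the test data in the child process and returns the raw softmax outputs, which uwiz then aggregates with the given quantifier.

    def predict_on_test_set(model_id, model):
        _, (x_test, _) = tf.keras.datasets.mnist.load_data()
        return model.predict(x_test / 255.0, verbose=0)

    preds, unc = ensemble.quantify_predictions(
        quantifier="var_ratio",
        consume_function=predict_on_test_set,
    )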

run_model_free(task: Callable[[int], T], num_processes: Optional[int] = None, context: Optional[Callable[[int], EnsembleContextManager]] = None, num_times: Optional[int] = None) List[T]

Runs a task for every model, but without actually loading or persisting any model.

Hint: If you do not use the gpu for the passed task, consider passing context=uwiz.models.ensemble_utils.CpuOnlyContextManager.

Parameters:
  • task – The picklable function to be run for every model.

  • num_processes – The number of processes to use. Default: The default or value specified when creating the lazy ensemble.

  • context – A contextmanager which prepares a newly created process for execution (e.g. by configuring the gpus). See the class docstring for an explanation of the default values.

  • num_times – The number of times to run the task.

Returns:

The results returned by the task executions.
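For example, a model-free task can warm the dataset cache in every process before training starts (a sketch; CpuOnlyContextManager is the context suggested in the hint above):

    def warm_dataset_cache(model_id: int):
        # No model is loaded or persisted here.
        tf.keras.datasets.mnist.load_data()
        return model_id

    ensemble.run_model_free(
        task=warm_dataset_cache,
        num_times=1,
        context=uwiz.models.ensemble_utils.CpuOnlyContextManager,
    )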

class uncertainty_wizard.models.Stochastic

Bases: _UwizModel

Stochastic models are models in which some randomness is added to the network during training. While this is typically done for network regularization, models trained in such a way can be used for uncertainty quantification. Simply speaking:

Randomness (which is typically disabled during inference) can be enforced during inference, leading to predictions which are impacted by the random noise. By sampling multiple network outputs for the same input, we can infer the robustness of the network to the random noise. We assume that the higher the robustness, the higher the network's confidence.

Instances of stochastic uncertainty wizard models can also be used in a non-stochastic way, as point prediction models (i.e., models without sampling), by calling the model.predict function or by passing a quantifier which does not rely on sampling (such as Max-Softmax) to model.predict_quantified. Randomization during model inference is automatically enabled or disabled.

call(inputs, training=None, mask=None)

See tf.keras.Model.call for the documentation: the call is forwarded.

Parameters:
  • inputs – See tf.keras docs

  • training – See tf.keras docs

  • mask – See tf.keras docs

Returns:

See tf.keras docs

compile(optimizer='rmsprop', loss=None, metrics=None, loss_weights=None, weighted_metrics=None, run_eagerly=None, expect_deterministic: bool = False)

This wraps the tf.keras.Model.compile method, but first checks whether a stochastic layer was added to the model: if none was added, a warning is printed.

This behavior can be turned off if you only intend to use the model as a point predictor. In this case, set expect_deterministic to True.

Parameters:
  • optimizer – See tf.keras.Model docs

  • loss – See tf.keras.Model docs

  • metrics – See tf.keras.Model docs

  • loss_weights – See tf.keras.Model docs

  • weighted_metrics – See tf.keras.Model docs

  • run_eagerly – See tf.keras.Model docs

  • expect_deterministic – Iff true, the model is not checked for randomness. Default: False

Returns:

See tf.keras.Model docs
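For example, to compile a model that deliberately contains no stochastic layers without triggering the warning (a sketch; model is assumed to be a stochastic uwiz model used as a point predictor):

    model.compile(
        optimizer="adam",
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
        expect_deterministic=True,
    )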

property evaluate

Direct access to the evaluate method of the wrapped keras model. See tf.keras.Model.evaluate for the precise documentation of this method.

Can be called as stochastic_model.evaluate(…), equivalent to how evaluate would be called on a plain keras model. This means that no stochastic sampling is done.

Returns:

The evaluate method of the wrapped model

property fit

Direct access to the fit method of the wrapped keras model. See tf.keras.Model.fit for the precise documentation of this method.

Can be called as stochastic_model.fit(…), equivalent to how fit would be called on a plain keras model.

Returns:

The fit method of the wrapped model

abstract property inner: Model

Direct access to the wrapped keras model. Use this if you want to directly work on the wrapped model. When using this, make sure not to modify the stochastic layers or the stochastic_mode tensor on the model.

Returns: the tf.keras.Model wrapped in this Stochastic model.

property predict

Direct access to the predict method of the wrapped keras model. See tf.keras.Model.predict for the precise documentation of this method.

Note that no confidences are calculated when calling this predict method, and the stochastic layers are disabled. To calculate confidences, call model.predict_quantified(…) instead of model.predict(…).

Can be called as model.predict(…), equivalent to how predict would be called on a plain keras model.

Returns:

The predict method of the wrapped model

predict_quantified(x: Union[DatasetV2, ndarray], quantifier: Union[Quantifier, str, Iterable[Union[str, Quantifier]]], sample_size: int = 64, batch_size: int = 32, verbose: int = 0, steps=None, as_confidence: Union[None, bool] = None, broadcaster: Optional[Broadcaster] = None, return_alias_dict: bool = False)

Calculates predictions and uncertainties (or confidences) according to the passed quantifier(s). Sampling is done internally. Both point-predictor and sampling-based quantifiers can be used in the same method call; uwiz automatically enables and disables the randomness of the model accordingly.

Parameters:
  • x – The inputs for which the predictions should be made. tf.data.Dataset (unbatched) or numpy array.

  • quantifier – The quantifier or quantifier alias to use (or a collection of them)

  • sample_size – The number of samples to be used for sample-expecting quantifiers

  • batch_size – The batch size to be used for predictions

  • verbose – Prediction process logging, as in tf.keras.Model.fit

  • steps – Prediction steps, as in tf.keras.Model.fit. Is adapted according to the chosen sample size.

  • as_confidence – If true, uncertainties are multiplied by (-1); if false, confidences are multiplied by (-1). Default: no transformation.

  • broadcaster – Sampling-related dependencies. If None, the DefaultBroadcaster will be used.

  • return_alias_dict – If true, the result is returned as a dictionary with the quantifier aliases as keys.

Returns:

If return_alias_dict=True, a dict with all quantifier aliases as keys and (predictions, uncertainties_or_confidences) tuples as values. Otherwise (the default), a tuple (predictions, uncertainties_or_confidences) if a single quantifier was passed as string or instance, or a collection of such tuples if an iterable of quantifiers was passed.
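Two sketched calls; model is assumed to be a StochasticSequential or StochasticFunctional with at least one stochastic layer, and the quantifier aliases are assumptions to verify against the quantifier docs.

    # Sampling-based quantifier: 32 randomized forward passes per input.
    pred, unc = model.predict_quantified(
        x_test, quantifier="var_ratio", sample_size=32, batch_size=64
    )

    # Point-predictor and sampling-based quantifiers in a single call.
    results = model.predict_quantified(
        x_test,
        quantifier=["max_softmax", "pred_entropy"],
        sample_size=32,
        return_alias_dict=True,
    )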

save(filepath: str, overwrite: bool = True, include_optimizer: bool = True, save_format: Optional[str] = None, signatures=None, options=None)

Save the model to file, as on plain tf models. Note that you must not use the h5 file format.

**Attention**: uwiz models must be loaded using uwiz.models.load_model AND NOT using the corresponding keras method.

The keras documentation applies to this method as well, taking into account the limitations mentioned above.

abstract property stochastic_mode_tensor: Variable

Get access to the flag used to enable and disable the stochastic behavior.

Returns: A boolean zero-dimensional tensorflow variable.

property summary

Direct access to the summary method the wrapped keras model. See tf.keras.Model.summary for precise documentation of this method.

class uncertainty_wizard.models.StochasticFunctional(inputs, outputs, stochastic_mode: StochasticMode, name: Optional[str] = None)

Bases: Stochastic

A stochastic wrapper of a tf.keras.Model, allowing models to be built using the functional interface. Note that when using the functional interface, you need to use uwiz.models.stochastic.layers or build your own StochasticMode-dependent stochastic layers. See the online user guide for more info.

Stochastic models are models in which some randomness is added to the network during training. While this is typically done for network regularization, models trained in such a way can be used for uncertainty quantification. Simply speaking:

Randomness (which is typically disabled during inference) can be enforced during inference, leading to predictions which are impacted by the random noise. By sampling multiple network outputs for the same input, we can infer the robustness of the network to the random noise. We assume that the higher the robustness, the higher the network's confidence.

Instances of stochastic uncertainty wizard models can also be used in a non-stochastic way, as point prediction models (i.e., models without sampling), by calling the model.predict function or by passing a quantifier which does not rely on sampling (such as Max-Softmax) to model.predict_quantified. Randomization during model inference is automatically enabled or disabled.
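A sketch of a functional model using the UwizBernoulliDropout layer from uwiz.models.stochastic.layers (layer name as shown in the user guide; verify against your installed version):

    import tensorflow as tf
    import uncertainty_wizard as uwiz

    stochastic_mode = uwiz.models.StochasticMode()
    inputs = tf.keras.Input(shape=(28, 28))
    hidden = tf.keras.layers.Flatten()(inputs)
    hidden = tf.keras.layers.Dense(128, activation="relu")(hidden)
    # Stochastic-mode-aware dropout: active whenever the mode flag is True.
    hidden = uwiz.models.stochastic.layers.UwizBernoulliDropout(
        0.3, stochastic_mode=stochastic_mode
    )(hidden)
    outputs = tf.keras.layers.Dense(10, activation="softmax")(hidden)

    model = uwiz.models.StochasticFunctional(
        inputs, outputs, stochastic_mode=stochastic_mode
    )
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")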

property inner

Direct access to the wrapped keras model. Use this if you want to directly work on the wrapped model. When using this, make sure not to modify the stochastic layers or the stochastic_mode tensor on the model.

Returns: the tf.keras.Model wrapped in this StochasticFunctional.

property stochastic_mode_tensor

Get access to the flag used to enable and disable the stochastic behavior.

Returns: A boolean zero-dimensional tensorflow variable.

class uncertainty_wizard.models.StochasticMode(tensor=None)

Bases: object

Stochastic mode is a wrapper for a bool tensor which serves as a flag during inference in an uwiz stochastic model: if the flag is True, the inference is randomized. Otherwise, randomization is disabled.

When creating a StochasticFunctional model, you need to create a new StochasticMode() and pass it to any of your (custom) layers that should behave differently in a stochastic environment than in a deterministic one (for example, your own randomization layer).

as_tensor()

Get the tensor wrapped by this stochastic mode.

Returns:

A boolean tensor
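A sketch of a custom layer whose behavior depends on the stochastic mode; the layer name and noise logic are purely illustrative:

    import tensorflow as tf
    import uncertainty_wizard as uwiz

    class AdditiveNoise(tf.keras.layers.Layer):
        """Hypothetical layer: adds gaussian noise only while the
        stochastic mode flag is True."""

        def __init__(self, stochastic_mode: uwiz.models.StochasticMode, **kwargs):
            super().__init__(**kwargs)
            self._stochastic_mode = stochastic_mode

        def call(self, inputs):
            return tf.cond(
                self._stochastic_mode.as_tensor(),
                true_fn=lambda: inputs
                + tf.random.normal(tf.shape(inputs), stddev=0.1),
                false_fn=lambda: inputs,
            )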

class uncertainty_wizard.models.StochasticSequential(layers=None, name=None)

Bases: Stochastic

A stochastic wrapper of tf.keras.models.Sequential models, suitable for MC Dropout and similar sampling-based approaches on randomized models.

Stochastic models are models in which some randomness is added to the network during training. While this is typically done for network regularization, models trained in such a way can be used for uncertainty quantification. Simply speaking:

Randomness (which is typically disabled during inference) can be enforced during inference, leading to predictions which are impacted by the random noise. By sampling multiple network outputs for the same input, we can infer the robustness of the network to the random noise. We assume that the higher the robustness, the higher the network's confidence.

Instances of stochastic uncertainty wizard models can also be used in a non-stochastic way, as point prediction models (i.e., models without sampling), by calling the model.predict function or by passing a quantifier which does not rely on sampling (such as Max-Softmax) to model.predict_quantified. Randomization during model inference is automatically enabled or disabled.
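A minimal sketch, using MNIST as a stand-in dataset and 'var_ratio' as an assumed quantifier alias:

    import tensorflow as tf
    import uncertainty_wizard as uwiz

    model = uwiz.models.StochasticSequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        # A plain keras Dropout; uwiz swaps in a sampling-capable twin.
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    (x_train, y_train), (x_test, _) = tf.keras.datasets.mnist.load_data()
    model.fit(x_train / 255.0, y_train, epochs=2)
    pred, unc = model.predict_quantified(
        x_test / 255.0, quantifier="var_ratio", sample_size=32
    )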

add(layer, prevent_use_for_sampling=False)

Adds the layer to the model. See the docs of tf.keras.models.Sequential.add(layer) for details.

In addition, layers of type tf.keras.layers.Dropout, tf.keras.layers.GaussianNoise and tf.keras.layers.GaussianDropout are overridden by equivalent layers which can be enabled during inference for randomized predictions.

Parameters:
  • layer – layer instance to be added to the model.

  • prevent_use_for_sampling – Do not use the layer for randomization during inference. Only has an effect on layers of type Dropout, GaussianNoise or GaussianDropout.

get_config()

Not supported.

Returns:

An empty config

property inner

Direct access to the wrapped keras model. Use this if you want to directly work on the wrapped model. When using this, make sure not to modify the stochastic layers or the stochastic_mode tensor on the model.

Returns: the tf.keras.Model wrapped in this StochasticSequential.

property stochastic_mode_tensor

Get access to the flag used to enable and disable the stochastic behavior.

Returns: A boolean zero-dimensional tensorflow variable.

uncertainty_wizard.models.load_model(path, custom_objects: Optional[dict] = None, compile=None, options=None)

Loads an uncertainty wizard model that was saved using model.save(…). See the documentation of tf.keras.models.load_model for further information about the method params.

For lazy ensembles: as they are lazy, only the folder path and the number of models are interpreted when loading - no keras models are actually loaded yet. Thus, custom_objects, compile and options must not be specified.

Parameters:
  • path – The path of the folder where the ensemble was saved.

  • custom_objects – Dict containing methods for custom deserialization of objects.

  • compile – Whether to compile the models.

  • options – Load options, check tf.keras documentation for precise information.

Returns:

An uwiz model.
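A sketched round trip for both model kinds; the paths are hypothetical and model refers to a previously built stochastic uwiz model:

    import uncertainty_wizard as uwiz

    # Stochastic model round trip (remember: the h5 format is not supported).
    model.save("/tmp/my_stochastic_model")
    restored = uwiz.models.load_model("/tmp/my_stochastic_model")

    # Lazy ensemble: only the folder path and the number of models are
    # read here; no keras model is loaded into memory yet.
    ensemble = uwiz.models.load_model("/tmp/demo_ensemble")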

uncertainty_wizard.models.stochastic_from_keras(model: Model, input_tensors=None, clone_function=None, expect_determinism=False, temp_weights_path='tmp/weights')

Creates a stochastic instance from a given tf.keras.models.Sequential model: The new model will have the same structure (layers) and weights as the passed model.

All stochastic layers (e.g. tf.keras.layers.Dropout) will be used for randomization during randomized predictions. If no stochastic layers are present, a ValueError is raised. The raising of the error can be suppressed by setting expect_determinism to True.

If your model contains custom layers, you can pass a function as clone_function to clone your custom layers, or place the annotation @tf.keras.utils.register_keras_serializable() on your custom layers and make sure the get_config and from_config methods are implemented (uncertainty wizard will serialize and deserialize all layers).

Parameters:
  • model – The model to copy. Remains unchanged.

  • input_tensors – Optional tensors to use as input_tensors for new model. See the corresponding parameter in tf.keras.models.clone_model for details.

  • clone_function – Optional function to use to clone layers. Will be applied to all layers except input layers and stochastic layers. See the corresponding parameter in tf.keras.models.clone_model for more details.

  • expect_determinism – If True, deterministic models (e.g. models without stochastic layers) are accepted and no ValueError is thrown.

  • temp_weights_path – The model weights are temporarily saved to disk at this path. The folder is deleted after successful completion.

Returns:

A newly created stochastic model
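A sketch converting a plain keras model with a Dropout layer; the toy data is hypothetical and 'var_ratio' is an assumed quantifier alias:

    import numpy as np
    import tensorflow as tf
    import uncertainty_wizard as uwiz

    # A plain keras model that happens to contain a stochastic layer.
    plain = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", input_shape=(20,)),
        tf.keras.layers.Dropout(0.5),
        tf.keras.layers.Dense(2, activation="softmax"),
    ])
    plain.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
    plain.fit(np.random.rand(64, 20), np.random.randint(0, 2, 64), verbose=0)

    # The copy keeps structure and weights; its Dropout layer can now be
    # kept active during sampling-based inference.
    stochastic = uwiz.models.stochastic_from_keras(plain)
    pred, unc = stochastic.predict_quantified(
        np.random.rand(8, 20).astype("float32"),
        quantifier="var_ratio",
        sample_size=32,
    )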