uncertainty_wizard.quantifiers package

Module contents

This module contains all quantifiers used to infer predictions and confidences (or uncertainties) from neural network outputs. It also contains the QuantifierRegistry, which allows referring to quantifiers by alias.

class uncertainty_wizard.quantifiers.MaxSoftmax

Bases: uncertainty_wizard.quantifiers.quantifier.ConfidenceQuantifier

The MaxSoftmax is a confidence metric in one-shot classification. It is the default in most simple use cases and is sometimes also referred to as the ‘Vanilla Confidence Metric’.

Inputs/activations must be normalized using the softmax function over all classes. The class with the highest activation is chosen as the prediction, and the value of that highest activation is used as the confidence quantification.
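As a sketch of what this quantifier computes (a numpy-only illustration, not the library's actual implementation; the input shape (batch, classes) is assumed):

```python
import numpy as np

def max_softmax(nn_outputs):
    # nn_outputs: (batch, classes), softmax-normalized activations
    predictions = np.argmax(nn_outputs, axis=1)   # class with highest activation
    confidences = np.max(nn_outputs, axis=1)      # that activation as confidence
    return predictions, confidences

preds, confs = max_softmax(np.array([[0.7, 0.2, 0.1],
                                     [0.1, 0.3, 0.6]]))
```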

classmethod aliases() List[str]

Aliases are string identifiers of this quantifier. They are used to select quantifiers by string in predict methods (and need to be registered in the quantifier_registry).

Additionally, the first identifier in the list is used for logging purposes. Thus, the returned list must have at least length 1.

Returns

list of quantifier identifiers

classmethod calculate(nn_outputs: numpy.ndarray)

Calculates the predictions and uncertainties.

Note that this assumes batches of neural network outputs. When using this method for a single nn output, make sure to reshape the passed array, e.g. using x = np.expand_dims(x, axis=0)

The method returns a tuple of

  • A prediction (int or float) or array of predictions

  • An uncertainty or confidence quantification (float) or an array of uncertainties

Parameters

nn_outputs – The NN outputs to be considered when determining prediction and uncertainty quantification

Returns

A tuple of prediction(s) and uncertainty(-ies).
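The reshaping of a single output mentioned in the note above can be done as follows:

```python
import numpy as np

single_output = np.array([0.7, 0.2, 0.1])        # shape (3,): one softmax output
batched = np.expand_dims(single_output, axis=0)  # shape (1, 3): a batch of one
```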

classmethod problem_type() uncertainty_wizard.quantifiers.quantifier.ProblemType

Specifies whether this quantifier is applicable to classification or regression problems.

Returns

One of the two enum values REGRESSION or CLASSIFICATION

classmethod takes_samples() bool

A flag indicating whether this quantifier relies on Monte Carlo samples (in which case the method returns True) or on a single neural network output (in which case the method returns False).

Returns

True if this quantifier expects Monte Carlo samples for quantification, False otherwise.

class uncertainty_wizard.quantifiers.MeanSoftmax

Bases: uncertainty_wizard.quantifiers.quantifier.ConfidenceQuantifier

A predictor & uncertainty quantifier based on multiple samples (e.g., nn outputs) in a classification problem.

Both the prediction and the uncertainty score are calculated using the average softmax values over all samples. This is sometimes also called ‘ensembling’, as it is often used in deep ensembles.
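A numpy-only sketch of this computation, assuming inputs of shape (batch, samples, classes); the library's actual implementation may differ:

```python
import numpy as np

def mean_softmax(nn_outputs):
    # nn_outputs: (batch, samples, classes), softmax-normalized per sample
    means = np.mean(nn_outputs, axis=1)        # per-class mean over samples
    predictions = np.argmax(means, axis=1)     # class with highest mean activation
    confidences = np.max(means, axis=1)        # that mean activation as confidence
    return predictions, confidences

# one input, two sampled softmax outputs
preds, confs = mean_softmax(np.array([[[0.8, 0.2], [0.6, 0.4]]]))
```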

classmethod aliases() List[str]

Aliases are string identifiers of this quantifier. They are used to select quantifiers by string in predict methods (and need to be registered in the quantifier_registry).

Additionally, the first identifier in the list is used for logging purposes. Thus, the returned list must have at least length 1.

Returns

list of quantifier identifiers

classmethod calculate(nn_outputs: numpy.ndarray) Tuple[numpy.ndarray, numpy.ndarray]

Calculates the predictions and uncertainties.

Note that this assumes batches of neural network outputs. When using this method for a single nn output, make sure to reshape the passed array, e.g. using x = np.expand_dims(x, axis=0)

The method returns a tuple of

  • A prediction (int or float) or array of predictions

  • An uncertainty or confidence quantification (float) or an array of uncertainties

Parameters

nn_outputs – The NN outputs to be considered when determining prediction and uncertainty quantification

Returns

A tuple of prediction(s) and uncertainty(-ies).

classmethod problem_type() uncertainty_wizard.quantifiers.quantifier.ProblemType

Specifies whether this quantifier is applicable to classification or regression problems.

Returns

One of the two enum values REGRESSION or CLASSIFICATION

classmethod takes_samples() bool

A flag indicating whether this quantifier relies on Monte Carlo samples (in which case the method returns True) or on a single neural network output (in which case the method returns False).

Returns

True if this quantifier expects Monte Carlo samples for quantification, False otherwise.

class uncertainty_wizard.quantifiers.MutualInformation

Bases: uncertainty_wizard.quantifiers.quantifier.UncertaintyQuantifier

A predictor & uncertainty quantifier based on multiple samples (e.g., nn outputs) in a classification problem.

The prediction is made using a plurality vote, i.e., the class with the highest activation in the most samples is selected. In the case of a tie, the class with the lowest index is selected.

The uncertainty is quantified using the mutual information. See the docs for a precise explanation of mutual information.
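The mutual information can be sketched as the entropy of the mean prediction minus the mean entropy of the individual samples. The following numpy-only illustration assumes inputs of shape (batch, samples, classes) and base-2 logarithms; it is not the library's actual implementation:

```python
import numpy as np

def mutual_information(nn_outputs, eps=1e-12):
    # nn_outputs: (batch, samples, classes), softmax-normalized per sample
    means = np.mean(nn_outputs, axis=1)                           # per-class means
    entropy_of_mean = -np.sum(means * np.log2(means + eps), axis=1)
    per_sample_entropy = -np.sum(nn_outputs * np.log2(nn_outputs + eps), axis=2)
    mutual_info = entropy_of_mean - np.mean(per_sample_entropy, axis=1)

    # plurality vote: class winning the most samples, lowest index on ties
    votes = np.argmax(nn_outputs, axis=2)
    counts = np.stack([np.sum(votes == c, axis=1)
                       for c in range(nn_outputs.shape[2])], axis=1)
    predictions = np.argmax(counts, axis=1)
    return predictions, mutual_info
```

With perfectly disagreeing samples the sketch yields maximal mutual information, and with identical samples it yields (approximately) zero.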

classmethod aliases() List[str]

Aliases are string identifiers of this quantifier. They are used to select quantifiers by string in predict methods (and need to be registered in the quantifier_registry).

Additionally, the first identifier in the list is used for logging purposes. Thus, the returned list must have at least length 1.

Returns

list of quantifier identifiers

classmethod calculate(nn_outputs: numpy.ndarray) Tuple[numpy.ndarray, numpy.ndarray]

Calculates the predictions and uncertainties.

Note that this assumes batches of neural network outputs. When using this method for a single nn output, make sure to reshape the passed array, e.g. using x = np.expand_dims(x, axis=0)

The method returns a tuple of

  • A prediction (int or float) or array of predictions

  • An uncertainty or confidence quantification (float) or an array of uncertainties

Parameters

nn_outputs – The NN outputs to be considered when determining prediction and uncertainty quantification

Returns

A tuple of prediction(s) and uncertainty(-ies).

classmethod problem_type() uncertainty_wizard.quantifiers.quantifier.ProblemType

Specifies whether this quantifier is applicable to classification or regression problems.

Returns

One of the two enum values REGRESSION or CLASSIFICATION

classmethod takes_samples() bool

A flag indicating whether this quantifier relies on Monte Carlo samples (in which case the method returns True) or on a single neural network output (in which case the method returns False).

Returns

True if this quantifier expects Monte Carlo samples for quantification, False otherwise.

class uncertainty_wizard.quantifiers.PredictionConfidenceScore

Bases: uncertainty_wizard.quantifiers.quantifier.ConfidenceQuantifier

The Prediction Confidence Score is a confidence metric in one-shot classification. Inputs/activations must be normalized using the softmax function over all classes. The class with the highest activation is chosen as the prediction, and the difference between the two highest activations is used as the confidence quantification.
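A numpy-only sketch of this metric, assuming inputs of shape (batch, classes); not the library's actual implementation:

```python
import numpy as np

def prediction_confidence_score(nn_outputs):
    # nn_outputs: (batch, classes), softmax-normalized activations
    sorted_acts = np.sort(nn_outputs, axis=1)           # ascending per row
    predictions = np.argmax(nn_outputs, axis=1)
    confidences = sorted_acts[:, -1] - sorted_acts[:, -2]  # gap between top two
    return predictions, confidences

preds, confs = prediction_confidence_score(np.array([[0.7, 0.2, 0.1]]))
```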

classmethod aliases() List[str]

Aliases are string identifiers of this quantifier. They are used to select quantifiers by string in predict methods (and need to be registered in the quantifier_registry).

Additionally, the first identifier in the list is used for logging purposes. Thus, the returned list must have at least length 1.

Returns

list of quantifier identifiers

classmethod calculate(nn_outputs: numpy.ndarray)

Calculates the predictions and uncertainties.

Note that this assumes batches of neural network outputs. When using this method for a single nn output, make sure to reshape the passed array, e.g. using x = np.expand_dims(x, axis=0)

The method returns a tuple of

  • A prediction (int or float) or array of predictions

  • An uncertainty or confidence quantification (float) or an array of uncertainties

Parameters

nn_outputs – The NN outputs to be considered when determining prediction and uncertainty quantification

Returns

A tuple of prediction(s) and uncertainty(-ies).

classmethod problem_type() uncertainty_wizard.quantifiers.quantifier.ProblemType

Specifies whether this quantifier is applicable to classification or regression problems.

Returns

One of the two enum values REGRESSION or CLASSIFICATION

classmethod takes_samples() bool

A flag indicating whether this quantifier relies on Monte Carlo samples (in which case the method returns True) or on a single neural network output (in which case the method returns False).

Returns

True if this quantifier expects Monte Carlo samples for quantification, False otherwise.

class uncertainty_wizard.quantifiers.PredictiveEntropy

Bases: uncertainty_wizard.quantifiers.quantifier.UncertaintyQuantifier

A predictor & uncertainty quantifier based on multiple samples (e.g., nn outputs) in a classification problem.

The prediction is made using a plurality vote, i.e., the class with the highest activation in the most samples is selected. In the case of a tie, the class with the lowest index is selected.

The uncertainty is quantified using the predictive entropy; the entropy (base 2) of the per-class means of the sampled predictions.
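A numpy-only sketch, assuming inputs of shape (batch, samples, classes); not the library's actual implementation:

```python
import numpy as np

def predictive_entropy(nn_outputs, eps=1e-12):
    # nn_outputs: (batch, samples, classes), softmax-normalized per sample
    means = np.mean(nn_outputs, axis=1)                      # per-class means
    entropy = -np.sum(means * np.log2(means + eps), axis=1)  # base-2 entropy

    # plurality vote: class winning the most samples, lowest index on ties
    votes = np.argmax(nn_outputs, axis=2)
    counts = np.stack([np.sum(votes == c, axis=1)
                       for c in range(nn_outputs.shape[2])], axis=1)
    predictions = np.argmax(counts, axis=1)
    return predictions, entropy
```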

classmethod aliases() List[str]

Aliases are string identifiers of this quantifier. They are used to select quantifiers by string in predict methods (and need to be registered in the quantifier_registry).

Additionally, the first identifier in the list is used for logging purposes. Thus, the returned list must have at least length 1.

Returns

list of quantifier identifiers

classmethod calculate(nn_outputs: numpy.ndarray) Tuple[numpy.ndarray, numpy.ndarray]

Calculates the predictions and uncertainties.

Note that this assumes batches of neural network outputs. When using this method for a single nn output, make sure to reshape the passed array, e.g. using x = np.expand_dims(x, axis=0)

The method returns a tuple of

  • A prediction (int or float) or array of predictions

  • An uncertainty or confidence quantification (float) or an array of uncertainties

Parameters

nn_outputs – The NN outputs to be considered when determining prediction and uncertainty quantification

Returns

A tuple of prediction(s) and uncertainty(-ies).

classmethod problem_type() uncertainty_wizard.quantifiers.quantifier.ProblemType

Specifies whether this quantifier is applicable to classification or regression problems.

Returns

One of the two enum values REGRESSION or CLASSIFICATION

classmethod takes_samples() bool

A flag indicating whether this quantifier relies on Monte Carlo samples (in which case the method returns True) or on a single neural network output (in which case the method returns False).

Returns

True if this quantifier expects Monte Carlo samples for quantification, False otherwise.

class uncertainty_wizard.quantifiers.Quantifier

Bases: abc.ABC

Quantifiers are dependencies, injectable into prediction calls, which calculate predictions and uncertainties or confidences from DNN outputs.

The quantifier class is abstract and should not be directly implemented. Instead, new quantifiers should extend uwiz.quantifiers.ConfidenceQuantifier or uwiz.quantifiers.UncertaintyQuantifier.

abstract classmethod aliases() List[str]

Aliases are string identifiers of this quantifier. They are used to select quantifiers by string in predict methods (and need to be registered in the quantifier_registry).

Additionally, the first identifier in the list is used for logging purposes. Thus, the returned list must have at least length 1.

Returns

list of quantifier identifiers

abstract classmethod calculate(nn_outputs: numpy.ndarray)

Calculates the predictions and uncertainties.

Note that this assumes batches of neural network outputs. When using this method for a single nn output, make sure to reshape the passed array, e.g. using x = np.expand_dims(x, axis=0)

The method returns a tuple of

  • A prediction (int or float) or array of predictions

  • An uncertainty or confidence quantification (float) or an array of uncertainties

Parameters

nn_outputs – The NN outputs to be considered when determining prediction and uncertainty quantification

Returns

A tuple of prediction(s) and uncertainty(-ies).

classmethod cast_conf_or_unc(as_confidence: Union[None, bool], superv_scores: numpy.ndarray) numpy.ndarray

Utility method to convert confidence metrics into uncertainty and vice versa. Call is_confidence() to find out if this is an uncertainty or a confidence metric.

The supervisor scores are converted as follows:

  • Confidences are multiplied by (-1) iff as_confidence is False

  • Uncertainties are multiplied by (-1) iff as_confidence is True

  • Otherwise, the passed supervisor scores are returned unchanged.

Parameters
  • as_confidence – A boolean indicating whether the scores should be converted to confidences (True) or uncertainties (False)

  • superv_scores – The scores that are to be converted, provided a conversion is needed.

Returns

The converted scores or the unchanged superv_scores (if as_confidence is None or no conversion is needed)
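The conversion rules above can be sketched as follows (a hypothetical standalone function, not the library's method; metric_is_confidence stands in for the result of is_confidence()):

```python
import numpy as np

def cast_conf_or_unc(as_confidence, superv_scores, metric_is_confidence):
    # Flip the sign only when the requested direction differs from the
    # metric's native direction; None means "return unchanged".
    if as_confidence is not None and as_confidence != metric_is_confidence:
        return superv_scores * -1
    return superv_scores

scores = np.array([0.2, 0.8])
```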

abstract classmethod is_confidence() bool

Boolean flag indicating whether this quantifier quantifies uncertainty or confidence. They differ as follows (assuming that the quantifier correctly captures the chance of misprediction):

  • In uncertainty quantification, the higher the quantification, the higher the chance of misprediction.

  • In confidence quantification, the lower the quantification, the higher the chance of misprediction.

Returns

True iff this is a confidence quantifier, False if this is an uncertainty quantifier

abstract classmethod problem_type() uncertainty_wizard.quantifiers.quantifier.ProblemType

Specifies whether this quantifier is applicable to classification or regression problems.

Returns

One of the two enum values REGRESSION or CLASSIFICATION

abstract classmethod takes_samples() bool

A flag indicating whether this quantifier relies on Monte Carlo samples (in which case the method returns True) or on a single neural network output (in which case the method returns False).

Returns

True if this quantifier expects Monte Carlo samples for quantification, False otherwise.

class uncertainty_wizard.quantifiers.QuantifierRegistry

Bases: object

The quantifier registry keeps track of all quantifiers and their string aliases. It is primarily used to allow passing string representations of quantifiers in predict_quantified method calls, but may also be used for other purposes where dynamic quantifier selection is desired.

classmethod find(alias: str) uncertainty_wizard.quantifiers.quantifier.Quantifier

Find quantifiers by their alias.

Parameters

alias – A string representation of the quantifier, as defined in the quantifier's aliases method

Returns

A quantifier instance

classmethod register(quantifier: uncertainty_wizard.quantifiers.quantifier.Quantifier) None

Use this method to add a new quantifier to the registry.

Parameters

quantifier – The quantifier instance to be added.

Returns

None
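The register/find pattern described above can be illustrated with a minimal, hypothetical registry; all names below are illustrative and not part of uncertainty_wizard:

```python
# Hypothetical minimal registry sketching the alias-lookup pattern.
class SimpleRegistry:
    _by_alias = {}

    @classmethod
    def register(cls, quantifier):
        # map every alias (case-insensitively) to the quantifier
        for alias in quantifier.aliases():
            if alias.lower() in cls._by_alias:
                raise ValueError(f"Alias '{alias}' is already registered")
            cls._by_alias[alias.lower()] = quantifier

    @classmethod
    def find(cls, alias):
        try:
            return cls._by_alias[alias.lower()]
        except KeyError:
            raise ValueError(f"No quantifier with alias '{alias}'") from None

class FakeMaxSoftmax:
    @classmethod
    def aliases(cls):
        return ["max_softmax"]

SimpleRegistry.register(FakeMaxSoftmax)
```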

class uncertainty_wizard.quantifiers.SoftmaxEntropy

Bases: uncertainty_wizard.quantifiers.quantifier.UncertaintyQuantifier

The SoftmaxEntropy is an uncertainty metric in one-shot classification.

Inputs/activations must be normalized using the softmax function over all classes. The class with the highest activation is chosen as the prediction, and the entropy over all activations is used as the uncertainty quantification.
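A numpy-only sketch, assuming inputs of shape (batch, classes) and base-2 entropy (the entropy base is an assumption here); not the library's actual implementation:

```python
import numpy as np

def softmax_entropy(nn_outputs, eps=1e-12):
    # nn_outputs: (batch, classes), softmax-normalized activations
    predictions = np.argmax(nn_outputs, axis=1)
    uncertainties = -np.sum(nn_outputs * np.log2(nn_outputs + eps), axis=1)
    return predictions, uncertainties

preds, unc = softmax_entropy(np.array([[0.5, 0.5]]))
```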

classmethod aliases() List[str]

Aliases are string identifiers of this quantifier. They are used to select quantifiers by string in predict methods (and need to be registered in the quantifier_registry).

Additionally, the first identifier in the list is used for logging purposes. Thus, the returned list must have at least length 1.

Returns

list of quantifier identifiers

classmethod calculate(nn_outputs: numpy.ndarray)

Calculates the predictions and uncertainties.

Note that this assumes batches of neural network outputs. When using this method for a single nn output, make sure to reshape the passed array, e.g. using x = np.expand_dims(x, axis=0)

The method returns a tuple of

  • A prediction (int or float) or array of predictions

  • An uncertainty or confidence quantification (float) or an array of uncertainties

Parameters

nn_outputs – The NN outputs to be considered when determining prediction and uncertainty quantification

Returns

A tuple of prediction(s) and uncertainty(-ies).

classmethod problem_type() uncertainty_wizard.quantifiers.quantifier.ProblemType

Specifies whether this quantifier is applicable to classification or regression problems.

Returns

One of the two enum values REGRESSION or CLASSIFICATION

classmethod takes_samples() bool

A flag indicating whether this quantifier relies on Monte Carlo samples (in which case the method returns True) or on a single neural network output (in which case the method returns False).

Returns

True if this quantifier expects Monte Carlo samples for quantification, False otherwise.

class uncertainty_wizard.quantifiers.StandardDeviation

Bases: uncertainty_wizard.quantifiers.quantifier.UncertaintyQuantifier

Measures the standard deviation over multiple samples in a regression problem (i.e., an arbitrary problem). The standard deviation of the samples is used as the uncertainty, and their mean as the prediction.

This implementation can handle both regression predictions consisting of a single scalar dnn output as well as larger-shaped dnn outputs. In the latter case, the standard deviation is calculated and returned for every position in the dnn output shape.
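A numpy-only sketch for the scalar-output case, assuming inputs of shape (batch, samples); not the library's actual implementation:

```python
import numpy as np

def std_quantifier(nn_outputs):
    # nn_outputs: (batch, samples) regression outputs
    predictions = np.mean(nn_outputs, axis=1)    # mean over samples as prediction
    uncertainties = np.std(nn_outputs, axis=1)   # std over samples as uncertainty
    return predictions, uncertainties

preds, unc = std_quantifier(np.array([[1.0, 3.0]]))
```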

classmethod aliases() List[str]

Aliases are string identifiers of this quantifier. They are used to select quantifiers by string in predict methods (and need to be registered in the quantifier_registry).

Additionally, the first identifier in the list is used for logging purposes. Thus, the returned list must have at least length 1.

Returns

list of quantifier identifiers

classmethod calculate(nn_outputs: numpy.ndarray)

Calculates the predictions and uncertainties.

Note that this assumes batches of neural network outputs. When using this method for a single nn output, make sure to reshape the passed array, e.g. using x = np.expand_dims(x, axis=0)

The method returns a tuple of

  • A prediction (int or float) or array of predictions

  • An uncertainty or confidence quantification (float) or an array of uncertainties

Parameters

nn_outputs – The NN outputs to be considered when determining prediction and uncertainty quantification

Returns

A tuple of prediction(s) and uncertainty(-ies).

classmethod problem_type() uncertainty_wizard.quantifiers.quantifier.ProblemType

Specifies whether this quantifier is applicable to classification or regression problems.

Returns

One of the two enum values REGRESSION or CLASSIFICATION

classmethod takes_samples() bool

A flag indicating whether this quantifier relies on Monte Carlo samples (in which case the method returns True) or on a single neural network output (in which case the method returns False).

Returns

True if this quantifier expects Monte Carlo samples for quantification, False otherwise.

class uncertainty_wizard.quantifiers.VariationRatio

Bases: uncertainty_wizard.quantifiers.quantifier.UncertaintyQuantifier

A predictor & uncertainty quantifier based on multiple samples (e.g., nn outputs) in a classification problem.

The prediction is made using a plurality vote, i.e., the class with the highest activation in the most samples is selected. In the case of a tie, the class with the lowest index is selected.

The uncertainty is quantified using the variation ratio 1 - w / S, where w is the number of samples whose prediction equals the overall prediction and S is the total number of samples.
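The formula 1 - w / S can be sketched in numpy as follows, assuming inputs of shape (batch, samples, classes); not the library's actual implementation:

```python
import numpy as np

def variation_ratio(nn_outputs):
    # nn_outputs: (batch, samples, classes), one softmax output per sample
    votes = np.argmax(nn_outputs, axis=2)                  # per-sample winner
    num_samples = votes.shape[1]                           # S
    counts = np.stack([np.sum(votes == c, axis=1)
                       for c in range(nn_outputs.shape[2])], axis=1)
    predictions = np.argmax(counts, axis=1)                # plurality vote
    w = np.take_along_axis(counts, predictions[:, None], axis=1)[:, 0]
    return predictions, 1.0 - w / num_samples

# three samples: two vote class 0, one votes class 1 -> w = 2, S = 3
preds, vr = variation_ratio(np.array([[[0.9, 0.1], [0.8, 0.2], [0.3, 0.7]]]))
```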

classmethod aliases() List[str]

Aliases are string identifiers of this quantifier. They are used to select quantifiers by string in predict methods (need to be registered in quantifier_registry).

Additionally, the first identifier in the list is used for logging purpose. Thus, the returned list have at least length 1.

Returns

list of quantifier identifiers

classmethod calculate(nn_outputs: numpy.ndarray) Tuple[numpy.ndarray, numpy.ndarray]

Calculates the predictions and uncertainties.

Note that this assumes batches of neural network outputs. When using this method for a single nn output, make sure to reshape the passed array, e.g. using x = np.expand_dims(x, axis=0)

The method returns a tuple of

  • A prediction (int or float) or array of predictions

  • An uncertainty or confidence quantification (float) or an array of uncertainties

Parameters

nn_outputs – The NN outputs to be considered when determining prediction and uncertainty quantification

Returns

A tuple of prediction(s) and uncertainty(-ies).

classmethod problem_type() uncertainty_wizard.quantifiers.quantifier.ProblemType

Specifies whether this quantifier is applicable to classification or regression problems.

Returns

One of the two enum values REGRESSION or CLASSIFICATION

classmethod takes_samples() bool

A flag indicating whether this quantifier relies on Monte Carlo samples (in which case the method returns True) or on a single neural network output (in which case the method returns False).

Returns

True if this quantifier expects Monte Carlo samples for quantification, False otherwise.