sciunit.scores package

Submodules

sciunit.scores.base module

Base class for SciUnit scores.

class sciunit.scores.base.ErrorScore(score: Union[Score, float, int, quantities.quantity.Quantity], related_data: dict = None)[source]

Bases: sciunit.scores.base.Score

A score returned when an error occurs during testing.

__module__ = 'sciunit.scores.base'
__str__() → str[source]

Return the string representation of this error score.

_describe() → str[source]

Get the description of this score.

Returns:
str: The description of this score.
norm_score

Get the norm score, which is 0.0 for an ErrorScore instance.

Returns:
float: The norm score.
summary

Summarize the performance of a model on a test.

Returns:
str: A textual summary of the score.
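In practice, Test.judge typically constructs an ErrorScore around the exception raised during testing. A minimal sketch (the wrapped exception is hypothetical) showing that the norm score is 0.0, as documented above:

>>> from sciunit.scores import ErrorScore
>>> score = ErrorScore(ValueError('model produced no output'))
>>> score.norm_score
0.0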
class sciunit.scores.base.Score(score: Union[Score, float, int, quantities.quantity.Quantity], related_data: dict = None)[source]

Bases: sciunit.base.SciUnit

Abstract base class for scores.

__eq__(other: Union[Score, float]) → bool[source]

Compare this score for equality with another score or a plain number.

__ge__(other: Union[Score, float]) → bool[source]

Test whether this score is greater than or equal to another score or number.

__gt__(other: Union[Score, float]) → bool[source]

Test whether this score is greater than another score or number.

__hash__ = None
__init__(score: Union[Score, float, int, quantities.quantity.Quantity], related_data: dict = None)[source]

Abstract base class for scores.

Args:
score (Union[‘Score’, float, int, Quantity]): A raw value to wrap in a Score class.
related_data (dict, optional): Artifacts to store with the score.
__le__(other: Union[Score, float]) → bool[source]

Test whether this score is less than or equal to another score or number.

__lt__(other: Union[Score, float]) → bool[source]

Test whether this score is less than another score or number.

__module__ = 'sciunit.scores.base'
__ne__(other: Union[Score, float]) → bool[source]

Compare this score for inequality with another score or a plain number.

__repr__() → str[source]

Return the representation of this score.

__str__() → str[source]

Return the string representation of this score.
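The comparison operators above make scores comparable to one another and to plain numbers. A minimal doctest sketch using BooleanScore (documented in sciunit.scores.complete below):

>>> from sciunit.scores import BooleanScore
>>> BooleanScore(True) == BooleanScore(True)
True
>>> BooleanScore(True) > BooleanScore(False)
True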

_allowed_types = None

List of allowed types for the score argument

_allowed_types_message = 'Score of type %s is not an instance of one of the allowed types: %s'

Error message when score argument is not one of these types

_best = None

The best possible score of this type

_check_score(score: sciunit.scores.base.Score) → None[source]

A method for each Score subclass to impose additional constraints on the score, e.g. the range of the allowed score.

Args:
score (Score): A sciunit score instance.
_describe() → str[source]

Get the description of this score.

Returns:
str: The description of this score.
_description = ''

A description of this score, i.e. how to interpret it. Provided in the score definition

_raw = None

A raw number arising in a test’s compute_score, used to determine this score. Can be set for reporting a raw value determined in Test.compute_score before any transformation, e.g. by a Converter

_worst = None

The worst possible score of this type

check_score(score: sciunit.scores.base.Score) → None[source]

Check the score against any additional constraints imposed by the subclass, e.g. the range of allowed scores.

Args:
score (Score): A sciunit score instance.
Raises:
InvalidScoreError: Exception raised if the score is not an instance of a sciunit Score.
color(value: Union[float, Score] = None) → tuple[source]

Turn the score into an RGB color tuple of three 8-bit integers.

Args:
value (Union[float, Score], optional): The score that will be turned into an RGB color. Defaults to None.
Returns:
tuple: A tuple of three 8-bit integers that represents an RGB color.
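A usage sketch for color; the exact RGB values depend on the score's norm_score, so none are asserted here:

>>> from sciunit.scores import BooleanScore
>>> rgb = BooleanScore(True).color()  # e.g. a 'good' color for a perfect score
>>> len(rgb)
3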
classmethod compute(observation: dict, prediction: dict)[source]

Compute whether the observation equals the prediction.

Args:
observation (dict): The observation from the real world.
prediction (dict): The prediction generated by a model.
Returns:
NotImplementedError: This base-class method must be overridden by subclasses.
describe(quiet: bool = False) → Optional[str][source]

Get the description of this score instance.

Args:
quiet (bool, optional): If True, then log the description, return the description otherwise.
Defaults to False.
Returns:
Union[str, None]: If not quiet, then return the description of this score instance.
Otherwise, None.
describe_from_docstring() → str[source]

Get the description of this score from the docstring.

Returns:
str: The description of this score.
description = ''

A description of this score, i.e. how to interpret it. For the user to set in bind_score

classmethod extract_mean_or_value(obs_or_pred: dict, key: str = None) → float[source]

Extracts the mean, value, or user-provided key from an observation or prediction dictionary.

Args:
obs_or_pred (dict): The observation or prediction dictionary.
key (str, optional): A specific key to extract instead of 'mean' or 'value'. Defaults to None.
Raises:
KeyError: Key not found.
Returns:
float: The mean, value, or keyed entry extracted from the observation or prediction.
classmethod extract_means_or_values(observation: dict, prediction: dict, key: str = None) → Tuple[dict, dict][source]

Extracts the mean, value, or user-provided key from the observation and prediction dictionaries.

Args:
observation (dict): The observation from the real world.
prediction (dict): The prediction generated by a model.
key (str, optional): A specific key to extract instead of 'mean' or 'value'. Defaults to None.
Returns:
Tuple[dict, dict]: A tuple containing the values extracted from the observation and the prediction.
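A sketch of the extraction helpers above, assuming plain-float dictionaries (real observations typically carry units via quantities):

>>> from sciunit.scores import Score
>>> Score.extract_mean_or_value({'mean': 4.0, 'std': 1.0})
4.0
>>> Score.extract_mean_or_value({'value': 2.5})
2.5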
get_raw() → float[source]

Get the raw score. If there is no raw score, return the score itself.

Returns:
float: The raw score.
log(**kwargs)[source]
log10_norm_score

The logarithm base 10 of the norm_score. This is useful for guaranteeing convexity in an error surface.

Returns:
float: The logarithm base 10 of the norm_score.
log2_norm_score

The logarithm base 2 of the norm_score. This is useful for guaranteeing convexity in an error surface.

Returns:
float: The logarithm base 2 of the norm_score.
log_norm_score

The natural logarithm of the norm_score. This is useful for guaranteeing convexity in an error surface.

Returns:
float: The natural logarithm of the norm_score.
model = None

The model judged. Set automatically by Test.judge.

norm_score

A floating-point version of the score, used for sorting and coloring tables. If normalized is True, this must be in the range 0.0 to 1.0, where larger is better.

Returns:
float: The [0-1] normalized score.
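A sketch relating norm_score to its logarithmic variants above, using BooleanScore for concreteness:

>>> from sciunit.scores import BooleanScore
>>> BooleanScore(True).norm_score
1.0
>>> BooleanScore(True).log_norm_score  # ln(1.0)
0.0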
classmethod observation_postprocess(observation: dict) → dict[source]
classmethod observation_preprocess(observation: dict) → dict[source]
observation_schema = None
raw

The raw score as a string.

Returns:
str: The raw score.
related_data = None

Data specific to the result of a test run on a model.

render_beautiful_msg(color: tuple, bg_brightness: int, msg: str)[source]
score = None

The score itself.

score_type

The type of the score.

Returns:
str: The name of the score class.
set_raw(raw: float) → None[source]

Set the raw score.

Args:
raw (float): The raw score to be set.
state_hide = ['related_data']
summarize()[source]

Display the summary of this score (the model's performance on the test).

summary

Summarize the performance of a model on a test.

Returns:
str: The summary of this score.
test = None

The test taken. Set automatically by Test.judge.

classmethod value_color(value: Union[float, Score]) → tuple[source]

Get a RGB color based on the Score.

Args:
value (Union[float, Score]): The score or value to convert to a color.
Returns:
tuple: An RGB color tuple of three 8-bit integers.

sciunit.scores.collections module

SciUnit score collections, such as arrays and matrices.

These collections allow scores to be organized and visualized by model, test, or both.

class sciunit.scores.collections.ScoreArray(tests_or_models, scores=None, weights=None, name=None)[source]

Bases: pandas.core.series.Series, sciunit.base.SciUnit, sciunit.base.TestWeighted

Represents an array of scores derived from a test suite.

Extends the pandas Series such that items are either models subject to a test or tests taken by a model. Also displays and computes score summaries in sciunit-specific ways.

Usage, assuming n tests and m models:

>>> sm[test]
(score_1, ..., score_m)
>>> sm[model]
(score_1, ..., score_n)
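A construction sketch, assuming two hypothetical tests t1 and t2 whose scores have already been computed:

>>> from sciunit.scores import BooleanScore
>>> from sciunit.scores.collections import ScoreArray
>>> arr = ScoreArray([t1, t2], scores=[BooleanScore(True), BooleanScore(False)])
>>> total = arr.mean()  # mean of the norm_scores: (1.0 + 0.0) / 2 = 0.5
>>> s = arr[t1]         # a single score; lookup by name also works via get_by_name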
__getitem__(item)[source]
__getstate__()[source]

Copy the object’s state from self.__dict__.

Contains all of the instance attributes. Always uses the dict.copy() method to avoid modifying the original state.

Returns:
dict: The state of this instance.
__init__(tests_or_models, scores=None, weights=None, name=None)[source]

Initialize self. See help(type(self)) for accurate signature.

__module__ = 'sciunit.scores.collections'
__setattr__(attr, value)[source]

After regular attribute access, try setting the name. This allows simpler access to columns for interactive use.

check_tests_and_models(tests_or_models: Union[sciunit.tests.Test, sciunit.models.base.Model]) → Union[sciunit.tests.Test, sciunit.models.base.Model][source]
get_by_name(name: str) → Union[sciunit.models.base.Model, sciunit.tests.Test][source]

Get a test or a model by name.

Args:
name (str): The name of the model or test.
Raises:
KeyError: No model or test with the given name.
Returns:
Union[Model, Test]: The model or test found.
mean() → float[source]

Compute a total score for each model over all the tests.

Uses the norm_score attribute, since otherwise direct comparison across different kinds of scores would not be possible.

Returns:
float: The computed total score for each model over all the tests.
norm_scores

Return the norm_score for each test.

Returns:
Series: The norm_score for each test.
related_data
score
scores
scores_flat
state_hide = ['related_data', 'scores', 'norm_scores', 'style', 'plot', 'iat', 'at', 'iloc', 'loc', 'T']
stature(test_or_model: Union[sciunit.models.base.Model, sciunit.tests.Test]) → int[source]

Compute the relative rank of a model on a test.

Rank is against other models that were asked to take the test.

Args:
test_or_model (Union[Model, Test]): A sciunit model or test instance.
Returns:
int: The rank of the model or test instance.
class sciunit.scores.collections.ScoreMatrix(tests, models, scores=None, weights=None, transpose=False)[source]

Bases: pandas.core.frame.DataFrame, sciunit.base.SciUnit, sciunit.base.TestWeighted

Represents a matrix of scores derived from a test suite. Extends the pandas DataFrame such that tests are columns and models are the index. Also displays and computes score summaries in sciunit-specific ways.

Usage, assuming n tests and m models:

>>> sm[test]
(score_1, ..., score_m)
>>> sm[model]
(score_1, ..., score_n)
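In practice a ScoreMatrix is usually produced by judging models against a test suite rather than constructed directly. A sketch assuming hypothetical tests t1, t2 and models m1, m2 with precomputed scores:

>>> from sciunit.scores.collections import ScoreMatrix
>>> sm = ScoreMatrix([t1, t2], [m1, m2], scores=scores)  # scores: a 2x2 arrangement of Score objects
>>> row = sm[t1]   # a ScoreArray of both models on t1
>>> col = sm[m1]   # a ScoreArray of m1 on both tests
>>> smT = sm.T     # the transposed matrix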
T

Get transpose of this ScoreMatrix.

Returns:
ScoreMatrix: The transpose of this ScoreMatrix.
__getitem__(item)[source]
__getstate__()[source]

Copy the object’s state from self.__dict__.

Contains all of the instance attributes. Always uses the dict.copy() method to avoid modifying the original state.

Returns:
dict: The state of this instance.
__init__(tests, models, scores=None, weights=None, transpose=False)[source]

Constructor of ScoreMatrix class

Args:
tests (List[Test]): Test instances that will be in the ScoreMatrix.
models (List[Model]): Model instances that will be in the ScoreMatrix.
scores (List[Score], optional): Score instances that will be in the ScoreMatrix. Defaults to None.
weights (list, optional): Relative weights of the tests. Defaults to None.
transpose (bool, optional): If True, swap rows and columns. Defaults to False.
__module__ = 'sciunit.scores.collections'
__setattr__(attr, value)[source]

After regular attribute access, try setting the name. This allows simpler access to columns for interactive use.

_repr_html_()[source]

Return an HTML representation of this DataFrame.

Mainly for IPython notebook.

add_mean()[source]
annotate(df: pandas.core.frame.DataFrame, html: str, show_mean: bool, colorize: bool) → Tuple[str, int][source]

Add score-based annotations (e.g. cell colors) to the HTML representation of the DataFrame.

Args:
df (DataFrame): The DataFrame being rendered.
html (str): The HTML representation of the DataFrame.
show_mean (bool): Whether a mean-score column is shown.
colorize (bool): Whether to colorize cells according to their scores.
Returns:
Tuple[str, int]: The annotated HTML and the id of the table element.
annotate_body(soup: bs4.BeautifulSoup, df: pandas.core.frame.DataFrame, show_mean: bool) → None[source]

Annotate the body cells of the rendered table with score colors.

Args:
soup (BeautifulSoup): The parsed HTML of the table.
df (DataFrame): The DataFrame being rendered.
show_mean (bool): Whether a mean-score column is shown.
annotate_body_cell(cell, df: pandas.core.frame.DataFrame, show_mean: bool, i: int, j: int) → None[source]

Annotate a single body cell with the color of its corresponding score.

Args:
cell: The HTML cell to annotate.
df (DataFrame): The DataFrame being rendered.
show_mean (bool): Whether a mean-score column is shown.
i (int): The row index of the cell.
j (int): The column index of the cell.
annotate_header_cell(cell, df: pandas.core.frame.DataFrame, show_mean: bool, i: int, j: int) → None[source]

Annotate a single header cell of the rendered table.

Args:
cell: The HTML cell to annotate.
df (DataFrame): The DataFrame being rendered.
show_mean (bool): Whether a mean-score column is shown.
i (int): The row index of the cell.
j (int): The column index of the cell.
annotate_headers(soup: bs4.BeautifulSoup, df: pandas.core.frame.DataFrame, show_mean: bool) → None[source]

Annotate the header cells of the rendered table.

Args:
soup (BeautifulSoup): The parsed HTML of the table.
df (DataFrame): The DataFrame being rendered.
show_mean (bool): Whether a mean-score column is shown.
annotate_mean(cell, df: pandas.core.frame.DataFrame, i: int) → float[source]

Annotate the mean-score cell in row i and return the mean value.

Args:
cell: The HTML cell to annotate.
df (DataFrame): The DataFrame being rendered.
i (int): The row index.
Returns:
float: The mean score for row i.
classmethod apply_score_color(val)[source]
check_tests_models_scores(tests: Union[sciunit.tests.Test, List[sciunit.tests.Test]], models: Union[sciunit.models.base.Model, List[sciunit.models.base.Model]], scores: Union[sciunit.scores.base.Score, List[sciunit.scores.base.Score]]) → Tuple[List[sciunit.tests.Test], List[sciunit.models.base.Model], List[sciunit.scores.base.Score]][source]

Check if tests, models, and scores are lists and convert them to lists if they are not.

Args:
tests (List[Test]): A sciunit test instance or a list of the test instances.
models (List[Model]): A sciunit model instance or a list of the model instances.
scores (List[Score]): A sciunit score instance or a list of the score instances.
Returns:
Tuple[List[Test], List[Model], List[Score]]: Tuple of lists of tests, models, and scores instances.
colorize = True
copy(transpose=False) → sciunit.scores.collections.ScoreMatrix[source]

Get a copy of this ScoreMatrix.

Returns:
ScoreMatrix: A copy of this ScoreMatrix, transposed if transpose is True.
dynamify(table_id: str) → None[source]

Add JavaScript to the rendered HTML table to make it interactive (e.g. sortable).

Args:
table_id (str): The id of the table element.
get_by_name(name: str) → Union[sciunit.models.base.Model, sciunit.tests.Test][source]

Get a model or a test from the model or test list by name.

Args:
name (str): The name of the test or model.
Raises:
KeyError: No model or test found by name.
Returns:
Union[Model, Test]: The model or test found.
get_group(x: tuple) → Union[sciunit.models.base.Model, sciunit.tests.Test, sciunit.scores.base.Score][source]

Get the entry for a (test, model) or (model, test) pair.

Args:
x (tuple): (test, model) or (model, test).
Raises:
TypeError: Expected (test, model) or (model, test).
Returns:
Union[Model, Test, Score]: The entry for the given pair.
get_model(model: sciunit.models.base.Model) → sciunit.scores.collections.ScoreArray[source]

Generate a ScoreArray instance with all tests and the model.

Args:
model (Model): The model that will be included in the ScoreArray instance.
Returns:
ScoreArray: The generated ScoreArray instance.
get_test(test: sciunit.tests.Test) → sciunit.scores.collections.ScoreArray[source]

Generate a ScoreArray instance with all models and the test.

Args:
test (Test): The test that will be included in the ScoreArray instance.
Returns:
ScoreArray: The generated ScoreArray instance.
norm_scores

Get a DataFrame instance that contains norm scores as a matrix.

Returns:
DataFrame: The DataFrame instance that contains norm scores as a matrix.
related_data
score
scores
scores_flat
show_mean = False
sortable = False
state_hide = ['related_data', 'scores', 'norm_scores', 'style', 'plot', 'iat', 'at', 'iloc', 'loc', 'T']
stature(test: sciunit.tests.Test, model: sciunit.models.base.Model) → int[source]

Computes the relative rank of a model on a test compared to other models that were asked to take the test.

Args:
test (Test): A sciunit test instance.
model (Model): A sciunit model instance.
Returns:
int: The relative rank of the model on the test.
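A usage sketch for the accessors above, continuing the hypothetical sm from the class docstring:

>>> by_test = sm.get_test(t1)    # same as sm[t1]: a ScoreArray over models
>>> by_model = sm.get_model(m1)  # same as sm[m1]: a ScoreArray over tests
>>> rank = sm.stature(t1, m1)    # rank of m1 among all models on t1, e.g. 1 if best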

sciunit.scores.collections_m2m module

Score collections for direct comparison of models against other models.

class sciunit.scores.collections_m2m.ScoreArrayM2M(test: sciunit.tests.Test, models: List[sciunit.models.base.Model], scores: List[sciunit.scores.Score])[source]

Bases: pandas.core.series.Series, sciunit.base.SciUnit

Represents an array of scores derived from TestM2M. Extends the pandas Series such that items are either models subject to a test or the test itself.

Attributes:
index: The items (models, or the test itself) by which scores are indexed.
__getattr__(name: str) → Any[source]

After regular attribute access, try looking up the name. This allows simpler access to columns for interactive use.

__getitem__(item: Union[str, callable]) → Any[source]
__init__(test: sciunit.tests.Test, models: List[sciunit.models.base.Model], scores: List[sciunit.scores.Score])[source]

Initialize self. See help(type(self)) for accurate signature.

__module__ = 'sciunit.scores.collections_m2m'
get_by_name(name: str) → str[source]

Get item (can be a model, observation, or test) in index by name.

Args:
name (str): The name of the item.
Raises:
KeyError: Item not found.
Returns:
Any: Item found.
norm_scores

A series of norm scores.

Returns:
Series: A series of norm scores.
state_hide = ['related_data', 'scores', 'norm_scores', 'style', 'plot', 'iat', 'at', 'iloc', 'loc', 'T']
class sciunit.scores.collections_m2m.ScoreMatrixM2M(test: sciunit.tests.Test, models: List[sciunit.models.base.Model], scores: List[sciunit.scores.Score])[source]

Bases: pandas.core.frame.DataFrame, sciunit.base.SciUnit

Represents a matrix of scores derived from TestM2M. Extends the pandas DataFrame such that models/observation are both columns and the index.

__getattr__(name: str) → Any[source]

After regular attribute access, try looking up the name. This allows simpler access to columns for interactive use.

__getitem__(item: Union[Tuple[sciunit.tests.Test, sciunit.models.base.Model], str, Tuple[list, tuple]]) → Any[source]
__init__(test: sciunit.tests.Test, models: List[sciunit.models.base.Model], scores: List[sciunit.scores.Score])[source]

Initialize self. See help(type(self)) for accurate signature.

__module__ = 'sciunit.scores.collections_m2m'
get_by_name(name: str) → Union[sciunit.models.base.Model, sciunit.tests.Test][source]

Get the model or test from the models or tests by name.

Args:
name (str): The name of the model or test.
Raises:
KeyError: Raised if there is no model or test with the given name.
Returns:
Union[Model, Test]: The test or model found.
get_group(x: list) → Any[source]

Get the score for a pair of items, each given by object or by name.

Args:
x (list): A pair such as (model, model) or (model, 'observation').
Raises:
TypeError: Raised if x is not a valid pair.
Returns:
Any: The score for the given pair.
norm_scores

Get a pandas DataFrame instance that contains norm scores.

Returns:
DataFrame: A pandas DataFrame instance that contains norm scores.
state_hide = ['related_data', 'scores', 'norm_scores', 'style', 'plot', 'iat', 'at', 'iloc', 'loc', 'T']

sciunit.scores.complete module

Score types for tests that completed successfully.

These include various representations of goodness-of-fit.

class sciunit.scores.complete.BooleanScore(score: Union[Score, float, int, quantities.quantity.Quantity], related_data: dict = None)[source]

Bases: sciunit.scores.base.Score

A boolean score, which must be True or False.

__module__ = 'sciunit.scores.complete'
__str__() → str[source]

Return the string representation of this score ('Pass' or 'Fail').

_allowed_types = (<class 'bool'>,)
_best = True
_description = 'True if the observation and prediction were sufficiently similar; False otherwise'
_worst = False
classmethod compute(observation: dict, prediction: dict) → sciunit.scores.complete.BooleanScore[source]

Compute whether the observation equals the prediction.

Returns:
BooleanScore: A True score if the observation equals the prediction, otherwise a False score.
norm_score

Return 1.0 for a True score and 0.0 for a False score.

Returns:
float: 1.0 for a True score and 0.0 for a False score.
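A compute sketch; per the documentation above, the score is True exactly when the observation equals the prediction:

>>> from sciunit.scores import BooleanScore
>>> BooleanScore.compute({'value': 1}, {'value': 1}).score
True
>>> BooleanScore.compute({'value': 1}, {'value': 2}).norm_score
0.0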
class sciunit.scores.complete.CohenDScore(score: Union[Score, float, int, quantities.quantity.Quantity], related_data: dict = None)[source]

Bases: sciunit.scores.complete.ZScore

A Cohen’s D score.

A float indicating difference between two means normalized by the pooled standard deviation.

__module__ = 'sciunit.scores.complete'
__str__() → str[source]

Return the string representation of this score.

_best = 0.0
_description = "The Cohen's D between the prediction and the observation"
_worst = inf
classmethod compute(observation: dict, prediction: dict) → sciunit.scores.complete.CohenDScore[source]

Compute a Cohen’s D from an observation and a prediction.

Returns:
CohenDScore: The computed Cohen’s D Score.
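A compute sketch, assuming both dictionaries supply 'mean' and 'std' keys (plain floats here; quantities with units also work):

>>> from sciunit.scores import CohenDScore
>>> d = CohenDScore.compute({'mean': 10.0, 'std': 2.0},
...                         {'mean': 12.0, 'std': 2.0})
>>> value = d.score  # the difference of means scaled by the pooled standard deviation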
class sciunit.scores.complete.CorrelationScore(score: Union[Score, float, int, quantities.quantity.Quantity], related_data: dict = None)[source]

Bases: sciunit.scores.base.Score

A correlation score. A float in the range [-1.0, 1.0] representing the correlation coefficient.

__module__ = 'sciunit.scores.complete'
__str__()[source]

Return the string representation of this score.

_best = 1.0
_check_score(score)[source]

A method for each Score subclass to impose additional constraints on the score, e.g. the range of the allowed score.

Args:
score (Score): A sciunit score instance.
_description = 'A correlation of -1.0 shows a perfect negative correlation, while a correlation of 1.0 shows a perfect positive correlation. A correlation of 0.0 shows no linear relationship between the movement of the two variables'
_worst = -1.0
classmethod compute(observation, prediction)[source]

Compute the correlation between the observation and the prediction.

class sciunit.scores.complete.FloatScore(score: Union[Score, float, int, quantities.quantity.Quantity], related_data: dict = None)[source]

Bases: sciunit.scores.base.Score

A float score.

A float with any value.

__module__ = 'sciunit.scores.complete'
__str__() → str[source]

Return the string representation of this score.

_allowed_types = (<class 'float'>, <class 'quantities.quantity.Quantity'>)
_best = 0.0
_check_score(score)[source]

A method for each Score subclass to impose additional constraints on the score, e.g. the range of the allowed score.

Args:
score (Score): A sciunit score instance.
_description = 'There is no canonical mapping between this score type and a measure of agreement between the observation and the prediction'
_worst = 0.0
classmethod compute_ssd(observation: dict, prediction: dict) → sciunit.scores.base.Score[source]

Compute sum-squared diff between observation and prediction.

Args:
observation (dict): The observation to be used for computing the sum-squared diff. prediction (dict): The prediction to be used for computing the sum-squared diff.
Returns:
Score: The sum-squared diff between observation and prediction.
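A compute_ssd sketch; despite the dict annotation in the signature, the sum-squared difference is most simply illustrated with array-like inputs (an assumption here):

>>> import numpy as np
>>> from sciunit.scores import FloatScore
>>> score = FloatScore.compute_ssd(np.array([1.0, 2.0]), np.array([1.0, 4.0]))
>>> ssd = score.score  # (1-1)**2 + (2-4)**2 = 4.0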
class sciunit.scores.complete.PercentScore(score: Union[Score, float, int, quantities.quantity.Quantity], related_data: dict = None)[source]

Bases: sciunit.scores.base.Score

A percent score.

A float in the range [0, 100.0] where higher is better.

__module__ = 'sciunit.scores.complete'
__str__() → str[source]

Return the string representation of this score.

_best = 100.0
_check_score(score)[source]

A method for each Score subclass to impose additional constraints on the score, e.g. the range of the allowed score.

Args:
score (Score): A sciunit score instance.
_description = '100.0 is considered perfect agreement between the observation and the prediction. 0.0 is the worst possible agreement'
_worst = 0.0
norm_score

Return 1.0 for a percent score of 100, and 0.0 for 0.

Returns:
float: The percent score divided by 100.
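A norm_score sketch, assuming the linear mapping described above (100 maps to 1.0 and 0 to 0.0):

>>> from sciunit.scores import PercentScore
>>> PercentScore(75.0).norm_score
0.75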
class sciunit.scores.complete.RandomScore(score: Union[Score, float, int, quantities.quantity.Quantity], related_data: dict = None)[source]

Bases: sciunit.scores.complete.FloatScore

A random score in [0,1].

This has no scientific value and should only be used for debugging purposes. For example, one might assign a random score under some error condition to move forward with an application that requires a numeric score, and use the presence of a RandomScore in the output as an indication of an internal error.

__module__ = 'sciunit.scores.complete'
__str__() → str[source]

Return the string representation of this score.

_allowed_types = (<class 'float'>,)
_description = 'A random number in [0,1] that has no relation to the prediction or the observation'
class sciunit.scores.complete.RatioScore(score: Union[Score, float, int, quantities.quantity.Quantity], related_data: dict = None)[source]

Bases: sciunit.scores.base.Score

A ratio of two numbers.

Usually the prediction divided by the observation.

__module__ = 'sciunit.scores.complete'
__str__()[source]

Return the string representation of this score.

_allowed_types = (<class 'float'>,)
_best = 1.0
_check_score(score)[source]

A method for each Score subclass to impose additional constraints on the score, e.g. the range of the allowed score.

Args:
score (Score): A sciunit score instance.
_description = 'The ratio between the prediction and the observation'
_worst = inf
classmethod compute(observation: dict, prediction: dict, key=None) → sciunit.scores.complete.RatioScore[source]

Compute a ratio from an observation and a prediction.

Returns:
RatioScore: A RatioScore of ratio from an observation and a prediction.
norm_score

Return 1.0 for a ratio of 1, falling to 0.0 for extremely small or large values.

Returns:
float: The value of the norm score.
observation_schema = {'value': {'required': True, 'units': True}}
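A compute sketch using plain-float 'value' entries (the observation schema above additionally requires units in real tests):

>>> from sciunit.scores import RatioScore
>>> RatioScore.compute({'value': 2.0}, {'value': 4.0}).score  # prediction / observation
2.0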
class sciunit.scores.complete.RelativeDifferenceScore(score: Union[Score, float, int, quantities.quantity.Quantity], related_data: dict = None)[source]

Bases: sciunit.scores.base.Score

A relative difference between prediction and observation.

The absolute value of the difference between the prediction and the observation is divided by a reference value with the same units. This reference scale should be chosen for each test such that normalization produces directly comparable scores across tests. For example, if 5 volts represents a medium size difference for TestA, and 10 seconds represents a medium size difference for TestB, then 5 volts and 10 seconds should be used for this reference scale in TestA and TestB, respectively. The attribute scale can be passed to the compute method or set for the whole class in advance. Otherwise, a scale of 1 (in the units of the observation and prediction) will be used.

__module__ = 'sciunit.scores.complete'
__str__()[source]

Return the string representation of this score.

_allowed_types = (<class 'float'>,)
_best = 0.0
_check_score(score)[source]

A method for each Score subclass to impose additional constraints on the score, e.g. the range of the allowed score.

Args:
score (Score): A sciunit score instance.
_description = 'The relative difference between the prediction and the observation'
_worst = inf
classmethod compute(observation: Union[dict, float, int, quantities.quantity.Quantity], prediction: Union[dict, float, int, quantities.quantity.Quantity], key=None, scale: Union[float, int, quantities.quantity.Quantity, None] = None) → sciunit.scores.complete.RelativeDifferenceScore[source]

Compute the relative difference between the observation and a prediction.

Returns:
RelativeDifferenceScore: A relative difference between an observation and a prediction.
norm_score

Return 1.0 for a relative difference of 0.0, falling to 0.0 for extremely large values.

Returns:
float: The value of the norm score.
scale = None
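A compute sketch with plain floats and an explicit scale (without scale, 1 in the data's units is used, as described above):

>>> from sciunit.scores import RelativeDifferenceScore
>>> RelativeDifferenceScore.compute(10.0, 12.0, scale=2.0).score  # abs(12 - 10) / 2
1.0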
class sciunit.scores.complete.ZScore(score: Union[Score, float, int, quantities.quantity.Quantity], related_data: dict = None)[source]

Bases: sciunit.scores.base.Score

A Z score.

A float indicating standardized difference from a reference mean.

__module__ = 'sciunit.scores.complete'
__str__() → str[source]

Return the string representation of this score.

_allowed_types = (<class 'float'>,)
_best = 0.0
_description = 'The difference between the means of the observation and prediction divided by the standard deviation of the observation'
_worst = inf
classmethod compute(observation: dict, prediction: dict) → sciunit.scores.complete.ZScore[source]

Compute a z-score from an observation and a prediction.

Returns:
ZScore: The computed Z-Score.
norm_score

Return the normalized score.

Equals 1.0 for a z-score of 0, falling to 0.0 for extremely positive or negative values.

classmethod observation_postprocess(observation: dict) → dict[source]
observation_schema = [('Mean, Standard Deviation, N', {'mean': {'units': True, 'required': True}, 'std': {'units': True, 'min': 0, 'required': True}, 'n': {'type': 'integer', 'min': 1}}), ('Mean, Standard Error, N', {'mean': {'units': True, 'required': True}, 'sem': {'units': True, 'min': 0, 'required': True}, 'n': {'type': 'integer', 'min': 1, 'required': True}})]
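A compute sketch with plain floats (in real tests the observation carries units and is validated against the schema above):

>>> from sciunit.scores import ZScore
>>> ZScore.compute({'mean': 10.0, 'std': 2.0}, {'value': 12.0}).score  # (12 - 10) / 2
1.0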

sciunit.scores.incomplete module

Score types for tests that did not complete successfully.

These include details about the various possible reasons that a particular combination of model and test could not be completed.

class sciunit.scores.incomplete.InsufficientDataScore(score: sciunit.scores.base.Score, related_data: dict = None)[source]

Bases: sciunit.scores.incomplete.NoneScore

A score returned when the model or test data is insufficient to score the test.

__module__ = 'sciunit.scores.incomplete'
description = 'Insufficient Data'
class sciunit.scores.incomplete.NAScore(score: sciunit.scores.base.Score, related_data: dict = None)[source]

Bases: sciunit.scores.incomplete.NoneScore

A N/A (not applicable) score.

Indicates that the model doesn’t have the capabilities that the test requires.

__module__ = 'sciunit.scores.incomplete'
description = 'N/A'
class sciunit.scores.incomplete.NoneScore(score: sciunit.scores.base.Score, related_data: dict = None)[source]

Bases: sciunit.scores.base.Score

A None score.

Usually indicates that the model has not been checked to see if it has the capabilities required by the test.

__init__(score: sciunit.scores.base.Score, related_data: dict = None)[source]

Abstract base class for scores.

Args:
score (Union[‘Score’, float, int, Quantity]): A raw value to wrap in a Score class.
related_data (dict, optional): Artifacts to store with the score.
__module__ = 'sciunit.scores.incomplete'
__str__() → str[source]

Return the string representation of this score.

norm_score

Return None as the norm score of this NoneScore instance.

Returns:
None: The norm score, which is None.
class sciunit.scores.incomplete.TBDScore(score: sciunit.scores.base.Score, related_data: dict = None)[source]

Bases: sciunit.scores.incomplete.NoneScore

A TBD (to be determined) score. Indicates that the model has capabilities required by the test but has not yet taken it.

__module__ = 'sciunit.scores.incomplete'
description = 'None'

Module contents

Contains classes for different representations of test scores.

It also contains score collections such as arrays and matrices.