_Data#

class myoverse.datatypes._Data(raw_data, sampling_frequency, nr_of_dimensions_when_unchunked)[source]#

Base class for all data types.

This class provides common functionality for handling different types of data, including maintaining original and processed representations.

Parameters:
  • raw_data (np.ndarray) – The raw data to store.

  • sampling_frequency (float) – The sampling frequency of the data.

  • nr_of_dimensions_when_unchunked (int) – The expected number of dimensions of the data when it is not chunked.

sampling_frequency#

The sampling frequency of the data.

Type:

float

_last_processing_step#

The last processing step applied to the data.

Type:

str

_data#

Dictionary holding all data representations. The keys are the names of the representations and the values are either numpy arrays or DeletedRepresentation objects (for representations whose data has been deleted to save memory).

Type:

dict[str, np.ndarray | DeletedRepresentation]

Raises:

ValueError – If the sampling frequency is less than or equal to 0.

Notes

Memory Management:

When representations are deleted with delete_data(), they are replaced with DeletedRepresentation objects that store essential metadata (shape, dtype) but don’t consume memory for the actual data. The chunking status is determined from the shape when needed.
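The placeholder pattern described above can be sketched with a minimal stand-in class; `DeletedRepresentationSketch` below is a simplified illustration of the idea, not the actual myoverse implementation:

```python
import numpy as np


class DeletedRepresentationSketch:
    """Metadata-only placeholder: keeps shape and dtype, drops the array."""

    def __init__(self, array: np.ndarray):
        # Store only cheap metadata; the array itself can be garbage-collected.
        self.shape = array.shape
        self.dtype = array.dtype


# A toy representations dictionary, analogous to the _data attribute.
data = {"Input": np.random.randn(16, 1000)}

# "Deleting" a representation swaps the array for its metadata.
data["Input"] = DeletedRepresentationSketch(data["Input"])

print(data["Input"].shape)  # (16, 1000)
print(data["Input"].dtype)  # float64
```

The chunking status can later be re-derived from the stored shape, which is why the placeholder keeps it.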

Examples

This is an abstract base class and should not be instantiated directly. Instead, use one of the concrete subclasses like EMGData or KinematicsData:

>>> import numpy as np
>>> from myoverse.datatypes import EMGData
>>>
>>> # Create sample data
>>> data = np.random.randn(16, 1000)
>>> emg = EMGData(data, 2000)  # 2000 Hz sampling rate
>>>
>>> # Access attributes from the base _Data class
>>> print(f"Sampling frequency: {emg.sampling_frequency} Hz")
>>> print(f"Is input data chunked: {emg.is_chunked['Input']}")

Methods

__copy__()

Create a shallow copy of the instance.

__getitem__(key)

__init__(raw_data, sampling_frequency, ...)

__repr__()

Return repr(self).

__setitem__(key, value)

__str__()

Return str(self).

_check_if_chunked(data)

Checks if the data is chunked or not.

delete_data(representation_to_delete)

Delete data from a representation while keeping its metadata.

load(filename)

Load data from a file.

memory_usage()

Calculate memory usage of each representation.

plot(*_, **__)

Plots the data.

save(filename)

Save the data to a file.

classmethod load(filename)[source]#

Load data from a file.

Parameters:

filename (str) – The name of the file to load the data from.

Returns:

The loaded data.

Return type:

_Data

delete_data(representation_to_delete)[source]#

Delete data from a representation while keeping its metadata.

This replaces the actual numpy array with a DeletedRepresentation object that contains metadata about the array, saving memory while allowing regeneration when needed.

Parameters:

representation_to_delete (str) – The representation to delete the data from.

memory_usage()[source]#

Calculate memory usage of each representation.

Returns:

Dictionary mapping each representation name to a tuple of (shape as a string, memory usage in bytes).

Return type:

dict[str, tuple[str, int]]
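As a sketch of the documented return format for plain numpy arrays (this reimplements the shape-string/bytes convention for illustration, not the library's internals):

```python
import numpy as np


def memory_usage_sketch(
    representations: dict[str, np.ndarray],
) -> dict[str, tuple[str, int]]:
    """Map each representation name to (shape as string, bytes used)."""
    return {
        name: (str(arr.shape), arr.nbytes)
        for name, arr in representations.items()
    }


reps = {
    "Input": np.zeros((16, 1000)),                    # float64: 8 bytes/element
    "Filtered": np.zeros((16, 500), dtype=np.float32),  # float32: 4 bytes/element
}
print(memory_usage_sketch(reps))
# {'Input': ('(16, 1000)', 128000), 'Filtered': ('(16, 500)', 32000)}
```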

abstractmethod plot(*_, **__)[source]#

Plots the data.

save(filename)[source]#

Save the data to a file.

Parameters:

filename (str) – The name of the file to save the data to.
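The on-disk format used by save()/load() is not specified here; the round-trip pattern itself can be sketched with pickle (an assumption for illustration, not necessarily the actual format):

```python
import pickle
import tempfile

import numpy as np

# Toy state standing in for a _Data instance's contents.
state = {"sampling_frequency": 2000.0, "Input": np.arange(8, dtype=float)}

# Write the state to a temporary file, then read it back.
with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as f:
    pickle.dump(state, f)
    path = f.name

with open(path, "rb") as f:
    loaded = pickle.load(f)

assert loaded["sampling_frequency"] == 2000.0
assert np.array_equal(loaded["Input"], state["Input"])
```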

property input_data: ndarray#

Returns the input data.

property is_chunked: dict[str, bool]#

Returns whether the data is chunked or not.

Returns:

A dictionary where the keys are the representations and the values are whether the data is chunked or not.

Return type:

dict[str, bool]
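Given that the constructor takes nr_of_dimensions_when_unchunked and the Notes state that chunking status is determined from the shape, the check can be sketched as a dimensionality comparison (an illustration of the idea; the actual _check_if_chunked logic may differ):

```python
import numpy as np


def is_chunked_sketch(arr: np.ndarray, nr_of_dimensions_when_unchunked: int) -> bool:
    # An extra leading dimension is taken to mean the data is chunked.
    return arr.ndim > nr_of_dimensions_when_unchunked


unchunked = np.zeros((16, 1000))   # (channels, samples)
chunked = np.zeros((10, 16, 100))  # (chunks, channels, samples)

print(is_chunked_sketch(unchunked, 2))  # False
print(is_chunked_sketch(chunked, 2))   # True
```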

property processed_representations: dict[str, ndarray]#

Returns the processed representations of the data.