Internal API
MaskContainer
libertem.common.container.MaskContainer helps to implement highly efficient mask application operations, such as virtual detector, center of mass or feature vector calculations.
Changed in version 0.4.0: Moved from libertem.job.masks to libertem.common.container to use it in UDFs and prepare deprecation of the Job interface.
- class libertem.common.container.MaskContainer(mask_factories: Callable[[], ndarray | matrix | SparseArray | coo_matrix | csr_matrix | csc_matrix | coo_array | csr_array | csc_array | cupy.ndarray | cupyx.scipy.sparse.coo_matrix | cupyx.scipy.sparse.csr_matrix | cupyx.scipy.sparse.csc_matrix] | Sequence[Callable[[], ndarray | matrix | SparseArray | coo_matrix | csr_matrix | csc_matrix | coo_array | csr_array | csc_array | cupy.ndarray | cupyx.scipy.sparse.coo_matrix | cupyx.scipy.sparse.csr_matrix | cupyx.scipy.sparse.csc_matrix]], dtype: dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any] = None, use_sparse: bool | Literal['sparse.pydata', 'sparse.pydata.GCXS', 'scipy.sparse', 'scipy.sparse.csc', 'scipy.sparse.csr'] | None = None, count: int | None = None, backend: Literal['numpy', 'numpy.matrix', 'cuda', 'cupy', 'sparse.COO', 'sparse.GCXS', 'sparse.DOK', 'scipy.sparse.coo_matrix', 'scipy.sparse.csr_matrix', 'scipy.sparse.csc_matrix', 'scipy.sparse.coo_array', 'scipy.sparse.csr_array', 'scipy.sparse.csc_array', 'cupyx.scipy.sparse.coo_matrix', 'cupyx.scipy.sparse.csr_matrix', 'cupyx.scipy.sparse.csc_matrix'] | None = None, default_sparse: Literal['sparse.pydata', 'sparse.pydata.GCXS', 'scipy.sparse', 'scipy.sparse.csc', 'scipy.sparse.csr'] = 'scipy.sparse')[source]
Container for mask stacks that are created from factory functions.
It allows stacking, cached slicing, transposing and conversion to condition the masks for high-performance dot products.
Computation of masks is delayed until as late as possible, but is done automatically when necessary. Methods which can trigger mask instantiation include:
- container.use_sparse
- len(container) [if the count argument is None at __init__]
- container.dtype [if the dtype argument is None at __init__]
- any of the get() methods
use_sparse at __init__ can be None, False, True or any supported sparse backend given as a string in {‘scipy.sparse’, ‘scipy.sparse.csc’, ‘scipy.sparse.csr’, ‘sparse.pydata’, ‘sparse.pydata.GCXS’}.
use_sparse as None means the sparse mode is chosen only after the masks are instantiated: if all masks are sparse, sparse processing is activated using the backend given in default_sparse; otherwise dense processing is used on the appropriate backend.
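For illustration only (this example is not part of the upstream docstring), a minimal sketch of constructing a MaskContainer from factory functions; the mask shape, dtype and count used here are arbitrary assumptions:
>>> import numpy as np
>>> from libertem.common.container import MaskContainer
>>> def ring():
...     # factories are only called once the masks are actually needed
...     return np.ones((16, 16), dtype=np.float32)
>>> def disk():
...     return np.zeros((16, 16), dtype=np.float32)
>>> c = MaskContainer(mask_factories=[ring, disk], dtype=np.float32, count=2)
>>> len(c)  # known from count, so this does not instantiate the masks
2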
Shapes and slices
These classes help to manipulate shapes and slices of n-dimensional binary data to facilitate the MapReduce-like processing of LiberTEM. See Concepts for a high-level introduction.
- class libertem.common.shape.Shape(shape: Shape | Sequence[int], sig_dims: int)[source]
Create a Shape that knows how many dimensions are part of navigation/signal. It is assumed that the signal is in the last sig_dims dimensions.
- Parameters:
- property dims: int
Number of dimensions
Examples
>>> from libertem.common import Shape
>>> s = Shape((5, 5, 16, 16), sig_dims=2)
>>> s.dims
4
>>> s.nav.dims  # creates a new temporary Shape and accesses .dims on it
2
>>> s.sig.dims
2
- flatten_nav() Shape [source]
Returns a new Shape that is flat in the navigation dimensions
Examples
>>> from libertem.common import Shape
>>> s = Shape((5, 5, 16, 16), sig_dims=2)
>>> s.flatten_nav()
(25, 16, 16)
- flatten_sig() Shape [source]
Flatten in the signal dimensions
Examples
>>> from libertem.common import Shape
>>> s = Shape((5, 5, 16, 16), sig_dims=2)
>>> s.flatten_sig()
(5, 5, 256)
- property nav: Shape
Crop to navigation dimensions
#TODO Should be refactored with functools.cached_property when supported
- Returns:
shape – like this shape, but without the signal dimensions
- Return type:
Examples
>>> from libertem.common import Shape
>>> s = Shape((5, 5, 16, 16), sig_dims=2)
>>> s.nav
(5, 5)
- property sig: Shape
Crop to signal dimensions
#TODO Should be refactored with functools.cached_property when supported
- Returns:
shape – like this shape, but without the navigation dimensions
- Return type:
Examples
>>> from libertem.common import Shape
>>> s = Shape((5, 5, 16, 16), sig_dims=2)
>>> s.sig
(16, 16)
- class libertem.common.shape.SigOnlyShape(shape: Shape | Sequence[int])[source]
- property dims: int
Number of dimensions
Examples
>>> from libertem.common import Shape
>>> s = Shape((5, 5, 16, 16), sig_dims=2)
>>> s.dims
4
>>> s.nav.dims  # creates a new temporary Shape and accesses .dims on it
2
>>> s.sig.dims
2
- flatten_nav() Shape [source]
Returns a new Shape that is flat in the navigation dimensions
Examples
>>> from libertem.common import Shape
>>> s = Shape((5, 5, 16, 16), sig_dims=2)
>>> s.flatten_nav()
(25, 16, 16)
- class libertem.common.slice.Slice(origin: Sequence[int], shape: Shape)[source]
A n-dimensional slice, defined by origin and shape
- Parameters:
- adjust_for_roi(roi: ndarray | None) Slice [source]
Make a new slice that has origin and shape modified according to roi.
Returns a copy with the origin/shape zeroed in the nav dimensions; this is used to create uniform cache keys.
- classmethod from_shape(shape: Sequence[int], sig_dims: int) Slice [source]
Construct a Slice at zero-origin from shape and sig_dims.
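As an illustrative sketch (not part of the upstream docs), constructing a zero-origin Slice from a plain shape tuple:
>>> from libertem.common import Slice, Shape
>>> s = Slice.from_shape((16, 16, 4, 4), sig_dims=2)
>>> s.shape.sig.dims  # origin is all zeros; the shape keeps sig_dims=2
2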
- get(arr: None = None, sig_only: bool = False, nav_only: bool = False) tuple[slice, ...] [source]
- get(arr: ndarray, sig_only: bool = False, nav_only: bool = False) ndarray
Get a standard python tuple-of-slice-object which can be used to slice any compatible numpy.ndarray
- Parameters:
- Returns:
standard Python slices computed from our origin+shape model, or arr indexed with this slicing if arr is given
- Return type:
tuple of slice objects
Examples
>>> import numpy as np
>>> from libertem.common import Slice, Shape
>>> s = Slice(shape=Shape((16, 16, 4, 4), sig_dims=2), origin=(0, 0, 12, 12))
>>> data = np.ones((16, 16))
>>> data[s.get(sig_only=True)]
array([[1., 1., 1., 1.],
       [1., 1., 1., 1.],
       [1., 1., 1., 1.],
       [1., 1., 1., 1.]])
- intersection_with(other: Slice) Slice [source]
Calculate the intersection between this slice and other. May result in dimensions that are zero, which means that there is no intersection.
- Returns:
slice – the intersection between this and the other slice
- Return type:
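A hedged sketch (not from the upstream docs) of intersecting two overlapping slices; the origins and shapes are made up for illustration:
>>> from libertem.common import Slice, Shape
>>> a = Slice(origin=(0, 0, 0, 0), shape=Shape((4, 4, 16, 16), sig_dims=2))
>>> b = Slice(origin=(2, 2, 0, 0), shape=Shape((4, 4, 16, 16), sig_dims=2))
>>> c = a.intersection_with(b)  # overlap starts at (2, 2, 0, 0) with a 2x2 nav extent
>>> c.is_null()
False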
- is_null() bool [source]
If any part of our shape is zero, this slice doesn’t span any data and is null / empty.
Returns a new Slice, with sig_dims=0, limited to the nav part
- origin
- shape
- shift(other: Slice) Slice [source]
Make a new Slice with origin relative to other.origin and the same shape as this Slice.
Useful for translating to the local coordinate system of other.
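For illustration (not part of the upstream docs), shifting a tile slice into the local coordinate system of a partition slice; this assumes origin is reported as a plain tuple:
>>> from libertem.common import Slice, Shape
>>> part = Slice(origin=(4, 0, 0), shape=Shape((4, 16, 16), sig_dims=2))
>>> tile = Slice(origin=(5, 0, 0), shape=Shape((1, 16, 16), sig_dims=2))
>>> local = tile.shift(part)
>>> local.origin  # tile origin expressed relative to part.origin
(1, 0, 0)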
CPU and CUDA devices
These methods get and set information that controls on which devices a computation runs.
- libertem.common.backend.get_device_class()[source]
New in version 0.6.0.
- Returns:
class – Device class to use. Can be ‘cpu’ or ‘cuda’. Default is ‘cpu’ if no settings were applied before.
- Return type:
- libertem.common.backend.get_use_cpu()[source]
New in version 0.6.0.
- Returns:
id – CPU device ID to use. Currently there is no pinning, i.e. the value itself is ignored. None means “don’t use CPU” and any integer means “use CPU”. Default is 0 if no settings were applied before.
- Return type:
int or None
- libertem.common.backend.get_use_cuda() int | None [source]
New in version 0.6.0.
- Returns:
id – CUDA device ID to use.
- Return type:
int or None
- libertem.common.backend.set_use_cpu(cpu: int)[source]
This sets a CPU device ID and unsets any CUDA ID.
New in version 0.6.0.
- Parameters:
cpu (int) – CPU to use. The value is currently ignored, i.e. any CPU is used without pinning
- libertem.common.backend.set_use_cuda(cuda_device: int)[source]
This sets a CUDA device ID and unsets any CPU ID.
New in version 0.6.0.
- Parameters:
cuda_device (int) – CUDA device ID to use
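A short illustrative sketch (not from the upstream docs) of the get/set pattern described above:
>>> from libertem.common.backend import set_use_cpu, get_device_class, get_use_cuda
>>> set_use_cpu(0)
>>> get_device_class()
'cpu'
>>> get_use_cuda() is None  # set_use_cpu() unsets any CUDA device ID
True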
- libertem.utils.devices.detect() DetectResult [source]
Detect which devices are present
New in version 0.6.0.
- Returns:
Dictionary with keys 'cpus' and 'cudas', each containing a list of devices. Only physical CPU cores are counted, i.e. no hyperthreading. Additionally it has the key 'has_cupy', which signals if cupy is installed and available.
- Return type:
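For illustration (not part of the upstream docs), a machine-independent way to inspect the result; the concrete device lists depend on the hardware:
>>> from libertem.utils.devices import detect
>>> result = detect()
>>> isinstance(result['cpus'], list) and isinstance(result['cudas'], list)
True
>>> result['has_cupy'] in (True, False)
True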
- libertem.utils.devices.has_cupy()[source]
Probe if cupy was loaded successfully.
New in version 0.6.0.
CuPy is an optional dependency with special integration for UDFs. See CuPy support for details.
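As a small illustrative check (assuming, as the docstring implies, that has_cupy() returns a plain bool):
>>> from libertem.utils.devices import has_cupy
>>> has_cupy() in (True, False)
True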