Internal API

MaskContainer

libertem.common.container.MaskContainer helps to implement highly efficient mask application operations, such as virtual detector, center of mass or feature vector calculations.

Changed in version 0.4.0: Moved from libertem.job.masks to libertem.common.container to use it in UDFs and prepare deprecation of the Job interface.

class libertem.common.container.MaskContainer(mask_factories: Callable[[], ndarray | matrix | SparseArray | coo_matrix | csr_matrix | csc_matrix | coo_array | csr_array | csc_array | cupy.ndarray | cupyx.scipy.sparse.coo_matrix | cupyx.scipy.sparse.csr_matrix | cupyx.scipy.sparse.csc_matrix] | Sequence[Callable[[], ndarray | matrix | SparseArray | coo_matrix | csr_matrix | csc_matrix | coo_array | csr_array | csc_array | cupy.ndarray | cupyx.scipy.sparse.coo_matrix | cupyx.scipy.sparse.csr_matrix | cupyx.scipy.sparse.csc_matrix]], dtype: dtype[Any] | None | type[Any] | _SupportsDType[dtype[Any]] | str | tuple[Any, int] | tuple[Any, SupportsIndex | Sequence[SupportsIndex]] | list[Any] | _DTypeDict | tuple[Any, Any] = None, use_sparse: bool | Literal['sparse.pydata', 'sparse.pydata.GCXS', 'scipy.sparse', 'scipy.sparse.csc', 'scipy.sparse.csr'] | None = None, count: int | None = None, backend: Literal['numpy', 'numpy.matrix', 'cuda', 'cupy', 'sparse.COO', 'sparse.GCXS', 'sparse.DOK', 'scipy.sparse.coo_matrix', 'scipy.sparse.csr_matrix', 'scipy.sparse.csc_matrix', 'scipy.sparse.coo_array', 'scipy.sparse.csr_array', 'scipy.sparse.csc_array', 'cupyx.scipy.sparse.coo_matrix', 'cupyx.scipy.sparse.csr_matrix', 'cupyx.scipy.sparse.csc_matrix'] | None = None, default_sparse: Literal['sparse.pydata', 'sparse.pydata.GCXS', 'scipy.sparse', 'scipy.sparse.csc', 'scipy.sparse.csr'] = 'scipy.sparse')[source]

Container for mask stacks that are created from factory functions.

It allows stacking, cached slicing, transposing and conversion to condition the masks for high-performance dot products.

Computation of the masks is delayed as long as possible, but happens automatically when necessary. Accessors that can trigger mask instantiation include:

  • container.use_sparse

  • len(container) [if the count argument is None at __init__]

  • container.dtype [if the dtype argument is None at __init__]

  • any of the get() methods

use_sparse at __init__ can be None, False, True or any supported sparse backend given as a string from {‘scipy.sparse’, ‘scipy.sparse.csc’, ‘scipy.sparse.csr’, ‘sparse.pydata’, ‘sparse.pydata.GCXS’}.

If use_sparse is None, the sparse mode is chosen only after the masks are instantiated: if all masks are sparse, sparse processing is activated using the backend given in default_sparse; otherwise dense processing is used on the appropriate backend.
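
For illustration, a minimal sketch (not taken from the upstream examples) of building a container from two hypothetical factory functions; the factory and variable names are made up, and the masks are only computed once a property or get() method actually needs them:

>>> import numpy as np
>>> from libertem.common.container import MaskContainer
>>> def ring_factory():
...     # hypothetical placeholder mask; a real factory would return e.g. a ring
...     return np.ones((16, 16), dtype=np.float32)
>>> def disk_factory():
...     return np.zeros((16, 16), dtype=np.float32)
>>> container = MaskContainer(
...     mask_factories=[ring_factory, disk_factory], dtype=np.float32
... )
>>> len(container)  # count was not given, so this instantiates the masks
2
>>> container.dtype == np.float32
True

Passing count and an explicit dtype up front avoids the early instantiation that len() and the dtype property would otherwise trigger.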

get_for_sig_slice(sig_slice: Slice, *args, **kwargs)[source]

Same as get, but without calling discard_nav() on the slice

Shapes and slices

These classes help to manipulate shapes and slices of n-dimensional binary data to facilitate the MapReduce-like processing of LiberTEM. See Concepts for a high-level introduction.

class libertem.common.shape.NavOnlyShape(shape: Shape | Sequence[int])[source]
property dims: int

Number of dimensions

Examples

>>> from libertem.common import Shape
>>> s = Shape((5, 5, 16, 16), sig_dims=2)
>>> s.dims
4
>>> s.nav.dims  # creates a new temporary Shape and accesses .dims on it
2
>>> s.sig.dims
2
flatten_sig() → Shape[source]

Flatten in the signal dimensions

Examples

>>> from libertem.common import Shape
>>> s = Shape((5, 5, 16, 16), sig_dims=2)
>>> s.flatten_sig()
(5, 5, 256)
property nav_dims: int
property sig_dims: int
to_tuple() → tuple[int, ...][source]
class libertem.common.shape.Shape(shape: Shape | Sequence[int], sig_dims: int)[source]

Create a Shape that knows how many dimensions are part of navigation/signal. It is assumed that the signal is in the last sig_dims dimensions.

Parameters:
  • shape (tuple of int) – the shape we want to work with, as n-tuple (like numpy array shapes)

  • sig_dims (int) – the number of dimensions that are considered part of the signal

property dims: int

Number of dimensions

Examples

>>> from libertem.common import Shape
>>> s = Shape((5, 5, 16, 16), sig_dims=2)
>>> s.dims
4
>>> s.nav.dims  # creates a new temporary Shape and accesses .dims on it
2
>>> s.sig.dims
2
flatten_nav() → Shape[source]

Returns a new Shape that is flat in the navigation dimensions

Examples

>>> from libertem.common import Shape
>>> s = Shape((5, 5, 16, 16), sig_dims=2)
>>> s.flatten_nav()
(25, 16, 16)
flatten_sig() → Shape[source]

Flatten in the signal dimensions

Examples

>>> from libertem.common import Shape
>>> s = Shape((5, 5, 16, 16), sig_dims=2)
>>> s.flatten_sig()
(5, 5, 256)
property nav: Shape

Crop to navigation dimensions

#TODO Should be refactored with functools.cached_property when supported

Returns:

shape – like this shape, but without the signal dimensions

Return type:

Shape

Examples

>>> from libertem.common import Shape
>>> s = Shape((5, 5, 16, 16), sig_dims=2)
>>> s.nav
(5, 5)
property nav_dims: int
property sig: Shape

Crop to signal dimensions

#TODO Should be refactored with functools.cached_property when supported

Returns:

shape – like this shape, but without the navigation dimensions

Return type:

Shape

Examples

>>> from libertem.common import Shape
>>> s = Shape((5, 5, 16, 16), sig_dims=2)
>>> s.sig
(16, 16)
property sig_dims: int
property size: int

Number of elements covered by this shape

Examples

>>> from libertem.common import Shape
>>> s = Shape((16, 16), sig_dims=2)
>>> s.size
256
to_tuple() → tuple[int, ...][source]
class libertem.common.shape.SigOnlyShape(shape: Shape | Sequence[int])[source]
property dims: int

Number of dimensions

Examples

>>> from libertem.common import Shape
>>> s = Shape((5, 5, 16, 16), sig_dims=2)
>>> s.dims
4
>>> s.nav.dims  # creates a new temporary Shape and accesses .dims on it
2
>>> s.sig.dims
2
flatten_nav() → Shape[source]

Returns a new Shape that is flat in the navigation dimensions

Examples

>>> from libertem.common import Shape
>>> s = Shape((5, 5, 16, 16), sig_dims=2)
>>> s.flatten_nav()
(25, 16, 16)
property nav_dims: int
property sig_dims: int
to_tuple() → tuple[int, ...][source]
class libertem.common.slice.Slice(origin: Sequence[int], shape: Shape)[source]

A n-dimensional slice, defined by origin and shape

Parameters:
  • origin (tuple of int) – global “top-left” coordinates of this slice

  • shape (Shape instance) – the size of this slice

adjust_for_roi(roi: ndarray | None) → Slice[source]

Make a new slice that has origin and shape modified according to roi.

clip_to(shape: Shape)[source]
discard_nav() → Slice[source]

Returns a copy with the origin/shape zeroed in the nav dimensions.

This is used to create uniform cache keys.

flatten_nav(containing_shape: Shape | Sequence[int]) → Slice[source]
classmethod from_shape(shape: Sequence[int], sig_dims: int) → Slice[source]

Construct a Slice at zero-origin from shape and sig_dims.
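
A short illustrative sketch based on the signature above (values chosen arbitrarily):

>>> from libertem.common import Slice
>>> s = Slice.from_shape((5, 5, 16, 16), sig_dims=2)
>>> tuple(s.origin)
(0, 0, 0, 0)
>>> s.shape.sig.to_tuple()
(16, 16)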

get(arr: None = None, sig_only: bool = False, nav_only: bool = False) → tuple[slice, ...][source]
get(arr: ndarray, sig_only: bool = False, nav_only: bool = False) → ndarray

Get a standard Python tuple of slice objects which can be used to slice any compatible numpy.ndarray.

Parameters:
  • arr – something implementing the slice interface. If given, returns arr[slice]

  • sig_only (bool) – get a signal-only slice for frames/masks

  • nav_only (bool) – get a nav-only slice, for example for indexing something that is shaped like the navigation dimensions of this Slice.

Returns:

Standard Python slices computed from our origin+shape model, or arr indexed with this slicing if arr is given.

Return type:

tuple of slice objects

Examples

>>> import numpy as np
>>> from libertem.common import Slice, Shape
>>> s = Slice(shape=Shape((16, 16, 4, 4), sig_dims=2), origin=(0, 0, 12, 12))
>>> data = np.ones((16, 16))
>>> data[s.get(sig_only=True)]
array([[1., 1., 1., 1.],
       [1., 1., 1., 1.],
       [1., 1., 1., 1.],
       [1., 1., 1., 1.]])
intersection_with(other: Slice) → Slice[source]

Calculate the intersection between this slice and other. May result in dimensions that are zero, which means that there is no intersection.

Returns:

slice – the intersection between this and the other slice

Return type:

Slice
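
An illustrative sketch (not from the upstream docs) of intersecting two overlapping signal-only slices:

>>> from libertem.common import Slice, Shape
>>> a = Slice(origin=(0, 0), shape=Shape((16, 16), sig_dims=2))
>>> b = Slice(origin=(8, 8), shape=Shape((16, 16), sig_dims=2))
>>> inter = a.intersection_with(b)
>>> tuple(inter.origin), inter.shape.to_tuple()
((8, 8), (8, 8))

If the two slices did not overlap at all, the resulting shape would contain zeros and is_null() (below) would return True.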

is_null() → bool[source]

If any part of our shape is zero, this slice doesn’t span any data and is null / empty.

property nav: Slice

Returns a new Slice, with sig_dims=0, limited to the nav part

origin
shape
shift(other: Slice) → Slice[source]

Make a new Slice with origin relative to other.origin and the same shape as this Slice.

Useful for translating to the local coordinate system of other.

shift_by(offset: Sequence[int]) → Slice[source]

Return a new slice with the origin moved by the supplied offset and the same shape
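
A sketch of both origin manipulations, using arbitrary illustrative values:

>>> from libertem.common import Slice, Shape
>>> outer = Slice(origin=(2, 2, 0, 0), shape=Shape((2, 2, 8, 8), sig_dims=2))
>>> inner = Slice(origin=(3, 3, 0, 0), shape=Shape((1, 1, 8, 8), sig_dims=2))
>>> tuple(inner.shift(outer).origin)  # inner in outer's local coordinates
(1, 1, 0, 0)
>>> tuple(inner.shift_by((0, 0, 4, 4)).origin)
(3, 3, 4, 4)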

property sig: Slice

Returns a new Slice, limited to the sig part

subslices(shape: Shape | Sequence[int]) → Generator[Slice, None, None][source]

Generator for all subslices of this slice with dimensions specified by shape.

Parameters:

shape (tuple of int or Shape) – the shape of each sub-slice

Yields:

Slice – all subslices, in fast-access order
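
An illustrative sketch of splitting a slice into sub-slices that each cover whole frames; only the origins are shown, to keep the output independent of the Slice repr:

>>> from libertem.common import Slice, Shape
>>> s = Slice(origin=(0, 0, 0, 0), shape=Shape((4, 4, 8, 8), sig_dims=2))
>>> [tuple(sub.origin) for sub in s.subslices((2, 2, 8, 8))]
[(0, 0, 0, 0), (0, 2, 0, 0), (2, 0, 0, 0), (2, 2, 0, 0)]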

exception libertem.common.slice.SliceUsageError[source]

Raised when a Slice is incorrectly instantiated or used

CPU and CUDA devices

These methods get and set information that controls on which devices a computation runs.

libertem.common.backend.get_device_class()[source]

New in version 0.6.0.

Returns:

class – Device class to use. Can be ‘cpu’ or ‘cuda’. Default is ‘cpu’ if no settings were applied before.

Return type:

str

libertem.common.backend.get_use_cpu()[source]

New in version 0.6.0.

Returns:

id – CPU device ID to use. Currently there is no pinning, i.e. the value itself is ignored. None means “don’t use CPU” and any integer means “use CPU”. Default is 0 if no settings were applied before.

Return type:

int or None

libertem.common.backend.get_use_cuda() → int | None[source]

New in version 0.6.0.

Returns:

id – CUDA device ID to use.

Return type:

int or None

libertem.common.backend.set_file_limit()[source]
libertem.common.backend.set_use_cpu(cpu: int)[source]

This sets a CPU device ID and unsets any CUDA ID

New in version 0.6.0.

Parameters:

cpu (int) – CPU to use. The value is currently ignored, i.e. any CPU is used without pinning

libertem.common.backend.set_use_cuda(cuda_device: int)[source]

This sets a CUDA device ID and unsets any CPU ID

New in version 0.6.0.

Parameters:

cuda_device (int) – CUDA device ID to use
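
A minimal sketch of switching to CPU processing and inspecting the result, following the behaviour documented above:

>>> from libertem.common.backend import (
...     set_use_cpu, get_use_cpu, get_use_cuda, get_device_class
... )
>>> set_use_cpu(0)  # value currently ignored, no pinning
>>> get_device_class()
'cpu'
>>> get_use_cpu()
0
>>> get_use_cuda() is None  # set_use_cpu unsets any CUDA device
True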

class libertem.utils.devices.DetectResult[source]
cpus: list[int]
cudas: list[int]
has_cupy: bool
libertem.utils.devices.detect() → DetectResult[source]

Detect which devices are present

New in version 0.6.0.

Returns:

Dictionary with the keys 'cpus' and 'cudas', each containing a list of device IDs. Only physical CPU cores are counted, i.e. no hyperthreading. Additionally it has the key 'has_cupy', which signals if cupy is installed and available.

Return type:

dict
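
For illustration, the returned DetectResult behaves like a plain dictionary; since the detected devices depend on the machine, only the structure is checked here:

>>> from libertem.utils.devices import detect
>>> result = detect()
>>> {'cpus', 'cudas', 'has_cupy'} <= set(result)
True
>>> isinstance(result['cpus'], list)
True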

libertem.utils.devices.has_cupy()[source]

Probe if cupy was loaded successfully.

New in version 0.6.0.

CuPy is an optional dependency with special integration for UDFs. See CuPy support for details.