Loading data

To handle files larger than main memory efficiently, LiberTEM never loads the whole data set at once. Calling the load() function only checks that the data set exists and is valid before returning a Python object which can be used in later computation. Running an analysis on this object with run() or run_udf() then streams the data from mass storage in optimally sized chunks, so that even very large datasets can be processed without exhausting system resources.
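The streaming idea can be sketched with plain NumPy. This is an illustration only, not LiberTEM's actual tiling code; the file layout, chunk size, and the chunked_sum helper are all made up for the example:

```python
import os
import tempfile

import numpy as np

# Illustration only: LiberTEM's internal tiling is more sophisticated, but
# the core idea is the same -- read and reduce one chunk at a time instead
# of loading the whole array into memory.

def chunked_sum(path, shape, dtype, chunk_frames=4):
    """Sum all frames of a raw on-disk array without loading it fully."""
    data = np.memmap(path, dtype=dtype, mode="r", shape=shape)
    total = np.zeros(shape[1:], dtype=np.float64)
    for start in range(0, shape[0], chunk_frames):
        # only this slice of the file is touched at a time
        total += data[start:start + chunk_frames].sum(axis=0)
    return total

# Write a small example file to disk, then reduce it chunk by chunk:
frames = np.random.rand(10, 8, 8).astype(np.float32)
with tempfile.NamedTemporaryFile(suffix=".raw", delete=False) as f:
    frames.tofile(f)
    path = f.name
result = chunked_sum(path, frames.shape, frames.dtype)
assert np.allclose(result, frames.sum(axis=0))
os.unlink(path)
```

Because each chunk is reduced immediately, peak memory use stays proportional to the chunk size rather than the file size.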

See Sample Datasets for publicly available datasets for testing.

There are two main ways of opening a data set in LiberTEM: using the GUI, or the Python API.

Loading through the API

In the API, you can use libertem.api.Context.load(). The general pattern is:

from libertem.api import Context

ctx = Context()
ds = ctx.load("typename", path="/path/to/some/file", arg1="val1", arg2=42)

So, you need to specify the data set type, the path, and dataset-specific arguments. These arguments are documented below.

For most file types, it is possible to automatically detect the type and parameters, which you can trigger by using "auto" as file type:

ds = ctx.load("auto", path="/path/to/some/file")

For the full list of supported file formats with links to their reference documentation, see Supported formats below.

Loading using the GUI

Using the GUI, mostly the same parameters need to be specified, although some are only available in the Python API. Tuples (for example for nav_shape) have to be entered as separate values into the individual fields. You can hit a comma to jump to the next field. We follow the NumPy convention here and specify the “fast-access” dimension last, so a value of 42, 21 would mean the same as specifying (42, 21) in the Python API, setting y=42 and x=21.
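As a small sketch of that convention (the field values are made up), the comma-separated GUI input maps onto the API tuple like this:

```python
# "Fast-access dimension last": the first field fills the slower (y) axis,
# the last field fills the faster (x) axis.
fields = "42, 21"  # as typed into the GUI fields
nav_shape = tuple(int(v) for v in fields.split(","))
y, x = nav_shape
assert nav_shape == (42, 21)
assert (y, x) == (42, 21)
```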

See the GUI usage page for more information on the GUI.

For more general information about how LiberTEM structures data see the concepts section.

Common parameters

There are some common parameters across data set types:


name: The name of the data set, for display purposes. Only used in the GUI.


nav_shape: In the GUI, we generally support visualizing data containing rectangular 2D scans. For all the dataset types, you can specify a nav_shape as a tuple (y, x). If the dataset isn’t 4D, the GUI can reshape it to 4D. When using the Python API, you are free to use an n-dimensional nav_shape, if the data set and chosen analysis support it.
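A minimal sketch with plain NumPy of what such a reshape means (the shapes here are made up): a 3D stack of frames becomes a 4D array with a rectangular (y, x) navigation grid.

```python
import numpy as np

# 3D stack: (frame index, detector y, detector x)
stack = np.zeros((24, 16, 16), dtype=np.float32)
nav_shape = (4, 6)  # 4 * 6 == 24 frames, i.e. the scan grid must match
scan = stack.reshape(nav_shape + stack.shape[1:])
assert scan.shape == (4, 6, 16, 16)  # 4D: (scan y, scan x, sig y, sig x)
```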


sig_shape: In the GUI, you can specify the shape of the detector as height, width, but when using the Python API, it can be of any dimensionality.


sync_offset: You can specify a sync_offset to handle synchronization or acquisition problems. If it’s positive, sync_offset frames are skipped from the start of the input data. If it’s negative, the dataset is padded with abs(sync_offset) frames at the beginning.
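The described semantics can be sketched with a plain NumPy frame stack. This is an illustration, not LiberTEM's implementation; apply_sync_offset is a made-up helper:

```python
import numpy as np

def apply_sync_offset(frames, sync_offset):
    """Sketch of sync_offset semantics on a (n_frames, sig_y, sig_x) stack."""
    if sync_offset > 0:
        # positive: skip frames from the start of the input data
        return frames[sync_offset:]
    elif sync_offset < 0:
        # negative: pad with abs(sync_offset) zero frames at the beginning
        pad = np.zeros((-sync_offset,) + frames.shape[1:], dtype=frames.dtype)
        return np.concatenate([pad, frames])
    return frames

frames = np.ones((10, 4, 4), dtype=np.float32)
assert apply_sync_offset(frames, 2).shape[0] == 8    # 2 frames skipped
assert apply_sync_offset(frames, -3).shape[0] == 13  # 3 frames prepended
assert not apply_sync_offset(frames, -3)[0].any()    # padding is zero-filled
```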


io_backend: Different methods for I/O are available in LiberTEM, which can influence performance. See I/O Backends for details.


Note: When using a sync_offset or nav_shape that exceeds the size of the input data, it is currently not well-defined whether zero-filled frames are generated or the missing data is skipped. Most dataset implementations seem to skip the data. See #1384 for discussion, feedback welcome!

Supported formats

LiberTEM supports the following file formats out of the box, see links for details:

Furthermore, two alternative mechanisms exist for interfacing LiberTEM with data loaded elsewhere in Python via other libraries:

  • a memory data set can be constructed from a NumPy array for testing purposes. See Memory data set for details.

  • a Dask data set can be constructed from a Dask array. Depending on the method used to construct the source array this can achieve good performance. See Dask for details.

Dataset conversion

LiberTEM supports a mechanism to efficiently convert any supported dataset into a NumPy binary file (.npy), which can then be loaded into memory independently of LiberTEM (or read as an npy format dataset as above).

New in version 0.12.0.

To convert a dataset to npy, use the export_dataset() method:

import libertem.api as lt

with lt.Context() as ctx:
    ctx.export_dataset(dataset, './output_path.npy')

At this time, only exporting to the npy format is supported, but other formats could be added as the need arises.
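Since .npy is a plain NumPy format, the exported file can be read back without LiberTEM. A round-trip sketch with a stand-in array (np.save stands in for the export step; the path is made up):

```python
import os
import tempfile

import numpy as np

# Round-trip a small stand-in array to show that the exported file is
# plain .npy data, readable with NumPy alone.
data = np.random.rand(4, 4).astype(np.float32)
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "output_path.npy")
    np.save(path, data)           # stands in for ctx.export_dataset(...)
    loaded = np.load(path)        # or mmap_mode="r" for lazy, out-of-core access
    assert np.array_equal(loaded, data)
```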

Alternatively, you can create Dask arrays from LiberTEM datasets via the Dask integration. These arrays can then be stored with Dask’s built-in functions or through additional libraries such as RosettaSciIO.