0.10.0 / 2022-07-28

This release features the Pipelined executor for parallel live data processing (#1267). This change greatly improves processing performance for live data, in particular for detectors with high data rates. Many thanks to Alexander Clausen and Matthew Bryan for their work! The corresponding capabilities in the LiberTEM-live package will be released soon and announced separately.

Other changes:

New features

  • Support for Python 3.10.

  • Support for reading NumPy files (.npy) (#222, #1249).

  • Support for updated EMPAD XML format, including series (#1259, #1260).

  • Integrate tracing using OpenTelemetry, which allows debugging and tracing distributed operation of LiberTEM (#691, #1266).

  • libertem-server picks a free port if the default is in use and no port was specified (#1184).

  • cluster_spec() now accepts the same CUDA device ID multiple times to spawn multiple workers on the same GPU. This can help increase GPU resource utilisation for some workloads (#1270).




  • Include tests in the PyPI release to prepare for a conda-forge release, and exclude unneeded files (#1271, #1275, #1276).

  • Move some code around to make sure that libertem.common only depends on code that is compatible with the MIT license. Moved items are re-imported at their previous locations to keep backwards compatibility (#1031, #1245).

0.9.2 / 2022-04-28

This is a bugfix release with two small fixes:

  • Example notebook: compatibility with HyperSpy 1.7.0 (#1242)

  • Compatibility of the CoM auto button with Jupyter server proxy (#1220)

0.9.0 / 2022-02-17

We are most happy to announce full Dask array integration with this release! Many thanks to Matthew Bryan who implemented major parts of this non-trivial feature. Most notably, HyperSpy lazy signals and LiberTEM can now be combined seamlessly. See Dask integration for details and an example!

This enables the following applications:

  • Use HyperSpy file readers and other readers that create Dask arrays for LiberTEM.

  • Create an ad-hoc file reader for LiberTEM by just building a Dask array. This is often simpler than implementing a native LiberTEM dataset, at the expense of performance.

  • Use LiberTEM file readers for HyperSpy and other software that works with Dask arrays.

  • Use the same implementation of an algorithm for live processing with LiberTEM, offline processing with LiberTEM, and offline processing with HyperSpy.

  • Simplify implementation of complex processing routines on Dask arrays. That includes, for example, routines that are not purely implemented with NumPy array operations and produce complex output or are not compatible with all Dask array chunking schemes. Here, LiberTEM UDFs offer a more powerful and versatile interface than Dask’s native map_blocks() interface.

  • Chain processing steps together using Dask arrays for intermediate results, including using the output of one UDF as input for another UDF. Dask arrays allow working with large intermediate results efficiently since they can remain on the workers.

The Dask integration covers both directions: creating Dask arrays from LiberTEM datasets and UDF results, and running LiberTEM on data that already exists as Dask arrays.

Please note that these features are still experimental and cover a large space of possible uses and parameters. Expect the unexpected! Tests, feedback and improvements are highly appreciated.

Other changes in this release:

New features

  • Experimental helper function to guess parameters for Center of Mass analysis (#1111).

  • GUI interface for the COM analysis to call and update the GUI parameters from the result (#1172).

  • Support for some MIB Quad formats. All integer formats should be supported and were tested with 1x1 and 2x2 layouts. Raw formats with 1x1 and 2x2 layouts using 1 bit, 6 bit, and 12 bit counter depth are supported as well. Support for raw MIB data in other layouts and bit depths can be added on demand (#1169, #1135).

  • New attributes libertem.udf.base.UDFMeta.sig_slice and libertem.udf.base.UDFMeta.tiling_scheme_idx. These attributes can be used for performant access to the current signal slice, which is mostly important for throughput-limited analyses (#1167, #1166).

  • New --preload option to libertem-server and libertem-worker. This makes preloading work as documented for HDF5, following the Dask worker preload mechanism (#1151).

  • Allow selection of I/O backend in GUI and Python API (#753, #896, #1129).

  • Re-add support for direct I/O. It was previously only supported as a special case for raw files on Linux. Now it is supported for all native dataset formats on Linux and Windows. Notable exceptions are the OS X platform and the HDF5, MRC, and SER formats (#1129, #753).

  • Support for reading TVIPS binary files, i.e. *_NNN.tvips files (#1179).
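The new sig_slice attribute is easiest to picture with a toy example: each tile carries a slice that locates it inside the full signal plane, so a UDF can accumulate tile data directly into a full-size buffer. A minimal NumPy sketch of the idea (not the actual UDFMeta API):

```python
import numpy as np

# Toy illustration of per-tile signal slices (the real attribute is
# libertem.udf.base.UDFMeta.sig_slice; this sketch only shows the idea
# of accumulating tiles into a full-signal-size buffer).
sig_shape = (8, 8)
acc = np.zeros(sig_shape, dtype=np.float32)

# Pretend the 8x8 signal plane arrives as two 4x8 tiles:
tiles = [
    (np.s_[0:4, 0:8], np.ones((4, 8), dtype=np.float32)),
    (np.s_[4:8, 0:8], np.ones((4, 8), dtype=np.float32)),
]
for sig_slice, tile in tiles:
    # the slice locates the tile inside the full signal plane,
    # so we can accumulate without searching or reshaping:
    acc[sig_slice] += tile

print(acc.sum())  # 64.0: every signal pixel was visited once
```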


Bug fixes

  • Allow running CoM analysis on a linescan dataset by only returning divergence and curl if they are defined (#1138, #1139).

  • make_dask_array now works correctly when a roi is specified (#933).

  • Correct shape of buffer views in process_tile() when the tile has depth 1 (#1215).


  • Information on multithreading added to UDF docs in Threading (#1170).


0.8.0 / 2021-10-04

This release mainly contains improvements of center of mass / first moment analysis and support for starting the web GUI from JupyterHub or JupyterLab.

New features

  • Support for center of mass with annular masks in create_com_analysis(), COMAnalysis and the GUI (#633, #1089).

  • Support in the GUI for specifying rotation of scan against detector and flipping the detector y axis (#1087, #31). Previously this was only supported in the Python API.

  • Tweaks and instructions for JupyterHub and JupyterLab integration in LiberTEM, see Jupyter integration (#1074). New package LiberTEM/LiberTEM-jupyter-proxy for interfacing.

  • In the web API, support was added to re-run visualization only, without re-running UDFs for an analysis. This allows for almost instant feedback for some operations, like changing CoM parameters.

  • Added token-based authentication. For now, it is only usable via integrations like Jupyter. It will be extended to local/manual usage later (#1074, #1097). Please comment on #1097 if local/manual use would be beneficial for you so that it is prioritized accordingly.

  • SEQ dataset: Added support for loading excluded pixels from XML (#805, #1077). See SEQDataSet for more information. Also support both *.seq.seq and *.seq as extension for the main SEQ file to find files with matching base name that contain correction data (#1120, #1121).
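Conceptually, annular center of mass weights each pixel's coordinates by the intensity inside a ring-shaped mask. A minimal NumPy sketch of the idea (not LiberTEM's COMAnalysis implementation):

```python
import numpy as np

# Minimal sketch of center of mass restricted to an annular mask.
def annular_com(frame, cy, cx, ri, ro):
    y, x = np.indices(frame.shape)
    r = np.sqrt((y - cy) ** 2 + (x - cx) ** 2)
    mask = (r >= ri) & (r < ro)      # ring between inner/outer radius
    w = frame * mask                 # intensity inside the annulus
    total = w.sum()
    # first moment of the masked intensity:
    return (y * w).sum() / total, (x * w).sum() / total

frame = np.zeros((32, 32))
frame[10:22, 10:22] = 1.0            # symmetric square of intensity
cy, cx = annular_com(frame, cy=15.5, cx=15.5, ri=2, ro=10)
print(cy, cx)  # close to (15.5, 15.5) for a symmetric frame
```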


Bug fixes

  • Assert that the files argument to DMDataSet is actually a list or tuple, to prevent iterating over a string path (#1058).

  • Escape globs to support special characters in file names for multi-file datasets (#1066, #1067).

  • Make sure multithreading in the main process still works properly after launching a Context (#1053, #1100).

  • Allow custom plots to return RGB as plot data, for example a color wheel for vector fields (#1052, #1101).

  • Adjust partition count to match the number of CPU compute workers, not total workers to prevent residual partitions (#1086, #1103).

  • Correct partition shape for ROI in UDFMeta (#1109).

  • Fix memory leak: Don’t submit dynamically generated callables directly to the distributed cluster, as they are cached in an unbounded cache (#894, #964, #1119).



  • Speed up coordinate calculation (#1108, #1109).

  • Make sure tasks are scheduled dynamically on available workers if they have uneven run time to benefit more from GPUs (#1107).

  • Cache loaded libraries to reduce overhead of setting the thread count (#1117, #1118).

Many thanks to our new contributor Levente Puskás for the excluded pixel loading and to Matthew Bryan for figuring out non-standard compression in HDF5 and improving DM input validation. Congratulations to Alex for closing the long-standing CoM issue #31 and for enabling easy and secure access to the web interface on shared IT infrastructure.

0.7.1 / 2021-07-08

This is a bugfix release that ensures compatibility with the upcoming numba 0.54 release.

Our custom numba caching makes some assumptions about numba internals, which have changed in numba 0.54. This fixes compatibility with numba 0.54, and also makes sure we fail gracefully for future changes (#1060, #1061).

0.7.0 / 2021-06-10

This release introduces features that are essential for live data processing, but can be used for offline processing as well: Live plotting, API for bundled execution of several UDFs in one run, iteration over partial UDF results, and asynchronous UDF execution. Features and infrastructure that are specific to live processing are included in the LiberTEM-live package, which will be released soon.

New features


  • UDF: Consistently use attribute access in UDF.process_*(), UDF.merge(), UDF.get_results() etc. instead of mixing it with __getitem__() dict-like access. The previous method still works, but triggers a UserWarning (#1000, #1003).

  • Also allow non-sliced assignment, for example self.results.res += frame (#1000, #1003).

  • Better choice of kind='nav' buffer fill value outside the ROI (#1011):

    • String: was 'n', now ''

    • bool: was True, now False

    • integers: was the smallest possible value, now 0

    • objects: was np.nan, now None
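The attribute-style result access introduced above can be sketched with a stand-in for the UDF base class (a hypothetical UDFSketch with just enough structure to show the pattern, not the real libertem.udf.UDF):

```python
import numpy as np
from types import SimpleNamespace

# Stand-in for the real UDF base class, just enough to show the
# attribute-style access pattern from the changelog entry above:
class UDFSketch:
    def __init__(self):
        self.results = SimpleNamespace(res=np.zeros(1, dtype=np.float32))

    def process_frame(self, frame):
        # attribute access (self.results.res) replaces the older
        # dict-style self.results['res']; non-sliced assignment works too:
        self.results.res += frame.sum()

udf = UDFSketch()
for frame in np.ones((4, 8, 8), dtype=np.float32):  # each frame sums to 64
    udf.process_frame(frame)
print(udf.results.res)  # accumulated over 4 frames
```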

Bug fixes

  • Improve performance for chunked HDF5 files, especially compressed HDF5 files with chunking in both navigation dimensions, which caused excessive read amplification (#984).

  • Fix plot range if only zero and one other value are present in the result, most notably boolean values (#944, #1011).

  • Fix axes order in COM template: The components in the field are (x, y) while the template had them as (y, x) before (#1023).


  • Update Gatan Digital Micrograph (GMS) examples to work with the current GMS and LiberTEM releases and demonstrate the new features. (#999, #1002, #1004, #1011). Many thanks to Winnie from Gatan for helping to work around a number of issues!

  • Restructure UDF documentation (#1034).

  • Document coordinate meta information (#928, #1034).


  • Removed deprecated blobfinder and FeatureVecMakerUDF as previously announced. Blobfinder is available as the separate LiberTEM-blobfinder package. Instead of FeatureVecMakerUDF, you can use a sparse matrix and ApplyMasksUDF (#979).

  • Remove deprecated Job interface as previously announced. The functionality was ported to the more capable UDF interface (#978).

0.6.0 / 2021-02-16

We are pleased to announce the latest LiberTEM release, with many improvements since 0.5. We would like to highlight the contributions of our GSoC 2020 students @AnandBaburajan (reshaping and sync offset correction) and @twentyse7en (code generation to replicate GUI analyses in Jupyter notebooks), who implemented significant improvements in the areas of I/O and the user interface.

Another highlight of this release is experimental support for NVIDIA GPUs, both via CuPy and via native libraries. The API is ready to be used, including support in the GUI. Performance optimization is still to be done (#946). GPU support is activated for all mask-based analyses (virtual detector and Radial Fourier) for testing purposes, but will not bring a noticeable performance improvement yet. GPU-based processing did show significant benefits for computationally heavy applications like the SSB implementation.

A lot of work was done to implement tiled reading, resulting in a new I/O system. This improves performance in many circumstances, especially when dealing with large detector frames. In addition, a correction module was integrated into the new I/O system, which can correct gain, subtract a dark reference, and patch pixel defects on the fly. See below for the full changelog!

New features

  • I/O overhaul

    • Implement tiled reading for most file formats (#27, #331, #373, #435).

    • Allow UDFs that implement process_tile to influence the tile shape by overriding libertem.udf.base.UDF.get_tiling_preferences() and make information about the tiling scheme available to the UDF through libertem.udf.base.UDFMeta.tiling_scheme. (#554, #247, #635).

    • Update MemoryDataSet to allow testing with different tile shapes (#634).

    • Added I/O backend selection (#896), which allows users to select the best-performing backend for their circumstances when loading via the new io_backend parameter of Context.load. This fixes a K2IS performance regression (#814) by disabling any readahead hints by default. Additionally, this fixes a performance regression (#838) on slower media (like HDDs) by adding a buffered reading backend that tries its best to linearize I/O per worker. GUI integration of backend selection is still to be done.

    • For now, direct I/O is no longer supported, please let us know if this is an important use-case for you (#716)!

  • Support for specifying logging level from CLI (#758).

  • Support for Norpix SEQ files (#153, #767).

  • Support for MRC files, as supported by ncempy (#152, #873).

  • Support for loading stacks of 3D DM files (#877). GUI integration still to be done.

  • GUI: Filebrowser improvements: users can star directories in the file browser for easy navigation (#772).

  • Support for running multiple UDFs “at the same time”, not yet exposed in public APIs (#788).

  • GUI: Users can add or remove scan size dimensions according to the dataset’s shape (#779).

  • GUI: Shutdown button to stop server, useful for example for JupyterHub integration (#786).

  • Infrastructure for consistent coordinate transforms has been added in libertem.corrections.coordinates and libertem.utils. See also the description of coordinate systems in Concepts.

  • create_com_analysis() now allows specifying a flipped y axis and a scan rotation angle to deal correctly with MIB files and scan rotation (#325, #786).

  • Corrections can now be specified by the user when running a UDF (#778, #831, #939).

  • Support for loading dark frame and gain map that are sometimes shipped with SEQ data sets.

  • GPU support: process data on CPUs, CUDA devices or both (#760, CuPy support).

  • Spinning out holography to a separate package is in progress.

  • Implement CuPy support in HoloReconstructUDF, currently deactivated due to #815 (#760).

  • GUI: Allows the user to select the GPUs to use when creating a new local cluster (#812).

  • GUI: Support to download Jupyter notebook corresponding to an analysis made by a user in GUI (#801).

  • GUI: Copy the Jupyter notebook cells corresponding to the analysis directly from GUI, including cluster connection details (#862, #863)

  • Allow reshaping datasets into a custom shape. The DataSet implementations (currently except HDF5 and K2IS) and GUI now allow specifying nav_shape and sig_shape parameters to set a different shape than the layout in the dataset (#441, #793).

  • All DataSet implementations handle missing data gracefully (#256, #793).

  • The DataSet implementations (except HDF5 and K2IS) and GUI now allow specifying a sync_offset to handle synchronization/acquisition problems (#793).

  • Users can access the coordinates of a tile/partition slice through coordinates (#553, #793).

  • Cache warmup when opening a data set: precompiles jit-ed functions on a single process per node, in a controlled manner, preventing CPU oversubscription. This was further improved by implementing caching for functions that capture other functions in their closure (#886, #798).

  • Allow selecting lin and log scaled visualization for sum, stddev, pick and single mask analyses to handle data with large dynamic range. This adds key intensity_lin to SumResultSet, PickResultSet and the result of SDAnalysis. It adds key intensity_log to SingleMaskResultSet. The new keys are chosen to not affect existing keys (#925, #929).

  • Tuples can be added directly to Shape objects. Right addition adds to the signal dimensions of the Shape object, while left addition adds to the navigation dimensions (#749).
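The addition semantics can be sketched with a hypothetical stand-in class (ShapeSketch below is illustrative only, not the real libertem.common.Shape):

```python
# Conceptual stand-in for Shape addition semantics: a shape is the
# concatenation of navigation dims (left) and signal dims (right).
class ShapeSketch:
    def __init__(self, nav, sig):
        self.nav, self.sig = tuple(nav), tuple(sig)

    def __add__(self, extra):
        # shape + tuple: extra dims are appended to the signal part
        return ShapeSketch(self.nav, self.sig + tuple(extra))

    def __radd__(self, extra):
        # tuple + shape: extra dims are prepended to the navigation part
        return ShapeSketch(tuple(extra) + self.nav, self.sig)

s = ShapeSketch(nav=(16, 16), sig=(32, 32))
print((s + (2,)).sig)   # (32, 32, 2)
print(((4,) + s).nav)   # (4, 16, 16)
```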


Bug fixes

  • Fix an off-by-one error in sync offset for K2IS data (drive-by change in #706).

  • Missing-directory error isn’t thrown if it’s due to last-recent-directory not being available (#748).

  • GUI: when cluster connection fails, reopen form with parameters user submitted (#735).

  • GUI: Fixed the glitch in file opening dialogue by disallowing parallel browsing before loading is concluded (#752).

  • Handle empty ROI and extra_shape with zero. Empty result buffers of the appropriate shape are returned if the ROI is empty or extra_shape has a zero (#765)

  • Improve internals to better support correction of many dead pixels (#890, #889).

  • Handle single-frame partitions in combination with aux data. Instead of squeezing the aux buffer, reshape to the correct shape (#791, #902).

  • libertem-server can now be started from Bash on Windows (#731).

  • Fix reading without a copy from multi-file datasets. The start offset of the file was not taken into account when indexing into the memory maps (#903).

  • Improve performance and reduce memory consumption of point analysis. A custom right-hand-side matrix product reduces memory consumption and improves performance for sparse masks, such as in point analysis. See also scipy/13211 (#917, #920).

  • Fix stability issue with multiple dask clients. dd.as_completed needs to specify the loop to work with multiple dask.distributed clients (#921).

  • GUI: Snap to pixels in point selection analysis. Consistency between point selection and picking (#926, #927).

  • Open datasets with autodetection, positional and keyword arguments. Handle keyword and positional arguments to Context.load('auto', ...) correctly (#936, #938).


  • Switched to the readthedocs sphinx theme, improving the overall documentation structure. The developer documentation is now in a separate section from the user documentation.


  • Command line options can also be accessed with shorter alternatives (#757).

  • Depend on Numba >= 0.49.1 to support setting Numba thread count (#783), bumped to 0.51 to support caching improvements (#886).

  • libertem-server: Ask for confirmation if the user presses Ctrl+C. A second Ctrl+C stops immediately (#781).

  • Included pytest-benchmark to integrate benchmarks in the test infrastructure. See Benchmarking for details (#819).

  • The X and Y components for the color wheel visualization in Center of Mass and Radial Fourier Analysis are swapped to match the axis convention in empyre. This just changes the color encoding in the visualization and not the result (#851).


  • The tileshape parameter of DataSet implementations is deprecated in favor of tileshape negotiation and will be ignored, if given (#754, #777).

  • Remove color wheel code from libertem.viz and replace with imports from empyre. Note that these functions expect three vector components instead of two (#851).

  • The new and consistent nav_shape and sig_shape parameters should be used when loading data. The old scan_size and detector_size parameters, where they existed, are still recognized (#793).

0.5.1 / 2020-08-12


  • Allow installation with the latest dask.distributed on Python 3.6 and 3.7.

0.5.0 / 2020-04-23

New features


  • A large number of usability improvements (#622, #639, #641, #642, #659, #666, #690, #699, #700, #704). Thanks and credit to many new contributors from GSoC!

  • Fixed the buggy “enable Direct I/O” checkbox of the RAW dataset and handled unsupported operating systems gracefully (#696, #659).


  • Added screenshots and description of ROI and stddev features in usage docs (#669)

  • Improved instructions for installing LiberTEM (general: #664; for development: #598)

  • Add information for downloading and generating sample datasets: Sample Datasets. (#650, #670, #707)


  • Parameters crop_detector_to and detector_size_raw are deprecated and will be removed after 0.6.0. Please specify detector_size instead or use a specialized DataSet, for example for EMPAD.

  • libertem.udf.feature_vector_maker.FeatureVecMakerUDF is deprecated and will be removed in 0.6.0. Use ApplyMasksUDF with a sparse stack of single pixel masks or a stack generated by libertem_blobfinder.common.patterns.feature_vector() instead. (#618)
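The suggested replacement, a stack of single-pixel masks, can be sketched in plain NumPy (conceptual only; in LiberTEM you would pass such a stack, preferably sparse, to ApplyMasksUDF):

```python
import numpy as np

# Sketch of a feature vector via single-pixel masks: each mask selects
# one detector pixel, so applying the stack reduces every frame to a
# vector of those pixel values.
sig_shape = (8, 8)
pixels = [(1, 2), (4, 4), (7, 0)]    # detector pixels of interest

masks = np.zeros((len(pixels),) + sig_shape)
for i, (y, x) in enumerate(pixels):
    masks[i, y, x] = 1.0             # one-hot "single pixel" mask

frames = np.arange(2 * 64, dtype=np.float64).reshape(2, 8, 8)
# mask application as a matrix product over the flattened signal dims:
features = frames.reshape(2, -1) @ masks.reshape(len(pixels), -1).T
print(features.shape)  # (2, 3): one feature vector per frame
```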


  • Clustering analysis

    • Use a connectivity matrix to only cluster neighboring pixels, reducing memory footprint while improving speed and quality (#618).

    • Use faster ApplyMasksUDF to generate feature vector (#618).

  • StdDevUDF

  • LiberTEM works with Python 3.8 for experimental use. A context using a remote Dask.Distributed cluster can lead to lock-ups or errors with Python 3.8. The default local Dask.Distributed context works.

  • Improve performance with large tiles. (#649)

  • SumUDF moved to the libertem.udf folder (#613).

  • Make sure the signal dimension of result buffer slices can be flattened without creating an implicit copy (#738, #739)

Many thanks to the contributors to this release: @AnandBaburajan, @twentyse7en, @sayandip18, @bdalevin, @saisunku, @Iamshankhadeep, @abiB27, @sk1p, @uellue

0.4.1 / 2020-02-18

This is a bugfix release, mainly constraining the msgpack dependency, as distributed is not yet compatible with msgpack 1.0. It also contains important fixes in the HDF5 dataset.


  • Fix HDF5 with automatic tileshape (#608)

  • Fix reading from HDF5 with roi beyond the first partition (#606)

  • Add version constraint on msgpack

0.4.0 / 2020-02-13

The main points of this release are the Job API deprecation and restructuring of our packaging, namely extracting the blobfinder module.

New features

  • dtype support for UDFs Preferred input dtype (#549, #550)

  • Dismiss error messages via keyboard: allows pressing the escape key to close all currently open error messages (#437)

  • A ROI doesn’t have any effect in pick mode, so we hide the dropdown in that case (#511)

  • Make tileshape parameter of HDF5 DataSet optional (#578)

  • Open browser after starting the server. Enabled by default, can be disabled using --no-browser (#81, #580)

  • Implement libertem.udf.masks.ApplyMasksUDF as a replacement of ApplyMasksJob (#549, #550)

  • Implement libertem.udf.raw.PickUDF as a replacement of PickFrameJob (#549, #550)

Bug fixes

  • Fix FRMS6 in a distributed setting. We now make sure to only do I/O in methods that are running on worker nodes (#531).

  • Fixed loading of nD HDF5 files. Previously the HDF5 DataSet was hardcoded for 4D data; now arbitrary dimensions should be supported (#574, #567)

  • Fix DaskJobExecutor.run_each_host. Need to pass pure=False to ensure multiple runs of the function (#528).


  • Because HDFS support is right now not tested (and to my knowledge also not used) and the upstream hdfs3 project is not actively maintained, remove support for HDFS. ClusterDataSet or CachedDataSet should be used instead (#38, #534).


  • Depend on distributed>=2.2.0 because of an API change. (#577)

  • All analyses ported from Job to UDF back-end. The Job-related code remains for now for comparison purposes (#549, #550)

Job API deprecation

The original Job API of LiberTEM is superseded by the new User-defined functions (UDFs) API with release 0.4.0. See #549 for a detailed overview of the changes. The UDF API brings the following advantages:

  • Support for regions of interest (ROIs).

  • Easier to implement, extend and re-use UDFs compared to Jobs.

  • Clean separation between back-end implementation details and application-specific code.

  • Facilities to implement non-trivial operations, see User-defined functions: advanced topics.

  • Performance is at least on par.

For that reason, the Job API has become obsolete. The existing public interfaces, namely libertem.api.Context.create_mask_job() and libertem.api.Context.create_pick_job(), will be supported in LiberTEM for two more releases after 0.4.0, i.e. including 0.6.0. Using the Job API will trigger deprecation warnings starting with this release. The new ApplyMasksUDF replaces ApplyMasksJob, and PickUDF replaces PickFrameJob.

The Analysis classes that relied on the Job API as a back-end are already ported to the corresponding UDF back-end. The new back-end may lead to minor differences in behavior, such as a change of returned dtype. The legacy code for using a Job back-end will remain until 0.6.0 and can be activated during the transition period by setting analysis.TYPE = 'JOB' before running.

From ApplyMasksJob to ApplyMasksUDF

Main differences:

  • ApplyMasksUDF returns the result with the first axes being the dataset’s navigation axes. The last dimension is the mask index. ApplyMasksJob used to return transposed data with flattened navigation dimension.

  • Like all UDFs, running an ApplyMasksUDF returns a dictionary. The result data is accessible with key 'intensity' as a BufferWrapper object.

  • ROIs are supported now, like in all UDFs.

Previously with ApplyMasksJob:

# Deprecated!
mask_job = ctx.create_mask_job(
    factories=[all_ones, single_pixel],
    dataset=dataset,
)
mask_job_result = ctx.run(mask_job)


Now with ApplyMasksUDF:

mask_udf = libertem.udf.masks.ApplyMasksUDF(
    mask_factories=[all_ones, single_pixel]
)
mask_udf_result = ctx.run_udf(dataset=dataset, udf=mask_udf)

plt.imshow(mask_udf_result['intensity'].data[..., 0])

From PickFrameJob to PickUDF

PickFrameJob allowed picking arbitrary contiguous slices in both navigation and signal dimensions. In practice, however, it was mostly used to extract single complete frames. PickUDF allows picking the complete signal dimension from an arbitrary non-contiguous region of interest in navigation space by specifying a ROI.

If necessary, more complex subsets of a dataset can be extracted by constructing a suitable subset of an identity matrix for the signal dimension and using it with ApplyMasksUDF and the appropriate ROI for the navigation dimension. Alternatively, it is now easily possible to implement a custom UDF for this purpose. Performing the complete processing through a UDF on the worker nodes instead of loading the data to the central node may be a viable alternative as well.
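The identity-matrix trick described above can be illustrated in plain NumPy (a conceptual sketch, not the actual ApplyMasksUDF/PickUDF code):

```python
import numpy as np

# Toy version of picking with a navigation ROI plus identity-matrix
# masks for a signal subset.
data = np.arange(4 * 4 * 9, dtype=np.float64).reshape(4, 4, 3, 3)  # nav (4,4), sig (3,3)

# ROI: pick two scan positions in navigation space
roi = np.zeros((4, 4), dtype=bool)
roi[0, 1] = roi[2, 3] = True
picked = data[roi]                 # shape (2, 3, 3): like PickUDF with a ROI

# signal subset: rows of an identity matrix select individual sig pixels
eye = np.eye(9)[[0, 4]]            # pick flattened sig pixels 0 and 4
subset = picked.reshape(2, 9) @ eye.T
print(picked.shape, subset.shape)  # (2, 3, 3) (2, 2)
```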

PickUDF now returns data in the native dtype of the dataset. Previously, PickFrameJob converted to floats.

Using libertem.api.Context.create_pick_analysis() continues to be the recommended convenience function to pick single frames.

Restructuring into sub-packages

We are currently restructuring LiberTEM into packages that can be installed and used independently, see #261. This will be a longer process and changes the import locations.

For a transition period, importing from the previous locations is supported but will trigger a FutureWarning. See Show deprecation warnings on how to activate deprecation warning messages, which is strongly recommended while the restructuring is ongoing.

0.3.0 / 2019-12-12

New features

  • Make OOP-based composition and subclassing easier for CorrelationUDF (#466)

  • Introduce plain circular match pattern Circular (#469)

  • Distributed sharded dataset ClusterDataSet (#136, #457)

  • Support for caching data sets CachedDataSet from slower storage (NFS, spinning metal) on fast local storage (#471)

  • Clustering analysis (#401, #408 by @kruzaeva).

  • DM3/DM4 file reader implementation based on ncempy (#497)

    • Adds a new map() executor primitive. Used to concurrently read the metadata for DM3/DM4 files on initialization.

    • Note: no support for the web GUI yet, as the naming patterns for DM file series varies wildly. Needs changes in the file dialog.

  • Speed up of up to 150x for correlation-based peak refinement in libertem.udf.blobfinder.correlation with a Numba-based pipeline (#468)

  • Introduce FullFrameCorrelationUDF which correlates a large number (several hundred) of small peaks (10x10) on small frames (256x256) faster than FastCorrelationUDF and SparseCorrelationUDF (#468)

  • Introduce UDFPreprocessMixin (#464)

  • Implement iterator over AnalysisResultSet (#496)

  • Add hologram simulation libertem.utils.generate.hologram_frame() (#475)

  • Implement Hologram reconstruction UDF libertem.udf.holography.HoloReconstructUDF (#475)

Bug fixes

  • Improved error and validation handling when opening files with GUI (#433, #442)

  • Clean-up and improvements of libertem.analysis.fullmatch.FullMatcher (#463)

  • Ensure that RAW dataset sizes are calculated as int64 to avoid integer overflows (#495, #493)

  • Resolve shape mismatch issue and simplify dominant order calculation in Radial Fourier Analysis (#502)

  • Actually pass the enable_direct parameter from web API to the DataSet



  • The Job interface is planned to be replaced with an implementation based on UDFs in one of the upcoming releases.


  • Split up the blobfinder code between several files to reduce file size (#468)

0.2.2 / 2019-10-14

Point release to fix a number of minor issues, most notably PR #439 that should have been merged for version 0.2.

Bug fixes

  • Trigger a timeout when guessing parameters for HDF5 takes too long (#440 , #449)

  • Slightly improved error and validation handling when opening files with GUI (@ec74c13)

  • Recognize BLO file type (#432)

  • Fixed a glitch where negative peak elevations were possible (#446)

  • Update examples to match 0.2 release (#439)

0.2.1 / 2019-10-07

Point release to fix a bug in the Zenodo upload for production releases.

0.2.0 / 2019-10-07

This release constitutes a major update after almost a year of development. Systematic change management starts with this release.

This is the release message:

User-defined functions

LiberTEM 0.2 offers a new API to define a wide range of user-defined reduction functions (UDFs) on distributed data. The interface and implementation offers a number of unique features:

  • Reductions are defined as functions that are executed on subsets of the data. That means they are equally suitable for distributed computing, for interactive display of results from a progressing calculation, and for handling live data¹.

  • Interfaces adapted to both simple and complex use cases: From a simple map() functionality to complex multi-stage reductions.

  • Rich options to define input and output data for the reduction functions, which helps to implement non-trivial operations efficiently within a single pass over the input data.

  • Composition and extension through object oriented programming

  • Interfaces that allow highly efficient processing: locality of reference, cache efficiency, memory handling
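The core idea of defining reductions on subsets of the data plus a merge step can be sketched in plain NumPy (a toy model, not the actual UDF interface):

```python
import numpy as np

# Sketch of the UDF principle: a per-partition reduction plus a merge
# step. The same two functions then work for distributed execution,
# progressive display of partial results, or live data.
def process_partition(partition):
    # partial reduction over a subset of the frames
    return partition.sum(axis=0)

def merge(acc, partial):
    acc += partial
    return acc

data = np.ones((10, 4, 4))                    # 10 frames of 4x4
partitions = np.array_split(data, 3, axis=0)  # e.g. split across workers

result = np.zeros((4, 4))
for part in partitions:                       # could run concurrently
    result = merge(result, process_partition(part))
print(result[0, 0])  # 10.0: identical to data.sum(axis=0)
```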



A big shoutout to Alex (@sk1p) who developed it! 🏆

¹User-defined functions will work on live data without modification as soon as LiberTEM implements back-end support for live data, expected in 2020.

Support for 4D STEM applications

In parallel to the UDF interface, we have implemented a number of applications that make use of the new facilities:

  • Correlation-based peak finding and refinement for CBED (credit: Karina Ruzaeva @kruzaeva)

  • Strain mapping

  • Clustering

  • Fluctuation EM

  • Radial Fourier Series (advanced Fluctuation EM)

More details and examples are available in the documentation.

Extended documentation

We have greatly improved the coverage of our documentation.

Fully automated release pipeline

Alex (@sk1p) invested a great deal of effort into fully automating our release process. From now on, we will be able to release more often, including service releases. 🚀

Basic dask.distributed array integration

LiberTEM can generate efficient dask.distributed arrays from all supported dataset types with this release. That means it should be possible to use our high-performance file readers in applications outside of LiberTEM.

File formats

Support for various file formats has improved.

0.1.0 / 2018-11-06

Initial release of a minimum viable product and proof of concept.

Support for applying masks with high throughput on distributed systems with interactive web GUI display and scripting capability.