Cumulative reductions in Blosc2

As mentioned in previous blog posts (see this blog) the maintainers of python-blosc2 are going all-in on Array API integration. This means adding new functions to bring the library up to the standard. Of course integrating a given function may be more or less difficult for each library which aspires to compatibility, depending on legacy code, design principles, and the overarching philosophy of the package. Since python-blosc2 uses chunked arrays, handling reductions and mapping between local chunk- and global array-indexing can be tricky.

Cumulative reductions

Consider an array a of shape (1000, 2000, 3000) and data type float64 (more on numerical precision later). The result of sum(a, axis=0) has shape (2000, 3000) and that of sum(a, axis=1) has shape (1000, 3000). In general, reductions reduce the size of an array. Cumulative reductions, on the other hand, store the intermediate reduction results along the reduction axis, so the shape of the result is always the same as that of the input: cumulative_sum(a, axis=ax) has shape (1000, 2000, 3000) for any (valid) value of ax.
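To make the shape behaviour concrete, here is a minimal NumPy sketch on a small stand-in array (np.cumsum is NumPy's spelling of the cumulative_sum operation):

import numpy as np

# Small stand-in for the (1000, 2000, 3000) array discussed above
a = np.zeros((10, 20, 30))

print(np.sum(a, axis=0).shape)     # (20, 30): the reduced axis disappears
print(np.sum(a, axis=1).shape)     # (10, 30)
print(np.cumsum(a, axis=0).shape)  # (10, 20, 30): same shape as the input
print(np.cumsum(a, axis=2).shape)  # (10, 20, 30): true for any valid axis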

This shape-preserving behaviour has a couple of consequences. One is that memory consumption can be substantial: the array a occupies math.prod((1000, 2000, 3000))*8/(1024**3) = 44.7 GB, but its sum along the first axis only 0.0447 GB. Thus we can easily store the final result of the sum in memory. Not so for the result of cumulative_sum, which also occupies 44.7 GB!

The second consequence, for chunked array libraries, is that the order in which one loads chunks and calculates the result matters. Consider the following diagram, where we have a 1D array of three elements (one per chunk). To calculate the final sum, we may load the chunks in any order and require no previous value beyond the running total - loading the first, third and finally second chunks, we obtain the correct sum of 4. However, for the cumulative sum, each element of the result depends on the previous element (and hence on the sum of all prior elements of the array). Consequently, we must load the chunks according to their order in memory - if not, we will end up with an incorrect final result. A minimal sanity check is that the final element of the cumulative sum should equal the sum, which is not the case here!

/images/cumulative_sumprod/ordermatters.png

Consequences for numerical precision

When calculating reductions, numerical precision is a common hiccup. For products, one can quickly overflow the data type - the product of arange(1, 14) already overflows the maximum value of int32. For sums, rounding errors incurred when adding small elements to a large running total can quickly become significant. For this reason, NumPy uses pairwise summation to calculate sum(a) - this involves breaking the array into small parts, calculating the sum of each small part (i.e. simply successively adding elements to a running total), and then recursively summing pairs of partial sums until the final result is reached. Each recursive sum operation thus adds two numbers of similar size, reducing the rounding errors incurred when summing disparate numbers. This algorithm has only a minimal overhead over the naive approach and is eminently parallelisable. And it has a natural recursive implementation, something which computer scientists always find appealing, even if only for aesthetic reasons!

/images/cumulative_sumprod/pairwise_sum.png
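For illustration only (NumPy's actual implementation lives in C and is more refined), a minimal recursive sketch of pairwise summation might look like this:

import numpy as np

def pairwise_sum(x, blocksize=128):
    # Minimal sketch: sum small blocks naively, then recursively add pairs of partial sums.
    if len(x) <= blocksize:
        total = 0.0
        for el in x:
            total += el
        return total
    mid = len(x) // 2
    return pairwise_sum(x[:mid], blocksize) + pairwise_sum(x[mid:], blocksize)

x = np.linspace(0, 1, 100_000)
print(pairwise_sum(x), np.sum(x))  # both close to 50000.0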

Unfortunately, such an approach is not possible for cumulative sums since, as discussed above, order matters! One possibility is to use Kahan summation (the Wikipedia article is excellent), which does have additional costs (both in terms of FLOPs and memory consumption), although these are not prohibitive. One essentially keeps track of the rounding errors incurred with an auxiliary compensation term and uses this to correct the sum:

# Kahan summation algorithm
tot = 0.0
tracker = 0.0  # running compensation for the lost low-order digits
for el in array:
    corrected_el = el - tracker            # nudge el with the accumulated lost digits
    temp = tot + corrected_el              # big + small: low-order digits of corrected_el are lost
    tracker = (temp - tot) - corrected_el  # recover (algebraically) what was just lost
    tot = temp

In our implementation, we calculate the cumulative sum on a decompressed chunk in order and then carry forward the last element of that cumulative sum (i.e. the sum of the whole chunk) to the next chunk, incrementing the next chunk's local cumulative sum by this carried-over value to give the global cumulative sum. We can then use Kahan summation between the small(er) values of the local chunk cumulative sum and the large(r) carried-forward running total to try to conserve precision.
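A minimal NumPy sketch of this carry-forward scheme for a 1D array (ignoring compression and the multidimensional bookkeeping the real implementation has to handle) could look like:

import numpy as np

def chunked_cumsum(chunks):
    # chunks: an iterable of 1D NumPy arrays, in their on-disk order
    carry = 0.0          # running total carried over from previous chunks
    out = []
    for chunk in chunks:
        local = np.cumsum(chunk)    # cumulative sum local to this chunk
        out.append(local + carry)   # shift by the carried-over running total
        carry += chunk.sum()        # i.e. the last element of the local cumsum
    return np.concatenate(out)

a = np.arange(10.0)
pieces = np.array_split(a, 3)
assert np.allclose(chunked_cumsum(pieces), np.cumsum(a))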

Unfortunately, we still observe discrepancies with respect to the NumPy implementation of cumulative sum (which essentially sums element-by-element) - and that in turn differs from the results of np.sum, due to the latter's use of pairwise summation! Finite-precision arithmetic imposes an insuperable barrier: with three different algorithms, one cannot guarantee agreement in every possible case. Since the Kahan approach carries a slight overhead and did not improve precision enough to justify its use, we decided to drop it.

Experiments

We performed some experiments comparing the new blosc2.cumulative_sum function to NumPy's version for some large arrays (of shape (N, N, N) for various values of N). Since the working set is double the size of the input array (input + output), we expect to see significant benefits from Blosc2 compression and its exploitation of caching. Indeed, once the working set size starts to approach the available RAM (32 GB), NumPy begins to slow down rapidly, and when the working set exceeds memory and swap must be used, NumPy becomes vastly slower.

/images/cumulative_sumprod/cumsumbench.png

The plot shows the average computation time for cumulative_sum over the three different axes of the input array. The benchmark code may be found here.

Conclusions

Blosc2 achieves superior compression and enables computation on larger datasets by tightly integrating compression and computation and interleaving I/O and computation. The returns on such an approach are clear in an era of increasingly expensive RAM and thus increasingly desirable memory efficiency. As an array library catering in a unique way to this growing need, bringing Blosc2 into greater alignment with the cross-library Array API standard is of utmost importance to ease its integration into users' workflows and applications. We are thus especially pleased that the performance of the freshly-implemented cumulative reduction operations mandated by the Array API standard only underlines the validity of chunkwise operations.

The Blosc team isn't resting on its laurels either: we continue to optimise the existing framework to accelerate computations further. The recent introduction of the miniexpr library into the backend is the capstone to these efforts. It has made the compression/computation integration truly seamless, bringing incredible speedups for memory-bound computations and justifying Blosc2's compression-first, cache-aware philosophy. All this allows Blosc2 to handle significantly larger working sets than other solutions, delivering high performance for both in-memory and on-disk datasets, even ones exceeding available RAM.

If you find our work useful and valuable, we would be grateful if you could support us by making a donation. Your contribution will help us continue to develop and improve Blosc packages, making them more accessible and useful for everyone. Our team is committed to creating high-quality and efficient software, and your support will help us to achieve this goal.

OpenZL Plugin for Blosc2

Blosc's philosophy of meta-compression is incredibly powerful - one can compose pipelines to optimally compress data (for speed or compression ratio), store information about the pipeline alongside the data as metadata, and then rely on a generic decompressor to read this and reverse the pipeline. The OpenZL team shares our belief in the validity of this approach and has designed a graph-based formalisation with extensive support for all kinds of compression pipelines for all kinds of data.

However, Blosc2 is now much more than just a compression library - it offers comprehensive indexing support (including fancy indexing via the python-blosc2 interface) as well as an increasingly rapid compute engine (see this blog!). What if we could marry the incredibly comprehensive compression coverage of OpenZL with Blosc2's extended array manipulation functionality?

Foreseeing precisely this sort of challenge, prior Blosc2 developers implemented dynamic plugin registration (the plugin is loaded in C-Blosc2 and can be called via Python-Blosc2). This means that, with some unintrusive and relatively concise interface code, one can link Blosc2 and OpenZL at runtime (without substantially modifying either) and offer Blosc2 arrays compressed and decompressed with OpenZL.

The OpenZL plugin

The source code for the plugin can be found here. The minimal skeleton of the plugin layout is as follows:

├── CMakeLists.txt
├── blosc2_openzl
│   └── __init__.py
├── pyproject.toml
├── requirements-build.txt
└── src
    ├── CMakeLists.txt
    ├── blosc2_openzl.c
    └── blosc2_openzl.h

The blosc2_openzl.c file must implement an encoder and a decoder, which are exported via an info struct:

#include "blosc2_openzl.h"

BLOSC2_OPENZL_EXPORT codec_info info = {
    .encoder=(char *)"blosc2_openzl_encoder",
    .decoder=(char *)"blosc2_openzl_decoder"
};

int blosc2_openzl_encoder(const uint8_t* src, uint8_t* dest,
                                  int32_t size, uint8_t meta,
                                  blosc2_cparams *cparams, uint8_t id) {
  // code
}


int blosc2_openzl_decoder(const uint8_t *input, int32_t input_len, uint8_t *output,
                            int32_t output_len, uint8_t meta, blosc2_dparams *dparams,
                            const void *chunk) {
  // code
}

The header blosc2_openzl.h then makes the info and encoder/decoder functions available to Blosc2:

#include "blosc2.h"
#include "blosc2/codecs-registry.h"
#include "openzl/openzl.h"

BLOSC2_OPENZL_EXPORT int blosc2_openzl_encoder(...);

BLOSC2_OPENZL_EXPORT int blosc2_openzl_decoder(...);

// Declare the info struct as extern
extern BLOSC2_OPENZL_EXPORT codec_info info;

PEP 427 and wheel structure

In order for the plugin to dynamically link to Blosc2, it has to be able to find the Blosc2 library at runtime. This has historically been quite finicky, since different platforms and package managers may store Python packages (and the associated .so/.dylib/.dll library objects) differently. Consequently, PEP 427 recommends distributing the Python wheels for packages which depend on compiled objects, such as Python-Blosc2, in the following way:

blosc2
  ├── __init__.py
  ├── lib
  │   ├── libblosc2.so
  │   ├── cmake
  │   └── pkgconfig
  └── include
      └── blosc2.h

Finding the necessary libblosc2.so object from the top-level CMakeLists.txt file for the plugin is then as easy as:

# Find blosc2 package location using Python
execute_process(
    COMMAND "${Python_EXECUTABLE}" -c "import blosc2, pathlib; print(pathlib.Path(blosc2.__file__).parent)"
    OUTPUT_VARIABLE BLOSC2_PACKAGE_DIR
    OUTPUT_STRIP_TRAILING_WHITESPACE  # drop the trailing newline from print()
)
set(BLOSC2_INCLUDE_DIR "${BLOSC2_PACKAGE_DIR}/include")
set(BLOSC2_LIB_DIR "${BLOSC2_PACKAGE_DIR}/lib")

After building the plugin backend in src/CMakeLists.txt, one simply links the plugin to the backend (in this case openzl) and installs it like so:

add_library(blosc2_openzl SHARED blosc2_openzl.c)
target_include_directories(blosc2_openzl PUBLIC ${BLOSC2_INCLUDE_DIR})
target_link_libraries(blosc2_openzl ${OPENZL_TARGET})
# Install
install(TARGETS blosc2_openzl
    RUNTIME DESTINATION blosc2_openzl
    LIBRARY DESTINATION blosc2_openzl
)

Note that it is not necessary to link blosc2_openzl against blosc2 in target_link_libraries, as the former depends only on macros and structs defined in header files - not on functions. This makes the libblosc2_openzl.so object especially light and robust, as blosc2 is not registered as an explicit dependency. In fact, on Linux, even if blosc2_openzl.c were to call blosc2 functions, such linking would still not be necessary!

Following PEP 427 also allows one to add an additional safeguard so that the plugin can find libblosc2 at runtime, by setting the INSTALL_RPATH property on the installed object:

set_target_properties(blosc2_openzl PROPERTIES
    INSTALL_RPATH "$ORIGIN/../blosc2/lib"
)

It also makes it easy to find the plugin .so object when calling from Python - in the blosc2_openzl/__init__.py file one can obtain the library path as simply as os.path.abspath(Path(__file__).parent / libname), where libname is the desired .so/.dylib/.dll object (depending on platform). All these benefits have led us to update the wheel structure for python-blosc2 in the latest 4.0 release.
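For instance, a hypothetical __init__.py could resolve the library path like this (the extension mapping and library name below are illustrative, not the plugin's actual code):

# Hypothetical sketch: locate the plugin's shared library next to the package.
import os
import platform
from pathlib import Path

ext = {"Linux": ".so", "Darwin": ".dylib", "Windows": ".dll"}.get(platform.system(), ".so")
libname = "libblosc2_openzl" + ext
libpath = os.path.abspath(Path(__file__).parent / libname)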

Using OpenZL from Python

Installing is then as simple as:

pip install blosc2_openzl

One can also download the project and use the cmake and cmake --build commands to compile C-level tests or examples. But let's get compressing with Python straight away:

import blosc2
import numpy as np
import blosc2_openzl
from blosc2_openzl import OpenZLProfile as OZLP
prof = OZLP.OZLPROF_SH_BD_LZ4
# Define the compression parameters for Blosc2
cparams = {'codec': blosc2.Codec.OPENZL, 'codec_meta': prof.value}

# Create (uncompressed) array
np_array = np.arange(1000).reshape((10,100))

# Compression with the OpenZL codec
bl_array = blosc2.asarray(np_array, cparams=cparams)
print(bl_array.cratio) # print compression ratio
## 25.078369905956112

The OpenZLProfile enum contains the available profile pipelines that have been implemented in the plugin, which use the codec_meta field (an 8-bit integer) to specify the desired transformation via codecs, filters and other nodes for the compression graph. Starting from the Least-Significant-Bit (LSB), setting the bits tells OpenZL how to build the graph:

CODEC | SHUFFLE | DELTA | SPLIT | CRC | x | x | x |

  • CODEC - If set, use LZ4. Else ZSTD.

  • SHUFFLE - If set, use shuffle (outputs a stream for every byte of input data typesize)

  • DELTA - If set, apply a bytedelta (to all streams if necessary)

  • SPLIT - If set, do not recombine the byte streams

  • CRC - If set, store a checksum during compression and check it during decompression

The remaining bits may be used in the future.
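As a rough illustration of the layout (the flag names below are ours, not part of the plugin API; the OpenZLProfile enum already provides ready-made combinations), a codec_meta byte could be assembled like this:

import blosc2

# Bit positions follow the table above, starting at the LSB (names are illustrative).
CODEC_LZ4 = 1 << 0   # if set, use LZ4; otherwise ZSTD
SHUFFLE   = 1 << 1
DELTA     = 1 << 2
SPLIT     = 1 << 3
CRC       = 1 << 4

meta = CODEC_LZ4 | SHUFFLE | DELTA  # a shuffle + bytedelta + LZ4 pipeline
cparams = {'codec': blosc2.Codec.OPENZL, 'codec_meta': meta}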

In the future it would be great to further expand the OpenZL functionalities that we can offer via the plugin, such as bespoke transformers trained via machine learning techniques - see the OpenZL page for a flavour of what can be done with the (still evolving) library.

Conclusions

C-Blosc2's ability to support dynamically loaded plugins allows the library to grow in features without increasing the size and complexity of the library itself. For more information about user-defined plugins, refer to this blog entry. We have put this to work to offer linkage with the rather complex OpenZL library with a relatively rapid turnaround from design to prototype to full release in around a month. This is thanks to prior hard work by open source contributors from Blosc but naturally also OpenZL - many thanks to all!

If you find our work useful and valuable, we would be grateful if you could support us by making a donation. Your contribution will help us continue to develop and improve Blosc packages, making them more accessible and useful for everyone. Our team is committed to creating high-quality and efficient software, and your support will help us to achieve this goal.

The Surprising Speed of Compressed Data: A Roofline Story

Can a library designed for computing with compressed data ever hope to outperform highly optimized numerical engines like NumPy and Numexpr? The answer is complex, and it hinges on the "memory wall" - a phenomenon which occurs when system memory limitations start to drag on CPU performance. This post uses Roofline analysis to explore this very question, dissecting the performance of Blosc2 and revealing the surprising scenarios where it can gain a competitive edge.

TL;DR

Before we dive in, here's what we discovered:

  • For in-memory tasks, Blosc2's overhead can make it slower than Numexpr, especially on x86 CPUs.

  • This changes on Apple Silicon, where Blosc2's performance is much more competitive.

  • For on-disk tasks, Blosc2 consistently outperforms NumPy/Numexpr on both platforms.

  • The "memory wall" is real, and disk I/O is an even bigger one, which is where compression shines.

A Trip Down Memory Lane

Let's rewind to 2008. NumPy 1.0 was just a toddler, and the computing world was buzzing with the arrival of multi-core CPUs and their shiny new SIMD instructions. On the NumPy mailing list, a group of us were brainstorming how to harness this new power to make Python's number-crunching faster.

The idea seemed simple: trust newer compilers to use SIMD (and, possibly, data alignment) to perform operations on multiple data points at once. To test this, a simple benchmark was shared: multiply two large vectors element-wise. Developers from around the community ran the code and shared their results. What came back was a revelation.

For small arrays that fit snugly into the CPU's high-speed cache, SIMD was quite good at accelerating computations. But as soon as the arrays grew larger, the performance boost vanished. Some of us were already suspicious about the new "memory wall" that had been growing lately, seemingly due to the widening gap between CPU speeds and memory bandwidth. However, a conclusive answer (and solution) was still lacking.

But amidst the confusion, a curious anomaly emerged. One machine, belonging to NumPy legend Charles Harris, was consistently outperforming the rest—even those with faster processors. It made no sense. We checked our code, our compilers, everything. Yet, his machine remained inexplicably faster. The answer, when it finally came, wasn't in the software at all. Charles, a hardware wizard, had tinkered with his BIOS to overclock his RAM from 667 MHz to a whopping 800 MHz.

That was my lightbulb moment: for data-intensive tasks, raw CPU clock speed was not the limiting factor; memory bandwidth was what truly mattered.

This led me to a wild idea: what if we could make memory effectively faster? What if we could compress data in memory and decompress it on-the-fly, just in time for the CPU? This would slash the amount of data being moved, boosting our effective memory bandwidth. That idea became the seed for Blosc, a project I started in 2010 that has been my passion ever since. Now, 15 years later, it is time to revisit that idea and see how well it holds up in today's computing landscape.

Roofline Model: Understanding the Memory Wall

Not all computations are equally affected by the memory wall - in general, performance can be either CPU-bound or memory-bound. To diagnose which resource is the limiting factor, the Roofline model provides an insightful analytical framework. This model plots computational performance (FLOP/s) against arithmetic intensity (floating-point operations per byte of data moved to and from memory) to visually determine whether a task is constrained by CPU speed or memory bandwidth.

/images/roofline-surprising-story/roofline-intro.avif
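To make the model concrete, here is a tiny sketch of the Roofline bound itself; the peak compute and bandwidth figures are placeholders, not measurements from the machines discussed below:

# Attainable performance is capped by the lower of the compute roof and the
# memory roof (bandwidth times arithmetic intensity).
peak_flops = 500e9   # peak compute in FLOP/s (placeholder)
mem_bw = 50e9        # memory bandwidth in bytes/s (placeholder)

def attainable(ai):
    # ai: arithmetic intensity, in FLOP per byte moved to/from memory
    return min(peak_flops, ai * mem_bw)

for ai in (0.1, 1, 10, 100):
    print(f"AI = {ai:5.1f} FLOP/byte -> {attainable(ai) / 1e9:8.1f} GFLOP/s")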

We will use Roofline plots to analyze Blosc2's performance, compared to that of NumPy and Numexpr. NumPy, with its highly optimized linear algebra backends, and Numexpr, with its efficient evaluation of element-wise expressions, together form a strong performance baseline for the full range of arithmetic intensities tested.

To highlight the role of memory bandwidth, we will conduct our benchmarks on an AMD Ryzen 7800X3D CPU at two different memory speeds: the standard 4800 MT/s and an overclocked 6000 MT/s. This allows us to directly observe how memory frequency impacts computational performance.

To cover a range of computational scenarios, our benchmarks include five operations with varying arithmetic intensities:

  • Very Low: A simple element-wise addition (a + b + c).

  • Low: A moderately complex element-wise expression (sqrt(a + 2 * b + (c / 2)) ^ 1.2).

  • Medium: A highly complex element-wise calculation involving trigonometric and exponential functions.

  • High: Matrix multiplication on small matrices (labeled matmul0).

  • Very High: Matrix multiplication on large matrices (labeled matmul1 and matmul2).

/images/roofline-surprising-story/roofline-mem-speed-AMD-7800X3D.png

The Roofline plot confirms that increasing memory speed only benefits memory-bound operations (low arithmetic intensity), while CPU-bound tasks (high arithmetic intensity) are unaffected, as expected. Although this might suggest the "memory wall" is not a major obstacle, low-intensity operations like element-wise calculations, reductions, and selections are extremely common and often create performance bottlenecks. Therefore, optimizing for memory performance remains crucial.

The In-Memory Surprise: Why Wasn't Compression Faster?

We benchmarked Blosc2 (both compressed and uncompressed) against NumPy and Numexpr. For this test, Blosc2 was configured with the LZ4 codec and shuffle filter, a setup known for its balance of speed and compression ratio. The benchmarks were executed on an AMD Ryzen 7800X3D CPU with memory speed set to 6000 MT/s, ensuring optimal memory bandwidth for the tests.
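For reference, this kind of configuration looks roughly as follows in python-blosc2 (the actual benchmark scripts may set additional parameters):

import blosc2

# LZ4 codec plus the shuffle filter, applied when creating a compressed array.
cparams = {"codec": blosc2.Codec.LZ4, "filters": [blosc2.Filter.SHUFFLE]}
a = blosc2.linspace(0, 1, 10_000_000, cparams=cparams)
print(a.cratio)  # compression ratio achieved with this setup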

/images/roofline-surprising-story/roofline-7800X3D-mem-def.png

The analysis reveals a surprising outcome: for memory-bound operations, Blosc2 is up to five times slower than Numexpr. Although operating on compressed data provides a marginal improvement over uncompressed Blosc2, it is not enough to overcome this performance gap. This result is unexpected because Blosc2 leverages Numexpr internally, and the reduced memory bandwidth from compression should theoretically lead to better performance in these scenarios.

To understand this counter-intuitive result, we must examine Blosc2's core architecture. The key lies in its double partitioning scheme, which, while powerful, introduces an overhead that can negate the benefits of compression in memory-bound contexts.

Unpacking the Overhead: A Look Inside Blosc2's Architecture

The performance characteristics of Blosc2 are rooted in its double partitioning architecture, which organizes data into chunks and blocks.

/images/roofline-surprising-story/double-partition-b2nd.avif

This design is crucial both for aligning with the CPU's memory hierarchy and for enabling efficient multidimensional array representation (important for things like n-dimensional slicing). However, this structure introduces an inherent overhead from additional indexing logic. In memory-bound scenarios, this latency counteracts the performance gains from reduced memory traffic, explaining why Blosc2 does not surpass Numexpr.

Conversely, as arithmetic intensity increases, the computational demands begin to dominate the total execution time. In these CPU-bound regimes, the partitioning overhead is effectively amortized, allowing Blosc2 to close the performance gap and eventually match NumPy's performance in tasks like large matrix multiplications.

Modern ARM Architectures

CPU architecture is a rapidly evolving field. To investigate how these changes impact performance, we extended our analysis to the Apple Silicon M4 Pro, a modern ARM-based processor.

/images/roofline-surprising-story/roofline-m4pro-mem-def.png

The results show that Blosc2 performs significantly better on this platform, narrowing the performance gap with NumPy/NumExpr, especially for operations on compressed data. While compute engines optimized for uncompressed data still hold an edge, these findings suggest that compression will play an increasingly important role in improving computational performance in the future.

However, while the in-memory results are revealing, they don't tell the whole story. Blosc2 was designed not just to fight the memory wall, but to conquer an even greater bottleneck: disk I/O. Although compression already has the benefit of fitting more data into RAM when used in-memory (which is per se extremely interesting in these times of skyrocketing RAM prices), its true power is unleashed when computations move off the motherboard and onto disk. Now, let's shift the battlefield to the disk and see how Blosc2 performs in its native territory.

A Different Battlefield: Blosc2 Shines with On-Disk Data

Blosc2's architecture extends its computational engine to operate seamlessly on data stored on disk, a significant advantage for large-scale analysis. This is particularly relevant in scenarios where datasets exceed available memory, necessitating out-of-core processing, as commonly encountered in data science, machine learning workflows or cloud computing environments.

Our on-disk benchmarks were designed to use datasets larger than the system's available memory to prevent filesystem caching from influencing the results. To establish a baseline, we implemented an out-of-core solution for NumPy/NumExpr, leveraging memory-mapped files. Here Blosc2 has a performance edge, particularly for memory-bound operations on compressed data, as it can move data to and from disk faster than the memory-mapped NumPy arrays.

In this case, we've used high-performance NVMe SSDs (NVMe 4.0) to minimize the impact of disk speed on the results. We also switched to the ZSTD codec for Blosc2, as its superior compression ratio over LZ4 further minimizes data transfer to and from the disk.

First, let's see the results for the AMD Ryzen 7800X3D system:

/images/roofline-surprising-story/roofline-7800X3D-disk-def.png

The plots above show that Blosc2 outperforms both NumPy and Numexpr for all low-to-medium intensity operations. This is because the high latency of disk I/O amortizes the overhead of Blosc2's double partitioning scheme. Furthermore, the reduced bandwidth required for compressed data gives Blosc2 an additional performance advantage in this scenario.

Now, let's see the results for the Apple Silicon M4 Pro system:

/images/roofline-surprising-story/roofline-m4pro-disk-def.png

On the Apple Silicon M4 Pro system, Blosc2 again outperforms both NumPy and Numexpr for all on-disk operations, mirroring the results from the AMD system. However, the performance advantage is even more significant here, especially for memory-bound tasks. This is mainly because memory-mapped arrays are less efficient on Apple Silicon than on x86_64 systems, increasing the overhead for the NumPy/Numexpr baseline.

Roofline Plot: In-Memory vs On-Disk

To better understand the trade-offs between in-memory and on-disk processing with Blosc2, the following plot contrasts their performance characteristics for compressed data:

/images/roofline-surprising-story/roofline-mem-disk-def.png

A notable finding for the AMD system is that Blosc2's on-disk operations are noticeably faster than its in-memory operations, especially for memory-bound tasks (low arithmetic intensity). This is likely due to two factors: first, the larger datasets used for on-disk tests allow Blosc2 to use more efficient internal partitions (chunks and blocks), and second, parallel data reads from disk further reduce bandwidth requirements.

In contrast, for CPU-bound tasks (high arithmetic intensity), on-disk performance is comparable to, albeit slightly slower than, in-memory performance. The analysis also reveals a specific weakness: small matrix multiplications (matmul0) are significantly slower on-disk, identifying a clear target for future optimization.

In contrast to the AMD system, the Apple Silicon M4 Pro shows that Blosc2's on-disk operations are slower than in-memory, a difference that is most significant for memory-bound tasks. This performance disparity suggests that current on-disk optimizations may favor x86_64 architectures over ARM.

As with the AMD platform, CPU-bound operations exhibit similar performance for both on-disk and in-memory contexts. The notable exception remains the small matrix multiplication (matmul0), which performs significantly worse on-disk. This recurring pattern pinpoints a clear opportunity for future optimization efforts.

Finally, and in addition to its on-disk performance, Blosc2 offers a significant cost advantage. With the recent rise in SSD prices, compressing data on disk becomes an economically attractive strategy, allowing you to store more data in less space and thereby reduce hardware expenses.

Reproducibility

All the benchmarks and plots presented in this blog post can be reproduced. You are invited to run the scripts on your own hardware to explore the performance characteristics of Blosc2 in different environments. In case you get interesting results, please consider sharing them with the community!

Conclusions

In this blog post, we explored the Roofline model to analyze the performance of Blosc2, NumPy, and Numexpr. We've confirmed that memory-bound operations are significantly affected by the "memory wall", making data compression of interest when maximizing performance. However, for in-memory operations, the overhead of Blosc2's double partitioning scheme can be a limiting factor, especially on x86_64 architectures. Encouragingly, this performance gap narrows considerably on modern ARM platforms like Apple Silicon, suggesting a promising future.

The situation changes dramatically for on-disk operations. Here, Blosc2 consistently outperforms NumPy and Numexpr, as the high latency of disk I/O (even if we used SSDs here) amortizes its internal overhead. This makes Blosc2 a compelling choice for out-of-core computations, one of its primary use cases.

Overall, this analysis has provided valuable insights, highlighting the importance of the memory hierarchy. It has also exposed specific areas for improvement, such as the performance of small matrix multiplications. As Blosc2 continues to evolve, I am confident we can address these points and further enhance its performance, making it an even more powerful tool for numerical computations in Python.


Read more about ironArray SLU — the company behind Blosc2, Caterva2, Numexpr and other high-performance data processing libraries.

Compress Better, Compute Bigger!

Blosc2: A Universal Lazy Engine for Array Operations

While compression is often seen merely as a way to save storage, the Blosc development team has long viewed it as a foundational element for high-performance computing. This philosophy is at the heart of Blosc2, which is not just a compression library but a powerful framework for handling large datasets. This post will highlight one of Python-Blosc2's most exciting capabilities: its lazy evaluation engine for array operations.

Libraries optimised for computation on large datasets that don't fit in memory - such as Dask or Spark - often use lazy evaluation of computation expressions. This typically speeds up evaluation, since one can build the full chain of computations and only execute it when the final result is needed. Consequently, Python-Blosc2's compute engine also follows the lazy evaluation paradigm, which proves to be both powerful and efficient.

An additional benefit of the engine is its ability to act as a universal backend. Python-Blosc2 has a native blosc2.NDArray format, but it can also easily execute lazy operations on arrays from other popular libraries like NumPy, HDF5, Zarr, Xarray or TileDB - basically any array object which complies with a minimal protocol.

In the recent Python-Blosc2 3.10.x series, we added support for lazy evaluation of eager functions, expanding the capabilities of the compute engine, and making interaction with other formats easier. Let's explore how this works using an out-of-core tensordot operation as an example.

From Eager to Lazy with blosc2.lazyexpr

Functions which return a result with a different shape to the input operands - such as reductions or linear algebra operations - must be evaluated eagerly (computed and the result returned immediately). For example, blosc2.tensordot() executes eagerly.

Nevertheless, we can defer this computation by wrapping the call in a string and passing it to blosc2.lazyexpr. This creates a LazyExpr object that represents the operation without executing it.

import blosc2

# Assume a and b are large, on-disk blosc2 arrays
axis = (0, 1)

# Create a lazy expression object
lexpr = blosc2.lazyexpr("tensordot(a, b, axes=(axis, axis))")

# The computation has not run yet.
# To execute it and save the result to a new persistent array:
out_blosc2 = lexpr.compute(urlpath="out.b2nd", mode="w")

This is useful, and highly efficient both in terms of computation time and memory usage, as we'll see later. But the real magic happens when we use this computation engine with other array formats.

One Engine, Many Backends

The blosc2.evaluate() function takes the same string expression but can operate on any array-like objects that follow the blosc2.Array protocol. This protocol simply requires the object to have shape, dtype, __getitem__, and __setitem__ attributes, which are standard in h5py, zarr, tiledb, xarray and numpy arrays.
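As a sketch of how little the protocol demands, a hypothetical in-memory wrapper like the one below already qualifies and can be handed to blosc2.evaluate (the class is illustrative, not part of any library):

import numpy as np
import blosc2

class MemArray:
    # Minimal illustrative container: only shape, dtype, __getitem__ and
    # __setitem__ are exposed, which is all the protocol asks for.
    def __init__(self, arr):
        self._arr = np.asarray(arr)
        self.shape = self._arr.shape
        self.dtype = self._arr.dtype

    def __getitem__(self, key):
        return self._arr[key]

    def __setitem__(self, key, value):
        self._arr[key] = value

a = MemArray(np.arange(20.0).reshape(4, 5))
b = MemArray(np.ones((4, 5)))
res = blosc2.evaluate("a + b")  # the engine only touches the protocol attributes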

This means you can use Blosc2's efficient evaluation engine to perform out-of-core computations directly on your existing (HDF5, Zarr, etc.) datasets.

Example with HDF5

Here, we instruct blosc2.evaluate to run the tensordot operation on two h5py datasets and store the result in a third one.

import h5py
import blosc2

# Open HDF5 datasets
f = h5py.File("a_b_out.h5", "a")
a = f["a"]
b = f["b"]
out_hdf5 = f["out"]
axis = (0, 1)  # same axes as in the previous example

# Use blosc2.evaluate() with HDF5 arrays
blosc2.evaluate("tensordot(a, b, axes=(axis, axis))", out=out_hdf5)

Notice that the expression string is identical to the one we used before. blosc2 inspects the objects in the expression's namespace and computes with them, regardless of their underlying format.

Example with Zarr

The same principle applies to Zarr arrays.

import zarr

# Open Zarr arrays
a = zarr.open("a.zarr", mode="r")
b = zarr.open("b.zarr", mode="r")
zout = zarr.open_array("out.zarr", mode="w", ...)

# Use blosc2.evaluate() with Zarr arrays
blosc2.evaluate("tensordot(a, b, axes=(axis, axis))", out=zout)

This makes blosc2.evaluate a powerful, backend-agnostic tool for out-of-core array computations.

Performance Comparison

As well as offering smooth integration, blosc2.evaluate is highly performant. Python-Blosc2 uses a lazy evaluation engine that integrates tightly with the Blosc2 format. This means that the computation is performed on-the-fly, without any intermediate copies. This is a huge advantage for large datasets, as it allows us to perform computations on arrays that don't fit in memory. In addition, it actively tries to leverage the hierarchical memory layout in modern CPUs, so that it can use both private and shared caches in the best way possible.

We ran a benchmark performing a tensordot operation (run over three different axis combinations) on two 3D arrays stored on disk; the output is then written to disk as well. We consider five approaches:

  1. Blosc2 Native: Using blosc2.lazyexpr with blosc2.NDArray containers.

  2. Blosc2+HDF5: Using blosc2.evaluate with HDF5 for storage.

  3. Blosc2+Zarr: Using blosc2.evaluate with Zarr for storage.

  4. Dask+HDF5: The combination of Dask for computation and HDF5 for storage.

  5. Dask+Zarr: The combination of Dask for computation and Zarr for storage.

For each approach we plot the memory consumption vs. time for arrays of increasing size.

Results on two (600, 600, 600) float64 arrays (3 GB working set):

/images/tensordot_pure_persistent/tensordot-600c-amd.png

Results on two (1200, 1200, 1200) float64 arrays (26 GB working set):

/images/tensordot_pure_persistent/tensordot-1200c-amd.png

Results on two (1500, 1500, 1500) float64 arrays (50 GB working set):

/images/tensordot_pure_persistent/tensordot-1500c-amd.png

As can be seen, the amount of memory required by the different approaches is very different, although none requires more than a small fraction of the total working set (which is 3, 26 and 50 GB, respectively). This is because all approaches are out-of-core, and only load small chunks of data into memory at any given time.

The benchmarks were executed on an AMD Ryzen 9800X3D CPU, with 16 logical cores and 64 GB of RAM, using Ubuntu Linux 25.04. We have used the following versions of the libraries: python-blosc2 3.10.1, h5py 3.14.0, zarr 3.1.3, dask 2025.9.1, and numpy 2.3.3. All backends use Blosc or Blosc2 as the compression backend, with the same codecs and filters, and the same number of threads for compression and decompression.

Analysis

The results are revealing:

  • Blosc2 native is fastest: The tight integration between the Blosc2 compute engine and its native array format yields the best performance, making it the fastest solution by a significant margin.

  • Rapid computation time: blosc2.evaluate delivers impressive speed when operating directly on HDF5 and Zarr files, outperforming the more complex Dask+HDF5 and Dask+Zarr stack. This is great news for anyone with existing HDF5/Zarr datasets.

  • Low memory usage: While the memory consumption for the Blosc2+HDF5 combination is a bit high (we are still analyzing why), the memory usage for the Blosc2 native approach is pretty low, making it suitable for systems with limited RAM and/or operands not fitting in memory.

This is not to say that Dask (or Spark) is an inferior choice for out-of-core computations. It's a great tool for large-scale data processing, especially when using clusters, is very flexible, and offers a wide range of functions; it's certainly a first-class citizen in the PyData ecosystem. However, if your needs are more modest and you want a simple, efficient way to run computations on existing datasets, using a core of common functions, and leveraging the full capabilities of modern multi-core systems, all without the overhead of a full Dask setup, blosc2.evaluate() is a fantastic alternative.

Conclusion

Python-Blosc2 is more than just a compression library for storing data in blosc2.NDArray objects; it's a high-performance computing tool as well. Its lazy evaluation engine provides a simple yet powerful way to handle out-of-core operations. The computation engine is completely decoupled from the compression backend, and thus can easily work with many different array formats; however, the compute engine meshes most tightly with the Blosc2 native array format, achieving maximal performance (in terms of both computation time and memory usage).

By adhering to the Array API standard, it acts as a universal engine that can work with different storage backends; we already implement more than 100 functions that are required by that standard, and the number will only grow in the future. If you have existing datasets in HDF5 or Zarr or TileDB (and we are always looking forward to support even more formats), and need a lightweight, efficient way to run computations on them, blosc2.evaluate() is a fantastic tool to have in your arsenal. Of course, for maximum performance, the native Blosc2 format is a clear winner.

Our work continues. We are committed to enhancing Python-Blosc2 by expanding its supported operations, improving performance across backends, and adding new ones. Stay tuned for more updates! If you found this post useful, please share it. For questions or comments, reach out to us on GitHub.

TreeStore: Endowing Your Data With Hierarchical Structure

When working with large and complex datasets, having a way to organize your data efficiently is crucial. blosc2.TreeStore is a powerful feature in the blosc2 library that allows you to store and manage your compressed arrays in a hierarchical, tree-like structure, much like a filesystem. This container, typically saved with a .b2z extension, can hold not only blosc2.NDArray or blosc2.SChunk objects but also metadata, making it a versatile tool for data organization.

What is a TreeStore?

A TreeStore lets you arrange your data into groups (like directories) and datasets (like files). Each dataset is a blosc2.NDArray or blosc2.SChunk instance, benefiting from Blosc2's high-performance compression. This structure is ideal for scenarios where data has a natural hierarchy, such as in scientific experiments, simulations, or any project with multiple related datasets.

Basic Usage: Creating and Populating a TreeStore

Creating a TreeStore is straightforward. You can use a with statement to ensure the store is properly managed. Inside the with block, you can create groups and datasets using a path-like syntax.

import blosc2
import numpy as np

# Create a new TreeStore
with blosc2.TreeStore("my_experiment.b2z", mode="w") as ts:
    # You can store numpy arrays, which are converted to blosc2.NDArray
    ts["/dataset0"] = np.arange(100)

    # Create a group with a dataset that can be a blosc2 NDArray
    ts["/group1/dataset1"] = blosc2.zeros((10,))

    # You can also store blosc2 arrays directly (vlmeta included)
    ext = blosc2.linspace(0, 1, 10_000, dtype=np.float32)
    ext.vlmeta["desc"] = "dataset2 metadata"
    ts["/group1/dataset2"] = ext

In this example, we created a TreeStore in a file named my_experiment.b2z.

/images/new-treestore-blosc2/tree-store-blog.png

It contains two groups, root and group1, each holding datasets.

Reading from a TreeStore

To access the data, you open the TreeStore in read mode ('r') and use the same path-like keys to retrieve your arrays.

# Open the TreeStore in read-only mode ('r')
with blosc2.TreeStore("my_experiment.b2z", mode="r") as ts:
    # Access a dataset
    dataset1 = ts["/group1/dataset1"]
    print("Dataset 1:", dataset1[:])  # Use [:] to decompress and get a NumPy array

    # Access the external array that has been stored internally
    dataset2 = ts["/group1/dataset2"]
    print("Dataset 2", dataset2[:])
    print("Dataset 2 metadata:", dataset2.vlmeta[:])

    # List all paths in the store
    print("Paths in TreeStore:", list(ts))
Dataset 1: [0 1 2 3 4 5 6 7 8 9]
Dataset 2 [0.0000000e+00 1.0001000e-04 2.0002000e-04 ... 9.9979997e-01 9.9989998e-01
 1.0000000e+00]
Dataset 2 metadata: {b'desc': 'dataset2 metadata'}
Paths in TreeStore: ['/group1/dataset2', '/group2', '/group1', '/group2/another_dataset', '/group1/dataset1']

Advanced Usage: Metadata and Subtrees

TreeStore becomes even more powerful when you use metadata and interact with subtrees (groups).

Storing Metadata with vlmeta

You can attach variable-length metadata (vlmeta) to any group or to the root of the tree. This is useful for storing information like author names, dates, or experiment parameters. vlmeta is essentially a dictionary where you can store your metadata.

# Appending metadata to the TreeStore
with blosc2.TreeStore("my_experiment.b2z", mode="a") as ts:  # 'a' for append/modify
    # Add metadata to the root
    ts.vlmeta["author"] = "The Blosc Team"
    ts.vlmeta["date"] = "2025-08-17"

    # Add metadata to a group
    ts["/group1"].vlmeta["description"] = "Data from the first run"

# Reading metadata
with blosc2.TreeStore("my_experiment.b2z", mode="r") as ts:
    print("Root metadata:", ts.vlmeta[:])
    print("Group 1 metadata:", ts["/group1"].vlmeta[:])
Root metadata: {'author': 'The Blosc Team', 'date': '2025-08-17'}
Group 1 metadata: {'description': 'Data from the first run'}

Working with Subtrees (Groups)

A group object can be retrieved from the TreeStore and treated as a smaller, independent TreeStore. This capability is useful for better organizing your data access code.

with blosc2.TreeStore("my_experiment.b2z", mode="r") as ts:
    # Get the group as a subtree
    group1 = ts["/group1"]

    # Now you can access datasets relative to this group
    dataset2 = group1["dataset2"]
    print("Dataset 2 from group object:", dataset2[:])

    # You can also list contents relative to the group
    print("Contents of group1:", list(group1))
Dataset 2 from group object: [0.0000000e+00 1.0001000e-04 2.0002000e-04 ... 9.9979997e-01 9.9989998e-01
 1.0000000e+00]
Contents of group1: ['/dataset2', '/dataset1']

Iterating Through a TreeStore

You can easily iterate through all the nodes in a TreeStore to inspect its contents.

with blosc2.TreeStore("my_experiment.b2z", mode="r") as ts:
    for path, node in ts.items():
        if isinstance(node, blosc2.NDArray):
            print(f"Found dataset at '{path}' with shape {node.shape}")
        else:  # It's a group
            print(f"Found group at '{path}' with metadata: {node.vlmeta[:]}")
Found dataset at '/group1/dataset2' with shape (10000,)
Found group at '/group1' with metadata: {'description': 'Data from the first run'}
Found dataset at '/group1/dataset1' with shape (10,)
Found dataset at '/dataset0' with shape (100,)

That's it for this introduction to blosc2.TreeStore! You now know how to create, read, and manipulate a hierarchical data structure that can hold compressed datasets and metadata. You can find the source code for this example in the blosc2 repository.

Some Benchmarks

TreeStore is based on powerful abstractions from the blosc2 library, so it is very fast. Here are some benchmarks comparing TreeStore to other data storage formats, like HDF5 and Zarr. We have used two different configurations: one with small arrays, where sizes follow a normal distribution centered at 10 MB each, and the other with larger arrays, where sizes follow a normal distribution centered at 1 GB each. We have compared the performance of TreeStore against HDF5 and Zarr for both small and large arrays, measuring the time taken to create and read datasets. For comparing apples with apples, we have used the same compression codec (zstd) and filter (shuffle) for all three formats.

For assessing different platforms, we have used a desktop with an Intel i9-13900K CPU and 32 GB of RAM, running Ubuntu 25.04, and also a Mac mini with an Apple M4 Pro processor and 24 GB of RAM. The benchmarks were run using this script.

Results for the Intel i9-13900K desktop

100 small arrays (around 10 MB each) scenario:

/images/new-treestore-blosc2/benchmark_comparison_b2z-i13900K-10M.png

For the small arrays scenario, we can see that TreeStore is the fastest at creating datasets (thanks to its use of multi-threading), but it is slower than HDF5 and Zarr when reading datasets. The reason for this is two-fold: first, TreeStore is designed to work with multi-threading, so it must set up the necessary threads at the beginning of the read operation, which takes some time; second, TreeStore uses NDArray objects internally, which rely on a double partitioning scheme (chunks and blocks) to store the data, adding some overhead when reading small slices of data. Regarding the space used, TreeStore is the most efficient, very close to HDF5, and significantly more efficient than Zarr.

100 large arrays (around 1 GB each) scenario:

/images/new-treestore-blosc2/benchmark_comparison_b2z-i13900K-1G.png

When handling larger arrays, TreeStore maintains its lead in creation and full-read performance. Although HDF5 and Zarr offer faster access to small data slices, TreeStore compensates by being the most storage-efficient format, followed by HDF5, with Zarr being the most space-intensive.

Results for the Apple M4 Pro Mac mini

100 small arrays (around 10 MB each) scenario:

/images/new-treestore-blosc2/benchmark_comparison_b2z-MacM4-10M.png

100 large arrays (around 1 GB each) scenario:

/images/new-treestore-blosc2/benchmark_comparison_b2z-MacM4-1G.png

Consistent with the previous results, TreeStore is the most space-efficient format and the fastest for creating and reading datasets, particularly for larger arrays. Its performance is slower than HDF5 and Zarr only when reading small data slices (access time). This can be improved by reducing the number of threads from the default of eight, which lessens the thread setup overhead. For more details on this, see these slides comparing 8-thread vs 1-thread performance.
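For example, a quick way to try this in python-blosc2 is to lower the global thread count before reading (per-array cparams/dparams can also carry their own nthreads setting):

import blosc2

# Use a single thread to avoid the per-read thread setup overhead on tiny slices.
blosc2.set_nthreads(1)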

Notably, the Apple M4 Pro processor shows competitive performance against the Intel i9-13900K CPU, a high-end desktop processor that consumes up to 8x more power. This result underscores the efficiency of the ARM architecture in general and Apple silicon in particular.

Conclusion

In summary, blosc2.TreeStore offers a straightforward yet potent solution for hierarchically organizing compressed datasets. By merging the high-performance compression of blosc2.NDArray and blosc2.SChunk with a flexible, filesystem-like structure and metadata support, it stands out as an excellent choice for managing complex data projects.

As TreeStore is currently in beta, we welcome feedback and suggestions for its improvement. For further details, please consult the official documentation for blosc2.TreeStore.

Blosc2 Gets Fancy (Indexing)

Update (2025-08-26): After some further effort, the 1D fast path mentioned below has been extended to the multidimensional case, with consequent speedups in Blosc2 3.7.3! See the plot below, which compares maximum and minimum indexing times for the Blosc2-supported fancy indexing cases discussed in this post.

/images/blosc2-fancy-indexing/newfancybench.png

---

In response to requests from our users, the Blosc2 team has introduced a fancy indexing capability into the flagship Blosc2 NDArray object. In the future, this could be extended to other classes within the Blosc2 library, such as C2Array and LazyArray.

What is Fancy Indexing?

In many array libraries, most famously NumPy, fancy indexing refers to a vectorized indexing format which allows for simultaneous selection and reshaping of arrays (see this excerpt). For example, one may wish to select three entries from a 1D array:

arr = array([10, 11, 12])

which can be done like so:

arr[[1,2,1]]
>> array([11, 12, 11])

Note that the order of the indices is arbitrary (i.e. the elements of the output may occur in a different order to the original array) and indices may be repeated. Moreover, if the array is multidimensional, for example:

arr = array([[10, 11],
             [12, 13],
             [14, 15]])

then the output consists of the relevant rows:

arr[[1,2,0]]
>> array([[12, 13],
          [14, 15],
          [10, 11]])

and so on for arbitrary numbers of dimensions.

Indeed one can output arbitrary shapes, for example via:

arr[[[1,2],[0,1]]]
>> array([[[12, 13],
          [14, 15]],

         [[10, 11],
          [12, 13]]])

NumPy supports many different kinds of fancy indexing, a flavour of which can be seen from the following examples, where row and col are integer array objects. If they are not of the same shape then broadcasting conventions will be applied to try to massage the index into an understandable format.

  1. arr[row]

  2. arr[[row, col]]

  3. arr[row, col]

  4. arr[row[:, None], col]

  5. arr[1, col] or arr[1:9, col]

In addition, one may use a boolean mask, in combination with integer indices, slices, or integer arrays via

  6. arr[row[:, None], mask]

where the mask must have the same length as the indexed dimension(s).

Support for Fancy Indexing and ndindex

Other libraries for managing large arrays, such as zarr and h5py, offer fancy indexing support, but neither is as comprehensive as NumPy. h5py, which uses the HDF5 format, is quite limited in that one may only use one integer array, no repeated indices are allowed, and the indices must be sorted in increasing order, although mixed slice and integer array indexing is possible. zarr, via its vindex (for vectorized index), offers more support, but is rather limited when it comes to mixed indexing, as slices may not be used with integer arrays, and an integer array must be provided for every dimension of the array (i.e. arr[row] fails on any non-1D arr).

This makes it difficult (in the case of zarr) or impossible (in the case of h5py) to do the kind of reshaping we saw in the introduction (i.e. case 2 above arr[[[1,2],[0,1]]]). This lack of support is due to a combination of: 1) the computational difficulty of many of these operations; and 2) the at times counter-intuitive behaviour of fancy indexing (see the end of this blog post for more details).

When implementing fancy indexing for Blosc2 we strove to match the functionality of NumPy as closely as possible, and we have almost been able to do so — all the 6 cases mentioned above are perfectly feasible with this new Blosc2 release! There are only some minor edge cases which are not supported (see Example 2 in the Addendum). This would not have been possible without the excellent ndindex library, which offers many very useful, efficient functions for index conversion between different shapes and chunks. We can then call NumPy behind-the-scenes, chunk-by-chunk, and exploit its native support for fancy indexing, without having to load the entire array into memory.

Results: Blosc2, Zarr, H5Py and NumPy

Hence, when averaging over the indexing cases above on 2D arrays of varying sizes, we observe only a minor slowdown for Blosc2 compared to NumPy when the array size is small compared to total memory (24GB), suggesting a small chunking-and-indexing overhead. As expected, when the array grows to an appreciable fraction of memory (16GB), loading the full NumPy array into memory starts to impact performance. The black error bars in the plots indicate the maximum and minimum times observed over the indexing cases (for which there is clearly a large variation).

Note that for cases 4 and 6 with large row or col index arrays, broadcasting causes the resulting index (stored in memory) to be very large, and even for array sizes of 2GB computation is too slow. In the future, we would like to see if this can be improved.

/images/blosc2-fancy-indexing/fancyIdxNumpyBlosc22D.png

Blosc2 is also as fast or faster than Zarr and HDF5 even for the limited use cases that the latter two libraries both support. HDF5 in particular is especially slow when the indexing array is very large.

/images/blosc2-fancy-indexing/fancyIdxNumpyBlosc2ZarrHDF52D.png

These plots have been generated using a Mac mini with the Apple M4 Pro processor. The benchmark is available on the Blosc2 github repo here.

Conclusion

Blosc2 offers a powerful and flexible fancy indexing functionality that is more extensive than that of Zarr and H5Py, while also being able to handle large arrays on-disk without loading them into memory. This makes it a great choice for applications that require complex indexing operations on large datasets. Give it a try in your own projects! If you have questions, the Blosc2 community is here to help.

If you appreciate what we're doing with Blosc2, please think about supporting us. Your help lets us keep making these tools better.

Addendum: Oindex, Vindex and FancyIndex via Two Examples

Zarr's implementation of fancy indexing is packaged as vindex (vectorized indexing). It also offers another indexing functionality, called orthogonal indexing, via oindex.

The reason for this dual support becomes clear when one considers a simple example.

Example 1

For a 2D array, we have seen that the fancy-indexing rules will cause the two index arrays below to be broadcast together:

arr[[0, 1], [2, 3]] -> [arr[0,2], arr[1,3]]

giving an output with two elements of shape (2,). This is vindexing.

However, one could understand this indexing as selecting rows 0 and 1 in the array, and then their intersection with columns 2 and 3. This gives an output with four elements of shape (2, 2), with elements:

[[arr[0,2], arr[0,3]],
 [arr[1,2], arr[1,3]]]

This is oindexing. Clearly, given the same index, the output is in general different; it is for this reason that the debate about fancy indexing can be quite polemical, and why there is a movement to introduce the vindex/oindex duality in NumPy.

Example 2

I have glossed over this until now, but vindex is not the same as fancy indexing. For this reason Zarr does not support all the functionality of fancy indexing, since it only supports vindex. The most important distinction between the two is that vindex seeks to avoid certain unexpected fancy indexing behaviour, as can be seen by considering a 3D NumPy array of shape (X, Y, Z) as in the example here. Consider the unexpected behaviour of:

arr[:10, :, [0,1]] has shape (10, Y, 2).

arr[0, :, [0, 1]] has shape (2, Y), not (Y, 2)!!

NumPy indexing treats non-slice indices differently, and will always put the axes introduced by the index array first, unless the non-slice indexes are consecutive, in which case it will try to massage the result to something intuitive (which normally coincides with the result of an oindex) — hence arr[:, 0, [0, 1]] has shape (X, 2), not (2, X).

The hypothesised NumPy vindex would eliminate this transposition behaviour, and be internally consistent, always putting the axes introduced by the index array first. Unfortunately, this is difficult and costly, and so the alternative is to simply not allow such indexing and throw an error, or force the user to be very specific.

Blosc2 will throw an error when one inserts a slice between array indices:

arr[:, 0, [0, 1]] -> shape (X, 2)
arr.vindex[0, :, [0,1]] -> ERROR

Zarr's vindex (called by __getitem__), by requiring integer array indices for all dimensions, throws an error for all mixed indices of this type:

arr[:, 0, [0, 1]] -> ERROR
arr[0, :, [0,1]] -> ERROR

Thus to reproduce the result of Blosc2 for the first case, one must use an explicit index array:

idx = np.array([0,1]).reshape(1,-1)
arr[np.arange(X).reshape(-1,1), 0, idx] -> shape (X, 2)

For both Blosc2 and Zarr, one must use an explicit index array like so for the second case:

arr[0, np.arange(Y).reshape(-1,1), idx] -> shape (Y, 2)

Hopefully you now understand why fancy indexing can be so tricky, and why few libraries seek to support it to the same extent as NumPy - some would say it is perhaps not even desirable to do so!

Efficient array concatenation launched in Blosc2

Update (2025-06-23): Recently, Luke Shaw added a stack() function in Blosc2, using the concatenate feature described here. The new function allows you to stack arrays along a new axis, which is particularly useful for creating higher-dimensional arrays from lower-dimensional ones. We have added a section at the end of this post to show the usage and performance of this new function.

---

Blosc2 just got a cool new trick: super-efficient array concatenation! If you've ever needed to combine several arrays into one, especially when dealing with lots of data, this new feature is for you. It's built to be fast and use as little memory as possible. This is especially true if your array sizes line up nicely with Blosc2's internal "chunks" (think of these as the building blocks of your compressed data). When this alignment happens, concatenation is lightning-fast, making it perfect for demanding tasks.

You can use this new concatenate feature whether you're coding in C or Python, and it works with any Blosc2 NDArray (Blosc2's way of handling multi-dimensional arrays).

Let's see how easy it is to use in Python. If you're familiar with NumPy's concatenate, the blosc2.concat function will feel very similar:

import blosc2
# Create some sample arrays
a = blosc2.full((10, 20), 1, urlpath="arrayA.b2nd", mode="w")
b = blosc2.full((10, 20), 2, urlpath="arrayB.b2nd", mode="w")
c = blosc2.full((10, 20), 3, urlpath="arrayC.b2nd", mode="w")
# Concatenate the arrays along the first axis
result = blosc2.concat([a, b, c], axis=0, urlpath="destination.b2nd", mode="w")
# The result is a new Blosc2 NDArray containing the concatenated data
print(result.shape)  # Output: (30, 20)
# You can also concatenate along other axes
result_axis1 = blosc2.concat([a, b, c], axis=1, urlpath="destination_axis1.b2nd", mode="w")
print(result_axis1.shape)  # Output: (10, 60)

The blosc2.concat function is pretty straightforward. You give it a list of the arrays you want to join together. You can also tell it which way to join them using the axis parameter (like joining them end-to-end or side-by-side).

A really handy feature is that you can use urlpath and mode to save the combined array directly to a file. This is great when you're working with huge datasets because you don't have to load everything into memory at once. What you get back is a brand new, persistent Blosc2 NDArray with all your data combined.

Aligned versus Non-Aligned Concatenation

Blosc2's concat function is smart. It processes your data in small pieces of compressed data (chunks). This has two consequences. The first is that you can join very large arrays, stored on your disk, chunk-by-chunk without using up all your computer's memory. Secondly, if the chunk boundaries of the arrays to be concatenated line up neatly, the process is much faster. Why? Because Blosc2 can avoid a lot of extra work, chiefly decompressing and re-compressing the chunks.

Let's look at some pictures to see what "aligned" and "unaligned" concatenation means. "Aligned" means that chunk boundaries of the arrays to be concatenated line up with each other. "Unaligned" means that this is not the case.

/images/blosc2-new-concatenate/concat-unaligned.png

/images/blosc2-new-concatenate/concat-aligned.png

The pictures show why "aligned" concatenation is faster. In Blosc2, all data pieces (chunks) inside an array must be the same size. So, if the chunks in the arrays you're joining match up ("aligned"), Blosc2 can combine them very quickly. It doesn't have to rearrange the data into new, same-sized chunks for the final array. This is a big deal for large arrays.

If the arrays are "unaligned," Blosc2 has more work to do. It has to decompress and then re-compress the data to make the new chunks fit, which takes longer. There's one more small detail for this fast method to work: the first array's size needs to be a neat multiple of its chunk size along the direction you're joining.
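To make this concrete, here is a minimal sketch; the chunk shapes below are chosen purely for illustration, and blosc2.concat decides internally whether the fast path can be taken:

import blosc2

# Chunks of (10, 20) divide the first array exactly along axis 0,
# so chunk boundaries line up and the fast (aligned) path can be used.
a = blosc2.full((40, 20), 1, chunks=(10, 20))
b = blosc2.full((20, 20), 2, chunks=(10, 20))
aligned = blosc2.concat([a, b], axis=0)
print(aligned.shape)    # (60, 20)

# If the first array's length along axis 0 is not a multiple of its chunk
# size, Blosc2 falls back to decompressing and re-compressing chunks.
c = blosc2.full((45, 20), 1, chunks=(10, 20))
unaligned = blosc2.concat([c, b], axis=0)
print(unaligned.shape)  # (65, 20)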

A big plus with Blosc2 is that it always processes data in these small chunks. This means it can combine enormous arrays without ever needing to load everything into your computer's memory at once.

Performance

To show you how much faster this new concatenate feature is, we did a speed test using LZ4 as the internal compressor in Blosc2. We compared it to the usual way of joining arrays with numpy.concatenate.

/images/blosc2-new-concatenate/benchmark-lz4-20k-i13900K.png

The speed tests show that Blosc2's new concatenate is rather slow for small arrays (like 1,000 x 1,000). This is because it has to do a lot of work to set up the concatenation. But when you use larger arrays (like 20,000 x 20,000) that start to exceed the memory limits of our test machine (32 GB of RAM), Blosc2's new concatenate performance is much better, approaching that of NumPy's concatenate function.

However, if your array sizes line up well with Blosc2's internal chunks ("aligned" arrays), Blosc2 becomes much faster: typically more than 10x faster than NumPy for large arrays. This is because it can skip a lot of the work of decompressing and re-compressing data, and the cost of copying compressed data is also lower (by as much as the achieved compression ratio, which in this case is around 10x).

Using the Zstd compressor with Blosc2 can make joining "aligned" arrays even quicker, since Zstd is good at making data smaller.

/images/blosc2-new-concatenate/benchmark-zstd-20k-i13900K.png

So, when arrays are aligned, there's less data to copy (compression ratios here are around 20x), which speeds things up. If arrays aren't aligned, Zstd is a bit slower than LZ4 because its decompression and re-compression steps take longer. Conclusion? Pick the compressor that works best for what you're doing!

Stacking Arrays

We've also added a new stack() function in Blosc2 that uses the concatenate feature. This function lets you stack arrays along a new axis, which is super useful for creating higher-dimensional arrays from lower-dimensional ones. Here's how it works:

import blosc2
# Create some sample arrays
a = blosc2.full((10, 20), 1, urlpath="arrayA.b2nd", mode="w")
b = blosc2.full((10, 20), 2, urlpath="arrayB.b2nd", mode="w")
c = blosc2.full((10, 20), 3, urlpath="arrayC.b2nd", mode="w")
# Stack the arrays along a new axis
stacked_result = blosc2.stack([a, b, c], axis=0, urlpath="stacked_destination.b2nd", mode="w")
print(stacked_result.shape)  # Output: (3, 10, 20)
# You can also stack along other axes
stacked_result_axis1 = blosc2.stack([a, b, c], axis=1, urlpath="stacked_destination_axis1.b2nd", mode="w")
print(stacked_result_axis1.shape)  # Output: (10, 3, 20)

Benchmarks for the stack() function show that it performs similarly to the concat() function, especially when the input arrays are aligned. Here are the results for the same data sizes and machine used in the previous benchmarks, and using the LZ4 compressor.

/images/blosc2-new-concatenate/stack-lz4-20k-i13900K.png

And here are the results for the Zstd compressor.

/images/blosc2-new-concatenate/stack-zstd-20k-i13900K.png

As can be seen, the stack() function is also very fast when the input arrays are aligned, and it performs well even for large arrays that don't fit into memory. Incidentally, when stacking along the last dimension, blosc2.stack() is slightly faster than numpy.stack() even when the arrays are not aligned; we are not sure why this is the case, but the fact that we can reproduce this behaviour consistently is probably a sign that NumPy could optimize this use case better.

Conclusion

Blosc2's new concatenate and stack features are a great way to combine arrays quickly and without using too much memory. They are especially fast when your array sizes are an exact multiple of Blosc2's "chunks" (aligned arrays), making them perfect for big data jobs. They also work well for large arrays that don't fit into memory, as they process data in small chunks. Finally, they are supported in both C and Python, so you can use them in your favorite programming language.

Give it a try in your own projects! If you have questions, the Blosc2 community is here to help.

If you appreciate what we're doing with Blosc2, please think about supporting us. Your help lets us keep making these tools better.

Make NDArray Transposition Fast (and Compressed!) within Blosc2

Update (2025-04-30): The transpose function is now officially deprecated and replaced by the new permute_dims. This transition follows the Python array API standard v2022.12, aiming to make Blosc2 even more compatible with modern Python libraries and workflows.

In contrast with the previous transpose, the new permute_dims offers:

  • Support for arrays of any number of dimensions.

  • Full handling of arbitrary axis permutations, including support for negative indices.

Moreover, I have found a new way to transpose matrices more efficiently for Blosc2. This blog contains updated plots and discussions.

---

Matrix transposition is more than a textbook exercise: it plays a key role in memory-bound operations, where layout and access patterns can make or break performance.

When working with large datasets, efficient data transformation can significantly improve both performance and compression ratios. In Blosc2, we recently implemented a matrix transposition function, a fundamental operation that rearranges data by swapping rows and columns. In this post, I'll share the design insights, implementation details, performance considerations that went into this feature, and an unexpected NumPy behaviour.

What was the old behavior?

Previously, calling blosc2.transpose(A) would transpose the data within each chunk, and a new chunk shape would be chosen for the output array. However, this new chunk shape was not necessarily aligned with the new memory access patterns induced by the transpose. As a result, even though the output looked correct, accessing data along the new axes still incurred a significant overhead due to an increased number of I/O operations. This led to performance bottlenecks, particularly in workloads that rely on efficient memory access patterns.

Transposition explanation for old operation

What's new?

The permute_dims function in Blosc2 has been redesigned to greatly improve performance when working with compressed, multidimensional arrays. The main improvement lies in transposing the chunk layout alongside the array data, which eliminates the overhead of cross-chunk access patterns.

The new implementation transposes the chunk layout along with the data. For example, an array with chunks=(2, 5) that is transposed with axes=(1, 0) will result in an array with chunks=(5, 2). This ensures that the output layout matches the new data order, making block access contiguous and efficient.
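A minimal sketch of that example (the array and chunk shapes are chosen just for illustration):

import numpy as np
import blosc2

a = blosc2.asarray(np.arange(10 * 25).reshape(10, 25), chunks=(2, 5))
at = blosc2.permute_dims(a, axes=(1, 0))

print(at.shape)   # (25, 10)
print(at.chunks)  # (5, 2): the chunk layout is transposed along with the data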

This logic generalizes to N-dimensional arrays and applies regardless of their shape or chunk configuration.

Transposition explanation for new operation

Performance benchmark: Transposing matrices with Blosc2 vs NumPy

To evaluate the performance of the new matrix transposition implementation in Blosc2, I conducted a series of benchmarks comparing it to NumPy, which serves as the baseline due to its widespread use and high optimization level. The goal was to observe how both approaches perform when handling matrices of increasing size and to understand the impact of different chunk configurations in Blosc2.

Benchmark setup

All tests were conducted using matrices filled with float64 values, covering a wide range of sizes, starting from small 100×100 matrices and scaling up to very large matrices of size 17000×17000, i.e. data sizes from just a few megabytes to over 2 GB. Each matrix was transposed with NumPy as a baseline and with the Blosc2 API under different chunking strategies.

In the case of NumPy, I used the .transpose() function followed by a .copy() to ensure that the operation was comparable to that of Blosc2. This is because, by default, NumPy's transposition is a view operation that only modifies the array's metadata, without actually rearranging the data in memory. Adding .copy() forces NumPy to perform a real memory reordering, making the comparison with Blosc2 fair and accurate.
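For reference, this is roughly what the NumPy baseline looks like (the matrix size here is just illustrative):

import numpy as np

a = np.random.rand(10000, 10000)

# A plain transpose is only a view: just the strides/metadata change.
view = a.transpose()
print(view.base is a)  # True

# Adding .copy() forces the data to be physically rearranged in memory,
# which is the work that Blosc2 actually has to do.
at = a.transpose().copy()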

For Blosc2, I tested the transposition function across several chunk configurations. Specifically, I included:

  • Automatic chunking, where Blosc2 decides the optimal chunk size internally.

  • Fixed chunk sizes: (150, 300), (1000, 1000) and (5000, 5000).

These chunk sizes were chosen to represent a mix of square and rectangular blocks, allowing me to study how chunk geometry impacts performance, especially for very large matrices.

Each combination of library and configuration was tested across all matrix sizes, and the time taken to perform the transposition was recorded in seconds. This comprehensive setup makes it possible to compare not just raw performance, but also how well each method scales with data size and structure.

Results and discussion

The chart below summarizes the benchmark results for matrix transposition using NumPy and Blosc2, across various chunk shapes and matrix sizes.

Transposition performance for new method

While NumPy sets a strong performance baseline, the behaviour of Blosc2 becomes particularly interesting when we dive into how different chunk configurations affect transposition speed. The following observations highlight how crucial the choice of chunk shape is to achieving optimal performance.

  • Large square chunks (e.g., (5000, 5000)) showed the worst performance, especially with large matrices. Despite having fewer chunks, their size seems to hinder cache performance and introduce memory pressure that degrades throughput. Execution times were consistently higher than for the other configurations.

  • Small rectangular chunks such as (150, 300) also underperformed. As matrix size grew, execution times increased significantly, reaching nearly 3 seconds at around 2200 MB, likely due to poor cache utilization and the overhead of managing many tiny chunks.

  • Mid-sized square chunks like (1000, 1000) delivered consistently solid results across all tested sizes. Their timings stay below ~1.2 s with minimal variance, making them a reliable manual choice.

  • Automatically selected chunks consistently achieved the best performance. By adapting chunk layout to the data shape and size, the internal heuristics outpaced all fixed configurations, even rivaling plain NumPy transpose times.

Blosc2 vs NumPy comparison

The second plot provides a direct comparison between the standard NumPy transpose and the newly optimized Blosc2 version. It shows that Blosc2’s optimized implementation closely matches NumPy's performance, even for larger matrices. The results confirm that with good chunking strategies and proper memory handling, Blosc2 can achieve performance on par with NumPy for transposition operations.

Conclusion

The benchmarks highlight one key insight: Blosc2 is highly sensitive to chunk shape, and its performance can range from excellent to poor depending on how it is configured. With the right chunk size, Blosc2 can offer both high-speed transpositions and advanced features like compression and out-of-core processing. However, misconfigured chunks, especially those that are too big or too small, can drastically reduce its effectiveness. This makes chunk tuning an essential step for anyone seeking to get the most out of Blosc2 for large-scale matrix operations.

Appendix A: Unexpected NumPy behaviour

While running the benchmarks, unusual spikes were consistently observed in the performance of NumPy around matrix sizes of approximately 500 MB, 1100 MB and 2000 MB. This can be clearly seen in the plot below:

NumPy transposition performance anomaly

This sudden increase in transposition time is consistently reproducible and does not seem to correlate with the gradual increase expected from larger memory sizes. We have also observed this behaviour in other machines, although at different sizes.

This observation reinforces the importance of testing under realistic and varied conditions, as performance is not always linear or intuitive.

Optimizing chunks for matrix multiplication in Blosc2

As data volumes continue to grow in fields like machine learning and scientific computing, optimizing fundamental operations like matrix multiplication becomes increasingly critical. Blosc2's chunk-based approach offers a new path to efficiency in these scenarios.

Matrix Multiplication

Matrix multiplication is a fundamental operation in many scientific and engineering applications. With the introduction of matrix multiplication into Blosc2, users can now perform this operation on compressed arrays efficiently. The key advantages of having matrix multiplication in Blosc2 include:

  • Compressed matrices in memory: Blosc2 enables matrices to be stored in a compressed format without sacrificing the ability to perform operations directly on them.

  • Efficiency with chunks: In computation-intensive applications, matrix multiplication can be executed without fully decompressing the data, operating on small blocks of data independently, saving both time and memory.

  • Out-of-core computation: When matrices are too large to fit in main memory, Blosc2 facilitates out-of-core processing. Data stored on disk is read and processed in optimized chunks, allowing matrix multiplication operations without loading the entire dataset into memory.

These features are especially valuable in big data environments and in scientific or engineering applications where matrix sizes can be overwhelming, enabling complex calculations efficiently.

Implementation

The matrix multiplication functionality is implemented in the matmul function. It supports Blosc2 NDArray objects and leverages chunked operations to perform the multiplication efficiently.
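In Python it looks like this (a minimal sketch; the operand sizes are arbitrary):

import numpy as np
import blosc2

# Compressed operands; chunk shapes are left to Blosc2's automatic selection
a = blosc2.asarray(np.random.rand(1000, 800))
b = blosc2.asarray(np.random.rand(800, 500))

c = blosc2.matmul(a, b)  # the result is a compressed NDArray
print(c.shape)           # (1000, 500)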

How blocked matrix multiplication works

The image illustrates a blocked matrix multiplication approach. The key idea is to divide matrices into smaller blocks (or chunks) to optimize memory access and computational efficiency.

In the image, matrix A (M x K) and matrix B (K x N) are partitioned into chunks, and these are further partitioned into blocks. The resulting matrix C (M x N) is then computed chunk by chunk, with each chunk obtained as a sum of block-wise products.

This method significantly improves cache utilization by ensuring that only the necessary parts of the matrices are loaded into memory at any given time. In Blosc2, storing matrix blocks as compressed chunks reduces memory footprint and enhances performance by enabling on-the-fly decompression.
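The scheme itself is easy to express in plain NumPy; the toy sketch below only illustrates the block-wise accumulation, not Blosc2's actual C implementation:

import numpy as np

def blocked_matmul(A, B, block=256):
    # Toy blocked matrix multiplication: each tile of C accumulates the
    # products of the corresponding tiles of A and B, so only a few tiles
    # need to be resident (and decompressed) at any given time.
    M, K = A.shape
    K2, N = B.shape
    assert K == K2, "inner dimensions must match"
    C = np.zeros((M, N), dtype=np.result_type(A, B))
    for i in range(0, M, block):
        for j in range(0, N, block):
            for k in range(0, K, block):
                C[i:i+block, j:j+block] += A[i:i+block, k:k+block] @ B[k:k+block, j:j+block]
    return C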

Also, Blosc2 supports a wide range of data types. In addition to standard Python types such as int, float, and complex, it also fully supports various NumPy types. The currently supported types include:

  • np.int8

  • np.int16

  • np.int32

  • np.int64

  • np.float32

  • np.float64

  • np.complex64

  • np.complex128

This versatility allows compression and subsequent processing to be applied across diverse scenarios, tailored to the specific needs of each application.

Together, these features make Blosc2 a flexible and adaptable tool for various scenarios, but especially suited for the handling of large datasets.

Benchmarks

The benchmarks have been designed to evaluate the performance of the matmul function under various conditions. Here are the key aspects of our experimental setup and findings:

Different matrix sizes were tested using both float32 and float64 data types. All the matrices used for multiplication are square. The variation in matrix sizes helps observe how the function scales and how the overhead of chunk management impacts performance.

The x-axis represents the size of the resulting matrix in megabytes (MB). We used GFLOPS (Giga Floating-Point Operations per Second) to gauge the computational throughput, allowing us to compare the efficiency of the matmul function relative to highly optimized libraries like NumPy.

Blosc2 also incorporates a functionality to automatically select chunks, and it is represented in the benchmark by "Auto".

Benchmark float32Benchmark float64

For smaller matrices, the overhead of managing chunks in Blosc2 can result in lower GFLOPS compared to NumPy. As the matrix size increases, Blosc2 scales well, with its performance approaching that of NumPy.

Each chunk shape exhibits peak performance when the matrix size matches, or is a multiple of, the chunk size.

Conclusion

The new matrix multiplication feature in Blosc2 introduces efficient, chunked computation for compressed arrays. This allows users to handle large datasets both in memory and on disk without sacrificing performance. The implementation supports a wide range of data types, making it versatile for various numerical applications.

Real-world applications, such as neural network training, demonstrate the potential benefits in scenarios where memory constraints and large data sizes are common. While there are some limitations —such as support only for 2D arrays and the overhead of blocking— the applicability looks promising, like potential integration with deep learning frameworks.

Overall, Blosc2 offers a compelling alternative for applications where the advantages of compression and out-of-core computation are critical, paving the way for more efficient processing of massive datasets.

Getting my feet wet with Blosc2

In the initial phase of the project, my biggest challenge was understanding how Blosc2 manages data internally. For matrix multiplication, it was critical to grasp how to choose the right chunks, since the operation requires that the ranges of both matrices coincide. After some considerations and a few insightful conversations with Francesc, I finally understood the underlying mechanics. This breakthrough allowed me to begin implementing the first versions of my solution, adjusting the data fragmentation so that each block was properly aligned for precise computation.

Another important aspect was adapting to the professional workflow of using Git for version control. Embracing Git —with its branch creation, regular commits, and conflict resolution— represented a significant shift in my development approach. This experience not only improved the organization of my code and facilitated collaboration but also instilled a structured and disciplined mindset in managing my projects. This tool has shown to be both valuable and extremely helpful.

Finally, the moment when the function finally returned the correct result was really exciting. After multiple iterations, the rigorous debugging process paid off as everything fell into place. This breakthrough validated the robustness of the implementation and boosted my confidence to further optimize and tackle new challenges in data processing.

Mastering Persistent, Dynamic Reductions and Lazy Expressions in Blosc2

Working with large volumes of data is challenging, but Blosc2 offers unique tools to facilitate processing.

Blosc2 is a powerful data compression library designed to handle and process large datasets effectively. One standout feature is its support for lazy expressions and persistent and dynamic reductions. These tools make it possible to define complex calculations that execute only when necessary, reducing memory usage and optimizing processing time, which can be a game-changer when dealing with massive arrays.

In this guide, we’ll break down how to use these features to streamline data manipulation and get better performance out of your workflows. We’ll also see how resizing operand arrays is automatically reflected in the results, highlighting the flexibility of lazy expressions.

Getting Started with Arrays and Broadcasting

Blosc2 works smoothly with arrays of various shapes and dimensions, enabling users to perform calculations such as addition or multiplication across arrays of different sizes. This is where broadcasting comes in. With broadcasting, Blosc2 automatically aligns the shapes of arrays for easy operations. This means you don’t need to manually adjust array dimensions to match, a huge time-saver when working with multidimensional data.

For example, let’s suppose we have an array, a, representing a large dataset, and another, c, with a smaller shape.

import numpy as np
import blosc2

a = blosc2.full((1, 3, 2), fill_value=3)
c = blosc2.full(2, fill_value=9, dtype=np.int8)
expr = a + c - 1

As seen above, broadcasting works automatically (and efficiently) with arrays of compressed data. Also, the correct data type of the result will be inferred from the operands and the expression. Thanks to this mechanism, the interpreter automatically adjusts the dimensions and data types of the arrays involved in the operation, allowing calculations to be performed without the need for manual adjustments.

/images/blosc2-broadcast.png

This approach is ideal for quick and simple data analysis, especially when working with large volumes of information that require frequent operations across different dimensions.

Setting Up and Saving Lazy Expressions

Imagine you need to perform a calculation like sum(a, axis=0) + b * sin(c). Rather than immediately calculating this, Blosc2’s lazy expression feature lets you store the expression for later. By using blosc2.lazyexpr, you define complex mathematical formulas and only trigger their execution when required, and only for the part of the resulting array that you are interested in. This is highly advantageous for large computations that might not be needed right away or that may depend on evolving data.

Let's see how that works with a little more complex expression:

# Create arrays with specific dimensions and values
a = blosc2.full((20, 20, 40), 1, urlpath="a.b2nd", mode="w")
b = blosc2.full((20, 40), 2, urlpath="b.b2nd", mode="w")
c = blosc2.full(40, 3, dtype=np.uint8, urlpath="c.b2nd", mode="w")
# Define a lazy expression and the operands for later execution
# Note that we are using a string version of the expression here
# so that it can be re-opened as-is later on
expression = "sum(a, axis=0) + b * sin(c)"
lazy_expression = blosc2.lazyexpr(expression)
lazy_expression.save("arrayResult.b2nd", mode="w")

In this code, sum(a, axis=0) + b * sin(c) is defined but not executed immediately. When you’re ready to use the result, you can call lazy_expression.compute() (returns a Blosc2 array that is compressed by default) to run the calculation. Alternatively, you can specify the part of the result that you are interested in with lazy_expression[0, :] (returns a NumPy array). This way, you save CPU and memory and only perform the computation when necessary.
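As a short sketch (reusing the arrays defined above), the two ways of triggering the computation look like this:

# Run the whole computation: returns a (compressed) Blosc2 NDArray
result = lazy_expression.compute()
print(result.shape)  # (20, 40)

# Or compute just the part you need: returns a NumPy array
row0 = lazy_expression[0, :]
print(row0.shape)    # (40,)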

Dynamic Computation: Reusing and Updating Results

Another big advantage of Blosc2 is its ability to compute persistent expressions that are dynamic: when an operand is enlarged, Blosc2 re-adapts the expression to account for its new shape. This approach significantly speeds up processing time, especially when working with frequently updated or real-time data.

For instance, if you have an expression stored, and only part of your dataset changes, Blosc2 can apply reductions dynamically to efficiently update the sum:

# Resizing arrays and updating values
a.resize((30, 30, 40))
a[20:30] = 5
b.resize((30, 40))
b[20:30] = 7
# Open the saved file
lazy_expression = blosc2.open(urlpath="arrayResult.b2nd")
result = lazy_expression.compute()

In this case, the final result will have a shape of (30, 40) (instead of the previous (20, 40)). This allows for quick adaptability, which is crucial in data environments where values evolve constantly.

Why Persistent Reductions and Lazy Expressions Matter

These features make Blosc2 a top choice for working with large datasets, as they allow for:

  • Broadcasting of memory, on-disk or network operands.

  • Efficient use of CPU and memory by only executing calculations when needed.

  • Dynamic expressions that adapt to changing data in operands.

  • Enhanced performance due to streamlined, multi-threaded and pre-fetched calculations.

Together, lazy expressions and persistent reductions provide a flexible, resource-efficient way to manage complex data processes. They’re perfect for real-time analysis, evolving datasets, or any high-performance computing tasks requiring dynamic data handling.

Conclusion

Blosc2’s features offer a way to make data processing smarter and faster. If you work with large arrays or require adaptable workflows, Blosc2 can help you make the most of your data processing resources.

For more in-depth guidance, visit the full tutorial on Blosc2.