For reference, several fast compression algorithms were tested and compared on a desktop running Ubuntu 20.04 (`Linux 5.11.0-41-generic`), with a Core i7-9700K CPU @ 4.9GHz, using lzbench, an open-source in-memory benchmark by @inikep compiled with gcc 9.3.0, on the Silesia compression corpus.
| Compressor name | Ratio | Compression | Decompression |
|---|---|---|---|
| zstd 1.5.1 -1 | 2.887 | 530 MB/s | 1700 MB/s |
| zlib 1.2.11 -1 | 2.743 | 95 MB/s | 400 MB/s |
| brotli 1.0.9 -0 | 2.702 | 395 MB/s | 450 MB/s |
| zstd 1.5.1 --fast=1 | 2.437 | 600 MB/s | 2150 MB/s |
| zstd 1.5.1 --fast=3 | 2.239 | 670 MB/s | 2250 MB/s |
| quicklz 1.5.0 -1 | 2.238 | 540 MB/s | 760 MB/s |
| zstd 1.5.1 --fast=4 | 2.148 | 710 MB/s | 2300 MB/s |
| lzo1x 2.10 -1 | 2.106 | 660 MB/s | 845 MB/s |
| lz4 1.9.3 | 2.101 | 740 MB/s | 4500 MB/s |
| lzf 3.6 -1 | 2.077 | 410 MB/s | 830 MB/s |
| snappy 1.1.9 | 2.073 | 550 MB/s | 1750 MB/s |
The negative compression levels, specified with `--fast=#`, offer faster compression and decompression speed at the cost of compression ratio (compared to level 1).
Zstd can also offer stronger compression ratios at the cost of compression speed. The speed vs. compression trade-off is configurable in small increments. Decompression speed is preserved and remains roughly the same at all settings, a property shared by most LZ compression algorithms, such as zlib or lzma.
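For instance, the trade-off can be explored directly from the command line. A minimal sketch (the file name `data.bin` is just a placeholder):

```
# Default compression level is 3; higher levels favor ratio over speed
zstd -19 data.bin -o data.high.zst

# Negative levels, exposed via --fast=#, favor speed over ratio
zstd --fast=3 data.bin -o data.fast.zst
```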
The following tests were run on a server running Linux Debian (`Linux version 4.14.0-3-amd64`) with a Core i7-6700K CPU @ 4.0GHz, using lzbench, an open-source in-memory benchmark by @inikep compiled with gcc 7.3.0, on the Silesia compression corpus.
[Charts: Compression Speed vs Ratio; Decompression Speed]
A few other algorithms can produce higher compression ratios at slower speeds, falling outside of the graph. For a larger picture including slow modes, click on this link.
Previous charts provide results applicable to typical file and stream scenarios (several MB). Small data comes with different perspectives.
The smaller the amount of data to compress, the more difficult it is to compress. This problem is common to all compression algorithms, and the reason is that compression algorithms learn from past data how to compress future data. But at the beginning of a new data set, there is no “past” to build upon.
To solve this situation, Zstd offers a training mode, which can be used to tune the algorithm for a selected type of data. Training Zstandard is achieved by providing it with a few samples (one file per sample). The result of this training is stored in a file called “dictionary”, which must be loaded before compression and decompression. Using this dictionary, the compression ratio achievable on small data improves dramatically.
The following example uses the `github-users` sample set, created from the GitHub public API. It consists of roughly 10K records weighing about 1KB each.
[Charts: Compression Ratio; Compression Speed; Decompression Speed]
These compression gains are achieved while simultaneously providing faster compression and decompression speeds.
Training works if there is some correlation in a family of small data samples. The more data-specific a dictionary is, the more efficient it is (there is no universal dictionary). Hence, deploying one dictionary per type of data will provide the greatest benefits. Dictionary gains are mostly effective in the first few KB. Then, the compression algorithm will gradually use previously decoded content to better compress the rest of the file.
Create the dictionary:

```
zstd --train FullPathToTrainingSet/* -o dictionaryName
```

Compress with the dictionary:

```
zstd -D dictionaryName FILE
```

Decompress with the dictionary:

```
zstd -D dictionaryName --decompress FILE.zst
```
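As an end-to-end sketch, assuming a directory `samples/` of small, similar records (all names here are made up for illustration):

```
# Train a dictionary from the sample set
zstd --train samples/* -o users.dict

# Compress one record with and without the dictionary, then compare sizes
zstd -D users.dict samples/user1.json -o user1.dict.zst
zstd samples/user1.json -o user1.plain.zst
ls -l user1.dict.zst user1.plain.zst
```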
If your system is compatible with standard `make`, invoking `make` in the root directory will generate the `zstd` CLI in the root directory.
Other available options include (see the sketch below for a typical sequence):
- `make install`: create and install the zstd CLI, library and man pages
- `make check`: create and run `zstd`, and test its behavior on the local platform
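A typical sequence might look like the following (whether `sudo` is required depends on your install prefix):

```
make              # build the zstd CLI
make check        # build and run local tests
sudo make install # install CLI, library and man pages
```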
A `cmake` project generator is provided within `build/cmake`. It can generate Makefiles or other build scripts to create the `zstd` binary, and the `libzstd` dynamic and static libraries. By default, `CMAKE_BUILD_TYPE` is set to `Release`.
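For example, a typical out-of-source invocation from the repository root might be (requires CMake 3.13+ for `-S`/`-B`; the build directory name is arbitrary):

```
cmake -S build/cmake -B builddir
cmake --build builddir
```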
A Meson project is provided within
build/meson. Follow build instructions in that directory.
You can also take a look at the `.travis.yml` file for an example of how Meson is used to build this project.
Note that the default build type is release.
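For example, from the repository root, a typical invocation might be (the build directory name is arbitrary):

```
meson setup builddir build/meson
ninja -C builddir
```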
You can build and install zstd using the vcpkg dependency manager:

```
git clone https://github.com/Microsoft/vcpkg.git
cd vcpkg
./bootstrap-vcpkg.sh
./vcpkg integrate install
./vcpkg install zstd
```
The zstd port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please create an issue or pull request on the vcpkg repository.
In the `build` directory, you will find additional possibilities:
- `build/VS_scripts`, which will build the `libzstd` library without any need to open a Visual Studio solution.
You can build the zstd binary via Buck by executing `buck build programs:zstd` from the root of the repo. The output binary will be in Buck's `buck-out` output directory.
You can run quick local smoke tests by executing the `playTest.sh` script from the `src/tests` directory. Two env variables, `$ZSTD_BIN` and `$DATAGEN_BIN`, are needed for the test script to locate the `zstd` and `datagen` binaries. For information on CI testing, please refer to TESTING.md.
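For example (the binary paths below are assumptions about where your build placed them):

```
cd src/tests
ZSTD_BIN=/path/to/zstd DATAGEN_BIN=/path/to/datagen ./playTest.sh
```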
Zstandard is currently deployed within Facebook. It is used continuously to compress large amounts of data in multiple formats and use cases. Zstandard is considered safe for production environments.
Zstandard is dual-licensed under BSD and GPLv2.
The `dev` branch is the one where all contributions are merged before reaching `release`. If you plan to propose a patch, please commit into the `dev` branch, or its own feature branch. Direct commits to `release` are not permitted. For more information, please read CONTRIBUTING.