I'm going to go through many of the major lossless codecs (LZMA, Zstd, LZ4, Deflate, bzip2, PAQ, etc.) and list the features and properties that made them unique or interesting, especially when first released. Let's start with LZHAM (yes I'm shamelessly beating my own drum here, but hey it's my tech blog). I think it's very important and interesting to understand the past.
LZHAM alpha was first released on Aug. 15, 2010 (according to Google Code), but the fast entropy decoder experiments and classes were written in early 2009 (before I joined Valve). At the time, the practical lossless data compression community didn't seem to have much focus or direction. They were kinda all over the map, and Charles Bloom's excellent reverse engineering of LZMA didn't occur until after LZHAM's public release.
This codec was designed for next-generation video games, basically titles I thought would eventually be made with Source 2. Valve was awesome at allowing developers to work on open source and even commercial projects at home. The team didn't think data compression was an important thing to work on, so I decided to work on it in my spare time.
For some background, I was not able to use LZMA on Halo Wars because it was incredibly slow on X360, and Microsoft Game Studios stopped using my internal, highly X360-optimized Deflate codec ("eslib") and switched to LZX. I used 7-zip on the Halo Wars build server and was very impressed with its ratio, especially in its Deflate mode. I always wondered how it was able to achieve such high ratios when compressing to the old Deflate format, and I wanted to understand why.
Some of the major features it demonstrated:
- Micro-threaded compressor
Dictionary updating, match finding, and parsing all in parallel.
A lock-free approach is used to communicate between parser threads and match finder threads.
The usual approach to threading a compressor blocks up the input and sacrifices ratio, which is not necessary with the correct design.
Inspired by my experience writing the multithreaded Halo Wars engine; the lock-free approach was inspired by experiments I saw being done on Source 2's graphics engine.
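Here's a minimal sketch of that kind of lock-free handoff, just to make the idea concrete. It's my own illustration (the types and names are made up, not LZHAM's actual code): a worker thread fills in a block's match table and publishes it with a release store, and the parser picks it up with an acquire load instead of taking a lock.

```cpp
// Illustrative sketch only -- not LZHAM's real code. A match finder thread
// fills a block's match table while the parser works on an earlier block;
// publication uses a release/acquire atomic flag instead of a mutex.
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

struct MatchResult { uint32_t dist; uint32_t len; };

struct BlockMatches {
    std::vector<MatchResult> matches;   // one candidate match per position (simplified)
    std::atomic<bool> ready{false};     // written by the finder, read by the parser
};

// Worker thread: fill in matches for one block, then publish the block.
void find_matches(const uint8_t* dict, size_t block_ofs, size_t block_len, BlockMatches& out) {
    out.matches.resize(block_len);
    for (size_t i = 0; i < block_len; ++i) {
        // A real match finder would probe a hash chain / binary tree here.
        out.matches[i] = MatchResult{0, 0};
    }
    (void)dict; (void)block_ofs;
    out.ready.store(true, std::memory_order_release);   // publish to the parser
}

// Parser thread: wait (without a mutex) until the finder has published this block.
const std::vector<MatchResult>& acquire_matches(const BlockMatches& bm) {
    while (!bm.ready.load(std::memory_order_acquire))
        std::this_thread::yield();
    return bm.matches;
}
```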
- Interleaved coding
Huffman and binary arithmetic coding interleaved into the same bitstream. The compressor batches all symbols and simulates the entropy decoding steps the decompressor will use in order to figure out how to interleave the output bitstream.
I came up with this design because I wanted a simple symbol_codec class that supported totally free-form usage of arithmetic, Huffman, and raw bits. This class was inspired by Amir Said's excellent papers and sample code. I tested it on a laptop and just kept optimizing it for higher decoding performance over a few weeks' time.
LZHAM also showed that Huffman coding still had legs in high ratio codecs. Very low or high probability symbols (what I called high "skew" symbols), where Huffman's prefix coding limitations are most obvious, can use fast and simple binary arithmetic coding, while everything else can be done with static Huffman coding, with bulk table updating for adaptation. Also around this time, Andrew Polar showed it was possible to quickly update prefix codes.
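To make the batching idea concrete, here's a rough sketch of the "record now, encode later" structure (illustrative names, not LZHAM's real symbol_codec): during parsing the compressor only appends lightweight records describing what the decoder will read, in the exact order it will read them, and a second pass runs the entropy coders over those records to build the single interleaved bitstream.

```cpp
// Illustrative sketch only. The compressor queues coding "events" in decoder
// order; a later pass encodes them, which is where the real codec simulates
// the decoder to figure out how arithmetic bytes, Huffman codes and raw bits
// interleave in the output.
#include <cstdint>
#include <vector>

enum class RecType : uint8_t { RawBits, ArithBit, HuffSym };

struct Rec {
    RecType  type;
    uint32_t value;   // bit pattern, arithmetic bit, or Huffman symbol
    uint32_t aux;     // bit count, arithmetic context id, or Huffman table id
};

class symbol_batcher {   // hypothetical name
public:
    void put_bits(uint32_t bits, uint32_t n)        { recs_.push_back({RecType::RawBits,  bits, n}); }
    void put_arith_bit(uint32_t bit, uint32_t ctx)  { recs_.push_back({RecType::ArithBit, bit,  ctx}); }
    void put_huff_sym(uint32_t sym, uint32_t table) { recs_.push_back({RecType::HuffSym,  sym,  table}); }

    // Second pass (omitted): walk recs_ in order, run the arithmetic and
    // Huffman coders, and splice their output into one bitstream exactly
    // where the decoder will expect to read it.
    // std::vector<uint8_t> flush();

private:
    std::vector<Rec> recs_;
};
```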
- Best of X arrivals parsing (called "extreme" parsing in the code)
This was obvious after figuring out how to construct a parse graph.
Inspired by the path finding algorithms used in games.
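A stripped-down sketch of what "best of X arrivals" means in practice (placeholder costs and illustrative names, not LZHAM's actual parser): instead of keeping only the single cheapest way to reach each position, keep up to K cheapest arrivals per position, extend each of them with literal and match edges, and backtrack from the end of the block.

```cpp
// Illustrative sketch only. Costs are placeholders; a real parser would price
// edges with the entropy coder's statistics and track LZ state per arrival.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Arrival {
    float    cost;       // estimated bits to reach this position
    uint32_t from_pos;   // position this arrival came from
    uint32_t from_slot;  // which arrival at from_pos was extended
    uint32_t len;        // 1 = literal, >1 = match length
    uint32_t dist;       // match distance (0 for a literal)
};

// Keep at most K cheapest arrivals per position.
static void add_arrival(std::vector<Arrival>& slot, const Arrival& a, size_t K) {
    slot.push_back(a);
    std::sort(slot.begin(), slot.end(),
              [](const Arrival& x, const Arrival& y) { return x.cost < y.cost; });
    if (slot.size() > K) slot.resize(K);
}

// Forward pass over one block; match candidates from the match finder would
// be added where indicated. Backtracking from the cheapest arrival at
// block_len recovers the chosen parse.
std::vector<std::vector<Arrival>> parse_block(size_t block_len, size_t K) {
    std::vector<std::vector<Arrival>> arrivals(block_len + 1);
    arrivals[0].push_back({0.0f, 0, 0, 0, 0});

    for (size_t pos = 0; pos < block_len; ++pos) {
        for (size_t slot = 0; slot < arrivals[pos].size(); ++slot) {
            const float base = arrivals[pos][slot].cost;
            // Literal edge (placeholder cost of 8 bits).
            add_arrival(arrivals[pos + 1],
                        {base + 8.0f, (uint32_t)pos, (uint32_t)slot, 1, 0}, K);
            // Match edges would go here, e.g. for each match m found at pos:
            //   add_arrival(arrivals[pos + m.len],
            //               {base + match_cost(m), (uint32_t)pos, (uint32_t)slot, m.len, m.dist}, K);
        }
    }
    return arrivals;
}
```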
- Other things it did that I think are important:
zlib-compatible API - It's the standard "universal" lossless compression API, so it makes no sense not to support it. To my knowledge LZHAM and miniz were the first to try to copy zlib's API (the call pattern is sketched after this list).
Streaming support - I question how useful this is to many developers, but you need it; otherwise you're limited to available RAM or have to use blocking, which hurts ratio.
Seed dictionaries - Occasionally valuable.
Every update was thoroughly tested before pushing the code. Random failures or crashes are the kiss of death for a new codec trying to be accepted.
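For reference, the call shape being copied looks like the snippet below. It uses zlib itself (compressBound/compress2/uncompress); the point is that a zlib-compatible codec can sit behind these same calls with little or no change to the caller.

```cpp
// The classic zlib one-shot call pattern that a zlib-compatible codec mirrors.
#include <cstdio>
#include <cstring>
#include <vector>
#include <zlib.h>

int main() {
    const char* text = "zlib-style one-shot compression example";
    uLong src_len = (uLong)std::strlen(text) + 1;

    // Worst-case compressed size, then one-shot compression at a chosen level.
    uLong bound = compressBound(src_len);
    std::vector<Bytef> comp(bound);
    uLongf comp_len = bound;
    if (compress2(comp.data(), &comp_len, (const Bytef*)text, src_len, Z_BEST_COMPRESSION) != Z_OK)
        return 1;

    // One-shot decompression back into a buffer of the original size.
    std::vector<Bytef> decomp(src_len);
    uLongf decomp_len = src_len;
    if (uncompress(decomp.data(), &decomp_len, comp.data(), comp_len) != Z_OK)
        return 1;

    std::printf("%lu -> %lu -> %lu bytes\n",
                (unsigned long)src_len, (unsigned long)comp_len, (unsigned long)decomp_len);
    return 0;
}
```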
For LZHAM I decided that the best way to get noticed as adding value in a very competitive space was to match LZMA's ratio as closely as possible and just move "right" (faster) on the decompression speed/ratio Pareto frontier. I purposely de-emphasized the compression speed/ratio frontier, favoring offline compression.
One critical mistake I made in the alphas was optimizing too much for the Large Text Compression Benchmark, which is 100MB of Wikipedia text. This led me down a blind alley with higher order coding experiments, which used way too many Huffman tables.