> Since the old version of the game is known on both sides, we compress the new version using the old version as its dictionary.
That's quite clever!
> Since we compress once and decompress many times on player machines, we can afford slow compression times. Zstd lets you tune the compression level, and we found that level 19 yielded about 13% better compression than zip.
Zstd compression is also parallelizable across threads, which wasn't mentioned here. That helps speed it up at high compression levels, though not as much as I'd like.
We do use all available threads for it; we just didn't call that out in the article.
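For anyone wanting to try the dictionary trick, the same idea can be sketched with Python's stdlib zlib, which accepts a preset dictionary via `zdict` (zstd's `--patch-from` flag does the same thing at much larger window sizes; the build data below is made up for illustration):

```python
import hashlib
import zlib

# Hypothetical stand-ins: a deterministic, incompressible "old build"
# and a "new build" that differs by one small insertion.
old_build = b"".join(hashlib.sha256(bytes([i])).digest() for i in range(200))
new_build = old_build[:3000] + b"PATCHED" + old_build[3000:]

# Sender: compress the new build using the old build as the dictionary.
comp = zlib.compressobj(level=9, zdict=old_build)
patch = comp.compress(new_build) + comp.flush()

# Receiver: the player machine already has old_build, so it can decompress.
decomp = zlib.decompressobj(zdict=old_build)
assert decomp.decompress(patch) == new_build

# The patch is far smaller than compressing new_build from scratch,
# because most of it is expressed as back-references into the dictionary.
print(len(patch), "vs", len(zlib.compress(new_build, 9)))
```

zlib caps the dictionary at its 32 KB window, which is why zstd (with windows up to gigabytes) is the practical choice for whole-build patching.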
Nice! There's also zstd's flush ability that I've used for streaming robotics data. You can write data and flush it over the network for realtime updates, but the compression stream stays open so it can still reference past messages. This means messages get smaller over time so you don't need to share a dictionary ahead of time. I'm not aware of other compression algorithms that have flushing capability like this.
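The flush-but-keep-the-stream-open behavior described above also exists in stdlib zlib as `Z_SYNC_FLUSH` (zstd's streaming API has the analogous `ZSTD_e_flush`); a toy sketch with a made-up update message, assuming each tick sends a near-identical payload:

```python
import zlib

comp = zlib.compressobj(level=9)
decomp = zlib.decompressobj()

sizes = []
for value in range(100, 105):
    # Hypothetical tiny update message; only the value changes each tick.
    msg = b'{"object": "X", "field": 5, "value": %d}' % value
    # Flush so the receiver can decode this message right now, but keep
    # the stream open so later messages can back-reference earlier ones.
    chunk = comp.compress(msg) + comp.flush(zlib.Z_SYNC_FLUSH)
    assert decomp.decompress(chunk) == msg  # decodable immediately
    sizes.append(len(chunk))

# Later messages compress against earlier ones, so sizes shrink over time
# without any pre-shared dictionary.
print(sizes)
```

The first chunk carries the full message as literals; subsequent chunks are mostly one back-reference plus the changed digits.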
> binary data to connected clients in tiny messages, each saying “field 5 on object X is now 123”
I wonder how Meta's newer, format-understanding OpenZL would do. I imagine its schemas could be auto-generated from protobuf.
Ah, I have not looked into that; zstd keeps on giving.
Using zstd for binary diffs is something I would never have expected. I wonder how it compares to e.g. the library that Chrome uses: https://blog.chromium.org/2009/07/smaller-is-faster-and-safe... (this is from 2009, so maybe they've improved it since).
Our updates are not only code; since it's a game, it's a mixture of game assets (textures, sounds, large JSON files...) and code. Zstd is pretty good all around. For pure code updates, I'd probably evaluate code-specific compressors rather than zstd; I know there is an ecosystem of those out there.
They moved on to Courgette, then to Zucchini: https://chromium.googlesource.com/chromium/src/+/HEAD/compon...
These are optimized for compiled code though.