
Issue 739188


Issue metadata

Status: Available
Owner: ----
Cc:
Components:
EstimatedDays: ----
NextAction: ----
OS: All
Pri: 3
Type: Feature




Use lossless compression when serializing ImageBitmap and ImageData

Reported by zakerinasab@chromium.org (Project Member), Jul 4 2017

Issue description

With increasing display resolutions and the extra bytes needed for higher-color-depth pixels, ImageBitmap and ImageData can get very large when serialized to be written to IndexedDB on disk. Investigate whether we can use a lossless compression format when serializing these objects.
 
junov@'s suggestion: instead of using a lossless image encoder (e.g., a PNG encoder), we can use a general-purpose compressor like zip. The performance should be comparable for images, as the PNG encoder uses zlib under the hood, and we can also compress ImageData seamlessly.
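
A minimal sketch of that approach, assuming the serializer can hand us the raw pixel bytes as one contiguous buffer (DeflatePixels and the buffer layout are illustrative, not existing Blink code):

#include <zlib.h>

#include <cstdint>
#include <vector>

// One-shot DEFLATE of a raw pixel buffer via zlib; no image-specific
// preprocessing. level is 0-9; zlib's own default is 6.
std::vector<uint8_t> DeflatePixels(const std::vector<uint8_t>& pixels,
                                   int level) {
  uLongf dest_len = compressBound(pixels.size());
  std::vector<uint8_t> out(dest_len);
  if (compress2(out.data(), &dest_len, pixels.data(), pixels.size(),
                level) != Z_OK) {
    return {};  // Caller could fall back to storing uncompressed bytes.
  }
  out.resize(dest_len);
  return out;
}
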
When going to disk, we already apply a general compression algorithm (Snappy, implicitly via LevelDB). Anything we consider should be compared against just using Snappy (although there still might be some savings due to the number of times we presently copy data around).

IIUC, PNG (and other lossless image compression algorithms) can compress better than raw DEFLATE (i.e., zlib), because it applies additional logic (PNG's "filtering") that makes the image data more amenable to DEFLATE compression. Also, do we have data on how well DEFLATE does on float32 data?
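
For intuition, a hedged sketch of PNG's simplest filter, "Sub" (a real encoder chooses a filter per row heuristically); filtering turns slowly varying channels into runs of small deltas, which DEFLATE encodes far more compactly than the raw bytes:

#include <cstddef>
#include <cstdint>
#include <vector>

// In-place PNG "Sub" filter (type 1) on one row: each byte becomes its
// difference (mod 256) from the byte one pixel to the left. Walk
// right-to-left so every subtraction reads the still-unfiltered left
// neighbor. The row layout and bytes_per_pixel are assumptions made
// here for illustration.
void SubFilterRow(std::vector<uint8_t>& row, size_t bytes_per_pixel) {
  for (size_t i = row.size(); i > bytes_per_pixel; --i) {
    row[i - 1] =
        static_cast<uint8_t>(row[i - 1] - row[i - 1 - bytes_per_pixel]);
  }
}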

A quick experiment (using a photographic image of Mont Tremblant and a graphic image of the Quebec coat of arms, both from Wikipedia):

Mont Tremblant:
bmp       : 15,116,682 bytes
jpg       :  1,015,771 bytes (source image)
png       :  5,871,629 bytes
bmp+gz    :  8,572,365 bytes
bmp+snappy: 12,223,309 bytes

Coat of Arms:
bmp       :  6,284,658 bytes
png       :    420,102 bytes (source image)
bmp+gz    :    352,538 bytes
bmp+snappy:    683,363 bytes

A more principled experiment would also look at the compression settings here.
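
For what it's worth, a sketch of how such a comparison could be automated, assuming `raw` already holds the decoded (bmp-equivalent) pixel bytes; the function name and report format are made up for illustration:

#include <snappy.h>
#include <zlib.h>

#include <cstdio>
#include <string>
#include <vector>

// Compress one raw buffer with zlib at every level and with Snappy,
// and print the resulting sizes side by side.
void ReportCompressedSizes(const std::vector<unsigned char>& raw) {
  for (int level = 1; level <= 9; ++level) {
    uLongf len = compressBound(raw.size());
    std::vector<unsigned char> out(len);
    if (compress2(out.data(), &len, raw.data(), raw.size(), level) == Z_OK)
      std::printf("zlib level %d: %lu bytes\n", level,
                  static_cast<unsigned long>(len));
  }
  std::string compressed;
  snappy::Compress(reinterpret_cast<const char*>(raw.data()), raw.size(),
                   &compressed);
  std::printf("snappy      : %zu bytes\n", compressed.size());
}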

Today we're effectively paying for bmp+snappy on disk and bmp in memory. I don't know how much further it's worth going. Snappy saves us from being embarrassingly bad on images that are mostly solid color, but I don't know whether a general-purpose or an image-specific algorithm should be added to save disk space, especially with higher-color-depth images.
If we do this, I intend to use SkPngEncoder. SkPngEncoder uses all the PNG filters by default (chosen heuristically during compression) and zlib as the DEFLATE implementation. It also allows passing a compression level to zlib, which defaults to 6.
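
A sketch of what that would look like with SkPngEncoder's options; the option values shown are just the defaults described above, and the surrounding function is illustrative:

#include "include/core/SkPixmap.h"
#include "include/core/SkStream.h"
#include "include/encode/SkPngEncoder.h"

// Encode a pixmap (assumed to already wrap the ImageBitmap/ImageData
// pixels) to PNG.
bool EncodePng(const SkPixmap& pixmap, SkWStream* out) {
  SkPngEncoder::Options options;
  // Defaults made explicit: try all filters (picked heuristically per
  // row) and use zlib compression level 6.
  options.fFilterFlags = SkPngEncoder::FilterFlag::kAll;
  options.fZLibLevel = 6;
  return SkPngEncoder::Encode(out, pixmap, options);
}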

I can do some performance measurements over the filters and compression level before adding the code to the serialization engine, but I don't know what the proper test set for such a perf eval would be. Alternatively, we can use the default parameters for now.
Cc: -junov@chromium.org
Owner: ----
Status: Available (was: Assigned)
