Currently, JpegImageDecoder advertises supported decode sizes smaller than the full size only when the dimensions of the full image fit a whole number of MCUs both horizontally and vertically. This avoids artifacts like the one reported in https://crbug.com/890745.
Some preliminary exploration indicates that this is not a bug in libjpeg/libjpeg-turbo, but rather an intended feature. See https://github.com/libjpeg-turbo/libjpeg-turbo/issues/297 and https://github.com/python-pillow/Pillow/issues/3419.
As dcastagna@ pointed out, using the whole-number-of-MCUs rule to disable scaling might be too strict, depending on how this scaling is implemented in libjpeg/libjpeg-turbo. For example, suppose the sample grid of a 4:4:4 8x14 image looks like this (stored as two vertically stacked 8x8 MCUs):
X X X X X X X X
X X X X X X X X
X X X X X X X X
X X X X X X X X
X X X X X X X X
X X X X X X X X
X X X X X X X X
X X X X X X X X
X X X X X X X X
X X X X X X X X
X X X X X X X X
X X X X X X X X
X X X X X X X X
X X X X X X X X
g g g g g g g g
g g g g g g g g
Where 'g' is potential garbage added by the encoder because the image's height only covers the first 6 rows of the last MCU. Now, suppose we want to scale by 1/2 (so that the image becomes 4x7). This is a guess (I don't have a good understanding of iDCT scaling): if libjpeg effectively "collapses" each 2x2 square of samples into one sample to achieve this scaling, then the result might look like this:
X X X X
X X X X
X X X X
X X X X
X X X X
X X X X
X X X X
g g g g
Since the last row will not be included in the final decoded image (because the image will be 4x7), it seems okay to allow this scale factor of 1/2 even though the original image cannot fit a whole number of MCUs vertically.
The motivation to be "smarter" is that we can get memory savings. When the CL that imposed the whole-number-of-MCUs rule landed, we got multiple memory regressions:
https://crbug.com/897115
https://crbug.com/896277
https://crbug.com/896265
Comment 1 by vmi...@chromium.org, Nov 2
Status: Available (was: Untriaged)