Moving Compression Targets


I notice that WebPageTest (WPT) reports some images as requiring further compression.

Using Imagick I re-compress at quality “70”, and for most images I achieve the WPT compression target.
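For reference, the re-compression step is roughly the following, expressed as an ImageMagick CLI sketch (the generated sample content and filenames are placeholders for the real upload pipeline):

```shell
# Stand-in for an uploaded original (requires ImageMagick; content is arbitrary).
convert -size 480x306 plasma:fractal -quality 92 original.jpg

# Re-encode at quality 70, as the upload pipeline does via Imagick.
convert original.jpg -quality 70 _small.jpg

# Compare file sizes before and after.
ls -l original.jpg _small.jpg
```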

However, a few images still fail or get warnings that the target has not been achieved, for example:

FAILED - (62.2 KB, compressed = 28.2 KB - savings of 34.0 KB)

WARNING - (15.7 KB, compressed = 11.4 KB - savings of 4.3 KB)

If I then re-compress the same images at “20” (a very low quality setting, i.e. heavy compression), the images still don’t hit the target, because the target size has moved!
FAILED - (47.3 KB, compressed = 13.7 KB - savings of 33.6 KB)
WARNING - (10.1 KB, compressed = 6.2 KB - savings of 3.8 KB)

I see that, whilst the image dimensions have not changed, the expected compression target has moved:

_small.jpg 28.2 KB → 13.7 KB
_cover_thumb.jpg 11.4 KB → 6.2 KB

After processing over 2000 images cut to different sizes, the above two formats, 480x306 and 320x180, consistently result in “FAILED” and “WARNING” respectively.

Whilst I understand that there are other factors to take into account during compression, I would still expect the compression target to remain roughly the same for a given set of image dimensions; and if not, then a quality setting of “20” should at least be enough to satisfy the target value.

Are there any other approaches I should take to achieve the compression targets for these image dimensions?


Is there a bug in the given target values?



P.S. Each time an image is cut/compressed, it is done automatically from the same original image during upload to the site.

The issue is the metadata attached to the images. If you add the -strip option to the ImageMagick command it should remove the metadata.
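On the command line that looks like the sketch below (the embedded comment just stands in for a real EXIF/XMP payload, and filenames are hypothetical):

```shell
# Sample JPEG with an embedded comment standing in for metadata.
convert -size 320x180 plasma:fractal -set comment "pretend Lightroom XMP" -quality 80 photo.jpg

# Re-encode twice: once as-is, once with -strip to drop profiles and comments.
convert photo.jpg -quality 70 photo_unstripped.jpg
convert photo.jpg -strip -quality 70 photo_stripped.jpg

# The stripped version carries no metadata overhead.
ls -l photo_unstripped.jpg photo_stripped.jpg
```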

If you want to see the actual data: click on the Waterfall thumbnail in the WebPageTest results, scroll to the bottom of the big waterfall, and there is a “view all images” link. You can then click “Analyze Jpeg” on any of the images to see what the metadata is.


It looks like there’s a bunch of Photoshop Lightroom data attached to that one.

btw, if you want to keep the credit and copyright info you will have to be a bit more selective and use a few more tools. You can strip the data when you compress the images and then use exiftool to copy the fields you want to keep.
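A hypothetical sketch of that two-step flow (tag names, values, and filenames are just examples; requires ImageMagick and exiftool):

```shell
# Sample image with credit info attached via exiftool.
convert -size 64x64 xc:gray -quality 90 photo.jpg
exiftool -overwrite_original -Artist="Jane Doe" -Copyright="Example" photo.jpg

# Step 1: strip everything while compressing.
convert photo.jpg -strip -quality 70 photo_out.jpg

# Step 2: copy back only the fields worth keeping from the original.
exiftool -overwrite_original -tagsFromFile photo.jpg -Artist -Copyright photo_out.jpg
```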

Hey Patrick

Thank you for pointing out the “Analyze Jpeg” feature, I wasn’t aware of that capability.

OK, so my metadata is creating a significant amount of overhead.

One thing I’m still not clear on is:

Why does the target “compression” value change in the examples above?

Or to put it another way, how is that value calculated?


The targets change because the input image into WPT changes as you compress it more. WPT strips the metadata and re-compresses the image at quality level 75. It is starting with whatever image was served, though, so if you give it a quality-30 image as input it will compress better than a quality-99 image as input, changing the size of the quality-75 target.
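That moving target can be demonstrated with a small ImageMagick sketch (sample content only; the quality-75 figure is taken from the description above):

```shell
# One source image (sample content).
convert -size 480x306 plasma:fractal base.png

# Two possible "served" versions at very different quality settings.
convert base.png -quality 99 served_q99.jpg
convert base.png -quality 30 served_q30.jpg

# The target: each served file, metadata stripped, re-encoded at quality 75.
convert served_q99.jpg -strip -quality 75 target_from_q99.jpg
convert served_q30.jpg -strip -quality 75 target_from_q30.jpg

# The lower-quality input produces a smaller quality-75 target,
# which is why the target "moves" when you re-compress harder.
ls -l target_from_q99.jpg target_from_q30.jpg
```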

Hello Patrick

Thanks for the further explanation.

However, I’m still curious as to how a target is achieved.

IF the target IS the source AFTER quality “75” compression, how is the target ever achieved?

I realise that compression is not as simple as a percentage of a single value, but for the purposes of a simple example (treating the target quality as 75% of the source quality):

Source = 99 → Target = 74.25
Source = 70 → Target = 52.5
Source = 30 → Target = 22.5

Unless there is a theoretical target (not solely based on the source compression), I don’t see how the target can ever be achieved.

In other words

How do images achieve the WPT target value?

Sorry, it actually compresses to quality 85, to be conservative. The simple example is overly simple, because a quality-95 re-compression of a quality-30 image will actually result in a much larger image.
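So the check can be approximated like this (a rough sketch only; WPT's actual thresholds and encoder settings may differ, and the comment is just a stand-in for real metadata):

```shell
# A "served" image: fairly high quality, with a comment standing in for metadata.
convert -size 320x180 plasma:fractal -quality 95 \
        -set comment "pretend this is Lightroom XMP" served.jpg

# The target: metadata stripped, re-encoded at quality 85.
convert served.jpg -strip -quality 85 wpt_target.jpg

# Report the potential savings, as WPT does in its FAILED/WARNING lines.
served=$(wc -c < served.jpg)
target=$(wc -c < wpt_target.jpg)
echo "served=${served} bytes, target=${target} bytes, potential savings=$((served - target))"
```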

Strip your metadata and you’ll easily hit the targets.