Image Optimisation

What rules does WebPagetest apply when deciding whether JPEGs need optimising?

Do you simply strip out metadata, or do you also run a compression algorithm against the JPEG?

Also, why are GIFs automatically assumed to be fully optimised? Isn't the PNG8 format considered to beat GIFs in most cases, or at least match their size?

JPEGs are re-compressed at roughly the equivalent of Photoshop’s quality "50" (quality 85 in anything that uses the IJG compression code), and metadata is also stripped out.

GIFs are automatically assumed to be optimised because of transparency issues between PNG8 and some browsers. At some point I’d like to go in and make the code smarter about what it checks, but I haven’t had the time (the biggest problem right now is the assumption that PNG24 images should be converted to PNG8).

What tools do you use to achieve the compression?

Since JPEG is a lossy format, can you detect whether a JPEG has already been compressed to 50% quality before applying another 50% quality pass to check for savings?

I thought PNG8 was a simple drop-in replacement for the transparency techniques GIFs offer. Aren't the transparency issues between PNG8 and some browsers to do with alpha transparency rather than binary transparency?

For JPEGs I use the IJG libraries (pretty much the standard compression implementation used in GIMP, etc.) and just re-compress the file and compare the sizes. There’s a little slop in that the savings need to be bigger than a certain amount before the smaller file is even considered. Actually looking at the compression tables would be a lot harder.
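This isn’t WebPagetest’s actual code, but the size-comparison check described above can be sketched in a few lines. The sketch uses Pillow (which wraps the same libjpeg/IJG code path) rather than calling the library directly, and the exact savings threshold (the "slop") is an assumption:

```python
from io import BytesIO

from PIL import Image  # Pillow; wraps the same libjpeg/IJG code path


def jpeg_savings(data: bytes, quality: int = 85, min_saving: float = 0.10):
    """Re-compress a JPEG at the given IJG quality and compare sizes.

    Returns the re-compressed bytes when the saving beats the threshold
    (the "slop"), otherwise None, meaning keep the original.
    """
    im = Image.open(BytesIO(data))
    buf = BytesIO()
    # Re-saving without copying exif/info across also strips the metadata.
    im.save(buf, "JPEG", quality=quality)
    smaller = buf.getvalue()
    if len(smaller) < len(data) * (1 - min_saving):
        return smaller
    return None
```

Note that because the quantisation is lossy, running a file that is already at quality 85 through this check will usually show little or no saving, which is exactly what the threshold is there to catch.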

If PNG8 works as a drop-in replacement I can easily make the tweak to also try PNG compression for GIFs (I thought there were IE6 issues).
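If the transparency concern turns out to be unfounded, the extra check could look something like this hypothetical sketch (Pillow assumed; the function name is illustrative, not the actual WebPagetest code):

```python
from io import BytesIO

from PIL import Image  # Pillow assumed; not the actual WebPagetest tooling


def png8_beats_gif(gif_bytes: bytes) -> bool:
    """Re-save a palette GIF as a palette (PNG8-style) PNG and compare sizes.

    Binary transparency survives the round trip: Pillow carries the GIF's
    transparent palette index across and writes it as a PNG tRNS chunk.
    """
    im = Image.open(BytesIO(gif_bytes))
    if im.mode != "P":
        im = im.convert("P")  # keep the comparison palette-vs-palette
    buf = BytesIO()
    im.save(buf, "PNG", optimize=True)
    return len(buf.getvalue()) < len(gif_bytes)
```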
The Slashdot thread above seems to indicate that GIMP has code for guessing the quality/compression of a JPEG.
It also suggests that the best way to guess would be to take the DCT (discrete cosine transform) of blocks of pixels throughout the image and check the level of quantising.
But, as you say, that is much more difficult than compressing the image and seeing if there are significant savings.
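A cruder variant of the quantisation-table idea needs no pixel data at all: the IJG encoder derives its tables by scaling the standard base table from the JPEG spec (Annex K), so the scale factor, and from it the quality, can be estimated by inverting that scaling. A stdlib-only sketch, approximate because the per-entry rounding and clamping cannot be undone exactly:

```python
# Standard luminance quantization table from the JPEG spec (Annex K),
# which the IJG code scales to produce tables for a given quality.
BASE_LUMA = [
    16, 11, 10, 16, 24, 40, 51, 61,
    12, 12, 14, 19, 26, 58, 60, 55,
    14, 13, 16, 24, 40, 57, 69, 56,
    14, 17, 22, 29, 51, 87, 80, 62,
    18, 22, 37, 56, 68, 109, 103, 77,
    24, 35, 55, 64, 81, 104, 113, 92,
    49, 64, 78, 87, 103, 121, 120, 101,
    72, 92, 95, 98, 112, 100, 103, 99,
]


def scale_table(quality: int) -> list:
    """Build the luminance table the IJG encoder would use at `quality`."""
    quality = max(1, min(100, quality))
    scale = 5000 // quality if quality < 50 else 200 - 2 * quality
    return [max(1, min(255, (q * scale + 50) // 100)) for q in BASE_LUMA]


def estimate_quality(table: list) -> int:
    """Invert the scaling: estimate the IJG quality from a parsed table.

    Approximate: expect an error of a point or two at middling qualities,
    more at very low qualities where clamping at 255 kicks in.
    """
    scale = round(100 * sum(table) / sum(BASE_LUMA))
    if scale <= 100:
        return round((200 - scale) / 2)
    return round(5000 / scale)
```

With Pillow, the tables parsed from a real file are exposed as the `quantization` attribute of an opened JPEG, so the same inversion can be run against an arbitrary image; this only works cleanly for encoders that use the IJG scaling, which is the assumption here.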

With respect to PNG8, Stoyan Stefanov’s presentation at Velocity 2008 about image optimisation states that IE6 supports GIF-like (binary) transparency with PNG8, and that any alpha transparency degrades gracefully. There is no need to use a CSS filter hack with PNG8 unless you need alpha transparency effects to show in IE6 or below.

It appears that someone has kindly written a C program to have a guess at the quality of a JPEG. If it works, and fits (licence-wise), it would make a good addition to your image optimisation toolset: you could judge the current quality of the JPEG and then apply only the degradation needed to bring it to 50% quality. This IMHO would be better than blanket-assuming that every JPEG can be reduced to 50% quality.

I would like to optimize some large photos for the web so that they look great on screen at about 900px width, but print out awfully: very pixelated, and not fit for re-distribution or re-selling. If someone were to steal the photos, at least I would have peace of mind that they couldn’t re-print them at high quality.
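One way to do that, assuming Pillow is available: down-sample to 900px wide (which caps the print size at roughly 3 inches at 300 dpi) and re-save at a low JPEG quality so upscaling can’t recover the detail. The function name and quality value here are illustrative, not prescriptive:

```python
from io import BytesIO

from PIL import Image  # Pillow assumed


def web_only_version(photo: bytes, max_width: int = 900, quality: int = 50) -> bytes:
    """Make a screen-friendly, print-hostile copy of a photo.

    At 900px wide the print size tops out around 3 inches at 300 dpi,
    and the low JPEG quality throws away detail that no amount of
    upscaling can bring back.
    """
    im = Image.open(BytesIO(photo)).convert("RGB")
    im.thumbnail((max_width, max_width * 10))  # preserves aspect ratio
    buf = BytesIO()
    im.save(buf, "JPEG", quality=quality)
    return buf.getvalue()
```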