
JPEG Compression Quality from Quantization Tables

DQT / Quantization Matrix / JPEG Compression Quality tables for Digital Cameras and Digital Photography Software.

Index of Quantization Table Sources

Would you like to compare two quantization tables?
Want to extract a quantization table or detect a forged/edited photo?

Canon DIGITAL IXUS 40 (superfine)
Canon DIGITAL IXUS 55 (superfine)
Canon DIGITAL IXUS 60 (fine)
Canon DIGITAL IXUS 700 (fine)
Canon DIGITAL IXUS 800 IS (superfine)
Canon DIGITAL IXUS 850 IS (superfine)
Canon DIGITAL IXUS 900Ti (superfine)
Canon EOS 10D (fine)
Canon EOS 10D (norm)
Canon EOS 20D (fine)
Canon EOS 300D DIGITAL ()
Canon EOS 300D DIGITAL (fine)
Canon EOS 30D (fine)
Canon EOS 350D DIGITAL (fine)
Canon EOS 40D (fine)
Canon EOS 5D (fine)
Canon EOS D30 (fine)
Canon EOS D60 (fine)
Canon EOS-1D (fine)
Canon EOS-1D Mark II (fine)
Canon EOS-1D Mark II N (fine)
Canon EOS-1D Mark III (fine)
Canon EOS-1DS (fine)
Canon EOS-1Ds Mark II (fine)
Canon PowerShot A40 (superfine)
Canon PowerShot A430 (superfine)
Canon PowerShot A510 (superfine)
Canon PowerShot A520 (fine)
Canon PowerShot A520 (superfine)
Canon PowerShot A630 (fine)
Canon PowerShot A640 (fine)
Canon PowerShot A70 (superfine)
Canon PowerShot A700 (superfine)
Canon PowerShot G1 (fine)
Canon PowerShot G1 (superfine)
Canon PowerShot G3 (superfine)
Canon PowerShot G5 (superfine)
Canon PowerShot G6 (fine)
Canon PowerShot G6 (superfine)
Canon PowerShot Pro1 ()
Canon PowerShot S1 IS (superfine)
Canon PowerShot S2 IS (fine)
Canon PowerShot S3 IS (superfine)
Canon PowerShot S30 (fine)
Canon PowerShot S30 (video)
Canon PowerShot S40 (superfine)
Canon PowerShot S45 (superfine)
Canon PowerShot S5 IS (superfine)
Canon PowerShot S50 (superfine)
Canon PowerShot S60 (superfine)
Canon PowerShot S70 (superfine)
Canon PowerShot S80 (superfine)
Canon PowerShot SD1000 (superfine)
Canon PowerShot SD20 (superfine)
Canon PowerShot SD40 (superfine)
Canon PowerShot SD400 (fine)
Canon PowerShot SD600 (superfine)
Canon PowerShot SD700 IS (fine)
Canon PowerShot SD800 IS (superfine)
Canon PowerShot TX1 (superfine)
E4600 (FINE)
E8400 (FINE)
E8700 (FINE)
E8800 (EXTRA)
E995 (FINE)
FinePix A700 (fine)
FinePix E550 (fine)
FinePix E900 (fine)
FinePix F40fd ()
FinePix F700 (normal)
FinePix F810 (normal)
FinePix S20Pro (fine)
FinePix S3Pro (fine)
FinePix S5000 (normal)
FinePix S5000 (normal)
FinePix S5000 (normal)
FinePix S7000 (fine)
FinePix S9000 (FINE)
FinePix S9500 (fine)
FinePixS1Pro (fine)
FinePixS2Pro (fine)
MX-500 (fine)
MX-500 (normal)
DSC-F828 ()
DSC-F88 ()
DSC-H1 (variable)
DSC-H1 (variable)
DSC-H1 (variable)
DSC-H1 (variable)
DSC-H1 (variable)
DSC-H2 (variable)
DSC-H2 (variable)
DSC-H2 (variable)
DSC-H2 (variable)
DSC-H2 (variable)
DSC-H5 (variable)
DSC-H5 (variable)
DSC-H5 (variable)
DSC-H5 (variable)
DSC-H5 (variable)
DSC-H5 (variable)
DSC-H7 (variable)
DSC-H7 (variable)
DSC-H7 (variable)
DSC-H7 (variable)
DSC-H9 (variable)
DSC-H9 (variable)
DSC-H9 (variable)
DSC-H9 (variable)
DSC-H9 (variable)
DSC-H9 (variable)
DSC-H9 (variable)
DSC-H9 (variable)
DSC-H9 (variable)
DSC-H9 (variable)
DSC-H9 (variable)
DSC-N1 (fine)
DSC-N2 (fine)
DSC-P200 ()
DSC-R1 (fine)
DSC-T100 ()
DSC-W1 ()
DSC-W35 (fine)
DSC-W70 ()
DSLR-A100 ()
DSLR-A700 ()
DSLR-A700 (fine)
K800i (variable)
K800i (variable)
K800i (variable)
K800i (variable)

DSC-H9 (*)
SIGMA SD10 (Qual:12)
SIGMA SD10 (Qual:12)
PENTAX *ist D ()
PENTAX *ist DL ()
PENTAX *ist DS ()
PENTAX *ist DS2 ()
PENTAX Optio 750Z ()
PENTAX Optio A10 ()
PENTAX Optio A30 ()
PENTAX Optio M40 ()
PENTAX Optio S ()
PENTAX Optio S5i ()
DiMAGE X1 (fine)
E-1 ()
E-20,E-20N,E-20P (SHQ)
E-300 ()
E-330 ()
E-410 ()
E-410 (SHQ)
E-500 ()
FE240/X795 (shq)
u1000/S1000 (SHQ)
u30D,S410D,u410D (variable)
u30D,S410D,u410D (variable)
u30D,S410D,u410D (variable)
u30D,S410D,u410D (variable)
u30D,S410D,u410D (variable)
u700,S700 ()
u700,S700 (variable)
u700,S700 (variable)
u700,S700 (variable)
u700,S700 (variable)
uD800,S800 (SHQ)
uD800,S800 (variable)
uD800,S800 (variable)
uD800,S800 (variable)
uD800,S800 (variable)
uD800,S800 (variable)
iPhone ()
Adobe Photoshop (Save As 00)
Adobe Photoshop (Save As 01)
Adobe Photoshop (Save As 02)
Adobe Photoshop (Save As 03)
Adobe Photoshop (Save As 04)
Adobe Photoshop (Save As 05)
Adobe Photoshop (Save As 06)
Adobe Photoshop (Save As 07)
Adobe Photoshop (Save As 08)
Adobe Photoshop (Save As 09)
Adobe Photoshop (Save As 10)
Adobe Photoshop (Save As 11)
Adobe Photoshop (Save As 12)
Adobe Photoshop (Save For Web 010)
Adobe Photoshop (Save For Web 020)
Adobe Photoshop (Save For Web 030)
Adobe Photoshop (Save For Web 040)
Adobe Photoshop (Save For Web 050)
Adobe Photoshop (Save For Web 051)
Adobe Photoshop (Save For Web 060)
Adobe Photoshop (Save For Web 070)
Adobe Photoshop (Save For Web 080)
Adobe Photoshop (Save For Web 090)
Adobe Photoshop (Save For Web 100)
IrfanView (040)
IrfanView (050)
IrfanView (060)
IrfanView (070)
IrfanView (080)
IrfanView (090)
IrfanView (100)

Background Material

For some background on quantization tables and their use in JPEG compression, please see my JPEG compression article.

For examples and description of the breakdown of an image into 8x8 blocks, please see my article on the JPEG Minimum Coded Unit.

In JPEG compression, the quantization step is performed just prior to the JPEG Huffman Coding.
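As a concrete sketch of this step (illustrative values only, not the code of any particular encoder): each DCT coefficient is divided by the corresponding entry in the quantization table and rounded to the nearest integer, and it is this rounding that permanently discards information.

```python
# Illustrative sketch of JPEG quantization / dequantization.
# Each DCT coefficient is divided by its quantization table entry and
# rounded; dequantization multiplies back, but the round-off is lost.

def quantize(dct_block, qtable):
    """Quantize a list of DCT coefficients with matching DQT entries."""
    return [round(c / q) for c, q in zip(dct_block, qtable)]

def dequantize(quant_block, qtable):
    """Reverse the scaling; the rounding loss is not recoverable."""
    return [c * q for c, q in zip(quant_block, qtable)]

# A high-frequency coefficient of 23 quantized by a step size of 40
# collapses to 1 and dequantizes to 40 -- this round-off is the "loss".
coeffs = [480, 23, -5]
steps  = [16, 40, 51]
q = quantize(coeffs, steps)
print(q, dequantize(q, steps))   # [30, 1, 0] [480, 40, 0]
```

The larger the table entry (step size), the coarser the rounding and the greater the loss, which is why the large bottom-right entries of a DQT compress the high frequencies most aggressively.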

Example of a Quantization Table

The following diagram shows a typical luminance DQT found in a high-quality digital photo. Note that the numbers increase in magnitude toward the bottom-right corner, reflecting the greater compression (loss) applied to high-frequency image components. Numbers toward the top-left corner (low-frequency & "DC") decrease, as we typically don't want to discard this image information: the human visual system tends to notice "errors" here.

Comparison of Quantization Tables

In trying to evaluate the differences between software packages and their "quality settings" when saving to JPEG, I have extracted the quantization tables used in each case. To do this, I used my JPEGsnoop utility, which locates the offset of the JFIF marker tags within each image, extracts the linear stream of bytes following the DQT marker, recreates the zig-zag representation of the quantization (DQT) coefficient matrix, and presents the results here.
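The de-zig-zag step can be sketched as follows; this is a simplified stand-in for what JPEGsnoop does internally (the marker parsing itself is omitted):

```python
# Sketch of rebuilding the 8x8 DQT matrix from the 64 values stored in
# zig-zag order inside the DQT marker segment (marker parsing omitted).

def zigzag_coords(n=8):
    """(row, col) pairs in JPEG zig-zag scan order.

    Cells are sorted by anti-diagonal (r + c); odd diagonals run
    down-left (row ascending), even diagonals run up-right.
    """
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else -rc[0]))

def dezigzag(dqt_values):
    """Turn the 64 zig-zag-ordered DQT values into an 8x8 matrix."""
    m = [[0] * 8 for _ in range(8)]
    for val, (r, c) in zip(dqt_values, zigzag_coords()):
        m[r][c] = val
    return m

# Feeding sequential values shows the scan path: 0 at (0,0), 1 at (0,1),
# 2 at (1,0), 3 at (2,0), 4 at (1,1), 5 at (0,2), ... 63 at (7,7).
m = dezigzag(list(range(64)))
```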

In each of the following examples, a reference to "better quality" means less quantization error (compression error / loss) and therefore better resulting image quality; the converse holds for "worse quality". The baseline for each of these "better / worse" comparisons is the original digital camera photo as produced by my Canon 10d digital SLR in Fine JPG mode. This helps give one an idea of a comparable quality setting to the one used by the camera itself.

Compare JPEG Quality of Digicams and Software

Please check out my JPEG Compression Quality comparison page, where you can interactively compare the quantization tables of different sources!

JPEG Standard

These are the quantization table / matrix coefficients that are recommended / suggested in the Annex of the JPEG standard.

Quantization Table: Luminance
Quantization Table: Chrominance

Photoshop CS2

The JPEG quality range for Photoshop CS2 runs from 1-12. Presumably this range was chosen to suggest that 10 is really the most anyone should use, and that 11 and 12 are there purely for experimental purposes (not much gain in quality for a large increase in file size).

Note that the quantization tables generated from a save within Photoshop CS are concatenated together into a single quantization table of length 132 instead of the 2 x 67 that we see in other software programs and the digital camera itself.

  • One should realize that none of Photoshop's quality settings in the Save As dialog box will match any digital camera's JPEG compression tables. Therefore, you cannot avoid recompression error when resaving an image from a digicam in Photoshop!

Photoshop CS2 Chroma Subsampling

Note that Photoshop changes its use of chroma subsampling depending upon the image Quality setting in the Save dialog. Photoshop CS2 uses 1x1 (no chroma subsampling) in Quality 7-12, but 2x2 (chroma subsampling in both horizontal and vertical directions) in Quality 0-6. Note that this use of chroma subsampling throws away significantly more image information than the subsampling used by most digital cameras (except some basic digital Point & Shoot digicams).

Photoshop only offers 2x2 or 1x1 (none) chroma subsampling. Most digital cameras use 2x1 chroma subsampling. Therefore, you will have to either choose to over-compensate in the color channels (by doubling the horizontal color resolution of the original source) or discard (by eliminating / averaging the color resolution in the vertical component). I don't know why Adobe didn't choose to provide a saving option that would be closer to what digicams use — better yet, a choice in chroma subsampling, like IrfanView.
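A quick back-of-envelope calculation of the chroma data retained by each of the modes mentioned above makes the trade-off concrete:

```python
# Fraction of chroma (Cb + Cr) samples that survive each subsampling
# mode, relative to full resolution. Factors taken from the article:
# Photoshop offers only 1x1 and 2x2; most digicams use 2x1.

def chroma_fraction(h, v):
    """Fraction of chroma samples kept for horizontal/vertical factors."""
    return 1.0 / (h * v)

modes = {"1x1 (none, Photoshop Q7-12)": (1, 1),
         "2x1 (most digicams)":         (2, 1),
         "2x2 (Photoshop Q0-6)":        (2, 2)}
for name, (h, v) in modes.items():
    print(f"{name}: {chroma_fraction(h, v):.0%} of chroma samples kept")
```

So moving a 2x1 camera original through Photoshop forces a jump to either 100% (over-preserving) or 25% (discarding the vertical chroma resolution), with no 50% middle ground.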

A Caution Regarding Photoshop Quality 7!

What many people don't know is that there is a quirk in the way that Photoshop defines its quality range. As mentioned earlier, Quality level 6 is the last point in which chroma subsampling is used. At Quality level 7 and higher, no chroma subsampling is used at all. With the amount of color information encoded now doubled, the file size would have naturally increased significantly at this level versus the previous level.

However, it is likely that Adobe decided to allocate the various quality levels with some relationship to the final compressed file size. Therefore, Adobe chose a poorer luminance and chrominance compression quality (i.e., a higher level of compression) in Quality level 7 than in Quality level 6!

  • What this means is that the image quality of Quality level 7 is actually lower than Quality level 6
    (at least from luminance detail perspective).
    This fact has apparently been confirmed with subjective MOS scores against various images at both quality levels.

The only advantage of image quality 7 over quality 6 is that the chrominance detail hasn't been subsampled. So, there is perhaps some degree of benefit to the ability to resolve color information over the previous level, but this benefit is also diminished somewhat by the higher-compressed chrominance table.

You can see the above juxtaposition in the JPEG Quality comparison list.

Photoshop CS2 Quality vs File Size

The following graph shows the relationship between the file size of a typical 6 megapixel photo (from a Canon 10d) and the quality setting selected within Photoshop CS2.

NOTE: There are many reasons why one should not take file size as any indication of a quality comparison. The most important are: different quantization tables, different Huffman tables / optimization, orientation differences and additional metadata stored within the JPEG JFIF file.

  • With Quality 10, the level of compression is "worse" than that used in the digital camera.
  • Using Quality 12 results in "better" quality than the original Canon 10d digital camera photo.

Canon Digital Cameras

  • Canon 1Ds Mark II - This camera uses the least amount of JPEG compression (and thus highest quality) of any of Canon's digital SLRs.
  • Canon 1D Mark II - Not surprisingly, this camera uses the second to least amount of JPEG compression of any of Canon's digital SLRs.
  • Canon 5d - There are extremely small differences in the luminance table in the Canon 5d versus the 30d/20d.
  • Canon 30d, Canon 20d, Canon 10d (Fine Mode) - All of the prosumer line uses the same set of tables. One can see that the degree of quantization rounding used in Fine mode is quite minimal, i.e., there is not much compression loss at this setting.
  • Canon 30d, Canon 20d, Canon 10d (Normal Mode) - In Normal mode, the quality suffers quite significantly, putting it at worse compression than just about any other digicam's Fine mode.
  • Canon PowerShot S3 IS - The Canon PowerShot S3 IS has comparable luminance compression to the Canon 10d, but considerably worse chrominance compression. However, it is still much better than the Canon PowerShot SD400 in this regard.
  • Canon PowerShot SD400 - Typical for a point and shoot digital camera, the JPEG compression quality is lower than the digital SLR counterparts.

Sony Digital Cameras

  • Sony DSC H9 - Interestingly enough, it appears that the Sony DSC H9 is one of the first cameras I have come across that uses automatic / variable image compression. The quantization tables are selected at exposure time by the camera, rather than relying on a few fixed user-selectable quality settings. This has some very interesting implications. I am currently in the process of analyzing images from this camera to understand the reasoning and algorithm being used. Please see the Variable JPEG Compression page for more details on this camera's feature.

Sigma Digital Cameras

  • Sigma SD10 - Note that there is no chroma subsampling in the Sigma dSLRs with the Foveon sensor. In addition, there is extremely little compression being used on this camera when compared to other dSLRs.

Nikon Digital Cameras

  • Nikon D2x - Interestingly enough, it seems that the Nikon D2x and Sigma SD10 share the same quantization tables and hence compression quality.


IrfanView

  • The JPEG quality range for IrfanView runs from 0 to 100.
  • At quality setting 97, IrfanView uses slightly "worse" compression than the original Canon 10d digital photo.
  • At quality setting 98, IrfanView uses slightly "better" compression than the original digital camera photo.


Reader's Comments:

Please leave your comments or suggestions below!
 Hi! How do you define these quantization tables for different devices? Can I download all these tables as a file? Thank you!
 I noticed that when using modeling software like Blender etc, jpegsnoop will register the signature as a match with TREO and Blackberry cameras in the subsamp match. This happens with other modeling software as well, do you ever get camera signature matches from software programs?
 Yes, it is certainly possible for a camera to use a quantization table that matches one typically used by software encoders though this appears to be a less common occurrence for more recent cameras.
 The last post needs more of this code for tabX

// Sum of |basis| over all MMxNN DCT basis functions (c1=c2=pi/16 for 8x8)
function genCosTab2(x,y)
{
	var i,j,k=0;
	var c1=Math.PI/(2*MM), c2=Math.PI/(2*NN);
	var p=new Array(MMNN);
	for(j=0;j<NN;j++) {for(i=0;i<MM;i++)
		{p[k++] = Math.cos(c1*(2*x+1)*i) * Math.cos(c2*(2*y+1)*j);}}
	return p;
}

wri('<br\><b>Experiment Suma abs cosinus2<\/b>');
var tabX=new Array(MMNN);
var i,j,k;
for(k=0;k<MMNN;k++) {tabX[k]=0;}
for(j=0;j<NN;j++) {for(i=0;i<MM;i++)
	{tabX = tablePLUSABS(tabX,genCosTab2(i,j));}}

tabX shows as

Experiment Suma abs cosinus2
64	41.007	41.81	41.007	45.255	41.007	41.81	41.007
41.007	26.274	26.789	26.274	28.996	26.274	26.789	26.274
41.81	26.789	27.314	26.789	29.564	26.789	27.314	26.789
41.007	26.274	26.789	26.274	28.996	26.274	26.789	26.274
45.255	28.996	29.564	28.996	32	28.996	29.564	28.996
41.007	26.274	26.789	26.274	28.996	26.274	26.789	26.274
41.81	26.789	27.314	26.789	29.564	26.789	27.314	26.789
41.007	26.274	26.789	26.274	28.996	26.274	26.789	26.274
 Mathematical vs wiki table

16 11 10 16 24 40 51 61
12 12 14 19 26 58 60 55
14 13 16 24 40 57 69 56
14 17 22 29 51 87 80 62
18 22 37 56 68 109 103 77
24 35 55 64 81 104 113 92
49 64 78 87 103 121 120 101
72 92 95 98 112 100 103 99

16 18 23 27 34 34 38 40
18 20 26 30 38 38 42 44
23 26 34 40 50 50 55 58
27 30 40 46 58 58 64 67
34 38 50 58 72 72 80 84
34 38 50 58 72 72 80 84
38 42 55 64 80 80 89 94
40 44 58 67 84 84 94 99

cut from js-code

var tabZ=new Array(MMNN);
var x=Math.sqrt(MM/2 * NN/2);
var i,j,k;
I wanted to know how the quantization table varies as the quality of the image increases or decreases. What is the mathematical relationship between quality factor and quantization table?
 Hi - here is the formula often used to relate the quality factor and quantization table:
Quality = 1..100
if (Quality < 50) { ScaleFactor = 5000 / Quality }
else { ScaleFactor = 200 - Quality * 2 }

Loop [i] Through Matrix:
  NewQuantMatrix[i] = (StandardMatrix[i] * ScaleFactor + 50 ) / 100
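Expressed as runnable code, the formula above looks as follows (a sketch of the commonly used IJG scaling; the clamp to the 1-255 range of baseline DQT entries is my addition, following the IJG sources):

```python
# IJG-style quality scaling of a base quantization table (sketch).
# quality 50 reproduces the base table; higher quality shrinks the
# entries (less loss), lower quality inflates them (more loss).

def scale_qtable(std_table, quality):
    """Scale a base quantization table by a quality factor in 1..100."""
    if quality < 50:
        scale = 5000 // quality
    else:
        scale = 200 - quality * 2
    # Integer rounding as in the formula above; clamp to the legal
    # 1..255 range of baseline JPEG quantization entries.
    return [min(max((q * scale + 50) // 100, 1), 255) for q in std_table]

std_lum = [16, 11, 10, 16, 24, 40, 51, 61]  # first row of Table K.1
print(scale_qtable(std_lum, 50))   # quality 50 reproduces the base row
print(scale_qtable(std_lum, 100))  # quality 100 -> all 1s (minimal loss)
```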
 Yes, you're absolutely right. I misunderstood the information on the website. I talked to the inventor at Electronic Imaging last week, too. Basically, the image is iteratively compressed until the visual difference between the original image and this iteration of the compression is indistinguishable. The result is visually lossless, and the optimal JPEG compression setting doesn't need to be known a priori. It's very clever; since you aren't doing this to every image, it's okay to trade the saved upload time against the extra processing time for images you want to transmit. And as you noted, Oren, there's no new file format.
2013-04-15 chen adiy
 About comparing two quantization tables
Two questions: what is variance and how do you calculate it?
One other question is how does JPEGsnoop calculate the scaling value?
Thank you
2013-04-10 chen chao
 Dear Sir
Take Photoshop's tables for example: does it have a base table like the JPEG standard table?
thank you
 Came across this page, and it prompted me to try to quantify visual performance of q-tables in default cjpeg and Photoshop SfW.

Not enough room for me to detail my effort here ;) but I posted it in a different forum for those who are curious:
 Is there a chance, that the newer Pentax models, K-20D, K-7, K-5, K-x K-r, K-01 will be added to the database.

Thank you very much
 Great article! THX!

Can you give me some advice on generating a custom quant table for my image?
 If you are asking about encoding with custom tables, the easiest way is to run cjpeg (an IJG tool) with the -qtables option.
 Hi. It's a really good explanation; one can easily understand the MCU.
 Wonderful application.
Feature request: please, please add a unique-color count, based on RGB or CMYK values.
I can't find any app that does this.
It would be wonderful to count unique colors in the gamut profile.

Unlike file formats (such as GIF) with a color index table, JPEG files generally support a 24-bit color range (16 million colors). Counting distinct colors can be done, but it would slow down the decoder quite a lot. Given that I haven't seen much demand yet for color counting, I'm a bit hesitant to impact the general decoder performance. Can you give me some background how this might be useful? Thanks.
 Hi Cal!

I have a set of rather strange JPEG files produced with a Panasonic HDC-TM300 video camera. The images were saved with the "high-speed burst" capture mode, i.e., a sequence of 180 JPEG images over a period of 3 seconds.

The weird thing is that the images contain tell-tale JPEG artefacts (confirmed to fall nicely within MCU boundaries), yet the quantization matrix is all 1's.

I can of course recreate such images by first saving a synthetic image (no distortion) as a JPEG at say 80% quality, and then load and re-save it at 100% quality.

Can you think of any reason why the Panasonic camera would produce such deceptive JPEG files? It is clear that they quantize heavily (around 85% quality), but then proceed to re-encode with a "lossless" QM.

I can send you some samples if you like.

 That's a very interesting thing that you have found. On the surface, that would not seem to make much intuitive sense. If they are indeed performing two passes of JPEG compression, the second pass with an all-1 quantization matrix would only serve to increase file size with no improvement in image quality. Please feel free to email me a sample. Thanks!
 Dear Sir,

Your software is awesome. I have learned a great deal from your website, and practically from your JPEGsnoop.

However I just have one question: in the JPEGsnoop software, the quality is approximated after extracting the quantization table. Besides the quality value, it provides the 'variance'. So what is the meaning of 'variance'?

Thank you very much.
 Thanks Nam! The approximate image quality metric is based on a statistical calculation performed across the quantization matrix compared to the example table provided in the standard. Since it is possible for very different matrices to "average" out to the same quality value, the variance indicates how well all elements in the matrix matched the scaling. A low variance means that it is fairly likely that the quantization matrix is based on the example ("default") matrix plus scaling for quality. A high variance suggests that the quantization matrix is not based on the "default" and that the approximate quality factor isn't very meaningful.
 Dear Sir!
How can I find the quantization table from the quantization step size? Can you help me?

Best regards

I have a strange jpg format:
What is the DJPEG marker?
I cannot open this image.
p.s. this jpeg is compressed using ijl.dll

Best Regards
 Hi there --

I managed to decode this file -- it is an image of a car. The JFIF portion of the file is missing the DHT and DQT tables, and the file has a DJPEG prepend header that I have never seen before.

My guess is that this file format is actually a framegrab from a proprietary video file format very similar to MJPEG (Motion JPEG) except that both the DHT and DQT parameters are defined in the DJPEG header. This will allow for minimization of file size on low resolution video files.

Since I don't know the format of the DJPEG header, I have taken a guess at the DHT and DQT tables, inserted these and then corrected for my errors in the DQT scale factors. The DHT table I used was from the MJPEG standard.

 Cal, thank you very much for your explanation. It certainly helped me to understand a lot more about this subject.

I also have a Casio 6.0 megapixel camera, model EX-Z600. I tried to take the same scene in my house with both cameras. Although I could only do 6 MP with the Casio and 5 MP (closest comparison) with the Samsung HZ15W, I noticed that the file sizes were 2.89 MB and 1.22 MB respectively. The Casio was taken at its lowest compression rate and the Samsung was taken at its middle quality compression ratio. It was difficult to really assess the quality differences between the 2 files (this was not a very controlled experiment), but the Samsung definitely took less space (at least 50% less if you adjust for the difference in the MP range I was using) on a camera card, hence, more pictures in the same space. In fact, even at 12 MP with the Samsung, the file size was still less than the Casio's (2.72 MB vs. 2.89 MB) but with much better quality.

So what does this all mean? This is probably not the site to discuss something this non-technical but it is the site that I found to be most helpful.

But this compression ratio and its impact on quality and file size seems like it should be a factor in deciding what camera to buy. So how do you correlate the different factors here? Since all the digital compact cameras are within 100.00 in price, it would be good if people had this understanding of the different megapixel ranges and compression ratios (with regard to quality and file size) for these cameras.

I wonder what your thoughts are in this area. By the way, with everything I have learned from you and this site, I would still buy the new Samsung camera. I just did not appreciate at the time some of these advantages this new camera offered to the buyer. Had I been trying to decide between 2 different companies (e.g., my brother's Panasonic and my Samsung), I would have liked to have had this knowledge before I bought any camera.

Thank you for your site and support.
 I just purchased the new Samsung HZ15W 12.2 megapixel camera. I am hoping someone can give me an explanation in layman's terms for this feature on my camera. I have settings of Normal, Fine, and Superfine. At 12 megapixel resolution, I get file sizes that are larger as I supposedly get sharper pictures. So the file sizes must be caused by the compression ratio that is used (?). Can someone explain this to me so I can understand it? I thought all 12 megapixel pictures (4000 x 3000) would be the same size. Thank you if you can explain this to me...
 That's a great question, Robert! I'll try to summarize it from an easier perspective.

A 12 megapixel photo would normally take up about 36 MB on your disk if no compression was used. Obviously, this would be pretty impractical or wasteful, so your camera uses compression techniques to reduce the file size dramatically without changing the appearance of the photo too much. Any time that you use "lossy" compression, you are "losing" some of the detail from your original photo. How much you decide to throw away (ignore) is dictated by the compression ratio.

The more detail you preserve, the more file size you'll require. An uncompressed photo has a huge amount of "detail", and therefore a very large file size. A low compression ratio (eg "Superfine" setting) means that your camera will aim to retain the majority of your detail, and therefore it isn't able to reduce the file size so much. A high compression ratio (eg. "Normal" setting) directs your camera to toss away a lot of detail, which translates into a big reduction in file size. Unfortunately, by throwing away this detail, the image quality suffers (you'll start to see "blockiness", some unexpected colors, etc.).

The various quality settings on the camera ("Superfine", "Fine", "Normal") are simple names given to some mathematical tables (quantization tables) that are used by the camera to determine what details can be thrown away when it saves/compresses your photo.

All 12 megapixel photos taken by the same camera with the same quality setting will not give you the same file size because some images have less "detail" to begin with (eg. a photo of the sky), meaning that it's able to create a smaller file.

Hope that helps!
2009-04-09 Paul Leung
 Just tried your utility, it is excellent.

I was mainly looking at dumping the QTables, and am wondering what the two metrics "scaling" and "variance" mean? I am comparing two consumer products with built-in camera, where they have similar scaling value but very different variance value. What does that mean?
 Thanks! Many quantization tables (particularly from software editors) are based upon the "example" set included in the JPEG standard documentation, but scaled by a formula suggested by the IJG group. This scaling factor is sometimes referred to as the "quality factor". JPEGsnoop attempts to determine what this quality factor might have been (reflected in the "scaling" value) -- provided that the original base table was from the JPEG standard. Since this is often not the case, the variance will show how far off this table is from one that may have been derived directly from the JPEG standard tables. A very low variance will indicate that the JPEG standard tables were likely used as the basis for DQT. A high variance will imply that it is very unlikely that the JPEG standard tables were used.

Essentially, a large variance value indicates that you should ignore the scaling factor and approximate quality factor estimates, as they are not as meaningful.
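As an illustration only (this is not JPEGsnoop's actual algorithm, just a plausible reconstruction of the idea): compute the average per-entry ratio against the reference table as the "scaling", and the spread of those ratios around it as the "variance":

```python
# Illustrative sketch of estimating a "scaling" and "variance" for a
# DQT against a reference table (NOT JPEGsnoop's actual code -- just
# the idea: mean per-entry ratio, then spread of the ratios).

def scaling_and_variance(dqt, ref):
    """Return (mean ratio, variance of ratios) of dqt vs reference."""
    ratios = [d / r for d, r in zip(dqt, ref)]
    scaling = sum(ratios) / len(ratios)
    variance = sum((x - scaling) ** 2 for x in ratios) / len(ratios)
    return scaling, variance

ref = [16, 11, 10, 16, 24, 40, 51, 61]
# A table that is exactly ref * 2 matches the reference perfectly:
s, v = scaling_and_variance([32, 22, 20, 32, 48, 80, 102, 122], ref)
print(s, v)  # 2.0 0.0 -> zero variance: likely derived from the reference
```

A table not derived from the reference would produce ratios that disagree with one another, inflating the variance and making the estimated scaling factor meaningless, exactly as described above.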
 You wrote:
>JPEG Standard
>These are the quantization table / matrix coefficients that are >recommended / suggested in the Annex of the JPEG standard.

Can you please tell me in which standard and which annex these tables are defined?
I can't find them...
The only tables I can find are tables from ITU-T 81 Annex K.
But they are different:

Table K.1 "Luminance quantization table"

16 11 10 16 24 40 51 61
12 12 14 19 26 58 60 55
14 13 16 24 40 57 69 56
14 17 22 29 51 87 80 62
18 22 37 56 68 109 103 77
24 35 55 64 81 104 113 92
49 64 78 87 103 121 120 101
72 92 95 98 112 100 103 99

Table K.2 " Chrominance quantization table"

17 18 24 47 99 99 99 99
18 21 26 66 99 99 99 99
24 26 56 99 99 99 99 99
47 66 99 99 99 99 99 99
99 99 99 99 99 99 99 99
99 99 99 99 99 99 99 99
99 99 99 99 99 99 99 99
99 99 99 99 99 99 99 99
 Thanks for pointing this out. I changed my database lookup function and the page was showing a different DQT table than it did previously. I've corrected this now.
 @ my previous post

OK, a bit lost.
I read some of your answers on this very page suggesting interpolation or other means to derive tables of orders higher than 8.
So MCU size is irrelevant to quantization?

Please correct my sequence as I'm a bit confused.
We start with any of these subsampling settings
(1x1)==8x8 MCU
(2x1)==16x8 MCU
And replace each original region (with the chosen size) by the resulting coded sequence only in the Cr and Cb channels, leaving the Y unchanged.
Then go on with the JPEG compression steps using an 8x8 or 16x16 or ... any quantization matrix?

Sorry one more thing. The two quantization tables mentioned; luminance and chrominance, used to quantize each channel, are the only ones? I read an image could have up to four tables. What does that mean?
 Very useful page. Thanks!
A question. Are JPEGs compressed with 16X16 quantization tables (or any other than 8X8) that common? I have an algorithm that measures blocking artifacts and it's devised for 8X8 blocks (standard tables). I was trying to make it adaptive, but can't find any JPEGs generated from 16X16 Q-tables for testing. If I wanted to check the tables, JPEGSnoop does it well, thank you, but I can never seem to find any 16X16. How do I "make" some?
The mentioned tools, BetterJPEG and RealWorldPhoto, allow adjusting the quality and hence scaling the tables (which are not accessible).
Do I have to do it programmatically?

Sincere regards,
 I don't think you are actually looking for "16x16 quantization tables"... The only quantization tables I've seen represent 8x8 coefficient matrices. What I believe you may be referring to are the 16x16 MCUs (due to chroma subsampling). Depending on the sampling settings, you may need to alter the way your algorithm treats the 3 components (doubling up some, if required).
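The relationship between the per-component sampling factors and the MCU dimensions mentioned above can be sketched as:

```python
# MCU dimensions follow from the maximum sampling factors across the
# components: each MCU covers (8 * max_h) x (8 * max_v) pixels, even
# though every quantization table remains an 8x8 matrix.

def mcu_size(sampling_factors):
    """sampling_factors: list of (h, v) per component, e.g. Y, Cb, Cr."""
    max_h = max(h for h, v in sampling_factors)
    max_v = max(v for h, v in sampling_factors)
    return 8 * max_h, 8 * max_v

print(mcu_size([(1, 1), (1, 1), (1, 1)]))  # 1x1 sampling -> (8, 8) MCU
print(mcu_size([(2, 1), (1, 1), (1, 1)]))  # 2x1 sampling -> (16, 8) MCU
print(mcu_size([(2, 2), (1, 1), (1, 1)]))  # 2x2 sampling -> (16, 16) MCU
```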
 When using JPEGsnoop, I noticed under the Quantization Tables that you showed the table's scaling value and variance.

Two questions: Unless I missed it from somewhere else, what is variance and how do you calculate it? One other question is how does JPEGsnoop calculate the scaling value?

Thank you
 Hi, i got same problem on .Net & JPEG as Musa.
Is there value of quantization tables that ensures 100% quality for saved JPEG?
I can pass these tables to the encoder, but if I fill them with zeros (or with 1, 2 etc.) the JPEG still loses some of its unique colors and seems somewhat blurred. If I fill with 200 or so, the JPEG is completely lost.

Thx for any response
 Hi Joe -- the settings that would give you the least compressed standard JPEG image would be to set all 64 entries in the quantization tables to 1, in addition to the disabling of any chroma subsampling. Remember that this is still not 100% because of the color space conversion and other rounding errors. It would also be extremely inefficient when compared to other file formats (or lossless JPEG, for example).
 Thank you for offering interesting and detailed information on this website about JPG and thanks for offering JPEGsnoop.
My question is kind of similar to the one in comment 2006-02-18:

Since you can't change Photoshop's quantization tables and color subsampling settings, and cjpeg doesn't have a GUI where you could paste an image... Is there any software, preferably with a GUI, where one could define a quantization table and control the color subsampling so that it matches the digital camera's settings?
I guess saving edited JPEGs with the same settings as the digital camera's would minimize the generation loss compared to Photoshop's JPG save, wouldn't it?
 Yes... recently there have been a couple image editors that have made great strides in attempting to accomplish this task. Here are two great tools to check out: BetterJPEG and RealWorld Photos.
 Thanks for your input
I resample/resize the image to save.
Microsoft GDI+ is a dead project and lacks some power.
I played a lot with the Encoder and it just can't handle what I am looking for.
Anyway, I solved my problem.
Your knowledge on JPEG, compression and overall imaging is exceptional!!
I searched the last 5 days to find a solution to my problem, but I am lost.

I have an online photo album site where users upload photos.
The resampled/resized images (500x375 or 375x500) from the original JPG files are losing quality.

I uploaded the same file to Flickr, and the quality is far better.
I can't get rid of this problem even with the HighQuality options:

g.InterpolationMode = InterpolationMode.HighQualityBicubic;
g.SmoothingMode = SmoothingMode.HighQuality;
g.PixelOffsetMode = PixelOffsetMode.HighQuality;
g.CompositingQuality = CompositingQuality.HighQuality;

Also using quality 100 when saving.

I played with IrfanView and found that if I check the "Disable color subsampling" option during save, I get the same quality and file size as Flickr.

My question: how can I disable color subsampling programmatically in my code when I create the 500x375 images?

I will wait for your response. Have a great day!!
 Hi Musa -- Unfortunately, I'm not familiar with the .NET Framework's graphics library calls, but it appears that most of those options you've referenced are intended to alter the visual display of the images, rather than affect any JPEG compression parameters used during save. Does your site resize the images and resave them as lower-res versions or does it just read in full-res images and dynamically resize them before display?

Digging a little into the MSDN, it would seem that you may want to look into setting the System.Drawing.Imaging Encoder parameters before saving out.
 Perhaps a naive question, but I was wondering how I can read what level of compression was used on any kind of JPG file.

But I guess it depends on the algorithms that each program uses.

I am using ACDSee, and by comparing against its output I gathered that my Canon 400D uses something similar to 96% quality. But when I examine other files that I receive, already handled, I sometimes want to match the same compression level when returning the edited pic.

And thanks for this cool info you have here!

Regards from Portugal
 As I think you saw later, you can use JPEGsnoop to extract the approximate quality factors and quantization matrices. Comparing image compression quality is not very straightforward because you're comparing 64 numbers, not just a single "quality" number!

Is there any known method to encode a JPG image file ("SOS" and scan data) using predefined/user-given quantization tables (DQT), SOF0 parameter block and Huffman tables (DHT)?

I need this for reconstruction of EXIF thumbnail images.
Jhead -rgt does rebuild a thumbnail, but it uses different quantization tables and SOF0 parameters which are useless for my purposes.
 Yes, you're partly in luck :) You can use jpegtran with the -qtables option to set the DQT entries, but you'll have a hard time trying to change the DHT to match your scan data. I had to build a custom transcoder to do this.

As for changing the SOF0 parameters, you might want to see if exiftool can do this for you.
 I have a question, if you don't mind answering it.

I see lots of quantization tables with low values like 2, 3, etc. for the DC and lower frequencies. The problem I'm having is that the resulting quantized amplitude does not fit in an 8-bit register.

What do you suggest can be done about it, without going to 16-bit quantized values? If I simply lower the values, block artifacts turn up pretty quickly.
 Thanks for the link to the code.

You were right when you doubted that the photo reader would use a hard-coded DQT. After I read your reply, I modified the DQT in the thumb file that does work, rather than trying to fix the thumb file that does not display. This confirmed that the photo reader indeed uses the DQT in the file: changing the DQT table in the working thumb file changed the appearance of the displayed thumb, but it is still displayed.

I am still befuddled as to why one thumb file displays while the other one does not when inserted into an image file and viewed in the photo player. Both thumb files open in all software-based photo editors I tried. Both thumb files have the same set of JPG markers in the same order, and identical EXIF entries. I could post a link to the two files for anyone to have a look at and solve this technical puzzle.
 If you have an image that has a working thumbnail, then email both files to me and I'm quite sure that we can figure out the differences. Cal.

[UPDATE 09/15/2007]: After receiving the files from Udo, I figured that the HDTV Photo Player may expect that thumbnails are encoded with 2x1 chroma subsampling. For nearly all digicams, this assumption will be correct -- although it may still fail on images from some very low-end digicams that use 2x2 subsampling.

In the samples that he provided, JPEGsnoop reported that the images that didn't work on the player had hand-embedded thumbnails with either no subsampling or 2x2 (horizontal and vertical). I recoded one of his sample thumbnails to 2x1 and after inserting this thumbnail into his file with his HDTV Converter software, he reports that it now works!
 Hi There,

Thanks for your wonderful page on DQT. I learned more from your page than any of the others. I also appreciate the CS2 compression tip.

I prefer to look at, and share, my camera pictures on a plasma HDTV using the excellent Panasonic photo reader. I have tried many photo editors; none saved images in a JPG format that could be read by the hardware-based photo reader.

I am writing a programme that scales images from a camera, or images saved with Photoshop (and other photo editors), into the HDTV aspect ratio and, more importantly, saves the image in a JPG format which can be read by the hardware-based image reader. I have accomplished that for the main image. I am stuck on getting the embedded thumb preview into a format that can be read by the photo reader. Through trial and error, I have come to the conclusion that the problem is caused by the DQT table used to compress the thumb image. I suspect that the reader does not use the embedded thumb DQT, but instead uses the digital camera's default thumb DQT -- possibly because the thumb conversion is hard-coded in the converter IC used in the reader.

I have been looking for some sample code (any language -- C preferred) to create JPG images using custom DQT tables, but have not found a single one. Any help would be appreciated.
 Sounds like a great idea for a project! I would be quite surprised if the Panasonic hardware decoder hardcoded the DQT tables used for decompressing the thumbnail. This would prevent the display from working with any future digicams by the company that use slightly different compression quality settings. But, of course, it's always possible!

So that I understand your problem better: is the photo reader able to read any thumbnails successfully? You were suggesting that the reader uses a "digital camera thumb image default DQT" -- which default DQT would this be? (default for thumbs from Panasonic digicams?) Since there is no universal default DQT for thumbnails, this doesn't seem as likely.

What brought you to the conclusion about the reader ignoring the embedded thumbnail DQT entries? Is the reader able to see any image data at all, or does it simply report an error? If the DQT table were being ignored (and a hardcoded DQT table in the hardware were used instead), then you should not get an error in the reader, but just some garbled image preview instead. If the wrong DHT tables were used, then yes, the decoder may encounter a failure.

As for encoding JPEG images with custom DQT tables, all JPEG encoders can use an arbitrary DQT table. Have a look at the IJG JPEG encoder C source code, and you'll see that you can simply substitute any quantization table you like.

Please feel free to post any more questions you have -- I'm sure that we can get your photo reader to work for you!
 I want to know whether, in JPEG, the AC and DC components should be coded separately. Initially, I converted the image blocks with the DCT, and afterwards I divided them by the quantization matrix and rounded the values. When I do this, I can't decompress the image properly. I do get an output image, of course, but I can see pink dots in the images. Can you explain the reason for this?
with regards
 I'm not certain that I understand the question, but I believe you are asking about the sequence of components that get coded in the scan segment. Have a look at my JPEG Huffman Coding Tutorial page, where I have stepped through an entire decoding sequence between the DC and AC components. Hopefully that will help you out!
 I need to know how much error JPEG compression introduces in area measurement. I have to measure the area of leaves (some of them small), and I'd like to know the relationship between compression and error.
 That is an interesting problem, but unfortunately I don't see any easy way to identify such a correlation between the compression (as defined by the quantization tables) and the resulting color area. As an upper limit, error should be limited to the boundaries of the MCUs (e.g. 8x8 or up to 16x16 pixels), but this is likely too imprecise for your needs.
 Dear Sir,
I'm looking for all of the actual quantization tables used for each digital camera's parameter settings. Could you tell me if it's possible to use any mathematical equations and the quantization tables above to derive them all?
Best regards!

 Unfortunately, no.

Each camera manufacturer and computer program selects their own quantization tables for each arbitrary quality setting. The only case in which you can mathematically figure out the quantization tables from the quality setting is in programs such as Irfanview and other IJG-based JPEG encoders, where the quantization table is scaled by the quality setting (according to the function shown in the comment posted on 2006-03-07).
 Thank you very much, first of all! One more doubt: what is quantization in general? Please explain discrete quantization. And another doubt: why was the DCT chosen and not the DFT or FFT? Please tell me... thank you once again.
 A good explanation of quantization and why DCT was chosen over FFT can be found on this DSP Guide page. It also shows great examples of how the basis functions can be represented spatially.

To summarize, the Discrete Cosine Transform was chosen for JPEG images over Fourier Transforms because the DCT includes basis functions that have twice the period of the lowest-frequency Fourier function. This means that DCT will compress images with slower-changing components (such as the luminance) better than a Fourier Transform.
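For reference, the (unnormalized) one-dimensional DCT-II used in JPEG can be written as:

```latex
X_k = \sum_{n=0}^{N-1} x_n \cos\left[ \frac{\pi}{N} \left( n + \tfrac{1}{2} \right) k \right],
\qquad k = 0, 1, \dots, N-1
```

With N = 8 (applied separably to the rows and columns of each block), the k = 1 basis function completes only half a cycle across the block -- twice the period of the lowest non-constant DFT basis function.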
 How do you do the interpolation? Are there mathematical equations for it? Do you have any references? If so, please tell me...
 Interpolation of the quantization matrix to 16x16 should be easy. For example, suppose the first few entries of a row of the original 8x8 table were (perhaps substitute the JPEG Annex K quantization values for the variables listed below):

a  b  c  d ...

Then, the corresponding row of the interpolated 16x16 matrix may start as:

a  (a+b)/2  b  (b+c)/2  c  (c+d)/2  d ...

So, we are simply inserting an averaged value between each pair of the original 8x8 matrix values, along both the rows and the columns. As I mentioned earlier, this may not be the best way to create a 16x16 quantization matrix, but it may provide a starting point.
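A minimal C sketch of this averaging interpolation (my own illustration, not production code; edge entries are simply replicated, and integer division is used as in typical quantization code):

```c
/* Expand an 8x8 quantization table to 16x16 by inserting the average of
 * neighbouring entries, as described above.  Edge entries are replicated. */
void interp_quant_8to16(const int in[8][8], int out[16][16])
{
    for (int i = 0; i < 16; i++) {
        for (int j = 0; j < 16; j++) {
            int r  = i / 2, c = j / 2;              /* nearest original cell */
            int r2 = (i % 2 && r < 7) ? r + 1 : r;  /* vertical neighbour    */
            int c2 = (j % 2 && c < 7) ? c + 1 : c;  /* horizontal neighbour  */
            /* average of the (up to four) surrounding original values */
            out[i][j] = (in[r][c] + in[r][c2] + in[r2][c] + in[r2][c2]) / 4;
        }
    }
}
```

Even-indexed rows and columns reproduce the original 8x8 entries; odd-indexed ones hold the inserted averages.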
 How do I create the 4x4 or 16x16 quantization tables? Please help me...
 I don't know of any published sample quantization tables for other MCU sizes that are appropriate for general digital photos. You may be able to start with an interpolated version of the 8x8 tables, but I'm not 100% certain.

Of course, the best approach would be to process hundreds of photos, experimenting with different tables (probably based on the interpolated 8x8 version), and then attempt to compare people's subjective opinion of the quality of the compressed image against the file size savings. This is the process that was used to generate the original 8x8 tables that appear in the JPEG standard. Unfortunately, this is a very difficult and expensive process, which is why many encoders probably rely on the JPEG standard's example tables as a starting point.

Clearly, you can create any quantization table you want, but the difficult part will be finding the optimum balance between image quality and file size reduction.
2007-01-05 veeraswamy k
What is the logic behind the quantization table? Is it purely random or based on some logic like the HVS? What is the significance of the specific numbers? For example, why should the first number be 16 and not some other number?
Kindly answer
 The selection of the 64 coefficients for the quantization table is based on a combination of the traits of the human visual system (HVS) and a desired amount of compression. The inability of our eye to discern high-frequency details is what allows us to choose larger values in the matrix as we move towards the bottom-right corner (high-frequency image content), compared with the smaller values as we near the top-left corner (low-frequency image content). The larger the value, the more we throw away of that particular frequency of image content (in the quantization step).

So, the human visual system dictates the ratios between values that appear in the quantization table. The absolute values (the overall scale factor across the entire table) are determined by the total image compression ratio that one desires.

Many software programs have a built-in quantization table (sometimes based on the one suggested in the JPEG Standard), one for luminance and another for chrominance. They then scale all values in this base table according to the JPEG quality that the user chooses (e.g. 0-100). So, the absolute values in the resulting quantization table are based on the desired compression / quality tradeoff, but the original base table was based largely on characteristics of the HVS.

Furthermore, the fact that the Human Visual System is not very well-tuned to see error / loss in the color components (versus the luminance components) means that nearly all programs and digital cameras start with quantization tables that have much larger numbers in the chrominance table than the luminance table.

As for the selection of 16 versus some other number, it really is somewhat arbitrary. But an analysis of thousands of photos with different measurements of human perception have allowed optimum quantization values to be chosen that offer the best compression (largest coefficients) while still preserving the majority of the image quality of the original. These optimum tables are provided in the Annex of the JPEG Standard. Note that not all manufacturers or software vendors chose to go with these suggested tables -- instead, they came up with their own, probably after a considerable amount of their own analysis.
2006-12-18 Allen J
 Great Site, I stumbled onto it looking for a definition of Canon's compression types, and got so much more.

I have a question, on my Canon A540 there is a 2816x2112 and a lower 2272x1704. In order to keep file size a little lower, I have been opting for SuperFine (the highest compression) while dropping down to the lower res. Looking at your tables, though, I see that the 1704 res isn't a multiple of 16, which will give me problems with doing lossless rotation. Would you recommend going back up to the higher res and dropping down to the Fine compression? I doubt I will ever print anything larger than a 8.5 x 11, if ever that large.
 Hi Allen -- I have investigated further and have come to the conclusion that the reason for the rotation warning in Windows is likely a bug with how it handles chroma subsampling. I have updated my digicam lossless rotation page to reflect these findings.

So, what this means is that you certainly can still rotate your photos losslessly, but don't use Windows XP Explorer or Windows Picture and Fax Viewer! Doing so will result in image degradation. Instead, you can use IrfanView, BetterJPEG, jpegtran or a host of other applications.

In light of this, by all means shoot with the reduced resolution, but keep the compression quality high (BTW, SuperFine will be the highest compression quality, but the lowest compression amount). When it comes time to rotate the photos, use something other than Windows to rotate, or better yet, use a photo import utility to perform this for you automatically on the basis of EXIF information.

As an aside, I would never recommend dropping the compression quality over reducing the resolution. JPEG compression artifacts generally do far more damage to the appearance of your picture than simply having less resolution.

Good luck!
 Hello Calvin!
Could you describe in more detail the process of transforming BMP into JPEG?
 Converting from BMP to JPEG is relatively easy. To keep such conversions generic, it's often best to first decode the BMP file into a raw RGB image array. Start with the BMP file format definition. Then, you can perform the conversion to JPEG using the basic steps outlined in the JPEG compression page. But for the actual algorithms, I would strongly recommend that you download the IJG source code for cjpeg. Good luck!
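As an illustration of the very first step, here is the standard JFIF RGB-to-YCbCr conversion in C (a sketch using floating point; the IJG code implements the same formulas with scaled-integer arithmetic):

```c
#include <math.h>

/* JFIF RGB -> YCbCr conversion, the first step of BMP-to-JPEG
 * compression.  Inputs are 0..255; outputs are also 0..255, with the
 * chroma channels centered on 128. */
void rgb_to_ycbcr(double r, double g, double b,
                  double *y, double *cb, double *cr)
{
    *y  =  0.299    * r + 0.587    * g + 0.114    * b;
    *cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128.0;
    *cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128.0;
}
```

After this conversion (and level-shifting), each channel is split into 8x8 blocks, transformed with the DCT, quantized, and entropy coded.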
I found this page as I was looking for information on Canon EXIF formats. I found the information very useful. Thank you for creating it.
I have written (on my Mac) a C program to extract selected EXIF info from the images from my digital cameras so I can load them in my database. I am trying to find a way to express the "quality" of the JPEG compression, but there does not seem to be a standard way to save this in the EXIF. My Pentax returns in tag 0x9102 the number of compressed bits per pixel, but the Canon does not, although it has a quality value in word 3 of the CanonMaker tag 0x0001. Any suggestion on how I could compute an approximation of quality, say in a range 0-12 as per Photoshop, independently of what device or software created the JPEG file?
Thanks in advance. Regards.
Daniel Guerin
 Hi Daniel -- That is certainly an interesting idea, and I did try something similar in creating my published quality comparison tables. There are only two measures of quality you can use: the camera-specific MakerNote tags and the quantization tables. For absolute comparison purposes, the make-specific tags will not be of much use to you.

How are you intending on using this quality rating in your database? Are you trying to compare quality between images from different cameras? This is not really practical as cameras all use different quantization tables and it is hard to linearize the comparison between 64-entry quantization matrices.

That said, you have a couple options:
  • Images created in Photoshop Save As - The quality rating (1-12) is preserved in the APP13 0x0406 tag. See section in Photoshop Save As vs Save for Web.
  • Images created in Photoshop Save For Web - The quality rating (1-100) is preserved in the APP12 Ducky tag. See section in Photoshop Save As vs Save for Web.
  • Images direct from Digital Camera - Your only option, besides using the Manufacturer's MakerNote interpretation of quality is to extract the actual Quantization Table. From this, you can do a lookup to get an approximation of the quality factor (based on JPEG standard), but this is not particularly easy to compare. Even if you were to find a rough approximation, the tables are often so different between digicams and Photoshop that it may not make sense to try comparing them. Even Photoshop's Save As and Save for Web produce very different quantization tables.
 Hello, I need MATLAB code for handwritten digit recognition. My problem is that the scanned numbers should be recognized without being compared against references or a database.
Thank you
 Hi - I don't know of any resources for this, so hopefully someone else might be able to provide the answer for you.
2006-07-11 John Lewis
 What are the rules behind the individual numbers in a quantization table, that is, how are the 64 numbers chosen? Why are these tables not symmetric with respect to the main diagonal? This implies that the compression in the x and y directions may be different. Also, shouldn't there be an overall scaling factor so that after compression, the tiles will continue to have the same average "energy" as before compression (i.e. Parseval-Plancherel's theorem)?
 Great questions. First -- the selection of quantization coefficients is up to the encoder implementer, but most people simply use the example provided in the Annex. You're right in that the luminance values are not symmetric across the axes in the frequency domain. Plus, the values are not monotonic in either direction. I am not absolutely certain of the reasoning, but I have seen a justification that suggests that the contrast sensitivity function for human beings is not monotonic -- it has a bandpass characteristic. While this might answer the monotonicity question, it doesn't directly explain why there would be differences in the vertical and horizontal directions. I would hazard a guess that perhaps the human eye may not have the same average distribution of cones in both axes.

If someone is aware of more specific reasoning for this, I would be very interested to hear it.

As for the energy distribution before and after compression, I understand that the DCT transform step itself satisfies Parseval and Plancherel's relations. The variance of a tile in the spatial domain can be represented in the DCT domain as the average of the squared AC coefficients (prior to quantization).
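For a single tile of N samples under an orthonormal DCT, Parseval's relation and the variance identity above can be written as:

```latex
\sum_{n=0}^{N-1} x_n^2 = \sum_{k=0}^{N-1} X_k^2
\qquad\Longrightarrow\qquad
\sigma^2 = \frac{1}{N} \sum_{k \neq 0} X_k^2
```

since the DC coefficient $X_0$ carries (a scaled version of) the tile mean, leaving the variance as the average of the squared AC coefficients.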

Hope that helps, Cal.
 Dear sir :
Thank you for helping me last time.
I would like to ask you about JPEG decompression: what are the steps, and what is the dequantization step?
Another question: do you know anything about speech compression? What is ADPCM and how does it work?
thank you
 The steps in JPEG decompression are the exact opposite of what I listed for compression. In decompression, the quantizer indices are decoded using Huffman and then each quantizer output index gets mapped back to a reconstructed coefficient value (dequantization). Have a look for the midpoint reconstruction dequantization rule on the internet. As for speech compression and ADPCM, I don't know enough to help.
How can I implement JPEG using MATLAB, especially the quantization step?
thank you
 I am sure that there are many great examples, but here is one that I found.
2006-04-20 Erkan Yavuz
 Dear Sir,

Some authors give a compression ratio (10:1, 30:1, etc) and some give quality factor (10%, 75%, etc) while talking about JPEG compression in their studies. To make a comparison, what is the relation with these terms, i.e can we say for example 30:1 compression ratio is more or less equivalent to bla bla percent quality factor JPEG compression?

Best regards

Erkan Yavuz
 Unfortunately, with basic JPEG compression (i.e. not JPEG2000), there is no predictable relationship between compression ratio and quality settings. The quality settings are application-dependent and are used to scale a built-in quantization matrix that also may differ between applications. Furthermore, the way that the quality setting is used to scale the base matrix is almost never linear.

That being said, it is easy to determine the compression ratio of an image: simply compare the compressed image stream size against the uncompressed original (resolution x 3 bytes per pixel).

Hope that helps... As alluded to earlier, the newer JPEG2000 format actually sets a known relationship between a compression quality setting and an output size (i.e. compression ratio).
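The calculation described above can be sketched as (function name is my own):

```c
/* Compression ratio relative to an uncompressed 24-bit RGB image:
 * (width x height x 3 bytes) divided by the JPEG stream size. */
double compression_ratio(long width, long height, long jpeg_bytes)
{
    return (double)(width * height * 3) / (double)jpeg_bytes;
}
```

For a fair comparison, jpeg_bytes should ideally count only the compressed scan data, excluding EXIF blocks and embedded thumbnails.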
2006-03-13 Mona Elshinawy
 Dear sir,
I would like to know how to compute the compression ratio using the different quantization tables provided above. I would also like to know how to compute the number of bits saved when using a certain quantization table.
Thank you in advance.
 Hi Mona -- Unfortunately, I don't think that you'll be able to determine the actual compression ratio from the quantization table (or quality setting). The run-length compression step (in standard JPEG) can give you varying amounts of compression depending upon image content (high-frequency components, for example). With JPEG2000, however, one can actually specify the resulting image size / compression.
 Hi... I'm currently making a JPEG compression program and would like a photo-quality setting for my compression. I looked at the tables and don't understand why the values for the quality settings of Photoshop, IrfanView, etc. are so much smaller than the standard JPEG ones. Does the standard JPEG quantization have worse quality? And if I use Canon's high-quality settings, wouldn't there be very little compression?

 Good question. According to the standard, the quality produced by the Annex matrix values is good, and when divided by 2, very good. Yet, as you point out, the digital cameras (at super fine) tend to use very high quality compression levels. At that setting, I get roughly 5:1 to 12:1 compression from my dSLR, depending on noise / ISO and content.

As for translation from quality factor into the new quantization matrix, here is the algorithm used by cjpeg:

Quality = 1..100
if (Quality < 50) { ScaleFactor = 5000 / Quality }
else { ScaleFactor = 200 - Quality * 2 }

Loop [i] Through Matrix:
  NewQuantMatrix[i] = (StandardMatrix[i] * ScaleFactor + 50) / 100
  Clamp NewQuantMatrix[i] to 1..255 (for baseline JPEG)
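In C, the scaling above might look like this (my own transcription; the equivalent IJG code lives in jpeg_quality_scaling() and jpeg_add_quant_table() in jcparam.c, which clamps to 1..255 when baseline output is forced):

```c
/* Scale a base quantization table (e.g. the Annex K luminance table)
 * by an IJG-style quality factor of 1 (worst) to 100 (best). */
void scale_quant_table(const int base[64], int quality, int out[64])
{
    int scale = (quality < 50) ? 5000 / quality : 200 - quality * 2;
    for (int i = 0; i < 64; i++) {
        int q = (base[i] * scale + 50) / 100;
        if (q < 1)   q = 1;    /* 0 is not a legal DQT value      */
        if (q > 255) q = 255;  /* baseline JPEG limit (8-bit DQT) */
        out[i] = q;
    }
}
```

Note that quality 50 leaves the base table unchanged (scale factor 100%), while quality 100 drives every entry down to the clamped minimum of 1.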

Hope that helps,

