JPEG Compression Quality from Quantization Tables
DQT / Quantization Matrix / JPEG Compression Quality tables for Digital Cameras and Digital Photography Software.
Index of Quantization Table Sources
Would you like to compare two quantization tables?
Want to extract a quantization table or detect a forged/edited photo?
Background Material
For some background on quantization tables and their use in JPEG compression, please see my JPEG compression article.
For examples and description of the breakdown of an image into 8x8 blocks, please see my article on the JPEG Minimum Coded Unit.
In JPEG compression, the quantization step is performed just prior to the JPEG Huffman Coding.
Example of a Quantization Table
The following diagram shows a typical luminance DQT found in a high-quality digital photo. Note that the numbers increase in magnitude as one approaches the bottom-right corner, which describes the amount of compression (loss) applied to high-frequency image components. Numbers towards the top-left corner (low-frequency & "DC") decrease as we typically don't want to discard this image information as the human visual system will tend to notice "errors" here.
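To make the role of these numbers concrete, the short sketch below applies a quantization table to one 8x8 block of DCT coefficients: each coefficient is divided by the corresponding table entry and rounded. This is only an illustration of the step itself (in C); the actual table and coefficient values would come from the encoder and the image being compressed.

#include <math.h>

/* Quantize one 8x8 block of DCT coefficients against a quantization table.
 * Larger table entries (typically toward the bottom-right, high-frequency
 * corner) force more coefficients to round to zero, which is where the
 * "loss" in lossy JPEG compression occurs. */
void quantize_block(const double dct[8][8], const int quant[8][8], int out[8][8])
{
    for (int v = 0; v < 8; v++)
        for (int u = 0; u < 8; u++)
            out[v][u] = (int)lround(dct[v][u] / quant[v][u]);
}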

Comparison of Quantization Tables
In trying to evaluate the differences between software packages and their "quality settings" when saving to JPEG, I have extracted the quantization tables used in each case. To do this, I used my JPEGsnoop utility, which locates the offset of the JFIF marker tags within each image, extracts the linear stream of bytes following the DQT marker, and rebuilds the quantization coefficient matrix from its zig-zag representation; the results are presented here.
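For readers who want to do something similar in their own code, the sketch below shows the general idea: locate a DQT segment (marker 0xFFDB), read the 64 zig-zag-ordered bytes that follow the precision/identifier byte, and rearrange them into an 8x8 matrix. This is a simplified illustration (8-bit table entries only, one table per segment), not JPEGsnoop's actual source code.

/* Map from zig-zag position to raster (row*8 + col) position. */
static const int zigzag_to_raster[64] = {
     0,  1,  8, 16,  9,  2,  3, 10,
    17, 24, 32, 25, 18, 11,  4,  5,
    12, 19, 26, 33, 40, 48, 41, 34,
    27, 20, 13,  6,  7, 14, 21, 28,
    35, 42, 49, 56, 57, 50, 43, 36,
    29, 22, 15, 23, 30, 37, 44, 51,
    58, 59, 52, 45, 38, 31, 39, 46,
    53, 60, 61, 54, 47, 55, 62, 63
};

/* 'seg' points just past the 0xFFDB marker and two-byte segment length,
 * i.e. at the precision/ID byte. Returns the table identifier (0-3). */
int read_dqt(const unsigned char *seg, int table[8][8])
{
    int id = seg[0] & 0x0F;              /* low nibble: table identifier */
    const unsigned char *coef = seg + 1; /* 64 bytes in zig-zag order    */
    for (int i = 0; i < 64; i++) {
        int pos = zigzag_to_raster[i];   /* undo the zig-zag ordering    */
        table[pos / 8][pos % 8] = coef[i];
    }
    return id;
}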
In each of the following examples, when I make a reference to "better quality" it is meant in the sense of less quantization error (compression error / loss) and therefore better resulting image quality. The converse is true for "worse quality". The baseline for each of these "better / worse" comparisons is the original digital camera photo as produced by my Canon 10d digital SLR in Fine JPG mode. This helps give one an idea of a comparable quality setting to that used from the camera itself.
Compare JPEG Quality of Digicams and Software
Please check out my JPEG Compression Quality comparison page, where you can interactively compare the quantization tables of different sources!
JPEG Standard
These are the quantization table / matrix coefficients that are recommended / suggested in the Annex of the JPEG standard.
Photoshop CS2
The JPEG quality range for Photoshop CS2 runs from 0-12. Presumably this range was chosen to suggest that 10 is really the most anyone should use, and that 11 and 12 are there purely for experimental purposes (not much gain in quality for a large increase in file size).
Note that the quantization tables written by Photoshop CS2 are concatenated into a single DQT segment of length 132 bytes, instead of the two separate 67-byte DQT segments that we see in other software programs and in the digital camera itself.
- One should realize that none of Photoshop's quality settings in the Save As dialog box will match any digital camera's JPEG compression tables. Therefore, you cannot avoid recompression error when resaving an image from a digicam in Photoshop!
Photoshop CS2 Chroma Subsampling
Note that Photoshop changes its use of chroma subsampling depending upon the image Quality setting in the Save dialog. Photoshop CS2 uses 1x1 (no chroma subsampling) in Quality 7-12, but 2x2 (chroma subsampling in both horizontal and vertical directions) in Quality 0-6. Note that this use of chroma subsampling throws away significantly more image information than the subsampling used by most digital cameras (except some basic digital Point & Shoot digicams).
Photoshop only offers 2x2 or 1x1 (none) chroma subsampling. Most digital cameras use 2x1 chroma subsampling. Therefore, you will have to either choose to over-compensate in the color channels (by doubling the horizontal color resolution of the original source) or discard (by eliminating / averaging the color resolution in the vertical component). I don't know why Adobe didn't choose to provide a saving option that would be closer to what digicams use — better yet, a choice in chroma subsampling, like IrfanView.
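For readers who would rather match a camera's 2x1 scheme in their own code than in a GUI tool, encoders built on the IJG libjpeg library expose the sampling factors directly. A minimal sketch, assuming the standard libjpeg API and that this is called after jpeg_set_defaults():

#include <jpeglib.h>

/* Sampling factors are relative: the component with the largest factor is
 * stored at full resolution. Y at 2x1 with Cb/Cr at 1x1 gives the chroma
 * channels half the horizontal but full vertical resolution -- the "2x1"
 * scheme used by most digital cameras. */
void set_2x1_chroma_subsampling(j_compress_ptr cinfo)
{
    cinfo->comp_info[0].h_samp_factor = 2;  /* Y  */
    cinfo->comp_info[0].v_samp_factor = 1;
    cinfo->comp_info[1].h_samp_factor = 1;  /* Cb */
    cinfo->comp_info[1].v_samp_factor = 1;
    cinfo->comp_info[2].h_samp_factor = 1;  /* Cr */
    cinfo->comp_info[2].v_samp_factor = 1;
}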
A Caution Regarding Photoshop Quality 7!
What many people don't know is that there is a quirk in the way that Photoshop defines its quality range. As mentioned earlier, Quality level 6 is the last point in which chroma subsampling is used. At Quality level 7 and higher, no chroma subsampling is used at all. With the amount of color information encoded now doubled, the file size would have naturally increased significantly at this level versus the previous level.
However, it is likely that Adobe decided to allocate the various quality levels with some relationship to the final compressed file size. Therefore, Adobe chose a poorer luminance and chrominance compression quality (i.e. higher level of compression) in Quality level 7 than Quality level 6!
- What this means is that the image quality of Quality level 7 is actually lower than Quality level 6 (at least from a luminance detail perspective).
This has apparently been confirmed with subjective MOS (mean opinion score) comparisons across various images at both quality levels.
The only advantage of image quality 7 over quality 6 is that the chrominance detail hasn't been subsampled. So, there is perhaps some degree of benefit to the ability to resolve color information over the previous level, but this benefit is also diminished somewhat by the higher-compressed chrominance table.
You can see the above juxtaposition in the JPEG Quality comparison list.
Photoshop CS2 Quality vs File Size
The following graph shows the relationship between the file size of a typical 6 megapixel photo (from a Canon 10d) and the quality setting selected within Photoshop CS2.
NOTE: There are many reasons why one should not take file size as an indication of quality. The most important are: different quantization tables, different Huffman tables / optimization, orientation differences, and additional metadata stored within the JPEG JFIF file.
- With Quality 10, the level of compression is "worse" than that used in the digital camera.
- Using Quality 12 results in "better" quality than the original Canon 10d digital camera photo.
Canon Digital Cameras
- Canon 1Ds Mark II - This camera uses the least amount of JPEG compression (and thus highest quality) of any of Canon's digital SLRs.
- Canon 1D Mark II - Not surprisingly, this camera uses the second to least amount of JPEG compression of any of Canon's digital SLRs.
- Canon 5d - There are extremely small differences in the luminance table in the Canon 5d versus the 30d/20d.
- Canon 30d, Canon 20d, Canon 10d (Fine Mode) - All of the prosumer line uses the same set of tables. One can see that the degree of quantization rounding used in Fine mode is quite minimal, i.e. there is not much compression loss at this setting.
- Canon 30d, Canon 20d, Canon 10d (Normal Mode) - In Normal mode, the quality suffers quite significantly, putting it at worse compression than just about any other digicam's Fine mode.
- Canon PowerShot S3 IS - The Canon PowerShot S3 IS has luminance compression comparable to the Canon 10d, but considerably worse chrominance compression. However, it is still much better than the Canon PowerShot SD400 in this regard.
- Canon PowerShot SD400 - Typical for a point and shoot digital camera, the JPEG compression quality is lower than the digital SLR counterparts.
Sony Digital Cameras
- Sony DSC H9 - Interestingly enough, it appears that the Sony DSC H9 is one of the first cameras I have come across that uses automatic / variable image compression. The quantization tables are selected at exposure time by the camera, rather than being fixed by a user-selectable quality setting with only a few positions. This has some very interesting implications. I am currently in the process of analyzing images from this camera to understand the reasoning and algorithm being used. Please see the Variable JPEG Compression page for more details on this camera's feature.
Sigma Digital Cameras
- Sigma SD10 - Note that there is no chroma subsampling in the Sigma dSLRs with the Foveon sensor. In addition, there is extremely little compression being used on this camera when compared to other dSLRs.
Nikon Digital Cameras
- Nikon D2x - Interestingly enough, it seems that the Nikon D2x and Sigma SD10 share the same quantization tables and hence compression quality.
IrfanView
- The JPEG quality range for IrfanView runs from 0 to 100.
- At quality setting 97, IrfanView uses slightly "worse" compression than the original Canon 10d digital photo.
- At quality setting 98, IrfanView uses slightly "better" compression than the original digital camera photo.
Reader's Comments:
Please leave your comments or suggestions below!
I wanted to know how the quantization table varies as the quality of the image increases or decreases. What is the mathematical relationship between quality factor and quantization table?
Two questions: what is variance and how do you calculate it?
One other question is how does JPEGsnoop calculate the scaling value?
Thank you
Take Photoshop's tables for example: does it have a base table like the JPEG standard table?
thank you
Not enough room for me to detail my effort here ;) but I posted it in a different forum for those who are curious:
http://www.imagemagick.org/discourse-server/viewtopic.php?f=22&t=22808
Thank you very much
Can you give me some advice on generating a custom quant table for my image?
Thanks
Amirreza
Feature request: please, please add a unique-color count feature to count unique colors based on RGB or CMYK values.
I can't find any app that does this.
It would be wonderful to count unique colors within the gamut profile.
Unlike file formats (such as GIF) with a color index table, JPEG files generally support a 24-bit color range (16 million colors). Counting distinct colors can be done, but it would slow down the decoder quite a lot. Given that I haven't seen much demand yet for color counting, I'm a bit hesitant to impact the general decoder performance. Can you give me some background how this might be useful? Thanks.
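For what it's worth, counting distinct 24-bit colors in a decoded image isn't conceptually difficult; the cost is an extra pass over the pixels and a working buffer. A minimal sketch in plain C over a packed RGB buffer (purely illustrative, and nothing to do with JPEGsnoop's internals):

#include <stdlib.h>

/* Count distinct 24-bit RGB colors using a bitmap with one bit per
 * possible color (2^24 bits = 2 MB). 'rgb' is packed 3 bytes per pixel. */
long count_unique_colors(const unsigned char *rgb, long num_pixels)
{
    unsigned char *seen = calloc(1u << 21, 1);   /* 2^24 bits = 2 MB */
    long count = 0;
    if (!seen) return -1;
    for (long i = 0; i < num_pixels; i++) {
        unsigned long c = (unsigned long)rgb[3*i]     << 16 |
                          (unsigned long)rgb[3*i + 1] << 8  |
                                         rgb[3*i + 2];
        if (!(seen[c >> 3] & (1u << (c & 7)))) {
            seen[c >> 3] |= (unsigned char)(1u << (c & 7));
            count++;
        }
    }
    free(seen);
    return count;
}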
I have a set of rather strange JPEG files produced with a Panasonic HDC-TM300 video camera. The images were saved with the "high-speed burst" capture mode, i.e., a sequence of 180 JPEG images over a period of 3 seconds.
The weird thing is that the images contain tell-tale JPEG artefacts (confirmed to fall nicely within MCU boundaries), yet the quantization matrix is all 1's.
I can of course recreate such images by first saving a synthetic image (no distortion) as a JPEG at say 80% quality, and then load and re-save it at 100% quality.
Can you think of any reason why the Panasonic camera would produce such deceptive JPEG files? It is clear that they quantize heavily (around 85% quality), but then proceed to re-encode with a "lossless" QM.
I can send you some samples if you like.
-F
Your software is awesome. I have learned a great deal from your website and from hands-on use of JPEGsnoop.
However, I have one question: in JPEGsnoop, the quality is approximated after extracting the quantization table. Besides the quality value, it also provides the 'variance'. What is the meaning of 'variance'?
Thank you very much.
How can I find the quantization table from the quantization step size? Can you help me?
Best regards
charan
I have strange jpg format: http://narod.ru/disk/13442757000/img3.jpg.html
What is the DJPEG marker?
I cannot open this image.
P.S. This JPEG was compressed using ijl.dll.
Best Regards
I managed to decode this file -- it is an image of a car. The JFIF portion of the file is missing the DHT and DQT tables, and the file has a DJPEG prepend header that I have never seen before.
My guess is that this file format is actually a framegrab from a proprietary video file format very similar to MJPEG (Motion JPEG) except that both the DHT and DQT parameters are defined in the DJPEG header. This will allow for minimization of file size on low resolution video files.
Since I don't know the format of the DJPEG header, I have taken a guess at the DHT and DQT tables, inserted these and then corrected for my errors in the DQT scale factors. The DHT table I used was from the MJPEG standard.
Cal
I also have a Casio 6.0 megapixel camera, model EX-Z600. I tried to take the same scene in my house with both cameras. Although I could only do 6 MP with the Casio and 5 MP (closest comparison) with the Samsung HZ15W, I noticed that the file sizes were 2.89 MB and 1.22 MB respectively. The Casio was taken at its lowest compression rate and the Samsung was taken at its middle quality compression ratio. It was difficult to really assess the quality differences between the 2 files (this was not a very controlled experiment), but the Samsung definitely took less space on a camera card (at least 50% less, if you adjust for the difference in the MP range I was using), hence more pictures in the same space. In fact, even at 12 MP with the Samsung, the file size was still less than the Casio's (2.72 MB vs. 2.89 MB) but with much better quality.
So what does this all mean? This is probably not the site to discuss something this non-technical but it is the site that I found to be most helpful.
But this compression ratio and its impact on quality and file size seem like they should be a factor in deciding what camera to buy. So how do you correlate the different factors here? Since all the digital compact cameras are within about 100.00 of each other in price, it would be good if people had this understanding of the different megapixel ranges and compression ratios (with regard to quality and file size) for these cameras.
I wonder what your thoughts are in this area. By the way, with everything I have learned from you and this site, I would still buy the new Samsung camera. I just did not appreciate at the time some of these advantages this new camera offered to the buyer. Had I been trying to decide between 2 different companies (e.g., my brother's Panasonic and my Samsung), I would have liked to have had this knowledge before I bought any camera.
Thank you for your site and support.
A 12 megapixel photo would normally take up about 36 MB on your disk if no compression was used. Obviously, this would be pretty impractical or wasteful, so your camera uses compression techniques to reduce the file size dramatically without changing the appearance of the photo too much. Any time that you use "lossy" compression, you are "losing" some of the detail from your original photo. How much you decide to throw away (ignore) is dictated by the compression ratio.
The more detail you preserve, the more file size you'll require. An uncompressed photo has a huge amount of "detail", and therefore a very large file size. A low compression ratio (eg "Superfine" setting) means that your camera will aim to retain the majority of your detail, and therefore it isn't able to reduce the file size so much. A high compression ratio (eg. "Normal" setting) directs your camera to toss away a lot of detail, which translates into a big reduction in file size. Unfortunately, by throwing away this detail, the image quality suffers (you'll start to see "blockiness", some unexpected colors, etc.).
The various quality settings on the camera ("Superfine", "Fine", "Normal") are simple names given to some mathematical tables (quantization tables) that are used by the camera to determine what details can be thrown away when it saves/compresses your photo.
Not all 12 megapixel photos taken by the same camera with the same quality setting will have the same file size, because some images have less "detail" to begin with (e.g. a photo of the sky), meaning that the camera is able to create a smaller file.
Hope that helps!
I was mainly looking at dumping the QTables, and am wondering what the two metrics "scaling" and "variance" mean? I am comparing two consumer products with built-in camera, where they have similar scaling value but very different variance value. What does that mean?
Essentially, a large variance value indicates that you should ignore the scaling factor and approximate quality factor estimates, as they are not as meaningful.
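To give a sense of what those two metrics represent, here is a rough sketch of one way a scaling factor and variance can be derived from an extracted table; this is my own illustration of the concept, not necessarily JPEGsnoop's exact implementation. The extracted table is compared coefficient-by-coefficient against a reference table (such as the JPEG standard's luminance table): the average of the per-coefficient ratios gives the scaling factor, and the spread of those ratios gives the variance.

/* Compare an extracted 64-entry quantization table against a reference
 * table. Returns the average scaling ratio; '*variance' reports how much
 * the individual ratios deviate from that average. A small variance means
 * the table looks like a simple scaling of the reference (so a single
 * "quality" estimate is meaningful); a large variance means it does not. */
double table_scaling(const int dqt[64], const int ref[64], double *variance)
{
    double ratio[64], mean = 0.0, var = 0.0;
    for (int i = 0; i < 64; i++) {
        ratio[i] = (double)dqt[i] / (double)ref[i];
        mean += ratio[i];
    }
    mean /= 64.0;
    for (int i = 0; i < 64; i++)
        var += (ratio[i] - mean) * (ratio[i] - mean);
    *variance = var / 64.0;
    return mean;
}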
>JPEG Standard
>These are the quantization table / matrix coefficients that are recommended / suggested in the Annex of the JPEG standard.
Can you please tell me in which standard and which annex these tables are defined?
I can't find them...
The only tables I can find are the ones from ITU-T T.81 Annex K.
But they are different:
Table K.1 "Luminance quantization table"
16 11 10 16 24 40 51 61
12 12 14 19 26 58 60 55
14 13 16 24 40 57 69 56
14 17 22 29 51 87 80 62
18 22 37 56 68 109 103 77
24 35 55 64 81 104 113 92
49 64 78 87 103 121 120 101
72 92 95 98 112 100 103 99
Table K.2 " Chrominance quantization table"
17 18 24 47 99 99 99 99
18 21 26 66 99 99 99 99
24 26 56 99 99 99 99 99
47 66 99 99 99 99 99 99
99 99 99 99 99 99 99 99
99 99 99 99 99 99 99 99
99 99 99 99 99 99 99 99
99 99 99 99 99 99 99 99
OK, a bit lost.
I read some of your answers on this very page suggesting interpolation or other means to derive tables of order higher than 8.
So MCU size is irrelevant to quantization?
Please correct my sequence as I'm a bit confused.
We start with any of these subsampling settings:
(1x1) == 8x8 MCU
(2x1) == 16x8 MCU
(2x2) == 16x16 MCU
And replace each original region (of the chosen size) by the resulting coded sequence only in the Cr and Cb channels, leaving the Y unchanged.
Then go on with the JPEG compression steps using an 8x8 or 16x16 or ... any quantization matrix?
Sorry one more thing. The two quantization tables mentioned; luminance and chrominance, used to quantize each channel, are the only ones? I read an image could have up to four tables. What does that mean?
A question. Are JPEGs compressed with 16x16 quantization tables (or any size other than 8x8) that common? I have an algorithm that measures blocking artifacts and it's devised for 8x8 blocks (standard tables). I was trying to make it adaptive, but can't find any JPEGs generated from 16x16 Q-tables for testing. If I wanted to check the tables, JPEGsnoop does it well, thank you, but I can never seem to find any 16x16 ones. How do I "make" some?
The tools mentioned, BetterJPEG and RealWorldPhoto, allow adjusting the quality and hence scaling the tables (which are not accessible).
Do I have to do it programmatically?
Sincere regards,
Salma
Two questions: Unless I missed it from somewhere else, what is variance and how do you calculate it? One other question is how does JPEGsnoop calculate the scaling value?
Thank you
Is there a value for the quantization tables that ensures 100% quality for a saved JPEG?
I can pass these tables to the encoder, but if I fill them with zeros (or with 1, 2, etc.) the JPEG still loses a number of unique colors and seems somewhat blurred. If I fill them with 200 or so, the JPEG is completely degraded.
Thx for any response
joe
My question is kind of similar like the one in comment 2006-02-18:
Since you can't change Photoshop's quantization tables and color subsampling settings, and cjpeg doesn't have a GUI where you could paste an image... Is there any software, preferably with a GUI, where one could define a quantization table and control the color subsampling so that it matches the digital camera's settings?
I guess saving edited JPEGs with the same settings as the digital camera used would minimize the generation loss compared to Photoshop's JPG save, wouldn't it?
I resample/resize the image to save.
Microsoft GDI is a dead project and lacks some power.
I played a lot with the Encoder and it just can't handle what I am looking for.
Anyway, I solved my problem.
Thanks
Your knowledge on JPEG, compression and overall imaging is exceptional!!
I searched for the last 5 days to find a solution to my problem, but I am lost.
I have an online photo album site where users upload photos.
The resampled/resized images (500x375 or 375x500) from the original JPG files are losing quality.
I uploaded the same file to Flickr, the quality is far better.
I can't get rid of this problem even with HighQuality options.
g.InterpolationMode = InterpolationMode.HighQualityBicubic;
g.SmoothingMode = SmoothingMode.HighQuality;
g.PixelOffsetMode = PixelOffsetMode.HighQuality;
g.CompositingQuality = CompositingQuality.HighQuality;
Also using quality 100 when saving.
I played with irfanview and found that if I check Disable color subsampling option during save, I get same quality and file size like Flickr.
My question: how can I disable color subsampling programmatically in my code when I create the 500x375 images?
I will wait for your response. Have a great day!!
Thanks
Musa
Digging a little into the MSDN, it would seem that you may want to look into setting the System.Drawing.Imaging Encoder parameters before saving out.
But I guess it depends on the algorithms that each program uses.
I am using ACDSee and I gathered that my Canon 400D uses something similar to 96% quality, judging from the output. But when I examine other files that I receive, already processed, I sometimes want to match the same compression level when returning the edited pic.
And thanks for this cool info you have here!
Regards from Portugal
Leonel
Is there any known method to encode a JPG image file ("SOS" and scan data) using predefined / user-given quantization tables (DQT), SOF0 parameter block and Huffman tables (DHT)?
I need this for reconstruction of EXIF thumbnail images.
Jhead -rgt does rebuild a thumbnail, but it uses different quantization tables and SOF0 parameters which are useless for my purposes.
As for changing the SOF0 parameters, you might want to see if exiftool can do this for you.
I see lots of quantization tables with low values like 2, 3, etc. for the DC and lower frequencies. The problem I'm having is that the resulting quantized amplitude does not fit in an 8-bit register.
What do you suggest can be done about it, without going to 16-bit quantized values? If I simply lower the values, block artifacts turn up pretty quickly.
You were right when you doubted that the photo reader would use a hard-coded DQT. After I read your reply, I modified the DQT in the thumb file that does work, rather than trying to fix the thumb file that does not display. This confirmed that the photo reader indeed uses the DQT in the file. Changing the DQT table in the working thumb file changed the appearance of the displayed thumb, but it is still displayed.
I am still befuddled why one thumb file displays while the other one does not when inserted into an image file and viewed in the photo player. Both thumb files open in all software-based photo editors I tried. Both thumb files have the same set of JPG markers in the same order, and identical EXIF entries. I could post a link to the two files for anyone to have a look at and solve this technical puzzle.
[UPDATE 09/15/2007]: After receiving the files from Udo, I figured that the HDTV Photo Player may expect that thumbnails are encoded with 2x1 chroma subsampling. For nearly all digicams, this assumption will be correct -- although it may still fail on images from some very low-end digicams that use 2x2 subsampling.
In the samples that he provided, JPEGsnoop reported that the images that didn't work on the player had hand-embedded thumbnails with either no subsampling or 2x2 (horizontal and vertical). I recoded one of his sample thumbnails to 2x1 and after inserting this thumbnail into his file with his HDTV Converter software, he reports that it now works!
Thanks for your wonderful page on DQT. I learned more from your page than any of the others. I also appreciate the CS2 compression tip.
I prefer to look at, and share, my camera pictures on a plasma HDTV using the excellent Panasonic photo reader. I have tried many photo editors; none saved images in a JPG format that could be read by the hardware-based photo reader.
I am writing a programme that scales images from a camera or images saved with Photoshop (and other photo editors) into the HDTV aspect ratio, and more importantly, saves the image in a JPG format which can be read by the hardware-based image reader. I have accomplished that for the main image. I am stuck on getting the embedded thumb preview into a format that can be read by the photo reader. Through trial and error, I have come to the conclusion that the problem is caused by the DQT table used to compress the thumb image. I suspect that the reader does not use the embedded thumb DQT, but uses the digital camera's default thumb image DQT instead, possibly because the thumb conversion is hard-coded in the converter IC used in the reader.
I have been looking for some sample code (any language -- C preferred) to create JPG images using custom DQT tables, but have not found a single one. Any help would be appreciated.
So that I understand your problem better: is the photo reader able to read any thumbnails successfully? You were suggesting that the reader uses a "digital camera thumb image default DQT" -- which default DQT would this be? (default for thumbs from Panasonic digicams?) Since there is no universal default DQT for thumbnails, this doesn't seem as likely.
What brought you to the conclusion about the reader ignoring the embedded thumbnail DQT entries? Is the reader able to see any image data at all, or does it simply report an error? If the DQT table were being ignored (and a hardcoded DQT table in the hardware were used instead), then you should not get an error in the reader, but just some garbled image preview instead. If the wrong DHT tables were used, then yes, the decoder may encounter a failure.
As for encoding JPEG images with custom DQT tables, all JPEG encoders can use an arbitrary DQT table. Have a look at the IJG JPEG encoder C source code, and you'll see that you can simply substitute any quantization table you like.
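As a concrete starting point, here is a minimal sketch using the IJG library's public API to install a custom luminance table before compressing. The table values shown are only placeholders (the JPEG standard's example values); substitute your own 64 entries, listed row by row.

#include <jpeglib.h>

static const unsigned int my_lum_table[64] = {
    16, 11, 10, 16, 24, 40, 51, 61,    /* ...replace with your own... */
    12, 12, 14, 19, 26, 58, 60, 55,
    14, 13, 16, 24, 40, 57, 69, 56,
    14, 17, 22, 29, 51, 87, 80, 62,
    18, 22, 37, 56, 68,109,103, 77,
    24, 35, 55, 64, 81,104,113, 92,
    49, 64, 78, 87,103,121,120,101,
    72, 92, 95, 98,112,100,103, 99
};

/* Call this after jpeg_set_defaults(): it overwrites table slot 0, which
 * the luminance component uses by default. A scale factor of 100 means
 * "use the values exactly as given"; TRUE clamps entries to the 8-bit
 * range required for baseline JPEG. */
void setup_custom_dqt(j_compress_ptr cinfo)
{
    jpeg_add_quant_table(cinfo, 0, my_lum_table, 100, TRUE);
}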
Please feel free to post any more questions you have -- I'm sure that we can get your photo reader to work for you!
with regards
vinodh
I'm looking for all of the actual quantization tables used for each digital camera's parameter settings. Could you tell me if it's possible to use any mathematical equations and the quantization tables above to get them all?
Best regards!
Eric
Each camera manufacturer and computer program selects their own quantization tables for each arbitrary quality setting. The only case in which you can mathematically figure out the quantization tables from the quality setting is in programs such as Irfanview and other IJG-based JPEG encoders, where the quantization table is scaled by the quality setting (according to the function shown in the comment posted on 2006-03-07).
To summarize, the Discrete Cosine Transform was chosen for JPEG images over Fourier Transforms because the DCT includes basis functions that have twice the period of the lowest-frequency Fourier function. This means that DCT will compress images with slower-changing components (such as the luminance) better than a Fourier Transform.
Then, the top-left corner of the interpolated 16x16 matrix may start as:
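16 13 11 10 10 13 16 20 24 ... (first row, taking the standard luminance table 16, 11, 10, 16, 24, ... and rounding the inserted averages down). A minimal sketch of this interpolation in C follows; the exact rounding and edge handling are my own assumptions rather than a definitive recipe.

#include <stdio.h>

/* Standard 8x8 luminance table (in raster order). */
static const int lum8[8][8] = {
    {16, 11, 10, 16, 24, 40, 51, 61},
    {12, 12, 14, 19, 26, 58, 60, 55},
    {14, 13, 16, 24, 40, 57, 69, 56},
    {14, 17, 22, 29, 51, 87, 80, 62},
    {18, 22, 37, 56, 68,109,103, 77},
    {24, 35, 55, 64, 81,104,113, 92},
    {49, 64, 78, 87,103,121,120,101},
    {72, 92, 95, 98,112,100,103, 99}
};

int main(void)
{
    int q16[16][16];
    /* Even rows/columns copy the original entries; odd rows/columns take
     * the average of their neighbours. The last row/column simply
     * replicate the nearest original value. */
    for (int y = 0; y < 16; y++) {
        for (int x = 0; x < 16; x++) {
            int y0 = y / 2, x0 = x / 2;
            int y1 = (y0 < 7 && (y & 1)) ? y0 + 1 : y0;
            int x1 = (x0 < 7 && (x & 1)) ? x0 + 1 : x0;
            q16[y][x] = (lum8[y0][x0] + lum8[y1][x0] +
                         lum8[y0][x1] + lum8[y1][x1]) / 4;
        }
    }
    for (int y = 0; y < 16; y++) {
        for (int x = 0; x < 16; x++)
            printf("%4d", q16[y][x]);
        printf("\n");
    }
    return 0;
}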
So, we are simply inserting an averaged value between each of the original 8x8 matrix values. As I mentioned earlier, this may not be the best way to create a 16x16 quantization matrix, but it may provide a starting point.
Of course, the best approach would be to process hundreds of photos experimenting with different tables (probably based on the interpolated 8x8 version) and then attempt to compare peoples' objective opinion of the quality of the compressed image against the file size savings. This is the process that was used to generate the original 8x8 tables that appear in the JPEG standard. Unfortunately, this is a very difficult and expensive process to use, and hence why many encoders probably rely on the JPEG standard's example tables as a starting point.
Clearly, you can create any quantization table you want, but the difficult part will be finding the optimum balance between image quality and file size reduction.
What is the logic behind the quantization table? Is it purely arbitrary or based on some logic like the HVS? What is the significance of the specific numbers? For example, why should the first number be 16 and not some other number?
Kindly answer
veeraswamy
So, the human visual system dictates the ratios between values that appear in the quantization table. The absolute values (the overall scale factor across the entire table) are determined by the total image compression ratio that one desires.
Many software programs have a built-in quantization table (sometimes based on the one suggested in the JPEG Standard), one for luminance and another for chrominance. They then scale all values in this base table according to the JPEG quality that the user chooses (e.g. 0-100). So, the absolute values in the resulting quantization table are based on the desired compression / quality tradeoff, but the original base table was based largely on characteristics of the HVS.
Furthermore, the fact that the Human Visual System is not very well-tuned to see error / loss in the color components (versus the luminance components) means that nearly all programs and digital cameras start with quantization tables that have much larger numbers in the chrominance table than the luminance table.
As for the selection of 16 versus some other number, it really is somewhat arbitrary. But analysis of thousands of photos with different measurements of human perception has allowed optimum quantization values to be chosen that offer the best compression (largest coefficients) while still preserving the majority of the image quality of the original. These optimum tables are provided in the Annex of the JPEG Standard. Note that not all manufacturers or software vendors chose to go with these suggested tables -- instead, some came up with their own, probably after a considerable amount of their own analysis.
I have a question, on my Canon A540 there is a 2816x2112 and a lower 2272x1704. In order to keep file size a little lower, I have been opting for SuperFine (the highest compression) while dropping down to the lower res. Looking at your tables, though, I see that the 1704 res isn't a multiple of 16, which will give me problems with doing lossless rotation. Would you recommend going back up to the higher res and dropping down to the Fine compression? I doubt I will ever print anything larger than a 8.5 x 11, if ever that large.
So, what this means is that you certainly can still rotate your photos losslessly, but don't use Windows XP Explorer or Windows Picture and Fax Viewer! Doing so will result in image degradation. Instead, you can use IrfanView, BetterJPEG, jpegtran or a host of other applications.
In light of this, by all means shoot with the reduced resolution, but keep the compression quality high (BTW, SuperFine will be the highest compression quality, but the lowest compression amount). When it comes time to rotate the photos, use something other than Windows to rotate, or better yet, use a photo import utility to perform this for you automatically on the basis of EXIF information.
As an aside, I would never recommend dropping the compression quality over reducing the resolution. JPEG compression artifacts generally do far more damage to the appearance of your picture than simply having less resolution.
Good luck!
Could you describe in more detail the process of transforming a BMP into a JPEG?
I found this page as I was looking for information on Canon EXIF formats. I found the information very useful. Thank you for creating it.
I have written (on my Mac) a C program to extract selected EXIF info from the images from my digital cameras so I can load them in my database. I am trying to find a way to express the "quality" of the JPEG compression, but there does not seem to be a standard way to save this in the EXIF. My Pentax returns in tag 0x9102 the number of compressed bits per pixel, but the Canon does not, although it has a quality value in word 3 of the CanonMaker tag 0x0001. Any suggestion on how I could compute an approximation of quality, say in a range 0-12 as per Photoshop, independently of what device or software created the JPEG file?
Thanks in advance. Regards.
Daniel Guerin
How are you intending on using this quality rating in your database? Are you trying to compare quality between images from different cameras? This is not really practical as cameras all use different quantization tables and it is hard to linearize the comparison between 64-entry quantization matrices.
That said, you have a couple options:
Thank you
If someone is aware of more specific reasoning for this, I would be very interested to hear it.
As for the energy distribution before and after compression, I understand that the DCT transform step itself satisfies Parseval's and Plancherel's relations. The variance of a tile in the spatial domain can be represented in the DCT domain as the average of the squared AC coefficients (prior to quantization).
Hope that helps, Cal.
Thank you for helping me last time.
I would like to ask you about JPEG decompression: what are the steps, and what is the dequantization step?
Another question: do you know anything about speech compression, and what is ADPCM and how does it work?
thank you
How can I implement JPEG using MATLAB, especially the quantization step?
thank you
Some authors give a compression ratio (10:1, 30:1, etc.) and some give a quality factor (10%, 75%, etc.) while talking about JPEG compression in their studies. To make a comparison, what is the relation between these terms, i.e. can we say for example that a 30:1 compression ratio is more or less equivalent to such-and-such percent quality factor JPEG compression?
Best regards
Erkan Yavuz
That being said, it is easy to determine the compression ratio of an image, as this can be done by comparing the compressed image stream size against the uncompressed original (resolution x 3 bytes per pixel).
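As a worked example of that calculation (the sizes here are purely illustrative): a 6-megapixel image is roughly 3000 x 2000 x 3 bytes = 18 MB uncompressed, so a 2.5 MB JPEG of the same image represents a compression ratio of about 7:1. In code form:

/* Approximate JPEG compression ratio: uncompressed RGB size (3 bytes per
 * pixel) divided by the size of the JPEG file on disk. */
double compression_ratio(long width, long height, long jpeg_file_bytes)
{
    double uncompressed = (double)width * (double)height * 3.0;
    return uncompressed / (double)jpeg_file_bytes;
}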
Hope that helps... As alluded to earlier, the newer JPEG2000 format actually sets a known relationship between a compression quality setting and an output size (i.e. compression ratio).
I would like to know how to compute the compression ratio using the different quantization tables provided above. I would also like to know how I can compute the number of bits saved when I use a certain quantization table.
Thank you in advance.
Mona
As for translation from quality factor into the new quantization matrix, here is the algorithm used by cjpeg:
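In outline (paraphrased from jcparam.c in the IJG library; the base tables are the example tables from the JPEG standard):

/* Map the user-visible quality setting (1-100) to a percentage scale
 * factor. Quality 50 maps to 100% (use the base table as-is); lower
 * quality settings grow the table entries quickly (more compression),
 * higher settings shrink them linearly toward zero (less compression). */
int jpeg_quality_scaling(int quality)
{
    if (quality <= 0)  quality = 1;
    if (quality > 100) quality = 100;
    return (quality < 50) ? 5000 / quality : 200 - quality * 2;
}

/* Every entry of the base table is then scaled by that percentage,
 * rounded, and clamped (255 is the limit for baseline 8-bit tables). */
void scale_quant_table(const unsigned int base[64], int scale_pct, unsigned int out[64])
{
    for (int i = 0; i < 64; i++) {
        long temp = ((long)base[i] * scale_pct + 50L) / 100L;
        if (temp <= 0)  temp = 1;     /* never allow a zero entry */
        if (temp > 255) temp = 255;
        out[i] = (unsigned int)temp;
    }
}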
Hope that helps,
Cal.