
 

Histogram

 

The histogram is a very useful tool for photography. It is essentially a plot of the number of pixels at each level of brightness, and it can be shown separately for each channel (red, green or blue) or for all three together (in most cases I'll only show the RGB histogram). The dark pixels are on the left of the x-axis and the light pixels on the right. The picture below on the left has roughly three brightness regions: land (dark), water (midtone) and sky (light). These can also be seen in the histograms, where each of the three areas has its own peak: the land on the left, the water in the middle and the sky on the right.

 

Knowing this, it is possible to gather a lot of information from the histogram. The example below on the right shows a balanced exposure as well as some derivatives of that picture (note that the histograms also show that information is lost in the overexposed and high contrast examples).

In general:

• If the histogram touches the left edge, the photo is underexposed and information is lost.

• If the histogram touches the right edge, the photo is overexposed and information is lost.

• If the bulk of the pixels is on the left side, the photo is too dark.

• If the bulk of the pixels is on the right side, the photo is too light.

• A narrow distribution indicates a scene with a low contrast.

• A distribution across the whole histogram indicates a balanced exposure.

 

Of course these rules don't always hold true as there are situations where you want a scene to be very light or dark, or very low in contrast.

 

The histogram is not only a useful tool when editing pictures, it is also the most reliable way of checking your photo on the LCD display, far better than judging the miniature preview of your photo on the LCD.
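To make the clipping checks concrete, here is a minimal sketch (assuming NumPy and an 8-bit RGB image stored as an H × W × 3 array; the function name is my own) that computes per-channel histograms and inspects the edges of the range:

```python
import numpy as np

def histogram_stats(img):
    """Per-channel histograms of an 8-bit RGB image (an H x W x 3 array),
    plus simple clipping checks at the edges of the brightness range."""
    stats = {}
    for i, name in enumerate(("red", "green", "blue")):
        counts, _ = np.histogram(img[..., i], bins=256, range=(0, 256))
        stats[name] = {
            "histogram": counts,
            "clipped_shadows": int(counts[0]),       # pixels touching the left edge
            "clipped_highlights": int(counts[255]),  # pixels touching the right edge
        }
    return stats

# Tiny synthetic image: two blown-out pixels in the red channel.
img = np.zeros((2, 2, 3), dtype=np.uint8)
img[..., 0] = [[255, 255], [10, 20]]
print(histogram_stats(img)["red"]["clipped_highlights"])  # 2
```

Counts piling up in bin 0 or bin 255 correspond to the clipped shadows and highlights described in the rules above.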

 

Bit depth

 

The bit depth describes how finely brightness is recorded in a picture, where each bit has only two possible values (it can either be 0 or it can be 1). This means that a bit depth of 8 bit gives 256 (2^8) levels for each channel (red, green and blue) and therefore a total of 16,777,216 colors (256 × 256 × 256). Here are some examples of what bit depth means for a range from pure green to black. As can be seen, bit depth does not influence the color range, only the number of levels within that range.

8 bit, 256 levels - 6 bit, 64 levels - 4 bit, 16 levels - 2 bit, 4 levels

The bit depth is especially important when editing pictures, as editing with a low bit depth can lead to posterization. In general, 8 bit is fine for viewing (unless your color space is really large), but for editing a larger bit depth is desirable.

 

Another thing to keep in mind is that the same bit depth can be described in two ways, which can be a bit confusing. To distinguish between the two, bpc or bpp is often added (bpc = bits per channel, bpp = bits per pixel). For example, if a file has a bit depth of 8 for each channel, then each pixel is described by the three channels combined which adds up to a total of 24 bit. So the 8 bit file is 8 bpc and 24 bpp, and similarly, a 16 bit file is 16 bpc and 48 bpp.
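The bpc/bpp arithmetic can be sketched in a few lines (a toy illustration; the function names are my own):

```python
CHANNELS = 3  # red, green and blue

def bits_per_pixel(bpc):
    """Total bits per pixel (bpp) from bits per channel (bpc)."""
    return bpc * CHANNELS

def total_colors(bpc):
    """Number of representable colors: levels per channel, cubed."""
    return (2 ** bpc) ** CHANNELS

print(bits_per_pixel(8))   # 24, so 8 bpc = 24 bpp
print(bits_per_pixel(16))  # 48, so 16 bpc = 48 bpp
print(total_colors(8))     # 16777216
```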

 

Bit depth - posterization

 

When an image suffers from posterization, this means that the transitions between colors are no longer gradual. An example of this effect is shown below. Two blue bars were produced, ranging from 115 to 130, one with a bit depth of 8 and one with a bit depth of 16. Both look exactly like the original bar below (the gradient is hard to see, but it is there!). This is of course an extreme example, but it shows that bit depth is something to keep in mind when editing pictures.

original bar - result from the 8 bit original - result from the 16 bit original

A bit depth of 8 gives us 256 (2^8) shades of blue, spread out over 256 levels, so 1 blue shade per level, which means that our original 8 bit blue bar has 16 shades of blue (from 115 to 130). If we now spread this so that 130 becomes pure blue (255) and 115 becomes pure black (0) then we get the bar "result from the 8 bit original".

From a distance it might look OK, but on close inspection it is clear that this gives very ugly transitions. This is because we are taking the original 16 blue shades and stretching them out over 256 levels. If you look closely and count the individual pieces in this bar you will find 16, corresponding to the original 16 levels. The 16 levels can also easily be identified in the corresponding histogram, since there are 16 bars evenly spread over the whole histogram.

Now if we perform the same procedure on the 16 bit bluescale, it means that there are now 65536 shades of blue, divided over 256 levels. This means that each level has 256 shades of blue (65536 shades/256 levels). So if we take the original 16 levels, we'll end up with 4096 shades of blue in total (16 levels × 256 shades per level) which we are spreading out over 256 levels (pure blue to pure black). So we end up with 16 blue shades per level, which is more than enough information to yield a smooth transition as can be seen in "result from the 16 bit original".
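The level counting above can be checked with a short sketch (assuming NumPy; the 115-130 range is taken from the example above, and the variable names are my own):

```python
import numpy as np

# 8-bit case: the 115-130 gradient contains only 16 distinct levels.
bar8 = np.arange(115, 131, dtype=np.float64)
stretched8 = np.round((bar8 - 115) / (130 - 115) * 255)
print(len(np.unique(stretched8)))   # 16 -> visible banding after the stretch

# 16-bit case: the same range holds 16 * 256 = 4096 levels,
# plenty to fill all 256 output levels smoothly.
bar16 = np.arange(115 * 256, 130 * 256 + 1, dtype=np.float64)
stretched16 = np.round((bar16 - bar16[0]) / (bar16[-1] - bar16[0]) * 255)
print(len(np.unique(stretched16)))  # 256 -> smooth transition
```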

On the right is an example of what posterization would look like in a photo.

 

 

Noise

 

Noise is the collection of irregularities that you will see in digital pictures, ranging from colored artifacts to relatively even distributed grainy structures.

 

Noise - causes of noise

 

There are several causes of noise:

• Photon noise is caused by statistical fluctuations in the number of photons reaching the sensor; it grows with the square root of the exposure, so its contribution relative to the signal decreases as the exposure increases.

• Pixel response non-uniformity (PRNU) is caused by differences in the efficiency of individual pixels and grows proportionally to the exposure.

• Dark noise is caused by statistical fluctuations in the number of electrons thermally generated in the sensor, also known as dark current. This type is strongly dependent on temperature, which is why photos taken in summer can contain more noise than comparable photos shot in winter. It grows proportionally to the exposure, but really long exposures cause an additional problem: the sensor heats up during the exposure, generating proportionally even more dark noise, which is also known as amplifier glow. To suppress this type of noise, sensor coolers are available for some cameras; the low budget solution is to take really long exposures only in the winter...

On the right are some examples of how both the ISO value and the exposure time influence amplifier glow caused by heating of the sensor during long exposures (this test was done with an older and pretty noisy camera, so the amount of amplifier glow is not really representative for modern cameras).

• Read noise is caused by voltage fluctuations of the analog signal, which means that it is independent of the exposure level.

• Quantization error is only a small source of noise, which is caused by rounding errors when converting the analog signal to digital.

 

Noise - types of noise

 

Apart from having different causes, noise can also be divided into several types according to how it manifests, some of which are described below:

• Random noise is the noise that makes pictures look less smooth, and this form cannot be predicted (not a big surprise for something called random noise...). An example of random noise is clearly visible in the sky in the picture below on the left (100 % crop).

Random noise can be divided into two types of noise, luminance noise (variation in the brightness) and color noise (variation in color). Examples of both are shown below on the right (crops at 200 %):

• Fixed pattern noise (or hot pixels) consists of pixels that, under the same conditions, predictably turn up in every picture. They are more obvious at higher ISO values and/or longer shutter speeds. Because it is predictable, most of the hot pixels can be removed by black frame subtraction, which can be done in camera or later on the computer. To do it on the computer, you take a picture with exactly the same settings as the original, but with the lens blocked so that no light enters (the easiest way is to put on the lens cap). This is a so-called black frame, and when it is subtracted from the original, most of the fixed pattern noise will be gone. When done in camera, the camera automatically takes a second picture with the same settings after the first, but without opening the shutter. This does mean that taking a picture takes twice as long, so after a 10 second exposure it will take about 20 seconds before you can take another picture. A typical example of hot pixels (the red, blue and white dots) is shown in the long exposure shot on the right (100 % crop, the stripes are stars). More about removing fixed pattern noise can be read here.

• Banding noise is noise that appears as horizontally or vertically aligned streaks. The problem with this type is that the human brain is very good at observing patterns and therefore banding noise can be very apparent, even though the noise might not be very strong. It also depends a lot on camera models, some show more banding noise than others. On the left is an example taken at the insanely high ISO value of 25600! It is known that bright lights can cause bands at these high ISO values, so this is nothing to worry about (who wants to shoot at ISO 25600 anyway??). But some "regular" banding noise can also be seen in the dark upper part. Notice how all of it is gone at ISO 800.
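The black frame subtraction mentioned above can be sketched as follows (assuming NumPy and 8-bit data; a toy example, not a camera's actual implementation):

```python
import numpy as np

def subtract_black_frame(photo, black_frame):
    """Remove fixed pattern noise by subtracting a black frame that was
    taken with identical settings but with the lens capped.
    Both inputs are 8-bit arrays; the result is clipped to stay in range."""
    diff = photo.astype(np.int16) - black_frame.astype(np.int16)
    return np.clip(diff, 0, 255).astype(np.uint8)

# Toy example: the hot pixel at position (0, 1) appears in both frames.
photo = np.array([[10, 250], [12, 11]], dtype=np.uint8)
black = np.array([[0, 240], [0, 0]], dtype=np.uint8)
print(subtract_black_frame(photo, black))  # the hot pixel drops back to 10
```

The intermediate cast to a signed type avoids wrap-around when the black frame value exceeds the photo value for a pixel.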


Noise - expose to the right/ISO to the right (ETTR/ITTR)

 

When using a digital camera, lower noise is obtained if you "expose to the right". This means that instead of exposing so that the picture is well balanced, you overexpose to the point where the highlights are almost washed out. In other words, you expose in such a way that the histogram sits as far as possible to the right without losing any highlights (hence "expose to the right"), as shown below. For scenes with a lot of highlights this method offers little room for improvement, but for scenes without strong highlights it can definitely be worth it. The first picture is the histogram of a scene exposed as usual, and it is obvious there is a lot of unused space at the right of the histogram. When "exposed to the right" the photo will be a bit overexposed and the histogram will look like the one in the middle. When the overexposure is corrected on the computer you end up with the histogram on the right, which looks much like the original one, but this time the photo has much less noise (not visible on the histogram).

Histogram of a normal exposure - exposed to the right (+1 stop) - corrected on the computer (-1 stop)

Related to expose to the right is ISO to the right, which means that instead of changing the exposure, you raise the ISO value. The sensor receives exactly the same amount of light, but because higher ISO values produce less read noise, the picture will still be cleaner. So where expose to the right increases the signal to noise ratio by increasing the signal, ISO to the right does so by decreasing the noise.

 

The biggest problem with these methods is avoiding overexposure. Large bright parts are easy to see, but if the bright parts are small, there is a chance you won't notice them on the histogram. So even if the histogram shows no clipped highlights, they might still be there in small quantities, which can of course render the expose to the right/ISO to the right photo useless. On the other hand, when the highlights are only small, there is a pretty good chance that the overexposure won't be visible precisely because they are so small. But it's better to be safe than sorry, and I always take a "regular" exposure as a backup when using expose to the right/ISO to the right.

 

So when to use expose to the right and when to use ISO to the right? Basically, always use expose to the right, unless you can't, in which case you resort to ISO to the right. So if you take a picture at 1/160, ISO 400, f/8 and you want to reduce the noise, you can use expose to the right and lengthen the shutter speed by 1 stop, giving 1/80, ISO 400, f/8. But if you can't use a shutter speed longer than 1/160 and you don't want a larger aperture, then you use ISO to the right and use 1/160, ISO 800, f/8.
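The stop arithmetic in this example can be sketched as follows (the function names are my own):

```python
def shutter_after_stops(shutter_s, stops):
    """New shutter speed in seconds after opening up by `stops` stops:
    each stop doubles the exposure time."""
    return shutter_s * 2 ** stops

def iso_after_stops(iso, stops):
    """New ISO value after raising by `stops` stops: each stop doubles the ISO."""
    return iso * 2 ** stops

# Expose to the right: 1/160 s opened up by 1 stop becomes 1/80 s.
print(1 / shutter_after_stops(1 / 160, 1))  # 80.0
# ISO to the right: ISO 400 raised by 1 stop becomes ISO 800.
print(iso_after_stops(400, 1))              # 800
```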

 

Expose to the right and ISO to the right should only be used when working in RAW mode, since they involve quite a bit of editing on the computer, and with 8 bit JPGs that inevitably gives posterization and bad results. Moreover, a RAW file contains more information than what is shown by the histogram on the camera, giving a bit of extra flexibility.

 

On the right are crops at 200 % of an experiment I did to show expose to the right and ISO to the right, where I deliberately took a dark part of the picture for the crop since the expose to the right/ISO to the right effect is strongest in dark regions. The pictures were overexposed 1 stop by doubling the shutter speed and by doubling the ISO value respectively, and the RAW files were adjusted on the computer so that the brightness matched the normal exposure (so I made the overexposed RAW files 1 stop darker on the computer). As can be seen, the overexposed pictures look smoother than the normal exposure, especially the expose to the right shot.

 

It is clear that in both cases I overexposed a bit too much since the bright part on the right is a bit washed out, but that doesn't matter for the principle itself. These pictures still perfectly demonstrate the benefit of expose to the right/ISO to the right. The washed out highlights are just a reminder of the fact that you should be really careful not to blow them out with expose to the right/ISO to the right.

 

A thing to bear in mind is that the histogram shown by your camera is based on a converted JPG file, even when working in RAW! To obtain this JPG file, the original values in the red and blue channels are multiplied relative to the green channel, to compensate both for the different sensitivities of the sensor's color channels and for the type of light when the photo was taken (midday light, sunset, indoor, etc.). This compensation is done according to the chosen white balance, and as a result the histogram can show these channels as blown out while the original signal from the sensor is not blown out at all! To work around this, a so-called universal white balance, or uniWB, can be used (a uniWB can be downloaded from the internet for many cameras). The uniWB multiplies all the channels by a factor of roughly one, which gives a horrible greenish picture, but the histograms shown are now much more accurate for use with expose to the right and ISO to the right. The white balance can later be adjusted on the computer to a normal value to obtain a correct picture (which means that the uniWB should only be used when shooting RAW!).
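A toy sketch of why the in-camera histogram can mislead (assuming NumPy; the 1.5 multiplier is a hypothetical white balance factor, not a real camera value):

```python
import numpy as np

def preview_channel(raw_channel, wb_multiplier, max_level=255):
    """Mimic the in-camera JPG conversion: multiply a channel by its
    white balance factor and clip at the maximum output level."""
    return np.minimum(raw_channel * wb_multiplier, max_level)

raw_blue = np.array([200.0, 220.0, 180.0])  # unclipped in the RAW data
blue_multiplier = 1.5                       # hypothetical white balance factor

preview = preview_channel(raw_blue, blue_multiplier)
print(preview.max() >= 255)  # True: the JPG histogram looks blown out
print(raw_blue.max() < 255)  # True: the RAW signal itself is fine
```

With a uniWB, the multiplier would be roughly 1, so the preview histogram would track the RAW values much more closely.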

 

The example below on the left shows this principle. The first picture is a photo of a sky taken with automatic white balance, and the histogram clearly shows a blown out blue channel, suggesting the exposure needs to be decreased. But the same picture taken with uniWB shows that the blue signal is in fact not blown out at all! So this exposure is not blown out in the RAW file, which is what matters for expose to the right/ISO to the right.

Below on the right is a real life example of a scene that was exposed to the right (overexposed 1.6 stops) and subsequently corrected on the computer.

 

Noise - ETTR/ITTR and combining exposures

 

As described above, a thing to keep in mind with expose to the right/ISO to the right is to be careful not to overexpose the lighter parts of the picture. However, there is a workaround for this issue which involves combining an overexposed and a normal exposure. Basically, the dark parts of the overexposed photo are combined with the lighter parts of the normal exposure.

 

On the right is an example, where the first photo is the normal exposure. The second is the photo that was overexposed by two stops and then darkened by two stops on the computer. This gives very clean dark parts, but the lighter parts are washed out because they were overexposed. However, combining this photo with the normal exposure using the "blend if" sliders in Photoshop gives a photo which is both clean in the dark parts and has no washed out highlights. (The "blend if" sliders need to be adjusted so that the lighter parts of the overexposed photo are not used, and the lighter parts of the regular exposure are used instead. These sliders can be found in Photoshop under "layer style/blending options".) So where expose to the right/ISO to the right is limited by the amount of unused space in the histogram, this method of combining exposures does not have that problem, which can be an advantage, although it obviously involves more post processing. Crops are at 200 %.

 

Noise - ETTR/ITTR - the theory behind ETTR

 

It is often said that the benefit of expose to the right comes from the darker parts having fewer levels (see film vs digital for an explanation) than the lighter parts, but to me that does not seem to be the reason. For me, the reason that expose to the right gives cleaner results is that the sensor is exposed for a longer time and thus receives more photons, which simply gives a higher signal to noise ratio.

 

The picture below shows the distribution of levels over a dynamic range of 5 stops, both for an 8 bit file (top row, 256 levels in total) and a 12 bit file (below the 8 bit row, 4096 levels in total). Now if we expose to the right by 1 stop, which means overexposing by 1 stop and then decreasing the brightness again by 1 stop on the computer, all the levels shift. This is seen in the two lowest rows, again for 8 bit and 12 bit (note also how, in this particular case, there are no levels left in the brightest stop after the expose to the right treatment, which is why you have to be careful not to overexpose: you would lose a lot of information).

Levels per stop, from darkest to brightest:

8 bit: 8 16 32 64 128
12 bit: 128 256 512 1024 2048

 

After ETTR (1 stop overexposure) and correction on the computer:

8 bit: 16 32 64 128 0
12 bit: 256 512 1024 2048 0
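The level distributions in these rows follow directly from the linear response: each stop holds half of the remaining levels. A small sketch (the function name is my own):

```python
def levels_per_stop(total_levels, stops=5):
    """Linear sensor: the brightest stop holds half of all levels, the next
    stop half of the remainder, and so on. Returned darkest stop first."""
    levels = []
    remaining = total_levels
    for _ in range(stops):
        half = remaining // 2
        levels.append(half)
        remaining -= half
    return list(reversed(levels))

print(levels_per_stop(256))   # [8, 16, 32, 64, 128]  (8 bit)
print(levels_per_stop(4096))  # [128, 256, 512, 1024, 2048]  (12 bit)
```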

 

Now you can see that a properly exposed 12 bit RAW file has 128 levels in the darkest stop, whereas an 8 bit expose to the right file has only 16 levels in the darkest stop (the values 128 and 16 above). So according to the generally advocated theory, the one with the fewest levels in the lowest stop should have the most noise. But this is not true, as can be seen in the examples on the right. The first picture is a normal exposure in RAW, which gives quite a bit of noise. Correcting the expose to the right shot in RAW gives much less noise, as expected. But look at the corrected expose to the right shot that was created from an 8 bit TIFF file (I saved the overexposed RAW file as an 8 bit TIFF and then applied the one stop correction to that TIFF): it has approximately the same low level of noise as the expose to the right from the 12 bit RAW file, despite the fact that its number of levels is far lower than that of the normally exposed RAW file.

 

This, to me, indicates that expose to the right is not based on the amount of levels, but is simply a result of an increased signal to noise ratio because of the longer exposure.

 

Noise - ETTR/ITTR - the theory behind ITTR

 

Where expose to the right decreases noise by increasing the amount of signal (and thus increasing the signal to noise ratio), ISO to the right decreases noise in a different way, since the exposure (the shutter speed/aperture combination) remains the same. In this case the reason is that higher ISO values have a lower read noise, which results in a better signal to noise ratio. This might seem counterintuitive, as it is widely known that photos taken at 1/60, f/8, ISO 400 will show more noise than photos taken at 1/15, f/8, ISO 100. But just as expose to the right is based on the fact that a bigger exposure (longer shutter speed or larger aperture) gives a better signal to noise ratio, we can also deduce the opposite: a smaller exposure results in a decreased signal to noise ratio and thus more noise. And that is why high ISO is blamed for increasing noise, while it is actually the smaller exposure that often goes hand in hand with higher ISO values that causes it, not the higher ISO itself. More info about ISO can be found under exposing/ISO.

 

Noise - stacking

 

To increase the signal to noise ratio, and thus decrease the amount of noise, it is also possible to take several identical exposures of the same scene and stack them into one photo. This is done by simply taking multiple exposures of the same scene and blending them so that for each layer the opacity is set at 100/"number of the layer" percent: layer 1: 100 % opacity, layer 2: 50 % opacity, layer 3: 33 % opacity, layer 4: 25 % opacity, etc.

 

The principle behind this is that part of the noise is completely random and will change in every shot, and this will even out when stacking multiple images. The more images that are combined, the more the noise is reduced.
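A small simulation of this averaging effect (assuming NumPy; the signal and noise levels are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_exposure(signal, noise_std=10.0):
    """Simulate one exposure: the true signal plus random noise."""
    return signal + rng.normal(0.0, noise_std, size=signal.shape)

signal = np.full(10000, 100.0)  # a flat grey patch
single = noisy_exposure(signal)

# Stack 16 exposures by averaging; the opacity trick in the text
# (layer n at 100/n percent) produces exactly this average.
stack = np.mean([noisy_exposure(signal) for _ in range(16)], axis=0)

# Random noise drops by roughly the square root of the number of frames.
print(np.std(single) / np.std(stack))  # close to sqrt(16) = 4
```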

 

This method is not very versatile though, since stacking multiple exposures means that the scene cannot change while taking those exposures. But if the scene does not change, you might as well take a low ISO value and a long shutter speed, which requires less editing afterwards. An example of how stacking images reduces noise is on the right, crops are at 200 %.

 

Noise - dark frames

 

In contrast to the techniques described above, dark frames cannot be used to minimize random noise. This is a technique to remove hot pixels, and is especially important for night photography, which makes use of long exposures and high ISO values. It is described under star photography/dark frames.

 

Sensor

 

Foveon type sensor   Bayer type sensor

The principle of a digital sensor is that when a photon hits a pixel, an electron is released and all the electrons which are released during the exposure are stored, which creates a voltage. After the exposure is done, this voltage is converted to a digital value and all these values together create the information needed to create the picture.

 

There are basically two main types of sensors at this moment, the Bayer type sensor and the Foveon type sensor. Only Sigma delivers cameras with the Foveon type, so the Bayer type is by far the predominant type. The difference between the two can be seen in the picture on the left.

 

So the Bayer type has one color for each pixel, in a green:red:blue ratio of 2:1:1, which is based on the fact that the human eye is most sensitive to green. The Foveon type, however, has three layers on top of each other and detects all three colors at each pixel site (the red and green light is not blocked by the blue sensor, and the red light is not blocked by the green sensor).

 

This difference in design has some consequences. First, because the Bayer type detects only a specific color at each pixel site, it only uses 1/3 of the light that hits the sensor (1/2 of the green light, 1/4 of the red and 1/4 of the blue light), while the Foveon type uses all available light. Furthermore, a green pixel in the Bayer type sensor has no information about the blue and red channels, so the values of the other colors for that pixel are estimated using an interpolating algorithm, based on the information of the neighboring pixels (which causes some softening). This method has its flaws as a 10 mp sensor won't produce a true 10 mp picture (more on this below under "effective vs actual pixels").

 

Sensor - effective vs actual pixels

 

A sensor is often described as having a higher number of actual pixels than effective pixels, which has several causes. First, part of the outer edge of the sensor is often covered and does not receive any light. These pixels are used as black references.

Second, because of the interpolation process described above for Bayer sensors, some more pixels are lost. Since each pixel only has information for one of the three colors, algorithms use information from the neighboring pixels to approximate the other two colors for that pixel. But the outer rows have less neighbors than pixels elsewhere on the sensor, so their information is less accurate since it is based on a smaller number of neighbors. For this reason, these pixels are removed.

 

A funny thing is that these hidden pixels can be recovered from the RAW file! You need to download a program called "DNG Recover Edges" (written by none other than Thomas Knoll) and save your RAW file as a DNG. The program then recovers the hidden pixels from the DNG file, like in the example on the right. The original has a size of 4256 × 2832 pixels (12,052,992 pixels) and the recovered picture has 4284 × 2844 pixels (12,183,696 pixels), so an extra 130,704 pixels are recovered. It is not a lot, but it might save your picture if you composed it too tight! Of course, these edges may have lower quality.

 

Sensor - aliasing

 

When very fine details are present in the picture, a sensor can give rise to an interference pattern known as aliasing.

The examples below show what aliasing can look like, where the top row is the light which arrives at the sensor as a black and white pattern. The blueish row represents the individual pixels, and the bottom row is what the pixels will detect. Different patterns emerge as the black and white details start to approach the pixel size.

 ex 1 - ex 2 - ex 3 - ex 4 - ex 5 - ex 6 - ex 7 - ex 8 - ex 9 


This is only a monochromatic example, but in the case of Bayer-type camera sensors this effect can occur separately for red, green and blue, giving colorful patterns where there aren't any. To prevent this from occurring too much, an anti-aliasing filter is placed in front of the sensor, effectively softening the picture a bit (which, together with the interpolation algorithm, is the reason why sharpening is applied to pictures afterwards).
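The folding of fine detail into a coarse false pattern can be simulated in a few lines (assuming NumPy; the figure of 0.9 cycles per pixel is made up for illustration):

```python
import numpy as np

# A pattern of 0.9 cycles per pixel, sampled once per pixel. Frequencies
# above 0.5 cycles/pixel (the Nyquist limit) fold back, so the sensor
# records a coarse pattern of |1 - 0.9| = 0.1 cycles per pixel instead.
n = 200
x = np.arange(n)
detail = np.sin(2 * np.pi * 0.9 * x)  # the fine detail hitting the pixels

spectrum = np.abs(np.fft.rfft(detail))
freqs = np.fft.rfftfreq(n, d=1.0)
print(freqs[np.argmax(spectrum)])     # 0.1: the aliased, not the true, frequency
```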

 

On the left is a classic example of a moiré pattern as a result of aliasing. This is a uniformly colored strap of my backpack, but the pattern of the strap confuses the camera and causes it to show a moiré pattern where there is none in real life.

 

 

 

 

Sensor - pixel size

 

The pixel size is, as the name suggests, the size of the pixels on your sensor, although this is not entirely correct. Since there are unused spaces in between two pixels, it's more correct to see the value obtained from the calculator below as the distance from pixel centre to pixel centre.

 

In general, a larger pixel has a better signal to noise ratio and will therefore produce less noise and have a higher dynamic range, because a smaller pixel receives fewer photons and its signal needs to be amplified more. For this reason, a 10 mp photo from a full frame sensor will look less noisy than a 10 mp photo from a compact camera (with, for example, 1/2 the width and height of full frame) when viewing both at the same enlargement, for example 100 % on screen.

 

However, this is not an entirely fair comparison because the pixels are not only smaller, but are also spread over a smaller surface. If you take this into account then the two photos should be compared at 100 % for the full frame and 50 % for the small sensor, and this will in most cases give fairly identical results.
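The calculation behind the calculator below amounts to the following (a sketch; the function name is my own):

```python
import math

def pixel_pitch_um(width_mm, height_mm, megapixels):
    """Approximate centre-to-centre pixel distance in micrometres,
    assuming square pixels spread evenly over the sensor area."""
    pitch_mm = math.sqrt((width_mm * height_mm) / (megapixels * 1e6))
    return pitch_mm * 1000.0

# Full frame (36 x 24 mm) vs a small compact sensor, both at 12 megapixels:
print(round(pixel_pitch_um(36, 24, 12), 2))    # 8.49
print(round(pixel_pitch_um(6.2, 4.6, 12), 2))  # 1.54
```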

[Calculator: sensor dimensions (mm) × number of megapixels → pixel size (µm)]

 

Film vs digital

 

There it is, the big question on which about a million pages have been spent discussing every possible detail. So I'm not going to do that! There are just too many factors involved; one type of film is not the other, one digital camera is not the other, etc. And of course, personal preferences matter most. So in the end, both have their pros and cons and it is up to yourself to decide.

I know I'm happy with my digital camera for a number of reasons. The immediate feedback (especially the histograms) is very helpful when dealing with difficult situations, and the instant reward of seeing your pictures upon arriving home is great I think, since I'm not the world's most patient person.

But there are two things that bother me about digital. The first is the noise when taking long exposures (as in minutes), but there are ways to work around that. The second, and only real, disadvantage of digital is the clipping of highlights, which can ruin photos pretty easily. This has to do with the different behavior in sensitivity over the dynamic range for film and digital.

 

If you look at the schematic (and in no way scientifically correct) exposure curves on the right then you see the response of analog film to light hitting the film, the so called S-curve. The dark regions are on the left of the x-axis, while bright areas are on the right. This means that for really bright areas, the shoulder of the curve will still make the transition look gradual. If we now look at digital, then we see that at a certain point the sensor is overloaded and that is exactly the problem. Everything brighter than that becomes just flat and featureless.

 

For the shadow parts the same principle is not really a big deal, since really dark/black shadow parts in pictures don't look unnatural, we see very dark shadow parts every day. But we do not see featureless burned out objects a lot, only the sun and maybe some bright reflecting areas. And as we are not used to seeing them in everyday life, they don't look very natural in pictures either. The picture below on the left is an example of how a burned out sun does not look unnatural, but the picture on the right is a good example of how ugly clipped highlights can look, those featureless bright areas just don't look natural. But notice how the clouds in the analog picture look a lot better (I know the cloud types are a bit different which makes comparing them a bit unfair, but the principle should be clear from these photos).

 

Furthermore, the areas near the totally clipped areas can have a bit of a color cast, while they shouldn't. This is especially obvious when an overexposed picture is darkened on the computer. The picture on the left already has a slightly greenish haze on the top right, but after adjusting the exposure by -1.5 stops (done on the RAW file, which is why there are still features in the sky even though the original was plain white) there is a ridiculous amount of green in the sky on the right and on the left. Fortunately, cameras are getting better and better at handling this color issue.

The bottom line: in the digital world, make absolutely sure not to overexpose...

 

Another difference is the way film and digital sensors capture the world. Film "sees" light in a non-linear way, which corresponds better to how we see the world around us, but digital is completely different. A digital sensor detects light in a linear way (see also the curves above), which means that twice the number of photons results in a doubled signal. Now suppose we have a camera with a dynamic range of 5 stops; the stops would then be distributed as shown below (the stops are separated by the red lines). The first and brightest stop accounts for the brightest half of the signal, the next stop for half of the remaining signal, and so on. So for an 8 bit file, which has 256 levels of brightness, the first stop has 128 levels, the second stop 64, the third stop 32, the fourth stop 16 and the fifth stop 8. The remaining 8 levels (128 + 64 + 32 + 16 + 8 = 248, so 8 of the 256 levels are left) are spread over the even darker parts. The numbers in the second row are the numbers of levels you'd get when working in 12 bit RAW mode.

Stop (darkest to brightest):     5th    4th    3rd    2nd    1st
Levels in 8 bit:                   8     16     32     64    128
Levels in 12-bit RAW:            128    256    512   1024   2048
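The halving described above can be sketched in a few lines of Python (a toy calculation, not camera code):

```python
# Each stop down holds half the remaining levels of a linear sensor.
# Returns the level counts from the brightest stop to the darkest,
# for an assumed 5-stop dynamic range.
def levels_per_stop(bit_depth, stops=5):
    total = 2 ** bit_depth          # e.g. 256 levels for 8 bit
    return [total // 2 ** (s + 1) for s in range(stops)]

print(levels_per_stop(8))    # [128, 64, 32, 16, 8]
print(levels_per_stop(12))   # [2048, 1024, 512, 256, 128]
```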

However, we don't see the world in this linear fashion, and therefore the distribution needs to be adjusted so that it mimics our perception, which means compressing the brighter areas and stretching out the darker areas as is shown below. This process of adding a tonal curve (also called gamma correction) is done in the camera, or later on the computer if you shoot in RAW.

[Figure: the same five stops after gamma correction, with the brighter stops compressed and the darker stops stretched so the levels are distributed more evenly across the tonal range.]

This uneven distribution of levels is something to keep in mind, because it leaves far less room for editing in the dark parts than in the light parts: posterization kicks in much sooner in the shadows (so avoid taking underexposed pictures that need to be brightened afterwards on the computer). Because of its larger number of levels, shooting in 12-bit RAW gives you some more freedom.
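The posterization effect can be illustrated with a toy example: brightening an underexposed 8-bit image spreads the existing levels apart, but cannot create new ones, so gaps appear in the histogram.

```python
# Brighten a set of 8-bit levels by a number of stops (multiply by 2
# per stop, clipping at 255) and keep only the distinct values.
def brighten(levels, stops):
    factor = 2 ** stops
    return sorted({min(255, v * factor) for v in levels})

dark = list(range(64))        # underexposed shot using only levels 0..63
bright = brighten(dark, 2)    # pushed up by two stops on the computer
print(len(bright))            # still only 64 distinct levels...
print(bright[:4])             # ...now spaced 4 apart: [0, 4, 8, 12]
```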

 

RAW vs JPG

 

The two most used file formats in photography are JPG and RAW (if your camera offers the option). There are quite a few important differences between them, which I'll try to explain here.

When an image is captured on a digital camera, the analog signal (a voltage) generated by the sensor is amplified and converted into a digital signal, usually 12 or even 14 bit. It is at this point that JPG and RAW diverge.
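A toy model of that conversion step (the 12-bit depth and the normalized voltage range are assumptions for illustration, not the behaviour of any specific camera):

```python
# Quantize a normalized sensor voltage (0.0-1.0) into a 12-bit integer.
def adc(voltage, bits=12):
    return min(2 ** bits - 1, int(voltage * 2 ** bits))

print(adc(0.5))   # 2048 (half the voltage -> half the digital range)
print(adc(1.0))   # 4095 (full scale of a 12-bit converter)
```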

 

JPG:

If you choose JPG then the following actions take place in the camera after taking the shot:

• A Bayer interpolation to create color information for every pixel.

• A white balance adjustment.

• Tonal adjustments.

• Sharpening.

• Compression to an 8-bit JPG file (8 bit per color channel, so 24 bit in total).
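That last step is where most of the tonal headroom disappears. A minimal sketch of what reducing 12-bit data to 8 bit means for the number of levels (a simplification: real JPG encoding also applies a tone curve and lossy compression):

```python
# Going from 12 bit to 8 bit keeps only the 8 most significant bits,
# collapsing 4096 possible levels down to 256.
def to_8bit(v12):
    return v12 >> 4   # drop the 4 least significant bits

print(to_8bit(4095))                             # 255
print(len({to_8bit(v) for v in range(4096)}))    # 256 distinct levels remain
```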

 

RAW:

If you instead choose RAW, all those actions are done later on the computer using special software; for now, the camera saves the image as a 12- or 14-bit file. This RAW file can be saved uncompressed or losslessly compressed, which means (almost) no information is lost in the compression. Because the RAW file has not undergone Bayer interpolation, each pixel represents only one color and therefore holds only 12 bits of information. So, unlike the JPG file, the RAW file is not a photo yet, but the collection of data received from the sensor. (Note that when shooting in RAW, the image displayed on the LCD of your camera is still a fully converted JPG. Care should be taken when judging the photo on this alone, since the RAW file may contain more information than is displayed on the LCD. If your camera allows it, a better option is to judge by the histograms.) Later, the RAW files are loaded into a RAW converter on the computer, and this is exactly the big advantage of RAW: it gives a great amount of flexibility, as you can manually perform the actions the camera did automatically for JPG and decide how to do them and to what extent.

 

Advantages:

• Lossless compression, so the image does not suffer from compression artifacts.

• A RAW file has a larger bit depth, so there is more flexibility in editing and less chance of posterization.

• No editing done in the camera, which means that accidentally having the wrong settings is no problem.

• Non-destructive editing on the computer. Editing a JPG file is destructive: once the file is saved, the changes can't be undone. With RAW, the original file is kept along with the list of actions performed, and any of those actions can be unchecked at any time to get the original image back.

• White balance can be set afterwards.

• Sharpening can be done afterwards and it is possible to do this in a more controlled way than by just letting the camera do it.

• Color space not determined yet, so a suitable color space can be chosen afterwards.

• Since computers have more powerful processors than cameras, the algorithms used for processing the RAW file can be more sophisticated than those in the camera, which gives better results.
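The non-destructive idea behind RAW converters can be sketched with a hypothetical minimal model (real converters store the edit list in a sidecar file or catalog, but the principle is the same): the original data is never touched, only a list of edits is kept and applied on rendering.

```python
# Hypothetical minimal model of non-destructive editing.
class RawEdit:
    def __init__(self, raw):
        self.raw = list(raw)    # the original sensor data, never modified
        self.edits = []         # list of (name, function) pairs

    def add(self, name, func):
        self.edits.append((name, func))

    def render(self):
        img = list(self.raw)    # always start from the original
        for _, func in self.edits:
            img = [func(v) for v in img]
        return img

r = RawEdit([100, 200, 300])
r.add("exposure +1 stop", lambda v: v * 2)
print(r.render())   # [200, 400, 600]
print(r.raw)        # [100, 200, 300]  (original unchanged)
r.edits.clear()     # "uncheck" all edits...
print(r.render())   # [100, 200, 300]  ...and the original image is back
```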

 

Of course there are some drawbacks to RAW (though only minor ones if you ask me).

• The files are much larger than JPGs, which costs more storage on both the computer and the memory card.

• It takes time to edit the pictures on the computer, which may not be to everyone's liking. Of course this goes a lot faster once you know the RAW converter well, and the amount of time spent playing around on the computer is entirely up to yourself. Performing only the basic actions takes far less time than tweaking the picture with every possible tool.

• A RAW converter is needed.

 

To put it simply, if you shoot just for fun and don't want to spend time on the computer, then use JPG's. If you want to be in full control of your pictures and use their full potential, then use RAW.

 

To the right is an example of the power of RAW files compared to JPG. This does not mean, of course, that exposing correctly is no longer important when shooting in RAW, but it does show the flexibility of RAW. Correct exposure remains of major importance, since the less that has to be done to a picture, the better its quality.

This shot was deliberately overexposed, and it shows the big advantage of the larger bit depth of RAW. After adjusting the JPG, the histogram shows posterization, and because the JPG holds no information in the blown-out highlights, they remain blown out after the adjustment. The adjustment of the RAW file shows no posterization, and because the RAW file still holds information in those highlights, the details there were preserved and are present in the adjusted picture.
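The difference can be mimicked with a toy calculation (all numbers made up for illustration): the JPG rendering clips the hot highlights to one flat value, while the 12-bit RAW values underneath still differ, so darkening the RAW recovers detail that the JPG has lost for good.

```python
# Four neighbouring highlight pixels from a deliberately overexposed shot,
# as 12-bit sensor values (0..4095).
sensor = [2500, 3000, 3500, 4000]

# The in-camera JPG rendering applies +1 stop of gain and converts to
# 8 bit, clipping everything at 255: all detail collapses to one value.
jpg = [min(255, (v * 2) >> 4) for v in sensor]
print(jpg)          # [255, 255, 255, 255] -> featureless highlight

# Darkening the JPG afterwards cannot bring the detail back:
print([v // 2 for v in jpg])    # [127, 127, 127, 127] -> still flat

# Doing the -1 stop adjustment on the RAW data instead keeps the detail:
raw_darker = [v >> 4 for v in sensor]
print(raw_darker)   # [156, 187, 218, 250] -> distinct tones preserved
```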