White balance


Temperature (K)     Light source
 1000 - 2000        Candle
 2500 - 3000        Tungsten lighting
 3000 - 4000        Sunrise and sunset
 4000 - 5000        Fluorescent light
 5000 - 5500        Electronic flash
 5500 - 6500        Daylight
 6500 - 8000        Overcast skies and shade
 9000 - 10000       Heavily overcast skies
10000 - 18000       Sunless blue skies

When we photograph objects, we basically record the light that they reflect (unless we're photographing a light source itself, of course). As a consequence, the light these objects reflect depends on the light source: if the source emits warm light, the object will reflect warm light as well. Our brain automatically compensates for this, so a white piece of paper looks white whether we stand outside in the shade or inside with the lights on. A camera, however, has to be compensated explicitly, otherwise the piece of paper will look bluish in the shade and yellowish inside under light bulbs.

Different white balances => [too cold] [correct white balance] [too warm]






The color temperature is given in kelvin and is based on the color of light a black body would emit when heated to that temperature. It may seem a bit counterintuitive, but the higher the temperature, the cooler the light looks.


You can either choose to let the camera determine the white balance for you, adjust it manually according to some presets, or choose a color temperature instead of a preset. Optionally, if you're working in RAW format, you can adjust the white balance afterwards on the computer. The camera often does a pretty good job determining the white balance, but it can also be way off.
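How a camera or raw converter determines the white balance varies, but the idea can be sketched with the classic gray-world assumption: the average color of a scene should come out neutral, so each channel is scaled accordingly. This is a minimal illustration, not any particular camera's actual algorithm:

```python
import numpy as np

def gray_world_white_balance(img):
    """Correct a color cast using the gray-world assumption: the average
    color of a scene should come out neutral gray. `img` is a float RGB
    array with values in [0, 1]."""
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gray = channel_means.mean()
    gains = gray / channel_means          # per-channel correction factors
    return np.clip(img * gains, 0.0, 1.0)

# A scene with a warm cast: red boosted, blue suppressed.
rng = np.random.default_rng(0)
warm = np.clip(rng.random((8, 8, 3)) * [1.2, 1.0, 0.8], 0, 1)
balanced = gray_world_white_balance(warm)
```

After the correction, the three channel averages end up (nearly) equal, which is exactly what "neutral" means here.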




One of the most efficient ways of editing the tonal range of a picture is the curves tool, which is a graph with two bars, both ranging from white to black. The horizontal bar is the input, while the vertical one is the output. The easiest way to explain this tool is probably by showing some examples for the photo below to the left.

1) The first one has the curve dragged down at one point, so every point on the curve maps to a darker value on the vertical output bar than on the horizontal input bar. Take for example the point exactly in the middle of the horizontal bar, draw a vertical line to the curve, and from that point a horizontal line to the vertical bar. You can now see that this point is darker on the output bar than on the input bar. Since the whole curve has been dragged down, the overall picture is darkened.

2) This is the opposite of the first example, and the whole picture becomes lighter.

3) This time two points are used, which makes the darker parts of the picture become even darker and the lighter parts become even lighter. The midtones, on the other hand, remain relatively unaltered. This is a way of increasing the contrast and it is known as the S-curve.

4) This is again the opposite of the third example and decreases the contrast.

5) This is a way of adjusting the black and white points, which also increases contrast, like the third example did. The big difference, however, is that by moving the black and white points you may discard some information. In this example there are only midtones, so this method works fine, but if a picture has some dark areas and you move the black point, those dark areas may turn into pure black. The same goes for bright spots, which can turn into featureless white when you adjust the white point. This way of adjusting the contrast is similar to adjusting the levels.

6) Again the opposite of example five, decreasing the contrast.

To clarify the difference between the third and fifth example, the same is done on the picture below to the right, which, in contrast to the previous example, already has some dark and light parts. It can now be seen that, although both methods enhance the contrast, the fifth example gives blown out highlights in the clouds and some dark parts near the castle have become featureless black.


Adjusting using curves =>
[original] [example 1] [example 2]
[example 3] [example 4] [example 5] [example 6]


Adjusting using curves =>
[original] [example 3] [example 5]


Of course there are many more ways to use the curves tool: more points can be added and more precise adjustments can be made in all sorts of ways. This makes it a very useful tool, although it requires a fair amount of practice.
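Numerically, a curve is nothing more than a mapping from input tones to output tones. Here is a small sketch of examples 3 and 5, with linear interpolation standing in for the smooth spline a real editor fits through the control points:

```python
import numpy as np

def apply_curve(img, points):
    """Apply a curves adjustment. `points` is a list of (input, output)
    control points in [0, 1]; values in between are interpolated.
    (Real editors fit a smooth spline instead of straight segments.)"""
    xs, ys = zip(*sorted(points))
    return np.interp(img, xs, ys)

tones = np.linspace(0, 1, 256)          # every possible input tone

# Example 3: an S-curve -- shadows pulled down, highlights pushed up,
# midtones left roughly alone.
s_curve = apply_curve(tones, [(0, 0), (0.25, 0.15), (0.75, 0.85), (1, 1)])

# Example 5: moving the black and white points -- everything below 0.2
# clips to pure black, everything above 0.8 to pure white.
clipped = apply_curve(tones, [(0, 0), (0.2, 0.0), (0.8, 1.0), (1, 1)])
```

The S-curve leaves the midpoint nearly untouched while pushing shadows and highlights apart; the black/white-point version throws away everything outside the chosen range, which is exactly the information loss described above.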


Contrast mask


Using a contrast mask =>
[original] [contrast mask] [contrast mask applied (69 %)]

When a picture is very contrasty, a contrast mask can be applied in order to lower the contrast. It is created by copying the original picture into a new layer, desaturating this layer (making it black and white) and inverting it. The contrast mask is then blended with the original picture in the blending mode "overlay", and its opacity is decreased to obtain the desired effect. Because this slightly decreases the sharpness of the picture (as it is in some way the opposite of applying an unsharp mask), the contrast mask needs to be slightly blurred using a Gaussian blur. This will restore the original sharpness of the picture.
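These steps can be sketched on a grayscale image. A simple box average stands in here for the Gaussian blur, and the blend formula is the standard "overlay" equation; this is an illustration of the recipe, not Photoshop's exact internals:

```python
import numpy as np

def box_blur(img, radius=2):
    """Crude stand-in for a Gaussian blur: a separable box average."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    blur_1d = lambda v: np.convolve(np.pad(v, radius, mode="edge"),
                                    kernel, mode="valid")
    return np.apply_along_axis(blur_1d, 0, np.apply_along_axis(blur_1d, 1, img))

def overlay(base, blend):
    """"Overlay" blend mode, both arrays in [0, 1]."""
    return np.where(base < 0.5, 2 * base * blend,
                    1 - 2 * (1 - base) * (1 - blend))

def contrast_mask(img, opacity=0.69):
    """Lower contrast: invert the (grayscale) image, blur it, and blend
    it over the original in overlay mode at reduced opacity."""
    mask = box_blur(1.0 - img)              # inverted and blurred copy
    return (1 - opacity) * img + opacity * overlay(img, mask)

contrasty = np.tile(np.linspace(0.05, 0.95, 32), (32, 1))
softened = contrast_mask(contrasty)
```

Dark areas get blended with a light mask (lightening them) and light areas with a dark mask (darkening them), so the overall tonal spread shrinks.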


Local contrast enhancement (LCE)


Local contrast enhancement (LCE) is a great way of enhancing the clarity of a picture, and it is very easy to do using either the unsharp mask or the high pass filter, both described below under editing/sharpening. Sharpening works by enhancing the acutance, but if we change the settings to a large radius and a small amount (as opposed to the small radius and relatively large amount used for sharpening), then the same effect is spread out over a much larger area, which gives a boost in contrast around edges. The great thing is that this only applies to edges: in the example below on the left, the contrast is increased around the edge, but the middle of the circle and the outer parts of the picture are not altered (which is why it is called local contrast enhancement)! Compare this to what adding regular (global) contrast does, which darkens the whole circle evenly and lightens the rest.

Simplified display of contrasts =>
[original] [LCE applied] [global contrast added]
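The effect can be sketched in one dimension on a single soft edge, like a cross-section through the circle above (a moving average stands in for the Gaussian blur):

```python
import numpy as np

# A soft 1D edge from dark (0.2) to light (0.8).
x = np.linspace(-1, 1, 200)
signal = 0.2 + 0.6 / (1 + np.exp(-20 * x))

def lce(sig, radius=25, amount=0.3):
    """Local contrast enhancement = unsharp mask with a LARGE radius and
    a SMALL amount: add a fraction of (signal - heavily blurred signal)."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    blurred = np.convolve(np.pad(sig, radius, mode="edge"),
                          kernel, mode="valid")
    return np.clip(sig + amount * (sig - blurred), 0, 1)

enhanced = lce(signal)
```

Far away from the edge the blurred copy equals the signal, so nothing changes there; only near the transition does the contrast get a boost, which is the "local" part of the name.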


Example of contrasts =>
[original] [LCE applied] [global contrast added]

Example of clipped highlights when using
LCE and how to remove them =>
[original] [LCE applied] [LCE and blend if]


The reason this works is that our brain is very sensitive to sharp transitions, but not to slow gradients. The brain immediately sees a stronger transition in the local contrast enhancement example, while the gradients in the other parts can only be seen by looking closely and will not even be visible in most photos.


There is one thing to keep in mind with this technique: the enhancement in contrast can also clip very bright parts (likewise, it can turn dark parts into black, but that is usually not as big a problem as clipped highlights). In that case you can of course lower the contrast enhancement, but a better solution is the "blend if" functionality in Photoshop (found under "layer style/blending options").


You'll need two layers, where the background is the original layer and the second layer is the one with local contrast enhancement applied. If you adjust the "blend if" slider so that the brightest parts of the enhanced picture are not blended, this is a good solution in most cases. There is, however, the possibility of artifacts, so always keep an eye on the result! On the right is again an example, where I really exaggerated the local contrast enhancement for demonstration purposes. Large parts of the clouds are washed out after applying local contrast enhancement, but using the "blend if" function recovers the clouds.

Luminosity masks


Luminosity masks are a really fantastic way to get the most out of your pictures. It's definitely more time consuming and it takes a while to get the hang of it, but it's worth it! But credit where it's due: there is already a good explanation about luminosity masks written by Tony Kuyper, so I won't waste my time on it.




Example of dodging and burning =>
[original] [D/B layer] [D/B applied]

This is another technique that gets its name from the darkroom. Dodging and burning are processes that can be used to selectively darken (burning) or lighten (dodging) parts of an image, and are an integral part of darkroom processing.


The dodging and burning tools are available in most editing programs, but I personally prefer working with a grey layer set to "overlay" mode. This way you edit the image non-destructively, and you can always go back, change it some more, or undo something. This layer can be created by simply adding a layer filled with 50 % grey and setting its blend mode to "overlay".

Painting with black and white on this layer has the same effect as burning and dodging, respectively. This process works best for me with the opacity of the brush down to a couple of percent (usually 1-5). If I want the effect to be stronger, I use several strokes with the brush at that spot. By painting with 50 % grey, you can undo any dodging or burning that you want to get rid of. Finally, the effect of the dodge/burn layer can also be adjusted by changing the opacity of that layer.
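The gray-layer approach can be sketched numerically. The `overlay` formula below is the standard one; the brush values and layout are just for illustration:

```python
import numpy as np

def overlay(base, blend):
    """The "overlay" blend mode: 50 % gray leaves the base unchanged,
    lighter values dodge (lighten), darker values burn (darken)."""
    return np.where(base < 0.5, 2 * base * blend,
                    1 - 2 * (1 - base) * (1 - blend))

image = np.full((4, 4), 0.4)             # a flat sample image
db_layer = np.full((4, 4), 0.5)          # neutral 50 % gray layer

def stroke(layer, region, color, opacity=0.05):
    """One low-opacity brush stroke: white dodges, black burns,
    and 50 % gray undoes earlier strokes."""
    layer[region] = (1 - opacity) * layer[region] + opacity * color

stroke(db_layer, (0, 0), color=1.0)      # dodge the top-left pixel
stroke(db_layer, (3, 3), color=0.0)      # burn the bottom-right pixel
result = overlay(image, db_layer)
```

Wherever the layer is still exactly 50 % gray, the image comes through unchanged; repeated strokes push the layer further from 0.5 and strengthen the effect, which is the "several strokes" workflow described above.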


I added a fair amount of dodging and burning to the example on the left, to demonstrate what it does.




Since some blurring occurs when taking a digital picture due to the anti-aliasing filter (and some algorithms), the picture needs to be sharpened afterwards. If you shoot RAW you'll need to do this yourself, otherwise your camera will do it for you. Unfortunately, sharpening does not really enhance the sharpness of a picture, but rather the apparent sharpness (or acutance). If you look at the example on the right, you'll see that the original is a bit blurry, and ideally we would like to obtain a sharp edge as in the "ideal" example. But that's impossible to do afterwards, so we'll have to compromise and enhance the acutance instead, as in the "sharpened" example. As can be seen, it is very easy to get all kinds of halos when sharpening (I deliberately oversharpened here to show the halo formation), so the compromise is to sharpen enough that the picture looks sharp, but not so much that halos start to form. There are several ways of sharpening a picture, and I will discuss two here.


Sharpening - unsharp mask


Sharpening using an unsharp mask =>
[original] [gaussian blur] [difference] [sharpened]

The method most often used is called the unsharp mask (USM). The reason a sharpening method is called UNsharp mask is that an unsharp version of the picture (made by applying a Gaussian blur) is needed to determine where the edges are in the original. The difference between the original and the blurred picture tells the software where the edges are (the white parts in the "difference" example on the left) and, accordingly, where to enhance acutance. This whole process is done by the software itself; we only need to choose three parameters:

• Amount: Speaks pretty much for itself, this is the amount of sharpening that is applied.

• Radius: This is the radius that will be applied to get the blurred layer. A larger radius will give broader acutance enhancements.

• Threshold: This is the minimum difference between the original and the blurred layer that will be regarded as an edge.
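The three parameters can be seen at work in a small one-dimensional sketch (a box average stands in for the Gaussian blur; the numbers are made up for illustration):

```python
import numpy as np

def unsharp_mask(sig, radius=2, amount=1.0, threshold=0.02):
    """USM on a 1D luminance profile: add `amount` times the difference
    between the signal and a blurred copy, ignoring differences smaller
    than `threshold` (so noise is not sharpened)."""
    k = 2 * radius + 1
    blurred = np.convolve(np.pad(sig, radius, mode="edge"),
                          np.ones(k) / k, mode="valid")
    diff = sig - blurred
    diff[np.abs(diff) < threshold] = 0.0
    return np.clip(sig + amount * diff, 0, 1)

# A slightly blurry edge from 0.2 to 0.8.
edge = np.clip(np.linspace(-2, 3, 50), 0.2, 0.8)
sharpened = unsharp_mask(edge)
```

Flat areas are left exactly as they were (the difference there is below the threshold), while the transition gets steeper; push the amount too far and the over- and undershoot become the visible halos mentioned above.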


Sharpening - high pass filter


Sharpening using a high pass filter => [original] [high pass filter] [sharpened]

Sharpening with a high pass filter gives results pretty similar to those of the USM, but it has the advantage of being less sensitive to noise. So for noisy pictures, the high pass filter method might be preferred. In this case a bit more work is needed, since this is not a standard process in the software.

You start by duplicating your original layer and applying the high pass filter. Desaturate this layer, as you only want to apply sharpening on the luminosity levels. Then change the blending mode to overlay, which will give a sharpened version of your original. The parameters in this process are:

• Radius: Like with the USM method, a large radius will result in broader acutance enhancements.

• Opacity: Changing the opacity of the second layer (the one with the high pass filter) will also change the amount of sharpening.

• Blending mode: This is optional, but by changing the blending mode to 'soft light' or 'hard light', the sharpening can be decreased or increased, respectively.
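A one-dimensional sketch of the high pass route (with a box average standing in for the blur inside the filter): the filtered layer is mid-gray everywhere except around edges, and blending it back in overlay mode steepens the transition.

```python
import numpy as np

def high_pass(sig, radius=2):
    """High pass filter: the signal minus a blurred copy, offset to
    mid-gray (flat areas end up at exactly 0.5)."""
    k = 2 * radius + 1
    blurred = np.convolve(np.pad(sig, radius, mode="edge"),
                          np.ones(k) / k, mode="valid")
    return np.clip(sig - blurred + 0.5, 0, 1)

def overlay(base, blend):
    """"Overlay" blending: a 0.5 blend value leaves the base unchanged."""
    return np.where(base < 0.5, 2 * base * blend,
                    1 - 2 * (1 - base) * (1 - blend))

edge = np.clip(np.linspace(-2, 3, 50), 0.2, 0.8)   # a slightly blurry edge
sharpened = overlay(edge, high_pass(edge))
```

Because the high pass layer sits at 0.5 away from edges, those regions blend back to exactly the original values, and only the transition itself is boosted.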


Focus stacking


Focus stacking is a method for obtaining pictures with a larger depth of field, which is especially useful for macro photography. The depth of field in macro photography is often extremely small, and using a small aperture is not a good solution since diffraction kicks in pretty badly. Focus stacking makes it possible to use a suitable aperture and combine the pictures afterwards on the computer to give a larger combined depth of field. More about the use of focus stacking in macro photography can be found under macro photography/focus stacking.


In order to do focus stacking, several pictures need to be taken, where the focus on each picture is moved a small amount (either by adjusting the focus, or by moving the whole camera setup on a rail). These are then combined on the computer with software which determines the sharpest parts in each picture and combines those in a single picture.
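The combining step can be sketched with a per-pixel sharpness measure: score every frame by its local contrast (the absolute Laplacian here) and, for each pixel, keep the frame where the score is highest. Real stacking software also aligns the frames and smooths the selection, which this toy version skips:

```python
import numpy as np

def sharpness(img):
    """Local sharpness: magnitude of the discrete Laplacian."""
    return np.abs(4 * img
                  - np.roll(img, 1, 0) - np.roll(img, -1, 0)
                  - np.roll(img, 1, 1) - np.roll(img, -1, 1))

def focus_stack(frames):
    """For every pixel, take the value from the sharpest frame."""
    scores = [sharpness(f) for f in frames]
    best = np.argmax(np.stack(scores), axis=0)
    return np.choose(best, frames)

# Toy data: a detailed texture, plus two frames that are each blurred
# on one half (simulating a shifted focal plane).
rng = np.random.default_rng(1)
texture = rng.random((16, 16))
blurry = sum(np.roll(texture, s, axis=1) for s in (-2, -1, 0, 1, 2)) / 5

near = texture.copy(); near[:, 8:] = blurry[:, 8:]   # right half out of focus
far = texture.copy();  far[:, :8] = blurry[:, :8]    # left half out of focus
stacked = focus_stack([near, far])
```

The stacked result is closer to the fully sharp texture than either input frame, because each pixel was taken from whichever frame had it in focus.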

The picture below on the left shows a macro shot of a weevil, which is a focus stack of 35 photos. You can see a big difference between the focus stack and the single exposure, where a lot of the weevil is out of focus. I also included a GIF with the sequence of the 35 photos to clarify the process a bit.

An important thing with focus stacking is that the shifts in focus between the separate photos should not be too big, because that will give blurry transitions in the end result. Below on the right is an example of this, where the photo with bigger steps has several blurry bands, which is not what we want. It's better to take steps that are too small than too big: steps that are too small only mean more photos to work with while still giving a nice end product, whereas steps that are too large will not result in a good photo at all.

Example of stacking => [single exposure] [focus stacked]


The influence of the step size => [whole picture] [small steps] [large steps]

Besides macro photography, focus stacking can also be very useful in, for example, landscape photography. Below is an example where I wanted both the foreground flowers and the background mountains in focus. Accomplishing that with one single picture would have meant a very small aperture, which would have resulted in decreased overall sharpness due to diffraction. So instead, I made four exposures with the focus shifted for each of them. This gave me a photo that was sharp from front to back, without using a small aperture.

On the right are close-ups at 100 % of both the foreground and the background. In both cases the top half is the single exposure, and the lower half the focus stack version. The background is slightly sharper in the focus stack, but that difference is negligible. The foreground on the other hand is a lot sharper in the focus stack version.


The whole picture.


Details at 100 % => [foreground] [background]




Combining photos to simulate a long exposure =>
[single picture] [ten pictures combined]

Blending pictures can be done for several reasons, like decreasing noise (see bits and bytes/noise), or making star trails from several consecutive photos (as described in photographing the sky/star trails).


It can also be done to simulate a longer exposure on, for example, a bright sunny day, when even a small aperture will result in a short shutter speed. If you don't have access to an ND filter, then blending might be a solution. It is done by simply taking multiple exposures of the same scene and blending them so that, for each layer, the opacity is set to 100/"nr of the layer" in percent: layer 1 => 100 % opacity, layer 2 => 50 % opacity, layer 3 => 33 % opacity, layer 4 => 25 % opacity, etc. If ten pictures, each with a shutter speed of 1/10 s, are blended this way, then the result will look like it was taken with a shutter speed of 1 s. The picture on the right shows the difference between a single exposure and a blended picture.
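The 100/"nr of the layer" opacity scheme has a neat property: stacking layers this way works out to a plain average of all the frames, which is exactly what a long exposure does to a moving scene. A quick sketch with synthetic frame data:

```python
import numpy as np

def blend_stack(frames):
    """Blend frames bottom-up, the n-th layer at 100/n % opacity.
    The running result after n layers is the mean of the first n frames."""
    result = frames[0].astype(float)
    for n, frame in enumerate(frames[1:], start=2):
        opacity = 1.0 / n
        result = (1 - opacity) * result + opacity * frame
    return result

# Ten noisy "1/10 s" frames of the same flat gray scene.
rng = np.random.default_rng(2)
frames = [0.5 + 0.05 * rng.standard_normal((16, 16)) for _ in range(10)]
blended = blend_stack(frames)
```

Besides simulating the longer exposure, the averaging also reduces the noise, which is why the same trick appears under bits and bytes/noise.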


Another reason to blend photos is when a scene has a large dynamic range and filters are not an option. In that case, you can take several pictures of the same scene with different exposures so as to correctly capture all parts of the scene, and combine those exposures afterwards. It is similar to the high dynamic range process described below, but the results are often more realistic and you have more control over the process (and it's a lot more fun to do!).


Combining different exposures =>
[light exposure] [medium exposure] [dark exposure] [blended]

The image on the left is an example of exposure blending, where three pictures were combined (with one stop difference between the pictures). The light photo has a good exposure for the shadow areas, but the light areas are washed out and featureless. The dark photo has a good exposure for the bright highlights, but in this case the shadows are too dark. Combining the pictures gives the blended photo where the exposure is more balanced.


Two things are important when making pictures for exposure blending. First, only adjust the shutter speed as adjusting the aperture will also alter the depth of field, and second, use a tripod, as that prevents you from having to align the pictures prior to combining them.


If you shoot RAW, then it is also a good option to create several photos with different brightness created from the same RAW-file. Combining these often gives good results as well! The advantage is that you don't need to worry about moving objects like branches or clouds, but the disadvantage is that the camera needs to capture the whole dynamic range in one exposure, which is not always possible.


High dynamic range (HDR)


Combining different exposures with the HDR technique =>
[light exposure] [medium exposure] [dark exposure] [HDR blended]

If a scene has more contrast than the dynamic range of your camera can record and filters are not a suitable solution, then high dynamic range photography might be an option. Once again, this means that you take several pictures of the same scene with different exposures and combine them afterwards.


In general, producing a high dynamic range photo goes as follows: the software first combines the several pictures into a 32 bit high dynamic range photo. This can't be displayed by most screens (which are generally 8 bit), so tone mapping is applied, which converts the 32 bit file into an 8 bit file (so most pictures labelled as high dynamic range are not true high dynamic range, but tone mapped versions of a high dynamic range original). Along this path there are several options to adjust settings to get the picture that you want.
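As a minimal illustration of the tone mapping step, here is Reinhard's simple global operator L/(1+L), which squeezes an arbitrarily large scene luminance range into [0, 1) so it fits in an ordinary 8 bit file. Real HDR software uses far more elaborate (often local) operators; this just shows the idea:

```python
import numpy as np

# Scene-referred luminances spanning four orders of magnitude.
hdr = np.array([0.01, 0.1, 1.0, 10.0, 100.0])

tone_mapped = hdr / (1.0 + hdr)            # Reinhard: L / (1 + L), always < 1
eight_bit = np.round(tone_mapped * 255).astype(np.uint8)
```

The mapping is monotonic (brighter stays brighter) but strongly compresses the highlights, which is what lets a huge dynamic range survive in 8 bits at all.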


In my opinion, high dynamic range is very capable of screwing up a picture, but apparently the same effect that I despise is something many people like. Just do a Google search on high dynamic range and look at the results. The horror... the horror...

I have found that I get the best results when using high dynamic range very modestly, but then again, I am not a big fan and hardly ever use it. I find that if I need to blend several pictures, then doing it manually (as described above) gives far superior results.


The same two things that were important for exposure blending are important in this case. So only adjust the shutter speed (adjusting the aperture will also alter the depth of field) and use a tripod (prevents you from having to align the pictures prior to combining them).


Dust removal


Most of the time there will only be a few dust particles on your sensor, and removing them one by one is not a daunting task. In the case of my infrared cameras, however, I converted those myself and ended up with quite a lot of dust on the sensors as a result. Since this dust is sandwiched between the sensor and the infrared filter, there is no way to remove it by cleaning. And since it got a bit on my nerves to painstakingly clean my infrared pictures with the dust removal tool, I experimented to find a way to remove larger amounts of dust more easily. It took some time, but I have now found a method that seems to work pretty well, and it is a lot faster than removing every single dust bunny. I apply it only to my infrared photos, but it will work on normal photos as well.


Here's how I do it: When you take your photo, you take a dust reference photo as well. Very important for this dust reference photo is that the aperture and focal length are identical to your normal photo!

Below to the left are examples of how the aperture changes the look of dust bunnies on photos. The smaller the aperture, the more evident the dust particles will be, and at larger apertures the dust is hardly visible.

The influence of the focal length on the appearance of dust is shown below to the right. Light will hit the sensor at a slightly different angle for different focal lengths so the position of the particles on the photo will also change slightly as a result. This change is most significant for small focal lengths.

Influence of the aperture on dust => [f/22] [f/16] [f/11] [f/8] [f/5.6]


Influence of the focal length on dust => [10 mm] [24 mm]


The focus distance also makes a small difference in how the dust appears on the sensor, but not at all as significantly as focal length and aperture. So a reference photo taken at a different focus distance will probably still work fine.


Removing dust bunnies => [original] [dust removed] [dust reference photo]

Capture the reference photo by shooting an evenly lit surface with as little detail as possible (a uniform grey or blue sky, snow, etc.) while moving and rotating the camera around. The moving around is needed in order to get a reference photo that is as smooth as possible (it will blur any details in the background), and the longer the shutter speed of the reference photo, the easier this is to achieve.

Having the background for the reference photo out of focus further improves the reference photo by blurring background details, but it is not a requirement as long as there is sufficient motion blur due to the moving around of the camera.


On the computer, apply the same RAW-settings to both photos and open them both in Photoshop (do this procedure on 16 bit files, and not on 8 bit files, because that might give ugly posterizations!). Copy the reference photo as a new layer on top of the normal photo and perform a high pass filter on the reference layer (the radius of the high pass filter can be played around with, but so far 200 pixels seems to work best for me). Invert the reference layer and desaturate it, then change its blend mode to "linear light" and its opacity to 50 %. Now most/all of your dust bunnies should have disappeared. An example of this procedure is on the left.
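The layer recipe can be sketched numerically. At 50 % opacity the "linear light" blend reduces to photo + layer − 0.5, so a high-passed, inverted reference subtracts the dust shadows back out. A box average stands in for the high pass radius, and the toy dust pattern is made up for illustration:

```python
import numpy as np

def box_blur(img, radius):
    """Separable box average (stand-in for the low-frequency part)."""
    k = 2 * radius + 1
    kernel = np.ones(k) / k
    blur_1d = lambda v: np.convolve(np.pad(v, radius, mode="edge"),
                                    kernel, mode="valid")
    return np.apply_along_axis(blur_1d, 0, np.apply_along_axis(blur_1d, 1, img))

def remove_dust(photo, reference, radius=8):
    """High-pass the reference (isolating the dust shadows from the smooth
    background), invert it, and blend it over the photo in "linear light"
    mode at 50 % opacity, which simplifies to photo + layer - 0.5."""
    high_pass = reference - box_blur(reference, radius) + 0.5
    layer = 1.0 - high_pass                  # inverted high pass
    return np.clip(photo + layer - 0.5, 0, 1)

# Toy data: a gradient scene, and a flat reference shot showing the same
# dark dust spot at the same sensor position.
scene = np.tile(np.linspace(0.3, 0.7, 32), (32, 1))
dust = np.zeros((32, 32)); dust[10:13, 10:13] = 0.15
photo = scene - dust
reference = 0.5 - dust
cleaned = remove_dust(photo, reference)
```

Because the dust sits at the same sensor position in both shots, the inverted high pass of the reference cancels the dust shadow almost exactly while leaving the rest of the photo untouched.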


As long as the dust does not move around on your sensor, you can reuse the dust reference photos, as long as the aperture and focal length match that of the normal photos. But if you want to be sure, just take a new reference photo. Better safe than sorry!