Converting images to linear (gamma 1.0) space and back

This page is designed for a PC screen; the images assume a PC display gamma.

Those who are interested in high quality image editing know what gamma is and why we have to take it into account.

To put it plainly, certain image manipulations should only be done in a linear (gamma 1.0) working space. The simplest is ... creating a thumbnail! It may seem incredible that there is anything tricky about it, but there is.
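As a quick illustration, here is a small Python sketch of what goes wrong when a thumbnail filter averages pixels directly on the gamma-encoded values. It assumes a simple pure power-law gamma of 2.2 (a simplification; the function names are mine, for illustration only):

```python
# Averaging two pixels: naive (gamma-space) vs correct (linear-space).
# Assumes a pure power-law display gamma of 2.2 for illustration.

GAMMA = 2.2

def to_linear(v):
    """8-bit gamma-encoded value -> linear light intensity in 0..1."""
    return (v / 255.0) ** GAMMA

def to_gamma(lin):
    """Linear light intensity in 0..1 -> 8-bit gamma-encoded value."""
    return round(255.0 * lin ** (1.0 / GAMMA))

black, white = 0, 255

# Naive average, done directly on the encoded values:
naive = (black + white) // 2

# Physically correct average, done on linear light:
correct = to_gamma((to_linear(black) + to_linear(white)) / 2.0)

print(naive, correct)  # naive lands at 127, the physically correct answer is 186
```

A downscaling filter that mixes black and white pixels in gamma space produces a mid-grey that is far too dark on screen; that is exactly the thumbnail problem mentioned above.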

Here are the main contributors to the problem:

1 - Light is, by its nature, linear: doubling the amount of light doubles the intensity.
2 - Our perception of light is logarithmic, i.e. non-linear.
3 - The device traditionally used to display images (the CRT) has an output that responds to its input non-linearly (as a power function).

This all creates a lot of misunderstanding... For example, few people can explain the fundamental difference between the power function L = V^2.5 and the exponential function involving optical density, L = 10^-D (which is the inverse of a logarithm). But anyway!..
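A tiny numerical sketch of that difference (the function names and the sample inputs are mine, for illustration): in a power function the variable sits in the base and the exponent is fixed, while in an exponential function the base is fixed and the variable sits in the exponent. The two behave very differently:

```python
# Power function: the VARIABLE is the base, the exponent is fixed.
def crt_power(v, gamma=2.5):
    """CRT-style response, L = V ** gamma."""
    return v ** gamma

# Exponential function: the base is fixed, the VARIABLE is the exponent.
def density_to_light(d):
    """Optical density to relative light, L = 10 ** (-D)."""
    return 10.0 ** (-d)

# Doubling the input of a power function always scales the output by the
# same fixed ratio (2 ** 2.5), regardless of where you start:
print(crt_power(0.8) / crt_power(0.4))

# For the exponential, each EXTRA unit of density divides the light by 10:
print(density_to_light(2.0) / density_to_light(1.0))
```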

Many, if not all, image manipulations (the majority of image editing operations, for example) work on the nature of light and should therefore be done in a linear working environment. This seems quite obvious but is rarely mentioned; the best advocacy I have seen is presented on the AIM-DTP web-site.

On the other hand, things like the most efficient (compact) storage of digital images for later viewing by humans should take the human and display parts of the system into account, and must therefore be non-linear for maximum efficiency. But if we can afford uncompromised storage of the image, then going non-linear is not necessary, especially if we plan to come back to editing later.

Unfortunately, sometimes all we have is a non-linear image, and we have to go back to its linear representation for editing. Here we hit another problem:

4 - The image has to be represented by a limited number of bits.

This does not make things easy either. Even a simple conversion to linear form and back takes some quality away. Why? Not because linear or non-linear spaces are good or bad in themselves, but because during conversion we have to squeeze the information into a certain number of image bits. If we could operate in an analogue or floating-point editing system, the conversion back and forth would be nearly harmless. But all we have is 8 or 16 bits per channel...

OK, we have decided that we have to convert before editing. But how much will we lose?! This page is a simple example.

I will take a grey scale as a test. But there are lots of grey scales! How about the simple 0, 1, 2, 3, ... 253, 254, 255 set of steps? Here it is. Notice how everything is compressed at the dark end:

Sometimes people even apply monitor gamma compensation to it. Look what happens then:
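For the curious, that "compensation" is simply V -> V^(1/gamma) applied to every code of the ramp. A sketch assuming gamma 2.2 shows why the result looks washed out:

```python
# Applying "monitor gamma compensation" (V -> V ** (1/2.2)) to the plain
# 0..255 ramp. Assumes gamma 2.2; midtones get pushed hard toward white.

GAMMA = 2.2

ramp = list(range(256))
compensated = [round(255.0 * (v / 255.0) ** (1.0 / GAMMA)) for v in ramp]

# The nominal middle code 128 lands far above the middle of the range:
print(compensated[128])  # 186, far above the midpoint
```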

Hey, what are those ticks at the bottom?

They mark the spots where the amount of light from your screen drops by exactly half compared to the previous tick. Photographers know this concept as a "stop": each tick is one stop away from the next. In the Ansel Adams system, each tick marks the next zone. This is compensated for your monitor, so if you pick up a camera with a spot meter and aim it at this picture, the necessary exposure doubles at each successive tick.
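Computing where those ticks fall is a one-liner. Under a pure power-law gamma of 2.2 (an assumption for illustration; your monitor may differ), the light output halves every time the encoded value falls by a factor of 2^(1/2.2):

```python
# Positions of the one-stop ticks on an 8-bit ramp, assuming a pure
# power-law display gamma of 2.2: tick k sits where the display emits
# half as much light as tick k-1, i.e. at code 255 * 2 ** (-k / 2.2).

GAMMA = 2.2

ticks = [round(255 * 2.0 ** (-k / GAMMA)) for k in range(9)]
print(ticks)  # the first few land near codes 255, 186, 136, ...
```

Notice how the first stop alone eats the top quarter of the code range: the encoding spends its codes very unevenly in terms of light.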

Still, I want a "proper" grey scale, like the one on a Kodak Q-60 target. That is why I have created this one:

The two-dimensional effect is there on purpose; it gives more "feel" to the scale. Can you see how my grey scale looks like a light halo in a haze?

My scale (unlike the first two above) has no hard limit on the right: there is no blackest black on it. If you look at Kodak's scale, it ends somewhere near optical density D=2.5, which is 0.3% reflectance. It is difficult to work with visual densities much above D=3, so my scale ends on the right at D=3. We are looking at a real-world example, so that is enough.
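A sketch of how such a density-based scale can be generated (the step count and gamma value are my assumptions; the real scale is a continuous gradient):

```python
# Density-based grey scale: optical density D runs linearly from 0 to 3,
# reflectance is 10 ** (-D), and each sample is gamma-encoded (2.2 assumed)
# for display on screen.

GAMMA = 2.2
STEPS = 16  # number of patches in this sketch

scale = []
for i in range(STEPS):
    d = 3.0 * i / (STEPS - 1)      # density 0 .. 3
    reflectance = 10.0 ** (-d)     # linear light, 1.0 down to 0.001
    scale.append(round(255.0 * reflectance ** (1.0 / GAMMA)))

print(scale)  # white at D=0 fading to a very deep grey (about code 11) at D=3
```

Because density is logarithmic, each patch reflects a fixed fraction of the light of the previous one, which is what makes the scale perceptually even.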

Now for the fun part! Let's convert this scale to a linear working space and back, but our linear working space will have a limited number of bits, just like any computer-based editing system: say 8 or 16. I have also included other bit depths so that you can see how the result gradually changes as we approach higher colour depth. Please notice that the original scale image is 8-bit and looks smooth! The bottom half of each sample below has been converted; the top half remained unchanged for comparison.

1 bit
2 bits
3 bits
4 bits
5 bits
6 bits
7 bits
8 bits
9 bits
10 bits
11 bits
12 bits
13 bits
14 bits
15 bits
16 bits
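If you want to reproduce the experiment, here is a sketch in Python. It assumes, for illustration only, a pure power-law gamma of 2.2; the exact curve does not change the conclusion:

```python
# The experiment above as code: take each 8-bit gamma-encoded value,
# convert it to an N-bit linear representation, then convert back.
# Assumes a pure power-law gamma of 2.2.

GAMMA = 2.2

def round_trip(v, bits):
    """8-bit gamma value -> N-bit linear -> back to 8-bit gamma."""
    levels = (1 << bits) - 1                       # max code in N-bit space
    linear = round(levels * (v / 255.0) ** GAMMA)  # quantise in linear space
    return round(255.0 * (linear / levels) ** (1.0 / GAMMA))

# Worst-case error over the whole ramp, per bit depth:
for bits in (8, 12, 16):
    err = max(abs(round_trip(v, bits) - v) for v in range(256))
    print(bits, err)
```

Running it shows the worst-case damage sitting in the shadows and shrinking steadily as the linear bit depth grows, which matches the samples above.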

Can you see that 12-13 bits is almost enough but 8 is too few?

Short conclusion:

When converting images to a linear working space for editing, always use 16-bit processing (if you can, of course). That applies even if you will be converting the edited images back to 8 bits for viewing. Otherwise you will end up with banding and other nasty artefacts.

© Leo Bodnar, originally posted 8th February 2004.