A lot of people still believe that they have to work in sRGB, either because
they produce output for the web or because their screen is incapable of
showing more colors than sRGB. And then they wonder why the images from the
raw files of their high-end DSLR have crappy colors. A myth that is repeated
over and over again is that if your screen cannot display the gamut of your
working space, you are not seeing the correct colors and it therefore makes
no sense to use a wide color space.
Your average DSLR today has a much wider gamut than your monitor, so you will
never ever see exactly the colors your camera delivers. What you see on
screen is always only an approximation of them.
Now what happens if you use a small color space like sRGB (or Adobe RGB,
which is barely larger than sRGB anyway) as your working space? (Let's assume
you have a properly calibrated monitor that is capable of showing roughly the
gamut of sRGB.)
Your image gets translated from the wide gamut of the sensor to the small
gamut of sRGB and then translated again to the characteristics of the monitor
for display. As the colors delivered by the camera encompass a wider gamut
than sRGB, they get compressed into sRGB to deliver an approximation of the
sensor's wide gamut that is as close as possible within the smaller color
space. To achieve that visual approximation, the image gamut hits the borders
of sRGB immediately when coming from the wide gamut of the DSLR, so you are
already close to clipping. If you now change anything in the image that
affects the colors (like saturation or exposure), you will reach clipping
faster than you can blink, and your images will suffer. As your monitor
profile is similar to your working space, not much happens after that when
the image gets sent to the display: the clipping has already happened, and
your screen just displays the clipped colors.
Let's compare this with a ProPhoto
(or WideGamut) based workflow:
Your image gets translated from the wide gamut of the sensor to the even
wider gamut of ProPhoto RGB and then translated again to the characteristics
of the monitor for display. As ProPhoto is wider than the camera gamut, the
image's colors are easily accommodated. There is no danger of clipping even
when heavy image manipulation is applied, as ProPhoto has enough headroom to
absorb it. Now the image still needs to be displayed on your sRGB-like
screen, so it gets translated from ProPhoto to your screen's profile, using
the gamut your screen can provide to deliver as close an approximation as
possible. But as there are no image adjustments after the monitor conversion,
using the full gamut of the screen will not drive your image into clipping;
you just get the best approximation your screen can do.
The big difference between the two scenarios is that in the second one you
see on screen a much closer approximation of what your camera provides than
in the first, where an artificial gamut compression happened too early in the
pipeline.
Or in other words: do you want rounding and clipping errors in your data
during processing, or only some approximation errors in the final conversion
to the output device?
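To make the difference concrete, here is a minimal numpy sketch of the two
paths. Everything in it is illustrative: a hypothetical saturated red that
lies outside sRGB, a hard per-channel clip standing in for the real gamut
conversion, and a simple desaturation standing in for an image edit.

```python
import numpy as np

LUMA = np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luminance weights

def clip(rgb):
    # Hard per-channel clip, standing in for a real gamut-mapping conversion.
    return np.clip(rgb, 0.0, 1.0)

def desaturate(rgb, k=0.8):
    # Pull a color toward its own luminance -- a typical saturation edit.
    y = LUMA @ rgb
    return y + k * (rgb - y)

# Hypothetical saturated red in linear sRGB coordinates; components outside
# [0, 1] mean the color lies outside the sRGB gamut.
camera_color = np.array([1.20, -0.05, 0.10])

# Path A: sRGB working space -- the color is clipped *before* the edit.
a = clip(desaturate(clip(camera_color)))
print("sRGB working space:", a.round(3))  # [0.844 0.044 0.124] -> dull red

# Path B: wide working space -- the edit sees the unclipped color and
# clipping happens only once, at display time.
b = clip(desaturate(camera_color))
print("wide working space:", b.round(3))  # [1.    0.005 0.125] -> vivid red
```

The two paths end with visibly different reds: the early clip in path A
destroyed information that the later edit could have used.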
What I described here also applies to printing. Printers have smaller gamuts
than DSLRs, but they too benefit from keeping the image data wide until the
last conversion.
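In code, "keep it wide until the last conversion" can look like the following
sketch, using Pillow's littleCMS bindings. The file names (rose.tif,
ProPhoto.icc, printer.icc) are hypothetical placeholders; also note that
Pillow itself works with 8 bits per channel, so a production pipeline would
do this step in a tool that keeps 16 bits.

```python
from PIL import Image, ImageCms

# rose.tif: an image developed and edited entirely in the wide working space.
img = Image.open("rose.tif").convert("RGB")

# Only this final step maps the data into the printer's smaller gamut.
print_ready = ImageCms.profileToProfile(
    img,
    "ProPhoto.icc",   # source: the wide working space profile
    "printer.icc",    # destination: the printer/paper profile
    renderingIntent=ImageCms.Intent.PERCEPTUAL,
)
print_ready.save("rose_print.tif")
```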
A few years ago I ran an experiment to see how much the choice of working
space influences the final printed image. I went into the garden and took a
properly exposed raw image of a red rose, then used various working spaces
and file formats to develop the image before sending it to the printer. Then
I took the prints outside and compared them to the rose. The version where a
completely wide path (ProPhoto RGB and 16-bit TIFF) was used to feed the
printing program delivered the best representation of the real rose.
All of the above assumes working in your raw converter (all of which use at
least 16 bits per channel internally) or with 16-bit TIFF files. Wide gamut
profiles will lead to posterization when used with JPEGs or other formats
that have only 8 bits per channel, as 8 bits per channel cannot represent
enough distinct values to span a wide gamut without visible steps. Modern
DSLRs deliver at least 12 bits per channel (14 is more common) in their raw
files, so the source data from the camera already uses more than 8 bits per
channel, and it should be kept at 16 bits per channel as long as possible
before converting to the final output.
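A small numpy sketch shows why 8 bits are not enough for a wide gamut.
Assume, purely for illustration, that the colors you actually use occupy only
about half of one axis of the wide encoding range; the same gradient then
gets far fewer codes at 8 bit than at 16 bit:

```python
import numpy as np

# A smooth gradient covering half of the encoded range of a (hypothetically)
# wide working space.
gradient = np.linspace(0.0, 0.5, 100_000)

def quantize(x, bits):
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

for bits in (8, 16):
    distinct = len(np.unique(quantize(gradient, bits)))
    print(f"{bits:2d} bit: {distinct:5d} distinct values")
#  8 bit:   129 distinct values -> visible banding (posterization)
# 16 bit: 32769 distinct values -> steps far below the visible threshold
```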
But what about wide gamut screens?
If you have a screen that displays a wider gamut than sRGB and want to
create sRGB output, what happens then? The above assumption about the
screen being close to sRGB does not hold.
So you either need to reduce the gamut of the screen in the monitor settings
(but why have a wide gamut screen then?) or you need to soft proof.
Soft proofing adds a conversion layer between the working space and the
monitor profile to simulate the results of switching to a specific working
space. That way you can easily emulate the results of a smaller working
space on a screen with a wider gamut.
Unfortunately, soft proofing is not available in all imaging applications.
But if your imaging application supports it, you should use it.
As a side note: while it is useful for simulating smaller working spaces,
soft proofing is basically a must when trying to judge potential print
results, as the gamut of a printer is usually very different from a screen's
gamut.
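If your tooling is Python, a soft proof can be sketched with Pillow's
littleCMS bindings as below. Again the file names are hypothetical; the image
is assumed to be in the working space, and the built-in sRGB profile stands
in for your actual calibrated monitor profile.

```python
from PIL import Image, ImageCms

img = Image.open("rose.tif").convert("RGB")

working = ImageCms.ImageCmsProfile("ProPhoto.icc")  # working space
printer = ImageCms.ImageCmsProfile("printer.icc")   # device to simulate
monitor = ImageCms.createProfile("sRGB")            # stand-in monitor profile

# The proof transform renders the image as the *printer* would reproduce it,
# but expressed in the *monitor* space, so you can judge it on screen.
proof = ImageCms.buildProofTransform(
    working, monitor, printer, "RGB", "RGB",
)
preview = ImageCms.applyTransform(img, proof)
preview.show()  # a screen preview of the expected print result
```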
Just to give you an idea of the gamut differences between a current DSLR and
the working spaces mentioned above, here are two gamut comparison images
created with ICC Examin:
The first shows how the camera gamut (from a Nikon D700, in color) is encompassed by ProPhoto RGB (in grey):
And now compare this with how small sRGB (in grey) is compared to what the D700 delivers:
Further reading
Luminous Landscape: Understanding ProPhoto RGB
Digital Outback Photo: Color Management for Photographers #006: Why Use the ProPhoto RGB Color Space?