Revisiting The Zone System… or, How I Learned To Love My Light Meter Again.

by George on June 4, 2012


Photographs © George A. Jardine

Back in the fall of 1972, my very first photo-school assignment for Commercial Photography 101 was entitled An Industrial Interior. I had read Ansel Adams’ Basic Photo series the year before, and I was enthralled with his Zone System for exposure and development. So by the time I got to school, I had enough of the basics to know how to ‘expose for the shadows’, and roughly how much to push or pull my development to map the dynamic range of the scene to the film.

In those days, you could literally walk into a rural power station such as this one in Glenwood Canyon, and start setting up a 4×5 view camera on a tripod without anyone so much as looking twice at you. Look closely and you’ll see the blurred figure of the operator in the control room on the right side of the photo. In about 45 minutes of working the shot, he never even bothered to poke his head out to ask what we were doing.

Reading a 1° spot in the darkest shadows at the end of the turbine facing the camera, I remember starting by putting that value on Zone II, thus determining the base exposure. Then reading a highlight in the clerestory of the roof, I recorded a value to determine my development, knowing full well that this would put the overhead lights into specular (Zone X) territory. Unfortunately, I no longer have my notes from this early effort, but I distinctly remember the dynamic range of the scene was great enough that it required pulling the development by several stops to bring the very bright white of the clerestory down to a Zone VIII.

I had done enough darkroom testing to know what negative densities could be produced on 4×5 Plus-X sheet film with various development times, temperatures and agitations. Despite all that, pulling this first assignment out of the soup was still a revelation. The tones were magnificent.

The Zone System mapped 10 Exposure Values found on the Weston meter (Weston called them Light Values) to 10 shades of gray in your print. Using 10 exposure values came conveniently close to the dynamic range of many films at the time, and so the system gave photographers very direct control over how and where they represented the tones from a given scene. It only required that you ‘previsualize’ your final output—which in those days, meant a print.

To quote Adams (New York Graphic Society edition, fifth printing, pages 15/16): “The photographer who wishes to work toward a predetermined result must visualize the tonalities in which he wants certain important parts of the subject to be represented, and plan his exposure and subsequent treatment accordingly.”

I couldn’t have summed up the whole thing better myself, and he does it here in just one sentence! More importantly, Adams also did a very good job of describing how increasing stops (or Exposure Values) of light represented a geometric increase in luminance, and how that tonal progression of actual luminance was mapped to represent a perceptually uniform progression on the print.
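To make that mapping concrete, here is a small sketch of my own (an illustration, not something from Adams’ books): each one-stop step up represents a doubling of luminance, while the corresponding print tones step uniformly from black to white.

```python
# Each stop doubles the relative luminance (a geometric progression),
# while the Zone System assigns those stops evenly spaced print tones
# (a perceptually uniform progression). Zones 0 through IX shown.
luminance = [2 ** zone for zone in range(10)]    # 1, 2, 4, ... 512
print_tone = [zone / 9 for zone in range(10)]    # 0.0 ... 1.0, uniform steps

for zone, (lum, tone) in enumerate(zip(luminance, print_tone)):
    print(f"Zone {zone}: {lum:3d}x luminance -> {tone:.0%} print tone")
```

Nine doublings of light, mapped onto nine equal visual steps on the paper: that is the whole trick in two lists.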

Indeed, in all the books and articles that I have read on the subject of digital exposure during the last 10 years, I have not yet found one description of digital capture that makes the underlying technique as clear as Adams does for film-based photography in his books from 1948. So far, the one that comes closest is Poynton’s The Rehabilitation of Gamma. Poynton’s work is more about television and digital video, but the principles of how and why gamma encoding is a necessary part of the process are critical reading for photographers as well.


Another early 70’s Zone System exploration that was shot with a Hasselblad and a 150mm lens, on Panatomic-X film.

My goal for this article is not to create yet one more tome on how digital exposure works. That’s already been done, repeatedly, and new efforts are always underway. Rather, my goal here is about two other things. First, to try and rectify a minor problem with my own technique of using an already-gamma-encoded gray scale for demonstration purposes in Lightroom—which we’ll get to later. Second, I hope to clarify a few things about the relationship between linear capture and gamma encoding, a relationship that continually gets muddled in the popular press.

On the first point, if you have watched any of my Lightroom Develop tutorials, you might recall that I frequently use a gray scale to illustrate how the Lightroom Tone and Curve controls allow you to push various tones around. And from the reports of my customers, this approach seems to get the message across pretty well.

The gray scale file that I use looks something like this, except that the one in the videos has 21 steps (5% increments) rather than the 11 steps (10% increments) shown above. The tones in this synthetic gray scale file were created to have a perceptually uniform progression, which simply means each step on the computer screen appears to have about the same delta (change) in brightness relative to adjacent steps.
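For the curious, a gray scale like this is trivial to generate. Here is a minimal sketch (my own, not the actual file used in the videos) of the 11-step version with uniform 10% increments:

```python
# Eleven steps, uniform in the output-referred (display) space:
# 0%, 10%, 20%, ... 100% on Lightroom's black-to-white scale.
steps = [i * 10 for i in range(11)]
assert steps == [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]

# The same steps as 8-bit pixel values for a file destined for the screen:
pixels = [round(p / 100 * 255) for p in steps]
print(pixels)
```

The 21-step version used in the videos is the same idea with 5% increments.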

These values were specifically created for your computer display, and so we think of the tones in the file as being in an output-referred space. An output-referred space could be your computer display, data stored in a JPEG file, or it could be a print. But it is not directly related to the light coming from a scene in nature. That… would be scene-referred data. Scene-referred data is what is captured and stored in a raw file. (For a very good introduction to the subject, and for definitions of scene-referred and output-referred spaces, please be sure to read Karl Lang’s white paper on Rendering The Print, found here.)

Using a synthetic gray scale makes a great visual aid for output-referred data, and it’s convenient because it’s easy to create in Photoshop and then slap into Lightroom. On your display, these tones not only have a reasonably uniform delta in perceived brightness from step-to-step, but they also have a linear progression relative to the unit of measure used in that space. The unit of measure from black-to-white in Lightroom is percentages, from 0 to 100%.

And so working with output-referred data (JPEG’s, PSD’s, etc.) is easy for the photographer. (Of course under the covers, values in the actual file have been gamma-encoded to create this perceptual uniformity. For a thorough discussion on that subject, see Poynton’s excellent white paper on gamma.) When it comes to scene-referred data (raw data), you have a slightly different thing. Raw data is different because scene-referred luminance (or exposure) values do not progress from black-to-white in a linear manner, but rather geometrically. Another way of saying that, in photographic terms, is that each progressive f-stop provides exactly twice the exposure.
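A quick sketch of that relationship, using the simple power-law approximation Poynton describes (real pipelines like sRGB or Lightroom’s tone mapping differ in detail):

```python
# Gamma-encode a linear, scene-referred value for display. With a 2.2
# gamma, a one-stop drop in light (linear 0.5) lands near 73% on the
# encoded scale, not at 50% -- the encode spreads out the shadows and
# compresses the highlights into fewer encoded steps.
def gamma_encode(linear, gamma=2.2):
    return linear ** (1 / gamma)

for stop in range(4):
    linear = 0.5 ** stop                  # 1.0, 0.5, 0.25, 0.125
    print(f"{stop} stops down: linear {linear:.3f} "
          f"-> encoded {gamma_encode(linear):.1%}")
```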

In digital raw photography, just as when using the Zone System, you’re ultimately mapping real-world luminance values into a series of perceptually uniform tones…. or at least into the output-referred tones that you expect to see with your eyeballs and that will be pleasing on your computer display!

So this is where it becomes interesting. In order to start describing how sensors, and light in the real world works, you’ve probably seen some variation on this overused and poorly-explained graphic, that is meant to illustrate how “half of your available bits are taken up in the first stop of exposure.”

Very much like my output-referred gray scale above it, these graphics have a neat and tidy, scientific feel about them. But do they really tell you what’s going on? To compound the problem, they are invariably accompanied by some incomplete text about how scene luminance values have a 1:1 relationship to the available bits (“linear” capture), and therefore when the amount of light falls in half (your first stop of exposure), that increment is recorded in the middle of your camera’s available dynamic range. And then when it falls in half again, half of the remaining steps are used, slicing up that stop of exposure. All of which could be true.
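The arithmetic behind the “half your bits” claim is easy to sketch for an idealized linear A/D converter (and that is all this is: the idealized model, not a measurement of any real camera):

```python
# Code values available per stop in an idealized linear 14-bit capture.
# The brightest stop spans the top half of all code values, the next
# stop half of what remains, and so on down into the shadows.
codes = 2 ** 14                            # 16384 levels
for stop in range(1, 6):
    hi = codes // 2 ** (stop - 1)
    lo = codes // 2 ** stop
    print(f"stop {stop} below clipping: {hi - lo:5d} levels "
          f"({100 * (hi - lo) / codes:.2f}% of all codes)")
```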

My trouble with this approach is 1) the tones in the graphic are misleading, but much worse than that, 2) any correlation to actual, tested exposure values is rarely made. Which leaves the hapless photographer who doesn’t have a clear grip on this stuff, hanging out in space.

Take the tones in the graphic. Where did they come from? These graphics tend to vary quite a bit (because none of them are real! they are all created in Photoshop), but those that are commonly found seem to put the gray value under the mid-point (theoretically, the end of the first stop of exposure) somewhere between 70% and 80% on Lightroom’s scale of 0–100%.

This particular one is at about 78% on Lightroom’s scale. Of course I understand that this is just an info-graphic, but let’s at least start the conversation with a value that has some correlation to what happens in the real-world. Do you think Ansel Adams published his charts of negative densities without testing them?

Then there is the matter of terminology. If measured luminance increases geometrically (with say, a uniform decrease in distance from source to subject…), just what does it mean to say that a camera’s sensor will encode those exposure values in a ‘linear’ manner? Because while this may be true down in the bits of some idealized A/D converter, it simply does not help me understand how my camera is really going to react, or see where my tones are going to fall once gamma-encoded by my raw processor!

To try and create a more precise graphic, or at least learn just exactly where my exposure tones would fall once they were in Lightroom, I knew I would have to create a gradient in the real world and capture it in a raw file. Sniffing around a bit, I found one or two other folks who were also experimenting with raw captures of light falloff as it actually occurs in front of a camera. But the common approach there seems to be to photograph a logarithmic step wedge that is designed for densitometry. While this is certainly more interesting than just slapping together a synthetically created info-graphic, I still felt that it was somehow missing the mark.

What I was looking for was a way to very accurately measure where specific EV values in front of my camera would fall, once they had been gamma-encoded for display in Lightroom. This led me back to some 4 x 8 pieces of gator board that I had laying around, a bare tungsten bulb, and my trusty light meter.

What I was hoping for was a predictable falloff of Exposure Values with some regular increase in distance (from the light to the board on the hypotenuse, not along the tape measure…). But I didn’t get that. What I did manage to capture was a nice progression with more than 8 stops of falloff across 8 feet. So in this case, the imperfect reflectivity of the board combined with Lambert’s cosine law worked in my favor. Otherwise I would have had to use a card or a wall that was at least twice as long.
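For comparison, here is a hypothetical sketch of what pure inverse-square falloff from a point source would predict (the distances here are assumptions for illustration, not my actual setup):

```python
# Hypothetical sketch: stops of falloff from the inverse-square law
# alone, ignoring Lambert's cosine factor and the board's imperfect
# reflectivity (both of which steepen the real falloff, as noted above).
import math

def stops_down(distance, reference=1.0):
    """Stops of light lost at `distance`, relative to `reference` distance."""
    return math.log2((distance / reference) ** 2)

for d in (1, 2, 4, 8):
    print(f"{d} ft from the source: {stops_down(d):.1f} stops down")
```

Inverse square alone gives only about 6 stops from 1 foot to 8 feet, which is why the extra falloff from the cosine factor and the board’s reflectivity was welcome.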

Having said that, obtaining mathematically correct distances wasn’t the important thing. All I really needed was precise measurements of real EV values, that I could then correlate to tonal placement in Lightroom.

Of course the raw engineers could probably cook up a synthetic raw file with a perfect progression of raw exposure values for testing, and I have no doubt that they have already done this. But I still wanted the real thing. Having a raw gradient captured using a camera wouldn’t allow me to separate what the camera’s A/D converter might be doing, from the actual tonal placement created by the Adobe raw processing engine. But it would still show me a placement of tones in Lightroom, that had a direct relationship to measured luminance values in front of my camera.

What I found was how dramatic the tonal compression truly is up in the very high Exposure Values, at least with the new 2012 Process Version. 1/2 second at f/5.6 at ISO 100 put EV 11 at 99.8%, with the break to 100% white just 1/4″ to the right of that marker. Going up one-third stop in exposure to .7 second at f/5.6 put the break to 100% white nearly 1.5″ to the left of the EV 11 marker, so I chose 1/2 second as my “perfect exposure.” With EV 11 as close to 100% as I could get it (without clipping), EV 10 fell at 97.5%. Amazing! A full stop of exposure difference produces only 2.3% change in the gamma-encoded highlight value.

After that, EV 9 produced 90.5%, EV 8 fell to 78.5%, and so on.

Given those numbers, a slightly more representative info-graphic of where the values would fall relative to your bits might look more like this:

Am I really using 50% of my possible gray levels between 97.5% and pure white? And 75% of them just to get to 90% (which is where Adams’ Zone VIII might fall—white with textures—assuming a 10 zone system with 0 being pure black, and IX being glaring white surfaces, snow in flat sunlight)? Hard to believe, but possible. But again, as long as I have enough bits and get my exposure where I want it within the available dynamic range, this is a graphic that I simply do not care very much about.

Much more interesting to me, would be what the “Zones” actually looked like in my output-referred gray scale, or what the curve looked like.

I bracketed my exposures of course, for two reasons. First, so that I could pick the exposure that put the output-referred value of 100% white as close as possible to the EV nearest to the end of the board, which conveniently turned out to be EV 11. I also bracketed because my board wasn’t long enough. Having a bracket both allowed me to capture exposure values as they would have been recorded had my board been longer (simulating those lower values with the lower exposures…), and also to make precise measurements of how exact the falloff of the EV markers was, relative to f-stops of real exposure difference.

Here are the output-referred values as Lightroom interprets them, when shot with a Canon 5D MKII at ISO 100:

EV 11
  –  99.8%
EV 10
  –  97.5%
EV 9
  –  90.5%
EV 8
  –  78.5%
EV 7
  –  58.5%
EV 6
  –  37.5%
EV 5
  –  24%
EV 4
  –  14.5%
EV 3
  –  9%*
EV 2
  –  6%*
EV 1
  –  2.8%*

(* The last three EV’s in this chart were extrapolated from my bracket, in which each successive stop down in exposure did indeed prove to reflect new EV values within 1 or 2 percent of what the respective values would have been, had my board been long enough.)
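The same measurements, expressed as data so the per-stop deltas are explicit (the numbers are just the table above; the script itself is my own illustration):

```python
# Measured Lightroom (PV 2012) output values per EV, Canon 5D MkII at
# ISO 100, as reported in the table above. The shrinking deltas at the
# top of the range show the highlight compression at work.
ev_to_percent = {11: 99.8, 10: 97.5, 9: 90.5, 8: 78.5, 7: 58.5,
                 6: 37.5, 5: 24.0, 4: 14.5, 3: 9.0, 2: 6.0, 1: 2.8}

for ev in range(11, 1, -1):
    delta = ev_to_percent[ev] - ev_to_percent[ev - 1]
    print(f"EV {ev} -> EV {ev - 1}: {delta:4.1f} points on the 0-100% scale")
```

One stop of real exposure difference spans 2.3 points at the top of the range, but about 20 points through the midtones.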

These values look something like this, when represented on the gray scale:

OK, so…. what do you do with that? Well, once you have a fairly precise correlation to real-world exposure values, you can much more accurately see what the various Lightroom controls will do to the tones in a raw capture. Which is what I’ll be doing for my next posting, and for my next article in Digital Photo Pro magazine.


Larry D. White June 6, 2012 at 3:42 PM

Wow, this is amazing! What a difference in your “real world” step wedge from the one generated from Photoshop. It also calls into question the logarithmic falloff of available tonal values from one stop to the next in digital capture. I will very much look forward to your next discussion on this topic. Will you be making your step wedge available?

Paul Beiser June 8, 2012 at 6:59 PM

Wow.. George.. this is good.. but not sure I understand it all.. time for me to re-read.. and re-read..

Bill Janes June 10, 2012 at 6:56 PM

George,

Very interesting stuff. I have been playing around with a 41 step Stouffer wedge and have come to the same conclusion as you that its perceptual non-uniformity is a problem.

You might take a look at Norman Koren’s web site for some information on a simplified zone system. He gives an equation for the zones.

http://www.normankoren.com/zonesystem.html

Randy Sailer June 16, 2012 at 7:51 AM

Fascinating article! I think I will have to reread it a few times to completely get it.

Thanks for putting it together. I would love to read more of this type of stuff on your blog.

Randy

George June 16, 2012 at 8:48 AM

Glad you liked it, Randy. I know it’s a little abstract, but I feel the correlation from actual scene Exposure Values to Lightroom tonal values is essential to understanding how the tools work on your raw captures. Working on the follow-up posting now, with the results of how precise EV’s are changed with positive and negative moves on Highlights, Whites, Shadows and Blacks, to complete the circle.
George

Chris Doyle July 4, 2012 at 4:49 PM

Great article. I too will have to re-read it, but I didn’t even know this amount of control was possible with a digital file. I used to use the zone system myself with 5 x 4 view cameras as well as 35mm neg.

I am also a Lightroom virgin! I do have it but I keep on doing what I know in Photoshop. Need to change that. So big thanks to Denis Reggie for pointing me over here via Facebook.

MichaelT. July 10, 2012 at 1:55 AM

George: As a former view camera / zone system guy I really enjoyed the article (and I appreciate your “scientific approach” to the problem). Also really enjoyed the power station and sink images — wonderful dynamic range! I wonder how these would look as Pt/Pd prints. Next, you probably have seen Charlie Cramer’s article on Luminous Landscape about LR 4.1, titled “Tonal Adjustments in the Age of Lightroom 4”. He is interested, like you, in understanding how the highlights, shadows, whites and blacks sliders interact with each other. I look forward to your next post on this topic. Lastly, any plans for a workshop in the Santa Fe, NM area?

George July 10, 2012 at 6:10 AM

Hi Michael,

Glad you enjoyed the article. Most of the net of my research went into the Highlights & Shadows video…. so not sure I’m going to write a follow-up. But I might. Finally, no plans for a Santa Fe workshop, but I always post them on the blog when they do come up.

George

Igor July 13, 2012 at 8:44 AM

Hello, George!

Your article is very interesting and it did force me to make further investigations on the subject. Don’t you think that it’s the color profile you used that influenced the tone distribution you noticed in your experiment? There is an ICC color profile called “LstarGrey” which should interpret the image data according to the L* function (thanks to your references in the article I learned about L*). So, doesn’t the Lightroom “L” scale use this function, while your image was simply RGB coded? This might account for the discrepancies you mention in the article. The role of color profiles seems to be crucial for image reproduction. So one should mind the ones used by programs and devices. Your image reproduction does depend on the color profile in the first place. Kind regards,
Igor

George July 13, 2012 at 10:32 AM

Thanks for your comments, Igor. But it doesn’t matter what space you create the step wedge in. If you’re creating a “synthetic” step wedge in the computer, it ALL ends up in MelissaRGB once it’s been imported into LR. The point is that during raw conversion, the tones are created by a completely different pipeline than the one used for grayscale and RGB files. In raw conversion, exposure values high up in the range are placed very close together to create a more “film-like” roll off into the highlights, which is a wonderful thing. And the only way to know precisely what tones are going to be created from specific EV’s, is to put some light in front of a camera, and make an exposure.

Then to add a bit more spice to the stew, highlight tonal relationships are variable, depending upon where they fall in the 14-bit capture range, and how they are tweaked using the Tone Controls.

George

Igor December 2, 2012 at 1:43 AM

Hello, George!

Thanks for your response. Shouldn’t you examine the raw data for correct results, as far as the sensor response is concerned? Just going back to the color space: RAW files are built up according to the EV scale. Once you open a file in any editor (raw converter), you apply a color space with a certain gamma correction, so you can’t really estimate your sensor’s response that way. There are some raw converters (like, say, RawTherapee) which allow you to control conversion totally (including gamma correction). There’s also a very good program which allows you to see and analyze raw data and histograms of RAW files (not interpolated, but just raw, in 4 channels RGBG): Raw Digger. Look here http://www.rawdigger.com/. Or maybe you know about it?

Sorry if I am missing something.

Kind regards,
Igor

George December 2, 2012 at 2:58 AM

Igor,

The purpose of my experimentation was to find out where EV values fall, in Lightroom’s gamma-encoded output! It was not to know what the data points might be in the raw.

George

Igor December 4, 2012 at 12:05 AM

George,

Ok, then I don’t get your message in the portion of your article starting “So this is where it becomes interesting….” where you argue against the common approach. This approach is true as far as raw data are concerned. But once you deal with a particular sensor which has its own unique response for each channel and a particular in-camera analogue-to-digital converter (say, Nikon applies curves to the blue channel, leading to gaps in the raw histogram in this channel) and, furthermore, once you apply a particular interpolation algorithm, how can you state anything with any degree of certainty? EV values in Lightroom output are unique for your unique experiment, and it doesn’t seem they can be generalized in any way. There are simply two different things, both of which are true: the common approach and your experiment. One should just understand where and what is applied in practice. The common approach works in its place and should not be argued against.

George December 4, 2012 at 8:00 AM

Igor, I see where my discussion of how EV values are encoded in a linear manner clouded the true message of the article. And to clarify, I was not arguing that this “common approach” should be ignored, but only that it has been inaccurately represented in many published sources.

Indeed, my representation of “where the values fall” also has its own flaws, as it was based on an exposure that put an EV of 11 at 99.8% in Lightroom’s PV 2012. Rawdigger shows this same highlight value to be entirely clipped in both green channels, and is falling so far up on the scale that the surrounding tones are being extremely compressed, as Lightroom attempts to “roll off” these very bright highlight tones to create a “film-like” look.

An exposure that is just one stop lower (with no true clipping in the raw) tells a very different story of what tones are encoded in the first stop of exposure.

But that was not the purpose of the article. I believe the heart of the matter for Ansel Adams was to understand one thing: what tones would be created in his final print, from specific measurable exposure values that are in the real world. And that’s where I tried to leave it at the end of the article.

You are correct that my results cannot be generalized in any way, except to show digital photographers that the message of the Zone System still has value. Understanding how uniformly increasing (or decreasing) exposure values are interpreted by your raw processor will tell you a lot about why your pictures look the way they do once on the computer screen. And, with luck, help you make better pictures after that.

George

Owen Newnan October 8, 2013 at 4:48 PM

When I read “these tones … have a linear progression relative to the unit of measure used in that space. The unit of measure from black-to-white in Lightroom is percentages, from 0 to 100%” I expected gradations of 0%, 10%, 20%, etc. However, when I opened the 11 step grey scale image in Lightroom 5 with 2012 process I got percentages of 0, 14.9, 27.8, 32.2, 49, 58, 67.1, 76.5, 83.9, 92.9, and 100 respectively. Do I misunderstand the intent of the graphic?

George October 8, 2013 at 5:37 PM

Yes.

But I also might have not been perfectly clear by saying “the unit of measure in Lightroom“….

That part could have been more clear.

The graphic in question was created in Photoshop using values in an sRGB space, which has a gamma of 2.2. For the masochistic, the source graphic (NOT exactly the same as if you copy it from the WordPress page…) can be downloaded here. Open it in Photoshop, set your info readout to Grayscale, and you will find that the values “have a linear progression” in that space. 10%, 20%, 30%, etc.
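A quick sketch of what those uniform steps mean in linear-light terms, using the simple 2.2-power approximation (the exact sRGB curve has a short linear toe near black and differs slightly):

```python
# Decode a few of the graphic's uniform, gamma-encoded steps back to
# linear light with the 2.2-power approximation of sRGB. The decoded
# values are anything but uniform in scene terms.
for percent in (10, 50, 90):
    linear = (percent / 100) ** 2.2
    print(f"{percent}% encoded -> {linear:.3f} linear")
```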

Lightroom uses its own tone-mapping for RGB (or grayscale) files, and when you bring that file into LR you do indeed get slightly different values.

I wouldn’t lose any sleep over it. Do the values appear to have a “reasonably uniform delta in perceived brightness from step-to-step”?

That…. is the intent of the graphic.

Chirag December 29, 2015 at 5:41 PM

Great article ! Thank you
