Time Travel and Ethical Photojournalism


Some great questions came in from Tom Leininger in response to my workflow video from last week. He got me thinking, and I came up with this:

Tom-

Thanks for the e-mail with your thoughts and questions about ethics in regard to Adobe’s Lens Correction Profiles and the recent NPPA discussion about the Fill slider. You got me thinking so much about all this that I came up with a full post in response. This post is not about the ethics of toning, saturation, Hipstamatic, etc. It is about software and technology that makes your photographs more real, not less. Here goes…

Our goal in photojournalism is reality. The foundation of ethics in photojournalism is that our photographs of any situation should look the way our eyes saw it. Let’s use the human eye as our benchmark standard of reality. How the eye sees is our goal, and thus our reality. We forget that the human eye is not film or glass.

Since the beginning of photojournalism we have been recording reality with flawed physical materials. Glass lenses introduce distortion and vignetting of light, and film cannot record the full range of latitude that our eyes are capable of seeing. These limitations were accepted, so much so that we even exploited them for artistic purposes.

Take the famed Kodachrome slide film for example, which we loved for its unforgiving latitude that produced deep, dark shadows when we exposed for the highlights. Anyone researching our planet from Kodachrome slides alone would believe that the shade is as black as night even when the sun is shining. To the dismay of thieves everywhere, this is not the case. Despite the scenes our favorite photographers sometimes show us, our eyes can see detail in both the sunlight and the shadows simultaneously. There is shadow detail on Earth, even in Haiti. Kodachrome, with its strict latitude, was highly inaccurate in contrasty situations. Romantically so.

To counter such defects and limitations in our film and lenses, photojournalists have used countless contraptions. Over the decades these have included such things as fill flash, tungsten film, color balancing gels on strobes, split neutral density filters, developers and processes that altered contrast, and on and on. These were accepted practices, and photographs using these techniques have won awards for decades. And what was the goal of these tricks? To overcome the physical limitations of the photographic process in the pursuit of photographs that were closer to the way we actually see.

The major difference between these time-honored practices and modern technologies like Adobe’s Lens Profile Correction and the Fill slider (note that it’s not called “Fill Flash”) is that today’s solutions are applied in post-processing after the shoot. That’s the sticking point for a lot of old school photojournalists. Decisions in the film days were made at the moment of exposure and were mostly permanent. The exposed slide was considered a finished product and was meant to remain that way. You couldn’t make changes after exposure without altering a physical artifact.

Things have changed. With its advances in RAW conversion software (built into both Photoshop and Lightroom), Adobe has given photojournalists new ways to get closer to our goal—reality. We are less constrained by the limitations of film and glass.

This introduces a new dynamic to photojournalism. Decisions can now be made AFTER EXPOSURE that bring our images CLOSER TO REALITY. That is the situation, and most ethicists have not realized or accepted it.

When you shoot a photograph in RAW format you retain an amazing amount of control over that image. White balance, exposure, and (soon) focus are all things that you can decide on later while sitting in front of the computer. It’s like having a time machine that takes you back to your camera at the exact moment of exposure, letting you change any of your settings for each shot. This changes everything. Would it be unethical to use a time machine to go back and get a better photograph of a historic moment? Of course not, provided the photograph was true (and you didn’t step on any butterflies).
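If you want to see how un-mystical this time machine is, here is a minimal sketch using the open-source rawpy library (a LibRaw wrapper) rather than Adobe's Camera Raw; the file name and white balance multipliers are invented for illustration. The same capture data is rendered twice, with the white balance decided long after the shutter clicked:

```python
import rawpy
import imageio

def render(path, **params):
    # Re-open the file and re-render the same capture data with new settings.
    with rawpy.imread(path) as raw:
        return raw.postprocess(**params)

# The same exposure, "re-shot" twice at the computer:
as_shot = render("backlit_scene.NEF", use_camera_wb=True)
# user_wb takes four per-channel multipliers; these values are invented.
rebalanced = render("backlit_scene.NEF", user_wb=[2.2, 1.0, 1.4, 1.0])

imageio.imwrite("as_shot.tiff", as_shot)
imageio.imwrite("rebalanced.tiff", rebalanced)
```

Nothing about the recorded moment changes between those two renders; only the decisions about how to develop it do.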

This time traveling ability in post-processing is an incredible power. Imagine being able to take your old film negatives and re-develop them in new, improved chemistry that eliminated the grain and improved the color. That is happening with digital right now. RAW conversion software improves every year, and your photographs improve with it. Your digital photographs from ten years ago can be reprocessed with today’s software for improvements in color, noise, etc.

This is the new reality: When you click the shutter today, you’re not baking your image permanently into Kodachrome emulsion. You’re bookmarking a moment for later attention. You’ll make decisions about exposure, white balance, and whether the photograph will be in color or black and white at the computer, not the camera. It may be different and new, and it may be scary, but it isn’t necessarily wrong.

Because even with these new tools, our goal is still reality.

The recent “Fill Flash” discussion that appeared online seemed to involve photographers who didn’t quite understand the tool. They even got the name wrong (it’s called Fill and it’s no longer a part of Camera Raw). Like you said, the tool simply opened up the midtones of an image. That’s nothing new to digital image processing (midtone controls were part of the original LeafDesk software circa 1990). Besides, no one in the film era would have questioned the ethics of using a low-contrast developer and some minor dodging and burning to achieve a similar result.
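For the curious, here is a toy illustration of what a midtone lift of this kind does. This is a simple gamma curve I am assuming for demonstration purposes, not Adobe's actual algorithm, which is proprietary and certainly more sophisticated:

```python
# A toy midtone lift: black and white points stay fixed,
# everything in between comes up.
import numpy as np

def fill_midtones(image, amount=0.5):
    """Lighten shadows and midtones of a float image in [0, 1].

    amount=0 leaves the image untouched; larger values lift the
    midtones harder while the endpoints stay anchored.
    """
    gamma = 1.0 / (1.0 + amount)          # amount=0.5 -> gamma of about 0.67
    return np.clip(image, 0.0, 1.0) ** gamma

tones = np.array([0.0, 0.25, 0.5, 0.75, 1.0])
print(fill_midtones(tones, amount=0.5).round(3))
# -> [0.    0.397 0.63  0.826 1.   ]  (0 stays 0, 1 stays 1)
```

No pixels are invented and no content is moved; the existing capture data is simply redistributed across the tonal range, exactly as a low-contrast developer would have done.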

What is the difference, ethically, in the two following scenarios?
1. Using a battery-operated flash unit on your camera as fill flash to lighten up a backlit subject.
2. Using a slider in software that triggers a mathematical algorithm to lighten up a backlit subject, using the original capture data from the sensor.

There is no difference ethically. Both should be accepted practices. But if you had to pick one that was less natural, surely the strobed fill flash would be it, the way it artificially changes reality by casting unnatural (though pleasing) light and adding shadows. Using a slider to adjust contrast and give your image more latitude (similar to that seen by the human eye) preserves the natural look of the scene and doesn’t intrude on the moment with a bright flash of artificial light.

What we are seeing in photojournalism ethics today is a lack of understanding when it comes to technology. There are definitely things to be concerned with, notably those techniques that move our photos toward unreality, such as cloning, heavy-handed toning, and oversaturation and desaturation.

However, technology is providing us with great tools—new algorithms that bring our photographs closer to reality and overcome the physical limitations of lens design.

Adobe’s Lens Profile Correction is the perfect example. Here’s how it works:

Modern lenses are riddled with flaws like pincushion and barrel distortion, color fringing, and vignetting at wide apertures. Even the pricey zoom lenses that most photojournalists carry today have these flaws to some degree. With Lens Profile Correction, the flaws in your glass are corrected. Adobe has tested most of the lenses available today, from the Nikon 14-24 to the Canon 600, and built profiles that correct the distortion and vignetting in each lens design. In other words, Adobe’s Lens Profile Correction automatically corrects the UNREALITY that your lens is putting into your photographs. The scientists and mathematicians behind Lens Profile are bringing your photograph closer to what was seen with the human eye and away from the world as seen through glass elements.
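Adobe’s profiles themselves are proprietary, but the math behind this kind of correction is public. Here is a rough sketch of the same idea using OpenCV’s Brown-Conrady distortion model, with invented coefficients standing in for a measured lens profile:

```python
# A sketch of profile-based lens correction. The coefficients below are
# invented stand-ins for what a real profile stores after a lens has
# been measured against test charts.
import cv2
import numpy as np

img = cv2.imread("frame.jpg")              # hypothetical input frame
h, w = img.shape[:2]

# Camera matrix: focal length in pixels and the optical center.
K = np.array([[w * 0.9, 0.0, w / 2.0],
              [0.0, w * 0.9, h / 2.0],
              [0.0, 0.0, 1.0]])

# Radial (k1, k2, k3) and tangential (p1, p2) distortion terms for
# this "profile". k1 < 0 describes mild barrel distortion, which
# undistort() then removes.
dist = np.array([-0.12, 0.03, 0.0, 0.0, 0.0])   # k1, k2, p1, p2, k3

undistorted = cv2.undistort(img, K, dist)

# Vignetting: divide out a radial falloff curve fitted to the same lens.
# (A real raw pipeline would do this on linear sensor data.)
yy, xx = np.mgrid[0:h, 0:w]
r = np.hypot(xx - w / 2.0, yy - h / 2.0) / np.hypot(w / 2.0, h / 2.0)
falloff = 1.0 - 0.35 * r**2                     # invented falloff model
corrected = np.clip(undistorted / falloff[..., None], 0, 255).astype(np.uint8)

cv2.imwrite("corrected.jpg", corrected)
```

The point of the sketch is that the correction is deterministic geometry and optics: the same measured profile applied to every frame from that lens, with no judgment calls about content.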

It makes you wonder: should we keep our professional standards and ethics modeled on the physics of light passing through glass and onto film? Or should we base them on the superior quality and light-gathering ability of the human eye?

If you could go back in time to the moment you took a favorite photograph, would you rather shoot it with a $400 Nikon/Canon 24mm lens, or a $6,500 Leica 24mm lens with its exceptional sharpness, contrast and lack of distortion? Would you have any ethical concerns with making that choice before you shot the photograph?

Those were easy questions. What about this: Would you have ethical concerns with using a mathematical algorithm designed by scientists to bring your photograph closer to reality by correcting the flaws inherent in your lens, with the understanding that the algorithm will not change the content of your photograph, only correct the distortion, vignetting and other unnatural flaws of your equipment?

These are the kinds of questions we face. We are at the moment where post-processing is as much a part of any photojournalist’s kit as their cameras and lenses. The physical limitations of the past are vanishing. We are able to make ethical decisions about technique retroactively. Don’t be afraid of advanced techniques that lead to reality, even as they astonish you.

Remember, reality is the goal.

Comments

Tom Leininger: I talked with a photographer who recently upgraded to Lightroom 4 and asked him about the changes Adobe had made. He said that he could get more out of his RAW files with that version than he could with previous versions. That statement sold me on the software. It also made me realize that the days of finding what works best for you and sticking with it are over; flexibility is the key. Thanks for the thoughtful answer to my question.

Robert Gumpert: It is not the eye that sees but the brain, and the brain routinely throws out data the eye transmits. The job has always been to transmit what the photographer “saw,” not what the eye sees.

Trent Nelson: Tom- so true. it’s a game changer.

Trent Nelson: Robert - okay, what’s your point?

BWJones: Robert, the eye is the brain, neurobiologically speaking. The eye does preprocessing before sending those data to the CNS.

Alex Gallivan: Trent, thoroughly enjoyed your post. Thanks,

Trent Nelson: Michael- I think you’ve hit it. Robert is talking about one thing and I’m talking about another. In this post I’m talking about one narrow slice of photojournalism, technology and ethics. Make sure you check out his work, it’s great: http://robertgumpert.com/exhibits/lost-promise/

Trent Nelson: Robert- I get what you’re saying, but your comments are not addressing what I have written about here, which is technology and ethics. Specifically, how new tools have been misunderstood when it comes to ethical photojournalism. Your comments are taking the discussion in another direction. Also, you fail to realize that we can both be right. There is room for my points to coexist with your points. Good technique does not need to take away from good seeing. You can have both. When you understand technology you actually reduce the time you spend at the computer. Lens Corrections are automatically applied on import into Lightroom, resulting in better, more accurate photographs. You can’t argue that away in a hastily-written blog comment. It’s a fact. If your vision and mine are equal but my “expertise in working my tools” is better than yours, I win.

Michael Mangum: Trent, this is yet another reason why I love the UPJ site (and subsequently your own blog, and others). As a young photojournalist, I only really know the new school, and only really know of the old school. It’s good to have these insights into why the questions that are posed today are actually important questions when it comes to ethics in certain situations. I’ve questioned things like fill, polarizer filters, fill-flash, and other things. My question (to you, Trent) is also in a way a response to Robert’s comment. When speaking about “the way the eye sees it,” we’re talking about “seeing” technically and objectively, right? And not subjectively? Robert is speaking of the subjective, but is that really the photojournalist’s job? My understanding is that we develop our own styles of capturing and displaying the reality, but isn’t reality itself objective?

Robert Gumpert: The point is that talking about using software in a time-travel way, or hardware on scene, to record “exactly what the eye sees” is a bogus concept; we see what our minds tell us we see, given the information coming in. It is why an image of a scene taken by two separate photographers with the same equipment, standing next to each other, can look different. It is why “eyewitness” accounts and IDs are so problematic. The premise of your piece focusing on what the eye sees is wrong. The question has been and remains, when talking about documentary/journalistic photography: is the photo a “truthful” representation of what was going on? It is not about whether the image, worked at the time or in post-production, matches exactly what was there. At least if the desire is more than a simple insurance photo of data. Too many of us focus on the tools and forget that just because you can do it doesn’t mean it should be done. Using a tool doesn’t mean the photo is better for it. I show photos to share a scene, not my expertise in working my tools. And to be clear, I do not mean by that that a thorough knowledge of how to use them isn’t important, just that part of that knowledge is also knowing how to use the tools to keep the content front and center, not my skills.

June 15, 2012