Editor's note: This topic is so important that in 2022 I was asked to give a Zoom lecture to the Royal Photographic Society in the UK. If you prefer watching video to reading a long blog post, then you may wish to absorb the information this way:
Computational Photography
This blog post has many beginnings...
Beginning #1
I get many emails from photographers the world over, expressing frustration that they schlep their high-quality equipment, shoot RAW, and post-process, all the while their significant other shoots a similar image with an iPhone and posts it to Facebook seconds after it was taken - and the image looks great, with no post-processing needed. How humiliating!
Beginning #2
In 1973, Paul Simon wrote a song called "Kodachrome", which, he sang, "...gives us those nice bright colors, gives us the greens of summers, makes you think all the world's a sunny day". According to
Wikipedia, "...the real significance was that Kodachrome film gave unrealistic color saturation. Pictures taken on a dull day looked as if they were taken on a sunny day. (To correct this, serious photographers would use a Wratten 2b UV filter to normalize the images.)"
Years later, Fujifilm would produce films that made Kodachrome colors look subdued by comparison.
Today, smartphone images represent the latest in a trend to create people-pleasing images that deviate from how the world actually looks to a raw sensor. Is it still photography with so much misrepresentation going on?
Beginning #3
When the Light L16 camera first came out, I thought it was genius - the future of smartphone cameras. This flat slab of a camera employed 16 small sensors and lenses of various focal lengths and stitched several of their images together to create a high-resolution 52 MP image, better than what any single sensor could produce. Different focal lengths were combined to emulate a "zoom" between the fixed focal lengths. The camera could also produce a depth map by configuring at least two of the lenses into a stereo arrangement, so you could change the depth of field after the fact. If there was ever a good example of what Computational Photography can achieve, this was it - producing an image of greater quality than the sensor and optics alone can provide.
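Light never published the details of its pipeline, but the core trick of recovering depth from two lenses can be sketched in a few lines of OpenCV. This is a minimal illustration only - the file names, focal length, and baseline below are made-up placeholders, and a real multi-camera system adds calibration and rectification steps that are omitted here.

```python
# Sketch of depth-from-stereo, the principle behind after-the-fact
# depth-of-field control. Assumes two already-rectified views from
# lenses a known distance apart; file names are hypothetical.
import cv2
import numpy as np

left = cv2.imread("lens_left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("lens_right.png", cv2.IMREAD_GRAYSCALE)

# Block matching finds, for each pixel, how far it shifted between the
# two views; nearer objects shift more (larger disparity).
matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = matcher.compute(left, right).astype(np.float32) / 16.0

# Disparity is inversely proportional to depth: depth = f * B / disparity,
# where f is focal length in pixels and B is the baseline between lenses.
f_px, baseline_m = 1200.0, 0.01   # illustrative values only
depth = np.zeros_like(disparity)
valid = disparity > 0
depth[valid] = f_px * baseline_m / disparity[valid]

depth_vis = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
cv2.imwrite("depth_map.png", depth_vis)
```

Once you have a per-pixel depth estimate like this, simulating a shallower depth of field is "just" a matter of blurring each pixel in proportion to its distance from the chosen focal plane.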
As great as the idea was, plastic optics, a slow processor, sluggish desktop software, and a high price doomed the first iteration. The company wisely regrouped and focused (no pun intended) on licensing its technology to smartphone companies, resulting in the 5-camera Nokia 9. When that phone proved unsuccessful in the marketplace, the idea died.
Beginning #4
When 35mm film first came out, "serious" photographers shunned it, as it offered inferior quality compared to the medium-format films in use at the time. Eventually, convenience won out, as people decided the quality was more than good enough for their needs.
Beginning #5 - Why can't the camera just make it look the way I see it?
In my seminars, I talk about how the camera and the eye see light differently, and I explain to attendees that the dynamic range of our modern sensors is kept narrow on purpose. I then show this "devil's advocate" example:
This image was merged from a bracketed exposure sequence - perhaps 30 stops in total range, much wider than what the traditional HDR feature on your camera can produce. It shows everything my eye could see, from the detail in the backyard through the doors to the detail in the shadow under the piano bench.
But an image that captures everything your eyes can see can look very flat and low in contrast, as in the example above. "One day," I would tell my seminar attendees, "psychologists will figure out what kind of image processing is happening inside our brains, and then the camera will just make it look the way it appeared to our eyes."
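For the curious, the mechanics of that merge-then-compress step can be sketched with OpenCV. This is only an illustration of the general technique, not what any particular camera or phone does; the file names, exposure times, and tone-mapping parameters below are placeholders.

```python
# Minimal sketch: merge a bracketed exposure sequence into a high dynamic
# range image, then tone-map it so the wide captured range fits an
# ordinary 8-bit display. File names and times are hypothetical.
import cv2
import numpy as np

files = ["under.jpg", "normal.jpg", "over.jpg"]
times = np.array([1/500, 1/60, 1/8], dtype=np.float32)  # shutter speeds, seconds
imgs = [cv2.imread(f) for f in files]

# Recover a linear radiance map spanning the full bracketed range.
merge = cv2.createMergeDebevec()
hdr = merge.process(imgs, times=times)

# Scaling that range down linearly is what produces the flat, gray look;
# a tone mapper compresses it non-linearly while preserving local contrast,
# a crude stand-in for the adaptation our visual system performs.
tonemap = cv2.createTonemapReinhard(gamma=1.5, intensity=0.0,
                                    light_adapt=0.9, color_adapt=0.1)
ldr = tonemap.process(hdr)
cv2.imwrite("tonemapped.jpg", np.clip(ldr * 255, 0, 255).astype(np.uint8))
```

Notice that the merge itself is mechanical; it is the tone-mapping choices that decide whether the result looks flat, natural, or Kodachrome-bright.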
===
My friends, that day has nearly arrived. And the advancements didn't come from the camera companies; they came from the smartphone manufacturers, who had to be clever in order to achieve higher-quality results than their tiny lenses and sensors would otherwise allow. Yes, iPhone images can look relatively poor when you pixel-peep, and the saturation and HDR might be a little over-the-top compared to a traditional camera, but if all you do is post to Instagram, that difference becomes meaningless - people LIKE those nice bright colors and those enhanced greens of summer. Plus, in my experience, most modern smartphones handle difficult light and HDR much better and more naturally than shooting in a conventional camera's HDR mode, and just as well as spending two minutes tweaking the RAW file to make it look the way your eyes saw it.
What computational tricks are the smartphones using that conventional cameras aren't? Is it really photography when so much manipulation is automatically applied, or when the image is enhanced to the point of near-fiction?