Wednesday, September 1, 2021

Computational Photography

Editor's note: This topic is so important that in 2022 I was asked to give a Zoom lecture to the Royal Photographic Society in the UK.  If you prefer watching video to reading a long blog post, then you may wish to absorb the information this way:



Computational Photography

This blog post has many beginnings...

Beginning #1

I get many emails from photographers the world over, expressing frustration that they schlep their high-quality equipment, shoot RAW and post process, all the while their significant other shoots a similar image with their iPhone, and then posts it to Facebook seconds after it was taken - and the image looks great, with no post-processing needed.  How humiliating!

Beginning #2

In 1973, Paul Simon wrote a song called "Kodachrome", in which he sang that the film "...gives you those nice bright colors, give us the greens of summers, makes you think all the world's a sunny day".  According to Wikipedia, "... the real significance was that Kodachrome film gave unrealistic color saturation. Pictures taken on a dull day looked as if they were taken on a sunny day. (To correct this, serious photographers would use a Wratten 2b UV filter to normalize the images.)"

Years later, Fujifilm would produce films that made Kodachrome colors look subdued by comparison.

Today, smartphone images represent the latest in a trend to create people-pleasing images that deviate from how the world actually looks to a raw sensor.  Is it still photography with so much misrepresentation going on?

Beginning #3

When the Light L16 camera first came out, I thought it was genius, and that this would be the future of smartphone cameras.  This flat slab of a camera employed 16 small sensors/lenses of various focal lengths and stitched several of them together to create a high-resolution 52 MP image better than what any single sensor could produce.  Different focal lengths were combined to emulate a "zoom" between the fixed focal lengths.  The camera was able to produce a depth map by configuring at least two of the lenses into a stereo arrangement, letting you change the depth-of-field after the fact.  If there was ever a good example of what Computational Photography can achieve, this was it - producing an image of greater quality than the sensor and optics alone could provide.


As great as the idea was, plastic optics, a slow processor, sluggish desktop software, and a high price doomed the first iteration.  The company wisely regrouped and focused (no pun intended) on licensing their technology to smartphone companies, resulting in the 5-camera Nokia 9.  When that phone failed in the marketplace, the idea died.

Beginning #4

When 35mm film first came out, the "serious" photographers shunned it, as it offered inferior quality compared to the medium-format films being used at the time.  Eventually, convenience won out, as people decided the quality was more than good enough for their needs.

Beginning #5 - Why can't the camera just make it look the way I see it?

In my seminars, I would talk about how the camera and the eye see light differently.  I would explain to attendees that the limited dynamic range of our modern sensors is narrow on purpose.  I would then show this "devil's advocate" example:


This image was merged from bracketed exposures spanning perhaps 30 stops in total - a much wider range than the traditional HDR feature on your camera can produce.  It shows everything my eye could see, from the detail in the backyard through the doors to the detail in the shadow under the piano bench.

But an image that contains everything your eyes can see can look very flat and low-contrast, as in the example above.  "One day", I would say to my seminar attendees, "psychologists will figure out what kind of image processing is happening inside our brains, and then the camera will just make it look the way it appeared to our eyes."

===

My friends, that day has nearly arrived.  And the advancements didn't come from the camera companies.  They came from the smartphone manufacturers, who had to be clever in order to achieve higher-quality results than their cameras' tiny lenses and sensors would otherwise allow.  Yes, the iPhone images can look relatively poor when you pixel peep, and the saturation and HDR might be a little over-the-top when compared to a traditional camera, but if all you do is post to Instagram, that difference becomes meaningless - people LIKE those nice bright colors, and those enhanced greens of summer.  Plus, in my experience, most modern smartphones handle difficult, high-dynamic-range light much better and more naturally than a conventional camera shooting in HDR mode, and just as well as spending two minutes tweaking the RAW file to make it look the way your eyes saw it.

What computational tricks are the smartphones using that conventional cameras aren't?  Is it really photography when so much manipulation is automatically applied, or when the image is enhanced to the point of near-fiction?  

Computational Photography

The idea of merging images from multiple sensors (as in the Light L16 above) is just one example of computational photography.  There are many other techniques that smartphones employ that traditional cameras just don't.  Below are just a few examples (as always, click on any image to view larger and sharper):

How the Samsung Galaxy S21 analyzes your image and optimizes faces.  This unfortunate example was taken from their website, as the claimed "improvement" is hard to see.

Smartphones are incorporating machine learning techniques (sometimes coupled with dedicated neural processing hardware) to identify subjects and automatically perform the kind of editing that would brighten the image, smooth the skin, fill in detail, and blur out the background in certain modes.  DSLRs often had some of these features scattered throughout their menus but rarely would they be automatically invoked without the photographer's knowledge.  Smartphones do it every day.  Normal people love the results they provide straight-out-of-camera.
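
Out of curiosity, here's roughly what that kind of pipeline looks like once you strip away the neural hardware.  This is only a crude sketch - OpenCV's stock face detector stands in for a phone's machine-learned subject recognition, and the file names are made up - but it shows the sequence: find the faces, blur the background, brighten and smooth the faces.

# A crude sketch of an automatic "beautify" pipeline: find the faces, blur
# the background, then brighten and smooth the faces.  OpenCV's stock Haar
# face detector stands in for a phone's neural engine; file names are made up.
import cv2

img = cv2.imread("snapshot.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

# Start from a blurred copy (the fake "shallow depth of field" background)...
result = cv2.GaussianBlur(img, (21, 21), 0)

for (x, y, w, h) in faces:
    face = img[y:y+h, x:x+w]
    face = cv2.convertScaleAbs(face, alpha=1.1, beta=15)                 # gentle brightening
    face = cv2.bilateralFilter(face, d=9, sigmaColor=75, sigmaSpace=75)  # skin smoothing
    result[y:y+h, x:x+w] = face   # ...then paste each enhanced face back in, sharp

cv2.imwrite("snapshot_beautified.jpg", result)

The phone, of course, does all of this in a fraction of a second and with far better subject detection; the sketch is just to show that the individual steps are ordinary image operations.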

Here's a straight-out-of-phone example taken in "Photo" mode.

Scroll down to see the same picture taken in "Pro" mode, which doesn't try to enhance the image.  (I'm separating them on purpose).

.
.
.
.
.
.
.
.
.
.
.
.
.
.
. (keep scrolling)
.
.
.
.
.
.
.

.
.
.
.
.
.


Taken with the smartphone in "Pro" mode.

Why didn't I show you these examples side-by-side?  Because if I had, you would have looked at the first image and shouted, "Overprocessed!"  Here, I'll prove it:



See?  You probably didn't object as strongly when you saw just the one image.

Comparisons aside, the new algorithms also provide the most realistic HDR I've seen without resorting to bracketing and tone mapping.  Keep in mind that these images were also designed to be viewed on the high-contrast displays typically found on smartphones, boosting the bright-and-saturated look even further.
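
For the technically curious, the merging step itself isn't magic.  Below is a rough sketch of one classic way to fuse a bracketed set - OpenCV's Mertens exposure fusion - on three hypothetical exposures.  A phone grabs its frames automatically from an internal burst buffer and adds its own tone mapping on top, but the principle of merging differently exposed frames is the same.

# A rough sketch of multi-exposure merging using OpenCV's exposure fusion
# (Mertens).  The three file names are hypothetical; a phone pulls these
# frames from its buffer automatically instead.
import cv2

bracket = [cv2.imread(f) for f in ("under.jpg", "normal.jpg", "over.jpg")]
fused = cv2.createMergeMertens().process(bracket)     # floating-point image, roughly 0..1
cv2.imwrite("fused.jpg", (fused * 255).clip(0, 255).astype("uint8"))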

Here are some more impressive examples I took with a recently acquired Samsung S21 (not the 'plus' or 'ultra') in both "Photo" (left) and "Pro" (right) mode.  (Pretend the buildings are straight :-) ): 







"Photo" mode

"Pro" mode

Photoshopped .dng file

More examples of how smartphones handle High Dynamic Range:



A conventional camera couldn't do this without shooting RAW and postprocessing.

The Merging Several Adjacent Images Technique

Another computational photography trick that the big cameras tackled first was the idea of merging several shorter-exposure pictures into one, simulating a longer exposure.  Traditionally, really long exposures were enabled by using dark Neutral Density filters; then programs like Photoshop and StarStax came out allowing you to do the merging on your computer, while averaging away the random noise in the process.  Sony's Smooth Reflections app did the same thing in-camera, which I vastly preferred to using ND filters to get long waterfall shots.  (I compare the two techniques in this blog post from a few years ago.)
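
If you're curious what the math amounts to, it's little more than averaging.  Here's a minimal sketch (assuming a folder of tripod-mounted short exposures with hypothetical file names); averaging N frames simulates an exposure N times longer, while the random noise shrinks roughly by the square root of N.

# Minimal sketch: average a stack of short exposures to simulate one long
# exposure.  Tripod assumed, so no alignment step here; file names are
# hypothetical.
import glob
import cv2
import numpy as np

frames = sorted(glob.glob("waterfall_*.jpg"))
acc = None
for f in frames:
    img = cv2.imread(f).astype(np.float64)
    acc = img if acc is None else acc + img

average = (acc / len(frames)).clip(0, 255).astype(np.uint8)
cv2.imwrite("simulated_long_exposure.jpg", average)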

A waterfall in Iceland.  It was too cold and the constant hail discouraged me from using ND filters; Sony's Smooth Reflections app in the A7R II did it in a fraction of the time and nailed the exposure on the first try.  Sony cameras have since stopped supporting downloadable apps, which is why I will always take the A7R II when I travel.

The Shaky Hand Technique 

Wait, it gets better!  You know how some cameras offer a feature called pixel-shift, where you put your camera on a rock-solid tripod, and the camera takes multiple pictures, shifting the sensor a quarter-pixel in each direction, and then merges these into one über-high-resolution image?  Smartphones can emulate that technique, not by moving the sensor a quarter pixel, but by utilizing the shakiness of your hand.  Smartphones can continuously capture and buffer images at video speeds, allowing the camera to select the best source images - that is, those which are offset about one pixel from one another (and also not blurry).  All invisible to you.  A higher-quality image (in some cases also a higher-resolution image) without the burden of higher-quality hardware!
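
Here's a heavily simplified sketch of that idea, assuming a handful of handheld burst frames with hypothetical file names: measure each frame's sub-pixel shift against the first one with phase correlation, drop each frame onto a 2x-upscaled canvas at its measured offset, and average.  Real phones add outlier rejection, per-pixel weighting, and demosaicing on top of this.

# A toy version of "shaky hand" super-resolution: align burst frames to
# sub-pixel precision and accumulate them on a finer grid.  File names are
# hypothetical, and the sketch works on grayscale for brevity.
import cv2
import numpy as np

files = ("burst_0.png", "burst_1.png", "burst_2.png", "burst_3.png")
frames = [cv2.imread(f, cv2.IMREAD_GRAYSCALE).astype(np.float32) for f in files]

h, w = frames[0].shape
scale = 2
canvas = np.zeros((h * scale, w * scale), np.float64)
weight = np.zeros_like(canvas)

for frame in frames:
    (dx, dy), _ = cv2.phaseCorrelate(frames[0], frame)      # sub-pixel shift vs. reference
    # Scale the frame up and shift it back by its offset so it lands on the reference grid.
    M = np.float32([[scale, 0, -dx * scale],
                    [0, scale, -dy * scale]])
    warped = cv2.warpAffine(frame, M, (w * scale, h * scale), flags=cv2.INTER_LINEAR)
    mask = cv2.warpAffine(np.ones_like(frame), M, (w * scale, h * scale),
                          flags=cv2.INTER_LINEAR)
    canvas += warped
    weight += mask

result = (canvas / np.maximum(weight, 1e-3)).clip(0, 255).astype(np.uint8)
cv2.imwrite("superres.png", result)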

This technique is also the way many smartphones do image stabilization.  Well, that's not entirely true - when you are composing your image on your phone, the phone is capturing full-res still images at video speeds and analyzing each image for sharpness.  When you press the virtual shutter release button, the camera goes back through the buffer and simply selects the most recent image that is sharp.  Genius, but that also means you won't necessarily get that 'decisive moment' you were hoping for.
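
A sketch of the "pick the sharpest frame" part is almost trivial: score each buffered frame with a focus measure (variance of the Laplacian is a common one) and keep the winner.  The file names are, again, hypothetical.

# Score each buffered frame by the variance of its Laplacian (a standard
# sharpness measure) and keep the sharpest one.
import cv2

def sharpness(img):
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

buffered = [cv2.imread(f) for f in ("buf_00.jpg", "buf_01.jpg", "buf_02.jpg")]
best = max(buffered, key=sharpness)
cv2.imwrite("chosen_frame.jpg", best)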

Another great benefit of the 'shaky hand' technique is that you can also get a real value for each Red, Green, and Blue pixel instead of relying on the Bayer demosaicing technique.  (Your next question will be, "Will I notice a difference?"  The truthful answer is, "Probably not", although the image noise has the potential to be lower.)

Night Mode - Without a Tripod

Combining multiple short-exposure images is also how the various "Night Mode" features work, and it's a combination of all the techniques mentioned above: shaky hand, multiple image captures, in-phone alignment, and averaging away all the noise.  In testing out this feature I compared it to what my Sony RX100 VII produced (which technically needed a tripod since the SteadyShot would only do so much in extremely low light).  
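
Strung together, the recipe looks something like the sketch below (hypothetical file names again): align each handheld frame to the first one, then average, so every individual frame stays short enough to avoid motion blur while the noise averages away.  I'm using OpenCV's ECC alignment purely as a stand-in for whatever the phone actually does, and real night modes add tone mapping on top.

# A rough sketch of the Night-mode recipe: align a handheld burst to the
# first frame, then average.  ECC alignment stands in for the phone's own
# (far more elaborate) alignment; file names are hypothetical.
import cv2
import numpy as np

frames = [cv2.imread(f) for f in ("night_0.jpg", "night_1.jpg", "night_2.jpg")]
ref_gray = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)

acc = frames[0].astype(np.float64)
for frame in frames[1:]:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    warp = np.eye(2, 3, dtype=np.float32)                    # start from "no shift"
    criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 50, 1e-6)
    _, warp = cv2.findTransformECC(ref_gray, gray, warp,
                                   cv2.MOTION_TRANSLATION, criteria, None, 5)
    aligned = cv2.warpAffine(frame, warp, (frame.shape[1], frame.shape[0]),
                             flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
    acc += aligned

result = (acc / len(frames)).clip(0, 255).astype(np.uint8)
cv2.imwrite("night_mode.jpg", result)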

S21                                       RX100 VII

S21                                 RX100 VII
  

S21 in "Night" mode

S21 in "Pro" mode (this is actually how it looked to my eyes - it was DARK there!!!)

Brightened DNG file (not as good)

And here are a few more Night mode images to share, just because I like them :-)  :

  

  

Remember, these are all snapshots.  But how nice to get them all to look better without having to post-process!

S21 TIP: If you're shooting in PRO mode you have the option to also save the image as a .dng (which is kind of like a non-proprietary RAW file).  If you also have your images automatically uploaded to Google Photos, you should know that if you have the Upload Size set to "Storage Saver",  your raw files will be converted to .jpg before being uploaded, losing all the benefits of shooting RAW.  Instead, hook up your phone to your computer via USB and download them that way.  Quality loss avoided.

Time-Of-Flight

Then there's everyone's favorite "Portrait" mode, which simulates the kind of shot you'd get with one of those expensive white 70-200 f/2.8 portrait lenses.  The way it's done is pretty complex - the camera has to figure out which compositional elements are close and which are far, and then it applies a nice Photoshop-like Gaussian blur to the objects that are far.  How the phone determines the distance varies - some phones like the latest iPhones use laser-based LiDAR to build a 3D model of the scene; other phones do the same thing with IR light (using a sensor called Time-of-Flight).  Both techniques send out either a laser or IR light and time how long it takes for the light to reflect back off the subject; the longer it takes, the further away the subject is.  Still other phones like the early Google Pixels had to be clever about building their depth maps without any such special hardware, leaning heavily on machine learning and Convolutional Neural Networks to identify what the subjects might be.  Clever, but not as good, which is why early Portrait Mode examples were kind of sloppy around the edges; sometimes stray hairs or parts of clothing would be blurred, which kind of gave it away.  I'm not seeing those artifacts now, though.
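
The timing arithmetic behind that last part is about the simplest math in this post - distance is just the speed of light times the round-trip time, divided by two.  A tiny sketch (the nanosecond figure below is only an illustrative number):

# Time-of-flight distance: the pulse travels out and back, so divide by two.
SPEED_OF_LIGHT = 299_792_458                 # meters per second

def tof_distance_m(round_trip_seconds):
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

print(tof_distance_m(6.67e-9))               # a ~6.67 ns round trip is about 1 meter away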

Surely you've all seen portrait mode examples before.

Here's an image that a conventional camera couldn't take.  The phone identified four faces and blurred everything but the faces.  So you end up with a front row that's in focus, a space between the front and back rows that is blurred, and a back row that's in focus.  Good or bad?  You decide.

I love the Portrait feature; when coupled with the phone's optical telephoto lens it produces nice results and saves the weight and expense of a 'real' lens (although people won't take you seriously as a wedding photographer using this :-) ). 

TIP: Normally the depth map can't be stored in .jpg format (nor RAW format for that matter), which is one reason the new HEIF file format is now a thing.  In addition to higher quality compression, there's also a means of storing the depth map information inside.  (Also video, audio, and a few other things.)  Why is that useful?  It's possible to open the depth map in Photoshop as a variable density selection layer, allowing you to control your own blur - the "redder" the mask, the more the blur is applied.  
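
If you want to play with that yourself, the mechanics are straightforward.  Here's a hedged sketch (hypothetical file names, and assuming the depth map has been exported as a grayscale image where brighter means farther): blend the sharp photo with a heavily blurred copy, using the normalized depth as the per-pixel mixing weight.

# Depth-weighted blur: far pixels (bright in the depth map) get the blurred
# version, near pixels keep the sharp one.  File names are hypothetical.
import cv2
import numpy as np

img = cv2.imread("portrait.jpg").astype(np.float32)
depth = cv2.imread("depth_map.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
depth = cv2.resize(depth, (img.shape[1], img.shape[0])) / 255.0      # 0 = near, 1 = far

blurred = cv2.GaussianBlur(img, (31, 31), 0)
weight = depth[..., None]                          # broadcast across the color channels
result = img * (1 - weight) + blurred * weight

cv2.imwrite("fake_bokeh.jpg", result.clip(0, 255).astype(np.uint8))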

What I don't understand is why modern cameras with > 600 focusing points can't be programmed to generate a depth map automatically when a picture is taken.  Just evaluate the distance behind each AF point (the camera does this anyway when identifying the closest object to focus on) and build a depth table.  I would love to be able to use my RX10 IV at 600mm and then get higher-end-camera blur after the fact using this technique.  Something for Sony to consider for their next models.
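
The data is nearly there already; turning a sparse grid of AF distances into a usable (if coarse) depth map is essentially a one-liner.  A sketch with made-up numbers:

# Interpolate a made-up 3x4 table of AF-point distances (meters) up to
# image resolution.  The numbers are purely illustrative.
import cv2
import numpy as np

af_grid = np.array([[2.1, 2.0, 2.2, 8.5],
                    [1.9, 2.0, 2.1, 9.0],
                    [1.8, 1.9, 8.8, 9.2]], dtype=np.float32)

depth_map = cv2.resize(af_grid, (6000, 4000), interpolation=cv2.INTER_LINEAR)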

Lighting as an Afterthought

That depth map alluded to above also allows for after-the-fact CGI lighting for your portraits (something I proposed more than a decade ago), which Google is starting to offer via the Google Photos app on images it thinks are workable.  While the purists will insist that "there's no substitute for good light", the rest of the world will say, "Ooooh, another Instagram filter!"

Machine Learning

No overview of computational photography would be complete without discussing machine learning, where you feed a gazillion images to an algorithm and let it learn what common things are supposed to look like.  Why would that be helpful?

I'll just cut to the chase here.  A few months ago Google published some amazing examples of their Machine Learning projects, in this case to scale very-low-res images into high-res ones.  The results contain detail that wasn't there originally - detail that was extracted and transmorphed from the training images.  Have a look at some closeups from their recently published paper:

Here, the reference image in the right column was the starting point.  A low-res version of that image (left column) was fed into their SR3 algorithm, producing the center column which is amazingly close to the original reference (impressive until you start to look closely).


Here is a close-up example.  Notice that the eye looks convincing but the texture in the hair does not.


Another case where the detail (the hair) looks convincing, yet also completely different from the original.  The algorithm simply makes up convincing-looking detail.

Here's the acid test.  Can it reconstruct letters or candy wrappers?  No.  It wouldn't do well on license plates either.

Google also used a second mind-blowing technique, which builds the high resolution image from pure noise.  On what principle does THAT work?  Google explains all in their easy-to-understand online paper.
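
If you just want a feel for the shape of that "start from noise and refine" loop, here is a deliberately dumbed-down toy.  In the real paper the refinement at each step comes from a large trained neural network; below, a bicubic upsample of the low-res input stands in for it, purely so the loop runs end to end.  It illustrates the structure of the idea, not the actual algorithm, and the input file name is hypothetical.

# Toy illustration of "start from pure noise and iteratively refine toward a
# high-res image".  A bicubic upsample stands in for the trained denoiser
# used in the real SR3 work.
import cv2
import numpy as np

low_res = cv2.imread("tiny_face.jpg").astype(np.float32)
guess_target = cv2.resize(low_res, None, fx=8, fy=8, interpolation=cv2.INTER_CUBIC)

estimate = np.random.normal(128, 64, guess_target.shape).astype(np.float32)   # pure noise
steps = 50
for t in range(1, steps + 1):
    blend = t / steps
    # Pull the noisy estimate a little closer to the "model's" prediction...
    estimate = (1 - blend) * estimate + blend * guess_target
    # ...and re-inject a shrinking amount of noise, as diffusion samplers do.
    estimate += np.random.normal(0, (1 - blend) * 10, guess_target.shape)

cv2.imwrite("refined.jpg", estimate.clip(0, 255).astype(np.uint8))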


Right away two things are clear:

1) These techniques won't work everywhere; the best results are obtained when the test shot is similar to an image from the training set.

2) This kind of technology might be great for smartphone snapshooters, but absolutely wrong for things like surveillance cameras and video, since the detail presented wasn't necessarily there in the original scene.  (And of course you know that's EXACTLY how law enforcement is going to start using it until the lawsuits start.)

The Pixel 6 phone is going to be formally announced in a few weeks; how much do you want to bet that some version of this will be used to get great high-res images from a standard smartphone camera module?

===

So why am I telling you all this?  As alluded to earlier, my old smartphone died and I got myself a Samsung Galaxy S21.  And while there's no contest when it comes to pixel-peeping and comparing to conventional cameras, I continue to be impressed by what the camera does on its own, behaving the way we've wished point-and-shoots would work since the early days of point-and-shoots.  They produce more people-pleasing images out of the gate than your conventional camera.  Which leads directly to Beginning #1 above.

We live in amazing times.  The gap between smartphones and traditional cameras continues to shrink.  I even find myself leaving my RX100-series cameras behind whenever I go out now, something I never would have done five years ago.  Ever since I was able to license images taken with my older Galaxy S8, I stopped worrying about enlargability.  The market demand for high-res images has dropped considerably over the last 20 years, and companies like Adobe and Topaz are developing image-scaling tools that are decent (as long as you don't examine your images with an electron microscope).

(I'll still be using conventional cameras in the new studio, though!)


In the Pipeline

The ebook for the Sony A1 is being translated into Spanish.  Email me (Gary at Friedman Archives dot com) to be notified of its release!  

Also, version 1.04 of said A1 ebook has just been released.  You should have automatically received a free update, but if not just email me your purchase receipt and I'll provide you with a download link.


Next Time in Cameracraft

I sit down for a conversation with Andrea Pizzini, who was so frustrated by the misinformation regarding the COVID-19 pandemic that he spent the last six months documenting a COVID ward and how the virus has personally affected real people he grew up with in his native Italy.

Zoom Lecture for your Photo Club


I've been giving Zoom lectures to photo clubs around the world for over nine months now, and I've gotten (that's a word!) the setup down to a small footprint that can be used in small spaces (like a kitchen table in a tiny 2-bedroom condo in Boston :-) ).  In the image above I've highlighted five key items in my setup:
  1. A camera that's way better than a webcam.  Here it's the RX100 V, at eye level so attendees aren't looking up my nose.
  2. The screen so I can see all of the participants and tell if anyone has raised their hand or is sharing something.  (This is important to me, as I like my lectures to be interactive, and being able to see people is paramount to that.)
  3. A camera attached via HDMI so I can give a live demonstration of an exposure or wireless flash technique or camera operation principle.
  4. My "control panel" where I can switch between different virtual cameras (the RX100, the demo camera, my laptop screen, a video, or the powerpoint program running in the background.)
  5. An HDMI monitor so I can see what the participants see (usually a PowerPoint slide or the demo cam).
(This is a more portable version than my original setup from last year, which I blogged about here.)  

I've already lost track of how many photo clubs have hired me, but I can tell you this: Every club has enjoyed them immensely - some have even asked me back to give a 2nd or 3rd lecture on different topics.

I can do this for your photo club as well!  My most popular Zoom talks have been:
  • RAW vs. JPG – I tackle this very religious technical subject with clarity and challenge the experienced photographer to re-think everything they were told was true about .jpgs. (This is, by far, my most popular talk and the one that has changed the most minds about what is true and what is hype.)
  • How to “Wow!” with Wireless Flash – Here I demonstrate how easy it is to move your flash off your camera and add great drama with no need for technical knowledge. Think a new lens will improve your photography? Learning to use light will have a dramatically greater impact on your images.
  • The Forgotten Secrets of the Kodachrome Shooters – How pros in the 1960’s got “Wow!” shots without fancy cameras and without Photoshop. (These secrets apply to today’s digital cameras, too!)
I can also put together a talk to address your club’s most pressing questions. (Hey, I’m working for YOU!!)  Contact me at Gary at Friedman Archives dot com for more details.


That's it for now!  Until next time,
Yours Truly, Gary Friedman
(Creator of the densest blogs on the planet (tm))

ZZ Top, the latest in my year-long Quarantine Beard Self-Portrait series


34 comments:

  1. DSP and convolution mapping revolutionized signal processing in sound, music and audio engineering.
    Image Processing is entering a new realm, where situational intelligence and machine learning can improve the results of photos taken with non-traditional cameras.

  2. Thanks for the article.
    Not calling myself guilty.
    Back in my NZ trip In 2010, I bagged Sony A350, a few lenses, a tripod, and 2 or 3 Sony flashes and a laptop.
    Last year (before Covid), I carried only my iPhone and iPad for photographing… and a backup RX100 II which I didn’t actually get to use.
    I truly enjoy travelling light. Of course I did realise the limitations of iPhone vs DSLR, however I wanted to enjoy the trip better.

  3. An excellent write up on a complex topic combining many elements….WELL DONE! BillJ

  4. Love this Blog Gary. I've been a fan since your Sony A350 book, which I'm still using!

  5. The question you should be asking is why can’t I immediately post to Facebook and Instagram, etc. with my “high end” camera? Or get location data without standing on my head?

    Replies
    1. boy, i echo this sentiment. if the camera can do all that with the camera's image, why can't the camera accept images from a good camera and do the same? yeah, it might take 10x as long. i'm willing to wait that long. ¯\_(ツ)_/¯

    2. And why don't our cameras that cost thousands of dollars offer these options for when you need a quick snapshot?

  6. A very instructive article. Thank you so much!

  7. I enjoyed your post, thanks. I saw your comment on the Sony Smooth Reflections app and downloaded and installed it on my A7 M2 camera, can't wait to try it out, thanks.

    Replies
    1. That was the only truly useful app they offered. I wish they would have opened up the API so third-party developers could have offered more innovative apps. Oh, well...

  8. Very nice article. The same kind of things are happening on the Apple (iPhone) side. One nice thing about the Apple approach is that they have a "Apple Pro RAW" format (not RAW, not Pro, but anyway....) - it is a linear DNG. But Apple exposes a local tone map control and the (Mac and iOS) Raw Power app lets you adjust that - I generally back off a little because I think the phone goes a bit too far.

    I really wonder why Sony doesn't incorporate at least some of these capabilities in their "serious" cameras - the Sony Alpha Rumors site says the A7IV will have a built in multi-image high def mode, which would be interesting if it shows up. I know this is possible because I have an Olympus D-M5 III camera that has a lot of these kinds of features built in - and that camera was released in late 2019.

  9. Another terrific post. Thanks Gary. You really know a photographic revolution is in full swing when innovative geniuses such as Gary endorse the smarts of a smartphone. Love it.

  10. A phone is the modern version of the instamatic which was intended to be always on hand, easy to use and produce pleasing results.

    Considering how a lot of images are consumed now, which is social media, this is going to be perfectly good enough much of the time.

    But if you use images professionally or make big prints, you will use your DSLR (what do we call them these days?) because it provides options for capturing images not available on the phone. Super wide angles, tricky light, super tele, macro etc. You have enough resolution for effective post processing. In short, there is far more control.

    However, the phone is continually getting better as a painless image capture device. It's in the sweet spot for many situation much of the time. It's always at hand. It frequently makes sense. The way it renders images appeal to the eye.

    But its still not a camera.

  11. Gary, Kit from Australia here. What are you using as your "Control Panel" in your Zoom setup? I plan on using the micro 4/3 camera in the same way as you use your little Sony. Cheers, Kit

    Replies
    1. Hi, Kit! I'm using a program called "Macro Deck" running on an old android phone. That and every other program (and hardware) was outlined in ridiculous detail in my blog post at https://friedmanarchives.blogspot.com/2020/05/turning-your-camera-into-high-quality.html .

  12. There can never be too much detail for anyone who can read! Thanks Gary.

  13. After nearly 45 years professional photography I am legally retired , but could not 'sit idle'. So I'm in the process of starting a small portrait studio ... again. Since my daily expenses are covered by a pension I allowed my self a pure artistic / call it emotional approach building up my gear.
    This is what I did: I choose as camera a Fujifilm X-H1 for the simple reason that in the entire Fuji range it is has most 'natural' viewfinder - very comparable to my Nikon F3. (I tried all Fuji X cameras) Outfitted it with only prime focus lenses in the 2.0 range. To me that set gives me the closest feeling of my old film days is affordable , but above all it somehow triggers my creativity - is it the nostalgia , is it the perfect balance , the solid brick in my hand ? I don't care - it just fits me as a glove. Then added a secondhand Broncolor studio flash - just because I started with that system in 1976. Together that makes me happy like a little kid with a lolly. Maybe the Smartest Phone can outsmart me and all my gear ? I don't care - it does not make me happy , it does not give me the feeling of being an artist or a craftsman. To me that is what counts : my photography is a craft ... without the long gone smell of the darkroom ;-)

    Replies
    1. You and I are on the same page. I also am starting a studio practice from scratch, I also love the feel of an A99 II in my hands, and I also abhor the smell of darkroom chemicals. 😊

    2. So, having retired on July 30th this year, and being new to it all, what the hell is "legally retired"?

  14. Interesting article but computational photography will never be my thing. I enjoy doing it all myself with a dedicated camera. If the camera did everything for me I would give up the hobby. The truth is the process is just as important as the result.

  15. Hi, Gary—Thanks so much for the informative article, much appreciated. I would just add one more advantage to the smartphone camera: weight, a lot less. At 78 I found I was not enjoying packing a DSLR plus lenses plus tripod. When I tiptoed into using the phone I was delighted to discover how easy it is to carry, how its RAW files were editable, how it could produce decent prints up to 13x19” and how freeing it has been to get off the eye-level tripod and discover different viewpoints. So like you I wonder if big cameras might not benefit from some computational enhancements. Best, Mike

  16. Gary, a fascinating topic and superbly written article (as usual!) Like it or not, the traditional camera makers have tended to evolve slowly as they introduce new generations of camera technology, while the smartphone makers are "disrupters," introducing "wow" stuff with each new phone they bring to market. I hope the camera manufacturers can bring better connectivity (for quick uploads, social media posts, and file backup), and offer computational options that can be toggled on and off via menus for those times when photographers want or don't want them. Randy

  17. Gary, your blog is informative, detailed, and the examples very helpful. You also got me to thinking… as I age, physical comfort plays as large a part of camera choice as does image quality. My aching hands precipitated a change from the Canon 5DMII to the smaller, lighter Sony A6500. Thinking I might prefer a full sensor, I also acquired a Sony AR 7III. The added weight of more equipment in my camera backpack slowed me down and hurt my back. So I decided to take only a couple of lenses per outing, but inevitably wished I had a lens left behind.
    Then I discovered the Sony RX10MIV, which offers a 24mm-600mm focal range equivalent in a single, fixed lens. One camera, one lens, great image quality… throw in the small Sony RX100M5A for good measure, and I’m all set, so I think.
    On a recent 7-day RV trip, I accidentally left my cell phone behind. I discovered that I use it more than I realized to upload quick updates to Facebook and Instagram. I had to download apps to my partner’s old iPhone 6 to be able to transfer, process, and upload images to my accounts. (I had purposely left my laptop at home on this shorter trip.)
    While I read your revelations about the advanced cell phone image gathering and evaluation capabilities, I thought of a question once posed by our college professor to the class: What is the best camera to use? Answer: The one you have with you. :)
    Since I don’t always carry my “real cameras,” I think I may be updating my iPhone 11 sooner than I’d planned. —Janet

  18. Combine your smartphone with an upscale gimbal and you have a nice setup. Better gimbals have some great software that adds capabilities.

  19. I appreciate many of the computational photography functions of the latest smartphones and wish my Sony cameras had at least a few of them.

    But one big disadvantage of smartphones remains: the lack of a viewfinder. In bright sunshine I can't even see on the phone what I'm shooting. And even if it's not as bright around me, I find I can compose much better - more accurate - by looking into a viewfinder.

    Replies
    1. That's how I felt about the original NEX 3 and 5. It's also how all DSLR shooters feel when they try to video their kid's soccer game in bright daylight. :-) Larger cameras will always have advantages; the point I was making is that the gap is closing and smartphones use some innovative techniques to get there.

  20. Mind blowing article dear Gary!
    But I'm perplex after reading it and very thoughtful hahaha
    Actually I was always interested and keen to phone photography until I discovered the RX100 and bought your books RX100, RX100 III, RX100 V (that was stolen unfortunately) and then passed to A7RIII and what you said in the beginning is completely me (my friends with their iphone are frustrated I don't give the shots except weeks later, time to download, sort, develop,... while people want things immediately nowadays)

    Replies
    1. Yes, I agree with you. See also my blog post on "The Value of Immediacy" at https://friedmanarchives.blogspot.com/2018/01/the-value-of-immediacy.html

  21. As a long time member of Overshooters Anonymous, I have started shooting all my "snapshots" with my S21 Ultra instead of my Sony A7Riv or A1. I have found that it makes me more intentional with my captures with the big cameras and saves on disk space too.

    Replies
    1. Funny, people usually say the big cameras make them think more about their compositions and controls! Glad you found a tool that resonates with you.

  22. Thank you for bringin up the technology and some implications of this photo-elephant in the room. Would be interested to see another episode maybe going into how (traditional) professionals can better convey and sell the remaining advantages to traditional gear. (While they last...?)

    Replies
    1. I'm way ahead of you; I talk about that very subject in the latest issue of Cameracraft magazine! You should subscribe. :-) https://cameracraft.online/welcome-to-cameracraft-magazine/following-gary-friedman-support-his-work-by-subscribing-here/

    2. Oh, my bad for not catching. I am subscriber!

  23. Well, Gary, you pushed me into an S21 Ultra. Such a pain to get all the apps moved and logged in, but the photo quality is definitely superior to my Galaxy Note 8.

