
The Pixel 3’s dual cameras are a tacit admission that AI can’t do everything — yet

Google’s latest flagship smartphones — the Pixel 3 and Pixel 3 XL — are finally shipping to customers, and the reviews are unanimous: The rear camera and dual selfie cams are best in class.
But as good as those cameras might be, they’re a bit puzzling — and sort of paradoxical. Allow me to explain.
The original Pixel and Pixel XL have two cameras: one front and one rear. The Pixel 2 and Pixel 2 XL have two cameras: one front and one rear. And the Pixel 3 and Pixel 3 XL have three cameras: two front and one rear.
Until now, Google has put most of its chips on AI.
“The notion of a software-defined camera or computational photography camera is a very promising direction and I think we’re just beginning to scratch the surface,” Google AI researcher Marc Levoy told The Verge in October 2016, shortly after the Pixel and Pixel XL’s debut. “I think the excitement is actually just starting in this area, as we move away from single-shot hardware-dominated photography to this new area of software-defined computational photography.”
In the weeks preceding the launch of the Pixel 2 and Pixel 2 XL, Mario Queiroz — Google’s GM and VP of phones — insisted that the phones’ single front and rear cameras were just as capable as dual-sensor setups from the likes of Apple and Huawei. The Mountain View company even considered bragging about needing only one camera in its marketing materials, according to 9to5Google.
I’ve been trying to reconcile the inconsistency since the Pixel 3 was announced last week. Is the addition of a second front camera a tacit admission that AI can’t do everything? That a physical camera is superior? That a hybrid approach might be best?
Perhaps it’s a bit of all of those things.

What AI can do on the Pixel

As we’ve established, the Pixel 3 and Pixel 3 XL have two selfie cams — both 8-megapixel sensors — and on the rear of the phones, a single 12.2-megapixel camera. That rear camera is the biggest beneficiary of Google’s truly impressive AI. The Pixel 3’s Top Shot feature captures a burst of frames before and after you tap the shutter button and uses on-device AI to pick the best picture. Photobooth taps machine learning to automatically snap the “best” pics — i.e., minimally blurry and well-lit ones. And when Night Sight makes its debut in the coming weeks, it will leverage machine learning to boost the brightness of ultra-dark images.
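Google hasn’t published how Top Shot’s on-device model ranks frames, but the skeleton of any best-shot picker is the same: score every frame in the burst and keep the winner. Here’s a minimal Python sketch under that assumption, with a simple variance-of-Laplacian sharpness heuristic standing in for the neural scorer:

```python
# Illustrative sketch only: Top Shot's actual on-device model is
# unpublished; a sharpness heuristic stands in for it here.
import cv2
import numpy as np

def sharpness(frame: np.ndarray) -> float:
    """Variance of the Laplacian: higher means crisper edges."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def pick_best_frame(burst: list[np.ndarray]) -> np.ndarray:
    """Return the burst frame with the highest sharpness score."""
    return max(burst, key=sharpness)

# Usage: collect frames around the shutter press, keep the best one.
# burst = [cv2.imread(p) for p in ["f0.jpg", "f1.jpg", "f2.jpg"]]
# best = pick_best_frame(burst)
```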

Above: The Pixel 3 XL’s dual front-facing cameras. (Image Credit: Kyle Wiggers / VentureBeat)

That’s why, despite having a single rear camera compared to the LG V40’s and Huawei P20 Pro’s three, the Pixel 3 and Pixel 3 XL take some of the best photos we’ve seen from a smartphone. Last year’s Pixel dominated the DxOMark charts, and we expect this year’s models to do the same — all thanks to AI.
For another example of AI’s photo-manipulating prowess, look no further than Google’s own portrait mode, which separates facial features, clothing, and skin from the background in portrait pics using a machine learning model that detects pixel-level differences in color and edges. (It works on both the front and rear cameras.) Yet another case study is HDR+, Google’s in-house approach to high dynamic range. It captures up to 10 images in rapid succession and merges them to minimize noise, cleverly leaving each frame underexposed to help keep colors saturated in low light.
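Google’s production HDR+ pipeline involves robust tile-based alignment and merging, but the core intuition is easy to demonstrate: averaging N aligned, underexposed frames cuts random sensor noise by roughly the square root of N, after which the shadows can be lifted in software. A toy Python sketch, assuming the frames are already aligned (which the real system works hard to guarantee):

```python
# Minimal sketch of the HDR+ intuition: averaging N aligned,
# deliberately underexposed frames reduces random noise ~ sqrt(N).
# The real pipeline's tile-based alignment and robust merge are far
# more involved; frames here are assumed to be pre-aligned.
import numpy as np

def merge_burst(frames: list[np.ndarray]) -> np.ndarray:
    """Average pre-aligned frames, then apply a simple gamma lift to
    brighten the underexposed result (a crude stand-in for HDR+'s
    actual tone mapping)."""
    stack = np.stack([f.astype(np.float32) / 255.0 for f in frames])
    merged = stack.mean(axis=0)          # noise drops ~ sqrt(N)
    brightened = merged ** (1 / 2.2)     # gamma lift for the shadows
    return (np.clip(brightened, 0, 1) * 255).astype(np.uint8)
```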

A return to form

I’m not suggesting that single-camera setups imbued with AI are the be-all and end-all of smartphone photography. They have substantial barriers to overcome.
AI can’t adjust focal length — the distance between the lens and the image sensor when a subject is in focus. Super Res Zoom, a Pixel feature that captures multiple frames from slightly different positions (achieved with the tiny movements of shaky hands and the OIS system) and combines them into a single higher-resolution picture, helps to an extent, but it generally produces inferior results compared to optical alternatives.
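The principle behind this kind of multi-frame zoom is shift-and-add super-resolution. The toy sketch below assumes the sub-pixel offsets between frames are already known; in practice, estimating them robustly (as Google’s pipeline does on RAW bursts) is most of the work:

```python
# Toy shift-and-add super-resolution: each frame is offset by a
# small sub-pixel shift (hand tremor or OIS wobble in the real
# feature); re-registering the frames on a 2x grid and averaging
# recovers detail no single frame holds. Google's production
# approach (kernel regression on RAW bursts) is far more involved.
import numpy as np
from scipy.ndimage import shift as subpixel_shift, zoom

def super_resolve(frames: list[np.ndarray],
                  shifts: list[tuple[float, float]]) -> np.ndarray:
    """frames: grayscale low-res frames; shifts: (dy, dx) sub-pixel
    offsets of each frame relative to the first, in low-res pixels."""
    upscaled = []
    for frame, (dy, dx) in zip(frames, shifts):
        hi = zoom(frame.astype(np.float32), 2, order=3)  # 2x grid
        # Undo the capture-time shift (now 2x larger in hi-res px).
        hi = subpixel_shift(hi, (-2 * dy, -2 * dx), order=3)
        upscaled.append(hi)
    return np.mean(upscaled, axis=0)
```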
But it’s not like AI can’t lend a hand here. In 2017, Adobe demoed a system that detects facial features and tweaks them — de-emphasizing the nose and cheeks, for instance, and flattening the face — to mimic the look and feel of photos taken with a telephoto lens. It even takes into account the properties of different camera lenses, and the results speak for themselves: They bear none of the distortion characteristic of front-facing camera pictures.

Above: The Pixel 3 XL’s rear camera. (Image Credit: Kyle Wiggers / VentureBeat)

With the Pixel 3, Google says its strictly optical solution on the front — the combination of a standard and a wide-angle camera — can capture wide-angle shots showing 184 percent more of a scene than the iPhone XS. I predict, though, that the next Pixel phones will ditch the dual-camera selfie setup for a single wide-angle, AI-powered sensor. By then, computational photography might well have advanced to the point where software consistently corrects for the dreaded fisheye effect.
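Software distortion correction, at least, is already routine. Below is a sketch using OpenCV’s implementation of the standard Brown–Conrady lens model; the camera matrix and distortion coefficients are made-up placeholders, since real values come from calibrating the specific lens:

```python
# Sketch of software lens-distortion correction with the standard
# Brown-Conrady radial model, via OpenCV. The camera matrix and
# distortion coefficients are hypothetical placeholders; real values
# come from calibrating the actual lens.
import cv2
import numpy as np

def undistort_wide_angle(img: np.ndarray) -> np.ndarray:
    h, w = img.shape[:2]
    f = 0.8 * w  # hypothetical focal length, in pixels
    camera_matrix = np.array([[f, 0, w / 2],
                              [0, f, h / 2],
                              [0, 0, 1]], dtype=np.float64)
    # k1, k2, p1, p2, k3: a negative k1 counteracts barrel distortion
    dist_coeffs = np.array([-0.30, 0.08, 0.0, 0.0, 0.0])
    return cv2.undistort(img, camera_matrix, dist_coeffs)
```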
It’s an idea worth pursuing if for no other reason than the high price of today’s camera sensors. The Samsung Galaxy S9+’s 12-megapixel dual-lens module, for instance, costs an estimated $44.95 per unit, of which $34.95 is for the primary camera. By that math, the secondary camera adds roughly $10 to each phone’s bill of materials.
And if anyone can solve a problem with AI, it’s Google.
Source: VentureBeat