
The best smartphones for the AI enthusiast

Black Friday — the U.S.’s single-biggest shopping day — is nearly upon us, and that’s great news if you’re in the market for a smartphone. Retailers like Target and Best Buy are offering hundreds of dollars off flagship Samsung Galaxy handsets. There’s a buy-one-get-one-free deal on the LG G7 ThinQ at T-Mobile. And even Apple’s 2018 iPhone lineup, unveiled just two months ago, is seeing price reductions.
It’s almost too much of a good thing — particularly if you haven’t committed to a brand or model yet. Conventional wisdom would have you compare screen resolutions, processors, accessories, and other hardware to find the phone that most appeals to you. But this being VentureBeat’s AI Channel, we’re proposing a different metric: AI features. In our smartphone buying guide, we’ve pitted the year’s top phones against each other in a battle of AI wits and crowned champions in three categories: the smartphone with the best AI chip, the best AI camera features, and the best alternative AI assistant.

Smartphone with the best AI chip

iPhone XS, iPhone XS Max, and iPhone XR ($999, $1,099, $749)

Apple’s bleeding-edge smartphone trio — the iPhone XS, XS Max, and XR — has something in common: the A12 Bionic, the 7-nanometer custom-architected system-on-chip that Apple characterized as its “most powerful ever” designed for smartphones. It boasts six cores — two performance cores and four efficiency cores — plus a four-core GPU. But arguably the real highlight is the neural engine: a dedicated eight-core machine learning processor.
It’s Apple’s second-generation neural engine — the first shipped with the iPhone X’s A11 Bionic chip — and it’s a step up from its predecessor in every way.
The neural engine can perform 5 trillion operations per second, compared to the last-gen silicon’s 600 billion, and apps built with Core ML 2 — the newest release of Apple’s machine learning framework — tap it to run up to nine times faster while using one-tenth of the power, and to launch up to 30 percent faster. Moreover, Apple’s AI chip takes advantage of a “smart compute” system that automatically determines whether to run algorithms on the A12’s primary processor cores, the GPU cores, the neural engine, or a combination of all three.
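The scheduling itself happens inside Core ML on the device, but developers can express a hardware preference when packaging a model. Here’s a minimal sketch using Apple’s coremltools Python package; note that the compute-unit option shown arrived in coremltools releases newer than the 2018-era tooling discussed here, and “DepthEstimator.mlmodel” is a hypothetical file name.

```python
import coremltools as ct

# A minimal sketch: attach a compute-unit preference when loading a Core ML
# model. ComputeUnit.ALL lets the runtime pick between the CPU cores, the
# GPU, and the Neural Engine; CPU_ONLY would pin it to the CPU.
# "DepthEstimator.mlmodel" is a hypothetical model file.
model = ct.models.MLModel(
    "DepthEstimator.mlmodel",
    compute_units=ct.ComputeUnit.ALL,
)

# The final dispatch decision is still made by Core ML on the device;
# this API only states a preference, it does not force a processor.
```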
That’s all well and good, but what really sets the neural engine apart from the competition is the wealth of features optimized for it. Face ID, Animoji and Memoji, Portrait Lighting, and Apple’s ARKit 2.0 augmented reality framework are among them, in addition to the iPhone’s improved Portrait mode. When you hit the shutter button, neural engine-accelerated algorithms attempt to figure out what kind of scene is being photographed and to distinguish a person from the background. That AI-informed understanding of depth enables postproduction editing of blur and sharpness.
On the third-party side of the equation, developers can run code on the neural engine. San Jose software company Nex Team’s basketball app, HomeCourt, taps it to track and log shots, misses, and a player’s location on court in real time.
App development on the neural engine benefits from Core ML 2. Core ML 2, for the uninitiated, lets developers run machine learning models on-device on an iPhone or iPad, and convert models from frameworks like XGBoost, Keras, LibSVM, scikit-learn, and Facebook’s Caffe and Caffe2. Core ML 2 is 30 percent faster than the original Core ML, Apple says, thanks to a technique called batch prediction, and it enables developers to shrink trained machine learning models by up to 75 percent through quantization.
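To make that workflow concrete, here’s a rough sketch of converting a tiny, hypothetical Keras classifier to Core ML and quantizing its weights with the coremltools Python package. The unified ct.convert() entry point shown belongs to newer coremltools releases than the Core ML 2-era converters named above, so treat this as an illustration of the idea rather than the exact 2018 tooling.

```python
import coremltools as ct
from coremltools.models.neural_network import quantization_utils
from tensorflow import keras

# Hypothetical toy image classifier standing in for a real trained model.
inputs = keras.Input(shape=(224, 224, 3))
x = keras.layers.Conv2D(16, 3, activation="relu")(inputs)
x = keras.layers.GlobalAveragePooling2D()(x)
outputs = keras.layers.Dense(10, activation="softmax")(x)
keras_model = keras.Model(inputs, outputs)

# Translate the Keras graph into a Core ML neural network model.
mlmodel = ct.convert(keras_model, convert_to="neuralnetwork")

# Quantize 32-bit float weights down to 8 bits to shrink the on-disk model —
# the same general idea behind the "up to 75 percent" size reduction above.
quantized = quantization_utils.quantize_weights(mlmodel, nbits=8)
quantized.save("Classifier_8bit.mlmodel")
```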

Runner-up: Huawei Mate 20 Pro (~$1,150)

Above: Huawei Mate 20 Pro: Leica lenses (Image Credit: Paul Sawers / VentureBeat)

Apple’s latest-gen neural engine is an impressive chip, to be sure. And so is the Neural Processing Unit (NPU) in Huawei’s Kirin 980, the 7-nanometer system-on-chip inside the Mate 20 Pro.
It’s the second NPU iteration — the first debuted in September 2017, in the Kirin 970 — and it’s specially optimized for the sort of vector math at the heart of machine learning models. The chip’s two NPUs (up from one in the Kirin 970) can recognize up to 4,500 images per minute, compared to the Qualcomm Snapdragon 845’s 2,371 images and the A11 Bionic’s 1,458. And it boasts superior object recognition, real-time image processing, and real-time object segmentation, achieving up to 135 percent better performance in benchmarks like ResNet and Inception v3 while consuming 88 percent less power than the Snapdragon 845.
On the Mate 20 Pro, system-level AI running on the NPU intelligently ramps up the GPU’s clock speed during intense gaming sessions, minimizes system lag, and delivers “smoother outdoor gaming experiences” in areas with weak signals. Additionally, Huawei says its heterogeneous computing structure — HiAI — can automatically distribute voice recognition, natural language processing, and computer vision workloads across the chip.
The NPU also powers camera features like AI Color, a Sin City-inspired effect that keeps a subject in color while everything else in the scene is black and white, and a 3D object-scanning tool — Live Object — that recreates real-world objects in digital environments. The Mate 20 Pro’s Animoji-like Live Emoji and 3D Face Unlock tap into the NPU for facial tracking, while its Master AI 2.0 camera mode leverages it to recognize scenes and objects automatically and adjust settings like macro and lens angle. Additionally, AI Zoom uses NPU-accelerated object tracking to automatically zoom in and out of subjects; video bokeh highlights the foreground subject while blurring the background; and Highlights generates edited video spotlights around recognized faces.
Third-party developers can tap the NPU through Huawei’s HiAI library and API, and some already have. Microsoft’s Translator app, for example, uses it for tasks like scanning and translating words in pictures, and image editing app Prisma leverages it to perform on-device style transfer in seconds.

Smartphone with the best AI camera features

Google Pixel 3 and 3 XL ($799, $899)

As we wrote in our review of Google’s Pixel 3, many of the phone’s best features — whether it’s predictive battery-saving or the ability to screen calls — are made better with AI. And that’s particularly evident on the photography front.
The Pixel 3 has two selfie cams — both 8-megapixel sensors — and, on the rear of the phone, a single 12.2-megapixel camera. That rear camera is the biggest beneficiary of Google’s truly impressive AI. The Pixel 3’s Top Shot feature captures burst frames before and after you tap the shutter button and uses on-device AI to pick the best picture, while Photobooth taps machine learning to take the “best” photos — i.e., minimally blurry and well-lit — automatically.
That’s why, despite having a single rear camera compared to the LG V40 and Huawei P20 Pro, which each have three, the Pixel 3 takes some of the best photos we’ve seen from a smartphone.
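Google hasn’t published how Top Shot or Photobooth score frames — both rely on trained models — but the underlying idea of ranking a burst and keeping the sharpest, best-exposed frame can be sketched with a simple hand-rolled heuristic. Everything below is a toy illustration of the concept, not Google’s method.

```python
import numpy as np

def pick_best_frame(burst):
    """Toy stand-in for burst-selection logic like Top Shot / Photobooth.

    `burst` is a list of grayscale frames as 2-D float arrays in [0, 1].
    Each frame is scored by sharpness (gradient energy) and exposure
    (how close mean brightness is to mid-gray); the index of the
    highest-scoring frame is returned.
    """
    scores = []
    for frame in burst:
        gy, gx = np.gradient(frame)
        sharpness = np.mean(gx**2 + gy**2)        # blurrier frames score lower
        exposure = 1.0 - abs(frame.mean() - 0.5)  # prefer mid-gray exposure
        scores.append(sharpness * exposure)
    return int(np.argmax(scores))

# Usage with a synthetic burst of random frames:
burst = [np.random.rand(480, 640) for _ in range(5)]
print(pick_best_frame(burst))
```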
For another example of AI’s photo-manipulating prowess, look no further than Google’s own portrait mode, which picks out facial features, clothing, and skin in portrait shots with a machine learning model that detects pixel-level differences in color and edges. (It works on both the front and rear cameras.) Another example is HDR+, Google’s in-house approach to high dynamic range: it captures between 9 and 15 images in rapid succession and merges them to minimize noise, cleverly keeping them all underexposed to help keep colors saturated in low light. And yet another is Super Res Zoom, which merges frames to form a higher-resolution image that’s “roughly competitive with the 2x optical zoom lenses on many other smartphones,” according to Google.
And then there’s the Pixel 3’s truly impressive Night Sight mode. Using algorithmically driven alignment and merging techniques — including a modified version of the HDR+ stack that detects and rejects misaligned frames, and a custom auto white balance (AWB) model trained to discriminate between a well-balanced image and a poorly balanced one — it’s able to reduce noise in photos taken in environments with just 0.3 to 3 lux (lux being the amount of light arriving at a surface per unit area, measured in lumens per square meter). For a point of reference, 3 lux is roughly a sidewalk lit by street lamps.
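Neither HDR+ nor Night Sight is open source, but the core reason burst merging works — averaging several noisy frames of a static scene suppresses random noise by roughly the square root of the frame count — can be demonstrated with a toy simulation. The sketch below is a deliberately simplified stand-in; Google’s pipeline additionally does tile-based alignment, robust rejection of moving content, and learned white balance.

```python
import numpy as np

def merge_burst(frames):
    """Toy burst merge: average N noisy frames of a static scene.

    This only shows why merging helps at all — averaging N frames cuts
    random noise by roughly a factor of sqrt(N). It is not HDR+ or
    Night Sight, which align tiles and reject misaligned frames first.
    """
    stack = np.stack(frames, axis=0).astype(np.float64)
    return stack.mean(axis=0)

# Simulate a 9-frame burst of a flat gray scene with additive sensor noise.
rng = np.random.default_rng(0)
clean = np.full((480, 640), 0.4)
burst = [clean + rng.normal(0.0, 0.05, clean.shape) for _ in range(9)]

merged = merge_burst(burst)
print(f"single-frame noise: {np.std(burst[0] - clean):.4f}")
print(f"merged noise:       {np.std(merged - clean):.4f}")  # roughly 1/3 of the above
```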

Smartphone with the best alternative AI assistant

Galaxy Note 9 ($999)


Bixby, Samsung’s homegrown voice assistant, gets a lot of well-deserved flak for lagging behind the Google Assistant and Amazon’s Alexa in some scenarios. But for all of its shortcomings, it has come a long way since its debut alongside the Samsung Galaxy S8 in 2017.
The improved Bixby in the Galaxy Note 9 — Bixby 2.0 — has better natural language processing, faster response times, and built-in noise reduction tech, and it retains the ability to recognize chained instructions like “Open the gallery app in split-screen view and rotate misaligned photos” and “Play videos on a nearby TV.” It’s also more conversational — when you ask it about upcoming concerts over Thanksgiving weekend, for example, it will remember that date range when looking for tickets in the future. Finally, thanks to forthcoming support for new languages, availability in more than 200 markets, and the recently launched Bixby Developer Center, it promises wider app support than ever before.
There’s more to Bixby than voice recognition, of course. Bixby Home, a “social stream for your device,” is a dashboard of reminders and social media updates collated in cards that can be dismissed, pinned, or permanently hidden.
Also on tap is Bixby Vision, a Google Lens-like object recognition app that leverages integrations with Vivino, Amazon, Adobe, Nordstrom, Sephora, Cover Girl, and others (plus Samsung’s data-sharing partnerships with Foursquare and Pinterest). It can scan barcodes, turn receipts into readable files, show relevant product listings, recommend wines, display the calorie counts of food, and let you virtually “try on” makeup products.

Conclusion

So there you have it: four flagship smartphones that make innovative use of AI across three distinct categories.
Apple’s 2018 iPhone lineup is far and away the winner on the chipset front — its upgraded neural engine, combined with powerful software tools and a thriving developer ecosystem, cements its lead in hardware. That said, Huawei is beginning to nip at its heels, particularly when you take into account the Mate 20 Pro’s first-party camera features that tap the Kirin 980’s improved AI silicon, like AI Color and Master AI 2.0.
As impressive as the Mate 20 Pro’s camera is, though, it falls just short of Google’s Pixel 3 in the computational photography category. The Pixel 3 has one of the best smartphone cameras we’ve ever tested, thanks in large part to AI.
Last but not least, there’s the Samsung Galaxy Note 9, a showcase for the latest version of Samsung’s Bixby assistant. Bixby might not be the most reliable platform on the block, but it’s grown considerably more robust in recent months. And to our knowledge, Bixby Voice is one of the only (if not the only) voice assistants that can recognize chained commands and interact with app menus and submenus, making it great for hands-free usage.
The iPhone XS, iPhone XS Max, and iPhone XR, the Galaxy Note 9, and the Google Pixel 3 are available for purchase at carrier stores, Amazon, Best Buy, and other major electronics retailers. The Mate 20 Pro, it’s worth noting, hasn’t been officially released in the U.S. — you’ll have to go through a third-party retailer to get your hands on one.
Source: VentureBeat