Over the last two years, dual-camera technologies have seen widespread adoption across practically all smartphone segments and manufacturers, in various configurations and for various end goals, on both the front and the back.
According to market reports, dual-camera technologies will be used in 30% of smartphones in 2018, rising to 50% the following year. Even though it took more than ten years for the smartphone market to adopt a second camera, the inclusion of a third camera appears to be just around the corner, coming within two years of the dual camera’s adoption. This article goes through some of the reasons for adding a third camera to a smartphone’s imaging system, the issues it poses, and potential solutions.
From Dual to Triple Cameras
The thickness of smartphone cameras has always hampered mobile photography. The aperture is narrow, the pixel size keeps shrinking as technology develops, and auto-focus and image stabilization still need to fit inside the module. Until recently, smartphone manufacturers struggled to achieve excellent low-light performance, high resolution, and low noise at a 6mm camera height, especially when zooming in. Dual-camera technology came to the rescue, presenting a new set of opportunities and challenges to camera module producers and smartphone OEMs.
The HTC One (M8) was the first smartphone to use two rear cameras solely to provide depth and refocus effects on the final image. Until mid-2016, various OEMs tested dual-camera technologies in some of their flagship smartphones, employing different dual-camera arrangements, including depth-only, RGB-Mono, and Wide-and-Wide duos. No “killer camera app” or winning dual-camera combination was identified.
The iPhone 7 Plus, with a dual camera on the back, was released in September 2016. Apple promoted a particular dual-camera composition, Wide + Tele, as the premium camera system, and marketed the two photography capabilities customers wanted most: optical zoom and digital bokeh (or “portrait mode”). Since then, the dual-camera smartphone domain has evolved, with premium and flagship handsets employing similar duos, while mid- to low-end models offer depth-only capabilities.
Even though dual-camera smartphones have become a commodity in the high-end market segment, new dual-camera topologies are on the way that significantly improve on today’s dual-camera performance. One example of the next generation of dual-camera innovation that is just around the corner is the folded camera architecture, which can substantially enhance zoom factor and low-light performance while allowing an even lower camera module height for a slimmer handset.
In 2017, OPPO made public an early version of this technology, a 5x zoom smartphone. The adoption of a triple-camera setup could be the next appealing trend in smartphone camera evolution. It is easier said than done: a third camera poses substantial obstacles (and rewards) for smartphone manufacturers while opening up a vast range of choices and setups.
Triple Camera Systems Pose Major Challenges
The First Challenge: “Real Estate” and Cost
A third camera adds to the camera system’s bill of materials (BoM) and takes up extra space at the expense of other technologies incorporated into the mobile device (e.g., IR sensing, proximity sensor, structured light, a larger battery).
This penalty is almost unavoidable, but OEMs will have to weigh it against total value for money, which will be determined in part by the priorities of their target audience. The additional cost of the third camera depends directly on the chosen camera setup, as detailed later in this article, and can range from $10 to $30.
The Second Challenge: Calibration
To produce a seamless user experience in video/preview and to avoid artifacts or long processing times during image fusion or bokeh, the triple-aperture imaging system must be carefully calibrated for both intrinsic and extrinsic parameters. This calibration must be done thoroughly as part of the camera production line, and perhaps even continuously and autonomously in the field, to correct for dynamic physical changes such as temperature differences and the impact of device drops.
In this more complicated camera system, calibration and frame-to-frame synchronization pose new challenges for camera module makers and OEMs. If each of the three cameras must be correctly calibrated, the camera assembly procedure must be meticulously planned, and the expected yield will drop. As a result, the total camera cost is directly affected.
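As a rough illustration of what pairwise calibration of such a rig involves, here is a minimal Python sketch using OpenCV and a checkerboard target. The board geometry, frame lists, and helper names are illustrative assumptions, not any vendor’s production-line recipe.

```python
# Minimal sketch: intrinsic + pairwise extrinsic calibration of two cameras in a
# triple rig, using OpenCV and a checkerboard target. Board geometry and helper
# names are illustrative assumptions.
import cv2
import numpy as np

BOARD = (9, 6)       # inner checkerboard corners (columns, rows) - assumed target
SQUARE_MM = 25.0     # physical square size - assumed target

def calibrate_pair(frames_a, frames_b):
    """frames_a[i] and frames_b[i] are assumed to be captured simultaneously."""
    objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_MM
    obj_pts, pts_a, pts_b, size = [], [], [], None
    for img_a, img_b in zip(frames_a, frames_b):
        gray_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2GRAY)
        gray_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2GRAY)
        size = gray_a.shape[::-1]
        ok_a, corners_a = cv2.findChessboardCorners(gray_a, BOARD)
        ok_b, corners_b = cv2.findChessboardCorners(gray_b, BOARD)
        if ok_a and ok_b:                    # keep only views seen by both cameras
            obj_pts.append(objp)
            pts_a.append(corners_a)
            pts_b.append(corners_b)
    # Intrinsics (focal length, principal point, distortion) per camera.
    _, K_a, dist_a, _, _ = cv2.calibrateCamera(obj_pts, pts_a, size, None, None)
    _, K_b, dist_b, _, _ = cv2.calibrateCamera(obj_pts, pts_b, size, None, None)
    # Extrinsics: rotation R and translation T of camera B relative to camera A.
    rms, _, _, _, _, R, T, _, _ = cv2.stereoCalibrate(
        obj_pts, pts_a, pts_b, K_a, dist_a, K_b, dist_b, size,
        flags=cv2.CALIB_FIX_INTRINSIC)
    return {"K_a": K_a, "dist_a": dist_a, "K_b": K_b, "dist_b": dist_b,
            "R": R, "T": T, "rms_px": rms}
```

For a triple camera, the same routine would run over two pairs (e.g., Cameras I–II and II–III). The reprojection error gives a pass/fail criterion on the production line, and a device drop or thermal drift would trigger the continuous, autonomous recalibration mentioned above.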
The Third Challenge: Firmware, Algorithms, and Power
A triple camera system also necessitates greater firmware complexity. The framework has to treat three cameras that must work together as a single unit. Processes within the camera manager, such as power management, frame requests, memory management, and other state machines, have to handle more logic and more data, and allow for more parallel processing in the pipeline, while serving the application level efficiently enough to meet real-time performance. The algorithms face similar difficulties.
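To make the frame-synchronization point concrete, here is a minimal sketch of how a camera manager might group frames from three streams by capture timestamp. The Frame fields, the camera roles, and the 5 ms tolerance are illustrative assumptions; real camera HALs do this in firmware against hardware timestamps.

```python
# Minimal sketch: grouping frames from three camera streams by capture timestamp.
# The Frame fields and the 5 ms tolerance are illustrative assumptions.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Frame:
    camera_id: int       # 0 = wide mono, 1 = wide color, 2 = tele (example roles)
    timestamp_ns: int    # start-of-exposure timestamp reported by the sensor
    data: bytes          # raw or YUV payload

SYNC_TOLERANCE_NS = 5_000_000  # 5 ms: frames further apart are not fused together

def match_triplet(wide_mono: List[Frame], wide_color: List[Frame],
                  tele: List[Frame]) -> Optional[Tuple[Frame, Frame, Frame]]:
    """Pick, for the newest wide-color frame, the closest mono and tele frames.

    Returns None if either neighbor is outside the sync tolerance, in which case
    the pipeline would fall back to single-camera processing for that frame.
    """
    if not (wide_mono and wide_color and tele):
        return None
    ref = wide_color[-1]                 # reference stream for preview/video

    def nearest(frames: List[Frame]) -> Frame:
        return min(frames, key=lambda f: abs(f.timestamp_ns - ref.timestamp_ns))

    mono, tl = nearest(wide_mono), nearest(tele)
    if max(abs(mono.timestamp_ns - ref.timestamp_ns),
           abs(tl.timestamp_ns - ref.timestamp_ns)) > SYNC_TOLERANCE_NS:
        return None
    return mono, ref, tl
```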
The algorithmic challenges include keeping processing run time acceptable and avoiding image quality artifacts caused by the multiple camera inputs, all while coping with frame-to-frame synchronization inconsistencies, occlusions, and errors in the triple-camera calibration data. Because of these difficulties, the system for such a configuration (cameras plus processing platform) could look significantly different from today’s. Below, we recommend a few tri-aperture camera setups. While many combinations are possible, these are specific examples of a family of trios, each with its own advantages and disadvantages.
The first setup combines a wide monochrome camera (Camera I), a wide color camera (Camera II), and a tele camera (Camera III). It allows the user to snap images in a dimly lit environment while maintaining adequate zoom capabilities. Taking shots of a concert stage, for example, is a demanding scene that requires both zoom and low-light capability. The enhanced zoom capability comes from the following:
By omitting the Bayer filter array that is generally placed over the sensor pixels in a color camera, the monochromatic camera (Camera I) can provide higher effective resolution. The color cameras (Cameras II and III) are then used to achieve color reproduction.
The different spatial sampling (i.e., pixel size) of the mono wide camera versus the color wide camera (Cameras I and II) will also add to the dual-camera subsystem’s overall magnification power.
Thanks to its telephoto lens, the third camera adds even more center resolution.
SNR can be significantly improved by combining the color camera output frame (Camera II) with the monochromatic camera output frame (Camera I), which gathers roughly twice as much light as the former. The color filter array, in which each pixel is filtered to record only one of three colors at the expense of total light absorption, roughly halves light exposure; the monochrome sensor avoids this loss.
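As a rough illustration of that SNR gain, here is a minimal sketch that blends a monochrome frame into the luminance channel of the color frame. It assumes the two frames are already registered (aligned) and uses a fixed 50/50 weight; real fusion pipelines use local, occlusion-aware weighting rather than this simple blend.

```python
# Minimal sketch: boosting SNR by blending a registered monochrome frame into the
# luminance channel of the color frame. The fixed weight and the "already
# registered" assumption are illustrative simplifications.
import cv2
import numpy as np

def fuse_mono_color(color_bgr: np.ndarray, mono: np.ndarray,
                    weight: float = 0.5) -> np.ndarray:
    """color_bgr: HxWx3 uint8 color frame; mono: HxW uint8 frame from the mono camera."""
    ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb)
    y = ycrcb[:, :, 0]
    # The mono frame carries more light (no Bayer filter), so averaging it into the
    # luma channel reduces noise while chroma still comes from the color camera.
    ycrcb[:, :, 0] = cv2.addWeighted(y, 1.0 - weight, mono, weight, 0.0)
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```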
The sizeable overlapping field of view (FoV) between Cameras I and II is another clear advantage of this recommended setup over existing zoom-oriented dual cameras. It enables stereo depth sensing throughout the wide FoV, which is valuable for augmented reality and digital bokeh (the shallow depth-of-field effect).
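A minimal sketch of that depth sensing is shown below, assuming the two wide frames have already been rectified with the calibration data. The matcher parameters, focal length, and baseline are illustrative values, not a specific module’s.

```python
# Minimal sketch: coarse depth from the overlapping FoV of the two wide cameras,
# assuming the frames are already rectified. Matcher parameters, focal length and
# baseline are illustrative assumptions.
import cv2
import numpy as np

def depth_from_wide_pair(rect_mono: np.ndarray, rect_color_gray: np.ndarray,
                         focal_px: float = 1400.0, baseline_mm: float = 10.0) -> np.ndarray:
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=7)
    disparity = matcher.compute(rect_mono, rect_color_gray).astype(np.float32) / 16.0
    disparity[disparity <= 0] = np.nan            # invalid / occluded pixels
    # Classic pinhole relation: depth = focal_length * baseline / disparity
    return focal_px * baseline_mm / disparity     # depth map in millimetres
```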
Two apparent drawbacks of such a setup are a somewhat considerable shutter lag during still shooting and no improvement in low-light performance during video recording. It is also crucial to keep a close eye on the power consumption of such a camera system, to avoid a surge in power draw when three cameras stream at the same time.
The order in which the cameras are placed affects the system’s performance. For example, placing the wide color camera in the middle provides a smoother transition from the wide to the tele camera when recording video, and makes fusing the two adjacent cameras (color and mono) easier. However, this layout degrades stereo depth-sensing accuracy, which could be improved by placing the wide color and wide mono cameras at opposite ends to maximize the stereo baseline.
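The baseline trade-off follows from the standard stereo error relation, sketched below with illustrative numbers for focal length, disparity noise, and the two candidate baselines: doubling the baseline roughly halves the depth error at a given distance.

```python
# Minimal sketch of the standard stereo error relation dZ ~= Z^2 * dd / (f * B):
# doubling the baseline B halves the depth error at a given distance. The focal
# length, disparity noise and baselines below are illustrative numbers.
def depth_error_mm(depth_mm: float, baseline_mm: float,
                   focal_px: float = 1400.0, disparity_noise_px: float = 0.25) -> float:
    return (depth_mm ** 2) * disparity_noise_px / (focal_px * baseline_mm)

for baseline in (6.0, 12.0):   # mono next to the color camera vs. at opposite ends
    err = depth_error_mm(depth_mm=2000.0, baseline_mm=baseline)
    print(f"baseline {baseline:4.1f} mm -> ~{err:.0f} mm depth error at 2 m")
```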
The second setup combines a super-wide camera and a tele camera with a standard wide camera, and may be ideal for those who enjoy traveling. The super-wide-angle lens, for example, eliminates the need for the traditional panoramic stitching capture mode when photographing an open scene, while the tele camera captures fine details when zooming in, which is quite helpful. With today’s smartphones, users must choose between high-quality optical zoom and super-wide photos; not so with this triple-camera arrangement.
This trio can handle power consumption more efficiently than the preceding triple-camera system. For any user-selected zoom factor, there will usually be just one active camera. The logic behind the camera ordering is also more straightforward, because the progression of magnification power yields a seamless changeover between neighboring cameras.
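A minimal sketch of that selection logic is shown below. The 1x and 3x transition points are illustrative assumptions for a super-wide/wide/tele trio; vendors tune these thresholds per module.

```python
# Minimal sketch: selecting the single active camera from the user's zoom factor.
# The 1x / 3x transition points are illustrative assumptions.
def active_camera(zoom: float) -> str:
    if zoom < 1.0:
        return "super_wide"   # below 1x: only the super-wide camera streams
    if zoom < 3.0:
        return "wide"         # 1x..3x: wide camera plus digital crop
    return "tele"             # 3x and above: tele camera plus digital crop

assert active_camera(0.6) == "super_wide"
assert active_camera(2.0) == "wide"
assert active_camera(4.5) == "tele"
```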
The system’s difficulties originate from the super-wide camera’s rather significant lens distortion, which complicates smooth video transitions, the fusion of two images, and even the factory calibration procedure. A longer focal length for the tele camera would also be greatly appreciated by photography-obsessed mobile users, allowing them to get a far better close-up on the target object even from a distance.
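As a sketch of what correcting that distortion looks like in software, the snippet below undistorts a super-wide frame with its calibrated camera matrix and distortion coefficients. It assumes the plain polynomial distortion model; very wide lenses may instead need OpenCV’s fisheye model.

```python
# Minimal sketch: correcting the super-wide camera's lens distortion with its
# factory calibration (camera matrix K and distortion coefficients dist).
# Assumes the plain polynomial model; very wide lenses may need cv2.fisheye.
import cv2
import numpy as np

def undistort_super_wide(frame: np.ndarray, K: np.ndarray, dist: np.ndarray) -> np.ndarray:
    h, w = frame.shape[:2]
    # Balance between keeping the full FoV (alpha=1) and cropping black edges (alpha=0).
    new_K, _ = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), alpha=0.3)
    return cv2.undistort(frame, K, dist, None, new_K)
```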
The third setup adds a folded tele camera and provides users with an unprecedented 5x genuine optical zoom without affecting the form factor of today’s smartphones (i.e., a camera thickness of 5mm that can coexist with a bezel-less display). In this configuration, despite the relatively high F-number (f/2.8), the folded tele camera’s entrance pupil will admit about five times more light than a conventional RGB wide camera and almost 2.5 times more light than the above-mentioned wide camera.
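The entrance-pupil arithmetic behind that claim is sketched below: pupil diameter is focal length divided by f-number, and collected light scales with pupil area. The focal lengths used here are illustrative numbers chosen only to show the calculation, not a specific module’s specifications.

```python
# Minimal sketch of the entrance-pupil arithmetic: pupil diameter = focal length /
# f-number, and collected light scales with pupil area. Focal lengths below are
# illustrative numbers, not a specific camera module's.
import math

def pupil_area_mm2(focal_mm: float, f_number: float) -> float:
    diameter = focal_mm / f_number
    return math.pi * (diameter / 2.0) ** 2

wide = pupil_area_mm2(focal_mm=4.0, f_number=1.8)    # conventional wide camera (assumed)
tele = pupil_area_mm2(focal_mm=13.0, f_number=2.8)   # folded tele with a longer focal length (assumed)
print(f"folded tele collects ~{tele / wide:.1f}x the light of the wide camera")
```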
This high-zoom triple camera will deliver a seamless, continuous zoom experience at any chosen zoom factor from 1x to 5x, during both still capture and 4K video recording. Combined with multi-frame technology, image fusion, and multi-scaling, this camera could deliver up to a 25x magnification factor.
This cutting-edge tri-aperture system, enabled by folded zoom optics and OIS technology, excels in that it fully addresses two significant flaws of today’s smartphone photography: low-light performance and the lack of a powerful optical zoom.
Conclusion
This article reviewed three critical challenges for triple cameras and three specific configurations representing a wide variety of triple-camera setups that OEMs might soon adopt. Multi-aperture technologies, in general, are subject to the law of diminishing returns.
In dual configurations, the second camera provides the most substantial improvement in user experience. To justify its supplementary cost, size, and complexity, the third camera in any triple arrangement will need to add significant value to the overall user experience. OEMs will be especially interested in triple-camera combinations that address low-light performance restrictions (in both capture and video modes) and provide acceptable optical zoom capabilities (beyond 3x).
Edited and proofread by: Nikita Sharma