ZenTalk

[Reviews] Experience the ZenFone 5: Photography Powered by AI

Wayne_ASUS TWN Moderator
Starting in late 2017 with the ZenFone 4 series, every new ASUS smartphone has been equipped with dual cameras. This made ASUS the first smartphone manufacturer to use dual cameras in every model of a new series of smartphones.

But with the rapid advancements in smartphone hardware, it is no longer sufficient to compete purely on specifications. How, then, should the next-generation ZenFone distinguish itself from all the competition?

At the Mobile World Congress (MWC) in February 2018, ASUS announced the next-generation ZenFone 5 that integrates all the latest technologies, including artificial intelligence.



AI cameras for intelligent photography

The integration of AI in the ZenFone 5 is based on the idea of transcending the “smart phone” to create an “intelligent phone”. ASUS designed the phone’s AI features to serve the user: the phone learns and adapts to new information through two primary paradigms the company calls “Learn from Cloud” and “Learn from You (the user)”.



The best example that demonstrates this philosophy is in the ZenFone 5’s camera system where the ASUS R&D team applied AI machine learning technology to let the smartphone autonomously set the best settings for different subjects or scenes. This allows the user to leave the details to the camera, and simply let the smartphone choose the optimal settings for any situation. Furthermore, the AI in the smartphone will continue to learn with every photo taken, so that the next picture it takes will be even better.

The ZenFone series’ cameras are designed based on the “We Love Photo” philosophy. The ASUS team, therefore, examined how best to extend this philosophy to future smartphone designs. A global survey of ASUS users showed that 36% of photos were group selfies, 34% were group photos of family and friends, 34% were portrait selfies, 32% were scenic landscape pictures, and 28% were selfies with objects in the background. This data formed the basis of the ASUS R&D team’s designs.

As stated by C, a researcher at ASUS: “For many people, when you say ‘AI’ they automatically think of something like AlphaGo, Google’s Go-playing computer program. That’s not what we do, though; we’re not trying to make an AI that plays board games. Instead, we want to train the AI to sense its environment and make adjustments in order to create the best possible experience when users take photos. Most users want to take pictures of happy times with friends and family, so we started thinking about how we can help our users take great photos of people, the natural environment, food, and so on.”


Hyper-intelligent AI can distinguish 16 different environments

Simply point the ZenFone 5’s camera at something, and the software will be able to identify what it is seeing, based on its previous machine learning and big data optimization. The phone is capable of distinguishing between sixteen different subjects or scenes, including people, dogs, cats, sunsets, food, the sky, grass, the ocean, flowers, plants, snow, night view, stages, text, QR codes, and tripods. The ZenFone 5 will then make the necessary adjustments based on the detected image, including contrast, saturation, brightness, etc. This means users can simply point and click, knowing that the camera will give them the best possible photo.
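The scene-based tuning described above can be sketched as a simple lookup: the detected label selects a preset of image adjustments. This is a minimal illustration only; the labels and values below are hypothetical, not ASUS’s actual firmware parameters.

```python
# Illustrative scene-to-preset lookup; labels and values are hypothetical,
# not ASUS's actual tuning parameters.
SCENE_PRESETS = {
    "flower": {"saturation": 1.25, "contrast": 1.05, "brightness": 1.00},
    "sky":    {"saturation": 1.10, "contrast": 1.20, "brightness": 1.00},
    "food":   {"saturation": 1.30, "contrast": 1.00, "brightness": 1.05},
    "night":  {"saturation": 1.00, "contrast": 0.95, "brightness": 1.20},
}
DEFAULT_PRESET = {"saturation": 1.00, "contrast": 1.00, "brightness": 1.00}

def tuning_for(scene_label: str) -> dict:
    """Return the adjustment preset for a detected scene label."""
    return SCENE_PRESETS.get(scene_label, DEFAULT_PRESET)
```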

How does this system work exactly? As explained by J, an ASUS product designer, the ZenFone 5 will automatically detect its subject or scene — let’s say, for example, that the user is taking a picture of flowers; the phone will automatically make adjustments to the flowers’ colors, so that the final result will be more vivid and attractive. If the user is taking pictures of the sky, then the ZenFone 5 will automatically optimize the color contrast for the image.

In other words, ZenFone 5 users can take beautiful pictures without having to worry about the nitty-gritty of photography. They no longer need to have professional photography knowledge, or to adjust all the settings manually, or beautify the pictures in post-processing. The ASUS team has taken care of all of that with the ZenFone 5’s AI.

Furthermore, the ZenFone 5 comes with a real-time beautification mode, which makes automatic adjustments to factors such as facial shape and skin tone before the photo is snapped.

However, the ASUS team has also taken the needs of professional photographers into account when designing the ZenFone 5. The smartphone comes with a Pro mode, which has more than 60 different built-in settings that can be adjusted manually, such as shutter speed, color contrast, and exposure settings. The camera can even accept exposure times of up to 32 seconds. As ASUS product manager T said, “There aren’t many phones in the same price range that can provide the same level of flexibility that we do. If you want to go to the summit of a mountain to take pictures of star trails, you can do that with just a tripod and our phone!”

So how does an AI automatically detect its scenes and subjects? The ASUS R&D team first defined a series of subjects and scenes based on the types of photos most commonly shot by their users. Then, they fed over 2.4 million photos into a cloud-based system to train the AI to recognize these subjects and scenes. The entire process took over six months of testing before the ZenFone’s AI was ready.
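The labeling-then-training workflow described above can be illustrated in miniature. The real system trains deep networks on millions of photos in the cloud; the nearest-centroid classifier below, working on toy feature vectors, merely sketches the idea of learning one prototype per hand-labeled category and assigning new inputs to the closest one.

```python
# Toy sketch of training a classifier from manually labeled examples.
# A nearest-centroid model stands in for the deep networks a real
# scene-recognition system would use.
from collections import defaultdict
import math

def train(labeled_features):
    """Compute one prototype (centroid) per label from labeled examples.

    labeled_features: iterable of (label, feature_vector) pairs.
    """
    sums = {}
    counts = defaultdict(int)
    for label, vec in labeled_features:
        if label not in sums:
            sums[label] = list(vec)
        else:
            sums[label] = [a + b for a, b in zip(sums[label], vec)]
        counts[label] += 1
    return {lbl: [x / counts[lbl] for x in s] for lbl, s in sums.items()}

def classify(centroids, vec):
    """Assign vec to the label whose centroid is nearest (Euclidean)."""
    return min(centroids, key=lambda lbl: math.dist(centroids[lbl], vec))
```

As E notes below, the quality of the manual tags is what determines whether such a model learns the right distinctions at all.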

ASUS image technology engineer E pointed out that, “Every artificial intelligence starts with real human intelligence! The biggest challenge was that every photo had to be manually categorized and tagged. This was the only way to make sure the machine would get the right information.” Teaching the AI to tell the difference between the sky and the sea was itself a huge challenge. Recognizing food was also easier said than done because of the wide variety of foods in the world. The team had to collect many pictures of different types of foods to train the AI. E also said, “Dogs were also a challenge, because there are so many different breeds. Some dogs look like wolves, and the computer had trouble telling them apart.”

ASUS image technology engineer M went one step further, and said, “The AI must be trained to recognize the object being photographed before it can perform setting optimization. Many adjustments were made to every environment to ensure the best possible optimization. At the moment, the ZenFone 5 can achieve 95% accuracy when recognizing environments; sometimes it can go up to 98% or even 99%. Every ZenFone 5 must pass our quantitative testing before it can be shipped.”

It is worth noting that ASUS does not intend to have the ZenFone 5 recognize every possible environment. Instead, they focused their efforts on the most common images and environments. As C notes, it was tricky to find a balance between the phone’s functionality, accurate environmental recognition, and power usage.




AI learns your photography preferences

Apart from automatically determining its environment, the AI in the ZenFone 5 also comes with the exclusive AI Photo Learning feature, which demonstrates the “Learn from You” philosophy. This feature applies two similar effects to a photo the user has taken, and asks them to choose which one they prefer. In this way, the AI learns its user’s preferences, so that in time it will be able to choose the optimal settings for the user automatically. M explained: “We wanted the AI to work for you, and for your phone to be more personalized. A phone is like the user’s baby; it’s capable of learning and growing up to reflect the user’s unique personality.”

As J pointed out, “Every user has their subjective preferences for photography and post-processing.” For example, some people like to add filters to their pictures, while others do not. By providing such feedback to the AI, the phone will have a better idea of what adjustments to make for the next photo taken.
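This kind of A/B preference feedback can be sketched as a simple weight update: the chosen effect’s weight is nudged up and the rejected one is decayed. The function and parameter names below are hypothetical; the real AI Photo Learning internals are not public.

```python
# Hypothetical on-device preference update: when the user picks one of two
# candidate effects, nudge the chosen effect's weight up and decay the
# rejected one. All names here are illustrative.
def update_preferences(prefs, chosen, rejected, lr=0.1):
    """Return a new preference dict after one A/B choice (weights start at 0.5)."""
    prefs = dict(prefs)  # kept local, mirroring the on-device-only design
    prefs[chosen] = prefs.get(chosen, 0.5) + lr * (1.0 - prefs.get(chosen, 0.5))
    prefs[rejected] = prefs.get(rejected, 0.5) * (1.0 - lr)
    return prefs
```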

The ASUS team went through many, many discussions on the AI’s machine learning capabilities. ASUS software designer W candidly said, “When we were talking about the AI’s functionality at the beginning, we said we wanted users to be able to customize and personalize their phones, so we wanted to give users the opportunity to participate in the AI’s adjustments. But on the other hand, we want to make our phone as user-friendly as possible. So how often should the phone ask the user for feedback? How do we find a balance between improving the AI while not annoying the user? In the end we settled on a lower frequency of AI learning, where the phone would ask for user feedback approximately once every week or two.”

The ASUS team also emphasized that the ZenFone 5 would not collect or upload users’ preferences. All of the feedback provided by a user would be stored in their own phone only to help that particular phone’s AI learn its user’s preferences. The user is also free to decide whether or not to provide feedback to the phone.



Outstanding photosensitivity for both day and night photography


When it was launched in late 2017, the ZenFone 4’s hardware specs were already top of the line, representing a primary force driving hardware innovation and quality in the market. The ZenFone 5 takes it further by using the latest Sony IMX 363 image sensor, released in 2018. It has a wide f/1.8 aperture, large 1.4μm pixel size, large 1/2.55’’ sensor, and 4-axis optical image stabilization. The camera also comes with dual pixel autofocus technology that can focus on objects within 0.03 seconds. In other words, this is a cutting-edge piece of technology that is usually found only in top-tier digital cameras.


Senior product manager T noted that, “We were willing to invest heavily in the camera hardware because we want to provide the best possible user experience. The ZenFone 5 uses top-of-the-line Sony IMX 363 sensors, so that users will be able to take the photos they want in any circumstances.”


Such outstanding hardware means that users are no longer bound by the limits of time or lighting. Instead of only being able to take clear photos in well-lit daytime environments, the Sony IMX 363 allows users to take crystal-clear pictures, with excellent color fidelity, even in dim nighttime environments. C pointed out that, “The IMX363 has larger light sensing pixels and outstanding performance, so that even tiny details can be captured when there’s insufficient lighting.”


Apart from the light sensor, the ZenFone 5’s software has also been optimized for high dynamic range (HDR) and exposure adjustments, allowing it to achieve five times the light-sensing effectiveness of other phone cameras. The HDR adjustments mean that all the minor details can still be seen even in backlit photos. Nothing is lost even when the user is taking pictures in bright sunlight, photographing bright lights at night, or snapping photos of vendors and signs in night markets.


The ZenFone 5’s color correction (RGB) sensor is also 10% more precise than in previous models. The color correction sensor accurately determines the color composition of the environment and feeds this information to the camera’s algorithms. The camera then adjusts the color spectrum accordingly to produce the optimal image. For example, if there is a lot of white in the environment and the camera’s lens does not detect it accurately, it might end up using a dim yellow color as the baseline for the color spectrum, resulting in a bluer photo. Consequently, the camera’s ability to detect the color composition of the environment is crucial to the quality of the picture. Very few phones on the market are equipped with color correction sensors, but ASUS has included this feature in its ZenFone series starting from the ZenFone 3 in 2016.
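The role of the color sensor can be illustrated with the classic gray-world white-balance assumption, a textbook method rather than ASUS’s actual algorithm: assume the average scene color should be neutral gray, and compute per-channel gains that cancel whatever color cast was measured.

```python
# Gray-world white balance (a textbook method, not ASUS's algorithm):
# estimate the scene's color cast from the per-channel averages, then
# return multiplicative gains that pull the average back to neutral gray.
def gray_world_gains(avg_r, avg_g, avg_b):
    gray = (avg_r + avg_g + avg_b) / 3.0
    return gray / avg_r, gray / avg_g, gray / avg_b
```

A scene with a warm (red-heavy) cast thus gets a red gain below 1.0 and blue/green gains above it, which is exactly the kind of correction a misread baseline would get wrong.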




Ultra wide angle for wider pictures

A global survey has shown that 60% of ASUS users take pictures daily. This indicates that the camera is a very frequently used feature in ASUS’s phones. Consequently, the ZenFone 5 continues to use the dual rear camera setup from previous ZenFones, where a supplementary lens helps to shoot 120° ultra wide angle pictures. This setup remains quite rare in smartphones; instead, most other phones use a telephoto lens setup in their cameras.

There are three primary advantages to this ultra wide angle design. The first is convenience, in that users can easily take wide angle group photos without having to invest in a wide angle lens. The second advantage is that more people can fit inside one photo, and more of the background can be included as well. Thirdly, by providing more options for the focus angle, the user’s photography becomes more varied and interesting.



Super-accurate instant depth of field

The ASUS team has noticed that depth of field (DoF) photography tends to add vitality and drama to photos, which has made it a very popular feature among users. Therefore, the ZenFone 4 featured a Portrait Mode, where the camera would capture an image of a subject, which would then undergo software post-processing to blur the background. This creates an effect where the person in the foreground is in sharp focus, while extraneous details in the background are blurred. Most smartphone cameras on the market today use a similar strategy for DoF photography.

However, C pointed out that, “The ZenFone 5’s dual camera system allows it to focus more precisely on what the camera is aimed at, which means it can create DoF photography in real time.”

Of course, this raises the question: If DoF effects can be achieved with a software solution, why bother doing the same thing with hardware? To put it simply, the ASUS team noted that the hardware solution was more effective. As E explained: “When only one camera lens is used and the DoF effect is added with post-processing, the camera cannot accurately determine the distance and depth of the person being photographed. This means that the software has limited information to work on, which is detrimental to its effectiveness and the naturalness of the final photo. For example, the person’s hair or shoulders may end up blurred along with the background. Furthermore, software solutions for DoF photography are limited to photos of people; they don’t work on photos of objects.”

As a result, the ASUS R&D team searched for a hardware solution for DoF photography, where both lenses would be used in the process. E said, “The angles obtained from the two rear cameras can be used to triangulate the depth of the person or object being photographed, which is then modeled by the AI. This allows up to six layers of DoF to be applied to the background.”
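The triangulation E describes follows the standard stereo relation Z = f·b/d: depth is recovered from the focal length, the baseline between the two lenses, and the disparity of a matched point. The depth-to-layer binning below is a hypothetical illustration of splitting depths into six DoF layers; the edge values are invented for the example.

```python
# Standard stereo triangulation (not ASUS-specific): depth Z = f * b / d,
# where f is the focal length in pixels, b the baseline between the two
# lenses in meters, and d the disparity of the matched point in pixels.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical binning of depths into six DoF layers (layer 0 = nearest);
# the edge values are illustrative only.
def dof_layer(depth_m, edges=(0.5, 1.0, 2.0, 4.0, 8.0)):
    for i, edge in enumerate(edges):
        if depth_m < edge:
            return i
    return len(edges)  # layer 5: beyond the farthest edge
```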

The hardware tuning for the dual-camera Portrait Mode requires both cameras to focus precisely on the same points, so they can accurately determine what is in the foreground and what is in the background. This, in turn, means that the object in the foreground can be clearly defined, while the background is blurred.

However, designing the hardware for this system was also a challenge for the R&D team, because it involved coordinating the efforts of many different parties, including the camera manufacturer, the software engineers, and the hardware vendors. It was difficult to perform all the testing and adjustments needed, while also maintaining the required yield rates.

What were the challenges encountered during the development of this hardware alignment technology? As E said, the first issue was that the ZenFone 5 uses two cameras: a basic camera and an ultra wide angle camera. Oftentimes the edges of photos become distorted when a wide angle lens is used. Therefore, it was necessary to ensure the fidelity of the wide angle image in order to make accurate adjustments when the images from both cameras were combined.

Furthermore, DoF photography requires both cameras to take a picture at precisely the same time, otherwise the two images may not be able to be combined. M described this challenge: “The two cameras had to be synchronized to within a thousandth of a second. Forcing the two cameras to work in sync like that involved making adjustments very early on in the manufacturing process!”




ASUS has insisted on using the best possible light sensing technology since the ZenFone 4. The company is now entering its fourth year of producing its own branded smartphones, and the return to the ZenFone 5 name represents a new chapter of this journey. The ZenFone 5’s excellent specs and competitive price mean that it is excellent value for users, and allows them to enjoy top-tier performance without having to pay exorbitant costs. Furthermore, the ZenFone 5’s advancements in quality and technology, including the customized AI, mean that the ZenFone 5 is the most user-friendly phone in the entire ZenFone series, and it will only grow even more user-friendly as the AI learns from the cloud and also its users’ preferences.





OTHERS Level 1
I had looked forward to the Zenfone 5 and the improvements in the camera. I am disappointed that ASUS chose to go with ultra-wide angle instead of a mild zoom. I have been a photography hobbyist for 60 years and rarely had a use for ultra-wide shots. What I always had a use for was some sort of zoom. I just don't understand the choice made by ASUS. My Zenfone 4 Pro with 2X zoom is most handy. I use it for at least half the shots I take, and I take a lot of shots. I attended a family dinner party yesterday and took about 300 shots - about half with zoom. There was no appropriate place for the ultra-wide angle. I look forward to a Zenfone 5 Zoom in the future. For now, I will keep my obsolete and much more useful Zenfone 4 Pro.
PHL Level 1
Nice!