Google Pixel 3 Smartphones Focus on Computer Vision

findbiometrics 2018-10-12 00:08:24
Posted on October 11, 2018

Google’s focus on artificial intelligence has been central to its strategy in delving into hardware over the last couple of years, and with this week’s launch of its latest Pixel smartphones, that focus is sharper and clearer than ever, with computer vision in particular at its center.

To be clear, the voice-based Google Assistant is still a vital component of the company's AI arsenal, with an increasing role in its user interfaces. But while the Google Assistant has long had good ears, it's now getting much sharper vision. Google Lens, for example, the computer vision system launched with last year's Pixel 2 smartphone, is built into the camera system of the Pixel 3 and is even more sophisticated. Taking a picture of a phone number on a business card, for example, might trigger a prompt asking if the user wants to save the number to their contacts; or the user can access more information about some nice shoes featured in an Instagram image thanks to Google Lens. All of this can be activated with a long press in the camera app.

Meanwhile, perhaps taking a cue from Apple, Google has also sharpened its focus on Augmented Reality, launching a new flagship app called Playground. Built into the Pixel 3's camera system, the app lets the user insert animated characters called "Playmoji" into a given video; these use AI to interact with each other and the user. Characters range from cute animals to a dancing Childish Gambino, and the AI system will even recommend tailored Playmojis depending on what the user is photographing.

And Google also continues to infuse AI into more traditional smartphone photography. A "Portrait Mode" feature uses AI to automatically blur the background behind a given subject, while another feature, called "Top Shot", will automatically choose the best shot from a given "motion photo". There's also a "Super Res Zoom" feature, which Google says draws on astronomical imaging techniques to let the user zoom in while maintaining relatively sharp image quality; and the company promises that a forthcoming "Night Sight" feature will produce highly detailed images even in dark settings.

It all serves to illustrate the emergence of computer vision in consumer technology. Google isn’t alone in bringing AI to photography, and it isn’t even the only smartphone maker to leverage computer vision to identify objects seen through a camera’s lens (though it is the most prominent); but in bringing all of these strains of image-focused AI into a couple of central products, the company is steering consumer technology toward a future that not only consumers will see, but that their devices will see, too.

(Originally posted on Mobile ID World)

Tags: AI, AI imaging, artificial intelligence, computer vision, Google, Google Assistant, Google Pixel, Google Pixel 3, mobile devices, mobile identification, mobile imaging, Pixel 3