Google has open-sourced the AI tool that makes the Pixel’s portrait mode so good


Google’s Pixel phone has one hell of a camera, and one of the reasons for this is AI. Google has used its machine learning talent to squeeze better shots and shooting modes out of a tiny smartphone lens. And now, the company is open-sourcing one of these AI tools — a piece of software that underpins the Pixel’s portrait mode.

As announced in a blog post earlier this week, Google has open-sourced a lump of code named DeepLab-v3+. This is an image segmentation tool built using convolutional neural networks, or CNNs: a machine learning method that’s particularly good at analyzing visual data. Image segmentation analyzes the objects within a picture and splits them apart, dividing foreground elements from background elements.

[Image: Google. A diagram showing how image segmentation works for a typical photograph.]
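To make that concrete, here’s a minimal sketch of what segmentation looks like in practice. It uses torchvision’s pretrained DeepLabV3 model (a close relative of the DeepLab-v3+ code Google actually released, which lives in the TensorFlow models repo) purely for illustration; the filename photo.jpg and the Pascal VOC “person” class index are assumptions for the example:

```python
# A minimal segmentation sketch using torchvision's DeepLabV3 model.
# This is illustrative, not Google's released DeepLab-v3+ code.
import torch
from torchvision import models, transforms
from PIL import Image

# Load a pretrained segmentation network in inference mode.
model = models.segmentation.deeplabv3_resnet101(pretrained=True).eval()

# Standard ImageNet normalization expected by the pretrained weights.
preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("photo.jpg").convert("RGB")  # assumed input file
batch = preprocess(image).unsqueeze(0)          # shape: (1, 3, H, W)

with torch.no_grad():
    output = model(batch)["out"][0]             # shape: (num_classes, H, W)

# Each pixel gets the class with the highest score; class 15 is
# "person" in the Pascal VOC label set these weights were trained on.
mask = output.argmax(0)                         # (H, W) integer class IDs
person_mask = (mask == 15)                      # True where a person is seen
```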

This may sound a bit trivial, but it’s a very useful skill for cameras, and Google uses it to power its portrait mode images on the Pixel. These are the bokeh-style photographs that blur the background of a shot, but leave the subject pin-sharp. The iPhone popularized them, but it’s worth noting that Apple uses two lenses to create the portrait effect, while Google does it with just one. (Is Apple’s portrait mode better than Google’s? I’ll leave that debate for the commenters.)
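Here’s roughly how such a mask turns into the portrait effect. This sketch continues from the one above (it reuses person_mask), and it stands in a plain Gaussian blur for the more sophisticated, depth-aware blur a real portrait mode applies:

```python
# Illustrative bokeh-style compositing: sharp subject, blurred background.
import numpy as np
from PIL import Image, ImageFilter

image = Image.open("photo.jpg").convert("RGB")  # same assumed input file
blurred = image.filter(ImageFilter.GaussianBlur(radius=12))

# person_mask is the (H, W) boolean tensor from the segmentation sketch.
mask = person_mask.numpy().astype(np.uint8) * 255
mask_img = Image.fromarray(mask, mode="L")

# Composite: take sharp pixels where the mask says "person",
# blurred pixels everywhere else.
portrait = Image.composite(image, blurred, mask_img)
portrait.save("portrait.jpg")
```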

As Google software engineers Liang-Chieh Chen and Yukun Zhu explain, image segmentation has improved rapidly with the recent deep-learning boom, reaching “accuracy levels that were hard to imagine even five years [ago].” The company says it hopes that by publicly sharing the system “other groups in academia and industry [will be able] to reproduce and further improve” on Google’s work.

At the very least, opening up this piece of software to the community should help app developers who need some lickety-split image segmentation of their own, done just the way Google does it.
