Portrait Mode produces images with a shallow depth of field, keeping the primary subject in focus while blurring the background for a professional look. It does this by using machine learning to estimate how far each part of the scene is from the camera, so the primary subject can be kept sharp while everything else is blurred.
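To make the idea concrete, here is a minimal sketch in Python (using OpenCV and NumPy) of depth-based background blurring. This is illustrative, not Google's actual pipeline: the `portrait_blur` function, the pre-computed depth map, and the depth tolerance are all assumptions for the example.

```python
import cv2
import numpy as np

def portrait_blur(image, depth, subject_depth, tolerance=0.5, blur_ksize=21):
    """Hypothetical depth-based blur: keep the subject sharp, blur the rest.

    image:         HxWx3 uint8 BGR image
    depth:         HxW float32 map of estimated distance per pixel (meters)
    subject_depth: estimated distance of the primary subject (meters)
    """
    # Synthetic shallow depth of field: blur the whole frame once,
    # then composite the sharp subject back in according to depth.
    blurred = cv2.GaussianBlur(image, (blur_ksize, blur_ksize), 0)

    # Pixels within `tolerance` meters of the subject stay sharp.
    sharp_mask = (np.abs(depth - subject_depth) <= tolerance).astype(np.float32)
    sharp_mask = cv2.GaussianBlur(sharp_mask, (9, 9), 0)  # feather the mask edge
    sharp_mask = sharp_mask[..., np.newaxis]              # broadcast over channels

    out = sharp_mask * image.astype(np.float32) + (1 - sharp_mask) * blurred.astype(np.float32)
    return out.astype(np.uint8)
```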

To estimate depth, the Pixel 4 captures an image with two cameras, the wide and telephoto, which sit 13 mm apart. This produces two slightly different views of the same scene which, much like the two views from a pair of human eyes, can be used to estimate depth. In addition, the cameras use a dual-pixel technique in which every pixel is split in half, with each half seeing the scene through a different half of the lens, to provide even more depth information.
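The geometry behind the two-camera setup is the classic stereo relationship: the farther away an object is, the smaller the pixel shift (disparity) between the two views. The sketch below shows that underlying formula, assuming a known 13 mm baseline and an illustrative focal length; the Pixel 4's actual pipeline uses a learned model rather than this closed-form expression alone.

```python
import numpy as np

BASELINE_M = 0.013   # 13 mm between the wide and telephoto cameras
FOCAL_PX = 3000.0    # focal length in pixels (illustrative value, not the Pixel 4's)

def disparity_to_depth(disparity_px):
    """Convert per-pixel disparity (in pixels) to depth (in meters).

    Classic stereo triangulation: depth = focal_length * baseline / disparity.
    """
    disparity_px = np.maximum(disparity_px, 1e-6)  # avoid division by zero
    return FOCAL_PX * BASELINE_M / disparity_px

# Example: with these values, a 10-pixel shift corresponds to about 3.9 m.
print(disparity_to_depth(np.array([10.0])))
```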

Source: Google explains the science behind the Pixel 4’s Portrait Mode | Engadget
