
Apple’s new iPhone SE has just one 12-megapixel rear camera lens

Apple’s new iPhone SE has just one 12-megapixel rear camera lens. But it turns out that one camera is still pretty good, at least in well-lit situations.

For portrait photos, the iPhone SE’s camera uses machine learning to estimate depth of field. Ben Sandofsky, one of the developers of the mobile photography app Halide, took a closer look at how those portrait photos actually work.
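The general idea behind portrait mode, once a depth map exists, can be sketched in a few lines. This is an illustrative toy example, not Apple's actual pipeline: given an image and a per-pixel depth map, it blends a sharp and a blurred copy of the image according to each pixel's distance from an assumed focal plane, producing a synthetic shallow depth of field.

```python
# Toy sketch of depth-based synthetic blur (not Apple's implementation):
# pixels whose depth is far from the focal plane get a blurred copy,
# pixels near the focal plane stay sharp.
import numpy as np

def box_blur(img, radius=2):
    """Naive box blur with edge clamping (stand-in for a real bokeh kernel)."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            out[y, x] = img[y0:y1, x0:x1].mean()
    return out

def synthetic_bokeh(img, depth, focal_depth):
    """Blend sharp and blurred copies; weight grows with distance from focal_depth."""
    blurred = box_blur(img)
    weight = np.clip(np.abs(depth - focal_depth), 0.0, 1.0)  # 0 = in focus
    return weight * blurred + (1.0 - weight) * img

# Tiny synthetic scene: a horizontal gradient image, with the left half "near"
# (depth 0, in focus) and the right half "far" (depth 1, blurred).
img = np.tile(np.linspace(0.0, 1.0, 8), (8, 1))
depth = np.zeros((8, 8))
depth[:, 4:] = 1.0
result = synthetic_bokeh(img, depth, focal_depth=0.0)
```

On a real device the depth map would come from hardware (dual lenses, focus pixels) or, as on the SE, from a machine-learning model, and the blur kernel would be far more sophisticated; the compositing step, however, follows this same depth-weighted pattern.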

The iPhone SE’s portrait mode captures depth maps differently than the iPhone XR, which also has a single rear lens.

In one example, Sandofsky took the same picture with both an iPhone XR and an iPhone SE and compared the depth maps each phone produced.

The iPhone XR’s depth map is on the left, and the iPhone SE’s is on the right.

The iPhone SE’s depth map isn’t a perfect representation of the scene’s actual depth; because the phone estimates depth entirely using machine learning, it will even generate a depth map from a flat photo.

Sandofsky’s post has a lot more detail about the SE’s depth mapping in action, and it’s worth taking a few minutes to read his blog in full.


What do you think?

Written by Nuked
