Apple’s new iPhone SE has just one 12-megapixel rear camera lens. But it turns out that one camera is still pretty good, at least in well-lit situations.
For portrait photos, the iPhone SE’s camera uses machine learning to estimate depth of field. Ben Sandofsky, one of the developers of the mobile photography app Halide, took a closer look at how the SE’s portrait photos actually work.
The iPhone SE’s portrait mode captures depth maps differently than the iPhone XR, the other single-lens iPhone with portrait mode: the XR gets some real depth information from its sensor’s focus pixels, while the SE estimates depth entirely through machine learning.
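These depth maps are stored inside the portrait photo itself as auxiliary image data, so you can inspect one directly. Below is a minimal Swift sketch that reads the disparity map from a portrait HEIC using Apple’s ImageIO and AVFoundation frameworks; the file path is a placeholder, and real code would want proper error handling.

```swift
import AVFoundation
import ImageIO

// Read the disparity (depth) map that Apple embeds in portrait photos
// as auxiliary image data. Returns nil if the file has no depth data.
func loadDepthData(from url: URL) -> AVDepthData? {
    guard let source = CGImageSourceCreateWithURL(url as CFURL, nil) else {
        return nil
    }
    // Portrait shots usually carry disparity; some carry depth instead.
    let types = [kCGImageAuxiliaryDataTypeDisparity, kCGImageAuxiliaryDataTypeDepth]
    for type in types {
        if let info = CGImageSourceCopyAuxiliaryDataInfoAtIndex(source, 0, type)
            as? [AnyHashable: Any] {
            return try? AVDepthData(fromDictionaryRepresentation: info)
        }
    }
    return nil
}

// Hypothetical usage: check the resolution of a portrait photo's depth map.
let url = URL(fileURLWithPath: "/path/to/portrait.heic") // placeholder path
if let depth = loadDepthData(from: url) {
    let map = depth.depthDataMap
    print("Depth map: \(CVPixelBufferGetWidth(map)) x \(CVPixelBufferGetHeight(map))")
}
```

Note that the depth map is much lower resolution than the photo itself, which is part of why the comparisons below are revealing.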
In one example, Sandofsky took the same photo with both an iPhone XR and an iPhone SE and compared the depth maps the two phones produced. In the comparison image, the iPhone XR’s depth map is on the left and the iPhone SE’s is on the right.
Because the SE relies entirely on machine learning, it can even generate a depth map from a flat photo of a photo. The result isn’t a perfect representation of the scene’s actual depth, but it’s a striking demonstration of what the model can infer from a single 2D image.
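The depth map is what drives the portrait effect itself: once the phone has one, the fake bokeh is essentially a blur whose strength varies with estimated distance. As a rough sketch of the idea (not Apple’s actual pipeline, which is considerably more sophisticated), Core Image’s CIMaskedVariableBlur filter can approximate it; this assumes the disparity map has already been normalized to the 0 to 1 range and scaled to match the photo.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// Approximate a portrait-style background blur: blur strength follows a
// mask image, so far-away pixels get blurred while the subject stays sharp.
// `disparity` is assumed normalized to 0...1 (bright = near) and resized
// to the same dimensions as `photo`.
func fakeBokeh(photo: CIImage, disparity: CIImage, maxRadius: Float = 12) -> CIImage {
    // Invert the disparity so the background (far, dark in disparity)
    // becomes bright, which is where CIMaskedVariableBlur blurs hardest.
    let invert = CIFilter.colorInvert()
    invert.inputImage = disparity

    let blur = CIFilter.maskedVariableBlur()
    blur.inputImage = photo
    blur.mask = invert.outputImage
    blur.radius = maxRadius
    // Crop back to the original extent, since blurring expands the bounds.
    return blur.outputImage?.cropped(to: photo.extent) ?? photo
}
```

This is also why errors in the depth map translate directly into the telltale portrait-mode artifacts, like blurred hair or sharp background edges.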
Sandofsky’s post has a lot more detail about the SE’s depth mapping in action, and it’s worth taking a few minutes to read in full.