Depth Sensors in Mobile Apps
Use TrueDepth, time-of-flight and stereo depth sensors for portraits, AR effects and 3D measurement on iOS and Android.
A depth sensor returns a per-pixel distance map of the scene in front of the camera. iPhones with Face ID have a TrueDepth structured-light sensor on the front; many phones with multi-lens cameras can compute depth from stereo disparity. Android exposes depth through the ARCore Depth API or, on supported devices, dedicated ToF sensors surfaced through the Camera2 API.
Key Takeaways
- Depth lets you separate foreground from background, measure objects, and place AR content correctly.
- iOS provides depth via `AVDepthData` (front TrueDepth, rear dual/triple cameras).
- Android exposes depth via the ARCore Depth API on most modern devices, even without dedicated depth hardware (ARCore can derive depth from camera motion).
- Depth data is privacy-sensitive — both platforms gate it behind the camera permission.
Depth Sensors at a Glance
What It Is & How It Works
What it is. A sensor (or computed signal) that returns the distance from the camera to each point in the scene. Implementations include TrueDepth (structured light), stereo / dual-pixel disparity, time-of-flight (ToF) and ML-derived monocular depth.
How it works. You attach a depth output to your camera session. iOS gives you AVDepthData with depth-in-metres at every pixel; ARCore exposes a similar depth image. Most pipelines downsample to 256×192 or so for performance.
Units & signal. Per-pixel depth in metres (16- or 32-bit float on iOS; millimetres or normalised float on Android). Confidence values per pixel where available.
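As a minimal sketch of the unit handling above: ARCore-style depth buffers store millimetres as 16-bit unsigned integers, with 0 meaning "no estimate". The function name here is illustrative, not part of any SDK.

```typescript
// Convert an ARCore-style 16-bit depth buffer (millimetres) to metres.
// A raw value of 0 means "no depth estimate"; we map it to NaN so it
// can't silently pollute averages or measurements downstream.
function depthMmToMetres(buffer: Uint16Array): Float32Array {
  const metres = new Float32Array(buffer.length);
  for (let i = 0; i < buffer.length; i++) {
    metres[i] = buffer[i] === 0 ? NaN : buffer[i] / 1000;
  }
  return metres;
}
```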
What You Can Build With It
Portrait / cut-out effects
Blur the background, replace it, or apply different looks to subject vs. backdrop.
Example: A video-call app that replaces the background with a static image.
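The simplest cut-out is a depth threshold: treat everything nearer than a cutoff as the subject. A hedged sketch (real apps usually refine this with a segmentation model; the function name is hypothetical):

```typescript
// Build a binary foreground mask by thresholding depth: pixels closer
// than `cutoffMetres` are treated as the subject (1), the rest as
// background (0). Depth of 0 or less means "no estimate" → background.
function foregroundMask(depth: Float32Array, cutoffMetres: number): Uint8Array {
  const mask = new Uint8Array(depth.length);
  for (let i = 0; i < depth.length; i++) {
    mask[i] = depth[i] > 0 && depth[i] < cutoffMetres ? 1 : 0;
  }
  return mask;
}
```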
AR object occlusion
Hide virtual objects behind real-world geometry instead of always drawing on top.
Example: A furniture-preview app where the virtual chair gets hidden behind the real coffee table.
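Occlusion reduces to a per-pixel depth comparison: the virtual fragment is drawn only if it is nearer than the real-world sample. A sketch of that test (the function name is illustrative; real renderers do this in a shader):

```typescript
// Per-pixel occlusion test: render the virtual object's fragment only
// when it sits in front of the real-world geometry at that pixel.
function isVirtualVisible(realDepthM: number, virtualDepthM: number): boolean {
  if (!(realDepthM > 0)) return true; // no real-depth estimate: draw on top
  return virtualDepthM < realDepthM;  // nearer than the real surface → visible
}
```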
Measurement
Estimate distances and sizes from depth + camera intrinsics.
Example: A "measure my window" app for blinds and curtains.
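Under a pinhole-camera model, an object spanning `pixels` image columns at distance `depthM`, seen by a camera with focal length `fx` (in pixels, from the intrinsics), is roughly `pixels * depthM / fx` metres wide. A sketch with an illustrative function name:

```typescript
// Pinhole-camera size estimate from depth + intrinsics:
// real width ≈ pixel span × depth / focal length (all in consistent units).
function sizeFromDepth(pixels: number, depthM: number, fx: number): number {
  return (pixels * depthM) / fx;
}
```

For example, 500 pixels at 2 m with fx = 1000 px gives an estimated width of 1 m.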
3D scanning / capture
Build a textured 3D model of an object or face.
Example: A face-scanning sample for a glasses try-on app.
Permissions & Setup
Depth output rides on top of the camera permission. There is no separate "depth permission".
iOS · Info.plist
NSCameraUsageDescription
Android · AndroidManifest.xml
android.permission.CAMERA
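The declarations above look like this in each platform's config (the usage string is a placeholder — describe your actual use of depth data):

```xml
<!-- iOS · Info.plist -->
<key>NSCameraUsageDescription</key>
<string>We use the camera and its depth data for portrait effects.</string>

<!-- Android · AndroidManifest.xml -->
<uses-permission android:name="android.permission.CAMERA" />
```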
Code Examples
Setup
- Expo: use react-native-vision-camera with the depth plugin or a custom native module
- iOS: configure an `AVCaptureDepthDataOutput` on your session and read `AVDepthData`
- Android: integrate ARCore and acquire the depth image with `Frame.acquireDepthImage16Bits()`
```tsx
// Sketch using react-native-vision-camera + a depth frame processor.
// `processDepth` and the "depth" pixel format are placeholders for your
// own native plugin — vision-camera does not ship depth support out of the box.
import {
  Camera,
  useCameraDevice,
  useCameraPermission,
  useFrameProcessor,
} from 'react-native-vision-camera';
import { processDepth } from './depth-plugin'; // your worklet plugin

export function DepthCamera() {
  const device = useCameraDevice('back');
  const { hasPermission, requestPermission } = useCameraPermission();

  const frameProcessor = useFrameProcessor(frame => {
    'worklet';
    const meanDepth = processDepth(frame); // returns metres
    console.log('avg depth (m):', meanDepth);
  }, []);

  if (!hasPermission) {
    requestPermission();
    return null;
  }
  return device ? (
    <Camera device={device} isActive frameProcessor={frameProcessor} pixelFormat="depth" />
  ) : null;
}
```

Tip: With Newly, you describe the feature you want and the AI agent wires up the sensor, permissions, and UI for you. Try it free.
Best Practices
Downsample for inference
Most ML / segmentation runs on a 256×192 depth map. Don't pass full-resolution depth to a neural net.
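A nearest-neighbour downsample is usually enough for segmentation input — depth maps are piecewise smooth, so fancy filtering buys little. A sketch (function name is illustrative):

```typescript
// Nearest-neighbour downsample of a row-major depth map,
// e.g. 640×480 → 256×192 before feeding a segmentation model.
function downsampleDepth(
  src: Float32Array, srcW: number, srcH: number,
  dstW: number, dstH: number,
): Float32Array {
  const dst = new Float32Array(dstW * dstH);
  for (let y = 0; y < dstH; y++) {
    for (let x = 0; x < dstW; x++) {
      // Pick the nearest source sample for each destination pixel.
      const sx = Math.floor((x * srcW) / dstW);
      const sy = Math.floor((y * srcH) / dstH);
      dst[y * dstW + x] = src[sy * srcW + sx];
    }
  }
  return dst;
}
```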
Use confidence values
Both iOS and ARCore provide per-pixel confidence. Mask out low-confidence pixels before measurements.
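Masking can be as simple as replacing low-confidence samples with NaN so they drop out of any averaging or measurement. A sketch assuming an 8-bit confidence image, as in ARCore's raw depth confidence (the function name is illustrative):

```typescript
// Invalidate depth samples whose confidence falls below a threshold.
// Confidence is assumed to be 0–255, higher = more reliable.
function maskLowConfidence(
  depth: Float32Array, confidence: Uint8Array, minConfidence: number,
): Float32Array {
  const out = new Float32Array(depth.length);
  for (let i = 0; i < depth.length; i++) {
    out[i] = confidence[i] >= minConfidence ? depth[i] : NaN;
  }
  return out;
}
```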
Match RGB and depth resolution
Resize one to match the other before any per-pixel operation; otherwise your lookups will sample the wrong pixels and every measurement will drift.
Test in varied lighting
Structured-light and ToF sensors degrade in direct sunlight; ML depth degrades in low light.
Common Pitfalls
Assuming every device has TrueDepth
Only Face ID iPhones have a front structured-light sensor. Rear depth comes from dual cameras.
Mitigation: Feature-detect with `AVCaptureDevice.default(.builtInTrueDepthCamera, ...)` and provide an alternative UI.
Privacy concerns
Face geometry is sensitive. Apple may flag apps that capture detailed facial depth.
Mitigation: Process on-device, request the minimum data, and document what you do with it.
Slow frame rate
Requesting depth output at the same rate as RGB can halve your camera fps.
Mitigation: Throttle depth callbacks (e.g. 10–15 Hz) and reuse the last depth map between RGB frames.
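A minimal throttle helper for that mitigation — skip depth processing when the last run was too recent, and reuse the previous depth map meanwhile. The clock is injectable only so the behaviour is easy to verify; names are illustrative:

```typescript
// Rate-limit depth processing to e.g. ~10 Hz (minIntervalMs = 100)
// while the RGB preview keeps running at full frame rate.
function makeDepthThrottle(minIntervalMs: number, now: () => number = Date.now) {
  let last = -Infinity;
  return (process: () => void): boolean => {
    const t = now();
    if (t - last < minIntervalMs) return false; // skip: reuse previous depth map
    last = t;
    process();
    return true;
  };
}
```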
Wrong coordinate system
Depth coordinate space and RGB coordinate space differ; your measurements will be off if you ignore the camera intrinsics.
Mitigation: Always project depth into the RGB frame using the provided intrinsic matrix.
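The projection boils down to: unproject the depth pixel to a 3D camera-space point with the depth camera's intrinsics, then reproject it with the RGB intrinsics. A simplified sketch that assumes the two cameras share an optical centre — real devices also need the extrinsic rotation/translation between sensors, and all names here are illustrative:

```typescript
interface Intrinsics { fx: number; fy: number; cx: number; cy: number; }

// Map a pixel (u, v) from the depth image into RGB image coordinates.
function depthPixelToRgb(
  u: number, v: number, depthM: number,
  depthK: Intrinsics, rgbK: Intrinsics,
): { x: number; y: number } {
  // Unproject to a 3D point in camera space using the depth intrinsics...
  const X = ((u - depthK.cx) / depthK.fx) * depthM;
  const Y = ((v - depthK.cy) / depthK.fy) * depthM;
  // ...then reproject that point with the RGB intrinsics.
  return {
    x: (X / depthM) * rgbK.fx + rgbK.cx,
    y: (Y / depthM) * rgbK.fy + rgbK.cy,
  };
}
```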
When To Use It (And When Not To)
Good fit
- Portrait mode and background replace effects
- AR object occlusion and physics
- Object measurement (size, distance)
- Face / object 3D scanning
Look elsewhere if…
- Apps that need to ship to every device — depth availability is uneven
- Outdoor depth at long range — most consumer sensors top out at a few metres
- Replacing LiDAR for high-precision 3D scanning
- Reading depth without a camera permission
Frequently Asked Questions
Do all iPhones have a depth sensor?
All iPhones with Face ID have a front TrueDepth sensor. Rear depth is computed from multiple cameras on Pro models and on dual-camera iPhones.
How is depth different from LiDAR?
LiDAR is a longer-range time-of-flight sensor available on iPhone Pro and iPad Pro. It measures up to 5 m at high precision; "regular" depth is shorter range and lower precision.
Can I get depth on Android without ARCore?
Some OEMs expose ToF directly via Camera2, but ARCore Depth is the only consistent cross-OEM way.
What's the typical accuracy?
TrueDepth: sub-millimetre at <30 cm. Stereo / ToF: a few centimetres at 1–2 m. ML monocular depth: relative only, not metric.
Build with the Depth Sensor on Newly
Ship a depth sensor-powered feature this week
Newly turns a description like “use the depth sensor for portrait / cut-out effects” into a real React Native app — permissions, native modules and UI included. Full source code is yours, and you can publish to the App Store and Google Play directly from the dashboard.
Want a deeper dive on the underlying APIs? See Apple's AVFoundation depth-capture documentation and Google's ARCore Depth API guide.
