Camera in Mobile Apps: A Practical Guide

Use the front and rear cameras for photos, video, scanning and computer vision in iOS and Android — with permissions, code samples and pitfalls.

Timothy Lindblom

Founder, Newly

The camera is the most expressive sensor on a phone — and the most complicated to integrate well. From a simple "take a photo" button to live ML inference on every frame, the camera APIs across Expo, AVFoundation and CameraX cover an enormous surface area. This guide focuses on the patterns you'll actually use day-to-day.


Key Takeaways

  • Both platforms require an explicit camera permission and a clear usage description string.
  • For images and short videos, use a high-level API (Expo Camera, UIImagePickerController, CameraX `ImageCapture`).
  • For real-time CV (QR codes, ML, AR), use a streaming API (Vision Camera, AVCaptureSession, CameraX `Analyzer`).
  • Always release the camera when leaving the screen — leaks lock the hardware until the app restarts.

Camera at a Glance

  • 4K+ · common video resolution
  • 60–240 fps · slow-motion capture ranges
  • Required · explicit camera permission on iOS & Android
  • ~100% · of smartphones ship with at least one camera

What It Is & How It Works

What it is. A CMOS image sensor and lens stack, with optional autofocus and image-stabilisation hardware. Most modern phones have several rear cameras (wide, ultra-wide, tele, depth) and one or two front cameras.

How it works. You request permission, open a session, configure inputs (lens, resolution, format) and outputs (still image, video, frame analyser). For most apps, a high-level wrapper (Expo Camera, react-native-vision-camera, UIImagePickerController, CameraX) is the right starting point.

Units & signal. Image dimensions in pixels. Video frame rate in fps. Sensor frame buffers in YUV / RGB. ISO, exposure time and white-balance values when you go manual.
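To make those units concrete, here is some back-of-envelope memory math for a raw YUV 4:2:0 stream (an illustrative sketch, not any camera API):

```typescript
// YUV 4:2:0 stores one byte of luma per pixel plus two quarter-resolution
// chroma planes, so each frame costs width * height * 1.5 bytes.
function yuv420FrameBytes(width: number, height: number): number {
  return width * height * 1.5;
}

function streamBytesPerSecond(width: number, height: number, fps: number): number {
  return yuv420FrameBytes(width, height) * fps;
}

const mb = (bytes: number) => bytes / (1024 * 1024);

console.log(mb(yuv420FrameBytes(1920, 1080)).toFixed(1));         // "3.0" MB per 1080p frame
console.log(mb(streamBytesPerSecond(1920, 1080, 30)).toFixed(0)); // "89" MB/s at 30 fps
```

Numbers like these are why streaming APIs hand you frames in pooled buffers that must be released promptly, rather than copies you can keep.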

What You Can Build With It

Photo / video capture

Standard "take a photo" or "record a video" flows that save to the gallery.

Example: A check deposit feature that captures the front and back of a cheque.

QR / barcode scanning

Real-time decoding of QR, EAN, UPC, PDF417 and more.

Example: A loyalty app that scans coupon barcodes at checkout.
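Scanner callbacks such as expo-camera's `onBarcodeScanned` fire on every frame while a code stays in view, so real scanners dedupe repeats. A minimal sketch (the `ScanResult` shape and the 2-second cooldown are our own assumptions, not part of any camera API):

```typescript
type ScanResult = { data: string; timestamp: number };

// Returns a predicate: true means "handle this scan", false means
// "same code seen again within the cooldown window, ignore it".
function createScanDeduper(cooldownMs = 2000) {
  let lastData: string | null = null;
  let lastTime = 0;
  return (scan: ScanResult): boolean => {
    const isRepeat = scan.data === lastData && scan.timestamp - lastTime < cooldownMs;
    lastData = scan.data;
    lastTime = scan.timestamp;
    return !isRepeat;
  };
}
```

Wiring the returned predicate into the scan callback keeps the "coupon applied" toast from firing thirty times a second.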

Document scanning + OCR

Detect document edges, perspective-correct, and extract text with on-device ML.

Example: A receipt-tracking app that turns photos into structured expense entries.

AR and computer vision

Stream frames into ARKit / ARCore or a custom ML model for object detection, pose, segmentation.

Example: A try-on app that overlays virtual sunglasses on the user's face.

Permissions & Setup

Both platforms prompt the user the first time you open the camera. iOS additionally requires a clear, user-facing string explaining why you need camera (and microphone) access — App Review is strict about this.

iOS · Info.plist

  • NSCameraUsageDescription
  • NSMicrophoneUsageDescription (if recording video with audio)

Android · AndroidManifest.xml

  • android.permission.CAMERA
  • android.permission.RECORD_AUDIO (for video with audio)
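With Expo, both platform entries can be set in one place through the expo-camera config plugin in app.json; the permission strings below are placeholders to replace with your real justification:

```json
{
  "expo": {
    "plugins": [
      [
        "expo-camera",
        {
          "cameraPermission": "Allow $(PRODUCT_NAME) to access your camera.",
          "microphonePermission": "Allow $(PRODUCT_NAME) to access your microphone."
        }
      ]
    ]
  }
}
```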

Code Examples

Setup

  • Expo: `npx expo install expo-camera`
  • iOS: import AVFoundation; add `NSCameraUsageDescription` to Info.plist
  • Android: declare `<uses-permission android:name="android.permission.CAMERA"/>` and use CameraX
import { CameraView, useCameraPermissions } from 'expo-camera';
import { useRef, useState } from 'react';
import { View, Button, Image } from 'react-native';

export function CameraScreen() {
  const ref = useRef<CameraView>(null);
  const [permission, requestPermission] = useCameraPermissions();
  const [photo, setPhoto] = useState<string | null>(null);

  if (!permission) return null;
  if (!permission.granted) return <Button title="Allow camera" onPress={requestPermission} />;

  async function takePhoto() {
    const result = await ref.current?.takePictureAsync({ quality: 0.8 });
    if (result?.uri) setPhoto(result.uri);
  }

  return (
    <View>
      <CameraView ref={ref} facing="back" style={{ width: '100%', aspectRatio: 3 / 4 }} />
      <Button title="Capture" onPress={takePhoto} />
      {photo && <Image source={{ uri: photo }} style={{ width: 100, height: 100 }} />}
    </View>
  );
}

Tip: With Newly, you describe the feature you want and the AI agent wires up the sensor, permissions, and UI for you. Try it free.

Best Practices

  • Pick the right wrapper

    Expo Camera for simple use; react-native-vision-camera for high-rate frame processing; AVFoundation / CameraX directly when you need full control.

  • Stop the session aggressively

    Always release the camera in `onDisappear` / `onPause` / cleanup — otherwise the hardware stays locked.

  • Pre-compress images before upload

    Photos are huge. Use HEIC / JPEG with quality 0.7–0.85 and downscale to the sizes you actually use.

  • Match preview aspect ratio to capture

    A 4:3 preview that captures 16:9 confuses users — configure the preview to match the aspect ratio of the captured output, and keep it consistent across screens.

  • Run ML on a downscaled feed

    You rarely need 4K frames for object detection — a 320×240 stream carries roughly 100× fewer pixels and is dramatically cheaper to process.
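Both the pre-compression and downscaled-feed advice reduce to simple arithmetic. A sketch (the 1600 px cap is an assumption for illustration, not a platform rule):

```typescript
// Clamp an image to a maximum edge length, preserving aspect ratio.
function fitWithin(width: number, height: number, maxEdge = 1600) {
  const scale = Math.min(1, maxEdge / Math.max(width, height));
  return { width: Math.round(width * scale), height: Math.round(height * scale) };
}

console.log(fitWithin(4032, 3024)); // { width: 1600, height: 1200 }
console.log(fitWithin(800, 600));   // unchanged: { width: 800, height: 600 }

// Pixel-count ratio between a 4K frame and a 320x240 analysis frame:
console.log((3840 * 2160) / (320 * 240)); // 108, the rough "100x" saving per frame
```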

Common Pitfalls

Permission denial loops

A user who taps "Don't allow" cannot retry from your in-app prompt — you must deep-link to settings.

Mitigation: On second denial, show an explainer and link to `Linking.openSettings()`.
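The retry logic can be captured as a tiny decision function, assuming a permission object shaped like the one expo-camera returns (`{ granted, canAskAgain }`); the action names here are our own:

```typescript
type Perm = { granted: boolean; canAskAgain: boolean };

// 'settings' means: show an explainer and deep-link via Linking.openSettings(),
// because the OS will no longer show its own permission prompt.
function nextPermissionAction(p: Perm): 'use-camera' | 'request' | 'settings' {
  if (p.granted) return 'use-camera';
  return p.canAskAgain ? 'request' : 'settings';
}

console.log(nextPermissionAction({ granted: false, canAskAgain: false })); // 'settings'
```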

Memory crashes from full-resolution captures

A 48 MP photo is ~150 MB in RAM as a bitmap.

Mitigation: Read pixel data from the file URL on demand, never hold full bitmaps in memory.
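The arithmetic behind that figure is just pixels times bytes per pixel, roughly 144–192 MB for 48 MP at 3–4 bytes per pixel (a sketch assuming an uncompressed decode; on-disk JPEG/HEIC files are far smaller):

```typescript
// Estimate the RAM cost of a fully decoded bitmap.
function decodedBitmapBytes(megapixels: number, bytesPerPixel = 4): number {
  return megapixels * 1_000_000 * bytesPerPixel;
}

const mib = (b: number) => b / (1024 * 1024);
console.log(mib(decodedBitmapBytes(48)).toFixed(0)); // "183" MiB for 48 MP RGBA
console.log(mib(decodedBitmapBytes(48, 3)).toFixed(0)); // "137" MiB at 3 bytes/pixel
```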

Wrong orientation

EXIF orientation differs from device orientation; many apps display photos rotated 90°.

Mitigation: Always honour the EXIF orientation tag when displaying or processing.
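For the common non-mirrored cases, the EXIF Orientation tag maps to a clockwise display rotation as sketched below (mirrored values 2, 4, 5 and 7 also require a flip and are omitted here):

```typescript
// Degrees to rotate the stored pixels clockwise so the image displays upright.
function exifRotationDegrees(orientation: number): number {
  switch (orientation) {
    case 3: return 180; // stored upside down
    case 6: return 90;  // common portrait-shot-on-landscape-sensor case
    case 8: return 270; // i.e. 90 degrees counter-clockwise
    default: return 0;  // 1, or a missing tag: already upright
  }
}

console.log(exifRotationDegrees(6)); // 90
```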

iOS Privacy Manifest miss

Apps shipped after May 2024 must declare camera usage in `PrivacyInfo.xcprivacy`.

Mitigation: Generate or update the privacy manifest as part of your release checklist.

When To Use It (And When Not To)

Good fit

  • Photo / video capture flows
  • Document scanning, OCR and KYC
  • QR / barcode scanning
  • AR and on-device ML inference

Look elsewhere if…

  • Background photo capture (banned on both platforms)
  • Continuous high-rate streaming over the network without user awareness
  • Reading other apps' camera output (sandboxed)
  • Replacing depth APIs — use the depth sensor for accurate 3D

Frequently Asked Questions

Expo Camera vs Vision Camera — which should I pick?

Expo Camera covers 80% of use cases (capture photo / video, basic preview). Vision Camera is better when you need to process frames in real time (ML, custom filters) or use multi-lens features.

How do I scan a QR code?

Both Expo Camera and Vision Camera include QR scanning out of the box. Native: AVFoundation `AVCaptureMetadataOutput` on iOS and ML Kit barcode scanner on Android.

Why is my preview black?

Almost always a missing or denied permission, or the session not started after configuration. Check the console — both platforms log the cause.

Can I capture from background?

No. Both iOS and Android disallow camera access while the app is backgrounded for privacy reasons.

Build with the Camera on Newly

Ship a camera-powered feature this week

Newly turns a description like “use the camera to capture photos and videos” into a real React Native app — permissions, native modules and UI included. Full source code is yours, and you can publish to the App Store and Google Play directly from the dashboard.

Start Building Your App

Want a deeper dive on the underlying APIs? See the Expo Camera, Apple AVFoundation and Android CameraX documentation.
