
AI Depth Map Generator — 2026 buyer guide

Free Alternative to LeiaPix/Immersity AI, Runway & MiDaS Web Depth Tools

We compared the four biggest AI depth map generators against ToolChamp on seven parameters. Here is the honest version — LeiaPix moved to paid + watermarked + non-commercial after the Immersity AI rebrand, Runway is bundled inside a $12-15/mo credit-based video editor, Hugging Face Spaces is free but queue-throttled with academic-grade UX, and Photoshop's Depth Blur Neural Filter requires a $19.99-22.99/mo Photoshop subscription.

By ToolChamp Editorial · 13 min read

LeiaPix used to be the free unlimited go-to for AI depth maps. Leia Inc. rebranded it as Immersity AI in 2023, and the unlimited free tier was replaced by watermarked 720p non-commercial-only exports — with raw depth gated behind the parallax-first workflow. Runway buried Extract Depth inside a credit-based editor. Hugging Face Spaces still hosts free demos but queue waits at peak hours are real. The clean-raw-depth-PNG niche has fewer no-signup options than it used to.

Depth-map generation feeds three downstream workflows: ControlNet inputs for Stable Diffusion, Blender displacement maps for 3D, and parallax-photo animations. Most tools optimise for one and hide the others. We rate each on depth accuracy, edge sharpness, output options, speed, free-tier generosity, privacy, and what each tool genuinely gives you for free.
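
For readers scripting any of these pipelines, the first step is usually the same: load the 8-bit grayscale depth PNG and rescale it to floats. A minimal sketch — the synthetic ramp image and the `depth.png` filename are illustrative, not part of any tool reviewed here:

```python
import numpy as np
from PIL import Image

# Synthetic stand-in for a downloaded depth map
# (any 8-bit grayscale PNG works; filename is illustrative).
ramp = np.tile(np.arange(256, dtype=np.uint8), (64, 1))
Image.fromarray(ramp, mode="L").save("depth.png")

# Load and rescale to 0.0-1.0 floats — the common first step for
# scripted Blender displacement or any numeric pipeline.
depth = np.asarray(Image.open("depth.png").convert("L"), dtype=np.float32) / 255.0
```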

AI Depth Map Generators compared at a glance

All products side-by-side on the same parameters and the same rubric. Detailed breakdowns are below.

AI Depth Map Generators comparison — free alternative vs paid leaders, rated on 12 parameters plus an overall score.

LeiaPix / Immersity AI — 3.3 / 5
  • Free tier: unlimited watermarked, 720p, non-commercial
  • Cheapest paid: Image $4.99/mo (500 credits/mo)
  • Raw depth export: yes; raw depth gating by tier is unclear
  • 16-bit / float depth: yes (vendor states 16-bit PNG)
  • Colored visualization: not clearly documented
  • Parallax export: yes (3D photo / video formats)
  • 3D mesh export: documentation unclear
  • Video depth: yes (video conversion plan)
  • ControlNet ready: yes (export grayscale PNG manually)
  • Batch: not clearly documented
  • Signup: yes
  • File retention: account-stored; retention unclear

Runway — 2.9 / 5
  • Free tier: 125 one-time credits (no renewal)
  • Cheapest paid: Standard $12/mo annual ($15/mo monthly)
  • Raw depth export: yes (video clip depth export)
  • 16-bit / float depth: not clearly documented
  • Colored visualization: not documented
  • Parallax export: no (broader suite has camera-motion VFX, not parallax export)
  • 3D mesh export: no
  • Video depth: yes (Extract Depth specialty)
  • ControlNet ready: manual (export frames, then use)
  • Batch: no (depth-specific)
  • Signup: yes
  • File retention: stored in Private assets; SOC 2 Type II compliant

Hugging Face Spaces (MiDaS / Depth-Anything) — 3.6 / 5
  • Free tier: unlimited, queue-throttled
  • Cheapest paid: N/A — free demos
  • Raw depth export: yes (PNG download)
  • 16-bit / float depth: implementation-dependent (most Spaces output 8-bit visualization)
  • Colored visualization: yes (varies by Space)
  • Parallax export: no
  • 3D mesh export: no (official demos)
  • Video depth: some community video Spaces exist
  • ControlNet ready: yes (directly drops into ControlNet)
  • Batch: no in the UI; yes via Python script
  • Signup: no for many public Spaces
  • File retention: per-Space implementation; unclear retention

Photoshop Depth Blur Neural Filter — 3.1 / 5
  • Free tier: 7-day trial
  • Cheapest paid: Photography plan $19.99/mo
  • Raw depth export: yes (Depth Map As Layer workflow)
  • 16-bit / float depth: yes (16-bit Photoshop document workflow)
  • Colored visualization: no (manual gradient mapping)
  • Parallax export: no
  • 3D mesh export: no
  • Video depth: no
  • ControlNet ready: yes (manual layer export → ControlNet)
  • Batch: partial via Actions / scripts
  • Signup: yes
  • File retention: local desktop unless cloud features used

ToolChamp — 3.9 / 5 (the free option benchmarked against the paid leaders)
  • Free tier: unlimited, no signup, no watermark
  • Cheapest paid: N/A — fully free
  • Raw depth export: yes (PNG)
  • 16-bit / float depth: no (8-bit)
  • Colored visualization: yes (Viridis)
  • Parallax export: no
  • 3D mesh export: no
  • Video depth: no
  • ControlNet ready: yes (PNG → ControlNet)
  • Batch: no
  • Signup: no
  • File retention: deleted post-job

How each depth map generator stacks up

Each product is rated 1–5 stars on seven parameters using the same rubric. Overall score is an unweighted average.

#1 of 5 · AI Depth Map Generators

LeiaPix / Immersity AI

The consumer parallax / 3D-photo specialist. Free is unlimited but watermarked, 720p, non-commercial. Vendor docs state 16-bit PNG depth maps — but the raw depth is hidden behind the parallax-first workflow.

immersity.ai · Leia Inc. (LeiaPix Converter rebranded as Immersity AI in 2023) · Web · Launched 2022 as LeiaPix; rebranded 2023 as Immersity AI
3.3 / 5

Scorecard

Depth Accuracy
4.0
Edge Sharpness
3.0
Output Options
4.0
Speed
5.0
Free-Tier Generosity
1.0
Privacy & Trust
3.0
Extras
3.0
Strengths
  • Very easy 2D-photo-to-3D-motion workflow for non-technical creators.
  • Strong parallax animation output is the headline feature — usable for social posts and "3D photo" videos.
  • Vendor documentation specifies 16-bit PNG depth precision — better than many basic 8-bit web demos when raw depth is available.
  • Broader Leia ecosystem connects to lightfield / spatial display hardware for users in that workflow.
Honest weaknesses
  • Free tier is watermarked, capped at 720p, and non-commercial — a hard regression from the original LeiaPix-era free use.
  • Raw depth export is hidden behind the parallax-animation-first UX; depth-only users have to dig through settings.
  • Like every monocular depth model, results soften on hair, foliage, glass, mirrors, and unusual close-ups.
  • Credit-based pricing makes one-off depth-map use feel disproportionate.

Pricing

Free watermarked 720p; Image $4.99/mo; Video $24.99/mo; Video Pro $49.99/mo

Free
$0
Unlimited watermarked conversions
Image
$4.99 / mo
500 conversion credits per month
Video
$24.99 / mo
Video conversion plan
Video Pro
$49.99 / mo
Higher video plan with larger credit allowance
Enterprise
Contact sales
Custom SDK / enterprise usage through Leia ec…

The unlimited-and-free LeiaPix era is gone. The Immersity AI free tier is unlimited but watermarked, capped to 720p, and marked non-commercial. The Image plan at $4.99/mo unlocks 500 conversion credits with monthly expiry. Vendor docs describe depth maps as 16-bit PNG, but the exact gating for raw depth export by tier is not clearly published on the public pricing page.

Capabilities

Input
PNG, JPG, JPEG, WebP for images; H.264 / H.265 for video
Output
PNG depth maps (vendor docs state 16-bit precision); 3D photo / 3D video parallax exports
Batch
Not clearly documented on public pages
API
Yes (enterprise SDK references via Leia / Immersity ecosystem)

Modes / specializations

Consumer parallax · Video / Video Pro tiers gate longer videos · 3D content platform

What real users say

Trustpilot
No reliable product-specific page
G2
No page found
Reddit pulse
Mixed. Users still like the easy 3D-photo motion workflow, but multiple r/CrossView and AI-art threads complain the rebrand replaced the old free-unlimited LeiaPix experience with watermarks and credit gating.
"Depth maps must be in PNG format."
— Immersity / Leia FAQ, May 2026
"Immersity AI is not giving free exports anymore."
— Reddit r/CrossView, 2025

Our verdict

Immersity AI is the right tool when finished 3D-photo parallax animation is the actual goal — that's the product. The depth map is intermediate, not the headline output. It is the wrong tool when you just need a raw depth PNG for ControlNet, Blender displacement, or any external pipeline — the parallax-first UX and watermarked / non-commercial free tier are friction you don't need.

Who it's for: Social creators making "3D photo" parallax videos for Instagram / TikTok, casual users animating family photos, and Leia-ecosystem users with spatial / lightfield workflows.

#2 of 5 · AI Depth Map Generators

Runway Extract Depth

Depth tool buried inside Runway's broader video / Gen-4 creative suite. Strong for video / clip-based VFX (displacement, fake camera moves) — overkill for one static photo.

runwayml.com · Runway AI, Inc. · Web app · Runway launched 2018; Extract Depth documented by 2023
2.9 / 5

Scorecard

Depth Accuracy
3.0
Edge Sharpness
3.0
Output Options
2.0
Speed
3.0
Free-Tier Generosity
2.0
Privacy & Trust
3.0
Extras
4.0
Strengths
  • Extract Depth is genuinely useful for video parallax / fake drone moves / displacement workflows.
  • Broader Runway suite means users may already have credits for video generation, editing, motion tools — depth is a bonus, not the headline.
  • Depth export from a timeline clip beats single-image-only web demos for VFX / video workflows.
  • SOC 2 Type II compliance is the strongest formal posture in this comparison.
Honest weaknesses
  • Credit pricing is hard to reason about for one-off depth tasks — there's no published per-export cost.
  • Depth tool is buried inside a video editor, not presented as a standalone static depth-map generator.
  • Support and billing complaints appear in Trustpilot reviews for Runway generally.
  • No raw static-image depth-map workflow clearly documented — Extract Depth is clip-based.

Pricing

Free 125 one-time credits; Standard $12/mo annual; Pro $28/mo; Unlimited $76/mo; Enterprise custom

Free
$0
125 one-time credits (do not renew)
Standard
$12 / mo annual ($15 / mo monthly)
625 monthly credits
Pro
$28 / mo annual ($35 / mo monthly)
2
Unlimited
$76 / mo annual ($95 / mo monthly)
2
Enterprise
Contact sales
Security

Runway prices everything in compute credits. Free is 125 one-time credits with no renewal; paid plans reset monthly and credits do not roll over. The pricing page does not expose a specific 'cost per depth-map export' for Extract Depth — depth is one feature inside the broader video editor, not a clean standalone SKU.

Capabilities

Input
Video clips supported in Runway editor; exact Extract Depth input list varies by clip
Output
Video clip depth map export (per-frame) into Private assets
Batch
No (depth-specific)
API
Yes for Runway platform; no public Extract Depth pricing

Modes / specializations

Motion brush · green-screen · AI editing · trust materials

What real users say

Trustpilot
runway.ai 2.2/5 over 9 reviews; runwayml.com page also shows negative customer-service complaints — small samples, treat as directional
G2
No reliable product-specific rating captured
Reddit pulse
Positive among r/AfterEffects and VFX users for using Extract Depth as a displacement / fake-drone-move tool. Critical themes around Runway generally center on credits, support, and broader Gen-4 generation consistency rather than depth export specifically.
"Choose Depth Map from the Export as dropdown."
— Runway Help (Extract Depth)
"No support. Their audio to add is not working."
— Trustpilot review snippet, February 2026

Our verdict

Runway is the right tool when depth is one stage in a broader video / VFX workflow — clip displacement, fake camera moves, motion brushing — and you're already paying for the credits. It is the wrong tool for a single static photo depth map: the credit math, video-editor wrapper, and lack of standalone static-image depth pricing make it overkill.

Who it's for: VFX and motion designers, After Effects users wanting video parallax via displacement, and Gen-4 users who already have Runway credits from broader workflows.

#3 of 5 · AI Depth Map Generators

Hugging Face Spaces (MiDaS / Depth-Anything)

Free academic web demos from the original research teams — MiDaS, Depth-Anything, Depth-Anything-V2. Best-in-class accuracy and ControlNet compatibility, but bare-bones UX and queue waits.

huggingface.co · Hugging Face hosts; MiDaS from Intel ISL; Depth-Anything from Lihe Yang et al. · Web demo · MiDaS research 2019–2021; Depth-Anything V1 + V2 Spaces 2024; Depth-Anything V3 Space late 2025
3.6 / 5

Scorecard

Depth Accuracy
5.0
Edge Sharpness
4.0
Output Options
3.0
Speed
2.0
Free-Tier Generosity
5.0
Privacy & Trust
3.0
Extras
3.0
Strengths
  • Free access to academic state-of-the-art models — Depth-Anything-V2 is widely considered the SoTA monocular depth model as of 2024–2025.
  • Direct ControlNet / ComfyUI compatibility — grayscale PNG is the canonical preprocessor input for SD depth conditioning.
  • Open-source model downloads let advanced users run local batch inference, script video sequences, and choose larger model variants for non-commercial use.
  • Best raw accuracy in this comparison when the Space is running properly.
Honest weaknesses
  • Public Spaces UX is minimal and can be broken, sleeping, or queued — search results show several runtime-error Spaces at any given time.
  • The largest Depth-Anything-V2 variants (Base / Large / Giant) ship under CC-BY-NC-4.0 non-commercial licences — commercial projects can only use the Small variant.
  • Monocular depth still fails on glass, mirrors, hair, foliage, and unusual close-up scenes — universal across all depth tools but worth flagging.
  • Queue waits during peak hours can stretch a 'few seconds' job into multi-minute waits on community compute.

Pricing

Free public demos (queue-limited); HF Pro / ZeroGPU priority varies; self-host local inference is free of software cost

Free Spaces demo
$0
Unlimited practical use subject to queue waits
Hugging Face Pro / ZeroGPU priority
Varies
Priority and compute benefits depending on HF…
Self-host local model
$0 software
Unlimited local inference if you have a GPU
Enterprise
Custom
Hugging Face Enterprise with private Spaces a…

The model demos themselves are free. Reliability depends on Space status — Hugging Face search results often show some Spaces running, others sleeping, runtime-error, or build-error. For production / batch use, advanced users self-host the models locally or via inference endpoints.

Capabilities

Input
Common web image formats accepted by Gradio demos (varies by Space)
Output
Usually 8-bit visualization PNG; bit depth depends on Space implementation
Batch
No in public UI; Yes via Python script for self-hosted models
API
Partial — Spaces can be called programmatically depending on configuration; no formal consumer API for every demo

Modes / specializations

MiDaS MIT-licensed · Depth-Anything-V2 Small commercial OK; Base / Large / Giant CC-BY-NC-4.0 · scripting · batch · pro pipeline integration

What real users say

Trustpilot
Not applicable for academic Spaces
G2
No product-specific page
Reddit pulse
Strongly positive in r/StableDiffusion and r/comfyui for Depth-Anything-V2 quality and ControlNet compatibility. Common criticism: public Spaces are sometimes broken or queued, and licence complexity matters for commercial users.
"Depth-Anything-V2 is a better model."
— Reddit r/StableDiffusion, 2024
"Upload any picture and the app will compute a grayscale depth map."
— MiDaS Space (intentionally bare-bones UX)

Our verdict

Hugging Face Spaces is the right tool when you need academic state-of-the-art depth quality, ControlNet compatibility, or batch / video / local-inference workflows — and you can tolerate Gradio UX and occasional queue waits. It is the wrong tool when you need polished consumer UX, predictable uptime, parallax animation export, or a guarantee that the Space will be running when you visit.

Who it's for: Stable Diffusion / ComfyUI users preprocessing depth for ControlNet, researchers benchmarking depth models, technical 3D artists running local Blender displacement workflows, and anyone fine with running Python.
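
For the "fine with running Python" crowd, a self-hosted batch run usually takes the shape below. Note the hedge: `estimate_depth` here is a placeholder stub (it returns a flat gradient) — in practice you would swap in whichever model you self-host, for example a Depth-Anything-V2 checkpoint:

```python
import numpy as np
from pathlib import Path
from PIL import Image

def estimate_depth(img: Image.Image) -> np.ndarray:
    """Placeholder stub — swap in a real self-hosted depth model here.
    Returns a left-to-right gradient so the script runs end to end."""
    w, h = img.size
    return np.tile(np.linspace(0.0, 1.0, w, dtype=np.float32), (h, 1))

def batch_depth(in_dir: str, out_dir: str) -> int:
    """Run depth estimation over every PNG in in_dir; return file count."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    n = 0
    for path in sorted(Path(in_dir).glob("*.png")):
        depth = estimate_depth(Image.open(path).convert("RGB"))
        # Save as 8-bit grayscale PNG — the ControlNet-ready convention.
        Image.fromarray((depth * 255).astype(np.uint8), mode="L").save(
            out / f"{path.stem}_depth.png")
        n += 1
    return n
```

The loop structure is the point, not the stub: once the model call is real, the same skeleton handles video frame sequences too.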

#4 of 5 · AI Depth Map Generators

Photoshop Depth Blur Neural Filter

Depth as an intermediate layer in Photoshop's bokeh-blur workflow. 'Depth Map As Layer' exposes the raw depth — with full manual mask / curves / 16-bit editing. Strongest manual refinement workflow, locked behind a $19.99-22.99/mo subscription.

adobe.com · Adobe Inc. · Photoshop desktop (Mac, Windows) · Depth Blur added with the Neural Filters era, 2021

3.1 / 5
Scorecard

Depth Accuracy
3.0
Edge Sharpness
3.0
Output Options
4.0
Speed
4.0
Free-Tier Generosity
1.0
Privacy & Trust
4.0
Extras
3.0
Strengths
  • Strongest manual refinement workflow in this comparison — masks, curves, alpha channels, manual painting on the depth layer.
  • Professional editing environment integrates depth as one layer among many — combine with Lens Blur, healing, clone stamp.
  • Adobe compliance posture is the strongest in this comparison (SOC 2 Type 2, ISO 27001 series).
  • 16-bit document workflow supports pro 3D / displacement pipelines where 8-bit banding would matter.
Honest weaknesses
  • Expensive subscription if you only need one depth map — $19.99-22.99/mo for a single static-image workflow is overkill.
  • Steep learning curve compared with one-click web tools — Photoshop is a professional editor, not a quick depth-map generator.
  • 2025 user reports describe broken / hanging depth-map output in some Photoshop 2026 builds — version-specific bugs matter.
  • Adobe cancellation friction is well-documented on Trustpilot — read terms before subscribing.

Pricing

7-day trial; Photography plan $19.99/mo; Photoshop standalone $22.99/mo; Teams $37.99/mo

Free trial
$0
7-day Photoshop trial
Photography plan
$19.99 / mo
Photoshop + Lightroom
Photoshop standalone
$22.99 / mo
Photoshop desktop
Teams Photoshop
$37.99 / mo per license
Business Photoshop license with admin features
Creative Cloud Pro (teams)
$99.99 / mo per license
20+ apps including Photoshop

Depth Blur Neural Filter cannot be purchased separately. The Photography plan at $19.99/mo is the cheapest legitimate path; standalone Photoshop is $22.99/mo. Adobe pricing pages vary by region — verify at checkout before subscribing.

Capabilities

Input
Broad Photoshop file support — JPG, PNG, TIFF, PSD, RAW via Camera Raw, HEIC via system support
Output
Photoshop layer / channel workflow — export as PNG, TIFF, or PSD with 16-bit document configuration
Batch
Partial via Photoshop Actions / scripts; Depth Blur batch reliability varies
API
No public Depth Blur Neural Filter API

Modes / specializations

Curves · mask · blend · SOC 3 · ISO 27001 · ISO 27017

What real users say

Trustpilot
photoshop.com page has 22 reviews — small sample, mostly negative on subscription / support
G2
Adobe Photoshop has a G2 review page; reviews praise Photoshop as professional workstation power, criticize learning curve
Reddit pulse
Photoshop users discuss Depth Blur as useful but imperfect; some prefer to export and manually edit the depth layer because the automatic blur 'can't get the blur right' on difficult images.
"One of the fullest capable photo editing packages."
— Capterra review, January 2026
"Price point is expensive."
— Capterra review, December 2025

Our verdict

Photoshop Depth Blur is the right tool when manual depth refinement matters — paint on the depth layer, fix the AI's mistakes on hair, mask out the background — and you're already a Photoshop subscriber. It is the wrong tool when you just want a raw depth PNG from one photo: $19.99-22.99/mo plus a learning curve is overkill for a single-file workflow.

Who it's for: Photo retouchers, designers, and VFX artists already paying for Photoshop who want manual control over the depth output and 16-bit document precision.

#5 of 5 · AI Depth Map Generators · The free option

ToolChamp AI Depth Map Generator

Direct raw depth-map PNG download — Grayscale (3D-pipeline standard input) or Viridis (colored visualization), Compare or Depth-only view, HEIC input. Free, no signup, no watermark, no credit math.

Open the tool · ToolChamp · Browser-based — no install, no extension
3.9 / 5

Scorecard

Depth Accuracy
4.0
Edge Sharpness
4.0
Output Options
3.0
Speed
5.0
Free-Tier Generosity
5.0
Privacy & Trust
4.0
Extras
2.0
Strengths
  • Direct raw depth-map PNG download — no parallax-first UX detour, no credit math, no watermarked / 720p / non-commercial gating like Immersity AI free.
  • Two output modes (Grayscale + Viridis) cover both the 3D-pipeline standard input format and human-readable visualization.
  • Compare and Depth-only result views — drag a slider over the original to see the depth, or view the depth full-frame.
  • No signup, no email, no credit card, no daily/monthly cap, no watermark, no upsell modal — strongest zero-friction free workflow in the comparison.
  • Files deleted post-job — no account, no retention beyond processing window.
  • HEIC / HEIF input — iPhone photos work directly without conversion.
  • Several times faster than Hugging Face Spaces during peak hours — Spaces share community compute with queue waits; ToolChamp runs on its own GPU backend.
  • Grayscale output drops directly into Stable Diffusion ControlNet depth conditioning, Blender displacement, and AR / parallax tools.
Honest weaknesses
  • No 16-bit / 32-bit float depth export — output is 8-bit PNG, which can introduce visible banding in pro 3D pipelines.
  • No parallax / 3D-photo animation export — LeiaPix/Immersity specialise here; ToolChamp stops at the depth map.
  • No 3D mesh / point cloud export (OBJ / PLY / GLTF) — depth-to-mesh is a separate step.
  • No video / sequence depth — single image only, no temporal consistency across frames.
  • No manual depth refinement — depth edges on hair, foliage, and glass are as good as the model produces.
  • Relative / normalised depth, not metric depth — no camera intrinsics or focal length. Unsuitable for photogrammetric reconstruction.
  • Quality depends on the source — strong perspective cues and clean foreground/background separation give the cleanest depth.
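
Since depth-to-mesh is a separate step outside any tool here, a rough sketch of what that step involves — a normalized depth array displaced into a Wavefront OBJ grid mesh (the function name and `scale` parameter are our own, illustrative only):

```python
import numpy as np

def depth_to_obj(depth: np.ndarray, scale: float = 1.0) -> str:
    """Turn a normalized HxW depth array into a displaced grid mesh
    as Wavefront OBJ text (vertices plus quad faces)."""
    h, w = depth.shape
    lines = []
    for y in range(h):
        for x in range(w):
            # Convention assumed here: lighter (higher value) = closer,
            # so closer pixels are raised toward the camera along Z.
            lines.append(f"v {x} {y} {depth[y, x] * scale:.4f}")
    for y in range(h - 1):
        for x in range(w - 1):
            i = y * w + x + 1  # OBJ vertex indices are 1-based
            lines.append(f"f {i} {i + 1} {i + w + 1} {i + w}")
    return "\n".join(lines)

mesh = depth_to_obj(np.linspace(0, 1, 16, dtype=np.float32).reshape(4, 4))
```

A real pipeline would also decimate the grid and smooth depth edges, but this is the core of what "depth-to-mesh" means.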

Capabilities

Input
JPG, JPEG, PNG, WebP, HEIC, HEIF (HEIC auto-converted in browser)
Output
PNG (8-bit)
Batch
Single photo at a time
API
No — UI only

Modes / specializations

Darker = farther, lighter = closer · the format Blender and ControlNet expect · output type · file size

Our verdict

ToolChamp wins this category at 3.9/5 overall because the simplest workflow wins when raw depth PNG is the actual goal. Hugging Face Spaces matches us on accuracy (5/5 vs our 4/5) and free-tier generosity (5/5 each), but loses on speed (2/5 — queue waits) and UX. LeiaPix/Immersity moved to paid with watermarks. Runway buries depth inside a credit-based video editor. Photoshop is $19.99-22.99/mo. We score 4/5 (not 5/5) on Depth Accuracy because no published independent benchmark validates us against Depth-Anything-V2 Large. Output Options is 3/5 — Grayscale + Viridis PNG covers the common cases, but no 16-bit, no parallax, no mesh. Extras is 2/5 — no API, no batch, no plugin, no video depth.

Who it's for: Anyone with one photo who needs a raw depth-map PNG — for Blender displacement, Stable Diffusion ControlNet conditioning, parallax animation in another tool, or AR / VR prototyping — and doesn't want a subscription, credit math, watermark, or queue wait.

Which free depth map generator should you pick?

Common situations and the product that actually fits them.

You want a finished 3D-photo parallax animation (not raw depth)

LeiaPix / Immersity AI

Parallax animation is the headline feature. Free is unlimited but watermarked, 720p, non-commercial; Image plan $4.99/mo unlocks 500 credits.

You work in video / VFX and need clip-based depth (displacement, fake camera moves)

Runway Extract Depth

The only tool in this comparison with documented timeline-clip depth export. Standard plan $12/mo annual gives 625 monthly credits across the broader Gen-4 suite.

You need academic state-of-the-art depth quality for ControlNet preprocessing or local batch scripts

Hugging Face Spaces (Depth-Anything-V2)

Free public demos from the original research teams. Direct ControlNet compatibility. Self-host the model locally for unlimited batch / video / pro use. Tolerate queue waits or broken Spaces during peak hours.

You already pay for Photoshop and want manual depth refinement

Photoshop Depth Blur Neural Filter

Depth Map As Layer workflow puts the depth on a Photoshop layer with full mask / curves / 16-bit document editing. $19.99/mo Photography plan or $22.99/mo standalone.

You want a raw depth PNG for ControlNet, Blender, or AR — and don't want to subscribe

ToolChamp

Free, no signup, no watermark, no credit math. Direct PNG download in Grayscale (3D-pipeline standard) or Viridis (colored visualization). Simplest workflow that exists in 2026.

You're allergic to credit math and watermarks

ToolChamp

LeiaPix watermarks free output. Runway prices in credits. Photoshop is a subscription. Hugging Face is free but Spaces can be broken / queued. ToolChamp is the only zero-friction raw-depth workflow in this comparison.

Frequently asked questions about AI depth map generators

Quick answers to questions that come up before, during, and after picking a tool.

Is there a genuinely free alternative to LeiaPix/Immersity AI, Runway, and MiDaS web tools?
Yes. ToolChamp gives you direct raw depth-map PNG download in two output modes (Grayscale for 3D pipelines, Viridis for colored visualization), Compare and Depth-only result views, HEIC input — all free, no signup, no watermark, no daily cap, no credit math. Immersity AI free is watermarked / 720p / non-commercial. Runway free is 125 one-time credits. Hugging Face Spaces is free but queue-throttled. Photoshop is a 7-day trial.
Why does ToolChamp win this category at 3.9/5?
Because the simplest workflow wins when raw depth PNG is the actual goal. Hugging Face Spaces ties us on free-tier (5/5 each) and beats us on raw academic accuracy (5/5 vs 4/5), but Spaces lose on speed (queue waits) and UX (Gradio bare-bones). LeiaPix/Immersity moved to paid with watermarks. Runway buries depth inside a credit-based video editor. Photoshop is $19.99-22.99/mo. We score 4/5 on Depth Accuracy (not 5/5) because no published independent benchmark validates us against Depth-Anything-V2 Large. The headline differentiator is zero-friction raw-depth-PNG download.
Why does ToolChamp output 8-bit PNG instead of 16-bit?
Three reasons. First, the most common downstream uses (Stable Diffusion ControlNet depth conditioning, parallax animation, AR effects, screen-resolution Blender displacement) work fine with 8-bit. Second, 16-bit PNG output is bandwidth-heavier — bigger files for users who download many depth maps. Third, the more demanding pro 3D pipelines that genuinely need 16-bit / float precision typically also need batch processing, manual edge refinement, and metric depth — all of which ToolChamp doesn't ship. For pro 3D, Photoshop's Depth Map As Layer + 16-bit document workflow or a self-hosted Depth-Anything-V2 local inference is the right tool.
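
The banding trade-off is easy to demonstrate: quantize a smooth depth ramp to 8-bit and to 16-bit and count the distinct levels each representation can hold.

```python
import numpy as np

# A smooth depth ramp, quantized two ways.
ramp = np.linspace(0.0, 1.0, 100_000)
q8 = np.round(ramp * 255).astype(np.uint8)      # 8-bit PNG precision
q16 = np.round(ramp * 65535).astype(np.uint16)  # 16-bit PNG precision

print(len(np.unique(q8)))   # 256 levels — visible terracing on large displacements
print(len(np.unique(q16)))  # 65536 levels — smooth
```

256 steps across the full depth range is plenty for ControlNet conditioning, but a large Blender displacement can turn each step into a visible terrace.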
Can I use the ToolChamp depth output directly with Stable Diffusion ControlNet?
Yes. Pick the Grayscale output type, download the PNG, and drop it directly into your ControlNet depth preprocessor / model. The grayscale convention (darker = farther, lighter = closer) matches what SD ControlNet expects. The Viridis colored output is for visual inspection only — convert to grayscale before using as a ControlNet input.
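
As a sketch of the convention handling — the synthetic ramp image below stands in for a downloaded grayscale depth PNG:

```python
import numpy as np
from PIL import Image, ImageOps

# Stand-in for a grayscale depth PNG (darker = farther, lighter = closer,
# the convention SD ControlNet depth models expect).
depth = Image.fromarray(np.tile(np.arange(256, dtype=np.uint8), (32, 1)), mode="L")

# The file usually drops in as-is. If some other tool in your pipeline
# uses the opposite convention (lighter = farther), one invert flips it;
# a second invert restores the original.
flipped = ImageOps.invert(depth)
```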
Why isn't there a parallax / 3D-photo animation export?
Because parallax animation is a separate workflow from depth-map generation. LeiaPix/Immersity AI specializes in parallax as the headline feature — that's their whole product. ToolChamp's scope is the depth map. If you want a parallax animation, take the depth PNG into LeiaPix's free animator, Depthy, Blender's camera-projection setup, or Stable Diffusion ControlNet-driven motion workflows.
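
For the curious, the core of a parallax effect is just depth-proportional pixel shifting. A deliberately naive sketch (real parallax tools also inpaint the occlusion gaps this leaves behind):

```python
import numpy as np

def parallax_shift(img: np.ndarray, depth: np.ndarray, max_shift: int) -> np.ndarray:
    """Shift each pixel horizontally in proportion to its depth value
    (0 = far, 1 = near). Near pixels move farther — that's parallax."""
    h, w = depth.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            nx = x + int(round(depth[y, x] * max_shift))
            if 0 <= nx < w:
                out[y, nx] = img[y, x]
    return out
```

Animating `max_shift` over a few frames produces the familiar "3D photo" wobble.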
Does ToolChamp work on iPhone HEIC photos?
Yes — HEIC and HEIF are supported as input and auto-converted in the browser before processing. Most paid competitors expect JPG / PNG only; ToolChamp handles iPhone photos directly.
What's the difference between 'relative' and 'metric' depth?
Relative (or normalized) depth maps show how far each pixel is from the camera in a 0-1 range — useful for visual effects, ControlNet conditioning, and parallax animation. Metric depth maps include absolute scale (meters or centimeters) plus camera intrinsics (focal length, principal point) — needed for photogrammetry, 3D reconstruction, and measurement. ToolChamp outputs relative depth, like every consumer-facing tool in this comparison. For metric depth, look at Apple's Depth Pro model, ZoeDepth, or commercial sensor-based capture (LiDAR / RGB-D cameras).
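
A sketch of what "relative" means in practice — min-max normalizing a raw prediction (the function name is ours, for illustration):

```python
import numpy as np

def normalize_relative(raw: np.ndarray) -> np.ndarray:
    """Min-max normalize a raw monocular-depth prediction to 0-1.
    This is relative depth: the ordering of pixels is meaningful, the
    absolute scale is not. Recovering metric depth would additionally
    require real-world scale and camera intrinsics."""
    lo, hi = float(raw.min()), float(raw.max())
    if hi <= lo:
        return np.zeros_like(raw, dtype=np.float32)
    return (raw - lo) / (hi - lo)
```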

How to generate a depth map for free in ToolChamp

Free in-browser AI monocular depth estimation with Grayscale and Viridis output — no signup, no watermark.

  1. Upload a photo: drag in a JPG, PNG, WebP, HEIC, or HEIF up to 10 MB and 4096 px. Depth estimation runs on the GPU backend.

  2. Pick an output type: Grayscale (standard 3D-pipeline input — drops into ControlNet depth or Blender displacement) or Viridis (perceptually uniform visualization).

  3. Inspect and download the PNG: compare side-by-side or view depth-only full-frame, then download the depth PNG. Files are deleted server-side post-job.

The honest summary

There is no single best AI depth map generator for every situation. LeiaPix/Immersity AI wins when finished parallax animation is the goal. Runway wins for video / clip-based VFX inside a broader creative suite. Hugging Face Spaces wins on raw academic accuracy and ControlNet preprocessing when you can tolerate queue waits. Photoshop wins for manual depth refinement and 16-bit document workflows.

But for the most common case — a person with one photo who wants a raw depth-map PNG for Blender, ControlNet, AR, or just to see what the depth looks like — ToolChamp wins at 3.9/5 because the simplest workflow wins when raw depth is the actual goal. We sacrifice 16-bit precision (use Photoshop), parallax export (use LeiaPix), video depth (use Runway), and academic state-of-the-art accuracy on the largest variants (use Hugging Face Spaces) to keep the core workflow free, unlimited per session, and frictionless.

If parallax animation is what you actually want, pay for Immersity AI. If video depth matters, pay for Runway. If you can run Python, Hugging Face Spaces + self-hosted Depth-Anything-V2 gives you the best raw accuracy for free. If you already own Photoshop, the Depth Map As Layer workflow is unmatched for manual refinement. If you have one photo to convert to depth today, you do not need a subscription. Pick the tool that matches the workflow — and remember that the zero-friction PNG download is what sets ToolChamp apart.


No affiliate links. No sponsored placements. Pricing accurate as of May 12, 2026, in USD unless otherwise noted. Reviewed by ToolChamp Editorial.

Spotted an error? Email [email protected] and we will fix it.

We use anonymous analytics to improve ToolChamp. No personal data is stored or sold. Privacy Policy