Framecut - AI Wedding Photo Culling
Cull 1,000 raw wedding photos down to your best 100 in under 10 minutes, so you can edit and deliver faster.
Wedding photographers spend 4-6 hours per event manually culling 1,000+ raw images down to 100-300 deliverables, directly bottlenecking their delivery speed and revenue capacity. Existing tools still take 20-90 minutes and lack wedding-specific intelligence for detecting emotional moments, expressions, and multi-shooter consistency.
Build This Idea
Run Claude Code, then copy and paste this command
Claude Code will scaffold the full project for you based on the idea spec, tech stack, and features
Talk to Claude Code to edit features, add integrations, or customize anything in your new project
The Business
Market Size: $720M-$1.4B
Annual Price per Customer: $120-$500
Worldwide Potential: Global
Customer
Professional wedding photographers and small studios (1-5 shooters) shooting 20-80+ weddings per year with 1,000-5,000 raw images per event, primarily in the US, UK, Australia, and Western Europe.
Pricing
Annual subscription at $149/year (or $15/month) for unlimited culling, priced slightly above Aftershoot's $120/year culling-only tier and justified by faster speed and wedding-specific intelligence. A free tier allows 3 wedding culls to drive trial adoption. An optional Pro tier at $299/year adds style learning, multi-shooter merge, and client preview galleries.
Estimated Annual Revenue: $96K (800 customers at $120-$500/year; 10% market capture)
Features
Local GPU-accelerated model scores 1,000+ raw images in under 10 minutes for technical quality (focus, exposure, noise) and compositional quality.
Detects blinks, closed eyes, unflattering expressions, and identifies emotional peak moments like first kiss, vows, and laughter using a wedding-trained classifier.
Groups burst sequences and near-duplicates, auto-selecting the sharpest frame with the best expressions from each cluster.
Exports selections as star ratings, color labels, or flags directly into Lightroom Classic catalogs and Capture One sessions.
Merges primary and second shooter photos into a unified chronological timeline, flagging color inconsistencies between cameras.
Learns from a photographer's past cull decisions to personalize scoring weights and selection preferences over time.
Generates a shareable low-res web gallery of top picks for rapid client feedback before full editing begins.
Shows per-wedding analytics including keeper rate, common rejection reasons, and processing speed to help photographers optimize their shooting.
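The technical-quality pass described in the first feature can be illustrated with classical heuristics: variance of a Laplacian response as a focus proxy and the fraction of clipped pixels as an exposure penalty. A minimal sketch, assuming 8-bit grayscale input; the product itself would use a trained vision model, and the normalizer and weights below are arbitrary placeholders:

```python
# Illustrative technical-quality scoring: focus via Laplacian variance,
# exposure via clipped-pixel fraction. Not the shipped model.
import numpy as np

def laplacian_variance(gray: np.ndarray) -> float:
    """Sharpness proxy: variance of a 3x3 Laplacian response."""
    lap = (-4 * gray[1:-1, 1:-1]
           + gray[:-2, 1:-1] + gray[2:, 1:-1]
           + gray[1:-1, :-2] + gray[1:-1, 2:])
    return float(lap.var())

def exposure_penalty(gray: np.ndarray, clip: float = 0.02) -> float:
    """Fraction of pixels blown out or crushed; 0 means acceptably exposed."""
    worst = max(np.mean(gray >= 250), np.mean(gray <= 5))
    return float(worst) if worst > clip else 0.0

def technical_score(gray: np.ndarray) -> float:
    """Combine focus and exposure into one 0-1 score (weights are guesses)."""
    sharp = min(laplacian_variance(gray) / 500.0, 1.0)  # 500: arbitrary normalizer
    return round(sharp * (1.0 - exposure_penalty(gray)), 3)

# Usage: a textured, well-exposed frame should outscore a flat, blown-out one.
rng = np.random.default_rng(0)
textured = rng.integers(60, 200, (100, 100)).astype(np.float64)
blown = np.full((100, 100), 255.0)
assert technical_score(textured) > technical_score(blown)
```

A real pipeline would run this per raw preview on the GPU and feed the scores into the duplicate-grouping and auto-selection steps.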
Tech Stack
APIs
OpenAI Vision API
Cloud fallback for expression and emotion detection when the local GPU is insufficient, and for the training-data labeling pipeline
Stripe
Subscription billing and license key management for annual/monthly plans
Adobe Lightroom SDK
Direct catalog integration to write star ratings, flags, and color labels into Lightroom Classic
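One low-risk integration path, independent of any SDK, is writing ratings into XMP sidecar files that Lightroom Classic reads from disk alongside the raw files. A hedged sketch: the sidecar below uses the standard xmp:Rating (0-5) and xmp:Label properties, but it naively overwrites any existing sidecar, which shipping code would merge instead:

```python
# Minimal XMP sidecar writer (IMG_0001.CR3 -> IMG_0001.xmp).
# Caution: overwrites existing sidecars; real code should round-trip them.
from pathlib import Path

SIDECAR_TEMPLATE = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about=""
    xmlns:xmp="http://ns.adobe.com/xap/1.0/"
    xmp:Rating="{rating}"
    xmp:Label="{label}"/>
 </rdf:RDF>
</x:xmpmeta>
"""

def write_sidecar(raw_path: Path, rating: int, label: str = "") -> Path:
    """Write a star rating and color label next to a raw file."""
    if not 0 <= rating <= 5:
        raise ValueError("xmp:Rating must be 0-5")
    sidecar = raw_path.with_suffix(".xmp")
    sidecar.write_text(SIDECAR_TEMPLATE.format(rating=rating, label=label))
    return sidecar
```

Lightroom Classic picks these up via "Read Metadata from Files", which avoids touching the catalog database directly.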
Backend
Next.js API Routes
Handles license validation, user accounts, style learning sync, and client gallery hosting
Python/ONNX Runtime
Local inference engine running the wedding-trained vision model on-device with GPU support
Hosting
Vercel
Hosts the marketing site, web dashboard, license portal, and client preview galleries
Database
Supabase
User accounts, license management, learned style preferences, and analytics storage
SQLite (local)
On-device catalog of image scores, groupings, and cull decisions for offline-first operation
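The on-device store can stay small: one row per image with its scores, duplicate-group id, and cull decision, so the app works fully offline. A minimal sketch using Python's built-in sqlite3; all table and column names here are illustrative, not the actual schema:

```python
# Illustrative offline-first cull catalog backed by SQLite.
import sqlite3

SCHEMA = """
CREATE TABLE IF NOT EXISTS images (
    path            TEXT PRIMARY KEY,
    wedding_id      TEXT NOT NULL,
    technical_score REAL,    -- focus/exposure/noise, 0-1
    moment_score    REAL,    -- emotional-peak classifier, 0-1
    group_id        INTEGER, -- burst / near-duplicate cluster
    decision        TEXT CHECK (decision IN ('keep', 'reject', 'maybe'))
);
CREATE INDEX IF NOT EXISTS idx_images_wedding ON images (wedding_id);
"""

def open_catalog(db_path: str = ":memory:") -> sqlite3.Connection:
    """Open (or create) the local cull catalog."""
    conn = sqlite3.connect(db_path)
    conn.executescript(SCHEMA)
    return conn

# Usage: record one scored frame, then query the keepers for a wedding.
conn = open_catalog()
conn.execute(
    "INSERT INTO images VALUES (?, ?, ?, ?, ?, ?)",
    ("/raw/IMG_0001.CR3", "smith-2024", 0.91, 0.78, 3, "keep"),
)
keepers = conn.execute(
    "SELECT path FROM images WHERE wedding_id = ? AND decision = 'keep'",
    ("smith-2024",),
).fetchall()
```

Keeping decisions local also makes the style-learning sync a simple delta upload of this table rather than a live dependency on the backend.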
Frontend
5 Day Sprint UI
Component library for the web dashboard, license management portal, and client preview gallery interface
Electron + React
Desktop app shell for local-first processing with native file system access and GPU acceleration