From jobsite photo dump to searchable construction record
A multi-year residential water-feature build was generating thousands of site photos, but the archive had become a scroll wall. I built a private photo review platform that reads EXIF GPS, pins photos to a project map, uses AI vision to suggest tags from a project-specific taxonomy, and lets a reviewer confirm or reject each tag in seconds.
§ 01 · The problem
The photo library was the construction record, and it was illegible
A multi-phase build runs for years and generates a constant stream of photos from the site team — phone-camera shots from every trade, every visit, every surprise. Each photo gets a GPS stamp and a timestamp from the camera, and that’s where the metadata stops. After a year of this, the library is a scroll wall of thousands of items: unsearchable, unfindable, uncategorized, unsharable. When an engineer asks “show me everything from the equipment-room rough-in last September with the trench drain visible,” the answer is: nobody can find that.
The team had been triaging by folder convention and date — which works at small scale and falls apart at multi-year, multi-trade scale. The photos were the construction record for a project too complex to keep in any one person’s head. But the record was illegible.
The wedge: replace the folder-and-date workflow with a system that auto-tags, auto-pins, and auto-organizes. The reviewer’s job becomes “confirm or reject” instead of “type and remember.” Each photo earns its own searchable identity the moment it’s ingested.
The same photos in two systems. The work was making them findable — tagged, pinned to the project map, and filterable by trade, phase, and system.
Why not just use Dropbox or Lightroom?
Off-the-shelf photo tools (Dropbox, Google Photos, Lightroom, generic asset management) are built for breadth. They handle any kind of photo from any kind of user. That generality is exactly the wrong fit here: the customer didn’t need a tool that handled vacations and weddings and construction equally well; the customer needed a tool that knew construction trades, knew which phase the build was in, knew the project’s own tag taxonomy, and could pin photos to a satellite-overlaid project map. Building for one project beats configuring a generic tool to pretend it knows the project.
The other thing off-the-shelf tools can’t do is tag against your own taxonomy. Claude Vision can; it just needs the taxonomy injected into the prompt. The system suggests tags only from the 89 tags the project actually uses, and a non-technical reviewer confirms or rejects each suggestion in seconds. That’s a workflow no general-purpose tool offers.
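To make that concrete, here is a minimal sketch of the taxonomy-injection idea using the Anthropic Python SDK. The tag names, prompt wording, and model alias below are illustrative stand-ins, not the shipped prompt; the real system draws from the project’s full 89-tag list and does more validation on the response.

```python
import base64
import json
import anthropic

# Illustrative excerpt; the real project injects its own 89-tag taxonomy.
TAXONOMY = ["equipment room", "trench drain", "rough-in", "waterproofing", "rebar"]

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def suggest_tags(photo_path: str) -> list[dict]:
    """Ask Claude Vision for tag suggestions drawn only from TAXONOMY."""
    image_b64 = base64.standard_b64encode(open(photo_path, "rb").read()).decode()
    prompt = (
        "You are tagging construction-site photos. Choose tags ONLY from this "
        f"list and never invent new ones: {json.dumps(TAXONOMY)}. "
        'Reply with JSON only: [{"tag": "...", "confidence": 0.0-1.0}, ...]'
    )
    msg = client.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model alias
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": [
                {"type": "image",
                 "source": {"type": "base64", "media_type": "image/jpeg",
                            "data": image_b64}},
                {"type": "text", "text": prompt},
            ],
        }],
    )
    suggestions = json.loads(msg.content[0].text)  # assumes the model returned bare JSON
    # Enforce tag-from-list-only on our side too, not just in the prompt.
    return [s for s in suggestions if s["tag"] in TAXONOMY]
```

Keeping the per-suggestion confidence around is what makes later auto-accept thresholds possible without re-running the model.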
§ 02 · Scope
One document. Signed before any code.
Every project starts with a SCOPE document. It’s short, opinionated, and names what’s out of scope as carefully as what’s in. For this engagement, the Phase 1 scope was the four working screens (Browse, Map, Review, Tag Manager), the ingest pipeline, the AI tagger, and the encrypted-SPA deploy. Everything else (auto-accept thresholds, role-based access, activity feed, collections, mobile polish) was named in the Phase 2 candidate list and explicitly deferred. Most of those have either shipped since or are in active planning. The “never” list is still never.
The OUT-of-scope list is doing real work here. Naming the Phase 2 candidates kept Phase 1 honest about what it was. Naming the “never” items kept the tool from drifting into Dropbox-with-fewer-features.
§ 03 · Plan
The implementation plan, before any code
After SCOPE, I write the PLAN. It’s the implementation strategy — concrete enough to argue with, abstract enough to not pretend to be code. The customer reviews it, pushes back, signs off. Then I start building.
§ 04 · Build
What we shipped
Phase 1 shipped Browse, Map, Review, Tag Manager, the ingest pipeline, the AI tagger, and the encrypted Cloudflare deploy. v1.5 added the subject-point feature on the map (a pin marking not where the camera was but where it was pointing, computed via heading × distance haversine and draggable to correct), plus a unified header and help-modal pass across all five templates. v2 is in active planning — tag-category CRUD, auto-accept thresholds, activity feed, role-based access, collections. Each phase scoped, priced, and signed before its build began.
From phone photo to searchable record. Steps 1–3 and 5 are automated; only step 4 needs a human, and only briefly — one tap per suggested tag.
The Review wizard, demo data shown. One photo at a time. Left: the photo, filename, capture date. Below: AI-suggested tags from the project’s own 89-tag taxonomy — each one a confirm/reject pill. Right: the GPS panel, with the drag-to-correct pin. Bottom: confirm-and-next as a single keyboard shortcut. The reviewer’s job becomes “tap through” rather than “type and remember.”
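For the technically curious, the confirm-or-reject tap can be a single small endpoint. Below is a minimal sketch in the Flask + SQLite shape of the dev backend; the route, table, and column names are invented for illustration and are not the shipped schema.

```python
from flask import Flask, jsonify, request
import sqlite3

app = Flask(__name__)
DB = "photos.db"  # local dev database; prod runs the same SQL against D1

@app.post("/api/photo-tags/<int:link_id>/review")
def review_tag(link_id: int):
    """One tap in the wizard: mark an AI-suggested tag confirmed or rejected."""
    decision = (request.get_json() or {}).get("decision")
    if decision not in ("confirmed", "rejected"):
        return jsonify(error="decision must be 'confirmed' or 'rejected'"), 400
    db = sqlite3.connect(DB)
    with db:  # commits on success, rolls back on error
        db.execute(
            "UPDATE photo_tags SET status = ?, reviewed_at = datetime('now') "
            "WHERE id = ?",
            (decision, link_id),
        )
    db.close()
    return jsonify(ok=True)
```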
Spec at a glance
| Frontend | Five-screen private review interface: Browse, Map, Review, Tags, Roadmap. |
| Backend (dev) | Flask + SQLite for fast local iteration. |
| Backend (prod) | Cloudflare Worker + D1 + R2; same SQL schema as dev. |
| AI tagging | Claude Vision API with the project’s tag taxonomy injected into the prompt; tag-from-list-only is enforced; confidence stored per suggestion for future thresholding. |
| EXIF / GPS | EXIF extraction at ingest. Subject-point (where the camera was pointing) computed via heading × distance haversine. Draggable pin correction in the Review wizard. |
| Privacy | Private, access-controlled deployment. Encrypted at rest. No public sharing links; access requires authorization. |
| Hosting | Cloudflare infrastructure. No dedicated server bill at the project’s current traffic level. |
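The EXIF / GPS row above is less exotic than it sounds. Here is a minimal sketch of the GPS read at ingest using Pillow; the tag numbers are the standard EXIF GPS fields, and the shipped pipeline is more defensive about photos with partial or missing metadata.

```python
from PIL import Image

def _to_degrees(dms, ref):
    """Convert EXIF degree/minute/second rationals to signed decimal degrees."""
    deg, minutes, seconds = (float(v) for v in dms)
    value = deg + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value

def read_gps(path):
    """Return (lat, lon, heading) from a photo's EXIF, or None if there is no GPS stamp."""
    exif = Image.open(path)._getexif() or {}
    gps = exif.get(34853)                # 34853 = GPSInfo IFD
    if not gps or 2 not in gps or 4 not in gps:
        return None
    lat = _to_degrees(gps[2], gps[1])    # GPSLatitude / GPSLatitudeRef
    lon = _to_degrees(gps[4], gps[3])    # GPSLongitude / GPSLongitudeRef
    heading = float(gps[17]) if 17 in gps else None  # GPSImgDirection, if the phone wrote one
    return lat, lon, heading
```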
§ 05 · Ship
What changed for the customer
Before, the photo library was a folder tree the site team appended to and nobody else opened. After Phase 1 shipped, thousands of site photos became searchable by trade, phase, system, date, location, and review status. The AI suggests, the reviewer taps through, the photo joins a searchable index in the same week it’s captured. An engineer asking for everything from a particular phase with a particular system visible gets the answer in one click.
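Behind “one click” is an ordinary tag-joined query. Here is a minimal sketch against an assumed photos / tags / photo_tags layout (illustrative names, not the shipped schema):

```python
import sqlite3

def photos_with_both(db_path: str, phase_tag: str, system_tag: str) -> list[str]:
    """Every photo carrying confirmed tags for both a phase and a system."""
    query = """
        SELECT p.filename
        FROM photos p
        JOIN photo_tags pt ON pt.photo_id = p.id
        JOIN tags t        ON t.id = pt.tag_id
        WHERE t.name IN (?, ?) AND pt.status = 'confirmed'
        GROUP BY p.id
        HAVING COUNT(DISTINCT t.name) = 2   -- both tags present, not just either one
        ORDER BY p.captured_at
    """
    with sqlite3.connect(db_path) as db:
        return [row[0] for row in db.execute(query, (phase_tag, system_tag))]
```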
v1.5’s subject-point feature added the most useful navigation aid on the map: not where the camera was, but where the camera was pointing, computed from the EXIF heading and a measured distance.
Heading from EXIF × measured distance → subject point on the map.
The catalog stopped being a record of where photographers stood and started being a record of what they were looking at.
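For the technically curious, the subject point is the standard spherical destination-point calculation: start at the camera’s GPS fix and walk the measured distance along the EXIF heading. A minimal sketch, with made-up coordinates in the usage comment:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius; plenty accurate at jobsite distances

def subject_point(lat: float, lon: float, heading_deg: float, distance_m: float):
    """Project the camera position along its heading to estimate what it was looking at."""
    d = distance_m / EARTH_RADIUS_M              # angular distance
    theta = math.radians(heading_deg)            # bearing, clockwise from true north
    phi1, lam1 = math.radians(lat), math.radians(lon)

    phi2 = math.asin(math.sin(phi1) * math.cos(d) +
                     math.cos(phi1) * math.sin(d) * math.cos(theta))
    lam2 = lam1 + math.atan2(math.sin(theta) * math.sin(d) * math.cos(phi1),
                             math.cos(d) - math.sin(phi1) * math.sin(phi2))
    return math.degrees(phi2), math.degrees(lam2)

# A photo taken at an illustrative fix, facing 040° true, subject roughly 25 m away:
# subject_point(40.7128, -74.0060, 40.0, 25.0)
```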
The Map view, demo data shown. Each pin is a photo positioned by EXIF GPS plus the v1.5 subject-point correction. Pins cluster automatically; selecting one opens the photo card with its confirmed tags. Filters on the left narrow by phase, system, date, and review status.
The customer bought Phase 1, then came back for v1.5, and is now scoping v2 because the tool proved useful. Each phase scoped, priced, and signed before its build began. The customer owns the source code, the schema, the AI prompt, and a runbook for keeping it running. Hosting runs on Cloudflare infrastructure with no dedicated server bill at the project’s current traffic level. The pattern they bought was a one-time engagement they could come back to when they wanted more — not a subscription that bills whether they use it or not.
Generic tools could store the photos. Custom software made them useful.
You’ve been pricing SaaS.
If your business is sitting on a pile of data that’s the record of how you actually work — photos, spreadsheets, paperwork, anything — and the tools you’ve been pricing don’t quite know what your data is, let’s talk. The first conversation is free. If it’s a fit, the next thing you’ll get is a written SCOPE document.
Get in touch →
Clients aren’t named on a public site. References available on request to qualified prospects.