Case Study

We built the BD engine we'd sell to a client — and ran it on ourselves first.

Scout is V1's internal business development system. It watches for companies that need our services, audits their digital presence, drafts hyper-personalised outreach, and routes everything to a human for one decision: send or skip. We built it to prove the model works before we pitch it to anyone.

[Screenshot: Scout pipeline dashboard — dark operational UI with a lead table (company names, signal-type badges: Embarrassing Site, Stuck Product, Manual Team, Launch Window; industry, founder contact, status badges: New, In Review, Sent, Replied) and a stats row showing Total, New, In Review, and Replied counts]
Overview

Timeline: 3 weeks
Service: AI Systems
Scope: Internal tool
Status: Live

Scout is V1's own business development engine — a multi-agent AI pipeline that discovers leads by signal, audits their digital presence, and drafts two personalised outreach variants before a human ever sees it. The whole point was to eat our own cooking: build the system we'd build for a client, run it on our own BD, and document what actually works.

The brief

Good BD is research-intensive. We automated the research.

Effective outreach starts with a specific reason to reach out — not "we help companies grow" but "your site scored 34 on mobile PageSpeed, you just raised a Series A, and you're still running Webflow." Getting to that level of specificity manually takes 20–30 minutes per company. At scale, that's not a process — it's a bottleneck.

We defined four signals that reliably indicate a company needs V1's services: a strong business with a weak or outdated site, a product with a public backlog of unshipped features, a team posting repetitive ops roles that should be automated, and a company that just announced a pivot, raise, or rebrand. Scout watches for these and builds the brief automatically.

The constraint we held throughout: a human had to approve every email before it sent. The quality bar for outreach is high and the cost of a bad email is real. The system handles the research and drafting. The human handles the judgement call.

How it works

Four signals. Three agents. One send button.

01

Signal detection & lead intake

Leads enter Scout tagged with one of four signals — Embarrassing Site, Stuck Product, Manual Team, or Launch Window. Each signal pre-loads the Auditor agent with the right context: what to look for, why now, and which V1 service is the likely fit. The signal is the brief. Without one, the lead doesn't move.
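The signal-to-brief mapping can be pictured as a plain lookup table. The sketch below is illustrative only — the field names, descriptions, and service labels are assumptions, not Scout's production schema:

```typescript
// Illustrative sketch of Scout's signal-to-brief mapping.
// Field names and values are assumptions, not the production schema.
type Signal = "embarrassing_site" | "stuck_product" | "manual_team" | "launch_window";

interface AuditorBrief {
  lookFor: string;    // what the Auditor should inspect
  whyNow: string;     // the urgency trigger behind the signal
  serviceFit: string; // the V1 service this signal usually maps to
}

const SIGNAL_BRIEFS: Record<Signal, AuditorBrief> = {
  embarrassing_site: {
    lookFor: "weak or outdated site behind a strong business",
    whyNow: "the site is actively costing credibility",
    serviceFit: "Web",
  },
  stuck_product: {
    lookFor: "public backlog of unshipped features",
    whyNow: "momentum has visibly stalled",
    serviceFit: "Product",
  },
  manual_team: {
    lookFor: "repetitive ops roles that should be automated",
    whyNow: "headcount is being spent on automatable work",
    serviceFit: "AI Systems",
  },
  launch_window: {
    lookFor: "recent pivot, raise, or rebrand",
    whyNow: "a public announcement creates a natural opening",
    serviceFit: "AI Systems",
  },
};

// A lead without a recognised signal does not move through the pipeline.
function briefFor(signal: Signal | null): AuditorBrief | null {
  return signal ? SIGNAL_BRIEFS[signal] : null;
}
```

The point of the lookup is that the signal carries the whole brief: one tag pre-loads what to inspect, why this week matters, and which service to position.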

02

The Auditor agent

On lead creation, Inngest fires an event that triggers the Auditor. It runs a Google PageSpeed audit, fingerprints the tech stack (Next.js, Webflow, custom, legacy), and passes everything to Claude with a precise prompt: evaluate this company as a potential client, score the site 1–10, identify the top three issues, and write a 'why now' paragraph that references the specific signal and explains why reaching out this week is timely. Output is structured JSON.
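Because the Auditor's output is structured JSON from a language model, it is worth validating before anything downstream consumes it. A minimal guard might look like this — the field names (`score`, `topIssues`, `whyNow`) are assumptions about the shape of the contract, not Scout's actual schema:

```typescript
// Illustrative shape for the Auditor's structured JSON output.
// Field names are assumptions, not Scout's real contract.
interface AuditResult {
  score: number;       // site quality, 1-10
  topIssues: string[]; // the three biggest problems found
  whyNow: string;      // signal-specific paragraph on timing
}

// Parse and validate the model's raw JSON. LLM output is untrusted:
// reject anything that does not match the expected shape instead of
// letting a malformed audit flow on to the Voice agent.
function parseAudit(raw: string): AuditResult {
  const data = JSON.parse(raw);
  if (
    typeof data.score !== "number" || data.score < 1 || data.score > 10 ||
    !Array.isArray(data.topIssues) || data.topIssues.length !== 3 ||
    typeof data.whyNow !== "string" || data.whyNow.length === 0
  ) {
    throw new Error("Auditor output failed validation");
  }
  return data as AuditResult;
}
```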

03

The Voice agent

Once the audit completes, Inngest fires a second event that triggers the Voice agent. It receives the audit findings and generates two email variants: Variant A leads with urgency (the cost of the current situation), Variant B leads with curiosity (a specific, unexpected observation). Hard rules enforced by the prompt: never mention AI, maximum three sentences plus a CTA, reference exactly one company-specific finding, peer-level tone — not a pitch, a peer noticing something.
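Hard rules like these can also be double-checked in code after generation rather than trusted to the prompt alone. The guard below is a sketch of that idea — the regex and the sentence-counting heuristic are illustrative assumptions, not Scout's production logic:

```typescript
// Sketch of a post-generation guard for the Voice agent's hard rules.
// The prompt enforces these; a code-level check catches drift.
// Both heuristics here are illustrative, not production logic.
function violatesHardRules(body: string): string[] {
  const violations: string[] = [];

  // Rule: never mention AI.
  if (/\bAI\b|artificial intelligence/i.test(body)) {
    violations.push("mentions AI");
  }

  // Rule: maximum three sentences plus a CTA
  // (rough count on sentence terminators).
  const sentences = body.split(/[.!?]+/).filter((s) => s.trim().length > 0);
  if (sentences.length > 4) {
    violations.push("over three sentences plus CTA");
  }

  return violations;
}
```

A failed check can route the draft back for regeneration instead of ever reaching the review queue.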

04

Human review & approval

The lead lands in the review queue with everything the human needs: audit findings, a Why Now summary, site issues, and the two draft variants side by side. One click to approve A, one click to approve B, or toggle to custom edit mode and rewrite from scratch. Approve & Send triggers Resend, logs the send, and schedules a follow-up four days out via Inngest if no reply arrives.
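The follow-up timing reduces to simple date arithmetic. In production the scheduling is described as Inngest's job; this pure function only models the rule — four days after send, cancelled once a reply lands — and its names are assumptions:

```typescript
// Sketch of the follow-up timing rule. In production the document
// describes Inngest scheduling this; the pure function below only
// models the logic: due four days after send, skipped on reply.
const FOLLOW_UP_DELAY_DAYS = 4;

interface SendRecord {
  sentAt: Date;
  repliedAt: Date | null;
}

function followUpDue(record: SendRecord, now: Date): boolean {
  if (record.repliedAt) return false; // reply arrived: cancel follow-up
  const due = new Date(record.sentAt);
  due.setDate(due.getDate() + FOLLOW_UP_DELAY_DAYS);
  return now >= due;
}
```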

The build

Operational UI. Every screen had one job.

The interface was built for speed of review — a human should be able to process a lead in under two minutes. Dense information layout, colour-coded status and signal badges, a split-pane lead detail view that keeps audit context visible while you're reading draft copy. No decorative UI. Everything is load-bearing.

[Screenshot: Scout pipeline page — full lead table (Company, Signal, Industry, Founder, Status, Added) with a Paystack row expanded showing its Embarrassing Site signal and In Review status; stats row showing Total Leads 47, New 12, In Review 8, Replied 6]
[Screenshot: Scout review queue — stacked lead cards, each with signal, service, and confidence badges, a 'WHY NOW' summary referencing the company's recent Series A and legacy stack, site issue badges (Mobile score: 34, No HTTPS, jQuery 1.x), and two email variant columns with Approve A, Approve B, Skip, and Blacklist actions]
[Screenshot: Scout lead detail page — split pane: left sidebar with company, founder, audit score (34 in red), issue and tech stack badges, and the Why Now paragraph; right panel comparing Variant A (Urgency) and Variant B (Curiosity) with subject lines and full bodies, plus Approve & Send, Skip, and Blacklist actions]
[Screenshot: Scout sent tracking page — stats row (Total Sent 31, Replied 9, Followed Up 18, Reply Rate 29%) above a table of sends showing company, subject and variant label, sent date, follow-up status, and reply status]
The results

A system that works, documented so clients can trust it.

4 lead signals that define when and why to reach out; each one maps to a specific V1 service and a specific urgency trigger
2 email variants drafted per lead, one urgency-led and one curiosity-led, with a human choosing the angle before anything sends
0 mentions of AI in any outreach email, enforced by prompt constraints; the output sounds like a thoughtful person, not a system

Scout runs on V1's own pipeline. We use it to find clients, send outreach, and track what converts. The same architecture — event-driven agents, structured AI output, human-in-the-loop approval — is what we now build for clients under the AI Systems service. We didn't build it as a demo. We built it as infrastructure.

From V1
We built Scout because we wanted to know if this kind of system actually works — not in theory, but in production, on real outreach, with real reply rates. It does. That's why we sell it.
V1 Team

Built internally at V1