CASE STUDY · AI / SaaS · 2025–2026

JobCannon:
0 to a live AI career platform.

A senior engineering team designing and shipping an AI-powered career discovery product — 2,500+ careers, 1,500+ skills, 64,000+ graph edges, premium tier live, ranking on the first page of Google for high-intent assessment queries.

Engagement
Founder + Engineering Pod
Stack
Next.js 14 / Supabase / Stripe
Status
Live · Premium shipped 2026-04
Role
Build & operate

Challenge

Most career-test products on the internet are content marketing dressed up as a tool — five generic personality questions, an email gate, and a glossy PDF that nobody reads. The opportunity was to build the opposite: a real assessment engine that produces specific, defensible career recommendations and converts free users to a paid tier they actually want.

The brief from the founder was unambiguous. Ship a product that:

  • Runs scientifically grounded assessments without cloning copyrighted clinical instruments (no PHQ-9, no GAD-7, no MBTI verbatim items).
  • Scales to thousands of career profiles, each with skill mappings, salary bands, and growth signals.
  • Holds up as a discoverable surface — programmatic SEO, AEO-grade JSON-LD, free results forever, no paywall on core insight.
  • Charges money on day one with a premium tier that earns the upgrade rather than withholding the result.

None of those goals are individually hard. Compounded — under one codebase, with one engineering pod, against a real launch date — they become a small platform.

Approach

We treated JobCannon as five products glued by one funnel.

1. The assessment engine

We designed a domain-specific scoring engine in TypeScript rather than reaching for an LLM. Each test ships with a typed config (questions, dimensions, scoring math, trait thresholds) and a content module with localized copy. The engine handles serialization, reverse-coding, weighted axes, and result composition. New tests are added by writing two files, not patching prose into a results page.
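The config-plus-engine split can be sketched roughly like this. All names here (`TestConfig`, `scoreAnswers`, field names) are illustrative, not JobCannon's production API; the sketch only shows the reverse-coding and weighted-axis mechanics the paragraph describes:

```typescript
// Hypothetical shapes for a typed test config and its scorer.
type Question = {
  id: string;
  dimension: string;   // which trait axis this item loads on
  reverse?: boolean;   // reverse-coded item (flip around the scale midpoint)
  weight?: number;     // per-item weight within its axis (default 1)
};

type TestConfig = {
  scaleMax: number;                    // e.g. 5 for a 1–5 Likert scale
  questions: Question[];
  thresholds: Record<string, number>;  // trait cutoffs per dimension
};

function scoreAnswers(
  config: TestConfig,
  answers: Record<string, number>,     // questionId -> raw response
): Record<string, number> {
  const totals: Record<string, { sum: number; weight: number }> = {};
  for (const q of config.questions) {
    const raw = answers[q.id];
    if (raw === undefined) continue;   // unanswered items are skipped
    // Reverse-coding: a 1 on a 1–5 scale becomes a 5, and vice versa.
    const value = q.reverse ? config.scaleMax + 1 - raw : raw;
    const w = q.weight ?? 1;
    const t = (totals[q.dimension] ??= { sum: 0, weight: 0 });
    t.sum += value * w;
    t.weight += w;
  }
  const scores: Record<string, number> = {};
  for (const [dim, t] of Object.entries(totals)) {
    scores[dim] = t.sum / t.weight;    // weighted mean per axis
  }
  return scores;
}
```

Because the scorer is pure and typed, adding a test really is just adding a config file and a content module: the engine never changes.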

Crucially, premium content lives in templates plus per-test data files. There is no hardcoded long-form prose tucked into a route — every premium section is generated from structured data so it can be regenerated, retranslated, or A/B tested without touching components.

2. The career graph

2,536 EN careers. 1,533 skills. 64,317 graph edges. Each career carries skill weights, adjacency to neighboring careers, salary bands, and tags consumed by recommendation logic. The graph is a Supabase Postgres dataset behind a typed read API, regenerated nightly from a deterministic seed pipeline. Numbers are canonical and verified by check-scripts before any UI quotes them — never a guess in copy.
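The adjacency lookup behind recommendations reduces to a weighted-edge query. This is an in-memory sketch with hypothetical names; in production the equivalent read goes through the typed Supabase API, not an array scan:

```typescript
// Hypothetical edge shape; field names do not mirror the real schema.
type CareerEdge = { from: string; to: string; weight: number };

// Return the top-k careers adjacent to one node, strongest edges first.
function topAdjacent(edges: CareerEdge[], careerId: string, k: number): string[] {
  return edges
    .filter((e) => e.from === careerId)
    .sort((a, b) => b.weight - a.weight)
    .slice(0, k)
    .map((e) => e.to);
}
```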

3. Sacred-flow architecture

The path from Start → first question → result → paywall → premium unlock is the entire business. We isolated those screens behind validators that block UI swaps, ref-binding regressions, and conditional handler assignments. CTAs that change shape on phase transitions get tested end-to-end, not just visually. A "visibility ≠ working" rule sits in the test plan — every CTA has a Playwright assertion that the click handler is actually wired.

4. Money flows under adversarial review

Stripe code, premium gating, webhook handlers, and migrations get a mandatory adversarial review from a second LLM (Codex/GPT-5) before merge. Not a polite review — an actively hostile one, looking for double-charges, race conditions on webhook retries, idempotency gaps. The rule emerged after real bugs slipped through single-model review: now nothing money-touching ships without it.
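The idempotency gap that adversarial review hunts for can be illustrated with a minimal in-memory sketch. `handleWebhook` and the seen-set are hypothetical; in production the "seen" record is a unique-constrained database table, which also closes the race two concurrent retries would otherwise have against an in-memory set:

```typescript
// Record Stripe event IDs so webhook retries are acknowledged
// but never re-processed (no double-charges, no double-grants).
const processedEvents = new Set<string>();

function handleWebhook(
  eventId: string,
  apply: () => void,
): "applied" | "duplicate" {
  if (processedEvents.has(eventId)) return "duplicate"; // retry: ack, do nothing
  processedEvents.add(eventId); // claim the ID before any side effects
  apply();                      // grant access, record payment, etc.
  return "applied";
}
```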

5. Programmatic SEO + AEO

130+ test landing pages, 350+ blog articles, JSON-LD generators that emit Article + FAQPage + BreadcrumbList per page. We schema-validate every render path because Google saw payload duplication that curl couldn't see — Playwright post-hydration DOM became the verification layer. Free tier is permanent, by policy, so AEO answers can quote results without paywall traps.
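A generator in this style, reduced to its simplest form: only the schema.org property names below are fixed by the spec; the function shape and types are illustrative, not the production code:

```typescript
// Build FAQPage JSON-LD from structured page data.
type Faq = { question: string; answer: string };

function faqPageJsonLd(faqs: Faq[]): object {
  return {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: faqs.map((f) => ({
      "@type": "Question",
      name: f.question,
      acceptedAnswer: { "@type": "Answer", text: f.answer },
    })),
  };
}
```

Emitting the payload from one generator per schema type is what makes a dedupe check feasible: there is exactly one code path that can produce an FAQPage block, so a duplicate in the rendered DOM is always a bug, never an ambiguity.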

"Visibility is not working. Build pass is not deploy. Deploy is not verified. Every claim closes with a curl or a Playwright assertion."

What we built

  • 14 production assessments (career, personality, relationship, cognitive) with their own scoring math, premium content modules, and result visualizations.
  • Custom result components per test — radar charts, hexagons, gauges, tetrad maps — bound to typed scoring outputs, never repurposed across tests without explicit approval.
  • Premium content engine generating ~15 long-form sections per test from structured data (template + facts), localized, A/B-testable, never inline prose.
  • Stripe Single + All-Access paywall ($19.99 single / $9.99 monthly all-access). Two options, exactly. No third "free email" path. Inline placement, never sticky bottom bar.
  • Telegram bug-report pipeline + Stripe subscription monitoring + Supabase event log; new bug reports surface in-session.
  • Programmatic SEO surface — 130+ test pages, 350+ blog articles, multi-locale ready, AEO-grade JSON-LD on every route.
  • Deploy-lane classifier — fast lane to main for content/copy, staging lane for risky changes (test entry, paywall, schema, migrations). Risky changes never merge directly.
  • Validators in CI — popup brand validator, JSON-LD dedupe checker, paywall-config guard. Standards without enforcement become nostalgia.
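The lane classifier above reduces to a path test. A minimal sketch, with illustrative patterns (the real risky-surface list is maintained in the repo, not hardcoded like this):

```typescript
// Route a changeset to the staging lane if any file touches a risky
// surface (paywall, billing, schema, migrations); otherwise fast lane.
const RISKY_PATTERNS: RegExp[] = [
  /^supabase\/migrations\//,
  /paywall/i,
  /stripe/i,
  /schema/i,
];

function classifyLane(changedFiles: string[]): "fast" | "staging" {
  return changedFiles.some((f) => RISKY_PATTERNS.some((re) => re.test(f)))
    ? "staging"
    : "fast";
}
```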

Results

JobCannon is live, charging real money, and indexed in Google for the queries that matter. The premium tier shipped on schedule (April 2026), money flows through Stripe in production, and the Codex-adversarial review process has caught real defects before they reached users — including webhook race conditions and gating regressions that single-model review had missed.

The platform now operates on a publish-don't-rebuild rhythm. New tests, new careers, and new locales add to a stable engine rather than spawning new code paths. Operationally, the founder runs the product day-to-day; the engineering pod ships features, audits, and infra rather than firefighting bugs.

What we don't quote here: vanity numbers we can't ground in a source. We have stable canonical metrics (career count, skill count, graph edges) and we'd rather give those than invent a conversion percentage. That principle, ported into client work, is one of the reasons clients pick this team.

Stack & tooling

FRAMEWORK
Next.js 14 (App Router) on Vercel
DATABASE
Supabase Postgres + RLS
AUTH
Supabase Auth + magic links
BILLING
Stripe (Subscriptions + One-time)
ANALYTICS
PostHog + Cloudflare Web Analytics + GSC + Bing Webmaster
EMAIL
Resend (transactional, API-first only)
BUG REPORTING
Telegram bot pipeline → Supabase
QA
Playwright (post-hydration assertions)
REVIEW
Codex (GPT-5) adversarial review on money/legal/migrations
DEPLOY
Lane-aware classifier; staging gate for risky changes

What this engagement looks like for you

The JobCannon playbook is not a JobCannon-only playbook. It's the same shape we apply to AI/SaaS builds for clients: a small senior pod, a typed engine with structured content, real billing on day one, programmatic SEO baked in from the start, validators and adversarial review in CI, no infinite roadmap.

If you're building something where the funnel is the business — assessment, onboarding, scoring, paywall — this is the team that has done it end to end and is doing it now.

◆ START A PROJECT

Want similar results?

Tell us what you're building. NDA available. We respond within 24 hours.