A senior engineering team designing and shipping an AI-powered career discovery product — 2,500+ careers, 1,500+ skills, 64,000+ graph edges, premium tier live, ranking on the first page of Google for high-intent assessment queries.
Most career-test products on the internet are content marketing dressed up as a tool — five generic personality questions, an email gate, and a glossy PDF that nobody reads. The opportunity was to build the opposite: a real assessment engine that produces specific, defensible career recommendations and converts free users to a paid tier they actually want.
The brief from the founder was unambiguous. Ship a product that:
None of those goals is hard in isolation. Combined, under one codebase, with one engineering pod, against a real launch date, they become a small platform.
We treated JobCannon as five products glued by one funnel.
We designed a domain-specific scoring engine in TypeScript rather than reaching for an LLM. Each test ships with a typed config (questions, dimensions, scoring math, trait thresholds) and a content module with localized copy. The engine handles serialization, reverse-coding, weighted axes, and result composition. New tests are added by writing two files, not patching prose into a results page.
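A minimal sketch of that shape, with hypothetical type and field names (the real configs are richer), shows why adding a test is a data change rather than a code change: the engine is one generic function over typed config.

```typescript
// Hypothetical shapes, illustrative of the pattern: each test is pure
// data (questions, dimensions, weights) plus one shared scoring function.
type Question = {
  id: string;
  dimension: string;      // which axis this question contributes to
  reverseCoded?: boolean; // "agree" means LOW on the axis
};

type TestConfig = {
  questions: Question[];
  scaleMax: number;                 // e.g. 5 for a 1–5 Likert scale
  weights: Record<string, number>;  // per-dimension weight
};

function score(
  config: TestConfig,
  answers: Record<string, number>,
): Record<string, number> {
  const totals: Record<string, number> = {};
  for (const q of config.questions) {
    const raw = answers[q.id];
    if (raw === undefined) continue; // skipped question
    // Reverse-coding: flip the answer across the scale midpoint.
    const value = q.reverseCoded ? config.scaleMax + 1 - raw : raw;
    const w = config.weights[q.dimension] ?? 1;
    totals[q.dimension] = (totals[q.dimension] ?? 0) + value * w;
  }
  return totals;
}
```

With that split, "write two files" means: one typed config like the above, one content module with the localized copy for each result band.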
Crucially, premium content lives in templates plus per-test data files. There is no hardcoded long-form prose tucked into a route — every premium section is generated from structured data so it can be regenerated, retranslated, or A/B tested without touching components.
2,536 EN careers. 1,533 skills. 64,317 graph edges. Each career carries skill weights, adjacency to neighboring careers, salary bands, and tags consumed by recommendation logic. The graph is a Supabase Postgres dataset behind a typed read API, regenerated nightly from a deterministic seed pipeline. Numbers are canonical and verified by check-scripts before any UI quotes them — never a guess in copy.
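The recommendation side of that graph can be sketched as a weighted overlap between a user's skill scores and each career's skill edges. This is a simplified illustration with hypothetical names, not the production ranking logic:

```typescript
// Hypothetical edge shape for the career→skill portion of the graph.
type Edge = { from: string; to: string; weight: number };

// Rank careers by weighted overlap between their skill edges and the
// user's skill scores from the assessment engine.
function rankCareers(
  edges: Edge[],                       // career→skill edges
  skillScores: Record<string, number>, // user's per-skill scores
  topN = 5,
): string[] {
  const totals = new Map<string, number>();
  for (const e of edges) {
    const s = skillScores[e.to];
    if (s === undefined) continue; // skill not measured for this user
    totals.set(e.from, (totals.get(e.from) ?? 0) + e.weight * s);
  }
  return [...totals.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, topN)
    .map(([career]) => career);
}
```

The same deterministic pipeline that seeds the graph nightly can regenerate an index like this, which is what makes the canonical counts checkable by script.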
The path from Start → first question → result → paywall → premium unlock is the entire business. We isolated those screens behind validators that block UI swaps, ref-binding regressions, and conditional handler assignments. CTAs that change shape on phase transitions get tested end-to-end, not just visually. A "visibility ≠ working" rule sits in the test plan — every CTA has a Playwright assertion that the click handler is actually wired.
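In the real suite that rule is enforced with Playwright assertions; reduced to a toy validator with hypothetical names, the "visibility ≠ working" check looks like this:

```typescript
// Toy validator illustrating the rule: a CTA that renders but has no
// wired click handler must FAIL the check, not pass as "visible".
type Cta = { label: string; visible: boolean; onClick?: () => void };

function assertCtaWired(cta: Cta): void {
  if (!cta.visible) {
    throw new Error(`CTA "${cta.label}" is not visible`);
  }
  if (typeof cta.onClick !== "function") {
    // The failure mode that visual checks miss: rendered, but inert.
    throw new Error(`CTA "${cta.label}" is visible but not wired`);
  }
}
```

The point of the check is the second branch: a phase transition that swaps the button component can leave the handler unbound while every screenshot still looks correct.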
Stripe code, premium gating, webhook handlers, and migrations get a mandatory adversarial review from a second LLM (Codex/GPT-5) before merge. Not a polite review — an actively hostile one, looking for double-charges, race conditions on webhook retries, idempotency gaps. The rule emerged after real bugs slipped through single-model review: now nothing money-touching ships without it.
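One of the patterns those adversarial reviews look for can be sketched in a few lines. This is an illustrative in-memory version with hypothetical names, not the production handler, which would claim the event ID via a database unique constraint:

```typescript
type WebhookEvent = { id: string; type: string };

class WebhookProcessor {
  // In production this would be a DB table with a unique constraint
  // on the event ID, so concurrent retries race on the insert.
  private processed = new Set<string>();
  private charges = 0;

  handle(event: WebhookEvent): "applied" | "duplicate" {
    // Stripe retries webhooks, so the same event.id can arrive more
    // than once; claiming the id before doing work makes retries no-ops.
    if (this.processed.has(event.id)) return "duplicate";
    this.processed.add(event.id);
    if (event.type === "checkout.session.completed") this.charges += 1;
    return "applied";
  }

  get chargeCount(): number {
    return this.charges;
  }
}
```

A single-model review tends to confirm the happy path; the hostile pass asks what happens when the same event arrives twice, ten seconds apart, on two workers.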
130+ test landing pages, 350+ blog articles, JSON-LD generators that emit Article + FAQPage + BreadcrumbList per page. We schema-validate every render path because Google saw payload duplication that curl couldn't see — Playwright post-hydration DOM became the verification layer. Free tier is permanent, by policy, so AEO answers can quote results without paywall traps.
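The generator pattern for one of those schema types can be sketched as follows. Field names follow the schema.org FAQPage vocabulary; the function name and input shape are illustrative:

```typescript
type Faq = { question: string; answer: string };

// Emit a schema.org FAQPage JSON-LD payload for a page's FAQ section.
function faqPageJsonLd(faqs: Faq[]): string {
  return JSON.stringify({
    "@context": "https://schema.org",
    "@type": "FAQPage",
    mainEntity: faqs.map((f) => ({
      "@type": "Question",
      name: f.question,
      acceptedAnswer: { "@type": "Answer", text: f.answer },
    })),
  });
}
```

Because the payload is generated, the same structured data can be re-emitted per locale and validated after hydration, which is where the duplication that curl missed actually showed up.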
"Visibility is not working. Build pass is not deploy. Deploy is not verified. Every claim closes with a curl or a Playwright assertion."
main for content/copy; a staging lane for risky changes (test entry, paywall, schema, migrations). Risky changes never merge directly.

JobCannon is live, charging real money, and indexed in Google for the queries that matter. The premium tier shipped on schedule (April 2026), money flows through Stripe in production, and the Codex-adversarial review process has caught real defects before they reached users, including webhook race conditions and gating regressions that single-model review had missed.
The platform now operates on a publish-don't-rebuild rhythm. New tests, new careers, and new locales add to a stable engine rather than spawning new code paths. Operationally, the founder runs the product day-to-day; the engineering pod ships features, audits, and infra rather than firefighting bugs.
What we don't quote here: vanity numbers we can't ground in a source. We have stable canonical metrics (career count, skill count, graph edges) and we'd rather give those than invent a conversion percentage. That principle, ported into client work, is one of the reasons clients pick this team.
The JobCannon playbook is not a JobCannon-only playbook. It's the same shape we apply to AI/SaaS builds for clients: a small senior pod, a typed engine with structured content, real billing on day one, programmatic SEO baked in from the start, validators and adversarial review in CI, no infinite roadmap.
If you're building something where the funnel is the business — assessment, onboarding, scoring, paywall — this is the team that has done it end to end and is doing it now.
Tell us what you're building. NDA available. We respond within 24 hours.