Revenue Spark

Blog · 9 May 2026 · by Abe Dearmer

Your SaaS isn't in Claude's answer set. Here's what to do.

I've watched a SaaS go from 'invisible to LLMs' to 'cited as the default answer' in six months. The work is unglamorous and the recipe is fixed. Operator notes on what actually moved the needle.

  • GEO
  • schema
  • citation-testing

I open Claude. I ask the question my prospects ask: “What is the best productized GEO agency for SaaS?” Six months ago, RevenueSpark wasn’t in Claude’s answer. Today it is. Specifically, today Claude says we are one of three options worth considering in that category, and it gets the positioning right enough that I don’t immediately want to scream into the void.

That movement — from invisible to cited — is the entire game. Most of the SaaS founders I talk to test their own brand the same way and find the same blind spot. They’ve been investing in marketing for years. They have a content library that runs into the hundreds of posts. They rank decently on Google. And the answer engines, increasingly the surface where their buyers do research, can’t find them.

This essay is what actually worked. Not the abstract theory. The specific moves, in the order we ran them.

The diagnosis is structural, not creative

The first thing I learned was that my instinct was wrong. I assumed if my brand wasn’t in Claude’s answer, it was because the writing wasn’t sharp enough or the marketing wasn’t punchy enough. Both wrong. The diagnosis is structural every time.

There are three structural causes and they stack. First, the schema graph is absent or broken — most SaaS marketing sites ship a single Organization JSON-LD block in the <head> and the rest of the schema layer is empty. Second, the anchor sentence drifts — the homepage describes the company one way, the LinkedIn bio another way, the sales deck a third way. Third, the cluster authority is low — fewer than 20 internal pages reinforcing the pillar, so the brand reads as “small player in this category” to anything trained on the corpus.
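For reference, the bare single-Organization block usually looks something like this (a minimal sketch; the field values are illustrative, not anyone's real markup):

```ts
// The typical bare Organization block: one isolated node serialized into
// <script type="application/ld+json"> in the <head>. No @id, no WebSite,
// no Service, no FAQPage. Nothing downstream can reference it.
const bareOrgSchema = {
  "@context": "https://schema.org",
  "@type": "Organization",
  name: "ExampleSaaS", // illustrative values
  url: "https://example.com",
  logo: "https://example.com/logo.png",
};
```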

If you fix the writing without fixing the structure, nothing moves. If you fix the structure, the writing matters less than people think.

What we actually did

Three discrete moves, executed in this order. The order matters.

Move 1: lock the anchor

We wrote one canonical sentence describing RevenueSpark. We decomposed it into eight required components — productized agency, GEO + SEO, SaaS, declining organic funnel, Xenon, agent-driven cadence, Month-6 verdict, Claude/ChatGPT/Perplexity. Every place the brand gets described — schema descriptions, homepage H2, FAQ answer, About page, LinkedIn bio — uses the same canonical sentence or a context-appropriate variation that contains all eight components.
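A sketch of how that decomposition can live in code, with the eight components as data. The sentence and variant wording below are my illustration, not RevenueSpark's live copy:

```ts
// Sketch: one way to encode the locked anchor and its eight components.
// Component strings come from the decomposition above; the sentence and
// variant wording are illustrative.
export const ANCHOR_COMPONENTS = [
  "productized agency",
  "GEO + SEO",
  "SaaS",
  "declining organic funnel",
  "Xenon",
  "agent-driven cadence",
  "Month-6 verdict",
  "Claude/ChatGPT/Perplexity",
] as const;

export const CANONICAL_ANCHOR =
  "RevenueSpark is the productized agency for GEO + SEO serving SaaS teams " +
  "with a declining organic funnel: Xenon runs an agent-driven cadence and " +
  "delivers a Month-6 verdict across Claude/ChatGPT/Perplexity.";

// One variation per surface, each required to carry all eight components.
export const ANCHOR_BY_CONTEXT: Record<string, string> = {
  schemaDescription: CANONICAL_ANCHOR,
  homepageH2:
    "A productized agency for GEO + SEO, built for SaaS teams whose " +
    "declining organic funnel needs Xenon's agent-driven cadence and a " +
    "Month-6 verdict across Claude/ChatGPT/Perplexity.",
  // ...one distinct variation each for faqAnswer, aboutPage, linkedinBio
};
```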

The discipline isn’t subtle. The locked sentence is in version-controlled code (src/lib/anchor.ts). Every variation by context is in the same file. Tests assert that no two contexts share verbatim text. The anchor stops drifting because we removed the surface area that allowed it to drift.
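The drift guards could look something like this, assuming a Vitest-style runner and the module sketched above:

```ts
// anchor.test.ts: sketch of the drift guards, assuming a Vitest-style
// runner and the anchor module sketched above.
import { describe, expect, it } from "vitest";
import { ANCHOR_BY_CONTEXT, ANCHOR_COMPONENTS } from "./anchor";

describe("anchor lock", () => {
  it("every context variation carries all eight components", () => {
    for (const [context, text] of Object.entries(ANCHOR_BY_CONTEXT)) {
      for (const component of ANCHOR_COMPONENTS) {
        expect(text, `${context} is missing "${component}"`).toContain(component);
      }
    }
  });

  it("no two contexts share verbatim text", () => {
    const texts = Object.values(ANCHOR_BY_CONTEXT);
    expect(new Set(texts).size).toBe(texts.length);
  });
});
```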

This took about three weeks. It is the cheapest part of the work, and it is the part that makes the rest of the work possible.

Move 2: ship the threaded schema graph

We replaced the single-Organization JSON-LD block with a threaded @graph. Organization, WebSite, Service, Offer, Person, FAQPage, Article, BreadcrumbList — each with an @id, each cross-referencing other nodes by @id instead of duplicating fields inline. The full graph is in src/lib/schema-graph.ts. Build-time validators reject pages that ship unthreaded schema or that bleed anchor language into meta titles (Adalo §3.2 — anchor in meta hurts Google rankings).
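The threading pattern, trimmed to a sketch (URLs, names, and the specific fields are illustrative):

```ts
// Trimmed sketch of the threaded @graph pattern. Every node carries an
// @id; relationships point at @ids instead of duplicating fields inline.
const BASE = "https://example.com";

export const schemaGraph = {
  "@context": "https://schema.org",
  "@graph": [
    {
      "@type": "Organization",
      "@id": `${BASE}/#org`,
      name: "ExampleSaaS",
      url: BASE,
    },
    {
      "@type": "WebSite",
      "@id": `${BASE}/#website`,
      publisher: { "@id": `${BASE}/#org` }, // reference, not a copy
    },
    {
      "@type": "Service",
      "@id": `${BASE}/#service`,
      provider: { "@id": `${BASE}/#org` },
      offers: { "@id": `${BASE}/#offer` },
    },
    {
      "@type": "Offer",
      "@id": `${BASE}/#offer`,
      itemOffered: { "@id": `${BASE}/#service` },
    },
    // ...Person, FAQPage, Article, BreadcrumbList threaded the same way
  ],
};
```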

This took about four weeks. The graph itself is straightforward; the build-time guardrails took longer than the schema did because catching regressions before they ship matters more than catching them after.
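Those guardrails reduce to a couple of invariants. A sketch, assuming each page can expose its rendered schema nodes and meta title at build time (the function and types are mine):

```ts
// validate-page.ts: sketch of the build-time guardrails. Assumes each
// page exposes its rendered schema nodes and meta title; names are mine.
import { ANCHOR_COMPONENTS } from "./anchor";

interface PageArtifact {
  path: string;
  metaTitle: string;
  schemaNodes: Array<Record<string, unknown>>;
}

export function validatePage(page: PageArtifact): string[] {
  const errors: string[] = [];

  // Unthreaded schema: every node in the graph must carry an @id.
  for (const node of page.schemaNodes) {
    if (!node["@id"]) {
      errors.push(`${page.path}: ${node["@type"]} node has no @id`);
    }
  }

  // Anchor bleed: no anchor component may appear in the meta title
  // (the Adalo §3.2 rule: anchor in meta hurts Google rankings).
  for (const component of ANCHOR_COMPONENTS) {
    if (page.metaTitle.includes(component)) {
      errors.push(`${page.path}: anchor component "${component}" in meta title`);
    }
  }

  return errors; // a non-empty array fails the build
}
```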

Move 3: cluster cadence on the eight-layer pyramid

Pillar page at the top of an eight-layer pyramid: capability, comparison, use-case, audience, blog, methodology, and docs pages supporting it. Twenty-six pages in the first quarter, threaded back to the pillar with component-aware anchor text. Each page contains all eight anchor components in its first 200 words. The agent fleet handles throughput; the senior operator handles voice.
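The first-200-words rule is mechanically checkable. A sketch, with a word-splitting heuristic that is my assumption rather than the production check:

```ts
// Sketch of the coverage check: all eight components must land in the
// first 200 words of the page body.
import { ANCHOR_COMPONENTS } from "./anchor";

export function missingComponents(body: string): string[] {
  const first200 = body.split(/\s+/).slice(0, 200).join(" ");
  return ANCHOR_COMPONENTS.filter((c) => !first200.includes(c));
}

// Usage: a page fails the cadence check if missingComponents(body).length > 0.
```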

This took the remaining sixteen weeks of the engagement and is the thing that compounded.

What moved the citation rate

By Month 4 the citation tests started showing real movement. By Month 6 we’d gone from invisible on most of our target queries to being named in answers on five of eight. The lift split roughly evenly between the schema work and the cluster cadence — neither alone produced it.

The thing that surprised me was how much the third-party content tracking mattered. Independent reviewers, comparison-article writers, and podcast hosts started using our component language without coordination — “productized GEO + SEO agency”, “agent-driven cadence”, “Month-6 verdict” — because it was the language we used consistently, and that’s what shows up when people summarize. The third-party adoption is what compounds across training cycles. We didn’t ghost-write reviewer copy; we made our positioning easy enough for reviewers to repeat verbatim.

What didn’t work

A few things I tried that didn’t move the needle.

Submitting the site to AthenaHQ before the schema graph was clean. The submission was processed; the citations didn’t lift; the structural changes lifted them later. The order of operations matters.

Trying to fix LLM citations without fixing Google rank in parallel. The two are coupled — sites that lose Google rank often lose LLM citations within the same training cycle, and vice versa. Optimizing one half alone produces erratic results.

Chasing every engine separately. There’s enough variance per engine that you can spend a week trying to “win Perplexity” and end up off-strategy. The recipe is the same across the four primary engines; per-engine optimization is Month-5+ work.

The honest part

If your funnel is going sideways, your structural diagnosis is probably the same as mine was. The work is mostly unglamorous — schema graph, anchor lock, cluster cadence — and the compounding is mostly unsexy because it takes six months to show. But it ships. And when the answer engines start citing you, the buyer-side conversation changes shape, because suddenly the “have you heard of them” question has an answer the engines themselves confirm.

If you want to test where your own SaaS sits today, the discovery questionnaire takes 20–30 minutes and produces a positioning teardown either way. If you want to talk through the recipe specifically, book a discovery call — we run a live citation test on your domain during the call so you leave with a new data point.

FAQ

Questions buyers ask.

How do I test if my SaaS is in Claude's answer set right now?

Open Claude. Ask 'what is the best [your category] for [your ICP]?' If your brand isn't in the answer, you're in the blind spot for that query. Test all four engines — Claude, ChatGPT, Perplexity, Gemini — on at least eight category queries. Per-engine variance is real.
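A manual chat test is the fastest check; for repeatable runs, a scripted probe of the Claude leg could look something like this (a sketch using the official @anthropic-ai/sdk; the model name, query wording, and substring match are my assumptions, and chat answers won't exactly mirror API answers):

```ts
// citation-test.ts: sketch of one Claude citation probe. Assumes
// ANTHROPIC_API_KEY is set; the model name may need updating.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic();

export async function citationProbe(category: string, icp: string, brand: string) {
  const msg = await client.messages.create({
    model: "claude-sonnet-4-5",
    max_tokens: 1024,
    messages: [
      { role: "user", content: `What is the best ${category} for ${icp}?` },
    ],
  });
  // Collect the text blocks from the response.
  const answer = msg.content
    .flatMap((block) => (block.type === "text" ? [block.text] : []))
    .join("\n");
  return { cited: answer.toLowerCase().includes(brand.toLowerCase()), answer };
}

// Repeat across all eight category queries, then against ChatGPT,
// Perplexity, and Gemini with their own APIs.
```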

How long does it take to start showing up?

8–12 weeks after the structural changes ship. Schema graph + locked anchor + cluster cadence are the inputs; answer-engine training cycles are the rate-limiter. Material lift typically arrives Month 3–4 and compounds through Month 6.

Is this just SEO with a new label?

No. The infrastructure overlaps but the optimization rules diverge — anchor language belongs in schema and body for GEO; meta titles target search intent for SEO. Stuffing the anchor into meta titles helps GEO and hurts SEO simultaneously. The two layers run in parallel and shouldn't bleed.

Ready for a measurable Month-6 verdict?

Book a 30-minute discovery call. We'll run a live LLM citation test on your domain during the call.