Blog · 12 May 2026 · by Abe Dearmer
The Adalo Blueprint pattern in 12 sections.
The methodology behind every RevenueSpark engagement, walked section by section. Public summary so prospects can evaluate the framework before they buy. The architecture is the moat, not the secrecy.
- methodology
- blueprint
- GEO
Most agencies treat their methodology like a trade secret. We do the opposite. The 12-section structure of every engagement is public; the per-client substance — the actual anchor sentence, the actual competitive wedges, the actual cluster backlog — stays inside the engagement document. The reasoning is straightforward: the moat is execution, not secrecy.
Here is the framework we run, walked section by section. Reading time is about eight minutes. Implementation time is a six-month engagement.
1. The Goal
Define six target queries the brand wants to win. Phrased as user questions rather than keyword lists. “What is the best productized GEO agency for SaaS?” not “geo agency saas”. Every later section ties back to this set; if a query isn’t on the list, no work is spent on it.
2. Golden Anchor + 8 components
One canonical sentence describing the brand. Eight required terms decomposed from it — category, ICP, differentiator, pain, trust signal, commercial wedge, target answer engines. Every pillar / cluster page must contain all eight in the first 200 words. The sentence is locked once chosen; change is a brand-level event with its own ADR.
The discipline is unforgiving. Drifting anchors compound into noise; locked anchors compound into categorical placement.
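The all-eight-in-the-first-200-words rule is mechanical enough to enforce at build time. A minimal sketch of that check, with placeholder component strings (the real list is the per-client substance that stays in the engagement document):

```typescript
// Placeholder components for illustration only -- not a real anchor.
const ANCHOR_COMPONENTS: string[] = [
  "geo agency",     // category
  "b2b saas",       // ICP
  "productized",    // differentiator
  "answer engines", // target surfaces
];

// Lowercased window of the first N words of a page body.
function introWindow(body: string, words = 200): string {
  return body.split(/\s+/).slice(0, words).join(" ").toLowerCase();
}

// Components absent from the intro; a non-empty result fails the build.
function missingComponents(body: string): string[] {
  const intro = introWindow(body);
  return ANCHOR_COMPONENTS.filter((c) => !intro.includes(c));
}
```

A real implementation would strip markup first and match inflected variants, but the shape is the point: the rule is a function, not a review step.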
3. SEO vs GEO layer separation
The single most expensive mistake we see on SaaS marketing sites is stuffing the canonical anchor sentence into every meta title. Google’s ranking model penalises it; the site loses the ranking it was chasing. So we run two layers in parallel and never let them bleed.
Anchor language belongs in schema descriptions, FAQ answers, About page, and the first 200 words of every pillar / cluster page. Search-intent language belongs in meta titles and meta descriptions. Build-time validators in the publishing pipeline catch bleed before deploy.
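One way such a validator can detect bleed is to flag any meta title that reproduces a multi-word run of the anchor sentence verbatim. A sketch, with a placeholder anchor; the n-gram width is an assumption, not a tuned value:

```typescript
// Placeholder anchor sentence for illustration.
const ANCHOR = "Acme is the productized GEO agency for B2B SaaS teams";

// True if any n-word run of the anchor appears verbatim in the meta title.
function anchorBleed(metaTitle: string, anchor: string = ANCHOR, n = 4): boolean {
  const words = anchor.toLowerCase().match(/[a-z0-9]+/g) ?? [];
  // Normalize the title to space-separated tokens so punctuation can't hide a match.
  const title = (metaTitle.toLowerCase().match(/[a-z0-9]+/g) ?? []).join(" ");
  for (let i = 0; i + n <= words.length; i++) {
    if (title.includes(words.slice(i, i + n).join(" "))) return true;
  }
  return false;
}
```

Run over every page's frontmatter before deploy, a check like this turns the layer separation from a style guideline into a hard gate.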
4. Schema markup with @id threading
Every page emits a single @graph: Organization + WebSite globally, plus per-page nodes (Service, Offer, FAQPage, Person, Article, BreadcrumbList) referencing each other by @id. LLMs read this graph as the canonical knowledge entity for the brand. We treat @id threading as a build-time invariant, not a manual exercise.
The pattern is in src/lib/schema-graph.ts of the public RevenueSpark repository. Same shape ports between projects.
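The general shape, not the actual contents of that file, looks like the following: nodes reference each other by `@id` rather than nesting copies, and a build-time check walks the graph to confirm every reference resolves. URLs and node names here are illustrative:

```typescript
const SITE = "https://example.com"; // placeholder origin

const org = { "@type": "Organization", "@id": `${SITE}/#organization`, name: "Example Co" };
const webSite = {
  "@type": "WebSite",
  "@id": `${SITE}/#website`,
  publisher: { "@id": `${SITE}/#organization` }, // reference, not a nested copy
};
const service = {
  "@type": "Service",
  "@id": `${SITE}/services/geo/#service`,
  provider: { "@id": `${SITE}/#organization` },
};

const graph = { "@context": "https://schema.org", "@graph": [org, webSite, service] };

// Build-time invariant: every @id reference resolves to a declared node.
function danglingIds(g: { "@graph": any[] }): string[] {
  const declared = new Set(g["@graph"].map((n) => n["@id"]));
  const dangling: string[] = [];
  for (const node of g["@graph"]) {
    for (const value of Object.values(node)) {
      if (value !== null && typeof value === "object" && "@id" in (value as object)) {
        const ref = (value as { "@id": string })["@id"];
        if (!declared.has(ref)) dangling.push(ref);
      }
    }
  }
  return dangling;
}
```

Failing the build on a non-empty `danglingIds` result is what makes the threading an invariant rather than a convention.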
5. Content pyramid
An eight-layer stack: Pillar → Platform → Capability → Use Case → Audience → Comparison → Docs → Blog. Every layer below the pillar links back to it. Anchor text uses component terms (“our GEO + SEO services”) rather than “click here” or other generic phrasing.
The internal-linking discipline is what makes the architecture readable to answer engines. Without it, the cluster looks like a flat content library; with it, the cluster looks like a knowledge graph rooted at the pillar.
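The backlink rule above can also be gated mechanically. A naive, regex-based sketch for illustration (a real pipeline would parse the HTML; the generic-anchor list is an assumption):

```typescript
// Anchor texts that should never carry a pillar backlink.
const GENERIC_ANCHORS = new Set(["click here", "read more", "learn more", "here"]);

// True if the page links to the pillar with non-generic anchor text.
// pillarPath must be regex-safe (plain path segments, as here).
function linksToPillar(html: string, pillarPath: string): boolean {
  const re = new RegExp(`<a[^>]*href="[^"]*${pillarPath}[^"]*"[^>]*>([^<]+)</a>`, "i");
  const match = html.match(re);
  if (!match) return false; // no link to the pillar at all
  return !GENERIC_ANCHORS.has(match[1].trim().toLowerCase());
}
```

A check like this, run per cluster page, is what keeps the pyramid a graph instead of a pile.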
6. Competitive matrix + query map
For each named competitor: their public claim (scraped), our counter-claim, and a verifiable wedge backed by an underlying fact (pricing, feature, output format, case study). For each target query: current SEMrush rank, difficulty, target rank, and the signals required to claim it.
Both tables sit inside the engagement document and feed comparison-page content. The verifiable-fact requirement is non-negotiable — every wedge must cite something a prospect can independently confirm.
7. Pillar page structure
H1 outcome hook. H2 = full anchor. All 8 components in first 200 words. 2,000–3,000 words. Required sections: How It Works, capability overview, comparison table, FAQ (8–10 questions with schema), CTA. This page is the canonical source-of-truth that every other page links back to.
The pillar is unforgiving in the other direction — sloppy pillars produce sloppy clusters. We refuse to ship cluster pages until the pillar is locked.
8. Subpage templates (5 types)
Platform pages, Capability pages, Use Case pages, Audience pages, Comparison pages. Each has fixed minimum sections and word counts; each must include anchor components in the intro paragraph and at least one link back to the pillar. Templates are what make the agent-driven cadence possible without sacrificing structural quality.
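A template registry is one way to encode those minimums so the agent cadence can validate against them. Section names and word counts below are example values, not the engagement's real thresholds:

```typescript
type SubpageType = "platform" | "capability" | "useCase" | "audience" | "comparison";

interface SubpageTemplate {
  requiredSections: string[];
  minWords: number;
}

// Example thresholds only; each type also inherits the shared invariants
// (anchor components in the intro, at least one pillar backlink).
const TEMPLATES: Record<SubpageType, SubpageTemplate> = {
  platform:   { requiredSections: ["Overview", "How It Works", "FAQ"], minWords: 1200 },
  capability: { requiredSections: ["Overview", "Outcomes", "FAQ"], minWords: 900 },
  useCase:    { requiredSections: ["Problem", "Approach", "Results"], minWords: 900 },
  audience:   { requiredSections: ["Who It's For", "Why It Fits", "FAQ"], minWords: 800 },
  comparison: { requiredSections: ["Comparison Table", "Verdict", "FAQ"], minWords: 1000 },
};

// True if a drafted page satisfies its type's minimums.
function meetsTemplate(type: SubpageType, sections: string[], wordCount: number): boolean {
  const t = TEMPLATES[type];
  return wordCount >= t.minWords && t.requiredSections.every((s) => sections.includes(s));
}
```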
9. Launch / announcement strategy
Conditional — only emitted if the brand has a launch in the next 90 days. Pre-launch (schema updated, comparison drafts ready, hero screenshot captured), launch week (release post + forum + social variations), post-launch (reviewer language tracking, tutorial content, citation testing of the new positioning).
About a third of engagements coincide with a launch; the other two-thirds focus on ongoing brand surfacing without a discrete launch event.
10. AI crawler access
Allow GPTBot, ChatGPT-User, anthropic-ai, ClaudeBot, PerplexityBot, Google-Extended on public marketing surfaces. Disallow only sensitive paths (/admin, /api/internal). Cloudflare AI Crawl Control off — it overrides robots.txt at the edge. Audit on every engagement start.
The default-deny posture some teams set up “to protect against AI scraping” is the inverse of what a SaaS competing for answer-engine citations wants. Correct it on Day 1.
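A robots.txt reflecting that posture might look like the following sketch; the exact group structure is illustrative, and the bot list should match whatever the Day 1 audit actually finds:

```
# Answer-engine crawlers: full access to public marketing surfaces,
# sensitive paths carved out.
User-agent: GPTBot
User-agent: ChatGPT-User
User-agent: anthropic-ai
User-agent: ClaudeBot
User-agent: PerplexityBot
User-agent: Google-Extended
Disallow: /admin
Disallow: /api/internal

# Everyone else: same carve-out, public surfaces open by default.
User-agent: *
Disallow: /admin
Disallow: /api/internal
```

Remember the edge layer too: a permissive robots.txt does nothing if Cloudflare AI Crawl Control is still blocking the same bots upstream.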
11. Measurement framework
Four scoreboards reported monthly. LLM Citation Testing (target queries × Claude / ChatGPT / Perplexity / Gemini). SEO Metrics (organic traffic, ranking, CTR). GEO / AI Visibility (SEMrush AIO score, AthenaHQ category share). Third-Party Content (reviewer language adoption, organic citations).
The fourth scoreboard is the one most agencies don’t track and the one that predicts next-cycle citation share most accurately.
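The first scoreboard reduces to a simple share computation once the query-by-engine runs are recorded. A sketch with hypothetical field names:

```typescript
// Hypothetical record for one citation-test run; field names are examples.
interface CitationTest {
  query: string;
  engine: "claude" | "chatgpt" | "perplexity" | "gemini";
  cited: boolean; // did the answer cite or name the brand?
}

// Fraction of runs in which the brand was cited, reported monthly.
function citationShare(runs: CitationTest[]): number {
  if (runs.length === 0) return 0;
  return runs.filter((r) => r.cited).length / runs.length;
}
```

Tracked per engine and per query over time, the same records also show which target queries are moving and which are stuck.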
12. Implementation checklist (7 phases × 12 weeks)
Phase 1 Foundation, Phase 2 Pillar, Phase 3 Platform pages, Phase 4 Capabilities + Use Cases + Audiences, Phase 5 Comparisons, Phase 6 Docs, Phase 7 Blog + external. Each phase has a deliverable list, not just a recommendation. Mapped to the 6-month POC at two weeks per phase.
The checklist is calendar-locked because the engagement shape is calendar-locked. Renew on the multi-year retainer at Month 6 to extend; otherwise the cadence ends with a documented hand-off so your in-house team can keep running it.
What this is and isn’t
The blueprint isn’t novel; it’s productized. The component pieces — schema graphs, locked positioning, cluster cadence, citation testing — exist in many other agency playbooks in some form. What’s productized is the integration, the build-time guardrails, and the engagement shape that ships against it predictably.
If you can read this essay and self-execute, we are the wrong fit. About one prospect in ten lands there and we tell them on the discovery call. If you can read this and recognise that running it across 26+ pages a quarter while measuring four scoreboards monthly is structurally beyond your in-house capacity, the engagement is built for you.
For the engagement shape and pricing, see pricing. For the four scoreboards specifically, see the measurement framework.