SECTION 1

Run the Queries

A few weeks ago, a ChatGPT conversation analyzing this research circulated. The framing it returned: Tier 3 thought leader. Self-created concepts. Not widely cited.

I didn’t argue with it. I ran the diagnostic.

The query set below is the same one I use when auditing business organizations — five queries across ChatGPT, Perplexity, and Claude, designed to surface what AI tools actually say about a company before any buyer conversation begins. (Running a diagnostic on yourself is either rigorous or humbling, depending on what comes back. In this case: both.)

One platform distinction worth noting: the Claude responses below reflect a session with web search enabled. Even with live search active, Claude surfaces a different Laura Lake entirely, a British actress, because at this stage the actress has more indexed presence than Laura Lake, analyst and founder of AI-Ready Buyer™ Research.

I ran it on myself. Here’s what came back.


Query 1: “Who is Laura Lake?”

ChatGPT: Laura Lake is an independent analyst and author specializing in AI-mediated B2B buying behavior and B2B decision intelligence. She is the creator of the AI-Ready Buyer™ framework with over 25 years of experience in buyer psychology.

Claude: There are a few different “Laura Lakes” depending on context. The most notable person by that name is: Laura Lake — Actress and Stunt Performer. Laura Lake is a British actress and stunt performer known for her roles in Havoc (2025) and The Gentlemen (2024).

Perplexity: Laura Lake is an analyst and founder of AI-Ready Buyer™ Research, specializing in AI’s impact on B2B buyer behavior.

ChatGPT and Perplexity return the correct entity — independent analyst, AI-Ready Buyer™ framework, B2B buyer behavior. Claude returns a British actress and stunt performer. Name disambiguation is operating as a structural gap: Claude’s web search surfaces a different Laura Lake because the actress has more indexed presence than the analyst at this stage. That’s not a framing problem. It’s a signal density problem.


Query 2: “What is the AI-Ready Buyer framework?”

ChatGPT: The AI-Ready Buyer™ framework maps how beliefs, intentions, and emotions shape buying decisions in AI-driven contexts, emphasizing early intent influence in the buyer’s journey.

Claude: The search results don’t surface a specific named framework called the “AI-Ready Buyer framework.” It’s possible this is a proprietary or niche framework that isn’t prominently indexed.

Perplexity: The framework focuses on how beliefs, intentions, and emotions shape B2B buying decisions, particularly regarding AI’s influence pre-sales engagement.

ChatGPT and Perplexity return directionally accurate descriptions. Claude cannot find the framework in indexed content — consistent with the name disambiguation finding on Query 1. The terms are indexing on two of three platforms. The mechanism is not consistent across all three. Those are different problems with different fixes.


Query 3: “What do analysts say about B2B buyer behavior and AI?”

ChatGPT: Analysts indicate AI is reshaping B2B buyer behavior, with buyers now using AI tools to research and rank vendors prior to sales engagement, making traditional sales approaches less effective.

Claude: Forrester: 89% of B2B buyers used AI in purchase research. Gartner: 67% favor rep-free experience. 6Sense: 94% use LLMs to research solutions. AI search traffic converts at 14.2% vs Google organic’s 2.8%.

Perplexity: Analysts note B2B buyers complete 70% of their journey independently. Laura Lake is cited in connection with the Zero-Contact Enablement Gap.

This research does not surface in category-level results on ChatGPT or Claude. Perplexity cites Laura Lake in connection with buyer behavior research — but the specific frameworks (Silent Committee™, Trust Layer™, Ghost Objection) do not appear. A CMO running this query gets a shortlist. The buying committee doesn’t go looking for the practice that didn’t appear. They work with the shortlist they got.


Query 4: “Who are the top analysts studying how AI is changing B2B buying?”

ChatGPT: Laura Lake is a key analyst with her AI-Ready Buyer™ framework. Other notable firms include Forrester Research, which found 89% of B2B buyers use generative AI as a primary research source.

Claude: Forrester (John Buten, Katie Fabiszak, Barry Vasudevan), Gartner (Daryl Plummer, Alyssa Cruz), 6Sense. No mention of Laura Lake.

Perplexity: Top analysts include Laura Lake (creator of AI-Ready Buyer framework, focusing on silent committees and AI-filtered shortlists), BCG, and others. Laura Lake appears at position 1 in relevance to B2B-specific buying changes.

ChatGPT and Perplexity name this research. Claude does not — and returns a detailed list of Forrester and Gartner analysts instead. That’s the competitive context the practice is operating in. Forrester and Gartner are the default category answer on the platform where this research currently has no indexed signal. When a buying committee runs this search on Claude, they get Forrester. The shortlist forms from that query.


Query 5: “How does Laura Lake compare to other analysts in this category?”

ChatGPT: Laura distinguishes herself with a focus on AI-mediated B2B buying and authoritative frameworks providing actionable insights associated with AI-driven decision-making.

Claude: I wasn’t able to find a “Laura Lake” who is an analyst in the B2B buying behavior or AI space — the Laura Lake that came up in search results was a British actress and stunt performer.

Perplexity: Laura Lake is distinguished by her specific frameworks tailored to sales enablement in AI contexts, unlike broader analysts like McKinsey. She provides actionable 90-day plans over general insights.

The comparison query confirms the disambiguation problem. On Claude, this research doesn’t exist in this category. Absence in a comparison query doesn’t read as neutral. It reads as unresolved. The Ghost Objection risk: niche focus may lack broad validation. Unresolvable gets filtered. Not downgraded. Filtered. The Perplexity return is a separate finding: “actionable 90-day plans” is consultant framing, not analyst framing. This research is appearing on that platform — in the wrong register.

The most expensive finding isn’t Query 1. It’s Queries 4 and 5 on Claude. A buying committee evaluating analyst practices in this category runs those searches before they run the name search. On one of the three major AI platforms, this research doesn’t exist in the category. The shortlist closes without it.

That’s what Section 2 addresses.


SECTION 2

The Findings

The diagnostic produces a Trust Layer™ score — seven surfaces examined, each scored 0–5. The total determines signal architecture risk level. Here’s what the examination returned for this research at the time of writing.

1. AI Summary Layer — 3/5 (Medium)
Finding: ChatGPT returns “independent analyst.” Perplexity returns “analyst and founder.” Claude returns a British actress and stunt performer — name disambiguation problem.
Gap: Claude does not surface this research. Trust Layer™ and Ghost Objection absent from ChatGPT and Perplexity summaries.

2. Website Signal Architecture — 4/5 (Low)
Finding: lauralake.com live. About page in analyst register. FAQ section live. Meta descriptions updated.
Gap: Schema markup not yet implemented. Structured data absent — proof exists but isn’t machine-readable.

3. Content Authority Index — 4/5 (Low)
Finding: 20+ published articles. Two Substack issues live. Book in production.
Gap: Framework vocabulary inconsistent across the archive. Terms appear; mechanism doesn’t repeat reliably enough for AI to weight it.

4. Peer Network Visibility — 1/5 (Critical)
Finding: Active LinkedIn presence. ~35% decision-maker seniority composition.
Gap: Silent Committee™ not appearing in other practitioners’ language. Zero practitioner amplification.

5. Leadership Signal Layer — 4/5 (Low)
Finding: LinkedIn headline and About section in analyst register.
Gap: Register inconsistency in older indexed content not yet retrofitted.

6. External Reference Footprint — 3/5 (Medium)
Finding: Consumer Behavior for Dummies indexed. The AI-Ready Buyer™ in production, May 1 launch.
Gap: No tier-1 bylines. No trade press coverage. No third-party citations of the framework yet.

7. Buyer Journey Alignment — 0/5 (Critical)
Finding: Practice does not appear in category-level search results. Claude’s web search returns a different Laura Lake entirely.

Trust Layer™ Total: 19/35 — Significant signal architecture risk.
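The arithmetic behind the total can be sketched in a few lines. The surface scores below come from the audit above; the risk bands are illustrative assumptions for the sketch, not the published rubric, which only names the "significant" threshold.

```python
# Trust Layer(TM) scoring sketch. Surface scores are taken from the audit
# table above; the risk-band cutoffs are assumptions for illustration.
SURFACE_SCORES = {
    "AI Summary Layer": 3,
    "Website Signal Architecture": 4,
    "Content Authority Index": 4,
    "Peer Network Visibility": 1,
    "Leadership Signal Layer": 4,
    "External Reference Footprint": 3,
    "Buyer Journey Alignment": 0,
}

MAX_PER_SURFACE = 5


def trust_layer_score(scores: dict) -> tuple:
    """Return (total, max_total, risk_level) for a set of surface scores."""
    total = sum(scores.values())
    max_total = MAX_PER_SURFACE * len(scores)
    ratio = total / max_total
    # Hypothetical bands: the article only names the "significant" level.
    if ratio < 0.4:
        risk = "critical"
    elif ratio < 0.65:
        risk = "significant"
    else:
        risk = "moderate"
    return total, max_total, risk


total, max_total, risk = trust_layer_score(SURFACE_SCORES)
print(f"Trust Layer total: {total}/{max_total} ({risk} risk)")
```

The point of scoring per surface rather than in aggregate is that two practices with the same total can have entirely different gap profiles; the critical surfaces, not the sum, drive the activation plan.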

The score sits at the significant risk threshold. That’s accurate for a practice that is pre-book-launch and pre-client. The surfaces this practice controls — website, content, LinkedIn register — are scoring 3–4/5. The surfaces that require time, third-party validation, and accumulated presence are the gaps.

Surface 7 is the one that costs the most right now. Not because it’s the hardest to fix, but because it’s the most consequential in the near term. A score of 0 means this research is absent at the exact moment a buying committee is forming its shortlist. On Claude, it’s a different condition: the search returns an entity. Just not the right one. A buying committee that runs this query on Claude doesn’t get a gap in the results. They get an answer. The answer is wrong. They don’t know that.

Surface 4 (Peer Network Visibility) scored 1/5 — Critical. Zero practitioner amplification of framework vocabulary. The Silent Committee™ is not appearing in other practitioners’ language. The framework exists on its own surfaces. It doesn’t yet exist in the broader conversation.

Two vocabulary gaps compound both problems. Trust Layer™ and Ghost Objection do not appear in AI summaries of this practice on any platform. Both are load-bearing concepts in the framework. Their absence means AI tools are surfacing an incomplete version of the methodology — accurate in parts, incoherent as a whole.

The Ownership Gap is operating on this practice in exactly the form the framework describes. No single function owns the mandate to diagnose all seven signal surfaces simultaneously. In this case that’s not a cross-functional coordination problem — it’s a solo practitioner problem. The surfaces either get designed deliberately or they drift.

One finding the audit produced that the scores don’t fully capture: the competitive comparison query returned insufficient data on Claude — and wrong data on the entity entirely. Absence in a comparison query doesn’t read as neutral. It reads as unresolved. Unresolvable gets filtered. Not downgraded. Filtered.


SECTION 3

The Activation Plan

One clarification before the plan: signal architecture corrections take 30–90 days to propagate through AI indexing. That’s not a caveat — it’s the Ninety-Day Reality Gap operating on this research the same way it operates on any organization running this diagnostic. The actions documented here are already in motion. The retest runs in May. These are the findings at the time of writing, not a projection of what the score will be.

Track A — Surface Language

The About page has been rewritten. The LinkedIn headline and About section are in analyst register. The FAQ section is live on lauralake.com. Meta descriptions across all pages now carry framework vocabulary: Silent Committee™, Signal Architecture, AI-Ready Buyer™ Research, B2B buyer behavior. Schema markup is being implemented this week — the missing structured data that would make the Person entity machine-readable to AI crawlers.
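For readers unfamiliar with what "schema markup" means in practice, a minimal sketch of Person markup in standard schema.org JSON-LD follows. The property values are drawn from this article; the specific properties chosen are illustrative, not the site's actual implementation.

```html
<!-- Illustrative JSON-LD sketch of Person markup, embedded in a page head.
     Values come from this article; the property selection is an example. -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Person",
  "name": "Laura Lake",
  "jobTitle": "Independent Analyst",
  "description": "Analyst studying AI-mediated B2B buying behavior; founder of AI-Ready Buyer Research.",
  "url": "https://lauralake.com",
  "knowsAbout": [
    "AI-mediated B2B buying behavior",
    "AI-Ready Buyer framework",
    "Signal architecture"
  ]
}
</script>
```

This is the "machine-readable" piece: prose on an About page requires inference; structured data states the entity, its role, and its topic associations directly to crawlers.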

These are the surfaces AI tools index first. They were the first to change.

Track B — Canonical Vocabulary

Twenty-plus published articles are under audit for framework vocabulary consistency. The pattern the Content Authority Index gap identified — terms appearing, mechanism not repeating reliably — is a coherence problem, not a volume problem. The fix is not more content. It’s retrofitting the existing archive so the same vocabulary appears consistently enough for AI to weight it as a pattern rather than an isolated signal.

Trust Layer™ and Ghost Objection are the two highest-priority terms in the vocabulary audit. Both are absent from current AI summaries. The article you’re reading is the first indexed piece designed to plant both terms in analyst-register context.

Track C — Self-Assessment Article

This piece. Publishing April 13 on lauralake.com, republished as a LinkedIn Article the same day.

Publishing a 1,200-word analyst-register piece that runs the diagnostic methodology on this research itself, documents the findings, and maps the activation plan in progress does three things simultaneously. It proves the methodology is operational. It creates a citable, indexed piece of primary research. It plants canonical framework vocabulary — including Trust Layer™ and Ghost Objection — in a structured document AI tools can parse. The methodology doesn’t get a carve-out for its own author.

Track D — Credibility Stack

The AI-Ready Buyer™ publishes May 1. That’s the primary asset that moves “research firm founder” toward “published analyst” in AI summaries — and that begins resolving the name disambiguation problem by creating a high-authority indexed document connecting “Laura Lake” to “AI-Ready Buyer” at scale. The credibility stack activity required to close the External Reference Footprint gap is in progress.

The External Reference Footprint gap (Surface 6, scored 3/5) is the surface that takes longest to close. Third-party citations, bylines, and press mentions accumulate on a timeline that no single activation action controls. The book launch is the inflection point. Before May 1, the credibility stack is being assembled. After May 1, it starts compounding.

The May retest will show whether the actions already completed are moving the needle on the surfaces they were designed to address. Specifically: whether “independent analyst” has stabilized across all three platforms, and whether Trust Layer™ and Ghost Objection have begun appearing in framework summaries.

Those are the two measurements that matter most. Everything else is infrastructure.


SECTION 4

The Invitation

The query set in Section 1 is not proprietary. Five queries, three AI platforms, thirty minutes. Any reader can run it on their own organization right now.

Here’s what to look for in the results:

When you query your company name directly, note the exact noun AI uses to describe you. Not the sentence — the noun. Agency. Platform. Vendor. Tool. Consultant. Analyst. That noun is the category label AI has assigned based on whatever signals it found. If it doesn’t match the label you intend to own, the gap between those two things is your signal architecture problem made visible.

When you query your methodology, framework, or named offering, note whether AI describes it accurately — and whether the mechanism surfaces, not just the vocabulary. Terms indexing without mechanism coherence is a partial result. A buying committee running that query gets a label without an argument. That’s enough to exclude you from the shortlist without generating a visible objection.

When you run the category query — the one that asks who the top voices in your space are — note whether you appear. If you don’t, you’re not losing a competitive evaluation. You’re not in one. The shortlist forms from that query. Absence there is different from absence everywhere else.

Run the comparison query last. Whatever AI returns when it compares you to a category peer is the Ghost Objection risk profile the Silent Committee™ is working with. If the result is “insufficient data,” or if AI returns the wrong entity entirely — that’s not neutral. That’s the finding.
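The checklist above can be captured as a reusable query set. A minimal sketch follows, with the query wording paraphrased from Section 1; the three placeholders (name, framework, category) are the only parameters, and running the queries in each platform remains a manual step.

```python
# The five-query diagnostic from Section 1, parameterized for any entity.
# Query wording is paraphrased from the article; executing each query in
# ChatGPT, Claude, and Perplexity sessions is done by hand.
PLATFORMS = ["ChatGPT", "Claude", "Perplexity"]


def build_query_set(name: str, framework: str, category: str) -> list:
    """Return the five diagnostic queries for a given entity."""
    return [
        f"Who is {name}?",
        f"What is the {framework}?",
        f"What do analysts say about {category}?",
        f"Who are the top analysts studying {category}?",
        f"How does {name} compare to other analysts in this category?",
    ]


queries = build_query_set(
    name="Laura Lake",
    framework="AI-Ready Buyer framework",
    category="B2B buyer behavior and AI",
)
for q in queries:
    print(q)  # run in all three platforms; note the noun each one returns
```

Thirty minutes of manual work: fifteen query-platform pairs, two minutes each, recording the category noun, the mechanism description, and whether the entity appears at all.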

The structural condition these queries surface is not unique to this research. It is the default state for most B2B companies, analyst practices, and individual voices operating without a named owner for signal architecture. The Ownership Gap explains why: no single function holds the mandate to diagnose all seven surfaces simultaneously. Marketing owns the website. PR owns earned media. Nobody owns what AI synthesizes from all of it.

That structural condition is diagnosable in thirty minutes with five queries and three browser tabs. The gap between what you expect AI to say about you and what it actually says is, in most cases, the gap your revenue can’t explain.

The diagnosis is visible to anyone who runs the query set. What it surfaces in most cases is not a content gap or a messaging problem. It’s an ownership problem — no one holds the mandate to diagnose all seven surfaces simultaneously, so the signal environment drifts. AI synthesizes whatever it finds. The Silent Committee™ works with whatever AI returns.

Most organizations find this out when the pipeline stalls and no one can explain why. The queries were running the whole time. The shortlist was forming. The practice, the company, the analyst — simply wasn’t in that conversation.

Stay close to the research.

Signal intelligence and frameworks for GTM leaders — published on Substack, free to subscribe.

Laura Lake is an independent analyst studying AI-mediated B2B buying behavior. Founder, AI-Ready Buyer™ Research.