Picture a regional automotive consignment business. The kind where a local consultant drives the neighbourhood, shakes hands at the car club, earns trust one Saturday morning at a time. They’ve had a flyer built — brand strategy mapped, audience identified, every element placed for a reason. The consultant’s photo is front and centre because in a provincial market where people buy from people they recognise, his face is the trust signal.
The business owner does something perfectly sensible. They paste the flyer into ChatGPT and ask for feedback. A second opinion. Why wouldn’t you?
Here’s what comes back: “Kill 50% of the text.” “Think like Porsche.” “Luxury brands don’t explain — they assert.” Strip the consultant’s photo down to a tiny name and phone number. Replace the service explanation with two words and a cinematic photo spread.
The advice sounds like a senior creative director at a top-tier agency. It’s confident. Specific. Polished. And catastrophically wrong.
This isn’t a Porsche print campaign. It’s a one-page flyer for a used-car consignment service in a small city. The consultant’s photo isn’t clutter — it’s the entire strategy. The model that said “shrink the broker card” didn’t know that. Couldn’t know that. It had never heard of the business, didn’t know the audience, and had no idea why every element was where it was.
It just pattern-matched against every brand playbook it had ever seen and told a small business to be Porsche.
The pattern has a name
What happened to that flyer is happening in businesses everywhere, and it has a name: context collapse. Not the social media kind — a new, more commercially dangerous variety. It’s what occurs when strategic work built with deep business context gets run through a system that has none.
The sequence looks like this. A business owner gets work from their agency or consultant — copy, design, strategy, whatever it is. The work reflects weeks of accumulated context: brand positioning, audience research, competitive analysis, specific business goals. Then the owner opens ChatGPT, pastes the work in, and types some version of “how can I improve this?”
The model does what models do without context. It reaches for the average of everything it’s been trained on. Best practice. Industry norms. Safe, credible-sounding advice that could apply to any business in any market. The output sounds like expertise. It reads like a creative brief from someone who knows what they’re doing.
But it’s advice from a stranger at a bus stop. A very articulate stranger who’s read every marketing textbook ever published — but who doesn’t know your name, your customers, or your Tuesday afternoon problem.
The "second opinion" is now the default behaviour
This isn’t a quirk of one client. It’s the dominant way small business owners interact with AI.
Anthropic’s own research, analysing a million conversations on their platform, found that augmentation patterns — where users bring existing work for review, iteration, or validation — account for 52% of all AI conversations. More than half the time, people aren’t asking AI to create something from scratch. They’re asking it to judge something that already exists.
Microsoft’s Work Trend Index draws a sharp line between two types of users. Power users treat AI as a thought partner — conversational, iterative, challenging. They’re 56% more likely to experiment with new prompting approaches and 49% more likely to stop before a task and ask whether AI is the right tool for it. Casual users treat it as a vending machine. Input, output, done.
The problem is that 62% of marketing professionals say their companies provide zero training on how to prompt effectively. Not limited training. Zero. So when they sit down to get that second opinion, they’re using the tool at the lowest possible level of sophistication — one-shot prompts with no context, no constraints, no strategic framing.
They’re asking a genius with amnesia for advice.
What the "improvement" actually costs
The gap between contextual and context-free AI output isn’t subtle. It’s been measured, and the numbers are brutal.
MIT’s Initiative on the Digital Economy studied nearly 1,900 participants across a model upgrade and found that roughly half of the resulting performance improvement came from how users adapted their prompts — not from the upgraded model itself. When they tested automatic AI rewriting — the equivalent of pasting your work into ChatGPT and saying “make this better” — performance degraded by 58% compared to skilled human prompting.
Read that again. The “make it better” reflex didn’t just fail to improve things. It made the output 58% worse than what a skilled prompter would have produced.
The BCG/Harvard study tells the same story from a different angle. When 758 BCG consultants used AI with proper context and prompt training, they completed 12% more tasks, worked 25% faster, and produced results rated over 40% higher quality. But — and this is the finding that matters here — when tasks fell outside the AI’s capability boundary, users with AI were 19 percentage points less likely to produce correct solutions than those working without it.
Without strategic context, users can’t tell when AI is helping versus when it’s actively dismantling their work. They lack the frame to evaluate the advice. So they accept it. “Kill 50% of the text” sounds decisive. “Think like Porsche” sounds aspirational. Both sound better than the uncertain feeling of staring at a draft and wondering if it’s good enough.
"But you use AI too"
This is the objection I can hear forming. If AI built the work in the first place, why can’t AI improve it?
It’s a fair question with a precise answer: the AI that built it isn’t the same intelligence as the AI that’s “improving” it. Same technology. Completely different system.
When we build brand strategy, the AI operates inside a project environment loaded with that business’s positioning, past decisions, audience research, competitive context, and specific goals. It’s not a blank chat window. It’s a briefed system that knows why the consultant’s photo is large, why the copy addresses a specific anxiety, and why the layout follows a particular hierarchy. Ask it to improve the flyer and it’ll suggest tightening a headline or adjusting the visual weight of the call to action — within the strategic frame.
The generic chat window knows none of this. It sees a flyer. It reaches for the average of all flyers. It produces advice that would be correct for no specific business and therefore useful to none.
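To make the distinction concrete, here is a minimal sketch of what each setup actually sends to the model. It assumes the common chat-completion message format; the context block and flyer text are illustrative placeholders, not any client’s real brief.

```python
# A minimal sketch of the two setups, assuming the common chat-completion
# message format. All content below is an illustrative placeholder.

BUSINESS_CONTEXT = """\
Positioning: trusted local consignment broker, not a luxury marque.
Audience: provincial buyers who buy from people they recognise.
Non-negotiables: the consultant's photo is the primary trust signal;
the copy must explain the service, not merely assert a brand."""

FLYER_TEXT = "<the one-page consignment flyer>"

# What the agency's project environment effectively sends: the model
# evaluates the flyer inside the strategic frame.
contextual_request = [
    {"role": "system", "content": BUSINESS_CONTEXT},
    {"role": "user", "content": f"Improve this flyer within the strategy above:\n{FLYER_TEXT}"},
]

# What the blank chat window sends: the model sees only the artefact
# and falls back on the average of every flyer it has ever seen.
blank_request = [
    {"role": "user", "content": f"How can I improve this flyer?\n{FLYER_TEXT}"},
]
```

The same model sits at the other end of both requests. Only the first one can give advice anchored to the business.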
We watched this distinction play out in reverse with another business. The owner had built a solid referral programme concept in ChatGPT — two reward tiers, database-wide distribution, basic tracking. The instincts were good. But a generic model couldn’t see the gaps that the business context made obvious.
When that concept was run through a contextual process that understood the business model, the finance product structure, and the clawback risk, it transformed. The two reward tiers became an escalating structure for repeat referrers. The “send it to everyone” approach split into differentiated handling — settled clients get reward vouchers; enquiry-only contacts get a softer brand-awareness touchpoint. A separate acknowledgement loop fires the moment a referral arrives, not when the reward triggers. And a referability audit framework sits underneath everything, asking the question the generic model never thought to raise: is the service actually worth referring right now?
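For readers who think in systems, here is a minimal sketch of that differentiated handling. The tier cap, labels, and contact types are invented for illustration; the point is that the branching encodes business context a blank prompt never sees.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ContactType(Enum):
    SETTLED_CLIENT = auto()  # finance settled; clawback risk is known
    ENQUIRY_ONLY = auto()    # in the database but never transacted

@dataclass
class Referrer:
    name: str
    contact_type: ContactType
    successful_referrals: int = 0

def acknowledge(referrer: Referrer) -> str:
    """Fires the moment a referral arrives, independent of any reward."""
    return f"Thanks, {referrer.name}, your referral has been received."

def reward(referrer: Referrer) -> str:
    """Differentiated handling instead of 'send it to everyone'."""
    if referrer.contact_type is ContactType.ENQUIRY_ONLY:
        return "brand-awareness touchpoint"  # softer, no financial reward
    tier = min(referrer.successful_referrals, 3)  # escalates, then caps
    return f"reward voucher, tier {tier}"
```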
The gap between “solid concept” and “implemented system” wasn’t incremental. It was categorical.
Same technology at both ends. Context made one useful and the other dangerous.
The convergence tax
The damage from context collapse doesn’t stop at individual businesses making their own work worse. It scales. When every business runs every piece of work through the same models with the same lack of context, the outputs converge on the same average.
This is now measurable. The NeurIPS 2025 paper “Artificial Hivemind” found that competing models from entirely different companies — GPT-4o, Claude 3.5, Llama 3.1 — produce responses with 71% to 82% semantic similarity. They’re not just giving similar advice. They’re giving the same advice, phrased slightly differently.
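That kind of convergence is straightforward to check for yourself. The sketch below shows one common way to quantify it: embed each model’s answer and compare embeddings with cosine similarity. It assumes the sentence-transformers library, and the three responses are invented; this is an illustration, not the paper’s methodology.

```python
# A generic way to measure how similar competing models' answers are:
# embed each response and compare embeddings with cosine similarity.
from sentence_transformers import SentenceTransformer
import numpy as np

responses = {
    "model_a": "Cut the text in half and let the brand assert itself.",
    "model_b": "Reduce the copy by 50%; strong brands assert, they don't explain.",
    "model_c": "Trim half the words. Premium brands state; they never justify.",
}

encoder = SentenceTransformer("all-MiniLM-L6-v2")
vectors = encoder.encode(list(responses.values()), normalize_embeddings=True)

# With unit-length embeddings, cosine similarity is a plain dot product.
names = list(responses)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        similarity = float(np.dot(vectors[i], vectors[j]))
        print(f"{names[i]} vs {names[j]}: {similarity:.2f}")
```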
When Italy temporarily banned ChatGPT in 2023, researchers measured what happened to restaurant marketing copy during the ban. Lexical similarity between businesses dropped 15%. Syntactic similarity dropped 12%. And consumer engagement — likes, interactions, the signals that indicate someone actually noticed — rose 3.5%. People responded better to marketing that didn’t sound like everyone else’s. The moment the ban lifted, the homogeneity returned.
Mark Ritson put it best: “When they invent a machine for copying zigging, the value of a zag goes into the stratosphere.”
The mechanism is mechanical. RLHF — reinforcement learning from human feedback, the training process that makes AI outputs feel safe and polished — functions as a low-pass filter on creativity. It penalises outlier responses, even brilliant ones, in favour of answers a majority of graders find acceptable. The models aren’t trying to make you average. They’re trained to be average. Without specific context pushing them away from the centre, the centre is exactly where they’ll land.
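A toy simulation makes the filter visible. Everything below is invented: three candidate responses with different levels of distinctiveness, and simulated graders who, like a preference-trained reward signal, favour whichever of a pair sits closer to the consensus.

```python
# A toy model of preference training as a low-pass filter. Not real RLHF;
# the scores and noise level are invented for illustration.
import random

random.seed(0)

# Distinctiveness of three candidate responses (0 = dead average, 1 = wild).
candidates = {"safe_average": 0.10, "mild_variation": 0.30, "brilliant_outlier": 0.90}
consensus = sum(candidates.values()) / len(candidates)

def grader_preference(name: str) -> float:
    # A majority grader rewards proximity to the consensus, plus noise.
    return -abs(candidates[name] - consensus) + random.gauss(0.0, 0.1)

wins = dict.fromkeys(candidates, 0)
for _ in range(10_000):
    a, b = random.sample(list(candidates), 2)
    wins[a if grader_preference(a) > grader_preference(b) else b] += 1

print(wins)  # the outlier loses most pairwise comparisons
```

Nothing in the setup dislikes brilliance. Proximity to the consensus is simply what gets rewarded.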
Invisible to the machines that now decide who gets found
Here’s where the cost compounds into something existential.
The discovery layer of the internet is shifting from ranked lists to synthesised answers. When someone asks an AI assistant to recommend a service, the AI doesn’t show ten blue links. It names specific businesses. And the research on what makes a brand get named is clear: specificity wins, generality loses.
The Princeton GEO study tested 10,000 queries and found that adding quantifiable statistics to content boosted visibility by 41%. Citing authoritative sources improved it by 30–40%. But promotional copy — the kind of generic positioning language that one-shot AI prompting produces (“comprehensive solutions,” “industry-leading platform,” “making tomorrow’s dreams come true today”) — has a negative 26% correlation with AI citation. It’s not just unhelpful. It actively reduces your chances of being surfaced.
Seer Interactive’s analysis of over 500,000 AI responses found a 5× visibility gap between brands that get mentioned and those that don’t. Their key finding: the AI generates its answer first, deciding which brands to name from memory. Then it looks for sources to support what it already wrote. The citations are the bibliography, not the brainstorm.
A business producing generic, context-free marketing copy is building a brand that AI systems have no reason to remember and no evidence to cite. They’re not just becoming average. They’re becoming invisible to the systems that increasingly decide who gets found.
The gap that won’t close on its own
If context collapse were evenly distributed — everyone getting a little worse together — it might not matter. But it isn’t. A small cohort of sophisticated users is pulling away from the pack, and the distance is accelerating.
OpenAI’s data from over a million business customers shows that workers at the 95th percentile of usage send six times more messages than the median employee at the same company. For coding tasks, the gap stretches to 17×. For data analysis, 16×. Anthropic’s research shows that users with six months of experience have 10% higher success rates even after controlling for task type — and the report warns explicitly that “the benefits from early adoption may be self-reinforcing.”
Only 5% of workers are using AI to genuinely transform their work. The other 95% are experimenting, dabbling, or — most commonly — using it for basic tasks that save little to no time. The businesses whose leaders develop real contextual AI skill will compound that advantage quarter after quarter. The ones whose leaders keep pasting strategy into blank chat windows will compound something else entirely.
The flyer, one more time
That automotive consignment flyer is still in circulation. The consultant’s photo is still prominent. The copy still explains what the service does, who it’s for, and why this particular person is the one to talk to. It works — not because it looks like Porsche, but because it looks like a business run by someone you could call on a Tuesday afternoon.
The AI that told them to strip all of that out and “assert, don’t explain” wasn’t malicious. It was just average. A statistical composite of every brand playbook ever written, applied to a business it had never met.
In an age when the machines are making everyone sound the same, context is the only asset that appreciates. The question for every business owner isn’t whether they’re using AI. It’s whether the AI they’re using knows anything worth knowing.
Find out where your brand is losing context to AI
Most audits we run surface the same pattern: strategy built with deep context, feedback run through a model with none, and output that slowly stops sounding like the business it was built for.
Takes 30 minutes.
Book Your Audit →
