A diner sits in a Red Robin in early 2018. She watches a server hustle past the table next to her for the fourth time. Sticky residue of bottomless steak fries. A half-melted milkshake glass. A handheld ordering device clipped to his belt, blinking. The table has been uncleared for twenty minutes and the restaurant is filling up behind her.
Six hundred miles away, in Red Robin’s Greenwood Village headquarters, the dashboards are turning green. Labour-cost-as-a-percentage-of-revenue just dropped. Same-store productivity improved. The CFO, at the ICR Conference a few weeks earlier, had called out $8 million in annual savings from eliminating busser and expediter roles across nearly 600 restaurants. The number is real. The saving is real. The dashboards cannot see the table.
This is the moment the whole thing breaks.
The law nobody quoted correctly
You’ve probably seen the quote: “When a measure becomes a target, it ceases to be a good measure.” It gets attributed to Charles Goodhart, a British economist who advised the Bank of England in the 1970s.
That version of the line isn’t his.
Goodhart’s actual 1975 wording, buried in a footnote of a paper on UK monetary policy, was drier: “Any observed statistical regularity will tend to collapse once pressure is placed upon it for control purposes.” The pithy restatement everyone quotes was coined twenty-two years later by an anthropologist named Marilyn Strathern, who was writing about how British universities were gaming their own assessment systems.
Getting the attribution right matters, because the two versions point at slightly different things. Strathern’s version is about incentives. Goodhart’s original is about something colder: the moment a correlation becomes a lever, it stops being a correlation. A sociologist called Donald Campbell arrived at the same idea independently in 1979 – quantitative indicators, once used for decision-making, corrupt the processes they were supposed to measure. Three thinkers, three fields, same mechanism.
Here’s the mechanism in plain terms. A metric is an observation about reality – it captures something true, until enough people start being rewarded for moving it. Then they move it. Not by changing reality, but by changing their behaviour around the measurement. The correlation between indicator and reality breaks. The dashboard keeps showing green.
This is not a hot take. It’s fifty years of academic consensus that marketing has spent a decade ignoring.
What happened next at Red Robin
The busser decision was not stupid. That’s the uncomfortable part.
Minimum wages were rising across Red Robin’s markets. Labour was running at 33–35% of restaurant revenue. The CFO and then-CEO Denny Marie Post looked at the line item, identified roles that did not appear to generate revenue directly – bussers clear tables, they don’t take orders – and made a move any competent operator would recognise. Cut the cost. Show the board. Save $8 million.
The labour-cost-to-revenue ratio was the target. On paper, by Q2 2018, it was improving.
In the dining rooms, weekend walkaways climbed 85% year-over-year. Total ticket times from the kitchen grew by about a minute on average. Servers, now responsible for clearing tables and greeting guests and taking orders, couldn’t turn peak-hour tables fast enough. Customers arrived, saw the lobby stuffed with people waiting, saw empty tables still cluttered with someone else’s dinner, and left. By 2023, Red Robin’s labour-cost ratio had risen to 37.2% – three full points higher than before the cuts. The company was paying more in labour as a share of revenue after eliminating the labour it had considered non-essential.
The stock had peaked at $92.90 in August 2015. By April 2025 it hit $2.50.
Before busser cuts (FY2017): labour-cost ratio 34.0%.

Q1 2018, the intervention: bussers eliminated across the chain. Projected $8M annual savings.

By FY2023, the outcome: labour-cost ratio up to 37.2% – three points higher than before the cuts.

Over the same period:
Restaurant revenue: $1.38B → $1.30B (down $80M).
Restaurant count: 566 → 415 (down 151 locations).

Source: Red Robin 10-K filings, FY2017 and FY2023.
You have to sit with that for a moment. The ratio they optimised went the wrong way because they optimised it. The bussers were not a cost on a P&L. They were the connective tissue that made the rest of the P&L possible. Remove them, and the server can’t turn the table. The table can’t be sold. The revenue drops faster than the cost. The ratio gets worse. And the dashboard, still looking only at labour cost, still reports the saving.
This is Goodhart’s Law in a uniform with a name tag.
The digital version of the same restaurant
If you’re reading this and thinking this is a restaurant problem, not a marketing problem, I’d ask you to stop and notice something.
Every SMB operator I talk to has a version of this conversation running in their head right now. The Meta dashboard shows 4.2x ROAS. The Google dashboard shows sensible CPAs. The multi-touch attribution setup, which cost $30,000 and six months of implementation time, is producing clean-looking reports. And yet: the phone is quieter than last quarter. The pipeline is thinner. Inbound interest from people who already knew the brand has softened. You can’t explain why, because the numbers say it’s working.
This is your crusty table.
In April 2021, Apple shipped iOS 14.5 with App Tracking Transparency. Every app had to ask users, in plain language, whether they wanted to be tracked. Roughly three out of four said no. The pixel that powered Meta’s targeting and attribution – the mechanism that had been reporting all those excellent ROAS figures for years – lost most of its visibility overnight.
What happened next was a natural experiment on a scale marketing had never seen.
In February 2022, Meta’s CFO told investors that ATT would cost the company $10 billion in ad revenue that year. The stock fell 26% in a single session. $232 billion in market cap, gone in a day – the largest one-day loss in US stock market history. Across thousands of DTC brands, reported ROAS cratered. CPAs skyrocketed. Founders panicked. Many slashed Meta budgets expecting business revenue to fall in proportion.
It didn’t.
Revenue, for a lot of these brands, stayed roughly flat. Which meant the ROAS number they had been managing to for years — the number they had celebrated, reported to boards, built agency retainers around, set up dashboards to maximise – had been measuring something that wasn’t quite real. Or rather: it had been measuring Meta’s ability to serve ads to people who were going to convert anyway. A study by Cassandra App ran a controlled geo-lift test and found that Meta was claiming credit for four times the conversions it was actually causing. Haus, running incrementality tests for omnichannel brands, found that roughly a third of Meta’s reported impact was being poached from non-DTC channels the platform couldn’t see.
The ROAS was a target. It ceased to be a good measure. Goodhart, exactly as written.
And then, in April 2025, Meta quietly released a new Ads Manager feature called Incremental Attribution. Their own description, paraphrased: “If you’ve ever looked at your ROAS and wondered how many of these conversions would have happened anyway, this feature answers that question.”
Read that sentence twice. That is the platform confessing.
For a decade, the default attribution model – seven-day-click, one-day-view, last-touch – had been inflating itself. Meta could not admit it directly without triggering a lawsuit from every advertiser who’d overpaid, so they shipped a new toggle and let operators draw the conclusion themselves. The quiet ones did. The ones still optimising to the old number are still optimising to the old number.
“But I have a board meeting on Thursday”
At this point, if you’re the kind of operator I wrote this for, you’re pushing back. And your pushback is legitimate, so I want to take it seriously.
“Fine. So the ROAS number was inflated. Are you telling me to stop measuring? To run the business on feel? I have a board meeting on Thursday. ‘Trust me, it’s working’ isn’t going to fly. And every essay that ends with ‘invest in brand’ is written by someone selling brand investment. What am I actually supposed to do differently on Monday morning?”
This is the right objection. Here’s the honest answer.
The alternative to metric-as-target is not metric-absence. It’s metric-as-diagnosis. A pilot consults the fuel gauge. The pilot does not try to maximise the fuel gauge. The gauge exists to tell the pilot when the plane will stop flying. The job is to fly the plane. The gauge’s job is to inform the pilot’s judgment — not to replace it.
Deming, whose shadow still falls across every serious conversation about management measurement, put it more sharply. “It is wrong to suppose that if you can’t measure it, you can’t manage it — a costly myth.” The most important figures for running a business, he argued, are often unknown or unknowable. The willingness of your best employee to stay. The goodwill a customer feels when they mention you to a colleague. The brand salience that determines whether someone thinks of you first when the need arises. None of these sit on a dashboard. All of them determine whether the dashboard numbers mean anything.
And then there’s the finding that should have ended the brand-versus-performance argument years ago, and didn’t. At the IPA’s Effectiveness Conference in October 2025, Les Binet presented thirty years of data from the IPA Databank — nearly a thousand case studies, seven hundred brands. The headline: budget explains 89% of the variation in incremental profit across campaigns. ROI explains 11%. And yet 65% of marketers surveyed believe ROI is the most important driver of effectiveness.
The profession has organised itself around the 11% and ignored the 89%. That’s not data-driven. That’s the opposite. That’s mistaking the easy-to-measure for the important.
The diagnostic: finding your crusty table
If you want to know whether Goodhart’s Law is currently eating your marketing budget, here are three questions. I’d encourage you to answer each one out loud, because the ones that are uncomfortable to say are usually the ones worth noticing.
First: was this metric developed by the people doing the work, or imposed from above? Jerry Muller, in The Tyranny of Metrics, makes this distinction sharply. Metrics built bottom-up by practitioners for their own learning tend to stay honest – because the people using them have no incentive to game them. Metrics imposed top-down as instruments of control get gamed. If your ROAS target was set in a board meeting and then handed down, you have the conditions for corruption before anyone has done anything wrong. That’s not a people problem. It’s a structural one.
Second: what gets easier to measure, and simultaneously more trivial, as this metric climbs? The strong form of Goodhart’s Law – the version that actively destroys the underlying business – happens when the discrepancy between the metric and the thing it measures has a long tail. In marketing, that discrepancy has a name: attribution theft. An algorithm optimising for ROAS will route spend toward users who would have converted anyway, because their conversion rate is highest. The ROAS climbs. The incremental revenue does not. The metric gets cleaner as the business gets hollower. If your best-performing campaign is your retargeting – if the easiest-to-measure revenue is the revenue from people who already knew you – you are probably harvesting, not acquiring.
Third: if we hit this number, what does the customer experience? This is the Red Robin question, and it’s the one most marketing teams never ask. The busser elimination hit every target it was designed to hit. The customer experienced longer waits, dirty tables, and a sticky booth. Hitting the number and improving the business are not the same thing. If you can articulate a scenario where you exceed every metric on the dashboard and the customer’s experience of your brand gets worse, you have located your crusty table.
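The attribution-theft mechanism behind the second question can be sketched as a toy simulation. Every number here is invented for illustration: each user has a baseline chance of converting with no ad at all, and the ad adds a small incremental bump. The point is what happens to the dashboard number when targeting chases high-baseline users.

```python
import random

random.seed(7)

# Toy model of attribution theft. Each user converts with some baseline
# probability even with no ad; the ad adds a small incremental bump.
# All parameters are invented for illustration.
users = [{"baseline": random.uniform(0.0, 0.3), "ad_effect": 0.02}
         for _ in range(10_000)]

def campaign(targets):
    """Expected conversions: what the platform claims vs what the ad caused."""
    attributed = sum(u["baseline"] + u["ad_effect"] for u in targets)
    incremental = sum(u["ad_effect"] for u in targets)
    return attributed, incremental

budget = 2_000  # we can only reach 2,000 users

naive = random.sample(users, budget)  # random targeting
# "Optimised" targeting: chase the users most likely to convert anyway.
optimised = sorted(users, key=lambda u: u["baseline"], reverse=True)[:budget]

a_naive, i_naive = campaign(naive)
a_opt, i_opt = campaign(optimised)
print("naive:     attributed ~", round(a_naive), "incremental ~", round(i_naive))
print("optimised: attributed ~", round(a_opt), "incremental ~", round(i_opt))
# Attributed conversions (the ROAS numerator) climb sharply under optimisation;
# incremental conversions do not move at all, because ad_effect is identical.
```

The attributed count roughly doubles while the incremental count is unchanged: the metric gets cleaner as the business gets hollower, exactly as the paragraph above describes.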
The boardroom version
I know the Thursday meeting is still happening. So here’s what to put on the slide.
Stop reporting a single-channel ROAS as the headline number. Report the Marketing Efficiency Ratio (MER) – total revenue over total marketing spend – alongside it. MER can’t be gamed by attribution theft because it doesn’t care which channel gets credit. The ratio goes up when the business gets healthier. It goes down when it doesn’t.
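The arithmetic is deliberately dumb. A minimal sketch, with invented figures for three hypothetical channels:

```python
# Illustrative figures only – not from any real account.
channel_spend = {"meta": 40_000, "google": 25_000, "tiktok": 10_000}
platform_attributed_revenue = {"meta": 168_000, "google": 75_000, "tiktok": 18_000}
total_revenue = 310_000  # all revenue, every channel, straight from the books

# Per-channel ROAS: what each platform claims for itself.
roas = {ch: platform_attributed_revenue[ch] / channel_spend[ch]
        for ch in channel_spend}

# MER: total revenue over total marketing spend. No attribution involved,
# so no single platform can inflate it by claiming someone else's conversion.
mer = total_revenue / sum(channel_spend.values())

print(roas)            # meta alone shows a flattering 4.2x
print(round(mer, 2))   # the blended number the business actually lives on
```

Note that the platforms’ attributed revenue can sum to more than total revenue without anything looking wrong on any single dashboard; MER is immune to that double-counting by construction.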
Run an incrementality test on your highest-spending channel, even if you have to geo-lift it yourself with a matched-market study. The gap between what the platform claims and what the test reveals is the size of your Goodhart problem. If it’s small, celebrate. If it’s large, you’ve just found money.
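A DIY geo-lift is just a difference-in-differences on matched markets. This is a rough sketch under stated assumptions: you can pause spend in a few test regions, leave matched control regions running, and compare the before/during change in each group. All figures are hypothetical.

```python
# Weekly revenue (in $k) per region, before and during the holdout period.
# Hypothetical numbers for illustration.
test_before, test_during = [50.0, 48.0, 52.0], [46.0, 45.0, 47.0]        # ads paused
control_before, control_during = [49.0, 51.0, 50.0], [50.0, 52.0, 49.0]  # ads running

def mean(xs):
    return sum(xs) / len(xs)

# Difference-in-differences: how much did test markets drop relative to control?
test_change = mean(test_during) - mean(test_before)           # -4.0
control_change = mean(control_during) - mean(control_before)  # about +0.33
true_lift = control_change - test_change  # revenue the ads actually caused

platform_claimed_lift = 12.0  # what the dashboard attributes to the channel
goodhart_gap = platform_claimed_lift / true_lift  # >1 means inflated attribution
print(round(true_lift, 2), "vs claimed", platform_claimed_lift,
      "-> gap of", round(goodhart_gap, 1), "x")
```

In this made-up example the channel is claiming almost three times the lift the holdout reveals; that ratio is the size of your Goodhart problem in the terms of the paragraph above. A real test needs more markets, a proper matching step, and several weeks of data, but the core subtraction is no more complicated than this.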
And commit to tracking something the dashboard can’t see. Share of search. Direct traffic as a percentage of total. Unaided brand recall from a quarterly panel. These numbers are noisier and slower than ROAS and they will frustrate your data team. They will also move in the same direction as the business, which is more than can be said for some of the things you’re currently measuring.
ROAS vs MER: two numbers, two different questions
ROAS tells you what the platform did: revenue the platform attributes to itself, divided by what you spent on that platform. MER tells you what the business did: all revenue, divided by all marketing spend. The first can be gamed by attribution. The second can only move if the business does.
The diner, again
The woman in the Red Robin in 2018 didn’t know she was watching Goodhart’s Law. She just knew the table was dirty and the wait was long and the food, when it arrived, arrived to a sticky booth. She didn’t come back. Neither did enough of her friends.
The dashboards in Greenwood Village took three years to catch up with what she already knew, because the dashboards had been built to measure the wrong thing. Not because the executives were stupid. Because the metric they chose was a proxy, and they confused the proxy for the reality, and by the time the gap between them was obvious, the stock was $2.50.
Every business has a crusty table. Most of them have two or three. They’re sitting, right now, in the space between what your dashboard measures and what your customer experiences. The job is not to stop measuring. The job is to remember that the measurement was supposed to point at something — and to go look at the thing.
Find yours before the metric makes it invisible.
Find out which of your metrics is quietly eating your business
Most audits we run surface the same problem: dashboards that look healthy, pipelines that don’t, and nobody asking what the number is actually measuring. We’ll show you where your crusty tables are.
Takes 30 minutes.
Book Your Audit →
