In 2005, a restaurant or bar that wasn't on Google's first page for its own city name was quietly losing diners every week. Most owners didn't know. The bookings just went somewhere else. Twenty years on, the same thing is happening — except now it's ChatGPT, Perplexity, Gemini, Google AI Overviews, and Bing Copilot doing the redirecting.
Gartner forecasts that 25% of organic search traffic will have shifted to AI chatbots by the end of 2026. Seer Interactive's citation study showed that 87% of ChatGPT's recommendations come from Bing's top 20 organic results. If you're not in that pool, you're not in the answer. And unlike 2005, you can't even see the diners walking past. They ask ChatGPT, it names three other venues, the conversation ends.
We kept getting the same question from owners: how do I know my venue is actually findable by ChatGPT? There wasn't a clean way to answer it. So we built one.
This article covers what AI visibility actually means for a restaurant or bar in 2026, what booteek measures, how the 0-100 score works, and — just as important — what it doesn't do.
What "AI visibility" actually means
AI visibility is the probability that an AI assistant will surface, cite, or recommend your venue when a diner asks it something you should rank for. In practice it breaks down into five separate outcomes on five different surfaces:
- ChatGPT naming your venue in a "best brunch in [your area]" answer
- Perplexity citing your website, menu, or review page as a source
- Google AI Overviews pulling a line from your profile into the blue box at the top of the page
- Gemini recommending you inside Android Assistant and Google's app experiences
- Bing Copilot surfacing you in Microsoft's AI-powered search
All five run on different models, different crawl stacks, and different citation logics. But they share roughly the same inputs: your Google Business Profile, your website's structured data, the third-party review corpus (Google, TripAdvisor, a handful of local directories), and whatever brand mentions exist across the open web.
Being visible to all five at once requires being legible to all five. That's not about SEO tricks. It's about the machine-readability of your own business data.
The four pressures that forced this into our roadmap
We don't build features in a vacuum. This one pulled on every crisis hitting independent hospitality at once — which is why it's the platform's core pitch.
The hospitality crisis. UK hospitality carried 132,000 unfilled vacancies into 2026 (ONS, March 2026) and closure rates for independents ran above 30% over the last 18 months. Margin is vanishingly thin. An owner can't afford a quarter where AI assistants quietly hand bookings to three chain competitors.
SaaS saturation. The average independent venue already runs 8-12 subscriptions. We keep booteek at £0.99 a day during our founder member phase (intro pricing for our first 200 customers) specifically because a tool has to earn that against the payroll tool, the booking tool, the POS. AI visibility monitoring bundled into £89.99 a quarter at that intro level is a line we can hold — a specialist AI-search tool charging £150+ a month isn't.
Cost-of-living pressure on consumers. Diners are more review-dependent than ever because discretionary spend is down year-on-year. When someone asks ChatGPT where to go instead of scrolling Google reviews themselves, the AI's first three picks are the whole shortlist. Position four doesn't exist.
Cost-of-operating a small business. The 2026 UK rates revaluation hit independents harder than the chains it was supposed to rebalance. Every hour of owner time matters. "Go fix your schema markup" is not a sentence that respects an owner's Tuesday.
That's why the score exists as a score, not a 40-page audit.
What booteek measures
The AI Visibility score draws from 12 signals grouped into four clusters. Each cluster reflects something AI models have been observed to weight, based on Princeton GEO research, Lily Ray's citation analysis from February 2026, and our own testing across 4,060 venues in 13 cities.
Cluster 1 — Machine-readability. This is the plumbing. Does your site declare itself cleanly to AI crawlers? We check four things: `robots.txt` permissions for `GPTBot`, `PerplexityBot`, `Google-Extended`, `CCBot` and `ClaudeBot`; the presence and structure of an `llms.txt` file; schema.org markup completeness (`Restaurant`, `Bar`, `LocalBusiness`, `Menu`, `OpeningHours`, `PostalAddress`, `Review`); and the `speakable` property on your FAQ sections, which decides whether voice assistants read your answers aloud.
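As a concrete sketch, a minimal `robots.txt` that explicitly admits the crawlers named above looks like this. Whether you want to allow each one is a business decision, not a technical one, and the user-agent tokens are worth re-checking against each vendor's current documentation:

```
# Explicitly allow the major AI crawlers site-wide
User-agent: GPTBot
Allow: /

User-agent: PerplexityBot
Allow: /

User-agent: Google-Extended
Allow: /

User-agent: CCBot
Allow: /

User-agent: ClaudeBot
Allow: /
```

A missing `robots.txt` is not the same as an open one: some crawlers treat the absence of a file conservatively, so publishing an explicit policy removes the ambiguity.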
Cluster 2 — Citability. This is how AI models decide whether your content is worth quoting. We measure FAQ and Key Data Point structure (short answer-first paragraphs that LLMs extract preferentially), statistical density (numbers per 1,000 words), source attribution presence, and a synthetic "citability score" that approximates how likely a paragraph is to be pulled into an AI answer verbatim.
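Of these, statistical density is the simplest to make concrete. Here is a rough sketch of how a numbers-per-1,000-words metric could be computed. This is our own illustration of the idea, not booteek's production logic:

```python
import re

def statistical_density(text: str) -> float:
    """Numbers per 1,000 words: a rough proxy for how 'citable' a page
    reads to an LLM that prefers concrete figures over adjectives."""
    words = text.split()
    if not words:
        return 0.0
    # Count number-like tokens: prices, percentages, years, plain figures.
    numbers = re.findall(r"\d+(?:[.,]\d+)?%?", text)
    return len(numbers) / len(words) * 1000

sample = "Open 12 pm to 11 pm, mains from £14, 4.6 stars across 320 reviews."
print(round(statistical_density(sample), 1))  # → 357.1
```

A menu page written like the sample sentence scores far higher on this axis than the same information written as "open all day, reasonably priced, well reviewed".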
Cluster 3 — Entity density. AI models can't recommend you if they don't know who you are. We measure `sameAs` property density (the JSON-LD links that tell models your Facebook, Instagram, TripAdvisor, and OpenTable accounts are all the same business), brand-mention presence across the open web, and consistent name-address-phone (NAP) details across third-party directories.
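In JSON-LD, `sameAs` is just an array of profile URLs inside the business block you already publish. A minimal sketch, with placeholder name and URLs:

```json
{
  "@context": "https://schema.org",
  "@type": "Restaurant",
  "name": "Example Bistro",
  "sameAs": [
    "https://www.facebook.com/example-bistro",
    "https://www.instagram.com/example_bistro",
    "https://www.tripadvisor.co.uk/example-bistro",
    "https://www.opentable.co.uk/example-bistro"
  ]
}
```

Each URL in the array is a claim that "this profile is the same entity as this website", which is exactly the linkage entity-matching systems look for.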
Cluster 4 — Freshness and review signal. Stale venues get silently deprioritised. We check how recently your Google profile was updated, whether reviews in the last 30 days are present and answered, and whether your menu content has been refreshed in the last quarter. AI models treat review recency as a proxy for "is this place still open and still good."
The score is one number from 0 to 100, rolled up from those four clusters with deliberately boring weighting. We don't hide the sub-scores — the point of the dashboard is to show you the gap you can close this week, not to protect a secret formula.
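For illustration, a rollup of four cluster sub-scores looks like the sketch below. The weights and sub-scores here are invented for the example; they are not booteek's published formula:

```python
# Illustrative weights only -- invented for this example.
WEIGHTS = {
    "machine_readability": 0.30,
    "citability": 0.25,
    "entity_density": 0.25,
    "freshness": 0.20,
}

def rollup(sub_scores: dict[str, float]) -> int:
    """Weighted average of the four cluster sub-scores (each 0-100)."""
    total = sum(WEIGHTS[k] * sub_scores[k] for k in WEIGHTS)
    return round(total)

print(rollup({
    "machine_readability": 20,
    "citability": 45,
    "entity_density": 40,
    "freshness": 35,
}))  # → 34
```

The point of a linear rollup like this is legibility: every point of movement in a sub-score maps to a predictable movement in the headline number, so an owner can see which afternoon of work pays best.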
What actually moves the score
The honest answer is: the four things that cost nothing and take an afternoon.
Opening your `robots.txt` to AI crawlers (most venues have no `robots.txt` at all, which is worse than you'd think) tends to move the score by 4-6 points on its own. Adding `Restaurant` or `Bar` schema with a properly-declared `hasMenu` link moves it 6-8 points, because it's the single most-parsed signal across all five platforms. Publishing an `llms.txt` that names your venue, your city, your hours, and your top three menu categories moves it by 3-5 points. Answering your last 20 Google reviews moves it by 4-7 points and tends to bump your Google star average at the same time.
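`llms.txt` is still an emerging convention: a plain markdown file served from your site root. A minimal sketch, with placeholder venue details and URLs:

```markdown
# Example Bistro

> Neighbourhood bistro in [your city]. Open Tue-Sun, 12:00-23:00.

## Menu

- [Brunch](https://example.com/menu/brunch)
- [Mains](https://example.com/menu/mains)
- [Cocktails](https://example.com/menu/cocktails)
```

Because the format is designed for LLM consumption rather than browsers, short declarative lines beat marketing copy here.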
We know these numbers because we've measured them on the 4,060 venues across 13 cities in our own data (the donde-onde-where hot-list corpus, which also powers zone pages in Manchester, Porto, Bilbao, Seville, Lisbon, and nine others). The average venue in the corpus scores 34/100 at first pass. A focused afternoon typically gets it to 55-60. Sustained work across a quarter gets it into the 75+ band where AI citation is observably more frequent.
Sustained is the word that matters. This isn't a one-off audit.
What it doesn't do
We said we'd name the limits in the same breath as the benefits, so here they are.
The score is a proxy for AI readiness, not a guarantee of citations. You can score 90/100 and still not appear in a ChatGPT answer if your venue has fundamental business problems — two stars on Google, no menu online, nine months of unanswered reviews. AI models are reading the same signals diners read. A readable profile of a struggling business is still a struggling business.
The score can't override Google's own ranking decisions. If your venue is in a neighbourhood where twelve other restaurants rank above you for every relevant query, no amount of schema is going to leapfrog them. AI models quote the organic top 20 — if you're at position 47, citability work alone won't get you in. That's why booteek bundles AI visibility with Review Boost and Competitor Check: the three of them move the same underlying signal in parallel.
The score isn't reliable for brand-new venues. If you opened three weeks ago, the corpus has nothing useful to compare you against and most of the third-party signals haven't had time to accrue. Wait until you have 90 days of operation and at least 30 reviews before treating the score as load-bearing — until then, Breo (our onboarding AI companion) is the better tool, because it focuses on the foundational profile fields AI models will look at once they've noticed you.
How to use it this week
Open your dashboard, look at the four cluster sub-scores, and pick the lowest one. That's where the next afternoon of work will move the headline number most.
If Cluster 1 is your weakest: open `robots.txt` to AI crawlers, publish an `llms.txt`, and add `Restaurant` or `Bar` schema with a `hasMenu` link. Three changes, one developer, one morning.
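The schema change is one JSON-LD block in your page's `<head>`. A minimal sketch with placeholder values (a real block would also carry the `sameAs`, `Review`, and `speakable` properties discussed above):

```json
{
  "@context": "https://schema.org",
  "@type": "Restaurant",
  "name": "Example Bistro",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 Example Street",
    "addressLocality": "Manchester",
    "postalCode": "M1 1AA"
  },
  "hasMenu": "https://example.com/menu",
  "openingHours": "Tu-Su 12:00-23:00"
}
```

The `hasMenu` link only helps if the page it points at is crawlable HTML rather than a PDF or an image, which is worth checking at the same time.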
If Cluster 2 is your weakest: rewrite your top three menu pages in answer-first paragraphs — a question as the heading, the literal answer in the first sentence, the supporting detail underneath. LLMs extract that structure preferentially.
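The shape being described, as a hypothetical page fragment:

```markdown
## What time does brunch start on Sundays?

Brunch runs 10:00-14:30 every Sunday, walk-ins only before 11:00.
After 11:00 we recommend booking; the full brunch menu is on the
menu page, and the most-ordered dish is the shakshuka at £12.
```

The heading is the question a diner would type, the first sentence is the literal answer, and the detail follows. That ordering is what makes the paragraph extractable on its own.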
If Cluster 3 is your weakest: claim every directory listing you can find, link them via JSON-LD `sameAs`, and check that your name, address, and phone number are byte-identical across all of them. AI models match entities by exact-string consistency more than they should — but they do.
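A quick way to spot drift is a literal equality check across your listings. The directory data below is invented for the example; swap in your own:

```python
# (name, address, phone) as each directory currently shows them.
listings = {
    "google":      ("Example Bistro", "1 Example St, Manchester M1 1AA", "+44 161 000 0000"),
    "tripadvisor": ("Example Bistro", "1 Example St, Manchester M1 1AA", "+44 161 000 0000"),
    "opentable":   ("Example Bistro", "1 Example Street, Manchester M1 1AA", "+44 161 000 0000"),
}

def nap_mismatches(listings):
    """Return directories whose (name, address, phone) differ,
    byte for byte, from the Google Business Profile version."""
    reference = listings["google"]
    return [site for site, nap in listings.items() if nap != reference]

print(nap_mismatches(listings))  # → ['opentable']
```

Note that "St" versus "Street" is enough to flag a mismatch here, which mirrors how strictly exact-string entity matching behaves.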
If Cluster 4 is your weakest: answer your last 30 days of Google reviews, refresh the menu, update your hours. AI models read recency as a survival signal.
The four clusters move in parallel. None of them is owner-time-prohibitive. The mistake is doing all four badly across a year instead of one well across a week.
Where this goes next
We're rolling AI Visibility scoring into every booteek Pro dashboard from Q2 2026, with quarterly re-scoring built in. Existing members see their score on the next dashboard refresh; new members see it from day one. The donde-onde-where corpus (4,060 venues across 13 cities) keeps serving as the public reference dataset — your venue's score is benchmarked against the median for your own city, not a global average, because Manchester and Lisbon weight the underlying signals very differently.
If you want to see your venue's score before becoming a member: the Competitor Check (£29 one-off, intro pricing) includes a standalone AI Visibility section. Same scoring engine, run once on your venue, returned as a PDF.
Being invisible to ChatGPT in 2026 is the new version of being absent from Google in 2005. The difference is you can act on it before the diners walk past.