What llms.txt is
llms.txt is a plain-text file you publish at the root of your domain — bookedwild.com/llms.txt, your-lodge.co.uk/llms.txt. It lists the canonical, machine-readable URLs on your site that AI engines should treat as authoritative sources for facts about your business.
It looks like this, in its simplest form:
```markdown
# Booked Wild
> Marketing agency for European independent travel and hospitality operators.

## Services
- [AI Visibility Fix](https://bookedwild.com/services/ai-visibility-fix): £495 schema and listings deploy in two weeks.
- [Direct Booking OS](https://bookedwild.com/services/direct-booking-os): Booking infrastructure with iCal sync.

## Pricing
- [Pricing page](https://bookedwild.com/pricing): All productised offers with fixed prices.
```
That’s the entire format. Markdown headings, bulleted links, one-line descriptions. The spec was proposed by Jeremy Howard in 2024 and has been adopted by Anthropic, Perplexity, and a growing list of crawler-based AI engines.
The intent is simple: instead of forcing AI engines to crawl every page on your site and guess which ones hold the canonical facts about your business, you publish a curated index that says these URLs are the source of truth.
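Because the format is just markdown headings and bulleted links, extracting the curated URLs is a few lines of code. This is an illustrative sketch, not a reference implementation; the function name and the exact bullet pattern (`- [title](url): description`) are assumptions based on the example above.

```python
import re

# Matches the '- [title](url): description' bullets used in the examples above.
# The trailing description is optional.
LINK = re.compile(r"^- \[(?P<title>[^\]]+)\]\((?P<url>[^)]+)\)(?::\s*(?P<desc>.*))?$")

def parse_llms_txt(text: str) -> dict[str, list[dict]]:
    """Group each link under its nearest '## Section' heading."""
    sections: dict[str, list[dict]] = {}
    current = "(top)"  # links before the first H2 land here
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("## "):
            current = line[3:]
        elif (m := LINK.match(line)):
            sections.setdefault(current, []).append(m.groupdict())
    return sections
```

An engine that consumes the file this way sees exactly what you curated: a handful of section names, each holding the URLs you declared canonical.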
Why it matters for travel and hospitality
Independent travel sites have a structural disadvantage in AI extraction: the brand-led, photo-heavy, atmospherically designed pages that convert well for human visitors are often opaque to machine reading. The page a traveller falls in love with — full-bleed photography, evocative copy, parallax scroll — is frequently the same page where AI engines can’t reliably extract the rate, the policy, the menu, or the location.
llms.txt is one mitigation. It lets you point AI engines past the brand layer to the structured layer underneath: the rates page, the room descriptions, the menu, the itinerary, the FAQ. If you’ve done the structural work — schema markup, plain-text pricing, FAQPage on the right pages — llms.txt is the navigation file that gets engines to those pages first.
For a typical independent lodge, restaurant, or tour operator, the file is short — twenty to fifty links — and takes about ten minutes to write once the underlying pages exist.
A working template for travel operators
Adapt the structure below for your business. The headings are guidance; the links are what matters.
```markdown
# {Your Business Name}
> {One-sentence description: what you do, where you are, who you serve.}

## About
- [About us](https://example.com/about): {One line.}
- [Our story](https://example.com/our-story): {One line.}

## Stays / Tours / Experiences
- [Room category 1](https://example.com/rooms/category-1): {One line.}
- [Room category 2](https://example.com/rooms/category-2): {One line.}

## Rates
- [Current rates](https://example.com/rates): {Period covered, currency, what's included.}

## Booking
- [Book direct](https://example.com/book): {Booking flow URL.}

## FAQ
- [FAQ](https://example.com/faq): {Topics covered.}

## Location
- [Getting here](https://example.com/getting-here): {Region, nearest station / airport.}

## Contact
- [Contact](https://example.com/contact): {Hours, response time.}
```
A lodge with five room categories, a restaurant, an events offering, and a spa might have thirty links across six headings. A single-property B&B might have twelve links across four headings. A tour operator running ten itineraries might list each itinerary URL plus the booking and contact pages. Don’t pad the file with everything; pick the URLs that hold the canonical facts.
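If you maintain the list of canonical pages anywhere structured — a spreadsheet, a CMS export — generating the file is trivial. A minimal sketch, assuming a plain dict of section headings to (title, URL, description) tuples; the function name and data shape are placeholders, not part of the spec.

```python
def render_llms_txt(name: str, tagline: str,
                    sections: dict[str, list[tuple[str, str, str]]]) -> str:
    """Render the H1 + blockquote + sectioned-links shape shown above."""
    lines = [f"# {name}", f"> {tagline}"]
    for heading, links in sections.items():
        lines.append(f"## {heading}")
        for title, url, desc in links:
            lines.append(f"- [{title}]({url}): {desc}")
    return "\n".join(lines) + "\n"
```

Write the result to `/llms.txt` as part of your site build so the file can never drift out of date against the pages it indexes.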
What llms.txt does not do
llms.txt is a navigation file, not a citation engine. Publishing it does not, on its own, make AI engines cite you more often. It makes the citation work — the schema, the plain-text pricing, the FAQPage, the named-author content — easier for engines to find and extract.
Three honest caveats:
Adoption is partial. Anthropic and Perplexity respect it; OpenAI and Google have not formally committed. Publishing costs ten minutes, so the maths still favours doing it, but treat the upside as compounding rather than immediate.
It points to pages; it does not improve them. If the URL you list resolves to an opaque, image-heavy, JavaScript-rendered page with no structured data and no plain-text body, llms.txt has handed an engine a poor source. Fix the underlying page first.
It does not replace robots.txt or sitemap.xml. The three files coexist: robots.txt for what to exclude, sitemap.xml for what to crawl, llms.txt for what to treat as canonical AI-source material. Deploy all three.
Where to deploy it
Serve `/llms.txt` at the root of your domain. Plain text, UTF-8, no authentication. Test that `curl https://your-domain.com/llms.txt` returns the file with HTTP 200. If you want to be belt-and-braces about discoverability, reference it from your robots.txt with a comment line such as `# llms.txt: https://your-domain.com/llms.txt`.
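The curl check above can be scripted into your deploy pipeline. A hedged sketch: the validation rules below (opening H1, at least one link bullet) reflect this article's conventions, not a formal spec, and the function names are illustrative.

```python
import urllib.request

def validate_body(body: str) -> None:
    """Sanity-check the llms.txt conventions used in this article."""
    lines = [l for l in body.splitlines() if l.strip()]
    if not lines or not lines[0].startswith("# "):
        raise ValueError("expected an H1 ('# Business Name') on the first line")
    if not any(l.startswith("- [") for l in lines):
        raise ValueError("expected at least one '- [title](url): description' link")

def check_deploy(url: str) -> str:
    """Fetch the live file and validate it (network call; point at your own domain)."""
    with urllib.request.urlopen(url) as resp:
        if resp.status != 200:
            raise RuntimeError(f"expected HTTP 200, got {resp.status}")
        body = resp.read().decode("utf-8")  # plain text, UTF-8, no auth
    validate_body(body)
    return body
```

Run `check_deploy("https://your-domain.com/llms.txt")` after each deploy; a raised exception means the file is missing, gated, or malformed.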
That’s the whole deployment. The harder work is the pages it points to.