How to Make Your Website Show Up in ChatGPT Answers
A practical guide to optimising your website so AI assistants can read, understand, and cite your content.
I was curious the other day about whether my blog posts were showing up in ChatGPT answers.
So I asked it a few questions about no-code and Bubble to see if I’d get mentioned.
I didn’t.
And it got me thinking. More and more people are asking AI assistants instead of Googling. If my content isn’t showing up in those answers, I’m missing out on a whole new audience.
So I went down a rabbit hole researching how to make websites more readable for LLMs. Turns out most of it is just good web hygiene that we should all be doing anyway.
Here’s what I found.
Why this matters
When someone asks ChatGPT “what’s the best landing page builder for startups?”, the AI pulls from content it can parse and understand.
If your site is a mess of JavaScript with no structure, you probably won’t get mentioned. If your content is clear, well-organised, and machine-readable, you might.
This is basically SEO for AI. Instead of optimising for Google’s crawler, you’re optimising for the AI models that millions of people are now using to find answers.
1. Add an llms.txt file
This is a simple text file you put in your website’s root directory. It tells AI systems what your site is about and where to find the good stuff.
Think of it like robots.txt, but for LLMs.
# NoCodeLife
> A blog about building SaaS products without code, using Bubble and AI tools.
## Content
- /blog: Articles about no-code development, AI, and SaaS
- /about: About Kieran Ball
## Key Pages
- [How to Build a SaaS in Bubble](/blog/build-saas-bubble): Step-by-step guide
- [AI Won't Kill No-Code](/blog/ai-wont-kill-nocode): Why visual development still matters
Plain text. No fancy formatting. Just a clear summary of your site.
2. Add structured data (JSON-LD)
This is metadata that tells machines exactly what type of content they’re looking at. It’s the same stuff that powers Google’s rich snippets.
For a blog post, you add this to your page’s <head>:
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How to Build a SaaS Without Code",
  "author": {
    "@type": "Person",
    "name": "Kieran Ball"
  },
  "datePublished": "2024-01-15",
  "description": "A practical guide to building SaaS products using Bubble."
}
</script>
It helps AI models understand the content type, who wrote it, and when. Pretty useful if you want to be cited properly.
3. Use semantic HTML
Stop using <div> for everything.
Use HTML elements that actually describe what the content is:
- <article> for blog posts
- <main> for your main content area
- <nav> for navigation
- <aside> for sidebars
- <header> and <footer> for… you get it
And use proper heading hierarchy. One <h1>, then <h2> for sections, <h3> for subsections. Not random heading levels because you liked the font size.
Also add alt text to every image. Not “image1.jpg”. Actual descriptions like “Screenshot of the Bubble workflow editor showing a conditional statement.”
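Here’s a rough sketch of what that structure looks like for a single blog post (the copy is placeholder, and the alt text reuses the example above):
<body>
  <header>
    <nav><!-- site navigation links --></nav>
  </header>
  <main>
    <article>
      <h1>How to Build a SaaS Without Code</h1>
      <h2>Choosing a tool</h2>
      <p>…</p>
      <h3>Why Bubble</h3>
      <p>…</p>
      <img src="/images/bubble-workflow.png" alt="Screenshot of the Bubble workflow editor showing a conditional statement">
    </article>
    <aside><!-- related posts, newsletter signup --></aside>
  </main>
  <footer><!-- copyright, site links --></footer>
</body>
One <h1>, sections under <h2>, subsections under <h3>. Nothing clever, just a structure a machine can walk.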
4. Write good meta tags
Every page needs these:
<meta name="description" content="A practical guide to building SaaS products using no-code tools like Bubble.">
<meta name="author" content="Kieran Ball">
<link rel="canonical" href="https://nocodelife.com/blog/build-saas-bubble">
The description should be a clear, factual summary. Not marketing fluff. AI models use this to understand what the page is about before parsing the full content.
5. Have an RSS feed and sitemap
Your sitemap (/sitemap.xml) tells crawlers what pages exist and when they were last updated.
Your RSS feed (/rss.xml or /feed.xml) provides a clean, structured version of your blog content that’s easy for any system to parse.
Most static site generators create these automatically. If you’re on WordPress, there are plugins. If you’re building custom, just make sure they exist.
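For reference, here’s a minimal sketch of what a sitemap contains, using this site’s example URLs (in practice your generator or plugin writes this for you):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://nocodelife.com/blog/build-saas-bubble</loc>
    <lastmod>2024-01-15</lastmod>
  </url>
  <url>
    <loc>https://nocodelife.com/about</loc>
  </url>
</urlset>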
6. Front-load your key information
Put the most important stuff first. Inverted pyramid style.
AI models often truncate or summarise content. If your key point is buried in paragraph seven, it might get missed.
Your first paragraph should answer the main question. Everything after is supporting detail.
7. Avoid JavaScript-only content
If your content only appears after JavaScript runs, many AI systems can’t see it.
This doesn’t mean you can’t use JavaScript. It just means your core content should be in the HTML that gets served initially. Use server-side rendering or static generation where possible.
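As a rough illustration, the difference is between content that’s already in the HTML response and an empty shell that JavaScript fills in later (file names and copy here are placeholders):
<!-- Parseable: the article text is in the served HTML -->
<article>
  <h1>How to Build a SaaS Without Code</h1>
  <p>Bubble lets you build a working product without writing code…</p>
</article>

<!-- Risky: nothing here until /bundle.js runs, so many crawlers see an empty page -->
<div id="root"></div>
<script src="/bundle.js"></script>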
8. Set up robots.txt properly
Make sure you’re not accidentally blocking AI crawlers:
User-agent: *
Allow: /
Sitemap: https://yoursite.com/sitemap.xml
That’s it. Allow everything, point to your sitemap.
The full Claude Code prompt
I use Claude Code for most of my development work these days. If you do too, here’s a prompt you can paste into your project instructions to have it implement all of this automatically:
# LLM Optimisation Instructions
## Core Principles
Make content **machine-readable** without sacrificing human experience.
## Implementation Checklist
### 1. Add `llms.txt` (Root)
Create a file at /llms.txt with:
- Site name and brief description
- Content sections and what they contain
- Key pages with one-line descriptions
### 2. Structured Data (JSON-LD)
Add to every page:
- @type: Article (for blog posts), WebPage, or appropriate schema
- headline, author, datePublished, description
- Use schema.org vocabulary
### 3. Semantic HTML
- Use proper heading hierarchy (h1 → h2 → h3)
- Use article, main, nav, aside tags
- Add alt text to all images
- Use descriptive link text (not "click here")
### 4. Meta Tags
Every page needs:
- meta name="description" with clear, factual summary
- meta name="author"
- link rel="canonical"
### 5. RSS & Sitemap
- Ensure /sitemap.xml exists and is current
- Provide /rss.xml or /feed.xml for blog content
### 6. Content Best Practices
- Front-load key information (inverted pyramid)
- Use lists and tables for structured data
- Avoid content locked behind JS hydration
- Include publish/update dates on articles
### 7. Robots.txt
Allow all crawlers and point to sitemap.
## Verification
After implementing, check:
- curl -s https://yoursite.com/llms.txt
- curl -s https://yoursite.com/sitemap.xml | head -20
- Validate structured data at schema.org validator
Copy that into your CLAUDE.md or project instructions, and it’ll handle the implementation for you.
The bottom line
Most of this is just good web development. Clean HTML, structured data, proper meta tags, accessible content. Stuff we should be doing anyway.
The difference now is that it’s not just helping Google understand your site. It’s helping the AI assistants that millions of people use every day.
If you want to be part of the conversation, make sure the AIs can actually read what you’re saying.
That’s it. Now go optimise something.