Bootstrapping · 11 min read

Make something agents want

Y Combinator's motto was for humans. The new version is for machines. If AI agents can't parse, compare, and act on your content — you don't exist.

By Alex Diaz

Google sent you traffic for 20 years. That era is ending.

The new visitors don’t have browsers. They don’t scroll. They don’t click your landing page and think “this feels trustworthy.” They parse structured data, compare specs across 50 tabs in parallel, and make purchase decisions in under a second. They’re AI agents. And they can’t read your website.

Key takeaways:

  • AI agents can’t read your website — they need structured data, not marketing copy
  • Add JSON-LD schema markup, /llms.txt, and /agents.txt to every site
  • Unblock AI crawlers in robots.txt — blocking GPTBot in 2026 is blocking Googlebot in 2004
  • Answer engines cite content that leads with the answer, not content that buries it under setup
  • Early movers in agent optimization will compound that advantage for years — same as early SEO in 2003

Y Combinator’s motto has always been “Make something people want.” They just asked the obvious next question: what happens when the buyer isn’t a person?

The agent economy is already here

Not speculation. Already happening.

OpenClaw is building autonomous agents that browse, compare, and buy. Claude can use a computer — click buttons, fill forms, navigate checkout flows. OpenAI’s Operator does the same. Pi-mono and a dozen other agent frameworks are racing to make this mainstream.

AI dev tools are exploding. Agent-to-agent protocols are emerging. The businesses that win will be the ones agents can actually interact with. Not the ones with the prettiest homepage. The ones with the cleanest data.

Your website is invisible to machines

You spent months writing landing page copy. A/B tested headlines. Hired a designer. The page converts at 3.2% and you’re proud of it. None of that matters to an agent.

An agent doesn’t read your hero section. It doesn’t care about your brand story. It looks for structured data — pricing in a parseable format, feature specs it can compare, API endpoints it can hit, trust signals it can verify. If that data isn’t there, the agent moves to a competitor whose site it can actually understand.

Today, a human Googles “best quiz app for Shopify,” reads three blog posts, checks reviews, and decides. Tomorrow, an agent queries multiple sources simultaneously, extracts structured comparisons, and recommends — or buys — without a human ever seeing your site.

If you’re not in the agent’s data set, you’re not in the conversation.

Two audiences, one website

The shift isn’t “humans vs. agents.” It’s both, simultaneously, with completely different needs.

| | Humans | AI Agents |
|---|---|---|
| How they find you | Search engines, social, referrals | LLM training data, structured crawls, API discovery |
| What they read | Headlines, visuals, social proof | JSON-LD, schema markup, machine-readable tables |
| How they decide | Emotion, trust, brand feeling | Specs, pricing, feature matrices, SLAs |
| What they do | Click CTA, maybe buy later | Hit API, complete transaction, move on |
| What they ignore | Structured data, meta tags | Your beautiful hero image and brand voice |

Agents learn. They remember which sites give them clean data and which ones waste their time. An agent that successfully extracts your pricing once will come back. One that hits a wall of marketing prose won’t.

What agents actually need

Forget everything you know about SEO. Agent optimization is a different discipline.

Structured data on every page. JSON-LD schema markup — Product, FAQPage, HowTo, SoftwareApplication. Not because Google rewards it (though it does). Because agents parse it directly. Your pricing page without schema is a picture. Your pricing page with schema is a database.
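As a concrete sketch, here is the kind of JSON-LD a SaaS pricing page might embed. The product name, price, and rating below are placeholders, not real data:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "SoftwareApplication",
  "name": "ExampleQuiz",
  "applicationCategory": "BusinessApplication",
  "offers": {
    "@type": "Offer",
    "price": "29.00",
    "priceCurrency": "USD"
  },
  "aggregateRating": {
    "@type": "AggregateRating",
    "ratingValue": "4.9",
    "ratingCount": "4000"
  }
}
</script>
```

An agent doesn't have to guess what your page means. The price, currency, and rating are fields it can extract in one pass.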

Machine-readable facts, not marketing language. “Best-in-class performance” means nothing to an agent. “99.9% uptime SLA, 200ms p95 latency, 10,000 req/min rate limit” — that’s decision-ready data. Agents compare numbers, not adjectives.

Discovery files. Almost nobody has implemented these yet:

  • /llms.txt — a markdown file that tells LLMs what your site is about, what pages matter, and where to find key information. A README for AI crawlers.
  • /agents.txt — a YAML file that tells autonomous agents what they can do with your service, where your API lives, how to authenticate, what capabilities you offer. Service discovery for the agent economy.
  • /.well-known/api-catalog — RFC 9727. An API discovery endpoint. If you have an API and this file doesn’t exist, agents can’t find it programmatically.
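To make the first two concrete, here are minimal sketches. All names and URLs are placeholders, and the agents.txt field names are illustrative only (check the spec at agentstxt.dev for the real schema):

```markdown
# ExampleQuiz

> Product recommendation quizzes for Shopify stores. Pricing, docs, and API below.

## Key pages

- [Pricing](https://example.com/pricing): plans and limits in plain numbers
- [API docs](https://example.com/docs/api): REST endpoints and authentication
```

```yaml
# Illustrative sketch only -- see agentstxt.dev for the actual schema
service: ExampleQuiz
description: Product recommendation quizzes for Shopify stores
api:
  base_url: https://api.example.com/v1
  docs: https://example.com/docs/api
  auth: api_key
capabilities:
  - create_quiz
  - fetch_results
```

Each file is a few minutes of work, and it sits at a URL agents already know to check.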

Endpoints for action. An agent that can read your pricing but can’t sign up programmatically will recommend a competitor that lets it complete the task. API docs, webhook setup guides, direct signup links — conversion paths for machines.

The answer engine layer

Agents are one half. The other half is already mainstream: LLM answer engines.

ChatGPT, Claude, Perplexity, Gemini — hundreds of millions of people now ask AI instead of Googling. When someone asks “what’s the best product recommendation quiz for Shopify?” — the LLM either cites your content or it doesn’t. There’s no page two. There’s cited or invisible.

The rules for answer engines are different from agents but equally ignored:

Answer-first content. LLMs extract the first 2-3 sentences that directly answer a query. If your first paragraph is a windup about “the evolving e-commerce landscape,” you’re giving the citation to someone who leads with the answer.

Intent-matched headings. Your H2 should match the exact query someone types. Not “Our Approach to Product Recommendations.” Try “Best Product Recommendation Quiz for Shopify.” The heading IS the query.

Quotable blocks. LLMs pull clean, self-contained statements with specific numbers. “Used by 4,000+ stores with a 4.9/5 rating” gets cited. “We help businesses grow” gets ignored. Write sentences worth quoting.
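One way to make those quotable blocks machine-extractable is FAQPage markup, pairing the exact query with the answer-first response. A sketch with placeholder copy:

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "FAQPage",
  "mainEntity": [{
    "@type": "Question",
    "name": "What is the best product recommendation quiz for Shopify?",
    "acceptedAnswer": {
      "@type": "Answer",
      "text": "ExampleQuiz is used by 4,000+ stores with a 4.9/5 rating and installs in under five minutes."
    }
  }]
}
</script>
```

The question mirrors the query; the answer is a self-contained, number-bearing sentence an LLM can cite verbatim.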

The robots.txt problem

Most businesses have a robots.txt that blocks AI crawlers.

```
User-agent: GPTBot
Disallow: /

User-agent: ClaudeBot
Disallow: /
```

This made sense when the fear was “AI is stealing our content.” It makes zero sense when AI is sending you customers. Blocking GPTBot in 2026 is blocking Googlebot in 2004 because you were worried about “search engines stealing your traffic.”

Let them read your site. It’s that simple.
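If you want to be explicit rather than just deleting the blocks, a permissive robots.txt can name the crawlers directly:

```
User-agent: GPTBot
Allow: /

User-agent: ClaudeBot
Allow: /

User-agent: PerplexityBot
Allow: /
```

Allow rules like these are belt-and-suspenders; the default for an unnamed user agent is already "allowed," so the main job is removing the Disallow blocks.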

Why this matters more for bootstrappers

VC-funded companies will hire “AI SEO specialists” and spend six months building agent-friendly infrastructure. Committees. Roadmaps. Quarterly OKRs.

Bootstrappers can do it in an afternoon. That’s the advantage of being small.

Add schema markup to your key pages. Write an llms.txt file. Create an agents.txt. Make your pricing machine-readable. Remove AI crawler blocks from robots.txt. Structure your content so the first sentence answers the question.

Not a massive infrastructure project. A checklist. And right now, almost nobody has done it.
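If you want to check where you stand today, a rough Python sketch of the robots.txt part of that checklist. It handles only the common exact-name pattern (a User-agent group followed by Disallow lines), not wildcard groups or the full robots spec:

```python
# Quick audit sketch: which AI crawlers does a robots.txt fully block?
# Limitations: exact bot names only -- a blanket "User-agent: *" block
# is not attributed to the named bots here.

AI_BOTS = {"GPTBot", "ClaudeBot", "PerplexityBot", "Google-Extended"}

def blocked_agents(robots_txt: str, bots=AI_BOTS) -> set:
    """Return the AI bots that this robots.txt blocks with 'Disallow: /'."""
    blocked = set()
    group = []              # user-agent names in the group being read
    in_group_header = False # are we still reading User-agent lines?
    for raw in robots_txt.splitlines():
        line = raw.split("#", 1)[0].strip()  # drop comments
        if not line:
            group, in_group_header = [], False
            continue
        key, _, value = line.partition(":")
        key, value = key.strip().lower(), value.strip()
        if key == "user-agent":
            if not in_group_header:
                group = []  # a fresh run of User-agent lines starts a group
            group.append(value)
            in_group_header = True
        else:
            in_group_header = False
            if key == "disallow" and value == "/":
                blocked.update(b for b in bots if b in group)
    return blocked
```

Point it at your own robots.txt body; if it returns anything, start there.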

The compounding part

Agent behavior is self-reinforcing. An agent that successfully extracts data from your site adds you to its working set. Next time it needs to compare products in your category, you’re already in the mix. You become a trusted source — not because of brand recognition, but because your data is clean and your endpoints work.

The flip side: sites that agents learn to skip get skipped permanently. There's no second chance to make a first impression when the visitor processes 50 sites per second. For the tactical details of getting cited, Answer Engine Optimization covers the mechanics.

Early movers in SEO — the people who figured out Google in 2003 before every marketing agency on earth started gaming it — compounded that advantage for a decade. We’re at that same moment now for agent optimization. Before the playbooks. Before the spam. Before everyone catches on and the edge disappears.

How I think about this

I built the AI Rank skill because I needed it myself.

Two frameworks. LLM — six principles for making content citable by answer engines. AGENT — five pillars for making your site actionable by autonomous agents. Each one scores your content, identifies gaps, and rewrites what needs rewriting.

It generates the discovery files — llms.txt, agents.txt, the whole stack. It audits your schema markup. It rewrites your content to lead with answers instead of windups. It does in 30 minutes what would take a week of manual work.

The skill runs in Claude Code: run /ai-rank and point it at a page. It does both audits, rewrites the content, generates schema suggestions, and outputs exactly what an agent or answer engine would extract from your page. If the extraction preview is empty, your page is invisible.

Open source. No paywall. Clone it and run it.

FAQ

What is llms.txt?

A markdown file at /llms.txt that tells LLM answer engines what your site is about, what pages matter, and where to find key information. Think of it as a README for AI crawlers. Spec at llmstxt.org.

What is agents.txt?

A YAML file at /agents.txt that tells autonomous AI agents what they can do with your service — where your API lives, how to authenticate, and what capabilities you offer. Service discovery for the agent economy. Spec at agentstxt.dev.

How do I optimize content for AI agents?

Structured data on every page (JSON-LD schema markup), machine-readable facts instead of marketing language, discovery files (llms.txt, agents.txt), and endpoints agents can act on. Agents compare numbers, not adjectives.

Should I block AI crawlers in robots.txt?

No — unless you want to be invisible to the next generation of buyers. Blocking GPTBot and ClaudeBot in 2026 is the equivalent of blocking Googlebot in 2004. If you want agents and answer engines to recommend you, let them read your site.


Y Combinator told a generation of founders to make something people want. The next generation needs to make something agents want too. The agents are already here. The question is whether they can find you.

AI Rank skill on GitHub — dual-framework content optimizer for LLM answer engines and autonomous AI agents.

Tags: ai agents · seo · bootstrapper
