I research how AI agents work, how they consume information, and how the ecosystems around them are evolving. This page collects that work in one place. For my documentation background, see Documentation & Developer Education. For my programming projects, see Programming.
Talks & Interviews
- State of Docs Report 2026 - Featured in a discussion of how AI agents consume documentation. Podcast episode forthcoming.
- Why AI Agents Struggle with Modern Documentation (YouTube, 2026) - Interview covering how agents access documentation in real time and the failure modes most docs teams don't know about.
Specifications & Standards
Agent-Friendly Documentation Spec
A specification defining 21 checks across 8 categories for evaluating how well a documentation site serves AI agent consumers. Covers llms.txt discovery, markdown availability, page size, content structure, URL stability, and more. Based on real-world agent access patterns I've been researching since late 2025.
- Links: agentdocsspec.com
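To make the flavor of these checks concrete, here's a minimal Go sketch of one of them, llms.txt discovery. The pass criteria (200 status, plain-text content type) are illustrative, not the spec's exact definitions, and the stub server stands in for a real docs site:

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// checkLLMSTxt runs one agent-friendliness check: does the site serve
// llms.txt at its root, as plain text rather than an HTML soft-404?
func checkLLMSTxt(baseURL string) (bool, error) {
	resp, err := http.Get(strings.TrimRight(baseURL, "/") + "/llms.txt")
	if err != nil {
		return false, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return false, nil
	}
	return strings.HasPrefix(resp.Header.Get("Content-Type"), "text/plain"), nil
}

// runDemo points the check at a stand-in docs site.
func runDemo() (bool, error) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if r.URL.Path == "/llms.txt" {
			w.Header().Set("Content-Type", "text/plain; charset=utf-8")
			fmt.Fprintln(w, "# Example Docs\n\n- [Guide](/guide.md)")
			return
		}
		http.NotFound(w, r)
	}))
	defer srv.Close()
	return checkLLMSTxt(srv.URL)
}

func main() {
	ok, err := runDemo()
	fmt.Println(ok, err) // true <nil>
}
```

A real checker also has to distinguish a genuine llms.txt from a site that returns its HTML 404 page with a 200 status, which is one reason the spec defines pass criteria precisely.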
Tools
afdocs
A CLI tool that implements the Agent-Friendly Documentation Spec and tests docs sites against it. Point it at a URL and it reports where your docs stand. Published on npm.
skill-validator
A CLI that validates Agent Skills against the agentskills.io specification. Checks directory structure, frontmatter, content quality, cross-contamination risk, and token budget composition.
- Language: Go
- Links: GitHub
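As a taste of what frontmatter validation involves, here's a Go sketch of two checks: the SKILL.md must open with a YAML frontmatter block, and that block must declare `name` and `description`. The field names follow the Agent Skills convention; the real validator does more than this string scan:

```go
package main

import (
	"fmt"
	"strings"
)

// validateFrontmatter returns a list of problems found in a SKILL.md:
// a missing or unterminated frontmatter block, or missing required keys.
func validateFrontmatter(skillMD string) []string {
	var problems []string
	if !strings.HasPrefix(skillMD, "---\n") {
		return append(problems, "missing frontmatter block")
	}
	end := strings.Index(skillMD[4:], "\n---")
	if end < 0 {
		return append(problems, "unterminated frontmatter block")
	}
	front := skillMD[4 : 4+end]
	for _, key := range []string{"name:", "description:"} {
		if !strings.Contains(front, key) {
			problems = append(problems, "missing required field "+strings.TrimSuffix(key, ":"))
		}
	}
	return problems
}

func main() {
	skill := "---\nname: upgrade-stripe\ndescription: Guides a Stripe API upgrade.\n---\n\n# Instructions\n"
	fmt.Println(validateFrontmatter(skill))            // []
	fmt.Println(validateFrontmatter("# no frontmatter")) // [missing frontmatter block]
}
```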
Research & Analysis
Agent Skill Ecosystem Analysis
An ecosystem-scale analysis of 673 Agent Skills across 41 repositories, examining compliance with the Agent Skills specification and content quality. Includes an interactive dashboard and a downloadable paper.
- Links: Interactive Report ・ Blog post
Agent Skill Implementation Research
Empirical research into how agent platforms actually implement Agent Skill loading, management, and presentation. Catalogs 23 checks across 9 categories, with 17 benchmark skills containing canary phrases for testing platform behavior without relying on model self-reporting. A community-driven project accepting per-platform contributions.
- Links: agentskillimplementation.com ・ GitHub ・ Blog post
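The canary idea is simple enough to sketch: each benchmark skill embeds a unique phrase, and if that phrase surfaces in the model's transcript, the platform demonstrably loaded the skill. The phrases and skill names below are made-up stand-ins, not the project's real canaries:

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// detectCanaries reports which benchmark skills a platform actually
// loaded, by scanning the transcript for each skill's canary phrase.
func detectCanaries(transcript string, canaries map[string]string) []string {
	var loaded []string
	for skill, phrase := range canaries {
		if strings.Contains(transcript, phrase) {
			loaded = append(loaded, skill)
		}
	}
	sort.Strings(loaded) // map iteration order is random; keep output stable
	return loaded
}

func main() {
	canaries := map[string]string{
		"pdf-tools":  "ZEPHYR-PDF-7731",
		"git-helper": "ZEPHYR-GIT-4402",
	}
	transcript := "Per the loaded instructions (ZEPHYR-GIT-4402), run `git rebase -i`."
	fmt.Println(detectCanaries(transcript, canaries)) // [git-helper]
}
```

The point of this design is that it measures observable behavior: a model can claim it loaded a skill, but it can't produce a canary phrase it was never shown.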
Agent Web Fetch Behavior
Research into how coding agents actually fetch and process web content, including truncation behavior, redirect handling, and content negotiation across platforms.
- Links: Blog post
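Two of the behaviors this research examines can be mimicked in a few lines of Go: content negotiation (asking the server for markdown before HTML) and hard truncation of the response body. The Accept value and the 25 KB cap here are illustrative; real agents differ per platform, which is what the research catalogs:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
)

// fetchCapped fetches a URL the way many coding agents do: prefer
// markdown via the Accept header, then hard-truncate the body.
func fetchCapped(url string, maxBytes int64) (string, string, error) {
	req, err := http.NewRequest("GET", url, nil)
	if err != nil {
		return "", "", err
	}
	req.Header.Set("Accept", "text/markdown, text/html;q=0.5")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", "", err
	}
	defer resp.Body.Close()
	b, err := io.ReadAll(io.LimitReader(resp.Body, maxBytes)) // truncate; don't read the rest
	return string(b), resp.Header.Get("Content-Type"), err
}

// runDemo fetches from a stub server whose page far exceeds the cap.
func runDemo() (int, string, error) {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/markdown")
		io.WriteString(w, strings.Repeat("# heading\n\nparagraph\n\n", 5000))
	}))
	defer srv.Close()
	body, ct, err := fetchCapped(srv.URL, 25_000)
	return len(body), ct, err
}

func main() {
	n, ct, err := runDemo()
	fmt.Println(n, ct, err) // 25000 text/markdown <nil>
}
```

Truncation like this is silent from the page author's side, which is why page size shows up as a check in the Agent-Friendly Documentation Spec.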
Agent-Friendly Documentation Audit
An analysis of hundreds of documentation pages across popular developer tools, examining how well they serve AI agent consumers. The research that led to the Agent-Friendly Documentation Spec.
- Links: Blog post
Writing
I write about agents, documentation, and the AI ecosystem on this blog and at AE Shift.
Selected articles:
- When a Feature Request Becomes a Research Project - How an evals/ directory question turned into a 26-platform empirical research project
- Why a Platform Shouldn't Own an Open Spec - How Anthropic's stewardship of the Agent Skills spec is fragmenting the ecosystem
- Is Your llms.txt Already Stale? - Building a freshness check and discovering the tools were the problem
- Agent Skill Mega Repo Woes - Validating a 23.7k-star skill mega repo and finding problems the star count won't tell you
- An Agent is More Than Its Brain - What's inside a coding agent, and why the model is only one piece
- LLMs vs. Agents as Docs Consumers - Why "AI-friendly docs" means two different things
- Case Study: upgrade-stripe Agent Skill - Deep dive on a real-world Agent Skill
- Make Your Hugo Site Agent-Friendly - Practical how-to for static site owners
- Upskilling in the AI Age - Advice for people getting started with AI tools
For all AI-related posts, see the ai tag.