About
Who built Agent Friendly Code, why it exists, and what it deliberately isn't.
Who
Built and maintained by Himanshu Singh. Independent project — no affiliation with Anthropic, OpenAI, Google, Cognition, Anysphere, or any of the agent vendors ranked here.
Why this exists
The gap between “repo with a README” and “repo that actually helps an AI coding agent ship code” keeps widening, and there's no public way to tell who's doing the work. Agent Friendly Code tries to make that visible — per model, because the agents aren't interchangeable. Claude Code wants an AGENTS.md and a fast test loop; Cursor wants strong types and a skim-readable README; Devin wants a runnable dev environment with declared deps and tests. The same repository can score very differently across them, and a single overall number would hide that.
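A minimal sketch of what per-model scoring means in practice, with made-up signal names and weight values rather than the real profiles (those live in the methodology and the source repository): the same signal vector, run through two different weight profiles, produces two different scores.

```python
# Hypothetical sketch: one repo's signals scored under two per-model weight
# profiles. Signal names and weights are illustrative, not the real rubric.
signals = {"agents_md": 1.0, "fast_tests": 1.0, "strong_types": 0.0, "readme_skimmable": 1.0}

profiles = {
    "claude-code": {"agents_md": 0.4, "fast_tests": 0.4, "strong_types": 0.1, "readme_skimmable": 0.1},
    "cursor":      {"agents_md": 0.1, "fast_tests": 0.2, "strong_types": 0.4, "readme_skimmable": 0.3},
}

for model, weights in profiles.items():
    score = sum(weights[name] * value for name, value in signals.items())
    print(model, round(score, 2))  # same repo, different per-model scores
```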
What it isn't
This is not a benchmark of agent performance. Today every score is derived from static signals: file-existence and content-length checks on the cloned tree. No agent is actually run. Per-model rationales are derived from each agent's published documentation (sources are linked on the methodology page), but the weight values themselves are still pre-benchmark, not yet calibrated against measured agent success. Read the methodology for the full picture, including the production-cut plan to replace pre-benchmark weights with measured ones.
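For a concrete sense of what "static signals" means here, a minimal sketch in Python, assuming hypothetical file names and a made-up length threshold rather than the project's actual checks: everything is read off the cloned tree, and no agent is ever invoked.

```python
# Hypothetical sketch of static signal checks. The file names and the
# 200-character threshold are illustrative, not the project's rubric.
from pathlib import Path

def collect_signals(repo_root: str) -> dict[str, bool]:
    root = Path(repo_root)
    agents_md = root / "AGENTS.md"
    readme = root / "README.md"
    return {
        # file-existence check
        "has_agents_md": agents_md.is_file(),
        # content-length check: a near-empty README should not count
        "readme_has_substance": readme.is_file()
        and len(readme.read_text(errors="ignore")) > 200,
    }

print(collect_signals("."))
```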
Open source
MIT-licensed. The signal definitions, weight profiles, scoring code, seed list, and every score in the database are all in the source repository. If a repo's score looks wrong, open an issue with a link to the repo and the rubric item you think should be revisited; if a signal is missing, propose one.
Contact
Best signal: open an issue or discussion on GitHub.