Why We Published the State of AI Code Review


Zak Mandhro - Dec 28, 2025

We build agent-agnostic collaboration. So we asked ourselves: which agents are actually showing up?

Not the ones getting funding announcements. Not the ones winning X debates. The ones actually leaving comments on pull requests, day after day, across thousands of organizations.

We did not have a good answer. Neither did anyone else.

So we analyzed 40.3 million pull requests spanning 2022 through 2025. And what we found changed how we think about the market we are building for.


Why PullFlow Had to Do This Research

PullFlow connects GitHub, Slack, IDEs, and AI agents into a unified code review experience. We give GitHub bots and AI agents first-class conversational presence on Slack. We are not building another AI reviewer. We are building the collaboration layer that works regardless of which AI tools your team adopts.

That position gives us a unique vantage point. We see CodeRabbit comments flowing into Slack threads. We see Copilot suggestions discussed alongside human feedback. We watch teams juggle multiple AI tools in the same workflow.

But seeing is not the same as understanding.

We needed hard data. Not vendor claims. Not cherry-picked case studies. Actual participation numbers across the entire public GitHub ecosystem.

The internal question was simple: Where does the market actually stand, and where is it headed?


What 40.3 Million Pull Requests Revealed

What we measured (and what we did not)

Before diving into the findings, it is critical to understand what this research actually captures.

We tracked AI agents that participate in code review with their own GitHub identities - bots that leave comments, reviews, and open PRs under recognizable agent accounts like @coderabbitai or @github-actions[bot].
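As an illustration, identity-based tracking of this kind can be sketched in a few lines. This is a hypothetical simplification, not the report's actual pipeline; the `[bot]` suffix rule and the agent-account list (beyond @coderabbitai) are assumptions for the sake of the example:

```python
# Hypothetical sketch: flag PR participants as "visible" AI agents by their
# GitHub account identity. The account list is illustrative, not exhaustive.
KNOWN_AGENT_ACCOUNTS = {
    "coderabbitai",  # mentioned in the text; other entries would be added here
}

def is_visible_agent(login: str) -> bool:
    """True if a participant's login looks like an agent account."""
    normalized = login.lower()
    # GitHub app accounts carry a "[bot]" suffix, e.g. github-actions[bot]
    return normalized.endswith("[bot]") or normalized in KNOWN_AGENT_ACCOUNTS

def pr_has_agent_participation(participant_logins: list[str]) -> bool:
    """A PR counts toward AI participation if any participant is an agent."""
    return any(is_visible_agent(login) for login in participant_logins)
```

Classification like this only sees agents with their own identities, which is exactly why AI-assisted code committed under human logins stays invisible to it.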

What we did not measure: The vast majority of AI-assisted code that shows up under human GitHub logins.

When a developer uses Cursor, GitHub Copilot, Claude Code, or ChatGPT to write code and commits it themselves, that contribution appears as fully human-authored. Sometimes there is a Co-authored-by attribution in the commit message. Often there is not.

Our 14.9% AI participation rate counts only the visible agents - the ones with distinct identities in the GitHub workflow. The actual volume of AI-influenced code is far higher.

This matters because it means we are measuring the tip of the iceberg. The agents we tracked are the ones that organizations explicitly integrated into their workflows as collaborative participants. The invisible AI assistance happening in every developer’s IDE is a separate, much larger phenomenon.

We focused on the visible agents because those are the ones that collaboration infrastructure needs to support. They are the ones leaving comments in PR threads. The ones teams need to route, filter, and respond to. The ones that change how code review actually works.

The adoption curve is steeper than anyone expected

AI agent participation in code review went from 1.1% in February 2024 to 14.9% in November 2025.

That is a 13.5X increase in under two years.

In 2025 alone, adoption accelerated 3.7X - from 4.0% in January to 14.9% in November.
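As a quick sanity check, both multiples follow directly from the reported participation rates (figures are taken from the text above; this is just the arithmetic):

```python
def growth_multiple(start_pct: float, end_pct: float) -> float:
    """How many times larger the end rate is than the start rate."""
    return round(end_pct / start_pct, 1)

# Participation rates from the report, in percent of PRs
feb_2024 = 1.1   # February 2024
jan_2025 = 4.0   # January 2025
nov_2025 = 14.9  # November 2025

overall = growth_multiple(feb_2024, nov_2025)  # ≈ 13.5X since Feb 2024
in_2025 = growth_multiple(jan_2025, nov_2025)  # ≈ 3.7X during 2025
```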

This is not a gradual shift. Teams are adopting AI review tools faster than most infrastructure can adapt to support them.

Platform distribution is winning

Here is the uncomfortable finding: CodeRabbit was purpose-built for AI code review. It is genuinely good at what it does. And it leads in total PR volume for 2025.

But GitHub Copilot now leads in organizational adoption.

Copilot was not built for code review. It started as autocomplete. Yet it dominates because it is pre-installed, zero-friction, and bundled into existing GitHub subscriptions.

The best product does not always win. The most distributed product often does.

Public vs. Private Repositories

This report analyzes public GitHub activity. Private repositories tell a different story. Anonymized PullFlow data shows CodeRabbit leading adoption among our customers’ private organizations. This suggests a divergence between public and private adoption patterns that we will explore in future research.

Consolidation happened before most people noticed

The top three AI review agents - CodeRabbit, Copilot, and Google Gemini - now control 72% of all activity.

Google Gemini emerged as the fastest climber of 2025, with 43X growth. Yet even with this rapid rise, the market is consolidating around platform giants and a few specialized leaders.

Korbit, a purpose-built AI code review startup that had raised funding and built real traction, shut down this year. Consolidation outpaced most of the industry's realization that there was a market at all.

AI agents are not just reviewing - they are authoring independent PRs

This one surprised us: AI agents authored over 99,000 pull requests in 2025.

Copilot leads with 75,000+ PRs. But newer coding agents like Devin and Jules are emerging as significant contributors. The boundary between “AI that reviews code” and “AI that writes code” is dissolving.


What This Means for Engineering Leaders

This report is not about picking which AI tool wins. The landscape will keep shifting. Copilot, ChatGPT, Gemini, Claude - they will trade positions. New entrants will emerge. Some current players will not survive 2026.

The strategic question is different: Is your team’s collaboration infrastructure ready for a world where AI agents are first-class participants in code review?

Most teams are not ready. Their Slack channels do not surface AI feedback effectively. Their workflows treat bot comments as noise rather than signal. Their tooling assumes humans are the only reviewers that matter.

The AI agents are already in your PRs. The question is whether your collaboration infrastructure sees them - or ignores them.

This is exactly why we built PullFlow the way we did. Agent-agnostic. Integrated across GitHub, Slack, and IDEs. Treating AI reviewers as first-class conversational participants.

The report confirmed we are building for the right shift.


The Full Data

Everything we cited is from our State of AI Code Review 2025 report:

Full report and methodology

Includes monthly breakdowns, full agent rankings beyond the top 10, and detailed methodology for how we identified and classified AI agents.

We published this because we think the industry needs shared data to have real conversations about where AI code review is headed.

We want you to challenge it. Tell us where we are wrong. Show us what we missed.

The conversation matters more than being right.


PullFlow is the collaboration platform for human + AI software teams. We connect GitHub, Slack, IDEs, and AI agents into unified code review. Try PullFlow free.
