
The New Git Blame: Who's Responsible When AI Writes the Code?
Git blame used to be simple—it told you who wrote a line of code. But when AI writes the code, who's responsible when something breaks? Discover how teams are adapting to maintain accountability in the age of AI-assisted development.

git blame used to be simple.
It told you who wrote a line of code—and maybe, if you squinted at the commit message, why.
But now? That line might’ve been written by GPT-4. Or Claude. Or merged automatically by a bot you forgot existed.
And when something breaks in production, no one’s quite sure who’s on the hook.
We’re entering a new era of software development—where authorship, responsibility, and accountability are getting harder to untangle.
🚨 Claude Tried to Co-Author My Commit
Let’s start with a real example.
Claude Code, Anthropic’s AI coding assistant, automatically adds itself as a co-author on any commit it helps generate:
```
Co-authored-by: Claude <noreply@anthropic.com>
```
You don’t ask it to. It just does it by default.
And for a while, that email address wasn’t registered to Anthropic on GitHub. So in some public repos, Claude commits showed up as authored by a completely unrelated user—someone who had claimed that address first.
So now your commit history says:
“This line was written by Claude… and also Panchajanya1999?”
Even if the attribution worked, Claude still provides:
- No prompt history
- No reviewer
- No model version
- No audit trail
If that line breaks production, good luck tracing it back to anything useful.
⚙️ If you’re using Claude Code, you can disable this by setting:
includeCoAuthoredBy: false
in your Claude config.
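At the time of writing, Claude Code reads project settings from a JSON file in your repo. A minimal sketch, assuming the .claude/settings.json location from the current docs:

```bash
# Sketch: assumes Claude Code's project settings live at .claude/settings.json
# (check the current docs). This overwrites the file, so merge by hand if it already exists.
mkdir -p .claude
cat > .claude/settings.json <<'EOF'
{
  "includeCoAuthoredBy": false
}
EOF
```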
But the bigger issue? This is what happens when AI tries to act like a teammate—without any of the structure real teammates require.
🧠 When Git Blame Isn’t Enough
Claude isn’t the only case. Here’s how authorship is already breaking in modern, AI-powered workflows:
| Scenario | What Happened | What git blame Says | What’s Missing |
|---|---|---|---|
| Copilot bug | Dev accepts a buggy autocomplete | Dev is blamed | No trace AI was involved |
| Bot opens PR | LLM agent opens PR, human merges | Bot is author | No reviewer listed |
| AI refactor | Script rewrites 100+ files | Bot owns commit | Was it tested or reviewed? |
| Auto-review | ChatGPT-style bot approves PR | ✅ from bot | No human ever looked at it |
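You can at least measure how much of this already lives in your history. Git’s log format can print commit trailers, so a rough audit (rough because it only catches commits that were labeled at all) looks something like this; swap in whatever bot addresses your team actually uses:

```bash
# Rough audit: list commits whose Co-authored-by trailer points at an AI account.
# Commits where the AI helped silently won't show up here.
git log --format='%h %an %s%n%(trailers:key=Co-authored-by,valueonly)' \
  | grep -i -B 1 -E 'noreply@(anthropic|openai)\.com'
```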
👥 Developers Are Reframing AI Responsibility
Teams are starting to adopt new mental models:
- 🛠 AI as a tool → You used it, you own the result.
- 👶 AI as a junior dev → It drafts, you supervise.
- 🤖 AI as an agent → It acts independently, so policy and traceability matter.
- 👥 AI as a teammate → It commits code? Then it needs review, metadata, and accountability.
One lightweight approach:
Bot Sponsorship: any AI-authored or AI-reviewed PR must have a named human who takes responsibility for it.
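Here’s what that could look like as a CI gate, sketched in shell. The Sponsored-by: line is a made-up convention (not a standard), and PR_AUTHOR / PR_BODY stand in for whatever variables your CI actually exposes:

```bash
#!/usr/bin/env bash
# Sketch of a "bot sponsorship" gate: fail AI-involved PRs with no named human.
# PR_AUTHOR and PR_BODY are placeholders for values your CI system provides.
set -euo pipefail

ai_involved=false
[[ "${PR_AUTHOR:-}" == *"[bot]"* ]] && ai_involved=true
grep -qiE 'co-authored-by:.*noreply@(anthropic|openai)\.com' <<<"${PR_BODY:-}" && ai_involved=true

if $ai_involved && ! grep -qiE '^sponsored-by: *@[a-z0-9_-]+' <<<"${PR_BODY:-}"; then
  echo "AI-involved PR needs a human sponsor: add 'Sponsored-by: @username' to the description." >&2
  exit 1
fi
echo "Sponsorship check passed."
```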
🛠 Making AI-Assisted Development Accountable
Here are a few things teams are doing to keep ownership clear and prevent surprise postmortems:
1. Annotate commits and PRs clearly
git commit -m "Refactor auth logic [AI]"Co-authored-by: GPT-4o <noreply@openai.com>Reviewed-by: @tech-lead
In PR descriptions:
```markdown
### AI Involvement
Model: Claude 3
Prompt: "Simplify caching layer"
Prompted by: @victoria-dev
Reviewed by: @tech-lead
```
2. Store lightweight metadata
```yaml
ai_contribution:
  model: gpt-4o
  prompted_by: victoria
  reviewed_by: tech-lead
  model_version: 4o-2025-06
```
This makes it way easier to debug or explain later.
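Where you keep that metadata is up to you. One low-friction option (an illustration, not the only way) is git notes, which attach data to a commit after the fact without rewriting history:

```bash
# Attach AI-contribution metadata to the latest commit without amending it.
git notes add -m 'ai: model=gpt-4o prompted_by=victoria reviewed_by=tech-lead' HEAD

# The note shows up next to the commit when you read the log later.
git log -1 --notes
```

One caveat: notes aren’t pushed by default, so teams that lean on them usually add refs/notes/* to their fetch and push config.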
3. Treat bots like teammates (with guardrails)
- Don’t auto-merge bot PRs
- Require human signoff
- Keep prompt + model logs for important changes
🧾 Why It Actually Matters
This isn’t just a Git trivia problem. It’s about:
- 🪵 Debugging — Who changed what, and why?
- 🛡 Accountability — Who’s responsible if it breaks?
- 📋 Compliance — In fintech, healthtech, or enterprise software, this stuff has legal consequences
Even on small teams, unclear authorship leads to tech debt, confusion, and wasted time later.
💬 What About Your Team?
If you’re using Claude, Copilot, Cursor, or any AI tools:
- Do you annotate AI-generated code?
- Do bots ever open or merge PRs?
- Have you had to debug a “ghost commit” yet?
Drop a comment — I’m working on a follow-up post with real-world policies and would love to hear what’s working (or not) on your end.