
Signed, Not Safe: What Copilot Cloud Agent's Verified Commits Actually Prove
On April 3, 2026, GitHub shipped commit signing for Copilot cloud agent. Every commit the agent makes now carries a Verified badge on GitHub. That badge answers one question: did this commit really come from the Copilot cloud agent, and was it unaltered in transit? It's a useful signal. It's also the wrong signal to use as a quality gate.
The distinction matters because teams are already conflating "signed" with "safe." If your branch protection rules require signed commits and your agent's commits now pass that check, the path from AI-generated code to production just got shorter. Whether that's a good thing depends entirely on what else stands between the commit and the merge button.
Commit signing is a cryptographic operation. GitHub's signing key, applied on behalf of the Copilot cloud agent, produces a signature that proves two things:

- The commit was genuinely created by the Copilot cloud agent, not by someone impersonating it.
- The commit's contents have not been altered since it was signed.
That's the full scope. The signature says nothing about whether the code is correct, whether it introduces a security vulnerability, whether it follows your codebase conventions, or whether it even compiles. Cryptographic authorship verification and code quality verification are fundamentally different operations, and no amount of signing infrastructure can bridge that gap.
Think of it like a notarized document. The notary confirms that the person who signed is who they claim to be. The notary has no opinion on whether the contract's terms are reasonable.
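That scope can be made concrete. GitHub's REST API exposes a `verification` object on each commit (with `verified` and `reason` fields); a minimal sketch of a provenance-only check over data shaped like that response (sample dict, not a live API call):

```python
# Sketch: what signature verification does and does not tell you.
# The dict mimics the `verification` object in GitHub's commits API
# (fields: verified, reason, signature, payload).

def provenance_ok(verification: dict) -> bool:
    """True if the commit's signature verified cleanly.

    This is purely a provenance check: it says nothing about whether
    the committed code compiles, is correct, or is secure.
    """
    return (
        verification.get("verified") is True
        and verification.get("reason") == "valid"
    )

# Sample commit metadata, illustrative only.
agent_commit = {
    "verified": True,
    "reason": "valid",
    "signature": "-----BEGIN PGP SIGNATURE-----...",
    "payload": "tree 9b2f...\nauthor copilot...",
}

print(provenance_ok(agent_commit))  # True: provenance intact, quality unknown
```

Note that the function never looks at the diff. There is nothing in the signature for it to look at.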
Before this change, Copilot cloud agent couldn't push to repositories that required signed commits. That was a friction point — teams with strict branch protection rules had to carve out exceptions or avoid using the agent entirely. Signing removes that friction, which is genuinely useful for adoption.
The risk is subtle. When a commit shows Verified in the GitHub UI, it triggers a psychological association with trustworthiness that extends beyond the cryptographic claim. "Verified" reads as "vetted." For human-authored commits, the gap between those two meanings is small — a verified commit from a senior engineer carries implicit trust in both authorship and judgment. For agent-authored commits, the gap is enormous. The agent's identity is verified; the agent's judgment is not.
This conflation is already showing up in practice. Teams that previously used required_signatures as a gatekeeping mechanism now find that agent commits pass the gate automatically. If required signatures were the only hard constraint before merge, agent-generated PRs can land with less scrutiny than before the feature shipped.
A signed commit from Copilot cloud agent can contain any of the following problems, and the signature will still verify cleanly:

- A logic error that produces wrong results on edge cases.
- A security vulnerability.
- Code that violates your codebase's conventions.
- Code that doesn't compile or breaks the build.
Every one of these is a code review problem, not a signing problem. And every one of them becomes more frequent as agent-authored code volume increases.
This is where automated code review fills the gap that signing leaves open. Tenki's code reviewer runs as a GitHub Actions check on every PR, regardless of who (or what) authored the commits. It evaluates the actual diff for logic issues, security anti-patterns, and convention violations — the exact categories that a cryptographic signature has no opinion on.
The right mental model: treat the agent's signature as an audit trail, not a quality gate. The Verified badge tells you the commit's provenance is intact. Good — you now have reliable attribution for every line the agent wrote. That attribution is useful for post-incident analysis, compliance reporting, and understanding how much of your codebase is agent-generated.
What it shouldn't do is reduce the review bar. If anything, a commit from an AI agent warrants more review scrutiny, not less. Human developers build context over months. They know which modules are fragile, which tests are flaky, which APIs are being deprecated. The agent starts fresh on every task.
GitHub's branch protection offers require signed commits and require pull request reviews as separate ruleset controls. They're independent checks, and both should be enabled when agents are pushing commits. Here's why each matters:
- Require signed commits ensures every commit has verified provenance. No unsigned or spoofed commits land on protected branches. This is your tamper-proofing layer.
- Require a pull request before merging, with required approvals, ensures that signed agent commits still get reviewed before they reach the main branch. This is your correctness layer.
- Require status checks to pass lets you wire in automated review tools as required checks. A PR from Copilot cloud agent lands in the review queue just like any other PR — it passes only when the automated reviewer and a human reviewer both approve.

The configuration that gets teams into trouble is enabling required_signatures without required_reviews. In that setup, agent commits are now cryptographically verified and can reach the default branch without any review at all. That's the specific misconfiguration to audit for.
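That audit can be sketched as a pure check over already-fetched ruleset data. The rule `type` strings below mirror the shape of GitHub's repository-ruleset API; this operates on a local list of rule dicts, not a live API call:

```python
# Sketch: flag the risky configuration -- signatures required, reviews not.
# Each rule dict carries a `type` field, mirroring GitHub's ruleset API.

def signing_without_review(rules: list[dict]) -> bool:
    """True if the ruleset requires signed commits but not PR reviews."""
    types = {rule.get("type") for rule in rules}
    return "required_signatures" in types and "pull_request" not in types

risky = [{"type": "required_signatures"}]
safe = [{"type": "required_signatures"}, {"type": "pull_request"}]

print(signing_without_review(risky))  # True: audit this ruleset
print(signing_without_review(safe))   # False
```

Running a check like this across every repository where the agent is enabled is a one-afternoon audit that closes the exact gap described above.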
A branch ruleset that works well for repositories using Copilot cloud agent looks like this:
```yaml
# GitHub repository ruleset (conceptual)
rules:
  - type: required_signatures          # tamper-proofing
  - type: pull_request                 # correctness gate
    parameters:
      required_approving_review_count: 1
      dismiss_stale_reviews_on_push: true
  - type: required_status_checks       # automated review
    parameters:
      strict_required_status_checks_policy: true
      required_status_checks:
        - context: "tenki-review"      # or your review tool
        - context: "ci / test"
```

With this configuration, a Copilot cloud agent PR goes through three independent checks: the commit signature verifies provenance, the automated review tool evaluates code quality, and a human reviewer signs off. The signature doesn't accelerate the merge — it just confirms who's responsible for the code under review.
Tenki slots into the required_status_checks layer. It runs on every PR as a GitHub Actions check, reviews the diff for the categories of issues listed above, and posts findings inline on the PR. The check blocks merge if critical issues are found. It runs at $0.50 per review with 10 free reviews to start, so the cost of reviewing agent-generated PRs is trivial relative to the cost of a bug reaching production.
None of this means commit signing is useless. It solves a real problem — just not the one teams hope it solves.
Signed agent commits give you a reliable way to answer "what percentage of our code was written by an AI agent?" after the fact. That's valuable for compliance reviews, for understanding your codebase's risk profile, and for measuring whether agent-authored code has a different defect rate than human-authored code. You can't answer those questions accurately if the agent's commits aren't reliably attributed.
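Once attribution is reliable, the measurement itself is simple. A minimal sketch over sample commit records — the `copilot` author login is an assumption for illustration, and real data would come from `git log` or the GitHub API rather than a hardcoded list:

```python
# Sketch: measure the agent-authored share of verified commits.
# Sample records stand in for data pulled from git history; the
# "copilot" author login is an assumed identifier, not an official one.

def agent_share(commits: list[dict], agent_login: str = "copilot") -> float:
    """Fraction of verified commits authored by the agent identity."""
    verified = [c for c in commits if c.get("verified")]
    if not verified:
        return 0.0
    agent = [c for c in verified if c.get("author") == agent_login]
    return len(agent) / len(verified)

history = [
    {"author": "copilot", "verified": True},
    {"author": "alice", "verified": True},
    {"author": "copilot", "verified": True},
    {"author": "bob", "verified": True},
]
print(agent_share(history))  # 0.5
```

The same attribution lets you segment defect rates by author type — exactly the kind of question signed commits make answerable.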
Signing also prevents a class of supply chain attack where a malicious actor forges commits to look like they came from the agent. In a world where teams increasingly trust agent-authored code, impersonating the agent becomes an attractive attack vector. Signed commits close that vector.
Both of those are legitimate reasons to enable commit signing. Neither is a reason to skip code review.
Commit signing answers "who wrote this?" Code review answers "should this ship?" They're complementary controls, not substitutes. As AI agents generate a growing share of commits in your repositories, keeping that distinction sharp is the difference between a well-governed development pipeline and one that's fast-tracking unreviewed code to production under a green checkmark.
Enable signing. Require review. Don't confuse the two.