Track AI Code all the way to production

An open-source Git extension for tracking AI code through the entire SDLC.

Tracking Code by

Cursor
Codex
Claude Code
GitHub Copilot
Gemini
OpenCode
RovoDev
Junie
Continue
Droid

Just Install and Commit

Install the Git extension, then work as usual. Git AI uses hooks to track every line of AI code that enters your codebase.

  • Works with every coding agent
  • AI attributions survive rebases, merges, and cherry-picks
  • No workflow changes
Install (Mac, Linux, Windows)

AI Blame

Git AI links each line of AI code to the agent, model, and prompt that generated it, helping engineers and their agents understand the "why" behind every line.

Read the Docs

Ask the Author (Agent)

See AI code you don't understand? The /ask skill lets you talk to the agent that wrote any piece of code about how to use it, its architecture decisions, and the engineer's original intent.

  • Cross-agent: ask Cursor about code written by Claude Code
  • Answers include the original intent, not just what the code does
Read the Docs

How it works

  1. Supported Agents mark the code they wrote by calling `git-ai checkpoint`
  2. On post-commit, AI line attributions are saved into a Git Note
  3. Git AI preserves these attributions through rebases, squashes, amends, merges, resets, cherry-picks, etc.
Learn More
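Because attributions live in standard Git Notes under refs/notes/ai, any commit's note can be inspected with plain git. A minimal sketch using only stock git commands in a throwaway repository; the note payload here is a stand-in for illustration, not Git AI's real output:

```shell
set -e
# Throwaway demo repo with one commit
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.name=Demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "demo commit"
# Attach an attribution payload under the same notes ref Git AI uses
git -c user.name=Demo -c user.email=demo@example.com \
    notes --ref=ai add -m '{"prompts":{}}' HEAD
# Read the note back for any commit
git notes --ref=ai show HEAD
```

Because the data is an ordinary notes ref, it travels with the repository and never touches the commit objects or your working tree.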
AI Blame
 1  pub fn post_clone_hook(
 2      parsed_args: &ParsedGitInvocation,
 3      exit_status: std::process::ExitStatus,
 4  ) -> Option<()> {
 5
 6      if !exit_status.success() {
 7          return None;
 8      }
 9
10      let target_dir =
11          extract_clone_target_directory(&parsed_args.command_args)?;
12
13      let repository =
14          find_repository_in_path(&target_dir).ok()?;
15
16      print!("Fetching authorship notes from origin");
17
18      match fetch_authorship_notes(&repository, "origin") {
19          Ok(()) => {
20              debug_log("successfully fetched authorship notes from origin");
21              print!(", done.\n");
22          }
23          Err(e) => {
24              debug_log(&format!("authorship fetch from origin failed: {}", e));
25              print!(", failed.\n");
26          }
27      }
28
29      Some(())
30  }
Git Note (refs/notes/ai #<commitsha>)
hooks/post_clone_hook.rs
  promptid1 6-8
  promptid2 16,21,25
---
{
"prompts": {
  "promptid1": {
    "agent_id": {
      "tool": "copilot",
      "model": "Codex 5.2"
    },
    "human_author": "Alice Person",
    "summary": "Reported on GitHub #821: Git AI tries fetching authorship notes for interrupted (CTRL-C) clones. Fix: gaurd note fetching on successful clone.",
    ...
  },
  "promptid2": {
    "agent_id": {
      "tool": "cursor",
      "model": "Sonnet 4.5"
    },
    "human_author": "Jeff Coder",
    "summary": "Match the style of Git Clone's output to report success or failure of the notes fetch operation.",
    ...
  }
}
}

Git AI maintains the open standard for tracking AI authorship in Git Notes. Learn more on GitHub
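At the plumbing level, the notes fetch performed by the post-clone hook shown above corresponds to fetching the notes ref explicitly, since `git clone` does not copy notes refs by default. A sketch with plain git in throwaway repositories; the exact refspec Git AI uses internally is an assumption:

```shell
set -e
work=$(mktemp -d)
# "origin": a repo with one commit and an attribution note
git init -q "$work/origin"
git -C "$work/origin" -c user.name=D -c user.email=d@example.com \
    commit -q --allow-empty -m "demo"
git -C "$work/origin" -c user.name=D -c user.email=d@example.com \
    notes --ref=ai add -m 'promptid1 6-8' HEAD
# Clone it; the notes ref is not fetched automatically
git clone -q "file://$work/origin" "$work/clone"
# Fetch authorship notes from origin, as the hook does after a successful clone
git -C "$work/clone" fetch -q origin 'refs/notes/ai:refs/notes/ai'
git -C "$work/clone" notes --ref=ai show HEAD
```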


Personal Dashboard

Track your usage across Agents, compare models, and analyze your prompting to figure out what works best.

  • Get better at prompting
  • Track your output and effectiveness with AI
Set up your Personal Dashboard
Why Git AI
  • No workflow changes: Just prompt and commit. Git AI tracks AI code accurately without cluttering your git history.

  • "Detecting" AI code is an anti-pattern: Git AI does not guess whether a hunk is AI-generated. Supported agents report exactly which lines they wrote, giving you the most accurate attribution possible.

  • Local-first: Works 100% offline, no login required.

  • Git-native and an open standard: Git AI uses an open standard for tracking AI-generated code with Git Notes.

  • Transcripts stay out of Git: Stored locally in a private SQLite database, or in your team's cloud or self-hosted prompt store, keeping your repos lean, free of sensitive information, and giving you control over your data.

Install the Git Extension

Open Source

Install the Git AI extension. It's free, local-first, and open source.

  • Accurate AI-Attribution on every commit
  • AI Blame in your IDE
  • Local storage for every Agent session
  • Let Agents read past prompts while planning
  • Measure % AI Code and AI-Accepted Rate of every commit

For Teams

Cloud / Self-Hosted

Unified analytics and context storage. Accelerate your team's AI adoption and ensure all your new AI code can be maintained and built upon.

Measure AI Code
  • Track % AI Code across all your repos
  • Track AI Code through the entire SDLC: Generated → Accepted → PR → Merged → Durability
  • PR-level % AI Code, token cost, and AI efficacy
Shared Prompts and Context
  • Link intent, requirements, and architecture decisions to generated code
  • Make your agents smarter by using past prompts as context
  • Own your prompts — don't let agent vendors lock you in
Spread What Works
  • Automated insights into what's working
  • Personalized tips for every engineer
  • Compare AI effectiveness across teams, repositories, and task types