Git AI

OTEL Events

The events Git AI exports to help you measure AI adoption, compare agents, and track the durability of AI code

AI Metrics

The otel-cli exports OpenTelemetry log events that you can query in your observability platform to understand AI tool adoption, compare agent effectiveness, and measure code durability.

Event Schemas

The CLI exports three event types: git.pr.merged, git.pr.closed, and git.daily.

PR Events (git.pr.merged / git.pr.closed)

Emitted when a PR is merged or closed. One event is created per tool/model combination, plus an aggregate event with tool="all".

| Attribute | Type | Description |
|-----------|------|-------------|
| event.name | string | git.pr.merged or git.pr.closed |
| git.pr.link | string | Full URL to the pull request |
| git.pr.repository | string | Repository URL |
| git.pr.tool | string | AI tool name (e.g., cursor, copilot) or all for aggregate |
| git.pr.model | string | Model name (e.g., claude-3.5-sonnet, gpt-4) or all for aggregate |
| git.pr.human_additions | int64 | Lines written entirely by humans |
| git.pr.mixed_additions | int64 | Lines written by an AI, then edited by a human |
| git.pr.ai_accepted | int64 | AI-generated lines that were accepted without changes |
| git.pr.ai_additions | int64 | ai_accepted + mixed_additions |
| git.pr.total_ai_additions | int64 | Total AI lines generated while working on the PR. Usually greater than ai_additions because engineers revert some AI-written code |
| git.pr.total_ai_deletions | int64 | Total lines deleted by the AI while working on the PR |
| git.pr.time_waiting_for_ai | int64 | Cumulative seconds spent waiting for AI responses |
| git.pr.pr_total_added | int64 | Total lines added in the PR (from git diff) |
| git.pr.pr_total_deleted | int64 | Total lines deleted in the PR (from git diff) |
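
For orientation, here is a hypothetical git.pr.merged event rendered as a Python dict of its attributes. All values are illustrative, not real output; note the schema invariant ai_additions = ai_accepted + mixed_additions.

# Hypothetical git.pr.merged event attributes (illustrative values only).
pr_merged_event = {
    "event.name": "git.pr.merged",
    "git.pr.link": "https://github.com/acme/widgets/pull/128",  # hypothetical repo/PR
    "git.pr.repository": "https://github.com/acme/widgets",
    "git.pr.tool": "cursor",
    "git.pr.model": "claude-3.5-sonnet",
    "git.pr.human_additions": 40,
    "git.pr.mixed_additions": 25,
    "git.pr.ai_accepted": 120,
    "git.pr.ai_additions": 145,        # 120 accepted + 25 mixed
    "git.pr.total_ai_additions": 210,  # includes AI lines later reverted
    "git.pr.total_ai_deletions": 35,
    "git.pr.time_waiting_for_ai": 420,
    "git.pr.pr_total_added": 185,
    "git.pr.pr_total_deleted": 60,
}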

Daily Summary Events (git.daily)

Emitted during scheduled exports (e.g., nightly cron). Summarizes commits from the last 2 weeks. One event is created per tool/model combination, plus an aggregate event with tool="all".

| Attribute | Type | Description |
|-----------|------|-------------|
| event.name | string | git.daily |
| git.daily.date | string | UTC date in YYYY-MM-DD format |
| git.daily.repository | string | Repository URL |
| git.daily.tool | string | AI tool name (e.g., cursor, copilot) or all for aggregate |
| git.daily.model | string | Model name (e.g., claude-3.5-sonnet, gpt-4) or all for aggregate |
| git.daily.total_commits | int64 | Total commits in the date range |
| git.daily.commits_with_authorship | int64 | Commits that have authorship data |
| git.daily.git_diff_added_lines | int64 | Total lines added (from git diff) |
| git.daily.git_diff_deleted_lines | int64 | Total lines deleted (from git diff) |
| git.daily.human_additions | int64 | Lines written by humans |
| git.daily.mixed_additions | int64 | Lines written by an AI, then edited by a human |
| git.daily.ai_additions | int64 | ai_accepted + mixed_additions |
| git.daily.ai_accepted | int64 | AI-generated lines that were accepted without changes |
| git.daily.total_ai_added | int64 | Total AI additions |
| git.daily.total_ai_deleted | int64 | Total AI deletions |
| git.daily.time_waiting_for_ai | int64 | Seconds waiting for AI |

Examples

Here are a few examples of how you can use these events to understand your team's AI adoption:

Team Adoption Rate

Question: What percentage of PRs have AI assistance?

filter: git.pr.merged where tool = "all"

total_prs      = count(*)
prs_with_ai    = count(*) where ai_additions > 0
ai_assisted    = prs_with_ai / total_prs
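
The same computation as a minimal Python sketch, assuming you have pulled the matching events out of your observability platform as a list of attribute dicts (like the hypothetical event above):

# events: git.pr.merged attribute dicts fetched from your observability backend (assumed)
pr_events = [e for e in events if e["git.pr.tool"] == "all"]
total_prs = len(pr_events)
prs_with_ai = sum(1 for e in pr_events if e["git.pr.ai_additions"] > 0)
ai_assisted_rate = prs_with_ai / total_prs if total_prs else 0.0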

Adoption by Tool

Question: Which AI tools are being used across our org?

filter: git.daily where tool != "all"
group by: tool

total_ai_lines     = sum(ai_accepted)
cumulative_ai      = sum(total_ai_added)
repos_using        = count_distinct(repository)

Weekly Active Repositories

Question: How many repositories had AI-assisted commits this week?

filter: git.daily where tool = "all" and ai_accepted > 0
group by: week

active_repos    = count_distinct(repository)
total_ai_lines  = sum(ai_accepted)
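
A sketch of the same rollup in Python, bucketing git.daily events by ISO week (one way to interpret "group by: week"; again assuming the filtered events are available as attribute dicts):

from collections import defaultdict
from datetime import datetime

# daily_events: git.daily attribute dicts with tool == "all" and ai_accepted > 0 (assumed)
weekly_repos = defaultdict(set)
weekly_ai_lines = defaultdict(int)
for e in daily_events:
    day = datetime.strptime(e["git.daily.date"], "%Y-%m-%d").date()
    week = day.isocalendar()[:2]  # (ISO year, ISO week number)
    weekly_repos[week].add(e["git.daily.repository"])
    weekly_ai_lines[week] += e["git.daily.ai_accepted"]

active_repos = {week: len(repos) for week, repos in weekly_repos.items()}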

Comparing Agents

Use the tool and model attributes to compare effectiveness across different AI coding assistants.

Acceptance Rate by Agent

Question: Which agent's code is accepted most often?

filter: git.pr.merged where tool != "all"
group by: tool, model

accepted_lines   = sum(ai_accepted)
total_ai_lines   = sum(ai_additions)
acceptance_rate  = accepted_lines / total_ai_lines
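
A Python sketch of the grouped version, under the same assumptions as the earlier sketches; the Code Volume and Model Comparison examples below follow the identical group-and-sum pattern:

from collections import defaultdict

# pr_events: git.pr.merged attribute dicts with tool != "all" (assumed)
accepted = defaultdict(int)
total = defaultdict(int)
for e in pr_events:
    key = (e["git.pr.tool"], e["git.pr.model"])
    accepted[key] += e["git.pr.ai_accepted"]
    total[key] += e["git.pr.ai_additions"]

acceptance_rate = {key: accepted[key] / total[key] for key in total if total[key] > 0}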

Code Volume by Agent

Question: How much code is each agent contributing?

filter: git.pr.merged where tool != "all"
group by: tool

pr_count         = count(*)
ai_lines_added   = sum(ai_additions)
ai_lines_deleted = sum(total_ai_deletions)
total_pr_lines   = sum(pr_total_added)

Model Comparison Within Tool

Question: For Cursor users, which model performs best?

filter: git.pr.merged where tool = "cursor"
group by: model

pr_count         = count(*)
accepted_lines   = sum(ai_accepted)
total_lines      = sum(ai_additions)
acceptance_rate  = accepted_lines / total_lines

Under development

  • AI-Code Half-life - How long does AI-generated code last in production before being removed or rewritten? Broken out by repository, tool, model, and contributor/group.
  • Accepted after Review - What percentage of AI-generated code is accepted in code review? Initial ai_accepted minus the number of those lines changed or removed during the PR.
  • o11y events for AI-generated code - How does AI-generated code perform in production? Production errors linked back to the LoC, repository, tool, and model that generated the code.

Set up a call if you want the feature flag for any of the above enabled: https://calendly.com/acunniffe/meeting-with-git-ai-authors

Requesting additional metrics

Need something you don't see here? We probably have it and just need to start emitting events. Set up a call or open an issue on GitHub.