“The AI did it.”

This sentence is increasingly common in post-incident reports. It’s also increasingly problematic.

When an AI agent takes an action — deletes a file, sends an email, executes a trade — who is accountable? Can you reconstruct what happened? Can you prove it?

In most current systems: no.

The Missing Primitive

Software systems have had audit trails for decades. Database transactions are logged. File system access is recorded. Network requests are inspectable.

AI agent actions are often not.

Not because it’s technically hard. Because it wasn’t designed in.

Footprint is my implementation of what I believe every AI-native system needs: a local-first MCP server that automatically records every AI conversation and action into a structured, encrypted, tamper-evident log.
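Footprint's actual schema isn't shown here, but to make "structured" concrete, here is a minimal sketch of what one audit record for an agent action might contain. All field names are illustrative assumptions, not Footprint's real format:

```python
from dataclasses import dataclass, asdict
import json
import time

@dataclass
class AuditRecord:
    # Illustrative fields only — not Footprint's real schema.
    timestamp: float   # when the action occurred
    actor: str         # which agent or model acted
    action: str        # what it did, e.g. "file.delete"
    detail: dict       # structured parameters of the action
    prev_hash: str     # link to the previous record (tamper evidence)

record = AuditRecord(
    timestamp=time.time(),
    actor="agent-1",
    action="file.delete",
    detail={"path": "/tmp/report.txt"},
    prev_hash="0" * 64,  # genesis value for the first record
)
print(json.dumps(asdict(record), indent=2))
```

The point of a fixed schema like this is that every action, regardless of which agent took it, lands in the log in a form that can be queried and verified later.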

This isn’t a debugging tool. It’s infrastructure.

Why Local-First

The instinct is to send audit data to a cloud service. It’s convenient, searchable, always available.

The problem: who audits the auditor?

If your AI compliance record lives on a vendor’s servers, the vendor controls the narrative. They can also lose the data, get breached, or simply shut down.

Local-first audit means:

- The record lives on hardware you control, not a vendor's servers.
- No third party can alter, withhold, or lose it.
- It survives a vendor breach or shutdown.

This is not an edge case. It's the baseline expectation for any AI-native system operating in the real world.

What “Tamper-Evident” Actually Means

Tamper-evident doesn’t mean tamper-proof. It means: if the record is altered, it’s detectable.

Using cryptographic hashing across a chained audit log, any post-hoc modification — even a single character change — produces a detectable inconsistency. The integrity of the record can be verified without trusting the storage system.
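The chaining mechanism described above can be sketched in a few lines. This is a simplified illustration, not Footprint's implementation: each record stores a hash of its content combined with the previous record's hash, so altering any earlier entry invalidates every hash after it.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder hash for the first link

def entry_hash(prev_hash: str, entry: dict) -> str:
    """Hash the previous link together with a canonical serialization of the entry."""
    payload = prev_hash + json.dumps(entry, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append(log: list, entry: dict) -> None:
    """Append an entry, chaining it to the hash of the previous record."""
    prev = log[-1]["hash"] if log else GENESIS
    log.append({"entry": entry, "hash": entry_hash(prev, entry)})

def verify(log: list) -> bool:
    """Recompute every link; any post-hoc modification breaks the chain."""
    prev = GENESIS
    for record in log:
        if entry_hash(prev, record["entry"]) != record["hash"]:
            return False
        prev = record["hash"]
    return True
```

Change a single character in any logged entry and `verify` fails, without needing to trust the storage that held the log — which is the whole point of tamper evidence.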

This is the same mechanism behind certificate transparency logs in web security. It’s a solved problem in infrastructure. It hasn’t been applied consistently to AI.

The Accountability Gap

The real motivation for tamper-evident AI isn’t compliance. It’s accountability.

AI agents operating in high-stakes environments — healthcare scheduling, legal document review, financial execution — need to be accountable in the same way human professionals are.

A doctor keeps records. A lawyer keeps records. An engineer keeps records.

An AI agent that keeps no records — or records that it controls — is not operating at the same accountability level as the humans it replaces.

This gap will be closed, either by industry practice or by regulation. Building audit infrastructure into AI-native systems from the start is not defensive design. It's the correct default.