We were 4 months into our SOC 2 audit when the auditor asked a question that stopped us cold: "How do you prove that your audit logs haven't been modified after the fact?"
We looked at each other. Our audit logs were in a PostgreSQL table. Anyone with database access could UPDATE or DELETE rows. We had no mechanism to detect if someone had changed a log entry.
"We trust our engineers" was not the answer the auditor was looking for.
Why Immutability Matters
The whole value of an audit log depends on one thing: trust. If someone can modify the logs, then the logs prove nothing. An employee could delete evidence of unauthorized access. A compromised admin account could alter the record. An insider threat could cover their tracks.
According to NIST SP 800-92, audit log integrity is a fundamental requirement. The guidance specifically states that logs should be protected against unauthorized modification and that integrity checking mechanisms should be employed.
The auditor wasn't being difficult. They were checking for a basic security control that we'd simply never thought about.
The Hash Chain Approach
The standard solution for proving log integrity is a hash chain, sometimes called a blockchain-lite pattern (without the cryptocurrency nonsense). Each log entry includes a hash of its own content plus the hash of the previous entry. This creates a chain where modifying any entry invalidates every subsequent hash.
import { createHash } from 'crypto';

// `db` in the examples below is assumed to be a Prisma-style database client.

interface AuditEventWithIntegrity {
  id: string;
  tenantId: string; // the chain is maintained per tenant (see verification below)
  eventType: string;
  actor: { id: string; email: string };
  target: { type: string; id: string };
  changes: Record<string, unknown>[];
  timestamp: Date;
  // Integrity fields
  contentHash: string;  // SHA-256 over this event's content
  previousHash: string; // chainHash of the preceding event, or 'GENESIS'
  chainHash: string;    // SHA-256 over contentHash + previousHash
}

type AuditEventInput = Omit<
  AuditEventWithIntegrity,
  'contentHash' | 'previousHash' | 'chainHash'
>;

function computeContentHash(event: AuditEventInput): string {
  // Build the object literal explicitly so key order is deterministic.
  // Nested values (actor, target, changes) must round-trip with stable key
  // order too; use a canonical-JSON library if that isn't guaranteed.
  const content = JSON.stringify({
    id: event.id,
    tenantId: event.tenantId,
    eventType: event.eventType,
    actor: event.actor,
    target: event.target,
    changes: event.changes,
    timestamp: event.timestamp.toISOString(),
  });
  return createHash('sha256').update(content).digest('hex');
}

function computeChainHash(contentHash: string, previousHash: string): string {
  return createHash('sha256')
    .update(contentHash + previousHash)
    .digest('hex');
}

async function appendAuditEvent(event: AuditEventInput): Promise<AuditEventWithIntegrity> {
  // Get the previous event's chain hash for this tenant. Concurrent appends
  // can race here; serialize writes per tenant (a transaction or advisory
  // lock) in production.
  const lastEvent = await db.auditEvents.findFirst({
    where: { tenantId: event.tenantId },
    orderBy: { timestamp: 'desc' },
  });
  const previousHash = lastEvent?.chainHash ?? 'GENESIS';

  const contentHash = computeContentHash(event);
  const chainHash = computeChainHash(contentHash, previousHash);

  const fullEvent: AuditEventWithIntegrity = {
    ...event,
    contentHash,
    previousHash,
    chainHash,
  };
  await db.auditEvents.create({ data: fullEvent });
  return fullEvent;
}
Now if someone modifies a log entry, recalculating its content hash will produce a different value. And since the next entry's chain hash depends on it, the entire chain after the modified entry becomes invalid.
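For example, appending an event might look like this (the event shape and IDs here are hypothetical):

import { randomUUID } from 'crypto';

// Hypothetical call site; field values are illustrative.
const appended = await appendAuditEvent({
  id: randomUUID(),
  tenantId: 'tenant_123',
  eventType: 'user.role_changed',
  actor: { id: 'usr_1', email: 'admin@example.com' },
  target: { type: 'user', id: 'usr_2' },
  changes: [{ field: 'role', from: 'member', to: 'admin' }],
  timestamp: new Date(),
});
// appended.chainHash now commits to this event and every event before it.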
Verifying the Chain
Having a hash chain is only useful if you actually verify it. This means periodically walking the chain and checking that every hash is consistent.
interface VerificationResult {
  valid: boolean;
  totalEvents: number;
  checkedEvents: number;
  firstInvalidEvent: string | null;
  errors: string[];
}

async function verifyAuditChain(
  tenantId: string,
  startDate?: Date,
  endDate?: Date
): Promise<VerificationResult> {
  // Ordering by timestamp assumes timestamps are unique per tenant;
  // a monotonic sequence column is a safer sort key in production.
  const events = await db.auditEvents.findMany({
    where: {
      tenantId,
      timestamp: { gte: startDate, lte: endDate },
    },
    orderBy: { timestamp: 'asc' },
  });

  // When verifying a sub-range, the first event links to the event just
  // before the window, not to GENESIS.
  const predecessor = startDate
    ? await db.auditEvents.findFirst({
        where: { tenantId, timestamp: { lt: startDate } },
        orderBy: { timestamp: 'desc' },
      })
    : null;
  const firstExpectedPrevious = predecessor?.chainHash ?? 'GENESIS';

  const result: VerificationResult = {
    valid: true,
    totalEvents: events.length,
    checkedEvents: 0,
    firstInvalidEvent: null,
    errors: [],
  };

  for (let i = 0; i < events.length; i++) {
    const event = events[i];
    result.checkedEvents++;

    // 1. Recompute the content hash; a mismatch means the row was edited.
    const expectedContentHash = computeContentHash({
      id: event.id,
      tenantId: event.tenantId,
      eventType: event.eventType,
      actor: event.actor,
      target: event.target,
      changes: event.changes,
      timestamp: event.timestamp,
    });
    if (expectedContentHash !== event.contentHash) {
      result.valid = false;
      result.firstInvalidEvent = event.id;
      result.errors.push(
        `Event ${event.id}: content hash mismatch (content was modified)`
      );
      break;
    }

    // 2. Check the link to the previous event; a mismatch means an event
    // was inserted or removed.
    const expectedPreviousHash =
      i === 0 ? firstExpectedPrevious : events[i - 1].chainHash;
    if (event.previousHash !== expectedPreviousHash) {
      result.valid = false;
      result.firstInvalidEvent = event.id;
      result.errors.push(
        `Event ${event.id}: chain link mismatch (event was inserted or removed)`
      );
      break;
    }

    // 3. Recompute the chain hash from its two inputs.
    const expectedChainHash = computeChainHash(event.contentHash, event.previousHash);
    if (expectedChainHash !== event.chainHash) {
      result.valid = false;
      result.firstInvalidEvent = event.id;
      result.errors.push(`Event ${event.id}: chain hash invalid`);
      break;
    }
  }

  return result;
}
Run this daily or weekly. If it ever reports an invalid chain, something has been tampered with and you need to investigate immediately.
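A minimal scheduler sketch, assuming node-cron, a db.tenants table, and a hypothetical alertOncall helper:

import cron from 'node-cron';

// Verify every tenant's chain nightly at 03:00; page someone on failure.
cron.schedule('0 3 * * *', async () => {
  const tenants = await db.tenants.findMany({ select: { id: true } });
  for (const tenant of tenants) {
    const result = await verifyAuditChain(tenant.id);
    if (!result.valid) {
      await alertOncall(
        `Audit chain INVALID for tenant ${tenant.id}: ${result.errors.join('; ')}`
      );
    }
  }
});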
Beyond Hash Chains: External Attestation
A hash chain proves that logs haven't been modified since the chain was created. But what if someone replaces the ENTIRE chain? If an attacker has full database access, they could recalculate all hashes from scratch with modified data.
The solution is external attestation. Periodically publish the current chain hash to an external, immutable store:
// Periodic chain state attestation
async function attestChainState(tenantId: string) {
  const lastEvent = await db.auditEvents.findFirst({
    where: { tenantId },
    orderBy: { timestamp: 'desc' },
  });
  if (!lastEvent) return;

  const attestation = {
    tenantId,
    timestamp: new Date().toISOString(),
    lastEventId: lastEvent.id,
    lastChainHash: lastEvent.chainHash,
    eventCount: await db.auditEvents.count({ where: { tenantId } }),
  };

  // Store in an external immutable ledger (externalLedger is a placeholder client).
  // Options: AWS QLDB, Azure Immutable Blob Storage, S3 Object Lock,
  // or even just a signed email to a compliance address.
  await externalLedger.append(attestation);

  // Also useful: publish to a transparency log (also a placeholder client),
  // similar in concept to Certificate Transparency.
  await transparencyLog.publish(attestation);
}
AWS QLDB (Quantum Ledger Database) was designed for exactly this use case, though Amazon has deprecated it (end of support in July 2025). Azure Immutable Blob Storage, Google Cloud Storage's Bucket Lock, and S3 Object Lock are alternatives.
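As a concrete sketch of the S3 Object Lock route (the bucket name and retention period here are assumptions, and the bucket must be created with Object Lock enabled):

import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({});

// In COMPLIANCE mode, the object cannot be overwritten or deleted by anyone,
// including the root account, until the retention date passes.
async function writeAttestationToS3(attestation: { tenantId: string; timestamp: string }) {
  const retainUntil = new Date();
  retainUntil.setFullYear(retainUntil.getFullYear() + 7); // hypothetical 7-year retention

  await s3.send(new PutObjectCommand({
    Bucket: 'acme-audit-attestations', // hypothetical bucket, Object Lock enabled at creation
    Key: `attestations/${attestation.tenantId}/${attestation.timestamp}.json`,
    Body: JSON.stringify(attestation),
    ContentType: 'application/json',
    ObjectLockMode: 'COMPLIANCE',
    ObjectLockRetainUntilDate: retainUntil,
  }));
}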
The Simple Version
If all of this feels like overkill, here's the minimum viable integrity setup that most auditors will accept:
- Append-only table with no UPDATE/DELETE permissions (remove those from the application's database role)
- Content hashes on each event (proves individual events weren't modified)
- Daily hash chain verification (automated, alerts on failure)
- Weekly database backups that can be compared against live data (see the spot-check query below)
-- Remove UPDATE and DELETE permissions on the audit table
REVOKE UPDATE, DELETE ON audit_events FROM app_user;
GRANT INSERT, SELECT ON audit_events TO app_user;
-- Only a separate admin/maintenance role retains full permissions,
-- and that role should NOT be used by the application
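For the backup comparison in the last bullet, one cheap spot check is an order-sensitive digest over the chain hashes; run the same query against the live table and the restored backup, and compare the outputs:

-- Matching digests mean the rows (and their order) are identical.
SELECT count(*) AS event_count,
       md5(string_agg(chain_hash, '' ORDER BY timestamp)) AS chain_digest
FROM audit_events;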
This won't stop a determined attacker with root database access. But it prevents accidental modification, casual tampering, and application-level bugs from corrupting your audit trail. And it's usually enough to satisfy a SOC 2 auditor.
What We Did
After the uncomfortable auditor conversation, we implemented the hash chain approach with daily verification and weekly external attestation. It took about two weeks of engineering time.
The auditor was satisfied. More importantly, we now have actual proof that our audit logs haven't been tampered with, not just trust.
The Broader Point
Most developers think about audit logging as a data storage problem. Capture events, store them, query them. But the integrity question is what separates a debug log from a real audit trail.
If you can't prove your logs are unmodified, they're just text files with timestamps. And any auditor, attorney, or regulator worth their salt will point that out.
Build integrity in from the start. It's much easier than retrofitting it after an auditor asks the question you don't have an answer to.