There is a quiet change happening in almost every codebase in America. Developers are still writing code, but they are no longer typing most of it. AI coding assistants — Copilot, Cursor, the built-in assistants in every major IDE — are now producing a large share of the cryptographic code that ships to production. Key generation, signature verification, TLS configuration, token signing: all of it, written at AI speed, merged at AI speed.

That's a productivity story. It's also a cryptographic inventory story. Because the patterns the AI picks up are the patterns in its training data, and the training data was overwhelmingly written before post-quantum cryptography was standardized. The default suggestion for "generate an RSA key" is still rsa.generate_private_key(public_exponent=65537, key_size=2048). The default suggestion for "sign this message" is still ECDSA over P-256. Every one of those calls is quantum-vulnerable, every one of them is being generated at unprecedented scale, and every one of them is going to show up on a federal inspector general's desk sometime in the next five years.

We are generating more quantum-vulnerable cryptographic code in one year than the entire industry produced in the decade before AI assistants existed.

This is not a future problem. It's a present exposure that compounds daily.

The Timeline Has Moved

Three things are happening at once, and together they shorten the window every organization has to get its cryptographic house in order.

Shor's algorithm is real, and the engineering is catching up. A cryptographically relevant quantum computer — the kind that breaks RSA-2048 or ECDSA over P-256 — doesn't exist yet. But federal policy doesn't plan around "doesn't exist yet." It plans around the Commercial National Security Algorithm Suite 2.0 and National Security Memorandum 10, which mandate that federal systems deprecate classical asymmetric cryptography starting 2030 and complete migration by 2035. The Department of War's own timeline is tighter than that for weapon systems.

Harvest-now, decrypt-later is not hypothetical. Every piece of RSA-protected traffic captured today can be stored and decrypted when a cryptographically relevant quantum computer arrives. Intelligence services with patience and storage budgets are making exactly that bet. For anything that needs to stay confidential past 2035 — long-lived secrets, diplomatic cables, intellectual property, medical records — the migration deadline isn't 2035. It's today.

The code exposure is growing faster than anyone is migrating it. We pulled 30 days of Python commits across a representative sample of open-source repositories. Net RSA key-generation call sites grew. Net ECDSA usage grew. Net adoption of ML-KEM or ML-DSA: measurable only in single-digit projects. The stock of vulnerable code is not shrinking. It is expanding, and AI assistants are the reason.

Why Manual Cryptographic Audit Does Not Scale

The traditional approach to cryptographic inventory is a consulting engagement. Engineers walk the codebase, fill in a spreadsheet, flag the vulnerable calls, propose replacements, and hand the spreadsheet to a compliance team. That approach worked when a large enterprise had a few hundred cryptographic call sites. It does not work when AI assistants are adding new ones every day.

The time for a human auditor to review a single call site — look at the function, understand how the returned key is used downstream, determine whether it's being used for signing or key exchange, choose the right post-quantum replacement — is on the order of five to ten minutes. A mid-sized enterprise codebase has tens of thousands of call sites. The math does not work, and the math gets worse every week the AI writes more code.
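The arithmetic is worth making concrete. Using the figures above (the midpoint of the five-to-ten-minute estimate, and an illustrative 50,000 call sites standing in for "tens of thousands"):

```python
# Back-of-envelope cost of a fully manual cryptographic audit.
# Figures from the text: 5-10 minutes per call site, tens of thousands of sites.
MINUTES_PER_SITE = 7.5   # midpoint of the 5-10 minute estimate
CALL_SITES = 50_000      # illustrative stand-in for "tens of thousands"

total_hours = CALL_SITES * MINUTES_PER_SITE / 60
person_years = total_hours / 2_000  # ~2,000 working hours per person-year

print(f"{total_hours:,.0f} auditor-hours, roughly {person_years:.0f} person-years")
```

Over three person-years of undiluted expert review time — before the AI assistants add next week's call sites.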

The only approach that survives this is machine-readable inventory, machine-checkable evidence, and a signed paper trail a regulator can verify without re-doing the work.

What Audit-Grade Evidence Actually Looks Like

We've spent the last two weeks building out exactly this pipeline, and the part of it that matters is not the scanner. Scanners are easy. What's hard — and what the PQC migration actually requires — is producing evidence that a federal auditor will accept.

When our tooling scans a codebase, the output is not a PDF. It's a bundle:

  1. A dossier of findings: every vulnerable call site, with the rule that flagged it.
  2. A claim graph tying each finding to the specific rule and citation that produced it.
  3. A manifest listing every file in the bundle alongside its cryptographic hash.

And then the manifest is signed. Not hand-waved. A detached Ed25519 signature from our signing service, bound to our trust store, which an auditor can independently verify with a standalone tool that never contacts our servers. If the signature verifies and the hashes verify, the evidence holds. If anything in the bundle has been tampered with — a single byte in the dossier, a single line in the claim graph — the verifier says so, and tells you exactly which file and what the hashes were.
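The hash-verification half of that check needs nothing beyond a standard library. The sketch below assumes a simple JSON manifest mapping file names to SHA-256 hex digests — the manifest format and file names are illustrative, not our actual bundle layout, and the detached Ed25519 check over the manifest itself would be a separate step using a signature library:

```python
import hashlib
import json
from pathlib import Path

def verify_bundle(bundle_dir: str, manifest_name: str = "manifest.json") -> list[str]:
    """Recompute the SHA-256 hash of every file listed in the manifest.

    Returns the names of files whose hashes do not match; an empty list
    means the bundle contents are intact. (Verifying the Ed25519 signature
    over the manifest itself is a separate, preceding step.)
    """
    bundle = Path(bundle_dir)
    manifest = json.loads((bundle / manifest_name).read_text())
    mismatches = []
    # Assumed manifest shape: {"files": {"dossier.json": "<hex sha256>", ...}}
    for name, expected in manifest["files"].items():
        actual = hashlib.sha256((bundle / name).read_bytes()).hexdigest()
        if actual != expected:
            mismatches.append(name)
    return mismatches
```

A single flipped byte in any listed file lands in the returned list, which is exactly the "tells you which file" behavior described above.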

That's the audit trail we think regulated industries are going to need. An LLM cannot produce this. A scanner that outputs a PDF cannot produce this. It has to be designed in.

Why AI Makes This Harder — and Also Possible

AI is what got us into this compounding exposure. AI is also the only thing that scales the response.

The exposure comes from code generation. The response comes from the same underlying capability turned inward: parse every call site in a repository, classify each against a deterministic rule set, pull the right replacement from a knowledge graph, and emit machine-readable evidence. No free-text "here's what I think is wrong." Specific rules, specific citations, reproducible output.
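In Python, the parse-and-classify step can be sketched with the standard library's ast module. The two-entry rule set here is a tiny illustrative subset, not our production rules, and the suggested replacements are directional only:

```python
import ast

# Illustrative subset of a deterministic rule set: (module alias, function)
# patterns mapped to a structured finding.
RULES = {
    ("rsa", "generate_private_key"): "quantum-vulnerable: RSA keygen; evaluate ML-KEM (FIPS 203)",
    ("ec", "generate_private_key"): "quantum-vulnerable: ECC keygen; evaluate ML-DSA (FIPS 204)",
}

def scan_source(source: str, filename: str = "<memory>") -> list[dict]:
    """Walk every call site in a module and classify it against RULES."""
    findings = []
    for node in ast.walk(ast.parse(source, filename=filename)):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
            base = node.func.value
            if isinstance(base, ast.Name) and (base.id, node.func.attr) in RULES:
                findings.append({
                    "file": filename,
                    "line": node.lineno,
                    "call": f"{base.id}.{node.func.attr}",
                    "finding": RULES[(base.id, node.func.attr)],
                })
    return findings
```

The point of the shape, not the rules: the output is a structured record with a file, a line, and a rule citation — something a manifest can hash and an auditor can diff — rather than a free-text opinion.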

The reason we can run this fast is the same reason our audit-readiness architecture works for other governance use cases. The reasoning layer is a graph, not a model. The rules are typed nodes. The output is a structured artifact, not a paragraph. When we said "AI systems built to pass an audit" three weeks ago, we were talking about this exact shape of system.

What This Means If You're Responsible for a Codebase Today

If your organization operates in a regulated environment — defense, critical infrastructure, financial services, healthcare — and your codebase uses AI-assisted development, three things are probably true:

  1. Your cryptographic inventory is larger than it was a year ago, and nobody has a current count.
  2. Your cryptographic inventory is mostly quantum-vulnerable, because that's what the AI assistants have been suggesting.
  3. The deadline to migrate it is tighter than your existing compliance calendar assumes, because CNSA 2.0 and NSM-10 are already in effect.

The right first action is not a consulting engagement. It's a scan. A machine-readable one, with a signed manifest, run against the HEAD of your repository, repeated against every pull request. That's how you catch the new exposure your own team is generating, and how you have something to hand a regulator when they ask — in machine-checkable form — what state your cryptographic inventory is in.

The Takeaway

Post-quantum cryptography stopped being a 2035 problem the day the AI coding assistants shipped at scale. The timeline hasn't moved because of quantum computers. It's moved because the rate at which we are creating new exposure went up by an order of magnitude, and the tools for finding and documenting that exposure didn't keep pace.

The organizations that will be ready are the ones that build cryptographic inventory into their engineering loop the same way they built unit tests — continuously, with signed evidence, and with a replay record an auditor can verify. The organizations that won't be ready are the ones still thinking of PQC as a project, not an architecture property.

We know which side we're on. We built the tool.

Running AI-assisted development in a regulated environment?

We're talking to design partners about the PQC scanner and signing service. If you're a program manager, CISO, or compliance lead who has to explain your cryptographic inventory to someone who reports to Congress, we're happy to run a scan against a representative repository and walk you through the manifest.

Start a Conversation