The most dangerous threat to your organisation’s AI systems may not exist yet — but the data it will steal is being collected right now. That is not a hypothetical scenario from a science fiction screenplay. It is a documented strategy already being executed by sophisticated threat actors who are banking on quantum computing’s arrival to unlock encrypted data they are harvesting today. For any company building or deploying AI systems, this changes the calculus of security in ways that cannot be deferred to a future IT roadmap.
The Hidden Vulnerability Inside Every AI System
Most conversations about AI security focus on the moment of use — the inference stage, where someone tries to manipulate a model through clever prompting or extract proprietary information from its outputs. That risk is real, but it is also visible. The less-discussed vulnerability lives deeper in the pipeline: in the training data, the model weights, and the cryptographic infrastructure holding everything together.
A report from hardware security firm Utimaco makes a pointed argument — organisations must protect AI systems across the entire development lifecycle, not just at the edges where users interact with them. The data used to train AI models often contains the most sensitive information a company holds. If that data is exposed or tampered with, the model built on it is compromised at its foundation.
What “Harvest Now, Decrypt Later” Actually Means
Here is the attack strategy that should be keeping security architects awake at night. Organised threat groups are currently intercepting and storing encrypted data — financial records, proprietary model training datasets, intellectual property — with no immediate ability to read it. They are waiting. The bet they are making is that quantum computers capable of breaking today's standard public key encryption will become accessible within a decade.
Think of it like a thief carrying off a locked safe today, then waiting to acquire the tools to crack it open. The safe looks secure in the present. But it is already sitting in the thief's warehouse. For any dataset with long-term sensitivity — which describes most serious AI training sets — this is not a future problem. It is a present one, because the data being locked away today is what will be unlocked later.
Why Current Encryption Has an Expiry Date
The encryption methods that secure the vast majority of digital infrastructure today — public key cryptography, chiefly RSA and elliptic-curve schemes — rely on mathematical problems, such as integer factorisation and discrete logarithms, that classical computers cannot solve in any reasonable timeframe. A sufficiently powerful quantum computer running Shor's algorithm could solve those same problems efficiently. The Utimaco report estimates this window of vulnerability could open within the next ten years, though the exact timeline is genuinely uncertain.
What is certain is that migrating away from current cryptographic standards is not a switch you flip overnight. Changes will cascade across protocols, key management systems, and the way different software components communicate with each other. That kind of infrastructure migration typically takes years to execute without breaking things. Organisations that start planning now are already late by some estimates. Those who wait for quantum computers to arrive before acting will have no viable response.
The Case for Crypto-Agility and Hybrid Protection
The approach the report advocates is called crypto-agility — designing systems so that cryptographic algorithms can be swapped out without rebuilding the entire underlying architecture. It is the digital equivalent of designing a building so that the electrical wiring can be replaced without tearing down the walls. This principle, combined with what the industry calls hybrid cryptography, pairs today's proven encryption methods with post-quantum algorithms standardised by the US National Institute of Standards and Technology (NIST).
The practical benefit is resilience through transition. You do not need to abandon what works today to start preparing for what will be required tomorrow. Both layers of protection operate simultaneously, buying time while the migration to fully quantum-resistant standards proceeds across an organisation’s systems.
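The hybrid principle can be sketched in a few lines. This is a simplified illustration, not a production construction: the key-exchange outputs are random stand-ins rather than real X25519 or ML-KEM values, and the derivation step is a reduced HKDF-style combine. The point it demonstrates is the resilience property described above: the session key stays safe as long as either input secret remains unbroken.

```python
import hashlib
import hmac
import os

def hybrid_session_key(classical_secret: bytes, pq_secret: bytes,
                       context: bytes = b"hybrid-kex-v1") -> bytes:
    """Derive one session key from two independent shared secrets.

    An attacker must break BOTH key exchanges to recover the output:
    if either input secret stays confidential, so does the derived key.
    This mirrors hybrid constructions pairing classical ECDH with a
    post-quantum KEM such as ML-KEM (NIST FIPS 203).
    """
    # Simplified HKDF-style extract-then-expand using HMAC-SHA256.
    ikm = classical_secret + pq_secret  # concatenate both shared secrets
    prk = hmac.new(b"extract-salt", ikm, hashlib.sha256).digest()
    return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()

# In practice these would come from X25519 and ML-KEM; random stand-ins here.
ecdh_secret = os.urandom(32)
mlkem_secret = os.urandom(32)
session_key = hybrid_session_key(ecdh_secret, mlkem_secret)
print(len(session_key))  # 32-byte key, usable by a symmetric cipher
```

Because both secrets feed a single derivation, dropping or upgrading the post-quantum component later only changes one input, which is crypto-agility in miniature.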
Hardware Trust: The Physical Layer That Software Cannot Replace
Cryptographic algorithms alone cannot solve every dimension of this problem. The report makes a compelling case for hardware-based security modules that physically isolate cryptographic keys and sensitive operations from the rest of a computing environment. This matters because software-based protections can be circumvented by sufficiently privileged actors — including, in some scenarios, system administrators with legitimate access credentials.
Hardware security modules create what security professionals call a chain of trust. Model integrity can be verified before deployment. Inference workloads — the computations happening when the AI is actually in use — can be processed inside isolated enclaves where even infrastructure administrators cannot observe the data. External attestation processes verify that an enclave is in a trusted state before cryptographic keys are released to it. These are not exotic or experimental techniques. They are established practices being applied to a new domain: the AI development pipeline specifically.
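To make the model-verification step concrete, here is a minimal sketch of the gate described above. The assumption to note: in a real deployment the signing key is sealed inside the hardware security module and only sign/verify operations are exposed; the raw `HSM_KEY` bytes below are a stand-in so the example runs on its own.

```python
import hashlib
import hmac

# Stand-in for a key that, in production, never leaves the HSM.
HSM_KEY = b"stand-in-for-key-sealed-inside-the-hsm"

def sign_model(model_bytes: bytes) -> bytes:
    """Produce a tag over the model's digest (the HSM's 'sign' operation)."""
    digest = hashlib.sha256(model_bytes).digest()
    return hmac.new(HSM_KEY, digest, hashlib.sha256).digest()

def verify_before_deploy(model_bytes: bytes, tag: bytes) -> bool:
    """Gate deployment: refuse to load weights whose tag does not verify."""
    expected = sign_model(model_bytes)
    return hmac.compare_digest(expected, tag)

weights = b"\x00\x01\x02\x03"  # stand-in for serialised model weights
tag = sign_model(weights)
assert verify_before_deploy(weights, tag)             # untouched model loads
assert not verify_before_deploy(weights + b"x", tag)  # tampered model refused
```

The deployment pipeline simply refuses to load any weights that fail this check, so a model altered anywhere between training and production never runs.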
AI Security Across the Full Lifecycle
| AI Lifecycle Stage | Key Security Concern | Recommended Protection |
|---|---|---|
| Data Ingestion | Unauthorised access to training datasets | Hardware-based encryption, access controls |
| Model Training | Data tampering, poisoning attacks | Isolated enclaves, tamper-resistant logging |
| Model Deployment | Model integrity and authenticity | Hardware-signed model verification |
| Inference in Production | Prompt injection, data extraction | Isolated workloads, encrypted processing |
| Long-term Data Storage | Harvest-now-decrypt-later attacks | Post-quantum cryptography migration |
| Compliance Reporting | Audit trail integrity | Tamper-resistant hardware logs (EU AI Act) |
What the EU AI Act Adds to the Urgency
Regulatory pressure is compressing the timeline even for organisations that might otherwise defer these decisions. The EU AI Act includes provisions that demand demonstrable accountability over how AI systems handle data — and tamper-resistant audit logs generated by hardware security modules are one of the cleaner ways to satisfy those requirements. This means the investment in hardware-based trust infrastructure serves a dual purpose: near-term regulatory compliance and long-term quantum resilience.
That dual utility matters for how organisations justify the cost internally. Security investments that solve one problem are easier to deprioritise when budgets tighten. Security investments that solve two distinct problems at once — compliance today, quantum resistance tomorrow — become significantly harder to defer.
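One common way to make an audit trail tamper-evident — a sketch of the general technique, not necessarily the specific mechanism the report or the EU AI Act mandates — is a hash chain: each entry commits to the hash of its predecessor, so editing any past record invalidates every later link. In production, the chain head would itself be signed inside tamper-resistant hardware.

```python
import hashlib
import json

def _entry_hash(event: str, prev_hash: str) -> str:
    """Deterministic hash over an entry's content and its predecessor."""
    payload = json.dumps({"event": event, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def append_entry(log: list, event: str) -> None:
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    log.append({"event": event, "prev": prev_hash,
                "hash": _entry_hash(event, prev_hash)})

def verify_chain(log: list) -> bool:
    """Recompute every link; an edited entry breaks all later hashes."""
    prev_hash = "0" * 64
    for entry in log:
        if entry["prev"] != prev_hash or \
           entry["hash"] != _entry_hash(entry["event"], prev_hash):
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "training data ingested")
append_entry(log, "model v1.3 deployed")
assert verify_chain(log)
log[0]["event"] = "record silently edited"  # tampering...
assert not verify_chain(log)                # ...is detected
```

The same structure serves both purposes discussed above: auditors can independently re-verify the chain for compliance, and the organisation gains evidence that its AI pipeline records have not been rewritten after the fact.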
What the Next 12 to 24 Months Will Reveal
Over the next two years, I expect two things to happen in parallel. First, NIST’s post-quantum cryptography standards will move from recommendations to mandates in regulated industries, particularly finance, healthcare, and government contracting. Second, the organisations that have been quietly building quantum-resilient infrastructure will gain a measurable competitive advantage in enterprise AI procurement — because their clients will demand it.
If you are thinking about the security posture of any AI system your organisation is building or procuring right now, the question worth asking is not whether quantum computing will eventually threaten your encryption. It almost certainly will. The question is whether the data you are protecting today will still matter when that day arrives — and for most training datasets, the answer is yes. Starting this conversation inside your organisation now, rather than when the threat becomes visible, may be the most consequential infrastructure decision of the decade. I would genuinely encourage every team building on AI to put this on the agenda before the next quarterly planning cycle closes.