On December 3, 2025, the Cybersecurity and Infrastructure Security Agency (CISA), together with cybersecurity agencies from Australia, Canada, Germany, the Netherlands, New Zealand, and the United Kingdom, as well as the U.S. National Security Agency and the FBI, released a landmark document, Principles for the Secure Integration of Artificial Intelligence in Operational Technology, establishing a shared baseline for how AI systems should be safely and securely integrated into critical infrastructure environments.
The guidance responds to the rapid adoption of machine learning, LLMs, and AI agents across critical infrastructure, while warning that integrating AI into systems that directly control physical processes introduces new cyber, safety, and reliability risks, and therefore requires new specialized safeguards. The publication outlines a structured approach anchored in four core principles: understanding AI risks, assessing the business case, building governance frameworks, and embedding oversight and failsafe mechanisms.
This release marks one of the most comprehensive multi-national efforts to date to define what “secure AI in OT” should look like and signals a shift toward more prescriptive expectations for AI governance, vendor accountability, and operational resilience.
Political and Strategic Context
The guidance arrives at a moment when AI adoption in critical infrastructure has surged. Policymakers and operators have voiced concerns that existing AI security guidance is too IT-centric and lacks provisions for safety, timing constraints, system drift, and the high-reliability engineering required in OT.
The new principles attempt to fill that gap, emphasizing that AI in OT is fundamentally different from AI in business or IT environments and requires dedicated safeguards to prevent cascading failures, unsafe control logic, unmonitored autonomous decision-making, and the erosion of human operator awareness.
The document was also shaped by broader geopolitical dynamics. Nations are racing to adopt AI to improve industrial competitiveness, while also preparing for increasingly AI-enabled cyber operations. By releasing a jointly authored framework, the participating countries aim to set a shared baseline for secure AI practices in critical infrastructure worldwide.
Understanding Risks & Building a Secure AI Framework
Integrating AI into operational technology requires security measures beyond traditional OT protections, because AI systems themselves introduce new attack surfaces, failure modes, and dependencies. Critical infrastructure operators must safeguard AI models, training data, and deployment pipelines against manipulation, poisoning, or unauthorized access while maintaining data integrity and quality.
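To illustrate one of these safeguards, the sketch below shows how a deployment pipeline might verify model artifacts against known-good hashes before loading them, a basic integrity check against tampering. It is a minimal, hypothetical example: the manifest format, file layout, and function names are assumptions for illustration, not requirements drawn from the guidance.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model_artifacts(manifest_path: Path) -> bool:
    """Check every model artifact against the expected digest in a manifest.

    The manifest is assumed to be a JSON mapping of file name -> SHA-256 hex
    digest, distributed through a trusted channel (for example alongside a
    signed AI-SBOM). Refuse to load anything if any digest mismatches.
    """
    manifest = json.loads(manifest_path.read_text())
    for name, expected in manifest.items():
        actual = sha256_of(manifest_path.parent / name)
        if actual != expected:
            print(f"Integrity check failed for {name}")
            return False
    return True

# Hypothetical usage: refuse to deploy if verification fails.
# if not verify_model_artifacts(Path("models/manifest.json")):
#     raise SystemExit("Model integrity verification failed; aborting deployment.")
```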
They should also limit AI connectivity through push-based or one-way architectures, proper network segmentation, careful cloud use, and strict control over vendor capabilities, including AI-specific SBOMs that document model dependencies, data sources, and update mechanisms, as well as the ability to disable risky features.
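As a rough illustration of what an AI-specific SBOM record could capture, the sketch below assembles model dependencies, training data provenance, the update mechanism, and cloud-connectivity requirements into a single structure. The field names and values are illustrative assumptions and do not follow any particular SBOM schema.

```python
import json

# Hypothetical AI-SBOM record: the fields and values are illustrative
# assumptions, not a schema defined by the guidance or by any standard.
ai_sbom_entry = {
    "component": "anomaly-detection-model",
    "version": "2.3.1",
    "model_type": "gradient-boosted classifier",
    "dependencies": ["scikit-learn==1.4.2", "numpy==1.26.4"],
    "training_data_sources": [
        {"name": "historian-export-2024", "provenance": "on-premises historian"},
    ],
    "update_mechanism": "operator-approved offline update only",
    "cloud_connectivity_required": False,
}

print(json.dumps(ai_sbom_entry, indent=2))
```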
Model drift, hallucinations, data manipulation, and operator overreliance can directly compromise functional safety. As a result, AI must operate within OT’s established safety and reliability frameworks, not redefine them. Effective integration requires robust oversight mechanisms such as explainable AI, continuous monitoring, anomaly detection, and enforced human-in-the-loop decision points. Governance is the backbone of this process: it embeds AI within existing OT security frameworks through robust access controls, integrated incident response, and continuous assurance activities such as AI red-teaming to identify vulnerabilities, evaluate unsafe autonomy, and validate resilience against adversarial manipulation. These governance measures may also become regulatory obligations depending on sector requirements.
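A minimal sketch of an enforced human-in-the-loop decision point, under the assumption that AI recommendations carry a proposed setpoint and a confidence score, might look like the following. The safety limits and thresholds are placeholders that would in practice come from the plant's existing functional-safety analysis, not from the AI system itself.

```python
from dataclasses import dataclass

@dataclass
class Recommendation:
    setpoint: float      # value the AI proposes for a control parameter
    confidence: float    # model's self-reported confidence in [0, 1]

# Illustrative limits; real values come from the plant's safety analysis.
SAFE_MIN, SAFE_MAX = 40.0, 80.0
CONFIDENCE_FLOOR = 0.9

def requires_operator_approval(rec: Recommendation) -> bool:
    """Route low-confidence or out-of-envelope recommendations to a human."""
    out_of_envelope = not (SAFE_MIN <= rec.setpoint <= SAFE_MAX)
    low_confidence = rec.confidence < CONFIDENCE_FLOOR
    return out_of_envelope or low_confidence

# Hypothetical usage: only auto-apply when the recommendation stays inside
# the engineered safety envelope and confidence is high; otherwise escalate.
rec = Recommendation(setpoint=85.0, confidence=0.95)
if requires_operator_approval(rec):
    print("Escalating to operator for review")
else:
    print("Applying recommendation within safety envelope")
```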
Cloud or hybrid deployments introduce additional risks, including network exposure and data transmission vulnerabilities, which must be mitigated through encryption, strict contractual responsibilities, and careful operational controls. Continuous model testing, validation, and retraining are essential to manage drift, detect anomalies, and preserve reliability throughout the AI lifecycle. Ultimately, failsafe mechanisms, incident response plans, and strong functional safety practices ensure that AI-enabled OT systems operate safely, reliably, and predictably. The guidance reinforces that AI must adapt to the safety, reliability, and determinism of OT systems, because insecure or unreliable OT environments amplify the consequences of AI failures.
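The sketch below illustrates one possible failsafe pattern: wrapping the AI inference path so that errors or overruns of a time budget revert control to a conventional, validated control law and log the event for incident response. The class and function names are hypothetical, and the numbers are placeholders.

```python
import time

def deterministic_control(sensor_value: float) -> float:
    """Conventional, validated control law used whenever the AI path is unavailable."""
    return min(max(sensor_value * 0.5, 40.0), 80.0)

class FailsafeWrapper:
    """Wrap an AI inference function with fail-safe behaviour.

    If the AI call raises or exceeds its time budget, the wrapper reverts to
    the deterministic law and records the event for incident response.
    """

    def __init__(self, ai_fn, time_budget_s: float = 0.2):
        self.ai_fn = ai_fn
        self.time_budget_s = time_budget_s
        self.incidents = []  # hypothetical hook into AI-specific incident response

    def next_setpoint(self, sensor_value: float) -> float:
        start = time.monotonic()
        try:
            proposed = self.ai_fn(sensor_value)
        except Exception as exc:
            self.incidents.append(f"AI failure: {exc!r}")
            return deterministic_control(sensor_value)
        if time.monotonic() - start > self.time_budget_s:
            self.incidents.append("AI response exceeded time budget")
            return deterministic_control(sensor_value)
        return proposed

# Hypothetical usage: controller = FailsafeWrapper(ai_fn=local_inference_call)
```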
Expected Impacts for AI Vendors
OT vendors play a pivotal role in integrating AI into operational technology environments by embedding intelligent capabilities directly into devices, often with privileged access to operational data and control workflows. They are responsible not only for providing advanced features, such as predictive analytics or autonomous control, but also for ensuring these features are secure, transparent, and aligned with the operator’s requirements. Critical infrastructure operators should demand transparency and control from OT vendors regarding how AI is embedded in their products. This includes negotiating contractual agreements that detail AI functionalities, requesting a Software Bill of Materials (SBOM) and supply chain information for AI models, establishing vendor notifications for improper AI behavior, and enforcing explicit data usage policies to protect sensitive operational data. Vendors also need to clarify whether AI features can operate on premises without continuous cloud connectivity and define operator-controlled conditions under which AI functions can be enabled or disabled.
What This Means for Cybersecurity, AI, and Critical Infrastructure Companies
The new guidance signals a major shift: AI used in operational technology can no longer be treated as a generic IT deployment or governed solely through traditional enterprise AI risk frameworks. Systems that influence physical processes require higher scrutiny, rigorous testing, and robust oversight.
Its emphasis on AI-SBOMs, vendor transparency, human-in-the-loop oversight, continuous testing, and AI red-teaming indicates that regulators may soon expect organizations to implement comprehensive governance and risk-management structures. Supply chain accountability, explicit data usage policies, and documentation of model dependencies are likely to become regulatory requirements, ensuring both operators and vendors are responsible for AI security and functional safety. Continuous model validation, anomaly detection, AI-drift monitoring, model-integrity verification, and resilience testing may be codified as standard compliance practices, and AI will likely need to be integrated within existing OT security frameworks, including secure architectures for push-based data flows and AI-specific incident response, rather than treated as a standalone system.
For cybersecurity companies, demand will grow for:
- AI-drift monitoring (a minimal sketch follows this list)
- model-integrity validation
- OT-aware detection tools
- AI-specific incident response processes
- secure architectures for push-based data flows
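As a concrete example of AI-drift monitoring, the sketch below computes a Population Stability Index (PSI) between a reference (training-time) window and a live window of sensor readings. The drift threshold shown is a common rule of thumb, not a value prescribed by the guidance, and thresholds should be tuned per signal and validated against the process.

```python
import numpy as np

def population_stability_index(reference: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between a reference window and a live window of the same feature.

    Bin edges are derived from the reference distribution; a small epsilon
    avoids division by zero in empty bins.
    """
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_frac = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_frac = np.histogram(current, bins=edges)[0] / len(current)
    eps = 1e-6
    ref_frac = np.clip(ref_frac, eps, None)
    cur_frac = np.clip(cur_frac, eps, None)
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

# Hypothetical usage: compare the training-time distribution of a sensor
# feature with recent live readings (simulated here with a shifted process).
rng = np.random.default_rng(0)
training_window = rng.normal(50.0, 5.0, 10_000)
live_window = rng.normal(55.0, 5.0, 5_000)
if population_stability_index(training_window, live_window) > 0.2:
    print("Drift detected: schedule model revalidation / retraining review")
```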
For AI developers, the message is clear: secure-by-design and transparency-by-design are becoming mandatory expectations, especially regarding embedded AI in vendor devices and the handling of sensitive OT data.
For critical infrastructure organizations, the guidance provides a practical, structured roadmap for integrating AI without compromising safety, reliability, or regulatory compliance. As AI adoption accelerates, these principles are likely to shape procurement requirements, regulatory expectations, investment strategies, and cross-border alignment on AI safety in critical infrastructure for years to come, reinforcing the need for early engagement between operators, vendors, and policymakers.


