Last week, Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) released the eighth edition of its AI Index Report, capturing a year of unprecedented growth, opportunity, and emerging risks in artificial intelligence. AI adoption has surged across industries, with over three-quarters of companies deploying AI systems and generative AI use more than doubling in enterprise settings. At the same time, 2024 saw a significant increase in AI-related security incidents, alongside growing concerns about misinformation, falling costs of AI deployment, and the accelerating spread of AI into operational technologies.
With global governance frameworks expanding, the report illustrates the mounting challenges businesses must tackle to stay ahead – and the growing demand for enterprise partners that deliver AI-native security, infrastructure risk management, and compliance monitoring. Here are some key takeaways from the report:
1. AI Adoption is Accelerating – and So Are AI-Powered Threats
AI adoption has soared across sectors, reaching new heights in 2024:
- 78% of companies now use AI, up from 55% in 2023.
- Generative AI use has more than doubled, with 71% of enterprises now using it for at least one business function.
- Alongside this adoption comes a wave of AI-related incidents, which jumped to 233 in 2024 – a sharp 56.4% increase over 2023.
As AI becomes further integrated into business operations, risk controls must adapt in step. These findings highlight the need for better AI security posture management, end-to-end visibility, and risk scoring for AI models and supply chains.
2. AI Governance is Expanding – But Unevenly
Governance frameworks advanced globally in 2024 through the OECD, EU, UN, and African Union, but adoption and risk mitigation remain inconsistent:
- Legislative mentions of AI rose 21.3% across 75 countries.
- Government investment in AI initiatives continues to grow: Canada ($2.4 billion), China ($47.5 billion), France (€109 billion), India ($1.25 billion), and Saudi Arabia ($100 billion).
- The network of AI Safety Institutes expanded, with Japan, France, Germany, Italy, Singapore, South Korea, Australia, Canada, and the European Union pledging new institutes.
- U.S. states expanded deepfake regulation, with 24 states passing new measures in 2024.
While responsible AI (RAI) risks – accuracy, compliance, and cybersecurity – are widely recognized, active mitigation lags behind: intellectual property infringement (cited as relevant by 57% of organizations, mitigated by 38%) and organizational reputation (45% relevant, 29% mitigated) represent some of the largest gaps. To help bridge these divides, AI governance automation, encrypted collaboration environments, and policy enforcement tools are increasingly necessary to keep organizations protected.
3. AI Pipelines and Infrastructure Are an Emerging Attack Surface
AI infrastructure is rapidly evolving:
- Inference costs have fallen 280-fold since 2022, expanding both opportunity and exposure.
- AI training compute now doubles every 5 months.
- Industrial AI use is also expanding, particularly in operational technology (OT), IoT, and robotics environments.
To address these challenges, providers that can monitor inference infrastructure and secure AI pipelines and APIs are urgently needed. The AI supply chain also presents a range of attractive targets, demanding foundation model transparency, third-party AI monitoring, and dataset lineage management to rapidly detect and respond to adversarial attacks.
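As a concrete illustration of what dataset lineage management can look like, the sketch below records content hashes of training data and model artifacts in an append-only manifest so later tampering or drift can be detected by re-hashing. It is a minimal sketch using only the Python standard library; the file paths and manifest format are hypothetical, not any specific product's schema.

```python
# Hypothetical sketch: recording dataset and model lineage via content hashes.
# Paths and the JSON-lines manifest format are illustrative only.
import hashlib
import json
import time
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file and return its SHA-256 digest."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def record_lineage(artifacts: list[Path], manifest: Path) -> None:
    """Append current hashes of data/model artifacts to a lineage manifest,
    so tampering or silent substitution is detectable by re-hashing later."""
    entry = {
        "timestamp": time.time(),
        "artifacts": {str(p): sha256_of(p) for p in artifacts},
    }
    with manifest.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Hypothetical usage:
# record_lineage([Path("data/train.parquet"), Path("models/weights.bin")],
#                Path("lineage_manifest.jsonl"))
```

In practice, such manifests are most useful when they are generated automatically at each pipeline stage and checked before a model or dataset is promoted to production.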
4. AI Misinformation, Jailbreaking, and Trust Challenges
AI products have become widely integrated into everyday life, from medical devices to autonomous vehicles. At the same time, AI-generated misinformation rose in 2024, fueling foreign influence operations, impacting global elections, and eroding public trust.
Other areas of high vulnerability include:
- Infectious jailbreaks of multimodal large language model (MLLM) systems, which still lack practical mitigation measures.
- Language model (LM) agents, which show promise in automating complex tool interactions but remain unreliable in high-stakes applications (failing 23.9% of critical scenarios).
- The rise of AI-generated deepfake images, which cause reputational harm, financial extortion, and harassment.
To combat these threats, AI-powered security tools are needed at scale – detection tools, bias monitoring, and real-time media forensics with authenticity scoring to flag manipulated content before it spreads. Threat intelligence tooling should also be strengthened, with systems that apply advanced linguistic, behavioral, and metadata analysis across languages.
5. Post-Quantum Security: AI’s Next Blind Spot
Lastly, while the AI Index 2025 highlights growing risks around model security, infrastructure vulnerabilities, and AI governance gaps, it leaves one emerging area largely unaddressed: cryptographic resilience in AI pipelines.
Why this matters:
- AI systems rely on vast, interconnected supply chains, with APIs carrying transactions and proprietary data flowing to edge devices and third parties.
- These systems depend on classical encryption, which could be compromised retroactively ("harvest now, decrypt later") once quantum computing reaches viability – a reality that may arrive sooner than we think.
- Sensitive AI-generated content, proprietary models, and inference results stored today will be vulnerable to decryption in a post-quantum future – making AI infrastructure an overlooked blind spot in post-quantum security preparedness.
Businesses should start investing today in post-quantum encryption for AI pipelines and APIs, as well as in cryptographic provenance and over-the-air (OTA) integrity protections.
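To make this concrete, the sketch below shows one way to establish a post-quantum shared key between two ends of an AI pipeline – say, an edge device and an inference API gateway – using a key encapsulation mechanism (KEM). It is a minimal sketch, assuming the open-source liboqs Python bindings (the `oqs` package) and the "ML-KEM-768" mechanism name; the package, algorithm names, and deployment details are assumptions that vary by version, so treat it as illustrative rather than a production design.

```python
# Hypothetical sketch: post-quantum key establishment for an AI pipeline link.
# Assumes the liboqs Python bindings ("oqs") are installed; algorithm names such
# as "ML-KEM-768" (or "Kyber768" in older releases) depend on the liboqs build.
from hashlib import sha256

import oqs

KEM_ALG = "ML-KEM-768"  # assumption: a KEM supported by the installed liboqs

# Receiver (e.g., the inference API gateway) generates a KEM keypair and
# publishes the public key to pipeline clients.
receiver = oqs.KeyEncapsulation(KEM_ALG)
public_key = receiver.generate_keypair()

# Sender (e.g., an edge device shipping model inputs) encapsulates a fresh
# shared secret against the receiver's public key.
sender = oqs.KeyEncapsulation(KEM_ALG)
ciphertext, shared_secret_sender = sender.encap_secret(public_key)

# Receiver decapsulates the same secret; both sides can now derive a symmetric
# key to encrypt model inputs, outputs, and provenance metadata in transit.
shared_secret_receiver = receiver.decap_secret(ciphertext)
assert shared_secret_sender == shared_secret_receiver
symmetric_key = sha256(shared_secret_receiver).digest()
```

During the transition period, such a KEM is typically combined with a classical key exchange in hybrid mode, and paired with signed artifact manifests for cryptographic provenance.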
At OpenPolicy, the insights from the 2025 AI Index Report directly inform our operations and support strategies for innovators and startups navigating AI security, governance, and policy landscapes. The significant rise in AI adoption and related incidents highlighted by the index underscores the urgency of enhancing AI-native security measures and comprehensive risk management frameworks. Given the uneven global expansion of AI governance outlined in the report, we are intensifying our efforts in policy advocacy, intelligence gathering, and strategic engagement with influential bodies. Furthermore, the emerging threats identified by the report – such as AI misinformation, jailbreak vulnerabilities, and post-quantum encryption blind spots – guide our targeted advisory services, ensuring startups remain at the forefront of compliance, security innovation, and responsible AI practices. OpenPolicy leverages these insights to help organizations proactively address vulnerabilities, shape effective governance strategies, and foster resilient, trusted AI ecosystems.