AI-driven attacks rise as AI security concerns top the agendas of Chief Information Security Officers (CISOs)
AI Security Challenges Dominate CISOs' Concerns in 2025
In a rapidly evolving digital landscape, AI security risks have become the top priority for Chief Information Security Officers (CISOs) worldwide, according to a recent report by cybersecurity firm Team8. The report, based on interviews with over 110 security leaders from major enterprises, reveals that AI-related threats now outrank traditional concerns such as vulnerability management, data loss prevention, and third-party risk.
The main challenges for CISOs include securing AI agents, governing AI usage among employees, detecting sophisticated AI-driven threats, enabling enterprise-wide AI adoption while mitigating risks, and integrating AI capabilities defensively.
Securing AI agents is a significant concern for 37% of the security leaders interviewed. These software tools perform tasks on users' behalf but can be tricked or manipulated by attackers, and their autonomous decision-making creates unique vulnerabilities.
Governing AI usage among employees in a way that balances security and innovation is another major concern. Many CISOs struggle with the trade-off between restricting AI tools to reduce risk and allowing their use without sufficient controls, which creates unmanaged risk.
Detecting sophisticated AI-driven threats that mimic human behavior and evade traditional security metrics requires advanced analytics such as "time to exploitation" and "velocity indicators."
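To make these metrics concrete, here is a minimal sketch of how "time to exploitation" and a velocity indicator might be computed from event timestamps. The function names, inputs, and thresholds are illustrative assumptions, not definitions from the report:

```python
from datetime import datetime, timedelta

def time_to_exploitation(disclosed_at: datetime, first_exploit_at: datetime) -> timedelta:
    """Elapsed time between vulnerability disclosure and first observed exploit."""
    return first_exploit_at - disclosed_at

def velocity_indicator(event_times: list[datetime], window: timedelta) -> float:
    """Events per hour within a trailing window ending at the latest event."""
    if not event_times:
        return 0.0
    end = max(event_times)
    recent = [t for t in event_times if end - t <= window]
    return len(recent) / (window.total_seconds() / 3600)

# Hypothetical incident: exploit observed 4.5 hours after disclosure
disclosed = datetime(2025, 3, 1, 9, 0)
exploited = datetime(2025, 3, 1, 13, 30)
print(time_to_exploitation(disclosed, exploited))  # 4:30:00

# Hypothetical burst of 12 alerts, 5 minutes apart
events = [disclosed + timedelta(minutes=5 * i) for i in range(12)]
print(velocity_indicator(events, timedelta(hours=1)))  # 12.0 events/hour
```

The intuition is that AI-assisted attacks compress both numbers: exploitation windows shrink from days to hours, and event velocity spikes faster than human-paced activity, so thresholds tuned to human attacker behavior no longer fire.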
Enabling enterprise-wide AI adoption while mitigating risks is also a challenge. As boards aggressively push for AI integration, CISOs are under pressure to enable transformation without mature controls in place.
Preparing to integrate AI capabilities defensively, notably by automating lower-level analyst roles in security operations centers, is another priority; the goal is to match the speed and scale at which AI assists attackers.
The report highlights that while AI offers powerful tools for both attackers and defenders, CISOs face a rapidly evolving threat landscape with underdeveloped security controls, creating a challenging environment for risk mitigation.
Executives expect AI to replace humans in the areas of penetration testing, third-party risk assessments, reviews of user access requests, and threat modeling. However, the report also shows that executives are worried about the unintended security consequences of their own companies' use of AI.
The demand for effective 'allow-by-default' controls is acute due to shadow AI usage and the absence of enterprise-grade governance frameworks. More than two-thirds of the companies using or testing AI agents said they were developing them in-house.
Nearly seven in 10 companies are already using AI agents, and another 23% are planning to deploy them next year. The report suggests that the true number of companies targeted by AI-powered attacks may be higher due to the difficulty in detecting such threats.
AI agents could "unlock expert-level capabilities across a broader surface area" in penetration testing and threat modeling. The report also points to AI-powered phishing and malware development, though it does not provide specific details about either.
In conclusion, AI security risks dominate CISOs’ concerns in 2025, requiring new governance models, enhanced detection methods, and a delicate balance between innovation and security.
Key takeaways:
- The report by cybersecurity firm Team8 reveals that AI-related threats have surpassed traditional concerns such as vulnerability management and data loss prevention for Chief Information Security Officers (CISOs) worldwide.
- Securing AI agents, which pose unique vulnerabilities due to their autonomous decision-making capabilities, is a concern for 37% of the security leaders interviewed.
- CISOs are grappling with the challenge of detecting sophisticated AI-driven threats that mimic human behavior and evade traditional security metrics, which requires advanced analytics.
- The demand for effective 'allow-by-default' controls is high due to the absence of enterprise-grade governance frameworks, especially given the increase in shadow AI usage.
- The report suggests that the number of companies targeted by AI-powered attacks may be higher than what is currently known, as it can be challenging to detect such threats.