featured Corporate /en/research-insights/featured/special-editorial/ai-for-security-and-security-for-ai-two-aspects-of-a-pivotal-intersection content esgSubNav

AI for Security, and Security for AI: Two Aspects of a Pivotal Intersection

Scott Crawford
Research Director,
451 Research,
S&P Global Market Intelligence
scott.crawford@spglobal.com
Sudeep Kesh
Chief Innovation Officer,
S&P Global Ratings
sudeep.kesh@spglobal.com
Maria Mercedes Cangueiro
Associate Director,
Cyber Risk Expert,
Emerging Risks – R&D,
S&P Global Ratings
maria.cangueiro@spglobal.com



This is a thought leadership report issued by S&P Global. This report does not constitute a rating action, neither was it discussed by a rating committee.
Scott Crawford
Research Director,
451 Research,
S&P Global Market Intelligence
scott.crawford@spglobal.com
Sudeep Kesh
Chief Innovation Officer,
S&P Global Ratings
sudeep.kesh@spglobal.com
Maria Mercedes Cangueiro
Associate Director,
Cyber Risk Expert,
Emerging Risks – R&D,
S&P Global Ratings
maria.cangueiro@spglobal.com



This is a thought leadership report issued by S&P Global. This report does not constitute a rating action, neither was it discussed by a rating committee.

Published: November 29, 2023

Highlights

Prior to generative AI bursting on the scene in 2022, machine learning was already well established in cybersecurity technology, primarily used for threat and fraud detection.

However, the data that powers these same AI technologies (given their inherent sensitivity), as well as new technological advances, demand focus on security measures to ensure protection from malicious activity.

Unsurprisingly, security for AI is a widely cited concern among respondents to S&P Global Market Intelligence surveys of IT and cybersecurity practitioners, and many cite it as their top concern.

Artificial intelligence can and likely will boost security efficacy, though vulnerabilities may also escalate with increasing use of AI. Both of these developments will require organizations to advance and perhaps rethink and adapt risk management frameworks to keep pace.


The convergence of AI and security took center stage at the Black Hat and DEF CON 2023 conferences — security's "summer camp" in Las Vegas. In Figure 1, we break down the two primary ways in which these priorities are coming together: leveraging AI to enhance security, and applying security to defend AI (click on the graphic below for details).

Cybersecurity has traditionally been a “go-to” use case for one of the fundamental technologies behind AI — machine learning. Machine learning generally uses supervised or unsupervised computational algorithms to solve problems in one of four distinct groups: classification, clustering, dimensionality reduction, and prediction/regression problems. In cybersecurity, all four problem types are addressed in processes such as identifying normal versus irregular patterns for data and application access controls, isolating potential threat actors from normal users based on automated behavior analysis, refining criteria used in monitoring normal business operations versus irregularities that may indicate threat actors and exploitations, and recognizing new or evolving methods of attack such as malware variants. For decades, machine learning has been part of the toolkit used in cyber threat management, but recent advancements in technology — particularly the transformer, which puts the “T” in ChatGPT — are poised to transform the cybersecurity industry.

AI has been a trending topic in technology for years, but nothing has fueled interest like the recent explosive emergence of generative AI. With many nascent tech trends, cybersecurity is a top area of opportunity as well as concern, and this is no less true with AI. The US National Security Agency (NSA) recently created an entity to oversee development and integration of AI capabilities in US national security systems. AI was a central focus of this year’s RSA Conference. It was also the theme of the opening keynote at Black Hat, where the AI Cyber Challenge, a Defense Advanced Research Projects Agency (DARPA) initiative launched by the Biden-Harris administration, was announced. That same week, DEF CON hosted the largest public "red teaming" (penetration testing) exercise against AI models to date. While these initiatives may sound like efforts to compromise generative AI, they are just the opposite. Technology has long benefited from the efforts of informed security researchers to identify and resolve vulnerabilities and exposures. These efforts represent ways to focus the security community on generative AI to help match its pace of innovation, improve risk mitigation, and foster awareness of concerns that require resolution.

In this report, we explore two related aspects of the intersection between AI and security: the application of AI to security issues, which we abbreviate here as "AI for security," and mitigating security risks of the implementation and use of AI, which we refer to as "security for AI."

According to 451 Research's Voice of the Enterprise: AI & Machine Learning, Infrastructure 2023 survey, both aspects of this intersection are prominent concerns for respondents implementing AI/machine-learning initiatives. In terms of AI for security, threat detection is the most frequently reported area of existing investment (47% of respondents), and another 37% say they plan to invest. In terms of security for AI, security is the most frequently reported concern about the infrastructure that hosts, or will host, AI/ML workloads (21% of respondents, ahead of cost at 19%). These two issues (security and cost) well outdistance the next concern, reliability (11%). Another 46% of respondents say security is a concern, if not a top concern, which amounts to 67% of respondents reporting some degree of concern about security — the largest percentage of any response. The two categories of AI for security and security for AI help define the broad outlines of our planned research coverage in this area. Both aspects have already made a substantial mark on the technology products and services markets, and we expect their impact and importance to grow. 

 

AI for security

For years, machine learning has played a role in security efforts such as malware recognition and differentiation. The sheer number of malware types and variants have long demanded an approach to this aspect of threat recognition that is both scalable and responsive, given the volume of attacks and the rapid pace at which new attacks emerge. The application of machine learning to identify activity baselines and anomalies has spurred the rise of user and entity behavior analytics, which can often provide early recognition of malicious activity based on variations from observed norms in the behavior of people as well as technology assets.

Supervised machine learning has often been used to refine approaches to security analytics previously characterized by rules-based event recognition. Unsupervised machine learning approaches, meanwhile, provide greater autonomy to security data analysis, which can help alleviate a security operations team's burden in recognizing significant events and artifacts in an often overwhelming volume of telemetry from a wide range of sources.

The emergence of generative AI has introduced further opportunities to apply AI to security priorities. Security operations (SecOps) is a particularly fertile ground for innovation. Since attackers seek to evade detection, security analysts must correlate evidence of suspicious activity across a staggering volume of inputs. They must quickly prioritize identifiable threats in this data for response, making the constantly shifting playing field between attacker and defender a race against not only innovation but time, given that attacks can have an impact within minutes. Security analytics and SecOps tools are purpose-built to enable security teams to detect and respond to threats with greater agility, but the ability of generative AI to comb through such volumes of data, extract valuable insight, and present it in easily consumable human terms should help alleviate this load. Early applications of generative AI in this context show promise for enabling analysts — often limited in number relative to the challenges they face — to spend less time on data collection, correlation and triage, and to focus instead where they can be most effective. Generative AI can also be useful in finding and presenting relevant insights to less experienced analysts, helping them build expertise as they grow in the field (thus augmenting their productivity, rather than replacing them) — an option that could prove useful in helping organizations counter the enduring challenges of sourcing and retaining cybersecurity skills

It is therefore noteworthy that some of the largest competitors in the security market are also among the biggest investors in generative AI. Examples seen earlier this year include OpenAI-partner Microsoft Corp.'s introduction of Microsoft Security Copilot, the new offerings powered by Google Cloud Security AI Workbench, and Amazon Web Services' alignment of Amazon Bedrock with its Global Security Initiative in partnership with global systems integrators and managed security service providers. Supporters of the DARPA AI Cyber Challenge announced at Black Hat include Anthropic, Google, Microsoft, OpenAI, the Linux Foundation, and the Open Source Security Foundation, in addition to Black Hat USA and DEF CON. The AI Cyber Challenge is a two-year competition that will offer nearly $20 million in prizes for innovation in finding and remediating security vulnerabilities in software using AI. Companies significantly invested in AI (and AI for security) are also highly visible in efforts to promote security for AI. 

Many other vendors tout the application of AI to security challenges — as the prior examples of machine-learning applications to security suggest — in a field that seems likely to grow in both innovation among new entrants and evolution among current competitors. The range of opportunities is broad, as suggested by the ways in which our survey respondents already employ machine learning for security, compliance, and related use cases.

 

Applications of AI and other technological growth areas for the security industry will likely require developments in critical areas of risk management and control for all organizations using them. Namely, use of AI in security applications could strengthen companies’ cyber preparedness, as it allows advancement in mitigant techniques against threat actors, rapid analysis of vulnerabilities, an ability to simulate various scenarios involving threat actors, data integrity, security and utilization, and other applications. However, this amplifies, not replaces, the need and demand for robust risk management schemes. The absence of robust risk management may result in companies facing limitations in their ability to proactively identify, assess, and mitigate risks effectively. Therefore, entities may be ill-prepared to address the dynamic landscape of cybersecurity, even with the use of AI (and other technologies) for security. 

 

Security for AI

The other major aspect of the security-AI intersection is the mitigation of security exposures related to the implementation and application of AI. These include security vulnerabilities that may be incorporated in the body of both open-source and proprietary software on which AI is built, the exposure of AI/ML functionality to misuse or abuse, and the potential for adversaries to leverage AI to define and refine new types of exploits.

This area has already begun to affect the cybersecurity products and services markets, from startups to major vendors and systems integrators, including a significant presence at the 2023 RSA Conference's Innovation Sandbox and the Black Hat Startup Spotlight. Practitioners are growing the body of research on threats to security and privacy that target AI, and they are identifying ways to detect and defend against malicious activity across a number of concerns. Among the most prominent recent examples, the Generative Red Team Challenge hosted by the AI Village at DEF CON 2023 was, according to organizers, the largest "red teaming" exercise held so far for any group of AI models. Supported by the White House Office of Science, Technology and Policy; the National Science Foundation's Computer and Information Science and Engineering Directorate; and the Congressional AI Caucus, models provided by Anthropic, Cohere, Google LLC, Hugging Face Inc., Meta Platforms Inc., NVIDIA Corp., OpenAI, and Stability AI, with participation from Microsoft Corp., were subjected to testing on an evaluation platform provided by Scale AI. Other partners in the effort included Humane Intelligence, SeedAI, and the AI Vulnerability Database (AVID).

Existing approaches that have demonstrated value are getting an uplift in this new arena. MITRE Corp., for example, spearheaded an approach to threat characterization with its Adversarial Tactics, Techniques, and Common Knowledge (ATT&CK) knowledgebase, which describes threat attributes in ways consumable by detection and response technologies to improve performance and foster automation. Recently, MITRE introduced a similar initiative in Adversarial Threat Landscape for Artificial-Intelligence Systems (ATLAS), which seek to bring the same systematic approach demonstrated with ATT&CK to threat characterization for AI. While ATT&CK focuses on threats, the AI Vulnerability Database, noted above as a participant in the Generative Red Team Challenge, is a separate effort to catalog exposures, described as "an open-source knowledgebase of failure modes for AI models, datasets, and systems." 

Techniques that have been used more broadly to secure the software supply chain are also being applied to AI by those specializing in this domain. Another perspective being brought to bear on the challenge is that of safety, whereby those with experience in both AI and safety engineering are applying the practices of safety assurance to AI, with security included among the objectives.

The aim of these initiatives is not only to help increase assurance for those adopting AI. They also seek to help make AI safer by taking more of an active stance in defending innovative technology and providing foundations for proper digital governance, auditability and controls for security, privacy, safety, and other risks. Many of these issues are in their infancy, and an increase in viable use cases will inevitably yield standards, norms, and regulation to help enable the balance of safety and security, as well as innovation and progress.

Such standards, norms, and regulation will then need to be used in the form of updated governance and risk management strategies across organizations if they are to succeed in an increasingly digital future. 

Successful companies will need to maintain effective governance for AI and other technological developments — a hallmark of adaptive, successful companies today. In our view, effective governance includes the establishment of policies and procedures for AI usage, oversight from boards of directors, and a proactive approach to assess and mitigate risks. Furthermore, governance should include regular audits, transparency in AI decision-making, and mechanisms for adapting to changing threat landscapes, ensuring responsible and secure AI integration across the organization.  

 

Frequently asked questions:

What is the relationship between AI and cybersecurity?

There are two main aspects: AI for security, and security for AI. Engaging AI in threat recognition and streamlining processes of data collection and response to better mitigate threats are two examples of AI for security: engaging AI in ways that improve cybersecurity efforts for organizations. A focus on the actual or potential vulnerabilities and exposures of AI speaks to efforts to improve security for AI, which in turn helps assure confidence in AI and the increasingly significant role it is playing in technology evolution.


What are the security and privacy risks associated with AI, and how can these be mitigated?

The range of risks is broad, and investment in addressing these issues is increasing, especially given the focus on generative AI. Any summary of these risks will therefore be shaped by the ongoing evolution of AI — an evolution that is happening with breathtaking speed — and lists composed even in the near future may therefore differ from any presented today. Part of the challenge with generative AI in particular is that its interactions can be very broad and are dynamic because large language models learn from ongoing “conversational” interactions. An understanding of the nature of their risks and exposures is therefore developing along with them. Among the risks already seen, however, are the potential for manipulating large language model (LLMs) to disclose sensitive or protected information; the risks of exposing sensitive content to LLMs as training data beyond the acceptable control of organizations; the possibility of malicious actors “poisoning” training data to skew outputs; malicious implementations of AI that can be used for nefarious purposes; and potential compromise of the software supply chain used in developing AI implementations. These tactics may be employed in efforts ranging from misinformation or disinformation to privacy exploits to a variety of security threats.

How is AI manifesting itself in the landscape of cybersecurity technology?

With respect to “AI for security” (as defined above), AI already plays a significant role in cybersecurity, and the potential for generative AI applications has become as apparent in this field as it has in other technology domains. Many of generative AI’s major players also have a significant presence in fields such as cyber threat detection and response. These efforts leverage the ability of AI to catalog and recognize adversary tactics and pull together relevant contextual data and threat intelligence quickly, which can help accelerate response to security threats and mitigation of their impact. Current efforts target the ability of AI to digest overwhelming volumes of security telemetry and help augment the ability of skilled security experts to respond to demand. The ability of generative AI to create programming code, meanwhile, has potential for accelerating security automation. 

In terms of security for AI, major players in generative AI as well as several startups and innovators have come forward with new approaches to address concerns regarding the security of AI. Many of these have been featured at major cybersecurity conferences such as the RSA Conference and Black Hat/DEF CON. These innovators are seeking to mitigate many of the known or potential threat vectors already seen targeting AI — and aim to position themselves to tackle those that emerge as the rapid pace of AI innovation continues.

 

Related research

 

 

Contributors 

Martin Whitworth
Lead Cyber Risk Expert,
Emerging Risks – R&D,
S&P Global Ratings,
martin.whitworth@spglobal.com

Paul Alvarez
Lead Cyber Risk Expert,
Emerging Risks – R&D,
S&P Global Ratings,
paul.alvarez@spglobal.com

Martin Whitworth
Lead Cyber Risk Expert,
Emerging Risks – R&D,
S&P Global Ratings,
martin.whitworth@spglobal.com


Paul Alvarez

Lead Cyber Risk Expert,
Emerging Risks – R&D,
S&P Global Ratings,
paul.alvarez@spglobal.com