OWASP Top 10 for LLM and new tooling guidance targets GenAI security
2024-11-12 21:00:00 Author: securityboulevard.com

New guidance for organizations seeking to protect the generative AI tools they’re running has been released by the OWASP Top 10 LLM Applications Security Project.

OWASP’s LLM project lead, Steve Wilson, said in a statement:

“We’re two years into the generative AI boom, and attackers are using AI to get smarter and faster. Security leaders and software developers need to do the same. Our new resources arm organizations with the tools they need to stay ahead of these increasingly sophisticated threats.”

Here’s what you need to know about the updated OWASP Top 10 for LLM and the new tooling landscape guide — and how securing AI and machine-learning tools in your organization requires a more comprehensive approach.


What’s been added to the OWASP Top 10 for LLM

The new OWASP Top 10 for LLM includes guidance for:

  • Preparing for and responding to deepfake events, focusing on risk assessment, threat actor identification, incident response, awareness training, and event types
  • Establishing centers of excellence for gen AI security, designed to develop security policies, foster collaboration, build trust, advance ethical practices, and optimize AI performance

In addition, the new “AI Security Solution Landscape Guide” offers insights into both open-source and commercial solutions for securing LLMs and gen AI applications.

Deepfakes are on the rise, making guidance essential

Deepfakes — images, videos, or audio recordings created or manipulated using deep neural networks — pose significant threats, said Matthew Walsh, team lead and senior data scientist in the CERT division at Carnegie Mellon University’s Software Engineering Institute. “Given the rising frequency and sophistication of deepfake attacks, organizations must have clear guidance on how to prepare for and respond to these incidents,” he said.

One reason guidance on deepfakes is needed is that their use is rising sharply, Walsh said, citing data from the AIAAIC (AI, Algorithmic, and Automation Incidents and Controversies) and the AI Incident Database. The data shows that between 2022 and 2023, there was nearly a fivefold increase in deepfake attacks. “Businesses across various sectors, government agencies, and private citizens have become targets of these attacks,” Walsh said.

“While deepfakes are often associated with individual defamation or fake content, these attacks can also be used for financial fraud, identity theft, and the spread of misinformation. To mitigate this risk, organizations must prioritize campaigns that raise employee awareness about the deepfake threat.”
—Matthew Walsh

Guidance is also needed because deepfake technology is becoming increasingly sophisticated. “Advances in generative adversarial networks (GANs), variational auto-encoders (VAEs), and diffusion models have made it easier to produce highly realistic videos, images, and audio that are virtually indistinguishable from genuine media,” he noted. “As these generative methods become more powerful and accessible, organizations must implement robust detection technologies to identify deepfakes. Additionally, organizations should invest in training programs that help employees to spot fraudulent content before it can cause harm.”

Rapid response is critical when dealing with deepfake attacks, Walsh said. “Disinformation spread via deepfakes can quickly go viral, especially on social media platforms. In some cases, deepfakes have been used in live social engineering attacks, adding to the urgency,” he said.

“Organizations must be prepared with an incident-response plan well in advance of an attack. Waiting until a deepfake event occurs to implement a response strategy is not an option — timely, coordinated action is needed to limit damage and prevent the further spread of false information.”
—Matthew Walsh

Henry Patishman, executive vice president for identity verification solutions at Regula, a forensic devices and identity verification firm, said the deepfake guidance provided by the OWASP team is timely and should be taken up by all businesses around the world, regardless of size, industry, or location. “‘The OWASP Guide to Preparing and Responding to Deepfake Events’ very clearly outlines the current threats and guidance on how to deal with some specific events,” Patishman said. “This guide acts as a great starting point for organizations to understand the threat and begin developing their own internal strategies.”

Patishman added that, based on a survey conducted by Regula in August 2024, about half of all businesses worldwide reported cases of audio or video deepfake fraud in the past year. “This represents more than a 12% rise in audio deepfakes and almost a 20% rise in video deepfakes when compared to a similar study conducted in 2022,” he said.

“This threat is not industry-specific, with all surveyed industries — crypto, financial services, aviation, technology, healthcare, telecom, and law enforcement — showing more than 40% of companies within each industry experiencing deepfakes.”
—Henry Patishman

AI centers of excellence: Bring your teams together

The OWASP team is also offering new guidance for organizations to create their own AI security center of excellence (CoE) that brings together essential stakeholders from security, legal, data science, and operations to develop comprehensive security practices. J. Stephen Kowski, field CTO at the computer and network security company SlashNext, said the guidance is very practical, offering clear frameworks for policy development and implementation across organizations.

“The biggest challenge lies in coordinating cross-functional teams while maintaining operational efficiency and keeping pace with rapidly evolving threats.”
—J. Stephen Kowski

Sean Wright, head of application security at the fraud prevention fintech company Featurespace, said another important consideration when establishing a CoE is that the wide-scale adoption of AI is still new.

“[There] are still many unknowns as well as constant shifts in things such as regulations and compliance. Having a mechanism in place to ensure that your organization is able to adapt effectively as well as remain compliant is incredibly important.”
—Sean Wright

As organizations increasingly adopt AI technologies, the associated risks for those organizations grow as well, said Jason Soroko, senior vice president of product for the digital certificate provider Sectigo. “Establishing an AI security center of excellence helps proactively manage these risks by developing and implementing secure practices throughout every stage of AI projects, from data collection to model deployment,” he said.

Soroko said it also means having a dedicated team responsible for staying updated on emerging threats and mitigation strategies, ensuring that the organization’s AI initiatives remain secure and effective.

“A good center of excellence that involves risk will include executive team members who own the risk of a company and can help guide a cross-functional team. Top-down approaches to risk are usually best.”
—Jason Soroko

Iftach Ian Amit, founder and CEO of the automated cloud infrastructure security firm Gomboc.ai, said that since AI tools are a force multiplier, organizations can deliver more with less. But that also means proper quality and safety assurances need to be embedded into their use.

“Gen AI is not inherently safe and needs to be augmented with the right guardrails and mechanisms that are specific to the organization using it. A CoE that provides both the policy as well as the processes and tools to do so would enable faster and more secure adoption of AI.”
—Iftach Ian Amit

MJ Kaufmann, an author and instructor at O’Reilly Media, said that by pooling expertise from domains such as cybersecurity, data science, compliance, and risk management, an AI CoE can develop and enforce consistent security protocols. “A CoE can even be more effective because of the very fact that it draws expertise from a breadth of domains,” she said.

However, Kaufmann said that while the OWASP guidance is effective for larger and well-funded organizations, “not every organization has the resources to implement a CoE or the AI investment and adoption to warrant it.”

“Building and maintaining a CoE demands a significant investment in personnel, technology, and ongoing training. This can be a barrier for smaller organizations or companies with tighter budgets. Allocating resources effectively while showing short-term value to stakeholders can be challenging, especially in companies with limited AI budgets.”
—MJ Kaufmann

Indeed, CoEs might be a can of worms for many organizations, said Casey Bleeker, CEO and co-founder of the secure gen AI services platform SurePath AI. “While an AI security CoE is valuable when you have the right talent and skills, most organizations don’t have the expertise or the appropriate scope,” Bleeker said.

“Is the CISO now responsible for overseeing the data science team’s work and monitoring and enforcing policies in technology areas they don’t have exposure to today? Are legal and risk departments responsible for defining every single use case employees are ‘approved’ to use AI for when the organization has no way to even monitor or enforce application of those policies?”
—Casey Bleeker

Bleeker said most organizations should instead view OWASP’s recommendation as aspirational, something they will not be able to reach for a few years. Before standing up a CoE, he said, they first need to answer what it will define, how it will measure adherence, and whether their teams have the right tools and people.

A look at the AI security tooling landscape

The OWASP team did not stop with the new Top 10 for LLM. Its “AI Security Solution Landscape Guide” aims to serve as a comprehensive reference, offering insights into both open-source and commercial solutions for securing LLMs and gen AI applications. By categorizing existing and emerging security solutions, it can provide organizations with guidance to effectively address risks identified in the OWASP Top 10 LLM vulnerabilities list, the group said.

SurePath’s Bleeker said the tooling guidance is warranted because most customers are only now coming to understand the basics of AI technology, and misconceptions persist about how it functions and about its associated risks.

“We meet with customers daily with wildly varying levels of understanding, and there is always some foundational element of misunderstanding because these are complex problems that sprawl across legal, compliance, technology, and data.”
—Casey Bleeker

AI in security can be misunderstood, Bleeker said. Many security solutions on the market protect against AI-generated threats or use AI in detecting threats but play no part in securing the actual use of AI, he said.

Featurespace’s Wright said that AI brings plenty of hype, and that the marketing hype around AI security tools in particular is worrisome. “This thankfully seems to be simmering down, but we still see some products making some bold claims,” he said.

“My advice is to make sure that, if you are considering purchasing a tool, you first validate many of its claims and ensure that the product does what it says it can do. There is a tremendous risk attached to believing that a tool will cover you from a security perspective when in fact it doesn’t.”
—Sean Wright

Bleeker said solutions to secure model deployments are often small improvements in web application firewall frameworks that protect against DDoS attacks, prompt engineering, or data exfiltration. “We see most of these risks being obviated long term by best practices, such as not allowing raw input from end users to LLM models,” he said.
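
As a concrete example of the practice Bleeker points to, the sketch below never passes raw end-user text to the model: input is length-capped, stripped of control characters, and placed in a clearly delimited user message rather than concatenated into the system instructions. The llm_complete() call and the specific sanitization rules are assumptions for illustration, not a particular vendor's API.

```python
# Minimal sketch of the "no raw end-user input to the model" practice.
# llm_complete() is a hypothetical client call, not a vendor API; the
# sanitization rules below are illustrative assumptions.
import re

MAX_INPUT_CHARS = 2000
_CONTROL_CHARS = re.compile(r"[\x00-\x08\x0b\x0c\x0e-\x1f]")

def sanitize_user_input(text: str) -> str:
    # Strip control characters, cap length, and neutralize the delimiter the
    # prompt template relies on so user text cannot break out of its block.
    text = _CONTROL_CHARS.sub("", text)[:MAX_INPUT_CHARS]
    return text.replace("---", "- - -")

def build_prompt(user_text: str) -> list[dict]:
    # User content goes into a clearly delimited, non-privileged message,
    # never concatenated into the system instructions.
    return [
        {"role": "system",
         "content": "Answer only questions about our product documentation."},
        {"role": "user",
         "content": f"Customer question:\n---\n{sanitize_user_input(user_text)}\n---"},
    ]

# response = llm_complete(build_prompt(raw_text_from_web_form))
```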

Solutions for securing model training are often focused on data cleansing or redaction, classification of source data, and the labeling and tagging of trained models based on the data sources used for input, Bleeker said. “This is often coupled with AI governance processes to ensure safe practices were followed and documented but contains no active enforcement of policy during training or after deployment when in use,” he said.
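
A rough sketch of what that cleansing and tagging step can look like follows: a couple of common PII patterns are redacted from training records, and a small model card records which data sources and redaction rules fed the run. The patterns, file names, and fields are assumptions for illustration, not a description of any specific governance product.

```python
# Illustrative sketch of pre-training data redaction plus source tagging.
# The regex patterns, file names, and fields are assumptions, not a
# description of any product mentioned in the article.
import json
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def redact(record: str) -> str:
    # Replace common PII patterns before the record reaches the training set.
    record = EMAIL.sub("[REDACTED_EMAIL]", record)
    return SSN.sub("[REDACTED_SSN]", record)

def write_model_card(path: str, sources: list[str], rules: list[str]) -> None:
    # Record which data sources and cleansing rules fed the training run, so
    # the resulting model can be labeled, tagged, and audited later.
    with open(path, "w") as f:
        json.dump({"data_sources": sources, "redaction_rules": rules}, f, indent=2)

clean_records = [redact(r) for r in
                 ["Contact jane@example.com for access", "SSN 123-45-6789 on file"]]
write_model_card("model_card.json", ["crm_exports_2024"], ["email", "ssn"])
```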

“End-user security is often too focused on just shadow AI use, leaving massive gaps in private model access controls and internal data access controls, and leaving internal data leakage unaddressed.”
—Casey Bleeker

Dhaval Shah, senior director of product management at ReversingLabs, recently wrote about how securing the ecosystem around ML models is more critical than ever. Shah described in technical detail how the new ML malware detection capabilities in Spectra Assure, ReversingLabs’ software supply chain security platform, help ensure that your environment remains safe at every stage of the ML model lifecycle:

  • Before you bring a third-party LLM model into your environment, check for unsafe function calls and suspicious behaviors and prevent hidden threats from compromising your system
  • Before you ship or deploy an LLM model that you’ve created, ensure that it is free from supply chain threats by thoroughly analyzing it for any malicious behaviors
  • Make sure models saved in risky formats such as Pickle are meticulously scanned to detect any potential malware before they can impact your infrastructure (a minimal illustration of such a scan follows this list)
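
To show why Pickle-format models warrant that scrutiny, here is a minimal static check that walks a pickle's opcode stream without loading it and flags the opcodes that can trigger code execution during unpickling, especially when they reference modules such as os or subprocess. It uses only Python's standard pickletools module and is a sketch of the general idea, not how Spectra Assure performs its analysis; the denylists are assumptions.

```python
# Minimal static check of a Pickle-format model file before it is ever loaded.
# This illustrates the idea behind pickle scanning; it is NOT how Spectra
# Assure works, and the opcode/module denylists are assumptions.
import pickletools

SUSPICIOUS_OPS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ"}
DANGEROUS_MODULES = ("os", "posix", "subprocess", "builtins", "sys", "socket")

def scan_pickle(path: str) -> list[str]:
    findings = []
    with open(path, "rb") as f:
        data = f.read()
    # Walk the opcode stream without executing it; unpickling is what runs code.
    for opcode, arg, pos in pickletools.genops(data):
        if opcode.name in SUSPICIOUS_OPS:
            detail = f"{opcode.name} at byte offset {pos}"
            # GLOBAL/INST carry "module name" as a space-separated string argument.
            if isinstance(arg, str) and arg.split() and arg.split()[0] in DANGEROUS_MODULES:
                detail += f" -> {arg!r}"
            findings.append(detail)
    return findings

# issues = scan_pickle("downloaded_model.pkl")
# if issues:
#     raise RuntimeError(f"Refusing to load unsafe pickle: {issues}")
```

A real pipeline would quarantine the file on any finding rather than attempt to load it.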

*** This is a Security Bloggers Network syndicated blog from ReversingLabs Blog authored by John P. Mello Jr. Read the original post at: https://www.reversinglabs.com/blog/owasp-top-10-for-llm-takes-aim-at-genai-security

