LLM AI Security Checklist
2024-3-15 20:51:57 Author: infosecwriteups.com(查看原文) 阅读量:12 收藏

Piyush Kumawat (securitycipher)

InfoSec Write-ups

Web Checklist: https://securitycipher.com/llm-ai-security-checklist/

Are you seeking a skilled freelance penetration tester? https://securitycipher.com/services

  • Scrutinize how competitors are investing in artificial intelligence. Although there are risks in AI adoption, there are also business benefits that may impact future market positions.
  • Investigate the impact of current controls, such as password resets, which use voice recognition which may no longer provide the appropriate defensive security from new GenAI enhanced attacks.
  • Update the Incident Response Plan and playbooks for GenAI enhanced attacks and AIML specific incidents.
  • How will attackers accelerate exploit attacks against the organization, employees, executives, or users? Organizations should anticipate “hyper-personalized” attacks at scale using Generative AI. LLM-assisted Spear Phishing attacks are now exponentially more effective, targeted, and weaponized for an attack.
  • How could GenAI be used for attacks on the business’s customers or clients through spoofing or GenAI generated content?
  • Can the business detect and neutralize harmful or malicious inputs or queries to LLM solutions?
  • Can the business safeguard connections with existing systems and databases with secure integrations at all LLM trust boundaries?
  • Does the business have insider threat mitigation to prevent misuse by authorized users?
  • Can the business prevent unauthorized access to proprietary models or data to protect Intellectual Property?
  • Can the business prevent the generation of harmful or inappropriate content with automated content filtering?
  • Catalog existing AI services, tools, and owners. Designate a tag in asset management for specific inventory.
  • Include AI components in the Software Bill of Material (SBOM), a comprehensive list of all the software components, dependencies, and metadata associated with applications.
  • Catalog AI data sources and the sensitivity of the data (protected, confidential, public)
  • Establish if pen testing or red teaming of deployed AI solutions is required to determine the current attack surface risk.
  • Create an AI solution onboarding process.
  • Ensure skilled IT admin staff is available either internally or externally, following SBoM requirements.
  • Actively engage with employees to understand and address concerns with planned LLM initiatives. □
  • Establish a culture of open, and transparent communication on the organization’s use of predictive or generative AI within the organization process, systems, employee management and support, and customer engagements and how its use is governed, managed, and risks addressed.
  • Train all users on ethics, responsibility, and legal issues such as warranty, license, and copyright.
  • Update security awareness training to include GenAI related threats. Voice cloning and image cloning, as well as in anticipation of increased spear phishing attacks.
  • Any adopted GenAI solutions should include training for both DevOps and cybersecurity for the deployment pipeline to ensure AI safety and security assurances.
  • Enhance customer experience
  • Better operational efficiency
  • Better knowledge management
  • Enhanced innovation
  • Market Research and Competitor Analysis
  • Document creation, translation, summarization, and analysis
  • Establish the organization’s AI RACI chart (who is responsible, who is accountable, who should be consulted, and who should be informed)
  • Document and assign AI risk, risk assessments, and governance responsibility within the organization.
  • Establish data management policies, including technical enforcement, regarding data classification and usage limitations. Models should only leverage data classified for the minimum access level of any user of the system. For example, update the data protection policy to emphasize not to input protected or confidential data into nonbusiness-managed tools.
  • Create an AI Policy supported by established policy (e.g., standard of good conduct, data protection, software use)
  • Publish an acceptable use matrix for various generative AI tools for employees to use.
  • Document the sources and management of any data that the organization uses from the generative LLM models.
  • Confirm product warranties are clear in the product development stream to assign who is responsible for product warranties with AI.
  • Review and update existing terms and conditions for any GenAI considerations.
  • Review AI EULA agreements. End-user license agreements for GenAI platforms are very different in how they handle user prompts, output rights and ownership, data privacy, compliance, liability, privacy, and limits on how output can be used.
  • Organizations EULA for customers, Modify end-user agreements to prevent the organization from incurring liabilities related to plagiarism, bias propagation, or intellectual property infringement through AI-generated content.
  • Review existing AI-assisted tools used for code development. A chatbot’s ability to write code can threaten a company’s ownership rights to its product if a chatbot is used to generate code for the product. For example, it could call into question the status and protection of the generated content and who holds the right to use the generated content.
  • Review any risks to intellectual property. Intellectual property generated by a chatbot could be in jeopardy if improperly obtained data was used during the generative process, which is subject to copyright, trademark, or patent protection. If AI products use infringing material, it creates a risk for the outputs of the AI, which may result in intellectual property infringement.
  • Review any contracts with indemnification provisions. Indemnification clauses try to put the responsibility for an event that leads to liability on the person who was more at fault for it or who had the best chance of stopping it. Establish guardrails to determine whether the provider of the AI or its user caused the event, giving rise to liability.
  • Review liability for potential injury and property damage caused by AI systems.
  • Review insurance coverage. Traditional (D&O) liability and commercial general liability insurance policies are likely insufficient to fully protect AI use.
  • Identify any copyright issues. Human authorship is required for copyright. An organization may also be liable for plagiarism, propagation of bias, or intellectual property infringement if LLM tools are misused.
  • Ensure agreements are in place for contractors and appropriate use of AI for any development or provided services.
  • Restrict or prohibit the use of generative AI tools for employees or contractors where enforceable rights may be an issue or where there are IP infringement concerns.
  • Assess and AI solutions used for employee management or hiring could result in disparate treatment claims or disparate impact claims.
  • Make sure the AI solutions do not collect or share sensitive information without proper consent or authorization.
  • Determine Country, State, or other Government specific AI compliance requirements.
  • Determine compliance requirements for restricting electronic monitoring of employees and employment-related automated decision systems (Vermont, California, Maryland, New York, New Jersey)
  • Determine compliance requirements for consent for facial recognition and the AI video analysis required (Illinois, Maryland, Washington, Vermont)
  • Review any AI tools in use or being considered for employee hiring or management.
  • Confirm the vendorś compliance with applicable AI laws and best practices.
  • Ask and document any products using AI during the hiring process. Ask how the model was trained, and how it is monitored, and track any corrections made to avoid discrimination and bias.
  • Ask and document what accommodation options are included.
  • Ask and document whether the vendor collects confidential data.
  • Ask how the vendor or tool stores and deletes data and regulates the use of facial recognition and video analysis tools during pre-employment.
  • Review other organization-specific regulatory requirements with AI that may raise compliance issues. The Employee Retirement Income Security Act of 1974, for instance, has fiduciary duty requirements for retirement plans that a chatbot might not be able to meet.
  • Threat Model LLM components and architecture trust boundaries. □ Data Security, verify how data is classified and protected based on sensitivity, including personal and proprietary business data. (How are user permissions managed, and what safeguards are in place?)
  • Access Control, implement least privilege access controls and implement defense-in-depth measures
  • Training Pipeline Security, require rigorous control around training data governance, pipelines, models, and algorithms.
  • Input and Output Security, evaluate input validation methods, as well as how outputs are filtered, sanitized, and approved.
  • Monitoring and Response, map workflows, monitoring, and responses to understand automation, logging, and auditing. Confirm audit records are secure.
  • Include application testing, source code review, vulnerability assessments, and red teaming in the production release process.
  • Check for existing vulnerabilities in the LLM model or supply chain.
  • Look into the effects of threats and attacks on LLM solutions, such as prompt injection, the release of sensitive information, and process manipulation.
  • Investigate the impact of attacks and threats to LLM models, including model poisoning, improper data handling, supply chain attacks, and model theft.
  • Supply Chain Security, request third-party audits, penetration testing, and code reviews for third-party providers. (both initially and on an ongoing basis)
  • Infrastructure Security, ask how often a vendor performs resilience testing? What are their SLAs in terms of availability, scalability, and performance?
  • Update incident response playbooks and include an LLM incident in tabletop exercises.
  • Identify or expand metrics to benchmark generative cybersecurity AI against other approaches to measure expected productivity improvements.
  • Establish continuous testing, evaluation, verification, and validation throughout the AI model lifecycle.
  • Provide regular executive metrics and updates on AI Model functionality, security, reliability, and robustness.
  • Review a models model card
  • Review risk card if available
  • Establish a process to track and maintain model cards for any deployed model including models used through a third party.
  • Retrieval Augmented Generation (RAG) & LLM: Examples
  • 12 RAG Pain Points and Proposed Solutions
  • Incorporate Red Team testing as a standard practice for AI Models and applications.

Are you seeking a skilled freelance penetration tester? https://securitycipher.com/services

Follow me on:
Twitter: https://twitter.com/piyush_supiy
Linkedin:
https://linkedin.com/in/piyush-kumawat
Website:
https://securitycipher.com
Telegram:
https://t.me/securecipher

Guide for Penetration Testing https://play.google.com/store/apps/details?id=com.securitycipher.penetrationtesting&hl=en-IN


文章来源: https://infosecwriteups.com/llm-ai-security-checklist-06ce587d42fa?source=rss----7b722bfd1b8d--bug_bounty
如有侵权请联系:admin#unsafe.sh