Building, pre-training, and fine-tuning an AI model is no small task, and you'll naturally want to protect your investment from theft, denial-of-service attacks, malicious use, and other threats. In this blog we'll cover the basics of protecting AI models from both legal risk and bad actors.
So long as the code and techniques that make up an AI model are human-written, they can be copyrighted, patented, and otherwise protected by intellectual property law, just like any other application. If your model is built on top of existing models, you'll need to make sure you comply with the open source licenses that govern those models. Whether you can copyright your derivative model will depend on those terms.
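If your base model is hosted on a hub that publishes license metadata, you can at least automate the first pass of that check. Here's a minimal sketch using the Hugging Face Hub's model card API; the repo ID `org/base-model` is a placeholder, and the declared license is only a starting point, not a substitute for reading the actual terms.

```python
# Minimal sketch: check a base model's declared license before building on it.
# Assumes the model is hosted on the Hugging Face Hub; "org/base-model" is a
# placeholder repo ID.
from huggingface_hub import ModelCard

card = ModelCard.load("org/base-model")
license_id = card.data.license  # e.g. "apache-2.0", "llama2", or None

if license_id is None:
    print("No license declared: treat as all rights reserved.")
else:
    print(f"Declared license: {license_id}")
    # The license ID alone isn't enough. Read the full terms, since many
    # model licenses restrict commercial use or derivative models.
```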
Generative AI output, that is, the text, images, or sounds your AI model produces, has so far been considered by US courts to lack human authorship and therefore to be ineligible for copyright protection. This means that if you use one AI model to train another, or to generate its code or functionality, the derivative model will probably not be fully protected by copyright law. If a work combines human and AI effort, only the human-authored portions are eligible for copyright protection. We're not lawyers, but because AI prompts themselves require a degree of human intelligence and creativity, there is a legal argument that prompt-driven AI outputs should be copyrightable; it's still early days, though, and the question hasn't been tested by the courts.
AI models expose an organization to risk from many angles. From an attacker's point of view, there is much to gain from infiltrating or attacking an AI model: an attacker can hold the model for ransom, siphon off the underlying compute to mine cryptocurrency, or steal trade secrets, sensitive personal data, or the model itself. The large number of components that make up an AI model widens the attack surface, and AI models can be insecure by design, making them difficult to protect sufficiently.
AI models also introduce legal risk. If your models are biased against certain groups, leak sensitive information, or perform poorly, your organization may find itself in court. As mentioned above, the underlying components of your AI model, including pre-trained models, open source libraries, and training data sets, can also land your organization in hot water if you aren't carefully considering the license and copyright restrictions on each ingredient.
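To make the bias point concrete, here is a minimal, illustrative sketch of a per-group performance check. The `records` data and group names are hypothetical, and real fairness audits use richer metrics than accuracy alone.

```python
# Illustrative check for performance gaps across groups.
# `records` is hypothetical: (group, prediction, true_label) triples
# drawn from your evaluation set.
from collections import defaultdict

records = [
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 1, 1),
    ("group_b", 0, 1), ("group_b", 0, 0), ("group_b", 0, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for group, pred, label in records:
    total[group] += 1
    correct[group] += int(pred == label)

for group in sorted(total):
    acc = correct[group] / total[group]
    print(f"{group}: accuracy {acc:.2f} over {total[group]} examples")
# Large gaps between groups are a signal to investigate training data
# and features before the model reaches production (or a courtroom).
```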
You can read about the top threats to large language models (LLMs) here; many of those threats apply to AI models more broadly.
AI security is a broad topic and each kind of model has its own risks, but many of the secure design concepts that apply to all applications apply to AI models just the same. Here are some things you can do to make your AI more secure:

- Enforce least-privilege access to model artifacts and inference APIs.
- Validate and rate-limit incoming requests (see the sketch below).
- Keep frameworks and dependencies patched.
- Monitor usage for anomalies, such as extraction-style query patterns.
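As one concrete illustration of the rate-limiting point above, here is a minimal sketch of a token-bucket limiter wrapped around an inference call. The `predict` function, client IDs, and limits are illustrative assumptions, not a prescribed implementation.

```python
# Minimal sketch: token-bucket rate limiting around a model inference call,
# as one defense against denial-of-service and resource abuse.
import time

def predict(prompt: str) -> str:
    # Stand-in for the real model call.
    return f"model output for {prompt!r}"

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

buckets: dict[str, TokenBucket] = {}

def rate_limited_predict(client_id: str, prompt: str) -> str:
    # Illustrative limits: 2 requests/second with bursts of up to 10.
    bucket = buckets.setdefault(client_id, TokenBucket(rate=2.0, capacity=10))
    if not bucket.allow():
        return "429: rate limit exceeded"
    return predict(prompt)
```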
AI models are usually very large and often need to live across multiple servers. If you are using cloud storage, choose a provider that takes security seriously. Here are some other considerations for secure AI storage:

- Encrypt model artifacts at rest and in transit.
- Restrict and log access to model files and checkpoints.
- Verify the integrity of checkpoints before loading them (see the sketch below).
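To illustrate the integrity point, here is a minimal sketch that verifies a stored checkpoint's SHA-256 digest before loading it. The file name and expected digest are placeholders; in practice, record the known-good digest when the checkpoint is published and store it separately from the artifact itself.

```python
# Minimal sketch: verify a model artifact's hash before loading it,
# so a tampered or corrupted checkpoint is refused.
import hashlib
from pathlib import Path

# Placeholder: the digest recorded when the checkpoint was published,
# kept separately from the artifact (e.g., in a signed manifest).
EXPECTED_SHA256 = "replace-with-known-good-digest"

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

checkpoint = Path("model.safetensors")  # placeholder file name
if sha256_of(checkpoint) != EXPECTED_SHA256:
    raise RuntimeError("Checkpoint hash mismatch: refusing to load.")
```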
Like any valuable asset, your AI models need protecting. For the best chance of success, security must be considered early, at the design stage, and throughout the development process.