Best practices in cybersecurity for AI
2023-09-19 15:11:50 Author: www.tarlogic.com

It is critical to implement good cybersecurity practices for AI

ENISA has developed a framework to help companies implement the best practices in cybersecurity for AI

The company Worldcoin, created by the CEO of OpenAI, the company behind ChatGPT, has built an AI system designed to differentiate humans from robots once Artificial Intelligence becomes ubiquitous, a scenario long imagined in literature and film that is becoming more real every day. To do so, it needs to scan people’s irises. This news shows that Artificial Intelligence systems already hold enormous amounts of data on citizens and companies.

Therefore, companies developing AI systems and their suppliers must implement good AI cybersecurity practices to prevent cyberattacks and security incidents.

The European Agency for Cybersecurity (ENISA) has recently released a framework for AI cybersecurity best practices (FAICP) to facilitate this task. This framework builds on the research and publications the agency has produced in recent years and, in particular, during 2023.

Below, we break down the key elements of this framework for implementing good cybersecurity practices for AI, highlighting the need for companies to have advanced security strategies in place and to be able to effectively assess the dynamic cybersecurity risks they face.

1. The complex AI threat landscape

Why is it so important for organizations to implement best practices in AI cybersecurity?

Machine Learning, Deep Learning, natural language processing, robots, voice recognition… Under the umbrella of Artificial Intelligence, we can find several technologies that, together or separately, are already transforming multiple economic sectors.

The explosion of Artificial Intelligence and its growing impact on society and the economy brings an increasingly complex threat landscape.

1.1. Theft, manipulation, destruction… Categorizing threats

ENISA has mapped all threats, systematizing them into eight broad categories:

Nefarious activity and abuse

Malicious actions aimed at stealing information or altering or destroying specific targets, such as the model used by a generative AI. ENISA lists some 30 specific threats, ranging from model sabotage to data poisoning, DDoS attacks against AI systems, and compromising the validation or confidentiality of Machine Learning training data.

Espionage, interception, and hijacking

This category includes actions aimed at obtaining confidential information or disrupting the operation of an AI system. This includes, for example, data inference or data theft, as well as illegitimate disclosure of an AI model.

Physical attacks

These threats seek to destroy or sabotage assets, for example, by launching physical attacks against AI system infrastructures, manipulating communication networks, or undermining the model.

Unintentional or accidental damage

These threats cover unintentional errors and accidents that can compromise or limit AI outputs, affect the accuracy of the data inferred by the system, or cause model misconfigurations.

Bugs or malfunctions

This category revolves around problems in the operation of an AI system. The ENISA list includes more than ten threats: degradation of the performance of an ML model, insufficient data, failure of a provider, corruption of data indexes, etc.

Disruptions

If ICT infrastructure, systems, or communication networks are disrupted, the service of the AI system is also disrupted.

Disasters

Such as natural disasters or environmental phenomena.

Legal

For example, the privacy of data used by an AI is compromised, personal information is disclosed, or providers do not comply with their data protection obligations.

2. FAICP: three layers of cybersecurity best practices for AI

In light of this diverse and growing threat landscape, ENISA has developed an AI cybersecurity best practice framework with three main objectives in focus:

  1. To protect ICT infrastructures and the AI hosted on them, considering both the AI lifecycle and all elements of the AI supply chain and the associated processes and technologies.
  2. To gather information on cybersecurity requirements for AI from EU member states.
  3. To identify the challenges we face in AI security and the gaps in current cybersecurity practices to optimize them and strengthen the protection of AI systems and all businesses and citizens interacting with AI systems.

The result is an AI cybersecurity best practice framework structured around three layers:

  1. Layer 1. It captures cybersecurity best practices to protect the ICT environments where AI systems are hosted, developed, integrated, maintained, or distributed.
  2. Layer 2. Presents cybersecurity practices focused on the specificities of AI: lifecycle, properties, specific threats, security controls, etc.
  3. Layer 3. Includes specific cybersecurity practices for companies in critical sectors such as healthcare, automotive, or energy. This layer of the framework is designed for AI systems categorized as high-risk under the future EU AI regulation, which is expected to be approved by the end of the year.

Good cybersecurity practices for AI are of paramount importance in critical sectors

3. Securing the AI ICT ecosystem (Layer 1)

AI systems are not developed, deployed, and maintained in a vacuum but are part of an ICT ecosystem that hosts them. Cybersecurity is the discipline in charge of protecting the technological assets of companies, public administrations, and AI ecosystems. This involves ensuring the infrastructure’s confidentiality, integrity, authenticity, and availability.

To this end, companies must carry out efficient and comprehensive security management, with cybersecurity services available to:

  • Analyse risks, assessing threats and vulnerabilities and the impact of a successful security incident.
  • Manage risks. With the information obtained from security assessments, prioritize the mitigation of threats and vulnerabilities, implement effective countermeasures to protect business assets, and consider available resources (a minimal risk-scoring sketch follows this list).
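As an illustration of the risk-management step above, the following is a minimal sketch of how assessed risks could be scored and prioritized. The threat names and the 1-5 likelihood/impact scale are illustrative assumptions, not something prescribed by the ENISA framework; the snippet only makes the prioritization logic concrete.

```python
from dataclasses import dataclass

# Hypothetical risk register entries; names and the 1-5 scales are
# illustrative assumptions, not part of the ENISA framework.
@dataclass
class Risk:
    threat: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (critical)

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

risks = [
    Risk("Training data poisoning", likelihood=3, impact=5),
    Risk("Model theft via exposed API", likelihood=2, impact=4),
    Risk("DDoS against inference endpoint", likelihood=4, impact=3),
]

# Prioritize mitigation work by descending risk score.
for r in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{r.threat}: score {r.score}")
```

In practice, each entry would also reference the affected assets, the chosen countermeasures, and the resources available to implement them.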

Companies can also engage proactive threat-hunting services to scrutinize potential hostile actors, discover the tactics, techniques, and procedures they employ, improve detection and response capabilities, and increase their resilience to advanced persistent threats (APTs).

These cybersecurity best practices represent the first level of protection for AI systems. Why? They seek to ensure that these systems operate in a secure environment.

Moreover, it should be noted that the security management of ICT infrastructures is not only of paramount importance but also mandatory for many companies within the European Union, following the approval in recent years of the NIS and NIS2 directives, the GDPR, and the Cybersecurity Act (CSA).

4. Implementing specific actions to protect AI systems (Layer 2)

The most critical layer in the AI cybersecurity best practice framework is the second one, as it directly addresses best practices designed to protect AI systems.

As noted above, the adoption of the European regulation on AI is imminent, and the draft proposed by the Commission and amended by the European Parliament is already known, pending negotiations with the Council. The draft states that all AI systems placed on the EU market must be safe and respect EU fundamental rights.

This implies putting in place a comprehensive security strategy to protect the assets that make up AI systems (a sketch of a minimal asset inventory follows this list):

  • Data: raw data, training data, testing data, etc.
  • Models: algorithms, models, model parameters…
  • Artifacts: model frameworks, data management policies…
  • Actors involved: data owners, data scientists, data engineers, model providers…
  • Processes: data collection, data processing, model training, and tuning…
  • Environment and tools: algorithm libraries, Machine Learning platforms, integrated development environments…
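To make this inventory concrete, the following is a minimal, hypothetical sketch of how such assets could be recorded and grouped by the categories above; all entries are illustrative placeholders rather than names taken from the ENISA framework.

```python
# A minimal, hypothetical inventory of AI assets grouped by the
# categories listed above; entries are illustrative examples only.
ai_asset_register = {
    "data": ["raw_data/", "training_set_v3.parquet", "holdout_test_set.parquet"],
    "models": ["fraud_detector.onnx", "hyperparameters.yaml"],
    "artifacts": ["model_card.md", "data_retention_policy.pdf"],
    "actors": ["data owner", "data scientist", "model provider"],
    "processes": ["data collection", "model training", "fine-tuning"],
    "environment_tools": ["scikit-learn", "internal ML platform", "IDE images"],
}

# Each asset can then be linked to its owner, the applicable security
# controls, and the corresponding entries in the risk register.
for category, assets in ai_asset_register.items():
    print(f"{category}: {len(assets)} assets under protection")
```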

4.1. AI security assessments

Beyond traditional security assessments, ENISA recommends that companies developing, hosting, or integrating AI systems make additional efforts to assess the specific risks of this technology:

  • Include the threats that will feature in the future European AI regulation: loss of transparency, loss of interpretability, loss of bias management, and loss of accountability.
  • Optimize the typologies of impact factors: robustness, resilience, fairness, and explainability.
  • Opt for a dynamic cybersecurity risk assessment with a focus on anomaly detection.
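As a purely illustrative sketch of the anomaly-detection focus mentioned in the last point, the snippet below fits an IsolationForest (one possible detector among many, here from scikit-learn) on synthetic model-input telemetry and flags out-of-distribution requests for review; the data, feature count, and contamination rate are assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical telemetry: per-request feature vectors observed during
# normal operation of a deployed model (synthetic data for the sketch).
baseline = rng.normal(loc=0.0, scale=1.0, size=(5000, 8))

# Fit the detector on the baseline behaviour of the system.
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New traffic: mostly normal, plus a few out-of-distribution inputs that
# could indicate probing, evasion attempts, or data-quality problems.
new_traffic = np.vstack([
    rng.normal(0.0, 1.0, size=(100, 8)),
    rng.normal(6.0, 1.0, size=(5, 8)),
])

flags = detector.predict(new_traffic)  # -1 = anomalous, 1 = normal
suspicious = np.where(flags == -1)[0]
print(f"{len(suspicious)} inputs flagged for review: {suspicious}")
```

Feeding such flags back into the risk assessment is what makes the assessment dynamic: the risk picture is updated as the system’s observed behaviour changes.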

4.1.1. Analysing and monitoring threats and vulnerabilities

The aim should be to have the necessary mechanisms and best practices in place to address the main threats to the security of AI systems and, in particular, Machine Learning systems, such as generative AI, which are the most attractive target for hostile actors:

  • Evasion. Attackers craft perturbed inputs to the AI system’s algorithm so that it produces incorrect outputs. These crafted input perturbations are known as adversarial examples (see the sketch after this list).
  • Poisoning. Hostile actors alter the data or the AI model with the intention of modifying the algorithm’s behavior and fulfilling their criminal objectives: sabotaging the AI system, inserting a backdoor into it, etc.
  • Model or data disclosure. This threat includes information leaks from internal or external sources, affecting the model, its parameters, or training data.
  • Compromise of components of an AI application. For example, hostile actors successfully exploit vulnerabilities in an open-source library used to develop the AI algorithm.
  • Failure or malfunctioning of an AI application. For example, through a successful DoS attack, the introduction of malicious input, or an undetected bug in the code.
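To make the evasion threat concrete, the following is a minimal sketch of the fast gradient sign method (FGSM), one well-known way to craft adversarial examples, applied to a toy logistic-regression model with synthetic weights and inputs. Real attacks target far more complex models, and the epsilon value here is an arbitrary illustration.

```python
import numpy as np

# Toy illustration of evasion via the fast gradient sign method (FGSM)
# against a synthetic logistic-regression "model". Weights and inputs
# are random placeholders; real attacks target far more complex models.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
w = rng.normal(size=8)   # model weights (assumed known to the attacker)
b = 0.1
x = rng.normal(size=8)   # a legitimate input
y_true = 1.0             # its correct label

p = sigmoid(w @ x + b)
# Gradient of the cross-entropy loss with respect to the input.
grad_x = (p - y_true) * w

# FGSM: nudge the input in the direction that most increases the loss.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean prediction:       {sigmoid(w @ x + b):.3f}")
print(f"adversarial prediction: {sigmoid(w @ x_adv + b):.3f}")
```

The perturbation is small per feature, yet it systematically pushes the model’s output away from the correct decision, which is exactly what evasion controls must detect or withstand.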

These broad categories of threats can be translated into a map of specific vulnerabilities that may be present in AI systems or their supply chains: ineffective management of access to critical information, use of vulnerable components to develop the AI system, poor control of the data retrieved by the model, or failure to integrate the AI system’s security assessment into the organization’s security strategy to improve its resilience.

4.2. AI security management

Security management is essential to protect systems from threats and to detect vulnerabilities before they are successfully exploited. To this end, it is necessary to implement security controls and conduct security tests based on information gathered during security assessments.

In addition, an essential aspect of AI systems must also be considered: their trustworthiness. ENISA’s cybersecurity best practice framework for AI defines trustworthiness as ‘the confidence that AI systems will behave within specified standards, based on certain characteristics’. These characteristics can be systematized into three broad groups:

  • Technical design characteristics: accuracy, reliability, robustness, resilience.
  • Socio-technical characteristics: explainability, interpretability, privacy, security, bias management.
  • Principles contributing to the reliability of the AI system: fairness, accountability, transparency.

Cybersecurity is essential in the development of AI

4.2.1. Security controls

ENISA’s cybersecurity best practice framework for AI proposes several specific security controls to prevent and mitigate the main threats outlined above and to ensure the reliability of systems:

  • Evasion. Implement tools to detect whether an input is an adversarial example, use adversarial training to strengthen the security of the model, or use models that are not easily transferable to prevent hostile actors from studying the AI system’s algorithm.
  • Poisoning. To prevent poisoning attacks, it is critical to secure system components throughout their lifecycle, continuously assess the cyber exposure of the model the system employs, expand the size of the dataset to reduce the influence of malicious samples on the model, and implement pre-processing mechanisms to clean the training data (a minimal filtering sketch follows this list).
  • Model or data disclosure. Mechanisms to control access must be robust.
  • Compromise of AI application components. Reducing the level of component compromise involves implementing appropriate security policies integrated into the organization’s security strategy and IT asset management.
  • Failure or malfunctioning of the AI application. To prevent AI application failures, it is essential that algorithms have low bias and are continuously evaluated, both to ensure their resilience in the environment in which they will operate and to detect vulnerabilities in them.
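As one hedged illustration of the pre-processing control mentioned for poisoning, the sketch below filters out training samples that deviate strongly from a robust estimate of the data’s centre. The synthetic data, the MAD-based rule, and the threshold are illustrative assumptions rather than an ENISA prescription.

```python
import numpy as np

# Minimal sketch of one pre-processing control against data poisoning:
# drop training samples that deviate strongly from the robust centre of
# the dataset. Data, rule, and threshold are illustrative choices.

rng = np.random.default_rng(2)
clean = rng.normal(0.0, 1.0, size=(1000, 4))    # legitimate samples
poisoned = rng.normal(8.0, 0.5, size=(20, 4))   # injected outliers
X = np.vstack([clean, poisoned])

median = np.median(X, axis=0)
mad = np.median(np.abs(X - median), axis=0) + 1e-9  # robust scale

# Flag samples whose deviation exceeds the threshold in any feature.
z = np.abs(X - median) / mad
keep = (z < 6.0).all(axis=1)

print(f"kept {keep.sum()} of {len(X)} samples; dropped {(~keep).sum()} suspicious ones")
X_filtered = X[keep]
```

Filtering of this kind is only one layer of defence; it complements, rather than replaces, supply-chain controls on where training data comes from.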

4.2.2. Security testing

Security testing of AI has many commonalities with security testing of traditional software applications, but it must also take into account the specificities of this technology:

  • Differences between sub-symbolic AI and traditional systems impact security and how testing is performed.
  • AI systems can evolve through self-learning, so security properties can degrade over time. Hence, dynamic testing is essential to check the effectiveness of the implemented security controls.
  • In contrast to traditional software systems, the behavior of sub-symbolic AI systems is shaped by their training data.

Therefore, following ETSI’s AI security testing report, it is necessary to:

  • Employ new approaches to security testing for AI.
  • Use AI security test oracles to determine whether a test has passed, i.e., whether a vulnerability was detected or none could be found (a minimal robustness oracle is sketched after this list).
  • Define criteria for the adequacy of AI security tests to measure overall progress in cybersecurity and establish when a security test should be stopped.
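The following is a minimal sketch of what such a test oracle might look like for a robustness property: the test passes only if small random perturbations of a known input do not change the model’s decision. The stand-in model, noise scale, and trial budget are assumptions made for the example; ETSI’s report does not prescribe this particular oracle.

```python
import numpy as np

# Minimal sketch of an AI security test oracle for robustness: the test
# passes only if small random perturbations of a known input leave the
# model's decision unchanged. The model is a stand-in stub.

def model_predict(x: np.ndarray) -> int:
    """Stand-in for the real model under test."""
    return int(x.sum() > 0.0)

def robustness_oracle(x: np.ndarray, noise_scale: float = 0.05, trials: int = 100) -> bool:
    """Return True (test passed) if the decision is stable under noise."""
    rng = np.random.default_rng(3)
    reference = model_predict(x)
    for _ in range(trials):
        perturbed = x + rng.normal(0.0, noise_scale, size=x.shape)
        if model_predict(perturbed) != reference:
            return False   # vulnerability detected: decision flipped
    return True            # no vulnerability detected within this budget

def test_prediction_stable_under_noise():
    x = np.array([0.8, 1.2, -0.3, 0.5])  # input far from the decision boundary
    assert robustness_oracle(x)
```

The adequacy criteria mentioned above would then define, for example, how many such inputs and how large a perturbation budget must be covered before testing can stop.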

5. Cybersecurity best practices for specific sectors (Layer 3)

The third layer of the cybersecurity best practices framework for AI is focused on proposing specific measures for some critical sectors at an economic and social level, in which, moreover, Artificial Intelligence already plays a key role:

  • Energy. Cybersecurity in this sector can be hampered by the use of technologies with known vulnerabilities, the lack of a cybersecurity culture among companies, suppliers, and contractors, or outdated control systems.
  • Healthcare. Cyberattacks on hospitals and medical centers have been on the rise in recent times. For example, a cyberattack paralyzed the activity of the Hospital Clínic in Barcelona. Criminals can attack medical devices, communication channels, and applications. AI is set to play an essential role in the healthcare field, so it is vital to protect the models and data of these systems.
  • Automotive. Automotive is a sector that has always been at the forefront of robotization and AI solutions, so much so that the production of autonomous vehicles could bring about a radical change in our economy and society. Hence, cybersecurity is essential to prevent:
    • Cyberattacks against image processing models that enable traffic sign recognition and lane detection.
    • Data poisoning attacks on stop sign detection.
    • Attacks related to large-scale deployments of fraudulent firmware.
  • Telecommunications. The integration of AI systems can be essential to have networks capable of self-optimizing, using predictive analytics to improve maintenance and increase security by detecting fraudulent activities. This implies having a robust security strategy that protects all AI systems employed and prevents tampering or service interruption.

6. The future of the AI-cybersecurity binomial

The cybersecurity best practices framework for AI designed by ENISA highlights the close relationship between Artificial Intelligence and cybersecurity, and how the synergies between the two are essential to building a secure world, especially as AI becomes increasingly relevant economically and socially.

What strategic issues can make the difference in building a secure AI ecosystem? At the end of its guide on good cybersecurity practices for AI, ENISA proposes a series of recommendations for cybersecurity experts and for companies developing or integrating AI systems.

6.1. How to address AI cybersecurity

  • Dynamic assessment of data sources and data, since the reliability of AI algorithms depends on them.
  • Continuous data security analysis throughout the data lifecycle, since data poisoning can occur at any time (a minimal drift-monitoring sketch follows this list).
  • As opposed to static security testing, a cutting-edge methodology such as dynamic cybersecurity risk assessment should be chosen, as AI systems are characterized by constantly learning and evolving. Dynamic cybersecurity risk analysis and threat prioritization are essential for securing AI systems, especially Machine Learning systems, throughout their entire lifecycle.
  • Collaboration between cybersecurity experts, data scientists, and other professionals such as psychologists or lawyers is essential to identify emerging threats, take effective countermeasures, and improve the resilience of AI systems.
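As a minimal sketch of the continuous data analysis recommended above, the snippet below compares a feature’s distribution in newly collected data against the reference training data with a two-sample Kolmogorov-Smirnov test (via SciPy) and flags significant drift, which may indicate poisoning, upstream pipeline failures, or a shift in the data source. The synthetic data and the 0.01 significance threshold are illustrative assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(4)
# Feature values observed at training time versus in freshly collected data
# (synthetic placeholders for the sketch).
reference = rng.normal(0.0, 1.0, size=5000)
incoming = rng.normal(0.6, 1.0, size=1000)

# Two-sample KS test: has the feature's distribution shifted?
statistic, p_value = ks_2samp(reference, incoming)
if p_value < 0.01:
    print(f"Drift detected (KS={statistic:.3f}, p={p_value:.2e}): review data sources")
else:
    print("No significant drift detected in this feature")
```

Run periodically over all monitored features, a check of this kind feeds the dynamic risk assessment with evidence that the data the model consumes is still trustworthy.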

In short, Artificial Intelligence is a constantly evolving field set to generate enormous transformations in the productive fabric and our way of life. Therefore, cybersecurity strategies must pay attention to the specificities of AIs and protect the models and data they consume and the infrastructure that hosts them.

Collaboration between cybersecurity experts, data engineers, and data scientists will be crucial to build a secure, reliable, and compliant AI ecosystem. Just as AI helps optimize cybersecurity services and increase their efficiency, cybersecurity professionals’ knowledge, skills, and capabilities are essential to design, develop, deploy, and maintain secure AI systems.

More articles in this series about AI and cybersecurity

This article is part of a series of articles about AI and cybersecurity:

  1. What are the AI security risks?
  2. Top 10 vulnerabilities in LLM applications such as ChatGPT
  3. Best practices in cybersecurity for AI
