Fake Biden Robocall Demonstrates the Need for Artificial Intelligence Governance Regulation
January 24, 2024 | Source: www.trustwave.com

The proliferation of artificial intelligence tools worldwide has generated concern among governments, organizations, and privacy advocates over the general lack of regulations or guidelines designed to protect against misusing or overusing this new technology.

The need for such protective measures came to the forefront just days before the New Hampshire Presidential Primary, when a potentially AI-generated robocall mimicking President Joe Biden was sent to potential voters telling them to stay away from the polls, according to published reports.

The voice mimicking Biden stated, “Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again. Your vote makes a difference in November, not this Tuesday.”

The New Hampshire attorney general’s office is investigating the incident.

An International Response

In response to the overall need for AI governance, governments have begun publishing frameworks that start the process of building protective measures and legislation to guide the use of AI. These efforts are not yet coordinated internationally; each nation is taking its own approach with new AI-related steps.

AI is even causing concern at The Vatican. Pope Francis recently called for a binding international treaty to oversee the development and application of AI, sharing concerns about the potential for a “technological dictatorship,” according to Computing.co.uk. In the Pope’s World Day of Peace message, he emphasized the importance of regulating AI to prevent harmful practices and promote best practices. 

The International Association of Privacy Professionals has created a global tracker to follow these developments, but let’s dive into a few to see how nations are developing plans to govern AI.

United States

Like most nations, the US does not have a comprehensive AI regulation in place, but it has been busy pushing out frameworks and guidelines. In addition, Congress has passed legislation to preserve US leadership in AI research and development and control government use of AI. 

In October 2023, President Joe Biden signed the first-ever Executive Order designed to regulate and formulate the safe, secure, and trustworthy development and use of artificial intelligence within the United States.

In general, Trustwave’s leadership commended the Executive Order but raised several questions concerning the government’s ability to enforce the order and the impact it may have on AI’s development in the coming years. The 111-page order covers a myriad of AI-related topics designed to protect privacy, enhance law enforcement, ensure responsible and effective government use of AI, stand up for consumers, patients, and students, support workers, and promote innovation and competition.

European Union

In December 2023, the European Parliament and the Council reached a provisional agreement on the final version of the European Union Artificial Intelligence Act, possibly the first-ever comprehensive legal framework on artificial intelligence.

The EU introduced the EU AI Act in April 2021, and it is expected to go into effect in 2026.

The EU AI Act is tiered, with obligations that depend on the risk level posed by the AI system. AI systems presenting a limited risk would be subject to correspondingly light transparency obligations, such as informing users that the content they are engaging with is AI-generated. Examples of limited-risk AI include chatbots and deepfakes.

High-risk AI systems will be allowed but will come under tougher scrutiny and requirements, such as undergoing a mandatory fundamental rights impact assessment. High-risk AI systems are those used in sensitive areas, such as welfare, employment, education, and transport.

AI uses demonstrating unacceptable levels of risk would be prohibited. These include social scoring based on social behavior or personal characteristics, emotion recognition in the workplace, and biometric categorization to infer sensitive data, such as sexual orientation, according to the law firm Mayer Brown.
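To make the tiered approach concrete, here is a minimal Python sketch of how an organization might triage its own AI use cases against the Act’s risk categories. The tier names follow the descriptions above, but the example use cases and the classify_risk helper are illustrative assumptions, not text drawn from the regulation.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative tiers mirroring the EU AI Act's risk-based approach."""
    UNACCEPTABLE = "prohibited"   # e.g., social scoring, workplace emotion recognition
    HIGH = "high risk"            # e.g., welfare, employment, education, transport
    LIMITED = "limited risk"      # e.g., chatbots, deepfakes (transparency duties)
    MINIMAL = "minimal risk"      # everything else


# Hypothetical mapping of example use cases to tiers, based on the summary above.
# A real assessment would require legal review of the final legislative text.
EXAMPLE_USE_CASES = {
    "social scoring based on personal characteristics": RiskTier.UNACCEPTABLE,
    "emotion recognition in the workplace": RiskTier.UNACCEPTABLE,
    "resume screening for employment decisions": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.MINIMAL,
}


def classify_risk(use_case: str) -> RiskTier:
    """Return the illustrative tier for a known example, defaulting to MINIMAL."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)


if __name__ == "__main__":
    for case, tier in EXAMPLE_USE_CASES.items():
        print(f"{case}: {tier.value}")
```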

United Kingdom

Like the US, the UK does not yet have comprehensive AI regulation in place but will rely on existing sectoral laws to impose guardrails on AI systems. The UK’s National AI Strategic Action Plan focuses on harnessing AI as an engine for economic growth while also taking protective measures into consideration.

On the economic front, the UK will invest in and plan for the long-term needs of its AI ecosystem to continue its leadership as a science and AI superpower, support the transition to an AI-enabled economy, capture the benefits of innovation domestically, and ensure AI benefits all sectors and regions. Finally, the plan commits the UK to getting the national and international governance of AI technologies right, encouraging innovation and investment while protecting the public and the country’s fundamental values.

Under the plan, the UK will strive to be the most trustworthy jurisdiction for the development and use of AI, one that protects the public and the consumer while increasing confidence and investment in AI technologies in the UK.

To accomplish this goal, the UK has established an AI governance framework, developed practical governance tools and standards, and has published several papers laying out its methodology. 

Australia

Australia just published its interim response to the Safe and Responsible AI in Australia consultation, which outlines the government’s official takeaways after receiving input from the public, academia, and businesses on safe and responsible AI. The interim paper focuses on governance mechanisms to ensure AI is developed and used safely and responsibly in Australia. These mechanisms can include regulations, standards, tools, frameworks, principles, and business practices intended to help address public concerns.

The paper noted, “There is low public trust that AI systems are being designed, developed, deployed and used safely and responsibly. This acts as a handbrake on business adoption, and public acceptance. Surveys have shown that only one-third of Australians agree Australia has adequate guardrails to make the design, development, and deployment of AI safe.”

The paper said these and other concerns call for testing (for example, testing products before and after release), transparency (for example, labeling AI systems in use or watermarking AI-generated content), and accountability (for example, requiring training for developers and deployers and imposing clearer obligations to make organizations accountable and liable for AI safety risks).
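As a rough illustration of the transparency mechanisms mentioned above, the sketch below shows one hypothetical way a service could attach a plain disclosure label to AI-generated output before returning it. The envelope format and field names are assumptions for illustration only; real labeling or watermarking schemes would follow whatever standards regulators ultimately adopt.

```python
import json
from datetime import datetime, timezone


def label_ai_generated(content: str, model_name: str) -> str:
    """Wrap AI-generated content with a simple, human-readable provenance label.

    The envelope format here is purely illustrative; it is not drawn from any
    Australian standard or existing watermarking specification.
    """
    envelope = {
        "content": content,
        "ai_generated": True,
        "generator": model_name,  # hypothetical model identifier
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "disclosure": "This content was generated by an AI system.",
    }
    return json.dumps(envelope, indent=2)


if __name__ == "__main__":
    print(label_ai_generated("Here is a draft reply to the customer...", "example-llm"))
```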

The regulatory measures listed above are a small fraction of what is being done worldwide. AI holds a great deal of promise: the technology can make organizations much more efficient. At the same time, AI’s ability to gather data could run up against privacy laws, and there is the added danger of threat actors using AI to conduct even more powerful cyberattacks.
