U.S. Deputy Attorney General Lisa Monaco spoke last month at the University of Oxford in the UK, outlining the different paths the Justice Department is taking to address the benefits and threats associated with AI.
Monaco said existing laws offer a “firm foundation” as the law regarding AI evolves, just as they did with cybersecurity.
“Discrimination using AI is still discrimination,” she said. “Price fixing using AI is still price fixing. Identity theft using AI is still identity theft. … Our laws will always apply.”
That said, the law has long applied harsher penalties to crimes committed using a weapon – a gun, for example – that enhances the danger of the crime. Like a gun, AI can enhance that danger, so the justice system will respond, Monaco said.
In a talk this month at the American Bar Association’s 39th National Institute on White Collar Crime in San Francisco, Monaco returned to that theme, saying that prosecutors will be seeking harsher sentences for both corporations and individuals who use AI when committing white-collar crimes.
“We have long used sentencing enhancements to seek increased penalties for criminals whose conduct presents especially serious risks to their victims and to the public at large, like increased penalties for criminals that use firearms or other dangerous weapons,” she said. “The same principle applies to AI. Where AI is deliberately misused to make a white-collar crime significantly more serious, our prosecutors will be seeking stiffer sentences – for individual and corporate defendants alike.”
She added that “all new technologies are a double-edged sword, but AI may be the sharpest blade yet. It holds great promise to improve our lives, but great peril when criminals use it to supercharge their illegal activities, including corporate crime.”
It was part of a larger speech about the myriad steps the DOJ is taking regarding AI, with Monaco pointing to other initiatives the department is putting in place. That includes instructing prosecutors who assess a company’s compliance with federal security regulations to now include in their reports how the company addresses AI.
The new directive for the DOJ’s Criminal Division adds a corporation’s handling of “disruptive technology risks” to the guidance prosecutors use to evaluate corporate compliance programs.
“While we work to responsibly harness the benefits of AI, we are alert to its risks, and we will be using our tools in new ways to address them,” she said. “And compliance officers should take note. When our prosecutors assess a company’s compliance program – as they do in all corporate resolutions – they consider how well the program mitigates the company’s most significant risks. And for a growing number of businesses, that now includes the risk of misusing AI.”
In addition, at the talk in the UK, Monaco introduced Justice AI, an effort to bring together individuals from industry, law enforcement, academia, science, and civil society in a series of meetings over six months to address the influence of AI, how it will affect the DOJ’s actions, and how the government can help society reap the benefits of AI while mitigating its risks.
She raised the initiative again to the ABA, saying that the DOJ “will use these conversations to inform the department’s AI policy on a range of fronts, including the corporate compliance issues I’ve asked the Criminal Division to consider.”
President Biden in late October 2023 issued an executive order regarding the safe and secure development and use of AI, noting that “responsible AI use has the potential to help solve urgent challenges while making our world more prosperous, productive, innovative, and secure. At the same time, irresponsible use could exacerbate societal harms such as fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security.”
The EO called for ensuring that AI technology is safe and secure, investing in AI-related education, training, research, and development, managing risk, and protecting American workers, consumers, civil rights, and privacy.
It also called on all parts of the federal government, including the DOJ, to be involved. As part of that effort, the department last month named Jonathan Mayer as its first chief science and technology advisor and chief AI officer. Mayer will advise Attorney General Merrick Garland and other DOJ leaders about cybersecurity, AI, and other emerging technologies.
He’ll also weigh in on the DOJ’s initiatives to build out its technology capabilities, including recruiting skilled people to the department.
Mayer comes from Princeton University, where he has been an assistant professor in the Department of Computer Science and the School of Public and International Affairs.