In just a couple of years, AI seems to have permeated a head-spinning number of aspects of our lives. And few groups seem to be benefiting from and innovating with AI more than foreign hackers and cybercriminals.
On Feb. 14, Microsoft released a report detailing, among other things, how state-backed hackers from Russia, China, and Iran have been using tools from Microsoft-backed OpenAI to hone their skills and trick their targets.
As first reported by Reuters, one way these groups have been leveraging OpenAI is by using its large language models (LLMs), which draw on enormous amounts of text, to generate more human-sounding responses.
“Independent of whether there’s any violation of the law or any violation of terms of service, we just don’t want those actors that we’ve identified – that we track and know are threat actors of various kinds – we don’t want them to have access to this technology,” Microsoft Vice President for Customer Security Tom Burt told Reuters in advance of the report’s release.
In its reporting, Reuters breaks down how each of these nations has been utilizing OpenAI.
These revelations by Microsoft are but one drop in a growing storm related to how AI is being used to craft more effective ways to commit cyberattacks.
Just this week, OpenAI announced a new AI tool, Sora, that allows users to create stunning videos from text prompts. While Sora hasn't yet been released to the general public, it's almost mind-bending to consider the ways it could be used by bad actors.
As noted in a recent webinar hosted by Dror Liwer, co-founder and chief marketing officer of Coro, AI is being used to hack passwords more quickly and also improve the ability of crooks to falsify communications through email, social media, and other social engineering attacks.
And the threat is growing far beyond written communications. Just recently, a finance employee was deceived by a deepfake, multi-person video conference into transferring millions of dollars to criminals.
And, as cybersecurity expert Joseph Steinberg noted in our recent webinar, these types of AI tricks are not just impacting businesses.
“The reality is that criminals can now impersonate people so well their voices, their way of speaking,” Steinberg said on the webinar. “You take TikTok videos that a kid has made and feed it into an AI, and it can speak like that person. And so you get calls to parents where it’s a child pretending to be in trouble. And it’s coming from a criminal. And that’s happening. Now that’s already happening, and, as you said, it’s only getting worse.”
The good news is that AI is also helping to detect and protect against these threats, and it will continue to do so. To hear more cybersecurity predictions for AI in the year ahead, watch our webinar below or on-demand here.
*** This is a Security Bloggers Network syndicated blog from Blog – Coro Cybersecurity authored by Kevin Smith. Read the original post at: https://www.coro.net/blog/ai-is-the-new-major-accomplice-for-cyber-crimes