Good morning from New York City! Recorded Future News will be providing live coverage from the Aspen Cyber Summit today at the 92nd Street Y, where we're serving as the presenting media sponsor.
You can follow along live online to see Suzanne Smalley, Recorded Future News' privacy reporter, moderate a panel on cybersecurity in a world of generative AI, or read our coverage below from reporters on our team. We will be posting stories, analysis, and interviews throughout the day, and you can view the full agenda here.
Stay tuned for more updates...
By Jonathan Greig
Updated 6:00 p.m. EST
The U.S. Department of Homeland Security (DHS) is using generative artificial intelligence for a range of missions, including detecting the manufacture of dangerous materials and combating human trafficking.
During a panel at the Aspen Cyber Summit on Wednesday, DHS Assistant Secretary Iranga Kahangama walked the audience through several tasks that officials are now carrying out with machine learning and generative AI.
“We have many different operational applications of AI that we are actively using and looking to improve upon using and whether it's on the generative side or more traditional machine learning type applications,” he said.
“Leveraging AI to make our casework more efficient for human trafficking or child sex, [and for] abuse material online and combating that. We also have a countering weapons of mass destruction mission that we've been tasked with at the border, leveraging AI to create systems to detect when malicious types of chemical [or] biological materials are being created.”
In addition to its use internally, Kahangama noted DHS’ role — through the Cybersecurity and Infrastructure Security Agency (CISA) — in helping private organizations use generative AI responsibly.
Alongside the White House’s recent executive order on responsible AI use, CISA has released its own project exploring how the technology can be used for cybersecurity efforts.
Kahangama said CISA is working to put out guidelines for how generative AI is used in red teaming exercises — where hired hackers test the security of an organization. CISA is working with sector risk management agencies and other stakeholders to help organizations “implement safe and secure use of AI.”
He added that one benefit of the rush to use generative AI is that cybersecurity concepts can be applied from the beginning.
“I think the onslaught of generative AI has made us more receptive to putting guardrails on early and thinking through how we can almost look back with the benefit of hindsight. What we didn't do in cybersecurity, baking it in from the start, we have the opportunity to bake that in AI right now,” he said.
Kahangama added that DHS is looking into using AI for other tasks, including how Border Patrol inspects incoming goods and how officials can sift through reports that come through Homeland Security Investigations — allowing them to pick out license plates or names associated with human trafficking.
By Suzanne Smalley
Updated 4:40 p.m. EST
Erik Gerding, the director of the division of corporation finance at the Securities and Exchange Commission, said the SEC pushed forward a recent cybersecurity disclosure rule in part because it was concerned about the underreporting of cybersecurity incidents by public companies.
The rule, which is set to go into effect next month, has been criticized by industry groups and congressional Republicans, who are planning to try to overturn it through a rarely used procedure known as the Congressional Review Act.
The rule requires public companies to disclose cybersecurity incidents within four business days of determining they are material, with a provision allowing disclosure to be delayed for incidents that the Attorney General determines could pose a national security risk if made public.
Investors deserve prompt information on cyber incidents, Gerding said, calling them "very similar to other kinds of risks companies face" such as equipment burning down or interest rate movements.
Read more about his remarks here.
By Jonathan Greig
Updated 4:23 p.m. EST
Google will distribute another 100,000 free pieces of security hardware to protect people involved in high-risk industries, the company announced Wednesday.
Google’s Titan Security Keys work as a “second factor” that can be used after passwords are entered. They can also store passkeys — which let users sign in to apps and sites the same way they unlock their devices: with a fingerprint, a face scan or a screen lock PIN.
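For readers curious what passkey creation looks like under the hood, here is a minimal, illustrative TypeScript sketch of the browser-side WebAuthn call a website might make; the relying-party name, account details and option values are hypothetical placeholders, not anything specific to Google’s Titan keys.

```typescript
// Illustrative sketch of creating a passkey with the WebAuthn API.
// All names and values here are placeholders for demonstration only.
async function registerPasskey(): Promise<void> {
  // In a real flow, the challenge is issued by the server; random bytes
  // stand in for that server-provided value here.
  const challenge = crypto.getRandomValues(new Uint8Array(32));

  const credential = await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example Service" }, // hypothetical relying party
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)),
        name: "user@example.com", // hypothetical account
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        // "cross-platform" covers hardware security keys; the unlock step
        // (PIN, fingerprint or face) is handled by the authenticator itself.
        authenticatorAttachment: "cross-platform",
        residentKey: "required", // a discoverable credential, i.e. a passkey
        userVerification: "required",
      },
    },
  });

  if (credential) {
    // The returned public-key credential is what a site would store;
    // the private key never leaves the security key or device.
    console.log("Created credential ID:", credential.id);
  }
}
```

In a real deployment the challenge and the returned attestation would round-trip through the site’s server, but the division of labor is the same: the private key stays on the security key or device, and the service only ever holds the public half.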
Read our full coverage here.
By Suzanne Smalley
Updated 3:15 p.m. EST
Michigan Secretary of State Jocelyn Benson said Wednesday that one of her top worries about the 2024 elections stems from the potential for artificial intelligence to fuel what she called “hyper-localized” dissemination of mis- and disinformation.
“Imagine on election day, information goes out about long lines [in a given precinct] that are calling for violence that is false, but it's generated through artificial intelligence,” Benson said during an interview at the Aspen Cyber Summit in New York.
Read our full coverage of Benson's remarks here.
By Jonathan Greig
Updated 2:43 p.m. EST
CISA Executive Assistant Director for Cybersecurity Eric Goldstein pointed to the Viasat attack last February as an example of how important redundancy and resiliency are for defenders.
"It actually wasn't that impactful for the Ukrainian military because they have built in resilient communications to make sure that they were able to quickly alter measures and keep fighting," he said in a panel with retired U.S. Air Force general Jack Weinstein.
Read our full coverage of his remarks here.
By Martin Matishak
Updated 2:20 p.m. EST
At a lunch panel moderated by Recorded Future News' Dina Temple-Raston, the NSA's Rob Joyce said Israel is experiencing direct cyber and misinformation attacks from a variety of adversaries as it battles Hamas.
By Martin Matishak
Updated 11:30 a.m. EST
China will likely try to keep the U.S. “focused domestically” by hacking the country’s transportation and other sectors should Beijing decide to invade Taiwan, the National Security Agency’s cyber chief predicted.
“I would expect transportation and logistics, defense companies all to be hit pretty hard with the intent of breaking those supply chain lines and the ability to deliver material,” Rob Joyce said during a panel discussion.
"If you can stop shipping, if you can stop air, if you can stop the rail that feeds” supplies to the theater of war “those will all be things that will be focused on and targeted,” he added.
The warning comes as national security officials and U.S. policymakers are increasingly on edge that China will invade Taiwan.
"There is a real and tangible threat from the PRC against our critical infrastructure,” according to Joyce, noting that officials have previously observed “pre-positioning” in critical infrastructure.
By Adam Janofsky
Updated 9:40 a.m. EST
Laurie Locascio, the director of the National Institute of Standards and Technology (NIST), kicked off the conference by discussing what will surely be a recurring theme: artificial intelligence.
NIST, which has taken a leading role in developing standards for cybersecurity, as well as related areas like quantum computing, was included in the White House’s recent executive order on AI in a number of ways. The agency is tasked with helping develop guidelines and best practices for AI safety and security, as well as setting standards for things like red team testing.
“We are ultimately trying to develop trust in technology,” Locascio told Jeff Greene, senior director at Aspen Digital. NIST and the White House’s key goal is developing “safe, secure and trusted AI.”
Locascio also discussed ongoing efforts to develop cryptographic algorithms that can keep data secure against future quantum computers.
“We don’t have thousands of people working on this,” Locascio said, adding that the agency will likely finalize winning algorithms in its post-quantum cryptography project next year. “We have a relatively small team and we asked the world to give us their best solutions.”
“Migrating to post-quantum cryptography is going to take years and years, it’s going to be expensive, it’s not going to be cheap,” she added.