Microsoft is giving more organizations access to Security Copilot, its months-old generative-AI security tool, through an early access program.
The IT giant in March introduced Security Copilot, the latest iteration of the Copilot technology that Microsoft is aggressively planting throughout its massive product portfolio, including Bing, Edge, Microsoft 365, Windows, and OneDrive. As with other Copilot programs, Microsoft is promising that the security-focused offering will make it easier for security pros to quickly analyze the massive amounts of incoming data and put protections in place.
The digital security assistant, built atop a large language model (LLM), pulls in security signals and data from a range of other Microsoft products – including Sentinel, Microsoft 365 Defender, Intune, and Defender Threat Intelligence – analyzes them, and provides guidance to organizations’ security teams.
In addition, by enabling security professionals to expand and accelerate what they can do, Security Copilot also can address many of the challenges enterprises face due to the shortage of skilled security workers and enables less-experienced professionals to do more, according to Vasu Jakkal, corporate vice president of security, compliance, identity, and management at Microsoft.
“Security Copilot can effectively up-skill a security team, regardless of its expertise, save them time, enable them to find what previously they might have missed, and free them to focus on the most impactful projects,” Jakkal wrote in a blog post. “Microsoft 365 Defender and Security Copilot together help analysts focus on what matters most to protect faster.”
She said organizations using Security Copilot in the private preview program saved as much as 40% of their time on such tasks as writing complex queries based only on natural-language questions and summarizing security incidents. Companies participating in the private preview program include Dow, Fidelity, and Avanade.
Along with rolling out the early access program, Microsoft also is embedding Security Copilot into its Microsoft 365 Defender extended detection and response (XDR) platform and is including its Defender Threat Intelligence tool with Security Copilot at no charge.
Embedding Security Copilot with Microsoft 365 Defender will enable security teams to summarize incidents in natural language for a better understanding of threats or to share with others, run real-time analysis of malware, and guide security analysts – “of any skill level” – through remediation and response.
It also enables natural-language queries and automatically generates Kusto Query Language (KQL), which is used to more easily run such tasks as discovering patterns in data, identifying anomalies, and creating spreadsheets.
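To illustrate the kind of KQL such a natural-language request might be translated into, here is a hypothetical sketch (the table and column names follow common Sentinel conventions, but the exact query Security Copilot would produce is an assumption, not taken from Microsoft's documentation):

```kusto
// Hypothetical KQL for the request: "show failed sign-ins by user over the last 24 hours"
SigninLogs
| where TimeGenerated > ago(24h)
| where ResultType != 0                       // non-zero result codes indicate failed sign-ins
| summarize FailedAttempts = count() by UserPrincipalName
| order by FailedAttempts desc
```

The appeal of the feature is that an analyst can describe the question in plain English and review or refine the generated query, rather than recalling operator syntax such as `summarize` and `ago()` from memory.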
Making Defender Threat Intelligence and its API available for free to Security Copilot users will give them more information about security threats, adversaries, tools, and vulnerabilities, according to Jakkal. The vendor's 10,000 researchers and analysts receive 65 trillion security signals from clouds, devices, and workloads every day, she wrote.
“It provides a mechanism to connect indicators of compromise to finished intelligence, such as vulnerability articles, enriched open-source intelligence, and Microsoft’s own articles,” Jakkal wrote, adding that “customers may now access Defender Threat Intelligence directly to expose and eliminate modern cyberthreats and cyberattacker infrastructure, identify cyberattackers and their tools, and accelerate cyberthreat detection and remediation.”
IT and security firms are rushing to incorporate AI into their product portfolios as the methods used by bad actors get increasingly sophisticated. For example, Barracuda Networks in August outlined how its embedding of AI into its managed XDR service is helping enterprises.
Terra Nova Security earlier this month noted how AI in cybersecurity programs and products can enable faster threat detection and remediation, improve accuracy and efficiency, automate manual tasks, and process massive amounts of data and respond to threats much more quickly than humans.
However, the company also warned that threat groups are embracing AI for their own nefarious activities. The technology allows them to more easily spin up malware, craft more sophisticated phishing attacks, generate deepfakes, and build new hacking tools.
Jack Stockdale, founding CTO at cybersecurity firm Darktrace, echoed that sentiment, adding that increasingly the challenge is for organizations to use AI to protect against hackers that are using similar tools.
“In cyber security, AI is a double-edged sword,” Stockdale wrote in a blog post in September. “Its use by cyber-attackers is still in its infancy, but Darktrace expects that the mass availability of generative AI tools like ChatGPT will significantly enhance attackers’ capabilities by providing better tools to generate and automate human-like attacks.”