As Microsoft aggressively integrates AI into its broad portfolio of products and services, the IT giant now is looking for help to ensure they are free of vulnerabilities.
The company this month unveiled a new bug bounty program that will pay between $2,000 and $15,000 for flaws found in its AI-powered Bing offerings, including its various browser iterations, Bing Chat, Bing Chat for Enterprise, and Bing Image Creator.
In addition, the program will cover AI-powered Bing integration in the Windows Edge browser as well as integration in Microsoft Start application and Skype Mobile application, both in iOS and Android devices.
“The new Microsoft AI bounty program comes as a result of key investments and learnings over the last few months, including an AI security research challenge and an update to Microsoft’s vulnerability severity classification for AI systems,” Lynn Miyashita, technical program manager II with Microsoft Security Response Center, wrote in a blog post.
Microsoft in August updated its existing severity classification to address flaw categories related to the company’s use of AI in its product and services portfolios.
Worries about security risks associated with the use of AI have been pushed to the forefront over the past year with the wide popularity and rapid streamlining of OpenAI’s ChatGPT and similar generative AI tools, such as Google’s Bard, with concerns reaching as high up as company boards of directors.
The list of security concerns is long, as outlined recently by Malwarebytes, and include bad actors using generative AI and large-language models (LLMs) for their own nefarious activities, the LLMs leaking private data, the theft of AI models, and data manipulation.
Gartner analysts last month said that 34% of organizations responding to a survey said they already are using AI application security tools to address the risks that come with generative AI and that 56% said they are exploring these kinds of solutions. In addition, 57% said they were concerned about secrets being leaked in AI-generated code and 58% are worried about incorrect or biased outputs.
“Organizations that don’t manage AI risk will witness their models not performing as intended and, in the worst case, can cause human or property damage,” Avivah Litan, vice president and distinguished vice analyst at Gartner, said in a statement. “This will result in security failures, financial and reputational loss, and harm to individuals from incorrect, manipulated, unethical or biased outcomes.”
Microsoft is turning to the community of security researchers to help find and fix critical or high-severity flaws in its AI-powered offerings. The company is looking for vulnerabilities in inference or model manipulation and inferential information disclosure.
In addition, the vendor also wants to hear about vulnerabilities that influence or change Bing’s chat behavior in ways that impact other users, modify Bing chats by adjust client or server visible configuration – think setting debug flags or changing feature flags – or break Bing’s cross-conversation memory protections and delete histories.
Other areas of focus are flaws that reveal Bing’s internal workings and prompts, decision-making processes, and confidential information or bypass Bing’s chat mode session limits, restrictions, or rules.
Analysts should submit their findings to Microsoft’s MSRC Researcher Portal. Those submissions should identify a flaw in the AI-powered Bing that hadn’t already been reported to or known by Microsoft.
Other information can be found on the program’s page and through its FAQ site.
Microsoft isn’t the first vendor to offer a bug bounty program for AI applications. OpenAI – the developer of ChatGPT and GPT-4 LLM – announced such an effort in April in partnership with Bugcrowd, which offers a crowd-sourced bug bounty program. OpenAI officials said that they “recognize the critical importance of security and view it as a collaborative effort.”
Microsoft has invested billions in OpenAI and acquired a stake in the company.
In addition, advisory and technology firm Project AI in August acquired Huntr.dev, a platform that pays researchers for finding bugs in open source software, and launched huntr, an AI and machine-learning bug bounty program aimed at protecting AI and ML open-source software, foundation models, and machine-learning systems.
Recent Articles By Author