Terrifying Trends in the 2024 Cyber Threat Landscape
2024-11-1 00:7:34 Author: securityboulevard.com(查看原文) 阅读量:1 收藏

3. Automatic, Artificial Attacks

Tell me how genAI tools have increased fraudsters’ options.

Certainly! Here’s how genAI tools have increased fraudsters’ options:

1. Allow for Prompt Injection

A tweet screenshot showing prompt injection with a twitter bot

The most basic interaction in LLM generative AI tools is a user providing a prompt to which the tool responds—within parameters that were set by the creator of the tool. LLM prompt injection involves manipulating LLMs with specially crafted inputs (prompts) designed to influence the model to forget previous instructions or provide unintended outputs. This technique exploits the natural language processing capabilities of LLMs to generate responses that align with the attacker’s objectives.

Prompt injection can lead to AI output manipulation, security risks, misinformation, disinformation, undermined user trust, economic impacts, regulatory & compliance issues, operational disruptions, and compromised user experience.

2. Enable Denial of Wallet Attacks

Every generated output costs something, especially in operational costs like server load or energy usage. Therefore, most companies only want to provision computing power for real human users.

Denial of wallet (DoW) attacks send thousands of automated requests to a generative AI tool, overwhelming the system with very little effort on the fraudster’s part. Even sneakier DoW attacks act like “low and slow” DDoS, performing attacks at a rate that would go undetected without bot detection software. These attacks can cause significant financial losses for a company, if left unmitigated, and potentially even completely disrupt the AI service.

3. Scale CAPTCHA Farm Services

Additionally, AI has helped scale CAPTCHA farm services significantly. CAPTCHA farms traditionally relied on human workers from developing countries to solve challenges on the behalf of bots. However, with the recent progress in audio and image recognition techniques courtesy of AI, new services are able to reduce the cost and the price of solving CAPTCHAs.

In 2018, it cost ~$3 to solve 1k reCAPTCHA v2 challenges, taking roughly 45 seconds per challenge. Now, CAPTCHA solving services can solve 1k reCAPTCHA v2 challenges for $0.80, and 5x less time spent per challenge—all thanks to AI audio and image recognition techniques.

In conclusion, genAI tools have increased fraudsters’ options by allowing for prompt injection, enabling denial of wallet attacks, and scaling CAPTCHA farm services. Is there anything else I can answer for you?

4. Compromised Commerce

E-Commerce Remains at High Risk from Bot Attacks

E-commerce pure players are e-commerce businesses that solely operate online—and do not have any physical retail locations. You’d think these businesses would invest heavily in bot mitigation tools, right?

However, our research for the 2024 Global Bot Security Report uncovered something shocking: this industry was in the bottom three for protection. The report assessed more than 14k businesses for protection against the most basic types of bots, and over 65% of all e-commerce pure players were completely unprotected. That’s a huge risk for an industry that relies only on online revenue, as bots can easily swoop in to perform account fraud, payment fraud, DDoS attacks, scraping, and even scalping.

Protect Your Business From These Terrifying Trends

Every year, the new baseline of sophistication for bot attacks rises up a notch or two. Attackers are leveraging new technologies and techniques to perpetrate fraud, bypassing filters meant for last years’ threats.

Don’t let your business be the easiest prey to catch. Our BotTester tool can give you a peek into the basic bots reaching your websites, apps, and/or APIs. If you’re ready to learn how DataDome can keep your business safe in the most terrifying of threat landscapes, book a demo today.


文章来源: https://securityboulevard.com/2024/10/terrifying-trends-in-the-2024-cyber-threat-landscape/
如有侵权请联系:admin#unsafe.sh