Deepfake Attacks Prompt Change in Security Strategy
2024-07-26 | Source: securityboulevard.com

A surge in deepfake attacks and identity fraud has led organizations to develop more effective response strategies, with 60% of IT and security professionals implementing defenses against AI-generated deepfakes.

These were among the findings of a GetApp survey of more than 2,600 IT and cybersecurity professionals worldwide, which revealed that, among the 73% of US companies with a dedicated deepfake response plan, 57% are making simulation exercises a key deepfake defense tactic.

Nearly half (48%) of global respondents who had suffered a cyberattack in the past 18 months are prioritizing network security improvements, whilst 41% are putting data encryption in place.

More than three-quarters (77%) of survey respondents work in companies that have increased their investments in cybersecurity over the last 18 months.

David Jani, security analyst at software marketplace GetApp, explained that an effective deepfake response plan must be actionable and simple enough for all employees to understand.

“It should include clear protocols for detecting deepfakes, such as training staff to identify anomalies in biometric verification processes and implementing tools for identity fraud and deepfake detection,” he said.

This plan should also emphasize the importance of regular training sessions to keep staff vigilant and updated on the latest deepfake tactics. Much like IT departments test employees with mock malicious-link emails, they should test employees with mock identity fraud attempts.

According to the survey, 68% of US businesses prioritize this training (above the global average of 65%), ensuring their teams are prepared to identify and respond to deepfake threats effectively.

Jani added organizations can keep their deepfake response plans current by continuously monitoring industry trends and integrating new technologies.

“Staying connected with security providers, engaging with professional networks, participating in cybersecurity conferences and following the news can also provide essential insights to mitigate threats,” he said.

The survey indicated nearly half of global security professionals engage with industry groups, and 60% attend cybersecurity conferences to stay informed on the latest threats and solutions.

Adopt Zero-Trust Architecture

However, recent research reveals IT and security leaders feel ill-equipped to defeat both deepfake technology (30%) and AI-powered attacks (35%).

Patrick Tiquet, vice president of security and architecture at Keeper Security, said one key approach to defending against AI-generated deepfake attacks is adopting a zero-trust security architecture, which assumes nobody, internal or external, should be trusted by default.

“This model continuously verifies the identity of users and devices, ensuring that only legitimate users have access to network resources,” he explained.

Complementing this, the principle of least privilege restricts users’ access rights to the bare minimum necessary for their roles, limiting the blast radius if an account is compromised.
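The deny-by-default posture of zero trust combined with least privilege can be illustrated with a minimal sketch. The roles, permissions and helper names below are illustrative assumptions, not anything described in the article.

```python
# Minimal sketch of a least-privilege access check: deny by default,
# grant only what a role explicitly needs. Roles and permission strings
# here are hypothetical examples.

ROLE_PERMISSIONS = {
    "analyst":  {"read:logs"},
    "engineer": {"read:logs", "write:configs"},
    "admin":    {"read:logs", "write:configs", "manage:users"},
}

def is_allowed(role: str, permission: str) -> bool:
    """Access is granted only if the role explicitly includes the
    requested permission; unknown roles get nothing (zero-trust default)."""
    return permission in ROLE_PERMISSIONS.get(role, set())

# A compromised "analyst" account cannot touch configs or user management,
# which limits the blast radius described above.
print(is_allowed("analyst", "read:logs"))      # True
print(is_allowed("analyst", "write:configs"))  # False
```

The point of the sketch is the default: any permission not explicitly granted is refused, so a stolen low-privilege account yields little.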

Tiquet said while AI may be used to build novel attacks, such as deepfakes, the actions that cybercriminals take once they gain access tend to follow predictable patterns.

Typically, they will attempt to locate and exfiltrate or encrypt critical assets – the “crown jewels” of the organization.

“By tracking and monitoring these predictable behaviors, organizations can detect and mitigate threats even if the initial access method was sophisticated,” he said.
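One simple form of the behavior monitoring Tiquet describes is flagging a burst of file accesses that could indicate staging for exfiltration or encryption. The event format, threshold and window below are assumptions for illustration, not a specific product's detection logic.

```python
# Hypothetical sketch of detecting a predictable post-compromise pattern:
# a single account touching an unusual number of distinct files within a
# short sliding window. Threshold and window size are assumed values.

from collections import defaultdict

BURST_THRESHOLD = 100   # distinct files in one window (assumption)
WINDOW_SECONDS = 60

def flag_suspicious_bursts(events):
    """events: iterable of (timestamp, user, path) tuples.
    Returns the set of users whose distinct-file count within any
    sliding window exceeds the threshold."""
    flagged = set()
    per_user = defaultdict(list)  # user -> recent (timestamp, path) pairs
    for ts, user, path in sorted(events):
        hist = per_user[user]
        hist.append((ts, path))
        # keep only accesses inside the sliding window
        hist = [(t, p) for t, p in hist if ts - t <= WINDOW_SECONDS]
        per_user[user] = hist
        if len({p for _, p in hist}) > BURST_THRESHOLD:
            flagged.add(user)
    return flagged
```

Even if the initial access came from a convincing deepfake, this kind of behavioral tripwire fires on what the attacker does next.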

Conduct Independent Testing, Evaluation

Narayana Pappu, CEO at Zendata, said organizations can keep their deepfake response plans up to date through industry information sharing on evolving threats and by taking a triangulated approach that combines multiple technical measures rather than relying on just one.

For example, authentication could combine a device fingerprint, a user fingerprint (activity-driven) and biometrics, rather than the single factor most applications use today.
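The triangulated authentication Pappu describes can be sketched as a weighted combination of independent signals, so that no single factor, including a deepfaked biometric, is sufficient on its own. The signal names, weights and threshold below are assumptions for illustration.

```python
# Illustrative sketch of triangulated authentication: combine several
# independent signals instead of trusting any one. Weights and the
# approval threshold are made-up values, not from the article.

SIGNAL_WEIGHTS = {
    "device_fingerprint": 0.35,  # known device/browser characteristics
    "behavioral":         0.30,  # activity-driven user fingerprint
    "biometric":          0.35,  # face/voice check
}
APPROVE_THRESHOLD = 0.7  # effectively requires two strong signals

def authenticate(signals: dict) -> bool:
    """signals maps signal name -> confidence score in [0, 1].
    No single signal can clear the threshold by itself."""
    score = sum(SIGNAL_WEIGHTS[name] * signals.get(name, 0.0)
                for name in SIGNAL_WEIGHTS)
    return score >= APPROVE_THRESHOLD

# A perfect deepfake that spoofs only the biometric check still fails:
authenticate({"biometric": 1.0})  # False (score 0.35)
```

The design choice is that the threshold sits above any single weight, so defeating one verification channel is never enough.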

“Software updates, network security improvements and stronger password policies play a significant role in protecting against AI-driven cybersecurity threats,” Pappu said.

He cautioned, however, that given the recent CrowdStrike outage, it is wise to conduct independent testing and evaluation before pushing updates, or at least to have a rollback strategy in place in case updates don't go according to plan.
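Pappu's "test first, keep a rollback path" advice can be sketched as a staged rollout: deploy to a small canary group, verify health, and roll back on failure. The `deploy`, `health_check` and `rollback` callables are hypothetical stand-ins for whatever deployment tooling an organization actually uses.

```python
# Hedged sketch of a staged rollout with a rollback strategy. The
# deploy/health_check/rollback hooks are hypothetical placeholders.

def staged_rollout(hosts, deploy, health_check, rollback, canary_size=1):
    """Deploy to a small canary batch first; only continue to the rest
    of the fleet if the canary stays healthy. On any failed health
    check, roll back every host deployed so far and stop."""
    deployed = []
    batches = [hosts[:canary_size], hosts[canary_size:]]
    for batch in batches:
        for host in batch:
            deploy(host)
            deployed.append(host)
        if not all(health_check(h) for h in batch):
            for host in reversed(deployed):
                rollback(host)  # restore the previous known-good version
            return False
    return True
```

A bad update caught at the canary stage never reaches the wider fleet, which is the failure mode the CrowdStrike incident made vivid.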
