Our digital world never stands still.
How we do business and interact with each other is evolving at a breakneck pace. We saw during the pandemic that digital transformation of all kinds can happen faster than we ever thought possible. It’s a thrilling time to work in cybersecurity, but new technology and unprecedented opportunities also present us with extraordinary challenges.
The problem of malware, specifically its ever-shifting flavors, has been plaguing us for decades. One such variety, ransomware, is a “trend” that has provoked cybersecurity teams for years. As trends go, it has more longevity than most. And there are no signs of ransomware attacks decreasing.
However, we’re currently seeing an uptick in strains of ransomware exploitation. Meanwhile, bad actors may contribute data to large language models (LLMs) too. The good news? These criminals leave behind digital trails that help us find more ways to fight them.
Here are 10 of the most powerful trends I’ve seen this year — both in the realm of threats and the sphere of security strategies.
When an attack occurs (and trust me, one will if it hasn’t already), understanding what happened is important, but figuring out what will happen next is obligatory.
Threat intelligence teams can and should conduct postmortem investigations, so to speak. If we can’t map breaches, threat intelligence is reactive at best. Imagine standing in front of your company’s CEO, board or clients and swearing: This is what happened, but I’ll make sure this specific thing doesn’t happen again.
These days, that’s not enough. And if you try to convince leadership that reactive tactics will work, you’ll remain one step behind the bad actors.
Many of us know “dwell time” as the five or 10 crucial minutes between applying a cleaning product and applying one’s elbow grease. But in cybersecurity, dwell time is the time between bad actors’ initial break in and the attack itself, when target data is encrypted.
Recently, that average time has dropped dramatically — and by days, not hours. This phenomenon highlights the importance of immediate visibility into anomalies, as well as understanding what happens inside your environment and inside your network.
As that dwell time gets smaller, organizations have less time to react. Meanwhile, ransomware operators continue to evolve their techniques and dream up new ways to monetize their attacks.
Ransomware began purely from an encryption perspective. First, the modus operandi was to encrypt and hold data for ransom. Then it evolved to encryption plus stealing and potentially selling data, which monetizes both sides of the equation. The initial victims must pay to get their data back (ostensibly to keep it from being exposed), while the thieves make additional money from whoever buys copies of stolen data on the black market.
Some ransomware operators aren’t even bothering to encrypt anymore. They may find it more economical and effective to simply steal the data and extort victimized organizations. In that sense, even bad actors evaluate ROI and do what makes good business sense.
The “genuine” security models most of us rely on have built-in safeguards. But from the very first days of ChatGPT’s massive initial release, the tinkerers among us anticipated the inevitable question: What’s an engineer going to do when they get their hands on this?
In a nutshell, attacks evolve — then safeguards get more sophisticated and so do ever-more-effective attempts to jailbreak them. We should accept that it will always be possible to use large-language models for malicious purposes.
But something curious is happening alongside the rise of mass-audience/input AI. Now, programs like FraudGPT and WormGPT are on the rise. These models, built by criminal communities, will also contribute to a larger, more learned LLM.
Lately, we’ve seen a lot of indirect prompt injection attacks into LLMs. That means the prompt comes from a third party, which tells the AI to read the content of, say, a website or a PDF. The AI reads that text, which contains hidden instructions for the AI system to follow.
Regardless of how it happens, if someone can put data into an LLM, they can manipulate what it spits back out. That goes for non-weaponized versions of injections via ChatGPT, Google Bard and others.
We’ve also seen criminals use LLMs to build truly adaptive, polymorphic, “pick your own attack vector” and “write your own code” attacks. A few months ago, our team at HYAS generated one of these polymorphic attacks that we dubbed “EyeSpy.” It simply underscored the fact that malicious attacks via AI are already here, and they are a threat we need to address now.
Bad actors using AI aren’t the only imminent danger. I heard a fascinating (and true) story at BlackHat this year that reminds us why an organization’s biggest threat often comes from within.
Apparently, an employee at Company X used an LLM to help them finish a whitepaper. They did something innocuous at first blush: Ask the AI to write an executive summary and a conclusion. But of course, the employee had to feed the paper into the LLM, which absorbed and then would use it to provide data and answers to other queries from other individuals even before the author even published their work.
In that way, the rise of LLMs is not just a threatening new channel hackers use. It’s also a privacy issue. I urge everyone who conducts workforce security training to share this anecdote along with phishing and social-engineering cautionary tales.
Plenty of groups, both large and small, unorganized and fragmented, are willing to bankroll cybersecurity incursions. Some of these are well-funded and well-organized; they spend months planning their attacks.
We need to remove biases about cybersecurity, particularly the false idea that security architecture, tools and systems we build are effective walls. We can build them higher and higher, but as soon as a bad actor gets in they can do whatever they want, unfettered. That’s why malicious attackers often lurk inside networks for months or even years undetected, establishing the infrastructure necessary to carry out their attacks and then stealing whatever data they want, causing ensuing damage, and generally run amok inside the organization. It’s not just about the walls on the outside – increasingly it’s about the visibility of what’s going on inside that matters.
In some ways, the near-ubiquitous move to the cloud across industries means we’re all working in a much more complex information environment. In many organizations, there is no longer a data center, a single source of truth or a single corporate tool.
The information we depend on to do our jobs and provide value to our customers is dispersed across countless data centers, companies and even continents, depending on the organizations we work for. We need to ensure that how we think about information security reflects our distributed workforces — and the diversity of our organizational cultures.
Visibility and observability across networks are an absolute must (see below). Still, they mean nothing unless we also have tools to aggregate the various feeds of data, applications and other panes of glass that threaten to overwhelm cybersecurity analysts with noise. The right tools can assess and combine data — and put automation in place so analysts can spend time shaping solutions for the future.
Despite cybersecurity having a much higher profile in recent years, I still see organizations unprepared for ransomware and other attacks. They might not spot the “digital exhaust” signs inside their network (for example, the connections to adversary infrastructure and C2 that bad actors need to make from inside your environment) because the enterprise doesn’t collect the right data, can’t turn meta-data into actionable intelligence, and can’t see the anomalies that occur. And if the anomalies are identified too late,the data captured by criminals may have already been exfiltrated, encrypted, or both. Time’s up, and without the right level of visibility, it’s game over …
As attacks continues to evolve, the most important thing organizations can do is get immediate visibility into any and all anomalies inside their environments.
The sophistication and scope of cyberattacks go far beyond breaching core data stores. Today’s hackers don’t stop there. They seek out ways to exploit organizations’ backup systems. So as security professionals, we must consider each network in its entirety.Endpoints aren’t the only vulnerable aspects of a network. Neither are backups stored with cloud services.
We need a security strategy based on solid fundamentals like defense-in-depth, separation of concerns and least privilege. Ultimately, these practices help security teams build solid defenses against threats, no matter how and when they emerge.
Ready to step up your defensive game? Learn how HYAS can transform your cybersecurity strategy from reactive to proactive.
*** This is a Security Bloggers Network syndicated blog from HYAS Blog authored by David Ratner. Read the original post at: https://www.hyas.com/blog/10-cybersecurity-trends-that-emerged-in-2023