From the 1940s to the present, discover how cybercrime and cybersecurity have developed to become what we know today
Many species evolve in parallel, each seeking a competitive edge over the other. As cybersecurity and technology have evolved, so have the criminals and ‘bad actors’ who seek to exploit weaknesses in the system for personal gain – or just to prove a point. This arms race has been going on since the 1950s, and this article explains the evolution of cyberattacks and security solutions.

1940s: The time before crime

For nearly two decades after the creation of the world’s first digital computer in 1943, carrying out cyberattacks was tricky. Access to the giant electronic machines was limited to small numbers of people, and the machines weren’t networked – only a few people knew how to work them, so the threat was almost non-existent. Interestingly, the theory underlying computer viruses was first made public in 1949, when computer pioneer John von Neumann speculated that computer programs could reproduce themselves.

1950s: The phone phreaks

The technological and subcultural roots of hacking are as much related to early telephones as they are to computers. In the late 1950s, ‘phone phreaking’ emerged. The term captures several methods that ‘phreaks’ – people with a particular interest in the workings of phones – used to hijack the protocols that allowed telecoms engineers to work on the network remotely, letting them make free calls and avoid long-distance tolls. Sadly for the phone companies, there was no way of stopping the phreaks, although the practice eventually died out in the 1980s. The phreaks had become a community, even issuing newsletters, and their ranks included technological trailblazers like Apple’s founders Steve Wozniak and Steve Jobs. The mold was set for digital technology.

1960s: All quiet on the Western Front

The first-ever reference to malicious hacking appeared in the Massachusetts Institute of Technology’s student newspaper. Even by the mid-1960s, most computers were huge mainframes, locked away in secure, temperature-controlled rooms. These machines were very costly, so access – even for programmers – remained limited.
However, there were early forays into hacking by some of those with access, often students. At this stage, the attacks had no commercial or geopolitical benefit: most hackers were curious mischief-makers or people who sought to improve existing systems by making them work more quickly or efficiently.

In 1967, IBM invited school kids to try out its new computer. After exploring the accessible parts of the system, the students probed deeper, learning the system’s language and gaining access to other parts of the system. This was a valuable lesson for the company, which acknowledged its gratitude to “a number of high school students for their compulsion to bomb the system”. The episode resulted in the development of defensive measures – and possibly the defensive mindset that would prove essential to developers from then on. Ethical hacking is still practiced today.

As computers started to shrink in size and cost, many large companies invested in technologies to store and manage data and systems. Keeping machines under lock and key became redundant as more people needed access to them, and passwords came into use.

1970s: Computer security is born

Cybersecurity proper began in 1972 with a research project on ARPANET (the Advanced Research Projects Agency Network), a precursor to the internet.
ARPANET developed protocols for remote computer networking. Researcher Bob Thomas created a computer program called Creeper that could move across ARPANET’s network, leaving a breadcrumb trail wherever it went. Its message read: ‘I’m the creeper, catch me if you can’. As a self-replicating program, Creeper is generally regarded as the first-ever computer worm. Ray Tomlinson – the inventor of email – wrote Reaper, the program that chased and deleted Creeper, making Reaper the very first example of antivirus software.
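The Creeper/Reaper dynamic can be sketched as a toy simulation – a replicating program hopping between hosts, trailed by a cleaner that deletes its copies. Everything below (host ring, step counts, delay) is invented for illustration and has nothing to do with the original 1970s code:

```python
# Toy model of Creeper and Reaper on a ring of simulated hosts.
# Creeper copies itself to the next host each step; Reaper launches a
# few steps later and deletes the copies it finds, one host per step.

def simulate(num_hosts: int, steps: int, reaper_delay: int = 4) -> set:
    """Return the set of hosts still holding a Creeper copy at the end."""
    infected = {0}          # host 0 starts with the original Creeper
    creeper = 0
    for step in range(steps):
        creeper = (creeper + 1) % num_hosts
        infected.add(creeper)                      # Creeper replicates onward
        if step >= reaper_delay:                   # Reaper starts late...
            infected.discard((step - reaper_delay) % num_hosts)  # ...and cleans
    return infected

# Reaper lags behind, so a short "breadcrumb trail" of copies persists.
print(simulate(num_hosts=8, steps=16))
```

Because Reaper trails Creeper by a fixed delay, the trail never fully disappears – a small illustration of why a purely reactive cleaner can chase a replicator indefinitely.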
An example of the Creeper’s taunting message. (Image credit: Core War)

Challenging the vulnerabilities in these emerging technologies became more important as more organizations started using the telephone to create remote networks. Each piece of connected hardware presented a new ‘entry point’ and needed to be protected.

As reliance on computers increased and networking grew, it became clear to governments that security was essential and that unauthorized access to data and systems could be catastrophic. 1972–1974 witnessed a marked increase in discussions around computer security, mainly in academic papers. Early computer security work was undertaken by ESD and ARPA together with the U.S. Air Force and other organizations, which worked cooperatively to develop a design for a security kernel for the Honeywell Multics (HIS level 68) computer system. UCLA and the Stanford Research Institute worked on similar projects. ARPA’s Protection Analysis project explored operating system security, identifying, where possible, automatable techniques for detecting vulnerabilities in software.

By the mid-1970s, the concept of cybersecurity was maturing. In 1976, Operating System Structures to Support Security and Reliable Software stated: “Security has become an important and challenging goal in the design of computer systems.”

In 1979, 16-year-old Kevin Mitnick famously hacked into The Ark – the computer at the Digital Equipment Corporation used for developing operating systems – and made copies of the software. He was arrested and jailed for what would be the first of several cyberattacks he conducted over the next few decades. Today he runs Mitnick Security Consulting.

1980s: From ARPANET to internet

The 1980s brought an increase in high-profile attacks, including those at National CSS, AT&T, and Los Alamos National Laboratory. The movie War Games, in which a rogue computer program takes over nuclear missile systems under the guise of a game, was released in 1983.
This was the same year that the terms ‘Trojan horse’ and ‘computer virus’ were first used. During the Cold War, the threat of cyber espionage evolved. In 1985, the US Department of Defense published the Trusted Computer System Evaluation Criteria (aka The Orange Book), which provided guidance on assessing the degree of trust that could be placed in a computer system’s security controls.

Despite this, in 1986, German hacker Markus Hess used an internet gateway in Berkeley, CA, to piggyback onto the ARPANET. He hacked 400 military computers, including mainframes at the Pentagon, intending to sell information to the KGB.

Security started to be taken more seriously. Savvy users quickly learned to monitor the size of the command.com file, having noticed that an increase in size was the first sign of potential infection. Cybersecurity measures incorporated this thinking, and a sudden reduction in free operating memory remains a sign of attack to this day.

1987: The birth of cybersecurity

1987 was the birth year of commercial antivirus, although there are competing claims for the innovator of the first antivirus product. Also in 1987, the Cascade virus made text ‘fall’ to the bottom of the screen.

By 1988, many antivirus companies had been established around the world – including Avast, founded by Eduard Kučera and Pavel Baudiš in Prague, Czech Republic. Today, Avast has a team of more than 1,700 worldwide and stops around 1.5 billion attacks every month.

Early antivirus software consisted of simple scanners that performed context searches to detect unique virus code sequences. Many of these scanners also included ‘immunizers’ that modified programs to make viruses think the computer was already infected, so they would not attack it. As the number of viruses grew into the hundreds, immunizers quickly became ineffective. It was also becoming clear to antivirus companies that they could only react to existing attacks, and the lack of a universal, ubiquitous network (the internet) made updates hard to deploy.
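The context-search scanners of this era can be sketched in a few lines: search each program’s bytes for known virus code sequences. The signature names, byte patterns, and sample files below are all invented for illustration – they are not real virus signatures:

```python
# Minimal sketch of an early context-search scanner: look for known
# virus code sequences inside a file's raw bytes. All signatures and
# samples here are made up for illustration.

SIGNATURES = {
    "demo-virus-a": bytes.fromhex("deadbeef4242"),
    "demo-virus-b": bytes.fromhex("cafebabe1337"),
}

def scan(data: bytes) -> list:
    """Return the names of all known signatures found in the data."""
    return [name for name, sig in SIGNATURES.items() if sig in data]

sample_clean = b"\x00" * 64
sample_infected = b"\x00" * 16 + bytes.fromhex("deadbeef4242") + b"\x00" * 16

print(scan(sample_clean))     # []
print(scan(sample_infected))  # ['demo-virus-a']
```

The limitation described above falls straight out of this design: the scanner can only flag sequences already present in `SIGNATURES`, so a brand-new virus passes undetected until the database is updated.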
As the world slowly started to take notice of computer viruses, 1988 also witnessed the first electronic forum devoted to antivirus security – Virus-L – on the Usenet network. The decade also saw the birth of the antivirus press: the UK-based, Sophos-sponsored Virus Bulletin and Dr. Solomon's Virus Fax International. The decade closed with more additions to the cybersecurity market, including F-Prot, ThunderBYTE, and Norman Virus Control. In 1989, IBM finally commercialized its internal antivirus project, and IBM Virscan for MS-DOS went on sale for $35.

Further reading: For more nostalgia, check out our guide to the best hardware of the 1980s.

1990s: The world goes online

1990 was quite a year for the industry. Early antivirus was purely signature-based, comparing binaries on a system with a database of virus ‘signatures’. This meant that early antivirus produced many false positives and used a lot of computational power – which frustrated users as productivity slowed. As more antivirus scanners hit the market, cybercriminals responded, and in 1992 the first anti-antivirus program appeared.

By 1996, many viruses used new techniques and innovative methods, including stealth capability, polymorphism, and ‘macro viruses’, posing a new set of challenges for antivirus vendors, who had to develop new detection and removal capabilities. The number of new viruses and malware samples exploded in the 1990s, from tens of thousands early in the decade to 5 million every year by 2007. By the mid-‘90s, it was clear that cybersecurity had to be mass-produced to protect the public. One NASA researcher developed the first firewall program, modeling it on the physical structures that prevent the spread of actual fires in buildings.

The late 1990s were also marked by conflict and friction between antivirus developers. Heuristic detection also emerged as a new method to tackle the huge number of virus variants.
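Heuristic detection can be sketched as scoring generally suspicious traits rather than matching one exact signature, which lets a scanner flag variants it has never seen. The traits, weights, and threshold below are invented purely for illustration:

```python
# Toy heuristic scanner: score suspicious traits and flag files whose
# total score exceeds a threshold. Traits and weights are invented
# examples, not real detection rules.

SUSPICIOUS_TRAITS = {
    b"\xeb\xfe": 2,        # x86 jump-to-self instruction (infinite loop)
    b"FORMAT C:": 5,       # destructive command embedded in a binary
    b"\x00MZ": 1,          # a second executable header hidden in the file
}

def heuristic_score(data: bytes) -> int:
    """Sum the weights of every suspicious trait present in the data."""
    return sum(weight for trait, weight in SUSPICIOUS_TRAITS.items()
               if trait in data)

def is_suspicious(data: bytes, threshold: int = 4) -> bool:
    return heuristic_score(data) >= threshold

sample = b"\x90" * 8 + b"FORMAT C:" + b"\x90" * 8
print(heuristic_score(sample), is_suspicious(sample))  # 5 True
```

The trade-off the article mentions is visible here too: loosening the threshold catches more variants but raises the false-positive rate on benign files that happen to contain a flagged trait.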
Antivirus scanners started to use generic signatures – often containing non-contiguous code and using wildcard characters – to detect viruses even if the threat had been ‘hidden’ inside meaningless code.

Email: a blessing and a curse

Towards the end of the 1990s, email was proliferating, and while it promised to revolutionize communication, it also opened up a new entry point for viruses. In 1999, the Melissa virus was unleashed. It entered the user’s computer via a Word document and then emailed copies of itself to the first 50 email addresses in Microsoft Outlook. It remains one of the fastest-spreading viruses, and the damage cost around $80 million to fix.

2000s: Threats diversify and multiply

With the internet available in more homes and offices across the globe, cybercriminals had more devices and software vulnerabilities to exploit than ever before. And, as more and more data was being kept digitally, there was more to plunder. In 2001, a new infection technique appeared: users no longer needed to download files – visiting an infected website was enough, as bad actors replaced clean pages with infected ones or ‘hid’ malware on legitimate webpages. Instant messaging services also began to be attacked, and worms designed to propagate via IRC (Internet Relay Chat) channels arrived as well.

The development of zero-day attacks, which exploit ‘holes’ in security measures for new software and applications, meant that antivirus was becoming less effective – you can’t check code against existing attack signatures if the virus isn’t yet in the database. The computer magazine c't found that detection rates for zero-day threats had dropped from 40–50% in 2006 to only 20–30% in 2007. As crime organizations started to heavily fund professional cyberattacks, the good guys were hot on their trail.

A key challenge of antivirus is that it can often slow a computer’s performance. One solution to this was to move the software off the computer and into the cloud.
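Cloud-assisted scanning can be sketched as a hash-lookup protocol: the client computes a cheap fingerprint locally and asks a remote reputation service whether it is known-bad, keeping the heavy database off the endpoint. Everything below – the class name, the samples, the in-process “service” – is a hypothetical simulation, not any vendor’s actual API:

```python
import hashlib

# Sketch of cloud-assisted scanning: the client hashes a file locally
# and queries a reputation service for that hash. The "cloud" here is
# simulated in-process; real products query vendor infrastructure.

class SimulatedCloudService:
    """Stand-in for a remote reputation service holding known-bad hashes."""

    def __init__(self, bad_hashes):
        self.bad_hashes = set(bad_hashes)

    def lookup(self, digest: str) -> bool:
        """Return True if the digest matches a known-bad file."""
        return digest in self.bad_hashes

def fingerprint(data: bytes) -> str:
    # SHA-256 is cheap to compute client-side and compact to transmit.
    return hashlib.sha256(data).hexdigest()

malware_sample = b"pretend-malware-bytes"
cloud = SimulatedCloudService({fingerprint(malware_sample)})

print(cloud.lookup(fingerprint(malware_sample)))   # True
print(cloud.lookup(fingerprint(b"benign file")))   # False
```

The performance win comes from shifting both storage and matching to the service: the endpoint sends a fixed-size digest instead of holding an ever-growing signature database.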
In 2007, Panda Security combined cloud technology with threat intelligence in their antivirus product – an industry first. McAfee Labs followed suit in 2008, adding cloud-based anti-malware functionality to VirusScan. The following year, the Anti-Malware Testing Standards Organization (AMTSO) was created and shortly afterwards started working on a method of testing cloud products.

Another innovation this decade was OS security – cybersecurity built into the operating system, providing an additional layer of protection. This often includes applying regular OS patch updates, installing updated antivirus engines and software, running firewalls, and maintaining secure accounts with user management. With the proliferation of smartphones, antivirus was also developed for Android and Windows mobile devices.

2010s: The next generation

The 2010s saw many high-profile breaches and attacks that began to impact the national security of countries and cost businesses millions. Increasing connectedness and the ongoing digitization of many aspects of life continued to offer cybercriminals new opportunities to exploit. Cybersecurity tailored specifically to the needs of businesses became more prominent, and in 2011, Avast launched its first business product.

As cybersecurity developed to tackle the expanding range of attack types, criminals responded with their own innovations: multi-vector attacks and social engineering. Attackers were becoming smarter, and antivirus was forced to shift away from signature-based methods of detection towards ‘next generation’ innovations. Next-gen cybersecurity uses different approaches to increase detection of new and unprecedented threats, while also reducing the number of false positives. It typically involves approaches such as behavioral analysis, machine learning, and threat intelligence.

Who knows what the next decade will bring? Whatever happens, Avast Business will be there to provide advanced protection for organizations and offer peace of mind for business leaders and IT professionals.
Learn more about our range of solutions and find which one is best suited for your business using our Help Me Choose tool.