In the previous article, we discussed how artificial intelligence and machine learning could significantly boost the strength of cybersecurity systems – endpoint security solutions in particular.
Unfortunately, it is not only cybersecurity solution providers who understand the value of AI and ML; so do cybercriminals. So today, we will dive deeper into the subject of offensive AI and separate reality from fiction.
The evolution of AI and ML over the last decade
The term artificial intelligence was coined in 1956 at a conference at Dartmouth College. Alan Turing, however, had published a paper several years earlier – in 1950 – discussing how to build intelligent machines and test their intelligence. Since the 1950s, AI has gone through several “springs” – periods of increased attention and significant progress in the field. But it wasn’t until the 1990s that computational power caught up with the needs of AI and ML. A famous example is Deep Blue, the chess-playing computer developed by IBM that won a match against chess grandmaster Garry Kasparov in 1997.
Fast-forward to the last decade. In 2011, IBM’s Watson defeated reigning Jeopardy! champions Brad Rutter and Ken Jennings – a testament to how well a machine can understand context in human language. The same year brought a landmark achievement in machine image understanding, when a convolutional neural network won the German Traffic Sign Recognition Benchmark competition. The beginning of the decade also saw the launch of products such as Siri, Google Glass, and Oculus VR.
A significant breakthrough came in 2014, when Ian Goodfellow and his colleagues introduced generative adversarial networks (GANs) – a machine learning setup that pits two neural networks against each other: a generator that produces candidate outputs and a discriminator that judges whether they look real. Through this competition, the networks improve each other and can create original content. In 2018, a painting created by a GAN-based system sold for over $400,000 at a Christie’s auction.
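To make the idea concrete, here is a minimal sketch of a GAN training loop in PyTorch. The toy data, network sizes, and hyperparameters are all illustrative assumptions for this example, not a reproduction of any particular system:

```python
import torch
import torch.nn as nn

# Illustrative dimensions: the generator maps random noise to fake
# samples; the discriminator scores samples as real (1) or fake (0).
latent_dim, data_dim = 16, 2

generator = nn.Sequential(
    nn.Linear(latent_dim, 64), nn.ReLU(),
    nn.Linear(64, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, data_dim) * 0.5 + 2.0  # stand-in “real” data
    fake = generator(torch.randn(32, latent_dim))

    # Discriminator step: learn to tell real from fake.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) \
           + loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator step: learn to fool the discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
```

In real systems, both networks are deep models trained on large datasets, and the competition drives the generator’s outputs toward realism – the same property that makes deepfakes possible.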
Over the last decade, these advancements in AI have given us products like Amazon’s Alexa, chatbots, and autonomous cars (though the latter are, by and large, still in development). None of these would have been possible 30 years ago, when computational power could not accommodate the complexity of AI and ML systems.
The current state of AI and ML
While in many ways we are already highly accustomed to AI- and ML-based products – be it using voice search to play a song or setting up a dentist appointment via a chatbot – AI and ML still have a long way to go before they reach the level of JARVIS. That’s due to some limitations that the scientific and technological communities have yet to overcome, including:
- Access to data: ML- and AI-based technologies require large volumes of training data, and not every organization generates enough of it. Much of the world’s data is aggregated in just a few companies, such as Facebook and Google.
- Computing time: More advanced forms of AI and ML run into hardware limitations, and only organizations with deep financial resources can afford to build the custom hardware these programs require.
- Training costs: Training an AI or ML model is expensive. For example, according to one estimate, training GPT-3 would cost at least $4.6 million.
- Lack of adaptability: Most AI- and ML-based models function perfectly well in a highly controlled, supervised environment. Taking a model outside of that environment, however, remains a challenge.
The use of AI and ML by malicious actors
While we’re not yet at a stage where artificial intelligence threatens human intelligence, AI and ML have already taken much of the manual work off our hands by automating it. This has allowed organizations to improve productivity and significantly reduce operational costs – but it has opened up the same opportunities to malicious actors.
Automation allows cybercriminals to unleash a computer army in their attacks. For example, automated bots can scan social media chatter to identify prime targets, and hackers can then use AI to craft highly personalized messages for social engineering attacks.
Deepfakes can be used to imitate the voice and image of trusted individuals. In 2019, for example, the CEO of a UK-based energy firm received a call from the CEO of its German parent company requesting an immediate transfer of €220,000 to the bank account of a Hungarian supplier. The UK-based CEO complied. It wasn’t until a second call, which followed shortly after, that he grew suspicious and called the German CEO back – who knew nothing about the transfer request. It turned out that a malicious group had used a voice deepfake to trick the victim into making the transfer.
Cybercriminals can also use AI to identify vulnerabilities in a network. They can develop malware that continuously evolves to stay hidden while penetrating deeper and deeper into the network. As it spreads, it can learn what constitutes “normal” behavior within that network – at what time each user logs in, which applications they primarily use, where they connect from, and what vulnerabilities their devices have. It can then mimic that normal behavior to steal data without raising any red flags in the system.
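The statistical idea behind this kind of baselining is the same one defenders rely on to catch it: model what “normal” activity looks like, then flag deviations. Below is a minimal, illustrative sketch using scikit-learn’s IsolationForest; the login features, numbers, and thresholds are invented for the example and not drawn from any real product:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Illustrative features per login event: hour of day, session length
# in minutes, and a numeric ID for the subnet the user connected from
# (a deliberate simplification for the sake of the sketch).
rng = np.random.default_rng(0)
normal_logins = np.column_stack([
    rng.normal(9, 1, 500),     # logins cluster around 9 a.m.
    rng.normal(480, 60, 500),  # roughly 8-hour sessions
    rng.integers(0, 3, 500),   # a handful of usual subnets
])

# Fit a baseline of "normal" behavior from historical events.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_logins)

# A 3 a.m. login from an unfamiliar subnet scores as anomalous.
suspicious = np.array([[3.0, 30.0, 17]])
print(model.predict(suspicious))  # -1 = anomaly, 1 = normal
```

Real user-behavior analytics products build far richer baselines than this, but the principle – learn the norm, flag the deviation – is the same on both sides of the fight.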
AI-powered attacks can collect data on every single employee of an organization – by scanning LinkedIn, for example – compromise their devices, and then lie in wait until an employee accesses the corporate network.
Finally, thanks to advancements in AI’s image-recognition capabilities, malicious actors can use it to break through the CAPTCHA layer of protection.
For now, malicious groups face the same limitations as everyone else in utilizing the full potential of AI and ML, which is why most cyberattacks still follow more traditional routes. However, it is not hard to imagine a future in which AI technologies fight cyberwars.
As with every other technological revolution, the costs of development will eventually decrease, the technologies will go mainstream, and open-source work will turn them into freely available solutions.
As an organization, whether you are a private business, a healthcare provider, a municipal government, a utility provider, or an educational institution, you need to strengthen your cybersecurity with AI and ML capabilities now. Not next year. Not five years from now.
The only way to minimize risk is to utilize security solutions and frameworks that stay ahead of the curve – the ones that can identify vulnerabilities and threats with maximum precision.