However, as with any nascent technology, AI has the potential to cause harm in the wrong hands. We’ve begun seeing AI used for nefarious purposes, chiefly in the form of AI-facilitated cyberattacks, and we forecast adversarial AI to be the next challenge in this area: by definition, AI deployed by cybercriminals to fool the very security tools it was so often used to create.
What Is Adversarial AI and How Is It Being Used by Cybercriminals?
Adversarial AI attacks exploit the analytic and decision-making powers of established machine learning (ML)-based security tools in order to evade detection. They work by outsmarting less advanced ML models, convincing these security tools that AI-based malware is benign when in reality it is a serious threat. Having evaded detection, adversarial AI malware gains free entry into the network.
Adversarial AI attacks can be grouped into three main types:
AI-based Cyberattacks – Perhaps the most direct application, this method involves the threat actor deploying malware that runs ML algorithms as part of its attack logic. ML-powered malware can automate activities that previously required manual human guidance, making the resulting malware faster, more aggressive, and more independent of the threat actor. We’re already seeing evidence of this technique in the wild, albeit not at scale.
AI-facilitated Cyberattacks – In this case, malware is deployed on the victim’s endpoint while AI-based algorithms run on the attacker’s own server. Here, the technology’s ability to crunch through data and identify patterns is used to better orchestrate and automate cyberattacks. For example, an info-stealer might exfiltrate a huge dataset of personal information, with an AI algorithm then rapidly finding and classifying valuable data such as credit card numbers, passwords, and confidential documents (this triage step is sketched in the first example after this list).
Adversarial Learning – Last, but by no means least, this is a case of machine vs. machine, as malicious AI algorithms are used to subvert ML-powered security solutions. Traditional machine learning tools must be trained on datasets before they can identify patterns, and threat actors can poison the well by injecting false data that causes the algorithm to misclassify. Though still largely theoretical, this is an extremely dangerous scenario: defensive ML-powered solutions can be taught to misidentify malware as harmless, effectively rendering them useless (the second example after this list shows the idea).
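To make the AI-facilitated example concrete, here is a minimal, hypothetical Python sketch of the triage step: a regex surfaces candidate payment-card numbers in a bulk text dump, and the standard Luhn checksum filters out false positives. All names and the sample input are illustrative; this is the same kind of pattern-matching that defensive data-loss-prevention tools perform.

```python
import re

# Candidate card numbers: 13-19 digits, optionally separated by spaces/dashes.
CARD_CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,19}\b")

def luhn_valid(number: str) -> bool:
    """Return True if the digit string passes the Luhn checksum."""
    checksum = 0
    for i, d in enumerate(int(c) for c in reversed(number)):
        if i % 2 == 1:        # double every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        checksum += d
    return checksum % 10 == 0

def classify_dump(text: str) -> list[str]:
    """Scan raw exfiltrated text and return Luhn-valid card numbers."""
    hits = []
    for match in CARD_CANDIDATE.finditer(text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_valid(digits):
            hits.append(digits)
    return hits

# Illustrative input only: 4111... is a well-known test card number.
sample = "order notes, card 4111 1111 1111 1111, ref 1234567890123"
print(classify_dump(sample))  # -> ['4111111111111111']
```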
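And here is a minimal sketch of the adversarial-learning (poisoning) idea, assuming scikit-learn and synthetic data as stand-ins for a real security training pipeline: relabeling a slice of the "malicious" training samples as "benign" measurably lowers the trained model's detection rate.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a security training set (illustrative assumption):
# class 1 plays the role of "malware", class 0 of "benign" files.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Poison the well: relabel 40% of the malicious training samples as benign,
# mimicking an attacker teaching the model that malware is harmless.
rng = np.random.default_rng(0)
malicious = np.where(y_train == 1)[0]
flipped = rng.choice(malicious, size=int(0.4 * len(malicious)), replace=False)
y_poisoned = y_train.copy()
y_poisoned[flipped] = 0

poisoned = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)

# Detection rate = recall on the malicious class; it drops after poisoning.
print(f"clean model detects:    {recall_score(y_test, clean.predict(X_test)):.0%}")
print(f"poisoned model detects: {recall_score(y_test, poisoned.predict(X_test)):.0%}")
```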
How Close Is the Danger?
Adversarial AI is not yet being widely used by threat actors, but we have begun to encounter isolated, experimental cases. The cyberthreat landscape moves at lightning speed, and advanced techniques have a way of landing in the hands of ordinary cybercriminals.
In our estimation, it could be as little as 18 months before the dam breaks and AI-powered attacks start causing large, headline-grabbing security incidents. When that happens, unprepared organizations will face attacks that outpace or subvert traditional security solutions – even those using the most advanced ML.
However, very few organizations are proactively preparing for this threat. And it’s easy to see why – between juggling a dozen different security solutions, trying to safeguard against threats like ransomware, and managing long-term security projects, most CIOs and CISOs have too many challenges in front of them to think about the evolution of future threats.
What Should Enterprises Be Doing About Adversarial AI?
Adversarial AI is not a new category of cyberthreat; it is a known approach for evading machine learning-based inference models by manipulating the final score or classification they produce. The toy sketch below shows the principle.
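In this illustration (using scikit-learn and a deliberately simple linear model, an assumption for clarity rather than a description of any real product), small directed changes to an input steadily push the model's "malicious" score down until its classification flips:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a toy stand-in for an ML security model (illustrative assumption).
X, y = make_classification(n_samples=1000, n_features=10, random_state=1)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Pick a sample the model correctly flags as class 1 ("malicious").
preds = model.predict(X)
x = X[(y == 1) & (preds == 1)][0].copy()

# For a linear model, the gradient of the score with respect to the input
# is just the weight vector, so stepping against sign(w) steadily lowers
# the score: an FGSM-style perturbation specialized to the linear case.
w = model.coef_[0]
x_adv, eps, steps = x.copy(), 0.1, 0
while model.predict([x_adv])[0] == 1 and steps < 200:
    x_adv -= eps * np.sign(w)
    steps += 1

p_before = model.predict_proba([x])[0, 1]
p_after = model.predict_proba([x_adv])[0, 1]
print(f"classification flipped after {steps} small steps: "
      f"malicious probability {p_before:.2f} -> {p_after:.2f}")
```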
Yet very few in the security industry are talking about adversarial AI right now, and this needs to change. More vendors should be investing time and resources in this direction.
Deep learning is the most effective approach for dealing with adversarial AI: it is orders of magnitude more resilient to input manipulation than traditional machine learning because it can identify more complex, high-dimensional patterns. This allows it to counteract adversarial AI by outpacing attacks and resisting attempts to change the model’s labeling.
The first adversarial AI strikes are likely to come from sophisticated threat actors going after high-value targets such as financial services, or even from nation-state-sponsored cyber-espionage campaigns. But that is no reason for CIOs and CISOs in at-risk industries to wait: they should explore options for defending against these advanced attacks now – before they become the ones making the headlines.