Wired: Weaponized AI
Darktrace Advisory Board member Dr Lynch discusses why we can expect to see more and more weaponized AI as 2018 becomes the year of the machine-on-machine attack.
As early as next year, we can expect to see weaponized AI that delivers its blows slowly, stealthily, and virtually without trace. 2018 will be the year of the machine-on-machine attack.
There is a lively debate going on in the U.S. about the use of AI by the military. Ground robots and drones, the targeting of enemy networks and the crafting of fake intelligence all benefit from machine intelligence, and can help save lives. The idea of autonomous weapons, however, is less welcome.
AI is already starting to be deployed on another battlefield, however – digital networks. And not just on the defense side. Today's cyber-attackers are keen to get their hands on the best AI technologies, which help them not only to infiltrate an IT infrastructure, but to stay on that network for many months, perhaps years, without getting noticed.
In 2018, we can expect these algorithmic presences to use their intelligence to learn about their environments and blend in with the daily commotion of network activity. The drivers of these automated attacks may have a defined target – the blueprint designs of a jet engine, say – or persist opportunistically, striking wherever the chance for money- or mischief-making presents itself. As they sustain their presence, they grow stronger in their knowledge of the network and build up control over data and entire systems.
Like HIV, which is so pernicious because it attacks the body's own defenses and replicates itself, these new machine intelligences will target the very defenses deployed against them. They will learn how the firewall works, which analytics models are used to detect attacks, and the times of day that the security team is in the office, and adapt to avoid and weaken them. All the while, they will use their strength to spread, creating new inroads for compromise and contaminating new devices with brutal efficiency.
Another way that AI will be used to attack us is by impersonating people. We already have AI assistants that do our scheduling, email on our behalf, and ask us what we'd like to order for lunch. But what happens if your AI assistant gets taken over by a malicious attacker? Or indeed, what happens when weaponized AI is refined enough to convincingly impersonate a real person that you trust?
A stealthy, long-term AI presence on your network will have ample time to learn your writing style and how it differs depending on who you email, your contact base, the distinction between your professional and personal relationships revealed by the language you use, and the key themes in your conversations.
You send your wife five emails a day, particularly at the beginning of the day and at the end. She signs her emails ‘x’. Your football coach emails weekly with the team list for your Saturday five-a-side games. He signs his emails ‘Be there!’.
The richness of the data that you interact with every day is all fodder for a machine intelligence, which can use it to target you in an individualized way. So when you get an email from someone who says they are your football coach, and it looks and feels like your football coach, the attacker is in a position to manipulate your trust in that relationship – and to do so automatically. Malware will be spread in exactly this way, and will continue to propagate itself within new victim environments. The cycle is self-perpetuating.
It is not clear who will win in this war of algorithms pitted against algorithms. There are promising technologies standing up to the fight, arming themselves with AI to mount a proactive defense. More likely, we will see a series of bloody battles, and weaponized AI will be at the heart of them.
A version of this article appeared in Wired.