Machine learning is a fantastic tool; in fact, we’ve been using it in some form or another at NTT Security for over a decade. It helps to protect our customers by allowing us to spot signs of a potential cyber threat that might otherwise go unnoticed. As the technology develops and we begin to see true artificial intelligence (AI), I’m sure these capabilities will become even more valuable to us and others in the security industry. But is it the Holy Grail of breach detection that many believe?

So far, we’ve all been studiously avoiding a major issue heading straight for us: what happens when the black hats start using the same technology to outwit us? If and when that happens, we’ll merely have swapped one weapon for another in the never-ending arms race that is cybersecurity.

Tit-for-tat

Machine learning continues to grow and develop, from multi-layered deep learning tools to unsupervised algorithms. It’s particularly good at joining the dots between isolated events to indicate a coming threat. Not only is it better at this than the human eye, but automating such tasks also takes the strain off increasingly stretched security teams: the global shortfall of industry professionals is predicted to reach 1.8 million by 2022.
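To make that concrete, here’s a minimal sketch of the unsupervised approach: an isolation forest trained on simulated per-host activity, flagging events that only look suspicious in combination. The feature names and numbers are invented for illustration, not drawn from any real deployment.

```python
# A minimal, illustrative sketch: unsupervised anomaly detection over
# per-host event features. The features and thresholds are hypothetical;
# real pipelines use far richer telemetry.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated "normal" activity: [logins/hour, bytes_out_mb, failed_auths]
normal = rng.normal(loc=[5, 20, 1], scale=[2, 5, 1], size=(500, 3))

# Isolated events that only look suspicious when viewed together:
# modest login activity, but unusual egress plus repeated auth failures.
suspicious = np.array([[6, 180, 9], [4, 150, 12]])

model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# predict() returns 1 for inliers, -1 for anomalies
print(model.predict(suspicious))  # expected: [-1 -1]
```

The point isn’t this particular algorithm; it’s that no single column looks alarming on its own, yet the model flags the combination — exactly the dot-joining a human analyst struggles to do at scale.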

But is it the be-all and end-all?

All through the relatively brief history of modern computing, there’s been a continual back and forth between those tasked with defending systems and data and those looking to attack. You can see it in the development of polymorphic malware to outwit traditional signature-based detection methods. And you can certainly see it in the battle between spammers and anti-spam tools that has taken place over the past two decades.

Is there any reason to assume that, when it comes to AI, the bad guys won’t seek to return our fire?

How will it help them?

AI could give cyber criminals, nation state operatives and hacktivists several useful advantages. Just as it helps us find the needle in the haystack (the malware threat hiding in plain sight), it could also enable them to automate the discovery of vulnerabilities in key systems. Imagine what havoc could be wreaked by self-learning malware designed to continually adapt to its environment, with no input required from its masters. As always, the upper hand is with the attacker, who only needs to find one vulnerability to succeed, whereas we defenders must get it right every time; a single mistake lets them in.
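To give a flavor of what “automating the discovery of vulnerabilities” means at its simplest, here’s a toy fuzzing loop against a deliberately buggy parser invented for this example; real fuzzers, let alone any ML-guided successors, mutate inputs far more intelligently than this.

```python
# A toy illustration of automated vulnerability discovery: random-input
# fuzzing against a contrived, deliberately buggy parser. parse_record()
# is a hypothetical stand-in, not a real component.
import random
import string

def parse_record(data: str) -> int:
    """A contrived parser with a lurking bug."""
    fields = data.split(",")
    return len(fields[2])  # crashes when fewer than three fields arrive

random.seed(1)
crashes = []
for _ in range(1000):
    fuzz = "".join(random.choices(string.printable, k=random.randint(0, 20)))
    try:
        parse_record(fuzz)
    except IndexError:
        crashes.append(fuzz)

print(f"{len(crashes)} crashing inputs found out of 1000 tries")
```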

AI could also be used by the black hats to model, baseline and then imitate “normal” user behavior to craft highly convincing phishing emails.
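The baselining itself needs nothing exotic. The sketch below uses simple statistics on invented data to show how a user’s “normal” email send time might be modeled; the same baseline that lets a defender flag a 3 a.m. message tells an attacker when a phishing email will blend in.

```python
# Illustrative only: baselining "normal" behavior with simple statistics.
# A z-score measures how far an event sits from a user's typical email
# send hour. All numbers here are hypothetical.
import statistics

send_hours = [9, 9, 10, 11, 9, 10, 14, 9, 10, 11]  # observed send times (24h)
mean = statistics.mean(send_hours)
stdev = statistics.stdev(send_hours)

def zscore(hour: float) -> float:
    """How many standard deviations an event sits from the baseline."""
    return (hour - mean) / stdev

print(f"baseline: {mean:.1f}h, spread {stdev:.1f}h")
print(f"3 a.m. email: z = {zscore(3):.1f}")    # far outside the baseline
print(f"10 a.m. email: z = {zscore(10):.1f}")  # blends right in
```

Knowing the baseline, the attacker simply schedules the lure for mid-morning and borrows the victim’s habitual phrasing.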

AI might start off as the preserve of a select few cyber crime gangs and nation states with the resources to invest in it. But, as with tools and techniques before it, the trickle-down effect will see the technology eventually democratized to the masses via dark web forums.

So what does the future hold? Machines fighting machines in an endless cycle? While I’m not quite as pessimistic as Elon Musk, who has warned of a potential Skynet-style “AI-pocalypse”, I do think we should remember the importance of human input and moral and ethical standards in the race to embrace AI as the future of cybersecurity. Without human training and guidance, machine learning systems would be useless. Could humans also be the deciding factor when it comes down to a machine-versus-machine battle? Perhaps.

In the meantime, one report suggests 91% of IT security professionals already believe AI will be used against organizations in the future. One thing’s for certain: the AI wars have only just begun.