Remember that old sporting saying, “offense wins games, defense wins championships”? Well, it’s becoming increasingly relevant to today’s cybersecurity market. The truth is that many companies are skipping straight to offense, in the form of flashy machine learning tools, without first getting their defensive basics right. They don’t realize that the most advanced threat detection solutions in the world will be worth very little if they fail to mitigate risk via best practices like patching. Just look at the impact WannaCry had on unpatched organizations around the world.

Where to start

I’m still amazed at how many organizations that should know better fail to follow best practices that industry experts have been advocating for years. At the very least, IT managers should focus on:

Patching: Comprehensive patch management can prevent the vast majority of vulnerabilities from ever impacting your organization. But failure to implement fixes swiftly could turn your company into the next Equifax.

Identity and access management: A properly defined and implemented access control policy can go a long way toward mitigating security risk. Done right, it will ensure that staff get access only to the data and systems they need to do their jobs, whilst access is revoked automatically once they leave the company. It’s especially important to minimize and tightly manage the number of privileged users in the enterprise, to mitigate insider risk and the threat of account hijacking by hackers.

Data classification: You can’t secure what you don’t know you have. Data classification is the first step to drawing up effective security policies governing how that data should be controlled.

Network segmentation: This is particularly important in OT and industrial environments, where there’s a pressing need to separate production systems from the rest. I’m still surprised by how many major organizations operate with flat networks — exposing them to ransomware and info-stealing malware threats.
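The segmentation point lends itself to automation. As a minimal sketch (the subnets and the rule format are invented for illustration, not drawn from any particular firewall product), a script can flag permit rules that would let general IT traffic reach a production/OT segment:

```python
# Sketch: flag firewall rules that would allow IT-sourced traffic into an
# OT/production subnet. Subnet ranges and the rule dictionary format are
# illustrative assumptions, not any vendor's real configuration schema.
import ipaddress

OT_SUBNET = ipaddress.ip_network("10.20.0.0/24")   # hypothetical production/OT segment
IT_SUBNET = ipaddress.ip_network("10.10.0.0/16")   # hypothetical office/IT segment

def rule_breaches_segmentation(rule: dict) -> bool:
    """True if a permit rule lets traffic from the IT segment into the OT segment."""
    if rule["action"] != "permit":
        return False
    src = ipaddress.ip_network(rule["src"])
    dst = ipaddress.ip_network(rule["dst"])
    return src.subnet_of(IT_SUBNET) and dst.subnet_of(OT_SUBNET)

rules = [
    {"action": "permit", "src": "10.10.5.0/24", "dst": "10.20.0.0/24"},  # IT -> OT: flag it
    {"action": "permit", "src": "10.20.0.0/24", "dst": "10.20.0.0/24"},  # intra-OT: fine
    {"action": "deny",   "src": "10.10.0.0/16", "dst": "10.20.0.0/24"},  # explicit deny: fine
]

violations = [r for r in rules if rule_breaches_segmentation(r)]
for r in violations:
    print(f"segmentation breach: {r['src']} -> {r['dst']}")
```

On a genuinely flat network, of course, there are no such rules to audit in the first place, which is exactly the problem.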

A case in point

To illustrate my point, let’s look at WannaCry. The ransomware campaign landed on 12 May 2017 and within a couple of days had infected an estimated 200,000+ systems across 150 countries. The NHS was forced to cancel an estimated 19,000 operations and appointments. Yet a patch for the critical Windows SMB vulnerability exploited in the attack had been available since March. This was in many ways an operational failure, rather than an IT security one.

It’s getting increasingly challenging for IT admins to stay afloat given the tsunami of patches flowing from vendors on a monthly basis. In fact, almost half (49%) of IT professionals polled at VMworld Europe last year said they take more than two weeks to patch, while 20% take more than a month. But it’s far from impossible, as long as organizations adopt a risk-based approach and use automated tools.
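What does a risk-based approach look like in practice? One common heuristic is to rank outstanding vulnerabilities by severity, weighted by exposure, and work the queue top-down. A sketch of the idea in Python (the scores, hostnames, and weighting are invented for illustration; a real implementation would pull findings from a vulnerability scanner and asset inventory):

```python
# Sketch: risk-based patch prioritization. Severity scores and asset data
# below are illustrative, not real scanner output.
from dataclasses import dataclass

@dataclass
class Finding:
    host: str
    cve: str
    cvss: float            # severity score, 0.0-10.0
    internet_facing: bool  # exposed assets get priority

def risk_score(f: Finding) -> float:
    # Simple heuristic: double the weight of internet-facing systems.
    return f.cvss * (2.0 if f.internet_facing else 1.0)

findings = [
    Finding("web01", "CVE-2017-0144", 8.1, True),   # the SMB flaw WannaCry exploited
    Finding("db01",  "CVE-2017-0144", 8.1, False),
    Finding("hr03",  "CVE-2017-5638", 9.8, False),  # the Struts flaw behind Equifax
]

# Patch queue: highest risk first.
queue = sorted(findings, key=risk_score, reverse=True)
for f in queue:
    print(f"{f.host}: {f.cve} (risk {risk_score(f):.1f})")
```

The point isn’t this particular formula, which is deliberately crude, but that the two-week backlog becomes manageable once you stop treating every patch as equally urgent.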

Insurance drives change

On a more positive note, I do think things are beginning to change, and this change is being driven by cyber insurance. If companies can’t demonstrate basic cybersecurity and risk management best practice today, they may well be denied insurance, or find an existing policy is invalid. NTT Security’s 2017 Risk:Value Report, for example, revealed that 45% of companies felt that failing to patch existing IT systems would or could invalidate their company insurance.

As cyber insurance becomes the norm across industries, it will hopefully drive improvements in baseline security.

Don’t get me wrong. Advanced machine learning tools are a great way to spot sophisticated malware missed by other filters. But they should be used as a complement to, rather than a replacement for, standard best practices. So, as we head through 2018, let’s blow the final whistle on preventable security incidents.