We’re now just over one month into the brave new world of the General Data Protection Regulation (GDPR). The sky has not yet fallen in; regulators, crazed with their new-found powers, have not (yet) started issuing multi-billion dollar fines for minor infractions. Indeed, they seem to be playing ‘catch-up’ themselves, as anyone trying to register their Data Protection Officer (DPO) will know.

In June, I attended Infosecurity Europe in London for the first time in a couple of years. The GDPR was very much in evidence, and speakers across the event majored on it, unsurprisingly: while the regulation itself is derivative of 20+ years of case law, for many businesses this is the first time they have taken privacy seriously. It’s also a great way for security teams to make the case for more budget, so it’s time to dust off those wish lists while the board worries about becoming the first entry in a new body of case law.

This being Infosec, the focus was on security rather than privacy, with the rise of ‘machine learning’ and ‘AI’ a key theme. These systems baseline normal network behavior so that anomalies can be detected and flagged to security analysts. NTT Security itself uses that approach, and it makes absolute sense; it’s a key way to detect asymmetric threats.

I asked every vendor whether it had considered the privacy implications of this. The answer was mostly that the systems are looking at a ‘box’: an endpoint, IP addresses, and the behavior of ‘kit’. But certainly for endpoints, and sometimes for other bits of kit, there is a human, a data subject, on the other end.

So, in baselining ‘normal’ network or endpoint behavior, such a system by design and by default also baselines each employee’s specific patterns of behavior, building a profile of every employee in the business. It is, in effect, surveilling their working patterns in order to detect when they (or their endpoint) do something out of the ordinary.
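To make that point concrete, here is a minimal sketch of the baselining idea in Python. The user names, traffic figures and threshold are entirely hypothetical, and real products are far more sophisticated, but the shape is the same: the ‘normal’ baseline being learned is a per-person behavioral profile.

```python
# Minimal, hypothetical sketch: learn each user's "normal" daily outbound
# traffic, then flag days that deviate sharply from that baseline.
# Note that the baseline itself is a profile of a named individual's
# behavior -- i.e. personal data under the GDPR.

from statistics import mean, stdev

# Hypothetical history of daily outbound megabytes, keyed by employee endpoint
history = {
    "alice-laptop": [120, 135, 110, 128, 140, 115, 132],
    "bob-laptop":   [300, 280, 310, 295, 305, 290, 315],
}

def is_anomalous(user: str, todays_mb: float, threshold: float = 3.0) -> bool:
    """Flag a reading more than `threshold` standard deviations from the user's baseline."""
    baseline = history[user]
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(todays_mb - mu) > threshold * sigma

# 2,000 MB leaving a laptop that normally sends ~125 MB a day gets flagged
print(is_anomalous("alice-laptop", 2000))  # True -> raised to a security analyst
```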

AI-based security is therefore also creating a picture of end-user behavior, which is, by its nature, personal data. You can’t divorce the kit from the individual using it.

We know that the insider threat is a huge one: employees can exfiltrate data or simply get phished. So it’s legitimate to be looking at their behavior. As the privacy guy at NTT Security, it’s my job to make sure our company keeps personal data safe, and making sure it can’t be stolen or compromised by a colleague is a key part of that. In law, though, this must be balanced against the privacy rights of the individual.

The tool for testing this balance of interests is the Data Protection Impact Assessment (DPIA): we assess the processing to look for risks to individuals’ privacy rights. NTT Security is undertaking this at the moment, to make sure our customers have the information they need through our managed services.

Machine learning and AI-based security analytics are the way forward in meeting the myriad threats we face. But the case in their favor needs to be thoroughly explained, through DPIAs, privacy notices and good communication: this is about keeping employees’ data safe and protecting the business. Documented controls are needed too, against misuse, the combining of data sets for performance management, or just plain snooping.

In fact, there is a strong argument that it’s actually less privacy-invasive to have a machine look at employee behaviors than to have someone in the IT team, whom you’ll bump into in the break room, do it… a machine doesn’t judge, obeys its rules and doesn’t chat to its friends about you; at least, of course, until the machine itself becomes truly sentient and can make up its own mind…