Emerging deception tools and techniques could offer game-changing capabilities for advanced malware detection, enabling organisations to detect, analyse and defend against zero-day and sophisticated APT attacks. Analyst firms such as Gartner are excited about the possibilities of deception techniques and a new generation of distributed decoy technologies that could make life far more difficult and costly for those in the malware business.
So how do deception solutions work? The goal is to disrupt malware at multiple points along the attack chain. When an intrusion is detected, the malicious actors and the systems they have compromised are automatically isolated and held in a network deception zone. There, attackers are forced to waste valuable time and resources trying to establish what is real and how to proceed with an attack. They are deceived into seeing things on the network or endpoint that are not there. In some cases, they are convinced they have succeeded against fake systems and network components that behave exactly like an organisation's real assets.
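To make the idea concrete, here is a minimal sketch of one building block of a deception zone: a decoy service that looks like a real server but exists only to record (and waste) an intruder's effort. The banner, port and alerting are all illustrative assumptions, not taken from any specific product.

```python
# Hypothetical sketch of a decoy service in a deception zone.
# Nothing legitimate should ever connect here, so any touch is a signal.
import socket

def run_decoy(host: str = "0.0.0.0", port: int = 2222, max_conns: int = 1) -> None:
    """Accept connections, present a plausible banner, and log the attempt."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen()
        for _ in range(max_conns):
            conn, addr = srv.accept()
            with conn:
                # Looks like a real SSH host to a scanner, but is purely a lure
                conn.sendall(b"SSH-2.0-OpenSSH_8.9\r\n")
                print(f"decoy touched by {addr[0]}")  # high-fidelity alert
```

A real deployment would emulate far richer behaviour and feed alerts into the SOC, but even this skeleton shows the asymmetry: the attacker must spend effort probing it, while the defender gets a near-zero-false-positive signal for free.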
The good news for organisations looking to build a resilient cyber defence architecture is that a deception approach can be built into existing infrastructure. Using existing security, network and server components, controls can be configured to redirect connections from known malicious threat actors to network emulation services or deception decoys within the enterprise network. And for those whose focus right now is endpoint protection, endpoint detection and response tools can also apply deception at the host layer.
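The routing decision described above can be sketched in a few lines. This is a simplified illustration, assuming a threat-intelligence feed of known-malicious source addresses; the hostnames, ports and feed contents are hypothetical.

```python
# Hypothetical sketch: steering connections from known-malicious sources
# to a decoy backend instead of the production service.

# Example threat-intel feed of known-malicious source IPs (illustrative)
KNOWN_MALICIOUS = {"203.0.113.7", "198.51.100.42"}

DECOY_BACKEND = ("decoy.internal", 8080)  # emulated service in the deception zone
REAL_BACKEND = ("app.internal", 8080)     # production service

def route_connection(source_ip: str) -> tuple[str, int]:
    """Return the backend a new connection should be forwarded to."""
    if source_ip in KNOWN_MALICIOUS:
        return DECOY_BACKEND  # the attacker lands on the decoy, not production
    return REAL_BACKEND
```

In practice this logic would live in a load balancer, firewall or SDN policy rather than application code, but the principle is the same: the attacker's traffic is transparently diverted while legitimate users never notice.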
Another innovative solution that integrates with existing SOC tools and endpoint protection is user behaviour anomaly detection. This technology particularly complements existing SIEM (Security Information and Event Management) deployments. Whereas SIEM focuses on collecting all log data and correlating specific events, techniques such as user behaviour anomaly detection focus on building comprehensive visibility (based on user logs) of what is normal for each user and identifying when a user is acting outside of 'normal'.
In my previous post, I talked about establishing an 'attacker's eye view'. In the case of user behaviour anomaly detection, the goal is to join the behavioural dots across all user accounts, devices and IP addresses, building up a picture of normal practice over time so that exceptions can be identified in real time and investigated. This technique can add a range of capabilities to a resilient cybersecurity architecture but, from a malware perspective, it can help analysts spot where malware has entered the environment and how it is behaving.
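The baseline-and-exception idea can be sketched with a toy example: summarise a user's historical login hours as a statistical baseline, then flag events that fall far outside it. The field names, data and threshold are illustrative assumptions; real products model many more signals (devices, IPs, data volumes) with far richer statistics.

```python
# Hypothetical sketch of user behaviour anomaly detection: learn what is
# 'normal' for a user from historical logs, then flag deviations.
from statistics import mean, stdev

def build_baseline(login_hours: list[int]) -> tuple[float, float]:
    """Summarise 'normal' as the mean and standard deviation of login hours."""
    return mean(login_hours), stdev(login_hours)

def is_anomalous(hour: int, baseline: tuple[float, float],
                 z_threshold: float = 3.0) -> bool:
    """Flag a login whose hour is more than z_threshold deviations from normal."""
    mu, sigma = baseline
    if sigma == 0:
        return hour != mu
    return abs(hour - mu) / sigma > z_threshold

# Historical logins for one user, clustered around mid-morning (illustrative)
history = [9, 9, 10, 10, 10, 11, 9, 10, 11, 10]
baseline = build_baseline(history)

is_anomalous(10, baseline)  # typical working-hours login -> False
is_anomalous(3, baseline)   # 3am login -> True, worth investigating
```

A 3am login is not proof of compromise on its own, which is why these tools surface anomalies as context for an analyst rather than as verdicts.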
Managing 'unknown unknowns' is different for every business but, armed with user behaviour anomaly detection, security professionals can 'know' their colleagues and gain invaluable context for rapid analysis and action.