10 Jul Drowning In Data – The Event Fatigue Problem
Modern security systems generate a torrent of events and logs for security teams to review. Unbelievably, it’s considered perfectly ‘normal’ for a security solution to generate hundreds or thousands of alerts for the hapless defender to sift through. Approaches such as machine learning and correlation are supposed to help, but in practice they mostly make post-mortem analysis easier.
‘Event fatigue’ is a real concern. Seasoned security professionals are no longer surprised to find that alerts from monitoring systems are ignored, or even worse – disabled, often in the name of ‘tuning’ the system.
The consequences? Public reporting has it that Target Corp’s anti-malware solution faithfully raised alerts about a possible malicious binary ahead of the company’s 2013 breach, but the alerts were ignored.
Only after an analyst has waded through the log data, analysed the events, and removed the false positives can they deal with the actual threats.
In practice, this process rarely happens before an incident because it’s so expensive and time-consuming. Nobody has the time to proactively convert gigabytes of raw data into meaningful information; it only happens after an incident has occurred.
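To make the triage workflow concrete, here is a minimal sketch of the filtering an analyst performs by hand: drop alerts whose signatures have already been investigated and deemed benign, then collapse duplicates into counts. The alert records, signature names, and allowlist are all hypothetical, chosen purely for illustration.

```python
from collections import Counter

# Hypothetical alert stream, as a monitoring system might emit it.
alerts = [
    {"signature": "ET SCAN Nmap", "src": "10.0.0.5"},
    {"signature": "Backup job heartbeat", "src": "10.0.0.9"},
    {"signature": "ET SCAN Nmap", "src": "10.0.0.5"},
    {"signature": "Possible malicious binary", "src": "10.0.0.7"},
]

# Signatures the team has already investigated and classed as benign.
allowlist = {"Backup job heartbeat"}

def triage(alerts, allowlist):
    """Drop allowlisted alerts, then collapse duplicates into counts."""
    kept = [a for a in alerts if a["signature"] not in allowlist]
    return Counter((a["signature"], a["src"]) for a in kept)

for (signature, src), count in triage(alerts, allowlist).items():
    print(f"{count}x {signature} from {src}")
```

Even this toy version shows the cost of the approach: the allowlist must be built and maintained by hand, and every new false positive adds to it.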
Is there a better way? Why not design systems that only alert when something meaningful truly happens? When the event is the anomaly, you save time, money, and can actually get around to dealing with real threats.
This is one of the primary benefits of decoy-based systems. No legitimate user or service ever touches a decoy, so by definition any traffic to it is suspicious, and any event is an alert that requires your attention.
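The decoy principle can be sketched with a plain TCP listener on an otherwise unused port; this is an illustrative assumption, not a description of any particular product. Because nothing legitimate runs on the port, every accepted connection is recorded as an alert, with no filtering or tuning needed.

```python
import socket
import threading
import time
from datetime import datetime, timezone

alerts = []  # every entry here deserves attention; there are no false positives

def decoy_listener(server: socket.socket, stop: threading.Event) -> None:
    """Accept connections on a port where nothing legitimate runs.
    Every connection is recorded as a high-confidence alert."""
    server.settimeout(0.5)
    while not stop.is_set():
        try:
            conn, (src_ip, src_port) = server.accept()
        except socket.timeout:
            continue
        alerts.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "src": f"{src_ip}:{src_port}",
            "note": "connection to decoy service",
        })
        conn.close()

# Bind to an ephemeral localhost port for the demonstration.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen()
port = server.getsockname()[1]

stop = threading.Event()
worker = threading.Thread(target=decoy_listener, args=(server, stop))
worker.start()

# Simulate an attacker probing the decoy.
probe = socket.create_connection(("127.0.0.1", port))
probe.close()

# Give the listener a moment to record the connection, then shut down.
for _ in range(100):
    if alerts:
        break
    time.sleep(0.05)
stop.set()
worker.join()
server.close()
print(f"{len(alerts)} alert(s):", alerts[0]["note"])
```

The contrast with the triage-heavy approach is the point: there is no allowlist, no deduplication logic, and no tuning, because the only way to generate an event is to touch something nobody should be touching.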
We’ve all tried the old way. It didn’t work. It’s time for something better.