Cybersecurity: When They Talk About the Confusion Matrix
Confusion Matrix
A confusion matrix is a tabular summary of the number of correct and incorrect predictions made by a classifier. It is used to evaluate the performance of a classification model through metrics such as accuracy, precision, recall, and F1-score.
If you are working with an imbalanced dataset, it is usually better to use the confusion matrix, rather than accuracy alone, as the evaluation basis for your machine learning model.
Understanding the Confusion Matrix
The following four terms are the basic building blocks for the metrics we are interested in:
- True Positives (TP): the actual value is Positive and the prediction is also Positive.
- True Negatives (TN): the actual value is Negative and the prediction is also Negative.
- False Positives (FP): the actual value is Negative but the prediction is Positive. Also known as a Type 1 error.
- False Negatives (FN): the actual value is Positive but the prediction is Negative. Also known as a Type 2 error.
For a binary classification problem, this gives a 2 x 2 matrix with four values, as shown below:

|                 | Predicted Positive | Predicted Negative |
|-----------------|--------------------|--------------------|
| Actual Positive | TP                 | FN                 |
| Actual Negative | FP                 | TN                 |
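To make the four counts and the derived metrics concrete, here is a minimal sketch in Python (assuming scikit-learn is installed; the label lists are made up purely for illustration):

```python
# Minimal sketch: confusion matrix and derived metrics for a binary classifier.
# The labels below are illustrative only (1 = positive, 0 = negative).
from sklearn.metrics import (
    confusion_matrix,
    accuracy_score,
    precision_score,
    recall_score,
    f1_score,
)

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]  # actual values
y_pred = [1, 0, 0, 1, 0, 1, 1, 0, 1, 0]  # model predictions

# For binary labels, ravel() unpacks the 2 x 2 matrix as TN, FP, FN, TP.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp}, TN={tn}, FP={fp}, FN={fn}")

print("Accuracy :", accuracy_score(y_true, y_pred))   # (TP + TN) / total
print("Precision:", precision_score(y_true, y_pred))  # TP / (TP + FP)
print("Recall   :", recall_score(y_true, y_pred))     # TP / (TP + FN)
print("F1-score :", f1_score(y_true, y_pred))         # harmonic mean of precision and recall
```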
What You Need To Know About False Positives and False Negatives
Understanding the difference between false positives and false negatives, and how they relate to cybersecurity, is important for anyone working in information security. Why? Investigating false positives wastes time and resources and distracts your team from the real cyber incidents (alerts) coming out of your SIEM.
On the flip side, missing false negatives (uncaught threats) increases your cyber risk, reduces your ability to respond to those attackers, and, in the event of a data breach, could mean the end of your business.
What Are False Positives?
False positives are mislabeled security alerts that indicate a threat where, in reality, there isn't one. These false/non-malicious alerts (SIEM events) add noise for already overworked security teams and can be triggered by software bugs, poorly written software, or unrecognized network traffic.
By default, most security teams are conditioned to ignore false positives. Unfortunately, this practice of ignoring security alerts, no matter how trivial they may seem, can create alert fatigue and cause your team to miss actual, important alerts tied to real/malicious cyber threats (as was the case with the Target data breach).
These false alarms account for roughly 40% of the alerts cybersecurity teams receive each day, and at large organizations they can be overwhelming and a huge waste of time.
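To make the cost of noisy alerts concrete, here is a back-of-the-envelope sketch in Python. The daily alert volume is a hypothetical number, and the ~40% false-alarm share is the figure quoted above; the sketch treats triaged SIEM alerts as confusion-matrix outcomes and computes the precision of the alert stream:

```python
# Hypothetical illustration: if ~40% of daily SIEM alerts are false alarms,
# the precision of the alert stream is only about 0.6.
daily_alerts = 1000                              # assumed alert volume, not from the article
false_positives = int(daily_alerts * 0.40)       # false alarms (benign events flagged)
true_positives = daily_alerts - false_positives  # alerts tied to real threats

alert_precision = true_positives / (true_positives + false_positives)
print(f"Alert precision: {alert_precision:.2f}")  # ~0.60
```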
What Are False Negatives?
False negatives are uncaught cyber threats, overlooked by security tooling because they are dormant, highly sophisticated (e.g. fileless or capable of lateral movement), or because the security infrastructure in place lacks the technological ability to detect them.
These advanced/hidden cyber threats are capable of evading prevention technologies, like next-gen firewalls, antivirus software, and endpoint detection and response (EDR) platforms trained to look for “known” attacks and malware.
No cybersecurity or data breach prevention technology can block 100% of the threats it encounters. False negatives are the roughly 1% of malware and cyber threats that most methods of prevention are prone to miss.
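The detection side can be framed the same way, this time in terms of recall. Another hypothetical sketch (the threat count is invented; the roughly-1% miss rate is the figure from the paragraph above):

```python
# Hypothetical illustration: if prevention tooling misses roughly 1% of real threats,
# recall (detection rate) is about 0.99, but every false negative is an uncaught attack.
threats_seen = 10_000                       # assumed number of real threats, not from the article
false_negatives = int(threats_seen * 0.01)  # threats that slip past detection
true_positives = threats_seen - false_negatives

recall = true_positives / (true_positives + false_negatives)
print(f"Recall (detection rate): {recall:.2f}")
print(f"Uncaught threats (false negatives): {false_negatives}")
```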
Strengthening Your Cybersecurity Posture
The existence of both false positives and false negatives raises the question: does your cybersecurity strategy include proactive measures? Most security programs rely on preventative and reactive components, establishing strong defenses against the attacks those tools know exist. Proactive security measures, on the other hand, include implementing incident response policies and procedures and actively hunting for hidden/unknown attacks.
Thanks for Reading
Hope you liked it…
If you want to connect with me, below is my LinkedIn URL -