In-depth, ongoing conversations about threat feed requirements, key performance metrics, and the overall quality of threat data in feeds are critical to the threat feed evaluation process. Equally critical are conversations that address potential misconceptions or incorrect assumptions which can skew or misrepresent the results of an evaluation. Building on our last post, which focused on key considerations for threat feed evaluations, this post shares a few of the most common threat feed evaluation misconceptions and false assumptions.
#1: More Threat Data is Better
The most common threat feed evaluation misconception we run into is the quantity-versus-quality assumption: the belief that having more threat data equates to better security.
The Reality: It Isn’t.
In fact, this notion can lead organizations into a trap. More often than not, ingesting large volumes of raw threat data causes more problems than it solves. Too much threat data creates unnecessary noise and increases the volume of false positives, which ultimately leads to alert fatigue and staff burnout rather than preventing them.
The reality for many organizations is that they purchase threat data with the primary goal of blocking threats, not conducting threat research. Therefore, the primary focus should not be on accumulating vast amounts of raw threat data but on procuring high-quality threat intelligence that has already been curated: de-duplicated, verified for accuracy, pruned of inactive threats, and subjected to numerous other checks to ensure the threat data being provided can be trusted. Curated threat feeds prioritize information that is accurate, relevant, and timely, setting it apart from the noise. By prioritizing quality over quantity, organizations can increase the efficiency of their threat detection and response, making the most of their resources to protect against emerging threats.
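To make "curated" a little more concrete, here is a minimal Python sketch of two of the checks mentioned above, de-duplication and pruning of inactive entries, applied to a hypothetical list of (indicator, last-seen) pairs. It is illustrative only and does not reflect zvelo's actual curation pipeline or data format.

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
MAX_AGE = timedelta(days=30)  # assumed freshness window for this sketch

# Hypothetical raw feed entries: (indicator, last_seen timestamp).
raw_feed = [
    ("http://phish.example/login", now - timedelta(days=2)),
    ("http://phish.example/login", now - timedelta(days=5)),          # duplicate sighting
    ("http://stale.example/payload.exe", now - timedelta(days=400)),  # long inactive
]

def curate(entries, max_age=MAX_AGE):
    """De-duplicate indicators and drop any not seen within max_age."""
    latest = {}
    for indicator, last_seen in entries:
        # Keep only the most recent sighting of each indicator.
        if indicator not in latest or last_seen > latest[indicator]:
            latest[indicator] = last_seen
    # Prune indicators that have gone inactive.
    return {i: ts for i, ts in latest.items() if now - ts <= max_age}

print(curate(raw_feed))  # only the recent, de-duplicated phishing URL remains
```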
#2: You Can Use Old Data or Open-Source Threat Intel to Evaluate Threat Feeds
The next most common evaluation misconception we run into is that clients or prospects want to evaluate threat feeds by using open-source threat intelligence, or threat data that includes old and inactive threats for comparison.
The Reality: You Shouldn’t.
Neither of these options works for evaluating threat data because it is difficult to gauge the accuracy, relevance, or timeliness of a feed this way, especially when comparing against zvelo's curated threat intelligence feeds. Why? Because it is not an apples-to-apples comparison. zvelo's threat data is meticulously curated to focus on currently active threats that have been validated and enriched with additional metadata attributes to ensure maximum accuracy with the lowest possible false positive rates. By contrast, open-source intelligence and other non-curated threat feeds tend to contain duplicates, false positives, and stale entries for threats that have been inactive for a year or more and are no longer relevant.
For example, if you were to compare a list of threats from zvelo's threat intelligence feeds against an older list of threats or an open-source feed, it might appear that the zvelo feed is missing a lot of threats. Upon further inspection, however, it becomes clear that the supposed 'missing threats' are actually duplicates, threats that have long been deactivated, or entries that cannot be validated and positively identified as phishing or malicious. While many evaluators think they want, or need, the inactive threat data, the reality is that it has no actual value to the vast majority of organizations. It does, however, generate costs: infrastructure to store irrelevant data, high false positive rates that cause alert fatigue, team burnout, and overall inefficiencies that hinder threat protection and reduce an organization's ROI.
The most effective threat feed evaluation approach is to obtain a test list from zvelo and compare it against what is identified within the active traffic on your network. Measure the percentage of real events in recent history that correlate with the feed you are evaluating. If this percentage is high, there is a good chance the feed will help you identify similar events in the future. Keep in mind, however, that it takes time in a production environment to see a representative success rate: what you observe during the test is only a snapshot of active traffic, which has a lower chance of producing immediate hits.
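As a rough illustration of that measurement, the sketch below computes the share of recent, confirmed events whose indicators appear in the test list. The data shapes (plain collections of URLs) and values are assumptions made for illustration, not a prescribed evaluation format.

```python
def feed_hit_rate(recent_events, feed_indicators):
    """Fraction of recent confirmed events whose indicators appear in the feed's test list."""
    events = list(recent_events)
    if not events:
        return 0.0
    hits = sum(1 for event in events if event in feed_indicators)
    return hits / len(events)

# Illustrative values only: indicators tied to events observed on your network recently.
observed_events = [
    "http://phish.example/login",
    "http://bad.example/dl/payload.exe",
    "http://unknown.example/track?id=42",
]
test_list = {"http://phish.example/login", "http://bad.example/dl/payload.exe"}

rate = feed_hit_rate(observed_events, test_list)
print(f"{rate:.0%} of recent events correlated with the feed under evaluation")
```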
#3: Blocking the Base Domain vs Full-Path URL is a Good Strategy
The next most common threat feed evaluation misconception is the idea that blocking threats at the base domain level, vs the full-path level, is a viable defense strategy for strong threat protection.
The Reality: An Optimal Strategy Requires Both Capabilities.
We see threats at both the base domain and full-path levels. However, a threat that exists at the full-path level does not automatically mean that the entire domain should be considered malicious or phishing. Plenty of commonly whitelisted sites, such as OneDrive, Google Drive, or Dropbox, are used to deliver malware. For example, attackers are frequently observed using Dropbox to host malicious documents and malware, but one malicious full-path URL does not mean that all of Dropbox is malicious. While it is certainly possible to block the entire domain, doing so can be problematic and carry serious consequences given Dropbox's widespread and legitimate usage.
Conversely, there are scenarios, such as phishing campaigns using personalized URLs, where blocking a base domain is the most effective defense strategy. In this type of scenario, attackers run automated phishing campaigns in which bots generate personalized or randomized URLs for each victim. While you could block each full-path URL individually, it is far more effective to block these phishing threats at the campaign level, because waiting for the next full-path URL to be detected would be too late for the intended target.
For example, if a campaign is identified at the domain level, a record like https://<tld+1>.tld/ would be inserted into zvelo’s phishing threat feed. This means that all URLs with this TLD+1 should be blocked whether or not there is a path or a subdomain on the URL. If the campaign was detected at the subdomain level, the record inserted will be similar to https://<sub1.sub2.tld+1.tld>/.
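The sketch below shows one way a feed consumer might apply such records, assuming each record has been normalized to the host scope it covers (a TLD+1 for domain-level campaigns, a full host for subdomain-level ones). The scope names are hypothetical and this is not zvelo's actual record format.

```python
from urllib.parse import urlsplit

# Hypothetical feed records, normalized to the host scope each one covers.
blocked_scopes = {
    "campaign-domain.example",             # domain-level record (TLD+1)
    "login.portal.other-campaign.example", # subdomain-level record
}

def is_blocked(url, scopes=blocked_scopes):
    """True if the URL's host matches a blocked scope or sits beneath it,
    regardless of path, query string, or additional subdomains."""
    host = (urlsplit(url).hostname or "").lower()
    return any(host == scope or host.endswith("." + scope) for scope in scopes)

print(is_blocked("https://mail.campaign-domain.example/inbox?id=123"))   # True: covered by the TLD+1 record
print(is_blocked("https://campaign-domain.example/"))                    # True: the TLD+1 itself
print(is_blocked("https://login.portal.other-campaign.example/verify"))  # True: subdomain-level record
print(is_blocked("https://portal.other-campaign.example/"))              # False: outside the subdomain record's scope
```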
The top three threat feed evaluation misconceptions and false assumptions shared in this post are intended to facilitate discussions around what does and does not work for evaluating a threat feed, and why. Ultimately, these conversations help set appropriate expectations and guide organizations toward the most accurate evaluation results, ensuring they select a threat feed that is best suited to their needs and will deliver maximum threat protection.
Next Up:
Part 3: Primary Questions an Evaluation Should Answer