Testing labs always "independently" verify that their client's product is the best. Well, independent tests these days are a joke.
In the last week, two different reports from December 2008 came to my attention: one from Cascadia Labs commissioned by Trend Micro and the other from Tolly Group (report has since been removed from the web) commissioned by Websense. They both have sections on the effectiveness of the major web filtering companies in blocking malicious websites.
Of these two reports, the Cascadia Labs report was slightly fairer, ranking Trend Micro as able to block 53% of web threats (the highest, presumably with anti-virus enabled as well as URL filtering), followed by McAfee (42%), Blue Coat (31%), Websense (23%), and IronPort (20%). I'm ignoring the SurfControl entry (9%): since Websense bought SurfControl, the product is essentially defunct and SurfControl partners are being urged to migrate to Websense.
The Tolly Group report said, "In tests with 379 URLs containing binary exploits or compromise code, Websense blocked 99% of URLs, versus other vendors who blocked between 53% to 91%." Let's look just at the results for Websense versus Trend Micro in terms of exploit detection in the two tests.
Well, Trend Micro is consistent, but depending on who you ask, Websense is either twice as good or half as good. But here's the kicker: the Tolly report says, "All the URLs tested were mined from Websense ThreatSeeker network." So what they're saying is that Websense is very good (but not perfect) at detecting exploits at URLs it already knows to host exploits.
Now here's the bottom line. A lot of folks make claims about security, but it's a hard thing to verify. zvelo, the sponsor of this blog, for example, detected 35k new malicious URLs last week and currently has over 1.5m recently verified malicious URLs in its database. The combined lists of Google, Trend Micro, Sunbelt, PayPal, Mozilla, AOL, and Consumer Reports, on the other hand, total only 318k [source: stopbadware.org]. But those 318k might be URLs not covered in the zvelo list at all, so the question becomes: how do you test these types of products?
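The point about raw list sizes is worth making concrete: two lists of very different sizes can have anywhere from total to zero overlap, and only the overlap tells you whether one list subsumes the other. A minimal sketch, using made-up stand-in URLs rather than any real vendor data:

```python
# Hypothetical illustration: raw list sizes say nothing about coverage.
# These tiny sets stand in for vendor URL lists; none of this is real data.

list_a = {"http://bad1.example/x", "http://bad2.example/y", "http://bad3.example/z"}
list_b = {"http://bad3.example/z", "http://bad4.example/w"}

overlap = list_a & list_b   # URLs both lists know about
only_a = list_a - list_b    # URLs only the first list covers
only_b = list_b - list_a    # URLs only the second list covers

print(f"overlap: {len(overlap)}, unique to A: {len(only_a)}, unique to B: {len(only_b)}")
```

Here list B is smaller, yet still covers a URL that A misses, which is exactly why comparing database sizes alone tells you nothing about which product would block more real threats.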
I have some thoughts on how truly independent testing could be done, including how to collect and verify malicious URLs without relying on a particular list that some vendor may already ingest directly, but first I want to put the question out there:
- What testing methodology should be used in a fair comparison of the ability of different products to block access to compromised, phishing, and otherwise malicious websites?
- Should the tests include things like malware call-home addresses? If so, where should the URLs come from, and what is a fair sample size?
- What is a fair timeframe from first detection?
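To make the questions above a bit more concrete, here is one possible shape for a harness: pool verified URLs from independent sources, sample uniformly so no single vendor's feed dominates, and stratify block rates by time since first detection. Everything here is hypothetical: `pooled_urls` stands in for an independently verified feed, `is_blocked` stands in for actually querying a product, and the product names are placeholders.

```python
import random
from datetime import datetime, timedelta

def is_blocked(product, url):
    # Stub: a real harness would fetch the URL through the product's
    # proxy or query its categorization API. This fake version just
    # returns an arbitrary deterministic-per-pair answer.
    return hash((product, url)) % 2 == 0

def block_rate(product, urls):
    # Fraction of the given URLs the product blocks.
    blocked = sum(1 for u in urls if is_blocked(product, u))
    return blocked / len(urls)

now = datetime(2009, 1, 15)
# Stand-in pool: 200 verified-malicious URLs with first-detection times.
pooled_urls = [
    {"url": f"http://malicious{i}.example/", "first_seen": now - timedelta(hours=i)}
    for i in range(200)
]

# Sample uniformly from the pool, then stratify by age since first
# detection -- one of the open questions above.
sample = random.sample(pooled_urls, 50)
fresh = [e["url"] for e in sample if now - e["first_seen"] <= timedelta(hours=24)]
older = [e["url"] for e in sample if now - e["first_seen"] > timedelta(hours=24)]

for product in ("ProductA", "ProductB"):
    for label, urls in (("<24h old", fresh), (">24h old", older)):
        if urls:
            print(f"{product} {label}: {block_rate(product, urls):.0%}")
```

The point of the stratification is that a product with a huge but slow-moving database can look great on month-old URLs and terrible on day-old ones; reporting a single aggregate number hides exactly the difference that matters.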
Any feedback is greatly appreciated.