zvelo, Inc. (pronounced “zeh-vee-low”) provides the industry’s most accurate and comprehensive URL categorization and malicious detection solutions for web content, traffic, and connected devices. Our data and security services support some of the world’s most successful Network Security, Communications, Router, and Ad Tech companies—helping make the internet safer and more secure for all!
This is our story…
From the Beginning…
zvelo, formerly eSoft, Inc., was founded in 1984 by Philip L. Becker. Becker, a former rocket scientist, designed and built proprietary bulletin board system (BBS) software named The Bread Board System (TBBS), helping to revolutionize commercial BBSes and the trajectory of online communications in the early 1990s. As broadband became widely available in the late 1990s and early 2000s, the company evolved to provide “all-in-one” internet appliances, integrating the functions of a router, firewall, VPN, DHCP, web hosting, anti-spam, web filtering, anti-virus, and more.
The “network security” appliance went through a number of product iterations, among them a product named the “IPAD”, a shortening of Internet Protocol Adaptor. The IPAD line, running IPAD-OS, provided TBBS and internet access just as the World Wide Web in its modern form was growing. The IPAD, one of the very first UTM (Unified Threat Management) appliances, was supported from 1995 to 2000. Inevitably, the name and registered trademark were dropped, as it was a “ridiculous” name for a computing product.
During this timeframe (the mid-2000s), eSoft licensed the appliance’s underlying software, including anti-spam, anti-virus, and a web filtering database, from third parties. Around this time, three “800 pound gorillas” dominated the URL categorization database market. These companies took an OEM (Original Equipment Manufacturer) approach, licensing their databases to network security and other companies who would integrate the database into their applications and products. It was around 2005 that eSoft began experiencing serious limitations and problems with the third-party web filtering databases.
Website Categorization and URL Filtering in Web 2.0
It was at this time that websites were becoming increasingly dynamic and interconnected. The explosion of social media (Myspace, Facebook, etc.), networking, and blogging gave way to significantly larger and more complex websites. It was now entirely possible for a social media or networking site to contain tens of thousands, if not hundreds of thousands, of individual pages, all wrapped under a single domain. The web categorization databases of the time were (just barely) sufficient at “crawling,” categorizing, and organizing websites that were predominantly static in nature. As web technologies took off, it became clear these systems were neither designed for, nor capable of, managing categorizations for dynamic websites in the world of Web 2.0.
Additionally, as businesses and organizations increased security measures during this period (anti-spam, antivirus, firewalls), hackers went looking for new attack methods and vectors. The complexity and scale of newly emerging website trends proved to be the perfect environment for bad actors. It was now possible to obtain nearly unlimited inventories of sites for use in phishing and other campaigns. Needless to say, the cybersecurity community struggled to maintain pace as threats largely shifted from email (and malicious attachments) toward web-based exploits for phishing, drive-by malicious downloads, and more.
The new generation of highly dynamic websites and hosting services exposed major flaws in the traditional approaches used by the industry giants who dominated the URL categorization database market. These major problems included:
Base Domain categorization was no longer sufficient.
Content on joe.blogspot.com could vary dramatically (understatement: it could be completely different) from content on jane.blogspot.com. Page-level categorization across an entire website was mandatory to provide any meaningful level of accuracy and understanding. At the time, most new hosting, blogging, and social media sites had little interest in (or bandwidth for) policing or moderating content, resulting in commonplace situations where “child safe” content resided right next to objectionable content or pornography on the same page. This required the ability to differentiate and categorize content at the individual page, post, or article level, across the entire website.
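The difference between base-domain and page-level lookups can be sketched with a toy example. The category data, function names, and lookup scheme here are purely illustrative (not zvelo’s actual database or API), but they show why a single label per base domain fails for user-generated-content hosts:

```python
# Illustrative sketch: base-domain vs. page-level URL categorization.
# All category data below is hypothetical example data.
from urllib.parse import urlparse

DOMAIN_DB = {"blogspot.com": "Blogs"}  # one label per base domain

PAGE_DB = {  # one label per individual page
    "joe.blogspot.com/recipes": "Food",
    "jane.blogspot.com/casino-tips": "Gambling",
}

def base_domain_category(url):
    """Old-style lookup: every page on a host shares one label."""
    host = urlparse(url).hostname or ""
    base = ".".join(host.split(".")[-2:])  # naive base-domain extraction
    return DOMAIN_DB.get(base, "Uncategorized")

def page_category(url):
    """Page-level lookup, falling back to the base-domain answer."""
    parsed = urlparse(url)
    key = (parsed.hostname or "") + parsed.path
    return PAGE_DB.get(key, base_domain_category(url))

print(base_domain_category("http://jane.blogspot.com/casino-tips"))  # Blogs
print(page_category("http://jane.blogspot.com/casino-tips"))         # Gambling
```

With only a base-domain database, every blogspot.com subdomain and page inherits the same “Blogs” label; the page-level table is what lets a filter distinguish joe’s recipes from jane’s gambling tips. (A production system would also need proper public-suffix handling rather than the naive two-label split shown here.)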
There was LITTLE to NO detection or protection against malicious websites.
At the time, the industry giants’ focus was on performing categorization of content at the base domain level only. This overlooked any malware lurking on a site beneath the content (words and pictures) of a website’s homepage. Again, they were not looking at anything below the base domain level. Hackers fully understood these holes. They began obtaining access to, or hacking, pages deep within a website, particularly on reputable websites and hosting providers, where they would plant and deploy exploits and infections. Once this was complete, it was child’s play to use social engineering and social media to drive swaths of traffic to the infected page. If a hacker couldn’t get to a person or PC directly, it was now exceedingly easy to get the person to “infect themselves.” The market required a solution for malicious website detection and protection beyond the base domain level.
Weekly or Monthly Updates just didn’t cut it. Content changed continuously on the dynamic websites in this new era of the internet.
Bloggers and news organizations were adding new articles and updating posts on an hourly basis. Hackers were planting exploits and attacks that had shelf-lives of just a few days, after which the hacker would move to a new location to avoid attention and detection (similar to fine watch, jewelry, and fashion “merchants” on big city streets). At the time, the industry heavyweights had weekly update schedules (at best), which was woefully insufficient for a market that was changing on a continuous, real-time basis. Network security and communications companies required a solution that would not only provide categorization and malicious detection at the page level, but also propagate updates to web filtering and protection systems in real time. This was the only way to provide any level of protection to end users and web surfers. It was quickly becoming paramount to provide immediate protection against the most sensitive objectionable/porn content as well as dangerous (aka malicious) sites.
Response times of days or weeks (or never) for correcting false positives left websites unreachable and users in the dark.
It happened then and it still happens today. As long as web content is dynamic and continuously changing, this will remain true. If a URL is found to be miscategorized, users will ask for the categorization to be changed or updated. After all, if a flower shop’s website is miscategorized as pornography (whether by accident or because of a recent compromise), web filtering and network security solutions may entirely block access. That is bad for business and could even cause the business to shut down. It’s entirely reasonable for users and website owners to request, or demand, that a URL categorization be changed. And it is important for these updates to propagate quickly, if not immediately. The market leaders of the time would often take days (at the very best), and more often weeks (if they ever got around to it), to change categorizations. This delay (even negligence) hurt everyone in the process: the user, the website owner, and the network security company who had integrated the URL database into their web filtering offering. Categorization changes needed to be immediate. But what good was a fast response if database updates only occurred weekly or monthly? It became clear that the solution required responsiveness, a company culture to support it, and an infrastructure that propagated changes to databases worldwide in near real time.
URL Categorization: The Answer to Your Web Security Challenges
So, by late 2005, the company and our customers had reached a point of complete frustration. We weren’t able to get the service that we, AND our customers, rightfully demanded. In fact, we couldn’t even get lip service from the industry giants, showing that they neither understood, nor had any interest in addressing, these problems. At that point, we made a decision. We believed we could build a better mousetrap, along with the team and processes required to support it.
It was in late 2005 that we started down the path to build the industry’s leading URL categorization database. The company shifted (the popular term has since become “pivoted”) engineering and other resources towards building the infrastructure to support this new service for the market.
Our overarching and guiding principles were pretty simple:
- We were going to “Categorize the Web”
- We were going to do so in an effort to make the web safer and more secure
- We were going to be a 100% OEM Channel and support our partners (Compared to the “800 pound gorillas” or “major players” of the past who would license their database and then undercut pricing to compete on market opportunities)
- We were going to be the MOST RESPONSIVE vendor partner that any of our customers would have—not just for the URL database, but for any service/product they received from a vendor.
- In terms of the actual service offering, it would:
  - Provide the MOST ACCURATE content categorizations in the industry
  - Provide the BEST COVERAGE of ActiveWeb URLs in the industry
  - Provide the BEST MALICIOUS WEBSITE DETECTION of new and emerging threats in the industry
  - Provide page-level categorization AND malicious detection across the entire internet
  - Support real-time updates for immediate protection against sensitive, objectionable, and malicious URLs
  - Provide the fastest response time for miscategorized URLs
- We were going to be the most responsive organization our partners would have. Period. From emails and calls to updates and miscategorizations.
We understood these were lofty objectives and expectations, but we knew it was the RIGHT way to approach the industry’s problems. So, starting in 2005, we began working in parallel on content categorization systems and malicious detection systems. Over the course of several years, we encountered one technical or operational challenge after another. Fortunately, we had some incredibly committed and brilliant team members who were able to overcome the challenges. We encountered a variety of steep learning curves including: how to leverage artificial intelligence and machine learning (AI/ML), how to identify malicious exploits through reverse engineering, how to detect phishing attacks, how to measure accuracy, how to support hundreds of millions of users with real-time updates, and many more.
Introducing a Next Generation of URL Categorization Database
By mid-2008, we launched our initial service offering, which supported 53 unique content categories in English only. We believed this minimum viable product (MVP) would be suitable for some time. How wrong we were…
We were incredibly pleased with market demand and knew we were solving a critical market need better than almost any other company, regardless of size. However, almost immediately, prospective partners were requesting various European languages, Chinese, and Japanese, and demanding many, many more categories.
Categorization-As-A-Service: The Emergence of zvelo
By 2010, we understood that there was considerably more opportunity for zvelo in the web categorization business. The decision was made to rebrand to zvelo and sell off the eSoft “all-in-one” appliance product line—allowing us to focus 100% of our energies on pursuing our goal of “Categorizing the Web.”
Over the last several years, we have continued to see increasing demand from the web filtering and security markets as well as from new markets. Our URL database and categorization services are now used in the ad tech industry for brand safety and contextual targeting applications, in data and subscriber analytics for behavioral profiling applications, and in other industries. All of these opportunities and challenges have driven continuous growth and improvement of our categorization infrastructure.
2018 And Beyond…
As of 2018, we provide categorization and malicious detection to OEM partners who collectively represent a global network of over 600 million end users. Our services support over 500 unique categories in 200 languages, along with detection for a never-ending litany of malicious exploits and phishing attacks. In early 2018, we went live with our 3rd generation of AI-based categorization, having completed a massive project to deploy a scalable, SOA-based (Service-Oriented Architecture) platform. This platform accommodates the incredible growth, data storage requirements, and support infrastructure required to provide industry-leading categorization of social media, posts, blogs, tweets, news, and other dynamic websites that are generating billions and billions of new web pages and pieces of content.
Throughout this time, zvelo has maintained a culture of ultra-responsiveness. We are gratified to hear from our partners about our speed and support in responding to false positives, miscategorizations, and new categories to meet emerging web trends. Our global support team provides 24 hour coverage and continuously helps to build training data that improves our AI-based categorization systems.
And finally, remember those “800 pound gorillas” from the 2005 time period? Well, we can proudly claim several are now our OEM customers!