If you've ever had the pleasure of reading Security, Accuracy, and Privacy in Computer Systems by futurist James Martin, the first thing you'll realize is that not much has changed since 1973 with regard to information security fundamentals. His observations are remarkably prescient – even to this day. However, within the last decade, an innovative approach to incentivizing researchers to find and report vulnerabilities has spurred the creation of a number of highly touted and successful companies offering "crowdsourced security as a service." While effective in a bug bounty delivery model, crowdsourcing's value becomes less apparent when the promise is comprehensive, repeatable security testing for the purposes of security assurance. Point in time, sure. Over time, not so fast.
There are various limits to how far crowdsourcing can carry you when expectations move beyond the traditional bug bounty model. Many of the challenges with crowdsourcing have been covered in depth elsewhere, but I will focus on three: delivering security assurance over time, demonstrating quality, and scaling the researcher pool.
I joined Bishop Fox to work on building a Continuous Attack Surface Testing (CAST) managed service offering that both augments and empowers a select group of human operators to test at scale and high frequency. Not too long ago, I believed crowdsourcing security expertise was the panacea for overcoming the traditional security service delivery pain points. However, I realized that in order to effectively deliver security services at scale, the crowdsourced model must be inverted – with technology functioning as the “crowd” that continuously delivers asset, attack surface, and vulnerability context to a select group of expert operators that can clearly synthesize and communicate these results in terms reflecting business context.
With this model, our technology objectively performs routine, tedious, or otherwise repetitive actions comprehensively and without objection or deviation from an established quality baseline – a feat incredibly challenging to manage across humans. Enhancing these capabilities by applying machine learning techniques empowers our expert operators to execute at unprecedented scale. With this combination, our clients get a timely and comprehensive view of their attack surfaces without sacrificing human ingenuity and quality.
When we’re talking about security assurance, we like to use the analogy of an annual physical — you trust the expertise of the medical professional to properly evaluate your health from top to bottom and, hopefully, when all is said and done, the result is that you're in good health. You don't look forward to your visit hoping for the most impressive, devastating ailments to be discovered — but, should you have that misfortune, you can immediately start treatment to begin the road to recovery; thus, you're relieved you scheduled the appointment. Your visit results in assurance from a trained medical professional that you are in good health based on diagnostic testing, along with actionable next steps should any concerns arise.
We take this approach and extend the frequency to continuously deliver this same level of assurance. We age, we have accidents, we catch various viruses, or make a poor food choice and gamble with gas station sushi. The same is true for information security. Across an enterprise, infrastructure and software age and require regular updates and patches. Sometimes we may rush to implement unvetted, insecure solutions in haste or out of necessity. New exploits are released at an unpredictable yet frequent cadence, requiring continuous vigilance, visibility, and oversight. The ideal outcome of a security assessment is a prioritized list of exposures that, when remedied, will strengthen your security posture, accompanied by a structured, results-focused list of tests performed that comprehensively demonstrates all enumeration, testing, and finding activity.
In contrast, a crowdsourced security assessment relies on an assembled group of researchers that may span multiple time zones, speak different languages, have disparate philosophies on testing methodologies and, most importantly, varying degrees of skill. Furthermore, these researchers are operating under contract without the benefits of a salaried employee. How well do these researchers work together, if at all? How do we know the strongest team of researchers has been assembled to assess your mission-critical systems? How much of their time will be dedicated to your testing needs? How do these vendors enforce researcher non-disclosure agreements and keep your sensitive system details private? To be clear, these characteristics are perfectly fine for a bug bounty-style service offering, but when security assurance matters, these are all valid questions that I’d ask my crowdsourced security services provider. No matter how modern, revolutionary, or “continuous” these services may claim to be, they’re still a point-in-time assessment that delivers only a glimpse of your overall security posture.
As with security assurance, quality often suffers when you go for the cheaper, crowdsourced option. When the price point is seemingly too good to be true, your supervisors and financial team are going to want proof that paying more for a security consulting firm is worth the investment. “Quality” is a subjective concept. For the purposes of security assurance, you should measure quality by the breadth, depth, and overall coverage of testing. I consider “coverage” to be demonstrable adherence to a defined process, which encompasses concepts like asset identification, attack surface enumeration, and testing methodology.
The “quality” measure is often conflated with "severity" — meaning that if an engagement doesn't result in high or critical findings within the first few days, the assigned researchers have somehow failed or aren’t doing their job. Given the gig-economy nature of crowdsourcing, stretching the expectations and roles of researchers beyond reporting vulnerabilities quickly devolves into disdain and contempt...unless, of course, proper compensation and recognition accompany such expectations. Having been a security consultant myself, hearing “why haven’t you found anything critical or high yet?” while just getting started with an engagement isn’t exactly a welcome inquiry.
Quality is an especially challenging metric to articulate with crowdsourced security testing — surface all attack traffic supplied by researchers to demonstrate test activity, and privacy concerns quickly creep in, as that traffic is viewed as a competitive differentiator for each researcher. Exacerbating this is the productization of researcher attack traffic – which quickly becomes a double-edged sword. Competing agendas shouldn't factor into the quality of security services rendered for the customer – the results should speak for themselves. While Clint Eastwood and Lee Van Cleef demonstrated that bounty hunters actually are capable of collaboration in For a Few Dollars More and The Good, the Bad, and the Ugly, a bug bounty is a race to the finish to find high-impact vulnerabilities for a reward. Bug bounties, I believe, do have a place in the security lifecycle, but they should not be viewed as a replacement for comprehensively evaluating your security posture.
Another perceived benefit of crowdsourcing your security services is the presumption that there are “thousands of researchers” readily available across the world through your vendor of choice. Many are looking to hone their tradecraft because it’s their passion. Why not make money while following your passion? It certainly beats the criminal alternative.
It doesn’t take long to realize that it’s just one pool of the same researchers across a few vendors…with a small percentage of elite researchers delivering most of the value. Isn’t this the reason crowdsourcing is a thing in the first place? Why shift staffing challenges to a model that succumbs to the same challenges? What about all of the other researchers that aren’t in that group? This may not be as big of a concern with a bug bounty, but what about with penetration testing? Researchers actively participating in crowdsourced security services will continue to evaluate which ecosystems produce the most lucrative results and allocate their time accordingly...as they should.
When planning to deliver security services, expecting to scale humans by orders of magnitude each year in order to meet delivery expectations is another double-edged sword. As a crowdsourced vendor, you may be able to meet monthly recruitment goals, but you risk introducing the quantity vs. quality conundrum or, even worse, alienating tenured researchers with a sudden influx of unproven “reinforcements” that are unfamiliar with “the process” and take time to ramp up and assimilate with the larger pool. Arguably, with a bug bounty this is less impactful given that the goal is to uncover high-impact vulnerabilities, not perform a comprehensive, thorough, and systematic penetration test. Furthermore, a collaborative spirit and “team-first” mindset are not prerequisites with the bug bounty approach; however, they are imperative for a penetration test delivered via crowdsourcing.
The human element of security testing will never be completely eliminated, nor should it be. But to keep pace with testing the ever-evolving modern attack surface, we must become much more industrious in quickly identifying assets, classifying them, and exposing their respective attack surface for further testing. By leveraging down to our platform, we empower our hand-picked collective of expert operators to be more effective at identifying anomalies, abnormalities, or other attack surface changes so that they may execute at unparalleled scale.
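To make the idea concrete, here is a minimal, hypothetical sketch of the kind of routine work a platform can take off an operator's plate: diffing two snapshots of discovered assets and open ports so humans only review what actually changed. The function name and data shapes are illustrative assumptions, not a description of any vendor's actual implementation.

```python
# Hypothetical sketch: surface attack-surface changes between two scan
# snapshots so expert operators only review deltas, not full scan output.
# Data shapes ({hostname: set_of_open_ports}) are illustrative assumptions.

def diff_attack_surface(previous, current):
    """Compare two {host: open_ports} snapshots and report changes."""
    changes = []
    for host in sorted(set(previous) | set(current)):
        before = previous.get(host, set())
        after = current.get(host, set())
        if host not in previous:
            # Brand-new asset: everything about it is worth a human look.
            changes.append((host, "new-asset", sorted(after)))
        elif after - before:
            # Existing asset exposing new ports since the last snapshot.
            changes.append((host, "new-ports", sorted(after - before)))
    return changes

# Example: one host opens a new port, and a new host appears.
prev = {"app.example.com": {443}, "mail.example.com": {25, 587}}
curr = {"app.example.com": {443, 8080},
        "mail.example.com": {25, 587},
        "dev.example.com": {22, 443}}
for host, kind, ports in diff_attack_surface(prev, curr):
    print(host, kind, ports)
```

Run continuously, a loop like this reduces a full scan result to a short queue of changes, which is what lets a small group of operators cover a large estate.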
Think of our operators as the top 1% of researchers in the crowdsourced population that you “hope” get assigned to your assessment. Effectively, we’re inverting the crowdsourced security model. By assembling an elite group of human experts and empowering them with the ability to execute at scale through technology, we eliminate many, if not all, of the major concerns plaguing the crowdsourced security movement. Instead of relying on a vast crowd of researchers, we’re incorporating innovative supervised and unsupervised machine learning algorithms to uncover insights and correlate massive amounts of information that we surface to our select group of fully employed, fully vetted, and supremely talented operators. Leveraging down with technology allows us to organically scale human expertise much more effectively while simultaneously preserving the quality of service delivery.