As security has become a growing focus within the DevOps community – resulting in the new and expanding field of DevSecOps – developers have been integrating more security testing techniques into the software development lifecycle. One important aspect of security testing that developers should consider is secure code review. Why? Because the source code reveals the entire attack surface, which helps ensure that even edge cases and issues in obscure sections of code are caught. In other words, the code reveals all.
Since developers already know code, they can easily understand and adopt the practice of code review. Developers may even already perform some form of code reviews, such as peer code reviews or paired programming, or use automated code checkers that review code for bad coding practices or code quality issues.
By growing their understanding of what makes software vulnerable and by integrating some simple strategies for secure code review into their DevSecOps lifecycle, developers can continuously deliver more secure software.
Before we begin digging into the components of secure code review, let's review some of the common methods used to understand and test for software vulnerabilities.
The DevOps lifecycle, shown above, depicts the iterative nature of modern software development as a sequence of repeating phases. There is a beginning of sorts, but never really an end. And throughout that lifecycle, different activities are performed, including the phases where software is actually designed, created, and tested. It is in these phases that I want to highlight a few important security testing activities. Each of these is important in its own way and adds different value to the process of creating secure software.
There are advantages and disadvantages to each of the above testing techniques. Threat modeling can uncover systemic issues at the design level so they can be mitigated early. Penetration testing is well understood by most security professionals, though not necessarily by developers; its results can be obtained quickly and are usually easy to demonstrate as exploitable.
Secure code review offers one distinct advantage in that the source code reveals all. The entire attack surface is exposed by the source code, making it possible to identify issues in edge cases and hard-to-reach states.
Put simply, secure code review is reviewing code for security issues. Okay, I know that’s unfair – enough with the circular definitions already. Let’s first talk about “code review” in the more general sense.
Most developers should be familiar with some form of code review, whether it is through formal code inspections (rare), “desk checking” (a form of self-review), pair programming, peer reviews, or something similar. These techniques are typically intended to find code correctness issues (aka bugs), code quality issues, or ensure adherence to organizational coding style standards – potentially all of the above.
Secure code review has similar purposes. Its primary goal is to identify security bugs, and it may also serve to verify adherence to organizational secure coding standards, assuming they exist.
But how do you start such a review? Do you just start reading the source code line-by-line, hoping you’ll come across something that looks suspicious? You could, but that will quickly lead to exhaustion and a lack of focus for most people. It also isn’t very effective, especially if you don’t know what’s “suspicious” or how to determine whether something merely “suspicious” is actually vulnerable.
Going into great depth on performing a secure code review is beyond the scope of this article. Instead, I’ll start by defining a few terms and then discuss a commonly accepted strategy for approaching a secure code review. First, here are some definitions.
Static code analysis is analysis of software without executing it. This is commonly performed on source code, but sometimes object code, and is usually automated. It has many uses – type checking, style checking, program understanding, program verification, bug finding, etc.
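As a toy illustration of the idea (not something from a real tool), a static check doesn't have to be complicated. The sketch below uses Python's standard `ast` module to parse source code and flag calls to the dangerous built-in `eval` without ever executing the code – the defining property of static analysis. The function name `find_eval_calls` is my own invention for this example.

```python
import ast

def find_eval_calls(source: str) -> list[int]:
    """Return line numbers of calls to the dangerous built-in eval()."""
    tree = ast.parse(source)  # parse only; the code is never executed
    flagged = []
    for node in ast.walk(tree):
        if (isinstance(node, ast.Call)
                and isinstance(node.func, ast.Name)
                and node.func.id == "eval"):
            flagged.append(node.lineno)
    return flagged

sample = "x = 1\ny = eval(input())\n"
print(find_eval_calls(sample))  # the eval() call on line 2 is flagged
```

Real static analyzers build on exactly this foundation – parse, inspect, report – just with far richer rulesets and data-flow tracking.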
Static analysis security testing (SAST) is static code analysis that is applied specifically to the problem of identifying conditions within software that indicate a security vulnerability. The acronym SAST is commonly used when referring specifically to automated security “code scanning” tools.
Secure code review typically involves a combination of automated SAST tools and manual code inspection focusing on high-risk or security critical components of the source code. It is not an exhaustive, manual line-by-line review of every single line of code in the application. Not all code “matters,” or at least doesn’t matter much from a security perspective. Nor is it just running an automated SAST tool to scan the code and generate a report of findings.
With that out of the way, let’s review at a high level a commonly accepted methodology for performing a secure code review in three “easy” steps.
But when is secure code review done? There’s no easy answer to that, but if you stick to a determined, methodical, and focused approach using the above techniques – rather than randomly perusing through code hoping you’ll find something interesting – you’ll know what "done" looks like.
Next, let’s move on to discussing some strategies for how secure code review can be adopted in DevSecOps, or really any development methodology.
I’m not going to be able to turn everyone reading this article into a skilled secure code review specialist in a few pages. Nor can I give you a stepwise checklist of things to do to integrate SAST tooling into your CI/CD pipeline, thereby enabling you to instantly achieve DevSecOps nirvana. I wish it were that simple, but one size doesn’t fit all and your mileage may vary.
Rather, I’ll present a few strategies that I believe can help you adopt secure code review in DevSecOps (or really whatever methodology you use in your SDLC). I’ll leave the tactical implementation of these strategies to you. These strategies, like the phases of DevOps, should also be thought of as a lifecycle. There may be a beginning, but there should not be an end. They should be repeated, enhanced, and refined as more is learned, and as development processes and developers themselves change. One strategy often builds off and informs the others. So here we go…
Secure coding standards (or guidelines) are meant to provide developers a baseline that they can use to guide the development of more secure software – the rules of the road, if you will. There are many challenges to this. Most organizations use multiple programming languages and frameworks, so it can be difficult to develop secure coding standards for each. And things change with those languages and frameworks, so the standards must be regularly updated. A number of resources can be used to support this effort, such as the OWASP Code Review Guide, the OWASP Top 10 list, the Common Weakness Enumeration (CWE) Top 25, CERT Coding Standards, and more. The development of secure coding standards should be driven by and owned by development, with input and guidance from security.
Developers should be trained up on secure coding practices through a combination of regular formal training as well as self-study and research. Performing secure code review, with some expert guidance, can itself be an effective training tool. Learning by doing is often the best way. Again, use the wealth of material from OWASP as a resource, and seek out a training partner that can keep your development teams fresh with formal secure coding training.
Although we touched only briefly on threat modeling so far, it is not only an important part of ensuring that software is designed and developed securely but also a great way to achieve several other goals. Developers who are involved in the threat modeling process will better understand the threats to their application, which – when coupled with understanding of secure coding practices, vulnerabilities, and anti-patterns – will help them to develop a “security mindset” and think about how their software might be compromised. Threat modeling is also an effective tool to help focus a secure code review on the riskiest parts of an application.
Use “checklists” or guidelines for what code needs to be reviewed. Not all code matters as much from a security perspective. Remind developers that they should be looking for things beyond just code quality, clarity, or correctness. They should look for security issues, often in non-obvious places, using resources like the organization’s secure coding standards and the OWASP Code Review Guide.
Developers should incorporate secure code review into their development routine. Encourage them to review a little bit of code at a time, as it is being developed, by incorporating security checks into “desk checking” (self-review), paired programming, or other existing code review approaches already in place. This is a good time to introduce SAST tools, using the IDE plug-ins of such tools within the developer’s existing development environment where they are most comfortable – but only after some education and preferably secure coding standards are available. They need to know what they are looking for before they use any automated tool to start looking.
Ultimately, DevOps is all about tooling and automation. But tools, especially tools to automate security testing, are not a panacea, and in fact can be a bit of a Pandora’s box. Automated SAST tools, especially without any “tuning” or customization, err on the side of reporting everything their rulesets and heuristics deem suspicious. For this reason, expect a lot of false positives. Weeding through those and validating them is time consuming and aggravating and may lead to SAST tools becoming shelfware.
Don’t think that you can just throw a SAST tool into your CI/CD pipeline and you’re good – you will end up blocking a lot of code commits due to false positives. When carefully tuned for specific, well-known, and well-understood issues, however, SAST tools can be applied effectively in this context.
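One way that tuning can look in practice is a small gate script between the scanner and the build verdict. The sketch below is purely illustrative: the JSON report shape, the rule names, and the `BLOCKING_RULES` set are all hypothetical (real tools each have their own report schemas), but the principle is the one described above – only well-understood, high-severity findings block a commit.

```python
import json

# Hypothetical SAST report; real tools' JSON schemas vary widely.
report = json.loads("""
[
  {"rule": "sql-injection", "severity": "high", "file": "db.py", "line": 42},
  {"rule": "weak-random", "severity": "low", "file": "util.py", "line": 7}
]
""")

# Tuned gate: only rules the team understands well and has agreed
# should fail a build. Everything else is recorded, not blocking.
BLOCKING_RULES = {"sql-injection", "command-injection"}

blocking = [f for f in report
            if f["severity"] == "high" and f["rule"] in BLOCKING_RULES]

for f in blocking:
    print(f"BLOCK: {f['rule']} at {f['file']}:{f['line']}")

exit_code = 1 if blocking else 0  # nonzero fails the pipeline stage
print("gate exit code:", exit_code)
```

Starting with a very small `BLOCKING_RULES` set and growing it as confidence builds keeps false positives from turning the pipeline into an obstacle course.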
Remember that there’s only one way to eat an elephant: one bite at a time. Start slowly with automation, understand the strengths and limitations of the tools, and accumulate and build on small successes to increase the usage and value of the chosen tools.
If you don’t track it, you can’t control it. I encourage you to track vulnerabilities like any other bugs. Your security team may want insight and tracking specifically on vulnerabilities, but the development team should also track and backlog them like any other bug, as that team is ultimately going to be responsible for fixing them. Use risk analysis to prioritize security bugs in the backlog. Critical- or high-severity security bugs should never be left there for very long.
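To illustrate what risk-based prioritization of a security backlog might look like, here is a deliberately simple sketch: a severity-times-likelihood score. The bug IDs, the numeric scale, and the rubric itself are all hypothetical; your organization's risk model (CVSS or otherwise) will differ.

```python
# Illustrative rubric: risk = severity weight x estimated likelihood.
SEVERITY = {"critical": 4, "high": 3, "medium": 2, "low": 1}

backlog = [
    {"id": "SEC-101", "severity": "medium", "likelihood": 3},
    {"id": "SEC-102", "severity": "critical", "likelihood": 2},
    {"id": "SEC-103", "severity": "high", "likelihood": 3},
]

def risk(bug):
    return SEVERITY[bug["severity"]] * bug["likelihood"]

# Work the backlog from highest risk score down.
for bug in sorted(backlog, key=risk, reverse=True):
    print(bug["id"], "risk score:", risk(bug))
```

The specific formula matters less than having one: an agreed, repeatable ranking keeps critical- and high-severity bugs from languishing at the bottom of the pile.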
I also believe that an effective strategy is to develop your own “Top 10” (or eleven, or fifteen, or twenty) security bugs. These are the issues that you think are most prevalent across your application portfolio based on previous testing or other information or are the most severe or high risk. Use whatever measure fits. Tune your tooling toward these issues, use them to inform and adjust your secure coding standards, training, checklists – literally everything.
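Deriving that in-house “Top N” can start as simply as tallying the weakness categories from your own finding history. The sketch below assumes hypothetical data – a list of CWE IDs collected from past reviews and scans – and uses the standard library's `collections.Counter` to rank them.

```python
from collections import Counter

# Hypothetical finding history: CWE IDs from past reviews and scans.
findings = ["CWE-89", "CWE-79", "CWE-89", "CWE-22", "CWE-79", "CWE-89"]

# Your organization's own "Top 2" (or 10, or 20) by prevalence.
top = Counter(findings).most_common(2)
print(top)  # [('CWE-89', 3), ('CWE-79', 2)]
```

Feed a ranking like this back into your coding standards, training, and tool tuning, as described above, so effort concentrates where your codebase actually hurts.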
Use an expert – there’s always a need for an in-depth, point-in-time, external secure code review by an independent expert. Developers can eventually make good secure code reviewers; after all, it’s their code. But they are also much more invested in that code and may have a lot of preconceptions and misconceptions. A second or third set of eyes can make a huge difference.
And oh yeah, lest we forget, security is a journey, not a destination. All of these strategies should be continuously revisited, revised, and enhanced. Learn from mistakes, celebrate successes, and deliver more secure code.
I’m a big fan of code review, whether for identifying security bugs or “regular” bugs. When I was in college, I worked in the computer center as a student “consultant,” where my job was to help other students figure out bugs in their code – usually just syntax errors and the like when their code wouldn’t compile. I wasn’t supposed to help them with “logic flaws”; that would be like cheating on their programming assignment. I realized then, as I do now, that that’s the best way to fix bugs: read the code, understand the code, and fix the code. After all, the code reveals all!
Join author Chris Bush as he goes a little deeper into the secure code review process in his upcoming webinar, “Cracking the Code: Secure Code Review in DevSecOps.”