SAST: Static Code Analysis to identify Security Vulnerabilities

What is SAST?

SAST stands for Static Application Security Testing. Before "Shift Left" became fashionable, security companies were already asking: wouldn't it be better to identify security vulnerabilities while the Developer is writing the code, rather than later in the process? What a futuristic thought!

Static Application Security Testing (SAST) tools analyze source code or compiled code to identify security vulnerabilities. The intent of SAST tools is to Shift Security Testing to the left and allow Developers to scan code, then identify and fix vulnerabilities during the software development process. SAST solutions have been available in the industry for 10+ years. Fortify, AppScan, Checkmarx, and Veracode are some of the leading commercial SAST providers. There are also mature Open Source tools and frameworks such as SonarQube, FindSecurityBugs, and PMD that have plugins/rules to scan for security vulnerabilities, and these tools allow you to implement custom rules for the languages and frameworks used in your organization.

How do SAST tools analyze the code?

SAST tools are very effective at identifying the use of insecure APIs. For example (see the short sketch after this list):

  1. Use of Dynamic SQL that could lead to SQL Injection vulnerabilities
  2. Insecure use of Encryption & Hashing APIs such as the use of weak encryption and hashing algorithms
  3. Information disclosure with logs and stack traces emitted to stdout, stderr
  4. Use of deprecated APIs that are vulnerable
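
As a quick illustration of the first two categories, here is a minimal, hypothetical Java sketch contrasting the insecure API usage a SAST tool would flag with the safer alternative it would typically recommend (the class and method names are made up for illustration):

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class InsecureApiExamples {

    // Flagged by most SAST tools: untrusted input concatenated into dynamic SQL (SQL Injection).
    public ResultSet findUserInsecure(Connection conn, String userName) throws SQLException {
        Statement stmt = conn.createStatement();
        return stmt.executeQuery("SELECT * FROM users WHERE name = '" + userName + "'");
    }

    // Preferred: a parameterized query keeps untrusted input out of the SQL grammar.
    public ResultSet findUserSecure(Connection conn, String userName) throws SQLException {
        PreparedStatement stmt = conn.prepareStatement("SELECT * FROM users WHERE name = ?");
        stmt.setString(1, userName);
        return stmt.executeQuery();
    }

    // Flagged: MD5 is a weak hashing algorithm.
    public byte[] hashInsecure(byte[] data) throws NoSuchAlgorithmException {
        return MessageDigest.getInstance("MD5").digest(data);
    }

    // Preferred: a stronger algorithm such as SHA-256.
    public byte[] hashSecure(byte[] data) throws NoSuchAlgorithmException {
        return MessageDigest.getInstance("SHA-256").digest(data);
    }
}
```

Checks for patterns like these are largely syntactic, which is why even the Open Source tools handle them reasonably well.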

The commercial SAST tools provide deeper analysis by performing Data Flow Analysis, wherein the tools build a model of how untrusted input travels from a Source (say, an HTML form) to a Sink (say, a database). The SAST tools then analyze the data flow models to determine whether the untrusted input is properly sanitized with input validation and output encoding to defend against Injection and XSS style attacks. The tools also allow you to register your custom libraries as sanitizers to whittle down false positives.
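
To make the Source-to-Sink idea concrete, here is a hypothetical sketch (assuming the javax.servlet API; the class, field, and method names are made up): the request parameter is the Source, the database query is the Sink, and isValidAccountId() is the kind of custom sanitizer you would have to teach the tool about so it does not flag the flow as unsanitized.

```java
import java.io.IOException;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AccountServlet extends HttpServlet {

    private Connection connection; // assumed to be initialized elsewhere (e.g., from a pool)

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        String accountId = req.getParameter("accountId"); // Source: untrusted input from the HTML form
        if (!isValidAccountId(accountId)) {               // custom sanitizer on the data flow path
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST);
            return;
        }
        lookupAccount(accountId, resp);                   // tainted data flows across method boundaries
    }

    // Custom validator: only digits are accepted. A SAST tool has no way of knowing this
    // method is a sanitizer unless you configure it as one.
    private boolean isValidAccountId(String input) {
        return input != null && input.matches("\\d{1,10}");
    }

    private void lookupAccount(String accountId, HttpServletResponse resp) throws IOException {
        try (PreparedStatement stmt =
                 connection.prepareStatement("SELECT balance FROM accounts WHERE id = ?")) {
            stmt.setString(1, accountId); // Sink: parameterized, so the modeled flow is safe
            try (ResultSet rs = stmt.executeQuery()) {
                if (rs.next()) {
                    resp.getWriter().println("Balance: " + rs.getDouble("balance"));
                }
            }
        } catch (SQLException e) {
            resp.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
        }
    }
}
```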

SAST tools consume either raw source code or compiled binaries for analysis.

Open Source tools are very good at identifying vulnerabilities in the first category (insecure API usage) but fall woefully short at the complex Data Flow Analysis that spans multiple modules.

Coverage and Effectiveness of SAST Tools

The advantage of using SAST tools is that they offer coverage across many vulnerability categories and industry standards such as the OWASP Top 10, SANS 25, PCI DSS, HIPAA, and CWEs, across many languages and frameworks. Some organizations use SAST as a mechanism to stay compliant with regulatory requirements such as PCI DSS.

Most of the tools have coverage for Java, JavaScript, .NET, Mobile (iOS & Android), Python, Ruby, etc., and the vendors keep adding support for new and upcoming languages such as Go.

You can think of SAST as offering coverage that is a mile wide but only an inch deep! The tools are noisy and produce plenty of false positives. Don't be under the impression that you can drop these tools into your environment and run them on autopilot! That will only drive Developers away from adopting the tools and lose goodwill with them. You need to spend time and resources training the tools by creating customizations for the languages and frameworks used in your organization. For example, if you have custom input validators, you need to train the tool to recognize these validators, or else it will report false positives for injection vulnerabilities.

SAST Tools Evaluation Criteria

Given the number of commercial tool vendors, what are some of the considerations for evaluating and selecting a SAST tool? I would rate the tools based on the following criteria:

  1. Does the tool cover the languages and frameworks used in your organization?
  2. Does the tool cover industry vulnerability categories and standards such as the OWASP Top 10, SANS 25, PCI DSS, etc.?
  3. Is the tool customizable to limit false positives being reported?
  4. Does the tool provide mitigation guidance and is the guidance customizable?
  5. Does the tool integrate with the Build Pipeline to automate static analysis at build time and fail the build based on preset policies? (A minimal policy-gate sketch follows this list.)
  6. Are the scans performant? Especially if the tool is integrated into the pipeline, the scans have to be fast. Does the tool allow incremental scans to speed things up?
  7. Does the tool have an IDE plugin that integrates with the Developer IDE and allows the Developer to scan code during Development and consume the scan results from your Build Pipeline scans?
  8. Is there a usable dashboard and reporting capability that allows your Security team to identify vulnerable applications and work with the application teams to mitigate the vulnerabilities?
  9. Does the vendor offer on-demand training or coaching if the Developer needs further assistance?
  10. Does the tool integrate with Bug tracking systems such as Jira and productivity tools such as Slack or ChatOps?
  11. Does the tool integrate with your Authentication and Authorization systems?
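
On the build integration point (criterion 5), the policy gate can be as simple as the hypothetical Java sketch below. It assumes the scanner exports its findings to a CSV file whose first column is the severity (the file name, format, and threshold are all assumptions for illustration) and fails the build step by exiting non-zero when the policy is violated:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

public class ScanPolicyGate {

    private static final int MAX_HIGH_SEVERITY = 0; // preset policy: no high-severity findings allowed

    public static void main(String[] args) throws IOException {
        // Hypothetical export produced by your SAST scan step, one finding per row:
        // severity,category,file,line
        Path findings = Path.of(args.length > 0 ? args[0] : "findings.csv");
        List<String> rows = Files.readAllLines(findings);

        long highCount = rows.stream()
                .filter(row -> row.toUpperCase().startsWith("HIGH,"))
                .count();

        if (highCount > MAX_HIGH_SEVERITY) {
            System.err.println("Policy violation: " + highCount + " high-severity findings");
            System.exit(1); // a non-zero exit code fails the build step
        }
        System.out.println("Scan policy check passed (" + highCount + " high-severity findings)");
    }
}
```

Wire it in as a post-scan build step so a failing exit code stops the pipeline.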

Vendors demonstrate their tools with readily available vulnerable applications such as WebGoat. The OWASP Benchmark project also provides a suite of test cases and code to help evaluate these tools. The project ranks both open source and commercial SAST tools, although it does not name the commercial tools due to licensing constraints.

In my opinion, the tools are highly tuned to perform well against test apps such as WebGoat, so the best way to evaluate a tool is to throw a variety of the applications you actually have in your organization at it and see what you get!

In Conclusion

Given the coverage and effectiveness shortfalls associated with SAST tools, is there a place for them in your Development process? I believe there is value in having SAST tools in your arsenal. But instead of treating SAST tools as the be-all and end-all, I would use them as guardrails that help Developers avoid common security anti-patterns and adhere to secure coding practices!

If your organization does not currently use SAST tools, there is no reason not to adopt the free Open Source tools and make them available to your Developers today!


Break Builds: Getting Devs to take action!

As your organization matures in DevSecOps and enables Developers to release features faster, as a security professional you are trying to assure that the code is free of security defects. As part of your Security team, you have incorporated automated security tools into the build pipeline: Static Code Analysis, Dynamic Code Analysis, Security Test Cases, etc. But you can only bring the horse to the water; can you make it drink? In other words, how do you get your development teams to consume the results reported by the security tools? That is the million-dollar question.

There is an aspirational goal that no security defects should ever leak into production, and clients ask that code with security defects be prevented from getting deployed. So why not break the build and force the developers to fix the defects reported by the security tools before any code is deployed to production?

The challenge is multi-fold.

  1. The error rate, i.e., the proportion of false positives among the reported findings, is significantly high for security tools. This is especially true for Application Security; you get better results with Infrastructure Security. Some of the tools report results that literally run into thousands of pages, which a Developer needs to triage to identify the true positives they can actually fix. The tools also lack the business context to tell whether an issue is High or Low risk for a given attack vector. Whether a missing input validation or output escaping results in an exploitable vulnerability depends on whether the data can be leveraged by an attacker to compromise your system.
  2. The criteria you use to break a build give you a false sense of security. Given the limitations of the tools, a whole bunch of false negatives still leak into production. The Developers can always point back and say the build went through, hence there must be no security defects in the code.
  3. Given the negative testing involved in uncovering security defects, the current generation of tools is not capable of identifying all possible attack vectors and assuring defenses against them. I would expect AI to eventually play a role in identifying security weaknesses in code that are actually exploitable.

SonarQube, one of the tools used to measure code quality, has removed the capability to break builds based on criteria such as the severity level of issues, the number of issues, etc. The team recommends that the better approach is to look at the dashboard of issues, triage them, and identify the real issues applicable to your code that should be fixed. This did not go over well with security-minded folks, but in reality it is a pragmatic approach until the security tools mature and are able to report security issues that are exploitable, as opposed to simply flagging code patterns that may or may not lead to a vulnerability.

Until then, there is no easy button for assuring application security in the DevOps pipeline. Security organizations have to continuously tune the security tools, follow up with Developers to look at the results, and add exploitable defects to the Backlog for the Devs to work on.

I would like to hear from you. Please share what you have done to assure security in DevOps Build Pipelines. What kind of automated tools do you run? How do you get Developers to take action on them? Were you successful in implementing automated enforcement gates that prevent Developers from deploying code with vulnerabilities identified by security tools in the DevOps Pipeline?


Bamboo APIs – Love ’em

Recently, I had a chance to learn and use the Bamboo APIs. We integrated Fortify static code analysis into the Bamboo Build Pipeline and wanted an easy way to verify that:

  1. All the applicable projects are set up with Fortify scans
  2. Scans are completing successfully and generating artifacts for the developers to consume

Bamboo publishes a robust set of APIs that allows us to get all the information about plans, builds and the artifacts that are produced. The APIs provide several options to filter and customize the amount of data that can be retrieved allowing you to optimize for performance.

I used AngularJS as the front-end framework to invoke the APIs and build a summary page showing the following information:

  1. Project name
  2. Plan name
  3. Whether Fortify scans apply to the specific project type
  4. Whether the Fortify scans are set up
  5. Link to the build results
  6. Link to the Fortify scan report

So, instead of navigating the Bamboo UI to harvest this information, the AngularJS page invokes the APIs to systematically retrieve it and display it on a single page. The success of automating security tools in the DevOps pipeline depends upon making it easy for Developers to consume the outputs of the tools.

The Bamboo APIs used for building the page:

  1. /rest/api/latest/project?expand=projects.project.plans
  2. /rest/api/latest/result/PROJECTKEY-PLANKEY?expand=results.result.artifacts

The PROJECTKEY-PLANKEY information is obtained from the first API call.

The first API call returns all the projects and plans that are active in Bamboo. In AngularJS, you make the first REST call, parse the results to get the PROJECTKEY-PLANKEY information, and then make a second REST API call to get the artifact information.
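
For readers who prefer to see the two-call pattern outside of AngularJS, here is a rough Java sketch of the same flow. The Bamboo URL, the credentials, and the regex-based key extraction are placeholders for illustration; a real page would parse the JSON responses properly and use whatever authentication your Bamboo instance requires.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.Base64;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class BambooScanAudit {

    private static final String BAMBOO_URL = "https://bamboo.example.com"; // placeholder
    private static final HttpClient CLIENT = HttpClient.newHttpClient();

    public static void main(String[] args) throws Exception {
        // First call: all active projects with their plans.
        String projects = get("/rest/api/latest/project?expand=projects.project.plans");

        // Crude extraction of PROJECTKEY-PLANKEY values, for illustration only.
        Matcher planKeys = Pattern.compile("\"key\"\\s*:\\s*\"([A-Z0-9]+-[A-Z0-9]+)\"").matcher(projects);
        while (planKeys.find()) {
            String planKey = planKeys.group(1);
            // Second call: latest results and artifacts (e.g., the Fortify report) for that plan.
            String results = get("/rest/api/latest/result/" + planKey + "?expand=results.result.artifacts");
            System.out.println(planKey + " -> " + results.length() + " bytes of result data");
        }
    }

    private static String get(String path) throws Exception {
        String auth = Base64.getEncoder().encodeToString("user:password".getBytes()); // placeholder credentials
        HttpRequest request = HttpRequest.newBuilder(URI.create(BAMBOO_URL + path))
                .header("Authorization", "Basic " + auth)
                .header("Accept", "application/json")
                .build();
        return CLIENT.send(request, HttpResponse.BodyHandlers.ofString()).body();
    }
}
```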

Making nested REST calls in AngularJS has its challenges; I used promises to chain the calls. Although there was a learning curve to get up to speed with AngularJS, it paid off handsomely, as wiring in the REST calls and displaying the results was much simpler.

Now that I have a summary page with the REST calls wired in via AngularJS, it would be easy to extend the page to display additional information that helps a Developer or a Security assessor consume results from the security tools integrated into the Build Pipeline.

Error Handling: Prevent Information Disclosure

Are your developers handling unexpected server-side errors? If server-side error handling is weak, attackers can gain insight into the technology stack used in your applications and exploit known vulnerabilities in that stack, as reported in the National Vulnerability Database (NVD).

Typically, unhandled server-side errors bubble up as stack traces or default error pages that disclose web server/app server names, version numbers, etc. If an attacker were to learn that your application uses Apache Struts 2, the recent OS Command Injection vulnerability disclosed in Struts 2 provides an easy attack vector to compromise your application. Why would you let anyone know that you are using Apache Struts by way of lax error handling?

What are the common techniques used for error handling in applications?

Exceptions can happen in any tier of your application – UI/Mid-tier/Data-tier. Typically, any modern language offers try/catch/finally to handle exceptions. The web tier also offers configuration to catch any uncaught exception and display a generic error message when unhandled exceptions are raised in code. Although configuring the web tier this way provides a safety net, educate your developers to handle exceptions in all tiers of the code. In addition to preventing information disclosure, proper exception handling allows your application to fail gracefully and provides debug information in the logs to help you triage those stubborn issues that typically only happen sporadically in production environments.
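
As a minimal sketch of that advice (again assuming the javax.servlet API; the servlet and method names are hypothetical), the handler below logs the full stack trace server side and returns only a generic message with a correlation id to the client:

```java
import java.io.IOException;
import java.util.UUID;
import java.util.logging.Level;
import java.util.logging.Logger;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ReportServlet extends HttpServlet {

    private static final Logger LOG = Logger.getLogger(ReportServlet.class.getName());

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
        try {
            generateReport(req, resp); // business logic that may throw
        } catch (Exception e) {
            String correlationId = UUID.randomUUID().toString();
            // Full details go to the server logs for triage...
            LOG.log(Level.SEVERE, "Report generation failed, correlationId=" + correlationId, e);
            // ...while the client sees only a generic message: no stack trace, no server banner.
            resp.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);
            resp.setContentType("text/plain");
            resp.getWriter().println("An unexpected error occurred. Reference: " + correlationId);
        }
    }

    private void generateReport(HttpServletRequest req, HttpServletResponse resp) throws Exception {
        // ... application logic ...
    }
}
```

The web-tier error-page configuration then acts as the safety net for anything that still slips through.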

OWASP also provides guidance on exception handling that is worth a quick read.

Nobody wants to see stack traces on their web site; it is a bad user experience, and on top of that you are handing valuable information to attackers. This is an easy one to fix with proper education for your Development teams!