Just In Time (JIT) Training

Gone are the days of traditional security training models, where a Developer has to schedule training, go sit in a classroom or take an on-demand course for a day or two, and learn a whole bunch of topics that may or may not be pertinent to what they do when they get back to the office. As DevSecOps becomes mainstream and teams adopt Continuous Delivery practices, it becomes imperative that Security Training is delivered to the Developer in bite-sized modules, right when they need it and in the tools they use to develop software!

Security Training for DevOps Developers

In order for the JIT security training to be effective for the Developer, I believe it has to meet the following criteria:

  1. Training modules must be very focused and should not take more than 5 minutes to complete.
  2. Training must be interactive and offer the Developer opportunities to type in code.
  3. Training should be gamified, offering points, badges, etc. based on the number of modules completed, scores in tests and quizzes, and so on.
  4. Training needs to be accessible from the tool the Developer is coding in, e.g., IDEs such as Eclipse and IntelliJ.
  5. Training must be linked from vulnerabilities reported by the security testing tool, making it easy for the Developer to find the pertinent training module to fix the vulnerability.

Products offering Security Training

Online learning platforms such as Pluralsight, Udemy, Lynda, etc. are a big leap from where we were just a few years ago, offering a vast variety of training courses at affordable prices! But these learning platforms are not optimized for the DevOps Developer and do not meet the criteria mentioned above. However, there are security vendors in this space offering on-demand training that is much more aligned with DevOps and does meet those criteria.

Some of the vendors I looked at that seem to have robust offerings are listed below. Please do not take this as a product endorsement, just some references to get you started if you are interested in products in this space.

  1. Codebashing from Checkmarx
  2. eLearning from Veracode
  3. Secure Code Warrior

The products from Checkmarx and Veracode integrate their training modules with their SAST offerings and provide links to specific training modules within the description of each vulnerability identified by their SAST tool. This makes it very effective for the Developer to get up to speed on the vulnerability and the fix they have to implement to remediate it.

Free Learning Resources!

While commercial products are good for a large enterprise, the best Security learning I ever got is all free! Just in the past few years, a plethora of security-focused webinars has become available from Security Vendors and Security Experts out there.

On a personal note, just a few years ago, in order to maintain my CISSP credential and meet the required CPE credits, I had to look for conferences or multi-day trainings offered by vendors. But these days, I make it a point every week to listen to an hour of security-focused webinars offered for free! There is no better way to keep up with CPE requirements and continuously learn about the happenings in the Security space. The resources out there are extensive, but some of the ones I go to the most are:

  1. AWS Security
  2. ISC2 Resources
  3. BrightTALK

Whether you are considering buying commercial products or taking advantage of free resources out there, I feel organizations have to focus on training Developers and providing them opportunities to quickly learn and implement security in their code, while they are developing software and within the tools they use for developing software!

Happy Learning!


CORS Demystified!

Background

CORS stands for Cross-Origin Resource Sharing. It is a security feature supported by Browsers that allows UI applications to be hosted on a separate domain from the underlying APIs they use. This is what makes the API Economy work! Consuming applications can call APIs hosted on different domains from different providers to build applications that are meaningful to the end user. If the UI and API had to be hosted on the same domain, applications would be very restricted in the functionality they could provide, as they could only call services hosted on the same domain!

Let's talk about how this all started! Long ago when Dinosaurs were roaming the earth, yeah not that long ago, but you get the point, browsers allowed AJAX calls only to the domain the page was loaded from. Let us say the page was loaded from http://www.xyz.com; the browser would allow AJAX calls only back to http://www.xyz.com. The reason they did that was to protect the end user from the scenario below.

  1. User logs on to http://www.xyz.com and gets an authentication cookie. Let’s call it the xyzauthtoken cookie, scoped to the domain xyz.com.
  2. While still logged on to xyz.com, the user navigates to another site, http://www.evil.com.
  3. evil.com generates an AJAX request to xyz.com. The browser sends the xyzauthtoken cookie along with the request, allowing evil.com to impersonate the user logged in to xyz.com. This is pretty bad!

However, as web applications evolved, this security feature made it very restrictive for applications to consume services/APIs from different providers hosted on different domains.

The CORS spec was put together by major Browser providers to solve this problem. Chrome, Safari, and Firefox were the early adopters of this security feature, whereas IE started supporting CORS only from version 10 and above!

How does CORS work?

CORS empowers API providers and allows them to control who can consume their services over AJAX. The steps below depict the CORS flow!

  1. User logs on to http://www.xyz.com and gets an authentication cookie. Let’s call it the xyzauthtoken cookie, scoped to the domain xyz.com.
  2. Let us say the webapp invokes services hosted on api.123.com to get some data. The user has to authenticate to api.123.com, but that is beyond the scope of this article! Just in case you are curious, OAuth solves that problem, a topic for a future blog post!
  3. The CORS-supporting browser holds the request and issues something called a pre-flight OPTIONS request to api.123.com with a request header set to Origin=www.xyz.com. The browser does not send any cookies with the preflight request.
  4. Now api.123.com can decide whether to accept a request from xyz.com. If api.123.com is okay serving a request from xyz.com, it responds with the response header Access-Control-Allow-Origin, which names the origin allowed to access api.123.com. The header carries a single origin value (the server typically keeps an allowlist and echoes back the matching origin), and if api.123.com can accept requests from any domain, it can be set to Access-Control-Allow-Origin=*.
  5. The browser checks whether the value in Access-Control-Allow-Origin matches the value in Origin. If there is a match, the browser releases the original request to api.123.com.
  6. The AJAX request to api.123.com is sent, and the app is able to get the data from a service hosted on a different domain!
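
The allowlist decision in step 4 can be sketched in a few lines of Java. This is an illustrative sketch, not api.123.com's actual implementation; the class name, origin list, and method are all hypothetical.

```java
import java.util.Set;

// Hypothetical server-side CORS check: given the Origin header from a
// preflight request, decide what to echo back in Access-Control-Allow-Origin.
public class CorsPolicy {
    private static final Set<String> ALLOWED_ORIGINS =
            Set.of("http://www.xyz.com", "http://www.abc.com");

    // Returns the value to send in Access-Control-Allow-Origin, or null to deny.
    public static String allowOrigin(String origin) {
        if (origin != null && ALLOWED_ORIGINS.contains(origin)) {
            return origin; // echo back the single matching origin
        }
        return null; // no matching header -> the browser blocks the call
    }

    public static void main(String[] args) {
        System.out.println(allowOrigin("http://www.xyz.com"));  // allowed: echoed back
        System.out.println(allowOrigin("http://www.evil.com")); // denied: null
    }
}
```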

How does CORS protect the end user?

In the above scenario, let's say api.123.com allows AJAX only from xyz.com and abc.com. Since the Access-Control-Allow-Origin header carries a single origin, the service keeps an allowlist of those two domains and echoes back the matching origin, for example:

Access-Control-Allow-Origin=www.xyz.com

Now, imagine a scenario where the user navigates to http://www.evil.com and the malicious site issues an AJAX request to api.123.com. Let us see how CORS protects the user from this call.

  1. The CORS-supporting browser holds the request and issues a pre-flight OPTIONS request to api.123.com with a request header set to Origin=www.evil.com. The browser does not send any cookies with the preflight request.
  2. api.123.com does not find evil.com in its allowlist, so it responds without a matching Access-Control-Allow-Origin value.
  3. The browser checks whether the value in Access-Control-Allow-Origin matches the value in Origin. In this case it does not match, so the Browser drops the request and no call is made to api.123.com.

The CORS spec also allows the provider of an API to specify which HTTP Methods and HTTP Headers can be sent by consumers, providing the flexibility to accept, let's say, a GET request from any domain while restricting PUT/POST to only a subset of trusted domains. If you need more information on CORS and all the features it supports, it is well documented here!
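
Such a per-method policy can be sketched as below, assuming the server inspects the preflight's Origin and Access-Control-Request-Method headers; the class name and trusted domains are hypothetical.

```java
import java.util.Set;

// Sketch of a per-method CORS policy: GET is open to any origin,
// while PUT/POST are restricted to an allowlist of trusted origins.
public class MethodCorsPolicy {
    private static final Set<String> TRUSTED =
            Set.of("http://www.xyz.com", "http://www.abc.com");

    // Decide based on the preflight's Origin and
    // Access-Control-Request-Method header values.
    public static boolean isAllowed(String origin, String method) {
        if ("GET".equals(method)) {
            return true; // reads are open to any origin
        }
        // writes only from trusted origins
        return ("PUT".equals(method) || "POST".equals(method))
                && TRUSTED.contains(origin);
    }
}
```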

Next time you are Banking or checking your investments online, turn on Developer Tools (F12) in your browser and see how the AJAX calls are being handled!

SAST: Static Code Analysis to identify Security Vulnerabilities

What is SAST?

SAST stands for Static Application Security Testing. Before Shift Left became fashionable, security companies were thinking: wouldn't it be better to identify security vulnerabilities while the Developer is writing the code, as opposed to later in the process? What a futuristic thought!

Static Application Security Testing (SAST) tools analyze source code or compiled code to identify security vulnerabilities. The intent of SAST tools is to shift Security Testing to the left and allow Developers to scan code, then identify and fix vulnerabilities during the software development process. SAST solutions have been available in the industry for 10+ years. Fortify, AppScan, Checkmarx, and Veracode are some of the leading commercial SAST providers. There are mature Open Source tools and frameworks such as SonarQube, FindSecurityBugs, and PMD that have plugins/rules to scan for security vulnerabilities, and these tools allow you to implement custom rules applicable to the languages and frameworks used in your organization.

How do SAST tools analyze the code?

SAST tools are very effective at identifying the use of insecure APIs, e.g.:

  1. Use of dynamic SQL that could lead to SQL Injection vulnerabilities
  2. Insecure use of encryption and hashing APIs, such as weak encryption and hashing algorithms
  3. Information disclosure via logs and stack traces emitted to stdout/stderr
  4. Use of deprecated APIs that are vulnerable
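
To make the first item concrete, here is the kind of Java pattern a SAST tool flags, alongside the parameterized fix; the table, column, and method names are hypothetical.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

// Illustrative sketch of dynamic SQL vs. a parameterized query.
public class SqlExamples {
    // FLAGGED by SAST: untrusted input concatenated into the SQL text.
    // An input like  x' OR '1'='1  changes the meaning of the query.
    static String unsafeQuery(String name) {
        return "SELECT * FROM users WHERE name = '" + name + "'";
    }

    // CLEAN: parameterized query; the driver keeps data out of the SQL text.
    static ResultSet findUserSafe(Connection conn, String name) throws SQLException {
        PreparedStatement ps = conn.prepareStatement(
                "SELECT * FROM users WHERE name = ?");
        ps.setString(1, name);
        return ps.executeQuery();
    }
}
```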

The commercial SAST tools provide deeper analysis by building Data Flow models, wherein the tools trace untrusted input from a Source (say an HTML form) to a Sink (say a database). SAST tools then analyze the Data Flow models to identify whether the untrusted input is properly sanitized with input validation and output encoding to defend against Injection and XSS types of attacks. The tools allow you to customize the sanitizers to account for your custom libraries and whittle down false positives.

The SAST tools consume either raw source code or compiled binary code for analysis.

Open Source tools are very good at identifying vulnerabilities in the first category above (insecure API usage) but fall woefully short at the complex Data Flow Analysis that spans multiple modules.

Coverage and Effectiveness of SAST Tools

The advantage of using SAST tools is that they offer coverage across many categories and industry standards such as OWASP Top 10, SANS 25, PCI DSS, HIPAA, CWEs, etc. across many languages and frameworks. Some organizations use SAST as a mechanism to stay compliant with regulatory requirements such as PCI DSS.

Most of the tools cover Java, JavaScript, .NET, Mobile (iOS & Android), Python, Ruby, etc., and the vendors keep adding support for new and upcoming languages such as Go.

You can think of SAST as offering coverage that is a mile wide but only an inch deep! The tools are very noisy and produce false positives. Don't be under the impression that you can drop these tools into your environment and run them on autopilot! That will only drive Developers away from adopting the tools and lose goodwill with them! You need to spend time and resources training the tools by creating customizations for the languages and frameworks used in your organization. For example, if you have custom input validators, you need to train the tool to recognize them, or else it will report false positives for injection vulnerabilities.

SAST Tools Evaluation Criteria

Given the number of commercial tool vendors, what are some of the considerations for evaluating and selecting a SAST tool? I would rate the tools based on the following criteria:

  1. Does the tool cover the languages and frameworks used in your organization?
  2. Does the tool cover industry vulnerability categories such as OWASP Top 10, SANS 25, PCI DSS, etc.?
  3. Is the tool customizable to limit the false positives being reported?
  4. Does the tool provide mitigation guidance, and is the guidance customizable?
  5. Does the tool integrate with the Build Pipeline to automate static analysis at build time and fail the build based on preset policies?
  6. Are the scans performant? Especially if the tool is integrated into the pipeline, the scans have to be fast. Does the tool allow incremental scans to speed things up?
  7. Does the tool have an IDE plugin that integrates with the Developer's IDE and allows the Developer to scan code during development and consume the scan results from your Build Pipeline scans?
  8. Is there a usable dashboard and reporting capability that allows your Security team to identify vulnerable applications and work with the application teams to mitigate the vulnerabilities?
  9. Does the vendor offer on-demand training or coaching if the Developer needs further assistance?
  10. Does the tool integrate with bug tracking systems such as Jira and productivity tools such as Slack or ChatOps?
  11. Does the tool integrate with your Authentication and Authorization systems?

The vendors demonstrate their tools with readily available vulnerable applications such as WebGoat. The OWASP Benchmark project also provides a suite of test cases and code to help evaluate these tools. The project ranks open source SAST tools and commercial SAST tools, although it does not call out the commercial tools by name due to licensing constraints.

In my opinion, the tools are highly tuned to work on test apps such as WebGoat, and the best way to evaluate a tool is to throw a variety of the applications you have in your organization at it and see what you get!

In Conclusion

Given the coverage and effectiveness shortfalls associated with SAST tools, is there a place for them in your development process? I believe there is value in having SAST tools in your arsenal. But instead of looking at SAST tools as the be-all and end-all, I would use them as guardrails that help Developers avoid common security anti-patterns and adhere to secure coding practices!

If your organization does not currently use SAST tools, there is no reason not to adopt the Open Source tools that are free and make them available to your Developers today!


Authorization Checks in REST APIs

Let's take a look at sample REST APIs that allow Students to view their grades and Professors to add/update grades.

If a student wants to view his/her grade in English, your system perhaps has a REST API like:

GET /student/{student_id}/english/grade

The Professor perhaps has access to the following REST APIs to be able to view and update any one of their students’ grades:

  1. View a student’s grade: GET /student/{student_id}/english/grade
  2. Post a student’s grade: POST /student/{student_id}/english/grade
  3. Update a student’s grade: PUT /student/{student_id}/english/grade

Of course, there is a portal that the Student logs in to view their grades and the Professor logs in to post and update grades.

A savvy student is able to turn on HttpWatch to view the REST APIs being called by the portal. Let's say student A observes that when she clicks a link to view her English grade, a REST API call is generated that looks like the following:

GET /student/1234/english/grade

What if the student is smart enough to replay this request with a different student id, 6789?

GET /student/6789/english/grade

If there are no proper Authorization checks in the APIs, student A with id “1234” is now able to view the grade of student B with id “6789”.

Even worse, what if the student were able to update their own grade? How about the student trying the following API?

PUT /student/1234/english/grade with payload “A”

or update someone else’s grade to “F”

PUT /student/6789/english/grade with payload “F”

However, a Professor should be able to view, post, or update any one of their students’ grades. All of the above APIs are fair game if they are issued on behalf of a Professor.

So, every REST API call needs to enforce proper authorization to make sure that the user on whose behalf the API call is being made has access to the Resource on which they are taking action.

Since Java 6 (JSR-250), the following Authorization annotations have been available to make it easy to decorate methods with Authorization checks:

Annotation Types Summary

  1. DeclareRoles: Used by the application to declare roles.
  2. DenyAll: Specifies that no security roles are allowed to invoke the specified method(s), i.e., that the methods are to be excluded from execution in the J2EE container.
  3. PermitAll: Specifies that all security roles are allowed to invoke the specified method(s), i.e., that the specified method(s) are “unchecked”.
  4. RolesAllowed: Specifies the list of roles permitted to access method(s) in an application.
  5. RunAs: Defines the identity of the application during execution in a J2EE container.

However, these annotations do not allow you to check whether the user has access to a specific Resource ID. This is where I find Spring Security to be very helpful.

Spring Security provides a mechanism to do Data Level Authorization using the “hasPermission” expression. Below is a reference from the documentation:

hasPermission(Object targetId, String targetType, Object permission) Returns true if the user has access to the provided target for the given permission. For example, hasPermission(1, ‘com.example.domain.Message’, ‘read’)

This makes it a bit easier to centralize the logic for authorization checks. A REST call such as POST /student/1234/english/grade “A” would translate into a hasPermission check:

hasPermission(“1234”, “Grade”, “update”). The method has access to the Principal on whose behalf the call is being made. So you have both the Principal and the Target, and you can verify whether the Principal has access to act upon the Target ID. In the case of a student, this check returns true only if the Principal and the Target ID are the same and the permission is “view”. In the case of a Professor, the check may be something like: does the Target ID belong to a student currently enrolled in the Professor’s class?
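
The student/professor logic described above might look like the following plain-Java sketch. In Spring Security this logic would typically live in a custom PermissionEvaluator behind a @PreAuthorize("hasPermission(...)") annotation; the roles, ids, and roster lookup here are all hypothetical.

```java
import java.util.Map;
import java.util.Set;

// Sketch of data-level authorization for the grades API.
public class GradePermissions {
    // professor id -> ids of students enrolled in that professor's class
    // (stands in for a real enrollment lookup)
    private static final Map<String, Set<String>> ROSTER =
            Map.of("prof-42", Set.of("1234", "6789"));

    public static boolean hasPermission(String principal, String role,
                                        String targetStudentId, String action) {
        if ("STUDENT".equals(role)) {
            // a student may only view their own grade
            return "view".equals(action) && principal.equals(targetStudentId);
        }
        if ("PROFESSOR".equals(role)) {
            // a professor may act on students enrolled in their class
            return ROSTER.getOrDefault(principal, Set.of()).contains(targetStudentId);
        }
        return false; // deny by default
    }
}
```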

If you are using Spring Security or some other framework, make sure you provide an easy way to apply authorization checks to ensure that the Principal is able to act on a given Resource.

Break Builds: Getting Devs to take action!

As your organization matures in DevSecOps, enabling Developers to release features faster, as a security professional you are trying to assure that the code is free of security defects. As part of your Security team, you incorporated automated security tools into the build pipeline: Static Code Analysis, Dynamic Analysis, Security Test Cases, etc. But you can bring the horse to the water; can you make it drink? In other words, how do you get your development teams to consume the results reported by the security tools? That is the million dollar question.

There is an aspirational goal that no security defects should ever leak into production, and there is an ask from clients to prevent code with security defects from being deployed. So why not break the build and force the Developers to fix the defects reported by the security tools before any code is deployed to production?

The challenge is multi-fold.

  1. The error rate, the ratio of false positives to true findings, is significantly high for security tools. This is especially true for Application Security; you get better results with Infrastructure Security. Some of the tools report results that literally run to thousands of pages, which a Developer needs to triage to identify true positives they can act on and fix. The tools lack the business context to tell whether an issue is High or Low for a given attack vector. Whether a lack of input validation or output escaping results in an exploitable vulnerability depends upon whether the data can be exploited by an attacker to compromise your system.
  2. The criterion you use to break a build gives you a false sense of security. Given the limitations of the tools, a whole bunch of false negatives leak into production, giving the organization a false sense of security. The Developers can always point back and say the build went through, hence there must be no security defects in the code.
  3. Given the negative testing involved in uncovering security defects, the current generation of tools is not capable of identifying all possible attack vectors and assuring defenses against those attacks. I would think AI has a role to play in identifying security weaknesses in code that are exploitable.

SonarQube, one of the tools used to measure code quality, has removed the capability to break builds based on criteria such as issue severity, number of issues, etc. The team recommends that the better approach is for teams to look at the dashboard of issues, triage them, and identify the real issues applicable to your code to be fixed. This did not go over well with security-minded folks, but in reality it is a pragmatic approach until security tools mature and are able to report security issues that are exploitable, as opposed to simply flagging security code syntax that may or may not lead to a vulnerability.

Until then, there is no easy button for assuring application security in the DevOps pipeline. Security organizations have to continuously tune security tools, follow up with Developers to look at the results, and add exploitable defects to the Backlog for the Devs to work on in order to succeed at assuring application security in the DevOps pipeline.

I would like to hear from you. Please share what you have done to assure security in DevOps build pipelines. What kind of automated tools do you run? How do you get Developers to take action on them? Were you successful in implementing automated enforcement gates that prevent Developers from deploying code with vulnerabilities identified by security tools in the pipeline?


Bamboo APIs – Love ’em

Recently, I had a chance to learn and use the Bamboo APIs. We integrated Fortify static code analysis into the Bamboo Build Pipeline and wanted an easy way to verify that:

  1. All the applicable projects are set up with Fortify scans
  2. Scans are completing successfully and generating artifacts for the developers to consume

Bamboo publishes a robust set of APIs that allow us to get all the information about plans, builds, and the artifacts they produce. The APIs provide several options to filter and customize the amount of data retrieved, allowing you to optimize for performance.

I used AngularJS as the front-end framework to invoke the APIs and build a summary page showing the following information:

  1. Project name
  2. Plan name
  3. Whether Fortify scans apply for the specific project type
  4. Whether the Fortify scans are set up
  5. Link to the build results
  6. Link to the Fortify scan report

So, instead of navigating the Bamboo UI to harvest this information, the AngularJS page invokes the APIs to systematically retrieve it and display it on a single page. The success of automating security tools in the DevOps pipeline depends upon making it easy for Developers to consume the tools' outputs.

The Bamboo APIs used for building the page:

  1. /rest/api/latest/project?expand=projects.project.plans
  2. /rest/api/latest/result/PROJECTKEY-PLANKEY?expand=results.result.artifacts

The PROJECTKEY-PLANKEY information is obtained from the first API call.

The first API call returns all the projects and plans that are active in Bamboo. In AngularJS, make the REST call and parse the results to get the PROJECTKEY-PLANKEY information, then make a second REST API call to get the artifact information.

Making nested REST calls in AngularJS poses challenges. I used promises to chain the nested REST calls. Although there was some learning curve to get up to speed with AngularJS, it paid off handsomely, as wiring in the REST calls and displaying the results was much simpler with AngularJS.
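
The original page chained the calls with AngularJS promises; as a language-neutral illustration of the same nested-call pattern, here is a Java sketch using CompletableFuture, with the two Bamboo HTTP calls stubbed out as in-memory lookups (the plan keys and payloads are made up).

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

// The second call depends on the first call's result, like chained promises.
public class NestedCalls {
    // stub for GET /rest/api/latest/project?expand=projects.project.plans
    static CompletableFuture<List<String>> fetchPlanKeys() {
        return CompletableFuture.completedFuture(List.of("PROJ-PLAN1", "PROJ-PLAN2"));
    }

    // stub for GET /rest/api/latest/result/{planKey}?expand=results.result.artifacts
    static CompletableFuture<String> fetchArtifacts(String planKey) {
        return CompletableFuture.completedFuture("artifacts for " + planKey);
    }

    // chain: get the plan keys, then fan out one artifacts call per plan
    public static List<String> summarize() {
        return fetchPlanKeys()
                .thenCompose(keys -> {
                    List<CompletableFuture<String>> calls =
                            keys.stream().map(NestedCalls::fetchArtifacts).toList();
                    return CompletableFuture.allOf(calls.toArray(CompletableFuture[]::new))
                            .thenApply(v -> calls.stream()
                                    .map(CompletableFuture::join).toList());
                })
                .join();
    }
}
```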

Now that I have a summary page with AngularJS and the REST calls wired in, it would be easy to extend the page to display additional information useful for a Developer or Security assessor consuming results from the security tools integrated into the Build Pipeline.

Newest OWASP Top 10 Release Candidate List is Out

OWASP Top 10 is updated every three years, and the latest Release Candidate list of OWASP Top 10 vulnerability categories is out. Two new categories of vulnerabilities were added, and they are the focus of this blog post.

OWASP Top 10 Project team added the following two new categories to the list:

  1. A7 – Insufficient Attack Protection
  2. A10 – Underprotected APIs

Let's talk about these two.

A7 – Insufficient Attack Protection

This category suggests that most applications have insufficient Preventive, Detective, and Corrective controls built in. Does your AppSec team provide common libraries, frameworks, and services for Authentication, Authorization, Input Validation, etc.? Do these frameworks log security events when someone tries to circumvent the controls? Do you actively monitor for these events and respond when you receive alerts? Are your applications self-healing if they detect an attack in progress? These are some of the questions this new category prods us to think about.

Most organizations rely upon Developers to implement sufficient logging to be able to detect malicious activity. But more often than not, the organization that does the monitoring is separate from the Development organization and most likely is not aware that it needs to monitor and respond to these Application Security events. Most of the time, monitoring and alerting happen only at the Network and Infrastructure layers, not at the App layer. Hopefully, this new category prompts organizations to fortify SOC monitoring to include Application Security alerts.

There are a few critiques of this new category, especially as it proposes including a RASP (Runtime Application Self-Protection) capability in your application stack that can detect and protect in real time as an exploit is in progress. The reason: the category proposal came from Jeff Williams, the co-founder of Contrast Security, which offers a RASP capability. Is there a conflict of interest? Maybe. But does that stop me from exploring this new category and looking at ways to get better at application security? Definitely not.

I am not endorsing the criticism, but here is one such post to learn about the controversy around A7 – Insufficient Attack Protection. But who does not like some drama in their chosen field?

http://www.skeletonscribe.net/2017/04/abusing-owasp.html

A10 – Underprotected APIs

If you do not have APIs in your organization, you must be hiding under a rock or have accidentally hitched a ride on the Mars Rover looking for alien life. APIs are everywhere. Every company worth its salt is embracing microservices architecture. So what is the problem?

All the OWASP Top 10 vulnerabilities typically associated with Web Applications apply to APIs as well. All APIs need to be Authenticated, Authorized, etc. But one area where we see vulnerabilities is Insecure Direct Object References, where Resource Ids are exposed in the URIs and an attacker can change a Resource Id to see whether they can get to Resources they are not authorized for. For example, if you have an API to retrieve your bank account information, /rs/account/{id}, an attacker will enumerate this API with different Account Ids to see if they accidentally get access to someone else’s account information. This is an Insecure Direct Object Reference. Whether the authenticated user has access to the Resource being requested, in this case the account information referred to by the Account Id, is business logic, and it is very hard to verify outside the context of your application.

How do we mitigate this vulnerability?

API Developers need to implement what we can call Data Level Authorization checks in the application code. A check such as “does the authenticated user have access to Account Id xyz” needs to be made before any account information is sent back to the user. This check may vary based on the Resources and APIs you deal with and needs to be built within the context of the application.
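
A minimal sketch of such a check for the /rs/account/{id} example, with the ownership lookup stubbed out as a map; the class name, ids, and error handling are hypothetical, and a real service would query its own data store.

```java
import java.util.Map;

// Data-level authorization: return the account only if the
// authenticated user actually owns it.
public class AccountAuthz {
    // account id -> owning user (stand-in for the service's data store)
    private static final Map<String, String> OWNERS =
            Map.of("acct-1", "alice", "acct-2", "bob");

    public static String getAccount(String authenticatedUser, String accountId) {
        if (!authenticatedUser.equals(OWNERS.get(accountId))) {
            // deny before any data leaves the service
            throw new SecurityException("403: not authorized for " + accountId);
        }
        return "account data for " + accountId;
    }
}
```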

Please share your thoughts on what you are seeing as your organization adopts APIs and microservices architecture.