Bamboo APIs – Love ’em

Recently, I had a chance to learn and use the Bamboo APIs. We integrated Fortify static code analysis into the Bamboo Build Pipeline and wanted an easy way to verify that:

  1. All the applicable projects are set up with Fortify scans
  2. Scans are completing successfully and generating artifacts for the developers to consume

Bamboo publishes a robust set of APIs that allow us to get all the information about plans, builds and the artifacts that are produced. The APIs provide several options to filter and customize the amount of data retrieved, allowing you to optimize for performance.

I used AngularJS as the front-end framework to invoke the APIs and build a summary page to show the following information:

  1. Project name
  2. Plan name
  3. If Fortify scans apply for the specific project type
  4. If the Fortify scans are set up
  5. Link to the build results
  6. Link to the Fortify scan report

So, instead of navigating the Bamboo UI to harvest this information, the AngularJS page invokes the APIs to systematically retrieve it and display it on a single page. The success of automating security tools in the DevOps pipeline depends on making it easy for Developers to consume the outputs of the tools.

The Bamboo APIs used for building the page:

  1. /rest/api/latest/project?expand=projects.project.plans
  2. /rest/api/latest/result/PROJECTKEY-PLANKEY?expand=results.result.artifacts

The PROJECTKEY-PLANKEY information is obtained from the first API call.

The first API call returns all the projects and the plans that are active in Bamboo. In AngularJS, make the REST call, parse the results to get the PROJECTKEY-PLANKEY information, and then make a second REST API call to get the artifacts information.

Making nested REST calls in AngularJS poses some challenges; I used promises to chain the calls. Although there was a learning curve to get up to speed with AngularJS, it paid off handsomely, as wiring in the REST calls and displaying the results was much simpler with AngularJS.
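The two-call flow described above can be sketched in a few lines. This is a Python sketch for brevity (the actual page used AngularJS and chained promises); the JSON shapes and the fetch_json stub are illustrative assumptions, not the exact Bamboo response format.

```python
# Sketch of the two-step Bamboo API flow: list plans, then fetch artifacts
# per plan. fetch_json stands in for a real HTTP GET (e.g. requests.get)
# and returns canned data so the sketch is self-contained.

def fetch_json(url):
    if "project?expand" in url:
        return {"projects": {"project": [
            {"key": "SEC", "name": "Security",
             "plans": {"plan": [{"key": "SEC-FORT", "name": "Fortify Scan"}]}}]}}
    return {"results": {"result": [
        {"buildResultKey": "SEC-FORT-42",
         "artifacts": {"artifact": [{"name": "fortify-report"}]}}]}}

def plan_keys(base="https://bamboo.example.com"):
    # First call: all active projects and their plans.
    data = fetch_json(base + "/rest/api/latest/project?expand=projects.project.plans")
    for project in data["projects"]["project"]:
        for plan in project["plans"]["plan"]:
            yield plan["key"]  # PROJECTKEY-PLANKEY

def artifacts_for(plan_key, base="https://bamboo.example.com"):
    # Second call: build results and artifacts for one plan.
    data = fetch_json(base + "/rest/api/latest/result/" + plan_key
                      + "?expand=results.result.artifacts")
    return [a["name"]
            for r in data["results"]["result"]
            for a in r["artifacts"]["artifact"]]

summary = {key: artifacts_for(key) for key in plan_keys()}
```

In AngularJS the same chaining happens by returning the second `$http` promise from the first promise's `then` callback.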

Now that I have a summary page with AngularJS and the REST calls wired in, it would be easy to extend the page to display additional information useful for a Developer or a Security assessor consuming results from the security tools integrated into the Build Pipeline.

Newest OWASP Top 10 Release Candidate List is Out

OWASP Top 10 is updated every 3 years, and the latest 2016 Release Candidate list of OWASP Top 10 Vulnerability Categories is out. Two new categories of vulnerabilities were added, and they are the focus of this Blog Post.

OWASP Top 10 Project team added the following two new categories to the list:

  1. A7 – Insufficient Attack Protection
  2. A10 – Underprotected APIs

Let’s talk about these two.

A7 – Insufficient Attack Protection

This category suggests that most applications have insufficient Preventive, Detective and Corrective Controls built into them. Does your AppSec team provide common libraries, frameworks and services for Authentication, Authorization, Input Validation etc.? Do these frameworks log Security events if someone tries to circumvent these controls? Do you actively monitor for these errors and respond if you receive alerts on these events? Are your applications self-healing if they detect an attack in progress? These are some of the questions that this new category prods us to think about.

Most organizations rely upon Developers to implement sufficient Logging to be able to Detect malicious activity. But more often than not, the organization that does the monitoring is separate from the Development Organization and most likely is not aware that it needs to monitor and respond to these Application Security Events. Most of the time, monitoring and alerting exist only at the Network and Infrastructure layers and not at the App layer. Hopefully, having this new category prompts organizations to fortify SOC monitoring to include Application Security Alerts.

There are a few critiques of this new category, especially as it proposes including a RASP (Runtime Application Self-Protection) component in your application stack that can detect and protect in real time as an exploit is in progress. The reason is that this category proposal came from Jeff Williams, the co-founder of Contrast Security, which offers RASP capability. Is there a conflict of interest? Maybe. But does that stop me from exploring this new category and looking at ways to get better at application security? Definitely not.

I am not endorsing the criticism, but here is one such post to learn about the controversy around A7 – Insufficient Attack Protection. But who does not like some drama in their chosen field?

http://www.skeletonscribe.net/2017/04/abusing-owasp.html

A10 – Underprotected APIs

If you do not have APIs in your organization, you must be hiding under a Rock or have accidentally hitched a ride on the Mars Rover looking for Alien life. APIs are everywhere. Every company worth its salt is embracing micro-services architecture. So what is the problem?

All the OWASP Top 10 vulnerabilities typically associated with Web Applications apply to APIs as well. All APIs need to be Authenticated, Authorized, etc. But one area where we see vulnerabilities is Insecure Direct Object References, where Resource Ids are exposed in the URIs and an attacker can change the Resource Id to see if they can get to Resources that they are not authorized for. For example, if you have an API to retrieve your bank account information and the API is /rs/account/{id}, an attacker will enumerate this API with different Account Ids to see if they accidentally get access to someone else’s account information. This is an Insecure Direct Object Reference. Whether the Authenticated user has access to the Resource being requested, in this case the Account information referred to by the Account Id, is Business Logic, and it is very hard to verify outside of the context of your application.

How do you mitigate this vulnerability?

API Developers need to implement what we can call Data Level Authorization checks in the application code. A check that the authenticated user has access to Account Id “xyz” needs to be implemented before any Account information is sent back to the user. This check may vary based on the Resources and APIs that you deal with and needs to be built within the context of the application.
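A minimal sketch of such a Data Level Authorization check for the /rs/account/{id} example, assuming a hypothetical ownership lookup (in a real application this would be a database query or entitlement service, and the names here are made up):

```python
# Hypothetical ownership data standing in for a real entitlement lookup.
ACCOUNT_OWNERS = {"acct-100": "alice", "acct-200": "bob"}

def get_account(authenticated_user, account_id):
    # Authentication alone is not enough: verify this user owns this
    # specific resource before returning anything about it.
    if ACCOUNT_OWNERS.get(account_id) != authenticated_user:
        return {"status": 403, "body": "Forbidden"}  # don't leak why
    return {"status": 200, "body": {"account": account_id}}
```

An enumeration attempt (alice requesting acct-200 or a guessed id) gets the same 403 as a nonexistent account, which avoids confirming which ids exist.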

Please share your thoughts on what you are seeing as your organization adopts APIs and Micro-Services architecture.

Error Handling: Prevent Information Disclosure

Are your developers handling unexpected server side errors? If server side error handling is weak, attackers can gain insights into the technology stack used in your applications and use that information to exploit known vulnerabilities in the stack as reported in the National Vulnerability Database (NVD).

Typically, un-handled server side errors bubble up as stack-traces or default error pages disclosing web server/app server names, version numbers etc. If an attacker were to know that your application is using Apache Struts 2, the recent OS Command Injection vulnerability disclosed in Apache Struts 2 provides an easy attack vector to compromise your application. Why would you let anyone know that you are using Apache Struts with lax error handling?

What are the common techniques used for error handling in applications?

Exceptions can happen in any tier of your application – UI/Mid-tier/Data-tier. Typically any modern language offers try/catch/finally to handle exceptions. Also, the web tier offers configuration to handle any uncaught exception and display generic error messages when un-handled exceptions are raised in code. Although configuring the web tier to handle un-handled exceptions provides a safety net, educate your developers to handle exceptions in all tiers of the code. In addition to helping prevent information disclosure, proper exception handling logic allows your application to fail gracefully and provides debug information in the logs to help you triage those stubborn issues that typically only happen sporadically in production environments.
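The pattern above can be sketched as a small wrapper: log the full details server-side, return only a generic message to the client. A Python sketch with illustrative names (handle_request, process):

```python
import logging

logging.basicConfig(level=logging.ERROR)
logger = logging.getLogger("app")

def handle_request(process, request):
    try:
        return {"status": 200, "body": process(request)}
    except Exception:
        # Full stack trace goes to the server log for triage,
        # never into the response.
        logger.exception("Unhandled error processing request")
        # Generic message: no stack trace, no server/version details.
        return {"status": 500, "body": "An unexpected error occurred."}
```

The same idea applies in any tier: catch at the boundary, log with context, and keep the outward-facing message free of technology details.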

OWASP also provides guidance on exception handling that is worth giving a quick read.

Nobody wants to see stack-traces reported on their web sites, which is a bad user experience; on top of that, you are providing valuable information to attackers. This is an easy one to fix with proper education for your Development teams!

 

How do you know if your AMIs are secure?

Do you want your Development teams to use a Linux AMI (Amazon Machine Image) offered in the Amazon Marketplace, published by some Joe Smith, that could have malware embedded in it?

Most of the “big” firms do not allow use of publicly available AMIs from the Amazon Marketplace; they build their own AMIs, with their own security stack embedded in them.

But, how do you know the AMIs are being built with the right security stack and the OS at the right patch level? You need to test the AMIs as best as you can before you make them available for your Development teams to use.

A couple of simple options could be to:

  1. Write Serverspec test cases to verify the AMIs meet your specifications
  2. Implement some sort of static analysis on the AMIs. But, currently I am not aware of any products that can do this.

But these alone are not effective, and there is no way to know whether the test cases were written and the AMIs are safe to use.

How about deploying the AMIs in a lower SysLevel and running scanning tools against an EC2 instance that uses the AMI? Before you promote the AMIs for general use, have a private test VPC where you deploy an EC2 instance with the AMI and run your scanning tools, such as Amazon Inspector, Qualys etc., to make sure that the OS is at the right patch level. This gives you better results and the confidence that the AMI is safe to use.

Once you are confident that the AMIs are safe to use, promote them for use by Development teams. You can rely upon tagging to indicate which AMIs are available for production use.
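A sketch of the tag-based promotion idea: only AMIs that passed scanning carry an approval tag, and teams filter on it. The tag name and flattened tag shape are made up for illustration (real boto3 `describe_images` responses use a list of Key/Value pairs, and the filtering could be done server-side with `Filters`):

```python
# Return only AMIs carrying the (hypothetical) SecurityApproved tag.
def approved_amis(images):
    return [img["ImageId"] for img in images
            if img.get("Tags", {}).get("SecurityApproved") == "true"]

# Canned image records so the sketch is self-contained.
SAMPLE_IMAGES = [
    {"ImageId": "ami-0aaa", "Tags": {"SecurityApproved": "true"}},
    {"ImageId": "ami-0bbb", "Tags": {"SecurityApproved": "false"}},
    {"ImageId": "ami-0ccc"},  # never scanned, no tags
]
```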

Now that you have made a safe AMI available for Development teams to use, how do you know the AMIs continue to be safe, given that the threat environment changes and new attacks emerge constantly?

You need to create a farm of EC2 instances with the AMIs that are currently in use in production and scan them constantly on a scheduled basis. This allows you to stay on top of new threats. If a vulnerability is detected, you need to have line of sight into all the EC2 instances that are using the vulnerable image. After you identify all the EC2 instances, you have a couple of options:

  1. Patch the EC2 instance in place
  2. Re-hydrate with EC2 instances that have the patched AMIs

The option you pick depends on your environment.
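The “line of sight” step above is essentially a join from the vulnerable AMI to its running instances. A minimal sketch with canned records (with boto3 you would page through `describe_instances` instead):

```python
# Given a vulnerable AMI id, list every instance launched from it so it
# can be patched in place or re-hydrated.
def instances_using(ami_id, instances):
    return [i["InstanceId"] for i in instances if i["ImageId"] == ami_id]

# Canned instance records standing in for a real describe_instances call.
INSTANCES = [
    {"InstanceId": "i-001", "ImageId": "ami-0aaa"},
    {"InstanceId": "i-002", "ImageId": "ami-0bbb"},
    {"InstanceId": "i-003", "ImageId": "ami-0aaa"},
]
```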

Now that we have considered new AMI development and ongoing scanning, note that the EC2 server farm you set up for scheduled scans also allows you to verify zero day vulnerabilities.

Building this automation for scanning your AMIs and promoting safe AMIs for use will allow your vulnerability scanning to be effective for cloud native infrastructure. Without it, you scan all available instances in the cloud, and since those instances might be part of an auto scale group, you will never have an accurate way to map a vulnerability back to a specific instance that needs to be patched.

With a server farm containing all the AMIs currently promoted for use by the development teams, scanning becomes quite effective: you can map the results back to all the EC2 instances currently using the vulnerable AMIs, limit the number of servers you need to scan, and use your scanning licenses in a cost effective manner.

Please share your approach to making AMIs safe for your development teams to use.

 

CSRF Defense for REST API

What is CSRF?

CSRF stands for Cross Site Request Forgery. Let us say you are on your Bank’s Web Site, logged on and paying your bills. While the session is active, your 5th Grader interrupts you with a Math question. Of course you need to google the answer, as you have long forgotten the 5th Grade Math Mr. Andy Ruport taught you. You end up on a site that is infected with Malware while you are still logged on to your Bank’s website. The malware on that site could issue a state changing operation, such as a Fund Transfer, on your behalf to your Bank’s website. Since you are already logged on, there is an active session cookie that gets sent with the request to your Bank’s site. So, unbeknownst to you, a Fund Transfer request goes to your Bank on your behalf. This, in a nutshell, is a CSRF Attack.

Refer to the CSRF documentation on OWASP Website for additional information.

The defense against a CSRF Attack in a traditional web application is to expect a CSRF Token on every state changing request, which the server validates against the token saved in the Session. Since APIs are supposed to be stateless, there is no CSRF Token that can be maintained across requests in REST API calls. Refer to the CSRF Prevention cheat sheet on the OWASP website for additional details. CSRF Guard is a widely used library to implement CSRF Defense in traditional web applications.

The CSRF Defense for REST APIs is to have the API expect a custom header such as X-XSRF-TOKEN. By making the client set a custom header on the request, the attacker cannot rely upon traditional CSRF Attacks such as auto-submit of an HTML Form, as the attacker cannot set custom headers via that attack vector. If the attacker were to rely upon sending an AJAX request using XMLHttpRequest (Level 2), then the CORS protection mechanism supported by all modern browsers would kick in and allow the REST API to accept or deny the request from the attacker controlled site.
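Server-side, the check reduces to rejecting any state changing request that lacks the custom header. A minimal Python sketch (the header name comes from the text above; the request shape is illustrative, and a real implementation would live in your framework's middleware):

```python
# Methods that don't change state are exempt from the CSRF check.
SAFE_METHODS = {"GET", "HEAD", "OPTIONS"}

def csrf_check(method, headers):
    if method in SAFE_METHODS:
        return True
    # An auto-submitted HTML form cannot set this header, and a
    # cross-origin XMLHttpRequest that tries to is subject to the
    # browser's CORS enforcement.
    return "X-XSRF-TOKEN" in headers
```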

This is a simple and elegant CSRF Defense that can be implemented for stateless REST APIs.

OWASP (Open Web Application Security Project)

I recently came across this Blog on AppSec related topics at https://appsecfordevs.com. I found the articles to be very pertinent and helpful to me. Please take a look at this Blog if you are in the Application Security domain.

AppSec For Devs

If you are new to the Application Security domain, OWASP has a treasure trove of information to get you started on the path to a great career in application security as a Security Architect, Penetration Tester, Security Assurance Engineer etc. The OWASP Website is located at https://www.owasp.org/index.php/Main_Page

Some of the main things that I learnt from the OWASP Website are:

  1. Top 10 Web Application Vulnerabilities that are published every 3 years. This list allows you to prioritize your dollars on what defenses you need to build for securing your web application assets. They have recently released the Release Candidate for 2016 OWASP Top 10. The additions to the 2016 list are: a) Automated detection and remediation of vulnerabilities. b) Lack of sufficient security controls in APIs (for e.g. REST APIs). I agree that most organizations that are re-architecting their sites using REST/CSA technologies are missing basic security controls such as authorizations, output escaping…


API Gateway: Why you need it?

If your decent-sized organization is embarking on Micro Services Architecture, I think you need to look at a few infrastructure components to make it easy to manage the APIs.

Some of the basic building blocks for having a robust Micro Services Architecture are:

  1. Distributed Cache: MemCache, Gemfire and several other products out there help you build and manage your cache infrastructure, which is central for building high performing APIs.
  2. Service Registry: Don’t you want to know what your APIs are, especially the ones that are exposed to the Internet? Having a Service Registry greatly enables adoption and use of Micro Services. Without one, be ready for duplicated services, orphaned services that are no longer used, and the inability to scan all APIs because there is no record of them. These are some of the major headaches you will face if you do not have a Service Registry. But this is a hard one to get right unless registering and updating your APIs in the Service Registry is fully automated in the Build Pipeline.
  3. API Deployment: You need a platform that makes it easy for the Development teams to deploy APIs. Platforms such as Pivotal Cloud Foundry that offer containers to run your services fill this space.
  4. API Gateway: APIs offer unique challenges in terms of Authentication, Authorization, Service Composition, Request/Response Transformation (Payload, Data Format from XML to JSON and vice versa etc.), Throughput handling etc. that need a robust solution. API Gateways fill this space.

In this post I want to share some of my thoughts on API Gateways, particularly Layer 7.

Traditional Web Application Authentication technologies are agent based: you have an agent running on your web server that intercepts and interrogates the Request based on pre-configured policies. With Micro Services running on containers or serverless platforms such as Lambda, this architecture is no longer applicable.

API Gateways such as Layer7 make it very easy to apply the following processing steps for your micro services:

  1. Authentication schemes such as OpenID, Basic, Form, Certificate based authentication
  2. Authorization: OAuth, RBAC etc.
  3. Request/Response Transformations
  4. Protocol conversions such as SOAP to REST
  5. Conditional Logic
  6. Error Handling
  7. Threat Protection
  8. Caching
  9. Throttling
  10. API Registry -> This is very important to have to avoid duplication of services, and it allows service discovery

Organizations that jump on the micro services bandwagon without first building these infrastructure components will soon run into Operational and Security nightmares that could easily be avoided by architecting and implementing the above mentioned building blocks. These days there are various robust open source solutions that meet your needs without breaking the bank.