Error Handling: Prevent Information Disclosure

Are your developers handling unexpected server-side errors? If server-side error handling is weak, attackers can gain insight into the technology stack behind your applications and use that information to exploit known vulnerabilities reported in the National Vulnerability Database (NVD).

Typically, unhandled server-side errors bubble up as stack traces or default error pages that disclose web server/app server names, version numbers, etc. If an attacker learns that your application uses Apache Struts 2, the recent OS command injection vulnerability disclosed in Struts 2 provides an easy attack vector to compromise your application. Why would you let anyone know that you are using Apache Struts because of lax error handling?

What are the common techniques used for error handling in applications?

Exceptions can happen in any tier of your application – UI/mid-tier/data-tier. Typically, any modern language offers try/catch/finally to handle exceptions. In addition, the web tier offers configuration to handle any uncaught exception and display a generic error message when unhandled exceptions are raised in code. Although configuring the web tier to handle unhandled exceptions provides a safety net, educate your developers to handle exceptions in all tiers of the code. In addition to helping prevent information disclosure, proper exception handling logic allows your application to fail gracefully and provide debug information in the logs to help you triage those stubborn issues that typically only happen sporadically in production environments.
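As a minimal sketch of that web-tier safety net, assuming a Spring MVC/Spring Boot REST tier (the class name and messages below are illustrative, not from any particular application), a catch-all handler can log the full details for triage while returning only a generic message to the client:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.http.HttpStatus;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;

@ControllerAdvice
public class GlobalExceptionHandler {

    private static final Logger log = LoggerFactory.getLogger(GlobalExceptionHandler.class);

    // Safety net for anything the controllers did not handle themselves.
    @ExceptionHandler(Exception.class)
    public ResponseEntity<String> handleUncaught(Exception ex) {
        // Full details go to the logs to help triage sporadic production issues...
        log.error("Unhandled exception", ex);
        // ...while the client sees only a generic message: no stack trace, no server
        // or framework version information.
        return ResponseEntity.status(HttpStatus.INTERNAL_SERVER_ERROR)
                .body("An unexpected error occurred. Please try again later.");
    }
}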

OWASP also provides guidance on exception handling that is worth a quick read.

Nobody wants to see stack traces on their web site; it is a bad user experience, and on top of that you are handing valuable information to attackers. This is an easy one to fix with proper education for your Development teams!


How do you know if your AMIs are secure?

Do you want your Development teams to use a Linux AMI (Amazon Machine Image) offered in the AWS Marketplace, published by Joe Smith, that could have malware embedded in it?

Most of the “big” firms do not allow the use of publicly available AMIs from the AWS Marketplace; they build their own AMIs, with their own security stack embedded in them.

But how do you know the AMIs are being built with the right security stack and with the OS at the right patch level? You need to test the AMIs as best you can before you make them available for your Development teams to use.

A couple of simple options are to:

  1. Write Serverspec test cases to verify that the AMIs meet your specifications
  2. Implement some sort of static analysis of the AMIs, although I am not currently aware of any products that can do this.

But these options alone are not very effective; there is no guarantee that the test cases are actually written or that the AMIs are safe to use.

How about deploying the AMIs in a lower SysLevel and running scanning tools against an EC2 instance that uses the AMI? Before you promote an AMI for general use, have a private test VPC where you launch an EC2 instance from the AMI and run your scanning tools, such as Amazon Inspector, Qualys, etc., to make sure the OS is at the right patch level. This gives you better results and the confidence that the AMI is safe to use.

Once you are confident the AMIs are safe to use, promote them for use by Development teams. You can rely on tagging to indicate which AMIs are approved for production use; a tagging sketch follows below.
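As a minimal sketch of that tagging step, assuming the AWS SDK for Java v2 (the AMI ID and the tag key/value "approved-for-prod" are illustrative placeholders, not an established convention):

import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.CreateTagsRequest;
import software.amazon.awssdk.services.ec2.model.Tag;

public class AmiPromoter {
    public static void main(String[] args) {
        String amiId = "ami-0123456789abcdef0"; // hypothetical AMI that passed scanning
        try (Ec2Client ec2 = Ec2Client.create()) {
            // Mark the scanned AMI so Development teams can discover and launch only approved images.
            ec2.createTags(CreateTagsRequest.builder()
                    .resources(amiId)
                    .tags(Tag.builder().key("approved-for-prod").value("true").build())
                    .build());
        }
    }
}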

Now that you have made safe AMIs available for Development teams to use, how do you know they continue to be safe, given that the threat environment changes and new attacks emerge constantly?

You need to create a farm of EC2 instances built from the AMIs that are currently in use in production and scan them on a scheduled basis. This allows you to stay on top of new threats. If a vulnerability is detected, you need line of sight into all the EC2 instances that are using the vulnerable image (a lookup sketch follows the options below). After you identify all the affected EC2 instances, you have a couple of options:

  1. Patch the EC2 instances in place
  2. Re-hydrate, replacing them with EC2 instances launched from the patched AMIs

The option you pick depends on your environment.
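As a minimal sketch of that lookup, assuming the AWS SDK for Java v2 (the AMI ID below is an illustrative placeholder), you can filter DescribeInstances by image-id to list every instance launched from the vulnerable AMI:

import software.amazon.awssdk.services.ec2.Ec2Client;
import software.amazon.awssdk.services.ec2.model.DescribeInstancesRequest;
import software.amazon.awssdk.services.ec2.model.Filter;

public class VulnerableAmiLookup {
    public static void main(String[] args) {
        String vulnerableAmiId = "ami-0123456789abcdef0"; // hypothetical AMI flagged by the scheduled scan
        try (Ec2Client ec2 = Ec2Client.create()) {
            DescribeInstancesRequest request = DescribeInstancesRequest.builder()
                    .filters(Filter.builder().name("image-id").values(vulnerableAmiId).build())
                    .build();
            // Walk all reservations (paginated) and print the instances that need patching or re-hydration.
            ec2.describeInstancesPaginator(request).reservations().stream()
                    .flatMap(reservation -> reservation.instances().stream())
                    .forEach(instance -> System.out.println(instance.instanceId()));
        }
    }
}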

Now that we have considered new AMI development and ongoing scanning, note that the EC2 server farm you set up for scheduled scans also allows you to check for zero-day vulnerabilities.

Building this automation for scanning your AMIs and promoting safe AMIs for use allows your vulnerability scanning to be effective for cloud-native infrastructure. Without it, you end up scanning every available instance in the cloud; those instances may be part of an auto-scaling group, and you will never have an accurate way to map a vulnerability back to a specific instance that needs to be patched.

Having a server farm with all the AMIs that are currently promoted for use by the Development teams makes the scanning quite effective: you can map the results back to all the EC2 instances currently using the vulnerable AMIs, limit the number of servers you need to scan, and use your scanning licenses in a cost-effective manner.

Please share your approach to making AMIs safe for your development teams to use.


CSRF Defense for REST API

What is CSRF?

CSRF stands for Cross Site Request Forgery. Let us say you are on your bank's website, logged on and paying your bills. While the session is active, your 5th grader interrupts you with a math question. Of course you need to google the answer, as you have long forgotten the 5th grade math Mr. Andy Ruport taught you. You end up on a site that is infected with malware while you are still logged on to your bank's website. The malware on that site could issue a state-changing operation, such as a fund transfer, on your behalf to your bank's website. Since you are already logged on, the active session cookie gets sent along with the request to your bank's site. So, unbeknownst to you, a fund transfer request goes to your bank on your behalf. This, in a nutshell, is a CSRF attack.

Refer to the CSRF documentation on OWASP Website for additional information.

The defense against CSRF attacks in a traditional web application is to expect a CSRF token on every state-changing request, which the server validates against the token saved in the session. Since REST APIs are supposed to be stateless, there is no session-bound CSRF token that can be maintained across requests. Refer to the CSRF Prevention cheat sheet on the OWASP website for additional details. CSRF Guard is a widely used library for implementing CSRF defense in traditional web applications; a hand-rolled sketch of the session-token pattern follows below.
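As a hand-rolled illustration of that session-token pattern (this is not CSRF Guard itself; the class and attribute names are illustrative, assuming a Java servlet container):

import java.security.SecureRandom;
import java.util.Base64;
import javax.servlet.http.HttpSession;

public class CsrfTokenHelper {

    private static final SecureRandom RANDOM = new SecureRandom();

    // Called when rendering a form: store a fresh token in the session and embed it in a hidden field.
    public static String issueToken(HttpSession session) {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        String token = Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
        session.setAttribute("CSRF_TOKEN", token);
        return token;
    }

    // Called on every state-changing request: compare the submitted token with the session copy.
    public static boolean isValid(HttpSession session, String submittedToken) {
        Object expected = session.getAttribute("CSRF_TOKEN");
        return expected != null && expected.equals(submittedToken);
    }
}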

The CSRF defense for REST APIs is for the API to expect a custom header such as X-XSRF-TOKEN. By making the client set a custom header on the request, the attacker cannot rely on traditional CSRF attacks such as auto-submitting an HTML form, because custom headers cannot be set via that attack vector. If the attacker instead sends an AJAX request using XMLHttpRequest (Level 2), the CORS protection mechanism supported by all modern browsers kicks in and lets the REST API accept or deny the cross-origin request from the attacker-controlled site. A minimal filter enforcing the header check is sketched below.
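As a minimal sketch of enforcing that custom-header check, assuming a Servlet 4.0+ container (where Filter's init and destroy have default implementations); the header name X-XSRF-TOKEN matches the one discussed above, while the filter class name is illustrative:

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class CsrfHeaderFilter implements Filter {

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        HttpServletResponse res = (HttpServletResponse) response;

        boolean stateChanging = !"GET".equals(req.getMethod())
                && !"HEAD".equals(req.getMethod())
                && !"OPTIONS".equals(req.getMethod());

        // A cross-site auto-submitted HTML form cannot add this custom header, so its absence
        // on a state-changing request is rejected as a potential CSRF attempt.
        if (stateChanging && req.getHeader("X-XSRF-TOKEN") == null) {
            res.sendError(HttpServletResponse.SC_FORBIDDEN, "Missing CSRF header");
            return;
        }
        chain.doFilter(request, response);
    }
}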

This is a simple and elegant CSRF Defense that can be implemented for stateless REST APIs.

OWASP (Open Web Application Security Project)

I recently came across this blog on AppSec-related topics at https://appsecfordevs.com. I found the articles to be very pertinent and helpful. Please take a look at this blog if you are in the Application Security domain.

AppSec For Devs

If you are new to the Application Security domain, OWASP has a treasure trove of information to get you started on the path to a great career in application security as a Security Architect, Penetration Tester, Security Assurance Engineer, etc. The OWASP website is located at https://www.owasp.org/index.php/Main_Page

Some of the main things that I learnt from the OWASP Website are:

  1. The Top 10 Web Application Vulnerabilities, which are published every 3 years. This list allows you to prioritize your dollars on the defenses you need to build to secure your web application assets. They have recently released the Release Candidate for the 2016 OWASP Top 10. The additions to the 2016 list are: a) automated detection and remediation of vulnerabilities; b) lack of sufficient security controls in APIs (e.g., REST APIs). I agree that most organizations that are re-architecting their sites using REST/CSA technologies are missing basic security controls such as authorization and output escaping…


API Gateway: Why do you need it?

If your decent-sized organization is embarking on a Micro Services Architecture, I think you need to look at a few infrastructure components to make it easy to manage the APIs.

Some of the basic building blocks for having a robust Micro Services Architecture are:

  1. Distributed Cache: Memcached, GemFire, and several other products out there help you build and manage your cache infrastructure, which is central to building high-performing APIs.
  2. Service Registry: Don’t you want to know what your APIs are, especially the ones that are exposed to the Internet? Having a Service Registry greatly enables the adoption and use of Micro Services. Without one, be ready for duplicated services, orphaned services that are no longer used, and the inability to scan all APIs because there is no record of them. But this is a hard one to get right unless registering and updating your APIs in the Service Registry is fully automated in the build pipeline.
  3. API Deployment: You need a platform that makes it easy for Development teams to deploy APIs. Platforms such as Pivotal Cloud Foundry that offer containers to run your services fill this space.
  4. API Gateway: APIs offer unique challenges in terms of authentication, authorization, service composition, request/response transformation (payload, data format from XML to JSON and vice versa, etc.), and throughput handling that need a robust solution. API Gateways fill this space.

In this post I want to share some of my thoughts on API Gateways, particularly Layer7.

Traditional web application authentication technologies are agent-based: you have an agent running on your web server that intercepts and interrogates the request based on pre-configured policies. With Micro Services running on containers or serverless platforms such as Lambda, this architecture is no longer applicable.

API Gateways such as Layer7 make it very easy to apply the following processing steps for your micro services:

  1. Authentication schemes such as OpenID, Basic, Form, and certificate-based authentication
  2. Authorization: OAuth, RBAC, etc.
  3. Request/Response Transformations
  4. Protocol conversions such as SOAP to REST
  5. Conditional Logic
  6. Error Handling
  7. Threat Protection
  8. Caching
  9. Throttling
  10. API Registry -> very important to have in order to avoid duplication of services and to allow service discovery

Organizations that jump on the micro services bandwagon without first building these infrastructure components will soon run into an operational and security nightmare that could easily be avoided by architecting and implementing the above-mentioned building blocks. These days there are various robust open-source solutions that meet your needs without breaking the bank.

Do you know what your attacks are?

Do you know what your attacks are so that you can validate that the defensive controls that you are building are really helping keep your applications safe?

In most organizations, the attacks detected by the SOC (Security Operations Center) never make it to the Development teams, or even to their counterparts in the SSG (Software Security Group) who work with Development teams to fortify application defensive controls. In my opinion this leaves a big gap between the actual attacks and the defenses that the SSG and application teams are building, and it leaves organizations susceptible because they may not be fortifying defenses where they need to. Does this sound like what happens in your organization?

In the absence of real information on the attacks an organization sees, SSG teams rely upon the OWASP Top 10 categories to prioritize which application defenses to build out. Although this is better than nothing, the OWASP Top 10 is based on vulnerabilities found during penetration testing by the security firms that participate in the OWASP Top 10 project. For the most part, the Top 10 vulns seem to align with the high-value rewards attackers go after, but there is no validation or data supporting this that I could put my hands on.

So, why is the SOC team hesitant to share the attack patterns they detect and respond to with the SSG and Development teams? If you work in a SOC, I would like to hear from you. Please drop in a note.

In the meantime, the OWASP Top 10 is the best we have to prioritize defensive controls to fortify applications. So long!

How to make effective presentations – A Survival Guide for Techies

Practical advice. I like this very much!

AppSec For Devs



REST API: Input Validations & Output Encoding

If you are implementing micro services in your organization, one of the basic defenses you need to implement is validating untrusted user inputs and escaping the data for its data sink, whether it is stored in a database, reflected back to the browser, or passed as input to other systems.

If your micro-services are invoked from a browser-based application, do not rely upon the input validations performed on the client side. This may sound like a “duh”, but I have seen many smart developers rely on client-side validations and skip implementing server-side validation. Input validation on the client side improves the user experience, while input validation and output escaping on the server side provide your application's defense against injection attacks. Client side is usability; server side is security.

Input validations are easy to implement in your micro-services if you are using a modern framework such as Jersey or Spring. These frameworks support the JSR 303 and JSR 349 Bean Validation annotations, so you simply decorate input fields with the appropriate annotations. For example, if you are taking a user's address as input, you could enforce input validations as follows:

Ref: http://beanvalidation.org/1.0/spec/#example-groupsequence

import javax.validation.constraints.NotNull;
import javax.validation.constraints.Size;

public class Address {
    @NotNull @Size(max = 50)
    private String street1;

    @NotNull @ZipCode // @ZipCode is a custom constraint, as in the referenced spec example
    private String zipcode;

    @NotNull @Size(max = 30)
    private String city;

    // getters and setters omitted for brevity
}

Refer to the JSR 303 specification for the full list of capabilities, such as custom validations, fail-fast ordering of input validations, error messages to return to the client app, etc.
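As a minimal sketch of how these annotations get enforced at the API boundary, assuming a Jersey (JAX-RS) resource with the jersey-bean-validation module on the classpath (the resource path and class name are illustrative):

import javax.validation.Valid;
import javax.ws.rs.Consumes;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

@Path("/addresses")
public class AddressResource {

    @POST
    @Consumes(MediaType.APPLICATION_JSON)
    public Response create(@Valid Address address) {
        // If any constraint on Address fails, Jersey rejects the request with
        // 400 Bad Request before this method body runs.
        return Response.status(Response.Status.CREATED).build();
    }
}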

Most frameworks automatically encode data as XML or JSON based on the format of the data returned to the client, so there is not much you need to do. Refer to the Jersey user guide on output encoding.


Open Source Software (OSS) – Do you know what you are vulnerable to?

Do you know what percentage of the software that runs in your application is open source libraries?

Do you know if your application is vulnerable because you are using an old version of an OSS library that has known vulnerabilities?

Are you still writing custom software for your plumbing code instead of using OSS libraries?

If you answered “yes” to any of these questions, you have some work to do, my friend!

If you answered “yes” to the last question, you must be either working for Microsoft or wasting your organization's resources on something you could get for free, and which is probably better software than what you could write yourself.

In any case, a large number of organizations (a) use OSS libraries in their custom software development and (b) have OSS libraries in their environment from the commercial products they use.

You can take the time to query the CVE database for the OSS libraries in your environment, but this is obviously tedious with thousands of libraries and the various versions you need to keep track of.

There are commercial vendors out there who offer products that solve this problem for a fee. Some of the well known vendors in this space are:

  1. Nexus Lifecycle from Sonatype
  2. Blackduck

OWASP has a free option, the “OWASP Dependency-Track” project.

Although the free version has limited capabilities, it is a pretty good start if you are constrained by resources. However, the commercial products provide extensive coverage of the OSS components you use in your organization across license management, vulnerabilities, and architecture.

The capabilities provided by the commercial tools, which make the case for your dollars, are listed below:

  1. Scan for OSS components with known vulnerabilities during development or in the build pipeline, via IDE plugins and CI/CD integration plugins
  2. Set policies to fail builds in CI/CD if OSS components fail compliance against enterprise policies
  3. Coverage for license compliance, vulnerability checks, and compliance with architecture policies
  4. Automation of your OSS procurement process by blocking download of OSS components that have known vulnerabilities or fail compliance against set policies
  5. A dashboard of your OSS vulnerability posture
  6. Integration with bug tracking tools for resolution
  7. Real-time updates if your apps are using OSS components that have zero-day vulnerabilities

If your organization is already using tools such as SonarQube and Sonatype products such as Nexus Pro, the OSS scanning products from Sonatype, Nexus Lifecycle and Nexus Firewall, integrate efficiently into that ecosystem. Developers get a better user experience with a familiar tool set, and that is more than half the battle in getting developers to consume findings from security products.

Please share if your organization is using any of these products, or others, to keep a health check on the OSS libraries you have exposure to.

How to make effective presentations – A Survival Guide for Techies

A lot of folks in the security field would rather be in front of a computer than in front of people. But in order to grow professionally and become a leader in any field, one needs to be comfortable talking, presenting, and teaching in various settings and group sizes. I am not a natural public speaker. I envy people who can jump in front of a group, speak eloquently, and give effective presentations, but I would like to get better at this skill. I teach a course on defensive security for developers, training them on implementing and verifying security controls in their code to mitigate the OWASP Top 10 vulnerabilities. I will blog about teaching techniques in another post. Recently, I had to get in front of the CTO, managers, and other lead architects in the organization. This is a situation where, if you do well, it will be a “no op” (you may get a compliment or two), but if you don't, that gets amplified. Talk about pressure 🙂

I want to share the techniques that I used to prepare and give the presentation. Luckily, it went well! So hopefully these techniques will be useful for you if and when you need to make a presentation to senior leadership or your peers. Some of these may be obvious and intuitive, but here it goes.

Prep work before the presentation

  1. First, just the thought of giving a presentation may induce anxiety. The key is to get started on planning and preparing it; the longer you procrastinate, the more the anxiety grows. So just get to action!
  2. Be passionate about the topic you are presenting. It is very hard to get yourself and the audience excited if you are not into the topic, and it shows.
  3. Make sure you master the topic you are going to present. You may not be able to answer every question, but you need reasonable mastery of the subject. I always fret about not being able to answer every question; you do not have to. You can always ask for time to research and get back to them. Do not react and say something you are not sure about.
  4. Prepare, prepare, and prepare! You do not need to, and should not, memorize a script. But say the words out loud and run through the presentation more than once. Not only does this give you a sense of the time it takes, it also makes your presentation go more smoothly when it is time to deliver for real.
  5. The first minute of the presentation is always the most anxious one. So prepare the first few minutes really, really well and everything else will follow.

A few minutes before getting on the stage

  1. Loosen your vocal cords. I always find a quiet place where no one can hear me and practice what Julian Treasure presented in this wonderful TED Talk. It is 1 minute long, from minute 8 to 9. Excellent; it always works for me and gives me a stronger, more confident voice.
  2. Make a mental note to yourself to
    1. speak slowly
    2. speak loudly
    3. speak confidently
    4. pace your presentation
  3. See if you can weave in some humor during the introduction; this will help you relax and get the audience to join you on the journey.

During the presentation

  1. The first minute of the presentation is always the most anxious one. Given your preparation up to this point, trust that the adrenaline will kick in and you will become alert, and at the same time more relaxed, as you progress through the talk.
  2. The feeling at the end, when the audience has learned something from your talk and appreciates it, makes it all worthwhile.
  3. Enjoy the ride !

I am sure effective presentation skills will allow you to showcase your skills and grow in your professional career!

Please drop in a comment if you have tips and tricks in your arsenal that help you make a fantastic presentation.