Security vs Speed and Flexibility: A Conundrum for Security Professionals!

Micro-service architecture has no doubt proven to be the antidote the industry needs to deal with the complexities of the monolith applications that organizations have built and nurtured over the last couple of decades! A lot of organizations are at a point where the monolith applications that are their bread and butter now stifle innovation and are cost-prohibitive to maintain!

So, what is the problem?

Monolith applications were in a way good for security: you put all your eggs in one basket and watched it closely! Now with micro-service architecture, especially in large organizations, security teams are being challenged to support the variety of platforms, languages and frameworks that Development teams want to use as the best fit for the problems they are trying to solve. The Development teams say that as long as the contract for an API remains the same, it should not matter what tools and technologies are used under the covers to implement it. Fair question, isn’t it?

In order for a security control to be effective, one needs to design, implement and verify that the control is configured and working properly within the context of the application. This new paradigm challenges the Security Teams: we need to understand how these platforms, languages and frameworks work so that we can offer the security best practices that teams need to follow and the security anti-patterns they need to avoid!

Let me offer a couple of examples to illustrate the challenges that Security teams face in this new paradigm.

1. Let us say you need to implement authorization in a Java-based micro-service. There are many ways to solve this problem:

  • Java-provided JSR-250 authorization annotations
  • The Spring Security framework
  • Apache Shiro
  • A custom solution, etc.

If you allow Developers to pick any of these authorization frameworks, how do you know the frameworks are being used effectively? For example, Spring Security has so many nuances that we cannot expect Development teams to understand the nuts and bolts of the framework to use it the right way. Just having an authorization annotation does not guarantee that authorization is being enforced unless Spring Security is configured properly!

For example, authorization checks at the URI level are based on the sequence of URI patterns specified in the Spring Security configuration, where the first match wins. If the sequence is specified incorrectly, authorization checks may not be triggered on a secure end-point even though an authorization annotation exists at the method level. Developers must have a robust suite of automated test cases to verify that Spring Security is configured correctly and authorizations are working as expected in their applications.
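To see why ordering matters, here is a toy model of first-match-wins URL rules in plain Java. This is not real Spring Security configuration, just an illustration of the semantics described above, with made-up `/admin` and `/` patterns:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Simplified model of Spring Security's first-match-wins URL authorization.
// The rule lists below are assumptions for illustration, not real Spring config.
public class UrlRuleOrder {
    // Ordered prefix -> requiresAuth
    private final Map<String, Boolean> rules = new LinkedHashMap<>();

    public void addRule(String prefix, boolean requiresAuth) {
        rules.put(prefix, requiresAuth);
    }

    // The first matching prefix wins, like Spring's ordered matchers.
    public boolean requiresAuth(String uri) {
        for (Map.Entry<String, Boolean> e : rules.entrySet()) {
            if (uri.startsWith(e.getKey())) {
                return e.getValue();
            }
        }
        return true; // require auth when nothing matches
    }

    public static UrlRuleOrder misconfigured() {
        UrlRuleOrder r = new UrlRuleOrder();
        r.addRule("/", false);     // broad permit-all listed first...
        r.addRule("/admin", true); // ...shadows the /admin rule entirely
        return r;
    }

    public static UrlRuleOrder corrected() {
        UrlRuleOrder r = new UrlRuleOrder();
        r.addRule("/admin", true); // specific rule first
        r.addRule("/", false);
        return r;
    }
}
```

In the misconfigured version, a request to /admin/users matches the broad "/" rule first and sails through unauthenticated, exactly the failure mode an automated test suite should catch.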

2. Input validations are another area of concern.

The crux of input validation is the regex that sits behind the input validator. In the absence of an enterprise-wide input validation library, it is up to each team to figure out the regex, and it is not trivial to get a regex correct! Developers, either for lack of regex expertise or for lack of time, come up with weak patterns such as “.*” that allow any character, so the input validator does nothing to stop the application from consuming dangerous input from an un-trusted control sphere.
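As an illustration, the sketch below contrasts a do-nothing “.*” validator with a strict allow-list regex. The username pattern is an assumption for the example, not an enterprise standard:

```java
import java.util.regex.Pattern;

// Contrast a do-nothing validator with a strict allow-list regex.
public class InputValidators {
    // ".*" matches anything, including script tags and SQL metacharacters.
    private static final Pattern WEAK = Pattern.compile(".*");

    // Allow-list: 3 to 20 characters of letters, digits, dot, underscore, hyphen.
    private static final Pattern USERNAME = Pattern.compile("^[A-Za-z0-9._-]{3,20}$");

    public static boolean weakAccepts(String input) {
        return WEAK.matcher(input).matches();
    }

    public static boolean isValidUsername(String input) {
        return input != null && USERNAME.matcher(input).matches();
    }
}
```

The weak validator happily accepts an XSS payload; the allow-list version rejects it while still accepting legitimate usernames.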

I get the problem, but how do I solve this?

This is where the conundrum comes in: how do you give Developers the flexibility to deliver business functionality faster without compromising on security?

If you allow the Development teams the flexibility to solve the problem any way they want, you impose a burden on the security teams to understand the nuances of each framework, the best practices to follow and the security anti-patterns to avoid.

There is no one-size-fits-all answer, but here is my proposition: both sides have to give in a bit and meet somewhere in the middle! Let me elaborate!

For a long time, Security teams have provided one set of enterprise controls for the Development teams to use and created policies enforcing their use. Instead of forcing everybody to use the same set of controls, security teams should look to support 3 to 5 (or some manageable set of) platforms/languages/frameworks, allowing enough flexibility that Development teams do not feel forced to use a hammer where a screwdriver would have been efficient! Security teams must strive to expand the tool box available to the Development teams, and the correct size of that tool box depends on your organization!

At the same time, the Development teams need to understand that they cannot simply jump on the next fad in the industry. This is especially true of the JavaScript eco-system, where a plethora of frameworks and libraries is available for Developers to use. These frameworks are Open Source, go through shorter and shorter hype cycles, and carry the added risk of not being maintained or patched for security vulnerabilities.

The answer lies in having the Security teams and Development teams collaborate to mature re-usable security patterns, libraries and code snippets in the new platforms/languages/frameworks that Development teams want to use. And, for sure, hold Development teams accountable for security vulnerabilities found in their applications. Remember, with great power comes great responsibility!




Interactive Application Security Testing (IAST) Tools

The security industry started on its security testing automation journey almost a decade ago! First there were Static Application Security Testing (SAST) tools and then Dynamic Application Security Testing (DAST) tools. Neither lived up to expectations, primarily due to the large number of false positives in SAST tools and the difficulty of automating DAST tools, which require manual intervention to train the tool and triage results. Contrast Security was one of the early pioneers in a new space called Interactive Application Security Testing (IAST) that aims to fill this gap! In this post we will discuss IAST tools and what they bring to the table.

Effectiveness of IAST Tools Over SAST/DAST Tools

IAST tools look to combine the best of what SAST and DAST tools offer, but without the baggage those tools bring with them. The basic principle of IAST is that you configure your application with an IAST agent that can track a request from its “source” to its “sink” and determine if there is a vulnerability in the path due to a missing sanitizer or encoder.

The agent is configured at runtime and has better context of the execution than a SAST tool, which allows IAST to provide better results with fewer false positives. Unlike DAST tools, IAST tools do not drive the application; they depend upon automated functional test cases or manual testing to drive the application while the agent runs in the background identifying vulnerabilities in the execution path. However, this is also a drawback of IAST tools: if you do not have sufficient test coverage, either automated or manual, the effectiveness of IAST is limited because the agent can only see code that is executed. The agent does some analysis at startup, but the results there are comparable to what a SAST tool provides. So, if you are planning to get an IAST tool, make sure that you have sufficient functional test coverage for your application, either automated or manual.

Does IAST eliminate manual pen testing?

This is a frequently asked question from senior management when you recommend a big investment in any security tool, including IAST. Although IAST offers significant improvements over traditional SAST or DAST tools, the tooling is not there yet to completely eliminate manual pen testing. Yes, the tools are much better now at identifying certain categories of application security vulnerabilities such as XSS, Injection and Open Source Software vulns, but they are not able to identify vulnerabilities in business logic, back doors in the code, privilege escalation attacks etc. Look at the tools as a way to catch the low-hanging fruit so that you can free up pen testers to focus on more complex attacks on your applications! You need to augment security testing tools with robust code reviews and focused pen testing to fortify your application security!

Commercial IAST Tools

  1. Contrast Security is the leader in this space! Contrast has been on this journey longer than others and has improved its capabilities and language/framework coverage over the years. Contrast Security seems to check off most of the capabilities you look for in a security testing tool! Contrast does a good job of identifying vulns in the Open Source Software (OSS) you use in your application. In addition to identifying OSS vulns, the tool identifies whether you are actually using that component in your application. About 80% of the Open Source Software you include in your application is dead weight and is never used! So, having a tool that helps you identify whether you are actually using a vulnerable OSS component is a big win for prioritizing fixes for your OSS components!
  2. Seeker from Synopsys is another IAST tool to consider! Seeker differentiates itself in this space by not only identifying a vulnerability but also actively trying to exploit it, thus weeding out false positives reported by the tool. The advantage with Seeker is that it is part of Synopsys, which offers a broad range of security testing tools: Coverity for SAST, Black Duck for OSS scanning, Seeker for IAST. As Synopsys integrates these products and matures the platform, you will have a single pane of glass for vulnerabilities reported across SAST, DAST, OSS and IAST tools. Isn’t that an appealing proposition?
  3. Veracode and Checkmarx are in the early stages of building out IAST capabilities on their platforms. Watch for updates from these vendors. The competition will only make the products better and will be a win-win for consumers!

Trust, but verify your IAST Tools!

IAST is a runtime tool, so you must verify that the agent is compatible with the languages, frameworks and libraries that you use, including the specific versions. Unlike a SAST tool, where you may get partial coverage depending on how much code you throw at it, an IAST tool may be “all or nothing” if it is not able to detect “sources” and “sinks” in the versions of the languages/frameworks that you use. So, it is imperative to evaluate the tool within your environment!

In Conclusion

If you budget for only one testing tool, I would strongly encourage you to consider IAST! But also be aware of the pitfalls of IAST:

  • Verify the tools in your environment against the languages/frameworks and the versions you use
  • Make sure you have effective test coverage via automated or manual functional testing. IAST is only as good as the test coverage!


Just In Time (JIT) Training

Gone are the days of the traditional security training model, where a Developer has to schedule training, sit in a classroom or take an on-demand course for a day or two, and learn a whole bunch of topics that may or may not be pertinent to what they do when they get back to the office. As DevSecOps becomes mainstream and teams adopt Continuous Delivery practices, it becomes imperative that security training is delivered to the Developer in byte-sized modules, right when they need it, in the tools they use to develop software!

Security Training for DevOps Developers

For JIT security training to be effective for the Developer, I believe it has to meet the following criteria:

  1. Training modules must be very focused and should not take more than 5 minutes to complete.
  2. Training must be interactive and offer the Developer an opportunity to type in code.
  3. Training should be gamified, offering points, badges etc. based on the number of modules completed, scores in tests and quizzes etc.
  4. Training must be accessible from the tool the Developer is coding in, e.g., IDEs such as Eclipse and IntelliJ.
  5. Training must be linked from vulnerabilities reported by security testing tools, making it easy for the Developer to find the pertinent module to fix the vulnerability.

Products offering Security Training

Online learning platforms such as Pluralsight, Udemy, Lynda etc. are a big leap from where we were just a few years ago, offering a vast variety of training courses at affordable prices! But these platforms are not optimized for the DevOps Developer and do not meet the criteria mentioned above. However, there are security vendors in this space offering on-demand training that is much more aligned with DevOps and does meet those criteria.

Some of the vendors I looked at that seem to have robust offerings are listed below. Please do not take this as a product endorsement, but as references to get you started if you are interested in products in this space.

  1. Codebashing from Checkmarx
  2. eLearning from Veracode
  3. Secure Code Warrior

The products from Checkmarx and Veracode integrate their training modules with their SAST offerings and provide links to specific training modules within the description of each vulnerability identified by the SAST tool. This makes it very effective for the Developer to get up to speed on the vulnerability and the fix they have to implement to remediate it.

Free Learning Resources!

While commercial products are good for a large enterprise, the best security learning I ever got is all free! In just the past few years, a plethora of security-focused webinars has become available from Security Vendors and Security Experts.

On a personal note, just a few years ago, in order to maintain my CISSP credential and meet the required CPE credits, I had to look for conferences or multi-day trainings offered by vendors. These days, I make it a point every week to listen to an hour of security-focused webinars offered for free! There is no better way to keep up with the CPE requirement and continuously learn about the happenings in the security space. The resources out there are extensive, but some of the ones I go to most are:

  1. AWS Security
  2. ISC2 Resources
  3. BrightTALK

Whether you are considering commercial products or taking advantage of the free resources out there, I feel organizations have to focus on training Developers and provide them opportunities to quickly learn and implement security in their code, while they are developing software and within the tools they are using to develop it!

Happy Learning!

Automated Security Test Cases

A paradigm shift in security testing

Let me propose a radical new idea: implement security test cases to test the security controls in your application, just as you test functional requirements. Yeah, I know, there is nothing radical about it and it is not a new concept. But while a lot of organizations have embraced Test Driven Development (TDD) and are extremely good at automating the testing of functional requirements, security testing is still offloaded as a point-in-time assessment to the security teams. Let’s make automated security testing as common as writing test cases for functional testing! Are you with me on this?

Benefits of implementing security test cases

  1. It builds secure coding practices in the Development teams and makes them aware of security vulnerabilities and their remediation.
  2. Over time, you build a strong suite of security test cases that executes continuously in a pipeline, providing continuous security assurance as opposed to point-in-time assessments.
  3. Security tools can’t detect logic and functional security vulnerabilities, such as which roles are required to execute a capability, authorizations based on dollar amounts, user registration flows etc.
  4. Most importantly, this is “free”: there are no licensing costs for expensive application security testing tools that at most provide partial coverage and effectiveness compared to manual security assessments. I put “free” in quotes because you still need to spend time implementing the security test cases.

What are common security test cases to implement?

Please see below for some examples of automated security test cases to implement across various categories.

Authentication test cases

  1. Verify that you are not able to log in with invalid credentials
  2. Verify that you are not able to bypass multi-factor authentication steps
  3. Verify that the account is locked after 3 failed attempts
  4. Is access to secure end points limited to authenticated users?
  5. Are authentication token timeouts working as expected?
  6. Is logout working as expected?
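As a sketch of how the account-lockout case above could be automated, here is a minimal lockout model in plain Java. The 3-attempt threshold mirrors the test case, and the class and method names are illustrative:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal lockout model: lock an account after 3 failed login attempts.
public class LockoutPolicy {
    private static final int MAX_ATTEMPTS = 3;
    private final Map<String, Integer> failures = new HashMap<>();

    public boolean isLocked(String user) {
        return failures.getOrDefault(user, 0) >= MAX_ATTEMPTS;
    }

    // Returns true only when credentials are valid AND the account is not locked.
    public boolean login(String user, boolean credentialsValid) {
        if (isLocked(user)) {
            return false;
        }
        if (!credentialsValid) {
            failures.merge(user, 1, Integer::sum);
            return false;
        }
        failures.remove(user); // reset the counter on success
        return true;
    }
}
```

A security test case would then assert that a fourth attempt fails even with valid credentials.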

Authorization test cases

  1. Is access limited to users that have a certain Role?
  2. For authorizations based on thresholds, verify that access is limited based on business logic rules
  3. Can the user assume a Role they are not assigned by sending a role on the request?
  4. Is the user authorized to access the object referenced in the URL? This is significant for REST end points where, as a best practice, IDs are specified in the URL. For example, if you have a REST end point to retrieve account information, /rs/account/123456789, verify that the authenticated user is authorized to access account 123456789, as the end user can easily change the account number to somebody else’s. This is called an Insecure Direct Object Reference vulnerability.
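As a sketch of the object-level check that test case 4 exercises, here is a plain-Java authorizer for the /rs/account/{id} example. The account-to-owner map is a hypothetical stand-in for a real data store:

```java
import java.util.Map;

// Object-level authorization check for GET /rs/account/{id}:
// the authenticated principal may only read accounts they own.
public class AccountAuthorizer {
    // accountId -> owning userId (stand-in for a real data store)
    private final Map<String, String> owners;

    public AccountAuthorizer(Map<String, String> owners) {
        this.owners = owners;
    }

    public boolean canRead(String principal, String accountId) {
        String owner = owners.get(accountId);
        return owner != null && owner.equals(principal);
    }
}
```

A security test case would replay the request with a tampered account id and assert that access is denied.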

Input Validation and Output Encoding test cases

  1. Does the system reject invalid or malicious inputs?
  2. Is the system encoding user inputs before persisting them to a sink? For example, send in data containing a SQL Injection payload.

CSRF Protection test cases

  1. Verify that you are not able to perform state-changing operations without a CSRF token in the request
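A minimal synchronizer-token check that such a test would exercise might look like the following sketch. The class and method names are illustrative, and a production implementation should use a constant-time comparison such as MessageDigest.isEqual:

```java
import java.security.SecureRandom;
import java.util.Base64;
import java.util.Objects;

// Synchronizer-token pattern: a state-changing request is allowed only when
// the token it carries matches the one stored in the user's session.
public class CsrfGuard {
    private static final SecureRandom RANDOM = new SecureRandom();

    public static String newToken() {
        byte[] bytes = new byte[32];
        RANDOM.nextBytes(bytes);
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    // Reject missing or mismatched tokens.
    // Note: use a constant-time comparison in production code.
    public static boolean allowStateChange(String sessionToken, String requestToken) {
        return sessionToken != null && Objects.equals(sessionToken, requestToken);
    }
}
```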

Miscellaneous test cases

  1. Verify that data is encrypted in transit through the use of the https protocol
  2. Verify that the TLS certificate is valid
  3. Verify that the “secure” and “http only” flags are set on cookies that contain sensitive data
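The cookie-flag check can be automated with a simple header inspection like the sketch below. The header values are made up for the example; a real test would assert this against the Set-Cookie headers returned by the application:

```java
import java.util.Arrays;

// Check that a Set-Cookie header carries the Secure and HttpOnly flags.
public class CookieFlags {
    public static boolean hasFlag(String setCookieHeader, String flag) {
        // Cookie attributes follow the name=value pair, separated by ';'
        return Arrays.stream(setCookieHeader.split(";"))
                .map(String::trim)
                .anyMatch(attr -> attr.equalsIgnoreCase(flag));
    }

    public static boolean isHardened(String setCookieHeader) {
        return hasFlag(setCookieHeader, "Secure") && hasFlag(setCookieHeader, "HttpOnly");
    }
}
```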

How to implement security test cases?

Now that you are convinced to implement security test cases, how do you go about implementing them? There is no secret sauce: Developers can use commonly available Open Source testing frameworks such as Selenium, Cucumber, JUnit, Mockito, the Jersey Test Framework etc. There are also open source projects such as Gauntlt that can give you a head start. In a future blog, I can provide some sample Cucumber test cases to demonstrate how easy it is to incorporate security testing into your applications!

CORS Demystified!


CORS stands for Cross-Origin Resource Sharing. It is a security feature supported by browsers that allows UI applications to be hosted on a separate domain from the underlying APIs they use. This is what makes the API Economy work! Consuming applications can call APIs hosted on different domains from different providers to build applications that are meaningful to the end user. If the UI and API had to be hosted on the same domain, applications would be very restricted in the functionality they could provide, as they could only call services hosted on their own domain!

Let’s talk about how this all started! Long time ago when Dinosaurs were roaming the earth (yeah, not so long, but you get the point), browsers allowed AJAX calls only to the domain the page was loaded from. Say the page was loaded from, then the browser would allow AJAX calls back only to The reason for this was to protect the end user from the following scenario.

  1. The user logs on to and gets an authentication cookie. Let’s call it the xyzauthtoken cookie, scoped to the domain.
  2. While still logged on to, the user navigates to another site,
  3. generates an AJAX request to The browser sends the xyzauthtoken cookie along with the request, allowing to impersonate the user logged in to This is pretty bad!

However, as web applications evolved, this restriction made it very hard for applications to consume services/APIs from different providers hosted on different domains.

The CORS spec was put together by the major browser providers to solve this problem. Chrome, Safari and Firefox were the early adopters of this security feature, whereas IE started supporting CORS only from version 10 onward!

How does CORS work?

CORS empowers the API providers and allows them to control who can consume their services over AJAX. The CORS flow works as follows.

  1. The user logs on to and gets an authentication cookie. Let’s call it the xyzauthtoken cookie, scoped to the domain.
  2. Let us say the webapp invokes services hosted on to get some data. How the user authenticates to is beyond the scope of this article! Just in case you are curious, OAuth solves that problem, a topic for a future blog post!
  3. The CORS-supporting browser holds the request and issues something called a pre-flight OPTIONS request to, with the Origin request header set to The browser does not send any cookies with the pre-flight request.
  4. Now can decide whether to accept a request from If it is okay serving requests from that origin, it responds with the Access-Control-Allow-Origin response header. The server typically checks the Origin against its allow-list and echoes back the matching origin; if it accepts requests from any domain, it can respond with Access-Control-Allow-Origin: *.
  5. The browser checks whether the value in Access-Control-Allow-Origin matches the value in Origin. If there is a match, the browser releases the original request to
  6. The AJAX request to is sent and the app is able to get its data from a service hosted on a different domain!
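The steps above can be sketched in plain Java as a server-side allow-list plus the browser's matching check. The domain names are placeholders, and this is a model of the logic rather than any particular framework's implementation:

```java
import java.util.List;

// Model of the CORS pre-flight decision: the API keeps an allow-list of
// origins, and the browser compares the echoed header with the page's origin.
public class CorsPolicy {
    private final List<String> allowedOrigins;

    public CorsPolicy(List<String> allowedOrigins) {
        this.allowedOrigins = allowedOrigins;
    }

    // Per the CORS spec the response header holds a single origin (or "*"),
    // so the server echoes back the matched Origin rather than the whole list.
    public String allowOriginHeader(String requestOrigin) {
        if (allowedOrigins.contains("*")) {
            return "*";
        }
        return allowedOrigins.contains(requestOrigin) ? requestOrigin : null;
    }

    // The browser releases the held request only when the echoed value
    // matches the page's origin (or is "*").
    public static boolean browserAllows(String pageOrigin, String allowOriginHeader) {
        return "*".equals(allowOriginHeader) || pageOrigin.equals(allowOriginHeader);
    }
}
```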

How does CORS protect the end user?

In the above scenario, let’s say wants to allow AJAX calls only from and The service keeps those two origins in its allow-list and responds with the Access-Control-Allow-Origin header only when the request’s Origin is one of them.

Now, imagine a scenario where the user navigates to and the malicious site issues an AJAX request to Let us see how CORS protects the user from this call.

  1. The CORS-supporting browser holds the request and issues a pre-flight OPTIONS request to, with the Origin request header set to The browser does not send any cookies with the pre-flight request.
  2. does not find in its allow-list, so it responds without an Access-Control-Allow-Origin header for that origin.
  3. The browser checks whether the value in Access-Control-Allow-Origin matches the Origin. In this case it does not, so the browser drops the request and no call is made to

The CORS spec also allows the provider of an API to specify which HTTP methods and HTTP headers consumers can send, providing the flexibility to accept, say, a GET request from any domain while restricting PUT/POST to a subset of trusted domains. If you need more information on CORS and all the features it supports, it is well documented here!

Next time you are Banking or checking your investments online, turn on Developer Tools (F12) in your browser and see how the AJAX calls are being handled!

SAST: Static Code Analysis to identify Security Vulnerabilities

What is SAST?

SAST stands for Static Application Security Testing. Before “Shift Left” became fashionable, security companies were already thinking: wouldn’t it be better to identify security vulnerabilities while the Developer is writing the code, as opposed to later in the process? What a futuristic thought!

Static Application Security Testing (SAST) tools analyze source code or compiled code to identify security vulnerabilities. The intent of SAST tools is to shift security testing to the left and allow Developers to scan code, identify and fix vulnerabilities during the software development process. SAST solutions have been available in the industry for 10+ years. Fortify, AppScan, Checkmarx and Veracode are some of the leading commercial SAST providers. There are also mature Open Source tools and frameworks such as SonarQube, Find Security Bugs and PMD that have plugins/rules to scan for security vulnerabilities and allow you to implement custom rules for the languages and frameworks used in your organization.

How do SAST tools analyze the code?

SAST tools are very effective at identifying the use of insecure APIs. For example:

  1. Use of Dynamic SQL that could lead to SQL Injection vulnerabilities
  2. Insecure use of Encryption & Hashing APIs such as the use of weak encryption and hashing algorithms
  3. Information disclosure with logs and stack traces emitted to stdout, stderr
  4. Use of deprecated APIs that are vulnerable

Commercial SAST tools provide deeper analysis through Data Flow Analysis, wherein the tool builds a model of untrusted input from a Source (say an HTML form) to a Sink (say a database). The tools then analyze the data flow models to identify whether the untrusted input is properly sanitized with input validation and output encoding to defend against Injection and XSS types of attacks. The tools allow you to customize the sanitizers to account for your own libraries and whittle down false positives.
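To illustrate the source-to-sink idea, the sketch below shows the same untrusted input concatenated into SQL versus the parameterized shape a PreparedStatement would execute. The table and column names are made up for the example:

```java
// Source-to-sink illustration: the same untrusted input, concatenated vs
// parameterized. A SAST data flow analysis flags the first shape.
public class QueryBuilding {
    // Vulnerable: untrusted input flows straight into the SQL string (the sink).
    public static String unsafeQuery(String userInput) {
        return "SELECT * FROM accounts WHERE owner = '" + userInput + "'";
    }

    // The parameterized shape a PreparedStatement would execute; the driver
    // binds the value separately, so the input can never change the query text.
    public static String parameterizedQuery() {
        return "SELECT * FROM accounts WHERE owner = ?";
    }
}
```

With an input like `x' OR '1'='1`, the concatenated version turns into a query that returns every row, while the parameterized version treats the whole payload as a literal value.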

SAST tools consume either raw source code or binary code for analysis.

Open Source tools are very good at identifying vulnerabilities in the first category but fall woefully short at complex Data Flow Analysis that spans multiple modules.

Coverage and Effectiveness of SAST Tools

The advantage of using SAST tools is that they offer coverage across many vulnerability categories and industry standards such as the OWASP Top 10, SANS 25, PCI DSS, HIPAA and CWEs, across many languages and frameworks. Some organizations use SAST as a mechanism to comply with regulatory requirements such as PCI DSS.

Most of the tools have coverage for Java, JavaScript, .NET, Mobile (iOS & Android), Python, Ruby etc., and the vendors keep adding support for new and upcoming languages such as Go.

You can think of SAST as offering coverage that is a mile wide but only an inch deep! The tools are very noisy and produce false positives. Don’t be under the impression that you can drop these tools into your environment and run them on autopilot! That will only drive Developers away from adopting them and lose their goodwill! You need to spend time and resources training the tools by creating customizations for the languages and frameworks used in your organization. For example, if you have custom input validators, you need to train the tool to recognize them, or it will report false positives for injection vulnerabilities.

SAST Tools Evaluation Criteria

Given the number of commercial tool vendors, what are some of the considerations for evaluating and selecting a SAST tool? I would rate the tools on the following criteria:

  1. Does the tool cover the languages and frameworks used in your organization?
  2. Does the tool cover industry vulnerability categories such as the OWASP Top 10, SANS 25, PCI DSS etc.?
  3. Is the tool customizable to limit false positives being reported?
  4. Does the tool provide mitigation guidance and is the guidance customizable?
  5. Does the tool integrate with the Build Pipeline to automate static analysis at build time and fail the build based on preset policies?
  6. Is the scan performant? Especially if the tool is integrated into the pipeline, the scans have to be fast. Does the tool allow incremental scans to speed things up?
  7. Does the tool have an IDE plugin that integrates with the Developer IDE and allows the Developer to scan code during Development and consume the scan results from your Build Pipeline scans?
  8. Is there a usable dashboard and reporting capability that allows your Security team to identify vulnerable applications and work with the application teams to mitigate the vulnerabilities?
  9. Does the vendor offer on-demand training or coaching if the Developer needs further assistance?
  10. Does the tool integrate with Bug tracking systems such as Jira and productivity tools such as Slack or ChatOps?
  11. Does the tool integrate with your Authentication and Authorization systems?

Vendors demonstrate their tools with readily available vulnerable applications such as WebGoat. The OWASP Benchmark project also provides a suite of test cases and code to help evaluate these tools. The project ranks open source SAST tools and commercial SAST tools, although it does not name the commercial tools due to licensing constraints.

In my opinion, the tools are highly tuned to work well on test apps such as WebGoat, and the best way to evaluate a tool is to throw a variety of your organization’s own applications at it and see what you get!

In Conclusion

Given the coverage and effectiveness shortfalls of SAST tools, is there still a place for them in your Development process? I believe there is value in having SAST tools in your arsenal. But instead of looking at SAST tools as the be-all and end-all, I would use them as guardrails that help Developers avoid common security anti-patterns and adhere to secure coding practices!

If your organization does not currently use SAST tools, there is no reason not to adopt the free Open Source tools and make them available to your Developers today!



Authorization Checks in REST APIs

Let’s take a look at sample REST APIs that allow Students to view their grades and Professors to add/update grades.

If a student wants to view his/her grade in English, your system perhaps has a REST API like:

GET /student/{student_id}/english/grade

The Professor perhaps has access to the following REST APIs to be able to view and update any one of their students’ grades:

  1. View a student’s grade

GET /student/{student_id}/english/grade

2. Post a student’s grade

POST /student/{student_id}/english/grade

3. Update a student’s grade

PUT /student/{student_id}/english/grade

Of course, there is a portal where the Student logs in to view their grades and the Professor logs in to post and update grades.

A savvy student can turn on HttpWatch to view the REST API calls the portal makes. Let’s say student A observes that when she clicks on a link to view her English grade, a REST API call is generated that looks like the following:

GET /student/1234/english/grade

What if the student is smart enough to replay this request with a different student id, 6789?

GET /student/6789/english/grade

If there are no proper authorization checks in the APIs, student A with id “1234” is now able to view the grade of student B with id “6789”.

Even worse, what if the student were able to update their own grade? How about trying the following API?

PUT /student/1234/english/grade with payload “A”

or update someone else’s grade to “F”

PUT /student/6789/english/grade with payload “F”

However, a Professor should be able to view, post or update any one of their students’ grades. All of the above API calls are fair if they are issued on behalf of a Professor.

So, every REST API call needs to enforce proper authorization to make sure that the user on whose behalf the API call is being made has access to the Resource on which they are taking action.

The JSR-250 spec (adopted with Java EE 5) added the following authorization annotations to make it easy to decorate methods with authorization checks:

Annotation Types Summary

- DeclareRoles – Used by an application to declare roles.
- DenyAll – Specifies that no security roles are allowed to invoke the specified method(s), i.e., the methods are to be excluded from execution in the J2EE container.
- PermitAll – Specifies that all security roles are allowed to invoke the specified method(s), i.e., the specified method(s) are “unchecked”.
- RolesAllowed – Specifies the list of roles permitted to access method(s) in an application.
- RunAs – Defines the identity of the application during execution in a J2EE container.
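To illustrate the semantics, here is a minimal plain-Java sketch of the role check a container effectively performs when it sees @RolesAllowed on a method. The method name and role data are illustrative, not part of the JSR-250 API.

```java
import java.util.Set;

// Sketch of the coarse-grained check behind @RolesAllowed:
// the invocation is permitted if the caller holds ANY of the listed roles.
public class RolesAllowedSketch {

    // Mirrors @RolesAllowed semantics: true if the caller has at least
    // one of the allowed roles.
    static boolean isInvocationAllowed(Set<String> callerRoles, String... rolesAllowed) {
        for (String role : rolesAllowed) {
            if (callerRoles.contains(role)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        Set<String> student = Set.of("student");
        Set<String> professor = Set.of("professor");

        // Only professors may post grades.
        System.out.println(isInvocationAllowed(professor, "professor")); // true
        System.out.println(isInvocationAllowed(student, "professor"));   // false
    }
}
```

Note that the check knows nothing about which student's grade is being touched; it only looks at roles.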

However, these annotations do not allow you to check whether the user has access to a specific resource ID. This is where I find Spring Security to be very helpful.

Spring Security provides a mechanism to do data-level authorization using the “hasPermission” expression. Below is a reference from the documentation:

hasPermission(Object targetId, String targetType, Object permission) Returns true if the user has access to the provided target for the given permission. For example, hasPermission(1, ‘com.example.domain.Message’, ‘read’)

This makes it a bit easier to centralize the logic for authorization checks. A REST call such as PUT /student/1234/english/grade “A” would translate into a hasPermission check:

hasPermission(“1234”, Grade, “update”)

The method has access to the Principal on whose behalf the call is being made. So, you have access to both the Principal and the target, and you can verify whether the Principal is allowed to act upon the target ID. In the case of a student, this check returns true only if the Principal and the target ID are the same and the permission is “view”. In the case of a professor, the check may be something like: does the target ID belong to a student who is currently enrolled in the professor’s class?

If you are using Spring Security or some other framework, make sure you provide an easy way to apply authorization checks to ensure that the Principal is able to act on a given Resource.
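The decision logic described above can be sketched in plain Java, independent of Spring Security’s actual PermissionEvaluator interface. The role names, enrollment data, and method signature below are illustrative assumptions, not the framework’s API.

```java
import java.util.Map;
import java.util.Set;

// Sketch of a data-level authorization check for the grades APIs:
// students may view only their own grade; professors may view or
// update grades of students enrolled in their class.
public class GradePermissionSketch {

    // Hypothetical enrollment data: professor ID -> enrolled student IDs.
    static final Map<String, Set<String>> ENROLLMENT =
            Map.of("prof-smith", Set.of("1234", "6789"));

    // principalId and role identify the caller; targetStudentId is the
    // {student_id} from the URI; permission is "view" or "update".
    static boolean hasPermission(String principalId, String role,
                                 String targetStudentId, String permission) {
        if ("student".equals(role)) {
            // A student may only view their own grade.
            return "view".equals(permission) && principalId.equals(targetStudentId);
        }
        if ("professor".equals(role)) {
            // A professor may act only on students enrolled in their class.
            Set<String> students = ENROLLMENT.getOrDefault(principalId, Set.of());
            return students.contains(targetStudentId);
        }
        // Deny by default for unknown roles.
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasPermission("1234", "student", "1234", "view"));           // true
        System.out.println(hasPermission("1234", "student", "6789", "view"));           // false
        System.out.println(hasPermission("1234", "student", "1234", "update"));         // false
        System.out.println(hasPermission("prof-smith", "professor", "6789", "update")); // true
    }
}
```

The key design choice is deny-by-default: any combination of principal, target, and permission not explicitly allowed returns false.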

Break Builds: Getting Devs to Take Action!

As your organization matures in DevSecOps, enabling developers to release features faster, as a security professional you are trying to ensure that the code is free of security defects. As part of your security team, you incorporated automated security tools into the build pipeline, such as static code analysis, dynamic analysis, security test cases, etc. But you can only bring the horse to the water; can you make it drink? In other words, how do you get your development teams to consume the results reported by the security tools? That is the million-dollar question.

There is an aspirational goal that no security defects should ever leak into production, and an ask from clients to prevent code with security defects from getting deployed. So why not break the build and force the developers to fix the defects reported by the security tools before any code is deployed to production?

The challenge is multi-fold.

  1. The false-positive rate of security tools is significantly high. This is especially true for application security; you get better results with infrastructure security. Some tools produce reports that literally run into thousands of pages, which a developer must triage to identify true positives they can act on and fix. The tools lack the business context to tell whether an issue is high or low risk for a given attack vector. Whether a missing input validation or output escaping results in an exploitable vulnerability depends on whether the data can be leveraged by an attacker to compromise your system.
  2. The criteria you use to break a build give you a false sense of security. Given the limitations of the tools, a whole bunch of false negatives still leak into production. The developers can always point back and say the build went through, hence there must be no security defects in the code.
  3. Given the negative testing involved in uncovering security defects, the current generation of tools is not capable of identifying all possible attack vectors or assuring defenses against those attacks. I would think AI has a role to play in identifying security weaknesses in code that are actually exploitable.

SonarQube, one of the tools used to measure code quality, has removed the capability to break builds based on criteria such as the severity level of the issues, the number of issues, etc. The team recommends a better approach: teams should look at the dashboard of issues, triage them, and identify the real issues applicable to their code to be fixed. This did not go over well with security-minded folks, but in reality it is a pragmatic approach until security tools mature and are able to report security issues that are exploitable, as opposed to simply flagging security code patterns that may or may not lead to a vulnerability.

Until then, there is no easy button for assuring application security in the DevOps pipeline. Security organizations will have to continuously tune their security tools, follow up with developers to look at the results, and add exploitable defects to the backlog for the devs to work on.

I would like to hear from you. Please share what you have done to assure security in DevOps build pipelines. What kind of automated tools do you run? How do you get developers to take action on them? Were you successful in implementing automated enforcement gates that prevent developers from deploying code with vulnerabilities identified by security tools in the DevOps pipeline?




Bamboo APIs – Love ’em

Recently, I had a chance to learn and use the Bamboo APIs. We integrated Fortify static code analysis into the Bamboo build pipeline and wanted an easy way to verify that:

  1. All the applicable projects are setup with Fortify scans
  2. Scans are completing successfully and generating artifacts for the developers to consume

Bamboo publishes a robust set of APIs that allows us to get all the information about plans, builds and the artifacts that are produced. The APIs provide several options to filter and customize the amount of data that can be retrieved allowing you to optimize for performance.

I used AngularJS as the front-end framework to invoke the APIs and build a summary page to show the following information:

  1. Project name
  2. Plan name
  3. If Fortify scans apply for the specific project type
  4. If the Fortify scans are setup
  5. Link to the build results
  6. Link to the Fortify scan report

So, instead of navigating the Bamboo UI to harvest this information, the AngularJS page invokes the APIs to systematically retrieve this information and display it on a single page. The success of automating security tools in the DevOps pipeline depends upon making it easy for developers to consume the outputs of the tools.

The Bamboo APIs used for building the page:

  1. /rest/api/latest/project?expand=projects.project.plans
  2. /rest/api/latest/result/PROJECTKEY-PLANKEY?expand=results.result.artifacts

The PROJECTKEY-PLANKEY information is obtained from the first API call.

The first API call returns all the projects and the plans that are active in Bamboo. In AngularJS, make the REST call and parse the results to get the PROJECTKEY-PLANKEY information, then make a second REST API call to get the artifacts information.
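The same two-step flow can be sketched outside the browser in plain Java: extract the plan keys from the first response, then build the per-plan result URL for the second call. The JSON sample below is a simplified assumption of the real Bamboo payload shape, not the actual response.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch of chaining the two Bamboo API calls: pull PROJECTKEY-PLANKEY
// values out of the projects response, then build the result-with-artifacts
// URL for each plan.
public class BambooPlanKeys {

    // Assumed payload shape: plan entries carry "key":"PROJECTKEY-PLANKEY".
    static final Pattern PLAN_KEY =
            Pattern.compile("\"key\"\\s*:\\s*\"([A-Z]+-[A-Z0-9]+)\"");

    // Extracts every plan key found in a response body.
    static List<String> extractPlanKeys(String body) {
        List<String> keys = new ArrayList<>();
        Matcher m = PLAN_KEY.matcher(body);
        while (m.find()) {
            keys.add(m.group(1));
        }
        return keys;
    }

    // Builds the second API call for one plan key.
    static String resultUrl(String planKey) {
        return "/rest/api/latest/result/" + planKey + "?expand=results.result.artifacts";
    }

    public static void main(String[] args) {
        // Hypothetical, simplified response from the first API call.
        String sample = "{\"projects\":{\"project\":[{\"plans\":{\"plan\":["
                + "{\"key\":\"SEC-FORTIFY\"},{\"key\":\"SEC-BUILD\"}]}}]}}";
        for (String key : extractPlanKeys(sample)) {
            System.out.println(resultUrl(key));
        }
    }
}
```

In practice you would use a real JSON parser and an HTTP client; the regex here just keeps the sketch self-contained.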

Making nested REST calls in AngularJS poses some challenges; I used promises to chain them. Although there was a learning curve to get up to speed with AngularJS, it paid off handsomely, as wiring in the REST calls and displaying the results was much simpler.

Now that I have a summary page with AngularJS and the REST calls wired in, it will be easy to extend the page to display additional information useful for a developer or a security assessor consuming results from the security tools integrated into the build pipeline.

Newest OWASP Top 10 Release Candidate List is Out

OWASP Top 10 is updated roughly every three years, and the latest 2017 Release Candidate of the OWASP Top 10 vulnerability categories is out. Two new categories of vulnerabilities were added, and they are the focus of this blog post.

OWASP Top 10 Project team added the following two new categories to the list:

  1. A7 – Insufficient Attack Protection
  2. A10 – Underprotected APIs

Let's talk about these two.

A7 – Insufficient Attack Protection

This category suggests that most applications have insufficient preventive, detective, and corrective controls built in. Does your AppSec team provide common libraries, frameworks, and services for authentication, authorization, input validation, etc.? Do these frameworks log security events if someone tries to circumvent the controls? Do you actively monitor for these errors and respond when you receive alerts on these events? Are your applications self-healing if they detect an attack in progress? These are some of the questions this new category prods us to think about.

Most organizations rely upon developers to implement sufficient logging to be able to detect malicious activity. But more often than not, the organization that does the monitoring is separate from the development organization and most likely is not aware that it needs to monitor and respond to these application security events. Most of the time, the monitoring and alerting is only at the network and infrastructure layers, not at the application layer. Hopefully, having this new category encourages organizations to extend SOC monitoring to include application security alerts.

There are a few critiques of this new category, especially as it proposes including RASP (Runtime Application Self-Protection) in your application stack, which can detect and protect in real time while an exploit is in progress. The reason: the proposal for this category came from Jeff Williams, the co-founder of Contrast Security, which offers a RASP capability. Is there a conflict of interest? Maybe. But does that stop me from exploring this new category and looking at ways to get better at application security? Definitely not.

I am not endorsing the criticism, but here is one such post to learn about the controversy around A7 – Insufficient Attack Protection. Besides, who does not like some drama in their chosen field?

A10 – Underprotected APIs

If you do not have APIs in your organization, you must be hiding under a rock or have accidentally hitched a ride on the Mars Rover looking for alien life. APIs are everywhere. Every company worth its salt is embracing micro-services architecture. So what is the problem?

All the OWASP Top 10 vulnerabilities typically associated with web applications apply to APIs as well. All APIs need to be authenticated, authorized, etc. But one area where we see vulnerabilities is Insecure Direct Object References, where resource IDs are exposed in the URIs and an attacker can change a resource ID to see if they can get to resources they are not authorized for. For example, if you have an API to retrieve your bank account information, say /rs/account/{id}, an attacker will enumerate this API with different account IDs to see if they accidentally get access to someone else's account information. This is an Insecure Direct Object Reference. Whether the authenticated user has access to the resource being requested (in this case, the account information referred to by the account ID) is business logic, and it is very hard to verify outside of the context of your application.

How do you mitigate this vulnerability?

API developers need to implement what we can call data-level authorization checks in the application code. A check such as "does the authenticated user have access to account ID xyz?" needs to be performed before any account information is sent back to the user. This check may vary based on the resources and APIs that you deal with and needs to be built within the context of the application.
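As a sketch, such a check for GET /rs/account/{id} might look like the following deny-by-default ownership test; the ownership map and names are hypothetical.

```java
import java.util.Map;

// Minimal sketch of a data-level authorization check that must run
// before any account data is returned.
public class AccountAccessSketch {

    // Hypothetical ownership data: account ID -> owning user.
    static final Map<String, String> OWNER =
            Map.of("acct-1", "alice", "acct-2", "bob");

    // Allow the read only when the authenticated user owns the account;
    // unknown accounts fall through to a denial.
    static boolean canRead(String authenticatedUser, String accountId) {
        return authenticatedUser.equals(OWNER.get(accountId));
    }

    public static void main(String[] args) {
        System.out.println(canRead("alice", "acct-1")); // true: own account
        System.out.println(canRead("alice", "acct-2")); // false: someone else's account
        System.out.println(canRead("alice", "acct-9")); // false: unknown account
    }
}
```

An attacker enumerating /rs/account/{id} with different IDs then gets a denial for every account they do not own.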

Please share your thoughts on what you are seeing as your organization adopts APIs and micro-services architecture.