Shining a Flashlight on Mobile Application Permissions

April 23, 2014
Filed under: application security, Mobile 

Brightest Flashlight App Permissions
The Federal Trade Commission (FTC) recently completed and announced the terms of a settlement with GoldenShore Technologies, a one-man development shop based out of Idaho and creator of the popular “Brightest Flashlight” application for Android. Back in December the FTC, in response to a number of complaints, began investigating the app, which was doing a lot more than turning on your phone’s LED camera flash. Prior to installation, the app requested permission to reach the internet, to access contacts, and even to track real-time geolocation via GPS or IP address. So why does a basic flashlight app need all those permissions? To sell the private data of its 50 to 100 million users to less-than-scrupulous third parties, of course.

Consumers often don’t pay attention to the EULA, allowing developers to slip in all kinds of pernicious language. And lest you think this is just an Android problem, it occurs in Apple mobile applications as well. Because apps like this don’t behave the way traditional malware behaves, they often get through both Android’s and Apple’s vetting processes. It becomes incredibly easy for developers to collect private information on a massive scale and then sell that data to a disreputable party. These types of privacy issues are only amplified in enterprises with weak or no MDM policies. Think about the types of data your employees could be unknowingly transmitting just by clicking “OK” to a set of permissions they didn’t read for some mobile app they thought was innocuous. Pretty scary, huh?

But the FTC just doled out some punishment, right? Well, yes, but it amounts to a slap on the wrist with a wet noodle. GoldenShore Technologies is ordered to delete all existing geolocation and device-specific data the app has collected. Going forward, the app must make clear to consumers that it is collecting their data and what will happen to it. There are a few other restrictions, but most importantly, there is no financial penalty. The developer won’t even have to remit the profits he made from selling user data. Without a significant monetary penalty it’s unlikely that this type of behavior will be curbed in any way. Developers will continue to profit from exposing consumer and enterprise data, to the detriment of us all.

So the question is, what can enterprises do to mitigate the risks inherent in mobile applications? Our static and dynamic behavioral analysis can pick up on the types of things that Android and even Apple gatekeepers miss. Our dynamic testing simulates the way an end-user would deploy an app and then reports exactly what is happening: the internal mechanisms, the network connections made, and the data that is compiled and sent out across those connections. Our partnerships with MDM and MAM vendors help enterprises use the information provided by our APIs to easily enforce BYOD policies by setting up rules that use risk ratings to allow or block apps on the mobile device. That way you can protect your enterprise from applications that are dangerous to your privacy, your network, and your information – because it’s unlikely that the GoldenShore Technologies settlement will encourage widespread development of less risky mobile apps.

Time to Crowdfund Open Source Security?

Will crowdfunding bug bounties for OpenSSL solve its security problems? Probably not.

For years, security experts and thought leaders have railed against the concept of “security through obscurity” – the notion that you can keep vulnerable software secure just by preventing others from understanding how it works.

Corporate executives worried about relying on open source software like the Linux operating system, whose underlying source code was managed by volunteers and was there for the whole world to see.

The answer to such doubters was the argument that open source was more secure (or, at least, no less secure) than closed source competitors precisely because it was open. Open source packages – especially widely used packages – were managed by a virtual army of volunteers. Dozens or even scores of eyes vetted new features or updates. And anyone was free to audit the code at any time. With such an active community supporting open source projects, developers who submitted sub-par code or, god forbid, introduced a bug, vulnerability or back door would be identified, called to task and banished.

That was a comforting notion. And there are certainly plenty of examples of just this kind of thing happening. (Linux creator Linus Torvalds recently made news by openly castigating a key Linux kernel developer, Kay Sievers, for submitting buggy code and suspending him from further contributions.)

But the discovery of the Heartbleed vulnerability puts the lie to the ‘thousands of eyes’ notion. Some of the earliest reporting on Heartbleed noted that the team supporting the software consisted of just four developers – only one of them full time.

“The limited resources behind the encryption code highlight a challenge for Web developers amid increased concern about hackers and government snoops,” the Wall Street Journal noted. OpenSSL Software Foundation President Steve Marquess was later asked about security audits and replied, “we simply don’t have the funding for that. The funding we have is to support food and rent for people doing the most work on OpenSSL.”

So does Heartbleed mean a shift away from reliance on open source? Is it a final victory of security-through-obscurity? Not so fast. As I noted in my post last week, vulnerabilities aren’t limited to open source components – any third party code might contain potentially damaging code flaws and vulnerabilities that escape detection.

Akamai learned that lesson the hard way this week with a proprietary patch the company had been using to do memory allocation around SSL keys. The company initially claimed the patch provided mitigation against the Heartbleed vulnerability and contributed it back to the OpenSSL community. But a quick review found a glaring vulnerability in the patch code that, combined with the Heartbleed vulnerability, would have still left SSL encryption keys vulnerable to snooping.

“Our lesson of the last few days is that proprietary products are not stronger,” Akamai’s CSO Andy Ellis told me in an interview. “So, ‘yes,’ you can move to proprietary code, but whose? And how can you trust it?” Rather than run away from open source, Ellis believes the technology community should ‘lean in’ (my words not his) and pour resources – people and money – into projects like OpenSSL.

But how? Casey Ellis over at the firm BugCrowd has one idea on how to fund improvements to, and a proper audit of, OpenSSL: he launched a crowdfunded project to fund bug bounties for a security audit of OpenSSL.

“Not every Internet user can contribute code or security testing skills to OpenSSL,” Ellis wrote. “But with a very minor donation to the fund, everyone can play a part in making the Internet safer.”

A paid bounty program would mirror efforts by companies like Google, Adobe and Microsoft to attract the attention of the best and brightest security researchers to their platforms. No doubt: bounties will beget bug discoveries, some of them important. But a bounty program isn’t a substitute for a full security audit or, beyond that, for a program to manage OpenSSL (or similar projects) over the long term. And, after all, the Heartbleed vulnerability doesn’t just point out a security failing; it raises questions about the growth and complexity of the OpenSSL code base. Bounties won’t make it any easier to address those bigger and more important problems.

As I noted in a recent article over at ITWorld, even companies like Apple, with multi-billion dollar war chests and a heavy reliance on open source software, are reluctant to channel money to organizations like the Apache Software Foundation, Eclipse or the Linux Foundation that help to manage open source projects. This article over at Mashable makes a similar (albeit broader) argument: if companies want to pick the fruit of open source projects, they should water the tree as well.

In the end, there’s no easy solution to the problem. Funding critical open source code is going to require both individuals and corporations to step up and donate money, time and attention – whether through licensing and support agreements, or as part of a concerted effort to provide key projects with the organizational and technical support they need to maintain and expand critical technology platforms like OpenSSL.

Agile SDLC Q&A with Chris Eng and Ryan O’Boyle – Part II

Welcome to another round of Agile SDLC Q&A. Last week Ryan and I took some time to answer questions from our webinar, “Building Security Into the Agile SDLC: View from the Trenches”; in case you missed it, you can see Part I here. Now on to more of your questions!

Q. What would you recommend as a security process around continuous build?

Chris: It really depends on what the frequency is. If you’re deploying once a day and you have automated security tools as a gating function, it’s possible but probably only if you’ve baked those tools into the build process and minimized human interaction. If you’re deploying more often than that, you’re probably going to start thinking differently about security – taking it out of the critical path but somehow ensuring nothing gets overlooked. We’ve spoken with companies who deploy multiple times a day, and the common theme is that they build very robust monitoring and incident response capabilities, and they look for anomalies. The minute something looks suspect they can react and investigate quickly. And the nice thing is, if they need to hotfix, they can do it insanely fast. This is uncharted territory for us; we’ll let you know when we get there.

Q. What if you only have one security resource to deal with app security – how would you leverage just one resource with this “grooming” process?

Chris: You’d probably want to have that person work with one Scrum team (or a small handful) at a time. As they security groomed with each team, they would want to document as rigorously as possible the criteria that led to them attaching security tasks to a particular story. This will vary from one team to the next because every product has a different threat model. Once the security grooming criteria are documented, you should be able to hand off that part of the process to a team member, ideally a Security Champion type person who would own and take accountability for representing security needs. From time to time, the security SME might want to audit the sprint and make sure that nothing is slipping through the cracks, and if so, revise the guidelines accordingly.

Q. Your “security champion” makes me think to the “security satellite” from BSIMM; do you have an opinion on BSIMM applicability in the context of Agile?

Chris: Yes, the Security Satellite concept maps very well to the Security Champion role. BSIMM is a good framework for considering the different security activities important to an organization, but it’s not particularly prescriptive in the context of Agile.

Q. We are an agile shop with weekly release cycles. The time between when the build is complete, and the release is about 24 hours. We are implementing web application vulnerability scans for each release. How can we fix high risk vulnerabilities before each release? Is it better to delay the release or fix it in the next release?

Chris: One way to approach this is to put a policy in place to determine whether or not the release can ship. For example, “all high and very high severity flaws must be fixed” makes the acceptance criteria very clear. If you think about security acceptance in the same way as feature acceptance, it makes a lot of sense. You wouldn’t push out the release with a new feature only half-working, right? Another approach is to handle each vulnerability on a case-by-case basis. The challenge is, if there is not a strong security culture, the team may face pressure to push the release regardless of the severity.
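If you treat that severity policy as machine-checkable acceptance criteria, the gate can live in the build itself. Below is a minimal, hypothetical sketch in Python of such a release gate; the JSON report format and field names are invented for illustration and would need to be mapped to whatever your scanning tool actually produces.

```python
# Hypothetical release gate: fail the build when the latest scan report still
# contains open flaws at or above a chosen severity threshold. The report format
# and field names are invented for illustration, not any particular tool's output.
import json
import sys

SEVERITY_RANK = {"informational": 0, "low": 1, "medium": 2, "high": 3, "very high": 4}
BLOCKING_THRESHOLD = SEVERITY_RANK["high"]  # policy: all high and very high flaws must be fixed

def release_is_blocked(report_path):
    with open(report_path) as f:
        findings = json.load(f)  # expected: a list of {"severity": ..., "status": ...}
    blocking = [
        finding for finding in findings
        if finding.get("status") != "fixed"
        and SEVERITY_RANK.get(finding.get("severity", "").lower(), 0) >= BLOCKING_THRESHOLD
    ]
    for flaw in blocking:
        print("Blocking flaw:", flaw)
    return bool(blocking)

if __name__ == "__main__":
    # A non-zero exit code fails the build or release job.
    sys.exit(1 if release_is_blocked("scan_report.json") else 0)
```

The point of the sketch is simply that “all high and very high severity flaws must be fixed” becomes a yes/no answer the pipeline can enforce, rather than a judgment call made under release pressure.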

Q. How do you address findings identified from regular automated scans? Are they added to the next day’s coding activities? Do you ever have a security sprint?

Ryan: Our goal is to address any findings identified within the sprint. This means that while it may not be next-day, it will be very soon afterwards and prior to release. We have considered dedicated security sprints.

Q. Who will do security grooming? The development team or the security team? What checklist is included in the grooming?

Ryan: Security grooming is a joint effort between the teams. In some cases the security representative, Security Architect in our terminology, attends the full team grooming meeting. In the cases where the full team grooming meeting would be too large of a time commitment for the Security Architect, they will hold a separate, shorter security grooming session soon afterwards instead.

Q. How important to your success was working with your release engineering teams?

Chris: Initially not very important, because we didn’t have dedicated release engineering. The development and QA teams were in charge of deploying the release. Even with a release engineering team, though, most of the security work is done well before the final release is cut, so the nature of their work doesn’t change much. Certainly it was helpful to understand the release process – when is feature freeze, code freeze, push night, etc. – and the various procedures surrounding a release, so that you as a security team can understand their perspective.

Q. How do you handle accumulated security debt?

Chris: The first challenge is to measure all of it, particularly debt that accumulated prior to having a real SDLC! Even security debt that you’re aware of may never get taken into a sprint because some feature will always be deemed more important. So far the way we’ve been able to chip away at security debt is to advocate directly with product management and the technical leads. This isn’t exactly ideal, but it beats not addressing it at all. If your organization ever pushes to reduce tech debt, it’s a good opportunity to point out that security debt should be considered as part of tech debt.

This now concludes our Q&A. A big thank you to everyone who attended the webinar for making it such a huge success. If you have any more questions, we would love to hear from you in the comments section below. In addition, if you are interested in learning more about Agile security, check out the upcoming webinar from Veracode’s director of platform engineering, Peter Chestna. On April 17th, Peter will be hosting a webinar entitled “Secure Agile Through An Automated Toolchain: How Veracode R&D Does It”. In this webinar Peter will share how we’ve leveraged Veracode’s cloud-based platform to integrate application security testing with our Agile development toolchain (Eclipse, Jenkins, JIRA) — and why it’s become essential to our success. Register now!

Customer Announcement: Securing Your Applications From Heartbleed

April 12, 2014
Filed under: Customer Success, Vulnerabilities 

If you are a current Veracode customer, we’re delighted to announce that we can help you rapidly address the Heartbleed bug. We are offering our comprehensive capabilities for application vulnerability detection to all our customers, at no charge, to help you respond to this threat.

What is Veracode doing to help our customers?

We have two capabilities in particular to help you determine your risks from Heartbleed. These services will identify potentially vulnerable components in both your application code and public facing websites.

  • Heartbleed Component Analysis: Our software composition analysis engine looks for evidence of the use of OpenSSL in your code (static analysis) and produces a report detailing at-risk applications.
  • Heartbleed Web Perimeter Analysis: Our massively parallel dynamic analysis Discovery technology detects the use of OpenSSL and produces a report of vulnerable websites.

Learn more about what we’re doing to help our customers here.

Or reach out to us directly to get started with securing your application infrastructure.

Heartbleed And The Curse Of Third-Party Code

The recently disclosed vulnerability in OpenSSL pokes a number of enterprise pain points. Chief among them: the proliferation of vulnerable, third-party code.

By now, a lot has been written about Heartbleed (heartbleed.com), the gaping hole in OpenSSL that laid bare the security of hundreds of thousands of web sites and web-based applications globally.

Heartbleed is best understood as a really nasty coding error in a ‘heartbeat’ feature that was added to OpenSSL in March 2012. The heartbeat was designed to prevent OpenSSL connections from timing out – a common problem with always-on Web applications that was hurting their performance.

If you haven’t read about Heartbleed, there are some great write-ups available. I’ve covered the problem here. And, if you’re so inclined, there’s a blow-by-blow analysis of the code underlying the Heartbleed flaw here and here. Wondering if your web site or web-based application is vulnerable to Heartbleed? Try this site: http://filippo.io/Heartbleed.

This one hurts – there’s no question about it. As the firm IOActive notes, it exposes private encryption keys, allowing encrypted SSL sessions to be revealed. But it also appears to leave data such as user sessions subject to hijacking, and exposes encrypted search queries and passwords used to access major online services – at least until those services are patched. And, because the vulnerable version of OpenSSL has circulated for over two years, it’s apparent that many of these services and the data that traverses them have been vulnerable to snooping.

But Heartbleed hurts for other reasons. Notably: it’s a plain reminder of the extent to which modern IT infrastructure has become dependent on the integrity of third-party code that too often proves to be unreliable. In fact, Heartbleed and OpenSSL may end up being the poster child for third-party code audits.

First, the programming error in question was a head-slapper, Johannes Ullrich of the SANS Internet Storm Center told me. Specifically, the TLS heartbeat extension that was added is missing a bounds check when handling requests. The flaw means that TLS heartbeat requests can be used to leak up to 64K of memory from the machine running OpenSSL to a connected client or server.
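The actual flaw lives in OpenSSL’s C code, but the logic error is simple enough to model. The following is a simplified Python simulation (not the real implementation) of why trusting the attacker-supplied length field leaks adjacent memory, and what the missing bounds check should have done; the 64K ceiling comes from the 16-bit length field in the heartbeat message.

```python
# Simplified simulation of the Heartbleed logic error (not the real OpenSSL C code).
# The buggy handler trusts the length field claimed by the peer instead of checking
# it against the payload actually received, so it echoes back adjacent "memory".
process_memory = bytearray(
    b"HEARTBEAT-PAYLOAD" + b"...private key material...session cookies...passwords..."
)

def buggy_heartbeat(claimed_length, payload_offset=0):
    # Missing bounds check: claimed_length is never compared to the real payload size.
    return bytes(process_memory[payload_offset:payload_offset + claimed_length])

def fixed_heartbeat(claimed_length, actual_payload):
    # The fix: silently discard the request if the claimed length exceeds what was sent.
    if claimed_length > len(actual_payload):
        return b""
    return actual_payload[:claimed_length]

# The attacker sends a 17-byte payload but claims it is 64 bytes long;
# the buggy handler happily leaks whatever sits next to it in memory.
print(buggy_heartbeat(claimed_length=64))
print(fixed_heartbeat(claimed_length=64, actual_payload=b"HEARTBEAT-PAYLOAD"))
```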

Second: OpenSSL’s use is so pervasive that even OpenSSL.org, which maintains the software, can’t say for sure where it’s being used. But Ullrich says the list is a long one, and includes ubiquitous tools like OpenVPN, countless mailservers that use SSL, client software including web browsers on PCs and even Android mobile devices.

We’ve talked about the difficulty of securing third-party code before – and often. In our Talking Code video series, Veracode CTO Chris Wysopal said that organizations need to work with their software suppliers – whether they are commercial or open source groups. “The best thing to do is to tell them what issues you found. Ask them questions about their process.”

Josh Corman, now of the firm Sonatype, has called the use and reuse of open source code like OpenSSL a ‘force multiplier’ for vulnerabilities – meaning the impact of any exploitable vulnerability in the platform grows with the popularity of that software.

For firms that want to know not “am I exposed?” (you are) but “how am I exposed?” to problems like Heartbleed, there aren’t easy answers.

Veracode has introduced a couple of products and services to address the kinds of problems raised by Heartbleed. Today, customers can take advantage of a couple of services that make response and recovery easier.

Services like Software Composition Analysis can find vulnerable third-party components in an application portfolio. Knowing what components you have in advance makes the job of patching and recovering that much easier.

Also, the Web Application Perimeter Monitoring service will identify public-facing application servers operating in your environment. It’s strange to say, but many organizations don’t have a clear idea of how many public-facing applications they even have, or who is responsible for their management.

Beyond that, some important groups are starting to take notice. The latest OWASP Top 10 added the use of “known vulnerable components” to the list of security issues that most hamper web applications. And, in November, the FS-ISAC added audits of third-party code to their list of recommendations for vendor governance programs.

Fixing Heartbleed will, as its name suggests, be messy and take years. But it will be worthwhile if Heartbleed’s heartburn serves as a wake-up call to organizations to pay more attention to the third-party components at use within their IT environments.

Agile SDLC Q&A with Chris Eng and Ryan O’Boyle – Part I

April 10, 2014
Filed under: research 

Recently, Ryan O’Boyle and I hosted the webinar “Building Security Into the Agile SDLC: View From the Trenches”. We would like to take a minute to thank all those who attended the live broadcast for submitting questions. There were so many questions from our open discussion following the webinar that we wanted to take the time to follow up and answer them. So without further ado, the Q&A.

Q. Did using JIRA give you greater visibility?

Ryan: Standardizing on one tool for tracking development work across all development teams, and using that same tool to track the security reviews, gave both us and the development teams improved visibility.

Q. Was the Kanban team a dedicated security team or was it just a team performing in a different way?

Ryan: Just a team performing in a different way. This meant that while we had developed our core process around Scrum teams, we had to find a similar way to integrate with a new team operating with a different process.

Q. Do you recommend we have security training and expect security requirements coming from those writing stories/reqs or would that all be on the SCRUM team?

Ryan: In our process, the Security Architect is responsible for working with the Product Owner to define security-related Acceptance Criteria or entire stories. As those participating in security grooming gain familiarity and certain patterns emerge, they can write them as well. I would recommend security training for everyone involved.

Q. Can a Technical Lead/Scrum Master play Security Engineer Role if they have security background?

Chris: Yes, though I think you want to be careful of putting too many responsibilities on the Scrum Master. A Tech Lead can certainly be trained up to pitch in on some subset of the Security Engineer role, such as routine code reviews. This is similar to what we are rolling out with our Security Champions program, except that the Security Champion can be any member of the team. It will take longer for them to develop the expertise and intuition needed to perform tasks like security design reviews or focused penetration testing.

Q. How did you ensure test strategy, test plan, and security considerations are still correct when the stories are constantly being added or modified during the sprints?

Chris: Modifying stories during sprints is a violation of Scrum principles, so if/when this does happen, we try to make sure it is addressed during Retro. Adding stories during sprints can still be challenging in the cases where the story was created on-the-fly. If it was pulled out of backlog, it would already have security criteria attached. However if it was a “just-in-time” story (e.g. acute customer pain point), we ask the Scrum Masters to inform us ASAP so that we can assess the security needs. In the near future it will be the Security Champion’s job to keep an eye out for things like this.

Q. What threat modeling tools do you use? Do you use any risk analysis/assessments to shape how you develop security requirements and their priorities?

Chris: We do not use formal threat modeling tools. At the story level, we are doing light, informal threat modeling focused heavily on protecting against unauthorized access to customer data. We plan to take some steps to formalize this, but we also want to be cautious of creating a bloated process.

Q. Outside of reviewing every user story, how do you ensure you don’t miss things?

Chris: We run automated static and dynamic analyses against each release candidate after code freeze. Every once in a while this picks up an implementation issue that might have been missed during code review, so it serves as a nice additional layer of defense. Additionally, we hire external consulting firms to perform a web app penetration test twice a year. All that being said, we’ll absolutely miss things. Nothing is perfect. When we do become aware of any security issues that have escaped to production, we take a risk-based approach to determining the urgency of the fix. What’s nice is that our deployment process allows us to test and push fixes relatively quickly if an off-cycle patch is needed.

Q. Did you guys make security requirements as part of Definition of Done of user stories?

Ryan: Yes, we consider security a part of our Definition of Done and to that point add and review against specific Acceptance Criteria on stories with security impact.

Q. So for any security testing, are the results ever sent directly back to the contributing developer? Or are the security test results always reviewed first by SMEs to triage/prioritize?

Ryan: Development teams run their own static analysis scans and do the initial review of the results. A security SME will review the results of a later scan that incorporates many developers’ changes. Code review or pen test findings that result from an in-sprint security review will be communicated back to the developer immediately so they can be addressed.

Q. Do you see any process changes for security testing?

Ryan: Automation, automation, automation.

That is all we have time for at the moment, but check back next week for the second half of our Agile SDLC Q&A. In the meantime, if you found the Agile Security webinar useful, consider registering for Veracode’s director of platform engineering, Peter Chestna’s webinar: “Secure Agile Through An Automated Toolchain: How Veracode R&D Does It”. In this technical webinar, Peter will share how we’ve leveraged Veracode’s cloud-based platform to integrate application security testing with our Agile development toolchain (Eclipse, Jenkins, JIRA) — and why it’s become essential to our success.

Beware the Takeout Menu

April 9, 2014
Filed under: Third-Party Software 

When addressing enterprise security, the weakest links – the points of least resistance – should be hardened to prevent breaches.

An illuminating article came out in the New York Times yesterday about the cyber-security risk posed to large enterprises by third parties.

The article describes a classic, drive-by application-layer attack in which cyber-attackers breached a big oil company by injecting malware into the online menu of a Chinese restaurant that was popular with employees. When the workers browsed the menu, they inadvertently downloaded code that gave the attackers a foothold in the oil company’s network — and presumably, access to all kinds of valuable IP such as the quantity and location of all of the company’s oil discoveries worldwide.

The point of the article is that cyber-attackers are now targeting third-party applications and suppliers — such as the Chinese takeout software used in the watering hole attack and the HVAC company whose credentials were stolen for the Target breach — as the path of least resistance to sensitive enterprise data. One of the sources quoted in the article suggests that third-party suppliers are involved in as many as 70% of breaches.

(Someone posted an amusing comment that “The movie 2001 had it wrong. It won’t be HAL that won’t open the pod bay door but a pimply faced kid in New Jersey hacking into HAL” — but the reality is that it’s more likely to be an organized crime gang in Eastern Europe or foreign military units performing state-sponsored espionage.)

As security teams get better at hardening their networks with next-generation technologies such as Palo Alto Networks and FireEye, attackers are simply getting smarter by looking for weak links at the application layer and in the software supply chain. As the article points out, this is a clever strategy because supply chain vendors are already behind the firewall and “often don’t have the same security standards as their clients.”

The analytics collected by our cloud-based application security platform reinforce that point: 90% of third-party applications uploaded to the platform include at least one OWASP Top 10 vulnerability, such as SQL Injection or Cross-Site Scripting (see Enterprise Testing of the Software Supply Chain).

What are the best practices for addressing third-party risk? Start by understanding all aspects of your third-party supply chain: the software you outsource, purchase or use via SaaS; the software you incorporate as components and frameworks in your in-house applications; and the service providers and contractors who have privileged access to your systems. If you aren’t continuously assessing these, you are accepting a much higher level of risk.

Another interesting factoid from the Times article: unlike banks, which spend up to 12% of their IT budgets on security, retailers spend, on average, less than 5% of their budgets on security. To see what leaders in financial services — such as Morgan Stanley, Goldman Sachs, GE Capital and Thomson Reuters — are recommending as three critical controls for managing third-party software risk, see the FS-ISAC whitepaper “Appropriate Software Security Control Types for Third Party Service and Product Providers”.

One of the controls recommended by FS-ISAC is the use of automated binary static analysis to ensure your third-party software is compliant with corporate security policies, based on minimum acceptable levels of risk (e.g., OWASP Top 10, CWE severity levels, etc.). This matches our experience working with hundreds of third-party vendors — enterprises can successfully reduce third-party software risk by creating ongoing, enterprise-wide governance programs with standardized policies and by working directly with their vendors to ensure they’re compliant.

As Target taught us, the security posture of your third-party vendors is also your responsibility. And if they turn out to be the path of least resistance for cyber-attackers, it’s your company and your customers that ultimately suffer.

Automating Good Practice Into The Development Process

April 7, 2014
Filed under: ALL THINGS SECURITY, SDLC 

I’ve always liked code reviews. Can I make others like them too?

I’ve understood the benefit of code reviews, and enjoyed them, for almost as long as I’ve been developing software. It’s not just the excuse to attack others (although that can be fun), but the learning—looking at solutions other people come up with, hearing suggestions on my code. It’s easy to fall into patterns in coding, and not realize the old patterns aren’t the best approach for the current programming language or aren’t the most efficient approach for the current project.

Dwelling on good code review comments can be a great learning experience. Overlooked patterns can be appreciated, structure and error handling can be improved, and teams can develop more consistency in coding style. Even poor review feedback like “I don’t get this” can identify flaws or highlight where code needs to be reworked for clarity and easier maintenance.

But code reviews are rare. Many developers don’t like them. Some management teams don’t see the value, while other managers claim code reviews are good but don’t make room in the schedule (or push them out of the schedule when a project starts to slip.) I remember one meeting where the development manager said “remember what will be happening after code freeze.” He expected us to say “Code Reviews!”, but a couple members of the team responded “I’m going to Disney World!” Everyone laughed, but the Disney trips were enjoyed while the code reviews never happened.

In many groups and projects, code reviews never happened, except when I dragged people to my cubicle and forced them to look at small pieces of code. I developed a personal strategy which helped somewhat: When I’m about ready to commit a change set I try to review the change as though it would be sent to a code review. “What would someone complain about if they saw this change?” It takes discipline and doesn’t have most of the benefits of a real code review but it has helped improve my code.

The development of interactive code review tools helped the situation. Discussions on changes could be asynchronous instead of trying to find a common time to schedule a meeting, and reviewers could see and riff on each other’s comments. It was still hard to encourage good comments and find the time for code reviews (even if “mandated”), but the situation was better.

The next advancement was integrating code review tools into the source control workflow. This required (or at least strongly encouraged depending on configuration) approved code reviews before allowing merges. The integration meant less effort was needed to set up the code reviews. There’s also another hammer to encourage people to review the code: “Please review my code so I can commit my change.”

The barriers to code reviews also exist for security reviews, but the problem can be worse as many developers aren’t trained to find security problems. Security issues are obviously in-scope for code reviews, but the issue of security typically isn’t front of mind for reviewers. Even at Veracode the focus is on making the code work and adjusting the user interface to be understandable for customers.

But we do have access to Veracode’s security platform. We added “run our software on itself” to our release process. We would start a scan, wait for the results, review the flaws found, and tell developers to fix the issues. As with code reviews, security reviews can be easy to put off because it takes time to go through the process steps.

As with code reviews, we have taken steps to integrate security review into the standard workflow. The first step was to automatically run a scan during automated builds. A source update to a release branch causes a build to be run, sending out an email if the build fails. If the build works, the script uses the Veracode APIs to start a static scan of the build. This eliminated the first few manual steps in the security scan process. (With the Veracode 2014.2 release the Veracode upload APIs have an “auto start” feature to start a scan without intervention after a successful pre-scan, making automatic submission of scans easier.)
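For anyone wanting to try something similar, the post-build hook can be a very small script. The sketch below uses Python’s requests library against an upload-style API; the endpoint names, parameters (including the auto-start flag) and the use of basic authentication are assumptions based on our reading of the upload APIs at the time, so check the current API documentation rather than treating this as the canonical integration.

```python
# Rough sketch of a post-build hook: upload the freshly built artifact and kick off
# a static scan via an upload-style API. Endpoint names, parameters, and the use of
# HTTP basic auth are assumptions for illustration; consult the vendor's current
# API documentation before relying on any of this.
import os
import sys
import requests

API_BASE = "https://analysiscenter.veracode.com/api/5.0"  # assumed base URL
AUTH = (os.environ["VERACODE_USER"], os.environ["VERACODE_PASSWORD"])

def upload_and_scan(app_id, artifact_path):
    # Upload the build artifact to the application profile.
    with open(artifact_path, "rb") as artifact:
        resp = requests.post(
            f"{API_BASE}/uploadfile.do",
            auth=AUTH,
            data={"app_id": app_id},
            files={"file": artifact},
        )
    resp.raise_for_status()

    # Start the prescan; an auto-start style flag (if supported) lets the full
    # static scan begin automatically once the prescan succeeds.
    resp = requests.post(
        f"{API_BASE}/beginprescan.do",
        auth=AUTH,
        data={"app_id": app_id, "auto_scan": "true"},
    )
    resp.raise_for_status()
    print("Scan submitted for app", app_id)

if __name__ == "__main__":
    upload_and_scan(app_id=sys.argv[1], artifact_path=sys.argv[2])
```

Hung off the end of a successful build job, a script like this removes the “remember to start the scan” step entirely, which is the whole point of the automation.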

To further reduce the overhead of the security scans, we improved the Veracode JIRA Import Plugin to match our development process. After a scan completes, the Import plugin notices the new results, and imports the significant flaws into JIRA bug reports in the correct JIRA project. Flaws still need to be assigned to developers to fix, but it now happens in the standard triage process used for any reported problem. If a flaw has a mitigation approved, or if a code change eliminates the flaw, the plugin notices the change and marks the JIRA issue as resolved.
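The plugin does this for us, but the underlying idea is easy to picture: after a scan completes, each significant flaw becomes a tracked issue in the right project. Here is a hedged illustration using JIRA’s REST issue-creation endpoint; the flaw dictionary shape and project key are invented, and this is not how the actual Veracode JIRA Import Plugin is implemented.

```python
# Illustrative sketch (not the actual Veracode JIRA Import Plugin): take a significant
# flaw from a completed scan and file it as a JIRA issue in the appropriate project
# using JIRA's REST API. The flaw dictionary shape and project key are invented.
import os
import requests

JIRA_BASE = "https://jira.example.com"  # hypothetical JIRA instance
AUTH = (os.environ["JIRA_USER"], os.environ["JIRA_TOKEN"])

def file_flaw_as_issue(project_key, flaw):
    payload = {
        "fields": {
            "project": {"key": project_key},
            "issuetype": {"name": "Bug"},
            "summary": "[Security] {0} in {1}".format(flaw["category"], flaw["module"]),
            "description": "Severity: {0}\nCWE: {1}\n{2}".format(
                flaw["severity"], flaw["cwe"], flaw["detail"]
            ),
        }
    }
    resp = requests.post(f"{JIRA_BASE}/rest/api/2/issue", json=payload, auth=AUTH)
    resp.raise_for_status()
    return resp.json()["key"]  # e.g. "APPSEC-123"

# Example: file one (made-up) flaw from a scan result.
issue_key = file_flaw_as_issue("APPSEC", {
    "category": "Cross-Site Scripting",
    "module": "web-ui.war",
    "severity": "High",
    "cwe": "CWE-80",
    "detail": "Reflected XSS in search parameter.",
})
print("Created", issue_key)
```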

The automated security scans aren’t our entire process. We also have security reviews for proposals and designs so developers understand the key security issues before they start to code, and the security experts are always available for consultation in addition to being involved in every stage of development. The main benefit of the automated scans is that they take care of the boring review to catch minor omissions and oversights in coding, leaving more time for the security experts to work on the higher level security strategy instead of closing yet another potential XSS issue.

Veracode’s software engineers understand the challenge of building security into the Agile SDLC. We live and breathe that challenge. We use our own application security technology to scale our security processes so our developers can go further faster. On April 17th, our director of platform engineering, Peter Chestna, will share in a free webinar how we’ve leveraged our cloud-based platform to integrate application security testing with our Agile development toolchain — and why it’s become essential to our success. Register for Peter’s webinar, “Secure Agile Through An Automated Toolchain: How Veracode R&D Does It”, to learn from our experience.

CERF: Classified NSA Work Mucked Up Security For Early TCP/IP

April 3, 2014
Filed under: ALL THINGS SECURITY 

Internet pioneer Vint Cerf says that he had access to cutting-edge cryptographic technology in the mid-1970s that could have made TCP/IP more secure – too bad the NSA wouldn’t let him!

Did the National Security Agency, way back in the 1970s, allow its own priorities to stand in the way of technology that might have given rise to a more secure Internet? You wouldn’t be crazy to reach that conclusion after hearing an interview with Google Vice President and Internet Evangelist Vint Cerf on Wednesday.

As a graduate student at Stanford in the 1970s, Cerf had a hand in the creation of ARPANet, the world’s first packet-switched network. He later went on to work as a program manager at DARPA, where he funded research into packet network interconnection protocols that led to the creation of the TCP/IP protocol that is the foundation of the modern Internet.

Cerf is a living legend who has received just about every honor a technologist can, including the National Medal of Technology, the Turing Award and the Presidential Medal of Freedom. But he made clear in the Google Hangout with host Leo Laporte that the work he has been decorated for – TCP/IP, the Internet’s lingua franca – was at best intended as a proof of concept, and that only now – with the adoption of IPv6 – is it mature (and secure) enough for what Cerf called “production use.”

Specifically, Cerf said that given the chance to do it over again he would have designed earlier versions of TCP/IP to look and work like IPv6, the latest version of the IP protocol with its integrated network-layer security and massive 128-bit address space. IPv6 is only now beginning to replace the exhausted IPv4 protocol globally.

“If I had in my hands the kinds of cryptographic technology we have today, I would absolutely have used it,” Cerf said. (Check it out here)

Researchers at the time were working on the development of just such a lightweight but powerful cryptosystem. Cerf noted that, on Stanford’s campus, Whit Diffie and Martin Hellman had researched and published a paper that described a public key cryptography system. But they didn’t have the algorithms to make it practical. (That task would fall to Ron Rivest, Adi Shamir and Leonard Adleman, who published the RSA algorithm in 1977.)

Curiously enough, however, Cerf revealed that he did have access to some really bleeding edge cryptographic technology back then that might have been used to implement strong, protocol-level security into the earliest specifications of TCP/IP. Why weren’t they used, then? The culprit is one that’s well known now: the National Security Agency.

Cerf told host Leo Laporte that the crypto tools were part of a classified project he was working on at Stanford in the mid 1970s to build a secure, classified Internet for the National Security Agency.

“During the mid 1970s while I was still at Stanford and working on this, I also worked with the NSA on a secure version of the Internet, but one that used classified cryptographic technology. At the time I couldn’t share that with my friends,” Cerf said. “So I was leading this kind of schizoid existence for a while.”

Hindsight is 20/20, as the saying goes. Neither Cerf, nor the NSA, nor anyone else could have predicted how much of our economy and that of the globe would come to depend on what was then a government-backed experiment in computer networking. Besides, we don’t know exactly which cryptographic tools Cerf had access to as part of his secure Internet research, or how suitable (and scalable) they would have been.

And who knows, maybe too much security early on would have stifled the growth of the Internet in its infancy – keeping it focused on the defense and research community, but acting as an inhibitor to wider commercial adoption?

But the specter of the NSA acting in its own interest, without any obvious interest in fostering the larger technology sector, is one that has been well documented in recent months, as revelations by former NSA contractor Edward Snowden showed how the agency worked to undermine cryptographic standards promoted by NIST and the firm RSA.

It’s hard to listen to Cerf lament the absence of strong authentication and encryption in the foundational protocol of the Internet, or to think about the myriad of online ills of the past two decades that might have been preempted by a stronger and more secure protocol, and not wonder what might have been.
