Video Survey: What Would You Do with a Monster in Your Corner?

July 11, 2014

In our final video survey installment of the Future of AppSec Series, we talk about the idea of having a “Monster in Your Corner.” Application security often feels like a massive, intractable problem, the sort of problem you need a really big friend, or in our thinking, a monster, to help you solve.

When we talk about having a monster in your corner, what do we mean? Well, we’re talking about the Veracode platform, our automated scanning techniques and the time-saving, massively scalable approach they give you to mountains of code and thousands of applications. But we’re also talking about the brilliant security engineers and minds behind the technology, the same minds that are continuously driving our service to improve and make the world of software safer on a daily basis. And last but certainly not least, there’s our amazing Customer Services team of trained application security experts, on hand to help every customer get the most out of our cloud-based service.

Everyone at Veracode makes up the monster and we’d love to be in your corner. Help us understand what appsec problems you need help tackling, because this is what we do day in and day out.

If You Had an Application Security Monster in Your Corner What Problem Would it Attack?

Watch Our Other Video Surveys

Truth, Fiction and a 20 Year Old Vulnerability

July 10, 2014
Filed under: application security 

The impact of a 20-year-old flaw in the LZ4 compression code is still a matter of conjecture. The moral of the story isn’t.

I think we can all agree it’s not quite THIS critical.

What were you doing in 1996? You remember ’96, right? Jerry Maguire, Independence Day and Fargo were in the theaters. Everybody was dancing the “Macarena.”

In the technology world, 1996 was also a big year. Among other, less notable developments: two obscure graduate students, Larry Page and Sergey Brin, introduced a novel search engine called “Backrub.” Elsewhere, a software engineer named Markus F. X. J. Oberhumer published a novel compression algorithm dubbed LZO. Written in ANSI C, LZO offered what its author described as “pretty fast compression and *extremely* fast decompression.” LZO was particularly adept at compressing and decompressing raw image data such as photos and video.

Soon enough, folks found their way to LZO and used it. Today, LZ4 – based upon LZO – is a core component of the Linux kernel and is implemented on Samsung’s version of the Android mobile device operating system. It is also a part of the ZFS file system which, in turn, is bundled with open source platforms like FreeBSD, MySQL and Hadoop. But the true reach of LZ4 is a matter for conjecture.

That’s a problem, because way back in 1996, Mr. Oberhumer managed to miss a straightforward but serious integer overflow vulnerability in the LZO source code, a flaw that LZ4 inherited. As described by Kelly Jackson Higgins over at Dark Reading, the flaw could allow a remote attacker to carry out denial of service attacks against vulnerable devices or trigger remote code execution on those devices – running their own (malicious) code on the device. The integer overflow bug was discovered by security researcher Don A. Bailey during a code audit of LZ4.

Twenty years later, that simple mistake is the source of a lot of heartbleed…err…heartburn as open source platforms, embedded device makers and other downstream consumers of LZ4 find themselves exposed.

Patches for the integer overflow bug were issued in recent days for both the Linux kernel and affected open-source media libraries. But there is concern that not everyone who uses LZ4 may be aware of their exposure to the flaw. And Mr. Bailey has speculated that some critical systems – including embedded devices used in automobiles or even aircraft – might be vulnerable. We really don’t know.

As is often the case in the security industry, however, there is some disagreement about the seriousness of the vulnerability and some chest thumping over Mr. Bailey’s decision to go public with his findings.

Writing on his blog, the security researcher Yann Collet (Cyan4973) has raised serious questions about the real impact of the LZ4 vulnerability. While generally supporting the decision to patch the hole (and recommending patching for those exposed to it), Mr. Collet suggests that the LZ4 vulnerability is quite limited.

Specifically, Collet notes that to trigger the vulnerability, an attacker would need to craft a special compressed block to overflow the 32-bit address space. To do that, the malicious compressed block would need to be in the neighborhood of 16 MB of data. That’s possible, theoretically, but not practical. The legacy LZ4 file format limits blocks to 8 MB, maximum. “Any value larger than that just stops the decoding process,” he writes, and 8 MB is not enough to trigger a problem. A newer streaming format is even stricter, with a hard limit at 4 MB. “As a consequence, it’s not possible to exploit that vulnerability using the documented LZ4 file/streaming format,” he says. LZ4, Mr. Collet says, is no OpenSSL.

In response to Collet and others, Bailey wrote an even more detailed analysis of the LZ4 vulnerability and found that attackers actually wouldn’t be constrained by the 8 MB or 4 MB limits. And, while all kinds of mitigating factors may exist, depending on the platform LZ4 is running on, Bailey concludes that exploits could be written against current implementations of LZ4 and that blocks of less than 4 MB could be malicious. While some modern platforms may have features that mitigate the risk, he writes, “this is the kind of critical arbitrary-write bug attackers look for when they have a corresponding memory information disclosure (read) that exposes addresses in memory.”
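For readers who want the mechanics behind the debate, the class of bug at issue can be sketched in a few lines. This is a deliberately simplified illustration of how a 32-bit integer overflow can defeat a bounds check, not LZ4’s actual source; the function names and the specific values are ours.

```python
# Schematic illustration (NOT LZ4's real code) of a 32-bit integer
# overflow defeating a naive bounds check in a decompressor.

MASK32 = 0xFFFFFFFF  # emulate 32-bit unsigned arithmetic in Python


def unsafe_copy_end(dest_offset: int, run_length: int) -> int:
    """Compute the end-of-copy position the way 32-bit C code would:
    dest + length, silently truncated to 32 bits."""
    return (dest_offset + run_length) & MASK32


def bounds_check_broken(dest_offset: int, run_length: int, buf_size: int) -> bool:
    """A naive 'end <= buffer size' check passes if the addition wrapped."""
    return unsafe_copy_end(dest_offset, run_length) <= buf_size


# A destination pointer high in the 32-bit address space plus an
# attacker-supplied run length of just over 16 MB wraps the sum around:
dest = 0xFF000000          # destination offset near the top of memory
length = 0x01000010        # ~16 MB run length from the compressed block
end = unsafe_copy_end(dest, length)

print(hex(end))            # prints 0x10 -- the sum wrapped to a tiny value
print(bounds_check_broken(dest, length, 8 * 1024 * 1024))  # prints True
```

The wrapped result points far below where the copy will actually write, so the check that was supposed to reject oversized input approves it instead, which is exactly why the block-size limits Collet cites matter so much to exploitability.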

While the LZ4 vulnerability debate has become an example of security industry “inside baseball,” there is (fortunately) a larger truth here that everyone can agree on: we’re all a lot more reliant on software than we used to be. And, as that reliance has grown, the interactions between software-powered devices in our environment have become more complex, and our grasp of what makes up the software we rely on has loosened. Veracode has written about this before – in relation to OpenSSL and other related topics.

It may be the case that the LZ4 vulnerability is a lot harder to exploit than we were led to believe. But nobody should take too much comfort in that when a casual audit of just one element of the Linux kernel uncovered a 20-year-old, remotely exploitable vulnerability. That discovery should make you wonder what else out there has escaped notice. That’s a scary question.

Applications are Growing Uncontrollably and Insecurely

Insecure Application Growth

This year I’m working with IDG to survey enterprises to understand their application portfolio, how it’s changing and what firms are doing to secure their application infrastructure.

The study found that, on average, enterprises expect to develop over 340 new applications in the next 12 months. As someone who has been working in and around the enterprise software industry for more years than I care to admit here, I find that number astounding. Enterprises really are turning into software companies.

Think about it – how many new applications did software vendors like Microsoft, Oracle, or SAP bring to market in the last 12 months? The number is probably in the hundreds, but you would expect that because they are software vendors. Every application sold is money in their pocket. The more software they make the more opportunities there are for them to increase their revenue and profits.

On average enterprises expect to develop over 340 new applications in 12 months.

So why are enterprises developing as many applications as software vendors? The answer is the same: the more software they make, the more opportunities there are for them to increase their revenue and profits. It may not be as short and direct a line between software development, revenue and profits as it is for software vendors, but the connection is there; otherwise enterprises wouldn’t be doing it.

The problem is that all those applications represent both opportunities and risks for the enterprises developing them. How much risk? It’s hard to say without assessing them for vulnerabilities. However, most of those 340-plus new applications will not be assessed for security risks. The survey found that only 37% of enterprise-developed applications are assessed for security vulnerabilities.

Or look at it another way – enterprises are blindly choosing to operate in a hostile environment for 63% of the business opportunities represented by software. If it were me, I would rather take off the blindfold and see exactly what I’m getting into. I can only hope that enterprise executives start feeling the same way.

Related Links

Med Tech’s Promiscuity Problem

July 1, 2014
Filed under: ALL THINGS SECURITY, Compliance 

A roundtable discussion of medical device security finds that innovation in the connected health space is outstripping security. And the problem will get worse before it gets better.


Physicians are used to counseling their patients on the need to take care of themselves and take reasonable precautions to protect themselves from harm. Are you fond of cycling? Remember to wear a helmet to protect yourself from traumatic brain injury! Enjoy a drink at the pub? Remember not to over-indulge, or you risk a wide range of ills: from liver disease to fist fights and automobile accidents. And, if you’re out there hooking up willy-nilly and you fail to take precautions, don’t be surprised when you contract herpes or some other common (and preventable) STD.

There’s ample scientific evidence to back up each of these recommendations. That doesn’t mean, of course, that the risky behaviors don’t continue regardless. It just means that the physicians dispensing the advice know they’re standing on pretty solid ground when they dole it out.

When it comes to the fast-moving world of medical technology and connected healthcare, however, unchecked, promiscuous behavior is (unfortunately) the norm these days. And, unlike that other kind of promiscuity, doctors and hospitals are only just beginning to recognize the problem – let alone figuring out what to do about it.

That was one conclusion of a recent panel discussion that was held on June 12 under the auspices of the National Institute of Standards and Technology’s Information Security and Privacy Advisory Board (ISPAB).

Dr. Dale Nordenberg of MDISS

Speaking as part of a panel discussion, “Emerging Guidance and Standards Affecting Medical Device Security” earlier this month, Dale Nordenberg, a co-founder and executive director of Medical Device Innovation, Safety & Security Consortium (and a practicing physician) said that the momentum pushing adoption of connected devices in the healthcare field is far outstripping the ability of healthcare providers to safely and securely deploy them. (Panel audio recording available here.)

“These connected care environments are evolving far more rapidly than associated best practices,” Nordenberg said. “It’s far easier to go out and buy something and implement it than to educate and train everyone (to work) in that new environment,” he said.

One problem is that connected health devices are coming on the market at a furious pace – driven by innovation at thousands of large and small device makers.

The investments are driven by hospitals’ desire to provide better and more efficient care, as well as by financial incentives embedded in legislation like the Affordable Care Act, which has turbo-charged investments in technologies like electronic health records.

But Nordenberg noted that connected health initiatives and EHR are a double-edged sword. “If you’re going to increase the promiscuity, you better increase your security and monitoring and assess your devices and do cyber security exercises,” he said. Alas, very little of that is happening anywhere in the U.S.

Why? There are many reasons. Bakul Patel, a policy advisor in the Center for Devices and Radiological Health at the FDA, noted that medical device makers have deep expertise in writing embedded software that is reliable, but little experience with the kinds of networked applications that have long been common in enterprise environments. “So when they think about adding connectivity, it’s just out-of-the-box and off-the-shelf stuff,” Patel said. That has led to a lot of unintentional security breaches in recent years.

There’s also a critical shortage of talent that’s hobbling manufacturers and their customers alike. Patel said that only the top tier of medical device manufacturers – companies like Philips and GE – sport internal teams to do cyber security assessments on new and established products. The rest – a long tail of smaller manufacturers – do not. In fact, many device makers struggle to accept the notion that hackers would be interested in compromising their connected health devices. Period.

On the customer side, expertise is lacking, as well. Ken Hoyme, a Distinguished Scientist at Adventium Labs, said that there’s lots of “denial” in the healthcare sector. “Hospitals deal with the security of their pharmacy or of their newborns,” he noted. “But they’re not dealing with the security of their connected health devices.” (Check out Ken’s presentation here.)

The focus at most hospitals and healthcare networks is, understandably, on compliance with regulations like HIPAA. “There’s just no thought that we might be involved in a targeted attack,” he said.

You can’t entirely blame the healthcare providers. The FDA itself is struggling to find its footing in this Brave New World. Mandating security is good – in theory – but not if it impedes progress, locks in obsolete solutions or interferes with patient care, Patel suggested. “You don’t want someone to have to punch in a 15 digit password when they’re trying to turn on an infusion pump. That would not be useful.”

In other areas, such as mobile apps, the scrutiny given to traditional healthcare devices that have a lifespan measured in decades just can’t scale to accommodate a market that churns out hundreds of new apps a month – most of which never gain broad adoption and die a quiet death on Apple’s iTunes App Store or Google Play. “The whole mobile app world has its own ecosystem and it’s user- and consumer-driven… These apps have a lifecycle of their own.”

What’s the cure? Patel said that the medical device security industry has lots to learn from its bigger, older sibling: the IT security industry, where tools, processes and roles have been honed over the last two decades.

As things stand, however, it is early days in the medical device sector and most of that hard work is still ahead.

You can listen to a podcast of the roundtable conversation here. (Note: the sound quality starts out weak, but improves vastly after the first minute or two – be patient.)

Video Survey: What’s in the future for application security?

June 26, 2014

Security professionals, analysts, and headlines all seem to agree that many of the most critical vulnerabilities discovered and exploited today are on the application layer. Organizations around the world are redirecting their efforts to find and fix these flaws. Thought leaders in the security field are calling on others to follow their lead in ensuring that applications from across the software supply chain — especially those from third-party vendors — are held to the same standard as internally developed apps. While this transition is far from complete, it brings up the question, “what’s next?”

In the sixth installment of our Future of Application Security video survey series, we asked industry experts where they think the future of application security is headed.

Where is the future of application security leading?

What do you think? Where do you see the future of application security?

Watch Our Other Video Surveys

First Prioritize, Then Patch: Yes, Another Blog on PCI 3.0

June 25, 2014
Filed under: Compliance 
Your scan results may have you feeling a bit overwhelmed but our actionable data and sorting can help streamline your remediation efforts!

In November’s update to PCI DSS, now at version 3.0, you may have noticed that the PCI Security Standards Council switched the order of the first two application-security-focused sub-requirements. Requirement 6.1 now focuses on establishing ongoing best practices, while 6.2 moves on to patching and remediation efforts. Some of our customers have questioned the logic of such a small change, so I thought we should cover it in a blog post!

The new guidance for 6.1 recognizes the need to prioritize and rank your vulnerabilities before executing a remediation or mitigation plan, and encourages enterprises to remain aware of new vulnerabilities that may threaten their systems and ultimately disrupt “business as usual.” In conjunction with the guidance in 6.1, 6.2 urges enterprises to build a process for evaluating risks and prioritizing patch cycles: the most critical systems within 30 days of a patch release, and less business-critical systems within two to three months. We all know you can’t fix everything right away, but this approach ensures that all systems get patched, not just the most critical ones.
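As a rough sketch of those 6.2 timelines, the scheduling logic is simple enough to write down. This is our illustration of the deadlines described above, not official PCI guidance; the function and parameter names are invented.

```python
# Sketch of PCI DSS 3.0 req. 6.2 patch timelines (illustrative only):
# critical systems within 30 days of a patch release, less
# business-critical systems within roughly three months.
from datetime import date, timedelta


def patch_deadline(release: date, critical: bool) -> date:
    """Return the latest acceptable patch date for a system."""
    return release + timedelta(days=30 if critical else 90)


released = date(2014, 6, 1)
print(patch_deadline(released, critical=True))   # 2014-07-01
print(patch_deadline(released, critical=False))  # 2014-08-30
```

The point of encoding it at all is that “everything eventually gets a due date,” which is the behavioral change the reordered requirements are after.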

Let’s face it: as new application languages, architectures, and platforms are introduced and adopted, new vulnerabilities are sure to follow. And as enterprises evolve to close the gaps those vulnerabilities create, hackers will shift tactics and find new ways to exploit them. History has a way of repeating itself, especially when it’s an arms race. Since applications will always be choice targets for hackers, understanding the threat landscape before you decide how to tackle it should be the first step toward developing an ongoing process you can be proud of, rather than checking the compliance box once a year.

From the very beginning, the Veracode platform was designed to make remediation for the sake of compliance as straightforward as possible. Our ‘Fix First Analyzer’ organizes internal and third-party scan results into actionable data for developers (yes, developers!) to consume. Flaws identified in the code are prioritized not only by severity level, but also by the effort to fix and the exploitability of the flaw, all according to the OWASP Top 10 and/or SANS Top 25 CWE definitions. Developers are presented with a visual, navigable map to help determine which vulnerabilities pose the greatest risks and should be remediated soonest, along with advice on how to go about addressing each flaw.
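To make that kind of ordering concrete, here is a hypothetical sketch. The field names, weights, and scoring formula below are our invention for illustration, not Veracode’s actual algorithm.

```python
# Hypothetical "fix first" ordering: rank flaws by severity and
# exploitability, discounted by the effort required to fix them.
# (All names and weights are illustrative, not a vendor's real model.)

flaws = [
    {"cwe": "CWE-89 SQL Injection", "severity": 5, "exploitability": 5, "effort": 2},
    {"cwe": "CWE-327 Weak Crypto",  "severity": 3, "exploitability": 2, "effort": 4},
    {"cwe": "CWE-79 XSS",           "severity": 4, "exploitability": 4, "effort": 1},
]


def fix_first_score(flaw: dict) -> float:
    # Higher severity/exploitability raises priority; higher effort lowers it.
    return flaw["severity"] * flaw["exploitability"] / flaw["effort"]


for f in sorted(flaws, key=fix_first_score, reverse=True):
    print(f"{fix_first_score(f):5.1f}  {f['cwe']}")
```

Note how the low-effort XSS outranks the nominally more severe SQL injection here: weighting by effort is what turns a raw severity list into a remediation queue a development team can actually burn down.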

While earlier versions of the standard stressed the importance of patching, this revision to PCI DSS moves toward the adoption of an application security process rather than a one-time step on the way to compliance. The only way to achieve true security is to never give up on the applications…well, until you end-of-life them. The tools you need to help comply with application security requirements found in PCI DSS 3.0 are just a cloud away.

Related Content

Focus Shift: From the Critical Five Percent to the Entire Application Infrastructure

June 24, 2014
Filed under: application security 
The IDG study found that more than sixty percent of internally developed applications are not assessed for critical security vulnerabilities such as SQL Injection.

Later this week I’ll be joining IDG market research manager Perry Laberis for a webinar to discuss a study on how application infrastructures are changing and how security teams will keep up with those changes to manage enterprise risk.

At Veracode this is a very important discussion, because we know that applications are the lifeblood of every enterprise. The last time we ran a survey like this, we found that focus had shifted from securing only mission-critical applications to a broader and better understanding of the entire application infrastructure. Discussions with our customers confirmed that they were increasingly concerned about that bigger picture.

They are concerned because attackers are using well-known vulnerabilities in low-priority applications as stepping stones to more valuable data. For example, we’ve known how to find, fix and prevent SQL injection vulnerabilities for more than 20 years. Yet it still shows up — and is exploitable — in modern web applications.

It’s still showing up in enterprise application infrastructures because most enterprise development teams are not required to find and fix security vulnerabilities. The IDG study found that more than sixty percent of internally developed applications are not assessed for critical security vulnerabilities such as SQL Injection.
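The decades-old fix alluded to above is worth spelling out. This minimal sketch uses Python’s standard sqlite3 module; the table, data, and attacker string are purely illustrative.

```python
# Minimal sketch of why SQL injection persists, and the decades-old fix.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('bob', 1)")

attacker_input = "x' OR '1'='1"

# VULNERABLE: string concatenation lets the input rewrite the query.
unsafe = "SELECT name FROM users WHERE name = '" + attacker_input + "'"
print(conn.execute(unsafe).fetchall())   # every row in the table comes back

# SAFE: a parameterized query treats the input as data, not as SQL.
safe = "SELECT name FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # prints []
```

Nothing here is exotic or new; the persistence of the flaw is a process problem, not a technical mystery, which is exactly the point the survey numbers make.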

So there is this gap between what people worried about securing two years ago and what they are worried about now.

The fundamental question our customers are asking us is: how can they go further, faster? They also ask us a lot of questions about what other people are doing:

  • What baseline should I be comparing myself to – tell me what my peer group is doing and who is doing appsec best?
  • What does their current coverage look like?
  • How fast is their application infrastructure growing?
  • How much are they spending to get that coverage, and what are they spending it on?
  • How do my peers drive up adoption of secure development practices across all of their development teams?
  • What are the critical factors for success and how do I benchmark my progress?

That’s a broad range of topics – so we decided it would be best to get systematic about getting answers to these types of questions.

The research results Perry and I will be discussing are the beginning of a whole series of efforts to deliver answers for our customers. I hope you find the insights valuable and that you will give us suggestions on how to make them even more relevant to your particular challenges.

Register for the webinar.

Are You Trustworthy? UK Outlines Third-Party Software Security Specifications with PAS754

June 20, 2014
Filed under: Compliance, Third-Party Software 

I may be one of the few people who gets excited about regulations, controls, and guidance. But I suspect there are many cyber security leaders who are excited and encouraged by the newly released PAS754:2014.

After consultation with industry and academia UK government launches PAS754 ‘software trustworthiness’ standard.

This document provides a general framework for addressing cyber security and can be used in conjunction with other standards, such as ISO 27001 and ISO/IEC 15288. As stated in the document, “the aim of this PAS, sponsored by the UK Trustworthy Software Initiative (TSI), is to provide a specification for software trustworthiness.” Trustworthy software is defined in the guidance as software that “appropriately addresses safety, reliability, availability, resilience and security issues.”

What is most encouraging about this document is that it addresses an area that many enterprises have not scrutinized: the security of the software supply chain. In the publication of this guidance, Minister of State for Universities and Science David Willetts MP stated that “This [guidance] will help UK companies select the most secure, dependable and reliable software for their needs as well as providing them with the skills to use it effectively.” Past efforts at providing general frameworks have not focused specifically on the purchase of secure software.

The guidance itself includes a procedural control, “Perform supplier management,” specifically calling on enterprises and governmental agencies to understand the supply chain of software so that “trustworthiness can be specified and verified.” Further, among the techniques for delivering PAS754 requirements, four are recommended:

  1. Supply chain identification – identify the supply chain
    • Veracode POV: While this may sound obvious, tracing the exact source of purchased software, outsourced software, and open source components can be onerous for large enterprises. We believe the best way to get started is to analyze the software your development team is currently producing to see which open source components are being leveraged, and to work with the procurement team to determine which software suppliers your organization currently engages.
  2. Supply chain requirements – establish supply chain quality, security and integrity requirements
    • Veracode POV: While working to understand your supply chain, you should also create a third-party security policy. We encourage enterprises to leverage the same security policy used for internal development. For open source components, no component with known vulnerabilities should be used within the enterprise; if this is not already addressed in the internal development security policy, it should be updated.
    • Veracode POV: Contracts and legal agreements between your enterprise or agency and software suppliers or outsourcers should be updated to include the third-party security policy as part of the acceptance criteria for the software being purchased. The time of purchase is when a buyer has the most leverage, so be sure this is incorporated.
  3. Supply chain assurance – establish supply chain quality, security and integrity assurance
  4. Supplier verification – supplier independent verification
    • Veracode POV: Enterprises and agencies cannot simply trust that a supplier has met the requirements for trustworthiness. Independent verification is required in order to ensure that these standards have been met and the software being leveraged meets the acceptance criteria. Binary static analysis is an industry standard recommended by the FS-ISAC for this independent verification as it requires no source code from the software supplier in order to get a “snapshot in time of vulnerability density” within the third-party software.

When PAS754 was released, TSI president Sir Edmund Burton said, “it is unacceptable to customers, users, shareholders and taxpayers that major programmes have been delayed and, in many cases, have failed because of serious defects in software.” I couldn’t agree more, but this must include all software: both internally developed software and purchased software.

More on Supply Chain Security

Video Survey: Limitations of On-Premises Software Versus Cloud Solutions

June 19, 2014

Cloud computing has been around for decades, and many of the most widely used platforms today are cloud solutions. Google, Amazon, Microsoft, IBM, Salesforce, Oracle, and Zoho are among the most well-known vendors offering cloud-based solutions. If you use the internet on a regular basis, chances are you’re already a cloud consumer. In the security space there’s been slower adoption of cloud solutions for a number of reasons, among them the outdated notion that the cloud is inherently less secure.

As progressive organizations seek the best possible solution to the challenge of application security, they’re looking for speed, scale, and simplicity, precisely where on-premises software falls short. Which brings us to the next question in our Future of Application Security video series. See what our professionals have to say in the video survey below.

What are the limitations of on-premises software versus a cloud solution?

Watch Our Other Video Surveys

I Like the Monster!

June 18, 2014
Filed under: application security 

Our corporate “Monster In Your Corner” theme really landed with me — when was the last time you heard the EVP of Development say something like that about a marketing campaign?

Here’s why.

The “Monster in your corner” means you have the full force of Veracode’s scalable cloud-based service in your corner — backed by our world-class security experts — to help you reduce application-layer risk.

The stakes are very high for executives like me. We either deliver innovative software on a timescale of relevance, securely — or we’re toast. Harsh, but true. Second, the “securely” part is — as we say in New England—“Wicked Hahd,” particularly if you try to go it alone. So, I feel like I need a monster in my corner.

Innovate securely or else!

Look, my customers are probably very similar to yours. They want new offerings and product enhancements fast — we’re a SaaS player so if we fail to meet their expectations, they shut us down — no renewal, no expansion, no reference — no IPO! Our team leverages Agile, DevOps, and AWS to meet customer expectations — and we leverage good security hygiene across the SDLC plus Veracode’s cloud-based service to do it rapidly and securely. Shameless plug alert — check out previous content by Pete Chestna and Chris Eng to learn how Veracode implements secure agile in our own development environment.

Application security is “Wicked Hahd” — and going it alone sets your Dev and Security teams up for failure. Security isn’t just another non-functional requirement like quality or performance. Not that quality and performance aren’t important or challenging in their own right, but neither involves planning for malicious intent in the face of focused cyber-attackers who don’t need to be right very often to cause significant harm to your enterprise. As a result, it’s not enough to ask a developer to get more knowledgeable about writing secure code and/or to train them on a simple scanning tool. Better development security hygiene alone is no longer enough, given today’s AppSec threat landscape; it’s the equivalent of bringing a pen knife to a gun fight. So I like the “Monster in Your Corner” theme because it suggests that those of us leading Dev organizations (and our CISO counterparts) need help (no, not psychological help, although there are days…) from experts on implementing enterprise-wide governance programs to reduce risk across web, mobile, legacy and third-party applications.

The AppSec threat landscape has evolved to a point where the only way to set up your Dev team for success — you know, to deliver timely innovation without sacrificing security — is by having a Monster in Your Corner. Honestly, this sounds so corny that I can’t believe I wrote it, but it’s true. Look, it’s fair and reasonable to ask my development team to develop software with secure coding practices in mind, and to incorporate corporate security policy into “doneness” criteria, etc. — all while going at a breakneck, Agile-at-scale pace. That said, it’s irresponsible not to give them access to a powerful, centralized AppSec platform with on-demand AppSec expertise to help level the ridiculously disproportionate playing field they’re dealing with.

Learn More