Four Steps to Successfully Implementing Security into a Continuous Development Shop

So you live in a continuous deployment shop and you have been told to inject security into the process. Are you afraid? Don’t be. When the world moved from waterfall to agile, did everything go smoothly? Of course not – you experienced setbacks and hiccups, just like everyone else. But eventually you worked through the setbacks and lived to tell the tale. As with any new initiative, it will take time to mature. Take baby steps.

Step one: crawl.

Baseline the security of your application using multiple testing methods. Static, dynamic and manual analysis will tell you exactly where you stand today. Understand that you may be overwhelmed by your results. You can’t fix it all at once, so don’t panic; at least you know what you have to work with. Integration with your SDLC tools is going to be your best friend: it will allow you to measure your progress over time and spot problematic trends early.
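If your tools can export scan results, even a trivial script can turn that baseline into a trend line. Below is a minimal sketch in Python, assuming each scan exports a date and a total flaw count; the record format and field names are hypothetical, not taken from any particular product.

    # Minimal sketch: track flaw counts per scan and flag upward trends early.
    # The export format (a list of {"date", "flaws"} records) is assumed.
    from datetime import date

    scans = [
        {"date": date(2014, 4, 1), "flaws": 412},
        {"date": date(2014, 5, 1), "flaws": 398},
        {"date": date(2014, 6, 1), "flaws": 431},  # a regression worth a look
    ]

    def trend(history):
        """Classify the latest movement from the last two scans."""
        if len(history) < 2:
            return "baseline only"
        prev, curr = history[-2]["flaws"], history[-1]["flaws"]
        if curr > prev:
            return "worsening"
        return "improving" if curr < prev else "flat"

    print(trend(scans))  # -> worsening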

Step two: stand.

Come up with a plan based on your baseline. What has to be fixed now? What will you decide not to fix? You didn’t get here in a day and you won’t be able to fix it in a day. Work with your security team to build your backlog. Prioritize, deprioritize, decompose, repeat. Now would be a great time to introduce a little education into the organization: look at your flaw prevalence and priorities, and train your developers accordingly. If you teach them secure coding practices, they will write more secure code the first time.

Step three: walk.

Stop digging and put the shovels down. We know we have problems to fix in the old code (security debt); let’s make sure we don’t add to the pile. Now is the time to institute a security gate: no new code can be merged until it passes your security policy. We’re not talking about the entire application, just the new stuff. Don’t let insecure code come into the system. By finding and addressing problems before check-in, you won’t slow your downstream process. This is a good time to make sure your security auditing systems integrate with your software development lifecycle systems (JIRA, Jenkins, etc.); integration will make the whole process more seamless.
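What might that gate look like in practice? Here is a minimal sketch of a CI step (the kind you might wire into a Jenkins job), assuming your scanner can emit a JSON list of the flaws found in the changed code only; the file name, fields and policy threshold are illustrative, not any particular vendor’s format.

    # Minimal sketch of a pre-merge security gate for a CI job.
    # Assumes the scanner writes scan_results.json describing flaws in the
    # changed code, e.g. [{"cwe": 89, "severity": 5}, ...] (hypothetical).
    import json
    import sys

    POLICY_MAX_SEVERITY = 3  # fail on severity 4 (high) or 5 (critical)

    def gate(results_path):
        with open(results_path) as f:
            flaws = json.load(f)
        blocking = [fl for fl in flaws if fl["severity"] > POLICY_MAX_SEVERITY]
        for fl in blocking:
            print("BLOCKED: CWE-%d at severity %d" % (fl["cwe"], fl["severity"]))
        return 1 if blocking else 0  # a nonzero exit code fails the CI stage

    if __name__ == "__main__":
        sys.exit(gate("scan_results.json"))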

Step four: run!

Now you have a backlog of prioritized work for your team to fix and you’re not allowing the problem to get worse. You’re constantly measuring your security posture and showing continuous improvement. As you pay down your security debt you will have more time for feature development and a team with great secure coding habits.

Integrating a new standard into a system that is already working can be intimidating, but following these four steps will make the task more manageable. And once security is integrated, it will become a normal part of the continuous development lifecycle, and your software will be better for it.


Introduction, or How Securing the Supply Chain is like “Going Green”

Application security is, as any practitioner will tell you, a hard technical and business problem unlike any other. The best advice for successfully securing software is usually to avoid thinking about it like any other problem — software security testers are not like quality assurance professionals, and many security failures arise when developers think conventionally about use cases rather than abuse cases.

But just because application security is a distinct problem does not mean that we should fail to learn from other fields, when applicable. And one of the opportunities for learning is in what appears at first glance to be a doubly difficult problem: securing the software supply chain. Why is software supply chain security needed? The majority of businesses are not building every application they use; they are turning to third parties like outsourcers and commercial software vendors. According to IDG, over 62% of an enterprise’s software portfolio is developed outside the enterprise.


How should these enterprises be thinking about security? Software supply chain security efforts have all the challenges of conventional app sec initiatives, combined with the contractual, legal, and organizational issues of motivating change across organizational boundaries.

But the consequences of ignoring supply chain issues in an application security program are momentous. Most applications are composed of first-party code surrounding libraries and other code sourced from third parties, both commercial libraries and open source projects. Purchased applications deployed on the internet or the internal network may access sensitive customer or corporate data and must be evaluated and secured just like first-party code, lest a thief steal data through an unlocked virtual door. And increasingly, standards like PCI are holding enterprises responsible for driving security requirements into their suppliers.

So what are we to do? Fortunately, software security is not the only large, complex initiative with implications for the supply chain. Software supply chain security initiatives can take inspiration from other supply chain transformations, including the rollout of RFID in the early 2000s by Walmart and others, and, particularly, the rise of “green” supply chain efforts.

In fact, software security bears close similarity to “green” efforts to reduce CO2 emissions and waste in the supply chain. Both “green” and security have significant societal benefits, but both have historically been avoided in favor of projects more directly connected to revenue. Both have recently seen turns where customers have started to demand a higher standard of performance from companies. And both require coordination of efforts across the supply chain to be successful.

This series of blog posts will explore some simple principles for supply chain transformation that can be derived from efforts to implement “green” practices or to drive RFID adoption. The basic building blocks stem from research into green efforts done by the Wharton School and published in 2012, supplemented with lessons from the RFID rollout. We’ll cover seven principles of supply chain transformation and show you how to apply them to your software supply chain initiative:

  1. Choose the right suppliers
  2. Put your efforts where they do the most good
  3. Collaborate to innovate
  4. Use suppliers as force multipliers
  5. The elephant in the room is compliance
  6. Drive compliance via “WIIFM”
  7. Align benefits for enterprise and supplier – or pay

I hope you enjoy the series and look forward to the discussion!

Is It Time For Customs To Inspect Software?

The Zombie Zero malware proves that sophisticated attackers are targeting the supply chain. Is it time to think about inspecting imported hardware and software?

The time for securing supply chain software is now.

If you want to import beef, eggs or chicken into the U.S., you need to get your cargo past inspectors from the U.S. Department of Agriculture. Not so hardware and software imported into the U.S. and sold to domestic corporations.

But a spate of stories about products shipping with malicious software raises the question: is it time for random audits to expose compromised supply chains?

Concerns about ‘certified, pre-pwned’ hardware and software are nothing new. In fact, they’ve permeated the boardrooms of technology and defense firms, as well as the halls of power in Washington, D.C., for years.

The U.S. Congress conducted a high-profile investigation of the Chinese networking equipment makers Huawei and ZTE in 2012, exploring links between the companies and the People’s Liberation Army, as well as (unfounded) allegations that products sold by the companies were pre-loaded with spyware.

Of course, now we know that such threats are real. And we know because documents leaked by Edward Snowden and released in March showed how the U.S. National Security Agency intercepts networking equipment exported by firms like Cisco and implants spyware and remote access tools on it, before sending it on its way. Presumably, the NSA wasn’t the first state intelligence agency to figure this out.

If backdoors pre-loaded on your Cisco switches and routers aren’t scary enough, this week the firm TrapX issued a report on a piece of malicious software it calls “Zombie Zero.” TrapX claims to have found the malware installed on scanners used in shipping and logistics to track packages and other products. The scanners were manufactured in China and sold to companies globally. The factory that manufactured the devices is located close to the Lanxiang Vocational School, an academy believed to have played a role in “Aurora,” the sophisticated attacks on Google and other western technology firms. Traffic associated with a command-and-control botnet set up by Zombie Zero was also observed connecting to servers at the same facility – which is suggestive of, but not proof of, the school’s involvement in the attack.

TrapX said its analysis found that 16 of 64 scanners sold to a shipping and logistics firm it consulted with were infected. The Zombie Zero malware was programmed to exploit access to corporate wireless networks at the target firms in order to attack their finance and ERP systems.

Scanners outfitted with another variant of Zombie Zero were shipped to eight other firms, including what is described as a “major robotics” manufacturer, TrapX claims.

If accurate, TrapX’s report makes Zombie Zero the most flagrant example yet of compromised hardware being used in a targeted attack. It’s significant because it shows how factory-loaded malware on an embedded device (in this case, one running embedded Windows XP) could be used to gain a foothold on the networks of a wide range of companies in a specific vertical.

Prior “malicious supply chain” stories haven’t had that kind of specificity. Dell warned about compromised PowerEdge motherboards back in 2010, but there was no indication that the compromised motherboards were directed to particular kinds of Dell customers. Recent news about Android smartphones pre-loaded with spyware and teapots with wireless “spy chips” seemed more indicative of an undifferentiated cyber criminal operation satisfied to cast a wide net.

Not so Zombie Zero, whose creators seemed intent both on compromising a particular type of firm (by virtue of the kind of device they used as their calling card) and on extracting a particular type of data from those firms – the hallmarks of a sophisticated “APT”-style actor.

There’s really no easy answer to this. Warning U.S. firms away from Chinese products is all well and good, but it’s a strategy that won’t work, while punishing lots of innocent companies selling quality products. The truth is that any technology product you buy today is almost certain to contain components that were sourced in China. Any of those components could contain malicious software supplied by a compromised or unscrupulous downstream supplier. “Buy American” is even more pointless in the context of technology than it was in the automobile sector back in the ’70s and ’80s.

What’s to be done? Security conscious firms need to take much more interest in the provenance of the hardware and software they buy. Firms, like Apple, that are big enough to have leverage might consider random audits of equipment and firmware looking for compromises. They might also insist on reviewing the manufacturing facilities where devices are assembled to see what kinds of quality controls the manufacturer has over the software and hardware that is installed in their products.
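Even a modest audit program can start with something as simple as checksum verification. A sketch in Python, assuming the vendor publishes a SHA-256 digest for each firmware release; the file names are placeholders, and a real audit would also verify the vendor’s signature on the published checksum itself.

    # One concrete audit step: verify that a firmware image pulled from a
    # device matches the vendor-published checksum (file names are placeholders).
    import hashlib

    def sha256_of(path, chunk_size=1 << 20):
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    extracted = sha256_of("firmware_dump.bin")
    published = open("vendor_checksum.txt").read().strip().lower()
    print("MATCH" if extracted == published else "MISMATCH: investigate this unit")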

Beyond that, the U.S. government, via U.S. Customs and Border Protection (and like agencies in other nations), could take an interest in the contents and quality of IT products imported from China and other countries.

A system of random inspections and audits – akin to the inspections that are done for agricultural and consumer products – could raise the stakes for firms and governments intent on slipping compromised IT equipment and embedded devices into the U.S. market.

PCI Compliance & Secure Coding: Implementing Best Practices from the Beginning

July 15, 2014
Is your SDLC process built on a shaky foundation?

A lot of the revisions in PCI DSS point toward the realization that security must be built into the development process. The foundation that ultimately controls the success or failure of this process is knowledge — that means training developers to avoid the common coding flaws that introduce vulnerabilities. So let’s take a quick look at one of the common flaw categories that becomes part of the mandate on June 30, 2015.

PCI 3.0 added “Broken Authentication and Session Management” (OWASP Top 10 category A2) to the common coding flaws that developers should protect against during the software development process. Left exposed, this category opens some pretty serious doors for attackers: accounts, passwords, and session IDs can all be leveraged to hijack an authenticated session and impersonate unsuspecting end users. Your authentication page itself may be secure (that’s your proverbial fortress door), but if an attacker can become one of your users, it doesn’t matter how strong the door was…they got through.

To have a secure development process aligned to PCI that actually works, developers must be aware of these types of issues from the beginning. If critical functions are missing authentication controls, using hard-coded passwords, or failing to limit authentication attempts, you need to evaluate how you got into this predicament in the first place. It all starts with those who design and develop your application(s). For the record, nobody expects them to become security experts, but we do expect them to know what flawed code looks like, and how NOT to introduce it over and over again.
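To make those three patterns concrete, here is a minimal sketch using only the Python standard library; the in-memory attempt counter and the names are illustrative, and a real application would lean on its framework’s credential and session facilities.

    # Sketch of the three fixes: no hard-coded credentials, limited login
    # attempts, and a fresh unguessable session ID on each successful login.
    import hashlib
    import hmac
    import secrets

    MAX_ATTEMPTS = 5
    failed_attempts = {}  # username -> consecutive failures (illustrative store)

    def hash_password(password, salt):
        # Only a salted, slow hash is ever stored -- never the password itself.
        return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100000)

    def login(username, password, stored_salt, stored_hash):
        # Limit authentication attempts before doing any other work.
        if failed_attempts.get(username, 0) >= MAX_ATTEMPTS:
            raise PermissionError("account locked: too many failed attempts")
        candidate = hash_password(password, stored_salt)
        if not hmac.compare_digest(candidate, stored_hash):  # timing-safe compare
            failed_attempts[username] = failed_attempts.get(username, 0) + 1
            return None
        failed_attempts[username] = 0
        # A fresh session ID per login prevents fixation and replay.
        return secrets.token_urlsafe(32)

    # Usage: registration stores (salt, hash); a good login returns a token.
    salt = secrets.token_bytes(16)
    stored = hash_password("correct horse battery staple", salt)
    print(login("alice", "correct horse battery staple", salt, stored) is not None)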

According to the April 2013 Veracode State of Software Security report, stolen credentials, brute-force attacks, and cross-site scripting (XSS) are among the most common methods hackers use to exploit web applications. The revisions found in PCI DSS 3.0 did a lot to clarify what was originally left open to interpretation, and it’s worth noting that redefining what quality assurance (QA) means doesn’t have to rock the world of your developers.

Change is scary; we get that. That’s why the output we provide was designed for developers to consume, not just a security team. The number of successful attacks that reach critical data and systems via hijacked sessions will never decrease unless we coach our developers on the basics of building security into their development process.


Video Survey: What Would You Do with a Monster in Your Corner?

July 11, 2014

In our final video survey installment in the Future of AppSec series, we talk about the idea of having a “Monster in Your Corner.” Application security often feels like a massive, intractable problem, the sort of problem that requires a really big friend – or, in our thinking, a monster – to help you solve.

When we talk about having a monster in your corner, what do we mean? Well, we’re talking about the Veracode platform, our automated scanning techniques and the time-saving, massively scalable approach they give you to mountains of code and thousands of applications. But we’re also talking about the brilliant security engineers and minds behind the technology, the same minds that are continuously driving our service to improve and make the world of software safer on a daily basis. And last but certainly not least, we’re talking about our amazing Customer Services team of trained application security experts, on hand to help every customer get the most out of our cloud-based service.

Everyone at Veracode makes up the monster, and we’d love to be in your corner. Help us understand which appsec problems you need help tackling, because this is what we do day in and day out.

If You Had an Application Security Monster in Your Corner What Problem Would it Attack?

Watch Our Other Video Surveys

Truth, Fiction and a 20 Year Old Vulnerability

July 10, 2014

The impact of a 20-year-old flaw in the LZ4 compression algorithm is still a matter of conjecture. The moral of the story isn’t.

I think we can all agree it’s not quite THIS critical.

What were you doing in 1996? You remember ’96, right? Jerry Maguire, Independence Day and Fargo were in the theaters. Everybody was dancing the “Macarena.”

In the technology world, 1996 was also a big year. Among other, less notable developments: two obscure graduate students, Larry Page and Sergey Brin, introduced a novel search engine called “Backrub.” Elsewhere, a software engineer named Markus F. X. J. Oberhumer published a novel compression algorithm dubbed LZO. Written in ANSI C, LZO offered what its author described as “pretty fast compression and *extremely* fast decompression.” LZO was particularly adept at compressing and decompressing raw image data such as photos and video.

Soon enough, folks found their way to LZO and used it. Today, LZ4 – based upon LZO – is a core component of the Linux kernel and is implemented on Samsung’s version of the Android mobile device operating system. It is also a part of the ZFS file system which, in turn, is bundled with open source platforms like FreeBSD. But the true reach of LZ4 is a matter for conjecture.

That’s a problem, because way back in 1996, Mr. Oberhumer managed to miss a pretty straightforward but serious integer overflow vulnerability in the LZO source code – a flaw that LZ4 inherited. As described by Kelly Jackson Higgins over at Dark Reading, the flaw could allow a remote attacker to carry out denial-of-service attacks against vulnerable devices or trigger remote code execution on those devices – running their own (malicious) code on the device. The integer overflow bug was discovered by security researcher Don A. Bailey during a code audit of LZ4.

Nearly twenty years later, that simple mistake is the source of a lot of heartbleed…err…heartburn, as open source platforms, embedded device makers and other downstream consumers of LZ4 find themselves exposed.

Patches for the integer overflow bug were issued in recent days for both the Linux kernel and affected open-source media libraries. But there is concern that not everyone who uses LZ4 may be aware of their exposure to the flaw. And Mr. Bailey has speculated that some critical systems – including embedded devices used in automobiles or even aircraft – might be vulnerable. We really don’t know.

As is often the case in the security industry, however, there is some disagreement about the seriousness of the vulnerability and some chest thumping over Mr. Bailey’s decision to go public with his findings.

Writing on his blog, the security researcher Yann Collet (Cyan4973) has raised serious questions about the real impact of the LZ4 vulnerability. While generally supporting the decision to patch the hole (and recommending that those exposed to it do so), Mr. Collet suggests that the vulnerability is quite limited.

Specifically, Collet notes that to trigger the vulnerability, an attacker would need to craft a special compressed block to overflow the 32-bit address space. To do that, the malicious compressed block would need to be in the neighborhood of 16 MB of data. That’s theoretically possible, but not practical: the legacy LZ4 file format caps blocks at 8 MB. “Any value larger than that just stops the decoding process,” he writes, and 8 MB is not enough to trigger a problem. A newer streaming format is even stricter, with a hard limit at 4 MB. “As a consequence, it’s not possible to exploit that vulnerability using the documented LZ4 file/streaming format,” he says. LZ4, Mr. Collet says, is no OpenSSL.
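The underlying bug class is easy to demonstrate. The sketch below illustrates length-arithmetic wraparound in general, not LZ4’s actual code: a size computed in 32 bits wraps past 2^32, so a bounds check passes while the copy it guards would run far past the buffer. Python integers don’t overflow, so the 32-bit truncation is simulated with a mask, and all the values are made up for illustration.

    # Illustration of the bug class (not LZ4's actual code): a 32-bit
    # length calculation wraps, so the bounds check passes while the
    # copy it guards would write far beyond the buffer.
    MASK32 = 0xFFFFFFFF

    def unsafe_check(offset, run_length, buffer_size):
        # BAD: the sum is computed in 32 bits and can wrap to a small value.
        end = (offset + run_length) & MASK32
        return end <= buffer_size  # wrapped 'end' looks tiny, so this passes

    buffer_size = 8 * 1024 * 1024   # an 8 MB output buffer
    offset = 0xFFFFFF00             # attacker-influenced decoder state
    run_length = 0x200              # pushes the 32-bit sum past 2**32

    print(unsafe_check(offset, run_length, buffer_size))  # True: check fooled
    print(offset + run_length <= buffer_size)             # False: true arithmetic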

In response to Collet and others, Bailey wrote an even more detailed analysis of the LZ4 vulnerability and found that attackers actually wouldn’t be limited by the 8 MB or 4 MB block limits. And while all kinds of mitigating factors may exist, depending on the platform LZ4 is running on, Bailey concludes that exploits could be written against current implementations of LZ4 and that blocks smaller than 4 MB could be malicious. While some modern platforms have features that mitigate the risk, “this is the kind of critical arbitrary-write bug attackers look for when they have a corresponding memory information disclosure (read) that exposes addresses in memory.”

While the LZ4 vulnerability debate has become an example of security industry “inside baseball,” there is (fortunately) a larger truth here that everyone can agree on: we’re all a lot more reliant on software than we used to be. And as that reliance has grown, the interactions between the software-powered devices in our environment have become more complex, and our grasp of what makes up the software we rely on has loosened. Veracode has written about this before – in relation to OpenSSL and other related topics.

It may be the case that the LZ4 vulnerability is a lot harder to exploit than we were led to believe. But nobody should take too much comfort in that when a casual audit of just one element of the Linux kernel uncovered a 20-year-old, remotely exploitable vulnerability. That discovery should make you wonder what else out there has escaped notice. That’s a scary question.

Applications are Growing Uncontrollably and Insecurely


This year I’m working with IDG to survey enterprises to understand their application portfolios, how they’re changing and what firms are doing to secure their application infrastructure.

The study found that on average enterprises expect to develop over 340 new applications in the next 12 months. As someone who has been working in and around the enterprise software industry for more years than I care to admit, I find that number astounding. Enterprises really are turning into software companies.

Think about it – how many new applications did software vendors like Microsoft, Oracle, or SAP bring to market in the last 12 months? The number is probably in the hundreds, but you would expect that because they are software vendors. Every application sold is money in their pocket. The more software they make the more opportunities there are for them to increase their revenue and profits.


So why are enterprises developing as many applications as software vendors? The answer is the same: the more software they make, the more opportunities there are for them to increase their revenue and profits. The line between software development, revenue and profits may not be as short and direct as it is for software vendors, but the connection is there – otherwise enterprises wouldn’t be doing it.

The problem is that all those applications represent both opportunities and risks for the enterprises developing them. How much risk? It’s hard to say without assessing them for vulnerabilities. Yet most of those 340-plus new applications will never be checked: the survey found that only 37% of enterprise-developed applications are assessed for security vulnerabilities.

Or look at it another way – enterprises are blindly choosing to operate in a hostile environment for 63% of the business opportunities represented by software. If it were me, I would rather take off the blindfold and see exactly what I’m getting into. I can only hope that enterprise executives start feeling the same way.


Med Tech’s Promiscuity Problem

July 1, 2014

A roundtable discussion of medical device security finds that innovation in the connected health space is outstripping security. And the problem will get worse before it gets better.


Physicians are used to counseling their patients on the need to take care of themselves and take reasonable precautions to protect themselves from harm. Are you fond of cycling? Remember to wear a helmet to protect yourself from traumatic brain injury! Enjoy a drink at the pub? Remember not to over-indulge, or you risk a wide range of ills, from liver disease to fist fights and automobile accidents. And if you’re out there hooking up willy-nilly and you fail to take precautions, don’t be surprised when you contract herpes or some other common (and preventable) STD.

There’s ample scientific evidence to back up each of these recommendations. That doesn’t mean, of course, that the risky behaviors don’t continue regardless. It just means that the physicians dispensing the advice know they’re standing on pretty solid ground when they dole it out.

When it comes to the fast-moving world of medical technology and connected healthcare, however, unchecked, promiscuous behavior is (unfortunately) the norm these days. And unlike that other kind of promiscuity, doctors and hospitals are only just beginning to recognize the problem – let alone figure out what to do about it.

That was one conclusion of a recent panel discussion that was held on June 12 under the auspices of the National Institute of Standards and Technology’s Information Security and Privacy Advisory Board (ISPAB).

Dr. Dale Nordenberg of MDISS. Photo source: scmagazine.com

Speaking earlier this month as part of the panel discussion “Emerging Guidance and Standards Affecting Medical Device Security,” Dale Nordenberg, a co-founder and executive director of the Medical Device Innovation, Safety & Security Consortium (and a practicing physician), said that the momentum pushing adoption of connected devices in the healthcare field is far outstripping the ability of healthcare providers to safely and securely deploy them. (Panel audio recording available here.)

“These connected care environments are evolving far more rapidly than associated best practices,” Nordenberg said. “It’s far easier to go out and buy something and implement it than to educate and train everyone (to work) in that new environment,” he said.

One problem is that connected health devices are coming on the market at a furious pace – driven by innovation at thousands of large and small device makers.

The investments are driven by hospitals’ desire to provide better and more efficient care, as well as by financial incentives embedded in legislation like the Affordable Care Act, which has turbo-charged investment in technologies like electronic health records (EHRs).

But Nordenberg noted that connected health initiatives and EHRs are a double-edged sword. “If you’re going to increase the promiscuity, you better increase your security and monitoring and assess your devices and do cyber security exercises,” he said. Alas, very little of that is happening anywhere in the U.S.

Why? There are many reasons. Bakul Patel, a policy advisor in the Center for Devices and Radiological Health at the FDA, noted that medical device makers have deep expertise in writing reliable embedded software, but little experience with the kinds of networked applications that have long been common in enterprise environments. “So when they think about adding connectivity, it’s just out-of-the-box and off-the-shelf stuff,” Patel said. That has led to a lot of unintentional security breaches in recent years.

There’s also a critical shortage of talent that’s hobbling manufacturers and their customers alike. Patel said that only the top tier of medical device manufacturers – companies like Philips and GE – sport internal teams that do cyber security assessments on new and established products. The rest – a long tail of smaller manufacturers – do not. In fact, many device makers struggle to accept the notion that hackers would even be interested in compromising their connected health devices. Period.

On the customer side, expertise is lacking, as well. Ken Hoyme, a Distinguished Scientist at Adventium Labs, said that there’s lots of “denial” in the healthcare sector. “Hospitals deal with the security of their pharmacy or of their newborns,” he noted. “But they’re not dealing with the security of their connected health devices.” (Check out Ken’s presentation here.)

The focus at most hospitals and healthcare networks is, understandably, on compliance with regulations like HIPAA. “There’s just no thought that we might be involved in a targeted attack,” he said.

You can’t entirely blame the healthcare providers. The FDA itself is struggling to find its footing in this brave new world. Mandating security is good – in theory – but not if it impedes progress, locks in obsolete solutions or interferes with patient care, Patel suggested. “You don’t want someone to have to punch in a 15-digit password when they’re trying to turn on an infusion pump. That would not be useful.”

In other areas, such as mobile apps, the scrutiny given to traditional healthcare devices – which have lifespans measured in decades – just can’t scale to accommodate a market that churns out hundreds of new apps a month, most of which never gain broad adoption and die a quiet death on Apple’s iTunes App Store or Google Play. “The whole mobile app world has its own ecosystem, and it’s user and consumer driven… These apps have a lifecycle of their own.”

What’s the cure? Patel said that the medical device security industry has lots to learn from its bigger, older sibling: the IT security industry, where tools, processes and roles have been honed over the last two decades.

As things stand, however, it is early days in the medical device sector and most of that hard work is still ahead.

You can listen to a podcast of the roundtable conversation here. (Note: the sound quality starts out weak, but improves vastly after the first minute or two – be patient.)

Video Survey: What’s in the future for application security?

June 26, 2014

Security professionals, analysts and headlines all seem to agree that many of the most critical vulnerabilities discovered and exploited today are on the application layer. Organizations around the world are redirecting their efforts to find and fix these flaws, and thought leaders in the security field are calling for others to follow their lead in ensuring that applications from the entirety of the software supply chain — especially those from third-party vendors — are held to the same standard as internally developed apps. While this transition is far from complete, it raises the question: what’s next?

In the sixth installment of our Future of Application Security video survey series, we asked industry experts where they think the future of application security lies.

Where is the future of application security leading?

What do you think? Where do you see the future of application security?

Watch Our Other Video Surveys

First Prioritize, Then Patch: Yes, Another Blog on PCI 3.0

June 25, 2014
Your scan results may have you feeling a bit overwhelmed, but our actionable data and sorting can help streamline your remediation efforts!

In November’s update to PCI DSS, now at version 3.0, you may have noticed that the PCI Security Council switched the order of the first two application-security-focused sub-requirements. Requirement 6.1 now focuses on establishing ongoing best practices, while 6.2 moves on to patching and remediation efforts. Some of our customers have questioned the logic of such a small change, so I thought we should cover it in a blog post!

The new guidance for 6.1 recognizes the need to prioritize and rank your vulnerabilities before executing a remediation or mitigation plan, and it encourages enterprises to stay aware of new vulnerabilities that may threaten their systems and ultimately disrupt “business as usual.” In conjunction with 6.1, requirement 6.2 urges enterprises to build a process that evaluates risk and prioritizes patch cycles: patch the most critical systems within 30 days of a patch release, and less business-critical systems within two to three months. We all know you can’t fix everything right away, but this approach ensures that all systems get patched, not just the most critical ones.
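The scheduling rule itself is simple enough to express in a few lines. A sketch with the two windows described above (30 days for the most critical systems, with 90 days standing in for the two-to-three-month window; the criticality labels are illustrative):

    # Sketch of the 6.2 scheduling rule: critical systems get a 30-day
    # deadline, everything else roughly three months after patch release.
    from datetime import date, timedelta

    PATCH_WINDOWS = {"critical": 30, "standard": 90}  # days after release

    def patch_deadline(patch_released, system_criticality):
        return patch_released + timedelta(days=PATCH_WINDOWS[system_criticality])

    released = date(2014, 6, 25)
    print(patch_deadline(released, "critical"))  # 2014-07-25
    print(patch_deadline(released, "standard"))  # 2014-09-23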

Let’s face it: as new application languages, architectures and platforms are introduced and adopted, new vulnerabilities are sure to follow. And as enterprises evolve to close the gaps those vulnerabilities create, hackers will shift tactics and find new ways in. History has a way of repeating itself, especially in an arms race. Since applications will always be choice targets for hackers, understanding the threat landscape before you decide how to tackle it should be the first step toward an ongoing process you can be proud of, rather than checking the compliance box once a year.

From the very beginning, the Veracode platform was designed to make remediation for the sake of compliance as straightforward as possible. Our ‘Fix First Analyzer’ organizes internal and third-party scan results into actionable data for developers (yes, developers!) to consume. Flaws identified in the code are prioritized not only by severity level but also by the effort to fix and the exploitability of the flaw, all mapped to the OWASP Top 10 and/or SANS Top 25 CWE definitions. Developers are presented with a visual, navigable map that helps determine which vulnerabilities pose the greatest risks and should be remediated soonest, along with advice on how to go about addressing each flaw.
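To illustrate the idea of composite prioritization (a sketch in the spirit of the description above, not Veracode’s actual scoring), imagine ranking flaws by a score that rewards high severity and exploitability and penalizes high remediation effort; every field and weight here is made up for illustration.

    # Hypothetical composite scoring: favor high severity and exploitability,
    # penalize high effort-to-fix. Fields and weights are illustrative only.
    flaws = [
        {"id": "XSS-12", "severity": 3, "exploitability": 5, "effort": 1},
        {"id": "SQLi-4", "severity": 5, "exploitability": 4, "effort": 2},
        {"id": "CRLF-9", "severity": 2, "exploitability": 2, "effort": 4},
    ]

    def fix_first_score(flaw):
        return flaw["severity"] * 2 + flaw["exploitability"] * 2 - flaw["effort"]

    for flaw in sorted(flaws, key=fix_first_score, reverse=True):
        print(flaw["id"], fix_first_score(flaw))
    # -> SQLi-4 16, XSS-12 15, CRLF-9 4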

While earlier versions of the standard stressed the importance of patching, this revision to PCI DSS moves toward the adoption of an application security process rather than a one-time step on the way to compliance. The only way to achieve true security is to never give up on the applications…well, until you end-of-life them. The tools you need to help comply with application security requirements found in PCI DSS 3.0 are just a cloud away.

