5 Best Practices in Data Breach Incident Response

August 26, 2014
Filed under: application security 

It goes without saying that all IT organizations should have an active Incident Response (IR) Plan in place – i.e. a policy that defines in specific terms what constitutes an information security incident, and provides a step-by-step process to follow when an incident occurs. There’s a lot of good guidance online about how to recruit a data breach response team, set initial policy, and plan for disaster.

For organizations already prepared for IT incident response, be aware that best practices continue to evolve. The best IR plans are nimble enough to adjust over time. When the incident in question is feared to be a possible data breach, organizations should add two further goals as part of their comprehensive Application Security disaster planning:

  • The complete eradication of the threat from your environment.
  • Improved AppSec controls to prevent a similar breach in the future.

Veracode’s Information Security Assessment Team, which put together our own IR playbook, recommends that IT groups follow these five emerging guidelines to plan for the reality of today’s risks and threats.

1. Plan only for incidents of concern to your business.

Learn more about threat modeling with a free chapter from the book “Threat Modeling: Designing for Security”

According to the SANS Institute, the first two steps to handling an incident most effectively are preparation and identification. You can’t plan for everything, nor should you. For example, if no business is conducted through the organization’s website, there is probably no need to prepare for a Denial of Service attack. Companies in heavily regulated industries such as financial services or healthcare receive plenty of guidelines and mandates on the types of threats to sensitive and confidential data, but other industries may not enjoy similar “encouragement”.

Ask yourselves: what is OUR threat landscape, and why would hackers and criminals want to attack us? The answers will point to a probable set of root causes for data breach attempts. Focus on what’s possible, but don’t be afraid to think creatively. The U.S. national security establishment was famously caught flat-footed by the events of 9/11 as the result of a “lack of imagination” about what terrorists were capable of accomplishing. By constantly re-evaluating your organization’s threat landscape (and by relying on solid threat intelligence to identify new and emerging threats), your data breach response team will remain on its best footing.

2. Don’t just plan your incident response, practice it.

Practice: it’s not just the way to Carnegie Hall. IR plans must not be written and then left on the shelf to gather dust. A proactive and truly prepared information security organization educates its IT staff and users alike about the importance of regularly testing and updating breach response workflows. Plans must be drilled and updated regularly to remain viable. Even if it’s simply around a conference table, run through your response plan. Some organizations do this as often as monthly; your industry and the probable threats it faces will determine the ideal frequency. At Veracode, we run regular Table Top Exercises on a number of possible scenarios.

The worst mistakes are typically made before the breach itself, so be prepared. The purpose of IR drills is to ensure that everyone understands what he or she should be doing to respond to a data breach, quickly and correctly. A good rule of thumb here is that “practice makes better, never perfect.” It pays to be honest about your IR team’s capabilities and its ability to effectively neutralize the most likely threats. If the necessary skills don’t exist in-house, plan to retain outside help that can be standing by, just in case.

3. In speed of response, think “minutes” not “hours”.

IR teams should always strive to improve response times – that’s a given – and “within minutes” is today’s reality. On the Internet, a service outage of more than one hour is considered significant. Social media chatter can very quickly amplify the damage done to your business, so get out ahead of the crisis, and stay there.

SANS Institute defines the third step in breach response as “containment” – to neutralize the immediate threat and prevent further damage to critical or customer-facing systems. Move quickly to determine the possible severity of the data breach and then follow the customized response workflows in place for that scenario. To borrow some terminology from the military: is your “situation room” responding to a Defcon 1 attack or more like Defcon 5? Even as your IR team moves to eradicate the threat, you can be communicating to key stakeholders appropriately – according to the reality of the situation at hand.
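To make that triage concrete, here is a minimal, purely illustrative sketch; the severity levels and response steps are hypothetical placeholders, not a prescribed standard, and should be replaced with the workflows in your own IR plan:

```python
# Hypothetical severity-to-workflow map: each level triggers a
# pre-planned set of response steps. Adapt levels and actions to
# your own incident response plan.
RESPONSE_PLAYBOOK = {
    "critical": ["isolate affected systems", "page the IR team lead", "brief executives"],
    "high": ["isolate affected systems", "page the IR team lead"],
    "medium": ["open an incident ticket", "monitor for escalation"],
    "low": ["log the event", "review at the next IR meeting"],
}

def triage(severity: str) -> list[str]:
    """Return the pre-planned response steps for a given severity."""
    return RESPONSE_PLAYBOOK.get(severity, ["escalate for manual assessment"])

print(triage("high"))
```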

4. Don’t over-communicate.

This guideline seems counter-intuitive. Sharing is caring, right? Wrong, especially when it comes to the fate of your organization’s confidential or sensitive customer information. Your initial notification to customers should follow detection almost immediately, as a pre-planned, rote response. There will be no time to wordsmith the perfect statement in the thick of battle; better to have it pre-packaged and ready ahead of time. That said, this statement should be short and to the point. Acknowledge both your awareness of the incident and the IR team’s continuing efforts to safely restore service as soon as possible.
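As a purely hypothetical illustration, a pre-packaged holding statement might read: “We are aware of an incident affecting some of our systems and have activated our incident response plan. Our team is working to restore service safely and as quickly as possible, and we will provide verified updates on a regular schedule as the investigation proceeds.”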

After that, plan to give updates to all stakeholders on some kind of methodical basis. Act like the NTSB after a plane crash: they give regularly scheduled press conferences on what they know so far, while firmly pushing back on what they don’t. Think like an investigator and deal in facts. Don’t speculate as to the root cause of the breach, or even when service will be restored, unless that timeline is precisely known. Your communication to the market, while measured, should always be sympathetic and as helpful as possible. One final piece of advice: tell your customers the same thing you tell the media. There are few, if any, secrets left on the Internet.

5. Focus on restoring service first, root cause forensics later.

Uptime will keep customers happy, which is where your focus should be initially.

The root cause of a data breach incident is typically not immediately known, but that should be no impediment to restoring service for customers ASAP (once the threat is contained and destroyed, of course). Keep the focus on the customer. Get back online as quickly as possible. Yes, SANS outlines “recovery” as the step that ensures no software vulnerabilities remain, but…

Ignore the engineers and analysts who want to investigate root cause immediately. With today’s sophisticated attacks, this can take weeks or months to determine, if it can be determined at all. Still, incident response is not over when it’s “over”. As we’ve asserted, the best organizations – and their IR teams – take the time to learn from any mistakes. Monitor systems closely for any sign of weakness or recurrence. Analyze the incident and evaluate (honestly) how it was handled. What could be improved for better response in the future? Revise your organization’s IR Plan, making any necessary changes in people, processes or technology for when or if there is a next time. Practice any new workflows again and again until you know them cold.

Conclusion:

Solid IT risk management strategies include disaster recovery planning and the creation of a living, evolving incident response playbook. Today’s IR plans need to be focused, factual and fast. Every organization needs to budget for the hard IT costs associated with data breach recovery. However, a comprehensive and battle-tested plan will help mitigate the “soft costs” associated with poorly handled data breach incidents. These can include lingering damage to revenue, reputation or market value – long after the initial crisis is resolved.

Secure Development – One Bathroom Break At A Time

August 25, 2014
Filed under: SDLC, Software Development 

Google went to great lengths to educate their developers about the benefits of security testing – even developing educational materials specifically to be read on the toilet.

There’s enough evidence in favor of security testing throughout the development cycle to make “debates” about it moot. Yet many software development operations still lack a comprehensive and consistent approach to testing.

Why? One of the most commonly cited reasons is the development “culture.” That’s a fuzzy term that encompasses a lot of things. Often it boils down to personal resistance on the part of (influential) developers and managers within an organization, or adherence to wrong-headed tradition. “We don’t do that here.” Full stop.

It goes without saying that, if you want to change development practices, you need to change the development culture within your organization. But how?

Mike Bland

Back in June, Mike Bland, a former Google engineer, penned a great blog post that provides something of a roadmap for implementing a culture of secure development and testing. Bland worked at Google from 2005 to 2011 and, for much of that time, was Google’s “Testing Evangelist,” helping to implement a system for thorough application testing and, more importantly, making practices like unit testing a cultural norm at Google.

Bland’s post is worth printing out and reading. He starts off with an analysis of the Apple Goto Fail and Heartbleed vulnerabilities, and how they’re object lessons in the importance of unit testing. (Veracode tackled some of the same issues regarding Goto Fail in this blog post.)

But Bland also wraps in a (substantial) guide to developing a culture of secure development and testing. Bland weighs in on the relative importance and merits of various types of tests – integration testing vs. unit testing vs. fuzzing. And he talks about the organizational challenges of growing a culture of testing at Google.

Bland says that, contrary to what you may believe, Google’s engineering-heavy culture was not hospitable soil for the adoption of a culture of unit testing. Despite the company’s considerable resources, Bland notes that “vast pools of resources and talent” within the company often got in the way by “reinforc(ing) the notion that everything is going as well as possible, allowing problems to fester in the long shadows cast by towering success.”

“Google was not able to change its development culture by virtue of being Google,” Bland writes. “Rather, changing Google’s development culture is what helped its development environment and software products continue to scale and to live up to expectations despite the ever-growing ranks of developers and users.”

Bland describes an all-hands approach to bending Google’s engineering practice in the direction of more testing. Resistance, he said, was the byproduct of complex forces: a lack of proper developer education in unit testing practices and “old tools” that strained to scale with the pace of Google’s development.

Bland’s response was to form what he calls a “Testing Grouplet” within Google that served as a support group and community for like-minded folks who wanted to implement unit testing procedures. That group operated like testing guerillas, developing and driving the adoption of new tools that made unit testing less painful, sharing best practices and engaging in testing propaganda within the development organization.

One of the most successful initiatives was “Testing on the Toilet,” a circa 2006 program in which Grouplet members designed short (one page) lessons on a variety of topics (“Don’t Put Logic in Tests,” “Debug IDs”) then plastered hundreds of Google bathroom stalls worldwide with the *ahem* tactical reading material.

The idea here isn’t to shock folks. Rather, it’s to take the role of education in your secure development program seriously. Bland said the bathroom literature was part of a multi-pronged education effort that also included more common elements like brown bag lunches and a guest speaker series.

Check out Mike’s full post on building a unit testing culture at Google here.

Dispelling the “What Mobile Security Threat?” Myth

August 19, 2014
Filed under: Mobile 

Post 1 of 6: Dispelling Mobile App Security Myths – Myth #1

This is post one in a series on Mobile Application Security.

Mobile applications are everywhere. The growth of enterprise mobile apps in the past few years has been absolutely staggering. Forrester Research reports that 23 percent of the workforce has downloaded 11 or more apps (paid or free) to the smartphone they use for work, while 16 percent have installed that many apps on their work tablets. Across the board, up to 40 percent of workers admit to adding 10 or more apps to their work devices.

Some mobile workers have two or three different devices that they use for work. With an average of 50+ apps installed on most mobile devices, the potential attack surface from untested software grows rapidly for the average enterprise. The reality is that hundreds of applications per user sit in close proximity to enterprise data stored on, or accessed via, approved BYOD devices. Any one of them could be a malicious gateway to a potential data breach.

I wish that most enterprises were attacking the reality of this problem head on. But they’re not. Instead, a bunch of myths about mobile security – specifically mobile app security – have taken hold. Six myths, to be exact.

Why do these myths exist? We perpetuate them primarily because they are comforting and make us feel better. The problem with a myth is that ultimately, reality gets in the way. The best way to shatter myths is with empirical evidence to the contrary.

Let’s examine these six myths one by one and discuss how best to dispel them at your organization.

Myth #1: “What mobile security threat?”

Like the proverbial ostrich with its head in the sand, perpetrators of this myth point to the lack of media coverage of major mobile data breaches as proof the problem doesn’t exist. The fact is, nearly half of companies that permit Bring-Your-Own-Device (BYOD) have experienced a breach as a result of an employee-owned device – they’re just not talking about it.

Six out of ten malware analysts at U.S. enterprises admit having investigated or addressed a data breach that was never disclosed by their company. This should surprise no one. The majority of companies still have no formal BYOD policy, and one-third have no application security program of any kind. This means that the software they are developing, mobile or otherwise, is at a higher risk of containing known security vulnerabilities.

Secure software development practices are still not as widespread as they should be. Of the mobile apps produced by internal teams and first submitted to Veracode for vulnerability analysis, more than two-thirds failed to comply with the enterprise’s own policies or industry standards such as the OWASP “top ten”. Errors present in in-house apps often involve insecure data storage, broken cryptography, weak input validation, unsecured transport layers or weak server-side controls. While most mobile app flaws are easily remediated and most apps pass their next inspection, the high initial failure rate we’ve seen proves that CISOs have good reason to be concerned about threats to their mobile ecosystem.

The magnitude of the mobile app security threat is compounded not just by the sheer number of devices and supposedly safe public apps your employees are consuming, but also by the ever-increasing volume and sophistication of risky and malicious apps.

In a webinar I recently hosted with Tyler Shields, senior security and risk analyst at Forrester, he revealed that a clear majority of enterprises are now concerned by the drastic growth of mobile malware… with good reason. It has been on an explosive trajectory over the last few years, especially on the Android platform. Juniper Networks’ latest Mobile Threats Report calculated that the number of malicious apps grew an astounding 614 percent from 2012 to 2013. These apps exhibit risky behaviors such as accessing files or logs, monitoring email or calls, sharing contacts or location, installing other software, and even rooting the device.

Infected apps and malware executables find their way onto users’ mobile devices in any number of ways. Risky user behaviors include downloading untrusted or unverified apps, allowing a family member to use a company-owned device, clicking on a malicious link in a phishing email, and even visiting adult websites.

Once installed, these apps get very close to enterprise data, especially if the device doesn’t use an MDM to enforce policies to prohibit apps that pose a risk. On an unprotected device, enterprise data can be accessed, intermingled, duplicated and even moved to the cloud.

Let’s dispel this myth. The mobile security threat is real, and growing.

In my next post, we’ll continue to break these six myths around mobile application security, exposing the realities confronting the enterprise mobile ecosystem.

Use Software Suppliers as Force Multipliers

August 14, 2014
Filed under: Third-Party Software 
No, no. Not this type of force.

One of the most alarming facts about modern software, given how insecure most of it is, is the degree to which it is composed of many other components of varying origin and unknown security. Almost every enterprise software portfolio contains internally developed, purchased, outsourced and open source software; and almost every application in that portfolio contains code from multiple origins as well.

This is one of the things that makes a purely source-code-based security analysis of software incomplete by definition: if you can’t scan the components for which you don’t have source, you don’t have a complete picture of the software risk.

But worse than the problem of finding the flaws is the problem of getting them fixed. If you learn that your supplier has an issue in their code, you may be able to hold them accountable for a fix, but if the issue is actually in fourth party code that they use in their application, you are reliant on their ability to manage their own software supply chain to get a fix.

Considering one level of risk removed from the software itself, the supplier may use purchased software that puts the quality of their own software at risk, thereby putting you at risk. A real-world example from a few years ago was the compromise of the Apache project’s source control credentials via a cross-site scripting vulnerability in their local copy of Atlassian’s JIRA software. Though the break-in was caught, it is possible that Apache’s software could have been compromised as a result of this hack.

This is where securing the software supply chain starts to seem like an intractable problem. Even if you can get security attestations about the quality of the vendor’s software, what about their internal systems and processes that might put you at equal risk?

Here, as before in this series, other supply chain transformation efforts suggest a solution: use the supplier as a force multiplier. Specifically, require the supplier to hold their supply chain to the same standards that you hold them to. An example (cited in the Wharton article “Managing Green Supply Chains”) is IBM’s Social & Environmental Management Systems program, which holds its suppliers responsible for achieving measurable performance against stated environmental goals. IBM’s program requires that its suppliers publicly disclose their metrics and results, and “cascade” the program to any suppliers whose work is material to IBM’s business. The result: a rapid transformation of the compliance level of the whole supply chain.

This approach of cascading compliance requirements is in force in other efforts, such as generation of environmental bill of materials impact information (BOMCheck), corporate responsibility initiatives, and material data systems reporting requirements at Volvo. Indeed, organizational research suggests that cascading performance factors and associated goals to the supply chain is required for effective supply chain management.

Given the sensitive nature of the data protected by software and the complex nature of the software supply chain, cascading software supply chain security program requirements to major suppliers may be the only way to ensure that the enterprise is completely protected. The good news is that it need not be an uphill struggle. The more enterprises require secure software, the more vendors will read the writing on the wall and start to understand that security is a market requirement.

The Seven Habits of Highly Effective Third-Party Software Security Programs

  1. Choose the right suppliers
  2. Put your efforts where they do the most good
  3. Use suppliers as force multipliers
  4. Collaborate to innovate
  5. The elephant in the room is compliance
  6. Drive compliance via “WIIFM”
  7. Align benefits for enterprise and supplier – or pay

Stop Freaking Out About Facebook Messenger

Facebook recently announced that mobile chat functionality would soon require users to install Facebook Messenger. Fueled by the media, many people have been overreacting to the permissions that Messenger requests, before taking the time to understand the true privacy implications.

In a nutshell, Messenger is hardly an outlier relative to the other social media apps on your phone.

Why the uproar, then? In part, people love to pick on Facebook because of their past privacy UI transgressions. They’ve deserved much of that. But it’s a little crazy that there’s such an incendiary reaction to the privacy implications of a mobile app that, permissions-wise, isn’t that different from the multitude of social apps people happily download without a second thought.

Still skeptical? We (and by “we” I mean Andrew Reiter) made a list of the Android permissions requested by the latest Facebook Messenger app. Then we checked the remaining 49 of the top 50 social apps in the Google Play store to see how many of those requested the same permissions. To nobody’s surprise whatsoever, they are all pretty greedy.

[Chart: percentage of the other top 50 social apps requesting each Android permission that Messenger requests]

If it’s not obvious how to read this chart, here’s an example: 67% of the other popular social apps also require the READ_CONTACTS permission. 47% of them require the CAMERA permission. And so on. Again, this shouldn’t surprise anybody. Mobile apps need these permissions if you want them to function properly. Messenger is a feature-packed app; some of the others may not be. Asking for all those permissions doesn’t necessarily mean the access will be abused.
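The underlying tally is easy to reproduce. Here’s a minimal sketch; the app names and permission sets are hypothetical stand-ins for data you would extract from each APK’s manifest (e.g., with aapt dump permissions):

```python
# Count what share of a set of apps requests each Android permission.
# The data below is hypothetical; in practice, extract each app's
# requested permissions from its AndroidManifest.xml.
from collections import Counter

app_permissions = {
    "SocialAppA": {"android.permission.READ_CONTACTS", "android.permission.CAMERA"},
    "SocialAppB": {"android.permission.READ_CONTACTS", "android.permission.RECORD_AUDIO"},
    "SocialAppC": {"android.permission.CAMERA", "android.permission.ACCESS_FINE_LOCATION"},
}

counts = Counter(perm for perms in app_permissions.values() for perm in perms)
total = len(app_permissions)

for perm, n in counts.most_common():
    print(f"{perm}: {n / total:.0%} of apps")
```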

We didn’t do the meta-analysis to determine how many of those permissions were requested by first-party code vs. third-party ad libraries. Ad libraries are old news at this point, and it kind of doesn’t matter who’s asking for permission as long as you’re granting it.

So stop freaking out… at least until there is something to freak out about.

5 Things You Can Do With the Veracode API

When you use the Veracode API, you get economies of scale through automation. One customer uploaded and scanned 100 applications concurrently over a weekend. Another scheduled monthly recurring scans. “Application programming interface” (API) is more than jargon: it is the industrial revolution (automation) meets the information age (your application security intelligence). Here are five ways you can wield that power.

You make security testing invisible to developers

This is not to say developers are excluded from security goals. I mean the process is invisible. Imagine writing code and committing it to the build server to trigger a scan. We call this pattern “Upload and Scan” and use it in-house for our own development. See the Agile Integration SDK for more details.
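As a rough sketch of what that CI hook can look like against Veracode’s XML Upload API: the credentials, app_id and file path below are placeholders, and you should verify endpoint and parameter names against the current API documentation.

```python
# "Upload and Scan" sketch: a build server pushes the new artifact to
# Veracode and kicks off a static scan. All identifiers are placeholders.
import requests

API_BASE = "https://analysiscenter.veracode.com/api/5.0"
AUTH = ("api_user@example.com", "api_password")  # placeholder credentials
APP_ID = "12345"                                 # hypothetical application ID

# Step 1: upload the artifact the build just produced.
with open("build/myapp.war", "rb") as artifact:
    upload = requests.post(f"{API_BASE}/uploadfile.do",
                           auth=AUTH,
                           data={"app_id": APP_ID},
                           files={"file": artifact})
upload.raise_for_status()

# Step 2: begin the scan across everything uploaded so far.
scan = requests.post(f"{API_BASE}/beginscan.do",
                     auth=AUTH,
                     data={"app_id": APP_ID,
                           "scan_all_top_level_modules": "true"})
scan.raise_for_status()
print(scan.text)  # XML response describing the newly started scan
```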

You look beyond critical applications to the entire application infrastructure

Web security scans can be launched against your entire application infrastructure to quickly identify the “low-hanging fruit.” This allows you to cover everything and focus remediation on the most severe issues. Use the API to schedule scans at whatever frequency fits: weekly, monthly or quarterly. Scan many applications regularly and review only the results that exceed your risk appetite.

You gain flexibility managing your security initiatives

Why not delegate the administration of your security platform to the department that manages your IT? The Veracode “Admin API” makes it simple to perform common administrative tasks in bulk. You can create a standard operating procedure to create 100 application profiles or enroll 100 developers. And you can integrate your identity and access management (IAM) system for user management. The result is an elastic security program that complies with your change control procedures. The benefit is less time spent on administrative tasks by the security team.
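As an illustration of that bulk pattern, here is a sketch that creates a batch of application profiles through the Admin API; the endpoint and parameter names follow Veracode’s XML APIs but should be checked against current documentation, and the application names are hypothetical:

```python
# Bulk-provisioning sketch: create an application profile per entry.
# Credentials and application names are placeholders.
import requests

API_BASE = "https://analysiscenter.veracode.com/api/5.0"
AUTH = ("admin_user@example.com", "api_password")  # placeholder credentials

apps_to_create = ["payments-service", "customer-portal", "reporting-batch"]

for app_name in apps_to_create:
    resp = requests.post(f"{API_BASE}/createapp.do",
                         auth=AUTH,
                         data={"app_name": app_name,
                               "business_criticality": "High"})
    resp.raise_for_status()
    print(f"Created application profile for {app_name}")
```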

You export your data when you need it in other systems

The Veracode “Results API” makes it easy to get your data in the format you need. Feed your application results into a governance dashboard, a defect tracking system, or a custom Python application. Allow people to choose the format of their results: PDF reports for some, XML for others, and results right inside the IDE for the rest.
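For example, here is a sketch that pulls a detailed report and tallies flaws by severity; the endpoint name follows Veracode’s XML Results API, the build ID is a placeholder, and the XML handling is deliberately loose, so adapt it to the real report schema:

```python
# Results API sketch: fetch a detailed report as XML and summarize
# flaw counts by severity. Identifiers are placeholders.
import xml.etree.ElementTree as ET
from collections import Counter

import requests

API_BASE = "https://analysiscenter.veracode.com/api/5.0"
AUTH = ("api_user@example.com", "api_password")  # placeholder credentials
BUILD_ID = "67890"                               # hypothetical build ID

resp = requests.get(f"{API_BASE}/detailedreport.do",
                    auth=AUTH, params={"build_id": BUILD_ID})
resp.raise_for_status()

root = ET.fromstring(resp.content)
severities = Counter(elem.get("severity", "unknown")
                     for elem in root.iter()
                     if elem.tag.endswith("flaw"))

for severity, count in severities.most_common():
    print(f"Severity {severity}: {count} flaws")
```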

You leverage application security as a selling point

The Veracode Vendor Application Security Testing (VAST) program has APIs for automating vendor and enterprise tasks. I predict more customers will use the VAST APIs, especially as more software suppliers field questions from their customers about the security of their products. Use the VAST API to retrieve the shared Veracode results of your software vendors.

Anything that can be accomplished through the User Interface (UI) can be done through the application programming interface (API). These are a few examples. While automation alone does not solve every problem, it can be a distinctive element of a security program when combined with strong program management. Veracode has deep API expertise and can help you get started using our existing tools or building a custom integration solution for your environment.

The Rise of Application Security Requirements and What to Do About Them

As an engineering manager, I am challenged to keep pace with ever-expanding expectations for non-functional software requirements. One requirement, application security, has become increasingly critical in recent years, posing new challenges for software engineering teams.

In what manner has security emerged as an application requirement? Are software teams equipped to respond? What can engineering managers do to ensure their teams build secure software applications?

In the ’90s, security was not a visible software requirement. During this time I worked with a team developing an innovative web content management system. We focused on scalability and performance, ensuring that dynamically generated web pages rendered quickly and scaled as the web audience grew. I don’t remember any security requirements nor any conversations about application security. Scary, but true! Our system was deployed by early-adopter media companies racing to deliver real-time online content. IT teams deploying our system may have considered security, but if they did, they focused on infrastructure and didn’t address security with us, their software vendor.

Ten years later, application security requirements began emerging in a limited way, focusing on compliance and process. At the time, I was working at a startup, developing software for financial institutions to uncover fraud through analysis of sensitive financial, customer and employee data. We routinely responded to requests for proposal (RFPs) containing security questions about the controls that prevented my company’s employees from stealing data. We described our rigorous release, deployment and services processes. Without much ado, and without changes to our development process, our team simply “checked the box” for security.

A few years later the stakes became higher. New questions began showing up in RFPs: “How does your software architecture support security?” “How do your engineering practices enforce security?” And the most difficult to answer: “Provide independent attestation that your software is secure.”

I faced a sobering realization. The security of our software relied entirely on the technical acumen of our engineering leads (which fortunately was strong), and was not supported in a formal way in the engineering process. Even worse, I was starting from scratch to learn security basics. I needed help, and fast!

Engineering leads at small independent software vendors (ISVs), such as NSFOCUS and Questionmark, face this challenge routinely. Where should they start? What concrete steps can they take to secure their code and establish a process for application security?

Pete Chestna recently posted “Four Steps to Successfully Implementing Security into a Continuous Development Shop.” His approach has worked for us at Veracode and translates well to small and large engineering teams:

  • Learn, hire or consult with experts to understand the threat landscape for your software. Develop an application-security policy aligned with your risk profile.
  • Baseline the security of your application. Review and prioritize the issues according to your policy.
  • Educate your developers on security fundamentals and assign them to fix and remediate issues.
  • Update your software development life cycle (SDLC) to embed security practices so that new software is developed securely.

You will need budget and time to accomplish this. Consult with experts or engage with security services such as Veracode to benefit from their experience and expertise.

Don’t try to wing it. The stakes are too high.

Coming to a computer near you, SQL: The Sequel

August 8, 2014
Filed under: ALL THINGS SECURITY 

It might sound like a bad movie, but it’s playing out in real life: despite what seems like an endless stream of hacks using SQL injection, SQLi-related breaches keep turning up like a bad penny.

Most recently, Hold Security reported that it had discovered a breach by a Russian hacker ring. While details of this series of breaches are still surfacing, it is time for enterprises to start taking web perimeter security just as seriously as security aimed at the network.

Vulnerabilities like SQL injection are pervasive in web applications, yet most enterprises aren’t aware that their web perimeter is putting their organization at risk. This is because enterprises typically don’t know how many web applications they have in their domain. When working with an organization to reduce web application perimeter risk, we regularly find 40% more websites than the customer provides as an input range. Couple this with the Verizon Data Breach Report findings that web application vulnerabilities are the number one cause of data breaches, and that 80 percent of web application breaches in the retail industry exploit SQL injection vulnerabilities, and you have a recipe for disaster.

Without visibility into the entire web perimeter, enterprises are leaving thousands of applications vulnerable and creating a long-term security threat, as cyber-criminals are constantly scanning the Internet looking for vulnerabilities like SQL injection. Given the large number of breaches caused by SQL injection and other web application vulnerabilities, we are getting to the point where it is reckless to assume that because your critical websites are secure, your risk is appropriately mitigated.
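For readers who have never seen why this vulnerability class is so stubborn, here is a minimal, self-contained sketch (using Python’s built-in sqlite3 purely for illustration) of the difference between a string-built query and a parameterized one:

```python
# SQL injection in miniature: string-built queries let attacker input
# rewrite the SQL; parameterized queries keep it as plain data.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0)")

attacker_input = "nobody' OR '1'='1"

# Vulnerable: the input is spliced directly into the query text,
# so the WHERE clause becomes: name = 'nobody' OR '1'='1'
vulnerable = f"SELECT * FROM users WHERE name = '{attacker_input}'"
print(conn.execute(vulnerable).fetchall())   # returns every row

# Safe: a placeholder passes the input as data, not as SQL.
safe = "SELECT * FROM users WHERE name = ?"
print(conn.execute(safe, (attacker_input,)).fetchall())  # returns []
```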

So what can enterprises do? Here are a few steps enterprises can take to help reduce risk:

  • Get stronger visibility into their entire web perimeter through use of a discovery solution (most enterprises don’t know the contents of their web perimeter; it’s typical to be unaware of up to 40% of the websites within the enterprise domain).
  • Determine which sites have vulnerabilities by scanning them and looking for common exploits such as SQL Injection. Modern automated cloud-based services can now accomplish this quickly and continuously — with minimum setup time and effort — across tens of thousands of sites, in days versus weeks or months.
  • Take action: decommission sites no longer in use, which ultimately reduces your company’s attack surface. In one recent example, a Global 1000 company reduced its perimeter risk by 50% by shutting down just three websites that were running unpatched software and were no longer required.

Put Your Efforts Where They Do the Most Good

August 7, 2014
Filed under: Third-Party Software 

When doing anything challenging, whether it’s a diet or writing a book, the hardest part can be figuring out where to start. Addressing software supply chain security is no different.

The typical organization has 390 business-critical applications supplied by third parties, to say nothing of the multitudes of marketing websites, operational sites, partner sites, off-the-shelf customer data management software, and others that make up its overall third-party-developed software footprint. It’s all too tempting either to lay down a blanket rule across all suppliers with no practical plan to implement it, or to give up and turn a blind eye to supplier-provided vulnerabilities.

Giving up is not recommended, given that there are proven alternatives like Veracode’s vendor application security testing program that have been successful for Boeing and Thomson Reuters, among others. But it’s also important to not fall into implementation paralysis by reaching too broadly. Or, in other words, don’t boil the ocean!

Other supply chain transformation efforts suggest several ways to go after the problem. These include the 80/20 rule and low-hanging fruit. (These examples are drawn from the excellent Wharton article “Managing Green Supply Chains.”) To these best practices, Veracode would add the “go-forward” rule.

The 80/20 rule: Wal-Mart’s energy-saving supply chain initiative began with its top 200 suppliers in China, which in 2008 constituted 60% to 80% of its total supply chain. By analogy, an enterprise could identify top software suppliers based on the number of applications or the amount of data under management, and concentrate its initial supply chain efforts there.

Low-hanging fruit: The Natural Resources Defense Council (NRDC) recommends instead gathering easy wins to create momentum. In the software supply chain, this means addressing suppliers who may already supply attestations, either publicly or to other customers, and documenting the process to create a “quick win” that can be reused as a case study.

Go-Forward: A software-supply-chain-specific variation on the “low-hanging fruit” strategy is to implement the new practices on suppliers as they enter or renew their presence in the supply chain via purchase or renewal of services. This is the point in the vendor relationship where the enterprise has natural negotiating power, and it is a good place to address new supply chain requirements if the enterprise lacks the market power to impose them on settled suppliers.

There are a variety of approaches that can be used to rapidly transform part of a supply chain, and an enterprise can choose among them based on the structure of its supplier base and its market power. Once the supply chain approach is chosen, attention turns to working with the suppliers themselves. I’ll discuss some ways to do that in the next post.

The Seven Habits of Highly Effective Third-Party Software Security Programs

  1. Choose the right suppliers
  2. Put your efforts where they do the most good
  3. Collaborate to innovate
  4. Use suppliers as force multipliers
  5. The elephant in the room is compliance
  6. Drive compliance via “WIIFM”
  7. Align benefits for enterprise and supplier – or pay

Address Proof of Software Security for Customer Requirements in 4 Steps

The world’s largest enterprises require proof of software security before they purchase new software. Why? Because third-party software is just as vulnerable to attack as software developed by internal teams. In fact, Boeing recently noted that over 90 percent of the third-party software tested as part of its program had significant, compromising flaws. As a software supplier, how do you get ahead of this trend?

Not every supplier has the resources and maturity to develop its own comprehensive secure-development process to the level of the Microsofts of the world, but that doesn’t mean security should be thrown out the window. Large, medium and small software suppliers — such as NSFOCUS and GenieConnect — have found significant benefit in incorporating binary static analysis into their development process, addressing vulnerabilities and meeting compliance with industry standards. This has earned them the VerAfied seal, which means their software product had no “very high,” “high” or “medium” severity vulnerabilities as defined by the Security Quality Score (SQS), nor any OWASP Top 10 or CWE/SANS Top 25 vulnerabilities that could be discovered using Veracode’s automated analysis.

This extra step to meet compliance with software security standards is one most suppliers don’t even consider: it could slow down development, add extra cost to the product and potentially reveal software vulnerabilities that the producer would rather not know about. Many software suppliers vainly hope that security is only necessary for a certain class of software — a banking program perhaps, but not a mobile application. However, security is relevant to every supplier, no matter their product or industry.

Software suppliers that neglect the security of their product are in for a rude awakening when the sales pipeline evaporates because they can’t answer questions about software security.

What should a supplier do to address a request for proof of software security? Here are four steps:

  1. Use — and document — secure coding practices when developing software. This may seem obvious, but developer documentation makes it easy to demonstrate that the software was developed to be secure from the very beginning.
  2. Test for vulnerabilities throughout the development process (the earlier and more frequent, the better). Don’t wait until the night before your product’s release to run your first security assessment, or your release will be delayed.
  3. Educate developers on how to find, fix and avoid security flaws. Many developers simply haven’t had proper training. Make sure they learn these skills not only for the benefit of your product, but also to improve your human capital.
  4. Proactively communicate with your customers about the steps you take to secure your product. This will improve existing relationships and help differentiate your product in the market.

It’s time for the software industry as a whole to embrace the trend of requiring proof of security as an opportunity to improve software everywhere.
