By Kevin Boyle & Alex Stout
Hardly a day passes now without some new report of a security vulnerability and the inevitable breaches that follow, but Monday’s news about the two-year-old vulnerability in OpenSSL is (or should be) catching everyone’s attention. The problem is a coding error in a widely used cryptographic software library for implementing secure connections between a website (or the web interface on a hardware device) and its user (typically indicated by a reassuring padlock in the status line of a browser). The bug allows undetectable access to random blocks of memory on the server. This means essentially all information being processed by the server may be exposed to an attack that exploits the vulnerability, including user names, passwords and, critically, the keys used to protect information on what are intended to be secure connections. (The official reference for the vulnerability is CVE-2014-0160, but the bug has been dubbed “Heartbleed” because it lies in the code implementing a “heartbeat” extension used to check the status of a secure connection.)
So, what does this mean for those charged with managing data security obligations?
- Who is really affected? The threshold question is whether your organization uses the relevant versions of the OpenSSL code in any public-facing web servers. OpenSSL is included in a number of widely used Linux-based web server packages (including as part of a Linux-based front end to some Windows servers). The vulnerable versions of the software are OpenSSL 1.0.1 through 1.0.1f. The OpenSSL project and others have posted detailed information you can use to identify whether your servers are affected. Make sure you check for use of the vulnerable code in any public-facing hardware that incorporates an SSL-protected configuration or management console in firmware.
Note: Although there has been a lot of press suggesting that “virtually every” server on the internet is affected by Heartbleed, it is only those running the vulnerable versions of OpenSSL that are at risk. According to Netcraft, this is about 17% of the servers running on platforms that could use OpenSSL. That’s a lot of servers, but far from “all.” So, again, the threshold question is whether you are running the vulnerable versions of OpenSSL.
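One quick, if partial, check your IT team can run: Python’s standard `ssl` module reports the OpenSSL library it is linked against, and that version tuple can be tested against the vulnerable range (1.0.1, patch level 0, through 1.0.1f, patch level 6; 1.0.1g and later are fixed). This is a minimal sketch only — it inspects the copy of OpenSSL your Python interpreter uses, which may differ from the one your web server loads:

```python
import ssl

def is_heartbleed_vulnerable(info):
    """True if an OpenSSL version tuple (major, minor, fix, patch, status)
    falls in the vulnerable range 1.0.1 (patch 0) through 1.0.1f (patch 6).
    1.0.1g (patch 7) and later are fixed."""
    major, minor, fix, patch = info[:4]
    return (major, minor, fix) == (1, 0, 1) and patch <= 6

# Check the OpenSSL this interpreter is linked against.
print(ssl.OPENSSL_VERSION)
print("vulnerable:", is_heartbleed_vulnerable(ssl.OPENSSL_VERSION_INFO))
```

Treat a “not vulnerable” result here as one data point, not clearance: check the advisory from your distribution or server package vendor for the authoritative answer.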
- Patch/update code. If your organization operates vulnerable systems, you should make sure your IT team is using information available from OpenSSL or your server package vendor to patch or update the code. Because your certificate(s) for the server may have been compromised, you should consider revoking and replacing the certificates after patching the vulnerabilities. (While at least one certificate authority—bracing for an onslaught of certificate revocation requests—has questioned the risk of certificate compromise, even it is advising that those credentials be assumed subject to compromise for now.)
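After patching, it is worth confirming that the installed library actually reports a fixed release. A small sketch along those lines, assuming the `openssl` command-line tool is on the PATH (the exact banner format can vary by build, so the pattern below is a heuristic, not a guarantee):

```python
import re
import shutil
import subprocess

# Matches banners naming 1.0.1 through 1.0.1f (including e.g. "1.0.1e-fips"),
# but not 1.0.1g and later, 1.0.2, or older branches.
VULNERABLE = re.compile(r"OpenSSL 1\.0\.1[a-f]?(?![0-9a-z])")

def banner_is_vulnerable(banner):
    """True if an `openssl version` banner names a Heartbleed-affected release."""
    return bool(VULNERABLE.search(banner))

if shutil.which("openssl"):  # skip quietly if the CLI is not installed
    banner = subprocess.run(["openssl", "version"],
                            capture_output=True, text=True).stdout.strip()
    status = "VULNERABLE" if banner_is_vulnerable(banner) else "not affected"
    print(banner, "->", status)
```

Note that a server may load a different copy of the library than the command-line tool reports, so this check should supplement, not replace, your vendor’s patch verification steps.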
- Communicate with users. Whether you ran a vulnerable version of OpenSSL or not, assuming you operate any user- or customer-facing SSL connections, you’ll probably want to provide information to your users and customers about this issue. This should include information about whether or not your systems were vulnerable and, if they were, the time you patched or updated OpenSSL and the actions you’ve taken with respect to your digital certificates. Here are links to examples of “OK” and “resolved” or “being resolved” messages. Be sure to carefully tailor yours so it is accurate. A number of companies have already posted useful notices, but they remain in the minority of affected websites and device manufacturers.
- Are there other disclosure obligations? Finally, with respect to your servers (assuming you were using vulnerable code), you need to consider whether you have any disclosure obligations arising under contracts or applicable law. Although the news media are frequently referring to this incident as a “breach,” it is uncertain whether evidence of a vulnerability alone is sufficient to trigger reporting or notification obligations. Constructions vary widely, but typically breach disclosure obligations arise in the event of an actual breach (always) or evidence that a breach may have occurred (sometimes). In some cases, especially under commercial contracts, the mere fact that vulnerable code was running may be enough to trigger a disclosure obligation (which would need to meet contract requirements, but is likely to be similar to the customer disclosure suggested above). But, in most cases, absent some indication that the vulnerability was actually exploited (which might be indicated by evidence of use of protected information from the servers), there is not likely to be a state requirement for notifying individual users or customers.
Most U.S. state breach notification statutes require notification only when the data custodian has knowledge (for example, Massachusetts, Mass. Gen. Laws 93H § 1 et seq.) or a reasonable belief (for example, Florida, Fla. Stat. § 817.5681) of data having been compromised, both of which appear to require more than mere discovery of a coding flaw. Even states without a clear “knowledge” requirement (for example, Washington, D.C., D.C. Code § 28-3851 et seq.) still seem to require more by defining “breach” with a requirement that data be acquired. Some states, such as Delaware (Del. Code Ann. tit. 6 § 12B-101 et seq.) and Idaho (Idaho Code § 28-51-104 et seq.), may require you to undertake a good-faith investigation to determine if there is a reasonable likelihood of harm to consumers.
Whether or not notification statutes or regulations are triggered will depend on the precise facts of your situation, so make sure consideration of the issue is on your checklist. Remember that contract standards may have a lower threshold for what constitutes a breach than that in typical breach disclosure statutes. As to the statutes, while a review of all breach disclosure law is well beyond the scope of a blog post, in the absence of any other indicators that a breach has occurred, the mere fact that one was running vulnerable code is not likely to constitute a breach under such laws.
On the other side of the table, what does this mean if you or your business use a site or service that may be vulnerable?
- As an initial step, check the status of your vendor’s site. You could simply ask or look for information on the vendor’s site. While there are tools available to test sites, bear in mind that they speak to current status, so they won’t tell you if the vulnerable code was running at some prior point.
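One data point you can gather yourself is when a site’s current certificate became valid: a `notBefore` date after the public disclosure on April 7, 2014 is consistent with the certificate having been reissued, though it proves nothing about what code was running earlier. A minimal sketch using Python’s standard `ssl` module (the host name in the usage comment is illustrative):

```python
import socket
import ssl

def certificate_not_before(host, port=443):
    """Connect over TLS and return the server certificate's validity
    start time as seconds since the epoch."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    return ssl.cert_time_to_seconds(cert["notBefore"])

# Heartbleed was publicly disclosed on April 7, 2014.
DISCLOSURE = ssl.cert_time_to_seconds("Apr  7 00:00:00 2014 GMT")

# Example usage (hypothetical host):
#   if certificate_not_before("www.example.com") > DISCLOSURE:
#       print("certificate was issued after disclosure")
```

A post-disclosure issue date is only suggestive — certificates are reissued for many reasons — so treat it as one input alongside the vendor’s own statements.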
- Consider taking steps to make sure your browsers (and other client devices) are not tricked by revoked certificates. This may include setting your browsers to reject revoked certificates or taking other steps to exclude access to sites or services that are using revoked certificates.
- Check the terms of your contract. Many commercial agreements require software and hardware vendors to notify their customers about breaches or vulnerabilities as soon as they are discovered. For vendors handling important information (especially information that is PII of others for which you have potential breach disclosure obligations), determine what information you might be entitled to receive, ask for it, and use what you learn to develop your own risk assessment. As discussed above, notification laws vary widely and require unique determinations based on the facts of your situation, but in the absence of other indications that a breach has occurred, the fact that your vendor used vulnerable code on a server used to process PII for which you are responsible is not likely to give rise to a notice obligation. Talk to your vendor to determine whether any of your data may have actually been exposed by this vulnerability and make your notification decisions accordingly.
- Finally, this is another great opportunity to remind users to use strong, unique passwords for each application, and to change them frequently. You (and your employees) should carefully consider changing the passwords for any site processing sensitive data that used the flawed version of OpenSSL (or even for non-sensitive data if the same password is used at any site processing sensitive data). And, as is always the case when a security incident occurs (or is evaluated), consider whether any lessons learned in responding suggest updates to your information security policies and procedures.
John Pennebaker, CISSP and Jon Joke, CISSP contributed to this post. They are, respectively, Information Security Officer and Manager of Systems Integration at Latham & Watkins LLP.