As you’ve undoubtedly heard, the Equifax credit reporting agency was hit by a major attack, exposing the personal data of 143 million Americans and many more people in other countries. There’s been a lot of discussion of liability; as of a few days ago, at least 25 lawsuits had been filed, with the state of Massachusetts preparing its own suit. It’s certainly too soon to draw any firm conclusions about who, if anyone, is at fault—we need more information, which may not be available until discovery during a lawsuit—but there are a number of interesting things we can glean from Equifax’s latest statement.
First and foremost, the attackers exploited a known bug in the open source Apache Struts package. A patch was available on March 6. Equifax says that their “Security organization was aware of this vulnerability at that time, and took efforts to identify and to patch any vulnerable systems in the company’s IT infrastructure.” The obvious question is why this particular system was not patched.
One possible answer is, of course, that patching is hard. Were they trying? What does “took efforts to identify and to patch” mean? Were the assorted development groups actively installing the patch and testing the resulting system? It turns out that this fix is difficult to install:
You then have to hope that nothing is broken. If you’re using Struts 2.3.5 then in theory Struts 2.3.32 won’t break anything. In theory it’s just bug fixes and security updates, because the major.minor version is unchanged. In theory.
In practice, I think any developer going from 2.3.5 to 2.3.32 without a QA cycle is very brave, or very foolhardy, or some combination of the two. Sure, you’ll have your unit tests (maybe), but you’ll probably need to deploy into your QA environment and do some kind of integration testing too. That’s assuming, of course, that you have a compatible QA environment within which you can deploy your old, possibly abandoned application.
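The "major.minor unchanged" reasoning in the quote above can be made concrete. Here is a minimal sketch (my illustration, not anything from Equifax or the quoted author) of a semantic-versioning check that classifies a jump like 2.3.5 → 2.3.32 as patch-level on paper:

```python
# Hypothetical sketch: deciding whether a dependency bump is "patch-level"
# under semantic versioning, as the Struts 2.3.5 -> 2.3.32 jump nominally is.
# The version strings here are from the discussion above; the code is mine.

def parse_version(v):
    """Split a dotted version string into a tuple of integers."""
    return tuple(int(part) for part in v.split("."))

def same_major_minor(old, new):
    """True if only the patch component changed (e.g. 2.3.5 -> 2.3.32)."""
    return parse_version(old)[:2] == parse_version(new)[:2]

print(same_major_minor("2.3.5", "2.3.32"))   # True: patch-level on paper
print(same_major_minor("2.3.32", "2.5.10"))  # False: the minor version changed
```

In theory a same-major.minor upgrade is only bug fixes and security updates; in practice, as the quote stresses, you still want a QA cycle before trusting that.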
Were they trying hard enough, i.e., devoting enough resources to the problem?
Ascertaining liability here—moral and/or legal—can’t be done without seeing the email traffic between the security organization and the relevant development groups; you’d also have to see the activity logs (code changes, test runs, etc.) of these groups. Furthermore, if problems were found during testing, it might take quite a while to correct the code, especially if there were many Struts apps that needed to be fixed.
As hard as patching and testing are, though, when there are active exploitations going on you have to take the risk and patch immediately. That was the case with this vulnerability. Did the Security group know about the active attacks or not? If they didn’t, they probably aren’t paying enough attention to important information sources. Again, this is information we’re only likely to learn through discovery. If they did know, why didn’t they order a flash-patch? Did they even know which systems were vulnerable? Put another way, did they have access to a comprehensive database of hardware and software systems in the company? They need one—there are all sorts of other things you can’t do easily without such a database. Companies that don’t invest up front in their IT infrastructure will hurt in many other ways, too. Equifax has a market capitalization of more than $17 billion; they don’t really have an excuse for not running a good IT shop.
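To illustrate why such a database matters, here is a toy sketch of the kind of query a comprehensive software inventory makes trivial: list every host still running a vulnerable build of a package. Everything here (hostnames, data, the cutoff version) is invented for illustration; I have no knowledge of Equifax's actual systems.

```python
# Hypothetical asset database mapping hosts to installed packages and
# versions. With such an inventory, "which systems are vulnerable?" is a
# one-line query rather than a scramble. All data below is made up.

FIXED_IN = (2, 3, 32)  # first patched 2.3.x release, for this illustration

inventory = {
    "dispute-portal-01": {"struts": "2.3.5", "tomcat": "7.0.70"},
    "payments-api-02":   {"struts": "2.3.32"},
    "hr-intranet-03":    {"tomcat": "8.5.3"},  # no Struts installed at all
}

def vulnerable_hosts(inventory, package="struts", fixed_in=FIXED_IN):
    """Return hosts whose installed `package` predates the fixed version."""
    hits = []
    for host, packages in inventory.items():
        version = packages.get(package)
        if version is None:
            continue  # package not installed on this host
        if tuple(int(p) for p in version.split(".")) < fixed_in:
            hits.append(host)
    return sorted(hits)

print(vulnerable_hosts(inventory))  # ['dispute-portal-01']
```

Without the inventory, answering that same question means surveying every development group by hand, which is exactly where a low-priority server can slip through.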
It may be, of course, that Equifax knew all of that and still chose to leave the vulnerable servers up. Why? Apparently, the vulnerable machine was their “U.S. online dispute portal”. I’m pretty certain that they’re required by law to have a dispute mechanism, and while it probably doesn’t have to be a website (and some people suggest that complainants shouldn’t use it anyway), it’s almost certainly a much cheaper way to receive disputes than paper mail. That opens the possibility that there was a conscious decision that taking the risk was worthwhile. Besides, if many applications needed patching and they had limited development resources, they’d have had to set priorities on which web servers were more at risk. Again, we need more internal documents to know.
Some text in the announcement does suggest either ignorance or a conscious decision to delay patching—the timeline from Equifax implies that they were able to patch Struts very quickly after observing anomalous network traffic to that server. That is, once they knew that there was a specific problem, rather than a potential one, they were able to respond very quickly. Alternatively, this server was on the “must be patched” list, but was too low down on the priority list until the actual incident was discovered.
We thus have several possible scenarios: difficulty in patching a large number of Struts applications, ignorance of the true threat, inadequate IT infrastructure, or a conscious decision to wait, possibly for priority reasons. The first and perhaps the last would seem to be exculpatory; the others would leave the company in a bad moral position. But without more data we can’t distinguish among these cases.
A more interesting question is why it took Equifax so long to detect the breach. They did notice anomalous network traffic, but not until July 29. Their statement says that data was exposed starting May 13. Did they have inadequate intrusion detection? That might be more serious from a liability standpoint—unlike patching, running an IDS doesn’t risk breaking things. You need to tune your IDS correctly to avoid too many false positives, and you need to pay attention to alerts, but it is beyond dispute that an enterprise of Equifax’s scale should have one deployed. It is instructive to read what Judge Learned Hand wrote in 1932 in a liability case when some barges sank because the tugboat did not have a weather radio:
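The core idea behind spotting anomalous traffic can be sketched very simply: establish a baseline and flag large deviations from it. The toy example below (entirely my invention; real IDS tuning is far more involved, which is exactly the false-positive problem mentioned above) flags any day whose outbound byte count exceeds the recent mean by more than three standard deviations:

```python
# Toy baseline-and-threshold anomaly detector. All traffic numbers are
# invented; this only illustrates the principle, not any real IDS.
import statistics

def anomalous_days(daily_bytes, history_len=7, sigmas=3.0):
    """Return indices of days whose traffic exceeds mean + sigmas * stdev
    of the preceding `history_len` days."""
    flagged = []
    for i in range(history_len, len(daily_bytes)):
        window = daily_bytes[i - history_len : i]
        mean = statistics.mean(window)
        stdev = statistics.stdev(window)
        if daily_bytes[i] > mean + sigmas * stdev:
            flagged.append(i)
    return flagged

# Seven quiet days, then a spike of the sort bulk exfiltration might cause.
traffic = [100, 104, 98, 101, 99, 103, 100, 950]
print(anomalous_days(traffic))  # [7]
```

The hard part in production is not this arithmetic but choosing thresholds that don’t bury the security team in false alarms—and then actually acting on the alerts.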
Indeed in most cases reasonable prudence is in fact common prudence; but strictly it is never its measure; a whole calling may have unduly lagged in the adoption of new and available devices. It may never set its own tests, however persuasive be its usages. Courts must in the end say what is required; there are precautions so imperative that even their universal disregard will not excuse their omission… But here there was no custom at all as to receiving sets; some had them, some did not; the most that can be urged is that they had not yet become general. Certainly in such a case we need not pause; when some have thought a device necessary, at least we may say that they were right, and the others too slack… We hold [against] the tugs therefore because [if] they had been properly equipped, they would have got the Arlington [weather] reports. The injury was a direct consequence of this unseaworthiness.
It strikes me as entirely possible that Equifax’s exposure is greater on this issue than on patching.
This is a big case, affecting a lot of people. The outcome is likely to change the norms of how corporations world-wide protect their infrastructure. I hope the change will be in the right direction.
* * *
Update – Monday, Sep 18:
A news report today claims that Equifax was hacked twice, once in March (which is very soon after the Struts vulnerability was disclosed) and once in mid-May. The news article does not say if the same vulnerability was exploited; it does, however, say that their sources claim that “the breaches involve the same intruders”.
If it was the same exploit, it suggests to me one of the possibilities I mentioned above: that the company lacked a comprehensive software inventory. After all, if you know there’s a hole in some package and you know that you’re being targeted by attackers who know of it and have used it against you, you have a very strong incentive to fix all instances immediately. That Equifax did not do so would seem to indicate that they were unaware that they were still vulnerable. In fact, the real question might be why it took the attackers so long to return. Maybe they couldn’t believe that that door would still be open…
On another note, several people have sent me notes pointing out that Susan Mauldin, the former CSO at Equifax, graduated with degrees in music, not computer science. I was aware of that and regard it as quite irrelevant. As I and others have pointed out, gender bias seems to be a more likely explanation for the complaints. And remember that being a CSO is a thankless job.
Update – Thursday, Sep 21:
In the Sep. 18 update above, I noted that Equifax had been breached in March, and quoted the article as saying that the attackers had been “the same intruders” as in the May breach. In a newer news report, Equifax has denied that:
“The March event reported by Bloomberg is not related to the criminal hacking that was discovered on 29 July,” Equifax’s statement continues. “Mandiant has investigated both events and found no evidence that these two separate events or the attackers were related. The criminal hacking that was discovered on 29 July did not affect the customer databases hosted by the Equifax business unit that was the subject of the March event.”
So: I’ll withdraw the speculation I posted about this incident confirming one of my hypotheses and wait for further, authoritative information. I repeat my call for public investigations of incidents of this scale.
Also worth noting: Brian Krebs was one of the very few to report the March incident.
>Equifax has a market capitalization of more than $17 billion;
>they don’t really have an excuse for not running a good IT shop.
http://www.zerohedge.com/news/2017-09-15/another-equifax-coverup-did-company-scrub-its-chief-security-officer-was-music-major
“Mauldin’s original LinkedIn page was on this url before it was made completely private: linkedin.com/in/susan-mauldin-93069a (now a 404 page not found)
A few days after the news of the data hacking broke, the following page reappeared with a different URL, with the specific detail that her degrees were in Music Composition removed. Also, her surname Mauldin was replaced with the initial letter M. to complicate profile discovery.”
http://www.zerohedge.com/news/2017-09-18/justice-department-begins-criminal-probe-equifax-executive-stock-sales
>Were they trying hard enough, i.e., devoting enough resources to the problem?
https://www.bloomberg.com/news/articles/2017-09-18/equifax-is-said-to-suffer-a-hack-earlier-than-the-date-disclosed
“Equifax Suffered a Hack Almost Five Months Earlier Than the Date It Disclosed”
“In a statement, the company said the March breach was not related to the hack that exposed the personal and financial data on 143 million U.S. consumers, but one of the people said the breaches involve the same intruders. Either way, the revelation that the 118-year-old credit-reporting agency suffered two major incidents in the span of a few months adds to a mounting crisis at the company, which is the subject of multiple investigations and announced the retirement of two of its top security executives on Friday.”
https://www.bloomberg.com/news/articles/2017-09-15/equifax-says-cio-chief-security-officer-to-leave-after-breach
“The firm’s chief information and chief security officers are retiring immediately, the Atlanta-based company said Friday in a statement that didn’t name the individuals [Susan Mauldin, music major]. “
Nice survey of issues for this unfortunate topic. The quotation of Judge Hand is especially apt. It points to the question of industry practice. Given the continuing pattern of compromises across the industry, it strongly suggests that identification of critical-service operations and critical operational practices needs quite a bit more work. Some services produce massive damage if compromised. There should be a process for identifying these and a robust set of design and operations practices expected (required?) of them.
>Some services produce massive damage if compromised.
>There should be a process for identifying these

https://www.youtube.com/watch?v=cRDTIx6mu2E

It comes down to conflict of interest. We “consumers” are not the customer here; we are the product being traded. Thus protection of *US* is not a priority. As the video shows, bad things can happen to those in a position to press the issue.

https://security.stackexchange.com/questions/5594/whistleblowing-business-ethics-and-credit-card-data

"The credit card number and expiration date are written on the back of the paper "ticket". I also know for a fact that these tickets are passed between 3-6 people, leave the premises and, at least on one occasion, get thrown in the garbage. Not shredded and thrown in the garbage, just thrown in the garbage."

In my own case, CitiBank kept moving back my credit card due date after over a decade of it never changing. I did not notice until they hit me with massive fees for a late payment. I called them up and told them to remove the fees, to which I was told "Why would we do that?!", trying to intimidate me. I told her, "Because you manipulated my due date to cause this to happen." She spoke to a supervisor, and the fees were removed.

I then demanded my credit line be reduced to $1. This is a little detail everybody needs to know about: banks MUST reduce your credit line if you ask; they have no choice. You must do this before canceling a credit card, because if you don't, they will not report the closing of your account to the credit agencies, thus FORCING YOU to come back to them if you want another card. Reduce your credit line to $1, wait a few months for that to propagate to the credit agencies, and only then close your account. Which I did for my CitiBank cards. Some banks might demand you close the account if you reduce the credit line that low, so just reduce it to their "limit" and wait for that amount to propagate to the report.

Yes, this does affect your FICO score these days; however, such account closures and credit line reductions are noted on your reports as "customer requested". Thus when a human actually looks at your reports, they see you told the credit card company what they can do with their "service", and the credit system then "expressed its dissatisfaction" (FICO score) with your having done that. The smoke clears out after a few months; after all, they want you to be in debt to them.

As for the chips now in cards, they're a relatively meaningless illusion. My Discover account was somehow exposed at the beginning of this year. For about 5 months straight, I had to get a new card issued each month. The people I spoke to on the phone acknowledged the problem and that the chips were solving little. At the farm store I was told that the only thing that changes with the chip is this: if they run the chip and there is fraud, the bank eats the cost; if they don't run the chip, then they eat the cost of the fraud. Thus the chip does NOT prevent fraud but only changes who pays for it.

Does that really sound like the credit system has any care for you and me? Like I said, we are not the customer here; we are the product. Paying off all our debts is the best way to solve this problem, so there is less commercial value in buying and selling our personal electronic persona and its description. But we know that will never happen. In the end, debt and spending raise GDP, which the federal government protects, so the credit agencies will be protected.
One issue is that not patching software creates the very problems that make patching such a touchy issue. If you have to jump all the way from 2.3.5 to 2.3.32, there are a lot of chances for trouble. But why are you having to make that jump? If you had been patching as new versions came out, each patch would have been much smaller and easier to check for problems, and you wouldn't be facing the big jump when 2.3.32 came out with a must-apply fix, because you'd be on 2.3.31 or 2.3.30 instead of 2.3.5.
On a related note, I think the issue of compatibility of updates with existing code is somewhat overblown. I deal with just that in my job, and I find that there are rarely any problems caused by the update unless it’s a major update with known API changes. Most often the problems are caused by the existing code itself taking a convoluted approach tied to the exact implementation rather than the most straightforward approach, and often it seems like it’s done in a deliberate attempt to tie the code to a specific version of the dependencies. That seems to go hand-in-hand with an argument I regularly have with the platform/infrastructure teams: tightly-constrained environments that require things to be done one way and one way only vs. loosely-constrained environments that permit doing things any way that produces the desired results without impacting anything else.