
Security Orchestration and Incident Response

Last month at the RSA Conference, I saw a lot of companies selling security incident response automation. Their promise was to replace people with computers – sometimes with the addition of machine learning or other artificial intelligence techniques – and to respond to attacks at computer speeds.

While this is a laudable goal, there’s a fundamental problem with doing this in the short term. You can only automate what you’re certain about, and there is still an enormous amount of uncertainty in cybersecurity. Automation has its place in incident response, but the focus needs to be on making the people effective, not on replacing them – security orchestration, not automation.

This isn’t just a choice of words – it’s a difference in philosophy. The US military went through this in the 1990s. What was called the Revolution in Military Affairs (RMA) was supposed to change how warfare was fought. Satellites, drones and battlefield sensors were supposed to give commanders unprecedented information about what was going on, while networked soldiers and weaponry would enable troops to coordinate to a degree never before possible. In short, the traditional fog of war would be replaced by perfect information, providing certainty instead of uncertainty. They, too, believed certainty would fuel automation and, in many circumstances, allow technology to replace people.

Of course, it didn’t work out that way. The US learned in Afghanistan and Iraq that there are a lot of holes in both its collection and coordination systems. Drones have their place, but they can’t replace ground troops. The advances from the RMA brought with them some enormous advantages, especially against militaries that didn’t have access to the same technologies, but never resulted in certainty. Uncertainty still rules the battlefield, and soldiers on the ground are still the only effective way to control a region of territory.

But along the way, we learned a lot about how the feeling of certainty affects military thinking. Last month, I attended a lecture on the topic by H.R. McMaster. This was before he became President Trump’s national security advisor-designate; at the time, he was the director of the Army Capabilities Integration Center. His lecture touched on many topics, but at one point he talked about the failure of the RMA. He confirmed that military strategists mistakenly believed that data would give them certainty. But he took the analysis further, outlining the ways this belief in certainty shaped how military strategists thought about modern conflict.

McMaster’s observations are directly relevant to Internet security incident response. We too have been led to believe that data will give us certainty, and we are making the same mistakes that the military did in the 1990s. In a world of uncertainty, there’s a premium on understanding, because commanders need to figure out what’s going on. In a world of certainty, knowing what’s going on becomes a simple matter of data collection.

I see this same fallacy in Internet security. Many companies exhibiting at the RSA Conference promised to collect and display more data, and claimed that the data would reveal everything. This simply isn’t true. Data does not equal information, and information does not equal understanding. We need data, but we also must prioritize understanding the data we have over collecting ever more data. Much like the problems with bulk surveillance, the “collect it all” approach provides minimal value over collecting the specific data that’s useful.

In a world of uncertainty, the focus is on execution. In a world of certainty, the focus is on planning. I see this manifesting in Internet security as well. My own Resilient Systems – now part of IBM Security – allows incident response teams to manage security incidents and intrusions. While the tool is useful for planning and testing, its real focus is always on execution.

Uncertainty demands initiative, while certainty demands synchronization. Here, again, we are heading too far down the wrong path. The purpose of all incident response tools should be to make the human responders more effective. They need both the latitude to take the initiative and the capability to act on it effectively.

When things are uncertain, you want your systems to be decentralized. When things are certain, centralization is more important. Good incident response teams know that decentralization goes hand in hand with initiative. And finally, a world of uncertainty prioritizes command, while a world of certainty prioritizes control. Again, effective incident response teams know this, and effective managers aren’t scared to release and delegate control.

Like the US military, we in the incident response field have shifted too much into the world of certainty. We have prioritized data collection, preplanning, synchronization, centralization and control. You can see it in the way people talk about the future of Internet security, and you can see it in the products and services offered on the show floor of the RSA Conference.

Automation, too, is fixed. Incident response needs to be dynamic and agile, because you are never certain and there is an adaptive, malicious adversary on the other end. You need a response system that has human controls and can modify itself on the fly. Automation just doesn’t allow a system to do that to the extent that’s needed in today’s environment. Just as the military shifted from trying to replace the soldier to making the best soldier possible, we need to do the same.

For some time, I have been talking about incident response in terms of OODA loops. This is a way of thinking about real-time adversarial relationships, originally developed for airplane dogfights, but much more broadly applicable. OODA stands for observe-orient-decide-act, and it’s what people responding to a cybersecurity incident do constantly, over and over again. We need tools that augment each of those four steps. These tools need to operate in a world of uncertainty, where there is never enough data to know everything that is going on. We need to prioritize understanding, execution, initiative, decentralization and command.
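
To make the OODA framing a little more concrete, here is a minimal sketch in Python, under the assumption that each step is a pluggable tool and the decide step is a person; every name below is hypothetical, not any real product’s interface:

```python
# Minimal sketch of an OODA-style incident-response loop. All of the
# callables (observe, orient, decide, act, resolved) are hypothetical
# stand-ins for tools and people; the shape of the loop is the point.

def respond(incident, observe, orient, decide, act, resolved):
    """Run observe-orient-decide-act repeatedly until the incident is resolved."""
    while not resolved(incident):
        events = observe(incident)           # telemetry: IDS alerts, logs, net data
        context = orient(incident, events)   # what do the events mean for this org?
        choice = decide(incident, context)   # a human judgment call, made via some UI
        act(incident, choice)                # execute the choice, then loop again
    return incident
```

In this picture, the tools’ job is to make each call inside the loop faster and better informed, not to remove the decide step.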

At the same time, we’re going to have to make all of this scale. If anything, the most seductive promise of a world of certainty and automation is that it allows defense to scale. The problem is that we’re not there yet. We can automate and scale parts of IT security, such as antivirus, automatic patching and firewall management, but we can’t yet scale incident response. We still need people. And we need to understand what can be automated and what can’t be.

The word I prefer is orchestration. Security orchestration represents the union of people, process and technology. It’s computer automation where it works, and human coordination where that’s necessary. It’s networked systems giving people understanding and capabilities for execution. It’s making those on the front lines of incident response the most effective they can be, instead of trying to replace them. It’s the best approach we have for cyberdefense.

Automation has its place. If you think about the product categories where it has worked, they’re all areas where we have pretty strong certainty. Automation works in antivirus, firewalls, patch management and authentication systems. None of them is perfect, but all those systems are right almost all the time, and we’ve developed ancillary systems to deal with it when they’re wrong.
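
As a rough sketch of where that certainty boundary might sit (hypothetical names, categories, and threshold, not any particular product’s behavior): automate the categories where the verdict is nearly always right, prefer reversible actions so the ancillary systems can recover from mistakes, and hand everything else to a person.

```python
# Sketch: automate only high-certainty alert categories, and keep the
# automated actions reversible (quarantine, block) so mistakes can be
# undone. All names and the 0.99 threshold are hypothetical.

def quarantine_file(alert):
    print(f"[auto] quarantined {alert['path']}")       # placeholder action

def block_ip(alert):
    print(f"[auto] blocked {alert['src_ip']}")          # placeholder action

def schedule_patch(alert):
    print(f"[auto] patch queued for {alert['host']}")   # placeholder action

AUTOMATABLE = {
    "known_malware_signature": quarantine_file,
    "firewall_policy_violation": block_ip,
    "missing_patch": schedule_patch,
}

def handle(alert, human_queue, threshold=0.99):
    """Act at machine speed only when category and confidence are both certain."""
    action = AUTOMATABLE.get(alert.get("category"))
    if action and alert.get("confidence", 0.0) >= threshold:
        action(alert)                 # near-certain: automation is safe here
    else:
        human_queue.append(alert)     # uncertain: a person has to orient and decide
```

Anything that falls outside the short list of near-certain categories lands in the human queue, which is where orchestration, rather than automation, takes over.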

Automation fails in incident response because there’s too much uncertainty. Actions can be automated once the people understand what’s going on, but people are still required. For example, IBM’s Watson for Cyber Security provides insights for incident response teams based on its ability to ingest and find patterns in an enormous amount of freeform data. It does not attempt the level of understanding necessary to take people out of the equation.

From within an orchestration model, automation can be incredibly powerful. But it’s the human-centric orchestration model – the dashboards, the reports, the collaboration – that makes automation work. Otherwise, you’re blindly trusting the machine. And when an uncertain process is automated, the results can be dangerous.
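
One way to picture the difference, offered as a sketch under assumed interfaces rather than any vendor’s actual design, is to wrap the same automated action in a record that a responder sees, can veto, and can audit afterward:

```python
# Sketch: automation inside a human-centric orchestration model. The
# action itself is automated, but it is proposed, approved (or vetoed)
# by a responder, and logged either way. All names are hypothetical.

from datetime import datetime, timezone

audit_log = []

def propose(action_name, run_action, approver, reason):
    """Run an automated action only if a human approves it, and record the outcome."""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action_name,
        "reason": reason,
        "approved": False,
    }
    if approver(action_name, reason):   # a person, via dashboard, chat, or pager
        entry["approved"] = True
        run_action()                    # the automated part
    audit_log.append(entry)             # the record survives either way
    return entry["approved"]

# Example (hypothetical names):
# propose("isolate-host-db7", lambda: isolate("db7"),
#         approver=on_call_analyst, reason="beaconing to known C2")
```

The design choice here is that the record exists whether or not the action runs, so the machine is never trusted blindly.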

Technology continues to advance, and this is all a changing target. Eventually, computers will become intelligent enough to replace people at real-time incident response. My guess, though, is that computers are not going to get there by collecting enough data to be certain. More likely, they’ll develop the ability to exhibit understanding and operate in a world of uncertainty. That’s a much harder goal.

Yes, today, this is all science fiction. But it’s not stupid science fiction, and it might become reality during the lifetimes of our children. Until then, we need people in the loop. Orchestration is a way to achieve that.

This essay previously appeared on the Security Intelligence blog.

New Presidential Directive on Incident Response

Last week, President Obama issued a policy directive (PPD-41) on cyber-incident response coordination. The FBI is in charge, which is no surprise. Actually, there’s not much surprising in the document. I suppose it’s important to formalize this stuff, but I think it largely describes what already happens.

News article. Brief analysis. The FBI’s perspective.

The Future of Incident Response

Security is a combination of protection, detection, and response. It’s taken the industry a long time to get to this point, though. The 1990s was the era of protection. Our industry was full of products that would protect your computers and network. By 2000, we realized that detection needed to be formalized as well, and the industry was full of detection products and services.

This decade is one of response. Over the past few years, we’ve started seeing incident response (IR) products and services. Security teams are incorporating them into their arsenal because of three trends in computing. One, we’ve lost control of our computing environment. More of our data is held in the cloud by other companies, and more of our actual networks are outsourced. This makes response more complicated, because we might not have visibility into parts of our critical network infrastructures.

Two, attacks are getting more sophisticated. The rise of the APT (advanced persistent threat) – attacks that specifically target their victims for reasons other than simple financial theft – brings with it a new sort of attacker, which requires a new threat model. Also, as hacking becomes a more integral part of geopolitics, unrelated networks are increasingly collateral damage in nation-state fights.

And three, companies continue to under-invest in protection and detection, both of which are imperfect even under the best of circumstances, obliging response to pick up the slack.

Way back in the 1990s, I used to say that “security is a process, not a product.” That was a strategic statement about the fallacy of thinking you could ever be done with security; you need to continually reassess your security posture in the face of an ever-changing threat landscape.

At a tactical level, security is both a product and a process. Really, it’s a combination of people, process, and technology. What changes are the ratios. Protection systems are almost entirely technology, with some assistance from people and process. Detection requires more-or-less equal proportions of people, process, and technology. Response is mostly done by people, with critical assistance from process and technology.

Usability guru Lorrie Faith Cranor once wrote, “Whenever possible, secure system designers should find ways of keeping humans out of the loop.” That’s sage advice, but you can’t automate IR. Everyone’s network is different. All attacks are different. Everyone’s security environments are different. The regulatory environments are different. All organizations are different, and political and economic considerations are often more important than technical considerations. IR needs people, because successful IR requires thinking.

This is new for the security industry, and it means that response products and services will look different. For most of its life, the security industry has been plagued with the problems of a lemons market. That’s a term from economics that refers to a market where buyers can’t tell the difference between good products and bad. In these markets, mediocre products drive good ones out of the market; price is the driver, because there’s no good way to test for quality. It’s been true in anti-virus, it’s been true in firewalls, it’s been true in IDSs, and it’s been true elsewhere. But because IR is people-focused in ways protection and detection are not, it won’t be true here. Better products will do better because buyers will quickly be able to determine that they’re better.

The key to successful IR is found in Cranor’s next sentence: “However, there are some tasks for which feasible, or cost effective, alternatives to humans are not available. In these cases, system designers should engineer their systems to support the humans in the loop, and maximize their chances of performing their security-critical functions successfully.” What we need is technology that aids people, not technology that supplants them.

The best way I’ve found to think about this is OODA loops. OODA stands for “observe, orient, decide, act,” and it’s a way of thinking about real-time adversarial situations developed by US Air Force military strategist John Boyd. He was thinking about fighter jets, but the general idea has been applied to everything from contract negotiations to boxing – and computer and network IR.

Speed is essential. People in these situations are constantly going through OODA loops in their head. And if you can do yours faster than the other guy – if you can “get inside his OODA loop” – then you have an enormous advantage.

We need tools to facilitate all of these steps (a sketch of how they might fit together follows the list):

  • Observe, which means knowing what’s happening on our networks in real time. This includes real-time threat detection information from IDSs, log monitoring and analysis data, network and system performance data, standard network management data, and even physical security information – and then having the tools to synthesize and present it all in useful formats. Incidents aren’t standardized; they’re all different. The more an IR team can observe what’s happening on the network, the more they can understand the attack. This means that an IR team needs to be able to operate across the entire organization.
  • Orient, which means understanding what it means in context, both in the context of the organization and the context of the greater Internet community. It’s not enough to know about the attack; IR teams need to know what it means. Is there new malware being used by cybercriminals? Is the organization rolling out a new software package or planning layoffs? Has the organization seen attacks from this particular IP address before? Has the network been opened to a new strategic partner? Answering these questions means tying data from the network to information from the news, network intelligence feeds, and other information from the organization. What’s going on in an organization often matters more in IR than the attack’s technical details.
  • Decide, which means figuring out what to do at that moment. This is actually difficult because it involves knowing who has the authority to decide and giving them the information to decide quickly. IR decisions often involve executive input, so it’s important to be able to get those people the information they need quickly and efficiently. All decisions need to be defensible after the fact and documented. Both the regulatory and litigation environments have gotten very complex, and decisions need to be made with defensibility in mind.
  • Act, which means being able to make changes quickly and effectively on our networks. IR teams need access to the organization’s network–all of the organization’s network. Again, incidents differ, and it’s impossible to know in advance what sort of access an IR team will need. But ultimately, they need broad access; security will come from audit rather than access control. And they need to train repeatedly, because nothing improves someone’s ability to act more than practice.

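As a sketch of what tooling for those four steps might look like when wired together (hypothetical classes and methods, not any vendor’s API): many observation sources feed orientation, decisions are routed to whoever has the authority and are documented, and actions are broad but audited.

```python
# Sketch of a unified IR framework built around the four steps above.
# Every class, method, and attribute here is hypothetical; the shape is
# the point: many observation sources, organizational context for
# orientation, documented decisions, and broad but audited action.

class IncidentResponseFramework:
    def __init__(self, sources, context_feeds, decider, network, audit_log):
        self.sources = sources              # IDSs, log analysis, performance data,
                                            # network management, physical security
        self.context_feeds = context_feeds  # news, intelligence feeds, org events
        self.decider = decider              # whoever has the authority to decide
        self.network = network              # broad access, secured by audit
        self.audit_log = audit_log

    def observe(self):
        """Pull real-time events from every source across the organization."""
        return [event for source in self.sources for event in source.poll()]

    def orient(self, events):
        """Attach organizational and Internet-wide context to the raw events."""
        return {"events": events,
                "context": [feed.lookup(events) for feed in self.context_feeds]}

    def decide(self, picture):
        """Route the synthesized picture to the decider and document the decision."""
        decision = self.decider.choose(picture)
        self.audit_log.record("decision", decision)   # defensible after the fact
        return decision

    def act(self, decision):
        """Apply the change anywhere on the network, and log it for audit."""
        result = self.network.apply(decision)
        self.audit_log.record("action", result)
        return result
```

Nothing in this sketch removes the people; every method exists to make the next human step in the loop faster and better documented.
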
Pulling all of these tools together under a unified framework will make IR work. And making IR work is the ultimate key to making security work. The goal here is to bring people, process, and technology together in a way we haven’t seen before in network security. It’s something we need to do to continue to defend against the threats.

This essay originally appeared in IEEE Security & Privacy.
