Of course, when we talk about breaches, we're usually talking about someone "stealing" data. It's not actually "stolen" because you still have it. It's more accurate to say that in a breach the data is exfiltrated. This is also called an attack on the confidentiality of the data.
In security, we talk about the C-I-A triad: Confidentiality, Integrity, and Availability. Confidentiality is about the secrecy of data. Integrity is about the accuracy of data. Availability is about being able to properly access data when it's needed. A well-rounded security program needs to consider all three aspects.
What is somewhat different this year is the crypto-/ransomware attacks. In these cases, the attack is a virus that typically gets in as an email attachment. Someone opens the attachment and the virus executes. It finds files in network shared directories and encrypts them. Now, encryption is often a good thing, but that's when you (or your organization) hold the decryption key. In a crypto-ware attack, only the attacker has the key. That's a problem. It becomes ransomware when the attacker offers to provide the key for a "small" consulting fee, usually paid via the anonymous cryptocurrency Bitcoin.
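To see why holding the key matters, here's a minimal Python sketch using the third-party `cryptography` package. The file contents are made-up example data; the point is simply that whoever generates and keeps the key controls access to the encrypted bytes, and everyone else is locked out.

```python
# Minimal sketch of symmetric encryption, assuming the third-party
# "cryptography" package is installed (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # whoever holds this key controls the data
cipher = Fernet(key)

plaintext = b"Q3 patient billing records (example data)"
ciphertext = cipher.encrypt(plaintext)

# With the key, decryption is trivial...
assert Fernet(key).decrypt(ciphertext) == plaintext

# ...without it, the ciphertext is just opaque bytes. In a crypto-ware
# attack, only the attacker ever sees `key`, so the victim is stuck here.
print(ciphertext[:32], "...")
```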
These are basically attacks on the availability of data. We've seen instances of hospitals and other organizations temporarily shutting down as a result. They could also be considered attacks on the integrity of the data - though I think we have not yet seen the real integrity attacks... and they are coming.
So, how do we prevent these attacks? We can't. In fact, just ban the word "prevent" from your security vocabulary. The attackers are well funded and skilled. This has become a business, and business is "good". And, it's not if, but when an organization will suffer an attack.
Should we just give up? I said we can't prevent these attacks, but there are things we can do.
I don't use the "p" word prevent, but I do use another "p" word: proactive. In addition, we must be able to detect problems and respond to them.
This week I moderated a panel of CISOs talking about Incident Management at the Cyber Security for Healthcare Exchange. It was a great discussion. Incident Response and Management could be one of the most important parts of a security program because "when" it happens, how we respond to minimize the impact can make a huge difference both for the patients/customers and the organization. Incident Management needs to be a key part of an overall enterprise security program.
First you have to know where your "stuff" is. That means understanding what data and assets need to be protected either for regulatory reasons or because it might be of interest to attackers. Once you know what you've got, you need to figure out how it might be attacked. That's called Threat Modeling. Then, for each method of attack, you figure out what you would do to investigate and respond to a problem. We call those "playbooks". Easy, right? Well, it's conceptually easy, but complex to actually pull together. Here's some good news... while creating the specific plans is important, what is perhaps more important is the actual process of planning. That's because the actual response will never go down exactly as you planned it!
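To make that asset-to-threat-to-playbook mapping a little more concrete, here's a small, hypothetical Python sketch. The asset names, threats, and response steps are invented placeholders for illustration, not a recommended taxonomy or a complete playbook.

```python
# Hypothetical sketch: assets, the threats modeled against each, and the
# playbook to execute when that threat materializes. All names and steps
# below are illustrative placeholders.
playbooks = {
    ("EHR database", "ransomware on file shares"): [
        "Isolate affected hosts from the network",
        "Identify the initial infection vector (e.g., email attachment)",
        "Restore encrypted files from offline backups",
        "Notify the incident response lead and open an after-action record",
    ],
    ("patient portal", "credential phishing"): [
        "Force password resets for affected accounts",
        "Review access logs for anomalous logins",
        "Report the phishing campaign to the awareness team",
    ],
}

def respond(asset: str, threat: str) -> None:
    """Look up and print the playbook for a given asset/threat pair."""
    steps = playbooks.get((asset, threat))
    if steps is None:
        print(f"No playbook yet for {asset!r} / {threat!r} -- escalate.")
        return
    for i, step in enumerate(steps, start=1):
        print(f"{i}. {step}")

respond("EHR database", "ransomware on file shares")
```

Remember, though, the value is mostly in the planning exercise itself; the real incident will not follow the script exactly.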
Another important component is monitoring. You must monitor your systems and networks. First, you need to know what "normal" is before you can figure out when things aren't normal.
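One way to picture "knowing what normal is": collect a baseline of some metric (say, failed logins per hour) and flag values that fall far outside it. The sketch below uses a simple mean-plus-standard-deviation threshold with made-up numbers; real monitoring stacks are far richer, so treat this only as an illustration of baselining.

```python
# Illustrative baseline/anomaly check with made-up numbers.
import statistics

# Hypothetical baseline: failed logins per hour over the past week.
baseline = [3, 5, 4, 2, 6, 3, 4, 5, 2, 3, 4, 6, 5, 3]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)
threshold = mean + 3 * stdev  # "normal" = within three standard deviations

def is_anomalous(failed_logins_this_hour: int) -> bool:
    """Flag an hour whose failed-login count sits well above the baseline."""
    return failed_logins_this_hour > threshold

print(is_anomalous(4))    # False: well within the baseline
print(is_anomalous(250))  # True: worth an alert and a look at the playbook
```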
One area of incident management that is often overlooked is training and awareness. There are two different components. First is training for people who will be part of investigation or response teams. This can be as simple as including those people in the planning and making sure they understand the playbooks and communication channels. Equally important is security awareness for everyone in the enterprise, including how to report problems and how to recognize phishing emails, malicious links, and attachments, as these are often the root causes of incidents.
Ultimately, the goal is to get basic plans in place, learn from what happens (via an After Action process to review and capture lessons learned), and to practice continuous improvement. It's not about getting everything right the first time. It's about learning and improving.
The title of this post is "it's not if, but when". So what happens when it's "when"? Well, first you execute your playbooks and procedures, modifying as appropriate. Another key part of the plan we didn't discuss above is communication channels. This means how you communicate with both internal and external stakeholders. If things get really bad, this can include: law enforcement, external counsel, external forensics or other security experts, your cybersecurity insurance company, the media, and regulatory bodies. Recent history has shown us that, for the worst breaches, the companies that over-communicated seemed to come out better.
This is a very rich topic so we'll revisit parts of this another time. For now, there are two guarantees... it will get worse before it gets better, and, it's not if, but when. But all is not lost! Start, plan, respond, learn, improve!
"Not if, but when" is a very appropriate title! We are all, at some point, going to be affected by a malicious hacker; how the companies we trust with our information handle such attacks will determine our loyalty to them!