Developing business-driven indicators of compromise

Indicators of Compromise (IOCs) are a valuable tool for administrators and network defenders. However, what happens when an attacker doesn't trigger the expected alerts, or worse, they blend in with alerts that go unnoticed or ignored?

Network administrators, or those charged with network defense, have what seems to be an impossible task. They have to defend the network and its critical resources one hundred percent of the time, whereas an attacker only needs to be successful once. The attacker's single victory can be short-lived, though, if those keeping watch know how to spot the signs of things gone wrong.

From the news coverage this year, it would seem that most attacks focus on malware, application weaknesses, social engineering, or malicious employees. Those are all valid attack vectors, but they only skim the surface of the day-to-day threats security professionals face. And when such attacks do happen, many of them are detected by an increasingly complex layer of automated defenses on the network, and by the defenders behind them.

But what about the attacks that vigilance, firewalls, or other box-based security solutions can't deal with? How can security professionals discover the attacks that come in low and slow, where the attacker takes their time and their actions don't trigger the usual indicators of compromise? To explore this topic, CSO spoke with Conrad Constantine, a research team engineer with AlienVault Labs. He has a unique perspective on IOCs, having served on the incident response team during the 2011 RSA breach.

"Any reasonably skilled attacker will try and avoid using recognizable tools as soon as they have enough of a foothold on the target network to do so," said Constantine.

"Public indicators of compromise are undoubtedly a vital tool in initial detection of compromise, but breaches do not begin and end with a single host," he said, adding that serious targeted breaches (with actual human operators behind the attack) will soon blend in and avoid the use of identifiably malicious software.

"For us to make serious progress across the field, organizations must learn to create their own personalized indicators of compromise as well -- those things that, based upon an organization's understanding of its own business processes and deployments, should never happen during the course of operations on their infrastructure."

Another way to describe this method is to focus on separating signal from noise, a common IOC detection process: watching for abnormal movements on the network -- those seemingly random actions or data flows -- and then asking, "Why did that just happen?"

However, Constantine makes it a point to note that it isn't just about abnormalities, but certainties as well. It's important to identify the things that should never happen within the organization, such as access to a source code directory from a computer assigned to marketing.

"My general method for building your own organization-specific IOC's is to look to your own security policy, then identify actions that would violate that policy, and implement alerts for the events in your logs that would indicate those violation," he said.

Asked to expand on that logic and list some examples, he recalled his favorite anecdote that involves the most general of security policies related to Internet usage within an organization. Namely, that it isn't for personal use.

"I was curious to know which business units spent the most time accessing ESPN.com, so I created a simple report of proxy logs for ESPN domains, mapped to subnets to business units," he explained.

The worst offenders were a few systems inside the data center, due to administrators who would read their sports scores in the same RDP (Remote Desktop Protocol) sessions they were working in.

"We found a huge security exposure right there, just looking for basic things that didn't require a huge amount of esoteric security knowledge to understand or formulate the initial query," he added.

In this case, an IOC can be created to trigger an alert any time a server makes a client connection to the Internet. Likewise, another IOC would be to watch for RDP sessions that don't originate from authorized stations. If only two terminals are assigned to connect to the data center via RDP, then any other connection attempt needs to throw an alert and be examined.
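
Both of those IOCs boil down to simple checks against connection logs. The sketch below shows one way they might look against firewall or NetFlow exports in CSV form; the data-center subnet, the two authorized workstation addresses, and the column names are illustrative assumptions.

# A minimal sketch of the two IOCs described above, run against connection logs.
import csv
import ipaddress

SERVER_SUBNET = ipaddress.ip_network("10.9.0.0/24")        # assumed data-center range
AUTHORIZED_RDP_CLIENTS = {"10.1.5.10", "10.1.5.11"}        # the "two terminals"
RDP_PORT = 3389

def connection_alerts(flow_log_csv):
    """Yield (reason, row) for flows that should never happen."""
    with open(flow_log_csv, newline="") as fh:
        for row in csv.DictReader(fh):  # expects: src_ip, dst_ip, dst_port, direction
            src = ipaddress.ip_address(row["src_ip"])
            dst = ipaddress.ip_address(row["dst_ip"])
            # IOC 1: a server initiating an outbound connection to the Internet
            if src in SERVER_SUBNET and dst.is_global and row["direction"] == "outbound":
                yield "server-to-internet", row
            # IOC 2: an RDP session into the data center from an unauthorized station
            if (dst in SERVER_SUBNET and int(row["dst_port"]) == RDP_PORT
                    and row["src_ip"] not in AUTHORIZED_RDP_CLIENTS):
                yield "unauthorized-rdp", row

if __name__ == "__main__":
    for reason, row in connection_alerts("flows.csv"):
        print(f"ALERT [{reason}]: {row}")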

"Conversely, I've seen many policy entries that describe extravagant procedures regarding Administrator access to Executive Email services, but when we started looking for examples of this happening, it was occurring many times a day, usually via a request from the execs themselves. The policy did not match the reality of how business was getting done -- and that meant the policy had to be changed to reflect that," Constantine said.

"Done correctly, the kind of Digital Forensics / Incident Response (DFIR) analysis that IOCs enable doesn't just locate attackers on the network, but can act as a continually updating audit of IT deployment and usage. After all, shouldn't we be trying to catch the issues that enable our attackers, before they find them?"

When security policies are first measured against organization-specific IOCs, several violations will come to light. One of the harshest realities the organization will face during this process is that there is a big difference between how the business assumes IT resources are used and how they are actually used.

The trick is to measure the discovered violations against the level of risk the organization can tolerate. If an IOC violates a policy, but the policy actually hinders workflow or the goals of the business, then the policy needs to change. The simple fact is, security policies "should serve the business, not the other way around," Constantine said. So if the business chooses to do something in an insecure way, you're going to have to design around it.

Security teams, Constantine said, have a nasty habit of designing their monitoring around an idealized model of "some theoretical 'secure infrastructure' -- instead of going out and understanding how their particular organization does business and securing that."

As an example, he references an imagined nightmare scenario in which the marketing department's rollout of a new campaign on a third-party service (a common occurrence these days) sends alert systems into overdrive as internal information suddenly shoots past the firewall, triggering breach notifications tied to the alleged exposure of corporate information. What seems like a clear security policy violation, an incident that must be responded to, isn't a meltdown at all. It's a frustrating example of how policy hindered normal workflow.

"If you apply a purely technical approach to writing behavioral IOC's, you're likely to get flooded as your idealized model doesn't match the reality," Constantine explained.

Again, the best way to do this is to start with the extremes: things that should never occur during the course of normal business operations. But before that can happen, there needs to be a discussion or two with business leaders in order to determine what normal looks like. Doing the legwork will also help prevent paralysis of choice when it comes to following up on an alert, because it establishes a workflow that lets network defenders go with what they can execute on.

"Everything but the simplest of attacks will occur over multiple stages and actions. Even if one set of events is beyond your ability to successfully investigate and analyze, something other sequence of events and alerts is going to be within your ability to start investigating from, and give context to the things that seemed meaningless before," Constantine said.

"Even if the things you are discovering and executing remediation on are less than glamorous -- minor malware infections, locating machines that have fallen off the asset management radar -- it all adds up, both in removing the easiest routes for an attacker, and reducing the dataset down to a pool of unknowns that warrant more detailed investigation: 'The more silent you become, the more you can hear.'"

Wrapping up the interview, we asked Constantine to outline some custom IOC best practices for three market segments that seem to face a continuous stream of attacks. We've quoted his answers below.

Banking / Finance:

"You have many single-purpose systems in this environment, and extremely well-defined operating hours. Use them to your advantage in alerting on things outside of those [parameters]. Similarly, there are well defined procedures for financial operations and the auditing thereof, but look for accompanying aspects of those as well. You may have a well-defined set of monitors and audit for database activity, but expand that to include the operating system level as well. Should any major operation be carried out within minutes of an operating system change?"

Retail:

"Retail tends to be hub-and-spoke style information architecture. How often would you expect to see one store's systems attempting to communicate to another's? Attacks can spend weeks moving laterally around a compromised network getting multiple footholds and increasing number of points they can capture information and exfiltrate it from."

Industry:

"Industrial systems are notorious for being treated as black-box systems, even though many of them are operated via general-purpose computers. Many of them do not see frequent upgrades (and even less so, installations). Detecting any kind of executable data being transferred or installed to them is worthy of investigation, since if there's a legitimate reason for it, it's going to map to some fairly significant paperwork to justify it too."
