In response to my post More on Threat Hunting, Rob Lee asked:
[D]o you consider detection through ID’ing/“matching” TTPs not hunting?
To answer this question, we must begin by clarifying “TTPs.” Most readers know TTPs to mean tactics, techniques, and procedures, defined by David Bianco in his Pyramid of Pain post as:
How the adversary goes about accomplishing their mission, from reconnaissance all the way through data exfiltration and at every step in between.
In case you’ve forgotten David’s pyramid, it looks like this.
It’s important to recognize that the pyramid consists of indicators of compromise (IOCs). David uses the term “indicator” in his original post, but his follow-up post from his time at Sqrrl makes this clear:
There are a wide variety of IoCs ranging from basic file hashes to hacking Tactics, Techniques and Procedures (TTPs). Sqrrl Security Architect, David Bianco, uses a concept called the Pyramid of Pain to categorize IoCs.
At this point it should be clear that I consider TTPs to be one form of IOC.
In The Practice of Network Security Monitoring, I included the following workflow:
You can see in the second column that I define hunting as “IOC-free analysis.” On page 193 of the book I wrote:
Analysis is the process of identifying and validating normal, suspicious, and malicious activity. IOCs expedite this process. Formally, IOCs are manifestations of observable or discernible adversary actions. Informally, IOCs are ways to codify adversary activity so that technical systems can find intruders in digital evidence…
I refer to relying on IOCs to find intruders as IOC-centric analysis, or matching. Analysts match IOCs to evidence to identify suspicious or malicious activity, and then validate their findings.
Matching is not the only way to find intruders. More advanced NSM operations also pursue IOC-free analysis, or hunting. In the mid-2000s, the US Air Force popularized the term hunter-killer in the digital world. Security experts performed friendly force projection on their networks, examining data and sometimes occupying the systems themselves in order to find advanced threats.
Today, NSM professionals like David Bianco and Aaron Wade promote network “hunting trips,” during which a senior investigator with a novel way to detect intruders guides junior analysts through data and systems looking for signs of the adversary.
Upon validating the technique (and responding to any enemy actions), the hunters incorporate the new detection method into a CIRT’s IOC-centric operations. (emphasis added)
I will build a “hunting profile” via excerpts (in italics) from Chris’ post:
Assumption: “Attackers frequently use HTTP to facilitate malicious network communication.”
Hypothesis: If I find an unusual user agent string in HTTP traffic, I may have discovered an attacker.
Question: “Did any system on my network communicate over HTTP using a suspicious or unknown user agent?”
Method: “This question can be answered with a simple aggregation wherein the user agent field in all HTTP traffic for a set time is analyzed. I’ve done this using Sqrrl Query Language here:
SELECT COUNT(*),user_agent FROM HTTPProxy GROUP BY user_agent ORDER BY COUNT(*) ASC LIMIT 20
This query selects the user_agent field from the HTTPProxy data source and groups and counts all unique entries for that field. The results are sorted by the count, with the least frequent occurrences at the top.”
Results: Chris offers advice on how to interpret the various user agent strings produced by the query.
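Chris’ aggregation can be sketched outside of Sqrrl as well. The following is a minimal Python illustration of the same idea: count user agent strings from HTTP proxy records and surface the least frequent first, since rare agents are the most interesting to a hunter. The record format and field names here are assumptions for illustration, not Chris’ actual data.

```python
from collections import Counter

def rare_user_agents(http_records, limit=20):
    """Aggregate user agent strings and return the least frequent
    first, mirroring ORDER BY COUNT(*) ASC LIMIT 20."""
    counts = Counter(rec.get("user_agent", "(empty)") for rec in http_records)
    # Sort ascending by count so anomalies rise to the top
    return sorted(counts.items(), key=lambda kv: kv[1])[:limit]

# Hypothetical proxy log records for illustration only
logs = [
    {"user_agent": "Mozilla/5.0"},
    {"user_agent": "Mozilla/5.0"},
    {"user_agent": "Mazilla/4.0"},  # suspicious typo-squatted agent
]
print(rare_user_agents(logs))
```

As in Chris’ method, the query does not tell you which agent is malicious; it only orders the field so a human can investigate the outliers.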
This is the critical part: Chris did not say “look for *this user agent*.” He offered the reader an assumption, a hypothesis, a question, and a method. It is up to the defender to investigate the results. This, for me, is true hunting.
If Chris had instead referred users to this list of malware user agents (for example) and said “look for Mazilla/4.0,” then I consider that manual (human) matching. If I created a Snort or Suricata rule to look for that user agent, then I consider that automated (machine) matching.
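Such an automated match might look like the following Suricata rule sketch. The `sid` and `msg` values are placeholders, and the `http.user_agent` sticky buffer assumes a reasonably recent Suricata version (older releases use the `http_user_agent` modifier instead).

```
alert http $HOME_NET any -> $EXTERNAL_NET any (msg:"Suspicious User-Agent Mazilla/4.0"; http.user_agent; content:"Mazilla/4.0"; classtype:trojan-activity; sid:1000001; rev:1;)
```

Once a rule like this is deployed, the machine does the matching on every session, freeing the hunter to pursue IOCs not yet discovered.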
This is where my threat hunting definition likely diverges from modern practice. Analyst Z sees the results of Chris’ hunt and thinks “Chris found user agent XXXX to be malicious, so I should go look for it.” Analyst Z queries his or her data and does or does not find evidence of user agent XXXX.
I do not consider analyst Z’s actions to be hunting. I consider it matching. There is nothing wrong with this. In fact, one of the purposes of hunting is to provide new inputs to the matching process, so that future hunting trips can explore new assumptions, hypotheses, questions, and methods, and let the machines do the matching on IOCs already found to be suggestive of adversary activity. This is why I wrote in my 2013 book “Upon validating the technique (and responding to any enemy actions), the hunters incorporate the new detection method into a CIRT’s IOC-centric operations.”
The term “hunting” is a victim of its own success, and it carries emotional baggage. We defenders have finally found a way to make “blue team” work appealing to the wider security community. Vendors love this new way to market their products. “If you’re not hunting, are you doing anything useful?” one might ask.
Compared to “I’m threat hunting!” (insert chest beating), the alternative, “I’m matching!” (womp womp), seems sad.
Nevertheless, we must remember that threat hunting methodologies were invented to find adversary activity for which there were no IOCs. Hunting was IOC-free analysis because we didn’t know what to look for. Once you know what to look for, you are matching.
Both forms of detection require analysis to validate adversary activity, of course. Let’s not forget that.
I’m also very thankful, however it’s defined or packaged, that people are excited to search for adversary activity in their environment, whether via matching or hunting. It’s a big step from the mindset of 10 years ago, which was steeped in a “prevention works” mentality.
tl;dr: Because TTPs are a form of IOC, detection by matching TTPs is a form of matching, not hunting.