Saturday, March 15, 2014

Presenting DFIR, Shakespeare Style - DFIR Summit 2014

I have been given the opportunity to speak at the SANS DFIR Summit in Austin this year on a topic that I think is very important: whether there is value in focusing on a single discipline within the DFIR realm - not only from a skillset perspective, but also during investigations.

You can read more about the Summit on the SANS website, but here's a quick overview of my talk (titled To Silo, or Not to Silo: That is The Question):

Have you ever heard someone say they do network forensics and don't need the host computer to know what happened (or vice versa)? Or heard an incident handler who analyzes RAM comment that disk imaging is unnecessary and outdated? Unfortunately, these mindsets are problematic because they are limiting - to the investigator, to the evidence, and to our profession.

These limitations show themselves through incomplete analysis and inaccurate conclusions. If the limitation is real, tangible – for instance if firewall logs are the only available evidence – then we make the most of what we have. Otherwise, incident response should be based on all of the information available to us as investigators – firewall logs, packet captures, system alerts, RAM, filesystems, malicious executables, and so forth. If these are available, but are ignored or overlooked, analysts are missing out on potentially valuable information. When that happens, the conclusions drawn and recommendations made will be incomplete or just plain wrong. In the words of Hamlet, "Ay, there's the rub."

In this presentation, the audience will be taken through several real-world scenarios dealing with potentially infected systems, where pieces of evidence are available from some of our "competing" disciplines. Background on each system will be given, including how it showed up on the radar as potentially compromised; again, this stresses the point that we don't know what happened until we examine all of the available evidence. For each system, different types of evidence or DFIR disciplines are available to help with analysis. These examples will show how each - by itself - falls short of painting the full picture of what happened, and will illustrate our inability to draw concrete conclusions without all the pieces of the puzzle. Without being exhaustive, this presentation will demonstrate the importance of having knowledge, skills, and abilities in multiple DFIR disciplines, and how looking for additional evidence sources can help us perform more accurate analysis and reach more accurate conclusions.

PS: SANS is offering a $1,000 discount with the code "SUMMIT", available from March 17th through March 31st. More info available here.


  1. "These limitations show themselves through incomplete analysis and inaccurate conclusions."

    I would suggest that what's missing in this approach is the focus on the mission, on the goals of the exam. Given the particular goals of your exam, you may not need everything in order to answer the questions you were asked. Many of those who use memory and selected files have developed their approach based on an understanding of their goals, and what they need to answer those questions.

    "...incident response should be based on all of the information available..."

    Not always. Again, it depends on what you were asked to do, or determine. Why wait to acquire hard drives, or even logical acquisitions of system volumes, when you can collect memory and selected files, and narrow your focus to only the affected systems? Doing so enables a much quicker response, while at the same time reducing your customer's costs.

    "...these examples will show how each - by itself - falls short in painting the full picture of what happened..."

    I would still suggest that the approach a responder employs should be based on an understanding of the goals, as well as the available data.

    1. Harlan,

Thanks for the comments. I don't necessarily disagree when it comes to host-based forensics. And as you've noted - whether you're acting as an external consultant or an internal investigator - scope may define your role. However, just because someone (who may not want to know the truth) has defined the scope doesn't mean that your findings will be complete and accurate. Staying within the defined scope may be a requirement of your job/role, and may protect you individually against legal action, yet still leave facts undiscovered.

      My chief complaint, if you will, is with those individuals who do not seem to think there is value in data that resides outside of their small area of expertise. They silo their skills, their knowledge, their abilities, refusing to acknowledge the fact that it's a narrow mindset, and in so doing, they are not well serving their employer (internal or external). And if the employer does not want to see or know about other data sources, at a minimum it becomes the investigator's job to document the fact that evidence might have been left behind.

Take Target, for example. We all know now that there were FireEye alerts that were not responded to, and people are shaking their heads. But the FireEye alerts alone don't indicate a compromise - only the possibility, or probability, of one. That's a network alert; it needs correlation with other points of evidence to establish its validity. Now that external parties are assuredly involved in the investigation, what if their scope doesn't say anything about FireEye? Or firewalls? Are they to ignore those because they're outside of scope? Even if those sources ultimately prove irrelevant, they are still part of the bigger picture, and without considering them, you can't say you know what occurred based on the facts at hand.

Obviously there's a lot more to all of this than what has been mentioned here, on Twitter, etc. That's what the talk will flesh out; if you want more, be there or tune in... :)

  2. Frank,

    "My chief complaint, if you will, is with those individuals who do not seem to think..."

    Such as?

    Honestly, I'm not aware of anyone who thinks this way. I do think that most analysts and responders have some idea of focusing on specific items for triage, and then based on findings and indicators, going back and getting a more detailed view of specific systems.

    However, I'd think that at this point, anyone who focused solely on, say, network traffic, would be in something of an awkward position when asked questions regarding what happened, their findings, etc.