This year at the SANS DFIR Summit in Austin, TX I had the distinct honor and pleasure of presenting a talk entitled To Silo, or Not to Silo: That is the Question. The PDF of the slides is available here (direct download). All the other awesome presentations are up there as well, so make time to check them out if you haven't already.
Shortly
after the Summit, I promised someone somewhere (or told, or maybe just
suggested) that I would post the notes, or at least more details,
about the talk. After all, we all know how entertaining it is to
look at the slides of a presentation. Wow, great stuff, right? I
think there are supposed to be videos of the talks somewhere or
other, but if there was a post about it, I missed it - and mine might
not have been taped anyway; who knows. So basically, the
point of this is to flesh out that presentation in a meaningful way for
those who are readers of the word rather than hearers (and obviously,
not everyone could be there - or even wanted to be). That said, my
intent here is not to recreate the presentation (although I might steal
a slide or two), but rather to build on it, and present the topic in a
slightly (well, maybe more than slightly) different format. As an
aside, you might be wondering what took me so long to get this done.
Well, just like a nice single-malt scotch, some things must age to perfection, and not leave the cask for bottling until they're just
right.
A little
background first, to help set the stage, and fair warning - this may
be a bit long, and I may break it up into multiple posts (or I may
not). Also, this is a blog post, not a white paper or news article,
so it will be more "conversational" in nature. Hopefully,
you will find it worth your while to soldier on through it. The
genesis for the talk actually came from last year's Summit, with Alex
Bond's lightning talk about combining host and network indicators. This made a lot of sense
to me, and I thought it could be a full talk; plus, it falls in line
with what I spend a lot of my time doing for a living.
First Things
Starting
off, my focus was on the need to broaden our horizons from an
evidence perspective; if we only look at host images, or RAM, or
firewall logs, or netflow, or (the list goes on...), and we don't
consider other sources, we're selling ourselves short. There are a
couple of difficulties with this type of approach, and I think it bears
calling them out now:
1. Not
all evidence types are always available. This could be because they
don't exist, or because you're not provided access to them.
2. Not
all analysts/investigators/whatever you want to call them have
in-depth knowledge, skills, and abilities with all evidence types.
Both
those things are limiting factors, and so I started building from these
standpoints:
1. Deal with the evidence you have, and expand where you can.
2. Know how to deal with the evidence types available to you, and how to
expand those.
3. If you can't/don't/won't, then you're selling yourself and your client
(internal or external) short.
To me,
these things all related to siloing oneself, and so I came up with
the title I did, way back last year (had to have the title before I
could submit the talk in the first place). I mention that mainly
because Jack Crook has a great blog post very similarly named, and
touching on some of the same concepts, from May of this year. Read it, it's good, as is the
norm for his blog. Just know that these were both conceived
independent of one another; it must be a "great minds think
alike" sort of thing, if I may in any way lay claim to that
adage.
However,
as I delved more into the topic at hand, I added another piece - the
thing I feel it all really boils down to, and the thing that, if we
ignore it, can REALLY silo us. It's one that business people can
relate to (which is very important for us in our line of work), and
which really guides our decision-making processes in Information
Security as a whole. You haven't guessed yet? Well, it's risk.
That's right - risk. Virtually all of the decisions we make in the
course of DFIR work are based on, or informed by, risk. The
"problem" is, we don't tend to see it that way, and that's
odd to me, because in InfoSec we talk about it all the time (it's how
we relate "bad things" to the business, get money for
projects, tell people no, tell people yes, get hated/loved/ice water
dumped on, so on and so forth). To be honest, I'm guilty of that as
well - I could easily quantify various "needs" in that
respect, but it really wasn't until I started working on the
presentation, that I started seeing the correlation to the topic of
risk.
Is risk
really such an odd topic for us? I honestly don't think so; it's
just that we don't think of it in those terms. Let's take something
really simple - would you close your eyes and attempt to walk across
a busy intersection? Most likely not, but why? Because it's
"stupid" or "idiotic" or a "good way to get
killed"? Doesn't it really boil down to a risk decision,
though? The risk of getting mowed down by a speeding motorist in a
2000-lb vehicle is greater than the reward of saying you crossed the
intersection with your eyes closed. It's not that it's "stupid,"
it's just too risky for most people.
All the Things
So let's
start to put it in the context of DFIR, and the scope of my Summit
talk. In the presentation, I started off with a slide showing some
different broad sources of evidence: Systems, Network, Cloud,
Mobile; with the "Internet of Things" we may need to start
adding in things like Appliances and Locks as well. Anyway, within
those broad categories or families, there are subsets of types of
evidence, such as:
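(On the slide this was a visual sampling; since that doesn't translate to
text, below is a rough stand-in of my own - illustrative entries only, not
the slide verbatim, and nowhere near exhaustive.)

```python
# Illustrative only: a rough map of evidence "families" to example types.
# These entries are my own examples, not an authoritative taxonomy.
EVIDENCE_SOURCES = {
    "Systems": ["disk images", "memory/RAM dumps", "event logs",
                "registry hives", "binaries for reverse engineering (RE)"],
    "Network": ["packet captures (pcap)", "netflow", "firewall logs",
                "IDS/IPS alerts", "proxy and DNS logs"],
    "Cloud":   ["provider audit logs", "API access logs",
                "hosted email and storage artifacts"],
    "Mobile":  ["device backups", "application data",
                "call/SMS records", "geolocation data"],
}

for family, examples in EVIDENCE_SOURCES.items():
    print(f"{family}: {', '.join(examples)}")
```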
Now,
obviously there are many more than that, and some (such as Reverse
Engineering/RE) aren't exactly evidence per se - but the idea was to
start to get the audience thinking about the things they do during a
given investigation (which may vary considerably, depending on the
type, scope, and sensitivity of the matter at hand). I'm pretty sure
that there are folks who don't regularly touch all, or even most, of
just these few. With that in mind, do you know these and more in
great detail? If you were handed one at random, would you know what
to do with it? Would it make you uncomfortable? What if you were
asked where to find it during an investigation? You don't have to
answer out loud - again, the point is to get us all thinking. If you
think of each of these (or other) types/sources/etc of evidence as
languages, wouldn't you want to be fluent? Don't you think it would
be valuable? That's the first point.
In the
preso, I illustrated this point - that of Knowledge, Skills, and
Abilities (KSAs) - by taking everyone back to their days of
role-playing games (I realize for some this might still be reality).
Not modern MMORPGs, but old-school things like A/D&D, with
character sheets, a bag full of dice, a DM (Dungeon Master, not
direct message) and a bunch of chips and salsa. Yes, I know, for
some there were probably "other" substances involved, but
this is a family show, okay? Anyway, back in those simpler times, I
always wanted to be more than just one character class during an
adventure, especially if there were only a handful of players in the game
(kind of like most DFIR teams); with only one character of each class, if
someone got hurt, killed, or otherwise taken out of action, it was a
disaster (in InfoSec terms, a single point of failure). I mean, if your
thief got caught and killed while picking a pocket, who was there to open
locks or detect traps for the group? But, if you had a fighter/thief as
well, then you have at least somewhat of a backup plan (again in
InfoSec terms, a Disaster Recovery and Business Continuity/DRBC plan,
and not just a single point of failure). So it's one thing to know
one thing very well, but it brings more value and broadens the overall
potential of the group (or DFIR team) if you have folks with a
broader skill set, such as a dual-class human or multi-class
non-human. In this context, we're talking about people who can take
apart a packet capture, reverse-engineer a binary, parse a memory
dump, and so forth - they're not stuck with just one thing. This was
the point that Jack raised in his blog post, and he draws it out very
well.
Shelly
Giesbrecht did a presentation at the Summit this year about building
an awesome SOC, available here (direct PDF download).
In a SOC, it's pretty common to have each member focused on a single
monitoring task - firewall, IDS/IPS, DLP, AV, etc. - and while that can
provide a level of expertise in that area like Elminster does magic,
it doesn't produce a very well-rounded individual (can the AV person
fill in for the pcap analyst?). As Shelly mentioned in her talk, the counter
to that is to try to expand the knowledge base, but at the expense of
actual abilities - we become jacks of all trades, but masters of
none. This goes directly counter to what the greatest swordsman in
all of history (no, not Yoda - Miyamoto Musashi) wrote in his Book of Five Rings - that in order to truly be a master of one thing (such as
swordsmanship), you had to become a master of all things (poetry, tea
ceremony, carpentry, penmanship). Troy Larson, in his keynote
address at the Summit (direct PDF download),
brought up the concept of using the whole pig. And if you don't know
about the whole pig, you can't use the whole pig, which is the
point. But, if you don't have the whole pig, or don't look at parts
of the pig, then you're missing out. And that's the second point.
A Puzzling Equation
Alex's
lightning talk brought up the topic of using multiple sources of
evidence - specifically host-based and network-based data - to better
understand an attack. Yes, that's right - he was using more than one
part of the pig (Troy would be proud, I'm sure). But as we saw
earlier, there are more sources than just host/systems and network,
and a multitude of evidence types within those, and that's where it
starts to get a little more complicated, at least for some (and in
some cases). The reason I say that is that I know people who, for
whatever reason, focus during an investigation on a single type of
evidence or analysis, even when they have the skills to expand on it.
For instance, they may just look at network logs, or a disk image,
or volatile data. Each of these things can bring incredible value to
an investigation, but individually, they're limited; if you don't
expand your viewpoint, you're missing the bigger picture. I'll flesh
that out with a puzzle illustration. We've probably all put together
at least one puzzle in our lifetime, and even if it's not a normal
occurrence for us, we understand the basic concepts (if not, wikiHow
lays them out in a very simple format here).
Imagine
you've been handed a pile of puzzle pieces, perhaps it looks
something like this:
(Source: http://opentreeoflife.files.wordpress.com/2012/10/puzzle2.jpg)
In other
words, you have no idea how many pieces there are (or are supposed to
be), nor what it should show when it's all put together. In case
it's not perfectly clear, this puzzle is the investigation (whether
it's internal/corporate, external/consulting, law enforcement,
military, digital forensics, or incident response). The end goal is
being able to deliver a concise, detailed report of findings that
will properly inform the necessary parties of the facts (and in some
cases, opinions) of what happened in a given scenario. If we take a
bunch of the pieces out and put them in another box somewhere, not
using them, that's probably not going to help us put it all together
(so if you ignore RAM, or disk, or network...). If we follow the
wikiHow article and start framing in the puzzle, then start taking
guesses as to what it represents (or what happened during the
commission of a crime, etc), then we're missing the bigger picture.
Get it? Picture? The puzzle makes a picture - see what I did there?
Heh heh heh. ;-)
I mean,
this probably includes sea life, but we don't know for sure what is
represented, and certainly can't answer any detailed questions about
it...
(Source: http://www.pbase.com/image/9884347)
What if
we start to fill more pieces in? When can we start to (or best)
answer questions? Here:
(Source: http://piccola77.blogspot.com/2010_05_01_archive.html)
Here:
(Source: http://3.bp.blogspot.com/-wviPW6QWJiA/U_fTcSUKoUI/AAAAAAAAZjg/KLTKLJYSnQs/s1600/Lightning%2BStriking%2BTree%2B2%2B-%2B1000%2BEurographics.jpg)
or here:
(Source: http://moralesfoto.blogspot.com/2011_11_01_archive.html)
Pretty
clearly the last one gives us the best chance of answering the most
questions, but we could still miss some critical ones, because there
are substantial blank areas. Sure, it appears to be foliage that's
displayed in the background, but is it the real thing, or a
reflection off the water? Is it made up of trees, bushes, or a
combination? Is there any additional wildlife? What about predators?
Now imagine you're sitting down across from a group of attorneys (maybe
friendly, maybe not), and those gaps are due to evidence that was never
analyzed in the course of your investigation. Ouch...
Now,
there are multiple facets to every investigation, and within each as
well. There are differences between eDiscovery (loosely connected to
what we do), digital forensics, and incident response, and those can
probably all be argued to the nth degree and until the cows come
home. I get all that, and am taking those things into account; I'm
trying to paint a broader picture here, and get everyone to think about associated
risk. In the end, it really is about risk, and I'll
get to that. For now, let's list out a few scenarios that challenge
the "all the pieces" approach.
- There isn't enough time to gather all available evidence types. This is probably most prevalent in IR cases, where time is of the essence - think imaging 500 systems that each have a 500GB hard drive when you only have two people working on it, and executives/legal/PR/law enforcement need answers - fast.
- There aren't enough resources to gather all available evidence types. Again, very common in IR cases, where you have small teams, responsibilities are divided up, and KSAs may be lacking. We talked about that before.
- All evidence is not made available to you. This factors in across the board, and comes into play in pretty much every investigative role (corporate, consulting, LE, etc). This could be because:
- The business/client/suspect is trying to hide things from you.
- The people/groups in charge of the evidence are resistant/can't be bothered/etc (I've had CIOs refuse to give me access to systems because it was "too sensitive" and we ended up not gathering certain potential evidence).
- The evidence simply doesn't exist (systems/platforms don't exist, policies purge logs w/o central storage, power was shut down, it was intentionally destroyed, etc).
Risky Business
This is
where we get to that part that didn't really dawn on me until I was
well into building the presentation. Initially, the presentation was
going to walk the audience through various investigative scenarios,
to show how it was important to know how to handle different types
and sources of evidence, and how without doing so, you could be
missing the bigger picture (or the finer details within the picture,
such as Mari DeGrazia's talk on Google Analytics cookies - direct PDF download). I still accomplished that, but also added in the new element, that of risk.
I can
see it in your eyes: some of you are confused about what this has to
do with risk. Wikipedia explains risk in part as "...the
potential of losing something of value, weighed against the potential
to gain something of value."
It's a very familiar concept in financial circles, especially with
regard to the return on investment (ROI) of a particular financial
transaction. As such, it's very commonplace in businesses
(especially mature ones), along with executives and business leaders.
Information Security uses risk management as a means (among other
things) to help quantify and show value to the business, especially
preemptively or proactively, to help avoid increased costs from a
negative occurrence (such as a breach) down the road. Businesses
understand that, because they can recognize the cost associated with
a breach, with damage to brand, lawsuits, expenses to clean up, and
so forth. Okay, great, that makes perfect sense - but how does it
apply to an after-the-fact situation in DFIR? Well, remember our two
main points to which the risk pertains? Lack of knowledge, and lack of
evidence. I'll give some examples under each, for how risk ties in
(please be warned - these won't be exhaustive).
Lack of
knowledge/skills/abilities - personnel lacking a broad base of
expertise in dealing with multiple types of evidence or
investigations spanning computers, networks, cloud-based offerings,
mobile technologies, etc.
- Requires additional/new internal or external (consulting) staffing resources, which cost money.
- Takes longer to complete investigations, which costs additional money, directly and indirectly (fines and fees, for instance).
- May result in inaccurate findings/reports/testimony, and could result in sanctions, fees, fines, settlements, etc.
- Loss of personnel who seek other positions to get the training/experience they know they need.
- Inability to spot incidents in the first place, leading to additional exposure and associated costs.
- Training staff to achieve higher levels of expertise in new areas costs money.
These
are pretty straightforward, no-brainer sort of things, right? I
think we can all see the importance of being a well-rounded
investigator; it makes us more valuable to our employer, and helps us
do our jobs more effectively. Win-win scenario.
Lack of
evidence - whether evidence is missing/doesn't exist, inaccessible or
not provided, or simply overlooked/ignored.
- Inability to answer questions for which the answers can only be found in the "missing" evidence; this can result in additional costs:
- Having to go back after the fact and attempt to recover other evidence types (paying more consultants, for example).
- Potential sanctions, fines, fees due to failure in fiduciary duties, legal responsibilities, and regulatory requirements.
- Loss of personal income due to loss of job.
- Potential charges of spoliation, depending on scenario, and associated sanctions, fines, settlements.
- Loss of business due to lack of appropriate response, brand damage, court costs/legal fees, etc (everyone out of a job). May seem drastic, but smaller businesses may not be able to bear the costs associated with a significant breach, and when part of those costs stem from inappropriate response...
- May take significant time and money to collect and examine all potential/available evidence; the cost of doing so may be more than the cost of not doing so.
The
whole "lack of evidence" area is where I tend to see the
most resistance within our field, so I'll try to counter the most
common objection. I'm not saying that if we don't collect and
analyze every single possible source and type of evidence on every
single investigation of any type, that we're not doing our jobs.
What I'm saying is that to the extent it is feasible and reasonable
to do so, we need to collect and analyze the available and pertinent
evidence in the most expedient manner, based on the informed risk appetite of
the business.
There, I
think that should start to set the stage for the next piece of the
conversation. In our areas of lack of knowledge and lack of
evidence, it's not necessarily a "bad" thing for them to
exist one way or the other. What is a "bad" thing is to
take certain courses of action without engaging the proper
stakeholders in a risk conversation, so that they can make an
informed decision on how doing one thing or another may negatively
(or positively) impact the business. That's what risk management is
all about, and now that we've seen that our actions can introduce new
risk to the business, we need to start engaging the business on that
level. A big piece of the puzzle here is that we, the DFIR
contingent, are not really the ones to determine whether or not
we only need to collect a certain type of evidence, or whether the
lack of a certain type of evidence has a significant negative impact
on an investigation. That's up to the business, and it's our job to
inform them of the risks involved, so that they can weigh them accordingly in the
context of the overall goals and needs of the business (to which we
are likely not privy).
For
example, in an IR scenario, we may not think it makes a lot of sense
to image a bunch of system hard drives, due to the time it takes. We
inform the business of the time and level of effort we estimate to be
involved, and the impact of that distracting us from doing other
things that may have more immediate relevance (such as dumping and analyzing RAM, or looking at pcaps from network
monitoring). The business (executives, legal, etc) on their side,
are aware of potential legal issues surrounding the situation, and
know that if system-based evidence (of a non-volatile nature) is not
preserved in a defensible fashion, the company could be tied up in
legal battles for years. They determine that the cost/impact (aka,
"risk") of ongoing legal battles is greater than the
cost/impact (aka, "risk") of imaging the drives, so they
provide the instruction to us. If we hadn't broached the subject and
had a risk-based conversation with the appropriate stakeholders, we
might have chosen based on our perspective, and incurred significant
costs for the business and ourselves down the road.
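If it helps, here's the shape of that conversation reduced to a toy
expected-cost comparison. Every number below is invented purely for
illustration - in practice the likelihoods and impacts come from the
business, not from us, and they're rarely this tidy:

```python
# Toy expected-cost comparison for the "image the drives or not" decision.
# All figures are made-up placeholders, not real-world estimates.

def expected_cost(probability: float, impact_dollars: float) -> float:
    """Classic risk quantification: expected cost = likelihood x impact."""
    return probability * impact_dollars

# Option A: skip imaging, move straight to volatile/network analysis,
# and accept the chance of protracted legal battles later.
legal_exposure = expected_cost(probability=0.30, impact_dollars=2_000_000)

# Option B: image the drives first - a near-certain direct cost (staff
# time, delay, storage), plus some chance the delay hurts the response.
imaging_cost = 80_000
delayed_response = expected_cost(probability=0.10, impact_dollars=500_000)

print(f"Option A (don't image): ~${legal_exposure:,.0f} expected")
print(f"Option B (image first): ~${imaging_cost + delayed_response:,.0f} expected")
# Here Option B "wins," mirroring the scenario above - but flip the inputs
# and the answer flips too. The point is the conversation, not the math.
```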
So, am I
saying we shouldn't make intelligent decisions ourselves? Should we
do nothing until someone makes a choice for us? Please don't
misunderstand me; by no means should we do nothing. But what we do
should be tempered by the scenario we're in, and the inherent risk
(to ourselves and others), juxtaposed against the risk appetite of
those who are paying us. After all, let's be honest - if a business is
made to look bad or incur significant cost (whether through an
incident response scenario, or some other investigation or legal
action), most likely a "heads will roll" situation will
arise. Professionally, it's our job to help ensure the business is
well-informed, prepared, and protected from something like this
happening; personally, it's our job to make sure it isn't our heads
on the chopping block (C-levels may just move to another home, but if
those who do the work get "branded" it may not be quite as
easy). If you do a pentest engagement, what's the first thing you
get? Your "get out of jail free" card, or a properly
scoped and signed statement of work, which authorizes you to do the
things you need in order to accomplish the mission. Think of what
I'm saying from that angle: by making DFIR another risk topic,
you're protecting yourself, your immediate boss/management, and the
company/employer. There are other benefits as well - expectations
are properly set, you have clear-cut direction, and can hopefully
operate at peak efficiency. This keeps everyone happy (a very key
point) and reduces cost; you gain visibility and insight into company
needs and strategy, and are positioned to receive greater
appreciation from the business (which can obviously be beneficial in
a number of ways).
Last Things
Now that
we've wrapped up the risk association aspect, and everyone agrees with me
100%, we can frame in the two original areas of conversation - lack
of KSAs and lack of evidence. I think the first is a given, but the
second is the gray area for most folks. I've had numerous
conversations around this concept, online and in person, and so far
the puzzle analogy seems the easiest to digest. If you're putting
together a 1000-piece puzzle without the box or picture of the
completed puzzle (isn't that pretty much EVERY investigation you've
ever done?), no matter how much you *think* you know what it is, you
don't truly know until it's done. Attorneys and business
management/executives want answers, and those can't be halfway
formed, because they're making costly and potentially career-limiting
decisions based upon what we say. So if you're only 25% done with
the puzzle, you can't answer all the questions. If you limit
yourself to 25% of the puzzle (or available evidence), or you're
limited to that amount by other parties, you're limited in the information you can
provide. If you're stuck with the 25% (say, by forces outside your control), then you do the best you can, and
inform the business - they might be able to apply pressure to get
you access to more evidence (but if they don't know, they can't help
you).
Let's
look at the flip side of that briefly. If you're in an investigation
(of whatever type), and there are 500 systems with 500 GB, 5400 RPM
hard drives and only USB 2 connections; 10 TB of pcaps, 4 TB of logs
from network appliances and systems, 20 servers spread across the
country with 4 TB of local storage each and 400 TB combined network
storage (where evidence might be), total RAM of 4 TB (plus pagefile
and hiberfil), 2 people to get the work done and 1 week to do it,
you're probably not going to be very successful. You'd really need
near-unlimited resources and time, which just isn't the reality for
any of us. But the reality also is that in this imaginary scenario,
even with substantial resources, we'd still need to inform the
business of the associated risks, so that they could help establish
the true requirements, guidelines and timelines, and ultimately help us help
them (note: it is sometimes necessary for us to guide the business through this process, to help them understand the point we're trying to get to). It really doesn't matter whether we're internal or external -
our jobs put us in partnership with the business (unless the business
wants us to lie or fabricate findings, which becomes a completely
different discussion that I won't get into here).
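For a sense of just how unrealistic that scenario is, here's the
back-of-the-envelope math on the disk imaging alone. The throughput figure
is an assumption on my part - a rough number for sustained USB 2 reads, and
generous once you factor in verification hashing and drive swaps:

```python
# Back-of-the-envelope: how long to image 500 x 500 GB drives over USB 2?

SYSTEMS = 500
DRIVE_GB = 500
THROUGHPUT_MBPS = 35     # MB/s - assumed sustained USB 2 rate
PARALLEL_IMAGERS = 4     # assume each of the 2 people runs 2 rigs at once

seconds_per_drive = (DRIVE_GB * 1024) / THROUGHPUT_MBPS
total_drive_hours = SYSTEMS * seconds_per_drive / 3600
wall_clock_days = total_drive_hours / PARALLEL_IMAGERS / 24

print(f"~{seconds_per_drive / 3600:.1f} hours per drive")
print(f"~{total_drive_hours:,.0f} drive-hours total")
print(f"~{wall_clock_days:.0f} days of round-the-clock imaging")
# Roughly 4 hours a drive, ~2,000 drive-hours, and over 20 days nonstop
# with 4 drives going at all times - against a 1-week deadline, before
# even touching the pcaps, logs, servers, or RAM.
```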
The goal
is to make the best use of available evidence, time, and resources,
to help the business answer the questions they need to address. If
we have the necessary KSAs, and help the business understand the
risks associated with the scope of an investigation, we can reach the
end goal in a much more efficient manner than if we just work in a
silo. I'd love to talk more; if you have any
questions/comments/concerns, comment here or hit me up on Twitter. Until then...
Think
risk, and carry on.