Sunday, September 7, 2014

It's a Groovy Kind of Risk


This year at the SANS DFIR Summit in Austin, TX  I had the distinct honor and pleasure of presenting a talk entitled To Silo, or Not to Silo: That is the Question. The PDF of the slides is available here (direct download).  All the other awesome presentations are up there as well, so make time to check them out if you haven't already.

Shortly after the Summit, I promised someone somewhere (or told, or maybe just suggested) that I would post the notes, or at least more details, about the talk. After all, we all know how entertaining it is to look at just the slides of a presentation. Wow, great stuff, right? I think there are supposed to be videos of the talks somewhere or other, but if there was a post about it, I missed it, and mine might not've been taped anyway, and well, who knows. So basically, the point of this is to flesh out that presentation in a meaningful way for those who are readers of the word rather than hearers (and obviously, not everyone could - or even wanted to - be there). That said, my intent here is not to recreate the presentation (although I might steal a slide or two), but rather to build on it, and present the topic in a slightly (well, maybe more than slightly) different format. As an aside, you might be wondering what took me so long to get this done. Well, just like a nice single-malt scotch, some things must age to perfection, and not leave the cask for bottling until they're just right.

A little background first, to help set the stage, and fair warning - this may be a bit long, and I may break it up into multiple posts (or I may not). Also, this is a blog post, not a white paper or news article, so it will be more "conversational" in nature. Hopefully, you will find it worth your while to soldier on through it. The genesis for the talk actually came from last year's Summit, with Alex Bond's lightning talk about combining host and network indicators.  This made a lot of sense to me, and I thought it could be a full talk; plus, it falls in line with what I spend a lot of my time doing for a living.

First Things

Starting off, my focus was on the need to broaden our horizons from an evidence perspective; if we only look at host images, or RAM, or firewall logs, or netflow, or (the list goes on...), and we don't consider other sources, we're selling ourselves short. There are a couple of difficulties with this type of approach, and I think it bears calling them out now:
1. Not all evidence types are always available. This could be because they don't exist, or because you're not provided access to them.
2. Not all analysts/investigators/whatever you want to call them have in-depth knowledge, skills, and abilities with all evidence types.

Both of those things are limiting factors, and so I started building from the standpoints of:
1. Dealing with the evidence you have, and expanding where you can.
2. Knowing how to deal with the evidence types available to you, and how to expand those.
3. Understanding that if you can't/don't/won't, then you're selling yourself and your client (internal or external) short.

To me, these things all related to siloing oneself, and so I came up with the title I did, way back last year (I had to have the title before I could submit the talk in the first place). I mention that mainly because Jack Crook has a great blog post, very similarly named and touching on some of the same concepts, from May of this year.  Read it, it's good, as is the norm for his blog. Just know that these were both conceived independently of one another; it must be a "great minds think alike" sort of thing, if I may in any way lay claim to that adage.

However, as I delved more into the topic at hand, I added another piece, which I feel it all really boils down to, and which, if we ignore it, can REALLY silo us. It's one that business people can relate to (which is very important for us in our line of work), and which really guides our decision-making processes in Information Security as a whole. You haven't guessed yet? Well, it's risk. That's right - risk. Virtually all of the decisions we make in the course of DFIR work are based on, or informed by, risk. The "problem" is, we don't tend to see it that way, and that's odd to me, because in InfoSec we talk about it all the time (it's how we relate "bad things" to the business, get money for projects, tell people no, tell people yes, get hated/loved/ice water dumped on, and so forth). To be honest, I'm guilty of that as well - I could easily quantify various "needs" in that respect, but it really wasn't until I started working on the presentation that I started seeing the correlation to the topic of risk.

Is risk really such an odd topic for us? I honestly don't think so; it's just that we don't think of it in those terms. Let's take something really simple: would you close your eyes and attempt to walk across a busy intersection? Most likely not, but why? Because it's "stupid" or "idiotic" or a "good way to get killed"? Doesn't it really boil down to a risk decision, though? The risk of getting mowed down by a speeding motorist in a 2000-lb vehicle is greater than the reward of saying you crossed the intersection with your eyes closed. It's not that it's "stupid," it's just too risky for most people.

All the Things

So let's start to put it in the context of DFIR, and the scope of my Summit talk. In the presentation, I started off with a slide showing some different broad sources of evidence: Systems, Network, Cloud, Mobile; with the "Internet of Things" we may need to start adding in things like Appliances and Locks as well. Anyway, within those broad categories or families, there are subsets of types of evidence, such as:
(Slide: examples of evidence types within each family, such as disk images, RAM, logs, packet captures, and Reverse Engineering/RE.)

Now, obviously there are many more than that, and some (such as Reverse Engineering/RE) aren't exactly evidence per se - but the idea was to start to get the audience thinking about the things they do during a given investigation (which may vary considerably, depending on the type, scope, and sensitivity of the matter at hand). I'm pretty sure that there are folks who don't regularly touch all, or even most, of just these few. With that in mind, do you know these and more in great detail? If you were handed one at random, would you know what to do with it? Would it make you uncomfortable? What if you were asked where to find it during an investigation? You don't have to answer out loud - again, the point is to get us all thinking. If you think of each of these (or other) types/sources/etc of evidence as languages, wouldn't you want to be fluent? Don't you think it would be valuable? That's the first point.

In the preso, I illustrated this point - that of Knowledge, Skills, and Abilities (KSAs) - by taking everyone back to their days of role-playing games (I realize for some this might still be reality). Not modern MMORPGs, but old-school things like A/D&D, with character sheets, a bag full of dice, a DM (Dungeon Master, not direct message) and a bunch of chips and salsa. Yes, I know, for some there were probably "other" substances involved, but this is a family show, okay? Anyway, back in those simpler times, I always wanted to be more than just one character class during an adventure, especially if there were only a handful in the game (kind of like most DFIR teams); with only one of a few types, if someone got hurt, killed, or otherwise taken out of action, it was a disaster (in InfoSec terms, a single point of failure). I mean, if your thief got caught and killed while picking a pocket, who was there to open locks or detect traps for the group? But if you had a fighter/thief as well, then you had at least somewhat of a backup plan (again in InfoSec terms, a Disaster Recovery and Business Continuity/DRBC plan, and not just a single point of failure). So it's one thing to know one thing very well, but it brings more value and broadens the overall potential of the group (or DFIR team) if you have folks with a broader skill set, such as a dual-class human or multi-class non-human. In this context, we're talking about people who can take apart a packet capture, reverse-engineer a binary, parse a memory dump, and so forth - they're not stuck with just one thing. This was the point that Jack raised in his blog post, and he draws it out very well.

Shelly Giesbrecht did a presentation at the Summit this year about building an awesome SOC, available here (direct PDF download).  In a SOC, it's pretty common to have each member focused on a single monitoring task - firewall, IDS/IPS, DLP, AV, etc - and while that can provide a level of expertise in that area like Elminster does magic, it doesn't produce a very well-rounded individual (can the AV person fill in for the pcap analyst?). As Shelly mentioned in her talk, the counter to that is to try to expand the knowledge base, but at the expense of actual abilities - we become jacks of all trades, but masters of none. This goes directly counter to what the greatest swordsman in all of history (no, not Yoda - Miyamoto Musashi) wrote in his Book of Five Rings - that in order to truly be a master of one thing (such as swordsmanship), you had to become a master of all things (poetry, tea ceremony, carpentry, penmanship). Troy Larson, in his keynote address at the Summit (direct PDF download), brought up the concept of using the whole pig. If you don't know about the whole pig, you can't use the whole pig - which goes back to the first point. But if you don't have the whole pig, or don't look at parts of the pig, then you're missing out. And that's the second point.

A Puzzling Equation

Alex's lightning talk brought up the topic of using multiple sources of evidence - specifically host-based and network-based data - to better understand an attack. Yes, that's right - he was using more than one part of the pig (Troy would be proud, I'm sure). But as we saw earlier, there are more sources than just host/systems and network, and a multitude of evidence types within those, and that's where it starts to get a little more complicated, at least for some (and in some cases). The reason I say that is that I know people who, for whatever reason, focus on a single type of evidence or analysis during an investigation, even when they have the skills to expand on it. For instance, they may just look at network logs, or a disk image, or volatile data. Each of these things can bring incredible value to an investigation, but individually, they're limited; if you don't expand your viewpoint, you're missing the bigger picture. I'll flesh that out with a puzzle illustration. We've probably all put together at least one puzzle in our lifetime, and even if it's not a normal occurrence for us, we understand the basic concepts (if not, wikiHow lays them out in a very simple format here).

Imagine you've been handed a pile of puzzle pieces; perhaps it looks something like this:

 (Source:  http://opentreeoflife.files.wordpress.com/2012/10/puzzle2.jpg)


In other words, you have no idea how many pieces there are (or are supposed to be), nor what it should show when it's all put together. In case it's not perfectly clear, this puzzle is the investigation (whether it's internal/corporate, external/consulting, law enforcement, military, digital forensics, or incident response). The end goal is being able to deliver a concise, detailed report of findings that will properly inform the necessary parties of the facts (and in some cases, opinions) of what happened in a given scenario. If we take a bunch of the pieces out and put them in another box somewhere, not using them, that's probably not going to help us put it all together (so if you ignore RAM, or disk, or network...). If we follow the wikiHow article, frame in the puzzle, and then just start taking guesses as to what it represents (or what happened during the commission of a crime, etc), we're missing the bigger picture. Get it? Picture? The puzzle makes a picture - see what I did there? Heh heh heh.  ;-)

I mean, this probably includes sea life, but we don't know for sure what is represented, and certainly can't answer any detailed questions about it...

(Source: http://www.pbase.com/image/9884347)


What if we start to fill more pieces in? When can we start to (or best) answer questions? Here:

(Source: http://piccola77.blogspot.com/2010_05_01_archive.html)


Here:

(Source: http://3.bp.blogspot.com/-wviPW6QWJiA/U_fTcSUKoUI/AAAAAAAAZjg/KLTKLJYSnQs/s1600/Lightning%2BStriking%2BTree%2B2%2B-%2B1000%2BEurographics.jpg)

or here:

(Source: http://moralesfoto.blogspot.com/2011_11_01_archive.html)


Pretty clearly the last one gives us the best chance of answering the most questions, but we could still miss some critical ones, because there are substantial blank areas. Sure, it appears to be foliage that's displayed in the background, but is it the real thing, or a reflection off the water? Is it made up of trees, bushes, or a combination? Is there any additional wildlife? What about predators? Imagine you're sitting down across from a group of attorneys (maybe friendly, maybe not), and those gaps are due to evidence not analyzed in the course of your investigation? Ouch...

Now, there are multiple facets to every investigation, and within each as well. There are differences between eDiscovery (loosely connected to what we do), digital forensics, and incident response, and those can probably all be argued to the nth degree and until the cows come home. I get all that, and am taking those things into account; I'm trying to paint a broader picture here, and get everyone to think about associated risk. In the end, it really is about risk, and I'll get to that. For now, let's list out a few scenarios that challenge the "all the pieces" approach.

  1. There isn't enough time to gather all available evidence types. This is probably most prevalent for IR cases, where time is of the essence, and imaging 500 systems that all have 500GB hard drives just isn't feasible when you only have two people working on it, and executives/legal/PR/law enforcement need answers - fast.
  2. There aren't enough resources to gather all available evidence types. Again, very common in IR cases, where you have small teams, responsibilities are divided up, and KSAs may be lacking. We talked about that before.
  3. Not all evidence is made available to you. This factors in across the board, and comes into play in pretty much every investigative role (corporate, consulting, LE, etc). This could be because:
    • The business/client/suspect is trying to hide things from you.
    • The people/groups in charge of the evidence are resistant/can't be bothered/etc (I've had CIOs refuse to give me access to systems because it was "too sensitive" and we ended up not gathering certain potential evidence).
    • The evidence simply doesn't exist (systems/platforms don't exist, policies purge logs w/o central storage, power was shut down, it was intentionally destroyed, etc).

Risky Business

This is where we get to that part that didn't really dawn on me until I was well into building the presentation. Initially, the presentation was going to walk the audience through various investigative scenarios, to show how it was important to know how to handle different types and sources of evidence, and how without doing so, you could be missing the bigger picture (or the finer details within the picture, such as those covered in Mari DeGrazia's talk on Google Analytics cookies - direct PDF download). I still accomplished that, but also added in the new element, that of risk.

I can see it in your eyes, some of you are confused about what this has to do with risk. Wikipedia explains risk in part as "...the potential of losing something of value, weighed against the potential to gain something of value."  It's a very familiar concept in financial circles, especially with regard to the return on investment (ROI) of a particular financial transaction. As such, it's very commonplace in businesses (especially mature ones), along with executives and business leaders. Information Security uses risk management as a means (among other things) to help quantify and show value to the business, especially preemptively or proactively, to help avoid increased costs from a negative occurrence (such as a breach) down the road. Businesses understand that, because they can recognize the costs associated with a breach: damage to brand, lawsuits, expenses to clean up, and so forth. Okay, great, that makes perfect sense - but how does it apply to an after-the-fact situation in DFIR? Well, remember our two main points to which the risk pertains? Lack of knowledge, lack of evidence. I'll give some examples under each of how risk ties in (please be warned - these won't be exhaustive).

Lack of knowledge/skills/abilities - personnel lacking a broad base of expertise in dealing with multiple types of evidence, or investigations spanning computers, networks, cloud-based offerings, mobile technologies, etc.
  1. Requires additional/new internal or external (consulting) staffing resources, which cost money.
  2. Takes longer to complete investigations, which costs additional money, directly and indirectly (fines and fees, for instance).
  3. May result in inaccurate findings/reports/testimony, and could result in sanctions, fees, fines, settlements, etc.
  4. Loss of personnel who seek other positions to get the training/experience they know they need. 
  5. Inability to spot incidents in the first place, leading to additional exposure and associated costs.
  6. Training staff to achieve higher levels of expertise in new areas costs money.

These are pretty straightforward, no-brainer sorts of things, right? I think we can all see the importance of being a well-rounded investigator; it makes us more valuable to our employer, and helps us do our jobs more effectively. Win-win scenario.
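
As a quick aside - if you want to express any of those in the business's native tongue, the classic InfoSec quantification is Annualized Loss Expectancy (ALE = Single Loss Expectancy x Annualized Rate of Occurrence). The talk didn't include any code, but here's a minimal sketch with completely made-up numbers, just to show the shape of the math:

    # Classic quantitative risk formula: ALE = SLE * ARO.
    # Every dollar figure and rate below is invented for illustration.

    def ale(single_loss_expectancy, annual_rate_of_occurrence):
        """Annualized Loss Expectancy: expected yearly cost of a risk."""
        return single_loss_expectancy * annual_rate_of_occurrence

    # Risk: a breach goes unspotted because nobody can read the network evidence.
    breach_sle = 500_000   # cleanup, legal, brand damage per incident
    breach_aro = 0.2       # call it one such miss every five years

    # Mitigation: cross-train an analyst on network evidence.
    training_cost = 15_000

    exposure = ale(breach_sle, breach_aro)
    print(f"Annualized exposure: ${exposure:,.0f} vs. training: ${training_cost:,}")
    # If the exposure dwarfs the mitigation cost, the training request
    # practically writes itself.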

Lack of evidence - whether evidence is missing/doesn't exist, inaccessible or not provided, or simply overlooked/ignored.
  1. Inability to answer questions for which the answers can only be found in the "missing" evidence; can result in additional costs:
    • Having to go back after the fact and attempt to recover other evidence types (paying more consultants, for example).
    • Potential sanctions, fines, fees due to failure in fiduciary duties, legal responsibilities, and regulatory requirements.
    • Loss of personal income due to loss of job.
  2. Potential charges of spoliation, depending on scenario, and associated sanctions, fines, settlements.
  3. Loss of business due to lack of appropriate response, brand damage, court costs/legal fees, etc (everyone out of a job). May seem drastic, but smaller businesses may not be able to bear the costs associated with a significant breach, and when part of those costs stem from inappropriate response...
  4. May take significant time and money to collect and examine all potential/available evidence; the cost of doing so may be more than the cost of not doing so.

The whole "lack of evidence" area is where I tend to see the most resistance within our field, so I'll try to counter the most common objection. I'm not saying that if we don't collect and analyze every single possible source and type of evidence on every single investigation of any type, that we're not doing our jobs. What I'm saying is that to the extent it is feasible and reasonable to do so, we need to collect and analyze the available and pertinent evidence in the most expedient manner, based on the informed risk appetite of the business.

There, I think that should start to set the stage for the next piece of the conversation. In our areas of lack of knowledge and lack of evidence, it's not necessarily a "bad" thing for them to exist one way or the other. What is a "bad" thing is to take certain courses of action without engaging the proper stakeholders in a risk conversation, so that they can make an informed decision on how doing one thing or another may negatively (or positively) impact the business. That's what risk management is all about, and now that we've seen that our actions can introduce new risk to the business, we need to start engaging the business on that level. A big piece of the puzzle here is that we, the DFIR contingent, are not really the ones to determine whether or not we only need to collect a certain type of evidence, or whether the lack of a certain type of evidence has a significant negative impact on an investigation. That's up to the business, and it's our job to inform them of the risks involved, so that they can weigh them accordingly in the context of the overall goals and needs of the business (to which we are likely not privy).

For example, in an IR scenario, we may not think it makes a lot of sense to image a bunch of system hard drives, due to the time it takes. We inform the business of the time and level of effort we estimate to be involved, and the impact of that distracting us from doing other things that may have more immediate relevance (such as dumping and analyzing RAM, or looking at pcaps from network monitoring). The business (executives, legal, etc) on their side, are aware of potential legal issues surrounding the situation, and know that if system-based evidence (of a non-volatile nature) is not preserved in a defensible fashion, the company could be tied up in legal battles for years. They determine that the cost/impact (aka, "risk") of ongoing legal battles is greater than the cost/impact (aka, "risk") of imaging the drives, so they provide the instruction to us. If we hadn't broached the subject and had a risk-based conversation with the appropriate stakeholders, we might have chosen based on our perspective, and incurred significant costs for the business and ourselves down the road.
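
To make that conversation concrete, here's a toy expected-cost comparison (every number here is invented; in real life, the probability and exposure figures come from the business side, not from us):

    # Toy expected-cost comparison for the imaging decision above.
    imaging_cost = 2 * 80 * 150   # 2 analysts x 80 hours x $150/hr = $24,000
    legal_exposure = 2_000_000    # potential cost if evidence isn't preserved
    p_litigation = 0.10           # the business's estimate, not ours to make

    expected_cost_of_skipping = p_litigation * legal_exposure  # $200,000
    print(f"Image the drives:  ${imaging_cost:,}")
    print(f"Skip and hope:     ${expected_cost_of_skipping:,.0f} (expected)")
    # Our job is to surface the left-hand side; the stakeholders own the
    # right-hand side. Together, the decision makes itself.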

So, am I saying we shouldn't make intelligent decisions ourselves? Should we do nothing until someone makes a choice for us? Please don't misunderstand me; by no means should we do nothing. But what we do should be tempered by the scenario we're in, and the inherent risk (to ourselves and others), juxtaposed against the risk appetite of those who are paying us. After all, let's be honest - if a business is made to look bad or incur significant cost (whether through an incident response scenario, or some other investigation or legal action), most likely a "heads will roll" situation will arise. Professionally, it's our job to help ensure the business is well-informed, prepared, and protected from something like this happening; personally, it's our job to make sure it isn't our heads on the chopping block (C-levels may just move on to another home, but if those who do the work get "branded," it may not be quite as easy). If you do a pentest engagement, what's the first thing you get? Your "get out of jail free" card - a properly scoped and signed statement of work, which authorizes you to do the things you need in order to accomplish the mission. Think of what I'm saying from that angle: by making DFIR another risk topic, you're protecting yourself, your immediate boss/management, and the company/employer. There are other benefits as well - expectations are properly set, you have clear-cut direction, and can hopefully operate at peak efficiency. This keeps everyone happy (a very key point) and reduces cost; you gain visibility and insight into company needs and strategy, and are positioned to receive greater appreciation from the business (which can obviously be beneficial in a number of ways).

Last Things

Now that we've wrapped up the risk association aspect, and everyone agrees with me 100%, we can frame in the two original areas of conversation - lack of KSAs and lack of evidence. I think the first is a given, but the second is the gray area for most folks. I've had numerous conversations around this concept, online and in person, and so far the puzzle analogy seems the easiest to digest. If you're putting together a 1000-piece puzzle without the box or picture of the completed puzzle (isn't that pretty much EVERY investigation you've ever done?), no matter how much you *think* you know what it is, you don't truly know until it's done. Attorneys and business management/executives want answers, and those can't be halfway formed, because they're making costly and potentially career-limiting decisions based upon what we say. So if you're only 25% done with the puzzle, you can't answer all the questions. If you limit yourself to 25% of the puzzle (or available evidence), or you're limited to that amount by other parties, you're limited in the information you can provide. If you're stuck with the 25% (say, by forces outside your control), then you do the best you can, and inform the business - they might be able to apply pressure to get you access to more evidence (but if they don't know, they can't help you).

Let's look at the flip side of that briefly. If you're in an investigation (of whatever type), and there are 500 systems with 500 GB, 5400 RPM hard drives and only USB 2 connections; 10 TB of pcaps, 4 TB of logs from network appliances and systems, 20 servers spread across the country with 4 TB of local storage each and 400 TB combined network storage (where evidence might be), total RAM of 4 TB (plus pagefile and hiberfil), 2 people to get the work done and 1 week to do it, you're probably not going to be very successful. You'd really need near-unlimited resources and time, which just isn't the reality for any of us. But the reality also is that in this imaginary scenario, even with substantial resources, we'd still need to inform the business of the associated risks, so that they could help establish the true requirements, guidelines and timelines, and ultimately help us help them (note: it is sometimes necessary for us to guide the business through this process, to help them understand the point we're trying to get to). It really doesn't matter whether we're internal or external - our jobs put us in partnership with the business (unless the business wants us to lie or fabricate the truth, which becomes a completely different discussion that I won't get into here). 
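
If the hopelessness of that scenario isn't obvious, run the back-of-the-envelope math on just the drive imaging (assuming roughly 30 MB/s of real-world USB 2.0 throughput, which is generous):

    # Back-of-the-envelope imaging math for the scenario above.
    # ~30 MB/s is an assumed real-world USB 2.0 rate; adjust to taste.
    systems = 500
    drive_gb = 500
    usb2_mb_per_sec = 30
    imagers = 2  # two people, so at best two drives imaging in parallel

    hours_per_drive = (drive_gb * 1024) / usb2_mb_per_sec / 3600
    total_days = systems * hours_per_drive / imagers / 24

    print(f"{hours_per_drive:.1f} hours per drive")     # ~4.7 hours
    print(f"{total_days:.0f} days of nonstop imaging")  # ~49 days
    # Against a one-week deadline - and that's before touching the pcaps,
    # the logs, the servers, or the RAM.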

The goal is to make the best use of available evidence, time, and resources, to help the business answer the questions they need to address. If we have the necessary KSAs, and help the business understand the risks associated with the scope of an investigation, we can reach the end goal in a much more efficient manner than if we just work in a silo. I'd love to talk more; if you have any questions/comments/concerns, comment here or hit me up on twitter. Until then...

Think risk, and carry on.




Saturday, May 17, 2014

Sweet Child o' LSASS

Recently, I was channeling my inner rock star, and thought I'd share a finding regarding "normal" occurrences.  You're probably all familiar with LSASS.exe, the "Local Security Authority Subsystem Service" process, and you might also know that it doesn't have any children.  Poor thing; children are truly a gift (and a challenge, but that's a different topic).  Anyhow, as noted in the SANS DFIR "Find Evil" poster, if LSASS spawns a child process, it bears looking into - and that's exactly what I was doing.

Given that I think it's important to be as proactive as possible with regard to incident response, I am always looking for ways to spot potential problems.  Now, the SANS poster showcases things to watch for when doing memory analysis, but if you're parsing all executable activity in real time and storing that data in a way it can be queried at will (kind of like Sysinternals procmon on steroids), then why not apply the same principle and see what can be found?  Yes, yes, I'm talking about CarbonBlack (now part of Bit9), which is (in my opinion) an awesome endpoint monitoring platform.  However, while this post will make use of that technology, don't think of it as being about Cb, but rather about the hunt, what's found, and how that informs the bigger picture (and may change some of what's considered "normal").  Keep in mind there are other tools that can help accomplish the same goal, and as noted, memory analysis (with tools like Volatility, Rekall, and Mandiant Redline) is at the forefront - so don't get hung up on the tool; it's the investigator that makes the difference in the long run. 

With that, I'd like to tell you a story about the hunt for spawn of LSASS, and how it started with a simple little query, as shown below; basically, any process for which LSASS.exe showed up as the parent...
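
For those reading along at home, the console query boils down to the Cb search term parent_name:lsass.exe.  If you'd rather drive it from a script than the GUI, here's a rough sketch - fair warning, the endpoint path, header name, and result field names are my assumptions about the Cb Response REST API of that era, so check your server's API docs rather than trusting me:

    # Rough scripted equivalent of the console query: every process whose
    # parent is lsass.exe. Endpoint, header, and field names are assumptions.
    import requests

    CB_URL = "https://cbserver.example.com"  # hypothetical server
    API_TOKEN = "YOUR-API-TOKEN"             # hypothetical token

    resp = requests.get(
        f"{CB_URL}/api/v1/process",
        headers={"X-Auth-Token": API_TOKEN},
        params={"q": "parent_name:lsass.exe", "rows": 200},
        verify=False,  # many Cb servers ran self-signed certs; fix for real use
    )
    resp.raise_for_status()

    for proc in resp.json().get("results", []):
        print(proc.get("hostname"), proc.get("process_name"),
              proc.get("process_md5"), proc.get("path"))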






Right away we see that there's one sweet child o' lsass on 37 endpoints, with two different hashes showing up.  Okay, so two different binaries or versions, then.  Let's keep digging, starting with a listing of the search results.



A couple of things stand out; namely, that each process is associated with six (6) filemods (such as create/modify/delete a file) and two (2) netconns (could be browsing to a website, IP address, hostname), while related activities for registry (regmods) and other processes/binaries (modloads) are similar in count but different.  If you're the type (and I am) that likes to review data offline to filter, sort, search, and so on, you can download a CSV that looks a little something like what's below.  If you have oddball md5 values, abnormal paths, or process names that stand out, it's sometimes easier to focus in on (at least for me).  With this, the "start" value is the time the current instance of the process ran, and "last_update" is the most recent time it actually did something as it applies to the query at hand.



I mentioned oddball md5 values, right?  And we know we have two different ones at play here, and 37 total processes, so what's the breakdown?  Funny you should ask (well, I asked, but it was kind of rhetorical anyway - just work with me...)


So apparently two out of the 37 are the "f17e" hashes, with the remainder being "bcb8."  And yes, that can be identified using the GUI, but I like to see things with my own two eyes, plus this gives me an offline record in case I ever need it.  No abnormal paths were noted.
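
If you want to reproduce that breakdown yourself from the downloaded CSV, a few lines of Python will do it (the column names below - process_md5 and path - are assumptions; match them to whatever headers your export actually has):

    # Tally the md5 and path breakdown from the exported CSV.
    import csv
    from collections import Counter

    md5s, paths = Counter(), Counter()
    with open("lsass_children.csv", newline="") as f:  # hypothetical filename
        for row in csv.DictReader(f):
            md5s[row["process_md5"]] += 1  # assumed column name
            paths[row["path"]] += 1        # assumed column name

    print(md5s.most_common())   # e.g. [('bcb8...', 35), ('f17e...', 2)]
    print(paths.most_common())  # eyeball this for any abnormal paths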

Anyway, since there's consistency with filemods and netconns, that's a good place to start looking, but first, I'd like to know a little more - high level - about these processes.  From a tool standpoint, Cb has a "preview" feature, and so to take a peek at the different binaries involved here (remember, two different hash values)...




First we had "Mr. Popularity," the "bcb8" version, followed up by the indomitable "f17e."  In either case, the command line parameters are the same:  "/efs /enroll /setkey."  If you had not already, you're probably running out and searching teh intarwebs for this executable, EFS, and whatever else might give some insight, since it appears to be from Microsoft (R) and might be legit (but you never know, right?).  If that's what you're doing, no worries, I was in the same boat.  I even reached out to the SANS DFIR email list to see if anyone else had encountered this, since we all know that "normal" means LSASS doesn't have kids.  No children.  Nada.  Zip.  Zilch.  Carlos Marins pointed me to the following MS document (link is direct download) http://download.microsoft.com/download/9/8/2/98207DD4-7D2C-4EF6-9A9F-149C179D053E/CommercialOSSecFunReqsPublicV2Mar09.docx, which was very helpful in understanding some of the things I would subsequently find.  Chris Pogue asked a few questions and reminded me about checking the hash with Bit9 File Advisor.

Speaking (er, well, writing) of hash, we'll go ahead and knock that out.  VirusTotal and File Advisor both came up clean:






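By the way, if you'd rather script the VirusTotal half of that check than use the web UI, the v2 public API takes a hash as the "resource" parameter.  A minimal sketch (bring your own API key, and mind the public rate limits):

    # Minimal VirusTotal hash lookup via the v2 public API.
    import requests

    API_KEY = "YOUR-VT-API-KEY"  # placeholder
    md5 = "bcb8..."              # stand-in for one of the (truncated) hashes

    resp = requests.get(
        "https://www.virustotal.com/vtapi/v2/file/report",
        params={"apikey": API_KEY, "resource": md5},
    )
    report = resp.json()

    if report.get("response_code") == 1:
        print(f"{report['positives']}/{report['total']} engines flagged it")
    else:
        print("Hash not found on VirusTotal")
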
Alright, now back to the fun stuff - filemods and netconns for this sweet child o' lsass.  Oh, first - this'll be quick, I promise - we can take a look at more details about the two binaries in question (as binaries, not as the named processes).




In addition to a few more details, this will show how many times that particular binary has been seen in action, without any time or other filtration (such as by parent process).  In addition, using the aptly-labeled "Download" button, I can extract a copy of the raw binary for additional analysis, reverse-engineering, and so forth, offline.  More on that later, as I said this would be quick.  Now, back to the rest!

Here's a quick look at each of the processes for analysis:


You can see that the relationship between wininit.exe and lsass.exe is as expected.  It's just that the latter spawns efsui.exe as a process (which of course we already knew by this time).  We see the commandline parameters again, and the fact that it's signed by MS.  What's new here is that the username (obfuscated) is the domain account of the actual end-user; it's not System or other non-human (actually, it's in the binary preview window as well, but I really just wanted to call it out here, rather than there).  Also, pay attention to the "Export events to CSV" button in the top right (more on that later as well).  And, more of the same from the other version of the process...


We were going to look into filemods and netconns if I recall correctly (and of course I do), so don't stop now...

You can't see the whole screen above, but just underneath the process map, there are some "facets" to speed up queries/drills/searches based on different criteria (of course, we're going to look at filemods and netconns - aren't you just tired of hearing about how we're "going" to do that?).  



Clicking on filemods highlights some other areas within that category of activity, and also focuses our results on just those.  Thus, we can see that the actions are broken down equally between creation and first write, and that only three directories were utilized, all within the user profile under AppData\Roaming (clicking on any of those would highlight only those pieces of information, thus narrowing the search further).  Next up are the event details.



Here, in timeline fashion, we get to see file creation, first write (if there were any file deletes, we would see those too), and some details associated with each.  The expanding arrow (as shown on the top entry) shows frequency information (singular in this instance because of the nature of the path), and the "Search" (in blue) will take us to those results (for instance if there were multiple systems instead of just one, we'd get to see what those all were).  The "Search" box at the top right of the event list allows us to find any search string in the results (such as username, filename, part of the path, etc), if we had something we wanted to jump to quickly (or even just see if it was present).  What's of interest here are the paths involved, which start to make sense in light of the MS document I mentioned earlier (same for regmods, but I'm not going to go into those for the sake of brevity).

Clicking on filemod in the facets again clears that drill, and we can switch over to netconns.


Basically, the two netconns were pointing to domain controllers, and as can be seen from the frequency information, those DCs are quite common from a connectivity perspective (as one would expect, being DCs).  The interesting thing here is the ports involved, which make it look like LDAP is in play.
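
As a sanity check on netconns like these, it helps to keep the standard AD service ports in your back pocket; a trivial triage sketch (the IP/port pairs below are hypothetical stand-ins for the netconn events):

    # Well-known AD/DC service ports, for quick netconn triage.
    AD_PORTS = {
        88:   "Kerberos",
        389:  "LDAP",
        445:  "SMB",
        636:  "LDAPS",
        3268: "Global Catalog (LDAP)",
        3269: "Global Catalog (LDAPS)",
    }

    # Hypothetical (dst_ip, dst_port) pairs pulled from the netconn events:
    netconns = [("10.0.0.11", 389), ("10.0.0.12", 389)]

    for ip, port in netconns:
        print(ip, port, AD_PORTS.get(port, "NOT a standard DC port - dig deeper"))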

Okay, we're getting down to the wrap-up stage (thanks for sticking with me this far, I know it's been a lengthy post/novella at this point).  What it appears we have are known, signed, MS executables associated with the Encrypting File System (EFS) for transparent file encryption, reaching out as the logged-in user to domain controllers for authentication, using LDAP.  But is that really truly what's going on?  Do we know enough to say that at this point?  Are there any additional checks we could do, ways to validate/verify this theory?  
What about looking at other activity on the system for suspicious processes, network communications, or known malware?  Any evidence to suggest that EFSUI.exe had been injected with other code after it started running?  What about packet captures for these LDAP connections - are they really normal authentication for the process?  Any corroborating logs from firewalls, the DCs, etc?  Is EFS used in the environment, or is the user known to do so specifically?  What about the binaries involved - does reverse engineering (RE) indicate any oddities or abnormalities?  

Do you remember that there is the ability to extract a binary for offline analysis?  So RE is definitely a possibility.  Firewall and DC event logs should be reasonable to expect (although not necessarily a given).  Packet captures from the time of the event (in the past) would require some dedicated NSM for streaming pcaps, but would be a really good way to help determine what's going on inside those netconns.  And we can certainly dig into the more detailed process and binary analysis on the hosts in question.  That "Export events to CSV" button I mentioned earlier?  Gives output like this, with a CSV for each "type" of activity, and will also include child processes if available (there weren't any for efsui):

Note:  The "Summary" text file gives some info about the process or binary being analyzed (name, hash, path, etc).

These spreadsheets provide a veritable plethora of information about the process or binary being analyzed.  Don't know if you noticed, but there was a section on the analysis page referencing "Alliance Feeds" - this provides info about matches to VirusTotal, known bad domains, and other "intel" related to activity that might be full of evil.  Rather than a specific process to search for, you could also start with a given endpoint/host, any of these threat feeds, or some custom query for known indicators (based on firewall, IPS, or other threat "intel" you have).  Anyway, all that to say that if you wanted to correlate the activity from EFSUI.exe as a child process of LSASS.exe to any other activity, to help determine whether this is benign or the most evil thing on the planet, there's a lot you can do.  And again, not just with CarbonBlack - you can set up a packet capture for a period of time (host, router, switch, firewall, etc), perform memory analysis, targeted triage (such as that popularized by Harlan Carvey, Corey Harrell, and Chris Pogue, to name a few key folks), or even (gasp!) a full disk image with timeline and analysis galore!

Maybe the information here is enough to make a determination and speak authoritatively about what happened, and whether there are any unremediated or ongoing risks.  However, it's really ultimately about those risks, and as investigators we may not be the decision-makers (in fact, most likely are not).  We can inform the decision-makers of our findings and recommendations, but we also have to be honest and explain what options are available, what those options would provide (and at what cost), and what potential risks could be incurred (and the probability thereof) by not pursuing the aforementioned options.  Want to know more on this subject?  Come to my talk at the SANS DFIR Summit in June - To Silo, or Not to Silo: That is the Question.  More info is available about the Summit here.

Thanks again for "listening" to my tale about the sweet child o' lsass, and remember ... you may "know" that lsass doesn't have any child processes, but if you don't verify or validate that, you might just reach the wrong conclusions, and that probably wouldn't be good in a "real life" scenario.

Happy Hunting!


Saturday, May 10, 2014

Did You Know? ... or ... What Is Normal?

We all know that in Windows, explorer.exe (the user shell, the graphical file system interface) is the parent process to applications launched by the user, such as Internet Explorer (iexplore.exe).  That's normal; we all know it, and it looks a little something like this:


That's great and all, but did you know that's not always the case, at least in the matter of iexplore.exe?  Sure, of course you did.  Maybe you haven't thought about it before, but you do know.

What happens when you're on a 64-bit system, and launch the 64-bit version of Internet Explorer?  Did you note the drop-down arrow next to it in the screenshot?  Well, that looks a little something like this:

First, there's the "parent" iexplore.exe (pay no attention to the changing PIDs, please).  Path is "Program Files."  But then the "child" iexplore.exe; path is "Program Files (x86)," indicating this is a 32-bit spawn of (well, you know...), which means that the parent is 64-bit.  So, a "new" "normal," then?
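
If you want to spot-check this on a live box without firing up a GUI tool, psutil will pull the same parent/path picture; a minimal sketch (assumes the psutil module is installed, and run it on Windows, obviously):

    # List iexplore.exe processes with their paths and parents.
    # "Program Files (x86)" in the path marks the 32-bit child on a 64-bit box.
    import psutil

    for p in psutil.process_iter():
        try:
            if p.name().lower() != "iexplore.exe":
                continue
            parent = p.parent()
            print(f"PID {p.pid}  path={p.exe()}  "
                  f"parent={parent.name() if parent else '?'} (PID {p.ppid()})")
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue  # processes come and go; some are off-limits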

And of course you realize that if IE is the default browser, and hasn't been launched when you click a link in an email application (such as Outlook), then Outlook will be the parent of IE, rather than Explorer.  Another normal.  You get it.  Here's how that pans out:



So, the user clicks a link in email (outlook.exe), which then spawns the browser (iexplore.exe).  Just as I noted above.  What's cool here is that the command-line parameters (on the right in the screenshot) show where the user was being taken.  Good stuff.

So anyway, the other day I thought I'd do some digging into all the times that Explorer is NOT the parent of IE; partly I wanted to challenge my knowledge, but also to see if I had an opportunity to find any evil, or build query filters that would help separate the signal from the noise for evil in the future.  I ran some queries to find all instances of IE in my environment where Explorer was NOT the parent process.  There are actually quite a few - you might be surprised.  The predominant ones were:  iexplore.exe, svchost.exe, and outlook.exe.  Okay, we've already discussed the first and last of those, but the middle?  Do what?

First, let's revisit the first one, because this is not the same as above; this is 32-bit on 32-bit action at its tabular best:



Parent process is on the left, child on the right.  Then in the middle on the right, you have the SCODEF and CREDAT references, which indicate an IE tab.  SCODEF points to the PID of the parent.  If you look back up at the ProcessHacker screenshots above, you'll see the parent IE is PID 7760; this is referenced by SCODEF for the child process.  And it's not just me, covered in Cheetos (R) dust, making this stuff up.  Here's a reference (granted, for IE8) from MSDN Blogs.
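
You can even verify the SCODEF-matches-parent-PID behavior programmatically, building on the psutil sketch above:

    # Check that each IE tab's SCODEF: value matches its actual parent PID.
    import psutil

    for p in psutil.process_iter():
        try:
            if p.name().lower() != "iexplore.exe":
                continue
            for arg in p.cmdline():
                if arg.upper().startswith("SCODEF:"):
                    scodef_pid = int(arg.split(":", 1)[1])
                    verdict = "OK" if scodef_pid == p.ppid() else "MISMATCH?!"
                    print(f"PID {p.pid}: SCODEF={scodef_pid} "
                          f"ppid={p.ppid()} {verdict}")
        except (psutil.NoSuchProcess, psutil.AccessDenied, ValueError):
            continue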

SVCHOST!  SVCHOST!  WE WANT SVCHOST!  

Yes, yes, I promised you svchost.exe as a parent to IE.  Now, let's pause for a second and remind ourselves what is normal for svchost.exe, too.  While there may be multiple instances of svchost.exe running at any one time, the parent will always be services.exe.  That's normal, and we all know it.  Okay, remember that.  There is a scenario - which, if you've spent some time reviewing the SANS DFIR Find Evil poster, you're aware of - wherein IE can be started via the command-line "-embedding" parameter; in this instance, the parent won't be explorer.exe.  Looks like this:



That -embedding switch is over on the right, and as you see on the left, the parent is svchost.exe.  Done, right?  No, not quite yet.  Take a peek at what's next...



What we're looking at here is just a wee bit different.  This isn't so much about IE, although that's come into play too, but more to the point of svchost.exe and its parent.  If you see a parent other than services.exe, you may start getting concerned about malware.  Then you see rpcnet.exe, which sounds legit (ish), but still isn't normal, and you're probably more concerned about malware, since malware often uses names similar to legit names, so as to look "normal."  In addition, this rpcnet.exe is signed, and we all know that signed code is used to bypass detections in antivirus, HIPS, and other products.  So, is this malware?

Well, opinions might vary, and it certainly behaves like malware.  However, it is - in this instance - normal and legit.  It's actually associated with embedded tracking software to help deal with stolen computer assets.  Of course, while I might "know" that, someone could be trying to get one over on me by masquerading their malware as a process I'd expect to see, so how could I further verify that?  If you know anything about this tracking software, it's not designed to be "normal" and is difficult to validate - it really truly does operate very much like malware.  So I'd most likely have to turn to other sources of evidence, possibly even packet captures, to see where it was going and how it was communicating.  If you want to know more about why validating your findings matters - and how to do so from multiple types/sources of evidence - come to my talk (To Silo, or Not to Silo: That is the Question) at the SANS DFIR Summit in Austin this June.

Anyway, the SANS DFIR Find Evil poster talks about knowing what "abnormal" is, but in order to know that, you have to know what "normal" is.  Old story, but that's the same way people are trained to spot counterfeit money - know what "good" money looks like, to be able to spot what's not.  When it comes to normal with computers, and especially in enterprises, there are "global" norms and "environmental" norms.  The globals are things like the 32-bit spawn from the 64-bit parent IE, the SCODEF references for child tabs (which includes the home page, by the way), and Outlook links spawning instances of IE to reach the websites.  Environmentals aren't out to save the computer, but are things like tracking software sitting in between services.exe and svchost.exe.  If you know what those are for your world, you'll be much better off when it comes to finding evil, and separating the signal from the noise.
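
And once you've catalogued your norms - global and environmental alike - they're scriptable.  Here's a minimal sketch encoding the "global" parent/child rules from these posts; you'd extend the table with your own environmentals (like rpcnet.exe as an svchost.exe parent, per the tracking-software example):

    # Flag deviations from known parent/child norms.
    import psutil

    EXPECTED_PARENTS = {
        "svchost.exe":  {"services.exe"},
        "lsass.exe":    {"wininit.exe"},
        "iexplore.exe": {"explorer.exe", "iexplore.exe",
                         "outlook.exe", "svchost.exe"},
    }

    for p in psutil.process_iter():
        try:
            name = p.name().lower()
            if name not in EXPECTED_PARENTS:
                continue
            parent = p.parent()
            pname = parent.name().lower() if parent else "<none>"
            if pname not in EXPECTED_PARENTS[name]:
                print(f"ABNORMAL: {name} (PID {p.pid}) spawned by {pname}")
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            continue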

Happy Hunting!