
Sunday, September 7, 2014

It's a Groovy Kind of Risk


This year at the SANS DFIR Summit in Austin, TX, I had the distinct honor and pleasure of presenting a talk entitled To Silo, or Not to Silo: That is the Question. The PDF of the slides is available here (direct download). All the other awesome presentations are up there as well, so make time to check them out if you haven't already.

Shortly after the Summit, I promised someone somewhere (or told, or maybe just suggested) that I would post the notes, or at least more details, about the talk. After all, we all know how entertaining it is to look at the slides of a presentation. Wow, great stuff, right? I think there are supposed to be videos of the talks somewhere or other, but if there was a post about it, I missed it, and mine might not've been taped anyway, and well, who knows. So basically, the point of this is to flesh out that presentation in a meaningful way for those who are readers of the word rather than hearers (and obviously, not everyone could - or even wanted to - be there). That said, my intent here is not to recreate the presentation (although I might steal a slide or two), but rather to build on it, and present the topic in a slightly (well, maybe more than slightly) different format. As an aside, you might be wondering what took me so long to get this done. Well, just like a nice single-malt scotch, some things must age to perfection, and not leave the cask for bottling until they're just right.

A little background first, to help set the stage, and fair warning - this may be a bit long, and I may break it up into multiple posts (or I may not). Also, this is a blog post, not a white paper or news article, so it will be more "conversational" in nature. Hopefully, you will find it worth your while to soldier on through it. The genesis for the talk actually came from last year's Summit, with Alex Bond's lightning talk about combining host and network indicators.  This made a lot of sense to me, and I thought it could be a full talk; plus, it falls in line with what I spend a lot of my time doing for a living.

First Things

Starting off, my focus was on the need to broaden our horizons from an evidence perspective; if we only look at host images, or RAM, or firewall logs, or netflow, or (the list goes on...), and we don't consider other sources, we're selling ourselves short. There are a couple of difficulties with this type of approach, and I think it bears calling them out now:
1. Not all evidence types are always available. This could be because they don't exist, or because you're not provided access to them.
2. Not all analysts/investigators/whatever you want to call them have in-depth knowledge, skills, and abilities with all evidence types.

Both those things are limiting factors, and so I started building from the standpoints of:
1. Dealing with the evidence you have, and expanding where you can.
2. Knowing how to deal with the evidence types available to you, and how to expand those.
3. Understanding that if you can't/don't/won't, you're selling yourself and your client (internal or external) short.

To me, these things all related to siloing oneself, and so I came up with the title I did, way back last year (I had to have the title before I could submit the talk in the first place). I mention that mainly because Jack Crook has a great blog post, very similarly named and touching on some of the same concepts, from May of this year. Read it, it's good, as is the norm for his blog. Just know that these were both conceived independently of one another; it must be a "great minds think alike" sort of thing, if I may in any way lay claim to that adage.

However, as I delved more into the topic at hand, I added another piece, which I feel it all really boils down to, and which, if we ignore it, can REALLY silo us. It's one that business people can relate to (which is very important for us in our line of work), and which really guides our decision-making processes in Information Security as a whole. You haven't guessed yet? Well, it's risk. That's right - risk. Virtually all of the decisions we make in the course of DFIR work are based on, or informed by, risk. The "problem" is, we don't tend to see it that way, and that's odd to me, because in InfoSec we talk about it all the time (it's how we relate "bad things" to the business, get money for projects, tell people no, tell people yes, get hated/loved/ice water dumped on, so on and so forth). To be honest, I'm guilty of that as well - I could easily quantify various "needs" in that respect, but it really wasn't until I started working on the presentation that I started seeing the correlation to the topic of risk.

Is risk really such an odd topic for us? I honestly don't think so, it's just we don't think of it in those terms. We'll take something really simple - would you close your eyes and attempt to walk across a busy intersection? Most likely not, but why? Because it's "stupid" or "idiotic" or a "good way to get killed"? Doesn't it really boil down to a risk decision, though? The risk of getting mowed down by a speeding motorist in a 2000-lb vehicle is greater than the reward of saying you crossed the intersection with your eyes closed. It's not that it's "stupid," it's just too risky for most people.

All the Things

So let's start to put it in the context of DFIR, and the scope of my Summit talk. In the presentation, I started off with a slide showing some different broad sources of evidence: Systems, Network, Cloud, Mobile; with the "Internet of Things" we may need to start adding in things like Appliances and Locks as well. Anyway, within those broad categories or families, there are subsets of types of evidence, such as:


Now, obviously there are many more than that, and some (such as Reverse Engineering/RE) aren't exactly evidence per se - but the idea was to start to get the audience thinking about the things they do during a given investigation (which may vary considerably, depending on the type, scope, and sensitivity of the matter at hand). I'm pretty sure that there are folks who don't regularly touch all, or even most, of just these few. With that in mind, do you know these and more in great detail? If you were handed one at random, would you know what to do with it? Would it make you uncomfortable? What if you were asked where to find it during an investigation? You don't have to answer out loud - again, the point is to get us all thinking. If you think of each of these (or other) types/sources/etc of evidence as languages, wouldn't you want to be fluent? Don't you think it would be valuable? That's the first point.

In the preso, I illustrated this point - that of Knowledge, Skills, and Abilities (KSAs) - by taking everyone back to their days of role-playing games (I realize for some this might still be reality). Not modern MMORPGs, but old-school things like A/D&D, with character sheets, a bag full of dice, a DM (Dungeon Master, not direct message) and a bunch of chips and salsa. Yes, I know, for some there were probably "other" substances involved, but this is a family show, okay? Anyway, back in those simpler times, I always wanted to be more than just one character class during an adventure, especially if there were only a handful in the game (kind of like most DFIR teams); with only one of each of a few classes, if someone got hurt, killed, or otherwise taken out of action, it was a disaster (in InfoSec terms, a single point of failure). I mean, if your thief got caught and killed while picking a pocket, who was there to open locks or detect traps for the group? But, if you had a fighter/thief as well, then you have at least somewhat of a backup plan (again in InfoSec terms, a Disaster Recovery and Business Continuity/DRBC plan, and not just a single point of failure). So it's one thing to know one thing very well, but it brings more value, and broadens the overall potential of the group (or DFIR team), if you have folks with a broader skill set, such as a dual-class human or multi-class non-human. In this context, we're talking about people who can take apart a packet capture, reverse-engineer a binary, parse a memory dump, and so forth - they're not stuck with just one thing. This was the point that Jack raised in his blog post, and he draws it out very well.

Shelly Giesbrecht did a presentation at the Summit this year about building an awesome SOC, available here (direct PDF download). In a SOC, it's pretty common to have each member focused on a single monitoring task - firewall, IDS/IPS, DLP, AV, etc - and while that can provide a level of expertise in that area like Elminster does magic, it doesn't produce a very well-rounded individual (can the AV person fill in on the pcap side?). As Shelly mentioned in her talk, the counter to that is to try to expand the knowledge base, but at the expense of actual abilities - we become jacks of all trades, but masters of none. This runs directly counter to what the greatest swordsman in all of history (no, not Yoda - Miyamoto Musashi) wrote in his Book of Five Rings - that in order to truly be a master of one thing (such as swordsmanship), you had to become a master of all things (poetry, tea ceremony, carpentry, penmanship). Troy Larson, in his keynote address at the Summit (direct PDF download), brought up the concept of using the whole pig. If you don't know about the whole pig, you can't use the whole pig - that's the first point again. And if you don't have the whole pig, or don't look at parts of the pig, then you're missing out. That's the second point.

A Puzzling Equation

Alex's lightning talk brought up the topic of using multiple sources of evidence - specifically host-based and network-based data - to better understand an attack. Yes, that's right - he was using more than one part of the pig (Troy would be proud, I'm sure). But as we saw earlier, there are more sources than just host/systems and network, and a multitude of evidence types within those, and that's where it starts to get a little more complicated, at least for some (and in some cases). The reason I say that is that I know people who, for whatever reason, focus during an investigation on a single type of evidence or analysis, even when they have the skills to expand on it. For instance, they may just look at network logs, or a disk image, or volatile data. Each of these things can bring incredible value to an investigation, but individually, they're limited; if you don't expand your viewpoint, you're missing the bigger picture. I'll flesh that out with a puzzle illustration. We've probably all put together at least one puzzle in our lifetime, and even if it's not a normal occurrence for us, we understand the basic concepts (if not, wikiHow lays them out in a very simple format here).

Imagine you've been handed a pile of puzzle pieces, perhaps it looks something like this:

(Source: http://opentreeoflife.files.wordpress.com/2012/10/puzzle2.jpg)


In other words, you have no idea how many pieces there are (or are supposed to be), nor what it should show when it's all put together. In case it's not perfectly clear, this puzzle is the investigation (whether it's internal/corporate, external/consulting, law enforcement, military, digital forensics, or incident response). The end goal is being able to deliver a concise, detailed report of findings that will properly inform the necessary parties of the facts (and in some cases, opinions) of what happened in a given scenario. If we take a bunch of the pieces out and put them in another box somewhere, not using them, that's probably not going to help us put it all together (so if you ignore RAM, or disk, or network...). If we follow the wikiHow article and start framing in the puzzle, then start taking guesses as to what it represents (or what happened during the commission of a crime, etc), then we're missing the bigger picture. Get it? Picture? The puzzle makes a picture - see what I did there? Heh heh heh.  ;-)

I mean, this probably includes sea life, but we don't know for sure what is represented, and certainly can't answer any detailed questions about it...

(Source: http://www.pbase.com/image/9884347)


What if we start to fill more pieces in? When can we start to (or best) answer questions? Here:

(Source: http://piccola77.blogspot.com/2010_05_01_archive.html)


Here:

(Source: http://3.bp.blogspot.com/-wviPW6QWJiA/U_fTcSUKoUI/AAAAAAAAZjg/KLTKLJYSnQs/s1600/Lightning%2BStriking%2BTree%2B2%2B-%2B1000%2BEurographics.jpg)

or here:

(Source: http://moralesfoto.blogspot.com/2011_11_01_archive.html)


Pretty clearly the last one gives us the best chance of answering the most questions, but we could still miss some critical ones, because there are substantial blank areas. Sure, it appears to be foliage that's displayed in the background, but is it the real thing, or a reflection off the water? Is it made up of trees, bushes, or a combination? Is there any additional wildlife? What about predators? Imagine you're sitting down across from a group of attorneys (maybe friendly, maybe not), and those gaps are due to evidence not analyzed in the course of your investigation. Ouch...

Now, there are multiple facets to every investigation, and within each as well. There are differences between eDiscovery (loosely connected to what we do), digital forensics, and incident response, and those can probably all be argued to the nth degree and until the cows come home. I get all that, and am taking those things into account; I'm trying to paint a broader picture here, and get everyone to think about associated risk. In the end, it really is about risk, and I'll get to that. For now, let's list out a few scenarios that challenge the "all the pieces" approach.

  1. There isn't enough time to gather all available evidence types. This is probably most prevalent in IR cases, where time is of the essence; you can't image 500 systems that all have 500GB hard drives when you only have two people working on it, and executives/legal/PR/law enforcement need answers - fast.
  2. There aren't enough resources to gather all available evidence types. Again, very common in IR cases, where you have small teams, responsibilities are divided up, and KSAs may be lacking. We talked about that before.
  3. All evidence is not made available to you. This factors in across the board, and comes into play in pretty much every investigative role (corporate, consulting, LE, etc). This could be because:
    • The business/client/suspect is trying to hide things from you.
    • The people/groups in charge of the evidence are resistant/can't be bothered/etc (I've had CIOs refuse to give me access to systems because it was "too sensitive" and we ended up not gathering certain potential evidence).
    • The evidence simply doesn't exist (systems/platforms don't exist, policies purge logs w/o central storage, power was shut down, it was intentionally destroyed, etc).

Risky Business

This is where we get to that part that didn't really dawn on me until I was well into building the presentation. Initially, the presentation was going to walk the audience through various investigative scenarios, to show how it was important to know how to handle different types and sources of evidence, and how without doing so, you could be missing the bigger picture (or the finer details within the picture, such as Mari DeGrazia's talk on Google Analytics cookies - direct PDF download). I still accomplished that, but also added in the new element, that of risk.

I can see it in your eyes, some of you are confused about what this has to do with risk. Wikipedia explains risk in part as "...the potential of losing something of value, weighed against the potential to gain something of value."  It's a very familiar concept in financial circles, especially with regard to the return on investment (ROI) of a particular financial transaction. As such, it's very commonplace in businesses (especially mature ones), along with executives and business leaders. Information Security uses risk management as a means (among other things) to help quantify and show value to the business, especially preemptively or proactively, to help avoid increased costs from a negative occurrence (such as a breach) down the road. Businesses understand that, because they can recognize the cost associated with a breach, with damage to brand, lawsuits, expenses to clean up, and so forth. Okay, great, that makes perfect sense - but how does it apply to an after-the-fact situation in DFIR? Well, remember our two main points to which the risk pertains? Lack of knowledge, Lack of Evidence. I'll give some examples under each, for how risk ties in (please be warned - these won't be exhaustive).

Lack of knowledge/skills/abilities - personnel lacking a broad base of expertise in dealing with multiple types of evidence or investigations spanning computers, networks, cloud-based offerings, mobile technologies, etc.
  1. Requires additional/new internal or external (consulting) staffing resources, which cost money.
  2. Takes longer to complete investigations, which costs additional money, directly and indirectly (fines and fees, for instance).
  3. May result in inaccurate findings/reports/testimony, and could result in sanctions, fees, fines, settlements, etc.
  4. Loss of personnel who seek other positions to get the training/experience they know they need. 
  5. Inability to spot incidents in the first place, leading to additional exposure and associated costs.
  6. Training staff to achieve higher levels of expertise in new areas costs money.

These are pretty straight-forward, no-brainer sort of things, right? I think we can all see the importance of being a well-rounded investigator; it makes us more valuable to our employer, and helps us do our jobs more effectively. Win-win scenario.

Lack of evidence - whether evidence is missing/doesn't exist, inaccessible or not provided, or simply overlooked/ignored.
  1. Inability to answer questions for which the answers can only be found in the "missing" evidence; can result in additional costs:
    • Having to go back after the fact and attempt to recover other evidence types (paying more consultants, for example).
    • Potential sanctions, fines, fees due to failure in fiduciary duties, legal responsibilities, and regulatory requirements.
    • Loss of personal income due to loss of job.
  2. Potential charges of spoliation, depending on scenario, and associated sanctions, fines, settlements.
  3. Loss of business due to lack of appropriate response, brand damage, court costs/legal fees, etc (everyone out of a job). May seem drastic, but smaller businesses may not be able to bear the costs associated with a significant breach, and when part of those costs stem from inappropriate response...
  4. May take significant time and money to collect and examine all potential/available evidence; the cost of doing so may be more than the cost of not doing so.

The whole "lack of evidence" area is where I tend to see the most resistance within our field, so I'll try to counter the most common objection. I'm not saying that if we don't collect and analyze every single possible source and type of evidence on every single investigation of any type, that we're not doing our jobs. What I'm saying is that to the extent it is feasible and reasonable to do so, we need to collect and analyze the available and pertinent evidence in the most expedient manner, based on the informed risk appetite of the business.

There, I think that should start to set the stage for the next piece of the conversation. In our areas of lack of knowledge and lack of evidence, it's not necessarily a "bad" thing for them to exist one way or the other. What is a "bad" thing is to take certain courses of action without engaging the proper stakeholders in a risk conversation, so that they can make an informed decision on how doing one thing or another may negatively (or positively) impact the business. That's what risk management is all about, and now that we've seen that our actions can introduce new risk to the business, we need to start engaging the business on that level. A big piece of the puzzle here is that we, the DFIR contingent, are not really the ones to determine whether or not we only need to collect a certain type of evidence, or whether the lack of a certain type of evidence has a significant negative impact on an investigation. That's up to the business, and it's our job to inform them of the risks involved, so that they can weigh them accordingly in the context of the overall goals and needs of the business (to which we are likely not privy).

For example, in an IR scenario, we may not think it makes a lot of sense to image a bunch of system hard drives, due to the time it takes. We inform the business of the time and level of effort we estimate to be involved, and the impact of that distracting us from doing other things that may have more immediate relevance (such as dumping and analyzing RAM, or looking at pcaps from network monitoring). The business (executives, legal, etc) on their side, are aware of potential legal issues surrounding the situation, and know that if system-based evidence (of a non-volatile nature) is not preserved in a defensible fashion, the company could be tied up in legal battles for years. They determine that the cost/impact (aka, "risk") of ongoing legal battles is greater than the cost/impact (aka, "risk") of imaging the drives, so they provide the instruction to us. If we hadn't broached the subject and had a risk-based conversation with the appropriate stakeholders, we might have chosen based on our perspective, and incurred significant costs for the business and ourselves down the road.

So, am I saying we shouldn't make intelligent decisions ourselves? Should we do nothing until someone makes a choice for us? Please don't misunderstand me; by no means should we do nothing. But what we do should be tempered by the scenario we're in, and the inherent risk (to ourselves and others), juxtaposed against the risk appetite of those who are paying us. After all, let's be honest - if a business is made to look bad or incur significant cost (whether through an incident response scenario, or some other investigation or legal action), most likely a "heads will roll" situation will arise. Professionally, it's our job to help ensure the business is well-informed, prepared, and protected from something like this happening; personally, it's our job to make sure it isn't our heads on the chopping block (C-levels may just move to another home, but if those who do the work get "branded" it may not be quite as easy). If you do a pentest engagement, what's the first thing you get? Your "get out of jail free" card, or a properly scoped and signed statement of work, which authorizes you to do the things you need in order to accomplish the mission. Think of what I'm saying from that angle: by making DFIR another risk topic, you're protecting yourself, your immediate boss/management, and the company/employer. There are other benefits as well - expectations are properly set, you have clear-cut direction, and can hopefully operate at peak efficiency. This keeps everyone happy (a very key point) and reduces cost; you gain visibility and insight into company needs and strategy, and are positioned to receive greater appreciation from the business (which can obviously be beneficial in a number of ways).

Last Things

Now that we've wrapped up the risk association aspect, and everyone agrees with me 100%, we can frame in the two original areas of conversation - lack of KSAs and lack of evidence. I think the first is a given, but the second is the gray area for most folks. I've had numerous conversations around this concept, online and in person, and so far the puzzle analogy seems the easiest to digest. If you're putting together a 1000-piece puzzle without the box or picture of the completed puzzle (isn't that pretty much EVERY investigation you've ever done?), no matter how much you *think* you know what it is, you don't truly know until it's done. Attorneys and business management/executives want answers, and those can't be halfway formed, because they're making costly and potentially career-limiting decisions based upon what we say. So if you're only 25% done with the puzzle, you can't answer all the questions. If you limit yourself to 25% of the puzzle (or available evidence), or you're limited to that amount by other parties, you're limited in the information you can provide. If you're stuck with the 25% (as by forces outside your control), then you do the best you can, and inform the business - they might be able to apply pressure to get you access to more evidence (but if they don't know, they can't help you).

Let's look at the flip side of that briefly. If you're in an investigation (of whatever type), and there are 500 systems with 500 GB, 5400 RPM hard drives and only USB 2 connections; 10 TB of pcaps, 4 TB of logs from network appliances and systems, 20 servers spread across the country with 4 TB of local storage each and 400 TB combined network storage (where evidence might be), total RAM of 4 TB (plus pagefile and hiberfil), 2 people to get the work done and 1 week to do it, you're probably not going to be very successful. You'd really need near-unlimited resources and time, which just isn't the reality for any of us. But the reality also is that in this imaginary scenario, even with substantial resources, we'd still need to inform the business of the associated risks, so that they could help establish the true requirements, guidelines and timelines, and ultimately help us help them (note: it is sometimes necessary for us to guide the business through this process, to help them understand the point we're trying to get to). It really doesn't matter whether we're internal or external - our jobs put us in partnership with the business (unless the business wants us to lie or fabricate the truth, which becomes a completely different discussion that I won't get into here). 

The goal is to make the best use of available evidence, time, and resources, to help the business answer the questions they need to address. If we have the necessary KSAs, and help the business understand the risks associated with the scope of an investigation, we can reach the end goal in a much more efficient manner than if we just work in a silo. I'd love to talk more; if you have any questions/comments/concerns, comment here or hit me up on twitter. Until then...

Think risk, and carry on.




Friday, May 11, 2012

SANS DFIR Summit 2012 - Austin, TX

The SANS #DFIRSummit in June is almost here, and those of us who are involved have been asked to share a little bit about what's going on. First, I'll give you the pertinent (aka, dull and boring) info, then move on to the juicy stuff.

Who: SANS (throwing the party)
What: 5th Annual Forensics and Incident Response Summit (aka, #DFIRSummit)
When: Tuesday, 26 June and Wednesday, 27 June, 2012 (ie, next month)
Where: Omni Hotel Downtown Austin
Why: Because it's a great event - networking, learning, good times (aka, DFIR "heaven on earth")
How: A lot of work by SANS, some generous sponsors, and incredible speakers (just can't be beat)

There's another "who" and that's the speakers. Detailed bios and the event schedule are on the website, but here's a quick breakdown:
Keynotes by Detective Cindy Murphy (Madison Police Department) and Harlan Carvey (Chief Forensics Scientist at Applied Security, Inc.). Probably everyone knows Harlan from his books, and because of regripper, so he won't need much in the way of introduction. Cindy may not be as well known, so if her name doesn't ring a bell, look her up - she's heavily involved in CDFS, and has done some incredible pioneering work in the field of digital forensics.

The speakers over two days, in two separate tracks (last year there was only one track) are:
- Windows 8 Forensic Artifacts - Kenneth Johnson
- Analysis and Correlation of Macintosh Logs – Sarah Edwards
- Practical Use of Cryptographic Hashes in Forensic Investigations - Pär Österberg Medina
- Reasons Not to “Stay in Your Lane” as a Digital Forensics Examiner – Alissa Torres
- Digital Forensics for IaaS Cloud Computing – Josiah Dykstra
- Carve for Records (Not Files) – Jeff Hamm
- Android Memory Acquisition and Analysis with DMD and Volatility – Joe Sylve
- Sniper Forensics v3: Hunt – Christopher Pogue
- Decade of Aggression – Christopher Witter
- Passwords are Everywhere – Hal Pomeranz
- Recovering Digital Evidence in a Cloud Computing Paradigm – Jad Saliba
- Anti-Incident Response – Nick Harbour
- Automating File Analysis - Pär Österberg Medina
- Mac Memory Analysis with Volatility – Andrew Case
- Digital Dumpster Diving – Lee Reiber
- When Macs Get Hacked - Sarah Edwards
- Evidence is Data: Your Secret Advantage – Jon Stewart
- Taking Registry Analysis to the Next Level – Elizabeth Schweinsberg
- Tales from the Crypt: TrueCrypt Analysis - Hal Pomeranz
- Security Cameras: The Corporate DFIR Tool of the Future – Mike Viscuso
- Exfiltration Forensics in the Age of The Cloud – Frank McClain

But wait, there's more! Looks like 21CT is sponsoring several events, including some spectacular after-hours venues; there are lunch & learns (which reduce per diem expenses for the budget-conscious), a breakfast, Forensic4Cast Awards, and SANS360 (a little over half-way down the page, just before the "NetWars" section). SANS360 is a lightning talk event, where each speaker has just 6 minutes (360 seconds) to present their topic. In that line-up we have: Andrew Case, Kenneth Johnson, Cindy Murphy, Harlan Carvey, Hal Pomeranz, Kristinn Gudjonsson (extra points if you can pronounce his name properly), Corey Harrell, Melia Kelley, Tim Ray, Alissa Torres, and David Nides.

Now back in the speakers list, you might have noticed a familiar name (they saved the best for last), and I thought I'd give you all a little overview of what my talk is about. As you all probably know, I spent a lot of time last year researching the footprint of Dropbox, the popular file-sync service. This came out as a multi-part kind of thing, with some initial research posted on the SANS blog, a more detailed article published on ForensicFocus, a post or two here, and some artifacts over on ForensicArtifacts. Links to all of those are here. I'd been thinking about that for a while, because I had used that service myself, and saw how easily it could be abused - especially in smaller organizations - for people to steal data. We're used to folks using thumb drives or webmail to get docs out, but what if they just kept them in a directory on their computer, and that directory was sync'd to the cloud and possibly other computers (or mobile devices) outside of the company's control?

Last summer I moved out of the consulting realm and into a corporate investigative setting. Thinking about how attackers exfiltrate data led me to realize that these types of services could potentially be exploited that way, as well as used by insiders. And smaller orgs don't tend to have all the fancy monitoring and locked-down systems/networks that larger ones might (data loss prevention, application layer firewalls, deep packet inspection, reverse proxies with blocked websites, yada yada yada). So if users have local admin rights, and nothing on the network is stopping certain types of traffic, then what's to stop them from using things like Dropbox, Carbonite, and so on?

So anyway, I started over with Dropbox (applications change over time, right?) (Note: Yes, it did change), and have added several others. I wanted to give forensicators an idea of what kinds of artifacts to look for on these types of applications. The preso won't be as detailed as my prior Dropbox work (I might be talking for two days if that were the case!), and I'm not delving into things like prefetch, jump lists, user assist, and so on. I think those are areas we all know to look; I wanted to give a starting point specific to some of these apps, and hopefully get everyone's minds churning.

 At a high level, I'll be touching on things like:
- File locations/application signature
- Files of note (databases, logs, etc)
- Residue after uninstall (files, folders, etc)
- Network connections
- Traffic signature (from packet capture)
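
Just to make that first bullet a little more concrete (and this is only a toy sketch, not part of the talk): below is a small Python check for the Dropbox files I've written about elsewhere on this blog - config.db, host.db, filecache.db, sigstore.db - against a mounted XP-era user profile. The profile layout, and the idea of extending the table with other sync apps, are assumptions for illustration; the actual file locations for the other applications are what the preso covers.

```python
import os
import sys

# Toy sketch: point this at a mounted/exported XP-era user profile
# (e.g. "C:\Documents and Settings\username") and it reports which of the
# Dropbox artifacts discussed elsewhere on this blog are present.
# Extend APP_FILES with paths for other sync apps as you research them.
APP_FILES = {
    "Dropbox": [
        os.path.join("Application Data", "Dropbox", "config.db"),
        os.path.join("Application Data", "Dropbox", "host.db"),
        os.path.join("Application Data", "Dropbox", "filecache.db"),
        os.path.join("Application Data", "Dropbox", "sigstore.db"),
    ],
}

def check_profile(profile_root):
    for app, rel_paths in APP_FILES.items():
        for rel in rel_paths:
            full = os.path.join(profile_root, rel)
            status = "FOUND  " if os.path.exists(full) else "missing"
            print("[%s] %s %s" % (app, status, full))
    # Note: the user's actual sync folder (and its .dropbox.cache residue) is
    # recorded in host.db, so read it from there rather than assuming a path.

if __name__ == "__main__":
    check_profile(sys.argv[1])
```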

 I'm really looking forward to this event, and not just because I'm a speaker. I think it'll be an awesome time, and a great opportunity to get out and mix it up with the community at large. There's no other event quite like this!

 If you haven't registered yet, but are going to, please feel free (read: be encouraged to do so) to use the discount code "PrimeLending10" to save 10% off the registration fee. SANS has given each speaker a discount code to share, this year, and that one's mine (obviously, right?). And yes, I get a "li'l somethin'" if enough people use it. :)

I think that's about it. Like I said, I'm looking forward to it, and I hope to see many of you there!

Happy Forensicating!

Monday, March 12, 2012

I'm Goin' to Disneyland - Again!

Or ... What a year, what a year!

Not really Disneyland, but rather SANS DFIR Summit 2012 in Austin, TX. But let me back up and explain a wee bit first.

Last year this time I was working along at a small forensic consultancy as a senior analyst. I was able to get approval to attend FOR563 (Mobile Device Forensics) at the SANS Summit in Austin, but wouldn't be able to attend the Summit itself. Bummer, but the training was more valuable for the business in the long run. Anyway, in May that job disappeared on me, as such things happen on occasion. More bummer.

The DFIR community came around me with support, job opportunities, and in fact a way was made for me to attend the Summit directly (which I blogged about last year). No bummer there! I was able to meet a lot of great folks, see old friends, make new ones, network, and have a great time. I got a lot out of the event, and now I get to give back.

So in the interim, I've landed a corporate gig, which vastly increases my time at home with family, scheduling consistency, and so on. I have a good boss and it's a great gig all the way around.

But, to get around to the "giving back" part... I have been blessed with the opportunity to share the fruits of my research at the Summit this year, as I've been accepted as a speaker there. It's an incredible honor, and obviously very exciting! For those who might be concerned, I have no intention of making use of the term, "APT," unless I need to throw people off. :D

So here's the other thing. If you sign up at the link provided above, and use the discount code below, you'll get 10% off the Summit registration fee. No joke, it's for real! Act now, SANS expects this event to sell out quickly!

Discount Code: PrimeLending10

Hope to see you there!

PS: Just a quick update regarding the 10% discount. SANS is offering this through the speakers. They did not explain *why* in any great detail, although it seems obvious to me they want to increase attendance and think this will help. And perhaps whoever gets the most signups using their code, will be given a Ferrari. Or a SANS-branded thumb drive. Really, I'd like the Ferrari. ;)

Wednesday, July 6, 2011

Dropbox Forensics Follow-Up

Several months ago I started on a quest to research locally-created artifacts related to the use of Dropbox on Windows systems. This took several months of work as time allowed, in order to complete the outline I was following. This culminated in a blog post on SANS, a more complete article hosted on Forensic Focus, and a summary of artifacts on Forensic Artifacts. However, that's not all I have to offer on the subject. Yes, folks, for a limited time only, when you buy all three you get a fourth for free! That's a $19.95 value, included at no extra cost! (shipping & handling not included; residents of the UK must pay VAT - I know, it sucks)

At the end of the article (hosted on Forensic Focus), I wrapped up with some outstanding items, or possible other things to research. I have spent some more time going over some (only some, not all) of those; this follow-up post will cover my additional research:
1. Does unlinking (local or web) change the registry?
2. What impact does uninstallation have on the registry?
3. What are the various “hash” values; what do they signify?
4. Do the IP addresses vary with geographic area?
5. What data is transferred across the unencrypted connection?
6. Do the SQLite databases contain deleted entries, and how can those be parsed?
7. Are file/system IDs or encoded info stored in the databases, 'entries.log' or elsewhere?

1. Instead of doing ProcMon or RegMon by Sysinternals, I ran regshot 1.8.2 to create snapshots before & after each unlinking. Initially I kept getting BSOD'd every time it would scan the registry, but switching systems eliminated that issue. Ultimately there were no obvious registry changes related to the unlinking (local or web).

2. I used regshot before & after the uninstallation as well, and quickly identified 49 deleted entries (truncated here; complete on Forensic Artifacts):

HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\ShellIconOverlayIdentifiers\DropboxExt1\: "{FB314ED9-xxxx-xxxx-xxxx-xxxxxxxxxxxxx}"
HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\ShellIconOverlayIdentifiers\DropboxExt2\: "{FB314EDA-xxxx-xxxx-xxxx-xxxxxxxxxxxxx}"
HKU\S-1-5-21-xxxxxxxxx-xxxxxxxxx-xxxxxxxxx-xxxx\Software\Dropbox\InstallPath: "C:\Documents and Settings\username\Application Data\Dropbox\bin"
HKU\S-1-5-21-xxxxxxxxx-xxxxxxxxx-xxxxxxxxx-xxxx\Software\Microsoft\Windows\CurrentVersion\Shell Extensions\Approved\{FB314ED9-xxxx-xxxx-xxxx-xxxxxxxxxxxxx}: "DropboxExt"
HKU\S-1-5-21-xxxxxxxxx-xxxxxxxxx-xxxxxxxxx-xxxx\Software\Microsoft\Windows\CurrentVersion\Shell Extensions\Approved\{FB314EDA-xxxx-xxxx-xxxx-xxxxxxxxxxxxx}: "DropboxExt"
HKU\S-1-5-21-xxxxxxxxx-xxxxxxxxx-xxxxxxxxx-xxxx\Software\Microsoft\Windows\CurrentVersion\Uninstall\Dropbox\UninstallString: ""C:\Documents and Settings\username\Application Data\Dropbox\bin\Uninstall.exe""
HKU\S-1-5-21-xxxxxxxxx-xxxxxxxxx-xxxxxxxxx-xxxx_Classes\CLSID\{FB314ED9-xxxx-xxxx-xxxx-xxxxxxxxxxxxx}\: "DropboxExt"
HKU\S-1-5-21-xxxxxxxxx-xxxxxxxxx-xxxxxxxxx-xxxx_Classes\CLSID\{FB314EDA-xxxx-xxxx-xxxx-xxxxxxxxxxxxx}\: "DropboxExt"
HKU\S-1-5-21-xxxxxxxxx-xxxxxxxxx-xxxxxxxxx-xxxx_Classes\CLSID\{FB314EDB-xxxx-xxxx-xxxx-xxxxxxxxxxxxx}\: "DropboxExt"
HKU\S-1-5-21-xxxxxxxxx-xxxxxxxxx-xxxxxxxxx-xxxx_Classes\CLSID\{FB314EDC-xxxx-xxxx-xxxx-xxxxxxxxxxxxx}\: "DropboxExt"

I've x'd out the SIDs (and most of the CLSID segments) to (hopefully) make it easier to focus, and because I didn't want to post the full SIDs on the internet. I left the first segment of the CLSIDs since that part makes a noticeable, incremental change.

3. There is actually a correlation between "hash" values in the various config files. It should be noted that Dropbox hashes the files in 4MB chunks, and stores the hashes the same way (base64 encoded). Thus, there may be multiple hash values for a single file (but only when it's larger than 4MB). Here's where I've followed the trail of hashes:
filecache.db: block hash field
entries.log: 5th section is the hash
sigstore.db: stores the hash (and size in bytes)
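
If you want to check a recovered file against those stored values, the mechanics look roughly like this. This is only a sketch: the 4MB chunking is established above, but the exact digest algorithm (SHA-256 is assumed here) and the base64 variant are assumptions you'd want to verify against your own filecache.db/sigstore.db data before relying on a match.

```python
import base64
import hashlib
import sys

CHUNK = 4 * 1024 * 1024  # Dropbox hashes files in 4MB chunks (see above)

def block_hashes(path):
    """Return one hash string per 4MB block of the file.

    Assumptions: SHA-256 digest, URL-safe base64 without padding; adjust if
    cross-referencing against the Dropbox databases doesn't line up.
    """
    hashes = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:
                break
            digest = hashlib.sha256(chunk).digest()
            hashes.append(base64.urlsafe_b64encode(digest).rstrip(b"=").decode())
    return hashes

if __name__ == "__main__":
    for i, h in enumerate(block_hashes(sys.argv[1])):
        print("block %d: %s" % (i, h))
```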

4. I know that some application updates will reach out to different servers based on geographic location, and I wondered if this was the same for Dropbox. Using NirSoft CurrPorts, it was easy to gather the active connections here in Texas. I had reason to take a trip to California, so I did the same thing there. Finally, I established a VPN connection to another country and checked the connections that way as well.

There were some minor variations between the locations for IP addresses, although host names remained largely the same. The one thing that did not change in any of these was the IP address and host name for the sole HTTP (unencrypted to port 80) connection.

5. So then there's the question of this single unencrypted connection. I had not previously examined the content of this traffic, but I have now, using Netwitness Investigator to isolate the connection stream of interest and export it for posterity and more review.

It's basically a "Hello, here I am" and "Let's keep the connection going" type of conversation. Of course, it's in clear text. My only concern is that it transmits the namespace ID (from config.db, root_ns), and possibly that of shared directories as well (there's a second entry that follows the namespace format, but I haven't been able to confirm that yet). With some of the Dropbox-related security issues that have recently come to the surface, I'm a little concerned about this data being transmitted in the clear, especially when I don't know for sure if it can be exploited (and since the IP address and host name are always the same).

6. Deleted entries within the SQLite database files can indeed be recovered. I suspected as much, but I'm not a DB (or SQLite) guru. Historically I've relied on others to develop a tool I can use for this purpose, and I've stuck to my guns in this instance. CCL-Forensics has a product designed for this purpose, called epilog; while it's a commercial product, there is a 7-day trial available.

I must say, it works quite nicely. I removed some files from my Dropbox folder just for this test (relocated to another directory), and then downloaded (have to register, but no sales personnel have contacted me yet), installed, and ran epilog. They have some videos on YouTube, but I found the info I needed in their Help file. There are some different methods to recover deleted entries, but I simply focused on the "Free Page Analysis," which parses the linked list, or freelist, within the database. It very definitely did what I needed it to do.

Edit: I intended to note that to export report-type info from Epilog, you basically have the option of going to an XML file, which may not be directly what you need. For me, I wanted to look at the data in a spreadsheet. Most methods to convert XML to CSV revolve around going through a couple of steps (ie, XSLT); I found XSlicer to be very helpful.
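
If you'd rather not install another utility, a generic flattener like the sketch below will turn most simple, record-oriented XML exports into a CSV you can open in a spreadsheet. It makes no assumptions about epilog's particular schema; it just treats any element whose children are all leaf nodes as one row, with the child tags becoming the column names.

```python
import csv
import sys
import xml.etree.ElementTree as ET

def xml_to_csv(xml_path, csv_path):
    """Flatten record-style XML (parent with leaf children) into a CSV."""
    tree = ET.parse(xml_path)
    records = []
    for elem in tree.iter():
        children = list(elem)
        # Treat an element as one "record" if all of its children are leaves
        if children and all(len(child) == 0 for child in children):
            records.append({child.tag: (child.text or "").strip() for child in children})
    if not records:
        return
    fieldnames = sorted({key for rec in records for key in rec})
    with open(csv_path, "w", newline="") as out:
        writer = csv.DictWriter(out, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(records)

if __name__ == "__main__":
    xml_to_csv(sys.argv[1], sys.argv[2])
```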

7. And yes, other encoded data does exist within different config files. Dropbox makes use of base64 encoding, and one of the key places is the "entries.log" file located within the ".dropbox.cache" directory inside the user's Dropbox folder. (This set of artifacts is discussed in more detail in the Forensic Focus article.) By cross-referencing with the various parsed database files, I was able to decipher the entries.log (pipe-delimited) file:
1st section is filename (as it exists in .dropbox.cache directory)
2nd section is root_ns/path
3rd section is unix epoch timestamp
4th section is size (bytes)
In addition, the host.db file, 2nd row is user's Dropbox path.
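
For those who like to see it spelled out, here's a minimal parser sketch along those lines. The field order comes straight from the list above; everything else (error handling, whether a particular field needs further base64 decoding) is an assumption to adjust against real data.

```python
import datetime
import sys

def parse_entries_log(path):
    """Split each pipe-delimited line of entries.log into the fields above."""
    rows = []
    with open(path, "r", errors="replace") as f:
        for line in f:
            parts = line.rstrip("\n").split("|")
            if len(parts) < 5:
                continue  # not a record line we recognize
            cache_name, ns_path, epoch, size, block_hash = parts[:5]
            try:
                # 3rd section is a unix epoch timestamp
                ts = datetime.datetime.utcfromtimestamp(int(epoch)).isoformat() + "Z"
            except ValueError:
                ts = epoch
            rows.append((cache_name, ns_path, ts, size, block_hash))
    return rows

if __name__ == "__main__":
    for row in parse_entries_log(sys.argv[1]):
        print(" | ".join(row))
```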

So that pretty much wraps things up. I did not do any research into alternate file transfer methods (I think Dropship has addressed that rather well), but I did note that if you share a file (Public folder) you can get the link to that file; that link can be transferred via email, IM, etc, and the file downloaded by whoever has the link.

Some other resources:
I've already mentioned epilog, which I think has great potential.

There's also Dropbox Reader by ATC-NY; it's a set of python scripts to parse the SQLite files (they pull from the Dropship project). In addition to something like a SQLite Browser, this can be very helpful to gather and cross-reference information.

Derek Newton has done some good research, hosted on his blog.
Forensic Artifacts
Security Issues

Great paper on cloud security (with focus on Dropbox) by SBA-Research; the actual download is here

I've mentioned the Dropship project a couple times, but it has been "officially" shut down. Research determined that it was possible to "share" files without using the Public folder, thus potentially facilitating illegal file-sharing. Although Dropship is no longer developed (by the originator) other forks can be found.

I think that's about it, folks. Unless something else comes up to pique my interest (I'm open to suggestions), I think I'm about done with Dropbox research for now. It's been a lot of fun going through this process, and I've learned a lot, which is also good. Hopefully this will all prove useful - to myself and others - in our forensicating efforts.

Friday, June 17, 2011

Dropbox Writeup Posted on SANS Blog

The "short" Dropbox writeup I mentioned previously is now posted on the SANS Forensic Blog.

Before too long - hopefully - the full article should be up on Forensic Focus. At the end of that one I listed several things I thought were outstanding in regards to artifacts. I've been working on those, and before too long - hopefully - will be posting those here.

I'm also just wrapping up reading Cory Altheide's book and am going to post a "review" of that as well. Not really into writing reviews, but I think it's worth it.

Friday, June 10, 2011

#DFIRSummit - Afterthoughts, Part 3

Who would've thought it would take 3 posts to summarize the Summit in Austin? Not me. I did the first one because I needed (personally) to start the process; I knew then that the main body would require some dedicated space (it could probably have been broken up into 2 posts just for that piece of it). But there remains something very important to cover - the "thank you" section.

First and foremost, many thanks to SANS and their hard-working people for putting on the event. Obvious thanks go to Rob Lee as the host, but he wasn't the only one there. There were people handling registration, audio-visuals, and presentation facilitation. Everyone did a great job, so thanks to all the SANS team!

In addition, there were vendors who helped make things happen. AccessData, Netwitness, and Fortinet all had a presence there (Infogressive was in the program, but I don't recall seeing a booth; conversely, Fortinet was not listed, but was there nonetheless). Netwitness sponsored a lunch & learn, and AccessData sponsored an evening reception.

All of the panelists and presenters also deserve thanks, for giving their time and efforts to be there and participate; I know all the preparations for that take a lot of time and mental effort. Some of them came not just from other States, but all over the world (Iceland, Canada, Nebraska... ;) ).

And last but not least, all the attendees deserve thanks. They took time out of their lives, work, etc to be there. I'm sure it wasn't a burden for anyone, but some of them came a very long way (Spain, Canada, Germany, etc) to be there.

Without everyone listed above, there would be no event. Many thanks to all of you!

I think this is the last post on the subject...

LM

Thursday, June 9, 2011

#DFIRSummit - Afterthoughts, Part 2

Okay, so now we're on to the "real" content. First let me start off by addressing something I overlooked last night. Congratulations go to Eric Huber and his AFoD blog for winning the Forensic 4cast award for "Best Digital Forensic Blog." I know Eric did not anticipate winning, but he did, and deserves it! I must also say that I was sadly disappointed that log2timeline did not win the "Best Computer Forensic Software" category. I'm not the only one; there was a lot of discussion to that effect at the Summit. It seems that Guidance Software had an active internal campaign that paid off more than anything we did for Kristinn. General consensus from the Summit seems to be that l2t was the winner anyway. That's right!

I'm basically going to run through each presentation in order and give a couple tidbits. Any more than that and I'll be here all night! So without further ado...

Day 1

Andrew Hay - 5 Point Palm Exploding Heart Technique for Forensics
This was supposed to be Mike Cloppert's slot, but he was tied up (not literally).
The 5 Points:
Host/Platform forensics
Network forensics
Data Reduction
Corroboration
Orchestration
The overall idea is that you need to try to combine or integrate the various segments into one for more effective/comprehensive investigations, since host-based can no longer really be the primary focus.

Chris Pogue - Sniper Forensics 2.0
DF is constantly changing. We have to be agile & adapt
DF is the most challenging forensics discipline because of the changes
The software tools you use in an investigation don't matter - your brain is your best tool.
You have to have a plan - this is *key* (and your steps should be consistent)
CLI is your friend. Yay, Chris! :)

Sean Morrissey - iOS Forensics
I have used Lantern and tend to prefer it over Mobilyze. However, I really would have liked more info about "iOS Forensics" (ie, important artifacts and how to use them) than a presentation about Lantern.
Putting an iPhone in airplane mode does not disable WiFi. So if you are acquiring one, remove the SIM, put it in airplane mode, disable WiFi & bluetooth, and use a Faraday bag if need be.
To recover/carve deleted entries from SQLite db, look for "de-referenced" items.

NetWitness Lunch&Learn (I think the presenter was Michael Sconzo, from their CIRT)
It was technical, not a sales pitch, and very much about results of network investigation for malware, as opposed to what NetWitness can do.
The main idea was to know what "good" or "benign" http sessions look like so you can quickly recognize anomalies. I think he actually mentioned something about reading RFC 2616; I don't remember anything after that point... Just kidding; it was very informative.

Hal Pomeranz - EXT3 File Recovery via Indirect Blocks
What can I say - you give Hal a command line, a hex editor, a Linux file system, and he just starts dancing!
File-carving assumes 100% contiguous data...
Indirect block pointers are not nulled out when a file is deleted (unlike direct pointers).
When decoded, they will point to the next block #.
Hal has some tools to automate the process of recovery, rather than manually follow the indirect pointers; it basically runs on top of TSK and calls those utilities as it needs:
frib (file recovery indirect blocks) - this works if you know where the file started, and can progress forward from there.
fib (find indirect block) - finds indirect block (by signature, within the block grouping you're targeting), then counts back 12 blocks to what should be the start of the file.
He has a whitepaper and the tools on Mandiant's blog
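
To make the "find it by signature" idea a bit more concrete - and to be clear, this is not Hal's tooling (frib/fib sit on top of TSK), just a toy illustration of the signature itself: an indirect block is nothing more than an array of 4-byte little-endian block pointers, typically non-zero, ascending, and mostly consecutive, padded with zeros at the end. A crude scanner under those assumptions (and an assumed 4KB block size - check the superblock for the real value) might look like:

```python
import struct
import sys

BLOCK_SIZE = 4096  # assumption - read the real block size from the superblock

def looks_like_indirect(block):
    """Heuristic: does this block look like an array of block pointers?"""
    ptrs = list(struct.unpack("<%dI" % (len(block) // 4), block))
    while ptrs and ptrs[-1] == 0:   # strip trailing zero padding
        ptrs.pop()
    if len(ptrs) < 4 or any(p == 0 for p in ptrs):
        return False
    pairs = list(zip(ptrs, ptrs[1:]))
    ascending = all(b > a for a, b in pairs)
    mostly_consecutive = sum(1 for a, b in pairs if b == a + 1) >= 0.9 * len(pairs)
    return ascending and mostly_consecutive

if __name__ == "__main__":
    with open(sys.argv[1], "rb") as img:
        blk = 0
        while True:
            data = img.read(BLOCK_SIZE)
            if len(data) < BLOCK_SIZE:
                break
            if looks_like_indirect(data):
                print("candidate indirect block at block %d" % blk)
            blk += 1
```

From a candidate like that, the fib approach of counting back 12 blocks to the presumed start of the file makes a lot of sense.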

RMO's were handed out by Rob Lee, to:
David Kovar - for AnalyzeMFT
Bamm Vischer - for sguil
Congratulations, guys!

Terry Maguire - IR Process & Smart Phones
As these phones become more common in the enterprise, we have to know how to handle them.
**Note: both android and iOS use a lot of SQLite db files.
-sqlite browser (sourceforge) is good, but no deleted entries will show
-epilog by CCL Forensics is designed to show deleted entries (not free, commercial product)
Android must be rooted to get access to any real information. This requires modifying the phone, though if you use something like z4root, that can be undone with the click of a button.
In order to get volatile data from iPhone, it will have to be jailbroken.
Blackberry cannot be imaged like other devices; removing & imaging chips might be possible. Blackberry file system can be mounted either through desktop manager or javaloader, but be careful; it's easy to destroy data! Blackberry Messenger SMS are not contained in IPD files; they can only be collected from mounted file system.
ABC Amber Blackberry Converter is now Blackberry Backup Explorer by Elcomsoft.

Mike Cloppert - Distinguishing IR from Computer Network Defense
He's in Andrew Hay's original slot.
APT & such are much more advanced than the traditional IR models developed a decade ago:
Highly aware (situational awareness)
Adaptive
Lots of tools
There may be multiple adversaries/attack vectors simultaneously or near-simultaneously.
Campaigns (by adversaries) may span several years.
The conventional IR model is based on the presumption of a successful compromise. If it's an "imminent threat" the model doesn't fit. The model is reactive, not proactive. Needs to be more proactive.
Have a monthly overview of reporting to help determine where to focus preventive efforts.

Day 2

Kristinn Gudjonsson - log2timeline
version 0.60 - the "killer dwarf" release - now works on Windows; instructions on how to install in docs/install (Chris Pogue created/tested documentation).
Rewritten engine, work is done on back-end.
It is more object-oriented, and has preprocessing modules.
With the front-end not doing processing, you can easily build your own, for integration into your own processes, customize default action, etc.
It now has a Skype parser. It includes code from regripper and regtime to automatically pull in all the registry data. And (drumroll, please) David Kovar's AnalyzeMFT has been imported as well, to parse the MFT. Of course, that means it had to go from python to perl, but we won't get into that.

Mike Pilkington - Protecting Privileged Domain Accounts during Live Response!
Mission: remote access to WinXP (SP2) workstation (no patches) for analysis/triage
wmic
psexec
net use
You don't want attackers who may be present to capture privileged credentials.
Do not use any type of interactive logon as this will cause a password hash to be stored locally. Running psexec creates a vulnerability for delegate-level access token theft. Don't set IR accounts as admin accounts; put them into different groupings and give those elevated privileges only as needed.

Panel: Professional Development in Digital Forensics and Incident Response
Lenny Zeltser, Richard Bejtlich, Ken Dunham, Joe Garcia, Bamm Visscher
Everyone had pre-formatted questions they spoke about, then it was open to questions from the audience. I will touch on one, for Richard: How do I build a computer incident response team? I thought the absolute key to it was his statement that you have to keep the groups tightly-knit and give the analysts what they need to do their jobs - training, equipment, etc. The best part was that he said you have to protect them fiercely. That's leadership! He had a blog post about this recently; it's obviously important to him.

Lee Whitfield - Digital Forensics and Flux Capacitors
Looking at reasons/ways people try to get out of trouble with their computers
Focus: Time/system clock alteration (as an excuse)
Top places to check at start of investigation
system event logs (except on XP, where it's not as important)
$UsnJrnl:$J
LNK files
Restore Points
Who is @gingerlover_17 Lee? ;)

Hal Pomeranz - EXT4: Bit by Bit
Changes in EXT4
48-bit address space
Uses extents instead of indirect block chains
64-bit nanosecond resolution timestamps
File creation time timestamp (born, or b-time)
Backwards compatibility design goal
Inodes expanded to 256 bytes (from 128)
Most of the offsets listed in Carrier's book still apply to EXT4
Hal dove right in with his hex editor, heads exploded, Hal danced, twitter was on fire, etc. It was a very good presentation!

Panel: Forensics in the New Cloud Frontier
Andrew Hay, Cory Altheide, Joe Garcia, Robert Lee, Ed Skoudis
The questions were sprung on the panelists w/o preparation. Wow.
Here's my take: The cloud is here. It's not leaving. You need to know what kind of alerts your cloud provides (to indicate compromise/issue, like gmail's alerts to different locations accessing your account). Distributed processing is going to be key to future analysis (think multi-GB log files). Make sure your cloud provides you with auditing capabilities, as logs are going to be the target of your analysis. Look at the kind of data you've needed from recent incidents, and see if you can get that from your cloud.
Then it was opened up to the audience's questions, including:


#dfirsummit Q for panel: Would you get a 4Cast award for staying within a reasonable budget while proactively responding using sniper forensics, five point palm methodology and log2timeline to analyze a mobile device running ext4 whose clock was reset using false domain credentials through the cloud?

Does that question not totally sum it up?

Oh, there was one more panel, the vendor panel. I had to leave right before that, so that's where my summary falls short. However, I think the last question for the previous panel is the best place to end...

LM

Wednesday, June 8, 2011

#DFIRSummit - Afterthoughts, Part 1

I think this tweet by David Kovar sums it up the best (it's the panel question quoted in full in the Part 2 post):



The only thing that was left out was #corn, but that's another story altogether! I was involved with corn, but I don't think it's my story to tell...

The background on the above tweet was that we had a break after Hal Pomeranz gave a VERY in-depth talk about EXT4, and brains were melting. And he was dancing. Twitter was on fire. Next up was a panel about "the cloud." A group of us on break decided we needed to submit a question that somehow encompassed every topic brought up over the course of the Summit. David Kovar tweeted it and turned in his question card; Andrew Hay read it as the last question, and Cory Altheide autographed his book.

Overall, the Summit was great. The speakers were awesome, it was a great group of folks, and dare I say, a good time was had by all.

I arrived Monday afternoon and went to the reception. When that was over, we all went to Shiner's with Rob Lee; there was a good group in attendance. And of course, I got to meet Kristinn, which is good for my "fanboy" status ;). For those who think forensicators need to be short, overweight, or bald (as declared by Chris Pogue), well, he's none of those. Of course, neither am I. This was my first time going to something like the Summit, and after a couple of introductions from Hal, we were all chatting like old friends. About corn. Very cool. I guess that's what happens when a bunch of crazy geeks get together; our minds are so similar that it's just natural to have a great time!

My wife's comment to me tonight when I got back and was sharing some stuff: "It's like you've died and gone to geek heaven!" She gets me; she so gets me...

Tuesday evening there was an after party at Buffalo Billiards and everyone got to hang out for more good times and conversation. Lest you all think it was nothing more than a big party with drinks galore, there were speakers, presentations, panels, etc. And corn. And that was all very good stuff. Naturally, props go to Kristinn (@killer_dwarf?) for a great presentation about l2t, and everything that tool can now do! However - hope this doesn't damage my status - my favorite was Chris Pogue's talk on "Sniper Forensics." I really enjoyed it, thought it was a great topic and material (and he's a good speaker).

That's it for now, as it's rather late, and I've got a technical webcam check in the morning ahead of a video conference interview in the afternoon. (I actually had to leave the Summit a little early to head back home and do my own webcam check first in preparation.) Tomorrow I will write up a post on my thoughts about the presentations.

LM

Wednesday, March 9, 2011

Timelines with Registry Data

I've posted before about log2timeline, and now I'm going to add to that a bit.

This is about incorporating regripper output into the timeline. SANS teaches about this in FOR508, and I'm going to "streamline" the process a bit. Without SANS (where I learned of the possibilities), Harlan Carvey (the source of regripper), and coffee, this would not be possible.

So here's the situation: I'm working an investigation with several systems, and starting by building timelines. Standard process - fls, l2t, mactime. But before working up the final bodyfiles and running mactime, I'm going to pull in registry data with regripper.

I'm working on my Linux system, which is Ubuntu-based. Using Lupin's great post here, I've incorporated the latest version of regripper into my system.

At first I was going to bash it together with variables, but in the end it seemed like there would be four or five, and I thought that was too much typing across several images. It seemed more efficient to type my command line once, then repeat and modify the couple things I would need.

So my evidence items are numbered sequentially, like XYZ-001, XYZ-002, and so on. Their respective file systems are mounted accordingly on /mnt/001, /mnt/002, ... The drive containing the images is a Truecrypt volume, so it's mounted on /media/truecrypt1. I'm storing my output files in a separate directory for each evidence item, named for the number and custodian, like /media/truecrypt1/001_jones, /media/truecrypt1/002_smith, etc. The bodyfiles I've already created are named like 001.fls, 001.l2t so I can easily tell which is which. My regripper output will continue in that vein with 001.reg.
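
For anyone following along, the fls bodyfiles above came out of the standard fls run; something along these lines, with the image name and partition offset made up for illustration:
# fls -r -m C: -o 2048 /media/truecrypt1/images/XYZ-001.dd >> /media/truecrypt1/001_jones/001.fls
Here -r recurses through the directory tree, -m prepends a mount point to the paths in the mactime-format output, and -o is the sector offset of the partition within the image.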

Now, the regripper documentation and SANS teach you to use it by pathing out to each hive, then redirecting the output somewhere. Something like:
# rip.pl -r /media/sda2/Windows/System32/config/SAM -f sam >> /media/sdb1/evidence/timelines/jones_regripper.txt
Something like that. But that's too much typing for me. And like I said, I thought about scripting it, but there are just too many variables for what needs to be done - source path, source file, module, destination path, destination file, ack pfft!

So to pull a page from my old ways with log2timeline (before timescanner), I did the following, from within my regripper directory:
# find /mnt/001/ -iname system | while read d; do ./rip.pl -f all -r "$d" >> /media/truecrypt1/001_jones/001.reg; done
Then all I have to do is replace "system" with "sam", "software", and "ntuser.dat", and that takes care of 001. For 002 I change the "1" to "2", change "jones" to "smith", and I'm good to go. Far quicker than typing all the paths each time, or plugging in several variables. If I wanted to get more creative I could probably script it to run through the various hives for each set of variables, and then just put in the set for the next custodian system. That might be okay, but I didn't feel like going that far today. Now that I think about it, I believe I will tomorrow, though.

This is no command-line kung fu, but I like it; it's better than a bunch of typing anyway. What it does: within the /mnt/001/ directory, find searches (case-insensitively, thanks to -iname) for files named "system". Each matching path is piped to a while loop that runs rip.pl against that file (-r). I had some errors trying to use the specific modules (-f), and being lazy (just like not wanting to type), I decided just to try "all." That worked just fine, no more errors. You'll see as it runs that regtime is called; I believe this is what creates the mactime-formatted output.

That takes care of my timeline pieces. Now to get a single bodyfile for each system... Within my output directory (/media/truecrypt1/001_jones):
# cat 001.* >> body.001
Note that I'm reversing the naming order here (the number comes last) to keep the combined files separate from the pieces. It helps me track the flow/progress as well, which I'll show next.

I'm trying to identify areas of activity (or inactivity) over several months so that I can focus in on details with a more limited timeframe. To do this I'll be running mactime and building a daily index. For 001, this looks like:

# mactime -d -z CST6CDT -m -y -b /media/truecrypt1/001_jones/body.001 2010-01-30..2010-05-24 -i day >> /media/truecrypt1/001_jones/index.001
This is just wrong. Shame on me. Here's how it needs to be:

# mactime -b /media/truecrypt1/001_jones/body.001 -i day /media/truecrypt1/001_jones/index.001 -d -m -y -z CST6CDT 2010-01-30..2010-05-24

This gives me a CSV file (-d) with a nice overview of total activity each day during my ~4-month period. Then hopefully I'll be able to focus in on a few specific days that look interesting (maybe a lot of activity, maybe very little). All these options are in the help info or man pages, but in brief: -b specifies a bodyfile; -i generates an index (either "day" or "hour") and takes the index output file as its argument; -d gives CSV output; -m and -y set the date format; -z sets the timezone; and last comes the date range I'm interested in. Again, all I have to do is change "1" to "2", "jones" to "smith" and move on to the next.

Now my output directory has the following files:
001.fls, 001.l2t, 001.reg, body.001, index.001
This makes it easier for me to keep track of the different sets of data, and their formats. The pieces that go into my main bodyfile start with the number, and the combined data files end with it. I know everyone has a different way; just wanted to share my logic/excuse.

I hope that can help someone. I should (hopefully) have some sort of shell script together tomorrow that would loop through the different hives for each custodian. There would still be several parameters to input initially, but it might work out to be faster going through four hives than up arrow, back arrow, replace... Once I have it I'll post that as well.
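
In the meantime, here's a rough sketch of the shape that script might take. It assumes the same layout as above (file systems mounted under /mnt/NNN, output under /media/truecrypt1/NNN_custodian); the script name, hive list, and date range are just placeholders from this case, so treat it as a starting point rather than a finished tool:

#!/bin/bash
# Hypothetical helper: loop rip.pl over the hives for one custodian system.
# Usage: ./rip_hives.sh <evidence_number> <custodian>   e.g. ./rip_hives.sh 001 jones
# Run it from within the regripper directory.

NUM="$1"
NAME="$2"
SRC="/mnt/$NUM"
DST="/media/truecrypt1/${NUM}_${NAME}"

# Rip each hive type found under the mounted file system into one output file
for HIVE in system software sam ntuser.dat; do
    find "$SRC" -iname "$HIVE" | while read -r d; do
        ./rip.pl -f all -r "$d" >> "$DST/$NUM.reg"
    done
done

# Combine the fls, l2t, and reg pieces into a single bodyfile
cat "$DST/$NUM".* >> "$DST/body.$NUM"

# Build the daily activity index for the period of interest (dates from this case)
mactime -b "$DST/body.$NUM" -i day "$DST/index.$NUM" -d -m -y -z CST6CDT 2010-01-30..2010-05-24

You'd still kick it off once per custodian (./rip_hives.sh 002 smith, and so on), but it beats the up arrow, back arrow, replace dance.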

LM

Note: Edited to correct mactime syntax to create index files.

Tuesday, March 8, 2011

Need More Cowbell (or, links to SANS)

So Rob Lee recently posted to the DFIR list that SANS ranks rather low on Google, as they're not stooping to "shadier" tactics such as setting up false sites that point to theirs, and other similar techniques. I say shadier since those kinds of SEO tricks (beyond your site's own metadata, search engine indexing, and so on) drive up your presence on the web, but at others' expense, and typically with sites unrelated to the actual content. That may not make a lot of sense, but that's okay.

The point of this post is to say that I've done my part. I already follow the SANS Forensics Blog, but now I've added that and their main forensics page under Favorite Sites. Just doing my forensivic duty (forensivic=forensic civic).

On a side note, Rob suggested searching for "computer forensics" on Google to see the results. I did so, and was pleased to see Lance Mueller's site on the first page of results. Obviously he gets a lot of traffic, and deservedly so!

LM