Thursday, March 10, 2011

Timeline Registry Automation Script

I just couldn't keep from writing the script. Just couldn't. After going through the process yesterday, then posting about it, I just kept thinking about how I could streamline it, so I took a little bit of time and worked up the following. I ran through it a few times to make sure it worked and tweaked it a little.

Please keep in mind, I'm no scripting guru (a la Hal Pomeranz et al.), so this may seem kludgy. But it does work. I could probably feed it a list of mount points and output files to fill in the variables and have it run through the whole of it, but that would probably take me more time to create and test (and fix) than it would for me to run this a handful of times (there's a rough sketch of that idea after the script, though).

So here it is, in all its (lack of) glory:



_________________________________________________________________

#!/bin/sh
#
# Script to automate regripper in linux for timeline creation.
# This is designed to be run from your regripper directory.
# This version of rip.pl is brought over from the Windows download to run in Linux based on http://grey-corner.blogspot.com/2010/04/running-regripper-on-linux.html
#
# This will automatically run through the 4 hives in a given mount point and write to the specified output file.
# By default, the 'all' module is run, rather than specific to hive type.
#
# $Src is the path (mount point) to be recursed for file in question
# $Dst is the path & file for regripper output (path must already exist)
# Order of operation should be ./rip.sh src dst
Src=$1
Dst=$2
#
#
# Check that the user provided all arguments required by this script.
if [ -z "$1" ]; then
echo "USAGE: rip.sh SOURCEDIR OUTPUTFILE";
exit 1;
fi
if [ -z "$2" ]; then
echo "USAGE: rip.sh SOURCEDIR OUTPUTFILE";
exit 1;
fi

echo

echo

#
# Begin the job, updating the user along the way.

echo "Parsing user hive ... Please be patient."

echo

find "$1" -iname ntuser.dat | while read -r d; do ./rip.pl -f all -r "$d" >> "$2"; done

echo

echo "Thank you for being patient."

echo

echo

echo "Parsing system hive ... This will only take a minute."

echo

find "$1" -iname system | while read -r d; do ./rip.pl -f all -r "$d" >> "$2"; done

echo

echo "See, I told you it wouldn't take long."

echo

echo

echo "Parsing security hive ... Just a second, it's almost done."

echo

find "$1" -iname sam | while read -r d; do ./rip.pl -f all -r "$d" >> "$2"; done

echo

echo "There! I can't believe you're so impatient."

echo

echo "Last one - the software hive ... Hold your horses, okay?"

echo

find "$1" -iname software | while read -r d; do ./rip.pl -f all -r "$d" >> "$2"; done

echo

echo "Okay, we're done now. Stop complaining; I worked as fast as I could."

echo
echo

echo "If you want to run another system, please start over"

echo

echo "Thanks for playing; have a nice day."
echo

# end of script
#

_________________________________________________________________
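
For what it's worth, here's a rough, untested sketch of the "feed it a list" idea mentioned above. It assumes a plain text file (I'm calling it jobs.txt, which is purely my own invention) with one "mountpoint outputfile" pair per line:

#!/bin/sh
# Hypothetical wrapper around rip.sh: read "mountpoint outputfile" pairs
# from jobs.txt (made-up name) and run rip.sh once for each pair.
while read -r mnt out; do
[ -z "$mnt" ] && continue
echo "Running rip.sh against $mnt -> $out"
./rip.sh "$mnt" "$out"
done < jobs.txt

Each line of jobs.txt would just look like "/mnt/001 /media/truecrypt1/001_jones/001.reg" and so on. Again, untested; just the general shape of it.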


Here's hoping someone can use it.

Cheers!

LM

Wednesday, March 9, 2011

Timelines with Registry Data

I've posted before about log2timeline, and now I'm going to add to that a bit.

This is about incorporating regripper output into the timeline. SANS teaches about this in FOR508, and I'm going to "streamline" the process a bit. Without SANS (where I learned of the possibilities), Harlan Carvey (the source of regripper), and coffee, this would not be possible.

So here's the situation: I'm working an investigation with several systems, and starting by building timelines. Standard process - fls, l2t, mactime. But before working up the final bodyfiles and running mactime, I'm going to pull in registry data with regripper.
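
(For reference, the fls piece of that standard process is just something along these lines. The image filename here is made up; -r recurses, -m writes bodyfile output using the given mount point prefix, and you'd add -o with the partition offset if pointing at a full disk image.)

# fls -r -m C: /media/truecrypt1/XYZ-001.dd >> /media/truecrypt1/001_jones/001.fls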

I'm working in my linux system, which is Ubuntu based. Using Lupin's great post here I've incorporated the latest version of regripper into my system.

At first I was going to bash it together with variables, but in the end it seemed like there would be four or five, and I thought that was too much typing across several images. It seemed more efficient to type my command line once, then repeat and modify the couple things I would need.

So my evidence items are numbered sequentially, like XYZ-001, XYZ-002, and so on. Their respective file systems are mounted accordingly on /mnt/001, /mnt/002, ... The drive containing the images is a Truecrypt volume, so it's mounted on /media/truecrypt1. I'm storing my output files in a separate directory for each evidence item, named for the number and custodian, like /media/truecrypt1/001_jones, /media/truecrypt1/002_smith, etc. The bodyfiles I've already created are named like 001.fls, 001.l2t so I can easily tell which is which. My regripper output will continue in that vein with 001.reg.
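
(As an aside, just so the /mnt/001-style mount points make sense, the mounts themselves are nothing fancy. A typical read-only loop mount of a raw image looks roughly like the line below; the image name is made up, and the offset assumes an XP-style partition starting at sector 63 (63 x 512 = 32256), so adjust or drop it to fit the image.)

# mount -o ro,loop,offset=32256 /media/truecrypt1/XYZ-001.dd /mnt/001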

Now regripper documentation and SANS teach you to run it by typing out the full path to each hive, then redirecting the output somewhere. Something like:
# rip.pl -r /media/sda2/Windows/System32/config/SAM -f sam >> /media/sdb1/evidence/timelines/jones_regripper.txt
Something like that. But that's too much typing for me. And like I said, I thought about scripting it, but there are just too many variables for what needs to be done - source path, source file, module, destination path, destination file, ack pfft!

So to pull a page from my old ways with log2timeline (before timescanner), I did the following, from within my regripper directory:
# find /mnt/001/ -iname system | while read d; do ./rip.pl -f all -r "$d" >> /media/truecrypt1/001_jones/001.reg; done
Then all I have to do is replace "system" with "sam," "software," and "ntuser.dat" and that takes care of 001. For 002 I change the "1" to "2," change "jones" to "smith," and I'm good to go. Far quicker than typing all the paths each time, or plugging in several variables. If I wanted to get more creative I could probably script it to run through the various hives for each set of variables, and then just put in the set for the next custodian system. That might be okay, but I didn't feel like going that far today. Now that I think about it, I believe I will tomorrow, though.
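
Just to illustrate the idea, an untested one-liner version of that "loop over the hives" approach might look like this, using the same example paths:

# for hive in ntuser.dat system sam software; do find /mnt/001/ -iname "$hive" | while read d; do ./rip.pl -f all -r "$d" >> /media/truecrypt1/001_jones/001.reg; done; done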

This is no command line kung-fu, but I like it; it's better than a bunch of typing anyway. What it does is search within the /mnt/001/ directory, case-insensitively, for files named "system." That list of files is piped to a while loop that runs rip.pl against each "system" file it finds (-r). I had some errors trying to use the hive-specific modules (-f), and being lazy (just like not wanting to type), I decided just to try "all." That worked just fine, no more errors. You'll see as it runs that regtime is called; I believe that's what creates the mactime-formatted output.

That takes care of my timeline pieces. Now to get a single bodyfile for each system... Within my output directory (/media/truecrypt1/001_jones):
# cat 001.* >> body.001
I'm reversing the naming order here (number last) to keep the pieces separate from the combined files. It helps me track the flow/progress as well, which I'll show next.

I'm trying to identify areas of activity (or inactivity) over several months so that I can focus in on details with a more limited timeframe. To do this I'll be running mactime and building a daily index. For 001, this looks like:

# mactime -d -z CST6CDT -m -y -b /media/truecrypt1/001_jones/body.001 2010-01-30..2010-05-24 -i day >> /media/truecrypt1/001_jones/index.001
This is just wrong. Shame on me. Here's how it needs to be:

# mactime -b /media/truecrypt1/001_jones/body.001 -i day /media/truecrypt1/001_jones/index.001 -d -m -y -z CST6CDT 2010-01-30..2010-05-24

This gives me a CSV file (-d) with a nice overview of total activity each day during my ~4 month period. Then hopefully I'll be able to focus in on a few specific days that look interesting (maybe a lot of activity, maybe very little). All these options are in the help info or man pages, but in brief: -d gives CSV output, -z sets the timezone, -m and -y set the date format, -b specifies a bodyfile, the date range I'm interested in goes at the end, and -i tells it to build an index (by "day" or "hour"), with the index output file given right after it. Again, all I have to do is change "1" to "2," "jones" to "smith," and move on to the next.

Now my output directory has the following files:
001.fls, 001.l2t, 001.reg, body.001, index.001
This makes it easier for me to keep track of the different sets of data, and their formats. The pieces that go into my main bodyfile start numerically, and the combined datafiles end numerically. I know everyone has a different way; just wanted to share my logic/excuse.

I hope that can help someone. I should (hopefully) have some sort of shell script together tomorrow that would loop through the different hives for each custodian. There would still be several parameters to input initially, but it might work out to be faster going through four hives than up arrow, back arrow, replace... Once I have it I'll post that as well.

LM

Note: Edited to correct mactime syntax to create index files.

Tuesday, March 8, 2011

Need More Cowbell (or, links to SANS)

So Rob Lee recently posted to the DFIR list that SANS is ranking rather low on Google, as they're not stooping to "shadier" tactics such as setting up false sites to point to theirs, and other similar techniques. I say shadier since those kinds of SEO tactics (beyond your site's own metadata, search engine indexing, and so on) drive up your presence on the web at others' expense, and typically have little to do with the actual site. That may not make a lot of sense, but that's okay.

The point of this post is to say that I've done my part. I already follow the SANS Forensics Blog, but now I've added that and their main forensics page under Favorite Sites. Just doing my forensivic duty (forensivic=forensic civic).

On a side note, Rob suggested searching for "computer forensics" on Google to see the results. I did so, and was pleased to see Lance Mueller's site in the first page of results. Obviously he gets a lot of traffic, and it's well deserved!

LM

Thursday, March 3, 2011

The Whole HBGary v Anonymous Scenario

So it's a bit old news now, but it just doesn't seem to quit, and it's all rather interesting. And that is, of course, HBGary Federal (and HBGary's Greg Hoglund) being thoroughly smacked around by the hacker group Anonymous. Ars Technica has some very in-depth coverage of the whole situation, with looks at what led up to it as well as the aftermath.

Obviously there's a whole slew of questions about how a security company could have such seemingly glaring lapses in best practices, all the way around the board. But then too, it's easy to play armchair quarterback in hindsight. While I scratch my head about it, that's not why I'm posting.

From what Ars Technica has posted, there would also seem to be a lot of questions about ethical/moral considerations for other HBGary Federal activities. These things, seen from the outside (again, without knowing all the facts), would seem likely to lead to law enforcement investigations into HBGary Federal as well as into Anonymous' activities. However, that's also not the reason for my post.

I'm posting because of the way the attacks got pulled off. Nothing fancy, cutting edge, or unique. Just a good old-fashioned SQL injection made possible by a lack of input whitelisting. Password hashes extracted, cracked, and found to be reused in multiple places. Greg Hoglund's email compromised and used, through social engineering, to gain remote root access to rootkit.com. Classic stuff, and (seemingly) fairly well executed - at least based on the results. Ars Technica published the email exchange between an Anonymous member posing as Hoglund and Jussi Jaakonaho, wherein Jaakonaho was played into giving Anonymous root access over ssh to take over the rootkit.com server.

To me, that last social engineering bit is the "sweet" piece. No, I'm not supporting Anonymous' actions, I'm just looking in through a window and thinking that from a technical standpoint, they did a good job. Once they got a foot in the door, they quickly went through a series of steps gaining more and more control over their target environment. Looking back at it, from the outside, it would appear that there were several opportunities for Jaakonaho to get suspicious and try to confirm through some other channel, but he didn't. All the other aspects - gaining access to the CMS, email, data storage, defacing websites, and so on, are all simply technical skills if you will.

The social engineering bit, though, stands out (to me). It was (virtually) face to face. They had to pretend to be Hoglund and communicate with someone who knew him, then try to get that person to do something that would seem to be against the very nature of a security-focused person. I mean, getting Jaakonaho to take down the firewall to open up ports for ssh access, reset passwords, hand out user names and public IP address - wow! To me, even trying to pull that off takes some serious guts. But the fact is, it worked, and quite nicely.

I don't condone what they did at all, but I guess I would have to say that I admire - at least that one piece, at least to an extent - *how* they did it.

In case anyone hasn't read about it and wants to, here are some of the Ars Technica links:

The Inside Story
The Aftermath
The Meet
The Email Revelation

One last little thought on the matter. A part of me can't help but wonder: is it possible that Aaron Barr/HBGary (Federal or otherwise) could have faked out Anonymous and given them a carefully orchestrated scenario? You know, a really seriously elaborate honeypot? Surely not. But I do wonder...

LM