Sunday, January 14, 2018

Forensics Quickie: Methodology for Identifying Linux ext4 Timestamp Values in debugfs `stat` Command

FORENSICS QUICKIES! These posts will consist of small tidbits of useful information that can be explained very succinctly.

I saw this tweet from @JPoForenso recently.

Archived tweet here.

I didn't know what this was either, so I began testing. As always, remember that it's one thing to know that an artifact exists; it's another to know how to find, understand, and make use of it. This is more of a methodology post related to problem solving, but understanding the logic behind an approach is typically pretty useful.

So, from start to finish, let's delve into how you'd go about answering the question Jonathon posted.

Scenario
You want to determine what a specific part of the debugfs `stat` command refers to (but this process can be applied to any other Linux command!).

The Solution
My first thought on how to approach this question was to see if we could get lucky by looking at readily available source code. Since verifying program behavior from source code has proven to work several times in the past, it doesn't hurt to look in this instance. So, knowing that the Linux kernel is open source, I first Googled something fairly simple, just to see what I'd get:

Googling "debugfs stat source code"

The first result is what seems to be the stat.c source file, related to the standard `stat` Linux command. It's not exactly what I want, but it's close enough for now. As we open the link, we can see a bunch of Linux kernel versions on the left sidebar. I didn't know what kernel version I was running on my Ubuntu virtual machine, so I ran a `uname -or` via command line to find out.

Running `uname -or` to get the operating system and Linux kernel version

Looks like I'm running kernel version 4.10.0-42-generic. Since I'm going to be running my tests on this Ubuntu VM (I already had a clean snapshot), I'd like to make sure I'm reading the stat.c source code for the closest kernel version I can get. And since we have the luxury of picking the version of the source code using this first Google search result, I'm going to go ahead and select version 4.10 from the left sidebar on the page (direct link here).

Looking at the source code, I can already see that it has a lot of the same strings that we see in Jonathon's screenshots (and just by using the `stat` command in the past) -- namely, I can see references to atime (access time), mtime (modify time), and ctime (change time).

stat.c source code showing references to relevant timestamp variable names

As I Ctrl+F'd for "ctime" within this source file, I noticed something at line 389: the added "nsec."

stat.c source code showing references to "sec" and "nsec". Possibly referring to nanoseconds

As I looked back at Jonathon's screenshots showing the output of both debugfs's `stat` command and the standard `stat` command, I noticed that the standard command was showing the timestamps with nanosecond granularity. The theory at this point was that the hex value after the colon was probably the encoded nanoseconds value. To confirm this, I ran some tests.

I needed to further ensure that my setup was the same as Jonathon's; I needed to make sure my Ubuntu VM was using an ext4 file system. First, I ran `sudo fdisk -l` to list the disks and partitions present on my VM.

`fdisk -l` output showing /dev/sda1 as the boot partition and that it is of type 0x83.

We see that /dev/sda1 is our boot partition, that it is the largest partition, and that it is of type 0x83. Brian Carrier's "File System Forensic Analysis" book tells us in Table 5.3 (Chapter 5 > DOS Partitions) that a partition ID of 0x83 is, in fact, a Linux partition. But as this web page suggests:

"Various filesystem types like xiafs, ext2, ext3, reiserfs, etc. all use ID 83. Some systems mistakenly assume that 83 must mean ext2."

Therefore, this alone is not enough information; we will need another way to confirm. With some quick Googling, we can see that the `df -T` command will get us the information we need to confirm we're running an ext4 file system.

`df -T` output showing that /dev/sda1 is running an ext4 file system

Now that we know our setup more or less matches the original, let's run a quick and dirty test to get a feel for what Jonathon was doing. First, I'm going to do a quick `ls -la` in my home directory to see what existing file I can use to run a standard `stat` against. I found a file called ".xsession-errors". We'll use that. I then run the standard `stat`.

Standard `stat` command output of a file showing nanosecond granularity

Running a standard `stat` (stat /home/b/.xsession-errors) gives us the following that we will jot down for later:

Access: 2018-01-14 17:59:51.272474556 -0800
Modify: 2018-01-14 17:59:52.632462056 -0800
Change: 2018-01-14 17:59:52.632462056 -0800


Now, let's run the debugfs `stat` command on the same file and see what we get.

Before running the debugfs `stat` command

Note that, to use the debugfs `stat` command, you first need to open the file system you want to examine (the -w option opens it read-write; read-only access is sufficient for `stat`). Since we already ran `fdisk -l`, we know that we need to specify /dev/sda1. After running `sudo debugfs -w /dev/sda1`, we get a debugfs prompt where we run our `stat` command: `stat /home/b/.xsession-errors`. Also note that you can run the `stat` command against an inode or the full path of a file (as I did here) within the specified file system.

Output of the debugfs `stat` command, showing encoded timestamps

Let's go ahead and jot down the relevant lines of this output:

ctime: 0x5a5c0b18:96ca6ba0 -- Sun Jan 14 17:59:52 2018
atime: 0x5a5c0b17:40f686f0 -- Sun Jan 14 17:59:51 2018
mtime: 0x5a5c0b18:96ca6ba0 -- Sun Jan 14 17:59:52 2018
crtime: 0x5a5c0b17:40f686f0 -- Sun Jan 14 17:59:51 2018


(An off-topic tidbit here is that we see a "crtime," which is the "born/creation time" of the file being queried; ext file systems prior to ext4 did not support this).

Jonathon already identified the first section of hex bytes (before the colon) to be the Unix epoch representation of each timestamp. Let's confirm using TimeLord.

Using TimeLord to decode the first section of a debugfs `stat` command timestamp

As we can see, Jonathon is correct; the first section of the debugfs `stat` timestamp is the Unix epoch timestamp (remember, my Ubuntu VM is set to Pacific time (-8), so you must apply the offset to get it to match).
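If you'd rather sanity-check this without a third-party tool, the first portion can be decoded with a few lines of Python (any epoch converter will do):

```python
from datetime import datetime, timezone

# First portion of the debugfs atime (before the colon)
atime_seconds = 0x5a5c0b17

# Interpret it as seconds since the Unix epoch (in UTC/GMT)
decoded = datetime.fromtimestamp(atime_seconds, tz=timezone.utc)
print(decoded)  # 2018-01-15 01:59:51+00:00, i.e. 2018-01-14 17:59:51 -0800
```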

But what about the second portion? That's the real question! Knowing what we know already, we have some really good reasons to believe that the second portion of the debugfs timestamp is the nanoseconds value: we saw references to possible nanoseconds in the source code, we confirmed nanosecond granularity in the standard `stat` command, and the modify and change times are the same using both `stat` commands.

But how do we confirm this? Well...let's try Googling it.

Googling "ext4 debugfs nanoseconds"

With our testing, we were able to search more effectively for what we needed. And this time, it really paid off -- there's already an answer for us! In fact, the answer goes one step further and even links us to one of Hal Pomeranz's "Understanding EXT4" articles that walks us through the ext4 timestamp format in-depth. I would highly recommend reading the full series, but to answer our initial question, we do not need to look further than the "Fractional Seconds" portion of Hal's post.
"The hex value of the create time "extra" is 0x148AF06C, or 344649836. But the low-order two bits are not used for counting nanoseconds. We need to throw those bits away and shift everything right by two bits- this is equivalent to dividing by 4. So our actual nanosecond value is 344649836 / 4 = 86162459."
Hal explains that the second part of each debugfs `stat` timestamp (after the colon) needs to be divided by 4 after being converted to decimal. So let's try that with our example. The values for each `stat` command are replicated below (we'll only use the access and modify times, since there are only 2 unique timestamps among the four total timestamps for the .xsession-errors file).

Standard `stat` command
Access: 2018-01-14 17:59:51.272474556 -0800
Modify: 2018-01-14 17:59:52.632462056 -0800


debugfs `stat` command
atime: 0x5a5c0b17:40f686f0 -- Sun Jan 14 17:59:51 2018
mtime: 0x5a5c0b18:96ca6ba0 -- Sun Jan 14 17:59:52 2018


0x40f686f0 (hex) --> 1089898224 (decimal) / 4 = 272474556
0x96ca6ba0 (hex) --> 2529848224 (decimal) / 4 = 632462056

We convert hex to decimal, then divide by 4, and we get the nanoseconds value!
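In Python, the whole decode is a hex-to-decimal conversion and a divide by 4 -- or, equivalently, a right shift of two bits, since (per Hal's write-up) the low-order two bits are not part of the nanoseconds count:

```python
# Decode the post-colon portion of an ext4 debugfs timestamp.
# The low two bits are epoch-extension bits (used to push the 32-bit
# seconds field past 2038); the remaining 30 bits are nanoseconds.
def ext4_extra_to_nsec(extra):
    return extra >> 2  # same as integer division by 4

print(ext4_extra_to_nsec(0x40f686f0))  # 272474556 -> matches atime .272474556
print(ext4_extra_to_nsec(0x96ca6ba0))  # 632462056 -> matches mtime .632462056
```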

With that, we've answered the question and learned a thing or two. Again, this was more of an exercise in problem solving and verification, but knowing how to go about solving a problem is something that you can apply to any question you may have.


-@4n6k


References
1. Soon

Thursday, November 2, 2017

Forensics Quickie: Methodology for Identifying "Clear Recent History" Settings for an Old Version of Firefox

FORENSICS QUICKIES! These posts will consist of small tidbits of useful information that can be explained very succinctly.

I saw this tweet from @phillmoore recently.

Archived tweet here.

Further down in this thread, we also see that the target browser is a ~5 year old version of Firefox. I've written in the past about verifying program behavior using source code; in fact, that previous post uses Firefox as an example. Given the information that we have -- and even though this is somewhat of a fringe case -- let's run through how to get the answer. Remember: it's one thing to know that an artifact exists; it's another to know how to find, understand, and make use of it.

So, from start to finish, let's delve into how you'd go about answering the question Phill posted.

Scenario
You want to determine where an older version of Firefox stores the settings for the "Clear Recent History" dialog.

The Solution
First, we need to identify a version of Firefox that is about 5 years old. Mozilla hosts a page that lists all past releases of Firefox and links each to their respective release notes -- including the dates of release. Clicking around a bit, I found that Firefox version 17.0 was about 5 years old (Nov 2012).

With that out of the way, I needed to locate the source code and compiled executables archive for past releases. Googling brought me to Mozilla's "Downloading Source Archives" page. There are tons of ways to download the source, but I wanted a quick and easy way, so I navigated to: https://archive.mozilla.org/pub/.

Following down the index, I found what I needed at:
  • https://archive.mozilla.org/pub/firefox/releases/17.0/source/
    • firefox-17.0.source.tar.bz2
and
  • https://archive.mozilla.org/pub/firefox/releases/17.0/win32/en-US/
    • Firefox Setup 17.0.exe

Now that I have the v17 source code and installer, I can start testing. I'm targeting the "Clear Recent History" dialog, so I install Firefox v17 on a clean VM, and use the "Firefox" tab within the browser to navigate to History > Clear Recent History...

Default "Clear Recent History" Dialog for Firefox v17.

I'd like to find where the logic for this dialog is located within the source code. To make that easier, I need to grab a fairly unique string from this dialog -- one that won't come back with a lot of hits across many different source files. "Time range to clear" seems unique enough, so let's go with that.

There are many different ways to go about searching the extracted source code, but since I'm on Windows and want to run a quick and dirty search across many files, I'll just use AstroGrep.

Using AstroGrep to recursively search Firefox source code for a unique string.

I provide my search path that contains the source code, define my search string, and perform the search recursively across all file types. The results show two files that contain my unique string. The sanitize.dtd file sounds interesting, so let's open that one up.

Contents of "sanitize.dtd" showing our unique string along with some other context.

The first hit for our unique string can be seen at line 12. By looking around this area, we can gather some clues in order to pivot to other files that have more meat to them. Particularly interesting are lines 5 and 6. I'm looking for the settings within the dialog titled "Clear Recent History," so let's run an AstroGrep search for "sanitizeDialog2."

Using AstroGrep to recursively search Firefox source code for "sanitizeDialog2."

Again, the search string is unique enough to cut down on the amount of results we need to review. The file named sanitizeDialog.js seems to be what we're looking for here.

Contents of "sanitizeDialog.js" showing our searched string.

Line 64 shows our searched string. It also looks like we have something more than just localization and property data in this file. Browsing through this file would probably be a good idea.

Contents of "sanitizeDialog.js" showing the sanitize() function.

Line 100 contains the beginning of the sanitize() function and references the updatePrefs() function. A few lines down, we see what that's all about.

Contents of "sanitizeDialog.js" showing the updatePrefs() function.

The updatePrefs() function provides even more clues. It gets the timespan that the user sets within the dialog and hints at what we saw on line 105: the prefDomain of "privacy.cpd." We see "downloads" and "history" tacked on to that string, which leads me to believe that we're getting really close.

An AstroGrep search for "privacy.cpd" finally leads us to our destination.

Using AstroGrep to recursively search Firefox source code for "privacy.cpd."

The very first result, firefox.js, shows a few lines that not only contain "privacy.cpd," but also some familiar labels. These labels more or less line up with the checkboxes we saw in our "Clear Recent History" dialog at the beginning of this post. It gets even more interesting as we review the contents of firefox.js.

Contents of "firefox.js" showing the default "clear history" values and time span values.

As we can see, the source code is set up to check some of the items in the dialog by default. There are some other interesting lines here, but we'll get to that in a minute. What really caught my eye was line 495. Having done some research on Firefox proxy settings in the past, I knew that those settings were stored in the prefs.js file located in Firefox profiles. Couple that with the location of the firefox.js file within the source code folder structure (C:\4n6k\firefox-17.0.source\mozilla-release\browser\app\profile), and there's a really good chance all of what we need is in our profile's prefs.js (which is typically located at C:\Users\4n6k\AppData\Roaming\Mozilla\Firefox\Profiles\<RandomChars>.default).

We can test this theory by performing a set of actions (as a normal user would) and documenting the results.

Test #01: Perform a default "Clear Recent History"

The first test was to open Firefox v17 on a VM that did not have Firefox already installed and clear the history using the default values within the dialog.

Here's what a default "Clear Recent History" action looks like on Firefox v17:

A default "Clear Recent History" action on Firefox v17. No settings were changed.

By default, the time range is set to "Last Hour," and the "Browsing & Download History," "Cookies," "Cache," and "Active Logins" checkboxes are selected. The "Offline Website Data" and "Site Preferences" checkboxes are not selected.
Note: prefs.js - Complete Re-write Upon Exit

Upon hitting the "Clear Now" button, Firefox (at least in the case of v17) must be closed in order for us to accurately test changes to the prefs.js file. The prefs.js file gets written to upon closing the application. And, per the warning at the top of the file, the entirety of the file will get re-written upon application exit. In other words, the born time and last written time get updated upon Firefox's exit; it is a brand new file. Every prefs.js file in these tests was acquired after application closure.

For reference, you can view the prefs.js files that existed before and after the default clear was performed.
[BEFORE]  [AFTER]

A quick comparison of the two prefs.js files using Beyond Compare shows very few changes. None are relevant to what we're testing.

Using Beyond Compare to compare two prefs.js files. No relevant differences are seen here.

Test #02: Perform a default "Clear Recent History" after form data input

In the previous test, the "Form & Search History" checkbox is grayed out. I wanted to test the default history clear while that checkbox was enabled, so I browsed to Twitter and logged in. That was enough to save some form data for the username field.

A default "Clear Recent History" action on Firefox v17 with form data present.

Note that I did not check the "Form & Search History" checkbox. It was checked by default after some form data was introduced. This variance did not show anything new; there was no relevant change in the prefs.js file.
[BEFORE]  [AFTER]

Using Beyond Compare to compare two prefs.js files. No relevant differences are seen here.

Test #03: Perform a "Clear Recent History" with all boxes selected + 2hr time span

For this test, a few websites were browsed, form data was input, all checkboxes were selected, and the "time range to clear" was changed from the default "1 hour" to 2 hours.

A modified "Clear Recent History" action on Firefox v17. All boxes are selected and time span is changed.

With these changes, we finally see some relevant items get written to the prefs.js file.
[BEFORE]  [AFTER]

Using Beyond Compare to compare two prefs.js files. Some relevant differences are detected.

We see the following get written:
  • user_pref("privacy.cpd.offlineApps", true);
  • user_pref("privacy.cpd.siteSettings", true);
  • user_pref("privacy.sanitize.timeSpan", 2);
Also note the "2" value after the timeSpan line. Remember: the value at that position is defined in the source code:
  • 0 - Clear everything
  • 1 - Last Hour
  • 2 - Last 2 Hours
  • 3 - Last 4 Hours
  • 4 - Today
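As a quick sketch of how an examiner might translate this artifact, the snippet below pulls a user_pref() entry out of a prefs.js line and maps the timeSpan value to its dialog label. The label mapping comes from the v17 firefox.js values listed above; the parsing regex is my own and only a rough approximation of the prefs.js format:

```python
import re

# Map privacy.sanitize.timeSpan values to their dialog labels
# (values observed in the v17 firefox.js source).
TIME_SPAN_LABELS = {
    0: "Clear everything",
    1: "Last Hour",
    2: "Last 2 Hours",
    3: "Last 4 Hours",
    4: "Today",
}

line = 'user_pref("privacy.sanitize.timeSpan", 2);'
match = re.match(r'user_pref\("([^"]+)",\s*([^)]+)\);', line)
pref_name, pref_value = match.group(1), match.group(2)
print(pref_name, "->", TIME_SPAN_LABELS[int(pref_value)])
```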
Test #04: Perform a "Clear Recent History" with first box selected + 4hr time span

For this test, a few more websites were browsed, form data was input, all checkboxes were deselected except for the first one (unchecking everything grays out the "Clear Now" button), and the "time range to clear" was changed from the default "1 hour" to 4 hours.

A modified "Clear Recent History" action on Firefox v17. Only one box is selected and time span is changed.

With these changes come more changes in the prefs.js file.
[BEFORE]  [AFTER]

Using Beyond Compare to compare two prefs.js files. More relevant differences are detected.
 
The key is to understand that if something deviates from the default "Clear Recent History" settings, there will be an entry for it in the prefs.js file. Note that this recent change caused entries to show up for cache, cookies, formdata, and sessions. This is because they have been switched to the opposite of what the default settings are set to. As an example, cache is checked in the default settings. When it is unchecked, it deviates from the default, and therefore, Firefox needs to make note of it in prefs.js. Likewise, "Offline Website Data" (aka offlineApps) is NOT checked in the default settings. When it is checked, it deviates from the default, and therefore, Firefox needs to make note of it in the prefs.js.
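This deviation-from-default behavior means the effective checkbox state can be reconstructed by overlaying any privacy.cpd.* entries found in prefs.js on top of the defaults. A rough sketch -- the default values are the ones observed in the v17 firefox.js during these tests, but the helper function itself is hypothetical:

```python
# Checkbox defaults observed in the v17 firefox.js (True = checked).
CPD_DEFAULTS = {
    "history": True, "downloads": True, "formdata": True, "cache": True,
    "cookies": True, "sessions": True, "offlineApps": False,
    "siteSettings": False,
}

def effective_cpd_settings(user_prefs):
    """Overlay privacy.cpd.* entries from prefs.js onto the defaults."""
    settings = dict(CPD_DEFAULTS)
    for name, value in user_prefs.items():
        # "privacy.cpd.offlineApps" -> "offlineApps"
        settings[name.rsplit(".", 1)[-1]] = value
    return settings

# Entries written during Test #03 (all boxes checked, so only the two
# boxes that deviate from the defaults show up in prefs.js):
state = effective_cpd_settings({
    "privacy.cpd.offlineApps": True,
    "privacy.cpd.siteSettings": True,
})
print(state["offlineApps"], state["cache"])  # True True
```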

One final anecdote is that the checkboxes and time span within the "Clear Recent History" dialog do not get updated/saved until you click the "Clear Now" button (at least in the case of v17). That is, if you check/uncheck items or change the time span value and hit cancel, your settings will not be persistent -- neither in the current session nor the prefs.js file.

With that, we now have a better idea of how one of the many facets of Firefox operates. There is much, much more there (especially in prefs.js), but testing specific functionality in depth goes a long way in understanding how things work.

If there's one takeaway from all of this, it's to make sure you're leveraging what is available. If you have source code to look through, identifying the behavior of a given application can be a walk in the park. 

...unless it's written in Perl.




-@4n6k


References
1. Reference to Original Tweet 01
2. Forensics Quickie: Verifying Program Behavior Using Source Code (by 4n6k)
3. Reference to Original Tweet 02
4. Mozilla Firefox Release Notes Archive
5. Downloading Source Archives (Mozilla)
6. Firefox Source Code Archive
7. AstroGrep Homepage


Wednesday, February 1, 2017

Forensics Quickie: Accessing & Copying Volume Shadow Copy Contents From Live Remote Systems

FORENSICS QUICKIES! These posts will consist of small tidbits of useful information that can be explained very succinctly.

----------------------------------
[UPDATE #01 02/02/2017]: It looks like there's a command line utility called volrest that can determine the @GMT path for previous versions of files/folders. Worth looking into if you want to find the proper path more easily than using vssadmin. Here's another post about it.
----------------------------------

I've seen some chatter about this tweet recently:


Essentially, it's about being able to access and copy files out of Volume Shadow Copies (VSCs) from live systems remotely. There seems to be some confusion related to how this is done. Specifically, I noted a few instances [1] [2] [3] of folks not being able to get it to work. Below, I show how this can and does work. I tested all of this using a Windows 7 VM.

Scenario

You want to copy files directly from a Volume Shadow Copy on a live, remote machine.

The Solution

First, figure out when the Volume Shadow Copies were created on the machine in question. A simple way to do this is to run vssadmin list shadows as admin. Let's do this on our localhost to make it simpler; just be aware that there are many other ways to remotely run vssadmin.

 
Output of "vssadmin list shadows." Note the creation timestamps for each VSC.

We have three VSCs: one created on 8/4/2014 07:51:34 PM, one created on 9/19/2016 5:01:19 PM, and one created on 1/31/2017 11:34:16 PM. Let's copy a file from the 9/19/2016 VSC using the proper path syntax -- with specific emphasis on SMB's "previous version token" pathname extension.

Output of commands showing a successful copy operation of one file from a localhost VSC (127.0.0.1).


PS C:\Windows\system32> Test-Path C:\Users\4n6k\readme.txt
False
PS C:\Windows\system32> Copy-Item -Path \\127.0.0.1\C$\@GMT-2016.09.19-23.01.19\Users\4n6k\Desktop\SysinternalsSuite\readme.txt -Destination C:\Users\4n6k
PS C:\Windows\system32> Test-Path C:\Users\4n6k\readme.txt
True


As we can see, the copy succeeded. The Test-Path Cmdlet is initially run in our PowerShell session to show that readme.txt does not exist at the path specified. Once we run our copy operation using the Copy-Item Cmdlet, we can see that this same Test-Path check returns True, indicating that the file now exists.
Note: @GMT Timestamp Format

An important item to note is that the timestamp you use in the path must be the VSC creation timestamp in GMT, and it must be in 24hr military time format (i.e. not AM/PM; not your local time). My machine's system time was set to Central (US). Therefore, I had to apply the correct time offset (+6) to get the VSC creation time of 5:01:19 PM Central converted to 23:01:19 GMT. This time has to be exactly correct to the second, or this whole thing won't work at all.

Also, fun fact: the offset might be based on your machine's current time zone or DST status and NOT the time zone of when the VSC was created. Note that one of the VSCs' creation times is in September and one is in January. In the Central (US) time zone, September falls within Daylight Saving Time while January does not. Despite this difference, to copy out the files from each one of these VSCs, I had to apply an offset of +6 to get the correct path and for the copy to succeed. To further prove this point, take a look at what time is displayed when I browse the September VSC with Windows Explorer:

Before pressing Enter:
 

After pressing Enter:


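Once you have the GMT creation time, building the token itself is mechanical. A small sketch (the host and path components are the ones from the example above):

```python
from datetime import datetime

# VSC creation time, already converted to GMT (5:01:19 PM Central +6)
vsc_created_gmt = datetime(2016, 9, 19, 23, 1, 19)

# SMB "previous version token": @GMT-YYYY.MM.DD-HH.MM.SS (24hr clock)
token = vsc_created_gmt.strftime("@GMT-%Y.%m.%d-%H.%M.%S")
print(token)  # @GMT-2016.09.19-23.01.19

# Prepend it to the share path to address files inside the shadow copy
path = r"\\127.0.0.1\C$" + "\\" + token + r"\Users\4n6k\Desktop"
```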
But let's not stop there. Instead of copying just one file, let's copy a whole directory from the VSC.

Output of commands showing a successful copy operation of a directory from a localhost VSC (127.0.0.1).

This copy also succeeded. We can see the contents of the copied SysInternals directory in the destination folder via Get-ChildItem's recursive output.

This is all fine and good, but up to this point, we've just been copying items from our local machine's VSCs. Now that we know that the above process works locally, we should be able to apply it to remote machines fairly easily. So let's do that.

There are many ways to copy/exfiltrate data off of remote machines, but...let's keep it simple. From a separate machine, let's connect to the [now] remote host that contains the VSCs in question using admin creds. Let's also copy readme.txt:

Output of commands showing successful connection to remote host and copy operation of one file from Sept VSC (192.168.1.199).


C:\Windows\system32>net use \\192.168.1.199\C$ /user:4n6k
Enter the password for '4n6k' to connect to '192.168.1.199':
The command completed successfully.

C:\Windows\system32>IF EXIST C:\Temp\Exfil\readme.txt ECHO FILE EXISTS

C:\Windows\system32>copy \\192.168.1.199\C$\@GMT-2016.09.19-23.01.19\Users\4n6k\Desktop\SysinternalsSuite\readme.txt C:\Temp\Exfil
        1 file(s) copied.

C:\Windows\system32>IF EXIST C:\Temp\Exfil\readme.txt ECHO FILE EXISTS
FILE EXISTS



As we can see, a simple connection is made to the remote host using net use (w/ elevated creds), and the readme.txt file from the remote host's September 2016 VSC is copied using the SMB "previous version token" pathname extension.

[UPDATE #01 02/02/2017]: If we want to make the @GMT path lookups even easier, we can run a command line tool called volrest. We don't even have to run it as admin:

Output of unelevated volrest.exe showing @GMT VSC path names on localhost C$ for a specified location, recursively.

It should be noted that you will get more results by running volrest as admin, but you will still be able to identify the @GMT paths and get a good amount of results as a standard user. There are some key files/directories that will not be shown if you run as a standard user; here's a sampling of some files/folders I couldn't see when specifically querying system32:

Sample of files/folders that were not returned when running volrest.exe across system32 as a standard user.

The command run was .\volrest.exe \\127.0.0.1\C$\Windows\system32 /s. Running as a standard user returned 257 files + 26 dirs; running as admin returned 323 files + 49 dirs.

At the end of the day, this technique does work. Can it be used to more easily pull VSC contents for use in forensic investigations? Sure. Could it also be used to exfiltrate data that may otherwise not look to be immediately available on a given machine? Oh yes.

This is a double-edged sword if I've ever seen one.


-@4n6k


References
1. SMB @GMT Token
2. SMB Pathname Extensions
3. Reference to Original Tweet 01
4. Reference to Original Tweet 02
5. Reference to Original Tweet 03 - Tools (by Harlan Carvey)

Monday, August 15, 2016

Forensics Quickie: PowerShell Versions and the Registry

FORENSICS QUICKIES! These posts will consist of small tidbits of useful information that can be explained very succinctly.

I was chatting with Jared Atkinson and James Habben about PowerShell today and a question emerged from the discussion: is there a way to determine the version of PowerShell installed on a given machine without using the $PSVersionTable PowerShell command? We all agreed that it would be nice to have an offline source for finding this information.

Scenario

You want to determine the version of PowerShell installed on a machine, but don't have a means by which to run the $PSVersionTable PowerShell command (e.g. you are working off of a forensic image -- not a live machine).

The Solution

Right off the bat, Jared suggested that there had to be something in the registry related to this information and subsequently pointed us to the following registry key: HKLM\SOFTWARE\Microsoft\PowerShell. James noted that he found a subkey named "1" inside. Within the "1" subkey is yet another subkey named PowerShellEngine. As we can see in the screenshot below, there is a value named PowerShellVersion that will tell us the version of PowerShell installed on the machine.

 Note that PowerShell version 2.0 is shown in this registry key

There was a nuance, however. While James was only seeing one subkey (with the name "1"), I was seeing another subkey in addition to "1." I also saw a subkey named "3" on my machine. I took a look to find the following:

 A second subkey named "3" shows a different, more recent version of PowerShell

We wondered what this could mean. It wasn't until Jared noted that having the "1" subkey would indicate the existence of PowerShell v1 or v2 and that having the "3" subkey would indicate PowerShell v3-5 that this all started to make more sense.

James's machine was a Windows XP workstation. My machines were Windows 10 workstations. Therefore, James's SOFTWARE hive only had a single "1" subkey. It only had PowerShell v2 on it. But why did the Windows 10 workstations have both a "1" subkey and a "3" subkey? Jared, once again, suggested that a previous version of Windows being upgraded to Windows 10 may have been the reason. Sure enough, I had upgraded my Windows 7 machines to Windows 10 and had NOT done a fresh Windows 10 install. Note that this may not be the reason for seeing both subkeys; I reviewed a machine with a fresh Windows 10 install and observed that it also had both subkeys.

The bottom line is that, yes, the version of PowerShell can be found in the registry and not just by running the $PSVersionTable PowerShell command. But keep in mind that you might find more than one registry key containing PowerShell version information.
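When working from an offline SOFTWARE hive, interpreting those subkey names can be scripted. A hypothetical helper -- the "1"/"3" meanings are from Jared's observation above, and the function name is my own:

```python
# Interpret subkey names found under HKLM\SOFTWARE\Microsoft\PowerShell
# ("1" -> v1/v2 engine, "3" -> v3-v5 engine).
def powershell_engines(subkey_names):
    meanings = {"1": "v1/v2 engine present", "3": "v3-v5 engine present"}
    return [meanings.get(name, "unrecognized subkey: " + name)
            for name in sorted(subkey_names)]

print(powershell_engines(["1", "3"]))
```

On a live Windows box, the subkey names themselves could be enumerated with winreg before being passed to a helper like this.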
Note: Beware the PowerShell.exe Location

Do not be fooled by the default location of PowerShell.exe. The executable's path will show %SystemRoot%\system32\WindowsPowerShell\v1.0\powershell.exe. Unless manually changed, this path will show "v1.0" regardless of the PowerShell versions installed on the machine.


Extras

Great! We solved our problem. But what about some of this other stuff we see in the PowerShellEngine subkey? What's that RuntimeVersion value and why doesn't it match the PowerShellVersion value? If two PowerShell engines exist on the Windows 10 machines, how do I use the older, v2 engine instead of the v5 engine?

To answer these questions, let's first use the easiest way possible to determine the version of PowerShell installed on a machine: the $PSVersionTable PowerShell command. (I ran everything below on the Windows 10 machine).


PS C:\Users\4n6k> $PSVersionTable

Name                           Value
----                           -----
PSVersion                      5.0.10240.16384
WSManStackVersion              3.0
SerializationVersion           1.1.0.1
CLRVersion                     4.0.30319.42000
BuildVersion                   10.0.10240.16384
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0...}
PSRemotingProtocolVersion      2.3



First, I looked to see if there was an easier way to figure out what all of this output meant. And, what do you know, a quick Google search and ServerFault answer were able to point me in the right direction. Instead of looking at the help files in a PowerShell session, I just looked up what I needed online here. We come back with this:
  • CLRVersion: 
    • The version of the common language runtime (CLR).
  • BuildVersion: 
    • The build number of the current version.
  • PSVersion: 
    • The Windows PowerShell version number.
  • WSManStackVersion: 
    • The version number of the WS-Management stack.
  • PSCompatibleVersions: 
    • Versions of Windows PowerShell that are compatible with the current version.
  • SerializationVersion:  
    • The version of the serialization method.
  • PSRemotingProtocolVersion: 
    • The version of the Windows PowerShell remote management protocol.
And there you have it. Full explanations of what we're looking at here.
Note: CLRVersion & RuntimeVersion

Notice that when we run the $PSVersionTable command, we see a line named CLRVersion. The value associated with this name is the same as the value that we see when we look in the registry at the RuntimeVersion value. This is because both of these entries are related to the "Common Language Runtime (CLR)" used in the .NET Framework. You can read more about that here. Since I'm using Windows 10, I have .NET 4.6, which uses CLR version 4.0.30319.42000.


So, what about the two PowerShell engines that exist on my Windows 10 machines? What if I want to use a different engine than v5? Well, it's as easy as running a PowerShell command. To quote this MSDN article:

"When you start Windows PowerShell, the newest version starts by default. To start Windows PowerShell with the Windows PowerShell 2.0 Engine, use the Version parameter of PowerShell.exe. You can run the command at any command prompt, including Windows PowerShell and Cmd.exe.

PowerShell.exe -Version 2"

Let's give it a shot.


PS C:\Users\4n6k> $psversiontable
Name                           Value
----                           -----
PSVersion                      5.0.10240.16384
WSManStackVersion              3.0
SerializationVersion           1.1.0.1
CLRVersion                     4.0.30319.42000
BuildVersion                   10.0.10240.16384
PSCompatibleVersions           {1.0, 2.0, 3.0, 4.0...}
PSRemotingProtocolVersion      2.3

PS C:\Users\4n6k> PowerShell.exe -Version 2
Windows PowerShell
Copyright (C) 2009 Microsoft Corporation. All rights reserved.

PS C:\Users\4n6k> $psversiontable
Name                           Value
----                           -----
CLRVersion                     2.0.50727.8669
BuildVersion                   6.1.7600.16385
PSVersion                      2.0
WSManStackVersion              2.0
PSCompatibleVersions           {1.0, 2.0}
SerializationVersion           1.1.0.1
PSRemotingProtocolVersion      2.1



As you can see, our PowerShell session is now using the v2 engine instead of v5. Note that when I tried PowerShell.exe -Version 3, the output I received was the same output I received for v5. This may be due to jumping from PowerShell v2 on Windows 7 to PowerShell v5 on Windows 10. This could also be because of the split between v1/v2 and v3/v4/v5 (thanks to James and Jared for this possible explanation).

A big thanks goes out to Jared Atkinson and James Habben. This post wouldn't exist without their involvement and discussion.

-@4n6k


References
1. What do the contents of PowerShell's $PSVersionTable represent? (ServerFault)
2. Common Language Runtime (CLR)
3. MSDN: about_Automatic_Variables - PowerShell
4. Environment.Version Property (.NET)
5. Starting the Windows PowerShell 2.0 Engine

Wednesday, March 16, 2016

Jump List Forensics: AppID Master List (400+ AppIDs)


TL;DR: The list of 400+ manually generated Jump List application IDs can be found HERE.

About 5 years ago, I wrote two blog posts related to Windows Jump Lists [1] [2]. These two posts covered jump list basics and focused mainly on how each application that is run on a Windows machine has the potential to generate a %uniqueAppID%.automaticDestinations-ms file in the C:\Users\%user%\AppData\Roaming\Microsoft\Windows\Recent\AutomaticDestinations\ directory. The AppID lists I created in 2011 have been useful to me in the past, so I decided to expand them. Many tools use these lists, as well. With that, I've recently added over 100 more unique application AppIDs and combined them into one list.

As a refresher, each application (depending on the executable's path and filename) will have a unique Application ID (i.e. AppID) that will be included in the name of the .automaticDestinations-ms jump list file. Jump lists provide additional avenues in determining which applications were run on the system, the paths from which they were run, and the files recently used by them.

The catch is that you need to know which AppIDs will be generated for certain applications. And, at this point in the game, the only way to know that is to either (a) manually generate the .automaticDestinations-ms files or (b) know the executable's absolute path and use Hexacorn's AppID Calculator. Either way, you need to have some kind of starting information to come back with an answer.

As we already know, two ways in which the .automaticDestinations-ms files are generated are:

Viewing Notepad's jump list via the taskbar.

...and...

Viewing Notepad's jump list via the Start Menu.

Both of these methods will show you the application's jump list, thereby generating/modifying the application's .automaticDestinations-ms file. In this case, that file is named:

C:\Users\4n6k\AppData\Roaming\Microsoft\Windows\Recent\AutomaticDestinations\9b9cdc69c1c24e2b.automaticDestinations-ms

...with 9b9cdc69c1c24e2b being the AppID for 64-bit Notepad.

In the AppID list, you will notice a few entries containing multiple versions of applications. Many of these applications retain their default installation location as they are updated to new versions, which means the AppID stays the same. As an example, iTunes 9.0.0.70 has an AppID of 83b03b46dcd30a0e; I tested and verified that in 2011. If we take a look at a more recent version (12.3.2.35), we can see that the AppID has remained the same. This is because the newer version is installed to (and run from) the same location as the old version, so the AppID remains constant across versions. If you want to learn more about how the AppID is actually generated, I highly recommend reading through Hexacorn's blog post here.
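Hexacorn's post covers the mechanics in detail, but the general shape of the calculation is a CRC-64 over the upper-cased, UTF-16LE-encoded absolute path. The sketch below is a hedged illustration, not a verified reimplementation: the polynomial (0x92C64265D32139A4), the all-ones initial value, the MSB-first bit order, and the upper-casing step are all assumptions that should be checked against Hexacorn's calculator before relying on the output.

```python
POLY = 0x92C64265D32139A4  # assumed polynomial; verify against Hexacorn's calculator
MASK = 0xFFFFFFFFFFFFFFFF

def _crc64_table():
    """Precompute the 256-entry CRC-64 lookup table (MSB-first, assumed)."""
    table = []
    for i in range(256):
        crc = i << 56
        for _ in range(8):
            crc = ((crc << 1) ^ POLY) & MASK if crc & (1 << 63) else (crc << 1) & MASK
        table.append(crc)
    return table

_TABLE = _crc64_table()

def appid(path: str) -> str:
    """CRC-64 of the upper-cased UTF-16LE path (all parameters are assumptions)."""
    crc = MASK  # assumed all-ones initial value
    for b in path.upper().encode("utf-16-le"):
        crc = (_TABLE[((crc >> 56) ^ b) & 0xFF] ^ (crc << 8)) & MASK
    return f"{crc:016x}"

# Same install path across versions -> same AppID (the point made above).
print(appid(r"C:\Program Files\iTunes\iTunes.exe"))
```

Whatever the exact parameters turn out to be, the key property holds: the AppID is a pure function of the executable's path, which is why applications upgraded in place keep the same jump list file name.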

With that, you can find the AppID master list at the following location:

https://github.com/4n6k/Jump_List_AppIDs/blob/master/4n6k_AppID_Master_List.md

Note that with the release of Eric Zimmerman's JLECmd (Jump List Explorer Command Line), an investigator can gain better insight into the applications for which the jump list files were generated.

As Eric explains in his Jump Lists In-Depth post, jump lists are (more or less) collections of LNK files. So, for example, if you have a jump list .automaticDestinations-ms file with an unknown AppID and you see that the LNK files contained within it all point to a specific file type (say, AutoCAD .dwg drawing files), you might be able to conclude that the jump list belongs to an AutoCAD-related program. Obviously, this is a very simple example, but you get the idea. You have more information to work with now.

The AppID master list is a work in progress and will likely be updated occasionally throughout its life cycle.

-@4n6k


References
1. Jump Lists In Depth (by Eric Zimmerman)
2. Introducing JLECmd! (by Eric Zimmerman)
3. JumpLists file names and AppID calculator (by Hexacorn)
4. Jump List Forensics: AppIDs Part 1 (by 4n6k)
5. Jump List Forensics: AppIDs Part 2 (by 4n6k)

Saturday, May 23, 2015

Forensics Quickie: NTUSER.DAT Analysis (SANS CEIC 2015 Challenge #1 Write-Up)

FORENSICS QUICKIES! These posts will consist of small tidbits of useful information that can be explained very succinctly.

SANS posted a quick challenge at CEIC this year. I had some downtime before the conference, so I decided to take part. In short, SANS provided an NTUSER.DAT hive and asked three questions about it. Below is a look at my process for answering these questions and ultimately solving the challenge. It's time to once again refresh our memories with the raw basics.

The Questions

Given an NTUSER.DAT hive [download], the questions were as follows:
  1. What was the most recent keyword that the user vibranium searched for using Windows Search on the nromanoff system?
  2. How many times did the vibranium account run excel.exe on the nromanoff system?
  3. What is the most recent Typed URL in the vibranium NTUSER.DAT?

The Answers

Right off the bat, we can see that these questions are pretty standard when it comes to registry analysis. Let's start with the first question.

Question #1: Find the most recent keyword searched using Windows Search.

First, we must understand what the question is asking. "Windows Search" refers to searches run using the following search fields within Windows:

Windows Search via the Start Menu.

...and/or...

Windows Search via Explorer.

The history of terms searched using Windows Search can be found in the following registry key:

NTUSER.DAT\Software\Microsoft\Windows\CurrentVersion\Explorer\WordWheelQuery

For manual registry hive analysis, I use Eric Zimmerman's Registry Explorer. Once I open up the program and drag/drop the NTUSER.DAT onto it, I typically click the root hive (in the left sidebar) and just start typing whichever key I'd like to analyze. In this case, I started to type "wordwheel" and Registry Explorer quickly jumped to the registry key in question. Note that you can also use the "available bookmarks" tab in the top left to find a listing of some common artifacts within the loaded hive (pretty neat feature; try it out).

Registry Explorer displaying the WordWheelQuery regkey (MRUListEx value selected).

In the screenshot above, notice that the MRUListEx value is highlighted (sound familiar? We also saw the MRUListEx value when analyzing shellbags and RecentDocs artifacts). This value shows us the order in which the Windows Search terms were searched. The first entry in the MRUListEx value is "01 00 00 00." This means that the registry value named "1" holds the most recently searched item. If we analyze the MRUListEx value further, we notice that the next entry is "05 00 00 00," indicating that the value named "5" holds the term searched just before the most recent one. But we're only concerned with the most recently searched term, so let's look at what the value named "1" contains:

Registry Explorer displaying the WordWheelQuery regkey (value "1" selected).

We note that the Unicode representation of the hex values is "alloy." And just like that, we have our answer to question #1. The most recent Windows Search term is "alloy."
Note: MRUListEx Item Entries

Each entry in the MRUListEx value will be 4 bytes in length stored in little endian. That is, each entry is going to be a 32-bit integer with the least significant byte stored at the beginning of the entry. E.g. an entry for "7" would be shown as "07 00 00 00."
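To make that layout concrete, here is a minimal Python sketch that walks an MRUListEx value in 4-byte little-endian steps. The sample bytes are fabricated to match the order discussed above; real MRUListEx values end with an 0xFFFFFFFF terminator, which reads as -1 when unpacked as a signed integer.

```python
import struct

def parse_mrulistex(data: bytes):
    """Parse an MRUListEx registry value into an ordered list of item numbers.

    Each entry is a 32-bit little-endian integer; the list is terminated
    by 0xFFFFFFFF (-1 when read as a signed integer).
    """
    order = []
    for (entry,) in struct.iter_unpack("<i", data):
        if entry == -1:  # 0xFFFFFFFF terminator
            break
        order.append(entry)
    return order

# Fabricated raw value: item "1" most recent, then item "5", then terminator.
raw = bytes.fromhex("01000000" "05000000" "ffffffff")
print(parse_mrulistex(raw))  # [1, 5]
```

The first element of the returned list tells you which numbered value to open for the most recent search term.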

Question #2: Find the number of times excel.exe was run.

For question #2, we are concerned with program execution. And, as we already know, there is no shortage of artifacts that can be used to determine this (.lnk files, Windows Error Reporting crash logs, Prefetch, AppCompatCache, etc.). However, we are limited to only the NTUSER.DAT hive for this challenge. As such, the artifact we will want to look at will be UserAssist.

Remember that unlike Prefetch, UserAssist artifacts will show us run counts per user instead of globally per system. Since we would like to determine how many times excel.exe has been run by a specific user, UserAssist is the perfect candidate.

UserAssist artifacts can be found in the following registry key:

NTUSER.DAT\Software\Microsoft\Windows\CurrentVersion\Explorer\UserAssist

Just as with question #1, let's open up Registry Explorer and start typing "userassist" to quickly find our way to this key.

Registry Explorer displaying the UserAssist regkey. ROT13'd EXCEL.EXE, run counter, and last run time highlighted. 

Within the UserAssist key, there will be two subkeys that each contain a "Count" subkey. For this challenge, we will be looking in the {CEBFF5CD-ACE2-4F4F-9178-9926F41749EA} subkey. Each value's name within the "Count" subkey is ROT13 encoded, so let's decode the value for Excel.

{7P5N40RS-N0SO-4OSP-874N-P0S2R0O9SN8R}\Zvpebfbsg Bssvpr\Bssvpr14\RKPRY.RKR

        ↓  (ROT13 decode)

{7C5A40EF-A0FB-4BFC-874A-C0F2E0B9FA8E}\Microsoft Office\Office14\EXCEL.EXE

The first part of the decoded path (the GUID in braces) is the Windows KnownFolderID that translates to the %ProgramFiles(x86)% location.
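A quick way to perform this decode outside of a GUI tool is Python's built-in rot13 codec. Note that ROT13 shifts only the letters A-Z/a-z, which is why the digits in the GUID and in "Office14" pass through unchanged:

```python
import codecs

encoded = r"{7P5N40RS-N0SO-4OSP-874N-P0S2R0O9SN8R}\Zvpebfbsg Bssvpr\Bssvpr14\RKPRY.RKR"

# ROT13 is its own inverse, so encoding and decoding are the same operation.
decoded = codecs.decode(encoded, "rot13")
print(decoded)
# {7C5A40EF-A0FB-4BFC-874A-C0F2E0B9FA8E}\Microsoft Office\Office14\EXCEL.EXE
```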

We've now pinpointed the application in question. Next, we need to find out how many times it has been run.

The run counter for the EXCEL.EXE UserAssist entry can be found at offset 0x04. In this instance, the run counter happens to be 4, indicating that EXCEL.EXE was run four times. Note that on an XP machine, this counter would start at 5 instead of 1. So if you are manually parsing this data, remember to subtract 5 from the run counter when dealing with XP machines.

We've successfully answered question #2. EXCEL.EXE was run 4 times.
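If you are parsing the raw value yourself rather than relying on Registry Explorer, the counter is a 32-bit little-endian integer at offset 0x04. Here is a minimal Python sketch over synthetic data (the 72-byte entry below is fabricated, not the actual challenge bytes), including the XP subtract-5 adjustment mentioned above:

```python
import struct

def userassist_run_count(entry: bytes, xp: bool = False) -> int:
    """Extract the run counter from a raw UserAssist 'Count' value.

    Vista and later store a 72-byte entry with the true count at offset 0x04.
    XP stores a 16-byte entry whose counter starts at 5, so subtract 5.
    """
    count = struct.unpack_from("<I", entry, 0x04)[0]
    return count - 5 if xp else count

# Synthetic Windows 7 entry (72 bytes) with a run count of 4 at offset 0x04.
win7_entry = bytearray(72)
struct.pack_into("<I", win7_entry, 0x04, 4)
print(userassist_run_count(bytes(win7_entry)))  # 4
```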

But wait, there's more! Check the 8-byte value starting at offset 0x3C (60d). That's the last time the program was run. Convert this 64-bit FILETIME value to a readable date/time using DCode.
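What DCode is doing here can be reproduced in a few lines: a FILETIME is a 64-bit count of 100-nanosecond intervals since 1601-01-01 UTC, stored little-endian at offset 0x3C of the Windows 7 entry. A sketch over a synthetic entry (fabricated bytes, not the challenge data):

```python
import struct
from datetime import datetime, timedelta, timezone

EPOCH_1601 = datetime(1601, 1, 1, tzinfo=timezone.utc)

def filetime_to_datetime(ft: int) -> datetime:
    """Convert a 64-bit FILETIME (100-ns ticks since 1601-01-01 UTC)."""
    return EPOCH_1601 + timedelta(microseconds=ft // 10)

# Build a synthetic 72-byte entry whose FILETIME at offset 0x3C encodes
# the last-run time recovered in this challenge.
last_run = datetime(2012, 4, 4, 15, 43, 14, tzinfo=timezone.utc)
ticks = int((last_run - EPOCH_1601).total_seconds()) * 10_000_000
entry = bytearray(72)
struct.pack_into("<Q", entry, 0x3C, ticks)

recovered = filetime_to_datetime(struct.unpack_from("<Q", entry, 0x3C)[0])
print(recovered)  # 2012-04-04 15:43:14+00:00
```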

DCode showing the decoded last run time of EXCEL.EXE

EXCEL.EXE was last run Wed, 04 April 2012 15:43:14 UTC. On to question #3.

Note: Determining OS version with NTUSER.DAT only

As a side note, we can now tell that the machine housing this NTUSER.DAT was a post-XP/Server 2003 machine. How? There are a few indicators:
  • UserAssist entries on Windows XP are 16 bytes in length, while Windows 7 UserAssist entries are 72 bytes in length.
  • The two subkeys under the root UserAssist key ({CEBFF5CD-ACE2-4F4F-9178-9926F41749EA} and {F4E57C4B-2036-45F0-A9AB-443BCFE33D9F}) are typically found on machines running Windows 7.
  • We see references to the "C:\Users" folder in other UserAssist entries instead of the "Documents and Settings" folder typically found on XP machines.
  • The run counter for EXCEL.EXE is less than 5; on an XP machine, this counter would be at LEAST 5.

Question #3: Find the most recent TypedURL.

The TypedURLs registry key stores URLs that are manually typed/pasted into Internet Explorer. Clicked links are not stored here.

Using our tried and true Registry Explorer process, let's look at what the TypedURLs registry key has to offer. Navigate to the NTUSER.DAT\Software\Microsoft\Internet Explorer\TypedURLs key by typing "typedurls" in Registry Explorer.

Registry Explorer showing the TypedURLs regkey.

As we can see above, the most recent TypedURL is http://199.73.28.114:53/. The value labeled "url1" will always be the most recent TypedURL. Also, check the LastWrite time on the TypedURLs key to determine the last time "url1" was typed.

Question #3: answered. Challenge complete.


Automation

Now, let's assume you've actually got some work to finish. You've gone through this stuff manually at least once and understand it inside and out. It's time to automate.

Harlan Carvey's RegRipper has plugins to quickly pull/parse the registry keys covered here and much, much more. Yes, we can answer all three of these questions with a one-liner.

C:\RegRipper2.8>rip.exe -r Vibranium-NTUSER.DAT -p wordwheelquery >> ntuser.log && rip.exe -r Vibranium-NTUSER.DAT -p userassist >> ntuser.log && rip.exe -r Vibranium-NTUSER.DAT -p typedurls >> ntuser.log

Remember, though: it's one thing to be able to run RegRipper. It's another to know where the output is coming from and why you're seeing what you're seeing.

Again, this is nothing new; this challenge is actually on the easier side of analysis. But, if at any point you had doubts about the artifacts covered here, it's worth going back and refreshing your memory.

-@4n6k


References
1. UserAssist Forensics (by 4n6k)
2. INSECURE Magazine #10 (by Didier Stevens)
3. ROT13 is used in Windows? You’re joking! (by Didier Stevens)
4. KNOWNFOLDERID (by Microsoft)
5. FILETIME structure (by Microsoft)