
Monday, December 8, 2014

Even More Live Response Collection Updates!!



Hello again readers! The last update to the Live Response collection was about two months ago, and I have been working on adding more open-source tools and data collection processes to the collection. I also tried to enhance the way that the Windows Live Response collection operates, including building in some file/location existence checking in an effort to ensure compatibility with newer versions of Windows, including some initial attempts at gathering data from Windows 10, with many thanks going to Brad Garnett for testing on these newer versions.


While a majority of the changes will be transparent to the end user, the processing of some items, like Sysinternals, has changed greatly. The collection also leverages a couple of really powerful tools to copy files, such as Registry Hives, $MFT, $LogFile, $UsnJrnl, and Event Logs, from Windows systems. In a blog post about a month ago, Corey Harrell pointed out an awesome tool from Joakim Schicht that extracts the $UsnJrnl; it not only copies the journal from a live system, it extracts only the "used" data, which usually results in a dramatic reduction in size. To quote Joakim: 

"This may be a significant portion of the total data, and most tools will extract this data stream to its full size (which is annoying and a huge waste of disk space). This is where this tools comes in, as it only extract the actual data for the change journal. That way extraction obviously also goes faster. Why extract 20 GB when you might only need 200 MB?" 


The script now also leverages another great data extraction tool, forecopy_handy, which allows copying of in-use files such as Registry Hives, Event Logs, and browser-related files from a live system. If you create a disk image using the "Complete" version of the script you will likely get access to these files anyway, but this method allows you to grab the files prior to (or instead of!) creating a disk image if you would like.


There are also many changes to the overall processing performed by the script. For example, the script previously deleted the entire Registry folder related to Sysinternals, but Luca Pugliese pointed out that in some investigations you may very well be looking for when Sysinternals was installed on the system, and that method could very well wipe out evidence (which could potentially be a bad thing). The script now checks for evidence of the Registry keys related to the Sysinternals programs that it requires. If it finds them, it updates the value to "1" (to ensure the tool will run without user interaction) and that is the only change that is made. If a key is not found, the script will populate the required Registry keys, but it will still clean up after itself. 
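
For anyone curious what that check looks like outside of a batch file, below is a minimal Python sketch of the same idea, using the well-known HKCU\Software\Sysinternals\<Tool>\EulaAccepted value. The Live Response Collection does this inside its batch script, so treat this purely as an illustration of the logic, not as the script's actual code.

import winreg

def ensure_sysinternals_eula(tool_name):
    """Return True if the tool's Registry key already existed, False if we created it.

    Illustrative only; the collection performs the equivalent logic in batch."""
    key_path = r"Software\Sysinternals\{}".format(tool_name)
    try:
        # Key already exists: only flip EulaAccepted to 1 so the tool runs
        # without prompting, and leave everything else alone.
        key = winreg.OpenKey(winreg.HKEY_CURRENT_USER, key_path, 0, winreg.KEY_SET_VALUE)
        existed = True
    except FileNotFoundError:
        # Key does not exist: create it, and remember to delete it afterwards
        # so no new artifacts are left behind.
        key = winreg.CreateKey(winreg.HKEY_CURRENT_USER, key_path)
        existed = False
    winreg.SetValueEx(key, "EulaAccepted", 0, winreg.REG_DWORD, 1)
    winreg.CloseKey(key)
    return existed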


Extracting the $MFT, $LogFile, and $UsnJrnl had always been in my plans (especially if you use the TriForce tool) but I just hadn't had the time to work on the updates until the past week or so.


Please do not hesitate to reach out if there are any items that you commonly use during the course of an investigation that the script does not currently extract, and hopefully it can be included in the next release. For example, some of the requests for data collection from Windows are:


  • Automatically encrypting the output of the script (volatile data collection, memory, and disk image)
  • More browser history related file extraction
  • Log file collection (IIS logs, AV logs, application logs, etc.)
  • Data collection/file hashing for all users (not just current)



I am hopeful that the next release will cover most, if not all, of the requests. I am also hopeful that automated Mac memory collection and drive imaging will be included in the next update (fingers crossed!)




LiveResponseCollection-Cedarpelta.zip - download here 

MD5: 7bc32091c1e7d773162fbdc9455f6432
SHA256: 2c32984adf2b5b584761f61bd58b61dfc0c62b27b117be40617fa260596d9c63
Updated: September 5, 2019





Wednesday, December 3, 2014

Part of an Afternoon with TrustPipe...


Today an article that sounded interesting was pointed out to me, regarding a company named TrustPipe that is claiming to catch 100% of network attacks. A direct quote from their website:

"Our patented technology understands the DNA 
of the Internet — what’s good and what’s bad.
It can detect virtually every attack — even the 
brand new "zero-day" ones — and protect you."

Naturally I was intrigued by this, although the cost of the tool (five dollars for five years) seemed to be awfully cheap, and I was a little surprised that the two options at the bottom of the screen are "Rest of World" and "Mainland China". 


My location options are "Rest of World" and "Mainland China". That seems a little odd.


When I did a Google search for the company, I came across their Twitter account which, since joining in 2011, has a total of one tweet. That also seems odd, especially for a company that does as much business as the article states.



Since February 2011, the company Twitter account has tweeted one time. Again, that seems odd to me.

At this point I was a little concerned, and I decided to use a very low limit credit card that I seldom use, just in case I had any more bad vibes after making the purchase. I paid the five dollar cost and received an email to download the tool. The instructions seemed fairly straightforward, and I downloaded the tool.



The download instructions after paying five dollars for TrustPipe


I transferred the file to my Malware Box of Evil and I ensured that I had .NET 3.5 installed prior to the installation, just like the instructions stated.

When I tried to install the program, I got an error message 1721 stating that there was a problem with the installation.


Error trying to install TrustPipe

I tried to install the application a few times before giving up. If there is an installation problem I would very much like to be told which program is needed, rather than receive a generic error. I did a little bit of digging into the program with PEStudio and didn't see anything that jumped out at me as a warning flag, but then again, it is difficult to say without spending some time reverse engineering it, which I am not inclined to do at this point. The bottom line for me is that this product, which is supposed to be lightweight, easy to use, easy to install, etc. will not even install properly on the Malware Box of Evil, which is running Windows XP SP3. I don't see how a product geared specifically towards working on Windows XP cannot run/install properly on the box, but at least I am only out five dollars.


Their website is not very helpful and does not have much information, and browsing the LinkedIn profiles of their "Team" page on the website, it is hard to determine exactly who is employed by TrustPipe and who is not. I would love to hear from anyone who has actually used the product, and am curious about their results with it. I was looking forward to testing some POS malware with TrustPipe running to see how it would fare, but due to the installation problems I don't even recommend getting the application for testing purposes. I also immediately called up my credit card company and cancelled the card that I made the purchase with. With the bad vibes that I felt going through the initial checkout process, I felt that it was best to cancel the card and request a new one, just in case.









Thursday, October 16, 2014

Automated Windows disk imaging? Sure, it can do that!




Hello again readers! After a busy couple of weeks, I had some time to work on adding a new feature to the Windows Live Response collection: automated disk imaging! This means that when you run the "Complete_Windows_Live_Response" batch file (with administrative privileges), on top of creating a memory dump and gathering volatile data from a system, it also attempts to identify all mounted drives on the system (excluding network shares) and, if your destination drive has enough storage space, a forensic image of each drive will be created. It will not allow you to create a disk image of a device when the destination is that device itself (in other words, you cannot run the script from a folder on your desktop and create a disk image; the memory dump will still occur, but disk imaging will not). And best of all, after each image is created, if you have more than one drive, the free space calculation runs again to ensure that the destination drive still has enough free space available. Because of this new functionality, the Windows collection now has three different scripts available:


"Complete_Windows_Live_Response.bat" must be run with Administrative privileges to work to the fullest extent possible. This script creates everything in the "Memory_Dump_Windows_Live_Response.bat" script, as well as creates full disk images of logical drives (except for network drives) on a device. This script must be run from an external device (or internally on a non-system partition) in order to create the physical disk image. The external device also must have more free space available than the size of the drive(s) that it is imaging (it checks prior to each image being created for free space). This is the ultimate "plug it in, run it, pick it up" option. The script can run without administrative privileges, however running the script with non-administrative privileges will not create the disk image or the memory dumps.


"Memory_Dump_Windows_Live_Response.bat" is the traditional Windows Live Response collection.  The script will automatically collect a memory dump and copy files of interest (such as Prefetch files) to the %computername% folder. It will also leverage hashdeep to compute the md5 and SHA256 hashes of Windows PE files located in the %WINDIR%\system32 folder and the %SystemDrive%/Temp folder (if it exists). It will also compute the md5 and SHA256 hash of every file, recursively, in the %TEMP% folder. It will also run netstat -anb, to provide results of services with open connections and it will also install winpcap, in order to run an nmap scan in an attempt to detect evidence of ARP poisoning. It needs elevated privileges to perform these functions, but it can be run without administrative privileges as well.  However, it will not return as in-depth of results as it would have if it were run with administrative privileges



"Triage_Live_Response.bat" is the "lite" version of the Windows Live Response collection. This gets rid of time consuming elements like the Memory Dump and WinAudit. It is still best to run this with administrative privileges, but it should work much faster and give an examiner quicker results than the other scripts.



In order to run the script, you should complete the following steps:


  • Step 1 - Download the Live Response collection
  • Step 2 - Unzip the Live Response collection to an external drive (I prefer USB3 hard drives larger than 1TB in size)
  • Step 3 - Navigate to the Windows Live Response folder on your external drive
  • Step 4 - Run "Complete_Windows_Live_Response"
  • Step 5 - Check back in a few hours, the image should be complete!



I made a short video using Snagit showing the above steps as well, which is embedded below:







I tried my best to make it as easy as possible to run, and to put as many checks as possible into the batch script to ensure that something bad does not happen. The update allows an incident responder, system administrator, help desk associate, non-IT savvy employee, etc. to perform an initial collection from a Windows system, as long as they have (at least) local administrative privileges. I built in checks so that a disk image will not be created on the device that you are trying to image (you can still do a memory dump on a local machine, but disk imaging will not occur). It will also ignore the drive you are running the script from, but if that drive has other partitions that are recognized, those will be imaged (please be aware of that and try to use drives with only one mounted partition as your destination).


I also had to debate whether to image the entire physical drive or just the logical drive. After going back and forth, I decided on the logical drive, for a couple of reasons. The first reason is that if we image the logical drive we may indeed miss some data, but if you utilize full disk encryption and we image the entire physical drive, more than likely we will have to decrypt that image at some point. This could add steps to the analysis process, so I tried my best to keep it as straightforward as possible. The second reason is that imaging by logical drive still accounts for multiple partitions on the internal drive, since each mounted volume is imaged separately. While this may be a catch-22 if you have multiple partitions on the destination drive, I decided to go that route to ensure that if you have another volume mounted on your system (like a TrueCrypt volume), it will get imaged as well.


You may also note that I added the GPL to this instance of the Live Response Collection. All of the tools included in the collection are available to use at no cost, but I want to ensure that the work that went into making the scripts perform the automated memory dumps and disk imaging remains available to anyone who wants to use it. While I certainly hope that a company would not take the Live Response collection and attempt to monetize it, I felt that including the GPL was another step I could take to try to ensure that monetization of the collection does not happen.





LiveResponseCollection-Cedarpelta.zip - download here 

MD5: 7bc32091c1e7d773162fbdc9455f6432
SHA256: 2c32984adf2b5b584761f61bd58b61dfc0c62b27b117be40617fa260596d9c63
Updated: September 5, 2019






As always, any feedback is very welcome and if there are any features that you would like to see in a future update to the collection, please let me know! Happy automated disk imaging everyone!!




Monday, September 8, 2014

Spending $$$ on hardware won't fix the problem...you first have to understand what the problem is



As more and more organizations experience data breaches that specifically target credit card processing programs, many in the sales and marketing areas are quick to say "If Organization X had only spent $5,000,000 on our latest greatest virtualized cyber cloud threat mitigation machine learning device..." More than likely the sales pitch contained some of those buzzwords, and probably others as well. It also seems that many in the managed services space (including some individuals within the incident response realm) are attempting to convince anyone who will listen that some very expensive hardware or software (or both) solutions and costly services retainers will prevent these breaches from happening. The simple fact of the matter is that regardless of whether you possess the secretive schematics on how the flux capacitor works or if your company processes 749,392 credit card transactions every minute, a single solution will NOT stop your organization from being targeted.


There seems to be a disturbing trend that the individuals responsible for the protection of the environment no longer have a full understanding of "what" is on the system/network and are increasingly relying on these very expensive products to generate an alert to tell them when something has occurred. While the amount of data and devices that the team(s) within your organization have to monitor is increasing, and these expensive products can help monitor the environment, a grass-roots, "back to basics" approach would help those responsible for security within an organization recognize and detect threats more rapidly and efficiently, and can even help minimize the depth and severity of a breach when it occurs.


Scenario using Goodwill data breach malware

In this particular case, I am going to cover a hypothetical scenario using malware that was utilized in the Goodwill data breach, in which roughly 868,000 credit cards were compromised between February 10, 2013 and August 14, 2014. By pairing free and available tools with commonly recommended security practices, the system administrator(s) could have easily detected and identified the system(s) that were infected with this malware and potentially stopped the breach shortly after the malware was installed. In fact, based on the processes that the malware itself searches for, if the administrators had renamed the primary card processing software/services to something non-descript, it is possible that the breach may not have occurred in the first place. 


In an attempt to replicate the environment, I renamed "notepad.exe" to "pms.exe" and pasted in modified Track data so that the malware would find the data, since (according to the Symantec writeup and strings within the file itself) that is one of the executables the program searches for. 


Strings in ncsvr32.exe. Note the regular expression looking for Track data at the top and the executable names the malware searches for at the bottom
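
For context, Track 2 data follows a predictable layout (PAN, a separator, expiration date, service code, discretionary data), which is why a single regular expression is enough to scrape it from process memory. The pattern below is a generic illustration in Python and is not the expression taken from the sample itself:

import re

# Generic Track 2 style pattern: 13-19 digit PAN, '=' separator, then the
# expiry, service code, and discretionary data. Illustrative only.
track2 = re.compile(rb"\d{13,19}=\d{4}\d{3}\d*")

sample = b";4111111111111111=25121010000012345678?"   # fake test data
for match in track2.finditer(sample):
    print(match.group().decode())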

Once that was completed, I loaded the malware onto my test system and ran it. The malware is very basic and not sophisticated at all. In fact, if there is a space in the path from which the malware was run (for example, if you placed the malware in "C:\Documents and Settings\Administrator\Local Settings\Performance Monitor"), the malware will run and the folder and file (if there is data collected) will be created; however, subsequent writes to that file cannot be completed because the author(s) did not account for spaces in file paths. 


It appears the author(s) forgot to account for spaces in file paths


Secondly, the malware opens a command prompt window that actually lists all of the data that it captures. 
This window is open on every device infected with this malware. You can minimize it, but if you close it the program stops and it has to be manually restarted again



If you simply close the command prompt window, the logging process completely stops. While the window is open, the malware collects data from processes every 60 seconds. Based on my own triage analysis, there did not appear to be any persistence mechanisms, so once the executable is stopped it has to be manually restarted. If a system administrator, security analyst, or even a non-technical employee had noticed the window open on a machine and had closed it, it would have stopped this sample of malware from collecting credit card information.


With the sample that I downloaded, the Track data is saved to the output file titled "data-pms.exe-2224.dmp". The data contained within this file is plain-text and there does not appear to be any encryption or additional obfuscation techniques used.


Plain-text Track data stored in the logging file. No encryption or any obfuscation attempted


The malware used in this case appears to be VERY unsophisticated; however, as I have pointed out in the past, attackers will use malware that is only as advanced as it needs to be in order to accomplish their goals. In this case, this very basic and poorly written malware stole credit card transaction data from the Goodwill environment for over 19 months and resulted in nearly 868,000 compromised credit card numbers before it was stopped. It is paramount that network administrators/security engineers/incident response teams (or whatever your organization calls its security teams) understand what is on the network and systems, and what is supposed to be there, prior to spending large amounts of money on hardware and network monitoring devices. All of the monitoring hardware in the world could have been in place and the Goodwill breach probably would have continued to go unnoticed, in part because the attacker(s) probably used legitimate credentials to gain access to the network, the malware did not appear to use any network connectivity, and the malware was very basic and unsophisticated. This is my own speculation based on past experiences, but more than likely the attacker(s) managed to get credentials that allowed access to the network and, after some reconnaissance, were able to figure out where the data they wanted was stored and came up with an easy way to capture that data. Then, the attacker(s) probably either exfiltrated the data in an automated fashion (possibly a script) or remotely accessed the system(s) again to remove the data (based on the presence of an unencrypted logging file).


Without an understanding of what should be on these systems, and without monitoring the systems for items such as additional running processes, sluggish performance, open command prompt windows, etc., it does not matter how much money you spend on high-priced hardware and software solutions. Those solutions MUST complement the understanding and comprehension of your security team. These solutions are not a replacement for practicing the fundamentals of information security.



Here are some quick steps you can take to reduce the possibility of experiencing a credit card breach within your organization:


  • Rename your payment application to something non-descript
    • Instead of pms.exe, change it to chrome.exe or firefox.exe or itunes.exe. Or something else unique but not easily associated with "what" the program is doing. While calling your payment processing program "OMGItsSoFluffy.exe" is non-descript, having a very unique name can also sometimes be an indicator that something is important.
  • Perform periodic triage analysis on key systems and components
  • Strive to exceed PCI/DSS compliance standards, such as:
    • Segregate and segment your payment processing network from the rest of your network. Don't have your payment processing application running on the same system (and network) where your employees are checking Facebook
    • Change ALL software/hardware defaults, including application names and third party provider passwords. YOU should create a unique username/password for remote access if it is required. That reduces the chance of credential reuse. DO NOT use only simple dictionary words, your store name and number, etc.
    • Implement strong password policies and require password changes periodically
  • Look into freely available tools to help diagnose your current environment like:
    • noriben - A portable, simple, malware analysis sandbox
    • PEStudio - Static malware analysis tool
    • Online malware analysis services, such as anubis
    • Live Response collection - Allows gathering of volatile data from Windows, OSX, and *nix based systems



AUTHOR'S NOTE: Here is another good article from an author who seems to be just as frustrated and shares similar opinions on this topic!








Wednesday, September 3, 2014

Many small updates to the Windows Live Response collection



Good morning readers! Over the past few days I have had a little bit of free time, which I used to update several of the applications contained within the Windows Live Response collection.  cports, LastActivityView, md5deep, nmap, and PEStudio all were updated. I ended up removing both "full" versions of FTK Imager and just kept FTK Imager Lite as I felt having three FTK Imager options to choose from was a bit much. I also updated WinAudit to 3.0.8, but retained 2.2.9 just in case anyone had used that extensively and had written parser(s) for the data it presented. I also added an Excel spreadsheet in the Windows collection that lists the tool, the date uploaded, and the original website where the tool came from.



LiveResponseCollection-Cedarpelta.zip - download here 

MD5: 7bc32091c1e7d773162fbdc9455f6432
SHA256: 2c32984adf2b5b584761f61bd58b61dfc0c62b27b117be40617fa260596d9c63
Updated: September 5, 2019



Monday, August 18, 2014

Live Response Tool collection update (BONUS FEATURE) Searching the Windows Hashes file(s) using VirusTotal



Hello again readers! First off, I want to start the post by announcing that the latest update to the Live Response collection of tools is up; you can download it here:



LiveResponseCollection-Cedarpelta.zip - download here 

MD5: 7bc32091c1e7d773162fbdc9455f6432
SHA256: 2c32984adf2b5b584761f61bd58b61dfc0c62b27b117be40617fa260596d9c63
Updated: September 5, 2019



The main highlight of this update is the inclusion of a Linux script that gathers data from a live system. I still want to add quite a few features and functionality to the script, but I wanted to get a version out that automates most of the items listed in the Malware Forensics Field Guide for Linux Systems. Some of the items that the script collects are listed below (a rough sketch of how a few of these map to standard commands follows the list):

Copy contents of “log” folders
Determine date on the system
Determine hostname of the system
Determine logged in users on the system
Determine running processes on the system
Determine process tree (and arguments)
Determine mounted disks/items
Review output of disk utility
Determine loaded kernel extensions
Determine system uptime
Determine system environment
Determine (more detailed) system environment
Determine OS kernel version
Determine running process memory usage
Determine running services
Determine all loaded modules
Determine “who” logged in user is
Review .bash_history for each user
Determine current network connections
Determine socket statistics
Determine list of open files and network connections
Determine routing table
Determine ARP table
Determine network interface information
Review allowed hosts
Review denied hosts
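
As a rough illustration of how a few of those items map to standard commands, a trimmed-down Python sketch might look like the following. The actual collection is a shell script, and the exact commands and flags it runs may differ; the output filename below is made up.

import subprocess

# Each label maps to a command whose output is appended to one report file.
# These are common command choices, not necessarily the script's own.
checks = {
    "System date":        ["date"],
    "Hostname":           ["hostname"],
    "Logged in users":    ["who"],
    "Running processes":  ["ps", "auxww"],
    "Mounted disks":      ["mount"],
    "System uptime":      ["uptime"],
    "Kernel version":     ["uname", "-a"],
    "Routing table":      ["netstat", "-rn"],
    "Network interfaces": ["ifconfig", "-a"],
}

with open("live_response_output.txt", "w") as report:
    for label, cmd in checks.items():
        report.write("===== {} =====\n".format(label))
        try:
            report.write(subprocess.check_output(cmd, text=True))
        except (OSError, subprocess.CalledProcessError) as err:
            report.write("Could not run {}: {}\n".format(cmd, err))
        report.write("\n")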


This version also includes a "Triage" version of the Windows script, which does not collect a memory dump and does not run WinAudit, in order to save some time (creating memory dumps and running WinAudit can take a long time). I still recommend running the full script whenever possible, but sometimes you do not need a memory dump, or you have the ability to create one with a different tool. I don't want to force you into using something else, so I took those two specific items out.


I also included checklists for each of the operating systems covered by the collection (Windows, OSX, and Linux) and updated a couple of items in the Windows collection, like PEStudio and the latest version of FTK Imager. I kept the old version of FTK Imager as well, which is why the zip file is roughly double its previous size. I will phase out the older version in the next release, but I wanted to keep it in case there is an imaging issue with the latest version. Please do not hesitate to provide any feedback (positive or negative) regarding the use of these freely available tools!



SUPER AWESOME BONUS FEATURE!!

I also try to ensure that the data from the tools can be used by other, already existing tools, and last week I encountered a prime example of using the output with another tool to get data that I was looking for.

As you may know, the Windows Live Response script attempts to identify and hash executable files located in the %WINDIR%\system32 folder and the %SYSTEMDRIVE%\Temp folder, and it hashes ALL files in the %TEMP% folder. The script uses the program md5deep to perform these activities. My goal for this output was to search for the hashes on VirusTotal (or your malware repository of choice) and try to identify possibly malicious files that were on the system(s). 

Fortunately for all of us in the community, Didier Stevens already wrote "virustotal-search.py", a small Python script to perform queries using your own VirusTotal API key, with the added bonus that it can process data that loosely follows a specific format. So rather than having to re-parse the output data, if you take the output from md5deep and run his script with the "-c" flag (for "Comment"), it will look up the hashes and save them to a nicely formatted CSV file for you. Then you just have to import the file into Excel, choosing the semicolon (";") as your delimiter, and you have a nice view of which files have already been scanned by VirusTotal. It even takes into account the API query limits for the standard (free) API keys. Pretty cool!!
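
Didier's script is the easy path, but if you would rather roll your own lookups, a minimal Python sketch against the VirusTotal v2 public API could look like the following. This assumes you have an API key, that the first whitespace-delimited field on each line of the hash file is the hash itself, and it does no error handling for rate-limit responses.

import csv, time, requests

API_KEY = "YOUR_VT_API_KEY"   # placeholder; use your own key
URL = "https://www.virustotal.com/vtapi/v2/file/report"

# Assumes md5deep-style output where the hash is the first field on each line.
with open("Hashes_md5_User_TEMP_WindowsPE_and_Dates.txt") as f:
    hashes = [line.split()[0] for line in f if line.strip()]

with open("vt_results.csv", "w", newline="") as out:
    writer = csv.writer(out, delimiter=";")
    writer.writerow(["hash", "response_code", "positives", "total"])
    for h in hashes:
        resp = requests.get(URL, params={"apikey": API_KEY, "resource": h}).json()
        writer.writerow([h, resp.get("response_code"), resp.get("positives"), resp.get("total")])
        time.sleep(15)   # stay under the public API rate limit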



Contents of "Hashes_md5_User_TEMP_WindowsPE_and_Dates.txt" file created by the Windows Live Response script using md5deep


Running "virustotal-search,py"


Formatted results of the script. How awesome is that?!?!






Wednesday, August 13, 2014

Analysis of a Windows 8 Memory Dump with Volatility 2.4 ("The New Hotness")


Hello again readers! Today's blog post covers my initial experiences working with the newest release of volatility (version 2.4) and a Windows 8 memory dump I created using Belkasoft RAMCapture64 (part of the Live Response collection) while working on my Bluetooth for data exfiltration series.

I set this up on a Windows system, so if you are using a *nix system or OSX your setup details will be a little different, but the overall theme is the same.



1) SETUP

First of all, I had to download the 32-bit version of ActiveState Python (currently 2.7.8.10, which you can download here). Once that was downloaded and installed, I navigated to the volatility page to read more about the latest version (version 2.4, which you can read more about here), which, among other things, now has support for Windows 8. I downloaded both the Volatility 2.4 Windows Standalone Executable and the Volatility 2.4 Windows Python Module Installer. Although I personally prefer to use the Python version that is usually found under "<PYTHONINSTALLPATH>/Scripts/vol.py", I grabbed the standalone version for eventual testing and comparison purposes. Installing the Python modules took just a few seconds and I was ready to move on to the next, but perhaps most important, steps. According to the volatility website, "the distorm3 python module is a requirement for analyzing 64-bit Windows 8 and 2012 raw memory images". So I had to visit the distorm Google Code page, download the latest version, and install it. The last setup step was to visit the PyCrypto page and download the latest pycrypto modules to ensure that all of the volatility plug-ins can run with no problem. Without installing PyCrypto I kept getting messages like "The module "Crypto.Hash" is not installed" and "no module _MD4". Installing PyCrypto seemed to alleviate all of those error messages.

To summarize the tools and steps you must perform in order to run the Python version of volatility on a Windows system, you need (at the bare minimum):

ActiveState Python (32-bit)
Volatility 2.4 Windows Python Module Installer
Distorm3 Python Module
PyCrypto Python Module
 -- plus any additional modules that you desire, based off of plugins you run


-----or if you just prefer to use the standalone executable-----
Volatility 2.4 Windows Standalone Executable
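
Once everything is installed, one quick sanity check (assuming vol.py is on your PATH; otherwise point at the Scripts folder directly) is to confirm that the Windows 8 profiles are actually registered:

vol.py --info | findstr /i Win8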


2) ANALYSIS

With the release of Windows 8, quite a few changes were made with regards to "how" Windows memory is handled and "how" tools can work with the dumps. Fortunately for us, the volatility crew is keeping a Windows 8/2012 page updated with their findings. 

For the purposes of this post, I only wanted to perform some of the basic analysis steps, so I only cover running the plug-ins "imageinfo", "kdbgscan", and "pslist". However, I will be making another post in the Bluetooth for data exfiltration series as I dig into the memory dump to see what other artifacts I can extract, so be sure to watch for that!


imageinfo

Time to complete: 1 hour, 53 minutes
Query:
vol.py -f "C:\Users\Brian\Desktop\Memory\ADMIN_LAPTOP_20140520_153701_mem.dmp" imageinfo
Although I already know what the OS profile is from the system that the memory dump came from (Win8SP1x64), I am treating this as if I had no idea and needed the information from "imageinfo" to make the profile determination.

The "imageinfo" results gave me 4 possible suggested profiles and it gave me the kdbg address. However, since I know that with Windows 8/2012 I have to pass the virtual address of the KdCopyDataBlock rather than the address of the kdbg, thanks to the documentation by volatility crew, I need to run kdbgscan against my image.


The 2nd entry under best practices is probably the most important to note when dealing with Windows 8/2012 memory dumps


In order to save some time, I would recommend running only "kdbgscan" and waiting for those results, and only running the "imageinfo" plugin afterwards if you absolutely need something from it that you cannot get from another plugin. You are going to get more of the information you need to perform additional analysis from "kdbgscan" than from "imageinfo" on Windows 8/2012 images (and, at least in my case, it would have saved nearly two hours of work).


Running volatility 2.4 and the "imageinfo" plugin against my Windows 8 memory dump



kdbgscan

Time to complete: 1 hour, 30 minutes
Query: 
vol.py -f "C:\Users\Brian\Desktop\Memory\ADMIN_LAPTOP_20140520_153701_mem.dmp" kdbgscan

Running the "kdbgscan" plugin took just over an hour and a half to complete. However, it did find all of the data that I was hoping to find and then some. The plugin provided a total of four results. The results looked to be completely identical except that for each result it used a different profile (Win2012x64, Win8SP0x64, Win2012R2x64, ad Win8SP1x64). I was hoping that I could tell if some of them were incorrect or not based off of the PsActiveProcessHead and the PsLoadedModuleList results (covered in the Art of Memory Forensics book on page 64) but unfortunately this was not the case. I could take a guess and run through each of the profiles, but fortunately the Windows Live Response batch script in the Live Response collection collects a file named "Windows_Version.txt". Based off the data in that file, I know from that my version of Windows is 6.3.9600 (which is the version associated with the profile Win8SP1x64). So using the Live Response collection to help with your incident response/digital forensics case that requires memory dumps might be useful.... (hint hint!)


Running volatility 2.4 and the "kdbgscan" plugin against my Windows 8 memory dump (1 of 2)


Running volatility 2.4 and the "kdbgscan" plugin against my Windows 8 memory dump (2 of 2). My machine was running Windows 8.1 (6.3.9600) so my profile will be "Win8SP1x64" and my kdbg will be "0xf802b65e66d8" (KdCopyDataBlock virtual address)


pslist

Time to complete: 27 seconds
Query: 
vol.py -f "C:\Users\Brian\Desktop\Memory\ADMIN_LAPTOP_20140520_153701_mem.dmp" --profile=Win8SP1x64 --kdbg=0xf802b65e66d8 pslist

Now that I have both the profile and the kdbg (which, remember, is the virtual address of the KdCopyDataBlock on Windows 8/2012 dumps), I can begin my "normal" method of running plug-ins against the memory dump in an attempt to extract data from it. The response time is more along the lines of what I have seen with volatility in the past (just a couple of seconds) once you specify the proper profile and the proper kdbg (which in Windows 8/2012 is really the KdCopyDataBlock location; I really hope you remember that, given how many times it has been said in this post!).


Running volatility 2.4 and the "pslist" plugin against my Windows 8 memory dump. I specified the profile as "Win8SP1x64" and kdbg as "0xf802b65e66d8" (KdCopyDataBlock virtual address)


In my research so far, the main thing users should be aware of is the processing time it takes to analyze a Windows 8 memory dump in order to get the information needed to speed up additional analysis. But once you get the information you need from "kdbgscan" (REMEMBER, with Windows 8/2012 you need to pass the kdbg as the virtual address of KdCopyDataBlock), it should reduce the processing time of your queries considerably.


3) ????


4) PROFIT



SUMMARY

I am not certain if the longer processing time occurs with all Windows 8/2012 dumps or just those that are created using Belkasoft RamCapture. I will eventually get around to doing some memory dumps with other tools and seeing how volatility works against those formats to see if there is a speed difference. I am also planning to give the standalone executable more thorough testing, but in my initial results the standalone executable looks to be faster than the Python script by a few seconds. I don't know the reason for that, but if 

  1. I can reliably use the standalone executable to perform some of the functions and don't have to worry about Python dependencies (which seems to be the case),
  2. I can script the "standard" memory analysis, and
  3. It is faster,

then I will definitely use that more often. I have had better luck using the Python version in the past, but that could change; I will keep you updated as my usage continues!! I'd like to thank everyone that has been and is involved with the development of the volatility framework for offering such an awesome tool for the absolutely low cost of free. And pick up your own copy of "The Art of Memory Forensics" if you haven't already!!


Friday, August 8, 2014

Parsing Windows Live Messenger data from iOS devices


Good afternoon readers! The past couple of weeks have been pretty busy with case work, but thankfully I finally had some time to dig into some messaging data that I extracted from an iOS device, data that never seems to have been addressed previously and does not appear to be recognized by any mobile device forensic tool that I have used.

The application I will be covering in this blog post is Windows Live Messenger for iOS devices, which seems to have been discontinued some time in 2013. Unsurprisingly, the application is VERY poorly written and stores data on the device itself in a variety of different ways, which probably explains part of the reason that no one has really dug into this data. As the below image shows, there does not appear to be specific time stamps or even a definitive structure as to how data appears within the application itself, which makes extracting data MUCH more difficult.


Screenshot of Windows Live Messenger on an iOS device. Retrieved 7 August 2014 from http://news.softpedia.com/news/Microsoft-Discontinues-Windows-Live-Messenger-for-iOS-324028.shtml#


I would like to note, though, that for all of the research, time, and effort that I put into this post and into parsing the data, I only have one device that has the application on it, so the data stored on a device you encounter may be slightly different. If you have a case where you have access to this data and are willing/able to share it, I would much appreciate it, in an effort to make the small Perl script that accompanies this blog post more robust. Likewise, if you encounter issues with the script please reach out to me and I will try my best to help! So, now that all of the formalities are out of the way, shall we begin?



The Messenger data itself is stored in a standard location, under the "/private/var/mobile/Applications/com.microsoft.wlx" folder (or the applicable SHA value, if you have not used a tool/method to reconstruct file paths). 


Messenger application folder from iOS device when viewed in X-Ways

There is quite a bit of data in here, but for now we are most interested in files that are stored under the "Documents/cache/Messenger" and under the "Library/Caches/cache/Messenger" folders. It should look something like this:


Contents of "Messenger" folder. The files stored in "Documents/cache/Messenger" and "Library/Caches/cache/Messenger" following the same naming conventions, but contain different data.

The first thing that I want to point out here is that all of the files in these folders have a ".cache" file extension. But of course Microsoft does not follow a standard format for exactly what a ".cache" file should be, so the file header and footer for each of these files are different. The files seem to follow the naming convention of "MSN User ID_filename.cache". The data in the Message History files also appears to bring up the possibility that 7-bit encoding is in use, which opens up an entirely new can of worms for parsing the data. Rather than reverse engineer the entire file structure format, I decided to focus on areas from which I could extract data and have relative confidence in the results.



No discernible, repeatable, standard file header. Foiled again! 



This seems to suggest 7-bit encoding might be in use, at least a little bit. Maybe. I don't know to be 100% honest. 

Thus far I have identified three files that contain chat and/or chat-associated data of interest. The files contain "ChatConversations", "MessageHistory", and "Status" in the filename. The "ChatConversations" file seems to contain a listing of the most recent chat sessions, the "MessageHistory" file seems to contain most of the message data, and the "Status" file seems to contain email addresses and usernames of individuals involved in chat sessions, as well as the username and email address of the individual who used the Windows Live Messenger application on the device itself. I will cover each of the files individually, but I would also like to note that the script can be given the "-folder" option and it will attempt to search recursively (meaning all of the subfolders as well) to find these files for you.


"Message History"

The message history files seem to store data in the following format:


  • User "Friendly Name" (if present)
  • Hex character 00 (can occur 1 or 2 times (usually if friendly name is present, but not always))
  • Number 1 
  • Colon
  • Hex character 00 (can occur 1 through 4 times)
  • Email Address
  • -- sometimes additional printed and non-printed characters --
  • Message
  • Variety of characters
  • Hex character 00 five times in a row

Since I have not found a reliable way to account for whether the "Friendly Name" is present or not, I am going to focus on starting the pattern match with the number 1 and the colon, and then matching everything up to the first point where the hex character 00 occurs five times in a row. The Perl regular expression that the script uses for this matching is:

"\x31\x3A([\w\W]+?)\x00\x00\x00\x00\x00"

Once the script matches that pattern, it attempts to format the data structure by replacing the "1:" with the term "Email Address", and tries to find the beginning of the message by matching the various patterns that occur after the ".com" that I have seen in my data. It then attempts to clean up non-printable characters that occur in the message itself and prints the chunks of data out one by one. This method is not 100% foolproof; however, it should be more than sufficient to allow you to at least get an understanding of the message conversations that have occurred on the device. 
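
The parser itself is Perl, but for anyone who wants to experiment with the matching step in another language, an illustrative Python equivalent of the same pattern (with a hypothetical file name) could be as simple as:

import re

# Same pattern as the Perl version: "1:" followed by a lazy match up to
# five consecutive 0x00 bytes.
pattern = re.compile(rb"\x31\x3A([\w\W]+?)\x00\x00\x00\x00\x00")

with open("user@example.com_MessageHistory.cache", "rb") as f:   # hypothetical name
    raw = f.read()

for record in pattern.finditer(raw):
    # Strip non-printable bytes so each record is at least readable; the real
    # Perl script goes further and splits out the email address and message.
    printable = bytes(b for b in record.group(1) if 32 <= b <= 126)
    print(printable.decode("ascii", "replace"))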


The contents of a modified "Message History" file that match our regular expression that we search for, since the data structure is currently unknown.


The parsed contents of the above file. Different programs recognize different encoding schemes, in this case Notepad translates \x84\x00 as ",,".



"Chat Conversations"

Initially I thought that this file would contain the same data as found in Message History, but interestingly enough, it did not. This seems to contain a list of what appears to be (guessing here, based on the limited amount of data I have to work with) the most recent chat message that was received from an individual that is listed in the Message History file(s). What this means is that if there is a Message History file associated with the username "peter_quill_88@hotmail.com", there will be an entry in the chat conversation folder with that user name and the message as well. So in order to try to be as complete as possible, the script can also handle this file. Fortunately, the data in this file seems to follow a very rough structure, and the data format of this file appears to be:


  • Hex characters \xE8\xFF\xFF\xFF
  • Variety of characters
  • Number 1
  • Colon
  • Email Address
  • Message
  • Variety of characters
  • Hex characters \x00\x00\x00\xBC


Using this, our regular expression to extract the data is going to be

"\xE8\xFF\xFF\xFF([\w\W]+?)\x00\x00\x00\xBC"

As stated above, once the script matches that pattern, it attempts to format the data structure by replacing the "1:" with the term "Email Address", and tries to find the beginning of the message by matching the various patterns that occur after the ".com" that I have seen in my data. It then attempts to clean up non-printable characters that occur in the message itself and prints the chunks of data out one by one. I think it bears repeating that this method is not 100% foolproof; however, it should be more than sufficient to allow you to at least get an understanding of the message conversation that is stored in this file. 


"Status"

The file containing the term "Status" seems to be the most well-structured file within the data. The data itself seems to be stored in a similar structure as the Chat Conversations file, although there are differences in the data itself.


  • Hex characters \xE8\xFF\xFF\xFF
  • Variety of characters
  • Number 1
  • Colon
  • Email Address
  • Username (if present)
  • Microsoft Object (if present)
  • Variety of characters
  • Hex characters \x00\x00\x00\x01

Just like the above examples, the regular expression that we use to extract data is:

"\xE8\xFF\xFF\xFF([\w\W]+?)\x00\x00\x00\x01"


As with the above examples, the script then takes this data and attempts to clean it up into a user-readable form. The script replaces the "1:" with the term "Email Address", attempts to identify if a Username is present and, if it is, labels it accordingly. It also attempts to digest the embedded Microsoft Object, if present, and format that data into a more easily readable format.


The Script

The script, which is written in Perl (insert obligatory coding language debate comment here), runs by specifying either a "file" or a "folder", which should be used after identifying the Windows Live Messenger application data on your iOS device or iOS backup file(s). The script attempts to digest each file based on the fully restored filename (i.e. having "Status", "MessageHistory", or "ChatConversations" in the filename). In other words, pointing the script at a raw iOS backup folder will not yield any results, since it is looking for specific patterns in the file name rather than opening every single file and trying to find the data structure(s) the script is looking for.

The output is just regular text, so you can copy/paste from the command prompt or save it to a file of your choosing. Be advised that the formatting is not 100% perfect, so it might require some cleanup before presenting the "final" version of the output, but it should allow you to get a much better idea of any messages that are stored in the application data, since no other mobile forensic tool on the market currently seems able to handle this data.
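
A typical invocation (assuming the "-folder" option described above and a hypothetical extraction path), redirecting the text output into a file, would look something like:

perl wlm_ios_parser.pl -folder "C:\Cases\iOS_Extraction\com.microsoft.wlx" > wlm_parsed_output.txt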

If you have any questions or issues please do not hesitate to let me know. You can download the script here.


Filename: "wlm_ios_parser.pl"
MD5: c6f3f7d09bea79d69dc3cd60da1fc17a
SHA256: 99380423520b16385dde1493bb46ad439459a221346b65e62451b9a2d7c17bac











Thursday, May 29, 2014

Bluetooth for data exfiltration. Say what?!? Part 4: Some Registry artifacts


Hello again readers and welcome back to another post regarding evidence left behind during Bluetooth data "exfiltration". Today's post is going to focus primarily on Registry artifacts. 

First of all, I want to point out a post made by Russ Taylor regarding Last Modified time updates. The "Last Modified" time stamps on Windows system files are no longer updated like they "used" to be, and it is entirely possible to have time stamps on Registry hives and Event Logs (among other files) that are in the past, while the files themselves have entries from the "future". For example, my NTUSER.dat timestamp was 05/19/2014 at 14:45:17, but the hive had entries from 05/20/2014 15:20:55. Great Scott! <cue Back to the Future music>

NTUSER.dat timestamp shows the "Last Modified" time 05/19/2014 14:45:17

Software-Atheros-VistaAddOn-Devices NTUSER.dat key updated at 05/20/2014 15:20:55. Great Scott!

The issue of "normal" time stamp updating seems to have been first noticed with Windows 7 and  underscores the fact that a forensicator cannot simply rely on file system time stamps alone. In fact, with a couple of lines in PowerShell, you can change timestamps with ease: 

$file = (gi malware.exe);
$file.CreationTime = '8/1/14 12:00AM';
$file.LastWriteTime = '8/1/14 12:00AM';

(Props to Brian Baskin for these exact commands. You may see these again some day....)



(NOTE: I want to test the time stamps out using a program like Triforce to see what additional data it can provide. It is on my list of things to do!)


So, now that we have covered the time stamps next up is covering some of the interesting data contained within the Registry itself.

The first example is in the aforementioned NTUSER.dat hive associated with my user account (which is cleverly named "Brian"). There is quite a bit of data located under the "Software-Atheros-VistaAddOn-Devices" path that looks to be associated with the connection of my Galaxy Note 2 via Bluetooth. I have to dig into the data more (when time permits) to try to figure out exactly "what" information can be determined from the Registry entry(ies). It still doesn't look like there is any evidence of actual "exfiltration" but it is nice to have another item that seems to match pretty closely to the connected device times. 

The Software-Atheros-VistaAddOn-Devices key screenshot, again!

X-Ways Forensics (my forensic analysis tool of choice) also has the ability to carve entries from Registry Hives. This also needs some more digging, as it looks like it is an entry regarding the command and the arguments needed to initiate the Bluetooth connection.



"Path unknown" Registry entry, with Win7UI.exe and the SCH-I605 Bluetooth MAC address

The SOFTWARE hive also had some entries associated with the Bluetooth connection under the "Microsoft-Device Association Framework-Store" path. This also requires some more investigation, but once again, it does not appear that this shows anything along the lines of exfiltration, only connections. These timestamps are prior to the timestamp entries that were created in the NTUSER.dat hive.

SOFTWARE entries regarding the Bluetooth connection


So at least we have a little more data that helps correlate some of the connection times, but we still have not found anything definitive that proves "exfil.doc" was indeed transferred from my computer to my phone via Bluetooth. But, the search continues...