Welcome to the BriMor Labs blog. BriMor Labs is located near Baltimore, Maryland. We specialize in offering Digital Forensics, Incident Response, and Training solutions to our clients. Now with 1000% more blockchain!
Thursday, September 5, 2019
Small Cedarpelta Update
Good morning readers and welcome back! This is going to be a very short blog post to inform everyone that a very minor update to the Cedarpelta version of the Live Response Collection has been published. This change was needed because, as an anonymous commenter pointed out, when a user chose one of the three "Secure" options the script(s) failed due to an update to the SDelete tool. I changed the module to ensure that it works properly with the new version of the executable and published the update earlier this morning. As always, if you have any feedback or would like to see additional data collected by the LRC, please let me know!
LiveResponseCollection-Cedarpelta.zip - download here
MD5: 7bc32091c1e7d773162fbdc9455f6432
SHA256: 2c32984adf2b5b584761f61bd58b61dfc0c62b27b117be40617fa260596d9c63
Updated: September 5, 2019
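If you would like to double-check the download before running it, you can compute the hashes yourself and compare them against the values above. A minimal example on macOS or Linux (Windows users can use certutil or Get-FileHash instead) looks like this:

    # Compute the MD5 and SHA256 hashes of the downloaded archive and
    # compare the output against the values published above
    md5 LiveResponseCollection-Cedarpelta.zip          # "md5sum" on most Linux distributions
    shasum -a 256 LiveResponseCollection-Cedarpelta.zip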
Thursday, June 20, 2019
Phinally Using Photoshop to Phacilitate Phorensic Analysis
Hello again readers, and welcome back! Today's blog post is going to cover the process that I personally use to rearrange and correlate RDP Bitmap Cache data in Photoshop. Yes, I am aware that some of you know me primarily for my Photoshop productions in presentations and logos (and HDR photography, a hobby I do not spend nearly enough time on!), but the time has finally come when I can utilize Photoshop as part of my forensic analysis process!
First off, if you are not aware, when a user establishes an RDP (Remote Desktop Protocol) connection, there are files that are typically saved on the user's system (the source host). These files have changed in name and in format over the years, but are commonly stored under the path "%USERPROFILE%\AppData\Local\Microsoft\Terminal Server Client\Cache\". You will usually have a file with a .bmc extension, and on Windows 7 and newer systems you will also likely see files named "cache0000.bin". These were introduced with Windows 7, are incrementally numbered starting at 0000, and should be searchable by the naming convention "cache{4-digits}.bin". Both file types contain what are essentially small chunks of screenshots saved from the remote desktop. The most reliable tool that I have found to parse this data is bmc-tools, which can be downloaded from https://github.com/ANSSI-FR/bmc-tools. The process for extracting the data is straightforward: you point the script at a cache####.bin file and extract it to a folder of your choice. Once done, you end up with a folder filled with small bitmap images.
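For reference, a typical bmc-tools run looks something like the sketch below; the -s (source) and -d (destination) flags match the project's documentation at the time of writing, but double-check them against the help output of your copy of the script, and adjust the example paths to your own case:

    # Parse a single RDP bitmap cache file and write the recovered
    # bitmap tiles out to a folder of your choice (example paths only)
    python bmc-tools.py -s "/evidence/Terminal Server Client/Cache/cache0000.bin" -d "/evidence/rdp_bitmaps/"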
Now begins the phun part! The bitmaps will need to be rearranged manually to reconstruct the screenshot as best as possible (like a jigsaw puzzle for forensic enthusiasts). This is not an exact science, and it relies on educated best guesses in many cases. While rebuilding is a manual and tedious process, Adobe Photoshop can at least be used to automate the import of the files. Then you can rebuild the item(s) as you see fit!
First, view the contents of the folder in Windows Explorer, or in Adobe Bridge (included in the Adobe Photoshop CC bundle) for Mac users. I found that Preview does not work; it does not render the bitmaps properly. Rather than spending valuable time trying to figure out why that is, I just used Bridge.
Next, select the bitmaps of the activity you'd like to reconstruct, go into Photoshop, and choose "File > Scripts > Load Files into Stack...":
This will allow you to choose multiple files to import into Photoshop all at once. You will be presented with a "Load Layers" dialogue. Select the "Browse..." button, and then browse to the folder that contains the bitmap files you wish to load:
The "Load Layers" dialogue box. In order to choose the file(s) you want to open, click "Browse..." |
Choose the files that you wish to load
Once you’ve selected the bitmap files, you will see that the “Load Layers” box is populated with those files:
Paste the layers into your original workspace, and rearrange them to rebuild the activity!
I truly hope that this small tutorial helps with your process and workload should you find yourself rebuilding RDP session activity. For readers who do not currently own Photoshop, Adobe offers a very inexpensive personal Adobe Creative Cloud (CC) license under the Photography plan, which is $9.99 a month. It is a great deal and one that I have used for my photography hobby for many years, and now on forensic analysis cases that involve RDP bitmap reconstruction!
Thursday, April 11, 2019
Live Response Collection - Cedarpelta
Hello again readers and welcome back!! Today I would like to announce the public release of updates to the Live Response Collection (LRC), which is named "Cedarpelta".
This may come as a surprise to some, as Bambiraptor was released over two years ago, but over the past several months I've been working on adding more macOS support to the LRC. Part of the work that went into this version was a complete rewrite of all of the bash scripts that the LRC utilizes, which was no small task. Once the rewrite was completed, I focused on my never-ending goal of blending speed, comprehensive data collection, and internal logic to ensure that if something odd was encountered, the script would not endlessly hang or, even worse, collect data that was corrupted or inaccurate. So, let's delve into some of the changes that Cedarpelta offers compared to Bambiraptor!
Windows Live Response Collection
To be honest, not a whole lot has changed on the Windows side. At the request of a user, I added a new module that collects Cisco AMP databases from endpoints, if the environment utilizes the FireAMP endpoint detection product. The primary reason for this is that the databases themselves contain a WEALTH of information; however, users of the AMP console are limited in what they can see from the endpoints. The reason for this is likely that it would take a large amount of bandwidth and processing power to process every single item collected by the tool. Since most of this occurs within AWS, the processing costs would scale considerably, which in the end would make the product more expensive to license and use.* (*Please note that I am not a FireAMP developer, and I do not know if this is definitely the case or not, but from my outsider perspective and experience working with the product, this explanation is the most plausible. If any developers would like to provide a more detailed explanation, I will update this post accordingly!)
MacOS Live Response Collection
This is the section that has had, by far, the most work done to it. On top of the code rewrite, which makes the scripting more "proper" and also much, much faster, new logic was added to deal with things like System Integrity Protection (SIP) and files/folders that used to be accessible by default but are now locked down by the operating system itself. Support has been added for:
- Unified Logs
- SSH log files
- Browser history files (Safari, Chrome, Tor, Brave, Opera)
- LSQuarantine events
- Even more console logs
- And many, many other items!
One of the downsides to the changes to macOS is that things like SIP and operating system lockdowns prevent a typical user from accessing data from certain locations. One example of this is Safari, where by default you cannot copy your own data out of the Safari directory because of the OS protections in place. There are ways around this, such as disabling SIP or granting the Terminal application Full Disk Access in the Security & Privacy settings, but since the LRC was written to work with a system running its default configuration, it will attempt to access these protected files and folders, and if it cannot, it will record what it tried to do and simply move on. Some updates that are in the pipeline for newer versions of macOS may also require additional changes, but we will have to wait for those changes to occur first and then make the updates accordingly.
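For anyone curious what that "try it, log it, move on" logic looks like, here is a heavily simplified bash sketch. It is illustrative only and not the actual LRC code, and the destination and log file paths are made up for the example:

    # Illustrative sketch: attempt to copy a protected folder, note any
    # failure in the collection log, and continue rather than hanging
    SAFARI_DIR="$HOME/Library/Safari"            # protected by default on newer macOS
    DEST_DIR="/tmp/collection/Safari"            # made-up destination for this sketch
    LOG_FILE="/tmp/collection/collection_log.txt"

    mkdir -p "$DEST_DIR"
    if cp -R "$SAFARI_DIR/" "$DEST_DIR" 2>>"$LOG_FILE"; then
        echo "Copied Safari data from $SAFARI_DIR" >> "$LOG_FILE"
    else
        # Access denied (SIP/TCC) or another error: record the attempt and move on
        echo "Could not copy $SAFARI_DIR - continuing with the rest of the collection" >> "$LOG_FILE"
    fi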
You will most likely no longer be able to perform a memory dump or automate the creation of a disk image on newer versions of macOS with the default settings, because of the updates and security protections native to the OS internals. As I have stated in the past, if you absolutely require these items, I highly recommend a solution such as Macquisition from BlackBag. The purpose of the LRC is, and will always be, to collect data from a wide range of operating systems in an easy fashion and require little, if any, user input. It does not matter if you are an experienced incident response professional or someone directed to collect data from your own system by another individual: you simply run the tool, and it collects the data.
Future Live Response Collection development plans
As always, the goal of the Live Response Collection is not only to collect data for an investigation; it is also designed to be customized by any user to collect whatever additional information and/or data that user desires. Please consider taking the time to develop modules that extract data, and share modules that you have already developed (a small example sketch follows below). The next update of the LRC will focus on newer versions of Windows (Windows 10, Server 2019, etc.). I personally am still encountering very few of those systems in the wild, but that is mostly because I tend to deal with larger enterprises, where adoption of a new operating system takes considerable time, compared to a typical user who runs down to Best Buy and picks up a new Windows 10 laptop because the computer they used for a few years no longer works.
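If you have never written a module before, it does not need to be complicated. The hypothetical bash sketch below, which simply grabs the current user's shell history files, shows roughly how small a standalone collection script can be (the output directory is made up for the example, so adjust it to wherever you want the data to land):

    # Hypothetical custom module: collect the current user's shell history files
    OUTPUT_DIR="/tmp/collection/shell_history"   # made-up output location for this example
    mkdir -p "$OUTPUT_DIR"

    for HISTORY_FILE in "$HOME/.bash_history" "$HOME/.zsh_history"; do
        if [ -f "$HISTORY_FILE" ]; then
            cp "$HISTORY_FILE" "$OUTPUT_DIR/" && echo "Collected $HISTORY_FILE"
        fi
    done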
Remember, a tool is a tool. It is never the final solution
One last note I would like to add: please remember that while a lot of work has been put into the LRC to "just work", at the end of the day it is just a tool that is meant to enhance the data collection process. There are many open source tools available to collect data, perhaps more than ever before, and one tool may work where another one failed.
For example, you might try the CrowdStrike Mac tool and it might work where the LRC fails, or vice versa. Or you may try to use Eric Zimmerman's KAPE on a Windows machine, but it fails because the .NET Framework was not installed. Or you might try to use the LRC on a system running Cylance Protect and it gets blocked because of the "process spawning process" rule.
In each case you have to give various tools and methods a shot, with the end goal of collecting the information that you want. It is important to remember that YOU (the user of the tool) are the most valuable aspect of the data collection process, and you simply utilize tools to make the collection process faster and smoother!
LiveResponseCollection-Cedarpelta.zip - download here
MD5: 7bc32091c1e7d773162fbdc9455f6432
SHA256: 2c32984adf2b5b584761f61bd58b61dfc0c62b27b117be40617fa260596d9c63
Updated: September 5, 2019