
Wednesday, August 8, 2018

Live Response Collection Development Roadmap for 2018


Hello again readers and welcome back! It's been a little while ...OK, a long while... since I've made updates to the Live Response Collection. Rest assured, for those of you who have used it and continue to use it, that I am still working on it and trying to keep it as up to date as possible. For the most part it has far exceeded my expectations, and I have heard a great deal of positive feedback about how much easier it made the data collections that users and/or businesses were tasked with. The next version of the LRC will be called Cedarpelta, and I am hoping for the release to take place by the end of this year.


As most Mac users have likely experienced by now, not only has Apple continued to update macOS, it has also changed the default file system from HFS+ to APFS. Because the Live Response Collection interacts with the live file system, this really does not affect the data collection aspects of the LRC. It DOES, however, affect third-party programs running on a Mac, as detailed in my previous blog post.


Although the new operating system updates limit what we can collect leveraging third-party tools, there is a plethora of new artifacts and data locations of interest. To ensure the LRC is collecting the data points of particular interest, I've been working with the most knowledgeable Apple expert that I (and probably a large majority of readers) know, Sarah Edwards (@iamevltwin and/or mac4n6.com). As a result of this collaboration, one of the primary features of the next release of the LRC will be much more comprehensive collections from a Mac!



For the vast majority of you who use the Windows version of the Live Response Collection, don't fret, because there will be updates in Cedarpelta for Windows as well! These will primarily focus on Windows 10 files of interest, but will also include some additional functionality for some of the existing third-party tools that it leverages, like Autoruns.



Autoruns 13.90 caused an issue, which was fixed very quickly once it was reported (thanks to @KyleHanslovan)


As always, if there are any additional features that you would like to see the LRC perform, please reach out to me through Twitter (@brimorlabs or @brianjmoran), the contact form on my website (https://www.brimorlabs.com/contact/), or even by leaving a comment on the blog. I will do my best to implement them, but remember, the LRC was developed in a way that allows users to create their own data processing modules. So if you have developed a module that you regularly use, and you would like to (and have the authority to) share it, please do, as it will undoubtedly help other members of the community as well!
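
For anyone wondering what a contributed module might look like, here is a minimal, hypothetical sketch in the spirit of the OSX Live Response bash scripts; the file layout, the OUTPUT_DIR variable, and the artifact path are illustrative assumptions only, not the actual LRC module interface.

    #!/bin/bash
    # Hypothetical LRC-style collection module (illustrative only).
    # Assumes the main collection script exports OUTPUT_DIR pointing at the case folder.

    ARTIFACT_DIR="$OUTPUT_DIR/LaunchAgents"
    mkdir -p "$ARTIFACT_DIR"

    # Copy per-user LaunchAgents plists, preserving timestamps where possible
    for plist in /Users/*/Library/LaunchAgents/*.plist; do
        [ -e "$plist" ] || continue
        cp -p "$plist" "$ARTIFACT_DIR/" 2>> "$OUTPUT_DIR/module_errors.txt"
    done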




Tuesday, July 17, 2018

Let's Talk About Kext


Hello again readers and welcome back! Today's blog post is going to cover some of the interesting things I found poking around macOS while developing updates to the Live Response Collection. First off, I have to offer my thanks to Sarah Edwards for taking the time to talk about what she has done with regard to the quirkiness ("official technical term") of macOS, System Integrity Protection ("SIP"), kernel extensions, and everything else that completely derailed my plans for pulling data from a Mac!


Our story begins by trying to diagnose some errors that I noticed while trying to perform a memory dump on my system using osxpmem. The errors were related to my system loading the kernel extension MacPmem.kext, which resulted in the error message "/Users/brimorlabs/Desktop/Cedarpelta-DEV/OSX_Live_Response/Tools/osxpmem_2.1/temp/osxpmem.app/MacPmem.kext failed to load - (libkern/kext) system policy prevents loading; check the system/kernel logs for errors or try kextutil(8).". Even though I was running the script as root, for some reason the kext was failing to load.


That weird error message is weird

The Live Response Collection script has always changed the owner of the kernel extension, so I knew that ownership was not the problem, which left me in a bit of a bind. Fortunately the tool "kextutil" is included on a standard Mac load, so I hoped that running that command could shed some light on my issues. The results from running kextutil were mostly underwhelming, with the exception of .... what the heck is that path?


Output from kextutil. What is "/Library/StagedExtensions/Users/brimorlabs/Desktop/Cedarpelta-DEV/OSX_Live_Response/Tools/osxpmem_2.1/osxpmem.app/MacPmem.kext" and why are you there, and not the folder you are supposed to be in?
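
For reference, the ownership change and the kextutil run boil down to something like the following, executed from the folder containing the extracted osxpmem package; the exact commands the LRC itself uses may differ slightly.

    # Make sure the kext bundle is owned by root:wheel, which kext loading requires
    sudo chown -R root:wheel osxpmem.app/MacPmem.kext

    # Ask kextutil to run its full set of diagnostics and report any problems
    sudo kextutil -t -v osxpmem.app/MacPmem.kext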


The actual path on disk was "/Users/brimorlabs/Desktop/Cedarpelta-DEV/OSX_Live_Response/Tools/osxpmem_2.1/osxpmem.app/MacPmem.kext", but for some reason the operating system was putting it in another spot. OK, that seems really weird, so why is my system doing stuff that I don't specifically want it to do? Oh Apple, how very Python of you! :)


osxpmem/MacPmem.kext related files under the "/Library/StagedExtensions" path


As it turns out, thanks to quite a bit of research, the "/Library/StagedExtensions" folder is, in very basic terms, the sandbox in which macOS puts things that it does not trust, as a function of SIP. Now, if you were presented with the "Do you trust this extension" prompt ...


This is what the "System Extension Blocked" popup looks like. This is NOT the popup you see with osxpmem


... then, if you navigate to "Security & Privacy", click on the "General" tab, and click "Allow" ...


Security & Privacy - "General". Note the "System software from developer REDACTED was blocked from loading" and the "Allow" button

It would (ok, *should*) then stop the symbolic linking (that is what I assume is happening, although that is not confirmed yet) from the original folder to the StagedExtensions folder that allows the sandboxing/SIP to occur. That means the kernel extension would then be able to run, and the world would be a glorious place. Except..... it seems that once a developer/company's signed kext is allowed, which bypasses SIP, EVERYTHING signed by them in the future will also, automatically, be trusted. Obviously this could present a security issue down the road if those signing certificates were stolen. I don't know of that happening yet, but it does seem plausible and could presumably happen in the future.
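
Two quick command-line checks are handy at this point: seeing whether the kext is (still) sitting in the staging area, and confirming whether it actually loaded. Both are standard macOS tools; the grep term is based on the MacPmem name and is only an assumption about how the loaded entry appears.

    # See what macOS has staged for kernel extension approval
    # (the path is SIP-protected, so list it as root)
    sudo ls -lR /Library/StagedExtensions/

    # Check whether the MacPmem kernel extension is currently loaded
    kextstat | grep -i pmem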



I've tried a couple of workarounds to bypass the SIP process to allow me to dump memory from a system without having to go through all of the bypass SIP/csrutil steps (if you are unfamiliar with that, please follow the link here). None of my attempts have succeeded yet, but I am still trying. I specifically do NOT want to reboot the computer, because I want to collect memory from the system and not potentially lose volatile data. I will either update this post, or continue this as a series, when I find a sufficient workaround (if there is one) to this issue! With that being said, if you have found a way to dump memory from a macOS live system that has SIP enabled, and if you are able to share it publicly (or privately), please share your methodology. I would love for the next LRC update to be able to include memory dumps from systems with SIP enabled!
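
One thing that can be done without rebooting is confirming what SIP is currently enforcing: csrutil status can be run from a normal Terminal session on the live system, whereas actually changing the setting with csrutil disable requires booting into Recovery, which defeats the whole purpose here.

    # Check the current System Integrity Protection status on the live system
    csrutil status
    # Typical output: "System Integrity Protection status: enabled."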

Sunday, June 17, 2018

Who's Down With PTP?


Hello again readers and welcome back! Today's blog post covers a series of (unfortunate) events that I had to work through in order to acquire data from an LG Aristo phone. These methods might also work for other devices, especially severely locked-down ones, such as those primarily utilized on pre-paid plans like TracFone. (DISCLAIMER: I am *NOT* claiming that this will work all the time. It seems that tech companies/developers sometimes take shortcuts (*gasp*), which means that devices don't quite function the way they are "supposed" to function.)


Our journey begins with using Magnet Axiom (thanks Jessica!) in an attempt to acquire data, and subsequently process that data, from a stock Android device. Following the very concise, user-friendly prompts, all of the steps were properly taken in an effort to acquire the device. However, the first issue arose when the "Trust this computer" prompt never came up on the Aristo itself. Since I've had many experiences with mobile devices in the past, my first thought was to fire up Android Debug Bridge (adb) to make sure that adb was properly recognizing the device, because if adb can't recognize it, acquisition through just about any commercial tool won't work. Interestingly, choosing the "Charging only" option from the USB options in Developer mode, which is usually the standard in Android device acquisition, resulted in nothing being recognized in adb.



Charging, the usual method, does not work 
No devices shown in adb
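
A quick sanity check at this stage is simply asking adb what it can see; the serial number in the comment below is a placeholder, not the Aristo's.

    # Restart the adb server and list attached devices
    adb kill-server
    adb start-server
    adb devices

    # With "Charging only" selected, the device list comes back empty:
    #   List of devices attached
    #
    # Once the device is recognized and USB debugging is authorized, it looks like:
    #   List of devices attached
    #   LGM21012345678  device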


So, the next step was to see if connecting with MTP (Media Transfer Protocol) would allow adb to recognize the device. It is a different protocol, and I know from past experience that sometimes a different protocol means the difference between working and not working. When I chose MTP from the Developer Options, I was *finally* presented with the desired "Allow USB debugging" prompt, which also lists the unique computer fingerprint. So...success!!



MTP is picked as the USB connection on the device


Finally, debugging options show up on the device with MTP!


Or not. adb recognizes the device and allows me to send commands, such as making a backup, but what I need is the Magnet agent to be pushed to the device so we can get the user data, such as SMS, contacts, call history, etc. When connected via MTP, it seems the Aristo allows some data to be transferred from the device to a system, but it does not allow data to go from the system to the mobile device. Curses!! Foiled again!!




adb recognizes the device with MTP. Partial success!
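
To illustrate the one-way behavior over MTP: backup-style commands that pull data off the device worked, while pushing anything back to it did not, which is exactly what blocks the agent. The file names below are placeholders, and Magnet's actual agent push is handled by AXIOM itself rather than by hand.

    # Pulling data off the device works over MTP, e.g. an Android backup
    adb backup -f aristo_backup.ab -all

    # ...but pushing a file to the device fails, so an acquisition agent never lands
    adb push test_agent.apk /data/local/tmp/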


After some deliberation, some adb kung-fu, and some Googling for additional options, I decided to try using PTP (Picture Transfer Protocol). adb still recognized the device; however, for reasons that COMPLETELY elude me, setting it up this way allowed not only the backup to be performed, but ALSO allowed data (aka the Magnet agent) to be pushed to the device! At last, I finally had success!



Now we choose the PTP option. For some reason, this choice works!!


In Axiom, we choose the ADB (Unlocked) method
Finally! With the PTP connection, AXIOM recognizes the device!
Ready to start processing!
Data acquisition has begun at last!
Our data has been acquired!


Interestingly enough, however, when I completely cleared the Trusted Devices on the mobile device, I could not get the "Trust Connections from this device" prompt to show up using a PTP connection. So, as long as you follow the method below, you *may* be able to get data from a severely locked down mobile device!

0) Get familiar with using adb from the command line. 

It is a free download, and most commercial tools use adb behind the scenes. If you do any work with Android devices, you should know some basics of adb (a few starter commands are sketched out after this list)!

1) Connect the device using the MTP protocol. 

2) When presented with the Trust Connections prompt on the device, choose OK and make sure the "Always allow from this computer" box is checked
3) Change the connection protocol to PTP
4) Acquire the device using Magnet Axiom
5) ....
6) PROFIT!!
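
As promised in step 0, here are a few adb basics worth knowing before you start. These are all standard adb commands; the file and path names are just examples.

    adb devices                                  # list connected devices and their authorization state
    adb shell getprop ro.build.version.release   # show the Android version on the device
    adb shell pm list packages                   # list the installed packages
    adb pull /sdcard/DCIM ./DCIM                 # copy a directory from the device to your workstation
    adb push file.txt /sdcard/                   # copy a file from your workstation to the device
    adb backup -f backup.ab -all                 # create an Android backup file (if the device allows it)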


One additional note I would like to add is about the Magnet agent when using Magnet Axiom to acquire data from a device. In my opinion, it is very important to choose the "Remove agent from device upon completion" option, found under Settings, when acquiring data from a mobile device. We ran into the issue of an agent being left behind when processing mobile devices during forward deployments. When we had devices associated with high value entities, the final step in data acquisition was having to interact with the device and manually remove the agent and acquisition log(s). (NOTE: it was not Magnet, as they did not exist at the time, it was another vendor who I will not publicly name.) It is entirely up to the end user whether they feel comfortable leaving behind an agent or not. I definitely do not, and will always choose to remove it. I just wanted to specifically point that out to anyone using Axiom to get data from mobile devices!



To change the agent settings, open Process, then navigate to Tools, then Settings


Check the "Restore Device State" box to remove the Magnet agent after acquisition

Friday, April 6, 2018

Fishing for work is almost as bad as phishing (for anything)


Hello again readers and welcome back! The topic of today's blog post is something that we posted on a few years back, but unfortunately it's worth repeating. Companies (both large and small) that provide any kind of cyber security services have a responsibility to anyone they interact with to be completely transparent, particularly when words like "breach", "victim", and "target" start getting thrown around. Case in point: a few weeks ago a client received an email from a large, well-established cyber security services company that caused a bit of internal alarm, yet ultimately did not contain enough information to be actionable.


In short, sharing information, threat intelligence, tactics/techniques/procedures (TTPs), indicators of compromise (IOCs), etc. is something that ALL of us in the industry need to do better. I applaud the sharing of IOCs and threat information (when it's unclassified, obviously). If this particular email had simply contained that information in a timely manner, I would have applauded the initiative. Unfortunately, the information shared about a seven-month-old phish consisted of:


  • four domains
  • tentative attribution to Kazakhstan, but zero supporting evidence
  • “new” (but, admittedly, unanalyzed) malware, including an MD5 hash, and of course, 
  • a sales pitch

The recipient of this email attempted to find out more information, but was ultimately put off by the tone and was unsure whether the information was valid or just a thinly veiled sales pitch. They reached out to us directly for assistance.


I passed this particular information on to others within the information security field, and recently Arbor Networks put out a much more comprehensive overview of this activity, with a whole bunch of indicators and information that was not included, or even alluded to, in this particular email. I wish that more companies would take the initiative and do research into actors and campaigns such as this. If I were a CIO looking for a particular indicator from an email, and in searching for more information I came across the Arbor post, I would be much more inclined to engage Arbor if my team and I needed external resources than the sender of an email that may have had good intentions, but felt like a services fishing expedition.


On the exact opposite end of the spectrum, the outreach around the recent Panera data loss was done perfectly. In the original email, the individual attempted to contact the proper security personnel, had no luck initially, was very disappointed by the initial response from Panera, and tried repeatedly to work with the team. The team at Panera pretty much did nothing until the individual went public with the issue just a few days ago, which (finally) spurred Panera to react, albeit in a less than satisfactory fashion, again. To be 100% honest, if I were in that situation I would have done everything exactly the same way. It is a sad state of affairs when we as customers/consumers are more concerned about protecting our own information than the companies who are charged with the care of that information for their services/loyalty programs/etc.


Additionally, no one wants to hear that their company or team has security issues, but responsible disclosure methods are always the way to go. However, it is hard for companies and individuals who are trying to do the right thing to highlight and address issues when "fishing for work" is so pervasive. I've seen many companies blow off security notifications as scams and ignore them completely, due precisely to this problem of fishing for work.


So ideally, how can we share information better?
  1. Join information sharing programs and network (Twitter, LinkedIn, conferences, etc.)
  2. Don’t “cold call” unless you have no other option. The process works much better when you already have a relationship (or know someone who does)
  3. Share complete, useful, and actionable information: recognize that not all companies can search the same way, due to limitations in available resources, policy, regulations, and even privacy laws. Some companies cannot search by email, while others will need traditional IOCs such as IPs, domains, and hashes (not just MD5 hashes; also include SHA1 and SHA256 if you can).
  4. Include the body of the phishing email and the complete headers--if the company is unable to search for the IOCs, they may be able to determine that it was likely blocked by their security stack    
  5. Be timely. Sharing scant details of a phish from seven months ago goes well beyond the search and retention capabilities of most companies
  6. Be selective in how and to whom you share. Sending these "helpful" notifications to C-levels is guaranteed to bring the infosec department to a full stop while they work on only this specific threat, real, imagined, or incorrect. Which brings me to #7….
  7. Make sure (absolutely sure) you are correct. “Helpful notifications” that are based on incorrect information and lack of technical expertise are common enough that a large company could have days of downtime dedicated to them. (And if the client themselves points out your technical errors with factual observations, consider the possibility that you might be wrong, apologize profusely, and DO NOT keep calling every day)

Tuesday, January 30, 2018

Several minor updates to buatapa!


Hello again readers and welcome back! I am pleased to announce that today there is a brand new, updated version of buatapa! Over the past several months I've had requests for better in-script feedback on some of the ways that buatapa processes the results of autoruns, but I just have not had the free time to sit down and work on implementing them. The new version is a little more "wordy", as it tries the best that it can to help the user if there are processing problems. For example, if you did not run autoruns with the needed flags, buatapa will recognize that from the output file you are processing and suggest you run it again. For those on Mac (and maybe a few *nix systems), it also tells you if you do not have the proper permissions to access the autoruns output file.
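
As a rough illustration of the kinds of problems that feedback is aimed at, these are the sorts of checks you could also run by hand before handing the file to buatapa; the file name below is a placeholder, and the exact autoruns flags and buatapa command-line syntax are documented in the buatapa README rather than here.

    # Confirm you can actually read the autoruns output file (the permission
    # problem the new version now calls out on macOS/*nix)
    ls -l autoruns_output.csv

    # Peek at the first line to confirm it looks like the expected autoruns
    # output format (i.e. that the needed flags were used when it was generated)
    head -1 autoruns_output.csv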


There are also some slight changes to the internal processing and a little better logic flow. All in all, buatapa has held up quite well since the early testing nearly three years ago, and hopefully it remains a useful tool for helping to triage Windows systems within your environment.



If you have any questions or encounter any bugs/issues, please do not hesitate to reach out!



buatapa_0_0_7.zip - download here 

MD5: 8c2f9dc33094b3c5635bd0d61dbeb979
SHA-256: c1f67387484d7187a8c40171d0c819d4c520cb8c4f7173fc1bba304400846162
Version 0.0.7
Updated: January 30, 2018
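
If you would like to verify the download before running it, comparing against the hashes above takes one command per algorithm on most platforms; the commands assume the zip is in your current directory.

    # Verify the downloaded archive against the published hashes
    md5sum buatapa_0_0_7.zip        # on macOS: md5 buatapa_0_0_7.zip
    sha256sum buatapa_0_0_7.zip     # on macOS: shasum -a 256 buatapa_0_0_7.zip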

Tuesday, December 26, 2017

Amazon Alexa Forensic Walkthrough Guide


Hello again readers and welcome back! We are working on wrapping up 2017 here at BriMor Labs, as this was a very productive and busy year. One of the things that Jessica and I have been meaning to put together for quite some time is a small document summarizing the URLs to query from Amazon to return some of the Amazon Echo-system data.

After several months, we (cough cough, Jessica) finally were able to find the time to put it together and share it with all of you. We hope that it is helpful during your investigations and analysis, and if you need anything else please do not hesitate to reach out to Jessica or myself!



Alexa Cloud Data Reference Guide




Monday, June 26, 2017

A Brief Recap of the SANS DFIR Summit


Hello again readers and welcome back!! I had the pleasure of attending (and speaking at, more on that in a bit!) the 10th SANS DFIR Summit this past week. It is one conference that I always try to attend, as it always has a fantastic lineup of DFIR professionals speaking about amazing research and experiences that they have had. This year was, of course, no exception, as the two-day event was filled with incredible talks. The full lineup of slides from the talks can be found here. This was also the first year that the presenters had "walk-up music" before the talks.


This year, my good friend Jessica Hyde and I gave a presentation on the Amazon "Echo-system" in a talk we titled "Alexa, are you Skynet". We even brought a slight cosplay element to the talk, as I dressed up in a Terminator shirt and Jessica went full Sarah Connor! One other quick note I would like to add: we chose the song "All The Things" by Dual Core as our walk-up music. Dual Core actually lives in Austin, and fortunately his schedule allowed him to attend. It was really cool having the artist who performed our walk-up music in attendance at our talk!

Jessica and I speaking about the Amazon Echo-system at the 2017 SANS DFIR Summit

We admittedly had a LOT of slides and a LOT of material to cover, but if you have attended any of our presentations in the past, you know the reason our slide decks tend to be long is that we want to make sure the slides themselves can still paint a pretty good picture of what we talked about. This way, even if you were not fortunate enough to see our presentation, you can still follow along with the slides, and they can also serve as reference points during future examinations. We received a lot of really great comments about our talk and had some fantastic conversations afterwards as well, so hopefully if you attended you enjoyed it!


My other favorite part of the DFIR Summit is getting to see colleagues and friends that you interact with throughout the year, actually in person and not just as a message box in a chat window! Even though some of us live fairly close to each other in the greater Baltimore/DC area, we fly 1500 miles every summer to hang out for a few days. While in Austin several of us had some discussions about trying to start some local meetup type events on a more regular basis, so there definitely will be more on that to follow in the coming weeks!