Att&cking The Engenuity Evals (Mitre by Mitre)

If the title of the article didn’t strike you as Engenuis, that’s ok, we’ll just move along.

By now, most people working in some sort of Cyber Security role would have seen at least one form of Vendor self-praise around the latest Mitre Engenuity Att&ck Evaluations for Enterprise – Wizard Spider & Sandworm 2022.

I even made a Meme about it which, if I have to say so myself, is doing pretty well on the socials.


Now, don’t get me wrong, I love vendors. I also love branded hoodies and socks and stickers and free stress balls and iPads.

Swag Sidenote

If there is a gap in the market, I’m willing to do a Vendor Swag comparison test. We’ll call it the Swanepoel Vendor SW&G Off for Enterprise. What is great about this is that there is no fee for vendors to participate, but Swag will unfortunately not be returned after the evaluations have been completed. Similar to the Mitre Engenuity Att&ck Evaluations for Enterprise, the Swanepoel Vendor SW&G Off for Enterprise will pit the best Vendors against each other, and see who has the best SW&G, without us telling you who came out on top. We do realise the first question you’ll have when hearing about the SW&G Evaluations for the first time is “who won”, and we understand why you ask. Unfortunately, while evaluations data should be informative and aid in decision-making for SW&G-related stuff, it becomes difficult to control how each vendor would interpret their results.


Introducing the Average Analyst

So usually when a new Mitre EDR / XDR evaluation rolls by, I try to make a Meme then don’t do much further.

But, in this the year of our Lord 2022, I thought it was time to stand up for the Average Analyst, pick up my keyboard of destiny and do the work that no average analyst has the strength left to do after a day of battling “is this the Russians or just IT doing stupid stuff” alerts. That is, to try and make sense of the 2022 Mitre Engenuity Att&ck Evaluations for Enterprise – Wizard Spider & Sandworm.

Because if we are to leave it to the 30 participating vendors to interpret the results for us, we’ll walk away thinking everyone is a winner, everyone prevented all badness, and you should defo be ripping out your current solution and rolling out their product instead.

Don’t get me wrong, EDR / XDR vendors are special vendors. These are the folks helping thousands of SOC Analysts, Incident Response Analysts, Security Operations Managers and CISOs sleep at night, knowing their environments are protected by decent tech, provided by vendors who have hopefully employed some of the smartest minds in the industry.


The Mouthpiece Of The Average Analyst

So I hear you at the back. “What gives this guy the right to speak for the Average Analyst?”. Well, allow me two full sentences of self-promotion:

Sentence 1: I’ve done DFIR (Digital Forensics and Incident Response) work since before CrowdStrike was founded. (That definitely sounds so authoritative, doesn’t it?)
Sentence 2: No one else is currently speaking up for the Average Analyst, so I might not be your favourite child, but I’m the only one you have.

The point I’m trying to get to here is that if I struggle to make sense of the Evaluations, so might a lot of the other Average Analysts out there. If you are rolling your own Sysmon Elasticsearch stack on top of Kibana authentication logs on a distributed ledger to verify the authenticity of alerts, then you probably aren’t the target market here. (But still always welcome to stick around for the jokes).


Why So Confused?

If you don’t know why there is confusion about these results, it boils down to this: The way the Evaluations work means that Mitre doesn’t define a sole “winner” at the end, leaving the door open for each Vendor to interpret the results in a way they see fit.

Don’t believe me? Let’s take a five-minute Google and see what some of the Vendors say about this latest set of Evaluations:

  • Cynet ranked all 30 vendors, crowning SentinelOne as the winner.
  • Cybereason said they won: “Undefeated in MITRE ATT&CK Evaluations” and “leads the industry in the MITRE ATT&CK Enterprise Evaluation 2022”
  • SentinelOne agreed with Cynet, also claiming first spot, with Microsoft second and CrowdStrike third.
  • CrowdStrike said that it’s actually them who won (leading is winning right?): “CrowdStrike leads the latest MITRE ATT&CK Evaluations”.
  • Microsoft went the humble route by not actually ranking themselves. They did say that they “successfully detected and prevented malicious activity at every major attack stage”.
  • Palo Alto in turn said they also won, going for “Cortex XDR Triumphs in 2022 MITRE ATT&CK Evaluations”.
  • Trend Micro gave themselves a Top 3 finish, while Malwarebytes also chose not to rank themselves, only saying they scored “High Marks”.
  • Finally, we end with BlackBerry just saying they were 100% successful in preventing the attack emulations.


So, Is It Really Worth It?

I don’t know, but that is what we are here to find out.

The fact is, Mitre believes so and so do 30 EDR / XDR vendors. You can rest assured that these latest Evaluations have also found their way to your CISO’s inbox. The person in charge of procurement will most likely use this to guide the shortlist of Vendors to review when renewal time comes around.

So instead of just writing it off as marketing drivel, or responding with a blank face when your CISO asks you “So, based on the Evals, which XDR should we buy?”, join me on a fact-finding journey and let us see what it’s all about (Ok, that sounds extremely cheesy but I’ve run out of smart things to say now).

Finally, drop some comments below to help me understand where you are at.

  • Is there anything you love about the Evals?
  • What do you hate?
  • What doesn’t make sense?
  • What is your number one burning question?
  • Do you think there is any value in it?
  • Can we all blame the poor marketing teams?

If you are a vendor and would like to send me Swag to help alleviate the pain about to rain down on your marketing team from my keyboard of destiny... ok, way over the top, OVERRULED.


Based on the feedback, we’ll formulate a strategy and unpack the Evals in the upcoming posts. (If this article dies a slow and lonely death, humiliated by its view count, this will be the only post in the series, and also be the shocking end of me being the Mouthpiece of the Average Analyst.)

Cheers!

Introducing SocVel (DFIR CTF)

Just over a year ago (Feb 2020) I started running weekly internal training CTFs @work.

These were aimed at the various levels of analysts in the SOC as well as the folks in Incident Response. It ultimately allowed us to test and train analysts in a question-answer style CTF, validating understanding of the tools and systems used in everyday work. One of the great things about it for me was that we were using actual data and tools from our own environment. I could see how analysts were answering questions, which for me is a great way to identify gaps in either technical knowledge or (mis)understanding of tool output.

Since then, I’ve long wanted to launch something similar in the public domain. A CTF aimed at SOC and DFIR (Digital Forensics and Incident Response) analysts. But, just to get a decent amount of data generated on which you can build a public CTF is a fair amount of work. Since the start of this year I kept coming back to the idea of running a public training CTF and have now put together an MVP (Minimum Viable Product).

So say hallo to SocVel:

The name SocVel is derived from the well known South African term Stokvel. But more on that at a later time… MVP, right?


What is the aim of all this?

For those new to the field

Most infosec vendors will have some training available to help you understand how to interpret what is on the screen when using their tools. Whether that is an AV solution, EDR, SIEM, SOAR or SNAFU. (The last one is not a real infosec term, although in this day and age, that could be deemed an acceptable way to refer to the industry.)

But, one of the main gaps I often see is the ability to link all the bits of information together. Some analysts may get overwhelmed by the noise in their environment, and struggle to identify the golden needles in a stack of more needles. 

For me, it often comes down to asking the right questions about the situation in front of you, and being able to devise plans to answer those.

In addition, you need to be able to formulate these answers you’ve found during an incident to tell the story of what happened. Whether that story needs to be communicated to a colleague, a level up in the SOC, or an overworked CISO who really just wants to know if this is the big incident that finally pushes them over the edge.

For veterans

If you are a veteran SOC or DFIR analyst, this is a great way for you to test your abilities as well as your tooling. Challenge yourself by working with data that isn’t necessarily in the form you’re used to getting from your EDR, SIEM or Triage Scripts.


What makes this different from most DFIR ‘conference’ CTFs?

Time Pressure

There is no time pressure. Each SocVel CTF should remain open for a month or so, depending on the number of participants or general interest. 

Oftentimes the time slots in which CTFs are presented aren’t ideal for your time zone. Yeah, I know they can’t cater for the entire globe, but doing a CTF between 01:00 and 07:00 local time on a Saturday morning is not my idea of fun.

Even if the CTF is in a respectable timeslot, the line of work most DFIR or SOC analysts find themselves in doesn’t always guarantee they’ll have the consecutive hours available to complete it. 

Barrier To Entry

Sometimes CTFs are just plain whack in their asking (especially general hacking ones). Allow me to quote a post from hatsoffsecurity.com, referring to people who create CTFs:

“The challenge should be hard because the subject is hard, not because you’re being a d***”

My target market with SocVel is both experienced DFIR veterans and entry-level analysts. To that end, most questions in a SocVel CTF will have an unlockable hint available. This should be helpful enough for you to derive how to get to the answer.

You’re not going to learn anything if you get stuck at a point, and there is nothing or no one there to guide you in understanding what needs to be done.

Guessing

Again, my aim for SocVel is to be a training CTF. 

In an online conference CTF which took place last year, there was no limit on the number of incorrect answers you could submit. These were the stats for the winner:

  • Correct Submissions: 22 (5.49%)
  • Wrong Submissions: 379 (94.51%)

As a strategy for winning CTFs, that will probably get you there. If the question is “Which browser was used by the attacker?”, you just start submitting browser names until you get it right. However, I don’t want someone working on incidents that have a mere 5.49% success rate.

To combat this, SocVel will deduct points for each incorrect submission. You can still try and try again until you get it right, but it will cost you. 
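To make the cost of guessing concrete, here’s a minimal sketch of how penalty scoring like this could work. (The function name, point values and penalty amounts are my own assumptions for illustration, not SocVel’s actual scoring.)

```python
# Hypothetical sketch of penalty-based CTF scoring (values assumed,
# not taken from the real SocVel platform): each wrong submission
# costs points, but you can keep trying until you get it right.
def score_question(points, penalty, submissions, correct_answer):
    """Return the score earned for one question.

    submissions: answers in the order they were tried.
    A wrong answer deducts `penalty`; a correct one banks
    whatever is left (floored at zero).
    """
    remaining = points
    for answer in submissions:
        if answer == correct_answer:
            return max(remaining, 0)
        remaining -= penalty
    return 0  # never answered correctly

# Guessing browser names one by one quickly erodes the score:
print(score_question(100, 25, ["Edge", "Opera", "Firefox"], "Firefox"))  # 50
print(score_question(100, 25, ["Firefox"], "Firefox"))                   # 100
```

Under a scheme like this, the 94.51%-wrong strategy from the stats above would bleed a scoreboard dry, which is exactly the point.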


Ready?

And with that, the first investigation (Pooptoria) is live: 

The notorious threat actor Fancy Poodle has done it again! This time striking at Strikdaspoort Wastewater Treatment Plant in Pretoria, South Africa… 

Do you have what it takes to solve the investigation while only using limited triage data? All before the license-dongle-wielding forensic analysts have checked their write blockers out of storage?


Head over to www.socvel.com for instructions to give it a go.


Parsing APFS with Axiom before the thing from Lost eats you

During the latter part of 2017, Apple introduced their APFS file system, which is being rolled out with macOS High Sierra.

The following section was taken from an Apple support article:

When you install macOS High Sierra on the Mac volume of a solid-state drive (SSD) or other all-flash storage device, that volume is automatically converted to APFS. Fusion Drives, traditional hard disk drives (HDDs), and non-Mac volumes aren’t converted. You can’t opt out of the transition to APFS.

Although there are a couple of articles floating around which show ways to ‘opt out’ of APFS, it is still likely that 99% of High Sierra systems with Solid State Drives you’re going to come across will have APFS running.

Now, picture this scenario:

You are stuck on an island with a forensic image of an APFS volume and a toolbox full of your favorite commercial forensic tools. Contained in the APFS volume is a backup of an iPhone 6s which contains a WhatsApp message with the instructions on how to make one mean coconut Mojito. You need to access said message in order to make the Mojito before sunset. Should you fail, you’ll be forced to do manual USB device history analysis for 26 Windows 7 internet café PCs, after which, you may or may not get eaten by that thing that was eating people in Lost.

So, your options:

  • Blackbag’s BlackLight — Yes, it works.
  • Autopsy — No support as of version 4.7.
  • AccessData FTK — No support as of version 6.4. Their online tech support noted that APFS support is planned for a future release, but there is no ETA yet.
  • Magnet Forensics Axiom — No support as of version 2.1.0.9727. Jad Saliba mentioned at the Magnet User Summit in Las Vegas (May 2018) that they’re currently working on it, but there’s no ETA yet.
  • OpenText EnCase — Officially: Yes, Unofficially: Sort of. Although EnCase announced APFS support in version 8.07, I’ve dealt with two separate Macs where EnCase is refusing to parse the APFS volumes. I’ve put one of the images through a few tests. The image happily parses with Blackbag’s Blacklight and mounts with both Paragon‘s APFS mounter and Simon Gander’s APFS-Fuse library. OpenText Tech support is currently looking into this.
  • X-Ways — No support in version 19.6; however, according to this tweet from Eric, it should be coming soon:

 

Plan A: Blackbag

After your confidence grows while scrolling through the heaps of tweets about Blackbag being ‘the only end-to-end solution for APFS’, you realize that your 30-day trial license has just expired… As you were about to accept your fate and Google “sans usb profiling cheat sheet“, you find two articles from Mari DeGrazia on mounting APFS images:

As the daylight starts to fade and you try and remember how many episodes of Lost you actually watched before losing interest, you devise a new plan:

 

Plan B: A quick-and-dirty way to process APFS with Axiom and friends.

I was specifically looking for a way to get my APFS image parsed with Axiom.

The following approaches did not work:

Experiment 1:

Mount E01 with Arsenal Image Mounter > Mount resulting APFS partition with Paragon’s ‘APFS for Windows’ > Add files & folders in Axiom.

Result: It processed, but for some files Axiom wasn’t properly linking back to the actual source files to display their content. Not sure whose fault it is, but most likely something to do with the mounting of a mounted image.

 

Experiment 2:

Mount E01 with Arsenal Image Mounter > Mount APFS partition with Paragon’s ‘APFS for Windows’ > Create AD Image with FTK Imager > Process AD Image with Axiom.

Result: It processed, but again had issues with displaying actual content for some of the files processed. During the creation of the AD Image, FTK Imager encountered a large volume of files it claimed couldn’t be added to the logical image, again likely due to the various mountings.

 

Experiment 3:

Mount E01 in SIFT with ewfmount (libewf) > Mount APFS partition with APFS-fuse > Create a tar of mounted data > Process tar with Axiom

Result: Again got a similar result where Axiom processed the data, but didn’t display actual content for some files.

 

At this stage most island-stricken forensicators would have given up and resigned themselves to a life of USBSTORs and Volume GUIDs. But luckily, you’re not most forensicators and you try one more way:

Experiment 4:

  1. Mount E01 in SIFT with ewfmount (libewf)
  2. Mount APFS partition with APFS-fuse
  3. Create an empty DD image, give it a volume and copy mounted APFS data to the new DD image. For a step-by-step walkthrough of basically creating a DD image from files and folders, check out Andy Joyce’s 2009 post: http://dougee652.blogspot.com/2009/06/logical-evidence-collection-stored-in.html
  4. Process DD image with Axiom.
  5. Success and Mojitos.

Axiom was happy to process the DD, as well as the iPhone backup which was contained on the APFS volume in one go.
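If you want to script step 3, the only fiddly bit is working out how big the empty container needs to be before you format it and copy the mounted data in. Here’s a small helper sketch for that sizing step (function names, paths and the 20% headroom factor are my own illustrative assumptions, not part of the walkthrough above):

```python
# Sketch: size and create the empty DD container from step 3.
# Names and the headroom value are assumptions for illustration.
import os

def tree_size_bytes(root):
    """Total bytes of regular files under root (symlinks skipped)."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if not os.path.islink(path):
                total += os.path.getsize(path)
    return total

def make_empty_container(path, data_bytes, headroom=0.2):
    """Create a sparse image file with ~20% slack for filesystem overhead."""
    size = int(data_bytes * (1 + headroom))
    with open(path, "wb") as f:
        f.truncate(size)  # sparse: no blocks allocated until written to
    return size

# e.g. measure the APFS-fuse mount point from step 2, then create the
# container; you'd still format it and copy the data in afterwards.
```

After this you would give the container a filesystem and copy the mounted APFS data across, exactly as in the numbered steps above.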

And yes, copying the mounted data to a DD container will update the creation dates of the files. If this makes you feel uneasy, remember, you also just used an ‘experimental’ driver to mount an APFS volume.

At least the thing from Lost didn’t eat you… #winning