Friday, November 6, 2015

Solving the 2015 FLARE On RE Contest - Challenge #2

You can find the solution to part 1 here.

Challenge 1 was fairly straightforward. The executable in its entirety was rather small and wholly contained within the main application entry point. The 'key' was a hard-coded value in the executable that stood out visually (when analyzed in a hex viewer), and the decryption routine was a simple bitwise XOR operation. As we will see, challenge 2 introduces additional layers of complexity in the form of a slightly larger application with numerous function calls and a more convoluted decryption routine.

So let's dive in!

Challenge #2
It seems the trend of poor grammar continues! The file name for the second problem is 'very_success' and is missing a file extension. I made a copy of the file and added '.exe' to the file name. This worked just fine as I was able to run the file. Similar to the first challenge, we are presented with a simple command prompt upon execution that requests user input and prints a string based on our input.
The import table for the file is largely the same as the first problem. Let's disassemble the application and see what we can infer from IDA.
So right away we see that there are three subroutines; this application is a little bit larger. You can check out my article here on disassembling and identifying function calls in assembly if you'd like a better understanding. These calls introduce function prologues, epilogues, compiler-specific conventions, local variable declarations, space allocation for passed arguments, and use of the ESP and EBP registers for computation on the stack. So although a function call is trivial from a programming perspective, it certainly adds some complexity at the assembly layer if you are unfamiliar with these conventions.

In the first function we see a call to sub_401000. This is followed by scasd, stosb, lodsb and a jmp to loc_401096+1. These instructions are a little less common, and the jmp to loc_401096+1 is somewhat odd, but I'm going to ignore this for now and follow the program's intended execution flow. Let's drill into sub_401000.
Well this feels very familiar! We call GetStdHandle and then print a string ("You crushed that last one...") to STDOUT with WriteFile. We then read user input from STDIN using ReadFile. This value is stored in the variable unk_402159 (at the memory offset of the same name). Several values, including our user input variable unk_402159, are pushed onto the stack and then a call to sub_401084 is executed.

This sub-routine is where things start to get more interesting. There are two parts at work here.
In the first we see a local variable declared and space for the three arguments allocated. The function prologue begins and a few empty registers are pushed onto the stack. EBX is XOR'd with itself (resulting in a value of 0 in that register as well). Then 25 (hexadecimal) is loaded into ECX and a comparison against the memory at offset EBP+arg_8 is made. This effectively serves as a length check! If we enter a value with a length of at least 25h then this comparison passes. Otherwise loc_401096 is executed and a short jump to loc_4010D7 occurs (which effectively quits the application).

If the length check passes then we proceed to the second part of our function.
A-hah! Here is the decryption routine. The dereferenced value at EBP+arg_4 (0018FF64, whose stack contents are 00402159, the address of the first byte of our user input variable) is loaded into ESI. A return address is then loaded into EDI. Oddly, EDI+ECX-1 is then loaded into EDI with the LEA instruction. ECX is 25h; subtracting 1 gives 24h. EDI now contains 004010E4 + 24 = 00401108. This is an odd address. Let's take a quick look in a hex viewer to see what resides at this location.
Well, this is interesting. Our initial length check and the ECX-1 calculation use the same value. This is a pretty strong indication that the characters from 004010E4 to 00401108 are the 'key'.

At this point we've probably gathered as much useful information as can be derived from reviewing the disassembled instructions. There are a lot of register comparisons in the suspected decryption function, so it becomes easier to analyze what transpires if we know what the registers actually contain at run-time. We need to use a debugger to gain this view into the application's execution flow. I prefer Immunity, but Olly or any other debugger will suffice.

Load your debugger and open 'very_success.exe'. Right click in the disassembled instruction window and select View -> Module 'very_suc'. Select the line directly above the call to ReadFile and press F2 to set a breakpoint. This will halt application execution when it reaches this instruction so that we can manually step into and over subsequent instructions and see each change as it affects the registers and stack.
Press F9 to run the program, F7 to step into the first routine, and F9 again to hit our breakpoint. Press F8 twice and then enter a value at least 25h characters long in the application command prompt window. Let's continue to press F8 to step over each instruction until we reach CALL very_suc.00401084. Press F7 to step into this function. This is our bounds-check routine. Continue to press F8 until you reach LEA EDI, DWORD PTR DS:[EDI+ECX-1] (which you should recall from the prior IDA analysis of this function). Application execution is now sitting in our decryption routine.
Let's examine the stack and register contents to better understand our current landscape. EAX contains 0018FF84 which is a stack offset. ECX holds 25 (the length check). EDI points to the suspect string. ESI points to the user entered string.

We also see some 16-bit and 8-bit register references. If you are unfamiliar with these, please see the following diagram:

EAX = 001801C7

+--------+--------+--------+--------+
|   00   |   18   |   01   |   C7   |
+--------+--------+--------+--------+
|<------------ 32 bits ------------>|
                  |<--- 16 bits --->|
                  +--------+--------+
                  |   01   |   C7   |   <- AX
                  +--------+--------+
                  |   AH   |   AL   |
                  | 8 bits | 8 bits |

I'm going to walk through each instruction in the decryption routine to help explain things to those unfamiliar with some of the operations that take place.

MOV DX, BX
This instruction copies the low 16 bits of EBX into DX. Since EBX was zeroed earlier (XOR EBX, EBX), DX now holds 0 as well.

AND DX, 3 performs a bitwise AND operation, which has no effect at this point since DX is already 0.

MOV AX, 1C7
1C7 is a hard-coded value and it is loaded into the lower 16 bits of EAX (which now contains 001801C7). EAX is then pushed onto the stack.

LODS BYTE PTR DS:[ESI] loads the byte at ESI into the AL register. Since ESI points to the user input string, this takes our first character and stores it in the low 8 bits of EAX. This is important, as it is the beginning of our comparison against a presumed hard-coded key value. ESI then increments.

PUSHFD decrements our stack pointer by 4 and pushes the contents of the EFLAGS register to the stack (00000203).

XOR AL, BYTE PTR SS:[ESP+4] is a crucial step. This instruction performs a bitwise XOR of the AL register (the first byte of user input) against the byte at ESP+4. ESP points to the top of the stack (which currently holds the EFLAGS contents); adding four moves us back up the stack to the 001801C7 value we pushed earlier, and [ESP+4] is its low byte, C7. So if our user input were a string of 'A' characters (0x41), this operation would be XOR 41, C7. The result is stored in AL.

XCHG DL, CL swaps the contents of these two registers effectively placing the value 25 in DL and 00 in CL.

ROL AH, CL is another important instruction. It rotates AH left by CL bits. CL contains 00, so nothing happens during this iteration of the loop.

POPFD pops the EFLAGS register back off the stack.

ADC AL, AH is a third crucial instruction. ADC stands for 'add with carry': the source operand and the carry flag are added to the destination operand, and the result is stored in the destination. Continuing with our example user input, AL contains 86 after the XOR operation, the carry flag is set to 1, and AH contains 1. 86 + 1 + 1 = 88, which is placed into AL. EAX now contains 00180188.

XCHG is used again to swap the contents of DL and CL. EDX is XOR'd by itself to zero the register.

AND EAX, 0FF is executed, which zeros the contents of EAX except for AL.

ADD BX, AX sets EBX to 0x00000088.

And now for the most important instruction...

SCAS BYTE PTR ES:[EDI] compares the byte, word, or double word specified with the memory operand with the value in the AL, AX, or EAX register, and sets flags in the EFLAGS register according to the results. This checks our XOR, ROL, and ADC computed first byte of user input against the byte at EDI (00401108 which is our suspected key if you recall).

EDI then decrements (it will need to check the next byte of the suspect key). If our computed value matched the key byte, the loop iterates again; if not, the application jumps to 004010D7 and quits.

Let's try to summarize this activity:
1. 004010E4 - 00401108 contains our key
2. 00402159 - contains our user input
3. The decryption routine does an XOR against C7, a ROL, and an ADC on each byte of user input and compares it to a corresponding byte in the key.
4. If there is a match it continues until fully decrypted. Otherwise it quits.

Since we know the operations that occur, we can perform each step in reverse order to decrypt the key.

A8 9A 90 B3 B6 BC B4 AB 9D AE F9 B8 9D B8 AF BA A5 A5 BA 9A BC B0 A7 C0 8A AA AE AF BA A4 EC AA AE EB AD AA AF

A8 - 2 = A6; XOR with C7 = 61; converting hexadecimal to ASCII gives a lowercase 'a'

You also have to factor in the ROL instruction: ROL AH,CL on the third character (where CL = 02 and AH = 01) turns AH into 04. The ADC then becomes AL + 4 + 1, so for that character you subtract 5 instead of 2.
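
To sanity-check the hand math, here is a minimal Python sketch of the per-byte reversal described above. It only reproduces the worked examples from this post; the rotation amount and carry flag change each iteration, so those adjustment values still have to be tracked in the debugger (or emulated) for the remaining bytes.

# Minimal sketch of the per-byte reversal: subtract the ADC amount
# (rotated AH plus the carry flag), then undo the XOR with C7.
def rol8(value, count):
    """Rotate an 8-bit value left by 'count' bits (mirrors ROL AH, CL)."""
    count &= 7
    return ((value << count) | (value >> (8 - count))) & 0xFF

# First key byte: AH = 01, CL = 00, carry = 1, so subtract 2.
adjust = rol8(0x01, 0) + 1
print(chr(((0xA8 - adjust) & 0xFF) ^ 0xC7))   # prints 'a'

# Third character: CL = 02 turns AH into 04, so the subtraction is 5.
print(rol8(0x01, 2) + 1)                      # prints 5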

I did these computations by hand and stepped through everything manually in the debugger. After computing each value I tested the input in the debugger, watched the XOR, ROL, and ADC operations to ensure everything was being processed correctly, and adjusted my calculations whenever an incorrect value appeared.

I'm sure you could build a quick Python script to calculate everything with a proper ROL implementation, but it didn't take that long by hand and was good practice. This was definitely more challenging than the first problem but still manageable. The application and decryption routine were not complex enough to necessitate renaming variables for clarity in IDA or programming anything in Python.

Result:
a_Little_b1t_harder_plez@flare-on.com

Friday, October 30, 2015

Solving the 2015 FLARE On RE Contest - Challenge #1

After my last post on disassembling control flow structures, a colleague of mine mentioned that I should participate in the 2016 FLARE On challenge next year. Due to work and life obligations I have not had the opportunity to partake in the previous 2 contests but I figured now would be a good time to get some practice in and prepare for 2016. I am by no means an expert and this is very much a learning exercise for me, one that I intend to document and share. I'll be doing a series of posts (when I find time) over the next few months documenting my progress.

The FLARE On contest is an annual capture-the-flag style reverse engineering competition from FireEye. If you'd like to follow along you can download the materials here. Each challenge involves assessing a sample with the sole intent of finding a key (an e-mail address).

Challenge #1
The first file is a self-extracting zip and will dump an executable into a specified directory with the name "i_am_happy_you_are_to_playing_the_flareon_challenge.exe". It's a long and oddly named file. I'll assume the intentional grammatical deficiencies are for comedic purposes and don't have any greater significance, for now :)

Before executing this file let's check out the import table to get an idea of what it does. At only 2KB in size, it is clearly a small application with very limited functionality.
Pretty standard stuff here: no notable cryptography or networking related libraries. This binary appears set up to take some input and that's about all.

When we run the file this hypothesis is confirmed. The application prints two strings, prompts the user for input, checks if it is the correct value, and then prints a string depending on whether the user entered the secret key. I tried entering a variety of answers that included special characters and lengthy input but could not generate an alternate error message.
At this point I elected to fire up IDA and take a peek at the disassembled routines. Before analyzing the main function, though, let's review the raw hex and look for some of the strings that we observed during interactive program execution.
My first observation here is that after the 'You are failure' string we see 24 characters that appear mutated, followed by some null values. Let's keep this in mind and take a peek at the application entry point.
Our application is initialized and GetStdHandle is called. This function retrieves a handle to the specified standard device (standard input, standard output, standard error). User input is taken and written to the memory offset for variable 'byte_402158'. Now we enter the core logic of the application.
The first routine takes the first low-order byte (x86 is little-endian and the AL register is 8 bits) from the ECX register and moves it into AL. A bitwise XOR operation is then performed against the AL register and the hex value '7D'. The result is then compared to a value stored at the memory offset of variable 'byte_402140'. If the values do not match then we get the error message. However, if the value does match then ECX is incremented and the XOR comparison occurs again. It iterates through this loop 0x18 times. In decimal this is 24 loops, which, interestingly, is the length of the mutated string we noted earlier. The memory offset referenced by variable 'byte_402140' is the location of this suspicious string as well.

At this point there is enough evidence indicating that the mutated characters are in fact the secret key. I thought about writing a program to XOR each byte at that offset by 0x7D but got lazy and used the Python interpreter.
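
For reference, the interpreter work boils down to a couple of lines. This is just a sketch of the approach; the 'encoded' list is a placeholder that you would fill with the 24 bytes copied from the binary at the offset of byte_402140.

# Sketch: XOR each byte of the suspicious string with 0x7D.
encoded = [0x00] * 24   # placeholder -- replace with the bytes at byte_402140
decoded = ''.join(chr(b ^ 0x7D) for b in encoded)
print(decoded)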

Once you have computed each value, assemble the string and convert the hexadecimal values to ASCII.

Well that certainly looks like an email address. Sure enough, when we enter this as our key value it is successful.

Ta-da! This first challenge was fairly straightforward and didn't take long at all but I assume the difficulty level will ramp up quite quickly.


Monday, October 5, 2015

Disassembling Loops and Control Structures in x86

One of the first topics we discussed focused on understanding how function calls and stack frames are represented in assembly languages. I wanted to follow up on that as we haven't posted on this subject matter since then. Both this post and my prior one are geared towards those with an interest in learning more about assembly and have only introductory level knowledge.

Although most of the world's developers have long since moved, or are in the process of moving, to high-level and very-high-level languages, a subset of industries and disciplines still necessitates a heavy focus on low-level languages. The cyber-security industry is a prime example in that both defenders and attackers, to some degree, work intimately with dis-assemblers and debuggers.

On the offensive side of the equation, developers focus on writing memory-efficient shellcode in assembly to interact with operating system application programming interfaces in order to exploit a system. Exploit developers may also dis-assemble and debug an application to gain a better understanding of its internal structure so as to identify weaknesses. Dis-assembly and debugging are core to the 'hacker' ethos in that they are the mechanism by which one seeks to explore and understand the inner workings of an application or object.

But this process cuts both ways. Defenders spend a great deal of time dis-assembling and debugging malware to understand how it works and what impact it may or will have on a system. Moreover, fuzzing, dis-assembly, and debugging of legitimate applications is often done with altruistic intention and leads to more secure software.

For many tier-1 and tier-2 security analysts, crossing the bridge to a research position is difficult in that it requires significant familiarity with assembly language concepts. I don't meet a lot of computer science graduates who are fluent in IA-32. Most entrants to the security field find themselves working with various security applications for a couple of years unless they score an entry-level threat research or junior A/V analyst position. These are more niche areas so the opportunities are not as common.
I mentioned in my previous post that I think one of the best ways to understand assembly is by training the eye to recognize assembly routines that translate to common high-level language functionality. The Intel 64 Architecture Manual isn't a particularly thrilling read and I don't spend all day writing programs in NASM, so for me, coding in C++ and then disassembling the executable was a quick way to build visual fluency. It is much easier if you can visually recognize a particular pattern and immediately know what it is doing instead of picking your way through every operation when viewing an application in IDA or OllyDbg. Legibility is obviously much more natural in high-level languages, so this takes some practice. I wanted to go through a few examples in this post.

Loops and Conditional Statements
Loops and conditional statements are among the first concepts a programmer will learn and are found in all programs of moderate complexity. A loop is an iterative or repetitive task designed to carry out some action until a specific condition is met. Loops can be expressed in a number of different ways depending on the logical scenario. It is normal to encounter:
  • If-Then Statements
  • For Loops
  • While Loops
  • Do-While Loops
  • Infinite Loops
It's important to understand that all of these looping constructs rely on a similar underlying conditional decision-making process. A for loop contains an initialized variable, a comparative statement, an iterative counter, and an action that is performed. These shared characteristics become more apparent when such a construct is dis-assembled.

Control Flow
We utilize control flow statements to modify the orderly execution of an application. Control flow commands allow us to move non-sequentially around an application, branching out and executing different routines based on required conditions. High-level languages address a lot of the minutiae relating to the management of this process. When we dis-assemble loops and conditional statements we see that variables, comparators and control flow commands are at the core of these components.

Consider the following example:
for (int i=0; i < 10; i++){
     cout << "Hello";
}
Can be expressed as:
int i = 0;
do {
     cout << "Hello";
     i++;
} while (i < 10);
Assembly
To implement this functionality in IA-32 we utilize the compare (CMP) and jump (JMP) instructions, along with registers and values stored on the stack.

Let's walk through the prior example. At function loc_415F00 we see:
mov [ebp + i] , 0
jmp short loc_415F0F
First we move the value 0 into the variable 'i', which is referenced as an offset from the EBP register. Think of EBP as the base value from which all variables can be accessed inside a function or the application entry point. This is essentially our variable initialization (a global's declaration would live in the .BSS or .DATA segment of the PE file; a local like this one simply gets space in the stack frame).

Now at function loc_415F0F we find:
cmp [ebp + i] , 0Ah
jge short loc_415F30
We see a compare instruction that compares the current value of 'i' (which was just set to 0) to 0Ah. The 'h' indicates this value is expressed in hexadecimal; translated to decimal, it is the number 10.

The second instruction is an example of a conditional jump. The JMP command is an unconditional jump in that no condition is required for it to execute. If EIP points at this instruction it will execute and redirect control flow. A conditional jump requires that certain criteria are met before it redirects EIP. In our example the instruction can be read as: point EIP to the address of loc_415F30 if the first operand (our variable 'i') is greater than or equal to (JGE) the second operand (0Ah). These instructions represent our comparative statement.

We can then proceed to function loc_415F30:
mov eax, [ebp + i]
add eax, 1
mov [ebp + i], eax
jmp loc_415F0F
This set of instructions comprises the iteration required by a for loop. The value of 'i' is moved into the EAX register, incremented by 1, and then written back to the memory offset of our variable 'i'. We then jump back to loc_415F0F so that the comparison and iteration repeat until the CMP instruction finds that [ebp + i] is greater than or equal to 0Ah. Once that condition is satisfied, the conditional jump is taken and EIP moves past the loop body.

The popular dis-assembler IDA does a great job of visually representing this process as seen below:
The thick blue line represents the for loop. The red line indicates that the required condition was not met, while the green line indicates that it was satisfied. Similar sequences may not be as legible in a debugger.

While this is not the most advanced material I hope that it serves to clarify the subject and help you in your journey!

Sunday, September 27, 2015

Using the Modern Honey Network to Detect Malicious Activity

The Modern Honey Network

The Modern Honey Network (MHN) is an amazing honeypot framework created by the great team at ThreatStream. MHN simplifies honeypot deployment and data collection into a central management system. From MHN you can send output to an ELK instance, Splunk or even an ArcSight digestible format. I personally output the data to Splunk because MHN has also made an elegant Splunk application that renders MHN data quite nicely.

MHN comes pre built with deployment scripts for the following honeypots:
  • Dionaea
  • Conpot
  • Kippo
  • Amun
  • Glastopf
  • Wordpot
  • ShockPot
  • Elastichoney
MHN also comes with scripts to install Snort and Suricata for IPS alerting as well as instructions to add additional honeypots to the framework. As mentioned earlier, the deployment scripts are designed to automatically feed their information back into MHN, which is then displayed within the MHN WebUI, ELK, ArcSight, or better yet, Splunk. Update: MHN also comes with p0F, which is not a honeypot but a passive fingerprint scanner.

Installation and Configuration

If you're interested in installing MHN on a server or VM you can follow the instructions by n0where.net. I installed MHN and the associated honeypots in Docker containers for convenience. This effectively isolates and compartmentalizes the services and allows multiple services that run locally on similar ports (such as 80 or 443) to use different "external" ports on the host machine.

To install MHN on Docker start a container with the following command:

docker run -p 10000:10000 -p 80:80 -p 3000:3000 -p 8089:8089 --name mhn  --hostname=mhndocker -t -i ubuntu:14.04.2 /bin/bash
*Note: Port 8089 is specified if you are using the Splunk forwarder. You can choose between 80 and 443. You can also make the host OS's port separate from the docker container's port by using [hostport]:[dockerport], which is convenient for honeypots.
Next, create and run the following script:
#!/bin/bash

set -x

apt-get update 
apt-get upgrade -y 
apt-get install git wget gcc supervisor -y 
cd /opt/ 
git clone https://github.com/threatstream/mhn.git 
cd mhn

cat > /etc/supervisor/conf.d/mhntodocker.conf <<EOF
[program:mongod]
command=/usr/bin/mongod
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=true
autostart=true

[program:nginx]
command=/usr/sbin/nginx
stdout_events_enabled=true
stderr_events_enabled=true
autostart=true
autorestart=true

EOF

mkdir -p /data/db /var/log/mhn /var/log/supervisor

supervisord &

#Starts the mongod service after installation
echo supervisorctl start mongod >> /opt/mhn/scripts/install_mongo.sh

./install.sh

supervisorctl restart all 
Don't forget to reference the host's IP address or hostname as the MHN server's IP during the ./install.sh script (not the docker container's IP address) unless you are using Docker's internal networking for Honeypot to MHN communication.
Unfortunately, due to the interactive nature of MHN's installation, supervisord is manually running in the background instead of as a started service. To restart the container later use:
docker start <containerID> 
docker exec <containerID> supervisord &
To deploy a honeypot go to the 'deploy' section in the MHN WebUI and select your honeypot from the drop-down list. Then, either copy the wget command or the script contents and run either in your honeypot system. 
MHN Deployment Page


Installing honeypots in containers uses a similar but simpler method: create an Ubuntu 14.04 container with your host and internal port mappings, install the required services (such as wget, sshd, python, supervisord, etc.), and run the install command or script from the MHN deployment page above. There are trade-offs to installing honeypots in Docker containers: some honeypots require direct interface access, which Docker supports but at a significant performance cost. So decide how important packet capture is to your installation and choose appropriately. I'm not going to go through the installation instructions for each container, but if needed I can provide guidance.

Run the following command to generate Splunk friendly output:
cd /opt/mhn/scripts/
sudo ./install_hpfeeds-logger-splunk.sh
This will log the events as key/value pairs to /var/log/mhn-splunk.log. This log should be monitored by the Splunk Universal Forwarder.

To create an output for ArcSight run:
cd /opt/mhn/scripts/
sudo ./install_hpfeeds-logger-arcsight.sh
This will log the events as CEF to /var/log/mhn-arcsight.log

Now you have MHN installed with Honeypots feeding information into it. 
MHN Main Page

Configuring Honeypots

Once MHN is up and running, an important question to ask is where to deploy honeypots and for what purpose. There are three primary locations in which a honeypot can be installed:

1. Internally
Internal honeypots provide a low-noise, high-value alarm system that lets you know when someone is attacking your internal servers. In theory, nothing should ever hit your internal honeypots except perhaps vulnerability scanners, which you can whitelist from any alarm. I would recommend deploying Kippo, Conpot, Dionaea, and Amun (although Amun is a new addition to MHN and I haven't had the opportunity to play around with it yet) across your environment, and especially in high-value networks. I would also consider Shockpot and any other honeypot that mimics services you run internally, such as Wordpot or Glastopf.

2. Externally
Opening an IP or specific ports on your firewall to honeypots can let you know who is scanning your perimeter looking for vulnerabilities, although it is difficult to turn external scans into actionable alerts since you will see both legitimate and illegitimate scans against your external addresses.

3. Globally
The third option is to rent a server in the cloud and place MHN honeypots on random public IPs. You can then compare this data to your external MHN data to try to determine who is randomly scanning the internet versus who is specifically targeting you. Although this is a very unscientific way of going about it, it cannot hurt to have more information for investigative purposes. This type of deployment is often used to gather generic threat data that is fed to IP/URL/hash blacklist databases.

Feeding Your Data into Splunk

I won't go into much detail about what reports to create with MHN's external and global data because I think that MHN has done a great job with the MHN Splunk application that I mentioned earlier. The application displays summary data for each of the honeypots on a dashboard home page.
Main MHN Splunk App page
Splunk Conpot Page
Splunk Dionea Page
For a free and open source product, I'm pretty impressed by the work ThreatStream has put into MHN. I hope that they continue this trend.

What To Do With The Data

Honeypot data utilization relies heavily on the context of the data. Internal Honeypot hits are far more important to investigate than external or global hits. That said, some uses include:
  • When something unexpected hits an internal honeypot start an investigation ASAP.
  • A report of your top external honeypot hits to understand who's making the most noise, what they're trying to hit, and how frequent the connections are. This will give you an idea of where to tighten security and how you can tailor your patch management program. It may also provide data points that can be used to narrow threat hunting efforts on the network (such as confirming that targeted attacks on the perimeter were all dropped and that no traffic from the malicious external IPs was allowed through).
  • An alert that takes the top 1000 global/external honeypot source IP addresses in the past month and compares them to your firewall traffic to see if any non-honeypot connection lasted longer than thirty seconds or contained more than 1MB of data.
  • Threshold alerts for when a particular source makes an unusually high number of connections to an external honeypot, such as over 50-5000 (depending on how popular you are).
  • Understand what countries your attackers are originating from to create rules looking for successful authentications/unusual connections from those geographic locations.
  • Identify what usernames attackers use to attempt automated authentication and ban them within your organization.
    • For my honeypot it's root, admin, test, user, MGR, oracle, postgres, guest, ubnt and ftpuser.
  • Identify what passwords authentication spammers use to try to authenticate, to ensure that your password complexity rules meet minimum requirements
    • For my honeypots it's 123456, password, admin, root, 1234, test, 12345, guest, default and oracle.
  • Collect packet samples of potentially malicious traffic for custom IPS signatures.

If you enjoy this post/project please be sure to thank the MHN project and volunteers and to support the Honeynet Project.

P.S. check out this great introduction video by Jason Tro.

Wednesday, September 23, 2015

Empire Post-Exploitation Analysis with Rekall and PowerShell Windows Event Logs

In my last blog entry I explored some post-exploitation possibilities using PowerShell and Matt Graeber's repository of penetration testing tools, PowerSploit. PowerSploit, like PowerTools, is a set of fantastic scripts capable of accomplishing siloed tasks; however, they lack the modularity and pluggability of a complete framework. Today I want to talk about a relatively new entrant to the field: PowerShell Empire.

Although Empire is only a couple of months old, the developers (who also worked on Veil) have built an impressive lightweight management architecture that borrows heavily from projects like PowerSploit and PowerTools to create a "pure PowerShell post-exploitation agent built on cryptographically-secure communications and a flexible architecture." While working with it the past couple of days I have found that it has a familiar workflow for those who are accustomed to Metasploit, making it easy to use for penetration testing Windows environments.

I have used Metasploit for many years, dabbled with Core Impact, and explored Armitage/Cobalt Strike at great length. These are all fantastic frameworks that are incredibly extensible, have strong community support, and have regular development release cycles. But the PowerSploit framework isn't exactly 'built in' to those solutions (Cobalt Strike allows you to import modules, making it perhaps the easiest to extend in terms of PowerShell-based attacks). I've had a few conversations recently with people who are unsure about what framework they should be using, and my answer is always the same: it depends. What you select is largely dependent on financial limitations and objectives, but in the end it is probably best that you get familiar with all of these offerings.

There are a couple of key features in Empire:
  • Invoke-Expression and WebClient download cradles allow you to stay off disk as much as possible. Evading on-access scanners is crucial, and leaving as few forensic artifacts as possible is just good trade-craft.
  • The agent beacons in a cryptographically secure manner and in a way that effectively emulates command and control traffic.
As penetration testers our goal should be to effectively mimic real-world attack methodologies, network traffic and end-point activity to provide clients with a set of indicators of compromise that can be effectively used to identify monitoring gaps. Tools like Empire help to push these ideas forward and reduce the latency between attacker innovation and defender evolution.

In this post I want to demonstrate how to use Empire, conduct basic IR memory analysis (in the same format as my previous article) and, more importantly, highlight some discussion around automated detection at the network and host level.

Red Team

I used Kali (2.0) for my server but I'm sure this would work on most Debian based distributions.
git clone https://github.com/PowerShellEmpire/Empire.git
cd Empire/setup
./install.sh
Simple. To launch Empire, execute the following command from the Empire root directory with the -debug switch enabled to ensure logs are stored for troubleshooting and tracing your activity:
./empire -debug
Empire uses the concept of listeners, stagers and agents. A listener is a network socket instantiated on the server side that manages connections from infected agents. A stager is the payload you intend to deliver to the victim machine. To access your listeners simply type 'listeners' to enter the listeners context, followed by 'info'.

There are a few important values to note here. First, you can specify a KillDate and WorkingHours to limit agent and listener activity based on project limitations. I have certainly worked on a number of engagements in which a client had very specific restrictions about when we could work, so this feature would have proved invaluable.

Second, the DefaultJitter value will help evade solutions that attempt to identify malicious beacon patterns that occur at a constant interval, and imply scripted or machine like activity that obviously stands out from natural human browsing patterns. There is also a DefaultProfile that defines the communication pattern that the agent uses to beacon home, which we will talk more about later.

Third, define variables using the 'set [variablename] [value]' syntax, and activate the listener with the 'execute' command. Type 'list' to verify that the listener is active and a network socket has been opened.

Logically, the next step is to define a payload and select a payload delivery mechanism. Type 'usestager' followed by TAB+TAB to see a list of options.


The two options that are best suited for payload execution are launcher and macro. Launcher will generate a PowerShell one-liner (Base64 encoded or clear text) that automatically sets the required staging key/listener values. Macro creates an office macro with the appropriate callback values to establish a connection with the listener. This can be embedded in an office document and used in social engineering attacks as a payload delivery mechanism. 

To select a stager, type 'usestager [stagername] [listenername]' followed by 'execute'.


In the image above you can see that the listener callback details are embedded in the script, and a (possibly) hard-coded value of /index.asp is used for the agent GET request. The session value for the agent is included. Base64 encoding the script will turn on the '-Enc' PowerShell flag, which decodes the payload at run-time, making investigation and traceability more difficult (again, simulating a real breach).

After executing this one-liner on our victim machine you will receive a callback notification that a new connection has been established. You can observe active agents by typing 'agents' followed by 'list'. 

Now that a connection is established you can type 'interact [agentname]' to hop into an agent session similar to meterpreter. Enter 'usemodule' followed by TAB+TAB to see all available options. You can identify privilege escalation opportunities, move laterally, establish persistence, steal tokens/credentials, install key-loggers and run all of the amazing post exploitation tasks available from the PowerSploit/PowerTools exploitation kits. I don't want to go into detail for each of these modules as it is not the intent of this post. I simply wanted to demonstrate how to get up and running to encourage more offensive-security professionals to embrace this tool. 

Blue Team

My objective for the defensive aspect of this post is to conduct some high level analysis of the tool itself and the general methods it employs. There are a lot of modules available and of course each of these may leave behind specific indicators of attack/compromise but it's not my goal to go into each of them for this post.

Let's take a look at some of the network traffic first.


We see that after the initial stager is executed our first connection is established. On its own this is an extremely poor indicator. GET requests to /index.asp are going to be very common on any network. However, it does appear to be a hard-coded value and it's important to gather as much information as possible. 

After this initial connection a second stage payload is downloaded, key negotiation occurs, an encrypted session is established and the agent starts beaconing. This beacon is characterized by the DefaultProfile variable set for the listener running on the Empire server. 

We can see the beacon issues 3 GET requests within a short period of time during a call home interval. The requests are sent to /news.asp, /admin/get.php, /login/process.jsp and have a generic Mozilla User-Agent.

Again, individually each of these actions appears benign, and alerting on any one of them would generate a significant number of false positives (which is the intention of the framework). If we look at this traffic collectively, however, we could design a network IDS rule that alerts when a connection is made to /index.asp and is followed by at least three GET requests to two or more of the beacon URIs shown above.
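
As a rough illustration of that correlation idea, the Python sketch below flags any source that requested /index.asp and at least two of the beacon URIs. It assumes you can export HTTP requests to a CSV with 'src_ip' and 'uri' columns; those field names (and the file name http_requests.csv) are hypothetical and will differ depending on your proxy or IDS.

# Rough sketch: flag sources that hit /index.asp plus two or more
# of the observed Empire beacon URIs. Field/file names are assumptions.
import csv
from collections import defaultdict

BEACON_URIS = {"/news.asp", "/admin/get.php", "/login/process.jsp"}

requests_by_source = defaultdict(set)
with open("http_requests.csv", newline="") as f:
    for row in csv.DictReader(f):
        requests_by_source[row["src_ip"]].add(row["uri"])

for src_ip, uris in requests_by_source.items():
    if "/index.asp" in uris and len(uris & BEACON_URIS) >= 2:
        print("Possible Empire beacon pattern from", src_ip)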

Moreover, many organizations may issue tight controls around the type of browser that can be installed, and it is unlikely to see a Windows server with Firefox running. If you are a system administrator who has implemented application white-listing and your users should only be using IE, the presence of a Mozilla/Chrome/Opera UA indicates a policy violation (best case scenario) or a manually crafted UA (worst case, possibly indicating malware). In any event, it is possible to at least use this information to profile other infected hosts even if it doesn't serve as the point of initial detection. It's good to have options.

Of course all of this can be customized in Empire, so from a heuristic perspective I think the important take away really is recognizing the pattern itself and not necessarily the specific implementation of that pattern. That is a little bit esoteric so let's try and gather more information from the host.

Dave recently published a two part series on Windows event monitoring. This is a fantastic starting point for most organizations, especially those who are new to SIEM. I still come across a lot of environments that do not have any formal log management program, let alone a properly deployed SIEM with a good alerting framework that has been adequately tuned. For most companies, implementing monitoring for the event IDs Dave highlighted is a good objective. But for those with a more mature security program, I think it's important to start looking at PowerShell events.

PowerShell 2.0 is the default installed version for Windows 7 and Server 2008 R2 (prior versions do not have PowerShell installed) and unfortunately it does not provide much information from a logging perspective.

There are primarily two log files that are accessible:
  • Microsoft Windows PowerShell
  • Microsoft Windows PowerShell Operational
It is also possible to enable analytic and debug logging; however, this is fairly noisy and resource intensive. Open Event Viewer and select View -> Show Analytic and Debug Logs. Then browse Application and Service Logs -> Microsoft -> Windows -> PowerShell and right click Analytic to enable it. I don't think there is a lot of value add here, but it can be useful when debugging a script or troubleshooting a problem.

In the 2.0 version of the Microsoft Windows PowerShell Operational log you will have the following events of interest:
  • 40961 - Console is starting up
  • 40962 - Console is ready for user input
These logs do contain meta information such as the user who performed the event and the computer it was executed on, but it is pretty limited. If you do not use PowerShell in your environment (even small organizations have use cases, so this is unlikely) then alerting on one of these events may be useful, but there is very little contextual data stored in the event log to indicate what was done while the console was accessed.

The Microsoft Windows PowerShell log in version 2.0 of PowerShell will often generate these event IDs:
  • 600 - Provider Life-cycle
  • 400 - Engine Life-cycle
  • 403 - Engine Life-cycle
Again these events are fairly nondescript and provide little information. 

Event ID 5156 from the Windows Security audit log can provide some additional information regarding network connections if we filter effectively to alert on outbound, external connections generated by applications like powershell.exe.


None of these indicators are of any substantial quality, but thankfully Microsoft introduced some improvements in version 3.0 of PowerShell (no additional changes to event logging functionality in version 4.0 or 5.0 unfortunately).

After upgrading to PowerShell version 3.0 you can specify a GPO setting to turn on module logging for Windows PowerShell modules in all sessions on all affected computers. Pipeline execution events for the selected modules will then be recorded in the PowerShell event logs I covered earlier. You can also enable these values interactively, as shown in the image below:
If we now execute our PowerShell Empire one-line stager we will have more event log data to work with. Event IDs 4103 and 800 are recorded and contain a veritable wealth of information that can be used to detect suspicious activity. 

At this point we can launch Rekall, list processes, identify suspect network connections, dump process memory and perform keyword string searches.

This is a similar workflow to my prior post. In large memory dumps it can be difficult (time consuming) to navigate or CTRL+F search through a document for specific keywords. Mark Russinovich's Strings utility can greatly reduce this work effort, but a better solution in my opinion is to write Yara rules and use them in conjunction with Volatility. If you aren't familiar, Yara is a tool designed to help execute binary or textual pattern-match searches. Rules are very easy to write as the syntax is simple to pick up. Save the following to a text file with the .yara extension.

rule example : powershell

{   
     meta:
           Description = "Look for suspect powershell artificats."
           filetype = "MemoryDump"         
           Author = "Greg Carson"
           Date = "09-09-2015"
    
     strings:
           $s0 = "Invoke-" ascii
           $s1 = "-Enc" ascii

     condition:
           2 of them
}


This can then be imported to perform a search in Volatility:
vol.py -f image.raw --profile=Win7SP1x64 yarascan -y yarafilename.yara -p 7860
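
If you prefer to scan a dump outside of Volatility, the yara-python bindings can run the same rule file directly against it. A minimal sketch (the dump file name here is hypothetical):

# Minimal sketch using yara-python (pip install yara-python).
import yara

rules = yara.compile(filepath="yarafilename.yara")
for match in rules.match("powershell_7860.dmp"):   # hypothetical dump name
    print(match.rule, match.strings)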

The workflow demonstrated on the Blue Team side of things isn't necessarily in any particular order. Ideally, a SIEM rule would trigger on a suspicious pipeline execution PowerShell event (with appropriate filters and suppression enabled), which results in an investigation of network traffic prior to and shortly after the event, followed by a more thorough live memory forensic analysis of the system and others it may have had contact with. But this may not be possible depending on the environment you find yourself in. It's important not to rely on any single security solution, as the indicators of attack will often exist in many different places and across disparate entities that solve different problems.

EDIT:
@tifkin_ contacted me to mention an additional tool from Mark Russinovich titled 'Sysmon'. It's a little bit outside the scope of this post, but the tool itself shows a lot of promise; I'd recommend defenders look into it.

Monday, September 21, 2015

Spotting the Adversary with Windows Event Log Monitoring, Part II

Events to Monitor in a Windows Environment

Part II

Recently, in part I of Spotting the Adversary with Windows Event Log Monitoring, I walked you through the first half of the NSA's guide to spotting malicious behaviour using Windows event logs. In part II I will go through the second half of the guide:
  1. AppLocker
  2. System or Service Failures
  3. Windows Update Errors
  4. Kernel Driver Signing Errors
  5. Group Policy Errors
  6. Mobile Device Activities
  7. Printing Services
  8. Windows Firewall

9. AppLocker

Windows 7/2008R2 introduced Microsoft's AppLocker, which Microsoft describes as:
AppLocker is a new feature in Windows Server 2008 R2 and Windows 7 that advances the features and functionality of Software Restriction Policies. AppLocker contains new capabilities and extensions that allow you to create rules to allow or deny applications from running based on unique identities of files and to specify which users or groups can run those applications.
It is a great product for restricting the authorized software on a user's machine or production server. As such, when AppLocker is installed, reports and alerts should be configured that notify you when an AppLocker violation occurs. Recommended alerts include:
  • Alerts on AppLocker violations on production servers outside of change windows
  • Reports on all user AppLocker violations
Table-1: AppLocker Blocked Events
The server alert, of course, depends on the size of your environment and control over the machines. If this is unreasonable, you may want to focus the alert on high-severity servers and create a report for everything else.

10. System or Service Failures

This is a difficult set of alerts/reports to properly utilize. A system or service failure should not happen on a regular basis (that said, something in my personal security environment seems to fail every other day). On the other hand, if a Windows service continues to fail repeatedly on the same machines, it may indicate that an attacker is targeting that service.

Table-2: System or Service Failures
My recommendation is to focus on the absolute highest-severity systems at first: take a report of system/service failures for the first couple of weeks, get a feel for your environment, and then create alerts/reports depending on its stability. One malfunctioning system crashing every few hours or minutes can make your entire SIEM environment noisy and something to ignore rather than respond to.

11. Windows Update Errors

Depending on how your environment's update policy is applied, an ad-hoc or scheduled report of Windows Update errors should be created to identify Microsoft Windows Update failures. Typically my recommendation is at least a report, plus an alert for high-severity assets if constant diligence is a factor.

Table-3: Microsoft Windows Update Errors

12. Kernel Driver Signing Errors

Microsoft introduced kernel driver signing in Windows Vista 64-bit to improve defense against the insertion of malicious drivers or code into the kernel. Typically any indication of a protected driver violation may indicate malicious activity or a disk error and warrants investigation; however, much like system or service failures, I recommend creating a report and monitoring your environment for a few weeks to ensure that there are no repeat offenders that will spam your SIEM alerting engine.

Table-4: Kernel Driver Signing Errors

13. Group Policy Errors

Microsoft's built-in group policy functionality is an amazing way to ensure consistent security and configuration properties are applied to all Windows domain machines within your environment. The inability to apply a group-policy setting should be investigated immediately to determine the source. Depending on the size of your domain and frequency of GPO updates, I may recommend a report over an alert just because one failure across your domain could provide thousands of alerts instantly.

Table-5: Group Policy Errors

14. Mobile Device Activities

Mobile device activities on a system are part of daily operations, so there is no need to report or alert on them directly. However, an abnormal number of disconnects or wireless association status messages may indicate a rogue wifi hotspot attempting to intercept your users' wireless traffic. Therefore, it may be important to log mobile activity traffic and, if your SIEM is capable, to create a threshold or trending alert to notify you when there is an above-average number of mobile events.

Table-6: Mobile Device Activities

15. Printing Services

Microsoft Print Service events are more of a nice-to-have, useful for tracing who printed what in case of an internal data leak. That said, the massive volume of events that a print server generates can make tracking printing events prohibitive (I have seen print servers generate more events than an Active Directory server). If possible, you may wish to offload printer events to a log server instead of a SIEM for historical retention. A simple SNARE/ELK/Syslog-NG solution may work very well.

Table-7: Printing Services

16. Windows Firewall

Windows Firewall modifications or service status changes outside of a change window/Windows update should not take place. As such, an alert indicating a firewall policy modification or status change is recommended for servers and high-severity assets. A report for user activities may be prudent depending on whether your users have administrative access or not (hopefully not!).

Table-8: Windows Firewall
There you have it. The NSA's Spotting the Adversary with Windows Event Log Monitoring is an excellent starting guide for alerts/reports in a new SIEM environment. My only complaint is that it omits Windows Task Scheduler events, which are often used by malicious entities for privilege escalation, malicious code execution, etc. Otherwise, hopefully between the NSA's original guide and this breakdown you have a healthy set of alerts and reports to start with.

Friday, September 18, 2015

Triaging PowerShell Exploitation with Rekall

David recently published his article Spotting the Adversary so I figured I'd continue the trend and focus on Blue Team tactics in this post.

I've spent a fair bit of time in EnCase. They have a great product and a number of solutions to fit most of your needs, but at times it can feel bulky and a little stiff. Moreover, it has an arguably non-intuitive user interface and is an expensive solution that a lot of organizations cannot afford. Volatility is fantastic but for this post I wanted to focus specifically on Rekall. Incident Response and Forensics require a superb understanding of operating system internals, file system structures, and malware behavior patterns, but tools like Volatility and Rekall greatly reduce the barrier to entry for security analysts and service providers.

Rekall is a complete end-to-end memory forensics framework (branched from Volatility itself). The developers of this project wanted to focus on improving modularity, usability, and performance. One of the most significant advantages of using Rekall is that it allows for local or remote live forensic analysis. For this reason it is a core component of the Google Rapid Response tool.

In this article I wanted to document a common offensive tactic and then briefly step through some investigatory steps. This is by no means a complete incident response process, and the scenario assumes the attacker has some level of access to the system or network already.

Red Team

I am a big fan of Matt Graeber's PowerSploit. PowerSploit is essentially a set of PowerShell scripts designed to aid penetration testers during all phases of an assessment. You can use it to inject DLLs, reflectively load PE files, insert shellcode, bypass anti-virus, load Mimikatz and do all sorts of other wonderful and nefarious things. PowerShell has become extremely popular as an attack vector in the last two years as it is a native trusted process, lacks comprehensive security mechanisms (excluding perhaps the most recent versions), and is an extremely powerful scripting language (direct API support for Windows internals).

In our scenario we assume that the attacker has a limited shell and wants to gain more significant access. First let's set up our listener:
I've elected to use the Reverse_HTTPS Meterpreter payload as the PowerSploit Invoke-Shellcode script used later on only supports the Reverse_HTTPS and Reverse_HTTP variants (and as such it is preferable to pass this traffic over SSL).

On the compromised machine we will now execute a PowerShell one-liner to retrieve the Invoke-Shellcode PS1 script from the PowerSploit GitHub page. This is a nice method of retrieval as GitHub is not a suspicious or inherently untrustworthy page and the request is submitted over HTTPS (which can help with evasion at the URL filtering/web traffic content inspection layers). Additional levels of evasion can be employed by encrypting our PS1 script but I'm not going to explore that option for the purposes of this demo.
What's happening here? PowerShell is calling IEX (Invoke-Expression) to generate a new WebClient object and calls the DownloadString function to retrieve the URL. The Invoke-Shellcode script is executed with the switches that will instantiate a connection to our MSF listener. This is all executed in the context of the PowerShell process itself.

In Metasploit we see a session is established and we migrate processes. In this example I am migrating to Notepad. This is not good trade-craft; I'm only doing it to make things more discernible and easy to grasp forensically in the next section.
At this point we can start looking at things from the other perspective.

Blue Team

First let's drop WinPMem on the system. WinPMem is a kernel-mode driver that allows us to gain access to physical memory and map it as a device.
Once this is done we can use Rekall to perform live memory forensics on the system. Navigate to your Rekall install directory and run the following command to mount the WinPMem device:
rekall.exe -f \\.\pmem
This will launch a Rekall interactive console. Rekall has support for a lot of native plugins (which you are likely familiar with if you have ever used Volatility). Typing plugins. [tab] [tab] will print a list of options. To get additional information, type plugins.[pluginname]?. Many plugins also have switches that allow you to filter your query based on specified criteria. To see the list of available switches, type pluginname [tab].

A good starting point is the netscan plugin. 
You should also run pslist when starting your analysis to understand what processes are running. Nothing looks out of the ordinary, but as we continue to browse we see that Notepad has a network socket established:
It is clear that this process has been hooked as Notepad should never establish network connections. Let's use the LDRModules plugin to detect unlinked DLLs.
We see a list of unlinked DLLs, and some stand out as extremely suspicious. So we know that Notepad was injected into by a different process (or spawned by malware using a technique known as process hollowing). Obviously, for this example we know that Meterpreter migrated to this process.

Meterpreter has a fairly standard migration process:
  1. Identify the PID.
  2. Scan for architecture of target process.
  3. Check if the SeDebugPrivilege is set to get handle to target process.
  4. Calculate payload length.
  5. Call OpenProcess() to gain access to Virtual Memory of target process.
  6. Call VirtualAllocEx() to assign PAGE_EXECUTE_READWRITE.
  7. Call WriteProcessMemory() to write the payload into the target process allocated memory region.
  8. Call CreateRemoteThread() to execute the payload.
  9. Close the prior thread from the old Meterpreter session.
Knowing this, we look for additional suspect processes. I like to look for cmd.exe or powershell.exe and dump these processes' memory regions for string analysis. When I ran pslist earlier I identified a PowerShell process, so I ran the memdump pid=[PowerShellPID] plugin. This will produce a DMP file that you can load in your favorite editor.
I was able to find suspicious strings by searching for keywords such as 'IEX', 'Download', and other commands that might be used by an attacker. At this point I am able to extract the full PowerSploit script from memory and have identified that my attacker downloaded a Meterpreter stager. From an Incident Response perspective we have numerous Indicators of Compromise to work with.
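
If paging through the DMP file by hand gets tedious, even a short Python sweep will surface the same keywords and their offsets. This is just a simple sketch; the dump file name and keyword list are examples to adjust for your own case.

# Simple keyword sweep over a raw process memory dump.
keywords = [b"IEX", b"DownloadString", b"Invoke-Shellcode", b"-Enc"]

with open("powershell.dmp", "rb") as f:   # hypothetical dump name
    data = f.read()

for keyword in keywords:
    offset = data.find(keyword)
    while offset != -1:
        print(hex(offset), data[offset:offset + 64])   # offset plus some context
        offset = data.find(keyword, offset + 1)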

Lastly, Process Explorer from Mark Russinovich is a great tool for at-a-glance identification of suspect activity. Let's look at spawned threads and stack information from a normal Notepad process relative to a hooked one:
This is by no means a complex attack and there is much more we can do from a memory analysis perspective, but I think the material covered serves as a gentle introduction to the topic. I'll likely follow up on this post in the future and go into more depth but hopefully this information is enough to get you started.

There are a number of fantastic writers covering these types of topics in the blogosphere and I wanted to link to them here as they are all doing amazing work. I came across some great posts while doing research:
http://holisticinfosec.blogspot.ca/2015/05/toolsmith-attack-detection-hunting-in.html
http://www.tekdefense.com/news/2013/12/23/analyzing-darkcomet-in-memory.html
http://www.behindthefirewalls.com/2013/07/zeus-trojan-memory-forensics-with.html
https://github.com/volatilityfoundation
http://www.rekall-forensic.com/
https://github.com/google/grr
https://github.com/google/rekall/releases/tag/v1.3.2