Catching attackers with go-audit and a logging pipeline

2016.12.25

In November, Ryan Huber and Nate Brown from Slack released go-audit with an introductory post, Syscall Auditing at Scale. Being able to see what’s happening on your hosts is critical for proper defense, and their tool helps make sense of the data that can be collected with auditd. In this post, I’ll go through a quick setup to show how to use go-audit and some other tools to do something interesting.

I’ll set up auditing that gets fed through go-audit and collected by logstash locally on the host. The audit rules will log a message when a honeytrap file is touched. Logstash will look for that log message and core dump the offending process, hopefully capturing some information about the attacker’s TTPs (Tools, Techniques, and Procedures).

Normally, I prefer to use a simple log shipper (e.g. filebeat) that sends logs to an aggregator (e.g. logstash) for ingestion into a SIEM, but this project needs more smarts on the host, so I’m putting logstash directly on it. I also generally don’t like putting detection logic on a host (see my post The Coventry Conundrum of Threat Intelligence), but again, it’s needed here.

Setting up auditd

I’m starting off with a fresh Ubuntu 16.04 server.

apt-get install auditd audispd-plugins

By default it doesn’t do anything:

auditctl -l
# Returns: No rules

Let’s create a secret file to monitor, with a secret in it. Then I’ll tell auditd to monitor any access to this file, and I’ll access it, so it will log a message.

echo "I don't like cats even though I tell people I don't mind them." > /opt/secret.txt
auditctl -a exit,always -F path=/opt/secret.txt -F perm=wra
auditctl -l
# Returns: -w /opt/secret.txt -p rwa
cat /opt/secret.txt

Now, I check /var/log/audit/audit.log and at the end I find:

type=SYSCALL msg=audit(1482696524.188:25): arch=c000003e syscall=2 success=yes exit=3 a0=7ffd52c7894a a1=0 a2=20000 a3=69d items=1 ppid=4133 pid=5465 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=2 comm="cat" exe="/bin/cat" key=(null)
type=CWD msg=audit(1482696524.188:25):  cwd="/root"
type=PATH msg=audit(1482696524.188:25): item=0 name="/opt/secret.txt" inode=785716 dev=fc:00 mode=0100644 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL
type=PROCTITLE msg=audit(1482696524.188:25): proctitle=636174002F6F70742F7365637265742E747874

So we have 4 log lines for this one event, which are difficult to make sense of, especially for an automated alerting system. That’s why go-audit was created, so let’s use it.
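
As an aside, ausearch (from the auditd package we already installed) can at least group these raw lines back together by event ID, but the output still isn’t something you’d want to feed an alerting system:

ausearch -i -a 25
# -a selects the audit event ID (25 in the log above), -i interprets numeric fields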

Setting up go-audit

go-audit requires Go 1.7, and Ubuntu 16.04 only packages 1.6, so we need to install it manually:

wget https://storage.googleapis.com/golang/go1.7.4.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.7.4.linux-amd64.tar.gz

# Add it to our path
echo 'export PATH=$PATH:/usr/local/go/bin' >> /etc/profile
source /etc/profile

# Ensure we're running at least go 1.7
go version
# Returns: go version go1.7.4 linux/amd64

# Set our GOPATH to ~/go
mkdir ~/go
export GOPATH=~/go

Now we install the prerequisites for go-audit and then go-audit itself.

apt-get install build-essential
go get -u github.com/kardianos/govendor

cd ~/go/src/
git clone https://github.com/slackhq/go-audit.git
cd go-audit
~/go/bin/govendor sync  # This command takes a while with no output
go build 

# You should now have a go-audit executable.
./go-audit
# Returns
# A config file must be provided
# Usage of ./go-audit:
#  -config string
#        Config file location

Now that you have a go-audit binary you can put it on other servers without needing to install golang and all that other stuff on them.
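
For example, deploying it to another machine can be as simple as copying the binary over (the destination host below is just a placeholder):

scp go-audit someotherhost:/usr/local/bin/go-audit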

Create the following config file go-audit.yaml in the current directory:

#########################################################################################
# Please note that until this bug https://github.com/spf13/viper/pull/165 is fixed      #
# you _must_ specify all values despite the fact that they talk about having a default. #
# Hopefully this problem with viper goes away soon                                      #
#########################################################################################

# Configure socket buffers, leave unset to use the system defaults
# Values will be doubled by the kernel
# It is recommended you do not set any of these values unless you really need to
socket_buffer:
  # Default is net.core.rmem_default (/proc/sys/net/core/rmem_default)
  # Maximum max is net.core.rmem_max (/proc/sys/net/core/rmem_max)
  receive: 16384

# Configure message sequence tracking
message_tracking:
  # Track messages and identify if we missed any, default true
  enabled: true

  # Log out of orderness, these messages typically signify an overloading system, default false
  log_out_of_order: false

  # Maximum out of orderness before a missed sequence is presumed dropped, default 500
  max_out_of_order: 500

# Configure where to output audit events
# Only 1 output can be active at a given time
output:
  # Appends logs to a file
  file:
    enabled: true
    attempts: 2

    # Path of the file to write lines to
    # The actual file will be created if it is missing but make sure the parent directory exists
    path: /var/log/go-audit.log

    # Octal file mode for the log file, make sure to always have a leading 0
    mode: 0666

    # User and group that should own the log file
    user: nobody
    group: nogroup

# Configure logging, only stdout and stderr are used.
log:
  # Gives you a bit of control over log line prefixes. Default is 0 - nothing.
  # To get the `filename:lineno` you would set this to 16
  #
  # Ldate         = 1  // the date in the local time zone: 2009/01/23
  # Ltime         = 2  // the time in the local time zone: 01:23:23
  # Lmicroseconds = 4  // microsecond resolution: 01:23:23.123123.  assumes Ltime.
  # Llongfile     = 8  // full file name and line number: /a/b/c/d.go:23
  # Lshortfile    = 16 // final file name element and line number: d.go:23. overrides Llongfile
  # LUTC          = 32 // if Ldate or Ltime is set, use UTC rather than the local time zone
  #
  # See also: https://golang.org/pkg/log/#pkg-constants
  flags: 0

rules:
  - -a exit,always -F path=/opt/secret.txt -F perm=wra -k bees_in_my_honey
  # Watch all 64 bit program executions
  #- -a exit,always -F arch=b64 -S execve
  # Watch all 32 bit program executions
  #- -a exit,always -F arch=b32 -S execve
  # Enable kernel auditing (required if not done via the "audit" kernel boot parameter)
  # You can also use this to lock the rules. Locking requires a reboot to modify the ruleset.
  # This should be the last rule in the chain.
  #- -e 1

I set the log file permissions to 0666; if you productionize this you’ll want to tighten that, but for now I don’t want you battling permission issues.

I also added -k bees_in_my_honey to my rule, which will help me alert on this later by giving me a “key” to look for.

Now run go-audit with our config:

./go-audit -config go-audit.yaml
# This will print:
#   Flushed existing audit rules
#   Added audit rule #1
#   Socket receive buffer size: 32768
#   Started processing events
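
While go-audit is running, you can ask the kernel to confirm the rule (with our key attached) was loaded. The exact formatting varies, but it should look roughly like:

auditctl -l
# Returns something like: -w /opt/secret.txt -p rwa -k bees_in_my_honey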

In another window, run cat /opt/secret.txt. Now check /var/log/go-audit.log and you’ll see a line like:

{"sequence":101,"timestamp":"1482700861.088","messages":[{"type":1300,"data":"arch=c000003e syscall=2 success=yes exit=3 a0=7ffff76f7938 a1=0 a2=20000 a3=69d items=1 ppid=12166 pid=12602 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=5 comm=\"cat\" exe=\"/bin/cat\" key=\"bees_in_my_honey\""},{"type":1307,"data":" cwd=\"/root\""},{"type":1302,"data":"item=0 name=\"/opt/secret.txt\" inode=785716 dev=fc:00 mode=0100644 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL"},{"type":1327,"data":"proctitle=636174002F6F70742F7365637265742E747874"}],"uid_map":{"0":"root","1000":"user"}}

We’ll make it pretty with jq:
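
One way to do that, assuming jq is installed (apt-get install jq):

tail -n 1 /var/log/go-audit.log | jq .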

{
  "sequence": 101,
  "timestamp": "1482700861.088",
  "messages": [
    {
      "type": 1300,
      "data": "arch=c000003e syscall=2 success=yes exit=3 a0=7ffff76f7938 a1=0 a2=20000 a3=69d items=1 ppid=12166 pid=12602 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts1 ses=5 comm=\"cat\" exe=\"/bin/cat\" key=\"bees_in_my_honey\""
    },
    {
      "type": 1307,
      "data": " cwd=\"/root\""
    },
    {
      "type": 1302,
      "data": "item=0 name=\"/opt/secret.txt\" inode=785716 dev=fc:00 mode=0100644 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL"
    },
    {
      "type": 1327,
      "data": "proctitle=636174002F6F70742F7365637265742E747874"
    }
  ],
  "uid_map": {
    "0": "root",
    "1000": "user"
  }
}

This still isn’t super pretty, but at least all four log lines are tied together into one message.

Setting up logstash

Install logstash:

apt-get install default-jre
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/5.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-5.x.list
apt-get update && sudo apt-get install logstash
# The logstash-output-exec plugin seems to have disappeared from the default install, so add it
cd /usr/share/logstash
bin/logstash-plugin install logstash-output-exec

Now we create a config file /etc/logstash/conf.d/00-config.conf:

input { file { path => "/var/log/go-audit.log" } }
filter {
  json { source => "message" }
  grok { 
    match => { 
      "messages" => "arch=%{BASE16NUM:arch} syscall=%{INT:syscall} success=%{WORD:success} exit=%{INT:exit} a0=%{BASE16NUM:a0} a1=%{BASE16NUM:a1} a2=%{BASE16NUM:a2} a3=%{BASE16NUM:a3} items=%{INT:items} ppid=%{INT:ppid} pid=%{INT:pid} auid=%{INT:auid} uid=%{INT:uid} gid=%{INT:gid} euid=%{INT:euid} suid=%{INT:suid} fsuid=%{INT:fsuid} egid=%{INT:egid} sgid=%{INT:sgid} fsgid=%{INT:fsgid} tty=%{WORD:tty} ses=%{INT:ses} comm=\\\"%{GREEDYDATA:comm}\\\" exe=\\\"%{GREEDYDATA:exe}\\\" key=\\\"%{GREEDYDATA:key}\\\"" 
    }
  }
}
output {
  if [key] == "bees_in_my_honey" {
      stdout {codec => json}
      exec { command => "logger honeytrap %{pid}"}
  }
}
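
Before running it, you can optionally sanity-check the config syntax; in Logstash 5.x the flag for that should be --config.test_and_exit (run from /usr/share/logstash):

bin/logstash -f /etc/logstash/conf.d/00-config.conf --config.test_and_exit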

For testing, we’ll just run logstash directly (from /usr/share/logstash):

bin/logstash -f /etc/logstash/conf.d/00-config.conf 

Now cat /opt/secret.txt again and you should see something like this on stdout in the window running logstash:

{"syscall":"2","gid":"0","fsgid":"0","pid":"26599","suid":"0","path":"/var/log/go-audit.log","uid":"0","egid":"0","exe":"/bin/cat","@version":"1","host":"ubuntu","sgid":"0","key":"bees_in_my_honey","timestamp":"1482711294.767","ses":"7","auid":"1000","comm":"cat","euid":"0","message":"{\"sequence\":2096,\"timestamp\":\"1482711294.767\",\"messages\":[{\"type\":1300,\"data\":\"arch=c000003e syscall=2 success=yes exit=3 a0=7ffc2fd63938 a1=0 a2=20000 a3=69d items=1 ppid=19348 pid=26599 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts2 ses=7 comm=\\\"cat\\\" exe=\\\"/bin/cat\\\" key=\\\"bees_in_my_honey\\\"\"},{\"type\":1307,\"data\":\" cwd=\\\"/root\\\"\"},{\"type\":1302,\"data\":\"item=0 name=\\\"/opt/secret.txt\\\" inode=785716 dev=fc:00 mode=0100644 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL\"},{\"type\":1327,\"data\":\"proctitle=636174002F6F70742F7365637265742E747874\"}],\"uid_map\":{\"0\":\"root\",\"1000\":\"user\"}}","tags":[],"a0":"7ffc2fd63938","ppid":"19348","a1":"0","fsuid":"0","sequence":2096,"exit":"3","a2":"20000","a3":"69d","@timestamp":"2016-12-26T00:14:54.970Z","success":"yes","tty":"pts2","messages":[{"data":"arch=c000003e syscall=2 success=yes exit=3 a0=7ffc2fd63938 a1=0 a2=20000 a3=69d items=1 ppid=19348 pid=26599 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts2 ses=7 comm=\"cat\" exe=\"/bin/cat\" key=\"bees_in_my_honey\"","type":1300},{"data":" cwd=\"/root\"","type":1307},{"data":"item=0 name=\"/opt/secret.txt\" inode=785716 dev=fc:00 mode=0100644 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL","type":1302},{"data":"proctitle=636174002F6F70742F7365637265742E747874","type":1327}],"arch":"c000003e","items":"1","uid_map":{"0":"root","1000":"user"}}

Again, running it through jq, we can see this is:

{
  "syscall": "2",
  "gid": "0",
  "fsgid": "0",
  "pid": "26599",
  "suid": "0",
  "path": "/var/log/go-audit.log",
  "uid": "0",
  "egid": "0",
  "exe": "/bin/cat",
  "@version": "1",
  "host": "ubuntu",
  "sgid": "0",
  "key": "bees_in_my_honey",
  "timestamp": "1482711294.767",
  "ses": "7",
  "auid": "1000",
  "comm": "cat",
  "euid": "0",
  "message": "{\"sequence\":2096,\"timestamp\":\"1482711294.767\",\"messages\":[{\"type\":1300,\"data\":\"arch=c000003e syscall=2 success=yes exit=3 a0=7ffc2fd63938 a1=0 a2=20000 a3=69d items=1 ppid=19348 pid=26599 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts2 ses=7 comm=\\\"cat\\\" exe=\\\"/bin/cat\\\" key=\\\"bees_in_my_honey\\\"\"},{\"type\":1307,\"data\":\" cwd=\\\"/root\\\"\"},{\"type\":1302,\"data\":\"item=0 name=\\\"/opt/secret.txt\\\" inode=785716 dev=fc:00 mode=0100644 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL\"},{\"type\":1327,\"data\":\"proctitle=636174002F6F70742F7365637265742E747874\"}],\"uid_map\":{\"0\":\"root\",\"1000\":\"user\"}}",
  "tags": [],
  "a0": "7ffc2fd63938",
  "ppid": "19348",
  "a1": "0",
  "fsuid": "0",
  "sequence": 2096,
  "exit": "3",
  "a2": "20000",
  "a3": "69d",
  "@timestamp": "2016-12-26T00:14:54.970Z",
  "success": "yes",
  "tty": "pts2",
  "messages": [
    {
      "data": "arch=c000003e syscall=2 success=yes exit=3 a0=7ffc2fd63938 a1=0 a2=20000 a3=69d items=1 ppid=19348 pid=26599 auid=1000 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts2 ses=7 comm=\"cat\" exe=\"/bin/cat\" key=\"bees_in_my_honey\"",
      "type": 1300
    },
    {
      "data": " cwd=\"/root\"",
      "type": 1307
    },
    {
      "data": "item=0 name=\"/opt/secret.txt\" inode=785716 dev=fc:00 mode=0100644 ouid=0 ogid=0 rdev=00:00 nametype=NORMAL",
      "type": 1302
    },
    {
      "data": "proctitle=636174002F6F70742F7365637265742E747874",
      "type": 1327
    }
  ],
  "arch": "c000003e",
  "items": "1",
  "uid_map": {
    "0": "root",
    "1000": "user"
  }
}

You’ll also see a lone message written to /var/log/syslog as:

Dec 25 17:15:18 ubuntu user: honeytrap 26599

This will make sense in a moment.

Process dump the attacker

Now I’ll make logstash core dump any process that causes auditd to write that “bees_in_my_honey” key.

Install gcore (it comes with gdb):

apt install gdb

Modify the output section of your /etc/logstash/conf.d/00-config.conf:

output {
  if [key] == "bees_in_my_honey" {
      stdout {codec => json}
      exec { command => "gcore -o /tmp/honeydump-%{@timestamp} %{pid}"}
  }
}

Rerun logstash.

If you just cat the file, the process will exit too quickly for us to catch it, so instead open it in a text editor (e.g. vim). You’ll likely end up with a couple of core dumps. Pick one, and since you know which process the core dump came from (it’s in your log message), you can run:

gdb /usr/bin/vim /tmp/honeydump-2016-12-26T00:33:50.495Z.27247

Using that, you can then dig into what the attacker (here, just our vim process) was doing.
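
For instance, once gdb has loaded the core, a few standard commands give a quick picture of the process state, and running strings over the core file itself is a cheap first pass for interesting artifacts:

# Inside gdb:
#   (gdb) bt              # backtrace of where the process was when it was dumped
#   (gdb) info registers  # register state
#   (gdb) info files      # memory mappings captured in the core

# Outside gdb:
strings /tmp/honeydump-2016-12-26T00:33:50.495Z.27247 | less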

Conclusion

In this example, we caught the “attacker” running vim, which isn’t interesting. What you should take away from this is how to use these tools to build this pipeline. To understand how it could actually be useful, imagine you’ve sandboxed a process so it can’t create any new processes. If it starts trying to open files that it shouldn’t, then perhaps an attacker has something like meterpreter running inside it, or perhaps they’ve exploited it in such a way that it can do arbitrary reads.

Or imagine the attacker’s shellcode tries to execute a new process, and is denied by the sandbox. It’s very important that a defender can identify when one of their defenses fails. In this case, the attacker got RCE (got through one defense), but hasn’t gotten out of the sandbox yet. If, as a defender, you wait until the attacker has gotten out of the sandbox before you detect him, then you’re responding too late. You want to not only be alerted when that first defense fails, but also collect information about what was going on. This pipeline allows you to do that.

Productionizing

To run this pipeline under real-world conditions you’ll of course want to run go-audit and logstash as services that survive reboots. You’ll also want to send those core dumps to another box so an attacker can’t stomp on them if he does get full RCE. Also, I’m only core dumping the process, not killing it. You could have a script that dumps the process and then kills it, shuts down the system, or takes some other automated defensive action; a rough sketch is below.
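
As a sketch only (the collector host, paths, and script name are placeholders, and there’s no error handling), logstash’s exec output could call a small script like this with %{pid} instead of calling gcore directly:

#!/bin/sh
# /usr/local/bin/dump-and-kill.sh <pid>
# Dump a suspicious process, ship the core off-host, then kill it.
PID="$1"
STAMP="$(date -u +%Y%m%dT%H%M%SZ)"
CORE="/tmp/honeydump-${STAMP}"

gcore -o "$CORE" "$PID"                      # gcore writes ${CORE}.<pid>
scp "${CORE}".* corecollector:/var/dumps/    # placeholder destination box
kill -9 "$PID"                               # then take the process down

The logstash output section would then use something like exec { command => "/usr/local/bin/dump-and-kill.sh %{pid}" } in place of the gcore command shown earlier.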