In my recent post about osquery, I wrote about collecting telemetry from digital endpoints. This matters because skilled threat actors may be quick to manipulate anything that leaves traces on the system. A centralised logging solution is however a completely different thing to go after: the adversary risks detection if attempting a clean-up there. It is also where the osquery logs should be streamed to in realtime and end up residing, so the initial stages of an infiltration, or the detection of an agent being shut down, will be your first clue that something happened. But what do you do when the incident is a fact after that point?

The limitation of collecting telemetry is that there is more information available on the endpoint than can realistically be collected. Collection priority will typically go to artefacts that are directly relevant to detection and triage. The next step, after careful operational security consideration, is to collect forensic evidence that answers the who, what, where and how.


In the old days, everything was done physically, which had the advantage that going full stealth was actually possible. If you have a sizeable network, however, that is no longer a feasible approach for every compromised endpoint, not if you want to keep up with the operations tempo of the threat actor. It comes down to your own ability to outmanoeuvre the adversary. In addition you will have clusters of computers and virtual machines running on the same hardware. Briefly stated: choose wisely where you do physical acquisition, and prioritize it in an observation phase where operational security is of the utmost importance. For everything else there is the option of doing nothing (observe) or using remote acquisition.

I have been following the progress of Google Rapid Response (GRR) for years. I'm quite disappointed that it never reached a maturity level that was viable in the long run for anyone other than Google. There is a load of impressive functionality, such as the Chipsec support, which is nothing less than awesome and fundamental. However, the install process, the complexity of the system and the lack of compatibility with new versions of e.g. macOS are telling. An opportunity missed.

So if the answer is not osquery or GRR, what do we have left? One way to go is the commercial route, another is the open source path. I tend to favor the latter. As I mentioned earlier, the key is to outmanoeuvre the adversary, right? I still don't understand how anyone thinks a standardised setup, with completely standard process names, file locations and so on, can be practically effective against the more skilled adversaries.

For this post I'll focus on Mozilla InvestiGator (MIG). Stupid name aside:

MIG is a platform to perform investigative surgery on remote endpoints. It enables investigators to obtain information from large numbers of systems in parallel, thus accelerating investigation of incidents and day-to-day operations security.

The rest of this post will require:

  • A console/terminal-enabled environment on macOS or Linux
  • Docker (optional)

There is also a video on MIG's website that is worth a look.

Mozilla InvestiGatoring Your Endpoints

There are several things about MIG that seem reasonable; one is that queries are PGP-signed. MIG is also "sold" as a fast endpoint forensics tool, since it is distributed in nature. MIG, which supports Linux, macOS and Windows, is not yet feature-complete on all platforms, but it is getting close. In remote forensics, the most used features are likely memory and file inspection, and those are fully supported on all platforms.

Like osquery, MIG has an easy way of getting a test environment up and running through Docker. I suggest you set it up now. Once you have installed Docker, run the following commands in a terminal. This will give you an all-in-one server, client and agent environment that you can use for testing.

docker pull mozilla/mig
docker run -it mozilla/mig

Setting up MIG in production is quite an involved process. The last time I did something this complex was when configuring OpenLDAP. There are options almost everywhere, so make sure to pay attention to the details. Configuration mistakes, such as not enabling authentication on the web API, can have severe consequences.

Luckily MIG is really well documented, so I recommend reading up on the documentation and examples at their main site.

The layout of the code and docs is very neat. MIG is a microservice-style architecture with its own API server and a scheduler, which holds the central task list. In addition, what is exposed to the agents and the world is a RabbitMQ server. The general architecture, shown in the concept docs on GitHub, describes this well enough. The diagram below also gives a taste of what to expect from the docs: clean, thorough, old-school.

{investigator} -https-> {API}        {Scheduler} -amqps-> {Relays} -amqps-> {Agents}
                        \           /
                      sql\         /sql
                         {DATABASE}

What follows is a brief walk-through of the install docs applied to Debian 9, along with my notes from the installation.

The first step installs a mix of packages. I did this on one server, but the services (Postgres, RabbitMQ, and the MIG API and scheduler) could be distributed and segmented across several.

apt install golang postgresql nginx
useradd mig -md /home/mig
echo 'export GOPATH="$HOME/go"' >> /home/mig/.bashrc

# fetch the MIG source as the mig user
su mig
go get github.com/mozilla/mig
cd ~/go/src/github.com/mozilla/mig
exit

# add the upstream RabbitMQ repository and install it (as root)
wget -O - 'https://dl.bintray.com/rabbitmq/Keys/rabbitmq-release-signing-key.asc' | apt-key add -
echo "deb https://dl.bintray.com/rabbitmq/debian jessie erlang" > /etc/apt/sources.list.d/bintray.erlang.list
echo "deb https://dl.bintray.com/rabbitmq/debian jessie main" > /etc/apt/sources.list.d/bintray.rabbitmq.list
apt update && apt install erlang-nox rabbitmq-server

For signing the agent and scheduler certificates you can set up a small PKI like the following. The certificates will be used throughout, so make sure they are well protected.

cd ~
mkdir migca
cd migca
cp $GOPATH/src/github.com/mozilla/mig/tools/create_mig_ca.sh .
bash create_mig_ca.sh # some manual work required here

This step probably needs no explanation. The scheduler database is stored in Postgres. Configure it like this (though you may want to use unique passwords per role, unlike in the example):

echo "host all all 127.0.0.1/32 password" >> /etc/postgresql/9.6/main/pg_hba.conf
su postgres
PASSWORD="<set pass here>"
psql -c "CREATE ROLE migadmin;
         ALTER ROLE migadmin WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN PASSWORD '$PASSWORD';
         CREATE ROLE migapi;
         ALTER ROLE migapi WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN PASSWORD '$PASSWORD';
         CREATE ROLE migscheduler;
         ALTER ROLE migscheduler WITH NOSUPERUSER INHERIT NOCREATEROLE NOCREATEDB LOGIN PASSWORD '$PASSWORD';"
psql -c 'CREATE DATABASE mig;'
exit

sudo -u postgres psql -f /home/mig/go/src/github.com/mozilla/mig/database/schema.sql mig

RabbitMQ is a bit of a hassle, but I got a working configuration with the following. Also note that the "https variant" of RabbitMQ is AMQPS on port 5671, while the plaintext protocol is AMQP on port 5672. You will need to keep that distinction in mind later on.

cd /home/mig/migca
cp {rabbitmq.crt,rabbitmq.key,ca/ca.crt} /etc/rabbitmq

PASSWORD_ADMIN="<set pass here>"
PASSWORD_AGENT="<set pass here>"
PASSWORD_WORKER="<set pass here>"
PASSWORD_SCHEDULER="<set pass here>"

rabbitmqctl add_user admin "$PASSWORD_ADMIN"
rabbitmqctl set_user_tags admin administrator
rabbitmqctl delete_user guest
rabbitmqctl add_vhost mig
rabbitmqctl add_user scheduler "$PASSWORD_SCHEDULER"
rabbitmqctl set_permissions -p mig scheduler \
    '^(toagents|toschedulers|toworkers|mig\.agt\..*)$' \
    '^(toagents|toworkers|mig\.agt\.(heartbeats|results))$' \
    '^(toagents|toschedulers|toworkers|mig\.agt\.(heartbeats|results))$'
rabbitmqctl add_user agent "$PASSWORD_AGENT"
rabbitmqctl set_permissions -p mig agent \
    '^mig\.agt\..*$' \
    '^(toschedulers|mig\.agt\..*)$' \
    '^(toagents|mig\.agt\..*)$'
rabbitmqctl add_user worker "$PASSWORD_WORKER"
rabbitmqctl set_permissions -p mig worker \
    '^migevent\..*$' \
    '^migevent(|\..*)$' \
    '^(toworkers|migevent\..*)$'
service rabbitmq-server restart

At this point, copy /usr/share/doc/rabbitmq-server/rabbitmq.config.example.gz to /etc/rabbitmq/rabbitmq.config. Uncomment {ssl_listeners, [5671]}, and add the following to it. You will only be able to connect to the domain specified in migca (not 127.0.0.1 for instance).
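
Since the packaged example is gzipped, the "copy" is really a decompress; something like this should do it:

zcat /usr/share/doc/rabbitmq-server/rabbitmq.config.example.gz > /etc/rabbitmq/rabbitmq.config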

{ssl_options, [{cacertfile,            "/etc/rabbitmq/ca.crt"},
               {certfile,              "/etc/rabbitmq/rabbitmq.crt"},
               {keyfile,               "/etc/rabbitmq/rabbitmq.key"},
               {verify,                verify_peer},
               {fail_if_no_peer_cert,  true},
               {versions, ['tlsv1.2', 'tlsv1.1']},
               {ciphers,  [{dhe_rsa,aes_256_cbc,sha256},
                           {dhe_rsa,aes_128_cbc,sha256},
                           {dhe_rsa,aes_256_cbc,sha},
                           {rsa,aes_256_cbc,sha256},
                           {rsa,aes_128_cbc,sha256},
                           {rsa,aes_256_cbc,sha}]}
]}

Then restart the service and make sure it is running:

service rabbitmq-server restart
netstat -taupen | grep 5671
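
To check that the listener actually presents your certificates, an openssl probe against the AMQPS port works as well. Since fail_if_no_peer_cert is set, the server also wants a client certificate, so feed it one of the certificates generated by create_mig_ca.sh (I'm assuming here that the agent certificate sits in the migca directory, as in my layout):

cd /home/mig/migca
openssl s_client -connect <domain>:5671 -CAfile ca/ca.crt -cert agent.crt -key agent.key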

You have now configured the data stores (Postgres and RabbitMQ) and have your own small PKI CA up and running. The next steps get into the details of compiling and deploying the actual MIG scheduler, and then the API.

su mig
cd $GOPATH/src/github.com/mozilla/mig
make mig-scheduler
exit
cp /home/mig/go/src/github.com/mozilla/mig/bin/linux/amd64/mig-scheduler /usr/local/bin/
mkdir -p /etc/mig
cp /home/mig/go/src/github.com/mozilla/mig/conf/scheduler.cfg.inc /etc/mig/scheduler.cfg
cp /home/mig/migca/{scheduler.crt,scheduler.key,ca/ca.crt} /etc/mig
chown root.mig /etc/mig/*
chmod 750 /etc/mig/*
mkdir /var/cache/mig/
chown mig /var/cache/mig/

Open /etc/mig/scheduler.cfg. Uncomment the TLS section under mq, and make sure it looks like:

usetls = true
cacert = "/etc/mig/ca.crt"
tlscert = "/etc/mig/scheduler.crt"
tlskey = "/etc/mig/scheduler.key"

The data store sections should look like (use the PASSWORD variables from earlier):

[postgres]
  host = "127.0.0.1"
  port = 5432
  dbname = "mig"
  user = "migscheduler"
  password = "$PASSWORD"
  sslmode = "disable"
  maxconn = 10

[mq]
  host  = "127.0.0.1"
  port  = 5671
  user  = "scheduler"
  pass  = "$PASSWORD_SCHEDULER"
  vhost = "mig"

Now you can start the scheduler. Note that this is only for an initial test run; for permanent operation the scheduler should run from a service script (a minimal sketch follows after the commands below).

su mig
nohup mig-scheduler &
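
For permanent operation, a minimal systemd unit along these lines should do. This is my own sketch rather than anything from the MIG docs, and it assumes mig-scheduler stays in the foreground (which the nohup invocation suggests). The same pattern can be reused for mig-api later.

# /etc/systemd/system/mig-scheduler.service
[Unit]
Description=MIG scheduler
After=network.target postgresql.service rabbitmq-server.service

[Service]
User=mig
ExecStart=/usr/local/bin/mig-scheduler
Restart=on-failure

[Install]
WantedBy=multi-user.target

Then enable it with: systemctl enable --now mig-scheduler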

At this point the scheduler should be running fine. To compile and deploy the API:

cd $GOPATH/src/github.com/mozilla/mig
make mig-api
exit
cp /home/mig/go/src/github.com/mozilla/mig/bin/linux/amd64/mig-api /usr/local/bin/
cp /home/mig/go/src/github.com/mozilla/mig/conf/api.cfg.inc /etc/mig/api.cfg

My final API config (/etc/mig/api.cfg) looked like the following. I fronted it with Nginx following the example in the docs. Note that authentication can only be enabled after you have added an investigator's key, so it should be set to "off" for now, and the API should not be exposed to the world at this point. Like everything else in MIG, the config is quite beautiful in terms of the options it provides (such as baseroute):

[authentication]
    enabled = on
    tokenduration = 10m

[manifest]
    requiredsignatures = 2

[server]
    ip = "127.0.0.1"
    port = 8392 
    host = "https://<domain>:<port>"
    baseroute = "/api/v1"
    
[postgres]
    host = "127.0.0.1"
    port = 5432
    user = "migapi"
    password = "$PASSWORD"
    sslmode = disable
    
[logging]
    mode = "stdout"
    level = "info"

Start the MIG API like the following. Here as well: it needs a service script for permanent operation (the unit sketch above can be adapted).

nohup mig-api &

Okay. At this point you are done with the initial server-side setup.

This was where it got interesting. With the API up and running, we can now connect with the client applications. This part I did on macOS, with Homebrew set up in advance. One thing to note: Mozilla hasn't made MIG GPGv2-compatible yet, which was a bit sad.

brew install gpg1
gpg1 --gen-key
gpg1 --edit-key <fpr-from-above>
# create a DSA subkey for signing
gpg1 --export -a <fpr-from-subkey> > /tmp/pubkey.asc
echo 'export GOPATH="$HOME/go"' >> ~/.bashrc

Compile the console client:

go get github.com/mozilla/mig
cd ~/go/src/github.com/mozilla/mig
make mig-console
sudo cp bin/darwin/amd64/mig-console /usr/local/bin

All configuration on the investigator client side is done in ~/.migrc, which is sweet. Mine ended up looking like the following. Take note of the macros; they can be used to select hosts for queries later on.

[api]
    url = "https://<domain>/api/v1/"
[gpg]
    home = "<homedir>/.gnupg"
    keyid = "<PGP Fingerprint>"
[targets]
    macro = allonline:status='online'
    macro = idleandonline:status='online' OR status='idle'

Boot it up!

[Screenshot: mig-console after first launch]

For the first user:

mig> create investigator
  name> Tommy
  Allow investigator to manage users (admin)? (yes/no)> yes
  Allow investigator to manage loaders? (yes/no)> yes
  Allow investigator to manage manifests? (yes/no)> yes        
  Add a public key for the investigator? (yes/no)> yes
  pubkey> /tmp/pubkey.asc
  create investigator? (y/n)> y
  Investigator 'Tommy' successfully created with ID 2

Back on the MIG server, enable authentication in the API by editing /etc/mig/api.cfg and switching enabled = off to enabled = on (then restart mig-api). Verify with: curl https://<api-domain>:<api-port>/api/v1/dashboard.

It's now time to set up the agent. By default it will be compiled for the system you are on, but you can compile for other platforms as well, as shown further down. Before compiling, configure the agent with the investigators' PGP keys:

mkdir /etc/mig/agentkeys # add pubkeys of investigators to this directory

Now do the configuration.

cp conf/mig-agent.cfg.inc conf/mig-agent.cfg
vim conf/mig-agent.cfg
make mig-agent BUILDENV=prod OS=darwin ARCH=amd64

An example agent configuration is shown below. Take note of the AMQPS port (5671, not 5672, which is the plaintext port) used to publish to and consume from the RabbitMQ queues.

[agent]
  relay            = "amqp://agent:$PASSWORD_AGENT@<domain>:5671/" 
  api              = "https://<domain>:8393/api/v1/"
  socket           = "127.0.0.1:51664"
  heartbeatfreq    = "300s"
  moduletimeout    = "300s"
  isimmortal       = on
  ; proxies          = "proxy1:8888,proxy2:8888"
  installservice   = on
  discoverpublicip = on
  refreshenv       = "5m"
  extraprivacymode = off
  ; nopersistmods    = off
  onlyVerifyPubKey = false
  ; tags             = "tagname:tagvalue"

[stats]
  maxactions = 15

[certs]
  ca  = "/etc/mig/ca.crt"
  cert= "/etc/mig/agent.crt"
  key = "/etc/mig/agent.key"

[logging]
  mode    = "stdout" ; stdout | file | syslog
  level   = "info"

To be honest, I had a bit of a headache getting the agent to run with a built-in config, so I ended up copying this config to the endpoints at /etc/mig/mig-agent.cfg. You also need the CA and agent certificates deployed in /etc/mig, and you should use the whitelisting functionality, since the authentication of agents has limited strength at this point. Again, this can be built into the agent binary; I just haven't wrapped my head around it yet.

So, compiling this for both Debian and the newest macOS beta went like this (there is also a script for it at tools/build-agent-release.sh):

make mig-agent BUILDENV=prod OS=darwin ARCH=amd64
make mig-agent BUILDENV=prod OS=linux ARCH=amd64

I also added the investigator pubkeys to the endpoints' /etc/mig/agentkeys directory. This was kind of interesting due to the ACLs. For testing I relaxed this so that only the PGP key signature is verified (the onlyVerifyPubKey setting), but ACLs in this context are quite cool, so make sure to have a look at the ACL section of the configuration docs. Once the ACLs are configured you can tighten that setting again. I saw a lot of debugging at first before I figured this out, since by default the agent will not accept actions signed by a single investigator alone.
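
For reference, an ACL is, as far as I can tell from the docs, a small JSON document that maps investigator key fingerprints to weights and sets a minimum total weight required to run a module (or a default for all modules). Something roughly like the below, where the name and fingerprint are placeholders and the exact schema is described in the configuration docs:

{
  "default": {
    "minimumweight": 2,
    "investigators": {
      "Tommy": {
        "fingerprint": "<PGP fingerprint of the investigator>",
        "weight": 2
      }
    }
  }
}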

That was pretty much it. Back on the investigator endpoint, you should also compile the rapid query binary, mig, in addition to the console client.

cd $GOPATH/src/github.com/mozilla/mig
make mig-cmd
sudo cp bin/darwin/amd64/mig /usr/local/bin

This enables queries like the following (remember the macros in .migrc):

mig file -e 20s -path /var/log -name "^syslog$" -maxdepth 3

The result looks like the following: a query for a system file on both macOS 10.14 and a Debian 9 server.

[Screenshot: results of the mig file query against macOS 10.14 and Debian 9]
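
The macros from .migrc can also be used as targets here. If I read the command-line docs right, the -t flag takes either a raw target query or a macro name, so restricting the same file query to online agents would look something like this:

mig file -t allonline -e 20s -path /var/log -name "^syslog$" -maxdepth 3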

Conclusions

So that was my initial go at Mozilla InvestiGator. At this point I can't praise it enough for the granular possibilities and the very Linux-y architecture. During setup I had more or less no issues that weren't due to my own lack of experience with MIG, and everything worked, surprisingly even on the newest macOS beta. That is robust. Other than that, all features are as advertised, except perhaps for the GnuPG v1-only support, but it seems like they are working on that as well (the ticket is old, though).

Compared to other solutions that I've seen in action, this is the first product that resonates with my workflow. It's rapid and it integrates easily. I also look forward to having a look at Mozilla's proposed "MIG Action Format".

When it comes to cloaking and customisation, this is also the first tool I've seen that provides some freedom of movement. I didn't detail that in my notes above, but more or less everything can be customised, so a threat group would have to put in some work against you to identify the agent. What I saw had real operational security potential.

MIG has its place alongside osquery, and I am sure that the two in combination could provide a capable cross-platform hunting and DFIR toolset.

I will surely follow MIG's progress going forward, and there really is no reason you shouldn't either.

Thanks for reading!