My Home Setup Helped Me Understand Cybersecurity Flow

George Chen
The PayPal Technology Blog
16 min read · Apr 1, 2020
A real-time threat map that you can build for your home internet network

I work as a Cybersecurity Engineer at PayPal. My responsibilities include Security Incident Response and Security Engineering, for which I create security alerts and help manage threat detection.

To get hands-on experience with the end-to-end flow of cybersecurity, I set up a defensive monitor for my home network. I made a small-scale version of a typical corporate security framework with minimal hardware and free software.

With a budget of $50, I set up monitoring alerts and dashboards, and started cybersecurity threat hunting on my home network. It helped me gain a high-level appreciation of security architecture, and I hope this setup does the same for you.

My setup includes the following components:

1. Tapping network traffic
2. IDS
3. NetFlow
4. Firewall & RSyslog
5. Splunk alerts and dashboard
6. DNS Server & Sinkhole
7. Automated Blocks based on Splunk Alerts
8. Threat Intelligence Feed
9. Threat Map

Hardware

I purchased a passive network TAP. My router did not support SPAN, so I chose the Throwing Star LAN Tap. It shipped to me for under $8.

I happened to have a Raspberry Pi 3b (I’m going to call it raspi) that I paid about $35 for. I dug out three old Ethernet LAN cables, which would otherwise cost about a dollar each.

A USB-to-Ethernet adapter would cost about $4.

Messy, but it works
image from https://greatscottgadgets.com/throwingstar/

J1 to Modem
J2 to Router
J3 to Raspi Ethernet
J4 to Raspi via USB-to-Ethernet Adapter

Since I also wanted internet access on this raspi, I connected it to my home wifi via its wireless adapter.

Operating System

I went with Raspbian OS. You can set up your Raspbian OS with this guide.

NetFlow Collector

After weighing NetFlow against tcpdump, I decided to forward flow records to Splunk and use packet capture as needed. I played around with a few NetFlow collectors and went with nProbe. It was also convenient to use ntopng as the NetFlow analyzer.

On a fresh instance of Raspbian OS, here are the steps I took to set up nProbe and ntopng:

sudo apt-get update
sudo apt-get install ntopng
wget http://packages.ntop.org/apt/ntop.key
# su from here onwards since we are setting things up
sudo su
apt-key add ntop.key
apt-get update
# nprobe comes from the ntop package repository (packages.ntop.org); make sure it is configured as an apt source first
apt-get install nprobe

To test the setup of nProbe and ntopng, I did the following test on two separate terminal instances:

ntopng -i tcp://127.0.0.1:1234 -w 3003
nprobe --zmq "tcp://*:1234" -i eth0 -n none -b 2

NetFlow Analyzer

Then I headed over to http://raspi:3003 (the web port set with -w above) to view. I logged in using the default admin credentials and was asked to change them.

All IP addresses in this post have been painstakingly scrambled
One of the many dashboards available on ntop
This dashboard has a friendly GUI for BPF packet captures

Once I completed testing, I moved on to check out how Splunk would receive these logs.

Splunk

I wanted to load a Splunk server instance on the raspi, but the ARM version of the Splunk Stream app is unavailable to customers. This is a bummer because Splunk Stream supports parsing and ingesting packet captures (pcap).

Instead, I opted to use the raspi as a log collector and forwarder, and have my MacBook run Free Splunk/Splunk Dev. You could use Splunk Web or a dedicated box in place of a laptop.

Because there is no ARM support for the Splunk Stream Forwarder, I used the universal forwarder instead.

https://www.splunk.com/en_us/download/universal-forwarder.html#tabs/linux

So here’s a quick recap of the setup:

Raspi => Splunk Forwarder
MacBook => Splunk Server (60 days of Enterprise then Splunk Free. Or Splunk Dev)

Here are the instructions to set up the forwarder:

cd /home/pi/Downloads
# extract into /opt; the archive unpacks to /opt/splunkforwarder
tar xvzf splunkforwarder-<…>-Linux-x86_64.tgz -C /opt
# start the forwarder (accept the license when prompted)
/opt/splunkforwarder/bin/splunk start
# configure the forwarder to run at boot
/opt/splunkforwarder/bin/splunk enable boot-start
# point the forwarder at the Splunk server's (my MacBook's) IP address
/opt/splunkforwarder/bin/splunk add forward-server <host name or IP address>:9997

I launched Splunk on my MacBook browser to make the following configurations:

Settings -> Forwarding and Receiving -> “Receive data” -> Enable 9997 as a new receiving port

You might need to create a local firewall rule if you are unable to receive data on that port.
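If data still isn't arriving, a quick connectivity check narrows things down. A minimal sketch (replace the placeholder with your Splunk server's address):

# from the raspi: confirm the Splunk server is reachable on the receiving port
nc -vz <splunk server IP> 9997
# on the MacBook: confirm Splunk is actually listening on 9997
sudo lsof -iTCP:9997 -sTCP:LISTEN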

Forwarding NetFlow

Next, I forwarded some NetFlow records to Splunk.

# creating a designated directory
mkdir -p /home/pi/splunk/netflow
nprobe -T "%IPV4_SRC_ADDR %L4_SRC_PORT %IPV4_DST_ADDR %L4_DST_PORT %PROTOCOL %IN_BYTES %OUT_BYTES %FIRST_SWITCHED %LAST_SWITCHED %HTTP_SITE %HTTP_RET_CODE %IN_PKTS %OUT_PKTS %IP_PROTOCOL_VERSION %APPLICATION_ID %L7_PROTO_NAME %ICMP_TYPE" -n none -b 2 -i eth0 --json-labels --dump-path /home/pi/splunk/netflow --dont-drop-privileges

The above nProbe command collects flows in the specified format, and logs them in the stipulated directory. The directory then gets forwarded to and ingested by Splunk. For the full list of elements, check out: https://www.ntop.org/guides/nprobe/flow_information_elements.html.

I searched for index=pi sourcetype=flow, and within seconds, I saw entries in Splunk. With log forwarding verified, I set up a cron job to keep the flow collection running in case nProbe crashes due to memory issues, as happened when I first attempted this with Bro IDS and the ELK Stack.

There are two interfaces, eth0 and eth1, that I wanted to cover, and while I tried running a single instance of nprobe on both of them, I wasn’t successful. A configuration here suggests that compiling with PF_RING support would enable multiple interfaces, but I went ahead with two separate instances of nprobe.

mkdir -p /home/pi/splunk/netflowout
mkdir -p /home/pi/splunk/netflowin
cp /usr/local/bin/nprobe /usr/local/bin/probeth1
cp /usr/local/bin/nprobe /usr/local/bin/probeth0

Here’s the basic bash script I created at /home/pi/nprone.sh to run both instances.

#!/bin/bash
probeth0_pid="$(ps -eo 'pid,comm' | grep probeth0 | awk '{print $1}')"
probeth1_pid="$(ps -eo 'pid,comm' | grep probeth1 | awk '{print $1}')"

if [ ! -n "$probeth0_pid" ]
then
    echo "probeth0 is not running"
else
    kill -s TERM $probeth0_pid
    echo "killed probeth0: $probeth0_pid"
fi
echo "restarting probeth0"
/usr/local/bin/probeth0 -T "%IPV4_SRC_ADDR %L4_SRC_PORT %IPV4_DST_ADDR %L4_DST_PORT %PROTOCOL %IN_BYTES %OUT_BYTES %FIRST_SWITCHED %LAST_SWITCHED %HTTP_SITE %HTTP_RET_CODE %IN_PKTS %OUT_PKTS %IP_PROTOCOL_VERSION %APPLICATION_ID %L7_PROTO_NAME %ICMP_TYPE" -n none -b 2 -i eth0 --json-labels --dump-path /home/pi/splunk/netflowout --dont-drop-privileges -G

if [ ! -n "$probeth1_pid" ]
then
    echo "probeth1 is not running"
else
    kill -s TERM $probeth1_pid
    echo "killed probeth1: $probeth1_pid"
fi
echo "restarting probeth1"
/usr/local/bin/probeth1 -T "%IPV4_SRC_ADDR %L4_SRC_PORT %IPV4_DST_ADDR %L4_DST_PORT %PROTOCOL %IN_BYTES %OUT_BYTES %FIRST_SWITCHED %LAST_SWITCHED %HTTP_SITE %HTTP_RET_CODE %IN_PKTS %OUT_PKTS %IP_PROTOCOL_VERSION %APPLICATION_ID %L7_PROTO_NAME %ICMP_TYPE" -n none -b 2 -i eth1 --json-labels --dump-path /home/pi/splunk/netflowin --dont-drop-privileges -G

I set up the crontab to restart the service every now and then in case of memory-related errors.

chmod +x /home/pi/nprone.sh
crontab -e
15 */12 * * * /home/pi/nprone.sh

I set it to restart every 12 hours for starters, and nProbe to run as a daemon with the parameter -G. You could also do a check to see if the process was running before doing a kill -HUP to avoid interruptions.
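A minimal sketch of that check (this assumes the daemon reloads cleanly on SIGHUP):

# reload probeth0 in place if it is already running; otherwise fall through to a fresh start
if pgrep -x probeth0 > /dev/null
then
    # ask the running daemon to reload instead of killing it
    pkill -HUP -x probeth0
else
    echo "probeth0 is not running"
    # ...start probeth0 as in the script above
fi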

With the logs in the designated directory, I set up the forwarding configurations to Splunk. I added forwarding to the same sourcetype since I can differentiate by source if I want (for example, source="/home/pi/splunk/netflowin/*").

/opt/splunkforwarder/bin/splunk add monitor /home/pi/splunk/netflowin -index pi -sourcetype flow
/opt/splunkforwarder/bin/splunk add monitor /home/pi/splunk/netflowout -index pi -sourcetype flow

If you want, you can verify your configuration changes at /opt/splunkforwarder/etc/system/local/inputs.conf.
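The two add monitor commands above should translate into stanzas roughly like this:

[monitor:///home/pi/splunk/netflowin]
index = pi
sourcetype = flow

[monitor:///home/pi/splunk/netflowout]
index = pi
sourcetype = flow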

Now that I shipped my logs to Splunk on my MacBook, I proceeded with the field extractions.

1. On Splunk, search for index=pi sourcetype=flow.

2. Click on “Extract New Fields”

3. Select Delimiter.

4. Name the fields according to the screenshot below.

5. Save as a transformation report — I saved under the name “REPORT-flow”

6. Make this parsing definition persistent:

# on the workstation
nano /Applications/Splunk/etc/system/local/props.conf
[flow]
REPORT-flow = flow
# restart Splunk on the workstation for the change to take effect
/Applications/Splunk/bin/splunk restart

Once done, I tried out a base search.

index=pi sourcetype=flow src!=0.0.0.0 dest!=0.0.0.0 dest!=255.255.255.255
| rex field=src "(?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| rex field=src_port "(?<source_port>\d+)"
| rex field=dest "(?<dest_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| rex field=status "(?<http_status>\d\d\d)"
| stats min(first_switched) as first_switched, max(last_switched) as last_switched, values(source_port) as src_port, values(dest_port) as dest_port, values(protocol_l7) as protocol_l7, values(http_status) as http_status, count by src_ip, dest_ip
| convert ctime(first_switched)
| convert ctime(last_switched)
| eval dest_port=mvindex(dest_port,0,9)
| sort -count

DNS

DNS resolution information is useful for investigating security incidents. There are many methods you could use to get this information. I listed a few here:

DNS Resolution
1. Inbuilt DNS lookup | lookup dnslookup clientip. This uses an external lookup script to perform a DNS lookup.
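A minimal sketch of applying it to the flow data (field names follow the extraction above; dnslookup maps clientip to clienthost):

index=pi sourcetype=flow dest!=0.0.0.0
| rex field=dest "(?<clientip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| lookup dnslookup clientip OUTPUT clienthost
| stats count by clientip, clienthost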

Passive DNS
2. Since I have NetFlow running, I can capture passive DNS and store it in a lookup via a scheduled job. Here is a base search to get that information:

index=pi sourcetype=flow src!=0.0.0.0 dest!=0.0.0.0 dest!=255.255.255.255 
| rex field=src "(?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| rex field=http_site "(?<domain>\w+\.[^\d]+)"
| eval domain = if(domain=src_ip,null(),domain)
| stats values(domain) as domain, max(_time) as last_seen, count by src_ip
| convert ctime(last_seen)
| where isnotnull(domain)
appending results to a lookup
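That append can be done by ending the scheduled search with outputlookup; a sketch, with passive_dns.csv as a placeholder lookup name:

... (base search above)
| outputlookup append=true passive_dns.csv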

3. If you aren’t running NetFlow, you can also run a separate service to log Passive DNS: https://isc.sans.edu/forums/diary/Running+your+Own+Passive+DNS+Service/24784/

OpenDNS
4. I’ve been using OpenDNS for some time now because it gives me an added layer of control over my network. Since I’m on a dynamic IP address, I need to keep my registered IP up to date, and I didn’t like the idea of running the Dynamic IP updater on my Mac.

Since my Mac isn’t running 24x7, I thought it would make sense to have the raspi update my IP address instead. Setup is straightforward:

apt-get install ddclient
# installation process prompts for login credentials
# fqdn would be requested, and that is the network label on OpenDNS itself
# check that /etc/ddclient.conf has the following configuration:
protocol=dyndns2
use=web, web=myip.dnsomatic.com
ssl=yes
server=updates.opendns.com
login=opendns_username
password='opendns_password'
opendns_network_label # your OpenDNS network label
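One way to sanity-check the updater before relying on the daemon is a single foreground run:

# run ddclient once in the foreground with verbose output
ddclient -daemon=0 -debug -verbose -noquiet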

DNS Server Logs & Sinkhole
5. OpenDNS is great, but I also wanted a local DNS server to do local DNS blocks. There’s an amazing project, Pi-hole, that is dedicated to this. It has a built-in DHCP and DNS server, blacklist management, sinkhole, and more. Setting up is easy:

wget -O basic-install.sh https://install.pi-hole.net
bash basic-install.sh

And as I navigated to the web interface, I was impressed with this dashboard:

The following changes were made:

  • Static LAN IP address for raspi
  • Point router DNS server settings to raspi
  • Point Pi-hole upstream DNS server settings to OpenDNS
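For the first item, a static address on Raspbian can be set in /etc/dhcpcd.conf. A sketch, where the interface and addresses are example values for a typical 192.168.1.0/24 LAN:

# /etc/dhcpcd.conf (example values; adjust to your LAN)
interface wlan0
static ip_address=192.168.1.2/24
static routers=192.168.1.1
static domain_name_servers=127.0.0.1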

Forward to Splunk:

/opt/splunkforwarder/bin/splunk add monitor /var/log/pihole.* -index pi -sourcetype dns

And here is a base search to get started with:

index=pi sourcetype=dns NOT "forwarded" NOT "127.0.0.1" 
| rex "\ (?<domain>[^/][0-9a-zA-Z.-_]+\.[0-9a-zA-Z.-_]+)\ "
| rex "(?<ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| rex "\]\:\ (?<action>[0-9a-zA-Z\[\]]+)\ "
| eval internal=if(action="query[A]" OR action="query[AAAA]",ip,"")
| eval external=if(action="query[A]" OR action="query[AAAA]","",ip)
| stats values(external) as external, values(internal) as internal, values(action) as action, count by domain
| eval ip=mvindex(ip,0,9)
| sort -count

Automated DNS Blocklist From Splunk
Having set up several Splunk alerts, I wanted to automatically block the domains they catch at my DNS server.

I used a quick-and-dirty way to get started:

  1. Create a Github repository and clone it locally.
  2. Create a bash script here: $SPLUNK_HOME/bin/scripts/git.sh with the following contents:
    #!/bin/bash
    # $SPLUNK_ARG_8 is the gzipped results file Splunk passes to alert scripts;
    # run this from (or cd into) the local clone of the blocklist repo
    splunk=$(gzcat $SPLUNK_ARG_8 | grep '",' | cut -d '"' -f 2)
    echo "$splunk" >> blocklist.txt
    git add blocklist.txt
    git commit -m "$splunk"
    git push -u origin master
  3. Grant your bash script the appropriate permissions, i.e. chmod +x git.sh
  4. Create your alert and set the Trigger Actions to Run a script. Specify your script name as git.sh.
Sample alert to automatically block suspicious domains

5. Verify that your Git repo is being updated by the triggered action:

6. Navigate to your Pi-hole admin portal via Settings => Blocklists, or use the direct link http://raspi/admin/settings.php?tab=blocklists, to add the raw link of your Git file. Save and update your settings.

The highlighted entry is the new dynamic feed based on Splunk results

Now, verify that the block is active:

Connection is terminated at the local DNS server

And now I have an automated blocklist from Splunk alerts!

IDS

I moved on to the next piece of the puzzle — Intrusion Detection System. I chose the open-source Snort project.

apt-get install snort
touch /etc/snort/rules/white_list.rules /etc/snort/rules/black_list.rules /etc/snort/rules/local.rules /etc/snort/sid-msg.map
# if it is not already added
groupadd snort && useradd -g snort snort
# configure Snort to process using community rules
wget https://www.snort.org/rules/community -O ~/community.tar.gz
tar -xvf ~/community.tar.gz -C ~/
cp ~/community-rules/* /etc/snort/rules

I did a test run. Change your interface accordingly.

snort -c /etc/snort/snort.conf -i eth1 -T
# "Snort successfully validated the configuration!"

Test that logs are being generated.

snort -c /etc/snort/snort.conf -i eth1 -d -A fast

And logs were generated at /var/log/snort/*.

To persist this threat detection service, I used the same bash-script approach and ran Snort as a daemon. Here is the code snippet for /home/pi/snorn.sh:

#!/bin/bash
snort_pid="$(ps -eo 'pid,comm' | grep snort | awk '{print $1}')"
snort_folder="$(ls /var/log/snort/ | grep '^[0-9]\+$' | sort -n | tail -n1)"
if [ ! -n "$snort_pid" ]
then
    echo "clean"
else
    /usr/sbin/snort -r /var/log/snort/$snort_folder/snort.log.* > /var/log/snort/$snort_folder/snort-r
    echo "wrote /var/log/snort/$snort_folder/snort-r"
    kill -s TERM $snort_pid
    echo "killed $snort_pid"
fi
datevar=$(date +%s)
mkdir -p /var/log/snort/$datevar
/usr/sbin/snort -c /etc/snort/snort.conf -i eth1 -d -A fast -D -l /var/log/snort/$datevar

For this example, I wrote a manual log rotate schedule above. If you are wondering how Splunk indexes rotated logs, this article might be helpful.

I set up the crontab:

chmod +x /home/pi/snorn.sh
crontab -e
0 */6 * * * /home/pi/snorn.sh

And added /var/log/snort to Splunk monitoring.

/opt/splunkforwarder/bin/splunk add monitor /var/log/snort -index pi -sourcetype ids
/opt/splunkforwarder/bin/splunk restart

To run Snort on two interfaces, I needed to run two separate instances of Snort.
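A sketch of that second instance, mirroring the cron script but watching eth0 and writing to a separate log directory (the snort-eth0 path here is just an example):

datevar=$(date +%s)
mkdir -p /var/log/snort-eth0/$datevar
/usr/sbin/snort -c /etc/snort/snort.conf -i eth0 -d -A fast -D -l /var/log/snort-eth0/$datevar

If you log to a separate directory like this, remember to add it as another Splunk monitor.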

On the Search & Reporting App on Splunk, I get results when I query index=pi sourcetype=ids. It takes about three minutes or so to receive the forwarded logs when I open my MacBook from a closed state.

Here is a base search for IDS alert logs — this time, I’ll regex-extract on-the-fly.

index=pi sourcetype=ids NOT "0.0.0.0:68 -> 255.255.255.255:67" source="/var/log/snort/*/alert"
| rex "\}\ (?<src>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}).+\ \-\>"
| rex "\:(?<src_port>\d+)\ \-\>"
| rex " \-\>\ (?<dest>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| rex " \-\>\ \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:(?<dest_port>\d+)"
| rex "\[\*\*\].+\]\ (?<threat>.+)\ \[\*\*\]"
| rex "\[Classification\:\ (?<class>.+)\]\ \[Priority"
| rex "\[Priority\:\ (?<priority>\d)\]"
| rex "\{(?<protocol>\w+)\}"
| eval srcport = src + ":" + src_port
| stats values(srcport) as srcport, max(_time) as time, values(protocol) as protocol, values(class) as class, values(threat) as threat, count by dest_port
| convert ctime(time)
| sort -count

We see some enumeration scans on UDP port 1900, the port for Universal Plug and Play (UPnP). We also see some NTP exploit attempts. My router’s port forwarding rules are already disabled, but for good measure, I disabled UPnP altogether. I also saw a bunch of nasty, fairly recent router RCE exploit attempts.

Here are the Snort logs:

index=pi sourcetype=ids NOT "0.0.0.0:68 -> 255.255.255.255:67" source="/var/log/snort/*/snort-r"
| rex "\ (?<src>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}).+\ \-\>"
| rex "\-\>\ (?<dst>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| rex "\:(?<src_port>\d+)\ \-\>"
| rex "\-\>\ \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:(?<dst_port>\d+)"
| rex field=src "(?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| rex field=dest "(?<dest_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| rex "(?<protocol>\w+)\ TTL\:(?<ttl>\d+)\ "
| stats dc(dst_port) as dc_dst_port, values(dst_port) as dst_port, values(src_port) as src_port, values(protocol) as protocol, values(ttl) as ttl, min(_time) as min_time, max(_time) as max_time, count by src
| eval dst_port=mvindex(dst_port,0,4)
| eval src_port=mvindex(src_port,0,4)
| convert ctime(min_time)
| convert ctime(max_time)
| sort -count
results from the base search above

Splunk Alerts

I set up an example alert based on the number of destination ports accessed within a predefined amount of time. Saving the search automatically creates a scheduled alert. Note that alerts are only available on the licensed Splunk Enterprise version.
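Something along these lines, scheduled over a short window, would work as the alert search; the threshold is a placeholder to tune:

index=pi sourcetype=flow src!=0.0.0.0 dest!=0.0.0.0 dest!=255.255.255.255
| rex field=src "(?<src_ip>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
| stats dc(dest_port) as distinct_ports by src_ip
| where distinct_ports > 50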

Set up your mail server configurations before this: Settings => Server settings => Email Settings

If you use Gmail, enable the Less secure apps setting. Dedicate a dummy account to this.

Alert triggered to my inbox

You may also need to review and apply changes before you can enable Splunk Alerts:

Settings => Monitoring Console => set up your instance

Router Firewall & RSyslog Monitoring

Routers come with an inbuilt firewall, and the one I’m using can forward its logs to a remote syslog server. I pointed it at the raspi and configured the raspi to receive those logs by uncommenting the following lines in /etc/rsyslog.conf:

module(load="imudp") 
input(type="imudp" port="514")

Create the firewall log file:

touch /var/log/router.log

Configure the writing of logs by creating a new file /etc/rsyslog.d/router.conf:

$template NetworkLog, "/var/log/router.log"
:fromhost-ip, isequal, "192.168.1.1" -?NetworkLog
& stop

where 192.168.1.1 is your router's IP address. Restart the rsyslog server:

service rsyslog restart

Create a log rotation with a new file /etc/logrotate.d/router:

/var/log/router.log {
    rotate 10
    daily
    notifempty
    compress
    postrotate
        invoke-rc.d rsyslog rotate > /dev/null
    endscript
}

Forward logs to Splunk:

/opt/splunkforwarder/bin/splunk add monitor /var/log/router.* -index pi -sourcetype firewall

And here is a base search:

index=pi sourcetype=firewall SRC!=192.168.1.* 
| rex "kernel\:\ (?<action>\w+)\ "
| search NOT SPT IN ("80","443")
| where action="ACCEPT"
| stats count values(action) as action, values(SPT) as SPT by SRC, DST
| sort -count

I also increased the router's log verbosity from level 6 to 7 (debug) by running the following command on the router over SSH:

nvram set log_level=7

Operationalizing the Monitor

I did a simple threat modeling exercise: I mapped out the IoT devices connected to my network and ran some nmap scans against them to understand the prospective attack surface.
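The scans were nothing fancy; something like this per device is enough to see what is listening (the subnet and host address are examples):

# discover live hosts on the LAN
nmap -sn 192.168.1.0/24
# probe a single device for open ports and service versions
nmap -sV -p- 192.168.1.23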

One of my interesting finds was from single outbound calls to these non-existent DGA-like domains:

8688deo3yorgtap.mujt32-l1luaj9mrkr9s6rcfdg[.]com
98p-z6dz8fd405ae379wcgw8bu.6vw61ko2r2pu[.]com
9e9t6k39y2qk9.blq2mno6te-6u94cn65da4jj1[.]com
btwap9t695rkx-6.myz8qelmx[.]com
ctwgqflyyv.01wnov4dwdxbawbto-te[.]com
esl8rxs119mtx.3ke8b1ok5yl-981efahx8th[.]com
hlpt4kilgxrbyx2750uj3d2w.68-i5jjf4[.]com
jp8v24e6t93hogwn.soegle4p2b4aqt8v24v12-al9s[.]com

I installed the URL Toolbox app on Splunk to help me identify related domains. A sample search along those lines is sketched below.
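This sketch assumes URL Toolbox's ut_shannon_lookup and uses an arbitrary entropy threshold to surface high-entropy, DGA-like names:

index=pi sourcetype=dns
| rex "\ (?<domain>[^/][0-9a-zA-Z.-_]+\.[0-9a-zA-Z.-_]+)\ "
| lookup ut_shannon_lookup word as domain
| where ut_shannon > 3.5
| stats count by domain, ut_shannon
| sort -ut_shannon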

Tracking these outbound beaconing calls took a bit of effort, as I had to set the raspi as the DNS server on various devices to get to the true source. That source turned out to be my corporate phone; since I had just reformatted it, I could quickly drill down to the application in question. A quick search online turned up another post describing the same issue. I subsequently confirmed the source of the traffic and reported the behavior to the app owner.

If you are interested in catching DGAs (domain generation algorithm), here is a good reference based on entropy. And if you are interested in finding new domains, here is another neat guide.

Here is another interesting set of breadcrumbs that I ran a separate exercise on, though I won't cover the details in this post.

Spike of exploit traffic from public cloud IP

When all was done, I switched the raspi from the default “gui” mode to “cli”. The raspi hovers around 15% memory utilization, so that is all good.

Basic Raspi Hardening

Here is a useful guide on hardening your Raspberry Pi. There are two items from the guide that I wanted to highlight.

I wanted something simpler than iptables, and the above guide recommends the Uncomplicated Firewall (ufw). Installation and setup were painless:

apt install ufw
ufw enable
# based on unused open ports, pick whichever you wish to close off
ufw deny <port>
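If your default policy ends up blocking incoming traffic, explicitly allow the services this setup relies on. The ports below are assumptions based on the components above:

ufw allow 22/tcp    # ssh
ufw allow 53        # Pi-hole DNS
ufw allow 80/tcp    # Pi-hole web admin
ufw allow 514/udp   # router syslog forwarding
ufw status verbose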

I also wanted some form of rate-based protection against attacks like brute-forcing, especially with a running web server, and fail2ban fits my requirements perfectly:

apt install fail2ban
cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local

Here’s a guide on how to protect Pi-hole’s admin interface with fail2ban: https://www.marek.tokyo/2019/02/securing-pi-hole-with-fail2ban-to.html

Threat Intelligence Feed

You can set up a proper threat intelligence feed using instructions from https://docs.splunk.com/Documentation/ES/6.1.0/Admin/Downloadthreatfeed.

Lookups would be the ideal approach, but for illustration, I used another sourcetype to add a “bad reputation” flag to the IP addresses worth paying more attention to.

wget https://www.badips.com/get/list/any/2 -O /home/pi/splunk/ipreputation
crontab -e
10 8 * * * wget https://www.badips.com/get/list/any/2 -O /home/pi/splunk/ipreputation
/opt/splunkforwarder/bin/splunk add monitor /home/pi/splunk/ipreputation -index pi -sourcetype intel

And here is a sample search with IP reputation:

(index=pi sourcetype=ids NOT "0.0.0.0:68 -> 255.255.255.255:67") OR (index=pi sourcetype=intel earliest=-25h)
| rex "\}\ (?<src>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}).+\ \-\>"
| rex " \-\>\ \d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}\:(?<dest_port>\d+)"
| rex "\[\*\*\].+\]\ (?<threat>.+)\ \[\*\*\]"
| rex "\[Classification\:\ (?<class>.+)\]\ \[Priority"
| eval src=if(sourcetype="intel",_raw,src)
| eval reputation=if(sourcetype="intel","threat",null())
| stats values(sourcetype) AS sourcetype, values(reputation) as reputation, values(threat) as threat, values(class) as class, values(dest_port) as dest_port, count by src
| eval dest_port = mvindex(dest_port,0,9)
| where sourcetype = "ids"
| fields - sourcetype
| sort -reputation, -count
| head 10

We now see IP addresses associated with known-bad activity showing up with the “reputation=threat” flag, based on a feed that is refreshed daily.

Dashboard

Other than setting up Splunk alerts to my email, I also added a Splunk Dashboard to get a visual view of the ongoing network traffic and threats. Here’s how it looks:

With a timeframe drop-down, I can run specific retrospective searches or keep the panels updating in real time. While at it, I added geolocation information.

index=pi sourcetype=flow src!=0.0.0.0 dest!=0.0.0.0 dest!=255.255.255.255
| iplocation src
| geostats count by City globallimit=10
map with IP geolocation information

Threat Map

Looks neat, but I wanted a real-time, animated threat map. I found this “Missile Map” app that was ideal. I installed it as a visualisation and used a grey map template from http://alexurquhart.github.io/free-tiles/. And here is my final monitoring dashboard that I set to auto-refresh.

Red for threats based on Intel, Pink for IDS, Green for clean flows

So, what’s next?

Now that I’ve set these up:

1. Passive Tap & Raspbian Raspi
2. IDS alerts & logging
3. NetFlow collection & analyzer
4. Router Firewall logs & RSyslog
5. Log monitoring & forwarding
6. Splunk alerts and dashboard
7. Passive DNS
8. DNS Server & Sinkhole
9. Automated Domain Blocks based on Splunk Alerts
10. OpenDNS Updater
11. Threat Intelligence Feed
12. Threat Map

I might consider some other explorations:

1. Splunk Enterprise Security would be nice
2. Honeypots on a segregated network
3. Further hardening on the raspi
4. Active Tap
5. Hosting Splunk on a dedicated compute stick
6. Forwarding endpoint logs
7. Unbound DNS Resolver
8. Custom Snort rules
9. Squid Proxy

Have fun building your cybersecurity network monitoring setup, and I hope this helped you better understand cybersecurity flow.

George Chen
The PayPal Technology Blog

Global Threat Hunting Manager at PayPal. George is a site lead for Innovation Lab & Community Impact. In his spare cycles, he lectures cybersec at a University.