Capturing Cisco vCube SIP messages with Homer

Jeremy Worden · Published in Automate Builders
6 min read · Aug 4, 2023

Recently in my lab I built a Docker container running the latest version of SIPp (v3.7). I was using it to stress test a SIP carrier and wanted to automate several different scenarios. However, I soon discovered that the most cumbersome part was combing through the logs to troubleshoot issues. That ultimately led me to a project called Homer. Homer is part of a stack that lets you analyze and monitor SIP traffic. Today I will go over my setup to get Homer up and working in my lab. Also, for those interested in my SIPp setup, check out my GitHub.

I started off by installing Homer on my Ubuntu 22.04 LTS server via Docker. I won’t be going over the OS installation or installing Docker, but if you need a good guide I’d suggest some of the guides from DigitalOcean.

Moving along, the Homer wiki has a great guide for the Docker setup.

First I started by cloning the project, and then installing with Docker Compose:

git clone https://github.com/sipcapture/homer7-docker
cd homer7-docker/heplify-server/hom7-prom-all
docker-compose up -d

This will create a Docker stack that includes Homer, Prometheus, Loki and Grafana.

Portainer view of Homer Docker Stack

Once they were up and running, I was able to access them via the following:

  • Homer:9080 (admin/sipcapture)
  • Grafana:9030 (admin/admin)
  • Prometheus:9090 (admin/admin)
  • Loki:3100 (admin/admin)
  • Alertmanager:9093 (admin/admin)

Access to the Homer GUI is via HTTP on port 9080, and we will be able to send HEPv3 traffic to port 9060/UDP or 9061/TCP (note: ports can be modified in the docker-compose configuration).
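If you plan to script health checks against the stack, a small helper like this can map each service to its port (the helper name is my own; the ports come from the list above):

```shell
# Map each service in the Homer Docker stack to its default published port.
# Illustrative helper; ports match the hom7-prom-all compose defaults.
stack_port() {
  case "$1" in
    homer)        echo 9080 ;;
    grafana)      echo 9030 ;;
    prometheus)   echo 9090 ;;
    loki)         echo 3100 ;;
    alertmanager) echo 9093 ;;
    *)            return 1 ;;
  esac
}

# Example: probe the Homer GUI once the stack is up.
# curl -fsS "http://localhost:$(stack_port homer)/" >/dev/null && echo "Homer is up"
```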

After the containers are up and running, logging into the Homer GUI should reveal a blank dashboard:

Homer Dashboard

Great! Now let’s get some data into our Homer setup. For this we will use a capture agent. Capture agents collect and index SIP packets as well as relay them to our Homer server on port 9060/UDP. The source of my SIP packets will be my vCube, which has a trunk to Twilio. If you’d like more information on this setup, check out this PDF, or leave a comment below.

I’m running my Cisco vCube on an Intel NUC with ESXi installed. My Intel NUC has a single NIC card that connects to my LAN.

Home lab setup

To capture all the traffic in and out of the vCube we will be setting up a secondary interface. This second interface will be a member of a virtual switch within ESXi that also has a second interface from our Ubuntu host server. This will act as a private network that we will use to mirror network traffic across using ERSPAN.

ESXi configuration looks something like this:

ESXi vSwitch0 Topology
ESXi vSwitch1 Topology. Note: you may need to tag this as 4095.

Now that we have the plumbing for our secondary network, let’s configure our Cisco vCube to mirror the traffic destined to vSwitch0 over to vSwitch1.

!
interface GigabitEthernet2
 ip address 10.20.20.3 255.255.255.0
 negotiation auto
 no mop enabled
 no mop sysid
!
monitor session 10 type erspan-source
 source interface GigabitEthernet1
 no shutdown
 destination
  erspan-id 10
  mtu 1900
  ip address 10.20.20.2
  ipv6 dscp 0
  ipv6 ttl 0
  origin ip address 10.20.20.3
!

Note: For my setup I used 10.20.20.x/24 as a private address space for my vSwitch1 network.

We are now mirroring traffic from GigabitEthernet1 on the vCube to IP address 10.20.20.2, which will be assigned to our Ubuntu server via our internal vSwitch1 setup.
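Before building anything else, you can sanity-check that mirrored packets are actually reaching the Ubuntu host. ERSPAN rides on GRE, which is IP protocol 47, so a tcpdump filter like the one this helper builds will show the raw mirrored stream (the helper name and the ens192 interface are from my setup; adjust for yours):

```shell
# Build a tcpdump capture filter for ERSPAN traffic. ERSPAN is carried in
# GRE (IP protocol 47); filtering on the vCube's origin address isolates
# the mirrored stream. Helper name is illustrative.
erspan_filter() {
  # $1 = origin IP of the ERSPAN source (GigabitEthernet2 on the vCube)
  echo "ip proto 47 and src host $1"
}

# Example, run against the Ubuntu host's secondary interface:
# sudo tcpdump -ni ens192 "$(erspan_filter 10.20.20.3)" -c 5
```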

Back on our Ubuntu server I used netplan to set up my secondary interface.

sudo nano /etc/netplan/*.yaml

Then I pasted the following under the ethernets section:

ens192:
  addresses:
    - 10.20.20.2/24

I then applied my configuration as so:

sudo netplan try
sudo netplan apply

Now we need to configure the Ubuntu side of the ERSPAN configuration with the following command.

sudo ip link add dev cisco_erspan mtu 1900 type erspan seq key 10 local 10.20.20.2 remote 10.20.20.3 erspan_ver 1

Enable the virtual interface.

ip link set cisco_erspan up

Afterwards you can then validate that packets are coming in with the following command:

ip -d -s -s link show dev cisco_erspan
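The RX counters in that output are what confirm traffic. A small parser like this makes it easy to script a before/after comparison (assumption: the standard two-line RX block that iproute2 prints, with an `RX:` header followed by a counter line whose second field is the packet count):

```shell
# Extract the RX packet count from 'ip -s link show' output on stdin.
# Assumes the usual iproute2 layout: an 'RX:' header line followed by a
# line of counters whose second field is the packet count.
rx_packets() {
  awk '/RX:/ { getline; print $2; exit }'
}

# Example: run twice a few seconds apart; a climbing count means the
# ERSPAN tunnel is receiving mirrored traffic.
# ip -s -s link show dev cisco_erspan | rx_packets
```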

We’ll be using systemd to make our ip link persistent. First let’s create a script file that we will run at boot:

sudo touch erspan.sh

Next we’ll need to update the permission on the file:

chmod a+x erspan.sh

Now let’s edit our file:

sudo nano erspan.sh

And paste in the following:

#!/bin/bash
# Make sure the script runs with superuser privileges.
[ "$UID" -eq 0 ] || exec sudo bash "$0" "$@"
# Create the virtual ERSPAN interface (skip if it already exists).
ip link show cisco_erspan >/dev/null 2>&1 || \
  ip link add dev cisco_erspan mtu 1900 type erspan seq key 10 local 10.20.20.2 remote 10.20.20.3 erspan_ver 1
# Bring the virtual ERSPAN interface online.
ip link set cisco_erspan up

Now we’ll update systemd with a new service to point to our script file.

sudo touch /etc/systemd/system/erspan.service
sudo nano /etc/systemd/system/erspan.service

Paste in the following (Note: Update path to script):

[Unit]
Description=Script to add ERSPAN links
After=network.target

[Service]
ExecStart=/path/to/erspan.sh
Type=oneshot
RemainAfterExit=yes
User=root
Group=root

[Install]
WantedBy=multi-user.target

Save and exit the file.

Reload the systemd manager configuration by running the following command:

sudo systemctl daemon-reload

Enable the service by running the following command:

sudo systemctl enable erspan.service

Start the service by running the following command:

sudo systemctl start erspan.service

Restart your system to test that the script is being run on startup!

Next we will be installing a capture agent on the Ubuntu host server that will then send the packets to our Homer Docker setup on port 9060/UDP.

First let’s download the binary from GitHub:

wget https://github.com/sipcapture/heplify/releases/download/v1.65.2/heplify

Next, let’s make the binary executable and move it so we can run it from anywhere at the prompt:

chmod +x heplify
sudo mv heplify /usr/local/bin

You should then be able to run the command by simply issuing:

sudo heplify -erspan &

This will collect packets on all interfaces, including the ERSPAN interface, and forward them to 127.0.0.1:9060.
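If you’d rather not capture on every interface, heplify also accepts an explicit capture interface and HEP server via its `-i` and `-hs` flags. A tiny wrapper keeps the invocation explicit (the wrapper itself is my own sketch; the values shown are the defaults used elsewhere in this setup):

```shell
# Assemble an explicit heplify command line. -i (capture interface) and
# -hs (HEP server address) are heplify flags; the wrapper is illustrative.
heplify_cmd() {
  # $1 = capture interface, $2 = HEP server host:port
  echo "/usr/local/bin/heplify -erspan -i $1 -hs $2"
}

# Example:
# sudo $(heplify_cmd cisco_erspan 127.0.0.1:9060) &
```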

However, let’s take it a step further. Let’s run our binary as a service and have it restart if it fails.

Start by creating a new service:

sudo nano /lib/systemd/system/heplify.service

Paste in the following:

[Unit]
Description=Heplify capture agent
After=network.target

[Service]
Type=simple
ExecStart=/usr/local/bin/heplify -erspan
Restart=on-failure
RestartSec=1s

[Install]
WantedBy=multi-user.target

Enable and start your service:

sudo systemctl enable heplify.service
sudo systemctl start heplify.service

Verify that your service is up and running:

ubuntu:~$ ps aux | grep heplify
root 2554183 0.0 0.1 797204 26552 ? Ssl 18:50 0:05 /usr/local/bin/heplify -erspan

If everything went as planned we should see some SIP messages in our Homer dashboard.

Homer Dashboard

Now you should be able to drill down into all the SIP traffic, including raw messages, SIP ladder diagrams or even downloading the PCAP file for the call!

NOTE: If you also run the SIPp container on the same Ubuntu host, messages from SIPp will be captured by your Homer installation as well.

As always, if you would like to support my work, you can buy me a coffee. I would really appreciate it (but it’s not required).

Thanks and enjoy!
