Inside Extreme Networks SLX

Jörg Kost
Jan 4, 2019


In the last few weeks, I had the opportunity to play around with the new Extreme Networks SLX 9540 switches, which were originally developed and sold by Brocade Communications. This is the first article of a small series on SLX gotchas, starting with the basic concepts and the hypervisor design, with a focus on the software stack.

SLX in field action

The SLX 9540 is a 48x10GbE and 6x100GbE deep-buffer switch aimed at WAN edge, IXP and colocation deployments. With OptiScale Routing and advanced licensing, it supports full BGP routing tables, MPLS and VXLAN.

The SLX feels rather unique, a contrast to well-established platform routers like the Brocade MLX family, which have a stable network operating system but a limited userland. And with a download volume of eight gigabytes, the SLX operating system promises to be a fully loaded box of toys.

The new hardware architecture and data plane are built around commodity Broadcom network processors in a sufficient, or industry-leading (marketing!), factor and density. The control planes for the management and line cards, however, all run on Intel Xeon x86 processors. That is a big contrast to former Brocade products, which made heavy use of the PowerPC architecture.

Also, instead of a custom operating system like VDX NOS or Ironware, the SLX runs a plain Ubuntu Linux installation as the host operating system, solely to run KVM-based virtualization on top. In fact, even what we call the management interface itself, the SLX-VM, runs as a Linux-based virtual machine.

Additionally, every SLX-OS ships with a decoupled Ubuntu Linux third-party virtual machine (in jargon: TPVM) that connects to internal hardware-based streaming and analytics paths, so you can run tcpdump or similar diagnostic tools directly on the device. If you want to run Docker containers, nmap, Arpsponge or any custom service to program your device, this is the place to be.

Access to the host

In normal day-to-day operations, you will rarely see the bottom of the host operating system, if any of it at all. But curious as we are, we will go all the way down the rabbit hole at once.

The Extreme Networks management guide advises that the host operating system login shell is accessible via the serial interface module and the hotkeys CTRL+Y1 (Host OS), CTRL+Y2 (SLX-VM) and CTRL+Y3 (TPVM).

That never worked for me, which might be an issue with my local computer and terminal emulator. What always works, however, is connecting directly from the SLX-VM via the virtual router instance mgmt-vrf.

For this access, point the ssh or telnet client at the loopback IP addresses of the management board. The first board is reachable via the IP address 127.2.0.1, and the standby unit, if present, under 127.2.0.2. Furthermore, the line card host operating systems (not their specific SLX-VMs) listen on 127.2.0.3 to 127.2.0.10.

slx# telnet 127.2.0.1 vrf mgmt-vrf
Trying 127.2.0.1…
Connected to 127.2.0.1.
Escape character is '^]'.
Ubuntu 14.04 LTS
HOST login: root
Password:
Last login: Wed Jan 2 16:12:38 GMT 2019 from pb_vm1 on pts/6
Welcome to Ubuntu 14.04 LTS (GNU/Linux 4.4.7 x86_64)

At the login prompt we can use the default Brocade login: username root, password fibranne.
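To keep the internal addressing scheme straight, here is a small shell memo that maps slot numbers to loopback addresses and roles. The mapping follows the scheme described above; the helper functions are my own, not a tool shipped on the box:

```shell
# Map an SLX slot number to its internal loopback address.
slx_loopback() {
  echo "127.2.0.$1"
}

# Map an SLX slot number to its role, per the scheme described above:
# .1 = active management board, .2 = standby, .3-.10 = line cards.
slx_role() {
  case "$1" in
    1) echo "active management board" ;;
    2) echo "standby management board" ;;
    *) echo "line card host OS" ;;
  esac
}

for slot in 1 2 3 10; do
  printf '%s -> %s\n' "$(slx_loopback "$slot")" "$(slx_role "$slot")"
done
```

Running it prints one line per board, e.g. `127.2.0.3 -> line card host OS`.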

Diving around the host system

Peeking around a typical SLX 9540 switch with Linux commands like ifconfig or lshw, we quickly find that Extreme is using Intel's integrated SoC platform, specifically the Xeon D-1527, formerly known as Broadwell-DE. There are also 32 gigabytes of RAM and two 128 GB solid state disks attached.
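These are the kinds of commands behind that quick inventory; they run on any Linux box, and on the SLX 9540 the output reflects the Xeon D-1527, the 32 GB of RAM and the two SSDs:

```shell
# Inspect platform details from the host OS shell.
grep -m1 'model name' /proc/cpuinfo   # CPU model string
grep MemTotal /proc/meminfo           # total installed RAM
# Attached block devices (fall back to /proc if lsblk is missing):
lsblk -d -o NAME,SIZE,MODEL 2>/dev/null || cat /proc/partitions
```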

The first solid state disk is attached to the host OS and the SLX-VM; the second one is used by the TPVM as fast data storage. Extreme's own software control tools seem to live under the path /fusion, the SLX-VM disk images can be found under /VM, and the TPVM is located under /TPVM.

If you have already activated the TPVM, you can see two virtual machines running in the output of the virsh command; otherwise you will only see the SLXVM instance.

~# virsh list
Id Name State
----------------------------------------------------
4 SLXVM1 running
13 TPVM running

There are also a few software watchdog processes around, namely skapwatchdogd and watchdogd. Both seem to satisfy the hardware watchdog timer of the SLX switch board by accessing or writing to DMA and FPGA registers at fixed intervals. If the timer is not fed regularly, it will trigger a reboot of the chassis or a switchover to a standby management module.
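The feeding pattern is the classic Linux watchdog loop. A minimal sketch, writing to a demo file instead of the real /dev/watchdog device or the FPGA registers so it stays runnable anywhere:

```shell
# Classic watchdog feeder loop. On the real device the writes would go to
# /dev/watchdog (or the FPGA registers); a demo file keeps this runnable.
WD=./watchdog.demo
: > "$WD"
for beat in 1 2 3; do          # real daemons loop forever
  echo "feed $beat" >> "$WD"   # each write resets the hardware countdown
  sleep 0.2                    # interval must stay well below the timeout
done
echo "fed $(wc -l < "$WD") times before stopping"
```

On the real device, the moment the loop stops feeding, the countdown expires and the chassis reboots or fails over.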

Network bridges

On startup, two virtual Linux network bridges, br0 and br1, are created on the host OS. The bridge br0 attaches to the host OS eth0 adapter, which is wired to the physical management Ethernet interface of the SLX switch.

The virtual devices vnet0 and vnet2 are the primary network devices of the management VM and the third-party virtual machine, and they source the management IP addresses.

~# brctl show
br0   8000.d88466000000   no   eth0
                               vnet0
                               vnet2
br1   8000.00a0c0000001   no   vnet1

You can also configure an IPv6 or IPv4 address on br0 and make the host operating system accessible from your local management network or a monitoring tool. DHCP and IPv6 automatic address management are also supported, but be careful: do not make it accessible worldwide by accident!
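As an illustration, such an assignment could look like the following. The prefixes are documentation examples (192.0.2.0/24, 2001:db8::/64), and the firewall rules are just one way to avoid that accidental worldwide exposure; this is a sketch, not a vendor-supported procedure:

```shell
# Give the host OS an address on br0 (documentation prefixes only).
ip addr add 192.0.2.10/24 dev br0
ip -6 addr add 2001:db8::10/64 dev br0
# Limit exposure: accept SSH only from the management subnet.
iptables -A INPUT -i br0 -p tcp --dport 22 -s 192.0.2.0/24 -j ACCEPT
iptables -A INPUT -i br0 -p tcp --dport 22 -j DROP
```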

The second bridge, br1, connects to the internal management network, basically the pool of all those magic IP addresses that begin with 127.

There is also a virtual 10GbE network card hanging around on the PCI bus. This card is attached on demand for the analytics path to the TPVM.

Tmux

Continuing on the host machine, we notice that tmux, a terminal multiplexer, is happily waiting for a connection on the serial console.

# ps auxw | grep ttyS0
root 1939 0.0 0.0 18008 2940 ttyS0 Ss+ 2018 0:00 /bin/bash /fusion/sbin/vm_tmux
root 2064 0.0 0.0 24220 2684 ttyS0 S+ 2018 0:00 /fusion/sbin/tmux attach -t FUSION_KVM

The tmux configuration is easily located at /fusion/conf/tmux.conf. A short audit shows that Extreme uses tmux to multiplex between the host login getty, the SLX-VM and the TPVM. This is how CTRL+Y1 / Y2 / Y3 is supposed to work.

The virtual machines are connected by tmux invoking the virsh console command, which attaches a terminal session to the virtual serial driver created inside the VM. This is, by the way, the same procedure used to access the serial console of any virtual machine managed by the libvirt package, e.g. on CentOS.
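Putting the pieces together, the hotkey wiring presumably boils down to tmux bindings of roughly this shape. This is a hypothetical sketch, not the actual contents of /fusion/conf/tmux.conf:

```shell
# Hypothetical sketch of the hotkey wiring; the real tmux.conf differs.
set-option -g prefix C-y          # CTRL+Y becomes the tmux prefix
bind-key 1 select-window -t :0    # window 0: host OS getty
bind-key 2 select-window -t :1    # window 1: virsh console SLXVM1
bind-key 3 select-window -t :2    # window 2: virsh console TPVM
```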

MLX software comparison

The MLX OS is a monolithic operating system, full of network features but lacking a proper userland for doing much more. It is rock solid, and there are a bunch of flavors and forks available for different switch and router boards:

  • NetIron for MLX and MLXE
  • NetIron for CES and CER
  • Ironware for TurboIron, FastIron and ICX (now all Ruckus)

Being the older top dog of the Brocade router portfolio, it still gets new features from Extreme on a regular basis, and a complete system installation fits nicely into a 400-megabyte ZIP file.

SLX software

The SLX software tries to fill the gaps in usability and visibility left by the MLX. The download size reflects the three different Linux instances currently shipped. The Linux ecosystem lets you run any x86-compatible binaries or Perl and Python scripts without an intermediate host, and there is support for configuration management tools.

Moreover, with vSLX there is a complete bootable SLX-OS lab image available, shortening training and development cycles. You don't need access to the real hardware; you can emulate the SLX9850 and SLX9540 control planes right on your local computer.

On the other hand, using Linux as the default image means tracking Canonical's update and security announcements. The Ubuntu installed on both the host and the SLX-VM is still version 14.04 LTS, which reaches the end of its security support in April 2019. But I am sure the folks at Extreme have this covered, and I expect an update soon.

root@HOST:~# lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 14.04 LTS
Release: 14.04
Codename: trusty

Running a full management plane on commodity hardware and software also raises the issue that attacks on Linux host systems or Intel processors can now affect your network devices as well. I also wonder whether broken SSDs will become a factor, and whether replacements or fresh installs can be done in the field.

Closing the first chapter, we can see that SLX-OS is not a single standalone operating system. It is rather a collection of ideas, tools and a lot of Linux. In one of the next stories, I will take a deeper look inside the SLX-VM and the TPVM.
