My “humble” home lab with OpenStack
I would like to tell you about my “humble” home lab with OpenStack that lives in my living room.
My home lab has undergone several rounds of modernization since I started building it.
About a year ago, I bought my first server, a Dell PowerEdge R610. And then something went wrong :)
First generation of network equipment:
- Two Juniper EX4200-48T (with the EX-UM-2X4SFP uplink module: 2 SFP+ / 4 SFP ports) as leafs
- One MikroTik CRS305-1G-4S+IN as a cheap spine (4 SFP+ ports)
- TP-Link TL-ER6120 router
- Cisco SF300-24 as a management switch for OOB access (IPMI, iLO, etc.)
Later I decided that 1 Gbps was not serious and changed the equipment.
The second network generation after the rebuild:
- Two Edge-core AS5812-54X (48x10G / 6x40G) as leafs
- Two Juniper EX4200-48T as a cheap spine :)
- HPE MSR1003-8S router
- Smart NICs: Emulex OneConnect OCE14102 and Mellanox CX4121A (ConnectX-4 Lx)
A few words about the Edge-core AS5812-54X. It is an L2/L3 switch built around Broadcom's Trident 2+ ASIC, and it comes pre-loaded with the Open Network Install Environment (ONIE).
I chose SONiC, a network OS by Microsoft. SONiC is an open-source, Linux-based network operating system that runs on switches from multiple vendors and ASICs.
But it was not easy to get started, because I had to patch the source code. I fixed several modules (one of them being the fan control algorithm and the logic of interaction with the chip), and I also fixed the CLI tools.
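My patches are specific to this platform, but to give a rough idea of what the fan-control logic does, here is a generic hwmon-based sketch; the sysfs paths and the temperature-to-PWM curve are assumptions for illustration, not the actual SONiC platform code:

```python
# Generic Linux hwmon fan-control loop (illustrative, not SONiC code).
import glob
import time

HWMON = glob.glob("/sys/class/hwmon/hwmon*")[0]  # assumed: a single sensor chip

def read_temp_c() -> float:
    with open(f"{HWMON}/temp1_input") as f:   # hwmon reports millidegrees C
        return int(f.read()) / 1000.0

def temp_to_pwm(temp: float) -> int:
    # Assumed curve: ~30% duty below 40 C, linear ramp to 100% at 70 C.
    if temp <= 40:
        return 77            # ~30% of the 0-255 PWM range
    if temp >= 70:
        return 255
    return int(77 + (temp - 40) / 30 * (255 - 77))

while True:
    pwm = temp_to_pwm(read_temp_c())
    with open(f"{HWMON}/pwm1", "w") as f:     # hwmon PWM control, 0-255
        f.write(str(pwm))
    time.sleep(5)
```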
List of servers:
- Supermicro CSE-512F-441B (X9SCL-F motherboard, Core i3-2100, 16 GB) as a management server
- Supermicro 1U 1027GR-TRF (2x E5-2695 v2 / 128 GB, 3x NVIDIA Tesla/GRID)
- HP ProLiant DL380 G8 (2x E5-2620 / 128 GB)
- Dell PowerEdge R610 (2x Xeon X5650 / 64 GB)
- HP ProLiant DL360 G7 (2x Xeon X5670 / 64 GB)
I use an IKEA HEJNE shelf as the base of the rack.
The management server runs MAAS, NetBox, Vault, Ansible, and a Docker registry.
I developed a software solution for configuring network and hardware state and for lifecycle management. When I create a new server record or change the current network config, NetBox pushes a webhook to my service. The service receives such events and handles them. As a simple example, after handling an event, the service generates a valid configuration for all of the hardware and software involved and pushes it to MAAS or to the network hardware. NetBox is the source of truth for all of my systems.
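A minimal sketch of such a webhook receiver, using Flask for illustration; apply_config() here is just a placeholder for the real configuration logic:

```python
# Minimal NetBox webhook receiver sketch (Flask is an assumption;
# the real service is more involved).
from flask import Flask, request, abort

app = Flask(__name__)

def apply_config(model: str, data: dict) -> None:
    """Placeholder: render and push config to MAAS or the switches."""
    print(f"would reconfigure for {model}: {data.get('name')}")

@app.route("/webhook", methods=["POST"])
def netbox_webhook():
    event = request.get_json(silent=True)
    if not event:
        abort(400)
    # NetBox webhook payloads carry the model name, event type and object data.
    apply_config(event.get("model", ""), event.get("data", {}))
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```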
Also, I labeled all cables and hardware with their IDs from NetBox, so that cables and connections are easy to reconfigure. This is a good decision when you have more than 2 servers and 6 patch cords. As an example, in the future the same data can be used to generate a change plan for engineers at a data center, and that plan can be sent as a ticket to data center support.
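For instance, the cable list can be pulled straight from the NetBox REST API to print labels or a patch plan. A sketch assuming the NetBox 3.3+ API, where cable terminations are lists; the URL and token are placeholders:

```python
# Print one label line per cable from NetBox (field names per NetBox 3.3+).
import requests

NETBOX = "https://netbox.example.com"            # placeholder URL
HEADERS = {"Authorization": "Token 0123456789abcdef"}  # placeholder token

resp = requests.get(f"{NETBOX}/api/dcim/cables/?limit=0", headers=HEADERS)
resp.raise_for_status()

for cable in resp.json()["results"]:
    # Each end of the cable references the connected interface/port object.
    a = cable["a_terminations"][0]["object"]["display"]
    b = cable["b_terminations"][0]["object"]["display"]
    print(f"cable #{cable['id']}: {a} <-> {b}")
```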
The underlay network is an IP fabric with a spine/leaf topology.
I plan to move it to EVPN (for more flexibility) in the future.
The IP fabric is based on eBGP (each spine and each leaf has its own AS number), and I use ECMP (per-flow) for balancing between equal-cost uplinks (let me remind you that each leaf has multiple connections to multiple spines).
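As a sketch, here is what a per-leaf BGP config generator could produce, in FRR syntax; the AS numbers and addresses are made up, and multipath-relax is needed for ECMP precisely because each spine has its own AS:

```python
# Render a per-leaf FRR BGP config for the fabric (illustrative values).
LEAF_AS = 65101
SPINE_NEIGHBORS = {"10.0.0.1": 65201, "10.0.0.3": 65202}  # uplink IP -> spine AS

config = [
    f"router bgp {LEAF_AS}",
    # Treat equal-length AS paths via different spines as equal-cost.
    " bgp bestpath as-path multipath-relax",
]
for ip, spine_as in SPINE_NEIGHBORS.items():
    config.append(f" neighbor {ip} remote-as {spine_as}")
config += [
    " address-family ipv4 unicast",
    "  maximum-paths 64",          # enable ECMP over the equal-cost uplinks
    "  redistribute connected",
    " exit-address-family",
]
print("\n".join(config))
```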
Each server has two 10G SFP+ ports connected to the leaf switches in order to provide fault tolerance. I configured each NIC port as 2 different “physical functions” and split their speed 30/70. The matching interfaces in Linux are combined into bond interfaces: one bond is used for the overlay network, and the other for the control and backup network.
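Building one of these bonds with iproute2 could look like the sketch below; the interface names and the active-backup mode are my assumptions here:

```python
# Assemble one bond from one PF on each physical port (illustrative names).
import subprocess

def sh(cmd: str) -> None:
    subprocess.run(cmd.split(), check=True)

sh("ip link add bond0 type bond mode active-backup miimon 100")
for slave in ("pf0_0", "pf1_0"):   # assumed PF interface names
    sh(f"ip link set {slave} down")            # slaves must be down to enslave
    sh(f"ip link set {slave} master bond0")
sh("ip link set bond0 up")
```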
Then I deployed my patched version of OpenStack.
One of the servers is the management node and, at the same time, the network controller node. The other servers are compute nodes.
I use a slightly patched Neutron. The overlay network uses VXLAN as its protocol, and to speed up encapsulation I use smart NICs with VXLAN offload.
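Neutron manages the VXLAN interfaces itself, but as a sketch, here is how such a device is created by hand and how to check that the NIC advertises the offload; the VNI and device name are illustrative:

```python
# Create a VXLAN device and check the NIC's VXLAN TSO offload flag.
import subprocess

dev = "enp3s0f0"  # assumed uplink interface name

# Standard Linux VXLAN interface on the default IANA port 4789.
subprocess.run(
    f"ip link add vxlan100 type vxlan id 100 dstport 4789 dev {dev}".split(),
    check=True,
)

# tx-udp_tnl-segmentation is the ethtool feature behind VXLAN TSO offload.
features = subprocess.run(
    ["ethtool", "-k", dev], capture_output=True, text=True, check=True
).stdout
for line in features.splitlines():
    if "tx-udp_tnl-segmentation" in line:
        print(line.strip())
```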
Thanks for reading!