My adventure adding 10GbE networking to an Intel NUC for ESXi via Thunderbolt 3 PCIe expansion enclosure
When Heptio was acquired by VMware late last year, I started taking a deep dive into the vSphere ecosystem.
I've been running a cluster of three Intel NUCs in my homelab for a while now. They're a great platform for running bare-metal Kubernetes and experimenting.
The goal was to convert my bare-metal k8s cluster into a vSphere cluster, mainly to experiment with hyperconverged storage platforms like vSAN and Ceph (Proxmox). Storage platforms like these can run over Gigabit networking, but 10GbE is preferred for better performance.
I started by purchasing a Mikrotik CRS328-24P-4S+RM. I love this switch! For around $350 it's significantly cheaper than many other comparable 10GbE switches (*cough* Ubiquiti).
The switch has four 10GbE-capable SFP+ ports. My strategy was to take advantage of the Thunderbolt 3 port on the NUCs to add a 10GbE network interface.
After looking around for options, I decided to settle on the AKiTiO T3 Thunderbolt 3 to 10G adapter. I was unsure whether ESXi would recognize this adapter, as I did not see it listed on the compatibility guide. I figured if it didn't work, I could at least use it with my MacBook Pro. The adapter provides a 10GBASE-T RJ-45 port, so I needed to order an RJ-45 to SFP+ module for it to connect to my switch's SFP+ ports.
I was so excited when it arrived! I first tested it by running Proxmox and it was recognized with no issues! For good measure I also tested it with my MacBook Pro and even Windows 10 briefly. All worked flawlessly with no additional drivers needed. It’s a really cool device.
My excitement turned to disappointment, however, when I booted up ESXi. The adapter was not recognized. The only interface listed was still the NUC's integrated Gigabit NIC.
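For anyone wanting to check this on their own host: ESXi ships with the `esxcli` tool, and its network namespace lists the physical NICs the hypervisor actually recognizes. A quick sketch (run from the ESXi shell or over SSH):

```shell
# List all physical NICs ESXi has claimed, with driver, link state,
# speed, and duplex. A NIC missing from this list has no loaded driver.
esxcli network nic list

# Cross-check against what the PCIe/Thunderbolt bus actually exposes;
# a device visible here but absent above points to a driver problem.
lspci
```

In my case, `esxcli network nic list` showed only the onboard `vmnic0`, which is how I knew the problem was driver support rather than the Thunderbolt link itself.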
I searched high and low trying to make it work: everything from installing William Lam's USB 3.0 Ethernet drivers (an amazing resource, by the way) to various customized builds of ESXi. Short of writing my own drivers, it was looking like a failed undertaking. At least I didn't buy three of them!
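For reference, installing a community driver package on ESXi generally looks like the sketch below. The file paths and datastore name are examples, not the actual bundle I used:

```shell
# Allow community-supported packages (drivers like these are not
# VMware-certified, so the acceptance level must be lowered first).
esxcli software acceptance set --level=CommunitySupported

# Install a driver offline bundle previously copied to a datastore.
# The path and filename here are placeholders.
esxcli software vib install -d /vmfs/volumes/datastore1/driver-offline-bundle.zip

# A reboot is typically required before the new driver can claim hardware.
reboot
```

None of the drivers I tried this way ended up claiming the Thunderbolt adapter, but the procedure itself is the standard one for adding out-of-tree NIC drivers to ESXi.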
Plan B was a Thunderbolt 3 PCIe expansion enclosure paired with a standard 10GbE PCIe NIC. Not only did the enclosure look pretty cool, its dimensions would fit three of them perfectly on a shelf in my rack. They are also just under 2U in height.
When the parts arrived I hooked everything up and booted ESXi.
IT WORKS! Notice the speed listed below: 10000 Mbps, full duplex.
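The same check from earlier confirms it from the command line. The `vmnic1` name below is an assumption; it depends on the order in which ESXi enumerates the NICs on your host:

```shell
# The new NIC now appears alongside the onboard one, with its
# negotiated speed and duplex shown in the listing.
esxcli network nic list

# Detailed settings for a single NIC (name is an example):
# driver info, advertised link modes, current speed, and duplex.
esxcli network nic get -n vmnic1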
The cost of the enclosure + NIC is around $320. While this is not cheap, I actually like it better than an "all-in-one" adapter for several reasons. It basically modularizes my homelab components: I can easily swap them around and access the broader PCIe ecosystem, as well as upgrade the NUCs while keeping the same NICs.
I noticed there was quite a bit of space inside the enclosure. For fun, I decided to see if a de-cased NUC could fit in there. I made a MacGyver-style riser out of cardboard for the NUC to sit above the network card. It's a perfect fit!
Three of these are going to look pretty sweet on the rack! I'll write another post when the remaining hardware arrives, along with my adventures with vSAN. Thanks for reading!