The Metal Cloud

How I came to build a cloud from scratch

Matthew G. Johnson
Sep 15, 2019 · 10 min read
The Metal Cloud, drawing by Gunawan Artd

When Henry Ford launched the Model T a century ago, it wasn’t the first car or the finest, but it was truly distinctive. While all the cars that came before were hand-crafted and very expensive, the Model T was affordable and immensely popular. Ford’s genius lay in pioneering the mass-production techniques that allowed him to manufacture cars in much higher volume and at a much lower price than anyone before. The Model T fundamentally changed the way we live.

Cloud computing, just like the Model T, was far from the first of its kind. Its predecessors, however, were equally hand-crafted and expensive. Automated provisioning at scale has allowed the cloud to deliver computing services at a much lower price and with much shorter delivery times than ever before. Just like the Model T a century before, cloud computing is built from simple elements like iron and copper, but combined with the power of imagination, the metal cloud is fundamentally changing the way we live.

Many years ago, I had the opportunity to pay for my studies by working as a system administrator. At the beginning, I didn’t have a clue, but I needed the money and was ready to learn. I started by reading the SunOS system administrator’s guide cover to cover and slowly developed my skills on the job. I built one of the first few sites on the World Wide Web and even suffered an early internet hack. But that was a long time ago.

I had lost touch with my roots. As a new community developed around the cloud, I became increasingly eager to reconnect.

The easy route would have been to create a cloud account and build something, but that would have been like trying to understand the engineering of a car by taking it for a drive. To really understand cloud computing, I would have to get under the hood and start working on the engine. But doing that with Amazon, Google or Microsoft was never going to be possible. They rightly keep their data centres under heavy security.

If I was serious about reconnecting, there was really only one option: I had to build my own cloud from scratch.

Now, to set the context: I wanted a personal project, not a business, so it had to be functional but not too big or expensive. Moreover, I wanted to share it with others, so ideally it would also be portable and aesthetically coherent.

The obvious choice would have been to create a stack of Raspberry Pis, but I knew that the target operating systems and applications needed more power and storage than a Pi could provide. In the end, I settled on the Udoo x86, a small maker board with a four-core Intel CPU, 8GB of RAM and connectors for both flash (M.2) and disk (SATA) storage.

Udoo x86 Ultra — Single Board Computer

The next challenge was working out how to put four of them together without sacrificing portability, durability or appearance. After a long search, I located an elegant compact LAN gaming case from Lian Li with a carrying handle and a generous provision of storage bays. Nevertheless, LAN gaming was long past its prime, and the case was no longer in production. Thankfully, eBay came to the rescue, and with all the primary components secured, I was ready to start the build.

Lian Li PC-TU200 Mini ITX case with laser etched artwork

PC assembly is typically a very orderly process, with standard parts coming together in a well-choreographed routine. Nonetheless, this build was going to be far from routine.

The first challenge was to put four computers, storage and networking into a case meant for only one. Given the space constraints, I started by searching out an especially small power supply to provide additional headroom. I then stacked the boards in a staggered manner so that the cables from each board could pass through the gaps between their neighbours. There were only a few millimetres of clearance between the last board and the power supply, but that was good enough.

The storage and networking were thankfully much easier. The four disks fit in the existing bays, the flash storage fit neatly under the boards, and a vacant optical drive bay provided the perfect home for a small network switch. With confidence starting to build, I breathed a sigh of relief and moved on.

Nonetheless, the second challenge of connecting the peripherals was to prove all the more formidable. Not only were there four separate boards and only one access slot, but each board had connectors on two opposite edges. I could simply have attached extension cables and left them dangling out of the back, but this would have been both confusing and unappealing. Instead, I decided to build a custom panel for mounting the extension sockets, preserving both aesthetics and functionality.

All the same, I was no metalworker, and squeezing all thirteen connectors into such a small space would require great precision. For a fortnight I wrestled with aluminium sheet, drills, saws and a full squad of hand-files. Despite my best efforts, the final result did not achieve the required precision: I could not fit in all of the sockets and had no choice but to start again.

For another week, I struggled with the same simple tools, measuring even more carefully and using a small jig to align the holes more precisely. Thankfully the second attempt was more successful, fitting all thirteen connectors. Moreover with neat rows of screws clearly visible, it offered an industrial aesthetic that might even have impressed Henry Ford!

Custom Rear Panel

The final challenge of the assembly was to wire up all the components. Typically this is a very easy task, but little in this build was to follow expectation.

The large number of unique components meant that most of the cables had to be custom-made with special connectors. Moreover, with much less space and many sharp edges, fitting them wasn’t any easier. After several blood sacrifices, I finally managed to connect everything and turn on the power. I cannot overstate my surprise when each of the four little servers started successfully on the first attempt. I could have sworn that the little metal cloud glowed with pride.

Final assembly (power supply removed)

So with the hardware assembly successfully completed, it was time to move on to the central phase of the project. For as much as creating a mini data centre in a box was personally very satisfying, it was not the ultimate goal. What truly differentiates cloud computing is the ability to automatically provision infrastructure and services on demand. The heart of the project was to be in its software.

The first step was to enable installation of the operating system across the network. While this originally appeared to be a simple step, it soon became apparent that it was going to present the most extraordinary obstacle and put the whole project in jeopardy.

The episode started with a configuration tool called Cobbler, which promised to automate the provisioning. The first attempt was, nonetheless, a total failure. Not only would the target client not install an operating system, it wouldn’t even request a boot loader: the most basic step of the process. The twenty-first attempt wasn’t any more fruitful, despite my best efforts.

This was a serious problem and it was time to break out the network traffic analyser. The trace showed the client sending out its BOOTP request and receiving a prompt response from the server. The client would subsequently sit silently without any error messages. I was at a dead-end.
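
For anyone retracing this kind of diagnosis, a capture along the following lines is enough to watch the whole exchange: the BOOTP/DHCP handshake on ports 67 and 68, and the TFTP download of the boot loader on port 69. The interface name here is an assumption.

    # Watch the BOOTP/DHCP handshake and any TFTP boot-loader download.
    sudo tcpdump -ni eth0 port 67 or port 68 or port 69

In my case, the handshake completed and then nothing followed, which pointed the finger firmly at the client.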

Eventually I chanced upon an interesting option in the Udoo BIOS settings marked, “PXE Legacy mode.. only Legacy PXE OpROM is supported”. It didn’t sound promising, but I was desperate and willing to try anything. As if by magic, selecting this option woke the client from its slumber and it started to download the boot loader.

Not only was it finally working, but I had a distinct feeling of déjà vu. I could have sworn that the animated text graphics as it loaded were identical to those of a diskless Sun workstation decades earlier. Was there a glitch in the matrix?

However, the jubilation was to be short-lived. After loading the boot program, the system promptly returned to its comatose state. I continued to wrestle with the problem for several more weeks, but to no avail. Progress was at a standstill.

I started to wonder whether I should just abandon the idea of installing the operating system automatically and move on to other things. After all, with only four computers, it would certainly have been a lot faster. But I knew that I would never be satisfied with this choice. The project could not survive without its heart. I resolved to battle on.

A few weeks later, I caught a ray of hope shining through the clouds. One Saturday morning after a long cycle ride, I was drinking coffee on the East Coast when I chanced across another configuration tool called the Foreman. It was evident from the online documentation that this was a far more mature and functional alternative to Cobbler. Was this the solution I had been searching for?

The Foreman Web Interface

The initial progress with the Foreman was equally slow, but learning from my earlier struggles with PXE and BOOTP, I knew I had to dig deeper. I finally arrived at the original standards document from October 1993, identified simply as RFC 1542. It became clear that an Intel client system could advertise any of several different architectures, and that each architecture required its own boot-loader. By configuring the DHCP/BOOTP server with the corresponding architecture codes and directing each at the appropriate boot-loader, I could successfully complete a network boot with the Udoo BIOS in either legacy or standard PXE mode.
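
For the record, in ISC dhcpd terms the fix reduces to a fragment along the following lines. This is a minimal sketch rather than my actual configuration: the addresses and boot-loader file names are illustrative, and the architecture option itself (option 93) is defined in the later RFC 4578.

    # Declare the PXE "client system architecture" option (option 93).
    option arch code 93 = unsigned integer 16;

    subnet 192.168.1.0 netmask 255.255.255.0 {
      range 192.168.1.100 192.168.1.150;
      next-server 192.168.1.2;      # TFTP server holding the boot-loaders

      # UEFI clients advertise architecture 7 or 9, while legacy BIOS
      # clients advertise 0; each class needs a matching boot-loader.
      if option arch = 00:07 or option arch = 00:09 {
        filename "grubx64.efi";     # standard (UEFI) PXE boot
      } else {
        filename "pxelinux.0";      # legacy PXE boot
      }
    }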

This was a small step forward, but in retrospect a very important one. There were many more problems to overcome before getting the Foreman to work successfully. Yet, as the work progressed, the error messages and logs became increasingly informative, and the rate of progress continued to accelerate. If the mountain pass had a summit from which the road descended, then this was it.

Overview of server, storage, network and provisioning configuration

A month later, with the Foreman working well, it was finally time to start installing the applications. The direct path would have been to use Puppet to automate the installation, as this was built into the Foreman. All the same, I had heard many positive reports about Ansible and was curious to try it out. Eventually I used both, finding each equally simple and effective.

I wanted to test out a wide range of typical cloud applications and services: distributed file systems, application load balancers, big data platforms and container orchestration. There was a wealth of great open-source projects from which I selected some popular favourites, including Keepalived, HAProxy, Gluster, Elastic, Hortonworks and Kubernetes.

Installing and configuring these applications and services took only weeks where the operating system had taken months. Extensive online documentation combined with the power of tools like Ansible and Puppet made the process remarkably easy. Soon the little metal cloud was fully functional with a wide range of cloud services available on demand.
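
To give a flavour of how little was involved, a minimal Ansible playbook in the following spirit is enough to roll the load-balancing layer out across every node. This is a sketch rather than the project’s actual configuration: the host group and package names are assumptions.

    # site.yml: install and start HAProxy and Keepalived on every node.
    - hosts: metal_cloud
      become: true
      tasks:
        - name: Install the load-balancing packages
          package:
            name:
              - haproxy
              - keepalived
            state: present

        - name: Start both services and enable them at boot
          service:
            name: "{{ item }}"
            state: started
            enabled: true
          loop:
            - haproxy
            - keepalived

Running it against the whole cluster is then a single command: ansible-playbook -i inventory site.yml.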

Building a cloud from scratch had been a long and often very challenging climb. However, like many challenging climbs, the view from the peak was immensely rewarding. The development of the project into an operational service was an entirely different mountain, but that’s a story for another day.

I invited a close friend over to share the project and to celebrate its successful conclusion. We watched in awe as the little box automatically detected a new server, installed an operating system, configured custom storage and network settings and then started a range of application services all without lifting a finger. The little metal cloud glowed orange with pride. This time there was no doubt!

Final assembly completed and cloud services operational

Acknowledgements

I would like to offer my thanks to Ravi, Mark and Kent for their support, encouragement and invaluable guidance.

Hardware Inventory

  • 4 x Udoo x86 Ultra — Intel Pentium quad-core, 8GB RAM (N3710)
  • 4 x Western Digital Red 4TB HDD (WD40EFRX)
  • 4 x Transcend M.2 256GB SSD (MTS600)
  • 1 x Lian Li Mini ITX case (PC-TU200)
  • 1 x Corsair SFX 600W modular power supply (SF600)
  • 1 x Corsair 140mm RGB fan (HD140)
  • 1 x Noctua fan controller (NA-FC1)
  • 1 x Cisco 5-port gigabit switch (SG95D-05)
  • 2 x j5create USB 3.0 Gigabit Ethernet Adapter (JUE130)
  • 1 x j5create USB 3.0 Gigabit Ethernet & 3-Port Hub (JUH470)

Software Inventory

  • Foreman — operating system provisioning
  • Puppet and Ansible — configuration management
  • Keepalived and HAProxy — high availability and load balancing
  • Gluster — distributed file system
  • Elastic — search and analytics
  • Hortonworks — big data platform
  • Kubernetes — container orchestration

Matthew G. Johnson

I am an informatician, fine arts photographer and writer who is fascinated by AI, dance and all things creative. https://photo.mgj.org