Running a FIFO Cloud on OVH Dedicated Servers

Overview

This is a quick run-through of how we got Project FiFo and SmartOS working via vRack in an OVH datacentre.

To follow along, you will need two or more OVH servers running SmartOS which have the vRack capability available to them.

The layout we are building: two SmartOS servers connected over the vRack, each running a gateway zone (gw1/gw2) on the public network and the FiFo zones on the internal 10.0.0.0/24 network.

Storage

We have two 480GB SSDs and two 2TB SAS drives, configured as follows:

[root@core3 ~]# zpool status
  pool: zones
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        zones       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            c1t2d0  ONLINE       0     0     0
            c1t3d0  ONLINE       0     0     0
        logs
          c1t1d0    ONLINE       0     0     0
        cache
          c1t0d0    ONLINE       0     0     0

errors: No known data errors
[root@core3 ~]#

Here c1t0d0 and c1t1d0 are the SSDs (the cache and log devices respectively).
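One way to end up with this layout, assuming the SmartOS installer created the zones pool on the SAS mirror, is to attach the SSDs as log and cache devices afterwards:

[root@core3 ~]# zpool add zones log c1t1d0     # SSD as a separate log device
[root@core3 ~]# zpool add zones cache c1t0d0   # SSD as an L2ARC cache device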

SmartOS usbkey/config

The servers have two interfaces, ixgbe0 and ixgbe1:

ixgbe0 = internet
ixgbe1 = vrack

The config for ‘server1’ in our diagram is as follows:

[root@core3 /usbkey]# cat config
admin_ip=dhcp
admin_nic=MA:CA:DR:ES:HE:RE:DD
internal_nic=MA:CA:DR:ES:HE:RE:DD
internal0_ip=10.0.0.243
internal0_netmask=255.255.255.0
internal0_gateway=10.0.0.254
headnode_default_gateway=none
dns_resolvers=213.186.33.99,213.186.33.102
dns_domain=ovh.net
ntp_hosts=ntp.ovh.net
compute_node_ntp_hosts=dhcp
hostname=core3.cyberpunk.network

Change as required for both boxes, drop the file into /usbkey/config, and reboot.
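If you're not sure which MAC addresses to put in admin_nic and internal_nic, dladm in the global zone will list them, and sysinfo will confirm the tag-to-NIC mapping after the reboot (a quick sketch):

[root@core3 ~]# dladm show-phys -m                     # MAC address of ixgbe0 / ixgbe1
[root@core3 ~]# sysinfo | json "Network Interfaces"    # confirm admin/internal tags after reboot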

Proxy / FW / Gateway nodes

We will use a SmartOS zone as a router/NAT/reverse-proxy box. To set this up, create the following JSON and feed it to vmadm:

{
  "alias": "gw1",
  "hostname": "gw1",
  "brand": "joyent",
  "max_physical_memory": 512,
  "dataset_uuid": "3c999bae-d419-11e6-adea-e730a3c335c6",
  "nics": [
    {
      "interface": "net0",
      "nic_tag": "admin",
      "ip": "46.105.56.5",
      "mac": "02:00:00:18:2d:d8",
      "netmask": "255.255.255.0"
    },
    {
      "interface": "net1",
      "nic_tag": "internal",
      "ip": "10.0.0.200",
      "netmask": "255.255.255.0"
    }
  ]
}

[root@core3 /opt/configs]# vmadm create -f ./gw1.json
Successfully created VM 9a112e67-1d92-4963-f3bd-df16c1e073b2
[root@core3 /opt/configs]#

In OVH land, the 46.105.56.5 address is a 'failover' IP, and its gateway should be configured to be the gateway of the physical server it's currently assigned to.

In my case that is the gateway address 193.70.32.254.

To set this up, zlogin to the zone and use route -p:

[root@core3 /opt/configs]# zlogin 9a112e67-1d92-4963-f3bd-df16c1e073b2
[Connected to zone '9a112e67-1d92-4963-f3bd-df16c1e073b2' pts/19]
Last login: Sun Jan 29 13:34:30 on pts/18
  <ASCII banner snipped>  Instance (minimal-64-lts 16.4.0)
                          https://docs.joyent.com/images/smartos/minimal
[root@gw1 ~]# route -p add 193.70.32.254 46.105.56.5 -interface
add host 193.70.32.254: gateway 46.105.56.5
add persistent host 193.70.32.254: gateway 46.105.56.5
[root@gw1 ~]#

Next, we set the gateway:

[root@gw1 ~]# route -p add default 193.70.32.254
add net default: gateway 193.70.32.254
add persistent net default: gateway 193.70.32.254
[root@gw1 ~]#
[root@gw1 ~]# echo 'nameserver 8.8.8.8' > /etc/resolv.conf
[root@gw1 ~]# ping google.com
google.com is alive
[root@gw1 ~]#
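Before moving on, it's easy to sanity-check the routing table inside the zone; both the interface route to the OVH gateway and the default route should be present:

[root@gw1 ~]# netstat -rn -f inet    # expect 193.70.32.254 on net0 plus a default route via it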

Next, we want to NAT traffic from the internal network (10.0.0.0/24) to
46.105.56.5 so we can access internet addresses from inside our new FiFo cloud:

[root@gw1 /etc]# routeadm -u -e ipv4-forwarding
[root@gw1 /etc]# echo 'map net0 10.0.0.0/24 -> 0/32' > /etc/ipf/ipnat.conf
[root@gw1 /etc]# svcadm enable ipfilter
[root@gw1 /etc]# ipnat -l
List of active MAP/Redirect filters:
map net0 10.0.0.0/24 -> 0.0.0.0/32
List of active sessions:
[root@gw1 /etc]#
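If NAT doesn't behave, a quick sanity check is to confirm IPv4 forwarding really did get switched on; routeadm can print its state in parseable form:

[root@gw1 /etc]# routeadm -p | grep ipv4-forwarding    # current=enabled confirms forwarding is on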

I repeated this process on the second node with different IPs and called it 'gw2'; a sketch of its config is below.
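For reference, a rough sketch of what gw2's config could look like; the addresses below are the second failover IP and internal IP used in the tests further down, and the MAC is a placeholder you'd replace with your own:

{
  "alias": "gw2",
  "hostname": "gw2",
  "brand": "joyent",
  "max_physical_memory": 512,
  "dataset_uuid": "3c999bae-d419-11e6-adea-e730a3c335c6",
  "nics": [
    {
      "interface": "net0",
      "nic_tag": "admin",
      "ip": "51.255.232.154",
      "mac": "02:00:00:xx:xx:xx",
      "netmask": "255.255.255.0"
    },
    {
      "interface": "net1",
      "nic_tag": "internal",
      "ip": "10.0.0.210",
      "netmask": "255.255.255.0"
    }
  ]
}

Inside gw2 the route -p and ipnat steps are the same as above, just with that node's own physical gateway and public IP.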

Testing

To test these, I created a quick SmartOS zone and did the following:

GW1 Test:

[root@testvm ~]# route add default 10.0.0.200
add net default: gateway 10.0.0.200
[root@testvm ~]# ping google.com
google.com is alive
[root@testvm ~]# curl icanhazip.com
46.105.56.5

GW2 Test:

[root@testvm ~]# route delete default 10.0.0.200
delete net default: gateway 10.0.0.200
[root@testvm ~]# route add default 10.0.0.210
add net default: gateway 10.0.0.210
[root@testvm ~]# ping google.com
google.com is alive
[root@testvm ~]# curl icanhazip.com
51.255.232.154

Installing FIFO

First, create the zone for fifo1 and boot it up (we’ll add fifo2 later):

[root@core3 /opt/configs]# cat fifo1.json
{
  "autoboot": true,
  "brand": "joyent",
  "image_uuid": "e1faace4-e19b-11e5-928b-83849e2fd94a",
  "delegate_dataset": true,
  "indestructible_delegated": true,
  "max_physical_memory": 3072,
  "cpu_cap": 100,
  "alias": "fifo1",
  "quota": "40",
  "resolvers": [
    "8.8.8.8",
    "8.8.4.4"
  ],
  "nics": [
    {
      "interface": "net0",
      "nic_tag": "internal",
      "ip": "10.0.0.250",
      "gateway": "10.0.0.254",
      "netmask": "255.255.255.0"
    }
  ]
}
[root@core3 /opt/configs]# vmadm create -f fifo1.json
Successfully created VM f416b636-08f5-c0d6-8d65-c427fdae9aa2
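FiFo keeps its data on the delegated dataset, so it's worth a quick check that it actually exists inside the zone before installing (the UUID is the one vmadm printed above):

[root@core3 /opt/configs]# zlogin f416b636-08f5-c0d6-8d65-c427fdae9aa2 zfs list    # should include the zone's delegated /data dataset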

At this point, follow the fifo installation docs at https://docs.project-fifo.net/docs/installing-fifo

Note: I also installed LeoFS as per the instructions, with two storage nodes and replicas = 2, write = 2, read = 1, delete = 1.

RProxy to the FiFo console

On one of the gateway nodes, install nginx and configure it to proxy to the FiFo console:

[root@gw1 ~]# pkgin install nginx
<snipped>

nginx config:

server {
    listen 80;
    server_name fifo.cyberpunk.network;

    error_page 404 /404.html;
    access_log /var/log/nginx/fifo.cyberpunk.network.access.log;

    location / {
        allow a.b.c.d/32;
        allow 10.0.0.0/24;
        deny all;

        proxy_pass http://10.0.0.250:80;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Server $host;
        proxy_set_header X-Forwarded-Host $host;

        client_max_body_size 10m;
        client_body_buffer_size 128k;

        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}
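With the config in place, enable the service. On my zones the pkgsrc nginx package registers itself under svc:/pkgsrc/nginx, but check the FMRI if yours differs:

[root@gw1 ~]# svcadm enable pkgsrc/nginx
[root@gw1 ~]# svcs nginx    # confirm it came online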

Once all of that is done, you're ready to start using your new FiFo cloud!