Enable IT mode on HP Smart Array P410i RAID card on HP DL380 Gen7 servers

Terryjx
6 min read · May 23, 2020


Recently my job required me to build a testbed platform for a new project. The project needs a distributed storage backend for Kubernetes (K8s), and after some research I narrowed the candidates down to two options: 1) Ceph and 2) MooseFS.

Both distributed file systems provide a Container Storage Interface (CSI) driver for Kubernetes and support highly available data. Because many of the components I intend to use for the platform are evolving quickly, I decided to prototype it on a small cluster first. The testbed does not have to be high performance, but it should have the full set of functions and the full architecture so that as many issues as possible surface at an early stage.

With that in mind, I pulled in three rack servers: 1x HP DL380 G7 and 2x Dell R720. The good news is that all of them have multiple hard drives and fairly decent specs, e.g., at least 32GB of RAM, 8 CPU cores, and 4x 1GbE Ethernet ports, which should provide a sufficient amount of resources. The bad news is that the RAID cards in both kinds of servers do not support IT/JBOD mode, which matters a lot for Ceph and MooseFS.

Here is a little more background on the two kinds of servers.

My Dell R720 servers have identical hardware specs: 1x 480GB SSD connected to a native SATA port on the motherboard, 2x 1TB HDD connected to a Dell H710p Mini RAID card via the backplane, 2x hexa-core Xeon E5-2630 CPUs, and 64GB of RAM.

The HP DL380 G7 server has 1x 480GB SSD and 3x 300GB HDD connected to the HP P410i RAID card via the backplane, plus 2x quad-core Xeon E5630 CPUs and 32GB of RAM.

All three servers run the same OS: Ubuntu 18.04 LTS with the HWE kernel.

After some digging on the Internet, I found that it is basically impossible to configure the Dell H710p Mini for JBOD mode, and the most popular solution for the Dell R720 is to replace it with a Dell H310 Mini. So I decided to grab a pair of Dell PERC H310 Mini modules from taobao.com.

Regarding the HP server, there are quite a lot of discussions about enabling JBOD mode on the P410i card, which looked promising. Most of the successful cases require the RAID card to be running firmware version 6.64 and then applying patches on top of Linux kernel 4.19 or 5.4. So I decided to upgrade the Linux kernel from 5.3 to 5.4 on all three servers to keep the software configuration consistent. The kernel upgrade can be done as below:

# wget -c https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.4/linux-headers-5.4.0-050400_5.4.0-050400.201911242031_all.deb
# wget -c https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.4/linux-headers-5.4.0-050400-generic_5.4.0-050400.201911242031_amd64.deb
# wget -c https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.4/linux-image-unsigned-5.4.0-050400-generic_5.4.0-050400.201911242031_amd64.deb
# wget -c https://kernel.ubuntu.com/~kernel-ppa/mainline/v5.4/linux-modules-5.4.0-050400-generic_5.4.0-050400.201911242031_amd64.deb

# sudo dpkg -i *.deb
# sudo init 6
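
After the server comes back up, it is worth confirming that the new kernel is actually the one running; the exact version string below assumes the 5.4.0-050400 mainline build downloaded above:

// Verify the running kernel version
# uname -r
5.4.0-050400-generic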

The next step is to upgrade the P410i firmware to version 6.64. The Smart Array P410i is a fairly old RAID controller, but HPE still provides good firmware upgrade support. Unfortunately, I could only find the version 6.64 firmware upgrade packages for Windows and Red Hat based Linux, while my operating system is Ubuntu 18.04 LTS. Re-deploying CentOS just to upgrade the firmware of one piece of hardware would be a pretty painful process; another option would be to boot the machine into a CentOS environment (USB, LiveCD) to perform the upgrade; and an even easier path is to extract the upgrade binary from the Red Hat package and run it directly on Ubuntu. I found this article a useful reference for that path. The procedure is as below:

// Download the firmware upgrade file for Redhat based Linux
# wget https://downloads.hpe.com/pub/softlib2/software1/sc-linux-fw-array/p332076214/v110820/hp-firmware-smartarray-14ef73e580-6.64-2.i386.rpm
// Install rpm package extraction tool
# sudo apt install rpm2cpio
// Extract the binary out
# rpm2cpio hp-firmware-smartarray-14ef73e580-6.64-2.i386.rpm | cpio -idmv
// Copy the ccissflash binary to a directory on your PATH
# cd usr/lib/i386-linux-gnu/hp-firmware-smartarray-14ef73e580-6.64-2/
# sudo cp ccissflash /usr/local/bin
// Run the upgrade script (it invokes the ccissflash binary copied above)
# sudo bash hpsetup

If the firmware upgrade is successful, you should be able to reboot back into your server within about 5 minutes. To verify the firmware version that is currently running, you can use the hpsahba tool:

# cd ~
# git clone -b dkms https://github.com/artizirk/hpsahba
# cd /home/$USER/hpsahba
// Build the hpsahba tool (needs make and a C compiler, e.g. the build-essential package)
# make
// hpsahba -i /dev/sgN prints information about your RAID controller; N is the index of the
// SCSI generic device that corresponds to the controller. List the sg devices first:
# ls /dev/sg*
sg0 sg1 sg2
// The status of /dev/sg1 should look something like the output below
# sudo ./hpsahba -i /dev/sg1
VENDOR_ID='HP'
PRODUCT_ID='P410i'
BOARD_ID='0x3245103c'
SOFTWARE_NAME=''
HARDWARE_NAME=''
RUNNING_FIRM_REV='6.64'
ROM_FIRM_REV='6.63'
REC_ROM_INACTIVE_REV='6.63'
YET_MORE_CONTROLLER_FLAGS='0xfa71a216'
NVRAM_FLAGS='0x08'
HBA_MODE_SUPPORTED=0
HBA_MODE_ENABLED=0

Here we can see that RUNNING_FIRM_REV is already 6.64.
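
If you prefer a second opinion that does not rely on hpsahba, the same firmware revision also shows up in the controller's SCSI inquiry data (output trimmed to the controller entry; the full listing appears later in this post):

// The Rev field of the P410i entry should match the running firmware
# grep -A1 P410i /proc/scsi/scsi
Vendor: HP Model: P410i Rev: 6.64
Type: RAID ANSI SCSI revision: 05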

If you now try to switch the card's working mode from RAID to HBA, you will find that the RAID array disk disappears from the lsblk output and new bare disk devices are added instead, which are sdb, sdc and sdd in the output below.

# cd /home/$USER/hpsahba
# sudo ./hpsahba -E /dev/sg1
// CAUTION // // CAUTION // // CAUTION // // CAUTION //
HBA MODE CHANGE WILL DESTROY YOUR DATA!
HBA MODE CHANGE MAY DAMAGE YOUR HARDWARE!
Type uppercase "yes" to accept the risks and continue: YES
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 93.9M 1 loop /snap/core/9066
loop1 7:1 0 93.8M 1 loop /snap/core/8935
sda 8:0 0 447.1G 0 disk
├─sda1 8:1 0 1M 0 part
└─sda2 8:2 0 447.1G 0 part /
sdb 8:16 0 279.4G 0 disk
sdc 8:32 0 279.4G 0 disk
sdd 8:48 0 279.4G 0 disk
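
To double-check that these are the physical drives rather than a logical volume presented by the controller, you can ask lsblk for the drive model as well; the model string here matches the /proc/scsi/scsi listing shown near the end of this post:

// Show the model of the newly exposed disks
# lsblk -d -o NAME,MODEL,SIZE /dev/sdb /dev/sdc /dev/sdd
NAME MODEL SIZE
sdb EG0300FBDBR 279.4G
sdc EG0300FBDBR 279.4G
sdd EG0300FBDBR 279.4G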

In my case, I ran into an error when trying to change the working mode of /dev/sg0. It did not seem to affect the hard disks that are exposed at the end of this tutorial, so I simply ignored it.

To check whether this change persists across reboots, I rebooted the server:

# sudo init 6
// After the reboot, log back in to the server
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 93.9M 1 loop /snap/core/9066
loop1 7:1 0 93.8M 1 loop /snap/core/8935
sda 8:0 0 447.1G 0 disk
├─sda1 8:1 0 1M 0 part
└─sda2 8:2 0 447.1G 0 part /

The three bare-metal HDDs have disappeared from lsblk, and they can no longer be seen under /dev either:

# ls /dev/sd*
sda sda1

This is because the kernel's hpsa driver also has to be patched for HBA mode to stick; we can patch it using the DKMS module provided in the hpsahba repository.

The first step is to modify the provided Makefile so that the kernel source path KDIR matches Ubuntu 18.04 LTS, which is /lib/modules/$(KRELEASE)/build.

# cd /home/$USER/hpsahba/kernel/dkms
# vi Makefile
obj-m := hpsa.o

ifndef KERNELRELEASE
KRELEASE := $(shell uname -r)
else
KRELEASE := $(KERNELRELEASE)
endif

# KDIR := /usr/lib/modules/$(KRELEASE)/build
KDIR := /lib/modules/$(KRELEASE)/build
PWD := $(shell pwd)

default:
	$(MAKE) -C $(KDIR) M=$(PWD) modules

clean:
	$(MAKE) -C $(KDIR) M=$(PWD) clean

# find . -type f -iname '*.patch' -print0|xargs -n1 -0 patch -p 1 -i
# sudo dkms add ./
# sudo dkms install --force hpsa-dkms/1.0
# sudo modprobe -r hpsa
# sudo modprobe hpsa hpsa_use_nvram_hba_flag=1
// Use tee so the write to /etc/modprobe.d happens with root privileges (a plain sudo echo with a redirect would fail)
# echo "options hpsa hpsa_use_nvram_hba_flag=1" | sudo tee /etc/modprobe.d/hpsa.conf
# sudo update-initramfs -u
# sudo init 6
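
After the machine comes back up, it is worth first confirming that DKMS built and installed the patched hpsa module for the running kernel. The module identifier below assumes the hpsa-dkms/1.0 name used in the dkms install command above, and the exact dkms status output format varies slightly between versions:

// Confirm the DKMS module is built and installed for the running kernel
# dkms status | grep hpsa
// Confirm the hpsa driver is loaded
# lsmod | grep ^hpsa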

We can then check the status of the P410i RAID card again:

# cd /home/$USER/hpsahba
// The status of /dev/sg1 should now look something like the output below
# sudo ./hpsahba -i /dev/sg1
VENDOR_ID='HP'
PRODUCT_ID='P410i'
BOARD_ID='0x3245103c'
SOFTWARE_NAME=''
HARDWARE_NAME=''
RUNNING_FIRM_REV='6.64'
ROM_FIRM_REV='6.63'
REC_ROM_INACTIVE_REV='6.63'
YET_MORE_CONTROLLER_FLAGS='0xfa71a216'
NVRAM_FLAGS='0x08'
HBA_MODE_SUPPORTED=1
HBA_MODE_ENABLED=1
# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
loop0 7:0 0 93.9M 1 loop /snap/core/9066
loop1 7:1 0 93.8M 1 loop /snap/core/8935
sda 8:0 0 447.1G 0 disk
├─sda1 8:1 0 1M 0 part
└─sda2 8:2 0 447.1G 0 part /
sdb 8:16 0 279.4G 0 disk
sdc 8:32 0 279.4G 0 disk
sdd 8:48 0 279.4G 0 disk

You can also check /proc/scsi/scsi:

# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: INTEL SSDSC2BW48 Rev: RG21
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 00 Lun: 00
Vendor: HP Model: P410i Rev: 6.64
Type: RAID ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 01 Lun: 00
Vendor: HP Model: EG0300FBDBR Rev: HPD6
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 02 Lun: 00
Vendor: HP Model: EG0300FBDBR Rev: HPD6
Type: Direct-Access ANSI SCSI revision: 05
Host: scsi2 Channel: 00 Id: 03 Lun: 00
Vendor: HP Model: EG0300FBDBR Rev: HPD6
Type: Direct-Access ANSI SCSI revision: 05

At this point, the server is ready for deployment, with its hard disks directly exposed to the distributed file system.
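
As an optional final sanity check before handing the disks over to Ceph or MooseFS, I like to make sure they carry no leftover filesystem or RAID signatures. A minimal sketch with wipefs, which only lists signatures unless you pass -a (and -a is destructive, so only run it on disks you are sure can be wiped):

// List any leftover signatures on a disk (read-only)
# sudo wipefs /dev/sdb
// Erase all signatures on the disk (destructive!)
# sudo wipefs -a /dev/sdb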

I will continue with the remaining parts of this build in my next few write-ups.
