Recently, there has been a trend away from Storage Area Network or SAN storage and toward direct-attached storage or DAS. Many are predicting the death of SANs. Here, I take a contrarian view. This blog argues that we will see a resurgence of SANs, but based on newer technologies. Furthermore, such shared SAN storage will more often be sold as an integrated system along with servers and networks, and less often as separate stand-alone storage.
1. The SAN Era 1994–2012
A storage area network (SAN) is a network that provides access from multiple servers in a data center to shared block storage, such as a disk array. SANs emerged in 1994 and became popular when 1 Gbps FC (Fibre Channel) became available in 1997. Direct-Attached Storage (DAS) was an alternative available at the time, but it did not support storage sharing, leading to poor storage utilization.
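The utilization argument is easy to see with rough numbers. The sketch below is illustrative only; the per-server demand figures, disk sizes, and headroom factor are assumptions, not measurements:

```python
# Why shared storage raised utilization: with DAS, free space on one
# server cannot serve another server's growth. All figures below are
# illustrative assumptions.

servers_used_tb = [2, 8, 5, 1]   # assumed per-server storage demand, TB
das_disk_tb = 10                 # assumed fixed DAS capacity per server

# DAS: every server must be provisioned for its own peak, so the
# stranded free space on lightly used servers is wasted.
das_provisioned = das_disk_tb * len(servers_used_tb)
das_utilization = sum(servers_used_tb) / das_provisioned

# SAN: one shared pool is sized to aggregate demand plus headroom.
san_provisioned = sum(servers_used_tb) * 1.25   # assumed 25% headroom
san_utilization = sum(servers_used_tb) / san_provisioned

print(f"DAS utilization: {das_utilization:.0%}")   # 40%
print(f"SAN utilization: {san_utilization:.0%}")   # 80%
```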
SANs allowed for better storage utilization, higher reliability, high performance, and simplified management through a common set of storage functions available to all servers in a data center.
The performance of SANs continued to improve over time, and costs dropped to levels that allowed wider adoption across both enterprise and SMB environments. New capabilities such as snapshots, compression, deduplication and remote copy were introduced rapidly and were often unavailable for DAS storage. As a result, shared SAN storage became very popular at the expense of DAS, and this remained the state of affairs until about 2012.
2. The DAS Era 2012–2016
In the last few years, however, the pendulum has started swinging back in favor of DAS due to the following four problems with SANs.
1. SANs are too slow for use with Solid-State Drives (SSDs). The network protocol overhead for accessing fast storage devices (SSDs) over a SAN network dominates the total response time, making SAN latencies 2X to 3X worse than DAS latencies.
2. SANs are expensive. Storage devices in SAN arrays are priced significantly higher than DAS storage devices.
3. SANs are proprietary. Storage arrays used in SANs are generally proprietary appliances that use purpose-built hardware coupled with proprietary storage array software.
4. SANs are complex to manage. SANs based on the FC standard require unique host bus adapters (HBAs), unique switches, unique cables, and new concepts like zoning and masking. Additional complexity springs from (a) the disconnect between application concepts (VMs) and storage concepts (LUNs) and (b) the separate management of compute and storage.
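The first problem, latency, is worth a back-of-the-envelope sketch. The device and fabric latencies below are illustrative assumptions, not measurements, but they show why a fixed network overhead that was invisible with HDDs came to dominate with SSDs:

```python
# Back-of-the-envelope latency model. All figures are illustrative
# assumptions, not measurements.

hdd_read_us = 5000     # assumed HDD seek + read latency, microseconds
ssd_read_us = 100      # assumed NAND SSD read latency, microseconds
san_overhead_us = 150  # assumed SCSI-over-FC stack + fabric round trip

def slowdown(device_us, network_us):
    """Remote (SAN) access time relative to local (DAS) access time."""
    return (device_us + network_us) / device_us

# With HDDs, the fabric overhead disappears in the noise:
print(f"HDD over SAN: {slowdown(hdd_read_us, san_overhead_us):.2f}x DAS")
# With SSDs, the same fixed overhead dominates the response time:
print(f"SSD over SAN: {slowdown(ssd_read_us, san_overhead_us):.2f}x DAS")
```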
As a result, new virtual SAN or vSAN software emerged that eliminates SANs and provides equivalent sharing capability by pooling together DAS storage across multiple servers.
Hyperconverged or HCI systems use clusters of compute servers with DAS storage, eliminate SANs, and use vSAN software to achieve equivalent functionality. They provide integrated management of servers and storage. Examples of HCI systems include Nutanix and SimpliVity.
3. New Developments
Four recent developments point the way forward.
3.1 NVMe over Fabrics (NVMeoF)
Small Computer System Interface (SCSI) became a standard for connecting hosts to storage in 1986, when hard disk drives (HDDs) and tape were the primary storage media. NVMe, a newer alternative to SCSI with much lower CPU overhead, is designed for faster media such as SSDs.
NVMe was originally designed for connecting to local storage over a computer’s PCIe bus. NVMe over Fabrics or NVMeoF (spec released June 2016) enables the use of alternate transports to connect to remote storage. It enables running the NVMe protocol over any low-latency interconnect, including high-speed Ethernet. With NVMeoF, the added latency for remote storage becomes negligible, effectively erasing the performance difference between DAS and SAN.
SAN networks have traditionally used SCSI over FC transport. Future SANs will use the NVMeoF protocol over RDMA transports.
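Revisiting the earlier latency sketch shows why this matters. The overhead figures below are illustrative assumptions, not measurements; the point is that shrinking the fabric overhead by an order of magnitude takes the SAN penalty from a multiple down to noise:

```python
# Back-of-the-envelope latency model, comparing the legacy SCSI/FC
# fabric with NVMeoF over RDMA. All figures are illustrative
# assumptions, not measurements.

ssd_read_us = 100             # assumed NAND SSD read latency, microseconds
scsi_fc_overhead_us = 150     # assumed SCSI-over-FC stack + fabric round trip
nvmeof_rdma_overhead_us = 10  # assumed NVMeoF-over-RDMA added latency

def slowdown(device_us, network_us):
    """Remote (SAN) access time relative to local (DAS) access time."""
    return (device_us + network_us) / device_us

print(f"SSD over SCSI/FC SAN: {slowdown(ssd_read_us, scsi_fc_overhead_us):.2f}x DAS")
print(f"SSD over NVMeoF SAN:  {slowdown(ssd_read_us, nvmeof_rdma_overhead_us):.2f}x DAS")
```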
3.2 Storage Controllers using Standard x86 Servers
In the past, SAN-attached shared block storage controllers were proprietary, built using special motherboards and sometimes even special ASICs, as in the case of 3Par.
Increasingly, SAN storage controllers are built using standard x86 servers and are competitive in cost, performance and scalability with controllers built on proprietary hardware. This is called software-defined storage or SDS. Both scale-out storage controllers (Hedvig, Formation Data Systems) and dual-controller RAID arrays (Tegile, Tintri) can be built by leveraging standard x86 server hardware.
3.3 App-centric SAN Storage Management
SAN storage management has traditionally been storage-centric and non-intuitive to admins managing servers and apps. This is changing. New SAN vendors (e.g. Tintri) have shown it is now possible to provide intuitive, app-centric or VM-centric storage management.
3.4 Integrated Server and Storage Systems
Traditionally, SAN storage was purchased separately from compute and networking. More recently, there has been a shift to integrated systems that are easier to use and deploy. Converged infrastructure or CI systems bundle existing server, storage and networking products, together with management software, in a single offering. Hyperconverged or HCI systems use clusters of compute servers with DAS storage, eliminate SANs, and use vSAN software as discussed previously.
In my view, this convergence trend will continue. Customers will increasingly buy storage as part of an integrated or converged system, and stand-alone storage purchases will become less common.
Future shared SAN storage will
· be built with standard server hardware using SDS software
· provide intuitive application-centric management
· deliver high performance using the NVMeoF protocol, and
· be delivered as part of an integrated or converged system
4. Next-Gen Storage
We predict that shared SAN storage will become popular again, based on NVMeoF over RDMA Ethernet and built using SDS on standard x86 servers, eliminating the four SAN issues of the past.
1. SAN access times were 2X to 3X worse than DAS access times. This is no longer true with NVMeoF.
2. SAN storage devices were more expensive than DAS storage devices. This is no longer true with x86 server-based controllers.
3. SAN storage is proprietary. SAN storage built on standard x86 servers with SDS software is no longer proprietary.
4. SANs are complex to manage. Using standard x86 servers for storage controllers, virtual networking in place of zoning and masking, and intuitive app-centric storage management can eliminate SAN complexity. NVMeoF enables the use of high-speed Ethernet with RDMA and eliminates the need for a separate FC fabric.
Furthermore, such shared storage will be combined with servers and Ethernet switches in an integrated or converged system to further simplify deployment and management for the customer.
To differentiate them from hyperconverged systems, which are not SAN-based, we call such systems superconverged systems.
We reviewed the historical SAN weaknesses that opened the door for DAS and HCI to emerge. With newer technologies like NVMeoF and software-defined storage, we showed that the SAN storage of the future will overcome these deficiencies.
We expect that rather than be sold as separate stand-alone storage, future SAN storage is more likely to be integrated with servers and Ethernet switches and delivered as a new kind of integrated system that we call superconverged.
To net it out, I am predicting a resurgence in SANs, based on the new NVMeoF protocol, and sold as an integrated system.