Software Defined Storage (SDS) vs Traditional SAN
This week I set aside a couple of hours to reflect on almost everything I know about the Software Defined Storage (SDS) paradigm. As a result of these reflections, I have identified the following main advantages of SDS over the old traditional SAN at the current moment:
- Deep SLA Atomic Granularity. Because SDS completely separates the logical storage service from its hardware, we can manage stored elements at a much finer granularity: features such as rebalancing, migration, replication, deduplication, and snapshotting can be configured on a per-unit basis to give each unit its own SLA. For example, instead of tuning a whole LUN or volume aggregate, we can create a unique SLA policy for a particular virtual disk. This also makes data portable, eliminating planned and unplanned downtime on a per-VM or per-virtual-disk basis across different physical disks, nodes, racks, rooms, and so on.
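To make the idea concrete, here is a minimal sketch of what per-virtual-disk SLA policies could look like. Everything in it is hypothetical: the policy fields, the disk names, and the `describe()` helper are illustrations of the concept, not the API of any real SDS product.

```python
# Hypothetical per-virtual-disk SLA policies. Field names and values are
# assumptions for illustration, not a real SDS control-plane API.

from dataclasses import dataclass

@dataclass
class SlaPolicy:
    replicas: int          # number of copies kept across failure domains
    failure_domain: str    # "disk", "node", "rack", or "room"
    snapshot_hours: int    # snapshot interval, in hours
    dedup: bool            # deduplication toggled for this disk only

# Instead of one policy per LUN or volume, each virtual disk gets its own:
policies = {
    "db-server/disk0":  SlaPolicy(replicas=3, failure_domain="rack",
                                  snapshot_hours=1, dedup=False),
    "web-server/disk0": SlaPolicy(replicas=2, failure_domain="node",
                                  snapshot_hours=24, dedup=True),
}

def describe(vdisk: str) -> str:
    """Render one virtual disk's SLA in a human-readable line."""
    p = policies[vdisk]
    return (f"{vdisk}: {p.replicas} replicas spread across {p.failure_domain}s, "
            f"snapshot every {p.snapshot_hours}h, "
            f"dedup={'on' if p.dedup else 'off'}")

for vdisk in policies:
    print(describe(vdisk))
```

The point is that the database disk and the web-server disk live in the same cluster yet carry completely different protection and efficiency settings.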
- Pay As You Grow model. Because SDS has no central elements, there are no central bottlenecks or single points of failure: horizontal scalability is theoretically unlimited, and vertical scalability is bounded only by the limits of an individual server. And since we can scale (or modernize) the storage cluster on a per-drive basis, investments can be spread very evenly as we build out the storage platform. This is essentially a Pay As You Grow model ☺
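A quick back-of-the-envelope sketch shows how the spending curve differs. All prices, capacities, and growth rates below are made-up assumptions purely to illustrate the shape of the two curves.

```python
# Pay As You Grow vs. up-front SAN sizing. All numbers are invented
# for illustration; substitute your own costs and growth rate.

import math

SAN_UPFRONT = 100_000        # SAN sized for 3 years of growth, paid on day one
DRIVE_COST = 400             # one commodity drive
DRIVE_TB = 4                 # capacity per drive, in TB
NEED_TB_PER_QUARTER = 20     # how fast demand actually grows

def sds_spend_by_quarter(quarters: int) -> list[int]:
    """Cumulative spend when drives are bought only as capacity is needed."""
    spend, total = [], 0
    for q in range(1, quarters + 1):
        drives_needed = math.ceil(q * NEED_TB_PER_QUARTER / DRIVE_TB)
        drives_owned = math.ceil((q - 1) * NEED_TB_PER_QUARTER / DRIVE_TB)
        total += (drives_needed - drives_owned) * DRIVE_COST
        spend.append(total)
    return spend

spend = sds_spend_by_quarter(12)
print("SAN, day one:", SAN_UPFRONT)
print("SDS, cumulative per quarter:", spend)
```

Under these assumptions the SDS spend tracks demand in small, even steps instead of one large up-front outlay, which is exactly the evenness of investment described above.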
- Very low OPEX. In the traditional paradigm, you have to plan the whole lifecycle of the future storage system. During the planning stage, you need to understand how much disk space it should provide and how many I/O operations it should serve, while also taking into account how it will be operated and how it will scale. You should also consider whether implementing new functions (or disabling extra ones) will require planned downtime, how to implement current or future disaster replication, and so on. Once you understand the storage architecture for this lifecycle, you must invest in building a SAN or modifying your existing one. So if you decide to present expanded capacity to a new compute server, you will need not only to add new disks to the storage and create a new LUN, but also to perform separate fabric zoning and install a multipathing driver. The software defined paradigm offers a completely new approach to operations: no RAID-related calculations, no SAN setup, no zone creation, no special cabling or special switch configuration. For SDS, disks, nodes, and rooms are all suitable replication locations. You can easily add new disks to a node, a node to a rack, or a rack of nodes to the system without any downtime. Rebalancing, migration, new replication, and so on can simply be programmed, because this storage is already a program.
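"This storage is already a program" can be shown with a toy placement function. The sketch below is deliberately simplified (a modulo hash; production systems use consistent hashing or CRUSH-style maps precisely to minimize how much data moves), but it captures the operational point: adding a node is one line of code, and rebalancing is just re-running the placement function, with no zoning, cabling, or downtime.

```python
# Toy data-placement sketch: objects hash to nodes, and "rebalancing"
# means re-running the placement function over a new node list. The
# modulo scheme is a simplification; real SDS systems use consistent
# hashing or CRUSH-like maps to move far less data per change.

import hashlib

def place(obj: str, nodes: list[str]) -> str:
    """Deterministically map an object to one node in the cluster."""
    h = int(hashlib.sha256(obj.encode()).hexdigest(), 16)
    return nodes[h % len(nodes)]

objects = [f"chunk-{i}" for i in range(1000)]
nodes = ["node-a", "node-b", "node-c"]

before = {o: place(o, nodes) for o in objects}

# Expanding the cluster is a program change, not a hardware project:
nodes.append("node-d")
after = {o: place(o, nodes) for o in objects}

moved = sum(1 for o in objects if before[o] != after[o])
print(f"{moved} of {len(objects)} chunks rebalance onto the new layout")
```

Because placement is pure code, the same mechanism can drive migration, re-replication after a failure, or draining a node before maintenance.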
- Investment Protection. In SDS, as with any other Software Defined technology, it is mostly the software that becomes obsolete. The hardware consists of simple commodity elements such as disk drives, memory, CPUs, and so on; all the magic happens in the logical subsystems, entirely in the software layer. Thus, to get any newly developed feature, you only need to update the software layer. This is great protection for your investment.
If you, my dear reader, see any other obvious distinctions, please leave a comment here or drop me an email; I would very much appreciate it.
Follow Igor Nemylostyvyi on Twitter