Unity Architecture — Part II

Azhagarasu A
SAN Enthusiast
7 min read · Feb 26, 2017

Is Unity really a re-branded VNX2/VNX? Well, let’s find out. In this post, we will take a closer look at the functions of Unity OE. This post is part II of the Unity Architecture series; if you have not read part I, click here to read it.

Unity OE

The operating system that runs on Unity hardware is called Unity OE (Operating Environment). Unity OE is based on SUSE Linux Enterprise Server (SLES). Unity provides block and file access to hosts and clients. Unity is an asymmetric active-active array and is ALUA-aware.

Multicore Cache

Each SP has a fixed amount of cache. Older storage systems usually divided this cache into read cache and write cache, and most of the time the split was a static partition that did not change according to the I/O being served. Unity’s Multicore Cache is dynamic: the amount of read and write cache is adjusted continuously according to the read and write operations. The main aim of this approach is to minimize forced flushing when the high watermark level of the cache is reached. An additional layer of SSD cache can be added to a hybrid pool by leveraging FAST Cache technology.
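As a thought experiment, the dynamic split can be modeled in a few lines of Python. This is a conceptual sketch only, not Unity’s actual algorithm: the even starting split, the watermark value, and the rebalance rule are all illustrative assumptions.

```python
# A conceptual model of a dynamically partitioned cache. NOT Unity's
# actual algorithm; the starting split, watermark value, and rebalance
# rule are illustrative assumptions.

class MulticoreCacheModel:
    def __init__(self, total_mb, high_watermark=0.8):
        self.total_mb = total_mb
        self.read_mb = total_mb // 2         # start with an even split
        self.write_mb = total_mb - self.read_mb
        self.dirty_mb = 0                    # written data not yet flushed
        self.high_watermark = high_watermark

    def rebalance(self, read_ratio):
        """Shift the read/write split toward the dominant workload."""
        self.read_mb = int(self.total_mb * read_ratio)
        self.write_mb = self.total_mb - self.read_mb

    def on_write(self, mb):
        self.dirty_mb += mb
        # Destage before the high watermark is hit, so the array
        # avoids forced flushing under host I/O.
        if self.dirty_mb > self.write_mb * self.high_watermark:
            self.flush()

    def flush(self):
        self.dirty_mb = 0                    # destage dirty data to the pool

cache = MulticoreCacheModel(total_mb=65536)
cache.rebalance(read_ratio=0.3)   # write-heavy workload: grow the write cache
cache.on_write(2048)
```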

Unity Block and File Storage

Unity provides block access to hosts using the FC and iSCSI protocols. Without any special hardware, it also provides file access via virtual NAS Servers created in Unity OE. The most fundamental building block of Unity storage is the unified storage pool: Unity allows all types of storage resources, such as block LUNs, VVols, and NAS file systems, to be placed in the same pool. The following diagram shows various storage resources residing in the same storage pool.

Unity storage pools
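To make the “one pool, many resource types” idea concrete, here is a minimal sketch. The class, fields, and resource names are invented for illustration; this is not Unity’s object model.

```python
# Minimal model of a unified pool hosting mixed resource types.
# Names and structure are invented for illustration.

from dataclasses import dataclass, field

@dataclass
class UnifiedPool:
    name: str
    resources: list = field(default_factory=list)

    def provision(self, kind, name, size_gb):
        # Block LUNs, VVols, and NAS file systems all draw capacity
        # from the same pool.
        assert kind in ("lun", "vvol", "nas_fs")
        self.resources.append((kind, name, size_gb))

pool = UnifiedPool("pool_0")
pool.provision("lun", "db_lun_1", 500)       # block, via FC/iSCSI
pool.provision("vvol", "vm_vvol_1", 100)     # VMware Virtual Volume
pool.provision("nas_fs", "home_dirs", 2048)  # file, via a NAS Server
```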

Storage Pools

The disks residing in the DPE and DAEs can be grouped together to form storage pools. A pool can contain up to three tiers:

  • Extreme performance tier (SSD)
  • Performance tier (SAS)
  • Capacity tier (NL-SAS)

RAID protection is applied at the tier level, not at the pool level. In a Unity All-Flash system, the pool contains only the extreme performance tier; such pools are called All-Flash pools. In hybrid systems, it is possible to create an all-flash pool with only SSDs and later expand the pool with SAS or NL-SAS disks. Each tier in the pool can have a different RAID level: for example, the extreme performance tier can use RAID 1/0, the performance tier RAID 5, and the capacity tier RAID 6. However, the same tier cannot mix RAID types, and a drive that is part of one storage pool cannot be part of another.
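These rules, RAID applied per tier, one RAID type per tier, and no drive shared between pools, can be captured in a small sketch (illustrative only; the class names and structure are assumptions):

```python
# Sketch of the per-tier RAID rules described above (illustrative only).

TIER_DRIVE_TYPES = {
    "extreme_performance": "SSD",
    "performance": "SAS",
    "capacity": "NL-SAS",
}

class Pool:
    _claimed_drives = set()      # a drive may belong to only one pool

    def __init__(self, name):
        self.name = name
        self.tiers = {}          # tier -> (raid_level, [drives])

    def add_tier(self, tier, raid_level, drives):
        assert tier in TIER_DRIVE_TYPES
        # RAID is applied per tier, and a tier has exactly one RAID type.
        assert tier not in self.tiers or self.tiers[tier][0] == raid_level
        # A drive already claimed by any pool cannot be reused.
        assert not set(drives) & Pool._claimed_drives
        Pool._claimed_drives.update(drives)
        self.tiers.setdefault(tier, (raid_level, []))[1].extend(drives)

pool_a = Pool("pool_a")
pool_a.add_tier("extreme_performance", "RAID 1/0", ["d0", "d1", "d2", "d3"])
pool_a.add_tier("capacity", "RAID 6", ["d4", "d5", "d6", "d7", "d8", "d9"])
# Pool("pool_b").add_tier("capacity", "RAID 6", ["d4"])  # fails: drive reuse
```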

FAST VP

FAST stands for Fully Automated Storage Tiering. Configuring pools is the first step in provisioning storage on Unity. Unity uses FAST VP (Fully Automated Storage Tiering for Virtual Pools) algorithms to move hot data to SSDs and cold data to NL-SAS. The policy can be adjusted to business needs via Unisphere. The available tiering policies are:

  • Highest Available Tier
  • Auto-Tier
  • Start High then Auto-Tier (Default/Recommended)
  • Lowest Available Tier

When FAST VP is enabled, data is spread across the pool in 256 MB slices. FAST VP is enabled individually per storage resource, such as a LUN or datastore.
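Since slices are a fixed 256 MB, the slice count for a resource is simple arithmetic, as the snippet below illustrates:

```python
# Illustrative arithmetic only: FAST VP tracks and relocates data in
# fixed 256 MB slices, so a resource's slice count is its size / 256 MB.

SLICE_MB = 256

def slice_count(resource_gb):
    return (resource_gb * 1024 + SLICE_MB - 1) // SLICE_MB   # round up

print(slice_count(500))   # a 500 GB LUN spans 2,000 slices
```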

Highest Available Tier

With the Highest Available Tier policy, new data slices are provisioned from the extreme performance tier; only when that tier fills up are new slices provisioned from the next tier. We should therefore expect superior performance and low latency, as all data is kept in the extreme performance tier for as long as it has space. During the next relocation schedule, the system tries to place the resource’s slices in the highest tier only, and its hot slices take precedence over slices with any other tiering policy. (Hot denotes frequently accessed slices; cold denotes infrequently accessed ones.)

Auto-Tier

The Auto-Tier policy is very similar to the Highest Available Tier policy, with two primary differences:

  • During relocation, a slice with the Highest Available Tier policy takes precedence over a slice with the Auto-Tier policy, even if the Auto-Tier slice is more active.
  • When a new storage resource with the Auto-Tier policy is created, slices are allocated from all tiers depending on how much free capacity each tier has. If the capacity tier has the most free capacity, the initial slices are allocated from it.

Start High then Auto-Tier

Dell EMC recommends this policy for storage resources, and it is the default. When new slices are allocated, this policy allocates them from the highest performance tier; later, during the relocation schedule, slices that are not frequently accessed are moved down. We can expect good initial performance from a storage resource with this policy, with cold slices moved down later for effective capacity utilization. It works exactly like the Auto-Tier policy; the only difference is that the initial allocation of slices comes from the highest performance tier.

Lowest Available Tier

All slices of a storage resource always reside in the lowest tier, i.e., the capacity tier. If the lowest tier is full, all slices with this policy are compared and the ones with the lowest activity stay in the lowest tier.

The following table is an excerpt from the “EMC Unity: FAST Technology Overview” white paper and summarizes each tiering policy; a short code sketch restating its rules follows the table.

  • Highest Available Tier (initial placement: highest available tier): initial data placement and subsequent relocations are set to the highest-performing tier of drives with available space.
  • Auto-Tier (initial placement: optimized for pool performance): initial data placement optimizes pool capacity, then slices are relocated across tiers based on their activity levels.
  • Start High then Auto-Tier, the default (initial placement: highest available tier): initial data is placed on slices from the highest tier with available space, then relocated based on performance statistics and slice activity.
  • Lowest Available Tier (initial placement: lowest available tier): initial data placement and subsequent relocations prefer the lowest tier with available space.
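The placement rules in the table can be restated compactly as code. This is a simplification for illustration; the tier names and the free-capacity heuristic are assumptions, not the actual FAST VP implementation.

```python
# Simplified restatement of the table: where a new slice is placed
# under each policy. Assumptions for illustration, not FAST VP code.

TIERS = ["extreme_performance", "performance", "capacity"]   # high -> low

def initial_tier(policy, free_mb):
    """free_mb maps tier name -> free capacity in MB."""
    if policy in ("highest_available", "start_high_then_auto"):
        # Highest tier that still has space.
        return next(t for t in TIERS if free_mb[t] > 0)
    if policy == "auto":
        # Optimized for pool capacity: draw from the tier with the
        # most free space.
        return max(TIERS, key=lambda t: free_mb[t])
    if policy == "lowest_available":
        # Lowest tier that still has space.
        return next(t for t in reversed(TIERS) if free_mb[t] > 0)
    raise ValueError(policy)

free = {"extreme_performance": 1024, "performance": 8192, "capacity": 65536}
print(initial_tier("start_high_then_auto", free))   # extreme_performance
print(initial_tier("auto", free))                   # capacity
```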

The following picture shows hot and cold slices before and after relocation.

Image source: emc.com

Expanding Storage Pool with additional drives

While expanding a pool with additional drives, it is not mandatory to use the same stripe width. For example, if an existing tier is configured as RAID 5 (4+1), meaning the tier is built from sets of 5 drives, we can expand it in sets of RAID 5 (4+1), RAID 5 (8+1), or RAID 5 (12+1). In VNX2, it was a best practice to maintain the original drive count when expanding, i.e., sets of 5 in our example. This is no longer the case: Unity allows expanding a tier in a pool with any supported stripe width.
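A quick worked example of the drive counts involved, using the stripe widths from the example above:

```python
# Worked drive-count example for the expansion rule above. The list of
# supported widths is taken from the example in the text.

SUPPORTED_R5_WIDTHS = [(4, 1), (8, 1), (12, 1)]   # (data, parity) drives

def drives_per_set(data, parity):
    return data + parity

# Existing tier: two RAID 5 (4+1) sets = 10 drives.
print("existing tier:", 2 * drives_per_set(4, 1), "drives")

# Unity accepts any supported width for the expansion, not just 4+1.
for data, parity in SUPPORTED_R5_WIDTHS:
    print(f"valid expansion: RAID 5 ({data}+{parity}),",
          drives_per_set(data, parity), "drives per set")
```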

System Drives

Unity OE occupies the first 4 disks in the DPE; these drives are called system drives. Unity OE does not occupy the entire capacity of each drive, only about 107 GB. The system drives are allowed to take part in storage pools alongside non-system drives.
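A quick back-of-the-envelope for the reservation (the drive sizes below are arbitrary examples):

```python
# Usable capacity left on a system drive after the ~107 GB reservation.
# Drive sizes are arbitrary examples.

SYSTEM_RESERVED_GB = 107

for raw_gb in (400, 800, 1600):
    usable = raw_gb - SYSTEM_RESERVED_GB
    print(f"{raw_gb} GB system drive -> ~{usable} GB usable in a pool")
```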

FAST Cache

FAST Cache technology extends the existing cache in the storage processors by utilizing high-speed SSDs. FAST Cache is applicable only to hybrid pools. Frequently accessed data (in 64 KB chunks) on SAS and NL-SAS drives is copied to the cache tier. The data is only copied, not moved; it still exists on the drives. The algorithm copies not only the frequently accessed data but also data that is likely to be read next. After the copy, the FAST Cache memory map is updated.
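The promotion flow can be sketched roughly as follows. The hit threshold and the helper function are invented for illustration; Unity’s real promotion logic is more sophisticated.

```python
# Conceptual sketch of FAST Cache promotion. Hot 64 KB chunks are
# copied (not moved) from SAS/NL-SAS drives to SSD, and the memory map
# is updated. The threshold and helper are illustrative, not Unity's.

CHUNK_KB = 64                     # FAST Cache tracks 64 KB chunks
PROMOTE_AFTER_HITS = 3            # illustrative threshold

hit_counts = {}                   # chunk id -> backend access count
fast_cache_map = {}               # chunk id -> location on SSD

def copy_chunk_to_ssd(chunk_id):
    return f"ssd:{chunk_id}"      # stand-in for the actual copy

def on_backend_read(chunk_id):
    hit_counts[chunk_id] = hit_counts.get(chunk_id, 0) + 1
    if (hit_counts[chunk_id] >= PROMOTE_AFTER_HITS
            and chunk_id not in fast_cache_map):
        # The chunk is copied to SSD; the original stays on the HDD.
        fast_cache_map[chunk_id] = copy_chunk_to_ssd(chunk_id)
```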

How is a read operation performed in Unity?

For an incoming I/O, the system cache (DRAM) is checked first. If the data resides in the system cache, it is sent to the host, completing the I/O request. If a cache miss occurs, i.e., the data is not present in the system cache, the FAST Cache memory map is checked (FAST Cache must be enabled and configured for this map to exist). If the data is found there, it is read from FAST Cache and sent to the host, which improves system throughput and reduces response time. If both the system cache and FAST Cache miss, the data is read from the drives, copied into the system cache, and then sent to the host.
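That lookup order can be summarized in a short sketch. Conceptual only: plain dictionaries stand in for the system cache, the FAST Cache map, and the pool drives.

```python
# The read path above as a lookup chain (conceptual only).

def read(block, dram_cache, fast_cache, drives):
    if block in dram_cache:           # 1. system cache (DRAM) hit
        return dram_cache[block]
    if block in fast_cache:           # 2. FAST Cache hit (if configured)
        return fast_cache[block]
    data = drives[block]              # 3. miss everywhere: read the drives
    dram_cache[block] = data          #    copy into system cache
    return data                       #    then complete the host I/O
```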

How is a write operation performed in Unity?

For every write request, the data is first written to the system cache and an acknowledgment is sent to the host; the data is written to the pool later, during flushing. If for some reason the system write cache is disabled, the data is written to FAST Cache (if present) and then to the pool.
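And the corresponding write path, again as a conceptual sketch rather than Unity’s actual code:

```python
# The write path above (conceptual only). The host is acknowledged as
# soon as the data is in system cache; destaging happens later.

def write(block, data, write_cache_enabled, dram_cache, fast_cache, pool):
    if write_cache_enabled:
        dram_cache[block] = data      # ack the host here; flush to pool later
        return "ack"
    # With the system write cache disabled, data goes through FAST
    # Cache (if present) and then to the pool before the ack.
    if fast_cache is not None:
        fast_cache[block] = data
    pool[block] = data
    return "ack"
```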

Inline Compression

Unity offers compression for block LUNs and VMware VMFS datastores; the feature was added in the Unity OE 4.1 release. Compression can be enabled on a storage resource at creation time or later, but only for resources in all-flash pools. When a host sends data, it is first placed in the system cache and an acknowledgment is sent to the host; compression happens inline, between the system cache and the all-flash pool. Compression is not available for VVols or file storage resources.
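A rough sketch of the inline step, with zlib standing in for whatever algorithm Unity actually uses (not specified here):

```python
# Sketch of inline compression during destage: data is compressed
# between the system cache and the all-flash pool. zlib is a stand-in
# for Unity's actual (unspecified) algorithm.

import zlib

def destage(dirty_pages, pool, compression_enabled):
    for lba, data in dirty_pages.items():
        payload = zlib.compress(data) if compression_enabled else data
        pool[lba] = payload           # only the compressed form hits the pool

pool = {}
destage({0: b"A" * 8192}, pool, compression_enabled=True)
print(len(pool[0]))                   # far fewer than 8192 bytes
```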

Management tools of Unity

Unity can be managed via Unisphere (an HTML5 GUI), Unisphere CLI, and a REST API. These are management interfaces only; there is no data access over REST (such as S3).
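As a small example of the REST API, the snippet below lists pools. The endpoint and the X-EMC-REST-CLIENT header follow Dell EMC’s published Unity REST documentation, but verify the field names against your OE version; the host and credentials are placeholders.

```python
# Listing pools over the Unity REST API (management only).

import requests

UNITY = "https://unity.example.com"           # placeholder management host

resp = requests.get(
    f"{UNITY}/api/types/pool/instances",
    params={"fields": "name,sizeTotal,sizeFree"},
    headers={"X-EMC-REST-CLIENT": "true"},    # required on every request
    auth=("admin", "password"),               # placeholder credentials
    verify=False,                             # lab only; validate certs in production
)
for entry in resp.json()["entries"]:
    print(entry["content"])
```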

Replication, Snapshots and other protection

Unity supports synchronous and asynchronous replication of block storage resources such as LUNs and VMware datastores; file storage resources can be protected by asynchronous replication. Unity also supports snapshots natively, covering block LUNs, VMFS datastores, file systems, and VMware NFS datastores. Data At Rest Encryption can be enabled on Unity; when enabled, all data on the array is encrypted. Unity also integrates with other Dell EMC products, such as RecoverPoint for DVR-like recovery.

This brings us to the end of the post. What we discussed is a high-level overview of Unity OE and its functions. I hope you found it helpful. Deep-dive posts on Unity features will be published soon.

Disclosure: I work for Dell EMC and this is not a promoted post.

Originally published at SAN Enthusiast.
