Reaching the Billion IOP Datacenter

Traditional data storage is being stretched beyond its design boundaries. Users want more than raw capacity: they want sub-microsecond latencies, multi-terabyte instances, and hundreds of thousands of IOPS per deployment, all across multiple coexisting workloads.

Plenty of blogs and white papers from data storage companies declare these needs and ask what the next-generation datacenter will look like. People wonder what can solve our big data storage problems while offering seamless scalability. Yet amid the hype, there is one big problem most fail to recognize.

The need to approach the billion IOP datacenter.

We can easily move from kilobytes to megabytes, gigabytes, terabytes, and petabytes of storage, or from kilobits and megabits to multi-gigabits of bandwidth. But what about architecting a system that can go from kilo-IOPS to mega-IOPS to giga-IOPS?

You see, the industry has a cloud problem. Everyone wants to save time and money with fewer people and fewer resources, even though it takes both to design, write, test, deploy, and scale applications quickly. All of these applications must run side by side without impacting one another while being deployed in various frameworks across many platforms. The datacenter ends up fragmented by diverse applications, frameworks, and platforms, resulting in silos that prevent clean operations and efficient economics.

As we watch the public cloud success of providers such as Amazon Web Services, Google, and Microsoft Azure, traditional datacenter models are increasingly challenged. The only solution is a universal data infrastructure with proper data storage management to consolidate the mess.

Here, we’re addressing this problem with the following four key elements.

Elastic Data and Control Plane

So, how can you attach storage resources to hundreds or thousands of applications with an elastic control plane?

We created floating iSCSI initiator/target relationships. These allow applications to move effortlessly across storage endpoints and dissolve topological rigidity. When an application moves across racks, its storage follows and manifests its endpoints on the right rack. A migrating app can be served from many locations at the same time, spreading out its IOPS.

Next, our operational model lets you describe applications by their service needs, such as resiliency, performance, and affinity. During application deployment, storage no longer has to be handcrafted as LUNs on pre-determined arrays with pre-set RAID levels. With our model, every volume has fluid characteristics from build-up to tear-down.
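
To make that concrete, here is a minimal sketch of what such an application descriptor could look like, written as plain Python data. The field names and values are assumptions for illustration, not Datera's actual schema.

    # Illustrative only: a hypothetical intent descriptor for one application.
    # Field names are assumptions for this sketch, not Datera's real schema.
    app_intent = {
        "name": "orders-db",
        "role": "production",
        "volumes": [
            {
                "name": "data",
                "size_gb": 512,
                "service_level": {
                    "replicas": 3,          # resiliency: number of copies
                    "min_iops": 50_000,     # performance floor
                    "max_latency_ms": 1,    # latency target
                },
                "placement": {"affinity": "same-rack-as-app"},
            }
        ],
    }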

Finally, we don’t make deployment teams spend valuable time mapping out the storage system. Rather, we deliver a consolidated architecture that only requires installation.

API-Based Operations Model

We know developers want resources that are easy to consume. With Datera’s data storage management, you can deploy big data storage without getting lost in the details.

Simply describe your application’s needs (service levels) and roles (development, testing, production, etc.), and watch as Datera does the hard work for you.

You won’t be dealing with provisioning, LUN masking, ACLs, authentication hassles, or finding which ports have access to which storage.
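
As a rough illustration of that flow, the sketch below posts an application description to a management API with Python's requests library. The endpoint URL, path, and payload fields are assumptions for this example rather than Datera's documented API.

    # Hypothetical example: the endpoint, path, and fields are illustrative,
    # not Datera's documented API.
    import requests

    API = "https://datera-mgmt.example.com/v2"    # assumed management endpoint
    TOKEN = "changeme"                            # assumed auth token

    app_instance = {
        "name": "orders-db",
        "template": "production-db",   # template carrying the service levels
        "role": "production",
    }

    # One call describes the app; placement, LUN masking, ACLs, and
    # authentication plumbing are left to the control plane.
    resp = requests.post(
        f"{API}/app_instances",
        json=app_instance,
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())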

Standards-Based Protocols

So far you’ve learned Datera is scalable and easy to use. It provides multi-tenant storage for containers, bare metal, VMs, etc.

But does it support every OS? When and where will drivers be available for Linux, Windows, or BSD?

If the OS supports iSCSI, Datera supports it.

Fun fact: Datera’s engineers contributed the Linux kernel’s SCSI target subsystem (LIO), which speaks iSCSI, Fibre Channel, and a dozen or more storage protocols. This means no more hassle with proprietary drivers and no more client-side proxies.
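
Because the data path is plain iSCSI, attaching a volume from a Linux host needs nothing more exotic than the stock open-iscsi tooling. The Python sketch below simply wraps the standard iscsiadm discovery and login commands; the portal address and target IQN are placeholders, not real endpoints.

    # Attach a volume over plain iSCSI with open-iscsi's iscsiadm.
    # The portal address and target IQN below are placeholders.
    import subprocess

    PORTAL = "192.0.2.10:3260"                      # example storage endpoint
    TARGET = "iqn.2016-01.com.example:orders-db"    # example target IQN

    # Discover the targets exposed at the portal.
    subprocess.run(
        ["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL],
        check=True,
    )

    # Log in; the kernel then surfaces the target as an ordinary block device.
    subprocess.run(
        ["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"],
        check=True,
    )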

The Power of NVDIMM/NVRAM/NVMe

Now, how do we reach the infamous giga-IOPS? This level of performance is useless without solving the three challenges above.

Once we capture every application by its intent, accommodate hundreds to millions of IOPS in a single cluster, and have a control plane that configures and re-configures on the fly, we can add the final ingredient.

Powerful NVDIMM and NVMe storage media take us the rest of the way to giga-IOPS.

Datera can auto-tier and pool this storage media to scale it across the datacenter. This delivers high performance and low latency via standard protocols to any application on any platform.
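
As a purely illustrative sketch of the idea (not Datera's actual placement logic), auto-tiering amounts to matching each volume's declared latency target against the media classes available in the pool:

    # Illustrative tiering sketch only; this is not Datera's placement algorithm.
    # Volumes declare a latency target, and we pick the cheapest media tier
    # whose typical latency still meets it.
    TIERS = [                      # (name, typical latency in microseconds)
        ("nvdimm", 1),
        ("nvme", 100),
        ("sata-ssd", 500),
        ("hdd", 5000),
    ]

    def place(volume_latency_target_us: int) -> str:
        """Return the slowest (cheapest) tier that still meets the latency target."""
        for name, latency_us in reversed(TIERS):
            if latency_us <= volume_latency_target_us:
                return name
        return TIERS[0][0]         # nothing slow enough fits: use the fastest tier

    print(place(2000))   # -> "sata-ssd"
    print(place(50))     # -> "nvdimm"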

Sound impressive? Just wait until 3D XPoint arrives.

Think of Datera as the easy button for your datacenter.

At Datera, we:

  • Made it to the billion-IOPS datacenter
  • Started with disks delivering hundreds of IOPS
      • Then higher-IOPS SSDs
      • Then even higher-IOPS NVMe
  • Figured out the shared-nothing, scale-out architecture others claimed to have discovered; their mistake was proprietary drivers, while we used standard iSCSI
  • Learned how to scale and distribute the control plane
  • Scaled in a way no one else has

How did we do this?

  • Central control plane
  • No proprietary drivers
  • Auto-tiering
  • Support for iSCSI and iSER
  • For 4K random reads, 150,000 IOPS per machine at 600 MB/s (at that rate, you’d need 6,000+ machines to hit a billion IOPS; see the arithmetic sketch after this list)
  • Our all-flash array delivers 500,000 IOPS (at that rate, 2,000 such machines reach a billion IOPS)
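
Here is a quick back-of-the-envelope check on those two figures; the node labels are just shorthand for the per-machine numbers quoted above.

    # Sanity-check the per-machine figures above against the billion-IOPS goal.
    TARGET_IOPS = 1_000_000_000        # the billion-IOPS datacenter
    BLOCK_SIZE = 4 * 1024              # 4 KiB random reads

    for label, per_node_iops in [("150K-IOPS node", 150_000), ("500K-IOPS node", 500_000)]:
        nodes_needed = TARGET_IOPS / per_node_iops
        throughput_mb_s = per_node_iops * BLOCK_SIZE / 1_000_000
        print(f"{label}: ~{nodes_needed:,.0f} machines, ~{throughput_mb_s:,.0f} MB/s each")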

What is it?

  • Persistent container storage
  • Template-based application deployment for VMs and bare-metal applications
  • Provisioning storage isn’t difficult; standing up a large cluster is. Datera makes it simple

Who did it?

  • We did. Datera created the Application-Driven Cloud Data Infrastructure

Why did we do it?

  • Cloud carving: serving everything from applications needing a couple hundred IOPS to multi-million-IOPS big data storage jobs
  • Hosting providers get standard-hardware costs and can scale to customer needs without figuring out storage placement: one control plane, one API to deploy against

If you’re interested in saving money outside of AWS or mirroring your current AWS deployment, we can provide large, elastic, cost-effective, and scalable storage. Others may say they can too, but ask them about their limits.

Many data storage companies may think they have the right parts for the job, but only we know how to build data storage management systems that are the fastest and most innovative on the market.

Curious to learn more? Contact us today.
