<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by The Qumulo Team on Medium]]></title>
        <description><![CDATA[Stories by The Qumulo Team on Medium]]></description>
        <link>https://medium.com/@kwhitman_30901?source=rss-c786732c58ae------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*Qz72ANeYieB0L9PftSyHrA.png</url>
            <title>Stories by The Qumulo Team on Medium</title>
            <link>https://medium.com/@kwhitman_30901?source=rss-c786732c58ae------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Sun, 17 May 2026 10:12:43 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@kwhitman_30901/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Data storage and hybrid cloud computing: The era of scale-across is here]]></title>
            <link>https://medium.com/qumulo/data-storage-and-hybrid-cloud-computing-the-era-of-scale-across-is-here-be8d0766c26c?source=rss-c786732c58ae------2</link>
            <guid isPermaLink="false">https://medium.com/p/be8d0766c26c</guid>
            <category><![CDATA[data-storage]]></category>
            <category><![CDATA[cloud-migration]]></category>
            <category><![CDATA[cloud-storage]]></category>
            <category><![CDATA[cloud-computing]]></category>
            <category><![CDATA[filesystem]]></category>
            <dc:creator><![CDATA[The Qumulo Team]]></dc:creator>
            <pubDate>Wed, 21 Nov 2018 15:51:53 GMT</pubDate>
            <atom:updated>2018-11-21T15:51:53.649Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/605/1*D_h8pPSeKkHJe-KagfoBdQ@2x.png" /></figure><p>Tens of thousands of <a href="https://reinvent.awsevents.com/">AWS re:Invent</a> attendees will be traveling to Las Vegas next week — ranging from longtime cloud-native organizations to “cloud curious” companies. All of them will be seeking answers to the same question: “How can I take advantage of the cloud in a way that works for my business?”</p><p>In a dynamic industry such as ours, few things are certain, but IT infrastructure decision makers and buyers can be sure of two things: enterprise data will continue to grow in size and strategic importance, and storage infrastructures will resemble a hybrid cloud within the next five years.</p><p>The <a href="https://qumulo.com/discover/why-qumulo/">era of scale-up and scale-out is ending</a>. Enterprises today rely on their data being available wherever and whenever they do business, whether that’s around the country or around the world. Increasingly, this also means availability across on-premises and public cloud environments, as well as the requirement to scale-across environments.</p><p>The era of scale-across data management is here, and it couldn’t be arriving a moment too soon.</p><p>Distributing workloads across on-premises file systems and public clouds such as <a href="https://qumulo.com/product/cloud/aws/">AWS</a> is becoming both more widespread and more of a business imperative.</p><p>As a result, many will be seeking guidance about how to manage their data in multiple regions, as well as <a href="https://qumulo.com/cloud/file-storage-in-the-cloud/">across public clouds and on-premises environments</a>. The primary objective will be to find a way to manage data in a way that drives business objectives. 
While this can be a complex process amid terminology, acronyms, and alternatives, at the end of the day, businesses need their data to solve business problems.</p><p>The need for scale-across hybrid cloud storage is here — it enables organizations to retain the performance and accessibility of data managed on-premises with the <a href="https://qumulo.com/resources/elastic-storage/">elasticity</a> of data managed in the public cloud. Today’s enterprises need both for the following reasons:</p><ul><li><strong>Real-time visibility:</strong> Gone are the days when companies managed big blocks or buckets of storage. Today’s businesses require granular management of data — not storage. It’s imperative to know exactly when data is available, what data is available and who is depending on it.</li><li><strong>Have your data where you need it:</strong> Moving data among environments takes what was known as replication and moves it into the hybrid cloud era. When scaling across environments, data replication becomes a critical, pervasive data management competency.</li><li><strong>Operate across environments:</strong> First and foremost, your data is <em>your</em> data, and to seamlessly scale that data across environments, a file system that operates across environments is required. Same data. Same file system.</li><li><strong>Elastic scale:</strong> Hybrid cloud storage enables organizations to take advantage of elastic scale by managing data across on-premises environments <em>and</em> the public cloud, provisioning burstable workflows to take advantage of public cloud elasticity.</li><li><strong>Support for protocols:</strong> The hybrid cloud enables enterprises to seamlessly run file-based applications on-premises and in the public cloud, contingent upon the critical protocols the file applications depend on, including NFS, SMB, FTP and REST.</li><li><strong>Fast reads <em>and</em> writes:</strong> When data is the lifeblood of the business, responsiveness matters. 
As such, enterprises need write and read performance regardless of where data is stored. Hybrid clouds enable enterprises to achieve on-premises read performance that is simply not possible in the public cloud.</li><li><strong>Strong consistency:</strong> A write needs to be instantly available to the next read. Businesses can’t afford to wait or, worse, to not know if their data is available to be put to work.</li><li><strong>Eliminate usage taxes:</strong> Accessing data is imperative, especially if it’s tied to business decisions. Public cloud access fees slow down decision making. Keeping data accessible as if it were on-premises ensures it can be put to use, without limitations.</li></ul><p><strong>Qumulo has pioneered hybrid cloud data storage and enables agility, responsiveness and scale. Today. For your business.</strong></p><p>With Qumulo, your data is available when and how you want it. Provision your data across your on-premises and public cloud environments wherever it can most effectively power your business. Qumulo’s unique hybrid cloud approach lets you access, manage and migrate your data seamlessly across both environments.</p><p>And that data is available with blazing-fast performance, affordably.</p><p><strong>At just dollars per gigabyte of throughput, Qumulo is the fastest and most affordable file system in the cloud.</strong></p><p>Our mission is to ensure that enterprises that depend on file-based data can manage it across the environments where that data is most productive. This means providing data management solutions that bring simplicity to storage infrastructure — to ensure that the era of “on-premises <em>versus</em> public cloud” becomes “on-premises <em>and</em> the public cloud.”</p><p>For example, saying that on-premises file storage deployments will <em>always</em> have superior economics is inaccurate. Saying <em>all</em> on-premises storage will cease to exist is just as flawed. 
Customer workloads need to be able to take advantage of the cloud some of the time. As a result, your data needs to be easily managed across these environments.</p><p><strong>See it for yourself. </strong><a href="https://go.qumulo.com/re-invent/"><strong>Stop by Qumulo’s booth (#828)</strong></a><strong> at AWS re:Invent for a demonstration of Qumulo + AWS and to learn more about our hybrid cloud offerings.</strong></p><p>See why Qumulo’s enterprise-class hybrid cloud file storage offers our customers freedom, choice and flexibility. Our solutions are available in a variety of standard on-premises platforms that span <a href="https://qumulo.com/product/nearline-archive/">groundbreaking nearline archive</a>, <a href="https://qumulo.com/product/capacity/">robust capacity needs</a>, and <a href="https://qumulo.com/product/performance/">high performance all-flash platforms</a> — and the same file system <a href="https://qumulo.com/product/cloud/aws/">runs natively in the public cloud</a>.</p><p>With Qumulo, organizations can effortlessly move and manage their data between on-premises and the public cloud, whether it’s object or file-based, all with the same file system.</p><p><em>This post </em><a href="https://qumulo.com/blog/the-era-of-scale-across-is-here/"><em>originally appeared</em></a><em> on the </em><a href="https://qumulo.com/blog/"><em>Qumulo blog</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=be8d0766c26c" width="1" height="1" alt=""><hr><p><a href="https://medium.com/qumulo/data-storage-and-hybrid-cloud-computing-the-era-of-scale-across-is-here-be8d0766c26c">Data storage and hybrid cloud computing: The era of scale-across is here</a> was originally published in <a href="https://medium.com/qumulo">Qumulo</a> on Medium, where people are continuing the conversation by highlighting and 
responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[What type of cloud storage is best for the enterprise?]]></title>
            <link>https://medium.com/qumulo/what-type-of-cloud-storage-is-best-for-the-enterprise-a8e94d993345?source=rss-c786732c58ae------2</link>
            <guid isPermaLink="false">https://medium.com/p/a8e94d993345</guid>
            <category><![CDATA[cloud-migration]]></category>
            <category><![CDATA[cloud-storage]]></category>
            <category><![CDATA[cloud-services]]></category>
            <category><![CDATA[data-storage]]></category>
            <category><![CDATA[filesystem]]></category>
            <dc:creator><![CDATA[The Qumulo Team]]></dc:creator>
            <pubDate>Mon, 12 Nov 2018 19:59:10 GMT</pubDate>
            <atom:updated>2018-11-12T19:59:10.795Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/393/1*TvaBckAYW1FsVjMTKGwwXg@2x.png" /></figure><p>It might feel like choosing technologies is more like picking sides.</p><p>iOS vs Android, Mac vs PC, Xbox vs PlayStation, etc. That either/or fallacy is now finding its way into data centers. The split comes when deciding what type of cloud storage is right for your enterprise.</p><p>The <a href="https://www.networkcomputing.com/storage/cloud-storage-adoption-soars-workplace/712205254">cloud has become a legitimate option for enterprise storage</a>. However, the question has shifted from whether the business <strong><em>should</em></strong> use the cloud to how the business <strong><em>needs</em></strong> to use the cloud.</p><p>We’ve seen this question split system administrator teams down the middle between file and object types of cloud storage.</p><h3>A typical discussion about types of cloud storage</h3><p>Let’s take a look at an example I see all the time. Say you, like many enterprises, have file-based workloads and applications that depend on <a href="https://searchstorage.techtarget.com/definition/file-storage">file-based storage</a>.</p><p>These workloads are likely critical for your business. But with data center costs continuing to rise, an initiative is drawn up to migrate to the public cloud. If you’re lucky, that initiative comes with cloud specialists to navigate the types of cloud storage available.</p><p>If this sounds familiar, you’ve probably had a conversation like this before:</p><blockquote>Enterprise Architect<em>: “So we need to move </em>XYZ<em> app to the public cloud. We have done some application mapping and know most of the dependencies. The big one will be SMB share access to the files the app consumes and generates.”</em></blockquote><blockquote>Cloud Architect<em>: “That is not going to work. 
We need to change the app so that it can operate via S3 in the public cloud.”</em></blockquote><blockquote>Enterprise Architect<em>: “This app is over 10 years old, no one who wrote it is still here…”</em></blockquote><p>And scene.</p><p>Now, as a cloud solutions architect myself, I understand the benefits of writing your apps for the cloud. But it is unrealistic to take on both a migration and an app rewrite at the same time and have both succeed.</p><p>Stubbornness can exist on both sides of this divide, but the correct solution doesn’t just differ between businesses. It can also vary depending on <a href="https://qumulo.com/solution/">workloads</a>.</p><p>Determining the right type of cloud storage will allow for easier migrations, fewer headaches, and the desired result. Nothing is worse than having to fail back from a botched migration attempt to the public cloud.</p><h3>Why not have both types of cloud storage?</h3><p>When looking into types of cloud storage, I recommend sticking to a hybrid file and object strategy.</p><p>There are many reasons why an enterprise would want to take a <a href="https://qumulo.com/cloud/file-storage-in-the-cloud/">hybrid approach to the cloud</a>. The specifics might vary depending on workload, but generally enterprises that take this approach:</p><ul><li>Are able to move to the cloud at their own pace. 
With a hybrid approach, the location of a customer’s data footprint becomes purely a business decision based on economics and the nature of the workload.</li><li>Use the same filesystem for applications, whether on-premises or in the cloud.</li><li>Can take advantage of the <a href="https://qumulo.com/discover/why-qumulo/">features of file</a> and the efficiency of object.</li></ul><p>With that in mind, there are certain things you should look for when considering the types of cloud storage for your business.</p><p>On the file-based storage side, look for support for enterprise-class features (auditing, replication, AD support) and protocols (<a href="https://qumulo.com/resources/create-smb-share/">SMB</a> and <a href="https://qumulo.com/resources/create-nfs-export/">NFS</a>). Look for <a href="https://qumulo.com/resources/qumulo-file-fabric-technical-overview/#protocol">REST API</a>s to configure those enterprise features, and to read and write data based on object protocols.</p><p>Even better, look for ways to use both file and object together. But be careful. File-object gateways have become popular ways to mimic file-based behavior while accessing object-based storage. The issue here is that once a gateway writes “files” to an object store, only that gateway can find those files again.</p><h3>Takeaways for types of cloud storage</h3><p>Are you ready to make the leap to the cloud? Here are a few steps you can take to pick the type of cloud storage that is best for your business.</p><ol><li>Assess the workloads you want to move to the cloud. Determine what you need from a cloud storage solution to keep those applications running.</li><li>Look to see if the vendors you already work with have a cloud storage solution that could fit your needs.</li><li>Explore the cloud storage market for new players and emerging technologies. 
I can think of one that is <a href="https://qumulo.com/product/cloud/aws/">worth looking at right away.</a></li></ol><p><em>This post </em><a href="https://qumulo.com/blog/what-types-cloud-storage/"><em>originally appeared</em></a><em> on the </em><a href="https://qumulo.com/blog/"><em>Qumulo blog</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a8e94d993345" width="1" height="1" alt=""><hr><p><a href="https://medium.com/qumulo/what-type-of-cloud-storage-is-best-for-the-enterprise-a8e94d993345">What type of cloud storage is best for the enterprise?</a> was originally published in <a href="https://medium.com/qumulo">Qumulo</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[What do we mean by highly scalable file storage?]]></title>
            <link>https://medium.com/qumulo/what-do-we-mean-by-highly-scalable-file-storage-a4192efaed3f?source=rss-c786732c58ae------2</link>
            <guid isPermaLink="false">https://medium.com/p/a4192efaed3f</guid>
            <category><![CDATA[network-attached-storage]]></category>
            <category><![CDATA[data-storage]]></category>
            <category><![CDATA[scale-out]]></category>
            <category><![CDATA[big-data]]></category>
            <dc:creator><![CDATA[The Qumulo Team]]></dc:creator>
            <pubDate>Fri, 09 Nov 2018 18:34:29 GMT</pubDate>
            <atom:updated>2018-11-09T19:49:38.069Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*YDk3lxTWAoBoM8DRVjthhw.png" /></figure><p>You may have read that <a href="https://qumulo.com/">Qumulo</a> builds a modern, <a href="https://qumulo.com/discover/how-qf2-works/">highly scalable file storage system</a>. But what does that really mean?</p><p>Let’s break it down:</p><h3>A file system</h3><p>You may not know this, but you’re probably already familiar with scalable file storage systems. Every laptop has a file system on its hard drive. The apps you use, like iTunes, Word, and Excel, create files to store their data. Your operating system allows the apps to use commands to read and write files, and it organizes the data into a file system on the hard drive. These file systems provide some important guarantees such as persistent file storage, transactional modifications, and the ability to organize files into a folder hierarchy.</p><h3>Networked file systems</h3><p>Sometimes you want to share data with others, or have a central repository for documents and data. To do this, multiple computers need to be able to access the same file system. Luckily, modern operating systems provide a way to connect to file systems over a network. These kinds of file systems are called <a href="https://en.wikipedia.org/wiki/Network-attached_storage">Network Attached Storage (NAS) systems</a>.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*6oHGyIGsVrknA_jj.png" /></figure><p>The two main protocols for these systems are SMB (Server Message Block, used on Windows) and NFS (Network File System, used on Linux). Both allow users to browse and edit files using normal applications as if the files were local. 
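Because the operating system speaks the NAS protocol underneath, an application reads and writes a file on a mounted NFS or SMB share exactly as it would a local file. The sketch below illustrates that transparency in Python; a temporary directory stands in for an assumed mount point such as /mnt/share, which is made up for illustration:

```python
import tempfile
from pathlib import Path

# A mounted NFS/SMB share looks like any other directory to a program.
# Here a temporary directory stands in for a mount point like /mnt/share.
share = Path(tempfile.mkdtemp())

# Write and read a file exactly as if it were local: the OS and the
# NAS protocol (NFS or SMB) would handle the network I/O underneath.
report = share / "render_report.txt"
report.write_text("frame 0001: done\n")
print(report.read_text())
```

The same code runs unchanged whether `share` points at a local disk, an NFS export, or a mapped SMB drive, which is exactly the guarantee networked file systems provide.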
In fact, if you’ve ever <a href="https://www.laptopmag.com/articles/map-network-drive-windows-10">mapped a network drive on Windows</a> before, you’ve used SMB.</p><p>Qumulo supports both of these protocols (and more, like FTP) so it can be used to share files between computers on both Windows and Linux.</p><h3>A scalable file storage system</h3><p>Qumulo is a kind of NAS system called distributed NAS. Rather than having a single server, we create a single file system from a distributed set of nodes.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*tZSLMOHz2V0eXNNB.png" /></figure><p>QF2 creates a file system that combines all the disks on all the nodes in a cluster into one namespace. This means we can create incredibly large file systems with the combined capacity and throughput of hundreds of disks. When your system gets full, new nodes can be added seamlessly to expand capacity and increase performance.</p><h3>Who uses these huge scalable file storage systems?</h3><p>Many of our customers are <a href="https://qumulo.com/solution/">organizations that have huge data needs</a> because data is part of their business. Movie studios require huge amounts of video and animation data to render their films. Scientific researchers have tons of experimental data to process. Autonomous driving systems need to train on thousands of hours of driving footage. All these users need petabyte-scale file systems and incredibly fast file access.</p><h3>Why Qumulo?</h3><p>Other enterprise file systems are built using traditional software development practices, resulting in a tightly coupled hardware and software solution that produces long, buggy release cycles. QF2 has been built from the ground up as an agile software project. 
It runs on commodity hardware platforms and in the public cloud with <a href="https://qumulo.com/blog/why-does-qumulo-ship-software-every-two-weeks/">releases every 2 weeks</a>.</p><p>We can do this because of the extreme confidence we get from our <a href="https://qumulo.com/blog/making-100-code-coverage-as-easy-as-flipping-a-coin/">exhaustive testing processes</a> and our system architecture which enables top-rate data protection, transactionality, and performance.</p><p>This development process has allowed us to create revolutionary new ways to gain insight into file system data through our <a href="https://qumulo.com/blog/real-time-analytics-a-game-changer-for-managing-billions-of-files/">analytics</a>: Instant visibility of capacity information, breakdowns of performance by client and file, snapshot capacity accounting, and more.</p><p><em>This post </em><a href="https://qumulo.com/blog/scalable-file-storage/"><em>originally appeared</em></a><em> on the </em><a href="https://qumulo.com/blog/"><em>Qumulo blog</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a4192efaed3f" width="1" height="1" alt=""><hr><p><a href="https://medium.com/qumulo/what-do-we-mean-by-highly-scalable-file-storage-a4192efaed3f">What do we mean by highly scalable file storage?</a> was originally published in <a href="https://medium.com/qumulo">Qumulo</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[VFX rendering presents huge storage challenges]]></title>
            <link>https://medium.com/qumulo/vfx-rendering-presents-huge-storage-challenges-42ad699d3e60?source=rss-c786732c58ae------2</link>
            <guid isPermaLink="false">https://medium.com/p/42ad699d3e60</guid>
            <category><![CDATA[vfxtech]]></category>
            <category><![CDATA[visual-effects]]></category>
            <category><![CDATA[vfx]]></category>
            <category><![CDATA[rendering]]></category>
            <dc:creator><![CDATA[The Qumulo Team]]></dc:creator>
            <pubDate>Fri, 09 Nov 2018 17:11:32 GMT</pubDate>
            <atom:updated>2018-11-09T19:00:00.975Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*80OTN8_NMzckv_5MlxJbqg.png" /></figure><p>Animation and visual effects (VFX) are at the heart of modern film production. Digital production as we know it began in the early 1990s with movies such as “Terminator 2” and “Jurassic Park.” Those movies broke new ground in what could be done with visual effects. Then, “Toy Story” in 1995 became the first full-length digitally generated animated feature, and the rest is film history.</p><p>Concurrent with the production of these movies was the “democratization” of visual effects. This began in the early 1990s with open source products and proprietary packages being productized and sold by companies, replacing in-house tools available only to industry pioneers. Standards such as file formats also began to be established, which allowed the emerging ecosystem of digital production tools to be linked together into flexible processing pipelines.</p><p>With access to reusable pipelines, studios began focusing on the more creative aspects of their trade, such as creating characters with increasingly complex physical characteristics. In “Monsters, Inc.,” made in 2001, Sulley had about 1 million hairs. In “Monsters University,” made in 2013, he had around <a href="https://venturebeat.com/2013/04/24/the-insiders-view-of-the-tech-behind-pixars-monsters-university-interview/">5.5 million hairs</a>.</p><h3>Challenges of the VFX workflow</h3><p>A VFX production pipeline is complex, with many processes and many dependencies among them. The biggest challenge is managing the sheer volume of information required to produce photorealistic imagery. A single creature might be composed of hundreds, if not thousands, of digital assets. 
It is often necessary to assemble terabytes of data that must be rendered and/or composited.</p><p><a href="https://en.wikipedia.org/wiki/Volume_rendering">Volumetric data</a>, which is essential to many bread-and-butter effects, including clouds, dust, water, fire, and fluids, is another example of extreme complexity. The data is challenging both because of its large footprint and because it often requires conversion to other formats before it can be used by other tools.</p><p>Picture a very simple, linear VFX pipeline for on-premises rendering, in which assets move from creation through rendering and compositing into editing.</p><p>Most pipelines are, in reality, far more intricate and far less linear than that. For example, editing can take place before the VFX shots are available. Creating special effects is time-consuming. Editors can work without the completed special effects, using mocked-up versions as stand-ins, and then drop in the completed shots when they’re available.</p><h3>Storage pain points for on-premises VFX rendering</h3><p>In general, the driver behind the growth of VFX is that the technology has gotten better, cheaper and faster, which means that animation and special effects have gotten increasingly ambitious. These complex films require a highly scalable, modern storage system that is designed to handle billions of files.</p><p>The emphasis is on “file” because VFX is a file-based workflow and files are the medium of exchange between applications that were not necessarily written by the same company. Workflows must integrate across applications, and file is the way to do that.</p><p>Any organization that relies on its file storage system as heavily as VFX studios do is always sensitive to a variety of issues. These include <a href="https://qumulo.com/discover/qf2-overview/">performance, scalability, adaptability and visibility.</a> Storage system performance is always important. Systems that are too slow can starve the rendering farm or keep artists from working while rendering is going on. 
As more and more studios move to 4K and higher resolutions, performance will become increasingly critical.</p><p>The ever-increasing amount of data that VFX shops generate can easily fill up their storage system. Again, the advent of 4K has to be considered because it means much larger data footprints. Depending on what they’re using, demands for more capacity may mean that the studio must replace the entire storage system or buy new metadata controllers and storage shelves.</p><p>A small VFX shop might, when it first starts, use a storage system the team has put together themselves. As their business grows, that system will need to be replaced. They will want to move to a scalable, commercial file storage system with enterprise features that will help them take on more projects and create more sophisticated effects.</p><p>Visibility is a limitation of legacy storage systems. Most M&amp;E shops use a treewalk script/program to manage their storage. Treewalks are extremely slow when file systems are large. Administrators can literally wait for days to get answers, which effectively means the data is useless.</p><h3>Qualities of a modern VFX storage system</h3><p>VFX studio management needs a storage system that will help the business grow while keeping costs down. The ability to scale to billions of files is important so that the system can keep pace with the company’s expansion. The TCO also matters. What the storage itself costs is just the beginning. There are other factors, such as how easy the system is to install and manage. Another factor is how efficiently disk space is used. The more efficient the system, the less storage is needed and the lower the infrastructure costs such as cooling and power.</p><p>A simple subscription model that covers everything, including upgrades and support, will help make costs transparent. 
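The treewalk problem mentioned above is easy to see in code. The following is a minimal, illustrative sketch, not any particular shop's script: to answer even a simple capacity question it must visit every file, so its runtime grows linearly with the file count, and on a billion-file system that can mean days of waiting.

```python
import os

def treewalk_usage(root: str) -> dict:
    """Bytes used per top-level directory, found by visiting every file.

    Cost is O(total number of files), which is why treewalk scripts
    take so long on very large file systems.
    """
    usage = {}
    for dirpath, _dirnames, filenames in os.walk(root):
        rel = os.path.relpath(dirpath, root)
        top = "." if rel == "." else rel.split(os.sep)[0]
        for name in filenames:
            try:
                size = os.path.getsize(os.path.join(dirpath, name))
            except OSError:
                size = 0  # file deleted mid-walk; the answer is already stale
            usage[top] = usage.get(top, 0) + size
    return usage
```

A storage system with built-in, real-time analytics instead keeps such aggregates current as writes happen, so the same question is answered immediately, without walking the tree.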
Customer support that provides instant access to a dedicated storage expert via communication tools such as Slack is a must.</p><p>Artists need to see data clusters as a single volume, rather than having to deal with multiple disks or provisioned volumes. They need a system that’s fast enough so that they can work while frames are being rendered.</p><p>IT administrators need real-time visibility and control to gain insights into what’s happening in the storage system now, down to the file level. Capacity explorer and capacity trends tools will let them see who is using the most storage now and over time, so that they can plan sensibly for future use and not worry about over-provisioning. The ability to identify hotspots and immediately apply quotas will let them halt any processes that are monopolizing storage resources.</p><h3>VFX rendering in the cloud</h3><p>Problems such as scalability and performance, which you would expect from any organization that has to deal with an ever-growing number of files, are exacerbated for VFX studios because they cannot tolerate delays in their schedule to add more resources.</p><p>VFX studios can easily outstrip the capacity of their on-premises render farms. Power, cooling and physical space are all finite resources that put limits on what the studio can achieve. A new project or one with unexpected complications can exhaust the available compute and storage resources at any moment.</p><p>With tight deadlines, there’s no time to build out the physical infrastructure. Even renting equipment may not be a feasible solution. When considering how long it takes to order, deliver, and rack and stack the nodes, the challenge of finding available rental hardware and the challenge of finding enough data center space, power, networking, and cooling, it may seem like there’s no answer — unless you start looking at the cloud.</p><p>In fact, many VFX studios are interested in the cloud. 
Most of the work may still be on-premises, but if there is an unexpectedly complex job or more work than was originally anticipated, studios want the option of using the cloud to handle the overload.</p><h3>Storage pain points for cloud rendering</h3><p>Many of the same considerations for on-premises rendering apply to <a href="https://qumulo.com/solution/cloud-rendering-vfx/">cloud rendering</a>, but there are some specific issues that need to be considered. Unfortunately, while compute resources in the cloud are readily available, file-based storage solutions are often inadequate, or are versions of legacy file systems with some patches applied to make them “cloud ready.” Problems include lack of protocol support, performance and capacity limitations, and complexity in setting up the cluster.</p><p>It’s important that studios be able to at least match the performance of their on-premises render farm in the cloud. They should also be able to scale performance and capacity separately to take advantage of the flexible resources the public cloud offers. It should also be easy to transfer files from the on-premises cluster to the cluster in the cloud, and then to return the results to the on-premises cluster.</p><h3>Evaluating cloud-based file systems</h3><p>Here are some questions that VFX studios should ask themselves when they evaluate cloud-based file solutions:</p><ul><li>Does your workflow use file? If you’re thinking about converting from file to object, would you consider changing your mind if you could get great file in the cloud today?</li><li>Do you use NFS or SMB? Often, when they think of file in the cloud, people think in terms of Amazon Elastic File System (EFS), which only supports NFS. Many VFX studios are Windows shops, which means they need SMB.</li><li>What’s your workflow? What are you trying to do in the cloud? 
Compute, rendering, processing?</li><li>What kind of performance metrics do you need?</li><li>How will you transfer files between the cloud and the on-premises render farm?</li></ul><p><em>This post </em><a href="https://qumulo.com/blog/vfx-rendering-storage-challenges/"><em>originally appeared</em></a><em> on the </em><a href="https://qumulo.com/blog/"><em>Qumulo blog</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=42ad699d3e60" width="1" height="1" alt=""><hr><p><a href="https://medium.com/qumulo/vfx-rendering-presents-huge-storage-challenges-42ad699d3e60">VFX rendering presents huge storage challenges</a> was originally published in <a href="https://medium.com/qumulo">Qumulo</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A simpler, more reliable approach to high availability in the cloud]]></title>
            <link>https://medium.com/qumulo/a-simpler-more-reliable-approach-to-high-availability-in-the-cloud-a484355298a6?source=rss-c786732c58ae------2</link>
            <guid isPermaLink="false">https://medium.com/p/a484355298a6</guid>
            <category><![CDATA[high-availability]]></category>
            <category><![CDATA[cloud-storage-services]]></category>
            <category><![CDATA[data-storage]]></category>
            <category><![CDATA[cloud-storage]]></category>
            <category><![CDATA[cloud-computing]]></category>
            <dc:creator><![CDATA[The Qumulo Team]]></dc:creator>
            <pubDate>Fri, 09 Nov 2018 16:43:36 GMT</pubDate>
            <atom:updated>2018-11-09T19:14:26.864Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/605/1*YNp4ywEBdIIwITPMLYtbmQ.png" /></figure><p>Public cloud infrastructure has transformed many aspects of IT strategy, but one thing remains constant: the vital importance of <a href="https://en.wikipedia.org/wiki/High_availability">high availability (HA)</a>. When data is your business — as is the case for every business today — any loss can have dire consequences. You’ve got to do all you can to minimize that risk. Naturally, HA is a key area of focus for storage vendors <a href="https://qumulo.com/discover/why-qumulo/">both on-premises and in the cloud</a>, but not every vendor takes the same approach. Understanding what the differences are, and why they matter, is essential to making the right choice for your data and your business.</p><p>On premises, HA commonly relies on a few clever network tricks. One of these is the concept of floating IP addresses — one or more IP addresses that do not belong solely to one device, but are shared among a cluster of devices. Clients use these floating IP addresses to access content served by the clustered devices, so in the event of a device failure, the client’s connection can seamlessly swing from one device to another. There are a few different mechanisms that can be used to swing floating IP addresses away from failed devices. For example, both the F5 Networks BIG-IP platform and the <a href="https://qumulo.com/resources/qumulo-file-fabric-technical-overview/">Qumulo File Fabric</a> use a technique called gratuitous ARP to take over a floating IP address that was previously served by another node. Other systems use asynchronous routing so that only a live device will receive traffic. In both cases, it’s the network itself that enables seamless failover from a failing node to a functioning one.</p><p>In public cloud environments, you don’t own or control the network. 
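</p><p>To make the gratuitous ARP mechanism concrete, here is a short illustrative sketch (not from the original post) of the frame a takeover node broadcasts: an ARP reply in which the sender and target protocol addresses are both the floating IP, so every host on the segment updates its ARP cache to the new owner’s MAC. The addresses below are placeholders.</p>

```python
import struct

def gratuitous_arp_frame(vip: str, mac: str) -> bytes:
    """Build an Ethernet frame carrying a gratuitous ARP reply.

    Sender IP == target IP == the floating (virtual) IP, which tells
    every host on the broadcast domain to remap that IP to `mac`.
    """
    mac_b = bytes.fromhex(mac.replace(":", ""))
    ip_b = bytes(int(octet) for octet in vip.split("."))
    broadcast = b"\xff" * 6
    eth = broadcast + mac_b + struct.pack("!H", 0x0806)  # dst, src, EtherType=ARP
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 2)      # Ethernet, IPv4, op=2 (reply)
    arp += mac_b + ip_b        # sender hardware / protocol address
    arp += broadcast + ip_b    # target address: same IP, which is
                               # what makes the announcement gratuitous
    return eth + arp

# Placeholder floating IP and MAC of the node taking over:
frame = gratuitous_arp_frame("10.0.0.42", "02:00:5e:00:53:01")
```

<p>Actually sending such a frame requires a raw socket and root privileges; the point here is only the shape of the announcement that lets the network redirect clients without their involvement.</p><p>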
Here, it’s Amazon, Microsoft, or Google who get to dictate which features to enable. For Amazon Web Services (AWS), one such choice is disabling ARP to prevent abuses such as ARP cache poisoning (also known as ARP spoofing or ARP poison routing). That means any on-premises appliance you’ve been using that relied on ARP for HA won’t work. As a result, infrastructure vendors need to find a different approach for cloud HA.</p><p>The options for HA in the cloud come down to two basic approaches: you can either find a workaround that’s essentially similar to what you’ve done on-premises, or you can write a new, cloud-specific method for IP failover.</p><p>An example of a workaround is the method NetApp ONTAP uses for IP failover in AWS. As a classic scale-up storage architecture, NetApp relies on paired nodes where data is constantly mirrored from node to node. In this case, you’re effectively maintaining two copies of your data store, incurring compute, storage, and software costs for both the used and the unused node. Think of it as a form of auto insurance where, instead of paying a relatively low monthly fee, you cover your risk by buying an entire second car in case something goes wrong with the first. These deployments can be run in either active/standby or active/active configurations; both require that data be replicated fully. Now, this deployment by itself does not provide IP failover; for that, you need to deploy a third compute system called the NetApp Cloud Manager.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/974/0*ky5QUW-c7mFctnyx.png" /></figure><p>The Cloud Manager is a t2.micro instance (shown as the “mediator” above) dedicated to handling configuration of the ONTAP systems and providing failover. The Cloud Manager watches for a failure, and then swings IP routing from the active node to the standby as needed. 
That sounds all well and good until we take a closer look at the t2.micro — an AWS EC2 instance type with just 1 vCPU and 1 GB of RAM. Making that the linchpin of your HA strategy means moving from a single point of failure in the active node to an even smaller single point of failure in the failover mechanism itself.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/974/0*x6Kqe2dp2hD26ku8.png" /></figure><p>As an agile software company, <a href="https://qumulo.com/">Qumulo</a> is in a position to really think through the right solution to each problem — no matter how hard it might be — and build it for <a href="https://qumulo.com/company/customers/">our customers</a>. Considering the complexity and risk of the ONTAP approach to HA in the cloud, we started from scratch and found a simpler, more reliable method.</p><p>Instead of trying to force-fit an on-premises model into a public cloud environment, we purpose-built IP failover for the cloud. The key is to make use of the features each public cloud platform provides. For example, we use AWS APIs from any working member of the cluster to swing a floating IP address from a downed cluster member to a functioning one. In this way, we avoided adding another layer of complexity and avoided introducing a single point of failure that could easily become a bottleneck. As an additional benefit, our approach eliminates the need for a redundant standby cluster, greatly reducing the cost of HA.</p><p>Now, you may be wondering why you should care at all about HA in the public cloud, given assurances like these from Amazon:</p><p><em>“Amazon EBS volumes are designed for an annual failure rate (AFR) of between 0.1% — 0.2%, where failure refers to a complete or partial loss of the volume, depending on the size and performance of the volume. 
… For example, if you have 1,000 EBS volumes running for 1 year, you should expect 1 to 2 will have a failure.”</em></p><p>The real question is why, when avoiding data loss is a solved problem on-premises, you’d accept even one or two lost EBS volumes per year in the cloud. No matter what business you’re in, whether <a href="https://qumulo.com/solution/editorial/">media &amp; entertainment</a>, <a href="https://qumulo.com/solution/ngs-genomics/">genomics research</a>, <a href="https://qumulo.com/solution/adas/">autonomous driving</a>, or even something as simple as home folders, your data is precious. What might be in those EBS volumes you’re losing every year? How will their loss affect your business? There’s no way of knowing — and that’s a risk no business can afford to take casually.</p><p>And Amazon’s assurances don’t even take into account compute node failure rates, which can be higher than you think. EC2 instances can fail for a variety of interesting reasons. One common case results from the fact that AWS is, at its core, a data center of shared hardware. If a piece of hardware is going to undergo maintenance or be decommissioned, your EC2 instance will need to be moved, and that will cause a reboot. An even simpler example is when the underlying piece of hardware has a fault that causes all the instances it hosts to be shifted to another piece of hardware, which in turn causes those instances to reboot. Any reboot will temporarily cause the node to appear failed, so traffic will need to switch to an active node.</p><p>If a compute node does go down, ONTAP will fail over from the active node to the surviving node. Unless, of course, it’s the t2.micro NetApp Cloud Manager that fails. If this happens, you’ve lost air traffic control for all your storage traffic in the public cloud, and there’s nothing to move clients from a failed node to the surviving node. Now you’ve got a real problem. 
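</p><p>The alternative described earlier, in which any surviving cluster member drives IP reassignment through the cloud provider’s APIs, removes that central mediator entirely. The decision logic can be illustrated in a few lines (hypothetical names, not Qumulo’s implementation; on AWS, each resulting pair would typically become an EC2 AssignPrivateIpAddresses call with reassignment allowed):</p>

```python
def plan_failover(assignments, healthy):
    """Given {node: [floating_ips]} and the set of healthy nodes,
    return (ip, new_node) pairs describing which floating IPs must
    move and where. Orphaned IPs are spread round-robin across the
    survivors so no single node absorbs all redirected traffic."""
    survivors = sorted(node for node in assignments if node in healthy)
    if not survivors:
        raise RuntimeError("no healthy nodes left to take over")
    orphaned = [ip for node, ips in sorted(assignments.items())
                if node not in healthy for ip in ips]
    return [(ip, survivors[i % len(survivors)])
            for i, ip in enumerate(orphaned)]

assignments = {"node-a": ["10.0.0.10"],
               "node-b": ["10.0.0.11"],
               "node-c": ["10.0.0.12"]}
plan = plan_failover(assignments, healthy={"node-a", "node-c"})
# node-b has failed, so its floating IP moves: [("10.0.0.11", "node-a")]
```

<p>Because every node can compute and apply this plan, losing any one node leaves no mediator whose failure would strand the cluster.</p><p>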
In addressing the risk of a failed node, NetApp Cloud Manager ends up adding a new failure condition to the mix. Surely we can expect better for our next-generation enterprise architectures.</p><p>NetApp ONTAP serves as a cautionary tale about moving legacy technology to the public cloud without accounting for the inherent differences in these environments. Qumulo’s cloud-native approach makes it possible to survive disk and node failures without introducing further complexity, and without excessive cost. By taking the time to do things the right way for each type of infrastructure — on-premises and public cloud — we can provide the simple, reliable HA you need for the data your business depends on.</p><p><em>This post </em><a href="https://qumulo.com/blog/high-availability-cloud/"><em>originally appeared</em></a><em> on the </em><a href="https://qumulo.com/blog/"><em>Qumulo blog</em></a><em>.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=a484355298a6" width="1" height="1" alt=""><hr><p><a href="https://medium.com/qumulo/a-simpler-more-reliable-approach-to-high-availability-in-the-cloud-a484355298a6">A simpler, more reliable approach to high availability in the cloud</a> was originally published in <a href="https://medium.com/qumulo">Qumulo</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>