
Let’s be clear: regardless of what you’ve heard, the onsite storage systems that reside in datacenters today are not going away any time soon. For performance-sensitive, mission-critical applications, there is no substitute. Storage systems were taken out of the mainframes of past decades for a reason: to gain performance and redundancy independence from compute systems.

We can expect to see traditional storage systems transition from general-purpose storage devices to “hot” data storage devices, populated with SSDs, flash memory, and DRAM. The recent success of all-flash arrays is a testament to this transition.

The Capacity Challenge

One of the biggest open secrets in the storage industry is that 80 percent or more of the data stored on most storage systems is inactive, or “cold.” With the massive growth of data over the last decade, the expensive design of traditional storage systems just doesn’t make sense for data that is not being actively used.

The very design that makes storage systems reliable and mission-critical also makes them unable to cost-effectively scale to support large amounts of cold data. The concepts behind RAID (Redundant Array of Independent Disks), developed and first deployed in the late 1980s, made perfect sense for the last 30 years, when data growth rates and disk capacities grew at relatively constant rates. With today’s explosive data growth and single-drive capacities exceeding 10 terabytes, RAID concepts just don’t work.

The biggest problem with RAID is that it doesn’t work well with low-cost, high-capacity disks. RAID rebuild times after a high-capacity disk fails can stretch into days. Clearly, this is not a workable solution for deploying a low-cost, high-capacity system.
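A back-of-the-envelope calculation shows why (a minimal sketch; the capacity and throughput figures below are illustrative assumptions, not measurements from any particular array):

```python
# Back-of-the-envelope rebuild-time estimate for a failed high-capacity drive.
# The capacity and throughput figures are illustrative assumptions only.

drive_capacity_tb = 10          # size of the failed drive, in terabytes
rebuild_throughput_mb_s = 100   # sustained rebuild rate, MB/s (optimistic under load)

capacity_mb = drive_capacity_tb * 1_000_000
rebuild_hours = capacity_mb / rebuild_throughput_mb_s / 3600

print(f"Best-case rebuild time: {rebuild_hours:.1f} hours")
# ~27.8 hours -- and real rebuilds share I/O with production traffic,
# so multi-day rebuild windows on large RAID groups are common.
```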

Houston, We Have a Problem

Storage systems that use forward error correction, or erasure coding, provide a means of rebuilding the original data from the encoded fragments that survive. NASA used this concept to communicate with astronauts on the moon: the original message had enough redundancy built into it that the entire transmission could be quickly rebuilt even when only fragments arrived intact. Storage systems based on this principle are typically called object storage or, more loosely, cloud storage.
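As a minimal sketch of the principle, using simple XOR parity rather than the Reed-Solomon-style codes production object stores typically use:

```python
# Toy illustration of erasure-style redundancy using XOR parity.
# Real object stores use k-of-n codes such as Reed-Solomon; this only
# shows the principle that lost data can be rebuilt from what remains.

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

# Split the object into two data fragments and compute one parity fragment.
data = b"HELLOWORLD"
frag1, frag2 = data[:5], data[5:]
parity = xor_bytes(frag1, frag2)

# Simulate losing the drive that held frag2: rebuild it from frag1 + parity.
rebuilt_frag2 = xor_bytes(frag1, parity)
assert frag1 + rebuilt_frag2 == data
print("Recovered object:", (frag1 + rebuilt_frag2).decode())
```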
Object storage systems are well suited for cold data storage. They work extremely well with low-cost, high-density drives. Because of the erasure coding, if a drive fails, the information it held already exists in fragments on other drives, which can immediately take over for the failed drive. Object storage systems also scale well on general-purpose servers and can grow into extremely large storage systems. The core of Amazon S3 and Google Cloud Storage is object storage.
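For a flavor of what the object interface looks like, here is a minimal sketch using the public boto3 S3 client; the bucket name and key are placeholders, and the snippet assumes boto3 is installed and AWS credentials are configured:

```python
# Minimal sketch of the object interface S3-style stores expose:
# whole objects are written and read by key, with no file-system semantics.
import boto3

s3 = boto3.client("s3")

# Write ("put") a whole object under a key.
s3.put_object(Bucket="example-cold-archive", Key="2017/q3/report.csv",
              Body=b"date,value\n2017-07-01,42\n")

# Read ("get") the whole object back by the same key.
obj = s3.get_object(Bucket="example-cold-archive", Key="2017/q3/report.csv")
print(obj["Body"].read().decode())
```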
Object storage is emerging as a mainstream storage platform for enterprises that store hundreds of millions or billions of files. However, adopting the technology means addressing a number of factors:

Gateways have historically been needed to convert file or block storage into objects. Think of the gateway as a language translator. It can add another layer of complexity by sitting as an abstraction layer between applications and the object store.

Performance of object stores is not suited to serving hot data, so in their native form they are best for archiving, backup, and applications that are not highly transactional.

Installation and migration challenges have resulted from integrating a different storage technology into existing systems. Traditional tiering products that tier from file-based NAS storage to an object store rely on scanning the NAS system to identify cold data, which kills the performance of the original file system (a naive scan of this kind is sketched after this list). The addition of stub files and links also creates a recovery nightmare during outages.
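For illustration, a naive cold-data scan over a NAS mount might look like the sketch below; the mount point and age threshold are assumptions chosen for the example, and walking every inode like this is exactly the metadata load that drags down the primary file system:

```python
# Naive cold-data scan of a NAS mount: walk every file and flag anything
# not accessed in the last 90 days. The mount point and threshold are
# illustrative assumptions.
import os
import time

MOUNT_POINT = "/mnt/nas"
COLD_AFTER_DAYS = 90
cutoff = time.time() - COLD_AFTER_DAYS * 86400

cold_files = []
for root, _dirs, files in os.walk(MOUNT_POINT):
    for name in files:
        path = os.path.join(root, name)
        try:
            if os.stat(path).st_atime < cutoff:   # last-access time check
                cold_files.append(path)
        except OSError:
            pass  # file vanished or is unreadable; skip it

print(f"{len(cold_files)} cold files found")
```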

The Ideal Architecture is Both Hot and Cold

The ideal storage architecture would be similar to a video distribution system. Hot data is kept near the users on very fast media, and cold data is stored centrally on a low-cost platform. Building this type of architecture today would combine a flash-based storage array with the intelligence to automatically and continuously migrate data to an object store based on whether it is hot or cold.

Migration to and from the object store would ideally not be based solely on usage, as it has been in most storage and caching systems of the past. Since not all data is created equal, data movement should be driven by logical business policies, automated to maintain capacity “fullness” thresholds.
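As a rough sketch of what such a policy engine might evaluate (the policy classes, age cutoffs, and fullness threshold below are illustrative assumptions, not any vendor's actual rules):

```python
# Minimal sketch of a policy-driven tiering decision. Policy fields,
# the fullness threshold, and the age cutoffs are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FileInfo:
    path: str
    days_since_access: int
    business_class: str      # e.g. "regulatory", "project", "scratch"

# Business policy: how long each class of data must stay on the hot tier.
MIN_HOT_DAYS = {"regulatory": 365, "project": 60, "scratch": 7}

FULLNESS_THRESHOLD = 0.80    # start demoting when the hot tier is 80% full

def should_migrate(f: FileInfo, hot_tier_fullness: float) -> bool:
    """Demote to the object store only when the hot tier is filling up
    AND the file has aged past its business-policy minimum."""
    if hot_tier_fullness < FULLNESS_THRESHOLD:
        return False
    return f.days_since_access > MIN_HOT_DAYS.get(f.business_class, 30)

# Example: a project file untouched for 90 days on an 85%-full hot tier.
print(should_migrate(FileInfo("/mnt/nas/proj/model.dat", 90, "project"), 0.85))  # True
```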

Mission-critical data gets the highest performance by not having to coexist on the same system as cold data, and IT administrators can choose a best-of-breed system to store cold data. In other words, by separating the platforms that hot and cold data reside on, an optimal architecture can be achieved. Mission accomplished.