The Missing Links in Software-Defined Storage

Tuesday Jan 24th 2017 by Drew Robb

Vendors and IT teams have work to do before SDS can reach its full potential.

In anthropology, the missing link is a hypothetical extinct creature lying halfway along the evolutionary line between modern human beings and their ancestors. Applying that general idea to software-defined storage (SDS), we appear to have a missing link between the current reality of SDS and its ultimate vision.

One Vision?

Let’s start with the envisioned goal of SDS. Unfortunately, the interpretation of the end game varies from person to person and company to company. Kate Davis, manager of HPE Storage marketing, believes the goal is a consolidated approach to meeting application demands that spans software-defined storage and dedicated storage systems, including the use of SDS in hyper-converged (HC) products. By federating the underlying primary and secondary storage technologies, users can shift between SDS/HC and traditional arrays without having to re-architect their apps and processes.

But not everyone shares that exact concept. Paul LaPorte, director of products at Metalogix, believes SDS is really about enabling flexibility. Content and storage devices change, regulations impacting content change, and organizations grow and merge; that’s why a rigid storage architecture eventually breaks down. SDS aims to establish a flexible storage environment that adjusts to internal and external changes. It also opens the door to the automated content caretaking needed to free administrators to focus on other emerging challenges, he said.

Another way to look at SDS is in terms of the business as a whole. Business value and utilization rate are the vision, according to Tibi Popp, CTO of Archive360. Each unit of data has a different business value and utilization rate, and that should be directly correlated to the cost of storing it. Given that storage vendors offer different storage types at different price levels, it is important to build software-defined storage that understands the value of the data and stores it on the most cost-effective medium.

“Another critical feature that customers should look for moving forward is that SDS should be able to create a predictable analysis of the cost of storing data based on the value of the data,” said Popp.
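To illustrate the idea, here is a minimal sketch in Python, assuming a hypothetical catalog of data units and illustrative tier names, prices and thresholds (none of which come from Archive360): it maps each unit’s business value and utilization to the cheapest suitable tier and produces the kind of predictable monthly cost estimate Popp describes.

```python
# Hypothetical sketch of value-based tiering (not Archive360's product):
# map each data unit's business value and utilization to the cheapest
# suitable tier, and keep a running, predictable cost estimate.

from dataclasses import dataclass

# Illustrative tiers and per-GB monthly prices; real figures vary by vendor.
TIERS = [
    ("nvme-flash",    0.20),    # hot, frequently accessed data
    ("sata-hdd",      0.04),    # warm data
    ("cloud-archive", 0.004),   # cold, rarely accessed data
]

@dataclass
class DataUnit:
    name: str
    size_gb: float
    business_value: float    # 0.0 (low) .. 1.0 (high)
    monthly_accesses: int

def choose_tier(unit: DataUnit) -> str:
    """Pick a tier from a combined value/utilization score."""
    score = 0.5 * unit.business_value + 0.5 * min(unit.monthly_accesses / 100, 1.0)
    if score > 0.7:
        return "nvme-flash"
    if score > 0.2:
        return "sata-hdd"
    return "cloud-archive"

def projected_monthly_cost(units: list[DataUnit]) -> float:
    """A predictable cost analysis: sum per-tier prices for current placement."""
    prices = dict(TIERS)
    return sum(prices[choose_tier(u)] * u.size_gb for u in units)

if __name__ == "__main__":
    catalog = [
        DataUnit("active-projects", 500, 0.9, 400),
        DataUnit("last-year-email", 2000, 0.4, 10),
        DataUnit("legal-hold-2009", 8000, 0.1, 0),
    ]
    for u in catalog:
        print(f"{u.name}: {choose_tier(u)}")
    print(f"Projected monthly cost: ${projected_monthly_cost(catalog):,.2f}")
```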

But the implications could be even deeper — and perhaps a little grim from a storage perspective. Mario Blandini, vice president of marketing, SwiftStack, stated that the end game for SDS is that storage goes away as its own distinct segment of the IT infrastructure market.

“Storage came about when applications expanded beyond mainframe compute to distributed x86 computing,” said Blandini. “With SDS and hybrid cloud, storage is no longer defined as a big box full of hard drives; it is the management of data existing across private and public data centers, all in a single namespace.”

Meanwhile, trends such as digitization, the Internet of Things (IoT), big data analytics and the cloud are causing an upheaval across many industry verticals. Even traditionally conservative spheres such as power generation have realized they have to eliminate silos and move closer to the consumer mindset of "instant everything." As a result, IT must become more agile, it must become more efficient, and it must be ready for a broad spectrum of changes coming in the future — from hardware innovations to application design to cloud strategies.

Traditional infrastructure, however, is not well suited to addressing these needs, so organizations are turning to a software-defined data center (SDDC) architecture. Because storage is a traditional pain point and a major source of IT spend, SDS is the natural next step now that server virtualization is standard practice. A complete software-defined infrastructure, therefore, includes virtualized compute, storage and networking along with a common management platform that provides a unified operating environment capable of spanning from on-premises data centers to public clouds.

“By shifting to this type of modern software-defined environment, IT departments are increasingly becoming an internal service provider rather than a cost center,” said Lee Caswell, vice president of products, storage and availability, VMware. “Software-defined storage and compute, which we also commonly call a hyper-converged environment, allows IT to take a holistic view of infrastructure and concentrate on business objectives, rather than mere technical imperatives.”

What’s Missing in SDS?

So what’s missing?

Caswell said SDS still has work to do to take full advantage of the latest x86 hardware, such as affordable all-flash and new CPU capabilities. In addition, there has to be a fundamental shift in IT teams and processes. As software-defined storage becomes easier to manage and a simple extension of what is done today, VM management roles and responsibilities shift to IT generalists and away from specialized and siloed teams. Virtualization generalists will be able to extend their expertise into storage, network, cloud and beyond, providing end-to-end management and freeing up teams and resources to focus on strategic projects rather than the day-to-day fire drills that consume the brightest IT administrators just to ‘keep the lights on.’

Further, as we shift to a multi-cloud era, continued advancement in security and availability becomes key. SDS should continue to advance in encryption and in tighter integration with network virtualization solutions to deliver the necessary levels of security regardless of where the data resides, Caswell added.

“True software-defined storage will always result in a hyper-converged infrastructure (HCI), with an accompanying hypervisor for server virtualization — those two components (hypervisor and SDS) are the fundamental components to HCI,” said Caswell. “It’s hard to separate out just the storage component versus what is happening in the rest of the IT stack around compute, networking and management.”

Davis largely agrees. She said that SDS ultimately leads to an infrastructure with end-to-end integration for consolidation, efficiency and simplicity purposes — fewer devices to use and manage, and improved agility to respond to new demands.

“This also leads to hosting SDS instances in the public cloud to act as a tier of storage for hybrid IT use cases,” said Davis.

Blandini added that SDS can provide the missing capabilities to enable hybrid cloud data management. Ultimately, infrastructure will consist of hardware networked together to enable hybrid cloud, with compute, networking and storage indistinguishable as segments; what matters is simply which software the hardware happens to be running at the time.

“The value-added features that had been provided by individual storage controllers in the data center historically will be available across multiple clouds as a service,” he said. “Storing data will be a given, how it is managed is where the segment will go.”

Inevitability

SDS is an inevitable evolution for storage across the board, from small to large data centers, much like server virtualization. So says Andy Mills, CEO of Enmotus. But where is it best to virtualize? Do it too high, in the file system and volume management layers as many early SDS solutions did, and you lose the benefits of storage media as it evolves toward memory-class performance. Do it too low in the storage layers, and you can lose application- or file-level visibility into how much storage is used and when.

“That’s why SDS will remain evolutionary, continually adapting to changing applications and underlying storage media types for some while, until we reach a truly intelligent, self-adapting, scalable environment that is truly software enabled and controlled,” said Mills.

Near term, he added, the key missing piece is the ability to fully automate the process of allocating storage media and resources to compute. SDS provides a means to implement the same old storage in a software-defined way, but it does not address the need to intelligently place data on the right media. The next step, then, is to dynamically allocate flash (or other types of premium storage) based on real-time measured usage profiles and to continually adapt to changing workloads without taxing the system.
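To make that concrete, the following is a rough sketch of such an auto-placement loop, with hypothetical names and an arbitrary flash budget rather than anything from Enmotus: access heat is tracked per extent in real time, the hottest extents are promoted into the flash budget, and heat is decayed so placement adapts as the workload shifts.

```python
# Hypothetical sketch of usage-driven data placement (not Enmotus code):
# track access "heat" per extent in real time, promote the hottest extents
# to a fixed flash budget, and age old accesses so placement keeps adapting.

import heapq
from collections import defaultdict

FLASH_CAPACITY_EXTENTS = 4   # illustrative flash budget
DECAY = 0.9                  # ages old accesses so placement adapts

class AutoTierer:
    def __init__(self):
        self.heat = defaultdict(float)   # extent id -> decayed access count
        self.on_flash = set()

    def record_access(self, extent: str) -> None:
        """Called on every I/O; builds the real-time usage profile."""
        self.heat[extent] += 1.0

    def rebalance(self) -> None:
        """Periodically promote the hottest extents and demote the rest."""
        hottest = heapq.nlargest(FLASH_CAPACITY_EXTENTS, self.heat, key=self.heat.get)
        self.on_flash = set(hottest)
        # Decay heat so a workload shift changes placement without a rebuild.
        for extent in self.heat:
            self.heat[extent] *= DECAY

if __name__ == "__main__":
    tiers = AutoTierer()
    workload = ["db-index"] * 50 + ["vm-boot"] * 20 + ["backup-jan"] * 2
    for extent in workload:
        tiers.record_access(extent)
    tiers.rebalance()
    print("On flash:", sorted(tiers.on_flash))   # only the hot extents
```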

“When that is achieved, we have our first truly software-defined storage architecture that is more than just a means of replacing expensive SANs with commodity storage and software,” concluded Mills.

