Will Enterprises Shift to True Software-Defined Storage?

For now, most storage continues to be sold as appliances, but a move to true software-defined data centers could disrupt the market.

By Bill Stevenson, Executive Chairman, Sanbolic

In recent years, enterprises have dramatically improved the utilization and flexibility of their compute resources—approximately 68 percent of servers are now virtualized. But storage has not yet seen a similar large-scale architectural shift in the enterprise or service provider market. Less than 5 percent of the $29 billion external storage market has shifted to new architectures. But storage will likely look very different five years from now.

Web-scale data centers have demonstrated the benefits of the “software-defined data center” using converged architectures built around industry standard servers and storage components. The big three (Facebook, Amazon and Google) make little or no use of expensive proprietary storage arrays. The storage intelligence of these hyperscale deployments resides in sophisticated software running on the servers. The software developed by these players is not commercially available, but disruptive new storage vendors have been working to deliver the same economies to the enterprise.

Several new storage players like Nutanix and Simplivity are going to market with appliance-based models. These solutions offer converged infrastructure—compute and storage both reside in the commodity servers, enabled by custom software. Hybrid storage vendor Nimble highlights its file system as its core competitive advantage but also comes to the market as an appliance. And at least a dozen other startups have developed flash-centric appliances.

So will new types of storage appliances or converged compute/storage appliances emerge as predominant in next-gen storage architectures? Or will the pure-play “software defined data center” extend to encompass the majority of networking and storage assets a decade from now?

Looking back a couple of decades, the type of storage array currently in widespread use emerged at a time when server processors were much less powerful, when disk drives had much less capacity, when solid state memory was much more expensive and when systems management was much less automated. There was a clear technical and economic advantage to locating storage resources in a dedicated appliance that could be managed locally. Dedicated storage appliances remain the most common architecture for Tier 1/Tier 2 storage, but many of the assumptions that drove their adoption no longer hold. Today, server processors often have excess capacity available for storage workloads. The smallest server chassis can hold many terabytes of storage, and solid state memory has become inexpensive enough to use as persistent storage. There is a lot of inertia around the appliance business model, though, even for new vendors in the storage space.

There are many reasons for this resistance. Packaging storage software together with industry standard hardware as an appliance simplifies development and build-out, and the software can be tailored for just one or a small number of hardware configurations. It is a considerably simpler business case—the value proposition can be boiled down to a few performance specs. It is also an easier deployment, as customers do not need to configure the storage architecture—they just connect it. Continuing to offer appliances makes it easier to compare a new offering with legacy appliances the customer is already familiar with, and incremental change is more palatable. For all of these reasons, it is hardly a surprise that the majority of storage start-ups of the last decade are selling appliances.

Furthermore, storage customers tend to be conservative, particularly since the data for which they are responsible is typically a core asset for their company. The teams driving server virtualization have had great success abstracting compute resources into flexible centrally administered pools upon which applications are deployed. But the storage teams in many large firms are organizationally separate and have traditionally been (primarily) responsible for acquiring storage arrays, administering storage LUNs, and making sure that the data is always available and adequately protected. These customers are comfortable buying storage appliances. They have not necessarily been exposed to the tools or new ways of architecting data in a way that supports applications cost-effectively on flexible infrastructure. In truth, it is unlikely that anyone ever got fired for buying EMC when it comes to data availability and protection.

However, storage appliances, whether legacy or new, have limitations. Each appliance typically creates another island of storage to manage. Converged appliances require replacement of the entire infrastructure stack—which is disruptive and carries a very real risk. And appliances create dependencies on a single vendor for software, hardware and service.

True software defined storage can be woven into existing infrastructure, allowing customers to fully utilize existing investments. It provides the flexibility to utilize heterogeneous hardware—to avoid dependency on a single vendor and to benefit quickly from innovation and price improvement. It simplifies management and introduces elasticity by abstracting data across legacy hardware, server-side storage, and unmanaged flash or HDD enclosures into a single pool with common storage services. Perhaps most importantly, it can provide an organization with tools to more precisely architect their data in support of applications, tuning cost, performance, availability, geo-distribution, data protection and other key attributes.
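The pooling idea described above can be sketched in a few lines of code. This is purely a conceptual illustration, not any vendor's implementation: the backend names, kinds, and the placement policy are all hypothetical, and a real system would add replication, tiering, and failure handling.

```python
class Backend:
    """One storage resource contributed to the pool.

    kind is a hypothetical label such as "flash", "hdd",
    or "legacy-array"; capacities are illustrative.
    """
    def __init__(self, name, kind, capacity_gb):
        self.name = name
        self.kind = kind
        self.capacity_gb = capacity_gb
        self.used_gb = 0


class StoragePool:
    """Abstracts heterogeneous backends into a single logical pool."""
    def __init__(self, backends):
        self.backends = backends

    def total_free_gb(self):
        # Capacity is reported for the pool as a whole,
        # not per appliance island.
        return sum(b.capacity_gb - b.used_gb for b in self.backends)

    def allocate(self, size_gb, prefer_kind=None):
        # Place the volume on a backend of the requested kind when
        # possible (e.g. flash for a performance tier), falling back
        # to any backend with enough free space.
        candidates = sorted(
            (b for b in self.backends
             if b.capacity_gb - b.used_gb >= size_gb),
            key=lambda b: (b.kind != prefer_kind, b.used_gb),
        )
        if not candidates:
            raise RuntimeError("pool exhausted")
        chosen = candidates[0]
        chosen.used_gb += size_gb
        return chosen.name


# Legacy hardware, server-side flash, and an unmanaged HDD enclosure
# all contribute to one pool with a common allocation interface.
pool = StoragePool([
    Backend("array-1", "legacy-array", 2000),
    Backend("server-ssd-1", "flash", 400),
    Backend("jbod-1", "hdd", 8000),
])
fast_vol = pool.allocate(100, prefer_kind="flash")
bulk_vol = pool.allocate(1000, prefer_kind="hdd")
```

The point of the sketch is the abstraction: applications request capacity with a desired service attribute, and the software decides placement across whatever hardware happens to be present.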

VMware just announced their vSAN software, which provides the ability to support VMware workloads on local server storage that is replicated across other servers for availability and is managed centrally across a cluster. VMware will encourage adoption in their captive accounts and validate the concept of software defined storage. Presumably, VMware is in no hurry to support non-VMware workloads, nor is VMware likely to support external storage. But VMware’s share of the 68 percent of servers that are virtualized has declined to 56.8 percent, so this solution is unavailable for many servers. And savvy customers recognize that overdependence on a large established vendor is likely to result in both higher cost and less innovation, so will be looking to other vendors.

Customers have seen large, tangible benefits in abstracting heterogeneous compute resources into a centrally managed and flexible pool. Similar benefits are derived from abstracting heterogeneous storage resources into a flexible pool that can then be used to dynamically configure storage capacity, performance and services optimized for each application. The story that unfolds will be one of data architecture rather than storage management. We expect that adoption of true software defined storage will be eagerly watched and driven by large customers with distributed data centers, system integrators, and storage component vendors, all of whom have significant interest in seeing the existing proprietary storage appliance business model disrupted.

This article was originally published on Wednesday, March 12, 2014.