
By: Steven Lamb, CEO, ioFABRIC 


Increasing adoption of Software-Defined Storage (SDS) to reduce data center complexity is helping companies increase business agility and realize a better return on investment. It’s being called – and rightly so – a game-changing technology.

However, this game-changing but still emerging category has room for improvement. There are concrete ways to maximize the potential of SDS and achieve lower costs, greater efficiency, and better ease of use.

Even with SDS, storage is still complex, hard to manage, and hard to optimize. No matter how advanced the SDS solution, it takes time and effort to manage IT resources in distributed networks consisting of flash and hard drives across direct-attached, SAN, NAS, and cloud storage. In many infrastructures, capacity and performance utilization of only 60 percent is typical, a result of the complexity of the underlying storage resources and the solution’s inability to match resources to workload needs.

A better, game-winning SDS solution hides the complexity of the underlying storage resources, automatically monitoring and adapting to infrastructure and application workload changes.

For example, adding new, whiz-bang, high-end arrays or cloud storage may solve an immediate problem, but creates yet another storage silo, while existing but underutilized assets are squandered, effectively costing you capacity and performance you’ve already paid for.

In real-world IT environments, more than 80 percent of business data is inactive. Runaway data growth is generally not due to active files—the files people create and use to do their jobs day in and day out—but rather inactive files, many of which are used briefly and never accessed again. With their all-resources-are-created-equal design, many SDS solutions lack the ability to distinguish between active and inactive data and among different media types. A great deal of manual effort is still needed to keep active data on high-performance media for fast access and to move obsolete data onto inexpensive, higher-capacity media for long-term archiving.

This creates an opportunity for more automation, for extending the value and utility of storage systems. The expense of expanding the data center to accommodate stale data is unjustifiable, and challenges the economics of all-flash arrays or cloud storage. SDS with advanced storage automation unifies existing storage resources, eases management, and simplifies the deployment of new storage such as flash arrays. A big-league SDS uses thin provisioning, on-demand provisioning, and micro-tiering for intelligent data placement.
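
To make the tiering idea concrete, here is a minimal sketch of how placement could be driven by how recently data was accessed. The tier names, age thresholds, and file-walking approach are assumptions for illustration only; a production SDS would derive these from observed workloads rather than fixed rules.

```python
import os
import time

# Hypothetical tiers and age thresholds (days since last access).
# These values are illustrative assumptions, not a product default.
TIERS = [
    ("flash", 7),        # touched within a week -> high-performance media
    ("capacity", 90),    # touched within 90 days -> bulk HDD / SAN
    ("archive", None),   # anything older -> cloud or archive tier
]

def choose_tier(path):
    """Classify a file by how recently it was accessed."""
    age_days = (time.time() - os.stat(path).st_atime) / 86400
    for tier, limit in TIERS:
        if limit is None or age_days <= limit:
            return tier

def plan_placement(root):
    """Walk a directory tree and propose a tier for every file."""
    plan = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            full = os.path.join(dirpath, name)
            plan[full] = choose_tier(full)
    return plan

if __name__ == "__main__":
    for path, tier in plan_placement("/data").items():
        print(f"{tier:8s} {path}")
```

The point of the sketch is the policy, not the mechanics: once placement decisions are expressed as rules over data activity, the software can apply them continuously instead of an administrator migrating data by hand.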

Even with SDS, data protection can be costly and unreliable. Ideally, SDS eliminates some or many of the point solutions required to protect data. In reality, first-gen, rookie-year SDS with one-size-fits-all data protection features may incur higher costs, consume excessive capacity, and maintain replicas your workloads don’t need. Worse, it may not provide sufficient protection levels, risking downtime and inferior service.

Your data and applications likely have distinctly different requirements for protection and availability, so SDS should offer better data protection, like dynamic replicas that automatically balance, move, or replicate data. Snapshots and clones should deliver instant point-in-time read-only or writeable copies. Fault domains for availability, self-healing, and dynamic data routing also ease data protection drama.
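
As a rough illustration of how such per-application requirements might be expressed, consider the sketch below. The application names, policy fields, and fault-domain logic are assumptions made for the example, not an actual ioFABRIC schema.

```python
# Per-application protection policies (illustrative field names only).
PROTECTION_POLICIES = {
    "oltp-db":    {"replicas": 3, "fault_domains": 3, "snapshot_every_min": 15},
    "file-share": {"replicas": 2, "fault_domains": 2, "snapshot_every_min": 240},
    "scratch":    {"replicas": 1, "fault_domains": 1, "snapshot_every_min": 0},
}

def placement_targets(app, available_domains):
    """Spread an application's replicas across distinct fault domains."""
    policy = PROTECTION_POLICIES[app]
    needed = policy["fault_domains"]
    if len(available_domains) < needed:
        raise ValueError(f"{app}: need {needed} fault domains, have {len(available_domains)}")
    # Place one replica in each of the first `needed` domains.
    return available_domains[:needed]

# Example: three racks treated as independent fault domains.
print(placement_targets("oltp-db", ["rack-a", "rack-b", "rack-c"]))
```

The design point is that protection becomes a declared objective per application, so a scratch volume is not paying for the same replica count as a production database.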

For all its advantages—and they are numerous—not even current SDS can avert the likelihood that storage operating costs will run five times the initial acquisition cost. Data growth is outstripping storage budget growth by more than 10x, demanding more capacity, adding complexity, and making it harder to maintain service levels. Free time and extra staff, of course, are not growing at all.

This is an inefficient, unsustainable foundation for long-term growth. Instead, game-winning SDS should provide IT resource time savings from installation onwards. SDS should mitigate the spiraling costs by driving efficiency, reducing the amount of IT knowledge required to make effective use of multiple types of storage devices. Licensing it by capacity under management as an annual software subscription also keeps costs predictable.

All these opportunities for improvement aside, the single most substantial way SDS can be improved is to make it application-specific. For storage to deliver the proper service to applications in terms of performance, capacity, and data protection requirements, SDS needs to talk directly to each application. Without this capability, achieved through RESTful API integration with third-party workflows, SDS cannot adapt and scale dynamically enough to accommodate changes in infrastructure and workloads, or allow applications to control storage policy directly.
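
To show what application-driven policy might look like in practice, here is a minimal sketch of an application declaring its own service level through a REST call. The endpoint, payload fields, and token are assumptions for illustration; this is not a documented ioFABRIC Vicinity API.

```python
import requests

# Hypothetical SDS management endpoint and credentials.
API = "https://sds.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <token>"}

def set_qos(app_name, min_iops, max_latency_ms, protection="2-replica"):
    """Ask the SDS layer to enforce QoS and protection objectives for one app."""
    payload = {
        "application": app_name,
        "qos": {"min_iops": min_iops, "max_latency_ms": max_latency_ms},
        "protection": protection,
    }
    resp = requests.post(f"{API}/policies", json=payload, headers=HEADERS, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Example: a latency-sensitive database declares its own service level.
# set_qos("orders-db", min_iops=20000, max_latency_ms=2, protection="3-replica")
```

Once policy can be set this way, orchestration tools and the applications themselves can adjust storage behavior as workloads change, rather than waiting on a storage administrator.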

Application-specific SDS ensures Quality of Service (QoS) levels are met for each application’s needs. Boosted by heavy automation, the software monitors and optimizes data placement over all available resources to automatically fulfill application requirements on demand. The domino effect is reduced management overhead, more value, and better utilization.

ioFABRIC’s unique QoS-driven storage automation solution, Vicinity, maintains service level objectives and automates management, extending the life of storage investments while increasing their utilization. Vicinity software centrally manages and monitors data placed across diverse resources, including flash arrays, direct-attached devices, SAN/NAS, and cloud.

SDS that automates, simplifies, delivers more agility, efficiency, and utilization, and lets admins define what they need from storage while the software does the rest: now that’s a whole new ball game.
