
By: Stefan Bernbo, founder and CEO of Compuverde

According to IBM, humans create 2.5 quintillion bytes of data every day; in fact, 90 percent of the data in the world today has been created in the last two years alone. All this data has to be stored somewhere, putting a strain on traditional storage architectures. The standard model of storage involves buying more hardware, but the rate of data growth has outpaced most organizations’ ability to buy servers at the pace required, and scaling by adding hardware is too slow in any case.

Instead, enterprises are considering new storage options that are more flexible and scalable. Software-defined storage (SDS) offers that flexibility. In light of the varied storage and compute needs of organizations, two SDS options have arisen: hyperconverged and hyperscale. Each approach has its distinctive features and benefits, which are discussed below. First, it’s important to understand what came before hyperconverged and hyperscale approaches.

The Evolution of Storage

Converged storage combines storage and computing hardware to reduce delivery time and minimize the physical space required in virtualized and cloud-based environments. This was an improvement over the traditional storage approach, where storage and compute functions were housed in separate hardware. The goal was to improve data storage and retrieval and to speed the delivery of applications to and from clients.

In the converged storage model, there are discrete hardware components, each of which can be used on its own for its original purpose in a “building block” model. Converged storage is not centrally managed and does not run on hypervisors; the storage is attached directly to the physical servers.

So then, what does it mean to be hyperconverged? This storage model is software-defined, and all components are converged at the software level; they cannot be separated out. This model is centrally managed and virtual machine-based. The storage controller and array are deployed on the same server, and compute and storage are scaled together. Each node has compute and storage capabilities. Data can be stored locally or on another server, depending on how often that data is needed.
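As a rough illustration, a hyperconverged cluster can be modeled as identical nodes that each contribute both compute and storage, so the two resources grow in lockstep with node count. The Python sketch below is purely hypothetical; the class names and per-node capacities are illustrative assumptions, not any vendor’s API.

    # Minimal sketch of hyperconverged scaling: every node bundles
    # compute and storage, so both resources always grow together.
    # Class names and capacities are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class HyperconvergedNode:
        cpu_cores: int = 32   # assumed compute capacity per node
        storage_tb: int = 48  # assumed storage capacity per node

    class HyperconvergedCluster:
        def __init__(self) -> None:
            self.nodes: list[HyperconvergedNode] = []

        def scale_out(self, count: int = 1) -> None:
            """Adding a node always adds compute AND storage (1:1 scaling)."""
            self.nodes.extend(HyperconvergedNode() for _ in range(count))

        @property
        def total_cpu_cores(self) -> int:
            return sum(n.cpu_cores for n in self.nodes)

        @property
        def total_storage_tb(self) -> int:
            return sum(n.storage_tb for n in self.nodes)

    cluster = HyperconvergedCluster()
    cluster.scale_out(4)
    print(cluster.total_cpu_cores, cluster.total_storage_tb)  # prints: 128 192

The point of the model is the single node type: whenever the data center needs more of either resource, it adds another identical building block and gets more of both.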

Flexibility and agility are needed to manage today’s data demands effectively and efficiently, and these are what hyperconverged storage offers. It also promotes cost savings: organizations can use commodity servers, since software-defined storage works by taking features typically found in hardware and moving them to the software layer. Organizations that need 1:1 scaling of compute and storage would use the hyperconverged approach, as would those that deploy VDI environments. The hyperconverged model is storage’s version of a Swiss Army knife; it is useful in many business scenarios. Every building block works exactly the same; it’s just a question of how many building blocks a data center needs.

Now let’s turn our attention to the hyperscale model, a new storage approach created to address differing storage needs. Hyperscale computing is a distributed computing environment in which the storage controller and array are separated. As its name implies, hyperscale is the ability of an architecture to scale quickly as greater demands are made on the system. This kind of scalability is required in order to build big data or cloud systems; it’s what Internet giants like Amazon and Google use to meet their vast storage demands. However, software-defined storage now enables many enterprises to enjoy the benefits of hyperscale.
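By contrast, a hyperscale environment can be sketched as two independent pools, storage nodes and compute nodes, each grown on its own as demand dictates. Again, this is a minimal sketch under assumed names, not a real product interface.

    # Minimal sketch of hyperscale scaling: storage and compute are
    # separate pools that can be grown independently of one another.
    # All names here are illustrative assumptions.
    class HyperscaleCluster:
        def __init__(self) -> None:
            self.compute_nodes = 0
            self.storage_nodes = 0

        def scale_compute(self, count: int = 1) -> None:
            """Add compute capacity without buying any storage."""
            self.compute_nodes += count

        def scale_storage(self, count: int = 1) -> None:
            """Add storage capacity without buying any compute."""
            self.storage_nodes += count

    cluster = HyperscaleCluster()
    cluster.scale_storage(10)  # a data-heavy workload grows only the storage pool
    cluster.scale_compute(2)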

As with hyperconverged storage, hyperscale reduces costs because IT organizations can use commodity servers, and a data center can run millions of virtual servers without the added expense that the same number of physical servers would require. Data center managers want to get rid of the refrigerator-sized disk shelves of NAS and SAN solutions, which are difficult to scale and very expensive. With hyper solutions, it is easy to start small and scale up as needed. Using standard servers in a hyper setup creates a flattened architecture: less hardware needs to be bought, and what is bought costs less. Hyperscale enables organizations to buy commodity hardware; hyperconverged goes one step further by running both elements, compute and storage, on the same commodity hardware. It becomes a question of how many servers are necessary.
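To make the “how many servers” question concrete, consider a hypothetical workload that needs 64 CPU cores and 480 TB of storage. The figures and node sizes below are invented for illustration; the point is only that hyperconverged sizes one node type against both requirements, while hyperscale sizes each pool separately.

    import math

    # Hypothetical requirements and node sizes, for illustration only.
    need_cores, need_tb = 64, 480
    hci_cores, hci_tb = 32, 48                     # one hyperconverged node
    storage_node_tb, compute_node_cores = 96, 32   # dedicated hyperscale nodes

    # Hyperconverged: a single node type must satisfy BOTH requirements,
    # so the larger of the two demands dictates the node count.
    hci_nodes = max(math.ceil(need_cores / hci_cores),
                    math.ceil(need_tb / hci_tb))   # 10 nodes (storage-bound)

    # Hyperscale: each pool is sized independently.
    hs_storage = math.ceil(need_tb / storage_node_tb)        # 5 storage nodes
    hs_compute = math.ceil(need_cores / compute_node_cores)  # 2 compute nodes

    print(hci_nodes, hs_storage + hs_compute)  # prints: 10 7

Under these made-up numbers, a storage-heavy workload over-buys compute in the hyperconverged model; with a workload that scales compute and storage together, the comparison could just as easily tilt the other way.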

Two Options for Today’s Storage Needs

As described above, the hyperconverged approach is like having one really useful box that contains everything you need, while hyperscale has two sets of boxes: one set of storage boxes and one set of compute boxes. Which to choose depends on what the architect wants to do, according to the needs of the business. A software-defined storage solution can take over all the hardware and turn it into a type of appliance, or it can run as a virtual machine, which makes it a hyperconverged configuration.

Perhaps the best aspect of these two approaches is that you don’t have to choose one or the other. Data center architects can mix and match the models according to their needs at any given time. Those needs will remain fluid as technologies change and as data continues to proliferate, making hyperconverged and hyperscale approaches all the more attractive due to their flexibility and cost-effectiveness. Enterprises can use these approaches to scale as needed as they face the future.