By: Kiran Sreenivasamurthy, Director of Product Management & Product Marketing, Maxta Hyper-Convergence

Experts have noted that the storage space has not seen much innovation in the past decade. While many enterprise data center technologies have gone through major architectural overhauls, the storage industry has remained virtually unchanged, with the exception of some flash caching and solid-state storage. The one area that has seen significant growth and has come to fruition is hyper-convergence.

Why? Over the last several years, server virtualization has redefined computing in a profound way through the abstraction, pooling, and on-demand allocation of compute resources, introducing new levels of simplicity, availability, agility, and cost efficiency. At the same time, the basic architecture of enterprise storage has not evolved: the SAN/NAS paradigm remains virtually unchanged, and innovation has been very limited. There is a significant gap between compute and storage that hyper-convergence is working to close.

Hyper-convergence, sometimes called Server SAN or hyper-converged infrastructure, provides cloud-like economics and scale by integrating compute resources, storage resources, software-defined virtualization, and software-defined storage on standard x86 platforms. Hyper-converged solutions are delivered as software-only products, reference architectures, or dedicated appliances.

In the Wikibon report, The Rise of Server SAN, hyper-convergence was estimated to be a $2B market in 2014, growing tenfold to $20B by 2020. The report also claims that hyper-convergence is poised to disrupt traditional IT architectures. Additional trends can be found in the 451 Research special report, Software-Defined Storage and Hyper-Converged Infrastructure in the Midmarket, in which survey respondents reported an 82% likelihood of considering hyper-convergence for virtualized data centers.

Evolution from Traditional IT to Convergence to Hyper-convergence

Broadly speaking, information technology (IT) is based on 3 elements—compute, storage, and networking. Traditionally, each one of these elements was delivered as separate hardware and software products. Servers were designed to support compute workloads and storage arrays were designed to support storage workloads. This model works well for physical data centers where applications run on hardware-defined servers, but is not well suited for software-defined data centers with virtualized servers and desktops.

Server virtualization transformed the pool of independent physical servers into a shared pool of compute resources to be allocated on demand to applications. Initially, virtualized server environments leveraged storage arrays conforming to the traditional IT model.

Converged systems were designed to simplify the ordering and deployment of virtualized IT. Vendors packaged servers, server virtualization software, storage arrays, and networking products into a single offering, so all the essential elements of virtualized IT could be ordered at once. The pre-packaging and certification also simplified deployment and validated that these independent products, designed by multiple vendors, were interoperable. Nevertheless, converged systems today are still built from independently developed products in which compute and storage are separated rather than converged.

The challenge with traditional IT and converged systems is that they don’t address the gap between compute and storage in terms of simplicity, availability, agility, and cost. Hyper-convergence closes this gap by enabling compute and storage to run on the same server platform without compromising storage sharing or functionality. Running on the same platform makes it possible to manage storage with the same constructs as compute, namely virtual machines, further simplifying IT management and increasing IT agility. Storage thereby gains the same levels of simplicity, availability, agility, and cost reduction that server virtualization brought to compute. The economic advantages enabled by server virtualization are further enhanced by the hyper-converged approach, and innovators are introducing optimized solutions that simultaneously support multiple abstraction approaches, such as whole-system virtualization (hypervisors) and operating-system-level virtualization (containers).

The Main Drivers Enabling Hyper-Convergence Now

It was once a challenge to run some enterprise applications on x86 server platforms due to their relatively modest performance, so it was not possible to run applications, server virtualization, and storage on the same platform. Thanks to Moore’s law, x86 server technology has made giant leaps over the last decade. Over the same period, the compute requirements of most applications grew, but at a slower pace than the improvements in server technology. Today, standard best-of-breed x86 hardware with multi-core processors, large amounts of memory, fast networking ports, and advanced solid-state storage is capable of supporting hyper-converged solutions for most workloads within enterprise data centers.

With the scale-out architecture enabled by hyper-convergence, today’s server platforms can support mission-critical and business-critical enterprise databases and applications with the reliability, availability, performance, and capacity they demand, while future-proofing IT infrastructure and lowering total cost of ownership (TCO), as noted in the special report by 451 Research.

Hyper-Convergence—The Killer Application for Server and Desktop Virtualization

Server and desktop virtualization made a significant impact over the last decade. However, the true potential of virtualization is still untapped due to storage issues. Hyper-convergence fulfills customer expectations by combining the scalability and agility levels of cloud environments with the simplicity and cost effectiveness of virtualization.

To achieve their full potential, hyper-converged solutions must meet certain criteria. They must leverage flash technology for storage (both read and write-back caching) to deliver the performance required for virtualized infrastructure, while also leveraging disk drives to keep costs attractive. Large-scale deployments of virtual desktops require zero-copy clones. Storage management should be simple, yielding operational savings. Support for an unlimited number of performance-efficient snapshots is required to implement a comprehensive data protection strategy for virtualized IT infrastructure. And to enable flexibility, agility, and heterogeneity, solutions must support multi-hypervisor environments today while enabling support for other virtualization approaches, including Docker containers, in the future.
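
Both “zero-copy clones” and “performance-efficient snapshots” generally rely on metadata-only copies that share underlying data blocks until a block is overwritten (copy-on-write). The following is a minimal, vendor-neutral Python sketch of that idea; the class and method names are illustrative assumptions, not Maxta’s implementation.

```python
# Vendor-neutral sketch of metadata-only ("zero-copy") snapshots and clones.
# A snapshot or clone copies only the block map; data blocks are shared and
# diverge only when they are overwritten (copy-on-write).

class VirtualDisk:
    def __init__(self, blocks=None):
        self.blocks = dict(blocks or {})  # block index -> shared data block

    def snapshot(self):
        # O(metadata) operation: copy the map, not the data
        return VirtualDisk(self.blocks)

    clone = snapshot  # a writable clone starts as the same metadata copy

    def write(self, index, data):
        # Copy-on-write: only the overwritten block diverges from the parent
        self.blocks[index] = data

    def read(self, index):
        return self.blocks.get(index)


base = VirtualDisk({0: b"boot image", 1: b"app binaries"})
desktop = base.clone()             # near-instant, no data copied
desktop.write(1, b"user profile")  # only this block now differs from base
assert base.read(1) == b"app binaries"
```

In a virtual desktop deployment, hundreds of desktops can be cloned this way from a single golden image in seconds, which is why zero-copy clones are called out above as a requirement for large-scale VDI.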

The 3 Approaches to Deploying Hyper-convergence

Three approaches exist for deploying hyper-converged solutions: 1) software, 2) pre-validated appliances, and 3) dedicated appliances. Maxta is committed to maximizing the promise of hyper-convergence by offering industry-leading benefits of choice, flexibility, simplicity, agility, and scalability, along with enterprise-class data services at a very attractive cost. For this reason, Maxta does not offer “one size fits all” dedicated appliances that limit the potential of hyper-convergence.

Instead, Maxta offers customers the choice of pre-configured and pre-validated MaxDeploy appliances, to facilitate easy and rapid ordering and deployment, as well as MxSP software products that can be tailored to any x86 server configuration of choice. Unlike other companies that require you to purchase dedicated appliances with fixed configurations and expensive upgrades, Maxta solutions deliver the flexibility to use any server, any hypervisor, and any storage hardware you want, in addition to the ability to scale compute and storage independently.

Common Use Cases for Hyper-convergence

Hyper-converged solutions are deployed in a wide range of customer environments, enabling various types of workloads including mission-critical applications, virtual desktop infrastructure (VDI), remote offices and branch offices (ROBO), disaster recovery (DR), and test and development. Additionally, hyper-converged solutions are in use by industry-leading cloud service providers as a foundation for their client services.

Criteria to Evaluate

The following is a checklist of hyper-converged features, functions, and benefits as key criteria for evaluating potential solutions for enterprise data centers, cloud infrastructures, and similar environments.

 Choice

  • To use any x86 server whether it is a brand-name or a “white box”
  • To run on any server model up to the latest and greatest generation
  • To run on any hypervisor
  • To use mixed drive types such as Flash, SSD, and spinning disk in any configuration

 Manageability

  • VM-centric data services such as snapshots and clones
  • Single pane of glass for VM and data management
  • Pre-configured and pre-validated appliances

 Scalability

  • Global namespace
  • Scale-out and scale-up
  • Scale compute and storage independently
  • Flash-optimized
  • Log-based data layout

 Resiliency/HA

  • Data availability, strong checksums and RAID support
  • Local mirroring and local replication
  • MetroCluster support

 Enterprise-Class Data Services

  • Highly efficient snapshots & clones
  • Ability to co-locate VM & associated data
  • Data protection policies: schedule creation & retention of snapshots (see the sketch after this list)
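
As a rough illustration of the data protection bullet above, the snippet below sketches a per-VM snapshot schedule and retention policy in Python. The policy fields and function are hypothetical, intended only to show the kind of logic such a policy implies.

```python
# Hypothetical per-VM data protection policy: snapshots are taken on a
# schedule and the oldest ones are expired once a retention limit is exceeded.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ProtectionPolicy:
    interval: timedelta   # how often to snapshot the VM
    retained: int         # how many snapshots to keep

def evaluate(snapshot_times, policy, now):
    """Return (snapshot_due, snapshots_to_expire) for a single VM."""
    due = not snapshot_times or now - max(snapshot_times) >= policy.interval
    ordered = sorted(snapshot_times)
    expire = ordered[:-policy.retained] if len(ordered) > policy.retained else []
    return due, expire

# Example: snapshot every hour, keep the 24 most recent snapshots.
policy = ProtectionPolicy(interval=timedelta(hours=1), retained=24)
due, expire = evaluate([datetime(2015, 6, 1, 9)], policy, now=datetime(2015, 6, 1, 11))
```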

 Capacity Optimization

  • Inline compression and deduplication
  • Thin provisioning
  • Space reclamation

 Cost Efficiency

  • Simplify management and provide VM-centric administration for OPEX savings
  • Ability to run on any x86 platform and utilize capacity optimization for CAPEX savings

Maxta Hyper-Converged Solutions

Maxta’s approach to hyper-convergence provides enterprise and service provider customers with the greatest opportunity to modernize and simplify the management of their virtual data centers. This is accomplished without compromising choice of servers, hypervisors, or enterprise-class data services.

With Maxta MaxDeploy, customers can choose from pre-configured, pre-validated appliances, providing simplicity of ordering and deployment as well as peace of mind through guaranteed interoperability and predictable performance. With MxSP software-defined storage, customers can customize their solutions and run on existing hardware. Customers choosing commodity hardware will also see significant CAPEX savings in addition to the OPEX savings driven by VM-level simplification of storage management.

Additionally, Maxta lets users tailor storage to their applications by configuring the following properties at VM or virtual disk granularity (see the sketch after the list):

  1. Number of replica copies and MetroCluster – deliver a higher level of availability
  2. Rebuild policies – prioritize rebuilds based on the importance of the VM
  3. Wide striping, block size, and read caching – tune performance based on the application type
  4. Compression – optimize capacity and performance based on standard disks versus self-encrypting disks
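
To make the VM-level granularity concrete, here is a hypothetical sketch of such a policy as a simple Python data structure. The field names and defaults are illustrative assumptions and do not reflect Maxta’s actual configuration interface.

```python
# Hypothetical per-VM (or per-virtual-disk) storage policy covering the four
# property groups listed above; field names and defaults are illustrative only.
from dataclasses import dataclass

@dataclass
class VmStoragePolicy:
    replicas: int = 2            # number of replica copies (availability)
    metro_cluster: bool = False  # stretch replicas across sites
    rebuild_priority: int = 5    # 1 = rebuild first, 10 = rebuild last
    stripe_width: int = 4        # wide striping across nodes and drives
    block_size_kb: int = 4       # match the application's I/O pattern
    read_caching: bool = True
    compression: bool = True     # e.g. toggle for self-encrypting disks

# A business-critical database VM versus a test/dev VM.
oltp_db = VmStoragePolicy(replicas=3, rebuild_priority=1, block_size_kb=8)
test_vm = VmStoragePolicy(replicas=2, rebuild_priority=9)
```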

A hyper-converged environment that realizes its full potential takes many factors into consideration, including choice of hardware and software, manageability, scalability, cost savings, and resilience. There are different approaches to deploying hyper-converged solutions: some vendors offer a “one size fits all” solution, while others offer more flexible choices to fit your environment. Choose wisely.