By Hans Ashlock, Technical Marketing Manager, QualiSystems

Gartner predicts that by 2016, nearly 25% of the Global 2000 IT organizations will be mainstreaming DevOps methodologies for bringing applications into production IT. A recent 451 Research survey of enterprises found that 56% of application workloads will be in private or hybrid cloud environments within the next two years.

These shifts are driving the adoption of a number of technologies and tools, including containers, which are garnering big attention from investors. In this article we’ll look at why containers are a critical enabler for both DevOps and hybrid cloud. We’ll also discuss their limitations and how they can be used with other DevOps tools to form a true DevOps-driven approach to deploying workloads on hybrid cloud.

The primary advantage of containers is that they allow applications to look uniform as they cross between non-production and production environments, and between on-premises infrastructure and the cloud. Containers do this by packaging applications with their associated libraries and dependencies, and then allowing them to run as individual containers on a common container runtime. Multiple containers can run on a single OS, offering the same level of abstraction and portability that virtual machines do, without the overhead of duplicating the OS layer.
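To make that concrete, here is a minimal sketch using the Docker SDK for Python (the docker package); the image names and command are placeholders. On a Linux host running Docker natively, it runs containers from two different images and shows that both share the host's kernel rather than booting an OS of their own:

```python
# Minimal sketch with the Docker SDK for Python (pip install docker).
# Assumes a local Docker daemon on a Linux host; image names are placeholders.
import platform

import docker

client = docker.from_env()
host_kernel = platform.release()

for image in ("alpine", "debian:stable-slim"):
    # Run a throwaway container and capture its output.
    output = client.containers.run(image, "uname -r", remove=True)
    # Both containers report the same kernel as the host: no duplicated OS layer.
    print(f"{image}: container kernel {output.decode().strip()}, host kernel {host_kernel}")
```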

This allows containerized applications to overcome one of the top challenges of hybrid cloud: public and private clouds often don't share the same virtualization platform. For example, migrating an application from AWS to VMware isn't trivial because, even though the application is virtualized, it has to cross from a Xen hypervisor to an ESXi hypervisor – a process that will likely require some kind of lengthy migration. For a containerized application this isn't the case, because the same container runtime runs on Xen or ESXi, allowing the containerized application to move seamlessly from a VM running on Xen to a VM running on ESXi.
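As a hedged illustration, the sketch below points the Docker SDK for Python at two daemons, one on a VM in a Xen-based cloud and one on a VM running on ESXi, and runs the same image on each. The endpoints and image name are hypothetical, and a real setup would secure the connections with TLS:

```python
# Hypothetical endpoints: a VM on a Xen-based public cloud and a VM on ESXi.
# Both run the same Docker engine, so the same image runs unchanged on either.
import docker

XEN_HOST = "tcp://xen-vm.example.com:2375"    # placeholder; real setups add TLS
ESXI_HOST = "tcp://esxi-vm.example.com:2375"  # placeholder; real setups add TLS

for endpoint in (XEN_HOST, ESXI_HOST):
    client = docker.DockerClient(base_url=endpoint)
    client.images.pull("myapp", tag="1.0")             # same image on both hypervisors
    client.containers.run("myapp:1.0", detach=True)    # identical runtime behavior
```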

Furthermore, containers are packaged as images that include both the application and all of its dependencies (libraries, OS distribution layers, etc.). These dependencies are typically the Achilles' heel of migrating applications between dev, test, and production environments. For example, your development environment might run the application on Python 2.7 while production runs Python 3, and when the application is pushed from dev to production it breaks because of some unforeseen incompatibility between the two versions. By packaging these dependencies into a single containerized app, containers avoid discrepancies between environments and enable reliable migration between non-production and production. This makes containers ideal for enabling a more DevOps-oriented approach to application development and deployment.
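Here is a sketch of what that packaging looks like, assuming a hypothetical app/ directory with a pinned requirements.txt: the interpreter version and every library travel inside the image, so dev, test, and production all run exactly the same stack.

```python
# Write a Dockerfile that pins the interpreter and installs pinned dependencies,
# then build it with the Docker SDK for Python. Paths and versions are illustrative.
import pathlib

import docker

dockerfile = """\
FROM python:2.7-slim
COPY app/ /app/
RUN pip install -r /app/requirements.txt
CMD ["python", "/app/main.py"]
"""

pathlib.Path("Dockerfile").write_text(dockerfile)

client = docker.from_env()
# The resulting image carries the interpreter and libraries with the application,
# so every environment that runs it sees the same dependency stack.
image, _ = client.images.build(path=".", tag="myapp:1.0")
```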

The additional layer of abstraction that containers provide makes them essential for developing an effective hybrid cloud strategy for application workloads. In addition, their ability to ensure fidelity between pre-production and production deploys makes them critical for creating an effective DevOps practice. However, container technologies don’t eliminate the need for DevOps tools.

The Challenges of Using Containers

First, container technology like Docker doesn't eliminate the need for configuration management, nor is it meant to. Configuration management tools like Puppet, Chef, and Ansible provide extremely sophisticated mechanisms for creating comprehensive automation to maintain server and application configuration. They allow policy-based provisioning and dynamic execution, and they can address configuration aspects that containers can't.
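The core idea behind those tools is declarative, idempotent "desired state": describe what a server should look like and only act when reality differs. Below is a toy Python sketch of that idea, assuming a Debian-style host; the package names are placeholders and real tools cover far more ground.

```python
# Toy illustration of the desired-state model behind Puppet, Chef, and Ansible:
# declare what should be present and converge only when reality differs.
import shutil
import subprocess

def ensure_package(name: str) -> None:
    """Install a package only if its binary isn't already on the PATH (idempotent)."""
    if shutil.which(name):
        return  # already converged; running this again changes nothing
    subprocess.run(["apt-get", "install", "-y", name], check=True)

# Desired state for this host; real tools express this as declarative policy
# and also manage users, files, services, templates, secrets, and so on.
for package in ("nginx", "git"):
    ensure_package(package)
```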

But there is overlap, and rather than replacing configuration management tools, container technologies like Docker make them more efficient. The cross-platform portability that containers provide can, in fact, be approximated with DevOps tools like Chef, but doing so leads to extremely challenging and complex code. Container technologies can create “run anywhere” abstracted deployments of applications effortlessly, in turn allowing DevOps tools to focus on the configuration automation they do best.

Furthermore, DevOps tools play a special role in automating container builds – that is, orchestrating the creation of the initial application image, along with all of the dependencies and configurations that will then be “containerized.” For this, DevOps tools are absolutely necessary. In fact, Ansible is so good at this that it’s beginning to be seen as the perfect match for Docker.
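One hedged way to picture that division of labor, with all file names and playbook contents assumed rather than real: Ansible assembles the application, its dependencies, and its configuration into a build directory, and Docker then packages the result.

```python
# Sketch of a container build driven by a DevOps tool. The inventory, playbook,
# and build directory are placeholders; the playbook is assumed to stage the app,
# its dependencies, and its config files (plus a Dockerfile) into ./image-build.
import subprocess

import docker

# 1. Let Ansible assemble the image contents.
subprocess.run(
    ["ansible-playbook", "-i", "inventory.ini", "build-image.yml",
     "--extra-vars", "build_dir=./image-build"],
    check=True,
)

# 2. Containerize the assembled directory.
client = docker.from_env()
image, _ = client.images.build(path="./image-build", tag="myapp:ci")
print("built", image.tags)
```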

As quoted in TheNewStack, John Minnihan from ModernRepo claims, “Using Ansible to build containers is so fast, easy + reliable that I can’t imagine doing it any other way.” DevOps tools like Marathon, Kubernetes, Swarm, and Fleet are also needed for the dynamic orchestration, scheduling, and scaling of containers within the context of a true distributed application.
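For the scheduling and scaling side, here is a minimal sketch using the official Kubernetes Python client; the deployment name and namespace are placeholders, and it assumes the containerized app is already deployed to a cluster that your local kubeconfig points at.

```python
# Minimal sketch with the official Kubernetes Python client (pip install kubernetes).
# Assumes a deployment named "myapp" already exists in the "default" namespace.
from kubernetes import client, config

config.load_kube_config()      # use the local kubeconfig
apps = client.AppsV1Api()

# Ask the orchestrator for five replicas; the scheduler spreads them across nodes
# and keeps them running, which is the dynamic scaling piece containers rely on.
apps.patch_namespaced_deployment_scale(
    name="myapp",
    namespace="default",
    body={"spec": {"replicas": 5}},
)
```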

Lastly, DevOps tools that focus on enabling continuous processes, like Jenkins and other release automation tools, are essential for taking a technology like containers and incorporating it into the larger goal of continuous deployment and delivery. Environment sandboxes are another tool that helps leverage the real power of containers. Sandboxes address the question of where to develop and test an application so that the infrastructure it runs on looks the same from the development lab to the test lab to the production datacenter or cloud.

As mentioned, containers can address this issue with regard to the configuration of the application and its requisite software dependencies, but they cannot account for more complex infrastructure configurations that might be necessary, like network and storage optimizations. In fact, because most implementations of containers are deployed onto bare metal without virtualization, the likelihood that they will be exposed to operating system and physical infrastructure differences is much greater.

The Power and Flexibility of Sandboxes

Sandboxes, on the other hand, are self-contained application and infrastructure environments that can be configured to look exactly like the final target deployment environment. A sandbox includes a comprehensive model of the application, including its containers, but it also models any non-containerized components, like physical and virtual network connections, and any other apps and tools that are part of the target production environment.

For example, developers can create a sandbox that looks like the production environment – from the network and hardware to OS versions, software, and cloud APIs. They do their development in that sandbox for a short period of time and, when they are done, they tear it down. Testers can do the same thing; in addition, they can run a suite of tests with the sandbox configured to look like their internal IT environment, then automatically reconfigure the sandbox on the fly to look like the external cloud environment and run more tests. This process allows them to cover all of the possible environments the application could run in without disrupting the actual production infrastructure.
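A purely hypothetical sketch of that workflow is below; the SandboxClient class, its methods, and the blueprint names are illustrative stand-ins, not any vendor's actual API.

```python
# Hypothetical sandbox workflow; SandboxClient and its methods are illustrative only.
class SandboxClient:
    def create(self, blueprint: str, hours: int) -> str:
        """Provision an environment from a blueprint and return its id."""
        ...
    def reconfigure(self, sandbox_id: str, blueprint: str) -> None:
        """Reshape a live sandbox to mirror a different target environment."""
        ...
    def run_tests(self, sandbox_id: str, suite: str) -> None:
        """Execute a test suite against the sandboxed application."""
        ...
    def teardown(self, sandbox_id: str) -> None:
        """Release the sandbox and its physical/virtual infrastructure."""
        ...

sandbox = SandboxClient()
sandbox_id = sandbox.create(blueprint="internal-it-replica", hours=4)
try:
    sandbox.run_tests(sandbox_id, suite="regression")
    # Reconfigure on the fly to mirror the external cloud target, then retest.
    sandbox.reconfigure(sandbox_id, blueprint="public-cloud-replica")
    sandbox.run_tests(sandbox_id, suite="regression")
finally:
    sandbox.teardown(sandbox_id)  # hand the shared infrastructure back
```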

A number of vendors, including QualiSystems, Skytap, and CA, are now providing sandbox solutions (sometimes called “Environment as a Service”) that offer a simple interface for creating any target infrastructure environment and configuring it with as much control as you want. They allow you to bring applications, tools, tests, and automated processes into the sandbox. They provide protections so that others cannot interfere with infrastructure you are currently using in your sandbox. And they provide reservation and scheduling so that whole teams of developers and testers can share physical and virtual infrastructure on the fly for hours, days, or weeks at a time.

Sandboxes are much easier to create when applications are containerized, because a containerized application can be put into any sandbox regardless of the target infrastructure the sandbox is replicating. Containers greatly simplify the work of orchestrating applications in sandboxes without requiring a huge amount of application-specific knowledge. If the application can run unchanged in Amazon or vCenter, then sandboxes can hide this detail from the user if desired. Finally, a good sandbox solution can be triggered from the outside (for example, from a DevOps tool).

The Ultimate Formula for DevOps Success

In the world of hybrid clouds, applications need to be deployable both on-premises and on public cloud infrastructure. Sandboxes allow developers to truly mimic both environments and define applications that can run successfully on both types of infrastructure. The sandbox can then be used to test on both types of infrastructure.

Containers on their own, while critical to speeding adoption of DevOps and hybrid cloud deployment, are not sufficient. To truly move toward a continuous, DevOps-driven hybrid cloud strategy, businesses will need containers, DevOps tools, and sandboxes working in concert. Package your applications in containers, use DevOps tools to manage and automate the process of moving them through the development cycle, and create sandboxes for each step in the development and test cycle that mimic the actual target production infrastructure(s) on which those applications need to run. This is the formula for DevOps and hybrid cloud success.