Much like the transition from client-server applications on discrete physical servers to the virtualized data center, containers represent a significant shakeup in the technology landscape. Working with them requires a rethinking of existing infrastructure architectures, processes and policies. This also means that IT organizations should consider ways of incorporating containers into their application modernization strategy.
It's not easy working with containers, a technology that seemingly changes by the month. There are two important steps to take to use this technology to its full potential. The first step is gaining an understanding of the complete container ecosystem, which includes parts like container orchestrators and container repositories. The second step is assessing container management software for compatibility with existing system management software and processes, potential security risks, and interoperability and integration challenges.
An overview of the container ecosystem
Containers are a new technology built on the nearly 40-year-old idea of compartmentalization. They allow multiple applications to run in isolated environments on the same OS, which increases resource efficiency, reduces load time and simplifies deployment and management of distributed applications spread across multiple systems and locations.
Hype around containers often obscures important distinctions. A container begins as a source code description of application components, system libraries and configuration, which is built into a binary image that is then deployed and executed on an OS with a container runtime. The container runtime does not virtualize hardware; it provides logical separation of application environments through process isolation and limits on resource usage.
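To make the distinction concrete, a minimal image definition -- a Dockerfile, in Docker's case -- might look like the sketch below. The file and script names are hypothetical; the point is that the text description lists a base OS layer, application files and a startup command, and the build step turns it into a reusable binary image:

```dockerfile
# Hypothetical image definition: base layer, application file, start command
FROM alpine:3.6
COPY app.sh /usr/local/bin/app.sh
RUN chmod +x /usr/local/bin/app.sh
CMD ["/usr/local/bin/app.sh"]
```

Running `docker build` against this file produces the binary image; running the image creates the container instance that the runtime isolates from its neighbors.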
Containers and their runtime engines provide a foundation for application and infrastructure modernization, but they require many other subsystems to create a production environment. Though many have tried, the nascence and dynamic nature of container development mean there is no universally accepted categorization of container management software. However, IBM's "Taxonomy of Building Blocks for Container Lifecycle and Cluster Management" report is useful for understanding the various components that surround individual application containers.
Running multiple containers in a cloud data center requires components to manage deployment on one or more machines, job scheduling and monitoring, networking, and the working state of each instance.
The container management software market is quite diverse and fragmented; however, vendors like Apprenda (which acquired Kismatic's cluster management software), CoreOS, Docker, Mesosphere and Rancher Labs have assembled most of the pieces necessary to run a container environment in production. These offerings often include popular open source projects like Google-backed Kubernetes, which is becoming the de facto standard for container orchestration. Complete container environments are also available from major infrastructure-as-a-service providers, including Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform and IBM SoftLayer.
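What an orchestrator like Kubernetes actually manages is declared in configuration. As a rough sketch (names and image are placeholders), a Deployment tells the cluster to keep a given number of identical container instances running and to replace any that fail:

```yaml
# Hypothetical Kubernetes Deployment: keep three replicas of one container image
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3              # orchestrator maintains this count across hosts
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.13  # placeholder image
        ports:
        - containerPort: 80
```

The scheduler decides which machines the three instances land on; the operator declares only the desired state.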
Key features to assess before purchasing container management software
When assessing container management software to assist in executing an application modernization strategy, there are key factors that businesses need to consider.
Usage and workflow
Initial implementations of container systems, including Docker, use an entirely new set of software and management tools that don't typically integrate with existing virtualization platforms and are quite disruptive. This isn't a problem if an organization plans on building a greenfield environment or using public cloud services for new containerized applications; however, recent developments from VMware (e.g., vSphere integrated containers, or VIC) and Microsoft (Hyper-V containers) allow containers to run on legacy server platforms using existing workflows that reduce the learning curve and greatly simplify process integration.
Container networks build on the virtual networking concepts introduced with server virtualization: the container runtime engine creates an overlay network and virtual switch to move packets between container instances on the same cluster (e.g., Docker Swarm). The networking capabilities built into container engines like Docker are adequate for self-contained clusters. Connecting a container network to a public cloud service or across data centers, however, is difficult without a network overlay controller. Products such as Flannel from CoreOS, Weave Net from Weaveworks and Project Calico provide a routable, Layer 3 network fabric that can span multiple locations, but they come with a learning curve and require careful implementation planning.
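For the self-contained cluster case, the built-in overlay is simple to declare. As an illustrative sketch (service and image names are placeholders), a Docker Compose file deployed to a Swarm can place two services on a shared overlay network that spans every node in the cluster:

```yaml
# Hypothetical Compose file for a Swarm stack: two services on one overlay network
version: "3"
services:
  web:
    image: nginx:1.13       # placeholder image
    networks:
      - appnet
  api:
    image: myorg/api:latest # placeholder image
    networks:
      - appnet
networks:
  appnet:
    driver: overlay         # virtual network spanning all Swarm nodes
```

Crossing into a public cloud or a second data center is where this model runs out, which is where the Layer 3 fabrics mentioned above come in.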
Security and policy
As a relatively new technology, containers have a security posture that is still largely unproven. While containers do provide greater application isolation than ordinary user-mode processes, they aren't as impenetrable as virtual machines (VMs) on a Type 1 hypervisor. Potential attack vectors include OS exploits, container breakouts, denial of service, embedded malware and credential theft.
Many of these exploits can be mitigated by running single containers within a lightweight VM like Microsoft Nano Server or VMware Photon OS, albeit with some loss of efficiency due to running a full guest OS -- even a small one -- for each application.
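Short of wrapping each container in a VM, some of the same risks can be narrowed at deployment time. As an illustrative sketch (the image name and settings are hypothetical, not a complete hardening policy), a Kubernetes pod spec can refuse to run as root, drop Linux capabilities and make the root filesystem read-only, reducing what a container breakout can accomplish:

```yaml
# Hypothetical pod spec fragment: limit the blast radius of a compromised container
apiVersion: v1
kind: Pod
metadata:
  name: hardened-app
spec:
  containers:
  - name: app
    image: myorg/app:latest        # placeholder image
    securityContext:
      runAsNonRoot: true           # refuse to start the process as root
      readOnlyRootFilesystem: true # block runtime tampering with the image
      capabilities:
        drop: ["ALL"]              # remove all Linux capabilities
```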
Central control of access policies is another area where container management software is wanting. Microsoft recently made a significant investment in Aqua Security, an Israeli security startup whose software automates and monitors policy enforcement throughout the container lifecycle.
Compatibility and interoperability
Most modern OSes -- including various flavors of Linux, Windows Server, VMware ESXi (via VIC) and even IBM PowerLinux -- have a Docker-compatible container runtime. Although Docker gets most of the attention as the de facto standard for application containers, it's not the only one, and tensions between Docker and other open source developers led to the creation of the Open Container Initiative (OCI). The OCI sets industry standards for container image formats and runtimes that allow a container image to run in any compliant environment, including AWS, Google Cloud Platform, Kubernetes, Mesos, rkt or the Docker engine. Docker itself was the first to ship a runtime based on OCI technology. Container vendors and developers also must work to improve interoperability by creating standards for image distribution and discovery (via a registry protocol) and for APIs between back-end services, such as the orchestrator/cluster manager and resource monitors.
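The OCI image format itself is deliberately simple: a handful of JSON documents plus layer tarballs. As a rough sketch (sizes and digests below are placeholders, not real values), a manifest ties one configuration object to an ordered list of filesystem layers, each addressed by content digest:

```json
{
  "schemaVersion": 2,
  "config": {
    "mediaType": "application/vnd.oci.image.config.v1+json",
    "size": 1024,
    "digest": "sha256:0000000000000000000000000000000000000000000000000000000000000000"
  },
  "layers": [
    {
      "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip",
      "size": 32768,
      "digest": "sha256:1111111111111111111111111111111111111111111111111111111111111111"
    }
  ]
}
```

Because every object is content-addressed, any OCI-compliant runtime or registry can verify and run the same image unchanged.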
Containers are a great technology for hosting cloud-native, distributed applications that can automatically scale across dozens of physical hosts with much less system overhead than a VM appliance. However, containers can also be used to wrap legacy applications not designed for virtual environments in a package that can be easily distributed and deployed on public or private cloud infrastructure.