In this article, I examine how strong strategies and tools can help ensure your adoption of container technologies sustains and scales with your development organization -- not rips it apart. I will note feature gaps and ways to address them. Additionally, I will suggest ways to take containers beyond testing and development (test/dev) and into your application release processes.
After 30 interviews with my developer peers, it was clear to me that a few boulders need to be moved before container technology can become the norm. Only four out of the 30 hinted at a possible production-level use case, with the remainder just dabbling in test/dev. When asked why not use containers for production, the typical response was not about the technology itself, but about how it fits into the broader delivery chain in a stable way.
Adopting container technologies without a well-defined strategy brings reduced application quality and increased governance risk. Product quality drops because rogue containers introduce hidden variables that eventually surface as bugs or outages. Governance is threatened because, without clear oversight of your containers, you do not know which are following policies and which are not. There is the added issue of change management -- a dreaded phrase for any development team, and a problem often felt when an expert has left the building.
Here are the key gaps where current container technology needs help in order to mitigate the risks:
Network limitations: Docker Network lets you easily network containers on the same host, and with some additional effort, you can use the overlay network feature across hosts. However, it ends there. Manipulation of the network configuration is limited, and for now much of the effort is manual. Although container provisioning can be scripted at scale, each provisioned instance must also be added to the network definition -- an additional, error-prone step every time you provision a container.
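To illustrate that extra step, here is a minimal sketch. The network name, container names and image are hypothetical, and the commands are only printed (a dry run) rather than executed; swap the `echo` out to run them against a real Docker 1.9+ host with an overlay-capable setup.

```shell
# Dry-run sketch: the per-container attach step that overlay networking adds.
# All names below (app-net, web-1, myorg/app) are illustrative assumptions.
NET=app-net
run() { echo "$@"; }   # replace 'echo "$@"' with "$@" to execute for real

# One-time: create the overlay network spanning hosts.
run docker network create -d overlay "$NET"

# Every provisioned instance must also be joined to the network --
# the manual, error-prone step noted above.
for instance in web-1 web-2 worker-1; do
  run docker run -d --name "$instance" --net "$NET" myorg/app:1.2
done
```

Scripting the attach step like this reduces, but does not remove, the risk: the script itself becomes one more artifact that must be kept in sync with the network definition.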
Limited library control: The library has become a central topic in any container conversation. The public library is prized for its huge volume of contributed prebuilt containers, which save many hours of configuration time. However, using it beyond sandboxing is risky: without knowing who created an image and how, there could be any number of intentional or unintentional stability and security risks. An enterprise therefore needs to create and maintain a private library -- not a huge challenge to set up, but a real one to manage. Docker provides only a limited metadata model for managing images in large libraries, which limits your ability to ensure that future instances match expectations and do not overlap.
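One way to work around the thin metadata model is to attach your own provenance labels at build time and push to a private registry. A dry-run sketch follows; the registry host, image name and label keys are assumptions, not Docker conventions, and the commands are printed rather than executed.

```shell
# Dry-run sketch: carrying provenance metadata on images bound for a
# private registry. registry.internal and the org.myorg.* keys are
# hypothetical; commands are printed, not executed.
REGISTRY=registry.internal:5000
run() { echo "$@"; }

run docker build \
  --label org.myorg.owner=platform-team \
  --label org.myorg.ticket=OPS-123 \
  -t "$REGISTRY/myorg/app:1.2" .

run docker push "$REGISTRY/myorg/app:1.2"
```

Labels travel with the image, so any host that later pulls it can recover who owned the build and why it exists -- partial compensation for the missing library metadata.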
No clear audit trail: It is easy to provision a container, but it is not as easy to know the when, who, why and how of its provisioning. Post-provisioning, you have very little history for auditing purposes.
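What little history Docker does offer comes from two built-in sources: the daemon's event stream and per-container inspection. A dry-run sketch, with a hypothetical container name; note that the event stream lives only with the daemon, so it must be shipped to a log store if you need durable history.

```shell
# Dry-run sketch: the two built-in sources for after-the-fact auditing.
# 'web-1' is a hypothetical container name; commands are printed only.
run() { echo "$@"; }

# When and what: the daemon's event stream for the last day.
run docker events --filter type=container --since 24h

# How it was configured: creation time and image of a running instance.
run docker inspect --format '{{.Created}} {{.Config.Image}}' web-1
```

Neither source answers "who" or "why" -- that is exactly the gap the article describes, and it is why build-time labels or an external audit log are needed on top.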
Low visibility on running instances: Without deliberate effort once instances are provisioned, it is hard to reach into the population of running containers and know which should or should not be there. This problem is what I call container sprawl -- it can be a serious issue and can result in:
- Rogue containers;
- Old versions and configurations;
- Resource waste; and
- Inability to do resource planning.
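The kinds of queries that expose sprawl can be sketched as follows. The `owner` label and image names are assumptions carried over from a labeling convention like the one above; as before, this is a dry run that prints the commands.

```shell
# Dry-run sketch: queries that surface container sprawl.
# The 'owner' label and myorg/app image are illustrative assumptions.
run() { echo "$@"; }

# Everything running, with image and age visible at a glance:
run docker ps --format '{{.Names}} {{.Image}} {{.RunningFor}}'

# Instances still on an old image version:
run docker ps --filter ancestor=myorg/app:1.1

# Instances that carry a team label; diff this against 'docker ps -q'
# to find the rogue ones that carry none:
run docker ps --filter label=owner -q
```

None of this is automatic -- which is the point: without a labeling convention enforced at provisioning time, there is nothing for these filters to catch.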
These challenges are not insurmountable. And although Docker has not built everything into its roadmap, the company is actively addressing them. For example, Notary, launched at DockerCon 2015, lets organizations sign images so that new containers can be verified against trusted, signed sources. Even stronger offerings have come from the third-party market.
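On the Docker side, the user-facing form of Notary is Docker Content Trust (Docker 1.8+): with a single environment variable set, the client refuses to pull or run unsigned tags. A dry-run sketch with a hypothetical image name:

```shell
# Dry-run sketch: Docker Content Trust, built on Notary (Docker 1.8+).
# With the variable set, pulls of unsigned tags are rejected.
# myorg/app is hypothetical; the command is printed, not executed.
export DOCKER_CONTENT_TRUST=1
run() { echo "$@"; }

run docker pull myorg/app:1.2   # rejected unless the tag is signed
```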
Here are key ways to address these challenges:
Planning: It is sometimes hard to accept that you not only have to architect your application, but also your pipeline. Organizations would never forgo the planning activities around sprints and product features, and they cannot forgo the same for the pipeline and release processes. There has to be a deliberate, upfront effort to make sure the system of containers is defined and can function for a sustained period of time. The litmus test: pick examples of issues that can arise and ask the team how they would respond. Can they?
Provisioning: At low volume, provisioning containers is simple. But when you add a team, there are many more variables: making sure people do not step on each other's toes; making sure provisioning matches an expected, team-wide configuration, such as an expected component set; and making sure deprovisioning or replacement of containers is not ad hoc and without guidance.
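One lightweight way to make team-wide provisioning consistent is a thin wrapper that refuses to provision a container without required metadata. This is a sketch of a team policy, not a Docker feature; the `owner` label and the dry-run `echo` are assumptions.

```shell
# Sketch of a team-wide provisioning wrapper: refuse any container
# that lacks an owner label. This policy is our own convention, not
# a Docker feature; the docker command is printed, not executed.
provision() {
  name=$1; image=$2; owner=$3
  if [ -z "$owner" ]; then
    echo "refused: $name has no owner label" >&2
    return 1
  fi
  echo docker run -d --name "$name" --label "owner=$owner" "$image"
}

provision web-1 myorg/app:1.2 platform-team   # prints the run command
provision web-2 myorg/app:1.2 || true         # refused: no owner given
```

The same wrapper is a natural place to enforce naming conventions and approved base images, so the checks live in one script rather than in each developer's head.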
Log analysis: Logging from the host machine, rather than from each container, is the only way to create full visibility with no additional effort. Only then can you easily query across an entire population of containers to know what is going on.
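The payoff of host-side logging is that one query covers the whole container population. In the sketch below, the sample file stands in for a host log stream that containers could feed via a logging driver such as `--log-driver=syslog`; the log lines themselves are invented for illustration.

```shell
# Sketch: one query across a host-side log stream covering all
# containers. The sample data is invented; in practice it would be
# fed by a host logging driver (e.g. --log-driver=syslog).
LOG=$(mktemp)
cat > "$LOG" <<'EOF'
web-1: GET /health 200
web-2: GET /orders 500
worker-1: job 42 done
EOF

# A single query over the whole container population:
HITS=$(grep ' 500' "$LOG")
echo "$HITS"

rm -f "$LOG"
```

Per-container `docker logs` calls can answer the same question, but only one container at a time -- which is exactly the extra effort the host-side approach avoids.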
The container world is going to evolve quickly, and the focus will be on tools that make it more mature for the enterprise. Once we get there, the future holds greater adoption of concepts such as microservices, more abstractions of the container pipeline -- to the point where you do not even need to know they are there -- and more robust container libraries.
The biggest inhibiting factor is that new tools and functionalities are coming so fast. Organizations tend to hit the pause button and wait for the one key feature that solves it all. Or they further limit their usage of containers until they are sure an update will not cause a massive interruption.
Prepare for the day when we refer to applications as containers and not code. Also, look for the point when considerations around infrastructure are minimal and only early on in the application lifecycle. Container technologies will look a lot different in the next few years -- and, as a result, will change the nature of applications.