
The reasons to use (or not use) sidecars in Kubernetes

Sidecar containers can be a great relief to developers who need to manage large clusters of containerized applications at scale. But are they always the right approach?

Kubernetes introduced sidecar containers to solve a fundamental challenge: When you deploy an application via Kubernetes, it can be difficult to directly integrate that application with the various external monitoring tools, logging systems and other components often readily available to developers working with conventional servers.

The sidecar container approach addresses this problem by running an application's core processes in their own container, while a companion container running in the same pod facilitates access to external resources and communication with external systems. In other words, using sidecars in Kubernetes can let you have your cake and eat it, too.
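To make this concrete, here is a minimal sketch of the pattern: a pod whose main container writes logs to a shared volume while a sidecar forwards them to an external system. The image names and paths are illustrative, not prescriptive.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}          # scratch volume shared by both containers
  containers:
    - name: main-app                      # the application's core process
      image: example.com/my-app:1.0       # hypothetical application image
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app
    - name: log-forwarder                 # sidecar: ships logs externally
      image: fluent/fluent-bit:2.2
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/app
          readOnly: true
```

The application container never needs to know about the logging backend; swapping the sidecar image changes the integration without touching the main codebase.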

The key to making this approach work, however, is understanding exactly how, why and when to use a sidecar container. Let's examine some of the key benefits and drawbacks of the sidecar container approach and review some guidelines that can help you decide if it's the right move.

Advantages of sidecar containers

In Kubernetes, the sidecar container approach is primarily a way to cut through abstractions that would otherwise get in the way. Used for that purpose, sidecars offer several important advantages:

  • Isolation. The primary application can run independently in one container while the sidecar hosts complementary processes and tools. This also keeps the application's main codebase hidden from the external services that connect with the application, and can help isolate failures.
  • Quick deployment. It's relatively easy to deploy a sidecar container -- certainly easier than trying to add extra layers to the main application container to integrate whichever functionality you host in the sidecar.
  • Scalability. Once you have the sidecar containers in place, it's easy to scale up to support as many pods as needed. If you want to, you can deploy a hundred sidecar containers about as easily as you can deploy one.
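The scalability point follows from how Kubernetes replicates pods: a Deployment copies the entire pod template, sidecar included, for every replica. A sketch, again with illustrative names:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 100       # each replica gets the pod's full container set,
                      # so the sidecar scales with the application for free
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: main-app
          image: example.com/my-app:1.0   # hypothetical application image
        - name: log-forwarder             # sidecar replicated alongside it
          image: fluent/fluent-bit:2.2
```

Deploying a hundred sidecars is the same operation as deploying one: you change a single replica count.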

Drawbacks of sidecar containers

Of course, using sidecars in Kubernetes isn't a free ride. Here are some of the notable pitfalls associated with sidecar containers:

  • Resource consumption. More containers mean more memory consumption and CPU utilization. From a resource perspective, it's sometimes more efficient to host all the processes inside a single container, or to run complementary tasks at the node level.
  • Management complexity. Sidecars increase the total number of containers you need to monitor and manage, not to mention the relationships between them. You'll need a comprehensive monitoring system that can track, for instance, exactly how the failure of a sidecar container affects your main applications.
  • Update compatibility. It may require more work to ensure that updates to your main application container are compatible with the sidecar that supports it, and that those updates can carry across all the related components without issues.
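The resource-consumption drawback can at least be bounded. Kubernetes lets you set requests and limits per container, so a sidecar's footprint can be capped independently of the main application. A fragment of a container spec, with illustrative values:

```yaml
    - name: log-forwarder
      image: fluent/fluent-bit:2.2
      resources:
        requests:       # what the scheduler reserves for the sidecar
          cpu: 50m
          memory: 64Mi
        limits:         # hard ceiling; the sidecar is throttled or
          cpu: 100m     # OOM-killed before it can starve the main app
          memory: 128Mi
```

Multiply those requests by your replica count when budgeting a cluster: at 100 replicas, even a 64 MiB sidecar reserves over 6 GiB of memory.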

Alternatives to the sidecar container approach

Sidecar containers are not the only way to build a tunnel through the Kubernetes abstraction layer. The most obvious option, of course, is to add whichever extra functionality you need directly to the application container itself -- provided that isn't a detriment to performance. Similarly, you can configure your overarching software network so that the application can interact with external software and tools via the network itself.

In many cases, it's also possible to use a node-level component, such as a DaemonSet, which runs one pod on each Kubernetes node to perform tasks that don't need to run inside individual application pods.
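As a sketch of that node-level alternative, the following DaemonSet runs a single log agent per node instead of one sidecar per pod (images, names and paths are illustrative):

```yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-log-agent
spec:
  selector:
    matchLabels:
      app: node-log-agent
  template:
    metadata:
      labels:
        app: node-log-agent
    spec:
      containers:
        - name: agent
          image: fluent/fluent-bit:2.2  # one agent per node, shared by
          volumeMounts:                 # every pod scheduled there
            - name: varlog
              mountPath: /var/log
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log    # read container logs from the node itself
```

The tradeoff: one agent per node is cheaper than one sidecar per pod, but you lose per-pod isolation and configuration.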

Deciding if sidecars are worth it

In general, most teams that manage Kubernetes clusters of significant size and complexity will eventually rely on sidecar containers. Thus, sidecars have become a go-to solution for cutting through the abstraction that Kubernetes imposes between pods and the rest of the world. However, they don't come without their drawbacks, so development teams need to think carefully before deploying sidecars at will.

In order to decide if a sidecar is the right approach, ask yourself the following questions:

  • What are your performance requirements? Sidecars aren't the most efficient way to run complementary tools and can be a problem if you just need to optimize performance for relatively simple tasks.
  • What is your scale? Because they are easy to deploy at a massive scale, sidecar containers deliver the greatest benefit in a large-scale cluster. In smaller clusters, you can likely get away without using sidecars.
  • How difficult are the feature modifications? If you can easily add the desired functionality to your application's main codebase or container without causing too many issues, sidecar containers might be overkill.
  • How much cluster complexity can you manage? While sidecars may simplify the primary application container, they make your cluster more complex by increasing the number of containers you need to deploy and monitor. Consider whether you'd rather add complexity to a single container image or to the cluster as a whole.
