

Five deployment technologies for flexible services

Learn how five deployment technologies can help you build flexible, responsive tech stacks.

It's fine to talk about moving fast and breaking things, as long as you're working on UI components for the front end of your next big mobile app. When it comes to serverland, fast is good, but nobody wants to see it broken. Business moves at light speed, but if your back-end infrastructure consists of manually deployed applications with hand-coded configurations, responding to changing requirements can be a nightmare. Here are five deployment technologies that are making it possible for even small teams to deploy fluid, responsive tech stacks. 

Container management systems

Docker containers have been conquering the IT world over the last two years, and for good reason. An evolution of the Unix chroot command combined with kernel namespaces and a layered file system, containers package the complete set of dependencies for an application, so your code can be deployed quickly to any server running a compatible kernel. Unlike hardware virtualization, containers add very little runtime overhead and start nearly as fast as any other process; thousands of them can run on a single virtual machine instance. They enable the concept of immutable infrastructure by capturing installation and configuration state in a declarative form that can be reliably reproduced at any time. With Ubuntu 16.04 LTS, Canonical has introduced LXD, a more integrated container management system that promises to bring many of the benefits of Docker and hardware virtualization together on a single platform, increasing security and performance. It's fair to say at this point that containers are here to stay and mark a permanent change in the way software is deployed and managed in the cloud.
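To make the idea concrete, here is a minimal, illustrative Dockerfile (the base image, file names and port are placeholders, not a specific project): every dependency is captured declaratively in layers, so the same image runs unchanged on any host with a compatible kernel.

```dockerfile
# Illustrative only -- app.py and requirements.txt are hypothetical files.
FROM python:3.12-slim
WORKDIR /app
# Install dependencies first so this layer is cached between builds.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code itself.
COPY . .
EXPOSE 8080
CMD ["python", "app.py"]
```

Building with `docker build -t myservice .` and running with `docker run -p 8080:8080 myservice` reproduces the same installation state every time, which is the essence of immutable infrastructure.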

Service discovery frameworks

Containers will grant you the flexibility to run your services almost anywhere, but you still have to get requests to them. That means something in the system has to know where the containers that implement your application are running and how to route traffic to the right address and port. In a RESTful design, this requirement includes routing requests based on Layer 7 content. Powerful open source tools like NGINX and HAProxy will let you roll your own solution pretty quickly, but managing proxy configurations manually is error prone and an impediment to flexibility. Service discovery frameworks like Consul, Apache ZooKeeper and Mesosphere help to automate the discovery and routing setup for service-oriented architectures by providing a central store of configuration, interfaces for services to announce their lifecycle events, and typically a publish/subscribe model for other components to be notified of those events.
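The pattern those frameworks share can be sketched in a few lines of Python. This is not the API of Consul or ZooKeeper; it is a hypothetical in-process registry showing the three ingredients the paragraph above names: a central store, an announcement interface, and publish/subscribe notification.

```python
# Hypothetical sketch of the service-discovery pattern; class and method
# names are illustrative, not a real Consul/ZooKeeper client API.
from collections import defaultdict


class ServiceRegistry:
    def __init__(self):
        self._services = defaultdict(list)     # name -> [(host, port), ...]
        self._subscribers = defaultdict(list)  # name -> [callback, ...]

    def register(self, name, host, port):
        """A service announces itself; subscribers are notified of the event."""
        self._services[name].append((host, port))
        for callback in self._subscribers[name]:
            callback("register", name, host, port)

    def deregister(self, name, host, port):
        """A service announces that it is going away."""
        self._services[name].remove((host, port))
        for callback in self._subscribers[name]:
            callback("deregister", name, host, port)

    def lookup(self, name):
        """Routing layers query here instead of relying on hand-edited proxy configs."""
        return list(self._services[name])

    def subscribe(self, name, callback):
        """Other components (e.g. a proxy) ask to hear about lifecycle events."""
        self._subscribers[name].append(callback)


# Usage: a proxy subscribes, then a service instance comes online.
registry = ServiceRegistry()
events = []
registry.subscribe("orders", lambda *event: events.append(event))
registry.register("orders", "10.0.0.5", 8080)
```

Real frameworks add persistence, health checks and consensus on top, but the flow is the same: services announce, the store updates, and routing components react.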

Which approach works for you will depend on your current code base and stage of development. Unlike simple proxies, discovery layers involve more collaboration between services and infrastructure, so how each offering supports the languages and tools you already use will be a big factor in your decision.

Container clusters

Take the concepts of containerization and automated service discovery to their logical conclusion, and you end up with clusters. Container clustering platforms aim to make building entire systems as reliably reproducible as building containers. They close the gap between the recipe for a single container and all the other things that have to be done to get a bunch of different containers running and working together on a specific number of hosts, with specific network rules, auto-scaling parameters, access to storage and more. Leading platforms such as Google's Kubernetes, Amazon Elastic Container Service and Docker Compose all take slightly different approaches that share a lot of common goals and ideas. Each has strengths and weaknesses, but all three are production-ready tools that target the same goal: automated deployment and configuration of entire stack layers. When choosing between them, vendor lock-in and portability of service code across platforms can be important considerations. Whichever way you go, you'll also want to look at automation tools like Ansible, Chef and the venerable but tenacious GNU Make to pull all the pieces together, but the end result is well worth the effort in terms of durability and scalability.
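As an illustration of what such a "recipe for the whole system" looks like, here is a minimal Kubernetes Deployment manifest (the service name and image path are placeholders): it declares how many replicas of a container should run, and the platform continuously reconciles reality with that declaration.

```yaml
# Illustrative only -- "orders" and the image path are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders
spec:
  replicas: 3                 # the cluster keeps three copies running
  selector:
    matchLabels:
      app: orders
  template:
    metadata:
      labels:
        app: orders
    spec:
      containers:
        - name: orders
          image: registry.example.com/orders:1.0
          ports:
            - containerPort: 8080
```

If a host dies or a container crashes, the scheduler restarts replicas elsewhere to restore the declared state, which is exactly the reproducibility the paragraph above describes.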

Instant APIs

OK, so you have a cluster running, and your cluster has discoverable services. So when an HTTP request hits the cluster for /awesome or /awesomer/, it gets to the right place and the response gets back. What about terminating SSL connections and routing between different versions of your stack, or different environments? You need a public point of ingress that handles this stuff and can serve as the gateway to all the different services you will want to deploy behind it. You can set up a load balancer with SSL, but load balancers don't generally handle Layer 7 routing. You can set up a proxy behind the load balancer to do that work, but now you have to worry about configuration, scalability and failover of that component. What if you could just configure your entire API as a cloud service and deploy it with a single command? Amazon's API Gateway does just this, and it's very slick. You can even describe your API using a language like Swagger, then just upload it and have it all work. Google has no direct competitor yet, but you can bet they aren't far behind, and there are independent offerings like StrongLoop as well.
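The "describe your API, then upload it" workflow looks roughly like this Swagger 2.0 fragment (a sketch; the title and response wording are placeholders, and the /awesome path is the example used above): the definition itself becomes the deployable artifact, replacing hand-maintained proxy configuration.

```yaml
# Illustrative Swagger 2.0 fragment, not a complete production API definition.
swagger: "2.0"
info:
  title: Awesome API
  version: "1.0"
paths:
  /awesome:
    get:
      responses:
        "200":
          description: The awesome resource.
```

A gateway service imports a definition like this and takes over ingress, SSL termination and Layer 7 routing for every path it declares.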

Are shake-n-bake gateways right for your project? In the early stages, the increase in velocity and reduction in management overhead should be more than worth it. Later, much will depend on how pricing actually works out at your level of use.

Serverless services

The technologies mentioned above can get you to fully automated deployment of complex systems, but it's no use pretending there isn't still plenty of back-end engineering to be done to achieve that goal. What if you're a startup and you just want to deploy an API and a service or two as quickly as possible? Or you might be an established company that wants the flexibility of zero infrastructure and pay-per-request costing. The last year has seen the emergence of serverless compute platforms that are robust enough for real-world applications today. The leader in this space is Amazon's Lambda, which allows fast deployment of code written in Python, JavaScript and Java. Lambda functions can be single scripts or complex applications with dependencies and I/O to other services. They can be called (invoked) manually or triggered by events generated from other Amazon services such as S3. When paired with API Gateway, they can be used to deploy entire microservices implementations in a zero-infrastructure environment. The other major cloud platforms have entered this space in a big way as well, such as Microsoft with Azure Functions and Google with Cloud Functions.
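A Lambda function in Python is just a handler that takes an event and a context, which is what makes the model so light. The sketch below assumes an API Gateway-style event shape; the routing logic and response content are illustrative, not a specific production design.

```python
# Minimal sketch of a Python Lambda-style handler. The (event, context)
# signature is Lambda's documented convention; everything else here --
# the query parameter, message text and route -- is hypothetical.
import json


def handler(event, context):
    # API Gateway proxy integrations pass query parameters in the event dict;
    # fall back to a default when none are supplied.
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }


# Because the handler is an ordinary function, it can be exercised locally
# with a fake event before any cloud deployment.
resp = handler({"queryStringParameters": {"name": "lambda"}}, None)
```

That local testability is part of the appeal: the unit you write, test and deploy is a plain function, and the platform supplies all the server infrastructure around it.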

In some sense these deployment technologies represent the most fundamental promise of cloud computing: There is a lot of complexity under the hood in order to make them work as seamlessly as they do, but you don't have to think about it at all.
