A service registry keeps track of the service instances available in a network so that clients can discover and connect to the servers they need. It is a critical part of the application stack and has a big impact on the end-user experience. With the growing use of microservices, the role of a service registry has become even more important, because the number of services that talk to each other has grown exponentially.
Service registries need to be fine-tuned and rearchitected to function seamlessly in a microservices world. More importantly, they need to be governed differently from older models. Let's look at how we can better govern service registries and why a modern platform like Kubernetes does it the right way.
Kubernetes service discovery is dynamic
In a microservices architecture, containers are short-lived. As containers get outdated or corrupted, they are killed, and new containers take their place automatically. This is what keeps the services running on top of these containers highly available.
Kubernetes automatically assigns an IP address to each pod -- a pod being a group of one or more containers, in Kubernetes terms. As pods are replaced, the new ones receive new IP addresses and register themselves automatically. Because the system is always aware of these changing IP addresses, Kubernetes service discovery is dynamic and works well at the scale of microservices. A cluster DNS add-on -- originally SkyDNS, later kube-dns and now CoreDNS -- maps stable service names to the current set of pod IP addresses.
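As a minimal sketch of this mechanism, the Service manifest below attaches a stable DNS name to an ever-changing set of pod IPs. The names, labels and port numbers here are hypothetical, not taken from any particular deployment:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web            # resolvable in-cluster as web.<namespace>.svc.cluster.local
spec:
  selector:
    app: web           # pods carrying this label become endpoints, whatever their IPs
  ports:
    - port: 80         # port clients connect to on the service's stable cluster IP
      targetPort: 8080 # port the containers actually listen on
```

Any pod in the cluster can then reach this backend at `http://web`, with no knowledge of individual pod IP addresses.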
Services are inspected by the kubelet
Services can affect each other by spreading vulnerabilities or draining resources from each other. It's important to ensure only healthy services are running in your system. Unhealthy ones should be retired or replaced.
In Kubernetes, this is the job of the kubelet. The kubelet is a core component that runs on every node, and it inspects the pods on its node to ensure the containers in them are healthy -- in practice, by running the liveness and readiness probes defined for each container.
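As an illustration, a liveness probe tells the kubelet how to decide whether a container should be restarted. The endpoint path, port and timings below are hypothetical examples:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: web
      image: nginx:1.25        # example image
      livenessProbe:
        httpGet:
          path: /healthz       # endpoint the kubelet polls for health
          port: 8080
        initialDelaySeconds: 5 # give the container time to start up first
        periodSeconds: 10      # probe every 10 seconds
```

If the probe fails repeatedly, the kubelet kills the container and a replacement is started according to the pod's restart policy -- which is how unhealthy services get retired automatically.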
Requests are load balanced by kube-proxy
When applications are bombarded with web-scale traffic, they need a way to route requests evenly across services. Load balancing builds on service discovery: it spreads traffic addressed to a service across all of the healthy instances that back it. In Kubernetes, kube-proxy handles load balancing, steering each connection made to a service's stable cluster IP onto one of the service's current endpoints.
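To see what kube-proxy has to work with, consider a hypothetical Deployment scaled to three replicas; this gives kube-proxy three interchangeable endpoints to balance across behind a single service IP. The names and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # three interchangeable pods behind the service
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web         # matches the service's label selector
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 8080
```

Requests to the service's cluster IP are then distributed across the three pods, and if one dies, its automatically created replacement joins the endpoint set with no client-side changes.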
By combining Kubernetes service discovery, inspection and load balancing, you have the ingredients of a great governance model for a modern microservices application. These three things in combination have the potential to scale to running millions of containers. And remember: These service governance principles can be applied to any application in order to improve the way services are governed -- no matter what scale you operate at.