

Four reasons you need microservices architecture

Why do we need microservices architecture? How can we benefit from it?

Ten years ago, we were at or near the peak of the SOA hype. Your service deployment at that time may have involved a J2EE (it still had the "2" back then) application server, an EAR file-based deployment or maybe a more integration-centric approach with an ESB focused on taking legacy integration points and exposing them as SOAP-based services. All of your services may have been owned by one or two teams because they were the only ones who understood the technology. Although Thomas Erl's initial book about SOA was popular, most services didn't follow all of his service orientation principles. The application server or ESB was still deployed on physical hardware in production.

Fast-forward to the current time, and the scenario has changed dramatically. Organizations have many more services owned by many more teams. Everything isn't written in Java. Services are deployed onto virtual machines (VMs), perhaps even outside the corporate data center in a public cloud. So, why do we still need a microservices architecture? Let's take these changes one by one.

Reasons we still need a microservices architecture


First, you have many more services. Actually, what you probably have more of is operations, but they're bundled into services. When you look at the usage of those services, I'm willing to bet it follows the Pareto principle: 80% (or more) of your traffic comes from 20% (or less) of them. If each of these services is provisioned with the same amount of infrastructure, your resource utilization is probably very poor. Furthermore, even within a service, odds are that not all operations see equal traffic. Yet you can't scale capacity up (or down) for a given operation; you have to do that at the service level. If you're still using Java EE servers, you may even have multiple EAR files in the same cluster and have to add or remove capacity for all of them at once. Simply put, you're not making infrastructure decisions where the dependency and demand are actually defined, which is at the individual operation level.
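
To make that granularity argument concrete, here is a minimal sketch of what it might look like to break one heavily used operation out into its own deployable, so its capacity can be tuned without touching the rest of the service. It uses only the JDK's built-in com.sun.net.httpserver, with no application server; the operation (an order-status lookup), path, port and thread-pool size are hypothetical placeholders rather than anything prescribed by this article.

```java
import com.sun.net.httpserver.HttpServer;

import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.concurrent.Executors;

// Hypothetical: the one heavily used operation extracted into its own process,
// so its instance count and thread pool are sized for its traffic alone rather
// than for the whole service it used to be bundled into.
public class OrderStatusService {
    public static void main(String[] args) throws Exception {
        int port = Integer.parseInt(System.getenv().getOrDefault("PORT", "8080"));
        int threads = Integer.parseInt(System.getenv().getOrDefault("WORKER_THREADS", "16"));

        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.setExecutor(Executors.newFixedThreadPool(threads)); // capacity for this operation only

        server.createContext("/orders/status", exchange -> {
            byte[] body = "{\"status\":\"SHIPPED\"}".getBytes(StandardCharsets.UTF_8); // placeholder payload
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream out = exchange.getResponseBody()) {
                out.write(body);
            }
        });

        server.start();
    }
}
```

Once the hot operation is its own unit of deployment, turning its capacity up or down no longer drags the rarely used operations along with it.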

Second, these services are spread across more teams. This can exacerbate the resource utilization problem, because two different teams typically don't want to share infrastructure. As a result, more servers get provisioned even when spare capacity already exists. What's worse, organizations always change. What happens when management reorganizes in a way that doesn't match how your services were organized? Can you easily move things around? Keep in mind that it's not just the infrastructure; it's also the underlying source code and associated projects.

Third, not everything is written in Java EE or .NET. The days of the application server with frameworks for everything under the sun are over. Don't get me wrong, those frameworks still exist, but if anything, the trend is toward a model where you simply deploy what you need.

Finally, it's the cloud. While we're not there yet, we continue to see more and more pay-for-what-you-use models versus the 2005 model of paying for a fixed amount of capacity, regardless of your utilization. Although the financial side of this is not simple (capital expenses versus operational expenses and everything that goes with them), it's hard to argue that current trends will change anytime soon. This means that infrastructure will continue to move more toward a utility model, with the end consumer being able to turn capacity up or down as needed. If this is the case, we need a model where that capacity goes live as quickly as possible, with as little overhead as possible. That means we can't wait for an application server to load lots of things we might not need. Instead, we want that unit of scale to have exactly what we need, nothing more, nothing less.
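
As a rough sketch of that "unit of scale" idea, the following assumes the same JDK-only approach as the earlier example: an instance that starts in milliseconds, exposes only the one thing it needs to, and shuts down cleanly when the platform turns capacity back down. The port and health-check path are placeholders of my own choosing.

```java
import com.sun.net.httpserver.HttpServer;

import java.net.InetSocketAddress;

// Hypothetical "unit of scale": nothing loaded that this instance does not need,
// so the platform can start and stop copies of it as demand moves up and down.
public class ScaleUnit {
    public static void main(String[] args) throws Exception {
        long started = System.nanoTime();

        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/healthz", exchange -> {
            exchange.sendResponseHeaders(204, -1); // no body; just "I'm alive" for the scheduler
            exchange.close();
        });
        server.start();

        // Stop accepting work promptly when the platform scales this instance back in.
        Runtime.getRuntime().addShutdownHook(new Thread(() -> server.stop(2)));

        System.out.printf("ready in %d ms%n", (System.nanoTime() - started) / 1_000_000);
    }
}
```

Whether the runtime is Java or something else, the point is the same: the deployable carries what it needs and nothing more, so new capacity can come online as fast as the platform can schedule it.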

So, when we look at all of these factors together, the picture lines up very well with the microservices architecture model. While the benefits of SOA from 2005 are still valid, the changes brought about by cloud-based infrastructure, DevOps and more have now made it possible to manage services at the right level of granularity: the individual operation. We are still in the early stages of this effort, and the biggest gap is in managing all of these moving parts. Fortunately, we have plenty of good examples to learn from. Find one of your older colleagues with mainframe experience and ask them how they managed all of those individual microservices on the mainframe. Just remember to call them transactions.

About the author:
Todd Biske is an enterprise architect with a Fortune 50 company in the St. Louis metro area. He has had architecture experience in many different verticals, including healthcare, financial services, publishing and manufacturing, over the course of his 20-year career.

