Understanding Microsoft’s Open Service Mesh

Only a few years ago, when we talked about infrastructure we meant physical infrastructure: servers, memory, disks, network switches, and all the cabling necessary to connect them. I used to have spreadsheets where I’d plug in some numbers and get back the specifications of the hardware needed to build a web application that could support thousands or even millions of users.

That’s all changed. First came virtual infrastructures, sitting on top of those physical racks of servers. With a set of hypervisors and software-defined networks and storage, I could specify the compute requirements of an application, and provision it and its virtual network on top of the physical hardware someone else managed for me. Today, in the hyperscale public cloud, we’re building distributed applications on top of orchestration frameworks that automatically manage scaling, both up and out.


Using a service mesh to manage distributed application infrastructures

Those new application infrastructures need their own infrastructure layer, one that’s intelligent enough to respond to automatic scaling, handle load balancing and service discovery, and still support policy-driven security.

Sitting outside microservice containers, your application infrastructure is implemented as a service mesh, with each container linked to a proxy running as a sidecar. These proxies manage inter-container communication, allowing development teams to focus on their services and the APIs they host, with application operations teams managing the service mesh that connects them all.
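At the Kubernetes level, that sidecar pattern is simply a pod with two containers: the application itself and the mesh’s proxy alongside it. The sketch below illustrates the idea with hypothetical names and images (in practice, meshes such as Istio or Linkerd inject the proxy container automatically rather than having you declare it by hand):

```yaml
# A minimal sketch of the sidecar pattern: one application container
# plus a proxy container sharing the same pod (and network namespace).
# Names, images, and ports here are illustrative, not from any one mesh.
apiVersion: v1
kind: Pod
metadata:
  name: orders
  labels:
    app: orders
spec:
  containers:
  - name: orders                      # the microservice itself
    image: example.com/orders:1.0     # hypothetical application image
    ports:
    - containerPort: 8080
  - name: mesh-proxy                  # the sidecar proxy (often Envoy)
    image: envoyproxy/envoy:v1.14.1   # version chosen for illustration
    ports:
    - containerPort: 15001            # proxy intercepts pod traffic here
```

Because both containers share the pod’s network namespace, the proxy can transparently intercept and manage all traffic in and out of the service without any changes to the application code.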

Perhaps the biggest problem facing anyone implementing a service mesh is that there are too many of them: Google’s popular Istio, the open source Linkerd, HashiCorp’s Consul, or more experimental tools such as F5’s Aspen Mesh. It’s hard to choose one and harder still to standardize on one across an organization.

Currently, if you want to use a service mesh with Azure Kubernetes Service (AKS), you’re advised to use Istio, Linkerd, or Consul, following instructions in the AKS documentation. It’s not the easiest of approaches: you need a separate virtual machine to manage the service mesh in addition to a running Kubernetes cluster on AKS. However, another approach under development is the Service Mesh Interface (SMI), which defines a standard set of interfaces for connecting Kubernetes to service meshes. Azure has supported SMI for some time, as its Kubernetes team has been leading the specification’s development.
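SMI’s value is that the same resource definitions work against any conforming mesh. For example, its TrafficSplit API lets you shift a percentage of traffic between versions of a service without mesh-specific configuration. The sketch below assumes two backend services, `orders-v1` and `orders-v2`, behind a root `orders` service (names are illustrative):

```yaml
# An SMI TrafficSplit resource: a mesh-agnostic canary rollout that
# sends 90% of traffic to v1 and 10% to v2 of a hypothetical service.
apiVersion: split.smi-spec.io/v1alpha2
kind: TrafficSplit
metadata:
  name: orders-split
spec:
  service: orders        # the root service clients address
  backends:
  - service: orders-v1   # current version keeps most of the traffic
    weight: 90
  - service: orders-v2   # canary version receives a small share
    weight: 10
```

Adjusting the weights and reapplying the resource gradually shifts traffic to the new version; any SMI-conformant mesh is expected to honor the same definition.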

Copyright © 2020 IDG Communications, Inc.
