Understanding Azure Edge Zones

The scale of the public cloud and services such as Azure is astounding. Massive data centers full of compute and storage are available on demand, and the network pipes in and out of those sites give you tremendous bandwidth. But putting all your compute eggs in one cloud basket has its downsides, with network latency a significant issue.

It’s not surprising to see Azure doing more with the edge. I’ve recently looked at how Microsoft is moving compute closer to end users, but compute is only part of the story. If we’re to get Microsoft’s promised consistent experience wherever we access Azure services, we need to be able to treat our edge resources and our Azure-hosted compute and storage as part of a single virtual network, with policy-driven security and routing.
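To make that single-virtual-network idea concrete, here is a minimal sketch using the Azure SDK for Python (azure-mgmt-network) to create a virtual network pinned to an Edge Zone through its extended location. The subscription ID, resource group, edge zone name, and address ranges are placeholders for illustration, not values drawn from the article.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient
from azure.mgmt.network.models import (
    AddressSpace,
    ExtendedLocation,
    Subnet,
    VirtualNetwork,
)

# Authenticate with whatever credential is available in the environment.
credential = DefaultAzureCredential()
network_client = NetworkManagementClient(credential, "<subscription-id>")

# Define a virtual network whose extended location is an edge zone rather
# than the parent region's data centers. "losangeles" is a placeholder name.
vnet = VirtualNetwork(
    location="westus",  # parent Azure region for the edge zone
    extended_location=ExtendedLocation(name="losangeles", type="EdgeZone"),
    address_space=AddressSpace(address_prefixes=["10.1.0.0/16"]),
    subnets=[Subnet(name="workloads", address_prefix="10.1.0.0/24")],
)

# Long-running create/update; once provisioned, the edge-hosted network can
# be peered and governed alongside region-hosted networks under the same
# policy-driven security and routing rules.
poller = network_client.virtual_networks.begin_create_or_update(
    "<resource-group>", "edge-vnet", vnet
)
print(poller.result().provisioning_state)
```

From there, peering, network security groups, and route tables apply to the edge-hosted network in the same way they do to one running in a regional data center, which is what makes the consistent-experience promise plausible.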

Bringing the edge to Azure

The edge of the network is hard to define. To some, it’s the devices on our desks, in our homes, in our data centers, and built into industrial equipment. To others it’s the equipment that sits on the provider side of the last mile. Microsoft is understandably agnostic; it has customers across all those markets. However, if you think of its edge network integration as part of Azure, a networking counterpart to the server, VM, and container management capabilities of Azure Arc, it’s clear that much of the attention is on the data center and the provider.

It’s a focus that makes sense. Azure Stack’s various incarnations scale from devices that sit at provider sites close to the end user to multirack stamps that extend Azure into your data center. As much as Azure is key to the company’s future, Microsoft is well aware that hybrid infrastructures mixing cloud and on-premises systems aren’t going away and are likely to remain central to most businesses’ strategic architectural decisions.
