Building and running modern cloud-native applications has its risks. One of the biggest is that you’re sharing computing resources with an unknown number of other users. Your memory and CPU are shared, and there’s always a possibility that data may accidentally leak across boundaries, where it can be accessed from outside your organization.
A breach, even an accidental one, is still a breach, and if you’re using Azure or another cloud platform to work with personally identifiable information or even your own financial data, a leak could put you in breach of compliance regulations. It’s not only user or financial data that could be at risk; your code is your intellectual property and could be key to future operations. Errors happen, even on well-managed systems, and a networking problem or a container failure could expose your application’s memory to the outside world.
Then there’s the risk of bad actors. Although Azure has patched its servers to deal with known CPU-level bugs that can leak data through processor caches, microcode-level issues are still being discovered, and it’s not hard to imagine nation-state actors or organized cybercriminals using them to snoop through co-tenants’ systems.
Azure’s cybersecurity infrastructure is one of the best. It uses a wide range of signals to look for malicious activity, with machine-learning-based threat detection to quickly spot possible areas for investigation. Security and encryption are built into its underlying platform. Even so, some customers want more than the defaults, as good as they may be. They’re businesses that are building cutting-edge financial technology in the cloud or using it to process and manage health data. They may even be governments or the military.