As multicloud strategies become fully mainstream, companies and dev teams are having to figure out how to create consistent approaches among cloud environments. Multicloud, itself, is ubiquitous: Among companies in the cloud, a full 93% have multicloud strategies—meaning they use more than one public cloud vendor like Amazon Web Services, Google Cloud Platform, or Microsoft Azure. Furthermore, 87% of those companies have a hybrid cloud strategy, mixing public cloud and on-premises environments.
The primary reason that companies move to the cloud at all is to improve the performance, availability, scalability, and cost-effectiveness of compute, storage, network, and database functions. Then, organizations adopt a multicloud strategy largely to avoid vendor lock-in.
But multicloud also presents a second alluring possibility, an extension of that original cloud-native logic: the ability to abstract cloud computing architectures so they can port automatically and seamlessly (or at least quickly) between cloud providers to maximize performance, availability, and cost savings—or simply to maintain uptime if one cloud vendor goes down. Cloud-agnostic platforms like Kubernetes, which run the same in any environment—whether that’s AWS, GCP, Azure, private cloud, or wherever—offer a tantalizing glimpse of how companies could achieve this kind of multicloud portability.
But while elegant in theory, multicloud portability is complicated in practice. Dependencies like vendor-specific features, APIs, and difficult-to-port data lakes make true application and workload portability a complicated journey. In practice, multicloud portability only really works—and works well—when organizations achieve consistency across cloud environments. For that, businesses need a level of policy abstraction that works across said vendors, clouds, APIs, and so on—enabling them to easily port skills, people, and processes across the cloud-native business. While individual applications may not always port seamlessly between clouds, the organization’s overall approach should.
Using OPA to create consistent policy and processes across clouds
One of the tools that has become popular, precisely because it’s domain agnostic, is Open Policy Agent (OPA). Developed by Styra and donated to the Cloud Native Computing Foundation, OPA is an open-source policy engine that lets developer teams build, scale, and enforce consistent, context-aware policy and authorization across the cloud-native realm. Because OPA lets teams write and enforce policies across any number of environments, at any number of enforcement points—for cloud infrastructure, Kubernetes, microservices APIs, databases, service meshes, application authorization, and much more—it allows organizations to take a portable approach to policy enforcement across multicloud and hybrid cloud environments.
Moreover, as a policy-as-code tool, OPA enables organizations to take the policies that are otherwise in company wikis and people’s heads and codify them into machine-processable policy libraries. Policy as code not only lets organizations automatically enforce policy in any number of clouds, but also shift left and inject policies upstream, closer to the development teams who are working across clouds, in order to catch and prevent security, operational, and compliance risk sooner.
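As a sketch of what that codification looks like, here is a minimal Rego policy (Rego is OPA's policy language) that turns a wiki-style rule—"only the ops team may deploy to production"—into code. The input fields (`user`, `action`, `environment`) are illustrative assumptions, not from any particular system:

```rego
# Hypothetical deployment-authorization rule, moved out of the wiki
# and into machine-processable code.
package app.authz

# Deny by default; a request is allowed only if a rule below matches.
default allow := false

# Anyone may deploy to non-production environments.
allow {
    input.action == "deploy"
    input.environment != "production"
}

# Only members of the ops team may deploy to production.
allow {
    input.action == "deploy"
    input.environment == "production"
    input.user.team == "ops"
}
```

Because the rule is now code rather than tribal knowledge, the same policy can be evaluated by any enforcement point that can call OPA, in any cloud.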
Pairing OPA with Terraform and Kubernetes
As one example, many developers now use OPA in tandem with infrastructure-as-code (IaC) tools like Terraform and AWS CDK. Developers use IaC tools to make declarative changes to their vendor-hosted cloud infrastructure—describing the desired state of how they want their infrastructure configured, and letting Terraform figure out which changes need to be made. Developers then use OPA, a policy-as-code tool, to write policies that validate the changes that Terraform suggests and test for misconfigurations or other problems, before they are applied to production.
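A sketch of that validation step, assuming Terraform's JSON plan format (produced by `terraform show -json`), might look like the following. The specific check—requiring an `owner` tag on new S3 buckets—is an illustrative example, not a prescribed rule:

```rego
# Validates a Terraform JSON plan before it is applied.
package terraform.validation

deny[msg] {
    # Walk every resource change Terraform proposes.
    rc := input.resource_changes[_]
    rc.type == "aws_s3_bucket"
    rc.change.actions[_] == "create"
    # Flag new buckets that lack an 'owner' tag.
    not rc.change.after.tags.owner
    msg := sprintf("S3 bucket %q must have an 'owner' tag", [rc.address])
}
```

In a pipeline, the plan JSON is passed to OPA (for example, via `opa eval`) and a non-empty `deny` set blocks the apply.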
At the same time, OPA can automatically approve routine infrastructure changes to cut down on the need for manual peer review (and the potential for human error that comes with it). This creates a vital safety net and sanity check for developers, and allows them to experiment risk-free with different configurations. While the cloud infrastructure itself is not portable between vendors, the approach is, by design.
In a similar way, developers also use OPA to control, secure, and operationalize Kubernetes across clouds, and even across various Kubernetes distributions. Kubernetes has become a standard for deploying, scaling, and managing fleets of containerized applications. Just as Kubernetes is portable, so, too, are the OPA policies that you run on top of it.
There are many Kubernetes use cases for OPA. One popular use case, for example, is to use OPA as a Kubernetes admission controller to ensure containers are deployed correctly, with appropriate configuration and permissions. Developers can also use OPA to control Kubernetes ingress and egress decisions, for example writing policies that prohibit ingresses with conflicting hostnames to ensure that applications never steal each other’s internet traffic. Most important for the multicloud world, perhaps, is the ability to ensure that each Kubernetes distribution, across clouds, is provably in compliance with enterprise-wide corporate security policies.
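The conflicting-hostname check mentioned above can be sketched as an admission-control policy. This version assumes that existing Ingress objects are replicated into OPA under `data.kubernetes.ingresses` (as in OPA's admission control tutorial, typically via a sidecar like kube-mgmt):

```rego
# Rejects a new Ingress whose host collides with any existing Ingress.
package kubernetes.admission

deny[msg] {
    input.request.kind.kind == "Ingress"
    # Host requested by the incoming Ingress.
    newhost := input.request.object.spec.rules[_].host
    # Hosts already claimed by existing Ingresses (replicated into OPA).
    oldhost := data.kubernetes.ingresses[namespace][name].spec.rules[_].host
    newhost == oldhost
    msg := sprintf("ingress host %q conflicts with ingress %v/%v",
                   [newhost, namespace, name])
}
```

Because this is plain Rego evaluated against the Kubernetes admission review payload, the same policy file works on any conformant distribution, in any cloud.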
Creating standard cloud-native building blocks
Before companies can port applications seamlessly across public clouds, they must first create standard building blocks for developers across every cloud-native environment. Along these lines, developers not only use OPA to create policy, but to automate the enforcement of security, compliance, and operations standards across the CI/CD pipeline. This enables repeatable scale for any multicloud deployment, while speeding development and reducing manual errors.
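Part of making enforcement repeatable is testing the policies themselves like application code. OPA ships a test runner (`opa test`) that executes rules whose names begin with `test_`. A self-contained sketch, with a hypothetical image-registry rule and its tests in one file for brevity:

```rego
# Run with: opa test <this file>
package cicd.images

# Hypothetical rule: images must come from the company registry.
deny[msg] {
    img := input.containers[_].image
    not startswith(img, "registry.example.com/")
    msg := sprintf("image %q is not from the approved registry", [img])
}

# An external image should produce at least one denial.
test_deny_external_image {
    deny[_] with input as {"containers": [{"image": "docker.io/nginx"}]}
}

# An internal image should produce no denials.
test_allow_internal_image {
    count(deny) == 0 with input as {"containers": [{"image": "registry.example.com/nginx"}]}
}
```

Running such tests in the CI/CD pipeline catches policy regressions the same way unit tests catch application regressions, in every cloud the pipeline deploys to.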
Because OPA enables policy as code, companies can use Terraform for their public clouds, Kubernetes for container management, and any number of microservices API and app authorization tools, with OPA providing the policy layer for each, while running those same OPA policies in the CI/CD pipeline or on developers’ laptops.
In short, organizations need not waste any time reverse-engineering applications for multicloud portability. Instead, they can focus on building a repeatable process, using common skills, across the entire cloud-native stack.
Tim Hinrichs is a co-founder of the Open Policy Agent project and CTO of Styra. Before that, he co-founded the OpenStack Congress project and was a software engineer at VMware. Tim spent the last 18 years developing declarative languages for different domains such as cloud computing, software-defined networking, configuration management, web security, and access control. He received his Ph.D. in Computer Science from Stanford University in 2008.
New Tech Forum provides a venue to explore and discuss emerging enterprise technology in unprecedented depth and breadth. The selection is subjective, based on our pick of the technologies we believe to be important and of greatest interest to InfoWorld readers. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Send all inquiries to [email protected].
Copyright © 2020 IDG Communications, Inc.