3 OPA Trends from Cloud Native Policy Day at KubeCon + CloudNativeCon


This year’s KubeCon + CloudNativeCon NA featured new and exciting updates from the open source community, and we also hosted our own event: Cloud Native Policy Day with OPA, presented by Styra. 

At Cloud Native Policy Day, we were fortunate to host a full roster of Open Policy Agent (OPA) luminaries from leading companies, and we wanted to share some high-level takeaways from their talks, each of which showcased a unique way of using OPA. While OPA has dedicated Kubernetes use cases (this did happen adjacent to KubeCon, after all!), it isn’t limited to Kubernetes: the talks covered OPA for Kubernetes, for CI/CD, for application authorization, for remote server debugging and more. 

Here’s a quick snapshot of the companies represented:

  • Capital One
  • Chime
  • Snap
  • Nvidia
  • Snowflake
  • Comcast
  • T-Mobile

The long and short of it: OPA continues to prove extremely useful for an increasingly diverse set of use cases for companies across industries, and it’s resilient at global scale. 

Here are the big trends we observed. 

1. Companies navigate the challenge of using and loading data for OPA policy decisions

One of the defining characteristics (and sometimes drawbacks) of any policy tool is that it must get data from somewhere to make a policy decision; as a result, the size and location of that data matter in many cases. 

This was a theme we heard from both Chime and Nvidia. In the latter case, Nvidia senior software engineers Sowmya Seetharaman and Jieping Lu talked about how they solved the challenge of enabling dynamic data decisions with a centralized entitlement system. In general, dynamic policy decisions are made from data that changes frequently, if not in real time, rather than from relatively static data, like an employee’s role. The team was able to extend the OPA SDK to help resolve their data dependencies for dynamic data like geolocation and GPU instance type. 
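To make the idea concrete, a Rego policy can combine per-request input with data loaded into OPA from an external system. Here is a minimal sketch; the `data.entitlements` layout and the `input` fields are hypothetical illustrations, not Nvidia’s actual schema:

```rego
package gpu.entitlements

# Deny by default; a request must match an entitlement to be allowed.
default allow = false

# Allow when the caller holds an entitlement for the requested GPU
# instance type, and the request originates from a permitted region.
# data.entitlements would be kept fresh by pushing or bundling data
# from a centralized entitlement system.
allow {
    entitlement := data.entitlements[input.user][_]
    entitlement.gpu_type == input.gpu_type
    entitlement.allowed_regions[_] == input.geo
}
```

The policy itself stays simple; the hard part, as the speakers noted, is keeping dynamic data like geolocation current in `data` at decision time.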

2. Companies weigh the tradeoffs of centralized and distributed authorization architectures

Adopting OPA requires architectural decisions about how you deploy it and (as we mentioned) its data. As we’ve written about, those decisions come with tradeoffs: a centralized authorization system can sit closer to your data, but every system that needs an authorization decision must call back to it. A distributed fleet of OPAs, by contrast, tends to offer much better performance, because each OPA (together with its data and policy bundles) sits close to the policy enforcement point of an individual service. 

Both Snap and Snowflake talked about their implementations of the latter, distributed style. At Snap, Infrastructure Security Engineer Umar Faruq described their journey to build a centralized (enterprise-wide) access control system that enables distributed policy decisions, enforced in their case at the sidecar level in microservices applications, a popular style of implementation that we often recommend. 
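As a rough illustration of sidecar-enforced authorization (the field names below are hypothetical, not Snap’s actual schema), the sidecar forwards each request to a local OPA, which evaluates a policy like this against the request input and locally bundled data:

```rego
package httpapi.authz

# Deny by default; each rule below grants a specific kind of access.
default allow = false

# Anyone may read the public health endpoint.
allow {
    input.method == "GET"
    input.path == "/health"
}

# Service-to-service calls are permitted only between declared pairs,
# looked up in data shipped with the policy bundle to each sidecar.
allow {
    data.allowed_callers[input.source_service][_] == input.destination_service
}
```

Because both the policy and the `data.allowed_callers` mapping live next to each service, the decision is local and fast; the central control plane only distributes bundles.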

Snowflake offered a (to our knowledge) unique and innovative use case: leveraging OPA to help remote-debug servers and audit machine decisions by authorizing individual commands. In his talk, Principal Engineer James Chacon spoke about how Snowflake worked through their policy challenges as they scaled. 

These talks speak to the tendency of OPA implementations to fall into one of four design patterns, a framework our CTO Tim Hinrichs presented in his keynote at the event. In general, OPA implementations can be sorted into two big buckets: infrastructure (or configuration) authorization and application authorization. Within those two categories, there tend to be two levels of compute power required (large or small), depending on whether the system is centralized or distributed and on the data needs involved. 

Below are the results of a fun activity in which the audience helped sort our speakers’ implementations into these categories. 

3. Companies use OPA to enforce policies at multiple enforcement points and across both applications and infrastructure

Which leads us to our final point: enterprises are finding uses for OPA across both applications and infrastructure. Not only that, but they are using OPA to enforce policies at multiple points within the organization. And, frankly, we love to hear it! It shows that OPA users are finding value in more than one use case, branching out to others and cultivating new internal OPA champions within the company. 

Capital One is a perfect example of this. In his talk, Director of Engineering Jason Burks discussed their journey of using OPA policy not just for Kubernetes (we had to include at least one Kubernetes example!) but also for cloud configuration, validating infrastructure-as-code resource changes against policy as code, and within their internal CI/CD build pipelines. 
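As a hedged sketch of what such an infrastructure-as-code check can look like (the rule here is illustrative, not Capital One’s actual policy), a CI pipeline can export a Terraform plan as JSON and evaluate it with OPA before applying:

```rego
package terraform.policy

# Flag any planned resource deletion; in this sketch, deletions must
# go through a separate, manually reviewed pipeline instead of CI.
# input is the JSON produced by `terraform show -json plan.out`.
deny[msg] {
    change := input.resource_changes[_]
    change.change.actions[_] == "delete"
    msg := sprintf("deletion of %v is not allowed in CI", [change.address])
}
```

A non-empty `deny` set fails the build, turning the policy-as-code check into an ordinary pipeline gate.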

Wrapping up

It’s incredibly heartening and humbling to see the OPA community come together and share their innovative OPA uses, and the response from attendees has likewise been fantastic. Community, quite literally, is what makes an open-source tool like OPA a success. We absolutely recommend that everyone check out the full set of talk recordings (they’ll be live within the week), as well as presenter slides. Thanks for making this another memorable KubeCon and an amazing debut for Cloud Native Policy Day with OPA! We can’t wait to learn other ways the community is using OPA. See you in Amsterdam in 2023! 

Want to share your own OPA use case or have questions about your own implementation? Always feel free to set up a call.
