Dynamic Authorization for Zero Trust Security

Eric Kao

Zero Trust evolved out of at least two decades of the IT industry contending with the inadequacy of perimeter-based security, a model that implicitly trusts users, devices, and workloads inside the perimeter. As industry trends such as cloud computing, remote work, and mobile devices made it increasingly difficult to maintain a well-defined perimeter, the widespread exploitation of both technical and human vulnerabilities made it impossible for any single wall to be truly secure.

Coined in 2010, “Zero Trust” was born of the need to accept that, in reality, every network, device, user, and piece of software is potentially compromised. In the years since, Zero Trust has become a strategic and compliance necessity for much of the mainstream. According to Microsoft’s Zero Trust Adoption Report [1], a survey of 1,200 security decision-makers at enterprises across the globe, 76% have a Zero Trust implementation completed or in progress. Adoption has a wide geographic reach, with the US, Germany, and Japan all above 75%. Furthermore, both the NIS 2 Directive [2] and the Digital Operational Resilience Act (DORA) [3] will require key aspects of Zero Trust security when the regulations are finalized.

Figure 1: Enterprise Zero Trust adoption
Data source: Microsoft Zero Trust Adoption Report [1]

The concept of Zero Trust advocates for stringent and continuous verification of every system, user, and device seeking access to an organization’s network or resources. While much of the early focus of moving toward Zero Trust has been on implementing uniformly strong identity authentication to reduce the reliance on “trusted” networks or devices, a crucial next step is limiting what a user may do once their identity is established. At the heart of a holistic Zero Trust strategy is dynamic access control that gives each actor just enough privilege to perform their expected roles at just the right times and in just the right situations.

The dynamic, just-in-time nature of such access control places special requirements on the authorization technology that determines the right privileges to grant each time they are requested. 

In this white paper, we discuss how to meet the authorization challenges of Zero Trust security. We will:

  • Translate the principles of Zero Trust security into requirements for the authorization technology.
  • Analyze the challenges of meeting those requirements.
  • Recommend how to select and implement an authorization solution to meet those challenges.
  • Show sample deployments using the Enterprise OPA Platform for Zero Trust authorization.

By understanding the dynamic authorization requirements for Zero Trust security, organizations can take decisive steps to fortify their defenses while meeting the ever-evolving security needs and compliance demands.

Zero Trust Defined

Zero Trust is a holistic approach to cybersecurity that emphasizes minimizing trust assumptions and adopting a robust set of security controls. It challenges the traditional perimeter-based security model by assuming that no user or device within or outside the network perimeter is inherently safe. Instead, Zero Trust security focuses on continuous verification and strict access control to protect sensitive resources.

Forrester Research, where the term originated, defines Zero Trust [4] this way:

Zero Trust is an information security model that denies access to applications and data by default. Threat prevention is achieved by only granting access to networks and workloads utilizing policy informed by continuous, contextual, risk-based verification across users and their associated devices. Zero Trust advocates these three core principles: All entities are untrusted by default; least privilege access is enforced; and comprehensive security monitoring is implemented.

Because Zero Trust is a model and a strategy rather than a concrete set of technologies, there are many non-identical definitions. Despite some differences, they generally align with the following principles.

Strong authentication

In a Zero Trust approach, every user, device, and network component should establish and authenticate its identity before being granted access to non-public resources. Strong authentication gives a high degree of confidence that an actor requesting access is who they claim to be. It can include single sign-on, multifactor authentication, sensible credential policy, client endpoint analytics, and continual monitoring for risk-based reauthentication or elevated authentication, sometimes including the use of hardware tokens.

Secure communication

Because no network or medium is implicitly trusted, all communication should be secured. By consistently adopting strong cryptographic protocols, organizations can ensure the confidentiality, integrity, and source authenticity of communication regardless of its location or transport medium.

Dynamic least privilege access

The principle of least privilege access means actors (users, devices, and workloads) should be granted only the minimal access privileges they require. By following this principle, organizations reduce the risk of unauthorized access and limit the potential blast radius of a successful attack via masquerading, replay, social engineering, or an insider threat. Furthermore, the minimal privileges required are dynamic, that is, dependent on time and context. Elements of dynamic context can include location, security assessment of the client device, behavioral analytics, and the known threats or active attacks in the environment.
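
To make this concrete, the following is a minimal policy sketch written in Rego (the policy-as-code language discussed later in this paper) showing how an allow decision can depend on role, action, resource, device posture, and the current threat level. The package name, input fields, and thresholds are illustrative assumptions rather than a prescribed schema.

  package app.authz  # hypothetical package name

  import rego.v1

  default allow := false

  # Grant analysts read access to reports only from a managed, low-risk device
  # and only while the threat level is not critical. All field names and
  # thresholds below are illustrative.
  allow if {
      input.subject.role == "analyst"
      input.action == "read"
      input.resource.type == "report"
      input.context.device.managed
      input.context.device.risk_score < 30
      input.context.threat_level != "critical"
  }

If the device risk score rises or the threat level becomes critical, the same request that succeeded a minute earlier is denied, which is exactly the just-in-time behavior this principle calls for.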

Continuous monitoring and analytics

Organizations should proactively collect and monitor system logs, access control decisions, network traffic, user behavior, and asset security posture. By employing automated and manual analysis, organizations can identify anomalies, detect potential threats, and respond swiftly to security incidents. Continuous monitoring and analytics can also enrich the context that supports strong authentication and dynamic least privilege access.

The Authorization Requirements for Zero Trust Security

At a high level, an implementation of Zero Trust can be grouped into these four conceptual areas:

  1. Identity and authentication
    An identity is a representation of an individual, device, or workload within a computer system, network, or organization. This area commonly includes account provisioning, authentication (single sign-on and multifactor authentication), directory service, and identity governance.
  2. Authorization and access control
    Authorization determines who can do what under which contexts. Broadly understood, authorization includes the administration of access control policy (policy administration), the use of access control policy to determine which access requests should be allowed (policy decision), and the actual admission and denial of requests (policy enforcement), as illustrated in the brief sketch after this list.
  3. Monitoring and analytics
    This area can include continuous diagnostics and mitigation (CDM), Security Information and Event Management (SIEM), device monitoring, threat intelligence, activity logs, and behavioral analytics.
  4. Infrastructure
    This area can include public key infrastructure (PKI), secrets management, and software-defined networking.
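
Returning to the second area, the following minimal Rego sketch illustrates the distinction between policy decision and policy enforcement: the decision point denies by default and returns a structured decision, which an enforcement point such as an API gateway or application middleware then acts on. The package name and data layout are illustrative assumptions.

  package pdp.example  # hypothetical package name

  import rego.v1

  # Policy decision point (PDP) logic: deny by default and return a structured
  # decision. A policy enforcement point (PEP), such as an API gateway or
  # application middleware, queries this rule and admits or rejects the request.
  default decision := {"allow": false, "reason": "no rule matched"}

  decision := {"allow": true, "reason": "resource owner"} if {
      input.subject.id == data.resources[input.resource.id].owner
  }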

Zooming in on authorization, the principles of Zero Trust security lead to some fairly concrete requirements for the authorization architecture.

Contextual data breadth and freshness

In order to implement the principle of least privilege with dynamic contextual data, authorization decisions should be made with a wide range of contextual data. What client location is a request from? How secure is the client device? How sensitive is the requested resource? Is there an active denial-of-service attack? How well does the usage compare with the behavioral profile of the user?

Furthermore, the data must be fresh. For example, when a behavioral anomaly from a compromised identity is detected, authorization decision points not having the latest signals may continue to let an attacker exfiltrate large amounts of data.

Challenges and recommendations

The dynamic contextual data needed for Zero Trust authorization exhibits both high volume and high velocity. Distributing this data to all the authorization decision points and keeping it available for fast retrieval poses technical and cost challenges. Additionally, contextual data comes from many sources with diverse formats and protocols.

To meet these challenges, choose an authorization technology that supports the following:

  • Easy integration with as many as possible of the data formats and protocols used by the varied sources where your authorization-relevant facts are found (e.g., LDAP, Active Directory, SCIM, Kafka, SQL, JSON, YAML, and XML).
  • Flexible data distribution that can act in different ways as needed, including differential data update, streaming data, and just-in-time data.
  • High-performance, cost-efficient data caching close to where the authorization decisions are made, as illustrated in the sketch after this list.
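
The following hedged Rego sketch illustrates how preloaded, cached data and just-in-time retrieval (via OPA’s built-in http.send) can coexist in one policy. The data paths, the risk-scoring endpoint, and the group name are placeholders, not a required layout.

  package authz.context  # hypothetical package name

  import rego.v1

  # Low-velocity facts (e.g., group membership) are preloaded and cached under data.*
  user_groups := data.directory.groups[input.subject.id]

  # A higher-velocity signal fetched just in time; the URL is a placeholder for an
  # internal risk-scoring service, and responses can be cached to limit repeated calls.
  device_risk := risk if {
      resp := http.send({
          "method": "GET",
          "url": sprintf("https://risk.internal.example/devices/%s", [input.device.id]),
          "cache": true,
      })
      resp.status_code == 200
      risk := resp.body.risk_score
  }

  default allow := false

  allow if {
      "payments-admins" in user_groups
      device_risk < 50
  }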

High availability and low latency

A Zero Trust approach requires continuous evaluation of access. Directing every authorization request through a dedicated service naturally places very strict requirements on the availability and the responsiveness of this service, and it is crucial to avoid imposing any bottleneck or Achilles’ heel on the organization. Whereas a user may tolerate several seconds of delay on an initial login, the limit for acceptable delay is generally much lower when experienced on every request.

Challenges and recommendations

Everything fails at some point. High availability can be achieved by using redundancy to avoid a single point of failure. Different redundancy topologies make sense for different situations. For example, a shared authorization service may be deployed in an active-active, load-balanced cluster (Figure 2). Alternatively, a horizontally-scaled application may pair each application instance with a locally available authorization decision point to maintain the desired level of availability (Figure 3). In both cases, the authorization service should be included in the overall disaster recovery and failover plan.

Figure 2: Shared authorization service


Figure 3: Local authorization decision points


The goal of low-latency authorization faces several headwinds:

  • The authorization computation may require contextual data from several sources.
    This challenge can be met by choosing an authorization technology that supports the preloading of contextual data to minimize data retrieval delays, and furthermore caches the preloaded data efficiently so as to work within organizational and technical resource constraints.
  • The authorization computation may be complex.
    This challenge can be mitigated by choosing an authorization technology that demonstrates good real-world performance and also supports performance-tuning of the authorization computation (a brief tuning sketch follows this list).
  • Authorization decisions sent over the network incur latency.
    This challenge can be met by choosing an authorization technology that supports a range of convenient deployment options. For the most latency-sensitive cases, the network delay can be avoided by deploying the authorization decision points locally with each application or microservice instance.
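
As a small example of performance tuning, the Rego sketch below contrasts a rule that scans a list of permission records with an equivalent rule that looks up a preloaded object keyed by user. Both data structures are hypothetical.

  package authz.tuning  # hypothetical package name

  import rego.v1

  # Slower: scans every record in a potentially large list on each decision.
  allow_slow if {
      some p in data.permissions_list            # e.g., [{"user": "alice", "action": "read"}, ...]
      p.user == input.user
      p.action == input.action
  }

  # Faster: preload the same facts as an object keyed by user, so evaluation
  # becomes a direct lookup instead of a scan.
  allow_fast if {
      input.action in data.permissions_by_user[input.user]   # e.g., {"alice": ["read"], ...}
  }

The same facts can often be distributed in either shape; choosing the lookup-friendly shape keeps decision latency flat as the data grows.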

Expressive access control with usability

Authorization requirements for an organization are rarely simple, and they change over time with regulations as well as business priorities. Any successful authorization technology for Zero Trust security needs a flexible and adaptable way of expressing all of these requirements and policies.

The language or interface for expressing the policies needs to meet these requirements:

  • Be expressive enough to implement dynamic least privilege access.
  • Be adaptable to changing security and compliance needs, including new sources of contextual data.
  • Be usable by the full range of stakeholders including non-technical stakeholders.

Challenges and recommendations

Given ever-evolving security and business needs, it is not feasible for an authorization technology to anticipate all the types of access control policies that will be needed. Technologies that force one particular style of access control (say, role-based or attribute-based access control) make it impractical to adapt to evolving threats, business needs, and regulatory requirements.

Furthermore, expressiveness and extensibility cannot come at the expense of usability. The more complex a language or interface for defining access control policy, the more training is generally required. In most organizations, it is not feasible to rely on access control specialists to create and maintain the access control policy for all the applications and business domains. In many cases, the language or interface must be usable by generalist developers and business domain experts in addition to access control specialists.

To meet these challenges, choose an authorization technology that does the following:

  • Supports open-ended access control policy without forcing a particular style.
  • Avoids the need for general-purpose programming which tends to be inaccessible to non-developers and unnecessarily complex for developers.
  • Provides open-ended support for a wide range of data formats.
  • Provides the users with interfaces they are already familiar with and will be the most productive with. For example, a business domain expert may prefer a graphical editor while a software developer may prefer a declarative policy as code language.

Brownfield onboarding

Organizations rarely have the luxury to replace existing technologies and processes wholesale. The authorization architecture is often required to accommodate incremental progress on the Zero Trust journey. In fact, the top three technological barriers to Zero Trust adoption cited by those surveyed in Microsoft’s Zero Trust Adoption Report all point to integration and onboarding in a brownfield environment [5].

The authorization technology should work with existing authentication and identity providers while also connecting with new providers that may be adopted along the way.

The authorization technology should also work well with all types of applications including internally-developed apps, externally-developed apps, apps in maintenance mode, apps under active development, in-house apps, customer-facing apps, SaaS, monoliths, and microservices.

The work required to onboard and maintain the authorization integration should be reasonable for the technical staff, most of whom will have many priorities besides authorization.

Challenges and recommendations

Choose an authorization technology that integrates well with most identity providers and authentication mechanisms. Robust support for well-known standards such as SAML, OAuth, OpenID Connect, JSON Web Token, and SCIM is useful. An open architecture that adapts to custom or emerging identity frameworks further helps your organization stay flexible on the Zero Trust journey.

Choose an authorization technology that integrates with services and applications while imposing minimal constraints on the services’ or applications’ architecture, implementation language, and development framework. No architecture change should be required to onboard an application. Onboarding without code change should also be an option in order to onboard applications whose development is outside of an organization’s control.
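
As one example of integrating with an existing identity provider without changing application code, the following Rego sketch verifies a JWT issued by a hypothetical provider and bases the decision on its claims. The issuer, the JWKS document, and the permissions claim are assumptions, not fixed requirements.

  package authn.token  # hypothetical package name

  import rego.v1

  # Verify a bearer token issued by an existing identity provider and extract its
  # claims. The issuer and the JWKS document (distributed as data) are placeholders.
  claims := payload if {
      [valid, _, payload] := io.jwt.decode_verify(input.token, {
          "cert": data.idp.jwks,                 # JWKS as a string, distributed to the engine
          "iss": "https://idp.example.com",
      })
      valid
  }

  default allow := false

  allow if {
      input.action in claims.permissions         # assumes the provider issues a permissions claim
  }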

Policy lifecycle

As an organization, its applications, and its environment evolve, the access control policies also need to change. Developers and business domain experts across the organization need to continually update the access control policy for their applications and business domains.

Challenges and recommendations

Every update to access control policy carries risk. Misconfigurations that accidentally expose sensitive resources continue to challenge the industry, as the long and growing list of incidents shows. Conversely, misconfigurations that accidentally block legitimate access receive less press but can seriously disrupt operations.

To meet these challenges, organizations should consider doing the following:

  • Use an authorization technology that supports automated and semi-automated testing and analysis of policy changes, which help catch unintended consequences before they take effect (see the testing sketch after this list).
  • Adopt workflows for proposing, reviewing, and approving access control policy changes.
  • Use an authorization technology that supports the workflows that fit your organization.
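
A brief sketch of automated policy testing, assuming the illustrative policy shown earlier in this paper lives in package app.authz: each test_ rule below supplies its own input, so a proposed change can be checked against both a denial it must preserve and an access it must not break.

  package app.authz_test  # hypothetical; tests the illustrative policy in app.authz

  import rego.v1

  import data.app.authz

  # A change must never grant analysts write access to reports.
  test_analyst_cannot_write_reports if {
      not authz.allow with input as {
          "subject": {"role": "analyst"},
          "action": "write",
          "resource": {"type": "report"},
      }
  }

  # Legitimate read access must keep working after the change.
  test_analyst_can_read_reports if {
      authz.allow with input as {
          "subject": {"role": "analyst"},
          "action": "read",
          "resource": {"type": "report"},
          "context": {"device": {"managed": true, "risk_score": 10}, "threat_level": "low"},
      }
  }

Such tests can run with the opa test command in a CI pipeline so that a proposed policy change is rejected before it takes effect.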

Governance

While most organizations cannot rely on a central team of specialists to create and maintain access control for all the applications and business domains, some level of central governance is still needed to uphold security best practices.

Challenges and recommendations

Without robust governance in place, access control policies often drift from the principle of least privilege. Furthermore, as new applications are built and deployed, developers may not always integrate correctly with the organization’s authorization framework. These applications may use substandard authorization or sometimes no authorization at all.

To meet these challenges, organizations should consider doing the following:

  • Develop standardized authorization integration components for use across the organization. Automate the provisioning and configuring of authorization integration where possible.
  • Enforce infrastructure guardrails that flag or block the deployment of applications that do not conform to the authorization integration standards.
  • Implement a baseline access control policy that is applied across the organization. Use an authorization technology that supports the enforcement of such a baseline policy while also allowing flexible customization for different classes of applications (a brief sketch follows this list).
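
The following is a minimal Rego sketch of how a baseline policy might be combined with per-application policy; the package names, classification values, and clearance attribute are illustrative assumptions.

  package baseline  # hypothetical org-wide baseline package

  import rego.v1

  # Organization-wide minimums: every request must be authenticated, and restricted
  # resources require an explicit clearance attribute. Field names are illustrative.
  deny contains "unauthenticated request" if {
      not input.subject.authenticated
  }

  deny contains "restricted resource requires clearance" if {
      input.resource.classification == "restricted"
      input.subject.clearance != "high"
  }

  default allow := false

  # A request is allowed only when the baseline raises no objection and the
  # application's own policy (maintained by each team in its own package) agrees.
  allow if {
      count(deny) == 0
      data.apps[input.app].allow
  }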

Implementation of Zero Trust Dynamic Authorization

Having looked at the requirements and challenges of dynamic authorization for Zero Trust security, let’s turn our attention to a concrete example. In this section, we look at a reference architecture showing how to meet the authorization challenges of Zero Trust security in a brownfield environment. At Styra, we are most familiar with the Enterprise OPA Platform, so we will ground the discussion using this solution.

The Enterprise OPA Platform is a complete authorization solution trusted by some of the world’s largest organizations for their Zero Trust initiatives. From the creators and maintainers of Open Policy Agent (OPA), the Enterprise OPA Platform combines the advantages of open source OPA, a global standard for policy as code, with enterprise capabilities, performance, and support.

The Enterprise OPA Platform includes distributed decision engines and a central manager.

  • Enterprise OPA, the decision engine, is lightweight and architecturally flexible, computing the authorization decision for each request.
  • Enterprise OPA Manager, the central manager, provides centralized visibility and administration including the control plane, policy lifecycle management, and policy governance.

Deployment models

To meet the performance and availability requirements within the resource constraints of each use case, the Enterprise OPA Platform is designed to work in a range of deployment models. In a shared service model, requests from several applications or application instances go through a shared authorization service that is a horizontally scalable and highly available collection of decision engines (Figure 4). In a local decision points model, each instance of each application or microservice is paired with a local, dedicated decision engine to minimize the networking overhead (Figure 5). An organization can mix and match the two models depending on the needs of each use case. 

Figure 4: Shared service model sample deployment


Figure 5: Local decision points model sample deployment

Data architecture for Zero Trust dynamic authorization

Consider the example of a financial institution protecting its customers’ sensitive data. The institution wants to grant certain employees deep access into a customer’s transaction data and behavioral data, but only when there is a legitimate business need, such as investigating suspected money laundering.

The decision to allow or deny access depends on many pieces of data. We highlight several to illustrate how the Enterprise OPA Platform enables dynamic authorization (a combined policy sketch follows the list):

  • The roles and attributes of the accessing employee are typically available from identity and directory services. This type of low-velocity data can be fetched by the Enterprise OPA Manager, then distributed to and cached by the Enterprise OPAs for fast decision-making.
  • The flagging of accounts for investigation is an example of medium-velocity data that can be streamed directly to the individual Enterprise OPAs that need this information to grant access.
  • Behavioral analytics systems continuously analyze employee access patterns to identify suspected compromise or abuse. Enterprise OPAs can consume these high-velocity data signals via data streams, REST API calls, or database queries, in order to react to the latest signals in granting or denying access.
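
Putting these three data flows together, a hedged Rego sketch for this scenario might look like the following; the data paths, role name, and risk threshold are placeholders for whatever the institution’s directory, case-management, and analytics systems actually provide.

  package finance.investigations  # hypothetical package name

  import rego.v1

  default allow := false

  # An investigator may read detailed transaction data only for accounts currently
  # flagged for investigation, and only while their own behavioral risk score stays
  # below a threshold. All data paths and the threshold are illustrative.
  allow if {
      input.action == "read_transactions"

      # Low-velocity: roles distributed from directory services and cached locally.
      "aml_investigator" in data.directory.users[input.subject.id].roles

      # Medium-velocity: flagged accounts streamed to the decision engines.
      input.resource.account_id in data.investigations.flagged_accounts

      # High-velocity: the latest behavioral risk signal for the requesting employee.
      data.behavior.risk_scores[input.subject.id] < 70
  }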

Beyond this example, the Enterprise OPA Platform handles the volume, velocity, and variety of data by providing the following capabilities.

  • The Enterprise OPA Platform provides out-of-the-box integration with LDAP, Active Directory, Okta, SCIM, Kafka, SQL, MongoDB, DynamoDB, and JSON data sources plus straightforward integration with YAML and XML data.
  • For medium to high-velocity data, Enterprise OPAs directly consume streaming data sources and make just-in-time data queries.
  • For low to medium-velocity data, the Enterprise OPA Manager efficiently aggregates and distributes data to the decision engines via differential bundle updates.
  • A proprietary data cache that is 20X more memory-efficient than open source OPA enables the use of high-volume contextual data to make fast and accurate authorization decisions.

High availability and low latency

The Enterprise OPA Platform delivers fast and highly available authorization within your deployment constraints through Enterprise OPA. Enterprise OPA is a lightweight decision engine that computes the authorization decisions for each request according to the latest contextual data and the fine-grained access control policy. Enterprise OPAs efficiently cache the contextual data and the access control policies to provide fast authorization decisions. These decision engines fit in almost any deployment configuration, whether as a central service, a sidecar container, a daemon, a node-level service, or an integrated part of an enterprise application.

Governance

To meet the governance challenges of Zero Trust authorization, the Enterprise OPA Platform includes the Enterprise OPA Manager, which provides central visibility and administration.

  • API-driven workflows, secrets manager integration, and established integration practices all aid in standardizing and automating authorization integration across an organization.
  • Mandatory baseline policy allows a central team to enforce minimum access control standards across the board and also customize the baseline policy for different classes of applications.
  • Kubernetes and Terraform guardrails can flag, restrict, or block the deployment of unapproved applications that may not conform to the organization’s authorization standards.

Policy lifecycle

To catch and avoid unintended consequences of policy change, the Enterprise OPA Platform conducts impact analysis using both backtesting and live testing to expose problems.

  • Impact Analysis backtesting runs a proposed new policy on past requests to aid in making sure all behavior changes are intended.
  • Live Impact Analysis adds another layer of risk reduction by trialing a proposed new policy on live requests.

To support natural policy workflows, the Enterprise OPA Platform integrates with your existing Git service to manage the proposal, review, and approval of policy change. The integration with an existing Git service allows both developers and other stakeholders to participate in the policy workflow using the tools that are most familiar to them. IDEs, text editors, and the Enterprise OPA Platform graphical editor all fit into the workflow.

Expressive access control with usability

The Enterprise OPA Platform makes expressive, fine-grained access control usable by adopting Rego, the standard policy language of Open Policy Agent.

  • Rego supports open-ended access control policy without forcing a particular style. RBAC, ABAC, ReBAC, and more are all supported within a single language and a single mental model. Policy authors can choose the style that fits each particular use case, smoothly transition between styles, or combine multiple styles (see the sketch after this list).
  • Rego is a declarative language that can be picked up by any experienced user of database queries or spreadsheets. As a declarative language, Rego avoids the unnecessary complexities of general-purpose programming.
  • The Enterprise OPA Platform provides policy composers that help both developers and non-developers create and edit access control policies with an easier learning curve.
  • As an open standard with over 2 billion downloads, Rego comes with a rich ecosystem of community tools and learning resources, including IDE plugins, a linter, a playground, and online courses.
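
The sketch below shows RBAC, ABAC, and ReBAC rules contributing to a single allow decision in one Rego policy; all of the data structures and attribute names are illustrative assumptions.

  package authz.styles  # hypothetical package name

  import rego.v1

  default allow := false

  # RBAC: a role grants a coarse-grained permission.
  allow if {
      some role in data.rbac.user_roles[input.subject.id]
      input.action in data.rbac.role_permissions[role]
  }

  # ABAC: subject and resource attributes refine access.
  allow if {
      input.action == "read"
      input.resource.department == input.subject.department
      input.resource.classification != "restricted"
  }

  # ReBAC: a relationship (here, document ownership) grants access directly.
  allow if {
      input.subject.id == data.documents[input.resource.id].owner
  }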

Brownfield onboarding

The Enterprise OPA Platform works well with most identity providers because it supports well-known standards such as SAML, OAuth, OpenID Connect, JSON Web Token, and SCIM. Moreover, the Enterprise OPA Platform’s flexible data architecture enables it to work with multiple identity providers and trust services while also adapting to custom or emerging frameworks.

The Enterprise OPA Platform also works well with almost all types of applications. By using an API gateway or a proxy that obtains and enforces the authorization decisions, the integration can often be done with no change to the application code. SaaS applications that support externalized authorization can also be protected using the Enterprise OPA Platform.

A Zero Trust adoption strategy that has shown real-world success is for a central team to first adopt an API gateway architecture that enforces a baseline level of access control for the whole organization (see the sample deployment in Figure 4), and then enable the disparate parts of the organization to follow up with more granular access control. With this strategy, the organization can achieve significant progress in Zero Trust security while giving individual teams more time to adopt the authorization framework in a manner that fits their technical and human contexts. One team may adopt a workflow that contributes their specific access control policy to the org-level gateway. Another team may deploy a shared authorization service to make access control decisions for their monolith applications. Yet another team may deploy authorization decision engines local to each containerized application instance and obtain an exemption from the central gateway to achieve particularly demanding latency goals.
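
As a hedged illustration of such a gateway-level baseline, assuming an Envoy-style external authorization input and a JWKS document distributed to the decision engines as data, the policy might look like the following sketch.

  package gateway.baseline  # hypothetical org-level gateway policy

  import rego.v1

  default allow := false

  # Health checks stay open.
  allow if {
      input.attributes.request.http.path == "/healthz"
  }

  # Every other request must carry a bearer token that verifies against the
  # organization's identity provider.
  allow if {
      [_, token] := split(input.attributes.request.http.headers.authorization, " ")
      [valid, _, _] := io.jwt.decode_verify(token, {"cert": data.idp.jwks})
      valid
  }

Individual teams can then layer their own, more granular policies behind this baseline as they onboard.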

Conclusion

Zero Trust asks a great deal of the authorization technology. To implement dynamic least privilege access in a Zero Trust framework, an authorization technology needs to consume a high volume and variety of fast-changing data and then use the data to act quickly and reliably to authorize each access across your organization. The technology needs to enable granular access control that is adaptable to evolving needs. And it needs to minimize disruption to existing technology, people, and processes.

Zero Trust isn’t going to happen overnight, but Styra’s Enterprise OPA Platform provides a complete, battle-tested authorization solution to simplify your Zero Trust adoption. Schedule a demo to see how the Enterprise OPA Platform can fulfill your Zero Trust authorization needs.


  1. Zero Trust Adoption Report, Microsoft Security.
  2. The preamble of the NIS 2 Directive specifically mentions “zero-trust principles” as a “basic cyber hygiene practice”.
  3. Articles 21 and 22 of the June 2023 draft regulation consultation describe principles such as least privilege access and the just-in-time assignment of privileged access.
  4. The Definition Of Modern Zero Trust, Forrester Research.
  5. Zero Trust Adoption Report (Exhibit 12), Microsoft Security.