Authorization Engine Comparison
Five open-source authorization engines, compared on features, integration depth, and evaluation performance. The engines represent different design traditions: purpose-built policy languages (SAPL, Cedar), general-purpose policy engines (OPA/Rego), relationship-based authorization (OpenFGA), and policy-as-YAML (Cerbos).
Summary
- Performance. SAPL delivers sub-microsecond median evaluation latency, 10x faster than Cedar and orders of magnitude faster than OPA and OpenFGA in the Cedar OOPSLA benchmark scenarios. As a deployed server, SAPL sustains over 2M decisions/sec (8 cores, JVM) with 35 µs p50 latency, even at 10,000 policies. See the full performance benchmarks.
- Streaming. Only SAPL supports streaming authorization (ASBAC). Applications subscribe once and receive updated decisions as policies, attributes, or external data change. All other engines are request-response only.
- Decisions beyond permit/deny. Obligations and advice were introduced by the XACML standard. SAPL carries this concept forward with a modern policy language. Decisions include structured instructions for query rewriting, data filtering, field redaction, audit logging, and approval workflows. None of the other engines compared here includes obligations or advice in the decision; Cerbos offers limited output expressions.
- Framework integration. SAPL provides enforcement annotations or decorators for 7 frameworks (Spring, Django, FastAPI, Flask, NestJS, .NET, FastMCP), including streaming enforcement and database query rewriting. OPA provides enforcement middleware for Spring Boot and ASP.NET Core. OpenFGA has a Spring Boot starter with @PreAuthorize. Cedar has Express.js middleware with OpenAPI schema generation. Cerbos covers 8 languages as API clients, with query plan adapters for Prisma, SQLAlchemy, and Drizzle. See the integration depth table for details.
- Verification and testing. All engines provide ways to gain confidence in policy correctness, with different approaches. Cedar offers formal verification via Lean proofs for mathematically provable security properties. The NIST NGAC standard also explores this area. SAPL has a behavior-driven testing language (SAPLTest) with given/when/expect/then blocks, declarative mocking, streaming assertions, and coverage reporting. OPA has a built-in Rego test runner with coverage. Cerbos uses YAML test suites. OpenFGA has CLI model tests.
- AI and agent authorization. SAPL provides a dedicated FastMCP SDK for MCP server authorization. Spring AI tool methods can be secured via the existing SAPL Spring Boot integration. RAG and human-in-the-loop patterns use standard SAPL features (obligations, query rewriting). Cedar offers MCP schema generation. OpenFGA and Cerbos document RAG patterns.
- Deployment. SAPL, OPA, OpenFGA, and Cerbos ship standalone server binaries. SAPL, Cedar, and OPA can also be embedded as libraries; Cedar is the only engine that is embeddable-only, with no standalone server.
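The obligation mechanism described above can be sketched as a small policy enforcement point (PEP): the decision carries structured instructions, and the enforcement layer must fulfil each one or deny access. The decision shape loosely follows the XACML/SAPL model; the field names and obligation types below are illustrative, not SAPL's actual wire format.

```python
# Sketch of a PEP enforcing obligations attached to a decision.
# Decision structure loosely follows the XACML/SAPL model; the field
# names and the "redactFields" obligation type are illustrative only.

def enforce(decision: dict, resource: dict) -> dict:
    """Apply a decision to a resource, honoring all obligations."""
    if decision["decision"] != "PERMIT":
        raise PermissionError("access denied")
    for obligation in decision.get("obligations", []):
        if obligation["type"] == "redactFields":
            for field in obligation["fields"]:
                if field in resource:
                    resource[field] = "[REDACTED]"
        else:
            # An obligation the PEP cannot fulfil must result in denial.
            raise PermissionError(f"unhandled obligation: {obligation['type']}")
    return resource

record = {"name": "Alice", "diagnosis": "J11", "room": "204"}
decision = {
    "decision": "PERMIT",
    "obligations": [{"type": "redactFields", "fields": ["diagnosis"]}],
}
print(enforce(decision, record))
# {'name': 'Alice', 'diagnosis': '[REDACTED]', 'room': '204'}
```

The key semantic point, shared by XACML and SAPL, is that an obligation the enforcement layer cannot handle converts a permit into an effective deny.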
What Each Engine Does Well
Every engine in this comparison is a serious, maintained open-source project. Each reflects different design priorities, and the right choice depends on your requirements.
SAPL
Fastest evaluation. Only engine with streaming authorization and first-class obligations/advice. Deep framework integrations with method-level enforcement. Extensible with custom functions and attribute finders. Behavior-driven testing language with coverage.
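The streaming model can be illustrated with a toy publish/subscribe PDP: a client subscribes once and receives a fresh decision whenever the policy set changes, instead of polling. This is a conceptual sketch of the subscription semantics only, not the SAPL API.

```python
# Conceptual sketch of streaming authorization (ASBAC): subscribers
# receive a new decision whenever policies change. This illustrates
# the subscription model only; it is not the actual SAPL API.

class ToyStreamingPDP:
    def __init__(self):
        self.allowed_roles = {"admin"}   # toy stand-in for a policy set
        self.subscriptions = []          # (subject, callback) pairs

    def decide(self, subject: dict) -> str:
        return "PERMIT" if subject["role"] in self.allowed_roles else "DENY"

    def subscribe(self, subject: dict, callback) -> None:
        """Subscribe once; receive the current and all future decisions."""
        self.subscriptions.append((subject, callback))
        callback(self.decide(subject))

    def update_policies(self, allowed_roles: set) -> None:
        """A policy change re-evaluates every active subscription."""
        self.allowed_roles = allowed_roles
        for subject, callback in self.subscriptions:
            callback(self.decide(subject))

received = []
pdp = ToyStreamingPDP()
pdp.subscribe({"role": "nurse"}, received.append)  # initial decision: DENY
pdp.update_policies({"admin", "nurse"})            # pushed update: PERMIT
print(received)  # ['DENY', 'PERMIT']
```

A request-response engine would have returned the stale DENY until the application asked again; the streaming model pushes the PERMIT as soon as the policy changes.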
Cedar
Formal verification via Lean proofs. Mathematically provable security properties about policy sets. Purpose-built language with strong static typing. Backed by AWS.
OPA / Rego
Broadest cloud-native ecosystem. CNCF graduated project. First-class Kubernetes (Gatekeeper), Envoy, Terraform, Docker, and Kafka integrations. Domain-agnostic: policies beyond just authorization.
OpenFGA
Google Zanzibar model for relationship-based access control at scale. CNCF incubating project. Purpose-built for ReBAC with transitive relationship traversal. Horizontally scalable. Used in production by Auth0, Grafana Labs, Docker.
Cerbos
Stateless PDP with zero-dependency YAML policies. GitOps-native with file/git-based policy storage. Widest SDK coverage (8 languages). Kubernetes sidecar and DaemonSet patterns.
Performance Comparison
These benchmarks reproduce the experimental setup from the Cedar OOPSLA 2024 paper (Cutler et al., Section 5, Figure 14), adding SAPL to the original comparison of Cedar, OPA, and OpenFGA. The same three application scenarios, the same entity generators, the same request distributions. All engines evaluate equivalent authorization models and produce identical allow/deny decisions for every request.
Each data point represents 100,000 authorization requests across 200 randomly generated entity stores. Evaluation time measures the core is_authorized() operation: no parsing, no entity loading. SAPL, Cedar, and OPA are evaluated as embedded libraries. OpenFGA is evaluated over HTTP to a local in-memory server, as in the original Cedar paper.
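The median and p99 figures reported below are order statistics over the raw per-request timings. A minimal sketch of the computation (sample values are illustrative, not benchmark data):

```python
# Sketch: nearest-rank percentiles over raw per-request latencies,
# the statistic behind the median (p50) and p99 figures below.
# The sample timings are illustrative, not actual benchmark data.
import math

def percentile(samples: list[float], p: float) -> float:
    """Nearest-rank percentile for 0 < p <= 100."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_us = [0.4, 0.5, 0.5, 0.6, 0.7, 0.8, 1.2, 5.0, 0.5, 0.6]
print(percentile(latencies_us, 50), percentile(latencies_us, 99))
```

The gap between p50 and p99 is why both are reported: a single slow outlier barely moves the median but dominates the tail.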
Google Drive: Median and p99 Evaluation Latency
5 policies. Users share Documents and Folders with transitive view access. Entity graph scales with N (users, groups, documents, folders).
GitHub: Median and p99 Evaluation Latency
8 policies. Users and Teams with read/triage/write/maintain/admin on Repositories and Organizations. Entity graph scales with N.
TinyTodo: Median and p99 Evaluation Latency
4 policies. Users and Teams sharing todo Lists. Entity graph scales with N.
Benchmark Environment
- Clock: All P-cores pinned to 4.0 GHz (constant frequency, no turbo/throttle noise)
- JVM: OpenJDK 25.0.2 (HotSpot C2) for SAPL
- Engines: Cedar v4.10 and v3.0.1 (Rust); OPA Rego v0.61.0 (Go); OpenFGA latest (Go, in-memory store)
- OS: NixOS Linux 6.18.19
- Protocol: 200 entity stores per data point, 500 requests per store (100,000 total)
- Metric: Core is_authorized() time, excluding I/O, parsing, and entity loading
Feature Comparison
| Feature | SAPL | Cedar | OPA | OpenFGA | Cerbos |
|---|---|---|---|---|---|
| Authorization Models | |||||
| RBAC | Yes | Yes | Yes | Yes (via ReBAC) | Yes |
| ABAC | Yes | Yes | Yes | Partial (CEL conditions) | Yes |
| ReBAC | Yes | Yes | Yes | Yes | No |
| ACL | Yes | Yes | Yes | Yes (via tuples) | Yes |
| Location-based (GIS / geometry) | Yes (built-in geo functions) | No | No | No | No |
| Streaming (ASBAC) | Yes | No | No | No | No |
| Policy Language | |||||
| Language type | SAPL (purpose-built) | Cedar (purpose-built) | Rego (Datalog-derived) | DSL + JSON | YAML + CEL conditions |
| Human-readable syntax | Yes | Yes | Moderate (Datalog) | Moderate (DSL/JSON) | Yes (YAML) |
| Hot-reload policies | Yes (active subscriptions re-evaluate) | Manual (reconstruct authorizer with new policy set) | Yes (bundles) | N/A (API-driven) | Yes (file watch) |
| Custom functions | Yes | No (extension types only) | Yes (Rego + Go plugins) | No | No (built-in CEL only) |
| Decision Model | |||||
| Decision protocol | Streaming + request-response | Request-response | Request-response | Request-response | Request-response |
| Obligations / advice | Yes (first-class constructs) | No | No | No | Output expressions (limited) |
| External data during evaluation | Yes (HTTP, MQTT, clock, custom PIPs) | No (pre-loaded entity store) | Yes (http.send, bundles) | No (pre-loaded tuples) | No (pre-loaded data) |
| Data filtering / query rewriting | Yes (JPA, R2DBC, MongoDB via obligations) | No | Partial evaluation | ListObjects / ListUsers API | PlanResources API |
| Data transformation | Yes (resource transformation via obligations) | No | Arbitrary structured output | No | Output expressions |
| Verification and Testing | |||||
| Formal verification | No | Yes (Lean proofs) | No | No | No |
| Testing framework | Behavior-driven DSL (SAPLTest) | CLI validation + analysis | Built-in Rego test runner | CLI model tests | YAML test suites |
| Coverage reporting | Yes | No | Yes | No | No |
| Mocking support | Yes (declarative PIP + function mocking) | No | No (test with fixtures) | No | No |
| Deployment | |||||
| Embeddable library | Yes (JVM) | Yes (Rust, Java via JNI, community Go/WASM) | Yes (Go) | Server-oriented (Go library exists) | No |
| Standalone server | Yes (HTTP + RSocket) | No (library only) | Yes (HTTP) | Yes (HTTP + gRPC) | Yes (HTTP + gRPC) |
| Native binary (no runtime needed) | Yes (GraalVM native image) | Yes (Rust) | Yes (Go) | Yes (Go) | Yes (Go) |
| Implementation language | Java / JVM | Rust | Go | Go | Go |
| Operations | |||||
| Health / readiness probes | Yes (Actuator: liveness, readiness, startup) | No (library) | Yes (/health endpoint) | Yes (gRPC + HTTP probes) | Yes (/_cerbos/health + CLI) |
| Decision logging | Yes (structured JSON with subscription, decision, obligations) | No (library) | Yes (remote HTTP, console, custom plugins) | Yes (changelog API + structured logs) | Yes (File, Kafka, Local DB, Hub) |
| Prometheus metrics | Yes (decisions, latency, active subscriptions) | No (library) | Yes (bundle loading, request latency) | Yes | Yes (+ OTLP push) |
| Signed policy bundles | Yes (Ed25519) | No | Yes (JWT + HMAC/RSA/ECDSA) | No | No (Git-based versioning) |
| Evaluation diagnostics | Yes (full trace, JSON report, text report) | Determining policies + error details | Yes (explain modes, OpenTelemetry) | OpenTelemetry tracing | Yes (matched policy, AST, OpenTelemetry) |
| SDKs and Integrations | |||||
| Language SDKs | Java, Python, JS, C# | Rust, Go, Java, JS | Go, Java, JS, C# | Java, JS, Go, Python, C# | Go, Java, JS, C#, PHP, Python, Ruby, Rust |
| Framework integrations | Spring, Django, FastAPI, Flask, NestJS, .NET, FastMCP | Express.js | Spring Boot, ASP.NET Core, Kubernetes, Envoy, Terraform, Docker, Kafka | Spring Boot | Query plan adapters (Prisma, SQLAlchemy, Drizzle) |
| Kubernetes | Via server deployment | Admission control (deprecated PoC) | Gatekeeper (CNCF), admission control | Helm chart | Sidecar, DaemonSet, Helm chart |
| AI and Agent Authorization | |||||
| All engines can authorize AI operations via their standard APIs. This section compares dedicated integrations and documented patterns. | |||||
| Tool call authorization | Dedicated (Spring AI integration) | Via standard API | Via standard API | Via standard API | Via standard API |
| RAG pipeline authorization | Dedicated (obligation-driven query rewriting) | Via standard API | Via standard API | Documented patterns | Documented recipe |
| Human-in-the-loop | Dedicated (obligation-driven approval workflows) | Via standard API | Via standard API | Via standard API | Via standard API |
| MCP server authorization | Dedicated (FastMCP SDK, decorators) | Dedicated (schema generation, analysis server) | Via standard API | Documented patterns | Via standard API |
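Several of the decision-model rows above (obligations, advice) come together in a single SAPL policy. The fragment below is schematic: the keyword structure (policy, permit, where, obligation, advice) follows the SAPL documentation, but the attribute names and obligation payloads are invented for illustration; consult the SAPL reference for the exact grammar.

```
policy "doctors may read patient records"
permit action == "read" & resource.type == "patientRecord"
where subject.role == "DOCTOR";
obligation { "type" : "logAccess", "user" : subject.name }
advice    { "type" : "informPatient" }
```

The obligation and advice clauses are what distinguish this decision model: the PDP returns not just permit/deny but structured instructions the enforcement layer must (obligation) or should (advice) carry out.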
Integration Depth
SDK integration depth varies significantly across engines. A client library that sends HTTP requests is different from a framework integration with enforcement annotations, streaming support, or database query rewriting. This table shows what each integration actually provides.
| Framework | Enforcement | Streaming | Query Rewriting |
|---|---|---|---|
| SAPL | |||
| Spring | AOP annotations + AuthorizationManager with automatic obligation handling | Yes (streaming annotations) | R2DBC, MongoDB (deep query language integration), JPA and others (obligation-driven parameter rewriting) |
| Django | Full SDK with decorators and middleware | Yes | Obligation-driven parameter rewriting |
| FastAPI | Full SDK with decorators | Yes | Obligation-driven parameter rewriting |
| Flask | Full SDK with decorators | Yes | Obligation-driven parameter rewriting |
| NestJS | Full SDK with guards and decorators | Yes | Obligation-driven parameter rewriting |
| .NET | Full SDK with middleware | Yes | Obligation-driven parameter rewriting |
| FastMCP (Python) | Full SDK with decorators for tools, resources, prompts | No | Obligation-driven parameter rewriting |
| Cedar | |||
| Express.js | Route-level middleware with OpenAPI schema generation and starter policy generation | No | No |
| Java, Go, Rust | Evaluation library only (no framework enforcement) | No | No |
| OPA / Rego | |||
| Spring Boot | AuthorizationManager (Spring Security) | No | No |
| ASP.NET Core | OpaAuthorizationMiddleware (HTTP pipeline enforcement) | No | No |
| Kubernetes, Envoy, Terraform, Docker, Kafka | Infrastructure-level enforcement (Gatekeeper, sidecar plugins) | No | Partial evaluation (generates filter conditions) |
| Java, TypeScript, C#, Go | API client only | No | No |
| OpenFGA | |||
| Spring Boot | AOP annotations (Spring Security) | No | ListObjects / ListUsers API |
| JS, Python, Java, C#, Go | API client only | No | No |
| Cerbos | |||
| Go, Java, JS, C#, PHP, Python, Ruby, Rust | API client only | No | Prisma, SQLAlchemy, Drizzle (query plan adapters) |
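The "obligation-driven parameter rewriting" entries above share one mechanism: the decision carries a constraint, and the integration layer narrows the query before it reaches the data layer. A schematic sketch, with an invented obligation type and illustrative field names (not SAPL's actual format):

```python
# Sketch of obligation-driven parameter rewriting: an obligation in
# the decision adds mandatory filter conditions to a query before it
# reaches the database. The "constrainQuery" obligation type and its
# payload shape are illustrative, not SAPL's actual format.

def rewrite_query(filters: dict, decision: dict) -> dict:
    """Merge mandatory filter conditions from obligations into a query."""
    rewritten = dict(filters)
    for obligation in decision.get("obligations", []):
        if obligation["type"] == "constrainQuery":
            rewritten.update(obligation["conditions"])
    return rewritten

decision = {
    "decision": "PERMIT",
    "obligations": [
        {"type": "constrainQuery", "conditions": {"department": "cardiology"}}
    ],
}
print(rewrite_query({"status": "active"}, decision))
# {'status': 'active', 'department': 'cardiology'}
```

The same pattern generalizes from query parameters to ORM criteria: the deeper R2DBC/MongoDB integrations listed for SAPL inject such conditions directly into the query language rather than into a parameter dictionary.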
Methodology and Sources
Benchmark code. All benchmark code, scenario generators, and analysis tools are open source. The Cedar OOPSLA benchmark harness is at cedar-policy/cedar-examples. Our fork with the SAPL engine integration is at heutelbeck/cedar-benchmarks (branches sapl-engine for Cedar 3.0 and sapl-engine-4.10 for Cedar 4.10).
Engine documentation. Feature claims are based on official documentation: SAPL, Cedar, OPA, OpenFGA, Cerbos.
Cedar paper. Cutler et al., “Cedar: A New Language for Expressive, Fast, Safe, and Analyzable Authorization (Extended Version),” arXiv:2403.04651, OOPSLA 2024.