Rule definitions can be more expressive when using the future keywords `contains` and `if`. They are optional, and you will find examples below of defining rules without them. To follow along as-is, import the keywords:

```rego
import future.keywords.contains
import future.keywords.if
```

```rego
hostnames contains name if {
	name := sites[_].servers[_].hostname
}
```
Note that the (future) keywords `contains` and `if` are optional here. If future keywords are not available to you, you can define the same rule as follows:

```rego
hostnames[name] {
	name := sites[_].servers[_].hostname
}
```

The result:
+-------------+-----------------+
| name | hostnames[name] |
+-------------+-----------------+
| "beryllium" | "beryllium" |
| "boron" | "boron" |
| "carbon" | "carbon" |
| "helium" | "helium" |
| "hydrogen" | "hydrogen" |
| "lithium" | "lithium" |
| "nitrogen" | "nitrogen" |
| "oxygen" | "oxygen" |
+-------------+-----------------+
This example introduces a few important aspects of Rego.

First, the rule defines a set document where the contents are defined by the variable `name`. We know this rule defines a set document because the head only includes a key. All rules have the following form (where key, value, and body are all optional):

```
<name> <key>? <value>? <body>?
```

For a more formal definition of the rule syntax, see the Policy Reference document.
Second, the `sites[_].servers[_].hostname` fragment selects the `hostname` attribute from all of the objects in the `servers` collection. From reading the fragment in isolation we cannot tell whether it refers to arrays or objects. We only know that it refers to a collection of values.
Third, the `name := sites[_].servers[_].hostname` expression binds the value of the `hostname` attribute to the variable `name`, which is also declared in the head of the rule.
Generating Objects
Rules that define objects are very similar to rules that define sets.

```rego
apps_by_hostname[hostname] := app if {
	server := sites[_].servers[_]
	hostname := server.hostname
	apps[i].servers[_] == server.name
	app := apps[i].name
}
```
The rule above defines an object that maps hostnames to app names. The main difference between this rule and one which defines a set is the rule head: in addition to declaring a key, the rule head also declares a value for the document.
Incremental Definitions

A rule may be defined multiple times with the same name. When a rule is defined this way, we refer to the rule definition as incremental because each definition is additive. The document produced by incrementally defined rules is the union of the documents produced by each individual rule.

For example, we can write a rule that abstracts over our `servers` and `containers` data as `instances`:
```rego
instances contains instance if {
	server := sites[_].servers[_]
	instance := {"address": server.hostname, "name": server.name}
}

instances contains instance if {
	container := containers[_]
	instance := {"address": container.ipaddress, "name": container.name}
}
```
If the head of the rule is the same, we can chain multiple rule bodies together to obtain the same result. We don't recommend using this form anymore.

```rego
instances contains instance if {
	server := sites[_].servers[_]
	instance := {"address": server.hostname, "name": server.name}
} {
	container := containers[_]
	instance := {"address": container.ipaddress, "name": container.name}
}
```
An incrementally defined rule can be intuitively understood as `<rule-1> OR <rule-2> OR ... OR <rule-N>`.
The result:
+-----------------------------------------------+-----------------------------------------------+
| x | instances[x] |
+-----------------------------------------------+-----------------------------------------------+
| {"address":"10.0.0.1","name":"big_stallman"} | {"address":"10.0.0.1","name":"big_stallman"} |
| {"address":"10.0.0.2","name":"cranky_euclid"} | {"address":"10.0.0.2","name":"cranky_euclid"} |
| {"address":"beryllium","name":"web-1000"} | {"address":"beryllium","name":"web-1000"} |
| {"address":"boron","name":"web-1001"} | {"address":"boron","name":"web-1001"} |
| {"address":"carbon","name":"db-1000"} | {"address":"carbon","name":"db-1000"} |
| {"address":"helium","name":"web-1"} | {"address":"helium","name":"web-1"} |
| {"address":"hydrogen","name":"web-0"} | {"address":"hydrogen","name":"web-0"} |
| {"address":"lithium","name":"db-0"} | {"address":"lithium","name":"db-0"} |
| {"address":"nitrogen","name":"web-dev"} | {"address":"nitrogen","name":"web-dev"} |
| {"address":"oxygen","name":"db-dev"} | {"address":"oxygen","name":"db-dev"} |
+-----------------------------------------------+-----------------------------------------------+
Note that the (future) keywords `contains` and `if` are optional here. If future keywords are not available to you, you can define the same rule as follows:
```rego
instances[instance] {
	server := sites[_].servers[_]
	instance := {"address": server.hostname, "name": server.name}
}

instances[instance] {
	container := containers[_]
	instance := {"address": container.ipaddress, "name": container.name}
}
```
Complete Definitions

In addition to rules that partially define sets and objects, Rego also supports so-called complete definitions of any type of document. Rules provide a complete definition by omitting the key in the head. Complete definitions are commonly used for constants:
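A sketch of such constants (the `pi` value mirrors the example used later in the Packages section; `greeting` is illustrative):

```rego
pi := 3.14159
greeting := "hello"
```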
Rego allows authors to omit the body of rules. If the body is omitted, it defaults to true.
Documents produced by rules with complete definitions can only have one value at
a time. If evaluation produces multiple values for the same document, an error
will be returned.
For example:
```rego
# Define two sets of users: power users and restricted users. Accidentally
# include "bob" in both.
power_users := {"alice", "bob", "fred"}
restricted_users := {"bob", "kim"}

# Power users get 32GB memory.
max_memory := 32 if power_users[user]

# Restricted users get 4GB memory.
max_memory := 4 if restricted_users[user]
```

Error:

```
1 error occurred: module.rego:16: eval_conflict_error: complete rules must not produce multiple outputs
```
OPA returns an error in this case because the rule definitions are in conflict. The value produced by `max_memory` cannot be 32 and 4 at the same time.
The documents produced by rules with complete definitions may still be undefined:
In some cases, having an undefined result for a document is not desirable. In those cases, policies can use the
Default Keyword
to provide a fallback value.
Note that the (future) keyword `if` is optional here. If future keywords are not available to you, you can define complete rules like this:

```rego
max_memory := 32 {
	power_users[user]
}
```
Functions

Rego supports user-defined functions that can be called with the same semantics as Built-in Functions. They have access to both the data Document and the input Document.
For example, the following function will return the result of trimming the spaces from a string and then splitting it by periods.
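A sketch of such a function, using the built-ins `trim` and `split` (the name `trim_and_split` is illustrative):

```rego
trim_and_split(s) := x if {
	t := trim(s, " ") # remove leading/trailing spaces
	x := split(t, ".") # split on periods
}
```

Calling `trim_and_split(" foo.bar ")` would yield `["foo", "bar"]`.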
Function arguments may be any kind of term, including composite values that destructure the input. For example:

```rego
foo([x, {"bar": y}]) := z if {
	z := {x: y}
}
```
The following calls would produce the logical mappings given:

| Call | x | y |
|------|---|---|
| `z := foo(a)` | `a[0]` | `a[1].bar` |
| `z := foo(["5", {"bar": "hello"}])` | `"5"` | `"hello"` |
| `z := foo(["5", {"bar": [1, 2, 3, ["foo", "bar"]]}])` | `"5"` | `[1, 2, 3, ["foo", "bar"]]` |
If you need multiple outputs, write your functions so that the output is an array, object, or set containing your results. If the output term is omitted, it is equivalent to having the output term be the literal `true`. Furthermore, `if` can be used to write shorter definitions. That is, the function declarations below are equivalent:

```rego
f(x) := true if {
	x == "foo"
}

f(x) if {
	x == "foo"
}

f(x) {
	x == "foo"
}
```
Negation

To generate the content of a Virtual Document, OPA attempts to bind variables in the body of the rule such that all expressions in the rule evaluate to True.

This generates the correct result when the expressions represent assertions about what states should exist in the data stored in OPA. In some cases, you want to express that certain states should not exist in the data stored in OPA. In these cases, negation must be used.

For safety, a variable appearing in a negated expression must also appear in another non-negated equality expression in the rule. OPA will reorder expressions to ensure that negated expressions are evaluated after other non-negated expressions with the same variables. OPA will reject rules containing negated expressions that do not meet the safety criteria described above.

The simplest use of negation involves only scalar values or variables and is equivalent to complementing the operator.
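A minimal sketch of scalar negation (the rule name `t` and the values are illustrative):

```rego
t if {
	greeting := "hello"
	not greeting == "goodbye" # equivalent to greeting != "goodbye"
}
```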
Negation is required to check whether some value does not exist in a collection. That is, complementing the operator in an expression such as `p[_] == "foo"` yields `p[_] != "foo"`. However, this is not equivalent to `not p["foo"]`.
For example, we can write a rule that defines a document containing names of apps not deployed on the `"prod"` site:
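One way to sketch this rule, assuming the example data's `sites` and `apps` shape (the helper name `app_in_prod` is introduced here for illustration):

```rego
apps_not_in_prod contains name if {
	app := apps[_]
	name := app.name
	not app_in_prod(name)
}

# true if the named app has at least one server deployed on the "prod" site
app_in_prod(name) if {
	app := apps[_]
	app.name == name
	server_name := app.servers[_]
	site := sites[_]
	site.name == "prod"
	site.servers[_].name == server_name
}
```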
+-----------+------------------------+
| name | apps_not_in_prod[name] |
+-----------+------------------------+
| "mongodb" | "mongodb" |
+-----------+------------------------+
Universal Quantification (FOR ALL)

Rego allows for several ways to express universal quantification. For example, imagine you want to express a policy that says (in English):

There must be no apps named "bitcoin-miner".

The most expressive way to state this in Rego is using the `every` keyword.
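A sketch using `every` (the rule name is illustrative):

```rego
no_bitcoin_miners_using_every if {
	every app in apps {
		app.name != "bitcoin-miner"
	}
}
```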
By default, however, variables in Rego queries are existentially quantified. Consider the query:

```rego
array := ["one", "two", "three"]; array[i] == "three"
```

The query will be satisfied if there is an `i` such that the query's expressions are simultaneously satisfied.
+-----------------------+---+
| array                 | i |
+-----------------------+---+
| ["one","two","three"] | 2 |
+-----------------------+---+
Therefore, there are other ways to express the desired policy.
For this policy, you can also define a rule that finds if there exists a bitcoin-mining app (which is easy using the `some` keyword). And then you use negation to check that there is NO bitcoin-mining app. Technically, you're using two negations and an existential quantifier, which is logically the same as a universal quantifier. For example:
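The negation-based version might be sketched as follows (the rule names follow the surrounding text):

```rego
no_bitcoin_miners_using_negation if not any_bitcoin_miners

any_bitcoin_miners if {
	some app in apps
	app.name == "bitcoin-miner"
}
```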
The undefined result above is expected because we did not define a default value for `no_bitcoin_miners_using_negation`. Since the body of the rule fails to match, there is no value generated.
A common mistake is to try encoding the policy with a rule named `no_bitcoin_miners` like so:

```rego
no_bitcoin_miners if {
	app := apps[_]
	app.name != "bitcoin-miner" # THIS IS NOT CORRECT.
}
```
It becomes clear that this is incorrect when you use the `some` keyword, because the rule is true whenever there is SOME app that is not a bitcoin-miner.

The reason the rule is incorrect is that variables in Rego are existentially quantified. This means that rule bodies and queries express FOR ANY and not FOR ALL. To express FOR ALL in Rego, complement the logic in the rule body (e.g., `!=` becomes `==`) and then complement the check using negation (e.g., `no_bitcoin_miners` becomes `not any_bitcoin_miners`).

Alternatively, we can implement the same kind of logic inside a single rule using Comprehensions:
```rego
no_bitcoin_miners_using_comprehension if {
	bitcoin_miners := {app | some app in apps; app.name == "bitcoin-miner"}
	count(bitcoin_miners) == 0
}
```
Whether you use negation, comprehensions, or `every` to express FOR ALL is up to you. The `every` keyword should lend itself nicely to a rule formulation that closely follows how requirements are stated, and thus enhances your policy's readability. The comprehension version is more concise than the negation variant and does not require a helper rule, while the negation version is more verbose but a bit simpler and allows for more complex ORs.
Modules

In Rego, policies are defined inside modules. Modules consist of:

- Exactly one Package declaration.
- Zero or more Import statements.
- Zero or more Rule definitions.

Modules are typically represented in Unicode text and encoded in UTF-8. Comments begin with the `#` character and continue until the end of the line.
Packages
Packages group the rules defined in one or more modules into a particular namespace. Because rules are namespaced they can be safely shared across projects.
Modules contributing to the same package do not have to be located in the same directory.
The rules defined in a module are automatically exported. That is, they can be queried under OPA's Data API provided the appropriate package is given. For example, given the following module:

```rego
package opa.examples

pi := 3.14159
```

The `pi` document can be queried via the Data API:

```
GET https://example.com/v1/data/opa/examples/pi HTTP/1.1
```
Valid package names are variables or references that only contain string operands. For example, these are all valid package names:

```rego
package foo
package foo.bar
package foo.bar.baz
package foo["bar.baz"].qux
```

These are invalid package names:

```rego
package 1foo        # not a variable
package foo[1].bar  # contains non-string operand
```
For more details see the language Grammar.
Imports
Import statements declare dependencies that modules have on documents defined outside the package. By importing a document, the identifiers exported by that document can be referenced within the current module.
All modules contain implicit statements which import the `data` and `input` documents. Modules use the same syntax to declare dependencies on Base and Virtual Documents.
```rego
import input.method
import input.user

# allows users assigned a "dev" role to perform read-only operations.
allow if {
	method == "GET"
	input.user in data.roles["dev"]
}

# allows user catherine access on Saturday and Sunday
allow if {
	user == "catherine"
	day := time.weekday(time.now_ns())
	day in ["Saturday", "Sunday"]
}
```
Imports can include an optional `as` keyword to handle namespacing issues:

```rego
import data.servers as my_servers

http_servers contains server if {
	some server in my_servers
	"http" in server.protocols
}
```
Future Keywords

To ensure backwards-compatibility, new keywords (like `every`) are introduced slowly. In the first stage, users can opt-in to using the new keywords via a special import:

- `import future.keywords` introduces all future keywords, and
- `import future.keywords.x` only introduces the `x` keyword – see below for all known future keywords.
Using `import future.keywords` to import all future keywords means an opt-out of a safety measure: with a new version of OPA, the set of "all" future keywords can grow, and policies that worked with the previous version of OPA stop working. This cannot happen when you selectively import the future keywords as you need them.
At some point in the future, the keyword will become standard, and the import will become a no-op that can safely be removed. This should give all users ample time to update their policies, so that the new keyword will not cause clashes with existing variable names.

Note that some future keyword imports have consequences on pretty-printing: if `contains` or `if` are imported, the pretty-printer will use them as applicable when formatting the modules.

This is the list of all future keywords known to OPA:
This is the list of all future keywords known to OPA:
future.keywords.in

More expressive membership and existential quantification keyword:

```rego
deny {
	"denylisted-role" in input.roles # membership check
}
```

`in` was introduced in v0.34.0. See the keywords docs for details.
future.keywords.every

Expressive universal quantification keyword:

```rego
every role in input.roles {
	role.name in allowed
}
```

There is no need to also import `future.keywords.in`; that is implied by importing `future.keywords.every`. `every` was introduced in v0.38.0. See Every Keyword for details.
future.keywords.contains

This keyword allows more expressive rule heads for partial set rules:

```rego
deny contains msg { msg := "forbidden" }
```

`contains` was introduced in v0.42.0.

future.keywords.if

This keyword allows more expressive rule heads for complete rules:

```rego
deny if input.token != "secret"
```

`if` was introduced in v0.42.0.
Some Keyword

The `some` keyword allows queries to explicitly declare local variables. Use the `some` keyword in rules that contain unification statements or references with variable operands if variables contained in those statements are not declared using `:=`.
| Statement | Example | Variables |
|-----------|---------|-----------|
| Unification | `input.a = [["b", x], [y, "c"]]` | `x` and `y` |
| Reference with variable operands | `data.foo[i].bar[j]` | `i` and `j` |
For example, the following rule generates tuples of array indices for servers in the "west" region that contain "db" in their name. The first element in the tuple is the site index and the second element is the server index.

```rego
tuples contains [i, j] if {
	some i, j
	sites[i].region == "west"
	server := sites[i].servers[j] # note: 'server' is local because it's declared with :=
	contains(server.name, "db")
}
```
If we query for the tuples we get two results. Now suppose we add a rule named `i` to the same package:

```rego
# Define a rule called 'i'
i := 1
```
If we had not declared `i` with the `some` keyword, introducing the `i` rule above would have changed the result of `tuples` because the `i` symbol in the body would capture the global value. Try removing `some i, j` and see what happens!

The `some` keyword is not required but it's recommended to avoid situations like the one above where introduction of a rule inside a package could change behaviour of other rules.
For using the `some` keyword with iteration, see the documentation of the `in` operator.
Every Keyword

`every` is a future keyword and needs to be imported. `import future.keywords.every` introduces the `every` keyword described here. See the docs on future keywords for more information.
The `every` keyword takes an (optional) key argument, a value argument, a domain, and a block of further queries, its "body". The keyword is used to explicitly assert that its body is true for any element in the domain. It will iterate over the domain, bind its variables, and check that the body holds for those bindings. If one of the bindings does not yield a successful evaluation of the body, the overall statement is undefined. If the domain is empty, the overall statement is true.
Evaluating `every` does not introduce new bindings into the rule evaluation. Used with a key argument, the index, or property name (for objects), comes into the scope of the body evaluation:

```rego
object_domain if {
	every k, v in {"foo": "bar", "fox": "baz"} { # object domain
		startswith(k, "f")
		startswith(v, "b")
	}
}

set_domain if {
	every x in {1, 2, 3} { x != 4 } # set domain
}
```

The result:

```json
{
	"object_domain": true,
	"set_domain": true
}
```
Semantically, `every x in xs { p(x) }` is equivalent to, but shorter than, a "not-some-not" construct using a helper rule.
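Under that reading, the construct can be sketched as follows (the rule names here are illustrative):

```rego
xs_all_p if not some_x_not_p

some_x_not_p if {
	some x in xs
	not p(x)
}
```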
Negating `every` is forbidden. If you desire to express `not every x in xs { p(x) }`, please use `some x in xs; not p(x)` instead.
With Keyword

The `with` keyword allows queries to programmatically specify values nested under the input Document or the data Document, or built-in functions.

For example, given the simple authorization policy in the Imports section, we can write a query that checks whether a particular request would be allowed:

```rego
allow with input as {"user": "catherine", "method": "GET"}
	with data.roles as {"dev": ["bob"]}
	with time.weekday as "Sunday"
```
The `with` keyword acts as a modifier on expressions. A single expression is allowed to have zero or more `with` modifiers. The `with` keyword has the following syntax:

```
<expr> with <target-1> as <value-1> [with <target-2> as <value-2> [...]]
```
The `<target>`s must be references to values in the input document (or the input document itself) or data document, or references to functions (built-in or not). When applied to the `data` document, the `<target>` must not attempt to partially define virtual documents. For example, given a virtual document at path `data.foo.bar`, the compiler will generate an error if the policy attempts to replace `data.foo.bar.baz`.
The `with` keyword only affects the attached expression. Subsequent expressions will see the unmodified value. The exception to this rule is when multiple `with` keywords are in-scope like below:

```rego
outer := result if {
	result := middle with input as {"foo": 200, "bar": 300}
}
```
When `<target>` is a reference to a function, like `http.send`, then its `<value>` can be any of the following:

- a value: `with http.send as {"body": {"success": true}}`
- a reference to another function: `with http.send as mock_http_send`
- a reference to another (possibly custom) built-in function: `with custom_builtin as less_strict_custom_builtin`
- a reference to a rule that will be used as the value.
When the replacement value is a function, its arity needs to match the replaced function's arity, and the types must be compatible. Replacement functions can call the function they're replacing without causing recursion. See the following example:
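A sketch of such a replacement (the names `f`, `mock_f`, and `p` are illustrative):

```rego
f(x) := x + 1

mock_f(x) := y if {
	y := f(x) * 2 # inside the replacement, f refers to the original; no recursion
}

p := f(10) with f as mock_f
```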
Note that function replacement via `with` does not affect the evaluation of the function arguments: if `input.x` is undefined, the replacement of `concat` does not change the result of the evaluation.
Default Keyword

The `default` keyword allows policies to define a default value for documents produced by rules with Complete Definitions. The default value is used when all of the rules sharing the same name are undefined. For example:
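A minimal sketch (the conditions in the `allow` body are illustrative):

```rego
default allow := false

allow if {
	input.user == "bob"
	input.method == "GET"
}
```

With this default, querying `allow` for any other input yields `false` rather than undefined.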
Without the default definition, the `allow` document would simply be undefined for the same input.

When the `default` keyword is used, the rule syntax is restricted to:

```
default <name> := <term>
```

The term may be any scalar, composite, or comprehension value but it may not be a variable or reference. If the value is a composite then it may not contain variables or references. Comprehensions however may, as the result of a comprehension is never undefined.
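These sketches illustrate the restriction (the names are illustrative):

```rego
default a := 1                    # scalar: allowed
default b := {"x": 1}             # composite without variables: allowed
default c := [x | x := input[_]]  # comprehension: allowed, never undefined
# default d := input.x            # reference: NOT allowed
```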
Similar to rules, the `default` keyword can be applied to functions as well. For example:

```rego
default clamp_positive(_) := 0

clamp_positive(x) := x if {
	x > 0
}
```
When `clamp_positive` is queried, the return value will be either the argument provided to the function or `0`.

The value of a `default` function follows the same conditions as that of a `default` rule. In addition, a `default` function satisfies the following properties:

- same arity as other functions with the same name
- arguments should only be plain variables, i.e. no composite values
- argument names should not be repeated
Else Keyword

The `else` keyword is a basic control flow construct that gives you control over rule evaluation order. Rules grouped together with the `else` keyword are evaluated until a match is found. Once a match is found, rule evaluation does not proceed to rules further in the chain.

The `else` keyword is useful if you are porting policies into Rego from an order-sensitive system like IPTables.

```rego
authorize := "allow" if {
	input.user == "superuser" # allow 'superuser' to perform any operation.
} else := "deny" if {
	input.path[0] == "admin" # disallow 'admin' operations...
	input.source_network == "external" # from external networks.
} # ... more rules
```
The `else` keyword may be used repeatedly on the same rule and there is no limit imposed on the number of `else` clauses on a rule.
Operators

Membership and iteration: in

To ensure backwards-compatibility, new keywords (like `in`) are introduced slowly. In the first stage, users can opt-in to using the new keywords via a special import: `import future.keywords.in` introduces the `in` keyword described here. See the docs on future keywords for more information.
The membership operator `in` lets you check if an element is part of a collection (array, set, or object). It always evaluates to `true` or `false`:

```rego
p := [x, y, z] if {
	x := 3 in [1, 2, 3]            # array
	y := 3 in {1, 2, 3}            # set
	z := 3 in {"foo": 1, "bar": 3} # object
}
```
When providing two arguments on the left-hand side of the `in` operator, and an object or an array on the right-hand side, the first argument is taken to be the key (object) or index (array), respectively:

```rego
p := [x, y] if {
	x := "foo", "bar" in {"foo": "bar"}    # key, val with object
	y := 2, "baz" in ["foo", "bar", "baz"] # key, val with array
}
```
Note that in list contexts, like set or array definitions and function arguments, parentheses are required to use the form with two left-hand side arguments.
Combined with `not`, the operator can be handy when asserting that an element is not a member of an array:

```rego
deny if not "admin" in input.user.roles

test_deny if {
	deny with input.user.roles as ["operator", "user"]
}
```
Note that expressions using the `in` operator always return `true` or `false`, even when called with non-collection arguments.
Equality: Assignment, Comparison, and Unification

Rego supports three kinds of equality: assignment (`:=`), comparison (`==`), and unification (`=`). We recommend using assignment (`:=`) and comparison (`==`) whenever possible for policies that are easier to read and write.
Assignment (`:=`)

The assignment operator (`:=`) is used to assign values to variables. Variables assigned inside a rule are locally scoped to that rule and shadow global variables.

```rego
p if {
	x := 1   # declare local variable 'x' and assign value 1
	x != 100 # true because 'x' refers to local variable
}
```
Assigned variables are not allowed to appear before the assignment in the query. For example, the following policy will not compile:

```rego
p if {
	y > x  # compiler error: x and y appear before their assignments
	y := 41
	x := 42
}
```
Best Practices for Equality

Here is a comparison of the three forms of equality:

| Equality | Applicable | Compiler Errors | Use Case |
|----------|------------|---------------------------|-----------------|
| `:=` | Everywhere | Var already assigned | Assign variable |
| `==` | Everywhere | Var not assigned | Compare values |
| `=` | Everywhere | Values cannot be computed | Express query |
Best practice is to use assignment
:=
and comparison
==
wherever possible. The additional compiler checks help avoid errors when writing policy, and the additional syntax helps make the intent clearer when reading policy.
Under the hood, `:=` and `==` are syntactic sugar for `=`, local variable creation, and additional compiler checks.
Comparison Operators

The following comparison operators are supported:

```rego
a == b # `a` is equal to `b`.
a != b # `a` is not equal to `b`.
a < b  # `a` is less than `b`.
a <= b # `a` is less than or equal to `b`.
a > b  # `a` is greater than `b`.
a >= b # `a` is greater than or equal to `b`.
```
None of these operators bind variables contained
in the expression. As a result, if either operand is a variable, the variable
must appear in another expression in the same rule that would cause the
variable to be bound, i.e., an equality expression or the target position of
a built-in function.
Built-in Functions
In some cases, rules must perform simple arithmetic, aggregation, and so on.
Rego provides a number of built-in functions (or “built-ins”) for performing
these tasks.
Built-ins can be easily recognized by their syntax. All built-ins have the following form:

```
<name>(<arg-1>, <arg-2>, ..., <arg-n>)
```

Built-ins usually take one or more input values and produce one output value. Unless stated otherwise, all built-ins accept values or variables as output arguments.
If a built-in function is invoked with a variable as input, the variable must be safe, i.e., it must be assigned elsewhere in the query.

Built-ins can include "." characters in the name. This allows them to be namespaced. If you are adding custom built-ins to OPA, consider namespacing them to avoid naming conflicts, e.g., `org.example.special_func`.
See the Policy Reference document for details on each built-in function.
Errors
By default, built-in function calls that encounter runtime errors evaluate to undefined (which can usually be treated as `false`) and do not halt policy evaluation. This ensures that built-in functions can be called with invalid inputs without causing the entire policy to stop evaluating.

In most cases, policies do not have to implement any kind of error handling logic. If error handling is required, the built-in function call can be negated to test for undefined. For example:
```rego
allow if {
	io.jwt.verify_hs256(input.token, "secret")
	[_, payload, _] := io.jwt.decode(input.token)
	payload.role == "admin"
}

reason contains "invalid JWT supplied as input" if {
	not io.jwt.decode(input.token)
}
```
If you wish to disable this behaviour and instead have built-in function call errors treated as exceptions that halt policy evaluation, enable "strict built-in errors" in the caller:

| API | Flag |
|-----|------|
| `POST v1/data` (HTTP) | `strict-builtin-errors` query parameter |
| `GET v1/data` (HTTP) | `strict-builtin-errors` query parameter |
| `opa eval` (CLI) | `--strict-builtin-errors` |
| `opa run` (REPL) | `> strict-builtin-errors` |
| `rego` Go module | `rego.StrictBuiltinErrors(true)` option |
| Wasm | Not Available |
Example Data
The rules below define the content of documents describing a simplistic deployment environment. These documents are referenced in other sections above.
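The original data block did not survive extraction; the following reconstruction is consistent with the result tables shown above (the hostnames set, the instances set, and the `apps_not_in_prod` result), but treat the exact site grouping and region values as assumptions:

```rego
sites := [
	{
		"region": "east",
		"name": "prod",
		"servers": [
			{"name": "web-0", "hostname": "hydrogen"},
			{"name": "web-1", "hostname": "helium"},
			{"name": "db-0", "hostname": "lithium"},
		],
	},
	{
		"region": "west",
		"name": "smoke",
		"servers": [
			{"name": "web-1000", "hostname": "beryllium"},
			{"name": "web-1001", "hostname": "boron"},
			{"name": "db-1000", "hostname": "carbon"},
		],
	},
	{
		"region": "west",
		"name": "dev",
		"servers": [
			{"name": "web-dev", "hostname": "nitrogen"},
			{"name": "db-dev", "hostname": "oxygen"},
		],
	},
]

apps := [
	{"name": "web", "servers": ["web-0", "web-1", "web-1000", "web-1001", "web-dev"]},
	{"name": "mysql", "servers": ["db-0", "db-1000"]},
	{"name": "mongodb", "servers": ["db-dev"]},
]

containers := [
	{"image": "redis", "ipaddress": "10.0.0.1", "name": "big_stallman"},
	{"image": "nginx", "ipaddress": "10.0.0.2", "name": "cranky_euclid"},
]
```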
Metadata

```rego
# METADATA
# description: A rule that determines if x is allowed.
# authors:
# - John Doe <john@example.com>
# entrypoint: true
allow {
}
```
Annotations are grouped within a metadata block, and must be specified as YAML within a comment block that must start with `# METADATA`. Also, every line in the comment block containing the annotation must start at Column 1 in the module/file, or otherwise, they will be ignored.
OPA will attempt to parse the YAML document in comments following the initial `# METADATA` comment. If the YAML document cannot be parsed, OPA will return an error. If you need to include additional comments between the comment block and the next statement, include a blank line immediately after the comment block containing the YAML document. This tells OPA that the comment block containing the YAML document is finished.
Annotations

| Name | Type | Description |
|------|------|-------------|
| scope | string; one of `package`, `rule`, `document`, `subpackages` | The scope on which the `schemas` annotation is applied. Read more here. |
| title | string | A human-readable name for the annotation target. Read more here. |
| description | string | A description of the annotation target. Read more here. |
| related_resources | list of URLs | A list of URLs pointing to related resources/documentation. Read more here. |
| authors | list of strings | A list of authors for the annotation target. Read more here. |
| organizations | list of strings | A list of organizations related to the annotation target. Read more here. |
| schemas | list of object | A list of associations between value paths and schema definitions. Read more here. |
| entrypoint | boolean | Whether or not the annotation target is to be used as a policy entrypoint. Read more here. |
| custom | mapping of arbitrary data | A custom mapping of named parameters holding arbitrary data. Read more here. |
Scope

Annotations can be defined at the rule or package level. The `scope` annotation in a metadata block determines how that metadata block will be applied. If the `scope` field is omitted, it defaults to the scope for the statement that immediately follows the annotation. The `scope` values that are currently supported are:

- `rule` - applies to the individual rule statement (within the same file). Default, when metadata block precedes rule.
- `document` - applies to all of the rules with the same name in the same package (across multiple files)
- `package` - applies to all of the rules in the package (across multiple files). Default, when metadata block precedes package.
- `subpackages` - applies to all of the rules in the package and all subpackages (recursively, across multiple files)
Since the `document` scope annotation applies to all rules with the same name in the same package, and the `package` and `subpackages` scope annotations apply to all packages with a matching path, metadata blocks with these scopes are applied over all files with applicable package- and rule paths. As there is no ordering across files in the same package, the `document`, `package`, and `subpackages` scope annotations can only be specified once per path. The `document` scope annotation can be applied to any rule in the set (i.e., ordering does not matter.)
Example

```rego
# METADATA
# scope: document
# description: A set of rules that determines if x is allowed.

# METADATA
# title: Allow Ones
allow {
	x == 1
}

# METADATA
# title: Allow Twos
allow {
	x == 2
}
```
Title

The `title` annotation is a string value giving a human-readable name to the annotation target.
Example
allow {
}
Related Resources

The `related_resources` annotation is a list of related-resource entries, where each links to some related external resource, such as RFCs and other reading material. A related-resource entry can either be an object or a short-form string holding a single URL. When a related-resource entry is presented as an object, it has two fields:
- `ref`: a URL pointing to the resource (required).
- `description`: a text describing the resource.

When a related-resource entry is presented as a string, it needs to be a valid URL.
Examples
allow {
}
Authors
The
authors
annotation is a list of author entries, where each entry denotes an
author
.
An
author
entry can either be an object or a short-form string.
When an
author
entry is presented as an object, it has two fields:
- name: the name of the author
- email: the email of the author
At least one of the above fields is required for a valid author entry.
When an author entry is presented as a string, it has the format { name } [ "<" email ">" ], where the name of the author is a sequence of whitespace-separated words. Optionally, the last word may represent an email, if enclosed with <>.
Examples
# METADATA
# authors:
# - John Doe <john@example.com>
# - name: Jane Doe
#   email: jane@example.com
allow {
	true
}
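The short-form parsing rule above can be sketched as a small Go function. This is an illustrative re-implementation of the described format, not OPA's actual parser:

```go
package main

import (
	"fmt"
	"strings"
)

// parseAuthor splits a short-form author string of the shape
// `{ name } [ "<" email ">" ]`: whitespace-separated words form the
// name, and an optional trailing word enclosed in <> is the email.
// Illustrative sketch; not OPA's internal implementation.
func parseAuthor(s string) (name, email string) {
	words := strings.Fields(s)
	if n := len(words); n > 0 {
		last := words[n-1]
		if strings.HasPrefix(last, "<") && strings.HasSuffix(last, ">") {
			email = strings.TrimSuffix(strings.TrimPrefix(last, "<"), ">")
			words = words[:n-1]
		}
	}
	return strings.Join(words, " "), email
}

func main() {
	name, email := parseAuthor("John Doe <john@example.com>")
	fmt.Printf("%s|%s\n", name, email) // John Doe|john@example.com
}
```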
Schemas
The
schemas
annotation is a list of key value pairs, associating schemas to data values.
In-depth information on this topic can be found
here
.
Schema files can be referenced by path, where each path starts with the
schema
namespace, and trailing components specify
the path of the schema file (sans file-ending) relative to the root directory specified by the
--schema
flag on applicable commands.
If the
--schema
flag is not present, referenced schemas are ignored during type checking.
# METADATA
# schemas:
# - input: schema.input
# - data.acl: schema["acl-schema"]
allow {
	access := data.acl["alice"]
	access[_] == input.operation
}
Schema definitions can be inlined by specifying the schema structure as a YAML or JSON map.
Inlined schemas are always used to inform type checking for the
eval
,
check
, and
test
commands;
in contrast to
by-reference schema annotations
, which require the
--schema
flag to be present in order to be evaluated.
# METADATA
# schemas:
# - input: {type: object, properties: {x: {type: number}}}
allow {
	input.x == 42
}
Entrypoint
The
entrypoint
annotation is a boolean used to mark rules and packages that should be used as entrypoints for a policy.
This value is false by default, and can only be used at
rule
or
package
scope.
The
build
and
eval
CLI commands will automatically pick up annotated entrypoints; you do not have to specify them with
--entrypoint
.
Unless the
--prune-unused
flag is used, any rule transitively referring to a
package or rule declared as an entrypoint will also be enumerated as an entrypoint.
Custom
The
custom
annotation is a mapping of user-defined data, mapping string keys to arbitrarily typed values.
Example
# METADATA
# custom:
#   severity: MEDIUM
allow {
	true
}
More examples and information can be found in the Rego policy reference.
Inspect command
Annotations can be listed through the inspect command by using the -a flag:
opa inspect -a <path>
Go API
The
ast.AnnotationSet
is a collection of all
ast.Annotations
declared in a set of modules.
An
ast.AnnotationSet
can be created from a slice of compiled modules:
var modules []*ast.Module
as, err := ast.BuildAnnotationSet(modules)
if err != nil {
	// Handle error.
}
or can be retrieved from an
ast.Compiler
instance:
var modules []*ast.Module
compiler := ast.NewCompiler()
compiler.Compile(modules)
as := compiler.GetAnnotationSet()
The
ast.AnnotationSet
can be flattened into a slice of
ast.AnnotationsRef
, which is a complete, sorted list of all
annotations, grouped by the path and location of their targeted package or -rule.
flattened := as.Flatten()
for _, entry := range flattened {
	fmt.Printf("%v at %v has annotations %v\n",
		entry.Path,
		entry.Location,
		entry.Annotations)
}
// Output:
// data.foo at foo.rego:5 has annotations {"scope":"subpackages","organizations":["Acme Corp."]}
// data.foo.bar at mod:3 has annotations {"scope":"package","description":"A couple of useful rules"}
// data.foo.bar.p at mod:7 has annotations {"scope":"rule","title":"My Rule P"}
// For modules:
// # METADATA
// # scope: subpackages
// # organizations:
// # - Acme Corp.
// package foo
// ---
// # METADATA
// # description: A couple of useful rules
// package foo.bar
// # METADATA
// # title: My Rule P
// p := 7
Given an
ast.Rule
, the
ast.AnnotationSet
can return the chain of annotations declared for that rule, and its path ancestry.
The returned slice is ordered starting with the annotations for the rule, going outward to the farthest node with declared annotations
in the rule’s path ancestry.
var rule *ast.Rule
chain := as.Chain(rule)
for _, link := range chain {
	fmt.Printf("link at %v has annotations %v\n",
		link.Path,
		link.Annotations)
}
// Output:
// data.foo.bar.p at mod:7 has annotations {"scope":"rule","title":"My Rule P"}
// data.foo.bar at mod:3 has annotations {"scope":"package","description":"A couple of useful rules"}
// data.foo at foo.rego:5 has annotations {"scope":"subpackages","organizations":["Acme Corp."]}
// For modules:
// # METADATA
// # scope: subpackages
// # organizations:
// # - Acme Corp.
// package foo
// ---
// # METADATA
// # description: A couple of useful rules
// package foo.bar
// # METADATA
// # title: My Rule P
// p := 7
Schema
Using schemas to enhance the Rego type checker
You can provide one or more input schema files and/or data schema files to
opa eval
to improve static type checking and get more precise error reports as you develop Rego code.
The
-s
flag can be used to upload schemas for input and data documents in JSON Schema format. You can either load a single JSON schema file for the input document or directory of schema files.
-s, --schema string set schema file path or directory path
Passing a single file with -s
When a single file is passed, it is a schema file associated with the input document globally. This means that for all rules in all packages, the
input
has a type derived from that schema. There is no constraint on the name of the file, it could be anything.
Example:
opa eval data.envoy.authz.allow -i opa-schema-examples/envoy/input.json -d opa-schema-examples/envoy/policy.rego -s opa-schema-examples/envoy/schemas/my-schema.json
Passing a directory with -s
When a directory path is passed, annotations will be used in the code to indicate what expressions map to what schemas (see below).
Both input schema files and data schema files can be provided in the same directory, with different names. The directory of schemas may have any sub-directories. Notice that when a directory is passed the input document does not have a schema associated with it globally. This must also
be indicated via an annotation.
Example:
opa eval data.kubernetes.admission -i opa-schema-examples/kubernetes/input.json -d opa-schema-examples/kubernetes/policy.rego -s opa-schema-examples/kubernetes/schemas
Schemas can also be provided for policy and data files loaded via
opa eval --bundle
Example:
opa eval data.kubernetes.admission -i opa-schema-examples/kubernetes/input.json -b opa-schema-examples/bundle.tar.gz -s opa-schema-examples/kubernetes/schemas
Samples provided at:
https://github.com/aavarghese/opa-schema-examples/
Usage scenario with a single schema file
Consider the following Rego code, which assumes as input a Kubernetes admission review. For resources that are Pods, it checks that the image name
starts with a specific prefix.
pod.rego
package kubernetes.admission
deny[msg] {
	input.request.kind.kinds == "Pod"
	image := input.request.object.spec.containers[_].image
	not startswith(image, "hooli.com/")
	msg := sprintf("image '%v' comes from untrusted registry", [image])
}
Notice that this code has a typo in it:
input.request.kind.kinds
is undefined and should have been
input.request.kind.kind
.
Consider the following input document:
input.json
{
  "kind": "AdmissionReview",
  "request": {
    "kind": {
      "kind": "Pod",
      "version": "v1"
    },
    "object": {
      "metadata": {
        "name": "myapp"
      },
      "spec": {
        "containers": [
          {
            "image": "nginx",
            "name": "nginx-frontend"
          },
          {
            "image": "mysql",
            "name": "mysql-backend"
          }
        ]
      }
    }
  }
}
Clearly there are 2 image names that are in violation of the policy. However, when we evaluate the erroneous Rego code against this input we obtain:
% opa eval data.kubernetes.admission --format pretty -i opa-schema-examples/kubernetes/input.json -d opa-schema-examples/kubernetes/policy.rego
The empty value returned is indistinguishable from a situation where the input did not violate the policy. This error is therefore causing the policy not to catch violating inputs appropriately.
If we fix the Rego code and change
input.request.kind.kinds
to
input.request.kind.kind
, then we obtain the expected result:
[
	"image 'nginx' comes from untrusted registry",
	"image 'mysql' comes from untrusted registry"
]
With this feature, it is possible to pass a schema to
opa eval
, written in JSON Schema. Consider the admission review schema provided at:
https://github.com/aavarghese/opa-schema-examples/blob/main/kubernetes/schemas/input.json
We can pass this schema to the evaluator as follows:
% opa eval data.kubernetes.admission --format pretty -i opa-schema-examples/kubernetes/input.json -d opa-schema-examples/kubernetes/policy.rego -s opa-schema-examples/kubernetes/schemas/input.json
With the erroneous Rego code, we now obtain the following type error:
1 error occurred: ../../aavarghese/opa-schema-examples/kubernetes/policy.rego:5: rego_type_error: undefined ref: input.request.kind.kinds
input.request.kind.kinds
have: "kinds"
want (one of): ["kind" "version"]
This indicates the error to the Rego developer right away, without the need to observe the results of runs on actual data, thereby improving productivity.
Schema annotations
When passing a directory of schemas to
opa eval
, schema annotations become handy to associate a Rego expression with a corresponding schema within a given scope:
# METADATA
# schemas:
# - <path-to-value>:<path-to-schema>
# ...
# - <path-to-value>:<path-to-schema>
allow {
	...
}
See the
annotations documentation
for general information relating to annotations.
The
schemas
field specifies an array associating schemas to data values. Paths must start with
input
or
data
(i.e., they must be fully-qualified.)
The type checker derives a Rego Object type for the schema and an appropriate entry is added to the type environment before type checking the rule. This entry is removed upon exit from the rule.
Example:
Consider the following Rego code which checks if an operation is allowed by a user, given an ACL data document:
package policy
import data.acl
default allow := false
# METADATA
# schemas:
# - input: schema.input
# - data.acl: schema["acl-schema"]
allow {
	access := data.acl["alice"]
	access[_] == input.operation
}

allow {
	access := data.acl["bob"]
	access[_] == input.operation
}
Consider a directory named
mySchemasDir
with the following structure, provided via
opa eval --schema opa-schema-examples/mySchemasDir
mySchemasDir/
├── input.json
└── acl-schema.json
For actual code samples, see
https://github.com/aavarghese/opa-schema-examples/tree/main/acl
.
In the first
allow
rule above, the input document has the schema
input.json
, and
data.acl
has the schema
acl-schema.json
. Note that we use the relative path inside the
mySchemasDir
directory to identify a schema, omit the
.json
suffix, and use the global variable
schema
to stand for the top-level of the directory.
Schemas in annotations are proper Rego references. So
schema.input
is also valid, but
schema.acl-schema
is not.
If we had the expression
data.acl.foo
in this rule, it would result in a type error because the schema contained in
acl-schema.json
only defines object properties
"alice"
and
"bob"
in the ACL data document.
On the other hand, this annotation does not constrain other paths under
data
. What it says is that we know the type of
data.acl
statically, but not that of other paths. So for example,
data.foo
is not a type error and gets assigned the type
Any
.
Note that the second
allow
rule doesn’t have a METADATA comment block attached to it, and hence will not be type checked with any schemas.
On a different note, schema annotations can also be added to policy files that are part of a bundle loaded via opa eval --bundle along with the --schema parameter for type checking a set of *.rego policy files.
The scope of the schema annotation can be controlled through the scope annotation.
In case of overlap, schema annotations override each other as follows:
- rule overrides document
- document overrides package
- package overrides subpackages
The following sections explain how the different scopes affect
schema
annotation
overriding for type checking.
Rule and Document Scopes
In the example above, the second rule does not include an annotation so type
checking of the second rule would not take schemas into account. To enable type
checking on the second (or other rules in the same file) we could specify the
annotation multiple times:
# METADATA
# scope: rule
# schemas:
# - input: schema.input
# - data.acl: schema["acl-schema"]
allow {
	access := data.acl["alice"]
	access[_] == input.operation
}

# METADATA
# scope: rule
# schemas:
# - input: schema.input
# - data.acl: schema["acl-schema"]
allow {
	access := data.acl["bob"]
	access[_] == input.operation
}
This is obviously redundant and error-prone. To avoid this problem, we can
define the annotation once on a rule with scope
document
:
# METADATA
# scope: document
# schemas:
# - input: schema.input
# - data.acl: schema["acl-schema"]
allow {
	access := data.acl["alice"]
	access[_] == input.operation
}

allow {
	access := data.acl["bob"]
	access[_] == input.operation
}
In this example, the annotation with document scope has the same effect as the two rule-scoped annotations in the previous example.
Package and Subpackage Scopes
Annotations can be defined at the
package
level and then applied to all rules
within the package:
# METADATA
# scope: package
# schemas:
# - input: schema.input
# - data.acl: schema["acl-schema"]
package example

allow {
	access := data.acl["alice"]
	access[_] == input.operation
}

allow {
	access := data.acl["bob"]
	access[_] == input.operation
}
package
scoped schema annotations are useful when all rules in the same
package operate on the same input structure. In some cases, when policies are
organized into many sub-packages, it is useful to declare schemas recursively
for them using the
subpackages
scope. For example:
# METADATA
# scope: subpackages
# schemas:
# - input: schema.input
package kubernetes.admission
This snippet would declare the top-level schema for
input
for the
kubernetes.admission
package as well as all subpackages. If admission control
rules were defined inside packages like
kubernetes.admission.workloads.pods
,
they would be able to pick up that one schema declaration.
Overriding
JSON Schemas are often incomplete specifications of the format of data. For example, a Kubernetes Admission Review resource has a field
object
which can contain any other Kubernetes resource. A schema for Admission Review has a generic type
object
for that field that has no further specification. To allow more precise type checking in such cases, we support overriding existing schemas.
Consider the following example:
package kubernetes.admission
# METADATA
# scope: rule
# schemas:
# - input: schema.input
# - input.request.object: schema.kubernetes.pod
deny[msg] {
	input.request.kind.kind == "Pod"
	image := input.request.object.spec.containers[_].image
	not startswith(image, "hooli.com/")
	msg := sprintf("image '%v' comes from untrusted registry", [image])
}
In this example, the
input
is associated with an Admission Review schema, and furthermore
input.request.object
is set to have the schema of a Kubernetes Pod. In effect, the second schema annotation overrides the first one. Overriding is a schema transformation feature and combines existing schemas. In this case, we are combining the Admission Review schema with that of a Pod.
Notice that the order of schema annotations matters for overriding to work correctly.
Given a schema annotation, if a prefix of the path already has a type in the environment, then the annotation has the effect of merging and overriding the existing type with the type derived from the schema. In the example above, the prefix
input
already has a type in the type environment, so the second annotation overrides this existing type. Overriding affects the type of the longest prefix that already has a type. If no such prefix exists, the new path and type are added to the type environment for the scope of the rule.
In general, consider the existing Rego type:
object{a: object{b: object{c: C, d: D, e: E}}}
If we override this type with the following type (derived from a schema annotation of the form
a.b.e: schema-for-E1
):
object{a: object{b: object{e: E1}}}
It results in the following type:
object{a: object{b: object{c: C, d: D, e: E1}}}
Notice that
b
still has its fields
c
and
d
, so overriding has a merging effect as well. Moreover, the type of expression
a.b.e
is now
E1
instead of
E
.
We can also use overriding to add new paths to an existing type, so if we override the initial type with the following:
object{a: object{b: object{f: F}}}
we obtain the following type:
object{a: object{b: object{c: C, d: D, e: E, f: F}}}
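The merge-and-override behavior described above can be sketched with nested maps standing in for object types. This is an illustration of the semantics only, not OPA's actual type representation:

```go
package main

import "fmt"

// mergeOverride sketches overriding for object types: fields present in
// the override replace those in the base, other fields are preserved,
// and nested objects merge recursively. Types are modeled as nested
// maps; illustrative only, not OPA's implementation.
func mergeOverride(base, override map[string]any) map[string]any {
	out := map[string]any{}
	for k, v := range base {
		out[k] = v
	}
	for k, v := range override {
		bm, bok := out[k].(map[string]any)
		om, ook := v.(map[string]any)
		if bok && ook {
			out[k] = mergeOverride(bm, om) // both objects: merge recursively
		} else {
			out[k] = v // override wins
		}
	}
	return out
}

func main() {
	// object{a: object{b: object{c: C, d: D, e: E}}}
	base := map[string]any{"a": map[string]any{"b": map[string]any{"c": "C", "d": "D", "e": "E"}}}
	// override derived from an annotation of the form a.b.e: schema-for-E1
	override := map[string]any{"a": map[string]any{"b": map[string]any{"e": "E1"}}}
	fmt.Println(mergeOverride(base, override)) // b keeps c and d; e becomes E1
}
```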
We use schemas to enhance the type checking capability of OPA, and not to validate the input and data documents against desired schemas. This burden is still on the user and care must be taken when using overriding to ensure that the input and data provided are sensible and validated against the transformed schemas.
It is sometimes useful to have different input schemas for different rules in the same package. This can be achieved as illustrated by the following example:
package policy
import data.acl
default allow := false
# METADATA
# scope: rule
# schemas:
# - input: schema["input"]
# - data.acl: schema["acl-schema"]
allow {
	access := data.acl[input.user]
	access[_] == input.operation
}

# METADATA
# scope: rule
# schemas:
# - input: schema["whocan-input-schema"]
# - data.acl: schema["acl-schema"]
whocan[user] {
	access := acl[user]
	access[_] == input.operation
}
The directory that is passed to
opa eval
is the following:
mySchemasDir/
├── input.json
├── acl-schema.json
└── whocan-input-schema.json
In this example, we associate the schema
input.json
with the input document in the rule
allow
, and the schema
whocan-input-schema.json
with the input document for the rule
whocan
.
Translating schemas to Rego types and dynamicity
Rego has a gradual type system meaning that types can be partially known statically. For example, an object could have certain fields whose types are known and others that are unknown statically. OPA type checks what it knows statically and leaves the unknown parts to be type checked at runtime. An OPA object type has two parts: the static part with the type information known statically, and a dynamic part, which can be nil (meaning everything is known statically) or non-nil and indicating what is unknown.
When we derive a type from a schema, we try to match what is known and unknown in the schema. For example, an
object
that has no specified fields becomes the Rego type
Object{Any: Any}
. However, currently
additionalProperties
and
additionalItems
are ignored. When a schema is fully specified, we derive a type with its dynamic part set to nil, meaning that we take a strict interpretation in order to get the most out of static type checking. This is the case even if
additionalProperties
is set to
true
in the schema. In the future, we will take this feature into account when deriving Rego types.
When overriding existing types, the dynamicity of the overridden prefix is preserved.
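The two-part object type described above (a static part plus an optional dynamic part) can be sketched in a few lines of Go. The struct and lookup function are assumptions for illustration; OPA's real representation lives in its types package:

```go
package main

import "fmt"

// ObjectType sketches a gradual object type: Static holds the fields
// whose types are known at compile time, and Dynamic models the type of
// unknown fields ("" stands in for a nil dynamic part, i.e. fully
// static). Illustrative only.
type ObjectType struct {
	Static  map[string]string
	Dynamic string
}

// fieldType resolves a field access against the type: statically known
// fields type-check now; unknown fields fall back to the dynamic part,
// or produce a type error when the type is fully static.
func fieldType(t ObjectType, field string) (string, error) {
	if ft, ok := t.Static[field]; ok {
		return ft, nil
	}
	if t.Dynamic != "" {
		return t.Dynamic, nil // deferred to runtime checking
	}
	return "", fmt.Errorf("undefined ref: %s", field)
}

func main() {
	// A strict (fully static) type rejects unknown fields, which is how
	// the typo `kinds` above becomes a compile-time error.
	strict := ObjectType{Static: map[string]string{"kind": "string"}}
	if _, err := fieldType(strict, "kinds"); err != nil {
		fmt.Println(err) // undefined ref: kinds
	}
}
```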
Supporting JSON Schema composition keywords
JSON Schema provides keywords such as
anyOf
and
allOf
to structure a complex schema. For
anyOf
, at least one of the subschemas must be true, and for
allOf
, all subschemas must be true. The type checker is able to identify such keywords and derive a more robust Rego type through more complex schemas.
anyOf
Specifically, anyOf acts as a Rego Or type where at least one (can be more than one) of the subschemas is true. Consider the following Rego and schema file containing anyOf:
policy-anyOf.rego
package kubernetes.admission
# METADATA
# scope: rule
# schemas:
# - input: schema["input-anyOf"]
deny {
	input.request.servers.versions == "Pod"
}
input-anyOf.json
{
  "$schema": "http://json-schema.org/draft-07/schema",
  "type": "object",
  "properties": {
    "kind": {"type": "string"},
    "request": {
      "type": "object",
      "anyOf": [
        {
          "properties": {
            "kind": {
              "type": "object",
              "properties": {
                "kind": {"type": "string"},
                "version": {"type": "string"}
              }
            }
          }
        },
        {
          "properties": {
            "server": {
              "type": "object",
              "properties": {
                "accessNum": {"type": "integer"},
                "version": {"type": "string"}
              }
            }
          }
        }
      ]
    }
  }
}
We can see that request is an object with two options as indicated by the choices under anyOf:
- contains property kind, which has properties kind and version
- contains property server, which has properties accessNum and version
The type checker finds the first error in the Rego code, suggesting that
servers
should be either
kind
or
server
.
input.request.servers.versions
have: "servers"
want (one of): ["kind" "server"]
Once this is fixed, the second typo is highlighted, prompting the user to choose between
accessNum
and
version
.
input.request.server.versions
have: "versions"
want (one of): ["accessNum" "version"]
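The `want (one of)` candidates for an Or type can be sketched as the union of the property names across the anyOf branches. This is an illustrative sketch of that idea, not OPA's actual type checker:

```go
package main

import (
	"fmt"
	"sort"
)

// unionProperties collects, in sorted order, the property names allowed
// by any branch of an anyOf schema; these are the candidates a checker
// could report in a "want (one of)" error. Illustrative sketch only.
func unionProperties(branches []map[string]any) []string {
	seen := map[string]bool{}
	for _, b := range branches {
		if props, ok := b["properties"].(map[string]any); ok {
			for name := range props {
				seen[name] = true
			}
		}
	}
	names := make([]string, 0, len(seen))
	for n := range seen {
		names = append(names, n)
	}
	sort.Strings(names)
	return names
}

func main() {
	// The two anyOf branches from the schema above, properties only.
	branches := []map[string]any{
		{"properties": map[string]any{"kind": map[string]any{}}},
		{"properties": map[string]any{"server": map[string]any{}}},
	}
	fmt.Println(unionProperties(branches)) // [kind server]
}
```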
allOf
Specifically, the allOf keyword implies that all conditions under allOf within a schema must be met by the given data. allOf is implemented by merging the types from all of the JSON subschemas listed under allOf before parsing the result to convert it to a Rego type. Merging of the JSON subschemas essentially combines the passed-in subschemas based on what types they contain. Consider the following Rego and schema file containing allOf:
policy-allOf.rego
package kubernetes.admission
# METADATA
# scope: rule
# schemas:
# - input: schema["input-allof"]
deny {
	input.request.servers.versions == "Pod"
}
input-allof.json
{
  "$schema": "http://json-schema.org/draft-07/schema",
  "type": "object",
  "properties": {
    "kind": {"type": "string"},
    "request": {
      "type": "object",
      "allOf": [
        {
          "properties": {
            "kind": {
              "type": "object",
              "properties": {
                "kind": {"type": "string"},
                "version": {"type": "string"}
              }
            }
          }
        },
        {
          "properties": {
            "server": {
              "type": "object",
              "properties": {
                "accessNum": {"type": "integer"},
                "version": {"type": "string"}
              }
            }
          }
        }
      ]
    }
  }
}
We can see that request is an object with properties as indicated by the elements listed under allOf:
- contains property kind, which has properties kind and version
- contains property server, which has properties accessNum and version
The type checker finds the first error in the Rego code, suggesting that
servers
should be
server
.
input.request.servers.versions
have: "servers"
want (one of): ["kind" "server"]
Once this is fixed, the second typo is highlighted, informing the user that
versions
should be one of
accessNum
or
version
.
input.request.server.versions
have: "versions"
want (one of): ["accessNum" "version"]
Because the properties kind, version, and accessNum are all under the allOf keyword, the resulting schema that the given data must be validated against will contain the types contained in these properties' children (string and integer).
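The merging step described above can be sketched as folding the properties of every allOf subschema into one map, so the resulting type requires all of them at once. An illustrative sketch, not OPA's implementation:

```go
package main

import "fmt"

// mergeAllOf folds the "properties" of each subschema listed under
// allOf into a single property map; later subschemas win on a name
// clash. Illustrative sketch only, not OPA's implementation.
func mergeAllOf(subschemas []map[string]any) map[string]any {
	merged := map[string]any{}
	for _, s := range subschemas {
		if props, ok := s["properties"].(map[string]any); ok {
			for name, spec := range props {
				merged[name] = spec
			}
		}
	}
	return merged
}

func main() {
	// The two allOf subschemas from the example above, properties only.
	subschemas := []map[string]any{
		{"properties": map[string]any{"kind": map[string]any{"type": "object"}}},
		{"properties": map[string]any{"server": map[string]any{"type": "object"}}},
	}
	merged := mergeAllOf(subschemas)
	fmt.Println(len(merged)) // 2
}
```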
Remote references in JSON schemas
It is valid for JSON schemas to reference other JSON schemas via URLs, like this:
{
  "description": "Pod is a collection of containers that can run on a host.",
  "type": "object",
  "properties": {
    "metadata": {
      "$ref": "https://kubernetesjsonschema.dev/v1.14.0/_definitions.json#/definitions/io.k8s.apimachinery.pkg.apis.meta.v1.ObjectMeta",
      "description": "Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata"
    }
  }
}
OPA’s type checker will fetch these remote references by default.
To control the remote hosts schemas will be fetched from, pass a capabilities
file to your
opa eval
or
opa check
call.
Starting from the capabilities.json of your OPA version (which can be found
in the
repository
), add
an
allow_net
key to it: its values are the IP addresses or host names that OPA is
supposed to connect to for retrieving remote schemas.
{
  "builtins": [ ... ],
  "allow_net": [ "kubernetesjsonschema.dev" ]
}
Note
- To forbid all network access in schema checking, set allow_net to [].
- Host names are checked against the list as-is, so adding 127.0.0.1 to allow_net, and referencing a schema from http://localhost/ will fail.
- Metaschemas for different JSON Schema draft versions are not subject to this constraint, as they are already provided by OPA’s schema checker without requiring network access. These are:
  - http://json-schema.org/draft-04/schema
  - http://json-schema.org/draft-06/schema
  - http://json-schema.org/draft-07/schema
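The as-is host matching described in the note above amounts to exact string comparison, which a short sketch makes concrete (illustrative only, not OPA's implementation):

```go
package main

import "fmt"

// hostAllowed reports whether host appears verbatim in the allow_net
// list: no DNS resolution or aliasing is performed, which is why
// "127.0.0.1" does not cover "localhost". Illustrative sketch.
func hostAllowed(allowNet []string, host string) bool {
	for _, h := range allowNet {
		if h == host {
			return true
		}
	}
	return false
}

func main() {
	allowNet := []string{"127.0.0.1"}
	fmt.Println(hostAllowed(allowNet, "127.0.0.1")) // true
	fmt.Println(hostAllowed(allowNet, "localhost")) // false
}
```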
Limitations
Currently this feature admits schemas written in JSON Schema but does not support every feature available in this format.
In particular, the following features are not yet supported:
- additional properties for objects
- pattern properties for objects
- additional items for arrays
- contains for arrays
- oneOf, not
- enum
- if/then/else
A note of caution: overriding is a powerful capability that must be used carefully. For example, the user is allowed to write:
# METADATA
# scope: rule
# schemas:
# - data: schema["some-schema"]
In this case, we are overriding the root of all documents to have some schema. Since all Rego code lives under
data
as virtual documents, this in practice renders all of them inaccessible (resulting in type errors). Similarly, assigning a schema to a package name is not a good idea and can cause problems. Care must also be taken when defining overrides so that the transformation of schemas is sensible and data can be validated against the transformed schema.
References
For more examples, please see
https://github.com/aavarghese/opa-schema-examples
This contains samples for Envoy, Kubernetes, and Terraform including corresponding JSON Schemas.
For a reference on JSON Schema please see:
http://json-schema.org/understanding-json-schema/reference/index.html
For a tool that generates JSON Schema from JSON samples, please see:
https://jsonschema.net/home
Strict Mode
The Rego compiler supports
strict mode
, where additional constraints and safety checks are enforced during compilation.
Compiler rules that will be enforced by future versions of OPA, but will be a breaking change once introduced, are incubated in strict mode.
This creates an opportunity for users to verify that their policies are compatible with the next version of OPA before upgrading.
Compiler Strict mode is supported by the
check
command, and can be enabled through the
-S
flag.
-S, --strict enable compiler strict mode
Strict Mode Constraints and Checks
| Name | Description | Enforced by default in OPA version |
| --- | --- | --- |
| Duplicate imports | Duplicate imports, where one import shadows another, are prohibited. | 1.0 |
| Unused local assignments | Unused arguments or assignments local to a rule, function or comprehension are prohibited. | 1.0 |
| Unused imports | Unused imports are prohibited. | 1.0 |
| input and data reserved keywords | input and data are reserved keywords, and may not be used as names for rules and variable assignment. | 1.0 |
| Use of deprecated built-ins | Use of deprecated functions is prohibited, and these will be removed in OPA 1.0. Deprecated built-in functions: any, all, re_match, net.cidr_overlap, set_diff, cast_array, cast_set, cast_string, cast_boolean, cast_null, cast_object | 1.0 |