
feat: add direct OPA evaluator with shared base infrastructure #3276

Draft

joejstuart wants to merge 8 commits into conforma:main from joejstuart:opa-evaluator

Conversation

@joejstuart
Contributor

Add a new opaEvaluator that evaluates policies directly via OPA's rego API instead of going through conftest's runner. This eliminates the conftest runner overhead while reusing conftest's Engine for policy compilation and data loading.

Extract shared infrastructure into basePolicyEvaluator to eliminate ~400 lines of duplication between the two evaluators. Both now share policy download/inspection, data directory preparation, capabilities management, post-processing, and success computation.

Key changes:

  • New opaEvaluator with direct rego.Eval() queries matching conftest's engine.Check() semantics (same regexes, exception handling, success counting)
  • basePolicyEvaluator embedded struct with 12 shared methods
  • sync.Once lazy initialization on both evaluators to prevent data races from concurrent worker goroutines
  • BuildInput() on ApplicationSnapshotImage for in-memory OPA input delivery without disk I/O
  • ParsedInput field on EvaluationTarget for passing pre-parsed input
  • EC_USE_OPA=1 environment variable gate for selecting the OPA evaluator

@joejstuart joejstuart marked this pull request as draft May 6, 2026 23:07
@coderabbitai

coderabbitai Bot commented May 6, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Enterprise

Run ID: 31b22dd0-3a20-4c38-9c91-06969aabfca1

📝 Walkthrough

The pull request implements a functional OPA evaluator by introducing a shared basePolicyEvaluator struct that manages policy downloads, workspace setup, rule discovery, and result post-processing. The opaEvaluator is upgraded from a stub to a full implementation that compiles OPA engines, executes rego queries, and synthesizes evaluation outcomes. Integration points propagate parsed input structures to the evaluators, and conftestEvaluator is refactored to reuse the common base infrastructure.

Changes

OPA Evaluator Implementation

• Data Shape — internal/evaluator/evaluator.go
  EvaluationTarget adds a ParsedInput map[string]any field to carry in-memory OPA input alongside file paths.
• Base Infrastructure — internal/evaluator/base_evaluator.go
  New basePolicyEvaluator struct and methods provide policy workspace initialization, policy source downloading and inspection (rule annotation extraction), data directory discovery, namespace filtering, result post-processing with metadata enrichment, and implicit success computation.
• Input Building — internal/evaluation_target/application_snapshot_image/application_snapshot_image.go
  New BuildInput(ctx) method constructs the OPA input structure in memory from snapshot attestations, image metadata, and optional parent config, marshals it to JSON, and returns both the parsed map and the JSON bytes.
• OPA Evaluator Implementation — internal/evaluator/opa_evaluator.go
  Complete rewrite from stub: embeds basePolicyEvaluator, implements lazy initialization via ensureInitialized, compiles a conftest.Engine with discovered policies and data directories, executes rego queries to collect successes/failures/warnings/exceptions, and post-processes results with rule metadata and filtering.
• Conftest Refactoring — internal/evaluator/conftest_evaluator.go
  Refactored to embed basePolicyEvaluator for shared state management; introduces ensureInitialized(ctx) for lazy policy download/inspection; delegates result post-processing to the postProcessResults helper; removes inlined evaluation logic now owned by the base evaluator.
• Integration & Wiring — cmd/validate/image.go, internal/image/validate.go
  Command-line OPA evaluator initialization supplies context, policy sources, policy config, and namespace; ValidateImage builds parsed input via BuildInput and propagates it to EvaluationTarget.ParsedInput alongside file paths for evaluator consumption.
• Tests — internal/evaluator/conftest_evaluator_unit_data_test.go, internal/evaluator/opa_evaluator_test.go
  Test harnesses updated to construct evaluators with embedded basePolicyEvaluator fields; filesystem and workdir initialization adjusted for the new struct nesting.

Sequence Diagram

sequenceDiagram
    participant Client as Validate Command
    participant Input as ApplicationSnapshotImage
    participant Build as BuildInput
    participant Eval as OPA Evaluator
    participant Base as Base Evaluator
    participant Policy as Policy Sources
    participant Engine as OPA Engine
    participant Rules as Rule Registry
    
    Client->>Input: ValidateImage(ctx)
    Input->>Build: BuildInput(ctx)
    Build-->>Input: (parsedInput map, JSON bytes)
    
    Client->>Eval: NewOPAEvaluator(ctx, policySources, ...)
    Eval->>Base: Initialize basePolicyEvaluator
    Base->>Base: initWorkDir, createDataDirectory
    
    Client->>Eval: Evaluate(ctx, target{ParsedInput, Inputs})
    Eval->>Eval: ensureInitialized(ctx)
    Eval->>Base: downloadAndInspectPolicies(ctx)
    Base->>Policy: Download policy sources
    Base->>Base: Extract rule annotations, discover data dirs
    Base->>Rules: Register rules with metadata
    
    Eval->>Engine: compileEngine()
    Engine->>Engine: Load capabilities, build OPA engine
    
    Eval->>Eval: evaluateWithEngine(target, namespaces)
    Eval->>Engine: Query deny/violation/warn rules
    Engine-->>Eval: Rule results + exceptions
    
    Eval->>Base: postProcessResults(outcomes)
    Base->>Base: Enrich with metadata, compute successes
    Base->>Base: Apply filtering, validate rule coverage
    Base-->>Eval: Final Outcome[]
    
    Eval-->>Client: Outcome[] with aggregated results

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

• Docstring Coverage — ⚠️ Warning: docstring coverage is 10.00%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

• Title check — ✅ Passed: the title accurately describes the main addition, a new OPA evaluator with shared base infrastructure, matching the core objective of the PR.
• Description check — ✅ Passed: the description is directly related to the changeset, providing context about the new opaEvaluator, the basePolicyEvaluator refactoring, and supporting infrastructure changes.
• Linked Issues check — ✅ Passed: check skipped because no linked issues were found for this pull request.
• Out of Scope Changes check — ✅ Passed: check skipped because no linked issues were found for this pull request.


@coderabbitai coderabbitai Bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (2)
internal/image/validate.go (1)

109-119: ⚡ Quick win

Avoid building the same policy input twice per image.

BuildInput already marshals/unmarshals the full payload, and WriteInputFile then marshals it again immediately afterward. On the conftest path this is pure overhead in a hot loop, and it also creates two places that must stay semantically identical. Prefer a single shared builder, or only populate ParsedInput when an OPA evaluator is actually present.

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@internal/image/validate.go` around lines 109 - 119, The code currently calls
BuildInput and then WriteInputFile, causing duplicate marshaling; instead call
BuildInput once and reuse its result when writing files or populating
ParsedInput. Modify WriteInputFile (or add an overload) to accept the
already-built inputJSON/inputMap from BuildInput so it does not re-marshal, and
only populate a.ParsedInput (or run the OPA/conftest path) when an OPA
evaluator/conftest is actually present; update callers to pass the BuildInput
output into WriteInputFile or skip WriteInputFile when no evaluator is used
(referencing BuildInput, WriteInputFile, and ParsedInput in your changes).
internal/evaluator/opa_evaluator.go (1)

243-267: ⚡ Quick win

Cache rule discovery per namespace after compilation.

This rescans every compiled module for every (namespace, input) pair, then does an O(n²) dedupe on top. With many inputs, rule discovery becomes part of the hot path even though o.engine.Modules() is immutable after compileEngine(). Precomputing a map[string][]string once during initialization would remove that repeated AST walk and simplify queryNamespace.

As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@internal/evaluator/opa_evaluator.go` around lines 243 - 267, Precompute and
cache discovery of OPA failure/warning rule names per namespace during engine
initialization (e.g., in compileEngine or immediately after it) instead of
recomputing inside queryNamespace by walking o.engine.Modules() each time; build
a map[string][]string (namespace -> ruleNames) using the same filtering logic
(isOPAFailure/isOPAWarning and dedupe) and store it on the evaluator struct,
update usages of ruleNames/ruleCount in queryNamespace to read from that cache,
and ensure cache is populated once after compileEngine() so o.engine.Modules()
is not rescanned for every (namespace, input) pair.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Enterprise

Run ID: 46c4d173-fd5a-48f1-94ce-2a52bf4b8246

📥 Commits

Reviewing files that changed from the base of the PR and between 68bb316 and 8989ef7.

📒 Files selected for processing (9)
  • cmd/validate/image.go
  • internal/evaluation_target/application_snapshot_image/application_snapshot_image.go
  • internal/evaluator/base_evaluator.go
  • internal/evaluator/conftest_evaluator.go
  • internal/evaluator/conftest_evaluator_unit_data_test.go
  • internal/evaluator/evaluator.go
  • internal/evaluator/opa_evaluator.go
  • internal/evaluator/opa_evaluator_test.go
  • internal/image/validate.go

Comment on lines +307 to +325
for _, result := range runResults {
	unifiedFilter := NewUnifiedPostEvaluationFilter(b.policyResolver)

	allResults := []Result{}
	allResults = append(allResults, result.Warnings...)
	allResults = append(allResults, result.Failures...)
	allResults = append(allResults, result.Exceptions...)
	allResults = append(allResults, result.Skipped...)

	for j := range allResults {
		addRuleMetadata(ctx, &allResults[j], b.rules)
	}

	filteredResults, updatedMissingIncludes := unifiedFilter.FilterResults(
		allResults, b.allRules, target.Target, missingIncludes, effectiveTime)
	missingIncludes = updatedMissingIncludes

	warnings, failures, exceptions, skipped := unifiedFilter.CategorizeResults(
		filteredResults, result, effectiveTime)

⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Preserve caller-supplied post-evaluation filters.

postProcessResults now always creates NewUnifiedPostEvaluationFilter(...), so NewConftestEvaluatorWithPostEvaluationFilter no longer changes behavior after this refactor. That is a functional regression for any path relying on a custom filter.

Suggested fix
-func (b *basePolicyEvaluator) postProcessResults(ctx context.Context, runResults []Outcome, target EvaluationTarget) ([]Outcome, error) {
+func (b *basePolicyEvaluator) postProcessResults(ctx context.Context, runResults []Outcome, target EvaluationTarget, filter PostEvaluationFilter) ([]Outcome, error) {
 	...
 	for _, result := range runResults {
-		unifiedFilter := NewUnifiedPostEvaluationFilter(b.policyResolver)
+		unifiedFilter := filter
+		if unifiedFilter == nil {
+			unifiedFilter = NewUnifiedPostEvaluationFilter(b.policyResolver)
+		}
 		...
 	}
-return c.postProcessResults(ctx, runResults, target)
+return c.postProcessResults(ctx, runResults, target, c.postEvaluationFilter)
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@internal/evaluator/base_evaluator.go` around lines 307 - 325, The loop always
constructs NewUnifiedPostEvaluationFilter, which overrides any custom filter set
by NewConftestEvaluatorWithPostEvaluationFilter; change the code to use the
evaluator's configured post-evaluation filter (e.g., b.postEvaluationFilter)
when present, falling back to NewUnifiedPostEvaluationFilter(b.policyResolver)
only if b.postEvaluationFilter is nil, and then call FilterResults and
CategorizeResults on that chosen filter instance so custom filters are preserved
(apply this to the variables used around FilterResults/CategorizeResults).

Comment thread internal/evaluator/base_evaluator.go Outdated
Comment on lines +332 to +333
result.Successes = b.computeSuccesses(result, b.rules, target.Target, missingIncludes, unifiedFilter)


⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Success filtering is using the wrong clock and dropping include matches.

The FilterResults call inside computeSuccesses uses time.Now() instead of the evaluator's effective time, and it ignores the returned missingIncludes. That means a rule matched only by a success can still trigger the later "Include criterion ... doesn't match any policy rule" warning, and backdated/effective-time runs can report successes differently from failures and warnings.

Also applies to: 406-412

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@internal/evaluator/base_evaluator.go` around lines 332 - 333,
computeSuccesses is calling FilterResults with time.Now() and discarding its
returned missingIncludes; change those calls to pass the evaluator's effective
time (use b.effectiveTime or the evaluator's effectiveTime field) instead of
time.Now(), and capture/propagate the missingIncludes return value so
result.Successes is computed with the same effective time and the
missingIncludes set is not lost; apply the same fix to the other FilterResults
call in the same file (the block around the alternate call at ~406-412) so both
successes and the other filter use the evaluator effective time and preserve
missingIncludes.

@codecov

codecov Bot commented May 7, 2026

Codecov Report

❌ Patch coverage is 86.86869% with 65 lines in your changes missing coverage. Please review.

Files with missing lines | Patch % | Missing lines
internal/evaluator/base_evaluator.go | 85.77% | 33 ⚠️
internal/evaluator/opa_evaluator.go | 88.18% | 26 ⚠️
...ation_snapshot_image/application_snapshot_image.go | 93.54% | 2 ⚠️
internal/image/validate.go | 75.00% | 2 ⚠️
internal/validate/vsa/fallback.go | 0.00% | 2 ⚠️

Flag | Coverage Δ
acceptance | 55.97% <72.72%> (+0.80%) ⬆️
generative | 17.10% <0.00%> (-0.79%) ⬇️
integration | 28.12% <61.81%> (+1.45%) ⬆️
unit | 68.93% <66.26%> (-0.09%) ⬇️

Flags with carried forward coverage won't be shown.

Files with missing lines | Coverage Δ
cmd/validate/image.go | 91.62% <100.00%> (+0.25%) ⬆️
...ation_snapshot_image/application_snapshot_image.go | 83.96% <93.54%> (+1.28%) ⬆️
internal/image/validate.go | 70.42% <75.00%> (-0.39%) ⬇️
internal/validate/vsa/fallback.go | 36.84% <0.00%> (-0.50%) ⬇️
internal/evaluator/opa_evaluator.go | 88.18% <88.18%> (-11.82%) ⬇️
internal/evaluator/base_evaluator.go | 85.77% <85.77%> (ø)

... and 1 file with indirect coverage changes


@joejstuart joejstuart force-pushed the opa-evaluator branch 2 times, most recently from 8dbf508 to adbdb0a Compare May 7, 2026 14:27
joejstuart and others added 6 commits May 7, 2026 10:44
Add a new opaEvaluator that evaluates policies directly via OPA's rego
API instead of going through conftest's runner. This eliminates the
conftest runner overhead while reusing conftest's Engine for policy
compilation and data loading.

Extract shared infrastructure into basePolicyEvaluator to eliminate
~400 lines of duplication between the two evaluators. Both now share
policy download/inspection, data directory preparation, capabilities
management, post-processing, and success computation.

Key changes:
- New opaEvaluator with direct rego.Eval() queries matching conftest's
  engine.Check() semantics (same regexes, exception handling, success
  counting)
- basePolicyEvaluator embedded struct with 12 shared methods
- sync.Once lazy initialization on both evaluators to prevent data
  races from concurrent worker goroutines
- BuildInput() on ApplicationSnapshotImage for in-memory OPA input
  delivery without disk I/O
- ParsedInput field on EvaluationTarget for passing pre-parsed input
- EC_USE_OPA=1 environment variable gate for selecting the OPA evaluator

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
BuildInput was missing ComponentName and PolicySpec fields that
WriteInputFile includes, causing acceptance test snapshot failures.
Also fix gci import ordering in fallback.go and tidy acceptance/go.mod.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add unit tests covering evalOPAQuery, queryNamespace, evaluateWithEngine,
helper functions (isOPAFailure, isOPAWarning, stripRulePrefix), input
parsing, and base evaluator methods (prepareDataDirs, computeSuccesses,
postProcessResults, createDataDirectory, createCapabilitiesFile,
initWorkDir, initPolicyResolver, resolveFilteredNamespaces,
isResultIncluded). Also fix gci import ordering in base_evaluator.go
and opa_evaluator.go.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Full end-to-end integration tests for the OPA evaluator covering:
- Basic creation and capabilities path
- Evaluation with file-based and parsed input
- Deny/warn semantics (both triggered, warn only, all pass)
- Component-name-based VolatileConfig exclude filtering

Mirrors the existing conftest evaluator integration tests.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Run both evaluators against the same policy and input, assert identical
outcomes (failure/warning/success codes and messages). Covers deny,
warn, conditional rules, multiple rules, parsed vs file input, and
component-name-based VolatileConfig filtering.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Inject EC_USE_OPA from the test runner's process environment into every
acceptance scenario's ec binary invocation, enabling `EC_USE_OPA=1 make
acceptance` to run the full suite with the OPA evaluator.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Add validate_image_opa.feature (7 scenarios) and validate_input_opa.feature
(2 scenarios) that exercise the OPA evaluator end-to-end via EC_USE_OPA=1.
Covers happy day, rejection, multiple sources, rule filtering, future deny
conversion, volatile config, and input validation paths.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Generated by running the OPA acceptance test scenarios locally.
These snapshots enable CI to validate OPA evaluator output matches
expected results.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>