
Refactor Decision CRD to History CRD #569

Open
SoWieMarkus wants to merge 36 commits into main from refactor-decision-crd-v2

Conversation

Collaborator

SoWieMarkus commented Mar 12, 2026

Note

The Decision CRD is kept as-is for now because removing it entirely from the workflow between the external scheduler API and the pipeline controller would be too complex at this point. Its current use case is primarily to trigger scheduling runs and to serve as a DTO within the scheduler pipeline. We plan to replace it with something more fitting in the future, and since a similar concept will likely be needed, it makes more sense to leave it in place rather than tear it out now. Of course, the Decision CRD will no longer be persisted; consider it deprecated. The functionality originally planned for the Decision CRD has been moved into a new History CRD. Once we have a clearer plan for restructuring the CRD workflow and can fully retire the now-deprecated Decision CRD, the History CRD can be renamed to Decision if appropriate.

Note

As ugly as this state is: the flag to persist the decision in the History CRD is still called createDecision to avoid breaking changes.

Changes

  • Introduced a History CRD which stores the most recent decision as well as the last 10 decisions
  • Adjusted the filter weigher pipelines to create these History objects
  • The Decision CRD still exists but is no longer persisted
  • Removed the explanation controller
  • Introduced a HistoryManager to explain decisions
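As a rough orientation, the new History types might be shaped like the following plain-Go sketch. The field names are inferred from the type names mentioned in this PR (HistorySpec, CurrentDecision, SchedulingHistoryEntry) and may differ from the actual api/v1alpha1 definitions, which also embed the Kubernetes TypeMeta/ObjectMeta omitted here.

```go
package main

import "fmt"

// HistorySpec identifies the resource the history belongs to.
// Hypothetical field set; see api/v1alpha1/history_types.go for the real one.
type HistorySpec struct {
	SchedulingDomain string // e.g. "nova", "manila", "pods"
	ResourceID       string // ID of the scheduled resource
}

// CurrentDecision is the outcome of the most recent scheduling run.
type CurrentDecision struct {
	TargetHost   string
	OrderedHosts []string
}

// SchedulingHistoryEntry archives a prior decision.
type SchedulingHistoryEntry struct {
	Decision CurrentDecision
}

// HistoryStatus keeps the current decision plus up to the last 10 archived ones.
type HistoryStatus struct {
	Current CurrentDecision
	Entries []SchedulingHistoryEntry
}

func main() {
	h := HistoryStatus{
		Current: CurrentDecision{
			TargetHost:   "host-a",
			OrderedHosts: []string{"host-a", "host-b"},
		},
	}
	fmt.Printf("current target: %s (%d archived)\n", h.Current.TargetHost, len(h.Entries))
}
```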

SoWieMarkus marked this pull request as ready for review March 12, 2026 16:36
SoWieMarkus marked this pull request as draft March 13, 2026 06:32
SoWieMarkus marked this pull request as ready for review March 13, 2026 07:30
Copilot AI review requested due to automatic review settings March 13, 2026 07:30
cobaltcore-dev deleted a comment from coderabbitai bot Mar 13, 2026



coderabbitai bot commented Mar 13, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

📝 Walkthrough


Introduce a cluster-scoped History CRD and HistoryManager; migrate controllers to asynchronously upsert/delete History resources instead of synchronously creating/patching Decision status; add deepcopy and CRD manifests, RBAC, and tests; remove the explanation subsystem and related wiring; adjust API types to use v1alpha1.SchedulingIntent.

Changes

  • History API & Deepcopy (api/v1alpha1/history_types.go, api/v1alpha1/zz_generated.deepcopy.go): Add History CRD types (SchedulingIntent, History, CurrentDecision, SchedulingHistoryEntry, HistorySpec/Status) and generated deepcopy implementations.
  • CRD Manifest & RBAC (helm/library/cortex/files/crds/cortex.cloud_histories.yaml, helm/library/cortex/templates/rbac/role.yaml): Add a cluster-scoped History CRD manifest with printer columns and schema; add histories, histories/finalizers, and histories/status to RBAC and grant event create/patch verbs.
  • History Manager & Tests (internal/scheduling/lib/history_manager.go, internal/scheduling/lib/history_manager_test.go): New HistoryManager (Client + EventRecorder) with Upsert and Delete; upserts archive the prior current entry, set the Ready condition, update status, and emit events; comprehensive unit tests added.
  • Pipeline Lib & Controller Base (internal/scheduling/lib/filter_weigher_pipeline.go, internal/scheduling/lib/filter_weigher_pipeline_test.go, internal/scheduling/lib/pipeline_controller.go): Pipeline run/reporting now collects StepResults into DecisionResult; controller base types gain a HistoryManager field.
  • Domain Controllers & Tests (internal/scheduling/{cinder,manila,nova,pods,machines}/*filter_weigher_pipeline_controller.go, *_test.go): Remove synchronous Decision Create/Patch; wire HistoryManager in SetupWithManager; asynchronously call HistoryManager.Upsert on create/update and HistoryManager.Delete on deletes; tests updated to expect History CRDs and include History in the fake client's status subresources.
  • Cleanup Jobs & Tests (internal/scheduling/{cinder,manila,nova}/decisions_cleanup.go, *_test.go): Switch cleanup logic and tests from listing/deleting Decision to listing/deleting History resources and adjust references to HistorySpec.
  • Explanation Subsystem Removal (internal/scheduling/explanation/*): Remove the entire explanation package: controller, explainer, templates, types, and all related tests and wiring.
  • External API Change (api/external/nova/messages.go, api/external/nova/messages_test.go): Replace the local RequestIntent with v1alpha1.SchedulingIntent; update the GetIntent signature and tests accordingly.
  • CLI & Helm Values (cmd/main.go, helm/bundles/*/values.yaml, helm/library/cortex/files/crds/cortex.cloud_pipelines.yaml): Remove explanation-controller wiring from main and the Helm bundles; add a NOTE to the pipeline CRD createDecisions field description; update Helm values to exclude explanation-controller.

Sequence Diagram(s)

sequenceDiagram
  participant Controller as "Pipeline Controller"
  participant HistoryMgr as "HistoryManager"
  participant K8sAPI as "Kubernetes API"
  participant Recorder as "EventRecorder"

  Controller->>HistoryMgr: Upsert(ctx, decision, intent, az, pipelineErr)
  activate HistoryMgr
  HistoryMgr->>K8sAPI: Get History by name
  alt History not found
    K8sAPI-->>HistoryMgr: NotFound
    HistoryMgr->>K8sAPI: Create History (Spec)
  else History exists
    K8sAPI-->>HistoryMgr: Existing History
    HistoryMgr->>K8sAPI: Status().Update(CurrentDecision + archived History)
  end
  HistoryMgr->>Recorder: Event(SchedulingSucceeded|SchedulingFailed)
  deactivate HistoryMgr
  HistoryMgr-->>Controller: return error?

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Poem

🐰 I swapped decisions for histories with a hop and a cheer,
Async upserts hum stories of places once near,
Templates tucked away, explainers put to rest,
Events and conditions keep each record at best,
A rabbit’s tidy migration — history preserved dear!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage — ⚠️ Warning: docstring coverage is 21.74%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them.
✅ Passed checks (2 passed)
  • Description Check — ✅ Passed: check skipped because CodeRabbit’s high-level summary is enabled.
  • Title Check — ✅ Passed: the PR title 'Refactor Decision CRD to History CRD' accurately and concisely describes the primary change: migrating core functionality from the Decision CRD to a new History CRD while deprecating Decision.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

Tip

You can enable review details to help with troubleshooting, context usage and more.

Enable the reviews.review_details setting to include review details such as the model used, the time taken for each step and more in the review comments.

coderabbitai bot left a comment

Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (5)
internal/scheduling/manila/decisions_cleanup_test.go (1)

350-356: ⚠️ Potential issue | 🟡 Minor

Assert deletions against History, not Decision.

At Line 352 this assertion still reads a v1alpha1.Decision, so it passes even when the History object was never deleted. After the CRD swap, the test no longer validates the cleanup path.

Suggested fix
-					var decision v1alpha1.Decision
+					var history v1alpha1.History
 					err := client.Get(context.Background(),
-						types.NamespacedName{Name: expectedDeleted}, &decision)
+						types.NamespacedName{Name: expectedDeleted}, &history)
 					if err == nil {
-						t.Errorf("Expected decision %s to be deleted but it still exists", expectedDeleted)
+						t.Errorf("Expected history %s to be deleted but it still exists", expectedDeleted)
 					}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/manila/decisions_cleanup_test.go` around lines 350 - 356,
The test is asserting deletion against v1alpha1.Decision but should check
v1alpha1.History; update the loop that looks up expectedDeleted to fetch a
v1alpha1.History (e.g., var history v1alpha1.History) and call client.Get with
types.NamespacedName{Name: expectedDeleted} into that history variable, then
assert that an error is returned (or IsNotFound) instead of checking a
Decision—ensure all references to the retrieved object in this block use the
History type and variable name.
internal/scheduling/nova/decisions_cleanup_test.go (1)

352-360: ⚠️ Potential issue | 🟠 Major

Fetch History, not Decision, in the deletion assertion.

This still queries v1alpha1.Decision, so the test will pass even if the History object was never deleted.

Suggested fix
 			if !tt.expectError {
-				// Verify expected decisions were deleted
+				// Verify expected histories were deleted
 				for _, expectedDeleted := range tt.expectedDeleted {
-					var decision v1alpha1.Decision
+					var history v1alpha1.History
 					err := client.Get(context.Background(),
-						types.NamespacedName{Name: expectedDeleted}, &decision)
+						types.NamespacedName{Name: expectedDeleted}, &history)
 					if err == nil {
-						t.Errorf("Expected decision %s to be deleted but it still exists", expectedDeleted)
+						t.Errorf("Expected history %s to be deleted but it still exists", expectedDeleted)
 					}
 				}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/nova/decisions_cleanup_test.go` around lines 352 - 360,
The test is asserting deletion of History objects but currently fetches
v1alpha1.Decision; change the lookup to fetch a v1alpha1.History instead.
Replace the local variable (e.g., var decision v1alpha1.Decision) with var
history v1alpha1.History and call client.Get(..., &history) when checking each
tt.expectedDeleted (using the same types.NamespacedName{Name: expectedDeleted}),
and update the error message to reference the History name so the assertion
fails if the History was not actually deleted.
internal/scheduling/pods/filter_weigher_pipeline_controller.go (1)

76-96: ⚠️ Potential issue | 🟠 Major

pod.Name is too weak for the new history key.

HistorySpec only carries SchedulingDomain and ResourceID, and this controller still identifies the scheduling by pod name. Same-named pods in different namespaces will overwrite or delete each other’s history.

Suggested fix
 	decision := &v1alpha1.Decision{
 		ObjectMeta: metav1.ObjectMeta{
 			GenerateName: "pod-",
 		},
 		Spec: v1alpha1.DecisionSpec{
 			SchedulingDomain: v1alpha1.SchedulingDomainPods,
-			ResourceID:       pod.Name,
+			ResourceID:       client.ObjectKeyFromObject(pod).String(),
 			PipelineRef: corev1.ObjectReference{
 				Name: "pods-scheduler",
 			},
@@
 		DeleteFunc: func(ctx context.Context, evt event.DeleteEvent, queue workqueue.TypedRateLimitingInterface[reconcile.Request]) {
 			pod := evt.Object.(*corev1.Pod)
-			if err := c.HistoryManager.Delete(ctx, v1alpha1.SchedulingDomainPods, pod.Name); err != nil {
+			resourceID := client.ObjectKeyFromObject(pod).String()
+			if err := c.HistoryManager.Delete(ctx, v1alpha1.SchedulingDomainPods, resourceID); err != nil {
 				log := ctrl.LoggerFrom(ctx)
 				log.Error(err, "failed to delete history CRD for pod", "pod", pod.Name)
 			}

Also applies to: 224-229

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/pods/filter_weigher_pipeline_controller.go` around lines
76 - 96, The Decision resource uses pod.Name as Decision.Spec.ResourceID which
is too weak (collides across namespaces); update ProcessNewPod (and the similar
creation at the other location) to set ResourceID to a unique identifier
combining namespace and name (e.g., "namespace/name") or better, use pod.UID
(preferred) so HistorySpec's ResourceID is unique per pod; change the
Decision.Spec.PodRef to remain the same but replace ResourceID assignment from
pod.Name to either fmt.Sprintf("%s/%s", pod.Namespace, pod.Name) or
string(pod.UID) consistently wherever Decision.Spec.ResourceID is set.
internal/scheduling/machines/filter_weigher_pipeline_controller.go (1)

77-97: ⚠️ Potential issue | 🟠 Major

machine.Name is not unique enough for persisted history.

With the new History CRD, the effective key is SchedulingDomain + ResourceID. Using only the machine name means same-named machines in different namespaces can clobber each other’s history.

Suggested fix
 	decision := &v1alpha1.Decision{
 		ObjectMeta: metav1.ObjectMeta{
 			GenerateName: "machine-",
 		},
 		Spec: v1alpha1.DecisionSpec{
 			SchedulingDomain: v1alpha1.SchedulingDomainMachines,
-			ResourceID:       machine.Name,
+			ResourceID:       client.ObjectKeyFromObject(machine).String(),
 			PipelineRef: corev1.ObjectReference{
 				Name: "machines-scheduler",
 			},
@@
 		DeleteFunc: func(ctx context.Context, evt event.DeleteEvent, queue workqueue.TypedRateLimitingInterface[reconcile.Request]) {
 			machine := evt.Object.(*ironcorev1alpha1.Machine)
-			if err := c.HistoryManager.Delete(ctx, v1alpha1.SchedulingDomainMachines, machine.Name); err != nil {
+			resourceID := client.ObjectKeyFromObject(machine).String()
+			if err := c.HistoryManager.Delete(ctx, v1alpha1.SchedulingDomainMachines, resourceID); err != nil {
 				log := ctrl.LoggerFrom(ctx)
 				log.Error(err, "failed to delete history CRD for machine", "machine", machine.Name)
 			}

Also applies to: 213-218

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/machines/filter_weigher_pipeline_controller.go` around
lines 77 - 97, The Decision.ResourceID is currently set to machine.Name which
can collide across namespaces; update
FilterWeigherPipelineController.ProcessNewMachine (and the other Decision
creation site around the similar block) to use a namespace-qualified key (for
example fmt.Sprintf("%s/%s", machine.Namespace, machine.Name) or k8s
types.NamespacedName.String()) so Decision.Spec.ResourceID is unique per
namespace+name; ensure both places that build v1alpha1.Decision.Spec.ResourceID
(and any analogous ResourceID assignments) are changed to the namespaced form.
internal/scheduling/cinder/decisions_cleanup_test.go (1)

299-307: ⚠️ Potential issue | 🟠 Major

The deletion check is still looking up the wrong kind.

After this migration, querying v1alpha1.Decision here returns NotFound even when the History object still exists, so this test no longer protects the cleanup path.

Suggested fix
 			if !tt.expectError {
-				// Verify expected decisions were deleted
+				// Verify expected histories were deleted
 				for _, expectedDeleted := range tt.expectedDeleted {
-					var decision v1alpha1.Decision
+					var history v1alpha1.History
 					err := client.Get(context.Background(),
-						types.NamespacedName{Name: expectedDeleted}, &decision)
+						types.NamespacedName{Name: expectedDeleted}, &history)
 					if err == nil {
-						t.Errorf("Expected decision %s to be deleted but it still exists", expectedDeleted)
+						t.Errorf("Expected history %s to be deleted but it still exists", expectedDeleted)
 					}
 				}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/cinder/decisions_cleanup_test.go` around lines 299 - 307,
The test is querying the wrong Kubernetes kind (v1alpha1.Decision) when
asserting deletions; change the client.Get call to look up the History kind
instead (replace v1alpha1.Decision with v1alpha1.History and adjust the variable
name accordingly) so the test verifies the History object was actually deleted,
and update the error message to reference the History resource name; keep the
existing use of types.NamespacedName{Name: expectedDeleted} and the nil-check
logic.
🧹 Nitpick comments (2)
internal/scheduling/lib/filter_weigher_pipeline_test.go (1)

224-228: Consider validating the returned step results.

The test correctly adapts to the new runFilters signature, but discarding stepResults misses an opportunity to verify that filter activations are properly captured. Since this is now a key feature for history tracking, adding assertions would strengthen coverage.

💡 Optional enhancement to verify step results
-	req, _ := p.runFilters(slog.Default(), request)
+	req, stepResults := p.runFilters(slog.Default(), request)
 	if len(req.Hosts) != 2 {
 		t.Fatalf("expected 2 step results, got %d", len(req.Hosts))
 	}
+	if len(stepResults) != 1 {
+		t.Fatalf("expected 1 step result, got %d", len(stepResults))
+	}
+	if stepResults[0].StepName != "mock_filter" {
+		t.Errorf("expected step name 'mock_filter', got %s", stepResults[0].StepName)
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/lib/filter_weigher_pipeline_test.go` around lines 224 -
228, The test currently ignores the second return value from p.runFilters (the
stepResults) after calling req, _ := p.runFilters(slog.Default(), request);
update the test to capture and assert on stepResults (e.g., stepResults := ...
or req, stepResults := ...), verifying its length and that entries correspond to
expected filters/steps and activation states; specifically assert that
stepResults contains the expected number of steps, that each StepResult
references the correct filter identifier (by name or ID used in the test) and
that the activation/decision fields match the expected outcomes for the given
request, while keeping the existing host count assertion for req.Hosts.
internal/scheduling/lib/history_manager.go (1)

181-188: Consider copying OrderedHosts to avoid shared slice reference.

The slice is assigned directly from decision.Status.Result.OrderedHosts. If the caller later modifies the original slice, it could inadvertently affect the stored history. While unlikely in current usage patterns, a defensive copy would be safer.

♻️ Defensive copy of OrderedHosts
 	if decision.Status.Result != nil {
 		current.TargetHost = decision.Status.Result.TargetHost
 		hosts := decision.Status.Result.OrderedHosts
 		if len(hosts) > 3 {
 			hosts = hosts[:3]
 		}
-		current.OrderedHosts = hosts
+		if len(hosts) > 0 {
+			current.OrderedHosts = make([]string, len(hosts))
+			copy(current.OrderedHosts, hosts)
+		}
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/lib/history_manager.go` around lines 181 - 188, The code
assigns decision.Status.Result.OrderedHosts directly to current.OrderedHosts
which shares the underlying slice and can lead to accidental mutation; update
the logic in the block that sets current.OrderedHosts (where
decision.Status.Result.OrderedHosts is read) to create a defensive copy of the
slice (copy the elements into a new slice, truncating to 3 if needed) before
assigning to current.OrderedHosts so the stored history never references the
original slice.
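The aliasing hazard behind this suggestion is easy to reproduce in isolation: assigning or re-slicing a Go slice shares the backing array, so a later mutation by the caller shows up in the stored value unless it was copied.

```go
package main

import "fmt"

// Demonstrates why the defensive copy matters: a sub-slice aliases the
// original backing array, while copy() produces independent storage.
func main() {
	orig := []string{"host-a", "host-b", "host-c"}

	shared := orig[:2] // aliases orig's backing array

	copied := make([]string, 2)
	copy(copied, orig[:2]) // independent copy

	orig[0] = "mutated" // caller modifies its slice later

	fmt.Println(shared[0]) // mutated
	fmt.Println(copied[0]) // host-a
}
```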
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/scheduling/cinder/filter_weigher_pipeline_controller.go`:
- Around line 98-103: The fire-and-forget call to c.HistoryManager.Upsert inside
the CreateDecisions branch should be made in-band: remove the goroutine so
c.HistoryManager.Upsert(...) is executed synchronously (use the existing ctx,
decision, v1alpha1.SchedulingIntentUnknown and err parameters), check its
returned error, log it via ctrl.LoggerFrom(ctx).Error and propagate/return the
error from ProcessNewDecisionFromAPI() (or surface it to the caller) instead of
swallowing it; ensure callers observing pipelineConf.Spec.CreateDecisions can
rely on the history write completing before the function returns.

In `@internal/scheduling/machines/filter_weigher_pipeline_controller.go`:
- Around line 120-124: The async goroutine that calls c.HistoryManager.Upsert
with context.Background() can run after the machine is deleted and recreate
stale history; instead perform the Upsert synchronously (remove the goroutine)
or, if you must async, capture and use the request-scoped ctx and verify the
machine still exists before calling c.HistoryManager.Upsert. Concretely, replace
the fire-and-forget go func around c.HistoryManager.Upsert(decision,
v1alpha1.SchedulingIntentUnknown, err) with a direct call or add a pre-check
using the controller's store/client (e.g. check c.MachineStore.Exists/Get or
c.Client.Get for the machine referenced by decision) and only call
HistoryManager.Upsert if the resource still exists; apply the same change to the
duplicate block around lines 213-218.

In `@internal/scheduling/manila/filter_weigher_pipeline_controller.go`:
- Around line 98-103: The current fire-and-forget goroutine around
HistoryManager.Upsert causes ProcessNewDecisionFromAPI (and callers) to believe
persistence succeeded even if the Upsert fails; make the Upsert call in-band
instead of spawning a goroutine: remove the anonymous go func and call
c.HistoryManager.Upsert(ctx, decision, v1alpha1.SchedulingIntentUnknown, err)
synchronously, check its returned error, log it via ctrl.LoggerFrom(ctx).Error,
and return or propagate that error from ProcessNewDecisionFromAPI (or merge it
into the existing err) so callers observe a failure when history persistence
fails; this change touches the CreateDecisions check and the Upsert invocation
in filter_weigher_pipeline_controller.go and ensures HistoryManager.Upsert is
not async.

In `@internal/scheduling/pods/filter_weigher_pipeline_controller.go`:
- Around line 119-123: ProcessNewPod() currently spawns a detached goroutine
that calls c.HistoryManager.Upsert, which can race with Delete() (which runs
synchronously without processMu) and re-create history for a deleted pod; change
the Upsert to run synchronously and under the controller's processMu to prevent
the race (i.e., remove the goroutine and acquire c.processMu before calling
HistoryManager.Upsert inside ProcessNewPod()), and apply the same fix to the
other occurrence that mirrors lines 224-229 so both Upsert calls use the mutex
and are not deferred to a background goroutine.

---

Outside diff comments:
In `@internal/scheduling/cinder/decisions_cleanup_test.go`:
- Around line 299-307: The test is querying the wrong Kubernetes kind
(v1alpha1.Decision) when asserting deletions; change the client.Get call to look
up the History kind instead (replace v1alpha1.Decision with v1alpha1.History and
adjust the variable name accordingly) so the test verifies the History object
was actually deleted, and update the error message to reference the History
resource name; keep the existing use of types.NamespacedName{Name:
expectedDeleted} and the nil-check logic.

In `@internal/scheduling/machines/filter_weigher_pipeline_controller.go`:
- Around line 77-97: The Decision.ResourceID is currently set to machine.Name
which can collide across namespaces; update
FilterWeigherPipelineController.ProcessNewMachine (and the other Decision
creation site around the similar block) to use a namespace-qualified key (for
example fmt.Sprintf("%s/%s", machine.Namespace, machine.Name) or k8s
types.NamespacedName.String()) so Decision.Spec.ResourceID is unique per
namespace+name; ensure both places that build v1alpha1.Decision.Spec.ResourceID
(and any analogous ResourceID assignments) are changed to the namespaced form.

In `@internal/scheduling/manila/decisions_cleanup_test.go`:
- Around line 350-356: The test is asserting deletion against v1alpha1.Decision
but should check v1alpha1.History; update the loop that looks up expectedDeleted
to fetch a v1alpha1.History (e.g., var history v1alpha1.History) and call
client.Get with types.NamespacedName{Name: expectedDeleted} into that history
variable, then assert that an error is returned (or IsNotFound) instead of
checking a Decision—ensure all references to the retrieved object in this block
use the History type and variable name.

In `@internal/scheduling/nova/decisions_cleanup_test.go`:
- Around line 352-360: The test is asserting deletion of History objects but
currently fetches v1alpha1.Decision; change the lookup to fetch a
v1alpha1.History instead. Replace the local variable (e.g., var decision
v1alpha1.Decision) with var history v1alpha1.History and call client.Get(...,
&history) when checking each tt.expectedDeleted (using the same
types.NamespacedName{Name: expectedDeleted}), and update the error message to
reference the History name so the assertion fails if the History was not
actually deleted.

In `@internal/scheduling/pods/filter_weigher_pipeline_controller.go`:
- Around line 76-96: The Decision resource uses pod.Name as
Decision.Spec.ResourceID which is too weak (collides across namespaces); update
ProcessNewPod (and the similar creation at the other location) to set ResourceID
to a unique identifier combining namespace and name (e.g., "namespace/name") or
better, use pod.UID (preferred) so HistorySpec's ResourceID is unique per pod;
change the Decision.Spec.PodRef to remain the same but replace ResourceID
assignment from pod.Name to either fmt.Sprintf("%s/%s", pod.Namespace, pod.Name)
or string(pod.UID) consistently wherever Decision.Spec.ResourceID is set.

---

Nitpick comments:
In `@internal/scheduling/lib/filter_weigher_pipeline_test.go`:
- Around line 224-228: The test currently ignores the second return value from
p.runFilters (the stepResults) after calling req, _ :=
p.runFilters(slog.Default(), request); update the test to capture and assert on
stepResults (e.g., stepResults := ... or req, stepResults := ...), verifying its
length and that entries correspond to expected filters/steps and activation
states; specifically assert that stepResults contains the expected number of
steps, that each StepResult references the correct filter identifier (by name or
ID used in the test) and that the activation/decision fields match the expected
outcomes for the given request, while keeping the existing host count assertion
for req.Hosts.

In `@internal/scheduling/lib/history_manager.go`:
- Around line 181-188: The code assigns decision.Status.Result.OrderedHosts
directly to current.OrderedHosts which shares the underlying slice and can lead
to accidental mutation; update the logic in the block that sets
current.OrderedHosts (where decision.Status.Result.OrderedHosts is read) to
create a defensive copy of the slice (copy the elements into a new slice,
truncating to 3 if needed) before assigning to current.OrderedHosts so the
stored history never references the original slice.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: c7318694-8787-4c2d-8835-ed238e44afda

📥 Commits

Reviewing files that changed from the base of the PR and between c509b50 and ea4f99e.

📒 Files selected for processing (41)
  • api/external/nova/messages.go
  • api/external/nova/messages_test.go
  • api/v1alpha1/history_types.go
  • api/v1alpha1/pipeline_types.go
  • api/v1alpha1/zz_generated.deepcopy.go
  • cmd/main.go
  • helm/bundles/cortex-cinder/values.yaml
  • helm/bundles/cortex-ironcore/values.yaml
  • helm/bundles/cortex-manila/values.yaml
  • helm/bundles/cortex-nova/values.yaml
  • helm/bundles/cortex-pods/values.yaml
  • helm/library/cortex/files/crds/cortex.cloud_histories.yaml
  • helm/library/cortex/files/crds/cortex.cloud_pipelines.yaml
  • helm/library/cortex/templates/rbac/role.yaml
  • internal/scheduling/cinder/decisions_cleanup.go
  • internal/scheduling/cinder/decisions_cleanup_test.go
  • internal/scheduling/cinder/filter_weigher_pipeline_controller.go
  • internal/scheduling/cinder/filter_weigher_pipeline_controller_test.go
  • internal/scheduling/explanation/controller.go
  • internal/scheduling/explanation/controller_test.go
  • internal/scheduling/explanation/explainer.go
  • internal/scheduling/explanation/explainer_test.go
  • internal/scheduling/explanation/templates.go
  • internal/scheduling/explanation/types.go
  • internal/scheduling/lib/filter_weigher_pipeline.go
  • internal/scheduling/lib/filter_weigher_pipeline_test.go
  • internal/scheduling/lib/history_manager.go
  • internal/scheduling/lib/history_manager_test.go
  • internal/scheduling/lib/pipeline_controller.go
  • internal/scheduling/machines/filter_weigher_pipeline_controller.go
  • internal/scheduling/machines/filter_weigher_pipeline_controller_test.go
  • internal/scheduling/manila/decisions_cleanup.go
  • internal/scheduling/manila/decisions_cleanup_test.go
  • internal/scheduling/manila/filter_weigher_pipeline_controller.go
  • internal/scheduling/manila/filter_weigher_pipeline_controller_test.go
  • internal/scheduling/nova/decisions_cleanup.go
  • internal/scheduling/nova/decisions_cleanup_test.go
  • internal/scheduling/nova/filter_weigher_pipeline_controller.go
  • internal/scheduling/nova/filter_weigher_pipeline_controller_test.go
  • internal/scheduling/pods/filter_weigher_pipeline_controller.go
  • internal/scheduling/pods/filter_weigher_pipeline_controller_test.go
💤 Files with no reviewable changes (12)
  • cmd/main.go
  • helm/bundles/cortex-manila/values.yaml
  • helm/bundles/cortex-cinder/values.yaml
  • helm/bundles/cortex-pods/values.yaml
  • helm/bundles/cortex-nova/values.yaml
  • internal/scheduling/explanation/types.go
  • internal/scheduling/explanation/explainer_test.go
  • internal/scheduling/explanation/controller.go
  • helm/bundles/cortex-ironcore/values.yaml
  • internal/scheduling/explanation/templates.go
  • internal/scheduling/explanation/controller_test.go
  • internal/scheduling/explanation/explainer.go

coderabbitai bot left a comment
Actionable comments posted: 2

♻️ Duplicate comments (5)
internal/scheduling/machines/filter_weigher_pipeline_controller.go (1)

119-126: ⚠️ Potential issue | 🟠 Major

Async upsert races with synchronous delete.

Same issue as the pods controller: the goroutine-based Upsert can execute after Delete() in the DeleteFunc handler, recreating a stale history record for a deleted machine.

🛠️ Suggested fix: make upsert synchronous
 	if pipelineConf.Spec.CreateDecisions {
-		go func() {
-			if upsertErr := c.HistoryManager.Upsert(context.Background(), decision, v1alpha1.SchedulingIntentUnknown, nil, err); upsertErr != nil {
-				ctrl.LoggerFrom(ctx).Error(upsertErr, "failed to create/update history")
-			}
-		}()
+		if upsertErr := c.HistoryManager.Upsert(ctx, decision, v1alpha1.SchedulingIntentUnknown, nil, err); upsertErr != nil {
+			ctrl.LoggerFrom(ctx).Error(upsertErr, "failed to create/update history")
+		}
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/machines/filter_weigher_pipeline_controller.go` around
lines 119 - 126, The asynchronous goroutine calling c.HistoryManager.Upsert when
pipelineConf.Spec.CreateDecisions is true can race with the DeleteFunc and
recreate stale history; remove the goroutine and perform the Upsert
synchronously (call c.HistoryManager.Upsert directly, handle/log any upsertErr)
so the Upsert completes before returning from the function (use the existing ctx
or context.Background() as done now) — update the block guarded by
pipelineConf.Spec.CreateDecisions that references c.HistoryManager.Upsert and
ensure the function returns err only after Upsert finishes.
internal/scheduling/manila/filter_weigher_pipeline_controller.go (1)

98-104: ⚠️ Potential issue | 🟠 Major

Fire-and-forget history upsert can silently fail.

The goroutine-based Upsert means ProcessNewDecisionFromAPI() can return success while the History write never completes. Since the PR moves persistence behind CreateDecisions, callers may observe no persisted record despite a successful return.

Consider making this synchronous or using a bounded context with error propagation.

🛠️ Suggested fix for synchronous upsert
 	if pipelineConf.Spec.CreateDecisions {
-		go func() {
-			if upsertErr := c.HistoryManager.Upsert(context.Background(), decision, v1alpha1.SchedulingIntentUnknown, nil, err); upsertErr != nil {
-				ctrl.LoggerFrom(ctx).Error(upsertErr, "failed to create/update history")
-			}
-		}()
+		if upsertErr := c.HistoryManager.Upsert(ctx, decision, v1alpha1.SchedulingIntentUnknown, nil, err); upsertErr != nil {
+			ctrl.LoggerFrom(ctx).Error(upsertErr, "failed to create/update history")
+			// Optionally propagate error: return fmt.Errorf("persist history: %w", upsertErr)
+		}
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/manila/filter_weigher_pipeline_controller.go` around
lines 98 - 104, ProcessNewDecisionFromAPI() currently fire-and-forgets
HistoryManager.Upsert in a goroutine when pipelineConf.Spec.CreateDecisions is
true, which can let the function return before the history write completes;
change this to perform a synchronous Upsert (or call Upsert with a bounded
context and propagate any error back to the caller) instead of launching a
goroutine: call c.HistoryManager.Upsert(ctxOrBoundedCtx, decision,
v1alpha1.SchedulingIntentUnknown, nil, err), check the returned error and
surface/log/return it appropriately so callers observe failures to persist
history when CreateDecisions is enabled.
internal/scheduling/pods/filter_weigher_pipeline_controller.go (1)

118-126: ⚠️ Potential issue | 🟠 Major

Async upsert races with synchronous delete.

The goroutine-based Upsert (lines 119-123) can execute after Delete() in the DeleteFunc handler (lines 224-229), recreating a stale history record for a deleted pod. The Delete handler doesn't acquire processMu and runs synchronously, while ProcessNewPod releases the mutex before the goroutine completes.

🛠️ Suggested fix: make upsert synchronous
 	if pipelineConf.Spec.CreateDecisions {
-		go func() {
-			if upsertErr := c.HistoryManager.Upsert(context.Background(), decision, v1alpha1.SchedulingIntentUnknown, nil, err); upsertErr != nil {
-				ctrl.LoggerFrom(ctx).Error(upsertErr, "failed to create/update history")
-			}
-		}()
+		if upsertErr := c.HistoryManager.Upsert(ctx, decision, v1alpha1.SchedulingIntentUnknown, nil, err); upsertErr != nil {
+			ctrl.LoggerFrom(ctx).Error(upsertErr, "failed to create/update history")
+		}
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/pods/filter_weigher_pipeline_controller.go` around lines
118 - 126, Async Upsert races with Delete: remove the goroutine and call
c.HistoryManager.Upsert synchronously when pipelineConf.Spec.CreateDecisions is
true (i.e., replace the go func with a direct call to c.HistoryManager.Upsert
using the current context and same arguments), so that the Upsert completes
before ProcessNewPod releases processMu; keep the existing error handling
(ctrl.LoggerFrom(ctx).Error) and return semantics unchanged; alternatively (if
synchronous call would deadlock) ensure DeleteFunc also acquires processMu
before deleting, but the preferred fix is to make the Upsert in ProcessNewPod
synchronous by invoking c.HistoryManager.Upsert directly instead of spawning a
goroutine.
internal/scheduling/lib/history_manager_test.go (1)

558-586: ⚠️ Potential issue | 🟡 Minor

Calling t.Errorf inside a goroutine is unsafe.

The test spawns a goroutine that calls t.Errorf without synchronization. If the goroutine outlives the test function (e.g., if the polling loop exits early or times out), this can cause a race or panic. The test should wait for the goroutine to complete before exiting.

🛠️ Proposed fix using a channel
 	// Mirrors the pattern used in pipeline controllers.
 	ctx := context.Background()
+	errCh := make(chan error, 1)
 	go func() {
-		if err := hm.Upsert(ctx, decision, v1alpha1.SchedulingIntentUnknown, nil, nil); err != nil {
-			t.Errorf("Upsert() returned error: %v", err)
-		}
+		errCh <- hm.Upsert(ctx, decision, v1alpha1.SchedulingIntentUnknown, nil, nil)
 	}()

 	// Poll for history creation.
 	var histories v1alpha1.HistoryList
 	deadline := time.Now().Add(2 * time.Second)
 	for {
 		if err := c.List(context.Background(), &histories); err != nil {
 			t.Fatalf("Failed to list histories: %v", err)
 		}
 		if len(histories.Items) > 0 {
 			break
 		}
 		if time.Now().After(deadline) {
 			t.Fatal("timed out waiting for async history creation")
 		}
 		time.Sleep(5 * time.Millisecond)
 	}

+	// Wait for the goroutine to complete and check for errors.
+	if err := <-errCh; err != nil {
+		t.Errorf("Upsert() returned error: %v", err)
+	}
+
 	got := histories.Items[0].Status.Current.TargetHost
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/lib/history_manager_test.go` around lines 558 - 586, The
goroutine that calls hm.Upsert should be synchronized so the test doesn't call
t.Errorf from a background goroutine; replace the unsynchronized goroutine with
one that reports its result over a channel (or use a sync.WaitGroup) and ensure
the main test goroutine waits for it before returning. Specifically, when you
call go func() { if err := hm.Upsert(...) { /* send err on errCh */ } }(),
create an errCh (or wg) before starting the goroutine, send any error into that
channel, and after the polling loop receive from errCh (or wg.Wait()) and then
call t.Errorf/t.Fatalf from the main test goroutine if an error was reported;
reference hm.Upsert and the async goroutine around the polling of
histories.Items.
internal/scheduling/cinder/filter_weigher_pipeline_controller.go (1)

98-104: ⚠️ Potential issue | 🟠 Major

Fire-and-forget history upsert can silently fail.

Same issue as in the Manila controller—the goroutine-based Upsert can fail silently while ProcessNewDecisionFromAPI() returns success.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/cinder/filter_weigher_pipeline_controller.go` around
lines 98 - 104, The fire-and-forget goroutine calling c.HistoryManager.Upsert
when pipelineConf.Spec.CreateDecisions is true can cause silent failures; change
this to perform the Upsert synchronously (remove the goroutine) and propagate or
handle its error from ProcessNewDecisionFromAPI so ProcessNewDecisionFromAPI
only returns success if Upsert succeeds—locate the block using
pipelineConf.Spec.CreateDecisions, c.HistoryManager.Upsert, and
ProcessNewDecisionFromAPI to implement the synchronous call and proper error
handling/logging.
🧹 Nitpick comments (4)
internal/scheduling/machines/filter_weigher_pipeline_controller.go (1)

213-219: Delete handler should acquire mutex for consistency.

If the upsert remains async, the DeleteFunc should acquire processMu to prevent the race with in-flight upserts.

♻️ Suggested fix
 		DeleteFunc: func(ctx context.Context, evt event.DeleteEvent, queue workqueue.TypedRateLimitingInterface[reconcile.Request]) {
+			c.processMu.Lock()
+			defer c.processMu.Unlock()
 			machine := evt.Object.(*ironcorev1alpha1.Machine)
 			if err := c.HistoryManager.Delete(ctx, v1alpha1.SchedulingDomainMachines, machine.Name); err != nil {
 				log := ctrl.LoggerFrom(ctx)
 				log.Error(err, "failed to delete history CRD for machine", "machine", machine.Name)
 			}
 		},
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/machines/filter_weigher_pipeline_controller.go` around
lines 213 - 219, The DeleteFunc handling machine deletions can race with async
upserts; acquire the controller's processMu before performing the history delete
to serialise against in-flight upserts. In the DeleteFunc (the anonymous
function registered as DeleteFunc) add a processMu.Lock() at the start and defer
processMu.Unlock() before calling c.HistoryManager.Delete(ctx,
v1alpha1.SchedulingDomainMachines, machine.Name) so the delete is performed
under the same mutex used by the upsert path (reference processMu, DeleteFunc,
and HistoryManager.Delete to locate the code).
internal/scheduling/pods/filter_weigher_pipeline_controller.go (1)

224-230: Delete handler should acquire mutex for consistency.

If the upsert remains async, the DeleteFunc should acquire processMu to prevent the race with in-flight upserts.

♻️ Suggested fix
 		DeleteFunc: func(ctx context.Context, evt event.DeleteEvent, queue workqueue.TypedRateLimitingInterface[reconcile.Request]) {
+			c.processMu.Lock()
+			defer c.processMu.Unlock()
 			pod := evt.Object.(*corev1.Pod)
 			if err := c.HistoryManager.Delete(ctx, v1alpha1.SchedulingDomainPods, pod.Name); err != nil {
 				log := ctrl.LoggerFrom(ctx)
 				log.Error(err, "failed to delete history CRD for pod", "pod", pod.Name)
 			}
 		},
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/pods/filter_weigher_pipeline_controller.go` around lines
224 - 230, The DeleteFunc handler currently calls c.HistoryManager.Delete
without synchronizing with in-flight async upserts; wrap the deletion in the
controller's processMu lock to avoid races by calling c.processMu.Lock() at the
start of DeleteFunc and defer c.processMu.Unlock() before invoking
c.HistoryManager.Delete (preserving existing logging behavior on error). Ensure
you acquire the same mutex used by the upsert path (processMu) and keep the rest
of the DeleteFunc logic unchanged.
internal/scheduling/nova/filter_weigher_pipeline_controller.go (1)

101-132: Async history upsert with extracted metadata.

The upsertHistory helper extracts availability zone and scheduling intent from the request, which is good for richer history records. However, the fire-and-forget goroutine pattern (line 102) means failures are only logged and not propagated to callers.

Additionally, the goroutine uses context.Background() (line 129), which is intentional to decouple the upsert from the request lifecycle; however, it also means the upsert has no timeout and could hang indefinitely.

♻️ Consider adding a timeout
 func (c *FilterWeigherPipelineController) upsertHistory(ctx context.Context, decision *v1alpha1.Decision, pipelineErr error) {
 	log := ctrl.LoggerFrom(ctx)

 	var az *string
 	intent := v1alpha1.SchedulingIntentUnknown

 	if decision.Spec.NovaRaw != nil {
 		var request api.ExternalSchedulerRequest
 		err := json.Unmarshal(decision.Spec.NovaRaw.Raw, &request)
 		if err != nil {
 			log.Error(err, "failed to unmarshal novaRaw for history, using defaults")
 		} else {
 			azStr := request.Spec.Data.AvailabilityZone
 			az = &azStr
 			if parsedIntent, intentErr := request.GetIntent(); intentErr != nil {
 				log.Error(intentErr, "failed to get intent from nova request, using Unknown")
 			} else {
 				intent = parsedIntent
 			}
 		}
 	}

-	if upsertErr := c.HistoryManager.Upsert(context.Background(), decision, intent, az, pipelineErr); upsertErr != nil {
+	upsertCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
+	defer cancel()
+	if upsertErr := c.HistoryManager.Upsert(upsertCtx, decision, intent, az, pipelineErr); upsertErr != nil {
 		log.Error(upsertErr, "failed to create/update history")
 	}
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/nova/filter_weigher_pipeline_controller.go` around lines
101 - 132, The current fire-and-forget call to upsertHistory from
FilterWeigherPipelineController starts a goroutine without a timeout and calls
HistoryManager.Upsert with context.Background(), risking indefinite hangs and
lost/ungoverned work; change the goroutine invocation to derive a cancellable
context with a bounded timeout (e.g., context.WithTimeout(ctx, <reasonable
duration>)), pass that context into upsertHistory (update the signature of
upsertHistory to accept ctx context.Context), ensure the goroutine defers
cancel(), and inside upsertHistory use the passed ctx for
c.HistoryManager.Upsert instead of context.Background(); keep the call
asynchronous (go ...) so callers aren’t blocked and preserve existing logging of
upsert errors (or add a metric emission if desired).
internal/scheduling/lib/history_manager.go (1)

216-219: Add conflict-retry around status updates to prevent concurrent write loss.

Multiple pipeline controllers call Upsert() concurrently via goroutines. When the same History object is updated by concurrent calls, a TOCTOU race can occur: one goroutine's Get() fetches a stale resource version, then its Status().Update() fails with a conflict error, losing the status write. The suggested refactor with RetryOnConflict is the standard Kubernetes pattern for handling such races.

Proposed refactor
+	"k8s.io/client-go/util/retry"
@@
-	if updateErr := h.Client.Status().Update(ctx, history); updateErr != nil {
+	if updateErr := retry.RetryOnConflict(retry.DefaultRetry, func() error {
+		latest := &v1alpha1.History{}
+		if err := h.Client.Get(ctx, client.ObjectKey{Name: name}, latest); err != nil {
+			return err
+		}
+		latest.Status = history.Status
+		return h.Client.Status().Update(ctx, latest)
+	}); updateErr != nil {
 		log.Error(updateErr, "failed to update history CRD status", "name", name)
 		return updateErr
 	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/lib/history_manager.go` around lines 216 - 219, The
status update in Upsert() currently calls h.Client.Status().Update(ctx, history)
directly and can fail with a conflict when concurrent goroutines modify the same
History; wrap the status update in a kubernetes retry loop using
clientretry.RetryOnConflict to Get() the latest History, apply the status
changes to that fresh object, and then call h.Client.Status().Update until it
succeeds or the retry returns an error; reference the Upsert() function, the
history variable, name/name identifier and the h.Client.Status().Update call
when making the change and ensure you preserve context and returned error
handling.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@internal/scheduling/lib/history_manager.go`:
- Around line 136-140: Spec.AvailabilityZone is only set on History creation so
it can remain stale; update the code paths that construct or persist
v1alpha1.History (both the create branch where v1alpha1.HistorySpec{...
AvailabilityZone: az} is used and the later block referenced in the comment) to
ensure the current az is applied on upsert: when you build the HistorySpec for
either create or update, assign AvailabilityZone = az, and if loading an
existing History, compare existing.Spec.AvailabilityZone to az and patch/update
the History resource when they differ so AZ changes are persisted.
- Around line 115-126: Upsert currently dereferences decision immediately (used
to build name via getName(decision.Spec.SchedulingDomain,
decision.Spec.ResourceID)); add a nil guard at the top of HistoryManager.Upsert
to validate decision is not nil and return a clear error (or log and return) if
it is nil to avoid panics in async goroutines; follow the same defensive pattern
used by generateExplanation and ensure subsequent uses of decision (e.g.,
getName, accessing Spec) only run after the nil check.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 10cae06d-47ef-40d4-b4f2-13baff932c9e

📥 Commits

Reviewing files that changed from the base of the PR and between ea4f99e and 36b9195.

📒 Files selected for processing (10)
  • api/v1alpha1/history_types.go
  • api/v1alpha1/zz_generated.deepcopy.go
  • helm/library/cortex/files/crds/cortex.cloud_histories.yaml
  • internal/scheduling/cinder/filter_weigher_pipeline_controller.go
  • internal/scheduling/lib/history_manager.go
  • internal/scheduling/lib/history_manager_test.go
  • internal/scheduling/machines/filter_weigher_pipeline_controller.go
  • internal/scheduling/manila/filter_weigher_pipeline_controller.go
  • internal/scheduling/nova/filter_weigher_pipeline_controller.go
  • internal/scheduling/pods/filter_weigher_pipeline_controller.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • api/v1alpha1/zz_generated.deepcopy.go

@coderabbitai coderabbitai bot left a comment
♻️ Duplicate comments (2)
internal/scheduling/lib/history_manager.go (2)

152-156: ⚠️ Potential issue | 🟠 Major

AvailabilityZone becomes stale after initial creation.

Spec.AvailabilityZone is only set when creating a new History CRD. If the availability zone becomes known or changes on subsequent upserts, the existing History's AvailabilityZone field is never updated.

🛠️ Proposed fix to sync AZ on existing History
 	} else if err != nil {
 		log.Error(err, "failed to get history CRD", "name", name)
 		return err
 	}
+
+	// Sync AvailabilityZone if it changed.
+	azChanged := (history.Spec.AvailabilityZone == nil) != (az == nil) ||
+		(history.Spec.AvailabilityZone != nil && az != nil && *history.Spec.AvailabilityZone != *az)
+	if azChanged {
+		history.Spec.AvailabilityZone = az
+		if updateErr := h.Client.Update(ctx, history); updateErr != nil {
+			log.Error(updateErr, "failed to update history CRD spec", "name", name)
+			return updateErr
+		}
+	}

 	successful := pipelineErr == nil && decision.Status.Result != nil && decision.Status.Result.TargetHost != nil
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/lib/history_manager.go` around lines 152 - 156, The
History CRD's Spec.AvailabilityZone is only set on creation and never updated on
upsert; modify the upsert logic in internal/scheduling/lib/history_manager.go
(the code that constructs v1alpha1.HistorySpec using
decision.Spec.SchedulingDomain, decision.Spec.ResourceID and az) so that when an
existing History is found you set/update history.Spec.AvailabilityZone = az and
persist that change (call the client's Update/Patch for the existing History
instead of skipping updates). Ensure the update path uses the same History
object fetched and only changes the AvailabilityZone field before saving.

229-235: ⚠️ Potential issue | 🟠 Major

Missing retry logic for concurrent status update conflicts.

The Upsert method is called asynchronously from multiple pipeline controllers. When two goroutines fetch the same History CRD, the first Status().Update() succeeds while the second fails with a conflict error. Without a retry loop, the second update is permanently lost.

Consider using retry.RetryOnConflict from k8s.io/client-go/util/retry to handle resourceVersion conflicts gracefully.

🛠️ Proposed fix using RetryOnConflict
+import "k8s.io/client-go/util/retry"
+
 // In Upsert method, wrap the status update:
-	if updateErr := h.Client.Status().Update(ctx, history); updateErr != nil {
-		log.Error(updateErr, "failed to update history CRD status", "name", name)
-		return updateErr
-	}
+	updateErr := retry.RetryOnConflict(retry.DefaultBackoff, func() error {
+		// Re-fetch on retry to get latest resourceVersion.
+		if err := h.Client.Get(ctx, client.ObjectKey{Name: name}, history); err != nil {
+			return err
+		}
+		// Re-apply status modifications here (archive, current, conditions)...
+		return h.Client.Status().Update(ctx, history)
+	})
+	if updateErr != nil {
+		log.Error(updateErr, "failed to update history CRD status", "name", name)
+		return updateErr
+	}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/lib/history_manager.go` around lines 229 - 235, The
Status().Update call in Upsert can fail with resourceVersion conflicts when
multiple goroutines update the same History CRD; wrap the update in a
retry.RetryOnConflict loop (importing k8s.io/client-go/util/retry) so you
re-fetch the History object (using h.Client.Get or the same retrieval path),
apply the intended status changes to the fresh copy, and call
h.Client.Status().Update until success or max retries; specifically modify the
block around h.Client.Status().Update(ctx, history) in Upsert to retry on
conflict and preserve the existing log/error handling when the update ultimately
fails.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: cb53a7e1-2c12-402d-9840-c76ec7c8cff1

📥 Commits

Reviewing files that changed from the base of the PR and between 3dd85ca and c2abd19.

📒 Files selected for processing (4)
  • internal/scheduling/cinder/decisions_cleanup.go
  • internal/scheduling/lib/history_manager.go
  • internal/scheduling/lib/history_manager_test.go
  • internal/scheduling/manila/decisions_cleanup.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • internal/scheduling/manila/decisions_cleanup.go

@coderabbitai coderabbitai bot left a comment
♻️ Duplicate comments (1)
internal/scheduling/machines/filter_weigher_pipeline_controller.go (1)

120-121: ⚠️ Potential issue | 🟠 Major

Use request-scoped context for history upsert (not context.Background()).

This call runs while processMu is held; dropping deadlines/cancellation can stall scheduling on slow client calls. Use ctx here.

💡 Suggested fix
-		if upsertErr := c.HistoryManager.Upsert(context.Background(), decision, v1alpha1.SchedulingIntentUnknown, nil, err); upsertErr != nil {
+		if upsertErr := c.HistoryManager.Upsert(ctx, decision, v1alpha1.SchedulingIntentUnknown, nil, err); upsertErr != nil {
 			ctrl.LoggerFrom(ctx).Error(upsertErr, "failed to create/update history")
 		}

Run this to verify remaining context.Background() upsert calls in scheduling controllers:

#!/bin/bash
set -euo pipefail

# Show all HistoryManager.Upsert call sites with context.
rg -nP --type=go 'HistoryManager\.Upsert\(' internal/scheduling -C2

# Specifically flag Upsert calls using context.Background().
rg -nP --type=go 'HistoryManager\.Upsert\(\s*context\.Background\(\)' internal/scheduling -C2
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@internal/scheduling/machines/filter_weigher_pipeline_controller.go` around
lines 120 - 121, The Upsert call is using context.Background() while holding
processMu which can drop cancellation/deadlines and stall scheduling; replace
the call to c.HistoryManager.Upsert(context.Background(), ...) with
c.HistoryManager.Upsert(ctx, ...) so it uses the request-scoped ctx (the same
ctx used when logging via ctrl.LoggerFrom(ctx)) — update the call site in
filter_weigher_pipeline_controller.go where c.HistoryManager.Upsert is invoked
while processMu is held to pass ctx instead of context.Background().

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 1227c0af-2dbf-4fbf-8cf5-ade897505f04

📥 Commits

Reviewing files that changed from the base of the PR and between c2abd19 and 47ea72b.

📒 Files selected for processing (5)
  • api/v1alpha1/history_types.go
  • helm/library/cortex/files/crds/cortex.cloud_histories.yaml
  • internal/scheduling/lib/history_manager.go
  • internal/scheduling/machines/filter_weigher_pipeline_controller.go
  • internal/scheduling/pods/filter_weigher_pipeline_controller.go
🚧 Files skipped from review as they are similar to previous changes (3)
  • helm/library/cortex/files/crds/cortex.cloud_histories.yaml
  • internal/scheduling/lib/history_manager.go
  • api/v1alpha1/history_types.go



Copilot AI left a comment


Pull request overview

This PR refactors the scheduling “Decision” persistence/explanation flow by introducing a new cluster-scoped History CRD and a HistoryManager that records the latest decision plus a bounded history, while keeping the Decision CRD as an in-memory/DTO concept (deprecated, no longer persisted).

Changes:

  • Add History CRD + HistoryManager to upsert/delete histories and generate simplified explanations (plus emit K8s Events).
  • Update all filter/weigher pipeline controllers and cleanup tasks to write/read History instead of persisting Decision.
  • Remove the old explanation controller/template explainer, and extend pipeline results with per-step activations (StepResults).

Reviewed changes

Copilot reviewed 41 out of 41 changed files in this pull request and generated 5 comments.

Show a summary per file
File Description
internal/scheduling/pods/filter_weigher_pipeline_controller_test.go Update pod pipeline tests to assert History creation instead of Decision persistence.
internal/scheduling/pods/filter_weigher_pipeline_controller.go Stop persisting Decisions; upsert/delete History entries keyed by domain+resourceID.
internal/scheduling/nova/filter_weigher_pipeline_controller_test.go Update nova pipeline tests to assert History creation.
internal/scheduling/nova/filter_weigher_pipeline_controller.go Upsert History asynchronously (incl. intent/AZ extraction) instead of patching Decision status.
internal/scheduling/nova/decisions_cleanup_test.go Switch cleanup tests from Decisions to Histories.
internal/scheduling/nova/decisions_cleanup.go Cleanup task now deletes orphaned Histories instead of Decisions.
internal/scheduling/manila/filter_weigher_pipeline_controller_test.go Update manila pipeline tests to assert History creation.
internal/scheduling/manila/filter_weigher_pipeline_controller.go Upsert History asynchronously instead of persisting/patching Decision.
internal/scheduling/manila/decisions_cleanup_test.go Switch cleanup tests from Decisions to Histories.
internal/scheduling/manila/decisions_cleanup.go Cleanup task now deletes orphaned Histories instead of Decisions.
internal/scheduling/machines/filter_weigher_pipeline_controller_test.go Update machine pipeline tests to assert History creation.
internal/scheduling/machines/filter_weigher_pipeline_controller.go Stop persisting Decisions; upsert/delete History entries for machines.
internal/scheduling/lib/pipeline_controller.go Add HistoryManager to the shared base controller struct.
internal/scheduling/lib/history_manager_test.go New unit tests covering HistoryManager upsert/delete and explanation generation.
internal/scheduling/lib/history_manager.go New HistoryManager implementation: upsert, delete, explanation, bounded history/hosts.
internal/scheduling/lib/filter_weigher_pipeline_test.go Adjust tests for changed runFilters signature.
internal/scheduling/lib/filter_weigher_pipeline.go Capture filter+weigher activations into DecisionResult.StepResults.
internal/scheduling/explanation/types.go Remove old explanation templating types (deleted).
internal/scheduling/explanation/templates.go Remove old explanation template rendering (deleted).
internal/scheduling/explanation/explainer_test.go Remove old explainer tests (deleted).
internal/scheduling/explanation/explainer.go Remove old explainer implementation (deleted).
internal/scheduling/explanation/controller_test.go Remove old explanation controller tests (deleted).
internal/scheduling/explanation/controller.go Remove old explanation controller (deleted).
internal/scheduling/cinder/filter_weigher_pipeline_controller_test.go Update cinder pipeline tests to assert History creation.
internal/scheduling/cinder/filter_weigher_pipeline_controller.go Upsert History asynchronously instead of persisting/patching Decision.
internal/scheduling/cinder/decisions_cleanup_test.go Switch cleanup tests from Decisions to Histories.
internal/scheduling/cinder/decisions_cleanup.go Cleanup task now deletes orphaned Histories instead of Decisions.
helm/library/cortex/templates/rbac/role.yaml Grant RBAC for histories + event creation/patch for HistoryManager event emission.
helm/library/cortex/files/crds/cortex.cloud_pipelines.yaml Document why createDecisions flag name is kept for compatibility.
helm/library/cortex/files/crds/cortex.cloud_histories.yaml Add History CRD schema and printer columns.
helm/bundles/cortex-pods/values.yaml Disable explanation-controller in pods bundle.
helm/bundles/cortex-nova/values.yaml Disable explanation-controller in nova bundle.
helm/bundles/cortex-manila/values.yaml Disable explanation-controller in manila bundle.
helm/bundles/cortex-ironcore/values.yaml Disable explanation-controller in ironcore bundle.
helm/bundles/cortex-cinder/values.yaml Disable explanation-controller in cinder bundle.
cmd/main.go Remove wiring for explanation-controller.
api/v1alpha1/zz_generated.deepcopy.go Add generated deepcopy implementations for new History-related types.
api/v1alpha1/pipeline_types.go Add compatibility note on CreateDecisions field.
api/v1alpha1/history_types.go Define new History CRD Go types (SchedulingIntent, History spec/status, etc.).
api/external/nova/messages_test.go Update intent type expectation to v1alpha1.SchedulingIntent.
api/external/nova/messages.go Switch nova request intent parsing to return v1alpha1.SchedulingIntent.


@SoWieMarkus SoWieMarkus changed the title Refactor Decision CRD Refactor Decision CRD to History CRD Mar 19, 2026

Copilot AI left a comment


Pull request overview

This PR shifts persistence of scheduling outcomes from the (now-deprecated) Decision CRD to a new cluster-scoped History CRD, and removes the standalone explanation controller in favor of generating explanations during history upsert.

Changes:

  • Introduces History CRD plus HistoryManager for upsert/delete and simplified decision explanations.
  • Updates filter/weigher pipeline controllers and cleanup tasks to write/clean up History instead of persisting Decision.
  • Extends pipeline results to include per-step activations (StepResults) and updates RBAC/Helm/CRDs accordingly.

Reviewed changes

Copilot reviewed 41 out of 41 changed files in this pull request and generated 6 comments.

Show a summary per file
File Description
internal/scheduling/pods/filter_weigher_pipeline_controller.go Writes scheduling runs to History and deletes History on pod deletion.
internal/scheduling/pods/filter_weigher_pipeline_controller_test.go Updates tests to assert History creation instead of Decision persistence.
internal/scheduling/machines/filter_weigher_pipeline_controller.go Writes scheduling runs to History and deletes History on machine deletion.
internal/scheduling/machines/filter_weigher_pipeline_controller_test.go Updates tests to assert History creation instead of Decision persistence.
internal/scheduling/nova/filter_weigher_pipeline_controller.go Upserts History (async) and enriches it with intent/AZ derived from nova request.
internal/scheduling/nova/filter_weigher_pipeline_controller_test.go Updates tests to assert History creation and adds ResourceID setup.
internal/scheduling/cinder/filter_weigher_pipeline_controller.go Upserts History asynchronously instead of persisting Decision status.
internal/scheduling/cinder/filter_weigher_pipeline_controller_test.go Updates tests to assert History creation and adds ResourceID setup.
internal/scheduling/manila/filter_weigher_pipeline_controller.go Upserts History asynchronously instead of persisting Decision status.
internal/scheduling/manila/filter_weigher_pipeline_controller_test.go Updates tests to assert History creation and adds ResourceID setup.
internal/scheduling/nova/decisions_cleanup.go Cleanup task now deletes stale History objects instead of Decisions.
internal/scheduling/nova/decisions_cleanup_test.go Updates cleanup tests to use History objects.
internal/scheduling/cinder/decisions_cleanup.go Cleanup task now deletes stale History objects instead of Decisions.
internal/scheduling/cinder/decisions_cleanup_test.go Updates cleanup tests to use History objects.
internal/scheduling/manila/decisions_cleanup.go Cleanup task now deletes stale History objects instead of Decisions.
internal/scheduling/manila/decisions_cleanup_test.go Updates cleanup tests to use History objects.
internal/scheduling/lib/pipeline_controller.go Adds HistoryManager to the base controller toolbox.
internal/scheduling/lib/history_manager.go Implements History upsert/delete and simplified explanation generation.
internal/scheduling/lib/history_manager_test.go Adds unit tests for HistoryManager (name generation, explanation, upsert/delete).
internal/scheduling/lib/filter_weigher_pipeline.go Captures filter/weigher step activations into DecisionResult.StepResults.
internal/scheduling/lib/filter_weigher_pipeline_test.go Adjusts tests for updated runFilters signature.
internal/scheduling/explanation/types.go Removes explanation template model (deleted).
internal/scheduling/explanation/templates.go Removes templated explanation renderer (deleted).
internal/scheduling/explanation/explainer.go Removes explainer implementation (deleted).
internal/scheduling/explanation/explainer_test.go Removes explainer test suite (deleted).
internal/scheduling/explanation/controller.go Removes explanation controller (deleted).
internal/scheduling/explanation/controller_test.go Removes explanation controller tests (deleted).
api/v1alpha1/history_types.go Adds API types for the new History CRD.
api/v1alpha1/zz_generated.deepcopy.go Adds autogenerated deepcopy implementations for History-related types.
api/v1alpha1/pipeline_types.go Documents that createDecisions name is retained to avoid breaking changes.
api/external/nova/messages.go Switches nova request intent type to v1alpha1.SchedulingIntent.
api/external/nova/messages_test.go Updates intent tests to the new intent type.
helm/library/cortex/files/crds/cortex.cloud_histories.yaml Adds CRD manifest for histories.cortex.cloud.
helm/library/cortex/files/crds/cortex.cloud_pipelines.yaml Notes the createDecisions compatibility behavior in CRD schema docs.
helm/library/cortex/templates/rbac/role.yaml Grants RBAC permissions for histories (+ status/finalizers) and events.
helm/bundles/cortex-pods/values.yaml Disables explanation controller in pods bundle.
helm/bundles/cortex-nova/values.yaml Disables explanation controller in nova bundle.
helm/bundles/cortex-manila/values.yaml Disables explanation controller in manila bundle.
helm/bundles/cortex-ironcore/values.yaml Disables explanation controller in ironcore bundle.
helm/bundles/cortex-cinder/values.yaml Disables explanation controller in cinder bundle.
cmd/main.go Removes wiring for the explanation controller.


Comment on lines +98 to +104
		decisionForHistory := decision.DeepCopy()
		histCtx := context.WithoutCancel(ctx)
		go func(dec *v1alpha1.Decision, ctx context.Context, processErr error) {
			if upsertErr := c.HistoryManager.Upsert(ctx, dec, v1alpha1.SchedulingIntentUnknown, nil, processErr); upsertErr != nil {
				ctrl.LoggerFrom(ctx).Error(upsertErr, "failed to create/update history")
			}
		}(decisionForHistory, histCtx, err)

Copilot AI Mar 20, 2026


History upsert is executed in a new goroutine with context.WithoutCancel(ctx). This creates an unbounded goroutine per request and also removes any deadline/cancellation, so under load or API-server slowness these goroutines can accumulate and run indefinitely. Consider running the upsert synchronously, or using a bounded worker/queue, and applying a dedicated timeout context for the history write.

Suggested change
-		decisionForHistory := decision.DeepCopy()
-		histCtx := context.WithoutCancel(ctx)
-		go func(dec *v1alpha1.Decision, ctx context.Context, processErr error) {
-			if upsertErr := c.HistoryManager.Upsert(ctx, dec, v1alpha1.SchedulingIntentUnknown, nil, processErr); upsertErr != nil {
-				ctrl.LoggerFrom(ctx).Error(upsertErr, "failed to create/update history")
-			}
-		}(decisionForHistory, histCtx, err)
+		if upsertErr := c.HistoryManager.Upsert(ctx, decision, v1alpha1.SchedulingIntentUnknown, nil, err); upsertErr != nil {
+			ctrl.LoggerFrom(ctx).Error(upsertErr, "failed to create/update history")
+		}

Comment on lines +98 to 105
		decisionForHistory := decision.DeepCopy()
		histCtx := context.WithoutCancel(ctx)
		go func(dec *v1alpha1.Decision, ctx context.Context, processErr error) {
			if upsertErr := c.HistoryManager.Upsert(ctx, dec, v1alpha1.SchedulingIntentUnknown, nil, processErr); upsertErr != nil {
				ctrl.LoggerFrom(ctx).Error(upsertErr, "failed to create/update history")
			}
		}(decisionForHistory, histCtx, err)
	}

Copilot AI Mar 20, 2026


History upsert is executed in a new goroutine with context.WithoutCancel(ctx). This creates an unbounded goroutine per request and also removes any deadline/cancellation, so under load or API-server slowness these goroutines can accumulate and run indefinitely. Consider running the upsert synchronously, or using a bounded worker/queue, and applying a dedicated timeout context for the history write.

Comment on lines 100 to 104
	if pipelineConf.Spec.CreateDecisions {
		patch := client.MergeFrom(old)
		if err := c.Status().Patch(ctx, decision, patch); err != nil {
			return err
		}
		decisionForHistory := decision.DeepCopy()
		histCtx := context.WithoutCancel(ctx)
		go c.upsertHistory(histCtx, decisionForHistory, err)
	}

Copilot AI Mar 20, 2026


History upsert is executed in a new goroutine with context.WithoutCancel(ctx). This creates an unbounded goroutine per request and also removes any deadline/cancellation, so under load or API-server slowness these goroutines can accumulate and run indefinitely. Consider running the upsert synchronously, or using a bounded worker/queue, and applying a dedicated timeout context for the history write.

Comment on lines +303 to 320
	// Build step results from filters and weighers.
	stepResults := filterStepResults
	for _, weigherName := range p.weighersOrder {
		activations, ok := stepWeights[weigherName]
		if !ok {
			continue
		}
		stepResults = append(stepResults, v1alpha1.StepResult{
			StepName:    weigherName,
			Activations: activations,
		})
	}

	result := v1alpha1.DecisionResult{
		RawInWeights:         request.GetWeights(),
		NormalizedInWeights:  inWeights,
		StepResults:          stepResults,
		AggregatedOutWeights: outWeights,

Copilot AI Mar 20, 2026


DecisionResult.StepResults now includes full per-host activation maps for every filter and weigher step. For large clusters this can significantly increase response payload size and in-memory CPU/memory usage (maps are O(hosts * steps) and largely duplicate data already present in weights). If the primary consumer is generating a human-readable explanation, consider storing a more compact representation (e.g., only the removed host names per step, or capped/aggregated activations), and/or gating this behind a debug/feature flag.

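The compact representation suggested above could be sketched as follows. The `compactStepResult` type and the before/after comparison are illustrative assumptions about how a filter step's removals might be derived, not the project's actual API:

```go
package main

// compactStepResult is an illustrative alternative to storing full per-host
// activation maps: filters record only the hosts they removed, keeping the
// payload O(removed hosts) per step instead of O(hosts * steps).
type compactStepResult struct {
	StepName     string
	RemovedHosts []string // set by filters; empty for weighers
	MinWeight    float64  // aggregate weigher stats instead of per-host maps
	MaxWeight    float64
}

// compactFromActivations derives the removed-host list by comparing the
// host sets seen before and after a filter step.
func compactFromActivations(stepName string, before, after map[string]float64) compactStepResult {
	removed := []string{}
	for host := range before {
		if _, ok := after[host]; !ok {
			removed = append(removed, host)
		}
	}
	return compactStepResult{StepName: stepName, RemovedHosts: removed}
}
```

Whether this is enough depends on the explanation output: if the human-readable explanation only names which step filtered a host out, removed hosts plus aggregate weigher stats cover it without shipping every activation value.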
Comment on lines +37 to +39
func getName(schedulingDomain v1alpha1.SchedulingDomain, resourceID string) string {
	return fmt.Sprintf("%s-%s", schedulingDomain, resourceID)
}

Copilot AI Mar 20, 2026


getName() (used as History.metadata.name) is constructed by concatenating schedulingDomain and resourceID without enforcing Kubernetes naming constraints (DNS-1123 + max 253 chars). For long IDs (e.g. pod.Namespace + "--" + pod.Name) this can exceed 253 chars and make History creation fail at runtime. Consider sanitizing and length-capping the generated name (e.g., truncate and append a stable hash of the full resourceID), while keeping the full spec.resourceID in the CRD for display/debugging.

Comment on lines +149 to +160
	if apierrors.IsNotFound(err) {
		// Create new History CRD.
		history = &v1alpha1.History{
			ObjectMeta: metav1.ObjectMeta{
				Name: name,
			},
			Spec: v1alpha1.HistorySpec{
				SchedulingDomain: decision.Spec.SchedulingDomain,
				ResourceID:       decision.Spec.ResourceID,
				AvailabilityZone: az,
			},
		}

Copilot AI Mar 20, 2026

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

HistorySpec.AvailabilityZone is only set when the History CRD is first created. If the first upsert runs with az == nil (e.g., novaRaw unmarshal fails) and later upserts can determine the AZ, the History spec will remain permanently unset/outdated. Consider updating history.Spec.AvailabilityZone on subsequent upserts when a non-nil AZ is provided and the stored value differs (this requires a regular Update/Patch on the main resource, not the status subresource).

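The backfill check described above boils down to a small decision that could run on every upsert before the status update. This is a pure-logic sketch (the function name and signature are hypothetical); when it reports `update`, the caller would issue a regular `Update`/`Patch` on the History object itself, not on the status subresource:

```go
package main

// backfillAZ reports whether the stored availability zone in the History
// spec should be replaced: only when the pipeline resolved a non-nil AZ
// that differs from (or fills in) the stored value. A nil resolved AZ
// never clears a previously learned value.
func backfillAZ(stored, resolved *string) (newValue *string, update bool) {
	if resolved == nil {
		return stored, false // nothing new learned, keep whatever is stored
	}
	if stored != nil && *stored == *resolved {
		return stored, false // already up to date
	}
	return resolved, true // caller must Update/Patch the main resource
}
```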
@github-actions

Test Coverage Report

Test Coverage 📊: 67.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/monitor.go:21:							NewMonitor					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/monitor.go:39:							Describe					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/monitor.go:45:							Collect						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_api.go:37:			NewCinderAPI					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_api.go:45:			Init						81.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_api.go:68:			GetAllStoragePools				73.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_sync.go:27:			Init						83.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_sync.go:40:			Sync						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_sync.go:51:			SyncAllStoragePools				53.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_types.go:46:			TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_types.go:49:			Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_types.go:52:			UnmarshalJSON					93.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/cinder/cinder_types.go:131:			MarshalJSON					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/controller.go:60:				Reconcile					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/controller.go:239:				SetupWithManager				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_api.go:35:			NewIdentityAPI					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_api.go:39:			Init						80.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_api.go:59:			GetAllDomains					66.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_api.go:83:			GetAllProjects					72.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_sync.go:26:			Init						85.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_sync.go:41:			Sync						83.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_sync.go:54:			SyncDomains					53.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_sync.go:74:			SyncProjects					53.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_types.go:16:		TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_types.go:19:		Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_types.go:47:		TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/identity/identity_types.go:50:		Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_api.go:45:			NewLimesAPI					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_api.go:50:			Init						81.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_api.go:74:			GetAllCommitments				90.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_api.go:124:			getCommitments					86.4%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_sync.go:28:			Init						83.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_sync.go:41:			Sync						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_sync.go:52:			SyncCommitments					63.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_types.go:69:			TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/limes/limes_types.go:72:			Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_api.go:41:			NewManilaAPI					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_api.go:46:			Init						81.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_api.go:69:			GetAllStoragePools				75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_sync.go:28:			Init						83.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_sync.go:41:			Sync						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_sync.go:52:			SyncAllStoragePools				53.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_types.go:47:			UnmarshalJSON					87.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_types.go:137:			MarshalJSON					72.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_types.go:234:			TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/manila/manila_types.go:237:			Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:54:				NewNovaAPI					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:59:				Init						81.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:85:				GetAllServers					69.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:145:				GetDeletedServers				69.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:200:				GetAllHypervisors				69.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:254:				GetAllFlavors					68.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:291:				GetAllMigrations				69.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_api.go:344:				GetAllAggregates				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:29:				Init						90.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:53:				Sync						50.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:75:				SyncAllServers					57.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:98:				SyncDeletedServers				64.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:128:			SyncAllHypervisors				57.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:152:			SyncAllFlavors					57.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:174:			SyncAllMigrations				57.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_sync.go:196:			SyncAllAggregates				57.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:36:			UnmarshalJSON					77.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:59:			MarshalJSON					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:79:			TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:82:			Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:119:			UnmarshalJSON					77.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:142:			MarshalJSON					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:162:			TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:165:			Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:198:			UnmarshalJSON					80.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:234:			MarshalJSON					85.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:266:			TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:269:			Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:289:			UnmarshalJSON					54.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:312:			MarshalJSON					55.6%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:333:			TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:336:			Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:360:			TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:363:			Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:384:			TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/nova/nova_types.go:387:			Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:48:		NewPlacementAPI					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:53:		Init						81.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:77:		GetAllResourceProviders				66.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:105:		GetAllTraits					90.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:155:		getTraits					90.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:179:		GetAllInventoryUsages				71.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_api.go:229:		getInventoryUsages				77.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_sync.go:28:		Init						62.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_sync.go:46:		Sync						71.4%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_sync.go:62:		SyncResourceProviders				53.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_sync.go:83:		SyncTraits					57.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_sync.go:112:		SyncInventoryUsages				57.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_types.go:17:		TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_types.go:20:		Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_types.go:31:		TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_types.go:34:		Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_types.go:74:		TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/placement/placement_types.go:77:		Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/openstack/supported_syncers.go:22:			getSupportedSyncer				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/controller.go:51:				Reconcile					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/controller.go:201:				SetupWithManager				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/sync.go:32:					newTypedSyncer					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/sync.go:100:					fetch						79.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/sync.go:205:					getSyncWindowStart				81.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/sync.go:245:					sync						68.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/sync.go:295:					Sync						70.6%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/triggers.go:7:				TriggerMetricAliasSynced			0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/triggers.go:12:				TriggerMetricTypeSynced				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:42:					TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:43:					Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:44:					GetName						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:45:					GetTimestamp					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:46:					GetValue					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:47:					With						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:90:					TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:91:					Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:92:					GetName						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:93:					GetTimestamp					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:94:					GetValue					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:95:					With						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:145:				TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:146:				Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:147:				GetName						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:148:				GetTimestamp					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:149:				GetValue					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:150:				With						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:170:				TableName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:171:				Indexes						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:172:				GetName						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:173:				GetTimestamp					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:174:				GetValue					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:175:				With						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:211:				TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:212:				Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:213:				GetName						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:214:				GetTimestamp					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:215:				GetValue					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:216:				With						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:242:				TableName					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:243:				Indexes						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:244:				GetName						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:245:				GetTimestamp					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:246:				GetValue					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:247:				With						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:282:				TableName					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:285:				Indexes						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:286:				GetName						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:287:				GetTimestamp					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:288:				GetValue					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/datasources/plugins/prometheus/types.go:289:				With						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:51:								FromSecretRef					6.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:133:								SelectTimed					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:142:								CreateTable					70.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:159:								AddTable					66.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:170:								TableExists					58.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:199:								ReplaceAll					62.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/db.go:226:								BulkInsert					86.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/monitor.go:21:								newMonitor					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/monitor.go:63:								Describe					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/monitor.go:73:								Collect						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/testing/containers/postgres.go:21:					GetPort						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/testing/containers/postgres.go:25:					Init						70.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/testing/containers/postgres.go:69:					Close						50.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/db/testing/env.go:24:							SetupDBEnv					59.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/controller.go:46:							Reconcile					54.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/controller.go:237:						SetupWithManager				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/monitor.go:26:							NewMonitor					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/monitor.go:44:							Describe					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/monitor.go:50:							Collect						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/monitor.go:69:							Init						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/monitor.go:78:							monitorFeatureExtractor				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/monitor.go:97:							Extract						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/base.go:28:						Init						87.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/base.go:45:						ExtractSQL					83.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/base.go:58:						Extracted					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/flavor_groups.go:67:				Extract						77.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/host_az.go:31:					Extract						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/host_capabilities.go:35:				Extract						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/host_details.go:59:				Extract						87.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/host_pinned_projects.go:45:			Extract						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/host_utilization.go:47:				Extract						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/libvirt_domain_cpu_steal_pct.go:35:		Extract						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vm_host_residency.go:53:				Extract						85.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vm_life_span.go:52:				extractHistogramBuckets				89.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vm_life_span.go:97:				Extract						88.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vrops_hostsystem_contention_long_term.go:39:	Extract						82.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vrops_hostsystem_contention_short_term.go:39:	Extract						82.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vrops_hostsystem_resolver.go:33:			Extract						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/compute/vrops_project_noisiness.go:33:			Extract						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/plugins/storage/storage_pool_cpu_usage.go:35:			Extract						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:42:							Reconcile					77.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:95:							findDependentKnowledge				96.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:142:							triggerKnowledgeReconciliation			100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:175:							enqueueKnowledgeReconciliation			81.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:201:							getResourceType					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:213:							mapDatasourceToKnowledge			100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:234:							mapKnowledgeToKnowledge				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/extractor/trigger.go:255:							SetupWithManager				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:53:							Reconcile					42.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:107:							InitAllKPIs					83.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:147:							getJointDB					27.8%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:185:							handleKPIChange					52.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:310:							handleDatasourceChange				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:335:							handleDatasourceCreated				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:345:							handleDatasourceUpdated				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:363:							handleDatasourceDeleted				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:375:							handleKnowledgeChange				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:400:							handleKnowledgeCreated				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:410:							handleKnowledgeUpdated				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:428:							handleKnowledgeDeleted				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/controller.go:438:							SetupWithManager				0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/logger.go:21:								Describe					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/logger.go:26:								Collect						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/logger.go:32:								Init						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/logger.go:37:								GetName						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/base.go:24:							Init						80.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/flavor_running_vms.go:32:				GetName						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/flavor_running_vms.go:36:				Init						75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/flavor_running_vms.go:54:				Describe					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/flavor_running_vms.go:58:				Collect						71.4%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_contention.go:28:					GetName						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_contention.go:32:					Init						80.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_contention.go:49:					Describe					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_contention.go:54:					Collect						82.6%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_running_vms.go:42:				GetName						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_running_vms.go:46:				Init						75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_running_vms.go:69:				Describe					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/host_running_vms.go:73:				Collect						66.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/project_noisiness.go:27:				GetName						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/project_noisiness.go:31:				Init						75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/project_noisiness.go:43:				Describe					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/project_noisiness.go:47:				Collect						75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go:24:				getBuildingBlock				75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go:42:				GetName						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go:46:				Init						87.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go:138:				Describe					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go:146:				Collect						90.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go:223:				exportCapacityMetricKVM				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_vmware.go:29:			GetName						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_vmware.go:33:			Init						80.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_vmware.go:75:			Describe					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_vmware.go:80:			Collect						67.6%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/resource_capacity_vmware.go:153:			exportCapacityMetricVMware			92.3%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_commitments.go:30:					GetName						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_commitments.go:34:					Init						85.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_commitments.go:81:					Describe					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_commitments.go:89:					convertLimesMemory				100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_commitments.go:106:					Collect						89.5%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_life_span.go:29:					GetName						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_life_span.go:33:					Init						75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_life_span.go:46:					Describe					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_life_span.go:50:					Collect						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_life_span.go:57:					collectVMBuckets				71.4%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_migration_statistics.go:28:				GetName						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_migration_statistics.go:32:				Init						75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_migration_statistics.go:45:				Describe					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/compute/vm_migration_statistics.go:49:				Collect						69.2%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/datasource_state.go:32:				GetName						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/datasource_state.go:35:				Init						75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/datasource_state.go:49:				Describe					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/datasource_state.go:52:				Collect						92.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/decision_state.go:32:				GetName						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/decision_state.go:35:				Init						75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/decision_state.go:49:				Describe					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/decision_state.go:52:				Collect						94.1%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/knowledge_state.go:32:				GetName						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/knowledge_state.go:35:				Init						75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/knowledge_state.go:49:				Describe					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/knowledge_state.go:52:				Collect						85.7%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/kpi_state.go:32:					GetName						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/kpi_state.go:35:					Init						75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/kpi_state.go:49:					Describe					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/kpi_state.go:52:					Collect						92.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/pipeline_state.go:32:				GetName						100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/pipeline_state.go:35:				Init						75.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/pipeline_state.go:49:				Describe					100.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/deployment/pipeline_state.go:52:				Collect						92.9%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/storage/storage_pool_cpu.go:28:				GetName						0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/storage/storage_pool_cpu.go:32:				Init						80.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/storage/storage_pool_cpu.go:49:				Describe					0.0%
github.com/cobaltcore-dev/cortex/internal/knowledge/kpis/plugins/storage/storage_pool_cpu.go:54:				Collect						82.6%
github.com/cobaltcore-dev/cortex/internal/knowledge/math/histogram.go:7:							Histogram					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/decisions_cleanup.go:30:						DecisionsCleanup				76.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/e2e_checks.go:21:							RunChecks					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/e2e_checks.go:26:							checkCinderSchedulerReturnsValidHosts		0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/external_scheduler_api.go:42:					NewAPI						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/external_scheduler_api.go:50:					Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/external_scheduler_api.go:57:					canRunScheduler					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/external_scheduler_api.go:80:					inferPipelineName				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/external_scheduler_api.go:90:					CinderExternalScheduler				68.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/filter_weigher_pipeline_controller.go:48:				PipelineType					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/filter_weigher_pipeline_controller.go:53:				Reconcile					83.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/filter_weigher_pipeline_controller.go:73:				ProcessNewDecisionFromAPI			93.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/filter_weigher_pipeline_controller.go:109:				process						80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/filter_weigher_pipeline_controller.go:139:				InitPipeline					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/filter_weigher_pipeline_controller.go:152:				SetupWithManager				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/cinder/pipeline_webhook.go:15:						NewPipelineWebhook				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/external/nova.go:29:							NewNovaReader					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/external/nova.go:34:							GetAllServers					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/external/nova.go:44:							GetAllFlavors					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/external/nova.go:54:							GetAllHypervisors				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/external/nova.go:64:							GetAllMigrations				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/external/nova.go:74:							GetAllAggregates				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/external/nova.go:85:							GetServerByID					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/external/nova.go:99:							GetFlavorByName					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/external/postgres.go:33:							NewPostgresReader				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/external/postgres.go:48:							DB						0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/external/postgres.go:67:							Select						0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/activation.go:12:							NoEffect					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/activation.go:15:							Norm						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/activation.go:21:							Apply						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/api_monitor.go:22:							NewSchedulerMonitor				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/api_monitor.go:32:							Describe					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/api_monitor.go:36:							Collect						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/api_monitor.go:50:							Callback					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/api_monitor.go:56:							Respond						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector.go:53:							Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector.go:64:							Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector.go:75:							CheckKnowledges					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:26:						NewDetectorPipelineMonitor			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:46:						SubPipeline					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:52:						Describe					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:58:						Collect						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:76:						monitorDetector					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:99:						Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:107:						Validate					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_monitor.go:112:						Run						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_pipeline.go:33:						Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_pipeline.go:63:						Run						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_pipeline.go:98:						Combine						97.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/detector_step_opts.go:15:						Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter.go:31:								Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_monitor.go:23:							monitorFilter					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_monitor.go:36:							Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_monitor.go:41:							Validate					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_monitor.go:46:							Run						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_validation.go:22:						Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_validation.go:28:						Validate					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_validation.go:33:						validateFilter					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_validation.go:38:						Run						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:45:						InitNewFilterWeigherPipeline			86.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:138:					runFilters					75.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:170:					runWeighers					81.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:210:					normalizeInputWeights				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:219:					applyWeights					80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:255:					sortHostsByWeights				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline.go:265:					Run						96.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_monitor.go:36:					NewPipelineMonitor				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_monitor.go:90:					SubPipeline					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_monitor.go:97:					observePipelineResult				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_monitor.go:118:				Describe					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_monitor.go:130:				Collect						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step.go:48:					Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step.go:63:					Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step.go:75:					IncludeAllHostsFromRequest			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step.go:85:					PrepareStats					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step_monitor.go:42:				monitorStep					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step_monitor.go:65:				RunWrapped					48.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step_monitor.go:215:				impact						94.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/filter_weigher_pipeline_step_opts.go:15:				Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/history_manager.go:30:							joinHostsCapped					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/history_manager.go:37:							getName						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/history_manager.go:44:							generateExplanation				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/history_manager.go:130:						Upsert						70.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/history_manager.go:280:						Delete						80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:38:						InitAllPipelines				93.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:63:						handlePipelineChange				77.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:176:						HandlePipelineCreated				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:190:						HandlePipelineUpdated				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:203:						HandlePipelineDeleted				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:215:						handleKnowledgeChange				71.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:248:						HandleKnowledgeCreated				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:261:						HandleKnowledgeUpdated				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_controller.go:283:						HandleKnowledgeDeleted				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_webhook.go:38:						ValidateCreate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_webhook.go:47:						ValidateUpdate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_webhook.go:56:						ValidateDelete					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_webhook.go:65:						validatePipeline				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/pipeline_webhook.go:149:						SetupWebhookWithManager				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/scaling.go:7:								clamp						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/scaling.go:22:								MinMaxScale					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher.go:35:								Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher.go:40:								Validate					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher.go:45:								CheckKnowledges					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_monitor.go:23:							monitorWeigher					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_monitor.go:36:							Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_monitor.go:41:							Validate					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_monitor.go:46:							Run						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_validation.go:22:						Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_validation.go:28:						Validate					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_validation.go:33:						validateWeigher					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/lib/weigher_validation.go:38:						Run						81.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:52:				PipelineType					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:56:				Reconcile					83.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:76:				ProcessNewMachine				92.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:126:			process						70.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:177:			InitPipeline					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:190:			handleMachine					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/filter_weigher_pipeline_controller.go:222:			SetupWithManager				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/pipeline_webhook.go:15:						NewPipelineWebhook				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/plugins/filters/filter_noop.go:21:				Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/plugins/filters/filter_noop.go:25:				Validate					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/plugins/filters/filter_noop.go:34:				Run						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/machines/plugins/filters/filter_noop.go:44:				init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/decisions_cleanup.go:32:						DecisionsCleanup				77.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/e2e_checks.go:34:							RunChecks					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/e2e_checks.go:39:							checkManilaSchedulerReturnsValidHosts		0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/external_scheduler_api.go:42:					NewAPI						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/external_scheduler_api.go:50:					Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/external_scheduler_api.go:57:					canRunScheduler					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/external_scheduler_api.go:80:					inferPipelineName				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/external_scheduler_api.go:90:					ManilaExternalScheduler				68.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/filter_weigher_pipeline_controller.go:48:				PipelineType					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/filter_weigher_pipeline_controller.go:53:				Reconcile					83.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/filter_weigher_pipeline_controller.go:73:				ProcessNewDecisionFromAPI			93.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/filter_weigher_pipeline_controller.go:109:				process						80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/filter_weigher_pipeline_controller.go:139:				InitPipeline					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/filter_weigher_pipeline_controller.go:152:				SetupWithManager				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/pipeline_webhook.go:15:						NewPipelineWebhook				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/plugins/weighers/netapp_cpu_usage_balancing.go:35:			Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/plugins/weighers/netapp_cpu_usage_balancing.go:53:			Init						60.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/plugins/weighers/netapp_cpu_usage_balancing.go:64:			Run						88.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/manila/plugins/weighers/netapp_cpu_usage_balancing.go:110:			init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/candidate_gatherer.go:29:						MutateWithAllCandidates				94.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/decisions_cleanup.go:30:						DecisionsCleanup				79.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/deschedulings_cleanup.go:24:						Start						82.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/deschedulings_cleanup.go:63:						Reconcile					70.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/deschedulings_cleanup.go:95:						SetupWithManager				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/deschedulings_executor.go:45:						Reconcile					68.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/deschedulings_executor.go:256:					SetupWithManager				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/detector_cycle_breaker.go:17:						Filter						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/detector_pipeline_controller.go:42:					PipelineType					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/detector_pipeline_controller.go:47:					InitPipeline					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/detector_pipeline_controller.go:65:					CreateDeschedulingsPeriodically			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/detector_pipeline_controller.go:126:					Reconcile					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/detector_pipeline_controller.go:131:					SetupWithManager				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/e2e_checks.go:61:							getHypervisors					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/e2e_checks.go:104:							prepare						0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/e2e_checks.go:257:							randomRequest					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/e2e_checks.go:330:							checkNovaSchedulerReturnsValidHosts		0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/e2e_checks.go:360:							RunChecks					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/external_scheduler_api.go:50:						NewAPI						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/external_scheduler_api.go:59:						Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/external_scheduler_api.go:66:						canRunScheduler					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/external_scheduler_api.go:89:						inferPipelineName				96.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/external_scheduler_api.go:149:					limitHostsToRequest				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/external_scheduler_api.go:171:					NovaExternalScheduler				68.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:51:				PipelineType					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:56:				Reconcile					91.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:76:				ProcessNewDecisionFromAPI			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:108:				upsertHistory					80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:135:				process						80.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:187:				InitPipeline					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/filter_weigher_pipeline_controller.go:200:				SetupWithManager				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/hypervisor_overcommit_controller.go:48:				Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/hypervisor_overcommit_controller.go:82:				Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/hypervisor_overcommit_controller.go:111:				Reconcile					93.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/hypervisor_overcommit_controller.go:178:				handleRemoteHypervisor				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/hypervisor_overcommit_controller.go:207:				predicateRemoteHypervisor			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/hypervisor_overcommit_controller.go:220:				SetupWithManager				23.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/nova_client.go:57:							NewNovaClient					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/nova_client.go:61:							Init						0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/nova_client.go:99:							Get						75.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/nova_client.go:108:							LiveMigrate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/nova_client.go:119:							GetServerMigrations				74.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/pipeline_webhook.go:16:						NewPipelineWebhook				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/detectors/avoid_high_steal_pct.go:26:				Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/detectors/avoid_high_steal_pct.go:39:				Init						80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/detectors/avoid_high_steal_pct.go:49:				Run						86.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/detectors/avoid_high_steal_pct.go:85:				init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_allowed_projects.go:22:			Run						87.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_allowed_projects.go:54:			init						50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_capabilities.go:25:				hvToNovaCapabilities				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_capabilities.go:48:				Run						81.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_capabilities.go:119:				init						50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_correct_az.go:21:				Run						91.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_correct_az.go:65:				init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_exclude_hosts.go:28:				Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_exclude_hosts.go:30:				Run						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_exclude_hosts.go:43:				init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_external_customer.go:23:			Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_external_customer.go:36:			Run						94.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_external_customer.go:86:			init						50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_accelerators.go:21:			Run						91.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_accelerators.go:55:			init						50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_enough_capacity.go:24:			Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_enough_capacity.go:44:			Run						76.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_enough_capacity.go:312:			init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_requested_traits.go:24:			Run						95.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_has_requested_traits.go:89:			init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_host_instructions.go:21:			Run						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_host_instructions.go:44:			init						50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_instance_group_affinity.go:19:			Run						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_instance_group_affinity.go:54:			init						50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_instance_group_anti_affinity.go:22:		Run						88.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_instance_group_anti_affinity.go:99:		init						50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_live_migratable.go:22:				checkHasSufficientFeatures			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_live_migratable.go:51:				Run						94.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_live_migratable.go:112:			init						50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_requested_destination.go:30:			Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_requested_destination.go:43:			Run						98.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_requested_destination.go:135:			init						50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_status_conditions.go:23:			Run						93.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/filters/filter_status_conditions.go:88:			init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/vm_detection.go:17:						GetResource					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/vm_detection.go:18:						GetReason					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/vm_detection.go:19:						GetHost						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/vm_detection.go:20:						WithReason					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_binpack.go:29:					Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_binpack.go:72:					Run						90.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_binpack.go:141:					calcVMResources					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_binpack.go:154:					init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_failover_evacuation.go:26:			Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_failover_evacuation.go:30:			GetFailoverHostWeight				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_failover_evacuation.go:37:			GetDefaultHostWeight				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_failover_evacuation.go:54:			Run						93.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_failover_evacuation.go:116:			init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_instance_group_soft_affinity.go:29:		Run						94.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_instance_group_soft_affinity.go:85:		init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_prefer_smaller_hosts.go:29:			Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_prefer_smaller_hosts.go:60:			Run						92.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/kvm_prefer_smaller_hosts.go:157:			init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_anti_affinity_noisy_projects.go:29:		Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_anti_affinity_noisy_projects.go:44:		Init						80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_anti_affinity_noisy_projects.go:55:		Run						81.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_anti_affinity_noisy_projects.go:93:		init						50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_long_term_contended_hosts.go:35:	Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_long_term_contended_hosts.go:53:	Init						80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_long_term_contended_hosts.go:64:	Run						88.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_long_term_contended_hosts.go:111:	init						50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_short_term_contended_hosts.go:35:	Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_short_term_contended_hosts.go:53:	Init						80.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_short_term_contended_hosts.go:64:	Run						88.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_avoid_short_term_contended_hosts.go:111:	init						50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:32:				Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:75:				Init						0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:88:				Run						80.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:165:				calcHostCapacity				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:177:				calcHostAllocation				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:187:				calcVMResources					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/nova/plugins/weighers/vmware_binpack.go:200:				init						50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:51:				PipelineType					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:55:				Reconcile					83.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:75:				ProcessNewPod					92.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:125:				process						71.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:188:				InitPipeline					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:201:				handlePod					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/filter_weigher_pipeline_controller.go:233:				SetupWithManager				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/helpers/resources.go:12:						GetPodResourceRequests				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/helpers/resources.go:31:						AddResourcesInto				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/helpers/resources.go:41:						MaxResourcesInto				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/pipeline_webhook.go:15:						NewPipelineWebhook				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:22:				Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:26:				Validate					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:30:				Run						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:43:				matchesNodeAffinity				88.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:62:				matchesNodeSelectorTerm				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:71:				matchesNodeSelectorRequirement			90.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_affinity.go:124:				init						50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_available.go:21:				Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_available.go:25:				Validate					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_available.go:29:				Run						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_available.go:42:				isNodeHealthy					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_available.go:70:				isNodeSchedulable				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_available.go:74:				init						50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_capacity.go:22:				Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_capacity.go:26:				Validate					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_capacity.go:30:				Run						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_capacity.go:45:				hasCapacityForPod				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_node_capacity.go:60:				init						50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_noop.go:21:					Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_noop.go:25:					Validate					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_noop.go:34:					Run						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_noop.go:44:					init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_taint.go:21:					Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_taint.go:25:					Validate					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_taint.go:29:					Run						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_taint.go:42:					canScheduleOnNode				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_taint.go:53:					hasToleration					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/filters/filter_taint.go:67:					init						50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/weighers/binpack.go:21:					Validate					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/weighers/binpack.go:34:					Run						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/weighers/binpack.go:48:					calculateBinpackScore				85.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/pods/plugins/weighers/binpack.go:83:					init						50.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api.go:22:					NewAPI						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api.go:26:					NewAPIWithConfig				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api.go:33:					Init						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_change_commitments.go:28:			sortedKeys					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_change_commitments.go:45:			HandleChangeCommitments				76.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_change_commitments.go:103:			processCommitmentChanges			79.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_change_commitments.go:288:			watchReservationsUntilReady			63.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_info.go:22:					HandleInfo					72.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_info.go:58:					buildServiceInfo				22.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/api_report_capacity.go:19:			HandleReportCapacity				78.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/capacity.go:24:					NewCapacityCalculator				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/capacity.go:29:					CalculateCapacity				91.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/capacity.go:60:					calculateAZCapacity				71.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/capacity.go:86:					getAvailabilityZones				55.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/client.go:46:					NewCommitmentsClient				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/client.go:50:					Init						0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/client.go:109:					ListProjects					90.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/client.go:128:					ListCommitmentsByID				79.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/client.go:172:					listCommitments					90.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/config.go:17:					DefaultConfig					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/messages.go:135:					UnmarshalJSON					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/messages.go:158:					MarshalJSON					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/reservation_manager.go:25:			NewReservationManager				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/reservation_manager.go:46:			ApplyCommitmentState				89.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/reservation_manager.go:201:			syncReservationMetadata				93.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/reservation_manager.go:248:			newReservation					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/state.go:23:					getFlavorGroupNameFromResource			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/state.go:51:					FromCommitment					75.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/state.go:95:					FromChangeCommitmentTargetState			93.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/state.go:148:					FromReservations				86.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/syncer.go:40:					NewSyncer					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/syncer.go:47:					Init						66.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/syncer.go:54:					getCommitmentStates				62.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/syncer.go:124:					SyncReservations				60.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/utils.go:13:					GetMaxSlotIndex					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/utils.go:30:					GetNextSlotIndex				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/commitments/utils.go:36:					extractCommitmentUUID				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/context.go:20:						WithGlobalRequestID				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/context.go:26:						WithRequestID					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/context.go:32:						GlobalRequestIDFromContext			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/context.go:43:						RequestIDFromContext				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/controller.go:82:					Reconcile					53.5%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/controller.go:353:					reconcileAllocations				17.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/controller.go:432:					Init						0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/controller.go:446:					listServersByProjectID				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/controller.go:511:					SetupWithManager				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/monitor.go:32:					NewControllerMonitor				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/monitor.go:47:					Describe					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/controller/monitor.go:53:					Collect						100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/config.go:56:					DefaultConfig					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/context.go:16:					LoggerFromContext				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/controller.go:45:					NewFailoverReservationController		100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/controller.go:66:					Reconcile					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/controller.go:103:					reconcileValidateAndAcknowledge			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/controller.go:179:					validateReservation				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/controller.go:243:					ReconcilePeriodic				73.1%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/controller.go:349:					reconcileRemoveInvalidVMFromReservations	96.9%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/controller.go:408:					reconcileRemoveNoneligibleVMFromReservations	97.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/controller.go:477:					reconcileRemoveEmptyReservations		70.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/controller.go:501:					selectVMsToProcess				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/controller.go:540:					sortVMsByMemory					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/controller.go:554:					reconcileCreateAndAssignReservations		80.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/controller.go:660:					calculateVMsMissingFailover			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/controller.go:711:					getRequiredFailoverCount			81.8%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/controller.go:731:					patchReservationStatus				66.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/controller.go:758:					SetupWithManager				0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/controller.go:774:					Start						0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/helpers.go:18:					getFailoverAllocations				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/helpers.go:26:					filterFailoverReservations			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/helpers.go:37:					countReservationsForVM				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/helpers.go:50:					addVMToReservation				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/helpers.go:76:					ValidateFailoverReservationResources		0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/helpers.go:93:					newFailoverReservation				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/reservation_eligibility.go:28:			reservationKey					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/reservation_eligibility.go:34:			newDependencyGraph				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/reservation_eligibility.go:94:			ensureVMInMaps					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/reservation_eligibility.go:103:			ensureResInMaps					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/reservation_eligibility.go:111:			checkAllVMConstraints				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/reservation_eligibility.go:170:			isVMEligibleForReservation			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/reservation_eligibility.go:188:			IsVMEligibleForReservation			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/reservation_eligibility.go:216:			doesVMFitInReservation				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/reservation_eligibility.go:245:			FindEligibleReservations			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/reservation_scheduling.go:33:			queryHypervisorsFromScheduler			86.2%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/reservation_scheduling.go:120:			tryReuseExistingReservation			83.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/reservation_scheduling.go:178:			validateVMViaSchedulerEvacuation		0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/reservation_scheduling.go:253:			scheduleAndBuildNewFailoverReservation		82.4%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/vm_source.go:58:					NewDBVMSource					100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/vm_source.go:63:					ListVMs						78.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/vm_source.go:146:					parseExtraSpecs					28.6%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/vm_source.go:161:					truncateString					0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/vm_source.go:170:					GetVM						86.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/vm_source.go:220:					ListVMsOnHypervisors				27.3%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/vm_source.go:264:					buildVMsFromHypervisors				66.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/vm_source.go:333:					filterVMsOnKnownHypervisors			100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/failover/vm_source.go:402:					warnUnknownVMsOnHypervisors			0.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/flavor_groups.go:25:						Get						85.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/flavor_groups.go:46:						GetAllFlavorGroups				85.7%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/scheduler_client.go:25:					loggerFromContext				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/scheduler_client.go:44:					NewSchedulerClient				100.0%
github.com/cobaltcore-dev/cortex/internal/scheduling/reservations/scheduler_client.go:89:					ScheduleReservation				71.4%
total:																(statements)					67.5%
