Refactor kvm resource capacity kpi to use effective capacity #585
SoWieMarkus merged 2 commits into main
Conversation
📝 Walkthrough
Resource capacity computation now uses EffectiveCapacity from the hypervisor CRD.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
🧹 Nitpick comments (1)
internal/knowledge/kpis/plugins/compute/resource_capacity_kvm_test.go (1)
44-412: Consider adding one case for `EffectiveCapacity: nil` skip behavior.
The implementation now explicitly skips hypervisors with nil effective capacity; adding a dedicated table case would lock in this behavior and prevent regressions.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@internal/knowledge/kpis/plugins/compute/resource_capacity_kvm_test.go` around lines 44 - 412, Add a new table case in TestKVMResourceCapacityKPI_Collect that supplies a Hypervisor with Status.EffectiveCapacity set to nil and assert that no metrics are emitted (i.e., expectedMetrics empty for that test). Locate the test table in TestKVMResourceCapacityKPI_Collect and add a case (e.g., name: "skip when effective capacity missing") where the hypervisor object has Status.EffectiveCapacity: nil and Allocation possibly nil, and verify the collector returns zero metrics for both "cortex_kvm_host_capacity_total" and "cortex_kvm_host_capacity_utilized" to lock in the skip behavior. Ensure the new case follows the same metricLabels conventions as other cases so it integrates with the existing check logic.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: ad8e9c04-022d-44b2-9eca-708e7d8e5f2c
📒 Files selected for processing (2)
internal/knowledge/kpis/plugins/compute/resource_capacity_kvm.go
internal/knowledge/kpis/plugins/compute/resource_capacity_kvm_test.go
Test Coverage Report 📊: 67.9%
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@internal/knowledge/kpis/plugins/compute/resource_capacity_kvm_test.go`:
- Around line 50-72: The test uses an empty expectedMetrics map which doesn't
fail when metrics are unexpectedly emitted because the verifier only iterates
expected entries; update the tests (cases with name "single hypervisor with nil
effective capacity" and the other empty-expectation case) to explicitly assert
no metrics are emitted: either set a sentinel (e.g., expectedMetrics = nil or
add expectNoMetrics: true) and after collecting metrics assert
len(emittedMetrics) == 0, or update the verifier (the test helper that currently
iterates expectedMetrics) to treat an empty/nil expectedMetrics as a negative
assertion by scanning actual emitted metrics and failing if any metric is
present. Reference the test's expectedMetrics variable and the verifier helper
used in resource_capacity_kvm_test.go when making the change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: a8fc5f78-bc33-4ad1-92a4-2aa1ed0d85e0
📒 Files selected for processing (1)
internal/knowledge/kpis/plugins/compute/resource_capacity_kvm_test.go
```go
{
	name: "single hypervisor with nil effective capacity",
	hypervisors: []hv1.Hypervisor{
		{
			ObjectMeta: v1.ObjectMeta{
				Name: "node001-bb088",
				Labels: map[string]string{
					"topology.kubernetes.io/zone": "qa-1a",
				},
			},
			Status: hv1.HypervisorStatus{
				EffectiveCapacity: nil, // Simulate nil effective capacity
				Allocation: map[hv1.ResourceName]resource.Quantity{
					hv1.ResourceCPU:    resource.MustParse("64"),
					hv1.ResourceMemory: resource.MustParse("256Gi"),
				},
				Traits: []string{},
			},
		},
	},
	// No metrics should be emitted for this hypervisor since effective capacity is nil
	expectedMetrics: map[string][]expectedMetric{},
},
```
expectedMetrics: {} does not actually assert “no metrics emitted”.
Line 71 and Line 97 define empty expectations, but the verifier only iterates expected entries. These cases pass even if metrics are emitted.
Suggested fix:

```diff
@@
-		// Verify expected metrics
+		// Verify expected metrics
+		if len(tt.expectedMetrics) == 0 {
+			if len(actualMetrics) != 0 {
+				t.Errorf("expected no metrics, got %v", actualMetrics)
+			}
+			return
+		}
+
+		for metricName := range actualMetrics {
+			if _, ok := tt.expectedMetrics[metricName]; !ok {
+				t.Errorf("unexpected metric %q found in actual metrics", metricName)
+			}
+		}
+
		for metricName, expectedList := range tt.expectedMetrics {
```

Also applies to: 73-98
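The pattern behind the suggestion can be shown self-contained. This is a simplified sketch, not the repository's actual verifier: `checkMetrics` and its map types are hypothetical stand-ins that treat an empty expectation map as a negative assertion and flag unexpected metric names.

```go
package main

import "fmt"

// checkMetrics applies the review's suggestion: an empty/nil expectation map
// means "assert nothing was emitted", and metric names that appear in the
// actual output but not in the expectations are reported as errors.
func checkMetrics(expected, actual map[string][]float64) []string {
	var errs []string
	if len(expected) == 0 {
		// Negative assertion: any emitted metric is a failure.
		if len(actual) != 0 {
			errs = append(errs, fmt.Sprintf("expected no metrics, got %d", len(actual)))
		}
		return errs
	}
	// Catch metrics that were emitted but never expected.
	for name := range actual {
		if _, ok := expected[name]; !ok {
			errs = append(errs, fmt.Sprintf("unexpected metric %q", name))
		}
	}
	return errs
}

func main() {
	// With the old verifier this case passed silently; here it fails loudly.
	errs := checkMetrics(
		map[string][]float64{},
		map[string][]float64{"cortex_kvm_host_capacity_total": {64}},
	)
	fmt.Println(len(errs)) // one error: a metric was emitted despite empty expectations
}
```

The same shape drops into a table-driven test by replacing the returned slice with `t.Errorf` calls, as the diff above does.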
Changes
Use EffectiveCapacity from the hypervisor CRD; see: Add overcommit spec and status (openstack-hypervisor-operator#257).
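The skip behavior this PR locks in can be sketched in isolation. The types below are hypothetical simplifications of the `hv1` API (plain maps instead of `resource.Quantity`), used only to illustrate the nil check, not the operator's actual schema.

```go
package main

import "fmt"

// Simplified stand-in for hv1.HypervisorStatus: EffectiveCapacity is nil
// until the operator publishes capacity for the host.
type HypervisorStatus struct {
	EffectiveCapacity map[string]int64
	Allocation        map[string]int64
}

type Hypervisor struct {
	Name   string
	Status HypervisorStatus
}

// collectCapacity sums per-resource totals, skipping any hypervisor whose
// EffectiveCapacity is nil so that no metrics are emitted for it.
func collectCapacity(hvs []Hypervisor) map[string]int64 {
	totals := map[string]int64{}
	for _, hv := range hvs {
		if hv.Status.EffectiveCapacity == nil {
			continue // no effective capacity reported: emit nothing for this host
		}
		for res, qty := range hv.Status.EffectiveCapacity {
			totals[res] += qty
		}
	}
	return totals
}

func main() {
	hvs := []Hypervisor{
		{Name: "node001", Status: HypervisorStatus{EffectiveCapacity: map[string]int64{"cpu": 64}}},
		{Name: "node002", Status: HypervisorStatus{EffectiveCapacity: nil}}, // skipped entirely
	}
	fmt.Println(collectCapacity(hvs)["cpu"]) // → 64
}
```

The nil-capacity test case shown earlier pins exactly this `continue` path.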