**File:** `tests/results/dp-perf/2.3.0/2.3.0-oss.md`
# Results

## Test environment

NGINX Plus: false

NGINX Gateway Fabric:

- Commit: 89aee48bf6e660a828ffd32ca35fc7f52e358e00
- Date: 2025-12-12T20:04:38Z
- Dirty: false

GKE Cluster:

- Node count: 12
- k8s version: v1.33.5-gke.1308000
- vCPUs per node: 16
- RAM per node: 65851520Ki
- Max pods per node: 110
- Zone: us-west1-b
- Instance Type: n2d-standard-16

## Summary:

- Latency continues to grow slightly, following the trend of past releases.

## Test1: Running latte path based routing

```text
Requests [total, rate, throughput] 30000, 1000.03, 999.99
Duration [total, attack, wait] 30s, 29.999s, 991.978µs
Latencies [min, mean, 50, 90, 95, 99, max] 816.445µs, 1.069ms, 1.045ms, 1.166ms, 1.217ms, 1.385ms, 23.061ms
Bytes In [total, mean] 4740000, 158.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 100.00%
Status Codes [code:count] 200:30000
Error Set:
```
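The latency lines above follow the vegeta report layout. For comparing percentiles across releases, a small parser can normalize the mixed µs/ms values to milliseconds. This is a hedged sketch: the field order (min, mean, 50, 90, 95, 99, max) is assumed from the reports in this file, and `parse_latencies` is a hypothetical helper, not part of the test harness.

```python
import re

# Conversion factors to milliseconds; the reports mix µs and ms.
UNITS = {"µs": 1e-3, "us": 1e-3, "ms": 1.0, "s": 1e3}

def parse_latencies(line: str) -> dict:
    """Parse a vegeta-style 'Latencies' line into a dict of milliseconds."""
    # Each value is digits (possibly with a dot) immediately followed by a unit.
    values = re.findall(r"([\d.]+)(µs|us|ms|s)", line)
    names = ["min", "mean", "p50", "p90", "p95", "p99", "max"]
    return {n: float(v) * UNITS[u] for n, (v, u) in zip(names, values)}

line = ("Latencies [min, mean, 50, 90, 95, 99, max] 816.445µs, 1.069ms, "
        "1.045ms, 1.166ms, 1.217ms, 1.385ms, 23.061ms")
print(parse_latencies(line)["p99"])  # → 1.385
```

With results from two releases parsed this way, per-percentile deltas can be computed directly instead of eyeballing the raw lines.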

## Test2: Running coffee header based routing

```text
Requests [total, rate, throughput] 30000, 1000.04, 1000.00
Duration [total, attack, wait] 30s, 29.999s, 1.132ms
Latencies [min, mean, 50, 90, 95, 99, max] 840.624µs, 1.096ms, 1.073ms, 1.204ms, 1.26ms, 1.44ms, 16.79ms
Bytes In [total, mean] 4770000, 159.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 100.00%
Status Codes [code:count] 200:30000
Error Set:
```

## Test3: Running coffee query based routing

```text
Requests [total, rate, throughput] 30000, 1000.04, 1000.00
Duration [total, attack, wait] 30s, 29.999s, 1.067ms
Latencies [min, mean, 50, 90, 95, 99, max] 825.3µs, 1.095ms, 1.071ms, 1.201ms, 1.256ms, 1.444ms, 16.845ms
Bytes In [total, mean] 5010000, 167.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 100.00%
Status Codes [code:count] 200:30000
Error Set:
```

## Test4: Running tea GET method based routing

```text
Requests [total, rate, throughput] 30000, 1000.02, 999.99
Duration [total, attack, wait] 30s, 29.999s, 954.141µs
Latencies [min, mean, 50, 90, 95, 99, max] 818.006µs, 1.079ms, 1.059ms, 1.187ms, 1.241ms, 1.411ms, 14.873ms
Bytes In [total, mean] 4680000, 156.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 100.00%
Status Codes [code:count] 200:30000
Error Set:
```

## Test5: Running tea POST method based routing

```text
Requests [total, rate, throughput] 30000, 1000.03, 1000.00
Duration [total, attack, wait] 30s, 29.999s, 992.607µs
Latencies [min, mean, 50, 90, 95, 99, max] 808.16µs, 1.086ms, 1.064ms, 1.196ms, 1.248ms, 1.42ms, 17.019ms
Bytes In [total, mean] 4680000, 156.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 100.00%
Status Codes [code:count] 200:30000
Error Set:
```
**File:** `tests/results/dp-perf/2.3.0/2.3.0-plus.md`
# Results

## Test environment

NGINX Plus: true

NGINX Gateway Fabric:

- Commit: 89aee48bf6e660a828ffd32ca35fc7f52e358e00
- Date: 2025-12-12T20:04:38Z
- Dirty: false

GKE Cluster:

- Node count: 12
- k8s version: v1.33.5-gke.1308000
- vCPUs per node: 16
- RAM per node: 65851520Ki
- Max pods per node: 110
- Zone: us-west1-b
- Instance Type: n2d-standard-16

## Summary:

- Latency appears to have improved slightly.

## Test1: Running latte path based routing

```text
Requests [total, rate, throughput] 30000, 1000.04, 1000.01
Duration [total, attack, wait] 30s, 29.999s, 880.439µs
Latencies [min, mean, 50, 90, 95, 99, max] 691.14µs, 886.932µs, 867.964µs, 976.348µs, 1.018ms, 1.153ms, 10.358ms
Bytes In [total, mean] 4830000, 161.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 100.00%
Status Codes [code:count] 200:30000
Error Set:
```
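As an illustrative cross-check of the numbers in this PR (not part of the test harness), the Test1 mean latencies can be compared between the OSS run (1.069 ms) and this Plus run (886.932 µs):

```python
# Mean latencies taken from the Test1 "Latencies" lines in this PR.
oss_mean_ms = 1.069
plus_mean_ms = 886.932 / 1000  # reported in µs; convert to ms

# Relative improvement of Plus over OSS for this one test.
improvement = (oss_mean_ms - plus_mean_ms) / oss_mean_ms
print(f"Plus Test1 mean latency is {improvement:.1%} lower than OSS")  # 17.0%
```

Note this compares Plus against OSS within the same release; the summary bullet above compares against the previous release.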

## Test2: Running coffee header based routing

```text
Requests [total, rate, throughput] 30000, 1000.04, 1000.01
Duration [total, attack, wait] 30s, 29.999s, 923.361µs
Latencies [min, mean, 50, 90, 95, 99, max] 726.599µs, 948.386µs, 919.848µs, 1.025ms, 1.07ms, 1.262ms, 22.38ms
Bytes In [total, mean] 4860000, 162.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 100.00%
Status Codes [code:count] 200:30000
Error Set:
```

## Test3: Running coffee query based routing

```text
Requests [total, rate, throughput] 30000, 1000.04, 1000.01
Duration [total, attack, wait] 30s, 29.999s, 980.118µs
Latencies [min, mean, 50, 90, 95, 99, max] 741.198µs, 949.099µs, 920.511µs, 1.025ms, 1.067ms, 1.241ms, 19.154ms
Bytes In [total, mean] 5100000, 170.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 100.00%
Status Codes [code:count] 200:30000
Error Set:
```

## Test4: Running tea GET method based routing

```text
Requests [total, rate, throughput] 30000, 1000.01, 999.98
Duration [total, attack, wait] 30.001s, 30s, 997.667µs
Latencies [min, mean, 50, 90, 95, 99, max] 716.164µs, 903.954µs, 881.394µs, 978.714µs, 1.019ms, 1.192ms, 21.825ms
Bytes In [total, mean] 4770000, 159.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 100.00%
Status Codes [code:count] 200:30000
Error Set:
```

## Test5: Running tea POST method based routing

```text
Requests [total, rate, throughput] 30000, 1000.01, 999.97
Duration [total, attack, wait] 30.001s, 30s, 919.688µs
Latencies [min, mean, 50, 90, 95, 99, max] 708.879µs, 925.517µs, 903.767µs, 1.012ms, 1.054ms, 1.21ms, 22.009ms
Bytes In [total, mean] 4770000, 159.00
Bytes Out [total, mean] 0, 0.00
Success [ratio] 100.00%
Status Codes [code:count] 200:30000
Error Set:
```
**File:** `tests/results/longevity/2.3.0/2.3.0-oss.md`
# Results

## Test environment

NGINX Plus: false

NGINX Gateway Fabric:

- Commit: 89aee48bf6e660a828ffd32ca35fc7f52e358e00
- Date: 2025-12-12T20:04:38Z
- Dirty: false

GKE Cluster:

- Node count: 3
- k8s version: v1.33.5-gke.1308000
- vCPUs per node: 2
- RAM per node: 4015672Ki
- Max pods per node: 110
- Zone: us-west2-a
- Instance Type: e2-medium

## Summary:

- There are still many non-2xx or 3xx responses, noticeably more than last time. Socket errors are mostly read errors, with no write errors and fewer timeout errors than before.
- NGINX memory usage continues to increase over time, which could indicate a memory leak; we will raise this with the Agent team.
- CPU usage remained consistent with past results.
- There was an error contacting the TokenReview API, but it may be a one-off.

## Traffic

HTTP:

```text
Running 5760m test @ http://cafe.example.com/coffee
2 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 190.35ms 141.74ms 2.00s 83.52%
Req/Sec 289.84 187.59 3.52k 63.68%
195509968 requests in 5760.00m, 66.75GB read
Socket errors: connect 0, read 315485, write 0, timeout 6584
Non-2xx or 3xx responses: 1763516
Requests/sec: 565.71
Transfer/sec: 202.53KB
```
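The error counts above are easier to interpret as rates. This is illustrative arithmetic on the reported HTTP totals (195,509,968 requests, 1,763,516 non-2xx/3xx responses, 315,485 read errors), not output from the test run itself:

```python
# Totals taken from the wrk HTTP summary above.
total_requests = 195_509_968
error_responses = 1_763_516
read_errors = 315_485

error_rate = error_responses / total_requests
print(f"non-2xx/3xx rate: {error_rate:.3%}")            # non-2xx/3xx rate: 0.902%
print(f"read-error rate: {read_errors / total_requests:.4%}")
```

So despite the large absolute counts over the 4-day run, the failure rate stays under 1% of requests.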

HTTPS:

```text
Running 5760m test @ https://cafe.example.com/tea
2 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 180.03ms 106.92ms 1.94s 67.25%
Req/Sec 287.34 184.95 1.73k 63.36%
193842103 requests in 5760.00m, 65.22GB read
Socket errors: connect 0, read 309621, write 0, timeout 1
Requests/sec: 560.89
Transfer/sec: 197.88KB
```

## Key Metrics

### Containers memory

![oss-memory.png](oss-memory.png)

### Containers CPU

![oss-cpu.png](oss-cpu.png)

## Error Logs

### nginx-gateway

```text
error=rpc error: code = Internal desc = error creating TokenReview: context canceled;level=error;logger=agentGRPCServer;msg=error validating connection;stacktrace=github.com/nginx/nginx-gateway-fabric/v2/internal/controller/nginx/agent/grpc/interceptor.(*ContextSetter).Stream.ContextSetter.Stream.func1
/opt/actions-runner/_work/nginx-gateway-fabric/nginx-gateway-fabric/internal/controller/nginx/agent/grpc/interceptor/interceptor.go:62
google.golang.org/grpc.(*Server).processStreamingRPC
/opt/actions-runner/_work/nginx-gateway-fabric/nginx-gateway-fabric/.gocache/google.golang.org/[email protected]/server.go:1721
google.golang.org/grpc.(*Server).handleStream
/opt/actions-runner/_work/nginx-gateway-fabric/nginx-gateway-fabric/.gocache/google.golang.org/[email protected]/server.go:1836
google.golang.org/grpc.(*Server).serveStreams.func2.1
/opt/actions-runner/_work/nginx-gateway-fabric/nginx-gateway-fabric/.gocache/google.golang.org/[email protected]/server.go:1063;ts=2025-12-16T17:35:17Z
```

### nginx
**File:** `tests/results/longevity/2.3.0/2.3.0-plus.md`
# Results

## Test environment

NGINX Plus: false

NGINX Gateway Fabric:

- Commit: 89aee48bf6e660a828ffd32ca35fc7f52e358e00
- Date: 2025-12-12T20:04:38Z
- Dirty: false

GKE Cluster:

- Node count: 3
- k8s version: v1.33.5-gke.1308000
- vCPUs per node: 2
- RAM per node: 4015672Ki
- Max pods per node: 110
- Zone: us-west2-a
- Instance Type: e2-medium

## Summary:

- Traffic results are consistent with 2.2.
- NGINX memory usage continues to increase over time, which could indicate a memory leak; we will raise this with the Agent team.
- CPU usage remained consistent with past results.
- We still see some "no live upstreams" errors.

## Traffic

HTTP:

```text
Running 5760m test @ http://cafe.example.com/coffee
2 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 184.82ms 102.91ms 1.45s 65.52%
Req/Sec 284.19 179.74 1.52k 63.62%
192198367 requests in 5760.00m, 65.91GB read
Socket errors: connect 0, read 0, write 0, timeout 108
Non-2xx or 3xx responses: 5
Requests/sec: 556.13
Transfer/sec: 199.96KB
```

HTTPS:

```text
Running 5760m test @ https://cafe.example.com/tea
2 threads and 100 connections
Thread Stats Avg Stdev Max +/- Stdev
Latency 185.02ms 102.92ms 1.50s 65.52%
Req/Sec 283.70 179.19 1.43k 63.75%
191866398 requests in 5760.00m, 64.73GB read
Socket errors: connect 0, read 0, write 0, timeout 114
Non-2xx or 3xx responses: 6
Requests/sec: 555.17
Transfer/sec: 196.40KB
```

## Key Metrics

### Containers memory

![oss-memory.png](oss-memory.png)

### Containers CPU

![oss-cpu.png](oss-cpu.png)

## Error Logs

### nginx-gateway

### nginx

```text
10.168.0.90 - - [16/Dec/2025:15:47:08 +0000] "GET /tea HTTP/1.1" 502 150 "-" "-"
2025/12/16 15:47:08 [error] 26#26: *361983622 no live upstreams while connecting to upstream, client: 10.168.0.90, server: cafe.example.com, request: "GET /tea HTTP/1.1", upstream: "http://longevity_tea_80/tea", host: "cafe.example.com"
10.168.0.90 - - [16/Dec/2025:12:49:07 +0000] "GET /coffee HTTP/1.1" 502 150 "-" "-"
2025/12/16 12:49:07 [error] 25#25: *350621339 no live upstreams while connecting to upstream, client: 10.168.0.90, server: cafe.example.com, request: "GET /coffee HTTP/1.1", upstream: "http://longevity_coffee_80/coffee", host: "cafe.example.com"
```
**Binary files added:** `tests/results/longevity/2.3.0/oss-cpu.png`, `tests/results/longevity/2.3.0/oss-memory.png`, `tests/results/longevity/2.3.0/plus-cpu.png`, `tests/results/longevity/2.3.0/plus-memory.png`