
tests(core): Fix flaky metric sequence number test #19754

Merged
nicohrubec merged 2 commits into develop from nh/unflake-metric-sequence-unit-test on Mar 11, 2026

Conversation

@nicohrubec
Member

Mock `timestampInSeconds` in the "increments the sequence number across consecutive metrics" test to return a fixed value. The test was flaky because consecutive calls could land on different milliseconds, causing the sequence counter to reset unexpectedly.

Closes #19749


Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
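To make the flake mechanism concrete, here is a self-contained sketch of a per-timestamp sequence counter. The SDK's actual counter logic may differ; `nextSequence` and its reset rule are hypothetical, but they illustrate why a real clock ticking between two calls resets the sequence to 0, while a fixed time source keeps it monotonic:

```typescript
// Hypothetical model of a sequence counter keyed on the current timestamp:
// whenever the observed timestamp changes, the counter resets to 0.
let lastTimestamp: number | undefined;
let seq = 0;

function nextSequence(timestampInSeconds: () => number): number {
  const ts = timestampInSeconds();
  if (ts !== lastTimestamp) {
    // A millisecond boundary crossed between calls lands here and resets seq,
    // which is exactly what made the test flaky with the real clock.
    lastTimestamp = ts;
    seq = 0;
  }
  return seq++;
}

// With a fixed time source, consecutive metrics are guaranteed to share a
// timestamp, so the sequence increments deterministically: 0, then 1.
const fixedClock = () => 1234.567;
const a = nextSequence(fixedClock);
const b = nextSequence(fixedClock);
console.log(a, b); // 0 1
```

With the real clock injected instead of `fixedClock`, the same two calls can return 0 and 0 whenever they straddle a millisecond boundary, which is the failure mode the mock removes.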
@github-actions
Contributor

github-actions bot commented Mar 11, 2026

size-limit report 📦

⚠️ Warning: Base artifact is not the latest one, because the latest workflow run is not done yet. This may lead to incorrect results. Try to re-run all tests to get up to date results.

| Path | Size | % Change | Change |
| --- | --- | --- | --- |
| @sentry/browser | 25.64 kB | +0.05% | +12 B 🔺 |
| @sentry/browser - with treeshaking flags | 24.14 kB | +0.03% | +7 B 🔺 |
| @sentry/browser (incl. Tracing) | 42.62 kB | +0.44% | +184 B 🔺 |
| @sentry/browser (incl. Tracing, Profiling) | 47.28 kB | +0.4% | +187 B 🔺 |
| @sentry/browser (incl. Tracing, Replay) | 81.42 kB | +0.22% | +171 B 🔺 |
| @sentry/browser (incl. Tracing, Replay) - with treeshaking flags | 71 kB | +0.18% | +125 B 🔺 |
| @sentry/browser (incl. Tracing, Replay with Canvas) | 86.12 kB | +0.21% | +178 B 🔺 |
| @sentry/browser (incl. Tracing, Replay, Feedback) | 98.37 kB | +0.18% | +167 B 🔺 |
| @sentry/browser (incl. Feedback) | 42.45 kB | +0.03% | +9 B 🔺 |
| @sentry/browser (incl. sendFeedback) | 30.31 kB | +0.05% | +14 B 🔺 |
| @sentry/browser (incl. FeedbackAsync) | 35.36 kB | +0.04% | +12 B 🔺 |
| @sentry/browser (incl. Metrics) | 26.92 kB | +0.48% | +126 B 🔺 |
| @sentry/browser (incl. Logs) | 27.07 kB | +0.48% | +128 B 🔺 |
| @sentry/browser (incl. Metrics & Logs) | 27.74 kB | +0.46% | +127 B 🔺 |
| @sentry/react | 27.39 kB | +0.04% | +9 B 🔺 |
| @sentry/react (incl. Tracing) | 44.95 kB | +0.41% | +180 B 🔺 |
| @sentry/vue | 30.08 kB | +0.03% | +7 B 🔺 |
| @sentry/vue (incl. Tracing) | 44.48 kB | +0.42% | +183 B 🔺 |
| @sentry/svelte | 25.66 kB | +0.04% | +9 B 🔺 |
| CDN Bundle | 28.27 kB | +0.35% | +97 B 🔺 |
| CDN Bundle (incl. Tracing) | 43.5 kB | +0.55% | +237 B 🔺 |
| CDN Bundle (incl. Logs, Metrics) | 29.13 kB | +0.42% | +121 B 🔺 |
| CDN Bundle (incl. Tracing, Logs, Metrics) | 44.34 kB | +0.56% | +243 B 🔺 |
| CDN Bundle (incl. Replay, Logs, Metrics) | 68.2 kB | +0.16% | +107 B 🔺 |
| CDN Bundle (incl. Tracing, Replay) | 80.32 kB | +0.23% | +183 B 🔺 |
| CDN Bundle (incl. Tracing, Replay, Logs, Metrics) | 81.22 kB | +0.27% | +218 B 🔺 |
| CDN Bundle (incl. Tracing, Replay, Feedback) | 85.86 kB | +0.24% | +205 B 🔺 |
| CDN Bundle (incl. Tracing, Replay, Feedback, Logs, Metrics) | 86.76 kB | +0.27% | +226 B 🔺 |
| CDN Bundle - uncompressed | 82.56 kB | +0.26% | +211 B 🔺 |
| CDN Bundle (incl. Tracing) - uncompressed | 128.5 kB | +0.34% | +433 B 🔺 |
| CDN Bundle (incl. Logs, Metrics) - uncompressed | 85.43 kB | +0.29% | +244 B 🔺 |
| CDN Bundle (incl. Tracing, Logs, Metrics) - uncompressed | 131.37 kB | +0.36% | +466 B 🔺 |
| CDN Bundle (incl. Replay, Logs, Metrics) - uncompressed | 209.06 kB | +0.11% | +212 B 🔺 |
| CDN Bundle (incl. Tracing, Replay) - uncompressed | 245.35 kB | +0.17% | +401 B 🔺 |
| CDN Bundle (incl. Tracing, Replay, Logs, Metrics) - uncompressed | 248.21 kB | +0.18% | +434 B 🔺 |
| CDN Bundle (incl. Tracing, Replay, Feedback) - uncompressed | 258.26 kB | +0.16% | +401 B 🔺 |
| CDN Bundle (incl. Tracing, Replay, Feedback, Logs, Metrics) - uncompressed | 261.11 kB | +0.17% | +434 B 🔺 |
| @sentry/nextjs (client) | 47.37 kB | +0.39% | +184 B 🔺 |
| @sentry/sveltekit (client) | 43.07 kB | +0.42% | +178 B 🔺 |
| @sentry/node-core | 52.27 kB | +0.06% | +30 B 🔺 |
| @sentry/node | 174.76 kB | +0.04% | +54 B 🔺 |
| @sentry/node - without tracing | 97.44 kB | +0.06% | +51 B 🔺 |
| @sentry/aws-serverless | 113.24 kB | +0.05% | +46 B 🔺 |


@github-actions
Contributor

github-actions bot commented Mar 11, 2026

node-overhead report 🧳

Note: This is a synthetic benchmark with a minimal express app and does not necessarily reflect the real-world performance impact in an application.

| Scenario | Requests/s | % of Baseline | Prev. Requests/s | Change % |
| --- | --- | --- | --- | --- |
| GET Baseline | 9,427 | - | 11,232 | -16% |
| GET With Sentry | 1,673 | 18% | 1,894 | -12% |
| GET With Sentry (error only) | 5,965 | 63% | 7,482 | -20% |
| POST Baseline | 1,196 | - | 1,140 | +5% |
| POST With Sentry | 579 | 48% | 564 | +3% |
| POST With Sentry (error only) | 1,045 | 87% | 1,016 | +3% |
| MYSQL Baseline | 3,159 | - | 3,884 | -19% |
| MYSQL With Sentry | 441 | 14% | 441 | - |
| MYSQL With Sentry (error only) | 2,572 | 81% | 3,147 | -18% |


@nicohrubec nicohrubec marked this pull request as ready for review March 11, 2026 09:46
@nicohrubec nicohrubec requested review from Lms24 and logaretm March 11, 2026 09:46
@cursor cursor bot left a comment

Cursor Bugbot has reviewed your changes and found 1 potential issue.

Bugbot Autofix is OFF. To automatically fix reported issues with cloud agents, enable autofix in the Cursor dashboard.

@Lms24 Lms24 (Member) left a comment

Thanks for fixing!

```ts
it('increments the sequence number across consecutive metrics', () => {
  // Mock timestampInSeconds to return a fixed value so the sequence number
  // does not reset due to millisecond boundary crossings between calls.
  vi.spyOn(timeModule, 'timestampInSeconds').mockReturnValue(1234.567);
```
Member

l: can we use vi.useFakeTimers() instead of mocking our timer?

Member Author

any specific reason you would prefer this? I think it would probably work as well but I don't think it matters here since this is some abstractions down from what we are testing

Member

Basically just that it's the more "vitest-esque" way of influencing time in a test. Plus that it's more robust since this globally mocks timers. So if we were to get our time source from something other than timestampInSeconds for some reason, this test wouldn't break. But honestly, this is LOGAF-super-l, so feel free to leave it as-is. Using fake timers is much more relevant in cases where we actually have to wait for a certain timespan.

Member

I'll let you make the call, I'm fine with either way

Member Author

Makes sense, thanks for the detailed explanation! I will leave it as is for this one but will keep it in mind for the future :)
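For context on the alternative discussed in this thread: `vi.useFakeTimers()` in vitest fakes the global timer APIs, including `Date` by default, so every time source derived from the system clock is pinned at once. The sketch below emulates that effect without vitest by patching `Date.now` directly; the `timestampInSeconds` here is a simplified stand-in (`Date.now() / 1000`), not the SDK's real implementation:

```typescript
// Emulate the effect of vi.useFakeTimers() + vi.setSystemTime(1_234_567):
// while the patch is installed, every Date.now() call returns the pinned
// value, so any clock-derived helper becomes deterministic.
const realNow = Date.now;
Date.now = () => 1_234_567;

// Simplified stand-in for the SDK's time source (assumption, not the real code).
const timestampInSeconds = () => Date.now() / 1000;

const t1 = timestampInSeconds();
const t2 = timestampInSeconds();
console.log(t1 === t2); // true: the two reads can no longer straddle a millisecond

Date.now = realNow; // restore, as vi.useRealTimers() would
```

The trade-off matches the thread above: faking timers globally keeps the test robust even if the time source changes, while `vi.spyOn(timeModule, 'timestampInSeconds')` mocks exactly one function and nothing else.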

@nicohrubec nicohrubec merged commit 3df0bc6 into develop Mar 11, 2026
441 of 444 checks passed
@nicohrubec nicohrubec deleted the nh/unflake-metric-sequence-unit-test branch March 11, 2026 10:44
andreiborza pushed a commit that referenced this pull request Mar 11, 2026


Development

Successfully merging this pull request may close these issues.

[Flaky CI]: Metric sequence unit test flakes

2 participants