
fix(core): Improve Vercel AI SDK instrumentation attributes #19717

Merged
RulaKhaled merged 9 commits into develop from vercelai-issues
Mar 11, 2026
Conversation

RulaKhaled (Member) commented Mar 9, 2026:

This PR introduces new attributes and fixes to the Vercel AI SDK instrumentation.

Closes #19574

linear-code bot commented Mar 9, 2026

github-actions bot commented Mar 9, 2026:

size-limit report 📦

| Path | Size | % Change | Change |
| --- | --- | --- | --- |
| @sentry/browser | 25.64 kB | - | - |
| @sentry/browser - with treeshaking flags | 24.14 kB | - | - |
| @sentry/browser (incl. Tracing) | 42.62 kB | - | - |
| @sentry/browser (incl. Tracing, Profiling) | 47.28 kB | - | - |
| @sentry/browser (incl. Tracing, Replay) | 81.42 kB | - | - |
| @sentry/browser (incl. Tracing, Replay) - with treeshaking flags | 71 kB | - | - |
| @sentry/browser (incl. Tracing, Replay with Canvas) | 86.12 kB | - | - |
| @sentry/browser (incl. Tracing, Replay, Feedback) | 98.37 kB | - | - |
| @sentry/browser (incl. Feedback) | 42.45 kB | - | - |
| @sentry/browser (incl. sendFeedback) | 30.31 kB | - | - |
| @sentry/browser (incl. FeedbackAsync) | 35.36 kB | - | - |
| @sentry/browser (incl. Metrics) | 26.92 kB | - | - |
| @sentry/browser (incl. Logs) | 27.07 kB | - | - |
| @sentry/browser (incl. Metrics & Logs) | 27.74 kB | - | - |
| @sentry/react | 27.39 kB | - | - |
| @sentry/react (incl. Tracing) | 44.95 kB | - | - |
| @sentry/vue | 30.08 kB | - | - |
| @sentry/vue (incl. Tracing) | 44.48 kB | - | - |
| @sentry/svelte | 25.66 kB | - | - |
| CDN Bundle | 28.27 kB | - | - |
| CDN Bundle (incl. Tracing) | 43.5 kB | - | - |
| CDN Bundle (incl. Logs, Metrics) | 29.13 kB | - | - |
| CDN Bundle (incl. Tracing, Logs, Metrics) | 44.34 kB | - | - |
| CDN Bundle (incl. Replay, Logs, Metrics) | 68.2 kB | - | - |
| CDN Bundle (incl. Tracing, Replay) | 80.32 kB | - | - |
| CDN Bundle (incl. Tracing, Replay, Logs, Metrics) | 81.22 kB | - | - |
| CDN Bundle (incl. Tracing, Replay, Feedback) | 85.86 kB | - | - |
| CDN Bundle (incl. Tracing, Replay, Feedback, Logs, Metrics) | 86.76 kB | - | - |
| CDN Bundle - uncompressed | 82.56 kB | - | - |
| CDN Bundle (incl. Tracing) - uncompressed | 128.5 kB | - | - |
| CDN Bundle (incl. Logs, Metrics) - uncompressed | 85.43 kB | - | - |
| CDN Bundle (incl. Tracing, Logs, Metrics) - uncompressed | 131.37 kB | - | - |
| CDN Bundle (incl. Replay, Logs, Metrics) - uncompressed | 209.06 kB | - | - |
| CDN Bundle (incl. Tracing, Replay) - uncompressed | 245.35 kB | - | - |
| CDN Bundle (incl. Tracing, Replay, Logs, Metrics) - uncompressed | 248.21 kB | - | - |
| CDN Bundle (incl. Tracing, Replay, Feedback) - uncompressed | 258.26 kB | - | - |
| CDN Bundle (incl. Tracing, Replay, Feedback, Logs, Metrics) - uncompressed | 261.11 kB | - | - |
| @sentry/nextjs (client) | 47.37 kB | - | - |
| @sentry/sveltekit (client) | 43.07 kB | - | - |
| @sentry/node-core | 52.27 kB | +0.02% | +7 B 🔺 |
| @sentry/node | 175.13 kB | +0.22% | +368 B 🔺 |
| @sentry/node - without tracing | 97.43 kB | +0.02% | +11 B 🔺 |
| @sentry/aws-serverless | 113.23 kB | +0.01% | +7 B 🔺 |

View base workflow run

github-actions bot commented Mar 9, 2026:

node-overhead report 🧳

Note: This is a synthetic benchmark with a minimal express app and does not necessarily reflect the real-world performance impact in an application.

| Scenario | Requests/s | % of Baseline | Prev. Requests/s | Change % |
| --- | --- | --- | --- | --- |
| GET Baseline | 9,738 | - | 9,242 | +5% |
| GET With Sentry | 1,727 | 18% | 1,700 | +2% |
| GET With Sentry (error only) | 6,166 | 63% | 6,165 | +0% |
| POST Baseline | 1,209 | - | 1,209 | - |
| POST With Sentry | 591 | 49% | 594 | -1% |
| POST With Sentry (error only) | 1,079 | 89% | 1,070 | +1% |
| MYSQL Baseline | 3,345 | - | 3,256 | +3% |
| MYSQL With Sentry | 442 | 13% | 470 | -6% |
| MYSQL With Sentry (error only) | 2,720 | 81% | 2,684 | +1% |

View base workflow run

RulaKhaled changed the title from "fix(core): Resolve" to "fix(core): Add output messages, tool description attributes, and fix media type stripping" on Mar 10, 2026
RulaKhaled changed the title from "fix(core): Add output messages, tool description attributes, and fix media type stripping" to "fix(core): Improve Vercel AI SDK instrumentation attributes" on Mar 10, 2026
@RulaKhaled RulaKhaled marked this pull request as ready for review March 10, 2026 11:21
cursor bot left a comment:

Cursor Bugbot has reviewed your changes and found 1 potential issue.

Autofix Details

Bugbot Autofix prepared a fix for the issue found in the latest run.

  • ✅ Fixed: V6 tests missing new output messages attribute assertions
    • Added explicit GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE assertions (and import) across the v6 span expectations so gen_ai.output.messages is now validated for text and tool-call outputs.


Or push these changes by commenting:

@cursor push 8e0d6cceb7
Preview (8e0d6cceb7)
diff --git a/dev-packages/node-integration-tests/suites/tracing/vercelai/v6/test.ts b/dev-packages/node-integration-tests/suites/tracing/vercelai/v6/test.ts
--- a/dev-packages/node-integration-tests/suites/tracing/vercelai/v6/test.ts
+++ b/dev-packages/node-integration-tests/suites/tracing/vercelai/v6/test.ts
@@ -4,6 +4,7 @@
 import {
   GEN_AI_INPUT_MESSAGES_ATTRIBUTE,
   GEN_AI_OPERATION_NAME_ATTRIBUTE,
+  GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE,
   GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE,
   GEN_AI_REQUEST_MODEL_ATTRIBUTE,
   GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE,
@@ -97,6 +98,8 @@
           'vercel.ai.settings.maxRetries': 2,
           'vercel.ai.streaming': false,
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Where is the second span?"}]',
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"text","content":"Second span here!"}],"finish_reason":"stop"}]',
           [GEN_AI_RESPONSE_MODEL_ATTRIBUTE]: 'mock-model-id',
           [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
           [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
@@ -129,6 +132,8 @@
           'vercel.ai.response.id': expect.any(String),
           'vercel.ai.response.timestamp': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String),
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"text","content":"Second span here!"}],"finish_reason":"stop"}]',
           [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
           [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
           [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
@@ -231,6 +236,8 @@
           'vercel.ai.prompt': '[{"role":"user","content":"Where is the first span?"}]',
           'vercel.ai.request.headers.user-agent': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Where is the first span?"}]',
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"text","content":"First span here!"}],"finish_reason":"stop"}]',
           'vercel.ai.response.finishReason': 'stop',
           'vercel.ai.settings.maxRetries': 2,
           'vercel.ai.streaming': false,
@@ -257,6 +264,8 @@
           'vercel.ai.request.headers.user-agent': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]:
             '[{"role":"user","content":[{"type":"text","text":"Where is the first span?"}]}]',
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"text","content":"First span here!"}],"finish_reason":"stop"}]',
           'vercel.ai.response.finishReason': 'stop',
           'vercel.ai.response.id': expect.any(String),
           'vercel.ai.response.model': 'mock-model-id',
@@ -289,6 +298,8 @@
           'vercel.ai.prompt': '[{"role":"user","content":"Where is the second span?"}]',
           'vercel.ai.request.headers.user-agent': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"Where is the second span?"}]',
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"text","content":"Second span here!"}],"finish_reason":"stop"}]',
           'vercel.ai.response.finishReason': 'stop',
           'vercel.ai.settings.maxRetries': 2,
           'vercel.ai.streaming': false,
@@ -324,6 +335,8 @@
           'vercel.ai.response.id': expect.any(String),
           'vercel.ai.response.timestamp': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String),
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"text","content":"Second span here!"}],"finish_reason":"stop"}]',
           [GEN_AI_RESPONSE_FINISH_REASONS_ATTRIBUTE]: ['stop'],
           [GEN_AI_USAGE_INPUT_TOKENS_ATTRIBUTE]: 10,
           [GEN_AI_USAGE_OUTPUT_TOKENS_ATTRIBUTE]: 20,
@@ -346,6 +359,8 @@
           'vercel.ai.prompt': '[{"role":"user","content":"What is the weather in San Francisco?"}]',
           'vercel.ai.request.headers.user-agent': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: '[{"role":"user","content":"What is the weather in San Francisco?"}]',
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"tool_call","id":"call-1","name":"getWeather","arguments":"{\\"location\\":\\"San Francisco\\"}"}],"finish_reason":"tool-calls"}]',
           'vercel.ai.response.finishReason': 'tool-calls',
           'vercel.ai.settings.maxRetries': 2,
           'vercel.ai.streaming': false,
@@ -371,6 +386,8 @@
           'vercel.ai.pipeline.name': 'generateText.doGenerate',
           'vercel.ai.request.headers.user-agent': expect.any(String),
           [GEN_AI_INPUT_MESSAGES_ATTRIBUTE]: expect.any(String),
+          [GEN_AI_OUTPUT_MESSAGES_ATTRIBUTE]:
+            '[{"role":"assistant","parts":[{"type":"tool_call","id":"call-1","name":"getWeather","arguments":"{\\"location\\":\\"San Francisco\\"}"}],"finish_reason":"tool-calls"}]',
           'vercel.ai.prompt.toolChoice': expect.any(String),
           [GEN_AI_REQUEST_AVAILABLE_TOOLS_ATTRIBUTE]: EXPECTED_AVAILABLE_TOOLS_JSON,
           'vercel.ai.response.finishReason': 'tool-calls',
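For reference, the `gen_ai.output.messages` value asserted in the diff above is a JSON-serialized array of assistant messages. Building such a value could look roughly like the following sketch, inferred from the shapes in the test expectations; the type and function names are illustrative, not the SDK's actual code.

```typescript
// Sketch: serialize assistant output into the gen_ai.output.messages shape
// seen in the test expectations above. Names and types are illustrative.
type OutputPart =
  | { type: 'text'; content: string }
  | { type: 'tool_call'; id: string; name: string; arguments: string };

function toOutputMessagesAttribute(parts: OutputPart[], finishReason: string): string {
  // One assistant message wrapping all output parts, with the required
  // finish_reason, serialized as a JSON array.
  return JSON.stringify([{ role: 'assistant', parts, finish_reason: finishReason }]);
}
```

For example, a plain text completion serializes to the exact string asserted for the second span in the tests above.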

function truncateContentArrayMessage(message: ContentArrayMessage, maxBytes: number): unknown[] {
const { content } = message;

// Find the first text part to truncate
Member commented:
m: Why do we only truncate the first text part? Is the assumption that these messages usually only have one text part?

RulaKhaled (author) replied:
Yes, because this is the most common use case, but we could and should account for more parts. I'll update.

RulaKhaled (author) replied:
Rethinking this: the Python SDK completely removed single-message truncation, and we should follow. I'll keep it as is for now and remove truncation for single-message object parts as well in a later PR :)

nicohrubec (Member) commented Mar 11, 2026:
lol yeah I was just told that we'll likely be dropping most of truncation in the sdks soon so it's whatever
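The truncation behavior discussed in this thread can be sketched roughly as follows. This is a hypothetical, simplified illustration under assumed part shapes, not the SDK's actual `truncateContentArrayMessage` implementation:

```typescript
// Hypothetical sketch: truncate only the FIRST text part of a
// content-array message to fit a byte budget, leaving other parts
// (images, tool calls, later text parts) untouched.
type Part = { type: string; text?: string };

function truncateFirstTextPart(parts: Part[], maxBytes: number): Part[] {
  const idx = parts.findIndex(p => p.type === 'text' && typeof p.text === 'string');
  if (idx === -1) return parts;

  const encoded = new TextEncoder().encode(parts[idx].text as string);
  if (encoded.length <= maxBytes) return parts;

  // Byte-slice the UTF-8 prefix, then strip any replacement character
  // left behind when a multi-byte code point is cut at the boundary.
  const truncated = new TextDecoder()
    .decode(encoded.slice(0, maxBytes))
    .replace(/\uFFFD+$/, '');

  const copy = parts.slice();
  copy[idx] = { ...copy[idx], text: truncated };
  return copy;
}
```

The single-part limitation the reviewer raises is visible here: any text parts after the first are never shortened, which is fine for the common one-text-part case but not in general.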

function normalizeFinishReason(finishReason: unknown): string {
if (typeof finishReason !== 'string') {
return 'stop';
Member commented:
l: why do we default to stop if nothing is set?

RulaKhaled (author) replied:

because finish_reason is required according to the OTel schema for output messages. https://opentelemetry.io/docs/specs/semconv/gen-ai/gen-ai-output-messages.json

"FinishReason": {
      "enum": [
          "stop",
          "length",
          "content_filter",
          "tool_call",
          "error"
      ]
  }

When the SDK doesn't give us one, 'stop' (normal completion) is the most sensible default assumption.
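The normalization described above can be sketched as follows. This is illustrative only: the mapping from the Vercel AI SDK's hyphenated reasons (e.g. `tool-calls`) onto the OTel enum is an assumption for this sketch, not the SDK's exact code.

```typescript
// Sketch: coerce an SDK-reported finish reason onto the OTel gen_ai enum,
// defaulting to 'stop' because the output-messages schema requires a
// finish_reason on every message.
const OTEL_FINISH_REASONS = ['stop', 'length', 'content_filter', 'tool_call', 'error'] as const;
type OtelFinishReason = (typeof OTEL_FINISH_REASONS)[number];

function normalizeFinishReason(finishReason: unknown): OtelFinishReason {
  if (typeof finishReason !== 'string') return 'stop';
  // Assumed mapping: Vercel reports e.g. 'tool-calls' / 'content-filter';
  // normalize separators and the plural 'tool_calls' form.
  const normalized = finishReason.replace(/-/g, '_').replace(/_calls$/, '_call');
  return (OTEL_FINISH_REASONS as readonly string[]).includes(normalized)
    ? (normalized as OtelFinishReason)
    : 'stop';
}
```

Unknown strings also fall back to 'stop' rather than emitting a value outside the enum.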

// eslint-disable-next-line @typescript-eslint/no-dynamic-delete
delete attributes[AI_RESPONSE_TEXT_ATTRIBUTE];
// eslint-disable-next-line @typescript-eslint/no-dynamic-delete
delete attributes[AI_RESPONSE_TOOL_CALLS_ATTRIBUTE];
Member commented:

l: we do not delete the original finish reason attribute after normalizing here, is that on purpose?

RulaKhaled (author) replied:

Yeah, finish reason is an independent attribute that was not deprecated by the output messages attribute: https://getsentry.github.io/sentry-conventions/attributes/gen_ai/#gen_ai-response-finish_reasons

cursor bot left a comment:

Cursor Bugbot has reviewed your changes and found 1 potential issue.


nicohrubec (Member) left a comment:

thanks!

@RulaKhaled RulaKhaled merged commit 62d3436 into develop Mar 11, 2026
443 of 445 checks passed
@RulaKhaled RulaKhaled deleted the vercelai-issues branch March 11, 2026 15:30
andreiborza pushed a commit that referenced this pull request Mar 11, 2026

Labels: none yet. Projects: none yet.

Successfully merging this pull request may close these issues: Fix Vercel AI Node.js tests

3 participants