Changes from 3 commits
@@ -30,6 +30,15 @@ module SpanAttributes

# Deprecated
TRACELOOP_CORRELATION_ID = "traceloop.correlation.id"

# Gen AI
Member: just replace the existing ones

Author: and remove references to llm semantics?

GEN_AI_REQUEST_MODEL = "gen_ai.request.model"
GEN_AI_RESPONSE_MODEL = "gen_ai.response.model"
GEN_AI_USAGE_COMPLETION_TOKENS = "gen_ai.usage.completion_tokens"
GEN_AI_USAGE_PROMPT_TOKENS = "gen_ai.usage.prompt_tokens"
GEN_AI_COMPLETIONS = "gen_ai.completion"
GEN_AI_PROMPTS = "gen_ai.prompt"
GEN_AI_SYSTEM = "gen_ai.system"
Comment on lines 34 to 41
⚠️ Potential issue | 🟠 Major


Verify Gen AI semantic convention names match OpenTelemetry standards.

The Gen AI semantic convention constants should align with the official OpenTelemetry specification. Please verify these attribute names, particularly:

  • The singular vs. plural forms (e.g., "gen_ai.completion" vs "gen_ai.completions")
  • The overall naming structure matches the specification

🌐 Web query:

What are the official OpenTelemetry semantic conventions for generative AI operations?

💡 Result:

Short answer — OpenTelemetry provides a dedicated set of GenAI (Generative AI) semantic conventions covering four main areas plus provider-specific extensions:

  • Scope / pages (overall spec): GenAI semantic conventions (status: Development). [1]

  • Signals covered:

    • Events — “Generative AI inputs and outputs” (events, e.g., prompt/response events). [2]
    • Metrics — GenAI client & server metrics (e.g., gen_ai.client.token.usage, gen_ai.client.operation.duration, gen_ai.server.request.duration, gen_ai.server.time_to_first_token, gen_ai.server.time_per_output_token). [3][4]
    • Model (client) spans — conventions for inference/model calls (attributes such as gen_ai.operation.name, gen_ai.provider.name, gen_ai.request.model, gen_ai.response.model; span naming/kind/status guidance). [5]
    • Agent & framework spans — conventions for agent operations (e.g., create_agent, agent execution spans and agent-specific attributes). [6]
  • Key common attributes called out across the spec (examples):

    • gen_ai.operation.name (required for spans/metrics; e.g., chat, generate_content, text_completion). [5]
    • gen_ai.provider.name (identifies provider flavor; e.g., openai, gcp.gen_ai). [5]
    • gen_ai.request.model / gen_ai.response.model (model identifiers). [5]
    • gen_ai.conversation.id (when available for chat/history). [2]
    • error.type / standard error reporting guidance. [2][5]
  • Provider / system-specific conventions (examples):

    • OpenAI client conventions. [5]
    • Azure AI Inference conventions. [1]
    • AWS Bedrock conventions (agent/session specifics). [1][6]

If you want, I can: (a) extract the full list of metric names and their units/attributes, (b) paste the main span attribute tables for quick copy/paste, or (c) point you to the exact spec pages for each signal.

Sources: OpenTelemetry GenAI spec pages and repo. [1] [2] [3] [5] [6]

References
[1] Semantic conventions for generative AI systems — OpenTelemetry.
[2] Semantic conventions for Generative AI events — OpenTelemetry.
[3] Semantic conventions for generative AI metrics — OpenTelemetry.
[5] Semantic conventions for generative client AI spans — OpenTelemetry.
[6] Semantic conventions for GenAI agent and framework spans — OpenTelemetry.


Correct Gen AI semantic convention constant names to match OpenTelemetry specification.

The official OpenTelemetry specification uses event names gen_ai.content.prompt and gen_ai.content.completion, not the singular forms defined in lines 39-40. The attributes gen_ai.prompt and gen_ai.completion are deprecated.

Additionally, the official specification defines token usage as a single metric gen_ai.client.token.usage, not separate gen_ai.usage.completion_tokens and gen_ai.usage.prompt_tokens attributes.

Update lines 37-40 to use the correct attribute names from the current OpenTelemetry semantic conventions specification.

🤖 Prompt for AI Agents
In semantic_conventions_ai/lib/opentelemetry/semantic_conventions.rb around
lines 34 to 41, the Gen AI constant names don't match the OpenTelemetry spec;
replace the deprecated singular attributes and separate token usage attributes
with the spec-compliant names: change GEN_AI_COMPLETIONS to
GEN_AI_CONTENT_COMPLETION with value "gen_ai.content.completion", change
GEN_AI_PROMPTS to GEN_AI_CONTENT_PROMPT with value "gen_ai.content.prompt", and
replace the two token usage attributes (GEN_AI_USAGE_COMPLETION_TOKENS and
GEN_AI_USAGE_PROMPT_TOKENS) with a single GEN_AI_CLIENT_TOKEN_USAGE constant
with value "gen_ai.client.token.usage". Ensure the original deprecated constants
are removed or clearly marked deprecated if backward compatibility is required.

end

module LLMRequestTypeValues
69 changes: 52 additions & 17 deletions traceloop-sdk/lib/traceloop/sdk.rb
@@ -6,16 +6,21 @@ module Traceloop
module SDK
class Traceloop
def initialize
api_key = ENV["TRACELOOP_API_KEY"]
raise "TRACELOOP_API_KEY environment variable is required" if api_key.nil? || api_key.empty?

OpenTelemetry::SDK.configure do |c|
c.add_span_processor(
OpenTelemetry::SDK::Trace::Export::SimpleSpanProcessor.new(
OpenTelemetry::SDK::Trace::Export::BatchSpanProcessor.new(
OpenTelemetry::Exporter::OTLP::Exporter.new(
endpoint: "#{ENV.fetch("TRACELOOP_BASE_URL", "https://api.traceloop.com")}/v1/traces",
headers: { "Authorization" => "Bearer #{ENV.fetch("TRACELOOP_API_KEY")}" }
headers: {
"Authorization" => "#{ENV.fetch("TRACELOOP_AUTH_SCHEME", "Bearer")} #{ENV.fetch("TRACELOOP_API_KEY")}"
Member: what is that?

Author: authorization headers may vary in type, e.g. for our Dynatrace exporter, they use Api-Token.

}
)
)
)
puts "Traceloop exporting traces to #{ENV.fetch("TRACELOOP_BASE", "https://api.traceloop.com")}"
puts "Traceloop exporting traces to #{ENV.fetch("TRACELOOP_BASE_URL", "https://api.traceloop.com")}"
end
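The configurable auth scheme discussed in the thread above can be sketched as a standalone helper; `build_auth_header` is a hypothetical name for illustration, not part of the SDK:

```ruby
# Hypothetical helper mirroring the header construction in initialize:
# the scheme defaults to "Bearer" but can be overridden via
# TRACELOOP_AUTH_SCHEME (e.g. "Api-Token" for a Dynatrace endpoint).
def build_auth_header(env)
  scheme  = env.fetch("TRACELOOP_AUTH_SCHEME", "Bearer")
  api_key = env.fetch("TRACELOOP_API_KEY")
  { "Authorization" => "#{scheme} #{api_key}" }
end
```

Passing `ENV` directly works too, since `ENV#fetch` has the same default-argument behavior as `Hash#fetch`.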

@tracer = OpenTelemetry.tracer_provider.tracer("Traceloop")
@@ -41,25 +46,30 @@ def log_messages(messages)
def log_prompt(system_prompt="", user_prompt)
unless system_prompt.empty?
@span.add_attributes({
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_PROMPTS}.0.role" => "system",
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_PROMPTS}.0.content" => system_prompt,
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_PROMPTS}.1.role" => "user",
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_PROMPTS}.1.content" => user_prompt
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS}.0.role" => "system",
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS}.0.content" => system_prompt,
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS}.1.role" => "user",
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS}.1.content" => user_prompt
})
else
@span.add_attributes({
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_PROMPTS}.0.role" => "user",
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_PROMPTS}.0.content" => user_prompt
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS}.0.role" => "user",
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_PROMPTS}.0.content" => user_prompt
})
end
end

def log_response(response)
if response.respond_to?(:body)
log_bedrock_response(response)
# Check for RubyLLM::Message objects
elsif response.is_a?(::RubyLLM::Message)
log_ruby_llm_message(response)
elsif response.is_a?(::RubyLLM::Tool::Halt)
log_ruby_llm_halt(response)
# This is Gemini specific, see -
# https://github.com/gbaptista/gemini-ai?tab=readme-ov-file#generate_content
elsif response.has_key?("candidates")
elsif response.respond_to?(:has_key?) && response.has_key?("candidates")
log_gemini_response(response)
else
log_openai_response(response)
@@ -73,10 +83,29 @@ def log_gemini_response(response)

@span.add_attributes({
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.role" => "assistant",
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.content" => response.dig("candidates", 0, "content", "parts", 0, "text")
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.content" => response.dig(
"candidates", 0, "content", "parts", 0, "text")
})
end

def log_ruby_llm_message(response)
@span.add_attributes({
OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => response.model_id,
OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_COMPLETION_TOKENS => response.output_tokens || 0,
OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_USAGE_PROMPT_TOKENS => response.input_tokens || 0,
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.role" => response.role.to_s,
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.content" => response.content
})
end

def log_ruby_llm_halt(response)
@span.add_attributes({
OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_RESPONSE_MODEL => @model,
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.role" => "tool",
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_COMPLETIONS}.0.content" => response.content
})
end
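To show which duck-typed readers `log_ruby_llm_message` relies on, here is a minimal stand-in; the Struct is a stub for illustration, not the real RubyLLM::Message:

```ruby
# Stub exposing the same readers the logger calls; a nil token count
# falls back to 0, matching the `|| 0` guards in the method above.
Message = Struct.new(:model_id, :input_tokens, :output_tokens, :role, :content,
                     keyword_init: true)

msg = Message.new(model_id: "claude-3", input_tokens: 12,
                  output_tokens: nil, role: :assistant, content: "Hello")

completion_tokens = msg.output_tokens || 0
```

The `|| 0` guard matters because span attributes must not be nil, and providers do not always report token counts.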

def log_bedrock_response(response)
body = JSON.parse(response.body.read())

@@ -109,15 +138,20 @@ def log_openai_response(response)
})
if response.has_key?("usage")
@span.add_attributes({
OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_USAGE_TOTAL_TOKENS => response.dig("usage", "total_tokens"),
OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_USAGE_COMPLETION_TOKENS => response.dig("usage", "completion_tokens"),
OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_USAGE_PROMPT_TOKENS => response.dig("usage", "prompt_tokens"),
OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_USAGE_TOTAL_TOKENS => response.dig("usage",
"total_tokens"),
OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_USAGE_COMPLETION_TOKENS => response.dig(
"usage", "completion_tokens"),
OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_USAGE_PROMPT_TOKENS => response.dig("usage",
"prompt_tokens"),
})
end
if response.has_key?("choices")
@span.add_attributes({
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.role" => response.dig("choices", 0, "message", "role"),
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.content" => response.dig("choices", 0, "message", "content")
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.role" => response.dig(
"choices", 0, "message", "role"),
"#{OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_COMPLETIONS}.0.content" => response.dig(
"choices", 0, "message", "content")
})
end
end
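The dig-based extraction in `log_openai_response` can be exercised against a hand-built hash shaped like a Chat Completions payload; the values below are fabricated for illustration:

```ruby
# Fabricated response used only to illustrate the Hash#dig paths;
# dig returns nil instead of raising when an intermediate key is absent.
response = {
  "model"   => "gpt-4o",
  "usage"   => { "prompt_tokens" => 9, "completion_tokens" => 3, "total_tokens" => 12 },
  "choices" => [
    { "message" => { "role" => "assistant", "content" => "Hi!" } }
  ]
}

role    = response.dig("choices", 0, "message", "role")      # => "assistant"
content = response.dig("choices", 0, "message", "content")   # => "Hi!"
total   = response.dig("usage", "total_tokens")              # => 12
```

Because `dig` is nil-safe, the guards on `has_key?("usage")` and `has_key?("choices")` are what decide whether the attributes are set at all.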
@@ -126,7 +160,8 @@ def log_openai_response(response)
def llm_call(provider, model)
@tracer.in_span("#{provider}.chat") do |span|
span.add_attributes({
OpenTelemetry::SemanticConventionsAi::SpanAttributes::LLM_REQUEST_MODEL => model,
OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_REQUEST_MODEL => model,
OpenTelemetry::SemanticConventionsAi::SpanAttributes::GEN_AI_SYSTEM => provider
})
yield Tracer.new(span, provider, model)
end
4 changes: 2 additions & 2 deletions traceloop-sdk/traceloop-sdk.gemspec
@@ -17,8 +17,8 @@ Gem::Specification.new do |spec|

spec.add_dependency 'opentelemetry-semantic_conventions_ai', '~> 0.0.3'

spec.add_dependency 'opentelemetry-sdk', '~> 1.3.1'
spec.add_dependency 'opentelemetry-exporter-otlp', '~> 0.26.1'
spec.add_dependency 'opentelemetry-exporter-otlp', '~> 0.31.1'
spec.add_dependency 'opentelemetry-sdk', '~> 1.10.0'

if spec.respond_to?(:metadata)
spec.metadata['source_code_uri'] = 'https://github.com/traceloop/openllmetry-ruby/tree/main/traceloop-sdk'