Conversation
✅ Docker image ready

Use this tag to pull the image for testing:

```shell
gcloud auth configure-docker us-central1-docker.pkg.dev
docker pull us-central1-docker.pkg.dev/robusta-development/temporary-builds/robusta-runner:26a4184
docker tag us-central1-docker.pkg.dev/robusta-development/temporary-builds/robusta-runner:26a4184 me-west1-docker.pkg.dev/robusta-development/development/robusta-runner-dev:26a4184
docker push me-west1-docker.pkg.dev/robusta-development/development/robusta-runner-dev:26a4184
```

Patch Helm values in one line:

```shell
helm upgrade --install robusta robusta/robusta \
  --reuse-values \
  --set runner.image=me-west1-docker.pkg.dev/robusta-development/development/robusta-runner-dev:26a4184
```
Walkthrough

A single documentation file receives comprehensive additions detailing the Holmes Chat API's streaming (SSE) mode, including enablement instructions, request/response formats, event type definitions with payloads, and client consumption examples for Python and JavaScript.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 inconclusive
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/configuration/holmesgpt/holmes-chat-api.rst`:
- Line 463: The doc currently mixes "terminal event" and "stream pauses" for the
approval_required event—pick one semantic and make the description consistent:
either mark approval_required as a terminal event (meaning the stream ends and a
separate resume/restart message is required) or as a pausing event (meaning the
stream remains open and resumes automatically once approval is granted). Update
the sentence describing approval_required to use the chosen protocol term
consistently and adjust any following text that references resume flow to match
(search for the token approval_required and the phrases "terminal" or "pauses"
to locate the related lines).
```rst
``approval_required``
"""""""""""""""""""""

Emitted when Holmes needs explicit user approval before executing a tool. This is a **terminal event** — the stream pauses until approval is granted.
```
Clarify terminal semantics for approval_required.
Line 463 says “terminal event” and “stream pauses until approval is granted” in the same sentence. Those are conflicting states; please pick one protocol term and keep it consistent with the resume flow.
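Whichever semantic the doc settles on, the choice changes what a client must do. As one illustration, here is a minimal sketch assuming the *terminal* reading (the stream ends and the client must send a separate follow-up request with the decision). The field names `approval_id` and `pendingApproval` are hypothetical, not part of the documented API:

```javascript
// Hypothetical client-side handler under the "terminal event" semantic:
// approval_required ends the stream; the client records what needs approval
// and later re-POSTs the user's decision in a separate request.
function handleEvent(eventType, data, state) {
  switch (eventType) {
    case "approval_required":
      state.pendingApproval = data; // illustrative field; remember the request
      state.done = true;            // terminal: no more events on this stream
      break;
    case "ai_answer_end":
      state.analysis = data.analysis;
      state.done = true;
      break;
  }
  return state;
}
```

Under the *pausing* reading, `state.done` would stay `false` on `approval_required` and the same connection would keep reading; the point of the review comment is that the doc must commit to exactly one of these behaviors.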
```rst
**JavaScript (using EventSource or fetch):**

.. code-block:: javascript

   const response = await fetch(
     `https://api.robusta.dev/api/holmes/${accountId}/chat`,
     {
       method: "POST",
       headers: {
         "Content-Type": "application/json",
         Authorization: `Bearer ${apiKey}`,
       },
       body: JSON.stringify({ ask: "Which pods are failing prod-us?", stream: true }),
     }
   );

   const reader = response.body.getReader();
   const decoder = new TextDecoder();
   let eventType = null;

   while (true) {
     const { done, value } = await reader.read();
     if (done) break;

     const text = decoder.decode(value);
     for (const line of text.split("\n")) {
       if (line.startsWith("event: ")) {
         eventType = line.slice(7);
       } else if (line.startsWith("data: ") && eventType) {
         const data = JSON.parse(line.slice(6));
         switch (eventType) {
           case "ai_answer_end":
             console.log("Final analysis:", data.analysis);
             break;
           case "ai_message":
             console.log("AI:", data.content);
             break;
           case "start_tool_calling":
             console.log("Running tool:", data.tool_name);
             break;
           case "error":
             console.error("Error:", data.msg);
             break;
         }
         eventType = null;
       }
     }
   }
```
🧩 Analysis chain
🌐 Web query:
Can JavaScript EventSource API send POST requests with custom Authorization headers
💡 Result:
No. The browser’s native EventSource API always opens the SSE connection with an HTTP GET request and does not provide any way to set custom request headers (so you can’t send an Authorization: Bearer ... header, and you can’t switch it to POST). [1] [2] [3]
Common alternatives/workarounds:

- Cookie-based auth (optionally with `withCredentials: true`) so the browser automatically sends credentials with the GET. [4]
- Token in the URL (e.g., query string) if acceptable for your threat model. [5]
- Don't use `EventSource`: implement SSE over `fetch()` streaming, which does let you use POST and set `Authorization` headers. Libraries exist to help with parsing/reconnect logic (e.g., fetch-based SSE helpers). [6]
- Use an `EventSource` polyfill that supports custom headers / non-GET methods (implemented via XHR/fetch under the hood), e.g. `sse.js` or `event-source-polyfill`. [7] [8]
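As a sketch of the token-in-URL workaround (only if exposing the token in a query string fits your threat model): native `EventSource` is GET-only and cannot set an `Authorization` header, so the credential has to travel in the URL instead. The `?token=` parameter name below is hypothetical; the Holmes API is not documented to accept it, and the `EventSourceImpl` parameter just makes the sketch runtime-agnostic:

```javascript
// Build the hypothetical authenticated SSE URL; encodeURIComponent guards
// against keys containing reserved characters.
function buildStreamUrl(accountId, apiKey) {
  return `https://api.robusta.dev/api/holmes/${accountId}/chat?token=${encodeURIComponent(apiKey)}`;
}

// Subscribe using whatever EventSource implementation is available
// (browser global, polyfill, or a test double).
function subscribe(EventSourceImpl, accountId, apiKey, onAiMessage) {
  const es = new EventSourceImpl(buildStreamUrl(accountId, apiKey));
  es.addEventListener("ai_message", (e) => onAiMessage(JSON.parse(e.data)));
  return es;
}
```

In practice the fetch-streaming approach shown in the doc avoids this entirely, since it supports POST and real headers.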
Sources
[1] https://w3c.github.io/wot-profile/
[2] https://www.ietf.org/archive/id/draft-gupta-httpapi-events-query-01.html
[3] https://stackoverflow.com/questions/36201347/custom-headers-in-eventsource-connection
[4] https://developer.mozilla.org/en-US/docs/Web/API/EventSource/withCredentials
[5] https://www.hahwul.com/sec/web-security/sse/
[6] https://github.com/lukeed/fetch-event-stream
[7] https://github.com/mpetazzoni/sse.js/
[8] https://www.npmjs.com/package/event-source-polyfill/v/1.0.23
🌐 Web query:
Server-Sent Events SSE parsing ReadableStream TextDecoder chunk boundaries best practices
💡 Result:
Key constraints (why naïve splitting breaks)

- `ReadableStream` chunk boundaries are arbitrary: a chunk can end mid-UTF-8 code point, mid-line, or mid-event, so you must buffer and parse incrementally. (developer.mozilla.org)
- SSE is line-oriented and UTF-8 (no other encoding), and events are delimited by a blank line (i.e., two line breaks in a row, accounting for `\n`, `\r`, or `\r\n`). (html.spec.whatwg.org)

Best-practice approach

1) Decode bytes as a stream

Prefer:

- `response.body.pipeThrough(new TextDecoderStream())` (cleanest), or (developer.mozilla.org)
- `TextDecoder.decode(chunk, { stream: true })` and finally `decoder.decode()` to flush.

This avoids corrupting characters that span chunks. (developer.mozilla.org)

2) Buffer text and parse by lines, not chunks

Maintain:

- `buffer` (string) holding any incomplete trailing line between reads.
- Per-event state: `eventName`, `dataLines[]`, `lastEventId`, `retry`.

Then:

- Append decoded text to `buffer`
- Extract complete lines (handle `\r\n`, `\n`, or `\r`)
- A blank line triggers dispatch of the accumulated event. (html.spec.whatwg.org)
3) Implement SSE field rules exactly

Per spec/standard behavior:

- Lines starting with `:` are comments → ignore. (html.spec.whatwg.org)
- Parse `field: value` (optional single leading space after `:`). Unknown fields ignored. (html.spec.whatwg.org)
- Multiple `data:` lines are joined with `\n` (and browsers remove the final trailing newline on dispatch). (developer.mozilla.org)
- Ignore a leading UTF-8 BOM if present. (w3.org)
4) Avoid “wait forever” buffering delays
Parsing must be line-buffered; block buffering can delay dispatch (especially if you wait for large buffers instead of scanning for line endings). (w3.org)
5) Put safety limits on buffering
In real systems, add caps like maxLineBytes / maxEventBytes to prevent memory blow-ups on malformed streams. (Example of such protections in an SSE parser implementation.) (pekko.apache.org)
Minimal reference implementation pattern (browser Fetch → parsed events)

If you don't want to reimplement the edge cases, use a spec-oriented parser library that already handles chunk boundaries and SSE semantics, e.g. parse-sse:

- Bytes → `TextDecoderStream` → SSE transform → async iteration of events. (github.com)

Conceptually:

```javascript
const textStream = response.body.pipeThrough(new TextDecoderStream());
for await (const event of textStream.pipeThrough(new ServerSentEventTransformStream())) {
  // event.type, event.data, event.lastEventId, event.retry
}
```

(That exact composition is the recommended usage pattern in parse-sse.) (github.com)
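Putting the buffering rules above into one self-contained sketch: a simplified parser that decodes with `{ stream: true }`, keeps the trailing partial line in a buffer, and dispatches only on blank lines. It deliberately skips comment lines, `id:`, `retry:`, and the byte-cap safety limits a production parser would add:

```javascript
// Chunk-safe incremental SSE parser (sketch). feed() accepts raw byte chunks;
// onEvent(type, data) fires once per complete event regardless of how the
// network split the chunks.
function createSSEParser(onEvent) {
  const decoder = new TextDecoder();
  let buffer = "";             // incomplete trailing line between reads
  let eventType = "message";   // SSE default event type
  let dataLines = [];

  return function feed(chunk, done = false) {
    // { stream: !done } keeps multi-byte characters intact across chunks.
    buffer += decoder.decode(chunk, { stream: !done });
    const lines = buffer.split(/\r\n|\r|\n/);
    buffer = lines.pop() ?? ""; // last element may be a partial line

    for (const line of lines) {
      if (line === "") {
        // Blank line: dispatch the accumulated event, then reset state.
        if (dataLines.length) onEvent(eventType, dataLines.join("\n"));
        eventType = "message";
        dataLines = [];
      } else if (line.startsWith("event:")) {
        eventType = line.slice(6).replace(/^ /, "");
      } else if (line.startsWith("data:")) {
        dataLines.push(line.slice(5).replace(/^ /, ""));
      }
    }
  };
}
```

Because dispatch happens only on the blank-line delimiter, an `event:` line split across two chunks is reassembled correctly instead of being silently dropped, which is exactly the failure mode of the naïve `split("\n")` loop in the doc.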
Citations:
- 1: https://developer.mozilla.org/en-US/docs/Web/API/ReadableStreamDefaultReader/read?utm_source=openai
- 2: https://html.spec.whatwg.org/dev/server-sent-events.html?utm_source=openai
- 3: https://developer.mozilla.org/en-US/docs/Web/API/TextDecoderStream?utm_source=openai
- 4: https://developer.mozilla.org/en-US/docs/Web/API/ReadableStreamDefaultReader/read?utm_source=openai
- 5: https://html.spec.whatwg.org/dev/server-sent-events.html?utm_source=openai
- 6: https://html.spec.whatwg.org/dev/server-sent-events.html?utm_source=openai
- 7: https://html.spec.whatwg.org/dev/server-sent-events.html?utm_source=openai
- 8: https://developer.mozilla.org/en-US/docs/Web/API/Server-sent_events/Using_server-sent_events?utm_source=openai
- 9: https://www.w3.org/TR/2012/WD-eventsource-20120426/?utm_source=openai
- 10: https://www.w3.org/TR/2012/WD-eventsource-20120426/?utm_source=openai
- 11: https://pekko.apache.org/docs/pekko-http/current/common/sse-support.html?utm_source=openai
- 12: https://github.com/sindresorhus/parse-sse?utm_source=openai
- 13: https://github.com/sindresorhus/parse-sse?utm_source=openai
🌐 Web query:
EventSource API browser support POST method authentication headers MDN
💡 Result:
- Browser support (native `EventSource`): Supported in modern Chrome, Firefox, Safari, and Chromium-based Edge; not supported in Internet Explorer (and legacy Edge). MDN's SSE docs include the compatibility table. [1]
- HTTP method: The browser `EventSource` API is GET-only (no way to specify POST). [2]
- Request headers / auth headers: The `EventSource(url, options)` constructor only allows the `withCredentials` option (for cookies); it does not provide any way to set custom request headers (e.g., `Authorization`). [3]
- Authentication options:
  - Cookies (same-origin by default; cross-origin via `withCredentials: true` plus proper CORS). [1][3]
  - If you need Bearer tokens/custom headers or POST, use an alternative client approach (e.g., `fetch()` streaming) rather than native `EventSource`. [3]

Sources: MDN "Using server-sent events" [1], web.dev EventSource basics (GET-only) [2], MDN EventSource() constructor/options (withCredentials only) [3].
Fix the JS example: heading is misleading, and parser is chunk-unsafe for SSE events.
At Line 538, the heading "EventSource or fetch" is misleading. Native browser EventSource cannot send POST requests or custom Authorization headers—only GET with optional cookie credentials. The shown code uses fetch with POST and Authorization: Bearer, which EventSource does not support.
Additionally, Lines 562–584 are vulnerable to chunk boundaries. ReadableStream chunks can split anywhere (mid-line, mid-character), so naive split("\n") loses or corrupts SSE events. Per SSE spec, events are delimited by blank lines, and the parser must buffer incrementally with proper UTF-8 stream handling.
**Proposed doc fix (heading + robust parser)**

```diff
-**JavaScript (using EventSource or fetch):**
+**JavaScript (using fetch + ReadableStream):**

 .. code-block:: javascript

    const reader = response.body.getReader();
    const decoder = new TextDecoder();
-   let eventType = null;
+   let buffer = "";
+   let eventType = null;
+   let dataLines = [];

    while (true) {
      const { done, value } = await reader.read();
      if (done) break;
-
-     const text = decoder.decode(value);
-     for (const line of text.split("\n")) {
-       if (line.startsWith("event: ")) {
-         eventType = line.slice(7);
-       } else if (line.startsWith("data: ") && eventType) {
-         const data = JSON.parse(line.slice(6));
-         switch (eventType) {
-           case "ai_answer_end":
-             console.log("Final analysis:", data.analysis);
-             break;
-           case "ai_message":
-             console.log("AI:", data.content);
-             break;
-           case "start_tool_calling":
-             console.log("Running tool:", data.tool_name);
-             break;
-           case "error":
-             console.error("Error:", data.msg);
-             break;
-         }
-         eventType = null;
-       }
+     buffer += decoder.decode(value, { stream: true });
+     const lines = buffer.split(/\r?\n/);
+     buffer = lines.pop() ?? "";
+
+     for (const line of lines) {
+       if (line === "") {
+         if (eventType && dataLines.length) {
+           const data = JSON.parse(dataLines.join("\n"));
+           switch (eventType) {
+             case "ai_answer_end":
+               console.log("Final analysis:", data.analysis);
+               break;
+             case "ai_message":
+               console.log("AI:", data.content);
+               break;
+             case "start_tool_calling":
+               console.log("Running tool:", data.tool_name);
+               break;
+             case "error":
+               console.error("Error:", data.msg);
+               break;
+           }
+         }
+         eventType = null;
+         dataLines = [];
+         continue;
+       }
+
+       if (line.startsWith("event: ")) {
+         eventType = line.slice(7);
+       } else if (line.startsWith("data: ")) {
+         dataLines.push(line.slice(6));
+       }
      }
    }
```