A Go SDK for programmatic access to the GitHub Copilot CLI.
> **Note:** This SDK is in technical preview and may change in breaking ways.
## Installation

```bash
go get github.com/github/copilot-sdk/go
```

## Quick Start

Try the interactive chat sample (from the repo root):

```bash
cd go/samples
go run chat.go
```

Or start from this minimal program:

```go
package main

import (
    "context"
    "fmt"
    "log"

    copilot "github.com/github/copilot-sdk/go"
)

func main() {
    // Create the client
    client := copilot.NewClient(&copilot.ClientOptions{
        LogLevel: "error",
    })

    // Start the client
    if err := client.Start(context.Background()); err != nil {
        log.Fatal(err)
    }
    defer client.Stop()

    // Create a session (OnPermissionRequest is required)
    session, err := client.CreateSession(context.Background(), &copilot.SessionConfig{
        Model:               "gpt-5",
        OnPermissionRequest: copilot.PermissionHandler.ApproveAll,
    })
    if err != nil {
        log.Fatal(err)
    }
    defer session.Disconnect()

    // Set up event handler
    done := make(chan bool)
    session.On(func(event copilot.SessionEvent) {
        if event.Type == "assistant.message" {
            if event.Data.Content != nil {
                fmt.Println(*event.Data.Content)
            }
        }
        if event.Type == "session.idle" {
            close(done)
        }
    })

    // Send a message
    _, err = session.Send(context.Background(), copilot.MessageOptions{
        Prompt: "What is 2+2?",
    })
    if err != nil {
        log.Fatal(err)
    }

    // Wait for completion
    <-done
}
```

## Bundling the CLI

The SDK can bundle the Copilot CLI binary into your application's distribution using Go's `embed` package.
This allows you to bundle a specific CLI version and avoid external dependencies on the user's system.
Follow these steps to embed the CLI:
1. Run `go get -tool github.com/github/copilot-sdk/go/cmd/bundler`. This is a one-time setup step per project.
2. Run `go tool bundler` in your build environment just before building your application.

That's it! When your application calls `copilot.NewClient` without a `CLIPath` option or a `COPILOT_CLI_PATH` environment variable, the SDK automatically installs the embedded CLI to a cache directory and uses it for all operations.
## API Reference

### Client

- `NewClient(options *ClientOptions) *Client` - Create a new client
- `Start(ctx context.Context) error` - Start the CLI server
- `Stop() error` - Stop the CLI server
- `ForceStop()` - Forcefully stop without graceful cleanup
- `CreateSession(config *SessionConfig) (*Session, error)` - Create a new session
- `ResumeSession(sessionID string, config *ResumeSessionConfig) (*Session, error)` - Resume an existing session
- `ResumeSessionWithOptions(sessionID string, config *ResumeSessionConfig) (*Session, error)` - Resume with additional configuration
- `ListSessions(filter *SessionListFilter) ([]SessionMetadata, error)` - List sessions (with optional filter)
- `DeleteSession(sessionID string) error` - Delete a session permanently
- `GetLastSessionID(ctx context.Context) (*string, error)` - Get the ID of the most recently updated session
- `GetState() ConnectionState` - Get connection state
- `Ping(message string) (*PingResponse, error)` - Ping the server
- `GetForegroundSessionID(ctx context.Context) (*string, error)` - Get the session ID currently displayed in the TUI (TUI+server mode only)
- `SetForegroundSessionID(ctx context.Context, sessionID string) error` - Request the TUI to display a specific session (TUI+server mode only)
- `On(handler SessionLifecycleHandler) func()` - Subscribe to all lifecycle events; returns an unsubscribe function
- `OnEventType(eventType SessionLifecycleEventType, handler SessionLifecycleHandler) func()` - Subscribe to a specific lifecycle event type
Session Lifecycle Events:
```go
// Subscribe to all lifecycle events
unsubscribe := client.On(func(event copilot.SessionLifecycleEvent) {
    fmt.Printf("Session %s: %s\n", event.SessionID, event.Type)
})
defer unsubscribe()

// Subscribe to a specific event type
unsubscribe = client.OnEventType(copilot.SessionLifecycleForeground, func(event copilot.SessionLifecycleEvent) {
    fmt.Printf("Session %s is now in foreground\n", event.SessionID)
})
```

Event types: `SessionLifecycleCreated`, `SessionLifecycleDeleted`, `SessionLifecycleUpdated`, `SessionLifecycleForeground`, `SessionLifecycleBackground`
### ClientOptions

- `CLIPath` (string): Path to the CLI executable (default: `"copilot"` or the `COPILOT_CLI_PATH` env var)
- `CLIUrl` (string): URL of an existing CLI server (e.g., `"localhost:8080"`, `"http://127.0.0.1:9000"`, or just `"8080"`). When provided, the client will not spawn a CLI process.
- `Cwd` (string): Working directory for the CLI process
- `Port` (int): Server port for TCP mode (default: 0 for random)
- `UseStdio` (bool): Use stdio transport instead of TCP (default: true)
- `LogLevel` (string): Log level (default: `"info"`)
- `AutoStart` (*bool): Auto-start the server on first use (default: true). Use `Bool(false)` to disable.
- `Env` ([]string): Environment variables for the CLI process (default: inherits from the current process)
- `GitHubToken` (string): GitHub token for authentication. When provided, takes priority over other auth methods.
- `UseLoggedInUser` (*bool): Whether to use the logged-in user for authentication (default: true, but false when `GitHubToken` is provided). Cannot be used with `CLIUrl`.
- `Telemetry` (*TelemetryConfig): OpenTelemetry configuration for the CLI process. Providing this enables telemetry; no separate flag is needed. See the Telemetry section below.
### SessionConfig

- `Model` (string): Model to use (`"gpt-5"`, `"claude-sonnet-4.5"`, etc.). Required when using a custom provider.
- `ReasoningEffort` (string): Reasoning effort level for models that support it (`"low"`, `"medium"`, `"high"`, `"xhigh"`). Use `ListModels()` to check which models support this option.
- `SessionID` (string): Custom session ID
- `Tools` ([]Tool): Custom tools exposed to the CLI
- `SystemMessage` (*SystemMessageConfig): System message configuration. Supports three modes:
  - `append` (default): Appends `Content` after the SDK-managed prompt
  - `replace`: Replaces the entire prompt with `Content`
  - `customize`: Selectively override individual sections via the `Sections` map (keys: `SectionIdentity`, `SectionTone`, `SectionToolEfficiency`, `SectionEnvironmentContext`, `SectionCodeChangeRules`, `SectionGuidelines`, `SectionSafety`, `SectionToolInstructions`, `SectionCustomInstructions`, `SectionLastInstructions`; values: `SectionOverride` with an `Action` and optional `Content`)
- `Provider` (*ProviderConfig): Custom API provider configuration (BYOK). See the Custom Providers section.
- `Streaming` (bool): Enable streaming delta events
- `InfiniteSessions` (*InfiniteSessionConfig): Automatic context compaction configuration
- `OnPermissionRequest` (PermissionHandlerFunc): Required. Handler called before each tool execution to approve or deny it. Use `copilot.PermissionHandler.ApproveAll` to allow everything, or provide a custom function for fine-grained control. See the Permission Handling section.
- `OnUserInputRequest` (UserInputHandler): Handler for user input requests from the agent (enables the `ask_user` tool). See the User Input Requests section.
- `Hooks` (*SessionHooks): Hook handlers for session lifecycle events. See the Session Hooks section.
### ResumeSessionConfig

- `OnPermissionRequest` (PermissionHandlerFunc): Required. Handler called before each tool execution to approve or deny it. See the Permission Handling section.
- `Tools` ([]Tool): Tools to expose when resuming
- `ReasoningEffort` (string): Reasoning effort level for models that support it
- `Provider` (*ProviderConfig): Custom API provider configuration (BYOK). See the Custom Providers section.
- `Streaming` (bool): Enable streaming delta events
### Session

- `Send(ctx context.Context, options MessageOptions) (string, error)` - Send a message
- `On(handler SessionEventHandler) func()` - Subscribe to events (returns an unsubscribe function)
- `Abort(ctx context.Context) error` - Abort the currently processing message
- `GetMessages(ctx context.Context) ([]SessionEvent, error)` - Get message history
- `Disconnect() error` - Disconnect the session (releases in-memory resources, preserves disk state)
- `Destroy() error` - (Deprecated) Use `Disconnect()` instead
### Helpers

- `Bool(v bool) *bool` - Helper to create bool pointers for the `AutoStart` option
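The helper exists because optional boolean fields like `AutoStart` are pointers, so "unset" (`nil`) can be distinguished from an explicit `false`, and Go has no pointer literals for basic types. A standalone sketch of the same pattern (the `Bool` function here mirrors the SDK helper but is defined locally):

```go
package main

import "fmt"

// Bool returns a pointer to v. Go rejects `&false`, so optional *bool
// fields are populated through a small function like this.
func Bool(v bool) *bool { return &v }

func main() {
	autoStart := Bool(false) // distinguishable from a nil (unset) pointer
	fmt.Println(autoStart != nil, *autoStart)
}
```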
## System Message

Control the system prompt using `SystemMessage` in the session config:
```go
session, err := client.CreateSession(ctx, &copilot.SessionConfig{
    SystemMessage: &copilot.SystemMessageConfig{
        Content: "Always check for security vulnerabilities before suggesting changes.",
    },
})
```

The SDK auto-injects environment context, tool instructions, and security guardrails. The default CLI persona is preserved, and your `Content` is appended after the SDK-managed sections. To change the persona or fully redefine the prompt, use `Mode: "replace"` or `Mode: "customize"`.
Use Mode: "customize" to selectively override individual sections of the prompt while preserving the rest:
```go
session, err := client.CreateSession(ctx, &copilot.SessionConfig{
    SystemMessage: &copilot.SystemMessageConfig{
        Mode: "customize",
        Sections: map[string]copilot.SectionOverride{
            // Replace the tone/style section
            copilot.SectionTone: {Action: "replace", Content: "Respond in a warm, professional tone. Be thorough in explanations."},
            // Remove coding-specific rules
            copilot.SectionCodeChangeRules: {Action: "remove"},
            // Append to the existing guidelines
            copilot.SectionGuidelines: {Action: "append", Content: "\n* Always cite data sources"},
        },
        // Additional instructions appended after all sections
        Content: "Focus on financial analysis and reporting.",
    },
})
```

Available section constants: `SectionIdentity`, `SectionTone`, `SectionToolEfficiency`, `SectionEnvironmentContext`, `SectionCodeChangeRules`, `SectionGuidelines`, `SectionSafety`, `SectionToolInstructions`, `SectionCustomInstructions`, `SectionLastInstructions`.
Each section override supports four actions:
- `replace` - Replace the section content entirely
- `remove` - Remove the section from the prompt
- `append` - Add content after the existing section
- `prepend` - Add content before the existing section
Unknown section IDs are handled gracefully: content from replace/append/prepend overrides is appended to additional instructions, and remove overrides are silently ignored.
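Conceptually, the four actions reduce to a string transformation on each section. A minimal sketch (illustrative only; `applyOverride` is not an SDK function, and the real implementation also handles the unknown-ID fallback described above):

```go
package main

import "fmt"

// applyOverride sketches the semantics of the four section actions.
// It returns the new section text and whether the section is kept.
func applyOverride(existing, action, content string) (string, bool) {
	switch action {
	case "replace":
		return content, true
	case "remove":
		return "", false
	case "append":
		return existing + content, true
	case "prepend":
		return content + existing, true
	}
	return existing, true // unknown action: leave the section untouched
}

func main() {
	out, _ := applyOverride("Be concise.", "append", " Cite sources.")
	fmt.Println(out) // Be concise. Cite sources.
}
```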
## Image Attachments

The SDK supports image attachments via the `Attachments` field in `MessageOptions`. You can attach images by providing their file path, or by passing base64-encoded data directly using a blob attachment:
```go
// File attachment - the runtime reads the file from disk
_, err = session.Send(context.Background(), copilot.MessageOptions{
    Prompt: "What's in this image?",
    Attachments: []copilot.Attachment{
        {
            Type: "file",
            Path: "/path/to/image.jpg",
        },
    },
})

// Blob attachment - provide base64 data directly
mimeType := "image/png"
_, err = session.Send(context.Background(), copilot.MessageOptions{
    Prompt: "What's in this image?",
    Attachments: []copilot.Attachment{
        {
            Type:     copilot.AttachmentTypeBlob,
            Data:     &base64ImageData,
            MIMEType: &mimeType,
        },
    },
})
```

Supported image formats include JPG, PNG, GIF, and other common image types. The agent's `view` tool can also read images directly from the filesystem, so you can also ask questions like:

```go
_, err = session.Send(context.Background(), copilot.MessageOptions{
    Prompt: "What does the most recent jpg in this directory portray?",
})
```

## Tools

Expose your own functionality to Copilot by attaching tools to a session.
Use DefineTool for type-safe tools with automatic JSON schema generation:
```go
type LookupIssueParams struct {
    ID string `json:"id" jsonschema:"Issue identifier"`
}

lookupIssue := copilot.DefineTool("lookup_issue", "Fetch issue details from our tracker",
    func(params LookupIssueParams, inv copilot.ToolInvocation) (any, error) {
        // params is automatically unmarshaled from the LLM's arguments
        issue, err := fetchIssue(params.ID)
        if err != nil {
            return nil, err
        }
        return issue.Summary, nil
    })

session, _ := client.CreateSession(context.Background(), &copilot.SessionConfig{
    Model: "gpt-5",
    Tools: []copilot.Tool{lookupIssue},
})
```

For more control over the JSON schema, use the `Tool` struct directly:
```go
lookupIssue := copilot.Tool{
    Name:        "lookup_issue",
    Description: "Fetch issue details from our tracker",
    Parameters: map[string]any{
        "type": "object",
        "properties": map[string]any{
            "id": map[string]any{
                "type":        "string",
                "description": "Issue identifier",
            },
        },
        "required": []string{"id"},
    },
    Handler: func(invocation copilot.ToolInvocation) (copilot.ToolResult, error) {
        args := invocation.Arguments.(map[string]any)
        issue, err := fetchIssue(args["id"].(string))
        if err != nil {
            return copilot.ToolResult{}, err
        }
        return copilot.ToolResult{
            TextResultForLLM: issue.Summary,
            ResultType:       "success",
            SessionLog:       fmt.Sprintf("Fetched issue %s", issue.ID),
        }, nil
    },
}

session, _ := client.CreateSession(context.Background(), &copilot.SessionConfig{
    Model: "gpt-5",
    Tools: []copilot.Tool{lookupIssue},
})
```

When the model selects a tool, the SDK automatically runs your handler (in parallel with other calls) and responds to the CLI's `tool.call` with the handler's result.
### Overriding Built-in Tools

If you register a tool with the same name as a built-in CLI tool (e.g. `edit_file`, `read_file`), the SDK returns an error unless you explicitly opt in by setting `OverridesBuiltInTool = true`. This flag signals that you intend to replace the built-in tool with your custom implementation.

```go
editFile := copilot.DefineTool("edit_file", "Custom file editor with project-specific validation",
    func(params EditFileParams, inv copilot.ToolInvocation) (any, error) {
        // your logic
        return nil, nil
    })
editFile.OverridesBuiltInTool = true
```

### Skipping Permission Prompts

Set `SkipPermission = true` on a tool to allow it to execute without triggering a permission prompt:
```go
safeLookup := copilot.DefineTool("safe_lookup", "A read-only lookup that needs no confirmation",
    func(params LookupParams, inv copilot.ToolInvocation) (any, error) {
        // your logic
        return nil, nil
    })
safeLookup.SkipPermission = true
```

## Streaming

Enable streaming to receive assistant response chunks as they're generated:
```go
package main

import (
    "context"
    "fmt"
    "log"

    copilot "github.com/github/copilot-sdk/go"
)

func main() {
    client := copilot.NewClient(nil)
    if err := client.Start(context.Background()); err != nil {
        log.Fatal(err)
    }
    defer client.Stop()

    session, err := client.CreateSession(context.Background(), &copilot.SessionConfig{
        Model:               "gpt-5",
        Streaming:           true,
        OnPermissionRequest: copilot.PermissionHandler.ApproveAll, // required for all sessions
    })
    if err != nil {
        log.Fatal(err)
    }
    defer session.Disconnect()

    done := make(chan bool)
    session.On(func(event copilot.SessionEvent) {
        if event.Type == "assistant.message_delta" {
            // Streaming message chunk - print incrementally
            if event.Data.DeltaContent != nil {
                fmt.Print(*event.Data.DeltaContent)
            }
        } else if event.Type == "assistant.reasoning_delta" {
            // Streaming reasoning chunk (if the model supports reasoning)
            if event.Data.DeltaContent != nil {
                fmt.Print(*event.Data.DeltaContent)
            }
        } else if event.Type == "assistant.message" {
            // Final message - complete content
            fmt.Println("\n--- Final message ---")
            if event.Data.Content != nil {
                fmt.Println(*event.Data.Content)
            }
        } else if event.Type == "assistant.reasoning" {
            // Final reasoning content (if the model supports reasoning)
            fmt.Println("--- Reasoning ---")
            if event.Data.Content != nil {
                fmt.Println(*event.Data.Content)
            }
        }
        if event.Type == "session.idle" {
            close(done)
        }
    })

    _, err = session.Send(context.Background(), copilot.MessageOptions{
        Prompt: "Tell me a short story",
    })
    if err != nil {
        log.Fatal(err)
    }
    <-done
}
```

When `Streaming: true`:
- `assistant.message_delta` events are sent with `DeltaContent` containing incremental text
- `assistant.reasoning_delta` events are sent with `DeltaContent` for reasoning/chain-of-thought (model-dependent)
- Accumulate `DeltaContent` values to build the full response progressively
- The final `assistant.message` and `assistant.reasoning` events contain the complete content

Note: `assistant.message` and `assistant.reasoning` (final events) are always sent regardless of the streaming setting.
## Infinite Sessions

By default, sessions use infinite sessions, which automatically manage context window limits through background compaction and persist state to a workspace directory.
```go
// Default: infinite sessions enabled with default thresholds
session, _ := client.CreateSession(context.Background(), &copilot.SessionConfig{
    Model: "gpt-5",
})

// Access the workspace path for checkpoints and files
fmt.Println(session.WorkspacePath())
// => ~/.copilot/session-state/{sessionId}/

// Custom thresholds
session, _ = client.CreateSession(context.Background(), &copilot.SessionConfig{
    Model: "gpt-5",
    InfiniteSessions: &copilot.InfiniteSessionConfig{
        Enabled:                       copilot.Bool(true),
        BackgroundCompactionThreshold: copilot.Float64(0.80), // Start compacting at 80% context usage
        BufferExhaustionThreshold:     copilot.Float64(0.95), // Block at 95% until compaction completes
    },
})

// Disable infinite sessions
session, _ = client.CreateSession(context.Background(), &copilot.SessionConfig{
    Model: "gpt-5",
    InfiniteSessions: &copilot.InfiniteSessionConfig{
        Enabled: copilot.Bool(false),
    },
})
```

When enabled, sessions emit compaction events:

- `session.compaction_start` - Background compaction started
- `session.compaction_complete` - Compaction finished (includes token counts)
## Custom Providers

The SDK supports custom OpenAI-compatible API providers (BYOK: Bring Your Own Key), including local providers like Ollama. When using a custom provider, you must specify the `Model` explicitly.

### ProviderConfig

- `Type` (string): Provider type: `"openai"`, `"azure"`, or `"anthropic"` (default: `"openai"`)
- `BaseURL` (string): API endpoint URL (required)
- `APIKey` (string): API key (optional for local providers like Ollama)
- `BearerToken` (string): Bearer token for authentication (takes precedence over `APIKey`)
- `WireApi` (string): API format for OpenAI/Azure: `"completions"` or `"responses"` (default: `"completions"`)
- `Azure.APIVersion` (string): Azure API version (default: `"2024-10-21"`)
Example with Ollama:
```go
session, err := client.CreateSession(context.Background(), &copilot.SessionConfig{
    Model: "deepseek-coder-v2:16b", // Required when using a custom provider
    Provider: &copilot.ProviderConfig{
        Type:    "openai",
        BaseURL: "http://localhost:11434/v1", // Ollama endpoint
        // APIKey is not required for Ollama
    },
})
```

Example with a custom OpenAI-compatible API:

```go
session, err := client.CreateSession(context.Background(), &copilot.SessionConfig{
    Model: "gpt-4",
    Provider: &copilot.ProviderConfig{
        Type:    "openai",
        BaseURL: "https://my-api.example.com/v1",
        APIKey:  os.Getenv("MY_API_KEY"),
    },
})
```

Example with Azure OpenAI:

```go
session, err := client.CreateSession(context.Background(), &copilot.SessionConfig{
    Model: "gpt-4",
    Provider: &copilot.ProviderConfig{
        Type:    "azure", // Must be "azure" for Azure endpoints, NOT "openai"
        BaseURL: "https://my-resource.openai.azure.com", // Just the host, no path
        APIKey:  os.Getenv("AZURE_OPENAI_KEY"),
        Azure: &copilot.AzureProviderOptions{
            APIVersion: "2024-10-21",
        },
    },
})
```

Important notes:

- When using a custom provider, the `Model` parameter is required. The SDK returns an error if no model is specified.
- For Azure OpenAI endpoints (`*.openai.azure.com`), you must use `Type: "azure"`, not `Type: "openai"`.
- The `BaseURL` should be just the host (e.g., `https://my-resource.openai.azure.com`). Do not include `/openai/v1` in the URL; the SDK handles path construction automatically.
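The last two rules are easy to check in your own configuration code. A standalone sketch (standard library only; `isAzureEndpoint` and `hasPath` are illustrative helpers, not SDK functions):

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// isAzureEndpoint reports whether a base URL points at Azure OpenAI,
// which requires Type: "azure".
func isAzureEndpoint(baseURL string) bool {
	u, err := url.Parse(baseURL)
	if err != nil {
		return false
	}
	return strings.HasSuffix(u.Hostname(), ".openai.azure.com")
}

// hasPath reports whether a base URL carries a path segment; for Azure
// the SDK expects just the host.
func hasPath(baseURL string) bool {
	u, err := url.Parse(baseURL)
	if err != nil {
		return false
	}
	return u.Path != "" && u.Path != "/"
}

func main() {
	fmt.Println(isAzureEndpoint("https://my-resource.openai.azure.com"))    // true
	fmt.Println(hasPath("https://my-resource.openai.azure.com/openai/v1")) // true: strip the path
}
```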
## Telemetry

The SDK supports OpenTelemetry for distributed tracing. Provide a `Telemetry` config to enable trace export and automatic W3C Trace Context propagation.

```go
client := copilot.NewClient(&copilot.ClientOptions{
    Telemetry: &copilot.TelemetryConfig{
        OTLPEndpoint: "http://localhost:4318",
    },
})
```

`TelemetryConfig` fields:

- `OTLPEndpoint` (string): OTLP HTTP endpoint URL
- `FilePath` (string): File path for JSON-lines trace output
- `ExporterType` (string): `"otlp-http"` or `"file"`
- `SourceName` (string): Instrumentation scope name
- `CaptureContent` (bool): Whether to capture message content
Trace context (traceparent/tracestate) is automatically propagated between the SDK and CLI on CreateSession, ResumeSession, and Send calls, and inbound when the CLI invokes tool handlers.
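For reference, the propagated `traceparent` header follows the W3C Trace Context layout: a version field, a 16-byte trace ID, an 8-byte parent span ID, and a flags byte, all hex-encoded and dash-separated. Building one by hand, with only the standard library (`buildTraceparent` is an illustrative helper; the SDK does this via OpenTelemetry's propagators):

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// buildTraceparent assembles a W3C traceparent header:
// version "00", random trace/span IDs, and sampled flags "01".
func buildTraceparent() string {
	buf := make([]byte, 24) // 16-byte trace ID + 8-byte parent span ID
	if _, err := rand.Read(buf); err != nil {
		panic(err)
	}
	return fmt.Sprintf("00-%s-%s-01",
		hex.EncodeToString(buf[:16]), // trace-id: 32 hex chars
		hex.EncodeToString(buf[16:])) // parent-id: 16 hex chars
}

func main() {
	tp := buildTraceparent()
	fmt.Println(len(tp)) // 55: 3 + 32 + 1 + 16 + 1 + 2
	fmt.Println(tp[:3])  // 00-
}
```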
Note: the current `ToolHandler` signature does not accept a `context.Context`, so the inbound trace context cannot be passed to handler code. Spans created inside a tool handler will not be automatically parented to the CLI's `execute_tool` span. A future version may add a context parameter.

Dependency: `go.opentelemetry.io/otel`
## Permission Handling

An `OnPermissionRequest` handler is required whenever you create or resume a session. The handler is called before the agent executes each tool (file writes, shell commands, custom tools, etc.) and must return a decision.

Use the built-in `PermissionHandler.ApproveAll` helper to allow every tool call without any checks:
```go
session, err := client.CreateSession(context.Background(), &copilot.SessionConfig{
    Model:               "gpt-5",
    OnPermissionRequest: copilot.PermissionHandler.ApproveAll,
})
```

Provide your own `PermissionHandlerFunc` to inspect each request and apply custom logic:

```go
session, err := client.CreateSession(context.Background(), &copilot.SessionConfig{
    Model: "gpt-5",
    OnPermissionRequest: func(request copilot.PermissionRequest, invocation copilot.PermissionInvocation) (copilot.PermissionRequestResult, error) {
        // request.Kind - what type of operation is being requested:
        //   copilot.KindShell   - executing a shell command
        //   copilot.Write       - writing or editing a file
        //   copilot.Read        - reading a file
        //   copilot.MCP         - calling an MCP tool
        //   copilot.CustomTool  - calling one of your registered tools
        //   copilot.URL         - fetching a URL
        //   copilot.Memory      - accessing or updating Copilot-managed memory
        //   copilot.Hook        - invoking a registered hook
        // request.ToolCallID      - pointer to the tool call that triggered this request
        // request.ToolName        - pointer to the name of the tool (for custom-tool / mcp)
        // request.FileName        - pointer to the file being written (for write)
        // request.FullCommandText - pointer to the full shell command (for shell)
        if request.Kind == copilot.KindShell {
            // Deny shell commands
            return copilot.PermissionRequestResult{Kind: copilot.PermissionRequestResultKindDeniedInteractivelyByUser}, nil
        }
        return copilot.PermissionRequestResult{Kind: copilot.PermissionRequestResultKindApproved}, nil
    },
})
```

| Constant | Meaning |
|---|---|
| `PermissionRequestResultKindApproved` | Allow the tool to run |
| `PermissionRequestResultKindDeniedInteractivelyByUser` | User explicitly denied the request |
| `PermissionRequestResultKindDeniedCouldNotRequestFromUser` | No approval rule matched and the user could not be asked |
| `PermissionRequestResultKindDeniedByRules` | Denied by a policy rule |
| `PermissionRequestResultKindNoResult` | Leave the permission request unanswered (protocol v1 only; not allowed for protocol v2) |
Pass `OnPermissionRequest` when resuming a session too; it is required:

```go
session, err := client.ResumeSession(context.Background(), sessionID, &copilot.ResumeSessionConfig{
    OnPermissionRequest: copilot.PermissionHandler.ApproveAll,
})
```

To let a specific custom tool bypass the permission prompt entirely, set `SkipPermission = true` on the tool. See Skipping Permission Prompts under Tools.
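Custom handlers often reduce to a small policy table keyed on the request kind. A standalone sketch of that shape, using plain strings rather than the SDK's typed constants (illustrative only; the kind and result names here are placeholders):

```go
package main

import "fmt"

// decide maps an operation kind to a result string, mirroring the
// handler pattern above: deny shell commands by rule, approve the rest.
func decide(kind string) string {
	deniedByRule := map[string]bool{
		"shell": true, // e.g. block all shell execution
	}
	if deniedByRule[kind] {
		return "denied-by-rules"
	}
	return "approved"
}

func main() {
	fmt.Println(decide("shell")) // denied-by-rules
	fmt.Println(decide("read"))  // approved
}
```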
## User Input Requests

Enable the agent to ask the user questions via the `ask_user` tool by providing an `OnUserInputRequest` handler:
```go
session, err := client.CreateSession(context.Background(), &copilot.SessionConfig{
    Model: "gpt-5",
    OnUserInputRequest: func(request copilot.UserInputRequest, invocation copilot.UserInputInvocation) (copilot.UserInputResponse, error) {
        // request.Question      - the question to ask
        // request.Choices       - optional slice of choices for multiple choice
        // request.AllowFreeform - whether freeform input is allowed (default: true)
        fmt.Printf("Agent asks: %s\n", request.Question)
        if len(request.Choices) > 0 {
            fmt.Printf("Choices: %v\n", request.Choices)
        }
        // Return the user's response
        return copilot.UserInputResponse{
            Answer:      "User's answer here",
            WasFreeform: true, // Whether the answer was freeform (not from choices)
        }, nil
    },
})
```

## Session Hooks

Hook into session lifecycle events by providing handlers in the `Hooks` configuration:
```go
session, err := client.CreateSession(context.Background(), &copilot.SessionConfig{
    Model: "gpt-5",
    Hooks: &copilot.SessionHooks{
        // Called before each tool execution
        OnPreToolUse: func(input copilot.PreToolUseHookInput, invocation copilot.HookInvocation) (*copilot.PreToolUseHookOutput, error) {
            fmt.Printf("About to run tool: %s\n", input.ToolName)
            // Return a permission decision and optionally modify args
            return &copilot.PreToolUseHookOutput{
                PermissionDecision: "allow",        // "allow", "deny", or "ask"
                ModifiedArgs:       input.ToolArgs, // Optionally modify tool arguments
                AdditionalContext:  "Extra context for the model",
            }, nil
        },
        // Called after each tool execution
        OnPostToolUse: func(input copilot.PostToolUseHookInput, invocation copilot.HookInvocation) (*copilot.PostToolUseHookOutput, error) {
            fmt.Printf("Tool %s completed\n", input.ToolName)
            return &copilot.PostToolUseHookOutput{
                AdditionalContext: "Post-execution notes",
            }, nil
        },
        // Called when the user submits a prompt
        OnUserPromptSubmitted: func(input copilot.UserPromptSubmittedHookInput, invocation copilot.HookInvocation) (*copilot.UserPromptSubmittedHookOutput, error) {
            fmt.Printf("User prompt: %s\n", input.Prompt)
            return &copilot.UserPromptSubmittedHookOutput{
                ModifiedPrompt: input.Prompt, // Optionally modify the prompt
            }, nil
        },
        // Called when the session starts
        OnSessionStart: func(input copilot.SessionStartHookInput, invocation copilot.HookInvocation) (*copilot.SessionStartHookOutput, error) {
            fmt.Printf("Session started from: %s\n", input.Source) // "startup", "resume", "new"
            return &copilot.SessionStartHookOutput{
                AdditionalContext: "Session initialization context",
            }, nil
        },
        // Called when the session ends
        OnSessionEnd: func(input copilot.SessionEndHookInput, invocation copilot.HookInvocation) (*copilot.SessionEndHookOutput, error) {
            fmt.Printf("Session ended: %s\n", input.Reason)
            return nil, nil
        },
        // Called when an error occurs
        OnErrorOccurred: func(input copilot.ErrorOccurredHookInput, invocation copilot.HookInvocation) (*copilot.ErrorOccurredHookOutput, error) {
            fmt.Printf("Error in %s: %s\n", input.ErrorContext, input.Error)
            return &copilot.ErrorOccurredHookOutput{
                ErrorHandling: "retry", // "retry", "skip", or "abort"
            }, nil
        },
    },
})
```

Available hooks:

- `OnPreToolUse` - Intercept tool calls before execution. Can allow/deny or modify arguments.
- `OnPostToolUse` - Process tool results after execution. Can modify results or add context.
- `OnUserPromptSubmitted` - Intercept user prompts. Can modify the prompt before processing.
- `OnSessionStart` - Run logic when a session starts or resumes.
- `OnSessionEnd` - Cleanup or logging when a session ends.
- `OnErrorOccurred` - Handle errors with retry/skip/abort strategies.
## Transports

### Stdio (default)

Communicates with the CLI via stdin/stdout pipes. Recommended for most use cases.

```go
client := copilot.NewClient(nil) // Uses stdio by default
```

### TCP

Communicates with the CLI via a TCP socket. Useful for distributed scenarios. Set `UseStdio: false` in `ClientOptions` (and optionally `Port`) to use TCP.
## Environment Variables

- `COPILOT_CLI_PATH` - Path to the Copilot CLI executable
## License

MIT