diff --git a/AGENTS.md b/AGENTS.md
index c05b440..2200cfd 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -17,9 +17,9 @@ git push -u origin feature/description-of-work
## Project Overview
-**py-code-mode** gives AI agents code execution with persistent skills and tool integration.
+**py-code-mode** gives AI agents code execution with persistent workflows and tool integration.
-The core idea: Agents write Python code. When a workflow succeeds, they save it as a **skill**. Next time, they invoke the skill directly - no re-planning required.
+The core idea: Agents write Python code. When a solution works, they save it as a **workflow**. Next time, they invoke the workflow directly - no re-planning required.
**Python version:** 3.12+ (see `pyproject.toml`)
@@ -34,7 +34,7 @@ src/py_code_mode/
subprocess/ # Jupyter kernel-based subprocess executor
container/ # Docker container executor
in_process/ # Same-process executor
- skills/ # Skill storage, library, and vector stores
+ workflows/ # Workflow storage, library, and vector stores
tools/ # Tool adapters: CLI, MCP, HTTP
adapters/ # CLI, MCP, HTTP adapter implementations
artifacts/ # Artifact storage (file, redis)
@@ -60,13 +60,13 @@ When agents write code, four namespaces are available:
| Namespace | Purpose |
|-----------|---------|
| `tools.*` | CLI commands, MCP servers, HTTP APIs |
-| `skills.*` | Reusable Python workflows with semantic search |
+| `workflows.*` | Reusable Python workflows with semantic search |
| `artifacts.*` | Persistent data storage across sessions |
| `deps.*` | Runtime Python package management |
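+
+A minimal sketch of all four namespaces inside a single `session.run()` call (the tool, query, and package names are illustrative):
+
+```python
+results = workflows.search("parse logs")         # semantic search over saved workflows
+raw = tools.curl.get(url="https://example.com")  # call a wrapped CLI tool
+artifacts.save("raw_page", raw)                  # persist data across sessions
+deps.add("beautifulsoup4")                       # install a package at runtime
+```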
### Storage vs Executor
-- **Storage** (FileStorage, RedisStorage): Owns skills and artifacts
+- **Storage** (FileStorage, RedisStorage): Owns workflows and artifacts
- **Executor** (InProcess, Subprocess, Container): Owns tools and deps via config
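+
+A minimal wiring sketch of this split, using the classes documented in `docs/ARCHITECTURE.md`:
+
+```python
+from pathlib import Path
+
+from py_code_mode import FileStorage, Session
+from py_code_mode.execution import InProcessConfig, InProcessExecutor
+
+storage = FileStorage(base_path=Path("./storage"))    # owns workflows/ and artifacts/
+config = InProcessConfig(tools_path=Path("./tools"))  # executor owns tools and deps
+executor = InProcessExecutor(config=config)
+session = Session(storage=storage, executor=executor)
+```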
### Executors
@@ -88,13 +88,13 @@ When agents write code, four namespaces are available:
uv run pytest
# Run specific test file
-uv run pytest tests/test_skills.py
+uv run pytest tests/test_workflows.py
# Run with verbose output
uv run pytest -v
# Run tests matching pattern
-uv run pytest -k "test_skill"
+uv run pytest -k "test_workflow"
# Run without parallelism (for debugging)
uv run pytest -n 0
@@ -143,7 +143,7 @@ uv run py-code-mode-mcp --base ~/.code-mode --redis redis://localhost:6379
Storage backends implement `StorageBackend` protocol:
- `get_serializable_access()` - For cross-process communication
-- `get_skill_library()` - Returns SkillLibrary
+- `get_workflow_library()` - Returns SkillLibrary
- `get_artifact_store()` - Returns ArtifactStore
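+
+Sketch of how a backend is consumed (method names from the protocol above; variable names are illustrative):
+
+```python
+library = storage.get_workflow_library()     # SkillLibrary for in-process execution
+store = storage.get_artifact_store()         # ArtifactStore
+access = storage.get_serializable_access()   # serializable config for subprocess/container
+```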
### Bootstrap Pattern
@@ -204,4 +204,4 @@ Tools are defined in YAML files. Key patterns:
| `src/py_code_mode/execution/protocol.py` | Executor protocol definition |
| `src/py_code_mode/storage/backends.py` | Storage backend implementations |
| `src/py_code_mode/tools/namespace.py` | ToolsNamespace, ToolProxy |
-| `src/py_code_mode/skills/library.py` | SkillLibrary implementation |
+| `src/py_code_mode/workflows/library.py` | SkillLibrary implementation |
diff --git a/README.md b/README.md
index e24cd1d..737bb1c 100644
--- a/README.md
+++ b/README.md
@@ -5,21 +5,21 @@
[](https://www.python.org/downloads/)
[](https://opensource.org/licenses/MIT)
-Give your AI agents code execution with persistent skills and tool integration.
+Give your AI agents code execution with persistent workflows and tool integration.
## The Core Idea
Multi-step agent workflows are fragile. Each step requires a new LLM call that can hallucinate, pick the wrong tool, or lose context.
-**py-code-mode takes a different approach:** Agents write Python code. When a workflow succeeds, they save it as a **skill**. Next time they need that capability, they invoke the skill directly—no re-planning required.
+**py-code-mode takes a different approach:** Agents write Python code. When a solution works, they save it as a **workflow**. Next time they need that capability, they invoke the workflow directly—no re-planning required.
```
-First time: Problem → Iterate → Success → Save as Skill
+First time: Problem → Iterate → Success → Save as Workflow
-Next time: Search Skills → Found! → Invoke (no iteration needed)
+Next time: Search Workflows → Found! → Invoke (no iteration needed)
-Later: Skill A + Skill B → Compose into Skill C
+Later: Workflow A + Workflow B → Compose into Workflow C
```
-Over time, agents build a library of reliable capabilities. Simple skills become building blocks for complex workflows.
+Over time, agents build a library of reliable capabilities. Simple workflows become building blocks for more complex ones.

@@ -28,19 +28,19 @@ Over time, agents build a library of reliable capabilities. Simple skills become
```python
from py_code_mode import Session
-# One line setup - auto-discovers tools/, skills/, artifacts/, requirements.txt
+# One line setup - auto-discovers tools/, workflows/, artifacts/, requirements.txt
async with Session.from_base("./.code-mode") as session:
result = await session.run('''
-# Search for existing skills
-results = skills.search("github analysis")
+# Search for existing workflows
+results = workflows.search("github analysis")
# Or create a new workflow
import json
repo_data = tools.curl.get(url="https://api.github.com/repos/anthropics/anthropic-sdk-python")
parsed = json.loads(repo_data)
-# Save successful workflows as skills
-skills.create(
+# Save the successful workflow for reuse
+workflows.create(
name="fetch_repo_stars",
source="""async def run(owner: str, repo: str) -> int:
import json
@@ -87,8 +87,8 @@ claude mcp add py-code-mode -- uvx --from git+https://github.com/xpcmdshell/py-c
## Features
-- **Skill persistence** - Save working code as reusable skills, invoke later without re-planning
-- **Semantic search** - Find relevant skills and tools by natural language description
+- **Workflow persistence** - Save working code as reusable workflows, invoke later without re-planning
+- **Semantic search** - Find relevant workflows and tools by natural language description
- **Tool integration** - Wrap CLI commands, MCP servers, and HTTP APIs as callable functions
- **Process isolation** - SubprocessExecutor runs code in a separate process with clean venv
- **Multiple storage backends** - FileStorage for local dev, RedisStorage for distributed deployments
@@ -99,7 +99,7 @@ claude mcp add py-code-mode -- uvx --from git+https://github.com/xpcmdshell/py-c
When agents write code, four namespaces are available:
**tools**: CLI commands, MCP servers, and REST APIs wrapped as callable functions
-**skills**: Reusable Python workflows with semantic search
+**workflows**: Reusable Python workflows with semantic search
**artifacts**: Persistent data storage across sessions
**deps**: Runtime Python package management
@@ -108,12 +108,12 @@ When agents write code, four namespaces are available:
tools.curl.get(url="https://api.example.com/data")
tools.jq.query(filter=".key", input=json_data)
-# Skills: reusable workflows
-analysis = skills.invoke("analyze_repo", owner="anthropics", repo="anthropic-sdk-python")
+# Workflows: reusable Python recipes
+analysis = workflows.invoke("analyze_repo", owner="anthropics", repo="anthropic-sdk-python")
-# Skills can build on other skills
+# Workflows can build on other workflows
async def run(repos: list) -> dict:
- summaries = [skills.invoke("analyze_repo", **parse_repo(r)) for r in repos]
+ summaries = [workflows.invoke("analyze_repo", **parse_repo(r)) for r in repos]
return {"total": len(summaries), "results": summaries}
# Artifacts: persistent storage
@@ -130,7 +130,7 @@ For programmatic access without code strings, Session also provides facade metho
```python
# Direct API access (useful for MCP servers, framework integrations)
tools = await session.list_tools()
-skills = await session.search_skills("github analysis")
+workflows = await session.search_workflows("github analysis")
await session.save_artifact("data", {"key": "value"})
```
@@ -151,7 +151,7 @@ For MCP server installation, see [Getting Started](./docs/getting-started.md).
**Core Concepts:**
- **[Tools](./docs/tools.md)** - CLI, MCP, and REST API adapters
-- **[Skills](./docs/skills.md)** - Creating, composing, and managing workflows
+- **[Workflows](./docs/workflows.md)** - Creating, composing, and managing workflows
- **[Artifacts](./docs/artifacts.md)** - Persistent data storage patterns
- **[Dependencies](./docs/dependencies.md)** - Managing Python packages
diff --git a/docker/Dockerfile.tools b/docker/Dockerfile.tools
index 6561b69..9d0c4d2 100644
--- a/docker/Dockerfile.tools
+++ b/docker/Dockerfile.tools
@@ -26,8 +26,9 @@ RUN apt-get update && apt-get install -y --no-install-recommends \
COPY docker/configs/tools.yaml /app/tools.yaml
ENV TOOLS_CONFIG=/app/tools.yaml
-# Copy skills if present
-COPY docker/configs/skills/ /app/skills/
+# Copy workflows if present
+COPY docker/configs/workflows/ /app/workflows/
+ENV WORKFLOWS_PATH=/app/workflows
# Python dependencies are auto-installed from tools.yaml at startup
# See python_deps section in docker/configs/tools.yaml
diff --git a/docker/configs/skills/.gitkeep b/docker/configs/workflows/.gitkeep
similarity index 100%
rename from docker/configs/skills/.gitkeep
rename to docker/configs/workflows/.gitkeep
diff --git a/docs/ARCHITECTURE.md b/docs/ARCHITECTURE.md
index 3c636e3..2a5ecaf 100644
--- a/docs/ARCHITECTURE.md
+++ b/docs/ARCHITECTURE.md
@@ -1,6 +1,6 @@
# py-code-mode Architecture
-This document explains how tools, skills, and artifacts interact across different deployment scenarios.
+This document explains how tools, workflows, and artifacts interact across different deployment scenarios.
## Quick Reference
@@ -24,10 +24,10 @@ This document explains how tools, skills, and artifacts interact across differen
| Component | Purpose | Format |
|-----------|---------|--------|
| **Tools** | CLI commands, MCP servers, HTTP APIs | YAML definitions |
-| **Skills** | Reusable Python code recipes | `.py` files with `run()` function |
+| **Workflows** | Reusable Python code recipes | `.py` files with `run()` function |
| **Artifacts** | Persistent data storage | Binary data with metadata |
| **Deps** | Python package dependencies | `requirements.txt` (file) or Redis keys |
-| **VectorStore** | Cached skill embeddings for fast search | ChromaDB or Redis keys |
+| **VectorStore** | Cached workflow embeddings for fast search | ChromaDB or Redis keys |
## Agent-Facing Namespaces
@@ -36,19 +36,19 @@ When code executes, agents access four main namespaces:
| Namespace | Purpose | Operations |
|-----------|---------|-----------|
| **tools.\*** | Call CLI commands, MCP servers, HTTP APIs | `call()`, `list()`, `search()` |
-| **skills.\*** | Execute or manage reusable Python recipes | `invoke()`, `create()`, `delete()`, `list()`, `search()` |
+| **workflows.\*** | Execute or manage reusable Python recipes | `invoke()`, `create()`, `delete()`, `list()`, `search()` |
| **artifacts.\*** | Save and retrieve persistent data | `save()`, `load()`, `delete()`, `list()` |
| **deps.\*** | Manage Python package dependencies | `add()`, `remove()`, `list()`, `sync()` |
-All namespaces are automatically injected into code execution. Skills also have access to these namespaces.
+All namespaces are automatically injected into code execution. Workflows also have access to these namespaces.
---
## Storage Abstraction
-Storage handles where skills and artifacts live. Tools and deps are owned by executors via config.
+Storage handles where workflows and artifacts live. Tools and deps are owned by executors via config.
-| Storage Type | Use Case | Skills | Artifacts |
+| Storage Type | Use Case | Workflows | Artifacts |
|-------------|----------|--------|-----------|
| `FileStorage` | Local development | `.py` files | Binary files |
| `RedisStorage` | Distributed/production | Redis keys | Redis keys |
@@ -59,13 +59,13 @@ from pathlib import Path
from py_code_mode import Session, FileStorage, RedisStorage
from py_code_mode.execution import InProcessExecutor, InProcessConfig, ContainerExecutor, ContainerConfig
-# File-based storage for skills and artifacts
+# File-based storage for workflows and artifacts
storage = FileStorage(base_path=Path("./storage"))
-# Creates: ./storage/skills/, ./storage/artifacts/
+# Creates: ./storage/workflows/, ./storage/artifacts/
-# Redis-based storage for skills and artifacts
+# Redis-based storage for workflows and artifacts
storage = RedisStorage(url="redis://localhost:6379", prefix="myapp")
-# Uses keys: myapp:skills:*, myapp:artifacts:*
+# Uses keys: myapp:workflows:*, myapp:artifacts:*
# Configure executor with tools and deps (owned by executor, not storage)
config = InProcessConfig(
@@ -91,9 +91,9 @@ async with Session(storage=storage, executor=executor) as session:
**Key design:**
- `Session` accepts typed `Executor` instances
-- `FileStorage`/`RedisStorage` only handle skills and artifacts
+- `FileStorage`/`RedisStorage` only handle workflows and artifacts
- Tools and deps are configured via executor config (`tools_path`, `deps`, `deps_file`)
-- Session uses `StorageBackend` protocol for skills and artifacts
+- Session uses `StorageBackend` protocol for workflows and artifacts
## StorageBackend Protocol
@@ -103,7 +103,7 @@ The `StorageBackend` protocol provides a clean interface for storage backends:
class StorageBackend(Protocol):
"""Protocol for unified storage backend.
- Provides skills and artifacts storage. Tools and deps are owned by executors.
+ Provides workflows and artifacts storage. Tools and deps are owned by executors.
"""
def get_serializable_access(self) -> FileStorageAccess | RedisStorageAccess:
@@ -114,7 +114,7 @@ class StorageBackend(Protocol):
"""
...
- def get_skill_library(self) -> SkillLibrary:
+ def get_workflow_library(self) -> SkillLibrary:
"""Return SkillLibrary for in-process execution."""
...
@@ -125,7 +125,7 @@ class StorageBackend(Protocol):
**Design rationale:**
- `get_serializable_access()`: Returns path/connection info that can be sent to other processes (containers, subprocesses)
-- `get_skill_library()`, `get_artifact_store()`: Return live objects for in-process execution
+- `get_workflow_library()`, `get_artifact_store()`: Return live objects for in-process execution
- Tools and deps are owned by executors (via `config.tools_path`, `config.deps`)
- No wrapper layers or dict-like access - components are accessed directly
@@ -133,7 +133,7 @@ class StorageBackend(Protocol):
## Bootstrap Architecture
-Cross-process executors (SubprocessExecutor, ContainerExecutor) need to reconstruct the `tools`, `skills`, `artifacts` namespaces in their isolated environment. The bootstrap pattern handles this:
+Cross-process executors (SubprocessExecutor, ContainerExecutor) need to reconstruct the `tools`, `workflows`, `artifacts` namespaces in their isolated environment. The bootstrap pattern handles this:
```
Host Process Subprocess/Container
@@ -147,7 +147,7 @@ storage.to_bootstrap_config()
"base_path": "/path/to/storage", v
} +-------------------+
+ tools_path from executor | tools namespace |
- + deps from executor | skills namespace |
+ + deps from executor | workflows namespace |
| | artifacts namespace|
+---- (serialized) ------------> +-------------------+
```
@@ -156,7 +156,7 @@ storage.to_bootstrap_config()
| Function | Location | Purpose |
|----------|----------|---------|
-| `storage.to_bootstrap_config()` | `storage/backends.py` | Serialize storage config (skills, artifacts) |
+| `storage.to_bootstrap_config()` | `storage/backends.py` | Serialize storage config (workflows, artifacts) |
| `executor.config.tools_path` | Executor config | Path to tool YAML definitions |
| `bootstrap_namespaces(config)` | `execution/bootstrap.py` | Reconstruct namespaces from config |
@@ -166,7 +166,7 @@ storage.to_bootstrap_config()
"type": "file",
"base_path": "/absolute/path/to/storage"
}
-# Skills at base_path/skills/, artifacts at base_path/artifacts/
+# Workflows at base_path/workflows/, artifacts at base_path/artifacts/
# Tools come from executor config.tools_path (separate from storage)
```
@@ -177,7 +177,7 @@ storage.to_bootstrap_config()
"url": "redis://localhost:6379",
"prefix": "myapp"
}
-# Skills at myapp:skills:*, artifacts at myapp:artifacts:*
+# Workflows at myapp:workflows:*, artifacts at myapp:artifacts:*
# Tools come from executor config.tools_path (separate from storage)
```
@@ -186,7 +186,7 @@ storage.to_bootstrap_config()
- Cannot pass live Python objects across process boundaries
- Config dict is JSON-serializable and can be sent via IPC, HTTP, environment variables
- Tools path is passed separately from storage config (executor owns tools)
-- `bootstrap_namespaces()` returns a dict with `tools`, `skills`, `artifacts` ready for code execution
+- `bootstrap_namespaces()` returns a dict with `tools`, `workflows`, `artifacts` ready for code execution (sketched below)
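+
+A sketch of the round-trip (function names from the table above; the transport between processes is whatever IPC the executor uses):
+
+```python
+# Host process: produce a JSON-serializable description of storage
+config = storage.to_bootstrap_config()
+
+# Subprocess/container: rebuild live namespaces from that description
+namespaces = bootstrap_namespaces(config)
+tools = namespaces["tools"]
+workflows = namespaces["workflows"]
+artifacts = namespaces["artifacts"]
+```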
## Session Architecture
@@ -195,8 +195,8 @@ Session orchestrates storage and execution:
```
Session(storage=StorageBackend, executor=Executor)
|
- +-- Storage provides (skills and artifacts only):
- | storage.get_skill_library() -> SkillLibrary
+ +-- Storage provides (workflows and artifacts only):
+ | storage.get_workflow_library() -> SkillLibrary
| storage.get_artifact_store() -> ArtifactStoreProtocol
|
+-- Executor provides (tools and deps):
@@ -208,7 +208,7 @@ Session(storage=StorageBackend, executor=Executor)
|
+-- Executor implementations:
+-- InProcessExecutor (default)
- | Gets skills/artifacts from storage, tools from config
+ | Gets workflows/artifacts from storage, tools from config
|
+-- ContainerExecutor (Docker)
| Receives serializable access + tools_path, reconstructs
@@ -220,9 +220,9 @@ Session(storage=StorageBackend, executor=Executor)
**Key Flow:**
1. User creates `Session(storage=storage, executor=executor)`
2. Session starts executor with storage backend
-3. Executor gets skills/artifacts from storage, tools from its own config
+3. Executor gets workflows/artifacts from storage, tools from its own config
4. Cross-process executors serialize storage access + tools_path
-5. Executor builds namespaces: `tools.*`, `skills.*`, `artifacts.*`
+5. Executor builds namespaces: `tools.*`, `workflows.*`, `artifacts.*`
6. User calls `session.run(code)` which delegates to executor
---
@@ -319,26 +319,26 @@ await session.run('deps.remove("requests")') # Removes
---
-## SkillsNamespace Decoupling
+## WorkflowsNamespace Decoupling
-`SkillsNamespace` is decoupled from executors and accepts a plain namespace dict:
+`WorkflowsNamespace` is decoupled from executors and accepts a plain namespace dict:
```python
-class SkillsNamespace:
+class WorkflowsNamespace:
def __init__(self, library: SkillLibrary, namespace: dict[str, Any]) -> None:
- """Initialize SkillsNamespace.
+ """Initialize WorkflowsNamespace.
Args:
- library: The skill library for skill lookup and storage.
- namespace: Dict containing tools, skills, artifacts for skill execution.
+ library: The workflow library for workflow lookup and storage.
+ namespace: Dict containing tools, workflows, artifacts for workflow execution.
Must be a plain dict, not an executor object.
"""
```
**Design rationale:**
-- Any executor (InProcess, Container, Subprocess) can use `SkillsNamespace`
+- Any executor (InProcess, Container, Subprocess) can use `WorkflowsNamespace`
- No coupling to specific executor implementations
-- Skills execute with `tools`, `skills`, `artifacts` from the namespace dict
+- Workflows execute with `tools`, `workflows`, `artifacts` from the namespace dict
- Explicit rejection of executor-like objects prevents accidental coupling
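+
+Hypothetical construction sketch (only `WorkflowsNamespace`, `SkillLibrary`, and the constructor signature above are from this doc; the wiring is illustrative):
+
+```python
+namespace = {"tools": tools_ns, "artifacts": artifact_store}  # plain dict, not an executor
+workflows_ns = WorkflowsNamespace(
+    library=storage.get_workflow_library(),
+    namespace=namespace,
+)
+namespace["workflows"] = workflows_ns  # lets workflows invoke other workflows
+```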
---
@@ -381,7 +381,7 @@ result = tools.curl.get.call_sync(url="...")
| | Your Agent | |
| | | |
| | storage = FileStorage(base_path=Path("./storage")) | |
-| | # Creates: skills/, artifacts/ subdirs | |
+| | # Creates: workflows/, artifacts/ subdirs | |
| | | |
| | config = InProcessConfig(tools_path=Path("./tools")) | |
| | executor = InProcessExecutor(config=config) | |
@@ -404,7 +404,7 @@ result = tools.curl.get.call_sync(url="...")
| | | | |
| +----------v------+ +------v------+ +--------v--------+ |
| | ./tools/ | |./storage/ | |./storage/ | |
-| | +-- curl.yaml | | skills/ | | artifacts/ | |
+| | +-- curl.yaml | | workflows/ | | artifacts/ | |
| | +-- nmap.yaml | | +-- *.py | | +-- *.bin | |
| +-----------------+ +-------------+ +-----------------+ |
| |
@@ -417,7 +417,7 @@ from pathlib import Path
from py_code_mode import Session, FileStorage
from py_code_mode.execution import InProcessConfig, InProcessExecutor
-# Storage for skills and artifacts
+# Storage for workflows and artifacts
storage = FileStorage(base_path=Path("./storage"))
# Executor with tools path (separate from storage)
@@ -444,7 +444,7 @@ async with Session(storage=storage, executor=executor) as session:
| | | |
| | storage = RedisStorage(url="redis://localhost:6379", | |
| | prefix="agent") | |
-| | # Uses agent:skills:*, agent:artifacts:* | |
+| | # Uses agent:workflows:*, agent:artifacts:* | |
| | | |
| | config = InProcessConfig(tools_path=Path("./tools")) | |
| | executor = InProcessExecutor(config=config) | |
@@ -474,7 +474,7 @@ async with Session(storage=storage, executor=executor) as session:
+--------------------------v-----------------v-----------+
| Redis |
| |
- | agent:skills:* | agent:artifacts:* |
+ | agent:workflows:* | agent:artifacts:* |
| (python code) | (binary data) |
| |
+--------------------------------------------------------+
@@ -486,7 +486,7 @@ from pathlib import Path
from py_code_mode import Session, RedisStorage
from py_code_mode.execution import InProcessConfig, InProcessExecutor
-# RedisStorage for skills and artifacts
+# RedisStorage for workflows and artifacts
storage = RedisStorage(url="redis://localhost:6379", prefix="agent")
# Executor with tools from local filesystem
@@ -498,13 +498,13 @@ async with Session(storage=storage, executor=executor) as session:
print(result.value)
```
-**Provisioning skills to Redis:**
+**Provisioning workflows to Redis:**
```bash
-# Skills (provisioned to Redis for distributed access)
+# Workflows (provisioned to Redis for distributed access)
python -m py_code_mode.store bootstrap \
- --source ./skills \
+ --source ./workflows \
--target redis://localhost:6379 \
- --prefix agent-skills
+ --prefix agent-workflows
# Tools stay on filesystem (executor loads from tools_path)
```
@@ -516,7 +516,7 @@ python -m py_code_mode.store bootstrap \
**Best for:** Process isolation with local development.
**Note:** Container backend is used with Session by passing `ContainerExecutor` explicitly.
-Tools come from executor config (mounted to container). Skills and artifacts from storage.
+Tools come from executor config (mounted to container). Workflows and artifacts from storage.
```
+------------------------------------------------------------------+
@@ -553,7 +553,7 @@ Tools come from executor config (mounted to container). Skills and artifacts fro
| || | | | || |
| || +----------v------+ +-----v-----+ +-----v-------+ || |
| || |/app/tools/ | |/app/ | |/workspace/ | || |
-| || |(from config, | | skills/ | |artifacts/ | || |
+| || |(from config, | | workflows/ | |artifacts/ | || |
| || | volume mounted) | | (volume) | |(volume) | || |
| || +-----------------+ +-----^-----+ +------^------+ || |
| +=============================|===============|==========+ |
@@ -564,7 +564,7 @@ Tools come from executor config (mounted to container). Skills and artifacts fro
| +-----------------------------+--------------+-------------+ |
| | Host Filesystem | |
| | | |
-| | ./tools/ ./storage/skills/ ./storage/artifacts/ | |
+| | ./tools/ ./storage/workflows/ ./storage/artifacts/ | |
| | +-- *.yaml +-- *.py +-- (files) | |
| | | |
| +----------------------------------------------------------+ |
@@ -575,7 +575,7 @@ Tools come from executor config (mounted to container). Skills and artifacts fro
**Environment (container receives via mounts and env vars):**
```
TOOLS_PATH=/app/tools # From config.tools_path (mounted)
-SKILLS_PATH=/app/skills # From storage (mounted)
+WORKFLOWS_PATH=/app/workflows # From storage (mounted)
ARTIFACTS_PATH=/workspace/artifacts # From storage (mounted)
```
@@ -586,7 +586,7 @@ ARTIFACTS_PATH=/workspace/artifacts # From storage (mounted)
**Best for:** Cloud deployments, horizontal scaling, shared state.
**Note:** Container backend is used with Session by passing `ContainerExecutor` explicitly.
-Tools still come from executor config (mounted). Skills and artifacts from Redis.
+Tools still come from executor config (mounted). Workflows and artifacts from Redis.
```
+------------------------------------------------------------------+
@@ -618,7 +618,7 @@ Tools still come from executor config (mounted). Skills and artifacts from Redis
| || | | || |
| || | Receives: | || |
| || | - tools_path from config (mounted) | || |
-| || | - RedisStorageAccess for skills/artifacts | || |
+| || | - RedisStorageAccess for workflows/artifacts | || |
| || | | || |
| || | +-------------+ +-------------+ +--------+ | || |
| || | |ToolRegistry | |SkillLibrary | |RedisArt| | || |
@@ -637,7 +637,7 @@ Tools still come from executor config (mounted). Skills and artifacts from Redis
+-------------------------v-------------v----------+
| Redis |
| |
- | agent:skills:* | agent:artifacts:* |
+ | agent:workflows:* | agent:artifacts:* |
| (python code) | (binary data) |
| |
| Provisioned via: |
@@ -649,16 +649,16 @@ Tools still come from executor config (mounted). Skills and artifacts from Redis
**Key flow:**
1. Session passes storage backend to `executor.start(storage=...)`
2. ContainerExecutor mounts tools_path from config
-3. ContainerExecutor passes Redis connection details for skills/artifacts
-4. SessionServer (in container) loads skills/artifacts from Redis, tools from mount
+3. ContainerExecutor passes Redis connection details for workflows/artifacts
+4. SessionServer (in container) loads workflows/artifacts from Redis, tools from mount
**Provisioning before deployment:**
```bash
-# Bootstrap skills to Redis (tools stay on filesystem)
+# Bootstrap workflows to Redis (tools stay on filesystem)
python -m py_code_mode.store bootstrap \
- --source ./skills \
+ --source ./workflows \
--target redis://redis:6379 \
- --prefix agent:skills
+ --prefix agent:workflows
# Tools are mounted from config.tools_path (not in Redis)
```
@@ -667,19 +667,19 @@ python -m py_code_mode.store bootstrap \
## Storage Comparison Matrix
-| Scenario | Storage | Tools Source | Skills Source | Artifacts Store |
+| Scenario | Storage | Tools Source | Workflows Source | Artifacts Store |
|----------|---------|--------------|---------------|-----------------|
-| Local dev | FileStorage | `config.tools_path/*.yaml` | `/skills/*.py` | `/artifacts/` |
-| Distributed | RedisStorage | `config.tools_path/*.yaml` | `:skills:*` | `:artifacts:*` |
-| Container + File | FileStorage | `config.tools_path` (mounted) | `/skills/` (mounted) | `/artifacts/` (mounted) |
+| Local dev | FileStorage | `config.tools_path/*.yaml` | `/workflows/*.py` | `/artifacts/` |
+| Distributed | RedisStorage | `config.tools_path/*.yaml` | `:workflows:*` | `:artifacts:*` |
+| Container + File | FileStorage | `config.tools_path` (mounted) | `/workflows/` (mounted) | `/artifacts/` (mounted) |
| Container + Redis | RedisStorage | `config.tools_path` (mounted) | Redis keys | Redis keys |
-**Key insight:** Tools always come from `config.tools_path` (executor owns tools). Only skills and artifacts vary by storage type.
+**Key insight:** Tools always come from `config.tools_path` (executor owns tools). Only workflows and artifacts vary by storage type.
**Decision tree:**
```
-Choose storage backend (for skills and artifacts):
+Choose storage backend (for workflows and artifacts):
|
+-- Single machine, local dev? -> FileStorage(base_path=Path("./storage"))
+-- Distributed, production? -> RedisStorage(url="redis://...", prefix="app")
@@ -735,8 +735,8 @@ SubprocessExecutor runs code in an IPython/Jupyter kernel within a subprocess. I
| || Subprocess (IPython Kernel) || |
| || || |
| || +-----------------------------------------------+ || |
-| || | tools.* skills.* artifacts.* namespaces | || |
-| || | (tools from config, skills/artifacts from | || |
+| || | tools.* workflows.* artifacts.* namespaces | || |
+| || | (tools from config, workflows/artifacts from | || |
| || | storage, injected at kernel start) | || |
| || +-----------------------------------------------+ || |
| || || |
@@ -835,18 +835,18 @@ Agent writes: "tools.curl.get(url='...')"
-### Skill Execution
+### Workflow Execution
```
-Agent writes: "skills.analyze_repo(repo='...')"
+Agent writes: "workflows.analyze_repo(repo='...')"
|
v
+------------------------+
-| SkillsNamespace | Agent-facing API:
+| WorkflowsNamespace | Agent-facing API:
| |
-| skills.analyze_repo() | # Direct attribute access (preferred)
-| skills.invoke("name") | # Explicit invocation
-| skills.search("...") | # Semantic search
-| skills.list() | # List all skills
-| skills.create(...) | # Create new skill
-| skills.delete("name") | # Delete skill
+| workflows.analyze_repo() | # Direct attribute access (preferred)
+| workflows.invoke("name") | # Explicit invocation
+| workflows.search("...") | # Semantic search
+| workflows.list() | # List all workflows
+| workflows.create(...) | # Create new workflow
+| workflows.delete("name") | # Delete workflow
+------------------------+
|
| (internally calls SkillLibrary)
@@ -856,7 +856,7 @@ Agent writes: "skills.analyze_repo(repo='...')"
| |
| .get("analyze_repo") | # Retrieve PythonSkill
| .search("query") | # Semantic search
-| .list_all() | # All skills
+| .list_all() | # All workflows
+------------------------+
|
v
@@ -873,7 +873,7 @@ Agent writes: "skills.analyze_repo(repo='...')"
|
-Skill has access to:
+Workflow has access to:
- tools (ToolsNamespace)
-- skills (SkillsNamespace)
+- workflows (WorkflowsNamespace)
- artifacts (ArtifactStore)
```
@@ -1054,22 +1054,22 @@ recipes: # Named presets
## Deployment Checklist
### Local Development (Session + FileStorage + SubprocessExecutor)
-- [ ] Create base storage directory for skills and artifacts
+- [ ] Create base storage directory for workflows and artifacts
- [ ] Add YAML tool definitions to separate tools directory
-- [ ] Add Python skill files to `/skills/`
+- [ ] Add Python workflow files to `/workflows/`
- [ ] Configure executor: `SubprocessConfig(tools_path=Path("./tools"))`
- [ ] Use `Session(storage=FileStorage(base_path=...), executor=SubprocessExecutor(config))`
### Local with Container Isolation (Container + File)
- [ ] Build Docker image with py-code-mode installed
- [ ] Configure `ContainerConfig(tools_path=Path("./tools"))` - will be mounted
-- [ ] Storage provides skills and artifacts directories (also mounted)
+- [ ] Storage provides workflows and artifacts directories (also mounted)
- [ ] Set `auth_disabled=True` for local development
- [ ] Use `Session(storage=FileStorage(...), executor=ContainerExecutor(config))`
### Production (Session + RedisStorage + SubprocessExecutor)
- [ ] Provision Redis instance
-- [ ] Bootstrap skills: `python -m py_code_mode.store bootstrap --target redis://... --prefix myapp:skills`
+- [ ] Bootstrap workflows: `python -m py_code_mode.store bootstrap --target redis://... --prefix myapp:workflows`
- [ ] Tools stay on filesystem (via executor config)
- [ ] Create storage: `RedisStorage(url="redis://...", prefix="myapp")`
- [ ] Configure executor: `SubprocessConfig(tools_path=Path("./tools"))`
@@ -1077,7 +1077,7 @@ recipes: # Named presets
### Production with Container Isolation
- [ ] Provision Redis instance
-- [ ] Bootstrap skills to Redis (as above)
+- [ ] Bootstrap workflows to Redis (as above)
- [ ] Tools on filesystem (mounted to container via `config.tools_path`)
- [ ] Create storage: `RedisStorage(url="redis://...", prefix="myapp")`
- [ ] Create executor: `ContainerExecutor(config=ContainerConfig(tools_path=..., auth_token=...))`
diff --git a/docs/architecture.jpg b/docs/architecture.jpg
index 860a191..96d65fb 100644
Binary files a/docs/architecture.jpg and b/docs/architecture.jpg differ
diff --git a/docs/artifacts.md b/docs/artifacts.md
index c16a8c9..38e2160 100644
--- a/docs/artifacts.md
+++ b/docs/artifacts.md
@@ -76,7 +76,7 @@ async def run(url: str) -> dict:
return {"status": "success", "content": content}
```
-### Sharing Data Between Skills
+### Sharing Data Between Workflows
```python
-# Skill 1: Collect data
+# Workflow 1: Collect data
diff --git a/docs/cli-reference.md b/docs/cli-reference.md
index 6720dad..18ffae8 100644
--- a/docs/cli-reference.md
+++ b/docs/cli-reference.md
@@ -23,8 +23,8 @@ py-code-mode-mcp [OPTIONS]
| Flag | Description | Default |
|------|-------------|---------|
-| `--base PATH` | Base directory with `tools/`, `skills/`, `artifacts/` subdirs | - |
-| `--storage PATH` | Path to storage directory (skills, artifacts) | - |
+| `--base PATH` | Base directory with `tools/`, `workflows/`, `artifacts/` subdirs | - |
+| `--storage PATH` | Path to storage directory (workflows, artifacts) | - |
| `--tools PATH` | Path to tools directory (YAML definitions) | - |
| `--redis URL` | Redis URL for storage | - |
| `--prefix PREFIX` | Redis key prefix | `py-code-mode` |
@@ -35,7 +35,7 @@ py-code-mode-mcp [OPTIONS]
### Examples
```bash
-# Base directory (auto-discovers tools/, skills/, artifacts/)
+# Base directory (auto-discovers tools/, workflows/, artifacts/)
py-code-mode-mcp --base ~/.code-mode
# Explicit storage + tools paths
@@ -54,13 +54,13 @@ When running, the server exposes these tools to MCP clients:
| Tool | Description |
|------|-------------|
-| `run_code` | Execute Python with access to tools, skills, artifacts, deps |
+| `run_code` | Execute Python with access to tools, workflows, artifacts, deps |
| `list_tools` | List available tools |
| `search_tools` | Semantic search for tools |
-| `list_skills` | List available skills |
-| `search_skills` | Semantic search for skills |
-| `create_skill` | Save a new skill |
-| `delete_skill` | Remove a skill |
+| `list_workflows` | List available workflows |
+| `search_workflows` | Semantic search for workflows |
+| `create_workflow` | Save a new workflow |
+| `delete_workflow` | Remove a workflow |
| `list_artifacts` | List saved artifacts |
| `list_deps` | List configured dependencies |
| `add_dep` | Add and install a dependency (if `--no-runtime-deps` not set) |
@@ -70,7 +70,7 @@ When running, the server exposes these tools to MCP clients:
## Store CLI
-Manage skills, tools, and dependencies in Redis stores.
+Manage workflows, tools, and dependencies in Redis stores.
### Usage
@@ -82,14 +82,14 @@ python -m py_code_mode.cli.store [OPTIONS]
#### bootstrap
-Push skills, tools, or deps from local files to a store.
+Push workflows, tools, or deps from local files to a store.
```bash
python -m py_code_mode.cli.store bootstrap \
--source PATH \
--target URL \
--prefix PREFIX \
- [--type skills|tools|deps] \
+ [--type workflows|tools|deps] \
[--clear] \
[--deps "pkg1" "pkg2"]
```
@@ -98,17 +98,17 @@ python -m py_code_mode.cli.store bootstrap \
|--------|-------------|---------|
| `--source PATH` | Source directory or requirements file | required |
| `--target URL` | Target store URL (e.g., `redis://localhost:6379`) | required |
-| `--prefix PREFIX` | Key prefix for items | `skills` |
-| `--type TYPE` | Type of items: `skills`, `tools`, or `deps` | `skills` |
+| `--prefix PREFIX` | Key prefix for items | `workflows` |
+| `--type TYPE` | Type of items: `workflows`, `tools`, or `deps` | `workflows` |
| `--clear` | Remove existing items before adding | false |
| `--deps` | Inline package specs (for deps only) | - |
**Examples:**
```bash
-# Bootstrap skills
+# Bootstrap workflows
python -m py_code_mode.cli.store bootstrap \
- --source ./skills \
+ --source ./workflows \
--target redis://localhost:6379 \
--prefix my-agent
@@ -133,9 +133,9 @@ python -m py_code_mode.cli.store bootstrap \
--type deps \
--deps "requests>=2.31" "pandas>=2.0"
-# Replace all existing skills
+# Replace all existing workflows
python -m py_code_mode.cli.store bootstrap \
- --source ./skills \
+ --source ./workflows \
--target redis://localhost:6379 \
--prefix my-agent \
--clear
@@ -149,13 +149,13 @@ List items in a store.
python -m py_code_mode.cli.store list \
--target URL \
--prefix PREFIX \
- [--type skills|tools|deps]
+ [--type workflows|tools|deps]
```
**Examples:**
```bash
-# List skills
+# List workflows
python -m py_code_mode.cli.store list \
--target redis://localhost:6379 \
--prefix my-agent
@@ -175,7 +175,7 @@ python -m py_code_mode.cli.store list \
#### pull
-Retrieve skills from a store to local files.
+Retrieve workflows from a store to local files.
```bash
python -m py_code_mode.cli.store pull \
@@ -187,16 +187,16 @@ python -m py_code_mode.cli.store pull \
**Example:**
```bash
-# Pull skills to review agent-created ones
+# Pull workflows to review agent-created ones
python -m py_code_mode.cli.store pull \
--target redis://localhost:6379 \
--prefix my-agent \
- --dest ./skills-from-redis
+ --dest ./workflows-from-redis
```
#### diff
-Compare local skills vs remote store.
+Compare local workflows vs remote store.
```bash
python -m py_code_mode.cli.store diff \
@@ -210,7 +210,7 @@ python -m py_code_mode.cli.store diff \
```bash
# See what agent added or changed
python -m py_code_mode.cli.store diff \
- --source ./skills \
+ --source ./workflows \
--target redis://localhost:6379 \
--prefix my-agent
```
@@ -225,12 +225,12 @@ Output shows:
## CI/CD Patterns
-### Deploy Skills to Production
+### Deploy Workflows to Production
```bash
# In CI pipeline
python -m py_code_mode.cli.store bootstrap \
- --source ./skills \
+ --source ./workflows \
--target $REDIS_URL \
--prefix production \
--clear
@@ -247,7 +247,7 @@ python -m py_code_mode.cli.store pull \
# Compare to source
python -m py_code_mode.cli.store diff \
- --source ./skills \
+ --source ./workflows \
--target $REDIS_URL \
--prefix production
```
diff --git a/docs/getting-started.md b/docs/getting-started.md
index d1a5c3c..11b0841 100644
--- a/docs/getting-started.md
+++ b/docs/getting-started.md
@@ -36,7 +36,7 @@ claude mcp add -s project py-code-mode \
claude mcp list
```
-The base directory will contain `skills/`, `artifacts/`, and optionally `tools/` subdirectories.
+The base directory will contain `workflows/`, `artifacts/`, and optionally `tools/` subdirectories.
## Your First Session
@@ -45,17 +45,17 @@ The base directory will contain `skills/`, `artifacts/`, and optionally `tools/`
```python
from py_code_mode import Session
-# One line setup - auto-discovers tools/, skills/, artifacts/, requirements.txt
+# One line setup - auto-discovers tools/, workflows/, artifacts/, requirements.txt
async with Session.from_base("./data") as session:
result = await session.run('''
-# Search for existing skills
-results = skills.search("data processing")
+# Search for existing workflows
+results = workflows.search("data processing")
# List available tools
all_tools = tools.list()
-# Create a simple skill
-skills.create(
+# Create a simple workflow
+workflows.create(
name="hello_world",
source="""async def run(name: str = "World") -> str:
return f"Hello, {name}!"
@@ -63,8 +63,8 @@ skills.create(
description="Simple greeting function"
)
-# Invoke the skill
-greeting = skills.invoke("hello_world", name="Python")
+# Invoke the workflow
+greeting = workflows.invoke("hello_world", name="Python")
print(greeting)
''')
@@ -82,42 +82,42 @@ async with Session.subprocess("~/.code-mode") as session:
Once installed, the MCP server provides these tools to Claude:
-- `run_code` - Execute Python code with `tools`, `skills`, `artifacts`, `deps` namespaces
+- `run_code` - Execute Python code with `tools`, `workflows`, `artifacts`, `deps` namespaces
- `list_tools`, `search_tools` - Discover available tools
-- `list_skills`, `search_skills`, `create_skill`, `delete_skill` - Manage skills
+- `list_workflows`, `search_workflows`, `create_workflow`, `delete_workflow` - Manage workflows
- `list_artifacts` - View stored data
- `list_deps`, `add_dep`, `remove_dep` - Manage dependencies
Just ask Claude to use py-code-mode:
```
-Can you search for skills related to GitHub analysis?
+Can you search for workflows related to GitHub analysis?
```
-Claude will use the `search_skills` MCP tool automatically.
+Claude will use the `search_workflows` MCP tool automatically.
## Basic Workflow
-1. **Search for existing skills** - Always check if someone already solved this
+1. **Search for existing workflows** - Always check if someone already solved this
2. **Invoke if found** - Reuse existing workflows
3. **Script if not found** - Write code to solve the problem
-4. **Create skill if reusable** - Save successful workflows for future use
+4. **Create workflow if reusable** - Save successful workflows for future use
```python
# 1. Search
-results = skills.search("fetch json from url")
+results = workflows.search("fetch json from url")
# 2. Invoke if found
if results:
- data = skills.invoke(results[0]["name"], url="https://api.example.com/data")
+ data = workflows.invoke(results[0]["name"], url="https://api.example.com/data")
else:
# 3. Script the solution
import json
response = tools.curl.get(url="https://api.example.com/data")
data = json.loads(response)
- # 4. Save as skill
- skills.create(
+ # 4. Save as workflow
+ workflows.create(
name="fetch_json",
source='''async def run(url: str) -> dict:
import json
@@ -131,6 +131,6 @@ else:
## Next Steps
- **[Tools](./tools.md)** - Learn how to add CLI, MCP, and REST API adapters
-- **[Skills](./skills.md)** - Deep dive on creating and composing workflows
+- **[Workflows](./workflows.md)** - Deep dive on creating and composing workflows
- **[Artifacts](./artifacts.md)** - Persist data across sessions
- **[Examples](../examples/)** - See complete agent implementations
diff --git a/docs/integrations.md b/docs/integrations.md
index 0cd58d8..a8470f9 100644
--- a/docs/integrations.md
+++ b/docs/integrations.md
@@ -28,10 +28,10 @@ The MCP server exposes these tools:
| Tool | Purpose |
|------|---------|
-| `run_code` | Execute Python with access to tools/skills/artifacts |
+| `run_code` | Execute Python with access to tools/workflows/artifacts |
| `list_tools` / `search_tools` | Discover available tools |
-| `list_skills` / `search_skills` | Discover available skills |
-| `create_skill` / `delete_skill` | Manage skills |
+| `list_workflows` / `search_workflows` | Discover available workflows |
+| `create_workflow` / `delete_workflow` | Manage workflows |
| `list_artifacts` | List saved data |
| `list_deps` / `add_dep` / `remove_dep` | Manage dependencies |
@@ -154,21 +154,21 @@ async with CodeExecutionTool(Path("./data"), Path("./tools")) as tool:
When registering with your framework, provide a clear tool description:
```python
-TOOL_DESCRIPTION = """Execute Python code with access to tools, skills, and artifacts.
+TOOL_DESCRIPTION = """Execute Python code with access to tools, workflows, and artifacts.
NAMESPACES:
- tools.* - Call registered tools (e.g., tools.curl.get(url="..."))
-- skills.* - Invoke reusable workflows (e.g., skills.invoke("fetch_json", url="..."))
+- workflows.* - Invoke reusable workflows (e.g., workflows.invoke("fetch_json", url="..."))
- artifacts.* - Persist data (e.g., artifacts.save("key", data))
- deps.* - Manage packages (e.g., deps.add("pandas"))
Variables persist across calls within the same session.
WORKFLOW:
-1. Search for existing skills: skills.search("your task")
-2. If found, invoke it: skills.invoke("skill_name", arg=value)
+1. Search for existing workflows: workflows.search("your task")
+2. If found, invoke it: workflows.invoke("workflow_name", arg=value)
3. Otherwise, write code using tools
-4. Save successful workflows as skills for reuse
+4. Save successful workflows for reuse
"""
```
@@ -176,19 +176,19 @@ WORKFLOW:
## Redis Backend for Multi-Agent
-When running multiple agent instances, use Redis for shared skill library:
+When running multiple agent instances, use Redis for shared workflow library:
```python
from py_code_mode import Session, RedisStorage
from py_code_mode.execution import SubprocessExecutor, SubprocessConfig
-# All instances share skills via Redis
+# All instances share workflows via Redis
storage = RedisStorage(url="redis://localhost:6379", prefix="my-agents")
config = SubprocessConfig(tools_path=Path("./tools"))
executor = SubprocessExecutor(config=config)
async with Session(storage=storage, executor=executor) as session:
- # Skills created by any agent are available to all
+ # Workflows created by any agent are available to all
result = await session.run(code)
```
diff --git a/docs/production.md b/docs/production.md
index 025ab6f..bd717de 100644
--- a/docs/production.md
+++ b/docs/production.md
@@ -5,7 +5,7 @@ Guide for deploying py-code-mode in production environments.
## Architecture
Production deployments typically combine:
-- **RedisStorage** - Shared skill library across instances
+- **RedisStorage** - Shared workflow library across instances
- **ContainerExecutor** - Isolated code execution
- **Pre-configured dependencies** - Locked down environment
- **Monitoring and observability** - Health checks and logging
@@ -15,7 +15,7 @@ import os
from py_code_mode import Session, RedisStorage
from py_code_mode.execution import ContainerExecutor, ContainerConfig
-# Shared skill library
+# Shared workflow library
storage = RedisStorage(url=os.getenv("REDIS_URL"), prefix="production")
# Isolated execution with authentication and pre-configured deps
@@ -110,12 +110,12 @@ def get_storage(tenant_id: str, redis_url: str) -> RedisStorage:
### Horizontal Scaling
-Multiple agent instances share skill library via Redis:
+Multiple agent instances share workflow library via Redis:
```
-┌─────────────┐     ┌──────────┐     ┌─────────────┐
-│ Instance 1  │────▶│  Redis   │◀────│ Instance 2  │
-└─────────────┘     │ (Skills) │     └─────────────┘
-                    └──────────┘
+┌─────────────┐     ┌─────────────┐     ┌─────────────┐
+│ Instance 1  │────▶│    Redis    │◀────│ Instance 2  │
+└─────────────┘     │ (Workflows) │     └─────────────┘
+                    └─────────────┘
▲
│
@@ -124,7 +124,7 @@ Multiple agent instances share skill library via Redis:
└─────────────┘
```
-All instances benefit when any instance creates a skill.
+All instances benefit when any instance creates a workflow.
### Load Balancing
diff --git a/docs/session-api.md b/docs/session-api.md
index af9f86c..4070959 100644
--- a/docs/session-api.md
+++ b/docs/session-api.md
@@ -4,12 +4,12 @@ Complete reference for the Session class - the primary interface for py-code-mod
## Overview
-Session wraps a storage backend and executor, providing a unified API for code execution with tools, skills, and artifacts.
+Session wraps a storage backend and executor, providing a unified API for code execution with tools, workflows, and artifacts.
```python
from py_code_mode import Session
-# Simplest: auto-discovers tools/, skills/, artifacts/, requirements.txt
+# Simplest: auto-discovers tools/, workflows/, artifacts/, requirements.txt
async with Session.from_base("./.code-mode") as session:
result = await session.run("tools.curl.get(url='https://api.github.com')")
```
@@ -43,7 +43,7 @@ Session.from_base(
**Auto-discovers:**
- `{base}/tools/` - Tool definitions (YAML files)
-- `{base}/skills/` - Skill files (Python)
+- `{base}/workflows/` - Workflow files (Python)
- `{base}/artifacts/` - Persistent data storage
- `{base}/requirements.txt` - Pre-configured dependencies
@@ -303,24 +303,24 @@ http_tools = await session.search_tools("make HTTP requests")
---
-## Skills Methods
+## Workflows Methods
-### list_skills()
+### list_workflows()
-List all available skills.
+List all available workflows.
```python
-async def list_skills(self) -> list[dict[str, Any]]
+async def list_workflows(self) -> list[dict[str, Any]]
```
-**Returns:** List of skill summaries (name, description, parameters - no source).
+**Returns:** List of workflow summaries (name, description, parameters - no source).
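+
+**Example** (a minimal sketch; the keys follow the summary fields listed above):
+
+```python
+for wf in await session.list_workflows():
+    print(wf["name"], "-", wf["description"])
+```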
-### search_skills()
+### search_workflows()
-Search skills by semantic similarity.
+Search workflows by semantic similarity.
```python
-async def search_skills(
+async def search_workflows(
self,
query: str,
limit: int = 5
@@ -330,33 +330,33 @@ async def search_skills(
**Example:**
```python
-skills = await session.search_skills("fetch GitHub repository data")
+workflows = await session.search_workflows("fetch GitHub repository data")
```
-### get_skill()
+### get_workflow()
-Get a specific skill by name, including source code.
+Get a specific workflow by name, including source code.
```python
-async def get_skill(self, name: str) -> dict[str, Any] | None
+async def get_workflow(self, name: str) -> dict[str, Any] | None
```
-**Returns:** Skill dict with `name`, `description`, `parameters`, `source`, or None if not found.
+**Returns:** Workflow dict with `name`, `description`, `parameters`, `source`, or None if not found.
**Example:**
```python
-skill = await session.get_skill("fetch_json")
-if skill:
- print(skill["source"])
+workflow = await session.get_workflow("fetch_json")
+if workflow:
+ print(workflow["source"])
```
-### add_skill()
+### add_workflow()
-Create and persist a new skill.
+Create and persist a new workflow.
```python
-async def add_skill(
+async def add_workflow(
self,
name: str,
source: str,
@@ -367,7 +367,7 @@ async def add_skill(
**Example:**
```python
-await session.add_skill(
+await session.add_workflow(
name="fetch_json",
source='''async def run(url: str) -> dict:
import json
@@ -378,12 +378,12 @@ await session.add_skill(
)
```
-### remove_skill()
+### remove_workflow()
-Remove a skill by name.
+Remove a workflow by name.
```python
-async def remove_skill(self, name: str) -> bool
+async def remove_workflow(self, name: str) -> bool
```
**Returns:** True if removed, False if not found.
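+
+**Example:**
+
+```python
+removed = await session.remove_workflow("fetch_json")
+print("removed" if removed else "not found")
+```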
@@ -523,7 +523,7 @@ def storage(self) -> StorageBackend
```python
# Access storage for advanced operations
-skill_library = session.storage.get_skill_library()
+workflow_library = session.storage.get_workflow_library()
```
---
diff --git a/docs/storage.md b/docs/storage.md
index a00f2e8..307a363 100644
--- a/docs/storage.md
+++ b/docs/storage.md
@@ -1,10 +1,10 @@
# Storage
-Storage backends determine where skills and artifacts persist. Tools and dependencies are now owned by executors (via config).
+Storage backends determine where workflows and artifacts persist. Tools and dependencies are now owned by executors (via config).
## FileStorage
-Stores skills and artifacts in local directories. Good for development and single-instance deployments.
+Stores workflows and artifacts in local directories. Good for development and single-instance deployments.
```python
from pathlib import Path
@@ -17,7 +17,7 @@ storage = FileStorage(base_path=Path("./data"))
```
./data/
-├── skills/ # Skill .py files
+├── workflows/ # Workflow .py files
├── artifacts/ # Saved data
└── vectors/ # Embedding cache (if chromadb installed)
```
@@ -29,11 +29,11 @@ storage = FileStorage(base_path=Path("./data"))
- ✓ Local development
- ✓ Single-agent deployments
- ✓ Simple setup with no external dependencies
-- ✓ Version control integration (commit skills to git)
+- ✓ Version control integration (commit workflows to git)
### Limitations
-- Single-instance only (no skill sharing between agents)
+- Single-instance only (no workflow sharing between agents)
- No automatic backup
- Manual synchronization if running multiple instances
@@ -41,7 +41,7 @@ storage = FileStorage(base_path=Path("./data"))
## RedisStorage
-Stores data in Redis. Enables skill sharing across multiple agent instances.
+Stores data in Redis. Enables workflow sharing across multiple agent instances.
```python
from py_code_mode import RedisStorage
@@ -52,7 +52,7 @@ storage = RedisStorage(url="redis://localhost:6379", prefix="my-agents")
### Key Structure
```
-{prefix}:skills:{name} # Skill source code
+{prefix}:workflows:{name} # Workflow source code
{prefix}:artifacts:{name} # Artifact data
{prefix}:vectors:* # Embedding cache (if RediSearch available)
```
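+
+A quick way to inspect these keys from Python (a sketch; assumes the `redis` package and the `my-agents` prefix from above):
+
+```python
+import redis
+
+r = redis.Redis.from_url("redis://localhost:6379")
+print(r.keys("my-agents:workflows:*"))  # saved workflow source code
+print(r.keys("my-agents:artifacts:*"))  # saved artifact data
+```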
@@ -62,9 +62,9 @@ storage = RedisStorage(url="redis://localhost:6379", prefix="my-agents")
### When to Use
- ✓ Multi-instance deployments
-- ✓ Skill sharing across agents
+- ✓ Workflow sharing across agents
- ✓ Cloud deployments
-- ✓ Need centralized skill library
+- ✓ Need centralized workflow library
### Configuration
@@ -85,13 +85,13 @@ RedisStorage(
## One Agent Learns, All Agents Benefit
-**The power of RedisStorage:** When one agent creates a skill, it's immediately available to all other agents sharing the same Redis storage.
+**The power of RedisStorage:** When one agent creates a workflow, it's immediately available to all other agents sharing the same Redis storage.
```python
# Agent Instance 1
async with Session(storage=redis_storage) as session:
await session.run('''
-skills.create(
+workflows.create(
name="analyze_sentiment",
source="""async def run(text: str) -> dict:
# Implementation
@@ -103,8 +103,8 @@ skills.create(
# Agent Instance 2 (different process, different machine)
async with Session(storage=redis_storage) as session:
- # Skill is already available!
- result = await session.run('skills.invoke("analyze_sentiment", text="Great product!")')
+ # Workflow is already available!
+ result = await session.run('workflows.invoke("analyze_sentiment", text="Great product!")')
```
---
@@ -135,7 +135,7 @@ Use the CLI tools for migration (recommended):
```bash
python -m py_code_mode.store bootstrap \
- --source ./skills \
+ --source ./workflows \
--target redis://localhost:6379 \
--prefix production
```
@@ -146,36 +146,36 @@ python -m py_code_mode.store bootstrap \
python -m py_code_mode.store pull \
--target redis://localhost:6379 \
--prefix production \
- --dest ./skills-backup
+ --dest ./workflows-backup
```
---
## CLI Tools for Storage Management
-Bootstrap skills from file to Redis:
+Bootstrap workflows from file to Redis:
```bash
python -m py_code_mode.store bootstrap \
- --source ./skills \
+ --source ./workflows \
--target redis://localhost:6379 \
--prefix production
```
-Pull skills from Redis to file:
+Pull workflows from Redis to file:
```bash
python -m py_code_mode.store pull \
--target redis://localhost:6379 \
--prefix production \
- --dest ./skills-review
+ --dest ./workflows-review
```
Compare file and Redis storage:
```bash
python -m py_code_mode.store diff \
- --source ./skills \
+ --source ./workflows \
--target redis://localhost:6379 \
--prefix production
```
@@ -187,13 +187,13 @@ python -m py_code_mode.store diff \
### Development
- Use FileStorage for local development
-- Commit skills to version control
-- Use feature branches for experimental skills
+- Commit workflows to version control
+- Use feature branches for experimental workflows
### Production
- Use RedisStorage for multi-instance deployments
-- Set appropriate TTLs if skills should expire
+- Set appropriate TTLs if workflows should expire
- Use prefixes to isolate environments (dev/staging/prod)
- Regular backups to file storage
@@ -203,16 +203,16 @@ python -m py_code_mode.store diff \
- Consider separate Redis instances for hard isolation
- Monitor Redis memory usage
-### Skill Lifecycle
+### Workflow Lifecycle
```python
-# Development: Create skills in file storage
-file_storage = FileStorage(base_path=Path("./skills"))
+# Development: Create workflows in file storage
+file_storage = FileStorage(base_path=Path("./workflows"))
-# Review: Pull skills for code review
+# Review: Pull workflows for code review
# (use CLI tools)
-# Promotion: Push vetted skills to production
+# Promotion: Push vetted workflows to production
redis_storage = RedisStorage(url="redis://prod-redis:6379", prefix="prod")
# (use CLI tools to bootstrap)
```
@@ -242,4 +242,4 @@ def create_session(storage_type: str, tools_path: Path):
return Session(storage=storage, executor=executor)
```
-All session features work with any storage backend - the choice only affects where skills and artifacts persist. Tools and deps come from executor config.
+All session features work with any storage backend - the choice only affects where workflows and artifacts persist. Tools and deps come from executor config.
diff --git a/docs/tools.md b/docs/tools.md
index 4d00b7f..2597817 100644
--- a/docs/tools.md
+++ b/docs/tools.md
@@ -215,7 +215,7 @@ from pathlib import Path
from py_code_mode import Session, FileStorage
from py_code_mode.execution import InProcessConfig, InProcessExecutor
-# Storage for skills and artifacts
+# Storage for workflows and artifacts
storage = FileStorage(base_path=Path("./storage"))
# Executor loads tools from tools_path
diff --git a/docs/skills.md b/docs/workflows.md
similarity index 63%
rename from docs/skills.md
rename to docs/workflows.md
index 7f86742..b0ca0c5 100644
--- a/docs/skills.md
+++ b/docs/workflows.md
@@ -1,19 +1,19 @@
-# Skills
+# Workflows
-Skills are reusable Python workflows that persist across sessions. Agents create skills when they solve problems, then invoke them later instead of re-solving from scratch.
+Workflows are reusable Python routines that persist across sessions. Agents create workflows when they solve problems, then invoke them later instead of re-solving from scratch.
## Core Concept
-When an agent successfully completes a multi-step workflow, they save it as a skill. Next time they need that capability, they search for and invoke the skill directly—no re-planning required.
+When an agent successfully completes a multi-step task, it saves the solution as a workflow. Next time it needs that capability, it searches for and invokes the workflow directly—no re-planning required.
-Over time, the skill library grows. Simple skills become building blocks for more complex workflows.
+Over time, the workflow library grows. Simple workflows become building blocks for more complex workflows.
-## Creating Skills
+## Creating Workflows
-Skills are async Python functions with an `async def run()` entry point:
+Workflows are async Python functions with an `async def run()` entry point:
```python
-# skills/fetch_json.py
+# workflows/fetch_json.py
"""Fetch and parse JSON from a URL."""
async def run(url: str, headers: dict = None) -> dict:
@@ -37,14 +37,14 @@ async def run(url: str, headers: dict = None) -> dict:
raise RuntimeError(f"Invalid JSON from {url}: {e}") from e
```
-> **Note:** All skills must use `async def run()`. Synchronous `def run()` is not supported.
+> **Note:** All workflows must use `async def run()`. Synchronous `def run()` is not supported.
### Runtime Creation
-Agents can create skills dynamically:
+Agents can create workflows dynamically:
```python
-skills.create(
+workflows.create(
name="fetch_json",
source='''async def run(url: str) -> dict:
"""Fetch and parse JSON from a URL."""
@@ -56,46 +56,46 @@ skills.create(
)
```
-## Skill Discovery
+## Workflow Discovery
-Skills support semantic search based on descriptions:
+Workflows support semantic search based on descriptions:
```python
# Search by intent
-results = skills.search("fetch github repository data")
-# Returns skills ranked by relevance to the query
+results = workflows.search("fetch github repository data")
+# Returns workflows ranked by relevance to the query
-# List all skills
-all_skills = skills.list()
+# List all workflows
+all_workflows = workflows.list()
-# Get specific skill details
-skill = skills.get("fetch_json")
+# Get specific workflow details
+workflow = workflows.get("fetch_json")
```
The search uses embedding-based similarity, so it understands intent even if the exact words don't match.
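For example (illustrative, assuming the `fetch_json` workflow above exists in the library), a query that shares no words with the workflow's name or docstring can still surface it:

```python
# "download", "web", and "endpoint" never appear in fetch_json,
# but embedding similarity still ranks it highly.
results = workflows.search("download data from a web endpoint")
for match in results:
    print(match)  # expect fetch_json near the top
```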
-## Invoking Skills
+## Invoking Workflows
```python
# Direct invocation
-data = skills.invoke("fetch_json", url="https://api.github.com/repos/owner/repo")
+data = workflows.invoke("fetch_json", url="https://api.github.com/repos/owner/repo")
# With keyword arguments
-analysis = skills.invoke(
+analysis = workflows.invoke(
"analyze_repo",
owner="anthropics",
repo="anthropic-sdk-python"
)
```
-## Composing Skills
+## Composing Workflows
-Skills can invoke other skills, enabling layered workflows:
+Workflows can invoke other workflows, enabling layered composition:
-### Layer 1: Base Skills (Building Blocks)
+### Layer 1: Base Workflows (Building Blocks)
```python
-# skills/fetch_json.py
+# workflows/fetch_json.py
async def run(url: str) -> dict:
"""Fetch and parse JSON from a URL."""
import json
@@ -103,14 +103,14 @@ async def run(url: str) -> dict:
return json.loads(response)
```
-### Layer 2: Domain Skills (Compositions)
+### Layer 2: Domain Workflows (Compositions)
```python
-# skills/get_repo_metadata.py
+# workflows/get_repo_metadata.py
async def run(owner: str, repo: str) -> dict:
"""Get GitHub repository metadata."""
- # Uses the fetch_json skill
- data = skills.invoke("fetch_json",
+ # Uses the fetch_json workflow
+ data = workflows.invoke("fetch_json",
url=f"https://api.github.com/repos/{owner}/{repo}")
return {
@@ -121,17 +121,17 @@ async def run(owner: str, repo: str) -> dict:
}
```
-### Layer 3: Workflow Skills (Orchestration)
+### Layer 3: Orchestration Workflows
```python
-# skills/analyze_multiple_repos.py
+# workflows/analyze_multiple_repos.py
async def run(repos: list) -> dict:
"""Analyze multiple GitHub repositories."""
summaries = []
for repo in repos:
owner, name = repo.split('/')
- # Uses the get_repo_metadata skill
- metadata = skills.invoke("get_repo_metadata", owner=owner, repo=name)
+ # Uses the get_repo_metadata workflow
+ metadata = workflows.invoke("get_repo_metadata", owner=owner, repo=name)
summaries.append(metadata)
# Aggregate results
@@ -146,11 +146,11 @@ async def run(repos: list) -> dict:
}
```
-**Simple skills become building blocks for complex workflows.** As the library grows, agents accomplish more by composing existing capabilities.
+**Simple workflows become building blocks for complex workflows.** As the library grows, agents accomplish more by composing existing capabilities.
## Quality Standards
-Skills should follow these standards for reliability and maintainability:
+Workflows should follow these standards for reliability and maintainability:
### Type Hints
@@ -223,25 +223,25 @@ async def run(repo_url: str, incl_contrib: bool = False) -> dict:
...
```
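A minimal sketch of a workflow that meets these standards - typed parameters, a typed return value, and a docstring (the `curl` tool and the GitHub endpoint are illustrative assumptions):

```python
# workflows/count_stars.py (illustrative; not part of this repo)
"""Return the stargazer count for a GitHub repository."""

async def run(owner: str, repo: str) -> int:
    """Fetch repository metadata and return its star count."""
    import json

    response = tools.curl(url=f"https://api.github.com/repos/{owner}/{repo}")
    return json.loads(response)["stargazers_count"]
```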
-## Managing Skills
+## Managing Workflows
-### Deleting Skills
+### Deleting Workflows
```python
-# Delete a skill by name
-skills.delete("old_skill_name")
+# Delete a workflow by name
+workflows.delete("old_workflow_name")
```
-### Updating Skills
+### Updating Workflows
-Skills are immutable. To update, delete and recreate:
+Workflows are immutable. To update, delete and recreate:
```python
# Delete old version
-skills.delete("fetch_json")
+workflows.delete("fetch_json")
# Create new version
-skills.create(
+workflows.create(
name="fetch_json",
source='''async def run(url: str, timeout: int = 30) -> dict:
# Updated implementation with timeout
@@ -251,16 +251,16 @@ skills.create(
)
```
-## Seeding Skills
+## Seeding Workflows
-You can pre-author skills for agents to discover:
+You can pre-author workflows for agents to discover:
### File-based (Recommended)
-Create `.py` files in the skills directory:
+Create `.py` files in the workflows directory:
```python
-# skills/fetch_and_summarize.py
+# workflows/fetch_and_summarize.py
"""Fetch a URL and extract key information."""
async def run(url: str) -> dict:
@@ -275,11 +275,11 @@ async def run(url: str) -> dict:
### Programmatic
-Use `session.add_skill()` for runtime skill creation (recommended):
+Use `session.add_workflow()` to create workflows at runtime:
```python
async with Session(storage=storage) as session:
- await session.add_skill(
+ await session.add_workflow(
name="greet",
source='''async def run(name: str = "World") -> str:
return f"Hello, {name}!"
@@ -288,11 +288,11 @@ async with Session(storage=storage) as session:
)
```
-For advanced use cases where you need to create skills outside of agent code execution, use `session.add_skill()`:
+The same call works when the session is constructed with an explicit executor:
```python
async with Session(storage=storage, executor=executor) as session:
- await session.add_skill(
+ await session.add_workflow(
name="greet",
source='''async def run(name: str = "World") -> str:
return f"Hello, {name}!"
@@ -303,23 +303,23 @@ async with Session(storage=storage, executor=executor) as session:
## Best Practices
-**When to create skills:**
+**When to create workflows:**
- You'll need this operation again (or similar variants)
- It's more than 5 lines of meaningful logic
- It has clear inputs and outputs
- It could be composed into higher-level workflows
-**When NOT to create skills:**
+**When NOT to create workflows:**
- One-off operations you won't repeat
- Simple wrappers around single tool calls
- Exploration or debugging code
-**Skill composition guidelines:**
-- Start with simple, focused skills (single responsibility)
-- Build higher-level skills by composing simpler ones
-- Use semantic search to find existing skills before creating new ones
-- Name skills descriptively (what they do, not how they do it)
+**Workflow composition guidelines:**
+- Start with simple, focused workflows (single responsibility)
+- Build higher-level workflows by composing simpler ones
+- Use semantic search to find existing workflows before creating new ones (see the sketch after this list)
+- Name workflows descriptively (what they do, not how they do it)
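The search-before-create guideline looks like this in practice (a sketch; the workflow name and body are invented for illustration):

```python
# Check for an existing match before creating a new workflow.
# In practice you would inspect the results for relevance, not just emptiness.
matches = workflows.search("parse RSS feed titles")
if not matches:
    workflows.create(
        name="parse_rss_titles",
        source='''async def run(url: str) -> list:
    """Fetch an RSS feed and return its entry titles."""
    import xml.etree.ElementTree as ET
    response = tools.curl(url=url)
    return [el.text for el in ET.fromstring(response).iter("title")]
''',
    )
```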
## Examples
-See [examples/](../examples/) for complete skill libraries in working agent applications.
+See [examples/](../examples/) for complete workflow libraries in working agent applications.
diff --git a/examples/autogen-direct/README.md b/examples/autogen-direct/README.md
index 42140cb..37bb0ff 100644
--- a/examples/autogen-direct/README.md
+++ b/examples/autogen-direct/README.md
@@ -20,7 +20,7 @@ ANTHROPIC_API_KEY=sk-ant-...
## Run (File-Based)
-Default mode loads skills from disk:
+Default mode loads workflows from disk:
```bash
uv run python agent.py
@@ -36,9 +36,9 @@ For distributed deployments or persistent storage, use Redis:
docker run -d --name redis -p 6379:6379 redis:alpine
```
-### 2. Bootstrap Tools and Skills to Redis
+### 2. Bootstrap Tools and Workflows to Redis
-Load tools and skills from disk into Redis (one-time setup):
+Load tools and workflows from disk into Redis (one-time setup):
```bash
# Bootstrap tools
@@ -48,11 +48,11 @@ uv run python -m py_code_mode.store bootstrap \
--prefix agent-tools \
--type tools
-# Bootstrap skills
+# Bootstrap workflows
uv run python -m py_code_mode.store bootstrap \
- --source ../shared/skills \
+ --source ../shared/workflows \
--target redis://localhost:6379 \
- --prefix agent-skills
+ --prefix agent-workflows
```
### 3. Run with Redis
@@ -65,26 +65,26 @@ You should see:
```
Using Redis backend: redis://localhost:6379
Tools in Redis: 4
- Skills in Redis: 1
+ Workflows in Redis: 1
```
-### Managing Skills
+### Managing Workflows
```bash
-# List skills in Redis
-uv run python -m py_code_mode.store list --target redis://localhost:6379 --prefix agent-skills
+# List workflows in Redis
+uv run python -m py_code_mode.store list --target redis://localhost:6379 --prefix agent-workflows
# Compare local vs Redis
uv run python -m py_code_mode.store diff \
- --source ../shared/skills \
+ --source ../shared/workflows \
--target redis://localhost:6379 \
- --prefix agent-skills
+ --prefix agent-workflows
-# Pull skills from Redis to local (for review)
+# Pull workflows from Redis to local (for review)
uv run python -m py_code_mode.store pull \
--target redis://localhost:6379 \
- --prefix agent-skills \
- --dest ./skills-from-redis
+ --prefix agent-workflows \
+ --dest ./workflows-from-redis
```
## What's Included
@@ -101,7 +101,7 @@ uv run python -m py_code_mode.store pull \
MCP tools are launched via `uvx` (no pre-installation needed). They live in a separate directory from CLI tools because they use a different adapter.
-### Skills (`../shared/skills/`)
+### Workflows (`../shared/workflows/`)
- `check_api.py` - Fetches an API and optionally filters with jq
@@ -112,7 +112,7 @@ You: What time is it in Tokyo?
You: Fetch the GitHub API and tell me how many public repos octocat has
-You: Use the check_api skill to fetch https://api.github.com/users/torvalds and extract the name
+You: Use the check_api workflow to fetch https://api.github.com/users/torvalds and extract the name
```
## Architecture
@@ -132,7 +132,7 @@ You: Use the check_api skill to fetch https://api.github.com/users/torvalds and
┌──────────────────┐ ┌──────────────────┐
│ File Backend │ │ Redis Backend │
│ │ │ │
- │ Skills: disk │ │ Skills: Redis │
+ │ Workflows: disk │ │ Workflows: Redis │
│ Artifacts: disk │ │ Artifacts: Redis│
│ Tools: disk │ │ Tools: disk │
└──────────────────┘ └──────────────────┘
@@ -161,13 +161,13 @@ command: uvx
args: ["mcp-server-whatever"]
```
-## Adding Skills
+## Adding Workflows
-Create a Python file in `../shared/skills/` with an `async def run()` function:
+Create a Python file in `../shared/workflows/` with an `async def run()` function:
```python
-# skills/my_skill.py
-"""What this skill does."""
+# workflows/my_workflow.py
+"""What this workflow does."""
async def run(param1: str, param2: int = 10) -> str:
result = tools.some_tool(input=param1)
@@ -178,7 +178,7 @@ Then bootstrap to Redis if using Redis mode:
```bash
uv run python -m py_code_mode.store bootstrap \
- --source ../shared/skills \
+ --source ../shared/workflows \
--target redis://localhost:6379 \
- --prefix agent-skills
+ --prefix agent-workflows
```
diff --git a/examples/autogen-direct/agent.py b/examples/autogen-direct/agent.py
index 22dd20f..5823e34 100644
--- a/examples/autogen-direct/agent.py
+++ b/examples/autogen-direct/agent.py
@@ -1,10 +1,10 @@
-"""AutoGen agent with py-code-mode tools and skills.
+"""AutoGen agent with py-code-mode tools and workflows.
This example shows:
- CLI tools (curl, jq)
- MCP tools via uvx (fetch, time)
-- A skill that combines tools
-- Optional Redis backend for distributed skill/artifact storage
+- A workflow that combines tools
+- Optional Redis backend for distributed workflow/artifact storage
Run (file-based, default):
cd examples/autogen
@@ -14,11 +14,11 @@
# Start Redis (using Docker)
docker run -d --name redis -p 6379:6379 redis:alpine
- # Bootstrap skills to Redis (one-time)
+ # Bootstrap workflows to Redis (one-time)
uv run python -m py_code_mode.store bootstrap \
- --source ../shared/skills \
+ --source ../shared/workflows \
--target redis://localhost:6379 \
- --prefix agent-skills
+ --prefix agent-workflows
# Run agent with Redis
REDIS_URL=redis://localhost:6379 uv run python agent.py
@@ -50,10 +50,10 @@ def create_storage():
"""Create storage backend.
When REDIS_URL is set:
- - Tools, skills, and artifacts are loaded from Redis
+ - Tools, workflows, and artifacts are loaded from Redis
Without REDIS_URL:
- - Tools, skills, and artifacts are loaded from shared/ directory
+ - Tools, workflows, and artifacts are loaded from shared/ directory
"""
redis_url = os.environ.get("REDIS_URL")
@@ -69,7 +69,7 @@ def create_storage():
async def main():
- # Create storage (for skills and artifacts only)
+ # Create storage (for workflows and artifacts only)
storage = create_storage()
# Executor with tools from config
@@ -82,16 +82,16 @@ async def main():
system_prompt = """You are a helpful assistant that writes Python code to accomplish tasks.
-You have access to `tools`, `skills`, and `artifacts` namespaces in your code environment.
+You have access to `tools`, `workflows`, and `artifacts` namespaces in your code environment.
WORKFLOW:
-1. For any nontrivial task, FIRST search skills: skills.search("relevant keywords")
-2. If a skill exists, use it: skills.invoke("name", arg=value)
-3. If no skill matches, search tools: tools.search("keywords")
+1. For any nontrivial task, FIRST search workflows: workflows.search("relevant keywords")
+2. If a workflow exists, use it: workflows.invoke("name", arg=value)
+3. If no workflow matches, search tools: tools.search("keywords")
4. Script tools together: tools.name(arg=value)
DISCOVERY:
-- skills.search("query") / skills.list() - find prebaked solutions
+- workflows.search("query") / workflows.list() - find prebaked solutions
- tools.search("query") / tools.list() - find individual tools
ARTIFACTS (persistent storage):
@@ -99,7 +99,7 @@ async def main():
- artifacts.load("name") - Load previously saved data
- artifacts.list() - List saved artifacts
-Skills are reusable recipes that combine tools. Prefer them over scripting from scratch.
+Workflows are reusable recipes that combine tools. Prefer them over scripting from scratch.
Always wrap your code in ```python blocks."""
diff --git a/examples/autogen-direct/integration_test.py b/examples/autogen-direct/integration_test.py
index d1f7e26..81fa770 100644
--- a/examples/autogen-direct/integration_test.py
+++ b/examples/autogen-direct/integration_test.py
@@ -1,9 +1,9 @@
-"""Integration test: Agent solves multi-tool task and saves skill.
+"""Integration test: Agent solves multi-tool task and saves workflow.
This test verifies:
1. An AutoGen agent can solve a task requiring multiple steps
-2. The agent can save a successful solution as a reusable skill
-3. The skill persists to disk and can be invoked later
+2. The agent can save a successful solution as a reusable workflow
+3. The workflow persists to disk and can be invoked later
Run:
cd examples/autogen
@@ -27,21 +27,21 @@
# Paths
HERE = Path(__file__).parent
SHARED = HERE.parent / "shared"
-TEST_SKILLS_DIR = HERE / "test_skills"
+TEST_WORKFLOWS_DIR = HERE / "test_workflows"
SYSTEM_PROMPT = """You are a helpful assistant that writes Python code to accomplish tasks.
-You have access to `tools`, `skills`, and `artifacts` namespaces in your code environment.
+You have access to `tools`, `workflows`, and `artifacts` namespaces in your code environment.
WORKFLOW:
-1. For any nontrivial task, FIRST search skills: skills.search("relevant keywords")
-2. If a skill exists, use it: skills.invoke("skill_name", arg=value)
-3. If no skill matches, search tools: tools.search("keywords")
+1. For any nontrivial task, FIRST search workflows: workflows.search("relevant keywords")
+2. If a workflow exists, use it: workflows.invoke("workflow_name", arg=value)
+3. If no workflow matches, search tools: tools.search("keywords")
4. Script tools together: tools.name(arg=value)
DISCOVERY:
-- skills.search("query") / skills.list() - find prebaked solutions
+- workflows.search("query") / workflows.list() - find prebaked solutions
- tools.search("query") / tools.list() - find individual tools
ARTIFACTS (persistent storage):
@@ -49,28 +49,29 @@
- artifacts.load("name") - Load previously saved data
- artifacts.list() - List saved artifacts
-SKILL CREATION:
-When you solve a multi-step task that could be reused, save it as a skill:
+WORKFLOW CREATION:
+When you solve a multi-step task that could be reused, save it as a workflow:
-skills.create(
+workflows.create(
name="descriptive_name",
- description="What this skill does",
+ description="What this workflow does",
code='''
def run(param1: str, param2: int = default) -> dict:
- \"\"\"Docstring describing the skill.\"\"\"
+ \"\"\"Docstring describing the workflow.\"\"\"
# Your solution here
return result
'''
)
-This lets you reuse the solution later via skills.invoke() or skills.name().
+This lets you reuse the solution later via workflows.invoke() or workflows.name().
-The skill code must:
+The workflow code must:
- Define a `run()` function as the entrypoint
- Have parameters with type hints
- Return a value (not print)
-Skills are reusable recipes that combine tools. Prefer them over scripting from scratch.
+Workflows are reusable recipes that combine tools. Prefer them over scripting from scratch.
+
Always wrap your code in ```python blocks."""
@@ -86,12 +87,12 @@ async def main():
tools_dir = test_base / "tools"
shutil.copytree(SHARED / "tools", tools_dir)
- # Create empty skills directory for testing skill creation
- skills_dir = test_base / "skills"
- skills_dir.mkdir(exist_ok=True)
+ # Create empty workflows directory for testing workflow creation
+ workflows_dir = test_base / "workflows"
+ workflows_dir.mkdir(exist_ok=True)
print("=" * 60)
- print("Integration Test: Agent Creates Skill from Task Solution")
+ print("Integration Test: Agent Creates Workflow from Task Solution")
print("=" * 60)
# Create storage with test directory
@@ -110,13 +111,13 @@ async def main():
max_tool_iterations=20,
)
- # Task: Multi-step problem that should result in a saved skill
+ # Task: Multi-step problem that should result in a saved workflow
task = """
Fetch the HackerNews front page (https://news.ycombinator.com/).
Parse the HTML and extract the first 10 article titles.
Return them as a list of strings.
- Once you have a working solution, save it as a reusable skill called
+ Once you have a working solution, save it as a reusable workflow called
'get_hn_headlines' that takes an optional 'count' parameter (default 10).
"""
@@ -130,54 +131,54 @@ async def main():
print(result.messages[-1].content)
print("-" * 60)
- # Verify skill was created
+ # Verify workflow was created
print("\n" + "=" * 60)
print("Verification")
print("=" * 60)
# Check via storage API
- skill_info = storage.skills.get("get_hn_headlines")
- if skill_info is None:
- print("FAILED: Skill 'get_hn_headlines' was not created")
+ workflow_info = storage.workflows.get("get_hn_headlines")
+ if workflow_info is None:
+ print("FAILED: Workflow 'get_hn_headlines' was not created")
return False
- print(f"Skill created: {skill_info['name']}")
- print(f"Description: {skill_info['description']}")
+ print(f"Workflow created: {workflow_info['name']}")
+ print(f"Description: {workflow_info['description']}")
- # Verify skill file exists
- skill_file = skills_dir / "get_hn_headlines.py"
- if not skill_file.exists():
- print(f"FAILED: Skill file was not persisted to {skill_file}")
+ # Verify workflow file exists
+ workflow_file = workflows_dir / "get_hn_headlines.py"
+ if not workflow_file.exists():
+ print(f"FAILED: Workflow file was not persisted to {workflow_file}")
return False
- print(f"Skill file persisted: {skill_file}")
+ print(f"Workflow file persisted: {workflow_file}")
- # Invoke the skill to verify it works
- print("\nInvoking skill to verify it works...")
- invoke_result = await session.run('skills.invoke("get_hn_headlines", count=5)')
+ # Invoke the workflow to verify it works
+ print("\nInvoking workflow to verify it works...")
+ invoke_result = await session.run('workflows.invoke("get_hn_headlines", count=5)')
if not invoke_result.is_ok:
- print(f"FAILED: Skill invocation failed: {invoke_result.error}")
+ print(f"FAILED: Workflow invocation failed: {invoke_result.error}")
return False
- print(f"Skill result: {invoke_result.value}")
+ print(f"Workflow result: {invoke_result.value}")
- # Verify skill survives a fresh session (true persistence)
+ # Verify workflow survives a fresh session (true persistence)
print("\n" + "-" * 60)
- print("Testing persistence: loading skill in fresh session...")
+ print("Testing persistence: loading workflow in fresh session...")
fresh_storage = FileStorage(base_path=test_base)
async with Session(storage=fresh_storage) as fresh_session:
- skill_info = fresh_storage.skills.get("get_hn_headlines")
- if skill_info is None:
- print("FAILED: Skill not found in fresh session")
+ workflow_info = fresh_storage.workflows.get("get_hn_headlines")
+ if workflow_info is None:
+ print("FAILED: Workflow not found in fresh session")
return False
- print(f"Skill loaded from disk: {skill_info['name']}")
+ print(f"Workflow loaded from disk: {workflow_info['name']}")
- result = await fresh_session.run("skills.get_hn_headlines(count=3)")
+ result = await fresh_session.run("workflows.get_hn_headlines(count=3)")
if not result.is_ok:
- print(f"FAILED: Skill invocation failed: {result.error}")
+ print(f"FAILED: Workflow invocation failed: {result.error}")
return False
print(f"Fresh invocation result: {result.value}")
diff --git a/examples/autogen-direct/test_skills/get_hn_headlines.py b/examples/autogen-direct/test_workflows/get_hn_headlines.py
similarity index 100%
rename from examples/autogen-direct/test_skills/get_hn_headlines.py
rename to examples/autogen-direct/test_workflows/get_hn_headlines.py
diff --git a/examples/autogen-mcp/README.md b/examples/autogen-mcp/README.md
index 9307f12..b566a6d 100644
--- a/examples/autogen-mcp/README.md
+++ b/examples/autogen-mcp/README.md
@@ -51,8 +51,8 @@ This starts py-code-mode as an MCP server subprocess and connects AutoGen to it.
## How It Works
1. AutoGen's `McpWorkbench` spawns `py-code-mode-mcp` as a subprocess
-2. py-code-mode exposes 4 MCP tools: `run_code`, `list_tools`, `list_skills`, `search_skills`
-3. The agent uses these tools to execute Python code with access to tools/skills/artifacts
+2. py-code-mode exposes MCP tools such as `run_code`, `list_tools`, `list_workflows`, and `search_workflows`
+3. The agent uses these tools to execute Python code with access to tools/workflows/artifacts
4. No custom integration code needed - just standard MCP
## Comparison with Direct Integration
@@ -72,5 +72,5 @@ You: What time is it in Tokyo?
You: Fetch the GitHub API and tell me how many public repos octocat has
-You: Search for skills related to API checking, then use one if it exists
+You: Search for workflows related to API checking, then use one if it exists
```
diff --git a/examples/autogen-mcp/agent.py b/examples/autogen-mcp/agent.py
index 873d345..eab69e3 100644
--- a/examples/autogen-mcp/agent.py
+++ b/examples/autogen-mcp/agent.py
@@ -37,8 +37,8 @@ async def main():
"py-code-mode-mcp",
"--tools",
str(SHARED / "tools"),
- "--skills",
- str(SHARED / "skills"),
+ "--workflows",
+ str(SHARED / "workflows"),
"--artifacts",
str(HERE / "artifacts"),
],
@@ -67,7 +67,7 @@ async def main():
# Interactive loop
print("Assistant ready. Type your request (or 'quit' to exit).")
- print("Tools available via MCP: run_code, list_tools, list_skills, search_skills\n")
+ print("Tools available via MCP: run_code, list_tools, list_workflows, search_workflows\n")
while True:
try:
diff --git a/examples/azure-container-apps/README.md b/examples/azure-container-apps/README.md
index 70a2b54..fefc638 100644
--- a/examples/azure-container-apps/README.md
+++ b/examples/azure-container-apps/README.md
@@ -19,7 +19,7 @@ Production deployment of py-code-mode on Azure Container Apps with Redis-backed
| | v |
| +-------+-------+ +------------------------+ |
| | User (HTTPS) | | Azure Cache for Redis | |
-| +---------------+ | - tools, skills | |
+| +---------------+ | - tools, workflows | |
| | - artifacts, deps | |
| +------------------------+ |
+-----------------------------------------------------------------------------------------+
@@ -32,7 +32,7 @@ Production deployment of py-code-mode on Azure Container Apps with Redis-backed
1. User sends request to Agent Server (external HTTPS endpoint)
2. Agent Server authenticates to Session Server using Bearer token
3. Session Server executes code with tools from Redis
-4. Artifacts and skills persist to Redis for cross-session access
+4. Artifacts and workflows persist to Redis for cross-session access
## Prerequisites
@@ -81,9 +81,9 @@ docker push .azurecr.io/py-code-mode-session:latest
docker push .azurecr.io/py-code-mode-agent:latest
```
-### 3. Bootstrap Redis with Tools/Skills/Deps
+### 3. Bootstrap Redis with Tools/Workflows/Deps
-Before deploying the apps, populate Redis with tools, skills, and pre-configured dependencies:
+Before deploying the apps, populate Redis with tools, workflows, and pre-configured dependencies:
```bash
# Get Redis connection string from infrastructure output
@@ -97,12 +97,12 @@ python -m py_code_mode.cli.store bootstrap \
--prefix pycodemode:tools \
--type tools
-# Bootstrap skills
+# Bootstrap workflows
python -m py_code_mode.cli.store bootstrap \
- --source ./examples/shared/skills \
+ --source ./examples/shared/workflows \
--target "$REDIS_URL" \
- --prefix pycodemode:skills \
- --type skills
+ --prefix pycodemode:workflows \
+ --type workflows
# Bootstrap deps (pre-configure Python packages)
python -m py_code_mode.cli.store bootstrap \
@@ -143,7 +143,7 @@ AGENT_URL=$(az containerapp show --name agent-server --resource-group my-resourc
# Submit a task
curl -X POST "https://$AGENT_URL/task" \
-H "Content-Type: application/json" \
- -d '{"task": "List available tools and skills"}'
+ -d '{"task": "List available tools and workflows"}'
# Check health
curl "https://$AGENT_URL/health"
@@ -158,14 +158,14 @@ All persistent data is stored in Azure Cache for Redis:
| Data Type | Redis Key Pattern | Description |
|-----------|-------------------|-------------|
| Tools | `agent:tools:*` | CLI tool definitions (YAML) |
-| Skills | `agent:skills:*` | Reusable Python skills |
+| Workflows | `agent:workflows:*` | Reusable Python workflows |
| Artifacts | `agent:artifacts:*` | Persisted data from agent sessions |
| Dependencies | `agent:deps` | Pre-configured Python packages |
-**Why Redis instead of Azure Files for tools/skills?**
+**Why Redis instead of Azure Files for tools/workflows?**
- Faster access (in-memory vs file I/O)
- Better for distributed deployments (multiple replicas)
-- Atomic operations for skill creation
+- Atomic operations for workflow creation
- Semantic search support via embeddings
Azure Files is still used for:
@@ -220,11 +220,11 @@ recipes:
-### Skill Definitions
+### Workflow Definitions
-Skills are Python files with an `async def run()` function:
+Workflows are Python files with an `async def run()` function:
```python
-# examples/shared/skills/analyze_repo.py
-"""Analyze a GitHub repository - demonstrates multi-tool skill workflow."""
+# examples/shared/workflows/analyze_repo.py
+"""Analyze a GitHub repository - demonstrates multi-tool workflow workflow."""
import json
@@ -255,7 +255,7 @@ async def run(repo: str) -> dict:
}
```
-Agents invoke skills with: `skills.invoke("analyze_repo", repo="anthropics/claude-code")`
+Agents invoke workflows with: `workflows.invoke("analyze_repo", repo="anthropics/claude-code")`
### Pre-configured Dependencies
@@ -370,7 +370,7 @@ scale:
- Session server scales based on code execution load
- Agent server scales based on concurrent user requests
- Redis handles concurrent connections from all replicas
-- Artifacts and skills are shared across all replicas via Redis
+- Artifacts and workflows are shared across all replicas via Redis
## Monitoring
@@ -438,14 +438,14 @@ az containerapp show --name agent-server --resource-group my-rg \
--query "properties.template.containers[0].env[?name=='SESSION_AUTH_TOKEN']"
```
-### Tools/Skills not found
+### Tools/Workflows not found
Verify data was bootstrapped to Redis:
```bash
# Connect to Redis and check keys
redis-cli -h .redis.cache.windows.net -p 6380 --tls -a
> KEYS agent:tools:*
-> KEYS agent:skills:*
+> KEYS agent:workflows:*
```
### Connection refused from agent
@@ -512,11 +512,11 @@ Run the agent locally against a local Redis:
# Start Redis
docker run -d --name redis -p 6379:6379 redis:7-alpine
-# Bootstrap tools and skills
+# Bootstrap tools and workflows
REDIS_URL=redis://localhost:6379 python -m py_code_mode.store bootstrap \
--source ../shared/tools --target redis://localhost:6379 --prefix agent:tools --type tools
REDIS_URL=redis://localhost:6379 python -m py_code_mode.store bootstrap \
- --source ../shared/skills --target redis://localhost:6379 --prefix agent:skills
+ --source ../shared/workflows --target redis://localhost:6379 --prefix agent:workflows
# Run agent
cd examples/azure-container-apps
diff --git a/examples/azure-container-apps/agent.py b/examples/azure-container-apps/agent.py
index 13c63f6..c3593d4 100644
--- a/examples/azure-container-apps/agent.py
+++ b/examples/azure-container-apps/agent.py
@@ -3,20 +3,20 @@
This example shows:
- Same agent pattern as the autogen example
- Deployed to Azure Container Apps with GPT-4o via Azure OpenAI
-- Uses Redis for both skills and artifacts when REDIS_URL is set
+- Uses Redis for both workflows and artifacts when REDIS_URL is set
- CLI tools (curl, jq) and MCP tools (fetch, time)
-- Multi-tool skill (analyze_repo.py)
+- Multi-tool workflow (analyze_repo.py)
Run locally (with Azure OpenAI):
cd examples/azure-container-apps
AZURE_OPENAI_ENDPOINT=https://your-openai.openai.azure.com uv run python agent.py
Run with Redis backend:
- # First, provision skills to Redis (one-time or deploy-time)
+ # First, provision workflows to Redis (one-time or deploy-time)
python -m py_code_mode.store bootstrap \
- --source ../shared/skills \
+ --source ../shared/workflows \
--target redis://localhost:6379 \
- --prefix agent-skills
+ --prefix agent-workflows
# Then run the agent
REDIS_URL=redis://localhost:6379 uv run python agent.py
@@ -83,10 +83,10 @@ def create_storage():
"""Create storage backend.
When REDIS_URL is set:
- - Tools, skills, and artifacts are loaded from Redis
+ - Tools, workflows, and artifacts are loaded from Redis
Without REDIS_URL:
- - Tools, skills, and artifacts are loaded from shared/ directory
+ - Tools, workflows, and artifacts are loaded from shared/ directory
"""
redis_url = os.environ.get("REDIS_URL")
@@ -110,19 +110,19 @@ async def main():
system_prompt = """You are a helpful assistant that writes Python code to accomplish tasks.
-You have access to `tools` and `skills` namespaces in your code environment.
+You have access to `tools` and `workflows` namespaces in your code environment.
WORKFLOW:
-1. For any nontrivial task, FIRST search skills: skills.search("relevant keywords")
-2. If a skill exists, use it: skills.invoke("name", arg=value)
-3. If no skill matches, search tools: tools.search("keywords")
+1. For any nontrivial task, FIRST search workflows: workflows.search("relevant keywords")
+2. If a workflow exists, use it: workflows.invoke("name", arg=value)
+3. If no workflow matches, search tools: tools.search("keywords")
4. Script tools together: tools.name(arg=value)
DISCOVERY:
-- skills.search("query") / skills.list() - find prebaked solutions
+- workflows.search("query") / workflows.list() - find prebaked solutions
- tools.search("query") / tools.list() - find individual tools
-Skills are reusable recipes that combine tools. Prefer them over scripting from scratch.
+Workflows are reusable recipes that combine tools. Prefer them over scripting from scratch.
ARTIFACTS (persistent storage):
- artifacts.save("name", data, description="...") - Save data for later
diff --git a/examples/azure-container-apps/agent_server.py b/examples/azure-container-apps/agent_server.py
index 28cfa60..c0b2be7 100644
--- a/examples/azure-container-apps/agent_server.py
+++ b/examples/azure-container-apps/agent_server.py
@@ -90,7 +90,7 @@ async def _run_code(code: str) -> str:
"""Execute Python code via the session server.
Args:
- code: Python code to execute. Has access to tools, skills,
+ code: Python code to execute. Has access to tools, workflows,
artifacts, and deps namespaces.
Returns:
@@ -109,14 +109,14 @@ async def _run_code(code: str) -> str:
run_code_tool = FunctionTool(
func=_run_code,
name="run_code",
- description="Execute Python code with tools, skills, artifacts, deps.",
+ description="Execute Python code with tools, workflows, artifacts, deps.",
)
system_prompt = """You are a helpful assistant that writes Python code to accomplish tasks.
You have access to these namespaces in run_code:
- tools.* - CLI tools (curl, jq, etc.)
-- skills.* - Reusable Python functions
+- workflows.* - Reusable Python functions
- artifacts.* - Persistent data storage
- deps.* - Python package management
@@ -151,7 +151,7 @@ async def _run_code(code: str) -> str:
@app.get("/info")
async def info():
- """Get available tools and skills from session server."""
+ """Get available tools and workflows from session server."""
# Create storage pointing to same Redis as session server
storage = RedisStorage(
url=os.environ["REDIS_URL"],
@@ -167,8 +167,8 @@ async def info():
async with Session(storage=storage, executor=executor) as session:
tools = await session.list_tools()
- skills = await session.list_skills()
+ workflows = await session.list_workflows()
return {
"tools": tools,
- "skills": skills,
+ "workflows": workflows,
}
diff --git a/examples/azure-container-apps/deploy/app.bicep b/examples/azure-container-apps/deploy/app.bicep
index 97fdd25..a83fcbf 100644
--- a/examples/azure-container-apps/deploy/app.bicep
+++ b/examples/azure-container-apps/deploy/app.bicep
@@ -98,8 +98,8 @@ resource sessionApp 'Microsoft.App/containerApps@2023-05-01' = {
value: '/workspace/configs/tools.yaml'
}
{
- name: 'SKILLS_PATH'
- value: '/workspace/configs/skills'
+ name: 'WORKFLOWS_PATH'
+ value: '/workspace/configs/workflows'
}
{
name: 'ARTIFACTS_PATH'
@@ -110,8 +110,8 @@ resource sessionApp 'Microsoft.App/containerApps@2023-05-01' = {
value: 'pycodemode:tools'
}
{
- name: 'REDIS_SKILLS_PREFIX'
- value: 'pycodemode:skills'
+ name: 'REDIS_WORKFLOWS_PREFIX'
+ value: 'pycodemode:workflows'
}
{
name: 'REDIS_ARTIFACTS_PREFIX'
diff --git a/examples/azure-container-apps/deploy/container-app.yaml b/examples/azure-container-apps/deploy/container-app.yaml
index b8086cf..1aedaa9 100644
--- a/examples/azure-container-apps/deploy/container-app.yaml
+++ b/examples/azure-container-apps/deploy/container-app.yaml
@@ -43,8 +43,8 @@ properties:
- name: TOOLS_CONFIG
value: /workspace/configs/tools.yaml
-        # Skills directory
+        # Workflows directory
- - name: SKILLS_DIR
- value: /workspace/configs/skills
+ - name: WORKFLOWS_DIR
+ value: /workspace/configs/workflows
# Artifact storage path
- name: ARTIFACTS_PATH
value: /workspace/artifacts
@@ -91,7 +91,7 @@ properties:
storageType: AzureFile
storageName: artifacts-share # Must be created first
- # Configuration files (tools.yaml, skills/)
+ # Configuration files (tools.yaml, workflows/)
- name: configs
storageType: AzureFile
storageName: configs-share # Must be created first
diff --git a/examples/azure-container-apps/deploy/deploy.sh b/examples/azure-container-apps/deploy/deploy.sh
index c604ed8..c3d88a2 100755
--- a/examples/azure-container-apps/deploy/deploy.sh
+++ b/examples/azure-container-apps/deploy/deploy.sh
@@ -2,7 +2,7 @@
# Deploy py-code-mode agent to Azure Container Apps
#
# This deploys:
-# - Azure Cache for Redis: Stores tools, skills, deps
+# - Azure Cache for Redis: Stores tools, workflows, deps
# - Azure OpenAI (GPT-4o): LLM for agent
# - Session server (internal): Code execution with tools
# - Agent server (external): AutoGen agent with GPT-4o
@@ -88,8 +88,8 @@ docker buildx build --platform linux/amd64 --push \
-f examples/azure-container-apps/Dockerfile.agent \
-t "$ACR_SERVER/py-code-mode-agent:latest" . 2>&1 | tail -5
-# Step 4: Bootstrap Redis with tools, skills, and deps
-echo "[4/7] Bootstrapping tools, skills, and deps to Redis..."
+# Step 4: Bootstrap Redis with tools, workflows, and deps
+echo "[4/7] Bootstrapping tools, workflows, and deps to Redis..."
SHARED_DIR="$SCRIPT_DIR/../../shared"
# Use built-in CLI for bootstrapping (run from repo root for module resolution)
@@ -103,12 +103,12 @@ uv run python -m py_code_mode.cli.store bootstrap \
--prefix "pycodemode:tools" \
--clear
-echo " Bootstrapping skills..."
+echo " Bootstrapping workflows..."
uv run python -m py_code_mode.cli.store bootstrap \
- --type skills \
- --source "$SHARED_DIR/skills" \
+ --type workflows \
+ --source "$SHARED_DIR/workflows" \
--target "$REDIS_CONNECTION_STRING" \
- --prefix "pycodemode:skills" \
+ --prefix "pycodemode:workflows" \
--clear
echo " Bootstrapping deps..."
diff --git a/examples/azure-container-apps/deploy/main.bicep b/examples/azure-container-apps/deploy/main.bicep
index a335fa2..618a9ea 100644
--- a/examples/azure-container-apps/deploy/main.bicep
+++ b/examples/azure-container-apps/deploy/main.bicep
@@ -164,8 +164,8 @@ resource sessionApp 'Microsoft.App/containerApps@2023-05-01' = {
value: '/workspace/configs/tools.yaml'
}
{
- name: 'SKILLS_DIR'
- value: '/workspace/configs/skills'
+ name: 'WORKFLOWS_DIR'
+ value: '/workspace/configs/workflows'
}
{
name: 'ARTIFACTS_PATH'
diff --git a/examples/azure-container-apps/deploy/redis.bicep b/examples/azure-container-apps/deploy/redis.bicep
index 7e214f3..fc1b1da 100644
--- a/examples/azure-container-apps/deploy/redis.bicep
+++ b/examples/azure-container-apps/deploy/redis.bicep
@@ -1,5 +1,5 @@
// Azure Cache for Redis module
-// Basic tier (C0 - 250MB) for tools/skills/artifacts storage
+// Basic tier (C0 - 250MB) for tools/workflows/artifacts storage
@description('Location for the Redis cache')
param location string
diff --git a/examples/minimal/README.md b/examples/minimal/README.md
index 21b3696..1802288 100644
--- a/examples/minimal/README.md
+++ b/examples/minimal/README.md
@@ -64,14 +64,14 @@ args: "-s {url}"
### 2. Storage + Executor + Session
-Storage handles skills and artifacts. Tools come from executor config:
+Storage handles workflows and artifacts. Tools come from executor config:
```python
from pathlib import Path
from py_code_mode import Session, FileStorage
from py_code_mode.execution import SubprocessExecutor, SubprocessConfig
-# File-based storage for skills and artifacts
+# File-based storage for workflows and artifacts
storage = FileStorage(base_path=Path("./configs"))
# Executor config loads tools from tools_path
@@ -182,12 +182,12 @@ async with Session(storage=storage) as session:
result = await session.run('tools.curl(url="...")')
```
-### Add Skills (Code Recipes)
+### Add Workflows (Code Recipes)
-Skills are reusable code snippets the agent can invoke:
+Workflows are reusable code snippets the agent can invoke:
```python
-# configs/skills/fetch_json.py
+# configs/workflows/fetch_json.py
async def run(url: str) -> dict:
"""Fetch JSON from a URL and parse it."""
import json
@@ -195,18 +195,18 @@ async def run(url: str) -> dict:
return json.loads(response)
```
-Skills are automatically loaded from the directory when using `FileStorage`:
+Workflows are automatically loaded from the directory when using `FileStorage`:
```python
from pathlib import Path
from py_code_mode import Session, FileStorage
-# Skills loaded from configs/skills/
+# Workflows loaded from configs/workflows/
storage = FileStorage(base_path=Path("./configs"))
async with Session(storage=storage) as session:
- # Agent can use: skills.fetch_json(url="...")
- result = await session.run('skills.fetch_json(url="https://api.example.com")')
+ # Agent can use: workflows.fetch_json(url="...")
+ result = await session.run('workflows.fetch_json(url="https://api.example.com")')
```
## Next Steps
diff --git a/examples/minimal/agent.py b/examples/minimal/agent.py
index 662bae4..326b8c4 100644
--- a/examples/minimal/agent.py
+++ b/examples/minimal/agent.py
@@ -2,12 +2,12 @@
"""Minimal Claude agent with py-code-mode code execution.
This example shows:
-1. Loading tools and skills from the shared directory
+1. Loading tools and workflows from the shared directory
2. Creating a Session with FileStorage
3. Running an agent loop with raw Claude API
The agent can write Python code that calls tools via the tools.* namespace
-and invoke skills via the skills.* namespace.
+and invoke workflows via the workflows.* namespace.
"""
import asyncio
@@ -23,36 +23,36 @@
# Load .env file (for ANTHROPIC_API_KEY)
load_dotenv()
-# Shared tools and skills directory
+# Shared tools and workflows directory
HERE = Path(__file__).parent
SHARED = HERE.parent / "shared"
-SYSTEM_PROMPT = """You are a helpful assistant with tools and skills via Python code execution.
+SYSTEM_PROMPT = """You are a helpful assistant with tools and workflows via Python code execution.
Write Python code in ```python blocks. The code runs in an environment with
-`tools` and `skills` namespaces.
+`tools` and `workflows` namespaces.
WORKFLOW:
-1. For any nontrivial task, FIRST search skills: skills.search("relevant keywords")
-2. If a skill exists, use it: skills.invoke("name", arg=value)
-3. If no skill matches, search tools: tools.search("keywords")
+1. For any nontrivial task, FIRST search workflows: workflows.search("relevant keywords")
+2. If a workflow exists, use it: workflows.invoke("name", arg=value)
+3. If no workflow matches, search tools: tools.search("keywords")
4. Script tools together: tools.name(arg=value)
DISCOVERY:
-- skills.search("query") / skills.list() - find prebaked solutions
+- workflows.search("query") / workflows.list() - find prebaked solutions
- tools.search("query") / tools.list() - find individual tools
-Skills are reusable recipes that combine tools. Prefer them over scripting from scratch.
+Workflows are reusable recipes that combine tools. Prefer them over scripting from scratch.
EXAMPLE:
```python
-# First check for existing skills
-skills.search("analyze repo")
+# First check for existing workflows
+workflows.search("analyze repo")
```
```python
-# If skill exists, use it
-skills.invoke("analyze_repo", repo="anthropics/claude-code")
+# If workflow exists, use it
+workflows.invoke("analyze_repo", repo="anthropics/claude-code")
```
```python
@@ -74,7 +74,7 @@ def extract_code(text: str) -> str | None:
async def main() -> None:
- # Storage for skills and artifacts only
+ # Storage for workflows and artifacts only
storage = FileStorage(base_path=SHARED)
# Executor with tools from config
diff --git a/examples/shared/skills/analyze_repo.py b/examples/shared/workflows/analyze_repo.py
similarity index 94%
rename from examples/shared/skills/analyze_repo.py
rename to examples/shared/workflows/analyze_repo.py
index 459b2ad..8874c46 100644
--- a/examples/shared/skills/analyze_repo.py
+++ b/examples/shared/workflows/analyze_repo.py
@@ -1,4 +1,4 @@
-"""Analyze a GitHub repository - demonstrates multi-tool skill workflow."""
+"""Analyze a GitHub repository - demonstrates multi-tool workflow workflow."""
import json
@@ -6,7 +6,7 @@
async def run(repo: str) -> dict:
"""Analyze a GitHub repository by combining multiple API calls.
- This skill demonstrates the value of prebaked workflows:
+ This workflow demonstrates the value of prebaked workflows:
1. Fetches repo metadata (stars, forks, language)
2. Gets recent commits to understand activity
3. Checks open issues count
diff --git a/examples/subprocess/README.md b/examples/subprocess/README.md
index a068ccb..5e171a1 100644
--- a/examples/subprocess/README.md
+++ b/examples/subprocess/README.md
@@ -58,7 +58,7 @@ from pathlib import Path
from py_code_mode import FileStorage, Session
from py_code_mode.execution import SubprocessConfig, SubprocessExecutor
-# Storage for skills and artifacts only
+# Storage for workflows and artifacts only
storage = FileStorage(base_path=Path("./data"))
# Executor with tools from config
diff --git a/examples/subprocess/example.py b/examples/subprocess/example.py
index fd6d568..5c400ec 100644
--- a/examples/subprocess/example.py
+++ b/examples/subprocess/example.py
@@ -30,13 +30,13 @@
from py_code_mode import FileStorage, Session
from py_code_mode.execution import SubprocessConfig, SubprocessExecutor
-# Shared tools and skills directory
+# Shared tools and workflows directory
HERE = Path(__file__).parent
SHARED = HERE.parent / "shared"
async def main() -> None:
- # Storage for skills and artifacts only
+ # Storage for workflows and artifacts only
storage = FileStorage(base_path=SHARED)
# Configure subprocess executor with tools from config
@@ -71,10 +71,10 @@ async def main() -> None:
result = await session.run("tools.list()")
print(f" Available tools: {result.value}")
- # Searching for skills
- print("\nSearching for skills...")
- result = await session.run('skills.search("fetch")')
- print(f" Found skills: {result.value}")
+ # Searching for workflows
+ print("\nSearching for workflows...")
+ result = await session.run('workflows.search("fetch")')
+ print(f" Found workflows: {result.value}")
# Demonstrate stdout capture
print("\nStdout capture...")
diff --git a/src/py_code_mode/__init__.py b/src/py_code_mode/__init__.py
index c5ee22a..eec9445 100644
--- a/src/py_code_mode/__init__.py
+++ b/src/py_code_mode/__init__.py
@@ -11,15 +11,15 @@
CodeModeError,
ConfigurationError,
DependencyError,
- SkillExecutionError,
- SkillNotFoundError,
- SkillValidationError,
StorageError,
StorageReadError,
StorageWriteError,
ToolCallError,
ToolNotFoundError,
ToolTimeoutError,
+ WorkflowExecutionError,
+ WorkflowNotFoundError,
+ WorkflowValidationError,
)
# Execution (commonly needed at top level)
@@ -85,9 +85,9 @@
"ToolTimeoutError",
"ArtifactNotFoundError",
"ArtifactWriteError",
- "SkillNotFoundError",
- "SkillValidationError",
- "SkillExecutionError",
+ "WorkflowNotFoundError",
+ "WorkflowValidationError",
+ "WorkflowExecutionError",
"DependencyError",
"StorageError",
"StorageReadError",
diff --git a/src/py_code_mode/artifacts/redis.py b/src/py_code_mode/artifacts/redis.py
index f4c53bd..f837968 100644
--- a/src/py_code_mode/artifacts/redis.py
+++ b/src/py_code_mode/artifacts/redis.py
@@ -4,7 +4,7 @@
import json
from datetime import UTC, datetime
-from typing import TYPE_CHECKING, Any
+from typing import TYPE_CHECKING, Any, cast
from py_code_mode.artifacts.base import Artifact
from py_code_mode.errors import ArtifactNotFoundError
@@ -108,7 +108,7 @@ def load(self, name: str) -> Any:
ArtifactNotFoundError: If artifact doesn't exist.
"""
data_key = self._data_key(name)
- content = self._redis.get(data_key)
+ content = cast(str | bytes | None, self._redis.get(data_key))
if content is None:
raise ArtifactNotFoundError(name)
@@ -116,7 +116,7 @@ def load(self, name: str) -> Any:
# Check metadata for data type
data_type = None
try:
- entry_json = self._redis.hget(self._index_key(), name)
+ entry_json = cast(str | bytes | None, self._redis.hget(self._index_key(), name))
if entry_json and isinstance(entry_json, str | bytes):
entry = json.loads(entry_json)
data_type = entry.get("metadata", {}).get("_data_type")
@@ -146,7 +146,7 @@ def get(self, name: str) -> Artifact | None:
Returns:
Artifact metadata or None if not found.
"""
- entry_json = self._redis.hget(self._index_key(), name)
+ entry_json = cast(str | bytes | None, self._redis.hget(self._index_key(), name))
if entry_json is None:
return None
@@ -165,12 +165,16 @@ def list(self) -> list[Artifact]:
Returns:
List of Artifact objects.
"""
- index_data = self._redis.hgetall(self._index_key())
+ index_data = cast(dict[str | bytes, str | bytes], self._redis.hgetall(self._index_key()))
if not index_data:
return []
artifacts = []
for name, entry_json in index_data.items():
+ if isinstance(name, bytes):
+ name = name.decode()
+ if isinstance(entry_json, bytes):
+ entry_json = entry_json.decode()
entry = json.loads(entry_json)
artifacts.append(
Artifact(
@@ -192,7 +196,7 @@ def exists(self, name: str) -> bool:
Returns:
True if artifact exists in index.
"""
- return self._redis.hexists(self._index_key(), name)
+ return bool(self._redis.hexists(self._index_key(), name))
def delete(self, name: str) -> None:
"""Delete artifact and its index entry.
diff --git a/src/py_code_mode/bootstrap.py b/src/py_code_mode/bootstrap.py
index a4d7fbc..b122206 100644
--- a/src/py_code_mode/bootstrap.py
+++ b/src/py_code_mode/bootstrap.py
@@ -23,7 +23,7 @@
if TYPE_CHECKING:
from py_code_mode.artifacts import ArtifactStoreProtocol
from py_code_mode.deps import DepsNamespace
- from py_code_mode.execution.in_process.skills_namespace import SkillsNamespace
+ from py_code_mode.execution.in_process.workflows_namespace import WorkflowsNamespace
from py_code_mode.tools import ToolsNamespace
@@ -33,13 +33,13 @@ class NamespaceBundle:
Provides the four namespaces needed for code execution:
- tools: ToolsNamespace for tool access
- - skills: SkillsNamespace for skill access
+ - workflows: WorkflowsNamespace for workflow access
- artifacts: ArtifactStoreProtocol for artifact storage
- deps: DepsNamespace for dependency management
"""
tools: ToolsNamespace
- skills: SkillsNamespace
+ workflows: WorkflowsNamespace
artifacts: ArtifactStoreProtocol
deps: DepsNamespace
@@ -58,7 +58,7 @@ async def bootstrap_namespaces(config: dict[str, Any]) -> NamespaceBundle:
- tools_path is optional; if provided, tools load from that directory
Returns:
- NamespaceBundle with tools, skills, artifacts namespaces.
+ NamespaceBundle with tools, workflows, artifacts namespaces.
Raises:
ValueError: If config["type"] is unknown or missing.
@@ -80,8 +80,7 @@ async def _load_tools_namespace(tools_path_str: str | None) -> ToolsNamespace:
if tools_path_str:
tools_path = Path(tools_path_str)
- registry = ToolRegistry()
- await registry.load_from_directory(tools_path)
+ registry = await ToolRegistry.from_dir(str(tools_path))
return ToolsNamespace(registry)
return ToolsNamespace(ToolRegistry())
@@ -94,19 +93,19 @@ def _build_namespace_bundle(
artifact_store: ArtifactStoreProtocol,
) -> NamespaceBundle:
"""Wire up namespaces into a NamespaceBundle."""
- from py_code_mode.execution.in_process.skills_namespace import SkillsNamespace
+ from py_code_mode.execution.in_process.workflows_namespace import WorkflowsNamespace
namespace_dict: dict[str, Any] = {}
- skills_ns = SkillsNamespace(storage.get_skill_library(), namespace_dict)
+ workflows_ns = WorkflowsNamespace(storage.get_workflow_library(), namespace_dict)
namespace_dict["tools"] = tools_ns
- namespace_dict["skills"] = skills_ns
+ namespace_dict["workflows"] = workflows_ns
namespace_dict["artifacts"] = artifact_store
namespace_dict["deps"] = deps_ns
return NamespaceBundle(
tools=tools_ns,
- skills=skills_ns,
+ workflows=workflows_ns,
artifacts=artifact_store,
deps=deps_ns,
)
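A hypothetical usage sketch of the renamed bundle - the `"type"` and `"tools_path"` keys appear in this diff, while the `"file"` value and `"base_path"` key are assumptions:

```python
import asyncio

from py_code_mode.bootstrap import bootstrap_namespaces

async def demo() -> None:
    bundle = await bootstrap_namespaces({
        "type": "file",            # backend selector; ValueError if unknown/missing
        "base_path": "./storage",  # assumed key for file-backed storage
        "tools_path": "./tools",   # optional; tools load from this directory
    })
    # The bundle carries the four execution namespaces.
    print(bundle.tools, bundle.workflows, bundle.artifacts, bundle.deps)

asyncio.run(demo())
```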
diff --git a/src/py_code_mode/cli/mcp_server.py b/src/py_code_mode/cli/mcp_server.py
index f71d3ac..02c27ec 100644
--- a/src/py_code_mode/cli/mcp_server.py
+++ b/src/py_code_mode/cli/mcp_server.py
@@ -1,7 +1,7 @@
"""MCP server exposing py-code-mode executor to MCP clients.
Usage:
- # Base directory (auto-discovers tools/, skills/, artifacts/ subdirs)
+ # Base directory (auto-discovers tools/, workflows/, artifacts/ subdirs)
py-code-mode-mcp --base ~/.code-mode
# Explicit storage + tools
@@ -19,7 +19,7 @@
still allowing access to CLI tools on your system.
Note on architecture:
- Storage (--storage or --redis) holds skills and artifacts.
+ Storage (--storage or --redis) holds workflows and artifacts.
Tools are owned by the executor and loaded from --tools directory.
The --base flag is a convenience that sets both: storage=base, tools=base/tools.
"""
@@ -45,24 +45,24 @@
@mcp.tool
async def run_code(code: str) -> str:
- """Execute Python code with access to tools, skills, and artifacts.
+ """Execute Python code with access to tools, workflows, and artifacts.
WORKFLOW:
- 1. First, use search_skills to find existing solutions for your task
- 2. If a skill exists, invoke it: skills.invoke("skill_name", arg=value)
- 3. If no skill exists, solve the task ad-hoc using tools and Python
- 4. Once solved, save reusable solutions as skills for future use
+ 1. First, use search_workflows to find existing solutions for your task
+ 2. If a workflow exists, invoke it: workflows.invoke("workflow_name", arg=value)
+ 3. If no workflow exists, solve the task ad-hoc using tools and Python
+ 4. Once solved, save reusable solutions as workflows for future use
NAMESPACES:
- tools.* - Call registered tools (use list_tools to see available)
Example: tools.curl(url="https://api.example.com")
- - skills.* - Work with skills:
- - skills.invoke("name", arg=val) - Run an existing skill
- - skills.search("query") - Find skills (same as search_skills tool)
- - skills.create("name", code, "description") - Save a new skill
- - skills.list() - List all skills
- - skills.get("name") - Get skill details
+ - workflows.* - Work with workflows:
+ - workflows.invoke("name", arg=val) - Run an existing workflow
+ - workflows.search("query") - Find workflows (same as search_workflows tool)
+ - workflows.create("name", code, "description") - Save a new workflow
+ - workflows.list() - List all workflows
+ - workflows.get("name") - Get workflow details
- artifacts.* - Persist data across sessions:
- artifacts.save("filename", data) - Save data
@@ -70,7 +70,7 @@ async def run_code(code: str) -> str:
- deps.* - Manage Python dependencies:
- deps.add("package") - Install a package
- deps.list() - List configured dependencies
- deps.remove("package") - Remove a dependency
The namespace persists across calls - variables survive between run_code invocations.
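In practice that means a variable bound in one `run_code` call is still visible in the next (the two snippets below are separate, illustrative `run_code` payloads):

```python
# Call 1: bind a variable in the shared namespace.
data = workflows.invoke("fetch_json", url="https://api.github.com/zen")
```

```python
# Call 2, sent later: `data` survived between invocations.
print(data)
```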
@@ -119,33 +119,33 @@ async def search_tools(query: str, limit: int = 10) -> str:
@mcp.tool
-async def list_skills() -> str:
- """List all available skills with their descriptions."""
+async def list_workflows() -> str:
+ """List all available workflows with their descriptions."""
if _session is None:
raise RuntimeError("Session not initialized")
- skills = await _session.list_skills()
- return json.dumps(skills)
+ workflows = await _session.list_workflows()
+ return json.dumps(workflows)
@mcp.tool
-async def search_skills(query: str, limit: int = 5) -> str:
- """Search for existing skills before solving a task from scratch.
+async def search_workflows(query: str, limit: int = 5) -> str:
+ """Search for existing workflows before solving a task from scratch.
- START HERE: Before writing code, search for skills that might already solve
- your task. Skills are reusable solutions that combine tools and logic.
+ START HERE: Before writing code, search for workflows that might already solve
+ your task. Workflows are reusable solutions that combine tools and logic.
Args:
query: Natural language description of what you're trying to accomplish
limit: Maximum number of results to return (default: 5)
- Returns matching skills with their descriptions and parameters.
+ Returns matching workflows with their descriptions and parameters.
If no good match exists, use run_code to solve the task ad-hoc,
- then create a skill for future reuse.
+ then create a workflow for future reuse.
"""
if _session is None:
raise RuntimeError("Session not initialized")
- skills = await _session.search_skills(query, limit)
- return json.dumps(skills)
+ workflows = await _session.search_workflows(query, limit)
+ return json.dumps(workflows)
@mcp.tool
@@ -158,41 +158,41 @@ async def list_artifacts() -> str:
@mcp.tool
-async def create_skill(name: str, source: str, description: str) -> dict:
- """Create a reusable skill from Python source code.
+async def create_workflow(name: str, source: str, description: str) -> dict:
+ """Create a reusable workflow from Python source code.
The source must contain a `def run(...)` function that will be executed
- when the skill is invoked. The function can accept parameters and has
- access to tools, skills, and artifacts namespaces.
+ when the workflow is invoked. The function can accept parameters and has
+ access to tools, workflows, and artifacts namespaces.
Example source:
def run(url: str) -> str:
return tools.curl.get(url=url)
Args:
- name: Unique name for the skill (used to invoke it later)
+ name: Unique name for the workflow (used to invoke it later)
source: Python source code containing a `def run(...)` function
- description: Human-readable description of what the skill does
+ description: Human-readable description of what the workflow does
- Returns the created skill's metadata.
+ Returns the created workflow's metadata.
"""
if _session is None:
raise RuntimeError("Session not initialized")
- return await _session.add_skill(name, source, description)
+ return await _session.add_workflow(name, source, description)
@mcp.tool
-async def delete_skill(name: str) -> bool:
- """Delete a skill by name.
+async def delete_workflow(name: str) -> bool:
+ """Delete a workflow by name.
Args:
- name: Name of the skill to delete
+ name: Name of the workflow to delete
- Returns True if the skill was deleted, False if it was not found.
+ Returns True if the workflow was deleted, False if it was not found.
"""
if _session is None:
raise RuntimeError("Session not initialized")
- return await _session.remove_skill(name)
+ return await _session.remove_workflow(name)
async def list_deps() -> list[str]:
@@ -296,7 +296,7 @@ async def create_session(args: argparse.Namespace) -> Session:
tools_path = Path(args.tools) if args.tools else None
# Venv goes in ~/.cache/py-code-mode/venv-{version} by default (cache_venv=True)
- # Storage is for skills/artifacts only, not executor concerns like venvs
+ # Storage is for workflows/artifacts only, not executor concerns like venvs
config = SubprocessConfig(
allow_runtime_deps=not no_runtime_deps,
default_timeout=timeout,
@@ -323,7 +323,7 @@ def main() -> None:
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
- # Base directory (auto-discovers tools/, skills/, artifacts/)
+ # Base directory (auto-discovers tools/, workflows/, artifacts/)
py-code-mode-mcp --base ~/.code-mode
# Explicit storage + tools
@@ -337,16 +337,16 @@ def main() -> None:
""",
)
- # Base directory (convenience: auto-discovers tools/, skills/, artifacts/)
+ # Base directory (convenience: auto-discovers tools/, workflows/, artifacts/)
parser.add_argument(
"--base",
- help="Base directory with tools/, skills/, artifacts/ subdirs (convenience shorthand)",
+ help="Base directory with tools/, workflows/, artifacts/ subdirs (convenience shorthand)",
)
# File storage option
parser.add_argument(
"--storage",
- help="Path to storage directory (contains skills/, artifacts/)",
+ help="Path to storage directory (contains workflows/, artifacts/)",
)
# Tools path (separate from storage since tools are executor-owned)
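
# A minimal sketch of the search -> invoke -> create loop described in the
# prompt text above, as it would run inside run_code. tools and workflows are
# injected namespaces (no imports needed); the task text and the tools.curl
# call are illustrative assumptions, not a fixed API.
hits = workflows.search("fetch a url and return the body")
if hits:
    result = workflows.invoke(hits[0]["name"], url="https://api.example.com")
else:
    result = tools.curl(url="https://api.example.com")
    workflows.create(
        "fetch_url",
        "def run(url: str) -> str:\n    return tools.curl(url=url)",
        "Fetch a URL and return the response body",
    )
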
diff --git a/src/py_code_mode/cli/store.py b/src/py_code_mode/cli/store.py
index ebefacf..37e97a9 100644
--- a/src/py_code_mode/cli/store.py
+++ b/src/py_code_mode/cli/store.py
@@ -1,17 +1,17 @@
-"""CLI for skill and tool store lifecycle management.
+"""CLI for workflow and tool store lifecycle management.
Commands:
- bootstrap - Push skills or tools from directory to store
- pull - Retrieve skills from store to local files
- diff - Compare local skills vs remote store
+ bootstrap - Push workflows or tools from directory to store
+ pull - Retrieve workflows from store to local files
+ diff - Compare local workflows vs remote store
list - List items in store
Usage:
- python -m py_code_mode.cli.store bootstrap --source ./skills --target redis://...
+ python -m py_code_mode.cli.store bootstrap --source ./workflows --target redis://...
python -m py_code_mode.cli.store bootstrap --source ./tools --target redis://... --type tools
- python -m py_code_mode.cli.store pull --target redis://... --dest ./skills-from-redis
- python -m py_code_mode.cli.store diff --source ./skills --target redis://...
- python -m py_code_mode.cli.store list --target redis://... --prefix agent-skills
+ python -m py_code_mode.cli.store pull --target redis://... --dest ./workflows-from-redis
+ python -m py_code_mode.cli.store diff --source ./workflows --target redis://...
+ python -m py_code_mode.cli.store list --target redis://... --prefix agent-workflows
"""
from __future__ import annotations
@@ -26,21 +26,21 @@
import yaml
from py_code_mode.deps import RedisDepsStore
-from py_code_mode.skills import FileSkillStore, PythonSkill, RedisSkillStore
from py_code_mode.storage import RedisToolStore
+from py_code_mode.workflows import FileWorkflowStore, PythonWorkflow, RedisWorkflowStore
logger = logging.getLogger(__name__)
-def _get_store(target: str, prefix: str) -> RedisSkillStore:
- """Get skill store based on URL scheme.
+def _get_store(target: str, prefix: str) -> RedisWorkflowStore:
+ """Get workflow store based on URL scheme.
Args:
target: Target URL (e.g., redis://localhost:6379).
- prefix: Key prefix for skills.
+ prefix: Key prefix for workflows.
Returns:
- RedisSkillStore connected to the target.
+ RedisWorkflowStore connected to the target.
Raises:
ValueError: Unknown URL scheme.
@@ -50,7 +50,7 @@ def _get_store(target: str, prefix: str) -> RedisSkillStore:
if parsed.scheme in ("redis", "rediss"):
r = redis_lib.from_url(target)
- return RedisSkillStore(r, prefix=prefix)
+ return RedisWorkflowStore(r, prefix=prefix)
elif parsed.scheme == "s3":
raise NotImplementedError("S3 adapter coming soon")
@@ -67,16 +67,16 @@ def _get_store(target: str, prefix: str) -> RedisSkillStore:
)
-def _skill_hash(skill: PythonSkill) -> str:
- """Hash skill content for quick comparison.
+def _workflow_hash(workflow: PythonWorkflow) -> str:
+ """Hash workflow content for quick comparison.
Args:
- skill: Skill to hash.
+ workflow: Workflow to hash.
Returns:
- 12-character hash of skill content.
+ 12-character hash of workflow content.
"""
- content = f"{skill.name}:{skill.description}:{skill.source}"
+ content = f"{workflow.name}:{workflow.description}:{workflow.source}"
return hashlib.sha256(content.encode()).hexdigest()[:12]
@@ -84,17 +84,17 @@ def bootstrap(
source: Path | None,
target: str,
prefix: str,
- store_type: str = "skills",
+ store_type: str = "workflows",
clear: bool = False,
deps: list[str] | None = None,
) -> int:
- """Push skills, tools, or deps to store.
+ """Push workflows, tools, or deps to store.
Args:
- source: Path to directory containing skill/tool files, or requirements file for deps.
+ source: Path to directory containing workflow/tool files, or requirements file for deps.
target: Target store URL.
prefix: Key prefix for items.
- store_type: Type of store ("skills", "tools", or "deps").
+ store_type: Type of store ("workflows", "tools", or "deps").
clear: If True, remove existing items first.
deps: Inline package specs for deps bootstrapping (only used when store_type is "deps").
@@ -109,35 +109,35 @@ def bootstrap(
return _bootstrap_deps(source, deps, target, prefix, clear)
else:
if source is None:
- raise ValueError("--source is required for skills bootstrapping")
- return _bootstrap_skills(source, target, prefix, clear)
+ raise ValueError("--source is required for workflows bootstrapping")
+ return _bootstrap_workflows(source, target, prefix, clear)
-def _bootstrap_skills(source: Path, target: str, prefix: str, clear: bool) -> int:
- """Bootstrap skills to store."""
+def _bootstrap_workflows(source: Path, target: str, prefix: str, clear: bool) -> int:
+ """Bootstrap workflows to store."""
store = _get_store(target, prefix)
if clear:
- for skill in store.list_all():
- store.delete(skill.name)
- print(f" Removed: {skill.name}")
+ for workflow in store.list_all():
+ store.delete(workflow.name)
+ print(f" Removed: {workflow.name}")
# Load from local directory
- local_store = FileSkillStore(source)
- skills = local_store.list_all()
+ local_store = FileWorkflowStore(source)
+ workflows = local_store.list_all()
- # Use batch save if available (RedisSkillStore)
+ # Use batch save if available (RedisWorkflowStore)
if hasattr(store, "save_batch"):
- store.save_batch(skills)
- for skill in skills:
- print(f" Added: {skill.name}")
+ store.save_batch(workflows)
+ for workflow in workflows:
+ print(f" Added: {workflow.name}")
else:
- for skill in skills:
- store.save(skill)
- print(f" Added: {skill.name}")
+ for workflow in workflows:
+ store.save(workflow)
+ print(f" Added: {workflow.name}")
- print(f"\nBootstrapped {len(skills)} skills to {target} (prefix: {prefix})")
- return len(skills)
+ print(f"\nBootstrapped {len(workflows)} workflows to {target} (prefix: {prefix})")
+ return len(workflows)
def _bootstrap_tools(source: Path, target: str, prefix: str, clear: bool) -> int:
@@ -246,53 +246,53 @@ def _bootstrap_deps(
def pull(target: str, prefix: str, dest: Path) -> int:
- """Pull skills from store to local files.
+ """Pull workflows from store to local files.
Args:
target: Target store URL.
- prefix: Key prefix for skills.
- dest: Destination directory for skill files.
+ prefix: Key prefix for workflows.
+ dest: Destination directory for workflow files.
Returns:
- Number of skills pulled.
+ Number of workflows pulled.
"""
store = _get_store(target, prefix)
dest.mkdir(parents=True, exist_ok=True)
pulled = 0
- for skill in store.list_all():
+ for workflow in store.list_all():
# Write as .py file
- file_path = dest / f"{skill.name}.py"
- file_path.write_text(skill.source)
+ file_path = dest / f"{workflow.name}.py"
+ file_path.write_text(workflow.source)
pulled += 1
- print(f" {skill.name} -> {file_path}")
+ print(f" {workflow.name} -> {file_path}")
- print(f"\nPulled {pulled} skills to {dest}")
+ print(f"\nPulled {pulled} workflows to {dest}")
return pulled
def diff(source: Path, target: str, prefix: str) -> dict:
- """Compare local skills vs remote store.
+ """Compare local workflows vs remote store.
Args:
- source: Path to local skills directory.
+ source: Path to local workflows directory.
target: Target store URL.
- prefix: Key prefix for skills.
+ prefix: Key prefix for workflows.
Returns:
Dict with keys: added, modified, removed, unchanged.
"""
store = _get_store(target, prefix)
- # Load local skills
- local_store = FileSkillStore(source)
- local_skills = {s.name: s for s in local_store.list_all()}
- local_hashes = {name: _skill_hash(s) for name, s in local_skills.items()}
+ # Load local workflows
+ local_store = FileWorkflowStore(source)
+ local_workflows = {w.name: w for w in local_store.list_all()}
+ local_hashes = {name: _workflow_hash(w) for name, w in local_workflows.items()}
- # Load remote skills
- remote_skills = {s.name: s for s in store.list_all()}
- remote_hashes = {name: _skill_hash(s) for name, s in remote_skills.items()}
+ # Load remote workflows
+ remote_workflows = {w.name: w for w in store.list_all()}
+ remote_hashes = {name: _workflow_hash(w) for name, w in remote_workflows.items()}
result: dict[str, list[str]] = {
"added": [],
@@ -301,10 +301,10 @@ def diff(source: Path, target: str, prefix: str) -> dict:
"unchanged": [],
}
- all_names = set(local_skills.keys()) | set(remote_skills.keys())
+ all_names = set(local_workflows.keys()) | set(remote_workflows.keys())
for name in sorted(all_names):
- in_local = name in local_skills
- in_remote = name in remote_skills
+ in_local = name in local_workflows
+ in_remote = name in remote_workflows
if in_remote and not in_local:
print(f" + {name} (agent-created)")
@@ -322,13 +322,13 @@ def diff(source: Path, target: str, prefix: str) -> dict:
return result
-def list_items(target: str, prefix: str, store_type: str = "skills") -> int:
+def list_items(target: str, prefix: str, store_type: str = "workflows") -> int:
"""List items in store.
Args:
target: Target store URL.
prefix: Key prefix for items.
- store_type: Type of store ("skills", "tools", or "deps").
+ store_type: Type of store ("workflows", "tools", or "deps").
Returns:
Number of items listed.
@@ -356,13 +356,13 @@ def list_items(target: str, prefix: str, store_type: str = "skills") -> int:
print(f"\n{len(deps)} deps in {target} (prefix: {prefix})")
return len(deps)
else:
- store = RedisSkillStore(r, prefix=prefix)
- skills = store.list_all()
- for skill in skills:
- desc = skill.description[:50] if skill.description else ""
- print(f" {skill.name}: {desc}")
- print(f"\n{len(skills)} skills in {target} (prefix: {prefix})")
- return len(skills)
+ store = RedisWorkflowStore(r, prefix=prefix)
+ workflows = store.list_all()
+ for workflow in workflows:
+ desc = workflow.description[:50] if workflow.description else ""
+ print(f" {workflow.name}: {desc}")
+ print(f"\n{len(workflows)} workflows in {target} (prefix: {prefix})")
+ return len(workflows)
def create_parser() -> argparse.ArgumentParser:
@@ -372,72 +372,73 @@ def create_parser() -> argparse.ArgumentParser:
Configured ArgumentParser.
"""
parser = argparse.ArgumentParser(
- description="Skill, tool, and deps store lifecycle management",
+ description="Workflow, tool, and deps store lifecycle management",
formatter_class=argparse.RawDescriptionHelpFormatter,
epilog="""
Examples:
- # Push skills from repo to Redis (deploy time)
- python -m py_code_mode.store bootstrap \\
- --source ./skills \\
- --target redis://localhost:6379 \\
- --prefix agent-skills
+ # Push workflows from repo to Redis (deploy time)
+    python -m py_code_mode.store bootstrap \\
+        --source ./workflows \\
+        --target redis://localhost:6379 \\
+ --prefix agent-workflows
# Push tools to Redis
     python -m py_code_mode.store bootstrap \\
         --source ./tools \\
         --target redis://localhost:6379 \\
         --prefix agent-tools \\
--type tools
# Push deps from requirements file
     python -m py_code_mode.store bootstrap \\
         --source requirements.txt \\
         --target redis://localhost:6379 \\
         --prefix agent-deps \\
--type deps
# Push deps inline
     python -m py_code_mode.store bootstrap \\
         --target redis://localhost:6379 \\
         --prefix agent-deps \\
         --type deps \\
--deps "requests>=2.31" "pandas>=2.0"
- # List skills in Redis
- python -m py_code_mode.store list \\
- --target redis://localhost:6379 \\
- --prefix agent-skills
+ # List workflows in Redis
+    python -m py_code_mode.store list \\
+        --target redis://localhost:6379 \\
+ --prefix agent-workflows
# List deps in Redis
     python -m py_code_mode.store list \\
         --target redis://localhost:6379 \\
         --prefix agent-deps \\
--type deps
- # Pull skills from Redis to local files (review agent-created skills)
- python -m py_code_mode.store pull \\
- --target redis://localhost:6379 \\
- --prefix agent-skills \\
- --dest ./skills-from-redis
+ # Pull workflows from Redis to local files (review agent-created workflows)
+    python -m py_code_mode.store pull \\
+        --target redis://localhost:6379 \\
+        --prefix agent-workflows \\
+ --dest ./workflows-from-redis
# Compare local vs Redis (what did agent add/change?)
- python -m py_code_mode.store diff \\
- --source ./skills \\
- --target redis://localhost:6379 \\
- --prefix agent-skills
+    python -m py_code_mode.store diff \\
+        --source ./workflows \\
+        --target redis://localhost:6379 \\
+ --prefix agent-workflows
""",
)
+
subparsers = parser.add_subparsers(dest="command", required=True)
# bootstrap
boot = subparsers.add_parser(
"bootstrap",
- help="Push skills, tools, or deps to store",
+ help="Push workflows, tools, or deps to store",
)
boot.add_argument(
"--source",
type=Path,
- help="Path to directory (skills/tools) or requirements file (deps)",
+ help="Path to directory (workflows/tools) or requirements file (deps)",
)
boot.add_argument(
"--target",
@@ -446,14 +447,14 @@ def create_parser() -> argparse.ArgumentParser:
)
boot.add_argument(
"--prefix",
- default="skills",
- help="Key prefix (default: skills)",
+ default="workflows",
+ help="Key prefix (default: workflows)",
)
boot.add_argument(
"--type",
- choices=["skills", "tools", "deps"],
- default="skills",
- help="Type of items to bootstrap (default: skills)",
+ choices=["workflows", "tools", "deps"],
+ default="workflows",
+ help="Type of items to bootstrap (default: workflows)",
)
boot.add_argument(
"--clear",
@@ -478,20 +479,20 @@ def create_parser() -> argparse.ArgumentParser:
)
ls.add_argument(
"--prefix",
- default="skills",
- help="Key prefix (default: skills)",
+ default="workflows",
+ help="Key prefix (default: workflows)",
)
ls.add_argument(
"--type",
- choices=["skills", "tools", "deps"],
- default="skills",
- help="Type of items to list (default: skills)",
+ choices=["workflows", "tools", "deps"],
+ default="workflows",
+ help="Type of items to list (default: workflows)",
)
# pull
pl = subparsers.add_parser(
"pull",
- help="Retrieve skills from store to local files",
+ help="Retrieve workflows from store to local files",
)
pl.add_argument(
"--target",
@@ -500,26 +501,26 @@ def create_parser() -> argparse.ArgumentParser:
)
pl.add_argument(
"--prefix",
- default="skills",
- help="Key prefix for skills (default: skills)",
+ default="workflows",
+ help="Key prefix for workflows (default: workflows)",
)
pl.add_argument(
"--dest",
type=Path,
required=True,
- help="Destination directory for skill files",
+ help="Destination directory for workflow files",
)
# diff
df = subparsers.add_parser(
"diff",
- help="Compare local skills vs remote store",
+ help="Compare local workflows vs remote store",
)
df.add_argument(
"--source",
type=Path,
required=True,
- help="Path to local skills directory",
+ help="Path to local workflows directory",
)
df.add_argument(
"--target",
@@ -528,8 +529,8 @@ def create_parser() -> argparse.ArgumentParser:
)
df.add_argument(
"--prefix",
- default="skills",
- help="Key prefix for skills (default: skills)",
+ default="workflows",
+ help="Key prefix for workflows (default: workflows)",
)
return parser
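
# A hedged sketch of driving the store lifecycle from Python rather than the
# shell, using the bootstrap() and diff() functions defined above. The Redis
# URL and the ./workflows path are illustrative assumptions.
from pathlib import Path

from py_code_mode.cli.store import bootstrap, diff

pushed = bootstrap(Path("./workflows"), "redis://localhost:6379", "agent-workflows")
changes = diff(Path("./workflows"), "redis://localhost:6379", "agent-workflows")
print(f"pushed {pushed} workflows; agent-created: {changes['added']}")
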
diff --git a/src/py_code_mode/deps/store.py b/src/py_code_mode/deps/store.py
index 0654d77..52c8a0e 100644
--- a/src/py_code_mode/deps/store.py
+++ b/src/py_code_mode/deps/store.py
@@ -15,7 +15,7 @@
import hashlib
import re
from pathlib import Path
-from typing import TYPE_CHECKING, Protocol, runtime_checkable
+from typing import TYPE_CHECKING, Protocol, cast, runtime_checkable
if TYPE_CHECKING:
from redis import Redis
@@ -392,7 +392,7 @@ def __init__(self, redis: Redis, prefix: str = "deps") -> None:
def list(self) -> list[str]:
"""Return list of all packages."""
- members = self._redis.smembers(self._key)
+ members = cast(set[str | bytes], self._redis.smembers(self._key))
if not members:
return []
# Decode bytes if needed
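
# Why the cast above matters: redis-py types smembers() loosely and may return
# bytes members. The surrounding code normalizes them roughly like this sketch
# (the member values are illustrative):
members: set[str | bytes] = {b"requests>=2.31", "pandas>=2.0"}
packages = sorted(m.decode() if isinstance(m, bytes) else m for m in members)
assert packages == ["pandas>=2.0", "requests>=2.31"]
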
diff --git a/src/py_code_mode/errors.py b/src/py_code_mode/errors.py
index 091f264..096bd1f 100644
--- a/src/py_code_mode/errors.py
+++ b/src/py_code_mode/errors.py
@@ -67,30 +67,30 @@ def __init__(self, artifact_name: str, reason: str) -> None:
super().__init__(f"Cannot write artifact '{artifact_name}': {reason}")
-class SkillNotFoundError(CodeModeError):
- """Raised when a skill name is not found."""
+class WorkflowNotFoundError(CodeModeError):
+ """Raised when a workflow name is not found."""
- def __init__(self, skill_name: str) -> None:
- self.skill_name = skill_name
- super().__init__(f"Skill '{skill_name}' not found")
+ def __init__(self, workflow_name: str) -> None:
+ self.workflow_name = workflow_name
+ super().__init__(f"Workflow '{workflow_name}' not found")
-class SkillValidationError(CodeModeError):
- """Raised when skill YAML is invalid."""
+class WorkflowValidationError(CodeModeError):
+ """Raised when workflow YAML is invalid."""
- def __init__(self, skill_name: str, reason: str) -> None:
- self.skill_name = skill_name
+ def __init__(self, workflow_name: str, reason: str) -> None:
+ self.workflow_name = workflow_name
self.reason = reason
- super().__init__(f"Invalid skill '{skill_name}': {reason}")
+ super().__init__(f"Invalid workflow '{workflow_name}': {reason}")
-class SkillExecutionError(CodeModeError):
- """Raised when skill code execution fails."""
+class WorkflowExecutionError(CodeModeError):
+ """Raised when workflow code execution fails."""
- def __init__(self, skill_name: str, cause: Exception) -> None:
- self.skill_name = skill_name
+ def __init__(self, workflow_name: str, cause: Exception) -> None:
+ self.workflow_name = workflow_name
self.cause = cause
- super().__init__(f"Skill '{skill_name}' execution failed: {cause}")
+ super().__init__(f"Workflow '{workflow_name}' execution failed: {cause}")
class DependencyError(CodeModeError):
@@ -170,8 +170,8 @@ class NamespaceError(RPCError):
original exception type caused the failure.
Attributes:
- namespace: The namespace where the error occurred (skills, tools, artifacts, deps).
- operation: The operation that failed (e.g., invoke_skill, call_tool).
+ namespace: The namespace where the error occurred (workflows, tools, artifacts, deps).
+ operation: The operation that failed (e.g., invoke_workflow, call_tool).
original_type: The original exception type name from the host.
"""
@@ -188,14 +188,14 @@ def __init__(
super().__init__(f"{namespace}.{operation}: [{original_type}] {message}")
-class SkillError(NamespaceError):
- """Error in skills namespace operation.
+class WorkflowError(NamespaceError):
+ """Error in workflows namespace operation.
- Raised when skill invocation, creation, search, or deletion fails.
+ Raised when workflow invocation, creation, search, or deletion fails.
"""
def __init__(self, operation: str, message: str, original_type: str = "RuntimeError") -> None:
- super().__init__("skills", operation, message, original_type)
+ super().__init__("workflows", operation, message, original_type)
class ToolError(NamespaceError):
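
# The renamed exceptions in use, matching the definitions and message formats
# above; the workflow name and cause are example values.
from py_code_mode.errors import WorkflowExecutionError, WorkflowNotFoundError

try:
    raise WorkflowNotFoundError("fetch_url")
except WorkflowNotFoundError as err:
    print(err.workflow_name)  # -> fetch_url

wrapped = WorkflowExecutionError("fetch_url", ValueError("bad url"))
print(wrapped)  # -> Workflow 'fetch_url' execution failed: bad url
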
diff --git a/src/py_code_mode/execution/container/client.py b/src/py_code_mode/execution/container/client.py
index c2bc303..bcc460a 100644
--- a/src/py_code_mode/execution/container/client.py
+++ b/src/py_code_mode/execution/container/client.py
@@ -55,7 +55,7 @@ class InfoResult:
"""Server info result."""
tools: list[dict[str, str]]
- skills: list[dict[str, str]]
+ workflows: list[dict[str, str]]
artifacts_path: str
@@ -177,7 +177,7 @@ async def info(self) -> InfoResult:
"""Get server info.
Returns:
- InfoResult with available tools and skills.
+ InfoResult with available tools and workflows.
"""
client = await self._get_client()
response = await client.get(f"{self.base_url}/info", headers=self._headers())
@@ -186,7 +186,7 @@ async def info(self) -> InfoResult:
return InfoResult(
tools=data["tools"],
- skills=data["skills"],
+ workflows=data["workflows"],
artifacts_path=data["artifacts_path"],
)
@@ -297,97 +297,58 @@ async def search_tools(self, query: str, limit: int = 10) -> list[dict[str, Any]
return response.json()
# ==========================================================================
- # Skills API Methods
+ # Workflows API Methods
# ==========================================================================
- async def list_skills(self) -> list[dict[str, Any]]:
- """List all skills.
-
- Returns:
- List of skill metadata dicts with name, description, parameters.
- """
+ async def list_workflows(self) -> list[dict[str, Any]]:
+ """List all workflows."""
client = await self._get_client()
response = await client.get(
- f"{self.base_url}/api/skills",
+ f"{self.base_url}/api/workflows",
headers=self._headers(),
)
response.raise_for_status()
return response.json()
- async def search_skills(self, query: str, limit: int = 5) -> list[dict[str, Any]]:
- """Search skills semantically.
-
- Args:
- query: Natural language search query.
- limit: Maximum number of results to return.
-
- Returns:
- List of matching skill metadata dicts.
- """
+ async def search_workflows(self, query: str, limit: int = 5) -> list[dict[str, Any]]:
+ """Search workflows."""
client = await self._get_client()
response = await client.get(
- f"{self.base_url}/api/skills/search",
+ f"{self.base_url}/api/workflows/search",
params={"query": query, "limit": limit},
headers=self._headers(),
)
response.raise_for_status()
return response.json()
- async def get_skill(self, name: str) -> dict[str, Any] | None:
- """Get skill by name with full source.
-
- Args:
- name: Skill name.
-
- Returns:
- Skill dict with name, description, parameters, source.
- None if skill not found.
- """
+ async def get_workflow(self, name: str) -> dict[str, Any] | None:
+ """Get workflow by name with full source."""
client = await self._get_client()
response = await client.get(
- f"{self.base_url}/api/skills/{name}",
+ f"{self.base_url}/api/workflows/{name}",
headers=self._headers(),
)
response.raise_for_status()
return response.json()
- async def create_skill(self, name: str, source: str, description: str) -> dict[str, Any]:
- """Create a new skill.
-
- Args:
- name: Skill name.
- source: Python source code with run() function.
- description: Skill description.
-
- Returns:
- Created skill metadata dict.
-
- Raises:
- RuntimeError: If skill creation fails.
- """
+ async def create_workflow(self, name: str, source: str, description: str) -> dict[str, Any]:
+ """Create a new workflow."""
client = await self._get_client()
response = await client.post(
- f"{self.base_url}/api/skills",
+ f"{self.base_url}/api/workflows",
json={"name": name, "source": source, "description": description},
headers=self._headers(),
)
if response.status_code != 200:
data = response.json()
- raise RuntimeError(data.get("detail", "Skill creation failed"))
+ raise RuntimeError(data.get("detail", "Workflow creation failed"))
return response.json()
- async def delete_skill(self, name: str) -> bool:
- """Delete a skill.
-
- Args:
- name: Skill name.
-
- Returns:
- True if skill was deleted, False if not found.
- """
+ async def delete_workflow(self, name: str) -> bool:
+ """Delete a workflow."""
client = await self._get_client()
response = await client.delete(
- f"{self.base_url}/api/skills/{name}",
+ f"{self.base_url}/api/workflows/{name}",
headers=self._headers(),
)
response.raise_for_status()
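
# The workflow API surface implied by the client methods above, exercised
# directly with httpx (which matches the awaited client.get(...) plus
# synchronous response.json() pattern). The base_url is an assumption, and a
# real server may also require the Authorization header that _headers() adds.
import asyncio

import httpx


async def demo(base_url: str = "http://localhost:8000") -> None:
    async with httpx.AsyncClient() as client:
        resp = await client.get(f"{base_url}/api/workflows")
        resp.raise_for_status()
        for wf in resp.json():
            print(wf["name"], "-", wf["description"])


asyncio.run(demo())
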
diff --git a/src/py_code_mode/execution/container/config.py b/src/py_code_mode/execution/container/config.py
index 8055669..f7df3c6 100644
--- a/src/py_code_mode/execution/container/config.py
+++ b/src/py_code_mode/execution/container/config.py
@@ -37,8 +37,8 @@ class SessionConfig:
# Python dependencies (auto-installed at startup)
python_deps: list[str] = field(default_factory=list)
- # Skills
- skills_path: Path = field(default_factory=lambda: Path("/app/skills"))
+ # Workflows
+ workflows_path: Path = field(default_factory=lambda: Path("/app/workflows"))
# Artifacts
artifacts_path: Path = field(default_factory=lambda: Path("/workspace/artifacts"))
@@ -79,8 +79,8 @@ def from_env(cls) -> SessionConfig:
config = cls()
# Load paths from env
- if skills_path := os.environ.get("SKILLS_PATH"):
- config.skills_path = Path(skills_path)
+ if workflows_path := os.environ.get("WORKFLOWS_PATH"):
+ config.workflows_path = Path(workflows_path)
if artifacts_path := os.environ.get("ARTIFACTS_PATH"):
config.artifacts_path = Path(artifacts_path)
@@ -143,7 +143,7 @@ def _from_dict(cls, data: dict[str, Any]) -> SessionConfig:
return cls(
mcp_servers=cls._parse_mcp_servers(data.get("mcp_servers", [])),
python_deps=data.get("python_deps", []),
- skills_path=Path(data.get("skills_path", "/app/skills")),
+ workflows_path=Path(data.get("workflows_path", "/app/workflows")),
artifacts_path=Path(data.get("artifacts_path", "/workspace/artifacts")),
artifact_backend=data.get("artifact_backend", "file"),
redis_url=data.get("redis_url"),
@@ -189,7 +189,7 @@ class ContainerConfig:
timeout: float = 30.0
startup_timeout: float = 60.0
health_check_interval: float = 0.5
- ipc_timeout: float = 30.0 # Timeout for IPC queries (tool/skill/artifact)
+ ipc_timeout: float = 30.0 # Timeout for IPC queries (tool/workflow/artifact)
# Container settings
environment: dict[str, str] = field(default_factory=dict)
@@ -213,12 +213,12 @@ class ContainerConfig:
def to_docker_config(
self,
tools_path: Path | None = None,
- skills_path: Path | None = None,
+ workflows_path: Path | None = None,
artifacts_path: Path | None = None,
deps_path: Path | None = None,
redis_url: str | None = None,
tools_prefix: str | None = None,
- skills_prefix: str | None = None,
+ workflows_prefix: str | None = None,
artifacts_prefix: str | None = None,
deps_prefix: str | None = None,
) -> dict[str, Any]:
@@ -227,19 +227,20 @@ def to_docker_config(
Args:
tools_path: Host path to tools directory (volume mount).
Falls back to self.tools_path if not provided.
- skills_path: Host path to skills directory (volume mount).
+ workflows_path: Host path to workflows directory (volume mount).
artifacts_path: Host path to artifacts directory (volume mount).
deps_path: Host path to deps directory (volume mount).
Falls back to self.deps_file's parent if not provided.
redis_url: Redis URL for Redis-based storage (sets env vars).
tools_prefix: Redis key prefix for tools.
- skills_prefix: Redis key prefix for skills.
+ workflows_prefix: Redis key prefix for workflows.
artifacts_prefix: Redis key prefix for artifacts.
deps_prefix: Redis key prefix for dependencies.
Returns:
             Docker SDK run() configuration dict.
"""
# Use config fields as fallbacks for path arguments
effective_tools_path = tools_path if tools_path is not None else self.tools_path
effective_deps_path = deps_path
@@ -267,12 +268,12 @@ def to_docker_config(
"mode": "ro",
}
config["environment"]["TOOLS_PATH"] = "/app/tools"
- if skills_path:
- volumes[str(skills_path.absolute())] = {
- "bind": "/app/skills",
- "mode": "rw", # Agents create skills via skills.create()
+ if workflows_path:
+ volumes[str(workflows_path.absolute())] = {
+ "bind": "/app/workflows",
+ "mode": "rw", # Agents create workflows via workflows.create()
}
- config["environment"]["SKILLS_PATH"] = "/app/skills"
+ config["environment"]["WORKFLOWS_PATH"] = "/app/workflows"
if artifacts_path:
volumes[str(artifacts_path.absolute())] = {
"bind": "/workspace/artifacts",
@@ -295,8 +296,8 @@ def to_docker_config(
# Pass prefixes so container uses consistent keys
if tools_prefix:
config["environment"]["REDIS_TOOLS_PREFIX"] = tools_prefix
- if skills_prefix:
- config["environment"]["REDIS_SKILLS_PREFIX"] = skills_prefix
+ if workflows_prefix:
+ config["environment"]["REDIS_WORKFLOWS_PREFIX"] = workflows_prefix
if artifacts_prefix:
config["environment"]["REDIS_ARTIFACTS_PREFIX"] = artifacts_prefix
if deps_prefix:
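
# A sketch of the mount/env wiring to_docker_config() performs, per the hunk
# above. Assumes ContainerConfig() constructs with its field defaults; the
# host path is illustrative and only the workflows mount is shown.
from pathlib import Path

from py_code_mode.execution.container.config import ContainerConfig

cfg = ContainerConfig()
docker_kwargs = cfg.to_docker_config(workflows_path=Path("./workflows"))
# Per the code above: ./workflows is bound to /app/workflows in rw mode and
# WORKFLOWS_PATH=/app/workflows is set in the container environment.
assert docker_kwargs["environment"]["WORKFLOWS_PATH"] == "/app/workflows"
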
diff --git a/src/py_code_mode/execution/container/executor.py b/src/py_code_mode/execution/container/executor.py
index 0193431..bc0b3f5 100644
--- a/src/py_code_mode/execution/container/executor.py
+++ b/src/py_code_mode/execution/container/executor.py
@@ -129,7 +129,7 @@ def __init__(self, config: ContainerConfig) -> None:
async def create(
cls,
artifacts: str | None = None,
- skills: str | None = None,
+ workflows: str | None = None,
tools: str | None = None,
image: str = DEFAULT_IMAGE,
timeout: float = 30.0,
@@ -143,7 +143,7 @@ async def create(
Args:
artifacts: Path to artifacts directory on host.
- skills: Path to skills directory on host.
+ workflows: Path to workflows directory on host.
tools: Path to tools directory on host.
image: Docker image to use.
timeout: Default execution timeout.
@@ -364,12 +364,12 @@ async def start(
# Extract paths/urls from storage via get_serializable_access()
tools_path = None
- skills_path = None
+ workflows_path = None
artifacts_path = None
deps_path = None
redis_url = None
tools_prefix = None
- skills_prefix = None
+ workflows_prefix = None
artifacts_prefix = None
deps_prefix = None
@@ -380,15 +380,15 @@ async def start(
if isinstance(access, FileStorageAccess):
# Tools and deps come from executor config, not storage
tools_path = self.config.tools_path
- skills_path = access.skills_path
+ workflows_path = access.workflows_path
artifacts_path = access.artifacts_path
deps_path = None
if artifacts_path is not None:
deps_path = artifacts_path.parent / "deps"
# Create directories on host before mounting
- # Skills need to exist for volume mount
- if skills_path:
- skills_path.mkdir(parents=True, exist_ok=True)
+ # Workflows need to exist for volume mount
+ if workflows_path:
+ workflows_path.mkdir(parents=True, exist_ok=True)
# Artifacts need to exist for volume mount
if artifacts_path:
artifacts_path.mkdir(parents=True, exist_ok=True)
@@ -402,7 +402,7 @@ async def start(
redis_url = _transform_localhost_for_docker(redis_url)
# Tools and deps prefixes come from executor config, not storage
tools_prefix = None # Tools owned by executor
- skills_prefix = access.skills_prefix
+ workflows_prefix = access.workflows_prefix
artifacts_prefix = access.artifacts_prefix
deps_prefix = None # Deps owned by executor
else:
@@ -414,12 +414,12 @@ async def start(
# Prepare container config with storage access
docker_config = self.config.to_docker_config(
tools_path=tools_path,
- skills_path=skills_path,
+ workflows_path=workflows_path,
artifacts_path=artifacts_path,
deps_path=deps_path,
redis_url=redis_url,
tools_prefix=tools_prefix,
- skills_prefix=skills_prefix,
+ workflows_prefix=workflows_prefix,
artifacts_prefix=artifacts_prefix,
deps_prefix=deps_prefix,
)
@@ -607,94 +607,38 @@ async def search_tools(self, query: str, limit: int = 10) -> list[dict[str, Any]
return await self._client.search_tools(query, limit)
# ==========================================================================
- # Skills API Methods
+ # Workflows API Methods
# ==========================================================================
- async def list_skills(self) -> list[dict[str, Any]]:
- """List all skills.
-
- Returns:
- List of skill metadata dicts with name, description, parameters.
-
- Raises:
- RuntimeError: If container is not started.
- """
+ async def list_workflows(self) -> list[dict[str, Any]]:
+ """List all workflows."""
if self._client is None:
raise RuntimeError("Container not started")
+ return await self._client.list_workflows()
- return await self._client.list_skills()
-
- async def search_skills(self, query: str, limit: int = 5) -> list[dict[str, Any]]:
- """Search skills semantically.
-
- Args:
- query: Natural language search query.
- limit: Maximum number of results to return.
-
- Returns:
- List of matching skill metadata dicts.
-
- Raises:
- RuntimeError: If container is not started.
- """
+ async def search_workflows(self, query: str, limit: int = 5) -> list[dict[str, Any]]:
+ """Search workflows."""
if self._client is None:
raise RuntimeError("Container not started")
+ return await self._client.search_workflows(query, limit)
- return await self._client.search_skills(query, limit)
-
- async def get_skill(self, name: str) -> dict[str, Any] | None:
- """Get skill by name with full source.
-
- Args:
- name: Skill name.
-
- Returns:
- Skill dict with name, description, parameters, source.
- None if skill not found.
-
- Raises:
- RuntimeError: If container is not started.
- """
+ async def get_workflow(self, name: str) -> dict[str, Any] | None:
+ """Get workflow by name with full source."""
if self._client is None:
raise RuntimeError("Container not started")
+ return await self._client.get_workflow(name)
- return await self._client.get_skill(name)
-
- async def add_skill(self, name: str, source: str, description: str) -> dict[str, Any]:
- """Create a new skill.
-
- Args:
- name: Skill name.
- source: Python source code with run() function.
- description: Skill description.
-
- Returns:
- Created skill metadata dict.
-
- Raises:
- RuntimeError: If container is not started or skill creation fails.
- """
+ async def add_workflow(self, name: str, source: str, description: str) -> dict[str, Any]:
+ """Create a new workflow."""
if self._client is None:
raise RuntimeError("Container not started")
+ return await self._client.create_workflow(name, source, description)
- return await self._client.create_skill(name, source, description)
-
- async def remove_skill(self, name: str) -> bool:
- """Delete a skill.
-
- Args:
- name: Skill name.
-
- Returns:
- True if skill was deleted, False if not found.
-
- Raises:
- RuntimeError: If container is not started.
- """
+ async def remove_workflow(self, name: str) -> bool:
+ """Delete a workflow."""
if self._client is None:
raise RuntimeError("Container not started")
-
- return await self._client.delete_skill(name)
+ return await self._client.delete_workflow(name)
# ==========================================================================
# Artifacts API Methods
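
# A hedged usage sketch for the container executor, built from the create()
# signature and list_workflows() method above. The ContainerExecutor name is
# taken from a comment elsewhere in this diff; host paths are illustrative.
import asyncio

from py_code_mode.execution.container.executor import ContainerExecutor


async def demo() -> None:
    executor = await ContainerExecutor.create(
        artifacts="./artifacts",
        workflows="./workflows",
        tools="./tools",
    )
    print(await executor.list_workflows())


asyncio.run(demo())
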
diff --git a/src/py_code_mode/execution/container/server.py b/src/py_code_mode/execution/container/server.py
index 500a1df..753ffb5 100644
--- a/src/py_code_mode/execution/container/server.py
+++ b/src/py_code_mode/execution/container/server.py
@@ -62,8 +62,12 @@
from py_code_mode.execution.in_process import ( # noqa: E402
InProcessExecutor as CodeExecutor,
)
-from py_code_mode.skills import FileSkillStore, SkillLibrary, create_skill_library # noqa: E402
from py_code_mode.tools import ToolRegistry # noqa: E402
+from py_code_mode.workflows import ( # noqa: E402
+ FileWorkflowStore,
+ WorkflowLibrary,
+ create_workflow_library,
+)
# Session expiration (seconds)
SESSION_EXPIRY = 3600 # 1 hour
@@ -118,7 +122,7 @@ class InfoResponseModel(BaseModel): # type: ignore
"""Server info response."""
tools: list[dict[str, str]]
- skills: list[dict[str, str]]
+ workflows: list[dict[str, str]]
artifacts_path: str
class ResetResponseModel(BaseModel): # type: ignore
@@ -148,19 +152,19 @@ class DepsResponseModel(BaseModel): # type: ignore
# API Endpoint Request/Response Models
# ==========================================================================
- class CreateSkillRequest(BaseModel): # type: ignore
- """Request to create a new skill."""
+ class CreateWorkflowRequest(BaseModel): # type: ignore
+ """Request to create a new workflow."""
name: str
source: str
description: str
- class SkillResponse(BaseModel): # type: ignore
- """Response for skill information."""
+ class WorkflowResponse(BaseModel): # type: ignore
+ """Response for workflow information."""
name: str
description: str
- parameters: list[dict[str, Any]]
+ params: dict[str, str]
source: str
class SaveArtifactRequest(BaseModel): # type: ignore
@@ -216,7 +220,7 @@ class ServerState:
config: SessionConfig | None = None
registry: ToolRegistry | None = None
- skill_library: SkillLibrary | None = None
+ workflow_library: WorkflowLibrary | None = None
artifact_store: ArtifactStoreProtocol | None = None # Shared store for Redis mode
deps_store: DepsStore | None = None
deps_installer: PackageInstaller | None = None
@@ -232,7 +236,10 @@ class ServerState:
# Authentication helpers
# HTTPBearer with auto_error=False returns None instead of raising 401
# This lets us handle missing credentials ourselves for better error messages
-BEARER_SCHEME = HTTPBearer(auto_error=False) if FASTAPI_AVAILABLE else None
+if FASTAPI_AVAILABLE:
+ BEARER_SCHEME = HTTPBearer(auto_error=False)
+else:
+ BEARER_SCHEME = None
def verify_auth_token(provided: str, expected: str) -> bool:
@@ -248,20 +255,16 @@ def verify_auth_token(provided: str, expected: str) -> bool:
return hmac.compare_digest(provided.encode(), expected.encode())
-def build_skill_library(config: SessionConfig) -> SkillLibrary | None:
- """Build skill library from configuration with semantic search."""
- # Create directory if it doesn't exist (same as artifacts behavior)
+def build_workflow_library(config: SessionConfig) -> WorkflowLibrary | None:
+ """Build workflow library from configuration."""
try:
- config.skills_path.mkdir(parents=True, exist_ok=True)
+ config.workflows_path.mkdir(parents=True, exist_ok=True)
except OSError as e:
- # If we can't create the directory (e.g., read-only filesystem),
- # return None to signal no skill library is available
- logger.warning("Cannot create skills directory at %s: %s", config.skills_path, e)
+ logger.warning("Cannot create workflows directory at %s: %s", config.workflows_path, e)
return None
- # Use file-based store wrapped in skill library
- store = FileSkillStore(config.skills_path)
- return create_skill_library(store=store)
+ store = FileWorkflowStore(config.workflows_path)
+ return create_workflow_library(store=store)
def create_session(session_id: str) -> Session:
@@ -287,7 +290,7 @@ def create_session(session_id: str) -> Session:
# Create executor with shared registries but isolated namespace/artifacts
executor = CodeExecutor(
registry=_state.registry,
- skill_library=_state.skill_library,
+ workflow_library=_state.workflow_library,
artifact_store=artifact_store,
deps_namespace=deps_namespace,
default_timeout=_state.config.default_timeout,
@@ -355,7 +358,7 @@ def install_python_deps(deps: list[str]) -> None:
async def initialize_server(config: SessionConfig) -> None:
"""Initialize the server with shared resources.
- When REDIS_URL is set, uses Redis for tools, skills, and artifacts.
+ When REDIS_URL is set, uses Redis for tools, workflows, and artifacts.
Otherwise falls back to file-based storage.
"""
global _state
@@ -371,15 +374,15 @@ async def initialize_server(config: SessionConfig) -> None:
import redis as redis_lib
from py_code_mode.artifacts import RedisArtifactStore
- from py_code_mode.skills import RedisSkillStore
from py_code_mode.storage import RedisToolStore, registry_from_redis
+ from py_code_mode.workflows import RedisWorkflowStore
logger.info("Using Redis backend: %s...", redis_url[:50])
r = redis_lib.from_url(redis_url)
# Get prefixes from environment (set by ContainerExecutor), with defaults
tools_prefix = os.environ.get("REDIS_TOOLS_PREFIX", "tools")
- skills_prefix = os.environ.get("REDIS_SKILLS_PREFIX", "skills")
+ workflows_prefix = os.environ.get("REDIS_WORKFLOWS_PREFIX", "workflows")
artifacts_prefix = os.environ.get("REDIS_ARTIFACTS_PREFIX", "artifacts")
# Tools from Redis
@@ -387,11 +390,11 @@ async def initialize_server(config: SessionConfig) -> None:
registry = await registry_from_redis(tool_store)
logger.info(" Tools in Redis (%s): %d", tools_prefix, len(tool_store))
- # Skills from Redis with semantic search
- redis_store = RedisSkillStore(r, prefix=skills_prefix)
- skill_library = create_skill_library(store=redis_store)
- skill_count = len(redis_store)
- logger.info(" Skills in Redis (%s): %d (semantic)", skills_prefix, skill_count)
+    # Workflows from Redis with semantic search
+ redis_store = RedisWorkflowStore(r, prefix=workflows_prefix)
+ workflow_library = create_workflow_library(store=redis_store)
+ workflow_count = len(redis_store)
+ logger.info(" Workflows in Redis (%s): %d (semantic)", workflows_prefix, workflow_count)
# Artifacts in Redis (shared across sessions)
artifact_store = RedisArtifactStore(r, prefix=artifacts_prefix)
@@ -416,7 +419,7 @@ async def initialize_server(config: SessionConfig) -> None:
_state = ServerState(
config=config,
registry=registry,
- skill_library=skill_library,
+ workflow_library=workflow_library,
artifact_store=artifact_store,
deps_store=deps_store,
deps_installer=deps_installer,
@@ -439,7 +442,7 @@ async def initialize_server(config: SessionConfig) -> None:
logger.info(" TOOLS_PATH not set, no tools available")
registry = ToolRegistry()
- skill_library = build_skill_library(config)
+ workflow_library = build_workflow_library(config)
# Create shared artifact store (same as Redis mode)
config.artifacts_path.mkdir(parents=True, exist_ok=True)
@@ -476,7 +479,7 @@ async def initialize_server(config: SessionConfig) -> None:
_state = ServerState(
config=config,
registry=registry,
- skill_library=skill_library,
+ workflow_library=workflow_library,
artifact_store=artifact_store,
deps_store=deps_store,
deps_installer=deps_installer,
@@ -629,22 +632,22 @@ async def health() -> HealthResponseModel:
@app.get("/info", response_model=InfoResponseModel, dependencies=[Depends(require_auth)])
async def info() -> InfoResponseModel:
- """Get information about available tools and skills."""
+ """Get information about available tools and workflows."""
tools = []
if _state.registry:
for tool in _state.registry.list_tools():
tools.append({"name": tool.name, "description": tool.description})
- skills = []
- if _state.skill_library:
- for skill in _state.skill_library.list():
- skills.append({"name": skill.name, "description": skill.description})
+ workflows = []
+ if _state.workflow_library:
+ for workflow in _state.workflow_library.list():
+ workflows.append({"name": workflow.name, "description": workflow.description})
artifacts_path = str(_state.config.artifacts_path) if _state.config else ""
return InfoResponseModel(
tools=tools,
- skills=skills,
+ workflows=workflows,
artifacts_path=artifacts_path,
)
@@ -783,125 +786,89 @@ async def api_search_tools(query: str, limit: int = 10) -> list[dict[str, Any]]:
return [tool.to_dict() for tool in tools]
# ==========================================================================
- # Skills API Endpoints
+ # Workflows API Endpoints
# ==========================================================================
- @app.get("/api/skills", dependencies=[Depends(require_auth)])
- async def api_list_skills() -> list[dict[str, Any]]:
- """Return all skills."""
- if _state.skill_library is None:
+ @app.get("/api/workflows", dependencies=[Depends(require_auth)])
+ async def api_list_workflows() -> list[dict[str, Any]]:
+ """Return all workflows."""
+ if _state.workflow_library is None:
return []
- skills = _state.skill_library.list()
+ workflows = _state.workflow_library.list()
return [
{
- "name": skill.name,
- "description": skill.description,
- "parameters": [
- {
- "name": p.name,
- "type": p.type,
- "description": p.description,
- "required": p.required,
- "default": p.default,
- }
- for p in skill.parameters
- ],
+ "name": workflow.name,
+ "description": workflow.description,
+ "params": {p.name: p.description or p.type for p in workflow.parameters},
}
- for skill in skills
+ for workflow in workflows
]
- @app.get("/api/skills/search", dependencies=[Depends(require_auth)])
- async def api_search_skills(query: str, limit: int = 5) -> list[dict[str, Any]]:
- """Search skills semantically."""
- if _state.skill_library is None:
+ @app.get("/api/workflows/search", dependencies=[Depends(require_auth)])
+ async def api_search_workflows(query: str, limit: int = 5) -> list[dict[str, Any]]:
+ """Search workflows."""
+ if _state.workflow_library is None:
return []
- skills = _state.skill_library.search(query, limit=limit)
+ workflows = _state.workflow_library.search(query, limit=limit)
return [
{
- "name": skill.name,
- "description": skill.description,
- "parameters": [
- {
- "name": p.name,
- "type": p.type,
- "description": p.description,
- "required": p.required,
- "default": p.default,
- }
- for p in skill.parameters
- ],
+ "name": workflow.name,
+ "description": workflow.description,
+ "params": {p.name: p.description or p.type for p in workflow.parameters},
}
- for skill in skills
+ for workflow in workflows
]
- @app.get("/api/skills/{name}", dependencies=[Depends(require_auth)])
- async def api_get_skill(name: str) -> dict[str, Any] | None:
- """Get skill by name with full source."""
- if _state.skill_library is None:
+ @app.get("/api/workflows/{name}", dependencies=[Depends(require_auth)])
+ async def api_get_workflow(name: str) -> dict[str, Any] | None:
+ """Get workflow by name with full source."""
+ if _state.workflow_library is None:
return None
- skill = _state.skill_library.get(name)
- if skill is None:
+ workflow = _state.workflow_library.get(name)
+ if workflow is None:
return None
return {
- "name": skill.name,
- "description": skill.description,
- "parameters": [
- {
- "name": p.name,
- "type": p.type,
- "description": p.description,
- "required": p.required,
- "default": p.default,
- }
- for p in skill.parameters
- ],
- "source": skill.source,
+ "name": workflow.name,
+ "description": workflow.description,
+ "params": {p.name: p.description or p.type for p in workflow.parameters},
+ "source": workflow.source,
}
- @app.post("/api/skills", dependencies=[Depends(require_auth)])
- async def api_create_skill(body: CreateSkillRequest) -> dict[str, Any]:
- """Create a new skill."""
- if _state.skill_library is None:
- raise HTTPException(status_code=503, detail="Skill library not initialized")
+ @app.post("/api/workflows", dependencies=[Depends(require_auth)])
+ async def api_create_workflow(body: CreateWorkflowRequest) -> dict[str, Any]:
+ """Create a new workflow."""
+ if _state.workflow_library is None:
+ raise HTTPException(status_code=503, detail="Workflow library not initialized")
- from py_code_mode.skills import PythonSkill
+ from py_code_mode.workflows import PythonWorkflow
try:
- skill = PythonSkill.from_source(
+ workflow = PythonWorkflow.from_source(
name=body.name,
source=body.source,
description=body.description,
)
- _state.skill_library.add(skill)
+ _state.workflow_library.add(workflow)
return {
- "name": skill.name,
- "description": skill.description,
- "parameters": [
- {
- "name": p.name,
- "type": p.type,
- "description": p.description,
- "required": p.required,
- "default": p.default,
- }
- for p in skill.parameters
- ],
- "source": skill.source,
+ "name": workflow.name,
+ "description": workflow.description,
+ "params": {p.name: p.description or p.type for p in workflow.parameters},
+ "source": workflow.source,
}
except (ValueError, SyntaxError) as e:
raise HTTPException(status_code=400, detail=str(e))
- @app.delete("/api/skills/{name}", dependencies=[Depends(require_auth)])
- async def api_delete_skill(name: str) -> bool:
- """Delete a skill."""
- if _state.skill_library is None:
+ @app.delete("/api/workflows/{name}", dependencies=[Depends(require_auth)])
+ async def api_delete_workflow(name: str) -> bool:
+ """Delete a workflow."""
+ if _state.workflow_library is None:
return False
- return _state.skill_library.remove(name)
+ return _state.workflow_library.remove(name)
# ==========================================================================
# Artifacts API Endpoints
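
# Besides the rename, this file changes the workflow response shape from a
# verbose parameters list to a flat params mapping, illustrated here with
# example values:
before = {
    "name": "fetch_url",
    "parameters": [
        {"name": "url", "type": "str", "description": "URL to fetch",
         "required": True, "default": None},
    ],
}
after = {"name": "fetch_url", "params": {"url": "URL to fetch"}}
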
diff --git a/src/py_code_mode/execution/in_process/__init__.py b/src/py_code_mode/execution/in_process/__init__.py
index ec8238e..4ea3187 100644
--- a/src/py_code_mode/execution/in_process/__init__.py
+++ b/src/py_code_mode/execution/in_process/__init__.py
@@ -2,6 +2,6 @@
from py_code_mode.execution.in_process.config import InProcessConfig
from py_code_mode.execution.in_process.executor import InProcessExecutor
-from py_code_mode.execution.in_process.skills_namespace import SkillsNamespace
+from py_code_mode.execution.in_process.workflows_namespace import WorkflowsNamespace
-__all__ = ["InProcessConfig", "InProcessExecutor", "SkillsNamespace"]
+__all__ = ["InProcessConfig", "InProcessExecutor", "WorkflowsNamespace"]
diff --git a/src/py_code_mode/execution/in_process/config.py b/src/py_code_mode/execution/in_process/config.py
index f33139c..3ca2e3c 100644
--- a/src/py_code_mode/execution/in_process/config.py
+++ b/src/py_code_mode/execution/in_process/config.py
@@ -23,7 +23,7 @@ class InProcessConfig:
None means no pre-configured deps.
deps_file: Path to requirements.txt-style file for pre-configured deps.
None means no deps file.
- ipc_timeout: Timeout for IPC queries (tool/skill/artifact) in seconds.
+ ipc_timeout: Timeout for IPC queries (tool/workflow/artifact) in seconds.
Default: 30.0.
"""
diff --git a/src/py_code_mode/execution/in_process/executor.py b/src/py_code_mode/execution/in_process/executor.py
index cf3a182..892a74d 100644
--- a/src/py_code_mode/execution/in_process/executor.py
+++ b/src/py_code_mode/execution/in_process/executor.py
@@ -25,12 +25,12 @@
collect_configured_deps,
)
from py_code_mode.execution.in_process.config import InProcessConfig
-from py_code_mode.execution.in_process.skills_namespace import SkillsNamespace
+from py_code_mode.execution.in_process.workflows_namespace import WorkflowsNamespace
from py_code_mode.execution.protocol import Capability, validate_storage_not_access
from py_code_mode.execution.registry import register_backend
-from py_code_mode.skills import SkillLibrary
from py_code_mode.tools import ToolRegistry, ToolsNamespace, load_tools_from_path
from py_code_mode.types import ExecutionResult
+from py_code_mode.workflows import WorkflowLibrary
if TYPE_CHECKING:
from py_code_mode.artifacts import ArtifactStoreProtocol
@@ -47,7 +47,7 @@ class InProcessExecutor:
"""Runs Python code with persistent state in the same process.
Variables, functions, and imports persist across runs.
- Optionally injects tools.*, skills.*, and artifacts.* namespaces.
+ Optionally injects tools.*, workflows.*, and artifacts.* namespaces.
Capabilities:
- TIMEOUT: Yes (via asyncio.wait_for)
@@ -58,7 +58,7 @@ class InProcessExecutor:
Usage:
executor = await InProcessExecutor.create(
tools="./tools/",
- skills="./skills/",
+ workflows="./workflows/",
)
result = await executor.run('tools.nmap(target="scanme.nmap.org")')
"""
@@ -76,14 +76,14 @@ class InProcessExecutor:
def __init__(
self,
registry: ToolRegistry | None = None,
- skill_library: SkillLibrary | None = None,
+ workflow_library: WorkflowLibrary | None = None,
artifact_store: ArtifactStoreProtocol | None = None,
deps_namespace: DepsNamespace | None = None,
default_timeout: float | None = 30.0,
config: InProcessConfig | None = None,
) -> None:
self._registry = registry
- self._skill_library = skill_library
+ self._workflow_library = workflow_library
self._artifact_store: ArtifactStoreProtocol | None = artifact_store
self._deps_namespace: DepsNamespace | None = deps_namespace
self._config = config or InProcessConfig()
@@ -95,9 +95,9 @@ def __init__(
if registry is not None:
self._namespace["tools"] = ToolsNamespace(registry)
- # Inject skills namespace if skill_library provided
- if skill_library is not None:
- self._namespace["skills"] = SkillsNamespace(skill_library, self._namespace)
+ # Inject workflows namespace if workflow_library provided
+ if workflow_library is not None:
+ self._namespace["workflows"] = WorkflowsNamespace(workflow_library, self._namespace)
# Inject artifacts namespace if artifact_store provided
if artifact_store is not None:
@@ -150,12 +150,12 @@ async def run(self, code: str, timeout: float | None = None) -> ExecutionResult:
timeout = timeout if timeout is not None else self._default_timeout
- # Store loop reference for tool/skill calls from thread context
+ # Store loop reference for tool/workflow calls from thread context
loop = asyncio.get_running_loop()
if "tools" in self._namespace:
self._namespace["tools"].set_loop(loop)
- if "skills" in self._namespace:
- self._namespace["skills"].set_loop(loop)
+ if "workflows" in self._namespace:
+ self._namespace["workflows"].set_loop(loop)
# Run in thread to allow timeout cancellation
try:
@@ -227,13 +227,13 @@ async def close(self) -> None:
async def reset(self) -> None:
"""Reset session state.
- Clears all user-defined variables but preserves tools, skills, artifacts, deps namespaces.
+ Clears all user-defined variables but preserves tools/workflows/artifacts/deps namespaces.
"""
# Store namespace items we want to preserve
preserved = {
"__builtins__": self._namespace.get("__builtins__"),
"tools": self._namespace.get("tools"),
- "skills": self._namespace.get("skills"),
+ "workflows": self._namespace.get("workflows"),
"artifacts": self._namespace.get("artifacts"),
"deps": self._namespace.get("deps"),
}
@@ -253,11 +253,11 @@ async def start(
"""Start executor and configure from config and storage backend.
Tools and deps are loaded from executor config (tools_path, deps, deps_file).
- Skills and artifacts come from storage backend.
+ Workflows and artifacts come from storage backend.
Args:
storage: Optional StorageBackend instance.
- If provided, uses storage for skills and artifacts.
+ If provided, uses storage for workflows and artifacts.
If None, uses whatever was passed to __init__.
Raises:
@@ -309,16 +309,20 @@ async def start(
else:
self._namespace["deps"] = self._deps_namespace
- # Skills and artifacts from storage (if provided)
+ # Workflows and artifacts from storage (if provided)
if storage is not None:
- self._skill_library = storage.get_skill_library()
- self._namespace["skills"] = SkillsNamespace(self._skill_library, self._namespace)
+ self._workflow_library = storage.get_workflow_library()
+ self._namespace["workflows"] = WorkflowsNamespace(
+ self._workflow_library, self._namespace
+ )
self._artifact_store = storage.get_artifact_store()
self._namespace["artifacts"] = self._artifact_store
- elif self._skill_library is not None:
- # Use skill_library from __init__ if provided
- self._namespace["skills"] = SkillsNamespace(self._skill_library, self._namespace)
+ elif self._workflow_library is not None:
+ # Use workflow_library from __init__ if provided
+ self._namespace["workflows"] = WorkflowsNamespace(
+ self._workflow_library, self._namespace
+ )
if storage is None and self._artifact_store is not None:
# Use artifact_store from __init__ if provided
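
To make the start() wiring above concrete, here is a minimal usage sketch. It assumes a FileStorage backend exposing get_workflow_library() and get_artifact_store() as described in this diff; the import paths and the FileStorage constructor argument are assumptions, not part of the patch.

import asyncio

from py_code_mode.execution import InProcessExecutor  # import path assumed
from py_code_mode.storage import FileStorage          # backend name from the storage docs; signature assumed


async def main() -> None:
    storage = FileStorage(base_path="~/.code-mode")    # owns workflows and artifacts
    executor = InProcessExecutor(default_timeout=30.0)
    await executor.start(storage=storage)              # injects workflows.* and artifacts.*
    result = await executor.run("workflows.list()")
    print(result)
    await executor.close()


asyncio.run(main())
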
diff --git a/src/py_code_mode/execution/in_process/skills_namespace.py b/src/py_code_mode/execution/in_process/skills_namespace.py
deleted file mode 100644
index 91dcb71..0000000
--- a/src/py_code_mode/execution/in_process/skills_namespace.py
+++ /dev/null
@@ -1,174 +0,0 @@
-"""Skills namespace for code execution.
-
-Provides the skills.* namespace that agents use to search,
-invoke, create, and delete skills during code execution.
-"""
-
-from __future__ import annotations
-
-import asyncio
-import builtins
-import inspect
-from typing import TYPE_CHECKING, Any
-
-from py_code_mode.skills import SkillLibrary
-
-if TYPE_CHECKING:
- from py_code_mode.skills import PythonSkill
-
-# Use builtins to avoid security hook false positive on Python's code execution
-_run_code = getattr(builtins, "exec")
-
-
-class SkillsNamespace:
- """Namespace object for skills.* access in executed code.
-
- Wraps a SkillLibrary and provides agent-facing methods plus skill execution.
- """
-
- def __init__(self, library: SkillLibrary, namespace: dict[str, Any]) -> None:
- """Initialize SkillsNamespace.
-
- Args:
- library: The skill library for skill lookup and storage.
- namespace: Dict containing tools, skills, artifacts for skill execution.
- Must be a plain dict, not an executor object.
-
- Raises:
- TypeError: If namespace is an executor-like object (has _namespace attr).
- """
- # Reject executor-like objects - require the actual namespace dict
- if hasattr(namespace, "_namespace"):
- raise TypeError(
- "SkillsNamespace expects a namespace dict, not an executor. "
- "Pass executor._namespace instead of the executor itself."
- )
-
- self._library = library
- self._namespace = namespace
- self._loop: asyncio.AbstractEventLoop | None = None
-
- def set_loop(self, loop: asyncio.AbstractEventLoop) -> None:
- """Set the event loop to use for async skill invocations.
-
- When code runs in a thread (via asyncio.to_thread), we need a reference
- to the main event loop to execute async skills via run_coroutine_threadsafe.
- """
- self._loop = loop
-
- @property
- def library(self) -> SkillLibrary:
- """Access the underlying SkillLibrary.
-
- Useful for tests and advanced use cases that need direct library access.
- """
- return self._library
-
- def search(self, query: str, limit: int = 10) -> list[dict[str, Any]]:
- """Search for skills matching query. Returns simplified skill info."""
- skills = self._library.search(query, limit)
- return [self._simplify(s) for s in skills]
-
- def get(self, name: str) -> Any:
- """Get a skill by name."""
- return self._library.get(name)
-
- def list(self) -> list[dict[str, Any]]:
- """List all available skills. Returns simplified skill info."""
- skills = self._library.list()
- return [self._simplify(s) for s in skills]
-
- def _simplify(self, skill: PythonSkill) -> dict[str, Any]:
- """Simplify skill for agent readability."""
- params = {}
- for p in skill.parameters:
- params[p.name] = p.description or p.type
- return {
- "name": skill.name,
- "description": skill.description,
- "params": params,
- }
-
- def create(
- self,
- name: str,
- source: str,
- description: str = "",
- ) -> dict[str, Any]:
- """Create and save a new Python skill.
-
- Args:
- name: Skill name (must be valid Python identifier).
- source: Python source code with def run(...) function.
- description: What the skill does.
-
- Returns:
- Simplified skill info dict.
-
- Raises:
- ValueError: If name is invalid, reserved, or code is malformed.
- SyntaxError: If code has syntax errors.
- """
- from py_code_mode.skills import PythonSkill
-
- # PythonSkill.from_source handles all validation
- skill = PythonSkill.from_source(
- name=name,
- source=source,
- description=description,
- )
-
- # Add to library (persists to store if configured)
- self._library.add(skill)
-
- return self._simplify(skill)
-
- def delete(self, name: str) -> bool:
- """Remove a skill from the library.
-
- Args:
- name: Name of skill to delete.
-
- Returns:
- True if skill was deleted, False if not found.
- """
- return self._library.remove(name)
-
- def __getattr__(self, name: str) -> Any:
- """Allow skills.skill_name(...) syntax."""
- if name.startswith("_"):
- raise AttributeError(name)
- skill = self._library.get(name)
- if skill is None:
- raise AttributeError(f"Skill not found: {name}")
- # Capture name in closure to avoid conflict with kwargs
- skill_name = name
- return lambda **kwargs: self.invoke(skill_name, **kwargs)
-
- def invoke(self, skill_name: str, **kwargs: Any) -> Any:
- """Invoke a skill by calling its run() function.
-
- Returns the result of the skill execution.
- """
- skill = self._library.get(skill_name)
- if skill is None:
- raise ValueError(f"Skill not found: {skill_name}")
-
- skill_namespace = {
- "tools": self._namespace.get("tools"),
- "skills": self._namespace.get("skills"),
- "artifacts": self._namespace.get("artifacts"),
- "deps": self._namespace.get("deps"),
- }
- code = compile(skill.source, f"<skill:{skill_name}>", "exec")
- _run_code(code, skill_namespace)
- result = skill_namespace["run"](**kwargs)
-
- if inspect.iscoroutine(result):
- try:
- asyncio.get_running_loop()
- except RuntimeError:
- return asyncio.run(result)
- raise RuntimeError("Cannot invoke async skills from a running event loop")
-
- return result
diff --git a/src/py_code_mode/execution/in_process/workflows_namespace.py b/src/py_code_mode/execution/in_process/workflows_namespace.py
new file mode 100644
index 0000000..b8b3749
--- /dev/null
+++ b/src/py_code_mode/execution/in_process/workflows_namespace.py
@@ -0,0 +1,177 @@
+"""Workflows namespace for code execution.
+
+Provides the workflows.* namespace that agents use to search,
+invoke, create, and delete workflows during code execution.
+"""
+
+from __future__ import annotations
+
+import asyncio
+import builtins
+import inspect
+from typing import TYPE_CHECKING, Any
+
+from py_code_mode.workflows import WorkflowLibrary
+
+if TYPE_CHECKING:
+ from py_code_mode.workflows import PythonWorkflow
+
+# Use builtins to avoid security hook false positive on Python's code execution
+_run_code = getattr(builtins, "exec")
+
+
+class WorkflowsNamespace:
+ """Namespace object for workflows.* access in executed code.
+
+ Wraps a WorkflowLibrary and provides agent-facing methods plus workflow execution.
+ """
+
+ def __init__(self, library: WorkflowLibrary, namespace: dict[str, Any]) -> None:
+ """Initialize WorkflowsNamespace.
+
+ Args:
+ library: The workflow library for workflow lookup and storage.
+ namespace: Dict containing tools, workflows, artifacts for workflow execution.
+ Must be a plain dict, not an executor object.
+
+ Raises:
+ TypeError: If namespace is an executor-like object (has _namespace attr).
+ """
+ # Reject executor-like objects - require the actual namespace dict
+ if hasattr(namespace, "_namespace"):
+ raise TypeError(
+ "WorkflowsNamespace expects a namespace dict, not an executor. "
+ "Pass executor._namespace instead of the executor itself."
+ )
+
+ self._library = library
+ self._namespace = namespace
+ self._loop: asyncio.AbstractEventLoop | None = None
+
+ def set_loop(self, loop: asyncio.AbstractEventLoop) -> None:
+ """Set the event loop to use for async workflow invocations.
+
+ When code runs in a thread (via asyncio.to_thread), we need a reference
+ to the main event loop to execute async workflows via run_coroutine_threadsafe.
+ """
+ self._loop = loop
+
+ @property
+ def library(self) -> WorkflowLibrary:
+ """Access the underlying WorkflowLibrary.
+
+ Useful for tests and advanced use cases that need direct library access.
+ """
+ return self._library
+
+ def search(self, query: str, limit: int = 10) -> list[dict[str, Any]]:
+ """Search for workflows matching query. Returns simplified workflow info."""
+ workflows = self._library.search(query, limit)
+ return [self._simplify(w) for w in workflows]
+
+ def get(self, name: str) -> Any:
+ """Get a workflow by name."""
+ return self._library.get(name)
+
+ def list(self) -> list[dict[str, Any]]:
+ """List all available workflows. Returns simplified workflow info."""
+ workflows = self._library.list()
+ return [self._simplify(w) for w in workflows]
+
+ def _simplify(self, workflow: PythonWorkflow) -> dict[str, Any]:
+ """Simplify workflow for agent readability."""
+ params = {}
+ for p in workflow.parameters:
+ params[p.name] = p.description or p.type
+ return {
+ "name": workflow.name,
+ "description": workflow.description,
+ "params": params,
+ }
+
+ def create(
+ self,
+ name: str,
+ source: str,
+ description: str = "",
+ ) -> dict[str, Any]:
+ """Create and save a new Python workflow.
+
+ Args:
+ name: Workflow name (must be valid Python identifier).
+ source: Python source code with def run(...) function.
+ description: What the workflow does.
+
+ Returns:
+ Simplified workflow info dict.
+
+ Raises:
+ ValueError: If name is invalid, reserved, or code is malformed.
+ SyntaxError: If code has syntax errors.
+ """
+ from py_code_mode.workflows import PythonWorkflow
+
+ # PythonWorkflow.from_source handles all validation
+ workflow = PythonWorkflow.from_source(
+ name=name,
+ source=source,
+ description=description,
+ )
+
+ # Add to library (persists to store if configured)
+ self._library.add(workflow)
+
+ return self._simplify(workflow)
+
+ def delete(self, name: str) -> bool:
+ """Remove a workflow from the library.
+
+ Args:
+ name: Name of workflow to delete.
+
+ Returns:
+ True if workflow was deleted, False if not found.
+ """
+ return self._library.remove(name)
+
+ def __getattr__(self, name: str) -> Any:
+ """Allow workflows.workflow_name(...) syntax."""
+ if name.startswith("_"):
+ raise AttributeError(name)
+ workflow = self._library.get(name)
+ if workflow is None:
+ raise AttributeError(f"Workflow not found: {name}")
+ # Capture name in closure to avoid conflict with kwargs
+ workflow_name = name
+ return lambda **kwargs: self.invoke(workflow_name, **kwargs)
+
+ def invoke(self, workflow_name: str, **kwargs: Any) -> Any:
+ """Invoke a workflow by calling its run() function.
+
+ Returns the result of the workflow execution.
+ """
+ workflow = self._library.get(workflow_name)
+ if workflow is None:
+ raise ValueError(f"Workflow not found: {workflow_name}")
+
+ workflow_namespace = {
+ "tools": self._namespace.get("tools"),
+ "workflows": self._namespace.get("workflows"),
+ "artifacts": self._namespace.get("artifacts"),
+ "deps": self._namespace.get("deps"),
+ }
+ code = compile(workflow.source, f"<workflow:{workflow_name}>", "exec")
+ _run_code(code, workflow_namespace)
+ run_func = workflow_namespace.get("run")
+ if not callable(run_func):
+ raise ValueError(f"Workflow {workflow_name} has no run() function")
+ result = run_func(**kwargs)
+
+ if inspect.iscoroutine(result):
+ try:
+ asyncio.get_running_loop()
+ except RuntimeError:
+ return asyncio.run(result)
+ raise RuntimeError("Cannot invoke async workflows from a running event loop")
+
+ return result
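
A short sketch of the agent-facing surface this new module provides, assuming it runs inside executed agent code where a WorkflowsNamespace instance is already injected as `workflows`; the workflow itself is illustrative.

# Runs inside executed agent code; `workflows` is injected by the executor.
source = '''
def run(target: str) -> str:
    return f"checked {target}"
'''

info = workflows.create(
    name="check_host",        # must be a valid Python identifier
    source=source,            # must define a run(...) function
    description="Toy example workflow",
)
# info -> {"name": "check_host", "description": "Toy example workflow", "params": {"target": "str"}}

result = workflows.invoke("check_host", target="scanme.nmap.org")
result = workflows.check_host(target="scanme.nmap.org")  # same call via __getattr__
workflows.delete("check_host")
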
diff --git a/src/py_code_mode/execution/protocol.py b/src/py_code_mode/execution/protocol.py
index efa4a06..570987e 100644
--- a/src/py_code_mode/execution/protocol.py
+++ b/src/py_code_mode/execution/protocol.py
@@ -26,11 +26,11 @@ class FileStorageAccess:
"""Access descriptor for file-based storage.
Session derives this from FileStorage and passes to executor.start()
- so the executor knows where to find skills and artifacts.
+ so the executor knows where to find workflows and artifacts.
Tools and deps are owned by executors (via config), not storage.
"""
- skills_path: Path | None
+ workflows_path: Path | None
artifacts_path: Path
vectors_path: Path | None = None
@@ -40,12 +40,12 @@ class RedisStorageAccess:
"""Access descriptor for Redis storage.
Session derives this from RedisStorage and passes to executor.start()
- so the executor knows the Redis connection and key prefixes for skills
+ so the executor knows the Redis connection and key prefixes for workflows
and artifacts. Tools and deps are owned by executors (via config).
"""
redis_url: str
- skills_prefix: str
+ workflows_prefix: str
artifacts_prefix: str
vectors_prefix: str | None = None
@@ -188,7 +188,7 @@ async def start(self, storage: StorageBackend | None = None) -> None:
Args:
storage: StorageBackend instance. Executor decides how to use it:
- - InProcessExecutor: uses storage.tools/skills/artifacts directly
+ - InProcessExecutor: uses storage.tools/workflows/artifacts directly
- ContainerExecutor: calls storage.get_serializable_access()
- SubprocessExecutor: calls storage.get_serializable_access()
"""
diff --git a/src/py_code_mode/execution/subprocess/__init__.py b/src/py_code_mode/execution/subprocess/__init__.py
index 40c34c9..914445d 100644
--- a/src/py_code_mode/execution/subprocess/__init__.py
+++ b/src/py_code_mode/execution/subprocess/__init__.py
@@ -2,7 +2,7 @@
This executor runs Python code in an isolated subprocess (Jupyter kernel) with
bidirectional RPC for namespace operations. The kernel contains lightweight
-proxy objects that forward all tools/skills/artifacts/deps calls to the host.
+proxy objects that forward all tools/workflows/artifacts/deps calls to the host.
"""
from py_code_mode.execution.subprocess.config import SubprocessConfig
diff --git a/src/py_code_mode/execution/subprocess/config.py b/src/py_code_mode/execution/subprocess/config.py
index cfcfa96..87fe107 100644
--- a/src/py_code_mode/execution/subprocess/config.py
+++ b/src/py_code_mode/execution/subprocess/config.py
@@ -48,7 +48,7 @@ class SubprocessConfig:
are kernel dependencies. None means no pre-configured user deps.
deps_file: Path to requirements.txt-style file for pre-configured deps.
None means no deps file.
- ipc_timeout: Timeout for IPC queries (tool/skill/artifact) in seconds.
+ ipc_timeout: Timeout for IPC queries (tool/workflow/artifact) in seconds.
None means unlimited (default).
"""
@@ -72,7 +72,9 @@ def __post_init__(self) -> None:
object.__setattr__(self, "python_version", _get_current_python_version())
# Validate python_version (now guaranteed to be str)
- version = self.python_version # type: ignore[union-attr]
+ version = self.python_version
+ if version is None:
+ raise ValueError("python_version cannot be None")
stripped = version.strip()
if not stripped:
raise ValueError("python_version cannot be empty or whitespace-only")
diff --git a/src/py_code_mode/execution/subprocess/executor.py b/src/py_code_mode/execution/subprocess/executor.py
index 99f4a7a..f3c234d 100644
--- a/src/py_code_mode/execution/subprocess/executor.py
+++ b/src/py_code_mode/execution/subprocess/executor.py
@@ -2,7 +2,7 @@
This executor runs Python code in an isolated subprocess (Jupyter kernel) with
bidirectional RPC for namespace operations. The kernel contains lightweight
-proxy objects that forward all tools/skills/artifacts/deps calls to the host.
+proxy objects that forward all tools/workflows/artifacts/deps calls to the host.
Key advantages over code injection (old namespace.py approach):
- No py-code-mode install needed in subprocess venv (just ipykernel + zmq)
@@ -68,7 +68,7 @@ class StorageResourceProvider:
"""ResourceProvider that bridges RPC to storage backend.
This class implements the ResourceProvider protocol by delegating
- to the storage backend for skills and artifacts, and using
+ to the storage backend for workflows and artifacts, and using
executor-provided tool registry and deps store.
"""
@@ -84,7 +84,7 @@ def __init__(
"""Initialize provider.
Args:
- storage: Storage backend for skills and artifacts.
+ storage: Storage backend for workflows and artifacts.
tool_registry: Tool registry loaded from executor config.
deps_store: Deps store for dependency management.
allow_runtime_deps: Whether to allow deps.add() and deps.remove().
@@ -97,18 +97,18 @@ def __init__(
self._allow_runtime_deps = allow_runtime_deps
self._venv_manager = venv_manager
self._venv = venv
- # Cached skill library (lazy initialized)
- self._skill_library = None
+ # Cached workflow library (lazy initialized)
+ self._workflow_library = None
def _get_tool_registry(self) -> ToolRegistry | None:
"""Get tool registry. Already loaded at construction time."""
return self._tool_registry
- def _get_skill_library(self):
- """Get skill library, caching the result."""
- if self._skill_library is None:
- self._skill_library = self._storage.get_skill_library()
- return self._skill_library
+ def _get_workflow_library(self):
+ """Get workflow library, caching the result."""
+ if self._workflow_library is None:
+ self._workflow_library = self._storage.get_workflow_library()
+ return self._workflow_library
# -------------------------------------------------------------------------
# Tool methods
@@ -173,73 +173,73 @@ async def list_tool_recipes(self, name: str) -> list[dict[str, Any]]:
]
# -------------------------------------------------------------------------
- # Skill methods
+ # Workflow methods
# -------------------------------------------------------------------------
- async def search_skills(self, query: str, limit: int) -> list[dict[str, Any]]:
- """Search for skills matching query."""
- library = self._get_skill_library()
+ async def search_workflows(self, query: str, limit: int) -> list[dict[str, Any]]:
+ """Search for workflows matching query."""
+ library = self._get_workflow_library()
library.refresh()
- skills = library.search(query, limit=limit)
+ workflows = library.search(query, limit=limit)
return [
{
- "name": s.name,
- "description": s.description,
- "params": {p.name: p.description or p.type for p in s.parameters},
+ "name": w.name,
+ "description": w.description,
+ "params": {p.name: p.description or p.type for p in w.parameters},
}
- for s in skills
+ for w in workflows
]
- async def list_skills(self) -> list[dict[str, Any]]:
- """List all available skills."""
- library = self._get_skill_library()
+ async def list_workflows(self) -> list[dict[str, Any]]:
+ """List all available workflows."""
+ library = self._get_workflow_library()
library.refresh()
- skills = library.list()
+ workflows = library.list()
return [
{
- "name": s.name,
- "description": s.description,
- "params": {p.name: p.description or p.type for p in s.parameters},
+ "name": w.name,
+ "description": w.description,
+ "params": {p.name: p.description or p.type for p in w.parameters},
}
- for s in skills
+ for w in workflows
]
- async def get_skill(self, name: str) -> dict[str, Any] | None:
- """Get a skill by name."""
- library = self._get_skill_library()
+ async def get_workflow(self, name: str) -> dict[str, Any] | None:
+ """Get a workflow by name."""
+ library = self._get_workflow_library()
library.refresh()
- skill = library.get(name)
- if skill is None:
+ workflow = library.get(name)
+ if workflow is None:
return None
return {
- "name": skill.name,
- "description": skill.description,
- "source": skill.source,
- "params": {p.name: p.description or p.type for p in skill.parameters},
+ "name": workflow.name,
+ "description": workflow.description,
+ "source": workflow.source,
+ "params": {p.name: p.description or p.type for p in workflow.parameters},
}
- async def create_skill(self, name: str, source: str, description: str) -> dict[str, Any]:
- """Create and save a new skill."""
- from py_code_mode.skills import PythonSkill
+ async def create_workflow(self, name: str, source: str, description: str) -> dict[str, Any]:
+ """Create and save a new workflow."""
+ from py_code_mode.workflows import PythonWorkflow
- skill = PythonSkill.from_source(
+ workflow = PythonWorkflow.from_source(
name=name,
source=source,
description=description,
)
- library = self._get_skill_library()
- library.add(skill)
+ library = self._get_workflow_library()
+ library.add(workflow)
return {
- "name": skill.name,
- "description": skill.description,
- "params": {p.name: p.description or p.type for p in skill.parameters},
+ "name": workflow.name,
+ "description": workflow.description,
+ "params": {p.name: p.description or p.type for p in workflow.parameters},
}
- async def delete_skill(self, name: str) -> bool:
- """Delete a skill."""
- library = self._get_skill_library()
+ async def delete_workflow(self, name: str) -> bool:
+ """Delete a workflow."""
+ library = self._get_workflow_library()
return library.remove(name)
# -------------------------------------------------------------------------
@@ -387,7 +387,7 @@ class SubprocessExecutor:
This executor uses bidirectional RPC via the stdin channel for namespace
operations. The kernel contains lightweight proxy objects that forward
- all tools/skills/artifacts/deps calls to the host.
+ all tools/workflows/artifacts/deps calls to the host.
Capabilities:
- TIMEOUT: Yes (via message wait timeout)
@@ -452,10 +452,10 @@ async def start(self, storage: StorageBackend | None = None) -> None:
"""Start kernel: create venv, start kernel, initialize RPC.
Tools and deps are loaded from executor config (tools_path, deps, deps_file).
- Skills and artifacts come from storage backend.
+ Workflows and artifacts come from storage backend.
Args:
- storage: Optional StorageBackend for skills and artifacts.
+ storage: Optional StorageBackend for workflows and artifacts.
Raises:
RuntimeError: If already started or storage access fails.
@@ -521,7 +521,7 @@ async def start(self, storage: StorageBackend | None = None) -> None:
provider=self._provider, # type: ignore[arg-type]
kernel_name=self._venv.kernel_spec_name,
startup_timeout=self._config.startup_timeout,
- ipc_timeout=self._config.ipc_timeout,
+ ipc_timeout=self._config.ipc_timeout or 30.0,
)
async def run(self, code: str, timeout: float | None = None) -> ExecutionResult:
diff --git a/src/py_code_mode/execution/subprocess/host.py b/src/py_code_mode/execution/subprocess/host.py
index c4333f8..8b13b5b 100644
--- a/src/py_code_mode/execution/subprocess/host.py
+++ b/src/py_code_mode/execution/subprocess/host.py
@@ -19,7 +19,7 @@
from queue import Empty
from typing import TYPE_CHECKING, Any, Protocol, runtime_checkable
-from jupyter_client import AsyncKernelManager
+from jupyter_client.manager import AsyncKernelManager
from py_code_mode.execution.subprocess.kernel_init import get_kernel_init_code
from py_code_mode.execution.subprocess.rpc import RPCRequest, RPCResponse
@@ -34,7 +34,7 @@ def _parse_method(method: str) -> tuple[str, str]:
"""Parse RPC method name into namespace and operation.
Args:
- method: The RPC method name (e.g., "skills.invoke", "tools.call").
+ method: The RPC method name (e.g., "workflows.invoke", "tools.call").
Returns:
Tuple of (namespace, operation). Returns ("rpc", method) for unknown format.
@@ -70,28 +70,28 @@ async def list_tool_recipes(self, name: str) -> list[dict[str, Any]]:
"""List recipes for a specific tool."""
...
- # Skill methods
- # Note: No invoke_skill - skills execute locally in kernel after fetching
- # source via get_skill. This ensures skills can import runtime-installed packages.
+ # Workflow methods
+ # Note: No invoke_workflow - workflows execute locally in kernel after fetching
+ # source via get_workflow. This ensures workflows can import runtime-installed packages.
- async def search_skills(self, query: str, limit: int) -> list[dict[str, Any]]:
- """Search for skills matching query."""
+ async def search_workflows(self, query: str, limit: int) -> list[dict[str, Any]]:
+ """Search for workflows matching query."""
...
- async def list_skills(self) -> list[dict[str, Any]]:
- """List all available skills."""
+ async def list_workflows(self) -> list[dict[str, Any]]:
+ """List all available workflows."""
...
- async def get_skill(self, name: str) -> dict[str, Any] | None:
- """Get a skill by name."""
+ async def get_workflow(self, name: str) -> dict[str, Any] | None:
+ """Get a workflow by name."""
...
- async def create_skill(self, name: str, source: str, description: str) -> dict[str, Any]:
- """Create and save a new skill."""
+ async def create_workflow(self, name: str, source: str, description: str) -> dict[str, Any]:
+ """Create and save a new workflow."""
...
- async def delete_skill(self, name: str) -> bool:
- """Delete a skill."""
+ async def delete_workflow(self, name: str) -> bool:
+ """Delete a workflow."""
...
# Artifact methods
@@ -205,9 +205,10 @@ async def start(
try:
# Start kernel
- self._km = AsyncKernelManager(kernel_name=kernel_name)
- await self._km.start_kernel()
- self._kc = self._km.client()
+ km = AsyncKernelManager(kernel_name=kernel_name)
+ self._km = km
+ await km.start_kernel()
+ self._kc = km.client()
self._kc.start_channels()
# Wait for kernel to be ready
@@ -403,7 +404,17 @@ async def _handle_rpc_request(self, data: dict[str, Any]) -> None:
"""Handle an RPC request from the kernel."""
if self._provider is None:
self._send_input_reply(
- json.dumps(RPCResponse(id=data["id"], error="No provider").to_dict())
+ json.dumps(
+ RPCResponse(
+ id=data["id"],
+ error={
+ "namespace": "rpc",
+ "operation": "dispatch",
+ "message": "No provider",
+ "type": "RuntimeError",
+ },
+ ).to_dict()
+ )
)
return
@@ -446,22 +457,22 @@ async def _dispatch_rpc(self, request: RPCRequest) -> Any:
elif method == "tools.list_recipes":
return await self._provider.list_tool_recipes(params["name"])
- # Skills methods
- # Note: skills.invoke is NOT handled here - skills execute locally in kernel
- # after fetching source via skills.get. This ensures skills can import
+ # Workflows methods
+ # Note: workflows.invoke is NOT handled here - workflows execute locally in kernel
+ # after fetching source via workflows.get. This ensures workflows can import
# packages installed at runtime in the kernel's venv.
- elif method == "skills.search":
- return await self._provider.search_skills(params["query"], params.get("limit", 5))
- elif method == "skills.list":
- return await self._provider.list_skills()
- elif method == "skills.get":
- return await self._provider.get_skill(params["name"])
- elif method == "skills.create":
- return await self._provider.create_skill(
+ elif method == "workflows.search":
+ return await self._provider.search_workflows(params["query"], params.get("limit", 5))
+ elif method == "workflows.list":
+ return await self._provider.list_workflows()
+ elif method == "workflows.get":
+ return await self._provider.get_workflow(params["name"])
+ elif method == "workflows.create":
+ return await self._provider.create_workflow(
params["name"], params["source"], params.get("description", "")
)
- elif method == "skills.delete":
- return await self._provider.delete_skill(params["name"])
+ elif method == "workflows.delete":
+ return await self._provider.delete_workflow(params["name"])
# Artifacts methods
elif method == "artifacts.load":
@@ -538,4 +549,4 @@ def is_alive(self) -> bool:
if self._km is None:
return False
# is_alive() is sync for AsyncKernelManager
- return self._km.is_alive()
+ return bool(self._km.is_alive())
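
A minimal sketch of the _parse_method contract documented above, reimplemented from the docstring since the function body is not part of this hunk:

def _parse_method(method: str) -> tuple[str, str]:
    # "workflows.invoke" -> ("workflows", "invoke"); unknown format -> ("rpc", method)
    if "." in method:
        namespace, operation = method.split(".", 1)
        return namespace, operation
    return "rpc", method


assert _parse_method("workflows.invoke") == ("workflows", "invoke")
assert _parse_method("ping") == ("rpc", "ping")
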
diff --git a/src/py_code_mode/execution/subprocess/kernel_init.py b/src/py_code_mode/execution/subprocess/kernel_init.py
index c481d24..c74b14f 100644
--- a/src/py_code_mode/execution/subprocess/kernel_init.py
+++ b/src/py_code_mode/execution/subprocess/kernel_init.py
@@ -2,7 +2,7 @@
This module provides the KERNEL_INIT_CODE constant - a Python code string that
is executed in the kernel subprocess to set up the RPC mechanism and proxy
-namespaces for tools, skills, artifacts, and deps.
+namespaces for tools, workflows, artifacts, and deps.
The proxies forward all namespace operations to the host via the stdin channel,
which allows the host to control access to storage and tools while maintaining
@@ -92,12 +92,12 @@ def __init__(
self.__traceback__ = None # Suppress traceback - error is from host, not kernel
-class SkillError(NamespaceError):
- """Error in skills namespace operation."""
+class WorkflowError(NamespaceError):
+ """Error in workflows namespace operation."""
def __init__(
self, operation: str, message: str, original_type: str = "RuntimeError"
) -> None:
- super().__init__("skills", operation, message, original_type)
+ super().__init__("workflows", operation, message, original_type)
class ToolError(NamespaceError):
@@ -153,10 +153,10 @@ class SyncResult(NamedTuple):
failed: tuple[str, ...]
-class Skill(NamedTuple):
- """Lightweight Skill for kernel-side use.
+class Workflow(NamedTuple):
+ """Lightweight Workflow for kernel-side use.
- Mirrors py_code_mode.skills.skill.PythonSkill structure.
+ Mirrors py_code_mode.workflows.workflow.PythonWorkflow structure.
"""
name: str
description: str
@@ -280,8 +280,8 @@ def _rpc_call(method: str, **params) -> Any:
# Map namespace to error class, suppress traceback (from None)
# The error originated host-side, kernel traceback is just RPC plumbing
- if namespace == "skills":
- raise SkillError(operation, message, error_type) from None
+ if namespace == "workflows":
+ raise WorkflowError(operation, message, error_type) from None
elif namespace == "tools":
raise ToolError(operation, message, error_type) from None
elif namespace == "artifacts":
@@ -401,54 +401,54 @@ def search(self, query: str, limit: int = 10) -> list[dict[str, Any]]:
return _rpc_call("tools.search", query=query, limit=limit)
-class SkillsProxy:
- """Proxy for invoking host skills.
+class WorkflowsProxy:
+ """Proxy for invoking host workflows.
Supports:
- - skills.invoke("name", arg=value) - invoke a skill
- - skills.search("query") - search for skills
- - skills.list() - list all skills
- - skills.get("name") - get skill details
- - skills.create("name", source, description) - create a skill
- - skills.skill_name(arg=value) - direct invocation syntax
+ - workflows.invoke("name", arg=value) - invoke a workflow
+ - workflows.search("query") - search for workflows
+ - workflows.list() - list all workflows
+ - workflows.get("name") - get workflow details
+ - workflows.create("name", source, description) - create a workflow
+ - workflows.workflow_name(arg=value) - direct invocation syntax
"""
- def invoke(self, skill_name: str, **kwargs) -> Any:
- """Invoke a skill by name.
+ def invoke(self, workflow_name: str, **kwargs) -> Any:
+ """Invoke a workflow by name.
- Gets skill source from host and executes it locally in the kernel.
- This ensures skills can import packages installed at runtime.
- Handles async skills by running them with asyncio.run().
+ Gets workflow source from host and executes it locally in the kernel.
+ This ensures workflows can import packages installed at runtime.
+ Handles async workflows by running them with asyncio.run().
Args:
- skill_name: Name of the skill to invoke.
- **kwargs: Arguments to pass to the skill's run() function.
+ workflow_name: Name of the workflow to invoke.
+ **kwargs: Arguments to pass to the workflow's run() function.
- Note: Uses skill_name (not name) to avoid collision with skills
+ Note: Uses workflow_name (not name) to avoid collision with workflows
that have a 'name' parameter.
"""
import asyncio
- skill = _rpc_call("skills.get", name=skill_name)
- if skill is None:
- raise ValueError(f"Skill not found: {{skill_name}}")
+ workflow = _rpc_call("workflows.get", name=workflow_name)
+ if workflow is None:
+ raise ValueError(f"Workflow not found: {{workflow_name}}")
- source = skill.get("source")
+ source = workflow.get("source")
if not source:
- raise ValueError(f"Skill has no source: {{skill_name}}")
+ raise ValueError(f"Workflow has no source: {{workflow_name}}")
- skill_namespace = {{
+ workflow_namespace = {{
"tools": tools,
- "skills": skills,
+ "workflows": workflows,
"artifacts": artifacts,
"deps": deps,
}}
- code = compile(source, f"<skill:{{skill_name}}>", "exec")
- exec(code, skill_namespace)
+ code = compile(source, f"<workflow:{{workflow_name}}>", "exec")
+ exec(code, workflow_namespace)
- run_func = skill_namespace.get("run")
+ run_func = workflow_namespace.get("run")
if not callable(run_func):
- raise ValueError(f"Skill {{skill_name}} has no run() function")
+ raise ValueError(f"Workflow {{workflow_name}} has no run() function")
result = run_func(**kwargs)
if asyncio.iscoroutine(result):
@@ -466,15 +466,15 @@ def invoke(self, skill_name: str, **kwargs) -> Any:
return asyncio.run(result)
return result
- def search(self, query: str, limit: int = 5) -> list[Skill]:
- """Search for skills matching query.
+ def search(self, query: str, limit: int = 5) -> list[Workflow]:
+ """Search for workflows matching query.
Returns:
- List of Skill objects matching the query.
+ List of Workflow objects matching the query.
"""
- result = _rpc_call("skills.search", query=query, limit=limit)
+ result = _rpc_call("workflows.search", query=query, limit=limit)
return [
- Skill(
+ Workflow(
name=s["name"],
description=s.get("description", ""),
params=s.get("params", {{}}),
@@ -482,15 +482,15 @@ def search(self, query: str, limit: int = 5) -> list[Skill]:
for s in result
]
- def list(self) -> list[Skill]:
- """List all available skills.
+ def list(self) -> list[Workflow]:
+ """List all available workflows.
Returns:
- List of Skill objects.
+ List of Workflow objects.
"""
- result = _rpc_call("skills.list")
+ result = _rpc_call("workflows.list")
return [
- Skill(
+ Workflow(
name=s["name"],
description=s.get("description", ""),
params=s.get("params", {{}}),
@@ -499,37 +499,38 @@ def list(self) -> list[Skill]:
]
def get(self, name: str) -> dict[str, Any] | None:
- """Get a skill by name.
+ """Get a workflow by name.
- Returns full skill details including source.
+ Returns full workflow details including source.
"""
- return _rpc_call("skills.get", name=name)
+ return _rpc_call("workflows.get", name=name)
- def create(self, name: str, source: str, description: str = "") -> Skill:
- """Create and save a new skill.
+ def create(self, name: str, source: str, description: str = "") -> Workflow:
+ """Create and save a new workflow.
Returns:
- Skill object for the created skill.
+ Workflow object for the created workflow.
"""
- result = _rpc_call("skills.create", name=name, source=source, description=description)
- return Skill(
+ result = _rpc_call("workflows.create", name=name, source=source, description=description)
+ return Workflow(
name=result["name"],
description=result.get("description", ""),
params=result.get("params", {{}}),
)
def delete(self, name: str) -> bool:
- """Delete a skill."""
- return _rpc_call("skills.delete", name=name)
+ """Delete a workflow."""
+ return _rpc_call("workflows.delete", name=name)
def __getattr__(self, name: str) -> Any:
- """Allow skills.skill_name(...) syntax."""
+ """Allow workflows.workflow_name(...) syntax."""
if name.startswith("_"):
raise AttributeError(name)
- # Return a callable that invokes the skill
+ # Return a callable that invokes the workflow
return lambda **kwargs: self.invoke(name, **kwargs)
+
class ArtifactsProxy:
"""Proxy for accessing host artifacts.
@@ -654,11 +655,11 @@ def __repr__(self) -> str:
# Inject proxies as globals
tools = ToolsProxy()
-skills = SkillsProxy()
+workflows = WorkflowsProxy()
artifacts = ArtifactsProxy()
deps = DepsProxy()
-print("RPC initialized: tools, skills, artifacts, deps are available (via stdin channel)")
+print("RPC initialized: tools, workflows, artifacts, deps are available (via stdin channel)")
'''
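
The proxy's invoke() path above reduces to: fetch the workflow source over RPC, exec it locally in the kernel, then call its run(). A condensed, self-contained sketch of that flow with a stubbed _rpc_call (everything here is illustrative):

def _rpc_call(method: str, **params):
    # Stand-in for the stdin-channel RPC; returns what the host would send.
    assert method == "workflows.get"
    return {"name": params["name"], "source": "def run(x):\n    return x * 2"}


workflow = _rpc_call("workflows.get", name="double")
namespace = {"tools": None, "workflows": None, "artifacts": None, "deps": None}
exec(compile(workflow["source"], "<workflow:double>", "exec"), namespace)
print(namespace["run"](x=21))  # 42
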
diff --git a/src/py_code_mode/execution/subprocess/namespace.py b/src/py_code_mode/execution/subprocess/namespace.py
index b21f774..80cddd7 100644
--- a/src/py_code_mode/execution/subprocess/namespace.py
+++ b/src/py_code_mode/execution/subprocess/namespace.py
@@ -1,6 +1,6 @@
"""Namespace setup code generation for SubprocessExecutor.
-Generates Python code that sets up tools, skills, artifacts, and deps namespaces
+Generates Python code that sets up tools, workflows, artifacts, and deps namespaces
in the kernel subprocess using full py-code-mode functionality.
"""
@@ -17,7 +17,7 @@ def build_namespace_setup_code(
The generated code imports from py-code-mode (which must be installed
in the kernel's venv) and creates real namespace objects with full
- functionality including tool invocation, skill creation, and semantic search.
+ functionality including tool invocation, workflow creation, and semantic search.
Args:
storage_access: Storage access descriptor with paths or connection info.
@@ -60,8 +60,8 @@ def _build_vector_store_setup_code(vectors_path_str: str) -> str:
# Setup vector store if vectors_path provided
_vector_store = None
try:
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
- from py_code_mode.skills import Embedder
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows import Embedder
_vectors_path.mkdir(parents=True, exist_ok=True)
_embedder = Embedder()
_vector_store = ChromaVectorStore(path=_vectors_path, embedder=_embedder)
@@ -98,8 +98,8 @@ def _build_file_storage_setup_code(
The generated code creates an empty ToolRegistry since tool loading is
handled separately by the executor.
"""
- skills_path_str = (
- repr(str(storage_access.skills_path)) if storage_access.skills_path else "None"
+ workflows_path_str = (
+ repr(str(storage_access.workflows_path)) if storage_access.workflows_path else "None"
)
artifacts_path_str = repr(str(storage_access.artifacts_path))
# Base path is parent of artifacts for deps store
@@ -198,32 +198,32 @@ def __call__(self, **kwargs):
tools = _SyncToolsWrapper(_base_tools)
# =============================================================================
-# Skills Namespace (with optional vector store)
+# Workflows Namespace (with optional vector store)
# =============================================================================
-from py_code_mode.skills import FileSkillStore, create_skill_library
-from py_code_mode.execution.in_process.skills_namespace import SkillsNamespace
+from py_code_mode.workflows import FileWorkflowStore, create_workflow_library
+from py_code_mode.execution.in_process.workflows_namespace import WorkflowsNamespace
-_skills_path = Path({skills_path_str}) if {skills_path_str} else None
+_workflows_path = Path({workflows_path_str}) if {workflows_path_str} else None
{vector_store_setup}
-if _skills_path is not None:
- _skills_path.mkdir(parents=True, exist_ok=True)
- _store = FileSkillStore(_skills_path)
- _library = create_skill_library(store=_store, vector_store=_vector_store)
+if _workflows_path is not None:
+ _workflows_path.mkdir(parents=True, exist_ok=True)
+ _store = FileWorkflowStore(_workflows_path)
+ _library = create_workflow_library(store=_store, vector_store=_vector_store)
else:
- from py_code_mode.skills import MemorySkillStore, MockEmbedder, SkillLibrary
- _store = MemorySkillStore()
- _library = SkillLibrary(embedder=MockEmbedder(), store=_store, vector_store=_vector_store)
+ from py_code_mode.workflows import MemoryWorkflowStore, MockEmbedder, WorkflowLibrary
+ _store = MemoryWorkflowStore()
+ _library = WorkflowLibrary(embedder=MockEmbedder(), store=_store, vector_store=_vector_store)
-# SkillsNamespace now takes a namespace dict directly (no executor needed).
+# WorkflowsNamespace now takes a namespace dict directly (no executor needed).
# Create the namespace dict first, then wire up circular references.
-_skills_ns_dict = {{}}
-skills = SkillsNamespace(_library, _skills_ns_dict)
+_workflows_ns_dict = {{}}
+workflows = WorkflowsNamespace(_library, _workflows_ns_dict)
-# Wire up the namespace so skills can access tools/skills/artifacts
-_skills_ns_dict["tools"] = tools
-_skills_ns_dict["skills"] = skills
+# Wire up the namespace so workflows can access tools/workflows/artifacts
+_workflows_ns_dict["tools"] = tools
+_workflows_ns_dict["workflows"] = workflows
# =============================================================================
# Artifacts Namespace (with simplified API for agent usage)
@@ -279,8 +279,9 @@ def path(self):
artifacts = _SimpleArtifactStore(_base_artifacts)
-# Complete the namespace wiring for skills
-_skills_ns_dict["artifacts"] = artifacts
+# Complete the namespace wiring for workflows
+_workflows_ns_dict["artifacts"] = artifacts
+
# =============================================================================
# Deps Namespace (with optional runtime deps control)
@@ -312,9 +313,9 @@ class _ControlledDepsNamespace:
to prevent bypass attacks like deps._namespace.add().
"""
- _ALLOWED_ATTRS = frozenset({
+ _ALLOWED_ATTRS = frozenset({{
"add", "list", "remove", "sync", "__repr__", "__class__", "__doc__"
- })
+ }})
def __init__(self, namespace, allow_runtime):
# Use object.__setattr__ to bypass __getattribute__
@@ -372,27 +373,28 @@ def __repr__(self):
deps = _ControlledDepsNamespace(_base_deps, _allow_runtime_deps)
-# Complete the namespace wiring for skills to include deps
-_skills_ns_dict["deps"] = deps
+# Complete the namespace wiring for workflows to include deps
+_workflows_ns_dict["deps"] = deps
# =============================================================================
# Cleanup temporary variables (keep wrapper classes for runtime use)
# =============================================================================
del _registry, _base_tools
-del _skills_path, _store, _library, _skills_ns_dict
-{vector_store_cleanup}
+del _workflows_path, _store, _library, _workflows_ns_dict
del _artifacts_path, _base_artifacts
+{vector_store_cleanup}
del _base_path, _deps_store, _installer, _base_deps, _allow_runtime_deps
del Path
del ToolRegistry, ToolsNamespace, CLIAdapter
-del FileSkillStore, create_skill_library, SkillsNamespace
+del FileWorkflowStore, create_workflow_library, WorkflowsNamespace
try:
- del MemorySkillStore, MockEmbedder, SkillLibrary
+ del MemoryWorkflowStore, MockEmbedder, WorkflowLibrary
except NameError:
pass
del FileArtifactStore
del DepsNamespace, FileDepsStore, PackageInstaller
+
# Note: Wrapper classes (_SyncToolsWrapper, _SyncToolProxy, _SyncCallableWrapper,
# _SimpleArtifactStore, _ControlledDepsNamespace) and asyncio/nest_asyncio are kept for runtime use
'''
@@ -409,12 +411,12 @@ def _build_redis_storage_setup_code(
handled separately by the executor.
"""
redis_url_str = repr(storage_access.redis_url)
- skills_prefix_str = repr(storage_access.skills_prefix)
+ workflows_prefix_str = repr(storage_access.workflows_prefix)
artifacts_prefix_str = repr(storage_access.artifacts_prefix)
- # Deps prefix follows the pattern: {base_prefix}:deps
- # Extract base prefix from artifacts_prefix (e.g., "test:artifacts" -> "test")
- base_prefix = storage_access.artifacts_prefix.rsplit(":", 1)[0]
- deps_prefix_str = repr(f"{base_prefix}:deps")
+ vectors_prefix_str = (
+ repr(storage_access.vectors_prefix) if storage_access.vectors_prefix else "None"
+ )
+ # Deps prefix follows the pattern: {base_prefix}:deps, with the base taken
+ # from workflows_prefix (e.g., "test:workflows" -> "test:deps")
+ deps_prefix_str = repr(f"{storage_access.workflows_prefix.rsplit(':', 1)[0]}:deps")
allow_deps_str = "True" if allow_runtime_deps else "False"
return f'''# Auto-generated namespace setup for SubprocessExecutor (Redis)
@@ -504,24 +506,39 @@ def __call__(self, **kwargs):
tools = _SyncToolsWrapper(_base_tools)
# =============================================================================
-# Skills Namespace
+# Workflows Namespace (with optional vector store)
# =============================================================================
-from py_code_mode.skills import RedisSkillStore, create_skill_library
-from py_code_mode.execution.in_process.skills_namespace import SkillsNamespace
+from py_code_mode.workflows import RedisWorkflowStore, create_workflow_library
+from py_code_mode.execution.in_process.workflows_namespace import WorkflowsNamespace
-_skills_prefix = {skills_prefix_str}
-_store = RedisSkillStore(_redis_client, prefix=_skills_prefix)
-_library = create_skill_library(store=_store)
+_workflows_prefix = {workflows_prefix_str}
-# SkillsNamespace now takes a namespace dict directly (no executor needed).
+_vectors_prefix = {vectors_prefix_str}
+_vector_store = None
+if _vectors_prefix is not None:
+ try:
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows import Embedder
+ _embedder = Embedder()
+ _vector_store = RedisVectorStore(
+ redis=_redis_client,
+ embedder=_embedder,
+ prefix=_vectors_prefix,
+ )
+ except ImportError:
+ _vector_store = None
+
+_store = RedisWorkflowStore(_redis_client, prefix=_workflows_prefix)
+_library = create_workflow_library(store=_store, vector_store=_vector_store)
+
+# WorkflowsNamespace now takes a namespace dict directly (no executor needed).
# Create the namespace dict first, then wire up circular references.
-_skills_ns_dict = {{}}
-skills = SkillsNamespace(_library, _skills_ns_dict)
+_workflows_ns_dict = {{}}
+workflows = WorkflowsNamespace(_library, _workflows_ns_dict)
-# Wire up the namespace so skills can access tools/skills/artifacts
-_skills_ns_dict["tools"] = tools
-_skills_ns_dict["skills"] = skills
+# Wire up the namespace so workflows can access tools/workflows/artifacts
+_workflows_ns_dict["tools"] = tools
+_workflows_ns_dict["workflows"] = workflows
# =============================================================================
# Artifacts Namespace (with simplified API for agent usage)
@@ -571,8 +588,8 @@ def get(self, name):
artifacts = _SimpleArtifactStore(_base_artifacts)
-# Complete the namespace wiring for skills
-_skills_ns_dict["artifacts"] = artifacts
+# Complete the namespace wiring for workflows
+_workflows_ns_dict["artifacts"] = artifacts
# =============================================================================
# Deps Namespace (with optional runtime deps control)
@@ -664,19 +681,32 @@ def __repr__(self):
deps = _ControlledDepsNamespace(_base_deps, _allow_runtime_deps)
-# Complete the namespace wiring for skills to include deps
-_skills_ns_dict["deps"] = deps
+# Complete the namespace wiring for workflows to include deps
+_workflows_ns_dict["deps"] = deps
# =============================================================================
# Cleanup temporary variables (keep wrapper classes for runtime use)
# =============================================================================
+del _vector_store, _vectors_prefix
+try:
+ del RedisVectorStore, Embedder, _embedder
+except NameError:
+ pass
+
del _registry, _base_tools
-del _skills_prefix, _store, _library, _skills_ns_dict
+del _workflows_prefix, _store, _library, _workflows_ns_dict
del _artifacts_prefix, _base_artifacts
del _deps_prefix, _deps_store, _installer, _base_deps, _allow_runtime_deps
del ToolRegistry, ToolsNamespace, CLIAdapter
-del RedisSkillStore, create_skill_library, SkillsNamespace
+del RedisWorkflowStore, create_workflow_library, WorkflowsNamespace
+try:
+ del MockEmbedder, WorkflowLibrary
+except NameError:
+ pass
del RedisArtifactStore
del DepsNamespace, RedisDepsStore, PackageInstaller
del Redis
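
The frozenset fix above ({...} -> {{...}}) exists because these setup modules are built as f-string templates: literal braces inside an f-string must be doubled or Python treats them as interpolation fields. A two-line illustration:

allowed = f'''_ALLOWED_ATTRS = frozenset({{"add", "list", "remove"}})'''
print(allowed)  # _ALLOWED_ATTRS = frozenset({"add", "list", "remove"})
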
diff --git a/src/py_code_mode/execution/subprocess/rpc.py b/src/py_code_mode/execution/subprocess/rpc.py
index a9ac30d..9d5a9ff 100644
--- a/src/py_code_mode/execution/subprocess/rpc.py
+++ b/src/py_code_mode/execution/subprocess/rpc.py
@@ -21,7 +21,7 @@ class RPCRequest:
and sends back an RPCResponse.
Attributes:
- method: The RPC method name (e.g., "tools.call", "skills.invoke").
+ method: The RPC method name (e.g., "tools.call", "workflows.invoke").
params: Method parameters as a dict.
id: Unique request ID for correlation with response.
"""
diff --git a/src/py_code_mode/integrations/autogen.py b/src/py_code_mode/integrations/autogen.py
index a48809f..06e0390 100644
--- a/src/py_code_mode/integrations/autogen.py
+++ b/src/py_code_mode/integrations/autogen.py
@@ -24,7 +24,7 @@
import json
from collections.abc import Callable
-from typing import TYPE_CHECKING
+from typing import TYPE_CHECKING, Any
if TYPE_CHECKING:
from py_code_mode.execution import InProcessExecutor
@@ -35,7 +35,7 @@ def create_run_code_tool(
session_url: str | None = None,
timeout: float = 30.0,
session_id: str | None = None,
-) -> Callable[[str], str]:
+) -> Callable[[str], Any]:
"""Create a run_code tool for AutoGen agents.
Provide either an executor (for in-process execution) or a session_url
@@ -69,15 +69,15 @@ def create_run_code_tool(
def _create_local_tool(
executor: InProcessExecutor,
timeout: float,
-) -> Callable[[str], str]:
+) -> Callable[[str], Any]:
"""Create tool using local CodeExecutor."""
async def run_code(code: str) -> str:
- """Execute Python code with access to tools.*, skills.*, and artifacts.*.
+ """Execute Python code with access to tools.*, workflows.*, and artifacts.*.
The code runs in a persistent environment where:
- tools.name(arg=value) invokes registered tools
- - skills.invoke("skill_name", arg=value) runs registered skills
+ - workflows.invoke("workflow_name", arg=value) runs registered workflows
- artifacts.save(name, data) persists data across executions
- artifacts.load(name) retrieves previously saved data
- Variables persist across calls
@@ -110,7 +110,7 @@ def _create_remote_tool(
session_url: str,
timeout: float,
session_id: str | None = None,
-) -> Callable[[str], str]:
+) -> Callable[[str], Any]:
"""Create tool using remote session server."""
# Lazy import to avoid requiring httpx for local-only usage
@@ -122,11 +122,11 @@ def _create_remote_tool(
_session_id = session_id or str(uuid.uuid4())
def run_code(code: str) -> str:
- """Execute Python code with access to tools.*, skills.*, and artifacts.*.
+ """Execute Python code with access to tools.*, workflows.*, and artifacts.*.
The code runs on a remote session server where:
- tools.name(arg=value) invokes registered tools
- - skills.invoke("skill_name", arg=value) runs registered skills
+ - workflows.invoke("workflow_name", arg=value) runs registered workflows
- artifacts.save(name, data) persists data across executions
- artifacts.load(name) retrieves previously saved data
- Variables persist across calls
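
A usage sketch for create_run_code_tool based on the signature above; the URL is illustrative, and registering the callable with an AutoGen agent is left out.

from py_code_mode.integrations.autogen import create_run_code_tool

# Remote mode: point the tool at a running session server.
run_code = create_run_code_tool(
    session_url="http://localhost:8000",  # illustrative URL
    timeout=30.0,
)
print(run_code('workflows.invoke("check_host", target="example.com")'))
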
diff --git a/src/py_code_mode/session.py b/src/py_code_mode/session.py
index 8c45c51..28d9a40 100644
--- a/src/py_code_mode/session.py
+++ b/src/py_code_mode/session.py
@@ -1,7 +1,7 @@
"""Session - unified interface for code execution with storage.
Session wraps a StorageBackend and Executor, providing the primary API
-for py-code-mode. It injects tools, skills, and artifacts namespaces
+for py-code-mode. It injects tools, workflows, and artifacts namespaces
into the executor's runtime environment.
"""
@@ -11,8 +11,8 @@
from typing import TYPE_CHECKING, Any
from py_code_mode.execution import Executor
-from py_code_mode.skills import PythonSkill
from py_code_mode.types import ExecutionResult
+from py_code_mode.workflows import PythonWorkflow
if TYPE_CHECKING:
from py_code_mode.storage import StorageBackend
@@ -51,7 +51,7 @@ def __init__(
TypeError: If executor is a string (unsupported) or wrong type.
For convenience, use class methods instead of __init__ directly:
- - Session.from_base(path) - auto-discover tools/skills/artifacts
+ - Session.from_base(path) - auto-discover tools/workflows/artifacts
- Session.subprocess(...) - subprocess isolation (recommended)
- Session.in_process(...) - same process (fastest, no isolation)
- Session.container(...) - Docker isolation (most secure)
@@ -97,7 +97,7 @@ def from_base(
Auto-discovers from workspace directory:
- tools/ for tool definitions
- - skills/ for skill files
+ - workflows/ for workflow files
- artifacts/ for persistent data
- requirements.txt for pre-configured dependencies
@@ -306,7 +306,7 @@ async def start(self) -> None:
# Start executor with storage backend directly
# Each executor handles storage access appropriately:
- # - InProcessExecutor: uses storage.tools/skills/artifacts directly
+ # - InProcessExecutor: uses storage.tools/workflows/artifacts directly
# - ContainerExecutor: calls storage.get_serializable_access() internally
# - SubprocessExecutor: calls storage.get_serializable_access() internally
await self._executor.start(storage=self._storage)
@@ -348,7 +348,7 @@ async def run(self, code: str, timeout: float | None = None) -> ExecutionResult:
async def reset(self) -> None:
"""Reset the execution environment.
- Clears all user-defined variables but preserves tools, skills, artifacts namespaces.
+ Clears all user-defined variables but preserves tools, workflows, artifacts namespaces.
"""
if self._executor is None:
return
@@ -432,96 +432,100 @@ async def search_tools(self, query: str, limit: int = 10) -> list[dict[str, Any]
return await self._executor.search_tools(query, limit)
# -------------------------------------------------------------------------
- # Skills facade methods
+ # Workflows facade methods
# -------------------------------------------------------------------------
- async def list_skills(self) -> list[dict[str, Any]]:
- """List all skills (refreshes from storage first).
+ async def list_workflows(self) -> list[dict[str, Any]]:
+ """List all workflows (refreshes from storage first).
Returns:
- List of skill summaries (name, description, parameters - no source).
- Use get_skill() to retrieve full source for a specific skill.
+ List of workflow summaries (name, description, parameters - no source).
+ Use get_workflow() to retrieve full source for a specific workflow.
"""
- library = self._storage.get_skill_library()
+ library = self._storage.get_workflow_library()
library.refresh()
- skills = library.list()
- return [self._skill_to_dict(skill, include_source=False) for skill in skills]
+ workflows = library.list()
+ return [self._workflow_to_dict(workflow, include_source=False) for workflow in workflows]
- async def search_skills(self, query: str, limit: int = 5) -> list[dict[str, Any]]:
- """Search skills (refreshes from storage first).
+ async def search_workflows(self, query: str, limit: int = 5) -> list[dict[str, Any]]:
+ """Search workflows (refreshes from storage first).
Args:
query: Natural language search query.
limit: Maximum number of results.
Returns:
- List of matching skill summaries (name, description, parameters - no source).
- Use get_skill() to retrieve full source for a specific skill.
+ List of matching workflow summaries (name, description, parameters - no source).
+ Use get_workflow() to retrieve full source for a specific workflow.
"""
- library = self._storage.get_skill_library()
+ library = self._storage.get_workflow_library()
library.refresh()
- skills = library.search(query, limit=limit)
- return [self._skill_to_dict(skill, include_source=False) for skill in skills]
+ workflows = library.search(query, limit=limit)
+ return [self._workflow_to_dict(workflow, include_source=False) for workflow in workflows]
- async def add_skill(self, name: str, source: str, description: str) -> dict[str, Any]:
- """Create and persist a skill.
+ async def add_workflow(self, name: str, source: str, description: str) -> dict[str, Any]:
+ """Create and persist a workflow.
Args:
- name: Unique skill name (must be valid Python identifier).
+ name: Unique workflow name (must be valid Python identifier).
source: Python source code with def run(...) function.
- description: What the skill does.
+ description: What the workflow does.
Returns:
- Skill metadata dict.
+ Workflow metadata dict.
Raises:
ValueError: If name is invalid or source doesn't define run().
SyntaxError: If source has syntax errors.
"""
- skill = PythonSkill.from_source(name=name, source=source, description=description)
- library = self._storage.get_skill_library()
- library.add(skill)
- return self._skill_to_dict(skill)
+ workflow = PythonWorkflow.from_source(name=name, source=source, description=description)
+ library = self._storage.get_workflow_library()
+ library.add(workflow)
+ return self._workflow_to_dict(workflow)
- async def remove_skill(self, name: str) -> bool:
- """Remove a skill.
+ async def remove_workflow(self, name: str) -> bool:
+ """Remove a workflow.
Args:
- name: Name of the skill to remove.
+ name: Name of the workflow to remove.
Returns:
True if removed, False if not found.
"""
- library = self._storage.get_skill_library()
+ library = self._storage.get_workflow_library()
return library.remove(name)
- async def get_skill(self, name: str) -> dict[str, Any] | None:
- """Get skill by name.
+ async def get_workflow(self, name: str) -> dict[str, Any] | None:
+ """Get workflow by name.
Args:
- name: Skill name.
+ name: Workflow name.
Returns:
- Skill info dict, or None if not found.
+ Workflow info dict, or None if not found.
"""
- library = self._storage.get_skill_library()
+ library = self._storage.get_workflow_library()
library.refresh()
- skill = library.get(name)
- if skill is None:
+ workflow = library.get(name)
+ if workflow is None:
return None
- return self._skill_to_dict(skill)
+ return self._workflow_to_dict(workflow)
- def _skill_to_dict(self, skill: PythonSkill, include_source: bool = True) -> dict[str, Any]:
- """Convert a PythonSkill to a JSON-serializable dict.
+ def _workflow_to_dict(
+ self,
+ workflow: PythonWorkflow,
+ include_source: bool = True,
+ ) -> dict[str, Any]:
+ """Convert a PythonWorkflow to a JSON-serializable dict.
Args:
- skill: The skill to convert.
+ workflow: The workflow to convert.
include_source: Whether to include full source code. False for listings,
- True for get_skill where the caller needs the implementation.
+ True for get_workflow where the caller needs the implementation.
"""
result: dict[str, Any] = {
- "name": skill.name,
- "description": skill.description,
+ "name": workflow.name,
+ "description": workflow.description,
"parameters": [
{
"name": p.name,
@@ -530,11 +534,11 @@ def _skill_to_dict(self, skill: PythonSkill, include_source: bool = True) -> dic
"required": p.required,
"default": p.default,
}
- for p in skill.parameters
+ for p in workflow.parameters
],
}
if include_source:
- result["source"] = skill.source
+ result["source"] = workflow.source
return result
# -------------------------------------------------------------------------
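The renamed session methods above compose as follows. A minimal usage sketch, assuming a hypothetical `session` object that exposes these coroutines; the method names, argument names, and return shapes are taken from the hunks above.

```python
WORKFLOW_SOURCE = '''\
def run(city: str) -> str:
    """Return a greeting for a city."""
    return f"Hello, {city}!"
'''


async def demo(session) -> None:
    # add_workflow requires a valid identifier and a top-level run();
    # it raises ValueError / SyntaxError otherwise, per the docstrings above.
    meta = await session.add_workflow(
        name="greet_city",
        source=WORKFLOW_SOURCE,
        description="Return a greeting for a city.",
    )
    print(meta["name"], [p["name"] for p in meta["parameters"]])

    # Listings and search results omit source by design; fetch it on demand.
    hits = await session.search_workflows("greeting", limit=3)
    if hits:
        full = await session.get_workflow(hits[0]["name"])
        print(full["source"] if full else "not found")

    await session.remove_workflow("greet_city")
```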
diff --git a/src/py_code_mode/skills/store.py b/src/py_code_mode/skills/store.py
deleted file mode 100644
index b316b4e..0000000
--- a/src/py_code_mode/skills/store.py
+++ /dev/null
@@ -1,288 +0,0 @@
-"""Skill persistence layer - stores and retrieves skills without search logic."""
-
-from __future__ import annotations
-
-import json
-import logging
-import re
-from dataclasses import asdict
-from datetime import UTC, datetime
-from pathlib import Path
-from typing import TYPE_CHECKING, Any, Protocol, runtime_checkable
-
-from py_code_mode.errors import StorageReadError
-from py_code_mode.skills.skill import PythonSkill, SkillMetadata
-
-# Valid skill name pattern: Python identifier (letters, digits, underscores)
-_VALID_SKILL_NAME = re.compile(r"^[a-zA-Z_][a-zA-Z0-9_]*$")
-
-if TYPE_CHECKING:
- from redis import Redis
-
-logger = logging.getLogger(__name__)
-
-
-@runtime_checkable
-class SkillStore(Protocol):
- """Protocol for skill persistence. No search logic - just storage."""
-
- def save(self, skill: PythonSkill) -> None:
- """Persist a skill."""
- ...
-
- def load(self, name: str) -> PythonSkill | None:
- """Load a skill by name. Returns None if not found."""
- ...
-
- def delete(self, name: str) -> bool:
- """Delete a skill. Returns True if deleted, False if not found."""
- ...
-
- def list_all(self) -> list[PythonSkill]:
- """List all persisted skills."""
- ...
-
- def exists(self, name: str) -> bool:
- """Check if a skill exists."""
- ...
-
-
-class MemorySkillStore:
- """In-memory skill store for testing and ephemeral use."""
-
- def __init__(self) -> None:
- self._skills: dict[str, PythonSkill] = {}
-
- def save(self, skill: PythonSkill) -> None:
- """Store skill in memory."""
- self._skills[skill.name] = skill
-
- def load(self, name: str) -> PythonSkill | None:
- """Load skill from memory."""
- return self._skills.get(name)
-
- def delete(self, name: str) -> bool:
- """Remove skill from memory."""
- if name in self._skills:
- del self._skills[name]
- return True
- return False
-
- def list_all(self) -> list[PythonSkill]:
- """List all skills in memory."""
- return list(self._skills.values())
-
- def exists(self, name: str) -> bool:
- """Check if skill exists in memory."""
- return name in self._skills
-
-
-class FileSkillStore:
- """File-based skill store. Reads/writes .py files to a directory."""
-
- def __init__(self, directory: Path) -> None:
- """Initialize file store.
-
- Args:
- directory: Directory to store skill files.
- """
- self._directory = directory
- # Ensure directory exists
- self._directory.mkdir(parents=True, exist_ok=True)
-
- def _validate_skill_name(self, name: str) -> None:
- """Validate skill name is a valid Python identifier.
-
- Args:
- name: Skill name to validate.
-
- Raises:
- ValueError: If name is not a valid Python identifier.
- """
- if not _VALID_SKILL_NAME.match(name):
- raise ValueError(
- f"Invalid skill name: {name!r}. "
- "Skill names must be valid Python identifiers "
- "(letters, digits, underscores, cannot start with digit)."
- )
-
- def save(self, skill: PythonSkill) -> None:
- """Write skill source to .py file.
-
- Raises:
- ValueError: If skill name is not a valid Python identifier.
- """
- self._validate_skill_name(skill.name)
- path = self._directory / f"{skill.name}.py"
- path.write_text(skill.source)
-
- def load(self, name: str) -> PythonSkill | None:
- """Load skill from .py file.
-
- Raises:
- ValueError: If skill name is not a valid Python identifier.
- StorageReadError: If skill file exists but cannot be parsed.
- """
- self._validate_skill_name(name)
- path = self._directory / f"{name}.py"
- if not path.exists():
- return None
- try:
- return PythonSkill.from_file(path)
- except FileNotFoundError:
- return None
- except (OSError, SyntaxError, ValueError) as e:
- logger.error(f"Failed to load skill '{name}' from {path}: {type(e).__name__}: {e}")
- raise StorageReadError(f"Failed to load skill '{name}' from {path}: {e}") from e
-
- def delete(self, name: str) -> bool:
- """Delete skill .py file.
-
- Raises:
- ValueError: If skill name is not a valid Python identifier.
- """
- self._validate_skill_name(name)
- path = self._directory / f"{name}.py"
- if path.exists():
- path.unlink()
- return True
- return False
-
- def list_all(self) -> list[PythonSkill]:
- """Load all .py skill files from directory."""
- skills: list[PythonSkill] = []
- for path in self._directory.glob("*.py"):
- # Skip files starting with underscore
- if path.name.startswith("_"):
- continue
- try:
- skill = PythonSkill.from_file(path)
- skills.append(skill)
- except (OSError, SyntaxError, ValueError) as e:
- logger.warning(f"Failed to load skill from {path}: {type(e).__name__}: {e}")
- continue
- return skills
-
- def exists(self, name: str) -> bool:
- """Check if skill .py file exists.
-
- Raises:
- ValueError: If skill name is not a valid Python identifier.
- """
- self._validate_skill_name(name)
- path = self._directory / f"{name}.py"
- return path.exists()
-
-
-class RedisSkillStore:
- """Redis-based skill store. Persists skills as JSON in a Redis hash."""
-
- # Suffix appended to prefix for Redis hash key: {prefix}:__skills__
- HASH_KEY = ":__skills__"
-
- def __init__(self, redis: Redis, prefix: str = "skills") -> None:
- """Initialize Redis store.
-
- Args:
- redis: Redis client instance.
- prefix: Key prefix for the skills hash.
- """
- self._redis = redis
- self._prefix = prefix
-
- def _hash_key(self) -> str:
- """Build the Redis hash key."""
- return f"{self._prefix}{self.HASH_KEY}"
-
- def save(self, skill: PythonSkill) -> None:
- """Serialize and store skill in Redis."""
- data = {
- "name": skill.name,
- "description": skill.description,
- "source": skill.source,
- "parameters": [asdict(p) for p in skill.parameters],
- }
- self._redis.hset(self._hash_key(), skill.name, json.dumps(data))
-
- def save_batch(self, skills: list[PythonSkill]) -> None:
- """Serialize and store multiple skills in Redis using a pipeline."""
- if not skills:
- return
- pipe = self._redis.pipeline()
- for skill in skills:
- data = {
- "name": skill.name,
- "description": skill.description,
- "source": skill.source,
- "parameters": [asdict(p) for p in skill.parameters],
- }
- pipe.hset(self._hash_key(), skill.name, json.dumps(data))
- pipe.execute()
-
- def _deserialize_skill(self, data: dict[str, Any]) -> PythonSkill:
- """Deserialize skill from stored JSON data."""
- required = ("name", "source", "description")
- missing = [k for k in required if k not in data]
- if missing:
- raise ValueError(f"Invalid skill data: missing keys {missing}")
-
- return PythonSkill.from_source(
- name=data["name"],
- source=data["source"],
- description=data["description"],
- metadata=SkillMetadata(
- created_at=datetime.now(UTC),
- created_by="unknown",
- source="redis",
- ),
- )
-
- def load(self, name: str) -> PythonSkill | None:
- """Load skill from Redis by name."""
- value = self._redis.hget(self._hash_key(), name)
- if value is None:
- return None
-
- try:
- if isinstance(value, bytes):
- value = value.decode()
-
- data = json.loads(value)
- return self._deserialize_skill(data)
- except (json.JSONDecodeError, ValueError) as e:
- logger.error(f"Failed to load skill '{name}': {type(e).__name__}: {e}")
- raise StorageReadError(f"Failed to load skill '{name}': {e}") from e
-
- def delete(self, name: str) -> bool:
- """Delete skill from Redis."""
- result = self._redis.hdel(self._hash_key(), name)
- return result > 0
-
- def list_all(self) -> list[PythonSkill]:
- """List all skills from Redis."""
- all_data = self._redis.hgetall(self._hash_key())
- if not all_data:
- return []
-
- skills = []
- for name, value in all_data.items():
- try:
- if isinstance(value, bytes):
- value = value.decode()
- if isinstance(name, bytes):
- name = name.decode()
- data = json.loads(value)
- skills.append(self._deserialize_skill(data))
- except (json.JSONDecodeError, ValueError, SyntaxError, KeyError) as e:
- logger.warning(f"Failed to deserialize skill '{name}': {type(e).__name__}: {e}")
- continue
-
- return skills
-
- def exists(self, name: str) -> bool:
- """Check if skill exists in Redis."""
- return self._redis.hexists(self._hash_key(), name)
-
- def __len__(self) -> int:
- """Return the number of skills in the store."""
- return self._redis.hlen(self._hash_key())
diff --git a/src/py_code_mode/storage/backends.py b/src/py_code_mode/storage/backends.py
index ac3bf46..dd676a0 100644
--- a/src/py_code_mode/storage/backends.py
+++ b/src/py_code_mode/storage/backends.py
@@ -1,4 +1,4 @@
-"""Unified storage backend protocol for skills and artifacts.
+"""Unified storage backend protocol for workflows and artifacts.
This module provides a protocol that unifies storage under a single interface,
enabling swapping between FileStorage and RedisStorage.
@@ -15,28 +15,28 @@
from py_code_mode.artifacts import ArtifactStoreProtocol, FileArtifactStore, RedisArtifactStore
from py_code_mode.execution.protocol import FileStorageAccess, RedisStorageAccess
-from py_code_mode.skills import (
- FileSkillStore,
- RedisSkillStore,
- SkillLibrary,
- SkillStore,
+from py_code_mode.workflows import (
+ FileWorkflowStore,
+ RedisWorkflowStore,
VectorStore,
- create_skill_library,
+ WorkflowLibrary,
+ WorkflowStore,
+ create_workflow_library,
)
# Import ChromaVectorStore at module level for test mocking support
# The actual import in get_vector_store() handles the ImportError gracefully
try:
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
except ImportError:
ChromaVectorStore = None # type: ignore[misc, assignment]
# Import RedisVectorStore at module level for test mocking support
try:
- from py_code_mode.skills.vector_stores.redis_store import (
+ from py_code_mode.workflows.vector_stores.redis_store import (
REDIS_AVAILABLE as REDIS_VECTOR_AVAILABLE,
)
- from py_code_mode.skills.vector_stores.redis_store import (
+ from py_code_mode.workflows.vector_stores.redis_store import (
RedisVectorStore,
)
except ImportError:
@@ -53,7 +53,7 @@
class StorageBackend(Protocol):
"""Protocol for unified storage backend.
- Provides skills and artifacts storage under a single interface.
+ Provides workflows and artifacts storage under a single interface.
Tools and deps are owned by executors (via config), not storage.
"""
@@ -65,10 +65,10 @@ def get_serializable_access(self) -> FileStorageAccess | RedisStorageAccess:
"""
...
- def get_skill_library(self) -> SkillLibrary:
- """Return SkillLibrary for in-process execution.
+ def get_workflow_library(self) -> WorkflowLibrary:
+ """Return WorkflowLibrary for in-process execution.
- This method provides a library of skills loaded from storage for executors.
+ Provides executors with a library of workflows loaded from storage.
"""
...
@@ -81,7 +81,7 @@ def get_artifact_store(self) -> ArtifactStoreProtocol:
class FileStorage:
- """File-based storage using directories for skills and artifacts.
+ """File-based storage using directories for workflows and artifacts.
Tools and deps are owned by executors (via config), not storage.
"""
@@ -92,13 +92,13 @@ def __init__(self, base_path: Path | str) -> None:
"""Initialize file storage.
Args:
- base_path: Base directory for storage. Will create skills/, artifacts/ subdirs.
+ base_path: Base directory for storage. Will create workflows/, artifacts/ subdirs.
"""
self._base_path = Path(base_path) if isinstance(base_path, str) else base_path
self._base_path.mkdir(parents=True, exist_ok=True)
- # Lazy-initialized stores (skills and artifacts only)
- self._skill_library: SkillLibrary | None = None
+ # Lazy-initialized stores (workflows and artifacts only)
+ self._workflow_library: WorkflowLibrary | None = None
self._artifact_store: FileArtifactStore | None = None
self._vector_store: VectorStore | None | object = FileStorage._UNINITIALIZED
@@ -107,11 +107,11 @@ def root(self) -> Path:
"""Get the root storage path."""
return self._base_path
- def _get_skills_path(self) -> Path:
- """Get the skills directory path."""
- skills_path = self._base_path / "skills"
- skills_path.mkdir(parents=True, exist_ok=True)
- return skills_path
+ def _get_workflows_path(self) -> Path:
+ """Get the workflows directory path."""
+ workflows_path = self._base_path / "workflows"
+ workflows_path.mkdir(parents=True, exist_ok=True)
+ return workflows_path
def _get_artifacts_path(self) -> Path:
"""Get the artifacts directory path."""
@@ -141,7 +141,7 @@ def get_vector_store(self) -> VectorStore | None:
self._vector_store = None
else:
try:
- from py_code_mode.skills import Embedder
+ from py_code_mode.workflows import Embedder
vectors_path = self._get_vectors_path()
embedder = Embedder()
@@ -157,19 +157,19 @@ def get_serializable_access(self) -> FileStorageAccess:
vectors_path = base_path / "vectors"
return FileStorageAccess(
- skills_path=base_path / "skills",
+ workflows_path=base_path / "workflows",
artifacts_path=base_path / "artifacts",
vectors_path=vectors_path if vectors_path.exists() else None,
)
- def get_skill_library(self) -> SkillLibrary:
- """Return SkillLibrary for in-process execution."""
- if self._skill_library is None:
- skills_path = self._get_skills_path()
- raw_store = FileSkillStore(skills_path)
+ def get_workflow_library(self) -> WorkflowLibrary:
+ """Return WorkflowLibrary for in-process execution."""
+ if self._workflow_library is None:
+ workflows_path = self._get_workflows_path()
+ raw_store = FileWorkflowStore(workflows_path)
vector_store = self.get_vector_store()
try:
- self._skill_library = create_skill_library(
+ self._workflow_library = create_workflow_library(
store=raw_store,
vector_store=vector_store,
)
@@ -178,14 +178,14 @@ def get_skill_library(self) -> SkillLibrary:
"Semantic search dependencies not available, falling back to MockEmbedder. "
"Install with: pip install sentence-transformers scikit-learn"
)
- from py_code_mode.skills import MockEmbedder
+ from py_code_mode.workflows import MockEmbedder
- self._skill_library = SkillLibrary(
+ self._workflow_library = WorkflowLibrary(
embedder=MockEmbedder(),
store=raw_store,
vector_store=vector_store,
)
- return self._skill_library
+ return self._workflow_library
def get_artifact_store(self) -> ArtifactStoreProtocol:
"""Return artifact store for in-process execution."""
@@ -193,10 +193,10 @@ def get_artifact_store(self) -> ArtifactStoreProtocol:
self._artifact_store = FileArtifactStore(self._get_artifacts_path())
return self._artifact_store
- def get_skill_store(self) -> SkillStore:
- """Return the underlying SkillStore for direct access."""
- skills_path = self._get_skills_path()
- return FileSkillStore(skills_path)
+ def get_workflow_store(self) -> WorkflowStore:
+ """Return the underlying WorkflowStore for direct access."""
+ workflows_path = self._get_workflows_path()
+ return FileWorkflowStore(workflows_path)
def to_bootstrap_config(self) -> dict[str, str]:
"""Serialize storage configuration for subprocess bootstrap.
@@ -213,7 +213,7 @@ def to_bootstrap_config(self) -> dict[str, str]:
class RedisStorage:
- """Redis-based storage for skills and artifacts.
+ """Redis-based storage for workflows and artifacts.
Tools and deps are owned by executors (via config), not storage.
"""
@@ -249,13 +249,18 @@ def __init__(
self._redis = RedisClient.from_url(url)
self._url = url
else:
+ if redis is None:
+ raise ValueError("Redis client must be provided when url is None")
self._redis = redis
self._url = None # Will be reconstructed if needed
self._prefix = prefix
- # Lazy-initialized stores (skills and artifacts only)
- self._skill_library: SkillLibrary | None = None
+ # Lazy-initialized stores (workflows and artifacts only)
+ self._workflow_library: WorkflowLibrary | None = None
self._artifact_store: RedisArtifactStore | None = None
self._vector_store: VectorStore | None | object = RedisStorage._UNINITIALIZED
@@ -309,7 +314,7 @@ def get_vector_store(self) -> VectorStore | None:
self._vector_store = None
else:
try:
- from py_code_mode.skills import Embedder
+ from py_code_mode.workflows import Embedder
embedder = Embedder()
self._vector_store = RedisVectorStore(
@@ -345,18 +350,18 @@ def get_serializable_access(self) -> RedisStorageAccess:
)
return RedisStorageAccess(
redis_url=redis_url,
- skills_prefix=f"{prefix}:skills",
+ workflows_prefix=f"{prefix}:workflows",
artifacts_prefix=f"{prefix}:artifacts",
vectors_prefix=vectors_prefix,
)
- def get_skill_library(self) -> SkillLibrary:
- """Return SkillLibrary for in-process execution."""
- if self._skill_library is None:
- raw_store = RedisSkillStore(self._redis, prefix=f"{self._prefix}:skills")
+ def get_workflow_library(self) -> WorkflowLibrary:
+ """Return WorkflowLibrary for in-process execution."""
+ if self._workflow_library is None:
+ raw_store = RedisWorkflowStore(self._redis, prefix=f"{self._prefix}:workflows")
vector_store = self.get_vector_store()
try:
- self._skill_library = create_skill_library(
+ self._workflow_library = create_workflow_library(
store=raw_store,
vector_store=vector_store,
)
@@ -365,14 +370,14 @@ def get_skill_library(self) -> SkillLibrary:
"Semantic search dependencies not available, falling back to MockEmbedder. "
"Install with: pip install sentence-transformers scikit-learn"
)
- from py_code_mode.skills import MockEmbedder
+ from py_code_mode.workflows import MockEmbedder
- self._skill_library = SkillLibrary(
+ self._workflow_library = WorkflowLibrary(
embedder=MockEmbedder(),
store=raw_store,
vector_store=vector_store,
)
- return self._skill_library
+ return self._workflow_library
def get_artifact_store(self) -> ArtifactStoreProtocol:
"""Return artifact store for in-process execution."""
@@ -382,9 +387,9 @@ def get_artifact_store(self) -> ArtifactStoreProtocol:
)
return self._artifact_store
- def get_skill_store(self) -> SkillStore:
- """Return the underlying SkillStore for direct access."""
- return RedisSkillStore(self._redis, prefix=f"{self._prefix}:skills")
+ def get_workflow_store(self) -> WorkflowStore:
+ """Return the underlying WorkflowStore for direct access."""
+ return RedisWorkflowStore(self._redis, prefix=f"{self._prefix}:workflows")
def to_bootstrap_config(self) -> dict[str, str]:
"""Serialize storage configuration for subprocess bootstrap.
diff --git a/src/py_code_mode/storage/redis_tools.py b/src/py_code_mode/storage/redis_tools.py
index dc00d0b..38f7ef4 100644
--- a/src/py_code_mode/storage/redis_tools.py
+++ b/src/py_code_mode/storage/redis_tools.py
@@ -5,7 +5,7 @@
import json
import logging
from pathlib import Path
-from typing import TYPE_CHECKING, Any
+from typing import TYPE_CHECKING, Any, cast
import yaml
@@ -14,8 +14,8 @@
if TYPE_CHECKING:
from redis import Redis
- from py_code_mode.skills.embeddings import EmbeddingProvider
from py_code_mode.tools import ToolRegistry
+ from py_code_mode.workflows.embeddings import EmbeddingProvider
class RedisToolStore:
@@ -47,7 +47,7 @@ def _index_key(self) -> str:
def __len__(self) -> int:
"""Return number of tools in store."""
- return self._redis.hlen(self._index_key())
+ return cast(int, self._redis.hlen(self._index_key()))
def add(self, name: str, config: dict[str, Any]) -> None:
"""Store tool configuration in Redis.
@@ -67,7 +67,7 @@ def get(self, name: str) -> dict[str, Any] | None:
Returns:
Tool config dict if found, None otherwise.
"""
- value = self._redis.hget(self._index_key(), name)
+ value = cast(str | bytes | None, self._redis.hget(self._index_key(), name))
if value is None:
return None
@@ -82,7 +82,7 @@ def list(self) -> dict[str, dict[str, Any]]:
Returns:
Dict mapping tool name to config.
"""
- all_data = self._redis.hgetall(self._index_key())
+ all_data = cast(dict[str | bytes, str | bytes], self._redis.hgetall(self._index_key()))
if not all_data:
return {}
@@ -105,7 +105,7 @@ def remove(self, name: str) -> bool:
Returns:
True if tool was removed, False if it didn't exist.
"""
- result = self._redis.hdel(self._index_key(), name)
+ result = cast(int, self._redis.hdel(self._index_key(), name))
return result > 0
@classmethod
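The `cast()` calls added above exist because redis-py annotates command results with broad unions (sync value or awaitable); the casts record which branch this synchronous code path actually takes. A standalone sketch of the same narrowing, with a hypothetical helper:

```python
from typing import Any, cast


def tool_count(redis_client: Any, key: str) -> int:
    # Pin the concrete sync return type for the type checker.
    return cast(int, redis_client.hlen(key))
```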
diff --git a/src/py_code_mode/tools/adapters/mcp.py b/src/py_code_mode/tools/adapters/mcp.py
index 090b624..4a050b4 100644
--- a/src/py_code_mode/tools/adapters/mcp.py
+++ b/src/py_code_mode/tools/adapters/mcp.py
@@ -14,7 +14,7 @@
try:
from mcp import JSONRPCError, McpError
- MCP_ERRORS: tuple[type[Exception], ...] = (McpError, JSONRPCError)
+ MCP_ERRORS = (McpError, JSONRPCError)
except ImportError:
MCP_ERRORS = ()
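The fallback above leans on a Python detail: `except` accepts a tuple of exception types, and an empty tuple matches nothing, so the handler is effectively disabled when `mcp` is not installed. A self-contained sketch (`guarded` is hypothetical):

```python
try:
    from mcp import JSONRPCError, McpError

    OPTIONAL_ERRORS = (McpError, JSONRPCError)
except ImportError:
    OPTIONAL_ERRORS = ()


def guarded(fn):
    try:
        return fn()
    except OPTIONAL_ERRORS as exc:  # matches nothing when the tuple is empty
        return f"mcp error: {exc}"
```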
diff --git a/src/py_code_mode/tools/loader.py b/src/py_code_mode/tools/loader.py
index baaae2b..aa29750 100644
--- a/src/py_code_mode/tools/loader.py
+++ b/src/py_code_mode/tools/loader.py
@@ -7,8 +7,8 @@
from pathlib import Path
-from py_code_mode.skills.embeddings import Embedder
from py_code_mode.tools.registry import ToolRegistry
+from py_code_mode.workflows.embeddings import Embedder
async def load_tools_from_path(path: Path) -> ToolRegistry:
diff --git a/src/py_code_mode/tools/namespace.py b/src/py_code_mode/tools/namespace.py
index 621ea41..0192e56 100644
--- a/src/py_code_mode/tools/namespace.py
+++ b/src/py_code_mode/tools/namespace.py
@@ -127,10 +127,10 @@ def __call__(self, **kwargs: Any) -> Any:
Returns coroutine in async context, executes sync otherwise.
When set_loop() has been called, always uses sync execution to support
- calling tools from within synchronously-executed skills.
+ calling tools from within synchronously-executed workflows.
"""
# If we have an explicit loop reference, always use sync path
- # This supports calling tools from sync skill code within async context
+ # This supports calling tools from sync workflow code within async context
if self._loop is not None:
return self.call_sync(**kwargs)
@@ -211,10 +211,10 @@ def __call__(self, **kwargs: Any) -> Any:
This allows both `await tools.x.y()` and `tools.x.y()` to work.
When set_loop() has been called on the parent namespace, always uses
- sync execution to support calling tools from within synchronously-executed skills.
+ sync execution to support calling tools from within synchronously-executed workflows.
"""
# If we have an explicit loop reference, always use sync path
- # This supports calling tools from sync skill code within async context
+ # This supports calling tools from sync workflow code within async context
if self._loop is not None:
return self.call_sync(**kwargs)
diff --git a/src/py_code_mode/tools/registry.py b/src/py_code_mode/tools/registry.py
index 036cffb..b726fe6 100644
--- a/src/py_code_mode/tools/registry.py
+++ b/src/py_code_mode/tools/registry.py
@@ -4,17 +4,20 @@
import logging
from collections.abc import Callable
-from typing import Any, TypeVar
+from typing import TYPE_CHECKING, Any, TypeVar
from py_code_mode.errors import CodeModeError, ToolCallError, ToolNotFoundError
-from py_code_mode.skills import EmbeddingProvider, cosine_similarity
from py_code_mode.tools.adapters.base import ToolAdapter
from py_code_mode.tools.types import Tool
+from py_code_mode.workflows import EmbeddingProvider, cosine_similarity
logger = logging.getLogger(__name__)
# Type alias for MCP adapter to avoid import at module level
-MCPAdapterType = "MCPAdapter"
+if TYPE_CHECKING:
+ from py_code_mode.tools.adapters.mcp import MCPAdapter as MCPAdapterType
+else:
+ MCPAdapterType = Any
async def _load_mcp_adapter(
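The `TYPE_CHECKING` alias introduced above is a standard idiom: the real class is imported only when type checking, while `Any` keeps the name defined at runtime. A self-contained sketch with a stand-in type:

```python
from __future__ import annotations

from typing import TYPE_CHECKING, Any

if TYPE_CHECKING:
    # Evaluated by type checkers only, never imported at runtime.
    from fractions import Fraction as HeavyType  # stand-in for MCPAdapter
else:
    HeavyType = Any


def describe(value: HeavyType) -> str:
    return f"got {value!r}"
```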
diff --git a/src/py_code_mode/types.py b/src/py_code_mode/types.py
index 3f082e5..620c493 100644
--- a/src/py_code_mode/types.py
+++ b/src/py_code_mode/types.py
@@ -137,7 +137,7 @@ class ExecutorConfig:
# Common
default_timeout: float = 30.0
tools_path: str | None = None
- skills_path: str | None = None
+ workflows_path: str | None = None
artifacts_path: str | None = None
# Security policies (backends ignore if unsupported)
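The renamed config field in use; a sketch that assumes only the `ExecutorConfig` fields visible in this hunk.

```python
from py_code_mode.types import ExecutorConfig

config = ExecutorConfig(
    default_timeout=60.0,
    tools_path="./tools",
    workflows_path="./workflows",  # renamed from skills_path in this change
    artifacts_path="./artifacts",
)
```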
diff --git a/src/py_code_mode/skills/__init__.py b/src/py_code_mode/workflows/__init__.py
similarity index 57%
rename from src/py_code_mode/skills/__init__.py
rename to src/py_code_mode/workflows/__init__.py
index b33f8fb..c1adc6c 100644
--- a/src/py_code_mode/skills/__init__.py
+++ b/src/py_code_mode/workflows/__init__.py
@@ -1,26 +1,26 @@
-"""py_code_mode.skills - Skill store, library, and semantic search."""
+"""py_code_mode.workflows - Workflow store, library, and semantic search."""
-from py_code_mode.skills.skill import (
- PythonSkill,
- SkillMetadata,
- SkillParameter,
+from py_code_mode.workflows.store import (
+ FileWorkflowStore,
+ MemoryWorkflowStore,
+ RedisWorkflowStore,
+ WorkflowStore,
)
-from py_code_mode.skills.store import (
- FileSkillStore,
- MemorySkillStore,
- RedisSkillStore,
- SkillStore,
-)
-from py_code_mode.skills.vector_store import (
+from py_code_mode.workflows.vector_store import (
ModelInfo,
SearchResult,
VectorStore,
compute_content_hash,
)
+from py_code_mode.workflows.workflow import (
+ PythonWorkflow,
+ WorkflowMetadata,
+ WorkflowParameter,
+)
# Semantic features require numpy/scikit-learn - optional import
try:
- from py_code_mode.skills.embeddings import (
+ from py_code_mode.workflows.embeddings import (
MODEL_ALIASES,
Embedder,
EmbeddingProvider,
@@ -28,10 +28,10 @@
cosine_similarity,
resolve_model_name,
)
- from py_code_mode.skills.library import (
+ from py_code_mode.workflows.library import (
RankingConfig,
- SkillLibrary,
- create_skill_library,
+ WorkflowLibrary,
+ create_workflow_library,
)
SEMANTIC_AVAILABLE = True
@@ -44,19 +44,19 @@
cosine_similarity = None # type: ignore[assignment]
resolve_model_name = None # type: ignore[assignment]
RankingConfig = None # type: ignore[assignment, misc]
- SkillLibrary = None # type: ignore[assignment, misc]
- create_skill_library = None # type: ignore[assignment]
+ WorkflowLibrary = None # type: ignore[assignment, misc]
+ create_workflow_library = None # type: ignore[assignment]
__all__ = [
# Core types
- "PythonSkill",
- "SkillMetadata",
- "SkillParameter",
+ "PythonWorkflow",
+ "WorkflowMetadata",
+ "WorkflowParameter",
# Stores
- "SkillStore",
- "MemorySkillStore",
- "FileSkillStore",
- "RedisSkillStore",
+ "WorkflowStore",
+ "MemoryWorkflowStore",
+ "FileWorkflowStore",
+ "RedisWorkflowStore",
# VectorStore types
"VectorStore",
"ModelInfo",
@@ -71,6 +71,6 @@
"cosine_similarity",
"resolve_model_name",
"RankingConfig",
- "SkillLibrary",
- "create_skill_library",
+ "WorkflowLibrary",
+ "create_workflow_library",
]
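The `SEMANTIC_AVAILABLE` flag above follows the usual optional-dependency pattern: attempt the import, set a flag, and stub the names on failure. A self-contained sketch of the same shape:

```python
try:
    import numpy as np

    SEMANTIC_AVAILABLE = True
except ImportError:
    np = None  # type: ignore[assignment]
    SEMANTIC_AVAILABLE = False


def mean(values: list[float]) -> float:
    if SEMANTIC_AVAILABLE:
        return float(np.mean(values))
    return sum(values) / len(values)  # pure-Python fallback path
```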
diff --git a/src/py_code_mode/skills/embeddings.py b/src/py_code_mode/workflows/embeddings.py
similarity index 100%
rename from src/py_code_mode/skills/embeddings.py
rename to src/py_code_mode/workflows/embeddings.py
diff --git a/src/py_code_mode/skills/library.py b/src/py_code_mode/workflows/library.py
similarity index 52%
rename from src/py_code_mode/skills/library.py
rename to src/py_code_mode/workflows/library.py
index 53f5b5c..06c42e0 100644
--- a/src/py_code_mode/skills/library.py
+++ b/src/py_code_mode/workflows/library.py
@@ -1,18 +1,18 @@
-"""Skill library with semantic search capabilities."""
+"""Workflow library with semantic search capabilities."""
from __future__ import annotations
from dataclasses import dataclass, field
from typing import TYPE_CHECKING
-from py_code_mode.skills.embeddings import (
+from py_code_mode.workflows.embeddings import (
Embedder,
EmbeddingProvider,
cosine_similarity,
)
-from py_code_mode.skills.skill import PythonSkill
-from py_code_mode.skills.store import SkillStore
-from py_code_mode.skills.vector_store import VectorStore, compute_content_hash
+from py_code_mode.workflows.store import WorkflowStore
+from py_code_mode.workflows.vector_store import VectorStore, compute_content_hash
+from py_code_mode.workflows.workflow import PythonWorkflow
if TYPE_CHECKING:
pass
@@ -22,7 +22,7 @@
class RankingConfig:
"""Configuration for search ranking formula.
- Tune these based on your skill library characteristics.
+ Tune these based on your workflow library characteristics.
"""
description_weight: float = 0.7
@@ -32,16 +32,16 @@ class RankingConfig:
@dataclass
-class SkillLibrary:
- """Skill management with semantic search.
+class WorkflowLibrary:
+ """Workflow management with semantic search.
- The primary interface for working with skills. Provides:
+ The primary interface for working with workflows. Provides:
- Semantic search using embeddings
- - Optional persistence via SkillStore
+ - Optional persistence via WorkflowStore
- Optional VectorStore for embedding caching
- - Skill lifecycle management (add, remove, get, list)
+ - Workflow lifecycle management (add, remove, get, list)
- If a store is provided, skills are persisted there and loaded at
+ If a store is provided, workflows are persisted there and loaded at
construction time. Use refresh() to reload from store.
If a vector_store is provided, embeddings are cached there and
@@ -52,31 +52,31 @@ class SkillLibrary:
"""
embedder: EmbeddingProvider
- store: SkillStore | None = None
+ store: WorkflowStore | None = None
vector_store: VectorStore | None = None
ranking: RankingConfig = field(default_factory=RankingConfig)
- _skills: dict[str, PythonSkill] = field(default_factory=dict)
+ _workflows: dict[str, PythonWorkflow] = field(default_factory=dict)
_description_vectors: dict[str, list[float]] = field(default_factory=dict)
_code_vectors: dict[str, list[float]] = field(default_factory=dict)
def __post_init__(self) -> None:
- """Load and index skills from store if provided."""
+ """Load and index workflows from store if provided."""
if self.store is not None:
self.refresh()
def __len__(self) -> int:
- return len(self._skills)
+ return len(self._workflows)
def refresh(self) -> None:
- """Reload all skills from store and rebuild embedding index.
+ """Reload all workflows from store and rebuild embedding index.
Clears in-memory state and reloads from the store. When a VectorStore
- is configured, content-hash checking in _index_skill() handles caching:
- - New skills: indexed (hash not found)
- - Changed skills: re-indexed (hash mismatch)
- - Unchanged skills: skipped (hash match, fast path)
- - Deleted skills: stale vectors remain in VectorStore but search()
- filters results via _skills dict
+ is configured, content-hash checking in _index_workflow() handles caching:
+ - New workflows: indexed (hash not found)
+ - Changed workflows: re-indexed (hash mismatch)
+ - Unchanged workflows: skipped (hash match, fast path)
+ - Deleted workflows: stale vectors remain in VectorStore but search()
+ filters results via _workflows dict
No-op if no store is configured.
"""
@@ -84,72 +84,72 @@ def refresh(self) -> None:
return
# Clear current in-memory index
- self._skills.clear()
+ self._workflows.clear()
self._description_vectors.clear()
self._code_vectors.clear()
- # Load and index all skills from store
- # Note: VectorStore is NOT cleared - _index_skill() uses content hashes
- # to skip re-embedding unchanged skills
- for skill in self.store.list_all():
- self._index_skill(skill)
+ # Load and index all workflows from store
+ # Note: VectorStore is NOT cleared - _index_workflow() uses content hashes
+ # to skip re-embedding unchanged workflows
+ for workflow in self.store.list_all():
+ self._index_workflow(workflow)
- def _index_skill(self, skill: PythonSkill) -> None:
- """Add skill to local embedding index without touching store.
+ def _index_workflow(self, workflow: PythonWorkflow) -> None:
+ """Add workflow to local embedding index without touching store.
If vector_store is configured, embeddings are cached there with
- content hash checking to skip re-embedding unchanged skills.
+ content hash checking to skip re-embedding unchanged workflows.
"""
- # Always add to _skills dict for get() by name
- self._skills[skill.name] = skill
+ # Always add to _workflows dict for get() by name
+ self._workflows[workflow.name] = workflow
if self.vector_store is not None:
# Use vector_store with content hash checking
- content_hash = compute_content_hash(skill.description, skill.source)
- stored_hash = self.vector_store.get_content_hash(skill.name)
+ content_hash = compute_content_hash(workflow.description, workflow.source)
+ stored_hash = self.vector_store.get_content_hash(workflow.name)
if stored_hash != content_hash:
- # New or changed skill - add to vector_store
+ # New or changed workflow - add to vector_store
self.vector_store.add(
- id=skill.name,
- description=skill.description,
- source=skill.source,
+ id=workflow.name,
+ description=workflow.description,
+ source=workflow.source,
content_hash=content_hash,
)
else:
# Fallback: in-memory vectors
# Embed description
- desc_vec = self.embedder.embed([skill.description])[0]
- self._description_vectors[skill.name] = desc_vec
+ desc_vec = self.embedder.embed([workflow.description])[0]
+ self._description_vectors[workflow.name] = desc_vec
# Embed source code
- code_vec = self.embedder.embed([skill.source])[0]
- self._code_vectors[skill.name] = code_vec
+ code_vec = self.embedder.embed([workflow.source])[0]
+ self._code_vectors[workflow.name] = code_vec
- def add(self, skill: PythonSkill) -> None:
- """Add a skill to the library.
+ def add(self, workflow: PythonWorkflow) -> None:
+ """Add a workflow to the library.
Stores in store (if configured) and indexes embeddings for search.
"""
# Store if configured
if self.store is not None:
- self.store.save(skill)
+ self.store.save(workflow)
# Index locally for semantic search
- self._index_skill(skill)
+ self._index_workflow(workflow)
- def list(self) -> list[PythonSkill]:
- """List all skills."""
- return list(self._skills.values())
+ def list(self) -> list[PythonWorkflow]:
+ """List all workflows."""
+ return list(self._workflows.values())
def remove(self, name: str) -> bool:
- """Remove a skill from the library.
+ """Remove a workflow from the library.
Removes from store (if configured), vector_store (if configured),
and from local embedding index.
Returns:
- True if skill was removed, False if not found.
+ True if workflow was removed, False if not found.
"""
# Remove from store if configured
if self.store is not None:
@@ -160,9 +160,9 @@ def remove(self, name: str) -> bool:
self.vector_store.remove(name)
# Remove from local index
- if name not in self._skills:
+ if name not in self._workflows:
return False
- del self._skills[name]
+ del self._workflows[name]
if name in self._description_vectors:
del self._description_vectors[name]
if name in self._code_vectors:
@@ -173,17 +173,17 @@ def search(
self,
query: str,
limit: int = 10,
- ) -> list[PythonSkill]:
- """Search for skills by semantic similarity.
+ ) -> list[PythonWorkflow]:
+ """Search for workflows by semantic similarity.
Args:
query: Natural language search query.
limit: Maximum results to return.
Returns:
- Skills ranked by combined semantic similarity.
+ Workflows ranked by combined semantic similarity.
"""
- if not self._skills:
+ if not self._workflows:
return []
# Delegate to vector_store if configured
@@ -194,23 +194,26 @@ def search(
desc_weight=self.ranking.description_weight,
code_weight=self.ranking.code_weight,
)
- # Filter out stale vectors: if a skill was deleted from the store
+ # Filter out stale vectors: if a workflow was deleted from the store
# but its vectors remain in VectorStore (refresh doesn't clear VectorStore),
- # exclude it from results by checking _skills membership
- return [self._skills[r.id] for r in results if r.id in self._skills]
+ # exclude it from results by checking _workflows membership
+ return [self._workflows[r.id] for r in results if r.id in self._workflows]
# Fallback: in-memory cosine similarity
# Embed query (uses instruction prefix for retrieval models)
query_vec = self.embedder.embed_query(query)
- # Score each skill
+ # Score each workflow
scored: list[tuple[float, str]] = []
- for name, skill in self._skills.items():
+ for name, workflow in self._workflows.items():
# Cosine similarity with description
desc_sim = cosine_similarity(query_vec, self._description_vectors[name])
# Cosine similarity with code (if code is substantial enough)
- if len(skill.source) >= self.ranking.code_min_length and self.ranking.code_weight > 0:
+ if (
+ len(workflow.source) >= self.ranking.code_min_length
+ and self.ranking.code_weight > 0
+ ):
code_sim = cosine_similarity(query_vec, self._code_vectors[name])
score = (
self.ranking.description_weight * desc_sim + self.ranking.code_weight * code_sim
@@ -226,27 +229,27 @@ def search(
# Sort by score descending
scored.sort(key=lambda x: x[0], reverse=True)
- # Return top skills
- return [self._skills[name] for _, name in scored[:limit]]
+ # Return top workflows
+ return [self._workflows[name] for _, name in scored[:limit]]
- def get(self, name: str) -> PythonSkill | None:
- """Get skill by exact name."""
- return self._skills.get(name)
+ def get(self, name: str) -> PythonWorkflow | None:
+ """Get workflow by exact name."""
+ return self._workflows.get(name)
-def create_skill_library(
- store: SkillStore | None = None,
+def create_workflow_library(
+ store: WorkflowStore | None = None,
embedder: EmbeddingProvider | None = None,
embedding_model: str | None = None,
vector_store: VectorStore | None = None,
-) -> SkillLibrary:
- """Create a skill library, optionally backed by storage.
+) -> WorkflowLibrary:
+ """Create a workflow library, optionally backed by storage.
- This is the recommended way to create a SkillLibrary for production use.
+ This is the recommended way to create a WorkflowLibrary for production use.
Args:
- store: Optional storage (MemorySkillStore, FileSkillStore, RedisSkillStore, etc.).
- If provided, skills are loaded and indexed at creation time.
+ store: Optional storage (MemoryWorkflowStore, FileWorkflowStore, RedisWorkflowStore, etc.).
+ If provided, workflows are loaded and indexed at creation time.
embedder: Optional embedding provider. If not provided, creates Embedder
with the specified embedding_model.
embedding_model: Model alias ("bge-small", "bge-base", "granite") or full
@@ -255,23 +258,23 @@ def create_skill_library(
embeddings are cached there and search is delegated to it.
Returns:
- SkillLibrary configured with the provided store, embedder, and vector_store.
+ WorkflowLibrary configured with the provided store, embedder, and vector_store.
Example:
# In-memory only (default BGE-small model)
- library = create_skill_library()
+ library = create_workflow_library()
# With file-based store
- from py_code_mode.skills.store import FileSkillStore
- store = FileSkillStore(Path("./skills"))
- library = create_skill_library(store=store)
+ from py_code_mode.workflows.store import FileWorkflowStore
+ store = FileWorkflowStore(Path("./workflows"))
+ library = create_workflow_library(store=store)
# With custom model
- library = create_skill_library(embedding_model="bge-base")
+ library = create_workflow_library(embedding_model="bge-base")
# With vector store for embedding caching
- library = create_skill_library(store=store, vector_store=my_vector_store)
+ library = create_workflow_library(store=store, vector_store=my_vector_store)
"""
if embedder is None:
embedder = Embedder(model_name=embedding_model)
- return SkillLibrary(embedder=embedder, store=store, vector_store=vector_store)
+ return WorkflowLibrary(embedder=embedder, store=store, vector_store=vector_store)
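A sketch of the renamed library end to end. `MockEmbedder` is assumed importable from `py_code_mode.workflows` (backends.py above imports it from there); everything else is defined in this file.

```python
from py_code_mode.workflows import (
    MemoryWorkflowStore,
    MockEmbedder,
    PythonWorkflow,
    RankingConfig,
    WorkflowLibrary,
)

library = WorkflowLibrary(
    embedder=MockEmbedder(),
    store=MemoryWorkflowStore(),
    # Weight description similarity more heavily than code similarity.
    ranking=RankingConfig(description_weight=0.8, code_weight=0.2),
)
library.add(
    PythonWorkflow.from_source(
        name="fetch_report",
        source='def run(url: str) -> str:\n    """Fetch."""\n    return url\n',
        description="Download a report from a URL.",
    )
)
print([w.name for w in library.search("download report", limit=5)])
```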
diff --git a/src/py_code_mode/workflows/store.py b/src/py_code_mode/workflows/store.py
new file mode 100644
index 0000000..e3c5f11
--- /dev/null
+++ b/src/py_code_mode/workflows/store.py
@@ -0,0 +1,288 @@
+"""Workflow persistence layer - stores and retrieves workflows without search logic."""
+
+from __future__ import annotations
+
+import json
+import logging
+import re
+from dataclasses import asdict
+from datetime import UTC, datetime
+from pathlib import Path
+from typing import TYPE_CHECKING, Any, Protocol, cast, runtime_checkable
+
+from py_code_mode.errors import StorageReadError
+from py_code_mode.workflows.workflow import PythonWorkflow, WorkflowMetadata
+
+# Valid workflow name pattern: Python identifier (letters, digits, underscores)
+_VALID_WORKFLOW_NAME = re.compile(r"^[a-zA-Z_][a-zA-Z0-9_]*$")
+
+if TYPE_CHECKING:
+ from redis import Redis
+
+logger = logging.getLogger(__name__)
+
+
+@runtime_checkable
+class WorkflowStore(Protocol):
+ """Protocol for workflow persistence. No search logic - just storage."""
+
+ def save(self, workflow: PythonWorkflow) -> None:
+ """Persist a workflow."""
+ ...
+
+ def load(self, name: str) -> PythonWorkflow | None:
+ """Load a workflow by name. Returns None if not found."""
+ ...
+
+ def delete(self, name: str) -> bool:
+ """Delete a workflow. Returns True if deleted, False if not found."""
+ ...
+
+ def list_all(self) -> list[PythonWorkflow]:
+ """List all persisted workflows."""
+ ...
+
+ def exists(self, name: str) -> bool:
+ """Check if a workflow exists."""
+ ...
+
+
+class MemoryWorkflowStore:
+ """In-memory workflow store for testing and ephemeral use."""
+
+ def __init__(self) -> None:
+ self._workflows: dict[str, PythonWorkflow] = {}
+
+ def save(self, workflow: PythonWorkflow) -> None:
+ """Store workflow in memory."""
+ self._workflows[workflow.name] = workflow
+
+ def load(self, name: str) -> PythonWorkflow | None:
+ """Load workflow from memory."""
+ return self._workflows.get(name)
+
+ def delete(self, name: str) -> bool:
+ """Remove workflow from memory."""
+ if name in self._workflows:
+ del self._workflows[name]
+ return True
+ return False
+
+ def list_all(self) -> list[PythonWorkflow]:
+ """List all workflows in memory."""
+ return list(self._workflows.values())
+
+ def exists(self, name: str) -> bool:
+ """Check if workflow exists in memory."""
+ return name in self._workflows
+
+
+class FileWorkflowStore:
+ """File-based workflow store. Reads/writes .py files to a directory."""
+
+ def __init__(self, directory: Path) -> None:
+ """Initialize file store.
+
+ Args:
+ directory: Directory to store workflow files.
+ """
+ self._directory = directory
+ # Ensure directory exists
+ self._directory.mkdir(parents=True, exist_ok=True)
+
+ def _validate_workflow_name(self, name: str) -> None:
+ """Validate workflow name is a valid Python identifier.
+
+ Args:
+ name: Workflow name to validate.
+
+ Raises:
+ ValueError: If name is not a valid Python identifier.
+ """
+ if not _VALID_WORKFLOW_NAME.match(name):
+ raise ValueError(
+ f"Invalid workflow name: {name!r}. "
+ "Workflow names must be valid Python identifiers "
+ "(letters, digits, underscores, cannot start with digit)."
+ )
+
+ def save(self, workflow: PythonWorkflow) -> None:
+ """Write workflow source to .py file.
+
+ Raises:
+ ValueError: If workflow name is not a valid Python identifier.
+ """
+ self._validate_workflow_name(workflow.name)
+ path = self._directory / f"{workflow.name}.py"
+ path.write_text(workflow.source)
+
+ def load(self, name: str) -> PythonWorkflow | None:
+ """Load workflow from .py file.
+
+ Raises:
+ ValueError: If workflow name is not a valid Python identifier.
+ StorageReadError: If workflow file exists but cannot be parsed.
+ """
+ self._validate_workflow_name(name)
+ path = self._directory / f"{name}.py"
+ if not path.exists():
+ return None
+ try:
+ return PythonWorkflow.from_file(path)
+ except FileNotFoundError:
+ return None
+ except (OSError, SyntaxError, ValueError) as e:
+ logger.error(f"Failed to load workflow '{name}' from {path}: {type(e).__name__}: {e}")
+ raise StorageReadError(f"Failed to load workflow '{name}' from {path}: {e}") from e
+
+ def delete(self, name: str) -> bool:
+ """Delete workflow .py file.
+
+ Raises:
+ ValueError: If workflow name is not a valid Python identifier.
+ """
+ self._validate_workflow_name(name)
+ path = self._directory / f"{name}.py"
+ if path.exists():
+ path.unlink()
+ return True
+ return False
+
+ def list_all(self) -> list[PythonWorkflow]:
+ """Load all .py workflow files from directory."""
+ workflows: list[PythonWorkflow] = []
+ for path in self._directory.glob("*.py"):
+ # Skip files starting with underscore
+ if path.name.startswith("_"):
+ continue
+ try:
+ workflow = PythonWorkflow.from_file(path)
+ workflows.append(workflow)
+ except (OSError, SyntaxError, ValueError) as e:
+ logger.warning(f"Failed to load workflow from {path}: {type(e).__name__}: {e}")
+ continue
+ return workflows
+
+ def exists(self, name: str) -> bool:
+ """Check if workflow .py file exists.
+
+ Raises:
+ ValueError: If workflow name is not a valid Python identifier.
+ """
+ self._validate_workflow_name(name)
+ path = self._directory / f"{name}.py"
+ return path.exists()
+
+
+class RedisWorkflowStore:
+ """Redis-based workflow store. Persists workflows as JSON in a Redis hash."""
+
+ # Suffix appended to prefix for Redis hash key: {prefix}:__workflows__
+ HASH_KEY = ":__workflows__"
+
+ def __init__(self, redis: Redis, prefix: str = "workflows") -> None:
+ """Initialize Redis store.
+
+ Args:
+ redis: Redis client instance.
+ prefix: Key prefix for the workflows hash.
+ """
+ self._redis = redis
+ self._prefix = prefix
+
+ def _hash_key(self) -> str:
+ """Build the Redis hash key."""
+ return f"{self._prefix}{self.HASH_KEY}"
+
+ def save(self, workflow: PythonWorkflow) -> None:
+ """Serialize and store workflow in Redis."""
+ data = {
+ "name": workflow.name,
+ "description": workflow.description,
+ "source": workflow.source,
+ "parameters": [asdict(p) for p in workflow.parameters],
+ }
+ self._redis.hset(self._hash_key(), workflow.name, json.dumps(data))
+
+ def save_batch(self, workflows: list[PythonWorkflow]) -> None:
+ """Serialize and store multiple workflows in Redis using a pipeline."""
+ if not workflows:
+ return
+ pipe = self._redis.pipeline()
+ for workflow in workflows:
+ data = {
+ "name": workflow.name,
+ "description": workflow.description,
+ "source": workflow.source,
+ "parameters": [asdict(p) for p in workflow.parameters],
+ }
+ pipe.hset(self._hash_key(), workflow.name, json.dumps(data))
+ pipe.execute()
+
+ def _deserialize_workflow(self, data: dict[str, Any]) -> PythonWorkflow:
+ """Deserialize workflow from stored JSON data."""
+ required = ("name", "source", "description")
+ missing = [k for k in required if k not in data]
+ if missing:
+ raise ValueError(f"Invalid workflow data: missing keys {missing}")
+
+ return PythonWorkflow.from_source(
+ name=data["name"],
+ source=data["source"],
+ description=data["description"],
+ metadata=WorkflowMetadata(
+ created_at=datetime.now(UTC),
+ created_by="unknown",
+ source="redis",
+ ),
+ )
+
+ def load(self, name: str) -> PythonWorkflow | None:
+ """Load workflow from Redis by name."""
+ value = cast(str | bytes | None, self._redis.hget(self._hash_key(), name))
+ if value is None:
+ return None
+
+ try:
+ if isinstance(value, bytes):
+ value = value.decode()
+
+ data = json.loads(value)
+ return self._deserialize_workflow(data)
+ except (json.JSONDecodeError, ValueError) as e:
+ logger.error(f"Failed to load workflow '{name}': {type(e).__name__}: {e}")
+ raise StorageReadError(f"Failed to load workflow '{name}': {e}") from e
+
+ def delete(self, name: str) -> bool:
+ """Delete workflow from Redis."""
+ result = cast(int, self._redis.hdel(self._hash_key(), name))
+ return result > 0
+
+ def list_all(self) -> list[PythonWorkflow]:
+ """List all workflows from Redis."""
+ all_data = cast(dict[str | bytes, str | bytes], self._redis.hgetall(self._hash_key()))
+ if not all_data:
+ return []
+
+ workflows = []
+ for name, value in all_data.items():
+ try:
+ if isinstance(value, bytes):
+ value = value.decode()
+ if isinstance(name, bytes):
+ name = name.decode()
+ data = json.loads(value)
+ workflows.append(self._deserialize_workflow(data))
+ except (json.JSONDecodeError, ValueError, SyntaxError, KeyError) as e:
+ logger.warning(f"Failed to deserialize workflow '{name}': {type(e).__name__}: {e}")
+ continue
+
+ return workflows
+
+ def exists(self, name: str) -> bool:
+ """Check if workflow exists in Redis."""
+ return bool(self._redis.hexists(self._hash_key(), name))
+
+ def __len__(self) -> int:
+ """Return the number of workflows in the store."""
+ return cast(int, self._redis.hlen(self._hash_key()))
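A round-trip sketch for the stores defined above; the directory is a placeholder, and `PythonWorkflow.from_source` comes from `py_code_mode.workflows.workflow` per this file's imports.

```python
from pathlib import Path

from py_code_mode.workflows.store import FileWorkflowStore, MemoryWorkflowStore
from py_code_mode.workflows.workflow import PythonWorkflow

wf = PythonWorkflow.from_source(
    name="ping",
    source='def run() -> str:\n    """Health check."""\n    return "pong"\n',
    description="Trivial health-check workflow.",
)

store = FileWorkflowStore(Path("./workflows"))
store.save(wf)                   # writes ./workflows/ping.py
assert store.exists("ping")
loaded = store.load("ping")      # re-parses the .py into a PythonWorkflow
assert loaded is not None and loaded.name == "ping"
store.delete("ping")

# MemoryWorkflowStore satisfies the same WorkflowStore protocol:
mem = MemoryWorkflowStore()
mem.save(wf)
assert mem.list_all()[0].description == wf.description
```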
diff --git a/src/py_code_mode/skills/vector_store.py b/src/py_code_mode/workflows/vector_store.py
similarity index 64%
rename from src/py_code_mode/skills/vector_store.py
rename to src/py_code_mode/workflows/vector_store.py
index b53d87a..3b141a7 100644
--- a/src/py_code_mode/skills/vector_store.py
+++ b/src/py_code_mode/workflows/vector_store.py
@@ -1,4 +1,4 @@
-"""VectorStore protocol and core types for skill embedding caching."""
+"""VectorStore protocol and core types for workflow embedding caching."""
from __future__ import annotations
@@ -24,7 +24,7 @@ class SearchResult:
"""Result from a VectorStore similarity search.
Attributes:
- id: The skill identifier.
+ id: The workflow identifier.
score: Similarity score (0.0 to 1.0, higher is more similar).
metadata: Additional metadata about the match.
"""
@@ -36,31 +36,31 @@ class SearchResult:
@runtime_checkable
class VectorStore(Protocol):
- """Protocol for vector stores that cache skill embeddings.
+ """Protocol for vector stores that cache workflow embeddings.
- VectorStore implementations persist embeddings for skills, enabling
+ VectorStore implementations persist embeddings for workflows, enabling
fast semantic search without re-embedding on every startup.
"""
def add(self, id: str, description: str, source: str, content_hash: str) -> None:
- """Add or update a skill's embeddings in the store.
+ """Add or update a workflow's embeddings in the store.
Args:
- id: Unique identifier for the skill.
- description: Skill description text to embed.
- source: Skill source code to embed.
+ id: Unique identifier for the workflow.
+ description: Workflow description text to embed.
+ source: Workflow source code to embed.
content_hash: Hash of description + source for change detection.
"""
...
def remove(self, id: str) -> bool:
- """Remove a skill's embeddings from the store.
+ """Remove a workflow's embeddings from the store.
Args:
- id: Unique identifier for the skill.
+ id: Unique identifier for the workflow.
Returns:
- True if the skill was removed, False if it wasn't in the store.
+ True if the workflow was removed, False if it wasn't in the store.
"""
...
@@ -71,7 +71,7 @@ def search(
desc_weight: float,
code_weight: float,
) -> list[SearchResult]:
- """Search for skills by semantic similarity.
+ """Search for workflows by semantic similarity.
Args:
query: Search query text.
@@ -85,34 +85,30 @@ def search(
...
def get_content_hash(self, id: str) -> str | None:
- """Get the stored content hash for a skill.
+ """Get the stored content hash for a workflow.
Args:
- id: Unique identifier for the skill.
+ id: Unique identifier for the workflow.
Returns:
- The content hash if the skill exists, None otherwise.
+ The content hash if the workflow exists, None otherwise.
"""
...
- def get_model_info(self) -> ModelInfo:
- """Get information about the embedding model.
+ def count(self) -> int:
+ """Get the number of workflows indexed in the store.
Returns:
- ModelInfo describing the model used for embeddings.
+ Number of unique workflows with embeddings.
"""
...
- def clear(self) -> None:
- """Remove all embeddings from the store."""
+ def get_model_info(self) -> ModelInfo:
+ """Return embedder model identity for cache validation."""
...
- def count(self) -> int:
- """Get the number of skills indexed in the store.
-
- Returns:
- Number of unique skills with embeddings.
- """
+ def clear(self) -> None:
+ """Remove all stored embeddings."""
...
@@ -122,8 +118,8 @@ def compute_content_hash(description: str, source: str) -> str:
Uses SHA-256 and returns the first 16 characters (8 bytes) of the hex digest.
Args:
- description: Skill description text.
- source: Skill source code.
+ description: Workflow description text.
+ source: Workflow source code.
Returns:
16-character hex string hash.
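A short sketch of the change-detection contract this hash supports: identical inputs hash identically (the skip path in `_index_workflow()`), and any edit to description or source forces re-embedding.

```python
from py_code_mode.workflows.vector_store import compute_content_hash

old = compute_content_hash("Download a report.", "def run(url): ...")
new = compute_content_hash("Download a report.", "def run(url, retries=3): ...")

assert len(old) == 16  # first 16 hex chars of the SHA-256 digest
assert old != new      # changed source -> re-embed
assert old == compute_content_hash("Download a report.", "def run(url): ...")
```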
diff --git a/src/py_code_mode/skills/vector_stores/__init__.py b/src/py_code_mode/workflows/vector_stores/__init__.py
similarity index 69%
rename from src/py_code_mode/skills/vector_stores/__init__.py
rename to src/py_code_mode/workflows/vector_stores/__init__.py
index 35def64..827e714 100644
--- a/src/py_code_mode/skills/vector_stores/__init__.py
+++ b/src/py_code_mode/workflows/vector_stores/__init__.py
@@ -1,10 +1,10 @@
-"""VectorStore implementations for skill embedding caching."""
+"""VectorStore implementations for workflow embedding caching."""
from __future__ import annotations
# ChromaDB is an optional dependency
try:
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
CHROMA_AVAILABLE = True
except ImportError:
@@ -13,7 +13,7 @@
# Redis is an optional dependency
try:
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
REDIS_AVAILABLE = True
except ImportError:
diff --git a/src/py_code_mode/skills/vector_stores/chroma.py b/src/py_code_mode/workflows/vector_stores/chroma.py
similarity index 80%
rename from src/py_code_mode/skills/vector_stores/chroma.py
rename to src/py_code_mode/workflows/vector_stores/chroma.py
index 090bf61..d99a544 100644
--- a/src/py_code_mode/skills/vector_stores/chroma.py
+++ b/src/py_code_mode/workflows/vector_stores/chroma.py
@@ -8,7 +8,7 @@
logger = logging.getLogger(__name__)
-from py_code_mode.skills.vector_store import ModelInfo, SearchResult # noqa: E402
+from py_code_mode.workflows.vector_store import ModelInfo, SearchResult # noqa: E402
try:
import chromadb
@@ -19,7 +19,7 @@
CHROMADB_AVAILABLE = False
if TYPE_CHECKING:
- from py_code_mode.skills.embeddings import EmbeddingProvider
+ from py_code_mode.workflows.embeddings import EmbeddingProvider
# Metadata keys used in ChromaDB collection
@@ -30,7 +30,7 @@
# Metadata keys used in vector documents
_KEY_CONTENT_HASH = "content_hash"
_KEY_TYPE = "type"
-_KEY_SKILL_ID = "skill_id"
+_KEY_WORKFLOW_ID = "workflow_id"
# Vector type suffixes
_TYPE_DESC = "desc"
@@ -40,15 +40,15 @@
class ChromaVectorStore:
"""VectorStore implementation backed by ChromaDB.
- Stores skill embeddings in a persistent ChromaDB collection with two
- vectors per skill: one for description, one for source code. Supports
+ Stores workflow embeddings in a persistent ChromaDB collection with two
+ vectors per workflow: one for description, one for source code. Supports
weighted search combining both similarity scores.
Model changes are detected via stored ModelInfo metadata. When the model
changes (different dimension, name, or version), the collection is cleared.
"""
- COLLECTION_NAME = "skills"
+ COLLECTION_NAME = "workflows"
def __init__(self, path: Path, embedder: EmbeddingProvider) -> None:
"""Initialize ChromaVectorStore.
@@ -122,20 +122,20 @@ def _validate_or_clear_model(self) -> None:
}
)
- def _desc_id(self, skill_id: str) -> str:
+ def _desc_id(self, workflow_id: str) -> str:
"""Build vector ID for description embedding."""
- return f"{skill_id}:{_TYPE_DESC}"
+ return f"{workflow_id}:{_TYPE_DESC}"
- def _code_id(self, skill_id: str) -> str:
+ def _code_id(self, workflow_id: str) -> str:
"""Build vector ID for code embedding."""
- return f"{skill_id}:{_TYPE_CODE}"
+ return f"{workflow_id}:{_TYPE_CODE}"
def add(self, id: str, description: str, source: str, content_hash: str) -> None:
- """Add or update a skill's embeddings.
+ """Add or update a workflow's embeddings.
- If the skill already exists with the same content_hash, this is a no-op.
+ If the workflow already exists with the same content_hash, this is a no-op.
"""
- # Check if skill already exists with same hash (skip re-embedding)
+ # Check if workflow already exists with same hash (skip re-embedding)
existing_hash = self.get_content_hash(id)
if existing_hash == content_hash:
return
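
The early return above encodes the change-detection contract: an identical content hash means no re-embedding. A minimal sketch of a compatible hash; the sha256 recipe is an assumption, only the "same hash means skip" behavior appears in the diff:

```python
import hashlib

def content_hash(description: str, source: str) -> str:
    # Any stable digest over description + source satisfies the contract
    return hashlib.sha256(f"{description}\0{source}".encode()).hexdigest()

h1 = content_hash("Double a number", "async def run(n: int) -> int:\n    return n * 2")
h2 = content_hash("Double a number", "async def run(n: int) -> int:\n    return n * 2")
assert h1 == h2  # add() would return early on the second call
```
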
@@ -153,22 +153,22 @@ def add(self, id: str, description: str, source: str, content_hash: str) -> None
{
_KEY_CONTENT_HASH: content_hash,
_KEY_TYPE: _TYPE_DESC,
- _KEY_SKILL_ID: id,
+ _KEY_WORKFLOW_ID: id,
},
{
_KEY_CONTENT_HASH: content_hash,
_KEY_TYPE: _TYPE_CODE,
- _KEY_SKILL_ID: id,
+ _KEY_WORKFLOW_ID: id,
},
],
)
def remove(self, id: str) -> bool:
- """Remove a skill's embeddings.
+ """Remove a workflow's embeddings.
- Returns True if the skill was removed, False if it wasn't found.
+ Returns True if the workflow was removed, False if it wasn't found.
"""
- # Check if skill exists
+ # Check if workflow exists
if self.get_content_hash(id) is None:
return False
@@ -183,7 +183,7 @@ def search(
desc_weight: float = 0.7,
code_weight: float = 0.3,
) -> list[SearchResult]:
- """Search for skills by semantic similarity.
+ """Search for workflows by semantic similarity.
Searches both description and code vectors, combining scores with
the given weights. Returns results sorted by combined score.
@@ -203,8 +203,8 @@ def search(
include=["metadatas", "distances"],
)
- # Process results - combine desc and code scores per skill
- skill_scores: dict[str, dict[str, float]] = {}
+ # Process results - combine desc and code scores per workflow
+ workflow_scores: dict[str, dict[str, float]] = {}
if results["ids"] and results["ids"][0]:
ids = results["ids"][0]
@@ -219,24 +219,24 @@ def search(
# ChromaDB cosine distance = 1 - similarity for normalized vectors
similarity = max(0.0, min(1.0, 1.0 - distance))
- skill_id = metadata.get(_KEY_SKILL_ID, doc_id.rsplit(":", 1)[0])
+ workflow_id = metadata.get(_KEY_WORKFLOW_ID, doc_id.rsplit(":", 1)[0])
doc_type = metadata.get(_KEY_TYPE)
- if skill_id not in skill_scores:
- skill_scores[skill_id] = {"desc": 0.0, "code": 0.0}
+ if workflow_id not in workflow_scores:
+ workflow_scores[workflow_id] = {"desc": 0.0, "code": 0.0}
if doc_type == _TYPE_DESC:
- skill_scores[skill_id]["desc"] = similarity
+ workflow_scores[workflow_id]["desc"] = similarity
elif doc_type == _TYPE_CODE:
- skill_scores[skill_id]["code"] = similarity
+ workflow_scores[workflow_id]["code"] = similarity
# Combine scores and build results
search_results: list[SearchResult] = []
- for skill_id, scores in skill_scores.items():
+ for workflow_id, scores in workflow_scores.items():
combined_score = scores["desc"] * desc_weight + scores["code"] * code_weight
search_results.append(
SearchResult(
- id=skill_id,
+ id=workflow_id,
score=combined_score,
metadata={},
)
@@ -247,7 +247,7 @@ def search(
return search_results[:limit]
def get_content_hash(self, id: str) -> str | None:
- """Get the stored content hash for a skill."""
+ """Get the stored content hash for a workflow."""
try:
result = self._collection.get(
ids=[self._desc_id(id)],
@@ -257,7 +257,7 @@ def get_content_hash(self, id: str) -> str | None:
metadata = result["metadatas"][0]
return metadata.get(_KEY_CONTENT_HASH)
except (KeyError, IndexError):
- # Malformed result structure - skill doesn't exist
+ # Malformed result structure - workflow doesn't exist
return None
except Exception as e:
logger.debug(f"Failed to get content hash for {id}: {e}")
@@ -275,10 +275,10 @@ def clear(self) -> None:
self._collection.delete(ids=all_data["ids"])
def count(self) -> int:
- """Get the number of skills indexed.
+ """Get the number of workflows indexed.
- Returns the number of unique skills, not vector count.
- Each skill has 2 vectors (desc + code), so we divide by 2.
+ Returns the number of unique workflows, not vector count.
+ Each workflow has 2 vectors (desc + code), so we divide by 2.
"""
vector_count = self._collection.count()
return vector_count // 2
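
For reference, the weighted scoring that `search()` applies across the two vectors, as a standalone sketch using the defaults from the signature above (`desc_weight=0.7`, `code_weight=0.3`); ChromaDB cosine distance converts to similarity as `1 - distance` for normalized vectors:

```python
def combine_scores(
    desc_distance: float,
    code_distance: float,
    desc_weight: float = 0.7,
    code_weight: float = 0.3,
) -> float:
    # Clamp each similarity to [0, 1], then blend with the given weights
    desc_sim = max(0.0, min(1.0, 1.0 - desc_distance))
    code_sim = max(0.0, min(1.0, 1.0 - code_distance))
    return desc_sim * desc_weight + code_sim * code_weight

# A close description match (distance 0.1) with loosely related code (0.6):
# 0.9 * 0.7 + 0.4 * 0.3 = 0.75
assert abs(combine_scores(0.1, 0.6) - 0.75) < 1e-9
```
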
diff --git a/src/py_code_mode/skills/vector_stores/redis_store.py b/src/py_code_mode/workflows/vector_stores/redis_store.py
similarity index 78%
rename from src/py_code_mode/skills/vector_stores/redis_store.py
rename to src/py_code_mode/workflows/vector_stores/redis_store.py
index 9373c01..970ba45 100644
--- a/src/py_code_mode/skills/vector_stores/redis_store.py
+++ b/src/py_code_mode/workflows/vector_stores/redis_store.py
@@ -10,11 +10,11 @@
logger = logging.getLogger(__name__)
-# Skill ID validation
-_VALID_SKILL_ID = re.compile(r"^[a-zA-Z_][a-zA-Z0-9_]*$")
+# Workflow ID validation
+_VALID_WORKFLOW_ID = re.compile(r"^[a-zA-Z_][a-zA-Z0-9_]*$")
_MAX_ID_LENGTH = 128
-from py_code_mode.skills.vector_store import ModelInfo, SearchResult # noqa: E402
+from py_code_mode.workflows.vector_store import ModelInfo, SearchResult # noqa: E402
try:
import redis.exceptions
@@ -30,7 +30,7 @@
REDIS_AVAILABLE = False
if TYPE_CHECKING:
- from py_code_mode.skills.embeddings import EmbeddingProvider
+ from py_code_mode.workflows.embeddings import EmbeddingProvider
# Metadata keys
@@ -42,13 +42,13 @@
_FIELD_DESC_VECTOR = "desc_vector"
_FIELD_CODE_VECTOR = "code_vector"
_FIELD_CONTENT_HASH = "content_hash"
-_FIELD_SKILL_ID = "skill_id"
+_FIELD_WORKFLOW_ID = "workflow_id"
class RedisVectorStore:
"""VectorStore implementation backed by Redis with RediSearch.
- Stores skill embeddings in Redis using RediSearch vector fields. Each skill
+ Stores workflow embeddings in Redis using RediSearch vector fields. Each workflow
has two vectors (description and code) stored in a single hash. Supports
weighted search combining both similarity scores.
@@ -63,7 +63,7 @@ def __init__(
redis: Redis,
embedder: EmbeddingProvider,
prefix: str = "vectors",
- index_name: str = "skills_idx",
+ index_name: str = "workflows_idx",
) -> None:
"""Initialize RedisVectorStore.
@@ -71,7 +71,7 @@ def __init__(
redis: Connected Redis client.
embedder: Embedding provider for generating vectors.
prefix: Key prefix for stored documents (default: "vectors").
- index_name: RediSearch index name (default: "skills_idx").
+ index_name: RediSearch index name (default: "workflows_idx").
Raises:
ImportError: If redis is not installed.
@@ -189,7 +189,7 @@ def _ensure_index_exists(self) -> None:
},
),
TextField(_FIELD_CONTENT_HASH),
- TagField(_FIELD_SKILL_ID),
+ TagField(_FIELD_WORKFLOW_ID),
)
definition = IndexDefinition(
@@ -202,28 +202,30 @@ def _ensure_index_exists(self) -> None:
definition=definition,
)
- def _doc_key(self, skill_id: str) -> str:
- """Build Redis key for a skill document."""
- return f"{self._doc_prefix}:{skill_id}"
+ def _doc_key(self, workflow_id: str) -> str:
+ """Build Redis key for a workflow document."""
+ return f"{self._doc_prefix}:{workflow_id}"
- def _validate_skill_id(self, skill_id: str) -> None:
- """Validate skill ID format.
+ def _validate_workflow_id(self, workflow_id: str) -> None:
+ """Validate workflow ID format.
Args:
- skill_id: The skill ID to validate.
+ workflow_id: The workflow ID to validate.
Raises:
- ValueError: If skill ID is empty, too long, or has invalid format.
+ ValueError: If workflow ID is empty, too long, or has invalid format.
"""
- if not skill_id or len(skill_id) > _MAX_ID_LENGTH:
- raise ValueError(f"Invalid skill ID length: {len(skill_id) if skill_id else 0}")
- if not _VALID_SKILL_ID.match(skill_id):
- raise ValueError(f"Invalid skill ID format: {skill_id!r}")
+ if not workflow_id or len(workflow_id) > _MAX_ID_LENGTH:
+ raise ValueError(
+ f"Invalid workflow ID length: {len(workflow_id) if workflow_id else 0}"
+ )
+ if not _VALID_WORKFLOW_ID.match(workflow_id):
+ raise ValueError(f"Invalid workflow ID format: {workflow_id!r}")
# Defense-in-depth: explicit check for characters that would break Redis keys
redis_unsafe = frozenset(":{}[]")
- if any(c in skill_id for c in redis_unsafe):
- raise ValueError(f"Skill ID contains unsafe characters: {skill_id!r}")
+ if any(c in workflow_id for c in redis_unsafe):
+ raise ValueError(f"Workflow ID contains unsafe characters: {workflow_id!r}")
def _vector_to_bytes(self, vector: list[float]) -> bytes:
"""Convert vector to bytes for Redis storage.
@@ -244,22 +246,22 @@ def _vector_to_bytes(self, vector: list[float]) -> bytes:
return np.array(vector, dtype=np.float32).tobytes()
def add(self, id: str, description: str, source: str, content_hash: str) -> None:
- """Add or update a skill's embeddings.
+ """Add or update a workflow's embeddings.
- If the skill already exists with the same content_hash, this is a no-op.
+ If the workflow already exists with the same content_hash, this is a no-op.
Args:
- id: Unique identifier for the skill.
- description: Skill description text to embed.
- source: Skill source code to embed.
+ id: Unique identifier for the workflow.
+ description: Workflow description text to embed.
+ source: Workflow source code to embed.
content_hash: Hash of description + source for change detection.
Raises:
- ValueError: If skill ID format is invalid.
+ ValueError: If workflow ID format is invalid.
"""
- self._validate_skill_id(id)
+ self._validate_workflow_id(id)
- # Check if skill already exists with same hash (skip re-embedding)
+ # Check if workflow already exists with same hash (skip re-embedding)
existing_hash = self.get_content_hash(id)
if existing_hash == content_hash:
return
@@ -276,25 +278,25 @@ def add(self, id: str, description: str, source: str, content_hash: str) -> None
_FIELD_DESC_VECTOR: self._vector_to_bytes(desc_vector),
_FIELD_CODE_VECTOR: self._vector_to_bytes(code_vector),
_FIELD_CONTENT_HASH: content_hash,
- _FIELD_SKILL_ID: id,
+ _FIELD_WORKFLOW_ID: id,
},
)
def remove(self, id: str) -> bool:
- """Remove a skill's embeddings.
+ """Remove a workflow's embeddings.
Args:
- id: Unique identifier for the skill.
+ id: Unique identifier for the workflow.
Returns:
- True if the skill was removed, False if it wasn't in the store.
+ True if the workflow was removed, False if it wasn't in the store.
Raises:
- ValueError: If skill ID format is invalid.
+ ValueError: If workflow ID format is invalid.
"""
- self._validate_skill_id(id)
+ self._validate_workflow_id(id)
- # Check if skill exists
+ # Check if workflow exists
if self.get_content_hash(id) is None:
return False
@@ -309,7 +311,7 @@ def search(
desc_weight: float = 0.7,
code_weight: float = 0.3,
) -> list[SearchResult]:
- """Search for skills by semantic similarity.
+ """Search for workflows by semantic similarity.
Searches both description and code vectors, combining scores with
the given weights. Returns results sorted by combined score.
@@ -340,30 +342,30 @@ def search(
# Query for code similarity
code_scores = self._knn_search(_FIELD_CODE_VECTOR, query_bytes, fetch_limit, "code_score")
- # Combine scores per skill
- skill_scores: dict[str, dict[str, float]] = {}
+ # Combine scores per workflow
+ workflow_scores: dict[str, dict[str, float]] = {}
- for skill_id, distance in desc_scores.items():
+ for workflow_id, distance in desc_scores.items():
# RediSearch cosine distance: 0 = identical, 2 = opposite
# Convert to similarity: 1 - (distance / 2)
similarity = max(0.0, min(1.0, 1.0 - (distance / 2.0)))
- if skill_id not in skill_scores:
- skill_scores[skill_id] = {"desc": 0.0, "code": 0.0}
- skill_scores[skill_id]["desc"] = similarity
+ if workflow_id not in workflow_scores:
+ workflow_scores[workflow_id] = {"desc": 0.0, "code": 0.0}
+ workflow_scores[workflow_id]["desc"] = similarity
- for skill_id, distance in code_scores.items():
+ for workflow_id, distance in code_scores.items():
similarity = max(0.0, min(1.0, 1.0 - (distance / 2.0)))
- if skill_id not in skill_scores:
- skill_scores[skill_id] = {"desc": 0.0, "code": 0.0}
- skill_scores[skill_id]["code"] = similarity
+ if workflow_id not in workflow_scores:
+ workflow_scores[workflow_id] = {"desc": 0.0, "code": 0.0}
+ workflow_scores[workflow_id]["code"] = similarity
# Build results with combined scores
results: list[SearchResult] = []
- for skill_id, scores in skill_scores.items():
+ for workflow_id, scores in workflow_scores.items():
combined_score = scores["desc"] * desc_weight + scores["code"] * code_weight
results.append(
SearchResult(
- id=skill_id,
+ id=workflow_id,
score=combined_score,
metadata={},
)
@@ -385,12 +387,12 @@ def _knn_search(
score_alias: Alias for the distance score in results.
Returns:
- Dict mapping skill_id to distance score.
+ Dict mapping workflow_id to distance score.
"""
query_str = f"*=>[KNN {limit} @{field} $vec AS {score_alias}]"
q = (
Query(query_str)
- .return_fields(_FIELD_SKILL_ID, score_alias)
+ .return_fields(_FIELD_WORKFLOW_ID, score_alias)
.sort_by(score_alias)
.dialect(2)
)
@@ -406,37 +408,37 @@ def _knn_search(
scores: dict[str, float] = {}
for doc in results.docs:
- skill_id = getattr(doc, _FIELD_SKILL_ID, None)
+ workflow_id = getattr(doc, _FIELD_WORKFLOW_ID, None)
score = getattr(doc, score_alias, None)
- if skill_id is None or score is None:
+ if workflow_id is None or score is None:
continue
# Handle bytes vs string
- if isinstance(skill_id, bytes):
- skill_id = skill_id.decode()
+ if isinstance(workflow_id, bytes):
+ workflow_id = workflow_id.decode()
if isinstance(score, bytes):
score = float(score.decode())
else:
score = float(score)
- scores[skill_id] = score
+ scores[workflow_id] = score
return scores
def get_content_hash(self, id: str) -> str | None:
- """Get the stored content hash for a skill.
+ """Get the stored content hash for a workflow.
Args:
- id: Unique identifier for the skill.
+ id: Unique identifier for the workflow.
Returns:
- The content hash if the skill exists, None otherwise.
+ The content hash if the workflow exists, None otherwise.
Raises:
- ValueError: If skill ID format is invalid.
+ ValueError: If workflow ID format is invalid.
"""
- self._validate_skill_id(id)
+ self._validate_workflow_id(id)
value = self._redis.hget(self._doc_key(id), _FIELD_CONTENT_HASH)
if value is None:
return None
@@ -477,12 +479,12 @@ def clear(self) -> None:
self._ensure_index_exists()
def count(self) -> int:
- """Get the number of skills indexed in the store.
+ """Get the number of workflows indexed in the store.
Returns:
- Number of unique skills with embeddings.
+ Number of unique workflows with embeddings.
"""
- # Count documents in the index (each skill is one document)
+ # Count documents in the index (each workflow is one document)
try:
info = self._redis.ft(self._index_name).info()
# info is a dict-like object, num_docs gives total documents
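
Two mechanics in this file are easy to miss amid the rename noise: vectors are stored as raw float32 bytes, and KNN queries alias the cosine distance, which is then mapped to a similarity. A runnable sketch, with field and alias names taken from the code above:

```python
import numpy as np

def vector_to_bytes(vector: list[float]) -> bytes:
    # RediSearch vector fields take the raw float32 buffer
    return np.array(vector, dtype=np.float32).tobytes()

assert len(vector_to_bytes([0.1, 0.2, 0.3])) == 3 * 4  # three float32 values

# The query string _knn_search() builds for limit=10 on the description field:
query_str = "*=>[KNN 10 @desc_vector $vec AS desc_score]"

def to_similarity(distance: float) -> float:
    # RediSearch cosine distance spans 0 (identical) to 2 (opposite)
    return max(0.0, min(1.0, 1.0 - distance / 2.0))

assert to_similarity(0.0) == 1.0 and to_similarity(2.0) == 0.0
```
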
diff --git a/src/py_code_mode/skills/skill.py b/src/py_code_mode/workflows/workflow.py
similarity index 77%
rename from src/py_code_mode/skills/skill.py
rename to src/py_code_mode/workflows/workflow.py
index aaa380c..d8c9e20 100644
--- a/src/py_code_mode/skills/skill.py
+++ b/src/py_code_mode/workflows/workflow.py
@@ -1,4 +1,4 @@
-"""Skills system - Python skills with IDE support."""
+"""Workflows system - Python workflows with IDE support."""
from __future__ import annotations
@@ -20,15 +20,15 @@
@dataclass
-class SkillMetadata:
- """Metadata about skill creation and origin."""
+class WorkflowMetadata:
+ """Metadata about workflow creation and origin."""
created_at: datetime
created_by: str # "agent" or "human"
source: str # "file", "redis", "runtime"
@classmethod
- def now(cls, created_by: str = "agent", source: str = "runtime") -> SkillMetadata:
+ def now(cls, created_by: str = "agent", source: str = "runtime") -> WorkflowMetadata:
"""Create metadata with current timestamp."""
return cls(
created_at=datetime.now(UTC),
@@ -38,8 +38,8 @@ def now(cls, created_by: str = "agent", source: str = "runtime") -> SkillMetadat
@dataclass
-class SkillParameter:
- """A parameter for a skill."""
+class WorkflowParameter:
+ """A parameter for a workflow."""
name: str
type: str
@@ -59,11 +59,11 @@ class SkillParameter:
}
# Special parameters that are injected, not user-provided
-_INJECTED_PARAMS = {"tools", "skills", "artifacts"}
+_INJECTED_PARAMS = {"tools", "workflows", "artifacts", "deps"}
-def _extract_parameters(func: Callable[..., Any], name: str) -> list[SkillParameter]:
- """Extract SkillParameter list from a function's signature."""
+def _extract_parameters(func: Callable[..., Any], name: str) -> list[WorkflowParameter]:
+ """Extract WorkflowParameter list from a function's signature."""
sig = inspect.signature(func)
try:
type_hints = get_type_hints(func)
@@ -85,7 +85,7 @@ def _extract_parameters(func: Callable[..., Any], name: str) -> list[SkillParame
default = param.default if has_default else None
parameters.append(
- SkillParameter(
+ WorkflowParameter(
name=param_name,
type=type_str,
description="",
@@ -97,8 +97,8 @@ def _extract_parameters(func: Callable[..., Any], name: str) -> list[SkillParame
@dataclass
-class PythonSkill:
- """A skill defined as a Python module with run() entrypoint.
+class PythonWorkflow:
+ """A workflow defined as a Python module with run() entrypoint.
Provides full IDE support (syntax highlighting, intellisense)
and exposes source code for agent inspection and adaptation.
@@ -106,10 +106,10 @@ class PythonSkill:
name: str
description: str
- parameters: list[SkillParameter]
+ parameters: list[WorkflowParameter]
source: str
_func: Callable[..., Any] = field(repr=False)
- metadata: SkillMetadata | None = None
+ metadata: WorkflowMetadata | None = None
@classmethod
def from_source(
@@ -117,18 +117,18 @@ def from_source(
name: str,
source: str,
description: str = "",
- metadata: SkillMetadata | None = None,
- ) -> PythonSkill:
- """Create a PythonSkill from source code string.
+ metadata: WorkflowMetadata | None = None,
+ ) -> PythonWorkflow:
+ """Create a PythonWorkflow from source code string.
Args:
- name: Skill name (must be valid Python identifier).
+ name: Workflow name (must be valid Python identifier).
source: Python source code with def run(...) function.
- description: What the skill does.
+ description: What the workflow does.
metadata: Optional creation metadata.
Returns:
- PythonSkill instance.
+ PythonWorkflow instance.
Raises:
ValueError: If name is invalid or code doesn't define run().
@@ -136,18 +136,18 @@ def from_source(
"""
# Validate name is valid identifier
if not name.isidentifier():
- raise ValueError(f"Invalid skill name: {name!r} (must be valid Python identifier)")
+ raise ValueError(f"Invalid workflow name: {name!r} (must be valid Python identifier)")
- # Reserved names that would shadow SkillsNamespace methods
+ # Reserved names that would shadow WorkflowsNamespace methods
reserved = {"list", "search", "get", "invoke", "create", "delete"}
if name in reserved:
- raise ValueError(f"Reserved skill name: {name!r}")
+ raise ValueError(f"Reserved workflow name: {name!r}")
# Parse and validate syntax
try:
tree = ast.parse(source)
except SyntaxError as e:
- raise SyntaxError(f"Syntax error in skill code: {e}")
+ raise SyntaxError(f"Syntax error in workflow code: {e}")
has_async_run = False
has_sync_run = False
@@ -159,13 +159,13 @@ def from_source(
has_sync_run = True
if has_sync_run and not has_async_run:
- raise ValueError("Skill must define 'async def run()', not 'def run()'")
+ raise ValueError("Workflow must define 'async def run()', not 'def run()'")
if not has_async_run:
- raise ValueError("Skill must define an 'async def run()' function")
+ raise ValueError("Workflow must define an 'async def run()' function")
# Compile and execute to get the function
namespace: dict[str, Any] = {}
- _run_code(compile(tree, f"<skill:{name}>", "exec"), namespace)
+ _run_code(compile(tree, f"<workflow:{name}>", "exec"), namespace)
func = namespace.get("run")
if not callable(func):
@@ -191,12 +191,12 @@ def from_source(
parameters=parameters,
source=source,
_func=func,
- metadata=metadata or SkillMetadata.now(),
+ metadata=metadata or WorkflowMetadata.now(),
)
@classmethod
- def from_file(cls, path: Path) -> PythonSkill:
- """Load a Python skill from a .py file.
+ def from_file(cls, path: Path) -> PythonWorkflow:
+ """Load a Python workflow from a .py file.
The file must have an async def run() function as entrypoint.
Parameters are extracted from the function signature.
@@ -208,7 +208,7 @@ def from_file(cls, path: Path) -> PythonSkill:
try:
tree = ast.parse(source)
except SyntaxError as e:
- raise SyntaxError(f"Syntax error in skill {path}: {e}")
+ raise SyntaxError(f"Syntax error in workflow {path}: {e}")
has_async_run = False
has_sync_run = False
@@ -220,9 +220,9 @@ def from_file(cls, path: Path) -> PythonSkill:
has_sync_run = True
if has_sync_run and not has_async_run:
- raise ValueError(f"Skill {path} must define 'async def run()', not 'def run()'")
+ raise ValueError(f"Workflow {path} must define 'async def run()', not 'def run()'")
if not has_async_run:
- raise ValueError(f"Skill {path} must define an 'async def run()' function")
+ raise ValueError(f"Workflow {path} must define an 'async def run()' function")
# Load module dynamically
spec = importlib.util.spec_from_file_location(path.stem, path)
@@ -246,7 +246,7 @@ def from_file(cls, path: Path) -> PythonSkill:
parameters=parameters,
source=source,
_func=func,
- metadata=SkillMetadata(
+ metadata=WorkflowMetadata(
created_at=datetime.now(UTC),
created_by="human",
source="file",
@@ -254,7 +254,7 @@ def from_file(cls, path: Path) -> PythonSkill:
)
async def invoke(self, **kwargs: Any) -> Any:
- """Invoke the skill with given parameters.
+ """Invoke the workflow with given parameters.
Awaits the async run() function.
"""
diff --git a/tests/conftest.py b/tests/conftest.py
index 21c83ec..5c6c8cf 100644
--- a/tests/conftest.py
+++ b/tests/conftest.py
@@ -25,7 +25,7 @@
os.environ["ACCELERATE_DISABLE"] = "1"
# Singleton Embedder to avoid OOM from multiple model loads in parallel xdist workers
-from py_code_mode.skills.embeddings import Embedder
+from py_code_mode.workflows.embeddings import Embedder
_SHARED_EMBEDDER = None
_original_embedder_init = Embedder.__init__
@@ -716,8 +716,8 @@ def temp_storage_dir(tmp_path: Any) -> Any:
tools_dir = storage_root / "tools"
tools_dir.mkdir()
- skills_dir = storage_root / "skills"
- skills_dir.mkdir()
+ workflows_dir = storage_root / "workflows"
+ workflows_dir.mkdir()
artifacts_dir = storage_root / "artifacts"
artifacts_dir.mkdir()
@@ -750,8 +750,8 @@ def sample_tool_yaml() -> str:
@pytest.fixture
-def sample_skill_source() -> str:
- """Sample skill source code for testing."""
+def sample_workflow_source() -> str:
+ """Sample workflow source code for testing."""
return '''"""Double a number."""
async def run(n: int) -> int:
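
The shared-embedder setup is only partially visible in the conftest hunk above. A sketch of one way such a singleton `__init__` patch is typically completed; the actual wrapper body lies outside this hunk, so treat this as illustrative only:

```python
from py_code_mode.workflows.embeddings import Embedder

_SHARED_EMBEDDER = None
_original_embedder_init = Embedder.__init__

def _singleton_init(self, *args, **kwargs):
    global _SHARED_EMBEDDER
    if _SHARED_EMBEDDER is None:
        _original_embedder_init(self, *args, **kwargs)  # load the model once
        _SHARED_EMBEDDER = self
    else:
        # Reuse the already-loaded model state instead of loading another copy
        self.__dict__.update(_SHARED_EMBEDDER.__dict__)

Embedder.__init__ = _singleton_init
```
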
diff --git a/tests/container/test_client.py b/tests/container/test_client.py
index 18595c7..4759f39 100644
--- a/tests/container/test_client.py
+++ b/tests/container/test_client.py
@@ -144,14 +144,14 @@ class TestSessionClientInfo:
"""Tests for info method."""
@pytest.mark.asyncio
- async def test_info_returns_tools_and_skills(self) -> None:
- """Info returns available tools and skills."""
+ async def test_info_returns_tools_and_workflows(self) -> None:
+ """Info returns available tools and workflows."""
client = SessionClient()
mock_response = make_mock_response(
{
"tools": [{"name": "cli.nmap", "description": "Network scanner"}],
- "skills": [{"name": "scan", "description": "Port scanner"}],
+ "workflows": [{"name": "scan", "description": "Port scanner"}],
"artifacts_path": "/workspace/artifacts",
}
)
@@ -164,8 +164,8 @@ async def test_info_returns_tools_and_skills(self) -> None:
assert len(info.tools) == 1
assert info.tools[0]["name"] == "cli.nmap"
- assert len(info.skills) == 1
- assert info.skills[0]["name"] == "scan"
+ assert len(info.workflows) == 1
+ assert info.workflows[0]["name"] == "scan"
class TestSessionClientReset:
diff --git a/tests/container/test_container_api.py b/tests/container/test_container_api.py
index 5bea730..87d5433 100644
--- a/tests/container/test_container_api.py
+++ b/tests/container/test_container_api.py
@@ -1,12 +1,12 @@
"""Tests for ContainerExecutor HTTP API endpoints.
-Tests the /api/* endpoints for structured queries of tools, skills, artifacts, and deps.
+Tests the /api/* endpoints for structured queries of tools, workflows, artifacts, and deps.
These endpoints allow the executor to query metadata directly via HTTP instead of
executing Python code.
Feature areas covered:
1. Tools API (list, search)
-2. Skills API (list, search, get, create, delete)
+2. Workflows API (list, search, get, create, delete)
3. Artifacts API (list, load, save, delete)
4. Deps API (list, add, remove, sync)
5. Auth enforcement on all /api/* endpoints
@@ -77,12 +77,12 @@ def test_search_tools_requires_auth(self, auth_client) -> None:
# =============================================================================
-# SECTION 2: SKILLS API
+# SECTION 2: WORKFLOWS API
# =============================================================================
-class TestSkillsAPI:
- """Tests for /api/skills endpoints."""
+class TestWorkflowsAPI:
+ """Tests for /api/workflows endpoints."""
@pytest.fixture
def auth_client(self, tmp_path):
@@ -97,7 +97,7 @@ def auth_client(self, tmp_path):
config = SessionConfig(
artifacts_path=tmp_path / "artifacts",
- skills_path=tmp_path / "skills",
+ workflows_path=tmp_path / "workflows",
)
config.auth_token = "test-token"
@@ -105,28 +105,28 @@ def auth_client(self, tmp_path):
with TestClient(app) as client:
yield client, "test-token"
- def test_list_skills_returns_empty_when_no_skills(self, auth_client) -> None:
- """GET /api/skills returns empty list when no skills registered."""
+ def test_list_workflows_returns_empty_when_no_workflows(self, auth_client) -> None:
+ """GET /api/workflows returns empty list when no workflows registered."""
client, token = auth_client
response = client.get(
- "/api/skills",
+ "/api/workflows",
headers={"Authorization": f"Bearer {token}"},
)
assert response.status_code == 200
data = response.json()
assert isinstance(data, list)
- def test_list_skills_requires_auth(self, auth_client) -> None:
- """GET /api/skills requires authentication."""
+ def test_list_workflows_requires_auth(self, auth_client) -> None:
+ """GET /api/workflows requires authentication."""
client, _ = auth_client
- response = client.get("/api/skills")
+ response = client.get("/api/workflows")
assert response.status_code == 401
- def test_search_skills_returns_empty_when_no_skills(self, auth_client) -> None:
- """GET /api/skills/search returns empty list when no skills registered."""
+ def test_search_workflows_returns_empty_when_no_workflows(self, auth_client) -> None:
+ """GET /api/workflows/search returns empty list when no workflows registered."""
client, token = auth_client
response = client.get(
- "/api/skills/search",
+ "/api/workflows/search",
params={"query": "fetch"},
headers={"Authorization": f"Bearer {token}"},
)
@@ -134,36 +134,36 @@ def test_search_skills_returns_empty_when_no_skills(self, auth_client) -> None:
data = response.json()
assert isinstance(data, list)
- def test_search_skills_requires_auth(self, auth_client) -> None:
- """GET /api/skills/search requires authentication."""
+ def test_search_workflows_requires_auth(self, auth_client) -> None:
+ """GET /api/workflows/search requires authentication."""
client, _ = auth_client
- response = client.get("/api/skills/search", params={"query": "fetch"})
+ response = client.get("/api/workflows/search", params={"query": "fetch"})
assert response.status_code == 401
- def test_get_skill_returns_none_when_not_found(self, auth_client) -> None:
- """GET /api/skills/{name} returns null when skill not found."""
+ def test_get_workflow_returns_none_when_not_found(self, auth_client) -> None:
+ """GET /api/workflows/{name} returns null when workflow not found."""
client, token = auth_client
response = client.get(
- "/api/skills/nonexistent",
+ "/api/workflows/nonexistent",
headers={"Authorization": f"Bearer {token}"},
)
assert response.status_code == 200
data = response.json()
assert data is None
- def test_get_skill_requires_auth(self, auth_client) -> None:
- """GET /api/skills/{name} requires authentication."""
+ def test_get_workflow_requires_auth(self, auth_client) -> None:
+ """GET /api/workflows/{name} requires authentication."""
client, _ = auth_client
- response = client.get("/api/skills/nonexistent")
+ response = client.get("/api/workflows/nonexistent")
assert response.status_code == 401
- def test_create_skill_success(self, auth_client) -> None:
- """POST /api/skills creates a new skill."""
+ def test_create_workflow_success(self, auth_client) -> None:
+ """POST /api/workflows creates a new workflow."""
client, token = auth_client
response = client.post(
- "/api/skills",
+ "/api/workflows",
json={
- "name": "test_skill",
+ "name": "test_workflow",
"source": "async def run(x: int) -> int:\n return x * 2",
"description": "Doubles a number",
},
@@ -171,30 +171,30 @@ def test_create_skill_success(self, auth_client) -> None:
)
assert response.status_code == 200
data = response.json()
- assert data["name"] == "test_skill"
+ assert data["name"] == "test_workflow"
assert data["description"] == "Doubles a number"
assert "source" in data
- def test_create_skill_requires_auth(self, auth_client) -> None:
- """POST /api/skills requires authentication."""
+ def test_create_workflow_requires_auth(self, auth_client) -> None:
+ """POST /api/workflows requires authentication."""
client, _ = auth_client
response = client.post(
- "/api/skills",
+ "/api/workflows",
json={
- "name": "test_skill",
+ "name": "test_workflow",
"source": "async def run(): pass",
"description": "Test",
},
)
assert response.status_code == 401
- def test_create_skill_invalid_source_returns_400(self, auth_client) -> None:
- """POST /api/skills returns 400 for invalid source code."""
+ def test_create_workflow_invalid_source_returns_400(self, auth_client) -> None:
+ """POST /api/workflows returns 400 for invalid source code."""
client, token = auth_client
response = client.post(
- "/api/skills",
+ "/api/workflows",
json={
- "name": "bad_skill",
+ "name": "bad_workflow",
"source": "not valid python +++",
"description": "Invalid",
},
@@ -202,13 +202,13 @@ def test_create_skill_invalid_source_returns_400(self, auth_client) -> None:
)
assert response.status_code == 400
- def test_create_skill_no_run_returns_400(self, auth_client) -> None:
- """POST /api/skills returns 400 when source has no run() function."""
+ def test_create_workflow_no_run_returns_400(self, auth_client) -> None:
+ """POST /api/workflows returns 400 when source has no run() function."""
client, token = auth_client
response = client.post(
- "/api/skills",
+ "/api/workflows",
json={
- "name": "no_run_skill",
+ "name": "no_run_workflow",
"source": "def other_func(): pass",
"description": "No run",
},
@@ -216,58 +216,58 @@ def test_create_skill_no_run_returns_400(self, auth_client) -> None:
)
assert response.status_code == 400
- def test_delete_skill_requires_auth(self, auth_client) -> None:
- """DELETE /api/skills/{name} requires authentication."""
+ def test_delete_workflow_requires_auth(self, auth_client) -> None:
+ """DELETE /api/workflows/{name} requires authentication."""
client, _ = auth_client
- response = client.delete("/api/skills/test_skill")
+ response = client.delete("/api/workflows/test_workflow")
assert response.status_code == 401
- def test_delete_skill_returns_false_when_not_found(self, auth_client) -> None:
- """DELETE /api/skills/{name} returns false when skill not found."""
+ def test_delete_workflow_returns_false_when_not_found(self, auth_client) -> None:
+ """DELETE /api/workflows/{name} returns false when workflow not found."""
client, token = auth_client
response = client.delete(
- "/api/skills/nonexistent",
+ "/api/workflows/nonexistent",
headers={"Authorization": f"Bearer {token}"},
)
assert response.status_code == 200
assert response.json() is False
- def test_skill_lifecycle_create_get_delete(self, auth_client) -> None:
- """Full skill lifecycle: create, get, delete."""
+ def test_workflow_lifecycle_create_get_delete(self, auth_client) -> None:
+ """Full workflow lifecycle: create, get, delete."""
client, token = auth_client
headers = {"Authorization": f"Bearer {token}"}
# Create
- skill_source = (
+ workflow_source = (
'async def run(n: int) -> int:\n """Square a number."""\n return n ** 2'
)
response = client.post(
- "/api/skills",
+ "/api/workflows",
json={
- "name": "lifecycle_skill",
- "source": skill_source,
+ "name": "lifecycle_workflow",
+ "source": workflow_source,
"description": "Squares a number",
},
headers=headers,
)
assert response.status_code == 200
created = response.json()
- assert created["name"] == "lifecycle_skill"
+ assert created["name"] == "lifecycle_workflow"
# Get
- response = client.get("/api/skills/lifecycle_skill", headers=headers)
+ response = client.get("/api/workflows/lifecycle_workflow", headers=headers)
assert response.status_code == 200
fetched = response.json()
- assert fetched["name"] == "lifecycle_skill"
+ assert fetched["name"] == "lifecycle_workflow"
assert fetched["source"] is not None
# Delete
- response = client.delete("/api/skills/lifecycle_skill", headers=headers)
+ response = client.delete("/api/workflows/lifecycle_workflow", headers=headers)
assert response.status_code == 200
assert response.json() is True
# Verify deleted
- response = client.get("/api/skills/lifecycle_skill", headers=headers)
+ response = client.get("/api/workflows/lifecycle_workflow", headers=headers)
assert response.status_code == 200
assert response.json() is None
@@ -563,12 +563,12 @@ def auth_client(self, tmp_path):
# Tools
("/api/tools", "get"),
("/api/tools/search?query=test", "get"),
- # Skills
- ("/api/skills", "get"),
- ("/api/skills/search?query=test", "get"),
- ("/api/skills/test", "get"),
- ("/api/skills", "post"),
- ("/api/skills/test", "delete"),
+ # Workflows
+ ("/api/workflows", "get"),
+ ("/api/workflows/search?query=test", "get"),
+ ("/api/workflows/test", "get"),
+ ("/api/workflows", "post"),
+ ("/api/workflows/test", "delete"),
# Artifacts
("/api/artifacts", "get"),
("/api/artifacts/test", "get"),
diff --git a/tests/container/test_server.py b/tests/container/test_server.py
index 3501056..01d6c2b 100644
--- a/tests/container/test_server.py
+++ b/tests/container/test_server.py
@@ -80,13 +80,13 @@ def test_health_endpoint(self, client) -> None:
# Note: active_sessions removed for security (information disclosure)
def test_info_endpoint(self, client) -> None:
- """Info endpoint returns tools and skills."""
+ """Info endpoint returns tools and workflows."""
response = client.get("/info")
assert response.status_code == 200
data = response.json()
assert "tools" in data
- assert "skills" in data
+ assert "workflows" in data
assert "artifacts_path" in data
def test_execute_simple_expression(self, client) -> None:
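
For orientation, the `/info` payload shape these assertions pin down; the values are illustrative (taken from the client tests earlier), and only the three keys are attested:

```python
info = {
    "tools": [{"name": "cli.nmap", "description": "Network scanner"}],
    "workflows": [{"name": "scan", "description": "Port scanner"}],
    "artifacts_path": "/workspace/artifacts",
}
assert {"tools", "workflows", "artifacts_path"} <= info.keys()
```
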
diff --git a/tests/test_backend_integration.py b/tests/test_backend_integration.py
index 75cc577..697f270 100644
--- a/tests/test_backend_integration.py
+++ b/tests/test_backend_integration.py
@@ -1,6 +1,6 @@
"""Integration tests for backend abstraction.
-These tests verify that artifacts, skills, and tools work consistently
+These tests verify that artifacts, workflows, and tools work consistently
across all execution backends (in-process, container).
"""
@@ -14,16 +14,16 @@
from py_code_mode.artifacts import FileArtifactStore
from py_code_mode.execution import Capability
from py_code_mode.execution.in_process import InProcessExecutor
-from py_code_mode.skills import FileSkillStore, PythonSkill, create_skill_library
from py_code_mode.tools.adapters.cli import CLIAdapter
from py_code_mode.tools.registry import ToolRegistry
+from py_code_mode.workflows import FileWorkflowStore, PythonWorkflow, create_workflow_library
if TYPE_CHECKING:
pass
class TestCreateExecutorIntegration:
- """Test create_executor() factory with tools, skills, artifacts."""
+ """Test create_executor() factory with tools, workflows, artifacts."""
@pytest.fixture
def tools_dir(self, tmp_path: Path) -> Path:
@@ -54,18 +54,18 @@ def tools_dir(self, tmp_path: Path) -> Path:
return tools
@pytest.fixture
- def skills_dir(self, tmp_path: Path) -> Path:
- """Create a skills directory with a simple skill."""
- skills = tmp_path / "skills"
- skills.mkdir()
- (skills / "double.py").write_text(
+ def workflows_dir(self, tmp_path: Path) -> Path:
+ """Create a workflows directory with a simple workflow."""
+ workflows = tmp_path / "workflows"
+ workflows.mkdir()
+ (workflows / "double.py").write_text(
'''"""Double a number."""
async def run(n: int) -> int:
return n * 2
'''
)
- return skills
+ return workflows
@pytest.fixture
def artifacts_dir(self, tmp_path: Path) -> Path:
@@ -109,49 +109,49 @@ async def test_create_executor_tools_list(self, tools_dir: Path) -> None:
await executor.close()
@pytest.mark.asyncio
- async def test_create_executor_with_skills(self, skills_dir: Path) -> None:
- """Executor loads skills and makes them callable."""
- store = FileSkillStore(skills_dir)
- skill_library = create_skill_library(store=store)
- executor = InProcessExecutor(skill_library=skill_library)
+ async def test_create_executor_with_workflows(self, workflows_dir: Path) -> None:
+ """Executor loads workflows and makes them callable."""
+ store = FileWorkflowStore(workflows_dir)
+ workflow_library = create_workflow_library(store=store)
+ executor = InProcessExecutor(workflow_library=workflow_library)
try:
- # Skill should be callable
- result = await executor.run("skills.double(n=21)")
- assert result.is_ok, f"Skill call failed: {result.error}"
+ # Workflow should be callable
+ result = await executor.run("workflows.double(n=21)")
+ assert result.is_ok, f"Workflow call failed: {result.error}"
assert result.value == 42
finally:
await executor.close()
@pytest.mark.asyncio
- async def test_create_executor_skills_list(self, skills_dir: Path) -> None:
- """Executor provides skills.list() that returns skill info."""
- store = FileSkillStore(skills_dir)
- skill_library = create_skill_library(store=store)
- executor = InProcessExecutor(skill_library=skill_library)
+ async def test_create_executor_workflows_list(self, workflows_dir: Path) -> None:
+ """Executor provides workflows.list() that returns workflow info."""
+ store = FileWorkflowStore(workflows_dir)
+ workflow_library = create_workflow_library(store=store)
+ executor = InProcessExecutor(workflow_library=workflow_library)
try:
- result = await executor.run("skills.list()")
- assert result.is_ok, f"skills.list() failed: {result.error}"
- assert result.value is not None, "skills.list() returned None"
- # Should contain our double skill
- skills_str = str(result.value)
- assert "double" in skills_str.lower(), f"double not in {skills_str}"
+ result = await executor.run("workflows.list()")
+ assert result.is_ok, f"workflows.list() failed: {result.error}"
+ assert result.value is not None, "workflows.list() returned None"
+ # Should contain our double workflow
+ workflows_str = str(result.value)
+ assert "double" in workflows_str.lower(), f"double not in {workflows_str}"
finally:
await executor.close()
@pytest.mark.asyncio
- async def test_create_executor_skills_search(self, skills_dir: Path) -> None:
- """Executor provides skills.search() for semantic search."""
- store = FileSkillStore(skills_dir)
- skill_library = create_skill_library(store=store)
- executor = InProcessExecutor(skill_library=skill_library)
+ async def test_create_executor_workflows_search(self, workflows_dir: Path) -> None:
+ """Executor provides workflows.search() for semantic search."""
+ store = FileWorkflowStore(workflows_dir)
+ workflow_library = create_workflow_library(store=store)
+ executor = InProcessExecutor(workflow_library=workflow_library)
try:
- result = await executor.run('skills.search("multiply number")')
- assert result.is_ok, f"skills.search() failed: {result.error}"
+ result = await executor.run('workflows.search("multiply number")')
+ assert result.is_ok, f"workflows.search() failed: {result.error}"
assert result.value is not None
- # Should find double skill (semantically similar to multiply)
+ # Should find double workflow (semantically similar to multiply)
assert len(result.value) > 0
finally:
await executor.close()
@@ -195,19 +195,19 @@ async def test_create_executor_artifacts_list(self, artifacts_dir: Path) -> None
@pytest.mark.asyncio
async def test_create_executor_full_integration(
- self, tools_dir: Path, skills_dir: Path, artifacts_dir: Path
+ self, tools_dir: Path, workflows_dir: Path, artifacts_dir: Path
) -> None:
- """Executor with all three: tools, skills, artifacts."""
+ """Executor with all three: tools, workflows, artifacts."""
adapter = CLIAdapter(tools_path=tools_dir)
registry = ToolRegistry()
registry._adapters.append(adapter)
- store = FileSkillStore(skills_dir)
- skill_library = create_skill_library(store=store)
+ store = FileWorkflowStore(workflows_dir)
+ workflow_library = create_workflow_library(store=store)
artifacts_dir.mkdir(parents=True, exist_ok=True)
artifact_store = FileArtifactStore(artifacts_dir)
executor = InProcessExecutor(
registry=registry,
- skill_library=skill_library,
+ workflow_library=workflow_library,
artifact_store=artifact_store,
)
@@ -216,9 +216,9 @@ async def test_create_executor_full_integration(
result = await executor.run('tools.echo.echo(text="test")')
assert result.is_ok, f"tools failed: {result.error}"
- # Skills work
- result = await executor.run("skills.double(n=5)")
- assert result.is_ok, f"skills failed: {result.error}"
+ # Workflows work
+ result = await executor.run("workflows.double(n=5)")
+ assert result.is_ok, f"workflows failed: {result.error}"
assert result.value == 10
# Artifacts work
@@ -293,76 +293,76 @@ async def test_artifact_delete(self, executor_with_artifacts: InProcessExecutor)
assert not result.is_ok or result.value is None
-class TestBackendSkills:
- """Test skills invocation across backends."""
+class TestBackendWorkflows:
+ """Test workflows invocation across backends."""
@pytest.fixture
- def executor_with_skills(self, tmp_path: Path) -> InProcessExecutor:
- """Create executor with skills."""
- skills_path = tmp_path / "skills"
- skills_path.mkdir()
+ def executor_with_workflows(self, tmp_path: Path) -> InProcessExecutor:
+ """Create executor with workflows."""
+ workflows_path = tmp_path / "workflows"
+ workflows_path.mkdir()
- # Create a test skill using from_source
- skill = PythonSkill.from_source(
+ # Create a test workflow using from_source
+ workflow = PythonWorkflow.from_source(
name="double",
source='async def run(n: int) -> int:\n """Double a number."""\n return n * 2',
description="Double a number",
)
- store = FileSkillStore(skills_path)
- store.save(skill)
+ store = FileWorkflowStore(workflows_path)
+ store.save(workflow)
- library = create_skill_library(store=store)
- return InProcessExecutor(skill_library=library)
+ library = create_workflow_library(store=store)
+ return InProcessExecutor(workflow_library=library)
@pytest.mark.asyncio
- async def test_skill_invocation(self, executor_with_skills: InProcessExecutor) -> None:
- """Skills can be invoked via skills namespace."""
- async with executor_with_skills as executor:
- result = await executor.run("skills.double(n=21)")
+ async def test_workflow_invocation(self, executor_with_workflows: InProcessExecutor) -> None:
+ """Workflows can be invoked via workflows namespace."""
+ async with executor_with_workflows as executor:
+ result = await executor.run("workflows.double(n=21)")
- assert result.is_ok, f"Skill invocation failed: {result.error}"
+ assert result.is_ok, f"Workflow invocation failed: {result.error}"
assert result.value == 42
@pytest.mark.asyncio
- async def test_skills_list(self, executor_with_skills: InProcessExecutor) -> None:
- """Can list available skills."""
- async with executor_with_skills as executor:
- result = await executor.run("skills.list()")
+ async def test_workflows_list(self, executor_with_workflows: InProcessExecutor) -> None:
+ """Can list available workflows."""
+ async with executor_with_workflows as executor:
+ result = await executor.run("workflows.list()")
assert result.is_ok
- # Should contain our skill
+ # Should contain our workflow
assert any("double" in str(s) for s in result.value)
@pytest.mark.asyncio
- async def test_skill_with_default_args(self, tmp_path: Path) -> None:
- """Skills with default arguments work correctly."""
- skills_path = tmp_path / "skills"
- skills_path.mkdir()
+ async def test_workflow_with_default_args(self, tmp_path: Path) -> None:
+ """Workflows with default arguments work correctly."""
+ workflows_path = tmp_path / "workflows"
+ workflows_path.mkdir()
source = (
'async def run(name: str = "World") -> str:\n'
' """Greet someone."""\n'
' return f"Hello, {name}!"'
)
- skill = PythonSkill.from_source(
+ workflow = PythonWorkflow.from_source(
name="greet",
source=source,
description="Greet someone",
)
- store = FileSkillStore(skills_path)
- store.save(skill)
- library = create_skill_library(store=store)
+ store = FileWorkflowStore(workflows_path)
+ store.save(workflow)
+ library = create_workflow_library(store=store)
- async with InProcessExecutor(skill_library=library) as executor:
+ async with InProcessExecutor(workflow_library=library) as executor:
# With default
- result = await executor.run("skills.greet()")
+ result = await executor.run("workflows.greet()")
assert result.is_ok
assert result.value == "Hello, World!"
# With override
- result = await executor.run('skills.greet(name="Alice")')
+ result = await executor.run('workflows.greet(name="Alice")')
assert result.is_ok
assert result.value == "Hello, Alice!"
diff --git a/tests/test_backend_user_journey.py b/tests/test_backend_user_journey.py
index 525ec53..c3437ba 100644
--- a/tests/test_backend_user_journey.py
+++ b/tests/test_backend_user_journey.py
@@ -2,9 +2,9 @@
These tests simulate real agent workflows that use multiple features together:
- Tools discovery and invocation
-- Skills creation, invocation, and persistence
+- Workflows creation, invocation, and persistence
- Artifacts save/load
-- Cross-namespace operations (skills calling tools)
+- Cross-namespace operations (workflows calling tools)
Critical deployment scenarios:
- FileStorage + ContainerExecutor (standard)
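
A host-side sketch of the journey the tests below walk through; the `Session`, `FileStorage`, and `workflows.create` calls mirror the test bodies, while the top-level import path and the in-process executor choice are assumptions:

```python
import asyncio
from pathlib import Path

from py_code_mode import FileStorage, Session  # assumed top-level exports
from py_code_mode.execution.in_process import InProcessExecutor

async def main() -> None:
    storage = FileStorage(Path("storage"))
    async with Session(storage=storage, executor=InProcessExecutor()) as session:
        # Save a successful pattern as a workflow, then invoke it directly
        await session.run(
            'workflows.create(\n'
            '    name="shout",\n'
            '    description="Echo text in uppercase",\n'
            '    source="async def run(text: str) -> str:\\n    return text.upper()",\n'
            ')'
        )
        result = await session.run('workflows.shout(text="quiet")')
        assert result.is_ok and result.value == "QUIET"

asyncio.run(main())
```
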
@@ -97,7 +97,7 @@ class TestAgentFullWorkflow:
These tests simulate a real LLM agent session where the agent:
1. Discovers available tools
2. Uses tools to accomplish tasks
- 3. Creates reusable skills from patterns
+ 3. Creates reusable workflows from patterns
4. Saves results as artifacts
"""
@@ -107,8 +107,8 @@ async def test_agent_full_workflow_in_process(
) -> None:
"""Complete agent workflow with InProcessExecutor.
- User story: An agent lists tools, uses a tool, creates a skill,
- invokes the skill, and saves results as an artifact.
+ User story: An agent lists tools, uses a tool, creates a workflow,
+ invokes the workflow, and saves results as an artifact.
"""
storage = FileStorage(tools_storage)
config = InProcessConfig(tools_path=tools_dir)
@@ -127,19 +127,19 @@ async def test_agent_full_workflow_in_process(
assert result.is_ok, f"tools.echo() failed: {result.error}"
assert "hello world" in result.value
- # 3. Agent creates a skill from what it learned
+ # 3. Agent creates a workflow from what it learned
result = await session.run("""
-skills.create(
+workflows.create(
name="shout",
description="Echo text in uppercase",
source="async def run(text: str) -> str:\\n return text.upper()"
)
""")
- assert result.is_ok, f"skills.create() failed: {result.error}"
+ assert result.is_ok, f"workflows.create() failed: {result.error}"
- # 4. Agent invokes the created skill
- result = await session.run('skills.shout(text="quiet")')
- assert result.is_ok, f"skills.shout() failed: {result.error}"
+ # 4. Agent invokes the created workflow
+ result = await session.run('workflows.shout(text="quiet")')
+ assert result.is_ok, f"workflows.shout() failed: {result.error}"
assert result.value == "QUIET"
# 5. Agent saves results as artifact
@@ -186,19 +186,19 @@ async def test_agent_full_workflow_container(
assert result.is_ok, f"tools.echo() failed: {result.error}"
assert "container hello" in result.value
- # 3. Agent creates a skill
+ # 3. Agent creates a workflow
result = await session.run("""
-skills.create(
+workflows.create(
name="container_shout",
description="Echo text in uppercase",
source="async def run(text: str) -> str:\\n return text.upper()"
)
""")
- assert result.is_ok, f"skills.create() failed: {result.error}"
+ assert result.is_ok, f"workflows.create() failed: {result.error}"
- # 4. Agent invokes the created skill
- result = await session.run('skills.container_shout(text="whisper")')
- assert result.is_ok, f"skills.container_shout() failed: {result.error}"
+ # 4. Agent invokes the created workflow
+ result = await session.run('workflows.container_shout(text="whisper")')
+ assert result.is_ok, f"workflows.container_shout() failed: {result.error}"
assert result.value == "WHISPER"
# 5. Agent saves artifact
@@ -268,73 +268,73 @@ async def test_tools_search_in_container(self, tools_storage: Path, tools_dir: P
# =============================================================================
-# Container + Skills Lifecycle Tests
+# Container + Workflows Lifecycle Tests
# =============================================================================
@pytest.mark.xdist_group("docker")
-class TestContainerSkillsLifecycle:
- """Test full skills lifecycle inside a container."""
+class TestContainerWorkflowsLifecycle:
+ """Test full workflows lifecycle inside a container."""
@pytest.mark.asyncio
@pytest.mark.skipif(not _docker_available(), reason="Docker not available")
- async def test_skills_create_invoke_delete(self, empty_storage: Path) -> None:
- """Skills can be created, invoked, and deleted in container."""
+ async def test_workflows_create_invoke_delete(self, empty_storage: Path) -> None:
+ """Workflows can be created, invoked, and deleted in container."""
storage = FileStorage(empty_storage)
executor = ContainerExecutor(ContainerConfig(timeout=30.0, auth_disabled=True))
async with Session(storage=storage, executor=executor) as session:
- # 1. Verify empty skills
- result = await session.run("skills.list()")
+ # 1. Verify empty workflows
+ result = await session.run("workflows.list()")
assert result.is_ok
assert result.value == []
- # 2. Create skill
+ # 2. Create workflow
result = await session.run("""
-skills.create(
+workflows.create(
name="add_numbers",
description="Add two numbers",
source="async def run(a: int, b: int) -> int:\\n return a + b"
)
""")
- assert result.is_ok, f"skills.create() failed: {result.error}"
+ assert result.is_ok, f"workflows.create() failed: {result.error}"
- # 3. List shows new skill
- result = await session.run("skills.list()")
+ # 3. List shows new workflow
+ result = await session.run("workflows.list()")
assert result.is_ok
names = [s.name if hasattr(s, "name") else s["name"] for s in result.value]
assert "add_numbers" in names
- # 4. Invoke skill
- result = await session.run("skills.add_numbers(a=5, b=3)")
- assert result.is_ok, f"skills.add_numbers() failed: {result.error}"
+ # 4. Invoke workflow
+ result = await session.run("workflows.add_numbers(a=5, b=3)")
+ assert result.is_ok, f"workflows.add_numbers() failed: {result.error}"
assert result.value == 8
- # 5. Delete skill
- result = await session.run('skills.delete("add_numbers")')
- assert result.is_ok, f"skills.delete() failed: {result.error}"
+ # 5. Delete workflow
+ result = await session.run('workflows.delete("add_numbers")')
+ assert result.is_ok, f"workflows.delete() failed: {result.error}"
# 6. Verify deletion
- result = await session.run("skills.list()")
+ result = await session.run("workflows.list()")
assert result.is_ok
names = [s.name if hasattr(s, "name") else s["name"] for s in result.value]
assert "add_numbers" not in names
@pytest.mark.asyncio
@pytest.mark.skipif(not _docker_available(), reason="Docker not available")
- async def test_skill_uses_tools_in_container(
+ async def test_workflow_uses_tools_in_container(
self, tools_storage: Path, tools_dir: Path
) -> None:
- """Skills can call tools from within container execution."""
+ """Workflows can call tools from within container execution."""
storage = FileStorage(tools_storage)
executor = ContainerExecutor(
ContainerConfig(timeout=30.0, auth_disabled=True, tools_path=tools_dir)
)
async with Session(storage=storage, executor=executor) as session:
- # Create skill that uses tools
+ # Create workflow that uses tools
result = await session.run('''
-skills.create(
+workflows.create(
name="loud_echo",
description="Echo text and uppercase it",
source="""async def run(text: str) -> str:
@@ -343,11 +343,11 @@ async def test_skill_uses_tools_in_container(
"""
)
''')
- assert result.is_ok, f"skills.create() failed: {result.error}"
+ assert result.is_ok, f"workflows.create() failed: {result.error}"
- # Invoke skill that calls tool
- result = await session.run('skills.loud_echo(text="hello")')
- assert result.is_ok, f"skills.loud_echo() failed: {result.error}"
+ # Invoke workflow that calls tool
+ result = await session.run('workflows.loud_echo(text="hello")')
+ assert result.is_ok, f"workflows.loud_echo() failed: {result.error}"
assert result.value == "HELLO"
@@ -362,37 +362,37 @@ class TestContainerSessionPersistence:
@pytest.mark.asyncio
@pytest.mark.skipif(not _docker_available(), reason="Docker not available")
- async def test_skill_persists_across_container_sessions(self, empty_storage: Path) -> None:
- """Skills created in one container session are available in next."""
- # Session 1: Create skill
+ async def test_workflow_persists_across_container_sessions(self, empty_storage: Path) -> None:
+ """Workflows created in one container session are available in next."""
+ # Session 1: Create workflow
storage1 = FileStorage(empty_storage)
executor1 = ContainerExecutor(ContainerConfig(timeout=30.0, auth_disabled=True))
async with Session(storage=storage1, executor=executor1) as session:
result = await session.run("""
-skills.create(
- name="persistent_skill",
+workflows.create(
+ name="persistent_workflow",
description="Should persist",
source="async def run() -> str:\\n return 'persisted'"
)
""")
assert result.is_ok
- result = await session.run("skills.persistent_skill()")
+ result = await session.run("workflows.persistent_workflow()")
assert result.is_ok
assert result.value == "persisted"
- # Session 2: Skill should still exist
+ # Session 2: Workflow should still exist
storage2 = FileStorage(empty_storage)
executor2 = ContainerExecutor(ContainerConfig(timeout=30.0, auth_disabled=True))
async with Session(storage=storage2, executor=executor2) as session:
- result = await session.run("skills.list()")
+ result = await session.run("workflows.list()")
assert result.is_ok
names = [s.name if hasattr(s, "name") else s["name"] for s in result.value]
- assert "persistent_skill" in names, f"Skill not persisted: {names}"
+ assert "persistent_workflow" in names, f"Workflow not persisted: {names}"
- result = await session.run("skills.persistent_skill()")
+ result = await session.run("workflows.persistent_workflow()")
assert result.is_ok
assert result.value == "persisted"
@@ -442,19 +442,19 @@ async def test_redis_container_full_workflow(self, redis_url: str) -> None:
executor = ContainerExecutor(ContainerConfig(timeout=60.0, auth_disabled=True))
async with Session(storage=storage, executor=executor) as session:
- # 1. Create skill (stored in Redis)
+ # 1. Create workflow (stored in Redis)
result = await session.run("""
-skills.create(
- name="redis_skill",
- description="Test skill in Redis",
+workflows.create(
+ name="redis_workflow",
+ description="Test workflow in Redis",
source="async def run(x: int) -> int:\\n return x * 2"
)
""")
- assert result.is_ok, f"skills.create() failed: {result.error}"
+ assert result.is_ok, f"workflows.create() failed: {result.error}"
- # 2. Invoke skill
- result = await session.run("skills.redis_skill(x=21)")
- assert result.is_ok, f"skills.redis_skill() failed: {result.error}"
+ # 2. Invoke workflow
+ result = await session.run("workflows.redis_workflow(x=21)")
+ assert result.is_ok, f"workflows.redis_workflow() failed: {result.error}"
assert result.value == 42
# 3. Save artifact (stored in Redis)
@@ -470,17 +470,17 @@ async def test_redis_container_full_workflow(self, redis_url: str) -> None:
@pytest.mark.asyncio
@pytest.mark.skipif(not _docker_available(), reason="Docker not available")
- async def test_redis_container_skill_persistence(self, redis_url: str) -> None:
- """Skills persist in Redis across container sessions."""
+ async def test_redis_container_workflow_persistence(self, redis_url: str) -> None:
+ """Workflows persist in Redis across container sessions."""
client = redis.from_url(redis_url)
- # Session 1: Create skill
+ # Session 1: Create workflow
storage1 = RedisStorage(redis=client, prefix="persist_test")
executor1 = ContainerExecutor(ContainerConfig(timeout=30.0, auth_disabled=True))
async with Session(storage=storage1, executor=executor1) as session:
result = await session.run("""
-skills.create(
+workflows.create(
name="redis_persistent",
description="Should persist in Redis",
source="async def run() -> str:\\n return 'from redis'"
@@ -488,42 +488,42 @@ async def test_redis_container_skill_persistence(self, redis_url: str) -> None:
""")
assert result.is_ok
- # Session 2: Skill should exist
+ # Session 2: Workflow should exist
storage2 = RedisStorage(redis=client, prefix="persist_test")
executor2 = ContainerExecutor(ContainerConfig(timeout=30.0, auth_disabled=True))
async with Session(storage=storage2, executor=executor2) as session:
- result = await session.run("skills.list()")
+ result = await session.run("workflows.list()")
assert result.is_ok
names = [s.name if hasattr(s, "name") else s["name"] for s in result.value]
assert "redis_persistent" in names
- result = await session.run("skills.redis_persistent()")
+ result = await session.run("workflows.redis_persistent()")
assert result.is_ok
assert result.value == "from redis"
@pytest.mark.asyncio
@pytest.mark.skipif(not _docker_available(), reason="Docker not available")
- async def test_redis_container_skill_search(self, redis_url: str) -> None:
- """Skill search should find created skills by semantic similarity."""
+ async def test_redis_container_workflow_search(self, redis_url: str) -> None:
+ """Workflow search should find created workflows by semantic similarity."""
client = redis.from_url(redis_url)
storage = RedisStorage(redis=client, prefix="search_test")
executor = ContainerExecutor(ContainerConfig(timeout=60.0, auth_disabled=True))
async with Session(storage=storage, executor=executor) as session:
- # 1. Create a skill with clear description
+ # 1. Create a workflow with clear description
result = await session.run("""
-skills.create(
+workflows.create(
name="port_scanner",
description="Scan network ports to find open services",
source="async def run(host: str) -> list:\\n return ['port scanning', host]"
)
""")
- assert result.is_ok, f"skills.create() failed: {result.error}"
+ assert result.is_ok, f"workflows.create() failed: {result.error}"
- # 2. Search for the skill using semantic query
- result = await session.run('skills.search("network port scanning")')
- assert result.is_ok, f"skills.search() failed: {result.error}"
+ # 2. Search for the workflow using semantic query
+ result = await session.run('workflows.search("network port scanning")')
+ assert result.is_ok, f"workflows.search() failed: {result.error}"
# Should find port_scanner since query matches description
found_names = [s["name"] if isinstance(s, dict) else s.name for s in result.value]
@@ -533,22 +533,22 @@ async def test_redis_container_skill_search(self, redis_url: str) -> None:
@pytest.mark.asyncio
@pytest.mark.skipif(not _docker_available(), reason="Docker not available")
- async def test_redis_search_skills_facade(self, redis_url: str) -> None:
- """session.search_skills() facade should find skills with RedisStorage."""
+ async def test_redis_search_workflows_facade(self, redis_url: str) -> None:
+ """session.search_workflows() facade should find workflows with RedisStorage."""
client = redis.from_url(redis_url)
storage = RedisStorage(redis=client, prefix="facade_test")
executor = ContainerExecutor(ContainerConfig(timeout=60.0, auth_disabled=True))
async with Session(storage=storage, executor=executor) as session:
- # 1. Create a skill using facade method
- await session.add_skill(
+ # 1. Create a workflow using facade method
+ await session.add_workflow(
name="web_scraper",
source="async def run(url: str) -> str:\n return f'scraped {url}'",
description="Scrape web pages and extract content",
)
# 2. Search using facade method (host-side library)
- results = await session.search_skills("web scraping extract")
+ results = await session.search_workflows("web scraping extract")
# Should find web_scraper since query matches description
found_names = [s["name"] for s in results]
@@ -590,8 +590,8 @@ def redis_stack_url(self, redis_stack_container) -> str:
@pytest.mark.asyncio
@pytest.mark.skipif(not _docker_available(), reason="Docker not available")
- async def test_redis_stack_search_skills_facade(self, redis_stack_url: str) -> None:
- """session.search_skills() should work with RedisVectorStore (RediSearch)."""
+ async def test_redis_stack_search_workflows_facade(self, redis_stack_url: str) -> None:
+ """session.search_workflows() should work with RedisVectorStore (RediSearch)."""
import redis as redis_lib
client = redis_lib.from_url(redis_stack_url)
@@ -604,15 +604,15 @@ async def test_redis_stack_search_skills_facade(self, redis_stack_url: str) -> N
executor = ContainerExecutor(ContainerConfig(timeout=60.0, auth_disabled=True))
async with Session(storage=storage, executor=executor) as session:
- # 1. Create a skill
- await session.add_skill(
+ # 1. Create a workflow
+ await session.add_workflow(
name="data_analyzer",
source="async def run(data: list) -> dict:\n return {'count': len(data)}",
description="Analyze data and return statistics",
)
# 2. Search using facade method - this uses RedisVectorStore
- results = await session.search_skills("data analysis statistics")
+ results = await session.search_workflows("data analysis statistics")
found_names = [s["name"] for s in results]
assert "data_analyzer" in found_names, (
@@ -656,27 +656,27 @@ async def test_container_tool_not_found_error(self, empty_storage: Path) -> None
@pytest.mark.asyncio
@pytest.mark.skipif(not _docker_available(), reason="Docker not available")
- async def test_container_skill_not_found_error(self, empty_storage: Path) -> None:
- """Calling non-existent skill gives clear error."""
+ async def test_container_workflow_not_found_error(self, empty_storage: Path) -> None:
+ """Calling non-existent workflow gives clear error."""
storage = FileStorage(empty_storage)
executor = ContainerExecutor(ContainerConfig(timeout=30.0, auth_disabled=True))
async with Session(storage=storage, executor=executor) as session:
- result = await session.run("skills.nonexistent_skill()")
- assert not result.is_ok, "Expected error for missing skill"
+ result = await session.run("workflows.nonexistent_workflow()")
+ assert not result.is_ok, "Expected error for missing workflow"
assert result.error is not None
@pytest.mark.asyncio
@pytest.mark.skipif(not _docker_available(), reason="Docker not available")
- async def test_container_invalid_skill_source_rejected(self, empty_storage: Path) -> None:
- """Creating skill with syntax error fails gracefully."""
+ async def test_container_invalid_workflow_source_rejected(self, empty_storage: Path) -> None:
+ """Creating workflow with syntax error fails gracefully."""
storage = FileStorage(empty_storage)
executor = ContainerExecutor(ContainerConfig(timeout=30.0, auth_disabled=True))
async with Session(storage=storage, executor=executor) as session:
result = await session.run("""
-skills.create(
- name="bad_skill",
+workflows.create(
+ name="bad_workflow",
description="Invalid syntax",
source="async def run( broken"
)
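
The container tests above all share one shape: open a Session over a storage backend, have agent code call workflows.create(...), then prove the workflow is visible from a fresh session. Note the doubled backslashes in the source strings: the workflow source is a string literal inside the code string handed to session.run(), so "\\n" must survive one extra round of escape processing before it becomes a newline. Below is a minimal sketch of that round trip outside pytest; the import paths for Session and ContainerExecutor are assumptions (only py_code_mode.execution.container.config appears in this diff), while the calls themselves mirror the tests:

import asyncio
from pathlib import Path

from py_code_mode.storage.backends import FileStorage
# Assumed import paths for the session and container executor.
from py_code_mode.execution.container import ContainerConfig, ContainerExecutor
from py_code_mode.session import Session


async def main() -> None:
    base = Path("/tmp/code-mode-demo")  # illustrative location

    # Session 1: agent code creates a workflow. "\\n" in this code string
    # becomes "\n" inside the source= literal, i.e. a real newline in the
    # stored workflow source.
    async with Session(
        storage=FileStorage(base),
        executor=ContainerExecutor(ContainerConfig(timeout=30.0, auth_disabled=True)),
    ) as session:
        result = await session.run("""
workflows.create(
    name="greeter",
    description="Say hello",
    source="async def run() -> str:\\n    return 'hello'"
)
""")
        assert result.is_ok, result.error

    # Session 2: a fresh session over the same storage still sees it.
    async with Session(
        storage=FileStorage(base),
        executor=ContainerExecutor(ContainerConfig(timeout=30.0, auth_disabled=True)),
    ) as session:
        result = await session.run("workflows.greeter()")
        assert result.is_ok and result.value == "hello"


asyncio.run(main())
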
diff --git a/tests/test_bootstrap.py b/tests/test_bootstrap.py
index b0aa938..4440f19 100644
--- a/tests/test_bootstrap.py
+++ b/tests/test_bootstrap.py
@@ -36,7 +36,7 @@ class TestNamespaceBundle:
NamespaceBundle contains the three namespaces needed for code execution:
- tools: ToolsNamespace for tool access
- - skills: SkillsNamespace for skill access
+ - workflows: WorkflowsNamespace for workflow access
- artifacts: ArtifactStoreProtocol for artifact storage
"""
@@ -61,16 +61,16 @@ def test_namespace_bundle_has_tools_field(self) -> None:
field_names = [f.name for f in fields(NamespaceBundle)]
assert "tools" in field_names
- def test_namespace_bundle_has_skills_field(self) -> None:
- """NamespaceBundle has a 'skills' field.
+ def test_namespace_bundle_has_workflows_field(self) -> None:
+ """NamespaceBundle has a 'workflows' field.
- Contract: Must have skills field for SkillsNamespace access.
+ Contract: Must have workflows field for WorkflowsNamespace access.
Breaks when: Field is missing or renamed.
"""
from py_code_mode.bootstrap import NamespaceBundle
field_names = [f.name for f in fields(NamespaceBundle)]
- assert "skills" in field_names
+ assert "workflows" in field_names
def test_namespace_bundle_has_artifacts_field(self) -> None:
"""NamespaceBundle has an 'artifacts' field.
@@ -95,7 +95,7 @@ def test_namespace_bundle_has_deps_field(self) -> None:
assert "deps" in field_names
def test_namespace_bundle_has_exactly_four_fields(self) -> None:
- """NamespaceBundle has exactly four fields (tools, skills, artifacts, deps).
+ """NamespaceBundle has exactly four fields (tools, workflows, artifacts, deps).
Contract: Bundle should contain the four namespace fields.
Breaks when: Extra fields are added without updating tests.
@@ -134,7 +134,7 @@ async def test_bootstrap_file_storage_returns_bundle(self, tmp_path: Path) -> No
# Create required directories
(tmp_path / "tools").mkdir()
- (tmp_path / "skills").mkdir()
+ (tmp_path / "workflows").mkdir()
(tmp_path / "artifacts").mkdir()
config = {
@@ -157,7 +157,7 @@ async def test_bootstrap_file_storage_bundle_has_tools_namespace(self, tmp_path:
from py_code_mode.tools import ToolsNamespace
(tmp_path / "tools").mkdir()
- (tmp_path / "skills").mkdir()
+ (tmp_path / "workflows").mkdir()
(tmp_path / "artifacts").mkdir()
config = {
@@ -170,17 +170,19 @@ async def test_bootstrap_file_storage_bundle_has_tools_namespace(self, tmp_path:
assert isinstance(result.tools, ToolsNamespace)
@pytest.mark.asyncio
- async def test_bootstrap_file_storage_bundle_has_skills_namespace(self, tmp_path: Path) -> None:
- """File bootstrap returns bundle with SkillsNamespace for skills.
+ async def test_bootstrap_file_storage_bundle_has_workflows_namespace(
+ self, tmp_path: Path
+ ) -> None:
+ """File bootstrap returns bundle with WorkflowsNamespace for workflows.
- Contract: Bundle.skills is SkillsNamespace instance
- Breaks when: Skills namespace is wrong type or missing.
+ Contract: Bundle.workflows is WorkflowsNamespace instance
+ Breaks when: Workflows namespace is wrong type or missing.
"""
from py_code_mode.bootstrap import bootstrap_namespaces
- from py_code_mode.execution.in_process.skills_namespace import SkillsNamespace
+ from py_code_mode.execution.in_process.workflows_namespace import WorkflowsNamespace
(tmp_path / "tools").mkdir()
- (tmp_path / "skills").mkdir()
+ (tmp_path / "workflows").mkdir()
(tmp_path / "artifacts").mkdir()
config = {
@@ -190,7 +192,7 @@ async def test_bootstrap_file_storage_bundle_has_skills_namespace(self, tmp_path
result = await bootstrap_namespaces(config)
- assert isinstance(result.skills, SkillsNamespace)
+ assert isinstance(result.workflows, WorkflowsNamespace)
@pytest.mark.asyncio
async def test_bootstrap_file_storage_bundle_has_artifact_store(self, tmp_path: Path) -> None:
@@ -203,7 +205,7 @@ async def test_bootstrap_file_storage_bundle_has_artifact_store(self, tmp_path:
from py_code_mode.bootstrap import bootstrap_namespaces
(tmp_path / "tools").mkdir()
- (tmp_path / "skills").mkdir()
+ (tmp_path / "workflows").mkdir()
(tmp_path / "artifacts").mkdir()
config = {
@@ -226,7 +228,7 @@ async def test_bootstrap_file_storage_bundle_has_deps_namespace(self, tmp_path:
from py_code_mode.deps import DepsNamespace
(tmp_path / "tools").mkdir()
- (tmp_path / "skills").mkdir()
+ (tmp_path / "workflows").mkdir()
(tmp_path / "artifacts").mkdir()
config = {
@@ -275,7 +277,7 @@ async def test_bootstrap_file_storage_creates_directories_if_missing(
) -> None:
"""File bootstrap creates subdirectories if they don't exist.
- Contract: Bootstrap should create tools/, skills/, artifacts/ directories
+ Contract: Bootstrap should create tools/, workflows/, artifacts/ directories
Breaks when: Bootstrap fails on missing directories instead of creating them.
"""
from py_code_mode.bootstrap import bootstrap_namespaces
@@ -345,16 +347,16 @@ async def test_bootstrap_redis_storage_bundle_has_tools_namespace(
assert isinstance(result.tools, ToolsNamespace)
@pytest.mark.asyncio
- async def test_bootstrap_redis_storage_bundle_has_skills_namespace(
+ async def test_bootstrap_redis_storage_bundle_has_workflows_namespace(
self, mock_redis: MockRedisClient
) -> None:
- """Redis bootstrap returns bundle with SkillsNamespace for skills.
+ """Redis bootstrap returns bundle with WorkflowsNamespace for workflows.
- Contract: Bundle.skills is SkillsNamespace instance
- Breaks when: Skills namespace is wrong type or missing.
+ Contract: Bundle.workflows is WorkflowsNamespace instance
+ Breaks when: Workflows namespace is wrong type or missing.
"""
from py_code_mode.bootstrap import bootstrap_namespaces
- from py_code_mode.execution.in_process.skills_namespace import SkillsNamespace
+ from py_code_mode.execution.in_process.workflows_namespace import WorkflowsNamespace
config = {
"type": "redis",
@@ -365,7 +367,7 @@ async def test_bootstrap_redis_storage_bundle_has_skills_namespace(
with patch("redis.Redis.from_url", return_value=mock_redis):
result = await bootstrap_namespaces(config)
- assert isinstance(result.skills, SkillsNamespace)
+ assert isinstance(result.workflows, WorkflowsNamespace)
@pytest.mark.asyncio
async def test_bootstrap_redis_storage_bundle_has_artifact_store(
@@ -693,12 +695,12 @@ async def test_config_roundtrip(self, tmp_path: Path) -> None:
# Create storage with some content
storage = FileStorage(tmp_path)
- (tmp_path / "skills").mkdir(exist_ok=True)
- skill_file = tmp_path / "skills" / "greet.py"
- skill_content = (
+ (tmp_path / "workflows").mkdir(exist_ok=True)
+ workflow_file = tmp_path / "workflows" / "greet.py"
+ workflow_content = (
'"""Greet."""\nasync def run(name: str) -> str:\n return f"Hello, {name}!"'
)
- skill_file.write_text(skill_content)
+ workflow_file.write_text(workflow_content)
# Serialize
config = storage.to_bootstrap_config()
@@ -706,10 +708,10 @@ async def test_config_roundtrip(self, tmp_path: Path) -> None:
# Reconstruct (async because get_tool_registry() is async for MCP support)
bundle = await bootstrap_namespaces(config)
- # Verify skills are accessible
- skill = bundle.skills.library.get("greet")
- assert skill is not None
- assert skill.name == "greet"
+ # Verify workflows are accessible
+ workflow = bundle.workflows.library.get("greet")
+ assert workflow is not None
+ assert workflow.name == "greet"
# =============================================================================
@@ -846,18 +848,18 @@ async def test_config_roundtrip(self, mock_redis: MockRedisClient) -> None:
Breaks when: Serialization loses critical information.
"""
from py_code_mode.bootstrap import bootstrap_namespaces
- from py_code_mode.skills import PythonSkill
from py_code_mode.storage import RedisStorage
+ from py_code_mode.workflows import PythonWorkflow
# Create storage with some content
storage = RedisStorage(redis=mock_redis, prefix="test")
- skill_store = storage.get_skill_store()
- test_skill = PythonSkill.from_source(
+ workflow_store = storage.get_workflow_store()
+ test_workflow = PythonWorkflow.from_source(
name="greet",
source='async def run(name: str) -> str:\n return f"Hello, {name}!"',
description="Greet a user",
)
- skill_store.save(test_skill)
+ workflow_store.save(test_workflow)
# Serialize
config = storage.to_bootstrap_config()
@@ -866,10 +868,10 @@ async def test_config_roundtrip(self, mock_redis: MockRedisClient) -> None:
with patch("redis.Redis.from_url", return_value=mock_redis):
bundle = await bootstrap_namespaces(config)
- # Verify skills are accessible
- skill = bundle.skills.library.get("greet")
- assert skill is not None
- assert skill.name == "greet"
+ # Verify workflows are accessible
+ workflow = bundle.workflows.library.get("greet")
+ assert workflow is not None
+ assert workflow.name == "greet"
# =============================================================================
@@ -952,8 +954,8 @@ def test_get_artifact_store_triggers_connection(self, mock_redis: MockRedisClien
# Store should be usable
assert artifact_store is not None
- def test_get_skill_library_triggers_connection(self, mock_redis: MockRedisClient) -> None:
- """get_skill_library() triggers Redis connection/usage.
+ def test_get_workflow_library_triggers_connection(self, mock_redis: MockRedisClient) -> None:
+ """get_workflow_library() triggers Redis connection/usage.
Contract: Lazy connection should happen on first actual use
Breaks when: Connection happens too early or not at all.
@@ -963,7 +965,7 @@ def test_get_skill_library_triggers_connection(self, mock_redis: MockRedisClient
storage = RedisStorage(redis=mock_redis, prefix="test")
# This should create the library (lazy initialization)
- library = storage.get_skill_library()
+ library = storage.get_workflow_library()
# Library should be usable
assert library is not None
@@ -1105,11 +1107,11 @@ async def test_file_storage_bootstrap_journey(self, tmp_path: Path) -> None:
User action: Set up storage, serialize for subprocess, reconstruct and use
Steps:
- 1. Create FileStorage with tools/skills
+ 1. Create FileStorage with tools/workflows
2. Serialize via to_bootstrap_config()
3. Reconstruct via bootstrap_namespaces()
4. Verify namespaces work correctly
- Verification: Skills can be loaded and invoked after reconstruction
+ Verification: Workflows can be loaded and invoked after reconstruction
Breaks when: Any step in the bootstrap process fails.
"""
from py_code_mode.bootstrap import bootstrap_namespaces
@@ -1117,10 +1119,10 @@ async def test_file_storage_bootstrap_journey(self, tmp_path: Path) -> None:
# Step 1: Create storage with content
storage = FileStorage(tmp_path)
- (tmp_path / "skills").mkdir(exist_ok=True)
- skill_file = tmp_path / "skills" / "double.py"
- skill_content = '"""Double a number."""\nasync def run(n: int) -> int:\n return n * 2'
- skill_file.write_text(skill_content)
+ (tmp_path / "workflows").mkdir(exist_ok=True)
+ workflow_file = tmp_path / "workflows" / "double.py"
+ workflow_content = '"""Double a number."""\nasync def run(n: int) -> int:\n return n * 2'
+ workflow_file.write_text(workflow_content)
(tmp_path / "artifacts").mkdir(exist_ok=True)
@@ -1137,10 +1139,10 @@ async def test_file_storage_bootstrap_journey(self, tmp_path: Path) -> None:
bundle = await bootstrap_namespaces(restored_config)
# Step 4: Verify namespaces work
- # Check skills
- skill = bundle.skills.library.get("double")
- assert skill is not None
- assert skill.name == "double"
+ # Check workflows
+ workflow = bundle.workflows.library.get("double")
+ assert workflow is not None
+ assert workflow.name == "double"
# Check artifacts (should be usable)
bundle.artifacts.save("test", {"value": 42}, description="test data")
@@ -1153,26 +1155,26 @@ async def test_redis_storage_bootstrap_journey(self, mock_redis: MockRedisClient
User action: Set up storage, serialize for subprocess, reconstruct and use
Steps:
- 1. Create RedisStorage with skills
+ 1. Create RedisStorage with workflows
2. Serialize via to_bootstrap_config()
3. Reconstruct via bootstrap_namespaces()
4. Verify namespaces work correctly
- Verification: Skills can be loaded after reconstruction
+ Verification: Workflows can be loaded after reconstruction
Breaks when: Any step in the bootstrap process fails.
"""
from py_code_mode.bootstrap import bootstrap_namespaces
- from py_code_mode.skills import PythonSkill
from py_code_mode.storage import RedisStorage
+ from py_code_mode.workflows import PythonWorkflow
# Step 1: Create storage with content
storage = RedisStorage(redis=mock_redis, prefix="journey")
- skill_store = storage.get_skill_store()
- skill = PythonSkill.from_source(
+ workflow_store = storage.get_workflow_store()
+ workflow = PythonWorkflow.from_source(
name="triple",
source="async def run(n: int) -> int:\n return n * 3",
description="Triple a number",
)
- skill_store.save(skill)
+ workflow_store.save(workflow)
# Step 2: Serialize
config = storage.to_bootstrap_config()
@@ -1188,6 +1190,6 @@ async def test_redis_storage_bootstrap_journey(self, mock_redis: MockRedisClient
bundle = await bootstrap_namespaces(restored_config)
# Step 4: Verify namespaces work
- loaded_skill = bundle.skills.library.get("triple")
- assert loaded_skill is not None
- assert loaded_skill.name == "triple"
+ loaded_workflow = bundle.workflows.library.get("triple")
+ assert loaded_workflow is not None
+ assert loaded_workflow.name == "triple"
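
The bootstrap round-trip these tests pin down is the mechanism behind cross-process execution: serialize a storage backend to a plain config with to_bootstrap_config(), reconstruct namespaces with bootstrap_namespaces(config), then read workflows through bundle.workflows.library. A minimal sketch of that flow with file storage; the on-disk workflow format (module docstring as description, async run() as entry point) is taken from the round-trip tests above, and the path is illustrative:

import asyncio
from pathlib import Path

from py_code_mode.bootstrap import bootstrap_namespaces
from py_code_mode.storage.backends import FileStorage


async def main() -> None:
    base = Path("/tmp/bootstrap-demo")  # illustrative
    (base / "workflows").mkdir(parents=True, exist_ok=True)

    # A workflow on disk: docstring is the description, run() the entry point.
    (base / "workflows" / "double.py").write_text(
        '"""Double a number."""\nasync def run(n: int) -> int:\n    return n * 2'
    )

    # Host side: serialize the storage for hand-off (e.g. to a subprocess)...
    config = FileStorage(base).to_bootstrap_config()

    # ...worker side: rebuild the namespaces from the plain config.
    bundle = await bootstrap_namespaces(config)
    workflow = bundle.workflows.library.get("double")
    assert workflow is not None and workflow.name == "double"


asyncio.run(main())
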
diff --git a/tests/test_chroma_vector_store.py b/tests/test_chroma_vector_store.py
index 76b6293..aa29dc2 100644
--- a/tests/test_chroma_vector_store.py
+++ b/tests/test_chroma_vector_store.py
@@ -12,7 +12,7 @@
import pytest
if TYPE_CHECKING:
- from py_code_mode.skills.embeddings import EmbeddingProvider
+ from py_code_mode.workflows.embeddings import EmbeddingProvider
class TestChromaVectorStoreImport:
@@ -21,7 +21,7 @@ class TestChromaVectorStoreImport:
def test_chroma_vector_store_importable_when_chromadb_available(self) -> None:
"""ChromaVectorStore should be importable when chromadb is installed."""
pytest.importorskip("chromadb")
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
assert ChromaVectorStore is not None
@@ -29,9 +29,9 @@ def test_chroma_vector_store_satisfies_protocol(self) -> None:
"""ChromaVectorStore should implement VectorStore protocol."""
pytest.importorskip("chromadb")
# Protocol compliance via isinstance check
- from py_code_mode.skills.embeddings import MockEmbedder
- from py_code_mode.skills.vector_store import VectorStore
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.embeddings import MockEmbedder
+ from py_code_mode.workflows.vector_store import VectorStore
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
embedder = MockEmbedder()
store = ChromaVectorStore(path=Path("/tmp/test"), embedder=embedder)
@@ -45,7 +45,7 @@ class TestChromaVectorStoreInitialization:
@pytest.fixture
def mock_embedder(self) -> EmbeddingProvider:
"""Mock embedder with consistent behavior."""
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
return MockEmbedder(dimension=384)
@@ -54,7 +54,7 @@ def test_creates_persistent_client_at_path(
) -> None:
"""Should create ChromaDB persistent client in specified directory."""
pytest.importorskip("chromadb")
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
store_path = tmp_path / "chroma_store"
ChromaVectorStore(path=store_path, embedder=mock_embedder)
@@ -68,7 +68,7 @@ def test_creates_collection_without_embedding_function(
) -> None:
"""Should create collection configured for pre-computed vectors."""
pytest.importorskip("chromadb")
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
store = ChromaVectorStore(path=tmp_path / "chroma", embedder=mock_embedder)
@@ -81,7 +81,7 @@ def test_stores_model_info_in_collection_metadata(
) -> None:
"""Should persist ModelInfo in collection metadata for validation."""
pytest.importorskip("chromadb")
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
store = ChromaVectorStore(path=tmp_path / "chroma", embedder=mock_embedder)
@@ -97,7 +97,7 @@ def test_uses_cosine_distance_metric(
) -> None:
"""Collection should be configured for cosine similarity search."""
pytest.importorskip("chromadb")
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
store = ChromaVectorStore(path=tmp_path / "chroma", embedder=mock_embedder)
@@ -112,14 +112,14 @@ class TestChromaVectorStoreModelValidation:
@pytest.fixture
def mock_embedder(self) -> EmbeddingProvider:
"""Mock embedder with consistent behavior."""
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
return MockEmbedder(dimension=384)
@pytest.fixture
def different_embedder(self) -> EmbeddingProvider:
"""Different embedder to trigger model change."""
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
# Different dimension means different model
return MockEmbedder(dimension=768)
@@ -132,15 +132,15 @@ def test_detects_model_change_different_dimension(
) -> None:
"""Should detect when model dimension changes."""
pytest.importorskip("chromadb")
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
store_path = tmp_path / "chroma"
# Create store with first embedder (384-dim)
store1 = ChromaVectorStore(path=store_path, embedder=mock_embedder)
store1.add(
- id="skill1",
- description="Test skill",
+ id="workflow1",
+ description="Test workflow",
source="async def run(): pass",
content_hash="abc123",
)
@@ -157,22 +157,22 @@ def test_preserves_vectors_when_model_unchanged(
) -> None:
"""Should keep vectors when reopening with same model."""
pytest.importorskip("chromadb")
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
store_path = tmp_path / "chroma"
# Create store and add vectors
store1 = ChromaVectorStore(path=store_path, embedder=mock_embedder)
store1.add(
- id="skill1",
- description="Test skill",
+ id="workflow1",
+ description="Test workflow",
source="async def run(): pass",
content_hash="abc123",
)
assert store1.count() == 1
# Create fresh MockEmbedder with same dimension
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
same_embedder = MockEmbedder(dimension=384)
@@ -190,16 +190,16 @@ def test_clears_all_vectors_on_model_change(
) -> None:
"""Model change should clear entire index, not partial."""
pytest.importorskip("chromadb")
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
store_path = tmp_path / "chroma"
- # Add multiple skills
+ # Add multiple workflows
store1 = ChromaVectorStore(path=store_path, embedder=mock_embedder)
for i in range(5):
store1.add(
- id=f"skill{i}",
- description=f"Skill {i}",
+ id=f"workflow{i}",
+ description=f"Workflow {i}",
source=f"async def run(): return {i}",
content_hash=f"hash{i}",
)
@@ -218,7 +218,7 @@ class TestChromaVectorStoreCRUD:
@pytest.fixture
def mock_embedder(self) -> EmbeddingProvider:
"""Mock embedder for testing."""
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
return MockEmbedder(dimension=384)
@@ -226,7 +226,7 @@ def mock_embedder(self) -> EmbeddingProvider:
def store(self, tmp_path: Path, mock_embedder: EmbeddingProvider):
"""Fresh ChromaVectorStore for each test."""
pytest.importorskip("chromadb")
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
return ChromaVectorStore(path=tmp_path / "chroma", embedder=mock_embedder)
@@ -239,31 +239,31 @@ def test_add_embeds_and_stores_vectors(self, store) -> None:
content_hash="abc123def456",
)
- # Verify skill was indexed
+ # Verify workflow was indexed
assert store.count() == 1
def test_add_stores_content_hash(self, store) -> None:
"""add() should persist content hash for change detection."""
store.add(
- id="skill1",
- description="Test skill",
+ id="workflow1",
+ description="Test workflow",
source="async def run(): pass",
content_hash="contenthash123",
)
# Should be able to retrieve stored hash
- stored_hash = store.get_content_hash("skill1")
+ stored_hash = store.get_content_hash("workflow1")
assert stored_hash == "contenthash123"
def test_get_content_hash_returns_none_for_nonexistent(self, store) -> None:
- """get_content_hash() should return None for skills not in index."""
- hash_value = store.get_content_hash("nonexistent_skill")
+ """get_content_hash() should return None for workflows not in index."""
+ hash_value = store.get_content_hash("nonexistent_workflow")
assert hash_value is None
- def test_add_overwrites_existing_skill(self, store) -> None:
- """Adding same skill ID should update vectors, not duplicate."""
+ def test_add_overwrites_existing_workflow(self, store) -> None:
+ """Adding same workflow ID should update vectors, not duplicate."""
store.add(
- id="skill1",
+ id="workflow1",
description="Original description",
source="async def run(): return 1",
content_hash="hash1",
@@ -271,57 +271,57 @@ def test_add_overwrites_existing_skill(self, store) -> None:
assert store.count() == 1
store.add(
- id="skill1",
+ id="workflow1",
description="Updated description",
source="async def run(): return 2",
content_hash="hash2",
)
- # Should still be 1 skill (updated, not duplicated)
+ # Should still be 1 workflow (updated, not duplicated)
assert store.count() == 1
# Hash should be updated
- assert store.get_content_hash("skill1") == "hash2"
+ assert store.get_content_hash("workflow1") == "hash2"
- def test_remove_deletes_skill_vectors(self, store) -> None:
+ def test_remove_deletes_workflow_vectors(self, store) -> None:
"""remove() should delete both description and code vectors."""
store.add(
- id="skill1",
+ id="workflow1",
description="Test",
source="async def run(): pass",
content_hash="hash1",
)
assert store.count() == 1
- result = store.remove("skill1")
+ result = store.remove("workflow1")
assert result is True
assert store.count() == 0
- assert store.get_content_hash("skill1") is None
+ assert store.get_content_hash("workflow1") is None
def test_remove_returns_false_for_nonexistent(self, store) -> None:
- """remove() should return False if skill not in index."""
+ """remove() should return False if workflow not in index."""
result = store.remove("nonexistent")
assert result is False
- def test_count_reflects_indexed_skills(self, store) -> None:
- """count() should return number of unique skills indexed."""
+ def test_count_reflects_indexed_workflows(self, store) -> None:
+ """count() should return number of unique workflows indexed."""
assert store.count() == 0
- store.add("skill1", "desc1", "code1", "hash1")
+ store.add("workflow1", "desc1", "code1", "hash1")
assert store.count() == 1
- store.add("skill2", "desc2", "code2", "hash2")
+ store.add("workflow2", "desc2", "code2", "hash2")
assert store.count() == 2
- store.remove("skill1")
+ store.remove("workflow1")
assert store.count() == 1
def test_clear_removes_all_vectors(self, store) -> None:
- """clear() should remove all indexed skills."""
- # Add multiple skills
+ """clear() should remove all indexed workflows."""
+ # Add multiple workflows
for i in range(5):
- store.add(f"skill{i}", f"desc{i}", f"code{i}", f"hash{i}")
+ store.add(f"workflow{i}", f"desc{i}", f"code{i}", f"hash{i}")
assert store.count() == 5
store.clear()
@@ -335,19 +335,19 @@ class TestChromaVectorStoreSimilaritySearch:
@pytest.fixture
def mock_embedder(self) -> EmbeddingProvider:
"""Mock embedder for deterministic testing."""
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
return MockEmbedder(dimension=384)
@pytest.fixture
def store(self, tmp_path: Path, mock_embedder: EmbeddingProvider):
- """Fresh store with sample skills."""
+ """Fresh store with sample workflows."""
pytest.importorskip("chromadb")
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
store = ChromaVectorStore(path=tmp_path / "chroma", embedder=mock_embedder)
- # Add diverse skills for search testing
+ # Add diverse workflows for search testing
store.add(
id="port_scanner",
description="Scan network ports using nmap",
@@ -380,7 +380,7 @@ def test_search_returns_search_results(self, store) -> None:
code_weight=0.3,
)
- from py_code_mode.skills.vector_store import SearchResult
+ from py_code_mode.workflows.vector_store import SearchResult
assert isinstance(results, list)
assert all(isinstance(r, SearchResult) for r in results)
@@ -465,8 +465,8 @@ def test_search_combines_description_and_code_scores(self, store) -> None:
def test_search_returns_empty_for_no_matches(self, tmp_path: Path) -> None:
"""search() on empty index should return empty list."""
pytest.importorskip("chromadb")
- from py_code_mode.skills.embeddings import MockEmbedder
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.embeddings import MockEmbedder
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
empty_store = ChromaVectorStore(path=tmp_path / "empty_chroma", embedder=MockEmbedder())
@@ -479,8 +479,8 @@ def test_search_returns_empty_for_no_matches(self, tmp_path: Path) -> None:
assert results == []
- def test_search_result_contains_skill_id(self, store) -> None:
- """SearchResult.id should contain the skill identifier."""
+ def test_search_result_contains_workflow_id(self, store) -> None:
+ """SearchResult.id should contain the workflow identifier."""
results = store.search(
query="network",
limit=10,
@@ -488,7 +488,7 @@ def test_search_result_contains_skill_id(self, store) -> None:
code_weight=0.3,
)
- # IDs should be skill names we added
+ # IDs should be workflow names we added
result_ids = {r.id for r in results}
assert result_ids.issubset({"port_scanner", "web_scraper", "file_reader"})
@@ -499,7 +499,7 @@ class TestChromaVectorStoreContentHashInvalidation:
@pytest.fixture
def embedder_with_call_tracking(self):
"""Embedder that tracks how many times embed() is called."""
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
embedder = MockEmbedder(dimension=384)
@@ -517,16 +517,16 @@ def tracked_embed(texts: list[str]):
def test_same_content_hash_skips_re_embedding(
self, tmp_path: Path, embedder_with_call_tracking
) -> None:
- """Adding skill with same hash should skip embedding (idempotent)."""
+ """Adding workflow with same hash should skip embedding (idempotent)."""
pytest.importorskip("chromadb")
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
store = ChromaVectorStore(path=tmp_path / "chroma", embedder=embedder_with_call_tracking)
# First add: should embed
store.add(
- id="skill1",
- description="Test skill",
+ id="workflow1",
+ description="Test workflow",
source="async def run(): pass",
content_hash="stable_hash",
)
@@ -534,8 +534,8 @@ def test_same_content_hash_skips_re_embedding(
# Add again with same hash: should NOT re-embed
store.add(
- id="skill1",
- description="Test skill",
+ id="workflow1",
+ description="Test workflow",
source="async def run(): pass",
content_hash="stable_hash",
)
@@ -546,15 +546,15 @@ def test_same_content_hash_skips_re_embedding(
def test_different_content_hash_triggers_re_embedding(
self, tmp_path: Path, embedder_with_call_tracking
) -> None:
- """Adding skill with different hash should re-embed."""
+ """Adding workflow with different hash should re-embed."""
pytest.importorskip("chromadb")
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
store = ChromaVectorStore(path=tmp_path / "chroma", embedder=embedder_with_call_tracking)
# First add
store.add(
- id="skill1",
+ id="workflow1",
description="Original description",
source="async def run(): return 1",
content_hash="hash_v1",
@@ -563,7 +563,7 @@ def test_different_content_hash_triggers_re_embedding(
# Update with different hash: should re-embed
store.add(
- id="skill1",
+ id="workflow1",
description="Updated description",
source="async def run(): return 2",
content_hash="hash_v2",
@@ -579,7 +579,7 @@ class TestChromaVectorStorePersistence:
@pytest.fixture
def mock_embedder(self) -> EmbeddingProvider:
"""Mock embedder."""
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
return MockEmbedder(dimension=384)
@@ -588,33 +588,33 @@ def test_vectors_persist_after_close_and_reopen(
) -> None:
"""Vectors should persist to disk and reload on next init."""
pytest.importorskip("chromadb")
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
store_path = tmp_path / "persistent_chroma"
# Create store, add data
store1 = ChromaVectorStore(path=store_path, embedder=mock_embedder)
- store1.add("skill1", "Network scanner", "nmap code", "hash1")
- store1.add("skill2", "File reader", "file code", "hash2")
+ store1.add("workflow1", "Network scanner", "nmap code", "hash1")
+ store1.add("workflow2", "File reader", "file code", "hash2")
assert store1.count() == 2
# Close and reopen (create new instance)
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
fresh_embedder = MockEmbedder(dimension=384)
store2 = ChromaVectorStore(path=store_path, embedder=fresh_embedder)
# Vectors should be reloaded
assert store2.count() == 2
- assert store2.get_content_hash("skill1") == "hash1"
- assert store2.get_content_hash("skill2") == "hash2"
+ assert store2.get_content_hash("workflow1") == "hash1"
+ assert store2.get_content_hash("workflow2") == "hash2"
def test_model_metadata_persists_across_sessions(
self, tmp_path: Path, mock_embedder: EmbeddingProvider
) -> None:
"""Model info should persist and be validated on reopen."""
pytest.importorskip("chromadb")
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
store_path = tmp_path / "persistent_chroma"
@@ -623,7 +623,7 @@ def test_model_metadata_persists_across_sessions(
model_info1 = store1.get_model_info()
# Second session (same model)
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
same_embedder = MockEmbedder(dimension=384)
store2 = ChromaVectorStore(path=store_path, embedder=same_embedder)
@@ -637,11 +637,11 @@ def test_search_works_on_persisted_vectors(
) -> None:
"""Search should work on vectors loaded from disk."""
pytest.importorskip("chromadb")
- from py_code_mode.skills.vector_stores.chroma import ChromaVectorStore
+ from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore
store_path = tmp_path / "persistent_chroma"
- # First session: add skills
+ # First session: add workflows
store1 = ChromaVectorStore(path=store_path, embedder=mock_embedder)
store1.add(
"port_scanner",
@@ -651,7 +651,7 @@ def test_search_works_on_persisted_vectors(
)
# Second session: search should work
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
fresh_embedder = MockEmbedder(dimension=384)
store2 = ChromaVectorStore(path=store_path, embedder=fresh_embedder)
@@ -663,6 +663,6 @@ def test_search_works_on_persisted_vectors(
code_weight=0.3,
)
- # Should find the persisted skill
+ # Should find the persisted workflow
assert len(results) > 0
assert any(r.id == "port_scanner" for r in results)
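
Taken together, the ChromaVectorStore tests specify a compact store contract: add(id, description, source, content_hash) indexes a workflow and is idempotent while the hash is unchanged, search(query, limit, description_weight, code_weight) blends description and code similarity into SearchResult objects, and count()/remove()/clear()/get_content_hash() manage the index. A sketch of that contract using the MockEmbedder the tests use; the description_weight=0.7 value is an assumption, since the hunks above only show code_weight=0.3:

from pathlib import Path

from py_code_mode.workflows.embeddings import MockEmbedder
from py_code_mode.workflows.vector_stores.chroma import ChromaVectorStore

store = ChromaVectorStore(
    path=Path("/tmp/chroma-demo"),  # illustrative; vectors persist across reopens
    embedder=MockEmbedder(dimension=384),
)

# add() embeds description and source; the content hash gates re-embedding,
# so repeating the call with an unchanged hash is a no-op.
store.add(
    id="port_scanner",
    description="Scan network ports using nmap",
    source="async def run(host: str) -> list: ...",
    content_hash="hash-v1",
)
store.add(
    id="port_scanner",
    description="Scan network ports using nmap",
    source="async def run(host: str) -> list: ...",
    content_hash="hash-v1",
)
assert store.count() == 1
assert store.get_content_hash("port_scanner") == "hash-v1"

# search() blends both similarity channels into a single ranking.
results = store.search(
    query="network port scanning",
    limit=5,
    description_weight=0.7,  # assumed value; only code_weight=0.3 is shown above
    code_weight=0.3,
)
assert any(r.id == "port_scanner" for r in results)
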
diff --git a/tests/test_deps_config_gaps.py b/tests/test_deps_config_gaps.py
index 19c12d8..7b3f8c8 100644
--- a/tests/test_deps_config_gaps.py
+++ b/tests/test_deps_config_gaps.py
@@ -7,7 +7,7 @@
- allow_runtime_deps=False blocks deps.add()/remove() at runtime
Architecture (post executor-ownership refactor):
-- Storage owns: skills, artifacts
+- Storage owns: workflows, artifacts
- Executor owns: tools, deps (via config.tools_path, config.deps)
Gap 1: Initial Deps + sync_deps_on_start
@@ -458,7 +458,7 @@ def mcp_storage_dir(self, tmp_path: Path) -> Path:
storage = tmp_path / "storage"
storage.mkdir()
(storage / "tools").mkdir()
- (storage / "skills").mkdir()
+ (storage / "workflows").mkdir()
(storage / "artifacts").mkdir()
(storage / "deps").mkdir()
return storage
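
The ownership split stated at the top of this file is the load-bearing design rule: FileStorage/RedisStorage carry workflows and artifacts, while the executor brings its own tools and deps configuration. A minimal sketch of wiring under that split, using only the in-process executor API exercised in test_executor_protocol.py below; the path is illustrative:

import asyncio
from pathlib import Path

from py_code_mode.execution.in_process import InProcessExecutor
from py_code_mode.storage.backends import FileStorage


async def main() -> None:
    # Storage half of the split: workflows and artifacts live here.
    storage = FileStorage(Path("/tmp/ownership-demo"))  # illustrative

    # Executor half: tools and deps are its concern, supplied via executor
    # config; the in-process executor needs none for this demo.
    executor = InProcessExecutor()
    await executor.start(storage=storage)

    # Executed code sees the storage-owned half as a namespace.
    result = await executor.run("'workflows' in dir()")
    assert result.value is True
    await executor.close()


asyncio.run(main())
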
diff --git a/tests/test_error_handling.py b/tests/test_error_handling.py
index cd966d0..c4f1f4c 100644
--- a/tests/test_error_handling.py
+++ b/tests/test_error_handling.py
@@ -80,33 +80,33 @@ def tools_dir_with_corruption(tmp_path: Path) -> Path:
@pytest.fixture
-def skills_dir_with_corruption(tmp_path: Path) -> Path:
- """Create a skills directory with valid and corrupted Python files."""
- skills_dir = tmp_path / "skills"
- skills_dir.mkdir()
+def workflows_dir_with_corruption(tmp_path: Path) -> Path:
+ """Create a workflows directory with valid and corrupted Python files."""
+ workflows_dir = tmp_path / "workflows"
+ workflows_dir.mkdir()
- # Valid skill
- (skills_dir / "valid_skill.py").write_text('''"""A valid skill."""
+ # Valid workflow
+ (workflows_dir / "valid_workflow.py").write_text('''"""A valid workflow."""
async def run(x: int) -> int:
return x * 2
''')
# Corrupted Python (syntax error)
- (skills_dir / "syntax_error.py").write_text('''"""Skill with syntax error."""
+ (workflows_dir / "syntax_error.py").write_text('''"""Workflow with syntax error."""
async def run(x: int) -> int
return x * 2 # Missing colon above
''')
# Valid but missing run function
- (skills_dir / "no_run.py").write_text('''"""Skill without run function."""
+ (workflows_dir / "no_run.py").write_text('''"""Workflow without run function."""
def helper(x: int) -> int:
return x * 2
''')
- return skills_dir
+ return workflows_dir
@pytest.fixture
@@ -116,12 +116,12 @@ def mock_redis_with_corruption() -> MagicMock:
class CorruptRedis:
def __init__(self):
self._data = {
- "skills:__skills__": {
+ "workflows:__workflows__": {
"valid": json.dumps(
{
"name": "valid",
"source": "async def run(): pass",
- "description": "Valid skill",
+ "description": "Valid workflow",
}
).encode(),
"corrupt_json": b"not valid json{{{",
@@ -430,29 +430,29 @@ async def test_get_raises_for_corruption(
# =============================================================================
-class TestFileSkillStoreErrorHandling:
- """Tests for FileSkillStore.load() error handling.
+class TestFileWorkflowStoreErrorHandling:
+ """Tests for FileWorkflowStore.load() error handling.
Current behavior (HIGH #10): except Exception -> return None (with warning)
Fixed behavior: Return None ONLY for FileNotFoundError, raise StorageReadError for others.
"""
def test_load_returns_none_for_missing_file(self, tmp_path: Path):
- """load() should return None when skill file doesn't exist."""
- from py_code_mode.skills import FileSkillStore
+ """load() should return None when workflow file doesn't exist."""
+ from py_code_mode.workflows import FileWorkflowStore
- store = FileSkillStore(tmp_path)
+ store = FileWorkflowStore(tmp_path)
result = store.load("nonexistent")
assert result is None # Expected behavior
def test_load_raises_for_syntax_error(
- self, skills_dir_with_corruption: Path, log_capture: pytest.LogCaptureFixture
+ self, workflows_dir_with_corruption: Path, log_capture: pytest.LogCaptureFixture
):
"""load() should raise StorageReadError for Python syntax errors."""
- from py_code_mode.skills import FileSkillStore
+ from py_code_mode.workflows import FileWorkflowStore
- store = FileSkillStore(skills_dir_with_corruption)
+ store = FileWorkflowStore(workflows_dir_with_corruption)
from py_code_mode import errors
@@ -471,25 +471,25 @@ def test_load_raises_for_syntax_error(
store.load("syntax_error")
-class TestFileSkillStoreListAllLogging:
- """Tests for FileSkillStore.list_all() logging behavior.
+class TestFileWorkflowStoreListAllLogging:
+ """Tests for FileWorkflowStore.list_all() logging behavior.
- Current behavior (MEDIUM #11): Silently skips failed skills (with warning)
+ Current behavior (MEDIUM #11): Skips failed workflows, logging a warning for each
Fixed behavior: Same, but ensure logging is consistent.
"""
def test_list_all_logs_warning_for_corrupt_files(
- self, skills_dir_with_corruption: Path, log_capture: pytest.LogCaptureFixture
+ self, workflows_dir_with_corruption: Path, log_capture: pytest.LogCaptureFixture
):
"""list_all() should log warning for each corrupt file."""
- from py_code_mode.skills import FileSkillStore
+ from py_code_mode.workflows import FileWorkflowStore
- store = FileSkillStore(skills_dir_with_corruption)
- skills = store.list_all()
+ store = FileWorkflowStore(workflows_dir_with_corruption)
+ workflows = store.list_all()
- # Valid skill should be loaded
- skill_names = {s.name for s in skills}
- assert "valid_skill" in skill_names
+ # Valid workflow should be loaded
+ workflow_names = {s.name for s in workflows}
+ assert "valid_workflow" in workflow_names
# Warning should be logged for syntax_error.py
assert any("syntax_error" in record.message for record in log_capture.records), (
@@ -498,8 +498,8 @@ def test_list_all_logs_warning_for_corrupt_files(
)
-class TestRedisSkillStoreErrorHandling:
- """Tests for RedisSkillStore error handling.
+class TestRedisWorkflowStoreErrorHandling:
+ """Tests for RedisWorkflowStore error handling.
Current behavior (HIGH/MEDIUM #12-13): except Exception -> return None
Fixed behavior: Return None ONLY for missing key, raise for corruption.
@@ -507,9 +507,9 @@ class TestRedisSkillStoreErrorHandling:
def test_load_returns_none_for_missing_key(self, mock_redis_with_corruption):
"""load() should return None when key doesn't exist."""
- from py_code_mode.skills import RedisSkillStore
+ from py_code_mode.workflows import RedisWorkflowStore
- store = RedisSkillStore(mock_redis_with_corruption, prefix="skills")
+ store = RedisWorkflowStore(mock_redis_with_corruption, prefix="workflows")
result = store.load("totally_nonexistent")
assert result is None
@@ -518,9 +518,9 @@ def test_load_raises_for_invalid_json(
self, mock_redis_with_corruption, log_capture: pytest.LogCaptureFixture
):
"""load() should raise StorageReadError for invalid JSON."""
- from py_code_mode.skills import RedisSkillStore
+ from py_code_mode.workflows import RedisWorkflowStore
- store = RedisSkillStore(mock_redis_with_corruption, prefix="skills")
+ store = RedisWorkflowStore(mock_redis_with_corruption, prefix="workflows")
from py_code_mode import errors
@@ -541,10 +541,10 @@ def test_load_raises_for_invalid_json(
def test_load_raises_for_missing_fields(
self, mock_redis_with_corruption, log_capture: pytest.LogCaptureFixture
):
- """load() should raise StorageReadError for incomplete skill data."""
- from py_code_mode.skills import RedisSkillStore
+ """load() should raise StorageReadError for incomplete workflow data."""
+ from py_code_mode.workflows import RedisWorkflowStore
- store = RedisSkillStore(mock_redis_with_corruption, prefix="skills")
+ store = RedisWorkflowStore(mock_redis_with_corruption, prefix="workflows")
from py_code_mode import errors
@@ -563,14 +563,14 @@ def test_list_all_logs_warning_for_corrupt_entries(
self, mock_redis_with_corruption, log_capture: pytest.LogCaptureFixture
):
"""list_all() should log warning for corrupt entries."""
- from py_code_mode.skills import RedisSkillStore
+ from py_code_mode.workflows import RedisWorkflowStore
- store = RedisSkillStore(mock_redis_with_corruption, prefix="skills")
- skills = store.list_all()
+ store = RedisWorkflowStore(mock_redis_with_corruption, prefix="workflows")
+ workflows = store.list_all()
- # Valid skill should be loaded
- skill_names = {s.name for s in skills}
- assert "valid" in skill_names
+ # Valid workflow should be loaded
+ workflow_names = {s.name for s in workflows}
+ assert "valid" in workflow_names
# Warnings should be logged for corrupt entries
log_messages = " ".join(r.message for r in log_capture.records)
@@ -589,41 +589,41 @@ def test_list_all_logs_warning_for_corrupt_entries(
# to use CLIAdapter(tools_path=...) interface before this test can work.
-class TestServerBuildSkillLibraryUsesLogging:
+class TestServerBuildWorkflowLibraryUsesLogging:
"""Tests that container/server.py uses logging instead of print.
Current behavior (MEDIUM #21): Uses print() for warnings.
Fixed behavior: Uses logging module.
"""
- def test_build_skill_library_uses_logging_not_print(
+ def test_build_workflow_library_uses_logging_not_print(
self,
tmp_path: Path,
capsys: pytest.CaptureFixture,
log_capture: pytest.LogCaptureFixture,
):
- """build_skill_library should use logging, not print."""
+ """build_workflow_library should use logging, not print."""
# This test requires mocking to trigger the OSError path
# Skip if FastAPI not available
try:
from py_code_mode.execution.container.config import SessionConfig
- from py_code_mode.execution.container.server import build_skill_library
+ from py_code_mode.execution.container.server import build_workflow_library
except ImportError:
pytest.skip("FastAPI not installed")
# Create config with a path that will fail
config = SessionConfig(
- skills_path=Path("/root/definitely_no_permission"),
+ workflows_path=Path("/root/definitely_no_permission"),
)
# Mock mkdir to raise OSError
with patch.object(Path, "mkdir", side_effect=OSError("Permission denied")):
- _result = build_skill_library(config)
+ _result = build_workflow_library(config)
captured = capsys.readouterr()
if "Warning:" in captured.out or "Cannot create" in captured.out:
pytest.fail(
- f"build_skill_library uses print() for warnings:\n"
+ f"build_workflow_library uses print() for warnings:\n"
f"stdout: {captured.out}\n"
"Fix: Replace print() in server.py:162 with logging.warning()"
)
@@ -738,7 +738,7 @@ async def test_mcp_import_error_logs_warning(
)
-# TestEmbedderFallbackLogging removed - SkillStoreWrapper was removed
+# TestEmbedderFallbackLogging removed - WorkflowStoreWrapper was removed
# in Track B: Wrapper Cleanup. Test the fallback logging directly on FileStorage/RedisStorage.
# TestStorageWrapperErrorPropagation removed - ArtifactStoreWrapper was removed
@@ -781,17 +781,17 @@ async def test_developer_discovers_missing_tools_through_logs(
)
@pytest.mark.asyncio
- async def test_developer_discovers_skill_parse_error_through_logs(
- self, skills_dir_with_corruption: Path, log_capture: pytest.LogCaptureFixture
+ async def test_developer_discovers_workflow_parse_error_through_logs(
+ self, workflows_dir_with_corruption: Path, log_capture: pytest.LogCaptureFixture
):
"""Developer should see specific parse errors in logs."""
- from py_code_mode.skills import FileSkillStore
+ from py_code_mode.workflows import FileWorkflowStore
- store = FileSkillStore(skills_dir_with_corruption)
- skills = store.list_all()
+ store = FileWorkflowStore(workflows_dir_with_corruption)
+ workflows = store.list_all()
- # Developer sees some skills loaded
- assert len(skills) >= 1
+ # Developer sees some workflows loaded
+ assert len(workflows) >= 1
# Developer should see which files failed and why
log_messages = " ".join(r.message for r in log_capture.records)
diff --git a/tests/test_errors.py b/tests/test_errors.py
index a06c052..e18272f 100644
--- a/tests/test_errors.py
+++ b/tests/test_errors.py
@@ -7,12 +7,12 @@
ArtifactWriteError,
CodeModeError,
DependencyError,
- SkillExecutionError,
- SkillNotFoundError,
- SkillValidationError,
ToolCallError,
ToolNotFoundError,
ToolTimeoutError,
+ WorkflowExecutionError,
+ WorkflowNotFoundError,
+ WorkflowValidationError,
)
@@ -36,10 +36,10 @@ def test_artifact_errors_inheritance(self) -> None:
assert isinstance(ArtifactNotFoundError("test"), CodeModeError)
assert isinstance(ArtifactWriteError("test", "reason"), CodeModeError)
- def test_skill_errors_inheritance(self) -> None:
- assert isinstance(SkillNotFoundError("test"), CodeModeError)
- assert isinstance(SkillValidationError("test", "reason"), CodeModeError)
- assert isinstance(SkillExecutionError("test", ValueError("x")), CodeModeError)
+ def test_workflow_errors_inheritance(self) -> None:
+ assert isinstance(WorkflowNotFoundError("test"), CodeModeError)
+ assert isinstance(WorkflowValidationError("test", "reason"), CodeModeError)
+ assert isinstance(WorkflowExecutionError("test", ValueError("x")), CodeModeError)
def test_dependency_error_inheritance(self) -> None:
assert isinstance(DependencyError("numpy"), CodeModeError)
@@ -111,26 +111,26 @@ def test_write_error(self) -> None:
assert "disk full" in str(err)
-class TestSkillErrors:
- """Tests for skill-related errors."""
+class TestWorkflowErrors:
+ """Tests for workflow-related errors."""
def test_not_found(self) -> None:
- err = SkillNotFoundError("web_enum")
- assert err.skill_name == "web_enum"
+ err = WorkflowNotFoundError("web_enum")
+ assert err.workflow_name == "web_enum"
assert "web_enum" in str(err)
def test_validation_error(self) -> None:
- err = SkillValidationError("bad_skill", "missing required field 'code'")
- assert err.skill_name == "bad_skill"
+ err = WorkflowValidationError("bad_workflow", "missing required field 'code'")
+ assert err.workflow_name == "bad_workflow"
assert err.reason == "missing required field 'code'"
- assert "bad_skill" in str(err)
+ assert "bad_workflow" in str(err)
def test_execution_error(self) -> None:
cause = RuntimeError("division by zero")
- err = SkillExecutionError("buggy_skill", cause)
- assert err.skill_name == "buggy_skill"
+ err = WorkflowExecutionError("buggy_workflow", cause)
+ assert err.workflow_name == "buggy_workflow"
assert err.cause is cause
- assert "buggy_skill" in str(err)
+ assert "buggy_workflow" in str(err)
class TestDependencyError:
@@ -160,9 +160,9 @@ def test_catch_all(self) -> None:
ToolTimeoutError("x", 1.0),
ArtifactNotFoundError("x"),
ArtifactWriteError("x", "y"),
- SkillNotFoundError("x"),
- SkillValidationError("x", "y"),
- SkillExecutionError("x", ValueError()),
+ WorkflowNotFoundError("x"),
+ WorkflowValidationError("x", "y"),
+ WorkflowExecutionError("x", ValueError()),
DependencyError("x"),
]
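
The error-hierarchy tests encode two guarantees: every exception type is a CodeModeError, so one except clause can catch them all, and the workflow errors carry structured fields (workflow_name, plus reason or cause). A sketch of handling that leans on both guarantees; the py_code_mode.errors module path is an assumption inferred from the `from py_code_mode import errors` usage elsewhere in this diff:

from py_code_mode.errors import (  # module path assumed
    CodeModeError,
    WorkflowExecutionError,
    WorkflowNotFoundError,
)


def describe(err: CodeModeError) -> str:
    # Narrow handling first: workflow errors expose structured fields.
    if isinstance(err, WorkflowExecutionError):
        return f"{err.workflow_name} failed: {err.cause!r}"
    if isinstance(err, WorkflowNotFoundError):
        return f"no workflow named {err.workflow_name}"
    # CodeModeError is the catch-all base the inheritance tests pin down.
    return str(err)


print(describe(WorkflowNotFoundError("web_enum")))
print(describe(WorkflowExecutionError("buggy_workflow", RuntimeError("division by zero"))))
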
diff --git a/tests/test_executor_protocol.py b/tests/test_executor_protocol.py
index e904d6c..7d1fde8 100644
--- a/tests/test_executor_protocol.py
+++ b/tests/test_executor_protocol.py
@@ -146,7 +146,7 @@ async def test_uses_executor_config_for_tools(self, tmp_path: Path) -> None:
message: {}
""")
- # Create storage (for skills/artifacts only)
+ # Create storage (for workflows/artifacts only)
storage = FileStorage(tmp_path)
# Configure executor with tools_path
@@ -165,8 +165,8 @@ async def test_uses_executor_config_for_tools(self, tmp_path: Path) -> None:
await executor.close()
@pytest.mark.asyncio
- async def test_uses_storage_skills_via_get_skill_library(self, tmp_path: Path) -> None:
- """InProcessExecutor uses storage.get_skill_library() for skills."""
+ async def test_uses_storage_workflows_via_get_workflow_library(self, tmp_path: Path) -> None:
+ """InProcessExecutor uses storage.get_workflow_library() for workflows."""
from py_code_mode.execution.in_process import InProcessExecutor
from py_code_mode.storage.backends import FileStorage
@@ -175,8 +175,8 @@ async def test_uses_storage_skills_via_get_skill_library(self, tmp_path: Path) -
executor = InProcessExecutor()
await executor.start(storage=storage)
- # Should have skills namespace
- result = await executor.run("'skills' in dir()")
+ # Should have workflows namespace
+ result = await executor.run("'workflows' in dir()")
assert result.value is True
await executor.close()
@@ -226,7 +226,7 @@ async def test_rejects_file_storage_access(self, tmp_path: Path) -> None:
from py_code_mode.execution.protocol import FileStorageAccess
storage_access = FileStorageAccess(
- skills_path=tmp_path / "skills",
+ workflows_path=tmp_path / "workflows",
artifacts_path=tmp_path / "artifacts",
)
@@ -247,7 +247,7 @@ async def test_rejects_redis_storage_access(self) -> None:
storage_access = RedisStorageAccess(
redis_url="redis://localhost:6379",
- skills_prefix="test:skills",
+ workflows_prefix="test:workflows",
artifacts_prefix="test:artifacts",
)
@@ -344,7 +344,7 @@ async def test_calls_get_serializable_access_for_redis_storage(self) -> None:
# NOTE: tools_prefix and deps_prefix removed - tools/deps now owned by executors
expected_access = RedisStorageAccess(
redis_url="redis://localhost:6379/0",
- skills_prefix="test:skills",
+ workflows_prefix="test:workflows",
artifacts_prefix="test:artifacts",
)
storage.get_serializable_access = MagicMock(return_value=expected_access)
@@ -388,7 +388,7 @@ async def test_rejects_file_storage_access(self, tmp_path: Path) -> None:
from py_code_mode.execution.protocol import FileStorageAccess
storage_access = FileStorageAccess(
- skills_path=tmp_path / "skills",
+ workflows_path=tmp_path / "workflows",
artifacts_path=tmp_path / "artifacts",
)
@@ -495,7 +495,7 @@ async def test_rejects_file_storage_access(self, tmp_path: Path) -> None:
from py_code_mode.execution.subprocess.config import SubprocessConfig
storage_access = FileStorageAccess(
- skills_path=tmp_path / "skills",
+ workflows_path=tmp_path / "workflows",
artifacts_path=tmp_path / "artifacts",
)
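
The serializable access objects above are the wire format between host and executor: FileStorageAccess carries the two paths the storage owns, RedisStorageAccess a URL plus per-store key prefixes, and (as the NOTE in the test says) tools and deps prefixes are gone because executors own those. A sketch constructing both, with field names copied from the tests; values are illustrative, and RedisStorageAccess is assumed to live in the same protocol module as FileStorageAccess:

from pathlib import Path

from py_code_mode.execution.protocol import FileStorageAccess, RedisStorageAccess

base = Path("/tmp/access-demo")  # illustrative

# File-backed access: just the two stores the storage backend owns.
file_access = FileStorageAccess(
    workflows_path=base / "workflows",
    artifacts_path=base / "artifacts",
)

# Redis-backed access: URL plus per-store key prefixes. No tools or deps
# prefixes: those belong to the executor, not the storage.
redis_access = RedisStorageAccess(
    redis_url="redis://localhost:6379/0",
    workflows_prefix="test:workflows",
    artifacts_prefix="test:artifacts",
)
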
diff --git a/tests/test_feature_matrix_comprehensive.py b/tests/test_feature_matrix_comprehensive.py
index 4abfc0a..876d9b2 100644
--- a/tests/test_feature_matrix_comprehensive.py
+++ b/tests/test_feature_matrix_comprehensive.py
@@ -6,18 +6,18 @@
The "from scratch" scenario is the most common real-world case:
1. A developer creates a new project
2. Points py-code-mode at an empty directory
- 3. Expects skills.create(), artifacts.save() to work
- 4. Expects created skills to persist across sessions
+ 3. Expects workflows.create(), artifacts.save() to work
+ 4. Expects created workflows to persist across sessions
Test Matrix:
- Storage: FileStorage, RedisStorage (mock)
- Executor: InProcessExecutor, ContainerExecutor (if Docker)
- Directory conditions: empty, partial, populated
- - Features: 12 (tools: 4, skills: 4, artifacts: 4)
+ - Features: 12 (tools: 4, workflows: 4, artifacts: 4)
Critical tests:
- "From scratch" scenario - empty dir, all features work
- - Persistence across sessions - skills/artifacts survive close/reopen
+ - Persistence across sessions - workflows/artifacts survive close/reopen
- Directory auto-creation - save() creates missing dirs
"""
@@ -46,20 +46,20 @@ def empty_base_dir(tmp_path: Path) -> Path:
"""Base directory exists but NO subdirs created.
This is the critical "from scratch" scenario that was previously masked
- by fixtures that pre-create tools/, skills/, artifacts/ directories.
+ by fixtures that pre-create tools/, workflows/, artifacts/ directories.
"""
# tmp_path already exists (pytest creates it)
# Explicitly verify no subdirs exist
assert not (tmp_path / "tools").exists()
- assert not (tmp_path / "skills").exists()
+ assert not (tmp_path / "workflows").exists()
assert not (tmp_path / "artifacts").exists()
return tmp_path
@pytest.fixture
-def partial_dir_skills_only(tmp_path: Path) -> Path:
- """Only skills/ exists - tests tools and artifacts without their dirs."""
- (tmp_path / "skills").mkdir()
+def partial_dir_workflows_only(tmp_path: Path) -> Path:
+ """Only workflows/ exists - tests tools and artifacts without their dirs."""
+ (tmp_path / "workflows").mkdir()
return tmp_path
@@ -80,11 +80,11 @@ def populated_dir(tmp_path: Path) -> tuple[Path, Path]:
Tuple of (base_path, tools_path) for storage and executor config.
"""
tools_dir = tmp_path / "tools"
- skills_dir = tmp_path / "skills"
+ workflows_dir = tmp_path / "workflows"
artifacts_dir = tmp_path / "artifacts"
tools_dir.mkdir()
- skills_dir.mkdir()
+ workflows_dir.mkdir()
artifacts_dir.mkdir()
# Sample tool
@@ -96,8 +96,8 @@ def populated_dir(tmp_path: Path) -> tuple[Path, Path]:
description: Echo text back
""")
- # Sample skill
- (skills_dir / "double.py").write_text('''"""Double a number."""
+ # Sample workflow
+ (workflows_dir / "double.py").write_text('''"""Double a number."""
async def run(n: int) -> int:
return n * 2
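The fixture above also documents the on-disk contract: a workflow is one `.py` file whose module docstring serves as its description and whose `async def run()` is the entry point, with the filename providing the workflow name. A hypothetical example file:

```python
# workflows/lengths.py - hypothetical example following the contract above
"""Return the length of each input string."""

async def run(items: list[str]) -> list[int]:
    return [len(item) for item in items]
```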
@@ -157,7 +157,7 @@ async def test_complete_workflow_from_empty_directory(self, empty_base_dir: Path
This test WILL FAIL if:
- Directory auto-creation is broken
- - skills.list() crashes on missing skills/
+ - workflows.list() crashes on missing workflows/
- artifacts.save() fails to create artifacts/
- Persistence doesn't work
"""
@@ -171,31 +171,31 @@ async def test_complete_workflow_from_empty_directory(self, empty_base_dir: Path
assert isinstance(result.value, list), f"tools.list() returned {type(result.value)}"
# Empty list is expected - no tools defined yet
- # 2. Verify skills namespace exists (empty is fine)
- result = await session.run("skills.list()")
- assert result.is_ok, f"skills.list() failed on empty dir: {result.error}"
- assert result.value is not None, "skills.list() returned None"
- assert isinstance(result.value, list), f"skills.list() returned {type(result.value)}"
+ # 2. Verify workflows namespace exists (empty is fine)
+ result = await session.run("workflows.list()")
+ assert result.is_ok, f"workflows.list() failed on empty dir: {result.error}"
+ assert result.value is not None, "workflows.list() returned None"
+ assert isinstance(result.value, list), f"workflows.list() returned {type(result.value)}"
- # 3. Create a skill - this MUST create skills/ directory
+ # 3. Create a workflow - this MUST create workflows/ directory
result = await session.run("""
-skills.create(
+workflows.create(
name="triple",
description="Triple a number",
source="async def run(n: int) -> int:\\n return n * 3"
)
""")
- assert result.is_ok, f"skills.create() failed: {result.error}"
+ assert result.is_ok, f"workflows.create() failed: {result.error}"
- # 4. Verify skill appears in list
- result = await session.run("skills.list()")
+ # 4. Verify workflow appears in list
+ result = await session.run("workflows.list()")
assert result.is_ok
- skill_names = [s["name"] for s in result.value]
- assert "triple" in skill_names, f"Created skill not in list: {skill_names}"
+ workflow_names = [s["name"] for s in result.value]
+ assert "triple" in workflow_names, f"Created workflow not in list: {workflow_names}"
- # 5. Invoke the created skill
- result = await session.run("skills.triple(n=7)")
- assert result.is_ok, f"skills.triple() failed: {result.error}"
+ # 5. Invoke the created workflow
+ result = await session.run("workflows.triple(n=7)")
+ assert result.is_ok, f"workflows.triple() failed: {result.error}"
assert result.value == 21, f"Expected 21, got {result.value}"
# 6. Save an artifact - this MUST create artifacts/ directory
@@ -215,37 +215,37 @@ async def test_complete_workflow_from_empty_directory(self, empty_base_dir: Path
assert len(result.value) >= 1, "Artifact not in list"
# Session closed. Verify files exist on disk.
- skills_dir = empty_base_dir / "skills"
+ workflows_dir = empty_base_dir / "workflows"
artifacts_dir = empty_base_dir / "artifacts"
- assert skills_dir.exists(), "skills/ directory was not created"
- assert (skills_dir / "triple.py").exists(), "Skill file was not persisted"
+ assert workflows_dir.exists(), "workflows/ directory was not created"
+ assert (workflows_dir / "triple.py").exists(), "Workflow file was not persisted"
assert artifacts_dir.exists(), "artifacts/ directory was not created"
@pytest.mark.asyncio
- async def test_skills_persist_across_sessions(self, empty_base_dir: Path) -> None:
- """Skills created in one session are available in the next.
+ async def test_workflows_persist_across_sessions(self, empty_base_dir: Path) -> None:
+ """Workflows created in one session are available in the next.
This test WILL FAIL if:
- - Skills are only stored in memory
- - FileSkillStore doesn't save to disk
- - SkillLibrary doesn't reload from store on new session
+ - Workflows are only stored in memory
+ - FileWorkflowStore doesn't save to disk
+ - WorkflowLibrary doesn't reload from store on new session
"""
storage = FileStorage(empty_base_dir)
- # Session 1: Create a skill
+ # Session 1: Create a workflow
async with Session(storage=storage) as session:
result = await session.run("""
-skills.create(
+workflows.create(
name="quadruple",
description="Multiply by 4",
source="async def run(n: int) -> int:\\n return n * 4"
)
""")
- assert result.is_ok, f"skills.create() failed: {result.error}"
+ assert result.is_ok, f"workflows.create() failed: {result.error}"
# Verify it works in this session
- result = await session.run("skills.quadruple(n=5)")
+ result = await session.run("workflows.quadruple(n=5)")
assert result.is_ok
assert result.value == 20
@@ -254,17 +254,17 @@ async def test_skills_persist_across_sessions(self, empty_base_dir: Path) -> Non
storage2 = FileStorage(empty_base_dir)
async with Session(storage=storage2) as session:
- # Skill should be visible in list
- result = await session.run("skills.list()")
+ # Workflow should be visible in list
+ result = await session.run("workflows.list()")
assert result.is_ok
- skill_names = [s["name"] for s in result.value]
- assert "quadruple" in skill_names, (
- f"Skill not persisted across sessions. Found: {skill_names}"
+ workflow_names = [s["name"] for s in result.value]
+ assert "quadruple" in workflow_names, (
+ f"Workflow not persisted across sessions. Found: {workflow_names}"
)
- # Skill should be callable
- result = await session.run("skills.quadruple(n=10)")
- assert result.is_ok, f"Persisted skill failed: {result.error}"
+ # Workflow should be callable
+ result = await session.run("workflows.quadruple(n=10)")
+ assert result.is_ok, f"Persisted workflow failed: {result.error}"
assert result.value == 40
@pytest.mark.asyncio
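The two-session test above is the persistence guarantee in miniature; restated outside pytest (again assuming `Session` is a top-level export):

```python
import asyncio
from pathlib import Path

from py_code_mode import Session  # assumed export, as above
from py_code_mode.storage.backends import FileStorage

async def persist_demo(base: Path) -> None:
    # Session 1: create the workflow; FileStorage writes workflows/quadruple.py.
    async with Session(storage=FileStorage(base)) as session:
        await session.run(
            'workflows.create(name="quadruple", description="Multiply by 4", '
            'source="async def run(n: int) -> int:\\n    return n * 4")'
        )

    # Session 2: a fresh storage instance reloads the workflow from disk.
    async with Session(storage=FileStorage(base)) as session:
        result = await session.run("workflows.quadruple(n=10)")
        assert result.is_ok and result.value == 40

asyncio.run(persist_demo(Path("/tmp/persist-demo")))
```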
@@ -299,7 +299,7 @@ async def test_from_scratch_with_container_executor(self, empty_base_dir: Path)
"""From scratch scenario works with container executor too.
Verifies that the container receives proper storage access configuration:
- - skills_path is created and mounted read-write
+ - workflows_path is created and mounted read-write
- artifacts_path is created and mounted read-write
- Environment variables set for container's SessionConfig
"""
@@ -310,9 +310,9 @@ async def test_from_scratch_with_container_executor(self, empty_base_dir: Path)
executor = ContainerExecutor(config)
async with Session(storage=storage, executor=executor) as session:
- # Skills should work
- result = await session.run("skills.list()")
- assert result.is_ok, f"skills.list() failed in container: {result.error}"
+ # Workflows should work
+ result = await session.run("workflows.list()")
+ assert result.is_ok, f"workflows.list() failed in container: {result.error}"
# Artifacts should work
result = await session.run('artifacts.save("container_test.txt", b"hello", "test")')
@@ -333,16 +333,16 @@ class TestDirectoryAutoCreation:
"""
@pytest.mark.asyncio
- async def test_skills_create_creates_directory(self, empty_base_dir: Path) -> None:
- """skills.create() creates skills/ directory if missing."""
+ async def test_workflows_create_creates_directory(self, empty_base_dir: Path) -> None:
+ """workflows.create() creates workflows/ directory if missing."""
storage = FileStorage(empty_base_dir)
- assert not (empty_base_dir / "skills").exists()
+ assert not (empty_base_dir / "workflows").exists()
async with Session(storage=storage) as session:
result = await session.run("""
-skills.create(
- name="test_skill",
+workflows.create(
+ name="test_workflow",
description="Test",
source="async def run() -> str:\\n return 'ok'"
)
@@ -350,7 +350,9 @@ async def test_skills_create_creates_directory(self, empty_base_dir: Path) -> No
assert result.is_ok, f"Failed: {result.error}"
# Directory should now exist
- assert (empty_base_dir / "skills").exists(), "skills/ not created by skills.create()"
+ assert (empty_base_dir / "workflows").exists(), (
+ "workflows/ not created by workflows.create()"
+ )
@pytest.mark.asyncio
async def test_artifacts_save_creates_directory(self, empty_base_dir: Path) -> None:
@@ -405,26 +407,26 @@ async def test_tools_list_on_empty_directory(self, empty_base_dir: Path) -> None
assert result.value == []
@pytest.mark.asyncio
- async def test_skills_list_on_missing_directory(self, empty_base_dir: Path) -> None:
- """skills.list() returns [] when skills/ doesn't exist."""
+ async def test_workflows_list_on_missing_directory(self, empty_base_dir: Path) -> None:
+ """workflows.list() returns [] when workflows/ doesn't exist."""
storage = FileStorage(empty_base_dir)
async with Session(storage=storage) as session:
- result = await session.run("skills.list()")
+ result = await session.run("workflows.list()")
- assert result.is_ok, f"skills.list() crashed: {result.error}"
- assert result.value is not None, "skills.list() returned None"
+ assert result.is_ok, f"workflows.list() crashed: {result.error}"
+ assert result.value is not None, "workflows.list() returned None"
assert isinstance(result.value, list)
assert result.value == []
@pytest.mark.asyncio
- async def test_skills_list_on_empty_directory(self, empty_base_dir: Path) -> None:
- """skills.list() returns [] when skills/ exists but is empty."""
- (empty_base_dir / "skills").mkdir()
+ async def test_workflows_list_on_empty_directory(self, empty_base_dir: Path) -> None:
+ """workflows.list() returns [] when workflows/ exists but is empty."""
+ (empty_base_dir / "workflows").mkdir()
storage = FileStorage(empty_base_dir)
async with Session(storage=storage) as session:
- result = await session.run("skills.list()")
+ result = await session.run("workflows.list()")
assert result.is_ok
assert result.value == []
@@ -478,14 +480,14 @@ async def test_tool_not_found_gives_clear_error(self, empty_base_dir: Path) -> N
assert any(x in error_lower for x in ["nonexistent", "not found", "attribute"])
@pytest.mark.asyncio
- async def test_skill_not_found_gives_clear_error(self, empty_base_dir: Path) -> None:
- """Calling non-existent skill gives clear error."""
+ async def test_workflow_not_found_gives_clear_error(self, empty_base_dir: Path) -> None:
+ """Calling non-existent workflow gives clear error."""
storage = FileStorage(empty_base_dir)
async with Session(storage=storage) as session:
- result = await session.run("skills.nonexistent_skill()")
+ result = await session.run("workflows.nonexistent_workflow()")
- assert not result.is_ok, "Expected error for missing skill"
+ assert not result.is_ok, "Expected error for missing workflow"
assert result.error is not None
error_lower = result.error.lower()
assert any(x in error_lower for x in ["nonexistent", "not found", "attribute"])
@@ -516,12 +518,14 @@ class TestPartialDirectoryConditions:
"""Test behavior when some directories exist but others don't."""
@pytest.mark.asyncio
- async def test_skills_work_without_tools_directory(self, partial_dir_skills_only: Path) -> None:
- """Skills work even when tools/ doesn't exist."""
- storage = FileStorage(partial_dir_skills_only)
+ async def test_workflows_work_without_tools_directory(
+ self, partial_dir_workflows_only: Path
+ ) -> None:
+ """Workflows work even when tools/ doesn't exist."""
+ storage = FileStorage(partial_dir_workflows_only)
- # Add a skill to the existing skills dir
- (partial_dir_skills_only / "skills" / "add.py").write_text('''
+ # Add a workflow to the existing workflows dir
+ (partial_dir_workflows_only / "workflows" / "add.py").write_text('''
"""Add two numbers."""
async def run(a: int, b: int) -> int:
@@ -533,16 +537,16 @@ async def run(a: int, b: int) -> int:
result = await session.run("tools.list()")
assert result.is_ok
- # skills should work
- result = await session.run("skills.add(a=1, b=2)")
+ # workflows should work
+ result = await session.run("workflows.add(a=1, b=2)")
assert result.is_ok
assert result.value == 3
@pytest.mark.asyncio
- async def test_artifacts_work_without_skills_directory(
+ async def test_artifacts_work_without_workflows_directory(
self, partial_dir_artifacts_only: Path
) -> None:
- """Artifacts work even when skills/ doesn't exist."""
+ """Artifacts work even when workflows/ doesn't exist."""
storage = FileStorage(partial_dir_artifacts_only)
async with Session(storage=storage) as session:
@@ -550,8 +554,8 @@ async def test_artifacts_work_without_skills_directory(
result = await session.run('artifacts.save("test.json", {"ok": True}, "Test")')
assert result.is_ok
- # skills.list() should work (empty)
- result = await session.run("skills.list()")
+ # workflows.list() should work (empty)
+ result = await session.run("workflows.list()")
assert result.is_ok
@@ -572,29 +576,29 @@ def redis_storage(self, mock_redis: MockRedisClient):
return RedisStorage(redis=mock_redis, prefix="test")
@pytest.mark.asyncio
- async def test_skills_create_and_persist_in_redis(self, redis_storage) -> None:
- """Skills can be created and retrieved from Redis storage."""
+ async def test_workflows_create_and_persist_in_redis(self, redis_storage) -> None:
+ """Workflows can be created and retrieved from Redis storage."""
async with Session(storage=redis_storage) as session:
- # Create skill
+ # Create workflow
result = await session.run("""
-skills.create(
- name="redis_skill",
- description="Test skill",
+workflows.create(
+ name="redis_workflow",
+ description="Test workflow",
source="async def run() -> str:\\n return 'from redis'"
)
""")
- assert result.is_ok, f"skills.create() failed: {result.error}"
+ assert result.is_ok, f"workflows.create() failed: {result.error}"
- # Invoke skill
- result = await session.run("skills.redis_skill()")
+ # Invoke workflow
+ result = await session.run("workflows.redis_workflow()")
assert result.is_ok
assert result.value == "from redis"
# Verify in list
- result = await session.run("skills.list()")
+ result = await session.run("workflows.list()")
assert result.is_ok
names = [s["name"] for s in result.value]
- assert "redis_skill" in names
+ assert "redis_workflow" in names
@pytest.mark.asyncio
async def test_artifacts_save_and_load_in_redis(self, redis_storage) -> None:
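The Redis tests run against a mock client; against a live server the wiring is the same, assuming `RedisStorage` is importable from `py_code_mode.storage.backends` (this diff only shows the `redis=`/`prefix=` keywords) and accepts a redis-py client:

```python
import asyncio

import redis

from py_code_mode import Session  # assumed export
from py_code_mode.storage.backends import RedisStorage  # assumed import path

async def redis_demo() -> None:
    client = redis.Redis.from_url("redis://localhost:6379/0")
    storage = RedisStorage(redis=client, prefix="prod")
    async with Session(storage=storage) as session:
        result = await session.run("workflows.list()")
        assert result.is_ok

asyncio.run(redis_demo())
```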
@@ -632,13 +636,13 @@ def executor(self, request):
return _create_container_executor()
@pytest.mark.asyncio
- async def test_skills_list_works_with_executor(self, executor, empty_base_dir: Path) -> None:
- """skills.list() works across executors."""
+ async def test_workflows_list_works_with_executor(self, executor, empty_base_dir: Path) -> None:
+ """workflows.list() works across executors."""
storage = FileStorage(empty_base_dir)
async with Session(storage=storage, executor=executor) as session:
- result = await session.run("skills.list()")
- assert result.is_ok, f"skills.list() failed with {type(executor)}: {result.error}"
+ result = await session.run("workflows.list()")
+ assert result.is_ok, f"workflows.list() failed with {type(executor)}: {result.error}"
assert isinstance(result.value, list)
@pytest.mark.asyncio
@@ -704,39 +708,39 @@ async def test_tools_search_feature(self, storage_and_executor) -> None:
assert isinstance(result.value, list)
@pytest.mark.asyncio
- async def test_skills_list_feature(self, storage_and_executor) -> None:
- """skills.list() works across all combinations."""
+ async def test_workflows_list_feature(self, storage_and_executor) -> None:
+ """workflows.list() works across all combinations."""
storage, executor = storage_and_executor
async with Session(storage=storage, executor=executor) as session:
- result = await session.run("skills.list()")
- assert result.is_ok, f"skills.list() failed: {result.error}"
+ result = await session.run("workflows.list()")
+ assert result.is_ok, f"workflows.list() failed: {result.error}"
assert isinstance(result.value, list)
@pytest.mark.asyncio
- async def test_skills_search_feature(self, storage_and_executor) -> None:
- """skills.search() works across all combinations."""
+ async def test_workflows_search_feature(self, storage_and_executor) -> None:
+ """workflows.search() works across all combinations."""
storage, executor = storage_and_executor
async with Session(storage=storage, executor=executor) as session:
- result = await session.run('skills.search("test")')
- assert result.is_ok, f"skills.search() failed: {result.error}"
+ result = await session.run('workflows.search("test")')
+ assert result.is_ok, f"workflows.search() failed: {result.error}"
assert isinstance(result.value, list)
@pytest.mark.asyncio
- async def test_skills_create_feature(self, storage_and_executor) -> None:
- """skills.create() works across all combinations."""
+ async def test_workflows_create_feature(self, storage_and_executor) -> None:
+ """workflows.create() works across all combinations."""
storage, executor = storage_and_executor
async with Session(storage=storage, executor=executor) as session:
result = await session.run("""
-skills.create(
+workflows.create(
name="matrix_test",
- description="Matrix test skill",
+ description="Matrix test workflow",
source="async def run() -> str:\\n return 'matrix'"
)
""")
- assert result.is_ok, f"skills.create() failed: {result.error}"
+ assert result.is_ok, f"workflows.create() failed: {result.error}"
# Verify it's callable
- result = await session.run("skills.matrix_test()")
+ result = await session.run("workflows.matrix_test()")
assert result.is_ok
assert result.value == "matrix"
diff --git a/tests/test_integration.py b/tests/test_integration.py
index b1cd92b..2d2ca8a 100644
--- a/tests/test_integration.py
+++ b/tests/test_integration.py
@@ -9,10 +9,10 @@
import pytest
from py_code_mode.execution.in_process import InProcessExecutor
-from py_code_mode.skills import FileSkillStore, MockEmbedder, SkillLibrary
from py_code_mode.tools.adapters.base import ToolAdapter
from py_code_mode.tools.adapters.cli import CLIAdapter
from py_code_mode.tools.registry import ToolRegistry
+from py_code_mode.workflows import FileWorkflowStore, MockEmbedder, WorkflowLibrary
class TestCLIToExecutorFlow:
@@ -370,51 +370,53 @@ async def test_scoped_registry_limits_tools(self) -> None:
assert tools[0].name == "scan"
-class TestSkillsIntegration:
- """Tests skills integration with executor."""
+class TestWorkflowsIntegration:
+ """Tests workflows integration with executor."""
@pytest.fixture
- def skill_library(self, tmp_path: Path) -> SkillLibrary:
- """Create skill library with test skills."""
- skills_path = tmp_path / "skills"
- skills_path.mkdir()
+ def workflow_library(self, tmp_path: Path) -> WorkflowLibrary:
+ """Create workflow library with test workflows."""
+ workflows_path = tmp_path / "workflows"
+ workflows_path.mkdir()
- # Create a simple skill
- (skills_path / "double.py").write_text('''"""Double a number."""
+ # Create a simple workflow
+ (workflows_path / "double.py").write_text('''"""Double a number."""
async def run(n: int) -> int:
return n * 2
''')
- store = FileSkillStore(skills_path)
- return SkillLibrary(embedder=MockEmbedder(), store=store)
+ store = FileWorkflowStore(workflows_path)
+ return WorkflowLibrary(embedder=MockEmbedder(), store=store)
@pytest.fixture
- def executor_with_skills(self, skill_library: SkillLibrary) -> InProcessExecutor:
- """Executor with skill library."""
- return InProcessExecutor(skill_library=skill_library)
+ def executor_with_workflows(self, workflow_library: WorkflowLibrary) -> InProcessExecutor:
+ """Executor with workflow library."""
+ return InProcessExecutor(workflow_library=workflow_library)
@pytest.mark.asyncio
- async def test_skills_list_shows_skills(self, executor_with_skills: InProcessExecutor) -> None:
- """skills.list() returns available skills."""
- result = await executor_with_skills.run("len(skills.list())")
+ async def test_workflows_list_shows_workflows(
+ self, executor_with_workflows: InProcessExecutor
+ ) -> None:
+ """workflows.list() returns available workflows."""
+ result = await executor_with_workflows.run("len(workflows.list())")
assert result.is_ok, f"Execution failed: {result.error}"
assert result.value == 1
@pytest.mark.asyncio
- async def test_skill_invocation(self, executor_with_skills: InProcessExecutor) -> None:
- """Can invoke skill from executed code."""
- result = await executor_with_skills.run("skills.double(n=21)")
+ async def test_workflow_invocation(self, executor_with_workflows: InProcessExecutor) -> None:
+ """Can invoke workflow from executed code."""
+ result = await executor_with_workflows.run("workflows.double(n=21)")
assert result.is_ok, f"Execution failed: {result.error}"
assert result.value == 42
@pytest.mark.asyncio
- async def test_skills_search(self, executor_with_skills: InProcessExecutor) -> None:
- """skills.search() can find skills."""
- result = await executor_with_skills.run("""
-matches = skills.search("double")
+ async def test_workflows_search(self, executor_with_workflows: InProcessExecutor) -> None:
+ """workflows.search() can find workflows."""
+ result = await executor_with_workflows.run("""
+matches = workflows.search("double")
[s["name"] for s in matches]
""")
diff --git a/tests/test_mcp_server.py b/tests/test_mcp_server.py
index 10fb5f9..52c10c6 100644
--- a/tests/test_mcp_server.py
+++ b/tests/test_mcp_server.py
@@ -2,9 +2,9 @@
Tests cover:
- E2E tests via stdio transport (production path)
-- Skill creation, persistence, and invocation
+- Workflow creation, persistence, and invocation
- Artifact storage and persistence
-- Cross-namespace operations (skills calling tools)
+- Cross-namespace operations (workflows calling tools)
- State persistence across run_code calls
- Tool invocation patterns
- Complete workflow scenarios
@@ -36,8 +36,8 @@ def mcp_storage_dir(tmp_path: Path) -> tuple[Path, Path]:
tools_dir = tmp_path / "tools"
tools_dir.mkdir()
- skills_dir = storage / "skills"
- skills_dir.mkdir()
+ workflows_dir = storage / "workflows"
+ workflows_dir.mkdir()
artifacts_dir = storage / "artifacts"
artifacts_dir.mkdir()
@@ -88,14 +88,14 @@ def mcp_storage_dir(tmp_path: Path) -> tuple[Path, Path]:
url: {}
""")
- # Simple skill for basic testing
- (skills_dir / "double.py").write_text('''"""Double a number."""
+ # Simple workflow for basic testing
+ (workflows_dir / "double.py").write_text('''"""Double a number."""
async def run(n: int) -> int:
return n * 2
''')
- # Skill that calls tools (for cross-namespace testing)
- (skills_dir / "fetch_title.py").write_text('''"""Fetch a URL and extract title."""
+ # Workflow that calls tools (for cross-namespace testing)
+ (workflows_dir / "fetch_title.py").write_text('''"""Fetch a URL and extract title."""
async def run(url: str) -> str:
import re
content = tools.curl.get(url=url)
@@ -140,8 +140,8 @@ async def test_mcp_server_starts_via_stdio(
"run_code",
"list_tools",
"search_tools",
- "list_skills",
- "search_skills",
+ "list_workflows",
+ "search_workflows",
}
assert expected_tools <= tool_names, f"Missing tools: {expected_tools - tool_names}"
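Every E2E test below repeats the same stdio client dance; condensed here once. A sketch: the server command and its arguments are assumptions (the tests build their own parameters off-screen), while the `mcp` SDK calls are as used throughout this file.

```python
import asyncio
import json

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def list_workflows_demo() -> None:
    # Server command is an assumption, for illustration only.
    params = StdioServerParameters(
        command="py-code-mode-mcp", args=["--base", "/tmp/storage"]
    )
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool("list_workflows", {})
            names = {w["name"] for w in json.loads(result.content[0].text)}
            print(sorted(names))

asyncio.run(list_workflows_demo())
```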
@@ -171,11 +171,11 @@ async def test_mcp_server_list_tools(
assert "curl" in tool_names
@pytest.mark.asyncio
- async def test_mcp_server_list_skills(
+ async def test_mcp_server_list_workflows(
self,
mcp_storage_dir: tuple[Path, Path],
) -> None:
- """E2E: list_skills returns seeded Python skills."""
+ """E2E: list_workflows returns seeded Python workflows."""
from mcp import ClientSession
from mcp.client.stdio import stdio_client
@@ -188,12 +188,12 @@ async def test_mcp_server_list_skills(
async with ClientSession(read, write) as session:
await session.initialize()
- result = await session.call_tool("list_skills", {})
- skills_data = json.loads(result.content[0].text)
+ result = await session.call_tool("list_workflows", {})
+ workflows_data = json.loads(result.content[0].text)
- skill_names = {s["name"] for s in skills_data}
- assert "double" in skill_names
- assert "fetch_title" in skill_names
+ workflow_names = {s["name"] for s in workflows_data}
+ assert "double" in workflow_names
+ assert "fetch_title" in workflow_names
@pytest.mark.asyncio
async def test_mcp_server_search_tools(
@@ -221,11 +221,11 @@ async def test_mcp_server_search_tools(
assert "curl" in tool_names
@pytest.mark.asyncio
- async def test_mcp_server_search_skills(
+ async def test_mcp_server_search_workflows(
self,
mcp_storage_dir: tuple[Path, Path],
) -> None:
- """E2E: search_skills finds skills by intent."""
+ """E2E: search_workflows finds workflows by intent."""
from mcp import ClientSession
from mcp.client.stdio import stdio_client
@@ -239,24 +239,24 @@ async def test_mcp_server_search_skills(
await session.initialize()
result = await session.call_tool(
- "search_skills", {"query": "multiply number", "limit": 5}
+ "search_workflows", {"query": "multiply number", "limit": 5}
)
- skills_data = json.loads(result.content[0].text)
+ workflows_data = json.loads(result.content[0].text)
# Should find double since it multiplies by 2
- skill_names = {s["name"] for s in skills_data}
- assert "double" in skill_names
+ workflow_names = {s["name"] for s in workflows_data}
+ assert "double" in workflow_names
# -------------------------------------------------------------------------
- # Runtime Skill Creation Tests
+ # Runtime Workflow Creation Tests
# -------------------------------------------------------------------------
@pytest.mark.asyncio
- async def test_mcp_server_skill_create_and_invoke(
+ async def test_mcp_server_workflow_create_and_invoke(
self,
mcp_storage_dir: tuple[Path, Path],
) -> None:
- """E2E: Create skill at runtime, then invoke it."""
+ """E2E: Create workflow at runtime, then invoke it."""
from mcp import ClientSession
from mcp.client.stdio import stdio_client
@@ -269,9 +269,9 @@ async def test_mcp_server_skill_create_and_invoke(
async with ClientSession(read, write) as session:
await session.initialize()
- # Create a skill at runtime
+ # Create a workflow at runtime
create_code = '''
-skills.create(
+workflows.create(
name="add_numbers",
source="""
async def run(a: int, b: int) -> int:
@@ -283,18 +283,18 @@ async def run(a: int, b: int) -> int:
result = await session.call_tool("run_code", {"code": create_code})
assert "error" not in result.content[0].text.lower()
- # Now invoke the skill we just created
+ # Now invoke the workflow we just created
invoke_result = await session.call_tool(
- "run_code", {"code": 'skills.invoke("add_numbers", a=10, b=32)'}
+ "run_code", {"code": 'workflows.invoke("add_numbers", a=10, b=32)'}
)
assert "42" in invoke_result.content[0].text
@pytest.mark.asyncio
- async def test_mcp_server_skill_persists_across_calls(
+ async def test_mcp_server_workflow_persists_across_calls(
self,
mcp_storage_dir: tuple[Path, Path],
) -> None:
- """E2E: Skill created in call 1 is available in call 2."""
+ """E2E: Workflow created in call 1 is available in call 2."""
from mcp import ClientSession
from mcp.client.stdio import stdio_client
@@ -307,12 +307,12 @@ async def test_mcp_server_skill_persists_across_calls(
async with ClientSession(read, write) as session:
await session.initialize()
- # Call 1: Create skill
+ # Call 1: Create workflow
await session.call_tool(
"run_code",
{
"code": """
-skills.create(
+workflows.create(
name="triple",
source="async def run(n: int) -> int:\\n return n * 3",
description="Triple a number"
@@ -321,26 +321,26 @@ async def test_mcp_server_skill_persists_across_calls(
},
)
- # Call 2: Search for the skill (should find it)
+ # Call 2: Search for the workflow (should find it)
search_result = await session.call_tool(
- "search_skills", {"query": "triple multiply", "limit": 5}
+ "search_workflows", {"query": "triple multiply", "limit": 5}
)
- skills_found = json.loads(search_result.content[0].text)
- skill_names = {s["name"] for s in skills_found}
- assert "triple" in skill_names
+ workflows_found = json.loads(search_result.content[0].text)
+ workflow_names = {s["name"] for s in workflows_found}
+ assert "triple" in workflow_names
- # Call 3: Invoke the skill
+ # Call 3: Invoke the workflow
invoke_result = await session.call_tool(
- "run_code", {"code": 'skills.invoke("triple", n=14)'}
+ "run_code", {"code": 'workflows.invoke("triple", n=14)'}
)
assert "42" in invoke_result.content[0].text
@pytest.mark.asyncio
- async def test_mcp_server_skill_delete(
+ async def test_mcp_server_workflow_delete(
self,
mcp_storage_dir: tuple[Path, Path],
) -> None:
- """E2E: Delete skill via skills.delete()."""
+ """E2E: Delete workflow via workflows.delete()."""
from mcp import ClientSession
from mcp.client.stdio import stdio_client
@@ -353,34 +353,34 @@ async def test_mcp_server_skill_delete(
async with ClientSession(read, write) as session:
await session.initialize()
- # Create a skill
+ # Create a workflow
await session.call_tool(
"run_code",
{
"code": """
-skills.create(
- name="temp_skill",
+workflows.create(
+ name="temp_workflow",
source="async def run() -> str:\\n return 'temporary'",
- description="Temporary skill for deletion test"
+ description="Temporary workflow for deletion test"
)
"""
},
)
# Verify it exists
- list_result = await session.call_tool("list_skills", {})
- skills_data = json.loads(list_result.content[0].text)
- skill_names = {s["name"] for s in skills_data}
- assert "temp_skill" in skill_names
+ list_result = await session.call_tool("list_workflows", {})
+ workflows_data = json.loads(list_result.content[0].text)
+ workflow_names = {s["name"] for s in workflows_data}
+ assert "temp_workflow" in workflow_names
# Delete it
- await session.call_tool("run_code", {"code": 'skills.delete("temp_skill")'})
+ await session.call_tool("run_code", {"code": 'workflows.delete("temp_workflow")'})
# Verify it's gone
- list_result2 = await session.call_tool("list_skills", {})
- skills_data2 = json.loads(list_result2.content[0].text)
- skill_names2 = {s["name"] for s in skills_data2}
- assert "temp_skill" not in skill_names2
+ list_result2 = await session.call_tool("list_workflows", {})
+ workflows_data2 = json.loads(list_result2.content[0].text)
+ workflow_names2 = {s["name"] for s in workflows_data2}
+ assert "temp_workflow" not in workflow_names2
# -------------------------------------------------------------------------
# Artifact Storage Tests
@@ -573,11 +573,11 @@ async def test_mcp_server_artifact_delete(
# -------------------------------------------------------------------------
@pytest.mark.asyncio
- async def test_mcp_server_skill_calls_tool(
+ async def test_mcp_server_workflow_calls_tool(
self,
mcp_storage_dir: tuple[Path, Path],
) -> None:
- """E2E: Create skill that uses tools namespace, then invoke it."""
+ """E2E: Create workflow that uses tools namespace, then invoke it."""
from mcp import ClientSession
from mcp.client.stdio import stdio_client
@@ -590,12 +590,12 @@ async def test_mcp_server_skill_calls_tool(
async with ClientSession(read, write) as session:
await session.initialize()
- # Create skill that calls echo tool
+ # Create workflow that calls echo tool
await session.call_tool(
"run_code",
{
"code": '''
-skills.create(
+workflows.create(
name="shout",
source="""
async def run(message: str) -> str:
@@ -607,18 +607,18 @@ async def run(message: str) -> str:
},
)
- # Invoke the skill
+ # Invoke the workflow
result = await session.call_tool(
- "run_code", {"code": 'skills.invoke("shout", message="hello world")'}
+ "run_code", {"code": 'workflows.invoke("shout", message="hello world")'}
)
assert "HELLO WORLD" in result.content[0].text
@pytest.mark.asyncio
- async def test_mcp_server_seeded_skill_calls_tool(
+ async def test_mcp_server_seeded_workflow_calls_tool(
self,
mcp_storage_dir: tuple[Path, Path],
) -> None:
- """E2E: Invoke seeded skill (fetch_title) that calls curl tool."""
+ """E2E: Invoke seeded workflow (fetch_title) that calls curl tool."""
from mcp import ClientSession
from mcp.client.stdio import stdio_client
@@ -631,9 +631,10 @@ async def test_mcp_server_seeded_skill_calls_tool(
async with ClientSession(read, write) as session:
await session.initialize()
- # Invoke seeded skill that calls tools.curl.get()
+ # Invoke seeded workflow that calls tools.curl.get()
result = await session.call_tool(
- "run_code", {"code": 'skills.invoke("fetch_title", url="https://example.com")'}
+ "run_code",
+ {"code": 'workflows.invoke("fetch_title", url="https://example.com")'},
)
# example.com has title "Example Domain"
assert (
@@ -642,11 +643,11 @@ async def test_mcp_server_seeded_skill_calls_tool(
)
@pytest.mark.asyncio
- async def test_mcp_server_skill_calls_another_skill(
+ async def test_mcp_server_workflow_calls_another_workflow(
self,
mcp_storage_dir: tuple[Path, Path],
) -> None:
- """E2E: Skill that calls skills.invoke()."""
+ """E2E: Workflow that calls workflows.invoke()."""
from mcp import ClientSession
from mcp.client.stdio import stdio_client
@@ -659,17 +660,17 @@ async def test_mcp_server_skill_calls_another_skill(
async with ClientSession(read, write) as session:
await session.initialize()
- # Create a skill that calls the seeded "double" skill
+ # Create a workflow that calls the seeded "double" workflow
await session.call_tool(
"run_code",
{
"code": '''
-skills.create(
+workflows.create(
name="quadruple",
source="""
async def run(n: int) -> int:
- doubled = skills.invoke("double", n=n)
- return skills.invoke("double", n=doubled)
+ doubled = workflows.invoke("double", n=n)
+ return workflows.invoke("double", n=doubled)
""",
description="Quadruple a number by doubling twice"
)
@@ -677,9 +678,9 @@ async def run(n: int) -> int:
},
)
- # Invoke the skill
+ # Invoke the workflow
result = await session.call_tool(
- "run_code", {"code": 'skills.invoke("quadruple", n=10)'}
+ "run_code", {"code": 'workflows.invoke("quadruple", n=10)'}
)
assert "40" in result.content[0].text
@@ -829,15 +830,15 @@ async def test_mcp_server_list_artifacts(
assert "test_data" in artifact_names
# -------------------------------------------------------------------------
- # MCP Tool: create_skill
+ # MCP Tool: create_workflow
# -------------------------------------------------------------------------
@pytest.mark.asyncio
- async def test_mcp_server_create_skill(
+ async def test_mcp_server_create_workflow(
self,
mcp_storage_dir: tuple[Path, Path],
) -> None:
- """E2E: create_skill MCP tool creates a skill directly."""
+ """E2E: create_workflow MCP tool creates a workflow directly."""
from mcp import ClientSession
from mcp.client.stdio import stdio_client
@@ -850,35 +851,35 @@ async def test_mcp_server_create_skill(
async with ClientSession(read, write) as session:
await session.initialize()
- # Create skill via dedicated MCP tool (not run_code)
- skill_source = "async def run(x: int, y: int) -> int:\n return x + y\n"
+ # Create workflow via dedicated MCP tool (not run_code)
+ workflow_source = "async def run(x: int, y: int) -> int:\n return x + y\n"
result = await session.call_tool(
- "create_skill",
+ "create_workflow",
{
"name": "add_two",
- "source": skill_source,
+ "source": workflow_source,
"description": "Add two numbers",
},
)
- # Should return skill info
- skill_info = json.loads(result.content[0].text)
- assert skill_info["name"] == "add_two"
- assert "Add two numbers" in skill_info["description"]
+ # Should return workflow info
+ workflow_info = json.loads(result.content[0].text)
+ assert workflow_info["name"] == "add_two"
+ assert "Add two numbers" in workflow_info["description"]
- # Verify skill works by invoking it via run_code
+ # Verify workflow works by invoking it via run_code
invoke_result = await session.call_tool(
"run_code",
- {"code": 'skills.invoke("add_two", x=17, y=25)'},
+ {"code": 'workflows.invoke("add_two", x=17, y=25)'},
)
assert "42" in invoke_result.content[0].text
@pytest.mark.asyncio
- async def test_mcp_server_create_skill_persists(
+ async def test_mcp_server_create_workflow_persists(
self,
mcp_storage_dir: tuple[Path, Path],
) -> None:
- """E2E: Skill created via create_skill MCP tool persists and is searchable."""
+ """E2E: Workflow created via create_workflow MCP tool persists and is searchable."""
from mcp import ClientSession
from mcp.client.stdio import stdio_client
@@ -891,36 +892,36 @@ async def test_mcp_server_create_skill_persists(
async with ClientSession(read, write) as session:
await session.initialize()
- # Create skill via dedicated MCP tool
- skill_source = "async def run(text: str) -> str:\n return text.upper()\n"
+ # Create workflow via dedicated MCP tool
+ workflow_source = "async def run(text: str) -> str:\n return text.upper()\n"
await session.call_tool(
- "create_skill",
+ "create_workflow",
{
"name": "uppercase_text",
- "source": skill_source,
+ "source": workflow_source,
"description": "Convert text to uppercase",
},
)
- # Search for the skill (should be found)
+ # Search for the workflow (should be found)
search_result = await session.call_tool(
- "search_skills",
+ "search_workflows",
{"query": "uppercase convert text", "limit": 5},
)
- skills_found = json.loads(search_result.content[0].text)
- skill_names = {s["name"] for s in skills_found}
- assert "uppercase_text" in skill_names
+ workflows_found = json.loads(search_result.content[0].text)
+ workflow_names = {s["name"] for s in workflows_found}
+ assert "uppercase_text" in workflow_names
# -------------------------------------------------------------------------
- # MCP Tool: delete_skill
+ # MCP Tool: delete_workflow
# -------------------------------------------------------------------------
@pytest.mark.asyncio
- async def test_mcp_server_delete_skill(
+ async def test_mcp_server_delete_workflow(
self,
mcp_storage_dir: tuple[Path, Path],
) -> None:
- """E2E: delete_skill MCP tool removes a skill."""
+ """E2E: delete_workflow MCP tool removes a workflow."""
from mcp import ClientSession
from mcp.client.stdio import stdio_client
@@ -933,43 +934,43 @@ async def test_mcp_server_delete_skill(
async with ClientSession(read, write) as session:
await session.initialize()
- # Create a skill via MCP tool
- skill_source = "async def run() -> str:\n return 'I exist'\n"
+ # Create a workflow via MCP tool
+ workflow_source = "async def run() -> str:\n return 'I exist'\n"
await session.call_tool(
- "create_skill",
+ "create_workflow",
{
"name": "deleteme",
- "source": skill_source,
- "description": "Skill to be deleted",
+ "source": workflow_source,
+ "description": "Workflow to be deleted",
},
)
- # Verify it exists via list_skills
- list_result = await session.call_tool("list_skills", {})
- skills_data = json.loads(list_result.content[0].text)
- skill_names = {s["name"] for s in skills_data}
- assert "deleteme" in skill_names
+ # Verify it exists via list_workflows
+ list_result = await session.call_tool("list_workflows", {})
+ workflows_data = json.loads(list_result.content[0].text)
+ workflow_names = {s["name"] for s in workflows_data}
+ assert "deleteme" in workflow_names
- # Delete the skill via dedicated MCP tool
+ # Delete the workflow via dedicated MCP tool
delete_result = await session.call_tool(
- "delete_skill",
+ "delete_workflow",
{"name": "deleteme"},
)
delete_data = json.loads(delete_result.content[0].text)
assert delete_data is True
- # Verify it's gone via list_skills
- list_result2 = await session.call_tool("list_skills", {})
- skills_data2 = json.loads(list_result2.content[0].text)
- skill_names2 = {s["name"] for s in skills_data2}
- assert "deleteme" not in skill_names2
+ # Verify it's gone via list_workflows
+ list_result2 = await session.call_tool("list_workflows", {})
+ workflows_data2 = json.loads(list_result2.content[0].text)
+ workflow_names2 = {s["name"] for s in workflows_data2}
+ assert "deleteme" not in workflow_names2
@pytest.mark.asyncio
- async def test_mcp_server_delete_skill_nonexistent(
+ async def test_mcp_server_delete_workflow_nonexistent(
self,
mcp_storage_dir: tuple[Path, Path],
) -> None:
- """E2E: delete_skill returns False for nonexistent skill."""
+ """E2E: delete_workflow returns False for nonexistent workflow."""
from mcp import ClientSession
from mcp.client.stdio import stdio_client
@@ -982,10 +983,10 @@ async def test_mcp_server_delete_skill_nonexistent(
async with ClientSession(read, write) as session:
await session.initialize()
- # Try to delete a skill that doesn't exist
+ # Try to delete a workflow that doesn't exist
delete_result = await session.call_tool(
- "delete_skill",
- {"name": "nonexistent_skill_xyz"},
+ "delete_workflow",
+ {"name": "nonexistent_workflow_xyz"},
)
delete_data = json.loads(delete_result.content[0].text)
assert delete_data is False
@@ -999,7 +1000,7 @@ async def test_mcp_server_full_workflow(
self,
mcp_storage_dir: tuple[Path, Path],
) -> None:
- """E2E: Complete agent workflow - fetch, parse, save skill, invoke, store artifact."""
+ """E2E: Complete agent workflow - fetch, parse, save workflow, invoke, store artifact."""
from mcp import ClientSession
from mcp.client.stdio import stdio_client
@@ -1033,12 +1034,12 @@ async def test_mcp_server_full_workflow(
},
)
- # Step 3: Create a skill to process this type of data
+ # Step 3: Create a workflow to process this type of data
await session.call_tool(
"run_code",
{
"code": '''
-skills.create(
+workflows.create(
name="sum_csv",
source="""
import re
@@ -1052,12 +1053,12 @@ async def run(text: str) -> int:
},
)
- # Step 4: Invoke the skill
+ # Step 4: Invoke the workflow
await session.call_tool(
"run_code",
{
"code": """
-total = skills.invoke("sum_csv", text="Values: 10, 20, 30, 40")
+total = workflows.invoke("sum_csv", text="Values: 10, 20, 30, 40")
"""
},
)
@@ -1067,7 +1068,7 @@ async def run(text: str) -> int:
"run_code",
{
"code": """
-artifacts.save("calculation_result", {"total": total, "source": "sum_csv skill"})
+artifacts.save("calculation_result", {"total": total, "source": "sum_csv workflow"})
"""
},
)
@@ -1106,7 +1107,7 @@ async def test_mcp_server_empty_tools_dir(
storage = tmp_path / "storage"
storage.mkdir()
(storage / "tools").mkdir()
- (storage / "skills").mkdir()
+ (storage / "workflows").mkdir()
(storage / "artifacts").mkdir()
# Server should still start (tools are optional)
@@ -1190,11 +1191,11 @@ async def test_mcp_server_run_code_runtime_error(
assert "4" in result2.content[0].text
@pytest.mark.asyncio
- async def test_mcp_server_invoke_nonexistent_skill(
+ async def test_mcp_server_invoke_nonexistent_workflow(
self,
mcp_storage_dir: tuple[Path, Path],
) -> None:
- """E2E: Invoking nonexistent skill returns error."""
+ """E2E: Invoking nonexistent workflow returns error."""
from mcp import ClientSession
from mcp.client.stdio import stdio_client
@@ -1207,12 +1208,12 @@ async def test_mcp_server_invoke_nonexistent_skill(
async with ClientSession(read, write) as session:
await session.initialize()
- # Try to invoke skill that doesn't exist
+ # Try to invoke workflow that doesn't exist
result = await session.call_tool(
- "run_code", {"code": 'skills.invoke("nonexistent_skill_xyz", arg=1)'}
+ "run_code", {"code": 'workflows.invoke("nonexistent_workflow_xyz", arg=1)'}
)
- # Should return error about skill not found
+ # Should return error about workflow not found
text = result.content[0].text.lower()
assert "error" in text or "not found" in text or "does not exist" in text
@@ -1521,7 +1522,7 @@ async def test_list_deps_empty_returns_empty_list(
storage = tmp_path / "storage"
storage.mkdir()
(storage / "tools").mkdir()
- (storage / "skills").mkdir()
+ (storage / "workflows").mkdir()
(storage / "artifacts").mkdir()
(storage / "deps").mkdir()
diff --git a/tests/test_negative.py b/tests/test_negative.py
index fdb9ad9..e3cd1ac 100644
--- a/tests/test_negative.py
+++ b/tests/test_negative.py
@@ -173,18 +173,18 @@ async def test_tools_escape_hatch_nonexistent_error(
assert result.error is not None
-# --- Skills Namespace Errors ---
+# --- Workflows Namespace Errors ---
-class TestSkillsNamespaceErrors:
- """Tests for skills namespace error handling."""
+class TestWorkflowsNamespaceErrors:
+ """Tests for workflows namespace error handling."""
@pytest.fixture
def storage(self, tmp_path: Path) -> FileStorage:
- """Create FileStorage with skills directory."""
- skills_dir = tmp_path / "skills"
- skills_dir.mkdir()
- (skills_dir / "divide.py").write_text(
+ """Create FileStorage with workflows directory."""
+ workflows_dir = tmp_path / "workflows"
+ workflows_dir.mkdir()
+ (workflows_dir / "divide.py").write_text(
'''"""Divide two numbers."""
async def run(a: int, b: int) -> float:
@@ -194,10 +194,10 @@ async def run(a: int, b: int) -> float:
return FileStorage(tmp_path)
@pytest.mark.asyncio
- async def test_skill_not_found_error(self, storage: FileStorage) -> None:
- """Calling nonexistent skill gives clear error."""
+ async def test_workflow_not_found_error(self, storage: FileStorage) -> None:
+ """Calling nonexistent workflow gives clear error."""
async with Session(storage=storage) as session:
- result = await session.run("skills.nonexistent()")
+ result = await session.run("workflows.nonexistent()")
assert not result.is_ok
assert result.error is not None
@@ -208,33 +208,33 @@ async def test_skill_not_found_error(self, storage: FileStorage) -> None:
)
@pytest.mark.asyncio
- async def test_skill_missing_required_arg_error(self, storage: FileStorage) -> None:
- """Skill called without required args gives clear error."""
+ async def test_workflow_missing_required_arg_error(self, storage: FileStorage) -> None:
+ """Workflow called without required args gives clear error."""
async with Session(storage=storage) as session:
- result = await session.run("skills.divide()") # Missing required args
+ result = await session.run("workflows.divide()") # Missing required args
assert not result.is_ok
assert result.error is not None
@pytest.mark.asyncio
- async def test_skill_runtime_error_captured(self, storage: FileStorage) -> None:
- """Runtime error in skill is captured."""
+ async def test_workflow_runtime_error_captured(self, storage: FileStorage) -> None:
+ """Runtime error in workflow is captured."""
async with Session(storage=storage) as session:
- result = await session.run("skills.divide(a=1, b=0)") # Division by zero
+ result = await session.run("workflows.divide(a=1, b=0)") # Division by zero
assert not result.is_ok
assert result.error is not None
assert "ZeroDivision" in result.error or "division" in result.error.lower()
@pytest.mark.asyncio
- async def test_skill_create_invalid_source_error(self, storage: FileStorage) -> None:
- """Creating skill with invalid source gives error."""
+ async def test_workflow_create_invalid_source_error(self, storage: FileStorage) -> None:
+ """Creating workflow with invalid source gives error."""
async with Session(storage=storage) as session:
result = await session.run(
"""
-skills.create(
+workflows.create(
name="bad",
- description="Invalid skill",
+ description="Invalid workflow",
source="async def run( INVALID SYNTAX"
)
"""
@@ -244,12 +244,12 @@ async def test_skill_create_invalid_source_error(self, storage: FileStorage) ->
assert result.error is not None
@pytest.mark.asyncio
- async def test_skill_create_missing_run_function_error(self, storage: FileStorage) -> None:
- """Creating skill without run() function gives error."""
+ async def test_workflow_create_missing_run_function_error(self, storage: FileStorage) -> None:
+ """Creating workflow without run() function gives error."""
async with Session(storage=storage) as session:
result = await session.run(
"""
-skills.create(
+workflows.create(
name="norun",
description="No run function",
source="def helper(): return 1"
@@ -260,7 +260,7 @@ async def test_skill_create_missing_run_function_error(self, storage: FileStorag
# Should either fail at creation or when calling
if result.is_ok:
# If creation succeeded, calling should fail
- result = await session.run("skills.norun()")
+ result = await session.run("workflows.norun()")
assert not result.is_ok
@@ -331,8 +331,8 @@ async def test_invalid_path_handling(self, tmp_path: Path) -> None:
storage = FileStorage(nonexistent)
# Should work (directory created on demand)
- # Storage no longer provides tools (executor-owned), test skill library instead
- library = storage.get_skill_library()
+ # Storage no longer provides tools (executor-owned), test workflow library instead
+ library = storage.get_workflow_library()
result = library.list()
assert isinstance(result, list)
@@ -368,17 +368,17 @@ async def test_corrupted_yaml_handling(self, tmp_path: Path) -> None:
except Exception:
pass # Raising on corrupted files is acceptable
- def test_corrupted_skill_handling(self, tmp_path: Path) -> None:
- """FileStorage handles corrupted skill files."""
- skills_dir = tmp_path / "skills"
- skills_dir.mkdir()
- (skills_dir / "bad.py").write_text("async def run( INVALID")
+ def test_corrupted_workflow_handling(self, tmp_path: Path) -> None:
+ """FileStorage handles corrupted workflow files."""
+ workflows_dir = tmp_path / "workflows"
+ workflows_dir.mkdir()
+ (workflows_dir / "bad.py").write_text("async def run( INVALID")
storage = FileStorage(tmp_path)
# Should not crash on list
try:
- library = storage.get_skill_library()
+ library = storage.get_workflow_library()
result = library.list()
assert isinstance(result, list)
except Exception:
@@ -394,22 +394,22 @@ async def test_connection_error_handling(self, mock_redis: MockRedisClient) -> N
storage = RedisStorage(redis=mock_redis, prefix="test")
# Mock client always works, so this tests basic functionality
- # Storage no longer provides tools (executor-owned), test skill library instead
- library = storage.get_skill_library()
+ # Storage no longer provides tools (executor-owned), test workflow library instead
+ library = storage.get_workflow_library()
result = library.list()
assert isinstance(result, list)
@pytest.mark.asyncio
async def test_deserialization_error_handling(self, mock_redis: MockRedisClient) -> None:
- """RedisStorage handles corrupted skill data."""
+ """RedisStorage handles corrupted workflow data."""
storage = RedisStorage(redis=mock_redis, prefix="test")
- # Manually inject corrupted data into skills
- mock_redis.hset("test:skills:__skills__", "bad", b"not valid json {{{")
+ # Manually inject corrupted data into workflows
+ mock_redis.hset("test:workflows:__workflows__", "bad", b"not valid json {{{")
# Should not crash - may skip or error gracefully
try:
- library = storage.get_skill_library()
+ library = storage.get_workflow_library()
result = library.list()
# May return empty or skip corrupted entries
assert isinstance(result, list)
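The contract these negative tests pin down: as exercised here, `session.run()` does not raise for user-code failures; the failure comes back on the result object. The calling convention, with `Session` again assumed to be a top-level export:

```python
import asyncio
from pathlib import Path

from py_code_mode import Session  # assumed export, as in earlier sketches
from py_code_mode.storage.backends import FileStorage

async def error_demo(base: Path) -> None:
    async with Session(storage=FileStorage(base)) as session:
        result = await session.run("workflows.nonexistent()")
        if not result.is_ok:
            # Failures surface as strings on result.error rather than as exceptions.
            print(f"failed: {result.error}")

asyncio.run(error_demo(Path("/tmp/error-demo")))
```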
diff --git a/tests/test_redis_vector_store.py b/tests/test_redis_vector_store.py
index 813a443..b332e28 100644
--- a/tests/test_redis_vector_store.py
+++ b/tests/test_redis_vector_store.py
@@ -4,7 +4,7 @@
Tests fail because RedisVectorStore doesn't exist yet.
RedisVectorStore uses Redis with the RediSearch module for distributed vector storage,
-enabling multiple agents to share skill embeddings across deployments.
+enabling multiple agents to share workflow embeddings across deployments.
"""
from __future__ import annotations
@@ -16,7 +16,7 @@
if TYPE_CHECKING:
from redis import Redis
- from py_code_mode.skills.embeddings import EmbeddingProvider
+ from py_code_mode.workflows.embeddings import EmbeddingProvider
@pytest.fixture
@@ -59,16 +59,16 @@ class TestRedisVectorStoreImport:
def test_redis_vector_store_importable_when_redis_available(self) -> None:
"""RedisVectorStore should be importable when redis is installed."""
pytest.importorskip("redis")
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
assert RedisVectorStore is not None
def test_redis_vector_store_satisfies_protocol(self, redis_client: Redis) -> None:
"""RedisVectorStore should implement VectorStore protocol."""
pytest.importorskip("redis")
- from py_code_mode.skills.embeddings import MockEmbedder
- from py_code_mode.skills.vector_store import VectorStore
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.embeddings import MockEmbedder
+ from py_code_mode.workflows.vector_store import VectorStore
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
# Protocol compliance via isinstance check
embedder = MockEmbedder()
@@ -88,7 +88,7 @@ class TestRedisVectorStoreInitialization:
@pytest.fixture
def mock_embedder(self) -> EmbeddingProvider:
"""Mock embedder with consistent behavior."""
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
return MockEmbedder(dimension=384)
@@ -97,13 +97,13 @@ def test_creates_search_index_on_init(
) -> None:
"""Should create RediSearch index with vector fields on initialization."""
pytest.importorskip("redis")
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
store = RedisVectorStore(
redis=redis_client,
embedder=mock_embedder,
- prefix="skills",
- index_name="skills_idx",
+ prefix="workflows",
+ index_name="workflows_idx",
)
# Index should be created (verify via count - empty index returns 0)
@@ -114,13 +114,13 @@ def test_stores_model_info_in_index_metadata(
) -> None:
"""Should persist ModelInfo in index for validation."""
pytest.importorskip("redis")
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
store = RedisVectorStore(
redis=redis_client,
embedder=mock_embedder,
- prefix="skills",
- index_name="skills_idx",
+ prefix="workflows",
+ index_name="workflows_idx",
)
# Should be able to retrieve model info
@@ -135,13 +135,13 @@ def test_uses_cosine_similarity_metric(
) -> None:
"""Index should be configured for cosine similarity search."""
pytest.importorskip("redis")
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
store = RedisVectorStore(
redis=redis_client,
embedder=mock_embedder,
- prefix="skills",
- index_name="skills_idx",
+ prefix="workflows",
+ index_name="workflows_idx",
)
# Implementation should use COSINE distance metric
@@ -153,17 +153,17 @@ def test_reuses_existing_compatible_index(
) -> None:
"""Should reuse existing index if model matches."""
pytest.importorskip("redis")
- from py_code_mode.skills.embeddings import MockEmbedder
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.embeddings import MockEmbedder
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
# Create first store and add data
store1 = RedisVectorStore(
redis=redis_client,
embedder=mock_embedder,
- prefix="skills",
- index_name="skills_idx",
+ prefix="workflows",
+ index_name="workflows_idx",
)
- store1.add("skill1", "Test skill", "async def run(): pass", "hash1")
+ store1.add("workflow1", "Test workflow", "async def run(): pass", "hash1")
assert store1.count() == 1
# Create second store with same config - should reuse index
@@ -171,13 +171,13 @@ def test_reuses_existing_compatible_index(
store2 = RedisVectorStore(
redis=redis_client,
embedder=same_embedder,
- prefix="skills",
- index_name="skills_idx",
+ prefix="workflows",
+ index_name="workflows_idx",
)
# Data should still be present
assert store2.count() == 1
- assert store2.get_content_hash("skill1") == "hash1"
+ assert store2.get_content_hash("workflow1") == "hash1"
class TestRedisVectorStoreModelValidation:
@@ -186,14 +186,14 @@ class TestRedisVectorStoreModelValidation:
@pytest.fixture
def mock_embedder(self) -> EmbeddingProvider:
"""Mock embedder with consistent behavior."""
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
return MockEmbedder(dimension=384)
@pytest.fixture
def different_embedder(self) -> EmbeddingProvider:
"""Different embedder to trigger model change."""
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
# Different dimension means different model
return MockEmbedder(dimension=768)
@@ -206,18 +206,18 @@ def test_detects_model_change_different_dimension(
) -> None:
"""Should detect when model dimension changes."""
pytest.importorskip("redis")
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
# Create store with first embedder (384-dim)
store1 = RedisVectorStore(
redis=redis_client,
embedder=mock_embedder,
- prefix="skills",
- index_name="skills_idx",
+ prefix="workflows",
+ index_name="workflows_idx",
)
store1.add(
- id="skill1",
- description="Test skill",
+ id="workflow1",
+ description="Test workflow",
source="async def run(): pass",
content_hash="abc123",
)
@@ -227,8 +227,8 @@ def test_detects_model_change_different_dimension(
store2 = RedisVectorStore(
redis=redis_client,
embedder=different_embedder,
- prefix="skills",
- index_name="skills_idx",
+ prefix="workflows",
+ index_name="workflows_idx",
)
# Model change should have cleared vectors
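Nothing in these tests dictates how the check is implemented; one sketch that satisfies the contract, using a hypothetical helper (`_persist_model_info`) and assuming `ModelInfo` carries a model name and dimension:

```python
def _validate_model(self) -> None:
    # Compare the ModelInfo persisted at index creation with the embedder
    # we were constructed with. Any mismatch (dimension or model name)
    # makes every stored vector unusable, so wipe them all, never partially.
    stored = self.get_model_info()
    current = ModelInfo(name=type(self.embedder).__name__,
                        dimension=self.embedder.dimension)
    if stored is not None and stored != current:
        self.clear()
    self._persist_model_info(current)  # hypothetical helper
```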
@@ -239,19 +239,19 @@ def test_preserves_vectors_when_model_unchanged(
) -> None:
"""Should keep vectors when reopening with same model."""
pytest.importorskip("redis")
- from py_code_mode.skills.embeddings import MockEmbedder
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.embeddings import MockEmbedder
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
# Create store and add vectors
store1 = RedisVectorStore(
redis=redis_client,
embedder=mock_embedder,
- prefix="skills",
- index_name="skills_idx",
+ prefix="workflows",
+ index_name="workflows_idx",
)
store1.add(
- id="skill1",
- description="Test skill",
+ id="workflow1",
+ description="Test workflow",
source="async def run(): pass",
content_hash="abc123",
)
@@ -264,8 +264,8 @@ def test_preserves_vectors_when_model_unchanged(
store2 = RedisVectorStore(
redis=redis_client,
embedder=same_embedder,
- prefix="skills",
- index_name="skills_idx",
+ prefix="workflows",
+ index_name="workflows_idx",
)
# Vectors should be preserved
@@ -279,19 +279,19 @@ def test_clears_all_vectors_on_model_change(
) -> None:
"""Model change should clear entire index, not partial."""
pytest.importorskip("redis")
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
- # Add multiple skills
+ # Add multiple workflows
store1 = RedisVectorStore(
redis=redis_client,
embedder=mock_embedder,
- prefix="skills",
- index_name="skills_idx",
+ prefix="workflows",
+ index_name="workflows_idx",
)
for i in range(5):
store1.add(
- id=f"skill{i}",
- description=f"Skill {i}",
+ id=f"workflow{i}",
+ description=f"Workflow {i}",
source=f"async def run(): return {i}",
content_hash=f"hash{i}",
)
@@ -301,8 +301,8 @@ def test_clears_all_vectors_on_model_change(
store2 = RedisVectorStore(
redis=redis_client,
embedder=different_embedder,
- prefix="skills",
- index_name="skills_idx",
+ prefix="workflows",
+ index_name="workflows_idx",
)
# All vectors should be cleared
@@ -315,7 +315,7 @@ class TestRedisVectorStoreCRUD:
@pytest.fixture
def mock_embedder(self) -> EmbeddingProvider:
"""Mock embedder for testing."""
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
return MockEmbedder(dimension=384)
@@ -323,7 +323,7 @@ def mock_embedder(self) -> EmbeddingProvider:
def store(self, redis_client: Redis, mock_embedder: EmbeddingProvider):
"""Fresh RedisVectorStore for each test."""
pytest.importorskip("redis")
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
return RedisVectorStore(
redis=redis_client,
@@ -341,31 +341,31 @@ def test_add_embeds_and_stores_vectors(self, store) -> None:
content_hash="abc123def456",
)
- # Verify skill was indexed
+ # Verify workflow was indexed
assert store.count() == 1
def test_add_stores_content_hash(self, store) -> None:
"""add() should persist content hash for change detection."""
store.add(
- id="skill1",
- description="Test skill",
+ id="workflow1",
+ description="Test workflow",
source="async def run(): pass",
content_hash="contenthash123",
)
# Should be able to retrieve stored hash
- stored_hash = store.get_content_hash("skill1")
+ stored_hash = store.get_content_hash("workflow1")
assert stored_hash == "contenthash123"
def test_get_content_hash_returns_none_for_nonexistent(self, store) -> None:
- """get_content_hash() should return None for skills not in index."""
- hash_value = store.get_content_hash("nonexistent_skill")
+ """get_content_hash() should return None for workflows not in index."""
+ hash_value = store.get_content_hash("nonexistent_workflow")
assert hash_value is None
- def test_add_overwrites_existing_skill(self, store) -> None:
- """Adding same skill ID should update vectors, not duplicate."""
+ def test_add_overwrites_existing_workflow(self, store) -> None:
+ """Adding same workflow ID should update vectors, not duplicate."""
store.add(
- id="skill1",
+ id="workflow1",
description="Original description",
source="async def run(): return 1",
content_hash="hash1",
@@ -373,57 +373,57 @@ def test_add_overwrites_existing_skill(self, store) -> None:
assert store.count() == 1
store.add(
- id="skill1",
+ id="workflow1",
description="Updated description",
source="async def run(): return 2",
content_hash="hash2",
)
- # Should still be 1 skill (updated, not duplicated)
+ # Should still be 1 workflow (updated, not duplicated)
assert store.count() == 1
# Hash should be updated
- assert store.get_content_hash("skill1") == "hash2"
+ assert store.get_content_hash("workflow1") == "hash2"
- def test_remove_deletes_skill_vectors(self, store) -> None:
+ def test_remove_deletes_workflow_vectors(self, store) -> None:
"""remove() should delete both description and code vectors."""
store.add(
- id="skill1",
+ id="workflow1",
description="Test",
source="async def run(): pass",
content_hash="hash1",
)
assert store.count() == 1
- result = store.remove("skill1")
+ result = store.remove("workflow1")
assert result is True
assert store.count() == 0
- assert store.get_content_hash("skill1") is None
+ assert store.get_content_hash("workflow1") is None
def test_remove_returns_false_for_nonexistent(self, store) -> None:
- """remove() should return False if skill not in index."""
+ """remove() should return False if workflow not in index."""
result = store.remove("nonexistent")
assert result is False
- def test_count_reflects_indexed_skills(self, store) -> None:
- """count() should return number of unique skills indexed."""
+ def test_count_reflects_indexed_workflows(self, store) -> None:
+ """count() should return number of unique workflows indexed."""
assert store.count() == 0
- store.add("skill1", "desc1", "code1", "hash1")
+ store.add("workflow1", "desc1", "code1", "hash1")
assert store.count() == 1
- store.add("skill2", "desc2", "code2", "hash2")
+ store.add("workflow2", "desc2", "code2", "hash2")
assert store.count() == 2
- store.remove("skill1")
+ store.remove("workflow1")
assert store.count() == 1
def test_clear_removes_all_vectors(self, store) -> None:
- """clear() should remove all indexed skills and drop index."""
- # Add multiple skills
+ """clear() should remove all indexed workflows and drop index."""
+ # Add multiple workflows
for i in range(5):
- store.add(f"skill{i}", f"desc{i}", f"code{i}", f"hash{i}")
+ store.add(f"workflow{i}", f"desc{i}", f"code{i}", f"hash{i}")
assert store.count() == 5
store.clear()
@@ -437,15 +437,15 @@ class TestRedisVectorStoreSimilaritySearch:
@pytest.fixture
def mock_embedder(self) -> EmbeddingProvider:
"""Mock embedder for deterministic testing."""
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
return MockEmbedder(dimension=384)
@pytest.fixture
def store(self, redis_client: Redis, mock_embedder: EmbeddingProvider):
- """Fresh store with sample skills."""
+ """Fresh store with sample workflows."""
pytest.importorskip("redis")
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
store = RedisVectorStore(
redis=redis_client,
@@ -454,7 +454,7 @@ def store(self, redis_client: Redis, mock_embedder: EmbeddingProvider):
index_name="test_idx",
)
- # Add diverse skills for search testing
+ # Add diverse workflows for search testing
store.add(
id="port_scanner",
description="Scan network ports using nmap",
@@ -487,7 +487,7 @@ def test_search_returns_search_results(self, store) -> None:
code_weight=0.3,
)
- from py_code_mode.skills.vector_store import SearchResult
+ from py_code_mode.workflows.vector_store import SearchResult
assert isinstance(results, list)
assert all(isinstance(r, SearchResult) for r in results)
@@ -574,7 +574,7 @@ def test_search_returns_empty_for_no_matches(
) -> None:
"""search() on empty index should return empty list."""
pytest.importorskip("redis")
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
empty_store = RedisVectorStore(
redis=redis_client,
@@ -592,8 +592,8 @@ def test_search_returns_empty_for_no_matches(
assert results == []
- def test_search_result_contains_skill_id(self, store) -> None:
- """SearchResult.id should contain the skill identifier."""
+ def test_search_result_contains_workflow_id(self, store) -> None:
+ """SearchResult.id should contain the workflow identifier."""
results = store.search(
query="network",
limit=10,
@@ -601,7 +601,7 @@ def test_search_result_contains_skill_id(self, store) -> None:
code_weight=0.3,
)
- # IDs should be skill names we added
+ # IDs should be workflow names we added
result_ids = {r.id for r in results}
assert result_ids.issubset({"port_scanner", "web_scraper", "file_reader"})
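The searches exercised above would, under the hood, map onto a RediSearch KNN query roughly like this. The query syntax and redis-py calls are real; the field name, index name, and the stand-in query vector are assumptions:

```python
import numpy as np
from redis import Redis
from redis.commands.search.query import Query

r = Redis()
# Stand-in for embedder.embed(["network"])[0]; any 384-dim float32 vector.
query_vec = np.random.default_rng(0).random(384, dtype=np.float32)

q = (
    Query("*=>[KNN 10 @description_vector $vec AS score]")
    .sort_by("score")        # cosine distance: lower means more similar
    .return_fields("id", "score")
    .dialect(2)              # KNN syntax requires query dialect >= 2
)
results = r.ft("workflows_idx").search(q, query_params={"vec": query_vec.tobytes()})
```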
@@ -612,7 +612,7 @@ class TestRedisVectorStoreContentHashInvalidation:
@pytest.fixture
def embedder_with_call_tracking(self):
"""Embedder that tracks how many times embed() is called."""
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
embedder = MockEmbedder(dimension=384)
@@ -630,9 +630,9 @@ def tracked_embed(texts: list[str]):
def test_same_content_hash_skips_re_embedding(
self, redis_client: Redis, embedder_with_call_tracking
) -> None:
- """Adding skill with same hash should skip embedding (idempotent)."""
+ """Adding workflow with same hash should skip embedding (idempotent)."""
pytest.importorskip("redis")
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
store = RedisVectorStore(
redis=redis_client,
@@ -643,8 +643,8 @@ def test_same_content_hash_skips_re_embedding(
# First add: should embed
store.add(
- id="skill1",
- description="Test skill",
+ id="workflow1",
+ description="Test workflow",
source="async def run(): pass",
content_hash="stable_hash",
)
@@ -652,8 +652,8 @@ def test_same_content_hash_skips_re_embedding(
# Add again with same hash: should NOT re-embed
store.add(
- id="skill1",
- description="Test skill",
+ id="workflow1",
+ description="Test workflow",
source="async def run(): pass",
content_hash="stable_hash",
)
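The two adds above pin down an idempotency contract: identical content hashes must not trigger a second embedding call. A minimal sketch of an `add()` that satisfies it, with `_embed_and_store` as a hypothetical internal:

```python
def add(self, id: str, description: str, source: str, content_hash: str) -> None:
    # Stored hash matches: vectors are already current, so skip the
    # (comparatively expensive) embedding call entirely.
    if self.get_content_hash(id) == content_hash:
        return
    self._embed_and_store(id, description, source, content_hash)
```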
@@ -664,9 +664,9 @@ def test_same_content_hash_skips_re_embedding(
def test_different_content_hash_triggers_re_embedding(
self, redis_client: Redis, embedder_with_call_tracking
) -> None:
- """Adding skill with different hash should re-embed."""
+ """Adding workflow with different hash should re-embed."""
pytest.importorskip("redis")
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
store = RedisVectorStore(
redis=redis_client,
@@ -677,7 +677,7 @@ def test_different_content_hash_triggers_re_embedding(
# First add
store.add(
- id="skill1",
+ id="workflow1",
description="Original description",
source="async def run(): return 1",
content_hash="hash_v1",
@@ -686,7 +686,7 @@ def test_different_content_hash_triggers_re_embedding(
# Update with different hash: should re-embed
store.add(
- id="skill1",
+ id="workflow1",
description="Updated description",
source="async def run(): return 2",
content_hash="hash_v2",
@@ -702,7 +702,7 @@ class TestRedisVectorStorePersistence:
@pytest.fixture
def mock_embedder(self) -> EmbeddingProvider:
"""Mock embedder."""
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
return MockEmbedder(dimension=384)
@@ -711,18 +711,18 @@ def test_vectors_persist_across_connections(
) -> None:
"""Vectors should persist in Redis and reload on next connection."""
pytest.importorskip("redis")
- from py_code_mode.skills.embeddings import MockEmbedder
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.embeddings import MockEmbedder
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
# Create store, add data
store1 = RedisVectorStore(
redis=redis_client,
embedder=mock_embedder,
- prefix="skills",
- index_name="skills_idx",
+ prefix="workflows",
+ index_name="workflows_idx",
)
- store1.add("skill1", "Network scanner", "nmap code", "hash1")
- store1.add("skill2", "File reader", "file code", "hash2")
+ store1.add("workflow1", "Network scanner", "nmap code", "hash1")
+ store1.add("workflow2", "File reader", "file code", "hash2")
assert store1.count() == 2
# Create new store instance (simulates reconnection)
@@ -730,29 +730,29 @@ def test_vectors_persist_across_connections(
store2 = RedisVectorStore(
redis=redis_client,
embedder=fresh_embedder,
- prefix="skills",
- index_name="skills_idx",
+ prefix="workflows",
+ index_name="workflows_idx",
)
# Vectors should be reloaded from Redis
assert store2.count() == 2
- assert store2.get_content_hash("skill1") == "hash1"
- assert store2.get_content_hash("skill2") == "hash2"
+ assert store2.get_content_hash("workflow1") == "hash1"
+ assert store2.get_content_hash("workflow2") == "hash2"
def test_model_metadata_persists_across_sessions(
self, redis_client: Redis, mock_embedder: EmbeddingProvider
) -> None:
"""Model info should persist and be validated on reconnect."""
pytest.importorskip("redis")
- from py_code_mode.skills.embeddings import MockEmbedder
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.embeddings import MockEmbedder
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
# First session
store1 = RedisVectorStore(
redis=redis_client,
embedder=mock_embedder,
- prefix="skills",
- index_name="skills_idx",
+ prefix="workflows",
+ index_name="workflows_idx",
)
model_info1 = store1.get_model_info()
@@ -761,8 +761,8 @@ def test_model_metadata_persists_across_sessions(
store2 = RedisVectorStore(
redis=redis_client,
embedder=same_embedder,
- prefix="skills",
- index_name="skills_idx",
+ prefix="workflows",
+ index_name="workflows_idx",
)
model_info2 = store2.get_model_info()
@@ -774,15 +774,15 @@ def test_search_works_on_persisted_vectors(
) -> None:
"""Search should work on vectors loaded from Redis."""
pytest.importorskip("redis")
- from py_code_mode.skills.embeddings import MockEmbedder
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.embeddings import MockEmbedder
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
- # First session: add skills
+ # First session: add workflows
store1 = RedisVectorStore(
redis=redis_client,
embedder=mock_embedder,
- prefix="skills",
- index_name="skills_idx",
+ prefix="workflows",
+ index_name="workflows_idx",
)
store1.add(
"port_scanner",
@@ -796,8 +796,8 @@ def test_search_works_on_persisted_vectors(
store2 = RedisVectorStore(
redis=redis_client,
embedder=fresh_embedder,
- prefix="skills",
- index_name="skills_idx",
+ prefix="workflows",
+ index_name="workflows_idx",
)
results = store2.search(
@@ -807,7 +807,7 @@ def test_search_works_on_persisted_vectors(
code_weight=0.3,
)
- # Should find the persisted skill
+ # Should find the persisted workflow
assert len(results) > 0
assert any(r.id == "port_scanner" for r in results)
@@ -818,16 +818,16 @@ class TestRedisVectorStoreIndexIsolation:
@pytest.fixture
def mock_embedder(self) -> EmbeddingProvider:
"""Mock embedder."""
- from py_code_mode.skills.embeddings import MockEmbedder
+ from py_code_mode.workflows.embeddings import MockEmbedder
return MockEmbedder(dimension=384)
- def test_different_prefixes_isolate_skills(
+ def test_different_prefixes_isolate_workflows(
self, redis_client: Redis, mock_embedder: EmbeddingProvider
) -> None:
- """Skills in different prefixes should not interfere."""
+ """Workflows in different prefixes should not interfere."""
pytest.importorskip("redis")
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
# Create two stores with different prefixes
store1 = RedisVectorStore(
@@ -844,42 +844,42 @@ def test_different_prefixes_isolate_skills(
index_name="project_b_idx",
)
- # Add skills to each
- store1.add("skill1", "Project A skill", "code1", "hash1")
- store2.add("skill2", "Project B skill", "code2", "hash2")
+ # Add workflows to each
+ store1.add("workflow1", "Project A workflow", "code1", "hash1")
+ store2.add("workflow2", "Project B workflow", "code2", "hash2")
- # Each store should only see its own skills
+ # Each store should only see its own workflows
assert store1.count() == 1
assert store2.count() == 1
- assert store1.get_content_hash("skill1") == "hash1"
- assert store1.get_content_hash("skill2") is None
- assert store2.get_content_hash("skill2") == "hash2"
- assert store2.get_content_hash("skill1") is None
+ assert store1.get_content_hash("workflow1") == "hash1"
+ assert store1.get_content_hash("workflow2") is None
+ assert store2.get_content_hash("workflow2") == "hash2"
+ assert store2.get_content_hash("workflow1") is None
def test_different_index_names_create_separate_indexes(
self, redis_client: Redis, mock_embedder: EmbeddingProvider
) -> None:
"""Different index names should create independent search indexes."""
pytest.importorskip("redis")
- from py_code_mode.skills.vector_stores.redis_store import RedisVectorStore
+ from py_code_mode.workflows.vector_stores.redis_store import RedisVectorStore
# Create two stores with same prefix but different index names
store1 = RedisVectorStore(
redis=redis_client,
embedder=mock_embedder,
- prefix="skills",
+ prefix="workflows",
index_name="idx_v1",
)
store2 = RedisVectorStore(
redis=redis_client,
embedder=mock_embedder,
- prefix="skills",
+ prefix="workflows",
index_name="idx_v2",
)
- # Add skills to first index
- store1.add("skill1", "First index skill", "code1", "hash1")
+ # Add workflows to first index
+ store1.add("workflow1", "First index workflow", "code1", "hash1")
# Second index should be empty
assert store1.count() == 1
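The isolation asserted above falls out of key naming: every record lives under its store's prefix, and each index is defined over that prefix only, so stores sharing one Redis instance never see each other's data. A sketch with a hypothetical key helper:

```python
def _key(self, workflow_id: str) -> str:
    # e.g. "project_a:workflow1" vs "project_b:workflow2": disjoint
    # keyspaces, each covered by its own RediSearch index definition.
    return f"{self.prefix}:{workflow_id}"
```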
diff --git a/tests/test_registry.py b/tests/test_registry.py
index 67e7b05..068a429 100644
--- a/tests/test_registry.py
+++ b/tests/test_registry.py
@@ -392,7 +392,7 @@ async def test_semantic_search_end_to_end(
) -> None:
"""End-to-end: semantic search finds tools by intent."""
# Use real embedder for integration test
- from py_code_mode.skills.embeddings import Embedder
+ from py_code_mode.workflows.embeddings import Embedder
registry = ToolRegistry(embedder=Embedder())
registry.register_adapter(web_adapter)
diff --git a/tests/test_rpc_errors.py b/tests/test_rpc_errors.py
index 311669d..97a1918 100644
--- a/tests/test_rpc_errors.py
+++ b/tests/test_rpc_errors.py
@@ -57,67 +57,67 @@ async def executor_with_deps_allowed(tmp_path: Path):
# =============================================================================
-# SkillError Tests
+# WorkflowError Tests
# =============================================================================
@pytest.mark.slow
@pytest.mark.xdist_group("subprocess")
-class TestSkillErrors:
- """Tests for skill-related RPC errors raising SkillError."""
+class TestWorkflowErrors:
+ """Tests for workflow-related RPC errors raising WorkflowError."""
@pytest.mark.asyncio
- async def test_skill_with_missing_import_raises_skill_error(
+ async def test_workflow_with_missing_import_raises_workflow_error(
self, executor_with_deps_allowed
) -> None:
- """Skill with missing import raises SkillError with ModuleNotFoundError type.
+ """Workflow with missing import raises WorkflowError with ModuleNotFoundError type.
- Contract: When a skill invocation fails due to missing import, the error
- should be SkillError with original_type=ModuleNotFoundError.
+ Contract: When a workflow invocation fails due to missing import, the error
+ should be WorkflowError with original_type=ModuleNotFoundError.
Breaks when: Host sends unstructured errors or kernel doesn't parse correctly.
"""
- # Create a skill that imports a nonexistent module
+ # Create a workflow that imports a nonexistent module
create_code = '''
-skills.create(
- "broken_skill",
+workflows.create(
+ "broken_workflow",
"""async def run():
import this_module_definitely_does_not_exist
return "never reached"
""",
- "A skill that will fail on import"
+ "A workflow that will fail on import"
)
'''
result = await executor_with_deps_allowed.run(create_code)
- assert result.error is None, f"Failed to create skill: {result.error}"
+ assert result.error is None, f"Failed to create workflow: {result.error}"
- # Invoke the skill - should raise SkillError
- result = await executor_with_deps_allowed.run('skills.invoke("broken_skill")')
+ # Invoke the workflow - should raise WorkflowError
+ result = await executor_with_deps_allowed.run('workflows.invoke("broken_workflow")')
assert result.error is not None
# Error message should contain structured information
- # Format: "skills.invoke: [ModuleNotFoundError] message"
- assert "skills.invoke" in result.error
+ # Format: "workflows.invoke: [ModuleNotFoundError] message"
+ assert "workflows.invoke" in result.error
assert "ModuleNotFoundError" in result.error
@pytest.mark.asyncio
- async def test_skill_creation_with_syntax_error_raises_skill_error(
+ async def test_workflow_creation_with_syntax_error_raises_workflow_error(
self, executor_with_deps_allowed
) -> None:
- """Skill creation with syntax error raises SkillError with SyntaxError type.
+ """Workflow creation with syntax error raises WorkflowError with SyntaxError type.
- Contract: When skill creation fails due to syntax error, the error
- should be SkillError with original_type=SyntaxError.
+ Contract: When workflow creation fails due to syntax error, the error
+ should be WorkflowError with original_type=SyntaxError.
"""
- # Try to create a skill with invalid Python syntax
+ # Try to create a workflow with invalid Python syntax
result = await executor_with_deps_allowed.run("""
-skills.create("bad_syntax", "if x", "broken skill")
+workflows.create("bad_syntax", "if x", "broken workflow")
""")
assert result.error is not None
# Error message should contain structured information
- # Format: "skills.create: [SyntaxError] message"
- assert "skills.create" in result.error
+ # Format: "workflows.create: [SyntaxError] message"
+ assert "workflows.create" in result.error
assert "SyntaxError" in result.error or "syntax" in result.error.lower()
@@ -229,21 +229,21 @@ async def test_error_preserves_original_exception_type(
Contract: The [ExceptionType] should appear in the error message.
"""
- # Create a skill that raises a specific exception type
+ # Create a workflow that raises a specific exception type
create_code = '''
-skills.create(
- "value_error_skill",
+workflows.create(
+ "value_error_workflow",
"""async def run():
raise ValueError("intentional value error")
""",
- "A skill that raises ValueError"
+ "A workflow that raises ValueError"
)
'''
result = await executor_with_deps_allowed.run(create_code)
- assert result.error is None, f"Failed to create skill: {result.error}"
+ assert result.error is None, f"Failed to create workflow: {result.error}"
- # Invoke - should raise SkillError with ValueError in the message
- result = await executor_with_deps_allowed.run('skills.invoke("value_error_skill")')
+ # Invoke - should raise WorkflowError with ValueError in the message
+ result = await executor_with_deps_allowed.run('workflows.invoke("value_error_workflow")')
assert result.error is not None
# Should preserve the original exception type in brackets
@@ -304,12 +304,12 @@ class TestErrorClassHierarchy:
"""Tests verifying the error class hierarchy is properly defined in kernel."""
@pytest.mark.asyncio
- async def test_skill_error_is_namespace_error(self, executor_with_storage) -> None:
- """SkillError is a subclass of NamespaceError in the kernel.
+ async def test_workflow_error_is_namespace_error(self, executor_with_storage) -> None:
+ """WorkflowError is a subclass of NamespaceError in the kernel.
- Contract: SkillError inherits from NamespaceError.
+ Contract: WorkflowError inherits from NamespaceError.
"""
- result = await executor_with_storage.run("issubclass(SkillError, NamespaceError)")
+ result = await executor_with_storage.run("issubclass(WorkflowError, NamespaceError)")
assert result.error is None
assert result.value in (True, "True")
@@ -388,14 +388,14 @@ async def test_namespace_error_has_namespace_attribute(self, executor_with_stora
"""
result = await executor_with_storage.run("""
try:
- raise NamespaceError("skills", "invoke", "test error")
+ raise NamespaceError("workflows", "invoke", "test error")
except NamespaceError as e:
result = e.namespace
result
""")
assert result.error is None
- assert "skills" in str(result.value)
+ assert "workflows" in str(result.value)
@pytest.mark.asyncio
async def test_namespace_error_has_operation_attribute(self, executor_with_storage) -> None:
@@ -405,7 +405,7 @@ async def test_namespace_error_has_operation_attribute(self, executor_with_stora
"""
result = await executor_with_storage.run("""
try:
- raise NamespaceError("skills", "invoke", "test error")
+ raise NamespaceError("workflows", "invoke", "test error")
except NamespaceError as e:
result = e.operation
result
@@ -422,7 +422,7 @@ async def test_namespace_error_has_original_type_attribute(self, executor_with_s
"""
result = await executor_with_storage.run("""
try:
- raise NamespaceError("skills", "invoke", "test error", "ValueError")
+ raise NamespaceError("workflows", "invoke", "test error", "ValueError")
except NamespaceError as e:
result = e.original_type
result
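Collecting the contracts asserted in this file, the kernel-side hierarchy must look roughly like the sketch below. The attribute names, the subclass relation, and the `workflows.invoke: [Type] message` format are all asserted by the tests; the exact constructor signature is inferred from how `NamespaceError` is raised above:

```python
class NamespaceError(Exception):
    """Structured error raised by a namespace operation in the kernel."""

    def __init__(self, namespace: str, operation: str,
                 message: str, original_type: str | None = None) -> None:
        self.namespace = namespace          # e.g. "workflows"
        self.operation = operation          # e.g. "invoke"
        self.original_type = original_type  # e.g. "ModuleNotFoundError"
        prefix = f"[{original_type}] " if original_type else ""
        super().__init__(f"{namespace}.{operation}: {prefix}{message}")


class WorkflowError(NamespaceError):
    """Raised when a workflows.* operation fails."""
```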
diff --git a/tests/test_semantic.py b/tests/test_semantic.py
index f812e61..f59e82d 100644
--- a/tests/test_semantic.py
+++ b/tests/test_semantic.py
@@ -4,13 +4,13 @@
import pytest
-from py_code_mode.skills import PythonSkill
+from py_code_mode.workflows import PythonWorkflow
-def _make_skill(name: str, description: str, code: str) -> PythonSkill:
- """Helper to create a PythonSkill from minimal info."""
+def _make_workflow(name: str, description: str, code: str) -> PythonWorkflow:
+ """Helper to create a PythonWorkflow from minimal info."""
source = f'"""{description}"""\n\nasync def run():\n {code}'
- return PythonSkill.from_source(name=name, source=source, description=description)
+ return PythonWorkflow.from_source(name=name, source=source, description=description)
class TestEmbeddingProviderProtocol:
@@ -18,20 +18,20 @@ class TestEmbeddingProviderProtocol:
def test_provider_has_embed_method(self) -> None:
"""Provider must have embed() that returns vectors."""
- from py_code_mode.skills import EmbeddingProvider
+ from py_code_mode.workflows import EmbeddingProvider
# Protocol should define embed method
assert hasattr(EmbeddingProvider, "embed")
def test_provider_has_dimension_property(self) -> None:
"""Provider exposes embedding dimension for index allocation."""
- from py_code_mode.skills import EmbeddingProvider
+ from py_code_mode.workflows import EmbeddingProvider
assert hasattr(EmbeddingProvider, "dimension")
def test_embed_returns_list_of_vectors(self) -> None:
"""embed() takes list of strings, returns list of float vectors."""
- from py_code_mode.skills import MockEmbedder
+ from py_code_mode.workflows import MockEmbedder
embedder = MockEmbedder(dimension=384)
vectors = embedder.embed(["hello world", "test query"])
@@ -42,7 +42,7 @@ def test_embed_returns_list_of_vectors(self) -> None:
def test_embed_single_text(self) -> None:
"""Convenience: can embed single string."""
- from py_code_mode.skills import MockEmbedder
+ from py_code_mode.workflows import MockEmbedder
embedder = MockEmbedder(dimension=384)
vectors = embedder.embed(["single text"])
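The protocol these tests pin down, together with a deterministic mock in the spirit of `MockEmbedder`, looks roughly like this; the hash-based embedding scheme is illustrative, not the library's actual one:

```python
import hashlib
from typing import Protocol, runtime_checkable


@runtime_checkable
class EmbeddingProvider(Protocol):
    @property
    def dimension(self) -> int: ...

    def embed(self, texts: list[str]) -> list[list[float]]: ...


class MockEmbedder:
    """Deterministic stand-in: the same text always embeds identically."""

    def __init__(self, dimension: int = 384) -> None:
        self._dimension = dimension

    @property
    def dimension(self) -> int:
        return self._dimension

    def embed(self, texts: list[str]) -> list[list[float]]:
        vectors = []
        for text in texts:
            digest = hashlib.sha256(text.encode()).digest()
            vectors.append(
                [digest[i % len(digest)] / 255.0 for i in range(self._dimension)]
            )
        return vectors
```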
@@ -57,7 +57,7 @@ class TestEmbedder:
def embedder(self):
"""Create embedder - skip if model not available."""
pytest.importorskip("sentence_transformers")
- from py_code_mode.skills import Embedder
+ from py_code_mode.workflows import Embedder
return Embedder()
@@ -98,24 +98,24 @@ def test_detects_device(self, embedder) -> None:
assert embedder.device in ("mps", "cuda", "cpu")
-class TestSkillLibrary:
- """Tests for SkillLibrary semantic search with dual indexing."""
+class TestWorkflowLibrary:
+ """Tests for WorkflowLibrary semantic search with dual indexing."""
@pytest.fixture
- def sample_skills(self) -> list[PythonSkill]:
- """Sample skills for testing."""
+ def sample_workflows(self) -> list[PythonWorkflow]:
+ """Sample workflows for testing."""
return [
- _make_skill(
+ _make_workflow(
name="fetch_url",
description="Fetch content from a URL using HTTP GET request",
code="return requests.get(url).text",
),
- _make_skill(
+ _make_workflow(
name="parse_json",
description="Parse JSON string into Python dict",
code="return json.loads(text)",
),
- _make_skill(
+ _make_workflow(
name="write_file",
description="Write text content to a file on disk",
code="Path(path).write_text(content)",
@@ -123,54 +123,54 @@ def sample_skills(self) -> list[PythonSkill]:
]
@pytest.fixture
- def python_skill(self) -> PythonSkill:
- """A Python skill fixture."""
+ def python_workflow(self) -> PythonWorkflow:
+ """A Python workflow fixture."""
source = dedent('''
"""Calculate sum of numbers."""
async def run(numbers: list[int]) -> int:
return sum(numbers)
''').strip()
- return PythonSkill.from_source(name="sum_numbers", source=source)
+ return PythonWorkflow.from_source(name="sum_numbers", source=source)
def test_can_create_empty_library(self) -> None:
- """Library can be created without skills."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ """Library can be created without workflows."""
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder)
+ library = WorkflowLibrary(embedder)
assert len(library) == 0
- def test_add_skill_indexes_description(self, sample_skills: list[PythonSkill]) -> None:
- """Adding skill indexes its description for search."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ def test_add_workflow_indexes_description(self, sample_workflows: list[PythonWorkflow]) -> None:
+ """Adding workflow indexes its description for search."""
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder)
- library.add(sample_skills[0])
+ library = WorkflowLibrary(embedder)
+ library.add(sample_workflows[0])
assert len(library) == 1
- def test_add_skill_indexes_code(self, sample_skills: list[PythonSkill]) -> None:
- """Adding skill indexes its source code for search."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ def test_add_workflow_indexes_code(self, sample_workflows: list[PythonWorkflow]) -> None:
+ """Adding workflow indexes its source code for search."""
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder)
- library.add(sample_skills[0])
+ library = WorkflowLibrary(embedder)
+ library.add(sample_workflows[0])
- # Code should be indexed (we can't easily verify embedding, but skill should exist)
+ # Code should be indexed (we can't easily verify embedding, but workflow should exist)
assert library.get("fetch_url") is not None
- def test_search_by_description(self, sample_skills: list[PythonSkill]) -> None:
- """Search finds skills by description similarity."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ def test_search_by_description(self, sample_workflows: list[PythonWorkflow]) -> None:
+ """Search finds workflows by description similarity."""
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder)
- for skill in sample_skills:
- library.add(skill)
+ library = WorkflowLibrary(embedder)
+ for workflow in sample_workflows:
+ library.add(workflow)
# Should find fetch_url when searching for URL-related queries
results = library.search("download web content")
@@ -178,87 +178,89 @@ def test_search_by_description(self, sample_skills: list[PythonSkill]) -> None:
assert len(results) >= 1
# Results are returned - content depends on embedding model
- def test_search_by_code_intent(self, sample_skills: list[PythonSkill]) -> None:
- """Search finds skills by code content."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ def test_search_by_code_intent(self, sample_workflows: list[PythonWorkflow]) -> None:
+ """Search finds workflows by code content."""
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder)
- for skill in sample_skills:
- library.add(skill)
+ library = WorkflowLibrary(embedder)
+ for workflow in sample_workflows:
+ library.add(workflow)
# Search for something matching code
results = library.search("json.loads")
assert len(results) >= 1
- def test_combined_description_and_code_search(self, sample_skills: list[PythonSkill]) -> None:
+ def test_combined_description_and_code_search(
+ self, sample_workflows: list[PythonWorkflow]
+ ) -> None:
"""Search considers both description and code."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder)
- for skill in sample_skills:
- library.add(skill)
+ library = WorkflowLibrary(embedder)
+ for workflow in sample_workflows:
+ library.add(workflow)
# Query that matches description
results = library.search("fetch URL content")
assert len(results) >= 1
- def test_search_with_python_skill(
- self, sample_skills: list[PythonSkill], python_skill: PythonSkill
+ def test_search_with_python_workflow(
+ self, sample_workflows: list[PythonWorkflow], python_workflow: PythonWorkflow
) -> None:
- """Search works with Python skills that have full source."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ """Search works with Python workflows that have full source."""
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder)
- for skill in sample_skills:
- library.add(skill)
- library.add(python_skill)
+ library = WorkflowLibrary(embedder)
+ for workflow in sample_workflows:
+ library.add(workflow)
+ library.add(python_workflow)
- # Should find the Python skill
+ # Should find the Python workflow
results = library.search("calculate sum")
assert len(results) >= 1
- def test_get_by_name(self, sample_skills: list[PythonSkill]) -> None:
- """Can retrieve skill by exact name."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ def test_get_by_name(self, sample_workflows: list[PythonWorkflow]) -> None:
+ """Can retrieve workflow by exact name."""
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder)
- for skill in sample_skills:
- library.add(skill)
+ library = WorkflowLibrary(embedder)
+ for workflow in sample_workflows:
+ library.add(workflow)
- skill = library.get("parse_json")
+ workflow = library.get("parse_json")
- assert skill is not None
- assert skill.name == "parse_json"
+ assert workflow is not None
+ assert workflow.name == "parse_json"
- def test_search_limit(self, sample_skills: list[PythonSkill]) -> None:
+ def test_search_limit(self, sample_workflows: list[PythonWorkflow]) -> None:
"""Search respects limit parameter."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder)
- for skill in sample_skills:
- library.add(skill)
+ library = WorkflowLibrary(embedder)
+ for workflow in sample_workflows:
+ library.add(workflow)
results = library.search("content", limit=1)
assert len(results) == 1
- def test_remove_skill(self) -> None:
- """Can remove skill from library."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ def test_remove_workflow(self) -> None:
+ """Can remove workflow from library."""
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder)
+ library = WorkflowLibrary(embedder)
- skill = _make_skill("test", "Test skill", "pass")
- library.add(skill)
+ workflow = _make_workflow("test", "Test workflow", "pass")
+ library.add(workflow)
assert len(library) == 1
result = library.remove("test")
@@ -273,20 +275,20 @@ class TestRankingConfig:
def test_default_ranking_weights(self) -> None:
"""Default weights favor description over code."""
- from py_code_mode.skills import RankingConfig
+ from py_code_mode.workflows import RankingConfig
config = RankingConfig()
assert config.description_weight > config.code_weight
def test_code_only_ranking(self) -> None:
"""Can configure to only use code embeddings."""
- from py_code_mode.skills import MockEmbedder, RankingConfig, SkillLibrary
+ from py_code_mode.workflows import MockEmbedder, RankingConfig, WorkflowLibrary
config = RankingConfig(description_weight=0.0, code_weight=1.0)
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder, ranking=config)
+ library = WorkflowLibrary(embedder, ranking=config)
- library.add(_make_skill("test", "test skill", "pass"))
+ library.add(_make_workflow("test", "test workflow", "pass"))
# Should still work
results = library.search("test")
@@ -294,102 +296,102 @@ def test_code_only_ranking(self) -> None:
def test_threshold_filtering(self) -> None:
"""Can filter results below threshold."""
- from py_code_mode.skills import MockEmbedder, RankingConfig, SkillLibrary
+ from py_code_mode.workflows import MockEmbedder, RankingConfig, WorkflowLibrary
config = RankingConfig(min_score_threshold=0.99) # Very high threshold
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder, ranking=config)
+ library = WorkflowLibrary(embedder, ranking=config)
- # Add skill with low similarity to any query
- library.add(_make_skill("obscure", "very specific thing", "pass"))
+ # Add workflow with low similarity to any query
+ library.add(_make_workflow("obscure", "very specific thing", "pass"))
        # Most queries won't meet the high threshold; this test just verifies
        # that the threshold config is accepted. Actual filtering depends on the embeddings.
library.search("completely unrelated query")
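The weights configured in these tests amount to a blended per-workflow score. A sketch of the ranking arithmetic, using the `RankingConfig` fields the tests exercise (the combination formula itself is an assumption):

```python
from py_code_mode.workflows import RankingConfig


def blended_score(desc_sim: float, code_sim: float,
                  config: RankingConfig) -> float | None:
    # Defaults weight description above code; a code-only config simply
    # zeroes the description term (and vice versa).
    score = (config.description_weight * desc_sim
             + config.code_weight * code_sim)
    # Results below the threshold are dropped from search output.
    return score if score >= config.min_score_threshold else None
```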
-class TestSkillLibraryWithStore:
- """Tests for SkillLibrary with storage backend integration.
+class TestWorkflowLibraryWithStore:
+ """Tests for WorkflowLibrary with storage backend integration.
Verifies that add/remove/refresh operations correctly coordinate
between the library's embedding index and the storage backend.
"""
@pytest.fixture
- def sample_skills(self) -> list[PythonSkill]:
- """Sample skills for testing."""
+ def sample_workflows(self) -> list[PythonWorkflow]:
+ """Sample workflows for testing."""
return [
- _make_skill(
+ _make_workflow(
name="fetch_url",
description="Fetch content from a URL",
code="return requests.get(url).text",
),
- _make_skill(
+ _make_workflow(
name="parse_json",
description="Parse JSON string into Python dict",
code="return json.loads(text)",
),
]
- def test_backend_skills_searchable_at_construction(
- self, sample_skills: list[PythonSkill]
+ def test_backend_workflows_searchable_at_construction(
+ self, sample_workflows: list[PythonWorkflow]
) -> None:
- """Skills in store should be searchable immediately after construction."""
- from py_code_mode.skills import MemorySkillStore, MockEmbedder, SkillLibrary
+ """Workflows in store should be searchable immediately after construction."""
+ from py_code_mode.workflows import MemoryWorkflowStore, MockEmbedder, WorkflowLibrary
# Populate store first
- store = MemorySkillStore()
- for skill in sample_skills:
- store.save(skill)
+ store = MemoryWorkflowStore()
+ for workflow in sample_workflows:
+ store.save(workflow)
# Create library with store - should load and embed
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder=embedder, store=store)
+ library = WorkflowLibrary(embedder=embedder, store=store)
- # Skills should be searchable without explicit add()
+ # Workflows should be searchable without explicit add()
results = library.search("download web content")
assert len(results) >= 1
assert any(r.name == "fetch_url" for r in results)
- def test_add_stores_in_store(self, sample_skills: list[PythonSkill]) -> None:
- """add() should store skill in store."""
- from py_code_mode.skills import MemorySkillStore, MockEmbedder, SkillLibrary
+ def test_add_stores_in_store(self, sample_workflows: list[PythonWorkflow]) -> None:
+ """add() should store workflow in store."""
+ from py_code_mode.workflows import MemoryWorkflowStore, MockEmbedder, WorkflowLibrary
- store = MemorySkillStore()
+ store = MemoryWorkflowStore()
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder=embedder, store=store)
+ library = WorkflowLibrary(embedder=embedder, store=store)
# Add through library
- library.add(sample_skills[0])
+ library.add(sample_workflows[0])
# Should appear in store
assert store.load("fetch_url") is not None
- def test_add_makes_skill_searchable(self, sample_skills: list[PythonSkill]) -> None:
- """add() should make skill immediately searchable."""
- from py_code_mode.skills import MemorySkillStore, MockEmbedder, SkillLibrary
+ def test_add_makes_workflow_searchable(self, sample_workflows: list[PythonWorkflow]) -> None:
+ """add() should make workflow immediately searchable."""
+ from py_code_mode.workflows import MemoryWorkflowStore, MockEmbedder, WorkflowLibrary
- store = MemorySkillStore()
+ store = MemoryWorkflowStore()
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder=embedder, store=store)
+ library = WorkflowLibrary(embedder=embedder, store=store)
# Add through library
- library.add(sample_skills[0])
+ library.add(sample_workflows[0])
# Should be searchable
results = library.search("download content")
assert any(r.name == "fetch_url" for r in results)
- def test_remove_removes_from_store(self, sample_skills: list[PythonSkill]) -> None:
+ def test_remove_removes_from_store(self, sample_workflows: list[PythonWorkflow]) -> None:
"""remove() should remove from store."""
- from py_code_mode.skills import MemorySkillStore, MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MemoryWorkflowStore, MockEmbedder, WorkflowLibrary
- store = MemorySkillStore()
- for skill in sample_skills:
- store.save(skill)
+ store = MemoryWorkflowStore()
+ for workflow in sample_workflows:
+ store.save(workflow)
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder=embedder, store=store)
+ library = WorkflowLibrary(embedder=embedder, store=store)
# Remove through library
result = library.remove("fetch_url")
@@ -397,16 +399,16 @@ def test_remove_removes_from_store(self, sample_skills: list[PythonSkill]) -> No
assert result is True
assert store.load("fetch_url") is None
- def test_remove_removes_from_search(self, sample_skills: list[PythonSkill]) -> None:
- """remove() should make skill no longer searchable."""
- from py_code_mode.skills import MemorySkillStore, MockEmbedder, SkillLibrary
+ def test_remove_removes_from_search(self, sample_workflows: list[PythonWorkflow]) -> None:
+ """remove() should make workflow no longer searchable."""
+ from py_code_mode.workflows import MemoryWorkflowStore, MockEmbedder, WorkflowLibrary
- store = MemorySkillStore()
- for skill in sample_skills:
- store.save(skill)
+ store = MemoryWorkflowStore()
+ for workflow in sample_workflows:
+ store.save(workflow)
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder=embedder, store=store)
+ library = WorkflowLibrary(embedder=embedder, store=store)
# Remove through library
library.remove("fetch_url")
@@ -415,20 +417,20 @@ def test_remove_removes_from_search(self, sample_skills: list[PythonSkill]) -> N
results = library.search("download content")
assert not any(r.name == "fetch_url" for r in results)
- def test_refresh_picks_up_store_changes(self, sample_skills: list[PythonSkill]) -> None:
+ def test_refresh_picks_up_store_changes(self, sample_workflows: list[PythonWorkflow]) -> None:
"""refresh() should reload from store and rebuild embeddings."""
- from py_code_mode.skills import MemorySkillStore, MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MemoryWorkflowStore, MockEmbedder, WorkflowLibrary
- store = MemorySkillStore()
+ store = MemoryWorkflowStore()
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder=embedder, store=store)
+ library = WorkflowLibrary(embedder=embedder, store=store)
# Initially empty
assert len(library) == 0
# Add directly to store (bypassing library)
- new_skill = _make_skill("send_email", "Send email via SMTP", "smtp.send(message)")
- store.save(new_skill)
+ new_workflow = _make_workflow("send_email", "Send email via SMTP", "smtp.send(message)")
+ store.save(new_workflow)
# Not searchable yet (not indexed)
results = library.search("send email")
@@ -441,18 +443,18 @@ def test_refresh_picks_up_store_changes(self, sample_skills: list[PythonSkill])
results = library.search("send email")
assert any(r.name == "send_email" for r in results)
- def test_refresh_clears_stale_embeddings(self, sample_skills: list[PythonSkill]) -> None:
- """refresh() should remove embeddings for skills no longer in store."""
- from py_code_mode.skills import MemorySkillStore, MockEmbedder, SkillLibrary
+ def test_refresh_clears_stale_embeddings(self, sample_workflows: list[PythonWorkflow]) -> None:
+ """refresh() should remove embeddings for workflows no longer in store."""
+ from py_code_mode.workflows import MemoryWorkflowStore, MockEmbedder, WorkflowLibrary
- store = MemorySkillStore()
- for skill in sample_skills:
- store.save(skill)
+ store = MemoryWorkflowStore()
+ for workflow in sample_workflows:
+ store.save(workflow)
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder=embedder, store=store)
+ library = WorkflowLibrary(embedder=embedder, store=store)
- # Both skills searchable
+ # Both workflows searchable
assert len(library) == 2
# Remove directly from store
@@ -461,21 +463,21 @@ def test_refresh_clears_stale_embeddings(self, sample_skills: list[PythonSkill])
# Refresh
library.refresh()
- # Only one skill remains
+ # Only one workflow remains
assert len(library) == 1
assert library.get("fetch_url") is None
assert library.get("parse_json") is not None
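A minimal sketch of the reconciliation `refresh()` has to perform, with hypothetical internals (`_indexed_names`, `_remove_from_index`, `_add_to_index`) and a hypothetical `store.list()`:

```python
def refresh(self) -> None:
    if self._store is None:
        return  # in-memory-only library: nothing to reload, never clear
    current = {wf.name: wf for wf in self._store.list()}  # hypothetical API
    # Drop embeddings for workflows removed directly from the store ...
    for name in set(self._indexed_names()) - set(current):
        self._remove_from_index(name)
    # ... and (re-)index anything added or changed behind the library's back.
    for wf in current.values():
        self._add_to_index(wf)
```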
- def test_no_store_works_in_memory_only(self, sample_skills: list[PythonSkill]) -> None:
+ def test_no_store_works_in_memory_only(self, sample_workflows: list[PythonWorkflow]) -> None:
"""Without store, library works as in-memory only."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder=embedder) # No store
+ library = WorkflowLibrary(embedder=embedder) # No store
# Add works
- for skill in sample_skills:
- library.add(skill)
+ for workflow in sample_workflows:
+ library.add(workflow)
# Search works
results = library.search("download")
@@ -483,87 +485,91 @@ def test_no_store_works_in_memory_only(self, sample_skills: list[PythonSkill]) -
def test_refresh_with_no_store_is_noop(self) -> None:
"""refresh() with no store should do nothing, not crash."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder=embedder) # No store
+ library = WorkflowLibrary(embedder=embedder) # No store
- library.add(_make_skill("test", "test", "pass"))
+ library.add(_make_workflow("test", "test", "pass"))
- # Should not crash or clear skills
+ # Should not crash or clear workflows
library.refresh()
assert len(library) == 1
-class TestCreateSkillLibraryFactory:
- """Tests for the create_skill_library factory function."""
+class TestCreateWorkflowLibraryFactory:
+ """Tests for the create_workflow_library factory function."""
def test_creates_with_default_embedder(self) -> None:
"""Factory creates library with Embedder (BGE-small) by default."""
pytest.importorskip("sentence_transformers")
- from py_code_mode.skills import Embedder, create_skill_library
+ from py_code_mode.workflows import Embedder, create_workflow_library
- library = create_skill_library()
+ library = create_workflow_library()
assert isinstance(library.embedder, Embedder)
def test_creates_with_custom_embedder(self) -> None:
"""Factory accepts custom embedder."""
- from py_code_mode.skills import MockEmbedder, create_skill_library
+ from py_code_mode.workflows import MockEmbedder, create_workflow_library
embedder = MockEmbedder(dimension=128)
- library = create_skill_library(embedder=embedder)
+ library = create_workflow_library(embedder=embedder)
assert library.embedder is embedder
def test_creates_with_store(self) -> None:
- """Factory accepts store and loads skills."""
- from py_code_mode.skills import MemorySkillStore, MockEmbedder, create_skill_library
+ """Factory accepts store and loads workflows."""
+ from py_code_mode.workflows import (
+ MemoryWorkflowStore,
+ MockEmbedder,
+ create_workflow_library,
+ )
- store = MemorySkillStore()
- store.save(_make_skill("test", "test skill", "pass"))
+ store = MemoryWorkflowStore()
+ store.save(_make_workflow("test", "test workflow", "pass"))
embedder = MockEmbedder(dimension=384)
- library = create_skill_library(store=store, embedder=embedder)
+ library = create_workflow_library(store=store, embedder=embedder)
- # Skill should be loaded and searchable
+ # Workflow should be loaded and searchable
assert len(library) == 1
results = library.search("test")
assert len(results) == 1
-class TestSkillLibraryWithRealEmbedder:
+class TestWorkflowLibraryWithRealEmbedder:
"""Integration tests with real embeddings (BGE-small)."""
@pytest.fixture
def embedder(self):
"""Create embedder - skip if not available."""
pytest.importorskip("sentence_transformers")
- from py_code_mode.skills import Embedder
+ from py_code_mode.workflows import Embedder
return Embedder()
@pytest.fixture
- def sample_skills(self) -> list[PythonSkill]:
- """Sample skills for testing."""
+ def sample_workflows(self) -> list[PythonWorkflow]:
+ """Sample workflows for testing."""
return [
- _make_skill(
+ _make_workflow(
name="port_scan",
description="Scan network ports using nmap to find open services",
code="result = tools.nmap(target=target, ports='1-1000')",
),
- _make_skill(
+ _make_workflow(
name="web_screenshot",
description="Capture a screenshot of a webpage using headless browser",
code="tools.chromium(url=url, screenshot=output_path)",
),
- _make_skill(
+ _make_workflow(
name="dir_bruteforce",
description="Bruteforce web directories to find hidden paths",
code="tools.ffuf(url=url, wordlist=wordlist)",
),
- _make_skill(
+ _make_workflow(
name="dns_enum",
description="Enumerate DNS records for a domain",
code="tools.dig(domain=domain, type='ANY')",
@@ -571,16 +577,16 @@ def sample_skills(self) -> list[PythonSkill]:
]
def test_semantic_search_finds_conceptual_match(
- self, embedder, sample_skills: list[PythonSkill]
+ self, embedder, sample_workflows: list[PythonWorkflow]
) -> None:
- """Semantic search finds skills by meaning, not just keywords."""
- from py_code_mode.skills import SkillLibrary
+ """Semantic search finds workflows by meaning, not just keywords."""
+ from py_code_mode.workflows import WorkflowLibrary
- library = SkillLibrary(embedder)
- for skill in sample_skills:
- library.add(skill)
+ library = WorkflowLibrary(embedder)
+ for workflow in sample_workflows:
+ library.add(workflow)
- # Query uses different words than skill description
+ # Query uses different words than workflow description
results = library.search("discover which TCP ports are listening")
assert len(results) >= 1
@@ -588,14 +594,14 @@ def test_semantic_search_finds_conceptual_match(
assert results[0].name == "port_scan"
def test_semantic_search_code_understanding(
- self, embedder, sample_skills: list[PythonSkill]
+ self, embedder, sample_workflows: list[PythonWorkflow]
) -> None:
"""Search understands code semantics."""
- from py_code_mode.skills import SkillLibrary
+ from py_code_mode.workflows import WorkflowLibrary
- library = SkillLibrary(embedder)
- for skill in sample_skills:
- library.add(skill)
+ library = WorkflowLibrary(embedder)
+ for workflow in sample_workflows:
+ library.add(workflow)
# Query about what the code does
results = library.search("use ffuf tool")
@@ -603,15 +609,15 @@ def test_semantic_search_code_understanding(
assert len(results) >= 1
assert results[0].name == "dir_bruteforce"
- def test_semantic_ranking(self, embedder, sample_skills: list[PythonSkill]) -> None:
+ def test_semantic_ranking(self, embedder, sample_workflows: list[PythonWorkflow]) -> None:
"""Results are ranked by semantic relevance."""
- from py_code_mode.skills import SkillLibrary
+ from py_code_mode.workflows import WorkflowLibrary
- library = SkillLibrary(embedder)
- for skill in sample_skills:
- library.add(skill)
+ library = WorkflowLibrary(embedder)
+ for workflow in sample_workflows:
+ library.add(workflow)
- # Query that should match web skills
+ # Query that should match web workflows
results = library.search("find hidden web pages")
# dir_bruteforce should rank higher than port_scan
diff --git a/tests/test_session.py b/tests/test_session.py
index aaf62b7..c10c417 100644
--- a/tests/test_session.py
+++ b/tests/test_session.py
@@ -1,7 +1,7 @@
"""Tests for Session class.
Session wraps a StorageBackend and Executor, providing the unified
-interface that agents use. It injects the tools, skills, and artifacts
+interface that agents use. It injects the tools, workflows, and artifacts
namespaces into the executor's namespace.
Session lifecycle:
@@ -158,16 +158,16 @@ async def test_state_persists_within_session(self, storage: FileStorage) -> None
assert result.value == 84
-class TestSessionSkillsNamespace:
- """Tests for Session skills namespace injection."""
+class TestSessionWorkflowsNamespace:
+ """Tests for Session workflows namespace injection."""
@pytest.fixture
- def storage_with_skills(self, tmp_path: Path) -> FileStorage:
- """Create FileStorage with skills."""
+ def storage_with_workflows(self, tmp_path: Path) -> FileStorage:
+ """Create FileStorage with workflows."""
storage = FileStorage(tmp_path)
- skills_dir = tmp_path / "skills"
- skills_dir.mkdir(exist_ok=True)
- (skills_dir / "double.py").write_text(
+ workflows_dir = tmp_path / "workflows"
+ workflows_dir.mkdir(exist_ok=True)
+ (workflows_dir / "double.py").write_text(
'''"""Double a number."""
async def run(n: int) -> int:
@@ -177,44 +177,46 @@ async def run(n: int) -> int:
return storage
@pytest.mark.asyncio
- async def test_skills_list_callable(self, storage_with_skills: FileStorage) -> None:
- """skills.list() is callable and returns list."""
- async with Session(storage=storage_with_skills) as session:
- result = await session.run("skills.list()")
+ async def test_workflows_list_callable(self, storage_with_workflows: FileStorage) -> None:
+ """workflows.list() is callable and returns list."""
+ async with Session(storage=storage_with_workflows) as session:
+ result = await session.run("workflows.list()")
- assert result.is_ok, f"skills.list() failed: {result.error}"
+ assert result.is_ok, f"workflows.list() failed: {result.error}"
assert result.value is not None
assert isinstance(result.value, list)
@pytest.mark.asyncio
- async def test_skills_list_returns_skill_info(self, storage_with_skills: FileStorage) -> None:
- """skills.list() returns skill info with name, description, params."""
- async with Session(storage=storage_with_skills) as session:
- result = await session.run("skills.list()")
+ async def test_workflows_list_returns_workflow_info(
+ self, storage_with_workflows: FileStorage
+ ) -> None:
+ """workflows.list() returns workflow info with name, description, params."""
+ async with Session(storage=storage_with_workflows) as session:
+ result = await session.run("workflows.list()")
assert result.is_ok
assert len(result.value) >= 1
- skill = result.value[0]
- assert "name" in skill
- assert "description" in skill
- assert "params" in skill
+ workflow = result.value[0]
+ assert "name" in workflow
+ assert "description" in workflow
+ assert "params" in workflow
@pytest.mark.asyncio
- async def test_skills_search_callable(self, storage_with_skills: FileStorage) -> None:
- """skills.search(query) is callable and returns list."""
- async with Session(storage=storage_with_skills) as session:
- result = await session.run('skills.search("number")')
+ async def test_workflows_search_callable(self, storage_with_workflows: FileStorage) -> None:
+ """workflows.search(query) is callable and returns list."""
+ async with Session(storage=storage_with_workflows) as session:
+ result = await session.run('workflows.search("number")')
assert result.is_ok
assert isinstance(result.value, list)
@pytest.mark.asyncio
- async def test_skills_create_callable(self, storage_with_skills: FileStorage) -> None:
- """skills.create(name, description, source) is callable."""
- async with Session(storage=storage_with_skills) as session:
+ async def test_workflows_create_callable(self, storage_with_workflows: FileStorage) -> None:
+ """workflows.create(name, description, source) is callable."""
+ async with Session(storage=storage_with_workflows) as session:
result = await session.run(
"""
-skills.create(
+workflows.create(
name="triple",
description="Triple a number",
source="async def run(n: int) -> int:\\n return n * 3"
@@ -224,16 +226,16 @@ async def test_skills_create_callable(self, storage_with_skills: FileStorage) ->
assert result.is_ok
- # Verify skill was created
- result = await session.run("skills.triple(n=10)")
+ # Verify workflow was created
+ result = await session.run("workflows.triple(n=10)")
assert result.is_ok
assert result.value == 30
@pytest.mark.asyncio
- async def test_skill_direct_invocation(self, storage_with_skills: FileStorage) -> None:
- """skills.skill_name(**kwargs) syntax works."""
- async with Session(storage=storage_with_skills) as session:
- result = await session.run("skills.double(n=21)")
+ async def test_workflow_direct_invocation(self, storage_with_workflows: FileStorage) -> None:
+ """workflows.workflow_name(**kwargs) syntax works."""
+ async with Session(storage=storage_with_workflows) as session:
+ result = await session.run("workflows.double(n=21)")
assert result.is_ok
assert result.value == 42
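Taken together, these tests fix the agent-facing loop: list, search, create, invoke. A compact sketch of that loop under the fixture's on-disk layout; the import locations are assumptions, since the tests receive Session and FileStorage via fixtures and never show their modules:

import asyncio
from pathlib import Path

from py_code_mode import FileStorage, Session  # assumed import paths

async def demo(base: Path) -> None:
    # Fixture layout: <base>/workflows/<name>.py, each file holding a module
    # docstring plus an `async def run(...)`.
    workflows_dir = base / "workflows"
    workflows_dir.mkdir(parents=True, exist_ok=True)
    (workflows_dir / "double.py").write_text(
        '"""Double a number."""\n\nasync def run(n: int) -> int:\n    return n * 2\n'
    )

    async with Session(storage=FileStorage(base)) as session:
        listing = await session.run("workflows.list()")  # dicts with name/description/params
        assert listing.is_ok
        await session.run('workflows.search("number")')  # semantic lookup
        result = await session.run("workflows.double(n=21)")  # direct invocation
        assert result.value == 42

asyncio.run(demo(Path("/tmp/py-code-mode-demo")))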
@@ -319,7 +321,7 @@ async def test_reset_clears_variables(self, storage: FileStorage) -> None:
@pytest.mark.asyncio
async def test_reset_preserves_namespaces(self, storage: FileStorage) -> None:
- """reset() preserves tools, skills, artifacts namespaces."""
+ """reset() preserves tools, workflows, artifacts namespaces."""
async with Session(storage=storage) as session:
await session.run("x = 42")
await session.reset()
@@ -329,7 +331,7 @@ async def test_reset_preserves_namespaces(self, storage: FileStorage) -> None:
assert result.is_ok
assert result.value is True
- result = await session.run("'skills' in dir()")
+ result = await session.run("'workflows' in dir()")
assert result.is_ok
assert result.value is True
@@ -467,31 +469,31 @@ def test_redis_storage_access_exists(self) -> None:
assert RedisStorageAccess is not None
def test_file_storage_access_has_paths(self) -> None:
- """FileStorageAccess has skills_path, artifacts_path.
+ """FileStorageAccess has workflows_path, artifacts_path.
NOTE: tools_path and deps_path removed - tools/deps now owned by executors.
"""
from py_code_mode.execution.protocol import FileStorageAccess
access = FileStorageAccess(
- skills_path=Path("/tmp/skills"),
+ workflows_path=Path("/tmp/workflows"),
artifacts_path=Path("/tmp/artifacts"),
)
- assert access.skills_path == Path("/tmp/skills")
+ assert access.workflows_path == Path("/tmp/workflows")
assert access.artifacts_path == Path("/tmp/artifacts")
def test_file_storage_access_paths_optional(self) -> None:
- """FileStorageAccess allows None for skills_path.
+ """FileStorageAccess allows None for workflows_path.
NOTE: tools_path and deps_path removed - tools/deps now owned by executors.
"""
from py_code_mode.execution.protocol import FileStorageAccess
access = FileStorageAccess(
- skills_path=None,
+ workflows_path=None,
artifacts_path=Path("/tmp/artifacts"),
)
- assert access.skills_path is None
+ assert access.workflows_path is None
def test_redis_storage_access_has_url_and_prefixes(self) -> None:
"""RedisStorageAccess has redis_url and prefix fields.
@@ -502,11 +504,11 @@ def test_redis_storage_access_has_url_and_prefixes(self) -> None:
access = RedisStorageAccess(
redis_url="redis://localhost:6379",
- skills_prefix="app:skills",
+ workflows_prefix="app:workflows",
artifacts_prefix="app:artifacts",
)
assert access.redis_url == "redis://localhost:6379"
- assert access.skills_prefix == "app:skills"
+ assert access.workflows_prefix == "app:workflows"
assert access.artifacts_prefix == "app:artifacts"
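The two access objects above are the serializable handles that cross a process boundary, and the asserted fields describe them completely. Side by side - note what is absent: no tools_path or deps_path, since executors own tools and deps now:

from pathlib import Path

from py_code_mode.execution.protocol import FileStorageAccess, RedisStorageAccess

# File-backed storage: plain paths; workflows_path may be None.
file_access = FileStorageAccess(
    workflows_path=Path("/tmp/workflows"),
    artifacts_path=Path("/tmp/artifacts"),
)

# Redis-backed storage: one URL plus key prefixes instead of paths.
redis_access = RedisStorageAccess(
    redis_url="redis://localhost:6379",
    workflows_prefix="app:workflows",
    artifacts_prefix="app:artifacts",
)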
@@ -586,11 +588,11 @@ class TestStorageAccessWiring:
# storage_with_tools fixture removed - not compatible with unified interface
@pytest.fixture
- def storage_with_skills(self, tmp_path: Path) -> FileStorage:
- """Create FileStorage with skills."""
- skills_dir = tmp_path / "skills"
- skills_dir.mkdir()
- (skills_dir / "double.py").write_text(
+ def storage_with_workflows(self, tmp_path: Path) -> FileStorage:
+ """Create FileStorage with workflows."""
+ workflows_dir = tmp_path / "workflows"
+ workflows_dir.mkdir()
+ (workflows_dir / "double.py").write_text(
'''"""Double a number."""
async def run(n: int) -> int:
@@ -600,29 +602,31 @@ async def run(n: int) -> int:
return FileStorage(tmp_path)
@pytest.mark.asyncio
- async def test_skills_namespace_available_with_typed_executor(
- self, storage_with_skills: FileStorage
+ async def test_workflows_namespace_available_with_typed_executor(
+ self, storage_with_workflows: FileStorage
) -> None:
- """skills namespace works when using typed executor."""
+ """workflows namespace works when using typed executor."""
from py_code_mode.execution.in_process import InProcessExecutor
executor = InProcessExecutor()
- async with Session(storage=storage_with_skills, executor=executor) as session:
- result = await session.run("'skills' in dir()")
+ async with Session(storage=storage_with_workflows, executor=executor) as session:
+ result = await session.run("'workflows' in dir()")
assert result.is_ok
assert result.value is True
@pytest.mark.asyncio
- async def test_skills_are_loaded_from_storage(self, storage_with_skills: FileStorage) -> None:
- """skills from storage are available in executor."""
+ async def test_workflows_are_loaded_from_storage(
+ self, storage_with_workflows: FileStorage
+ ) -> None:
+ """workflows from storage are available in executor."""
from py_code_mode.execution.in_process import InProcessExecutor
executor = InProcessExecutor()
- async with Session(storage=storage_with_skills, executor=executor) as session:
- result = await session.run("skills.list()")
+ async with Session(storage=storage_with_workflows, executor=executor) as session:
+ result = await session.run("workflows.list()")
assert result.is_ok
- skill_names = [s["name"] for s in result.value]
- assert "double" in skill_names
+            workflow_names = [w["name"] for w in result.value]
+ assert "double" in workflow_names
# =============================================================================
@@ -663,7 +667,7 @@ async def test_container_config_no_storage_fields(self) -> None:
config = ContainerConfig(timeout=30.0, auth_disabled=True)
assert not hasattr(config, "host_tools_path")
- assert not hasattr(config, "host_skills_path")
+ assert not hasattr(config, "host_workflows_path")
assert not hasattr(config, "host_artifacts_path")
assert not hasattr(config, "redis_url")
assert not hasattr(config, "artifact_backend")
@@ -892,20 +896,20 @@ async def test_subprocess_executor_gets_serializable_access(self, tmp_path: Path
@pytest.mark.asyncio
@pytest.mark.skipif(not _subprocess_executor_available(), reason="uv not available")
async def test_subprocess_namespaces_work_via_session(self, tmp_path: Path) -> None:
- """tools/skills/artifacts namespaces accessible via Session.
+ """tools/workflows/artifacts namespaces accessible via Session.
User action: Access namespaces in code via Session.run()
- Setup: FileStorage with skills directory, py-code-mode in venv
+ Setup: FileStorage with workflows directory, py-code-mode in venv
Verification: Namespaces exist and are callable
Breaks when: Namespace injection fails in subprocess
"""
from py_code_mode.execution.subprocess import SubprocessExecutor
from py_code_mode.execution.subprocess.config import SubprocessConfig
- # Create skills directory with a test skill
- skills_dir = tmp_path / "skills"
- skills_dir.mkdir()
- (skills_dir / "add_numbers.py").write_text(
+ # Create workflows directory with a test workflow
+ workflows_dir = tmp_path / "workflows"
+ workflows_dir.mkdir()
+ (workflows_dir / "add_numbers.py").write_text(
'''"""Add two numbers together."""
async def run(a: int, b: int) -> int:
@@ -929,19 +933,19 @@ async def run(a: int, b: int) -> int:
assert result.is_ok, f"Failed to check tools: {result.error}"
assert result.value in (True, "True"), "tools namespace not found"
- # Verify skills namespace exists
- result = await session.run("'skills' in dir()")
- assert result.is_ok, f"Failed to check skills: {result.error}"
- assert result.value in (True, "True"), "skills namespace not found"
+ # Verify workflows namespace exists
+ result = await session.run("'workflows' in dir()")
+ assert result.is_ok, f"Failed to check workflows: {result.error}"
+ assert result.value in (True, "True"), "workflows namespace not found"
# Verify artifacts namespace exists
result = await session.run("'artifacts' in dir()")
assert result.is_ok, f"Failed to check artifacts: {result.error}"
assert result.value in (True, "True"), "artifacts namespace not found"
- # Verify skills.list() works and contains our skill
- result = await session.run("skills.list()")
- assert result.is_ok, f"skills.list() failed: {result.error}"
+ # Verify workflows.list() works and contains our workflow
+ result = await session.run("workflows.list()")
+ assert result.is_ok, f"workflows.list() failed: {result.error}"
@pytest.mark.asyncio
@pytest.mark.skipif(not _subprocess_executor_available(), reason="uv not available")
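The subprocess tests run the same namespace checks through a kernel process, which is why the boolean assertions accept both True and "True": results can round-trip through serialization. A sketch of the wiring; constructing SubprocessConfig with no arguments is an assumption, as its fields sit outside this diff (as do the Session/FileStorage import paths):

import asyncio
from pathlib import Path

from py_code_mode import FileStorage, Session  # assumed import paths
from py_code_mode.execution.subprocess import SubprocessExecutor
from py_code_mode.execution.subprocess.config import SubprocessConfig

async def demo(base: Path) -> None:
    executor = SubprocessExecutor(config=SubprocessConfig())  # assumed constructor
    async with Session(storage=FileStorage(base), executor=executor) as session:
        result = await session.run("'workflows' in dir()")
        # Kernel results may come back stringified, hence the two-valued check.
        assert result.is_ok and result.value in (True, "True")

asyncio.run(demo(Path("/tmp/py-code-mode-subprocess-demo")))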
diff --git a/tests/test_skill_library_vector_store.py b/tests/test_skill_library_vector_store.py
index d86c346..2222c1b 100644
--- a/tests/test_skill_library_vector_store.py
+++ b/tests/test_skill_library_vector_store.py
@@ -1,7 +1,7 @@
-"""Tests for SkillLibrary VectorStore integration - TDD RED phase.
+"""Tests for WorkflowLibrary VectorStore integration - TDD RED phase.
These tests define the new behavior we want:
-- SkillLibrary accepts vector_store parameter
+- WorkflowLibrary accepts vector_store parameter
- Search delegates to VectorStore when provided
- Content hash change detection skips re-embedding when unchanged
- Fallback to in-memory when vector_store=None
@@ -12,14 +12,14 @@
from dataclasses import dataclass
from typing import Any
-from py_code_mode.skills import PythonSkill
-from py_code_mode.skills.vector_store import ModelInfo, SearchResult, VectorStore
+from py_code_mode.workflows import PythonWorkflow
+from py_code_mode.workflows.vector_store import ModelInfo, SearchResult, VectorStore
-def _make_skill(name: str, description: str, code: str) -> PythonSkill:
- """Helper to create a PythonSkill from minimal info."""
+def _make_workflow(name: str, description: str, code: str) -> PythonWorkflow:
+ """Helper to create a PythonWorkflow from minimal info."""
source = f'"""{description}"""\n\nasync def run():\n {code}'
- return PythonSkill.from_source(name=name, source=source, description=description)
+ return PythonWorkflow.from_source(name=name, source=source, description=description)
@dataclass
@@ -73,12 +73,12 @@ def search(
"""Record search call and return mock results."""
self.search_calls.append((query, limit, desc_weight, code_weight))
- # Return all stored skills as results (mock similarity)
+ # Return all stored workflows as results (mock similarity)
results = []
- for skill_id, data in self._store.items():
+ for workflow_id, data in self._store.items():
# Mock score based on presence of query term in description
score = 0.8 if query.lower() in data["description"].lower() else 0.5
- results.append(SearchResult(id=skill_id, score=score, metadata={"mock": True}))
+ results.append(SearchResult(id=workflow_id, score=score, metadata={"mock": True}))
# Sort by score descending
results.sort(key=lambda r: r.score, reverse=True)
@@ -99,50 +99,50 @@ def clear(self) -> None:
self._store.clear()
def count(self) -> int:
- """Return count of stored skills."""
+ """Return count of stored workflows."""
return len(self._store)
-class TestSkillLibraryParameterAcceptance:
- """Test that SkillLibrary accepts vector_store parameter."""
+class TestWorkflowLibraryParameterAcceptance:
+ """Test that WorkflowLibrary accepts vector_store parameter."""
def test_accepts_vector_store_parameter(self) -> None:
- """SkillLibrary constructor should accept vector_store parameter.
+ """WorkflowLibrary constructor should accept vector_store parameter.
- This test will FAIL because SkillLibrary doesn't accept vector_store yet.
+ This test will FAIL because WorkflowLibrary doesn't accept vector_store yet.
"""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
# This should work but will fail - parameter doesn't exist yet
- library = SkillLibrary(embedder=embedder, vector_store=vector_store)
+ library = WorkflowLibrary(embedder=embedder, vector_store=vector_store)
assert library.vector_store is vector_store
def test_works_with_vector_store_none(self) -> None:
- """SkillLibrary should work with vector_store=None (current behavior).
+ """WorkflowLibrary should work with vector_store=None (current behavior).
This ensures backward compatibility.
"""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
# This should work - None is default
- library = SkillLibrary(embedder=embedder, vector_store=None)
+ library = WorkflowLibrary(embedder=embedder, vector_store=None)
assert library.vector_store is None
def test_works_with_vector_store_instance(self) -> None:
- """SkillLibrary should work with a VectorStore instance."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ """WorkflowLibrary should work with a VectorStore instance."""
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
- library = SkillLibrary(embedder=embedder, vector_store=vector_store)
+ library = WorkflowLibrary(embedder=embedder, vector_store=vector_store)
# Should store the vector_store
assert isinstance(library.vector_store, VectorStore)
@@ -156,16 +156,16 @@ def test_search_delegates_to_vector_store(self) -> None:
This test will FAIL because delegation logic doesn't exist yet.
"""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
- library = SkillLibrary(embedder=embedder, vector_store=vector_store)
+ library = WorkflowLibrary(embedder=embedder, vector_store=vector_store)
- # Add a skill (so search has something to find)
- skill = _make_skill("test", "test skill", "pass")
- library.add(skill)
+ # Add a workflow (so search has something to find)
+ workflow = _make_workflow("test", "test workflow", "pass")
+ library.add(workflow)
# Search should delegate to vector_store
library.search("test")
@@ -176,46 +176,46 @@ def test_search_delegates_to_vector_store(self) -> None:
assert query == "test"
assert limit == 10 # default
- def test_search_returns_python_skill_objects(self) -> None:
- """search() should return PythonSkill objects, not SearchResult.
+ def test_search_returns_python_workflow_objects(self) -> None:
+ """search() should return PythonWorkflow objects, not SearchResult.
- VectorStore.search() returns SearchResult, but SkillLibrary.search()
- should map those back to PythonSkill objects.
+ VectorStore.search() returns SearchResult, but WorkflowLibrary.search()
+ should map those back to PythonWorkflow objects.
"""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
- library = SkillLibrary(embedder=embedder, vector_store=vector_store)
+ library = WorkflowLibrary(embedder=embedder, vector_store=vector_store)
- skill = _make_skill(
+ workflow = _make_workflow(
"fetch_url", "Fetch content from a URL", "return requests.get(url).text"
)
- library.add(skill)
+ library.add(workflow)
results = library.search("download")
- # Should return PythonSkill objects
+ # Should return PythonWorkflow objects
assert len(results) >= 1
- assert all(isinstance(r, PythonSkill) for r in results)
+ assert all(isinstance(r, PythonWorkflow) for r in results)
assert results[0].name == "fetch_url"
def test_search_respects_limit_parameter(self) -> None:
"""search(limit=N) should pass limit to vector_store.search()."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
- library = SkillLibrary(embedder=embedder, vector_store=vector_store)
+ library = WorkflowLibrary(embedder=embedder, vector_store=vector_store)
- # Add multiple skills
+ # Add multiple workflows
for i in range(5):
- library.add(_make_skill(f"skill_{i}", f"skill {i}", "pass"))
+ library.add(_make_workflow(f"workflow_{i}", f"workflow {i}", "pass"))
# Search with custom limit
- results = library.search("skill", limit=3)
+ results = library.search("workflow", limit=3)
# Verify limit was passed through
assert len(vector_store.search_calls) == 1
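Combined with the stale-id filtering test below, the delegation tests specify search() almost completely. A reconstruction from those assertions - the self.ranking attribute name and the in-memory fallback helper are illustrative, not the actual implementation:

def search(self, query: str, limit: int = 10) -> list[PythonWorkflow]:
    if self.vector_store is None:
        # Fallback: in-memory cosine similarity (TestFallbackBehavior below).
        return self._search_in_memory(query, limit)  # hypothetical helper

    results = self.vector_store.search(
        query,
        limit,
        self.ranking.description_weight,
        self.ranking.code_weight,
    )
    # Map SearchResult ids back to live PythonWorkflow objects, dropping
    # stale ids for workflows deleted since they were indexed.
    return [self._workflows[r.id] for r in results if r.id in self._workflows]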
@@ -227,7 +227,7 @@ def test_search_respects_limit_parameter(self) -> None:
def test_search_passes_ranking_config_weights(self) -> None:
"""search() should pass RankingConfig weights to vector_store.search()."""
- from py_code_mode.skills import MockEmbedder, RankingConfig, SkillLibrary
+ from py_code_mode.workflows import MockEmbedder, RankingConfig, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
@@ -235,14 +235,14 @@ def test_search_passes_ranking_config_weights(self) -> None:
# Custom ranking config
ranking = RankingConfig(description_weight=0.8, code_weight=0.2)
- library = SkillLibrary(
+ library = WorkflowLibrary(
embedder=embedder,
vector_store=vector_store,
ranking=ranking,
)
- skill = _make_skill("test", "test", "pass")
- library.add(skill)
+ workflow = _make_workflow("test", "test", "pass")
+ library.add(workflow)
library.search("test")
@@ -252,143 +252,143 @@ def test_search_passes_ranking_config_weights(self) -> None:
assert desc_weight == 0.8
assert code_weight == 0.2
- def test_search_filters_missing_skills(self) -> None:
- """search() should filter out SearchResults whose IDs aren't in _skills.
+ def test_search_filters_missing_workflows(self) -> None:
+ """search() should filter out SearchResults whose IDs aren't in _workflows.
- VectorStore might return stale results for deleted skills.
- SkillLibrary should filter those out.
+ VectorStore might return stale results for deleted workflows.
+ WorkflowLibrary should filter those out.
"""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
- library = SkillLibrary(embedder=embedder, vector_store=vector_store)
+ library = WorkflowLibrary(embedder=embedder, vector_store=vector_store)
- # Add skill to vector_store directly (bypassing library)
- vector_store.add("stale_skill", "stale", "pass", "hash123")
+ # Add workflow to vector_store directly (bypassing library)
+ vector_store.add("stale_workflow", "stale", "pass", "hash123")
# Search - should not crash, should filter out the stale result
results = library.search("stale")
- # Should be empty (skill not in library._skills)
+ # Should be empty (workflow not in library._workflows)
assert len(results) == 0
class TestContentHashChangeDetection:
"""Test content hash change detection to skip re-embedding."""
- def test_index_skill_computes_content_hash(self) -> None:
- """_index_skill should compute content hash for the skill.
+ def test_index_workflow_computes_content_hash(self) -> None:
+ """_index_workflow should compute content hash for the workflow.
This test will FAIL because hash computation doesn't exist yet.
"""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
- from py_code_mode.skills.vector_store import compute_content_hash
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
+ from py_code_mode.workflows.vector_store import compute_content_hash
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
- library = SkillLibrary(embedder=embedder, vector_store=vector_store)
+ library = WorkflowLibrary(embedder=embedder, vector_store=vector_store)
- skill = _make_skill("test", "test skill", "pass")
+ workflow = _make_workflow("test", "test workflow", "pass")
- # Index the skill
- library.add(skill)
+ # Index the workflow
+ library.add(workflow)
# Verify vector_store.add was called with correct hash
assert len(vector_store.add_calls) == 1
id, desc, source, content_hash = vector_store.add_calls[0]
- expected_hash = compute_content_hash(skill.description, skill.source)
+ expected_hash = compute_content_hash(workflow.description, workflow.source)
assert content_hash == expected_hash
- def test_unchanged_skill_skips_re_embedding(self) -> None:
- """When skill content hasn't changed, skip re-embedding.
+ def test_unchanged_workflow_skips_re_embedding(self) -> None:
+ """When workflow content hasn't changed, skip re-embedding.
If vector_store.get_content_hash() returns same hash as current content,
don't call vector_store.add() again.
"""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
- from py_code_mode.skills.vector_store import compute_content_hash
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
+ from py_code_mode.workflows.vector_store import compute_content_hash
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
- library = SkillLibrary(embedder=embedder, vector_store=vector_store)
+ library = WorkflowLibrary(embedder=embedder, vector_store=vector_store)
- skill = _make_skill("test", "test skill", "pass")
+ workflow = _make_workflow("test", "test workflow", "pass")
# First add - should call vector_store.add()
- library.add(skill)
+ library.add(workflow)
assert len(vector_store.add_calls) == 1
# Store the hash in vector_store (simulating it was already embedded)
- expected_hash = compute_content_hash(skill.description, skill.source)
+ expected_hash = compute_content_hash(workflow.description, workflow.source)
vector_store._store["test"]["hash"] = expected_hash
- # Re-index same skill (e.g., during refresh)
- library._index_skill(skill)
+ # Re-index same workflow (e.g., during refresh)
+ library._index_workflow(workflow)
# Should check hash but NOT call add() again (hash matches)
assert len(vector_store.get_content_hash_calls) >= 1
assert len(vector_store.add_calls) == 1 # Still just one add call
- def test_changed_skill_triggers_re_embedding(self) -> None:
- """When skill content changes, re-embed it.
+ def test_changed_workflow_triggers_re_embedding(self) -> None:
+ """When workflow content changes, re-embed it.
If vector_store.get_content_hash() returns different hash,
call vector_store.add() with new embeddings.
"""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
- from py_code_mode.skills.vector_store import compute_content_hash
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
+ from py_code_mode.workflows.vector_store import compute_content_hash
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
- library = SkillLibrary(embedder=embedder, vector_store=vector_store)
+ library = WorkflowLibrary(embedder=embedder, vector_store=vector_store)
- skill_v1 = _make_skill("test", "version 1", "pass")
+ workflow_v1 = _make_workflow("test", "version 1", "pass")
# First add
- library.add(skill_v1)
+ library.add(workflow_v1)
assert len(vector_store.add_calls) == 1
# Store old hash
- old_hash = compute_content_hash(skill_v1.description, skill_v1.source)
+ old_hash = compute_content_hash(workflow_v1.description, workflow_v1.source)
vector_store._store["test"]["hash"] = old_hash
# Create modified version (different description)
- skill_v2 = _make_skill("test", "version 2 updated", "pass")
+ workflow_v2 = _make_workflow("test", "version 2 updated", "pass")
# Re-index with new content
- library._index_skill(skill_v2)
+ library._index_workflow(workflow_v2)
# Should detect hash change and call add() again
assert len(vector_store.add_calls) == 2
_, _, _, new_hash = vector_store.add_calls[1]
- expected_new_hash = compute_content_hash(skill_v2.description, skill_v2.source)
+ expected_new_hash = compute_content_hash(workflow_v2.description, workflow_v2.source)
assert new_hash == expected_new_hash
assert new_hash != old_hash
- def test_new_skill_always_added_to_vector_store(self) -> None:
- """New skills (not in vector_store) should always get added."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ def test_new_workflow_always_added_to_vector_store(self) -> None:
+ """New workflows (not in vector_store) should always get added."""
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
- library = SkillLibrary(embedder=embedder, vector_store=vector_store)
+ library = WorkflowLibrary(embedder=embedder, vector_store=vector_store)
- skill = _make_skill("new_skill", "brand new", "pass")
+ workflow = _make_workflow("new_workflow", "brand new", "pass")
- # vector_store.get_content_hash() will return None (skill doesn't exist)
- library.add(skill)
+ # vector_store.get_content_hash() will return None (workflow doesn't exist)
+ library.add(workflow)
# Should add to vector_store
assert len(vector_store.add_calls) == 1
- assert vector_store.add_calls[0][0] == "new_skill"
+ assert vector_store.add_calls[0][0] == "new_workflow"
class TestFallbackBehavior:
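The content-hash tests above likewise determine _index_workflow's shape: always register the workflow for get(), embed only on a hash miss. A reconstruction under the same caveat - names and ordering are inferred from the assertions, not read from the implementation:

from py_code_mode.workflows.vector_store import compute_content_hash

def _index_workflow(self, workflow: PythonWorkflow) -> None:
    self._workflows[workflow.name] = workflow  # always reachable via get()

    if self.vector_store is None:
        return  # in-memory embedding path elided (see TestFallbackBehavior)

    new_hash = compute_content_hash(workflow.description, workflow.source)
    # get_content_hash() returns None for unseen ids, so new workflows are
    # always embedded; a matching hash skips the expensive re-embedding.
    if self.vector_store.get_content_hash(workflow.name) == new_hash:
        return
    self.vector_store.add(workflow.name, workflow.description, workflow.source, new_hash)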
@@ -396,15 +396,15 @@ class TestFallbackBehavior:
def test_vector_store_none_uses_in_memory_vectors(self) -> None:
"""When vector_store=None, should use existing in-memory behavior."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
# No vector_store provided
- library = SkillLibrary(embedder=embedder)
+ library = WorkflowLibrary(embedder=embedder)
- skill = _make_skill("test", "test skill", "pass")
- library.add(skill)
+ workflow = _make_workflow("test", "test workflow", "pass")
+ library.add(workflow)
# Should have populated in-memory vectors
assert "test" in library._description_vectors
@@ -412,151 +412,153 @@ def test_vector_store_none_uses_in_memory_vectors(self) -> None:
def test_vector_store_none_search_uses_cosine_similarity(self) -> None:
"""When vector_store=None, search() should use existing cosine_similarity logic."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
- library = SkillLibrary(embedder=embedder) # No vector_store
+ library = WorkflowLibrary(embedder=embedder) # No vector_store
- # Add skills
- skill1 = _make_skill("fetch_url", "Fetch content from URL", "return requests.get(url).text")
- skill2 = _make_skill("parse_json", "Parse JSON string", "return json.loads(text)")
- library.add(skill1)
- library.add(skill2)
+ # Add workflows
+ workflow1 = _make_workflow(
+ "fetch_url", "Fetch content from URL", "return requests.get(url).text"
+ )
+ workflow2 = _make_workflow("parse_json", "Parse JSON string", "return json.loads(text)")
+ library.add(workflow1)
+ library.add(workflow2)
# Search should work (using in-memory cosine similarity)
results = library.search("download")
# Should return results (exact results depend on embeddings)
assert isinstance(results, list)
- assert all(isinstance(r, PythonSkill) for r in results)
+ assert all(isinstance(r, PythonWorkflow) for r in results)
-class TestCreateSkillLibraryFactory:
- """Test create_skill_library() factory accepts vector_store parameter."""
+class TestCreateWorkflowLibraryFactory:
+ """Test create_workflow_library() factory accepts vector_store parameter."""
def test_factory_accepts_vector_store_parameter(self) -> None:
- """create_skill_library() should accept vector_store parameter.
+ """create_workflow_library() should accept vector_store parameter.
This test will FAIL because the factory doesn't accept it yet.
"""
- from py_code_mode.skills import MockEmbedder, create_skill_library
+ from py_code_mode.workflows import MockEmbedder, create_workflow_library
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
# This should work but will fail
- library = create_skill_library(embedder=embedder, vector_store=vector_store)
+ library = create_workflow_library(embedder=embedder, vector_store=vector_store)
assert library.vector_store is vector_store
- def test_factory_passes_vector_store_to_skill_library(self) -> None:
- """Factory should pass vector_store to SkillLibrary constructor."""
- from py_code_mode.skills import MockEmbedder, create_skill_library
+ def test_factory_passes_vector_store_to_workflow_library(self) -> None:
+ """Factory should pass vector_store to WorkflowLibrary constructor."""
+ from py_code_mode.workflows import MockEmbedder, create_workflow_library
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
- library = create_skill_library(embedder=embedder, vector_store=vector_store)
+ library = create_workflow_library(embedder=embedder, vector_store=vector_store)
# Library should have the vector_store
assert library.vector_store is vector_store
-class TestIndexSkillIntegration:
- """Test _index_skill integrates with vector_store."""
+class TestIndexWorkflowIntegration:
+ """Test _index_workflow integrates with vector_store."""
- def test_index_skill_adds_to_vector_store_when_provided(self) -> None:
- """_index_skill should add to vector_store when provided."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ def test_index_workflow_adds_to_vector_store_when_provided(self) -> None:
+ """_index_workflow should add to vector_store when provided."""
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
- library = SkillLibrary(embedder=embedder, vector_store=vector_store)
+ library = WorkflowLibrary(embedder=embedder, vector_store=vector_store)
- skill = _make_skill("test", "test skill", "pass")
+ workflow = _make_workflow("test", "test workflow", "pass")
- # Index directly (bypassing add, to test _index_skill in isolation)
- library._index_skill(skill)
+ # Index directly (bypassing add, to test _index_workflow in isolation)
+ library._index_workflow(workflow)
# Should have called vector_store.add()
assert len(vector_store.add_calls) == 1
- def test_index_skill_still_adds_to_skills_dict(self) -> None:
- """_index_skill should still add to _skills dict for get() by name."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ def test_index_workflow_still_adds_to_workflows_dict(self) -> None:
+ """_index_workflow should still add to _workflows dict for get() by name."""
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
- library = SkillLibrary(embedder=embedder, vector_store=vector_store)
+ library = WorkflowLibrary(embedder=embedder, vector_store=vector_store)
- skill = _make_skill("test", "test skill", "pass")
- library._index_skill(skill)
+ workflow = _make_workflow("test", "test workflow", "pass")
+ library._index_workflow(workflow)
- # Should be in _skills dict
- assert "test" in library._skills
+ # Should be in _workflows dict
+ assert "test" in library._workflows
assert library.get("test") is not None
- def test_refresh_indexes_only_new_and_changed_skills(self) -> None:
- """refresh() should only index new/changed skills, skipping unchanged ones.
+ def test_refresh_indexes_only_new_and_changed_workflows(self) -> None:
+ """refresh() should only index new/changed workflows, skipping unchanged ones.
- When refresh() is called, unchanged skills are skipped via content hash
- checking. Only new skills (hash not found) or changed skills (hash mismatch)
+ When refresh() is called, unchanged workflows are skipped via content hash
+ checking. Only new workflows (hash not found) or changed workflows (hash mismatch)
are re-indexed.
"""
- from py_code_mode.skills import MemorySkillStore, MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MemoryWorkflowStore, MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
- store = MemorySkillStore()
+ store = MemoryWorkflowStore()
- # Populate store with skills
- skill1 = _make_skill("skill1", "first skill", "pass")
- skill2 = _make_skill("skill2", "second skill", "pass")
- store.save(skill1)
- store.save(skill2)
+ # Populate store with workflows
+ workflow1 = _make_workflow("workflow1", "first workflow", "pass")
+ workflow2 = _make_workflow("workflow2", "second workflow", "pass")
+ store.save(workflow1)
+ store.save(workflow2)
# Create library - should index on construction
- library = SkillLibrary(embedder=embedder, vector_store=vector_store, store=store)
+ library = WorkflowLibrary(embedder=embedder, vector_store=vector_store, store=store)
- # Should have indexed both skills
+ # Should have indexed both workflows
initial_add_count = len(vector_store.add_calls)
assert initial_add_count == 2
- # Add another skill to store (bypassing library)
- skill3 = _make_skill("skill3", "third skill", "pass")
- store.save(skill3)
+ # Add another workflow to store (bypassing library)
+ workflow3 = _make_workflow("workflow3", "third workflow", "pass")
+ store.save(workflow3)
# Refresh to pick up changes
library.refresh()
- # Should have indexed only the NEW skill (skill3)
- # Unchanged skills (skill1, skill2) are skipped via content hash match
- # Total adds should be initial + 1 (only skill3)
+ # Should have indexed only the NEW workflow (workflow3)
+ # Unchanged workflows (workflow1, workflow2) are skipped via content hash match
+ # Total adds should be initial + 1 (only workflow3)
assert len(vector_store.add_calls) == initial_add_count + 1
- # Verify the new skill was indexed
+ # Verify the new workflow was indexed
last_add = vector_store.add_calls[-1]
- assert last_add[0] == "skill3" # id is first element
+ assert last_add[0] == "workflow3" # id is first element
-class TestRemoveSkillVectorStore:
+class TestRemoveWorkflowVectorStore:
"""Test that remove() cleans up vector_store."""
def test_remove_deletes_from_vector_store(self) -> None:
"""remove() should delete embeddings from vector_store."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
- library = SkillLibrary(embedder=embedder, vector_store=vector_store)
+ library = WorkflowLibrary(embedder=embedder, vector_store=vector_store)
- skill = _make_skill("test", "test skill", "pass")
- library.add(skill)
+ workflow = _make_workflow("test", "test workflow", "pass")
+ library.add(workflow)
- # Remove the skill
+ # Remove the workflow
result = library.remove("test")
assert result is True
@@ -564,22 +566,22 @@ def test_remove_deletes_from_vector_store(self) -> None:
assert len(vector_store.remove_calls) == 1
assert vector_store.remove_calls[0] == "test"
- def test_remove_still_removes_from_skills_dict(self) -> None:
- """remove() should still remove from _skills dict (existing behavior)."""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ def test_remove_still_removes_from_workflows_dict(self) -> None:
+ """remove() should still remove from _workflows dict (existing behavior)."""
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
- library = SkillLibrary(embedder=embedder, vector_store=vector_store)
+ library = WorkflowLibrary(embedder=embedder, vector_store=vector_store)
- skill = _make_skill("test", "test skill", "pass")
- library.add(skill)
+ workflow = _make_workflow("test", "test workflow", "pass")
+ library.add(workflow)
library.remove("test")
- # Should be gone from _skills
- assert "test" not in library._skills
+ # Should be gone from _workflows
+ assert "test" not in library._workflows
assert library.get("test") is None
@@ -610,38 +612,38 @@ def test_mock_vector_store_has_all_required_methods(self) -> None:
class TestWarmStartupCaching:
"""Test that vector_store caching works across library instances."""
- def test_warm_startup_skips_embedding_for_unchanged_skills(self) -> None:
- """Warm startup should skip re-embedding unchanged skills.
+ def test_warm_startup_skips_embedding_for_unchanged_workflows(self) -> None:
+ """Warm startup should skip re-embedding unchanged workflows.
- When SkillLibrary restarts with existing vector_store, unchanged skills
+ When WorkflowLibrary restarts with existing vector_store, unchanged workflows
should NOT be re-embedded.
- Scenario: Application restarts, creates new SkillLibrary with same vector_store.
+ Scenario: Application restarts, creates new WorkflowLibrary with same vector_store.
Expected: Embeddings cached in vector_store are reused, not regenerated.
This test will FAIL because refresh() calls clear() which defeats caching.
"""
- from py_code_mode.skills import MemorySkillStore, MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MemoryWorkflowStore, MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
- store = MemorySkillStore()
+ store = MemoryWorkflowStore()
- # Pre-populate store with a skill
- skill = _make_skill(
+ # Pre-populate store with a workflow
+ workflow = _make_workflow(
"fetch_url", "Fetch content from a URL", "return requests.get(url).text"
)
- store.save(skill)
+ store.save(workflow)
- # First startup: create library, indexes the skill
- SkillLibrary(embedder=embedder, vector_store=vector_store, store=store)
+ # First startup: create library, indexes the workflow
+ WorkflowLibrary(embedder=embedder, vector_store=vector_store, store=store)
# Verify first startup called add() once
- assert len(vector_store.add_calls) == 1, "First startup should embed the skill"
+ assert len(vector_store.add_calls) == 1, "First startup should embed the workflow"
- # SIMULATE RESTART: Create NEW SkillLibrary instance with SAME vector_store
+ # SIMULATE RESTART: Create NEW WorkflowLibrary instance with SAME vector_store
# (This is what happens when app restarts with persistent ChromaDB)
- SkillLibrary(embedder=embedder, vector_store=vector_store, store=store)
+ WorkflowLibrary(embedder=embedder, vector_store=vector_store, store=store)
# BUG: refresh() calls clear() which wipes the cache, so add() is called AGAIN
# EXPECTED: add() should NOT be called again (content hash matches)
@@ -655,25 +657,29 @@ class TestEndToEndVectorStoreWorkflow:
"""Integration test: end-to-end workflow with VectorStore."""
def test_add_search_remove_workflow(self) -> None:
- """Full workflow: add skills, search, remove.
+ """Full workflow: add workflows, search, remove.
This is the user journey test that exercises the full integration.
"""
- from py_code_mode.skills import MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
- library = SkillLibrary(embedder=embedder, vector_store=vector_store)
+ library = WorkflowLibrary(embedder=embedder, vector_store=vector_store)
- # Add skills
- skill1 = _make_skill("fetch_url", "Fetch content from URL", "return requests.get(url).text")
- skill2 = _make_skill("parse_json", "Parse JSON string", "return json.loads(text)")
- skill3 = _make_skill("write_file", "Write text to file", "Path(path).write_text(content)")
+ # Add workflows
+ workflow1 = _make_workflow(
+ "fetch_url", "Fetch content from URL", "return requests.get(url).text"
+ )
+ workflow2 = _make_workflow("parse_json", "Parse JSON string", "return json.loads(text)")
+ workflow3 = _make_workflow(
+ "write_file", "Write text to file", "Path(path).write_text(content)"
+ )
- library.add(skill1)
- library.add(skill2)
- library.add(skill3)
+ library.add(workflow1)
+ library.add(workflow2)
+ library.add(workflow3)
# Search should delegate to vector_store
results = library.search("download")
@@ -683,38 +689,38 @@ def test_add_search_remove_workflow(self) -> None:
# Verify vector_store was used
assert len(vector_store.search_calls) >= 1
- # Remove a skill
+ # Remove a workflow
library.remove("fetch_url")
# Should have removed from vector_store
assert "fetch_url" in vector_store.remove_calls
- # Search shouldn't find removed skill
+ # Search shouldn't find removed workflow
results = library.search("download")
assert not any(r.name == "fetch_url" for r in results)
def test_store_backed_library_with_vector_store(self) -> None:
- """SkillLibrary with both store and vector_store.
+ """WorkflowLibrary with both store and vector_store.
This tests the three-layer architecture:
- - SkillStore: persistence
+ - WorkflowStore: persistence
- VectorStore: embedding cache
- - SkillLibrary: orchestration
+ - WorkflowLibrary: orchestration
"""
- from py_code_mode.skills import MemorySkillStore, MockEmbedder, SkillLibrary
+ from py_code_mode.workflows import MemoryWorkflowStore, MockEmbedder, WorkflowLibrary
embedder = MockEmbedder(dimension=384)
vector_store = MockVectorStore()
- store = MemorySkillStore()
+ store = MemoryWorkflowStore()
# Populate store
- skill1 = _make_skill("skill1", "first", "pass")
- skill2 = _make_skill("skill2", "second", "pass")
- store.save(skill1)
- store.save(skill2)
+ workflow1 = _make_workflow("workflow1", "first", "pass")
+ workflow2 = _make_workflow("workflow2", "second", "pass")
+ store.save(workflow1)
+ store.save(workflow2)
# Create library with both store and vector_store
- library = SkillLibrary(embedder=embedder, vector_store=vector_store, store=store)
+ library = WorkflowLibrary(embedder=embedder, vector_store=vector_store, store=store)
# Should have loaded from store and indexed in vector_store
assert len(library) == 2
@@ -725,9 +731,9 @@ def test_store_backed_library_with_vector_store(self) -> None:
assert len(results) >= 1
assert len(vector_store.search_calls) >= 1
- # Add new skill - should go to both store and vector_store
- skill3 = _make_skill("skill3", "third", "pass")
- library.add(skill3)
+ # Add new workflow - should go to both store and vector_store
+ workflow3 = _make_workflow("workflow3", "third", "pass")
+ library.add(workflow3)
- assert store.exists("skill3")
- assert any(call[0] == "skill3" for call in vector_store.add_calls)
+ assert store.exists("workflow3")
+ assert any(call[0] == "workflow3" for call in vector_store.add_calls)
diff --git a/tests/test_skill_store.py b/tests/test_skill_store.py
index e4a19fc..b84d6d9 100644
--- a/tests/test_skill_store.py
+++ b/tests/test_skill_store.py
@@ -1,24 +1,24 @@
-"""Tests for SkillStore protocol and implementations."""
+"""Tests for WorkflowStore protocol and implementations."""
from pathlib import Path
import pytest
-from py_code_mode.skills import (
- FileSkillStore,
- MemorySkillStore,
- PythonSkill,
- RedisSkillStore,
- SkillStore,
+from py_code_mode.workflows import (
+ FileWorkflowStore,
+ MemoryWorkflowStore,
+ PythonWorkflow,
+ RedisWorkflowStore,
+ WorkflowStore,
)
# --- Fixtures ---
@pytest.fixture
-def sample_python_skill() -> PythonSkill:
- """A simple Python skill for testing."""
- return PythonSkill.from_source(
+def sample_python_workflow() -> PythonWorkflow:
+ """A simple Python workflow for testing."""
+ return PythonWorkflow.from_source(
name="greet",
source='async def run(name: str) -> str:\n return f"Hello, {name}!"',
description="Greet someone",
@@ -26,9 +26,9 @@ def sample_python_skill() -> PythonSkill:
@pytest.fixture
-def another_python_skill() -> PythonSkill:
- """Another Python skill for testing list operations."""
- return PythonSkill.from_source(
+def another_python_workflow() -> PythonWorkflow:
+ """Another Python workflow for testing list operations."""
+ return PythonWorkflow.from_source(
name="farewell",
source='async def run() -> str:\n return "Goodbye!"',
description="Say goodbye",
@@ -36,108 +36,108 @@ def another_python_skill() -> PythonSkill:
@pytest.fixture
-def memory_store() -> MemorySkillStore:
+def memory_store() -> MemoryWorkflowStore:
"""Fresh in-memory store."""
- return MemorySkillStore()
+ return MemoryWorkflowStore()
@pytest.fixture
-def file_store(tmp_path: Path) -> FileSkillStore:
+def file_store(tmp_path: Path) -> FileWorkflowStore:
"""File store in temp directory."""
- return FileSkillStore(tmp_path)
+ return FileWorkflowStore(tmp_path)
-# --- SkillStore Protocol Tests ---
+# --- WorkflowStore Protocol Tests ---
-class TestSkillStoreProtocol:
- """Verify implementations satisfy the SkillStore protocol."""
+class TestWorkflowStoreProtocol:
+ """Verify implementations satisfy the WorkflowStore protocol."""
- def test_memory_store_is_skill_store(self):
- """MemorySkillStore should satisfy SkillStore protocol."""
- store = MemorySkillStore()
- assert isinstance(store, SkillStore)
+ def test_memory_store_is_workflow_store(self):
+ """MemoryWorkflowStore should satisfy WorkflowStore protocol."""
+ store = MemoryWorkflowStore()
+ assert isinstance(store, WorkflowStore)
- def test_file_store_is_skill_store(self, tmp_path: Path):
- """FileSkillStore should satisfy SkillStore protocol."""
- store = FileSkillStore(tmp_path)
- assert isinstance(store, SkillStore)
+ def test_file_store_is_workflow_store(self, tmp_path: Path):
+ """FileWorkflowStore should satisfy WorkflowStore protocol."""
+ store = FileWorkflowStore(tmp_path)
+ assert isinstance(store, WorkflowStore)
-# --- MemorySkillStore Tests ---
+# --- MemoryWorkflowStore Tests ---
-class TestMemorySkillStore:
- """Tests for in-memory skill store."""
+class TestMemoryWorkflowStore:
+ """Tests for in-memory workflow store."""
- def test_save_and_load_python_skill(
- self, memory_store: MemorySkillStore, sample_python_skill: PythonSkill
+ def test_save_and_load_python_workflow(
+ self, memory_store: MemoryWorkflowStore, sample_python_workflow: PythonWorkflow
):
- """Should save and load a Python skill."""
- memory_store.save(sample_python_skill)
+ """Should save and load a Python workflow."""
+ memory_store.save(sample_python_workflow)
loaded = memory_store.load("greet")
assert loaded is not None
assert loaded.name == "greet"
assert loaded.description == "Greet someone"
- def test_load_nonexistent_returns_none(self, memory_store: MemorySkillStore):
- """Should return None for nonexistent skill."""
+ def test_load_nonexistent_returns_none(self, memory_store: MemoryWorkflowStore):
+ """Should return None for nonexistent workflow."""
assert memory_store.load("nonexistent") is None
- def test_delete_existing_skill(
- self, memory_store: MemorySkillStore, sample_python_skill: PythonSkill
+ def test_delete_existing_workflow(
+ self, memory_store: MemoryWorkflowStore, sample_python_workflow: PythonWorkflow
):
- """Should delete an existing skill."""
- memory_store.save(sample_python_skill)
+ """Should delete an existing workflow."""
+ memory_store.save(sample_python_workflow)
result = memory_store.delete("greet")
assert result is True
assert memory_store.load("greet") is None
- def test_delete_nonexistent_returns_false(self, memory_store: MemorySkillStore):
- """Should return False when deleting nonexistent skill."""
+ def test_delete_nonexistent_returns_false(self, memory_store: MemoryWorkflowStore):
+ """Should return False when deleting nonexistent workflow."""
result = memory_store.delete("nonexistent")
assert result is False
- def test_list_all_empty(self, memory_store: MemorySkillStore):
+ def test_list_all_empty(self, memory_store: MemoryWorkflowStore):
"""Should return empty list for empty store."""
assert memory_store.list_all() == []
- def test_list_all_with_skills(
+ def test_list_all_with_workflows(
self,
- memory_store: MemorySkillStore,
- sample_python_skill: PythonSkill,
- another_python_skill: PythonSkill,
+ memory_store: MemoryWorkflowStore,
+ sample_python_workflow: PythonWorkflow,
+ another_python_workflow: PythonWorkflow,
):
- """Should list all saved skills."""
- memory_store.save(sample_python_skill)
- memory_store.save(another_python_skill)
+ """Should list all saved workflows."""
+ memory_store.save(sample_python_workflow)
+ memory_store.save(another_python_workflow)
- skills = memory_store.list_all()
- names = {s.name for s in skills}
+ workflows = memory_store.list_all()
+        names = {w.name for w in workflows}
- assert len(skills) == 2
+ assert len(workflows) == 2
assert names == {"greet", "farewell"}
- def test_exists_true_for_saved_skill(
- self, memory_store: MemorySkillStore, sample_python_skill: PythonSkill
+ def test_exists_true_for_saved_workflow(
+ self, memory_store: MemoryWorkflowStore, sample_python_workflow: PythonWorkflow
):
- """Should return True for existing skill."""
- memory_store.save(sample_python_skill)
+ """Should return True for existing workflow."""
+ memory_store.save(sample_python_workflow)
assert memory_store.exists("greet") is True
- def test_exists_false_for_missing_skill(self, memory_store: MemorySkillStore):
- """Should return False for nonexistent skill."""
+ def test_exists_false_for_missing_workflow(self, memory_store: MemoryWorkflowStore):
+ """Should return False for nonexistent workflow."""
assert memory_store.exists("nonexistent") is False
def test_save_overwrites_existing(
- self, memory_store: MemorySkillStore, sample_python_skill: PythonSkill
+ self, memory_store: MemoryWorkflowStore, sample_python_workflow: PythonWorkflow
):
"""Saving with same name should overwrite."""
- memory_store.save(sample_python_skill)
+ memory_store.save(sample_python_workflow)
- updated = PythonSkill.from_source(
+ updated = PythonWorkflow.from_source(
name="greet",
source='async def run(name: str) -> str:\n return f"Hi, {name}!"',
description="Updated greeting",
@@ -149,94 +149,96 @@ def test_save_overwrites_existing(
assert loaded.description == "Updated greeting"
-# --- FileSkillStore Tests ---
+# --- FileWorkflowStore Tests ---
-class TestFileSkillStore:
- """Tests for file-based skill store."""
+class TestFileWorkflowStore:
+ """Tests for file-based workflow store."""
def test_save_creates_python_file(
- self, file_store: FileSkillStore, sample_python_skill: PythonSkill, tmp_path: Path
+ self, file_store: FileWorkflowStore, sample_python_workflow: PythonWorkflow, tmp_path: Path
):
"""Should write .py file to disk."""
- file_store.save(sample_python_skill)
+ file_store.save(sample_python_workflow)
expected_path = tmp_path / "greet.py"
assert expected_path.exists()
assert "def run" in expected_path.read_text()
def test_load_reads_python_file(
- self, file_store: FileSkillStore, sample_python_skill: PythonSkill
+ self, file_store: FileWorkflowStore, sample_python_workflow: PythonWorkflow
):
- """Should load skill from .py file."""
- file_store.save(sample_python_skill)
+ """Should load workflow from .py file."""
+ file_store.save(sample_python_workflow)
loaded = file_store.load("greet")
assert loaded is not None
assert loaded.name == "greet"
- assert isinstance(loaded, PythonSkill)
+ assert isinstance(loaded, PythonWorkflow)
- def test_load_nonexistent_returns_none(self, file_store: FileSkillStore):
+ def test_load_nonexistent_returns_none(self, file_store: FileWorkflowStore):
"""Should return None for nonexistent file."""
assert file_store.load("nonexistent") is None
def test_delete_removes_file(
- self, file_store: FileSkillStore, sample_python_skill: PythonSkill, tmp_path: Path
+ self, file_store: FileWorkflowStore, sample_python_workflow: PythonWorkflow, tmp_path: Path
):
"""Should delete .py file from disk."""
- file_store.save(sample_python_skill)
+ file_store.save(sample_python_workflow)
result = file_store.delete("greet")
assert result is True
assert not (tmp_path / "greet.py").exists()
- def test_delete_nonexistent_returns_false(self, file_store: FileSkillStore):
+ def test_delete_nonexistent_returns_false(self, file_store: FileWorkflowStore):
"""Should return False when file doesn't exist."""
result = file_store.delete("nonexistent")
assert result is False
def test_list_all_finds_py_files(
- self, file_store: FileSkillStore, sample_python_skill: PythonSkill
+ self, file_store: FileWorkflowStore, sample_python_workflow: PythonWorkflow
):
"""Should list all .py files in directory."""
- file_store.save(sample_python_skill)
+ file_store.save(sample_python_workflow)
- # Create another skill
- another = PythonSkill.from_source(
+ # Create another workflow
+ another = PythonWorkflow.from_source(
name="farewell",
source='async def run() -> str:\n return "Goodbye!"',
description="Say goodbye",
)
file_store.save(another)
- skills = file_store.list_all()
- names = {s.name for s in skills}
+ workflows = file_store.list_all()
+        names = {w.name for w in workflows}
- assert len(skills) == 2
+ assert len(workflows) == 2
assert names == {"greet", "farewell"}
- def test_list_all_ignores_underscore_files(self, file_store: FileSkillStore, tmp_path: Path):
+ def test_list_all_ignores_underscore_files(self, file_store: FileWorkflowStore, tmp_path: Path):
"""Should skip files starting with underscore."""
# Create __init__.py
(tmp_path / "__init__.py").write_text("")
(tmp_path / "_private.py").write_text("async def run(): pass")
- skills = file_store.list_all()
- assert len(skills) == 0
+ workflows = file_store.list_all()
+ assert len(workflows) == 0
- def test_exists_checks_file(self, file_store: FileSkillStore, sample_python_skill: PythonSkill):
+ def test_exists_checks_file(
+ self, file_store: FileWorkflowStore, sample_python_workflow: PythonWorkflow
+ ):
"""Should check if .py file exists."""
assert file_store.exists("greet") is False
- file_store.save(sample_python_skill)
+ file_store.save(sample_python_workflow)
assert file_store.exists("greet") is True
-# --- RedisSkillStore Tests (Mocked) ---
+# --- RedisWorkflowStore Tests (Mocked) ---
-class TestRedisSkillStore:
- """Tests for Redis-based skill store."""
+class TestRedisWorkflowStore:
+ """Tests for Redis-based workflow store."""
@pytest.fixture
def mock_redis(self):
@@ -270,169 +272,171 @@ def hexists(self, key: str, field: str) -> bool:
return MockRedis()
@pytest.fixture
- def redis_store(self, mock_redis) -> RedisSkillStore:
+ def redis_store(self, mock_redis) -> RedisWorkflowStore:
"""Redis store with mock client."""
- return RedisSkillStore(mock_redis, prefix="test-skills")
+ return RedisWorkflowStore(mock_redis, prefix="test-workflows")
- def test_save_and_load_python_skill(
- self, redis_store: RedisSkillStore, sample_python_skill: PythonSkill
+ def test_save_and_load_python_workflow(
+ self, redis_store: RedisWorkflowStore, sample_python_workflow: PythonWorkflow
):
- """Should serialize and deserialize Python skill."""
- redis_store.save(sample_python_skill)
+ """Should serialize and deserialize Python workflow."""
+ redis_store.save(sample_python_workflow)
loaded = redis_store.load("greet")
assert loaded is not None
assert loaded.name == "greet"
assert loaded.description == "Greet someone"
- # Stored skills have source and can invoke - duck typing
+ # Stored workflows expose source and invoke() - duck typing
assert hasattr(loaded, "source")
assert hasattr(loaded, "invoke")
- def test_load_nonexistent_returns_none(self, redis_store: RedisSkillStore):
- """Should return None for nonexistent skill."""
+ def test_load_nonexistent_returns_none(self, redis_store: RedisWorkflowStore):
+ """Should return None for nonexistent workflow."""
assert redis_store.load("nonexistent") is None
- def test_delete_existing_skill(
- self, redis_store: RedisSkillStore, sample_python_skill: PythonSkill
+ def test_delete_existing_workflow(
+ self, redis_store: RedisWorkflowStore, sample_python_workflow: PythonWorkflow
):
- """Should delete skill from Redis."""
- redis_store.save(sample_python_skill)
+ """Should delete workflow from Redis."""
+ redis_store.save(sample_python_workflow)
result = redis_store.delete("greet")
assert result is True
assert redis_store.load("greet") is None
- def test_delete_nonexistent_returns_false(self, redis_store: RedisSkillStore):
- """Should return False when skill doesn't exist."""
+ def test_delete_nonexistent_returns_false(self, redis_store: RedisWorkflowStore):
+ """Should return False when workflow doesn't exist."""
result = redis_store.delete("nonexistent")
assert result is False
- def test_list_all(self, redis_store: RedisSkillStore, sample_python_skill: PythonSkill):
- """Should list all skills from Redis."""
- redis_store.save(sample_python_skill)
+ def test_list_all(
+ self, redis_store: RedisWorkflowStore, sample_python_workflow: PythonWorkflow
+ ):
+ """Should list all workflows from Redis."""
+ redis_store.save(sample_python_workflow)
- another = PythonSkill.from_source(
+ another = PythonWorkflow.from_source(
name="farewell",
source='async def run() -> str:\n return "Goodbye!"',
description="Say goodbye",
)
redis_store.save(another)
- skills = redis_store.list_all()
- names = {s.name for s in skills}
+ workflows = redis_store.list_all()
+ names = {s.name for s in workflows}
- assert len(skills) == 2
+ assert len(workflows) == 2
assert names == {"greet", "farewell"}
- def test_exists(self, redis_store: RedisSkillStore, sample_python_skill: PythonSkill):
- """Should check if skill exists in Redis."""
+ def test_exists(self, redis_store: RedisWorkflowStore, sample_python_workflow: PythonWorkflow):
+ """Should check if workflow exists in Redis."""
assert redis_store.exists("greet") is False
- redis_store.save(sample_python_skill)
+ redis_store.save(sample_python_workflow)
assert redis_store.exists("greet") is True
- def test_uses_prefix_for_redis_key(self, mock_redis, sample_python_skill: PythonSkill):
+ def test_uses_prefix_for_redis_key(self, mock_redis, sample_python_workflow: PythonWorkflow):
"""Should use configured prefix for Redis hash key."""
- store = RedisSkillStore(mock_redis, prefix="my-prefix")
- store.save(sample_python_skill)
+ store = RedisWorkflowStore(mock_redis, prefix="my-prefix")
+ store.save(sample_python_workflow)
# Check the key in mock redis
- assert "my-prefix:__skills__" in mock_redis._data
+ assert "my-prefix:__workflows__" in mock_redis._data
-# --- FileSkillStore Name Validation Tests ---
+# --- FileWorkflowStore Name Validation Tests ---
-def _make_skill_with_invalid_name(name: str) -> PythonSkill:
- """Create a PythonSkill with an arbitrary name, bypassing from_source validation.
+def _make_workflow_with_invalid_name(name: str) -> PythonWorkflow:
+ """Create a PythonWorkflow with an arbitrary name, bypassing from_source validation.
- This is for testing the store-level validation, not skill construction.
- In production, PythonSkill.from_source already validates names, but
- FileSkillStore should also validate as defense-in-depth.
+ This is for testing the store-level validation, not workflow construction.
+ In production, PythonWorkflow.from_source already validates names, but
+ FileWorkflowStore should also validate as defense-in-depth.
"""
- # Create a valid skill first
- valid_skill = PythonSkill.from_source(
+ # Create a valid workflow first
+ valid_workflow = PythonWorkflow.from_source(
name="temp_valid_name",
source="async def run(): pass",
description="test",
)
# Replace the name with the invalid one for testing
- # This simulates what could happen if someone constructs a PythonSkill directly
- return PythonSkill(
+ # This simulates what could happen if someone constructs a PythonWorkflow directly
+ return PythonWorkflow(
name=name,
- description=valid_skill.description,
- parameters=valid_skill.parameters,
- source=valid_skill.source,
- _func=valid_skill._func,
- metadata=valid_skill.metadata,
+ description=valid_workflow.description,
+ parameters=valid_workflow.parameters,
+ source=valid_workflow.source,
+ _func=valid_workflow._func,
+ metadata=valid_workflow.metadata,
)
-class TestFileSkillStoreNameValidation:
- """Security tests for skill name validation to prevent path traversal."""
+class TestFileWorkflowStoreNameValidation:
+ """Security tests for workflow name validation to prevent path traversal."""
- def test_invalid_skill_name_rejected_dotdot_save(self, tmp_path: Path) -> None:
+ def test_invalid_workflow_name_rejected_dotdot_save(self, tmp_path: Path) -> None:
"""save() rejects path traversal names with ../"""
- store = FileSkillStore(tmp_path / "skills")
- skill = _make_skill_with_invalid_name("../malicious")
- with pytest.raises(ValueError, match="Invalid skill name"):
- store.save(skill)
+ store = FileWorkflowStore(tmp_path / "workflows")
+ workflow = _make_workflow_with_invalid_name("../malicious")
+ with pytest.raises(ValueError, match="Invalid workflow name"):
+ store.save(workflow)
- def test_invalid_skill_name_rejected_dotdot_load(self, tmp_path: Path) -> None:
+ def test_invalid_workflow_name_rejected_dotdot_load(self, tmp_path: Path) -> None:
"""load() rejects path traversal names with ../"""
- store = FileSkillStore(tmp_path / "skills")
- with pytest.raises(ValueError, match="Invalid skill name"):
+ store = FileWorkflowStore(tmp_path / "workflows")
+ with pytest.raises(ValueError, match="Invalid workflow name"):
store.load("../malicious")
- def test_invalid_skill_name_rejected_dotdot_delete(self, tmp_path: Path) -> None:
+ def test_invalid_workflow_name_rejected_dotdot_delete(self, tmp_path: Path) -> None:
"""delete() rejects path traversal names with ../"""
- store = FileSkillStore(tmp_path / "skills")
- with pytest.raises(ValueError, match="Invalid skill name"):
+ store = FileWorkflowStore(tmp_path / "workflows")
+ with pytest.raises(ValueError, match="Invalid workflow name"):
store.delete("../malicious")
- def test_invalid_skill_name_rejected_dotdot_exists(self, tmp_path: Path) -> None:
+ def test_invalid_workflow_name_rejected_dotdot_exists(self, tmp_path: Path) -> None:
"""exists() rejects path traversal names with ../"""
- store = FileSkillStore(tmp_path / "skills")
- with pytest.raises(ValueError, match="Invalid skill name"):
+ store = FileWorkflowStore(tmp_path / "workflows")
+ with pytest.raises(ValueError, match="Invalid workflow name"):
store.exists("../malicious")
- def test_invalid_skill_name_rejected_slash(self, tmp_path: Path) -> None:
+ def test_invalid_workflow_name_rejected_slash(self, tmp_path: Path) -> None:
"""save() rejects names with forward slashes."""
- store = FileSkillStore(tmp_path / "skills")
- skill = _make_skill_with_invalid_name("foo/bar")
- with pytest.raises(ValueError, match="Invalid skill name"):
- store.save(skill)
+ store = FileWorkflowStore(tmp_path / "workflows")
+ workflow = _make_workflow_with_invalid_name("foo/bar")
+ with pytest.raises(ValueError, match="Invalid workflow name"):
+ store.save(workflow)
- def test_invalid_skill_name_rejected_backslash(self, tmp_path: Path) -> None:
+ def test_invalid_workflow_name_rejected_backslash(self, tmp_path: Path) -> None:
"""save() rejects names with backslashes."""
- store = FileSkillStore(tmp_path / "skills")
- skill = _make_skill_with_invalid_name("foo\\bar")
- with pytest.raises(ValueError, match="Invalid skill name"):
- store.save(skill)
+ store = FileWorkflowStore(tmp_path / "workflows")
+ workflow = _make_workflow_with_invalid_name("foo\\bar")
+ with pytest.raises(ValueError, match="Invalid workflow name"):
+ store.save(workflow)
- def test_invalid_skill_name_rejected_starts_with_digit(self, tmp_path: Path) -> None:
+ def test_invalid_workflow_name_rejected_starts_with_digit(self, tmp_path: Path) -> None:
"""save() rejects names starting with a digit (invalid Python identifier)."""
- store = FileSkillStore(tmp_path / "skills")
- skill = _make_skill_with_invalid_name("123skill")
- with pytest.raises(ValueError, match="Invalid skill name"):
- store.save(skill)
+ store = FileWorkflowStore(tmp_path / "workflows")
+ workflow = _make_workflow_with_invalid_name("123workflow")
+ with pytest.raises(ValueError, match="Invalid workflow name"):
+ store.save(workflow)
- def test_invalid_skill_name_rejected_special_chars(self, tmp_path: Path) -> None:
+ def test_invalid_workflow_name_rejected_special_chars(self, tmp_path: Path) -> None:
"""save() rejects names with special characters."""
- store = FileSkillStore(tmp_path / "skills")
- for name in ["skill@name", "skill-name", "skill.name", "skill name"]:
- skill = _make_skill_with_invalid_name(name)
- with pytest.raises(ValueError, match="Invalid skill name"):
- store.save(skill)
+ store = FileWorkflowStore(tmp_path / "workflows")
+ for name in ["workflow@name", "workflow-name", "workflow.name", "workflow name"]:
+ workflow = _make_workflow_with_invalid_name(name)
+ with pytest.raises(ValueError, match="Invalid workflow name"):
+ store.save(workflow)
- def test_valid_skill_names_accepted(self, tmp_path: Path) -> None:
+ def test_valid_workflow_names_accepted(self, tmp_path: Path) -> None:
"""Valid Python identifiers are accepted."""
- store = FileSkillStore(tmp_path / "skills")
- for name in ["my_skill", "skill123", "_private", "CamelCase", "__dunder__"]:
- skill = PythonSkill.from_source(
+ store = FileWorkflowStore(tmp_path / "workflows")
+ for name in ["my_workflow", "workflow123", "_private", "CamelCase", "__dunder__"]:
+ workflow = PythonWorkflow.from_source(
name=name,
source="async def run(): pass",
description="test",
)
- store.save(skill)
+ store.save(workflow)
assert store.exists(name)
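The name-validation tests above pin down behavior, not implementation. A minimal sketch of a store-level guard that would satisfy them, assuming names resolve to `<root>/<name>.py`; the class and helper names here are illustrative, not the package's actual code:

```python
from pathlib import Path


class FileWorkflowStoreSketch:
    """Illustrative guard matching the validation tests above."""

    def __init__(self, root: Path) -> None:
        self._root = Path(root)
        self._root.mkdir(parents=True, exist_ok=True)

    def _path_for(self, name: str) -> Path:
        # str.isidentifier() rejects every invalid case in the tests:
        # "../malicious", "foo/bar", "foo\\bar", "123workflow",
        # "workflow-name", "workflow.name", "workflow name".
        if not name.isidentifier():
            raise ValueError(f"Invalid workflow name: {name!r}")
        return self._root / f"{name}.py"

    def exists(self, name: str) -> bool:
        return self._path_for(name).exists()

    def delete(self, name: str) -> bool:
        path = self._path_for(name)
        if not path.exists():
            return False
        path.unlink()
        return True
```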
diff --git a/tests/test_skills.py b/tests/test_workflows.py
rename from tests/test_skills.py
rename to tests/test_workflows.py
index 5ea4a00..b21525e 100644
--- a/tests/test_skills.py
+++ b/tests/test_workflows.py
@@ -1,19 +1,19 @@
-"""Tests for skills system - Python skills only."""
+"""Tests for workflows system - Python workflows only."""
from pathlib import Path
from textwrap import dedent
import pytest
-from py_code_mode.skills import PythonSkill, SkillParameter
+from py_code_mode.workflows import PythonWorkflow, WorkflowParameter
-class TestSkillParameter:
- """Tests for SkillParameter dataclass."""
+class TestWorkflowParameter:
+ """Tests for WorkflowParameter dataclass."""
def test_parameter_with_required(self) -> None:
"""Parameter can be marked required."""
- param = SkillParameter(
+ param = WorkflowParameter(
name="target",
type="string",
description="Target to process",
@@ -26,7 +26,7 @@ def test_parameter_with_required(self) -> None:
def test_parameter_with_default(self) -> None:
"""Parameters can have default values."""
- param = SkillParameter(
+ param = WorkflowParameter(
name="count",
type="integer",
description="Number of times",
@@ -38,18 +38,18 @@ def test_parameter_with_default(self) -> None:
assert param.required is False
-class TestPythonSkill:
- """Tests for .py skill format - full Python files with run() entrypoint."""
+class TestPythonWorkflow:
+ """Tests for .py workflow format - full Python files with run() entrypoint."""
@pytest.fixture
- def skill_file(self, tmp_path: Path) -> Path:
- """Create a sample .py skill file."""
- skill_path = tmp_path / "greet.py"
- skill_path.write_text(
+ def workflow_file(self, tmp_path: Path) -> Path:
+ """Create a sample .py workflow file."""
+ workflow_path = tmp_path / "greet.py"
+ workflow_path.write_text(
dedent('''
"""Greet someone by name.
- A friendly greeting skill.
+ A friendly greeting workflow.
"""
async def run(target_name: str, enthusiasm: int = 1) -> str:
@@ -65,70 +65,70 @@ async def run(target_name: str, enthusiasm: int = 1) -> str:
return f"Hello, {target_name}!" + "!" * (enthusiasm - 1)
''').strip()
)
- return skill_path
+ return workflow_path
- def test_load_from_file(self, skill_file: Path) -> None:
- """Load a Python skill from file."""
- skill = PythonSkill.from_file(skill_file)
+ def test_load_from_file(self, workflow_file: Path) -> None:
+ """Load a Python workflow from file."""
+ workflow = PythonWorkflow.from_file(workflow_file)
- assert skill.name == "greet"
- assert "Greet someone" in skill.description
+ assert workflow.name == "greet"
+ assert "Greet someone" in workflow.description
- def test_extracts_parameters_from_signature(self, skill_file: Path) -> None:
+ def test_extracts_parameters_from_signature(self, workflow_file: Path) -> None:
"""Parameters extracted from function signature."""
- skill = PythonSkill.from_file(skill_file)
+ workflow = PythonWorkflow.from_file(workflow_file)
- assert len(skill.parameters) == 2
+ assert len(workflow.parameters) == 2
# First param: target_name (required, no default)
- assert skill.parameters[0].name == "target_name"
- assert skill.parameters[0].type == "string"
- assert skill.parameters[0].required is True
+ assert workflow.parameters[0].name == "target_name"
+ assert workflow.parameters[0].type == "string"
+ assert workflow.parameters[0].required is True
# Second param: enthusiasm (optional, has default)
- assert skill.parameters[1].name == "enthusiasm"
- assert skill.parameters[1].type == "integer"
- assert skill.parameters[1].required is False
- assert skill.parameters[1].default == 1
+ assert workflow.parameters[1].name == "enthusiasm"
+ assert workflow.parameters[1].type == "integer"
+ assert workflow.parameters[1].required is False
+ assert workflow.parameters[1].default == 1
- def test_has_source_property(self, skill_file: Path) -> None:
- """Skill exposes source code for agent inspection."""
- skill = PythonSkill.from_file(skill_file)
+ def test_has_source_property(self, workflow_file: Path) -> None:
+ """Workflow exposes source code for agent inspection."""
+ workflow = PythonWorkflow.from_file(workflow_file)
- assert skill.source is not None
- assert "async def run(" in skill.source
- assert "Hello, {target_name}" in skill.source
+ assert workflow.source is not None
+ assert "async def run(" in workflow.source
+ assert "Hello, {target_name}" in workflow.source
@pytest.mark.asyncio
- async def test_invoke_calls_function(self, skill_file: Path) -> None:
- """Invoking skill calls the run() function."""
- skill = PythonSkill.from_file(skill_file)
+ async def test_invoke_calls_function(self, workflow_file: Path) -> None:
+ """Invoking workflow calls the run() function."""
+ workflow = PythonWorkflow.from_file(workflow_file)
- result = await skill.invoke(target_name="Alice")
+ result = await workflow.invoke(target_name="Alice")
assert result == "Hello, Alice!"
@pytest.mark.asyncio
- async def test_invoke_with_defaults(self, skill_file: Path) -> None:
+ async def test_invoke_with_defaults(self, workflow_file: Path) -> None:
"""Invoke uses default parameter values."""
- skill = PythonSkill.from_file(skill_file)
+ workflow = PythonWorkflow.from_file(workflow_file)
- result = await skill.invoke(target_name="Bob", enthusiasm=3)
+ result = await workflow.invoke(target_name="Bob", enthusiasm=3)
assert result == "Hello, Bob!!!"
@pytest.mark.asyncio
- async def test_invoke_validates_required_params(self, skill_file: Path) -> None:
+ async def test_invoke_validates_required_params(self, workflow_file: Path) -> None:
"""Invoke fails if required params missing."""
- skill = PythonSkill.from_file(skill_file)
+ workflow = PythonWorkflow.from_file(workflow_file)
with pytest.raises(TypeError):
- await skill.invoke()
+ await workflow.invoke()
- def test_skill_with_tools_access(self, tmp_path: Path) -> None:
- """Skill can reference tools in its code."""
- skill_path = tmp_path / "scan.py"
- skill_path.write_text(
+ def test_workflow_with_tools_access(self, tmp_path: Path) -> None:
+ """Workflow can reference tools in its code."""
+ workflow_path = tmp_path / "scan.py"
+ workflow_path.write_text(
dedent('''
"""Scan a network target."""
@@ -144,19 +144,19 @@ async def run(target: str, tools) -> str:
''').strip()
)
- skill = PythonSkill.from_file(skill_path)
+ workflow = PythonWorkflow.from_file(workflow_path)
# tools parameter should be recognized as special, not a user param
- user_params = [p for p in skill.parameters if p.name != "tools"]
+ user_params = [p for p in workflow.parameters if p.name != "tools"]
assert len(user_params) == 1
assert user_params[0].name == "target"
-class TestPythonSkillFromSource:
- """Tests for creating Python skills from source code."""
+class TestPythonWorkflowFromSource:
+ """Tests for creating Python workflows from source code."""
def test_from_source_basic(self) -> None:
- """Create skill from source string."""
+ """Create workflow from source string."""
source = dedent('''
"""Add two numbers."""
@@ -164,11 +164,11 @@ async def run(a: int, b: int) -> int:
return a + b
''').strip()
- skill = PythonSkill.from_source(name="add", source=source)
+ workflow = PythonWorkflow.from_source(name="add", source=source)
- assert skill.name == "add"
- assert skill.description == "Add two numbers."
- assert len(skill.parameters) == 2
+ assert workflow.name == "add"
+ assert workflow.description == "Add two numbers."
+ assert len(workflow.parameters) == 2
def test_from_source_with_description_override(self) -> None:
"""Description parameter overrides docstring."""
@@ -178,18 +178,18 @@ async def run() -> str:
return "hello"
''').strip()
- skill = PythonSkill.from_source(
+ workflow = PythonWorkflow.from_source(
name="test",
source=source,
description="Custom description",
)
- assert skill.description == "Custom description"
+ assert workflow.description == "Custom description"
def test_from_source_validates_syntax(self) -> None:
"""Invalid syntax raises SyntaxError."""
with pytest.raises(SyntaxError):
- PythonSkill.from_source(name="bad", source="async def run( broken")
+ PythonWorkflow.from_source(name="bad", source="async def run( broken")
def test_from_source_requires_run_function(self) -> None:
"""Must have run() function."""
@@ -200,25 +200,25 @@ def other_func():
''').strip()
with pytest.raises(ValueError, match="run"):
- PythonSkill.from_source(name="no_run", source=source)
+ PythonWorkflow.from_source(name="no_run", source=source)
def test_from_source_validates_name(self) -> None:
"""Name must be valid Python identifier."""
source = "async def run(): pass"
with pytest.raises(ValueError, match="identifier"):
- PythonSkill.from_source(name="invalid-name", source=source)
+ PythonWorkflow.from_source(name="invalid-name", source=source)
@pytest.mark.asyncio
- async def test_invoke_from_source_skill(self) -> None:
- """Can invoke skill created from source."""
+ async def test_invoke_from_source_workflow(self) -> None:
+ """Can invoke workflow created from source."""
source = dedent('''
"""Multiply numbers."""
async def run(x: int, y: int) -> int:
return x * y
''').strip()
- skill = PythonSkill.from_source(name="multiply", source=source)
- result = await skill.invoke(x=3, y=4)
+ workflow = PythonWorkflow.from_source(name="multiply", source=source)
+ result = await workflow.invoke(x=3, y=4)
assert result == 12
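Together, TestPythonWorkflow and TestPythonWorkflowFromSource document the whole `PythonWorkflow` surface. A compact usage example assembled from the assertions above:

```python
import asyncio
from textwrap import dedent

from py_code_mode.workflows import PythonWorkflow

source = dedent('''
    """Add two numbers."""

    async def run(a: int, b: int) -> int:
        return a + b
''').strip()

workflow = PythonWorkflow.from_source(name="add", source=source)

# The module docstring becomes the description; run()'s signature
# becomes the parameter list.
assert workflow.description == "Add two numbers."
assert [p.name for p in workflow.parameters] == ["a", "b"]

# invoke() is async and forwards keyword arguments to run().
assert asyncio.run(workflow.invoke(a=2, b=3)) == 5
```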
diff --git a/tests/test_skills_namespace_decoupling.py b/tests/test_workflows_namespace_decoupling.py
rename from tests/test_skills_namespace_decoupling.py
rename to tests/test_workflows_namespace_decoupling.py
index 2ed8829..26fb4c9 100644
--- a/tests/test_skills_namespace_decoupling.py
+++ b/tests/test_workflows_namespace_decoupling.py
@@ -1,6 +1,6 @@
-"""Tests for SkillsNamespace decoupling from InProcessExecutor.
+"""Tests for WorkflowsNamespace decoupling from InProcessExecutor.
-SkillsNamespace should accept a namespace dict directly, not an executor reference.
+WorkflowsNamespace should accept a namespace dict directly, not an executor reference.
This enables use in contexts where there's no executor (subprocess, container).
"""
@@ -11,23 +11,28 @@
import pytest
-from py_code_mode.execution.in_process.skills_namespace import SkillsNamespace
-from py_code_mode.skills import MemorySkillStore, MockEmbedder, PythonSkill, SkillLibrary
+from py_code_mode.execution.in_process.workflows_namespace import WorkflowsNamespace
+from py_code_mode.workflows import (
+ MemoryWorkflowStore,
+ MockEmbedder,
+ PythonWorkflow,
+ WorkflowLibrary,
+)
@pytest.fixture
-def skill_library() -> SkillLibrary:
- """Create a skill library with a test skill."""
- store = MemorySkillStore()
- library = SkillLibrary(embedder=MockEmbedder(), store=store)
+def workflow_library() -> WorkflowLibrary:
+ """Create a workflow library with a test workflow."""
+ store = MemoryWorkflowStore()
+ library = WorkflowLibrary(embedder=MockEmbedder(), store=store)
- # Add a test skill that accesses tools
- skill = PythonSkill.from_source(
+ # Add a test workflow that accesses tools
+ workflow = PythonWorkflow.from_source(
name="use_tools",
source='async def run(val: str) -> str:\n return f"tools={tools}, val={val}"',
- description="A skill that uses tools from namespace",
+ description="A workflow that uses tools from namespace",
)
- library.add(skill)
+ library.add(workflow)
return library
@@ -37,27 +42,27 @@ def namespace_dict() -> dict[str, Any]:
"""Create a namespace dict similar to what executor provides."""
return {
"tools": MagicMock(name="tools_namespace"),
- "skills": MagicMock(name="skills_namespace"), # Will be replaced
+ "workflows": MagicMock(name="workflows_namespace"), # Will be replaced
"artifacts": MagicMock(name="artifacts_namespace"),
}
-class TestSkillsNamespaceAcceptsDict:
- """SkillsNamespace constructor accepts namespace dict."""
+class TestWorkflowsNamespaceAcceptsDict:
+ """WorkflowsNamespace constructor accepts namespace dict."""
def test_accepts_namespace_dict(
- self, skill_library: SkillLibrary, namespace_dict: dict[str, Any]
+ self, workflow_library: WorkflowLibrary, namespace_dict: dict[str, Any]
) -> None:
"""Constructor accepts a plain dict for namespace."""
# Should not raise
- skills_ns = SkillsNamespace(skill_library, namespace_dict)
+ workflows_ns = WorkflowsNamespace(workflow_library, namespace_dict)
- # Should be able to list skills
- skills = skills_ns.list()
- assert len(skills) == 1
- assert skills[0]["name"] == "use_tools"
+ # Should be able to list workflows
+ workflows = workflows_ns.list()
+ assert len(workflows) == 1
+ assert workflows[0]["name"] == "use_tools"
- def test_rejects_executor_argument(self, skill_library: SkillLibrary) -> None:
+ def test_rejects_executor_argument(self, workflow_library: WorkflowLibrary) -> None:
"""Constructor raises TypeError if passed an executor-like object."""
# Create something that looks like an executor (has _namespace attribute)
@@ -68,176 +73,180 @@ def __init__(self) -> None:
fake_executor = FakeExecutor()
with pytest.raises(TypeError) as exc_info:
- SkillsNamespace(skill_library, fake_executor) # type: ignore[arg-type]
+ WorkflowsNamespace(workflow_library, fake_executor) # type: ignore[arg-type]
assert "namespace dict" in str(exc_info.value).lower()
assert "_namespace" in str(exc_info.value) or "executor" in str(exc_info.value).lower()
- def test_rejects_object_with_namespace_attr(self, skill_library: SkillLibrary) -> None:
+ def test_rejects_object_with_namespace_attr(self, workflow_library: WorkflowLibrary) -> None:
"""Constructor rejects any object with _namespace attribute."""
# Even a mock with _namespace should be rejected
mock_with_namespace = MagicMock()
mock_with_namespace._namespace = {}
with pytest.raises(TypeError):
- SkillsNamespace(skill_library, mock_with_namespace) # type: ignore[arg-type]
+ WorkflowsNamespace(workflow_library, mock_with_namespace) # type: ignore[arg-type]
class TestInvokeUsesNamespaceDirectly:
- """invoke() method uses tools/skills/artifacts from namespace dict."""
+ """invoke() method uses tools/workflows/artifacts from namespace dict."""
- def test_invoke_uses_tools_from_namespace(self, skill_library: SkillLibrary) -> None:
- """Skill invocation can access tools from namespace dict."""
- # Create a skill that returns what tools it sees
- skill = PythonSkill.from_source(
+ def test_invoke_uses_tools_from_namespace(self, workflow_library: WorkflowLibrary) -> None:
+ """Workflow invocation can access tools from namespace dict."""
+ # Create a workflow that returns what tools it sees
+ workflow = PythonWorkflow.from_source(
name="echo_tools",
source="async def run() -> str:\n return str(type(tools).__name__)",
description="Returns tools type",
)
- skill_library.add(skill)
+ workflow_library.add(workflow)
# Create namespace with identifiable tools
mock_tools = MagicMock(name="my_tools")
namespace = {
"tools": mock_tools,
- "skills": None,
+ "workflows": None,
"artifacts": None,
}
- skills_ns = SkillsNamespace(skill_library, namespace)
- result = skills_ns.invoke("echo_tools")
+ workflows_ns = WorkflowsNamespace(workflow_library, namespace)
+ result = workflows_ns.invoke("echo_tools")
- # The skill saw MagicMock as tools
+ # The workflow saw MagicMock as tools
assert "MagicMock" in result
- def test_invoke_uses_artifacts_from_namespace(self, skill_library: SkillLibrary) -> None:
- """Skill invocation can access artifacts from namespace dict."""
- skill = PythonSkill.from_source(
+ def test_invoke_uses_artifacts_from_namespace(self, workflow_library: WorkflowLibrary) -> None:
+ """Workflow invocation can access artifacts from namespace dict."""
+ workflow = PythonWorkflow.from_source(
name="use_artifacts",
source="async def run() -> bool:\n return artifacts is not None",
description="Checks artifacts access",
)
- skill_library.add(skill)
+ workflow_library.add(workflow)
mock_artifacts = MagicMock(name="my_artifacts")
namespace = {
"tools": None,
- "skills": None,
+ "workflows": None,
"artifacts": mock_artifacts,
}
- skills_ns = SkillsNamespace(skill_library, namespace)
- result = skills_ns.invoke("use_artifacts")
+ workflows_ns = WorkflowsNamespace(workflow_library, namespace)
+ result = workflows_ns.invoke("use_artifacts")
assert result is True
- def test_invoke_uses_deps_from_namespace(self, skill_library: SkillLibrary) -> None:
- """Skill invocation can access deps from namespace dict."""
- skill = PythonSkill.from_source(
+ def test_invoke_uses_deps_from_namespace(self, workflow_library: WorkflowLibrary) -> None:
+ """Workflow invocation can access deps from namespace dict."""
+ workflow = PythonWorkflow.from_source(
name="use_deps",
source="async def run() -> str:\n return str(deps)",
description="Checks deps access",
)
- skill_library.add(skill)
+ workflow_library.add(workflow)
mock_deps = MagicMock(name="my_deps")
namespace = {
"tools": None,
- "skills": None,
+ "workflows": None,
"artifacts": None,
"deps": mock_deps,
}
- skills_ns = SkillsNamespace(skill_library, namespace)
- result = skills_ns.invoke("use_deps")
+ workflows_ns = WorkflowsNamespace(workflow_library, namespace)
+ result = workflows_ns.invoke("use_deps")
assert "MagicMock" in result
class TestNamespaceIsolation:
- """Skills cannot modify the parent namespace."""
+ """Workflows cannot modify the parent namespace."""
- def test_skill_cannot_modify_parent_namespace(self, skill_library: SkillLibrary) -> None:
- """Skill execution cannot add variables to parent namespace."""
+ def test_workflow_cannot_modify_parent_namespace(
+ self, workflow_library: WorkflowLibrary
+ ) -> None:
+ """Workflow execution cannot add variables to parent namespace."""
polluter_source = (
"async def run() -> str:\n"
" global pollution\n"
' pollution = "leaked"\n'
' return "done"'
)
- skill = PythonSkill.from_source(
+ workflow = PythonWorkflow.from_source(
name="polluter",
source=polluter_source,
description="Tries to pollute namespace",
)
- skill_library.add(skill)
+ workflow_library.add(workflow)
original_namespace: dict[str, Any] = {
"tools": None,
- "skills": None,
+ "workflows": None,
"artifacts": None,
}
- skills_ns = SkillsNamespace(skill_library, original_namespace)
- skills_ns.invoke("polluter")
+ workflows_ns = WorkflowsNamespace(workflow_library, original_namespace)
+ workflows_ns.invoke("polluter")
# Parent namespace should not have the pollution
assert "pollution" not in original_namespace
- def test_skill_cannot_modify_tools_reference(self, skill_library: SkillLibrary) -> None:
- """Skill cannot replace tools in parent namespace."""
+ def test_workflow_cannot_modify_tools_reference(
+ self, workflow_library: WorkflowLibrary
+ ) -> None:
+ """Workflow cannot replace tools in parent namespace."""
replacer_source = (
'async def run() -> str:\n global tools\n tools = "replaced"\n return "done"'
)
- skill = PythonSkill.from_source(
+ workflow = PythonWorkflow.from_source(
name="replacer",
source=replacer_source,
description="Tries to replace tools",
)
- skill_library.add(skill)
+ workflow_library.add(workflow)
original_tools = MagicMock(name="original")
namespace: dict[str, Any] = {
"tools": original_tools,
- "skills": None,
+ "workflows": None,
"artifacts": None,
}
- skills_ns = SkillsNamespace(skill_library, namespace)
- skills_ns.invoke("replacer")
+ workflows_ns = WorkflowsNamespace(workflow_library, namespace)
+ workflows_ns.invoke("replacer")
# Original tools should still be in namespace
assert namespace["tools"] is original_tools
class TestIntegrationWithExecutor:
- """SkillsNamespace works when wired up via executor."""
+ """WorkflowsNamespace works when wired up via executor."""
@pytest.mark.asyncio
async def test_executor_passes_namespace_not_self(self) -> None:
"""InProcessExecutor should pass self._namespace, not self."""
from py_code_mode.execution.in_process.executor import InProcessExecutor
- from py_code_mode.skills import MemorySkillStore, MockEmbedder, SkillLibrary
from py_code_mode.tools import ToolRegistry
+ from py_code_mode.workflows import MemoryWorkflowStore, MockEmbedder, WorkflowLibrary
- store = MemorySkillStore()
- library = SkillLibrary(embedder=MockEmbedder(), store=store)
+ store = MemoryWorkflowStore()
+ library = WorkflowLibrary(embedder=MockEmbedder(), store=store)
registry = ToolRegistry()
executor = InProcessExecutor(
registry=registry,
- skill_library=library,
+ workflow_library=library,
)
- # The skills namespace should have received a dict, not the executor
- skills_ns = executor._namespace.get("skills")
- assert skills_ns is not None
+ # The workflows namespace should have received a dict, not the executor
+ workflows_ns = executor._namespace.get("workflows")
+ assert workflows_ns is not None
# Verify it has _namespace attr (the dict we passed), not _executor
- assert hasattr(skills_ns, "_namespace")
- assert isinstance(skills_ns._namespace, dict)
+ assert hasattr(workflows_ns, "_namespace")
+ assert isinstance(workflows_ns._namespace, dict)
# Should NOT have _executor attribute
- assert not hasattr(skills_ns, "_executor")
+ assert not hasattr(workflows_ns, "_executor")
await executor.close()
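The isolation tests above imply that `invoke()` executes each workflow against a shallow copy of the namespace dict rather than the dict itself. A rough sketch of that pattern, illustrative only; `load()` as the library lookup is an assumption:

```python
import asyncio
from typing import Any


class WorkflowsNamespaceSketch:
    """Illustrative only - the contract the tests above describe."""

    def __init__(self, library: Any, namespace: dict[str, Any]) -> None:
        # Accept a plain dict; reject anything executor-shaped.
        if hasattr(namespace, "_namespace") or not isinstance(namespace, dict):
            raise TypeError("expected a namespace dict, not an executor")
        self._library = library
        self._namespace = namespace

    def invoke(self, name: str, **kwargs: Any) -> Any:
        workflow = self._library.load(name)  # hypothetical lookup helper
        # Execute against a shallow copy: `global` writes inside the
        # workflow land in the copy, never in the parent dict.
        scope: dict[str, Any] = dict(self._namespace)
        exec(compile(workflow.source, f"<workflow {name}>", "exec"), scope)
        return asyncio.run(scope["run"](**kwargs))
```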
diff --git a/tests/test_storage.py b/tests/test_storage.py
index 434acd1..ffefa41 100644
--- a/tests/test_storage.py
+++ b/tests/test_storage.py
@@ -50,17 +50,17 @@ def test_returns_file_storage_access_type(self, tmp_path: Path) -> None:
# NOTE: tools_path tests removed - tools now owned by executors, not storage
- def test_skills_path_is_always_set(self, tmp_path: Path) -> None:
- """skills_path is always set (points to skills/ subdirectory).
+ def test_workflows_path_is_always_set(self, tmp_path: Path) -> None:
+ """workflows_path is always set (points to workflows/ subdirectory).
- Breaks when: skills_path is None or points to wrong location.
+ Breaks when: workflows_path is None or points to wrong location.
"""
storage = FileStorage(tmp_path)
result = storage.get_serializable_access()
- expected_skills_path = tmp_path / "skills"
- assert result.skills_path == expected_skills_path
+ expected_workflows_path = tmp_path / "workflows"
+ assert result.workflows_path == expected_workflows_path
def test_artifacts_path_is_always_set(self, tmp_path: Path) -> None:
"""artifacts_path is always set (points to artifacts/ subdirectory).
@@ -83,9 +83,9 @@ def test_paths_are_absolute(self, tmp_path: Path) -> None:
result = storage.get_serializable_access()
- # skills_path may be None if skills directory doesn't exist
- if result.skills_path is not None:
- assert result.skills_path.is_absolute()
+ # workflows_path may be None if workflows directory doesn't exist
+ if result.workflows_path is not None:
+ assert result.workflows_path.is_absolute()
assert result.artifacts_path.is_absolute()
def test_access_descriptor_is_frozen_dataclass(self, tmp_path: Path) -> None:
@@ -173,8 +173,8 @@ def test_redis_url_without_password(self) -> None:
# NOTE: tools_prefix test removed - tools now owned by executors, not storage
- def test_skills_prefix_is_correctly_formatted(self, mock_redis: MockRedisClient) -> None:
- """skills_prefix follows {prefix}:skills format.
+ def test_workflows_prefix_is_correctly_formatted(self, mock_redis: MockRedisClient) -> None:
+ """workflows_prefix follows {prefix}:workflows format.
Breaks when: Prefix format doesn't match expected pattern.
"""
@@ -182,7 +182,7 @@ def test_skills_prefix_is_correctly_formatted(self, mock_redis: MockRedisClient)
result = storage.get_serializable_access()
- assert result.skills_prefix == "myapp:skills"
+ assert result.workflows_prefix == "myapp:workflows"
def test_artifacts_prefix_is_correctly_formatted(self, mock_redis: MockRedisClient) -> None:
"""artifacts_prefix follows {prefix}:artifacts format.
@@ -279,4 +279,4 @@ def test_method_exists_on_redis_storage(self, mock_redis: MockRedisClient) -> No
# NOTE: TestStorageBackendExecutionMethods was removed in the executor-ownership refactor.
# storage.get_tool_registry() was removed - tools are now owned by executors via config.
-# storage.get_skill_library() tests remain in test_skills.py and test_semantic.py.
+# storage.get_workflow_library() tests remain in test_workflows.py and test_semantic.py.
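For orientation, the serializable access descriptors these tests exercise come down to two small frozen dataclasses. A sketch with the field set inferred from the assertions above; the vectors_path / vectors_prefix fields planned in test_storage_vector_store.py are omitted:

```python
from dataclasses import dataclass
from pathlib import Path


@dataclass(frozen=True)
class FileStorageAccess:
    # Absolute paths into <base>/workflows and <base>/artifacts.
    workflows_path: Path
    artifacts_path: Path


@dataclass(frozen=True)
class RedisStorageAccess:
    # Prefixes follow "{prefix}:workflows" and "{prefix}:artifacts".
    redis_url: str
    workflows_prefix: str
    artifacts_prefix: str
```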
diff --git a/tests/test_storage_vector_store.py b/tests/test_storage_vector_store.py
index 9a6d557..af93e0e 100644
--- a/tests/test_storage_vector_store.py
+++ b/tests/test_storage_vector_store.py
@@ -3,7 +3,7 @@
Phase 4: Storage Backend Integration
This module tests that FileStorage and RedisStorage properly integrate
-with VectorStore implementations, providing vector stores to SkillLibrary
+with VectorStore implementations, providing vector stores to WorkflowLibrary
for semantic search.
TDD RED phase: These tests define the interface before implementation.
@@ -12,7 +12,7 @@
2. RedisStorage.get_vector_store() is implemented
3. FileStorageAccess gains vectors_path field
4. RedisStorageAccess gains vectors_prefix field
-5. Storage.get_skill_library() passes vector_store to create_skill_library()
+5. Storage.get_workflow_library() passes vector_store to create_workflow_library()
"""
from __future__ import annotations
@@ -107,60 +107,60 @@ def test_get_vector_store_creates_embedder(self, tmp_path: Path) -> None:
# =============================================================================
-# Phase 4.2: FileStorage.get_skill_library() with vector_store
+# Phase 4.2: FileStorage.get_workflow_library() with vector_store
# =============================================================================
-class TestFileStorageSkillLibraryVectorStoreIntegration:
- """Tests for SkillLibrary receiving vector_store from FileStorage."""
+class TestFileStorageWorkflowLibraryVectorStoreIntegration:
+ """Tests for WorkflowLibrary receiving vector_store from FileStorage."""
- def test_skill_library_has_vector_store_attribute(self, tmp_path: Path) -> None:
- """SkillLibrary created by FileStorage has vector_store attribute.
+ def test_workflow_library_has_vector_store_attribute(self, tmp_path: Path) -> None:
+ """WorkflowLibrary created by FileStorage has vector_store attribute.
- Breaks when: create_skill_library() not called with vector_store parameter.
+ Breaks when: create_workflow_library() not called with vector_store parameter.
"""
storage = FileStorage(tmp_path)
- library = storage.get_skill_library()
+ library = storage.get_workflow_library()
# Should have vector_store attribute
assert hasattr(library, "vector_store")
- def test_skill_library_vector_store_matches_get_vector_store(self, tmp_path: Path) -> None:
- """SkillLibrary.vector_store is same instance as storage.get_vector_store().
+ def test_workflow_library_vector_store_matches_get_vector_store(self, tmp_path: Path) -> None:
+ """WorkflowLibrary.vector_store is same instance as storage.get_vector_store().
Breaks when: Different vector store instances created.
"""
storage = FileStorage(tmp_path)
vector_store = storage.get_vector_store()
- library = storage.get_skill_library()
+ library = storage.get_workflow_library()
# Should be the same instance (or both None)
assert library.vector_store is vector_store
- def test_skill_library_uses_vector_store_for_search(self, tmp_path: Path) -> None:
- """SkillLibrary.search() uses vector_store when available.
+ def test_workflow_library_uses_vector_store_for_search(self, tmp_path: Path) -> None:
+ """WorkflowLibrary.search() uses vector_store when available.
Breaks when: Vector store not used for semantic search.
"""
- from py_code_mode.skills import PythonSkill
+ from py_code_mode.workflows import PythonWorkflow
storage = FileStorage(tmp_path)
- library = storage.get_skill_library()
+ library = storage.get_workflow_library()
- # Add a skill with distinctive description
- skill = PythonSkill.from_source(
+ # Add a workflow with distinctive description
+ workflow = PythonWorkflow.from_source(
name="calculate_total",
source="async def run(numbers): return sum(numbers)",
description="Add up all numbers in a list",
)
- library.add(skill)
+ library.add(workflow)
# Search by semantic meaning
results = library.search("sum values together")
- # Should find the skill via semantic similarity
+ # Should find the workflow via semantic similarity
assert len(results) > 0
assert any(r.name == "calculate_total" for r in results)
@@ -310,29 +310,29 @@ def test_vectors_prefix_follows_pattern_when_implemented(
class TestStorageVectorStoreIntegration:
- """Integration tests for storage + vector store + skill library."""
+ """Integration tests for storage + vector store + workflow library."""
def test_file_storage_end_to_end_semantic_search(self, tmp_path: Path) -> None:
- """Complete workflow: FileStorage -> VectorStore -> SkillLibrary -> search.
+ """Complete workflow: FileStorage -> VectorStore -> WorkflowLibrary -> search.
User journey: Developer uses FileStorage with semantic search.
Breaks when: Any link in the chain fails.
"""
- from py_code_mode.skills import PythonSkill
+ from py_code_mode.workflows import PythonWorkflow
storage = FileStorage(tmp_path)
- library = storage.get_skill_library()
+ library = storage.get_workflow_library()
- # Add skills with semantic descriptions
+ # Add workflows with semantic descriptions
library.add(
- PythonSkill.from_source(
+ PythonWorkflow.from_source(
name="http_get",
source="async def run(url): import requests; return requests.get(url)",
description="Fetch data from a URL using HTTP GET request",
)
)
library.add(
- PythonSkill.from_source(
+ PythonWorkflow.from_source(
name="parse_json",
source="async def run(text): import json; return json.loads(text)",
description="Parse JSON string into Python object",
@@ -344,35 +344,35 @@ def test_file_storage_end_to_end_semantic_search(self, tmp_path: Path) -> None:
# Should find http_get via semantic similarity
assert len(results) > 0
- skill_names = [r.name for r in results]
- assert "http_get" in skill_names
+ workflow_names = [r.name for r in results]
+ assert "http_get" in workflow_names
def test_vector_store_persists_across_storage_instances(self, tmp_path: Path) -> None:
"""Vector store persists when FileStorage recreated.
Breaks when: Vectors not saved to disk, lost on restart.
"""
- from py_code_mode.skills import PythonSkill
+ from py_code_mode.workflows import PythonWorkflow
- # First session: create skill
+ # First session: create workflow
storage1 = FileStorage(tmp_path)
- library1 = storage1.get_skill_library()
+ library1 = storage1.get_workflow_library()
library1.add(
- PythonSkill.from_source(
- name="test_skill",
+ PythonWorkflow.from_source(
+ name="test_workflow",
source="async def run(): return 1",
- description="A test skill for persistence",
+ description="A test workflow for persistence",
)
)
# Second session: new storage instance
storage2 = FileStorage(tmp_path)
- library2 = storage2.get_skill_library()
+ library2 = storage2.get_workflow_library()
- # Should still find the skill (vectors persisted)
- results = library2.search("test skill")
+ # Should still find the workflow (vectors persisted)
+ results = library2.search("test workflow")
assert len(results) > 0
- assert any(r.name == "test_skill" for r in results)
+ assert any(r.name == "test_workflow" for r in results)
def test_storage_access_includes_vector_store_path(self, tmp_path: Path) -> None:
"""get_serializable_access() includes vectors_path for subprocess.
@@ -400,61 +400,61 @@ def test_storage_access_includes_vector_store_path(self, tmp_path: Path) -> None
class TestVectorStoreEdgeCases:
"""Edge cases and error handling for vector store integration."""
- def test_skill_library_works_without_vector_store(self, tmp_path: Path) -> None:
- """SkillLibrary works when vector_store is None (fallback mode).
+ def test_workflow_library_works_without_vector_store(self, tmp_path: Path) -> None:
+ """WorkflowLibrary works when vector_store is None (fallback mode).
Breaks when: Library requires vector store, fails when chromadb unavailable.
"""
- from py_code_mode.skills import PythonSkill
+ from py_code_mode.workflows import PythonWorkflow
storage = FileStorage(tmp_path)
# Mock vector store being unavailable
with patch.object(storage, "get_vector_store", return_value=None):
- library = storage.get_skill_library()
+ library = storage.get_workflow_library()
# Should still work for basic operations
- skill = PythonSkill.from_source(
+ workflow = PythonWorkflow.from_source(
name="basic",
source="async def run(): return 1",
- description="Basic skill",
+ description="Basic workflow",
)
- library.add(skill)
+ library.add(workflow)
# Search should still work (using fallback embedder)
results = library.search("basic")
assert len(results) > 0
- def test_vector_store_handles_empty_skills_directory(self, tmp_path: Path) -> None:
- """Vector store handles empty skills directory gracefully.
+ def test_vector_store_handles_empty_workflows_directory(self, tmp_path: Path) -> None:
+ """Vector store handles empty workflows directory gracefully.
Breaks when: Vector store crashes on empty collection.
"""
storage = FileStorage(tmp_path)
- library = storage.get_skill_library()
+ library = storage.get_workflow_library()
- # Search with no skills should return empty
+ # Search with no workflows should return empty
results = library.search("anything")
assert results == []
- def test_vector_store_count_matches_skill_count(self, tmp_path: Path) -> None:
- """vector_store.count() matches number of skills in library.
+ def test_vector_store_count_matches_workflow_count(self, tmp_path: Path) -> None:
+ """vector_store.count() matches number of workflows in library.
- Breaks when: Vector count diverges from skill count.
+ Breaks when: Vector count diverges from workflow count.
"""
- from py_code_mode.skills import PythonSkill
+ from py_code_mode.workflows import PythonWorkflow
storage = FileStorage(tmp_path)
- library = storage.get_skill_library()
+ library = storage.get_workflow_library()
vector_store = storage.get_vector_store()
- # Add skills
+ # Add workflows
for i in range(3):
library.add(
- PythonSkill.from_source(
- name=f"skill_{i}",
+ PythonWorkflow.from_source(
+ name=f"workflow_{i}",
source="async def run(): return 1",
- description=f"Skill number {i}",
+ description=f"Workflow number {i}",
)
)
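The end-to-end path these integration tests cover can be exercised in a few lines. A sketch using only calls that appear in the tests above; the base directory is a placeholder:

```python
from pathlib import Path

from py_code_mode.storage import FileStorage
from py_code_mode.workflows import PythonWorkflow

storage = FileStorage(Path("/tmp/code-mode-demo"))  # placeholder base dir
library = storage.get_workflow_library()

library.add(
    PythonWorkflow.from_source(
        name="http_get",
        source="async def run(url): import requests; return requests.get(url)",
        description="Fetch data from a URL using HTTP GET request",
    )
)

# The query shares no tokens with the workflow name; the match comes from
# the vector store (or the fallback embedder when chromadb is unavailable).
for result in library.search("download a web page"):
    print(result.name)
```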
diff --git a/tests/test_storage_wrapper_cleanup.py b/tests/test_storage_wrapper_cleanup.py
index 82801da..e327995 100644
--- a/tests/test_storage_wrapper_cleanup.py
+++ b/tests/test_storage_wrapper_cleanup.py
@@ -12,15 +12,15 @@
import pytest
from py_code_mode.artifacts import ArtifactStoreProtocol
-from py_code_mode.skills import SkillLibrary
from py_code_mode.storage import FileStorage, RedisStorage
+from py_code_mode.workflows import WorkflowLibrary
if TYPE_CHECKING:
from tests.conftest import MockRedisClient
class TestWrapperPropertiesRemoved:
- """Verify .tools, .skills, .artifacts properties are REMOVED."""
+ """Verify .tools, .workflows, .artifacts properties are REMOVED."""
def test_file_storage_no_tools_property(self, tmp_path: Path) -> None:
"""FileStorage.tools property must be removed.
@@ -32,15 +32,15 @@ def test_file_storage_no_tools_property(self, tmp_path: Path) -> None:
with pytest.raises(AttributeError):
_ = storage.tools
- def test_file_storage_no_skills_property(self, tmp_path: Path) -> None:
- """FileStorage.skills property must be removed.
+ def test_file_storage_no_workflows_property(self, tmp_path: Path) -> None:
+ """FileStorage.workflows property must be removed.
Breaks when: Property still exists (should raise AttributeError).
"""
storage = FileStorage(tmp_path)
with pytest.raises(AttributeError):
- _ = storage.skills
+ _ = storage.workflows
def test_file_storage_no_artifacts_property(self, tmp_path: Path) -> None:
"""FileStorage.artifacts property must be removed.
@@ -62,15 +62,15 @@ def test_redis_storage_no_tools_property(self, mock_redis: MockRedisClient) -> N
with pytest.raises(AttributeError):
_ = storage.tools
- def test_redis_storage_no_skills_property(self, mock_redis: MockRedisClient) -> None:
- """RedisStorage.skills property must be removed.
+ def test_redis_storage_no_workflows_property(self, mock_redis: MockRedisClient) -> None:
+ """RedisStorage.workflows property must be removed.
Breaks when: Property still exists (should raise AttributeError).
"""
storage = RedisStorage(redis=mock_redis, prefix="test")
with pytest.raises(AttributeError):
- _ = storage.skills
+ _ = storage.workflows
def test_redis_storage_no_artifacts_property(self, mock_redis: MockRedisClient) -> None:
"""RedisStorage.artifacts property must be removed.
@@ -89,16 +89,18 @@ class TestGetMethodsReturnDirectTypes:
# NOTE: test_file_storage_get_tool_registry_returns_tool_registry removed
# tools are now owned by executors, not storage
- def test_file_storage_get_skill_library_returns_skill_library(self, tmp_path: Path) -> None:
- """get_skill_library() returns SkillLibrary directly.
+ def test_file_storage_get_workflow_library_returns_workflow_library(
+ self, tmp_path: Path
+ ) -> None:
+ """get_workflow_library() returns WorkflowLibrary directly.
- Breaks when: Returns SkillStoreWrapper or wrong type.
+ Breaks when: Returns WorkflowStoreWrapper or wrong type.
"""
storage = FileStorage(tmp_path)
- result = storage.get_skill_library()
+ result = storage.get_workflow_library()
- assert isinstance(result, SkillLibrary)
+ assert isinstance(result, WorkflowLibrary)
def test_file_storage_get_artifact_store_returns_artifact_store_protocol(
self, tmp_path: Path
@@ -118,18 +120,18 @@ def test_file_storage_get_artifact_store_returns_artifact_store_protocol(
# NOTE: test_redis_storage_get_tool_registry_returns_tool_registry removed
# tools are now owned by executors, not storage
- def test_redis_storage_get_skill_library_returns_skill_library(
+ def test_redis_storage_get_workflow_library_returns_workflow_library(
self, mock_redis: MockRedisClient
) -> None:
- """get_skill_library() returns SkillLibrary directly.
+ """get_workflow_library() returns WorkflowLibrary directly.
- Breaks when: Returns SkillStoreWrapper or wrong type.
+ Breaks when: Returns WorkflowStoreWrapper or wrong type.
"""
storage = RedisStorage(redis=mock_redis, prefix="test")
- result = storage.get_skill_library()
+ result = storage.get_workflow_library()
- assert isinstance(result, SkillLibrary)
+ assert isinstance(result, WorkflowLibrary)
def test_redis_storage_get_artifact_store_returns_artifact_store_protocol(
self, mock_redis: MockRedisClient
@@ -168,14 +170,14 @@ def test_redis_tool_store_wrapper_not_exported(self) -> None:
assert not hasattr(storage, "RedisToolStoreWrapper")
- def test_skill_store_wrapper_not_exported(self) -> None:
- """SkillStoreWrapper should not be in storage exports.
+ def test_workflow_store_wrapper_not_exported(self) -> None:
+ """WorkflowStoreWrapper should not be in storage exports.
Breaks when: Class is still exported from py_code_mode.storage.
"""
from py_code_mode import storage
- assert not hasattr(storage, "SkillStoreWrapper")
+ assert not hasattr(storage, "WorkflowStoreWrapper")
def test_artifact_store_wrapper_not_exported(self) -> None:
"""ArtifactStoreWrapper should not be in storage exports.
@@ -195,14 +197,14 @@ def test_tool_store_protocol_not_exported(self) -> None:
assert not hasattr(storage, "ToolStore")
- def test_skill_store_wrapper_protocol_not_exported(self) -> None:
- """SkillStoreWrapperProtocol should not be in storage exports.
+ def test_workflow_store_wrapper_protocol_not_exported(self) -> None:
+ """WorkflowStoreWrapperProtocol should not be in storage exports.
Breaks when: Protocol is still exported from py_code_mode.storage.
"""
from py_code_mode import storage
- assert not hasattr(storage, "SkillStoreWrapperProtocol")
+ assert not hasattr(storage, "WorkflowStoreWrapperProtocol")
def test_artifact_store_wrapper_protocol_not_exported(self) -> None:
"""ArtifactStoreWrapperProtocol should not be in storage exports.
@@ -254,14 +256,14 @@ class TestStorageBackendProtocolSimplified:
# NOTE: test_storage_backend_has_get_tool_registry removed
# tools are now owned by executors, not storage
- def test_storage_backend_has_get_skill_library(self) -> None:
- """StorageBackend protocol must have get_skill_library method.
+ def test_storage_backend_has_get_workflow_library(self) -> None:
+ """StorageBackend protocol must have get_workflow_library method.
Breaks when: Method is missing from protocol.
"""
from py_code_mode.storage.backends import StorageBackend
- assert hasattr(StorageBackend, "get_skill_library")
+ assert hasattr(StorageBackend, "get_workflow_library")
def test_storage_backend_has_get_artifact_store(self) -> None:
"""StorageBackend protocol must have get_artifact_store method.
@@ -293,15 +295,15 @@ def test_storage_backend_no_tools_property(self) -> None:
annotations = getattr(StorageBackend, "__protocol_attrs__", set())
assert "tools" not in annotations
- def test_storage_backend_no_skills_property(self) -> None:
- """StorageBackend protocol must NOT have skills property.
+ def test_storage_backend_no_workflows_property(self) -> None:
+ """StorageBackend protocol must NOT have workflows property.
Breaks when: Property still exists in protocol.
"""
from py_code_mode.storage.backends import StorageBackend
annotations = getattr(StorageBackend, "__protocol_attrs__", set())
- assert "skills" not in annotations
+ assert "workflows" not in annotations
def test_storage_backend_no_artifacts_property(self) -> None:
"""StorageBackend protocol must NOT have artifacts property.
diff --git a/tests/test_store.py b/tests/test_store.py
index 3529915..7308b50 100644
--- a/tests/test_store.py
+++ b/tests/test_store.py
@@ -1,4 +1,4 @@
-"""Tests for skill store CLI module."""
+"""Tests for workflow store CLI module."""
from __future__ import annotations
@@ -7,20 +7,20 @@
import pytest
-from py_code_mode.skills import PythonSkill
+from py_code_mode.workflows import PythonWorkflow
-def _make_skill(name: str, description: str, source: str) -> PythonSkill:
- """Helper to create a PythonSkill."""
+def _make_workflow(name: str, description: str, source: str) -> PythonWorkflow:
+ """Helper to create a PythonWorkflow."""
full_source = f'"""{description}"""\n\n{source}'
- return PythonSkill.from_source(name=name, source=full_source, description=description)
+ return PythonWorkflow.from_source(name=name, source=full_source, description=description)
class TestGetStore:
"""Test _get_store factory function."""
def test_redis_scheme_creates_redis_store(self) -> None:
- """redis:// scheme creates RedisSkillStore."""
+ """redis:// scheme creates RedisWorkflowStore."""
from py_code_mode.cli.store import _get_store
with patch("py_code_mode.cli.store.redis_lib") as mock_redis_lib:
@@ -33,7 +33,7 @@ def test_redis_scheme_creates_redis_store(self) -> None:
assert store is not None
def test_rediss_scheme_creates_redis_store(self) -> None:
- """rediss:// (TLS) scheme creates RedisSkillStore."""
+ """rediss:// (TLS) scheme creates RedisWorkflowStore."""
from py_code_mode.cli.store import _get_store
with patch("py_code_mode.cli.store.redis_lib") as mock_redis_lib:
@@ -66,46 +66,46 @@ def test_cosmos_scheme_not_implemented(self) -> None:
_get_store("cosmos://account.documents.azure.com", prefix="test")
-class TestSkillHash:
- """Test _skill_hash function."""
+class TestWorkflowHash:
+ """Test _workflow_hash function."""
- def test_same_skill_same_hash(self) -> None:
- """Same skill content produces same hash."""
- from py_code_mode.cli.store import _skill_hash
+ def test_same_workflow_same_hash(self) -> None:
+ """Same workflow content produces same hash."""
+ from py_code_mode.cli.store import _workflow_hash
- skill = _make_skill("test", "Test skill", "async def run():\n return 'hello'")
+ workflow = _make_workflow("test", "Test workflow", "async def run():\n return 'hello'")
- hash1 = _skill_hash(skill)
- hash2 = _skill_hash(skill)
+ hash1 = _workflow_hash(workflow)
+ hash2 = _workflow_hash(workflow)
assert hash1 == hash2
def test_different_content_different_hash(self) -> None:
- """Different skill content produces different hash."""
- from py_code_mode.cli.store import _skill_hash
+ """Different workflow content produces different hash."""
+ from py_code_mode.cli.store import _workflow_hash
- skill1 = _make_skill("test", "Desc 1", "async def run(): return 1")
- skill2 = _make_skill("test", "Desc 2", "async def run(): return 1")
+ workflow1 = _make_workflow("test", "Desc 1", "async def run(): return 1")
+ workflow2 = _make_workflow("test", "Desc 2", "async def run(): return 1")
- assert _skill_hash(skill1) != _skill_hash(skill2)
+ assert _workflow_hash(workflow1) != _workflow_hash(workflow2)
def test_hash_is_short(self) -> None:
"""Hash is truncated to 12 characters."""
- from py_code_mode.cli.store import _skill_hash
+ from py_code_mode.cli.store import _workflow_hash
- skill = _make_skill("test", "desc", "async def run(): pass")
- assert len(_skill_hash(skill)) == 12
+ workflow = _make_workflow("test", "desc", "async def run(): pass")
+ assert len(_workflow_hash(workflow)) == 12
class TestBootstrap:
"""Test bootstrap command."""
- def test_bootstrap_loads_skills_from_directory(self, tmp_path: Path) -> None:
- """Bootstrap loads skills from source directory."""
+ def test_bootstrap_loads_workflows_from_directory(self, tmp_path: Path) -> None:
+ """Bootstrap loads workflows from source directory."""
from py_code_mode.cli.store import bootstrap
- # Create test skill
- skill_file = tmp_path / "my_skill.py"
- skill_file.write_text('''"""My test skill."""
+ # Create test workflow
+ workflow_file = tmp_path / "my_workflow.py"
+ workflow_file.write_text('''"""My test workflow."""
async def run(x: int) -> int:
"""Double a number."""
@@ -120,35 +120,35 @@ async def run(x: int) -> int:
count = bootstrap(tmp_path, "redis://localhost", "test-prefix")
assert count == 1
- # Uses batch save when available (RedisSkillStore)
+ # Uses batch save when available (RedisWorkflowStore)
mock_store.save_batch.assert_called_once()
def test_bootstrap_with_clear_removes_existing(self, tmp_path: Path) -> None:
- """Bootstrap with clear=True removes existing skills first."""
+ """Bootstrap with clear=True removes existing workflows first."""
from py_code_mode.cli.store import bootstrap
- # Create test skill
- skill_file = tmp_path / "new_skill.py"
- skill_file.write_text('"""New skill."""\nasync def run() -> str:\n return "new"')
+ # Create test workflow
+ workflow_file = tmp_path / "new_workflow.py"
+ workflow_file.write_text('"""New workflow."""\nasync def run() -> str:\n return "new"')
- # Mock store with existing skill
+ # Mock store with existing workflow
mock_store = MagicMock()
- existing_skill = _make_skill("old_skill", "Old", "async def run(): pass")
- mock_store.list_all.return_value = [existing_skill]
+ existing_workflow = _make_workflow("old_workflow", "Old", "async def run(): pass")
+ mock_store.list_all.return_value = [existing_workflow]
with patch("py_code_mode.cli.store._get_store", return_value=mock_store):
bootstrap(tmp_path, "redis://localhost", "test-prefix", clear=True)
- # Should have deleted old skill
- mock_store.delete.assert_called_once_with("old_skill")
+ # Should have deleted old workflow
+ mock_store.delete.assert_called_once_with("old_workflow")
def test_bootstrap_returns_count(self, tmp_path: Path) -> None:
- """Bootstrap returns number of skills added."""
+ """Bootstrap returns number of workflows added."""
from py_code_mode.cli.store import bootstrap
- # Create multiple skills
- (tmp_path / "skill1.py").write_text('"""S1."""\nasync def run(): return 1')
- (tmp_path / "skill2.py").write_text('"""S2."""\nasync def run(): return 2')
+ # Create multiple workflows
+ (tmp_path / "workflow1.py").write_text('"""S1."""\nasync def run(): return 1')
+ (tmp_path / "workflow2.py").write_text('"""S2."""\nasync def run(): return 2')
mock_store = MagicMock()
mock_store.list_all.return_value = []
@@ -162,26 +162,26 @@ def test_bootstrap_returns_count(self, tmp_path: Path) -> None:
class TestPull:
"""Test pull command."""
- def test_pull_writes_skills_to_files(self, tmp_path: Path) -> None:
- """Pull writes skills to destination directory."""
+ def test_pull_writes_workflows_to_files(self, tmp_path: Path) -> None:
+ """Pull writes workflows to destination directory."""
from py_code_mode.cli.store import pull
dest = tmp_path / "pulled"
- # Mock store with skills
+ # Mock store with workflows
mock_store = MagicMock()
- skill = MagicMock()
- skill.name = "skill1"
- skill.description = "First skill"
- skill.source = '"""First skill."""\nasync def run():\n print("one")'
- mock_store.list_all.return_value = [skill]
+ workflow = MagicMock()
+ workflow.name = "workflow1"
+ workflow.description = "First workflow"
+ workflow.source = '"""First workflow."""\nasync def run():\n print("one")'
+ mock_store.list_all.return_value = [workflow]
with patch("py_code_mode.cli.store._get_store", return_value=mock_store):
count = pull("redis://localhost", "test-prefix", dest)
assert count == 1
assert dest.exists()
- assert (dest / "skill1.py").exists()
+ assert (dest / "workflow1.py").exists()
def test_pull_creates_destination_directory(self, tmp_path: Path) -> None:
"""Pull creates destination directory if it doesn't exist."""
@@ -202,21 +202,21 @@ def test_pull_creates_destination_directory(self, tmp_path: Path) -> None:
class TestDiff:
"""Test diff command."""
- def test_diff_finds_added_skills(self, tmp_path: Path) -> None:
- """Diff identifies skills only in remote (agent-created)."""
+ def test_diff_finds_added_workflows(self, tmp_path: Path) -> None:
+ """Diff identifies workflows only in remote (agent-created)."""
from py_code_mode.cli.store import diff
# Empty local directory
local = tmp_path / "local"
local.mkdir()
- # Remote has a skill
+ # Remote has a workflow
mock_store = MagicMock()
- remote_skill = MagicMock()
- remote_skill.name = "agent_created"
- remote_skill.description = "Created by agent"
- remote_skill.source = '"""Created by agent."""\nasync def run(): pass'
- mock_store.list_all.return_value = [remote_skill]
+ remote_workflow = MagicMock()
+ remote_workflow.name = "agent_created"
+ remote_workflow.description = "Created by agent"
+ remote_workflow.source = '"""Created by agent."""\nasync def run(): pass'
+ mock_store.list_all.return_value = [remote_workflow]
with patch("py_code_mode.cli.store._get_store", return_value=mock_store):
result = diff(local, "redis://localhost", "test-prefix")
@@ -225,14 +225,14 @@ def test_diff_finds_added_skills(self, tmp_path: Path) -> None:
assert len(result["removed"]) == 0
assert len(result["modified"]) == 0
- def test_diff_finds_removed_skills(self, tmp_path: Path) -> None:
- """Diff identifies skills only in local (removed from remote)."""
+ def test_diff_finds_removed_workflows(self, tmp_path: Path) -> None:
+ """Diff identifies workflows only in local (removed from remote)."""
from py_code_mode.cli.store import diff
- # Local has a skill
+ # Local has a workflow
local = tmp_path / "local"
local.mkdir()
- (local / "local_only.py").write_text('"""Local skill."""\nasync def run(): pass')
+ (local / "local_only.py").write_text('"""Local workflow."""\nasync def run(): pass')
# Remote is empty
mock_store = MagicMock()
@@ -244,50 +244,50 @@ def test_diff_finds_removed_skills(self, tmp_path: Path) -> None:
assert "local_only" in result["removed"]
assert len(result["added"]) == 0
- def test_diff_finds_modified_skills(self, tmp_path: Path) -> None:
- """Diff identifies skills with different content."""
+ def test_diff_finds_modified_workflows(self, tmp_path: Path) -> None:
+ """Diff identifies workflows with different content."""
from py_code_mode.cli.store import diff
local = tmp_path / "local"
local.mkdir()
- local_skill = '"""Local version."""\nasync def run(): return "local"'
- (local / "shared_skill.py").write_text(local_skill)
+ local_workflow = '"""Local version."""\nasync def run(): return "local"'
+ (local / "shared_workflow.py").write_text(local_workflow)
# Remote has different version
mock_store = MagicMock()
- remote_skill = MagicMock()
- remote_skill.name = "shared_skill"
- remote_skill.description = "Remote version"
- remote_skill.source = '"""Remote version."""\nasync def run(): return "remote"'
- mock_store.list_all.return_value = [remote_skill]
+ remote_workflow = MagicMock()
+ remote_workflow.name = "shared_workflow"
+ remote_workflow.description = "Remote version"
+ remote_workflow.source = '"""Remote version."""\nasync def run(): return "remote"'
+ mock_store.list_all.return_value = [remote_workflow]
with patch("py_code_mode.cli.store._get_store", return_value=mock_store):
result = diff(local, "redis://localhost", "test-prefix")
- assert "shared_skill" in result["modified"]
+ assert "shared_workflow" in result["modified"]
- def test_diff_finds_unchanged_skills(self, tmp_path: Path) -> None:
- """Diff identifies identical skills."""
+ def test_diff_finds_unchanged_workflows(self, tmp_path: Path) -> None:
+ """Diff identifies identical workflows."""
from py_code_mode.cli.store import diff
- # Local skill
+ # Local workflow
local = tmp_path / "local"
local.mkdir()
- skill_content = '"""Same skill."""\nasync def run(): return "same"'
- (local / "same_skill.py").write_text(skill_content)
+ workflow_content = '"""Same workflow."""\nasync def run(): return "same"'
+ (local / "same_workflow.py").write_text(workflow_content)
# Remote has same content
mock_store = MagicMock()
- remote_skill = MagicMock()
- remote_skill.name = "same_skill"
- remote_skill.description = "Same skill."
- remote_skill.source = skill_content
- mock_store.list_all.return_value = [remote_skill]
+ remote_workflow = MagicMock()
+ remote_workflow.name = "same_workflow"
+ remote_workflow.description = "Same workflow."
+ remote_workflow.source = workflow_content
+ mock_store.list_all.return_value = [remote_workflow]
with patch("py_code_mode.cli.store._get_store", return_value=mock_store):
result = diff(local, "redis://localhost", "test-prefix")
- assert "same_skill" in result["unchanged"]
+ assert "same_workflow" in result["unchanged"]
class TestCLI:
@@ -302,19 +302,19 @@ def test_bootstrap_command_parses_args(self) -> None:
[
"bootstrap",
"--source",
- "/path/to/skills",
+ "/path/to/workflows",
"--target",
"redis://localhost:6379",
"--prefix",
- "my-skills",
+ "my-workflows",
"--clear",
]
)
assert args.command == "bootstrap"
- assert str(args.source) == "/path/to/skills"
+ assert str(args.source) == "/path/to/workflows"
assert args.target == "redis://localhost:6379"
- assert args.prefix == "my-skills"
+ assert args.prefix == "my-workflows"
assert args.clear is True
def test_pull_command_parses_args(self) -> None:
@@ -328,7 +328,7 @@ def test_pull_command_parses_args(self) -> None:
"--target",
"redis://localhost:6379",
"--prefix",
- "my-skills",
+ "my-workflows",
"--dest",
"/path/to/dest",
]
@@ -336,7 +336,7 @@ def test_pull_command_parses_args(self) -> None:
assert args.command == "pull"
assert args.target == "redis://localhost:6379"
- assert args.prefix == "my-skills"
+ assert args.prefix == "my-workflows"
assert str(args.dest) == "/path/to/dest"
def test_diff_command_parses_args(self) -> None:
@@ -348,21 +348,21 @@ def test_diff_command_parses_args(self) -> None:
[
"diff",
"--source",
- "/path/to/skills",
+ "/path/to/workflows",
"--target",
"redis://localhost:6379",
"--prefix",
- "my-skills",
+ "my-workflows",
]
)
assert args.command == "diff"
- assert str(args.source) == "/path/to/skills"
+ assert str(args.source) == "/path/to/workflows"
assert args.target == "redis://localhost:6379"
- assert args.prefix == "my-skills"
+ assert args.prefix == "my-workflows"
def test_default_prefix(self) -> None:
- """Default prefix is 'skills'."""
+ """Default prefix is 'workflows'."""
from py_code_mode.cli.store import create_parser
parser = create_parser()
@@ -376,4 +376,4 @@ def test_default_prefix(self) -> None:
]
)
- assert args.prefix == "skills"
+ assert args.prefix == "workflows"
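The store CLI tests above pin down three operations: `bootstrap()` pushes local workflow files into a remote store, `pull()` writes remote workflows back to disk, and `diff()` buckets names into `added`/`removed`/`modified`/`unchanged` via `_workflow_hash()`. A minimal usage sketch assembled from those assertions (the Redis URL and directory paths are placeholders, and a reachable Redis instance is assumed; the tests themselves mock `_get_store`):

```python
from pathlib import Path

from py_code_mode.cli.store import bootstrap, diff, pull

# Push local workflow files (one .py module per workflow) to the store.
count = bootstrap(Path("./workflows"), "redis://localhost:6379", "my-workflows", clear=False)
print(f"bootstrapped {count} workflows")

# Pull agent-created workflows back into a local directory for review.
pulled = pull("redis://localhost:6379", "my-workflows", Path("./pulled"))

# Compare local sources against the remote store, keyed by content hash.
result = diff(Path("./workflows"), "redis://localhost:6379", "my-workflows")
for bucket in ("added", "removed", "modified", "unchanged"):
    print(bucket, sorted(result[bucket]))
```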
diff --git a/tests/test_subprocess_executor.py b/tests/test_subprocess_executor.py
index 60e0a73..bcbf47d 100644
--- a/tests/test_subprocess_executor.py
+++ b/tests/test_subprocess_executor.py
@@ -1077,7 +1077,7 @@ async def test_start_creates_kernel_that_can_execute(self, tmp_path: Path) -> No
@pytest.mark.asyncio
async def test_start_with_storage_injects_namespaces(self, tmp_path: Path) -> None:
- """start() with storage injects tools, skills, artifacts namespaces."""
+ """start() with storage injects tools, workflows, artifacts namespaces."""
from py_code_mode.execution.subprocess import SubprocessExecutor
from py_code_mode.storage.backends import FileStorage
@@ -1097,7 +1097,7 @@ async def test_start_with_storage_injects_namespaces(self, tmp_path: Path) -> No
result = await executor.run("'tools' in dir()")
assert result.value in (True, "True")
- result = await executor.run("'skills' in dir()")
+ result = await executor.run("'workflows' in dir()")
assert result.value in (True, "True")
result = await executor.run("'artifacts' in dir()")
@@ -1563,7 +1563,7 @@ async def test_reset_clears_user_defined_variables(self, executor) -> None:
@pytest.mark.asyncio
async def test_reset_preserves_injected_namespaces(self, executor) -> None:
- """reset() preserves tools, skills, artifacts namespaces."""
+ """reset() preserves tools, workflows, artifacts namespaces."""
# Verify namespaces exist before reset
result = await executor.run("'tools' in dir()")
assert result.value in (True, "True")
@@ -1574,7 +1574,7 @@ async def test_reset_preserves_injected_namespaces(self, executor) -> None:
result = await executor.run("'tools' in dir()")
assert result.value in (True, "True")
- result = await executor.run("'skills' in dir()")
+ result = await executor.run("'workflows' in dir()")
assert result.value in (True, "True")
result = await executor.run("'artifacts' in dir()")
diff --git a/tests/test_subprocess_namespace_injection.py b/tests/test_subprocess_namespace_injection.py
index e4c5ba4..fd4c8cb 100644
--- a/tests/test_subprocess_namespace_injection.py
+++ b/tests/test_subprocess_namespace_injection.py
@@ -1,10 +1,10 @@
"""Tests for SubprocessExecutor namespace injection with full py-code-mode functionality.
-These tests verify that the SubprocessExecutor properly injects tools, skills, and
+These tests verify that the SubprocessExecutor properly injects tools, workflows, and
artifacts namespaces into the kernel with FULL functionality (not stubs).
Target state: py-code-mode installed in kernel venv, providing real namespace
-implementations with tool invocation, skill creation/invocation, semantic search,
+implementations with tool invocation, workflow creation/invocation, semantic search,
and complete artifact management.
Tests are designed to FAIL with the current stub implementation, then pass once
@@ -108,12 +108,12 @@ class TestE2EUserJourneys:
"""End-to-end tests simulating real agent workflows."""
@pytest.mark.asyncio
- async def test_tool_to_skill_to_artifact_workflow(self, executor_with_storage) -> None:
- """Agent workflow: call tool -> create skill -> save artifact.
+ async def test_tool_to_workflow_to_artifact_workflow(self, executor_with_storage) -> None:
+ """Agent workflow: call tool -> create workflow -> save artifact.
This is the complete agent workflow:
1. Use a tool to get data
- 2. Create a skill that wraps the tool
+ 2. Create a workflow that wraps the tool
3. Save the result as an artifact
4. Load the artifact back
@@ -124,9 +124,9 @@ async def test_tool_to_skill_to_artifact_workflow(self, executor_with_storage) -
assert result.error is None, f"Tool invocation failed: {result.error}"
assert "hello world" in str(result.value) or "hello world" in result.stdout
- # Step 2: Create a skill that uses the tool
- skill_code = '''
-skills.create(
+ # Step 2: Create a workflow that uses the tool
+ workflow_code = '''
+workflows.create(
name="greet",
source="""
async def run(name: str) -> str:
@@ -136,12 +136,12 @@ async def run(name: str) -> str:
description="Greet someone by name"
)
'''
- result = await executor_with_storage.run(skill_code)
- assert result.error is None, f"Skill creation failed: {result.error}"
+ result = await executor_with_storage.run(workflow_code)
+ assert result.error is None, f"Workflow creation failed: {result.error}"
- # Step 3: Invoke the skill
- result = await executor_with_storage.run('skills.invoke("greet", name="Alice")')
- assert result.error is None, f"Skill invocation failed: {result.error}"
+ # Step 3: Invoke the workflow
+ result = await executor_with_storage.run('workflows.invoke("greet", name="Alice")')
+ assert result.error is None, f"Workflow invocation failed: {result.error}"
# Step 4: Save result as artifact
result = await executor_with_storage.run(
@@ -155,14 +155,14 @@ async def run(name: str) -> str:
assert "Alice" in str(result.value)
@pytest.mark.asyncio
- async def test_skill_uses_tools_namespace_internally(self, executor_with_storage) -> None:
- """Skills can access tools namespace when invoked.
+ async def test_workflow_uses_tools_namespace_internally(self, executor_with_storage) -> None:
+ """Workflows can access tools namespace when invoked.
- Breaks when: Skills don't receive injected namespaces during execution.
+ Breaks when: Workflows don't receive injected namespaces during execution.
"""
- # Create skill that uses tools internally
- skill_code = '''
-skills.create(
+ # Create workflow that uses tools internally
+ workflow_code = '''
+workflows.create(
name="echo_wrapper",
source="""
async def run(message: str) -> str:
@@ -171,14 +171,14 @@ async def run(message: str) -> str:
description="Wrapper around echo tool"
)
'''
- result = await executor_with_storage.run(skill_code)
- assert result.error is None, f"Skill creation failed: {result.error}"
+ result = await executor_with_storage.run(workflow_code)
+ assert result.error is None, f"Workflow creation failed: {result.error}"
- # Invoke skill - it should have access to tools namespace
+ # Invoke workflow - it should have access to tools namespace
result = await executor_with_storage.run(
- 'skills.invoke("echo_wrapper", message="test message")'
+ 'workflows.invoke("echo_wrapper", message="test message")'
)
- assert result.error is None, f"Skill invocation failed: {result.error}"
+ assert result.error is None, f"Workflow invocation failed: {result.error}"
assert "test message" in str(result.value) or "test message" in result.stdout
@@ -243,46 +243,46 @@ async def test_tool_escape_hatch_invocation(self, executor_with_storage) -> None
# =============================================================================
-# Contract Tests - Skills Namespace
+# Contract Tests - Workflows Namespace
# =============================================================================
@pytest.mark.slow
@pytest.mark.xdist_group("subprocess")
-class TestSkillsNamespaceContract:
- """Contract tests for skills namespace API."""
+class TestWorkflowsNamespaceContract:
+ """Contract tests for workflows namespace API."""
@pytest.mark.asyncio
- async def test_skills_list_returns_skill_info(self, executor_empty_storage) -> None:
- """skills.list() returns list of skill metadata.
+ async def test_workflows_list_returns_workflow_info(self, executor_empty_storage) -> None:
+ """workflows.list() returns list of workflow metadata.
Breaks when: Returns raw file names without metadata.
"""
- # Create a skill first
+ # Create a workflow first
create_code = """
-skills.create(
+workflows.create(
name="add",
source="async def run(a: int, b: int) -> int: return a + b",
description="Add two numbers"
)
"""
result = await executor_empty_storage.run(create_code)
- assert result.error is None, f"Skill creation failed: {result.error}"
+ assert result.error is None, f"Workflow creation failed: {result.error}"
- # List should include the skill
- result = await executor_empty_storage.run("skills.list()")
- assert result.error is None, f"skills.list() failed: {result.error}"
+ # List should include the workflow
+ result = await executor_empty_storage.run("workflows.list()")
+ assert result.error is None, f"workflows.list() failed: {result.error}"
assert "add" in str(result.value)
@pytest.mark.asyncio
- async def test_skills_search_semantic(self, executor_empty_storage) -> None:
- """skills.search(query) performs semantic search.
+ async def test_workflows_search_semantic(self, executor_empty_storage) -> None:
+ """workflows.search(query) performs semantic search.
Breaks when: Only name matching, semantic search not working.
"""
- # Create skill with descriptive purpose
+ # Create workflow with descriptive purpose
create_code = """
-skills.create(
+workflows.create(
name="calculate_sum",
source="async def run(numbers: list) -> int: return sum(numbers)",
description="Calculate the total of a list of numbers"
@@ -292,44 +292,44 @@ async def test_skills_search_semantic(self, executor_empty_storage) -> None:
assert result.error is None
# Search by semantic meaning (not exact name match)
- result = await executor_empty_storage.run('skills.search("add numbers together")')
- assert result.error is None, f"skills.search() failed: {result.error}"
+ result = await executor_empty_storage.run('workflows.search("add numbers together")')
+ assert result.error is None, f"workflows.search() failed: {result.error}"
# Should find calculate_sum based on semantic similarity
assert "calculate_sum" in str(result.value) or len(str(result.value)) > 2
@pytest.mark.asyncio
- async def test_skills_create_persists(self, executor_empty_storage) -> None:
- """skills.create() persists skill to store.
+ async def test_workflows_create_persists(self, executor_empty_storage) -> None:
+ """workflows.create() persists workflow to store.
- Breaks when: Skill not saved, lost on next list().
+ Breaks when: Workflow not saved, lost on next list().
"""
create_code = """
-skills.create(
+workflows.create(
name="multiply",
source="async def run(a: int, b: int) -> int: return a * b",
description="Multiply two numbers"
)
"""
result = await executor_empty_storage.run(create_code)
- assert result.error is None, f"Skill creation failed: {result.error}"
+ assert result.error is None, f"Workflow creation failed: {result.error}"
- # Skill should appear in list
+ # Workflow should appear in list
result = await executor_empty_storage.run(
- "'multiply' in [s['name'] if isinstance(s, dict) else s.name for s in skills.list()]"
+ "'multiply' in [s['name'] if isinstance(s, dict) else s.name for s in workflows.list()]"
)
assert result.error is None
# Accept True as bool or string
- assert result.value in (True, "True"), f"Skill not found in list: {result.value}"
+ assert result.value in (True, "True"), f"Workflow not found in list: {result.value}"
@pytest.mark.asyncio
- async def test_skills_invoke_executes_skill(self, executor_empty_storage) -> None:
- """skills.invoke(name, **kwargs) runs skill and returns result.
+ async def test_workflows_invoke_executes_workflow(self, executor_empty_storage) -> None:
+ """workflows.invoke(name, **kwargs) runs workflow and returns result.
Breaks when: Invocation fails, wrong args, execution error.
"""
- # Create skill
+ # Create workflow
create_code = """
-skills.create(
+workflows.create(
name="square",
source="async def run(n: int) -> int: return n * n",
description="Square a number"
@@ -338,20 +338,20 @@ async def test_skills_invoke_executes_skill(self, executor_empty_storage) -> Non
result = await executor_empty_storage.run(create_code)
assert result.error is None
- # Invoke skill
- result = await executor_empty_storage.run('skills.invoke("square", n=5)')
- assert result.error is None, f"Skill invocation failed: {result.error}"
+ # Invoke workflow
+ result = await executor_empty_storage.run('workflows.invoke("square", n=5)')
+ assert result.error is None, f"Workflow invocation failed: {result.error}"
assert result.value in (25, "25"), f"Wrong result: {result.value}"
@pytest.mark.asyncio
- async def test_skills_attribute_access_invocation(self, executor_empty_storage) -> None:
- """skills.(**kwargs) provides attribute-based invocation.
+ async def test_workflows_attribute_access_invocation(self, executor_empty_storage) -> None:
+ """workflows.(**kwargs) provides attribute-based invocation.
Breaks when: Attribute access not supported.
"""
- # Create skill
+ # Create workflow
create_code = """
-skills.create(
+workflows.create(
name="triple",
source="async def run(n: int) -> int: return n * 3",
description="Triple a number"
@@ -361,24 +361,26 @@ async def test_skills_attribute_access_invocation(self, executor_empty_storage)
assert result.error is None
# Invoke via attribute access
- result = await executor_empty_storage.run("skills.triple(n=4)")
+ result = await executor_empty_storage.run("workflows.triple(n=4)")
assert result.error is None, f"Attribute invocation failed: {result.error}"
assert result.value in (12, "12"), f"Wrong result: {result.value}"
@pytest.mark.asyncio
- async def test_skills_invoke_uses_runtime_installed_dep(self, executor_empty_storage) -> None:
- """skills.invoke() can use packages installed at runtime via deps.add().
+ async def test_workflows_invoke_uses_runtime_installed_dep(
+ self, executor_empty_storage
+ ) -> None:
+ """workflows.invoke() can use packages installed at runtime via deps.add().
- User story: Agent creates a skill that needs a package, installs it via
- deps.add(), then invokes the skill - all in the same session.
+ User story: Agent creates a workflow that needs a package, installs it via
+ deps.add(), then invokes the workflow - all in the same session.
- Breaks when: Skill execution happens in host process instead of kernel,
+ Breaks when: Workflow execution happens in host process instead of kernel,
or import caches not invalidated after package install.
"""
- # 1. Create a skill that uses a package we'll install at runtime
+ # 1. Create a workflow that uses a package we'll install at runtime
# Using 'art' package - small, pure Python, unlikely to be pre-installed
create_code = """
-skills.create(
+workflows.create(
name="ascii_art_test",
source='''
async def run(text: str) -> str:
@@ -389,10 +391,10 @@ async def run(text: str) -> str:
)
"""
result = await executor_empty_storage.run(create_code)
- assert result.error is None, f"Skill creation failed: {result.error}"
+ assert result.error is None, f"Workflow creation failed: {result.error}"
- # 2. Skill invoke should FAIL before installing the dep
- result = await executor_empty_storage.run('skills.invoke("ascii_art_test", text="hi")')
+ # 2. Workflow invoke should FAIL before installing the dep
+ result = await executor_empty_storage.run('workflows.invoke("ascii_art_test", text="hi")')
assert result.error is not None, "Expected ModuleNotFoundError before install"
assert "ModuleNotFoundError" in result.error or "No module named" in result.error
@@ -400,11 +402,11 @@ async def run(text: str) -> str:
result = await executor_empty_storage.run('deps.add("art")')
assert result.error is None, f"deps.add failed: {result.error}"
- # 4. Skill invoke should SUCCEED after installing the dep
- result = await executor_empty_storage.run('skills.invoke("ascii_art_test", text="hi")')
+ # 4. Workflow invoke should SUCCEED after installing the dep
+ result = await executor_empty_storage.run('workflows.invoke("ascii_art_test", text="hi")')
assert result.error is None, (
- f"Skill invoke failed after deps.add: {result.error}. "
- "This indicates skills are executing in host process instead of kernel, "
+ f"Workflow invoke failed after deps.add: {result.error}. "
+ "This indicates workflows are executing in host process instead of kernel, "
"or import caches not invalidated after package install."
)
# art.text2art returns multi-line ASCII art
@@ -537,9 +539,9 @@ async def test_namespace_state_persists_between_runs(self, executor_empty_storag
Breaks when: Kernel resets between runs, state lost.
"""
- # Create skill in first run
+ # Create workflow in first run
create_code = """
-skills.create(
+workflows.create(
name="counter",
source="async def run(): return 'counted'",
description="Simple counter"
@@ -554,7 +556,7 @@ async def test_namespace_state_persists_between_runs(self, executor_empty_storag
# Third run should see both
result = await executor_empty_storage.run(
- "'counter' in str(skills.list()) and 'state_test' in str(artifacts.list())"
+ "'counter' in str(workflows.list()) and 'state_test' in str(artifacts.list())"
)
assert result.error is None
assert result.value in (True, "True")
@@ -579,7 +581,7 @@ async def test_namespace_state_preserved_after_reset(self, tmp_path: Path) -> No
await executor.start(storage=storage)
await executor.run(
- "skills.create("
+ "workflows.create("
'name="persist", '
'source="async def run(): return 1", '
'description="test")'
@@ -590,11 +592,11 @@ async def test_namespace_state_preserved_after_reset(self, tmp_path: Path) -> No
await executor.reset()
# Namespaces should still be accessible
- result = await executor.run("'tools' in dir() and 'skills' in dir()")
+ result = await executor.run("'tools' in dir() and 'workflows' in dir()")
assert result.value in (True, "True")
# Persisted data should still be there (it's in storage, not kernel memory)
- result = await executor.run("'persist' in str(skills.list())")
+ result = await executor.run("'persist' in str(workflows.list())")
assert result.value in (True, "True")
result = await executor.run("'persist_artifact' in str(artifacts.list())")
@@ -632,7 +634,7 @@ async def test_namespaces_available_immediately_after_start(
# Immediately check all namespaces
result = await executor.run(
- "'tools' in dir() and 'skills' in dir() and 'artifacts' in dir()"
+ "'tools' in dir() and 'workflows' in dir() and 'artifacts' in dir()"
)
assert result.error is None
assert result.value in (True, "True")
@@ -697,27 +699,29 @@ async def test_loading_nonexistent_artifact_raises_error(self, executor_empty_st
assert result.value is None or result.value == "None"
@pytest.mark.asyncio
- async def test_invoking_nonexistent_skill_raises_error(self, executor_empty_storage) -> None:
- """Invoking nonexistent skill raises error.
+ async def test_invoking_nonexistent_workflow_raises_error(self, executor_empty_storage) -> None:
+ """Invoking nonexistent workflow raises error.
Breaks when: Silent failure, returns None.
"""
- result = await executor_empty_storage.run('skills.invoke("skill_that_does_not_exist")')
+ result = await executor_empty_storage.run(
+ 'workflows.invoke("workflow_that_does_not_exist")'
+ )
assert result.error is not None
@pytest.mark.asyncio
- async def test_creating_skill_with_invalid_source_raises_error(
+ async def test_creating_workflow_with_invalid_source_raises_error(
self, executor_empty_storage
) -> None:
- """Creating skill with invalid Python source raises error.
+ """Creating workflow with invalid Python source raises error.
- Breaks when: Invalid skill saved, error only at invoke time.
+ Breaks when: Invalid workflow saved, error only at invoke time.
"""
result = await executor_empty_storage.run(
- """skills.create(
+ """workflows.create(
name="broken",
source="async def run( this is not valid python {{{{",
- description="Broken skill"
+ description="Broken workflow"
)"""
)
# Should fail with SyntaxError during creation, not silently save
@@ -754,7 +758,7 @@ def test_redis_storage_generates_code(self) -> None:
# NOTE: tools_prefix and deps_prefix removed - tools/deps now owned by executors
storage_access = RedisStorageAccess(
redis_url="redis://localhost:6379",
- skills_prefix="test:skills",
+ workflows_prefix="test:workflows",
artifacts_prefix="test:artifacts",
)
code = build_namespace_setup_code(storage_access)
@@ -773,7 +777,7 @@ def test_redis_storage_code_is_valid_python(self) -> None:
# NOTE: tools_prefix and deps_prefix removed - tools/deps now owned by executors
storage_access = RedisStorageAccess(
redis_url="redis://localhost:6379/0",
- skills_prefix="myapp:skills",
+ workflows_prefix="myapp:workflows",
artifacts_prefix="myapp:artifacts",
)
code = build_namespace_setup_code(storage_access)
@@ -797,7 +801,7 @@ def test_redis_storage_code_imports_redis(self) -> None:
# NOTE: tools_prefix and deps_prefix removed - tools/deps now owned by executors
storage_access = RedisStorageAccess(
redis_url="redis://localhost:6379",
- skills_prefix="test:skills",
+ workflows_prefix="test:workflows",
artifacts_prefix="test:artifacts",
)
code = build_namespace_setup_code(storage_access)
@@ -820,7 +824,7 @@ def test_redis_storage_code_uses_provided_url(self) -> None:
test_url = "redis://testhost:12345/7"
storage_access = RedisStorageAccess(
redis_url=test_url,
- skills_prefix="test:skills",
+ workflows_prefix="test:workflows",
artifacts_prefix="test:artifacts",
)
code = build_namespace_setup_code(storage_access)
@@ -831,7 +835,7 @@ def test_redis_storage_code_uses_provided_prefixes(self) -> None:
"""Generated code should use the exact prefixes provided.
Breaks when: Code hardcodes prefixes or doesn't properly inject
- the provided prefix values for skills and artifacts.
+ the provided prefix values for workflows and artifacts.
NOTE: tools_prefix removed - tools now owned by executors.
"""
@@ -840,19 +844,19 @@ def test_redis_storage_code_uses_provided_prefixes(self) -> None:
# Use distinctive prefixes that are easy to find
# NOTE: tools_prefix removed - tools now owned by executors
- skills_prefix = "unique_app_v1:skills"
+ workflows_prefix = "unique_app_v1:workflows"
artifacts_prefix = "unique_app_v1:artifacts"
storage_access = RedisStorageAccess(
redis_url="redis://localhost:6379",
- skills_prefix=skills_prefix,
+ workflows_prefix=workflows_prefix,
artifacts_prefix=artifacts_prefix,
)
code = build_namespace_setup_code(storage_access)
assert code, "Code must be generated first"
# NOTE: tools_prefix assertion removed - tools now owned by executors
- assert skills_prefix in code, (
- f"Generated code should contain skills_prefix: {skills_prefix}"
+ assert workflows_prefix in code, (
+ f"Generated code should contain workflows_prefix: {workflows_prefix}"
)
assert artifacts_prefix in code, (
f"Generated code should contain artifacts_prefix: {artifacts_prefix}"
@@ -870,7 +874,7 @@ def test_redis_storage_code_sets_up_tools(self) -> None:
storage_access = RedisStorageAccess(
redis_url="redis://localhost:6379",
- skills_prefix="test:skills",
+ workflows_prefix="test:workflows",
artifacts_prefix="test:artifacts",
)
code = build_namespace_setup_code(storage_access)
@@ -881,26 +885,28 @@ def test_redis_storage_code_sets_up_tools(self) -> None:
# Tools are now owned by executor, not storage - empty registry is created
assert "ToolRegistry()" in code, "Generated code should create empty ToolRegistry"
- def test_redis_storage_code_sets_up_skills(self) -> None:
- """Generated code should set up skills namespace with RedisSkillStore.
+ def test_redis_storage_code_sets_up_workflows(self) -> None:
+ """Generated code should set up workflows namespace with RedisWorkflowStore.
- Breaks when: Skills namespace not created, or uses wrong store type
- (FileSkillStore instead of RedisSkillStore).
+ Breaks when: Workflows namespace not created, or uses wrong store type
+ (FileWorkflowStore instead of RedisWorkflowStore).
"""
from py_code_mode.execution.protocol import RedisStorageAccess
from py_code_mode.execution.subprocess.namespace import build_namespace_setup_code
storage_access = RedisStorageAccess(
redis_url="redis://localhost:6379",
- skills_prefix="test:skills",
+ workflows_prefix="test:workflows",
artifacts_prefix="test:artifacts",
)
code = build_namespace_setup_code(storage_access)
assert code, "Code must be generated first"
- assert "skills = " in code or "skills=" in code, (
- "Generated code should assign skills namespace"
+ assert "workflows = " in code or "workflows=" in code, (
+ "Generated code should assign workflows namespace"
+ )
+ assert "RedisWorkflowStore" in code, (
+ "Generated code should use RedisWorkflowStore for workflows"
)
- assert "RedisSkillStore" in code, "Generated code should use RedisSkillStore for skills"
def test_redis_storage_code_sets_up_artifacts(self) -> None:
"""Generated code should set up artifacts namespace with RedisArtifactStore.
@@ -913,7 +919,7 @@ def test_redis_storage_code_sets_up_artifacts(self) -> None:
storage_access = RedisStorageAccess(
redis_url="redis://localhost:6379",
- skills_prefix="test:skills",
+ workflows_prefix="test:workflows",
artifacts_prefix="test:artifacts",
)
code = build_namespace_setup_code(storage_access)
@@ -976,7 +982,7 @@ def test_redis_code_uses_from_url_pattern(self) -> None:
storage_access = RedisStorageAccess(
redis_url="redis://localhost:6379",
- skills_prefix="test:skills",
+ workflows_prefix="test:workflows",
artifacts_prefix="test:artifacts",
)
code = build_namespace_setup_code(storage_access)
@@ -996,7 +1002,7 @@ def test_redis_code_handles_nest_asyncio(self) -> None:
storage_access = RedisStorageAccess(
redis_url="redis://localhost:6379",
- skills_prefix="test:skills",
+ workflows_prefix="test:workflows",
artifacts_prefix="test:artifacts",
)
code = build_namespace_setup_code(storage_access)
@@ -1015,36 +1021,37 @@ def test_redis_code_imports_cli_adapter(self) -> None:
storage_access = RedisStorageAccess(
redis_url="redis://localhost:6379",
- skills_prefix="test:skills",
+ workflows_prefix="test:workflows",
artifacts_prefix="test:artifacts",
)
code = build_namespace_setup_code(storage_access)
assert code, "Code must be generated first"
assert "CLIAdapter" in code, "Generated code should use CLIAdapter for tool execution"
- def test_redis_code_imports_skill_library(self) -> None:
- """Generated code should import create_skill_library for semantic search.
+ def test_redis_code_imports_workflow_library(self) -> None:
+ """Generated code should import create_workflow_library for semantic search.
- Breaks when: Skill semantic search doesn't work because library not created.
+ Breaks when: Workflow semantic search doesn't work because library not created.
"""
from py_code_mode.execution.protocol import RedisStorageAccess
from py_code_mode.execution.subprocess.namespace import build_namespace_setup_code
storage_access = RedisStorageAccess(
redis_url="redis://localhost:6379",
- skills_prefix="test:skills",
+ workflows_prefix="test:workflows",
artifacts_prefix="test:artifacts",
)
code = build_namespace_setup_code(storage_access)
assert code, "Code must be generated first"
- assert "create_skill_library" in code or "SkillLibrary" in code, (
- "Generated code should use create_skill_library or SkillLibrary for semantic search"
+ assert "create_workflow_library" in code or "WorkflowLibrary" in code, (
+ "Generated code should use create_workflow_library or WorkflowLibrary "
+ "for semantic search"
)
- def test_redis_code_wires_skills_namespace_with_tools(self) -> None:
- """Generated code should wire skills namespace so skills can access tools.
+ def test_redis_code_wires_workflows_namespace_with_tools(self) -> None:
+ """Generated code should wire workflows namespace so workflows can access tools.
- Breaks when: Skills that internally call tools.* fail with NameError
+ Breaks when: Workflows that internally call tools.* fail with NameError
because namespace dict wasn't properly wired.
"""
from py_code_mode.execution.protocol import RedisStorageAccess
@@ -1052,21 +1059,21 @@ def test_redis_code_wires_skills_namespace_with_tools(self) -> None:
storage_access = RedisStorageAccess(
redis_url="redis://localhost:6379",
- skills_prefix="test:skills",
+ workflows_prefix="test:workflows",
artifacts_prefix="test:artifacts",
)
code = build_namespace_setup_code(storage_access)
assert code, "Code must be generated first"
# Look for namespace dict wiring pattern (like FileStorage does)
- assert "SkillsNamespace" in code, "Generated code should create SkillsNamespace"
- # The namespace dict should wire tools into skills
+ assert "WorkflowsNamespace" in code, "Generated code should create WorkflowsNamespace"
+ # The namespace dict should wire tools into workflows
has_wiring = (
'"tools"' in code
or "'tools'" in code
or "namespace" in code.lower()
or "_ns_dict" in code
)
- assert has_wiring, "Generated code should wire tools into skills namespace"
+ assert has_wiring, "Generated code should wire tools into workflows namespace"
# =============================================================================
@@ -1102,7 +1109,7 @@ async def test_redis_namespace_full_execution(
"""Full E2E test: SubprocessExecutor with RedisStorage.
Breaks when: Generated code fails to execute, namespaces aren't
- accessible, or tools/skills/artifacts don't work.
+ accessible, or tools/workflows/artifacts don't work.
"""
from py_code_mode.execution.subprocess import SubprocessExecutor
from py_code_mode.execution.subprocess.config import SubprocessConfig
@@ -1118,7 +1125,7 @@ async def test_redis_namespace_full_execution(
# Verify namespaces are accessible
result = await executor.run(
- "'tools' in dir() and 'skills' in dir() and 'artifacts' in dir()"
+ "'tools' in dir() and 'workflows' in dir() and 'artifacts' in dir()"
)
assert result.error is None, f"Namespace check failed: {result.error}"
assert result.value in (True, "True"), "Namespaces should be available"
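Read together, the contract tests above define the in-kernel `workflows` API: `create()` validates and persists source (rejecting syntax errors at creation time), `list()` returns metadata, `search()` is semantic over descriptions, and invocation works both through `invoke()` and attribute access. The following is the kind of code an agent would run inside the kernel, a sketch assembled from the assertions above rather than a verbatim excerpt:

```python
# Runs inside the kernel, where `workflows` is an injected global.
workflows.create(
    name="square",
    source="async def run(n: int) -> int: return n * n",
    description="Square a number",
)

assert "square" in str(workflows.list())

workflows.invoke("square", n=5)                  # explicit invocation -> 25
workflows.square(n=4)                            # attribute-style invocation -> 16
workflows.search("multiply a number by itself")  # semantic search over descriptions
```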
diff --git a/tests/test_subprocess_rpc.py b/tests/test_subprocess_rpc.py
index 647189d..f94babb 100644
--- a/tests/test_subprocess_rpc.py
+++ b/tests/test_subprocess_rpc.py
@@ -74,14 +74,14 @@ def test_from_dict_deserializes_correctly(self) -> None:
data = {
"type": "rpc_request",
"id": "test-id",
- "method": "skills.invoke",
- "params": {"name": "my_skill", "args": {}},
+ "method": "workflows.invoke",
+ "params": {"name": "my_workflow", "args": {}},
}
request = RPCRequest.from_dict(data)
assert request.id == "test-id"
- assert request.method == "skills.invoke"
- assert request.params == {"name": "my_skill", "args": {}}
+ assert request.method == "workflows.invoke"
+ assert request.params == {"name": "my_workflow", "args": {}}
def test_from_dict_with_missing_params_defaults_to_empty(self) -> None:
"""from_dict defaults params to empty dict if missing."""
@@ -249,9 +249,9 @@ def test_kernel_init_code_defines_tools_proxy(self) -> None:
"""KERNEL_INIT_CODE defines ToolsProxy class."""
assert "class ToolsProxy" in KERNEL_INIT_CODE
- def test_kernel_init_code_defines_skills_proxy(self) -> None:
- """KERNEL_INIT_CODE defines SkillsProxy class."""
- assert "class SkillsProxy" in KERNEL_INIT_CODE
+ def test_kernel_init_code_defines_workflows_proxy(self) -> None:
+ """KERNEL_INIT_CODE defines WorkflowsProxy class."""
+ assert "class WorkflowsProxy" in KERNEL_INIT_CODE
def test_kernel_init_code_defines_artifacts_proxy(self) -> None:
"""KERNEL_INIT_CODE defines ArtifactsProxy class."""
@@ -264,7 +264,7 @@ def test_kernel_init_code_defines_deps_proxy(self) -> None:
def test_kernel_init_code_creates_proxy_instances(self) -> None:
"""KERNEL_INIT_CODE creates proxy instances as globals."""
assert "tools = ToolsProxy()" in KERNEL_INIT_CODE
- assert "skills = SkillsProxy()" in KERNEL_INIT_CODE
+ assert "workflows = WorkflowsProxy()" in KERNEL_INIT_CODE
assert "artifacts = ArtifactsProxy()" in KERNEL_INIT_CODE
assert "deps = DepsProxy()" in KERNEL_INIT_CODE
@@ -321,12 +321,12 @@ def test_mock_provider_is_resource_provider(self) -> None:
mock.list_tools = AsyncMock(return_value=[])
mock.search_tools = AsyncMock(return_value=[])
mock.list_tool_recipes = AsyncMock(return_value=[])
- mock.invoke_skill = AsyncMock(return_value="result")
- mock.search_skills = AsyncMock(return_value=[])
- mock.list_skills = AsyncMock(return_value=[])
- mock.get_skill = AsyncMock(return_value=None)
- mock.create_skill = AsyncMock(return_value={})
- mock.delete_skill = AsyncMock(return_value=True)
+ mock.invoke_workflow = AsyncMock(return_value="result")
+ mock.search_workflows = AsyncMock(return_value=[])
+ mock.list_workflows = AsyncMock(return_value=[])
+ mock.get_workflow = AsyncMock(return_value=None)
+ mock.create_workflow = AsyncMock(return_value={})
+ mock.delete_workflow = AsyncMock(return_value=True)
mock.load_artifact = AsyncMock(return_value="data")
mock.save_artifact = AsyncMock(return_value={})
mock.list_artifacts = AsyncMock(return_value=[])
@@ -358,12 +358,12 @@ def mock_provider(self) -> MagicMock:
provider.list_tools = AsyncMock(return_value=[{"name": "curl"}])
provider.search_tools = AsyncMock(return_value=[{"name": "curl"}])
provider.list_tool_recipes = AsyncMock(return_value=[{"name": "get"}])
- # Note: No invoke_skill - skills execute locally in kernel, not via RPC
- provider.search_skills = AsyncMock(return_value=[{"name": "my_skill"}])
- provider.list_skills = AsyncMock(return_value=[])
- provider.get_skill = AsyncMock(return_value={"name": "test"})
- provider.create_skill = AsyncMock(return_value={"name": "new_skill"})
- provider.delete_skill = AsyncMock(return_value=True)
+ # Note: No invoke_workflow - workflows execute locally in kernel, not via RPC
+ provider.search_workflows = AsyncMock(return_value=[{"name": "my_workflow"}])
+ provider.list_workflows = AsyncMock(return_value=[])
+ provider.get_workflow = AsyncMock(return_value={"name": "test"})
+ provider.create_workflow = AsyncMock(return_value={"name": "new_workflow"})
+ provider.delete_workflow = AsyncMock(return_value=True)
provider.load_artifact = AsyncMock(return_value="artifact_data")
provider.save_artifact = AsyncMock(return_value={"name": "saved"})
provider.list_artifacts = AsyncMock(return_value=[])
@@ -402,8 +402,8 @@ async def test_dispatch_tools_list(self, mock_provider: MagicMock) -> None:
assert result == [{"name": "curl"}]
mock_provider.list_tools.assert_called_once()
- # Note: skills.invoke is NOT an RPC method - skills execute locally in kernel
- # after fetching source via skills.get. See test_dispatch_skills_get instead.
+ # Note: workflows.invoke is NOT an RPC method - workflows execute locally in kernel
+ # after fetching source via workflows.get. See test_dispatch_workflows_get instead.
@pytest.mark.asyncio
async def test_dispatch_artifacts_load(self, mock_provider: MagicMock) -> None:
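The RPC tests pin the wire format: a request is a plain dict with `type`, `id`, `method`, and `params`, and `RPCRequest.from_dict()` tolerates a missing `params`. A round-trip sketch follows; the import path is an assumption based on the executor layout, and note that `workflows.invoke` is deliberately not an RPC method, since workflows execute locally in the kernel after fetching source via `workflows.get`:

```python
from py_code_mode.execution.subprocess.rpc import RPCRequest  # assumed module path

data = {
    "type": "rpc_request",
    "id": "req-1",
    "method": "workflows.get",  # fetch source only; execution stays in the kernel
    "params": {"name": "my_workflow"},
}
request = RPCRequest.from_dict(data)
assert request.method == "workflows.get"
assert request.params == {"name": "my_workflow"}

# params defaults to an empty dict when absent.
bare = RPCRequest.from_dict({"type": "rpc_request", "id": "req-2", "method": "workflows.get"})
assert bare.params == {}
```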
diff --git a/tests/test_subprocess_vector_store.py b/tests/test_subprocess_vector_store.py
index 11da20f..1f4346f 100644
--- a/tests/test_subprocess_vector_store.py
+++ b/tests/test_subprocess_vector_store.py
@@ -6,14 +6,14 @@
vector stores, generating code that:
1. Imports ChromaVectorStore when vectors_path provided
2. Creates vector store in kernel
-3. Passes vector_store to create_skill_library()
+3. Passes vector_store to create_workflow_library()
4. Gracefully falls back when chromadb not available
TDD RED phase: These tests define the interface before implementation.
They will fail until:
1. build_namespace_setup_code() handles vectors_path
2. Generated code imports ChromaVectorStore
-3. Generated code passes vector_store to create_skill_library()
+3. Generated code passes vector_store to create_workflow_library()
4. Generated code handles ImportError for chromadb
"""
@@ -40,7 +40,7 @@ def test_generated_code_imports_chroma_when_vectors_path_provided(self) -> None:
Breaks when: Code doesn't import ChromaVectorStore despite vectors_path.
"""
storage_access = FileStorageAccess(
- skills_path=Path("/app/skills"),
+ workflows_path=Path("/app/workflows"),
artifacts_path=Path("/app/artifacts"),
vectors_path=Path("/app/vectors"), # Present
)
@@ -59,7 +59,7 @@ def test_generated_code_creates_chroma_vector_store(self) -> None:
"""
vectors_path = Path("/test/vectors")
storage_access = FileStorageAccess(
- skills_path=Path("/test/skills"),
+ workflows_path=Path("/test/workflows"),
artifacts_path=Path("/test/artifacts"),
vectors_path=vectors_path,
)
@@ -71,21 +71,21 @@ def test_generated_code_creates_chroma_vector_store(self) -> None:
# Should use the provided path
assert str(vectors_path) in code or repr(str(vectors_path)) in code
- def test_generated_code_passes_vector_store_to_create_skill_library(self) -> None:
- """Generated code passes vector_store to create_skill_library().
+ def test_generated_code_passes_vector_store_to_create_workflow_library(self) -> None:
+ """Generated code passes vector_store to create_workflow_library().
- Breaks when: create_skill_library() called without vector_store parameter.
+ Breaks when: create_workflow_library() called without vector_store parameter.
"""
storage_access = FileStorageAccess(
- skills_path=Path("/app/skills"),
+ workflows_path=Path("/app/workflows"),
artifacts_path=Path("/app/artifacts"),
vectors_path=Path("/app/vectors"),
)
code = build_namespace_setup_code(storage_access)
- # Should pass vector_store to create_skill_library
- assert "create_skill_library" in code
+ # Should pass vector_store to create_workflow_library
+ assert "create_workflow_library" in code
assert "vector_store" in code
def test_generated_code_creates_embedder_for_vector_store(self) -> None:
@@ -94,7 +94,7 @@ def test_generated_code_creates_embedder_for_vector_store(self) -> None:
Breaks when: ChromaVectorStore created without embedder.
"""
storage_access = FileStorageAccess(
- skills_path=Path("/app/skills"),
+ workflows_path=Path("/app/workflows"),
artifacts_path=Path("/app/artifacts"),
vectors_path=Path("/app/vectors"),
)
@@ -122,7 +122,7 @@ def test_generated_code_handles_chromadb_import_error(self) -> None:
Breaks when: ImportError crashes kernel instead of falling back.
"""
storage_access = FileStorageAccess(
- skills_path=Path("/app/skills"),
+ workflows_path=Path("/app/workflows"),
artifacts_path=Path("/app/artifacts"),
vectors_path=Path("/app/vectors"),
)
@@ -135,11 +135,11 @@ def test_generated_code_handles_chromadb_import_error(self) -> None:
def test_generated_code_sets_vector_store_none_on_import_error(self) -> None:
"""Generated code sets vector_store=None when chromadb unavailable.
- Breaks when: create_skill_library() called without vector_store parameter
+ Breaks when: create_workflow_library() called without vector_store parameter
on fallback path.
"""
storage_access = FileStorageAccess(
- skills_path=Path("/app/skills"),
+ workflows_path=Path("/app/workflows"),
artifacts_path=Path("/app/artifacts"),
vectors_path=Path("/app/vectors"),
)
@@ -156,7 +156,7 @@ def test_generated_code_compiles_without_syntax_errors(self) -> None:
Breaks when: Code generation has syntax errors.
"""
storage_access = FileStorageAccess(
- skills_path=Path("/app/skills"),
+ workflows_path=Path("/app/workflows"),
artifacts_path=Path("/app/artifacts"),
vectors_path=Path("/app/vectors"),
)
@@ -184,7 +184,7 @@ def test_generated_code_without_vectors_path_does_not_import_chroma(self) -> Non
Breaks when: Unnecessary import added even without vector store.
"""
storage_access = FileStorageAccess(
- skills_path=Path("/app/skills"),
+ workflows_path=Path("/app/workflows"),
artifacts_path=Path("/app/artifacts"),
vectors_path=None, # No vector store
)
@@ -194,21 +194,21 @@ def test_generated_code_without_vectors_path_does_not_import_chroma(self) -> Non
# Should not import ChromaVectorStore
assert "ChromaVectorStore" not in code
- def test_generated_code_without_vectors_path_still_creates_skill_library(self) -> None:
- """Generated code creates SkillLibrary without vector_store.
+ def test_generated_code_without_vectors_path_still_creates_workflow_library(self) -> None:
+ """Generated code creates WorkflowLibrary without vector_store.
- Breaks when: SkillLibrary creation requires vector_store.
+ Breaks when: WorkflowLibrary creation requires vector_store.
"""
storage_access = FileStorageAccess(
- skills_path=Path("/app/skills"),
+ workflows_path=Path("/app/workflows"),
artifacts_path=Path("/app/artifacts"),
vectors_path=None,
)
code = build_namespace_setup_code(storage_access)
- # Should still create skill library (without vector_store)
- assert "create_skill_library" in code or "SkillLibrary" in code
+ # Should still create workflow library (without vector_store)
+ assert "create_workflow_library" in code or "WorkflowLibrary" in code
# =============================================================================
@@ -230,16 +230,16 @@ async def test_subprocess_executor_with_vector_store(self, tmp_path: Path) -> No
"""
from py_code_mode.execution.subprocess import SubprocessExecutor
from py_code_mode.execution.subprocess.config import SubprocessConfig
- from py_code_mode.skills import PythonSkill
from py_code_mode.storage import FileStorage
+ from py_code_mode.workflows import PythonWorkflow
# Setup storage with vector store
storage = FileStorage(tmp_path / "storage")
- library = storage.get_skill_library()
+ library = storage.get_workflow_library()
- # Add skill with semantic description
+ # Add workflow with semantic description
library.add(
- PythonSkill.from_source(
+ PythonWorkflow.from_source(
name="fetch_url",
source="async def run(url): import requests; return requests.get(url).text",
description="Download content from a web URL using HTTP",
@@ -257,7 +257,7 @@ async def test_subprocess_executor_with_vector_store(self, tmp_path: Path) -> No
await executor.start(storage=storage)
# Search should use vector store for semantic similarity
- result = await executor.run('skills.search("get webpage")')
+ result = await executor.run('workflows.search("get webpage")')
assert result.error is None, f"Search failed: {result.error}"
# Should find fetch_url via semantic similarity
@@ -293,19 +293,19 @@ async def test_subprocess_executor_without_vector_store_falls_back(
try:
await executor.start(storage=storage)
- # Create skill in subprocess
+ # Create workflow in subprocess
create_code = """
-skills.create(
- name="test_skill",
+workflows.create(
+ name="test_workflow",
source="async def run(): return 1",
- description="Test skill for fallback"
+ description="Test workflow for fallback"
)
"""
result = await executor.run(create_code)
assert result.error is None
# Search should still work (using fallback embedder)
- result = await executor.run('skills.search("test")')
+ result = await executor.run('workflows.search("test")')
assert result.error is None
assert len(str(result.value)) > 0
@@ -328,7 +328,7 @@ def test_file_storage_access_vectors_path_serialized(self) -> None:
"""
vectors_path = Path("/app/vectors")
storage_access = FileStorageAccess(
- skills_path=Path("/app/skills"),
+ workflows_path=Path("/app/workflows"),
artifacts_path=Path("/app/artifacts"),
vectors_path=vectors_path,
)
@@ -342,7 +342,7 @@ def test_file_storage_access_vectors_path_none_serialized(self) -> None:
Breaks when: None value causes serialization errors.
"""
storage_access = FileStorageAccess(
- skills_path=Path("/app/skills"),
+ workflows_path=Path("/app/workflows"),
artifacts_path=Path("/app/artifacts"),
vectors_path=None,
)
@@ -357,7 +357,7 @@ def test_generated_code_uses_vectors_path_from_storage_access(self) -> None:
"""
vectors_path = Path("/specific/vectors/location")
storage_access = FileStorageAccess(
- skills_path=Path("/app/skills"),
+ workflows_path=Path("/app/workflows"),
artifacts_path=Path("/app/artifacts"),
vectors_path=vectors_path,
)
@@ -382,7 +382,7 @@ def test_code_generation_handles_missing_vectors_directory(self) -> None:
Breaks when: Code assumes vectors directory exists, crashes on startup.
"""
storage_access = FileStorageAccess(
- skills_path=Path("/app/skills"),
+ workflows_path=Path("/app/workflows"),
artifacts_path=Path("/app/artifacts"),
vectors_path=Path("/nonexistent/vectors"),
)
@@ -401,7 +401,7 @@ def test_code_generation_with_special_characters_in_path(self) -> None:
# Path with spaces and quotes
vectors_path = Path('/app/vectors with spaces/"quotes"')
storage_access = FileStorageAccess(
- skills_path=Path("/app/skills"),
+ workflows_path=Path("/app/workflows"),
artifacts_path=Path("/app/artifacts"),
vectors_path=vectors_path,
)
@@ -422,7 +422,7 @@ def test_code_generation_creates_embedder_only_once(self) -> None:
Breaks when: Multiple embedder instances created wastefully.
"""
storage_access = FileStorageAccess(
- skills_path=Path("/app/skills"),
+ workflows_path=Path("/app/workflows"),
artifacts_path=Path("/app/artifacts"),
vectors_path=Path("/app/vectors"),
)
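The code-generation tests above describe `build_namespace_setup_code()`: given a storage access descriptor, it emits Python source that the kernel executes to build its namespaces, importing `ChromaVectorStore` only when `vectors_path` is set and falling back to `vector_store=None` when chromadb is unavailable. A sketch of driving it directly (the `FileStorageAccess` import path mirrors the `RedisStorageAccess` imports shown above and is an assumption; paths are placeholders):

```python
from pathlib import Path

from py_code_mode.execution.protocol import FileStorageAccess  # assumed, by analogy
from py_code_mode.execution.subprocess.namespace import build_namespace_setup_code

access = FileStorageAccess(
    workflows_path=Path("/app/workflows"),
    artifacts_path=Path("/app/artifacts"),
    vectors_path=Path("/app/vectors"),  # pass None to skip the Chroma import entirely
)
code = build_namespace_setup_code(access)

# The generated source must compile and wire up the workflow library.
compile(code, "<namespace_setup>", "exec")
assert "create_workflow_library" in code
assert "ChromaVectorStore" in code  # present only because vectors_path was set
```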
diff --git a/tests/test_vector_store.py b/tests/test_vector_store.py
index ab340d7..188b226 100644
--- a/tests/test_vector_store.py
+++ b/tests/test_vector_store.py
@@ -15,60 +15,60 @@ class TestVectorStoreProtocol:
"""Protocol compliance tests for VectorStore implementations."""
def test_vector_store_protocol_exists(self) -> None:
- """VectorStore protocol should be importable from skills module."""
+ """VectorStore protocol should be importable from workflows module."""
# Protocol should be runtime checkable
- from py_code_mode.skills.vector_store import VectorStore
+ from py_code_mode.workflows.vector_store import VectorStore
assert isinstance(VectorStore, type)
def test_protocol_has_add_method(self) -> None:
"""VectorStore must define add() method signature."""
- from py_code_mode.skills.vector_store import VectorStore
+ from py_code_mode.workflows.vector_store import VectorStore
# Protocol defines method signatures at class level
assert hasattr(VectorStore, "add")
def test_protocol_has_remove_method(self) -> None:
"""VectorStore must define remove() method signature."""
- from py_code_mode.skills.vector_store import VectorStore
+ from py_code_mode.workflows.vector_store import VectorStore
assert hasattr(VectorStore, "remove")
def test_protocol_has_search_method(self) -> None:
"""VectorStore must define search() method signature."""
- from py_code_mode.skills.vector_store import VectorStore
+ from py_code_mode.workflows.vector_store import VectorStore
assert hasattr(VectorStore, "search")
def test_protocol_has_get_content_hash_method(self) -> None:
"""VectorStore must define get_content_hash() for change detection."""
- from py_code_mode.skills.vector_store import VectorStore
+ from py_code_mode.workflows.vector_store import VectorStore
assert hasattr(VectorStore, "get_content_hash")
def test_protocol_has_get_model_info_method(self) -> None:
"""VectorStore must define get_model_info() for model validation."""
- from py_code_mode.skills.vector_store import VectorStore
+ from py_code_mode.workflows.vector_store import VectorStore
assert hasattr(VectorStore, "get_model_info")
def test_protocol_has_clear_method(self) -> None:
"""VectorStore must define clear() to reset index."""
- from py_code_mode.skills.vector_store import VectorStore
+ from py_code_mode.workflows.vector_store import VectorStore
assert hasattr(VectorStore, "clear")
def test_protocol_has_count_method(self) -> None:
- """VectorStore must define count() to get indexed skill count."""
- from py_code_mode.skills.vector_store import VectorStore
+ """VectorStore must define count() to get indexed workflow count."""
+ from py_code_mode.workflows.vector_store import VectorStore
assert hasattr(VectorStore, "count")
def test_protocol_is_runtime_checkable(self) -> None:
"""Protocol should support isinstance() checks."""
- from py_code_mode.skills.vector_store import VectorStore
+ from py_code_mode.workflows.vector_store import VectorStore
# Check that VectorStore has the runtime_checkable marker
# This allows isinstance(obj, VectorStore) to work
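
Taken together, the protocol tests in this hunk pin down a seven-method surface. A sketch of the `VectorStore` protocol they imply - the `add`, `get_content_hash`, and `count` signatures come from the `MinimalVectorStore` stubs later in this file, while the `remove`, `search`, and `clear` parameters are assumptions:

```python
from typing import Protocol, runtime_checkable


@runtime_checkable  # enables the isinstance() checks the last test verifies
class VectorStore(Protocol):
    def add(self, id: str, description: str, source: str, content_hash: str) -> None: ...
    def remove(self, id: str) -> None: ...            # parameter name assumed
    def search(self, query: str, limit: int = 5) -> list["SearchResult"]: ...  # signature assumed
    def get_content_hash(self, id: str) -> str | None: ...
    def get_model_info(self) -> "ModelInfo": ...
    def clear(self) -> None: ...
    def count(self) -> int: ...
```
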
@@ -80,7 +80,7 @@ class TestModelInfo:
def test_model_info_dataclass_exists(self) -> None:
"""ModelInfo should be importable and constructible."""
- from py_code_mode.skills.vector_store import ModelInfo
+ from py_code_mode.workflows.vector_store import ModelInfo
info = ModelInfo(model_name="bge-small", dimension=384, version="1.5")
@@ -90,7 +90,7 @@ def test_model_info_dataclass_exists(self) -> None:
def test_model_info_is_frozen(self) -> None:
"""ModelInfo should be immutable (frozen dataclass)."""
- from py_code_mode.skills.vector_store import ModelInfo
+ from py_code_mode.workflows.vector_store import ModelInfo
info = ModelInfo(model_name="bge-small", dimension=384, version="1.5")
@@ -100,7 +100,7 @@ def test_model_info_is_frozen(self) -> None:
def test_model_info_equality(self) -> None:
"""ModelInfo instances with same values should be equal."""
- from py_code_mode.skills.vector_store import ModelInfo
+ from py_code_mode.workflows.vector_store import ModelInfo
info1 = ModelInfo(model_name="bge-small", dimension=384, version="1.5")
info2 = ModelInfo(model_name="bge-small", dimension=384, version="1.5")
@@ -111,7 +111,7 @@ def test_model_info_equality(self) -> None:
def test_model_info_hashable(self) -> None:
"""Frozen dataclass should be hashable for use in sets/dicts."""
- from py_code_mode.skills.vector_store import ModelInfo
+ from py_code_mode.workflows.vector_store import ModelInfo
info1 = ModelInfo(model_name="bge-small", dimension=384, version="1.5")
info2 = ModelInfo(model_name="bge-base", dimension=768, version="1.5")
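
These tests fully determine `ModelInfo`: a frozen dataclass gives immutability, value equality, and hashability in one declaration. A sketch consistent with every assertion above (the real definition may carry extra fields or docs):

```python
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen + default eq => immutable, comparable, hashable
class ModelInfo:
    model_name: str
    dimension: int
    version: str


small = ModelInfo(model_name="bge-small", dimension=384, version="1.5")
assert small == ModelInfo("bge-small", 384, "1.5")           # value equality
assert len({small, ModelInfo("bge-base", 768, "1.5")}) == 2  # usable in sets
```
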
@@ -127,45 +127,45 @@ class TestSearchResult:
def test_search_result_dataclass_exists(self) -> None:
"""SearchResult should be importable and constructible."""
- from py_code_mode.skills.vector_store import SearchResult
+ from py_code_mode.workflows.vector_store import SearchResult
- result = SearchResult(id="skill_name", score=0.85, metadata={"tags": ["network"]})
+ result = SearchResult(id="workflow_name", score=0.85, metadata={"tags": ["network"]})
- assert result.id == "skill_name"
+ assert result.id == "workflow_name"
assert result.score == 0.85
assert result.metadata == {"tags": ["network"]}
def test_search_result_is_frozen(self) -> None:
"""SearchResult should be immutable."""
- from py_code_mode.skills.vector_store import SearchResult
+ from py_code_mode.workflows.vector_store import SearchResult
- result = SearchResult(id="skill_name", score=0.85, metadata={})
+ result = SearchResult(id="workflow_name", score=0.85, metadata={})
with pytest.raises(Exception): # dataclasses.FrozenInstanceError
result.score = 0.95 # type: ignore[misc]
def test_search_result_equality(self) -> None:
"""SearchResult instances with same values should be equal."""
- from py_code_mode.skills.vector_store import SearchResult
+ from py_code_mode.workflows.vector_store import SearchResult
- r1 = SearchResult(id="skill", score=0.85, metadata={})
- r2 = SearchResult(id="skill", score=0.85, metadata={})
- r3 = SearchResult(id="skill", score=0.90, metadata={})
+ r1 = SearchResult(id="workflow", score=0.85, metadata={})
+ r2 = SearchResult(id="workflow", score=0.85, metadata={})
+ r3 = SearchResult(id="workflow", score=0.90, metadata={})
assert r1 == r2
assert r1 != r3
def test_search_result_with_empty_metadata(self) -> None:
"""SearchResult should work with empty metadata dict."""
- from py_code_mode.skills.vector_store import SearchResult
+ from py_code_mode.workflows.vector_store import SearchResult
- result = SearchResult(id="skill", score=0.75, metadata={})
+ result = SearchResult(id="workflow", score=0.75, metadata={})
assert result.metadata == {}
def test_search_result_ordering_by_score(self) -> None:
"""SearchResult should be comparable by score for sorting."""
- from py_code_mode.skills.vector_store import SearchResult
+ from py_code_mode.workflows.vector_store import SearchResult
r1 = SearchResult(id="a", score=0.9, metadata={})
r2 = SearchResult(id="b", score=0.7, metadata={})
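
One subtlety behind the ordering test: `dataclass(order=True)` compares fields in declaration order, so it would sort by `id` before `score`; ordering "by score" likely needs a hand-written `__lt__`. A sketch that satisfies the tests in this class (the `__lt__` body is an assumption about the implementation, not a quote from it):

```python
from dataclasses import dataclass
from typing import Any


@dataclass(frozen=True)
class SearchResult:
    id: str
    score: float
    metadata: dict[str, Any]

    def __lt__(self, other: "SearchResult") -> bool:
        # Compare by score alone so results sort naturally by relevance.
        return self.score < other.score


hits = [SearchResult("b", 0.7, {}), SearchResult("a", 0.9, {})]
assert [r.id for r in sorted(hits, reverse=True)] == ["a", "b"]
```
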
@@ -185,7 +185,7 @@ class TestContentHashUtility:
def test_compute_content_hash_exists(self) -> None:
"""compute_content_hash() should be importable and callable."""
- from py_code_mode.skills.vector_store import compute_content_hash
+ from py_code_mode.workflows.vector_store import compute_content_hash
hash_value = compute_content_hash(
description="Scan network ports",
@@ -197,7 +197,7 @@ def test_compute_content_hash_exists(self) -> None:
def test_content_hash_is_16_chars(self) -> None:
"""Hash should be 16-character hex string (8 bytes)."""
- from py_code_mode.skills.vector_store import compute_content_hash
+ from py_code_mode.workflows.vector_store import compute_content_hash
hash_value = compute_content_hash(description="test", source="async def run(): pass")
@@ -207,7 +207,7 @@ def test_content_hash_is_16_chars(self) -> None:
def test_same_input_produces_same_hash(self) -> None:
"""Deterministic: same input should produce same hash."""
- from py_code_mode.skills.vector_store import compute_content_hash
+ from py_code_mode.workflows.vector_store import compute_content_hash
description = "Scan network ports"
source = 'async def run(target: str):\n return f"nmap {target}"'
@@ -219,7 +219,7 @@ def test_same_input_produces_same_hash(self) -> None:
def test_different_description_produces_different_hash(self) -> None:
"""Different description should change hash."""
- from py_code_mode.skills.vector_store import compute_content_hash
+ from py_code_mode.workflows.vector_store import compute_content_hash
source = "async def run(): pass"
@@ -230,9 +230,9 @@ def test_different_description_produces_different_hash(self) -> None:
def test_different_source_produces_different_hash(self) -> None:
"""Different source code should change hash."""
- from py_code_mode.skills.vector_store import compute_content_hash
+ from py_code_mode.workflows.vector_store import compute_content_hash
- description = "Test skill"
+ description = "Test workflow"
hash1 = compute_content_hash(description, "async def run(): return 1")
hash2 = compute_content_hash(description, "async def run(): return 2")
@@ -241,7 +241,7 @@ def test_different_source_produces_different_hash(self) -> None:
def test_whitespace_changes_affect_hash(self) -> None:
"""Whitespace is significant - changes should affect hash."""
- from py_code_mode.skills.vector_store import compute_content_hash
+ from py_code_mode.workflows.vector_store import compute_content_hash
description = "Test"
source1 = "async def run(): pass"
@@ -254,7 +254,7 @@ def test_whitespace_changes_affect_hash(self) -> None:
def test_hash_uses_sha256_algorithm(self) -> None:
"""Hash should be first 16 chars of SHA-256 hex digest."""
- from py_code_mode.skills.vector_store import compute_content_hash
+ from py_code_mode.workflows.vector_store import compute_content_hash
description = "Test description"
source = "async def run(): pass"
@@ -269,7 +269,7 @@ def test_hash_uses_sha256_algorithm(self) -> None:
def test_hash_separates_description_and_source(self) -> None:
"""Hash should use delimiter to prevent collision."""
- from py_code_mode.skills.vector_store import compute_content_hash
+ from py_code_mode.workflows.vector_store import compute_content_hash
# These would collide if we just concatenated without delimiter
hash1 = compute_content_hash("AB", "C")
@@ -279,7 +279,7 @@ def test_hash_separates_description_and_source(self) -> None:
def test_empty_strings_produce_valid_hash(self) -> None:
"""Should handle empty description and source gracefully."""
- from py_code_mode.skills.vector_store import compute_content_hash
+ from py_code_mode.workflows.vector_store import compute_content_hash
hash_value = compute_content_hash("", "")
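
The hashing tests above jointly specify the whole algorithm: SHA-256 over the UTF-8 bytes of description and source joined by a delimiter, truncated to the first 16 hex characters. A sketch under that reading - the delimiter byte itself is an assumption, since the tests only require that one exists:

```python
import hashlib


def compute_content_hash(description: str, source: str) -> str:
    # The delimiter is load-bearing: without it, ("AB", "C") and ("A", "BC")
    # would hash the same bytes. "\x00" is an assumed choice, not the source's.
    payload = f"{description}\x00{source}".encode("utf-8")
    return hashlib.sha256(payload).hexdigest()[:16]  # 8 bytes as hex


assert compute_content_hash("AB", "C") != compute_content_hash("A", "BC")
assert len(compute_content_hash("", "")) == 16          # empty inputs still valid
assert compute_content_hash("x", "y") == compute_content_hash("x", "y")  # deterministic
```
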
@@ -296,7 +296,7 @@ class TestVectorStoreSignatures:
def test_add_signature_accepts_required_params(self) -> None:
"""add() should accept id, description, source, content_hash."""
- from py_code_mode.skills.vector_store import VectorStore
+ from py_code_mode.workflows.vector_store import VectorStore
class MinimalVectorStore:
def add(self, id: str, description: str, source: str, content_hash: str) -> None:
@@ -314,7 +314,7 @@ def get_content_hash(self, id: str) -> str | None:
return None
def get_model_info(self):
- from py_code_mode.skills.vector_store import ModelInfo
+ from py_code_mode.workflows.vector_store import ModelInfo
return ModelInfo("test", 384, "1.0")
@@ -329,7 +329,7 @@ def count(self) -> int:
# Should be callable with these parameters
store.add(
- id="test_skill",
+ id="test_workflow",
description="Test description",
source="async def run(): pass",
content_hash="abcd1234",
@@ -337,7 +337,7 @@ def count(self) -> int:
def test_search_returns_list_of_search_results(self) -> None:
"""search() should return list[SearchResult]."""
- from py_code_mode.skills.vector_store import SearchResult, VectorStore
+ from py_code_mode.workflows.vector_store import SearchResult, VectorStore
class MinimalVectorStore:
def add(self, id: str, description: str, source: str, content_hash: str) -> None:
@@ -355,7 +355,7 @@ def get_content_hash(self, id: str) -> str | None:
return None
def get_model_info(self):
- from py_code_mode.skills.vector_store import ModelInfo
+ from py_code_mode.workflows.vector_store import ModelInfo
return ModelInfo("test", 384, "1.0")
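
What these signature tests exercise is Python's structural typing: `MinimalVectorStore` never inherits from `VectorStore`, yet a runtime-checkable protocol accepts it because every required method is present. A tiny standalone demonstration of that mechanism (names here are illustrative, not from the codebase):

```python
from typing import Protocol, runtime_checkable


@runtime_checkable
class Countable(Protocol):
    def count(self) -> int: ...


class Bag:                      # no inheritance from Countable
    def count(self) -> int:
        return 0


assert isinstance(Bag(), Countable)         # structural check passes
assert not isinstance(object(), Countable)  # missing method -> fails
```
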
@@ -376,7 +376,7 @@ def count(self) -> int:
def test_get_model_info_returns_model_info(self) -> None:
"""get_model_info() should return ModelInfo dataclass."""
- from py_code_mode.skills.vector_store import ModelInfo, VectorStore
+ from py_code_mode.workflows.vector_store import ModelInfo, VectorStore
class MinimalVectorStore:
def add(self, id: str, description: str, source: str, content_hash: str) -> None: