
WebQA Agent


Join us on 🎮Discord | 💬WeChat

English · 简体中文

If you like WebQA Agent, please give us a ⭐ on GitHub!

🤖 WebQA Agent is a fully automated web testing agent that understands the web like a human — generating test cases, evaluating functionality, performance, and UX end-to-end. ✨ Available as GUI/CLI for direct use, or as an OpenClaw skill.

🚀 Core Features

📋 Feature Overview

WebQA Agent provides two testing modes for different scenarios: 🤖 Generate Mode and 📋 Run Mode.

| Capability | 🤖 Generate Mode | 📋 Run Mode |
| --- | --- | --- |
| Core Features | AI-driven discovery -> dynamic generation -> precise execution | Execution based on instructions and expected verification |
| Use Cases | New features; comprehensive quality assurance | Repeatable and regression testing scenarios |
| User Input | Minimal: only a URL or a one-sentence business goal | Structured: simple natural-language step descriptions |
| Advantages | Reflection-based planning, adaptive to UI changes; configurable functional / performance / security / UX evaluation for comprehensive QA | Stable, predictable results; no selector maintenance; real-time Console and Network monitoring |

Usage & Deployment: Supports CLI execution (see CLI Usage); also supports full-stack deployment (Local / Docker / K8s) with a web interface for visual management. See Deployment.

🛠️ Tool System

Default Tools (Always Enabled):

  • UI Actions: Browser interactions (click, type, navigate)
  • UI Assertions: State verification
  • UX Verification: Text typo checking, layout analysis

Custom Tools (Optional, Configuration-Enabled):

  • Performance: Lighthouse-based performance testing
  • Security: Nuclei vulnerability scanning
  • Link Detection: Dynamic link discovery

Enable custom tools in config.yaml:

test_config:
  custom_tools:
    enabled:
      - lighthouse
      - nuclei
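After editing config.yaml, a quick way to confirm the fragment parses the way you expect is to load it with PyYAML (a throwaway check, not part of WebQA Agent itself; PyYAML is assumed to be available — it is not in the Python standard library):

```shell
python3 - <<'EOF'
import yaml  # PyYAML; install with: pip install pyyaml

cfg = yaml.safe_load("""
test_config:
  custom_tools:
    enabled:
      - lighthouse
      - nuclei
""")
print(cfg["test_config"]["custom_tools"]["enabled"])  # expect: ['lighthouse', 'nuclei']
EOF
```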

🧭 Architecture

WebQA Agent Architecture

📹 Examples

🎬 Watch Demo: One-click testing of Baidu.com

🚀 Quick Start

Choose between 🛠️ CLI Quick Start or 🖥️ Full-stack Deployment (Web Dashboard).

🛠️ CLI Quick Start (Recommended for Developers)

We recommend using uv (Python >= 3.11):

# 1) Create project and install
uv init my-webqa && cd my-webqa
uv add webqa-agent

# 2) Install browser (Required)
uv run playwright install chromium

# 3) Generate Mode
uv run webqa-agent init -m gen  # Init config, edit config.yaml with URL & API Key
uv run webqa-agent gen          # Start AI-driven testing

# 4) Run Mode
uv run webqa-agent init -m run  # Init config, write natural language cases
uv run webqa-agent run          # Start execution

See CLI Usage for more CLI details.

🖥️ Full-stack Deployment (Recommended for Teams)

For a visual dashboard, test management, and run history, start with Docker Compose:

git clone https://github.com/MigoXLab/webqa-agent.git
cd webqa-agent/deploy/docker-compose
cp .env.example .env
# Edit .env: fill in your LLM API Key
./start.sh

Access via http://localhost. For other deployment methods, see Deployment.

⚙️ CLI Usage

CLI Parameter Details

WebQA Agent provides a concise command-line interface for initialization, autonomous exploration, case execution, and launching the Web UI.

| Command | Description | Common Arguments |
| --- | --- | --- |
| init | Initialize a configuration file | -m <gen/run>: specify mode; -o <path>: output path; --force: overwrite existing |
| gen | Generate Mode: AI-driven test generation & execution | -c <path>: config path; -w <n>: parallel workers |
| run | Run Mode: execute YAML-defined test cases | -c <path/dir>: config file or folder; -w <n>: parallel workers |

Examples:

# Initialize Run mode configuration
webqa-agent init -m run

# Run all cases in a directory with 4 parallel workers
webqa-agent run -c ./my_cases -w 4

Generate Mode - Configuration

🔧 Optional Dependencies (Custom Tools)

  • Performance testing (Lighthouse): npm install lighthouse chrome-launcher (requires Node.js ≥18)
  • Security testing (Nuclei):
  brew install nuclei      # macOS
  nuclei -ut               # Update templates
  # Linux/Windows: https://github.com/projectdiscovery/nuclei/releases

📄 Configuration Details

The configuration file must include the test_config field to define test types.

  • Business Objectives: Specifies business goals to steer AI test focus and coverage.
  • Custom Tools: Optional tools like Performance (Lighthouse), Security (Nuclei), button checks, and link detection.
  • Dynamic Step Generation: Automatically generates additional test steps when new UI elements are detected during execution.
  • Filter Model: Configures a lightweight model for pre-filtering page elements to improve planning efficiency.

For more details, please refer to docs/MODES&CLI.md.

target:
  url: https://example.com              # Website URL to test
  description: Website QA testing

test_config:
  business_objectives: Test search functionality, generate 3 test cases
  custom_tools:                         # Optional: Enable custom testing tools (by step_type)
    enabled:
      # - lighthouse                    # Lighthouse performance testing
                                        # Requires: npm install lighthouse chrome-launcher (local, recommended)
                                        # or: npm install -g lighthouse chrome-launcher (global)
      # - nuclei                        # Nuclei security scanning
                                        # Requires: go install -v github.com/projectdiscovery/nuclei/v3/cmd/nuclei@latest
                                        # or download from: https://github.com/projectdiscovery/nuclei/releases
      # - traverse_clickable_elements   # Clickable element traversal testing
      # - detect_dynamic_links          # Dynamic link discovery and validation

llm_config:                             # LLM configuration, supports OpenAI, Anthropic Claude, Google Gemini, and OpenAI-compatible models (e.g., Doubao, Qwen)
  model: gpt-5.4                        # Primary model
  filter_model: gpt-5-mini              # Lightweight model for element filtering (optional)
  api_key: your_api_key                 # Or set via environment variable (OPENAI_API_KEY)
  base_url: https://api.openai.com/v1   # Optional, API endpoint. For OpenAI-compatible models (Doubao, Qwen, etc.), set to their API endpoint

browser_config:
  headless: False                       # Auto True in Docker
  language: en-US

report:
  language: en-US                       # zh-CN or en-US
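For an OpenAI-compatible provider (Doubao, Qwen, etc., as noted in the llm_config comment above), only the model name and endpoint change. An illustrative fragment — the model name and URL below are placeholders, not real values:

```yaml
llm_config:
  model: your_provider_model                     # the provider's model name
  api_key: your_api_key
  base_url: https://your-provider.example.com/v1 # the provider's OpenAI-compatible endpoint
```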

Run Mode - Configuration

Run Mode configuration must include the cases field.

  • Multi-modal Interaction: Use action to describe visible text, images, or relative positions on the page. Supported browser actions include click, hover, input, clear, keyboard input, scrolling, mouse movement, file upload, drag-and-drop, and wait; page actions include navigation and going back.
  • Multi-modal Verification: Use verify to keep the agent on track, validating visual content, URLs, paths, and combined image–element conditions.
  • End-to-End Monitoring: Monitors Console logs and Network request status, and supports ignore_rules configuration to suppress known errors.

For more details and test case writing guidelines, please refer to docs/MODES&CLI.md.

target:
  url: https://example.com              # Target website URL

llm_config:                             # LLM configuration
  api: openai
  model: gpt-5-mini
  api_key: your_api_key_here
  base_url: https://api.openai.com/v1

browser_config:
  viewport: {"width": 1280, "height": 720}
  headless: False                       # Auto True in Docker
  language: en-US
  # cookies: /path/to/cookie.json

ignore_rules:                           # Ignore rules configuration (optional)
  network:                              # Network request ignore rules
    - pattern: ".*\\.google-analytics\\.com.*"
      type: "domain"
  console:                              # Console log ignore rules
    - pattern: "Failed to load resource.*favicon"
      match_type: "regex"
    - pattern: "Warning:"
      match_type: "contains"

cases:                                  # Test case list
  - name: Image Upload                  # Test case name
    steps:                              # Test steps
      - action: Upload icon is the image icon in the input box, located next to the Baidu search button, used for uploading files
        args:
          file_path: ./tests/data/test.jpeg
      - action: Wait for image upload
      - verify: Verify that the input field displays an open palm/hand icon image
      - action: Enter "How many fingers are in the image?" in the search input box, then press Enter, wait 2 seconds
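The pattern values in ignore_rules above are matched as regular expressions (or substrings, depending on match_type). Before adding a rule, it can help to sanity-check the pattern against a sample log line — a quick illustration with grep -E, where the sample URL and console message are made up for the demo:

```shell
# Does the network rule match a typical analytics request URL?
echo "https://www.google-analytics.com/g/collect?v=2" \
  | grep -E ".*\.google-analytics\.com.*" && echo "network rule matches"

# Does the console rule match a favicon load failure?
echo "Failed to load resource: favicon.ico 404" \
  | grep -E "Failed to load resource.*favicon" && echo "console rule matches"
```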

📊 View Results

Test reports are generated in the reports/ directory. Open the HTML file to view detailed results.

🛠️ Extending WebQA Agent Tools

WebQA Agent supports custom tool development for domain-specific testing capabilities.

| Document | Description |
| --- | --- |
| Custom Tool Development | Quick reference for creating custom tools |
| LLM Context Document | Comprehensive guide for AI-assisted development, useful for vibe coding |

We welcome contributions! Check out existing tools for examples.

🖥️ Deployment

For teams that need a persistent web dashboard with test management, scheduled tasks, and execution history, deploy the full-stack platform:

| Method | Use Case | Guide |
| --- | --- | --- |
| Local Development | Personal dev & debugging | deploy/README.md |
| Docker Compose | Single-machine / team trial | deploy/README.md |
| Kubernetes | Production cluster | deploy/k8s/README.md |

💡 Extending Internal Logic: WebQA Agent can be extended to fit your team's infrastructure (e.g., internal SSO, OSS object storage, internal LLMs). You are free to customize it to your needs; see deploy/README.md.

Note: The web dashboard platform is currently only available in Chinese.

🗺️ Roadmap

  1. Interaction & Visualization: Real-time display of reasoning processes
  2. Generate Mode Expansion: Integration of additional evaluation dimensions
  3. Tool Agent Context Integration: More comprehensive and precise execution

🙏 Acknowledgements

  • natbot: Drive a browser with GPT-3
  • Midscene.js: AI Operator for Web, Android, Automation & Testing
  • browser-use: AI Agent for Browser control

📄 License

This project is licensed under the Apache 2.0 License.
