vectorclaw-mcp

MCP tools for Anki Vector: speech, motion, camera, sensors, and automation workflows.

Safety Notice

This listing is from the official public ClawHub registry. Review SKILL.md and referenced scripts before running.

To install, send this command to your AI assistant: npx skills add danmartinez78/vectorclaw-mcp

VectorClaw MCP

VectorClaw connects OpenClaw to an Anki / Digital Dream Labs Vector robot through MCP. It provides practical robot control primitives for speech, movement, camera capture, and status/sensor reads.

What you can do

  • Speak text with vector_say
  • Move and position with vector_drive, vector_head, vector_lift
  • Capture camera images with vector_look and vector_capture_image
  • Read robot state with vector_status, vector_pose, vector_proximity_status, vector_touch_status
  • Build look → reason → act workflows

Vision requirement for look → reason → act

For look → reason → act workflows, the agent must either be vision-capable itself (e.g., a VLM) or have access to a separate vision model or image-interpretation tool that can analyze captured camera images before the agent chooses an action.
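The loop above can be sketched as plain Python. This is a hypothetical illustration, not the package's API: `mcp_call` and `describe_image` are placeholder stand-ins for your agent's MCP client and vision model, and the tool argument names (`text`, `distance_mm`) are assumptions; only the tool names come from this listing.

```python
# Hypothetical look -> reason -> act sketch. Tool names come from this
# listing; `mcp_call`, `describe_image`, and the argument keys ("text",
# "distance_mm") are placeholder assumptions, not the real signatures.

def choose_action(scene_description: str) -> tuple[str, dict]:
    """Map a vision model's scene description to a robot tool call."""
    text = scene_description.lower()
    if "person" in text:
        return ("vector_say", {"text": "Hello there!"})
    if "obstacle" in text or "wall" in text:
        return ("vector_drive", {"distance_mm": -50})  # back away
    return ("vector_drive", {"distance_mm": 100})      # explore forward


def look_reason_act(mcp_call, describe_image):
    image = mcp_call("vector_capture_image", {})  # look
    description = describe_image(image)           # reason (vision model)
    tool, args = choose_action(description)       # act
    return mcp_call(tool, args)
```

The key point is the middle step: without a vision model to turn the image into something the agent can reason over, the loop cannot close.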

Requirements

  • Vector robot configured and reachable
  • Wire-Pod running
  • SDK configured at ~/.anki_vector/sdk_config.ini
  • VECTOR_SERIAL environment variable set

Quick setup

  1. Install package: pip install vectorclaw-mcp
  2. Configure SDK: python3 -m anki_vector.configure
  3. Export robot serial: export VECTOR_SERIAL=your-serial
  4. Add MCP server:
{
  "mcpServers": {
    "vectorclaw": {
      "command": "python3",
      "args": ["-m", "vectorclaw_mcp.server"],
      "env": { "VECTOR_SERIAL": "${VECTOR_SERIAL}" }
    }
  }
}
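If you manage your MCP client config programmatically, the entry from step 4 can be merged into an existing config file like this. The JSON structure mirrors the block above; the helper names and the merge logic are illustrative assumptions, and the merge is shallow (it only touches the "mcpServers" key).

```python
# Build the vectorclaw mcpServers entry from step 4 and merge it into an
# existing MCP client config. Helper names are illustrative; the entry's
# structure mirrors the JSON block in this listing.
import json


def vectorclaw_server_entry(serial_env: str = "${VECTOR_SERIAL}") -> dict:
    return {
        "vectorclaw": {
            "command": "python3",
            "args": ["-m", "vectorclaw_mcp.server"],
            "env": {"VECTOR_SERIAL": serial_env},
        }
    }


def merge_into_config(config: dict) -> dict:
    """Shallow-merge the vectorclaw entry into config["mcpServers"]."""
    merged = dict(config)
    merged.setdefault("mcpServers", {}).update(vectorclaw_server_entry())
    return merged


print(json.dumps(merge_into_config({}), indent=2))
```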

Tool coverage

Hardware-verified core tools: vector_say, vector_drive_off_charger, vector_drive, vector_emergency_stop, vector_head, vector_lift, vector_look, vector_capture_image, vector_face, vector_scan, vector_vision_reset, vector_pose, vector_status, vector_charger_status, vector_touch_status, vector_proximity_status

Experimental tools: vector_animate, vector_drive_on_charger, vector_find_faces, vector_list_visible_faces, vector_face_detection, vector_list_visible_objects, vector_cube

Current limitations

  • Charger return (vector_drive_on_charger) is currently unreliable
  • Face/object detection is currently inconsistent
  • Visual interpretation requires the vision capability described above


Source Transparency

This detail page is rendered from real SKILL.md content. Trust labels are metadata-based hints, not a safety guarantee.

Related Skills

Related by shared tags or category signals.

General

ClawRAG - Self-hosted RAG & Memory

Self-hosted RAG engine with hybrid semantic and keyword search, document ingestion, local privacy, and seamless OpenClaw integration via Docker.

Research

Embodied Ai News

Aggregates publicly available Embodied AI and Robotics news from curated sources (robotics media, arXiv, company blogs). Delivers structured briefings on hum...

Automation

Fast.io

Workspaces for agentic teams. Complete agent guide with all 19 consolidated tools using action-based routing — parameters, workflows, ID formats, and constra...

Automation

MCP Business Integration

Integrate AI agents with business data via Model Context Protocol. Query ads, analytics, CRM data through normalized interfaces. Use when connecting agents t...
