Posted

Hi everyone, in particular @ryan @Peter Knight @ukyo @gebeer @maximus, who seem to have been the most AI-active lately.

I've just added dai() and bdai() dumping calls so that objects and arrays are rendered in a plain-text format that's friendlier to LLMs, but I'm curious: what AI/LLM integration features do you think would be most useful?

Claude suggested an MCP server - here is its plan. Does this sound useful? Any other ideas?

 

Two processes, loosely coupled:

 
┌──────────────────┐                        ┌──────────────────────────┐
│ Claude / Cursor  │      stdio MCP         │ TracyDebugger site       │
│     client       │ ─────────────────►     │                          │
│                  │      HTTP + token      │  tracy-ai/* endpoints    │
└──────────────────┘ ◄─────────────────     └──────────────────────────┘
  1. MCP server — a tiny program the agent launches over stdio (the MCP transport). Ships as a sibling module (TracyDebuggerMCP/) or a standalone npm package the user runs with npx.
  2. TracyDebugger HTTP endpoints — new authenticated tracy-ai/* routes inside the ProcessWire site. The MCP server is just a thin translator between MCP tool calls and these HTTP requests.

The MCP server holds no site logic. It's a dumb adapter. All the real work (reading panels, redacting secrets, rendering plaintext) stays inside TracyDebugger where the ProcessWire API is available.
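As a rough sketch of how thin that adapter could be (all names here are illustrative and assume Node 18+ for the built-in fetch; only the tracy-ai/* path shape comes from the plan itself):

```typescript
// Hypothetical adapter core: every MCP tool call becomes one authenticated
// HTTP request to the site. No site logic lives here.
function buildTracyUrl(
  base: string,
  endpoint: string,
  params: Record<string, string>,
): URL {
  const url = new URL(`/tracy-ai/${endpoint}`, base);
  for (const [k, v] of Object.entries(params)) url.searchParams.set(k, v);
  return url;
}

async function callTracy(
  endpoint: string,
  params: Record<string, string> = {},
): Promise<string> {
  // TRACY_URL / TRACY_TOKEN are injected via the MCP client config.
  const url = buildTracyUrl(process.env.TRACY_URL ?? "", endpoint, params);
  const res = await fetch(url, {
    headers: { Authorization: `Bearer ${process.env.TRACY_TOKEN ?? ""}` },
  });
  if (!res.ok) throw new Error(`tracy-ai returned HTTP ${res.status}`);
  return res.text(); // already scrubbed by AIExport on the site side
}
```

So a tool call like tracy_export_bundle(preset: "debug") would reduce to callTracy("export", { bundle: "debug" }).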

What the agent sees

A handful of tools in the MCP catalog:

 
tracy_export_bundle(preset: "debug" | "performance" | "template" | "full")
tracy_get_request_info()
tracy_get_last_errors(limit: int = 10)
tracy_get_slow_queries(limit: int = 10)
tracy_get_template_schema(template: string)
tracy_list_dumps()
tracy_run_console(code: string)           ← gated, opt-in only

Every tool returns the scrubbed plaintext/JSON produced by AIExport — same output Phase 2's "Copy" button produces.
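One way the adapter could declare that catalog internally (a sketch: the tool names come from the list above, but the per-tool endpoint paths are guesses, since the plan only names tracy-ai/export):

```typescript
// Illustrative catalog entry shape. `gated` marks tools that need an extra
// site-side opt-in (aiExportAllowConsoleExec for tracy_run_console).
type TracyTool = { name: string; endpoint: string; gated: boolean };

const TOOLS: TracyTool[] = [
  { name: "tracy_export_bundle",       endpoint: "export",          gated: false },
  { name: "tracy_get_request_info",    endpoint: "request-info",    gated: false },
  { name: "tracy_get_last_errors",     endpoint: "errors",          gated: false },
  { name: "tracy_get_slow_queries",    endpoint: "slow-queries",    gated: false },
  { name: "tracy_get_template_schema", endpoint: "template-schema", gated: false },
  { name: "tracy_list_dumps",          endpoint: "dumps",           gated: false },
  { name: "tracy_run_console",         endpoint: "console",         gated: true },
];
```

Keeping this as plain data means the read-only/gated split is declared in one place rather than scattered through tool handlers.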

Config on the site

New module-config section:

  • aiExportHTTPEndpointEnabled (default off)
  • aiExportMCPToken — a random token generated once per site, shown to the user to paste into their MCP client config
  • aiExportAllowConsoleExec (default off) — gates tracy_run_console
  • aiExportAllowedIPs — optional whitelist

Config on the client

User's ~/.config/claude/mcp.json or equivalent:

 
{
  "mcpServers": {
    "tracy": {
      "command": "npx",
      "args": ["-y", "tracy-mcp"],
      "env": {
        "TRACY_URL":   "https://mysite.test",
        "TRACY_TOKEN": "<paste token from module config>"
      }
    }
  }
}

The agent launches the MCP server locally; the MCP server talks to the site over HTTPS with the token.
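A startup sketch of how the server might validate that env (names hypothetical; failing fast here beats surfacing confusing 401s to the agent mid-session):

```typescript
// Read connection details injected by the MCP client config above.
// Refuse to start if either value is missing.
function readConfig(env: Record<string, string | undefined>): {
  url: string;
  token: string;
} {
  const url = env.TRACY_URL;
  const token = env.TRACY_TOKEN;
  if (!url || !token) {
    throw new Error("TRACY_URL and TRACY_TOKEN must be set in the MCP client config");
  }
  return { url: url.replace(/\/+$/, ""), token }; // drop trailing slashes
}
```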

Auth

  • Per-site token (generated in module config, rotatable).
  • Token sent as Authorization: Bearer … header on every HTTP call.
  • Optional IP whitelist on the site side.
  • tracy_run_console additionally requires aiExportAllowConsoleExec=true — otherwise the MCP server gets a 403 and reports "console execution disabled for this site" to the agent.
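On the MCP-server side, that 403 handling could be a small status-to-message mapping (a sketch; the 403 wording comes from the plan, the other statuses and messages are assumptions):

```typescript
// Map site responses for the gated console tool to agent-facing results.
// A 403 specifically means aiExportAllowConsoleExec is off on the site.
type ToolResult = { ok: boolean; text: string };

function shapeConsoleResponse(status: number, body: string): ToolResult {
  if (status === 403) {
    return { ok: false, text: "console execution disabled for this site" };
  }
  if (status === 401) {
    return { ok: false, text: "token rejected; check or rotate it in the module config" };
  }
  if (status !== 200) {
    return { ok: false, text: `tracy-ai error: HTTP ${status}` };
  }
  return { ok: true, text: body };
}
```

Reporting the gate as a normal tool result (rather than throwing) lets the agent explain to the user what to enable instead of just failing.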

Example flow — agent debugging an error

User in Claude: "Why is /about/team throwing a 500?"

Agent:

  1. Calls tracy_export_bundle(preset: "debug").
  2. MCP server hits GET https://mysite.test/tracy-ai/export?bundle=debug with the bearer token.
  3. Site responds with scrubbed JSON: request info, PW info, last error with stack, slow queries, recent PW logs.
  4. Agent reads the stack trace, sees TemplateFile.php:123 Undefined index "featured_image", asks tracy_get_template_schema(template: "team").
  5. Site responds with the template's fields — no featured_image field exists.
  6. Agent suggests the fix, possibly calls tracy_run_console (if enabled) to verify.

No human pasting. Agent pulls what it needs on demand, scoped by the tool it calls.

What ships where

  • In TracyDebugger itself: the tracy-ai/* HTTP endpoints + auth + token config + AIExport (already built in Phase 1, extended in Phase 2).
  • In the MCP server (separate repo, ~200 lines): tool definitions, HTTP calls, response shaping for MCP.

This separation matters because the MCP server can be installed independently of the site, and the site is still useful without it (you can hit tracy-ai/export with curl directly).

Footprint on production

Zero unless you explicitly enable it. The endpoints, token, and MCP config are all opt-in behind module settings.


That's the shape. The main design choices worth confirming before building:

  1. Token-only auth, or also require the existing Tracy access? — i.e., should the agent's token have to belong to an allowed Tracy dev user, or is a separate machine token fine? I'd lean toward a separate machine token for agents; reusing session auth is awkward over stdio.
  2. Read-only by default? — I strongly recommend yes. tracy_run_console is the only write path and should be a separate opt-in.
  3. Does the MCP server live in this repo or a separate repo? — I'd say separate. Different language, different release cadence, and the site works without it.
Posted

Maybe it would be helpful to have an AI prompt (like Ryan's Agent Tools) built into the Console panel, so you can prompt your way to a script, or ask it to fix or extend an existing script?
