Documentation
Installation
Download the Relectric desktop app for your platform. The app bundles Electron as the shell and launches a Python runtime for tool execution.
# Clone and install
git clone https://github.com/TeamRelectric/Relectric.git
cd Relectric
# Install Python SDK (editable mode)
cd python && pip install -e .
# Install desktop app deps
cd ../apps/desktop && npm install
# Start development mode
npm run dev
The Python SDK installs the relectric_sdk and
relectric_runtime packages. The runtime starts
automatically on port 8765 when the desktop app launches.
Quick Start
Every tool is a Python file in your .tools/ directory.
Here's a complete tool:
from relectric_sdk import RelectricTool, ui, relectric
class HelloTool(RelectricTool):
    id = "hello_tool"
    title = "Hello World"

    def build_ui(self):
        self.name_input = ui.Textbox(label="Your Name")
        self.output = ui.Markdown()
        self.greet_btn = ui.Button("Greet")
        self.greet_btn.click(self.greet, inputs=[self.name_input], outputs=[self.output])

    def greet(self, name):
        return f"# Hello, {name}! 👋"

    @RelectricTool.function(description="Greet someone by name")
    def say_hello(self, name: str):
        return f"Greeted {name} successfully!"
Save this as .tools/hello_tool.py. The runtime
hot-reloads it instantly — no restart needed. The tool appears in the dock, and the
agent can call say_hello when relevant.
Tool Lifecycle
Discovery
The runtime scans .tools/*.py for classes
that subclass RelectricTool. Each file can contain
one tool class. The registry collects tool metadata (id, title, functions).
Instantiation
Each tool is instantiated and build_ui() is called,
creating the component tree. Components are serialized to JSON and sent to the
Electron shell via WebSocket.
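The serialization step can be sketched as a recursive tree-to-dict walk. The field names here (id, type, props, children) are illustrative, not the actual wire format:

```python
import json

class Component:
    """Minimal stand-in for a ui component; field names are hypothetical."""
    _next_id = 0

    def __init__(self, type_, **props):
        Component._next_id += 1
        self.id = Component._next_id     # stable id so events can reference the node
        self.type = type_
        self.props = props
        self.children = []

    def to_dict(self):
        return {
            "id": self.id,
            "type": self.type,
            "props": self.props,
            "children": [c.to_dict() for c in self.children],
        }

# Build a tiny tree and serialize it, as the runtime might before sending it
# over the WebSocket to the shell.
root = Component("Column")
root.children.append(Component("Textbox", label="Your Name"))
root.children.append(Component("Button", label="Greet"))
payload = json.dumps(root.to_dict())
```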
Rendering
The desktop shell receives the component tree and renders Preact components that mirror the Python declarations. User interactions are proxied back to Python event handlers via the host bridge.
Agent Registration
Methods decorated with @RelectricTool.function()
are exposed to the chat agent as callable functions. The agent sees their names,
descriptions, and parameter types via introspection.
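This kind of introspection can be sketched with the standard inspect module. The output shape is illustrative, not the exact schema the agent receives:

```python
import inspect

def function_schema(fn, description):
    """Derive a callable-function schema from a method's signature (sketch)."""
    sig = inspect.signature(fn)
    params = {}
    for name, p in sig.parameters.items():
        if name == "self":
            continue
        # Type hints become the parameter type; defaults are carried along.
        entry = {"type": p.annotation.__name__ if p.annotation is not p.empty else "any"}
        if p.default is not p.empty:
            entry["default"] = p.default
        params[name] = entry
    return {"name": fn.__name__, "description": description, "parameters": params}

def say_hello(self, name: str, excited: bool = False):
    ...

schema = function_schema(say_hello, "Greet someone by name")
```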
Hot Reload
A watchdog observer monitors .tools/ for
file changes. On save, the tool is reimported, re-instantiated, and its UI is
pushed to the shell. No restart required.
Architecture
The Electron shell handles rendering, LLM communication, and user settings. The Python runtime handles tool execution, file operations, and process management. Communication flows bidirectionally over a single WebSocket connection.
Python Runtime
The runtime is a FastAPI server that accepts WebSocket connections from the Electron shell. It manages the tool registry, proxies API calls through the host bridge, and runs the automation job queue.
# Key runtime components:
relectric_runtime.server # FastAPI WebSocket server
relectric_runtime.nodes # UI component processing
relectric_sdk.hostbridge # API proxy (relectric.workspace, relectric.terminal, etc.)
relectric_sdk.registry # Tool discovery & lifecycle
relectric_sdk.automation # FIFO job queue with blocking semaphore
relectric_sdk.tool_base # RelectricTool base class & decorators
Host Bridge
The host bridge is the core RPC mechanism that lets Python tools call Electron-side
APIs. Every relectric.* call is serialized as a JSON
message, sent over WebSocket to the Electron shell, executed there, and the result
returned to Python.
# How a relectric.workspace.readFile call flows:
# 1. Python: relectric.workspace.readFile("src/main.py")
# 2. Serialized to: {"method": "workspace.readFile", "params": ["src/main.py"]}
# 3. Sent over WebSocket to Electron
# 4. Electron reads the file from the workspace
# 5. Result sent back over WebSocket
# 6. Python receives the file content
All relectric.* API calls are asynchronous under the
hood but exposed as sync calls in tool code for simplicity. The host bridge handles
request/response correlation using unique message IDs.
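The correlation mechanism can be sketched as follows. The message shape and class internals are illustrative, not the actual hostbridge wire format:

```python
import json
import threading
import uuid

class HostBridge:
    """Sketch of sync-over-async RPC with message-ID correlation."""

    def __init__(self, send):
        self._send = send        # callable that ships a JSON frame to the shell
        self._pending = {}       # message id -> (Event, result slot)
        self._lock = threading.Lock()

    def call(self, method, *params, timeout=30):
        msg_id = uuid.uuid4().hex
        done = threading.Event()
        slot = {}
        with self._lock:
            self._pending[msg_id] = (done, slot)
        self._send(json.dumps({"id": msg_id, "method": method, "params": list(params)}))
        if not done.wait(timeout):   # block the tool code until the shell replies
            raise TimeoutError(method)
        return slot["result"]

    def on_message(self, raw):
        """Called by the WebSocket reader with a response frame."""
        msg = json.loads(raw)
        with self._lock:
            done, slot = self._pending.pop(msg["id"])
        slot["result"] = msg["result"]
        done.set()
```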
Hot Reload
The runtime uses watchdog to monitor the
.tools/ directory. When a file is saved:
- The module is reimported with a fresh import
- The old tool instance is torn down
- The new tool is instantiated and build_ui() is called
- The updated component tree is sent to the Electron shell
- The agent function registry is updated with any new/changed functions
This makes development rapid — save your file and see changes instantly in the dock.
RelectricTool Base Class
Every tool extends RelectricTool. The class provides
tool identity, UI building, and agent function registration.
from relectric_sdk import RelectricTool, ui, relectric
class MyTool(RelectricTool):
    # Required: unique tool identifier (snake_case)
    id = "my_tool"

    # Required: display name shown in the dock
    title = "My Tool"

    def build_ui(self):
        """Called once during tool init. Build your component tree here."""
        pass

    @RelectricTool.function(
        description="What this function does (shown to the agent)",
        availability="always"  # or "active" (only when tool is focused)
    )
    def my_agent_function(self, param: str, count: int = 1):
        """Agent can call this. Type hints become the function schema."""
        return "Result string shown to agent"
Class Attributes
| Attribute | Type | Description |
|---|---|---|
| id | str | Unique tool identifier (snake_case, must match filename) |
| title | str | Human-readable name displayed in the tool dock |
@RelectricTool.function() Decorator
Exposes a method to the chat agent. The agent sees the function name, description, and parameter types (inferred from type hints).
| Parameter | Default | Description |
|---|---|---|
| description | required | Natural language description shown to the agent |
| availability | "always" | "always" = always available; "active" = only when the tool is focused in the dock |
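A metadata-recording decorator of this shape can be sketched as follows. The agent_function name and the _agent_meta attribute are illustrative, not the SDK's actual internals:

```python
import functools

def agent_function(description, availability="always"):
    """Sketch of a decorator like @RelectricTool.function()."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            return fn(*args, **kwargs)
        # Attach metadata the registry can later collect via introspection.
        inner._agent_meta = {
            "name": fn.__name__,
            "description": description,
            "availability": availability,
        }
        return inner
    return wrap

@agent_function(description="Greet someone by name")
def say_hello(name: str):
    return f"Greeted {name} successfully!"
```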
UI Components
Relectric provides a Gradio-like ui namespace. Components
are declared in Python but rendered as Preact components in the Electron shell.
Input Components
- ui.Textbox(label, value, lines)
- ui.Number(label, value, min, max, step)
- ui.Slider(label, min, max, step, value)
- ui.Checkbox(label, value)
- ui.Dropdown(label, choices, value)
- ui.File(label, file_types)
Display Components
- ui.Markdown(value)
- ui.HTML(value)
- ui.Image(value)
- ui.Audio(value)
- ui.JSON(value)
- ui.Gallery(value)
Action Components
ui.Button(label, variant)
Layout Components
- ui.Row() — horizontal container
- ui.Column() — vertical container
- ui.Group() — logical grouping
- ui.Tab(label) — tab panel
- ui.Tabs() — tab container
Event Binding
UI components support event handlers that connect user interactions to Python callbacks.
def build_ui(self):
    self.prompt = ui.Textbox(label="Prompt")
    self.output = ui.Image()
    self.generate_btn = ui.Button("Generate")
    self.clear_btn = ui.Button("Clear", variant="secondary")

    # .click(handler, inputs=[...], outputs=[...])
    # inputs: components whose values are passed as args
    # outputs: components whose values are updated with the return value
    self.generate_btn.click(
        self.generate_image,
        inputs=[self.prompt],
        outputs=[self.output]
    )

    # .change() fires when the component value changes
    self.prompt.change(self.on_prompt_change, inputs=[self.prompt])

def generate_image(self, prompt):
    # The return value is set on the output component
    return "/path/to/generated/image.png"
Event handlers run in the Python runtime. The return value is automatically routed
to the designated output components. Components support .click(),
.change(), and .submit()
events.
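The inputs/outputs routing can be sketched with a tiny component model. Everything here is illustrative, not the SDK's real event plumbing:

```python
class Component:
    """Minimal component with a value and event bindings (sketch)."""

    def __init__(self, value=None):
        self.value = value
        self._handlers = []      # (event, fn, inputs, outputs)

    def click(self, fn, inputs=(), outputs=()):
        self._handlers.append(("click", fn, list(inputs), list(outputs)))

    def fire(self, event):
        """Simulate a user interaction arriving from the shell."""
        for ev, fn, inputs, outputs in self._handlers:
            if ev != event:
                continue
            # Pass input component values as args; route the return value
            # to the first output (multiple outputs would expect a tuple).
            result = fn(*[c.value for c in inputs])
            if outputs:
                outputs[0].value = result
```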
Agent Functions
The dual-interface model: tools have both a visual UI and agent-callable functions. The agent can invoke functions, and functions can update the UI.
class GpuMonitor(RelectricTool):
    id = "smi_tool"
    title = "GPU Monitor"

    def build_ui(self):
        self.stats_display = ui.Markdown()
        self.refresh_btn = ui.Button("Refresh")
        self.refresh_btn.click(self.refresh_stats, outputs=[self.stats_display])

    @RelectricTool.function(
        description="Query GPU stats: VRAM usage, temp, utilisation",
        availability="always"
    )
    def get_gpu_stats(self):
        # Agent calls this — results go back to chat
        stats = relectric.terminal.run("nvidia-smi --query-gpu=memory.used,memory.total,temperature.gpu --format=csv")
        # Also update the UI for visual feedback
        self.stats_display.update(value=f"```\n{stats}\n```")
        return stats
Functions marked with availability="always" are available
to the agent regardless of which tool is focused. Use "active"
for functions that should only appear when the tool is in focus.
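The focus-based filtering can be sketched as a simple selection over registered function metadata. The registry shape below is illustrative:

```python
def visible_functions(registry, focused_tool_id):
    """Select which agent functions are callable given the focused tool (sketch)."""
    out = []
    for tool_id, fns in registry.items():
        for name, availability in fns:
            # "always" functions are global; "active" ones need focus.
            if availability == "always" or tool_id == focused_tool_id:
                out.append(f"{tool_id}.{name}")
    return out
```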
relectric.workspace
File system operations scoped to the current workspace directory.
| Method | Description |
|---|---|
| readFile(path) | Read file contents as a string |
| writeFile(path, content) | Write string content to a file |
| listDir(path) | List directory contents |
| createDir(path) | Create a directory recursively |
| deleteFile(path) | Delete a file |
| searchText(query, options) | Search workspace files by text content |
| getWorkspacePath() | Get the absolute path to the workspace root |
relectric.terminal
Execute shell commands in the workspace context.
| Method | Description |
|---|---|
| run(command) | Run a shell command and return stdout |
# Run any shell command
result = relectric.terminal.run("ls -la")
result = relectric.terminal.run("python -m pytest tests/")
result = relectric.terminal.run("nvidia-smi --query-gpu=memory.used --format=csv")
relectric.browser
Web search capabilities for tools.
| Method | Description |
|---|---|
| searchDuckDuckGo(query) | Search the web via DuckDuckGo and return results |
# Used by tools like the DJ to find lyrics
results = relectric.browser.searchDuckDuckGo("song title artist lyrics")
relectric.chatagent
Interface to the main Relectric chat agent. This is the core of agent orchestration — tools can delegate complex multi-step tasks to the agent.
| Method | Description |
|---|---|
| sendMessage(message, images?) | Send a message to the agent (optionally with images for VLM) |
| getConversation() | Get the current conversation history |
| clearConversation() | Clear the conversation and start fresh |
# Delegate a complex task to the agent
# The agent will use any available tools to fulfil the request
result = relectric.chatagent.sendMessage(
"Find the lyrics for 'Bohemian Rhapsody' and save them to lyrics.txt"
)
# Send with image for visual analysis (VLM)
result = relectric.chatagent.sendMessage(
"Does this screenshot show the player character on screen?",
images=["/path/to/screenshot.png"]
)
This creates powerful orchestration patterns: a tool function invoked by the agent can itself send messages back to the agent, creating recursive delegation chains.
relectric.llamacpp
Manage and interact with a local llama.cpp LLM instance. Useful for tasks that need a secondary LLM (e.g., lyrics rewriting) without consuming the main agent's context.
| Method | Description |
|---|---|
| start(modelPath, options?) | Start a llama.cpp server with a GGUF model |
| stop() | Stop the running llama.cpp server |
| chat(messages, options?) | Send a chat completion request to the local LLM |
# Start a local LLM for lyrics generation
relectric.llamacpp.start("/models/llama-3.2-3b.gguf")
response = relectric.llamacpp.chat([
    {"role": "system", "content": "You are a lyricist."},
    {"role": "user", "content": "Rewrite these lyrics as a parody about coding..."}
])
# Stop when done to free GPU memory
relectric.llamacpp.stop()
relectric.automation
A FIFO job queue with a blocking semaphore for serializing GPU-heavy tasks. Jobs run sequentially to prevent VRAM contention. The key idea: you pass a callable (typically a lambda) — the automation manager handles scheduling, blocking, repeats, and persistence.
| Method | Description |
|---|---|
| queueJob(fn, *, label, interval, is_blocking, max_repeats) | Queue a callable. fn is a lambda or function; interval (seconds) controls repeats; is_blocking acquires the GPU semaphore; max_repeats limits iterations (-1 = infinite) |
| cancelJob(jobId) | Cancel a pending or repeating job |
| removeJob(jobId) | Remove a job from the queue entirely |
| listJobs() | List all jobs (pending, running, completed, cancelled) |
| getJob(jobId) | Get the status and result of a specific job |
| clearFinished() | Remove all completed and cancelled jobs |
| onChange(callback) | Register a listener for job state changes |
# Queue a one-time blocking task (acquires GPU semaphore)
relectric.automation.queueJob(
    lambda: relectric.chatagent.sendMessage("Generate an image of a sunset"),
    label="sunset-image",
    is_blocking=True
)

# Queue a repeating task every 30 minutes
relectric.automation.queueJob(
    lambda: relectric.chatagent.sendMessage("Check AAPL stock price and summarize"),
    label="stock-check",
    interval=1800,
    is_blocking=True,
    max_repeats=-1  # repeat forever
)
The blocking semaphore ensures only one GPU-heavy job runs at a time, preventing
out-of-memory errors when multiple tools try to use the GPU simultaneously.
Jobs with interval set will automatically
re-queue after each execution.
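The queue-plus-semaphore idea can be sketched with Python's threading primitives. This is a minimal illustration, not the relectric_sdk.automation implementation, which also handles repeats, cancellation, and persistence:

```python
import queue
import threading

class AutomationQueue:
    """Sketch of a FIFO job queue where blocking jobs share one GPU semaphore."""

    def __init__(self):
        self._jobs = queue.Queue()              # FIFO: jobs run in queue order
        self._gpu = threading.Semaphore(1)      # at most one GPU-heavy job at a time
        threading.Thread(target=self._worker, daemon=True).start()

    def queue_job(self, fn, *, is_blocking=False):
        self._jobs.put((fn, is_blocking))

    def _worker(self):
        while True:
            fn, is_blocking = self._jobs.get()
            if is_blocking:
                with self._gpu:                 # acquire, run, release
                    fn()
            else:
                fn()
            self._jobs.task_done()
```

With a single worker the queue alone serializes jobs; the semaphore matters once several producers (multiple tools, repeat timers) can schedule GPU work concurrently.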
relectric.settings
Access and modify Relectric application settings. Settings are persisted in Electron's userData directory.
| Method | Description |
|---|---|
| get(key) | Get a setting value by key |
| set(key, value) | Set a setting value |
| getAll() | Get all settings |
Pattern: Agent Orchestration
The most powerful pattern in Relectric: tool functions that delegate complex tasks to the chat agent. This enables multi-step workflows where the agent chains multiple tools together.
class ParodyGenerator(RelectricTool):
    id = "parody_generator"
    title = "Parody Generator"

    @RelectricTool.function(description="Generate a full song parody")
    def generate_parody(self, song_title: str, artist: str, theme: str):
        # Step 1: Agent finds and downloads the song
        relectric.chatagent.sendMessage(
            f"Use ytdlp_tool to download '{song_title}' by {artist}"
        )
        # Step 2: Agent finds the lyrics online
        relectric.chatagent.sendMessage(
            f"Search for the lyrics of '{song_title}' by {artist} and save to lyrics.txt"
        )
        # Step 3: Agent rewrites lyrics using local LLM
        relectric.chatagent.sendMessage(
            f"Use lyrics_editor to transform lyrics.txt into a parody about {theme}"
        )
        # Step 4: Agent generates the cover
        relectric.chatagent.sendMessage(
            "Use acestep_cover to generate a cover using the parody lyrics"
        )
        return "Parody generation complete!"
A single user action triggers a cascade of agent calls, each using different tools. The agent brings context awareness — it knows where files were saved, which tools are available, and how to handle errors.
Pattern: VLM Feedback Loops
Use vision-language models (VLMs) to create automated visual assertion loops. Capture screenshots, send them to the agent, and get natural language evaluations.
@RelectricTool.function(description="Run visual test on game")
def visual_test(self, assertion: str):
    # Capture current game state
    screenshot = self.capture_screenshot()
    # Send to agent with VLM for visual analysis
    result = relectric.chatagent.sendMessage(
        f"Look at this screenshot. {assertion} Answer YES or NO.",
        images=[screenshot]
    )
    if "NO" in result.upper():
        # Agent can try to fix the issue
        relectric.chatagent.sendMessage(
            "The visual test failed. Fix the issue and re-run."
        )
    return result
This pattern is used by godot_playtest and
mobile_automation for automated visual testing:
prepare → act → capture → assert → fix → repeat.
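Stripped of the VLM specifics, this is a generic assert-and-repair cycle. A minimal sketch, with all callback names illustrative:

```python
def assert_fix_loop(act, capture, evaluate, fix, max_rounds=3):
    """Generic act -> capture -> assert -> fix -> repeat loop (sketch)."""
    for round_no in range(1, max_rounds + 1):
        act()                                  # drive the system under test
        snapshot = capture()                   # e.g. take a screenshot
        verdict = evaluate(snapshot)           # e.g. a VLM answering YES or NO
        if "YES" in verdict.upper():
            return {"passed": True, "rounds": round_no}
        fix(snapshot)                          # attempt an automated repair
    return {"passed": False, "rounds": max_rounds}
```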
Pattern: Automation Jobs
Schedule long-running or repeating tasks using the automation queue. Pass a lambda that does the work — the queue handles sequencing, GPU semaphore, and repeats.
class DJTool(RelectricTool):
    id = "dj_tool"
    title = "DJ"

    @RelectricTool.function(description="Start automated remix generation")
    def start_remixing(self, genre: str, interval_minutes: int = 30):
        # Queue a repeating job — pass a lambda with the work to do
        relectric.automation.queueJob(
            lambda: relectric.chatagent.sendMessage(
                f"Find a {genre} track, download it, rewrite the lyrics, and generate a cover."
            ),
            label=f"dj-{genre}",
            interval=interval_minutes * 60,
            is_blocking=True,
            max_repeats=-1
        )
        return f"Remix queue started: generating {genre} every {interval_minutes}min"
The automation manager persists job state, so queued tasks survive runtime restarts. The blocking semaphore prevents GPU VRAM contention when multiple heavy tasks are queued.