Open Source  ·  v2.0  ·  MIT License

Your AI agents,
always listening.

Leverler sits in your system tray and watches for triggers — keywords, emails, anything — then launches local AI agents automatically. No cloud. No API keys. No data leaves your machine.


What is it

A loitering AI
that works for you

Leverler is an Electron desktop app that runs silently in your system tray. Define triggers. Assign agents. Walk away.

Fully local

Runs entirely on your machine via Ollama. Your clipboard, your emails, your data — never touches a cloud server.

Event-driven

Watches clipboard for keywords and polls your inbox for matching emails. Agents fire automatically when triggers match.
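The clipboard half of this loop boils down to a keyword check run on an interval. A minimal sketch of how such matching could work; the function and trigger shape are illustrative, not Leverler's actual internals (in the Electron app, the text would come from `clipboard.readText()`):

```javascript
// Hypothetical sketch of clipboard keyword matching. In the Electron
// app this check would run on an interval over clipboard.readText().
function matchesTrigger(text, trigger) {
  // Case-insensitive substring match against any of the trigger's keywords
  const haystack = text.toLowerCase();
  return trigger.keywords.some((kw) => haystack.includes(kw.toLowerCase()));
}

const trigger = { name: "invoice-watch", keywords: ["invoice", "PO number"] };

console.log(matchesTrigger("Please find the attached Invoice #42", trigger)); // true
console.log(matchesTrigger("Lunch at noon?", trigger)); // false
```

Email triggers follow the same pattern, with the matcher applied to the subject and body of newly polled messages instead of clipboard text.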

Open source

MIT licensed. Fork it, extend it, add your own trigger types and agent templates. The whole stack is yours.

Installation

Up and running
in four steps

Leverler requires Node.js 18+ and Ollama. Both are free and install in minutes.

1
Install Ollama

Download Ollama — the local model runtime that powers Leverler's agents.

terminal
# Start Ollama (if not running)
ollama serve

2
Pull a model

We recommend qwen2.5:7b — fast, smart, and runs well on Apple Silicon and modern Windows machines.

terminal
# Pull the recommended model (~4 GB)
ollama pull qwen2.5:7b

# Or the smarter 14B variant
ollama pull qwen2.5:14b

3
Clone & install Leverler

Clone the repo and install dependencies. Node.js 18 or higher is required.

terminal
git clone https://github.com/leveleragentic/leverler
cd leverler
npm install

4
Start the app

Leverler opens your dashboard and drops into your system tray. Hit Start Listening and define your first trigger.

terminal
npm start

# Opens dashboard + drops to system tray
# Go to Settings → Test Connection
# Then add your first trigger
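A trigger pairs a watch condition with the agent it should launch. As an illustrative sketch only; these field names are assumptions, not Leverler's actual schema, so check the dashboard for the real format:

```javascript
// Illustrative trigger definition. Field names here are assumptions,
// not Leverler's actual schema.
const trigger = {
  type: "clipboard",             // or "email"
  keywords: ["translate this"],  // fire when clipboard text matches
  agent: {
    model: "qwen2.5:7b",         // any model Ollama serves
    prompt: "Translate the following text to English:\n{{content}}",
  },
};

console.log(trigger.agent.model); // "qwen2.5:7b"
```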

How it works

Trigger → agent → result

Leverler is built around a simple loop. You define the conditions; it handles the rest.

01 ——

Loiter

Runs in the background watching your clipboard and inbox continuously. Near-zero CPU at idle.

02 ——

Detect

A trigger fires when clipboard text matches a keyword, or a new email hits your inbox with matching content.

03 ——

Launch

An agent spins up with the trigger context. Qwen2.5 processes the task locally, streaming output in real time.
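Under the hood, streaming from a local model goes through Ollama's HTTP API: `POST /api/generate` with `stream: true` returns newline-delimited JSON, one token per line. A sketch of how an agent might consume that stream; the function names are illustrative, not Leverler's internals:

```javascript
// Parse one chunk of Ollama's NDJSON stream: each non-empty line is a
// JSON object like {"response":"Hi","done":false}.
function parseStreamChunk(chunk) {
  return chunk
    .split("\n")
    .filter((line) => line.trim().length > 0)
    .map((line) => JSON.parse(line).response)
    .join("");
}

// Illustrative agent runner (requires a local Ollama server; not called here).
async function runAgent(prompt, model = "qwen2.5:7b") {
  const res = await fetch("http://localhost:11434/api/generate", {
    method: "POST",
    body: JSON.stringify({ model, prompt, stream: true }),
  });
  let output = "";
  for await (const chunk of res.body) {
    output += parseStreamChunk(Buffer.from(chunk).toString("utf8"));
  }
  return output;
}

// Pure parsing demo (no server needed):
console.log(parseStreamChunk('{"response":"Hel","done":false}\n{"response":"lo","done":true}'));
// → "Hello"
```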

04 ——

Deliver

Results appear in the dashboard. Review output, inspect logs, or chain the result into another trigger.

Supported models

Any model Ollama runs,
Leverler runs

Configure the model in Settings. Switch any time without restarting.

Model                      RAM     Speed       Best for
qwen2.5:7b  (recommended)  ~5 GB   Fast        Daily use, email, summarization
qwen2.5:14b                ~9 GB   Medium      Complex reasoning, research agents
qwen2.5:3b                 ~2 GB   Very fast   Low-resource machines
llama3.2:3b  (alt)         ~2 GB   Fast        Lightweight alternative
mistral:7b  (alt)          ~4 GB   Fast        Strong instruction following

Built in the open.
Free forever.

Leverler is MIT licensed. Add trigger types, build custom agent templates, integrate with any local model. Pull requests welcome.