Little Helpers Platform

The AI helper that runs on schedule, learns your style, and asks before acting.

Not another chatbot. A personal AI that handles recurring tasks, delivers briefings when you want them, and gets smarter from your feedback. Runs in its own Firecracker microVM with a persistent 5GB volume, 7-layer security hardening, and 28 ready-to-use templates.

Get started

What makes this different

  • 200+ service integrations via OpenAPI specs — Stripe, Slack, HubSpot, Notion, and more
  • Credentials encrypted on your volume — we never see your tokens or API keys
  • Scheduled tasks run automatically — heartbeat checks every 5 minutes
  • Persistent 5GB volume: plugins, skills, knowledge, and memories survive reboots
  • Daily volume snapshots with 5-day retention and one-click restore
  • Seven-layer security: VM isolation, domain-locked credentials, marketplace lockdown, spending limits
  • 28 vertical templates with pre-loaded knowledge and conversation seeds
  • Multi-step pipelines with approval gates and circuit breakers

How it works

Conversation replaces configuration

No forms, no setup wizards, no visual workflow builders. Just talk to your helper about what you need. It interviews you naturally, matches your needs to one of 28 vertical templates, and sets everything up for you. Approve the plan, and it starts working. If nothing fits exactly, your helper builds a custom configuration from scratch — combining pieces from multiple templates, adding the integrations you need, and setting up schedules specific to your workflow. Custom setups that work well become new templates for future users.

Scheduled automation without technical setup

Your helper runs tasks on schedule — morning briefings at 7:30am, competitor checks every 3 hours, invoice alerts when due. You set the schedule once through conversation. The heartbeat scheduler runs 11 checks every 5 minutes: cron pipelines, reactive triggers, pending approvals, cost limits, memory maintenance, pipeline failures, circuit breaker status, quota enforcement, learning queue, memory consolidation, and cleanup tasks.
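
The heartbeat loop described above could be sketched like this. The check names mirror the eleven listed; the stub implementations, function names, and interval handling are illustrative assumptions, not the platform's actual code.

```python
# Hypothetical heartbeat tick: run every registered check in order.
# A failure in one check is recorded but never blocks the others.
CHECKS = {
    "cron_pipelines": lambda: "ok",
    "reactive_triggers": lambda: "ok",
    "pending_approvals": lambda: "ok",
    "cost_limits": lambda: "ok",
    "memory_maintenance": lambda: "ok",
    "pipeline_failures": lambda: "ok",
    "circuit_breakers": lambda: "ok",
    "quota_enforcement": lambda: "ok",
    "learning_queue": lambda: "ok",
    "memory_consolidation": lambda: "ok",
    "cleanup": lambda: "ok",
}

def heartbeat_tick(checks=CHECKS):
    results = {}
    for name, check in checks.items():
        try:
            results[name] = check()
        except Exception as exc:  # one failing check must not stop the rest
            results[name] = f"error: {exc}"
    return results
```

In a real scheduler this function would run on a 5-minute timer; the isolation of each check is the point of the sketch.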

Learns from your feedback

Every time you approve or reject something, your helper learns. It tracks patterns in what you like, adjusts tone and style, remembers your preferences across sessions. The learning system stores feedback patterns, voice preferences, successful strategies, and user corrections. Memories are consolidated nightly and survive hibernation and reboots. Over time, it anticipates what you need before you ask.

Approval workflows with graduated control

Nothing posts, publishes, or sends without your approval. Your helper drafts content, presents it in chat with action buttons — approve, reject, edit, snooze, or skip — and waits for your decision. Research and analysis tasks auto-approve. Anything that touches the outside world requires explicit permission. Per-step-type quotas (CAP gates) limit how many tweets, emails, or crawls can happen per day. Deploy actions always require human approval, no exceptions. Circuit breakers disable a pipeline after 3 consecutive failures. You stay in control without micromanaging every step.
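
A per-step-type quota gate of the kind described could look like the sketch below. The class name, cap values, and step-type strings are assumptions for illustration; only the behavior (daily caps, deploys never auto-approved) comes from the text.

```python
from collections import defaultdict

# Hypothetical CAP gate: counts actions per step type and refuses
# anything over the daily cap. Deploys always require a human.
DAILY_CAPS = {"tweet": 5, "email": 20, "crawl": 50}

class CapGate:
    def __init__(self, caps=DAILY_CAPS):
        self.caps = caps
        self.used = defaultdict(int)

    def allow(self, step_type):
        if step_type == "deploy":   # never auto-approved, no exceptions
            return False
        cap = self.caps.get(step_type)
        if cap is None:
            return True             # unmetered step types pass through
        if self.used[step_type] >= cap:
            return False
        self.used[step_type] += 1
        return True
```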

Multi-step pipelines

Complex work broken into steps: Research → Draft → Review → Revise → Present. Each step uses the right AI model for the job (cheap models for research, expensive for final drafts). Dependencies handled automatically with cycle detection. Circuit breakers disable failing pipelines after 3 consecutive errors. Reactive triggers fire pipelines when conditions match (inbox keywords, calendar events, file changes). Max 20 steps per pipeline, 10 pipelines per template.
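
Dependency handling with cycle detection is a standard topological-sort problem; a minimal sketch using Kahn's algorithm is below. The step names are illustrative, and the platform's actual scheduler may work differently.

```python
from collections import deque

def order_steps(deps):
    """deps maps step -> list of steps it depends on (each a key in deps).
    Returns an execution order, or raises ValueError on a cycle."""
    indegree = {s: 0 for s in deps}
    dependents = {s: [] for s in deps}
    for step, reqs in deps.items():
        for r in reqs:
            indegree[step] += 1
            dependents[r].append(step)
    ready = deque(s for s, d in indegree.items() if d == 0)
    order = []
    while ready:
        s = ready.popleft()
        order.append(s)
        for t in dependents[s]:
            indegree[t] -= 1
            if indegree[t] == 0:
                ready.append(t)
    if len(order) != len(deps):  # leftover steps means a dependency cycle
        raise ValueError("cycle detected in pipeline dependencies")
    return order
```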

Shareable reports and pages

Your helper generates polished HTML pages — executive briefs, project proposals, competitor analyses, campaign reports — and gives you a shareable link. Each page is rendered from branded templates with your data, stored on your volume, and served from your helper's public endpoint with an unguessable 128-bit URL. Share the link with clients or colleagues, no login required. Pages persist until you delete them.
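
An unguessable 128-bit page URL can be generated with the OS CSPRNG, as in this sketch. The base URL and path are made up for illustration.

```python
import secrets

def new_page_url(base="https://helper.example.com/pages"):
    # 16 random bytes = 128 bits of entropy, base64url-encoded (22 chars)
    page_id = secrets.token_urlsafe(16)
    return f"{base}/{page_id}"
```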

Built-in web research

Your helper fetches web content using Cloudflare's Markdown for Agents standard — sites behind Cloudflare serve pre-converted markdown with ~80% fewer tokens than raw HTML, making research faster and cheaper. For non-Cloudflare sites, built-in extractors pull articles, listing data, and competitor pricing from HTML. No headless browser, no Chromium, no 500MB memory overhead. Research results feed into pipelines, knowledge base entries, and shareable reports.

Source verification

When your helper makes claims or references data, it checks against source material and flags anything it can't verify. Grounding detection runs on generated content — if a fact doesn't trace back to a source document, conversation, or web result, it's marked as unverified. You see what's confirmed and what's the helper's best guess, so you can make informed decisions.

Cost-aware by design

Model routing based on task importance. Bootstrap tasks use Sonnet, conversations use mid-tier models via OpenRouter, and cheap tasks use Haiku. Token usage is tracked per call against separate daily and monthly budgets. Fail-closed enforcement: if we can't verify your budget, the call doesn't happen. Hit your budget and we throttle to Haiku-only mode instead of charging more. None of the $300-750/month surprise bills self-hosted users report.
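
The fail-closed routing rule could be sketched like this. The limit, tier names, and function signature are illustrative assumptions; the behavior (unverifiable budget refuses the call, over-budget throttles to the cheapest model) is what the text describes.

```python
DAILY_LIMIT = 5.00  # USD, illustrative

def route_model(task_tier, spent_today):
    if spent_today is None:
        # fail closed: no verified budget means no call at all
        raise RuntimeError("budget unverifiable: failing closed")
    if spent_today >= DAILY_LIMIT:
        return "haiku"  # throttle to the cheapest model, never overcharge
    tiers = {"bootstrap": "sonnet", "chat": "mid-tier", "cheap": "haiku"}
    return tiers.get(task_tier, "haiku")
```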

Reactive triggers and event-driven automation

Beyond scheduled tasks, your helper responds to events in real-time. Inbox monitoring triggers pipelines when keywords match. Calendar integration fires briefings before meetings. Your helper's public endpoint receives webhook events from Stripe, GitHub, and other services directly — Fly.io auto-wakes hibernating helpers on incoming requests, so events are never lost. Persistent URLs mean external services always have somewhere to send events.

Vector knowledge base with semantic search

Your helper builds a searchable knowledge base from every conversation, approved output, and stored document. Vector embeddings enable semantic search across everything it knows. Reference past work, find similar examples, recall context from months ago. Knowledge persists on your volume, isolated per user, never shared across helpers.

Your environment

Each helper runs in its own Firecracker microVM on Fly.io with a persistent 5GB volume. The platform code is ephemeral. Your data is permanent.

What persists (volume)

Your 5GB volume at /app/data/ holds everything that matters: conversations and memory (SQLite), installed plugins and extensions, AI-created skills, your knowledge base (documents, embeddings, research), workspace files (drafts, reports, code), encrypted API credentials, generated reports and pages, OAuth tokens for connected services, service connection configs, compiled API tool definitions, and custom imported OpenAPI specs. This data survives reboots, hibernation, and platform updates.

What regenerates (rootfs)

Platform code, agent configs, templates, and security baselines are baked into the Docker image and regenerated each boot. OpenClaw config, auth profiles, and model routing are generated from environment variables. Credentials land in /run/secrets/ with root-only permissions, then env vars are unset. Your data is never touched during a platform update.

Daily volume snapshots

Fly.io takes automatic daily snapshots of your volume — your conversations, memories, plugins, skills, knowledge base, workspace, encrypted credentials, and OAuth tokens. Retained for 5 days. If something goes wrong (bad plugin, corrupted data), restore from yesterday's snapshot. Snapshots capture your volume only, not the platform image (that's rebuilt from our registry).

Platform updates are transparent

When we push a security patch or new feature, fly machine update swaps the image while your volume stays attached. Entrypoint recreates symlinks on boot. Your plugins, skills, knowledge, and conversation history are untouched. Like an OS update that doesn't wipe your home directory.

200+ integrations, credentials stay on your machine

Connect Stripe, Slack, HubSpot, Notion, Google Workspace, and hundreds more. Your helper uses real API specs to call services reliably — no hallucinated endpoints, no brittle raw HTTP. Your tokens and credentials stay on your volume. We never see them.

Spec-driven API calls, not guesswork

Your helper doesn't guess at API endpoints. It uses real OpenAPI specs — the same documentation developers use — compiled into structured tools. When you say "check my Stripe payments," your helper calls stripe_list_charges(limit: 10), not a hallucinated HTTP request. 200+ service specs ship with the platform, covering payments, messaging, CRM, project management, email, and more. The spec compiler generates typed tool definitions at connection time. The AI selects the right tool and fills in parameters. The execution layer handles the rest: authentication, retries, pagination, rate limits.
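
A minimal sketch of what "compiling a spec into a tool" might mean: take one OpenAPI operation and emit a structured tool definition. The spec fragment and output shape here are made-up examples, not the platform's real compiler.

```python
def compile_tool(service, path, method, op):
    """Turn one OpenAPI operation object into a tool definition."""
    params = {
        p["name"]: {"type": p.get("schema", {}).get("type", "string")}
        for p in op.get("parameters", [])
    }
    return {
        "name": f"{service}_{op['operationId']}",
        "description": op.get("summary", ""),
        "method": method.upper(),
        "path": path,
        "parameters": params,
    }

# Illustrative fragment of a Stripe-like spec operation
spec_op = {
    "operationId": "list_charges",
    "summary": "List charges",
    "parameters": [{"name": "limit", "in": "query", "schema": {"type": "integer"}}],
}
tool = compile_tool("stripe", "/v1/charges", "get", spec_op)
```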

Connect any service with OAuth or API key

For OAuth services (Google, Microsoft, GitHub, Slack), your helper sends you a one-click authorization link. You approve, and tokens go directly to your helper's volume — our servers never see them. For API key services (Stripe, Notion, Airtable, SendGrid, and hundreds more), your helper generates a one-time secure link. Click it, paste your key, and it's encrypted with AES-256-GCM on your volume. The credential never passes through our servers or the AI model. At runtime, the execution layer decrypts credentials only at the moment of the API call, injects them into the correct header, and discards the plaintext.

Domain-locked security

Every service connection is locked to explicit domains. Your Stripe key can only go to api.stripe.com — even if the AI is confused or manipulated, the credential cannot leak to any other domain. No wildcards, no subdomains, no exceptions. The execution layer validates every request URL, rejects redirects entirely (preventing open-redirect attacks), blocks private IP ranges (SSRF protection), and enforces HTTPS-only. Write operations (POST, PUT, DELETE) require your explicit approval in chat before executing. Every API call is audit-logged: service, method, endpoint, status code. Never credentials, never response bodies.
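
The pre-flight checks described (HTTPS-only, exact host match, no private IPs) can be sketched in pure stdlib Python. This is a simplified model under stated assumptions; real SSRF protection would also validate resolved DNS addresses.

```python
import ipaddress
from urllib.parse import urlsplit

def request_allowed(url, allowed_host):
    parts = urlsplit(url)
    if parts.scheme != "https":        # HTTPS-only, no exceptions
        return False
    host = parts.hostname or ""
    if host != allowed_host:           # exact match: no wildcards, no subdomains
        return False
    try:
        # if the locked host is itself an IP literal, block private ranges
        ip = ipaddress.ip_address(host)
        if ip.is_private or ip.is_loopback:
            return False
    except ValueError:
        pass                           # a hostname, not an IP literal
    return True
```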

Import custom services

Service not in the catalog? Provide an OpenAPI spec URL and your helper imports it automatically — tools generated, domain binding configured, ready to use. For services without a published spec, your helper can create one from API documentation. Custom service specs are stored on your volume alongside the built-in catalog. Same security model: domain-locked credentials, approval for writes, full audit trail.

Direct webhook delivery

Your helper has its own public HTTPS endpoint with a whitelist gateway — only explicitly allowed paths are reachable, everything else returns 404. Webhook events from Stripe, GitHub, and other services are delivered directly to your helper. When your helper is hibernating, Fly.io automatically wakes it on incoming requests. Raw headers and body are preserved for signature verification (Stripe HMAC, GitHub HMAC). No middleman buffering your webhook data.
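
Preserving the raw body matters because signatures are computed over exact bytes. Here is a sketch of GitHub-style verification (HMAC-SHA256 in an `X-Hub-Signature-256` header); Stripe's scheme differs in header format but uses the same constant-time comparison idea.

```python
import hashlib
import hmac

def verify_github_signature(secret, raw_body, header):
    """Compare the signature header against HMAC-SHA256 of the raw bytes."""
    expected = "sha256=" + hmac.new(secret, raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, header)
```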

Web research without a browser

Most web tasks — reading articles, monitoring competitors, checking listings — work with lightweight HTTP requests. Your helper uses Cloudflare's Markdown for Agents standard for ~80% fewer tokens than raw HTML. For JavaScript-heavy sites, connect a third-party browser service like Browserless or CamoFox using the secure credential store. No Chromium in your helper's VM, no shared browser pool to manage.

Install plugins, create skills

Install OpenClaw plugins by package name (openclaw plugins install @openclaw/gmail). Plugins persist on your volume across reboots and are captured in daily snapshots. The ClawHub marketplace is disabled (22% malware rate), but direct package installation works. Your AI can also create local skills — just directories with a SKILL.md that get injected into the agent's system prompt. Say "I want you to check my Gmail every morning" and the agent writes the skill and schedule.
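
Skill injection as described could be as simple as the sketch below: gather every SKILL.md under a skills directory and append the text to the system prompt. The directory layout and heading format are assumptions for illustration.

```python
from pathlib import Path

def load_skills(skills_dir):
    """Concatenate each skill directory's SKILL.md into one prompt block."""
    blocks = []
    for skill_file in sorted(Path(skills_dir).glob("*/SKILL.md")):
        blocks.append(f"## Skill: {skill_file.parent.name}\n{skill_file.read_text()}")
    return "\n\n".join(blocks)
```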

Security model

Seven layers, not one wall. Self-hosted OpenClaw has serious problems (CVE-2026-25253, 22% plugin malware, plaintext credentials, $300-750/month surprise bills). We harden each layer independently so a failure in one doesn't cascade.

Layer 1: VM isolation between users

Each helper runs in its own Firecracker hypervisor-isolated microVM on Fly.io. Not a Docker container — hardware-enforced isolation. A compromised helper can't read another user's memory, files, or network traffic. CPU and RAM limits enforced by the hypervisor. If a process runs away, the OOM killer terminates the VM, not your neighbor's.

Layer 2: Whitelist gateway

Your helper has a public HTTPS endpoint, but uses a default-deny whitelist gateway. Only four path patterns are allowed through: shareable pages (AI-generated briefs, proposals, and reports with unguessable 128-bit IDs — no login required to view, but impossible to guess), the credential storage form (one-time password authenticated, single use, 15-minute expiry), OAuth callbacks (Auth Code + PKCE, state validated, routed to originating machine), and webhook endpoints. Everything else — the internal API, message handlers, health checks, action endpoints — returns an identical 404 to the public internet. No information leakage: the response is the same whether a path exists internally or not. Internal traffic between your helper and our control plane runs on Fly.io's 6PN private network encrypted with WireGuard.
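
A default-deny gateway of this shape is easy to sketch: match the path against an allowlist of patterns and return an identical 404 for everything else. The four patterns below are illustrative stand-ins, not the platform's actual routes.

```python
import re

ALLOWED = [
    re.compile(r"^/pages/[A-Za-z0-9_-]{22,}$"),    # shareable pages (128-bit IDs)
    re.compile(r"^/credentials/[A-Za-z0-9_-]+$"),  # one-time credential form
    re.compile(r"^/oauth/callback$"),              # OAuth redirects
    re.compile(r"^/webhooks/[a-z0-9_-]+$"),        # webhook endpoints
]

def gate(path):
    # identical 404 whether the path exists internally or not
    return 200 if any(p.match(path) for p in ALLOWED) else 404
```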

Layer 3: Credential isolation & encryption

Two layers of credential storage, both encrypted at rest with AES-256-GCM. Platform keys (LLM providers, chat platforms) are injected at boot into /run/secrets/ (root-only, mode 400) and stripped from the environment before any user-facing process starts. Your service credentials (Stripe, Notion, Slack, and 200+ more) are stored through a secure web form that goes directly from your browser to your helper — our servers never see the value. Each credential is domain-locked: your Stripe key can only be sent to api.stripe.com, never to any other domain, even if the AI is confused or manipulated. Credentials are encrypted with a per-helper key, stored in SQLite on your volume, and only decrypted in memory at the moment an API call needs them. The execution layer injects credentials into headers only (never URLs or request bodies), disables redirect following, and rejects private IP ranges. The credential form uses one-time passwords (hashed in the database, 15-minute expiry, single use) and is rate-limited to prevent brute force. Fourteen regex patterns scrub secrets from every LLM call, catching API keys, JWT tokens, private keys, and connection strings. Even if you paste a secret in chat, it's redacted before reaching the model.
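
Regex-based scrubbing works by substitution before the text ever leaves the VM. The three patterns below are illustrative examples (the platform says it uses fourteen); they are not the platform's actual pattern set.

```python
import re

SECRET_PATTERNS = [
    re.compile(r"sk_(live|test)_[A-Za-z0-9]{16,}"),  # Stripe-style API keys
    re.compile(r"eyJ[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+\.[A-Za-z0-9_-]+"),  # JWTs
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----[\s\S]*?-----END [A-Z ]*PRIVATE KEY-----"),
]

def scrub(text):
    """Replace any matching secret with a redaction marker."""
    for pattern in SECRET_PATTERNS:
        text = pattern.sub("[REDACTED]", text)
    return text
```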

Layer 4: Tool restrictions & marketplace lockdown

ClawHub marketplace disabled (22% of plugins flagged as malware in the ClawHavoc campaign). Shell execution denied — your helper can't run arbitrary bash. ~50 vetted tool names enforced server-side. You can install plugins manually by package name, and your AI can create local skills. On every boot, installed plugins are checked against a blocklist of known-malicious packages and auto-disabled if flagged. This is informed consent: we block the biggest malware vectors, but if you install a package yourself, VM isolation limits the blast radius to your own helper, with snapshot recovery as the last resort.

Layer 5: Integrity monitoring

Security watchdog runs continuous checks: SHA-256 baseline verification of all platform files (company-system, agent configs, templates, rules), log scanning for suspicious patterns, process monitoring. /proc hardened with hidepid=2 to prevent cross-process snooping. Baselines cover platform files only — your data at /app/data/ is excluded by design. User-installed plugins and skills are your territory, not ours to audit.
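
SHA-256 baseline verification follows a simple pattern: record a hash per platform file, then re-hash on each check and flag anything changed or missing. A minimal sketch (file paths illustrative):

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def verify_baseline(baseline):
    """baseline maps file path -> expected SHA-256. Returns violations."""
    violations = []
    for name, expected in baseline.items():
        p = Path(name)
        if not p.exists() or sha256_of(p) != expected:
            violations.append(name)
    return violations
```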

Layer 6: Spending controls

Token budgets wired into the LLM router with daily and monthly limits. Fail-closed: if we can't verify your budget, the call doesn't happen. Hit your limit and we throttle to Haiku-only — no surprise charges, no overage billing. Model routing picks cheap models for routine work and expensive models only when quality matters.

Layer 7: Audit trail

Event types logged to the audit trail: helper lifecycle (boot/shutdown/hibernate), template activation, config changes, approval decisions, spending events, security incidents (integrity failures, blocked plugins), pipeline execution, credential storage events, API calls (service, tool, method, endpoint, status — never credentials or response bodies), and learning updates. Immutable append-only log. OAuth tokens and encrypted credentials stored on your volume only — Auth Code + PKCE flow and direct credential storage mean we never see your access tokens, refresh tokens, or API keys.

28 vertical templates ready to use

Each template comes pre-configured with system knowledge, example workflows, conversation seeds, scheduled tasks, and reactive triggers. Pick one during onboarding, or start from scratch.

Content & Marketing (11)

  • Content Writer
  • Content Repurposer
  • Technical Writing
  • Social Media Content
  • Social Media Manager
  • Email Marketing
  • Email Sequence Writer
  • SEO
  • SEO Content Pipeline
  • Competitor Monitor
  • Social Listening

Sales & Client Management (2)

  • Sales Pipeline
  • Client Manager

Operations & Support (6)

  • Operations
  • Project Coordination
  • Executive Assistant
  • Daily Digest
  • Customer Support
  • Customer Onboarding

Industry-Specific (9)

  • Real Estate
  • Listing Monitor
  • Bookkeeping & Finance
  • Legal & Compliance
  • Recruiting & HR
  • Ecommerce
  • Data Analyst
  • Solopreneur & Freelancer
  • Research

Start with a conversation

Free 3-day trial. No credit card required. Cancel anytime.

Get started