Bring AI to every device on Earth.
A complete autonomous AI agent runtime in 528 kilobytes of pure C.
One static binary. Zero dependencies. Boots in under 2 milliseconds.
To run an AI assistant today, you need...
h-uman needs one thing: a CPU.
From data centers to five-dollar boards.
The same static binary. The same config. Every device.
Cloud
Docker, WASM, Cloudflare Workers. Full gateway with webhooks, tunnels, and multi-channel routing.
Workstation
macOS, Linux, Windows. Interactive TUI, local Ollama/llama.cpp. No cloud dependency.
Edge
Raspberry Pi, ARM SBCs. Under 5 MB RAM. Under 2 ms startup. Runs headless on boot.
Embedded
Arduino, STM32, Nucleo. Serial JSON protocol. Hardware peripherals via vtable interface.
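The embedded tier above speaks a serial JSON protocol. The actual wire format isn't documented here; purely as an illustration (message shape and field names hypothetical), a newline-delimited request/response pair over UART might look like:

```json
{"type":"tool_call","tool":"gpio_write","args":{"pin":13,"value":1}}
{"type":"result","ok":true}
```

Newline-delimited JSON keeps the parser tiny on microcontroller targets: one line in, one line out, no framing layer.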
Swap anything. Lock in nothing.
Every subsystem is a vtable interface. Change any layer with a config edit.
50+ providers. 34 channels. 67+ tools.
Connect to any AI model. Reach users on any platform.
Three commands. Zero dependencies.
Clone, build, run. No containers. No interpreters. No package managers.
A command center, not a status page.
Real-time stats. Live activity feed. Sparkline trends. All in 528 KB.
Try the live demo — no installation required.
Bring AI to every device on Earth.
Open source. MIT licensed. ~528 KB. Runs anywhere.
How h-uman Compares
Less code. Less memory. More integrations. By orders of magnitude.
For size, memory, and dependencies, smaller bars are better. For providers and channels, larger bars are better.