AI Agent Mac Mini: How I Run an Always-On AI Infrastructure for €3/Month

TL;DR: I run 20+ autonomous AI agent tasks on a Mac Mini M4 using OpenClaw. It costs me about €3/month in electricity. Here's the full setup, what it does, and why a Mac Mini beats a cloud server for this.
What Are "Claws" — And Why Should You Care?
On February 21, 2026, Andrej Karpathy coined the term "Claws" for a new category of AI systems. Simon Willison picked it up, and it hit 178 points and 269 comments on Hacker News.
Claws are personal AI agent systems that run on your own hardware, communicate via messaging protocols, and both respond to direct instructions and schedule tasks autonomously.
Think of it as the next layer in the AI stack: LLMs → LLM Agents → Claws.
The ecosystem is growing fast. OpenClaw leads as a full-featured open-source option, with alternatives like NanoClaw (~4,000 lines of code, fully auditable) and zeroclaw for minimalists.
I've been running OpenClaw on a Mac Mini M4 for about 10 days now. Here's what that actually looks like in practice.
Why a Mac Mini for AI Agent Infrastructure?
AI agent infrastructure is the hardware and software stack that keeps autonomous AI agents running 24/7 — handling scheduling, tool access, messaging, and context persistence.
Let me be direct: you don't need a GPU server for this. The Mac Mini doesn't run LLMs locally. It orchestrates — it sends requests to cloud APIs (Anthropic's Claude in my case) and handles everything else: cron scheduling, tool calls, file access, messaging.
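To make "orchestrates, not infers" concrete: a single task dispatch is just an HTTPS call to a cloud API. The sketch below follows Anthropic's Messages API for the endpoint and headers, but the task shape and model ID are my own illustration, not OpenClaw internals.

```typescript
// Sketch of the orchestration layer: the Mac Mini never runs the model.
// It builds a request and POSTs it to a cloud API. Endpoint and headers
// follow Anthropic's Messages API; the task shape is invented for this sketch.

interface AgentTask {
  name: string;   // e.g. "morning-brief"
  model: string;  // e.g. "claude-sonnet-4" (assumed ID; check current model names)
  prompt: string;
}

function buildRequest(task: AgentTask) {
  return {
    model: task.model,
    max_tokens: 1024,
    messages: [{ role: "user", content: task.prompt }],
  };
}

// Node 18+ ships a global fetch, so the send is a plain HTTPS POST:
async function runTask(task: AgentTask, apiKey: string): Promise<unknown> {
  const res = await fetch("https://api.anthropic.com/v1/messages", {
    method: "POST",
    headers: {
      "x-api-key": apiKey,
      "anthropic-version": "2023-06-01",
      "content-type": "application/json",
    },
    body: JSON.stringify(buildRequest(task)),
  });
  return res.json();
}
```

Everything else in the stack (scheduling, retries, writing results to disk, pinging me on Telegram) wraps around that one call.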
Here's why the Mac Mini M4 is close to perfect for this:
- Low power: 5–15W at idle. That's roughly €2–4/month in electricity.
- Always-on: No fans at idle, silent, designed to run 24/7.
- ARM efficiency: Apple Silicon is built for sustained low-power workloads.
- Affordable: ~€700 for the M4 with 16GB. One-time cost, no monthly bills.
- Local control: Your data stays on your hardware. No cloud provider reading your agent's context.
Compare that to a cloud VM with similar reliability: you'd pay €20–50/month minimum. At mid-range VM pricing, the Mac Mini pays for itself in under two years — and you own the hardware.
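That break-even claim is simple arithmetic, and worth sanity-checking with the mid-range numbers (€35/month VM, €3/month electricity, both assumptions from the ranges above):

```typescript
// Break-even check: how many months until cumulative VM fees exceed
// the Mini's hardware price plus its electricity? All figures are
// mid-range assumptions from the text, not measurements.
const HARDWARE_EUR = 700;         // Mac Mini M4, one-time
const ELECTRICITY_EUR_MONTH = 3;  // ~10 W continuous at ~€0.30/kWh, rounded up
const CLOUD_VM_EUR_MONTH = 35;    // mid-range of the €20–50 estimate

function monthsToBreakEven(hardwareEur: number, elecEurMonth: number, vmEurMonth: number): number {
  // Each month the VM costs (vm - elec) more than the Mini;
  // break even once those savings cover the hardware.
  return Math.ceil(hardwareEur / (vmEurMonth - elecEurMonth));
}

console.log(monthsToBreakEven(HARDWARE_EUR, ELECTRICITY_EUR_MONTH, CLOUD_VM_EUR_MONTH)); // → 22 months
```

At the pessimistic end (€20 VM, €4 electricity) the same math gives about 44 months, so the cheaper your hypothetical VM, the weaker the case. Mid-range and up, it's comfortably under two years.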
My Stack: What's Actually Running
| Component | What | Why |
|---|---|---|
| Hardware | Mac Mini M4, 16GB RAM | Orchestration only — no local inference |
| Agent Runtime | OpenClaw (open source) | Full-featured Claw with cron, tools, messaging |
| Main Model | Claude Opus | Complex tasks, drafting, research |
| Cron Model | Claude Sonnet | Routine jobs — 80% cheaper than Opus |
| Messaging | WhatsApp + Telegram | I talk to my agent like I'd text a colleague |
| Routing | Multi-agent | Different agents for different task types |
The key insight: the Mac Mini is the orchestrator, not the brain. The LLM runs in the cloud. The Mac Mini handles everything around it — persistence, scheduling, tool execution, communication.
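The multi-agent routing row deserves a sketch: conceptually it's just a map from task type to an agent profile. The types, names, and prompts below are made up for illustration; OpenClaw's real configuration differs.

```typescript
// Sketch of multi-agent routing: each task type maps to an agent
// profile (model + system prompt). All names here are illustrative,
// not OpenClaw's actual configuration format.
type TaskType = "research" | "cron" | "chat";

interface AgentProfile {
  model: string;
  systemPrompt: string;
}

const agents: Record<TaskType, AgentProfile> = {
  research: { model: "claude-opus",   systemPrompt: "Thorough research and drafting." },
  cron:     { model: "claude-sonnet", systemPrompt: "Run scheduled jobs; be terse." },
  chat:     { model: "claude-opus",   systemPrompt: "Conversational assistant over messaging." },
};

function routeTask(type: TaskType): AgentProfile {
  return agents[type];
}
```

The design choice is the same one from the table: routine scheduled work goes to the cheaper model, anything open-ended goes to the stronger one.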
What 20 Cron Jobs Actually Do
I have roughly 20 autonomous tasks running. Here's what the agent handles without me touching anything:
- Morning Brief: Summary of overnight metrics, calendar, priorities
- Night Shift: Batch processing, content scheduling, research aggregation
- Social Media: Automated posting across platforms, with a fact-checking pipeline
- Metrics Collection: Pulling analytics, tracking KPIs
I communicate with the agent through WhatsApp and Telegram. It feels less like managing a server and more like texting a very efficient assistant.
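Each of these jobs reduces to a schedule, a model, and a prompt. Here's a dependency-free sketch: the field names are invented (OpenClaw's real job format differs), and a plain hour field stands in for full cron expressions.

```typescript
// The jobs above as data: a schedule, a model, a prompt. Field names
// are invented for this sketch, and an hourUTC field stands in for
// a real cron expression to keep it dependency-free.
interface CronJob {
  name: string;
  hourUTC: number; // 0-23; a real scheduler would parse a cron expression
  model: string;
  prompt: string;
}

const jobs: CronJob[] = [
  { name: "morning-brief", hourUTC: 6, model: "claude-sonnet",
    prompt: "Summarize overnight metrics, today's calendar, and top priorities." },
  { name: "night-shift", hourUTC: 23, model: "claude-sonnet",
    prompt: "Batch-process queued research and schedule tomorrow's content." },
];

// Which jobs are due at a given hour?
function dueJobs(hourUTC: number): CronJob[] {
  return jobs.filter((job) => job.hourUTC === hourUTC);
}
```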
Costs: Mac Mini vs. Cloud
| | Mac Mini M4 | Cloud VM (comparable) |
|---|---|---|
| Hardware | ~€700 one-time | €0 |
| Monthly electricity | €2–4 | €0 |
| Monthly hosting | €0 | €20–50 |
| Year 1 total | ~€730–750 | €240–600 |
| Year 2 onward (per year) | ~€24–48 | €240–600 |
| Data sovereignty | ✅ Local | ❌ Cloud provider |
After year one, the Mac Mini costs almost nothing. And you get something a cloud VM doesn't: full control over your agent's data, tools, and file system.
Note: LLM API costs (Claude) come on top for both setups. That's the same either way.
Honest Take: 10 Days In
I want to be transparent — I've been running this setup for about 10 days. That's enough to say:
- It works. The Mac Mini runs OpenClaw without issues. The cron jobs execute reliably. Communication through WhatsApp and Telegram is smooth.
- It saves real time. The morning brief alone saves me 20 minutes of context-switching every day.
- It's not magic. You still need to configure jobs, define agent behavior, debug when things go wrong. This is infrastructure, not a plug-and-play product.
What I can't tell you yet: long-term reliability over months, edge cases I haven't hit, or how this scales if I add significantly more tasks. I'll update this post as I learn more.
Getting Started
If you want to try this yourself:
- Get a Mac Mini M4 (16GB is enough for orchestration)
- Install OpenClaw — it's open source
- Connect your messaging (WhatsApp, Telegram, or both)
- Start with 2–3 cron jobs — don't try to automate everything on day one
- Use a cheaper model for cron — Sonnet handles routine tasks fine and is 80% cheaper than Opus
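The "80% cheaper" point falls straight out of per-token pricing. The rates below are illustrative per-million-token prices (check Anthropic's current pricing page before relying on them); what matters is the ratio.

```typescript
// Why the cron jobs run on the cheaper model. Rates are illustrative
// per-million-token USD prices, not a quote from Anthropic's current
// price list; the ratio is where "80% cheaper" comes from.
const pricePerMTokUSD = {
  opus:   { input: 15, output: 75 },
  sonnet: { input: 3,  output: 15 },
};

function jobCostUSD(model: keyof typeof pricePerMTokUSD, inputTok: number, outputTok: number): number {
  const p = pricePerMTokUSD[model];
  return (inputTok / 1e6) * p.input + (outputTok / 1e6) * p.output;
}

// A daily brief with ~5k input and ~1k output tokens:
const opusCost = jobCostUSD("opus", 5000, 1000);     // ≈ $0.15
const sonnetCost = jobCostUSD("sonnet", 5000, 1000); // ≈ $0.03
console.log(1 - sonnetCost / opusCost);              // ≈ 0.8, i.e. 80% cheaper
```

At ~20 scheduled runs a day, that ratio is the difference between a rounding error and a real monthly bill.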
The barrier to entry is lower than you think. If you can set up a Node.js project, you can run a Claw.
FAQ
Can you run LLMs locally on a Mac Mini M4?
You can run smaller models locally via llama.cpp, but that's not what I do. My Mac Mini orchestrates — it sends requests to cloud APIs (Claude) for the actual inference. The 16GB RAM is plenty for the orchestration layer.
How much does it cost to run an AI agent on a Mac Mini?
The hardware is ~€700 one-time. Electricity runs €2–4/month at 5–15W idle draw. LLM API costs depend on usage and are the same whether you run locally or in the cloud.
Is OpenClaw safe to run on your personal machine?
This is a fair concern. Karpathy himself noted he's "a bit sus'd to run OpenClaw specifically" because Claws need full system access. Alternatives like NanoClaw (~4,000 LoC) exist specifically for auditability. I run OpenClaw because it's the most feature-complete option, but you should review what you're comfortable with.
What's the difference between a Claw and a regular AI agent?
A regular AI agent responds to prompts. A Claw runs persistently on your hardware, schedules its own tasks, communicates via messaging, and maintains context across sessions. It's the infrastructure layer that turns an AI agent into an always-on system.
Do I need a Mac Mini specifically?
No. Any always-on computer works — a Raspberry Pi, an old laptop, a NUC. The Mac Mini M4 hits a sweet spot of power efficiency, performance, and silence. But the concept works on any hardware that stays on.
I wrote about my broader AI agent experience in My AI Employee: How an Autonomous Agent Runs My Business. For the technical side of AI in production, check out RAG in Production: What Comes After the Tutorial.