humanity.ai-A1 gets to know our CEO Alexey Lee.

Watch humanity.ai-A1 Run Fully Offline: Modular AI on a Mac Mini

Watch humanity.ai-A1 run three AI models concurrently—offline—on a 24GB Mac Mini. This demo showcases modular AGI, real-time learning, and verifiable orchestration with just 16B params. The future of local, plug-and-play AI is here.

Christopher Ford

A live demo of the future, where modular AI runs locally and adapts in real time.

The Chariot Technologies Lab team is thrilled to share a new demo of humanity.ai-A1, our modular AI system, running fully offline on a 24GB Mac Mini.

No internet, no cloud, and no tricks. Just real intelligence in a compact, verifiable, self-contained package.

Watch the Demo

What's in the Demo?

This may look like a clunky prototype, but under the hood, it's showcasing several breakthroughs that redefine what's possible with AI at the edge.

1. Local AI on Consumer Hardware
We’re running three models concurrently, a vision model and two LLMs, on a single Mac Mini with just 24GB of RAM. The combined setup totals 16 billion parameters; at standard 16-bit precision, the weights alone would occupy roughly 32GB, more than the whole machine has.
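To see why that's tight, here's the back-of-the-envelope memory math. The parameter count comes from the demo; the precision options are standard assumptions, since the demo's actual quantization scheme isn't spelled out here:

```python
# Rough weight-memory footprint of 16B parameters at common precisions.
PARAMS = 16e9  # combined parameter count across all three models

BYTES_PER_PARAM = {
    "fp32": 4,    # full precision: ~64 GB
    "fp16": 2,    # half precision: ~32 GB, already more than the Mac Mini's RAM
    "int8": 1,    # 8-bit quantization: ~16 GB
    "int4": 0.5,  # 4-bit quantization: ~8 GB
}

for precision, nbytes in BYTES_PER_PARAM.items():
    print(f"{precision}: ~{PARAMS * nbytes / 1e9:.0f} GB of weights")
```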

2. Concurrent, Cross-Stack Execution
The models draw on different ML stacks (speech recognition via Whisper, face recognition, text-to-speech, and text-to-text generation) that are normally incompatible with one another. Our system runs them simultaneously, with no virtualization or wrappers.
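We can't show iCon's internals here, but as a rough mental model, picture several independent inference loops sharing one machine. A minimal sketch using Python threads; the model stubs are placeholders, not our actual modules:

```python
import threading
import queue

# Placeholder stand-ins for the independent model stacks in the demo.
def speech_to_text(audio):  return f"transcript of {audio}"
def recognize_face(frame):  return f"identity in {frame}"
def generate_reply(prompt): return f"reply to {prompt}"

results = queue.Queue()

def run(name, fn, payload):
    # Each expert runs in its own thread, isolated from the others' stacks.
    results.put((name, fn(payload)))

threads = [
    threading.Thread(target=run, args=("asr", speech_to_text, "mic.wav")),
    threading.Thread(target=run, args=("vision", recognize_face, "frame-0")),
    threading.Thread(target=run, args=("llm", generate_reply, "Hello!")),
]
for t in threads: t.start()
for t in threads: t.join()

while not results.empty():
    print(results.get())
```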

3. Plug-and-Play AI Architecture
Thanks to our iCon architecture, any AI or ML model can be “containerized” and added on the fly as an expert module—no retraining required. This modularity is key to real-world scalability.
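The iCon container format itself isn't public, but the plug-and-play idea can be sketched as a registry in which each expert declares the capability it serves. Everything below (class names, the `infer` signature) is hypothetical illustration:

```python
class ExpertModule:
    """Minimal interface a containerized expert might expose."""
    capability: str

    def infer(self, task: str) -> str:
        raise NotImplementedError

class SummarizerExpert(ExpertModule):
    capability = "summarize"

    def infer(self, task: str) -> str:
        return f"summary: {task[:40]}..."

REGISTRY: dict[str, ExpertModule] = {}

def register(expert: ExpertModule) -> None:
    # Adding an expert is a single registry insert; nothing else retrains.
    REGISTRY[expert.capability] = expert

register(SummarizerExpert())  # plugged in at runtime
```

The point of the pattern is that the rest of the system only ever sees the capability key, so new experts slot in without touching existing ones.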

4. Orchestration + Verification
A built-in Conductor dynamically routes tasks to relevant experts. If no expert is available, the system says so. No hallucinations, no guesswork.
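A toy version of that routing-and-refusal behavior is below. The expert table and capability names are invented for illustration, and the real Conductor's routing logic is of course more involved:

```python
# Hypothetical expert table: capability -> handler.
EXPERTS = {
    "transcribe": lambda task: f"transcript: {task}",
    "identify_face": lambda task: f"identity: {task}",
}

def conduct(capability: str, task: str) -> str:
    handler = EXPERTS.get(capability)
    if handler is None:
        # Decline explicitly rather than guessing.
        return f"No expert available for '{capability}'."
    return handler(task)

print(conduct("transcribe", "mic.wav"))
print(conduct("solve_math", "2x + 3 = 7"))  # -> No expert available for 'solve_math'.
```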

5. Real-Time Learning
The vision model adds new faces to memory on the fly, showing how dynamic training works in real time. This lays the groundwork for lifelong learning.
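One reasonable mental model for "adding a face to memory" (our actual pipeline may differ) is a nearest-neighbor store over face embeddings: enrolling is a single insert, and no gradient updates are needed.

```python
import numpy as np

memory: dict[str, np.ndarray] = {}  # name -> normalized face embedding

def enroll(name: str, embedding: np.ndarray) -> None:
    # One-shot learning: a single insert makes the face recognizable.
    memory[name] = embedding / np.linalg.norm(embedding)

def identify(embedding: np.ndarray, threshold: float = 0.8) -> str:
    q = embedding / np.linalg.norm(embedding)
    if not memory:
        return "unknown"
    # Cosine similarity against every stored identity.
    name, score = max(((n, float(q @ e)) for n, e in memory.items()),
                      key=lambda t: t[1])
    return name if score >= threshold else "unknown"

rng = np.random.default_rng(0)
alexey = rng.normal(size=128)  # stand-in for a real face embedding
enroll("Alexey", alexey)
print(identify(alexey + 0.05 * rng.normal(size=128)))  # -> "Alexey"
```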

6. Apple Silicon + PyTorch + CUDA
We’re running PyTorch models originally built for CUDA natively on Apple Silicon, another technical milestone that opens the door to more diverse hardware support.
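Our compatibility layer is the novel part and isn't shown here, but readers who want to experiment with PyTorch on Apple Silicon can start with the stock `mps` backend, which is standard PyTorch:

```python
import torch

# Prefer CUDA where present, fall back to Apple's Metal backend, then CPU.
if torch.cuda.is_available():
    device = torch.device("cuda")
elif torch.backends.mps.is_available():
    device = torch.device("mps")  # Metal Performance Shaders on Apple Silicon
else:
    device = torch.device("cpu")

model = torch.nn.Linear(16, 4).to(device)
x = torch.randn(8, 16, device=device)
print(model(x).shape, "on", device)
```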


Why This Matters

This isn’t just a proof-of-concept. It’s a glimpse into what modular, private, adaptable AI can look like when it’s untethered from the cloud.

We’re not building an LLM. We’re building a brain—one that evolves, scales, respects user privacy by design, and unlocks a new path to AGI.

The next version of humanity.ai-A1 will tackle hard math problems. And it’ll still run locally.


Demo Recap:

  • Fully offline system
  • Runs on a 24GB Mac Mini
  • 3 models, 16B params total
  • Real-time learning
  • Hardware-agnostic modularity
  • Verifiable orchestration

Watch now >>>


Want to be part of our work?
We're actively seeking AI researchers and engineers to join our team. Get in touch at [email protected] if you're interested, and be sure to sign up for the email newsletter below to stay up to date with the latest developments.
