Photo by R.G. Angel

Modularity Memo: Self-Evolving AI, Safe Scaling, and Why LLMs Won't Cut It

From self-evolving AI that masters a school corpus to fresh research on safety and interpretability, the humanity.ai team is pushing modular AI forward. Catch our IMOL debut, conference submissions, and top industry takeaways.

Christopher Ford

Some quick updates on the latest from the humanity.ai team.

New Experiments

  • We've successfully developed a self-evolving assembly of our iCon modular AI system, which autonomously expanded to become an expert on a school corpus. Our paper on the research was accepted to be presented as a poster at IMOL at the University of Hertfordshire (September 8-10, 2025, Hatfield, UK). If you're at IMOL, come say hi! We'll also share more details from this research after the conference.
  • We've been exploring the safety, interpretability, and scalability implications of our modular AI system in recent weeks and have submitted papers with relevant research on these topics to both AAAI and SafeMM-AI at ICCV. We hope to be able to share our latest findings at one or both of these conferences in the coming months.

Tech Deep-Dives

Interested in getting an in-depth understanding of the technology that powers our self-evolving AI? Check out the Tech Deep-Dives tag in our blog for the latest.

Tokens of Note: "LLMs Aren't Enough" Edition

GPT-5, failing on a KINDERGARTEN worksheet, via @GaryMarcus on X, who simply commented, "No words."

A steady drumbeat has been growing across the AI industry, tapping out one consistent message: LLMs are powerful, but they won't get us to human-level artificial intelligence. If you follow Gary Marcus on X, you will get the gist of a lot of this, but for a deeper look at what researchers, engineers, academics, and enthusiasts are saying, we've compiled some of the most compelling bits and bytes that have come across our desks over the last few weeks:

  • "GPT-5 has sealed the deal. It is one in a line of underachieving flagship models from major AI labs." Full blog here >
  • "LLMs are not by themselves sufficient as a path to general machine intelligence; in some sense they are a distraction because of how far you can take them despite the approach being fundamentally incorrect." Full blog here >
  • Must-read paper on domain-specific superintelligence: The authors propose a paradigm shift in AI development: rather than scaling massive generalist models on noisy internet data, they fine-tune smaller models using reasoning tasks synthesized from symbolic paths, each paired with detailed thinking traces. Check it out >
  • The Information reported on the latest International Conference on Machine Learning (ICML), where a lot of the discussions focused on where LLMs are falling short: They "overthink" on certain reasoning tasks and they struggle to truly understand multimodal content. This reinforces our belief that LLMs just aren't going to cut it when it comes to real-world practical, safe, and trustworthy human-level artificial intelligence.
  • Excellent interview with Microsoft AI CEO Mustafa Suleyman: "As AI models get commoditized, the value will be added in that final layer of orchestration." That's why we're building what we're building.
  • Insightful interview on Big Technology Podcast (LinkedIn) with Dwarkesh Patel: A good convo on how continual learning is a barrier keeping current AI systems from progressing to AGI/ASI. Earlier in this blog, we shared the self-evolving system we're presenting at IMOL. It successfully engages in continual self-learning to address this very barrier.
  • Excellent chat with Dr. Fei-Fei Li at Y Combinator on spatial intelligence: Over and over, we hear the smartest in the field saying some form of "AI is more than LLMs." And we couldn't agree more.
  • "[LLMs] still have many shortcomings that have not been fixed since their inception, and that doesn’t look to be solvable with the current approach. The current approach of 'run the biggest possible LLM and make it do everything' is a horrendous idea." Full blog here >

Opportunities to Work with Us

We're opening our doors to researchers, engineers, advisors and visionary design partners. Shape our self-evolving modular AI, access GPUs & robotics, co-author papers and deploy real-world solutions with us.

Follow Us

We're on Instagram, Threads, X, YouTube, and LinkedIn — connect with us to follow our journey in developing self-evolving modular AI!

Onward,
the humanity.ai team
