AI is rapidly shifting from single models to coordinated systems of agents (multi-agent systems, or MAS), and the pace has picked up noticeably in recent months. These newer AI systems are more complex and multi-layered than single-model setups, so they call for new rules and design ideas to make them work reliably. Here are some recent approaches that take a fresh view on MAS:
RecursiveMAS
A multi-agent AI system where agents repeatedly improve a shared internal representation instead of exchanging text. A small RecursiveLink module lets them pass these latent states between each other in a loop. It boosts accuracy by ~8%, runs 1.2–2.4× faster, and cuts token use by up to 75%. → Explore more
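The latent-refinement loop above can be sketched in a few lines. This is a toy illustration, not RecursiveMAS itself: the agents here are simple numeric transforms, and `recursive_link` just stands in for the module that cycles the shared latent state between them.

```python
# Hypothetical sketch of RecursiveMAS-style refinement (names assumed,
# not from the paper): agents refine a shared latent state in a loop
# instead of exchanging text messages.

def make_agent(weight, bias):
    """Each 'agent' is a toy latent transform: x -> x + weight * (bias - x)."""
    def agent(latent):
        return [x + weight * (bias - x) for x in latent]
    return agent

def recursive_link(agents, latent, rounds):
    """Pass the shared latent state through every agent, repeatedly."""
    for _ in range(rounds):
        for agent in agents:
            latent = agent(latent)
    return latent

agents = [make_agent(0.5, 1.0), make_agent(0.5, 0.0)]
state = recursive_link(agents, [0.0, 4.0], rounds=8)
# Both components converge toward the same fixed point (~1/3),
# regardless of their starting value.
```

The point of the sketch is only the control flow: no text is generated between steps, so each hop is one cheap state hand-off rather than a full generate-and-parse round trip.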
OneManCompany (OMC)
Treats a multi-agent system like a real organization. Agents act as reusable units with Talents - full packages of skills + tools + configs, rather than skills alone. A Talent Market lets the system recruit new agents dynamically. Work is managed via an Explore-Execute-Review (E2R) tree: tasks split top-down, results aggregate bottom-up, enabling adaptive planning and continuous improvement. → Explore more
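The E2R tree's split-then-aggregate shape can be sketched as a small recursion. The function names and the toy "sum a list" task are assumptions for illustration, not OMC's actual implementation:

```python
# Minimal sketch of an Explore-Execute-Review (E2R) style task tree:
# split top-down, execute at the leaves, review/aggregate bottom-up.

def e2r(task, split, execute, review):
    """Recursively split a task; execute leaves; aggregate results upward."""
    subtasks = split(task)
    if not subtasks:                      # leaf: execute directly
        return review(task, execute(task))
    results = [e2r(t, split, execute, review) for t in subtasks]
    return review(task, results)          # review + aggregate at each node

# Toy task: "sum a list" splits into halves down to single numbers.
split = lambda xs: [xs[:len(xs)//2], xs[len(xs)//2:]] if len(xs) > 1 else []
execute = lambda xs: xs[0]
review = lambda task, result: sum(result) if isinstance(result, list) else result

total = e2r([1, 2, 3, 4], split, execute, review)   # aggregates to 10
```

Because review runs at every node on the way back up, a bad subtree result can be caught and redone locally, which is what makes the planning adaptive.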
OrgAgent
Another approach that organizes agents as a company. It works in three layers: the governance layer plans tasks and assigns resources, the execution layer solves problems and reviews results, and the compliance layer checks final answers. This hierarchy improves coordination, boosts performance by up to 102%, and reduces token usage by ~75% compared to flat agent setups. → Explore more
CORAL
A MAS for open-ended discovery where agents continuously explore and improve solutions. They run long-term, share a persistent memory, and work asynchronously. A heartbeat system monitors and adjusts them. It also includes safeguards (isolated workspaces, evaluators). CORAL achieves 3–10× better improvement with fewer evaluations. → Explore more
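The heartbeat mechanism mentioned above is easy to picture with a small sketch. All names and the timeout threshold here are assumptions for illustration, not CORAL's actual code:

```python
# Hedged sketch of a heartbeat monitor for long-running async agents:
# the monitor flags agents whose last heartbeat is too old so they can
# be restarted or adjusted.

def check_heartbeats(last_beat, now, timeout):
    """Return agents whose last heartbeat is older than `timeout` seconds."""
    return [agent for agent, t in last_beat.items() if now - t > timeout]

last_beat = {"explorer-1": 100.0, "explorer-2": 40.0, "evaluator": 95.0}
stale = check_heartbeats(last_beat, now=110.0, timeout=30.0)
# Only "explorer-2" has gone quiet for longer than the timeout.
```

Since the agents run asynchronously and indefinitely, this kind of liveness check is what lets the system intervene without blocking the others.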
LLMA-Mem
Introduces a memory framework where agents store and reuse past experiences via flexible shared or local memory. Results show better long-term performance at lower cost. Importantly, more agents aren’t always better - well-designed memory can let smaller teams outperform larger ones. → Explore more
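The shared-versus-local split can be sketched as two memory handles per agent. The class and method names are assumptions, not LLMA-Mem's API:

```python
# Illustrative sketch of shared vs. local agent memory: an agent checks
# team-wide memory before redoing work, and records results in both stores.

class Memory:
    def __init__(self):
        self.store = {}
    def write(self, key, value):
        self.store[key] = value
    def read(self, key, default=None):
        return self.store.get(key, default)

class Agent:
    def __init__(self, name, shared):
        self.name = name
        self.local = Memory()    # private experience
        self.shared = shared     # team-wide experience

    def solve(self, task):
        cached = self.shared.read(task)   # reuse a teammate's past result
        if cached is not None:
            return cached
        result = f"{self.name}:{task}"    # stand-in for actual work
        self.shared.write(task, result)
        self.local.write(task, result)
        return result

shared = Memory()
a, b = Agent("a", shared), Agent("b", shared)
first = a.solve("t1")    # computed by agent a
second = b.solve("t1")   # reused from shared memory, not recomputed
```

The reuse path is why a small, well-remembering team can beat a large forgetful one: agent b never pays for work agent a already did.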
Agentic Federated Learning
Adds AI agents to federated learning to handle real-world variability. A server-side agent selects clients to reduce bias, while client-side agents manage privacy budgets and adjust model size to device limits. Agents adapt dynamically, improving efficiency and fairness, while also highlighting challenges like reliability and security. → Explore more
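The two agent roles can be sketched as a pair of small policies. The selection rule (least-recently-selected) and the client fields are assumptions for illustration, not the paper's actual algorithm:

```python
# Toy sketch of agentic federated-learning roles: a server-side agent
# picks which clients train this round, and a client-side agent adapts
# its participation to device limits and its remaining privacy budget.

def server_select(clients, k):
    """Server-side agent: pick the k least-recently-selected clients
    to reduce selection bias."""
    ranked = sorted(clients, key=lambda c: c["times_selected"])
    return [c["id"] for c in ranked[:k]]

def client_adjust(model_size, device_memory, privacy_budget):
    """Client-side agent: shrink the model to fit the device, and opt
    out entirely when the privacy budget is exhausted."""
    if privacy_budget <= 0:
        return 0                       # skip this round
    return min(model_size, device_memory)

clients = [
    {"id": "phone", "times_selected": 5},
    {"id": "laptop", "times_selected": 1},
    {"id": "tablet", "times_selected": 3},
]
chosen = server_select(clients, k=2)                              # least-used first
size = client_adjust(100, device_memory=40, privacy_budget=1.0)   # fits the device
```

Swapping either policy for a learned one is the "agentic" step; the sketch only shows where those decisions plug into the round loop.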
CASCADE
A MAS for handling disruptions (for example, in supply chains) under strict time and communication limits. Each agent keeps a local knowledge base, makes decisions, and communicates via contracts. Communication expands only when needed, based on validation, which keeps coordination controlled. → Explore more
GRASP
Lets agents share their gradients (learning signals) and combine them into a consensus gradient. This creates a stable target (a Bellman equilibrium), reduces oscillations, and improves coordinated learning. → Explore more
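The consensus step can be sketched with plain averaging. Note that averaging is an assumption here; GRASP's actual combination rule may differ:

```python
# Minimal sketch of gradient consensus: each agent computes a local
# gradient, and the group combines them into one shared update target.

def consensus_gradient(gradients):
    """Combine per-agent gradients component-wise into their mean."""
    n = len(gradients)
    return [sum(g[i] for g in gradients) / n for i in range(len(gradients[0]))]

local_grads = [[1.0, -2.0], [3.0, 0.0], [2.0, -1.0]]
consensus = consensus_gradient(local_grads)    # component-wise mean
```

Because every agent then steps toward the same target instead of its own, the updates stop pulling in conflicting directions, which is the intuition behind the reduced oscillations.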
Reinforced Agent
A simple MAS setup that adds a reviewer agent to the tool-calling control loop. Before a tool call executes, the reviewer checks it for errors (wrong tool, parameters, or scope), enabling real-time correction. The best reviewer achieves a 3:1 benefit-to-risk ratio. → Explore more
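The gating step above can be sketched as a check that runs before execution. The tool schema and the specific checks are illustrative assumptions, not the paper's reviewer:

```python
# Sketch of a reviewer agent gating tool calls: validate the proposed
# call (right tool? right parameters?) before it is ever executed.

TOOLS = {"search": {"query"}, "calculator": {"expression"}}

def review_call(call):
    """Check a proposed tool call for errors; return (ok, reason)."""
    if call["tool"] not in TOOLS:
        return False, "unknown tool"
    missing = TOOLS[call["tool"]] - set(call["args"])
    if missing:
        return False, f"missing parameters: {sorted(missing)}"
    return True, "ok"

ok, _ = review_call({"tool": "search", "args": {"query": "MAS"}})   # passes
bad, reason = review_call({"tool": "browse", "args": {}})           # rejected
```

In the full loop, a rejection would be fed back to the calling agent so it can correct the call before execution, rather than failing after the fact.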

