The rising popularity of Hermes Agent over recent weeks has demonstrated something important: people are increasingly interested in agents that can self-improve.
Hermes Agent positions itself as “the agent that grows with you”, emphasizing persistent memory across sessions and automatic skill creation, alongside a gateway for multi-platform messaging and sandboxed tool use. It is the first agent of this kind at this scale, but similar ideas and mechanisms appear in other agents.
Here are several interesting lightweight open-source agents and frameworks built around the concept of self-improvement:
HyperAgents
Meta’s research system for self-referential AI, combining a task agent with a meta agent in a single editable program that can modify both itself and its improvement process, making it one of the clearest examples of explicit self-improvement architectures. → Explore more
Agent0
Agent0 is a research-oriented autonomous framework built around zero-data self-evolution. According to its description, agents can improve and evolve without human-curated datasets, self-generating training data through intelligent exploration and using tool-integrated reasoning to drive continuous capability growth. → Explore more

EvoAgentX
A framework for building LLM agents that generate workflows, evaluate their own performance, and iteratively rewrite prompts and structures using built-in evolution algorithms. Over time, this creates agents that continuously refine how they work (not just what they output) through feedback-driven self-optimization. → Explore more

AgentEvolver
Trains agents that don’t rely on static data: they generate their own tasks, explore environments, and learn from past trajectories. By assigning credit across steps and refining policies, the system forms a closed feedback loop where agents continuously evolve. → Explore more
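None of these projects expose their training loop as a single function, but purely as an illustration, the propose-tasks / attempt / reinforce cycle that Agent0 and AgentEvolver describe might be sketched like this (all names and the skill-update rule are hypothetical, not either project's real API):

```python
import random

# Illustrative sketch of a zero-data self-evolution loop: the agent proposes
# its own tasks, attempts them, records the trajectories as self-generated
# training data, and reinforces itself on successes (a toy form of credit
# assignment). All names here are hypothetical.

def propose_task(step):
    return f"task-{step}"

def attempt(task, skill):
    # Success probability grows with the agent's current skill level.
    return random.random() < skill

def evolve(steps=200, seed=0):
    random.seed(seed)
    skill = 0.2                      # starting capability
    replay = []                      # self-generated "training data"
    for step in range(steps):
        task = propose_task(step)
        success = attempt(task, skill)
        replay.append((task, success))
        if success:                  # reinforce: each success nudges skill up
            skill = min(1.0, skill + 0.01)
    return skill, replay

final_skill, replay = evolve()
```

The closed feedback loop is the essential part: tasks, trajectories, and rewards all originate from the agent itself, with no human-curated dataset in the cycle.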
Agent Zero
An autonomous agent framework that runs in an execution environment, where agents use and create tools while refining workflows over time. With persistent memory, plugin-based skills, and iterative loops, it can self-correct and expand its capabilities, gradually evolving its behavior through continuous interaction and tool use. → Explore more
Letta Code
A memory-first coding harness built on the Letta API, where a persistent agent keeps state across sessions and updates its memory over time. It can learn reusable skills and adapt from past interactions, improving continuously instead of resetting each session, while remaining portable across different LLM backends. → Explore more

LettaBot
A multi-channel AI assistant built on the Letta Code SDK, using a single persistent agent with shared memory across apps like Telegram, Slack, Discord, WhatsApp and Signal. It stores conversations over time, executes local tools, and schedules tasks. The agent continuously updates its memory and skills, allowing it to improve behavior and responses from ongoing interactions. → Explore more
LangGraph Reflection
A reflection-style agent pattern that implements iterative self-critique: a critique agent evaluates the main agent’s output and, if issues are found, sends feedback for revision. This loop repeats until no further critiques remain, enabling step-by-step refinement within a single execution. → Explore more

SuperAGI
An autonomous agent framework that allows agents to continually improve performance across runs. It provides memory storage to retain information and adapt behavior over time, enabling incremental improvement through repeated execution and accumulated context. → Explore more
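The critique-and-revise loop that LangGraph Reflection implements can be sketched in plain Python. The `generate`, `critique`, and `revise` stubs below are hypothetical stand-ins for real LLM calls, not LangGraph's actual API:

```python
# Sketch of an iterative self-critique loop: a critique step reviews the
# main step's draft, and the draft is revised until no issues remain or a
# round limit is hit. The three stubs simulate LLM calls.

def generate(task):
    return f"draft answer for: {task}"

def critique(task, draft):
    # Return a list of issues; an empty list means the draft is accepted.
    # For the demo, any text containing "draft" gets one critique.
    return ["be more specific"] if "draft" in draft else []

def revise(task, draft, issues):
    return f"revised answer for: {task} (addressed: {', '.join(issues)})"

def reflect(task, max_rounds=3):
    draft = generate(task)
    for _ in range(max_rounds):
        issues = critique(task, draft)
        if not issues:          # critique agent found nothing to fix: stop
            break
        draft = revise(task, draft, issues)
    return draft

print(reflect("summarize the report"))
```

The `max_rounds` cap matters in practice: without it, a critique model that always finds something to object to would loop forever.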
