Our mission is to pioneer a new path to AGI by harnessing hierarchical learning and reasoning through open, decentralized collaboration. We believe that achieving human-level intelligence will not come from sheer scale alone, but from architectures that learn and think in levels – much like the human brain.
This project aims to bring together researchers and enthusiasts worldwide to develop an AI system that learns a world model and reasons abstractly, with the goal of tackling the hardest open benchmark, ARC-AGI-2, on which current models stagnate at ~5% accuracy. By combining the strengths of approaches like Yann LeCun's JEPA/H-JEPA (for self-supervised world modeling) with innovations like the Hierarchical Reasoning Model from Guan Wang _et al._ (2025) (for efficient latent reasoning), we will create a system that can adapt to new tasks, plan solutions, and interpret symbols with human-like flexibility.
Our decentralized pretraining effort means that no single entity owns or controls the resulting AGI; instead, the project will follow open-source principles and rely on distributed computing contributions to train the models, ensuring transparency and broad participation. We set out to achieve what was once thought impossible: an AI that can learn from minimal examples and solve novel reasoning problems efficiently, ultimately reaching and surpassing the 85% accuracy threshold on ARC-AGI-2 – a feat that would signify a major leap toward genuine general intelligence.
This mission is visionary yet tangible: by focusing on a concrete benchmark and leveraging hierarchical methods, we will drive AI research forward while adhering to principles of openness and collective innovation.
In sum, our project exists to guide open-source AGI research towards machines that can learn, reason, and plan as humans do, and in doing so, close the “easy for humans, hard for machines” gap. Together, the community will build an AI that not only excels at ARC-AGI-2, but also lays the groundwork for safe, broadly beneficial artificial general intelligence.