Inspiration
The world seems to be racing towards a concept called "Artificial General Intelligence", but what really is it? Let's start by defining intelligence:
According to François Chollet, creator of ARC-AGI, intelligence is the ability to efficiently learn new skills.
Artificial General Intelligence is any AI system that can perform at the same level as humans on new, unseen data.
The most capable and generalizable AI models to date are Big Tech's Large Language Models. Yet despite their power, they perform poorly on most complex reasoning tasks. Our hypothesis for this project:
Reasoning is not exclusive to language; in fact, it is independent of language.
What it does
Singularity is a dual approach towards Artificial General Intelligence.
- Singularis:
Our custom-built and trained reasoning language model: a generalized latent reasoning model
- World Model:
Our custom-built ARC-AGI-3 solver
How we built it
- Singularis:
Leveraged a pretrained LLM, Google's T5Gemma2, for its encoder-decoder architecture. Integrated a modified version of Ubiquant's Universal Reasoning Model between the encoder and decoder. Kept the encoder's weights frozen, trained the Universal Reasoning Model, and fine-tuned the decoder's weights.
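The freeze-and-fine-tune setup above can be sketched in PyTorch. The modules below are hypothetical stand-ins (simple layers in place of the real T5Gemma2 encoder/decoder and the Universal Reasoning Model), and the dimensions are illustrative only; the point is the gradient routing: the encoder is frozen, while the reasoning module and decoder are trained.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the real components; shapes are illustrative.
encoder = nn.Linear(128, 128)                   # pretrained encoder (frozen)
reasoner = nn.GRU(128, 128, batch_first=True)   # latent reasoning module (trained)
decoder = nn.Linear(128, 32000)                 # decoder head (fine-tuned)

# Freeze the encoder so only the reasoner and decoder receive gradients
for p in encoder.parameters():
    p.requires_grad = False

trainable = [p for m in (reasoner, decoder) for p in m.parameters()]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)

# Forward pass: encoder output flows through the reasoner into the decoder
x = torch.randn(2, 10, 128)      # (batch, sequence, hidden)
h = encoder(x)                   # frozen representation
h, _ = reasoner(h)               # latent reasoning between encoder and decoder
logits = decoder(h)
```

Only the parameters passed to the optimizer are updated, so the pretrained encoder representation stays intact while the middle and output stages adapt.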
- World Model:
Proposer: Gemini 2.5 Flash in the MVP, scaled up to an inference-time-learning CNN world model. An abstraction layer supplies the detected objects.
Critic: Gemini 2.5 Flash infers the goal and scores each path in the tree by how likely it is to lead to the goal. An abstraction layer supplies the state transitions.
MCTS: Monte Carlo Tree Search built from scratch.
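A from-scratch MCTS like the one above boils down to four steps: select with UCB1, expand, roll out, and backpropagate. This is a minimal generic sketch, not our actual solver: the environment here is a toy 1-D walk (actions ±1, reward is negative distance to a goal state), standing in for the ARC-AGI-3 world model and critic.

```python
import math
import random

class Node:
    def __init__(self, state, parent=None, action=None):
        self.state, self.parent, self.action = state, parent, action
        self.children, self.visits, self.value = [], 0, 0.0

def ucb1(node, c=1.4):
    # Unvisited nodes are explored first
    if node.visits == 0:
        return float("inf")
    exploit = node.value / node.visits
    explore = c * math.sqrt(math.log(node.parent.visits) / node.visits)
    return exploit + explore

def mcts(root_state, actions, step, reward, iters=400, horizon=5):
    root = Node(root_state)
    for _ in range(iters):
        # 1. Selection: descend via UCB1 until reaching a leaf
        node = root
        while node.children:
            node = max(node.children, key=ucb1)
        # 2. Expansion: grow children once the leaf has been visited
        if node.visits > 0:
            for a in actions:
                node.children.append(Node(step(node.state, a), node, a))
            node = random.choice(node.children)
        # 3. Rollout: random simulation from the leaf
        state = node.state
        for _ in range(horizon):
            state = step(state, random.choice(actions))
        r = reward(state)
        # 4. Backpropagation: update statistics up to the root
        while node:
            node.visits += 1
            node.value += r
            node = node.parent
    # Return the most-visited root action
    return max(root.children, key=lambda n: n.visits).action

# Toy environment: walk toward state 3 on the number line
random.seed(0)
best = mcts(0, [1, -1], lambda s, a: s + a, lambda s: -abs(s - 3))
```

In our setting, the rollout reward would come from the critic's goal-likelihood score rather than a hand-written function.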
Challenges we ran into
- Singularis:
Maintaining consistent GPU sessions on Google Colab; powerful compute resources were essential for training a 566-million-parameter model.
- World Model:
MCTS requires a tree of saved states, but ARC-AGI-3 only returns frames when an action is taken, so building a tree-based caching system was critical.
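The caching idea can be sketched as keying frames by the action sequence that produced them, so the search can revisit a tree node without replaying actions against the environment. This is a simplified stand-in, not the actual ARC-AGI-3 API; `expensive_step` below is a hypothetical placeholder for the real frame-returning call.

```python
class FrameCache:
    """Cache frames keyed by the action sequence that produced them,
    so tree search can revisit nodes without re-executing actions."""

    def __init__(self, execute_actions):
        self.execute = execute_actions  # expensive call: action sequence -> frame
        self.cache = {}
        self.calls = 0                  # number of real environment calls made

    def frame_for(self, action_seq):
        key = tuple(action_seq)         # hashable path identifier for the tree node
        if key not in self.cache:
            self.calls += 1
            self.cache[key] = self.execute(action_seq)
        return self.cache[key]

# Demo with a hypothetical stand-in for the real environment call
def expensive_step(action_seq):
    return f"frame-after-{len(action_seq)}-actions"

cache = FrameCache(expensive_step)
first = cache.frame_for([2, 5])
second = cache.frame_for([2, 5])   # served from cache, no new call
```

Keying on the full action path makes each tree node's frame reproducible, and the `calls` counter makes the cost savings easy to measure.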
Accomplishments that we're proud of
- Singularis:
Singularis' trained version scored 38% on ARC-e and ARC-c, greatly improving on the baseline T5Gemma2's score of 22% and the untrained version of Singularis' score of 10%.
- World Model:
Successfully implementing MCTS from the ground up, then scaling up search depth and rollouts for expanded search
Creating a state abstraction layer to improve the model's understanding of ARC-AGI-3
Having our agentic system make intelligent actions in LS-20 of ARC-AGI-3
Building a practical caching system to save costs
- In General:
The intelligent, methodical approach we took towards framing the problem of "what is reasoning?"
Lifting the fog around how this project should be implemented
Improving our research-engineering skills and learning more about cutting-edge AI technology
What we learned
We learned a tremendous amount! In our research phase, we read 20 research papers, consisting of both foundational papers and state-of-the-art research.
What's next for Singularity
We plan on heavily scaling up this project, investing into a deeper research phase, recruiting more people, and eventually applying for a research grant from ARC-AGI once we have a very refined approach.