aicpp is a deterministic symbolic program synthesis engine written in C++, guided by structural reductions generated by large language models (LLMs).
Rather than using LLMs to generate final solutions, aicpp uses them to reduce the structural search space, while a natively compiled symbolic engine performs bounded-depth exhaustive search.
The goal is to explore a hybrid architecture that balances:
- Determinism
- Explainability
- Structural compositionality
- Native performance
- Controlled LLM guidance
LLMs are powerful pattern recognizers.
However, instead of letting them directly generate solutions, we use them to:
- Select relevant primitives
- Generate partial structural parameterizations
- Reduce combinatorial explosion
Then a deterministic C++ engine:
- Composes typed primitives
- Explores bounded search depth
- Orders by cost
- Returns explicit symbolic solutions
LLM = structural prior
C++ engine = deterministic solver
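The split above can be sketched end to end. Everything below is illustrative: the function names, the primitive library, and the stubbed model call are assumptions for the sketch, not aicpp's actual API.

```python
# Hypothetical sketch of the LLM as a structural reducer: it narrows a larger
# primitive library down to a few candidates; the deterministic engine then
# searches only over that subset. The prompt/response plumbing is stubbed.
FULL_LIBRARY = ["flipud", "fliplr", "rot90", "transpose", "recolor", "segment"]

def select_primitives(task_examples, ask_llm):
    """Ask the model for a comma-separated subset of known primitive names,
    then validate the answer against the library (never trust free text)."""
    prompt = (
        "Given these input/output grids, list the primitives likely needed, "
        f"chosen only from {FULL_LIBRARY}: {task_examples}"
    )
    answer = ask_llm(prompt)
    selected = [p.strip() for p in answer.split(",")]
    return [p for p in selected if p in FULL_LIBRARY]  # drop hallucinations

# With a stubbed model, a flip task reduces six primitives to two:
subset = select_primitives("(flip task)", lambda _: "flipud, fliplr, blur")
```

Validating the model's free-text answer against the known library is what keeps the stochastic component from contaminating the deterministic solving phase.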
- ✔ Deterministic exhaustive symbolic exploration
- ✔ Strongly-typed neuron-based architecture
- ✔ Cost-based search ordering
- ✔ Structural partial parameterization via LLM
- ✔ Dynamic C++ code generation and compilation
- ✔ JSON serialization of discovered structures
- ✔ Reusable structural memory
- ✔ Docker reproducibility
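As a rough illustration of the JSON-serialization and structural-memory items above, a discovered composition could round-trip like this (the field names are hypothetical, not aicpp's actual schema):

```python
# Hypothetical sketch: persist a discovered symbolic structure to JSON so a
# later run can reload it from structural memory instead of re-searching.
import json

structure = {
    "task": "flip_example",
    "cost": 2,
    "composition": ["fliplr", "flipud"],  # applied innermost-first
}

blob = json.dumps(structure)   # write to structural memory
recalled = json.loads(blob)    # later run: reload and reuse, skipping search
```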
```shell
export OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxx"
git clone https://github.com/Julien-Livet/aicpp.git
cd aicpp
git clone https://github.com/arcprize/ARC-AGI-2.git
pip install -r requirements.txt
docker build -t aicpp .
docker run --rm aicpp
cd scripts
python -m pytest -sxv engine.py
git clone https://github.com/michaelhodel/arc-dsl.git
python -m pytest -sxv test_arc.py
```

The system consists of:
- Primitives (C++)
  - Typed transformation functions
- Neuron
  - Wraps a primitive function
  - Defines input/output types
- Connection
  - Composed graph of neurons
- Brain
  - Manages search space
  - Performs cost-ordered exploration
  - Serializes discovered structures
- LLM Pipeline (Python)
  - Analyzes ARC task examples
  - Selects relevant primitives
  - Generates structural partials
  - Triggers dynamic compilation
  - Launches exploration
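A minimal Python sketch of the Neuron/Connection typing described above; the real engine is C++, and all names here are illustrative assumptions:

```python
# Hypothetical sketch: a Neuron wraps a primitive with declared input/output
# types, and a Connection is only legal when those types line up.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Neuron:
    name: str
    fn: Callable[[Any], Any]
    in_type: type   # declared input type
    out_type: type  # declared output type

def connect(a: Neuron, b: Neuron) -> Neuron:
    """Compose two neurons: a's output feeds b's input, types permitting."""
    if a.out_type is not b.in_type:
        raise TypeError(f"cannot connect {a.name} -> {b.name}")
    return Neuron(f"{b.name}({a.name})", lambda x: b.fn(a.fn(x)),
                  a.in_type, b.out_type)

Grid = list  # stand-in for the engine's grid type
flipud = Neuron("flipud", lambda g: g[::-1], Grid, Grid)
fliplr = Neuron("fliplr", lambda g: [row[::-1] for row in g], Grid, Grid)
pipeline = connect(fliplr, flipud)  # yields flipud(fliplr(input))
```

Rejecting ill-typed connections at composition time is what keeps the exhaustive search over a strongly-typed graph tractable: type-incompatible branches are never enumerated.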
Given ARC input-output examples, the LLM selects only:
`flipud`, `fliplr`
The engine then deterministically explores combinations and returns a symbolic solution such as:
`flipud(fliplr(input))`
No stochastic reasoning occurs in the solving phase.
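Under the assumption that the LLM has already narrowed the library to these two flips, the deterministic completion step can be sketched as follows (real aicpp does this in compiled C++; this only illustrates the cost-ordered search):

```python
# Minimal sketch: bounded-depth exhaustive search over LLM-selected primitives,
# trying cheaper (shallower) compositions first and returning an explicit
# symbolic expression. Names and structure are illustrative, not aicpp's API.
import itertools

PRIMITIVES = {
    "fliplr": lambda g: [row[::-1] for row in g],
    "flipud": lambda g: g[::-1],
}

def solve(examples, primitives, max_depth=3):
    """Cost-ordered exhaustive search up to max_depth compositions."""
    for depth in range(1, max_depth + 1):          # shallow programs first
        for names in itertools.product(primitives, repeat=depth):
            def run(grid):
                for n in names:                    # apply innermost-first
                    grid = primitives[n](grid)
                return grid
            if all(run(i) == o for i, o in examples):
                expr = "input"                     # render symbolic solution
                for n in names:
                    expr = f"{n}({expr})"
                return expr
    return None

solution = solve([([[1, 2], [3, 4]], [[4, 3], [2, 1]])], PRIMITIVES)
# solution == "flipud(fliplr(input))"
```

Because the search order is fixed and the primitives are pure functions, the same inputs always yield the same symbolic answer, which is the reproducibility property the solving phase is meant to guarantee.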
Traditional approaches:
- Deep learning → latent, non-explicit
- Program synthesis → combinatorial explosion
- LLM direct generation → non-deterministic
aicpp explores: LLM-guided structural reduction + deterministic symbolic completion
This separation preserves:
- Reproducibility
- Inspectability
- Controlled search
- 📄 Research positioning: RESEARCH_POSITIONING.md
- 🗺 Roadmap: ROADMAP.md
- 🤝 Contribution guidelines: CONTRIBUTING.md
- 📘 Conceptual overview (PDF): see README links
Minimum requirements:
- C++23
- Python 3.10+
- Docker (recommended)
aicpp is an experimental research framework exploring:
- Structural partial parameterization
- Deterministic symbolic completion
- Hybrid symbolic–LLM architectures
- Combinatorial reduction strategies
It is not a production ARC solver.
- Core engine operational
- ARC flip, color mapping, and segmentation tasks tested
- Structural memory implemented
- Docker reproducibility ensured
- Ongoing combinatorial optimization research
Please read CONTRIBUTING.md before submitting pull requests.
We welcome contributions in:
- Primitive design
- Search pruning strategies
- Structural compression
- Performance optimization
- ARC benchmarking
See LICENSE file.
aicpp investigates a fundamental hypothesis: Large language models are most effective when used as structural reducers, not as direct reasoning engines.
The long-term goal is to build scalable, deterministic, and explainable hybrid reasoning systems.
