Julien-Livet/aicpp

aicpp is a deterministic symbolic program synthesis engine written in C++, guided by structural reductions generated by large language models (LLMs).

Instead of using LLMs to generate final solutions, aicpp uses them to reduce the structural search space, while a native compiled symbolic engine performs bounded-depth exhaustive search.

The goal is to explore a hybrid architecture that balances:

  • Determinism
  • Explainability
  • Structural compositionality
  • Native performance
  • Controlled LLM guidance

Concept of connected neural network.pdf

LLM-Guided Hypothesis Generation with Progressive Feedback for Neuro-Symbolic Program Synthesis on ARC-AGI-2.pdf


🧠 Core Idea

LLMs are powerful pattern recognizers.

Rather than letting them generate final solutions directly, aicpp uses them to:

  1. Select relevant primitives
  2. Generate partial structural parameterizations
  3. Reduce combinatorial explosion

Then a deterministic C++ engine:

  • Composes typed primitives
  • Explores bounded search depth
  • Orders by cost
  • Returns explicit symbolic solutions

LLM = structural prior
C++ engine = deterministic solver


✨ Features

  • ✔ Deterministic exhaustive symbolic exploration
  • ✔ Strongly-typed neuron-based architecture
  • ✔ Cost-based search ordering
  • ✔ Structural partial parameterization via LLM
  • ✔ Dynamic C++ code generation and compilation
  • ✔ JSON serialization of discovered structures
  • ✔ Reusable structural memory
  • ✔ Docker reproducibility

🚀 Quick Start

Using Docker

export OPENAI_API_KEY="sk-xxxxxxxxxxxxxxxxxxxxxxxx"
git clone https://github.com/Julien-Livet/aicpp.git
cd aicpp
git clone https://github.com/arcprize/ARC-AGI-2.git
pip install -r requirements.txt
docker build -t aicpp .
docker run --rm aicpp
cd scripts
python -m pytest -sxv engine.py
git clone https://github.com/michaelhodel/arc-dsl.git
python -m pytest -sxv test_arc.py

🏗 Architecture Overview

The system consists of:

  1. Primitives (C++)
    • Typed transformation functions
  2. Neuron
    • Wraps a primitive function
    • Defines input/output types
  3. Connection
    • Composed graph of neurons
  4. Brain
    • Manages search space
    • Performs cost-ordered exploration
    • Serializes discovered structures
  5. LLM Pipeline (Python)
    • Analyzes ARC task examples
    • Selects relevant primitives
    • Generates structural partials
    • Triggers dynamic compilation
    • Launches exploration

🧪 Example (ARC Flip Task)

Given ARC input-output examples, the LLM selects only:

  • flipud
  • fliplr

The engine then deterministically explores combinations and returns a symbolic solution such as: flipud(fliplr(input))

No stochastic reasoning occurs in the solving phase.


📊 Why This Approach?

Traditional approaches:

  • Deep learning → latent, non-explicit
  • Program synthesis → combinatorial explosion
  • LLM direct generation → non-deterministic

aicpp explores: LLM-guided structural reduction + deterministic symbolic completion

This separation preserves:

  • Reproducibility
  • Inspectability
  • Controlled search

📚 Documentation

  • 📄 Research positioning: RESEARCH_POSITIONING.md
  • 🗺 Roadmap: ROADMAP.md
  • 🤝 Contribution guidelines: CONTRIBUTING.md
  • 📘 Conceptual overview (PDF): see README links

🛠 Development

Minimum requirements:

  • C++23
  • Python 3.10+
  • Docker (recommended)

🔬 Research Perspective

aicpp is an experimental research framework exploring:

  • Structural partial parameterization
  • Deterministic symbolic completion
  • Hybrid symbolic–LLM architectures
  • Combinatorial reduction strategies

It is not a production ARC solver.


📈 Current Status

  • Core engine operational
  • ARC flip, color mapping, and segmentation tasks tested
  • Structural memory implemented
  • Docker reproducibility ensured
  • Ongoing combinatorial optimization research

🤝 Contributing

Please read CONTRIBUTING.md before submitting pull requests.

We welcome contributions in:

  • Primitive design
  • Search pruning strategies
  • Structural compression
  • Performance optimization
  • ARC benchmarking

📜 License

See LICENSE file.


🧠 Vision

aicpp investigates a fundamental hypothesis: Large language models are most effective when used as structural reducers, not as direct reasoning engines.

The long-term goal is to build scalable, deterministic, and explainable hybrid reasoning systems.

About

Artificial intelligence with a network of connected neurons
