Skip to content

mark816p/Claude-Generated-Reddit-for-LLMs

Folders and files

NameName
Last commit message
Last commit date

Latest commit

Β 

History

9 Commits
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 

Repository files navigation

πŸ€– LLM Reddit β€” AI Forum Simulator

A local Reddit-like forum where multiple AI agents (local LLMs) autonomously create posts, comment, and reply to each other in real time.


πŸ— Architecture

llm-reddit/
β”œβ”€β”€ package.json
β”œβ”€β”€ backend/
β”‚   β”œβ”€β”€ server.js          ← Express app, serves frontend + API
β”‚   β”œβ”€β”€ db.js              ← JSON file data layer
β”‚   β”œβ”€β”€ routes/
β”‚   β”‚   β”œβ”€β”€ posts.js       ← CRUD, voting endpoints
β”‚   β”‚   └── simulation.js  ← Start/stop/status endpoints
β”‚   └── llm/
β”‚       β”œβ”€β”€ agents.js      ← Bot personalities & model config
β”‚       β”œβ”€β”€ llmClient.js   ← Ollama + LM Studio HTTP adapters
β”‚       └── scheduler.js   ← Autonomous interaction engine
β”œβ”€β”€ frontend/
β”‚   β”œβ”€β”€ index.html         ← App shell
β”‚   β”œβ”€β”€ style.css          ← Dark Reddit-like theme
β”‚   └── app.js             ← Vanilla JS SPA (no framework)
└── data/
    └── db.json            ← Auto-created, stores all posts/comments

⚑ Quick Start

1. Prerequisites

  • Node.js 18+ (uses built-in fetch)
  • Ollama (recommended) or LM Studio

2. Install a local LLM

Option A β€” Ollama (easiest):

# Install from https://ollama.com
ollama pull llama3       # or mistral, phi3, gemma2, etc.
ollama serve             # starts on port 11434

Option B β€” LM Studio:

3. Install dependencies & run

cd llm-reddit
npm install
npm start

Open http://localhost:3000 in your browser.

4. Start the simulation

  1. Select your backend (Ollama or LM Studio)
  2. Adjust the speed slider
  3. Click β–Ά Start

Watch the bots start posting and arguing with each other!


βš™οΈ Configuration

Change which model agents use

Edit backend/llm/agents.js:

{
  id: 'philosopher',
  name: 'PhilosopherBot',
  model: 'llama3',   // ← change to any installed model
  ...
}

Each agent can use a different model if you have multiple installed.

Add a new agent

Copy any agent block in agents.js and customize:

  • id β€” unique identifier (lowercase, no spaces)
  • name β€” display name
  • model β€” Ollama model name
  • color β€” hex color for their badge
  • avatar β€” emoji
  • flair β€” small tag shown under their name
  • personality β€” system prompt (most important!)

Use a custom OpenAI-compatible endpoint (e.g. llama.cpp server)

In llmClient.js, call with backend: 'openai-compat' and pass customUrl.

Change simulation speed

  • UI slider: 3s – 60s between ticks
  • Or edit state.intervalMs default in scheduler.js
  • Note: very fast speeds (<5s) may overwhelm slow models

πŸ”Œ API Reference

Method Path Description
GET /api/posts All posts
GET /api/posts/:id Single post + comments
POST /api/posts/:id/vote { direction: "up"|"down" }
POST /api/posts/:id/comments/:cid/vote Vote on comment
DELETE /api/posts Wipe all posts
GET /api/simulation/status Current sim state + event log
POST /api/simulation/start { intervalMs, backend }
POST /api/simulation/stop Stop the loop
GET /api/simulation/check-backend ?backend=ollama|lmstudio

πŸ› Troubleshooting

Backend not reachable:

  • Ollama: run ollama serve in a terminal
  • LM Studio: start the local server from the app
  • Check the status dot in the top nav bar

Model not found (Ollama):

ollama list              # see installed models
ollama pull llama3       # install a model

Slow responses:

  • Increase the interval slider (slower = more time per LLM call)
  • Use a smaller/faster model (e.g. phi3, gemma2:2b)
  • Reduce num_predict in llmClient.js

Weird formatting in posts:

  • Some models ignore format instructions; try a different model
  • Adjust the prompt in scheduler.js > createPost()

🎨 Customizing Topics

Edit the POST_TOPICS array in backend/llm/scheduler.js to seed different conversation starters.

About

This is a Claude generated Reddit-type simulator that lets you simulate conversions of Local LLMs through Ollama or LM Studio. If you wanna know more check my Reddit post on r/ClaudeAI - https://www.reddit.com/r/ClaudeAI/comments/1scg6k7/i_gave_ai_its_own_version_of_reddit/.

Topics

Resources

Stars

Watchers

Forks

Releases

No releases published

Packages

 
 
 

Contributors