- Created `train_cortexgpt_consumer_gpu.py` with GPU-specific profiles
- Implemented gradient accumulation support in the unified trainer
- Reduced default configurations for consumer GPUs
- Created `train_neuroscience_3090.py` for selective feature enabling
- Added memory-optimized configurations for Phase 2 features
- Updated READMEs with detailed neuroscience training instructions
- Updated README.md and README_KR.md with:
- Consumer GPU configurations
- Neuroscience feature usage
- Memory optimization techniques
- Troubleshooting guides
- Removed old phase-specific implementations (phase2, phase3, stable)
- Moved documentation to docs/ directory
- Cleaned up test and temporary files
- Removed wandb logs
- `cortexgpt/models/cortex_gpt.py` - Main model interface
- `cortexgpt/models/cortex_gpt_unified.py` - Unified implementation with all phases
- `cortexgpt/training/train_cortex_gpt.py` - Unified trainer with gradient accumulation
- `scripts/train_cortexgpt.py` - Main training script
- `scripts/train_cortexgpt_consumer_gpu.py` - Consumer GPU optimized training
- `scripts/train_neuroscience_3090.py` - Neuroscience features for RTX 3090
- `scripts/quick_start_unified.py` - Quick start guide with auto-detection
- `README.md` - Main documentation (updated)
- `README_KR.md` - Korean documentation (updated)
- `PHASE1_SUMMARY.md` - Phase 1 development history
- `FIXES_SUMMARY.md` - Recent fixes documentation
- `docs/` - Additional technical documentation
```bash
uv run scripts/train_cortexgpt_consumer_gpu.py --auto-detect --epochs 10

# Both homeostasis and sleep-wake (default)
uv run scripts/train_neuroscience_3090.py --epochs 20

# Only homeostasis (lower memory)
uv run scripts/train_neuroscience_3090.py --homeostasis-only --epochs 20

uv run scripts/quick_start_unified.py
# Choose option 2 for consumer GPU optimized
```

| Configuration | Memory Usage | Features |
|---|---|---|
| Minimal | 8-10GB | Base model only |
| Phase 1 | 10-12GB | + Stability features |
| + Homeostasis | 12-15GB | + Homeostatic plasticity |
| + Sleep-Wake | 15-18GB | + Sleep-wake cycles |
| Full (Default) | >20GB | All features enabled |
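As a rough illustration of how the table above could drive configuration, here is a sketch of a profile selector keyed on available VRAM. `select_profile`, the threshold values, and the feature names are hypothetical and are not part of the project's API:

```python
# Hypothetical profile selector mirroring the memory table above.
# Thresholds are the table's upper bounds (in GB) and are illustrative only.
PROFILES = [
    (10, "minimal", []),                                   # 8-10GB: base model only
    (12, "phase1", ["stability"]),                         # 10-12GB: + stability
    (15, "homeostasis", ["stability", "homeostasis"]),     # 12-15GB
    (18, "sleep_wake", ["stability", "homeostasis", "sleep_wake"]),  # 15-18GB
]

def select_profile(vram_gb: float):
    """Return (profile_name, feature_list) fitting the given VRAM budget."""
    for limit, name, features in PROFILES:
        if vram_gb <= limit:
            return name, features
    # >18GB: everything on, matching the "Full (Default)" row
    return "full", ["stability", "homeostasis", "sleep_wake"]

print(select_profile(11))  # a 3080-class card -> ('phase1', ['stability'])
print(select_profile(24))  # a 3090-class card -> full profile
```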
- Gradient Accumulation: Enables larger effective batch sizes on limited memory
- Selective Feature Enabling: Enable only the features you need to save memory
- Auto GPU Detection: Automatically configures for your GPU
- Memory Monitoring: Built-in warnings and recommendations
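To show the gradient accumulation idea in isolation, here is a pure-Python toy (a one-parameter least-squares model; all names are illustrative, and the actual trainer uses PyTorch). Gradients from several small micro-batches are summed before a single optimizer step, so the effective batch size is `micro_batch * accum_steps` without the memory cost of the large batch:

```python
def grad(w, batch):
    """Gradient of mean squared error for the toy model y = w * x."""
    return sum(2 * x * (w * x - y) for x, y in batch) / len(batch)

def train(data, micro_batch=2, accum_steps=2, lr=0.05, epochs=50):
    w = 0.0
    micro = [data[i:i + micro_batch] for i in range(0, len(data), micro_batch)]
    for _ in range(epochs):
        acc, n = 0.0, 0
        for batch in micro:
            acc += grad(w, batch)   # accumulate instead of stepping
            n += 1
            if n == accum_steps:    # one step per accum_steps micro-batches
                w -= lr * (acc / accum_steps)
                acc, n = 0.0, 0
    return w

# Fit y = 3x; converges to w ~ 3 despite micro-batches of only 2 samples.
data = [(x, 3 * x) for x in [0.5, 1.0, 1.5, 2.0]]
print(round(train(data), 2))  # -> 3.0
```

In PyTorch the same pattern is `loss.backward()` on each micro-batch (which accumulates into `.grad`) followed by `optimizer.step()` and `optimizer.zero_grad()` every `accum_steps` iterations.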
- Default configuration requires >20GB memory (not suitable for consumer GPUs)
- Use provided scripts for consumer GPU training
- Monitor GPU memory with `watch -n 1 nvidia-smi`
- Start with minimal features and gradually enable more
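If you would rather poll memory from Python than keep `watch` open, a sketch like the following works. The `nvidia-smi --query-gpu` flags are standard, but `query_gpus` and `parse_nvidia_smi` are hypothetical helpers, not part of the project:

```python
import subprocess

def parse_nvidia_smi(csv_output: str):
    """Parse 'memory.used, memory.total' CSV lines (MiB) into per-GPU dicts."""
    gpus = []
    for line in csv_output.strip().splitlines():
        used, total = (int(v) for v in line.split(","))
        gpus.append({"used_mib": used, "total_mib": total,
                     "free_gb": (total - used) / 1024})
    return gpus

def query_gpus():
    """Call nvidia-smi; returns [] if the binary is unavailable."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=memory.used,memory.total",
             "--format=csv,noheader,nounits"],
            capture_output=True, text=True, check=True).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []
    return parse_nvidia_smi(out)

# Example with captured output from a single RTX 3090:
print(parse_nvidia_smi("4210, 24576"))
```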
The project is now fully functional on consumer GPUs with intelligent memory management!