Transform natural language into autonomous drone missions. No joysticks, no complex GCS interfaces: just tell your drone what you need.
SkyScout democratizes drone operations by enabling anyone to command drones using natural language. Whether you're a construction manager needing to "inspect the north wall for cracks" or a farmer wanting to "count livestock in the east pasture," SkyScout translates your intent into safe, autonomous drone missions.
- 🗣️ Natural Language Control: Command drones with plain English - no technical expertise required
- 🤖 LLM-Powered Intelligence: Leverages OpenAI/Gemini APIs for understanding complex requests
- 👁️ Real-time Object Detection: YOLO-based perception for finding and tracking targets
- 🛡️ Safety First: Built-in geofencing, battery monitoring, weather checks, and automatic return-to-launch
- 🚀 Autonomous Execution: Converts commands to structured mission plans executed autonomously
- 📱 Web Interface: Monitor missions with real-time map visualization (dark/light mode)
- 🌤️ Weather Integration: Real-time weather checks ensure safe flight conditions
- 📋 Mission Templates: Pre-configured scenarios for common operations (search, inspect, delivery, etc.)
- ✈️ Advanced Flight Patterns: Grid, spiral, perimeter, zigzag, circle/orbit, and polygon patterns
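To illustrate what a flight pattern boils down to, here is a minimal sketch of generating a grid (lawnmower/zigzag) pattern as waypoints over a rectangular area. The function name and parameters are illustrative, not SkyScout's actual API:

```python
# Hypothetical sketch: generate serpentine grid waypoints covering a
# width x height rectangle, sweeping alternate rows in opposite
# directions (the classic lawnmower/zigzag survey pattern).
def grid_waypoints(width_m, height_m, spacing_m):
    """Return a list of (x, y) waypoints in meters, origin at a corner."""
    waypoints = []
    y = 0.0
    row = 0
    while y <= height_m:
        if row % 2 == 0:
            # Even rows sweep left-to-right
            waypoints.append((0.0, y))
            waypoints.append((width_m, y))
        else:
            # Odd rows sweep right-to-left
            waypoints.append((width_m, y))
            waypoints.append((0.0, y))
        y += spacing_m
        row += 1
    return waypoints

print(grid_waypoints(40.0, 20.0, 10.0))
```

A spiral or perimeter pattern would be generated analogously; only the waypoint geometry changes, not the execution pipeline.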
```text
# Example commands SkyScout understands:
"Find all red vehicles in the parking lot using zigzag pattern"
"Inspect the building roof for damage"  # Uses circle/orbit pattern
"Count people wearing safety helmets on the construction site"
"Deliver medical supplies to GPS 37.7749, -122.4194"
"Emergency response to accident at main gate"
"Patrol the property perimeter for security"
"Survey the agricultural field for crop health"
"Map the construction site in 3D"
```

```mermaid
graph LR
    A[Web UI] -->|WebSocket| B[ROS Bridge]
    B --> C[Command Interface]
    C --> D[LLM Agent]
    D -->|Mission Plan| E[Mission Planner]
    E --> F[Navigation Bridge]
    E --> G[Perception]
    F -->|MAVLink| H[PX4 Autopilot]
    G -->|Detections| E
```
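The "Mission Plan" handed from the LLM agent to the mission planner is a structured document. As a sketch only, a plausible shape might look like the following; all field names and values here are assumptions for illustration, not SkyScout's actual schema:

```python
import json

# Illustrative mission plan an LLM agent might emit for
# "Find all red vehicles in the parking lot using zigzag pattern".
# Every key below is a hypothetical example, not the real schema.
mission_plan = {
    "task": "search",
    "pattern": "zigzag",
    "area": "parking_lot",
    "targets": [{"class": "car", "color": "red"}],
    "constraints": {
        "max_altitude_m": 50,
        "battery_rtl_percent": 25,
        "geofence": "default",
    },
}

print(json.dumps(mission_plan, indent=2))
```

Keeping the plan as plain, serializable data is what lets it flow over the ROS bridge between the LLM agent, mission planner, and web UI.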
- Companion Computer: Raspberry Pi 5 (8GB recommended)
- Flight Controller: Pixhawk 6C with PX4
- Camera: USB/CSI camera for object detection
- Drone Platform: Any PX4-compatible frame
- ROS2 Iron: Robot middleware for component communication
- Next.js: Modern web interface for command input
- OpenAI/Gemini APIs: Natural language understanding
- YOLO/RT-DETR: Real-time object detection
- PX4: Professional autopilot firmware
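As a sketch of how raw detector output (e.g. from YOLO) might be filtered before reaching the mission planner, here is a minimal post-processing step. The detection dict format and confidence threshold are assumptions, not the perception package's actual interface:

```python
# Hypothetical post-processing of object-detection output: keep only
# detections of the requested class above a confidence threshold.
def filter_detections(detections, target_class, min_confidence=0.5):
    """Return detections matching target_class with sufficient confidence."""
    return [
        d for d in detections
        if d["class"] == target_class and d["confidence"] >= min_confidence
    ]

# Example detector output (illustrative values)
raw = [
    {"class": "car", "confidence": 0.91, "bbox": [10, 20, 110, 80]},
    {"class": "person", "confidence": 0.88, "bbox": [200, 40, 240, 120]},
    {"class": "car", "confidence": 0.32, "bbox": [300, 60, 380, 110]},
]
print(filter_detections(raw, "car"))
```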
```bash
# Clone the repository
git clone https://github.com/arnenoori/skyscout.git
cd skyscout

# Start with Docker Compose
docker-compose up

# Access the web interface at http://localhost:3000
```

- Ubuntu 22.04 or macOS (development only)
- ROS2 Iron
- Node.js 20+
- Python 3.10+
- Git
- OpenAI or Gemini API key
- (Optional) OpenWeather API key for weather integration
- Clone and set up the workspace:

```bash
git clone https://github.com/arnenoori/skyscout.git
cd skyscout
```

- Configure API keys:

```bash
# Create .env file in project root
echo "OPENAI_API_KEY=your-openai-key" >> .env
echo "GEMINI_API_KEY=your-gemini-key" >> .env

# Optional: For weather integration
echo "OPENWEATHER_API_KEY=your-weather-api-key" >> .env
```

- Build ROS2 packages:

```bash
cd ros_ws
source /opt/ros/iron/setup.bash
colcon build --symlink-install
source install/setup.bash
```

- Install and start the web frontend:

```bash
cd ../web_frontend
npm install
npm run dev
```

- Launch ROS nodes (in separate terminals):

```bash
# Terminal 1: ROS Bridge
ros2 launch rosbridge_server rosbridge_websocket_launch.xml

# Terminal 2: SkyScout nodes
ros2 launch skyscout_bringup skyscout.launch.py
```

- Access the interface:
- Open http://localhost:3000 in your browser
- Enter natural language commands
- Monitor mission execution in real-time
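Under the hood, the web UI talks to ROS over rosbridge's JSON protocol. As a sketch, the publish message it might send for a command looks like this; the `"op": "publish"` envelope is rosbridge's real protocol, but the topic name and message type below are assumptions:

```python
import json

# Sketch of a rosbridge-protocol "publish" message a web client could
# send over the WebSocket. Topic name and payload shape are assumed.
command = "Inspect the building roof for damage"
msg = {
    "op": "publish",
    "topic": "/skyscout/command",   # assumed topic name
    "msg": {"data": command},       # std_msgs/String-style payload
}
print(json.dumps(msg))
```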
For the best development experience:
- Install VS Code and Docker
- Open the project in VS Code
- Click "Reopen in Container" when prompted
- Everything is pre-configured and ready to go!
| Component | Description | Key Technologies |
|---|---|---|
| command_interface | Receives and validates natural language commands | ROS2, Python |
| llm_agent | Converts NL to structured mission plans via LLM APIs | OpenAI/Gemini SDK, Weather API |
| perception | Real-time object detection and tracking | YOLO, OpenCV |
| mission_planner | Executes missions with advanced flight patterns | State Machine, Python |
| navigation_bridge | Interfaces with PX4 for drone control | MAVSDK, MAVLink |
| web_frontend | Modern UI with real-time map visualization | Next.js, React, MapLibre GL |
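To make the first stage concrete, here is a minimal sketch of the kind of validation a command-interface component might perform before forwarding a command to the LLM agent. The rules and limit are illustrative, not the actual `command_interface` logic:

```python
# Illustrative command validation, loosely in the spirit of the
# command_interface component. The rules below are assumptions.
MAX_LEN = 500  # assumed upper bound on command length

def validate_command(text):
    """Return (ok, reason). Reject empty or oversized commands."""
    cleaned = text.strip()
    if not cleaned:
        return False, "empty command"
    if len(cleaned) > MAX_LEN:
        return False, "command too long"
    return True, "ok"

print(validate_command("Patrol the property perimeter for security"))
print(validate_command("   "))
```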
```text
skyscout/
├── ros_ws/              # ROS2 packages
│   └── src/
│       ├── command_interface/
│       ├── llm_agent/
│       ├── perception/
│       ├── mission_planner/
│       └── navigation_bridge/
├── web_frontend/        # Next.js web interface
├── docs/                # Documentation
├── .devcontainer/       # VS Code dev container
└── .github/             # CI/CD workflows
```
```bash
# ROS2 tests
cd ros_ws && colcon test

# Frontend tests
cd web_frontend && npm test

# Integration tests
./scripts/run_integration_tests.sh
```

- Python: Black + Ruff (via pre-commit hooks)
- TypeScript: ESLint + Prettier
- Commit messages: Conventional Commits
- Geofencing: Configurable boundaries to prevent flyaways
- Battery Monitoring: Automatic RTL at configurable thresholds (20-35%)
- Weather Checks: Real-time wind, visibility, and precipitation monitoring
- Obstacle Detection: Future support for depth cameras
- Manual Override: Always maintain RC control as backup
- Pre-flight Checks: Automated sensor, GPS, and weather validation
- Mission Templates: Pre-validated parameters for common scenarios
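The geofencing and battery rules above can be sketched as a simple pre-condition check. The fence box, threshold, and function names here are illustrative assumptions, not SkyScout's actual safety implementation:

```python
# Hypothetical safety check combining geofencing and battery
# monitoring: trigger return-to-launch (RTL) on low battery or a
# geofence breach. All values below are illustrative.
def inside_geofence(lat, lon, box):
    """box = (lat_min, lon_min, lat_max, lon_max) in degrees."""
    lat_min, lon_min, lat_max, lon_max = box
    return lat_min <= lat <= lat_max and lon_min <= lon <= lon_max

def should_return_to_launch(battery_percent, lat, lon, box, rtl_threshold=25):
    """True if the drone should RTL right now."""
    return battery_percent <= rtl_threshold or not inside_geofence(lat, lon, box)

FENCE = (37.77, -122.42, 37.78, -122.41)  # assumed bounding box
print(should_return_to_launch(80, 37.775, -122.415, FENCE))  # healthy, in bounds
print(should_return_to_launch(20, 37.775, -122.415, FENCE))  # low battery
```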
- Always comply with local drone regulations
- Maintain visual line of sight during operations
- Test thoroughly in simulation before real flights
- Keep RC transmitter ready for manual takeover
- Never fly over people or restricted areas
We welcome contributions! See CONTRIBUTING.md for guidelines.
- Improve LLM prompt engineering for better mission planning
- Add support for more object detection models
- Implement multi-drone coordination
- Enhance web UI with 3D visualization
- Add support for edge TPUs (Coral, Jetson)
This project is licensed under the MIT License - see LICENSE file for details.
- ROS2 community for the excellent middleware
- PX4 team for the robust autopilot
- OpenAI/Google for accessible LLM APIs
- All contributors and testers
- Project Lead: Arne Noori
- GitHub Issues: Report bugs or request features
- Discussions: Join the conversation
Made with ❤️ for the drone community
Star us on GitHub