Welcome to the Calabria Renewables platform, a comprehensive web application designed for the visualization and management of meteorological and environmental data. This platform allows users to upload, process, analyze, and impute campaign data from various observation stations.
- Interactive Dashboard: Real-time visualization of environmental metrics with dynamic statistics and interactive maps.
- Campaign Management:
  - Create, update, and delete monitoring campaigns.
  - Robust Identification: Uses a composite key (Name, Latitude, Longitude, Measurement Instrument) to uniquely identify campaigns.
  - Auto-Append: Intelligently appends new file uploads to existing campaigns when headers and composite keys match, streamlining data ingestion.
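To illustrate, the composite-key idea can be sketched as a hashable value object (a Python sketch; the station name and coordinates below are invented examples, not real campaigns):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CampaignKey:
    """Composite identity: Name, Latitude, Longitude, Measurement Instrument."""
    name: str
    latitude: float
    longitude: float
    instrument: str

# frozen=True makes instances hashable, so keys can live in sets/dicts.
existing = {CampaignKey("Station A", 39.30, 16.25, "Anemometer")}
incoming = CampaignKey("Station A", 39.30, 16.25, "Anemometer")

# A matching key means the upload targets an existing campaign,
# i.e. a candidate for auto-append rather than a new record.
print(incoming in existing)  # → True
```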
- Data Processing Pipeline:
  - File Formats: Supports upload of `.csv`, `.xlsx`, `.xls`, and `.ods` files.
  - Iterative Processing: Efficiently handles large `.ods` files using iterative, chunk-based conversion strategies to minimize memory overhead.
  - Intelligent Parsing: Automatically detects "Timestamp" columns for time-series alignment.
  - Python Integration: Uses dedicated Python scripts (`convert_to_csv.py`, `impute_data.py`) for robust data cleaning, format standardization, and transformation before ingestion.
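As an illustration of the parsing step, a minimal "Timestamp" column detector might look like the sketch below (an assumption about the approach, not the actual `convert_to_csv.py` logic):

```python
from typing import Optional
import pandas as pd

def find_timestamp_column(df: pd.DataFrame) -> Optional[str]:
    """Guess the time-axis column: prefer a literal 'Timestamp' header,
    otherwise pick the first column that parses cleanly as datetimes."""
    for col in df.columns:
        if str(col).strip().lower() == "timestamp":
            return col
    for col in df.columns:
        parsed = pd.to_datetime(df[col], errors="coerce")
        if parsed.notna().mean() > 0.9:  # >90% of values look like dates
            return col
    return None

df = pd.DataFrame({
    "Station": ["A", "A"],
    "Timestamp": ["2024-01-01 00:00", "2024-01-01 00:10"],
    "WindSpeed": [4.2, 5.1],
})
print(find_timestamp_column(df))  # → Timestamp
```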
- Data Imputation:
  - Estimates missing metric values using time-based interpolation algorithms.
  - Distinctly visualizes imputed data points on charts for clear differentiation from raw measurements.
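The interpolation idea can be sketched with pandas (the series values and the gap below are invented for illustration; keeping a mask of filled points is what lets the charts distinguish imputed values from raw ones):

```python
import pandas as pd

# A 10-minute series with one missing reading at 00:20.
idx = pd.to_datetime(["2024-01-01 00:00", "2024-01-01 00:10",
                      "2024-01-01 00:20", "2024-01-01 00:30"])
temps = pd.Series([10.0, 12.0, None, 16.0], index=idx, name="Temperature")

imputed = temps.isna()                    # remember which points were filled
temps = temps.interpolate(method="time")  # time-weighted linear interpolation

print(temps.loc["2024-01-01 00:20"])      # → 14.0
print(int(imputed.sum()))                 # → 1
```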
- Advanced Visualization:
  - Time Series Charts: Dynamic line charts representing metric trends over selected time windows, highlighting imputed data.
  - Geospatial Mapping: Integrated maps (Leaflet/Mapbox) to show station locations and status.
  - Smart Stats: Automatically maps metrics to relevant indicators.
- User Authentication: Secure login/logout with JWT-based session management.
The platform is built on a modern full-stack JavaScript architecture (React, Express, and Node, with Prisma over a relational database, PostgreSQL under Docker Compose), plus robust Python integration for data science tasks.
- Framework: React (v18)
- Build Tool: Vite for blazing-fast development.
- Styling: Tailwind CSS for a sleek, responsive, and modern UI.
- Visualization:
  - Recharts for data plotting.
  - Lucide React for beautiful icons.
  - Leaflet with `react-leaflet` / `react-map-gl` for maps.
- State Management: React Hooks (`useState`, `useEffect`) and Context.
- Runtime: Node.js
- Web Framework: Express.js
- Database ORM: Prisma (handles database querying and schema management).
- Data Processing:
  - Python (Pandas): Heavy-lifting data conversion, missing-value imputation (`impute_data.py`), and iterative processing of large datasets.
  - `csv-parser`: Efficient, streaming ingestion of large datasets into the database.
- Real-time: Socket.io for upload progress tracking.
- Authentication: `jsonwebtoken` (JWT) and `bcryptjs`/`bcrypt`.
- Upload: Users upload spreadsheet files via the Dashboard.
- Conversion: The backend spawns a Python subprocess to:
  - Iteratively convert Excel/ODS formats to standard CSV without memory bloat.
  - Prioritize "Timestamp": Scans columns to identify the primary time axis.
  - Extract metadata (Start Date, End Date).
- Conflict Resolution: Applies composite-key and header-validation logic to either prompt the user for a merge, auto-append data, or reject the upload on schema mismatch.
- Streaming Import: The clean CSV is streamed into the database using Prisma `createMany` in batches.
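The batched import step can be sketched language-agnostically (Python here for brevity; the real backend calls Prisma `createMany` from Node):

```python
from itertools import islice

def batched(rows, size=1000):
    """Yield successive lists of at most `size` rows from any iterable,
    so the importer never holds the whole file in memory."""
    it = iter(rows)
    while True:
        chunk = list(islice(it, size))
        if not chunk:
            return
        yield chunk

# e.g. 2,500 parsed CSV rows -> three insert batches
sizes = [len(b) for b in batched(range(2500), size=1000)]
print(sizes)  # → [1000, 1000, 500]
```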
- Metric Selection: Users select a variable (e.g., Temperature, Wind Speed).
- Data Imputation: Users can opt to impute missing data, which triggers a backend script (`impute_data.py`) that applies interpolation and persists the results. The chart updates dynamically to reflect this.
- Visualization: The `CampaignChart` component queries the API, filtering by the "Timestamp" column to generate accurate time-series plots.
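One plausible reading of the conflict-resolution rules described above, expressed as a decision table (a simplification that omits the interactive merge prompt; not the actual controller code):

```python
def resolve_upload(key_matches: bool, headers_match: bool) -> str:
    """Decide what to do with an incoming file (illustrative only)."""
    if key_matches and headers_match:
        return "auto-append"   # same campaign, same schema
    if key_matches:
        return "reject"        # existing campaign but schema mismatch
    return "create-new"        # unseen composite key -> new campaign

print(resolve_upload(True, True))    # → auto-append
print(resolve_upload(True, False))   # → reject
print(resolve_upload(False, True))   # → create-new
```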
```
Calabria Renewables/
├── frontend/                 # React Application
│   ├── src/
│   │   ├── components/       # Reusable UI components (Navbar, StatsCard, Charts)
│   │   ├── pages/            # Main application pages (Dashboard, Login)
│   │   └── api/              # Axios configuration
│   └── package.json
├── backend/                  # Express Server
│   ├── src/
│   │   ├── controllers/      # Business logic (Auth, Campaign management)
│   │   ├── routes/           # API Endpoints
│   │   ├── scripts/          # Python tools (convert_to_csv.py, impute_data.py)
│   │   └── server.js         # Server Entry
│   ├── prisma/               # Database Schema
│   └── package.json
├── docker-compose.yml        # Docker deployment configuration
├── ods_to_csv_converter.py
└── README.md                 # This file
```

The easiest way to run the entire platform (Frontend, Backend, and PostgreSQL database) is with Docker Compose.
- Docker installed.
- Docker Compose installed.

1. Make sure you are in the root directory (where `docker-compose.yml` is located).
2. Start the services in detached mode:

   ```bash
   docker-compose up --build -d
   ```

3. Access the application:
   - Frontend: Open `http://localhost` (or the IP of your Docker host) in your browser.
   - Backend API: Running on `http://localhost:5000`.
   - Database: PostgreSQL is exposed internally to the backend, and its port is also mapped to `5432` on your host.
4. Stop the platform:

   ```bash
   docker-compose down
   ```

Note: The database data is persisted in a Docker volume (`pgdata`), so your data won't be lost when you stop the containers.
- Node.js (v18+)
- Python (3.8+) with `pandas` installed (`pip install pandas openpyxl odfpy`)
- npm or yarn

1. Backend Setup:

   ```bash
   cd backend
   npm install
   npx prisma generate
   npx prisma migrate dev --name init   # Initialize Database
   npm run dev
   ```

2. Frontend Setup:

   ```bash
   cd frontend
   npm install
   npm run dev
   ```

3. Access: Open `http://localhost:5173` (or your configured port) in your browser.
Generated by Antigravity Assistant