A comprehensive web-based platform for visualizing, analyzing, and classifying biomedical, acoustic, and radio frequency signals using advanced signal processing techniques and AI-powered analysis.
This project provides a unified platform for multi-modal signal analysis with emphasis on:
- Signal Processing: Real-time visualization with multiple viewing modes
- Sampling Theory: Interactive demonstrations of aliasing and anti-aliasing
- AI Classification: Automated signal classification using cloud-hosted models
- Resampling Analysis: Understanding the Nyquist theorem and sampling effects
- Biomedical Signals: ECG and EEG analysis with AI-powered disease detection
- Acoustic Signals: Speech recognition, Doppler effect analysis, and drone detection
- RF Signals: SAR image analysis for terrain classification
Visualization Modes:
- Continuous-time signal viewer
- XOR graph representation
- Polar graph visualization
- Recurrence plot analysis
- Sampling viewer - Interactive demonstration of sampling effects
Signal Processing Features:
- Multi-lead ECG visualization
- Adjustable sampling frequency (50Hz - 1000Hz)
- Real-time aliasing demonstration
- Anti-aliasing filter application
- Heart rate analysis and R-peak detection
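The R-peak detection and heart-rate analysis above can be sketched with SciPy's peak finder. This is a minimal illustration on a synthetic spike train, not the module's actual implementation (production ECG pipelines such as Pan-Tompkins add band-pass filtering and adaptive thresholds):

```python
import numpy as np
from scipy.signal import find_peaks

def detect_r_peaks(ecg, fs):
    """Locate R-peaks and estimate heart rate from a single ECG lead."""
    # Peaks must stand well above the baseline and be >= 0.4 s apart
    # (caps detection at ~150 bpm; tune for tachycardic signals).
    peaks, _ = find_peaks(ecg,
                          height=ecg.mean() + 2 * ecg.std(),
                          distance=int(0.4 * fs))
    if len(peaks) < 2:
        return peaks, None
    rr = np.diff(peaks) / fs        # R-R intervals in seconds
    return peaks, 60.0 / rr.mean()  # mean heart rate in bpm

# Synthetic demo: a 60 bpm train of sharp "R" spikes on a noisy baseline
fs = 250
rng = np.random.default_rng(0)
ecg = 0.02 * rng.standard_normal(10 * fs)
ecg[fs // 2::fs] += 1.0             # one spike per second for 10 s
peaks, bpm = detect_r_peaks(ecg, fs)
print(len(peaks), round(bpm))       # 10 peaks, ~60 bpm
```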
AI-Powered Classification: Classifies ECG signals into six categories:
- Atrial Fibrillation/Flutter (AFIB)
- Hypertrophy (HYP)
- Myocardial Infarction (MI)
- Normal Sinus Rhythm
- Other Abnormality
- ST-T Changes (STTC)
Visualization Modes:
- Continuous-time multi-channel viewer
- XOR graph representation
- Polar graph visualization
- Recurrence plot analysis
- Sampling viewer with frequency manipulation
Signal Processing Features:
- Real-time EEG monitoring
- Multi-channel visualization (supports all EDF channels)
- Sampling frequency adjustment (50Hz - 1000Hz)
- Aliasing effect demonstration
- Frequency domain analysis
- Brain wave pattern detection
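The frequency-domain analysis and brain-wave pattern detection above boil down to estimating power in the classic EEG bands. A hedged sketch using Welch's method (band edges and the synthetic test signal are illustrative, not the module's exact parameters):

```python
import numpy as np
from scipy.signal import welch

# Classic EEG frequency bands in Hz (edges vary slightly in the literature)
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_powers(x, fs):
    """Relative power per EEG band from a Welch power spectral density."""
    freqs, psd = welch(x, fs=fs, nperseg=2 * fs)
    total = psd.sum()
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

# Synthetic "alpha-dominant" channel: a 10 Hz tone plus mild noise
fs = 250
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(len(t))
p = band_powers(x, fs)
print(max(p, key=p.get))   # "alpha"
```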
AI-Powered Classification: Detects neurological conditions:
- Normal brain activity
- Alzheimer's Disease
- Epilepsy
- Parkinson's Disease
Audio Analysis Pipeline:
1. Original Audio Analysis
   - Upload and playback audio files (WAV, MP3, OGG, WebM)
   - AI-powered gender recognition (Male/Female)
   - Real-time waveform visualization
2. Resampled Audio
   - Configurable downsampling (1kHz - 48kHz)
   - Aliasing effect demonstration in audio
   - Quality comparison at different sampling rates
3. Anti-Aliased Audio
   - Low-pass filtering before resampling
   - Aliasing artifact prevention
   - Side-by-side quality comparison
Key Features:
- Interactive sampling frequency control
- Real-time audio playback
- Gender classification using AI
- Visual comparison of aliasing effects
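The anti-aliased stage of the pipeline (low-pass filter, then decimate) can be sketched as follows. This is an illustration under simplifying assumptions (integer decimation factor, Butterworth guard filter), not the module's exact filter design:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def resample_with_antialias(x, fs, target_fs):
    """Downsample by an integer factor with a low-pass guard filter.

    Sketch only: assumes fs is an integer multiple of target_fs,
    as with common preset rates (e.g. 48 kHz -> 8 kHz).
    """
    factor = fs // target_fs
    # 8th-order Butterworth low-pass just below the new Nyquist frequency
    sos = butter(8, 0.45 * target_fs, btype="low", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, x)      # zero-phase filtering
    return filtered[::factor]           # decimate: keep every Nth sample

# A 3 kHz tone sampled at 48 kHz survives resampling to 8 kHz
# (3 kHz < new Nyquist of 4 kHz); a 5 kHz tone would alias instead.
fs, target_fs = 48_000, 8_000
t = np.arange(0, 0.5, 1 / fs)
x = np.sin(2 * np.pi * 3000 * t)
y = resample_with_antialias(x, fs, target_fs)
print(len(y))   # 0.5 s at 8 kHz -> 4000 samples
```

Skipping the filter and keeping only the `[::factor]` step reproduces the aliased version the module plays for comparison.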
Doppler Effect Generator:
- Simulates sound of passing vehicles
- Configurable parameters (velocity, frequency)
- Realistic Doppler-shifted audio generation
Doppler Effect Detector:
- Analyzes uploaded audio files
- Detects Doppler effect presence
- Estimates vehicle velocity
- Determines original sound frequency
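Both modes rest on the classical Doppler relations for a moving source and stationary observer. A small worked sketch (the detector-style inversion shown here is one possible estimator, not necessarily the one the module uses):

```python
# Classic Doppler formulas for a moving source and a stationary observer
C = 343.0  # approximate speed of sound in air at 20 °C, in m/s

def doppler_shift(f0, v, approaching=True):
    """Observed frequency for a source emitting f0 Hz at speed v (m/s)."""
    return f0 * C / (C - v) if approaching else f0 * C / (C + v)

def velocity_from_shift(f_approach, f_recede):
    """Recover source speed from the approach/recede frequency pair,
    solving f_a / f_r = (C + v) / (C - v) for v."""
    ratio = f_approach / f_recede
    return C * (ratio - 1) / (ratio + 1)

# A 1000 Hz horn on a vehicle moving at 30 m/s (~108 km/h)
f_a = doppler_shift(1000, 30, approaching=True)    # heard while approaching
f_r = doppler_shift(1000, 30, approaching=False)   # heard while receding
print(round(f_a, 1), round(f_r, 1), round(velocity_from_shift(f_a, f_r), 1))
# 1095.8 919.6 30.0
```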
Drone Sound Classifier:
- Binary classification (Drone / Not Drone)
- AI-powered detection
- Real-time audio processing
Synthetic Aperture Radar Processing:
- Upload and analyze SAR images (.tif, .png, .jpg, .bmp)
- Automated land-water classification
- Coverage percentage estimation
- Backscatter-based surface detection
- Real-time image analysis
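Backscatter-based land-water classification exploits the fact that calm water reflects little radar energy back to the sensor, so it appears dark. A minimal sketch using Otsu's threshold on a toy scene (real SAR pipelines add speckle filtering and calibration; this is illustrative only):

```python
import numpy as np

def land_water_split(img):
    """Split a SAR backscatter image into water/land via Otsu's threshold."""
    # Otsu: pick the threshold maximizing between-class variance
    hist, edges = np.histogram(img, bins=256)
    centers = (edges[:-1] + edges[1:]) / 2
    best_t, best_var = centers[0], -1.0
    for i in range(1, 256):
        w0, w1 = hist[:i].sum(), hist[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        m0 = (hist[:i] * centers[:i]).sum() / w0
        m1 = (hist[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    water = img < best_t                 # dark pixels -> water
    return water, 100.0 * water.mean()   # mask + water-coverage percent

# Toy scene: a dark "lake" (low backscatter) inside brighter "land"
rng = np.random.default_rng(1)
img = rng.normal(0.7, 0.05, (64, 64))
img[16:48, 16:48] = rng.normal(0.2, 0.05, (32, 32))
water, pct = land_water_split(img)
print(round(pct))   # the lake is 1024 of 4096 pixels, i.e. ~25%
```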
```
┌─────────────────┐        ┌──────────────────────┐        ┌─────────────────┐
│  React + Vite   │ ◄────► │      Flask API       │ ◄────► │  Cloud-Hosted   │
│    Frontend     │  HTTP  │  (Jupyter Notebook)  │  REST  │    AI Models    │
└─────────────────┘        └──────────────────────┘        └─────────────────┘
        │                             │                             │
        ▼                             ▼                             ▼
  Signal Upload                Data Processing                Classification
  Visualization                DSP Operations                 Model Inference
  User Interaction             Response Handling              Analysis Results
```
Backend (Jupyter Notebook):
- Serves as middleware between frontend and AI services
- Handles signal preprocessing and DSP operations
- Manages file uploads (audio, images, signal data)
- Routes requests to cloud-hosted classification models
- Returns processed results and analysis
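The middleware pattern above can be sketched as a minimal Flask endpoint. The route name (`/classify/ecg`), payload shape, and stub response are hypothetical — the actual routes live in `backend_api.ipynb`:

```python
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical route: the real endpoints in backend_api.ipynb may differ.
@app.route("/classify/ecg", methods=["POST"])
def classify_ecg():
    payload = request.get_json()
    signal = np.asarray(payload["samples"], dtype=float)
    fs = payload.get("fs", 500)

    # Preprocessing stand-in: normalize the lead before inference
    signal = (signal - signal.mean()) / (signal.std() + 1e-9)

    # A real deployment would forward the preprocessed signal to the
    # cloud-hosted classifier here and relay its prediction.
    return jsonify({"fs": fs, "n_samples": int(signal.size),
                    "label": "stub"})

# Run locally with app.run(port=5000), or expose it via ngrok from Colab.
```

Flask's built-in test client exercises such a route without starting a server, which is handy when iterating inside the notebook.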
Deployment Options:
- Local: Run Jupyter Notebook locally
- Cloud: Deploy on Google Colab
- Tunneling: Use ngrok for remote access
- React 19: Modern component-based UI framework
- Vite: Lightning-fast build tool and dev server
- React Router: Client-side routing
- Axios: HTTP client for API communication
- Lucide React: Beautiful icon library
- Custom DSP Visualizations: Canvas-based signal rendering
- Flask: Lightweight Python web framework
- NumPy: Numerical computing for DSP operations
- SciPy: Advanced signal processing algorithms
- MNE-Python: EEG/MEG data processing
- wfdb: PhysioNet WFDB tools for ECG processing
- librosa: Audio signal processing
- soundfile: Audio file I/O
- Cloud-hosted classification models
- REST API integration
- Real-time inference
- Node.js v18 or higher
- npm or yarn
- Python 3.8+
- pip package manager
```bash
# Clone the repository
git clone https://github.com/yourusername/signal-viewer-classifier.git
cd signal-viewer-classifier

# Navigate to frontend
cd frontend/app

# Install dependencies
npm install

# Start development server
npm run dev
```

The application will be available at http://localhost:5173
```bash
# Navigate to project root
cd signal-viewer-classifier

# Install Python dependencies
pip install flask numpy scipy mne wfdb librosa soundfile

# Open the backend notebook
jupyter notebook backend_api.ipynb
```

Or use Google Colab:
- Upload `backend_api.ipynb` to Google Colab
- Install dependencies in the notebook
- Run all cells to start the Flask server
- Use ngrok or Colab's built-in tunneling for remote access
1. Start the Backend:
   - Open `backend_api.ipynb` in Jupyter or Google Colab
   - Run all cells to start the Flask API server
   - Note the API URL (e.g., `http://localhost:5000` or the ngrok URL)
2. Start the Frontend:
   ```bash
   cd frontend/app
   npm run dev
   ```
3. Access the Application:
   - Open browser to `http://localhost:5173`
   - Select a signal processing module from the home page
- Upload ECG data files (.hea + .dat format)
- Select visualization mode (Continuous, XOR, Polar, Recurrence, Sampling)
- Adjust sampling frequency to see aliasing effects
- View AI classification results
- Upload EDF file containing EEG data
- Select channels to visualize
- Adjust sampling frequency (50Hz - 1000Hz)
- Explore different visualization modes
- View neurological condition classification
- Upload audio file (WAV, MP3, OGG, WebM)
- View original audio analysis with gender recognition
- Resample audio at different frequencies
- Apply anti-aliasing filters
- Compare audio quality and aliasing artifacts
Generator Mode:
- Set vehicle velocity and sound frequency
- Generate Doppler-shifted audio
- Play and download generated audio
Detector Mode:
- Upload audio file
- Analyze for Doppler effect
- View detected velocity and frequency
Drone Detection:
- Upload audio sample
- Get binary classification (Drone / Not Drone)
- Upload SAR image
- View automated land-water segmentation
- Analyze coverage percentages
- Inspect terrain classification results
```
signal-viewer-classifier/
├── frontend/
│   └── app/
│       ├── src/
│       │   ├── tasks/
│       │   │   └── modules/          # Main signal processing modules
│       │   │       ├── components/
│       │   │       │   ├── ecg/      # ECG viewer & classifier
│       │   │       │   ├── eeg/      # EEG viewer & classifier
│       │   │       │   ├── speech/   # Speech processing
│       │   │       │   ├── doppler/  # Doppler analysis
│       │   │       │   └── sar/      # SAR image analysis
│       │   │       ├── data/
│       │   │       └── styles/
│       │   ├── components/           # Shared components
│       │   ├── services/             # API services
│       │   ├── hooks/                # Custom React hooks
│       │   └── App.jsx               # Main application
│       ├── public/
│       ├── package.json
│       └── vite.config.js
├── backend_api.ipynb                 # Backend API (Jupyter Notebook)
├── custom_dsp.py                     # Custom DSP functions
├── README.md
├── .gitignore
└── requirements.txt                  # Python dependencies
```
- Nyquist-Shannon Sampling Theorem: Sample rate must be at least 2× the maximum signal frequency
- Nyquist Frequency: Half of the sampling rate
- Proper sampling preserves all signal information
- Occurs when the sampling rate falls below the Nyquist rate (2× the highest signal frequency)
- High-frequency components masquerade as low frequencies
- Causes signal distortion and information loss
- Demonstrated in all signal processing modules
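The masquerading effect can be shown in a few lines: a tone above the Nyquist frequency, once sampled, produces exactly the same sample values (up to sign) as its low-frequency alias:

```python
import numpy as np

# Sampling a 700 Hz tone at 1000 Hz (Nyquist = 500 Hz): the samples fold
# to |700 - 1000| = 300 Hz and become indistinguishable from a 300 Hz tone.
fs = 1000
t = np.arange(0, 1, 1 / fs)
high = np.sin(2 * np.pi * 700 * t)    # above Nyquist
alias = np.sin(2 * np.pi * 300 * t)   # its alias

# The sampled points coincide up to sign:
# sin(2*pi*700*n/fs) = -sin(2*pi*300*n/fs) for integer n
print(np.allclose(high, -alias, atol=1e-9))   # True
```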
- Low-pass filtering before downsampling
- Removes frequency components above Nyquist limit
- Prevents aliasing artifacts
- Implemented in ECG, EEG, and Speech modules
- Upsampling: Increasing sampling rate (interpolation)
- Downsampling: Decreasing sampling rate (decimation)
- Interactive controls for educational demonstration
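Both directions can be sketched with SciPy's polyphase resampler, which applies its own anti-aliasing filter internally (shown for illustration; the modules may implement the steps explicitly for teaching purposes):

```python
import numpy as np
from scipy.signal import resample_poly

fs = 1000
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 50 * t)          # 50 Hz tone, safely below Nyquist

up = resample_poly(x, up=4, down=1)     # 1 kHz -> 4 kHz (interpolation)
down = resample_poly(x, up=1, down=4)   # 1 kHz -> 250 Hz (decimation)
print(len(up), len(down))               # 4000 and 250 samples
```

Because 50 Hz is still below the new Nyquist frequency (125 Hz) after decimation, the tone survives intact; its FFT peak remains at 50 Hz.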
This platform is ideal for:
- Digital Signal Processing courses
- Biomedical Engineering education
- Audio Engineering programs
- Machine Learning demonstrations
- Research in signal analysis
Key Learning Outcomes:
- Understanding sampling theory and aliasing
- Hands-on experience with real signals
- Visualization of abstract DSP concepts
- AI classification pipeline understanding
Contributions are welcome! Please follow these steps:
- Fork the repository
- Create a feature branch (`git checkout -b feature/amazing-feature`)
- Commit your changes (`git commit -m 'Add amazing feature'`)
- Push to the branch (`git push origin feature/amazing-feature`)
- Open a Pull Request
This project is open source and available under the MIT License.
- PhysioNet for ECG/EEG datasets and WFDB tools
- OpenML and Kaggle for signal classification datasets
- React and Vite communities for excellent tools
- Flask for lightweight backend framework
For issues, questions, or suggestions:
- Open an issue on GitHub
- Email: your.email@example.com
- Add more signal types (EMG, EOG)
- Implement real-time streaming
- Add more AI models
- Mobile responsive design improvements
- Docker containerization
- Advanced filtering options
- Export analysis reports
Built with ❤️ for signal processing education and research