Ahmedloay2/Signal-Viewer-And-Classifer

Multi-Modal Signal Viewer & Classifier

A comprehensive web-based platform for visualizing, analyzing, and classifying biomedical, acoustic, and radio frequency signals using advanced signal processing techniques and AI-powered analysis.

🎯 Overview

This project provides a unified platform for multi-modal signal analysis with emphasis on:

  • Signal Processing: Real-time visualization with multiple viewing modes
  • Sampling Theory: Interactive demonstrations of aliasing and anti-aliasing
  • AI Classification: Automated signal classification using cloud-hosted models
  • Resampling Analysis: Understanding the Nyquist theorem and sampling effects

Key Domains

  • Biomedical Signals: ECG and EEG analysis with AI-powered disease detection
  • Acoustic Signals: Speech recognition, Doppler effect analysis, and drone detection
  • RF Signals: SAR image analysis for terrain classification

✨ Features

🫀 ECG Signal Analysis

Visualization Modes:

  • Continuous-time signal viewer
  • XOR graph representation
  • Polar graph visualization
  • Recurrence plot analysis
  • Sampling viewer - Interactive demonstration of sampling effects

Signal Processing Features:

  • Multi-lead ECG visualization
  • Adjustable sampling frequency (50–1000 Hz)
  • Real-time aliasing demonstration
  • Anti-aliasing filter application
  • Heart rate analysis and R-peak detection
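The heart-rate step can be sketched in a few lines. This is an illustrative outline only; the `heart_rate` helper, its percentile threshold, and the synthetic test signal are assumptions, not the repository's actual code:

```python
# Sketch of R-peak detection and heart-rate estimation for a single-lead
# ECG array `ecg` sampled at `fs` Hz (names and thresholds are illustrative).
import numpy as np
from scipy.signal import find_peaks

def heart_rate(ecg, fs):
    """Estimate heart rate (bpm) from the spacing of detected R peaks."""
    # R peaks tower over the rest of the waveform; a minimum peak distance
    # of 0.4 s (about 150 bpm max) guards against double counting.
    peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 95),
                          distance=int(0.4 * fs))
    rr = np.diff(peaks) / fs            # R-R intervals in seconds
    return 60.0 / rr.mean() if rr.size else 0.0

# Synthetic test: one spike per second should read as 60 bpm.
fs = 500
ecg = np.zeros(10 * fs)
ecg[::fs] = 1.0
print(heart_rate(ecg, fs))  # → 60.0
```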

AI-Powered Classification: Classifies ECG signals into six categories:

  • Atrial Fibrillation/Flutter (AFIB)
  • Hypertrophy (HYP)
  • Myocardial Infarction (MI)
  • Normal Sinus Rhythm
  • Other Abnormality
  • ST-T Changes (STTC)

🧠 EEG Signal Analysis

Visualization Modes:

  • Continuous-time multi-channel viewer
  • XOR graph representation
  • Polar graph visualization
  • Recurrence plot analysis
  • Sampling viewer with frequency manipulation

Signal Processing Features:

  • Real-time EEG monitoring
  • Multi-channel visualization (supports all EDF channels)
  • Sampling frequency adjustment (50–1000 Hz)
  • Aliasing effect demonstration
  • Frequency domain analysis
  • Brain wave pattern detection
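Frequency-domain brain-wave analysis typically reduces to relative power in the classic EEG bands. A minimal sketch using Welch's PSD (the `band_powers` helper is an assumption; the band edges are the usual conventions):

```python
# Relative power in the standard EEG bands via Welch's method.
import numpy as np
from scipy.signal import welch

BANDS = {"delta": (0.5, 4), "theta": (4, 8),
         "alpha": (8, 13), "beta": (13, 30)}

def band_powers(x, fs):
    f, psd = welch(x, fs=fs, nperseg=2 * fs)   # 0.5 Hz resolution
    total = psd.sum()
    return {name: psd[(f >= lo) & (f < hi)].sum() / total
            for name, (lo, hi) in BANDS.items()}

# A pure 10 Hz tone should land almost entirely in the alpha band.
fs = 250
t = np.arange(0, 10, 1 / fs)
powers = band_powers(np.sin(2 * np.pi * 10 * t), fs)
print(max(powers, key=powers.get))  # → alpha
```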

AI-Powered Classification: Detects neurological conditions:

  • Normal brain activity
  • Alzheimer's Disease
  • Epilepsy
  • Parkinson's Disease

🔊 Speech Signal Processing

Audio Analysis Pipeline:

  1. Original Audio Analysis

    • Upload and playback audio files (WAV, MP3, OGG, WebM)
    • AI-powered gender recognition (Male/Female)
    • Real-time waveform visualization
  2. Resampled Audio

    • Configurable downsampling (1–48 kHz)
    • Aliasing effect demonstration in audio
    • Quality comparison at different sampling rates
  3. Anti-Aliased Audio

    • Low-pass filtering before resampling
    • Aliasing artifact prevention
    • Side-by-side quality comparison

Key Features:

  • Interactive sampling frequency control
  • Real-time audio playback
  • Gender classification using AI
  • Visual comparison of aliasing effects
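The anti-aliased resampling step can be expressed with SciPy's polyphase resampler, which low-pass filters before decimating. A hedged sketch, assuming mono audio in a float array (the `downsample` helper is illustrative, not the app's code):

```python
# Anti-aliased downsampling: resample_poly applies an FIR low-pass
# internally, so content above target_sr / 2 is removed before decimation.
import numpy as np
from scipy.signal import resample_poly

def downsample(audio, sr, target_sr):
    return resample_poly(audio, target_sr, sr), target_sr

sr = 48_000
t = np.arange(0, 1, 1 / sr)
audio = np.sin(2 * np.pi * 440 * t)          # concert-pitch A
out, out_sr = downsample(audio, sr, 8_000)
print(out_sr, len(out))  # → 8000 8000
```

One second of 48 kHz audio comes back as 8000 samples at 8 kHz, with the 440 Hz tone intact since it sits well below the new 4 kHz Nyquist limit.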

📡 Doppler Shift Analysis

Doppler Effect Generator:

  • Simulates sound of passing vehicles
  • Configurable parameters (velocity, frequency)
  • Realistic Doppler-shifted audio generation

Doppler Effect Detector:

  • Analyzes uploaded audio files
  • Detects Doppler effect presence
  • Estimates vehicle velocity
  • Determines original sound frequency
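For a stationary listener and a source passing at constant speed, both quantities follow from the approach and recede pitches alone. A sketch of that math (the helper name is illustrative; the real detector must first extract the two pitches from audio):

```python
# Invert the Doppler equations: f_approach = f0*c/(c - v),
# f_recede = f0*c/(c + v)  ⇒  v and f0 in closed form.
C = 343.0  # speed of sound in air, m/s

def doppler_estimate(f_approach, f_recede):
    """Recover source speed (m/s) and emitted frequency (Hz)."""
    v = C * (f_approach - f_recede) / (f_approach + f_recede)
    f0 = 2 * f_approach * f_recede / (f_approach + f_recede)  # harmonic mean
    return v, f0

# Forward-simulate a 440 Hz horn at 30 m/s, then invert:
v_true, f0_true = 30.0, 440.0
f_a = f0_true * C / (C - v_true)
f_r = f0_true * C / (C + v_true)
v, f0 = doppler_estimate(f_a, f_r)
print(round(v, 6), round(f0, 6))  # → 30.0 440.0
```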

Drone Sound Classifier:

  • Binary classification (Drone / Not Drone)
  • AI-powered detection
  • Real-time audio processing

🛰️ SAR Image Analysis

Synthetic Aperture Radar Processing:

  • Upload and analyze SAR images (.tif, .png, .jpg, .bmp)
  • Automated land-water classification
  • Coverage percentage estimation
  • Backscatter-based surface detection
  • Real-time image analysis
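Because calm water scatters radar energy away from the sensor, it appears dark in SAR imagery, so a simple intensity threshold separates it from land. A toy sketch of that idea (real SAR work needs radiometric calibration and speckle handling; the helper and threshold are assumptions):

```python
# Backscatter-based land/water separation on a 2-D intensity array.
import numpy as np

def water_coverage(img, threshold=None):
    if threshold is None:
        threshold = img.mean()          # crude global threshold
    water = img < threshold             # dark pixels → water
    return 100.0 * water.mean()         # percent of scene that is water

# Toy scene: dark left half (water), bright right half (land).
img = np.hstack([np.full((4, 4), 0.1), np.full((4, 4), 0.9)])
print(water_coverage(img))  # → 50.0
```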

🏗️ Architecture

┌─────────────────┐         ┌──────────────────────┐         ┌─────────────────┐
│   React + Vite  │ ◄────►  │  Flask API           │ ◄────►  │  Cloud-Hosted   │
│   Frontend      │  HTTP   │  (Jupyter Notebook)  │  REST   │   AI Models     │
│                 │         │                      │         │                 │
└─────────────────┘         └──────────────────────┘         └─────────────────┘
        │                            │                              │
        │                            │                              │
        ▼                            ▼                              ▼
Signal Upload              Data Processing              Classification
Visualization              DSP Operations               Model Inference
User Interaction           Response Handling            Analysis Results

Backend (Jupyter Notebook):

  • Serves as middleware between frontend and AI services
  • Handles signal preprocessing and DSP operations
  • Manages file uploads (audio, images, signal data)
  • Routes requests to cloud-hosted classification models
  • Returns processed results and analysis
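The general shape of one such route looks like the sketch below. The endpoint path, field name, and response fields are assumptions for illustration, not the repository's actual API:

```python
# Minimal Flask route: accept an uploaded signal, decode it, and return
# JSON (a real route would preprocess and forward features to the hosted
# model, e.g. with requests.post(MODEL_URL, json=features)).
import io
import numpy as np
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/api/classify/ecg", methods=["POST"])
def classify_ecg():
    upload = request.files.get("signal")
    if upload is None:
        return jsonify(error="no file uploaded"), 400
    signal = np.frombuffer(upload.read(), dtype=np.float32)
    return jsonify(samples=int(signal.size), label="pending")

# Exercise the route in-process with Flask's test client:
client = app.test_client()
resp = client.post("/api/classify/ecg",
                   data={"signal": (io.BytesIO(b"\x00" * 8), "ecg.bin")})
print(resp.get_json()["samples"])  # → 2 (8 bytes = two float32 samples)
```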

Deployment Options:

  • Local: Run Jupyter Notebook locally
  • Cloud: Deploy on Google Colab
  • Tunneling: Use ngrok for remote access

🛠️ Technology Stack

Frontend

  • React 19: Modern component-based UI framework
  • Vite: Lightning-fast build tool and dev server
  • React Router: Client-side routing
  • Axios: HTTP client for API communication
  • Lucide React: Beautiful icon library
  • Custom DSP Visualizations: Canvas-based signal rendering

Backend

  • Flask: Lightweight Python web framework
  • NumPy: Numerical computing for DSP operations
  • SciPy: Advanced signal processing algorithms
  • MNE-Python: EEG/MEG data processing
  • wfdb: PhysioNet WFDB tools for ECG processing
  • librosa: Audio signal processing
  • soundfile: Audio file I/O

AI/ML Services

  • Cloud-hosted classification models
  • REST API integration
  • Real-time inference

📦 Installation

Prerequisites

  • Node.js v18 or higher
  • npm or yarn
  • Python 3.8+
  • pip package manager

Frontend Setup

# Clone the repository
git clone https://github.com/yourusername/signal-viewer-classifier.git
cd signal-viewer-classifier

# Navigate to frontend
cd frontend/app

# Install dependencies
npm install

# Start development server
npm run dev

The application will be available at http://localhost:5173

Backend Setup

# Navigate to project root
cd signal-viewer-classifier

# Install Python dependencies
pip install flask numpy scipy mne wfdb librosa soundfile

# Open the backend notebook
jupyter notebook backend_api.ipynb

Or use Google Colab:

  1. Upload backend_api.ipynb to Google Colab
  2. Install dependencies in the notebook
  3. Run all cells to start the Flask server
  4. Use ngrok or Colab's built-in tunneling for remote access

🚀 Usage

Running the Application

  1. Start the Backend:

    • Open backend_api.ipynb in Jupyter or Google Colab
    • Run all cells to start the Flask API server
    • Note the API URL (e.g., http://localhost:5000 or ngrok URL)
  2. Start the Frontend:

    cd frontend/app
    npm run dev
  3. Access the Application:

    • Open browser to http://localhost:5173
    • Select a signal processing module from the home page

Signal Processing Modules

ECG Analysis

  1. Upload ECG data files (.hea + .dat format)
  2. Select visualization mode (Continuous, XOR, Polar, Recurrence, Sampling)
  3. Adjust sampling frequency to see aliasing effects
  4. View AI classification results

EEG Analysis

  1. Upload EDF file containing EEG data
  2. Select channels to visualize
  3. Adjust sampling frequency (50–1000 Hz)
  4. Explore different visualization modes
  5. View neurological condition classification

Speech Processing

  1. Upload audio file (WAV, MP3, OGG, WebM)
  2. View original audio analysis with gender recognition
  3. Resample audio at different frequencies
  4. Apply anti-aliasing filters
  5. Compare audio quality and aliasing artifacts

Doppler Analysis

Generator Mode:

  1. Set vehicle velocity and sound frequency
  2. Generate Doppler-shifted audio
  3. Play and download generated audio

Detector Mode:

  1. Upload audio file
  2. Analyze for Doppler effect
  3. View detected velocity and frequency

Drone Detection:

  1. Upload audio sample
  2. Get binary classification (Drone / Not Drone)

SAR Analysis

  1. Upload SAR image
  2. View automated land-water segmentation
  3. Analyze coverage percentages
  4. Inspect terrain classification results

📂 Project Structure

signal-viewer-classifier/
├── frontend/
│   └── app/
│       ├── src/
│       │   ├── tasks/
│       │   │   └── modules/            # Main signal processing modules
│       │   │       ├── components/
│       │   │       │   ├── ecg/        # ECG viewer & classifier
│       │   │       │   ├── eeg/        # EEG viewer & classifier
│       │   │       │   ├── speech/     # Speech processing
│       │   │       │   ├── doppler/    # Doppler analysis
│       │   │       │   └── sar/        # SAR image analysis
│       │   │       ├── data/
│       │   │       └── styles/
│       │   ├── components/             # Shared components
│       │   ├── services/               # API services
│       │   ├── hooks/                  # Custom React hooks
│       │   └── App.jsx                 # Main application
│       ├── public/
│       ├── package.json
│       └── vite.config.js
├── backend_api.ipynb                   # Backend API (Jupyter Notebook)
├── custom_dsp.py                       # Custom DSP functions
├── README.md
├── .gitignore
└── requirements.txt                    # Python dependencies

🔬 Signal Processing Concepts

Sampling Theory

  • Nyquist-Shannon Sampling Theorem: The sample rate must be at least 2× the maximum signal frequency
  • Nyquist Frequency: Half of the sampling rate
  • Sampling at or above the Nyquist rate preserves all information in a band-limited signal

Aliasing

  • Occurs when signal frequencies exceed the Nyquist frequency (i.e., the sampling rate is below 2× the highest frequency present)
  • High-frequency components masquerade as low frequencies
  • Causes signal distortion and information loss
  • Demonstrated in all signal processing modules
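The folding rule is easy to check numerically: a tone at f0 sampled at fs reappears at |f0 − fs·round(f0/fs)|. A short sketch (the `alias_of` helper is illustrative):

```python
# A 700 Hz tone sampled at 1 kHz masquerades as 300 Hz.
import numpy as np

def alias_of(f0, fs):
    return abs(f0 - fs * round(f0 / fs))

fs, f0 = 1000, 700
print(alias_of(f0, fs))  # → 300

# Confirm with an FFT of one second of the sampled tone
# (1 Hz per bin, so the bin index is the frequency):
x = np.sin(2 * np.pi * f0 * np.arange(fs) / fs)
spectrum = np.abs(np.fft.rfft(x))
print(np.argmax(spectrum))  # → 300
```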

Anti-Aliasing

  • Low-pass filtering before downsampling
  • Removes frequency components above Nyquist limit
  • Prevents aliasing artifacts
  • Implemented in ECG, EEG, and Speech modules

Resampling

  • Upsampling: Increasing sampling rate (interpolation)
  • Downsampling: Decreasing sampling rate (decimation)
  • Interactive controls for educational demonstration
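Both directions can be demonstrated with SciPy's polyphase resampler, which interpolates by `up`, applies an anti-aliasing filter, then decimates by `down` (a sketch, not the app's implementation):

```python
# Upsampling (interpolation) and downsampling (decimation) in one call.
import numpy as np
from scipy.signal import resample_poly

x = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 1000, endpoint=False))

upsampled = resample_poly(x, up=2, down=1)    # 1000 → 2000 samples
downsampled = resample_poly(x, up=1, down=4)  # 1000 → 250 samples
print(len(upsampled), len(downsampled))  # → 2000 250
```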

🎓 Educational Use

This platform is ideal for:

  • Digital Signal Processing courses
  • Biomedical Engineering education
  • Audio Engineering programs
  • Machine Learning demonstrations
  • Research in signal analysis

Key Learning Outcomes:

  • Understanding sampling theory and aliasing
  • Hands-on experience with real signals
  • Visualization of abstract DSP concepts
  • AI classification pipeline understanding

🤝 Contributing

Contributions are welcome! Please follow these steps:

  1. Fork the repository
  2. Create a feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request

📄 License

This project is open source and available under the MIT License.


🙏 Acknowledgments

  • PhysioNet for ECG/EEG datasets and WFDB tools
  • OpenML and Kaggle for signal classification datasets
  • React and Vite communities for excellent tools
  • Flask for lightweight backend framework

📞 Support

For issues, questions, or suggestions, please open an issue on the GitHub repository.


🗺️ Roadmap

  • Add more signal types (EMG, EOG)
  • Implement real-time streaming
  • Add more AI models
  • Mobile responsive design improvements
  • Docker containerization
  • Advanced filtering options
  • Export analysis reports

Built with ❤️ for signal processing education and research
