
Satellite Image Classification with Hybrid CNN + Classical ML Models

Python · PyTorch · scikit-learn · Jupyter · License: MIT


Overview

This project investigates hybrid image classification pipelines for satellite remote sensing imagery. Rather than relying solely on end-to-end deep learning, we explore a two-stage approach: using a pretrained VGG16 CNN as a feature extractor, then benchmarking five classical machine learning classifiers on those extracted features.

The core question: can classical ML models match or complement a full deep network when given rich CNN-derived features?

The study is conducted on the RSI-CB256 dataset, a four-class satellite image benchmark, and provides a systematic comparison of task accuracy across all model configurations.


Dataset

RSI-CB256 — Remote Sensing Image Classification Benchmark

  • 4 classes of satellite sensor imagery
  • Images available in two resolutions: 224×224 and 64×64
  • All images are standardized to 224×224 for consistency across experiments

[Image: Screenshot from 2024-02-15 22-27-53]


Method

Pipeline Architecture

[Image: Screenshot from 2024-02-15 22-32-01]


The system is built around a two-stage design:

Stage 1 — Feature Extraction (CNN)

A VGG16 backbone pretrained on ImageNet is used as a fixed feature extractor via transfer learning. The output of the convolutional layers serves as a rich, high-dimensional feature representation of each satellite image.

Stage 2 — Classification

The extracted CNN features are passed to one of two classification heads:

  • Deep Network — A fully connected neural network trained end-to-end on the VGG16 features
  • Classical ML Classifiers — The same CNN features are used to train and evaluate five classical models:
| Classifier | Type |
| --- | --- |
| K-Nearest Neighbors (KNN) | Instance-based |
| Logistic Regression | Linear |
| Support Vector Machine (SVM) | Kernel-based |
| Random Forest (RF) | Ensemble / Bagging |
| AdaBoost | Ensemble / Boosting |

This setup enables a controlled, apples-to-apples comparison: every classifier receives identical feature inputs, so differences in accuracy are attributable purely to the classifier, not the features.
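The benchmarking loop can be sketched with scikit-learn. In this minimal sketch, synthetic four-class data stands in for the real extracted VGG16 features, and every classifier uses library-default hyperparameters (the notebook's actual settings may differ):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Stand-in for the extracted CNN features: 4 classes, as in RSI-CB256.
X, y = make_classification(n_samples=400, n_features=64, n_informative=32,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Every classifier sees identical feature inputs, so accuracy differences
# are attributable to the classifier alone.
classifiers = {
    "KNN": KNeighborsClassifier(),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(random_state=0),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}

scores = {name: clf.fit(X_tr, y_tr).score(X_te, y_te)
          for name, clf in classifiers.items()}
```

Swapping the synthetic `X` for the frozen-backbone features reproduces the comparison described above.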


Results

Deep Network — Training & Validation Performance

Training and validation loss and accuracy of the deep network:

[Image: Screenshot from 2024-02-15 22-40-02]

[Image: Screenshot from 2024-02-15 22-41-44]

Full classifier comparison results and metrics are available in the notebook.

Key Takeaways

  • Transfer learning is highly effective — VGG16 features pretrained on ImageNet generalize well to satellite imagery without any fine-tuning of the backbone
  • Classical ML on CNN features is competitive — Several classical classifiers achieve strong accuracy when given rich deep features, demonstrating that end-to-end training is not always necessary
  • SVM and Logistic Regression tend to perform best among the classical methods on linearly separable CNN feature spaces
  • The hybrid pipeline offers a practical alternative in resource-constrained settings where full deep network training is costly

Topics

machine-learning deep-learning computer-vision image-classification transfer-learning vgg16 svm knn random-forest adaboost satellite-imagery remote-sensing jupyter-notebook


License

This project is licensed under the MIT License. See LICENSE for details.


Bursa Uludağ University · Computer Engineering Department
