Background • Usage • Code • Citation
Progressive Adversarial Robustness Distillation (ProARD) enables the efficient one-time training of a dynamic network that supports a diverse range of accurate and robust student networks without retraining. ProARD builds a dynamic deep neural network from dynamic layers that encompass variations in width, depth, and expansion ratio at each design stage, supporting a wide range of architectures.
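As a rough illustration of what "variations in width, depth, and expansion in each design stage" means, the sketch below samples one sub-network configuration from such a space. The value sets and the `sample_config` helper are hypothetical placeholders, not ProARD's actual API or search space.

```python
import random

# Hypothetical elastic choices per design stage (illustrative values only,
# not ProARD's actual configuration space).
WIDTH_MULTS = [0.65, 0.8, 1.0]
DEPTHS = [2, 3, 4]
EXPAND_RATIOS = [3, 4, 6]

def sample_config(num_stages=4, rng=random):
    """Sample one sub-network: a width/depth/expansion choice per stage."""
    return [
        {
            "width": rng.choice(WIDTH_MULTS),
            "depth": rng.choice(DEPTHS),
            "expand": rng.choice(EXPAND_RATIOS),
        }
        for _ in range(num_stages)
    ]

config = sample_config()
print(config)
```

Each sampled configuration corresponds to one student network that the single dynamic network can serve without retraining.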
git clone https://github.com/hamidmousavi0/ProARD.git
- attacks/ # Adversarial attack methods (PGD, AutoAttack, FGSM, DeepFool, etc.; [reference](https://github.com/imrahulr/hat.git))
- proard/
- classification/
- data_provider/ # The dataset and dataloader definitions for CIFAR-10, CIFAR-100, and ImageNet.
- elastic_nn/
- modules/ # The definition of dynamic layers
- networks/ # The definition of dynamic networks
- training/ # Progressive training
- networks/ # The original networks
- run_manager/ # The configs and distributed training
- nas/
- accuracy_predictor/ # The accuracy and robustness predictor
- efficiency_predictor/ # The efficiency predictor
- search_algorithm/ # The Multi-Objective Search Engine
- utils/ # Utility functions
- model_zoo.py # All the models for evaluation
- create_acc_rob_pred_dataset.py # Create dataset to train the accuracy-robustness predictor.
- create_acc_rob_pred.py # Build the predictor model.
- eval_ofa_net.py # Evaluate the sub-networks.
- search_best.py # Search the best sub-net
- train_ofa_net_WPS.py # Train the dynamic network without progressive training.
- train_ofa_net.py # Train the dynamic network with progressive training.
- train_teacher_net.py # Train the teacher network for robust knowledge distillation.
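To give a feel for how the pieces under `nas/` fit together (predictors scoring candidates, a search engine picking the best sub-network), here is a toy sketch: random search over sampled configurations, scored by stand-in accuracy/robustness and efficiency predictors. All names and scoring functions are hypothetical placeholders, not the repository's API.

```python
import random

def sample_config(rng):
    """Hypothetical sub-network encoding: per-stage (width, depth, expansion)."""
    return [
        (rng.choice([0.65, 0.8, 1.0]),   # width multiplier
         rng.choice([2, 3, 4]),          # depth
         rng.choice([3, 4, 6]))          # expansion ratio
        for _ in range(4)
    ]

def predicted_cost(cfg):
    """Stand-in efficiency predictor: larger sub-networks cost more."""
    return sum(w * d * e for w, d, e in cfg)

def predicted_score(cfg):
    """Stand-in accuracy+robustness predictor (toy: rewards capacity)."""
    return sum(w + 0.1 * d + 0.05 * e for w, d, e in cfg)

def search_best(budget, n_samples=200, seed=0):
    """Random search: best predicted score among configs within the budget."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    for _ in range(n_samples):
        cfg = sample_config(rng)
        if predicted_cost(cfg) > budget:
            continue  # violates the efficiency constraint
        s = predicted_score(cfg)
        if s > best_score:
            best, best_score = cfg, s
    return best, best_score

best_cfg, best_score = search_best(budget=60.0)
print(best_cfg, best_score)
```

The actual repository uses trained predictors and a multi-objective search engine in place of these toy functions, but the loop structure is the same: sample, filter by efficiency, rank by predicted accuracy and robustness.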
From Source
Download this repository into your project folder.
python eval_ofa_net.py --path <path to dataset> --net <dynamic net: ResNet50 or MBV3> \
    --dataset <cifar10 or cifar100> --robust_mode <True or False> \
    --WPS <True or False> --attack <one of: 'fgsm', 'linf-pgd', 'fgm', 'l2-pgd', 'linf-df', 'l2-df', 'linf-apgd', 'l2-apgd', 'squar_attack', 'autoattack', 'apgd_ce'>
horovodrun -np 4 python train_teacher_net.py --model_name <ResNet50 or MBV3> --dataset <cifar10 or cifar100> \
    --robust_mode <True or False> --epsilon 0.031 --num_steps 10 \
    --step_size 0.0078 --distance 'l-inf' --train_criterion 'trades' \
    --attack_type 'linf-pgd'
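The teacher is trained against an L∞ PGD adversary with ε = 0.031, step size 0.0078, and 10 steps. As a minimal illustration of what those three flags control (on a toy quadratic loss rather than a network; the helper below is a pedagogical sketch, not the repository's attack code):

```python
def linf_pgd(x, grad_fn, epsilon=0.031, step_size=0.0078, num_steps=10):
    """L-inf PGD sketch: ascend the loss via signed gradient steps,
    projecting the perturbation back into the [-epsilon, epsilon] ball."""
    x_adv = list(x)
    for _ in range(num_steps):
        g = grad_fn(x_adv)
        # signed gradient ascent step
        x_adv = [xa + step_size * (1 if gi > 0 else -1 if gi < 0 else 0)
                 for xa, gi in zip(x_adv, g)]
        # project onto the L-inf ball around the clean input
        x_adv = [xi + max(-epsilon, min(epsilon, xa - xi))
                 for xi, xa in zip(x, x_adv)]
    return x_adv

# Toy loss L(z) = 0.5 * ||z - t||^2, gradient (z - t); PGD pushes z
# away from t while staying within epsilon of the clean input x.
t = [1.0, -1.0, 0.5]
x = [0.0, 0.0, 0.0]
x_adv = linf_pgd(x, grad_fn=lambda z: [zi - ti for zi, ti in zip(z, t)])
print(x_adv)
```

With 10 steps of size 0.0078, the attack can move up to 0.078 per coordinate, so the ε = 0.031 projection is what actually bounds the final perturbation.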
horovodrun -np 4 python train_ofa_net.py --task <width or kernel> --model_name <ResNet50 or MBV3> --dataset <cifar10 or cifar100> \
    --robust_mode <True or False> --epsilon 0.031 --num_steps 10 \
    --step_size 0.0078 --distance 'l-inf' --train_criterion 'trades' \
    --attack_type 'linf-pgd' --kd_criterion 'rslad' --phase 1
horovodrun -np 4 python train_ofa_net.py --task 'depth' --model_name <ResNet50 or MBV3> --dataset <cifar10 or cifar100> \
    --robust_mode <True or False> --epsilon 0.031 --num_steps 10 \
    --step_size 0.0078 --distance 'l-inf' --train_criterion 'trades' \
    --attack_type 'linf-pgd' --kd_criterion 'rslad' --phase 1
horovodrun -np 4 python train_ofa_net.py --task 'depth' --model_name <ResNet50 or MBV3> --dataset <cifar10 or cifar100> \
    --robust_mode <True or False> --epsilon 0.031 --num_steps 10 \
    --step_size 0.0078 --distance 'l-inf' --train_criterion 'trades' \
    --attack_type 'linf-pgd' --kd_criterion 'rslad' --phase 2
horovodrun -np 4 python train_ofa_net.py --task 'expand' --model_name <ResNet50 or MBV3> --dataset <cifar10 or cifar100> \
    --robust_mode <True or False> --epsilon 0.031 --num_steps 10 \
    --step_size 0.0078 --distance 'l-inf' --train_criterion 'trades' \
    --attack_type 'linf-pgd' --kd_criterion 'rslad' --phase 1
horovodrun -np 4 python train_ofa_net.py --task 'expand' --model_name <ResNet50 or MBV3> --dataset <cifar10 or cifar100> \
    --robust_mode <True or False> --epsilon 0.031 --num_steps 10 \
    --step_size 0.0078 --distance 'l-inf' --train_criterion 'trades' \
    --attack_type 'linf-pgd' --kd_criterion 'rslad' --phase 2
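The commands above follow a progressive schedule: elastic width/kernel first, then depth in two phases, then expansion in two phases. One compact way to assemble the whole sequence (the flag layout is copied from the commands above; the schedule constants and `make_command` helper are illustrative, not part of the repository):

```python
# Progressive training schedule as shown in the commands above.
SCHEDULE = [("width", 1), ("depth", 1), ("depth", 2), ("expand", 1), ("expand", 2)]

def make_command(task, phase, model="ResNet50", dataset="cifar10"):
    """Assemble one horovodrun invocation (flags taken from the README commands)."""
    return (
        f"horovodrun -np 4 python train_ofa_net.py --task '{task}' "
        f"--model_name {model} --dataset {dataset} --robust_mode True "
        f"--epsilon 0.031 --num_steps 10 --step_size 0.0078 "
        f"--distance 'l-inf' --train_criterion 'trades' "
        f"--attack_type 'linf-pgd' --kd_criterion 'rslad' --phase {phase}"
    )

commands = [make_command(t, p) for t, p in SCHEDULE]
print("\n".join(commands))
```

Running the stages in this order matters: each phase fine-tunes the dynamic network while progressively enlarging the supported architecture space.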
- Add object detection task
- Add Transformer architectures
View the published paper (preprint), accepted at IJCNN 2025.
We acknowledge the National Academic Infrastructure for Supercomputing in Sweden (NAISS), partially funded by the Swedish Research Council through grant agreement no
Some of the code in this repository is based on the following amazing works:

