Type — `W`: weight (unstructured) pruning, `F`: filter/channel (structured) pruning, `WF`: both, `Other`: other or combined approaches.

| Title | Venue | Type | Code |
|:------|:-----:|:----:|:----:|
| HYDRA: Pruning Adversarially Robust Neural Networks | NeurIPS | W | PyTorch(Author) |
| Logarithmic Pruning is All You Need | NeurIPS | W | - |
| Directional Pruning of Deep Neural Networks | NeurIPS | W | - |
| Movement Pruning: Adaptive Sparsity by Fine-Tuning | NeurIPS | W | PyTorch(Author) |
| Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot | NeurIPS | W | PyTorch(Author) |
| Neuron Merging: Compensating for Pruned Neurons | NeurIPS | F | PyTorch(Author) |
| Neuron-level Structured Pruning using Polarization Regularizer | NeurIPS | F | PyTorch(Author) |
| SCOP: Scientific Control for Reliable Neural Network Pruning | NeurIPS | F | - |
| Storage Efficient and Dynamic Flexible Runtime Channel Pruning via Deep Reinforcement Learning | NeurIPS | F | - |
| The Generalization-Stability Tradeoff In Neural Network Pruning | NeurIPS | F | PyTorch(Author) |
| Pruning Filter in Filter | NeurIPS | Other | PyTorch(Author) |
| Position-based Scaled Gradient for Model Quantization and Pruning | NeurIPS | Other | PyTorch(Author) |
| Bayesian Bits: Unifying Quantization and Pruning | NeurIPS | Other | - |
| Pruning neural networks without any data by iteratively conserving synaptic flow | NeurIPS | Other | PyTorch(Author) |
| EagleEye: Fast Sub-net Evaluation for Efficient Neural Network Pruning | ECCV (Oral) | F | PyTorch(Author) |
| DSA: More Efficient Budgeted Pruning via Differentiable Sparsity Allocation | ECCV | F | - |
| DHP: Differentiable Meta Pruning via HyperNetworks | ECCV | F | PyTorch(Author) |
| Meta-Learning with Network Pruning | ECCV | W | - |
| Accelerating CNN Training by Pruning Activation Gradients | ECCV | W | - |
| DA-NAS: Data Adapted Pruning for Efficient Neural Architecture Search | ECCV | Other | - |
| Differentiable Joint Pruning and Quantization for Hardware Efficiency | ECCV | Other | - |
| Channel Pruning via Automatic Structure Search | IJCAI | F | PyTorch(Author) |
| Adversarial Neural Pruning with Latent Vulnerability Suppression | ICML | W | - |
| Proving the Lottery Ticket Hypothesis: Pruning is All You Need | ICML | W | - |
| Soft Threshold Weight Reparameterization for Learnable Sparsity | ICML | WF | PyTorch(Author) |
| Network Pruning by Greedy Subnetwork Selection | ICML | F | - |
| Operation-Aware Soft Channel Pruning using Differentiable Masks | ICML | F | - |
| DropNet: Reducing Neural Network Complexity via Iterative Pruning | ICML | F | - |
| Towards Efficient Model Compression via Learned Global Ranking | CVPR (Oral) | F | PyTorch(Author) |
| HRank: Filter Pruning using High-Rank Feature Map | CVPR (Oral) | F | PyTorch(Author) |
| Neural Network Pruning with Residual-Connections and Limited-Data | CVPR (Oral) | F | - |
| Multi-Dimensional Pruning: A Unified Framework for Model Compression | CVPR (Oral) | WF | - |
| DMCP: Differentiable Markov Channel Pruning for Neural Networks | CVPR (Oral) | F | TensorFlow(Author) |
| Group Sparsity: The Hinge Between Filter Pruning and Decomposition for Network Compression | CVPR | F | PyTorch(Author) |
| Few Sample Knowledge Distillation for Efficient Network Compression | CVPR | F | - |
| Discrete Model Compression With Resource Constraint for Deep Neural Networks | CVPR | F | - |
| Structured Compression by Weight Encryption for Unstructured Pruning and Quantization | CVPR | W | - |
| Learning Filter Pruning Criteria for Deep Convolutional Neural Networks Acceleration | CVPR | F | - |
| APQ: Joint Search for Network Architecture, Pruning and Quantization Policy | CVPR | F | - |
| Comparing Rewinding and Fine-tuning in Neural Network Pruning | ICLR (Oral) | WF | TensorFlow(Author) |
| A Signal Propagation Perspective for Pruning Neural Networks at Initialization | ICLR (Spotlight) | W | - |
| ProxSGD: Training Structured Neural Networks under Regularization and Constraints | ICLR | W | TF+PT(Author) |
| One-Shot Pruning of Recurrent Neural Networks by Jacobian Spectrum Evaluation | ICLR | W | - |
| Lookahead: A Far-sighted Alternative of Magnitude-based Pruning | ICLR | W | PyTorch(Author) |
| Dynamic Model Pruning with Feedback | ICLR | WF | - |
| Provable Filter Pruning for Efficient Neural Networks | ICLR | F | - |
| Data-Independent Neural Pruning via Coresets | ICLR | W | - |
| AutoCompress: An Automatic DNN Structured Pruning Framework for Ultra-High Compression Rates | AAAI | F | - |
| DARB: A Density-Aware Regular-Block Pruning for Deep Neural Networks | AAAI | Other | - |
| Pruning from Scratch | AAAI | Other | - |