SmoothNets: Optimizing CNN architecture design for differentially private deep learning
Nicolas W. Remerscheid; Alexander Ziller; Daniel Rueckert; Georgios Kaissis
Abstract
The arguably most widely employed algorithm to train deep neural networks with Differential Privacy is DP-SGD, which requires clipping and noising of per-sample gradients. This introduces a reduction in model utility compared to non-private training. Empirically, it can be observed that this accuracy degradation is strongly dependent on the model architecture. We investigated this phenomenon and, by combining components which exhibit good individual performance, distilled a new model architecture termed SmoothNet, which is characterised by increased robustness to the challenges of DP-SGD training. Experimentally, we benchmark SmoothNet against standard architectures on two benchmark datasets and observe that our architecture outperforms others, reaching an accuracy of 73.5% on CIFAR-10 at ε=7.0 and 69.2% at ε=7.0 on ImageNette, a state-of-the-art result compared to prior architectural modifications for DP.
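To make the per-sample clipping and noising mentioned above concrete, the following is a minimal sketch of one DP-SGD step (in the style of Abadi et al., 2016), not the authors' training code. It uses a naive microbatch loop to obtain per-sample gradients; the toy linear model and the values of `CLIP_NORM`, `NOISE_MULTIPLIER`, and `LR` are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of a single DP-SGD step: clip each per-sample gradient,
# sum, add calibrated Gaussian noise, then average and apply the update.
import torch
import torch.nn as nn

CLIP_NORM = 1.0          # per-sample L2 clipping bound C (assumed value)
NOISE_MULTIPLIER = 1.0   # noise std = NOISE_MULTIPLIER * C (assumed value)
LR = 0.1                 # learning rate (assumed value)

model = nn.Linear(10, 2)          # toy model standing in for a CNN
loss_fn = nn.CrossEntropyLoss()

def dp_sgd_step(xb, yb):
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    # Per-sample gradients via a microbatch loop (simple but slow; real
    # implementations vectorise this, e.g. with functorch/Opacus).
    for x, y in zip(xb, yb):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        # Clip the full per-sample gradient to L2 norm at most CLIP_NORM.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = (CLIP_NORM / (total_norm + 1e-6)).clamp(max=1.0)
        for s, g in zip(summed, grads):
            s.add_(g * scale)

    # Add Gaussian noise calibrated to the clipping bound, average over
    # the batch, and take a plain SGD step.
    with torch.no_grad():
        for p, s in zip(params, summed):
            noise = torch.randn_like(s) * NOISE_MULTIPLIER * CLIP_NORM
            p.add_(-LR * (s + noise) / len(xb))

xb, yb = torch.randn(8, 10), torch.randint(0, 2, (8,))
dp_sgd_step(xb, yb)
```

The clipping bounds each sample's influence on the update, and the noise scale is tied to that bound; together they yield the privacy guarantee, at the cost of the utility degradation the paper addresses.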