Deep regularization and direct training of the inner layers of Neural Networks with Kernel Flows

Gene Ryan Yoo, Houman Owhadi

Abstract

We introduce a new regularization method for Artificial Neural Networks (ANNs) based on Kernel Flows (KFs). KFs were introduced as a method for kernel selection in regression/kriging, based on minimizing the loss of accuracy incurred by halving the number of interpolation points in random batches of the dataset. Writing $f_θ(x) = \big(f^{(n)}_{θ_n}\circ f^{(n-1)}_{θ_{n-1}} \circ \dots \circ f^{(1)}_{θ_1}\big)(x)$ for the functional representation of the compositional structure of the ANN, the outputs of the inner layers $h^{(i)}(x) = \big(f^{(i)}_{θ_i}\circ f^{(i-1)}_{θ_{i-1}} \circ \dots \circ f^{(1)}_{θ_1}\big)(x)$ define a hierarchy of feature maps and kernels $k^{(i)}(x,x')=\exp(- γ_i \|h^{(i)}(x)-h^{(i)}(x')\|_2^2)$. When combined with a batch of the dataset, these kernels produce KF losses $e_2^{(i)}$ (the $L^2$ regression error incurred by using a random half of the batch to predict the other half), which depend on the parameters of the inner layers $θ_1,\ldots,θ_i$ (and on $γ_i$). The proposed method simply consists of aggregating a subset of these KF losses with a classical output loss. We test the proposed method on CNNs and WRNs without altering their structure or output classifier and report reduced test errors, decreased generalization gaps, and increased robustness to distribution shift, without a significant increase in computational complexity. We suspect that these results might be explained by the fact that while conventional training only employs a linear functional (a generalized moment) of the empirical distribution defined by the dataset and can be prone to trapping in the Neural Tangent Kernel regime (under over-parameterization), the proposed loss function (defined as a nonlinear functional of the empirical distribution) effectively trains the underlying kernel defined by the CNN beyond regressing the data with that kernel.
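As a concrete illustration, the sketch below shows one way the KF loss $e_2^{(i)}$ described above could be computed for a single inner layer: the batch is split in half at random, kernel ridge regression with the Gaussian kernel $k^{(i)}$ is fit on one half, and the $L^2$ error of its predictions on the other half is returned. This is a minimal NumPy sketch under stated assumptions (one-hot label matrix `Y`, a small ridge term `reg` for numerical stability, hypothetical per-layer weights `lambda_i`), not the authors' implementation; see the linked TensorFlow repository for the actual code.

import numpy as np

def rbf_kernel(A, B, gamma):
    # k(x, x') = exp(-gamma * ||h(x) - h(x')||_2^2), as in the abstract
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2.0 * A @ B.T)
    return np.exp(-gamma * sq_dists)

def kf_loss(H, Y, gamma, reg=1e-6, rng=None):
    """KF loss e2: L2 error from kriging a random half of the batch
    from the other half, using inner-layer features H (batch x dim)."""
    rng = np.random.default_rng() if rng is None else rng
    n = H.shape[0]
    idx = rng.permutation(n)
    half, rest = idx[: n // 2], idx[n // 2:]
    # Fit kernel ridge regression on one half of the batch ...
    K = rbf_kernel(H[half], H[half], gamma) + reg * np.eye(len(half))
    coeffs = np.linalg.solve(K, Y[half])
    # ... predict the other half and measure the squared error
    Y_pred = rbf_kernel(H[rest], H[half], gamma) @ coeffs
    return np.mean(np.sum((Y[rest] - Y_pred) ** 2, axis=1))

# Aggregated objective (lambda_i and the selected layers are assumptions):
# total_loss = output_loss + sum(lambda_i * kf_loss(H_i, Y, gamma_i)
#                                for i in selected_inner_layers)

In the full method, `H_i` would be the differentiable inner-layer activations of the network on the batch, so minimizing `total_loss` trains the inner layers both through the output classifier and directly through the kernels they induce.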

Code Repositories

kernel-enthusiasts/KF_NN2 (TensorFlow)

Benchmarks

Benchmark: image-classification-on-qmnist
Methodology: Deep regularization
Metrics: Accuracy (%): 99.67
