Universal Approximation Theorem
The Universal Approximation Theorem (UAT) is an important theoretical foundation of neural networks. It states that a neural network with a sufficiently rich structure can approximate any continuous function to arbitrary precision. The result was first proved by George Cybenko in 1989, in the paper "Approximation by Superpositions of a Sigmoidal Function". He showed that a feedforward network with a single hidden layer can approximate any continuous function, provided it has enough neurons and uses a nonlinear activation function (such as the sigmoid). Kurt Hornik later extended this result in the paper "Approximation capabilities of multilayer feedforward networks", showing that a much wider class of activation functions suffices: the theorem applies whenever the activation function is non-constant, bounded, monotonically increasing, and continuous.
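The theorem's claim can be illustrated numerically. The sketch below (an illustration, not a proof; the target function, hidden-layer width, weight scale, and sample grid are all arbitrary choices) builds a single hidden layer of sigmoid units, fixes random hidden weights, and solves for the output weights by least squares. The resulting sum of sigmoids closely tracks a continuous target function, and the error shrinks as the number of hidden units grows:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_hidden = 50                              # number of hidden sigmoid units
x = np.linspace(-np.pi, np.pi, 200)        # sample grid on a compact interval
f = np.sin(x)                              # target continuous function (arbitrary choice)

# Fix random hidden-layer parameters w_i, b_i; the hidden layer computes
# sigmoid(w_i * x + b_i) for each unit, giving a (200, 50) feature matrix.
w = rng.normal(scale=3.0, size=n_hidden)
b = rng.normal(scale=3.0, size=n_hidden)
H = sigmoid(np.outer(x, w) + b)

# Solve for output weights c so that sum_i c_i * sigmoid(w_i * x + b_i) ~ f(x).
c, *_ = np.linalg.lstsq(H, f, rcond=None)
approx = H @ c

max_err = np.max(np.abs(approx - f))
print(f"max |error| = {max_err:.4f}")
```

Training only the output layer on fixed random features is of course weaker than training the whole network, yet it is already enough to see the approximation at work on this example.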