HyperAI

Multilayer Perceptron


A Multilayer Perceptron (MLP) is a feedforward artificial neural network that maps a set of input vectors to a set of output vectors. It can be viewed as a directed graph consisting of multiple layers of nodes, with each layer fully connected to the next. Except for the input nodes, every node is a neuron (or processing unit) with a nonlinear activation function. The MLP is a generalization of the perceptron, and it overcomes the perceptron's inability to recognize linearly inseparable data.
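The classic example of linearly inseparable data is the XOR function, which a single-layer perceptron cannot compute but a two-layer network can. The sketch below uses hand-chosen weights and a step activation purely for illustration (the weights and thresholds are not from this article; any equivalent choice works):

```python
import numpy as np

def step(z):
    # Heaviside step activation: 1 if the input is positive, else 0
    return (np.asarray(z) > 0).astype(int)

def mlp_xor(x1, x2):
    # Hidden layer with two neurons:
    #   h[0] fires like OR  (threshold 0.5)
    #   h[1] fires like AND (threshold 1.5)
    h = step(np.array([x1 + x2 - 0.5, x1 + x2 - 1.5]))
    # Output neuron: OR and not AND, i.e. XOR
    return int(step(h[0] - h[1] - 0.5))

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(a, b, "->", mlp_xor(a, b))
```

The hidden layer re-maps the four input points into a space where the two classes become linearly separable, which is exactly what a single perceptron cannot do on its own.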

The multilayer perceptron rose to prominence after the introduction of the backpropagation algorithm, which made it possible to train multilayer networks. The algorithm is described in detail in the 1986 paper "Learning representations by back-propagating errors" by David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams, which shows how to use it to train multilayer perceptrons.

Although early concepts and prototypes of the multilayer perceptron existed before, this paper was the key publication that clearly linked the backpropagation algorithm with the multilayer network structure, and it was widely recognized in neural network research. Until then, multilayer networks had seen little use because effective training methods were lacking.
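The training loop that backpropagation enables can be sketched in a few lines: a forward pass, an error at the output, gradients propagated backward through the chain rule, and a gradient-descent weight update. Below is a minimal NumPy sketch trained on XOR; the layer sizes, learning rate, epoch count, and random seed are illustrative choices, not values from the original paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# XOR dataset: the canonical linearly inseparable problem
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# A 2-8-1 network with random initial weights (sizes chosen for illustration)
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

lr = 1.0
for epoch in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Error at the output (derivative of mean squared error, up to a constant)
    err = out - y
    # Backward pass: chain rule through the sigmoid (s' = s * (1 - s))
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(0)

print(np.round(out.ravel(), 2))  # predictions approach 0, 1, 1, 0
```

Without the backward pass there is no principled way to assign credit to the hidden weights `W1`, which is precisely the obstacle that kept multilayer networks from being trained before backpropagation.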
