Wiki
We have compiled hundreds of related entries to help you understand "artificial intelligence"
The Halting Problem is a central problem in the theory of computability, posed by the British mathematician Alan Turing in 1936 in his famous paper “On Computable Numbers”.
When a model starts generating data during training that drifts far from the true data distribution, its performance drops drastically, eventually rendering the model's output meaningless.
The Hopfield network is a recurrent neural network that is mainly used for problems such as associative memory and pattern recognition.
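A minimal sketch of associative memory in a Hopfield network: a binary pattern is stored with the Hebbian rule, and a corrupted cue is driven back to the stored pattern by repeated state updates. All values and function names here are illustrative.

```python
# Hopfield network sketch: Hebbian storage plus iterative recall.
def train(patterns):
    n = len(patterns[0])
    # Hebbian weights: W[i][j] accumulates p[i]*p[j]; diagonal stays zero.
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j]
    return W

def recall(W, state, steps=5):
    # Each unit flips to the sign of its weighted input, repeatedly.
    n = len(state)
    s = list(state)
    for _ in range(steps):
        for i in range(n):
            h = sum(W[i][j] * s[j] for j in range(n))
            s[i] = 1 if h >= 0 else -1
    return s

pattern = [1, -1, 1, -1, 1, -1]
W = train([pattern])
noisy = [1, -1, -1, -1, 1, -1]   # one bit flipped
print(recall(W, noisy))          # converges back to the stored pattern
```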
Reward error reduction refers to the problem in reinforcement learning (RL) that arises when the reward function does not fully match the agent's true goal.
A sequential recommendation system is an important type of recommender system; its main task is to predict a user's next action from the user's historical behavior sequence.
R-MFDN enhances the model's sensitivity to forged content through a cross-modal contrastive learning loss and an identity-driven contrastive learning loss.
The Karel puzzle is a set of problems that involve controlling a robot's actions in a simulated environment through instructions.
Fully Forward Mode (FFM) is a method for training optical neural networks. It was proposed by the research team of Academician Dai Qionghai and Professor Fang Lu of Tsinghua University in 2024. The relevant paper is “Fully forward mode training […]
The Busy Beaver game is a theoretical computer science problem proposed in 1962 by the mathematician Tibor Radó.
An RNN works by storing information from previous time steps in the state of its hidden layer, so that the network's output depends on both the current input and the preceding state.
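The role of the hidden state can be shown with a toy single-unit Elman-style RNN; the weights below are illustrative, not trained. Two sequences ending in the same input produce different outputs because the hidden state remembers the earlier step.

```python
import math

# Toy one-unit RNN: h carries information from earlier time steps.
def rnn_forward(xs, w_in=0.5, w_rec=0.9, w_out=1.0):
    h = 0.0
    outputs = []
    for x in xs:
        h = math.tanh(w_in * x + w_rec * h)  # new state mixes input and old state
        outputs.append(w_out * h)
    return outputs

# Same final input (0.0), different histories -> different outputs.
print(rnn_forward([1.0, 0.0])[-1])   # positive, remembers the +1 step
print(rnn_forward([-1.0, 0.0])[-1])  # negative, remembers the -1 step
```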
By adding residual connections to the network, ResNet effectively mitigates the vanishing- and exploding-gradient problems that arise as network depth increases.
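A framework-free sketch of a residual block, with made-up toy shapes: the block computes y = x + F(x), so even when the learned branch F contributes nothing, the identity path passes the signal (and its gradient) through unchanged.

```python
# Residual block sketch: y = x + F(x), where F is a small two-layer branch.
def relu(v):
    return [max(0.0, a) for a in v]

def linear(x, W, b):
    return [sum(wij * xj for wij, xj in zip(row, x)) + bi
            for row, bi in zip(W, b)]

def residual_block(x, W1, b1, W2, b2):
    f = linear(relu(linear(x, W1, b1)), W2, b2)  # F(x)
    return [xi + fi for xi, fi in zip(x, f)]     # skip connection adds x back

# With a zero-initialized branch, the block is exactly the identity.
x = [1.0, -2.0, 3.0]
zeros = [[0.0] * 3 for _ in range(3)]
print(residual_block(x, zeros, [0.0] * 3, zeros, [0.0] * 3))  # -> [1.0, -2.0, 3.0]
```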
Adam is a first-order gradient-based optimization algorithm that is particularly well suited to optimization problems with large-scale data and parameters.
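A minimal Adam sketch minimizing the toy function f(x) = x² (gradient 2x); the hyperparameters follow the commonly cited defaults, and the function name is ours.

```python
import math

# Adam update loop: first- and second-moment estimates with bias correction.
def adam_minimize(x, steps=2000, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m = v = 0.0
    for t in range(1, steps + 1):
        g = 2.0 * x                      # gradient of f(x) = x^2
        m = b1 * m + (1 - b1) * g        # first-moment (mean) estimate
        v = b2 * v + (1 - b2) * g * g    # second-moment (uncentered var) estimate
        m_hat = m / (1 - b1 ** t)        # bias correction for warm-up
        v_hat = v / (1 - b2 ** t)
        x -= lr * m_hat / (math.sqrt(v_hat) + eps)
    return x

print(adam_minimize(5.0))  # approaches the minimum at x = 0
```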
The core technology of the GPT model is the Transformer architecture, which effectively captures contextual information through the self-attention mechanism.
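Scaled dot-product self-attention can be sketched in a few lines; the two-token sequence and vectors below are toy values. Every position attends to every other position, which is how the Transformer captures context regardless of distance.

```python
import math

# Single-head self-attention sketch: softmax(Q K^T / sqrt(d)) V.
def self_attention(Q, K, V):
    d = len(Q[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        mx = max(scores)
        exps = [math.exp(s - mx) for s in scores]  # shifted for stability
        total = sum(exps)
        weights = [e / total for e in exps]        # softmax over positions
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# Two-token toy sequence; each row is a 2-d vector.
Q = K = V = [[1.0, 0.0], [0.0, 1.0]]
print(self_attention(Q, K, V))  # each output mixes both positions
```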
Frequency Principle, or F-Principle for short, is an important concept in the field of deep learning. It describes the tendency of deep neural networks (DNNs) to fit the target function from low frequency to high frequency during training. This principle was proposed by Shanghai Jiao Tong University […]
Parameter aggregation describes the phenomenon that, during neural network training, model parameters tend to cluster around specific values or directions.
Cyclomatic complexity is a software metric that measures a program's complexity as the number of linearly independent paths through its control-flow graph.
The core idea of Dropout is to randomly discard (i.e. temporarily remove) some neurons in the network and their connections during the training process to prevent the model from overfitting.
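A sketch of the common "inverted dropout" variant: each activation is zeroed with probability p at training time and the survivors are scaled by 1/(1-p), so the expected activation matches inference time (when dropout is off). The seeded generator is only for reproducibility.

```python
import random

# Inverted dropout: drop with probability p, rescale survivors by 1/(1-p).
def dropout(activations, p=0.5, training=True, rng=random.Random(0)):
    if not training or p == 0.0:
        return list(activations)  # dropout is disabled at inference
    keep = 1.0 - p
    return [a / keep if rng.random() < keep else 0.0 for a in activations]

acts = [1.0] * 10
print(dropout(acts, p=0.5))          # some entries zeroed, the rest scaled to 2.0
print(dropout(acts, training=False)) # unchanged at inference
```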
Graph Attention Networks (GATs) are a type of neural network designed for graph-structured data. They were proposed by Petar Veličković and his colleagues in 2017. The related paper is “Graph Attention Networks”.
Message Passing Neural Networks (MPNN) are a neural network framework for processing graph-structured data, proposed by Gilmer et al. in 2017. The related paper is “Neural Messa […]
Graph Convolutional Networks (GCN) were proposed by Kipf and Welling in the paper “Semi-Supervised Classification with Graph Convolutional Networks”, published at the 2017 ICLR conference.
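A single GCN layer in the Kipf–Welling formulation computes H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W): add self-loops, symmetrically normalize the adjacency, aggregate neighbor features, then apply a linear transform. The two-node graph, features, and weights below are made-up toy values.

```python
import math

# One GCN layer sketch: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).
def gcn_layer(A, H, W):
    n = len(A)
    # Add self-loops so each node also keeps its own features.
    A_hat = [[A[i][j] + (1 if i == j else 0) for j in range(n)] for i in range(n)]
    deg = [sum(row) for row in A_hat]
    # Symmetric normalization by sqrt of the degrees.
    norm = [[A_hat[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(n)]
            for i in range(n)]
    # Aggregate neighbour features: norm @ H.
    agg = [[sum(norm[i][k] * H[k][f] for k in range(n))
            for f in range(len(H[0]))] for i in range(n)]
    # Linear transform + ReLU: agg @ W.
    return [[max(0.0, sum(agg[i][k] * W[k][j] for k in range(len(W))))
             for j in range(len(W[0]))] for i in range(n)]

A = [[0, 1], [1, 0]]        # two connected nodes
H = [[1.0], [3.0]]          # one feature per node
W = [[1.0]]                 # identity transform
print(gcn_layer(A, H, W))   # each node mixes its own and its neighbour's feature
```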
The Gated Recurrent Unit (GRU) is a variant of the Recurrent Neural Network (RNN) proposed by Cho et al. in 2014. The related paper is “Empirical Evaluation of Gate […]
AlexNet is a deep convolutional neural network (CNN) proposed by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton in 2012; it won the ImageNet image classification competition that year.
CART (Classification and Regression Trees) is a decision tree algorithm that can be used for both classification and regression tasks.
Gradient Boosting is an ensemble learning algorithm that builds a strong prediction model by combining multiple weak prediction models (usually decision trees).
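For squared loss, each boosting round fits a weak learner to the residuals of the current ensemble and adds a scaled copy of it. The sketch below uses depth-1 stumps on toy 1-D data; all names and values are illustrative.

```python
# Gradient boosting sketch: fit stumps to residuals, accumulate predictions.
def fit_stump(xs, residuals):
    best = None
    for split in xs:  # candidate thresholds at the data points
        left = [r for x, r in zip(xs, residuals) if x <= split]
        right = [r for x, r in zip(xs, residuals) if x > split]
        lmean = sum(left) / len(left) if left else 0.0
        rmean = sum(right) / len(right) if right else 0.0
        err = sum((r - (lmean if x <= split else rmean)) ** 2
                  for x, r in zip(xs, residuals))
        if best is None or err < best[0]:
            best = (err, split, lmean, rmean)
    _, split, lmean, rmean = best
    return lambda x: lmean if x <= split else rmean

def boost(xs, ys, rounds=50, lr=0.3):
    pred = [0.0] * len(xs)
    stumps = []
    for _ in range(rounds):
        residuals = [y - p for y, p in zip(ys, pred)]  # what is still unexplained
        stump = fit_stump(xs, residuals)
        stumps.append(stump)
        pred = [p + lr * stump(x) for p, x in zip(pred, xs)]
    return lambda x: sum(lr * s(x) for s in stumps)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [1.0, 1.0, 3.0, 3.0]
model = boost(xs, ys)
print([round(model(x), 2) for x in xs])  # close to ys after 50 rounds
```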