Wiki
We have compiled hundreds of related entries to help you understand "artificial intelligence"
Threshold shifting refers to adjusting the threshold used to assign class labels according to the actual situation, such as the class distribution. It is often used to address the problem of class imbalance.
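A minimal sketch of the idea (the scores, cutoff value, and function name below are illustrative): a binary classifier normally predicts the positive class when its score exceeds 0.5, and lowering that cutoff toward the positive-class prior makes the rare class easier to predict.

```python
import numpy as np

def classify_with_threshold(scores, threshold=0.5):
    """Predict the positive class when the score reaches the threshold.

    Lowering the threshold below 0.5 makes the classifier more willing to
    predict the rare (positive) class, counteracting class imbalance.
    """
    return (np.asarray(scores) >= threshold).astype(int)

scores = [0.05, 0.12, 0.40, 0.80]
print(classify_with_threshold(scores))                 # [0 0 0 1]
print(classify_with_threshold(scores, threshold=0.1))  # [0 1 1 1]
```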
A threshold logic unit (TLU) is a basic building block of neural networks.
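Concretely, a TLU computes a weighted sum of its inputs and fires when that sum reaches a threshold $\theta$:

$$ y = \begin{cases} 1 & \text{if } \sum_i w_i x_i \ge \theta \\ 0 & \text{otherwise.} \end{cases} $$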
A threshold, also called a critical value, is the value that a condition must reach for an object to undergo a particular change. It is a common term in academic research.
The least squares method is a mathematical optimization technique that finds the best-fitting function for the data by minimizing the sum of squared errors.
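In symbols, for data points $(x_i, y_i)$ and a parametric model $f_\beta$, least squares picks the parameters that minimize the residual sum of squares; in the linear case the solution is given by the normal equations:

$$ \hat{\beta} = \arg\min_{\beta} \sum_{i=1}^{n} \bigl(y_i - f_\beta(x_i)\bigr)^2, \qquad \hat{\beta} = (X^{\mathsf T} X)^{-1} X^{\mathsf T} y \ \text{(linear case).} $$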
A tensor is a multilinear function that can be used to represent linear relationships between vectors, scalars, and other tensors.
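As a small illustration, a matrix $A$ is a rank-2 tensor: it defines a map of two vector arguments that is linear in each argument separately:

$$ T(u, v) = u^{\mathsf T} A\, v, \qquad T(\alpha u_1 + u_2,\, v) = \alpha\, T(u_1, v) + T(u_2, v). $$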
The Wasserstein Generative Adversarial Network (WGAN) has several advantages: it resolves the instability of GAN training, removing the need to carefully balance how much the generator and the discriminator are trained; it largely solves the mode collapse problem, ensuring the diversity of generated samples; and its training finally provides a numerical quantity, comparable to cross-entropy or accuracy, that indicates training progress.
The Viterbi algorithm is a dynamic programming algorithm for finding the most likely sequence of hidden states (the Viterbi path) in a hidden Markov model, given a sequence of observations.
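A minimal sketch of the dynamic program (the argument names and dictionary layout are illustrative assumptions):

```python
def viterbi(observations, states, start_p, trans_p, emit_p):
    """Find the most probable hidden-state path for a sequence of observations.

    V[t][s] holds the probability of the best path ending in state s at time t;
    back[t][s] remembers the predecessor state that achieved it.
    """
    V = [{s: start_p[s] * emit_p[s][observations[0]] for s in states}]
    back = [{}]
    for t in range(1, len(observations)):
        V.append({})
        back.append({})
        for s in states:
            prob, prev = max(
                (V[t - 1][p] * trans_p[p][s] * emit_p[s][observations[t]], p)
                for p in states
            )
            V[t][s] = prob
            back[t][s] = prev
    # Trace back from the best final state.
    last = max(V[-1], key=V[-1].get)
    path = [last]
    for t in range(len(observations) - 1, 0, -1):
        path.append(back[t][path[-1]])
    return list(reversed(path))
```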
The VC (Vapnik-Chervonenkis) dimension measures the capacity of a binary classifier: it is the size of the largest set of points the classifier can shatter, i.e., label in every possible way.
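A standard worked example: half-plane classifiers in $\mathbb{R}^2$ can shatter some set of 3 points but no set of 4, so their VC dimension is 3; in general, for halfspaces in $\mathbb{R}^d$:

$$ \mathrm{VCdim}\bigl(\{\, x \mapsto \operatorname{sign}(w^{\mathsf T} x + b) \,\}\bigr) = d + 1. $$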
A subspace, also called a linear subspace or vector subspace, is a subset of a vector space that is itself a vector space under the same addition and scalar multiplication.
The significance of sparse representation lies in dimensionality reduction, and this reduction is not limited to saving space: after a sparse representation, the dependence between the dimensions of the feature vector becomes lower, making the dimensions more independent.
The stability-plasticity dilemma is a constraint on both artificial and biological neural systems: a system must be plastic enough to learn new information, yet stable enough not to overwrite what it has already learned.
Speech recognition is a technology that enables computers to recognize spoken language; its goal is to convert the content of human speech into the corresponding text.
Simulated annealing is a general-purpose probabilistic algorithm, often used to find a near-optimal solution in a large search space within a fixed amount of time.
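A compact generic sketch (the cost function, neighbor generator, and cooling constants are all illustrative choices):

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.995, steps=10_000):
    """Minimize `cost` by randomly exploring neighbors, occasionally accepting
    worse solutions with a probability that shrinks as the temperature cools."""
    x, t = x0, t0
    best = x
    for _ in range(steps):
        candidate = neighbor(x)
        delta = cost(candidate) - cost(x)
        # Always accept improvements; accept worse moves with prob exp(-delta/t).
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
            if cost(x) < cost(best):
                best = x
        t *= cooling  # geometric cooling schedule
    return best

# Example: approximate the minimum of a bumpy 1-D function.
f = lambda x: x * x + 10 * math.sin(x)
result = simulated_annealing(f, lambda x: x + random.uniform(-1, 1), x0=5.0)
```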
Similarity measurement estimates how similar different samples are to one another and is often used as a criterion in classification problems.
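Two of the most common similarity measures, sketched with NumPy (the function names are illustrative):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1 = same direction, 0 = orthogonal."""
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def euclidean_distance(a, b):
    """Straight-line distance; smaller means more similar."""
    return float(np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float)))

print(cosine_similarity([1, 0, 1], [1, 1, 1]))   # ~0.816
print(euclidean_distance([1, 0, 1], [1, 1, 1]))  # 1.0
```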
The sigmoid function is a common S-shaped function, also known as the S-shaped growth curve. Because it is monotonically increasing and its inverse is also monotonically increasing, the sigmoid function is often used as an activation (threshold) function in neural networks, mapping variables into the interval (0, 1).
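The standard logistic sigmoid and its convenient derivative:

$$ \sigma(x) = \frac{1}{1 + e^{-x}}, \qquad \sigma'(x) = \sigma(x)\bigl(1 - \sigma(x)\bigr). $$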
Unmanned driving mainly refers to self-driving cars, also known as driverless cars, computer-driven cars, or wheeled mobile robots: a type of unmanned ground vehicle with the transportation capabilities of a traditional car.
A Reproducing Kernel Hilbert Space (RKHS) is a Hilbert space of functions equipped with a reproducing kernel. With the "kernel trick", data are implicitly mapped into a high-dimensional feature space, and that feature space is a reproducing kernel Hilbert space.
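The defining (reproducing) property: for every point $x$, the kernel section $K(\cdot, x)$ belongs to the space $\mathcal{H}$, and evaluating a function at $x$ is an inner product with it:

$$ f(x) = \langle f,\, K(\cdot, x) \rangle_{\mathcal{H}} \quad \text{for all } f \in \mathcal{H}. $$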
Regularization is the process of introducing additional information to solve ill-posed problems or prevent overfitting.
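A typical form is a penalty term added to the training loss, with $\lambda \ge 0$ controlling its strength (the common $L_2$ case is shown):

$$ \min_{\theta}\; \sum_{i=1}^{n} \ell\bigl(y_i, f_\theta(x_i)\bigr) + \lambda \lVert \theta \rVert_2^2. $$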
The rectified linear unit (ReLU), also known as the linear rectification function, is a commonly used activation function in artificial neural networks, usually referring to the nonlinear ramp function and its variants.
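The ramp function itself, together with the leaky variant (where $\alpha$ is a small fixed slope such as 0.01):

$$ \mathrm{ReLU}(x) = \max(0, x), \qquad \mathrm{LeakyReLU}(x) = \max(\alpha x,\, x). $$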
Recall, also known as sensitivity or the true positive rate, is the ratio of the number of relevant samples retrieved to the total number of relevant samples; it measures how completely a retrieval system finds what it should.
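In terms of the confusion matrix, with $TP$ the true positives and $FN$ the false negatives (relevant items that were missed):

$$ \text{Recall} = \frac{TP}{TP + FN}. $$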
The quasi-Newton method is an optimization method based on Newton's method, used mainly to find the roots of nonlinear equations or the maxima and minima of continuous functions.
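The update has the same shape as Newton's method, except that the Hessian is replaced by an approximation $B_k$ built cheaply from gradient differences (for example, by the BFGS update rule):

$$ x_{k+1} = x_k - B_k^{-1} \nabla f(x_k). $$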
Pseudo-labeling (PL) is the practice of using a trained model to attach predicted labels to unlabeled data, which is then added to the training set.
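A minimal sketch assuming a scikit-learn-style classifier with `fit`, `predict_proba`, and `classes_`; the 0.9 confidence cutoff is an illustrative choice:

```python
import numpy as np

def pseudo_label(model, X_labeled, y_labeled, X_unlabeled, confidence=0.9):
    """Train on labeled data, then add confidently predicted unlabeled
    samples, with their predicted labels, to the training set and retrain."""
    model.fit(X_labeled, y_labeled)
    probs = model.predict_proba(X_unlabeled)
    keep = probs.max(axis=1) >= confidence  # keep confident predictions only
    X_aug = np.vstack([X_labeled, X_unlabeled[keep]])
    y_aug = np.concatenate([y_labeled, model.classes_[probs[keep].argmax(axis=1)]])
    model.fit(X_aug, y_aug)
    return model
```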
Prior probability refers to a probability obtained from past experience and analysis, usually a statistical (frequency-based) probability.
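In Bayes' theorem, the prior $P(A)$ is what gets updated by the likelihood into the posterior:

$$ P(A \mid B) = \frac{P(B \mid A)\, P(A)}{P(B)}. $$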
Principal component analysis (PCA) is a technique for analyzing and simplifying data sets. It uses the idea of dimensionality reduction to transform many indicators into a few comprehensive ones. PCA is the simplest way to analyze a multivariate statistical distribution in terms of its eigen-structure (eigenvalues and eigenvectors).
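A textbook sketch of PCA via the eigendecomposition of the covariance matrix (the function name is illustrative):

```python
import numpy as np

def pca(X, k):
    """Project data onto the k directions of greatest variance.

    Rows of X are samples; columns are features. Uses the eigendecomposition
    of the covariance matrix, the textbook formulation of PCA.
    """
    X_centered = X - X.mean(axis=0)
    cov = np.cov(X_centered, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    components = eigvecs[:, ::-1][:, :k]     # top-k principal directions
    return X_centered @ components

# Example: reduce 5-D data to 2 principal components.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
Z = pca(X, 2)  # shape (100, 2)
```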