Machine Learning Glossary: Explore definitions and explanations of key AI and ML concepts
Parameter-Efficient Fine-Tuning (PEFT) is a fine-tuning method for large pre-trained models that reduces computational and storage costs by updating only a small subset of model parameters while maintaining performance comparable to full-parameter fine-tuning.
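One common PEFT technique is a low-rank adapter in the style of LoRA: the pre-trained weight matrix stays frozen and only a small low-rank correction is trained. A minimal numpy sketch of the idea (matrix sizes and names are illustrative, not from any specific library):

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pre-trained weight matrix: never updated during fine-tuning.
d_in, d_out, rank = 64, 64, 4
W_frozen = rng.standard_normal((d_in, d_out))

# Trainable low-rank adapter: only A and B receive gradient updates.
# B starts at zero so the adapter is initially a no-op (LoRA-style init).
A = rng.standard_normal((d_in, rank)) * 0.01
B = np.zeros((rank, d_out))

def forward(x):
    # Effective weight is W_frozen + A @ B; only A and B are trained.
    return x @ W_frozen + (x @ A) @ B

full_params = W_frozen.size
adapter_params = A.size + B.size
print(f"trainable fraction: {adapter_params / full_params:.2%}")  # 12.50%
```

With a 64x64 base matrix and rank 4, the adapter holds only 12.5% of the parameters; at realistic model sizes the fraction is far smaller, which is where the storage and compute savings come from.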
In the field of artificial intelligence, a "world model" is a model that can characterize the state of the environment or the world and predict the transition between states. This model enables the agent to learn in a simulated environment and transfer the learned strategy to the real world, thereby improving learning efficiency and reducing risks. Jürgen S […]
Multimodal Contrastive Learning with Joint Example Selection (JEST) is a data-selection method that jointly chooses batches of training examples, aiming to reduce the high computational and energy cost of training large multimodal models.
Full Parameter Tuning is a model optimization technique in deep learning, especially used in the context of transfer learning or domain adaptation. It involves fine-tuning all parameters of a pre-trained model to adapt it to a specific task or dataset.
The occupancy grid network plays an important role in autonomous driving perception tasks. It is a network model that emphasizes geometry over semantics, helping autonomous driving systems better perceive free space, and is a key technology for improving perception capability and closing the data loop.
The core idea of decoding-time realignment is to dynamically adjust a model's alignment during the decoding process without retraining the model, saving compute and speeding up experimentation.
3D Gaussian splatting is an advanced computer graphics technique that has important applications in point cloud rendering, volume data visualization, and volume reconstruction. This technique achieves higher quality rendering by converting discrete data points or voxels into continuous surface or volume representations.
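The core "splatting" idea is that each discrete point contributes a smooth Gaussian kernel to the rendered output rather than a single isolated sample. A minimal 1D numpy sketch of that idea (real 3D Gaussian splatting adds anisotropic covariances, view-dependent color, and alpha compositing on top):

```python
import numpy as np

def splat_1d(points, weights, grid, sigma=0.05):
    """Render discrete points onto a grid by summing Gaussian kernels.

    Each point spreads its weight over nearby grid cells as a Gaussian
    "splat", producing a continuous signal instead of isolated samples.
    """
    # (n_grid, n_points) matrix of kernel responses
    diff = grid[:, None] - points[None, :]
    kernels = np.exp(-0.5 * (diff / sigma) ** 2)
    return kernels @ weights

grid = np.linspace(0.0, 1.0, 101)
image = splat_1d(np.array([0.3, 0.7]), np.array([1.0, 0.5]), grid)
```

The resulting `image` is a smooth curve with bumps centered at 0.3 and 0.7, illustrating how a sparse point set becomes a continuous representation.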
Shadow mode testing is a testing method used in the field of autonomous driving. It is mainly used to verify and evaluate autonomous driving algorithms in real traffic environments while ensuring that the tests do not interfere with the driver or surrounding traffic.
The curse of sparsity is a key scientific issue in the field of autonomous driving. It refers to the fact that in real driving environments, the probability of safety-critical events is extremely low, which causes these events to be extremely sparse in driving data, making it difficult for deep learning models to learn the characteristics of these events.
Diffusion loss is a loss function related to the diffusion model, which is used during the training process to guide the model to learn how to gradually remove noise and restore the original structure of the data.
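In the common DDPM-style formulation, the diffusion loss is simply the mean squared error between the true noise added to a sample and the model's noise prediction. A minimal numpy sketch, assuming a simplified objective with a single timestep's cumulative schedule value `alpha_bar_t` (names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def diffusion_loss(x0, predict_noise, alpha_bar_t):
    """Simplified DDPM-style objective: MSE between true and predicted noise.

    x0            : clean data batch
    predict_noise : model mapping a noised sample to an estimate of the noise
    alpha_bar_t   : cumulative noise-schedule value for timestep t, in (0, 1)
    """
    eps = rng.standard_normal(x0.shape)                                # true noise
    x_t = np.sqrt(alpha_bar_t) * x0 + np.sqrt(1 - alpha_bar_t) * eps  # forward noising
    eps_hat = predict_noise(x_t)                                       # model estimate
    return np.mean((eps - eps_hat) ** 2)                               # denoising objective

# A trivial predictor that always outputs zero yields a loss near
# E[eps^2] = 1; a model that predicts the noise exactly would reach 0.
x0 = rng.standard_normal((16, 8))
loss = diffusion_loss(x0, lambda x_t: np.zeros_like(x_t), alpha_bar_t=0.5)
```

Training minimizes this quantity over random timesteps and samples, which is what teaches the model to gradually remove noise at generation time.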
The Long-Tail Challenge generally refers to a class of problems in machine learning and deep learning where a few head categories have abundant samples while many tail categories have very few, and it arises especially in visual recognition tasks.
Crapness Ratio is a metric used to evaluate the proportion of nonsense or invalid information in the answers given by large language models (LLMs).
In the field of artificial intelligence, lifelong learning refers to the ability of a machine to continuously update and improve its knowledge base and models as it receives new data and experience.
Hardware independence refers to software, applications, operating systems, or other types of systems that are designed not to be dependent on or specific to any particular hardware platform or hardware architecture.
LlamaIndex is a tool for building indexes and querying local documents, which acts as a bridge between custom data and Large Language Models (LLMs).
The modality generator is a key component in a multimodal learning system, and its main role is to generate outputs in different modalities, such as images, videos, or audio.
The Visual Language Geographic Foundation Model is an artificial intelligence model specifically designed to process and analyze Earth observation data.
Future Multipredictor Mixing (FMM) is a model component for time series forecasting that is part of the TimeMixer architecture.
PDM (Past Decomposable Mixing) is a module for time series forecasting and one of the core components of the TimeMixer model.
Matryoshka Representation Learning (MRL) learns information at different granularities by optimizing nested low-dimensional vectors, allowing a single embedding to adapt to the computational constraints of downstream tasks.
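At inference time, the "nested" property means any prefix of a Matryoshka embedding is itself a usable embedding. A numpy sketch of that truncation step (the training-time part, where a loss is applied at every prefix length, is omitted):

```python
import numpy as np

def matryoshka_embed(x, dims=(8, 16, 32, 64)):
    """Return nested prefix embeddings of a full vector, each L2-normalized.

    MRL training applies a loss at every prefix length so that each
    truncation remains a usable embedding on its own; this sketch shows
    only the inference-time truncation.
    """
    out = {}
    for d in dims:
        prefix = x[:d]
        out[d] = prefix / np.linalg.norm(prefix)
    return out

rng = np.random.default_rng(0)
full = rng.standard_normal(64)
nested = matryoshka_embed(full)
# A compute-constrained downstream task can use nested[8]; a task needing
# higher accuracy can use nested[64] -- both come from one stored vector.
```

The practical benefit is that one stored vector serves several accuracy/latency trade-offs without re-encoding the input.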
Hadoop is an open source framework developed by the Apache Software Foundation for storing and processing large amounts of data on clusters of commodity hardware.
Edge AI refers to the deployment of AI algorithms and AI models directly on local edge devices such as sensors or Internet of Things (IoT) devices, enabling real-time data processing and analysis without constant reliance on cloud infrastructure. Simply put, edge AI refers to the integration of edge computing and human […]
An open source project, product, or initiative embraces and promotes the principles of open communication, collaborative participation, rapid prototyping, transparency, meritocracy, and community-oriented development.
Neuromorphic computing is the design and construction of computers that mimic the structure and function of the human brain, using artificial neurons and synapses to process information.