Machine Learning Glossary: Explore definitions and explanations of key AI and ML concepts
Speaker Similarity aims to measure whether two speech samples come from the same speaker or how similar the two samples are.
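A common way to score this is cosine similarity between fixed-size speaker embeddings produced by a speaker encoder. A minimal sketch, using made-up toy vectors in place of real embeddings:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two speaker embeddings (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy embeddings standing in for vectors from a real speaker encoder.
emb_a = np.array([0.9, 0.1, 0.3])    # sample from speaker A
emb_b = np.array([0.8, 0.2, 0.4])    # another sample from speaker A
emb_c = np.array([-0.7, 0.6, -0.2])  # sample from a different speaker

score_same = cosine_similarity(emb_a, emb_b)
score_diff = cosine_similarity(emb_a, emb_c)
```

In practice a threshold tuned on held-out trials decides "same speaker" vs. "different speaker" from this score.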
Guided sampling is a technique for steering generative models during the sampling process, improving both sample quality and controllability.
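One widely used instance in diffusion models is classifier-free guidance, which blends the conditional and unconditional noise predictions. A minimal sketch with toy arrays standing in for model outputs:

```python
import numpy as np

def cfg_combine(eps_uncond: np.ndarray, eps_cond: np.ndarray,
                guidance_scale: float) -> np.ndarray:
    """Classifier-free guidance: move the denoising prediction along the
    direction that favours the condition, scaled by guidance_scale."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy noise predictions (in practice, two forward passes of the same model).
eps_u = np.array([0.10, -0.20, 0.05])  # unconditional prediction
eps_c = np.array([0.30, -0.10, 0.00])  # condition-aware prediction

guided = cfg_combine(eps_u, eps_c, guidance_scale=3.0)
```

A scale of 0 recovers the unconditional prediction, 1 recovers the conditional one, and values above 1 trade sample diversity for stronger adherence to the condition.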
Shallow self-reflection aims to quickly optimize the performance of the current task or behavior by making local adjustments to the model through immediate feedback.
Multimodal thinking visualization aims to provide a more intuitive and comprehensive display of thinking, decision-making, and information processing by combining multiple modalities.
A sparse autoencoder is an unsupervised neural network that learns to reconstruct its input while penalizing hidden-unit activity, so that only a few units activate for any given input; the resulting sparse features are often more interpretable.
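The training objective combines reconstruction error with a sparsity penalty on the hidden activations. A minimal sketch with random, untrained weights and hypothetical dimensions (8-dim inputs, an overcomplete 16-unit hidden layer, L1 sparsity penalty):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 8-dim inputs, 16 hidden units (overcomplete code).
W_enc = rng.normal(scale=0.1, size=(8, 16))
b_enc = np.zeros(16)
W_dec = rng.normal(scale=0.1, size=(16, 8))
b_dec = np.zeros(8)

def forward(x: np.ndarray):
    h = np.maximum(0.0, x @ W_enc + b_enc)  # ReLU activations (non-negative)
    x_hat = h @ W_dec + b_dec               # reconstruction of the input
    return h, x_hat

def sae_loss(x: np.ndarray, l1_coeff: float = 1e-3) -> float:
    h, x_hat = forward(x)
    recon = np.mean((x - x_hat) ** 2)         # reconstruction error
    sparsity = l1_coeff * np.mean(np.abs(h))  # L1 penalty drives activations to zero
    return float(recon + sparsity)

x = rng.normal(size=(4, 8))  # a toy batch
total = sae_loss(x)
```

Minimizing this loss (e.g., by gradient descent) trades reconstruction fidelity against sparsity; the `l1_coeff` knob controls how aggressively activations are pushed to zero.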
Continuous concept mixing aims to generate new data samples by mixing different concepts or features to expand the learning and reasoning capabilities of the model.
Deductive database arithmetic reasoning aims to derive and compute results over database contents by applying inference rules and arithmetic operations.
Token-level preference alignment methods aim to reduce the hallucination problem in Large Visual-Language Models (LVLMs).
Inference-time scaling is a method to improve the performance of Large Language Models (LLMs) by increasing the computational resources during the inference phase.
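A simple form of inference-time scaling is best-of-N sampling: draw several candidate answers and keep the one a verifier or reward model scores highest. A minimal sketch in which `generate` and `score` are toy stand-ins for an LLM sampler and a verifier:

```python
import random

random.seed(0)

def generate(prompt: str) -> str:
    """Stand-in for sampling one answer from an LLM."""
    return f"answer-{random.randint(0, 9)}"

def score(answer: str) -> float:
    """Stand-in for a reward model or verifier scoring an answer."""
    return float(answer.split("-")[1])

def best_of_n(prompt: str, n: int) -> str:
    """Spend more inference compute (n samples) to pick a better answer."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=score)

candidates = [generate("some question") for _ in range(16)]
best = max(candidates, key=score)
```

The point of the technique: extra samples cost only inference compute, yet the selected answer's verifier score can never be worse than any individual sample's.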
Slow perception aims to achieve detailed perception of geometric figures by splitting the perception process, so as to improve the performance of large multimodal models in visual reasoning tasks.
Thinking Evolution aims to use inference-time compute in innovative ways, allowing models to handle complex problems more efficiently.
Large action models aim to achieve the transition from language interaction to real-world action execution, pushing AI towards artificial general intelligence (AGI).
Semantic frequency cues aim to address the limitations of traditional spatial domain methods through analysis and selective learning in the frequency domain.
ASAL (Automated Search for Artificial Life) aims to automatically explore the simulation space in the field of artificial life using foundation models.
Offline meta-reinforcement learning aims to utilize offline data to train models so that they can quickly adapt to new tasks or new environments without extensive online interactions.
Out-of-distribution generalization focuses on enabling the model to maintain good performance and stability when faced with unknown or unseen data distributions.
The universal approximation theorem shows that a feedforward neural network with a sufficiently wide hidden layer can approximate any continuous function on a compact domain to arbitrary accuracy.
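The theorem can be illustrated numerically: fix a wide hidden layer of random ReLU features and fit only the output weights by least squares to a target function such as sin(x). A toy sketch (the feature count and weight scales are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Target: sin(x) on [-pi, pi], sampled at 200 points.
x = np.linspace(-np.pi, np.pi, 200)[:, None]
y = np.sin(x).ravel()

# One hidden layer of random ReLU features; only the output layer is fit.
n_hidden = 200
W = rng.normal(scale=2.0, size=(1, n_hidden))
b = rng.uniform(-np.pi, np.pi, size=n_hidden)
H = np.maximum(0.0, x @ W + b)  # hidden activations, shape (200, n_hidden)

# Linear least squares for the output weights.
coef, *_ = np.linalg.lstsq(H, y, rcond=None)
y_hat = H @ coef

mse = float(np.mean((y - y_hat) ** 2))
```

Even with random (untrained) hidden weights, a wide enough ReLU layer spans a rich function class, so the linear fit drives the error on the sampled points close to zero.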
The core idea of DPO (Direct Preference Optimization) is to optimize directly on human preference data without training a separate reward model or using reinforcement learning.
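The DPO loss rewards the policy for assigning a larger log-probability margin to the preferred response than a frozen reference model does. A minimal sketch with hypothetical toy sequence log-probabilities:

```python
import numpy as np

def dpo_loss(logp_chosen: float, logp_rejected: float,
             ref_logp_chosen: float, ref_logp_rejected: float,
             beta: float = 0.1) -> float:
    """DPO loss: -log sigmoid(beta * margin), where the margin compares the
    policy's preference gap to the frozen reference model's gap."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return float(-np.log(1.0 / (1.0 + np.exp(-beta * margin))))

# Toy log-probabilities (hypothetical values, not from a real model).
low = dpo_loss(-5.0, -9.0, -6.0, -8.0)   # policy already favours the chosen response
high = dpo_loss(-9.0, -5.0, -6.0, -8.0)  # policy favours the rejected response
```

The loss is low when the policy's margin for the chosen response already exceeds the reference's, and high otherwise; `beta` controls how sharply deviations from the reference are penalized.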
Training-free guidance aims to address the difficulty diffusion models face in conditional generation, steering sampling toward a condition without additional training.
The primary odor map aims to model the connection between the chemical structure of an odor and its olfactory perceptual properties.
Out-of-distribution detection focuses on identifying data samples that were not covered during the model training phase.
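A simple baseline detector is the maximum softmax probability (MSP): in-distribution inputs tend to yield confident predictions, while OOD inputs yield flatter softmax outputs. A toy sketch with made-up logits and a hypothetical threshold (in practice tuned on validation data):

```python
import numpy as np

def softmax(z: np.ndarray) -> np.ndarray:
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def msp_score(logits) -> float:
    """Maximum softmax probability: low values suggest an OOD input."""
    return float(softmax(np.asarray(logits, dtype=float)).max())

in_dist = msp_score([8.0, 0.5, 0.2])  # one logit dominates: confident prediction
ood = msp_score([1.1, 1.0, 0.9])      # near-uniform logits: low confidence

threshold = 0.7  # hypothetical cut-off; chosen per task on held-out data
```

Inputs scoring below the threshold are flagged as OOD; stronger detectors refine this idea with temperature scaling, energy scores, or feature-space distances.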
Star Attention significantly reduces memory requirements and inference time, by up to 11 times, while maintaining 95-100% accuracy.
UniSeg3D can perform six different 3D point cloud segmentation tasks within the same model.
Numerical understanding and processing aims to independently evaluate the performance of large language models (LLMs) in the numerical domain.