Wiki
We have compiled hundreds of related entries to help you understand "artificial intelligence"
A sparse autoencoder is an unsupervised neural network that learns to reconstruct its input while a sparsity penalty keeps most hidden activations inactive.
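A minimal NumPy sketch of this idea: reconstruct the input through a ReLU hidden layer while an L1 penalty on the activations pushes most of them to zero. All sizes, rates, and variable names here are illustrative, not from any particular implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))            # toy data: 200 samples, 8 features

n_hidden, lr, l1 = 16, 0.01, 1e-3
W_enc = rng.normal(scale=0.1, size=(8, n_hidden))
W_dec = rng.normal(scale=0.1, size=(n_hidden, 8))

def mse(A, B):
    return float(np.mean((A - B) ** 2))

loss_start = mse(np.maximum(X @ W_enc, 0) @ W_dec, X)

for _ in range(500):
    H = np.maximum(X @ W_enc, 0)         # encode (ReLU)
    X_hat = H @ W_dec                    # decode
    err = X_hat - X
    # Descent direction for reconstruction error plus l1 * ||H||_1
    dH = (err @ W_dec.T + l1 * np.sign(H)) * (H > 0)
    W_dec -= lr * (H.T @ err) / len(X)
    W_enc -= lr * (X.T @ dH) / len(X)

H = np.maximum(X @ W_enc, 0)
loss_end = mse(H @ W_dec, X)
sparsity = float((H < 1e-6).mean())      # fraction of inactive activations
```

Reconstruction error falls during training while the L1 term keeps a large fraction of hidden activations at exactly zero.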
Continuous concept mixing aims to generate new data samples by mixing different concepts or features to expand the learning and reasoning capabilities of the model.
Deductive database arithmetic reasoning aims to deduce and calculate data in the database through inference rules and mathematical operations.
Token-level preference alignment methods aim to reduce the hallucination problem in Large Vision-Language Models (LVLMs).
Inference-time scaling is a method to improve the performance of Large Language Models (LLMs) by increasing the computational resources during the inference phase.
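One simple form of inference-time scaling is best-of-N sampling: draw more candidate answers at inference and keep the highest-scoring one. In this toy sketch, `propose` and `score` are stand-ins for an LLM sampler and a verifier; they are illustrative assumptions, not real APIs.

```python
import random

def propose(rng):
    return rng.gauss(0.0, 1.0)         # a candidate "answer"

def score(answer):
    return -abs(answer - 1.0)          # closer to the target 1.0 is better

def best_of_n(n, seed=0):
    rng = random.Random(seed)
    candidates = [propose(rng) for _ in range(n)]
    return max(candidates, key=score)

# Spending more inference compute (larger N) cannot make the selected
# answer worse, and usually improves it.
small = score(best_of_n(1))
large = score(best_of_n(64))
```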
Slow perception aims to achieve detailed perception of geometric figures by decomposing the perception process into stages, improving the performance of large multimodal models on visual reasoning tasks.
Thinking Evolution aims to scale the use of computing resources during reasoning in novel ways, allowing models to handle complex problems more efficiently.
Large action models aim to achieve the transition from language interaction to real-world action execution, pushing AI towards artificial general intelligence (AGI).
Semantic frequency cues aim to address the limitations of traditional spatial domain methods through analysis and selective learning in the frequency domain.
ASAL aims to automatically explore the simulation space in the field of artificial life using foundation models.
Offline meta-reinforcement learning aims to utilize offline data to train models so that they can quickly adapt to new tasks or new environments without extensive online interactions.
Out-of-distribution generalization focuses on enabling the model to maintain good performance and stability when faced with unknown or unseen data distributions.
Universal approximation theory shows that a feedforward neural network with a single sufficiently wide hidden layer can approximate any continuous function on a compact domain to arbitrary accuracy.
The core idea of DPO is to optimize directly on human preference data without training a separate reward model or using reinforcement learning.
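The DPO loss can be written directly in terms of log-probabilities under the policy and a frozen reference model. The sketch below uses toy log-probability values and an illustrative beta; it shows the shape of the objective, not a real training setup.

```python
import numpy as np

def dpo_loss(logp_chosen, logp_rejected,
             ref_logp_chosen, ref_logp_rejected, beta=0.1):
    # Implicit reward margin relative to the frozen reference policy.
    margin = ((logp_chosen - ref_logp_chosen)
              - (logp_rejected - ref_logp_rejected))
    return float(-np.log(1.0 / (1.0 + np.exp(-beta * margin))))  # -log sigmoid

# Before any preference is learned, the margin is zero and loss = log 2.
weak = dpo_loss(-5.0, -5.0, -5.0, -5.0)
# Once the policy prefers the chosen answer more than the reference does,
# the loss falls.
strong = dpo_loss(-3.0, -8.0, -5.0, -5.0)
```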
Training-free guidance aims to address the difficulty diffusion models face in conditional generation.
The primary odor map aims to model the connection between the chemical structure of an odor and its olfactory perceptual properties.
Out-of-distribution detection focuses on identifying data samples that were not covered during the model training phase.
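A common baseline for this is the maximum softmax probability (MSP) score: inputs on which the classifier is unconfident (near-uniform softmax output) are flagged as likely out-of-distribution. The logits and threshold below are illustrative.

```python
import numpy as np

def softmax(logits):
    z = logits - np.max(logits)          # stabilized softmax
    e = np.exp(z)
    return e / e.sum()

def msp_score(logits):
    return float(np.max(softmax(logits)))

def is_ood(logits, threshold=0.5):
    return msp_score(logits) < threshold

in_dist = np.array([6.0, 0.5, 0.2])     # confident prediction
ood = np.array([1.1, 1.0, 0.9])         # near-uniform: suspicious
```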
Star Attention reduces memory requirements and inference time by up to 11 times while maintaining 95-100% accuracy.
UniSeg3D supports six different 3D point cloud segmentation tasks within a single model.
Numerical understanding and processing aims to independently evaluate the performance of large language models (LLMs) in the numerical domain.
Coconut frees the reasoning process from the traditional language space and allows the model to reason directly in the continuous latent space.
The density law describes that the capability density (performance per unit of model size) of large language models (LLMs) increases exponentially over time.
Nearest neighbor search is an algorithmic problem of finding the point (or set of points) in a database or data set that is closest to a given query point.
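The simplest exact solution is a brute-force scan: compute the distance from the query to every point and take the minimum. Practical systems replace this with index structures (k-d trees, approximate methods), but the brute-force version defines the problem.

```python
import numpy as np

def nearest_neighbor(database, query):
    # Euclidean distance from the query to every database point.
    d = np.linalg.norm(database - query, axis=1)
    return int(np.argmin(d)), float(d.min())

db = np.array([[0.0, 0.0],
               [1.0, 1.0],
               [5.0, 5.0]])
idx, dist = nearest_neighbor(db, np.array([0.9, 1.2]))
```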
Neighbor search refers to the process of determining the neighboring particles around each particle (usually an atom) in the simulation box.
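A minimal sketch of neighbor-list construction for a particle simulation: for each particle, collect all others within a cutoff radius. This is the brute-force O(n^2) version, ignoring periodic boundary conditions and cell lists for simplicity; real molecular dynamics codes use both.

```python
import numpy as np

def neighbor_list(positions, cutoff):
    n = len(positions)
    neighbors = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            # Record the pair symmetrically if within the cutoff radius.
            if np.linalg.norm(positions[i] - positions[j]) <= cutoff:
                neighbors[i].append(j)
                neighbors[j].append(i)
    return neighbors

pos = np.array([[0.0, 0.0, 0.0],
                [0.5, 0.0, 0.0],
                [3.0, 0.0, 0.0]])
nl = neighbor_list(pos, cutoff=1.0)
```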