Wiki
We have compiled hundreds of related entries to help you understand "artificial intelligence".
The emergence of the lottery ticket hypothesis has spurred a series of methods for efficiently training neural networks.
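Most such methods build on iterative magnitude pruning: train, zero out the smallest-magnitude weights, rewind the survivors to their initial values, and retrain. A minimal NumPy sketch of one pruning round (function and parameter names are illustrative, not from any specific paper):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with smallest magnitude."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]   # k-th smallest magnitude
    mask = np.abs(weights) > threshold             # survivors keep their values
    return weights * mask, mask

# One round on a toy weight matrix; in lottery-ticket training the surviving
# weights would now be rewound to initialization and the network retrained.
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))
pruned, mask = magnitude_prune(w, 0.2)
```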
TileLang, with its unified block-and-thread paradigm and transparent scheduling, provides the functionality and flexibility required for developing modern AI systems.
RPN and Fast R-CNN are combined into a single network for object detection by sharing convolutional features.
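The efficiency gain comes from computing the backbone feature map once and reusing it for both the proposal and detection heads. A toy sketch of that sharing pattern (these heads are trivial stand-ins, not real RPN or Fast R-CNN layers):

```python
import numpy as np

def shared_backbone(image):
    """Stand-in for the convolutional backbone; computed once per image."""
    return image * 0.5  # toy "feature map"

def rpn_head(features):
    """Toy RPN: score each row and return the top-scoring one as a proposal."""
    scores = features.sum(axis=-1)
    return int(np.argmax(scores))

def detection_head(features, proposal):
    """Toy Fast R-CNN head: score the proposed region's features."""
    return float(features[proposal].mean())

image = np.arange(12.0).reshape(4, 3)
features = shared_backbone(image)           # convolutional features, computed ONCE
proposal = rpn_head(features)               # the RPN reuses them
score = detection_head(features, proposal)  # and so does the detector
```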
CSA aims to build systems that are not only secure, but also truly helpful.
CaT can be applied at test time to improve inference, or built into RL (CaT-RL) to improve policies.
MCP connects AI assistants to the systems where data lives, including content repositories, business tools, and development environments.
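MCP messages are JSON-RPC 2.0; a client discovers what a server exposes via standard methods such as `resources/list`. A minimal sketch of one such request payload (the transport and the server itself are omitted):

```python
import json

# Minimal MCP-style request: MCP is built on JSON-RPC 2.0, and
# "resources/list" asks a server to enumerate the data it exposes.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "resources/list",
    "params": {},
}
payload = json.dumps(request)
```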
MetaFold can handle a variety of garments and a wide range of language commands, efficiently completing diverse folding tasks.
ST-Raptor outperforms nine baseline models by up to 20% in answer accuracy.
SubLlME enables efficient and accurate model performance evaluation by predicting ranking relevance, without requiring full-scale evaluation.
BSC-Nav constructs an allocentric cognitive map from egocentric trajectories and contextual cues, and dynamically retrieves spatial knowledge consistent with semantic goals.
Preliminary experiments show that DPCL can separate speech and achieves promising results.
The goal of dual-mode annealing is to develop a model that masters two different response modes: thinking and non-thinking.
The core principle of BPO is to learn adaptive policies by explicitly comparing the utilities of thinking and non-thinking paths under the same input query.
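As an illustration of that comparison (not BPO's actual training objective), one can score each mode's task reward net of its extra token cost and route the query to whichever mode wins; the cost model and numbers below are made up:

```python
def choose_mode(utility_thinking, utility_direct, cost_per_token, think_tokens):
    """Pick the response mode whose net utility (reward minus token cost) is higher."""
    net_thinking = utility_thinking - cost_per_token * think_tokens
    return "thinking" if net_thinking > utility_direct else "non-thinking"

# A hard query where long reasoning pays off despite its cost:
hard = choose_mode(utility_thinking=0.9, utility_direct=0.6,
                   cost_per_token=0.001, think_tokens=200)
# An easy query where the direct answer is already good enough:
easy = choose_mode(utility_thinking=0.7, utility_direct=0.6,
                   cost_per_token=0.001, think_tokens=400)
```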
BED-LLM effectively applies the sequential Bayesian experimental design (BED) framework to the interactive information collection problem with LLMs.
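The central quantity in sequential BED is a query's expected information gain (EIG): the prior entropy over hypotheses minus the expected posterior entropy after observing the answer. A small sketch over a discrete hypothesis space (in BED-LLM-style settings the model would supply the likelihoods; here they are hand-written):

```python
import math

def entropy(p):
    """Shannon entropy (nats) of a discrete distribution."""
    return -sum(q * math.log(q) for q in p if q > 0)

def expected_info_gain(prior, likelihoods):
    """EIG of a query; likelihoods[a][h] = P(answer a | hypothesis h)."""
    eig = entropy(prior)
    for lik in likelihoods:
        p_a = sum(lik[h] * prior[h] for h in range(len(prior)))
        if p_a == 0:
            continue
        posterior = [lik[h] * prior[h] / p_a for h in range(len(prior))]
        eig -= p_a * entropy(posterior)
    return eig

# A perfectly discriminating question gains log(2) nats over a uniform
# prior on two hypotheses; an uninformative one gains nothing.
eig_perfect = expected_info_gain([0.5, 0.5], [[1.0, 0.0], [0.0, 1.0]])
eig_useless = expected_info_gain([0.5, 0.5], [[0.5, 0.5], [0.5, 0.5]])
```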
Compared with the LLaMA model and other state-of-the-art baseline models, REFRAG achieves significant speedup without loss of accuracy.
As a general and lightweight solution, ATE enhances the practicality of deploying VLA models to new robotic platforms and tasks.
MoC provides a new blueprint for the next generation of scalable and controllable long-term video generation models.
The TiG framework enables LLMs to develop procedural understanding by interacting directly with the game environment while retaining their inherent reasoning and interpretation capabilities.
LOVON aims to leverage large language models for hierarchical task planning in conjunction with an open-vocabulary visual detection model.
MP1 is able to directly generate motion trajectories within a single network function evaluation.
Meta-rater aims to integrate the four dimensions of expertise, readability, reasoning, and cleanliness with existing quality indicators by learning optimal weights.
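The combination itself reduces to a weighted sum of per-dimension quality scores; the weights and scores below are hypothetical placeholders, not Meta-rater's learned values:

```python
def combined_score(scores, weights):
    """Weighted sum of per-dimension quality scores (weights assumed learned)."""
    assert len(scores) == len(weights)
    return sum(s * w for s, w in zip(scores, weights))

# Hypothetical learned weights over the four dimensions named above.
weights = [0.3, 0.2, 0.3, 0.2]
doc = {"expertise": 0.8, "readability": 0.6, "reasoning": 0.9, "cleanliness": 0.7}
score = combined_score(list(doc.values()), weights)
```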
MaCP aims to achieve excellent performance in fine-tuning large base models with minimal parameter and memory overhead.
Context engineering marks a paradigm shift in LLM practice, from ad-hoc prompt engineering to the systematic engineering of the model's full context.
Imitation learning acquires policies by learning from expert demonstrations.
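The simplest instance is behavior cloning: treat the expert's state–action pairs as supervised data and regress a policy onto them. A toy linear example (the "expert" here is the made-up rule action = 2 × state):

```python
import numpy as np

# Expert demonstrations: states and the actions the expert took in them.
states = np.array([[0.0], [1.0], [2.0], [3.0]])
actions = np.array([0.0, 2.0, 4.0, 6.0])

# Behavior cloning as supervised regression: fit a linear policy to the demos.
theta, *_ = np.linalg.lstsq(states, actions, rcond=None)

def policy(state):
    """Cloned policy: predict the expert's action for a new state."""
    return float(state @ theta)
```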