When NAS Meets Trees: An Efficient Algorithm for Neural Architecture Search
Guocheng Qian, Xuanyang Zhang, Guohao Li, Chen Zhao, Yukang Chen, Xiangyu Zhang, Bernard Ghanem, Jian Sun

Abstract
The key challenge in neural architecture search (NAS) is how to explore a huge search space wisely. We propose TNAS (NAS with trees), a new NAS method that improves search efficiency by exploring only a small number of architectures while achieving higher search accuracy. TNAS introduces an architecture tree and a binary operation tree to factorize the search space and substantially reduce the exploration size. It then performs a modified bi-level breadth-first search over the proposed trees to discover a high-performance architecture. Impressively, TNAS finds the globally optimal architecture on CIFAR-10 in NAS-Bench-201, with 94.37% test accuracy, in four GPU hours. Its average test accuracy of 94.35% outperforms the state of the art. Code is available at: https://github.com/guochengqian/TNAS.
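To make the tree idea concrete, here is a minimal sketch of a coarse-to-fine search over a binary operation tree: the candidate operation set is recursively bisected, and at each level the search descends into the half that scores better. This is an illustrative simplification, not the authors' implementation; the operation names, `build_op_tree`, `tree_search`, and `score_fn` are all hypothetical, and a real scorer would train or evaluate representative architectures rather than use a toy heuristic.

```python
# Hypothetical NAS-Bench-201-style operation set; names are illustrative.
OPS = ["none", "skip_connect", "conv_1x1", "conv_3x3", "avg_pool_3x3"]

def build_op_tree(ops):
    """Recursively bisect the operation set into a binary tree.
    Leaves hold a single operation; internal nodes hold candidate subsets."""
    if len(ops) <= 1:
        return {"ops": ops, "children": []}
    mid = len(ops) // 2
    return {"ops": ops,
            "children": [build_op_tree(ops[:mid]), build_op_tree(ops[mid:])]}

def tree_search(tree, score_fn):
    """Coarse-to-fine descent: at each level, keep the child subset whose
    score is best, roughly halving the candidate pool per step. In TNAS the
    score would come from evaluating representative architectures."""
    node = tree
    while node["children"]:
        node = max(node["children"], key=lambda c: score_fn(c["ops"]))
    return node["ops"][0]

# Toy scorer: pretend subsets containing conv_3x3 evaluate best.
best = tree_search(build_op_tree(OPS),
                   lambda ops: 1 if "conv_3x3" in ops else 0)
```

Because each descent step discards half of the remaining operations, the number of subset evaluations grows logarithmically in the operation count, which is the source of the efficiency gain the paper claims from factorizing the search space.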
Benchmarks
| Benchmark | Methodology | Test Accuracy (%) |
|---|---|---|
| neural-architecture-search-on-nas-bench-201 | TNAS | 46.31 |
| neural-architecture-search-on-nas-bench-201-1 | TNAS | 94.35 |
| neural-architecture-search-on-nas-bench-201-2 | TNAS | 73.02 |