When NAS Meets Trees: An Efficient Algorithm for Neural Architecture Search

Guocheng Qian Xuanyang Zhang Guohao Li Chen Zhao Yukang Chen Xiangyu Zhang Bernard Ghanem Jian Sun

Abstract

The key challenge in neural architecture search (NAS) is how to explore a huge search space wisely. We propose a new NAS method called TNAS (NAS with trees), which improves search efficiency by exploring only a small number of architectures while also achieving higher search accuracy. TNAS introduces an architecture tree and a binary operation tree to factorize the search space and substantially reduce the exploration size. TNAS performs a modified bi-level Breadth-First Search in the proposed trees to discover a high-performance architecture. Impressively, TNAS finds the global optimal architecture on CIFAR-10, with a test accuracy of 94.37%, in four GPU hours on NAS-Bench-201. The average test accuracy is 94.35%, which outperforms the state of the art. Code is available at: https://github.com/guochengqian/TNAS
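To make the search strategy concrete, here is a minimal, self-contained sketch of the binary-operation-tree idea described in the abstract: each edge of a NAS-Bench-201-style cell starts with the full operation set, the set is split in half at each tree level, and a breadth-first pass keeps the best branch per level until every edge is reduced to a single operation. All names and the proxy scoring function below are illustrative assumptions, not the paper's actual implementation (see the linked repository for that).

```python
from itertools import product

# NAS-Bench-201-style cell: 6 edges, 5 candidate operations.
EDGES = 6
OPS = ["none", "skip_connect", "nor_conv_1x1", "nor_conv_3x3", "avg_pool_3x3"]

# Illustrative proxy score: pretend each op has a fixed utility.
# A real search would train/evaluate the candidate architecture instead.
PREF = {"none": 0.0, "skip_connect": 0.5, "nor_conv_1x1": 0.7,
        "nor_conv_3x3": 0.9, "avg_pool_3x3": 0.3}

def score(arch):
    """Mock evaluator standing in for validation accuracy."""
    return sum(PREF[op] for op in arch)

def split(cands):
    """One level of the binary operation tree: halve the candidate set."""
    mid = (len(cands) + 1) // 2
    return [cands[:mid], cands[mid:]] if len(cands) > 1 else [cands]

def tree_search():
    # Each edge starts at the root of its operation tree: the full op set.
    sets = [list(OPS) for _ in range(EDGES)]
    while any(len(s) > 1 for s in sets):
        halves = [split(s) for s in sets]  # binary branching per edge
        best, best_choice = float("-inf"), None
        # Breadth-first over this level: evaluate every combination of
        # halves (one representative op per half) and keep the best branch.
        for choice in product(*halves):
            rep = [half[0] for half in choice]
            s = score(rep)
            if s > best:
                best, best_choice = s, choice
        sets = [list(half) for half in best_choice]
    return [s[0] for s in sets]

print(tree_search())  # one operation per edge after the tree is fully descended
```

The point of the factorization is exploration size: instead of enumerating all 5^6 = 15,625 architectures, each tree level evaluates at most 2^6 = 64 coarse combinations, and the tree has depth O(log |OPS|).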

Code Repositories

guochengqian/tnas (official, PyTorch)

Benchmarks

Benchmark                                      | Methodology | Metrics
neural-architecture-search-on-nas-bench-201    | TNAS        | Accuracy (Test): 46.31
neural-architecture-search-on-nas-bench-201-1  | TNAS        | Accuracy (Test): 94.35
neural-architecture-search-on-nas-bench-201-2  | TNAS        | Accuracy (Test): 73.02
