Ching-Hsun Tseng, Shin-Jye Lee, Jia-Nan Feng, Shengzhong Mao, Yu-Ping Wu, Jia-Yu Shang, Mou-Chung Tseng, Xiao-Jun Zeng

Abstract
In image classification, skip- and densely-connected networks have dominated most leaderboards. Recently, following the success of multi-head attention in natural language processing, the field has shifted toward either Transformer-like models or hybrid CNNs with attention. However, the former require tremendous resources to train, while the latter can strike a better balance in this direction. In this work, to enable CNNs to handle both global and local information, we propose UPANets, which equip channel-wise attention with a hybrid skip-dense connection structure. In addition, the extreme-connection structure makes UPANets robust, with a smoother loss landscape. In experiments, UPANets surpassed most well-known and widely used SOTAs, achieving an accuracy of 96.47% on CIFAR-10, 80.29% on CIFAR-100, and 67.67% on Tiny ImageNet. Most importantly, these results come with high parameter efficiency and were obtained by training on a single consumer-grade GPU. The implementation code of UPANets is available at https://github.com/hanktseng131415go/UPANets.
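As a rough illustration of the idea described in the abstract, below is a minimal PyTorch sketch of a channel-wise attention block combined with skip and dense connections. All module names, layer choices, and hyperparameters here are illustrative assumptions, not the authors' implementation; see the linked repository for the actual UPANets code.

```python
# Minimal sketch (assumed, not the official UPANets code): channel-wise
# attention fused with skip and dense connections.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Channel-wise attention: a learned linear map over the channel
    dimension, applied independently at each spatial position."""

    def __init__(self, channels: int):
        super().__init__()
        # A 1x1 convolution acts as a per-pixel fully connected layer
        # across channels, letting every channel attend to all others.
        self.fc = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.norm = nn.BatchNorm2d(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Skip connection preserves local CNN features while the 1x1
        # map mixes information globally across channels.
        return self.norm(x + self.fc(x))


class HybridBlock(nn.Module):
    """Conv block whose output is densely concatenated with its input,
    then re-weighted by channel-wise attention."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # Dense connection: the block's input is kept alongside its output.
        self.attention = ChannelAttention(in_ch + out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = torch.cat([x, self.conv(x)], dim=1)  # dense connection
        return self.attention(out)                 # channel-wise attention


if __name__ == "__main__":
    block = HybridBlock(in_ch=16, out_ch=32)
    y = block(torch.randn(2, 16, 32, 32))
    print(y.shape)  # torch.Size([2, 48, 32, 32])
```

Under these assumptions, the skip path inside the attention module and the dense concatenation in the block together give the "extreme connection" flavor the abstract describes: local features pass through unchanged while the 1x1 map supplies global channel mixing.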
Code Repositories
https://github.com/hanktseng131415go/UPANets
Benchmarks
| Benchmark | Methodology | Metric | Value |
|---|---|---|---|
| image-classification-on-cifar-10 | UPANets | Percentage correct | 96.47% |
| image-classification-on-cifar-100 | UPANets | Percentage correct | 80.29% |
| image-classification-on-tiny-imagenet-1 | UPANets | Validation accuracy | 67.67% |
| image-classification-on-tiny-imagenet-2 | UPANets | Top-1 accuracy | 67.67% |