Efficient Self-Learning Evolutionary Neural Architecture Search

24 Pages · Posted: 12 Feb 2023

Zhengzhong Qiu

Jilin University (JLU)

Wei Bi

affiliation not provided to SSRN

Dong Xu

University of Missouri

Hua Guo

affiliation not provided to SSRN

Hongwei Ge

Dalian University of Technology

Yanchun Liang

Jilin University (JLU)

Heow Pueh Lee

National University of Singapore (NUS)

Chunguo Wu

Jilin University (JLU)

Abstract

Evolutionary algorithms have recently become a major approach to neural architecture search. However, the fixed probability distribution adopted by traditional evolutionary algorithms cannot control whether individual architectures grow or shrink, and thus cannot guarantee the lightweight design and inference efficiency of candidate architectures. In addition, existing approaches cannot learn the problem-specific optimal sampling distribution from the empirical information accumulated during the search. Moreover, these algorithms must evaluate the performance of every individual architecture, which incurs enormous computing resources and time overhead. To overcome these problems, this paper presents an Efficient Self-learning Evolutionary Neural Architecture Search method, called ESE-NAS. First, we propose a Model Size Control module that generates the probability distribution for sampling mutation operator types according to the current model size, controlling both the number of nodes in the network architecture and the sparsity of links so that architectures remain compact and efficient as they evolve. Second, we propose a Mutation Candidate Credit Assignment method that dynamically adjusts the sampling probabilities of the available node operations and directed links according to the empirical information from individuals' performance evaluations, which guides the evolutionary direction of the architecture and shortens the first hitting time of the optimal architecture. Finally, we formulate a neural architecture performance predictor that estimates the accuracy of each individual architecture from simplified input features, further improving search efficiency. Experiments show that ESE-NAS learns highly interpretable credit assignment results and reaches the optimal architecture significantly earlier than methods using a fixed probability distribution. The searched architectures are competitive with representative state-of-the-art hand-designed and NAS architectures, while ESE-NAS effectively guarantees their simplicity and inference efficiency.
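Since the abstract describes the two sampling mechanisms only at a high level, the sketch below illustrates, in minimal Python, how a model-size-dependent mutation-type distribution and a credit-driven candidate distribution might be maintained. All names, the linear budget-based weighting, and the additive credit-update rule are illustrative assumptions, not the authors' actual implementation.

```python
import random

def mutation_type_distribution(num_nodes, num_links, max_nodes, max_links):
    """Model Size Control (sketch): bias mutation-type sampling so that
    architectures near their size budget favour shrinking mutations and
    small architectures favour growing ones."""
    node_load = num_nodes / max_nodes   # fraction of the node budget used
    link_load = num_links / max_links   # fraction of the link budget used
    weights = {
        "add_node":    1.0 - node_load,
        "remove_node": node_load,
        "add_link":    1.0 - link_load,
        "remove_link": link_load,
    }
    total = sum(weights.values())
    return {op: w / total for op, w in weights.items()}

class CreditAssignment:
    """Mutation Candidate Credit Assignment (sketch): keep a per-candidate
    credit (e.g. per node operation such as conv3x3 or pooling) and update
    it with the fitness change observed after a mutation that used it."""

    def __init__(self, candidates, lr=0.1):
        self.credit = {c: 1.0 for c in candidates}
        self.lr = lr

    def update(self, candidate, fitness_delta):
        # Reward candidates whose mutations improved evaluated accuracy;
        # penalise those whose mutations degraded it.
        self.credit[candidate] += self.lr * fitness_delta

    def sample(self):
        # Sample proportionally to (shifted) credit so higher-credit
        # candidates are chosen more often as the search progresses.
        floor = min(self.credit.values())
        weights = [v - floor + 1e-6 for v in self.credit.values()]
        return random.choices(list(self.credit), weights=weights, k=1)[0]
```

In this reading, the evolutionary loop would first draw a mutation type from mutation_type_distribution, then draw the concrete operation or link from a CreditAssignment instance, and feed the observed (or predictor-estimated) accuracy change back through update.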

Keywords: evolutionary algorithm, neural architecture search, probability distribution, model size control

Suggested Citation

Qiu, Zhengzhong and Bi, Wei and Xu, Dong and Guo, Hua and Ge, Hongwei and Liang, Yanchun and Lee, Heow Pueh and Wu, Chunguo, Efficient Self-Learning Evolutionary Neural Architecture Search. Available at SSRN: https://ssrn.com/abstract=4355124 or http://dx.doi.org/10.2139/ssrn.4355124

Zhengzhong Qiu

Jilin University (JLU) ( email )

China

Wei Bi

affiliation not provided to SSRN ( email )

No Address Available

Dong Xu

University of Missouri ( email )

USA

Hua Guo

affiliation not provided to SSRN ( email )

No Address Available

Hongwei Ge

Dalian University of Technology ( email )

Huiying Rd
Dalian, Liaoning, 116024
China

Yanchun Liang

Jilin University (JLU) ( email )

China

Heow Pueh Lee

National University of Singapore (NUS) ( email )

Singapore
Singapore

Chunguo Wu (Contact Author)

Jilin University (JLU) ( email )

China
