Q-NAS revisited: Exploring evolution fitness to improve efficiency
Abstract
Over the last decade, the scientific community has witnessed the success of deep neural networks in a variety of tasks. However, the design of these structures still demands expert knowledge and significant time. In this scenario, the idea of automating the design of such complex networks has inspired many researchers. New algorithms have been proposed to address the neural architecture search problem, but their computational cost remains a major drawback. Q-NAS (Quantum-inspired Neural Architecture Search) is a recently proposed evolutionary algorithm designed to improve efficiency while introducing minimal human bias into the search. This work extends the analysis of Q-NAS, focusing on fitness behavior during evolution. We want to verify whether good individuals are obtained early in the evolution, in which case an early-stopping mechanism could be applied. We experiment with a simple early-stopping technique, and the results indicate an evolution time reduction of more than 45% in most cases. This new feature improves efficiency, making Q-NAS very competitive with other algorithms.
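The early-stopping idea mentioned above can be illustrated with a minimal sketch: an evolutionary loop that halts once the best fitness has stalled for a fixed number of generations (a patience threshold). This is an illustrative toy, not the actual Q-NAS operators or its stopping criterion; the fitness function, mutation scheme, and parameter names are all assumptions made for the example.

```python
import random

def evolve(fitness, pop_size=20, generations=100, patience=5, seed=0):
    """Toy evolutionary loop with a simple early-stopping rule:
    stop when the best fitness has not improved for `patience`
    consecutive generations. Illustrative only -- this is not the
    Q-NAS quantum-inspired algorithm itself."""
    rng = random.Random(seed)
    # Individuals are plain real numbers here, standing in for
    # encoded network architectures.
    pop = [rng.uniform(-5.0, 5.0) for _ in range(pop_size)]
    best = max(fitness(x) for x in pop)
    stall = 0
    for gen in range(1, generations + 1):
        # Mutate each individual with Gaussian noise and keep the
        # better of parent and child (greedy selection).
        pop = [max(x, x + rng.gauss(0, 0.5), key=fitness) for x in pop]
        gen_best = max(fitness(x) for x in pop)
        if gen_best > best + 1e-9:
            best, stall = gen_best, 0
        else:
            stall += 1
        if stall >= patience:
            # Early stop: fitness has plateaued, so further
            # generations are unlikely to pay for their cost.
            return best, gen
    return best, generations

# Maximize a concave toy fitness with its optimum at x = 2.
best, stopped_at = evolve(lambda x: -(x - 2) ** 2)
```

When the population converges early, the loop returns well before the generation budget is exhausted, which is the source of the evolution-time savings the abstract reports.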