Feature selection is a crucial problem in efficient machine learning, and it also greatly contributes to the explainability of machine-driven decisions. Methods such as decision trees and the Least Absolute Shrinkage and Selection Operator (LASSO) can select features during training. However, these embedded approaches can only be applied to a small subset of machine learning models. Wrapper-based methods can select features independently of the underlying machine learning model, but they often suffer from a high computational cost. To enhance their efficiency, many randomized algorithms have been designed. In this paper, we propose automatic breadth searching and attention searching adjustment approaches to further speed up randomized wrapper-based feature selection. We conduct a theoretical computational complexity analysis and further explain our algorithms' generic parallelizability. We conduct experiments on both synthetic and real datasets with different machine learning base models. Results show that, compared with existing approaches, our proposed techniques can locate a more meaningful set of features with higher efficiency.
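To make the baseline concrete, the following is a minimal sketch of a generic randomized wrapper-based feature selector, the family of methods the abstract builds on. It is not the paper's proposed algorithm: the iteration count, subset size, base model, and scoring via cross-validation are all illustrative assumptions.

```python
import random

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def randomized_wrapper_select(X, y, model, n_iter=50, subset_size=5, seed=0):
    """Generic randomized wrapper: sample random feature subsets, score each
    by retraining the base model (cross-validation), and keep the best subset.

    Illustrative sketch only -- not the breadth/attention searching method
    proposed in the paper.
    """
    rng = random.Random(seed)
    n_features = X.shape[1]
    best_subset, best_score = None, float("-inf")
    for _ in range(n_iter):
        # Each evaluation retrains the model, which is why wrapper methods
        # are expensive and why randomization (fewer subsets) helps.
        subset = sorted(rng.sample(range(n_features), subset_size))
        score = cross_val_score(model, X[:, subset], y, cv=3).mean()
        if score > best_score:
            best_subset, best_score = subset, score
    return best_subset, best_score


# Hypothetical usage on synthetic data with a logistic-regression base model.
X, y = make_classification(n_samples=200, n_features=20, n_informative=5,
                           random_state=0)
subset, score = randomized_wrapper_select(X, y, LogisticRegression(max_iter=1000))
print(subset, round(score, 3))
```

Because each candidate subset is scored independently, the loop body parallelizes trivially across subsets, which is the property the paper's parallelizability discussion exploits.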