Randomize-then-optimize (RTO) is widely used for sampling from posterior distributions in Bayesian inverse problems. However, RTO can be computationally intensive for complex problems due to repeated evaluations of the expensive forward model and its gradient. In this work, we present a novel goal-oriented deep neural network (DNN) surrogate approach that substantially reduces the computational burden of RTO. In particular, we propose to draw the training points for the DNN surrogate from a local approximation of the posterior distribution, yielding a flexible and efficient sampling algorithm that converges to the direct RTO approach. We demonstrate the accuracy and efficiency of our DNN-RTO approach on a Bayesian inverse problem governed by elliptic PDEs, where it significantly outperforms the traditional RTO.
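To fix ideas, the core RTO mechanism can be illustrated on a toy linear-Gaussian problem, where each sample is the minimizer of a least-squares objective with freshly randomized data and RTO samples the posterior exactly. This is a minimal sketch under assumed problem data (the linear forward map `A`, noise level `sigma`, and standard-normal prior are all hypothetical stand-ins, not the paper's PDE setting):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear-Gaussian inverse problem: y = A u + noise,
# with a standard-normal prior on u. For a linear forward map, each
# RTO draw solves a randomized least-squares problem in closed form.
n, m = 4, 6
A = rng.standard_normal((m, n))        # stand-in linear forward model
u_true = rng.standard_normal(n)
sigma = 0.1                            # observation noise std
y = A @ u_true + sigma * rng.standard_normal(m)

# Whitened formulation: stack the likelihood and prior residuals.
H = np.vstack([A / sigma, np.eye(n)])

def rto_sample():
    """One randomize-then-optimize draw: perturb the whitened data
    with fresh standard-normal noise, then minimize the resulting
    least-squares objective."""
    b = np.concatenate([y / sigma + rng.standard_normal(m),
                        rng.standard_normal(n)])
    u, *_ = np.linalg.lstsq(H, b, rcond=None)
    return u

samples = np.array([rto_sample() for _ in range(2000)])
print(samples.mean(axis=0))            # approaches the posterior mean
```

For nonlinear forward models each draw instead requires an iterative optimization (plus a Metropolis correction), which is exactly the repeated forward-model cost the DNN surrogate is meant to avoid.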