In recent years, deep learning has achieved tremendous success in computer vision, natural language processing, man-machine games, and other fields, where artificial intelligence can reach or even surpass human-level performance. However, behind these achievements, serious challenges exist in the underlying hardware, hindering the further development of artificial intelligence. As the remarkable Moore's Law slows down and the energy cost of the von Neumann bottleneck can no longer be afforded, current accelerator chips struggle to process massive amounts of data, especially in power-constrained scenarios. These significant challenges have led to a natural upsurge in exploring new computing paradigms, i.e., a computational scientific revolution [1]. Such a computing paradigm is not expected to replace the von Neumann architecture, which has worked well in the past, but to form an important complement to it for the growing range of emerging computing problems and applications, e.g., those in big data and artificial intelligence, that the previous architecture can no longer handle.