Gradient Convergence of Deep Learning-Based Numerical Methods for BSDEs
Abstract:
The authors prove the gradient convergence of the deep learning-based numerical method for high-dimensional parabolic partial differential equations and backward stochastic differential equations, which is based on the time discretization of stochastic differential equations (SDEs for short) and the stochastic approximation method for nonconvex stochastic programming problems. They take the stochastic gradient descent method, quadratic loss function, and sigmoid activation function in the setting of the neural network. Combining classical techniques of randomized stochastic gradients, the Euler scheme for SDEs, and convergence of neural networks, they obtain the O(K^{-1/4}) rate of gradient convergence, with K being the total number of iterative steps.
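The ingredients named in the abstract can be illustrated together in a minimal sketch: an Euler discretization of an SDE, a one-hidden-layer network with sigmoid activation, a quadratic loss, and plain stochastic gradient descent over K iterations. This is a hypothetical toy setup for illustration only (the drift, diffusion, terminal function g, network width, and step size below are all assumptions), not the authors' actual scheme or proof setting.

```python
# Hypothetical sketch of the abstract's ingredients; not the paper's method.
import numpy as np

rng = np.random.default_rng(0)

# --- Euler scheme for dX_t = b(X_t) dt + sigma(X_t) dW_t on [0, T] ---
def euler_terminal(x0, b, sigma, T=1.0, n_steps=20, n_paths=64):
    dt = T / n_steps
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
        x = x + b(x) * dt + sigma(x) * dw
    return x  # Monte Carlo samples of X_T

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One-hidden-layer sigmoid network: y = sigmoid(x*w1 + b1) @ w2 + b2
width = 8
w1 = rng.normal(0.0, 1.0, width)
b1 = np.zeros(width)
w2 = rng.normal(0.0, 0.1, width)
b2 = 0.0

# SGD on the quadratic loss E[(net(X_T) - g(X_T))^2], g(x) = x assumed.
K = 300      # total number of iterative steps, as in the O(K^{-1/4}) rate
lr = 0.1
losses = []
for _ in range(K):
    xT = euler_terminal(0.0, b=lambda x: -x, sigma=lambda x: np.ones_like(x))
    n = xT.size
    h = sigmoid(np.outer(xT, w1) + b1)        # (n, width) hidden activations
    y = h @ w2 + b2                           # network output per path
    target = xT                               # terminal condition g(X_T)
    losses.append(np.mean((y - target) ** 2)) # quadratic loss

    # Backpropagation by hand for the one-hidden-layer network.
    dy = 2.0 * (y - target) / n
    g_b2 = dy.sum()
    g_w2 = h.T @ dy
    dz = np.outer(dy, w2) * h * (1.0 - h)     # sigmoid'(z) = h*(1-h)
    g_b1 = dz.sum(axis=0)
    g_w1 = (xT[:, None] * dz).sum(axis=0)

    # Plain stochastic gradient descent step.
    w1 -= lr * g_w1
    b1 -= lr * g_b1
    w2 -= lr * g_w2
    b2 -= lr * g_b2
```

Each iteration draws a fresh batch of Euler paths, so the gradient is an unbiased Monte Carlo estimate of the true gradient; the abstract's result concerns how fast the expected gradient norm of such an iteration vanishes in K, not the training loss itself.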