Paper Title

Improved Training of Physics-Informed Neural Networks with Model Ensembles

Paper Authors

Katsiaryna Haitsiukevich, Alexander Ilin

Paper Abstract

Learning the solution of partial differential equations (PDEs) with a neural network is an attractive alternative to traditional solvers due to its elegance, greater flexibility, and the ease of incorporating observed data. However, training such physics-informed neural networks (PINNs) is notoriously difficult in practice, since PINNs often converge to wrong solutions. In this paper, we address this problem by training an ensemble of PINNs. Our approach is motivated by the observation that individual PINN models find similar solutions in the vicinity of points with targets (e.g., observed data or initial conditions), while their solutions may differ substantially farther away from such points. Therefore, we propose to use the ensemble agreement as the criterion for gradual expansion of the solution interval, that is, for including new points in the loss derived from the differential equation. Due to the flexibility of the domain expansion, our algorithm can easily incorporate measurements at arbitrary locations. In contrast to existing PINN algorithms with time-adaptive strategies, the proposed algorithm does not need a pre-defined schedule of interval expansion, and it treats time and space equally. We show experimentally that the proposed algorithm can stabilize PINN training and yield performance competitive with recent PINN variants trained with time adaptation.
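The core mechanism described in the abstract — training several PINNs jointly and admitting a collocation point into the PDE loss only once the ensemble agrees on its prediction there — can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not the authors' implementation: the toy PDE (the heat equation u_t = u_xx), the ensemble size, the agreement threshold, and all names are assumptions, and a plain prediction-variance threshold stands in for the paper's gradual expansion outward from points with targets.

```python
# Hypothetical sketch of ensemble-agreement-based domain expansion for PINNs.
# The toy PDE (u_t - u_xx = 0), the threshold, and all names are assumptions.
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small fully connected network mapping (t, x) -> u."""
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, width), nn.Tanh(),
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, 1),
        )
    def forward(self, tx):
        return self.net(tx)

def pde_residual(model, tx):
    """Residual of the toy heat equation u_t - u_xx = 0 via autograd."""
    tx = tx.clone().requires_grad_(True)
    u = model(tx)
    du = torch.autograd.grad(u.sum(), tx, create_graph=True)[0]
    u_t, u_x = du[:, :1], du[:, 1:]
    u_xx = torch.autograd.grad(u_x.sum(), tx, create_graph=True)[0][:, 1:]
    return u_t - u_xx

# Ensemble of independently initialized PINNs, trained jointly.
ensemble = [MLP() for _ in range(5)]
opt = torch.optim.Adam([p for m in ensemble for p in m.parameters()], lr=1e-3)

# Points with targets (initial condition u(0, x) = sin(pi x)) plus candidate
# collocation points; `active` marks points already included in the PDE loss.
x0 = torch.linspace(0, 1, 50).unsqueeze(1)
tx_data = torch.cat([torch.zeros_like(x0), x0], dim=1)
u_data = torch.sin(torch.pi * x0)
tx_cand = torch.rand(2000, 2)               # random (t, x) in the unit square
active = torch.zeros(len(tx_cand), dtype=torch.bool)
threshold = 1e-3                            # assumed agreement tolerance

for step in range(5000):
    opt.zero_grad()
    # Data loss at points with targets, summed over ensemble members.
    loss = sum(((m(tx_data) - u_data) ** 2).mean() for m in ensemble)
    # PDE loss only at the collocation points admitted so far.
    if active.any():
        pts = tx_cand[active]
        loss = loss + sum((pde_residual(m, pts) ** 2).mean() for m in ensemble)
    loss.backward()
    opt.step()

    # Periodically expand the active set where the ensemble agrees.
    if step % 100 == 0:
        with torch.no_grad():
            preds = torch.stack([m(tx_cand) for m in ensemble])
            disagreement = preds.var(dim=0).squeeze(-1)
        active |= disagreement < threshold
```

Because the expansion criterion is computed pointwise over arbitrary candidate locations, this style of algorithm needs no predefined expansion schedule and treats the time and space coordinates identically, consistent with the claims in the abstract.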
