Paper Title

Deep transfer operator learning for partial differential equations under conditional shift

Paper Authors

Goswami, Somdatta, Kontolati, Katiana, Shields, Michael D., Karniadakis, George Em

Paper Abstract

Transfer learning (TL) enables the transfer of knowledge gained in learning to perform one task (source) to a related but different task (target), hence addressing the expense of data acquisition and labeling, potential computational power limitations, and dataset distribution mismatches. We propose a new TL framework for task-specific learning (functional regression in partial differential equations (PDEs)) under conditional shift based on the deep operator network (DeepONet). Task-specific operator learning is accomplished by fine-tuning task-specific layers of the target DeepONet using a hybrid loss function that allows for the matching of individual target samples while also preserving the global properties of the conditional distribution of target data. Inspired by the conditional embedding operator theory, we minimize the statistical distance between labeled target data and the surrogate prediction on unlabeled target data by embedding conditional distributions onto a reproducing kernel Hilbert space. We demonstrate the advantages of our approach for various TL scenarios involving nonlinear PDEs under diverse conditions due to shifts in the geometric domain and model dynamics. Our TL framework enables fast and efficient learning of heterogeneous tasks despite significant differences between the source and target domains.
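To make the hybrid loss idea in the abstract concrete, below is a minimal numerical sketch: a pointwise mean-squared-error term on labeled target samples combined with a kernel-based statistical distance between labeled target outputs and surrogate predictions on unlabeled target data. Note the hedges: the paper embeds conditional distributions via the conditional embedding operator, whereas this sketch substitutes a plain maximum mean discrepancy (MMD) with a Gaussian kernel as a stand-in; the function names (`hybrid_loss`, `mmd2`), the weighting parameter `lam`, and the bandwidth `sigma` are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise Gaussian (RBF) kernel matrix between rows of X and rows of Y.
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    # Squared maximum mean discrepancy between the two sample sets in the
    # RKHS induced by the Gaussian kernel (biased V-statistic estimator).
    return (gaussian_kernel(X, X, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean()
            - 2.0 * gaussian_kernel(X, Y, sigma).mean())

def hybrid_loss(y_labeled, y_pred_labeled, y_pred_unlabeled, lam=0.1, sigma=1.0):
    # Pointwise term: match individual labeled target samples.
    pointwise = np.mean((y_labeled - y_pred_labeled) ** 2)
    # Distributional term: keep predictions on unlabeled target inputs
    # statistically close to the labeled target outputs.
    distributional = mmd2(y_labeled, y_pred_unlabeled, sigma)
    return pointwise + lam * distributional

# Toy usage with random arrays standing in for DeepONet outputs.
rng = np.random.default_rng(0)
y_lab = rng.normal(size=(32, 8))                       # labeled target outputs
y_hat_lab = y_lab + 0.05 * rng.normal(size=(32, 8))    # predictions on labeled inputs
y_hat_unlab = rng.normal(size=(64, 8))                 # predictions on unlabeled inputs
print(hybrid_loss(y_lab, y_hat_lab, y_hat_unlab))
```

In this sketch, the pointwise term drives sample-wise accuracy during fine-tuning of the target-specific layers, while the kernel distance regularizes the global shape of the predicted output distribution on unlabeled target data, mirroring the two roles the abstract assigns to the hybrid loss.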
