Paper Title

Multi-agent Reinforcement Learning Accelerated MCMC on Multiscale Inversion Problem

Paper Authors

Eric Chung, Yalchin Efendiev, Wing Tat Leung, Sai-Mang Pun, Zecheng Zhang

Paper Abstract

In this work, we propose a multi-agent actor-critic reinforcement learning (RL) algorithm to accelerate multi-level Markov chain Monte Carlo (MCMC) sampling algorithms. The agents' policies (actors) are used to generate proposals in the MCMC steps, and the centralized critic is in charge of estimating the long-term reward. We verify the proposed algorithm by solving an inverse problem with multiple scales. There are several difficulties in solving this problem with traditional MCMC sampling. First, computing the posterior distribution involves evaluating the forward solver, which is very time-consuming for a problem with heterogeneous coefficients. We hence propose to use a multi-level algorithm. More precisely, we use the generalized multiscale finite element method (GMsFEM) as the forward solver when evaluating the posterior distribution in the multi-level rejection procedure. Second, it is hard to find a proposal function that generates meaningful samples. To address this issue, we learn an RL policy as the proposal generator. Our experiments show that the proposed method significantly improves the sampling process.
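
To make the multi-level rejection idea concrete, below is a minimal Python sketch of one two-stage (delayed-acceptance) Metropolis-Hastings step in which a learned policy supplies the proposal and a cheap coarse-scale posterior screens candidates before the expensive fine-scale check. All function names (policy_sample, policy_logpdf, log_post_coarse, log_post_fine) are hypothetical placeholders, not the authors' code; in the paper, the coarse posterior would be evaluated with the GMsFEM forward solver and the proposal would come from the actor trained against the centralized critic.

```python
import numpy as np

def two_level_mh_step(x, policy_sample, policy_logpdf,
                      log_post_coarse, log_post_fine, rng):
    """One two-stage Metropolis-Hastings step with a learned proposal.

    Hypothetical interfaces (assumptions, not the paper's API):
      policy_sample(x, rng) -> y      : the RL actor draws y ~ q(.|x)
      policy_logpdf(y, x)   -> float  : log q(y|x)
      log_post_coarse(x)    -> float  : log posterior via a cheap coarse
                                        solver (the role GMsFEM plays)
      log_post_fine(x)      -> float  : log posterior via the fine solver
    """
    y = policy_sample(x, rng)

    # Stage 1: screen the proposal against the cheap coarse posterior.
    log_a1 = (log_post_coarse(y) + policy_logpdf(x, y)
              - log_post_coarse(x) - policy_logpdf(y, x))
    if np.log(rng.uniform()) >= min(0.0, log_a1):
        return x  # rejected at the coarse level; the fine solver is never called

    # Stage 2: correction ratio so the chain still targets the fine posterior.
    log_a2 = (log_post_fine(y) - log_post_fine(x)
              - (log_post_coarse(y) - log_post_coarse(x)))
    if np.log(rng.uniform()) < min(0.0, log_a2):
        return y
    return x
```

The design point is that most rejections happen in stage 1, where only the coarse solver is evaluated, while the stage-2 acceptance ratio corrects for the coarse approximation so the chain remains a valid sampler for the fine-scale posterior.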
