Paper title
HOLISMOKES -- IV. Efficient mass modeling of strong lenses through deep learning
Paper authors
Paper abstract
Modeling the mass distributions of strong gravitational lenses is often necessary to use them as astrophysical and cosmological probes. With the high number of lens systems ($>10^5$) expected from upcoming surveys, it is timely to explore efficient modeling approaches beyond traditional MCMC techniques, which are time consuming. We train a convolutional neural network (CNN) on images of galaxy-scale lenses to predict the parameters of the singular isothermal ellipsoid (SIE) mass model ($x$, $y$, $e_x$, $e_y$, and $\theta_E$). To train the network, we simulate images based on real observations from the HSC Survey for the lens galaxies and from the HUDF for the lensed background galaxies. We test different network architectures, the effect of different data sets, and the use of different input distributions of $\theta_E$. We find that the CNN performs well, and with the network trained on a uniform distribution of $\theta_E > 0.5"$ we obtain the following median values with $1\sigma$ scatter: $\Delta x = (0.00^{+0.30}_{-0.30})"$, $\Delta y = (0.00^{+0.30}_{-0.29})"$, $\Delta\theta_E = (0.07^{+0.29}_{-0.12})"$, $\Delta e_x = -0.01^{+0.08}_{-0.09}$, and $\Delta e_y = 0.00^{+0.08}_{-0.09}$. The bias in $\theta_E$ is driven by systems with small $\theta_E$. Therefore, when we further predict the multiple lensed image positions and time delays based on the network output, we apply the network to the sample limited to $\theta_E > 0.8"$. In this case, the offset between the predicted and input lensed image positions is $(0.00^{+0.29}_{-0.29})"$ and $(0.00^{+0.32}_{-0.31})"$ for $x$ and $y$, respectively. For the fractional difference between the predicted and true time delays, we obtain $0.04^{+0.27}_{-0.05}$. Our CNN is able to predict the SIE parameters in a fraction of a second on a single CPU, and with its output we can predict the image positions and time delays in an automated way, such that we are able to efficiently process the large number of lens detections expected in the near future.
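As an illustration of the kind of regression network the abstract describes, the following is a minimal PyTorch sketch of a CNN that maps a lens-image cutout to the five SIE parameters $(x, y, e_x, e_y, \theta_E)$. This is not the authors' architecture: the layer sizes, the single-band 64x64 input, the MSE loss, and the training loop are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class SIEParameterCNN(nn.Module):
    """Minimal CNN regressing the five SIE parameters (x, y, e_x, e_y, theta_E)
    from a lens image. Layer sizes and the input shape (1 band, 64x64 pixels)
    are illustrative assumptions, not the network used in the paper."""

    def __init__(self, n_bands: int = 1, n_params: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(n_bands, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 64x64 -> 32x32
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )
        self.regressor = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 8 * 8, 128), nn.ReLU(),
            nn.Linear(128, n_params),             # (x, y, e_x, e_y, theta_E)
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        return self.regressor(self.features(image))


if __name__ == "__main__":
    model = SIEParameterCNN()
    # Dummy data standing in for simulated HSC-based lens cutouts and their
    # ground-truth SIE parameters from the simulation pipeline.
    images = torch.randn(4, 1, 64, 64)
    targets = torch.randn(4, 5)
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.MSELoss()
    for step in range(3):                         # a few illustrative steps
        optimizer.zero_grad()
        loss = loss_fn(model(images), targets)
        loss.backward()
        optimizer.step()
    # After training, a single forward pass yields the SIE parameters in well
    # under a second on a CPU, which is the efficiency gain the abstract cites.
    print(model(images[:1]))
```

The predicted SIE parameters could then be passed to a standard lens-modeling code to solve the lens equation for the multiple image positions and time delays, which is the automated follow-up step the abstract describes.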