Paper Title
Exploring Generative Adversarial Networks for Image-to-Image Translation in STEM Simulation
Paper Authors
Paper Abstract
The use of accurate scanning transmission electron microscopy (STEM) image simulation methods requires large computation times, which can make them infeasible for simulating many images. Other simulation methods based on linear imaging models, such as the convolution method, are much faster but too inaccurate to be used in applications. In this paper, we explore deep learning models that attempt to translate a STEM image produced by the convolution method into a prediction of the high-accuracy multislice image. We then compare our results to those of regression methods. We find that the deep learning model known as a generative adversarial network (GAN) provides the best results, performing at an accuracy level similar to that of previous regression models on the same dataset. Code and data for this project can be found in the GitHub repository https://github.com/uw-cmg/GAN-STEM-Conv2MultiSlice.
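The convolution method referenced in the abstract treats STEM imaging as a linear model: the fast but inaccurate input image is the projected specimen signal convolved with the probe's intensity profile. A minimal sketch of this linear imaging model, assuming a periodic supercell and an illustrative Gaussian probe (the function names and parameters here are our assumptions, not taken from the paper):

```python
import numpy as np

def gaussian_probe(size, sigma):
    """Illustrative Gaussian stand-in for the STEM probe intensity profile."""
    y, x = np.mgrid[:size, :size]
    c = (size - 1) / 2.0
    psf = np.exp(-((x - c) ** 2 + (y - c) ** 2) / (2.0 * sigma ** 2))
    return psf / psf.sum()  # normalize so convolution preserves total intensity

def convolution_stem_image(projected_signal, probe_psf):
    """Linear imaging model: image = projected signal convolved with probe PSF.

    Uses FFT-based circular convolution (periodic boundary conditions),
    matching the periodic supercells typical of STEM image simulation.
    """
    return np.real(np.fft.ifft2(np.fft.fft2(projected_signal) * np.fft.fft2(probe_psf)))
```

Images produced this way would serve as the network input, with the corresponding multislice simulation as the ground-truth target for the image-to-image translation task.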