Paper Title

A data-set of piercing needle through deformable objects for Deep Learning from Demonstrations

Paper Authors

Hamidreza Hashempour, Kiyanoush Nazari, Fangxun Zhong, Amir Ghalamzan E.

Abstract

Many robotic tasks are still teleoperated since automating them is very time consuming and expensive. Robot Learning from Demonstrations (RLfD) can reduce programming time and cost. However, conventional RLfD approaches are not directly applicable to many robotic tasks, e.g. robotic suturing with minimally invasive robots, as they require a time-consuming process of designing features from visual information. Deep Neural Networks (DNNs) have emerged as useful tools for creating complex models capturing the relationship between a high-dimensional observation space and a low-level action/state space. Nonetheless, such approaches require a dataset suitable for training appropriate DNN models. This paper presents a dataset of inserting/piercing a needle with two arms of the da Vinci Research Kit in/through soft tissues. The dataset consists of (1) 60 successful needle insertion trials with randomised desired exit points recorded by 6 high-resolution calibrated cameras, (2) the corresponding robot data and calibration parameters, and (3) the commanded robot control input, where all the collected data are synchronised. The dataset is designed for Deep-RLfD approaches. We also implemented several deep RLfD architectures, including simple feed-forward CNNs and different Recurrent Convolutional Networks (RCNs). Our study indicates that RCNs improve the prediction accuracy of the model, even though the baseline feed-forward CNNs successfully learn the relationship between the visual information and the next-step control actions of the robot. The dataset, as well as our baseline implementations of RLfD, are publicly available for benchmarking at https://github.com/imanlab/d-lfd.
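The abstract describes models that map a visual observation and the robot state at one time step to the commanded control input at the next step. As a minimal sketch of how such synchronised trial data could be paired into Deep-RLfD training examples, the snippet below uses only numpy; all array shapes, dimensions, and the function name are illustrative assumptions, not the dataset's actual API.

```python
import numpy as np

# Hypothetical sketch: pair the observation at step t (image + robot state)
# with the commanded control input at step t+1, the supervision target a
# feed-forward CNN or RCN baseline would be trained to predict.
def make_training_pairs(images, robot_states, control_inputs):
    """images: (T, H, W, C) frames from one calibrated camera;
    robot_states: (T, D_s) arm kinematics; control_inputs: (T, D_a)
    commanded inputs. All three are assumed time-synchronised."""
    assert len(images) == len(robot_states) == len(control_inputs)
    pairs = []
    for t in range(len(images) - 1):
        observation = (images[t], robot_states[t])  # model input at step t
        action = control_inputs[t + 1]              # next-step command (label)
        pairs.append((observation, action))
    return pairs

# Toy trial: 10 synchronised steps from a single camera view.
T, H, W, C = 10, 64, 64, 3
imgs = np.zeros((T, H, W, C), dtype=np.uint8)
states = np.zeros((T, 7))    # e.g. one arm's joint positions (assumed size)
controls = np.zeros((T, 6))  # e.g. Cartesian velocity commands (assumed size)

pairs = make_training_pairs(imgs, states, controls)
print(len(pairs))  # 9 pairs: each step t is labelled with the command at t+1
```

An RCN variant would consume a window of consecutive observations rather than a single step, which is consistent with the abstract's finding that recurrence improves prediction accuracy.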
