Paper Title
One-Shot Transfer of Affordance Regions? AffCorrs!
Paper Authors
Paper Abstract
In this work, we tackle one-shot visual search of object parts. Given a single reference image of an object with annotated affordance regions, we segment semantically corresponding parts within a target scene. We propose AffCorrs, an unsupervised model that combines the properties of pre-trained DINO-ViT's image descriptors and cyclic correspondences. We use AffCorrs to find corresponding affordances both for intra- and inter-class one-shot part segmentation. This task is more difficult than supervised alternatives, but enables future work such as learning affordances via imitation and assisted teleoperation.
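The abstract describes matching annotated reference parts to a target scene via cyclic correspondences over pre-trained DINO-ViT descriptors. The snippet below is a minimal illustrative sketch of that idea, not the authors' implementation: it performs mutual (cyclic) nearest-neighbour matching between two assumed sets of patch descriptors, with random arrays standing in for real DINO-ViT features, and then propagates a hypothetical reference affordance mask to the matched target patches.

```python
import numpy as np

# Sketch only: cyclic (mutual nearest-neighbour) correspondence between two
# sets of patch descriptors, such as those produced by a pre-trained DINO-ViT.
# Descriptor extraction is omitted; random arrays below are stand-ins.

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def cyclic_correspondences(ref_desc, tgt_desc):
    """Return (ref_idx, tgt_idx) pairs that are mutual nearest neighbours.

    ref_desc: (N, D) patch descriptors of the reference image.
    tgt_desc: (M, D) patch descriptors of the target image.
    """
    ref = l2_normalize(ref_desc)
    tgt = l2_normalize(tgt_desc)
    sim = ref @ tgt.T                    # (N, M) cosine similarities
    nn_ref_to_tgt = sim.argmax(axis=1)   # best target patch for each reference patch
    nn_tgt_to_ref = sim.argmax(axis=0)   # best reference patch for each target patch
    # A pair (i, j) is a cyclic correspondence if i -> j and j -> i.
    ref_idx = np.arange(ref.shape[0])
    keep = nn_tgt_to_ref[nn_ref_to_tgt[ref_idx]] == ref_idx
    return ref_idx[keep], nn_ref_to_tgt[ref_idx[keep]]

# Toy usage with random stand-in descriptors (e.g. 384-dim ViT-S features,
# 14 x 14 = 196 patches per image); values and shapes are assumptions.
rng = np.random.default_rng(0)
ref_desc = rng.normal(size=(196, 384))
tgt_desc = rng.normal(size=(196, 384))
ref_ids, tgt_ids = cyclic_correspondences(ref_desc, tgt_desc)

# Given a boolean mask over reference patches marking an annotated affordance
# region, the matched target patches yield a coarse part-segmentation proposal.
affordance_mask = np.zeros(196, dtype=bool)
affordance_mask[40:60] = True            # hypothetical annotated region
proposal = tgt_ids[affordance_mask[ref_ids]]
print(f"{len(ref_ids)} cyclic matches, {len(proposal)} inside the affordance region")
```

The mutual-nearest-neighbour check is what makes the correspondence "cyclic": a reference patch is kept only if its best target match points back to it, which filters out one-directional, ambiguous matches before any part mask is transferred.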