Paper Title
C2F-TCN: A Framework for Semi and Fully Supervised Temporal Action Segmentation
Paper Authors
Paper Abstract
Temporal action segmentation assigns an action label to every frame of an untrimmed input video containing multiple actions in sequence. For this task, we propose an encoder-decoder-style architecture named C2F-TCN featuring a "coarse-to-fine" ensemble of decoder outputs. The C2F-TCN framework is enhanced with a novel, model-agnostic temporal feature augmentation strategy based on computationally inexpensive stochastic max-pooling of segments. The framework produces more accurate and better-calibrated supervised results on three benchmark action segmentation datasets. We show that the architecture is flexible for both supervised and representation learning. In line with this, we present a novel unsupervised way to learn frame-wise representations from C2F-TCN. Our unsupervised learning approach hinges on the clustering capabilities of the input features and the formation of multi-resolution features from the decoder's implicit structure. Further, we provide the first semi-supervised temporal action segmentation results by merging representation learning with conventional supervised learning. Our semi-supervised learning scheme, called "Iterative-Contrastive-Classify (ICC)", progressively improves in performance with more labeled data. With 40% of the videos labeled, ICC semi-supervised learning in C2F-TCN performs on par with its fully supervised counterpart.
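To make the augmentation idea concrete, below is a minimal, hypothetical sketch of segment-wise stochastic max-pooling as a temporal feature augmentation, assuming frame-wise input features of shape (T, D). The function name, segment count, and boundary-sampling scheme are illustrative assumptions, not the paper's released implementation.

```python
import torch

def stochastic_segment_max_pool(features: torch.Tensor, num_segments: int) -> torch.Tensor:
    """Hypothetical sketch: split a (T, D) frame-feature sequence at randomly
    drawn temporal boundaries, then max-pool each segment, producing an
    augmented view of fixed length num_segments."""
    T, _ = features.shape
    # Randomly drawn interior boundaries make each segment's extent stochastic.
    interior = sorted(torch.randint(1, T, (num_segments - 1,)).tolist())
    bounds = [0] + interior + [T]
    pooled = []
    for start, end in zip(bounds[:-1], bounds[1:]):
        end = max(end, start + 1)  # guard against zero-length segments
        pooled.append(features[start:end].max(dim=0).values)
    return torch.stack(pooled)  # (num_segments, D)

# Usage: two stochastic views of the same 1000-frame, 2048-dim sequence
# differ only in their pooling boundaries, giving cheap temporal augmentation.
frames = torch.randn(1000, 2048)
view_a = stochastic_segment_max_pool(frames, num_segments=100)
view_b = stochastic_segment_max_pool(frames, num_segments=100)
print(view_a.shape, view_b.shape)  # torch.Size([100, 2048]) each
```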