Paper Title
Do Text-to-Text Multi-Task Learners Suffer from Task Conflict?
Paper Authors
Paper Abstract
Traditional multi-task learning architectures train a single model across multiple tasks through a shared encoder followed by task-specific decoders. Learning these models often requires specialized training algorithms that address task conflict in the shared parameter updates, which can otherwise lead to negative transfer. A new type of multi-task learning within NLP homogenizes multi-task architectures into a shared encoder and language model decoder, which performs surprisingly well across a range of diverse tasks. Does this new architecture suffer from task conflict that requires specialized training algorithms? We study how certain factors in the shift towards text-to-text models affect multi-task conflict and negative transfer, finding that both directional conflict and transfer are surprisingly constant across architectures.
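As background on the "directional conflict" the abstract measures: two tasks conflict on a batch when their gradients with respect to the shared parameters point in opposing directions. The sketch below is not code from the paper; it is a minimal PyTorch illustration, with a hypothetical toy model (`encoder`, `head_a`, `head_b`), of how such conflict is commonly quantified via the cosine similarity of per-task gradients on shared parameters.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy multi-task setup: a shared encoder with two task-specific heads.
# All shapes and modules here are illustrative, not from the paper.
encoder = nn.Linear(16, 32)
head_a = nn.Linear(32, 4)
head_b = nn.Linear(32, 4)

x = torch.randn(8, 16)
y_a = torch.randint(0, 4, (8,))
y_b = torch.randint(0, 4, (8,))

def shared_grad(loss: torch.Tensor) -> torch.Tensor:
    """Flatten the gradient of `loss` w.r.t. the shared encoder parameters."""
    grads = torch.autograd.grad(loss, list(encoder.parameters()),
                                retain_graph=True)
    return torch.cat([g.reshape(-1) for g in grads])

h = encoder(x)
loss_a = F.cross_entropy(head_a(h), y_a)
loss_b = F.cross_entropy(head_b(h), y_b)

g_a = shared_grad(loss_a)
g_b = shared_grad(loss_b)

# Negative cosine similarity means the two tasks pull the shared
# parameters in opposing directions on this batch, i.e. they conflict.
cos = F.cosine_similarity(g_a, g_b, dim=0)
print(f"gradient cosine similarity: {cos.item():+.3f}")
```

Specialized multi-task training algorithms (gradient-surgery methods such as PCGrad, for example) monitor exactly this quantity and modify the shared update when it goes negative; the paper's question is whether text-to-text models exhibit enough such conflict to need them.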