Paper Title
Harmonization and Evaluation; Tweaking the Parameters on Human Listeners
Paper Authors
Paper Abstract
Kansei models have been used to study the connotative meaning of music. In multimedia and mixed-reality applications, automatically generated melodies are increasingly common, and it is important to consider whether, and which, feelings such music communicates. Evaluating computer-generated melodies is not a trivial task. Given the difficulty of defining useful quantitative metrics for the quality of a generated musical piece, researchers often resort to human evaluation. In these evaluations, judges are typically asked to rate a set of generated pieces along with some benchmark pieces, the latter often composed by humans. While this kind of evaluation is relatively common, care must be taken when designing the experiment, as human judges can be influenced by a variety of factors. In this paper, we examine the impact of the presence of harmony in the audio files that judges evaluate, to see whether an accompaniment can change the evaluation of generated melodies. To do so, we generate melodies with two different algorithms, harmonize them with an automatic tool designed for this experiment, and ask more than sixty participants to rate the melodies. Statistical analyses show that harmonization does affect the evaluation process, by emphasizing the differences among judgements.
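The abstract states that statistical analyses reveal harmonization emphasizing the differences among judgements, but does not specify the tests used. The following is a minimal sketch of the kind of comparison this implies, assuming Likert-style ratings and standard non-parametric tests; the variable names, example data, and choice of tests are illustrative assumptions, not the authors' actual analysis pipeline.

```python
# Hypothetical comparison of judges' ratings for the same melodies presented
# bare vs. with an automatically generated accompaniment.
import numpy as np
from scipy import stats

# Example Likert-style ratings (1-5) from two groups of judges:
# one group heard the bare melodies, the other heard harmonized versions.
melody_only = np.array([3, 3, 4, 3, 2, 4, 3, 3, 4, 3])
harmonized  = np.array([5, 2, 4, 1, 5, 2, 4, 5, 1, 4])

# Do the rating distributions differ in location? (non-parametric, unpaired)
u_stat, p_location = stats.mannwhitneyu(melody_only, harmonized,
                                        alternative="two-sided")

# Does harmonization spread the judgements further apart?
# Levene's test compares the variability of the two samples.
w_stat, p_spread = stats.levene(melody_only, harmonized)

print(f"Mann-Whitney U: U={u_stat:.1f}, p={p_location:.3f}")
print(f"Levene (spread): W={w_stat:.2f}, p={p_spread:.3f}")
```

A spread-oriented test such as Levene's matches the reported effect (greater divergence among judgements under harmonization) more directly than a pure location test, which is why both are shown here.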