Paper title
Phase transition for parameter learning of Hidden Markov Models
Paper authors
Paper abstract
We study a phase transition in parameter learning of Hidden Markov Models (HMMs). We do this by generating sequences of observed symbols from given discrete HMMs with uniformly distributed transition probabilities and a noise level encoded in the output probabilities. By using the Baum-Welch (BW) algorithm, an Expectation-Maximization algorithm from the field of Machine Learning, we then try to estimate the parameters of each investigated realization of an HMM. We study HMMs with n = 4, 8 and 16 states. By changing the amount of accessible learning data and the noise level, we observe a phase-transition-like change in the performance of the learning algorithm. For larger HMMs and more learning data, the learning behavior improves dramatically below a certain threshold in the noise strength. For a noise level above the threshold, learning is not possible. Furthermore, we use an overlap parameter applied to the results of a maximum-a-posteriori (Viterbi) algorithm to investigate the accuracy of the hidden state estimation around the phase transition.
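A minimal sketch of the pipeline the abstract describes: generate an observation sequence from a "teacher" HMM with uniformly distributed transition probabilities and a noise-controlled emission matrix, fit a "student" HMM with the Baum-Welch (EM) algorithm, then decode with Viterbi and compare the estimated hidden path to the true one via an overlap. The use of hmmlearn's CategoricalHMM, the specific noise parametrization of the output probabilities, the parameter values (n_states, T, noise), and the brute-force relabeling of the learned states are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from itertools import permutations
from hmmlearn.hmm import CategoricalHMM  # assumes hmmlearn >= 0.3

rng = np.random.default_rng(0)

n_states = 4     # n = 4, 8 or 16 in the paper
T = 5000         # length of the observed symbol sequence (illustrative)
noise = 0.1      # noise level of the output probabilities (illustrative)

# "Teacher" HMM: rows of the transition matrix are drawn uniformly at
# random and normalized.
A = rng.uniform(size=(n_states, n_states))
A /= A.sum(axis=1, keepdims=True)

# Assumed noise parametrization (one output symbol per state): each state
# emits "its own" symbol with probability 1 - noise and any other symbol
# with probability noise / (n_states - 1).
B = np.full((n_states, n_states), noise / (n_states - 1))
np.fill_diagonal(B, 1.0 - noise)

teacher = CategoricalHMM(n_components=n_states)
teacher.startprob_ = np.full(n_states, 1.0 / n_states)
teacher.transmat_ = A
teacher.emissionprob_ = B

# Generate the observed symbols together with the true hidden path.
obs, hidden = teacher.sample(T)

# "Student" HMM: estimate the parameters from the observations alone
# with the Baum-Welch (EM) algorithm.
student = CategoricalHMM(n_components=n_states, n_iter=500, tol=1e-4,
                         random_state=1)
student.fit(obs)

# Viterbi (maximum-a-posteriori) estimate of the hidden state sequence.
_, viterbi_path = student.decode(obs, algorithm="viterbi")

# Overlap with the true hidden path, maximized over relabelings of the
# learned states (brute force, only feasible for small n).
overlap = max(
    np.mean(np.asarray(perm)[viterbi_path] == hidden)
    for perm in permutations(range(n_states))
)
print(f"overlap with true hidden states: {overlap:.3f}")
```

Sweeping `noise` and `T` in such a setup and recording the learned parameters and the overlap is one way to probe the threshold behavior the abstract reports; the paper's own overlap parameter and noise encoding may differ from the choices made here.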