Fast speaker adaptation using extended diagonal linear transformation for deep neural networks
Authors: Donghyun Kim, Sanghun Kim
Abstract: This paper explores new techniques based on a hidden-layer linear transformation for fast speaker adaptation of deep neural networks (DNNs). Conventional methods that use full affine transformations are ill-suited to fast adaptation because they require a relatively large number of parameters. Methods that employ singular-value decomposition (SVD) are effective at reducing the number of adaptation parameters, but the matrix decomposition is computationally expensive for online services. We propose an extended diagonal linear transformation method that minimizes the number of adaptation parameters without SVD and improves performance on tasks with only small amounts of adaptation data. On Korean large-vocabulary continuous speech recognition (LVCSR) tasks, the proposed method shows significant improvements, with error-reduction rates of 8.4% and 17.1% when adapting on five and 50 conversational sentences, respectively. Compared with SVD-based adaptation methods, it achieves higher recognition performance with fewer parameters.
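To make the parameter-count argument in the abstract concrete, the following Python/NumPy sketch contrasts a full affine hidden-layer adaptation with a plain diagonal one. This is an illustration, not the authors' implementation: the hidden-layer width, the initialization, and the omission of the paper's "extension" to the diagonal form (whose exact structure is not given in the abstract) are all assumptions.

```python
import numpy as np

def affine_adapt(h, W, b):
    """Full affine adaptation of a hidden layer: h' = W @ h + b.
    For d hidden units this needs d*d + d speaker-specific parameters."""
    return W @ h + b

def diagonal_adapt(h, d, b):
    """Diagonal linear adaptation: h' = diag(d) @ h + b.
    Only 2*d parameters per adapted layer, so it can be estimated
    from as little as a few sentences of speaker data."""
    return d * h + b

# Toy example with an assumed hidden-layer width of 512.
rng = np.random.default_rng(0)
width = 512
h = rng.standard_normal(width)           # hidden-layer activations

# Full affine transform: 512*512 + 512 = 262,656 parameters.
W = np.eye(width) + 0.01 * rng.standard_normal((width, width))
b = np.zeros(width)
print(affine_adapt(h, W, b).shape)       # (512,)

# Diagonal transform: 2*512 = 1,024 parameters.
d = np.ones(width) + 0.01 * rng.standard_normal(width)
print(diagonal_adapt(h, d, b).shape)     # (512,)
```

In this sketch both transforms are initialized near the identity, so the unadapted network behaves like the speaker-independent model; adaptation then updates only the transform parameters on the target speaker's data.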
Keywords: DNN acoustic modeling; DNN adaptation; DNN speech recognition; fast speaker adaptation; SVD adaptation