101.
In the age of digital information, audio data has become an important part of many modern computer applications, and audio classification and indexing have become a focus of research in audio processing and pattern recognition. In this paper, we propose effective algorithms to automatically classify audio clips into one of six classes: music, news, sports, advertisement, cartoon, and movie. For these categories, a number of acoustic features, including linear predictive coefficients (LPC), linear predictive cepstral coefficients (LPCC), and Mel-frequency cepstral coefficients (MFCC), are extracted to characterize the audio content. An autoassociative neural network (AANN) model is used to capture the distribution of the acoustic feature vectors. The proposed method then uses a Gaussian mixture model (GMM) classifier, in which the feature vectors from each class are used to train that class's GMM. During testing, the likelihood of a test sample under each model is computed, and the sample is assigned to the class whose model yields the highest likelihood. Audio clip extraction, feature extraction, index creation, and retrieval of the query clip are the major issues in automatic audio indexing and retrieval; a method for indexing the classified audio using LPCC features and the k-means clustering algorithm is also proposed.
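The decision rule described above (assign a sample to the class whose GMM gives the highest likelihood) can be sketched as follows. This is a minimal illustration with diagonal-covariance Gaussians and made-up, untrained parameters, not the paper's trained models:

```python
import numpy as np

def gmm_log_likelihood(x, weights, means, variances):
    """Log-likelihood of vector x under a diagonal-covariance GMM."""
    # log N(x; mu, diag(var)) per mixture component, summed over dimensions
    log_norm = -0.5 * (np.log(2 * np.pi * variances) + (x - means) ** 2 / variances)
    comp_ll = np.log(weights) + log_norm.sum(axis=1)
    # log-sum-exp over components for numerical stability
    m = comp_ll.max()
    return m + np.log(np.exp(comp_ll - m).sum())

def classify(x, class_models):
    """Return the class whose GMM scores x highest."""
    return max(class_models, key=lambda c: gmm_log_likelihood(x, *class_models[c]))

# Two illustrative 2-D classes, one Gaussian component each
models = {
    "music":  (np.array([1.0]), np.array([[0.0, 0.0]]), np.array([[1.0, 1.0]])),
    "sports": (np.array([1.0]), np.array([[5.0, 5.0]]), np.array([[1.0, 1.0]])),
}
print(classify(np.array([4.8, 5.2]), models))  # closest to the "sports" model
```

In practice each class model would be trained with EM on that class's feature vectors, and the test likelihood would be accumulated over all frames of a clip.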
102.
SVM-based classification of natural environment sounds
A method for classifying natural environment sounds with a support vector machine (SVM) model is proposed. First, Mel-frequency cepstral coefficients (MFCCs) are extracted to analyze the sound signals; next, an SVM model is built on the MFCC feature set for the environmental sounds; finally, classification results are obtained by cross-validation testing. The SVM model classifies sounds from 50 natural-environment classes with an accuracy of 99.5704%, clearly outperforming both the k-nearest-neighbor (KNN) and ensembles-of-nested-dichotomies (END) algorithms.
103.
Several channel-compensation techniques integrated into the front end of a speech recognizer to improve channel robustness are described: cepstral mean normalization, RASTA processing, and blind equalization. Two standard channel frequency characteristics, G.712 and MIRS, are introduced as channel-distortion references, and a Mandarin digit-string recognition task is used to evaluate and compare the methods. The recognition results show that blind equalization achieves the best performance in the G.712 case, while cepstral mean normalization outperforms the other methods in the MIRS case, reaching a word error rate of 3.96%.
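Cepstral mean normalization, the simplest of the three techniques above, subtracts the per-utterance mean of each cepstral coefficient; a stationary convolutive channel appears as an additive constant in the cepstral domain and is removed exactly. A minimal sketch:

```python
import numpy as np

def cmn(cepstra):
    """cepstra: (n_frames, n_coeffs) array -> per-utterance mean-normalized copy."""
    return cepstra - cepstra.mean(axis=0, keepdims=True)

# A constant channel offset added to every frame disappears after CMN
rng = np.random.default_rng(0)
clean = rng.standard_normal((100, 13))
channeled = clean + 0.5          # same offset on every frame and coefficient
print(np.allclose(cmn(channeled), cmn(clean)))  # True
```

RASTA and blind equalization target time-varying or unknown channels and require more machinery than this identity illustrates.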
104.
A new voting-based method for speech segmentation
To address the marked degradation of segmentation performance for continuous speech signals in noisy backgrounds, a new method for segmenting continuous speech is proposed. Instead of relying on a single endpoint-detection method, it collects the endpoint decisions produced by several different detectors (fractal-dimension-based, cepstral-feature-based, and HMM-based, among others) and combines them by voting to obtain the final endpoint decision, thereby segmenting the continuous speech signal. Experimental results show that the method clearly improves segmentation accuracy.
105.
Dereverberation of speech signals in the complex cepstral domain
Speech dereverberation can markedly improve the performance of speech communication and recognition systems. This paper briefly reviews the principle of complex-cepstrum dereverberation and studies the filtering characteristics of the complex cepstral domain by simulation. Guided by several dereverberation quality measures, the parameters of the cepstral-domain "low-pass filter" (its upper cutoff point, transition bandwidth, and the curve shape of the transition band) are determined. Within the usual range of reverberation times, the filter's upper cutoff point is found to be independent of reverberation time, and applying a Gaussian window before cepstral-domain filtering improves the dereverberation result.
106.
When, for various reasons, the acoustic signal is the preferred measurement for monitoring equipment condition on an industrial site, a sound-based condition-monitoring method becomes essential. Taking a centrifugal pump as the study object, Mel-frequency cepstral coefficients (MFCCs) are first extracted from field-recorded sound as initial features; the dispersion entropy (DE) of these MFCC features is then computed, and principal component analysis (PCA) reduces the dimensionality of the matrix to construct the feature matrix. The bat algorithm (BA) is used to optimize the penalty coefficient and kernel parameter of a support vector machine (SVM), which then diagnoses several fault conditions of the pump and is compared against other diagnosis methods. Experimental results show that BA optimization raises diagnostic accuracy by 21.7%, and that further mining the MFCC features with DE on top of this model adds another 2.05%.
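The PCA step in the pipeline above projects the feature matrix onto its top-variance directions. A minimal eigendecomposition sketch; the component count is illustrative:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of X onto the top n_components principal components."""
    Xc = X - X.mean(axis=0)                        # center the features
    cov = np.cov(Xc, rowvar=False)                 # feature covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :n_components]       # top-variance directions first
    return Xc @ top

X = np.random.default_rng(2).standard_normal((200, 10))
Z = pca_reduce(X, 3)
print(Z.shape)  # (200, 3)
```

In the paper's pipeline the rows would be DE values of MFCC features rather than random data, and the reduced matrix feeds the BA-tuned SVM.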
107.
To counter the security threat posed by low-altitude, slow, small rotary-wing UAVs in the near field, a UAV detection and identification method based on audio-signal analysis is proposed. The method uses Mel-frequency cepstral coefficients (MFCCs), computed with an improved extraction pipeline and parameters, together with their first-order differences as the UAV audio features. Combined with a proposed multi-distance segmented acquisition scheme, a multi-feature UAV audio "fingerprint library" is built by training Gaussian mixture models (GMMs), and a feature-matching algorithm then performs detection and identification. Experimental results show that the method can detect and identify UAVs within 150 m in a typical suburban environment, with an identification rate of 84.4%.
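The first-order difference (delta) features mentioned above are conventionally computed with the standard regression formula over a window of ±N frames; N=2 below is a common choice, not necessarily the paper's:

```python
import numpy as np

def delta(features, N=2):
    """features: (n_frames, n_coeffs) -> delta features of the same shape."""
    padded = np.pad(features, ((N, N), (0, 0)), mode="edge")  # replicate edges
    denom = 2 * sum(n * n for n in range(1, N + 1))
    out = np.zeros_like(features, dtype=float)
    for t in range(features.shape[0]):
        for n in range(1, N + 1):
            out[t] += n * (padded[t + N + n] - padded[t + N - n])
    return out / denom

# A unit ramp per coefficient: interior frames should have delta exactly 1
mfccs = np.cumsum(np.ones((6, 3)), axis=0)
print(delta(mfccs)[2])  # [1. 1. 1.]
```

The delta vectors are appended to the static MFCCs frame by frame before GMM training.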
108.
沈凌洁  王蔚 《声学技术》2018,37(2):167-174
A method for recognizing Mandarin tones in short utterances is proposed that fuses prosodic features (fundamental frequency and duration) with Mel-frequency cepstral coefficient (MFCC) features, aiming to exploit the strengths of both feature types to raise the tone recognition rate for short speech. The fused feature comprises seven prosodic features and statistics obtained from different models, plus four log posterior probabilities computed from the MFCCs of each segment; Gaussian mixture models represent the cepstral-feature distributions of the four tones. The experiment has two steps. First, the prosody-based and cepstrum-based classifiers are combined at the decision stage for tone classification, each assigned a weight, so as to measure the contribution of the cepstral and prosodic features to the task. Second, word-level prosodic features and frame-level cepstral features are concatenated into a fused-feature supervector for Mandarin tone recognition; accuracy, unweighted average recall (UAR), and Cohen's kappa are used to compare and evaluate five classifiers (two configurations of Gaussian mixture models, a back-propagation neural network, a support vector machine, and a convolutional neural network (CNN)) on an imbalanced data set. The results show that (1) the cepstral features raise the Mandarin tone recognition rate, carrying a weight of 0.11 in the overall classification task; and (2) deep learning (the CNN) on the fused features achieves the highest recognition rate, 87.6%, an improvement of 5.87% over the GMM baseline system. The study demonstrates that cepstral features supply information complementary to prosodic features, improving short-utterance Mandarin tone recognition, and the method can also be applied to related research on prosody detection and paralinguistic information detection.
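The decision-level fusion of the first experiment can be sketched as a fixed weighted sum of the two classifiers' per-tone scores (the abstract reports a weight of about 0.11 for the cepstral stream). The score vectors below are illustrative:

```python
import numpy as np

def fuse_scores(cepstral_scores, prosodic_scores, w_cepstral=0.11):
    """Weighted sum of two classifiers' per-class scores; returns the winning index."""
    fused = (w_cepstral * np.asarray(cepstral_scores)
             + (1.0 - w_cepstral) * np.asarray(prosodic_scores))
    return int(np.argmax(fused))

# Four Mandarin tones: the cepstral stream favors tone 3, the prosodic stream tone 2
print(fuse_scores([0.1, 0.2, 0.6, 0.1], [0.1, 0.7, 0.1, 0.1]))  # 1 (tone 2 wins)
```

With so small a cepstral weight the prosodic stream dominates, which is consistent with prosody being the primary cue for tone and the cepstral stream supplying complementary information.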
109.
Since Turkish is a morphologically productive language, it is almost impossible for a word-based recognition system to completely model the Turkish language. Because a word-based system has difficulty recognizing words never introduced to it, the recognition success rate drops considerably on out-of-vocabulary words. In this study, a speaker-dependent, phoneme-based word recognition system has been designed and implemented for Turkish to overcome this problem. An algorithm for finding phoneme boundaries has been devised in order to segment each word into its phonemes. After segmentation, each phoneme is assigned to a sub-group according to its position and neighboring phonemes in the word. The generated sub-groups are represented by hidden Markov models, a statistical technique, using Mel-frequency cepstral coefficients as the feature vector. Since a phoneme-based approach is adopted in this study, many out-of-vocabulary words can be recognized successfully.
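The HMM decoding that such a recognizer relies on is the Viterbi algorithm: find the most likely state sequence given the observations. A sketch over log-probabilities with a toy two-state, discrete-emission model, not a trained phoneme model:

```python
import numpy as np

def viterbi(log_init, log_trans, log_emit, observations):
    """Most likely state sequence for a discrete-emission HMM."""
    n_states, T = len(log_init), len(observations)
    score = np.full((T, n_states), -np.inf)   # best log-prob ending in each state
    back = np.zeros((T, n_states), dtype=int) # backpointers for path recovery
    score[0] = log_init + log_emit[:, observations[0]]
    for t in range(1, T):
        for j in range(n_states):
            cand = score[t - 1] + log_trans[:, j]
            back[t, j] = int(np.argmax(cand))
            score[t, j] = cand[back[t, j]] + log_emit[j, observations[t]]
    path = [int(np.argmax(score[-1]))]
    for t in range(T - 1, 0, -1):             # trace backpointers
        path.append(back[t, path[-1]])
    return path[::-1]

log = np.log
init = log(np.array([0.6, 0.4]))
trans = log(np.array([[0.7, 0.3], [0.3, 0.7]]))
emit = log(np.array([[0.9, 0.1], [0.2, 0.8]]))  # state 0 prefers symbol 0, state 1 symbol 1
print(viterbi(init, trans, emit, [0, 0, 1, 1]))  # [0, 0, 1, 1]
```

In a real recognizer the discrete emissions would be replaced by Gaussian-mixture densities over MFCC vectors, and decoding would run over concatenated context-dependent phoneme models.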
110.
Classification of speech signals is a vital part of speech signal processing systems. With the advent of speech coding and synthesis, speech-signal classification has become more accurate and faster; conventional methods are considered inaccurate because of the uncertainty and diversity of real speech signals. In this paper, speech signals are classified efficiently using a series of neural-network classifiers with reinforcement-learning operations. Before classification, the essential features are extracted from the speech signal with cepstral analysis: the speech waveform is converted to a parametric representation to obtain a relatively low data rate. To improve classification precision, generative adversarial networks are used to classify the speech signal after cepstral-coefficient feature extraction. The classifiers are first trained on these features, and the best classifier is chosen to classify new data sets. Validation on the test sets is evaluated with reinforcement learning, which provides feedback to the classifiers. Finally, at the user interface, signals retrieved from the classifier are decoded and played back in response to the input query. The results are evaluated in terms of accuracy, recall, precision, F-measure, and error rate; the generative adversarial network attains a higher accuracy rate than the other methods (multi-layer perceptron, recurrent neural networks, deep belief networks, and convolutional neural networks).
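The "train several classifiers, keep the one that validates best" step described above can be sketched with stand-in classifiers (simple threshold rules); the real system trains neural networks and drives selection with reinforcement-learning feedback:

```python
def accuracy(clf, X, y):
    """Fraction of validation samples the classifier labels correctly."""
    return sum(clf(x) == t for x, t in zip(X, y)) / len(y)

def select_best(classifiers, X_val, y_val):
    """Return the name of the classifier with the highest validation accuracy."""
    return max(classifiers, key=lambda name: accuracy(classifiers[name], X_val, y_val))

# Two hypothetical stand-in classifiers on a 1-D feature
classifiers = {
    "thresh_0.5": lambda x: int(x > 0.5),
    "thresh_0.9": lambda x: int(x > 0.9),
}
X_val, y_val = [0.2, 0.6, 0.8, 1.0], [0, 1, 1, 1]
print(select_best(classifiers, X_val, y_val))  # thresh_0.5 (scores 4/4)
```

The RL component in the paper replaces this one-shot selection with a feedback loop that keeps updating the classifiers as new validation results arrive.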