992.
This paper presents a method for reconstructing unreliable spectral components of speech signals using the statistical distributions of the clean components. Our goal is to model the temporal patterns in the speech signal and to exploit correlations between speech features in the time and frequency domains simultaneously. In this approach, a hidden Markov model (HMM) is first trained on clean speech data to capture the temporal patterns that appear in the sequences of spectral components. Using this model, and according to the probability of each noisy spectral component occurring in each state, probability distributions for the noisy components are estimated. Then, by applying maximum a posteriori (MAP) estimation to these distributions, the final estimates of the unreliable spectral components are obtained. The proposed method is compared to a common missing-feature method based on probabilistic clustering of the feature vectors, and to a state-of-the-art method based on sparse reconstruction. The experimental results exhibit a significant improvement in recognition accuracy on a noise-polluted Persian corpus.
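The reconstruction step can be sketched in a few lines: given a pre-trained HMM with diagonal-covariance Gaussian state distributions, state posteriors are computed from the reliable components only, and each unreliable component is then estimated from the posterior-weighted state means, bounded by the noisy observation. This is a minimal sketch under assumptions: the filtering-only posteriors, the bounding rule, and all parameter shapes are illustrative, not the paper's exact formulation.

```python
# Hedged sketch: reconstructing unreliable spectral components with a
# pre-trained diagonal-covariance Gaussian HMM. Filtering-only posteriors
# and the noisy-observation upper bound are illustrative assumptions.
import numpy as np

def map_reconstruct(frames, reliable_mask, pi, A, means, variances):
    """frames: (T, D) noisy log-spectra; reliable_mask: (T, D) bool.
    pi: (S,) initial probs, A: (S, S) transitions,
    means/variances: (S, D) per-state Gaussian parameters."""
    T, D = frames.shape
    S = len(pi)
    # Forward pass using only the reliable components of each frame
    # (diagonal Gaussians marginalise by simply dropping dimensions).
    log_alpha = np.zeros((T, S))
    for t in range(T):
        m = reliable_mask[t]
        ll = np.array([
            -0.5 * np.sum(np.log(2 * np.pi * variances[s, m])
                          + (frames[t, m] - means[s, m]) ** 2 / variances[s, m])
            for s in range(S)])
        if t == 0:
            log_alpha[t] = np.log(pi) + ll
        else:
            log_alpha[t] = ll + np.logaddexp.reduce(
                log_alpha[t - 1][:, None] + np.log(A), axis=0)
    # Per-frame state posteriors (a full smoother would also run backward).
    post = np.exp(log_alpha - np.logaddexp.reduce(log_alpha, axis=1)[:, None])
    # Estimate each unreliable component as the posterior-weighted state
    # mean, upper-bounded by the noisy value (additive-noise assumption).
    est = frames.copy()
    for t in range(T):
        for d in range(D):
            if not reliable_mask[t, d]:
                est[t, d] = min(post[t] @ means[:, d], frames[t, d])
    return est
```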
993.
In this paper we propose a feature normalization method for speaker-independent speech emotion recognition. The performance of a speech emotion classifier depends largely on the training data, and a large number of unknown speakers may pose a great challenge. To address this problem, we first extract and analyse 481 basic acoustic features. Second, we use principal component analysis and linear discriminant analysis jointly to construct a speaker-sensitive feature space. Third, we classify the emotional utterances into pseudo-speaker groups in the speaker-sensitive feature space using fuzzy k-means clustering. Finally, we normalize the original basic acoustic features of each utterance based on its group membership. To verify our normalization algorithm, we adopt a Gaussian mixture model based classifier for the recognition test. The experimental results show that our normalization algorithm is effective on our locally collected database as well as on the eNTERFACE’05 Audio-Visual Emotion Database. The emotional features obtained with our method are robust to speaker changes, and an improved recognition rate is observed.
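A minimal sketch of the four-step pipeline, assuming scikit-learn, might look as follows; KMeans stands in for the paper's fuzzy k-means, and the subspace dimensions and group count are illustrative assumptions.

```python
# Hedged sketch: PCA + LDA build a speaker-sensitive space, utterances are
# clustered into pseudo-speaker groups, and the raw features are
# z-normalized per group. KMeans is a hard-clustering stand-in for the
# paper's fuzzy k-means; dimensions are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans

def normalize_by_pseudo_speaker(X, speaker_ids, n_groups=8):
    """X: (N, 481) basic acoustic features; speaker_ids: (N,) speaker
    labels, available on the training set only."""
    # Speaker-sensitive subspace: PCA for decorrelation, then LDA with
    # speaker identity as the class label (assumes > 10 training speakers).
    Z = PCA(n_components=50).fit_transform(X)
    Z = LinearDiscriminantAnalysis(n_components=10).fit_transform(Z, speaker_ids)
    # Pseudo-speaker groups in the speaker-sensitive space.
    groups = KMeans(n_clusters=n_groups, n_init=10).fit_predict(Z)
    # Normalize the *original* features within each group.
    Xn = np.empty_like(X, dtype=float)
    for g in range(n_groups):
        idx = groups == g
        mu, sd = X[idx].mean(axis=0), X[idx].std(axis=0) + 1e-8
        Xn[idx] = (X[idx] - mu) / sd
    return Xn, groups
```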
994.
A speech recognition system extracts the textual information present in speech. In the present work, a speaker-independent isolated-word recognition system has been developed for Kannada, one of the south Indian languages. For European languages such as English, a large amount of speech recognition research has been carried out. In contrast, significantly less work has been reported for Indian languages such as Kannada, and no standard speech corpora are readily available. In the present study, a speech database has been developed by recording utterances of a regional Kannada news corpus from different speakers. The speech recognition system has been implemented using the Hidden Markov Model Toolkit (HTK). Two separate pronunciation dictionaries, one phone-based and one syllable-based, were built in order to design and evaluate the performance of phone-level and syllable-level sub-word acoustic models. Experiments have been carried out and the results analyzed by varying the number of Gaussian mixtures in each state of the monophone hidden Markov model (HMM). Context-dependent triphone HMMs have also been built for the same Kannada speech corpus, and the recognition accuracies are comparatively analyzed. Mel-frequency cepstral coefficients, along with their first- and second-derivative coefficients, are used as feature vectors and are computed in acoustic front-end processing. Overall word recognition accuracies of 60.2 % and 74.35 % have been obtained for the monophone and triphone models, respectively. The study shows a good improvement in the accuracy of the isolated-word Kannada speech recognition system when triphone HMMs are used instead of monophone HMMs.
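The acoustic front end described above can be sketched with librosa; a common HTK-style configuration of 13 MFCCs plus deltas and delta-deltas (39 dimensions) is assumed here, and the sample rate, frame settings, and file name are illustrative assumptions.

```python
# Hedged sketch of the MFCC + delta + delta-delta front end; the 13/39
# dimensionality and 25 ms / 10 ms framing are common HTK-style choices
# assumed here, not values stated in the paper.
import numpy as np
import librosa

def mfcc_39(wav_path, sr=16000):
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13,
                                n_fft=int(0.025 * sr),      # 25 ms window
                                hop_length=int(0.010 * sr)) # 10 ms hop
    d1 = librosa.feature.delta(mfcc, order=1)   # first derivatives
    d2 = librosa.feature.delta(mfcc, order=2)   # second derivatives
    return np.vstack([mfcc, d1, d2]).T          # (frames, 39)

feats = mfcc_39("kannada_news_utterance.wav")   # hypothetical file name
```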
995.
This paper presents a novel approach to high-band (HB) feature extraction based on linear predictive coding (LPC) and mel-frequency cepstral coefficient (MFCC) techniques. The HB features are embedded into the encoded bitstream of the proposed Global System for Mobile Communications (GSM) full-rate (FR) 06.10 coder using joint source coding and data hiding before transmission to the receiving terminal. At the receiver, the HB features are extracted to reproduce the HB portion of the speech; different excitation-extension techniques are applied for this purpose, and their results are evaluated in terms of quality (intelligibility and naturalness) and bandwidth. A MATLAB-based test bench was created to implement the proposed artificial bandwidth extension (ABE) coder, and a series of simulations was carried out to gain insight into its performance using subjective [mean opinion score (MOS)] and objective [perceptual evaluation of speech quality (PESQ)] analyses. The results of both analyses show that the proposed ABE coder outperforms the proposed narrowband GSM FR (legacy GSM FR) coder. Moreover, compared to LPC-based parameterization of the ABE coder, MFCC-based parameterization yields higher speech intelligibility, as evidenced by slightly better PESQ and MOS scores.
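One way the receiver could extend the excitation is sketched below: the narrowband LPC residual is spectrally folded into the high band and shaped by the transmitted HB LPC envelope. This is only one of several extension techniques of the kind the paper evaluates; the folding method, gain, and coefficient layout are illustrative assumptions, and the hidden-bitstream transport is omitted.

```python
# Hedged sketch of one excitation-extension step: spectral folding of the
# narrowband residual, shaped by the high-band LPC envelope recovered from
# the hidden bits. Gain and coefficient conventions are assumptions.
import numpy as np
from scipy.signal import lfilter

def extend_highband(nb_residual, hb_lpc, gain=0.3):
    """nb_residual: narrowband LPC residual, already upsampled to 16 kHz;
    hb_lpc: high-band LPC coefficients [1, a1, ..., ap] from hidden bits."""
    # Modulating by (-1)^n mirrors the spectrum around fs/4, moving
    # low-band energy into the high band (spectral folding).
    folded = nb_residual * np.power(-1.0, np.arange(len(nb_residual)))
    # Shape the folded excitation with the transmitted HB envelope 1/A(z).
    hb = lfilter([1.0], hb_lpc, folded)
    return gain * hb
```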
996.
Nowadays, many software organizations rely on automatic problem-reporting tools to collect crash reports directly from users' environments. These crash reports are later grouped into crash types. Developers usually prioritize crash types based on the number of crash reports and file bug reports for the top crash types. Because a bug can trigger a crash in different usage scenarios, different crash types are sometimes related to the same bug. Two bugs are correlated when the occurrence of one bug causes the other bug to occur. We refer to a group of crash types related to identical or correlated bug reports as a crash correlation group. In this paper, we propose five rules to identify correlated crash types automatically. We propose an algorithm to locate and rank buggy files using crash correlation groups. We also propose a method to identify duplicate and related bug reports. Through an empirical study on Firefox and Eclipse, we show that the first three rules can identify crash correlation groups using stack-trace information, with a precision of 91 % and a recall of 87 % for Firefox, and a precision of 76 % and a recall of 61 % for Eclipse. On the top three buggy-file candidates, the proposed bug-localization algorithm achieves a recall of 62 % and a precision of 42 % for Firefox, and a recall of 52 % and a precision of 50 % for Eclipse. On the top 10 buggy-file candidates, the recall increases to 92 % for Firefox and 90 % for Eclipse. The proposed duplicate-bug-report identification method achieves a recall of 50 % and a precision of 55 % on Firefox, and a recall of 47 % and a precision of 35 % on Eclipse. Developers can combine the proposed crash correlation rules with the new bug-localization algorithm to identify and fix correlated crash types all together. Triagers can use the duplicate-bug-report identification method to reduce their workload by filtering duplicate bug reports automatically.
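A hedged sketch of how a stack-trace-based correlation rule could group crash types: crash types whose top stack frames overlap sufficiently are linked, and the groups are the connected components of the resulting graph. The overlap threshold and the top-N frame window are illustrative assumptions, not the paper's actual five rules.

```python
# Hedged sketch of one stack-trace-overlap rule; crash correlation groups
# are connected components of the crash-type link graph. The 0.6 threshold
# and top-10 frame window are illustrative assumptions.
import itertools
import networkx as nx

def crash_correlation_groups(crash_types, top_n=10, min_overlap=0.6):
    """crash_types: dict mapping crash-type id -> list of stack frames
    (outermost frame first)."""
    g = nx.Graph()
    g.add_nodes_from(crash_types)
    for a, b in itertools.combinations(crash_types, 2):
        fa = set(crash_types[a][:top_n])
        fb = set(crash_types[b][:top_n])
        # Link two crash types when their top frames overlap enough.
        if fa and fb and len(fa & fb) / min(len(fa), len(fb)) >= min_overlap:
            g.add_edge(a, b)
    return list(nx.connected_components(g))
```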
997.
This article presents three new methods (M5, M6, M7) for estimating an unknown map projection and its parameters. Such an analysis is beneficial for historic, old, or current maps that lack information about their map projection, and it can improve their georeferencing. The location-similarity approach takes into account the residuals of the corresponding features; the minimum is found using non-linear least squares. In the shape-similarity approach, the minimized objective function takes into account the spatial distribution of the features together with the shapes of the meridians, parallels, and other 0D-2D elements. Owing to its non-convexity and discontinuity, its global minimum is determined by global optimization, represented by differential evolution. The projection constants φ_k, λ_k, φ_1, λ_0 and the map constants R, X, Y, α (with respect to which the methods are invariant) are estimated. All methods are compared, and results are presented for synthetic data as well as for 8 early maps from the Map Collection of the Charles University and the David Rumsey Map Collection. The proposed algorithms have been implemented in the new version of the detectproj software.
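The location-similarity idea combined with global optimization can be illustrated as below, assuming SciPy's differential evolution and a simple equirectangular projection standing in for a full catalogue of candidate projections; the parameter bounds are illustrative assumptions.

```python
# Hedged sketch: differential evolution searches projection and map
# constants that minimize residuals between projected control points and
# their measured map coordinates. Equirectangular stands in for the
# paper's projections; bounds are illustrative assumptions.
import numpy as np
from scipy.optimize import differential_evolution

def fit_projection(lonlat, xy):
    """lonlat: (N, 2) geographic control points in degrees;
    xy: (N, 2) their measured map coordinates."""
    lon, lat = np.radians(lonlat).T

    def residuals(p):
        R, lon0, lat1, dx, dy = p
        x = R * (lon - lon0) * np.cos(lat1) + dx   # equirectangular
        y = R * lat + dy
        return np.sum((x - xy[:, 0]) ** 2 + (y - xy[:, 1]) ** 2)

    bounds = [(0.01, 10.0),              # scaled Earth radius R
              (-np.pi, np.pi),           # central meridian lon0
              (-np.pi / 2, np.pi / 2),   # standard parallel lat1
              (-1e3, 1e3), (-1e3, 1e3)]  # origin shifts dx, dy
    return differential_evolution(residuals, bounds, seed=0)
```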
998.
With the expansion of wireless-communication infrastructure and the evolution of indoor positioning technologies, the demand for location-based services (LBS) has been increasing in indoor as well as outdoor spaces. However, a significant challenge concerning location privacy must be addressed to realize indoor LBS. To avoid violations of location privacy, much research has been performed, and location K-anonymity has been intensively studied: it blurs a user's location with a cloaking region containing the locations of at least K−1 other persons. Owing to the differences between indoor and outdoor spaces, however, it is difficult to apply this approach directly in an indoor space. First, the definition of the distance metric in indoor space differs from that in Euclidean and road-network spaces. Second, a bounding region, which is the general form of an anonymizing spatial region (ASR) in Euclidean space, does not respect the locality property of indoor space, where movement is constrained by building components. Therefore, we introduce the concept of indoor location K-anonymity in this paper. We then investigate the requirements of ASRs in indoor spaces and propose novel methods to determine the ASR, considering the hierarchical structure of indoor space. While indoor ASRs are determined at the anonymizer, we also propose processing methods for r-range queries and k-nearest-neighbor queries at the location-based service provider. We validate our methods with an experimental analysis of query-processing performance and resilience against attacks in indoor spaces.
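The hierarchy-aware ASR idea can be illustrated with a sketch: starting from the user's leaf node, the region is expanded up the indoor hierarchy until it covers at least K users. The tree representation and user counts are illustrative assumptions, not the paper's exact methods.

```python
# Hedged sketch of hierarchy-based cloaking: climb from the user's room
# (room -> floor -> building) until the subtree holds at least K users.
from dataclasses import dataclass
from typing import Optional

@dataclass
class IndoorNode:
    name: str
    user_count: int                       # users in this node's subtree
    parent: Optional["IndoorNode"] = None

def indoor_asr(node, k):
    """Return the smallest ancestor (or the node itself) containing at
    least k users; None means K-anonymity is unreachable in this building."""
    region = node
    while region is not None and region.user_count < k:
        region = region.parent
    return region

# Hypothetical hierarchy: 2 users in the room, 7 on the floor, 25 overall.
building = IndoorNode("building", 25)
floor2 = IndoorNode("floor-2", 7, parent=building)
room = IndoorNode("room-201", 2, parent=floor2)
asr = indoor_asr(room, k=5)   # expands to floor-2
```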
999.
Exploring massive mobile data for location-based services has become one of the key challenges in mobile data mining. In this paper, we investigate the problem of finding a correlation between the collective behavior of mobile users and the distribution of points of interest (POIs) in a city. Specifically, we use large-scale data dumps collected from cell towers and POIs extracted from a popular social network service, Weibo. Our objective is to use the data from these two different types of sources to build a model that predicts the POI densities of different regions in the covered area. An application domain that may benefit from our research is business recommendation, where a prediction result can serve as a recommendation for opening a new store or branch. The crux of our contribution is the method of representing the collective behavior of mobile users as a histogram of connection counts over a period of time in each region. This representation ultimately enables us to apply a supervised learning algorithm to our problem, training a POI prediction model with the POI data set as the ground truth. We studied 12 state-of-the-art classification and regression algorithms; the experimental results demonstrate the feasibility and effectiveness of the proposed method.
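A sketch of the pipeline under assumptions: each region is represented by a normalized 24-bin hour-of-day histogram of connection counts, and a supervised regressor, here a random forest as one plausible stand-in for the 12 algorithms studied, is trained against the POI densities.

```python
# Hedged sketch: per-region connection-count histograms as features, POI
# density as the regression target. The 24-bin hourly histogram and the
# random-forest choice are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def region_histogram(connection_hours):
    """connection_hours: hour-of-day (0-23) of each logged connection in
    the region; returns a normalized 24-bin histogram."""
    h = np.bincount(connection_hours, minlength=24).astype(float)
    return h / max(h.sum(), 1.0)

# Hypothetical data: 100 regions, 500 connections each, random POI density.
rng = np.random.default_rng(0)
X = np.stack([region_histogram(rng.integers(0, 24, size=500))
              for _ in range(100)])
y = rng.random(100)   # ground-truth POI densities would come from Weibo
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
```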
1000.
As software systems continue to play an important role in our daily lives, their quality is of paramount importance. A plethora of prior research has therefore focused on predicting defect-prone components of software. One line of this research focuses on predicting software changes that are fix-inducing. Although prior research on fix-inducing changes has produced highly accurate results, it has one main drawback: it assigns the same level of impact to all fix-inducing changes. We argue that treating all fix-inducing changes the same is not ideal, since a small typo in a change is easier for a developer to address than a thread-synchronization issue. Therefore, in this paper, we study high-impact fix-inducing changes (HIFCs). Since the impact of a change can be measured in different ways, we first propose a measure of the impact of fix-inducing changes, which takes into account the implementation work that must be done by developers in the later (fixing) changes. Our measure of the impact of a fix-inducing change uses the amount of churn, the number of files, and the number of subsystems modified by developers during the associated fix. We perform our study on six large open-source projects, building specialized models that identify HIFCs, determining the best indicators of HIFCs, and examining the benefits of prioritizing HIFCs. Using change factors, we are able to predict 56 % to 77 % of HIFCs with an average false-alarm (misclassification) rate of 16 %. We find that the number of lines of code added, the number of developers who worked on a change, and the number of prior modifications to the files modified during a change are the best indicators of HIFCs. Lastly, we observe that a specialized model for HIFCs can provide inspection-effort savings of 4 % over state-of-the-art models. We believe our results will help practitioners prioritize their efforts towards the most impactful fix-inducing changes and save inspection effort.
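To make the impact measure concrete, here is a hedged sketch: the churn, file count, and subsystem count of the associated fix are combined, a geometric mean is assumed here and the paper's exact combination may differ, and changes above a threshold are labelled HIFCs.

```python
# Hedged sketch of the impact measure and HIFC labelling; the geometric-mean
# combination and the median threshold are illustrative assumptions, not
# the paper's exact definition.
import numpy as np

def fix_impact(churn, n_files, n_subsystems):
    """All three arguments describe the *fix* of a fix-inducing change."""
    return (churn * n_files * n_subsystems) ** (1.0 / 3.0)  # geometric mean

# Hypothetical fixes for three fix-inducing changes: (churn, files, subsystems).
fixes = [(120, 4, 2), (5, 1, 1), (800, 12, 5)]
impacts = np.array([fix_impact(*f) for f in fixes])
is_hifc = impacts > np.median(impacts)   # label high-impact changes
```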