991.
In this paper we propose a feature normalization method for speaker-independent speech emotion recognition. The performance of a speech emotion classifier depends largely on the training data, and a large number of unknown speakers poses a great challenge. To address this problem, we first extract and analyse 481 basic acoustic features. Second, we use principal component analysis and linear discriminant analysis jointly to construct a speaker-sensitive feature space. Third, we classify the emotional utterances into pseudo-speaker groups in this space using fuzzy k-means clustering. Finally, we normalize the original basic acoustic features of each utterance based on its group membership. To verify the normalization algorithm, we adopt a Gaussian-mixture-model-based classifier for the recognition tests. The experimental results show that the algorithm is effective on our locally collected database as well as on the eNTERFACE’05 Audio-Visual Emotion Database: the resulting emotional features are robust to speaker change, and an improved recognition rate is observed.
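A minimal sketch of the group-wise normalization pipeline described above, assuming scikit-learn and numpy. Hard k-means stands in for the paper's fuzzy k-means, the z-score normalization rule, dimensionalities, and group count are illustrative assumptions, and the GMM classification stage is not shown.

```python
# Sketch of pseudo-speaker group normalization (assumes scikit-learn / numpy).
# Hard k-means stands in for the paper's fuzzy k-means; z-scoring per group and
# the 481-dim feature matrix are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans

def normalize_by_pseudo_speaker(X, speaker_ids, n_groups=8):
    """X: (n_utterances, 481) basic acoustic features."""
    # 1) PCA + LDA jointly build a speaker-sensitive space.
    Z = PCA(n_components=50).fit_transform(X)
    Z = LinearDiscriminantAnalysis().fit(Z, speaker_ids).transform(Z)
    # 2) Cluster utterances into pseudo-speaker groups in that space.
    groups = KMeans(n_clusters=n_groups, n_init=10).fit_predict(Z)
    # 3) Normalize the ORIGINAL features within each group.
    Xn = np.empty_like(X, dtype=float)
    for g in range(n_groups):
        m = groups == g
        mu, sd = X[m].mean(axis=0), X[m].std(axis=0) + 1e-8
        Xn[m] = (X[m] - mu) / sd
    return Xn, groups
```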
992.
The speech recognition system basically extracts the textual information present in speech. In the present work, a speaker-independent isolated-word recognition system has been developed for Kannada, one of the south Indian languages. For European languages such as English, a large amount of speech recognition research has been carried out; by contrast, significantly less work has been reported for Indian languages such as Kannada, and no standard speech corpus is readily available. In the present study, a speech database has been developed by recording utterances of a regional Kannada news corpus from different speakers. The speech recognition system has been implemented using the Hidden Markov Tool Kit. Two separate pronunciation dictionaries, phone based and syllable based, are built in order to design and evaluate phone-level and syllable-level sub-word acoustic models. Experiments have been carried out, and the results are analyzed while varying the number of Gaussian mixtures in each state of the monophone Hidden Markov Model (HMM). Context-dependent triphone HMMs have also been built for the same Kannada speech corpus, and the recognition accuracies are comparatively analyzed. Mel frequency cepstral coefficients, along with their first and second derivative coefficients, are used as feature vectors and are computed in acoustic front-end processing. Overall word recognition accuracies of 60.2 % and 74.35 % have been obtained for the monophone and triphone models, respectively. The study shows a good improvement in the accuracy of the isolated-word Kannada speech recognition system using triphone HMMs compared with monophone HMMs.
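A sketch of the acoustic front end described above (MFCCs plus first and second derivatives), assuming librosa; the window/hop sizes and coefficient count are illustrative, not the paper's settings, and the HMM training itself would be done in HTK.

```python
# Acoustic front-end sketch: MFCC plus delta and delta-delta (assumes librosa).
# Frame sizes and n_mfcc are illustrative assumptions.
import librosa
import numpy as np

def front_end(wav_path, sr=16000, n_mfcc=13):
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=400, hop_length=160)  # 25 ms / 10 ms
    d1 = librosa.feature.delta(mfcc, order=1)
    d2 = librosa.feature.delta(mfcc, order=2)
    # One 39-dimensional observation vector per frame.
    return np.vstack([mfcc, d1, d2]).T
```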
993.
This paper presents a novel approach to computing high band (HB) features based on linear predictive coding (LPC) and mel frequency cepstral coefficient (MFCC) techniques. The HB features are embedded into the encoded bitstream of the proposed global system for mobile (GSM) full rate (FR) 06.10 coder using joint source coding and data hiding before transmission to the receiving terminal. At the receiver, the HB features are extracted to reproduce the HB portion of speech; different excitation extension techniques are applied for this purpose, and their results are evaluated in terms of quality (intelligibility and naturalness) and bandwidth. A MATLAB-based test bench is created to implement the proposed artificial bandwidth extension (ABE) coder, and a series of simulations is carried out to gain insight into its performance using subjective [mean opinion score (MOS)] and objective [perceptual evaluation of speech quality (PESQ)] analyses. The results of both analyses indicate that the proposed ABE coder outperforms the narrowband (legacy) GSM FR coder. Moreover, compared with LPC-based parameterization of the ABE coder, MFCC parameterization yields higher speech intelligibility, as evidenced by slightly better PESQ and MOS scores.
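A sketch of one of the two HB parameterizations compared above: LPC coefficient extraction via autocorrelation and the Levinson-Durbin recursion, using only numpy. The order and framing are assumptions, and the GSM FR bitstream embedding is not shown.

```python
# LPC analysis sketch via autocorrelation + Levinson-Durbin (numpy only).
# Order 10 and single-frame input are illustrative assumptions.
import numpy as np

def lpc(frame, order=10):
    """Return LPC coefficients a[0..order] (a[0] = 1) for one speech frame."""
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    a = np.zeros(order + 1); a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        k = -(r[i] + np.dot(a[1:i], r[i - 1:0:-1])) / err  # reflection coeff.
        a[1:i + 1] = a[1:i + 1] + k * a[i - 1::-1][:i]
        err *= (1.0 - k * k)
    return a
```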
994.
This article presents three new methods (M5, M6, M7) for estimating an unknown map projection and its parameters. Such an analysis is beneficial for historic, old, or current maps lacking information about their map projection, and it can improve their georeferencing. The location similarity approach takes into account the residuals on the corresponding features; the minimum is found using non-linear least squares. In the shape similarity approach, the minimized objective function takes into account the spatial distribution of the features together with the shapes of the meridians, parallels, and other 0D-2D elements. Owing to its non-convexity and discontinuity, its global minimum is determined using global optimization, represented here by differential evolution. The constant values of the projection φ_k, λ_k, φ_1, λ_0 and the map constants R, X, Y, α (with respect to which the methods are invariant) are estimated. All methods are compared, and results are presented for synthetic data as well as for 8 early maps from the Map Collection of the Charles University and the David Rumsey Map Collection. The proposed algorithms have been implemented in the new version of the detectproj software.
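A sketch of the location-similarity idea, assuming scipy: projection constants are estimated by non-linear least squares over control-point residuals. The equidistant conic projection, the parameter vector, and all variable names are illustrative assumptions, not detectproj's actual candidate set or formulation.

```python
# Location-similarity sketch: fit projection constants by non-linear least
# squares over control-point residuals (assumes scipy/numpy; the equidistant
# conic is an example projection only).
import numpy as np
from scipy.optimize import least_squares

def equidistant_conic(phi, lam, phi1, lam0, R):
    n = np.sin(np.radians(phi1))
    rho = R * (np.radians(phi1) - np.radians(phi)) + R / np.tan(np.radians(phi1))
    theta = n * np.radians(lam - lam0)
    return rho * np.sin(theta), -rho * np.cos(theta)

def residuals(p, phi, lam, x, y):
    phi1, lam0, R, dx, dy = p
    X, Y = equidistant_conic(phi, lam, phi1, lam0, R)
    return np.concatenate([X + dx - x, Y + dy - y])

# phi/lam: geographic control points; x/y: their measured map coordinates.
# fit = least_squares(residuals, x0=[45.0, 15.0, 6380.0, 0.0, 0.0],
#                     args=(phi, lam, x, y))
# For the non-convex shape-similarity objective, a global optimizer such as
# scipy.optimize.differential_evolution would take the place of least_squares.
```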
995.
With the expansion of wireless-communication infrastructure and the evolution of indoor positioning technologies, the demand for location-based services (LBS) has been increasing in indoor as well as outdoor spaces. However, location privacy is a significant challenge for realizing indoor LBS. To avoid violations of location privacy, much research has been performed, and location K-anonymity has been intensively studied: a user location is blurred with a cloaking region involving at least K−1 locations of other persons. Owing to the differences between indoor and outdoor spaces, however, it is difficult to apply this approach directly in an indoor space. First, the distance metric in indoor space is defined differently from that in Euclidean and road-network spaces. Second, a bounding region, the general form of an anonymizing spatial region (ASR) in Euclidean space, does not respect the locality property of indoor space, where movement is constrained by building components. Therefore, we introduce the concept of indoor location K-anonymity in this paper. We then investigate the requirements of an ASR in indoor spaces and propose novel methods to determine the ASR, considering the hierarchical structure of indoor space. While indoor ASRs are determined at the anonymizer, we also propose processing methods for r-range queries and k-nearest-neighbor queries at the location-based service provider. We validate our methods with experimental analysis of query-processing performance and resilience against attacks in indoor spaces.
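A minimal sketch of the cloaking idea in an indoor setting: grow a region of adjacent cells (rooms, corridors) around the user's cell by breadth-first search until at least K users are covered. Plain Python; the cell graph, user counts, and the BFS expansion order are illustrative assumptions, not the paper's hierarchy-aware algorithm.

```python
# Indoor ASR sketch: BFS expansion over a cell-adjacency graph until the
# region covers >= K users (all structures are illustrative assumptions).
from collections import deque

def indoor_asr(start, adjacency, users_in, K):
    """adjacency: cell -> iterable of neighboring cells;
    users_in: cell -> number of users currently in that cell."""
    region, seen = [], {start}
    queue, covered = deque([start]), 0
    while queue and covered < K:
        cell = queue.popleft()
        region.append(cell)
        covered += users_in.get(cell, 0)
        for nxt in adjacency.get(cell, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return region if covered >= K else None  # None: K-anonymity unattainable
```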
996.
Exploring massive mobile data for location-based services has become one of the key challenges in mobile data mining. In this paper, we investigate the problem of finding a correlation between the collective behavior of mobile users and the distribution of points of interest (POIs) in a city. Specifically, we use large-scale data dumps collected from cell towers and POIs extracted from a popular social network service, Weibo. Our objective is to use the data from these two different sources to build a model for predicting the POI densities of different regions in the covered area. One application domain that may benefit from our research is business recommendation, where a prediction result can serve as a recommendation for opening a new store or branch. The crux of our contribution is the method of representing the collective behavior of mobile users as a histogram of connection counts over a period of time in each region. This representation ultimately enables us to apply a supervised learning algorithm to our problem and train a POI prediction model using the POI data set as the ground truth. We studied 12 state-of-the-art classification and regression algorithms; experimental results demonstrate the feasibility and effectiveness of the proposed method.
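A sketch of the histogram representation and a supervised regressor on top of it, assuming scikit-learn and numpy. The bin edges, count ceiling, and the random-forest choice are illustrative stand-ins for the 12 algorithms the paper compares.

```python
# Sketch: per-region connection-count histograms as features for POI-density
# regression (bin layout and model choice are assumptions).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def histogram_features(counts_per_hour, bins=16, cmax=1000):
    """counts_per_hour: (n_regions, n_hours) tower connection counts."""
    edges = np.linspace(0, cmax, bins + 1)
    H = np.stack([np.histogram(r, bins=edges)[0] for r in counts_per_hour])
    return H / np.maximum(H.sum(axis=1, keepdims=True), 1)  # per-region norm

# X = histogram_features(counts);  y = poi_density  (ground truth from Weibo)
# model = RandomForestRegressor(n_estimators=200).fit(X, y)
```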
997.
As software systems continue to play an important role in our daily lives, their quality is of paramount importance, and a plethora of prior research has focused on predicting defect-prone software components. One line of this research focuses on predicting software changes that are fix-inducing. Although prior work on fix-inducing changes achieves highly accurate results, it has one main drawback: it assigns the same level of impact to all fix-inducing changes. We argue that treating all fix-inducing changes the same is not ideal, since a small typo in a change is easier for a developer to address than a thread-synchronization issue. Therefore, in this paper, we study high-impact fix-inducing changes (HIFCs). Since the impact of a change can be measured in different ways, we first propose a measure of the impact of fix-inducing changes that takes into account the implementation work that must be done by developers in later (fixing) changes. Our measure uses the amount of churn, the number of files, and the number of subsystems modified by developers during the associated fix of the fix-inducing change. We perform our study on six large open source projects to build specialized models that identify HIFCs, determine the best indicators of HIFCs, and examine the benefits of prioritizing HIFCs. Using change factors, we are able to predict 56 % to 77 % of HIFCs with an average false alarm (misclassification) rate of 16 %. We find that the lines of code added, the number of developers who worked on a change, and the number of prior modifications of the files modified during a change are the best indicators of HIFCs. Lastly, we observe that a specialized model for HIFCs can provide inspection-effort savings of 4 % over state-of-the-art models. We believe our results will help practitioners prioritize their efforts toward the most impactful fix-inducing changes and save inspection effort.
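A sketch of how the impact measure and the specialized model could be wired together, assuming pandas and scikit-learn. How the three impact dimensions (churn, files, subsystems of the associated fix) are combined and the top-quartile HIFC threshold are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch: label high-impact fix-inducing changes (HIFCs) and train a classifier
# over change factors (combination rule and threshold are assumptions).
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

def label_hifc(fixes: pd.DataFrame) -> pd.Series:
    """fixes: one row per fix-inducing change, with columns describing its
    associated fix: 'churn', 'files_modified', 'subsystems_modified'."""
    impact = (fixes["churn"].rank(pct=True)
              + fixes["files_modified"].rank(pct=True)
              + fixes["subsystems_modified"].rank(pct=True))
    return impact >= impact.quantile(0.75)  # top quartile -> HIFC

# change_factors: e.g. lines added, #developers, prior file modifications
# clf = RandomForestClassifier().fit(change_factors, label_hifc(fixes))
```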
998.
Reuse of software components, either closed or open source, is considered one of the most important best practices in software engineering, since it reduces development cost and improves software quality. However, since reused components are (by definition) generic, they need to be customized and integrated into a specific system before they can be useful. Because this integration is system-specific, the integration effort is non-negligible and increases maintenance costs, especially if more than one component needs to be integrated. This paper performs an empirical study of multi-component integration in the context of three successful open source distributions (Debian, Ubuntu and FreeBSD). Such distributions integrate thousands of open source components with an operating system kernel to deliver a coherent software product to millions of users worldwide. We empirically identified seven major integration activities performed by the maintainers of these distributions, documented how these activities are carried out, and then evaluated and refined the identified activities with input from six maintainers of the three studied distributions. The documented activities provide a common vocabulary for component integration in open source distributions and outline a roadmap for future research on software integration.
999.
Communication in global software development is hindered by language differences in countries lacking English-speaking professionals. Machine translation is a technology that uses software to translate from one natural language to another, and its progress has been steady over the last decade. Machine translation technology is particularly appealing now because it can be used, in the form of cross-language chat services, in countries that are entering global software projects. However, despite the recent progress of the technology, we still lack a thorough understanding of how real-time machine translation affects communication. In this paper, we present a set of empirical studies whose goal is to assess to what extent real-time machine translation can be used in distributed, multilingual requirements meetings instead of English. The results suggest that, despite being far from 100 % accurate, real-time machine translation does not disrupt the conversation flow and is therefore accepted with favor by participants; however, stronger effects can be expected when language barriers are more critical. Our findings add to the evidence about the recent advances of machine translation technology and provide some guidance to global software engineering practitioners regarding the losses and gains of using English as a lingua franca in multilingual group communication, as in the case of computer-mediated requirements meetings.
1000.
Several code smell detection tools have been developed, but they provide different results, because smells can be subjectively interpreted, and hence detected, in different ways. In this paper, we perform, to the best of our knowledge, the largest experiment to date applying machine learning algorithms to code smell detection. We experiment with 16 different machine-learning algorithms on four code smells (Data Class, Large Class, Feature Envy, Long Method) and 74 software systems, with 1986 manually validated code smell samples. We found that all algorithms achieved high performance on the cross-validation data set, yet the highest performance was obtained by J48 and Random Forest, while the worst was achieved by support vector machines. However, the lower prevalence of code smells, i.e., imbalanced data, in the entire data set caused varying performance that needs to be addressed in future studies. We conclude that the application of machine learning to the detection of these code smells can provide high accuracy (>96 %), and that only a hundred training examples are needed to reach at least 95 % accuracy.
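A sketch of the kind of cross-validated comparison described above, assuming scikit-learn: Random Forest (the paper's best performer, alongside Weka's J48 decision tree) against an SVM (its worst). The feature matrix X (per-class metric vectors) and labels y (manually validated smell instances) are assumed inputs.

```python
# Sketch: cross-validated comparison of two of the 16 studied learners on a
# labeled code-metrics table (data layout and fold count are assumptions).
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def compare(X, y, folds=10):
    for name, clf in [("RandomForest", RandomForestClassifier(n_estimators=100)),
                      ("SVM (RBF)", SVC(kernel="rbf"))]:
        acc = cross_val_score(clf, X, y, cv=folds, scoring="accuracy")
        print(f"{name}: {acc.mean():.3f} +/- {acc.std():.3f}")
```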