991.
Technical debt is a metaphor for delayed software maintenance tasks. Incurring technical debt may bring short-term benefits to a project, but such benefits are often achieved at the cost of extra work in the future, analogous to paying interest on the debt. Currently, technical debt is managed implicitly, if at all. However, on large systems, it is too easy to lose track of delayed tasks or to misunderstand their impact. Therefore, we have proposed a new approach to managing technical debt, which we believe helps software managers make informed decisions. In this study we explored the costs of the new approach by tracking the technical debt management activities in an ongoing software project. The results of the study provide insights into the impact of technical debt management on software projects. In particular, we found that there is a significant start-up cost when beginning to track and monitor technical debt, but that the cost of ongoing management soon declines to very reasonable levels.
992.
This paper discusses a new method to perform propagation over a (two-layer, feed-forward) Neural Network embedded in a Constraint Programming model. The method is meant to be employed in Empirical Model Learning, a technique designed to enable optimal decision making over systems that cannot be modeled via conventional declarative means. The key step in Empirical Model Learning is to embed a Machine Learning model into a combinatorial model. It has been shown that Neural Networks can be embedded in a Constraint Programming model by simply encoding each neuron as a global constraint, which is then propagated individually. Unfortunately, this decomposition approach may lead to weak bounds. To overcome this limitation, we propose a new network-level propagator based on a non-linear Lagrangian relaxation that is solved with a subgradient algorithm. The method proved capable of dramatically reducing the search tree size on a thermal-aware dispatching problem on multicore CPUs. The overhead of optimizing the Lagrangian multipliers is kept within a reasonable level via a few simple techniques. This paper is an extended version of [27], featuring an improved structure, a new filtering technique for the network inputs, a set of overhead reduction techniques, and a thorough experimental evaluation.
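The abstract only names the ingredients of the propagator, so the following is a minimal sketch of the one piece that is generic: tuning a Lagrangian multiplier with a subgradient method so that the relaxation bound tightens. The toy problem (a linearly relaxed 0/1 knapsack), the 1/k step-size rule, and the iteration count are illustrative assumptions; the paper's relaxation is non-linear and defined over the encoded network.

```python
# Toy illustration of subgradient optimization of a Lagrangian multiplier.
# We relax the capacity constraint of a small 0/1 knapsack problem:
#   max sum(v_i * x_i)  s.t.  sum(w_i * x_i) <= C,  x_i in {0, 1}
# L(lam) = sum_i max(0, v_i - lam * w_i) + lam * C is an upper bound on the
# optimum for every lam >= 0; subgradient steps on lam shrink that bound,
# which is the role the multipliers play in a relaxation-based propagator.

values = [10.0, 7.0, 4.0, 3.0]
weights = [5.0, 4.0, 3.0, 2.0]
capacity = 7.0


def lagrangian_bound(lam):
    """Evaluate L(lam) and the relaxed solution x*(lam)."""
    x = [1 if v - lam * w > 0 else 0 for v, w in zip(values, weights)]
    bound = sum(max(0.0, v - lam * w) for v, w in zip(values, weights)) + lam * capacity
    return bound, x


lam = 0.0
best_bound = float("inf")
for k in range(1, 51):
    bound, x = lagrangian_bound(lam)
    best_bound = min(best_bound, bound)
    # Subgradient of L at lam is (capacity - weight used by the relaxed solution).
    subgrad = capacity - sum(w * xi for w, xi in zip(weights, x))
    lam = max(0.0, lam - (1.0 / k) * subgrad)  # projected step, diminishing size

print(f"best upper bound after subgradient steps: {best_bound:.3f}")
```

For this toy instance the bound converges towards the LP-relaxation value, well below the trivial bound obtained with all multipliers at zero.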
993.
The number R(4, 3, 3) is often presented as the unknown Ramsey number with the best chances of being found “soon”. Yet, its precise value has remained unknown for almost 50 years. This paper presents a methodology based on abstraction and symmetry breaking that can be applied to solve hard graph edge-coloring problems. The utility of this methodology is demonstrated by using it to compute the value R(4, 3, 3) = 30. Along the way, it was first necessary to compute the previously unknown set \(\mathcal {R}(3,3,3;13)\), consisting of 78,892 Ramsey colorings.
994.
995.
This paper presents a method for reconstructing unreliable spectral components of speech signals using the statistical distributions of the clean components. Our goal is to model the temporal patterns in the speech signal and to exploit correlations between speech features in the time and frequency domains simultaneously. In this approach, a hidden Markov model (HMM) is first trained on clean speech data to model the temporal patterns that appear in the sequences of spectral components. Using this model, and according to the probabilities of the noisy spectral components occurring in each state, probability distributions for the noisy components are estimated. Then, by applying maximum a posteriori (MAP) estimation to these distributions, the final estimates of the unreliable spectral components are obtained. The proposed method is compared to a common missing-feature method based on probabilistic clustering of the feature vectors, as well as to a state-of-the-art method based on sparse reconstruction. The experimental results show a significant improvement in recognition accuracy on a noise-polluted Persian corpus.
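The abstract does not spell out the estimator, so the sketch below only illustrates the bounded-MAP idea commonly used in missing-feature reconstruction, under the assumption of a diagonal-covariance Gaussian per HMM state in the log-spectral domain (where the noisy value upper-bounds the clean one). The frame values, reliability mask and state mean are made-up inputs, and selecting the state with the trained HMM is assumed to have happened already.

```python
import numpy as np

# Minimal bounded-MAP reconstruction sketch for unreliable spectral components.
# Assumption (not taken from the paper): diagonal-covariance Gaussian per HMM
# state in the log-spectral domain, where additive noise implies clean <= noisy,
# so the noisy value acts as an upper bound on the clean component.

def reconstruct_frame(noisy, reliable_mask, state_mean):
    """Keep reliable components; replace unreliable ones with the bounded-MAP
    estimate min(state mean, noisy value)."""
    noisy = np.asarray(noisy, dtype=float)
    state_mean = np.asarray(state_mean, dtype=float)
    recon = noisy.copy()
    unreliable = ~np.asarray(reliable_mask, dtype=bool)
    # MAP of a Gaussian prior truncated above at the noisy observation:
    # the mode is the mean if it lies below the bound, otherwise the bound.
    recon[unreliable] = np.minimum(state_mean[unreliable], noisy[unreliable])
    return recon


# Example frame: 5 log-spectral components, components 1 and 3 flagged unreliable.
noisy_frame = [2.0, 5.5, 1.2, 4.8, 3.1]
mask = [True, False, True, False, True]          # True = reliable
clean_state_mean = [1.9, 3.0, 1.1, 6.0, 3.0]     # from the most likely HMM state

print(reconstruct_frame(noisy_frame, mask, clean_state_mean))
# -> [2.0, 3.0, 1.2, 4.8, 3.1]  (component 3's mean exceeds the bound, so it is clipped)
```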
996.
In this paper we propose a feature normalization method for speaker-independent speech emotion recognition. The performance of a speech emotion classifier largely depends on the training data, and a large number of unknown speakers can pose a great challenge. To address this problem, first, we extract and analyse 481 basic acoustic features. Second, we use principal component analysis and linear discriminant analysis jointly to construct a speaker-sensitive feature space. Third, we classify the emotional utterances into pseudo-speaker groups in the speaker-sensitive feature space using fuzzy k-means clustering. Finally, we normalize the original basic acoustic features of each utterance based on its group information. To verify our normalization algorithm, we adopt a Gaussian mixture model based classifier for the recognition tests. The experimental results show that our normalization algorithm is effective on our locally collected database, as well as on the eNTERFACE’05 Audio-Visual Emotion Database. The emotional features obtained using our method are robust to speaker changes, and an improved recognition rate is observed.
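A rough sketch of the pipeline shape described above, using scikit-learn. Plain k-means stands in for the paper's fuzzy k-means, and the synthetic data, the number of PCA components and the number of pseudo-speaker groups are arbitrary placeholders; only the order of the steps follows the abstract.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Toy stand-ins: 200 utterances x 481 basic acoustic features, 4 emotion classes.
X = rng.normal(size=(200, 481))
y = rng.integers(0, 4, size=200)

# 1) PCA then LDA to build a compact "speaker-sensitive" feature space.
X_pca = PCA(n_components=50).fit_transform(X)
X_lda = LinearDiscriminantAnalysis(n_components=3).fit_transform(X_pca, y)

# 2) Group utterances into pseudo-speaker clusters in that space.
#    (KMeans is a hard-assignment stand-in for the paper's fuzzy k-means.)
groups = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X_lda)

# 3) Normalize the ORIGINAL 481 features per pseudo-speaker group (z-score).
X_norm = np.empty_like(X)
for g in np.unique(groups):
    idx = groups == g
    mu = X[idx].mean(axis=0)
    sigma = X[idx].std(axis=0) + 1e-8   # guard against zero variance
    X_norm[idx] = (X[idx] - mu) / sigma

# X_norm would then be fed to the GMM-based emotion classifier.
print(X_norm.shape)
```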
997.
As software systems continue to play an important role in our daily lives, their quality is of paramount importance. Therefore, a plethora of prior research has focused on predicting which components of software are defect-prone. One line of this research focuses on predicting software changes that are fix-inducing. Although the prior work on fix-inducing changes has many advantages in terms of highly accurate results, it has one main drawback: it assigns the same level of impact to all fix-inducing changes. We argue that treating all fix-inducing changes the same is not ideal, since a small typo in a change is easier for a developer to address than a thread synchronization issue. Therefore, in this paper, we study high-impact fix-inducing changes (HIFCs). Since the impact of a change can be measured in different ways, we first propose a measure of the impact of fix-inducing changes which takes into account the implementation work that needs to be done by developers in later (fixing) changes. Our measure of impact for a fix-inducing change uses the amount of churn, the number of files, and the number of subsystems modified by developers during an associated fix of the fix-inducing change. We perform our study on six large open source projects to build specialized models that identify HIFCs, determine the best indicators of HIFCs, and examine the benefits of prioritizing HIFCs. Using change factors, we are able to predict 56% to 77% of HIFCs with an average false alarm (misclassification) rate of 16%. We find that the lines of code added, the number of developers who worked on a change, and the number of prior modifications to the files modified during a change are the best indicators of HIFCs. Lastly, we observe that a specialized model for HIFCs can provide inspection effort savings of 4% over state-of-the-art models. We believe our results will help practitioners prioritize their efforts towards the most impactful fix-inducing changes and save inspection effort.
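The abstract names the three ingredients of the impact measure (churn, files and subsystems touched by the associated fix) but not how they are combined, so the sketch below uses a hypothetical combination, a normalized average with a made-up threshold, purely to show the shape of the computation; it is not the paper's formula.

```python
from dataclasses import dataclass

@dataclass
class FixInfo:
    """Properties of the fix associated with a fix-inducing change."""
    churn: int         # lines added + deleted in the fix
    n_files: int       # number of files modified by the fix
    n_subsystems: int  # number of subsystems touched by the fix


def impact_score(fix, max_churn, max_files, max_subsystems):
    """Hypothetical impact measure: average of the three dimensions, each
    scaled to [0, 1] by the project-wide maximum (NOT the paper's formula)."""
    return (fix.churn / max_churn
            + fix.n_files / max_files
            + fix.n_subsystems / max_subsystems) / 3.0


fixes = [FixInfo(12, 1, 1), FixInfo(840, 14, 3), FixInfo(95, 3, 1)]
mc = max(f.churn for f in fixes)
mf = max(f.n_files for f in fixes)
ms = max(f.n_subsystems for f in fixes)

THRESHOLD = 0.5  # hypothetical cut-off for calling a change "high impact"
for f in fixes:
    score = impact_score(f, mc, mf, ms)
    print(f, "HIFC" if score >= THRESHOLD else "low impact", round(score, 2))
```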
998.
Reuse of software components, whether closed or open source, is considered one of the most important best practices in software engineering, since it reduces development cost and improves software quality. However, since reused components are (by definition) generic, they need to be customized and integrated into a specific system before they can be useful. Because this integration is system-specific, the integration effort is non-negligible and increases maintenance costs, especially if more than one component needs to be integrated. This paper presents an empirical study of multi-component integration in the context of three successful open source distributions (Debian, Ubuntu and FreeBSD). Such distributions integrate thousands of open source components with an operating system kernel to deliver a coherent software product to millions of users worldwide. We empirically identified seven major integration activities performed by the maintainers of these distributions, documented how these activities are performed by the maintainers, and then evaluated and refined the identified activities with input from six maintainers of the three studied distributions. The documented activities provide a common vocabulary for component integration in open source distributions and outline a roadmap for future research on software integration.
999.
Communication in global software development is hindered by language differences in countries that lack English-speaking professionals. Machine translation is a technology that uses software to translate from one natural language to another. The progress of machine translation systems has been steady over the last decade. At present, machine translation technology is particularly appealing because it might be used, in the form of cross-language chat services, in countries that are entering global software projects. However, despite the recent progress of the technology, we still lack a thorough understanding of how real-time machine translation affects communication. In this paper, we present a set of empirical studies with the goal of assessing to what extent real-time machine translation can be used, instead of English, in distributed, multilingual requirements meetings. The results suggest that, despite being far from 100% accurate, real-time machine translation does not disrupt the conversation flow and is therefore favorably accepted by participants. However, stronger effects can be expected to emerge when language barriers are more critical. Our findings add to the evidence about the recent advances of machine translation technology and provide some guidance to global software engineering practitioners regarding the losses and gains of using English as a lingua franca in multilingual group communication, as in the case of computer-mediated requirements meetings.
1000.
Tracking spatio-temporal activity is highly relevant for domains such as security, health, and quality management. Since animal welfare became a topic in politics and legislation, the locomotion patterns of livestock have received increasing interest. In contrast to the monitoring of pedestrians, cattle activity tracking poses special challenges for both sensors and data analysis. Interesting states are not directly observable by a single sensor. In addition, sensors must be accepted by the cattle and need to be robust enough to cope with a rough environment. In this article, we introduce the novel combination of heart rate and positioning sensors. Attached to the neck and chest, they are less intrusive than accelerometers at the ankles. Exploiting the potential of such a combined sensor system, which records locomotion as well as non-spatial information from the heart rate sensor, is however challenging. We introduce a novel two-level method for activity tracking focused on the duration and sequence of activity states. We combine a Support Vector Machine (SVM) with a Conditional Random Field (CRF) and extend Conditional Random Fields with an explicit representation of duration. The SVM characterizes local activity states, whereas the CRF maps sequences of local states to activity sequences, incorporating spatial and non-spatial contextual knowledge. This combination provides a reliable and comprehensive identification of defined activity patterns, as well as their chronology and durations, suitable for integration into an activity database. This database is used to extract physiological parameters and promises insights into internal states such as fitness, well-being and stress. Interestingly, we were able to demonstrate a significant correlation between resting pulse rate and the day of pregnancy.
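As a rough illustration of the two-level idea, the sketch below trains an SVM on synthetic per-window features and then smooths its per-window probabilities with a hand-rolled Viterbi pass over a "sticky" transition matrix; that pass is only a stand-in for the paper's duration-extended CRF, and all data, state names and parameters are invented.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
N_STATES = 3  # e.g. lying, standing, walking (illustrative labels)

# Level 1: train an SVM on per-window sensor features (synthetic here).
y_train = rng.integers(0, N_STATES, size=300)
X_train = rng.normal(size=(300, 6)) + y_train[:, None]   # class-shifted features
svm = SVC(probability=True, random_state=0).fit(X_train, y_train)

# A test sequence whose true states change slowly (long activity bouts).
y_seq = np.repeat([0, 2, 1], 20)
X_seq = rng.normal(size=(60, 6)) + y_seq[:, None]
emissions = np.log(svm.predict_proba(X_seq) + 1e-12)      # per-window log-probs

# Level 2: Viterbi smoothing with a "sticky" transition matrix, a simple
# stand-in for the duration-aware CRF that favours long, contiguous bouts.
stay, switch = 0.9, 0.1 / (N_STATES - 1)
log_trans = np.log(np.where(np.eye(N_STATES, dtype=bool), stay, switch))

T = len(X_seq)
score = np.full((T, N_STATES), -np.inf)
back = np.zeros((T, N_STATES), dtype=int)
score[0] = emissions[0]
for t in range(1, T):
    cand = score[t - 1][:, None] + log_trans   # rows: previous state, cols: current
    back[t] = cand.argmax(axis=0)
    score[t] = cand.max(axis=0) + emissions[t]

# Backtrace the most likely smoothed state sequence.
path = np.zeros(T, dtype=int)
path[-1] = score[-1].argmax()
for t in range(T - 2, -1, -1):
    path[t] = back[t + 1, path[t + 1]]

print("smoothed accuracy:", (path == y_seq).mean())
```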