281.
Applied Intelligence is one of the most important international scientific journals in the field of artificial intelligence. Since 1991, Applied Intelligence has been oriented...
282.
In this paper we focus on appearance features, particularly Local Binary Patterns, describing the manual component of sign language. We compare the performance of these features with geometric moments describing the trajectory and shape of the hands. Since the non-manual component is also very important for sign recognition, we localize facial landmarks via an Active Shape Model combined with a landmark detector that increases the robustness of model fitting. We test the recognition performance of the individual features and their combinations on a database consisting of 11 signers and 23 signs with several repetitions. Local Binary Patterns outperform the geometric moments. When the features are combined, we achieve a recognition rate of up to 99.75% for signer-dependent tests and 57.54% for signer-independent tests.
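As a rough illustration of the appearance descriptor involved, here is a minimal sketch of computing a uniform LBP histogram for a hand-region crop with scikit-image; the paper's actual patch extraction, parameters and classifier are not specified here, and the input patch is a stand-in.

```python
# Minimal sketch: uniform LBP histogram as an appearance descriptor for a
# hand-region crop (illustrative only; not the paper's exact pipeline).
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_patch, n_points=8, radius=1):
    """Compute a normalized histogram of uniform LBP codes for one patch."""
    codes = local_binary_pattern(gray_patch, n_points, radius, method="uniform")
    # the "uniform" method yields n_points + 2 distinct code values
    n_bins = n_points + 2
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist

# Usage: describe a 64x64 grayscale crop as a 10-dimensional feature vector.
patch = (np.random.rand(64, 64) * 255).astype(np.uint8)  # stand-in for a real hand crop
features = lbp_histogram(patch)
```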
283.
Creating high-quality quad meshes from triangulated surfaces is a highly nontrivial task that necessitates consideration of various application-specific quality metrics. In our work, we follow the premise that automatic reconstruction techniques may not generate outputs meeting all the subjective quality expectations of the user. Instead, we put the user at the center of the process by providing a flexible, interactive approach to quadrangulation design. By combining scalar field topology and combinatorial connectivity techniques, we present a new framework, following a coarse-to-fine design philosophy, that allows explicit control of the subjective quality criteria on the output quad mesh at interactive rates. Our quadrangulation framework uses the new notion of Reeb atlas editing to define, with a small number of interactions, a coarse quadrangulation of the model that captures the main features of the shape, with user-prescribed extraordinary vertices and alignment. Fine-grained tuning is easily achieved with the notion of connectivity texturing, which allows additional extraordinary-vertex specification and explicit feature alignment to capture the high-frequency geometries. Experiments demonstrate the interactivity and flexibility of our approach, as well as its ability to generate quad meshes of arbitrary resolution with high-quality statistics while meeting the user's own subjective requirements.
284.
Dense stereo algorithms are able to estimate disparities at all pixels, including untextured regions. Typically, these disparities are evaluated at integer disparity steps, and a subsequent sub-pixel interpolation often fails to propagate smoothness constraints on a sub-pixel level. We propose to increase the sub-pixel accuracy in low-textured regions in four ways. First, we present an analysis that shows the benefit of evaluating the disparity space at fractional disparities. Second, we introduce a new disparity-smoothing algorithm that preserves depth discontinuities and enforces smoothness on a sub-pixel level. Third, we present a novel stereo constraint (the gravitational constraint) that assumes sorted disparity values in the vertical direction and guides global algorithms to reduce false matches, especially in low-textured regions. Finally, we show how image-sequence analysis improves stereo accuracy without explicitly performing tracking. Our goal in this work is an accurate 3D reconstruction; large-scale 3D reconstruction will benefit heavily from these sub-pixel refinements. Results based on semi-global matching, obtained with the above-mentioned algorithmic extensions, are shown for the Middlebury stereo ground-truth data sets. The presented improvements, called ImproveSubPix, turn out to be among the top-performing algorithms when the set is evaluated on a sub-pixel level, while being computationally efficient. Additional results are presented for urban scenes. The four improvements are independent of the underlying type of stereo algorithm.
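For context, the classic way to obtain fractional disparities from integer-sampled matching costs is a parabola fit through the cost at the best integer disparity and its two neighbours. The sketch below shows that standard interpolation only; it is not the paper's ImproveSubPix method.

```python
# Classic parabola-fit sub-pixel refinement of an integer disparity from its
# neighboring matching costs (textbook technique, not the paper's algorithm).
import numpy as np

def subpixel_disparity(cost, d):
    """Refine integer disparity d using costs at d-1, d, d+1.

    cost: 1-D array of matching costs over candidate disparities.
    Returns d plus a fractional offset in (-0.5, 0.5).
    """
    c_m, c_0, c_p = cost[d - 1], cost[d], cost[d + 1]
    denom = c_m - 2.0 * c_0 + c_p
    if denom <= 0:  # no proper cost minimum; keep the integer estimate
        return float(d)
    offset = 0.5 * (c_m - c_p) / denom
    return d + offset

costs = np.array([9.0, 4.0, 2.5, 3.5, 8.0])  # toy cost curve
d_int = int(np.argmin(costs))                 # integer disparity = 2
print(subpixel_disparity(costs, d_int))       # ~2.1
```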
285.
The recent popularity of digital cameras has posed a new problem: how to efficiently store and retrieve the very large number of digital photos captured and chaotically stored in multiple locations without any annotation. This paper proposes an infrastructure, called PhotoGeo, which aims at helping users with people annotation, event annotation, and the storage and retrieval of personal digital photos. To achieve this, PhotoGeo uses new algorithms that make it possible to annotate photos with the key metadata that facilitate their retrieval: the people shown in the photo (who); where it was captured (where); the date and time of capture (when); and the event that was captured. The paper concludes with a detailed evaluation of these algorithms.
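A minimal sketch of how the "when" and "where" metadata can be recovered from a photo's EXIF tags, here using Pillow; PhotoGeo's own annotation algorithms (the "who" and the event) are not reproduced, and the file name is hypothetical.

```python
# Pull capture time and GPS position from EXIF with Pillow (illustrative
# tag handling; not PhotoGeo's actual implementation).
from PIL import Image

GPS_IFD = 0x8825  # standard EXIF pointer to the GPS sub-directory

def dms_to_degrees(dms, ref):
    """Convert EXIF (degrees, minutes, seconds) rationals to signed decimal degrees."""
    deg = float(dms[0]) + float(dms[1]) / 60.0 + float(dms[2]) / 3600.0
    return -deg if ref in ("S", "W") else deg

def when_and_where(path):
    exif = Image.open(path).getexif()
    taken = exif.get(306)            # tag 306 = DateTime ("YYYY:MM:DD HH:MM:SS")
    gps = exif.get_ifd(GPS_IFD)
    where = None
    if gps:
        lat = dms_to_degrees(gps[2], gps[1])   # tags 2/1: latitude DMS / hemisphere
        lon = dms_to_degrees(gps[4], gps[3])   # tags 4/3: longitude DMS / hemisphere
        where = (lat, lon)
    return taken, where

# Usage (hypothetical file):
# print(when_and_where("holiday.jpg"))
```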
286.
Today, middleware is much more powerful, more reliable and faster than it used to be. Nevertheless, for the application developer, the complexity of using middleware platforms has increased accordingly. The volume and variety of application contexts that current middleware technologies have to support require that developers be able to anticipate the widest possible range of execution environments, desired and undesired effects of different programming strategies, handling procedures for runtime errors, and so on. This paper shows how a generic framework designed to evaluate the usability of notations (the Cognitive Dimensions of Notations framework, or CDN) has been instantiated and used to analyze the cognitive challenges involved in adapting middleware platforms. This human-centric perspective allowed us to achieve novel results compared with existing middleware evaluation research, which is typically centered on system performance metrics. The focus of our study is on the process of adapting middleware implementations rather than on the end product of this activity. Our main contributions are twofold. First, we describe a qualitative CDN-based method to analyze the cognitive effort made by programmers while adapting middleware implementations. Second, we show how two platforms designed for flexibility have been compared, suggesting that certain programming-language design features might be particularly helpful for developers.
287.
This work presents a study of RTP multiplexing schemes, which are compared with the normal use of RTP in terms of experienced quality. Bandwidth saving, latency and packet loss are studied for the different options, and tests with Voice over IP (VoIP) traffic are carried out in order to compare the quality obtained using different implementations of the router buffer. Voice quality is calculated using the ITU R-factor, a widely accepted quality estimator. The tests show the bandwidth savings of multiplexing and also the importance of packet size for certain buffers, as latency and packet loss may be affected. The improvement in the customer's experience is measured, showing that multiplexing can be attractive in some scenarios, such as an enterprise with different offices connected via the Internet. The system is also tested using different numbers of samples per packet, and the distribution of the flows into different tunnels is found to be an important factor in achieving an optimal perceived quality for each kind of buffer. Grouping all the flows into a single tunnel will not always be the best solution, as increasing the number of flows does not improve bandwidth efficiency indefinitely; if the buffer penalizes big packets, it is better to group the flows into a number of tunnels. The router's processing capacity also has to be taken into account, as the limit of packets per second it can manage must not be exceeded. The results show that multiplexing is a good way to improve the customer's experience of VoIP in scenarios where many RTP flows share the same path.
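A back-of-envelope model makes the header-overhead argument concrete. It assumes standard IPv4/UDP/RTP header sizes and a simplified multiplexer that shares one IP/UDP header per bundle at the cost of a small per-flow prefix; the 2-byte prefix, 20-byte payload and perfectly aligned packets are illustrative assumptions, not the paper's scheme.

```python
# Why multiplexing saves bandwidth: N native RTP flows each pay a full
# IPv4/UDP/RTP header per packet, while a (simplified) multiplexed tunnel
# pays one shared header set per bundle plus a small per-flow prefix.
IP, UDP, RTP = 20, 8, 12          # bytes: IPv4 + UDP + RTP headers
PAYLOAD = 20                      # bytes of voice per packet (illustrative)
MUX_PREFIX = 2                    # assumed per-flow demultiplexing prefix

def native_bytes(n_flows):
    return n_flows * (IP + UDP + RTP + PAYLOAD)

def muxed_bytes(n_flows):
    # one shared IP/UDP header; each flow keeps RTP + prefix + payload
    return IP + UDP + n_flows * (MUX_PREFIX + RTP + PAYLOAD)

for n in (1, 5, 20):
    saving = 1 - muxed_bytes(n) / native_bytes(n)
    print(f"{n:2d} flows: {saving:.0%} bandwidth saving")
# Savings grow with n but flatten toward an asymptote, matching the
# observation that adding flows to one tunnel does not improve
# efficiency indefinitely.
```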
288.
Fuzzy rule-based classification systems (FRBCSs) are known for their ability to deal with low-quality data and to obtain good results in such scenarios. However, their application to problems with missing data is uncommon, even though information in real-life data mining is frequently incomplete because of missing attribute values. Several schemes have been studied to overcome the drawbacks produced by missing values in data-mining tasks; one of the best known is based on preprocessing and is known as imputation. In this work we focus on FRBCSs, presenting and analyzing 14 different approaches to the treatment of missing attribute values. The analysis involves three different methods, in which we distinguish between Mamdani and TSK models. From the results obtained, the convenience of using imputation methods for FRBCSs with missing values is established. The analysis suggests that each type behaves differently, and that the use of suitable missing-value imputation methods can improve the accuracy obtained. Thus, the use of particular imputation methods conditioned on the type of FRBCS is required.
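As a minimal sketch of the imputation-as-preprocessing idea, the snippet below fills missing attribute values before any classifier is trained; scikit-learn's SimpleImputer stands in for the 14 imputation schemes analyzed, and the FRBCS itself is not reproduced.

```python
# Fill missing attribute values (NaN) column-wise before training.
import numpy as np
from sklearn.impute import SimpleImputer

X = np.array([[5.1, np.nan, 1.4],
              [4.9, 3.0,    np.nan],
              [6.2, 2.9,    4.3]])

imputer = SimpleImputer(strategy="mean")   # e.g., mean imputation per attribute
X_complete = imputer.fit_transform(X)      # NaNs replaced by column means
# X_complete can now be fed to a Mamdani- or TSK-style FRBCS trainer.
```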
289.
Data mining is most commonly used in attempts to induce association rules from databases, which can help decision-makers easily analyze the data and make good decisions regarding the domains concerned. Various studies have proposed methods for mining association rules from databases with crisp values. However, the data in many real-world applications have a certain degree of imprecision. In this paper we address this problem and propose a new data-mining algorithm for extracting interesting knowledge from databases with imprecise data. The proposed algorithm integrates imprecise-data concepts and the fuzzy apriori mining algorithm to find interesting fuzzy association rules in given databases. Experiments on diagnosing dyslexia in early childhood were carried out to verify the performance of the proposed algorithm.
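The core step of a fuzzy apriori-style algorithm can be sketched as follows: the support of a candidate fuzzy itemset is aggregated from per-record membership degrees with a t-norm. The membership values and threshold below are illustrative, not the paper's data.

```python
# Fuzzy support of an itemset: average over records of the t-norm
# (here: minimum) of the items' membership degrees.
import numpy as np

def fuzzy_support(memberships):
    """memberships: (n_records, n_items) array of degrees in [0, 1]."""
    return np.minimum.reduce(memberships, axis=1).mean()

# Degrees to which 3 records satisfy "reading speed is LOW" and
# "error rate is HIGH" (hypothetical fuzzified attributes):
degrees = np.array([[0.9, 0.8],
                    [0.2, 0.7],
                    [0.6, 0.5]])

support = fuzzy_support(degrees)   # per-record mins: 0.8, 0.2, 0.5 -> mean 0.5
if support >= 0.3:                 # illustrative minimum-support threshold
    print(f"candidate rule retained, support = {support:.2f}")
```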
290.
We generalise belief functions to many-valued events, which are represented by elements of the Lindenbaum algebra of infinite-valued Łukasiewicz propositional logic. Our approach is based on the mass assignments used in the Dempster–Shafer theory of evidence. A generalised belief function is totally monotone and has a Choquet integral representation with respect to a unique belief measure on Boolean events.
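For reference, the classical objects being generalized can be written out; this is standard Dempster–Shafer and Łukasiewicz material, not the paper's many-valued construction itself.

```latex
% Classical Dempster--Shafer: a mass assignment m on subsets of a finite
% frame \Omega, with m(\emptyset) = 0 and \sum_{B \subseteq \Omega} m(B) = 1,
% induces the (totally monotone) belief function
\[
  \mathrm{Bel}(A) \;=\; \sum_{B \subseteq A} m(B).
\]
% Infinite-valued \L ukasiewicz logic, whose Lindenbaum algebra hosts the
% many-valued events, evaluates formulas in [0,1] with connectives
\[
  x \oplus y = \min(1,\, x + y), \qquad
  x \odot y = \max(0,\, x + y - 1), \qquad
  \neg x = 1 - x.
\]
```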