Article Search
  Paid full text   3770 articles
  Free   227 articles
  Domestic free   23 articles
Electrical Engineering   72 articles
General   9 articles
Chemical Industry   1070 articles
Metalworking   87 articles
Machinery and Instrumentation   109 articles
Building Science   91 articles
Mining Engineering   8 articles
Energy and Power   270 articles
Light Industry   329 articles
Water Conservancy Engineering   41 articles
Petroleum and Natural Gas   53 articles
Radio Electronics   382 articles
General Industrial Technology   662 articles
Metallurgical Industry   196 articles
Atomic Energy Technology   39 articles
Automation Technology   602 articles
  2024   8 articles
  2023   60 articles
  2022   169 articles
  2021   198 articles
  2020   162 articles
  2019   190 articles
  2018   241 articles
  2017   185 articles
  2016   218 articles
  2015   139 articles
  2014   185 articles
  2013   381 articles
  2012   210 articles
  2011   236 articles
  2010   179 articles
  2009   164 articles
  2008   131 articles
  2007   82 articles
  2006   100 articles
  2005   61 articles
  2004   57 articles
  2003   48 articles
  2002   56 articles
  2001   30 articles
  2000   32 articles
  1999   22 articles
  1998   65 articles
  1997   43 articles
  1996   44 articles
  1995   33 articles
  1994   28 articles
  1993   28 articles
  1992   28 articles
  1991   19 articles
  1990   15 articles
  1989   19 articles
  1988   15 articles
  1987   13 articles
  1986   7 articles
  1985   13 articles
  1984   10 articles
  1983   10 articles
  1982   9 articles
  1981   10 articles
  1980   10 articles
  1979   8 articles
  1978   9 articles
  1977   8 articles
  1976   13 articles
  1971   4 articles
Sort order: 4020 results found in total (search time: 15 ms)
71.
Many recent software engineering papers have examined duplicate issue reports. Thus far, duplicate reports have been considered a hindrance to developers and a drain on their resources. As a result, prior research in this area has focused on proposing automated approaches to accurately identify duplicate reports. However, no study has attempted to quantify the actual effort that is spent on identifying duplicate issue reports. In this paper, we empirically examine the effort needed to manually identify duplicate reports in four open source projects, i.e., Firefox, SeaMonkey, Bugzilla, and Eclipse-Platform. Our results show that: (i) more than 50% of the duplicate reports are identified within half a day, and most are identified without any discussion and with the involvement of very few people; (ii) a classification model built using a set of factors extracted from duplicate issue reports classifies duplicates according to the effort needed to identify them, with a precision of 0.60 to 0.77, a recall of 0.23 to 0.96, and an ROC area of 0.68 to 0.80; and (iii) factors that capture developer awareness of the duplicate issue's peers (i.e., other duplicates of that issue) and the textual similarity of a new report to prior reports are the most influential factors in our models. Our findings highlight the need for effort-aware evaluation of approaches that identify duplicate issue reports, since identifying a considerable proportion of duplicate reports (over 50%) appears to be a relatively trivial task for developers. To better assist developers, research on identifying duplicate issue reports should put greater emphasis on the effort-consuming duplicate issues.
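For illustration only, the sketch below shows the general shape of such an effort-oriented classification model: a handful of per-report factors feed a standard classifier, which is then scored with precision, recall, and ROC area. The synthetic data, the factor names (textual similarity, peer duplicates, comment count), and the choice of scikit-learn's RandomForestClassifier are assumptions, not the authors' actual factors or model.

```python
# Minimal sketch of an effort-aware duplicate classifier (hypothetical features).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical factors per duplicate report:
#   column 0: textual similarity of the new report to its closest prior report
#   column 1: number of previously known duplicates ("peers") of the same issue
#   column 2: discussion volume before the duplicate was flagged
X = rng.random((500, 3))
# Synthetic label: 1 if the duplicate was identified quickly (e.g., within half a day).
y = (X[:, 0] + 0.5 * X[:, 1] + 0.2 * rng.random(500) > 0.9).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

pred = model.predict(X_te)
print("precision:", precision_score(y_te, pred))
print("recall   :", recall_score(y_te, pred))
print("ROC AUC  :", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```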
72.
73.
Reuse of software components, either closed or open source, is considered one of the most important best practices in software engineering, since it reduces development cost and improves software quality. However, since reused components are (by definition) generic, they need to be customized and integrated into a specific system before they can be useful. Because this integration is system-specific, the integration effort is non-negligible and increases maintenance costs, especially if more than one component needs to be integrated. This paper performs an empirical study of multi-component integration in the context of three successful open source distributions (Debian, Ubuntu and FreeBSD). Such distributions integrate thousands of open source components with an operating system kernel to deliver a coherent software product to millions of users worldwide. We empirically identified seven major integration activities performed by the maintainers of these distributions, documented how the maintainers carry out these activities, and then evaluated and refined the identified activities with input from six maintainers of the three studied distributions. The documented activities provide a common vocabulary for component integration in open source distributions and outline a roadmap for future research on software integration.
74.
Applying model predictive control (MPC) to cases such as complicated process dynamics and/or rapid sampling can lead to poorly conditioned numerical solutions and a heavy computational load. Furthermore, there is always a mismatch between a model and the real process it describes. Therefore, to overcome these difficulties, this paper designs a robust MPC based on the Laguerre orthonormal basis, which speeds up convergence while lowering the computational cost by adding an extra parameter "a" to the MPC. In addition, a Kalman state estimator is included in the prediction model, so the MPC design depends on the Kalman estimator parameters as well as on the estimation error, which helps the controller react faster to unmeasured disturbances. Tuning the parameters of the Kalman estimator and of the MPC is another contribution of this paper; it guarantees the robustness of the system against model mismatch and measurement noise. The MPC parameters are tuned by minimizing the sensitivity function at low frequency, since the lower the magnitude of the sensitivity function at low frequency, the better the command tracking and disturbance rejection. The integral absolute error (IAE) and the peak of the sensitivity function are used as constraints in the optimization procedure to ensure the stability and robustness of the controlled process. The performance of the controller is examined by controlling the level of a tank process and a paper machine process.
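One plausible way to write the tuning procedure sketched above is as a constrained optimization; the notation here (low-frequency band [0, ω_b], bounds IAE_max and M_s, tuning vector θ collecting the Laguerre pole "a" and the Kalman parameters) is an illustrative assumption rather than the paper's exact statement.

```latex
% Illustrative formulation only; symbols are assumed, not taken from the paper.
% \theta collects the tuning parameters (Laguerre pole a, Kalman covariances, horizons).
\begin{aligned}
\min_{\theta}\quad & \int_{0}^{\omega_b} \lvert S(j\omega;\theta)\rvert \, d\omega
    && \text{(low-frequency sensitivity: tracking and disturbance rejection)}\\
\text{s.t.}\quad & \mathrm{IAE}(\theta) = \int_{0}^{\infty} \lvert e(t;\theta)\rvert \, dt \;\le\; \mathrm{IAE}_{\max}
    && \text{(performance constraint)}\\
& \max_{\omega}\, \lvert S(j\omega;\theta)\rvert \;\le\; M_s
    && \text{(peak sensitivity: stability and robustness margin)}
\end{aligned}
```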
75.
High-Efficiency Video Coding (HEVC) is the latest standardization effort of the International Organization for Standardization and the International Telecommunication Union. This new standard adopts an exhaustive decision algorithm based on a recursive quad-tree structure of coding units, prediction units, and transform units. Consequently, substantial coding efficiency can be achieved, but at the cost of significant computational complexity. To speed up the encoding process, this paper adopts efficient algorithms based on fast mode decision and optimized motion estimation. The aim was to reduce the complexity of the motion estimation algorithm by modifying its search pattern; it was then combined with a new fast mode decision algorithm to further improve the coding efficiency. Experimental results show a significant speedup in encoding time and a bit-rate saving, with tolerable quality degradation. In fact, the proposed algorithm reduces encoding time by up to 75%. This improvement is accompanied by an average PSNR loss of 0.12 dB and a 0.5% decrease in bit-rate.
76.
Water Resources Management - This paper shows the utility of a new interval cooperative game theory as an effective water diplomacy tool to resolve competing and conflicting needs of water users...  相似文献   
77.
78.
79.
In this paper, we present a faster-than-real-time implementation of a class of dense stereo vision algorithms on a low-power massively parallel SIMD architecture, the CSX700. With two cores, each with 96 Processing Elements, this SIMD architecture provides a peak computation power of 96 GFLOPS while consuming only 9 W, making it an excellent candidate for embedded computing applications. Exploiting the full features of this architecture, we have developed schemes for an efficient parallel implementation with minimal overhead. For the sum of squared differences (SSD) algorithm and VGA (640 × 480) images with disparity ranges of 16 and 32, we achieve 179 and 94 frames per second (fps), respectively. For HDTV (1,280 × 720) images with disparity ranges of 16 and 32, we achieve 67 and 35 fps, respectively. We have also implemented more accurate, and hence more computationally expensive, variants of the SSD, and in most cases, particularly for VGA images, we have achieved faster-than-real-time performance. Our results clearly demonstrate that, with careful parallelization schemes, the CSX architecture can provide excellent performance and flexibility for various embedded vision applications.
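As a point of reference for the SSD algorithm benchmarked above, the following is a minimal, unoptimized NumPy sketch of SSD block matching over a disparity range. The window size, the synthetic image pair, and the border handling are illustrative choices; the paper's implementation is a hand-tuned parallel kernel for the CSX700, not this Python code.

```python
# Minimal NumPy sketch of SSD block matching (illustrative, not the SIMD kernel).
import numpy as np

def ssd_disparity(left, right, max_disp=16, half_win=3):
    """For each left-image pixel, return the shift d (0..max_disp-1) that minimizes
    the sum of squared differences over a (2*half_win+1)^2 window."""
    h, w = left.shape
    costs = np.full((max_disp, h, w), np.inf, dtype=np.float64)
    kernel = 2 * half_win + 1
    for d in range(max_disp):
        # Squared difference between left column x and right column x - d.
        diff = np.zeros((h, w), dtype=np.float64)
        diff[:, d:] = (left[:, d:].astype(np.float64) - right[:, :w - d].astype(np.float64)) ** 2
        # Sum over each window via an integral image (cumulative sums).
        csum = diff.cumsum(axis=0).cumsum(axis=1)
        padded = np.pad(csum, ((1, 0), (1, 0)))
        win = (padded[kernel:, kernel:] - padded[:-kernel, kernel:]
               - padded[kernel:, :-kernel] + padded[:-kernel, :-kernel])
        costs[d, half_win:h - half_win, half_win:w - half_win] = win
    return costs.argmin(axis=0).astype(np.uint8)

left = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
right = np.roll(left, -5, axis=1)  # synthetic right view: a uniform disparity of 5 pixels
disp = ssd_disparity(left, right, max_disp=16)
print(disp.shape, disp[240, 320])  # expect ~5 away from the image borders
```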
80.
We propose novel techniques to find the optimal location, size, and power factor of distributed generation (DG) to achieve the maximum loss reduction for distribution networks. The optimal DG location and size are determined simultaneously using the energy loss curves technique for a pre-selected power factor that gives the best DG operation. Based on the network's total load demand, four DG sizes are selected; they are used to form energy loss curves for each bus and then to determine the optimal DG options. The study shows that, when energy loss minimization is defined as the objective function, the time-varying load demand significantly affects the sizing of DG resources in distribution networks, whereas using power loss as the objective function leads to inconsistent interpretation of loss reduction and other calculations. The devised technique was tested on two test distribution systems of varying size and complexity and validated by comparison with the exhaustive iterative method (EIM) and recently published results. The results show that the proposed technique can provide an optimal solution with less computation.
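The selection step described above can be pictured roughly as follows: build an energy-loss curve per candidate bus (one loss value per candidate DG size, accumulated over a time-varying load profile), then pick the bus/size pair with the smallest energy loss. Everything in the sketch, including the placeholder energy_loss_kwh model, the 33-bus feeder, and the load profile, is an illustrative assumption standing in for a real power-flow calculation.

```python
# Schematic sketch of DG siting/sizing via energy-loss curves (placeholder loss model).
import numpy as np

BUSES = list(range(1, 34))            # candidate buses of a hypothetical 33-bus feeder
TOTAL_DEMAND_MW = 3.7                 # assumed network peak demand
DG_SIZES_MW = TOTAL_DEMAND_MW * np.array([0.25, 0.50, 0.75, 1.00])  # four candidate sizes
LOAD_PROFILE = 0.5 + 0.5 * np.sin(np.linspace(0, 2 * np.pi, 24))    # hourly load factors

def energy_loss_kwh(bus, dg_mw, load_factors):
    """Placeholder for a power-flow-based energy-loss calculation over the load profile.
    A real implementation would run a load flow per hour with the DG placed at `bus`."""
    electrical_distance = bus / len(BUSES)   # toy proxy for position along the feeder
    hourly_loss = 120.0 * load_factors**2 * (
        1.0 - 0.6 * electrical_distance * dg_mw / TOTAL_DEMAND_MW) ** 2
    return float(hourly_loss.sum())

# Build an energy-loss curve per bus (one loss value per candidate DG size) ...
curves = {bus: [energy_loss_kwh(bus, s, LOAD_PROFILE) for s in DG_SIZES_MW] for bus in BUSES}
# ... and pick the (bus, size) pair with the lowest daily energy loss.
best_bus, best_idx = min(((b, i) for b in BUSES for i in range(len(DG_SIZES_MW))),
                         key=lambda bi: curves[bi[0]][bi[1]])
print(f"best bus: {best_bus}, best DG size: {DG_SIZES_MW[best_idx]:.2f} MW, "
      f"daily loss: {curves[best_bus][best_idx]:.1f} kWh")
```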