71.
Reuse of software components, either closed or open source, is considered one of the most important best practices in software engineering, since it reduces development cost and improves software quality. However, since reused components are (by definition) generic, they need to be customized and integrated into a specific system before they can be useful. Because this integration is system-specific, the integration effort is non-negligible and increases maintenance costs, especially if more than one component needs to be integrated. This paper presents an empirical study of multi-component integration in the context of three successful open source distributions (Debian, Ubuntu and FreeBSD). Such distributions integrate thousands of open source components with an operating system kernel to deliver a coherent software product to millions of users worldwide. We empirically identified seven major integration activities performed by the maintainers of these distributions, documented how maintainers carry out these activities, and then evaluated and refined the identified activities with input from six maintainers of the three studied distributions. The documented activities provide a common vocabulary for component integration in open source distributions and outline a roadmap for future research on software integration.
72.
Applying model predictive control (MPC) to complicated process dynamics and/or rapid sampling can lead to poorly conditioned numerical solutions and a heavy computational load. Furthermore, there is always a mismatch between a model and the real process it describes. To overcome these difficulties, we design a robust MPC using the Laguerre orthonormal basis, which speeds up convergence while lowering the computational cost by adding a single extra parameter "a" to the MPC formulation. In addition, a Kalman state estimator is included in the prediction model, so the MPC design accounts for the Kalman estimator parameters as well as the estimation error, which helps the controller react faster to unmeasured disturbances. Tuning the parameters of the Kalman estimator together with those of the MPC is another contribution of this paper; it guarantees the robustness of the system against model mismatch and measurement noise. The MPC parameters are tuned by minimizing the sensitivity function at low frequency, since the lower its magnitude there, the better the command tracking and disturbance rejection. The integral absolute error (IAE) and the peak of the sensitivity function are used as constraints in the optimization procedure to ensure the stability and robustness of the controlled process. The performance of the controller is examined by controlling the level of a tank and a paper machine process.
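The Laguerre parameterization mentioned above can be made concrete. Below is a minimal sketch (not the paper's code) of the discrete Laguerre network commonly used in Laguerre-based MPC: the pole `a` trades off convergence speed against the number of basis functions `N` needed, and the generated functions are orthonormal over time.

```python
import math

def laguerre_basis(a, N, steps):
    """Generate discrete Laguerre functions l_1..l_N over `steps` samples.
    `a` (0 <= a < 1) is the Laguerre pole; larger a gives slower-decaying
    basis functions, so fewer terms N are needed for slow dynamics."""
    beta = 1.0 - a * a
    # Initial state L(0) = sqrt(beta) * [1, -a, a^2, -a^3, ...]
    L = [math.sqrt(beta) * (-a) ** i for i in range(N)]
    # State-transition matrix of the Laguerre network: a on the diagonal,
    # (-a)^(i-j-1) * beta below it.
    A = [[0.0] * N for _ in range(N)]
    for i in range(N):
        A[i][i] = a
        for j in range(i):
            A[i][j] = (-a) ** (i - j - 1) * beta
    history = []
    for _ in range(steps):
        history.append(list(L))          # history[k][i] = l_{i+1}(k)
        L = [sum(A[i][j] * L[j] for j in range(N)) for i in range(N)]
    return history
```

The control-move trajectory is then approximated as a short weighted sum of these functions, which is what shrinks the optimization problem and improves its conditioning.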
73.
High Efficiency Video Coding (HEVC) is the latest standardization effort of the International Organization for Standardization and the International Telecommunication Union. The standard adopts an exhaustive decision algorithm based on recursive quad-tree structured coding units, prediction units, and transform units. Consequently, a substantial gain in coding efficiency is achieved, but at the cost of significant computational complexity. To speed up the encoding process, efficient algorithms based on fast mode decision and optimized motion estimation are adopted in this paper. The aim is to reduce the complexity of the motion estimation algorithm by modifying its search pattern, and then to combine it with a new fast mode decision algorithm to further improve coding efficiency. Experimental results show a significant speedup in encoding time and a bit-rate saving with tolerable quality degradation. The proposed algorithm reduces encoding time by up to 75 %, accompanied by an average PSNR loss of 0.12 dB and a 0.5 % decrease in bit-rate.
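The abstract does not specify which search pattern is used; as an illustration of pattern-based fast motion estimation, the sketch below implements the classic diamond search (a large diamond refined by a small diamond), which is representative of the kind of modified search pattern described above rather than the paper's own algorithm.

```python
def sad(cur, ref, bx, by, dx, dy, B):
    """Sum of absolute differences between the BxB block of `cur` at (bx, by)
    and the block of `ref` displaced by motion vector (dx, dy)."""
    h, w = len(ref), len(ref[0])
    total = 0.0
    for y in range(B):
        for x in range(B):
            ry, rx = by + y + dy, bx + x + dx
            if not (0 <= ry < h and 0 <= rx < w):
                return float("inf")          # candidate falls outside the frame
            total += abs(cur[by + y][bx + x] - ref[ry][rx])
    return total

# Large and small diamond search patterns (offsets are (dx, dy)).
LDSP = [(0, 0), (0, -2), (0, 2), (-2, 0), (2, 0), (-1, -1), (1, -1), (-1, 1), (1, 1)]
SDSP = [(0, 0), (0, -1), (0, 1), (-1, 0), (1, 0)]

def diamond_search(cur, ref, bx, by, B=8):
    """Diamond-search motion estimation: repeat the large diamond until the
    best match stays at the centre, then refine once with the small diamond."""
    cx = cy = 0
    while True:
        best = min(LDSP, key=lambda d: sad(cur, ref, bx, by, cx + d[0], cy + d[1], B))
        if best == (0, 0):
            break
        cx, cy = cx + best[0], cy + best[1]
    best = min(SDSP, key=lambda d: sad(cur, ref, bx, by, cx + d[0], cy + d[1], B))
    return cx + best[0], cy + best[1]
```

Compared with a full search over a ±7 window (225 SAD evaluations per block), each diamond iteration costs only 9 evaluations, which is where the encoding-time saving comes from.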
76.
We propose novel techniques to find the optimal location, size, and power factor of distributed generation (DG) that achieve the maximum loss reduction for distribution networks. The optimal DG location and size are determined simultaneously using the energy loss curves technique for a pre-selected power factor that gives the best DG operation. Based on the network's total load demand, four DG sizes are selected; they are used to form energy loss curves for each bus and then to determine the optimal DG options. The study shows that, with energy loss minimization as the objective function, the time-varying load demand significantly affects the sizing of DG resources in distribution networks, whereas using power loss as the objective function leads to inconsistent interpretation of loss reduction and other calculations. The devised technique was tested on two distribution test systems of varying size and complexity and validated by comparison with the exhaustive iterative method (EIM) and recently published results. Results show that the proposed technique provides an optimal solution with less computation.
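As a toy illustration of why energy loss (summed over a time-varying load) and power loss (at a single operating point) can select different DG sizes, consider this simplified single-feeder sketch. The branch-loss model, hourly load profile and candidate sizes are assumptions for illustration, not the paper's data or method.

```python
def energy_loss(load_profile, p_dg, r=0.05, v=1.0):
    """Daily energy loss (p.u.-h) on one feeder section serving a time-varying
    load, with a DG unit of size p_dg at the load bus.
    Simplified branch-loss model: loss(t) = r * I(t)^2, I(t) = (P_load(t) - p_dg) / v."""
    return sum(r * ((p - p_dg) / v) ** 2 for p in load_profile)

# Hypothetical 24-hour load profile in per-unit of nominal demand.
load = [0.4, 0.35, 0.3, 0.3, 0.35, 0.5, 0.7, 0.9, 1.0, 0.95, 0.9, 0.85,
        0.8, 0.8, 0.85, 0.9, 1.0, 1.1, 1.2, 1.1, 0.9, 0.7, 0.6, 0.5]
sizes = [0.25, 0.5, 0.75, 1.0]            # candidate DG sizes (p.u.)

# Energy-loss objective: best size tracks the *average* demand.
best_energy = min(sizes, key=lambda s: energy_loss(load, s))
# Power-loss-at-peak objective: best size tracks the *peak* demand instead.
best_peak = min(sizes, key=lambda s: energy_loss([max(load)], s))
```

With this profile the energy objective picks 0.75 p.u. (near the mean demand) while the peak-power objective picks 1.0 p.u., mirroring the inconsistency the abstract attributes to power-loss-based sizing.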
77.
Bug fixing accounts for a large amount of software maintenance resources. Generally, bugs are reported, fixed, verified and closed. However, in some cases bugs have to be re-opened. Re-opened bugs increase maintenance costs, degrade the overall user-perceived quality of the software and lead to unnecessary rework by busy practitioners. In this paper, we study and predict re-opened bugs through a case study on three large open source projects: Eclipse, Apache and OpenOffice. We structure our study along four dimensions: (1) the work habits dimension (e.g., the weekday on which the bug was initially closed), (2) the bug report dimension (e.g., the component in which the bug was found), (3) the bug fix dimension (e.g., the amount of time it took to perform the initial fix) and (4) the team dimension (e.g., the experience of the bug fixer). We build decision trees using the aforementioned factors to predict re-opened bugs, and perform top-node analysis to determine which factors are the most important indicators of whether or not a bug will be re-opened. Our study shows that the comment text and the last status of the bug when it is initially closed are the most important such factors. Using a combination of these dimensions, we can build explainable prediction models that achieve a precision between 52.1 and 78.6 % and a recall between 70.5 and 94.1 % when predicting whether a bug will be re-opened. We find that the factors that best indicate which bugs might be re-opened vary by project: the comment text is the most important factor for the Eclipse and OpenOffice projects, while the last status is the most important one for Apache. These factors should be closely examined in order to reduce the maintenance cost due to re-opened bugs.
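The precision and recall figures quoted above are the standard binary-classification metrics. For reference, a minimal computation (not tied to the paper's data or models) looks like this:

```python
def precision_recall(predicted, actual):
    """Precision and recall for a binary re-opened-bug predictor.
    `predicted` and `actual` are parallel lists of booleans
    (True = bug re-opened)."""
    tp = sum(p and a for p, a in zip(predicted, actual))        # true positives
    fp = sum(p and not a for p, a in zip(predicted, actual))    # false positives
    fn = sum(not p and a for p, a in zip(predicted, actual))    # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0  # of flagged bugs, how many re-opened
    recall = tp / (tp + fn) if tp + fn else 0.0     # of re-opened bugs, how many flagged
    return precision, recall
```

The trade-off between the two explains the reported ranges: a model tuned for high recall (catching most bugs that will be re-opened) usually pays for it with lower precision.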
78.
An observer-based adaptive fuzzy control scheme is presented for a class of nonlinear systems with unknown time delays. A state observer is first designed, and the controller is then designed via the adaptive fuzzy control method based on the observed states. Both the observer and the controller are independent of the time delays. Using an appropriate Lyapunov-Krasovskii functional, the uncertainty of the unknown time delay is compensated, and a Mamdani-type fuzzy logic system is utilized to approximate the unknown nonlinear functions. Based on Lyapunov stability theory, the constructed observer-based controller and the closed-loop system are proved to be asymptotically stable. The designed control law has a simple form with only one adaptive parameter vector, which is updated on-line. Simulation results demonstrate the effectiveness of the proposed approach.
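The stability argument sketched above hinges on a Lyapunov-Krasovskii functional. A generic form of such a functional, for observer error e, an unknown constant delay τ and adaptive-parameter error θ̃, is shown below; this is an illustrative textbook form, not necessarily the paper's exact choice:

```latex
% Illustrative Lyapunov--Krasovskii functional (assumed generic form:
% constant unknown delay \tau, observer error e, adaptation gain \gamma)
V(t) = e^{\top}(t)\,P\,e(t)
     + \int_{t-\tau}^{t} x^{\top}(s)\,Q\,x(s)\,\mathrm{d}s
     + \frac{1}{2\gamma}\,\tilde{\theta}^{\top}(t)\,\tilde{\theta}(t),
\qquad P = P^{\top} \succ 0,\quad Q = Q^{\top} \succ 0 .
```

The integral term is what absorbs the delayed-state uncertainty (its derivative produces a term at time t minus one at time t−τ), and the last term accounts for the single adaptive parameter vector mentioned in the abstract.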
79.
The position control of an electro-hydraulic actuator system (EHAS) is investigated in this paper. The EHAS model is developed taking into consideration the nonlinearities of the system: friction and internal leakage. A variable load that simulates a realistic load in a robotic excavator is taken as the trajectory reference. A control strategy employing a fuzzy logic controller (FLC) whose parameters are optimized using particle swarm optimization (PSO) is proposed. The scaling factors of the fuzzy inference system are tuned to the optimal values that yield the best system performance. Simulation results show that the FLC tracks the trajectory reference accurately for a range of orifice openings. Beyond that range, the orifice opening may introduce chattering, which the FLC alone is not sufficient to overcome. The PSO-optimized FLC reduces the chattering significantly, which justifies the implementation of the proposed method in position control of the EHAS.
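PSO itself is a simple population-based optimizer. The sketch below shows a minimal version of the loop used to tune controller parameters such as FLC scaling factors; the inertia/acceleration constants, bounds and the cost function are illustrative assumptions, not the paper's setup (in the paper the cost would be a closed-loop performance index such as the tracking error of the EHAS simulation).

```python
import random

def pso(cost, dim, n_particles=20, iters=100, bounds=(0.0, 10.0),
        w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimal particle swarm optimizer. Each particle is a candidate
    parameter vector (e.g. FLC scaling factors); `cost` maps a vector to
    the performance index to minimize."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]              # personal best positions
    pcost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pcost[i])
    gbest, gcost = pbest[g][:], pcost[g]     # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                # Inertia + pull toward personal best + pull toward global best.
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pcost[i]:
                pbest[i], pcost[i] = pos[i][:], c
                if c < gcost:
                    gbest, gcost = pos[i][:], c
    return gbest, gcost
```

In the paper's setting, each `cost` evaluation would run one closed-loop simulation of the EHAS under the candidate scaling factors, so the swarm size and iteration count directly control the tuning time.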
80.
In smart environments, pervasive computing helps improve daily-life activities for dependent people by providing personalized services. Nevertheless, such environments do not guarantee a satisfactory level of user privacy protection or of trust between communicating entities. In this study, we propose a trust evaluation model based on the user's past and present behavior. The model is combined with a lightweight authentication key agreement protocol (Elliptic Curve-based Simple Authentication Key Agreement). The aim is to enable communicating entities to establish a level of trust and then succeed in mutual authentication using a scheme suitable for low-resource devices in smart environments. An innovation in our trust model is that it uses an accurate approach to calculate trust in different situations and includes a human-based feedback feature, namely user ratings. Finally, we implemented and tested our scheme on Android mobile phones in a smart environment dedicated to handicapped people.
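As a sketch of how such a model might combine its inputs, the snippet below blends a past-behavior score, a present-behavior score and the human feedback (user rating) into one trust level that gates the authentication step. The weights, normalization and threshold are illustrative assumptions, not the paper's formulas.

```python
def trust_score(past_behavior, present_behavior, user_rating,
                alpha=0.5, beta=0.3, gamma=0.2):
    """Illustrative weighted trust evaluation. All three inputs are assumed
    normalized to [0, 1]; the weights (assumptions, not the paper's values)
    favor accumulated history over the current session and the rating."""
    assert abs(alpha + beta + gamma - 1.0) < 1e-9   # weights form a convex combination
    return alpha * past_behavior + beta * present_behavior + gamma * user_rating

def may_authenticate(score, threshold=0.6):
    """Gate the mutual-authentication (key agreement) step on the trust level;
    the threshold is a deployment-specific assumption."""
    return score >= threshold
```

A design choice worth noting: keeping the trust computation this cheap (a few multiply-adds) is what makes it plausible to run alongside an elliptic-curve key agreement on low-resource devices.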