61.

In this paper, we present an in-depth study of the computational aspects of high-order discrete orthogonal Meixner polynomials (MPs) and Meixner moments (MMs). The study highlights two major problems in the computation of MPs. The first is the overflow and underflow of MP values ("NaN" and "infinity"). To overcome it, we propose two new recursive algorithms for computing MPs, one with respect to the polynomial order n and one with respect to the variable x. These algorithms avoid the functions that cause the numerical overflow and underflow. The second problem is the propagation of rounding errors, which leads to the loss of the orthogonality property of high-order MPs. To fix it, we implement MPs using the following orthogonalization methods: the modified Gram-Schmidt process (MGS), the Householder method, and the Givens rotation method. The proposed algorithms for the stable computation of MPs are applied efficiently to the reconstruction and localization of regions of interest (ROIs) in large-size 1D signals and 2D/3D images. We also propose a new fast method for the reconstruction of large-size 1D signals: the 1D signal is converted into a 2D matrix, the reconstruction is performed in the 2D domain, and a 2D-to-1D conversion recovers the reconstructed 1D signal. The results of the simulations and comparisons clearly demonstrate the efficiency of the proposed algorithms for the stable analysis of large-size signals and 2D/3D images.
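A minimal sketch of the fast 1D-reconstruction idea described above, assuming hypothetical transform_2d/inverse_2d callables that stand in for the 2D Meixner moment analysis and synthesis (the paper's stable recursions are not reproduced here):

```python
import numpy as np

def reconstruct_1d_via_2d(signal, transform_2d, inverse_2d):
    # Reshape the long 1D signal into a near-square 2D matrix, run the
    # (placeholder) 2D moment analysis/synthesis there, then flatten back.
    n = len(signal)
    rows = int(np.floor(np.sqrt(n)))
    cols = int(np.ceil(n / rows))
    padded = np.zeros(rows * cols)
    padded[:n] = signal                  # zero-pad to fill the matrix
    matrix = padded.reshape(rows, cols)  # 1D -> 2D conversion
    moments = transform_2d(matrix)       # e.g., 2D Meixner moments
    recon = inverse_2d(moments)          # reconstruction in the 2D domain
    return recon.reshape(-1)[:n]         # 2D -> 1D conversion

# Smoke test with identity transforms standing in for the moment steps:
x = np.sin(np.linspace(0, 8 * np.pi, 10_000))
assert np.allclose(reconstruct_1d_via_2d(x, lambda m: m, lambda m: m), x)
```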

62.
Prudent management of Iraqi water resources under climate change requires plans based on actual figures for the storage capacity of existing reservoirs. In the absence of sediment flushing measures, the storage capacity of Dokan Reservoir (in operation since 1959) has been reduced by the sediment delivered over its operational life, by an amount that had not been determined, and the dam's operational storage capacity curves had consequently never been updated. In this research, new operational curves were established for the reservoir based on a bathymetric survey undertaken in 2014. The reduction in reservoir capacity between 1959 and 2014 was calculated as the difference between the designed storage capacity and the storage capacity derived from the 2014 survey. Moreover, the rate of sediment transport into the reservoir was calculated from the overall quantity of accumulated sediment and the discharge of the Lesser Zab River into the reservoir. The results indicate that the reservoir capacity has been reduced by 25% due to sedimentation, with an estimated sediment volume of 367 million cubic metres at a water level of 480 m a.s.l. The annual sedimentation rate was about 6.6 million cubic metres, and the sediment yield was estimated at 701.2 t·km⁻²·year⁻¹.
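As a quick consistency check on the figures above (a back-of-envelope calculation based only on the numbers quoted, not taken from the paper):

```latex
\frac{367 \times 10^{6}\,\mathrm{m}^{3}}{2014 - 1959\;\text{years}}
  = \frac{367 \times 10^{6}\,\mathrm{m}^{3}}{55\;\text{years}}
  \approx 6.7 \times 10^{6}\,\mathrm{m}^{3}/\text{year}
```

which agrees with the reported annual sedimentation rate of about 6.6 million cubic metres.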
63.
Wound care has been a challenging subject for medical teams and researchers. Bacterial infection is one of the most serious complications of injured skin and often impairs the healing process; antibacterial wound dressings can be used to facilitate healing. The purpose of this study is to fabricate a chitosan (Chito)/polyethylene glycol (PEG) antibacterial wound dressing doped with minocycline and to evaluate the influence of the composition ratio on the properties of the blended films. To improve the mechanical properties of the films, we examined various amounts of glycerol (Gly) as a plasticizer. We investigated the morphology, mechanical properties, water uptake, degradation, water vapor transmission, and wettability of films prepared with various Chito/PEG/Gly ratios. Mechanical testing revealed that the film with an 80:20 Chito/PEG ratio and 40 PHR Gly exhibits the highest ultimate tensile strength and elongation at break (9.74 MPa and 45.73%, respectively). Furthermore, the results demonstrated that increasing the PEG and Gly contents increased the degradability and hydrophilicity of the films while decreasing water uptake. The water vapor transmission rate of the films was close to the range of 530–1200 g/m²/day, indicating that the as-formed films are candidates for dressing low-exudate wounds or burns. Minocycline-loaded films exhibited a biphasic drug release profile, and the drug was more effective against gram-positive than gram-negative bacteria. The polymeric film with the highest drug loading (2%) exhibited insignificant cytotoxicity (88% cell viability) against a normal fibroblast cell line.
64.
A new friction-aided deep drawing process has been developed in which a metal blank holder divided into eight fan-shaped segments is used instead of the elastomer ring of the Maslennikov process. This blank-holding device consists of four drawing segments and four small wedges, which can move radially inwards and outwards under a given blank-holding pressure. The drawing process can be performed efficiently using an assistant punch, which partially supports the deformation of the blank and improves the shape and dimensional accuracy of the drawn cup. Deep drawing experiments were conducted on soft aluminum sheets 0.5 and 1.0 mm thick to understand the main features of the proposed process. Theoretical analyses based on the energy and slab methods were also conducted to study the effect of the main process parameters on the minimum blank-holding pressure required for the onset of deformation, and to obtain the other optimum working conditions. The feasibility of the new process has been confirmed by successfully drawing deep cups with a drawing ratio of 4.0, although the number of drawing operations required is still high.
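For reference, the drawing ratio quoted here follows the usual definition in deep drawing, the initial blank diameter over the punch diameter (a standard convention, not spelled out in the abstract):

```latex
\mathrm{DR} = \frac{D_0}{d_p} = 4.0
```

so a DR of 4.0 means the blank is four times the punch diameter, well beyond the limiting drawing ratio of roughly 2 achievable in conventional single-stroke deep drawing.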
65.
This work aims to develop a non-destructive tool for the evaluation of bonded plastic joints. The paper examines infrared thermographic imaging in transmission and reflection modes and validates the feasibility of the thermal NDT approach for this application. Results demonstrate good estimation performance for adhesion integrity, uniformity, and bond strength using transmission-mode infrared thermography. In addition, results from a pulsed infrared thermographic application using a modified dynamic infrared tomography scheme show good performance in mapping adhesion-layer thickness and detecting delaminations.
66.
Many recent software engineering papers have examined duplicate issue reports. Thus far, duplicate reports have been considered a hindrance to developers and a drain on their resources, so prior research in this area has focused on proposing automated approaches to identify duplicate reports accurately. However, no studies have attempted to quantify the actual effort spent on identifying duplicate issue reports. In this paper, we empirically examine the effort needed to manually identify duplicate reports in four open source projects: Firefox, SeaMonkey, Bugzilla, and Eclipse-Platform. Our results show that: (i) more than 50% of duplicate reports are identified within half a day, and most are identified without any discussion and with the involvement of very few people; (ii) a classification model built using factors extracted from duplicate issue reports classifies duplicates according to the effort needed to identify them, with a precision of 0.60 to 0.77, a recall of 0.23 to 0.96, and an ROC area of 0.68 to 0.80; and (iii) factors that capture developer awareness of a duplicate issue's peers (i.e., other duplicates of that issue) and the textual similarity of a new report to prior reports are the most influential factors in our models. Our findings highlight the need for effort-aware evaluation of approaches that identify duplicate issue reports, since identifying a considerable share of duplicates (over 50%) appears to be a relatively trivial task for developers. To better assist developers, research on identifying duplicate issue reports should put greater emphasis on the effort-consuming duplicates.
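The modeling setup described in (ii) can be sketched as follows, with synthetic stand-in features rather than the paper's actual factors or data (illustrative only):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical factors per duplicate report, e.g. textual similarity to
# prior reports, peer-awareness signals, comment counts (all synthetic).
X = rng.random((500, 3))
# Label 1 = "effort-consuming to identify" (synthetic ground truth).
y = (X[:, 0] + 0.5 * X[:, 1] + 0.1 * rng.random(500) > 0.8).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)
print("precision:", precision_score(y_te, pred),
      "recall:", recall_score(y_te, pred),
      "ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```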
67.
68.
Reuse of software components, either closed or open source, is considered one of the most important best practices in software engineering, since it reduces development cost and improves software quality. However, because reused components are (by definition) generic, they need to be customized and integrated into a specific system before they can be useful. Since this integration is system-specific, the integration effort is non-negligible and increases maintenance costs, especially when more than one component must be integrated. This paper presents an empirical study of multi-component integration in the context of three successful open source distributions (Debian, Ubuntu, and FreeBSD). Such distributions integrate thousands of open source components with an operating system kernel to deliver a coherent software product to millions of users worldwide. We empirically identified seven major integration activities performed by the maintainers of these distributions, documented how these activities are carried out, and then evaluated and refined the identified activities with input from six maintainers of the three studied distributions. The documented activities provide a common vocabulary for component integration in open source distributions and outline a roadmap for future research on software integration.
69.
Applying model predictive control (MPC) in some cases, such as complicated process dynamics and/or rapid sampling, leads to poorly numerically conditioned solutions and a heavy computational load. Furthermore, there is always a mismatch between a model and the real process it describes. Therefore, to overcome these difficulties, we design a robust MPC using the Laguerre orthonormal basis, which speeds up convergence while lowering the computational load by introducing an extra tuning parameter "a" into the MPC. In addition, a Kalman state estimator is included in the prediction model, so the MPC design accounts for the Kalman estimator parameters as well as the estimation error, which helps the controller react faster to unmeasured disturbances. Tuning the parameters of the Kalman estimator together with those of the MPC is another contribution of this paper; it guarantees robustness against model mismatch and measurement noise. The MPC parameters are tuned by minimizing the sensitivity function at low frequency, since the lower its low-frequency magnitude, the better the command tracking and disturbance rejection. The integral absolute error (IAE) and the peak of the sensitivity function are used as constraints in the optimization procedure to ensure the stability and robustness of the controlled process. The performance of the controller is examined by controlling the level of a tank process and a paper machine process.
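The abstract gives no formulas; for context, the sketch below shows one common discrete Laguerre construction from the MPC literature, in which the pole a plays the role of the extra decay/tuning parameter mentioned above (an illustration under that assumption, not the authors' implementation):

```python
import numpy as np

def discrete_laguerre(a, N, horizon):
    # State-space generation of N discrete Laguerre functions:
    # L(k+1) = A @ L(k), with A lower triangular and pole a.
    beta = 1.0 - a**2
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = a
        for j in range(i):
            A[i, j] = (-a) ** (i - j - 1) * beta
    L = np.sqrt(beta) * np.array([(-a) ** i for i in range(N)])
    out = np.empty((horizon, N))
    for k in range(horizon):
        out[k] = L
        L = A @ L
    return out

funcs = discrete_laguerre(a=0.5, N=4, horizon=200)
# Orthonormality check: the Gram matrix should be close to the identity.
print(np.round(funcs.T @ funcs, 3))
```

In Laguerre-based MPC the future control moves are expanded on these functions, so a handful of coefficients replaces a long control horizon, which is where the computational saving comes from.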
70.
High-Efficiency Video Coding (HEVC) is the latest standardization effort of the International Organization for Standardization and the International Telecommunication Union. The new standard adopts an exhaustive decision algorithm based on recursive quad-tree-structured coding units, prediction units, and transform units. Consequently, significant coding efficiency is achieved, but at the cost of considerable computational complexity. To speed up the encoding process, this paper adopts efficient algorithms based on fast mode decision and optimized motion estimation. The aim is to reduce the complexity of the motion estimation algorithm by modifying its search pattern, which is then combined with a new fast mode decision algorithm to further improve coding efficiency. Experimental results show a significant speedup in encoding time and a bit-rate saving with tolerable quality degradation. The proposed algorithm yields a reduction of up to 75% in encoding time, accompanied by an average PSNR loss of 0.12 dB and a 0.5% decrease in bit-rate.
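The abstract does not detail the modified search pattern; as an illustration of the kind of pattern such optimizations alter, here is a generic small-diamond block-matching search (a sketch, not the paper's algorithm):

```python
import numpy as np

def sad(block, ref, y, x):
    # Sum of absolute differences between the block and a reference window.
    h, w = block.shape
    return np.abs(block - ref[y:y + h, x:x + w]).sum()

def small_diamond_search(block, ref, y0, x0, max_iter=32):
    # Repeatedly test the four one-pixel neighbours of the current best
    # position and move there if the SAD improves; stop at a local minimum.
    h, w = block.shape
    best, best_cost = (y0, x0), sad(block, ref, y0, x0)
    for _ in range(max_iter):
        moved = False
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            y, x = best[0] + dy, best[1] + dx
            if 0 <= y <= ref.shape[0] - h and 0 <= x <= ref.shape[1] - w:
                cost = sad(block, ref, y, x)
                if cost < best_cost:
                    best, best_cost, moved = (y, x), cost, True
        if not moved:
            break
    return best, best_cost

ref = np.random.rand(64, 64)
block = ref[10:18, 12:20]   # true displacement is (10, 12)
print(small_diamond_search(block, ref, 10, 11))  # locks onto the zero-SAD match
```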