161.
ISO/IEC 15504 is an emerging international standard on software process assessment. It defines a number of software engineering processes and a scale for measuring their capability. One of the defined processes is software requirements analysis (SRA). A basic premise of the measurement scale is that higher process capability is associated with better project performance (i.e., predictive validity). This paper describes an empirical study that evaluates the predictive validity of SRA process capability. Assessments using ISO/IEC 15504 were conducted on 56 projects worldwide over a period of two years. Performance measures for each project, such as the ability to meet budget commitments and staff productivity, were also collected using questionnaires. The results provide strong evidence of predictive validity for the SRA process capability measure used in ISO/IEC 15504, but only for organizations with more than 50 IT staff: a strong relationship was found between the implementation of requirements analysis practices as defined in ISO/IEC 15504 and the productivity of software projects. For smaller organizations, the evidence of predictive validity was weak. This can be interpreted in several ways: either the capability measure is not suitable for small organizations, or SRA process capability has less effect on project performance in small organizations.
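A minimal sketch of the kind of predictive-validity check described above: a Spearman rank correlation between SRA capability ratings and project productivity, computed separately for large (more than 50 IT staff) and smaller organizations. The column names and values below are hypothetical placeholders, not data from the study.

```python
# Sketch of a predictive-validity check: correlate SRA process capability
# with project productivity, split by organization size (>50 IT staff vs. smaller).
# Data and column names are hypothetical placeholders.
import pandas as pd
from scipy.stats import spearmanr

projects = pd.DataFrame({
    "sra_capability": [1, 2, 3, 3, 4, 1, 2, 2, 3, 4],   # ISO/IEC 15504-style rating
    "productivity":   [0.8, 1.1, 1.4, 1.3, 1.9, 0.9, 1.0, 1.2, 1.1, 1.3],
    "it_staff":       [120, 200, 80, 60, 150, 12, 25, 30, 40, 45],
})

for is_large, group in projects.groupby(projects["it_staff"] > 50):
    rho, p = spearmanr(group["sra_capability"], group["productivity"])
    size = "large (>50 IT staff)" if is_large else "small (<=50 IT staff)"
    print(f"{size}: Spearman rho = {rho:.2f}, p = {p:.3f}")
```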
162.
A suitable combination of materials for sheltering a system from a sudden change in environmental temperature has been studied theoretically. The protective composite wall consists of two materials: an insulating material placed on the outer surface, and, for the inner surface, a material with good heat-storage properties but negligible heat-transfer resistance. The results show that by replacing some of the insulation with a heat-storage material, the temperature of the protected system can be kept at a considerably lower level. Although the optimal thickness ratio X depends on the Biot number, the Fourier number, and the heat capacity ratio K_C, for a large number of thermal-protection cases the approximation X = 0.45 comes very close to minimizing the progress of the transient. If the Biot number is sufficiently small, it is better to replace all of the insulation with a good heat-storage material.
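A small sketch of how the quantities named above relate for an assumed geometry: the wall is split using the approximate optimum X = 0.45, and the Biot number, Fourier number, and heat capacity ratio are evaluated. All property values are illustrative, and the exact definitions used in the paper may differ.

```python
# Sketch: partition a protective wall of total thickness L into an outer
# insulation layer and an inner heat-storage layer using the approximate
# optimum thickness ratio X = 0.45, then evaluate the dimensionless groups
# the result depends on.  Property values are illustrative only.
L        = 0.10    # total wall thickness, m
X        = 0.45    # fraction of L given to the heat-storage material
h        = 10.0    # outside convection coefficient, W/(m^2 K)
k_ins    = 0.04    # insulation conductivity, W/(m K)
rhoc_ins = 5.0e4   # insulation volumetric heat capacity, J/(m^3 K)
rhoc_sto = 2.0e6   # heat-storage volumetric heat capacity, J/(m^3 K)
t        = 3600.0  # elapsed time after the outside temperature step, s

L_storage    = X * L          # inner heat-storage layer
L_insulation = (1 - X) * L    # remaining outer insulation

alpha_ins = k_ins / rhoc_ins              # insulation thermal diffusivity, m^2/s
Bi = h * L_insulation / k_ins             # Biot number of the insulation layer
Fo = alpha_ins * t / L_insulation**2      # Fourier number at time t
K_C = (rhoc_sto * L_storage) / (rhoc_ins * L_insulation)  # heat capacity ratio

print(f"insulation {L_insulation*1e3:.0f} mm, storage {L_storage*1e3:.0f} mm")
print(f"Bi = {Bi:.2f}, Fo = {Fo:.2f}, K_C = {K_C:.1f}")
```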
163.
In this paper, we revisit the implicit front representation and evolution using the vector level set function (VLSF) proposed in (H. E. Abd El Munim et al., Oct. 2005). Unlike conventional scalar level sets, this function has a vector form: the distance from any point to the nearest point on the front has components (projections) in the coordinate directions, and these projections make up the vector function. This representation is used to evolve closed planar curves as well as 3D surfaces. Maintaining the distance-projection property of the VLSF during evolution is considered, together with a detailed derivation of the vector partial differential equation (PDE) governing such evolution. A shape-based segmentation framework is demonstrated as an application of this implicit representation. The proposed level set function system is used to represent shapes and to provide a dissimilarity measure in a variational object registration process. This formulation permits better control over the shape registration process, which is an important part of the shape-based segmentation framework. The method depends on a set of training shapes used to build a parametric shape model, and color is taken into consideration alongside the shape prior information. The shape model is fitted to the image volume by registration through an energy minimization problem. The approach overcomes the problems of conventional methods, such as establishing point correspondences and tuning the weighting coefficients of the evolution PDEs. It is also suitable for multidimensional data and is computationally efficient. Results on real and synthetic data in 2D and 3D demonstrate the efficiency of the framework.
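A minimal sketch of the vector-distance idea for a 2D circular front: for each grid point the nearest point on the front is found analytically, and the VLSF stores the x- and y-projections of the vector to it. This illustrates only the representation, not the evolution PDE or the registration energy; grid size and circle parameters are arbitrary.

```python
# Sketch of a vector level set function (VLSF) for a circular front:
# each grid point stores the x- and y-components of the vector from the
# point to its nearest point on the circle, rather than a single signed
# distance value.
import numpy as np

n = 128
xs, ys = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
cx, cy, r = 0.0, 0.0, 0.5                     # circle centre and radius

dx, dy = xs - cx, ys - cy
rho = np.sqrt(dx**2 + dy**2) + 1e-12          # distance to the centre
nearest_x = cx + r * dx / rho                 # nearest point on the circle
nearest_y = cy + r * dy / rho

vlsf = np.stack([nearest_x - xs, nearest_y - ys], axis=0)   # vector components
scalar_phi = rho - r                          # conventional signed distance, for comparison

# the magnitude of the vector function recovers the unsigned distance
err = np.max(np.abs(np.linalg.norm(vlsf, axis=0) - np.abs(scalar_phi)))
print("max deviation from |signed distance|:", err)   # ~0
```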
164.
This paper focuses on a numerical method to solve the dynamic equilibrium of a humanoid robot during the walking cycle, including the gait initiation process. It is based on a multi-chain strategy and a dynamic control/command architecture previously developed by Gorce. The strategy relies on correcting the acceleration of the trunk center of mass and on distributing the forces exerted by the limbs on the trunk; the latter is performed by means of a linear programming (LP) method. We study the gait initiation process in which a subject, initially in a quiet erect stance, performs a walking cycle. In this paper, we adapt the method to the multiphase (from double support to single support) and multicriteria features of the studied movement by adjusting specific constraints and criteria so as to ensure the global stability of the humanoid robot throughout task execution. For that purpose, we use a Real-Time Criteria and Constraints Adaptation method. Simulation results are presented to demonstrate the influence of the criteria and constraints on dynamic stability.
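A toy sketch of how a force-distribution step can be posed as a small linear program: split a required vertical force among four contact points subject to force and moment balance while minimizing the largest contact force. The numbers, contact layout, and objective are made up for illustration and are not the formulation used in the paper.

```python
# Toy LP force-distribution sketch: distribute the supported weight among
# four contact points (heel/toe of each foot) so that force and moment
# balance hold, while minimizing the largest contact force.
import numpy as np
from scipy.optimize import linprog

W = 600.0                                  # supported weight, N
x_com = 0.02                               # CoM position along the walking axis, m
x_contacts = np.array([-0.15, -0.05, 0.05, 0.15])   # contact point positions, m

# variables: [F1, F2, F3, F4, t], with t an upper bound on every contact force
c = np.array([0, 0, 0, 0, 1.0])            # minimize t
A_ub = np.hstack([np.eye(4), -np.ones((4, 1))])      # F_i - t <= 0
b_ub = np.zeros(4)
A_eq = np.array([[1, 1, 1, 1, 0.0],                  # vertical force balance
                 [*x_contacts, 0.0]])                # moment balance about the origin
b_eq = np.array([W, W * x_com])
bounds = [(0, None)] * 5                   # unilateral contacts, t >= 0

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
print("contact forces:", res.x[:4], "max contact force:", res.x[4])
```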
165.
In this paper, new wavelet-based affine invariant functions for shape representation are presented. Unlike the previous representation functions, only the approximation coefficients are used to obtain the proposed functions. One of the derived functions is computed by applying a single wavelet transform; the other function is calculated by applying two different wavelet transforms with two different wavelet families. One drawback of the previously derived detail-based invariant representation functions is that they are sensitive to noise at the finer scale levels, which limits the number of scale levels that can be used. The experimental results in this paper demonstrate that the proposed functions are more stable and less sensitive to noise than the detail-based functions.
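A heavily hedged sketch of the general idea, not the functions actually derived in the paper: one plausible construction combines the approximation coefficients of the centered boundary coordinates under two different wavelet families into a determinant-like signature, which scales only by det(A) under a linear map of the shape. The boundary, wavelet families, and combination rule below are all assumptions for illustration.

```python
# Sketch of a wavelet-based affine-tolerant shape signature built from
# approximation coefficients of the boundary coordinates under two wavelet
# families.  This is an illustrative construction, not the paper's functions.
import numpy as np
import pywt

# hypothetical closed boundary, parameterized by t (an ellipse as a stand-in)
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
x = 2.0 * np.cos(t)
y = 1.0 * np.sin(t)
x, y = x - x.mean(), y - y.mean()          # remove translation first

level = 3
def approx(sig, wavelet):
    # approximation coefficients at a coarse scale; 'periodization' keeps the
    # coefficient count identical across wavelet families
    return pywt.downcoef('a', sig, wavelet, mode='periodization', level=level)

ax1, ay1 = approx(x, 'db2'),  approx(y, 'db2')
ax2, ay2 = approx(x, 'sym4'), approx(y, 'sym4')

# determinant-like combination: under x -> A x it scales by det(A), so
# dividing by its maximum magnitude removes that factor
f = ax1 * ay2 - ay1 * ax2
signature = f / np.max(np.abs(f))
print(signature[:8])
```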
166.
Much effort has been devoted to the development and empirical validation of object-oriented metrics. The empirical validations performed thus far would suggest that a core set of validated metrics is close to being identified. However, none of these studies allow for the potentially confounding effect of class size. We demonstrate a strong size confounding effect and question the results of previous object-oriented metrics validation studies. We first investigate whether there is a confounding effect of class size in validation studies of object-oriented metrics and show that, based on previous work, there is reason to believe such an effect exists. We then describe a detailed empirical methodology for identifying these effects. Finally, we perform a study on a large C++ telecommunications framework to examine whether size really is a confounder. This study considered the Chidamber and Kemerer metrics and a subset of the Lorenz and Kidd metrics; the dependent variable was the incidence of a fault attributable to a field failure (fault-proneness of a class). Our findings indicate that, before controlling for size, the results are very similar to previous studies: the metrics expected to be validated are indeed associated with fault-proneness.
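A minimal sketch, on synthetic data, of the confounding check described: fit a logistic regression of fault-proneness on an object-oriented metric alone, then refit with class size as a covariate and compare the metric's coefficient. The data-generating assumptions (faults driven by size only) are made up to illustrate how a size confounder can create a spurious metric effect.

```python
# Sketch of the size-confounding check: regress fault-proneness on an OO
# metric with and without class size (LOC) as a covariate and compare the
# metric's coefficient.  Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
size = rng.lognormal(mean=5.0, sigma=0.8, size=n)          # class size (LOC)
metric = 0.05 * size + rng.normal(0, 3, n)                 # metric correlated with size
logit_p = -4.0 + 0.004 * size                              # faults driven by size only
faulty = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

m1 = sm.Logit(faulty, sm.add_constant(metric)).fit(disp=0)                            # metric alone
m2 = sm.Logit(faulty, sm.add_constant(np.column_stack([metric, size]))).fit(disp=0)   # metric + size

print("metric coefficient without size control:", m1.params[1])
print("metric coefficient with size control:   ", m2.params[1])
```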
167.
Software cost estimation with incomplete data
The construction of software cost estimation models remains an active topic of research. The basic premise of cost modeling is that a historical database of software project cost data can be used to develop a quantitative model to predict the cost of future projects. One of the difficulties faced by workers in this area is that many of these historical databases contain substantial amounts of missing data. Thus far, the common practice has been to ignore observations with missing data. In principle, such a practice can lead to gross biases and may be detrimental to the accuracy of cost estimation models. We describe an extensive simulation in which we evaluate different techniques for dealing with missing data in the context of software cost modeling. Three techniques are evaluated: listwise deletion, mean imputation, and eight different types of hot-deck imputation. Our results indicate that all the missing data techniques perform well, with small biases and high precision. This suggests that the simplest technique, listwise deletion, is a reasonable choice, although it will not necessarily provide the best performance. Consistently best performance (minimal bias and highest precision) can be obtained by using hot-deck imputation with Euclidean distance and z-score standardization.
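A small sketch, with made-up project data, of the best-performing technique named above: hot-deck imputation in which each incomplete case copies the missing value from its nearest complete donor under Euclidean distance on z-score-standardized predictors. Feature names and values are illustrative.

```python
# Sketch of hot-deck imputation with z-score standardization and Euclidean
# distance: each incomplete project copies the missing value from its nearest
# complete "donor" project.  The feature names and values are illustrative.
import numpy as np
import pandas as pd

data = pd.DataFrame({
    "size_kloc": [12.0, 45.0, 8.0, 60.0, 22.0, 30.0],
    "team_size": [4,    10,   3,   np.nan, 6,  np.nan],
    "effort_pm": [20.0, 95.0, 11.0, 130.0, 40.0, 55.0],
})

def hot_deck_impute(df, target):
    out = df.copy()
    donors = df[df[target].notna()]
    predictors = [c for c in df.columns if c != target]
    # z-score standardize the predictor columns using donor statistics
    mu, sd = donors[predictors].mean(), donors[predictors].std()
    z_donors = (donors[predictors] - mu) / sd
    for idx in df.index[df[target].isna()]:
        z_row = (df.loc[idx, predictors] - mu) / sd
        dist = np.sqrt(((z_donors - z_row) ** 2).sum(axis=1))     # Euclidean distance
        out.loc[idx, target] = donors.loc[dist.idxmin(), target]  # copy from nearest donor
    return out

print(hot_deck_impute(data, "team_size"))
```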
168.
This article presents a new method to generate test patterns for multiple stuck-at faults in combinational circuits. We assume the presence of all multiple faults of all multiplicities and we do not resort to their explicit enumeration: the target fault is a single component of possibly several multiple faults. New line and gate models are introduced to handle multiple fault effect propagation through the circuits. The method tries to generate test conditions that propagate the effect of the target fault to primary outputs. When these conditions are fulfilled, the input vector is a test for the target fault and it is guaranteed that all multiple faults of all multiplicities containing the target fault as component are also detected. The method uses similar techniques to those in FAN and SOCRATES algorithms to guide the search part of the algorithm, and includes several new heuristics to enhance the performance and fault detection capability. Experiments performed on the ISCAS'85 benchmark circuits show that test sets for multiple faults can be generated with high fault coverage and a reasonable increase in cost over test generation for single stuck-at faults.
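A sketch of the detection check that underlies such test generation, on a made-up toy netlist: simulate a small combinational circuit with and without a target stuck-at fault injected and see whether a candidate vector exposes it at a primary output. This shows only the fault-simulation step, not the FAN/SOCRATES-style search or the multiple-fault models of the paper.

```python
# Fault-simulation sketch: does a candidate input vector detect a target
# single stuck-at fault in a tiny NAND-only circuit?
# netlist: gate name -> (function, input names); gates listed in topological order
GATES = {
    "n1": ("NAND", ("a", "b")),
    "n2": ("NAND", ("c", "d")),
    "o1": ("NAND", ("n1", "n2")),
    "o2": ("NAND", ("n2", "e")),
}
OUTPUTS = ("o1", "o2")

def simulate(inputs, fault=None):
    """Evaluate the circuit; fault = (line, stuck_value) forces one line."""
    values = dict(inputs)
    def val(name):
        if fault and name == fault[0]:
            return fault[1]
        return values[name]
    for gate, (func, ins) in GATES.items():
        assert func == "NAND"
        values[gate] = 1 - (val(ins[0]) & val(ins[1]))
    return tuple(val(o) for o in OUTPUTS)

vector = {"a": 1, "b": 1, "c": 0, "d": 1, "e": 1}
good = simulate(vector)
faulty = simulate(vector, fault=("n1", 1))       # target: line n1 stuck-at-1
print("detects n1 stuck-at-1:", good != faulty)
```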
169.
CuInTe2 thin films were prepared by thermal vacuum evaporation of the bulk compound. The structural and optical properties, in the temperature range 300–47 K, of thin films grown on glass substrates and annealed in vacuum were studied. The films were investigated by X-ray diffraction and electron microscopy techniques. The calculated lattice constants for CuInTe2 powder were found to be a = 0.619 nm and c = 1.234 nm. From the reflection and transmission data, the optical constants (the refractive index n, the absorption index k, and the absorption coefficient α) were computed. The optical energy gap was determined for CuInTe2 thin films heat-treated at different temperatures for different periods of time. It was found that E_g increases with both annealing temperature and annealing time.
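A sketch of one common way to go from transmission/reflection spectra to an optical band gap: recover the absorption coefficient with the thin-film approximation α = (1/d) ln((1-R)²/T) and take the photon-energy intercept of a Tauc plot (αE)² vs. E for a direct gap. The spectra below are synthetic placeholders and a direct allowed transition is assumed; the paper's exact analysis may differ.

```python
# Sketch: extract an optical band gap from T and R spectra via a Tauc plot.
# Synthetic data; direct-gap (alpha*E)^2 ~ (E - E_g) behaviour is assumed.
import numpy as np

d = 500e-7                                   # film thickness, cm (500 nm)
E = np.linspace(0.8, 1.6, 200)               # photon energy, eV
E_g_true = 1.0                               # gap used to synthesize the data, eV
alpha_true = (1e4 / E) * np.sqrt(np.clip(E - E_g_true, 0, None))   # cm^-1
R = np.full_like(E, 0.2)
T = (1 - R) ** 2 * np.exp(-alpha_true * d)   # synthetic transmission

alpha = (1.0 / d) * np.log((1 - R) ** 2 / T)        # recover alpha from T and R
tauc = (alpha * E) ** 2
mask = tauc > 0.2 * tauc.max()                       # high-absorption linear region
slope, intercept = np.polyfit(E[mask], tauc[mask], 1)
print("estimated E_g =", -intercept / slope, "eV")   # x-intercept of the linear fit
```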
170.
We present explanation-based learning (EBL) methods aimed at improving the performance of diagnosis systems that integrate associational and model-based components. We consider multiple-fault model-based diagnosis (MBD) systems and describe two learning architectures. One, EBLIA, is a method for learning in advance. The other, EBL(p), is a method for learning while doing. EBLIA precompiles models into associations and relies only on the associations during diagnosis. EBL(p) performs compilation during diagnosis whenever reliance on previously learned associational rules results in unsatisfactory performance, as defined by a given performance threshold p. We present results of empirical studies comparing MBD without learning against EBLIA and EBL(p). The main conclusions are as follows: EBLIA is superior when it is feasible, but it is not feasible for large devices; EBL(p) can speed up MBD and scale up to larger devices in situations where perfect accuracy is not required.