981.
Since DeLone and McLean (D&M) developed their model of IS success, there has been much research on the topic of success, as well as extensions and tests of their model. Using the technique of a qualitative literature review, this research reviews 180 papers from the academic literature for the period 1992–2007 that deal with some aspect of IS success. Using the six dimensions of the D&M model – system quality, information quality, service quality, use, user satisfaction, and net benefits – 90 empirical studies were examined and their results summarized. Measures for the six success constructs are described, and 15 pairwise associations between the constructs are analyzed. This work builds on prior research on IS success by summarizing the measures applied to the evaluation of IS success and by examining the relationships that make up the D&M IS success model in both individual and organizational contexts.
982.
In the 1990s, enrollments grew rapidly in information systems (IS) and computer science. Then, beginning in 2000 and 2001, enrollments declined precipitously. This paper examines the enrollment bubble and the dotcom bubble that drove IT enrollments. Although the enrollment bubble occurred worldwide, this paper focuses primarily on U.S. data, which is widely available, and secondarily on Western European data. The paper notes that the dotcom bubble was an investment disaster, but that U.S. IT employment fell surprisingly little and soon surpassed the bubble's peak IT employment. In addition, U.S. IT unemployment rose to almost the level of total unemployment in 2003, then fell back to its traditionally low levels by 2005. Job prospects in the U.S. and most other countries are good for the short term, and the U.S. Bureau of Labor Statistics employment projections for 2006–2016 indicate that prospects in the U.S. will continue to be good for most IT jobs. However, offshoring is a persistent concern for students in Western Europe and the United States. The data on offshoring are of poor quality, but several studies indicate that IT job losses from offshoring are small and may be counterbalanced by gains in IT inshoring jobs. At the same time, offshoring and productivity gains appear to be making low-level jobs such as programming and user support less attractive. This means that IS and computer science programs will have to focus on producing higher-level job skills among graduates. In addition, students may have to stop treating the undergraduate degree as a terminal degree in IS and computer science.
983.
984.
Category Partition Method (CPM) is a general approach to specification-based program testing, in which test frame reduction and refinement are two important issues. Test frame reduction is necessary because too many test frames may be produced, and test frame refinement is important because new information about test frame generation may be obtained and incorporated incrementally during CPM testing. Besides the information provided by testers or users, implementation-related knowledge offers an alternative source of information for reducing and refining CPM test frames. This paper explores that idea by proposing a test frame updating method for Prolog programs based on a call patterns semantics, in which a call patterns analysis collects information about the way procedures are used in a program. The updated test frames are represented as constraints. The effect of the test frame updating is two-fold: on one hand, it removes "uncared-for" data from the original set of test frames; on the other hand, it refines the test frames that deserve more attention. The first effect makes the input domain on which a procedure must be tested a subset of the procedure's full input domain, and the second gives testers a better chance of finding the faults that are most likely to manifest themselves in actual use of the program under consideration. The test frame updating method preserves the effectiveness of CPM testing with respect to detecting the faults we care about. Test case generation from the updated set of test frames is also discussed. To show the applicability of the method, an approximation call patterns semantics is proposed, and test frame updating on this semantics is illustrated by an example.
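To make the frame-generation and reduction steps concrete, here is a minimal sketch of CPM test frame enumeration with a constraint-based filter. The categories, choices, and constraint are illustrative inventions, not taken from the paper; the paper's actual reduction uses call-patterns information from a Prolog program, which a simple predicate stands in for here.

```python
# Minimal CPM sketch: enumerate all test frames (one choice per
# category), then drop frames ruled out by a constraint that plays
# the role of call-pattern information. All names are hypothetical.
from itertools import product

# Each category maps to its set of choices (partition classes).
categories = {
    "list_length": ["empty", "singleton", "many"],
    "element_type": ["ground", "variable"],
    "mode": ["input_bound", "input_unbound"],
}

def generate_frames(cats):
    """Enumerate every test frame as a dict of category -> choice."""
    names = list(cats)
    for combo in product(*(cats[n] for n in names)):
        yield dict(zip(names, combo))

def cared_about(frame):
    """Constraint standing in for call-pattern knowledge: drop frames
    describing calls that can never occur in the program under test."""
    # Hypothetical rule: an empty list never occurs with variable elements.
    if frame["list_length"] == "empty" and frame["element_type"] == "variable":
        return False
    return True

all_frames = list(generate_frames(categories))
reduced = [f for f in all_frames if cared_about(f)]
print(f"{len(all_frames)} frames generated, {len(reduced)} kept after reduction")
```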
985.
Statistical process control (SPC) is a conventional means of monitoring software processes and detecting related problems, where the causes of detected problems can be identified through causal analysis. Determining the actual causes of reported problems requires significant effort because of the large number of possible causes. This study presents an approach that detects problems and identifies their causes using multivariate SPC, so that multiple measures of a software process can be monitored simultaneously. The measures detected as the major contributors to an out-of-control signal are used to identify the causes, and partial least squares (PLS) and statistical hypothesis testing are employed to validate the identified causes. The main advantage of the proposed approach is that correlated indices can be monitored simultaneously, facilitating the causal analysis of a software process.
Ching-Pao Chang is a PhD candidate in Computer Science & Information Engineering at National Cheng-Kung University, Taiwan. He received his MA in Computer Science from the University of Southern California in 1998. His current work deals with software process improvement and defect prevention using machine learning techniques. Chih-Ping Chu is Professor of Software Engineering in the Department of Computer Science & Information Engineering at National Cheng-Kung University (NCKU), Taiwan. He received his MA in Computer Science from the University of California, Riverside in 1987, and his Doctorate in Computer Science from Louisiana State University in 1991. He is especially interested in parallel computing and software engineering.
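The abstract does not name the specific multivariate chart; a common choice for monitoring several correlated measures at once is Hotelling's T². The sketch below is a generic illustration in that spirit, on synthetic data, with invented measure names; the paper's actual charts, PLS step, and hypothesis tests are not reproduced.

```python
# Generic multivariate SPC sketch: monitor three correlated process
# measures with a Hotelling's T^2 statistic, then ask which measure
# contributes most to an out-of-control signal. Data are synthetic.
import numpy as np

rng = np.random.default_rng(0)

# Phase I: baseline observations of three measures
# (e.g., defect density, review effort, rework hours -- illustrative names).
baseline = rng.multivariate_normal(
    mean=[5.0, 2.0, 8.0],
    cov=[[1.0, 0.3, 0.2], [0.3, 0.5, 0.1], [0.2, 0.1, 2.0]],
    size=200,
)
mu = baseline.mean(axis=0)
S_inv = np.linalg.inv(np.cov(baseline, rowvar=False))

def t_squared(x):
    d = x - mu
    return float(d @ S_inv @ d)

# Phase II: a new observation, shifted in the second measure.
x_new = np.array([5.2, 4.5, 8.1])
t2 = t_squared(x_new)
print(f"T^2 = {t2:.2f}")  # compare against an F/chi-square control limit

# Crude contribution analysis: how much T^2 drops when one measure is
# set back to its baseline mean; the largest drop points at the
# measure most responsible for the signal.
for j in range(3):
    x_adj = x_new.copy()
    x_adj[j] = mu[j]
    print(f"measure {j}: contribution ~ {t2 - t_squared(x_adj):.2f}")
```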
986.
The identification of the part families and machine groups that form the cells is a major step in the development of a cellular manufacturing system, and consequently a large number of concepts, theories, and algorithms have been proposed. One common assumption of most cell formation algorithms is that the product mix remains stable over a period of time. Today, however, market demand is shaped by consumers, resulting in a highly volatile market. This has given rise to a new class of products characterized by low volume and high variety. Incorporating product mix changes into an existing cellular manufacturing system raises many important issues. This paper presents a methodology for incorporating new parts and machines into an existing cellular manufacturing system. The objective is to fit the new parts and machines into the existing cells, thereby increasing machine utilization and reducing investment in new equipment.
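As a purely illustrative sketch of the "fit into existing cells" objective (not the paper's methodology), one simple heuristic assigns a new part to whichever existing cell already covers most of its machine routing, so that new investment is limited to the machines still missing. Cells, routings, and the tie-breaking rule below are invented for the example.

```python
# Illustrative greedy assignment: place a new part in the existing cell
# that already contains most of the machines the part needs, and report
# which machines would still have to be added (new investment).
existing_cells = {
    "cell_1": {"lathe", "mill", "drill"},
    "cell_2": {"grinder", "broach", "drill"},
}

def assign_new_part(required_machines, cells):
    """Pick the cell covering the most required machines."""
    best = max(cells, key=lambda c: len(cells[c] & required_machines))
    missing = required_machines - cells[best]
    return best, missing

cell, missing = assign_new_part({"lathe", "drill", "grinder"}, existing_cells)
print(f"assign to {cell}; machines to add or outsource: {missing or 'none'}")
```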
987.
Hard turning with cubic boron nitride (CBN) tools has proven to be more effective and efficient than traditional grinding operations in machining hardened steels. However, rapid tool wear remains one of the major hurdles to the wide industrial adoption of hard turning. Better prediction of CBN tool wear progression helps to optimize cutting conditions and/or tool geometry to reduce wear, which in turn helps make hard turning a viable technology. The objective of this study is to design a novel but simple neural network-based generalized optimal estimator for CBN tool wear prediction in hard turning. The proposed estimator is based on a fully forward connected neural network with cutting conditions and machining time as the inputs and tool flank wear as the output. An extended Kalman filter algorithm is used as the network training algorithm to speed up learning convergence, and the network's neuron connections are optimized using a destructive optimization algorithm. Besides comparisons with CBN tool wear measurements in hard turning, the proposed estimator is also evaluated against a multilayer perceptron neural network modeling approach and/or an analytical modeling approach, and it proves to be faster, more accurate, and more robust. Although this neural network-based estimator is designed for CBN tool wear modeling in this study, it is expected to be applicable to other tool wear modeling applications.
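The sketch below shows only the input/output structure of such an estimator: a small feedforward network mapping (cutting speed, feed, machining time) to flank wear, trained on synthetic data. It is a simplified stand-in, not the paper's estimator: the paper uses a fully forward connected topology, extended Kalman filter training, and destructive pruning, whereas this sketch uses a plain one-hidden-layer network with gradient descent.

```python
# Simplified stand-in for a tool wear estimator: a tiny MLP mapping
# (cutting speed, feed, machining time) to flank wear, trained by
# backpropagation on synthetic data with an assumed wear law.
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: wear grows with time, faster at high speed and feed.
X = rng.uniform([100.0, 0.05, 1.0], [250.0, 0.25, 30.0], size=(200, 3))
y = 0.002 * X[:, 2] * (X[:, 0] / 100.0) * (1.0 + X[:, 1]) \
    + rng.normal(0, 0.003, 200)

Xn = (X - X.mean(0)) / X.std(0)          # normalize inputs

# One hidden layer of 8 tanh units, linear output.
W1 = rng.normal(0, 0.5, (3, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)

lr = 0.05
for _ in range(2000):
    H = np.tanh(Xn @ W1 + b1)            # hidden activations
    pred = (H @ W2 + b2).ravel()
    err = pred - y
    g_pred = (2.0 / len(y)) * err[:, None]       # dMSE/dpred
    g_H = (g_pred @ W2.T) * (1.0 - H ** 2)       # backprop through tanh
    W2 -= lr * (H.T @ g_pred); b2 -= lr * g_pred.sum(0)
    W1 -= lr * (Xn.T @ g_H);   b1 -= lr * g_H.sum(0)

# RMSE from the last iteration's residuals.
print(f"final training RMSE: {np.sqrt(np.mean(err ** 2)):.4f} mm")
```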
988.
Developing trusted software has become an important trend and a natural choice in the development of software technology and applications, and software trustworthiness modeling has become a prerequisite and a necessary means to that end. To discuss and explain the basic scientific problems in software trustworthiness and to establish theoretical foundations for its measurement, this paper draws on ideas from the study of dynamical systems to investigate the evolutionary laws of software trustworthiness and the dynamical mechanisms at work under various internal and external factors, and proposes dynamical models for software trustworthiness. Software trustworthiness can thus be regarded as a statistical characteristic of the behavior of software systems in a dynamical and open environment. By analyzing two simple examples, the paper explains the relationship between the limit evolutionary behaviors of software trustworthiness attributes and the characteristics of the underlying dynamical system, and interprets the dynamical characteristics of software trustworthiness and their evolutionary complexity. Supported partially by the National Basic Research Program of China (Grant No. 2005CB321900) and the National Natural Science Foundation of China (Grant No. 60473091)
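As a toy illustration of "limit evolutionary behavior" only (entirely our own construction, not a model from the paper), one can iterate a one-dimensional map of a trustworthiness attribute and watch its limit behavior change with an environment parameter: a fixed point for small parameter values, oscillation or chaos for larger ones.

```python
# Toy dynamics: treat one trustworthiness attribute x in [0, 1] as the
# state of a discrete dynamical system (logistic map as a stand-in) and
# inspect the tail of the orbit for different environment parameters r.
def iterate(r, x0=0.5, burn_in=500, keep=8):
    x = x0
    for _ in range(burn_in):
        x = r * x * (1.0 - x)            # transient, discarded
    tail = []
    for _ in range(keep):
        x = r * x * (1.0 - x)
        tail.append(round(x, 4))
    return tail

for r in (2.8, 3.2, 3.9):                # fixed point, 2-cycle, chaotic
    print(f"r = {r}: limit behavior ~ {iterate(r)}")
```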
989.
In this paper, sampled-data-based average-consensus control is considered for networks consisting of continuous-time first-order integrator agents in a noisy distributed communication environment, and the impact of the sampling size and the number of network nodes on system performance is analyzed. The control input of each agent can use only information measured at the sampling instants from its neighborhood, rather than the complete continuous process, and the measurements of its neighbors' states are corrupted by random noises. Using probability limit theory and properties of the graph Laplacian matrix, it is shown that for a connected network, the static mean square error between each individual state and the average of the initial states of all agents can be made arbitrarily small, provided the sampling size is sufficiently small. Furthermore, by properly choosing the consensus gains, almost sure consensus can be achieved. It is worth pointing out that an uncertainty principle for Gaussian networks is obtained: in the case of white Gaussian noises, no matter what the sampling size is, the product of the steady-state and transient performance indices is always equal to or larger than a constant depending on the noise intensity, the network topology, and the number of network nodes.
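A hedged simulation sketch of the setting (not the paper's exact protocol or gain schedule): sampled-data consensus on a ring of integrator agents whose neighbor measurements carry additive white Gaussian noise. Shrinking the sampling step lowers the steady-state error around the initial average at the cost of slower transients, echoing the trade-off behind the uncertainty principle.

```python
# Noisy sampled-data consensus on a ring graph:
#   x_{k+1} = x_k - h * (L x_k + noise_k)
# where L is the graph Laplacian and h the sampling step.
import numpy as np

rng = np.random.default_rng(2)
n = 10

# Laplacian of an undirected ring graph on n nodes.
L = 2 * np.eye(n)
for i in range(n):
    L[i, (i - 1) % n] = L[i, (i + 1) % n] = -1

x0 = rng.uniform(0, 10, n)
target = x0.mean()                # average of the initial states
sigma = 0.5                       # measurement noise std deviation

for h in (0.2, 0.05, 0.01):
    z = x0.copy()
    steps = int(50 / h)           # same time horizon for each h
    for _ in range(steps):
        noise = sigma * rng.normal(size=n)   # noisy neighbor measurements
        z = z - h * (L @ z + noise)
    mse = np.mean((z - target) ** 2)
    print(f"h = {h:5}: mean square deviation from initial average = {mse:.4f}")
```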
990.
Recently, a class of new stochastic control models driven by geometric Brownian motion has been advanced in the international literature. These are an important kind of impulse control model whose cost structure differs from earlier ones, with broad applications and theoretical significance in financial control and investment management. This paper substantially generalizes these stochastic control models under quite extensive conditions and describes them more precisely within the standard theoretical framework of stochastic processes. By establishing a set of appropriate variational equations, proving the existence of their solutions, and applying methods of stochastic analysis, the paper proves that the generalized stochastic control models admit optimal controls, and it analyzes the structure of those optimal controls in detail. In addition, the solution function of the variational equations, which to some extent constitutes the value function of the control models, is studied in depth. Because the analytical methods of this paper differ greatly from those of the original reference, the paper possesses considerable originality; it also gives rigorous proofs for parts of the original reference whose analyses were not entirely rigorous, placing the analysis and discussion of the model on a mathematically exact footing. Supported by the National Natural Science Foundation of China (Grant No. 19671004)
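A minimal sketch of the state process only: simulating geometric Brownian motion paths with the exact log-normal update, plus a naive band-type impulse rule of the kind typical in impulse control of diffusions. The drift, volatility, and band thresholds are assumed values; the paper's variational equations and optimal impulse controls are not reproduced.

```python
# Geometric Brownian motion via the exact update
#   X_{t+dt} = X_t * exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z),
# with an illustrative band-type impulse rule on one path.
import numpy as np

rng = np.random.default_rng(3)
mu, sigma = 0.05, 0.3                  # drift and volatility (assumed)
x0, dt, steps, paths = 1.0, 1.0 / 252, 252, 5

increments = (mu - 0.5 * sigma ** 2) * dt \
    + sigma * np.sqrt(dt) * rng.normal(size=(paths, steps))
X = x0 * np.exp(np.cumsum(increments, axis=1))
print("terminal values of uncontrolled paths:", np.round(X[:, -1], 3))

# Purely illustrative impulse rule: whenever the state leaves the band
# [0.8, 1.25], an intervention resets it to 1.0.
lo_b, hi_b = 0.8, 1.25
x, interventions = x0, 0
for z in increments[0]:
    x *= np.exp(z)
    if not (lo_b <= x <= hi_b):
        x, interventions = 1.0, interventions + 1
print("interventions on path 0 under the band rule:", interventions)
```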