131.
We consider a system of Maxwell's and Landau-Lifshitz-Gilbert equations describing magnetization dynamics in micromagnetism. The problem is discretized by a convergent, unconditionally stable finite element method. A multigrid-preconditioned Uzawa-type method is constructed for the solution of the algebraic system arising from the discretized Maxwell's equations. The efficiency of the method is demonstrated in numerical experiments, and the results are compared with those obtained by simplified models.
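For reference, the Gilbert form of the Landau-Lifshitz-Gilbert equation governing the magnetization dynamics reads (sign and scaling conventions vary between papers; the coupling to Maxwell's equations enters through the effective field):

\[
\frac{\partial \mathbf{m}}{\partial t} \;=\; -\gamma\,\mathbf{m}\times\mathbf{H}_{\mathrm{eff}} \;+\; \alpha\,\mathbf{m}\times\frac{\partial \mathbf{m}}{\partial t}, \qquad |\mathbf{m}| = 1,
\]

where \(\gamma\) is the gyromagnetic ratio, \(\alpha\) the Gilbert damping parameter, and \(\mathbf{H}_{\mathrm{eff}}\) the effective field, which includes the magnetic field obtained from Maxwell's equations.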
132.
Cost estimation and effort allocation are key challenges for successful project planning and management in software development. Therefore, both industry and the research community have been working on various models and techniques to accurately predict the cost of projects. Recently, researchers have started debating whether prediction performance depends on the structure of the data rather than on the models used. In this article, we focus on a new aspect of data homogeneity, "cross- versus within-application domain", and investigate what kind of training data should be used for software cost estimation in the embedded systems domain. In addition, we investigate the effect of training dataset size on prediction performance. Based on our empirical results, we conclude that it is better to use cross-domain data for embedded software cost estimation and that the optimal training data size depends on the method used.
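An illustrative sketch of the comparison protocol only (not the authors' experimental setup): a simple size-vs-effort regression evaluated with the MMRE accuracy measure. The feature, the synthetic data and the numeric outcome are hypothetical and do not reproduce the paper's finding.

    # Compare a model trained on within-domain data against one trained on
    # cross-domain data, both evaluated on held-out "embedded" projects.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(1)

    def mmre(actual, predicted):
        """Mean magnitude of relative error."""
        return float(np.mean(np.abs(actual - predicted) / actual))

    def make_domain(n, slope):                 # size-vs-effort data for one application domain
        size = rng.uniform(10, 100, n)
        effort = slope * size + rng.normal(0, 5, n)
        return size.reshape(-1, 1), effort

    X_emb, y_emb     = make_domain(40, 3.0)    # "embedded" projects (hypothetical)
    X_other, y_other = make_domain(200, 2.2)   # projects from other domains (hypothetical)

    X_test, y_test     = X_emb[:10], y_emb[:10]      # hold out some embedded projects
    X_within, y_within = X_emb[10:], y_emb[10:]

    within = LinearRegression().fit(X_within, y_within)
    cross  = LinearRegression().fit(X_other, y_other)

    print("within-domain MMRE:", round(mmre(y_test, within.predict(X_test)), 3))
    print("cross-domain  MMRE:", round(mmre(y_test, cross.predict(X_test)), 3))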
133.
134.
In this paper, regression analyses (RA) are presented for the neutronic calculation of ThO2 fuel mixed with 244CmO2, with different neutronic parameters, for various coolants (natural lithium, Li20Sn80 and Flinabe). The tritium breeding ratio (TBR), the energy multiplication factor (M), the total fission rate (Σf) and the 232Th(n, γ) reaction rate are computed with XSDRNPM. These numerical results are then fitted by RA as functions of the neutronic parameters, and empirical equations for the neutronic performance are obtained. The results computed with XSDRNPM are compared with those given by the fitted empirical equations. The comparison indicates that RA can successfully be used to predict the neutronic performance parameters of the hybrid reactor with a high degree of accuracy. In addition, a correlation matrix is calculated to determine the statistical relationships between TBR, M, Σf and 232Th(n, γ).
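The fitted empirical equations themselves are given in the paper; purely as a generic illustration, a linear regression fit and the Pearson correlation coefficient used to build such a correlation matrix take the forms

\[
Y \;\approx\; \beta_0 + \sum_{k} \beta_k\, x_k, \qquad Y \in \{\mathrm{TBR},\; M,\; \Sigma_f,\; {}^{232}\mathrm{Th}(n,\gamma)\},
\]
\[
r_{XY} \;=\; \frac{\sum_i (x_i-\bar{x})(y_i-\bar{y})}{\sqrt{\sum_i (x_i-\bar{x})^2\;\sum_i (y_i-\bar{y})^2}},
\]

where the \(x_k\) are the chosen neutronic/coolant parameters and the \(\beta_k\) the fitted coefficients; the functional form actually fitted in the paper may be nonlinear.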
135.
Conceptual models are used to understand and communicate the domain of interest during the analysis phase of system development. Because they are used in early phases, errors and omissions may propagate to later phases and be very costly to correct. This paper proposes a framework for evaluating conceptual models represented in a domain-specific language based on UML constructs. The framework describes the main aspects to be considered when conceptual models are represented in a domain-specific language, presents a classification of semantic issues, and defines a set of evaluation indicators. The indicators can, in principle, identify situations in the models where inconsistencies or incompleteness might occur. Whether these are real concerns may depend on domain semantics; hence these are semantic, not syntactic, checks. The use of the proposed review framework is illustrated in the context of two conceptual models in a domain-specific notation, KAMA. With reviews based on the framework, it is possible to spot semantic issues that are not noticed by CASE tools and to help the analyst identify more information about the domain.
136.
In this paper we describe the successful application of the ProB tool for data validation in several industrial applications. The initial case study centred on the San Juan metro system installed by Siemens. The control software was developed and formally proven with B. However, the development contains certain assumptions about the actual rail network topology which have to be validated separately in order to ensure safe operation. For this task, Siemens developed custom proof rules for Atelier B. Atelier B, however, was unable to deal with about 80 properties of the deployment (running out of memory). These properties therefore had to be validated by hand at great expense, and they need to be revalidated whenever the rail network infrastructure changes. In this paper we show how we were able to use ProB to validate all of the roughly 300 properties of the San Juan deployment, automatically detecting in a few minutes exactly the same faults that had been uncovered manually in about one man-month. We have repeated this task for three ongoing projects at Siemens, notably the ongoing automation of line 1 of the Paris Métro. Here again, about a man-month of effort has been replaced by a few minutes of computation. This achievement required extending the ProB kernel for large sets as well as an improved constraint propagation algorithm. We also outline some of the effort and features that were required to move from a tool capable of handling medium-sized examples to a tool able to deal with actual industrial specifications. Finally, we describe the issue of validating ProB itself, so that it can be integrated into the SIL4 development chain at Siemens.
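Purely as an illustration of the kind of set-theoretic data-validation property discussed above (not one of the San Juan properties, and written in plain Python rather than B), such topology checks boil down to predicates over finite sets and relations; station and segment names below are invented.

    # Toy rail topology and two validation properties checked directly in Python.
    stations = {"A", "B", "C"}
    segments = {("A", "B"), ("B", "C")}
    successor = {"A": "B", "B": "C"}

    # Every segment must connect two declared stations.
    assert all(s in stations and t in stations for s, t in segments)
    # The successor map must stay within the declared station set.
    assert set(successor) <= stations and set(successor.values()) <= stations
    print("topology properties hold")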
137.
In this paper, a novel classification rule extraction algorithm recently proposed by the authors is employed to determine the causes of quality defects in a fabric production facility in terms of predetermined parameters such as machine type and warp type. The rule extraction algorithm works on trained artificial neural networks in order to discover the hidden information stored in their connection weights. The algorithm is mainly based on a swarm intelligence metaheuristic known as Touring Ant Colony Optimization (TACO) and has a hierarchical structure with two levels. In the first level, a multilayer perceptron type neural network is trained and its weights are extracted. In the second level, the TACO-based algorithm is applied to these weights to extract classification rules. The main purpose of the present work is to determine and analyze the parameters that most affect quality defects in fabric production. The proposed algorithm is used to discover and evaluate the parameters, and the parameter levels, that give the best quality results. The accuracy of the proposed algorithm is also compared with that of several other rule-based algorithms in order to demonstrate its competitiveness.
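A simplified two-level sketch of this idea (not the authors' exact TACO procedure): level 1 trains a multilayer perceptron, level 2 runs a pheromone-guided discrete search for the attribute-value combination that the trained network classifies as "defect" with the highest confidence. The data set, attribute encoding and pheromone update are hypothetical.

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n_attrs, n_levels = 4, 3                      # e.g. machine type, warp type, ... (encoded 0..2)
    X = rng.integers(0, n_levels, size=(500, n_attrs))
    y = (X[:, 0] == 2).astype(int)                # synthetic "defect" label for illustration

    # Level 1: train the neural network.
    net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0).fit(X, y)

    # Level 2: ant-colony-style search over attribute-value combinations.
    pheromone = np.ones((n_attrs, n_levels))
    best_rule, best_score = None, -1.0
    for _ in range(50):
        for _ant in range(20):
            probs = pheromone / pheromone.sum(axis=1, keepdims=True)
            rule = np.array([rng.choice(n_levels, p=probs[a]) for a in range(n_attrs)])
            score = net.predict_proba(rule.reshape(1, -1))[0, 1]   # confidence of "defect"
            if score > best_score:
                best_rule, best_score = rule, score
            pheromone[np.arange(n_attrs), rule] += score           # reinforce chosen values
        pheromone *= 0.9                                           # evaporation

    print("IF", {f"attr{a}": int(v) for a, v in enumerate(best_rule)},
          "THEN defect", f"(confidence {best_score:.2f})")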
138.
Purpose: Extracting comprehensible classification rules is one of the most emphasized topics in data mining research. In order to obtain accurate and comprehensible classification rules from databases, a new approach is proposed that combines the advantages of artificial neural networks (ANNs) and swarm intelligence.
Method: ANNs are a group of very powerful tools applied to prediction, classification and clustering in different domains. The main disadvantage of this general-purpose tool is its limited interpretability and comprehensibility. To eliminate this disadvantage, a novel approach is developed to uncover and decode the information hidden in the black-box structure of ANNs; that is, a study on knowledge extraction from trained ANNs for classification problems is carried out. The proposed approach uses a particle swarm optimization (PSO) algorithm to transform the behavior of trained ANNs into accurate and comprehensible classification rules. PSO with time-varying inertia weight and acceleration coefficients is designed to explore the best attribute-value combination by optimizing the ANN output function (the standard update equations are sketched after this abstract).
Results: The weights hidden in the trained ANNs were turned into a comprehensible classification rule set with higher testing accuracy rates than traditional rule-based classifiers.
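For reference, the standard PSO velocity and position updates with a linearly decreasing inertia weight take the form (the exact parameter schedules used in the paper may differ):

\[
v_i^{t+1} = w(t)\,v_i^{t} + c_1(t)\,r_1\,(p_i - x_i^{t}) + c_2(t)\,r_2\,(g - x_i^{t}), \qquad x_i^{t+1} = x_i^{t} + v_i^{t+1},
\]
\[
w(t) = w_{\max} - \frac{t}{T}\,(w_{\max} - w_{\min}),
\]

where \(r_1, r_2 \sim U(0,1)\), \(p_i\) is the personal best of particle \(i\), \(g\) the global best, and the acceleration coefficients \(c_1(t)\), \(c_2(t)\) are typically varied linearly over the \(T\) iterations (\(c_1\) decreasing, \(c_2\) increasing).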
139.
Quality function deployment (QFD) is a product development process performed to maximize customer satisfaction. In QFD, the design requirements (DRs) affecting product performance are first identified, and product performance is then improved to satisfy customer needs (CNs). Determining the fulfilment levels of the DRs is crucial during QFD optimization. In real-world applications, however, the values of DRs are often discrete rather than continuous. To the best of our knowledge, there is no mixed integer linear programming (MILP) model in which discrete DR values are considered. Therefore, in this paper a new QFD optimization approach combining an MILP model and the Kano model is proposed to obtain an optimized solution from a limited number of alternative DRs whose values may be discrete. The proposed model can be used not only to optimize product development but also in other applications of QFD such as quality management, planning, design, engineering and decision-making, provided that the DR values are discrete. Additionally, the problem of lack of solutions in integer and linear programming formulations of QFD optimization is overcome. Finally, the model is illustrated with an example.
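A minimal sketch of a QFD MILP with discrete DR levels selected through binary variables, using the open-source PuLP solver. The DR names, satisfaction gains, costs and budget are hypothetical, and the paper's model (which also incorporates the Kano model) is richer than this.

    import pulp

    levels = {"DR1": [0, 1, 2], "DR2": [0, 1, 2]}        # discrete candidate levels
    gain   = {"DR1": {0: 0.0, 1: 0.4, 2: 0.7},           # contribution to customer satisfaction
              "DR2": {0: 0.0, 1: 0.3, 2: 0.9}}
    cost   = {"DR1": {0: 0, 1: 3, 2: 6},
              "DR2": {0: 0, 1: 2, 2: 7}}
    budget = 8

    prob = pulp.LpProblem("qfd_milp", pulp.LpMaximize)
    y = {(d, k): pulp.LpVariable(f"y_{d}_{k}", cat="Binary") for d in levels for k in levels[d]}

    for d in levels:                                     # exactly one level per DR
        prob += pulp.lpSum(y[d, k] for k in levels[d]) == 1
    prob += pulp.lpSum(cost[d][k] * y[d, k] for d in levels for k in levels[d]) <= budget
    prob += pulp.lpSum(gain[d][k] * y[d, k] for d in levels for k in levels[d])  # objective

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    print({d: next(k for k in levels[d] if y[d, k].value() > 0.5) for d in levels})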
140.
This article deals with the modelling and verification of real-time critical systems. Real-time scheduling theory provides algebraic methods and algorithms for verifying the timing constraints of such systems. Nevertheless, many industrial projects do not perform analysis with real-time scheduling theory, even though demand for the theory is strong and its industrial application field is wide (avionics, aerospace, automotive, autonomous systems, …). The Cheddar project investigates why real-time scheduling theory is not used and how its usability can be increased. The project was launched at the University of Brest in 2002. In Lecture Notes in Computer Science, vol. 5026, pp. 240–253, 2008, we presented a short overview of this project. This article is an extended presentation of the Cheddar project, its contributions and its ongoing work.
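A classic example of the algebraic verification that real-time scheduling theory offers (and that tools such as Cheddar automate) is the Liu and Layland utilization bound for rate-monotonic scheduling of n periodic tasks:

\[
\sum_{i=1}^{n} \frac{C_i}{T_i} \;\le\; n\left(2^{1/n} - 1\right),
\]

where \(C_i\) is the worst-case execution time and \(T_i\) the period of task \(i\); the bound decreases towards \(\ln 2 \approx 0.69\) as \(n\) grows.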