91.
This paper shows that breaking the barrier of 1 triangle/clock rasterization rate for microtriangles in modern GPU architectures in an efficient way is possible. The fixed throughput of the special-purpose culling and triangle setup stages of the classic pipeline limits GPU scalability when rasterizing many triangles in parallel that cover very few pixels. In contrast, the shader core counts and increasing GFLOPs in modern GPUs clearly suggest parallelizing this computation entirely across multiple shader threads, making use of the powerful wide-ALU instructions. In this paper, we present a very efficient SIMD-like rasterization code targeted at very small triangles that scales very well with the number of shader cores and has higher performance than traditional edge-equation-based algorithms. We have extended the ATTILA GPU shader ISA (del Barrio et al. in IEEE International Symposium on Performance Analysis of Systems and Software, pp. 231–241, 2006) with two fixed-point instructions to meet the rasterization precision requirement. This paper also introduces a novel subpixel Bounding Box size optimization that adjusts the bounds much more finely, which is critical for small triangles, and doubles the 2×2-pixel stamp test efficiency. The proposed shader rasterization program can run on top of the original pixel shader program in such a way that selected fragments are rasterized, attribute-interpolated and pixel-shaded in the same pass. Our results show that our technique yields better performance than a classic rasterizer at 8 or more shader cores, with speedups as high as 4× for 16 shader cores.
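As an illustration of the core idea (not the paper's ATTILA shader code, which uses dedicated fixed-point ISA instructions), a minimal sketch of microtriangle rasterization with edge equations and a subpixel-tight bounding box might look like this; floats stand in for the fixed-point arithmetic, and vertices are assumed wound so that all three edge functions are non-negative inside the triangle:

```python
from math import floor, ceil

def edge(a, b, px, py):
    # Signed edge function: non-negative for points on the interior side
    # of edge a->b under the winding convention assumed here.
    return (px - a[0]) * (b[1] - a[1]) - (py - a[1]) * (b[0] - a[0])

def rasterize_microtriangle(v0, v1, v2):
    # Subpixel-tight bounding box: floor/ceil the exact vertex coordinates
    # instead of snapping outward to whole 2x2 stamps, then test each
    # pixel centre against the three edge equations.
    xmin = floor(min(v0[0], v1[0], v2[0]))
    xmax = ceil(max(v0[0], v1[0], v2[0]))
    ymin = floor(min(v0[1], v1[1], v2[1]))
    ymax = ceil(max(v0[1], v1[1], v2[1]))
    covered = []
    for y in range(ymin, ymax):
        for x in range(xmin, xmax):
            px, py = x + 0.5, y + 0.5  # pixel centre
            if (edge(v0, v1, px, py) >= 0 and
                edge(v1, v2, px, py) >= 0 and
                edge(v2, v0, px, py) >= 0):
                covered.append((x, y))
    return covered
```

For a microtriangle the tightened bounding box often prunes the candidate set to a handful of pixels, which is what makes a per-triangle shader-thread formulation competitive.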
92.
The aim of this paper is to show how the hybridization of a multi-objective evolutionary algorithm (MOEA) and a local search method based on the use of rough set theory is a viable alternative to obtain a robust algorithm able to solve difficult constrained multi-objective optimization problems at a moderate computational cost. This paper extends a previously published MOEA [Hernández-Díaz AG, Santana-Quintero LV, Coello Coello C, Caballero R, Molina J. A new proposal for multi-objective optimization using differential evolution and rough set theory. In: 2006 genetic and evolutionary computation conference (GECCO’2006). Seattle, Washington, USA: ACM Press; July 2006], which was limited to unconstrained multi-objective optimization problems. Here, the main idea is to use this sort of hybrid approach to approximate the Pareto front of a constrained multi-objective optimization problem while performing a relatively low number of fitness function evaluations. Since in real-world problems the cost of evaluating the objective functions typically dominates, our underlying assumption is that, by aiming to minimize the number of such evaluations, our MOEA can be considered efficient. As in its previous version, our hybrid approach operates in two stages: in the first one, a multi-objective version of differential evolution is used to generate an initial approximation of the Pareto front. Then, in the second stage, rough set theory is used to improve the spread and quality of this initial approximation. To assess the performance of our proposed approach, we adopt, on the one hand, a set of standard bi-objective constrained test problems and, on the other hand, a large real-world problem with eight objective functions and 160 decision variables. The first set of problems is solved performing 10,000 fitness function evaluations, which is a competitive value compared to the number of evaluations previously reported in the specialized literature for such problems. The real-world problem is solved performing 250,000 fitness function evaluations, mainly because of its high dimensionality. Our results are compared with respect to those generated by NSGA-II, a MOEA representative of the state of the art in the area.
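The first stage relies on differential evolution; a minimal sketch of one DE/rand/1/bin generation (the standard scheme, not necessarily the exact multi-objective variant used in the paper; `f_scale` and `cr` are illustrative defaults) could look like:

```python
import random

def de_step(pop, f_scale=0.5, cr=0.9):
    # One DE/rand/1/bin generation over a real-coded population
    # (list of equal-length lists; needs at least 4 members).
    dim = len(pop[0])
    trials = []
    for i, target in enumerate(pop):
        # Three distinct donors, all different from the target.
        a, b, c = random.sample([p for j, p in enumerate(pop) if j != i], 3)
        jrand = random.randrange(dim)  # force at least one mutated gene
        trial = [a[k] + f_scale * (b[k] - c[k])
                 if (random.random() < cr or k == jrand) else target[k]
                 for k in range(dim)]
        trials.append(trial)
    return trials
```

In a multi-objective setting, each trial would then be compared against its target via Pareto dominance (plus constraint handling) rather than a scalar fitness.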
93.
In this work we present a general (mono- and multi-objective) optimization framework for the technological improvement of biochemical systems. The starting point of the method is a mathematical model in ordinary differential equations (ODEs) of the investigated system, based on qualitative biological knowledge and quantitative experimental data. In the method we take advantage of the special structural features of a family of ODEs called power-law models to reduce the computational complexity of the optimization program. In this way, the genetic manipulation of a biochemical system to meet a certain biotechnological goal can be expressed as an optimization program with desirable properties such as linearity or convexity. The general method of optimization is presented and discussed in its linear and geometric programming versions. We furthermore illustrate the use of the method with several real case studies. We conclude that the technological improvement of microorganisms can be achieved by combining mathematical modelling and optimization. The systematic nature of this approach facilitates the redesign of biochemical systems and makes this a predictive exercise rather than a trial-and-error procedure.
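To see why the power-law structure helps, note that a power-law rate v = γ·∏ Xᵢ^gᵢ becomes linear in the logarithms of the concentrations, so flux optimization under bounds reduces to a linear program. A toy sketch (hypothetical function names; the box-bound case shown here has a closed-form optimum at the bounds):

```python
import math

def log_rate(gamma, exponents, concentrations):
    # ln v = ln(gamma) + sum_i g_i * ln(X_i): the power-law rate is
    # linear in the log-concentrations y_i = ln(X_i).
    return math.log(gamma) + sum(g * math.log(x)
                                 for g, x in zip(exponents, concentrations))

def maximize_log_flux(gamma, exponents, bounds):
    # With a linear objective and only box bounds on each ln(X_i),
    # the LP optimum sits at a bound: upper bound for positive
    # exponents, lower bound for negative ones.
    y = [math.log(hi) if g > 0 else math.log(lo)
         for g, (lo, hi) in zip(exponents, bounds)]
    return math.log(gamma) + sum(g * yi for g, yi in zip(exponents, y))
```

With additional linear constraints in log space (e.g. steady-state balances), a general LP solver would replace the closed-form bound selection, but the objective and constraints stay linear.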
94.
Most work on pattern mining focuses on simple data structures such as itemsets and sequences of itemsets. However, many recent applications dealing with complex data, like chemical compounds, protein structures, XML and Web log databases and social networks, require much more sophisticated data structures such as trees and graphs. In these contexts, interesting patterns involve not only frequent object values (labels) appearing in the graphs (or trees) but also frequent specific topologies found in these structures. Recently, several techniques for tree and graph mining have been proposed in the literature. In this paper, we focus on constraint-based tree pattern mining. We propose to use tree automata as a mechanism to specify user constraints over tree patterns. We present the algorithm CoBMiner, which allows user constraints specified by a tree automaton to be incorporated into the mining process. An extensive set of experiments executed over synthetic and real data (XML documents and Web usage logs) allows us to conclude that incorporating constraints during the mining process is far more effective than filtering the interesting patterns after the mining process.
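A deterministic bottom-up tree automaton as a constraint checker can be sketched in a few lines; the encoding below (trees as `(label, children)` pairs, `delta` as a dictionary) is an illustrative assumption, not CoBMiner's actual representation:

```python
def run_automaton(tree, delta, default=None):
    # Bottom-up run: compute the children's states first, then look up
    # the transition for (label, child-state tuple). Unknown
    # combinations fall back to a rejecting default state.
    label, children = tree
    child_states = tuple(run_automaton(c, delta, default) for c in children)
    return delta.get((label, child_states), default)

def satisfies(tree, delta, accepting):
    # A tree pattern meets the user constraint iff its root state
    # is accepting.
    return run_automaton(tree, delta) in accepting
```

Pruning candidate patterns with such a check during enumeration, rather than post-filtering the full frequent-pattern output, is what the abstract reports as the more effective strategy.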
95.
In this paper we present a new parallel multi-frontal direct solver, dedicated to the hp Finite Element Method (hp-FEM). The self-adaptive hp-FEM generates, in a fully automatic mode, a sequence of hp-meshes delivering exponential convergence of the error with respect to the number of degrees of freedom (d.o.f.) as well as the CPU time, by performing a sequence of hp refinements starting from an arbitrary initial mesh. The solver constructs an initial elimination tree for an arbitrary initial mesh, and expands the elimination tree each time the mesh is refined. This allows us to keep track of the order of elimination for the solver. The solver also minimizes memory usage by de-allocating partial LU factorizations computed during the elimination stage and recomputing them for the backward substitution stage, utilizing only about 10% of the computational time necessary for the original computations. The solver has been tested on 3D Direct Current (DC) borehole resistivity measurement simulation problems. We measure the execution time and memory usage of the solver on a large regular mesh with 1.5 million degrees of freedom as well as on a highly non-regular mesh, generated by the self-adaptive hp-FEM, with finite elements of various sizes and polynomial orders of approximation varying from p=1 to p=9. From the presented experiments it follows that the parallel solver scales well up to the maximum number of utilized processors. The limit for the solver's scalability is the maximum sequential part of the algorithm: the computation of the partial LU factorizations over the longest path from the root of the elimination tree down to the deepest leaf.
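The memory-saving trade described above (free each partial LU factor after elimination, recompute it for backward substitution) can be sketched schematically; the `_factor` stand-in below replaces a real frontal LU factorization, and the class name is hypothetical:

```python
class EliminationNode:
    # One node of the elimination tree. The frontal matrix is kept so
    # the partial LU factor can be recomputed on demand; the factor
    # itself is freed as soon as the parent has consumed it.
    def __init__(self, frontal_matrix, children=()):
        self.frontal = frontal_matrix      # kept: needed for recompute
        self.children = children
        self._lu = None
        self.recompute_count = 0

    def _factor(self):
        self.recompute_count += 1
        return ("LU", self.frontal)        # stand-in for a real frontal LU

    def eliminate(self):
        for c in self.children:
            c.eliminate()
        self._lu = self._factor()
        contribution = self._lu            # parent consumes the Schur block
        self._lu = None                    # de-allocate to save memory
        return contribution

    def back_substitute(self):
        if self._lu is None:               # recompute on demand
            self._lu = self._factor()
        for c in self.children:
            c.back_substitute()
```

Each node thus factorizes twice, trading the reported ~10% of extra compute for not holding every partial factor in memory at once.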
96.
One of the main problems of robots is their lack of adaptability and the need for adjustment every time the robot changes its working place. To solve this, we propose a learning approach for mobile robots using a reinforcement-based strategy and a dynamic sensor-state mapping. This strategy, practically parameterless, minimises the adjustments needed when the robot operates in a different environment or performs a different task. Our system simultaneously learns the state space and the action to execute in each state. The learning algorithm attempts to maximise the time before a robot failure in order to obtain a control policy suited to the desired behaviour, thus providing a more interpretable learning process. The state representation is created dynamically, starting with an empty state space and adding new states as the robot finds new situations that it has not seen before. Dynamic creation of the state representation avoids the classic, error-prone and cyclic process of designing and testing an ad hoc representation. We performed an exhaustive study of our approach, comparing it with other classic strategies. Unexpectedly, learning both perception and action does not increase the learning time.
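A minimal sketch of growing the state table on demand, here grafted onto plain tabular Q-learning (the paper's algorithm and reward, maximizing time-to-failure, differ in detail; hyperparameter defaults are illustrative):

```python
import random

class DynamicQLearner:
    # Tabular Q-learning over a state space that starts empty and grows
    # whenever a previously unseen sensor situation appears.
    def __init__(self, actions, alpha=0.1, gamma=0.95, epsilon=0.1):
        self.q = {}                        # state -> {action: value}
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def _ensure(self, state):
        if state not in self.q:            # new situation: add a state
            self.q[state] = {a: 0.0 for a in self.actions}

    def act(self, state):
        self._ensure(state)
        if random.random() < self.epsilon:  # epsilon-greedy exploration
            return random.choice(self.actions)
        return max(self.q[state], key=self.q[state].get)

    def update(self, s, a, reward, s_next):
        self._ensure(s)
        self._ensure(s_next)
        target = reward + self.gamma * max(self.q[s_next].values())
        self.q[s][a] += self.alpha * (target - self.q[s][a])
```

Because states are created lazily, no hand-designed discretization of the sensor space is needed up front, which is the point the abstract makes against ad hoc representations.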
97.
This paper presents a novel control design technique for obtaining a guaranteed cost fuzzy controller subject to constraints on the input channel. This guaranteed cost control law is obtained via multi-parametric quadratic programming. The result is a piecewise fuzzy control law in which the state partition is defined by fuzzy inequalities. The parameters of the Lyapunov function can be obtained beforehand using Linear Matrix Inequality optimization.
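Multi-parametric QP yields an explicit piecewise control law evaluated online by locating the active region; a crisp (non-fuzzy) sketch of that lookup, with a hypothetical data layout, might be:

```python
def piecewise_control(x, regions):
    # Evaluate a piecewise-affine control law u = K x + g on the first
    # region whose inequalities H x <= h hold, as produced by mpQP.
    # Each region is a tuple (H, h, K, g) of plain nested lists.
    for H, h, K, g in regions:
        if all(sum(Hi[j] * x[j] for j in range(len(x))) <= hi
               for Hi, hi in zip(H, h)):
            return [sum(Ki[j] * x[j] for j in range(len(x))) + gi
                    for Ki, gi in zip(K, g)]
    raise ValueError("state outside the explored state-space partition")
```

In the paper's setting the region membership test is fuzzy rather than crisp, but the online evaluation still reduces to a region lookup followed by an affine (here, fuzzy-blended) control law.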
98.
Quality control and safety related issues have become more and more important in the industrial production of high added value products and chemical specialities in recent years. In this regard, many successful applications of multivariate statistical process control (MSPC) for monitoring and diagnosis of batch processes have been presented. It is common industrial practice to monitor batch progress by exploiting the information contained in a historical database of successful batches using projection techniques such as principal component analysis (PCA), partial least squares (PLS) and independent component analysis (ICA). In this work, a new MSPC strategy for batch process monitoring is presented. Its distinctive feature is that it works in the space of the original variables. The technique uses only the T2-statistic for detection and identification purposes. The identification of the set of observations that signal the fault is accomplished by decomposing the T2-statistic as a unique sum of each variable's contribution. Performance comparisons between the proposed strategy and the most popular PCA-based approaches are carried out by simulation of polymerization and penicillin cultivation batch processes. Results show that the new approach can be successfully applied to monitor this kind of process, since it works very well during both the fault detection and identification stages.
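One way to decompose T2 = z'S⁻¹z as an exact sum of per-variable contributions is cᵢ = zᵢ·(S⁻¹z)ᵢ; this is a common definition and sums exactly to T2, though it is not necessarily the specific decomposition the paper derives:

```python
def t2_with_contributions(z, s_inv):
    # Hotelling T2 for a (mean-centred) observation z, given the inverse
    # covariance matrix s_inv, decomposed as c_i = z_i * (S^-1 z)_i.
    # The contributions sum exactly to T2 (individual terms may be
    # negative under this definition).
    w = [sum(s_inv[i][j] * z[j] for j in range(len(z)))
         for i in range(len(z))]
    contributions = [z[i] * w[i] for i in range(len(z))]
    return sum(contributions), contributions
```

The variables with the largest contributions are then flagged as the ones signalling the fault, which is the identification step described in the abstract.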
99.
This paper presents a new methodology for the design and implementation of gain‐scheduled controllers for multi‐rate systems. The proposed methodology provides a natural way to address the integrated guidance and control problem for autonomous vehicles when the outputs are sampled at different instants of time. A controller structure is first proposed for the regulation of non‐square multi‐rate systems with more measured outputs than inputs. Based on this structure, an implementation for a gain‐scheduled controller is obtained that satisfies an important property known as the linearization property. The implementation resembles the velocity implementation for single‐rate systems. The method is then applied to the problem of steering an autonomous rotorcraft along a predefined trajectory defined in terms of space and time coordinates. By considering a convenient error vector to describe the vehicle's dynamics, the trajectory tracking problem is reduced to that of regulating the error variables to zero. Because of the periodic multi‐rate nature of the onboard sensor suite, the controller synthesis is dealt with under the scope of linear periodic systems theory. Simulation results obtained with a full non‐linear rotorcraft dynamic model are presented and discussed. Copyright © 2010 John Wiley & Sons, Ltd.
100.
To overcome the difficulty of building complex models of chain drives manually, this work proposes a comprehensive methodology to build multibody models of any general chain drive automatically from a minimal set of data. The proposed procedure also evaluates the initial positions and velocities of all components of the drive that are consistent with the kinematic joints or with the contact pairs used in the model. In this methodology, all links and sprockets are represented by rigid bodies connected to each other either by ideal or by clearance revolute joints. The clearance revolute joint contact is further extended to handle the contact between the chain rollers and the exact profiles of the sprocket teeth. A suitable cylindrical continuous contact law is applied to describe the interaction on all contact pairs. One of the complexities of the computational study of roller chain drives is the large number of bodies in the system and the dynamics of the successive engagement and disengagement of the rollers with the sprockets. Each time a roller engages or disengages with a sprocket tooth, the number of rigid bodies in contact changes. The search for contact pairs is recognized as one of the most time-consuming tasks in contact analysis. This work proposes a procedure to specify the contact pairs and update them during the dynamic analysis, optimizing the computational efficiency of the contact search. The methodologies adopted result in a general computer program that is applied and demonstrated on a generic chain drive that can be used in industrial machines, vehicle engines or any other type of mechanical system.
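The broad-phase idea of restricting the roller/tooth test to rollers near each sprocket can be sketched as a simple radius filter (a deliberately simplified stand-in for the paper's incremental contact-pair update):

```python
def contact_candidates(roller_pos, sprocket_center, radius, margin):
    # Broad-phase contact search: keep only rollers inside the sprocket's
    # circumscribed circle plus a safety margin, so the expensive
    # narrow-phase roller/tooth-profile test skips the rest of the chain.
    r2 = (radius + margin) ** 2
    cx, cy = sprocket_center
    return [i for i, (x, y) in enumerate(roller_pos)
            if (x - cx) ** 2 + (y - cy) ** 2 <= r2]
```

In a full simulation this candidate list would be updated incrementally as rollers engage and disengage, rather than rebuilt from scratch every time step.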