32.
Edge-based and face-based smoothed finite element methods (ES-FEM and FS-FEM, respectively) are modified versions of the finite element method that achieve more accurate results and reduced sensitivity to mesh distortion, at least for linear elements. These properties make the two methods very attractive. However, their implementation in a standard finite element code is nontrivial because it requires heavy and extensive modifications to the code architecture. In this article, we present an element-based formulation of the ES-FEM and FS-FEM methods that allows the two methods to be implemented in a standard finite element code with no modifications to its architecture. Moreover, the element-based formulation makes it easy to handle any type of element, especially in 3D models where, to the best of the authors' knowledge, only tetrahedral elements are used in the FS-FEM applications found in the literature. Shape functions for non-simplex 3D elements are proposed in order to apply FS-FEM to any standard finite element.
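For linear elements, the core smoothing operation behind these methods reduces to an area-weighted average of the (piecewise-constant) element gradients over each smoothing domain. The following is a minimal, hypothetical Python sketch of that averaging step, not the authors' formulation; the function name and the example areas are invented for illustration.

```python
# Hypothetical sketch: the smoothed gradient over an edge-based smoothing
# domain is the area-weighted average of the constant gradients of the
# elements sharing that edge.

def smoothed_gradient(grads, areas):
    """grads: per-element gradient vectors; areas: the portion of each
    element's area contributing to the smoothing domain."""
    total = sum(areas)
    n = len(grads[0])
    return [sum(a * g[i] for a, g in zip(areas, grads)) / total
            for i in range(n)]

# Two linear triangles sharing an edge, each contributing a third of its area
g = smoothed_gradient([[1.0, 0.0], [0.0, 1.0]], [2.0 / 3.0, 1.0 / 3.0])
```

The element-based formulation described in the abstract would assemble such smoothed quantities element by element rather than by looping over edge or face domains.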
33.
Model-based performance evaluation methods for software architectures can help architects to assess design alternatives and save costs for late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the effort needed for modelling are heavily influenced by human factors, which are so far hardly understood empirically. Can component-based methods deliver performance predictions of comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users. We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for comparison. In the second experiment, we compared the accuracy and effort of this monolithic method and the component-based method for the model creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected based on the resulting artefacts, questionnaires and screen recording. They were analysed using hypothesis testing, linear models, and analysis of variance. For the monolithic methods, we found that using SPE and CP resulted in accurate predictions, while umlPSI produced over-estimates. Comparing the component-based method PCM with SPE, we found that creating reusable models using PCM takes more (but not drastically more) time than using SPE and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because the effort to reuse can be explained by a model that is independent of the inner complexity of a component. The tasks performed in our experiments reflect only a subset of the actual activities when applying model-based performance evaluation methods in a software development process. Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort for component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.
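The hypothesis-testing step mentioned above can be illustrated with a two-sample comparison of prediction errors between two methods. This is a generic sketch, not the study's actual analysis: the sample data are invented, and only the Welch t statistic is computed (the study's full analysis also used linear models and ANOVA).

```python
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)  # sample variance
    return (ma - mb) / ((va / len(a) + vb / len(b)) ** 0.5)

# Hypothetical prediction-error samples (percent deviation) for two methods
spe = [8.0, 12.0, 10.0, 9.0, 11.0]
pcm = [9.0, 13.0, 11.0, 10.0, 12.0]
t = welch_t(spe, pcm)
```

A |t| well below the critical value for the relevant degrees of freedom would indicate no significant accuracy difference between the two methods, consistent with the finding reported above.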
34.
Confronted with decreasing margins and rising customer demand for integrated solutions, manufacturing companies integrate complementary services into their portfolios. Offering value bundles (consisting of services and physical goods) takes place in integrated product–service systems, spanning the coordinated design and delivery of services and physical goods for customers. Conceptual modeling is an established approach to support and guide such efforts. Using a framework for the design and delivery of value bundles as an analytical lens, this study evaluates the current support of reference models and modeling languages for setting up conceptual models for an integrated design and delivery of value bundles. Subsequently, the design of modeling languages and reference models that fit the requirements of conceptual models in product–service systems is presented as an upcoming challenge in service research. To guide further research, first steps are proposed by exemplarily integrating reference models and modeling languages stemming from the service and manufacturing domains.
35.
A two-layer architecture for dynamic real-time optimization (or nonlinear model predictive control (NMPC) with an economic objective) is presented, where the solution of the dynamic optimization problem is computed on two time-scales. On the upper layer, a rigorous optimization problem is solved with an economic objective function at a slow time-scale, which captures slow trends in process uncertainties. On the lower layer, a fast neighboring-extremal controller tracks the trajectory in order to deal with fast disturbances acting on the process. Compared to a single-layer architecture, the two-layer architecture is able to address control systems with complex models leading to high computational load, since the rigorous optimization problem can be solved at a slower rate than the process sampling time. Furthermore, solving a new rigorous optimization problem is not necessary at each sampling time if the process has rather slow dynamics compared to the disturbance dynamics. The two-layer control strategy is illustrated with a simulated case study of an industrial polymerization process.
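The two-timescale structure can be sketched as a simulation loop in which the slow layer recomputes a reference only every few samples while the fast layer corrects at every sample. Everything below is illustrative: the scalar dynamics, gains, and the stubbed "rigorous optimization" stand in for the paper's polymerization model and NMPC solver.

```python
# Illustrative two-layer control loop (all dynamics and gains are made-up
# scalars, not the paper's model).

def two_layer_control(n_steps, slow_period, disturbance):
    x, u_ref, K = 0.0, 1.0, 0.5
    trajectory = []
    for k in range(n_steps):
        if k % slow_period == 0:
            # Upper layer: "rigorous" economic optimization (stub), run at a
            # slower rate than the process sampling time.
            u_ref = 1.0
        # Lower layer: fast neighboring-extremal-style correction that tracks
        # the optimal trajectory despite fast disturbances.
        u = u_ref - K * x
        x = 0.8 * x + u + disturbance(k)   # toy first-order process
        trajectory.append(x)
    return trajectory

traj = two_layer_control(20, 5, lambda k: 0.0)
```

With no disturbance, the closed loop settles at the fixed point of x = 0.3x + 1; a real neighboring-extremal controller would instead apply sensitivity-based corrections around the optimized trajectory.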
36.
We discuss an optimization model for the line planning problem in public transport in order to minimize operation costs while guaranteeing a certain level of quality of service, in terms of available transport capacity. We analyze the computational complexity of this problem for tree network topologies as well as several categories of line operations that are important for the Quito Trolebús system. In practice, these instances can be solved quite well, and significant optimization potential can be demonstrated.
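The flavor of the problem can be shown with a tiny brute-force instance: pick a frequency for each candidate line so that every edge's capacity requirement is met at minimum operating cost. This toy instance on a path network (a special case of a tree) is entirely made up and ignores the line-operation categories the paper studies; real instances are solved with integer programming.

```python
from itertools import product

def plan_lines(lines, costs, demand, max_freq=3):
    """Exhaustively choose per-line frequencies meeting per-edge demand
    at minimum total cost (cost = frequency * per-frequency line cost)."""
    n_edges = len(demand)
    best = (float("inf"), None)
    for freqs in product(range(max_freq + 1), repeat=len(lines)):
        cap = [0] * n_edges
        for f, line in zip(freqs, lines):
            for e in line:
                cap[e] += f
        if all(c >= d for c, d in zip(cap, demand)):
            cost = sum(f * c for f, c in zip(freqs, costs))
            if cost < best[0]:
                best = (cost, freqs)
    return best

# Two candidate lines covering edges {0,1} and {1,2}; costs 3 and 2 per unit
# of frequency; edge 1 needs twice the capacity of the outer edges.
cost, freqs = plan_lines([[0, 1], [1, 2]], [3, 2], demand=[1, 2, 1])
```

Here both lines must run once: each outer edge is covered by only one line, and their overlap on the middle edge then satisfies its higher demand.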
37.
A miniaturized ceramic differential scanning calorimeter (MC-DSC) with integrated oven and crucible is presented. Despite its small size of only 11 mm × 39 mm × 1.5 mm, all functions of a conventional DSC apparatus are integrated in this novel device, including the oven. The MC-DSC is fully manufactured in thick-film and green glass ceramic tape-based low temperature co-fired ceramics (LTCC) technology. Therefore, production costs are considered to be low. Initial results using indium as a sample material show a good dynamic performance of the MC-DSC. The full width at half maximum of the melting peak is 2.4 °C (sample mass approx. 11 mg, heating rate approx. 50 °C/min). Repeatability of the indium melting point is within ±0.02 °C. The melting peak area increases linearly with the sample mass up to at least 26 mg. Simulations of a strongly simplified finite element model of the MC-DSC are in good agreement with measurement results, allowing model-based prediction of its basic characteristics.
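The two peak metrics quoted above (full width at half maximum and peak area) can be extracted from a heat-flow curve as sketched below. The Gaussian peak shape and the sampling grid are illustrative assumptions, not the instrument's actual signal; only the 2.4 °C FWHM and the indium melting point near 156.6 °C come from the abstract.

```python
import math

def peak_metrics(temps, signal):
    """FWHM and area of a single peak on a uniform temperature grid."""
    peak = max(signal)
    above = [t for t, s in zip(temps, signal) if s >= peak / 2]
    fwhm = max(above) - min(above)
    dT = temps[1] - temps[0]
    area = sum(signal) * dT          # simple rectangle-rule integration
    return fwhm, area

# Simulated Gaussian melting peak with a 2.4 degC FWHM centered at the
# indium melting point (~156.6 degC).
sigma = 2.4 / (2 * math.sqrt(2 * math.log(2)))
temps = [150 + 0.01 * i for i in range(2000)]      # 150..170 degC grid
signal = [math.exp(-((t - 156.6) ** 2) / (2 * sigma ** 2)) for t in temps]
fwhm, area = peak_metrics(temps, signal)
```

Because the area scales linearly with peak amplitude, a linear peak-area-versus-sample-mass relation like the one reported is exactly what this kind of integration would show.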
38.
Urea-SCR systems (selective catalytic reduction) are required to meet future NOx emission standards of heavy-duty and light-duty vehicles. Controlling the SCR system and monitoring the catalysts' functionality are key to achieving low emissions. The novel idea of this study is to apply commercially available SCR catalyst materials based on vanadia-doped tungsten-titania as gas-sensing films for impedimetric thick-film exhaust gas sensor devices. The dependence of the impedance on the surrounding gas atmosphere, especially on the concentrations of NH3 and NO2, is investigated, as well as cross interferences from other components of the exhaust. The sensors provide good NH3 sensitivity at 500 °C. The sensor behavior is explained in light of the literature combining the fields of catalysts and semiconducting gas sensors.
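Turning such an impedimetric sensor's reading into a gas concentration typically involves a calibration curve. The sketch below interpolates log-impedance against log-concentration between calibration points; the calibration values, the decreasing-impedance assumption, and the function name are all invented for illustration and are not data from this study.

```python
import math

def nh3_from_impedance(z_ohm, calibration):
    """calibration: (impedance_ohm, ppm) pairs sorted by descending
    impedance (impedance assumed to fall as NH3 concentration rises)."""
    for (z1, c1), (z2, c2) in zip(calibration, calibration[1:]):
        if z2 <= z_ohm <= z1:
            # Log-log interpolation between the bracketing points.
            frac = (math.log(z1) - math.log(z_ohm)) / (math.log(z1) - math.log(z2))
            return math.exp(math.log(c1) + frac * (math.log(c2) - math.log(c1)))
    raise ValueError("impedance outside calibrated range")

# Made-up calibration: one decade of impedance per decade of concentration
cal = [(1e6, 1.0), (1e5, 10.0), (1e4, 100.0)]
ppm = nh3_from_impedance(3.16e5, cal)
```

Cross-interferences like the NO2 dependence mentioned above would in practice require either compensation terms or a multi-sensor arrangement on top of such a single-gas calibration.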
39.
Physically Guided Animation of Trees
This paper presents a new method to animate the interaction of a tree with wind both realistically and in real time. The main idea is to combine statistical observations with physical properties in two major parts of tree animation. First, the interaction of a single branch with the forces applied to it is approximated by a novel efficient two step nonlinear deformation method, allowing arbitrary continuous deformations and circumventing the need to segment a branch to model its deformation behavior. Second, the interaction of wind with the dynamic system representing a tree is statistically modeled. By precomputing the response function of branches to turbulent wind in frequency space, the motion of a branch can be synthesized efficiently by sampling a 2D motion texture.
Using a hierarchical form of vertex displacement, both methods can be combined in a single vertex shader, fully leveraging the power of modern GPUs to realistically animate thousands of branches and tens of thousands of leaves at practically no cost.
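The precomputed-spectrum idea can be sketched as follows: a turbulent wind spectrum is filtered through a branch's oscillator transfer function in frequency space, and the motion is then synthesized as a sum of randomly phased cosines (a 1D stand-in for sampling the paper's 2D motion texture). All spectra and parameters below are invented for illustration.

```python
import math, random

def synthesize_motion(n_samples, dt, f0=1.5, damping=0.1, seed=0):
    """Synthesize a branch motion signal from a precomputed response
    spectrum: (crude 1/f wind spectrum) x (oscillator gain |H(f)|)."""
    rng = random.Random(seed)
    freqs = [0.1 * (k + 1) for k in range(50)]            # Hz
    wind = [1.0 / f for f in freqs]                        # toy turbulence
    # Oscillator gain peaks near the branch's natural frequency f0.
    gain = [1.0 / math.sqrt((1 - (f / f0) ** 2) ** 2 +
                            (2 * damping * f / f0) ** 2) for f in freqs]
    phases = [rng.uniform(0, 2 * math.pi) for _ in freqs]
    amps = [w * g for w, g in zip(wind, gain)]
    return [sum(a * math.cos(2 * math.pi * f * i * dt + p)
                for a, f, p in zip(amps, freqs, phases))
            for i in range(n_samples)]

motion = synthesize_motion(200, 0.02)
```

Because the expensive part (the response spectrum) is precomputed, per-frame synthesis is just a table-driven sum, which is what makes a vertex-shader implementation feasible.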
40.
Top-k query processing is a fundamental building block for efficient ranking in a large number of applications. Efficiency is a central issue, especially for distributed settings, when the data is spread across different nodes in a network. This paper introduces novel optimization methods for top-k aggregation queries in such distributed environments. The optimizations can be applied to all algorithms that fall into the frameworks of the prior TPUT and KLEE methods. The optimizations address three degrees of freedom: 1) hierarchically grouping input lists into top-k operator trees and optimizing the tree structure, 2) computing data-adaptive scan depths for different input sources, and 3) data-adaptive sampling of a small subset of input sources in scenarios with hundreds or thousands of query-relevant network nodes. All optimizations are based on a statistical cost model that utilizes local synopses, e.g., in the form of histograms, efficiently computed convolutions, and estimators based on order statistics. The paper presents comprehensive experiments, with three different real-life datasets and using the ns-2 network simulator for a packet-level simulation of a large Internet-style network.
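For context, the baseline TPUT framework that these optimizations build on proceeds in three phases: collect each node's local top-k, derive a uniform score threshold from the k-th highest partial sum, and then fetch candidates above that threshold. The sketch below is a simplified in-memory rendition with made-up data, not the paper's optimized algorithms.

```python
# Simplified TPUT (Three-Phase Uniform Threshold) sketch for distributed
# top-k sum aggregation; each "node" is modeled as an item->score dict.

def tput_topk(node_lists, k):
    m = len(node_lists)
    # Phase 1: each node reports its local top-k; compute partial sums.
    partial = {}
    for lst in node_lists:
        for item, score in sorted(lst.items(), key=lambda x: -x[1])[:k]:
            partial[item] = partial.get(item, 0) + score
    tau = sorted(partial.values(), reverse=True)[k - 1]
    threshold = tau / m               # uniform per-node threshold
    # Phase 2: nodes send every item scoring above the threshold.
    candidates = set()
    for lst in node_lists:
        candidates |= {i for i, s in lst.items() if s > threshold}
    # Phase 3: fetch exact scores for candidates and rank them.
    exact = {i: sum(lst.get(i, 0) for lst in node_lists) for i in candidates}
    return sorted(exact.items(), key=lambda x: -x[1])[:k]

nodes = [{"a": 9, "b": 7, "c": 1}, {"a": 8, "c": 6, "b": 2}]
top = tput_topk(nodes, 2)
```

The paper's optimizations refine exactly the knobs visible here: how lists are grouped into operator trees, how deep each source is scanned in phase 1, and which sources are contacted at all.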