  Fee-based full text: 2,087 articles
  Free: 95 articles
  Domestic free: 5 articles
Electrical engineering: 24 articles
General: 16 articles
Chemical industry: 593 articles
Metalworking: 45 articles
Machinery and instrumentation: 33 articles
Building science: 146 articles
Mining engineering: 3 articles
Energy and power: 31 articles
Light industry: 171 articles
Water conservancy: 19 articles
Petroleum and natural gas: 3 articles
Radio and electronics: 139 articles
General industrial technology: 402 articles
Metallurgy: 192 articles
Nuclear technology: 16 articles
Automation technology: 354 articles
  2023: 24 articles
  2022: 50 articles
  2021: 71 articles
  2020: 40 articles
  2019: 47 articles
  2018: 44 articles
  2017: 42 articles
  2016: 54 articles
  2015: 47 articles
  2014: 76 articles
  2013: 100 articles
  2012: 100 articles
  2011: 161 articles
  2010: 105 articles
  2009: 122 articles
  2008: 115 articles
  2007: 88 articles
  2006: 90 articles
  2005: 78 articles
  2004: 74 articles
  2003: 61 articles
  2002: 58 articles
  2001: 49 articles
  2000: 43 articles
  1999: 44 articles
  1998: 44 articles
  1997: 35 articles
  1996: 24 articles
  1995: 25 articles
  1994: 35 articles
  1993: 21 articles
  1992: 12 articles
  1991: 25 articles
  1990: 17 articles
  1989: 16 articles
  1988: 8 articles
  1987: 10 articles
  1986: 6 articles
  1985: 10 articles
  1984: 16 articles
  1983: 17 articles
  1982: 6 articles
  1981: 6 articles
  1978: 5 articles
  1977: 6 articles
  1976: 7 articles
  1970: 6 articles
  1969: 6 articles
  1968: 10 articles
  1967: 5 articles
A total of 2,187 results were found (search time: 15 ms).
31.
We developed a decision support system (DSS) for sustainable river basin management in the German Elbe catchment (~100,000 km²), called Elbe-DSS. The system integrates georeferenced simulation models and related data sets behind a user-friendly interface and includes a library function. The design and content of the DSS were developed in close cooperation with end users and stakeholders. The user can evaluate the effectiveness of management actions such as reforestation, improved treatment-plant technology, or the application of buffer strips, under external constraints on climate, demographic, and agro-economic change, against water management objectives such as water quality standards and discharge control. The paper (i) describes the conceptual design of the Elbe-DSS, (ii) demonstrates the applicability of the integrated catchment model by running three different management options for phosphate discharge reduction (reforestation, erosion control, and ecological farming) under the assumption of regional climate change based on IPCC scenarios, (iii) evaluates the effectiveness of the management options, and (iv) provides some lessons for DSS development in similar settings. The georeferenced approach identifies local inputs in sub-catchments and their impact on overall water quality, which helps the user prioritize management actions in terms of spatial distribution and effectiveness.
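The scenario-evaluation workflow described above can be pictured with a minimal sketch. All sub-catchment names, loads, and reduction factors below are hypothetical placeholders, not values from the Elbe-DSS; the sketch only illustrates how per-sub-catchment phosphate loads might be aggregated and compared across management options under a climate scenario.

```python
# Minimal, hypothetical sketch of DSS-style scenario comparison.
# All numbers are illustrative placeholders, not Elbe-DSS data.

# Baseline phosphate load per sub-catchment (t P/yr), hypothetical.
baseline_load = {"sub_A": 120.0, "sub_B": 85.0, "sub_C": 240.0}

# Assumed fractional load reduction of each management option per sub-catchment.
options = {
    "reforestation":      {"sub_A": 0.10, "sub_B": 0.05, "sub_C": 0.08},
    "erosion_control":    {"sub_A": 0.15, "sub_B": 0.12, "sub_C": 0.10},
    "ecological_farming": {"sub_A": 0.20, "sub_B": 0.18, "sub_C": 0.25},
}

CLIMATE_FACTOR = 1.07  # assumed +7% runoff-driven load under a climate scenario

def total_load(option: str) -> float:
    """Aggregate the catchment-wide load for one management option."""
    return sum(
        baseline_load[sc] * CLIMATE_FACTOR * (1.0 - options[option][sc])
        for sc in baseline_load
    )

for name in options:
    print(f"{name}: {total_load(name):.1f} t P/yr")
```

Ranking the options by aggregated load mirrors step (iii) of the paper: the georeferenced data determine which sub-catchments dominate the total, and hence where an action pays off most.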
32.
33.
We examined the high-precision deposition of toner and polymer microparticles with a typical size of approximately 10 µm on electrode arrays with electrodes of 100 µm and below, using custom-made microelectronic chips. Selective desorption of redundant particles was employed to obtain a given particle pattern from preadsorbed particle layers. Microparticle desorption was governed by the dielectrophoretic attracting forces generated by individual pixel electrodes, the tangential detaching forces of an air flow, and the adhesion forces on the microchip surface. A theoretical analysis of the acting forces showed that, without pixel voltage, the tangential force applied for particle detachment exceeds the particle adhesion force. When the pixel voltage is switched on, however, the sum of the attracting forces is larger than the tangential detaching force, which is crucial for desorption efficiency. In our experiments, appropriately large dielectrophoretic forces were achieved by applying high voltages of up to 100 V to the pixel electrodes. In addition, the electrode geometry on the chip surface as well as the particle size influenced the desorption quality. We further demonstrated the compatibility of this procedure with complementary metal-oxide-semiconductor (CMOS) chip technology, which should allow an easy technical implementation for high-resolution microparticle deposition.
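The force balance described above reduces to a simple inequality: a preadsorbed particle is retained only if the dielectrophoretic attraction plus adhesion exceeds the tangential air-flow drag, and is removed when the pixel voltage is off and drag exceeds adhesion alone. A minimal sketch, with purely illustrative force magnitudes (the abstract does not report these values):

```python
# Scalar force balance for selective particle desorption (illustrative).
# All magnitudes are hypothetical; only the inequality logic follows the text.

F_ADHESION = 2.0e-9   # N, assumed particle-surface adhesion
F_DRAG     = 5.0e-9   # N, assumed tangential air-flow drag
                      # (chosen > F_ADHESION so unpowered pixels shed particles)

def f_dep(voltage: float) -> float:
    """Dielectrophoretic attraction; scales with V**2 (gradient of E**2)."""
    K = 1.0e-12  # N/V^2, assumed geometry/polarizability constant
    return K * voltage**2

def particle_retained(voltage: float) -> bool:
    """True if attraction plus adhesion beats the detaching drag force."""
    return f_dep(voltage) + F_ADHESION >= F_DRAG

print(particle_retained(0.0))    # False: drag exceeds adhesion alone
print(particle_retained(100.0))  # True: DEP force dominates at 100 V
```

The quadratic voltage dependence explains why voltages as high as 100 V were needed: halving the voltage cuts the attracting force by a factor of four.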
34.
Materials with controllable multifunctional abilities for optical imaging (OI) and magnetic resonance imaging (MRI) that can also be used in photodynamic therapy are very interesting for future applications. Mesoporous TiO2 sub-micrometer particles are doped with gadolinium to improve photoluminescence functionality and spin relaxation for MRI, with the added benefit of enhanced generation of reactive oxygen species (ROS). The Gd-doped TiO2 exhibits red emission at 637 nm that is beneficial for OI and significantly improves MRI contrast through a beneficial decrease in spin–lattice and spin–spin relaxation times. Density functional theory calculations show that Gd3+ ions introduce impurity energy levels inside the bandgap of anatase TiO2 and also create dipoles that promote charge separation and reduce electron–hole recombination in the doped lattice. The Gd-doped TiO2 nanobeads (NBs) show an enhanced ability for ROS generation, monitored via •OH radical photogeneration, in comparison with undoped TiO2 nanobeads and TiO2 P25, for Gd doping up to 10%. Cellular internalization and biocompatibility of TiO2@xGd NBs are tested in vitro on MG-63 human osteosarcoma cells, showing full biocompatibility. After photoactivation of the particles, an anticancer effect mediated by ROS photogeneration is observed after just 3 min of irradiation.
35.
36.
Edge-based and face-based smoothed finite element methods (ES-FEM and FS-FEM, respectively) are modified versions of the finite element method that achieve more accurate results and reduced sensitivity to mesh distortion, at least for linear elements. These properties make the two methods very attractive. However, their implementation in a standard finite element code is nontrivial, because it requires heavy and extensive modifications to the code architecture. In this article, we present an element-based formulation of the ES-FEM and FS-FEM methods that allows the two methods to be implemented in a standard finite element code with no modifications to its architecture. Moreover, the element-based formulation easily accommodates any type of element, which matters especially in 3D models where, to the best of the authors' knowledge, only tetrahedral elements have been used in the FS-FEM applications found in the literature. Shape functions for non-simplex 3D elements are proposed in order to apply FS-FEM to any standard finite element.
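The core of both methods is strain smoothing: the strain-displacement matrix of each smoothing domain is a volume-weighted average of the B-matrices of the adjacent elements. A minimal sketch, assuming linear tetrahedra with constant element B-matrices, where each tetrahedron contributes one quarter of its volume to the smoothing domain of each of its four faces (the function names and data layout are ours, not the article's):

```python
import numpy as np

def fs_fem_smoothed_B(face_to_elems, B, V):
    """Face-based smoothed strain-displacement matrices (FS-FEM, linear tets).

    face_to_elems: dict face_id -> adjacent element ids (1 for boundary faces,
                   2 for interior faces)
    B: dict elem_id -> (6, n_dof) constant strain-displacement matrix,
       assumed already mapped to a shared local DOF numbering per face
    V: dict elem_id -> element volume

    Each adjacent tetrahedron contributes V_e / 4 to the face's smoothing
    domain (the sub-tetrahedron between the face and the element centroid).
    """
    smoothed = {}
    for face, elems in face_to_elems.items():
        Vs = sum(V[e] / 4.0 for e in elems)                 # domain volume
        Bs = sum((V[e] / 4.0) * B[e] for e in elems) / Vs   # weighted average
        smoothed[face] = (Bs, Vs)
    return smoothed

# Stiffness is then assembled over smoothing domains instead of elements:
# K += Vs * Bs.T @ D @ Bs  for each face, with D the material matrix.
```

The element-based formulation advocated in the article repackages exactly these face (or edge) contributions into element-level quantities, which is what lets a standard assembly loop stay untouched.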
37.
Model-based performance evaluation methods for software architectures can help architects assess design alternatives and save the costs of late life-cycle performance fixes. A recent trend is component-based performance modelling, which aims at creating reusable performance models; a number of such methods have been proposed during the last decade. Their accuracy and the modelling effort they require are heavily influenced by human factors, which so far are hardly understood empirically. Do component-based methods enable performance predictions of comparable accuracy while saving effort in a reuse scenario? We examined three monolithic methods (SPE, umlPSI, Capacity Planning (CP)) and one component-based performance evaluation method (PCM) with regard to their accuracy and effort from the viewpoint of method users. We conducted a series of three experiments (with different levels of control) involving 47 computer science students. In the first experiment, we compared the applicability of the monolithic methods in order to choose one of them for comparison. In the second experiment, we compared the accuracy and effort of this monolithic method and the component-based method for the model creation case. In the third, we studied the effort reduction from reusing component-based models. Data were collected from the resulting artefacts, questionnaires, and screen recordings, and analysed using hypothesis testing, linear models, and analysis of variance. For the monolithic methods, we found that SPE and CP yielded accurate predictions, while umlPSI produced over-estimates. Comparing the component-based method PCM with SPE, we found that creating reusable models with PCM takes more (but not drastically more) time than with SPE, and that participants can create accurate models with both techniques. Finally, we found that reusing PCM models can save time, because the reuse effort can be explained by a model that is independent of the inner complexity of a component. The tasks performed in our experiments reflect only a subset of the actual activities involved in applying model-based performance evaluation methods in a software development process. Our results indicate that sufficient prediction accuracy can be achieved with both monolithic and component-based methods, and that the higher effort of component-based performance modelling will indeed pay off when the component models incorporate and hide a sufficient amount of complexity.
38.
Confronted with decreasing margins and rising customer demand for integrated solutions, manufacturing companies are integrating complementary services into their portfolios. Offering value bundles (consisting of services and physical goods) takes place in integrated product–service systems, spanning the coordinated design and delivery of services and physical goods for customers. Conceptual modeling is an established approach to supporting and guiding such efforts. Using a framework for the design and delivery of value bundles as an analytical lens, this study evaluates how well current reference models and modeling languages support the construction of conceptual models for an integrated design and delivery of value bundles. Designing modeling languages and reference models that meet the requirements of conceptual models in product–service systems is then presented as an upcoming challenge in service research. To guide further research, first steps are proposed by integrating, as an example, reference models and modeling languages stemming from the service and manufacturing domains.
39.
A two-layer architecture for dynamic real-time optimization (or nonlinear model predictive control (NMPC) with an economic objective) is presented, in which the solution of the dynamic optimization problem is computed on two time-scales. On the upper layer, a rigorous optimization problem with an economic objective function is solved at a slow time-scale, capturing slow trends in process uncertainties. On the lower layer, a fast neighboring-extremal controller tracks the resulting trajectory in order to reject fast disturbances acting on the process. Compared to a single-layer architecture, the two-layer architecture can handle control systems whose complex models lead to a high computational load, since the rigorous optimization problem can be solved at a slower rate than the process sampling time. Furthermore, a new rigorous optimization problem need not be solved at each sampling time if the process dynamics are slow compared to the disturbance dynamics. The two-layer control strategy is illustrated with a simulated case study of an industrial polymerization process.
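The division of labour between the two layers can be sketched as a simple control loop: the upper layer re-solves the economic problem only every N samples, while the lower layer applies a cheap first-order feedback correction at every sample. The sketch below is a schematic of that structure with placeholder linear dynamics and gains, not the paper's polymerization model or its neighboring-extremal update:

```python
import numpy as np

# Placeholder plant and controller data (illustrative only).
A = np.array([[0.95, 0.1], [0.0, 0.9]])
Bm = np.array([[0.0], [0.1]])
K = np.array([[0.5, 1.0]])        # assumed feedback gain of the fast layer
N_SLOW = 20                       # upper layer runs every 20 samples

def solve_economic_ocp(x0):
    """Upper layer: rigorous economic optimization (stub).

    In the real architecture this is an expensive dynamic optimization;
    here it just returns a placeholder reference trajectory."""
    x_ref = np.tile(x0, (N_SLOW, 1))
    u_ref = np.zeros((N_SLOW, 1))
    return x_ref, u_ref

x = np.array([1.0, 0.0])
for k in range(100):
    if k % N_SLOW == 0:                     # slow time-scale: re-optimize
        x_ref, u_ref = solve_economic_ocp(x)
    i = k % N_SLOW
    # Fast time-scale: feedback correction around the optimal trajectory.
    u = u_ref[i] + K @ (x_ref[i] - x)
    w = 0.01 * np.random.randn(2)           # fast disturbance
    x = A @ x + (Bm @ u).ravel() + w        # plant step
```

The computational argument of the paper is visible in the structure: the expensive call happens once per N_SLOW samples, while the per-sample work is a single matrix-vector product.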
40.
We discuss an optimization model for the line planning problem in public transport that minimizes operating costs while guaranteeing a certain level of service quality in terms of available transport capacity. We analyze the computational complexity of this problem for tree network topologies as well as for several categories of line operation that are important for the Quito Trolebús system. In practice, these instances can be solved quite well, and significant optimization potential can be demonstrated.
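A line planning model of this kind typically chooses an integer frequency for each candidate line so that total cost is minimized while the summed vehicle capacity on every network edge covers the demand. A minimal sketch with made-up lines, costs, and demands (none of these numbers come from the Trolebús study), using SciPy's HiGHS-based MILP interface:

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical instance: 3 candidate lines over 4 edges of a tree network.
cost = np.array([5.0, 3.0, 4.0])        # cost per unit frequency of each line
CAP = 100                               # passengers per vehicle run (assumed)
covers = np.array([[1, 0, 1],           # covers[e, l] = 1 if line l uses edge e
                   [1, 1, 0],
                   [0, 1, 1],
                   [1, 0, 0]])
demand = np.array([400, 250, 300, 150]) # passengers per hour on each edge

# Capacity constraints: CAP * covers @ f >= demand,
# rewritten for linprog as -CAP * covers @ f <= -demand.
res = linprog(
    c=cost,
    A_ub=-CAP * covers, b_ub=-demand,
    bounds=[(0, 10)] * 3,               # frequency limits per line
    integrality=np.ones(3),             # integer frequencies (SciPy >= 1.9)
    method="highs",
)
print(res.x, res.fun)                   # optimal frequencies and total cost
```

On tree topologies like this one, each edge constraint involves only the lines whose paths pass through it, which is what makes the instances tractable in practice.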