41.
Multi-objective evolutionary algorithms (MOEAs) have received increasing interest in industry because they have proved to be powerful optimizers. Despite the great success achieved, however, MOEAs have also encountered many challenges in real-world applications. One of the main difficulties in applying MOEAs is the large number of fitness evaluations (objective calculations) that are often needed before an acceptable solution can be found. There are, in fact, several industrial situations in which fitness evaluations are computationally expensive and the time available is very short. In these applications, efficient strategies to approximate the fitness function have to be adopted, seeking a trade-off between optimization performance and efficiency. This is the case in designing a complex embedded system, where it is necessary to define an optimal architecture with respect to certain performance indexes while respecting strict time-to-market constraints. This activity, known as design space exploration (DSE), is still a great challenge for the EDA (electronic design automation) community. One of the most important bottlenecks in the overall design flow of an embedded system is simulation, which occurs at every phase of the design flow and is used to evaluate a candidate system for implementation. In this paper we focus on system-level design, proposing an extensive comparison of state-of-the-art MOEA approaches with an approach based on fuzzy approximation to speed up the evaluation of a candidate system configuration. The comparison is performed on a real case study: optimization of the performance and power dissipation of embedded architectures based on a Very Long Instruction Word (VLIW) microprocessor in a mobile multimedia application domain.
The results of the comparison demonstrate that, in terms of both performance and efficiency, the fuzzy approach outperforms the state-of-the-art MOEA strategies applied to the DSE of a parameterized embedded system.
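The surrogate-assisted evaluation idea described in this abstract can be sketched as follows. The `FuzzySurrogate` class, its confidence measure and the simulator stand-in are hypothetical illustrations of the general mechanism, not the paper's actual fuzzy approximator:

```python
class FuzzySurrogate:
    """Toy stand-in for a fuzzy fitness approximator (hypothetical API)."""
    def __init__(self):
        self.samples = {}

    def predict(self, config):
        if config in self.samples:
            return self.samples[config], 1.0   # seen before: full confidence
        return 0.0, 0.0                        # unseen: no confidence

    def update(self, config, value):
        self.samples[config] = value

def evaluate(config, simulate, surrogate, trust=0.9):
    # Use the cheap approximation when it is trusted; otherwise run the
    # expensive simulation and use the exact result to refine the surrogate.
    approx, confidence = surrogate.predict(config)
    if confidence >= trust:
        return approx
    exact = simulate(config)
    surrogate.update(config, exact)
    return exact

calls = []
def simulate(config):                # expensive simulator stand-in
    calls.append(config)
    return sum(config)

s = FuzzySurrogate()
evaluate((1, 2), simulate, s)        # first evaluation: runs the simulator
evaluate((1, 2), simulate, s)        # second: served by the surrogate
print(len(calls))                    # → 1 (the simulator ran only once)
```

The trade-off the abstract mentions lives in the `trust` threshold: lowering it saves simulations at the cost of optimizing against approximate fitness values.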
42.
Nature is a great source of inspiration for scientists, because natural systems seem to be able to find the best way to solve a given problem using simple and robust mechanisms. When studying complex natural systems, scientists usually find that simple local dynamics lead to sophisticated macroscopic structures and behaviour. Some kind of local interaction rules naturally allow the system to self-organize into an efficient and robust structure, which can easily solve different tasks. Examples of such complex systems are social networks, where a small set of basic interaction rules leads to a relatively robust and efficient communication structure. In this paper, we present PROSA, a semantic peer-to-peer (P2P) overlay network inspired by social dynamics. The way queries are forwarded and links among peers are established in PROSA resembles the way people ask other people for collaboration, help or information. Behaving as a social network of peers, PROSA naturally evolves into a small world, where all peers can be reached in a fast and efficient way. The underlying query-forwarding algorithm, based only on local choices, is both reliable and effective: peers sharing similar resources are eventually connected with each other, allowing queries to be answered successfully in a very short time. The resulting emergent structure can guarantee fast responses and good query recall.
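A minimal sketch of forwarding by purely local choices, in the spirit described above; the overlay graph, the per-peer similarity profiles and the greedy next-hop rule are illustrative assumptions, not PROSA's actual relevance measure:

```python
def forward_query(graph, similarity, start, query, ttl=4):
    # At each hop, relay the query to the neighbour whose profile is most
    # similar to the query -- a decision based only on local information.
    path, peer = [start], start
    for _ in range(ttl):
        neighbours = [n for n in graph.get(peer, []) if n not in path]
        if not neighbours:
            break
        peer = max(neighbours, key=lambda n: similarity[n].get(query, 0))
        path.append(peer)
    return path

# Toy overlay: each peer's profile scores how relevant a topic is to it.
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
similarity = {"B": {"music": 0.2}, "C": {"music": 0.9}, "D": {"music": 1.0}}
print(forward_query(graph, similarity, "A", "music"))  # → ['A', 'C', 'D']
```

Each hop inspects only the current peer's neighbour list, which is what lets the emergent small-world structure, rather than any global index, deliver the short query paths the abstract claims.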
43.
44.
We consider a production-distribution system in which a facility produces one commodity that is distributed to a set of retailers by a fleet of vehicles. Each retailer defines a maximum inventory level. The production policy, the retailers' replenishment policies and the transportation policy have to be determined so as to minimize the total system cost. The overall cost is composed of fixed and variable production costs at the facility, inventory costs at both the facility and the retailers, and routing costs. We study two types of replenishment policies: the well-known order-up-to-level (OU) policy, where the quantity shipped to each retailer is such that its inventory reaches the maximum level, and the maximum-level (ML) policy, where the quantity shipped to each retailer is such that the inventory does not exceed the maximum level. We first show that when transportation is outsourced, the problem with the OU policy is NP-hard, whereas there exists a class of instances in which the problem with the ML policy can be solved in polynomial time. We also show the worst-case performance of the OU policy with respect to the more flexible ML policy. Then, we focus on the ML policy and the design of a hybrid heuristic. We also present an exact algorithm for the solution of the problem with one vehicle. Results of computational experiments carried out on small-size instances show that the heuristic can produce high-quality solutions in a very short amount of time. Results obtained on a large set of randomly generated problem instances, aimed at comparing the two policies, are also shown.
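The difference between the two policies can be illustrated with a small numeric sketch; the function names and the single-retailer setting are ours, for illustration only:

```python
def ou_shipment(inventory, max_level):
    # Order-up-to-level (OU): ship exactly enough to reach the maximum level.
    return max_level - inventory

def ml_feasible_shipments(inventory, max_level):
    # Maximum-level (ML): any quantity is feasible as long as the resulting
    # inventory does not exceed the maximum level.
    return range(0, max_level - inventory + 1)

# A retailer with maximum level 100 and 30 units on hand:
print(ou_shipment(30, 100))                 # → 70 (OU leaves no choice)
print(max(ml_feasible_shipments(30, 100)))  # → 70 (ML may ship anything in 0..70)
```

This is the flexibility gap behind the worst-case analysis in the abstract: OU fixes the shipped quantity, while ML turns it into a decision variable the optimizer can exploit.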
45.
46.
47.
The representer theorem for kernel methods states that the solution of the associated variational problem can be expressed as the linear combination of a finite number of kernel functions. However, for non-smooth loss functions, the analytic characterization of the coefficients poses nontrivial problems. Standard approaches resort to constrained optimization reformulations which, in general, lack a closed-form solution. Herein, by a proper change of variable, it is shown that, for any convex loss function, the coefficients satisfy a system of algebraic equations in a fixed-point form, which may be directly obtained from the primal formulation. The algebraic characterization is specialized to regression and classification methods and the fixed-point equations are explicitly characterized for many loss functions of practical interest. The consequences of the main result are then investigated along two directions. First, the existence of an unconstrained smooth reformulation of the original non-smooth problem is proven. Second, in the context of SURE (Stein’s Unbiased Risk Estimation), a general formula for the degrees of freedom of kernel regression methods is derived.
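As a hedged illustration of such a fixed-point characterization, consider kernel regression with the Huber loss: in its quadratic regime the fixed point coincides with the closed-form solution of (K + 2λI)c = y. The damping, step size, toy kernel matrix and loss scaling below are our choices, not the paper's derivation:

```python
import numpy as np

def huber_prime(r, delta=1.0):
    # Derivative of the Huber loss (quadratic core, linear tails) w.r.t. r.
    return np.where(np.abs(r) <= delta, r, delta * np.sign(r))

def fixed_point_coefficients(K, y, lam=1.0, iters=200, step=0.5):
    # Damped fixed-point iteration  c <- (1-step)*c + step * L'(y - K c) / (2*lam)
    # on the coefficient equations obtained from the primal formulation.
    c = np.zeros_like(y)
    for _ in range(iters):
        c = (1 - step) * c + step * huber_prime(y - K @ c) / (2 * lam)
    return c

K = np.array([[1.0, 0.2], [0.2, 1.0]])   # toy kernel (Gram) matrix
y = np.array([0.5, -0.3])                # small targets keep Huber quadratic
c = fixed_point_coefficients(K, y)
# In the quadratic regime the fixed point solves (K + 2*lam*I) c = y:
print(np.allclose(c, np.linalg.solve(K + 2 * np.eye(2), y), atol=1e-8))  # → True
```

With larger residuals the linear tails of the loss kick in and the same iteration keeps working, which is exactly the appeal of the fixed-point form over a loss-specific closed form.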
48.
In this paper, a switched control architecture for constrained control systems is presented. The strategy is based on command governor ideas, here specialized to 'optimally' schedule switching events on the plant dynamics so as to improve control performance while keeping the computational burden low. The significance of the method mainly lies in its capability to avoid constraint violation and loss of stability regardless of any configuration change in the plant/constraint structure. To this end, the concept of model transition dwell time is used within the proposed control framework to formally define the minimum time necessary to enable a switching event under guaranteed conditions on overall stability and constraint fulfilment. Simulation results on a simple linear system and on a Cessna 182 aircraft model show the effectiveness of the proposed strategy. Copyright © 2017 John Wiley & Sons, Ltd.
49.
The use of satellites to monitor the color of the ocean requires effective removal of the atmospheric signal. This can be performed by extrapolating the aerosol optical properties in the visible from the near-infrared (NIR) spectral region, assuming that seawater is totally absorbing in this latter part of the spectrum. However, the non-negligible water-leaving radiance in the NIR that is characteristic of turbid waters may lead to an overestimate of the atmospheric radiance in the whole visible spectrum, with increasing severity at shorter wavelengths. This may result in significant errors, if not complete failure, of various algorithms for the retrieval of chlorophyll-a concentration, inherent optical properties and biogeochemical parameters of surface waters.

This paper presents the results of an inter-comparison of three methods that compensate for NIR water-leaving radiances and that are based on very different hypotheses: 1) the standard SeaWiFS algorithm (Stumpf et al., 2003; Bailey et al., 2010), based on a bio-optical model and an iterative process; 2) the algorithm developed by Ruddick et al. (2000), based on the spatial homogeneity of the NIR ratios of the aerosol and water-leaving radiances; and 3) the algorithm of Kuchinke et al. (2009), based on a fully coupled atmosphere-ocean spectral optimization inversion. They are compared using the normalized water-leaving radiance nLw in the visible. The reference source for the comparison is ground-based measurements from three AERONET-Ocean Color sites, one in the Adriatic Sea and two on the East Coast of the USA.

Based on the matchup exercise, the best overall estimates of nLw are obtained with the latest version of the SeaWiFS standard algorithm, with relative errors varying from 14.97% at λ = 490 nm to 35.27% at λ = 670 nm. The least accurate estimates are given by the algorithm of Ruddick, with relative errors between 16.36% at λ = 490 nm and 42.92% at λ = 412 nm. The algorithm of Kuchinke appears to be the most accurate at 412 nm (30.02%), 510 nm (15.54%) and 670 nm (32.32%) using its default optimization and bio-optical model coefficient settings.

Similar conclusions are obtained for the aerosol optical properties (the aerosol optical thickness τ(865) and the Ångström exponent α(510, 865)). These parameters are retrieved more accurately with the SeaWiFS standard algorithm (relative errors of 33% and 54.15% for τ(865) and α(510, 865), respectively).

A detailed analysis of the hypotheses of the methods is given to explain the differences between the algorithms. The determination of the aerosol parameters is critical for the algorithm of Ruddick et al. (2000), while the bio-optical model is critical for the algorithm of Stumpf et al. (2003) used in the standard SeaWiFS atmospheric correction, and both the aerosol and bio-optical models are critical for the coupled atmosphere-ocean algorithm of Kuchinke. The Kuchinke algorithm uses model aerosol-size distributions that differ from the real aerosol-size distributions pertaining to the measurements. In conclusion, the results show that for the atmospheric and oceanic conditions of this study, the SeaWiFS atmospheric correction algorithm is the most appropriate for estimating the marine and aerosol parameters in the given turbid-water regions.
50.
This paper presents a general method for the finite element analysis of linear mechanical systems that takes into account probability density functions whose parameters are affected by fuzziness. Within this framework, the standard perturbation-based stochastic finite element method is relaxed in order to incorporate uncertain probabilities in static, dynamic and modal analyses. General formulae are provided for assessing the (fuzzy) structural reliability, and several typologies of optimization problems (reliability-based design, robust design, robust/reliability-based design) are formalized. In doing this, credibility theory is extensively used to extract qualified crisp data from the available set of fuzzy results, so that standard optimizers can be adopted to solve the most important design problems. It is shown that the proposed methodology is a general and versatile tool for finite element analyses because it is able to consider both probabilistic and non-probabilistic sources of uncertainty, such as randomness, vagueness, ambiguity and imprecision.
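The flavour of propagating a fuzzy parameter through a structural response can be hinted at with an α-cut sketch. The triangular fuzzy stiffness, the toy deflection function and the monotonicity assumption are all our illustrative stand-ins, not the paper's stochastic finite element formulation:

```python
def alpha_cut(tri, alpha):
    # α-cut of a triangular fuzzy number (lo, peak, hi): an interval.
    lo, peak, hi = tri
    return (lo + alpha * (peak - lo), hi - alpha * (hi - peak))

def propagate(tri, response, alpha):
    # Interval image of a monotone response function over the α-cut.
    a, b = alpha_cut(tri, alpha)
    lo_u, hi_u = sorted((response(a), response(b)))
    return (lo_u, hi_u)

stiffness = (9.0, 10.0, 12.0)          # fuzzy stiffness k (triangular)
deflection = lambda k: 100.0 / k       # toy static response u = F / k
print(propagate(stiffness, deflection, 1.0))   # → (10.0, 10.0) at the peak
print(propagate(stiffness, deflection, 0.0))   # widest interval at α = 0
```

Sweeping α from 1 down to 0 yields a nested family of response intervals, i.e. a fuzzy description of the output from which crisp design quantities can then be extracted.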