151.
A density-based partitioning strategy is proposed for large domain networks to address the scalability issue found in autonomic networks, taking the autonomic Quality of Service (QoS) management context as a scenario. The approach focuses on obtaining dense network partitions, i.e. partitions offering more paths for a given set of vertices in the domain. It is demonstrated that dense partitions improve autonomic processing scalability, for instance by reducing the complexity of the routing process. The solution seeks a favourable trade-off between the execution time of the autonomic partitioning algorithm and path selection quality in large domains. Simulation scenarios for path selection execution time are presented and discussed. The authors argue that autonomic networks can benefit from the proposed dense partition approach, achieving scalable, efficient and near real-time support for autonomic management systems.
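As a rough illustration of the density criterion described above (a sketch only, not the authors' partitioning algorithm), the following scores a candidate partition of an undirected graph by the edge density of each block; the function names and the toy graph are hypothetical.

# Minimal sketch: score a graph partition by per-block edge density.
# Assumes an undirected graph given as an adjacency set; illustrative only.
from itertools import combinations

def block_density(adj, block):
    """Edge density of one block: internal edges / possible edges."""
    nodes = list(block)
    if len(nodes) < 2:
        return 0.0
    internal = sum(1 for u, v in combinations(nodes, 2) if v in adj[u])
    possible = len(nodes) * (len(nodes) - 1) / 2
    return internal / possible

def partition_density(adj, blocks):
    """Average block density; a denser partition keeps more intra-block paths."""
    return sum(block_density(adj, b) for b in blocks) / len(blocks)

# Toy usage: two triangles joined by a single bridge edge.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(partition_density(adj, [{0, 1, 2}, {3, 4, 5}]))  # dense split  -> 1.0
print(partition_density(adj, [{0, 1, 3}, {2, 4, 5}]))  # poor split   -> about 0.33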
152.
In this article, the authors compare offshore outsourcing and the internal offshoring of software development. Empirical evidence is presented from a case study conducted in five companies. Based on a detailed literature review, a framework was developed that guided the authors' analysis of the differences in the challenges faced by companies and the patterns of evolution in the practice of software development in each business model.
153.
Both image compression based on color quantization and image segmentation are typical tasks in the field of image processing. Several techniques based on splitting algorithms or cluster analysis have been proposed in the literature. Self-organizing maps have also been applied to these problems, although with some limitations due to their fixed network architecture and the lack of representation of hierarchical relations among data. In this paper, both problems are addressed using growing hierarchical self-organizing models. An advantage of these models is their hierarchical architecture, which is more flexible in adapting to the input data and reflects the inherent hierarchical relations among the data. Comparative results are provided for image compression and image segmentation. Experimental results show that the proposed approach is promising for image processing and demonstrate the power of the hierarchical information provided by the proposed model.
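A minimal software sketch of color quantization with a plain, fixed-size self-organizing map is given below to illustrate the codebook idea; it is not the growing hierarchical model proposed in the paper, and the map size and learning parameters are arbitrary.

# Minimal sketch: color quantization with a flat self-organizing map (SOM).
# Illustrates the codebook idea only; the paper uses a growing hierarchical SOM.
import numpy as np

def train_som(pixels, n_units=16, epochs=5, lr=0.5, rng=np.random.default_rng(0)):
    """pixels: (N, 3) RGB array in [0, 1]; returns a (n_units, 3) codebook."""
    codebook = rng.random((n_units, 3))
    for epoch in range(epochs):
        sigma = max(n_units / 2 * (1 - epoch / epochs), 1.0)  # shrinking neighbourhood
        alpha = lr * (1 - epoch / epochs)                      # decaying learning rate
        for p in rng.permutation(pixels):
            bmu = np.argmin(np.linalg.norm(codebook - p, axis=1))
            dist = np.abs(np.arange(n_units) - bmu)            # 1-D map topology
            h = np.exp(-(dist ** 2) / (2 * sigma ** 2))[:, None]
            codebook += alpha * h * (p - codebook)
    return codebook

def quantize(pixels, codebook):
    """Map every pixel to its nearest codebook colour (the compressed image)."""
    idx = np.argmin(((pixels[:, None, :] - codebook[None]) ** 2).sum(-1), axis=1)
    return codebook[idx]

# Toy usage on random "image" data.
img = np.random.default_rng(1).random((500, 3))
cb = train_som(img)
print(quantize(img, cb).shape)  # (500, 3), but only 16 distinct colours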
154.
In this paper, we report on our experience with the application of validated models to assess the performance, reliability, and adaptability of a complex mission-critical system that is being developed to dynamically monitor and control the position of an oil-drilling platform. We present real-time modeling results showing that all tasks are schedulable. We performed a stochastic analysis of the distribution of task execution time as a function of the number of system interfaces, and we report on the variability of task execution times for the expected system configurations. In addition, we have executed a system library for an important task inside the performance model simulator, and we report on the measured algorithm convergence as a function of the number of vessel thrusters. We have also studied the adaptability of the system architecture by comparing the documented system architecture with the implemented source code; we report on the adaptability findings and the recommendations we were able to provide to the system’s architect. Finally, we have developed models of hardware and software reliability, and we report hardware and software reliability results based on the evaluation of the system architecture.
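Purely as a pointer to what a task-schedulability check involves (the paper relies on its own validated performance models, not this test), the classical rate-monotonic utilization bound can be evaluated as follows; the task set below is hypothetical.

# Minimal sketch of a generic schedulability check (Liu & Layland rate-monotonic
# utilization bound), not the validated performance model used in the paper.
def rm_schedulable(tasks):
    """tasks: list of (worst_case_execution_time, period); RM sufficient test."""
    n = len(tasks)
    utilization = sum(c / t for c, t in tasks)
    bound = n * (2 ** (1 / n) - 1)
    return utilization <= bound, utilization, bound

# Hypothetical task set: (WCET, period) pairs in the same time unit.
ok, u, bound = rm_schedulable([(2, 10), (3, 20), (5, 50)])
print(ok, round(u, 3), round(bound, 3))  # True 0.45 0.78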
155.
The multiple determination of chemical properties is a classical problem in analytical chemistry. The major difficulty is to find the subset of variables that best represents the compounds. These variables are obtained with a spectrophotometer, a device that measures hundreds of correlated variables related to physicochemical properties, which can be used to estimate the component of interest. The problem is thus the selection of a subset of informative and uncorrelated variables that helps to minimize the prediction error. Classical algorithms select a separate subset of variables for each compound considered. In this work we propose the use of SPEA-II (strength Pareto evolutionary algorithm II) and show that the variable selection algorithm can select a single subset to be used for multiple determinations with multiple linear regressions. The case study uses wheat data obtained by NIR (near-infrared) spectroscopy, where the objective is to determine a variable subgroup carrying information about protein content (%), test weight (kg/hl), WKT (wheat kernel texture) (%) and farinograph water absorption (%). Results of traditional multivariate calibration techniques, namely SPA (successive projections algorithm), PLS (partial least squares) and a mono-objective genetic algorithm, are presented for comparison. For the NIR spectral analysis of protein concentration in wheat, the SPEA-II algorithm reduced the number of selected variables from 775 spectral variables to just 10. The prediction error decreased from 0.2 with the classical methods to 0.09 with the proposed approach, a reduction of 37%. The model using the variables selected by SPEA-II had better prediction performance than the classical algorithms and full-spectrum partial least squares.
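As a sketch of the two objectives such a multi-objective selector trades off, the code below evaluates one candidate variable subset by the prediction error of a multiple linear regression and by the subset size; the data are synthetic and this is not the authors' SPEA-II implementation.

# Minimal sketch: the two objectives a multi-objective selector such as SPEA-II
# would trade off for one candidate subset -- validation prediction error of a
# multiple linear regression, and the number of selected variables.
import numpy as np

def subset_objectives(X_train, y_train, X_val, y_val, subset):
    """Return (RMSEP on validation data, number of selected variables)."""
    A = np.column_stack([np.ones(len(X_train)), X_train[:, subset]])
    coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    A_val = np.column_stack([np.ones(len(X_val)), X_val[:, subset]])
    rmsep = float(np.sqrt(np.mean((A_val @ coef - y_val) ** 2)))
    return rmsep, len(subset)

# Synthetic "spectral" data: only variables 3, 7, 20 carry information.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 50))
y = X[:, [3, 7, 20]] @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.normal(size=120)
print(subset_objectives(X[:80], y[:80], X[80:], y[80:], [3, 7, 20]))       # low error
print(subset_objectives(X[:80], y[:80], X[80:], y[80:], [0, 1, 2, 4, 5]))  # poor fit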
156.
This work considers the open-loop control problem of steering a two-level quantum system from any initial to any final condition. The model of this system evolves on its state space and has two inputs that correspond to the complex amplitude of a resonant laser field. A symmetry-preserving flat output is constructed using a fully geometric construction and quaternion computations. Simulation results of this flatness-based open-loop control are provided.
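For orientation, a resonantly driven two-level system is commonly modeled (in a rotating frame, with ħ = 1; this is a standard textbook form, not necessarily the exact equations of the paper) as

\[
i\,\frac{d}{dt}\begin{pmatrix}\psi_1\\ \psi_2\end{pmatrix}
= \frac{1}{2}\begin{pmatrix} 0 & \bar{u}(t)\\ u(t) & 0 \end{pmatrix}
\begin{pmatrix}\psi_1\\ \psi_2\end{pmatrix},
\qquad |\psi_1|^2 + |\psi_2|^2 = 1,
\]

where the real and imaginary parts of the complex laser amplitude $u(t)$ supply the two inputs, and the normalization constraint confines the state to the unit sphere in $\mathbb{C}^2$.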
157.
Algorithm based on simulated annealing for land-use allocation
This article describes the use of simulated annealing for the allocation of land units to a set of possible uses on the basis of their suitability for those uses and the compactness of the total areas allotted to the same use or kind of use, which are fixed a priori. The results obtained for the Terra Chá district of Galicia (N.W. Spain) using different objective weighting schemes are compared with each other and with those obtained for this district, under the same area constraints, using hierarchical optimization, ideal point analysis, and multi-objective land allocation (MOLA) to maximize average use suitability. Including compactness in the simulated annealing objective function avoids the highly disperse allocations typical of optimizations that ignore this sub-objective.
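To make the weighted objective concrete, the sketch below runs simulated annealing on a toy grid with a suitability term and a compactness term; the weights, grid size and suitability values are hypothetical and do not reproduce the Terra Chá study.

# Minimal sketch of simulated annealing for land-use allocation on a toy grid,
# maximizing a weighted sum of suitability and compactness. Illustrative only.
import math, random

random.seed(0)
N, USES = 8, 3                                   # 8x8 grid, 3 land uses
suit = [[[random.random() for _ in range(USES)] for _ in range(N)] for _ in range(N)]
alloc = [[random.randrange(USES) for _ in range(N)] for _ in range(N)]
W_SUIT, W_COMP = 1.0, 0.5                        # hypothetical objective weights

def objective(a):
    s = sum(suit[i][j][a[i][j]] for i in range(N) for j in range(N))
    # compactness: count equal-use neighbours (right and down to avoid double counting)
    c = sum(1 for i in range(N) for j in range(N)
            for di, dj in ((0, 1), (1, 0))
            if i + di < N and j + dj < N and a[i][j] == a[i + di][j + dj])
    return W_SUIT * s + W_COMP * c

T, cooling = 5.0, 0.995
current = objective(alloc)
for _ in range(20000):
    i, j = random.randrange(N), random.randrange(N)
    old = alloc[i][j]
    alloc[i][j] = random.randrange(USES)         # propose a single-cell change
    cand = objective(alloc)
    if cand >= current or random.random() < math.exp((cand - current) / T):
        current = cand                           # accept improvement or uphill move
    else:
        alloc[i][j] = old                        # reject and revert
    T *= cooling

print(round(current, 2))                         # final objective value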
158.
This report describes the design of a modular, massively parallel, neural-network (NN)-based vector quantizer for real-time video coding. The NN is a self-organizing map (SOM) that operates in the training phase only (for codebook generation), in the recall phase only (for real-time image coding), or in both phases for adaptive applications. The neural net can be trained in batch or adaptive mode and is controlled by an on-chip, finite-state-machine-based hardwired controller. The SOM is described in VHDL and implemented on electrically programmable (FPGA) and mask-programmable (standard-cell) devices.
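Purely to illustrate the recall (coding) phase of such a vector quantizer in software terms -- the paper describes a VHDL hardware design -- each image block is encoded as the index of its nearest codeword; the codebook and block sizes below are hypothetical.

# Software sketch of the recall phase of a SOM-based vector quantizer:
# each image block is encoded as the index of the nearest codeword.
import numpy as np

def encode_blocks(blocks, codebook):
    """blocks: (num_blocks, dim); codebook: (num_codewords, dim) -> indices."""
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

rng = np.random.default_rng(0)
codebook = rng.random((256, 16))        # e.g. 256 codewords for 4x4 pixel blocks
blocks = rng.random((1000, 16))
indices = encode_blocks(blocks, codebook)
print(indices.shape)                    # (1000,) -- one codeword index per block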
159.
This paper proposes a methodology to compute quadratic performance bounds when the closed-loop poles of a discrete-time multivariable control loop are confined to a disk centred at the origin with radius less than one. The underlying philosophy of this constraint is to avoid certain undesirable dynamic features which arise in quadratic optimal designs. An expression for the performance loss due to the pole location constraint is also provided. Using numerical examples, we show that the performance loss is compensated by an improved transient response, especially visible in the control signals.
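In standard notation (a sketch of the setup only, not the paper's bound or its derivation), the quadratic cost and the pole-region constraint can be written as

\[
J = \sum_{k=0}^{\infty}\left( x_k^{\mathsf T} Q\, x_k + u_k^{\mathsf T} R\, u_k \right),
\qquad
x_{k+1} = A x_k + B u_k,\quad u_k = -K x_k,
\]
subject to
\[
\bigl|\lambda_i(A - BK)\bigr| < \rho < 1 \quad \text{for all } i,
\]
so that every closed-loop mode decays at least as fast as $\rho^k$ (up to polynomial factors for repeated poles).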
160.
In survival analysis applications, the failure rate function may frequently present a unimodal shape, in which case the log-normal or log-logistic distributions are typically used. In this paper we are concerned only with parametric forms, so a location-scale regression model based on the Burr XII distribution is proposed for modeling data with a unimodal failure rate function, as an alternative to the log-logistic regression model. Assuming censored data, we consider a classical analysis, a Bayesian analysis and a jackknife estimator for the parameters of the proposed model. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed and the performance of the log-logistic and log-Burr XII regression models is compared. In addition, we use sensitivity analysis to detect influential or outlying observations, and residual analysis to check the model assumptions. Finally, we analyze a real data set under log-Burr XII regression models.
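For reference, in one common parameterization (the paper's may differ) the Burr XII survival function is

\[
S(t) = \Pr(T > t) = \left[\, 1 + \left(\tfrac{t}{s}\right)^{c} \right]^{-k},
\qquad t > 0,\; c, k, s > 0,
\]

and a location-scale regression model for $Y = \log T$ takes the form $y_i = \mathbf{x}_i^{\mathsf T}\boldsymbol{\beta} + \sigma z_i$, with $z_i$ following the corresponding standardized log-Burr XII error distribution.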