81.
In this paper, we evaluate the adequacy of several performance measures for comparing the driving skills of different drivers. This work was motivated by the need for a training system that captures the driving skills of an expert driver and transfers them to novice drivers using a haptic-enabled driving simulator. The performance measures examined include traditional task performance measures, e.g., the mean position error, and a stochastic distance between a pair of hidden Markov models (HMMs), each trained for an individual driver. The emphasis of the latter is on the differences between the stochastic somatosensory processes underlying human driving skills. For the evaluation, we developed a driving simulator and carried out an experiment that collected driving data from an expert driver, whose data were used as a reference for comparison, and from many other subjects. The performance measures were computed from the experimental data and compared to each other. We also collected subjective judgement scores of each driver's skill from a highly experienced external evaluator, and these subjective scores were compared with the objective performance measures. The analysis showed that the HMM-based distance metric had a moderately high correlation with the subjective scores and was also consistent with the other task performance measures, indicating its adequacy as an objective performance measure for driving skill learning. The findings of this work can contribute to developing a training-oriented driving simulator with an objective driving-skill assessment function.
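The abstract does not specify which stochastic distance between HMMs is used; a common choice for comparing two trained HMMs is the symmetrized Juang-Rabiner log-likelihood distance, estimated from a representative observation sequence of each model. A minimal numpy sketch with toy two-state discrete HMMs standing in for the expert and novice driver models (all model values and sequences below are illustrative, not from the paper):

```python
import numpy as np

def forward_loglik(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs | HMM) for a discrete-output HMM."""
    alpha = pi * B[:, obs[0]]
    s = alpha.sum(); ll = np.log(s); alpha /= s
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # alpha_t(j) = sum_i alpha_{t-1}(i) A[i,j] B[j,o]
        s = alpha.sum(); ll += np.log(s); alpha /= s
    return ll

def hmm_distance(m1, m2, obs1, obs2):
    """Symmetrized Juang-Rabiner distance between two HMMs, estimated from
    one typical observation sequence per model (obs1 ~ m1, obs2 ~ m2)."""
    d12 = (forward_loglik(obs2, *m2) - forward_loglik(obs2, *m1)) / len(obs2)
    d21 = (forward_loglik(obs1, *m1) - forward_loglik(obs1, *m2)) / len(obs1)
    return 0.5 * (d12 + d21)

# Two toy 2-state models: an "expert" emitting mostly symbol 0 and a
# "novice" emitting mostly symbol 1 (stand-ins for quantized driving data).
pi = np.array([0.5, 0.5])
A  = np.array([[0.9, 0.1], [0.1, 0.9]])
expert = (pi, A, np.array([[0.9, 0.1], [0.8, 0.2]]))
novice = (pi, A, np.array([[0.1, 0.9], [0.2, 0.8]]))
obs_e, obs_n = [0] * 30, [1] * 30

print("expert-novice distance:", hmm_distance(expert, novice, obs_e, obs_n))
```

The self-distance of a model on its own data is zero, and the distance grows as the two drivers' emission behaviors diverge.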
82.
We propose novel techniques to find the optimal location, size, and power factor of distributed generation (DG) that achieve the maximum loss reduction for distribution networks. The optimal DG location and size are determined simultaneously using the energy loss curves technique for a pre-selected power factor that gives the best DG operation. Based on the network's total load demand, four DG sizes are selected; they are used to form energy loss curves for each bus and then to determine the optimal DG options. The study shows that when energy loss minimization is defined as the objective function, the time-varying load demand significantly affects the sizing of DG resources in distribution networks, whereas taking power loss as the objective function leads to inconsistent interpretation of loss reduction and other calculations. The devised technique was tested on two distribution systems of varying size and complexity and validated by comparison with the exhaustive iterative method (EIM) and recently published results. Results showed that the proposed technique can provide an optimal solution with less computation.
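As a rough illustration of the energy-loss-curve idea (not the paper's multi-bus procedure), the sketch below sweeps candidate DG sizes on a single line with quadratic I²R losses and a hypothetical 24-hour load profile. With an energy objective the optimum tracks the whole load curve rather than a single loading point, which is exactly the effect the abstract attributes to time-varying demand:

```python
import numpy as np

# Hourly demand profile (per-unit) -- a hypothetical daily load curve.
load = np.array([0.4, 0.35, 0.3, 0.3, 0.35, 0.5, 0.7, 0.9, 1.0, 0.95, 0.9, 0.85,
                 0.8, 0.8, 0.85, 0.9, 1.0, 1.1, 1.2, 1.1, 0.9, 0.7, 0.55, 0.45])
R, V = 0.05, 1.0   # feeder resistance and bus voltage, per-unit

def daily_energy_loss(p_dg):
    """Energy lost in the line over 24 h when a DG unit of size p_dg
    (per-unit, unity power factor) is placed at the load bus."""
    line_flow = load - p_dg                  # power imported from the substation
    return np.sum(R * line_flow**2 / V**2)   # 24-hour energy loss (per-unit hours)

# Energy-loss curve over candidate DG sizes up to the peak demand.
sizes = np.linspace(0, load.max(), 25)
losses = [daily_energy_loss(s) for s in sizes]
best = sizes[int(np.argmin(losses))]
print(f"optimal DG size ~ {best:.2f} p.u. (mean load = {load.mean():.3f} p.u.)")
```

With this quadratic loss model the energy-optimal DG size lands at the grid point nearest the mean load, whereas sizing against a single peak-load power flow would pick the peak demand instead.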
83.
Interoperability is the ability of systems to provide services to and accept services from other systems, and to use the services exchanged so as to operate together more effectively. The fact that interoperability can be improved implies that metrics for measuring it can be defined. Measuring the interoperability between systems requires an interoperability assessment model. This paper deals with the existing interoperability assessment models. A comparative analysis among these models is provided to evaluate the similarities and differences in their philosophy and implementation. The analysis yields a set of recommendations for any party interested in creating or improving an interoperability assessment model.
84.
In this paper, we propose a source localization algorithm based on a sparse Fast Fourier Transform (FFT)-based feature extraction method and spatial sparsity. We represent the sound source positions as a sparse vector by discretely segmenting the space with a circular grid. The location vector is related to the microphone measurements through a linear equation, which can be estimated at each microphone. For this linear dimensionality reduction, we utilize Compressive Sensing (CS) together with a two-level FFT-based feature extraction method that combines two sets of audio signal features, covering both the short-time and long-time properties of the signal. The proposed feature extraction method leads to a sparse representation of the audio signals and thus to a significant reduction in their dimensionality. In comparison to state-of-the-art methods, the proposed method improves accuracy while reducing complexity in some cases.
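The sparse recovery step can be illustrated with Orthogonal Matching Pursuit, a standard CS solver. The dictionary below is a synthetic random stand-in for the microphone-feature vectors of the circular grid (the paper's actual features come from its two-level FFT method), and the single active grid index is chosen arbitrarily:

```python
import numpy as np

rng = np.random.default_rng(0)

# Dictionary: each column is the (synthetic) microphone-feature vector
# produced by a source at one of G candidate positions on the circular grid.
G, M = 60, 20                       # grid points, feature dimension
A = rng.standard_normal((M, G))
A /= np.linalg.norm(A, axis=0)      # unit-norm atoms

x_true = np.zeros(G); x_true[17] = 1.0   # one active source on the grid
y = A @ x_true                           # noiseless measurement

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily pick the k dictionary atoms
    most correlated with the residual, then least-squares refit."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1]); x[support] = coef
    return x

x_hat = omp(A, y, 1)
print("estimated source grid index:", int(np.argmax(np.abs(x_hat))))
```

Because the measurement is noiseless and the atoms are normalized, one OMP iteration recovers the active grid cell exactly; with real microphone features the same loop runs with k set to the expected number of simultaneous sources.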
85.
Almost all binarization methods have a few parameters that require setting. However, they do not usually achieve their upper-bound performance unless the parameters are individually set and optimized for each input document image. In this work, a learning framework for the optimization of binarization methods is introduced, designed to determine the optimal parameter values for a given document image. The framework, which works with any binarization method, has a standard structure and performs three main steps: (i) extract features, (ii) estimate optimal parameters, and (iii) learn the relationship between features and optimal parameters. First, an approach is proposed to generate numerical feature vectors from 2D data: the statistics of various maps are extracted and then combined, in a nonlinear way, into a final feature vector. The optimal behavior is learned using support vector regression (SVR). Although the framework works with any binarization method, two methods are considered as typical examples in this work: the grid-based Sauvola method, and Lu's method, which placed first in the DIBCO'09 contest. Experiments performed on the DIBCO'09 and H-DIBCO'10 datasets, and on combinations of these datasets, show promising results.
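The Sauvola method referenced here thresholds each pixel at T = m * (1 + k * (s/R - 1)), where m and s are the mean and standard deviation of the local window; the sensitivity k and window size w are exactly the kind of per-image parameters the framework would tune. A direct (unoptimized) numpy sketch of the base method, run on a small synthetic "text on paper" patch:

```python
import numpy as np

def sauvola(img, w=15, k=0.2, R=128.0):
    """Sauvola local threshold: T = m * (1 + k*(s/R - 1)), with m and s the
    mean and std of the w x w window around each pixel. Foreground -> 0."""
    img = img.astype(np.float64)
    pad = w // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros(img.shape, dtype=np.uint8)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            win = p[i:i + w, j:j + w]
            T = win.mean() * (1 + k * (win.std() / R - 1))
            out[i, j] = 255 if img[i, j] > T else 0
    return out

# Synthetic test patch: dark "ink" (20) on bright "paper" (220).
img = np.full((20, 20), 220, dtype=np.uint8)
img[8:12, 8:12] = 20
bw = sauvola(img)
print("ink pixel ->", bw[10, 10], " paper pixel ->", bw[0, 0])
```

In the paper's framework, an SVR regressor would map the image's feature vector to a per-image choice of k (and the grid layout), replacing the fixed defaults used in this sketch.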
86.
Mohammad Hossein, Reza. Pattern Recognition, 2008, 41(8): 2571-2593
This paper investigates the use of time-adaptive self-organizing map (TASOM)-based active contour models (ACMs) for detecting the boundaries of the human eye sclera and tracking its movements in a sequence of images. The task begins with extracting the head boundary based on a skin-color model. The eye strip is then located, with acceptable accuracy, using a morphological method. Eye features such as the iris center and eye corners are detected from the iris edge information, and a TASOM-based ACM is used to extract the inner boundary of the eye. Finally, the eyes are tracked effectively by following the changes in the neighborhood characteristics of the eye-boundary-estimating neurons. The original TASOM algorithm is found to have some weaknesses in this application: formation of undesired twists in the neuron chain and holes in the boundary, an overly long chain of neurons, and low speed. These weaknesses are overcome by introducing a new method for finding the winning neuron, a new definition of unused neurons, and a new method of selecting features and applying them to the network. Experimental results show very good performance for the proposed method in general, and better performance than that of the gradient vector field (GVF) snake-based method.
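TASOM's adaptive learning rates aside, the underlying mechanism is a closed chain of SOM neurons pulled toward boundary points. A plain ring-topology SOM sketch on synthetic edge points sampled from a circle (stand-ins for the detected eye-boundary pixels; all constants are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)

# Target "boundary": points on a circle of radius 2 centered at (5, 5),
# standing in for eye-edge pixels extracted from an image.
theta = rng.uniform(0, 2 * np.pi, 400)
edges = np.c_[np.cos(theta), np.sin(theta)] * 2.0 + [5.0, 5.0]

# Closed chain of N neurons, initialized on a small circle inside the target.
N = 30
phi = np.linspace(0, 2 * np.pi, N, endpoint=False)
w = np.c_[np.cos(phi), np.sin(phi)] * 0.5 + [5.0, 5.0]

steps = 2000
for t in range(steps):
    lr = 0.5 * (1 - t / steps)                # decaying learning rate
    sigma = max(1.0, N / 4 * (1 - t / steps)) # shrinking ring neighborhood
    x = edges[rng.integers(len(edges))]       # random boundary sample
    win = int(np.argmin(((w - x) ** 2).sum(axis=1)))     # winning neuron
    d = np.abs(np.arange(N) - win)
    ring = np.minimum(d, N - d)               # distance along the closed chain
    h = np.exp(-(ring ** 2) / (2 * sigma ** 2))
    w += lr * h[:, None] * (x - w)            # pull neighborhood toward point

radii = np.linalg.norm(w - [5.0, 5.0], axis=1)
print("mean fitted radius:", radii.mean())
```

The chain inflates from its small initial circle onto the boundary; the paper's contributions (winner-finding, unused-neuron handling, feature application) address the twists, holes, and speed problems this naive loop exhibits on real eye images.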
87.
Recently, we introduced the sorted Gaussian mixture models (SGMM) algorithm, which provides the means to trade off performance for operational speed and thus permits the speed-up of GMM-based classification schemes. The performance of the SGMM algorithm depends on the proper choice of the sorting function and the proper adjustment of its parameters. In the present work, we employ particle swarm optimization (PSO) and an appropriate fitness function to find the most advantageous parameters of the sorting function. We evaluate the practical significance of our approach on the text-independent speaker verification task, utilizing the NIST 2002 speaker recognition evaluation (SRE) database and following the NIST SRE experimental protocol. The experimental results demonstrate superior performance of the SGMM algorithm with PSO when compared to the original SGMM. For completeness, we also compared these results with those of a baseline Gaussian mixture model-universal background model (GMM-UBM) system. The results suggest that the performance loss due to speed-up is partially mitigated by using PSO-derived weights in a sorted GMM-based scheme.
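The fitness function for tuning the sorting-function parameters is task-specific (a verification-error measure in the paper), but the PSO machinery itself is generic. A minimal global-best PSO sketch in numpy, minimizing a stand-in quadratic fitness in place of the real one:

```python
import numpy as np

rng = np.random.default_rng(2)

def pso(fitness, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, bounds=(-5, 5)):
    """Minimal particle swarm optimizer (global-best topology, minimization)."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))          # particle positions
    v = np.zeros((n, dim))                     # velocities
    pbest = x.copy()
    pbest_f = np.array([fitness(p) for p in x])
    g = pbest[np.argmin(pbest_f)].copy()       # global best
    for _ in range(iters):
        r1, r2 = rng.random((n, dim)), rng.random((n, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([fitness(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, float(pbest_f.min())

# Stand-in fitness: squared distance to a known optimum. In the paper this
# slot holds the verification-error measure of the sorted-GMM search.
best, fbest = pso(lambda p: float(np.sum((p - np.array([1.0, -2.0])) ** 2)), dim=2)
print("found:", best, "fitness:", fbest)
```

Swapping the lambda for the SGMM fitness turns this generic loop into the parameter search the abstract describes; each particle position is then one candidate set of sorting-function parameters.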
88.
Strategic reasoning about business models is an integral part of service design. In fast-moving markets, businesses must be able to recognize and respond strategically to disruptive change. They have to answer questions such as: What are the threats and opportunities in emerging technologies and innovations? How should they target customer groups? Who are their real competitors? How will competitive battles take shape? In this paper we define a strategic modeling framework to help understand and analyze the goals, intentions, roles, and rationale behind strategic actions in a business environment. This understanding is necessary in order to improve existing services or design new ones. The key component of the framework is a strategic business model ontology for representing and analyzing business models and strategies, using the i* agent- and goal-oriented methodology as a basis. The ontology introduces a strategy layer that reasons about alternative strategies, which are realized in the operational layer. The framework is evaluated using a retroactive example, from the literature, of disruptive technology in the telecommunication services sector.
89.
An original inversion method specifically adapted to estimating the Poisson's ratio of balls from their resonance spectra is described. From the study of their elastic vibrations, it is possible to characterize the balls accurately. The proposed methodology can both excite spheroidal modes in the balls and detect such vibrations over a large frequency range. Experimentally, using an ultrasonic probe for emission (piezoelectric transducer) and a heterodyne optical probe for reception (interferometer), spectroscopic measurements of spheroidal vibrations were taken over a large frequency range (100 kHz to 45 MHz) in a continuous regime. The method, which uses ratios between resonance frequencies, allows the Poisson's ratio to be determined independently of Young's modulus and of the ball's radius and density. This has the advantage of providing highly accurate estimates of the Poisson's ratio (±4.3 × 10⁻⁴) over a wide frequency range.
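The inversion principle (match measured frequency ratios against their theoretical dependence on Poisson's ratio, which cancels Young's modulus and the radius) can be sketched as a table lookup. The table below is a synthetic, monotone placeholder; a real implementation would tabulate spheroidal-mode frequency ratios from elastic-sphere theory:

```python
import numpy as np

# Placeholder inversion table: spheroidal-mode frequency ratio f2/f1 as a
# function of Poisson's ratio nu. These values are synthetic stand-ins; in
# practice the table comes from solving the elastic-sphere vibration problem.
nu_grid = np.linspace(0.15, 0.40, 26)
ratio_grid = 1.5 + 0.8 * nu_grid          # hypothetical monotone relation

def invert_poisson(measured_ratio):
    """Estimate nu by interpolating the measured resonance-frequency ratio
    against the precomputed table (independent of modulus, radius, density)."""
    return float(np.interp(measured_ratio, ratio_grid, nu_grid))

nu_hat = invert_poisson(1.74)
print(f"estimated Poisson's ratio: {nu_hat:.3f}")
```

Averaging such estimates over many mode-pair ratios across the measured band is what makes the quoted ±4.3 × 10⁻⁴ accuracy plausible, since each ratio gives an independent reading of the same quantity.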
90.
The Peer-to-Peer Cloud (P2P-Cloud) is a suitable alternative to distributed cloud-based or peer-to-peer (P2P)-based content distribution on a large scale. The P2P-Cloud is used in many applications such as IPTV, Video-on-Demand, and so on. In a P2P-Cloud network, overload is a common problem during periods of crowding: if a node receives many requests simultaneously, it may not be able to respond quickly to user requests, and this access latency is a major problem for users. Replication in P2P-Cloud environments reduces access time and spreads bandwidth usage by making multiple copies of data in diverse locations; it improves access to the information and increases the reliability of the system. The main problem in data replication is identifying the best possible placement of replica nodes with respect to the users' data access time, which is an NP-hard optimization problem. This paper proposes a new replica placement method that improves average access time and replica cost using fuzzy logic and the Ant Colony Optimization (ACO) algorithm. Ants find the shortest path to discover the optimal node on which to place the duplicated file with the least access latency. The fuzzy module evaluates the historical information of each node to analyze the pheromone value per iteration, and a fuzzy membership function is used to determine each node's degree based on four characteristics. Simulation results showed that access time and replica cost are improved compared with other replica placement algorithms.
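The ant-colony node-selection loop can be sketched as follows. The candidate latencies are hypothetical, and the paper's fuzzy evaluation of node history is reduced here to a fixed pheromone deposit of 1/latency; in the full method that deposit would come from the fuzzy module's assessment of each node:

```python
import numpy as np

rng = np.random.default_rng(3)

# Candidate replica nodes with (hypothetical) mean access latencies in ms,
# e.g. aggregated from each node's request history.
latency = np.array([120.0, 45.0, 80.0, 30.0, 200.0, 60.0])
n = len(latency)
tau = np.ones(n)                  # pheromone level per candidate node
alpha, beta, rho = 1.0, 2.0, 0.1  # pheromone weight, heuristic weight, evaporation

for _ in range(200):
    # Each of 10 ants picks a node with probability ~ tau^alpha * (1/latency)^beta.
    p = tau ** alpha * (1.0 / latency) ** beta
    p /= p.sum()
    picks = rng.choice(n, size=10, p=p)
    tau *= (1 - rho)                 # pheromone evaporation
    for k in picks:
        tau[k] += 1.0 / latency[k]   # reinforce low-latency choices

best = int(np.argmax(tau))
print("chosen replica node:", best, "latency:", latency[best])
```

Pheromone accumulates fastest on the lowest-latency node, so the colony converges on it while still exploring alternatives, which is what lets the method track changing access patterns across iterations.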