Paid full text: 9567 articles
Free: 808 articles
Free for domestic users: 91 articles
Electrical engineering: 181 articles
General: 41 articles
Chemical industry: 2645 articles
Metalworking: 217 articles
Machinery and instrumentation: 422 articles
Building science: 356 articles
Mining engineering: 21 articles
Energy and power: 617 articles
Light industry: 918 articles
Water conservancy engineering: 190 articles
Petroleum and natural gas: 143 articles
Weapons industry: 5 articles
Radio and electronics: 978 articles
General industrial technology: 1646 articles
Metallurgical industry: 212 articles
Atomic energy technology: 67 articles
Automation technology: 1807 articles
2024: 41 articles
2023: 191 articles
2022: 347 articles
2021: 612 articles
2020: 552 articles
2019: 685 articles
2018: 783 articles
2017: 739 articles
2016: 731 articles
2015: 433 articles
2014: 720 articles
2013: 1054 articles
2012: 659 articles
2011: 745 articles
2010: 481 articles
2009: 412 articles
2008: 255 articles
2007: 192 articles
2006: 154 articles
2005: 105 articles
2004: 103 articles
2003: 62 articles
2002: 58 articles
2001: 33 articles
2000: 28 articles
1999: 26 articles
1998: 24 articles
1997: 18 articles
1996: 25 articles
1995: 24 articles
1994: 14 articles
1993: 16 articles
1992: 13 articles
1991: 17 articles
1990: 17 articles
1989: 12 articles
1988: 7 articles
1987: 7 articles
1986: 9 articles
1985: 8 articles
1984: 14 articles
1983: 12 articles
1982: 6 articles
1981: 3 articles
1980: 3 articles
1979: 6 articles
1978: 3 articles
1977: 2 articles
1973: 2 articles
1967: 1 article
A total of 10,000 query results found; search time 15 ms.
101.
An extension to the divide-and-conquer algorithm (DCA) is presented in this paper to model constrained multibody systems. The constraints of interest are those imposed on the system by inverse dynamics or control laws, rather than by kinematically closed loops, which have been studied in the literature. These imposed constraints are often expressed in terms of the generalized coordinates and speeds. A set of unknown generalized constraint forces must be included in the equations of motion to enforce these algebraic constraints. In this paper, the dynamics of this class of constrained multibody systems is formulated using a generalized DCA (GDCA). In this scheme, by introducing dynamically equivalent forcing systems, each generalized constraint force is replaced by its dynamically equivalent spatial constraint force, applied from the appropriate parent body to the associated child body at the connecting joint, without violating the dynamics of the original system. The handle equations of motion are then formulated in terms of these dynamically equivalent spatial constraint forces. In the GDCA scheme, these equations are used in the assembly and disassembly processes to solve for the states of the system as well as the generalized constraint forces and/or Lagrange multipliers.
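For readers unfamiliar with constraint enforcement via unknown constraint forces, the sketch below shows the generic saddle-point (Lagrange-multiplier) formulation that such algebraic constraints lead to. It is only an illustration of the underlying idea, not the GDCA assembly/disassembly recursion; the matrices `M` and `J` and the two-mass demo are assumptions for the example.

```python
# A minimal sketch of constraint enforcement with unknown constraint forces:
# solve M*qdd - J^T*lam = f together with the acceleration-level constraint
# J*qdd = -Jdot*qdot. A dense saddle-point solve, not the GDCA recursion.
import numpy as np

def constrained_accelerations(M, J, f, jdot_qdot):
    n, m = M.shape[0], J.shape[0]
    K = np.block([[M, -J.T],
                  [J, np.zeros((m, m))]])
    rhs = np.concatenate([f, -jdot_qdot])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], sol[n:]  # accelerations, constraint forces (multipliers)

# Toy demo (assumed): two unit masses constrained to share one acceleration.
M = np.eye(2)
J = np.array([[1.0, -1.0]])          # enforces q1'' - q2'' = 0
f = np.array([0.0, -9.81])           # gravity acts on the second mass only
qdd, lam = constrained_accelerations(M, J, f, np.zeros(1))
print(qdd, lam)                      # both masses accelerate at -4.905
```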
102.
Discrete linear quadratic control has been efficiently applied to linear systems as an optimal control. However, a robotic system is highly nonlinear, heavily coupled, and uncertain. To overcome this problem, the robotic system can be modeled as a linear discrete-time time-varying system when performing repetitive tasks. This modeling motivates the development of an optimal repetitive control. The contribution of this paper is twofold. First, it presents, for the first time, discrete linear quadratic repetitive control for electrically driven robots using the mentioned model. The proposed control approach is based on the voltage control strategy. Second, uncertainty is effectively compensated by employing a robust time-delay controller. The uncertainty can include parametric uncertainty, unmodeled dynamics, and external disturbances. To highlight its ability to overcome uncertainty, the dynamic equation of an articulated robot is introduced and used for simulation, modeling, and control purposes. Stability analysis verifies the proposed control approach, and simulation results show its effectiveness.
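As a point of reference for the "discrete linear quadratic" component, here is a minimal backward Riccati recursion for a finite-horizon discrete-time LQR. The repetitive-control and time-delay compensation layers of the paper are not reproduced, and the toy `A`, `B`, `Q`, `R` matrices are assumptions, not the robot model.

```python
# Minimal finite-horizon discrete-time LQR via backward Riccati recursion.
# This is the standard textbook building block, not the paper's repetitive
# controller; A, B, Q, R below are placeholder values.
import numpy as np

def dlqr_gains(A, B, Q, R, N):
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # feedback gain
        P = Q + A.T @ P @ (A - B @ K)                      # Riccati update
        gains.append(K)
    return gains[::-1]  # K_0 ... K_{N-1}, applied as u_k = -K_k x_k

A = np.array([[1.0, 0.1], [0.0, 1.0]])   # double-integrator-like toy model
B = np.array([[0.0], [0.1]])
K0 = dlqr_gains(A, B, Q=np.eye(2), R=np.eye(1), N=50)[0]
print(K0)
```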
103.
Most realistic solid-state devices considered as qubits are not true two-state systems. If the energy separation of the upper energy levels from the lowest two is not large, these upper states may affect the evolution of the ground state over time and therefore cannot be neglected. In this work, we study the effect of energy levels beyond the lowest two on adiabatic quantum optimization in a device with a double-well potential as the basic logical element. We show that the extra levels can be modeled by adding ancilla qubits coupled to the original logical qubits, and that the presence of upper levels has no effect on the final ground state. We also study the influence of upper energy levels on the minimum gap for a set of 8-qubit spin glass instances.
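To make the "minimum gap" concrete, the sketch below scans the spectral gap of a toy two-qubit adiabatic Hamiltonian H(s) = (1-s)H_driver + sH_problem. The Ising instance and the small longitudinal field are assumptions chosen for illustration; this is not the paper's 8-qubit spin-glass set or its double-well device model.

```python
# Toy minimum-gap scan for H(s) = (1-s)*H_driver + s*H_problem on 2 qubits.
# The problem Hamiltonian is an assumed ferromagnetic ZZ coupling with a
# small field added to break the ground-state degeneracy.
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

H_driver = -(np.kron(sx, I2) + np.kron(I2, sx))        # transverse field
H_problem = -np.kron(sz, sz) - 0.5 * np.kron(sz, I2)   # toy spin instance

gaps = []
for s in np.linspace(0.0, 1.0, 101):
    evals = np.linalg.eigvalsh((1 - s) * H_driver + s * H_problem)
    gaps.append(evals[1] - evals[0])   # gap above the instantaneous ground state
print("minimum gap:", min(gaps))
```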
104.
A vast amount of valuable human knowledge is recorded in documents. The rapid growth in the number of machine-readable documents for public or private access necessitates automatic text classification. While much effort has been devoted to Western languages, mostly English, minimal experimentation has been done with Arabic. This paper presents, first, an up-to-date review of the work done in the field of Arabic text classification and, second, a large and diverse dataset that can be used for benchmarking Arabic text classification algorithms. The different techniques derived from the literature review are illustrated by applying them to the proposed dataset. The results of various feature selection methods, weighting schemes, and classification algorithms show, on average, the superiority of the support vector machine, followed by the decision tree algorithm (C4.5) and Naïve Bayes. The best classification accuracy was 97% for the Islamic Topics dataset, and the lowest was 61% for the Arabic Poems dataset.
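A minimal sketch of the kind of classifier comparison the abstract reports, using scikit-learn as a stand-in for the paper's tooling. The tiny Arabic snippets and labels are placeholder data, not the proposed benchmark dataset.

```python
# Compare an SVM, a decision tree (C4.5-style), and Naive Bayes on TF-IDF
# features. scikit-learn and the toy documents are assumptions for the demo.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

docs = ["قصيدة قديمة", "فقه العبادات", "شعر حديث", "أحكام الصيام"]
labels = ["poems", "islamic", "poems", "islamic"]

for clf in (LinearSVC(), DecisionTreeClassifier(), MultinomialNB()):
    model = make_pipeline(TfidfVectorizer(), clf)
    model.fit(docs, labels)
    print(type(clf).__name__, model.predict(["ديوان شعر"]))
```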
105.
Mohammad Hossein …, Reza … Pattern Recognition, 2008, 41(8): 2571–2593
This paper investigates the use of time-adaptive self-organizing map (TASOM)-based active contour models (ACMs) for detecting the boundaries of the human eye sclera and tracking its movements in a sequence of images. The task begins with extracting the head boundary based on a skin-color model. The eye strip is then located with acceptable accuracy using a morphological method. Eye features such as the iris center and eye corners are detected through the iris edge information. A TASOM-based ACM is used to extract the inner boundary of the eye. Finally, by tracking the changes in the neighborhood characteristics of the eye-boundary-estimating neurons, the eyes are tracked effectively. The original TASOM algorithm is found to have some weaknesses in this application, including the formation of undesired twists in the neuron chain and holes in the boundary, a lengthy chain of neurons, and low speed. These weaknesses are overcome by introducing a new method for finding the winning neuron, a new definition of unused neurons, and a new method of feature selection and application to the network. Experimental results show very good performance for the proposed method in general, and better performance than that of the gradient vector flow (GVF) snake-based method.
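For orientation, the sketch below shows the core self-organizing-map step that a TASOM-based contour builds on: finding the winning neuron for an edge point and pulling its chain neighborhood toward it. The fixed learning rate and neighborhood width are assumptions; the paper's time-adaptive rates and its modified winner and unused-neuron rules are not reproduced.

```python
# One SOM update step for a contour modeled as a 1-D chain of neurons.
# lr and sigma are fixed toy values; TASOM adapts these over time.
import numpy as np

def som_step(chain, point, lr=0.2, sigma=2.0):
    """chain: (N, 2) neuron positions; point: (2,) edge pixel."""
    winner = int(np.argmin(np.linalg.norm(chain - point, axis=1)))
    idx = np.arange(len(chain))
    h = np.exp(-((idx - winner) ** 2) / (2.0 * sigma ** 2))  # neighborhood
    return chain + lr * h[:, None] * (point - chain)

chain = np.stack([np.linspace(0, 10, 11), np.zeros(11)], axis=1)
print(som_step(chain, np.array([5.0, 2.0]))[4:7])  # neurons near the point move
```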
106.
107.
Effective task scheduling is essential for obtaining high performance in heterogeneous distributed computing systems (HeDCSs). However, finding an effective task schedule in HeDCSs requires the consideration of both the heterogeneity of processors and high interprocessor communication overhead, which results from non-trivial data movement between tasks scheduled on different processors. In this paper, we present a new high-performance scheduling algorithm, called the longest dynamic critical path (LDCP) algorithm, for HeDCSs with a bounded number of processors. The LDCP algorithm is a list-based scheduling algorithm that uses a new attribute to efficiently select tasks for scheduling in HeDCSs. The efficient selection of tasks enables the LDCP algorithm to generate high-quality task schedules in a heterogeneous computing environment. The performance of the LDCP algorithm is compared to two of the best existing scheduling algorithms for HeDCSs: the HEFT and DLS algorithms. The comparison study shows that the LDCP algorithm outperforms the HEFT and DLS algorithms in terms of schedule length and speedup. Moreover, the improvement in performance obtained by the LDCP algorithm over the HEFT and DLS algorithms increases as the inter-task communication cost increases. Therefore, the LDCP algorithm provides a practical solution for scheduling parallel applications with high communication costs in HeDCSs.
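As a rough illustration of list scheduling driven by a critical-path attribute, the sketch below ranks tasks on a toy DAG by their upward rank, HEFT-style. The DAG, the costs, and the static (rather than dynamic) critical-path attribute are assumptions; LDCP's dynamic recomputation of the critical path is not reproduced.

```python
# HEFT-style upward-rank priorities on an assumed toy DAG: rank(t) equals
# w[t] plus the max over successors of (edge cost + successor rank).
from functools import lru_cache

succ = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
w = {"A": 3, "B": 2, "C": 4, "D": 1}                  # mean compute costs
c = {("A", "B"): 2, ("A", "C"): 1, ("B", "D"): 3, ("C", "D"): 2}

@lru_cache(maxsize=None)
def upward_rank(t):
    return w[t] + max((c[(t, s)] + upward_rank(s) for s in succ[t]), default=0)

order = sorted(succ, key=upward_rank, reverse=True)
print(order)   # scheduling priority order: ['A', 'C', 'B', 'D']
```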
108.
The problem of missing values in software measurement data used in empirical analysis has led to the proposal of numerous potential solutions. Imputation procedures, for example, have been proposed to 'fill in' the missing values with plausible alternatives. We present a comprehensive study of imputation techniques using real-world software measurement datasets. Two datasets with dramatically different properties were utilized in this study, with missing values injected according to three different missingness mechanisms (MCAR, MAR, and NI). We consider the occurrence of missing values in multiple attributes and compare three procedures: Bayesian multiple imputation, k-Nearest Neighbor imputation, and mean imputation. We also examine the relationship between noise in the dataset and the performance of the imputation techniques, which has not been addressed previously. Our comprehensive experiments demonstrate conclusively that Bayesian multiple imputation is an extremely effective imputation technique.
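A minimal sketch of two of the three compared procedures, mean and k-NN imputation, after injecting MCAR missingness; Bayesian multiple imputation is omitted. The random data matrix, the missingness rate, and the scikit-learn imputers are assumptions, not the paper's software measurement datasets.

```python
# Inject MCAR missingness into a toy matrix, then compare mean vs. k-NN
# imputation by RMSE against the ground truth. All values are assumed.
import numpy as np
from sklearn.impute import KNNImputer, SimpleImputer

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X_miss = X.copy()
X_miss[rng.random(X.shape) < 0.1] = np.nan    # MCAR: ~10% missing at random

for imp in (SimpleImputer(strategy="mean"), KNNImputer(n_neighbors=5)):
    X_hat = imp.fit_transform(X_miss)
    rmse = np.sqrt(np.mean((X_hat - X) ** 2))
    print(type(imp).__name__, round(float(rmse), 4))
```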
109.
In wireless multimedia sensor networks (WMSNs), a sensor node may carry different types of sensors that gather different kinds of data. To support quality-of-service (QoS) requirements for multimedia applications, a reliable and fair transport protocol is necessary. One of the main objectives of the transport layer in WMSNs is congestion control. We observe that the information provided may have different levels of importance and argue that sensor networks should be willing to spend more resources on disseminating packets carrying more important information. Some WMSN applications may need to send real-time traffic toward the sink node. This real-time traffic requires low latency and high reliability so that immediate remedial and defensive actions can be taken when needed. Therefore, as in wired networks, service differentiation in wireless sensor networks is an important issue. We present a priority-based rate control mechanism for congestion control and service differentiation in WMSNs. We distinguish high-priority real-time traffic from low-priority non-real-time traffic and service the input traffic based on its priority. Simulation results confirm the superior performance of the proposed model with respect to delay, delay variation, and loss probability.
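A minimal sketch of priority-proportional rate allocation, the kind of split a priority-based rate control mechanism performs at each node. The specific weights, flow names, and the proportional rule itself are assumptions for illustration, not the paper's protocol.

```python
# Split a node's outgoing capacity among flows in proportion to priority
# weights (assumed), so real-time traffic receives the larger share.
def allocate_rates(capacity, flows):
    """flows: list of (name, priority_weight); returns name -> rate."""
    total = sum(p for _, p in flows)
    return {name: capacity * p / total for name, p in flows}

rates = allocate_rates(1000.0, [("video_rt", 4), ("audio_rt", 2), ("scalar", 1)])
print(rates)   # real-time video gets 4/7 of the link, scalar data 1/7
```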
110.
This paper proposes a new fuzzy approach to counting eosinophils, as a measure of inflammation, in bronchoalveolar lavage fluid images provided by a digital camera through a microscope. We use fuzzy cluster analysis and a fuzzy classification algorithm to determine the number of objects in an image. For this purpose, a fuzzy image processing procedure consisting of five main stages is presented. The first stage is pre-highlighting the objects in the images using an image enhancement method: sharpening the image with a Laplacian high-pass filter to obtain acceptable contrast. The second stage is segmentation by clustering with the fuzzy c-means algorithm for partitioning; at this stage, the clustered data are rough symbols of objects in the image containing noise. In the third stage, a Gaussian low-pass filter is first used for noise reduction, and then contrast adaptation is performed by modifying the membership functions in the image [H.R. Tizhoosh, G. Krell, B. Michaelis, Knowledge-based enhancement of megavoltage images in radiation therapy using a hybrid neuro-fuzzy system, Image and Vision Computing 19 (July) (2000) 217–233]. Object recognition, the fourth stage, is done by fuzzy labeling of the objects in the image using a fuzzy classification method; the number of labeled objects gives the number of eosinophils in an image, which is an index for diagnosing inflammation. The last stage is parameter tuning and verification of system performance using a feed-forward neural network.
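To illustrate the clustering at the heart of the segmentation stage, here is a minimal fuzzy c-means loop on 1-D toy intensities. The number of clusters, the fuzzifier m, and the data are assumptions; the paper's full five-stage pipeline (sharpening, denoising, fuzzy labeling, neural-network verification) is not reproduced.

```python
# Minimal fuzzy c-means: alternate between weighted-centroid updates and
# membership updates u_ik ~ d_ik^(-2/(m-1)), normalized per point.
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)              # memberships sum to 1
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centers, U

X = np.array([[10.0], [12.0], [11.0], [200.0], [205.0], [198.0]])
centers, U = fuzzy_c_means(X)
print(np.round(centers.ravel(), 1))                # two intensity clusters
```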