  Subscription full text   5438 articles
  Free   439 articles
  Domestic free   58 articles
Electrical engineering   61 articles
General   22 articles
Chemical industry   1266 articles
Metalworking   75 articles
Machinery and instruments   264 articles
Building science   101 articles
Mining engineering   8 articles
Energy and power   332 articles
Light industry   717 articles
Hydraulic engineering   64 articles
Petroleum and natural gas   30 articles
Weapons industry   1 article
Radio electronics   697 articles
General industrial technology   1210 articles
Metallurgical industry   93 articles
Atomic energy technology   51 articles
Automation technology   943 articles
  2024   37 articles
  2023   210 articles
  2022   481 articles
  2021   752 articles
  2020   455 articles
  2019   512 articles
  2018   471 articles
  2017   391 articles
  2016   389 articles
  2015   235 articles
  2014   289 articles
  2013   395 articles
  2012   237 articles
  2011   287 articles
  2010   164 articles
  2009   139 articles
  2008   92 articles
  2007   87 articles
  2006   37 articles
  2005   25 articles
  2004   36 articles
  2003   28 articles
  2002   16 articles
  2001   7 articles
  2000   14 articles
  1999   13 articles
  1998   18 articles
  1997   10 articles
  1996   14 articles
  1995   17 articles
  1994   7 articles
  1993   10 articles
  1992   8 articles
  1991   7 articles
  1990   1 article
  1989   5 articles
  1988   4 articles
  1987   5 articles
  1986   1 article
  1985   6 articles
  1984   3 articles
  1983   2 articles
  1982   4 articles
  1981   3 articles
  1980   1 article
  1979   3 articles
  1978   2 articles
  1977   3 articles
  1976   1 article
  1961   1 article
Sorted results: a total of 5935 matches found (search time: 15 ms)
71.
Multimedia analysis and reuse of raw, unedited audio-visual content, known as rushes, is gaining acceptance among a large number of research labs and companies. Several research projects address multimedia indexing, annotation, search, and retrieval in the context of European-funded research, but only the FP6 project RUSHES focuses on automatic semantic annotation, indexing, and retrieval of raw, unedited audio-visual content. Professional content creators and providers, as well as home users, deal with this type of content, so novel technologies for semantic search and retrieval are required. In this paper, we present a summary of the most relevant achievements of the RUSHES project, focusing on specific approaches for automatic annotation as well as the main features of the final RUSHES search engine.
72.
Robots have played an important role in the automation of computer-aided manufacturing. Classical robot control involves an expensive key step of model-based programming. An intuitive way to reduce this expense is to replace programming with machine learning of robot actions from demonstration, where a learner robot learns an action by observing a demonstrator robot performing it. To achieve this learning from demonstration (LFD), different machine learning techniques such as Artificial Neural Networks (ANN), Genetic Algorithms, Hidden Markov Models, and Support Vector Machines can be used. This work focuses exclusively on ANNs. Since ANNs have many standard architectural variations divided into two basic computational categories, recurrent networks and feed-forward networks, representative networks from each category have been selected for study: the Feed-Forward Multilayer Perceptron (FF) network for the feed-forward category, and the Elman (EL) and Nonlinear Autoregressive Exogenous Model (NARX) networks for the recurrent category. The main objective of this work is to identify the most suitable neural architecture for applying LFD to learning different robot actions. The sensor and actuator streams of the demonstrated action are used as training data for ANN learning, and learning capability is measured by comparing the error between the demonstrator stream and the corresponding learner stream. To achieve fairness in comparison, three steps have been taken. First, Dynamic Time Warping is used to measure the error between demonstrator and learner streams, which gives resilience against translation in time. Second, comparison statistics are drawn between the best, rather than weight-equal, configurations of the competing architectures, so that no architecture's learning capability is artificially handicapped. Third, each configuration's error is calculated as the average of ten trials over all possible learning sequences with random weight initialization, so that the error value is independent of any particular learning sequence or set of initial weights. Six experiments are conducted to obtain a performance pattern for each architecture; in each experiment, nine different robot actions were tested. The resulting error statistics show that the NARX architecture is the most suitable for this learning problem, whereas the Elman architecture is the least suitable. Interestingly, the computationally simpler MLP yields much lower error than the Elman architecture and only slightly higher error than the NARX architecture, despite both being computationally more powerful.
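As a rough illustration of the dynamic-time-warping error measure mentioned in the abstract above, the sketch below computes a classic DTW distance between a demonstrator stream and a learner stream. The function name and the toy signals are illustrative assumptions, not material from the paper.

```python
# Minimal dynamic time warping (DTW) sketch for comparing a demonstrator
# sensor/actuator stream with the stream reproduced by a learner network.
# All names and example data are illustrative, not from the paper.
import numpy as np

def dtw_distance(demonstrator, learner):
    """Return the DTW alignment cost between two 1-D streams."""
    n, m = len(demonstrator), len(learner)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(demonstrator[i - 1] - learner[j - 1])
            # Extend the cheapest of the three possible alignments.
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

if __name__ == "__main__":
    demo = np.sin(np.linspace(0, 2 * np.pi, 50))            # demonstrated action
    learned = np.sin(np.linspace(0, 2 * np.pi, 60) + 0.1)   # learner output, shifted in time
    print("DTW error:", dtw_distance(demo, learned))
```

Because the two streams may differ in length and timing, the DTW cost, rather than a sample-by-sample difference, is what makes the comparison resilient to translation in time.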
73.
In this paper, we introduce and consider a new class of mixed variational inequalities involving four operators, which we call extended general mixed variational inequalities. Using the resolvent operator technique, we establish the equivalence between the extended general mixed variational inequalities and both fixed-point problems and resolvent equations. We use this alternative equivalent formulation to suggest and analyze some iterative methods for solving general mixed variational inequalities, and we study the convergence criteria for the suggested iterative methods under suitable conditions. Our methods of proof are very simple compared with other techniques. The results proved in this paper may be viewed as refinements and important generalizations of previously known results.
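For readers unfamiliar with the resolvent technique mentioned above, the classical fixed-point characterization of a mixed variational inequality is sketched below. This is only the standard special case; the paper's extended problem involves four operators and a more general formulation.

```latex
% Classical resolvent fixed-point characterization (special case; the paper's
% extended general mixed variational inequality is more general).
\[
  u \in H:\quad \langle Tu,\; v - u\rangle + \varphi(v) - \varphi(u) \ge 0
  \quad \forall\, v \in H
\]
\[
  \iff\quad u = J_{\varphi}\bigl[u - \rho\, Tu\bigr],
  \qquad J_{\varphi} := (I + \rho\, \partial\varphi)^{-1},\quad \rho > 0,
\]
% which suggests the fixed-point iteration
\[
  u_{n+1} = J_{\varphi}\bigl[u_n - \rho\, T u_n\bigr].
\]
```

The equivalence with a fixed-point problem is what turns the inequality into an iteration whose convergence can then be analyzed under suitable monotonicity and Lipschitz conditions.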
74.
75.
Two proposed techniques let microprocessors operate at low voltages despite high memory-cell failure rates. They identify and disable defective portions of the cache at two granularities: individual words or pairs of bits. Both techniques use the entire cache during high-voltage operation while sacrificing cache capacity during low-voltage operation to reduce the minimum voltage below 500 mV.
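A toy sketch of the word-granularity idea is shown below: defective words are tracked in a defect map and only the healthy capacity is exposed in low-voltage mode. The class and numbers are illustrative assumptions, not the authors' design; real hardware would hold the defect map in dedicated state discovered by a built-in self test.

```python
# Illustrative word-disable bookkeeping for low-voltage cache operation.
# Hypothetical structure, not the authors' implementation.

class CacheLine:
    WORDS_PER_LINE = 8

    def __init__(self, defective_words=()):
        # Set of word indices that fail below the low-voltage threshold.
        self.defect_map = set(defective_words)

    def usable_words(self, low_voltage: bool) -> int:
        # Full capacity at nominal voltage; skip defective words at low voltage.
        if not low_voltage:
            return self.WORDS_PER_LINE
        return self.WORDS_PER_LINE - len(self.defect_map)

line = CacheLine(defective_words={1, 5})
print(line.usable_words(low_voltage=False))  # 8 words available at nominal voltage
print(line.usable_words(low_voltage=True))   # 6 words available in low-voltage mode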
76.
To conserve space and power and to harness high performance in embedded systems, high utilization of the hardware is required. This can be facilitated through dynamic adaptation of the silicon resources in reconfigurable systems, realizing various customized kernels as execution proceeds. Fortunately, the reconfiguration overheads incurred can be estimated; therefore, if the scheduling of time-consuming kernels also considers these overheads, an overall performance gain can be obtained. We present our policy, experiments, and performance results for customizing and reconfiguring Field-Programmable Gate Arrays (FPGAs) for embedded kernels. Experiments involving EEMBC (EDN Embedded Microprocessor Benchmarking Consortium) and MiBench embedded benchmark kernels show high performance under our main policy when reconfiguration overheads are considered. Our policy reduces the required reconfigurations by more than 50% compared to brute-force solutions, and performs within 25% of the ideal execution time while conserving 60% of the FPGA resources. Alternative strategies to reduce the reconfiguration overhead are also presented and evaluated.
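The core overhead-aware decision can be sketched as a simple rule: load a hardware kernel only when its estimated speedup outweighs the estimated reconfiguration cost. The function and numbers below are hypothetical illustrations, not the paper's actual policy.

```python
# Hedged sketch of an overhead-aware reconfiguration decision.
# Names and timings are illustrative; the paper's policy is more elaborate.

def should_reconfigure(sw_time_ms, hw_time_ms, reconfig_overhead_ms, already_loaded):
    """Use the FPGA kernel only if it pays back its loading cost."""
    if already_loaded:
        return hw_time_ms < sw_time_ms
    return hw_time_ms + reconfig_overhead_ms < sw_time_ms

# Example: a kernel that runs 40 ms in software, 5 ms in hardware,
# with an estimated 20 ms reconfiguration overhead.
print(should_reconfigure(40.0, 5.0, 20.0, already_loaded=False))  # True: 25 ms < 40 ms
print(should_reconfigure(40.0, 5.0, 50.0, already_loaded=False))  # False: overhead dominates
```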
77.
The use of robotics in distributed monitoring applications requires wireless sensors that are deployed efficiently. A very important aspect of sensor deployment is positioning the sensors for sampling at the locations most likely to yield information about the spatio-temporal field of interest, for instance the spread of a forest fire. In this paper, we use mobile robots (agents) that estimate the time-varying spread of wildfires using a distributed multi-scale adaptive sampling strategy. The proposed parametric sampling algorithm, "EKF-NN-GAS", is based on neural networks, the extended Kalman filter (EKF), and greedy heuristics. It combines measurements arriving at different times and taken at different scale lengths, such as from ground, airborne, and spaceborne observation platforms. One advantage of our algorithm is the ability to incorporate robot localization uncertainty, in addition to sensor measurement and field parameter uncertainty, into the same EKF model. We employ potential fields, generated naturally from the estimated fire field distribution, to generate fire-safe trajectories that could be used by rescue vehicles and personnel. The covariance of the EKF is used as a quantitative information measure for selecting the sampling locations most likely to yield optimal information about the sampled field distribution. Neural-network training is used infrequently to generate initial low-resolution estimates of the fire spread parameters. We present simulation and experimental results for reconstructing complex spatio-temporal forest fire field "truth models" approximated by radial basis function (RBF) parameterizations. Compared to a conventional raster-scan approach, our algorithm shows a significant reduction in the time necessary to map the fire field.
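To give a flavour of how a Kalman filter can fuse noisy point measurements into a parameterized RBF field, the sketch below performs a single measurement update for a field that is linear in its weights, and uses the covariance trace as an information measure. It is a simplified stand-in, not the EKF-NN-GAS algorithm itself; the centres, noise levels, and names are assumptions for illustration.

```python
# Single Kalman update for RBF field weights from one point measurement.
# Simplified illustration only; the paper's EKF additionally handles robot
# localization uncertainty and time-varying spread parameters.
import numpy as np

centres = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # fixed RBF centres (assumed)
sigma = 0.5                                                 # RBF width (assumed)

def basis(x):
    """RBF activations at location x (the field is linear in its weights)."""
    d2 = np.sum((centres - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

# State: RBF weights with covariance P; R is the sensor noise variance.
w = np.zeros(len(centres))
P = np.eye(len(centres)) * 10.0
R = 0.1

def update(w, P, x, z):
    """Standard Kalman measurement update for the linear-in-weights field."""
    H = basis(x)[None, :]                    # 1 x K measurement Jacobian
    S = H @ P @ H.T + R                      # innovation covariance
    K = P @ H.T / S                          # Kalman gain (K x 1)
    w = w + (K * (z - H @ w)).ravel()        # corrected weights
    P = P - K @ H @ P                        # corrected covariance
    return w, P

# One measurement of field intensity 2.3 taken at location (0.2, 0.1).
w, P = update(w, P, np.array([0.2, 0.1]), 2.3)
print(w, np.trace(P))  # a shrinking covariance trace signals information gained
```

A greedy sampling strategy of the kind the abstract describes would evaluate candidate locations and move toward the one whose predicted update reduces this covariance measure the most.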
78.
79.
Objective: In this paper, we present findings from an empirical study that was aimed at identifying the relative "perceived value" of CMMI level 2 specific practices based on the perceptions and experiences of practitioners of small and medium size companies. The objective of this study is to identify the extent to which a particular CMMI practice is used in order to develop a finer-grained framework, which encompasses the notion of perceived value within specific practices. Method: We used face-to-face questionnaire based survey sessions as the main approach to collecting data from 46 software development practitioners from Malaysia and Vietnam. We asked practitioners to choose and rank CMMI level 2 practices against five types of assessments (high, medium, low, zero, or do not know). From this, we have proposed the notion of 'perceived value' associated with each practice. Results: We have identified three 'requirements management' practices as having a 'high perceived value'. The results also reveal the similarities and differences in the perceptions of Malaysian and Vietnamese practitioners with regard to the relative values of different practices of CMMI level 2 process areas. Conclusions: Small and medium size companies should not be seen as being "at fault" for not adopting CMMI; instead, the Software Process Improvement (SPI) implementation approaches and their transition mechanisms should be improved. We argue that research into "tailoring" existing process capability maturity models may address some of the issues of small and medium size companies.
80.
The effect of learning from 2D and 3D educational content on memory has been studied using electroencephalography (EEG) brain signals. The hypothesis is that 3D materials are better than 2D materials for learning and memory recall. To test this hypothesis, we propose a classification system that predicts true or false recall for short-term memory (STM) and long-term memory (LTM) after learning with either 2D or 3D educational content. For this purpose, EEG brain signals are recorded during learning and testing; the signals are then analysed using different types of features in various frequency bands. The features are then fed into a support vector machine (SVM)-based classifier. The experimental results indicate that learning and memory recall using 2D and 3D content do not differ significantly for either STM or LTM.
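The pipeline described above can be sketched roughly as band-power feature extraction followed by an SVM. The code below uses synthetic signals, hypothetical band limits, and an assumed sampling rate; it is not the authors' pipeline.

```python
# Rough sketch of an EEG recall classifier: band-power features + SVM.
# Synthetic data, band limits, and sampling rate are illustrative assumptions.
import numpy as np
from scipy.signal import welch
from sklearn.svm import SVC

FS = 256  # sampling rate in Hz (assumed)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}

def band_power_features(epoch):
    """Average spectral power per band for one EEG epoch (n_channels x n_samples)."""
    freqs, psd = welch(epoch, fs=FS, nperseg=FS)
    feats = []
    for lo, hi in BANDS.values():
        mask = (freqs >= lo) & (freqs < hi)
        feats.append(psd[:, mask].mean(axis=1))
    return np.concatenate(feats)

# Synthetic stand-in data: 40 epochs, 8 channels, 2 s each; label 1 = true recall.
rng = np.random.default_rng(0)
epochs = rng.standard_normal((40, 8, 2 * FS))
labels = rng.integers(0, 2, size=40)

X = np.array([band_power_features(e) for e in epochs])
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```

With real recordings, the per-band, per-channel powers would replace the random epochs, and held-out data would be used to test whether 2D and 3D learning conditions are actually separable.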