71.
The visual shape parameters and aesthetic aspects of a product are among the crucial factors in its market success. The type and value of the shape parameters play an important role in a product's visual appearance, and designers tend to weigh these parameters carefully. The aesthetics of a product have concerned researchers as much as its electromechanical design. The Kano model has proven a useful tool for establishing the relationship between performance criteria and customer satisfaction; to achieve the desired customer satisfaction, the weight of each product criterion is determined using the Kano model. This study presents an integrative design approach combining the Kano model, the Taguchi method, and grey relational analysis to obtain the optimal combination of shape parameters and aesthetic aspects. Prioritized criteria for aesthetic attributes are extracted through the proposed methodology. A case study of evolving a car profile is presented.
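As a rough illustration of how the grey relational step of such an integrated approach is typically computed, the sketch below normalizes trial responses, forms grey relational coefficients against an ideal reference sequence, and aggregates them with Kano-derived criterion weights. This is a minimal sketch following common textbook conventions, not the authors' implementation; the function name, the distinguishing coefficient rho = 0.5, and the example scores and weights are all illustrative assumptions.

```python
import numpy as np

def grey_relational_grades(responses, weights, rho=0.5):
    """Rank experimental runs (e.g., Taguchi trials) by grey relational
    grade against an ideal reference sequence.

    responses: (runs x criteria) matrix of measured aesthetic scores
    weights:   per-criterion weights (e.g., Kano-derived importance)
    rho:       distinguishing coefficient, conventionally 0.5
    """
    # Normalize each criterion to [0, 1], larger-is-better form.
    lo, hi = responses.min(axis=0), responses.max(axis=0)
    norm = (responses - lo) / (hi - lo)

    # Deviation of each run from the ideal sequence (all ones).
    delta = np.abs(1.0 - norm)
    dmin, dmax = delta.min(), delta.max()

    # Grey relational coefficients, then weighted grade per run.
    coeff = (dmin + rho * dmax) / (delta + rho * dmax)
    return coeff @ np.asarray(weights)

# Hypothetical example: 4 trial runs scored on 3 aesthetic criteria,
# with Kano-derived weights summing to 1.
scores = np.array([[7.0, 5.0, 8.0],
                   [6.0, 9.0, 6.0],
                   [8.0, 7.0, 7.0],
                   [5.0, 6.0, 9.0]])
grades = grey_relational_grades(scores, weights=[0.5, 0.3, 0.2])
print("best trial:", int(np.argmax(grades)))  # run with the highest grade
```

The run with the highest grade is the candidate optimal combination of shape-parameter levels; in a full Taguchi study, the per-factor effect on the grade would then be analyzed to pick the best level of each factor.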
72.
We propose tackling a "mini challenge" problem: a nontrivial verification effort that can be completed in 2–3 years and will help establish the notational standards, common formats, and benchmark libraries that the verification community will need in order to collaborate on meeting Hoare's 15-year verification grand challenge. We believe a suitable candidate for such a mini challenge is the development of a filesystem that is verifiably reliable and secure. The paper argues why a filesystem is the right candidate for a mini challenge and describes a project in which we are building a small embedded filesystem for use with flash memory. The work described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
73.
Microsystem Technologies - Owing to rapid technological development, people increasingly depend on computers and other digital equipment across many areas of application. Therefore,...
74.
This article describes RISCBOT, a modular 802.11b-enabled mobile autonomous robot built at the RISC lab (the Interdisciplinary Robotics, Intelligent Sensing and Control laboratory) of the University of Bridgeport; the name combines the lab's acronym with the 'bot' of robot. RISCBOT localizes itself, fulfills WWW-enabled online user requests, and navigates to various rooms using a visual recognition algorithm. The article describes the robot's mechanical design, hardware, and software algorithms, as well as the web-based interface for communicating with it.
75.
With the advancement of image-acquisition devices and social-networking services, a huge volume of image data is generated. Using various image- and video-processing applications, these images can be manipulated, tampering with the originals. Tampered images are a prime source of fake news, can defame individuals, and, when admitted as evidence, can mislead law-enforcement bodies. Hence, the authenticity of an image must be verified before it is relied upon. The literature reports techniques that verify an image's authenticity on the basis of noise inconsistency; however, these works suffer from confusion between edges and noise, require post-processing for localization, or need prior knowledge about the image. To address these limitations, a noise-inconsistency-based technique is presented here to detect and localize forged regions in an image. The work consists of three major steps: pre-processing, noise estimation, and post-processing. Two publicly available datasets are used for the experiments, and results are reported as pixel-level precision, recall, accuracy, and F1-score, then compared with recent state-of-the-art techniques. The average accuracy of the proposed work across the datasets is 91.70%, the highest among the compared techniques.
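The paper's exact pipeline is not reproduced here, but the general noise-inconsistency idea can be sketched as follows: estimate a local noise level for each block of the image and flag blocks whose level deviates strongly from the image-wide median. The block size, the high-pass residual used in place of a wavelet detail band, and the threshold k below are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def blockwise_noise_map(gray, block=32):
    """Estimate a noise level per block from a high-pass residual,
    using the robust MAD-to-std conversion (a Donoho-style estimator,
    here on a simple 3x3 local-mean residual instead of a wavelet band)."""
    h, w = gray.shape
    pad = np.pad(gray.astype(float), 1, mode="reflect")
    local_mean = sum(pad[i:i + h, j:j + w]
                     for i in range(3) for j in range(3)) / 9.0
    resid = gray - local_mean  # high-frequency content, mostly noise

    rows, cols = h // block, w // block
    sigma = np.zeros((rows, cols))
    for r in range(rows):
        for c in range(cols):
            blk = resid[r * block:(r + 1) * block,
                        c * block:(c + 1) * block]
            sigma[r, c] = np.median(np.abs(blk)) / 0.6745  # MAD -> std
    return sigma

def flag_inconsistent_blocks(sigma, k=2.5):
    """Mark blocks whose noise level is a strong outlier relative to
    the image-wide median: a common inconsistency criterion."""
    med = np.median(sigma)
    mad = np.median(np.abs(sigma - med)) + 1e-9
    return np.abs(sigma - med) / mad > k
```

Note the limitation the abstract mentions: textured or edge-rich blocks inflate the residual and can be confused with noisy ones, which is exactly what the pre- and post-processing steps are meant to mitigate.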
76.
WALRUS: a similarity retrieval algorithm for image databases
Approaches for content-based image querying typically extract a single signature from each image based on color, texture, or shape features. The images returned as the query result are then the ones whose signatures are closest to the signature of the query image. While efficient for simple images, such methods do not work well for complex scenes, since they fail to retrieve images that match the query only partially, that is, where only certain regions of the image match. As a result, images that may be semantically very similar to the query image, because they contain the same objects, are discarded. The problem becomes even more apparent when we consider scaled or translated versions of the similar objects. We propose WALRUS (wavelet-based retrieval of user-specified scenes), a novel similarity retrieval algorithm that is robust to scaling and translation of objects within an image. WALRUS employs a novel similarity model in which each image is first decomposed into its regions, and the similarity measure between a pair of images is then defined to be the fraction of the area of the two images covered by matching regions from the images. In order to extract regions from an image, WALRUS considers sliding windows of varying sizes and then clusters them based on the proximity of their signatures. An efficient dynamic programming algorithm is used to compute wavelet-based signatures for the sliding windows. Experimental results on real-life data sets corroborate the effectiveness of WALRUS's similarity model.
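As a toy illustration of the WALRUS similarity model, the sketch below computes a fixed-length coarse Haar approximation for sliding windows of several sizes, then defines similarity as the fraction of combined image area covered by windows whose signatures lie close to a window in the other image. It omits the paper's clustering of windows into regions and its dynamic-programming signature computation, and it counts overlapping windows' areas more than once; the window sizes, step, and eps threshold are assumptions.

```python
import numpy as np

def haar_signature(window, out=4):
    """Coarse Haar approximation: average 2x2 blocks until the window
    is reduced to an out x out grid. The fixed output size makes the
    signature comparable across window sizes, hence robust to scaling."""
    a = window.astype(float)
    while a.shape[0] > out:
        a = (a[0::2, 0::2] + a[1::2, 0::2] +
             a[0::2, 1::2] + a[1::2, 1::2]) / 4.0
    return a.ravel()

def window_signatures(gray, sizes=(16, 32), step=8):
    """Signatures for sliding windows of several sizes."""
    h, w = gray.shape
    return [((y, x, s), haar_signature(gray[y:y + s, x:x + s]))
            for s in sizes
            for y in range(0, h - s + 1, step)
            for x in range(0, w - s + 1, step)]

def area_match_similarity(sigs_a, area_a, sigs_b, area_b, eps=10.0):
    """Fraction of the two images' combined area covered by windows
    matching some window of the other image (overlaps overcounted)."""
    def matched_area(src, dst):
        return sum(s * s for (y, x, s), sig in src
                   if any(np.linalg.norm(sig - sig2) < eps
                          for _, sig2 in dst))
    return (matched_area(sigs_a, sigs_b) + matched_area(sigs_b, sigs_a)) \
        / float(area_a + area_b)
```

Because a translated object simply shifts which windows match, and the fixed-size signature absorbs moderate scaling, this area-fraction measure retains the robustness properties the abstract describes, at the cost of a brute-force quadratic matching step that WALRUS avoids by clustering.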
79.
There is growing interest in algorithms for processing and querying continuous data streams (i.e., data seen only once, in a fixed order) with limited memory resources. In its most general form, a data stream is actually an update stream, comprising data-item deletions as well as insertions. Such massive update streams arise naturally in several application domains (e.g., monitoring of large IP network installations or processing of retail-chain transactions). Estimating the cardinality of set expressions defined over several (possibly distributed) update streams is perhaps one of the most fundamental query classes of interest; for example, such a query may ask: "How many distinct IP source addresses were seen in packets passing through both routers R1 and R2 but not router R3?" Earlier work addressed only very restricted forms of this problem, focusing solely on the special case of insert-only streams and specific operators (e.g., union). In this paper, we propose the first space-efficient algorithmic solution for estimating the cardinality of full-fledged set expressions over general update streams. Our estimation algorithms are probabilistic in nature and rely on a novel, hash-based synopsis data structure, termed the 2-level hash sketch. We demonstrate how our 2-level hash sketch synopses can be used to provide low-error, high-confidence estimates for the cardinality of set expressions (including operators such as set union, intersection, and difference) over continuous update streams, using only space that is significantly sublinear in the sizes of the streaming input (multi)sets. Furthermore, our estimators never require rescanning or resampling of past stream items, regardless of the number of deletions in the stream. We also present lower bounds for the problem, demonstrating that the space usage of our estimation algorithms is within small factors of the optimal. Finally, we propose an optimized, time-efficient stream synopsis (based on 2-level hash sketches) that provides similar, strong accuracy-space guarantees while requiring only guaranteed logarithmic maintenance time per update, thus making our methods applicable to truly rapid-rate data streams. Our results from an empirical study of our synopsis and estimation techniques verify the effectiveness of our approach.
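A much-simplified, single-synopsis illustration of the core mechanism is sketched below: a first-level hash assigns each item to a geometric level, and each level keeps a total counter plus one counter per bit of the item identifier. Because every field is a counter that is incremented on insert and decremented on delete, the synopsis reflects the net multiset without ever rescanning past items, and a level holding exactly one distinct item can still be recognized after deletions. This is an assumption-laden sketch of the idea, not the paper's full estimator (which combines many independent sketches to derive set-expression cardinalities); items are assumed to be integer identifiers.

```python
import hashlib

class TwoLevelHashSketch:
    """Simplified illustration only; not the paper's full structure."""

    def __init__(self, levels=32, bits=32, seed=0):
        self.seed = seed
        self.bits = bits
        self.total = [0] * levels                       # net count per level
        self.bit = [[0] * bits for _ in range(levels)]  # per-bit counters

    def _level(self, item):
        # First-level hash: trailing zero bits ~ Geometric(1/2), so
        # level l receives items with probability about 2^-(l+1).
        h = int.from_bytes(hashlib.sha1(
            f"{self.seed}:{item}".encode()).digest()[:8], "big")
        l = 0
        while h & 1 == 0 and l < len(self.total) - 1:
            h >>= 1
            l += 1
        return l

    def update(self, item, delta):
        """delta = +1 for an insertion, -1 for a deletion."""
        l = self._level(item)
        self.total[l] += delta
        for b in range(self.bits):
            if (item >> b) & 1:
                self.bit[l][b] += delta

    def level_is_singleton(self, l):
        """Heuristic singleton test: with one distinct item left at the
        level, every bit counter is either 0 or equal to the total."""
        t = self.total[l]
        return t > 0 and all(c in (0, t) for c in self.bit[l])
```

Since all fields are counters, the sketch of a union of two streams is just the component-wise sum of two sketches built with the same hash seed, which is the property that lets set expressions over distributed update streams be estimated from locally maintained synopses.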
80.
Despite years of study, failure prediction remains an open problem, especially in large-scale systems composed of vast numbers of components. In this paper, we present a dynamic meta-learning framework for failure prediction. It aims not only to provide reasonable prediction accuracy but also to be of practical use in realistic environments. Two key techniques are developed to address the technical challenges of failure prediction. One is meta-learning, which boosts prediction accuracy by combining the benefits of multiple predictive techniques. The other is a dynamic approach that obtains failure patterns from a changing training set and extracts effective rules by actively monitoring prediction accuracy at runtime. We demonstrate the effectiveness and practical use of this framework by means of real system logs collected from the production Blue Gene/L systems at Argonne National Laboratory and the San Diego Supercomputer Center. Our case studies indicate that the proposed mechanism provides reasonable prediction accuracy, forecasting up to 82% of failures with a runtime overhead of less than 1.0 min.
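A minimal sketch of the weighted-voting flavor of such a dynamic meta-learning loop is shown below: several base predictors vote on an incoming window of events, votes are weighted by each predictor's recent accuracy, and the weights refresh as ground-truth outcomes arrive, so the ensemble tracks a changing system. The class and method names are illustrative assumptions, and the sketch omits the paper's failure-pattern mining and rule extraction.

```python
from collections import deque

class DynamicMetaPredictor:
    """Weighted-majority meta-predictor over several base predictors.

    Base predictors are assumed to expose predict(event_window) -> bool
    (True = failure expected); this interface is hypothetical."""

    def __init__(self, base_predictors, history=200):
        self.base = list(base_predictors)
        # Rolling record of each predictor's recent hits (1) and misses (0).
        self.recent = {p: deque(maxlen=history) for p in self.base}
        self._last_votes = {}

    def _weight(self, p):
        r = self.recent[p]
        return sum(r) / len(r) if r else 0.5  # neutral prior when untested

    def predict(self, event_window):
        """Raise a failure alarm if the accuracy-weighted vote mass of
        predictors saying 'failure' exceeds half the total mass."""
        self._last_votes = {p: p.predict(event_window) for p in self.base}
        total = sum(self._weight(p) for p in self.base)
        score = sum(self._weight(p)
                    for p, v in self._last_votes.items() if v)
        return score > 0.5 * total

    def observe(self, actual_failure):
        """Feed back ground truth (call after each predict) so the
        per-predictor accuracies, and hence the meta-weights, adapt."""
        for p, v in self._last_votes.items():
            self.recent[p].append(1 if v == actual_failure else 0)
```

The runtime monitoring is what makes the scheme dynamic: a base predictor that drifts out of step with the system's current failure patterns loses vote weight automatically instead of requiring an offline retraining pass.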