61.
Creating algorithms capable of predicting the perceived quality of a visual stimulus defines the field of objective visual quality assessment (QA). Objective QA has received tremendous attention in the recent past, with many successful algorithms proposed for the purpose. Our concern here, however, is not with the past; in this paper we discuss our vision for the future of visual quality assessment research. We first introduce the area of quality assessment and state its relevance. We describe current standards for gauging algorithmic performance and define terms that we will use throughout this paper. We then journey through 2D image and video quality assessment, summarizing recent approaches and discussing in detail our vision for future research on full-reference and no-reference 2D image and video quality assessment. From there, we move on to the currently popular area of 3D QA, covering recent databases, algorithms and 3D quality of experience. This still-nascent technology offers tremendous scope for research activity, and we summarize the main directions. We then turn to more esoteric topics such as algorithmic assessment of aesthetics in natural images and in art, discussing current research and hypothesizing about possible paths to tread. Towards the end of this article, we discuss some other areas of interest, including high-definition (HD) quality assessment and immersive environments, before summarizing interesting avenues for future work in multimedia (i.e., audio-visual) quality assessment.
62.
In this paper, we focus on information extraction from optical character recognition (OCR) output. Since OCR output inherently contains many errors, we present robust algorithms for information extraction from OCR lattices rather than relying on the top-choice (1-best) OCR output alone. Specifically, we address the challenge of named entity detection in noisy OCR output and show that searching for named entities in the recognition lattice significantly improves detection accuracy over 1-best search. While lattice-based named entity (NE) detection improves NE recall from OCR output, there are two problems with this approach: (1) the number of false alarms can be prohibitive for certain applications and (2) lattice-based search is computationally more expensive than 1-best NE lookup. To mitigate these challenges, we present techniques for reducing false alarms using confidence measures and for reducing the amount of computation involved in performing the NE search. Furthermore, to demonstrate that our techniques are applicable across multiple domains and languages, we experiment with optical character recognition systems for videotext in English and scanned handwritten text in Arabic.
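The gain of lattice search over 1-best lookup can be illustrated with a toy sketch. The lattice below is a deliberately simplified stand-in (one slot of alternative characters per position, with hypothetical confidence values, and a tiny gazetteer standing in for an NE detector); real OCR lattices are DAGs and real NE detection is far richer, but the contrast between the two search strategies is the same:

```python
from itertools import product

# Toy OCR lattice: one slot per character position; each slot holds
# alternative character hypotheses with (hypothetical) confidences.
lattice = [
    [("P", 0.9), ("F", 0.1)],
    [("a", 0.8), ("o", 0.2)],
    [("n", 0.6), ("r", 0.4)],   # the recognizer's top choice here is wrong
    [("i", 0.7), ("l", 0.3)],
    [("s", 0.9), ("5", 0.1)],
]
gazetteer = {"Paris"}           # stand-in for a named-entity lexicon

def one_best(lattice):
    """Read off only the top-choice hypothesis at every position."""
    return "".join(max(slot, key=lambda h: h[1])[0] for slot in lattice)

def lattice_ne_search(lattice, gazetteer, min_conf=0.05):
    """Search every path through the lattice for gazetteer entities.
    The confidence threshold (product of per-character scores) is the
    false-alarm control: raising min_conf discards low-confidence paths."""
    hits = set()
    for path in product(*lattice):
        word = "".join(ch for ch, _ in path)
        conf = 1.0
        for _, p in path:
            conf *= p
        if word in gazetteer and conf >= min_conf:
            hits.add(word)
    return hits
```

Here 1-best yields the non-entity "Panis", while the lattice search recovers "Paris" on a lower-ranked path, mirroring the recall improvement described above; the `min_conf` threshold plays the role of the confidence measure used to curb false alarms.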
63.
The task of the robot in localization is to find out where it is, through sensing and motion. In environments which possess relatively few features that enable a robot to unambiguously determine its location, global localization algorithms can result in 'multiple hypotheses' of a robot's location. This is inevitable with global localization algorithms, as the local environment seen by a robot repeats at other parts of the map. Thus, for effective localization, the robot has to be actively guided to those locations where there is a maximum chance of eliminating most of the ambiguous states, which is often referred to as 'active localization'. When extended to multi-robot scenarios where all robots possess more than one hypothesis of their position, there is an opportunity to do better by using robots, in addition to obstacles, as 'hypothesis-resolving agents'. The paper presents a unified framework which accounts for the map structure as well as measurements amongst robots, while guiding a set of robots to locations where they can singularize to a unique state. The strategy shepherds the robots to places where the probability of obtaining a unique hypothesis for a set of multiple robots is a maximum. Another aspect of the framework demonstrates the idea of dispatching localized robots to locations where they can assist a maximum of the remaining unlocalized robots to overcome their ambiguity, termed 'coordinated localization'. The appropriateness of our approach is demonstrated empirically in both simulation and real-time experiments (on Amigo-bots) and its efficacy verified. Extensive comparative analysis portrays the advantage of the current method over others that do not perform active localization in a multi-robot sense. It also portrays the performance gain from considering map structure and robot placement to actively localize over methods that consider only one of them or neither. Theoretical backing stems from the proven completeness of the method for a large category of diverse environments.
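The core of active localization, choosing where to sense next so as to eliminate the most hypotheses, can be sketched in a few lines. Everything here is hypothetical scaffolding (the pose/action/signature representation is invented for illustration, not taken from the paper): we score each candidate action by the expected number of hypotheses that would survive the resulting observation, and pick the action minimizing that ambiguity:

```python
from collections import Counter

def best_action(hypotheses, actions, signature):
    """Pick the action whose predicted sensor readings best separate the
    pose hypotheses. `signature(h, a)` is the (hypothetical) reading the
    robot would obtain after taking action `a` if hypothesis `h` were true."""
    def expected_survivors(action):
        counts = Counter(signature(h, action) for h in hypotheses)
        # If a reading is shared by c hypotheses, observing it leaves c
        # survivors; weighting by the chance of each reading gives sum(c^2)/n.
        return sum(c * c for c in counts.values()) / len(hypotheses)
    return min(actions, key=expected_survivors)
```

In a corridor whose two ends look different, an action that moves the robot toward a distinctive end yields distinct predicted readings per hypothesis and is chosen over one whose readings are identical everywhere. The multi-robot extension in the paper enlarges the signature to include inter-robot measurements, so co-located robots themselves become disambiguating landmarks.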
64.
65.
Networks on chip must deliver high bandwidth at low latencies while keeping within a tight power envelope. Using express virtual channels (EVCs) for flow control improves the energy-delay-throughput trade-off by letting packets bypass intermediate routers, but EVCs have key limitations. Nochi (NoC with hybrid interconnect) overcomes these limitations by transporting data payloads and control information on separate planes, optimized for bandwidth and latency respectively.
66.
Hammerstein system identification is considered in the presence of preload and dead-zone nonlinearities. The discontinuous nature of these nonlinearities makes it difficult to obtain a single system parameterization that involves all unknown parameters linearly (those of the linear subsystem and those of the nonlinearity). Therefore, system identification has generally been dealt with using multiple-stage schemes involving different parameterizations and several data-acquisition experiments. However, the consistency issue has only been solved under restrictive assumptions on the identified system. In this paper, a new identification scheme is designed and shown to be consistent under mild assumptions.
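To make the discontinuity concrete, here is a minimal sketch of the two nonlinearities and a Hammerstein cascade built from one of them. The slopes, band width, preload magnitude, and the first-order linear block are all illustrative choices, not the paper's model:

```python
def dead_zone(u, d=0.5):
    """Zero output inside the band [-d, d]; unit-slope linear outside."""
    if u > d:
        return u - d
    if u < -d:
        return u + d
    return 0.0

def preload(u, p=0.3):
    """Jump discontinuity of magnitude p at the origin (unit slope elsewhere)."""
    if u > 0:
        return u + p
    if u < 0:
        return u - p
    return 0.0

def hammerstein(inputs, f, a=0.8, b=1.0):
    """Hammerstein structure: static nonlinearity f followed by a linear
    block, here an illustrative first-order filter y[k] = a*y[k-1] + b*f(u[k])."""
    y, ys = 0.0, []
    for u in inputs:
        y = a * y + b * f(u)
        ys.append(y)
    return ys
```

Note why a single linear parameterization is hard to obtain: the output of `dead_zone` switches between three regimes depending on the unknown band width `d`, so the regressor itself depends discontinuously on a parameter to be identified.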
67.
The objective of this study is to explore the groundwater availability for agriculture in the Musi basin. Remote sensing data and a geographic information system were used to locate potential zones for groundwater in the basin. Various maps (i.e., base, hydrogeomorphological, geological, structural, drainage, slope, land use/land cover and groundwater prospect zones) were prepared using the remote sensing data along with the existing maps. The groundwater availability of the basin is qualitatively classified into different classes (i.e., very good, good, moderate, poor and nil) based on its hydrogeomorphological conditions. The land use/land cover map was prepared for the Kharif season using a digital classification technique with limited ground truth for mapping irrigated areas in the Musi basin. The alluvial plain, in-filled valley, flood plain and deeply buried pediplain were successfully delineated and shown as the prospective zones of groundwater.
68.
Lexical states in JavaCC provide a powerful mechanism to scan regular expressions in a context-sensitive manner. But lexical states also make it hard to reason about the correctness of the grammar. We first categorize the related correctness issues into two classes: errors and warnings. We then extend traditional context-sensitive and context-insensitive analyses to identify errors and warnings in context-free grammars. We have implemented these analyses as a standalone tool (LSA), the first of its kind, to identify errors and warnings in JavaCC grammars. The LSA tool outputs a graph that depicts the grammar and the error transitions. Importantly, it can also generate counter-example strings that can be used to establish the errors. We have used LSA to analyze a host of open-source JavaCC grammar files to good effect. Copyright © 2015 John Wiley & Sons, Ltd.
69.
It is a well-known result in the vision literature that the motions of independently moving objects viewed by an affine camera lie on affine subspaces of dimension four or less. As a result, a large number of recently proposed motion segmentation algorithms model the problem as one of clustering the trajectory data to its corresponding affine subspace. While these algorithms are elegant in formulation and achieve near-perfect results on benchmark datasets, they fail to address certain very key real-world challenges, including perspective effects and motion degeneracies. Within a robotics and autonomous vehicle setting, the relative configuration of the robot and moving object will frequently be degenerate, leading to a failure of subspace clustering algorithms. On the other hand, while gestalt-inspired motion similarity algorithms have been used for motion segmentation, in the moving-camera case they tend to over-segment or under-segment the scene depending on their parameter values. In this paper we present a principled approach that incorporates the strengths of both approaches into a cohesive motion segmentation algorithm capable of dealing with the degenerate cases, where camera motion follows that of the moving object. We first generate a set of prospective motion models for the various moving and stationary objects in the video sequence by a RANSAC-like procedure. Then, we incorporate affine and long-term gestalt-inspired motion similarity constraints into a multi-label Markov random field (MRF). Its inference leads to an over-segmentation, where each label belongs to a particular moving object or the background. This is followed by a model selection step where we merge clusters based on a novel motion coherence constraint we call in-frame shear, which tracks the in-frame change in orientation and distance between the clusters, leading to the final segmentation. This over-segmentation is deliberate and necessary, allowing us to assess the relative motion between the motion models, which we believe to be essential in dealing with degenerate motion scenarios. We present results on the Hopkins-155 benchmark motion segmentation dataset [27], as well as several on-road scenes where camera and object motion are near identical. We show that our algorithm is competitive with the state-of-the-art algorithms on [27] and exceeds them substantially on the more realistic on-road sequences.
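The in-frame shear idea, tracking how the distance and orientation between two clusters change within the image frame, admits a simple sketch. This is a hypothetical formulation built only from the description above (centroid trajectories, first vs. last frame, invented tolerance values), not the paper's actual definition:

```python
import math

def in_frame_shear(c1_traj, c2_traj):
    """Change in distance and orientation of the line joining two cluster
    centroids between the first and last frame. Each trajectory is a list
    of (x, y) centroid positions, one per frame."""
    (x1a, y1a), (x1b, y1b) = c1_traj[0], c1_traj[-1]
    (x2a, y2a), (x2b, y2b) = c2_traj[0], c2_traj[-1]
    d0 = math.hypot(x2a - x1a, y2a - y1a)   # separation in first frame
    d1 = math.hypot(x2b - x1b, y2b - y1b)   # separation in last frame
    a0 = math.atan2(y2a - y1a, x2a - x1a)   # orientation in first frame
    a1 = math.atan2(y2b - y1b, x2b - x1b)   # orientation in last frame
    return abs(d1 - d0), abs(a1 - a0)

def should_merge(c1_traj, c2_traj, dist_tol=2.0, ang_tol=0.05):
    """Merge two clusters when their in-frame geometry stays rigid:
    low shear suggests they belong to the same moving object."""
    dd, da = in_frame_shear(c1_traj, c2_traj)
    return dd < dist_tol and da < ang_tol
```

Two clusters translating together exhibit near-zero shear and merge, while a cluster receding from another accumulates distance change and stays separate, which is how the model-selection step would collapse the deliberate over-segmentation.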
70.
As service-oriented computing grows, so does the role of e-contracts in helping business partners automate contractual agreements and relationships. The key challenge is to translate traditional contracts into executable e-contracts in a way that facilitates runtime monitoring and management. As research in this area progresses, organizations will have different approaches for modeling, implementing, and managing e-contracts. For now, developers must contend with several key research issues and challenges.