51.
In this paper, we focus on information extraction from optical character recognition (OCR) output. Since OCR output inherently contains many errors, we present robust algorithms that extract information from OCR lattices rather than relying solely on the top-choice (1-best) OCR output. Specifically, we address the challenge of named entity detection in noisy OCR output and show that searching for named entities in the recognition lattice significantly improves detection accuracy over 1-best search. While lattice-based named entity (NE) detection improves NE recall from OCR output, this approach has two problems: (1) the number of false alarms can be prohibitive for certain applications, and (2) lattice-based search is computationally more expensive than 1-best NE lookup. To mitigate these challenges, we present techniques for reducing false alarms using confidence measures and for reducing the computation involved in the NE search. Furthermore, to demonstrate that our techniques apply across multiple domains and languages, we experiment with OCR systems for videotext in English and scanned handwritten text in Arabic.
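The core idea of lattice-based NE search can be illustrated with a minimal sketch (the lattice representation, gazetteer, and confidence threshold below are hypothetical simplifications, not the paper's actual data structures): the 1-best path keeps only the top hypothesis per position, while the lattice search also inspects lower-ranked alternatives, recovering entities the 1-best path misses, with a confidence threshold to limit false alarms.

```python
# Hypothetical sketch: each lattice slot holds (hypothesis, confidence)
# alternatives; the 1-best path takes the top entry of every slot.

GAZETTEER = {"london", "paris"}

def one_best(lattice):
    """The conventional approach: keep only the top hypothesis per slot."""
    return [max(slot, key=lambda h: h[1])[0] for slot in lattice]

def lattice_ne_hits(lattice, min_conf=0.2):
    """Return (position, word, conf) for gazetteer words found in ANY
    alternative whose confidence clears the false-alarm threshold."""
    hits = []
    for i, slot in enumerate(lattice):
        for word, conf in slot:
            if word.lower() in GAZETTEER and conf >= min_conf:
                hits.append((i, word, conf))
    return hits

lattice = [
    [("He", 0.9), ("Me", 0.1)],
    [("visited", 0.8), ("visitod", 0.2)],
    [("Lenden", 0.5), ("London", 0.45)],  # 1-best misrecognizes the entity
]
```

Here the 1-best output contains the misrecognized "Lenden", so a 1-best lookup finds no entity, while the lattice search recovers "London" from the second-ranked alternative.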
52.
In localization, the robot's task is to determine where it is through sensing and motion. In environments with relatively few features that allow a robot to unambiguously determine its location, global localization algorithms can yield multiple hypothesized locations. This is inevitable, since the local environment seen by the robot repeats at other parts of the map. For effective localization, the robot must therefore be actively guided to locations offering the best chance of eliminating ambiguous states, an approach often referred to as 'active localization'. When extended to multi-robot scenarios in which every robot holds more than one hypothesis of its position, there is an opportunity to do better by using the robots themselves, in addition to obstacles, as hypothesis-resolving agents. This paper presents a unified framework that accounts for both the map structure and inter-robot measurements while guiding a set of robots to locations where they can converge to a unique state. The strategy shepherds the robots to places where the probability of obtaining a unique hypothesis for the set of robots is maximized. Another aspect of the framework, termed 'coordinated localization', dispatches already-localized robots to locations where they can help the largest number of remaining unlocalized robots overcome their ambiguity. The appropriateness of the approach is demonstrated empirically both in simulation and in real time (on Amigo-bots), and its efficacy is verified. Extensive comparative analysis shows the advantage of the method over approaches that do not perform active localization in a multi-robot sense, and the performance gained by considering both map structure and robot placement rather than only one or neither. Theoretical backing stems from the proven completeness of the method for a large category of diverse environments.
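The principle of guiding robots to maximally disambiguating places can be sketched in a few lines (the signature representation and entropy criterion below are an illustrative simplification, not the paper's framework): among candidate locations, prefer the one where the observations predicted under the different pose hypotheses differ the most.

```python
# Illustrative sketch: pick the target location whose predicted observation
# best separates the current pose hypotheses, scored by the Shannon entropy
# of the per-hypothesis observation signatures.

from collections import Counter
import math

def hypothesis_entropy(signatures):
    """Entropy of the observations the hypotheses predict at a location:
    higher entropy means the observation there discriminates more."""
    counts = Counter(signatures)
    n = len(signatures)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def best_target(candidates):
    """candidates: {location: [signature predicted under each hypothesis]}."""
    return max(candidates, key=lambda loc: hypothesis_entropy(candidates[loc]))

# At 'corridor' all four hypotheses predict the same view (useless for
# disambiguation); at 'junction' each predicts a different one (decisive).
candidates = {
    "corridor": ["wall", "wall", "wall", "wall"],
    "junction": ["door", "wall", "open", "pillar"],
}
```

A repeated corridor yields zero entropy (no hypothesis can be eliminated there), so the robot is steered to the junction instead.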
53.
54.
Networks on chip must deliver high bandwidth at low latency while staying within a tight power envelope. Using express virtual channels (EVCs) for flow control improves energy-delay-throughput by letting packets bypass intermediate routers, but EVCs have key limitations. Nochi (NoC with hybrid interconnect) overcomes these limitations by transporting data payloads and control information on separate planes, optimized for bandwidth and latency respectively.
55.
The objective of this study is to explore groundwater availability for agriculture in the Musi basin. Remote sensing data and a geographic information system (GIS) were used to locate potential groundwater zones in the basin. Various maps (base, hydrogeomorphological, geological, structural, drainage, slope, land use/land cover, and groundwater prospect zones) were prepared from the remote sensing data along with existing maps. The groundwater availability of the basin is qualitatively classified into classes (very good, good, moderate, poor, and nil) based on its hydrogeomorphological conditions. The land use/land cover map was prepared for the Kharif season using a digital classification technique with limited ground truth for mapping irrigated areas in the basin. The alluvial plain in the filled valley, the flood plain, and the deeply buried pediplain were successfully delineated and shown as the prospective groundwater zones.
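Qualitative prospect mapping of this kind is often computed as a weighted overlay of the thematic layers. The sketch below is a hypothetical illustration of that general technique (the themes, weights, ranks, and class cutoffs are invented for the example and are not taken from the study):

```python
# Hypothetical weighted-overlay sketch: each map unit gets a score from
# per-theme ranks and is binned into the qualitative prospect classes.

WEIGHTS = {"geomorphology": 0.5, "slope": 0.3, "landcover": 0.2}

def prospect_class(ranks):
    """ranks: per-theme favourability on a 0-4 scale; returns the class."""
    score = sum(WEIGHTS[theme] * r for theme, r in ranks.items())
    if score >= 3.5:
        return "very good"
    if score >= 2.5:
        return "good"
    if score >= 1.5:
        return "moderate"
    if score >= 0.5:
        return "poor"
    return "nil"

# Illustrative units: a flood plain scores high on every theme, a
# pediment low on most.
flood_plain = {"geomorphology": 4, "slope": 4, "landcover": 3}
pediment = {"geomorphology": 1, "slope": 2, "landcover": 1}
```

Under these assumed weights the flood plain falls in the "very good" class and the pediment in "poor", mirroring the qualitative ranking described above.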
56.
Lexical states in JavaCC provide a powerful mechanism to scan regular expressions in a context-sensitive manner, but they also make it hard to reason about the correctness of the grammar. We first categorize the related correctness issues into two classes: errors and warnings. We then extend traditional context-sensitive and context-insensitive analyses to identify errors and warnings in context-free grammars. We have implemented these analyses in a standalone tool (LSA), the first of its kind, to identify errors and warnings in JavaCC grammars. The LSA tool outputs a graph that depicts the grammar and the error transitions. Importantly, it can also generate counterexample strings that can be used to establish the errors. We have used LSA to analyze a host of open-source JavaCC grammar files to good effect. Copyright © 2015 John Wiley & Sons, Ltd.
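The error/warning split can be illustrated by modeling lexical states as a directed graph of state switches. This sketch is not the LSA tool; the two checks shown (a switch to an undefined state as an error, a declared-but-unreachable state as a warning) are hypothetical examples of the kinds of issues such an analysis flags:

```python
# Illustrative sketch: lexical states as a directed graph of SWITCHTO
# transitions, with two issue classes -- errors (transition to an
# undefined state) and warnings (state unreachable from DEFAULT).

def analyze(states, transitions):
    """states: declared state names; transitions: {state: {next, ...}}."""
    errors = sorted(
        nxt for outs in transitions.values() for nxt in outs
        if nxt not in states
    )
    # Breadth-first reachability from the DEFAULT state.
    seen, frontier = {"DEFAULT"}, ["DEFAULT"]
    while frontier:
        s = frontier.pop()
        for nxt in transitions.get(s, ()):
            if nxt in states and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    warnings = sorted(states - seen)
    return errors, warnings

states = {"DEFAULT", "IN_COMMENT", "IN_STRING", "ORPHAN"}
transitions = {
    "DEFAULT": {"IN_COMMENT", "IN_STRING"},
    "IN_COMMENT": {"DEFAULT"},
    "IN_STRING": {"DEFAULT", "TYPO_STATE"},  # undefined target -> error
}
```

On this toy grammar the analysis reports the undefined `TYPO_STATE` as an error and the never-entered `ORPHAN` state as a warning.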
57.
It is a well-known result in the vision literature that the motions of independently moving objects viewed by an affine camera lie on affine subspaces of dimension four or less. As a result, many recently proposed motion segmentation algorithms model the problem as clustering the trajectory data to its corresponding affine subspace. While these algorithms are elegant in formulation and achieve near-perfect results on benchmark datasets, they fail to address certain key real-world challenges, including perspective effects and motion degeneracies. In a robotics and autonomous vehicle setting, the relative configuration of the robot and a moving object will frequently be degenerate, leading to failure of subspace clustering algorithms. On the other hand, while gestalt-inspired motion similarity algorithms have been used for motion segmentation, in the moving-camera case they tend to over-segment or under-segment the scene depending on their parameter values. In this paper we present a principled approach that incorporates the strengths of both into a cohesive motion segmentation algorithm capable of dealing with the degenerate cases where camera motion follows that of the moving object. We first generate a set of prospective motion models for the various moving and stationary objects in the video sequence by a RANSAC-like procedure. We then incorporate affine and long-term gestalt-inspired motion similarity constraints into a multi-label Markov random field (MRF), whose inference yields an over-segmentation in which each label belongs to a particular moving object or the background. This over-segmentation is deliberate and necessary, allowing us to assess the relative motion between the motion models, which we believe is essential for handling degenerate motion scenarios. A model selection step then merges clusters based on a novel motion coherence constraint we call in-frame shear, which tracks the in-frame change in orientation and distance between clusters, yielding the final segmentation. We present results on the Hopkins-155 benchmark motion segmentation dataset [27], as well as several on-road scenes where camera and object motion are nearly identical. Our algorithm is competitive with the state-of-the-art algorithms on [27] and exceeds them substantially on the more realistic on-road sequences.
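The in-frame shear idea lends itself to a small numeric sketch (the centroid representation, thresholds, and merge rule below are hypothetical, not the paper's formulation): two clusters whose in-frame separation and relative orientation stay constant over time are consistent with one rigid motion and are candidates for merging.

```python
# Hypothetical sketch of an "in-frame shear" style coherence check: track
# the change in distance and orientation between two cluster centroids
# across frames; near-zero shear suggests the clusters move together.

import math

def shear(c1_t0, c2_t0, c1_t1, c2_t1):
    """Return (distance change, orientation change in radians) between
    two 2-D cluster centroids from frame t0 to frame t1."""
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])
    def ang(a, b):
        return math.atan2(b[1] - a[1], b[0] - a[0])
    return (abs(dist(c1_t1, c2_t1) - dist(c1_t0, c2_t0)),
            abs(ang(c1_t1, c2_t1) - ang(c1_t0, c2_t0)))

def should_merge(d_shear, a_shear, d_tol=2.0, a_tol=0.05):
    # Tolerances here are arbitrary illustrative values.
    return d_shear < d_tol and a_shear < a_tol

# Both clusters translate together by (5, 0): zero shear, so merge.
d, a = shear((0, 0), (10, 0), (5, 0), (15, 0))
```

Two over-segmented pieces of one rigidly moving car would exhibit near-zero shear and be merged, while a car and the background under camera motion would not.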
58.
As service-oriented computing grows, so does the role of e-contracts in helping business partners automate contractual agreements and relationships. The key challenge is to translate traditional contracts into executable e-contracts in a way that facilitates runtime monitoring and management. As research in this area progresses, organizations will adopt different approaches to modeling, implementing, and managing e-contracts. For now, developers must contend with several key research issues and challenges.
59.
At the central energy management center in a power system, real-time controls continuously track load changes and endeavor to match total power demand with total generation so that the operating cost is minimized while all operating constraints are satisfied. However, due to strict government regulations on environmental protection, minimum-cost operation is no longer the only criterion for dispatching electrical power. The idea behind the environmentally constrained economic dispatch formulation is to estimate the optimal generation schedule of the generating units such that fuel cost and harmful emission levels are simultaneously minimized for a given load demand. Conventional optimization techniques become very time-consuming and computationally expensive for such complex optimization tasks, and are hence unsuitable for on-line use. Neural networks and fuzzy systems, both model-free estimators, can be trained to capture accurate relations among variables in complex non-linear dynamical environments. The synergy between these two fields is exploited in this paper for solving the economic and environmental dispatch problem on-line: a multi-output modified neo-fuzzy neuron (NFN), capable of real-time training, is proposed for economic and environmental power generation allocation. The model achieves accurate results, and its training is observed to be faster than that of other popular neural networks. The proposed method has been tested on medium-sized sample power systems with three and six generating units and found suitable for on-line combined environmental economic dispatch (CEED).
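The combined cost-and-emission objective can be made concrete with the classical weighted-sum formulation (this is a hedged sketch of the underlying dispatch problem, not the paper's neo-fuzzy model; the quadratic coefficients are invented): minimize w·F + (1-w)·E subject to total generation meeting demand, which for quadratic curves has the closed-form equal-incremental-cost solution.

```python
# Sketch of weighted-sum combined economic/emission dispatch for units with
# quadratic cost F_i(P) = b P + c P^2 and emission E_i(P) = b P + c P^2
# (constant terms dropped, since they do not affect the optimum).

def dispatch(units, demand, w=0.5):
    """units: list of (b_cost, c_cost, b_em, c_em); returns per-unit P
    minimizing w*F + (1-w)*E subject to sum(P) = demand."""
    # Combined marginal cost: d/dP [w F + (1-w) E] = B_i + 2 C_i P_i = lambda
    B = [w * bc + (1 - w) * be for bc, cc, be, ce in units]
    C = [w * cc + (1 - w) * ce for bc, cc, be, ce in units]
    # Equal incremental cost: solve sum_i (lambda - B_i)/(2 C_i) = demand.
    lam = (demand + sum(b / (2 * c) for b, c in zip(B, C))) \
        / sum(1 / (2 * c) for c in C)
    return [(lam - b) / (2 * c) for b, c in zip(B, C)]
```

With w = 1 this reduces to pure economic dispatch and with w = 0 to pure emission dispatch; generator limits, valve-point effects, and losses are omitted for brevity.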
60.
We have built a database that provides term vector information for large numbers of pages (hundreds of millions). The basic operation of the database is to take URLs and return term vectors. Compared to computing vectors by downloading pages via HTTP, the Term Vector Database is several orders of magnitude faster, enabling a large class of applications that would be impractical without such a database. This paper describes the Term Vector Database in detail. It also reports on two applications built on top of the database. The first application is an optimization of connectivity-based topic distillation. The second application is a Web page classifier used to annotate results returned by a Web search engine.
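The basic URL-to-vector operation can be sketched as follows (the class name, methods, and term-frequency representation are hypothetical illustrations of the interface described, not the actual system, which is a disk-backed store built for hundreds of millions of pages):

```python
# Minimal in-memory sketch of the core operation: map URLs to precomputed
# term vectors so applications avoid fetching pages over HTTP.

from collections import Counter

class TermVectorDB:
    def __init__(self):
        self._vectors = {}

    def index(self, url, text):
        # Store a simple term-frequency vector for the page.
        self._vectors[url] = Counter(text.lower().split())

    def lookup(self, urls):
        # Batch lookup: the database's basic operation. Unknown URLs
        # yield an empty vector.
        return {u: self._vectors.get(u, Counter()) for u in urls}

db = TermVectorDB()
db.index("http://example.com/a", "web search web pages")
vecs = db.lookup(["http://example.com/a", "http://example.com/missing"])
```

Applications such as topic distillation or page classification then operate on the returned vectors directly, never touching the network at query time.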