Full-text access type
Paid full text | 922 articles |
Free | 35 articles |
Free (domestic) | 1 article |
Subject category
Electrical engineering | 19 articles |
Chemical industry | 161 articles |
Metalworking | 26 articles |
Machinery and instruments | 20 articles |
Building science | 9 articles |
Energy and power engineering | 28 articles |
Light industry | 73 articles |
Hydraulic engineering | 11 articles |
Petroleum and natural gas | 1 article |
Radio electronics | 117 articles |
General industrial technology | 144 articles |
Metallurgy | 82 articles |
Nuclear technology | 3 articles |
Automation technology | 264 articles |
Publication year
2024 | 4 articles |
2023 | 15 articles |
2022 | 39 articles |
2021 | 41 articles |
2020 | 42 articles |
2019 | 30 articles |
2018 | 40 articles |
2017 | 26 articles |
2016 | 34 articles |
2015 | 26 articles |
2014 | 26 articles |
2013 | 51 articles |
2012 | 47 articles |
2011 | 43 articles |
2010 | 38 articles |
2009 | 34 articles |
2008 | 43 articles |
2007 | 32 articles |
2006 | 27 articles |
2005 | 24 articles |
2004 | 25 articles |
2003 | 16 articles |
2002 | 30 articles |
2001 | 10 articles |
2000 | 17 articles |
1999 | 11 articles |
1998 | 24 articles |
1997 | 26 articles |
1996 | 29 articles |
1995 | 11 articles |
1994 | 17 articles |
1993 | 11 articles |
1992 | 7 articles |
1991 | 6 articles |
1989 | 7 articles |
1988 | 2 articles |
1987 | 3 articles |
1986 | 2 articles |
1985 | 3 articles |
1984 | 2 articles |
1983 | 2 articles |
1982 | 5 articles |
1981 | 4 articles |
1980 | 4 articles |
1979 | 2 articles |
1978 | 3 articles |
1977 | 6 articles |
1976 | 3 articles |
1967 | 1 article |
1966 | 1 article |
Sort order: 958 results in total (search time: 0 ms)
51.
Sumit Ganguly, Minos Garofalakis, Rajeev Rastogi 《The VLDB Journal The International Journal on Very Large Data Bases》2004,13(4):354-369
There is growing interest in algorithms for processing and querying continuous data streams (i.e., data seen only once in a fixed order) with limited memory resources. In its most general form, a data stream is actually an update stream, i.e., it comprises data-item deletions as well as insertions. Such massive update streams arise naturally in several application domains (e.g., monitoring of large IP network installations or processing of retail-chain transactions). Estimating the cardinality of set expressions defined over several (possibly distributed) update streams is perhaps one of the most fundamental query classes of interest; for example, such a query may ask: "What is the number of distinct IP source addresses seen in packets passing through both routers R1 and R2 but not router R3?" Earlier work addressed only very restricted forms of this problem, focusing solely on the special case of insert-only streams and specific operators (e.g., union). In this paper, we propose the first space-efficient algorithmic solution for estimating the cardinality of full-fledged set expressions over general update streams. Our estimation algorithms are probabilistic in nature and rely on a novel hash-based synopsis data structure, termed the 2-level hash sketch. We demonstrate how our 2-level hash sketch synopses can be used to provide low-error, high-confidence estimates for the cardinality of set expressions (including operators such as set union, intersection, and difference) over continuous update streams, using space that is significantly sublinear in the sizes of the streaming input (multi-)sets. Furthermore, our estimators never require rescanning or resampling of past stream items, regardless of the number of deletions in the stream. We also present lower bounds for the problem, demonstrating that the space usage of our estimation algorithms is within small factors of the optimal. Finally, we propose an optimized, time-efficient stream synopsis (based on 2-level hash sketches) that provides similar, strong accuracy-space guarantees while requiring only guaranteed logarithmic maintenance time per update, thus making our methods applicable for truly rapid-rate data streams. Results from an empirical study of our synopsis and estimation techniques verify the effectiveness of our approach.
Received: 20 October 2003 / Accepted: 16 April 2004 / Published online: 14 September 2004. Edited by J. Gehrke and J. Hellerstein. Sumit Ganguly: sganguly@cse.iitk.ac.in. Current affiliation: Department of Computer Science and Engineering, Indian Institute of Technology, Kanpur, India.
52.
Rajeev Gautam, Rajakkannu Mutharasan, Donald R. Coughanowr 《Chemical engineering science》1978,33(5):561-568
Sampled-data proportional control of an exothermic CSTR has been studied using classical linear analysis and by application of the averaging te…
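Sampled-data proportional control can be sketched with a minimal discrete-time loop (a toy first-order plant with assumed unit gain and time constant, not the exothermic CSTR model from the paper):

```python
import math

def simulate_p_control(kp, setpoint, t_sample=0.1, n_steps=300):
    """Zero-order-hold proportional control of the toy plant dx/dt = -x + u."""
    a = math.exp(-t_sample)      # discrete plant pole (exact discretization)
    b = 1.0 - a                  # discrete input gain
    x = 0.0
    for _ in range(n_steps):
        u = kp * (setpoint - x)  # proportional control law, held over one sample
        x = a * x + b * u        # plant response over one sampling interval
    return x

x_final = simulate_p_control(kp=2.0, setpoint=1.0)
# P-only control leaves a steady-state offset: x* = Kp/(1 + Kp) * setpoint.
print(round(x_final, 3))  # 0.667
```

The steady-state offset is the classic limitation of proportional-only sampled-data control; raising Kp shrinks the offset but pushes the closed-loop pole toward instability.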
53.
RE-tree: an efficient index structure for regular expressions (total citations: 4; self-citations: 0; citations by others: 4)
Chee-Yong Chan, Minos Garofalakis, Rajeev Rastogi 《The VLDB Journal The International Journal on Very Large Data Bases》2003,12(2):102-119
Due to their expressive power, regular expressions (REs) are quickly becoming an integral part of language specifications for several important application scenarios. Many of these applications have to manage huge databases of RE specifications and need to provide an effective matching mechanism that, given an input string, quickly identifies the REs in the database that match it. In this paper, we propose the RE-tree, a novel index structure for large databases of RE specifications. Given an input query string, the RE-tree speeds up the retrieval of matching REs by focusing the search and comparing the input string with only a small fraction of the REs in the database. Even though the RE-tree is similar in spirit to other tree-based structures that have been proposed for indexing multidimensional data, RE indexing is significantly more challenging, since REs typically represent infinite sets of strings with no well-defined notion of spatial locality. To address these new challenges, our RE-tree index structure relies on novel measures for comparing the relative sizes of infinite regular languages. We also propose innovative solutions for the various RE-tree operations, including the effective splitting of RE-tree nodes and the computation of a "tight" bounding RE for a collection of REs. Finally, we demonstrate how sampling-based approximation algorithms can be used to significantly speed up the performance of RE-tree operations. Preliminary experimental results with moderately large synthetic data sets indicate that the RE-tree is effective in pruning the search space and easily outperforms naive sequential search approaches.
Received: 16 September 2002 / Published online: 8 July 2003. Edited by R. Ramakrishnan.
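The problem the RE-tree accelerates can be stated with a naive sequential-scan baseline (the very approach the paper outperforms; the RE database below is hypothetical):

```python
import re

# Hypothetical database of named RE specifications.
RE_DB = {
    "ipv4": r"\d{1,3}(\.\d{1,3}){3}",
    "hexid": r"0x[0-9a-f]+",
    "word": r"[a-z]+",
}

def matching_res(db, s):
    """Naive sequential scan: test the input string against every RE.

    An RE-tree would instead descend an index, comparing the string
    against only a small fraction of the stored REs.
    """
    return {name for name, pattern in db.items() if re.fullmatch(pattern, s)}

print(sorted(matching_res(RE_DB, "0x1f")))
```

With millions of stored REs, the per-query cost of this scan is linear in the database size, which is what motivates a tree index over bounding REs.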
54.
55.
We propose tackling a “mini challenge” problem: a nontrivial verification effort that can be completed in 2–3 years and will help establish notational standards, common formats, and libraries of benchmarks that will be essential for the verification community to collaborate on meeting Hoare’s 15-year verification grand challenge. We believe that a suitable candidate for such a mini challenge is the development of a filesystem that is verifiably reliable and secure. The paper argues why we believe a filesystem is the right candidate for a mini challenge and describes a project in which we are building a small embedded filesystem for use with flash memory.
The work described in this paper was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
56.
Tianzhu Zhang, Bernard Ghanem, Si Liu, Narendra Ahuja 《International Journal of Computer Vision》2013,101(2):367-383
In this paper, we formulate object tracking in a particle filter framework as a structured multi-task sparse learning problem, which we denote Structured Multi-Task Tracking (S-MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in Multi-Task Tracking (MTT). By employing popular sparsity-inducing $\ell_{p,q}$ mixed norms (specifically $p \in \{2, \infty\}$ and $q = 1$), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. Compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and reduces overall computational complexity. Interestingly, we show that the popular $L_1$ tracker (Mei and Ling, IEEE Trans Pattern Anal Mach Intell 33(11):2259–2272, 2011) is a special case of our MTT formulation (denoted the $L_{11}$ tracker) when $p = q = 1$. Under the MTT framework, some of the tasks (particle representations) are often more closely related and more likely to share common relevant covariates than other tasks. Therefore, we extend the MTT framework to take into account pairwise structural correlations between particles (e.g., spatial smoothness of representation) and denote the novel framework S-MTT. The problem of learning the regularized sparse representation in MTT and S-MTT can be solved efficiently using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed-form updates. As such, S-MTT and MTT are computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that S-MTT performs much better than MTT, and both methods consistently outperform state-of-the-art trackers.
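The closed-form updates behind (accelerated) proximal gradient methods for $\ell_1$-type regularizers reduce to soft-thresholding; a one-dimensional sketch of plain proximal gradient descent (illustrative only, not the S-MTT solver):

```python
def soft_threshold(v, lam):
    """Proximal operator of lam * |x|: the closed-form shrinkage step."""
    if v > lam:
        return v - lam
    if v < -lam:
        return v + lam
    return 0.0

def prox_grad(a, lam, step=0.5, n_iter=200):
    """Proximal gradient descent on f(x) = 0.5 * (x - a)^2 + lam * |x|."""
    x = 0.0
    for _ in range(n_iter):
        grad = x - a                                    # gradient of the smooth part
        x = soft_threshold(x - step * grad, step * lam) # closed-form prox step
    return x

# The minimizer has the closed form soft_threshold(a, lam):
print(prox_grad(3.0, 1.0))  # converges to 2.0
```

APG adds a Nesterov momentum term to this iteration, and the mixed $\ell_{p,q}$ norms in the paper replace scalar shrinkage with a row-wise (group) version, but each update remains closed-form, which is what makes the solver fast.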
57.
A detailed chemisorption mechanism is proposed for the atomic layer deposition (ALD) of aluminium oxide nanolayers using trimethyl aluminum (TMA) and water as precursors. Six possible chemisorption mechanisms (complete ligand exchange, partial ligand exchange, simple dissociation, complete dissociation via ligand exchange, complete dissociation, and association) are proposed, and related parameters such as the ligand-to-metal ratio (L/M) and the concentrations of adsorbed metal atoms and methyl groups are calculated and compared against reported values. The maximum number of methyl groups that can attach to the surface is calculated in a different way, which yields a more realistic value of 6.25 per nm² of substrate area. The dependence of the number of adsorbed metal atoms on OH concentration is explained clearly. It is proposed that a combination of complete ligand exchange and complete dissociation is the most probable chemisorption mechanism taking place at various OH concentrations.
58.
T. A. Varvarigou, M. E. Anagnostou, S. R. Ahuja 《IEEE transactions on pattern analysis and machine intelligence》1999,25(3):401-415
We present new results in the area of reconfiguration of stateful interactive processes in the presence of faults. More precisely, we consider a set of servers/processes that have the same functionality, i.e., are able to perform the same tasks and provide the same set of services to their clients. In the case when several of them turn out to be faulty, we want to reconfigure the system so that the clients of the faulty servers/processes are served by some other, fault-free, servers of the system in a way that is transparent to all the system clients. We propose a novel method for reconfiguring in the presence of faults: compensation paths. Compensation paths are an efficient way of shifting spare resources from where they are available to where they are needed. We also present optimal and suboptimal simple reconfiguration algorithms of low polynomial time complexity: O(nm log(n²/m)) for the optimal and O(m) for the suboptimal algorithms, where n is the number of processes and m is the number of primary-backup relationships. The optimal algorithms compute the way to reconfigure the system whenever reconfiguration is possible. The suboptimal algorithms may sometimes fail to reconfigure the system, although reconfiguration would be possible using the optimal centralized algorithms. However, the suboptimal algorithms have other competitive advantages over the centralized optimal algorithms with regard to time complexity and communication overhead.
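The idea of a compensation path, shifting spare capacity along primary-backup edges toward a failed server, can be sketched as a breadth-first path search (a toy model with hypothetical servers; the paper's optimal algorithms are more involved, as the O(nm log(n²/m)) bound suggests):

```python
from collections import deque

def compensation_path(backup_of, spare, failed):
    """BFS from a failed server along primary -> backup edges to a server
    with spare capacity; returns the chain of servers along which the
    load is shifted, or None if no reconfiguration is possible."""
    parent = {failed: None}
    queue = deque([failed])
    while queue:
        s = queue.popleft()
        if s != failed and spare.get(s, 0) > 0:
            path = []            # reconstruct the compensation path
            while s is not None:
                path.append(s)
                s = parent[s]
            return list(reversed(path))
        for nxt in backup_of.get(s, []):
            if nxt not in parent:
                parent[nxt] = s
                queue.append(nxt)
    return None

# Hypothetical topology: A's backup is B, B's backup is C; only C has spare
# capacity, so A's load is compensated through B.
backups = {"A": ["B"], "B": ["C"]}
print(compensation_path(backups, {"C": 1}, "A"))  # ['A', 'B', 'C']
```

Each hop in the returned path hands one unit of load to its backup, so the spare capacity at the far end absorbs the failed server's clients without any server exceeding its capacity.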
59.
60.
Zhiling Lan, Jiexing Gu, Ziming Zheng, Rajeev Thakur, Susan Coghlan 《Journal of Parallel and Distributed Computing》2010
Despite years of study on failure prediction, it remains an open problem, especially in large-scale systems composed of a vast number of components. In this paper, we present a dynamic meta-learning framework for failure prediction. It aims not only to provide reasonable prediction accuracy, but also to be of practical use in realistic environments. Two key techniques are developed to address the technical challenges of failure prediction. One is meta-learning, which boosts prediction accuracy by combining the benefits of multiple predictive techniques. The other is a dynamic approach that obtains failure patterns from a changing training set and extracts effective rules by actively monitoring prediction accuracy at runtime. We demonstrate the effectiveness and practical use of this framework by means of real system logs collected from the production Blue Gene/L systems at Argonne National Laboratory and the San Diego Supercomputer Center. Our case studies indicate that the proposed mechanism can provide reasonable prediction accuracy, forecasting up to 82% of the failures with a runtime overhead of less than 1.0 min.
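The two ingredients described above, combining base predictors and dynamically re-weighting them from runtime feedback, can be sketched as an accuracy-weighted vote (a simplified illustration, not the paper's framework; the predictor names are made up):

```python
def meta_predict(votes, accuracies):
    """Accuracy-weighted vote over base predictors' failure/no-failure outputs."""
    score = sum(accuracies[name] * (1.0 if vote else -1.0)
                for name, vote in votes.items())
    return score > 0  # predict a failure iff weighted evidence is positive

def update_accuracy(accuracies, name, correct, decay=0.9):
    """Dynamically re-weight one predictor from observed prediction outcomes."""
    accuracies[name] = decay * accuracies[name] + (1 - decay) * (1.0 if correct else 0.0)

# Hypothetical base predictors with their current estimated accuracies.
accs = {"rule_based": 0.8, "statistical": 0.6, "signal": 0.4}
votes = {"rule_based": True, "statistical": False, "signal": False}
print(meta_predict(votes, accs))  # False: the weighted "no failure" side wins
```

Monitoring accuracy at runtime and feeding it back through `update_accuracy` is what makes the combination dynamic: a predictor that degrades on a changing workload gradually loses its influence on the vote.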