81.
An outlier detection algorithm based on a similarity isolation coefficient
Clustering-based outlier detection algorithms yield rather coarse and insufficiently accurate results. To address this problem, an outlier detection algorithm based on a similarity isolation coefficient is proposed. A similarity distance and a similarity isolation coefficient are defined, and a pruning strategy based on the similarity distance is given; this strategy shrinks the candidate set of suspected outliers and lowers the computational complexity of outlier detection. Experiments on the public datasets Iris, Labor and Segment-test show that, compared with classical outlier detection algorithms, the proposed algorithm is more effective at finding outliers and reducing the candidate set.
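The abstract does not give the exact definitions of the similarity distance or the similarity isolation coefficient, so the sketch below only illustrates the general pattern it describes: score points by how isolated they are and prune unlikely candidates before the expensive computation. The k-NN mean-distance score and the centroid-based pruning rule are illustrative stand-ins, not the paper's definitions.

```python
import numpy as np

def knn_outlier_scores(X, k=5, candidate_quantile=0.5):
    """Generic distance-based outlier scoring with a crude pruning step
    (illustrative only; not the paper's similarity isolation coefficient).

    A point's score is its mean distance to its k nearest neighbours; points
    close to the data centroid are pruned from the candidate set before the
    expensive k-NN computation.
    """
    X = np.asarray(X, dtype=float)
    n = len(X)
    # Pruning: keep only points far from the centroid as outlier candidates.
    centroid = X.mean(axis=0)
    dist_to_centroid = np.linalg.norm(X - centroid, axis=1)
    threshold = np.quantile(dist_to_centroid, candidate_quantile)
    candidates = np.where(dist_to_centroid >= threshold)[0]

    scores = np.zeros(n)
    for i in candidates:
        d = np.linalg.norm(X - X[i], axis=1)
        d[i] = np.inf                      # exclude the point itself
        nearest = np.sort(d)[:k]
        scores[i] = nearest.mean()         # higher score = more isolated
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    data = np.vstack([rng.normal(0, 1, (100, 2)), [[8.0, 8.0]]])  # one planted outlier
    s = knn_outlier_scores(data, k=5)
    print("most isolated point:", int(np.argmax(s)))  # expected: index 100
```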
82.
In this paper, we propose an iterative approach to increase the computational efficiency of the homotopy analysis method (HAM), an analytic technique for highly nonlinear problems. By means of the Gram–Schmidt process (Arfken et al., 1985) [15], we approximate the right-hand side terms of the high-order linear sub-equations by a finite set of orthonormal bases. Based on this truncation technique, we introduce the Mth-order iterative HAM, which uses each Mth-order approximation as a new initial guess. The iterative HAM is found to be much more efficient than the standard HAM without truncation, as illustrated by three nonlinear differential equations defined on an infinite domain. This work might greatly improve the computational efficiency of the HAM and of the Mathematica package BVPh for nonlinear BVPs.
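As a minimal illustration of the truncation step described above, the sketch below orthonormalizes a few sampled basis functions with classical Gram–Schmidt and projects a target function onto the resulting finite orthonormal basis. The discrete dot product stands in for the function-space inner product, and the grid, basis choice, and target function are arbitrary examples, not those of the paper.

```python
import numpy as np

def gram_schmidt(vectors):
    """Orthonormalize a list of sampled functions (1-D arrays) with
    classical Gram-Schmidt; nearly dependent vectors are dropped."""
    basis = []
    for v in vectors:
        w = v.astype(float)
        for q in basis:
            w -= np.dot(q, w) * q          # remove components along existing basis
        norm = np.linalg.norm(w)
        if norm > 1e-12:
            basis.append(w / norm)
    return basis

def project(f, basis):
    """Best approximation of f in the span of the orthonormal basis."""
    return sum(np.dot(q, f) * q for q in basis)

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 200)
    raw = [x**k for k in range(4)]          # 1, x, x^2, x^3 sampled on a grid
    basis = gram_schmidt(raw)
    f = np.exp(x)                           # a "right-hand side" to be truncated
    f_trunc = project(f, basis)
    print("truncation error:", np.max(np.abs(f - f_trunc)))
```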
83.
A hybrid uncertainty theory is developed to bridge the gap between fuzzy set theory and Dempster-Shafer theory. Its basis is the Dempster-Shafer formalism, which is extended to include a complete set of basic operations for manipulating uncertainties in a set-theoretic framework. The new theory, operator-belief theory (OT), retains the probabilistic flavor of Dempster's original point-to-set mappings but includes the potential for defining a wider range of operators like those found in fuzzy set theory.

The basic operations defined for OT in this paper include those for dominance and order, union, intersection, complement, and general mappings. Several sample problems in approximate reasoning are worked out to illustrate the new approach and to compare it with the other theories currently in use. A general method for extending the theory, using fuzzy set theory as a guide, is suggested.
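OT's own operators are not spelled out in the abstract; the sketch below only shows the Dempster–Shafer basis it extends, namely Dempster's rule for combining two basic probability assignments over a small, made-up frame of discernment.

```python
def dempster_combine(m1, m2):
    """Combine two basic probability assignments (dicts mapping frozenset
    focal elements to masses) with Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb          # mass that falls on the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: the two sources cannot be combined")
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

if __name__ == "__main__":
    # Frame of discernment {red, green, blue}; two independent bodies of evidence.
    m1 = {frozenset({"red"}): 0.6, frozenset({"red", "green", "blue"}): 0.4}
    m2 = {frozenset({"green"}): 0.3, frozenset({"red", "green"}): 0.7}
    for focal, mass in dempster_combine(m1, m2).items():
        print(set(focal), round(mass, 3))
```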

84.
With the increasing number of available XML documents, numerous approaches for retrieval have been proposed in the literature. They usually use the tree representation of documents and queries to process them, whether in an implicit or explicit way. Although retrieving XML documents can be considered as a tree matching problem between the query tree and the document trees, only a few approaches take advantage of the algorithms and methods proposed by graph theory. In this paper, we aim at studying the theoretical approaches proposed in the literature for tree matching and at seeing how these approaches have been adapted to XML querying and retrieval, from both an exact and an approximate matching perspective. This study allows us to highlight theoretical aspects of graph theory that have not yet been explored in XML retrieval.
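To make the query-tree/document-tree framing concrete, here is a naive, hypothetical matcher (not one of the surveyed algorithms): a query tree matches at a document node when the labels agree and each query child can be matched at a distinct document child, checked by backtracking.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    label: str
    children: List["Node"] = field(default_factory=list)

def matches_at(query: Node, doc: Node) -> bool:
    """Exact rooted matching: labels must agree and every query child must
    be matched at a distinct document child (order-insensitive)."""
    if query.label != doc.label:
        return False
    def assign(q_children, available):
        if not q_children:
            return True
        q, rest = q_children[0], q_children[1:]
        for i, d in enumerate(available):
            if matches_at(q, d) and assign(rest, available[:i] + available[i + 1:]):
                return True
        return False
    return assign(query.children, doc.children)

def occurs_in(query: Node, doc: Node) -> bool:
    """True if the query tree matches at the document root or any descendant."""
    return matches_at(query, doc) or any(occurs_in(query, c) for c in doc.children)

if __name__ == "__main__":
    doc = Node("article", [Node("title"), Node("body", [Node("sec"), Node("sec")])])
    query = Node("body", [Node("sec")])
    print(occurs_in(query, doc))   # True
```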
85.
Several approximate indexing schemes for metric spaces have recently been proposed that sort the objects in the database according to pseudo-scores. It is known that (1) some of them provide a very good trade-off between response time and accuracy, and (2) probability-based pseudo-scores can provide an optimal trade-off in range queries if the probabilities are correctly estimated. Based on these facts, we propose a probabilistic enhancement scheme which can be applied to any pseudo-score based scheme. Our scheme computes probability-based pseudo-scores using pseudo-scores obtained from a pseudo-score based scheme. To estimate the probability-based pseudo-scores, we use object-specific parameters in logistic regression and learn the parameters using MAP (maximum a posteriori) estimation and the empirical Bayes method. We also propose a technique which speeds up learning the parameters using pseudo-scores. We applied our scheme to two state-of-the-art schemes, the standard pivot-based scheme and the permutation-based scheme, and evaluated them using various kinds of datasets from the Metric Space Library. The results showed that our scheme outperformed the conventional schemes, with regard to both the number of distance computations and the CPU time, on all the datasets.
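As a point of reference for the "standard pivot-based scheme" mentioned above, the following sketch computes the usual triangle-inequality pseudo-score max_p |d(q,p) - d(o,p)| from a precomputed pivot table and orders candidates by it. The probabilistic enhancement itself (logistic regression, MAP, empirical Bayes) is not reproduced here, and the data, pivot count, and metric are arbitrary.

```python
import numpy as np

def build_pivot_table(objects, pivots, dist):
    """Precompute d(o, p) for every object o and pivot p."""
    return np.array([[dist(o, p) for p in pivots] for o in objects])

def pivot_pseudo_scores(query, pivots, pivot_table, dist):
    """Pseudo-score of each object: the triangle-inequality lower bound
    max_p |d(q, p) - d(o, p)| on its true distance to the query.
    Objects are then examined in increasing pseudo-score order."""
    q_to_p = np.array([dist(query, p) for p in pivots])
    return np.max(np.abs(pivot_table - q_to_p), axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    objects = rng.normal(size=(1000, 8))
    pivots = objects[rng.choice(len(objects), size=16, replace=False)]
    euclid = lambda a, b: float(np.linalg.norm(a - b))
    table = build_pivot_table(objects, pivots, euclid)
    query = rng.normal(size=8)
    order = np.argsort(pivot_pseudo_scores(query, pivots, table, euclid))
    print("first candidates to examine:", order[:5])
```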
86.
87.
Many rough-set-based methods for dealing with incomplete information systems have been proposed in recent years. However, they are only suitable for incomplete systems with regular attributes, whose domains are not preference-ordered. This paper therefore focuses on a more complex case, the incomplete ordered information system, in which all attributes are treated as criteria; a criterion is an attribute with a preference-ordered domain. To conduct classification analysis in the incomplete ordered information system, the concept of a similarity dominance relation is first proposed. Two types of knowledge reduction are then formed for preserving two different notions of similarity dominance relations. By introducing the approximate distribution reduct into the incomplete ordered decision system, the judgment theorems and discernibility matrices associated with four novel approximate distribution reducts are obtained. A numerical example is employed to substantiate the conceptual arguments.
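The abstract does not define the similarity dominance relation precisely, so the sketch below encodes one plausible reading as an assumption: object x dominates y if, on every criterion where both values are known, x's value is at least y's, with missing entries treated as compatible with anything. The table and object names are made up.

```python
def similarity_dominates(x, y, missing=None):
    """A plausible (not the paper's exact) similarity dominance check for
    incomplete ordered tables: x dominates y if, on every criterion where
    both values are known, x's value is >= y's value; missing values are
    treated as compatible with anything."""
    return all(
        xv >= yv
        for xv, yv in zip(x, y)
        if xv is not missing and yv is not missing
    )

if __name__ == "__main__":
    # Rows are objects, columns are criteria with preference-ordered values;
    # None marks a missing entry.
    table = {
        "o1": (3, 2, None),
        "o2": (2, None, 1),
        "o3": (3, 3, 2),
    }
    for a in table:
        for b in table:
            if a != b and similarity_dominates(table[a], table[b]):
                print(f"{a} dominates {b} on the known criteria")
```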
88.
Partial information in databases can arise when information from several databases is combined. Even if each database is complete for some “world”, the combined databases will not be, and answers to queries against such combined databases can only be approximated. In this paper we describe various situations in which a precise answer cannot be obtained for a query asked against multiple databases. Based on an analysis of these situations, we propose a classification of constructs that can be used to model approximations.

The main goal of the paper is to study several formal models of approximations and their semantics. In particular, we obtain universality properties for these models of approximations. Universality properties suggest syntax for languages with approximations based on the operations which are naturally associated with them. We prove universality properties for most of the approximation constructs. Then we design languages built around datatypes given by the approximation constructs. A straightforward approach results in languages that have a number of limitations. In an attempt to overcome those limitations, we explain how all the languages can be embedded into a language for conjunctive and disjunctive sets from Libkin and Wong (1996) and demonstrate its usefulness in querying independent databases. We also discuss the semantics of approximation constructs and the relationship between them.
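The formal approximation constructs themselves are not reproduced here. The sketch below only illustrates, under a simple assumption of set-valued answers from each source, one common way an answer over several independently incomplete databases can be bracketed: a lower bound of tuples reported by every source and an upper bound of tuples reported by at least one source. This is an illustrative pair-of-bounds construction, not the paper's formal models.

```python
def approximate_answer(answers_per_source):
    """Bracket the answer of a query asked against several databases:
    tuples reported by every source form the lower bound, tuples reported
    by at least one source form the upper bound."""
    answer_sets = [set(a) for a in answers_per_source]
    lower = set.intersection(*answer_sets)
    upper = set.union(*answer_sets)
    return lower, upper

if __name__ == "__main__":
    # The same query evaluated against three independent (incomplete) databases.
    db_answers = [
        {("alice", "db"), ("bob", "ai")},
        {("alice", "db"), ("carol", "pl")},
        {("alice", "db"), ("bob", "ai"), ("carol", "pl")},
    ]
    lower, upper = approximate_answer(db_answers)
    print("reported by every source:", lower)
    print("reported by some source only:", upper - lower)
```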

89.
Design of a dynamic programming algorithm for offline handwritten character recognition
Handwritten Chinese character recognition is treated as an elastic pattern-matching problem that tolerates spatial displacement. A two-layer dynamic programming algorithm for such binary images is proposed, and preliminary experiments show that it achieves very satisfactory recognition performance.
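The abstract gives no algorithmic detail beyond "two-layer dynamic programming on binary images", so the following sketch is an assumed, simplified formulation of that idea: an inner DP aligns two pixel rows with an edit-distance-style recurrence (tolerating horizontal shifts), and an outer DP aligns the rows of the input image with those of a template (tolerating vertical shifts). The costs, shapes, and toy images are made up; recognition would pick the template with the lowest cost.

```python
import numpy as np

def row_align_cost(r1, r2, shift_cost=1.0):
    """Inner DP: align two binary pixel rows with an edit-distance-style
    recurrence, so strokes may shift horizontally at a small cost."""
    n, m = len(r1), len(r2)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1) * shift_cost
    D[0, :] = np.arange(m + 1) * shift_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = 0.0 if r1[i - 1] == r2[j - 1] else 1.0
            D[i, j] = min(D[i - 1, j - 1] + match,
                          D[i - 1, j] + shift_cost,
                          D[i, j - 1] + shift_cost)
    return D[n, m]

def image_match_cost(img, tmpl, shift_cost=1.0):
    """Outer DP: align the rows of the input image with the rows of the
    template, allowing vertical displacement of whole rows."""
    n, m = len(img), len(tmpl)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1) * shift_cost
    D[0, :] = np.arange(m + 1) * shift_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = row_align_cost(img[i - 1], tmpl[j - 1], shift_cost)
            D[i, j] = min(D[i - 1, j - 1] + cost,
                          D[i - 1, j] + shift_cost,
                          D[i, j - 1] + shift_cost)
    return D[n, m]

if __name__ == "__main__":
    template = np.array([[0, 1, 1, 0], [0, 1, 0, 0], [0, 1, 1, 0]])
    sample = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [1, 1, 0, 0]])  # same shape, shifted left
    print("match cost:", image_match_cost(sample, template))
```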
90.
A survey on algorithms for mining frequent itemsets over data streams
The increasing prominence of data streams arising in a wide range of advanced applications such as fraud detection and trend learning has led to the study of online mining of frequent itemsets (FIs). Unlike mining static databases, mining data streams poses many new challenges. In addition to the one-scan nature, the unbounded memory requirement and the high data arrival rate of data streams, the combinatorial explosion of itemsets exacerbates the mining task. The high complexity of the FI mining problem hinders the application of the stream mining techniques. We recognize that a critical review of existing techniques is needed in order to design and develop efficient mining algorithms and data structures that are able to match the processing rate of the mining with the high arrival rate of data streams. Within a unifying set of notations and terminologies, we describe in this paper the efforts and main techniques for mining data streams and present a comprehensive survey of a number of the state-of-the-art algorithms on mining frequent itemsets over data streams. We classify the stream-mining techniques into two categories based on the window model that they adopt in order to provide insights into how and why the techniques are useful. Then, we further analyze the algorithms according to whether they are exact or approximate and, for approximate approaches, whether they are false-positive or false-negative. We also discuss various interesting issues, including the merits and limitations in existing research and substantive areas for future research.
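As one concrete example of the approximate, false-positive family of stream-mining algorithms discussed above, here is a sketch of Lossy Counting (Manku and Motwani) restricted to single items rather than itemsets; the epsilon, support threshold, and toy stream are arbitrary.

```python
import math

class LossyCounter:
    """Lossy Counting for single items over a stream: an approximate,
    false-positive scheme that never misses an item whose true frequency
    is at least support*N, using O(1/epsilon) buckets of memory."""

    def __init__(self, epsilon):
        self.epsilon = epsilon
        self.width = math.ceil(1.0 / epsilon)   # bucket width
        self.n = 0                              # items seen so far
        self.counts = {}                        # item -> (count, max undercount)

    def add(self, item):
        self.n += 1
        bucket = math.ceil(self.n / self.width)
        count, delta = self.counts.get(item, (0, bucket - 1))
        self.counts[item] = (count + 1, delta)
        if self.n % self.width == 0:            # end of bucket: prune light items
            self.counts = {x: (c, d) for x, (c, d) in self.counts.items()
                           if c + d > bucket}

    def frequent(self, support):
        """Items whose estimated frequency exceeds (support - epsilon) * N."""
        threshold = (support - self.epsilon) * self.n
        return {x for x, (c, _) in self.counts.items() if c >= threshold}

if __name__ == "__main__":
    stream = ["a", "b", "a", "c", "a", "a", "b", "d"] * 500
    lc = LossyCounter(epsilon=0.01)
    for item in stream:
        lc.add(item)
    print(lc.frequent(support=0.2))   # expect {'a', 'b'}
```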