Search results: 48 articles in total (automation technology 45, metallurgy 2, chemical industry 1), published between 1977 and 2016. Items 11–20 are shown below.
11.
We apply rough set constructs to inductive learning from a database. A design guideline is suggested that lets users choose appropriate attributes for constructing data classification rules. Error probabilities for the resulting rules are derived. A classification rule can be further generalized using concept hierarchies, and the condition for preventing overgeneralization is derived. Moreover, given a constraint, an algorithm for generating a rule with minimal error probability is proposed.
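To make the error-probability and generalization ideas concrete, here is a minimal Python sketch under invented assumptions: the toy relation, the concept hierarchy, and the `covers`, `error_probability`, and `generalize` helpers are illustrative, not the paper's algorithm. A rule's empirical error is the fraction of covered tuples that miss the target class, and a value is lifted to its parent concept only while that error stays under a threshold.

```python
from collections import Counter

# Toy relation: (attribute-value dict, class label) pairs. All data and the
# hierarchy below are invented for illustration.
DATA = [
    ({"color": "red",  "size": "small"}, "yes"),
    ({"color": "pink", "size": "large"}, "yes"),
    ({"color": "blue", "size": "small"}, "no"),
    ({"color": "red",  "size": "large"}, "yes"),
]

# Hypothetical concept hierarchy: child value -> parent concept.
PARENT = {"red": "warm", "pink": "warm", "blue": "cool"}

def covers(concept, value):
    """True if `value` equals `concept` or lies below it in the hierarchy."""
    while value is not None:
        if value == concept:
            return True
        value = PARENT.get(value)
    return False

def error_probability(condition, target, data):
    """Empirical error of the rule `condition -> target`: the fraction of
    covered tuples whose class label differs from `target`."""
    covered = [cls for attrs, cls in data
               if all(covers(c, attrs.get(a)) for a, c in condition.items())]
    return 1.0 if not covered else 1.0 - Counter(covered)[target] / len(covered)

def generalize(condition, attr, target, data, max_error=0.2):
    """Lift one attribute's value to its parent concept, keeping the lift
    only if the rule's error stays within max_error (the guard against
    overgeneralization)."""
    parent = PARENT.get(condition[attr])
    if parent is None:
        return condition
    lifted = {**condition, attr: parent}
    return lifted if error_probability(lifted, target, data) <= max_error else condition

rule = generalize({"color": "red"}, "color", "yes", DATA)
print(rule, error_probability(rule, "yes", DATA))  # {'color': 'warm'} 0.0
```

In this toy run, lifting "red" to "warm" also covers the "pink" tuple without raising the error, so the generalization is kept; a lift that pulled in contradicting tuples would be rejected.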
12.
The authors explore three topics in computational intelligence: machine translation, machine learning, and user interface design, and speculate on their effects on Web intelligence. Systems that can communicate naturally and learn from interactions will power Web intelligence's long-term success. The large number of problems requiring Web-specific solutions demands a sustained, complementary effort to advance fundamental machine-learning research and to incorporate a learning component into every Internet interaction. Traditional forms of machine translation either translate poorly, require resources that grow exponentially with the number of languages translated, or simplify language excessively. Recent success in statistical, nonlinguistic, and hybrid machine translation suggests that systems based on these technologies can achieve better results given a large annotated language corpus. Existing computational intelligence solutions, when adapted for Web intelligence applications, must incorporate a robust notion of learning that scales to the Web, adapts to individual user requirements, and personalizes interfaces.
13.
Xiang, Y., Wong, S.K.M., and Cercone, N. Machine Learning, 1997, 26(1): 65–92.
Several scoring metrics are used in different search procedures for learning probabilistic networks. We study the properties of cross entropy in learning a decomposable Markov network. Although entropy and related scoring metrics are widely used, their microscopic properties and asymptotic behavior during a search have not been analyzed. We present such a microscopic study of a minimum-entropy search algorithm and show that it learns an I-map of the domain model when the data size is large. Search procedures that modify a network structure one link at a time are commonly used for efficiency. Our study indicates that a class of domain models cannot be learned by such procedures, which suggests that prior knowledge about the problem domain, combined with a multi-link search strategy, would provide an effective way to uncover many domain models.
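As a concrete illustration of entropy-driven, single-link search, here is a minimal Python sketch in the spirit of the setting described, not the paper's exact algorithm; the toy data and function names are invented. Each candidate link (X_i, X_j) is scored by the entropy decrement it yields, which equals the mutual information I(X_i; X_j) = H(X_i) + H(X_j) − H(X_i, X_j), and the greedy procedure adds one best link at a time.

```python
import math
from collections import Counter
from itertools import combinations

def entropy(samples):
    """Shannon entropy (bits) of the empirical distribution over `samples`."""
    counts = Counter(samples)
    n = len(samples)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def mutual_information(data, i, j):
    """I(X_i; X_j) = H(X_i) + H(X_j) - H(X_i, X_j): the entropy decrement
    obtained by joining variables i and j in the model."""
    xi = [row[i] for row in data]
    xj = [row[j] for row in data]
    return entropy(xi) + entropy(xj) - entropy(list(zip(xi, xj)))

def greedy_single_link(data, n_vars, n_links):
    """Repeatedly add the single link giving the largest entropy decrement."""
    links = []
    candidates = set(combinations(range(n_vars), 2))
    for _ in range(n_links):
        best = max(candidates, key=lambda e: mutual_information(data, *e))
        links.append(best)
        candidates.remove(best)
    return links

# Toy binary data over (X0, X1, X2): X1 copies X0, X2 is noise-like.
data = [(0, 0, 1), (1, 1, 0), (0, 0, 0), (1, 1, 1), (0, 0, 1), (1, 1, 0)]
print(greedy_single_link(data, n_vars=3, n_links=2))  # picks (0, 1) first
```

The abstract's negative result concerns exactly this kind of procedure: because it commits to one link at a time, models whose dependencies only become visible when several links are considered jointly (e.g., parity-like dependencies with near-zero pairwise mutual information) are unreachable, which motivates multi-link strategies.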
14.
A rule quality measure is important to a rule induction system for deciding when to stop generalization or specialization, and to a rule-based classification procedure for resolving conflicts among rules. We describe a number of statistical and empirical rule quality formulas and present an experimental comparison of these formulas on a number of standard machine learning datasets. We also present a meta-learning method for generating, from the experimental results, a set of formula-behavior rules that relate a formula's performance to the characteristics of a dataset. These formula-behavior rules are combined into formula-selection rules that a rule induction system can use to select a rule quality formula before induction begins. We report experimental results showing the effect of formula selection on the predictive performance of a rule induction system.
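To ground what a "rule quality formula" computes, here is a small Python sketch of a few standard measures derived from a rule's 2×2 contingency table. These are common illustrative formulas only; the paper compares a broader and partly different set.

```python
def rule_quality(nrc, nr, nc, n):
    """A few standard rule quality measures from a rule's contingency table:
    nrc = covered examples in the target class, nr = covered examples,
    nc = examples in the target class, n = total examples."""
    confidence = nrc / nr              # empirical accuracy of the rule
    coverage = nr / n                  # fraction of the data the rule covers
    laplace = (nrc + 1) / (nr + 2)     # Laplace-corrected accuracy
    # One common family: a weighted sum of accuracy and class recall.
    weighted_sum = 0.5 * confidence + 0.5 * (nrc / nc)
    return {"confidence": confidence, "coverage": coverage,
            "laplace": laplace, "weighted_sum": weighted_sum}

print(rule_quality(nrc=40, nr=50, nc=80, n=200))
```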
15.
The U.S. Interstate 80 bridge over State Street in Salt Lake City stands very near the Wasatch fault, which is active and capable of producing large earthquakes. The bridge was designed and built in 1965 to the 1961 American Association of State Highway Officials specifications, which did not consider earthquake-induced forces or displacements. The bridge consists of reinforced concrete bents supporting welded steel plate girders; the bents rest on cast-in-place concrete piles and pile caps. A seismic retrofit design using carbon fiber reinforced polymer (CFRP) composites was developed and implemented in the summers of 2000 and 2001 to improve the displacement ductility of the bridge. The retrofit included column jacketing as well as wrapping of the bent cap and the bent cap-column joints to increase confinement and flexural and shear strength. This paper describes the specifications developed for the CFRP composite column jackets and bent wrap, including provisions for materials, constructed thickness based on strength capacity, and an environmental durability reduction factor. Surface preparation, finish-coat requirements, quality assurance provisions (including sampling and testing), and constructability issues in applying fiber composite materials to the retrofit of concrete bridges are also described.
16.
17.
18.
We propose Generate and Repair Machine Translation (GRMT), a constraint-based approach to machine translation that focuses on accurate translation output. GRMT performs the translation by generating a Translation Candidate (TC), verifying the syntax and semantics of the TC, and repairing the TC when required. GRMT comprises three modules: Analysis Lite Machine Translation (ALMT), Translation Candidate Evaluation (TCE), and Repair and Iterate (RI). The key features of GRMT are simplicity, modularity, extensibility, and multilinguality.
An English–Thai translation system has been implemented to illustrate the performance of GRMT. The system was developed and run under SWI-Prolog 3.2.8. The English and Thai grammars were developed based on Head-Driven Phrase Structure Grammar (HPSG) and implemented on the Attribute Logic Engine (ALE). GRMT was tested by generating translations for a number of sentences and phrases; examples are provided throughout the article to illustrate how GRMT performs the translation process.
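The generate-verify-repair loop can be sketched in a few lines. The following Python toy is purely illustrative: the two-word lexicon, the romanized Thai output, and the adjective-after-noun check stand in for ALMT, TCE, and RI, whose actual implementations are Prolog modules over HPSG grammars.

```python
# A runnable toy of GRMT's generate-verify-repair control flow. The lexicon
# and the grammar check are invented; only the three-stage architecture
# comes from the abstract.
LEXICON = {"red": ("ADJ", "daeng"), "car": ("N", "rot")}

def almt(words):
    """Analysis Lite MT: quick word-for-word translation candidate."""
    return [LEXICON[w] for w in words]

def tce(candidate):
    """Candidate evaluation: in Thai, adjectives follow nouns; report each
    ADJ that directly precedes an N as a violation."""
    return [i for i in range(len(candidate) - 1)
            if candidate[i][0] == "ADJ" and candidate[i + 1][0] == "N"]

def ri(candidate, violations):
    """Repair: swap each offending ADJ with the noun that follows it."""
    c = list(candidate)
    for i in violations:
        c[i], c[i + 1] = c[i + 1], c[i]
    return c

def grmt_translate(words):
    candidate = almt(words)                    # generate a TC
    while True:
        violations = tce(candidate)            # verify syntax/semantics
        if not violations:
            return [w for _, w in candidate]
        candidate = ri(candidate, violations)  # repair and iterate

print(grmt_translate(["red", "car"]))          # ['rot', 'daeng']
```

The point of the control flow is that the generation stage can stay "lite": it need not get word order right on the first pass, because evaluation flags the violations and repair iterates until the candidate verifies.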
19.
A multiphase machine translation approach, Generate and Repair Machine Translation (GRMT), is proposed. GRMT is designed to generate accurate translations that focus primarily on retaining the linguistic meaning of the source language sentence, and it presently incorporates a limited multilingual translation capability. The central idea behind GRMT is to generate a translation candidate (TC) by quick and dirty machine translation (QDMT), investigate the accuracy of that TC by translation candidate evaluation (TCE), and, if necessary, revise the translation in the repair and iterate (RI) phase. To demonstrate the approach, a translation system that translates from English to Thai has been developed. This paper presents the design characteristics and some experimental results of QDMT, as well as the initial design, some experiments, and the ideas behind TCE and RI.
20.
We present a data mining method that integrates discretization, generalization, and rough set feature selection, reducing the data both horizontally and vertically. In the first phase, discretization and generalization are integrated: numeric attributes are discretized into a few intervals, the primitive values of symbolic attributes are replaced by high-level concepts, and obviously superfluous or irrelevant symbolic attributes are eliminated. The horizontal reduction is done by merging identical tuples after substituting an attribute value by its higher-level value in a predefined concept hierarchy (for symbolic attributes) or by the discretization of continuous (numeric) attributes. This phase greatly decreases the number of tuples considered further in the database(s). In the second phase, a novel context-sensitive feature merit measure is used to rank features, and a subset of relevant attributes is chosen based on rough set theory and the merit values of the features. A reduced table is obtained by removing the attributes that are not in the relevant subset, so the data set is further reduced vertically without changing the interdependence relationships between the classes and the attributes. Finally, the tuples in the reduced relation are transformed into different knowledge rules by different knowledge discovery algorithms. Based on these principles, a prototype knowledge discovery system, DBROUGH-II, has been constructed by integrating discretization, generalization, rough set feature selection, and a variety of data mining algorithms. Tests on a telecommunication customer data warehouse demonstrate that different kinds of knowledge rules, such as characteristic rules, discriminant rules, maximal generalized classification rules, and data evolution regularities, can be discovered efficiently and effectively.
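A minimal Python sketch of the horizontal-reduction phase follows; the concept hierarchy, age intervals, and data are invented for illustration. It discretizes the numeric attribute, lifts symbolic values one level in the hierarchy, then merges tuples that have become identical, keeping a count.

```python
from collections import Counter

# Hypothetical one-level concept hierarchy for the symbolic attribute.
CITY_TO_REGION = {"Toronto": "Ontario", "Ottawa": "Ontario", "Calgary": "Alberta"}

def discretize_age(age):
    """Map a numeric age into one of a few coarse intervals."""
    return "young" if age < 30 else "middle" if age < 60 else "senior"

def reduce_horizontally(rows):
    """Generalize each tuple, then merge duplicates with a vote count."""
    generalized = [(CITY_TO_REGION[city], discretize_age(age), cls)
                   for city, age, cls in rows]
    return Counter(generalized)   # identical tuples collapse into counts

rows = [("Toronto", 25, "churn"), ("Ottawa", 28, "churn"),
        ("Calgary", 65, "stay"), ("Toronto", 27, "churn")]
for tup, count in reduce_horizontally(rows).items():
    print(tup, "x", count)        # ('Ontario', 'young', 'churn') x 3
```

Three of the four toy tuples collapse into one generalized tuple, which is the horizontal reduction; the second phase would then drop low-merit attributes for the vertical reduction.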