Similar Documents
20 similar documents found (search time: 453 ms)
1.
Cilk (pronounced “silk”) is a C-based runtime system for multithreaded parallel programming. In this paper, we document the efficiency of the Cilk work-stealing scheduler, both empirically and analytically. We show that on real and synthetic applications, the “work” and “critical-path length” of a Cilk computation can be used to model performance accurately. Consequently, a Cilk programmer can focus on reducing the computation's work and critical-path length, insulated from load balancing and other runtime scheduling issues. We also prove that for the class of “fully strict” (well-structured) programs, the Cilk scheduler achieves space, time, and communication bounds all within a constant factor of optimal. The Cilk runtime system currently runs on the Connection Machine CM5 MPP, the Intel Paragon MPP, the Sun Sparcstation SMP, and the Cilk-NOW network of workstations. Applications written in Cilk include protein folding, graphic rendering, backtrack search, and the Socrates chess program, which won second prize in the 1995 ICCA World Computer Chess Championship.
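The work/critical-path performance model described in this abstract can be sketched in a few lines of Python. This is an illustrative sketch of the well-known greedy-scheduling bound the paper builds on; the function names and example numbers are not from the paper.

```python
def predicted_runtime(work, critical_path, processors):
    """Greedy-scheduling bound: T_P <= T_1/P + T_inf.

    Work stealing achieves this bound to within a constant factor,
    which is why work (T_1) and critical-path length (T_inf) alone
    model a Cilk computation's performance accurately.
    """
    return work / processors + critical_path

def predicted_speedup(work, critical_path, processors):
    # Speedup saturates at the parallelism T_1 / T_inf as P grows.
    return work / predicted_runtime(work, critical_path, processors)
```

For example, a computation with work 1e9 and critical path 1e4 has parallelism 1e5, so on 64 processors the model predicts a speedup of about 64 (near-linear), while adding processors far beyond 1e5 would yield no further gain.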

2.
Analysis of Key Techniques in Computer Chinese Chess
Machine game-playing is regarded as one of the most challenging research directions in artificial intelligence. Computer chess has a long history and has gone through a spectacular “battle”; the victory of the Deep Blue computer left humanity with an unforgettable memory. Computer Chinese chess is no less difficult than Western chess, yet few scholars have entered the field and reference material is scarce. Building on mature techniques from computer chess, combined with years of practice in Chinese-chess machine game-playing, this paper summarizes a set of core techniques covering process modeling, state representation, move generation, position evaluation, game-tree search, opening-book and endgame-database development, and system testing and parameter optimization, and finally points out current research hotspots and directions.
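Of the core techniques listed above, game-tree search is the most mechanical to state. A minimal negamax search with alpha-beta pruning is sketched below; this is a generic textbook formulation, not the paper's implementation, and the callback names are illustrative.

```python
def alphabeta(state, depth, alpha, beta, evaluate, moves, play):
    """Negamax alpha-beta search.

    `evaluate` scores `state` from the side to move's perspective,
    `moves` lists legal moves, and `play` returns the successor state.
    """
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    best = float("-inf")
    for m in legal:
        # Negate: the child's score is from the opponent's viewpoint.
        score = -alphabeta(play(state, m), depth - 1, -beta, -alpha,
                           evaluate, moves, play)
        best = max(best, score)
        alpha = max(alpha, score)
        if alpha >= beta:   # cutoff: the opponent will avoid this line
            break
    return best
```

The same skeleton underlies both Western-chess and Chinese-chess engines; what differs is the move generator and the evaluation function plugged into it.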

3.
The IBM Deep Blue supercomputer that defeated World Chess Champion Garry Kasparov in 1997 employed 480 custom chess chips. This article describes the design philosophy, general architecture, and performance of the chess chips, which provided most of Deep Blue's computational power.

4.
Deep Blue     
Artificial Intelligence, 2002, 134(1–2): 57–83
Deep Blue is the chess machine that defeated then-reigning World Chess Champion Garry Kasparov in a six-game match in 1997. There were a number of factors that contributed to this success, including:
  • a single-chip chess search engine,
  • a massively parallel system with multiple levels of parallelism,
  • a strong emphasis on search extensions,
  • a complex evaluation function, and
  • effective use of a Grandmaster game database.
This paper describes the Deep Blue system, and gives some of the rationale that went into the design decisions behind Deep Blue.

5.
Computer composition of high-quality chess mate problems is a relatively uninvestigated research domain. A previous model, called an Improver of Chess Problems, which is based on a hill-climbing search, slightly improved the quality of 10 out of 36 known problems (about 28%). In this article, we describe an improved model, called Deep Improver of Chess Problems, which uses an improved version of bounded depth-first search. The experiments we carried out on the same problems show that the quality of 32 problems (about 89%) was improved. Moreover, some of the improvements are meaningful and impressive.

6.
This paper deals with the problem of teaching chess. We describe the ideal characteristics of a learning environment for chess and discuss a partial implementation of such an environment: ICONCHESS, an Interactive Consultant for Chess Middlegames. Most research on computer chess has focused on creating highly competitive chess-playing programs, regardless of the means used to achieve this goal. Because of the success obtained by programs based on search algorithms, little effort has been put into chess-playing programs that play using high-level strategies, which are essential to human chess players. This lack of strategic foundations also makes most chess-playing programs inadequate as chess tutors. This paper presents a new approach for computer chess in general and specifically for a learning environment for chess. This approach combines the techniques of fuzzy logic and case-based reasoning in order to produce high-level advice for chess positions. We also present the results of the empirical experiments used to test ICONCHESS and suggest some ways in which the approach can be applied to other domains.

7.
Difference-lists are terms that represent lists. The use of difference-lists can speed up most list-processing programs considerably. Prolog programmers routinely use “difference-list versions” of programs, but very little investigation has taken place into difference-list transformation. Thus, to most programmers it is either unknown that the use of difference-lists is far from safe in all contexts, or else this fact is known but attributed to Prolog’s infamous “occur check problem.” In this paper we study the transformation of list-processing programs into programs that use difference-lists. In particular we are concerned with finding circumstances under which the transformation is safe. We show that dataflow analysis can be used to determine whether the transformation is applicable to a given program, thereby allowing for automatic transformation. We prove that our transformation preserves strong operational equivalence. This paper is a revised and extended version of a paper presented at the International Computer Science Conference ’88 in Hong Kong, December 1988.
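The difference-list technique itself is Prolog-specific, but its key effect (constant-time append by keeping a "hole" at the tail of the list) can be imitated in Python with closures, in the style of Hughes lists. This is an illustrative transposition under that analogy, not the transformation studied in the paper.

```python
def empty():
    # A difference list is a function from a tail to the full list:
    # the argument plays the role of the unbound "hole" variable.
    return lambda tail: tail

def singleton(x):
    return lambda tail: [x] + tail

def concat(d1, d2):
    # O(1) "append": just compose; no list copying happens here.
    return lambda tail: d1(d2(tail))

def to_list(d):
    # Plug the empty list into the hole to materialize the result.
    return d([])
```

As in Prolog, a chain of `concat` calls costs nothing until the final `to_list`, so left-nested appends no longer cost quadratic time.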

8.
We present a generalized let-polymorphic type inference algorithm, prove that any of its instances is sound and complete with respect to the Hindley/Milner let-polymorphic type system, and find a condition on two instance algorithms under which one algorithm finds type errors earlier than the other. By instantiating the generalized algorithm with different parameters, we can obtain not only the two opposite algorithms (the bottom-up standard algorithm W and the top-down algorithm M) but also other hybrid algorithms which are used in real compilers. Such instances’ soundness and completeness follow automatically, and their relative earliness in detecting type errors is determined by checking a simple condition. The set of instances of the generalized algorithm is a superset of those used in the two most popular ML compilers: SML/NJ and OCaml. This work is supported by the Creative Research Initiatives of the Korean Ministry of Science and Technology (National Creative Research Initiative Center, http://ropas.kaist.ac.kr). Work done while the third author was associated with the Korea Advanced Institute of Science and Technology. Hyunjun Eo: He is a Ph.D. candidate in the Department of Computer Science at KAIST (Korea Advanced Institute of Science and Technology). He received his bachelor’s and master’s degrees in Computer Science from KAIST in 1996 and 1998, respectively. His research interests have been static program analysis, fixpoint iteration algorithms, and higher-order and typed languages. Since fall 1998, he has been a research assistant at the National Creative Research Initiative Center for Research on Program Analysis Systems. He is currently working on developing a tool for automatic generation of program analyzers. Oukseh Lee: He is a Ph.D. candidate in the Department of Computer Science at KAIST. He received his bachelor’s and master’s degrees in Computer Science from KAIST in 1995 and 1997, respectively.
His research interests have been static program analysis, type systems, programming language implementation, higher-order and typed languages, and program verification. Since 1998, he has been a research assistant at the National Creative Research Initiative Center for Research on Program Analysis Systems. He is currently working on compile-time analyses and verification of the memory behavior of programs. Kwangkeun Yi, Ph.D.: His research interest has been semantics-based program analysis and the systems application of language technologies. After his Ph.D. from the University of Illinois at Urbana-Champaign he joined the Software Principles Research Department at Bell Laboratories, where he worked on various static analysis approaches for higher-order and typed programming languages. From 1995 to 2003 he was a faculty member in the Department of Computer Science, Korea Advanced Institute of Science and Technology. Since fall 2003, he has been a faculty member in the School of Computer Science and Engineering, Seoul National University.
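Both W and M, and the hybrid instances in between, rest on the same core operation: first-order unification of types. A minimal sketch of that shared core follows, using a hypothetical tuple encoding of types (`("var", name)`, `("int",)`, `("fun", arg, result)`); the occurs check is omitted for brevity, so this is not a complete implementation.

```python
def resolve(t, subst):
    # Follow substitution chains until we hit a non-variable
    # or an unbound variable.
    while t[0] == "var" and t[1] in subst:
        t = subst[t[1]]
    return t

def unify(t1, t2, subst):
    """Return a substitution extending `subst` that makes t1 and t2
    equal, or raise TypeError if none exists (a type error)."""
    t1, t2 = resolve(t1, subst), resolve(t2, subst)
    if t1 == t2:
        return subst
    if t1[0] == "var":
        return {**subst, t1[1]: t2}
    if t2[0] == "var":
        return {**subst, t2[1]: t1}
    if t1[0] == "fun" and t2[0] == "fun":
        subst = unify(t1[1], t2[1], subst)   # unify argument types
        return unify(t1[2], t2[2], subst)    # then result types
    raise TypeError(f"cannot unify {t1} with {t2}")
```

Where W and M differ is not in this routine but in *when* they call it: W unifies bottom-up after visiting subterms, while M pushes an expected type down into subterms, which is why M can report errors earlier.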

9.
The modelling and measurement of expertise is a relatively new research area in artificial intelligence and cognitive science. Many domains do not have a formal method for evaluating expertise. When formal methods exist, they are frequently inefficient. Using an extension to the IAM program, a pattern recognition and acquisition method for evaluating the level of expertise in the domain of chess is developed. Chess players, as well as experts in other domains, use cognitive chunks of perceptual patterns to gain a cognitive economy that enables them to evaluate complex domain situations faster and more accurately than novices. The IAM program acquires a representative collection of the perceptual patterns demonstrated by a domain expert and uses those patterns to analyse skill level. A longitudinal study of a developing player and a comparison of the developing player to an established expert demonstrate the utility of the developed method for evaluating expertise.

10.
Pest Control Expert System for Tomato (PCEST)
This paper presents a real-life pest control expert system for tomato. The system involves two main subtasks, namely: ‘diagnose’ and ‘treat’. The ‘diagnose’ subtask finds out the causes of the growers' complaints, while the ‘treat’ subtask finds out a treatment plan for these causes. CommonKADS methodology has been used to develop the system. Dependency network is used as one of our knowledge representation schemes in both subtasks. An expert system evaluation methodology has been suggested and applied to the developed system. Received May 1998 / Revised January 1999 / Accepted in revised form May 1999

11.
The Internet was an important witness to a change in human-computer relations this spring, when IBM's Deep Blue computer defeated World Champion Garry Kasparov in a six-game chess match, 3.5 to 2.5. The author initiated an analysis by the Internet Chess Club that revealed a resource for Kasparov in the second game that could have prevented his defeat. The author discusses the incident and the implications for collaboration on the Web.

12.
Memoing is often used in logic programming to avoid redundant evaluation of similar goals, often on programs that are inherently recursive in nature. The interaction between memoing and recursion, however, is quite complex. There are several top-down evaluation strategies for logic programs that utilize memoing to achieve completeness in the presence of recursion. This paper’s focus, however, is on the use of naive memoing in Prolog. Using memoing naively in conjunction with recursion in Prolog may not produce expected results. For example, adding naive memoing to Prolog’s evaluation of a right-recursive transitive closure may be incomplete, whereas adding naive memoing to Prolog’s evaluation of a left-recursive transitive closure may be terminating and complete. This paper examines the completeness of naive memoing in linear-recursive, function-free logic programs evaluated with Prolog’s top-down evaluation strategy. In addition, we assume that the program is definite and safe, having finite base relations and exactly one recursive predicate. The goal of the paper is a theoretical study of the completeness of naive memoing and recursion in Prolog, illustrating the limitations imposed even for this simplified class of programs. The naive memoing approach utilized for this study is based on extension tables, which provide a memo mechanism with immediate-update view semantics for Prolog programs, through a source transformation known as ET. We introduce the concept of ET-complete, which refers to the completeness of the evaluation of a query over a Prolog program that memos selected predicates through the ET transformation. We show that left-linear recursions defined by a single recursive rule are ET-complete. We generalize the class of left-linear recursions that are ET-complete by introducing pseudo-left-linear recursions, which are also defined by a single linear recursive rule.
To add left-linear recursions defined by multiple linear recursive rules to the class of ET-complete recursions, we present a left-factoring algorithm that converts left-linear recursions defined by multiple recursive rules into pseudo-left-linear recursions defined by a single recursive rule. Based on these results, the paper concludes by identifying research directions for expanding the class of Prolog programs to be examined in future work. This work was partially supported by the National Science Foundation under Grant CCR-9008737. Suzanne Wagner Dietrich, Ph.D.: She is an Associate Professor in the Department of Computer Science and Engineering at Arizona State University. Her research emphasis is on the evaluation of declarative logic programs, especially in the context of deductive databases, including materialized view maintenance and condition monitoring in active deductive databases. More recently, her research interests include the integration of active, object-oriented, and deductive databases, as well as the application of this emerging database technology to various disciplines such as software engineering. She received the B.S. degree in computer science in 1983 from the State University of New York at Stony Brook, and, as the recipient of an Office of Naval Research Graduate Fellowship, earned her Ph.D. degree in computer science at Stony Brook in 1987. Changguan Fan, M.S.: He is a Ph.D. candidate in the Department of Computer Science and Engineering at Arizona State University and a software engineer at the Regenisys Corporation in Scottsdale, AZ. His research interests include the evaluation of logic programs, deductive database systems, and database management systems. He received his B.S. in Computer Science from the Shanghai Institute of Railway Technology, Shanghai, China, in 1982 and his M.S. in the Department of Computer Science and Engineering at Arizona State University in 1989.
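The incompleteness of naive memoing can be reproduced outside Prolog. The illustrative reachability function below (a plain Python analogue, not the ET transformation itself) seeds the memo entry before recursing so that cycles terminate; as a consequence, a node inside a cycle records a partial answer in the table, mirroring how a naive memo can be incomplete.

```python
def make_reach(edges):
    """Memoized reachability over an adjacency dict {node: [successors]}."""
    memo = {}

    def reach(x):
        if x in memo:            # naive memo lookup: may return a partial answer
            return memo[x]
        memo[x] = set()          # seed the entry so cycles terminate
        result = {x}
        for y in edges.get(x, ()):
            result |= reach(y)
        memo[x] = result
        return result

    return reach, memo
```

On the cycle a → b → a, `reach("a")` returns the correct answer {a, b}, but the memo entry stored for `b` along the way is just {b}: the recursive call back into `a` hit the seeded (empty) entry. An extension-table mechanism must detect and repair exactly this kind of stale partial answer.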

13.
This paper presents a statistical model of the malware detection problem. Where Chess and White (An undetectable computer virus. In: Virus Bulletin Conference, 2000) only partially addressed this issue and gave existence results alone, we give here constructive results on undetectable malware. We show that any existing detection technique can be modelled by one or more statistical tests. Consequently, the concepts of false positives and non-detection are precisely defined. The concept of test simulability is then presented, which enables us to give constructive results showing how undetectable malware could be developed by an attacker. Practical applications of this statistical model are proposed. Finally, we give a statistical variant of Cohen’s undecidability result for virus detection.

14.
To increase the processing speed of a game-playing system, designing a dedicated multiprocessor system that meets the required game-computing capability is a practical approach. However, parallel computation and task allocation and scheduling in a multiprocessor system pose difficult problems for such game-playing hardware. After reviewing the development of chess computers and the architectures of typical systems, this paper presents a loosely coupled multiprocessor hardware architecture for game playing, and describes in detail a solution based on DSP and FPGA. Based on the structure of the game-playing hardware and the characteristics of game-playing tasks, an effective task scheduling scheme is given.

15.
We describe a relational learning-by-observation framework that automatically creates cognitive agent programs that model expert task performance in complex dynamic domains. Our framework uses observed behavior and goal annotations of an expert as the primary input, interprets them in the context of background knowledge, and returns an agent program that behaves similarly to the expert. We map the problem of creating an agent program onto multiple learning problems that can be represented in a “supervised concept learning” setting. The acquired procedural knowledge is partitioned into a hierarchy of goals and represented with first-order rules. Using an inductive logic programming (ILP) learning component allows our framework to naturally combine structured behavior observations, parametric and hierarchical goal annotations, and complex background knowledge. To deal with the large domains we consider, we have developed an efficient mechanism for storing and retrieving structured behavior data. We have tested our approach using artificially created examples and behavior observation traces generated by AI agents. We evaluate the learned rules by comparing them to hand-coded rules. Editor: Rui Camacho

16.
“Jiu” chess is a traditional board game of the Tibetan people. A game consists of a layout phase and a battle phase, and the quality of the layout strongly affects the outcome. In contrast to Go, where intelligent game-playing software has defeated top human players, research on computer Jiu chess is almost blank. To broaden the range of games studied in machine game-playing and to develop Jiu software of reasonable playing strength, the authors carried out research on computer Jiu chess based on board patterns. Through field work, about 300 valid Jiu game records were collected in the Aba region of Sichuan, and common board patterns were extracted and named: triangle, three-stone, two-stone, diagonal, four-stone, and so on. In the layout phase, a pattern-matching algorithm is used to speed up pattern matching. In both the layout and battle phases, prioritized defense, attack, and connection strategies were designed on the basis of these patterns. Jiu game-playing software was developed in C, with support for human-vs-human play, human-vs-machine play, and automatic recording of game records. The software successfully played human-vs-machine games at the 7th “Sports Lottery Cup” Tibetan chess competition in Aba County, Sichuan, in 2016, although its playing strength still needs improvement. The results show that pattern-based attack and defense strategies can be effectively applied to computer Jiu chess.
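The pattern-matching step described above, checking whether a named shape such as “triangle” occurs at a board location, can be sketched as a 2-D stencil scan. The encoding below is illustrative (0/1 cells with `None` as a don't-care), not the paper's actual representation.

```python
def match_at(board, pattern, row, col):
    """True if `pattern` (a grid of 0/1 cells, None = don't care)
    occurs on `board` with its top-left corner at (row, col)."""
    for dr, prow in enumerate(pattern):
        for dc, want in enumerate(prow):
            if want is None:
                continue
            r, c = row + dr, col + dc
            # Out-of-bounds or mismatched cell: the pattern fails here.
            if r >= len(board) or c >= len(board[r]) or board[r][c] != want:
                return False
    return True

def find_pattern(board, pattern):
    # Scan every anchor position; return all match coordinates.
    return [(r, c)
            for r in range(len(board))
            for c in range(len(board[0]))
            if match_at(board, pattern, r, c)]
```

A strategy layer could then rank patterns by priority (defense before attack before connection, as in the paper) and act on the highest-priority match found.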

17.
A user-adaptive city guide system with an unobtrusive navigation interface
In this paper, we describe an intelligent location-aware city guide system, which adapts to each user’s preferences, and uses an intuitive “metal detector” interface for navigation. Our system analyzes each user’s past location data history to estimate individual preferences, and allows users to find shops that match their tastes in the same way a metal detector would be used to detect metal objects. The procedure with which the system picks out shops that match each user’s preferences includes a newly developed place learning algorithm, which can efficiently find frequented places, complete with their proper names (e.g. “The Ueno Royal Museum”). We have conducted a series of evaluation tests at a popular shopping district inside Tokyo, and the results validate the effectiveness of our overall approach.

18.
In this paper, we propose an approach to synthesize cartoons from existing cartoon data by controlling the character’s path, which is defined by the cartoonists in a background image. First, detailed pre-experiments are conducted in which different cartoon features are extracted and compared. During the pre-experiments, three features extracted from edge, motion, and color are shown to be effective for evaluating cartoon similarity according to the quantitative analysis. The three features are then fused and a Cartoon Frame Relationship Network is constructed. Based on this graph, we propose a Constrained Spreading Activation Algorithm to select candidate frames which are visually similar to the current frame to generate the next frame. The cartoons are synthesized by choosing the most appropriate frame from the candidates in accordance with the path designed by the cartoonists. When the new cartoons are applied to the background image, our approach also coordinates the cartoon character’s size according to the image’s perspective. The experimental results demonstrate that the combination of the three proposed features is effective in similarity evaluation, and that the candidates selected by the Constrained Spreading Activation Algorithm are more similar to the current frame compared with other algorithms. The results also show that our approach can synthesize visually smooth cartoons from the existing cartoon library.

19.
CHUNKER is a chess program that uses chunked knowledge to improve its performance. Its domain is a subset of king and pawn endings in chess that has been studied for over 300 years. CHUNKER has a large library of chunk instances where each chunk type has a property list and each instance has a set of values for these properties. This allows CHUNKER to reason about positions that come up in the search that would otherwise have to be handled by means of additional search. Thus the program is able to solve the most difficult problem of its present domain (a problem that would require 45 ply of search and on the order of 10^13 years of CPU time to be solved by the best of present-day chess programs) in 18 ply and one minute of CPU time. Further, CHUNKER is undoubtedly the world's foremost expert in its domain; it has discovered two mistakes in the literature and has been instrumental in discovering a new theorem about the domain that allows positions to be assessed with a new degree of ease and confidence. In this paper we show how the libraries are compiled, how CHUNKER works, and discuss our plans for extending it to play the whole domain of king and pawn endings.

20.
The PLT Scheme Web Server uses continuations to enable a natural, console-like program development style. We describe the implementation of the server and its use in the development of an application for managing conference paper reviews. In the process of developing this application, we encountered subtle forms of interaction not directly addressed by using continuations. We discuss these subtleties and offer solutions that have been successfully deployed in our application. Finally, we present some details on the server’s performance, which is comparable to that of the widely-used Apache Web server. This research is partially supported by multiple NSF grants. Preliminary versions of parts of this material have appeared in print (Lecture Notes in Computer Science, vol. 2638, pp. 100–128, 2003; European Symposium on Programming, pp. 122–136, 2001; Scheme Workshop, pp. 53–58, 2003; Symposium on the Practical Aspects of Declarative Languages, pp. 2–16, 2003). P.W. Hopkins’ current affiliation: Google, Inc. P.T. Graunke’s current affiliation: Galois Connections, Inc. G. Pettyjohn’s current affiliation: Turbine, Inc.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号