20 similar documents found; search took 31 ms.
1.
A Horn definition is a set of Horn clauses with the same predicate in all head literals. In this paper, we consider learning
non-recursive, first-order Horn definitions from entailment. We show that this class is exactly learnable from equivalence
and membership queries. It follows then that this class is PAC learnable using examples and membership queries. Finally, we
apply our results to learning control knowledge for efficient planning in the form of goal-decomposition rules.
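The entailment oracle that a membership query answers can be illustrated with a small propositional sketch (the paper itself treats first-order Horn definitions; the clause encoding below is an illustrative simplification): forward chaining decides whether a set of Horn clauses plus facts entails a ground atom.

```python
def entails(clauses, facts, query):
    """Return True if the Horn clauses plus the given facts entail the query atom.

    Each clause is (body_atoms, head_atom); forward chaining adds heads whose
    bodies are already known, until a fixpoint is reached.
    """
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return query in known

clauses = [({"bird", "healthy"}, "flies"),
           ({"penguin"}, "bird")]
print(entails(clauses, {"penguin", "healthy"}, "flies"))  # True
print(entails(clauses, {"penguin"}, "flies"))             # False
```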
Chandra Reddy, Ph.D.: He is currently a doctoral student in the Department of Computer Science at Oregon State University. He is completing his
Ph.D. on June 30, 1998. His dissertation is entitled “Learning Hierarchical Decomposition Rules for Planning: An Inductive
Logic Programming Approach.” Earlier, he had an M. Tech in Artificial Intelligence and Robotics from University of Hyderabad,
India, and an M.Sc.(tech) in Computer Science from Birla Institute of Technology and Science, India. His current research
interests broadly fall under machine learning and planning/scheduling—more specifically, inductive logic programming, speedup
learning, data mining, and hierarchical planning and optimization.
Prasad Tadepalli, Ph.D.: He has an M.Tech in Computer Science from Indian Institute of Technology, Madras, India and a Ph.D. from Rutgers University,
New Brunswick, USA. He joined Oregon State University, Corvallis, as an assistant professor in 1989. He is now an associate
professor in the Department of Computer Science of Oregon State University. His main area of research is machine learning,
including reinforcement learning, inductive logic programming, and computational learning theory, with applications to classification,
planning, scheduling, manufacturing, and information retrieval.
2.
Because of the enormous number of rules that can be produced by data mining algorithms, knowledge post-processing is a
difficult stage in an association rule discovery process. In order to find relevant knowledge for decision making, the user
(a decision maker specialized in the data studied) needs to rummage through the rules. To assist him/her in this task, we
here propose the rule-focusing methodology, an interactive methodology for the visual post-processing of association rules. It allows the user to explore
large sets of rules freely by focusing his/her attention on limited subsets. This new approach relies on rule interestingness
measures, on a visual representation, and on interactive navigation among the rules. We have implemented the rule-focusing
methodology in a prototype system called ARVis. It exploits the user's focus to guide the generation of the rules by means of a specific constraint-based rule-mining algorithm.
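The rule interestingness measures such a post-processing methodology relies on can be sketched as follows; the toy transaction database and the particular measure set (support, confidence, lift) are illustrative choices, not necessarily the measures ARVis uses.

```python
def measures(transactions, antecedent, consequent):
    """Support, confidence, and lift of the rule antecedent -> consequent."""
    n = len(transactions)
    n_a = sum(1 for t in transactions if antecedent <= t)
    n_b = sum(1 for t in transactions if consequent <= t)
    n_ab = sum(1 for t in transactions if (antecedent | consequent) <= t)
    support = n_ab / n
    confidence = n_ab / n_a if n_a else 0.0
    lift = confidence / (n_b / n) if n_b else 0.0
    return support, confidence, lift

db = [{"bread", "butter"}, {"bread", "milk"},
      {"bread", "butter", "milk"}, {"milk"}]
support, confidence, lift = measures(db, {"bread"}, {"butter"})
print(round(support, 3), round(confidence, 3), round(lift, 3))  # 0.5 0.667 1.333
```

A user focusing on a subset of rules would filter by thresholds on such measures before drilling down visually.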
Julien Blanchard earned the Ph.D. in 2005 from Nantes University (France) and is currently an assistant professor at the Polytechnic School
of Nantes University. He is the author of a book chapter and seven journal and international conference papers in the field
of visualization and interestingness measures for data mining.
Fabrice Guillet is currently a member of the LINA laboratory (CNRS 2729) at the Polytechnic Graduate School of Nantes University (France).
He received the Ph.D. degree in computer science in 1995 from the Ecole Nationale Supérieure des Télécommunications de Bretagne.
He is author of 35 international publications in data mining and knowledge management. He is a founder and a permanent member
of the Steering Committee of the annual EGC French-speaking conference.
Henri Briand received the Ph.D. degree in 1983 from Paul Sabatier University in Toulouse (France) and has over
100 publications in database systems and database mining. He was the head of the Computer Engineering Department at the Polytechnic
School of Nantes University. He was in charge of a research team in the data mining domain. He is responsible for the organization
of the Data Mining Master in Nantes University.
3.
Recent Progress on Selected Topics in Database Research: A Report by Nine Young Chinese Researchers Working in the United States
Zhiyuan Chen, Chen Li, Jian Pei, Yufei Tao, Haixun Wang, Wei Wang, Jiong Yang, Jun Yang, Donghui Zhang 《Journal of Computer Science and Technology》2003,18(5):0-0
The study of database technologies, or more generally, the technologies of data and information management, is an important and active research field. Recently, many exciting results have been reported. In this fast-growing field, Chinese researchers play more and more active roles. Research papers from Chinese scholars, both in China and abroad, appear in prestigious academic forums. In this paper, we, nine young Chinese researchers working in the United States, present concise surveys and report our recent progress on the selected fields that we are working on. Although the paper covers only a small number of topics and the selection of the topics is far from balanced, we hope that such an effort will attract more and more researchers, especially those in China, to enter the frontiers of database research and promote collaborations. For the obvious reason, the authors are listed alphabetically, while the sections are arranged in the order of the author list.
4.
This approach proposes the creation and management of adaptive learning systems by combining component technology, semantic metadata, and adaptation rules. A component model allows interaction among components that share consistent assumptions about what each provides and requires of the other. It allows indexing, using, reusing, and coupling of components in different contexts, powering adaptation. Our claim is that semantic metadata are required to allow real reuse and assembly of educational components. Finally, a rule language is used to define strategies to rewrite the user query and the user model. The former allows searching for components that develop concepts not appearing in the user query but related to the user's goals, whereas the latter allows inferring user knowledge that is not explicit in the user model.
John Freddy Duitama received his M.Sc. degree in system engineering from the University of Antioquia, Colombia (South America). He is currently a doctoral candidate at the GET - Institut National des Télécommunications, Evry, France. This work is sponsored by the University of Antioquia, where he is an assistant professor. His research interests include the semantic web and web-based learning systems, educational metadata, and learning objects.
Bruno Defude received his Ph.D. in Computer Science from the University of Grenoble (I.N.P.G.) in 1986. He is currently Professor in the Department of Computer Science at the GET - Institut National des Télécommunications, Evry, France, where he leads the SIMBAD project (Semantic Interoperability for MoBile and ADaptive applications). His major fields of research interest are databases and the semantic web, specifically personalized data access, adaptive systems, metadata, interoperability, and semantic peer-to-peer systems, with e-learning as a privileged application area. He is a member of ACM SIGMOD.
Amel Bouzeghoub received a Ph.D. in Computer Science from Pierre et Marie Curie University, France. In 2000, she joined the Computer Science Department of GET-INT (Institut National des Télécommunications) at Evry (France) as an associate professor. Her research interests include web-based learning systems, semantic metadata for learning resources, adaptive learning systems, and intelligent tutoring systems.
Claire Lecocq received an Engineer Degree and a Ph.D. in Computer Science in 1994 and 1999, respectively. In 1997, she joined the Computer Science Department at GET-INT (Institut National des Télécommunications) of Evry, France, as an associate professor. Her first research interests included spatial databases and visual query languages. She is now working on adaptive learning systems, particularly on semantic metadata and user models.
5.
View-based approaches for learning and recognition of 3D objects and their pose detection have proved effective and efficient,
except for their high learning cost. In this research, we propose a virtual learning approach which generates learning samples of
views of an object from its 3D view model obtained by motion-stereo method. From the generated learning sample views, features
of high-order autocorrelation are extracted, and discriminant feature spaces for object recognition and pose detection are
built. Recognition experiments on real objects are carried out to show the effectiveness of the proposed method.
Caihua Wang, Ph.D.: He received his B.S. in mathematics and M.E. in electronic engineering from Renmin University of China, Beijing, China in
1983 and 1986, and his Ph.D. from Shizuoka University, Hamamatsu, Japan in 1996. He is a JST domestic fellow and is doing
his post doctoral research at Electrotechnical Laboratory. His research interests are computer vision and image processing.
He is a member of IEICE and IPSJ.
Katsuhiko Sakaue, Ph.D.: He received the B.E., M.E., and Ph.D. degrees all in electronic engineering from University of Tokyo, in 1976, 1978 and
1981, respectively. In 1981, he joined the Electrotechnical Laboratory, Ministry of International Trade and Industry, and
engaged in researches in image processing and computer vision. He received the Encouragement Prize in 1979 from IEICE, and
the Paper Award in 1985 from Information.
6.
We present a system for performing belief revision in a multi-agent environment. The system is called GBR (Genetic Belief
Revisor) and it is based on a genetic algorithm. In this setting, different individuals are exposed to different experiences.
This may happen because the world surrounding an agent changes over time or because we allow agents exploring different parts
of the world. The algorithm permits the exchange of chromosomes from different agents and combines two different evolution
strategies, one based on Darwin’s and the other on Lamarck’s evolutionary theory. The algorithm therefore also includes a
Lamarckian operator that changes the memes of an agent in order to improve their fitness. The operator is implemented by means
of a belief revision procedure that, by tracing logical derivations, identifies the memes leading to contradiction. Moreover,
the algorithm comprises a special crossover mechanism for memes in which a meme can be acquired from another agent only if
the other agent has “accessed” the meme, i.e. if an application of the Lamarckian operator has read or modified the meme.
Experiments have been performed on the n-queens problem and on a problem of digital circuit diagnosis. In the case of the n-queens
problem, the addition of the Lamarckian operator in the single agent case improves the fitness of the best solution. In both
cases the experiments show that the distribution of constraints, even if it may lead to a reduction of the fitness of the
best solution, does not produce a significant reduction.
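The Darwinian/Lamarckian combination can be sketched on the n-queens problem; the greedy repair operator below is an illustrative stand-in for GBR's belief-revision-based Lamarckian operator, and all parameters are assumptions for the sketch.

```python
import random

def conflicts(board):
    """Number of attacking queen pairs; board[i] is the row of the queen in column i."""
    n = len(board)
    return sum(1 for i in range(n) for j in range(i + 1, n)
               if board[i] == board[j] or abs(board[i] - board[j]) == j - i)

def lamarckian_repair(board):
    """Greedily move each queen to its best row and write the result back
    into the genome: the improvement is inherited (Lamarck), not discarded."""
    board = board[:]
    n = len(board)
    for col in range(n):
        board[col] = min(range(n),
                         key=lambda r: conflicts(board[:col] + [r] + board[col + 1:]))
    return board

def evolve(n=6, pop_size=20, generations=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randrange(n) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop = [lamarckian_repair(ind) for ind in pop]    # Lamarckian operator
        pop.sort(key=conflicts)                          # Darwinian selection
        parents = pop[:pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]                    # one-point crossover
            if rng.random() < 0.2:                       # mutation
                child[rng.randrange(n)] = rng.randrange(n)
            children.append(child)
        pop = parents + children
    return min(pop, key=conflicts)

best = evolve()
print(conflicts(best))
```

In the multi-agent setting described above, the crossover step would additionally be restricted to memes that the Lamarckian operator has accessed.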
Evelina Lamma, Ph.D.: She is Full Professor at the University of Ferrara. She got her degree in Electrical Engineering at the University of Bologna
in 1985, and her Ph.D. in Computer Science in 1990. Her research activity centers on extensions of logic programming languages
and artificial intelligence. She was a co-organizer of the 3rd International Workshop on Extensions of Logic Programming ELP92,
held in Bologna in February 1992, and of the 6th Italian Congress on Artificial Intelligence, held in Bologna in September
1999. Currently, she teaches Artificial Intelligence and Foundations of Computer Science.
Fabrizio Riguzzi, Ph.D.: He is Assistant Professor at the Department of Engineering of the University of Ferrara, Italy. He received his Laurea from
the University of Bologna in 1995 and his Ph.D. from the University of Bologna in 1999. He joined the Department of Engineering
of the University of Ferrara in 1999. He has been a visiting researcher at the University of Cyprus and at the New University
of Lisbon. His research interests include: data mining (and in particular methods for learning from multirelational data),
machine learning, belief revision, genetic algorithms and software engineering.
Luís Moniz Pereira, Ph.D.: He is Full Professor of Computer Science at Departamento de Informática, Universidade Nova de Lisboa, Portugal. He received
his Ph.D. in Artificial Intelligence from Brunel University in 1974. He is the director of the Artificial Intelligence Centre
(CENTRIA) at Universidade Nova de Lisboa. He has been elected Fellow of the European Coordinating Committee for Artificial
Intelligence in 2001. He has been a visiting Professor at the U. California at Riverside, USA, the State U. NY at Stony Brook,
USA and the U. Bologna, Italy. His research interests include: knowledge representation, reasoning, learning, rational agents
and logic programming.
7.
In this paper, ARMiner, a data mining tool based on association rules, is introduced. Beginning with the system architecture, its characteristics and functions are discussed in detail, including data transfer, concept hierarchy generalization, mining rules with negative items, and the re-development of the system. An example of the tool's application is also shown. Finally, some issues for future research are presented.
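The core bookkeeping of association-rule tools is support counting; a minimal sketch (an illustration of the general idea, not ARMiner's actual implementation) counts candidate itemsets of several sizes in one scan of each database partition.

```python
from itertools import combinations
from collections import Counter

def count_partition(partition, sizes):
    """Count all itemsets of the given sizes in one scan over a partition."""
    counts = Counter()
    for transaction in partition:
        for k in sizes:
            for itemset in combinations(sorted(transaction), k):
                counts[itemset] += 1
    return counts

# Two partitions of a toy transaction database.
partitions = [
    [{"a", "b", "c"}, {"a", "b"}],
    [{"b", "c"}, {"a", "c"}],
]
total = Counter()
for p in partitions:            # one scan per partition
    total += count_partition(p, sizes=(1, 2))
print(total[("a", "b")])  # 2
```

Counting several sizes per scan is what lets partition-based algorithms make far fewer passes over the full database than one pass per itemset size.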
8.
TEG—a hybrid approach to information extraction
This paper describes a hybrid statistical and knowledge-based information extraction model, able to extract entities and relations
at the sentence level. The model attempts to retain and improve the high accuracy levels of knowledge-based systems while
drastically reducing the amount of manual labour by relying on statistics drawn from a training corpus. The implementation
of the model, called TEG (trainable extraction grammar), can be adapted to any IE domain by writing a suitable set of rules
in a SCFG (stochastic context-free grammar)-based extraction language and training them using an annotated corpus. The system
does not contain any purely linguistic components, such as a PoS tagger or shallow parser, but allows using external linguistic
components if necessary. We demonstrate the performance of the system on several named entity extraction and relation extraction
tasks. The experiments show that our hybrid approach outperforms both purely statistical and purely knowledge-based systems,
while requiring orders of magnitude less manual rule writing and smaller amounts of training data. We also demonstrate the
robustness of our system under conditions of poor training-data quality.
Ronen Feldman is a senior lecturer at the Mathematics and Computer Science Department of Bar-Ilan University in Israel, and the Director
of the Data Mining Laboratory. He received his B.Sc. in Math, Physics and Computer Science from the Hebrew University, M.Sc.
in Computer Science from Bar-Ilan University, and his Ph.D. in Computer Science from Cornell University in NY. He was an Adjunct
Professor at NYU Stern Business School. He is the founder of ClearForest Corporation, a Boston based company specializing
in development of text mining tools and applications. He has given more than 30 tutorials on text mining and information extraction
and authored numerous papers on these topics. He is currently finishing his book “The Text Mining Handbook”, to be published
by Cambridge University Press.
Benjamin Rosenfeld is a research scientist at ClearForest Corporation. He received his B.Sc. in Mathematics and Computer Science from Bar-Ilan
University. He is the co-inventor of the DIAL information extraction language.
Moshe Fresko is finalizing his Ph.D. in the Computer Science Department at Bar-Ilan University in Israel. He received his B.Sc. in Computer
Engineering from Bogazici University, Istanbul, Turkey in 1991, and his M.Sc. in 1994. He is also an adjunct lecturer at the Computer
Science Department of Bar-Ilan University and functions as the Information-Extraction Group Leader in the Data Mining Laboratory.
9.
Linear relations have been found valuable in rule discovery for stocks, such as "if stock X goes up by a, stock Y will go down by b". Traditional linear regression models the linear relation of two sequences faithfully. However, if a user requires clustering
of stocks into groups where sequences have high linearity or similarity with each other, it is prohibitively expensive to
compare sequences one by one. In this paper, we present generalized regression model (GRM) to match the linearity of multiple
sequences at a time. GRM also gives strong heuristic support for graceful and efficient clustering. The experiments on the
stocks in the NASDAQ market mined interesting clusters of stock trends efficiently.
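The pairwise building block that GRM generalizes, an ordinary least-squares fit of y = ax + b between two price sequences, can be sketched as follows (the data values are illustrative):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

xs = [1.0, 2.0, 3.0, 4.0]
ys = [10.0, 8.0, 6.0, 4.0]   # stock Y falls by 2 whenever X rises by 1
a, b = fit_line(xs, ys)
print(a, b)  # -2.0 12.0
```

GRM's contribution, per the abstract, is matching such linearity across many sequences at once rather than fitting each pair separately.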
Hansheng Lei received his BE from Ocean University of China in 1998, MS from the University of Science and Technology of China in 2001,
and Ph.D. from the University at Buffalo, the State University of New York in February 2006, all in computer science. He is
currently an assistant professor in CS/CIS Department, University of Texas at Brownsville. His research interests include
biometrics, pattern recognition, machine learning, and data mining.
Venu Govindaraju is a professor of Computer Science and Engineering at the University at Buffalo (UB), State University of New York. He received
his B.Tech. (Honors) from the Indian Institute of Technology (IIT), Kharagpur, India in 1986, and his Ph.D. degree in Computer
Science from UB in 1992. His research is focused on pattern recognition applications in the areas of biometrics and digital
libraries.
10.
Multimedia systems can profit a lot from personalization. Such personalization is essential to give users the feeling that the system is easily accessible, especially if it is done automatically. The way this adaptive personalization works depends heavily on the adaptation model that is chosen. We introduce a generic two-dimensional classification framework for user modeling systems. This enables us to clarify existing as well as new applications in the area of user modeling. In order to illustrate our framework, we evaluate push- and pull-based user modeling in user modeling systems.
Paul de Vrieze received his Masters degree in Information Science in 2002 from the University of Tilburg, The Netherlands. He is currently a junior researcher at the University of Nijmegen. His main research interests include adaptive systems and user modelling.
Patrick van Bommel received his Masters degree in Computer Science in 1990, and the degree of Ph.D. in Mathematics and Computer Science from the University of Nijmegen, the Netherlands in 1995. He is currently an assistant professor at the University of Nijmegen. His main research interests include information modelling and information retrieval.
Prof.Dr.Ir. Th.P. van der Weide received his masters degree from the Technical University Eindhoven, the Netherlands in 1975, and the degree of Ph.D. in Mathematics and Physics from the University of Leiden, the Netherlands in 1980. He is currently a professor at the University of Nijmegen, the Netherlands. His main research interests include information systems, information retrieval, hypertext, and knowledge-based systems.
11.
Haruki Nakamura, Susumu Date, Hideo Matsuda, Shinji Shimojo 《New Generation Computing》2004,22(2):157-166
Recently, life scientists have expressed a strong need for computational power sufficient to complete their analyses within
a realistic time, as well as for an infrastructure capable of seamlessly retrieving biological data of interest from multiple
and diverse bio-related databases. This need implies that life science strongly requires
the benefits of advanced IT. In Japan, the Biogrid project has been promoted since 2002 toward the establishment of a next-generation
research infrastructure for advanced life science. In this paper, the Biogrid strategy toward these ends is detailed along
with the role and mission imposed on the Biogrid project. In addition, we present the current status of the development of
the project as well as the future issues to be tackled.
Haruki Nakamura, Ph.D.: He is Professor of Protein Informatics at Institute for Protein Research, Osaka University. He received his B.S., M.A. and
Ph.D. from the University of Tokyo in 1975, 1977 and 1980 respectively. His research field is Biophysics and Bioinformatics,
and has so far developed several original algorithms in the computational analyses of protein electrostatic features and folding
dynamics. He is also the head of PDBj (Protein Data Bank Japan), which manages and develops the protein structure database, collaborating
with RCSB (Research Collaboratory for Structural Bioinformatics) in USA and MSD-EBI (Macromolecular Structure Database at
the European Bioinformatics Institute) in EU.
Susumu Date, Ph.D.: He is Assistant Professor of the Graduate School of Information Science and Technology, Osaka University. He received his
B.E., M.E. and Ph.D. degrees from Osaka University in 1997, 2000 and 2002, respectively. His research field is computer science
and his current research interests include application of Grid computing and related information technologies to life sciences.
He is a member of IEEE CS and IPSJ.
Hideo Matsuda, Ph.D.: He is Professor of the Department of Bioinformatic Engineering, the Graduate School of Information Science and Technology,
Osaka University. He received his B.S., M.Eng. and Ph.D. degrees from Kobe University in 1982, 1984 and 1987 respectively.
For M.Eng. and Ph.D. degrees, he majored in computer science. His research interests include computational analysis of genomic
sequences. He has been involved in the FANTOM (Functional Annotation of Mouse) Project for the functional annotation of RIKEN
mouse full-length cDNA sequences. He is a member of ISCB, IEEE CS and ACM.
Shinji Shimojo, Ph.D.: He received M.E. and Ph.D. degrees from Osaka University in 1983 and 1986 respectively. He was an Assistant Professor with
the Department of Information and Computer Sciences, Faculty of Engineering Science at Osaka University from 1986, and an
Associate Professor with Computation Center from 1991 to 1998. During the period, he also worked as a visiting researcher
at the University of California, Irvine for a year. He has been Professor with Cybermedia Center (then Computation Center)
at Osaka University since 1998. His current research focuses on a wide variety of multimedia applications, peer-to-peer
communication networks, ubiquitous network systems and Grid technologies. He is a member of ACM, IEEE and IEICE.
12.
Summary: Algorithms for mutual exclusion that adapt to the current degree of contention are developed. A filter and a leader election algorithm form the basic building blocks. The algorithms achieve system response times that are independent of the total number of processes and governed instead by the current degree of contention. The final algorithm achieves a constant amortized system response time.
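The filter building block can be illustrated with Peterson's classic n-process filter lock (a textbook variant; the paper's adaptive filter has stronger properties). A thread must pass through n-1 levels, and at each contended level at least one waiting thread is held back, so only one thread reaches the critical section.

```python
import threading

class FilterLock:
    """Peterson's n-process filter lock (correct under sequential consistency)."""
    def __init__(self, n):
        self.n = n
        self.level = [0] * n     # highest level each thread has entered
        self.victim = [0] * n    # the most recent arrival at each level waits

    def lock(self, me):
        for lv in range(1, self.n):
            self.level[me] = lv
            self.victim[lv] = me
            # Spin while another thread is at this level or higher
            # and this thread is still the level's victim.
            while (any(k != me and self.level[k] >= lv for k in range(self.n))
                   and self.victim[lv] == me):
                pass

    def unlock(self, me):
        self.level[me] = 0

counter = 0
lock = FilterLock(2)

def work(tid, iters=200):
    global counter
    for _ in range(iters):
        lock.lock(tid)
        counter += 1             # critical section
        lock.unlock(tid)

threads = [threading.Thread(target=work, args=(t,)) for t in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400
```

Note the response-time contrast the abstract draws: this textbook lock always traverses n-1 levels, whereas the adaptive algorithms pay only for the current degree of contention.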
Manhoi Choy was born in 1967 in Hong Kong. He received his B.Sc. in Electrical and Electronic Engineering from the University of Hong Kong in 1989, and his M.Sc. in Computer Science from the University of California at Santa Barbara in 1991. Currently, he is working on his Ph.D. in Computer Science at the University of California at Santa Barbara. His research interests are in the areas of parallel and distributed systems, and distributed algorithms.
Ambuj K. Singh is an Assistant Professor in the Department of Computer Science at the University of California, Santa Barbara. He received a Ph.D. in Computer Science from the University of Texas at Austin in 1989, an M.S. in Computer Science from Iowa State University in 1984, and a B.Tech. from the Indian Institute of Technology at Kharagpur in 1982. His research interests are in the areas of adaptive resource allocation, concurrent program development, and distributed shared memory. A preliminary version of the paper appeared in the 12th Annual ACM Symposium on Principles of Distributed Computing. Work supported in part by NSF grants CCR-9008628 and CCR-9223094.
13.
Electronic Commerce (EC) is a promising field for applying agent and Artificial Intelligence technologies. In this article,
we give an overview of the trends of Internet auctions and agent-mediated Web commerce. We describe the theoretical backgrounds
of auction protocols and introduce several Internet auction sites. Furthermore, we describe various activities aimed toward
utilizing agent technologies in EC and the trends in standardization efforts on agent technologies.
Makoto Yokoo, Ph.D.: He received the B.E. and M.E. degrees in electrical engineering, in 1984 and 1986, respectively, from the University of
Tokyo, Japan, and the Ph.D. degree in information and communication engineering in 1995 from the University of Tokyo, Japan.
He is currently a distinguished technical member in NTT Communication Science Laboratories, Kyoto, Japan. He was a visiting
research scientist at the Department of Electrical Engineering and Computer Science, the University of Michigan, Ann Arbor,
from 1990 to 1991. His current research interests include multi-agent systems, search, and constraint satisfaction.
Satoru Fujita, D.Eng.: He received his B.E. and M.E. degrees in electronic engineering from the University of Tokyo in 1984 and 1986, respectively.
He also received his D.Eng. from the University of Tokyo in 1989 for his research on context comprehension in natural language
understanding. He joined NEC Corporation in 1989, and is now a principal researcher of Internet Systems Research Laboratories
of NEC. He is engaged in research on mobile agents, distributed systems and Web services.
14.
Hiroaki Imade, Ryohei Morishita, Isao Ono, Norihiko Ono, Masahiro Okamoto 《New Generation Computing》2004,22(2):177-186
In this paper, we propose a framework, named “Grid-Oriented Genetic Algorithms (GOGAs)”, that enables researchers of genetic
algorithms (GAs) to easily develop GAs running on the Grid, and we actually “Gridify” a GA for estimating genetic networks,
which is being developed by our group, in order to examine the usability of the proposed GOGA framework. We also evaluate
the scalability of the “Gridified” GA by applying it to a five-gene genetic network estimation problem on a grid testbed constructed
in our laboratory.
Hiroaki Imade: He received his B.S. degree in the department of engineering from The University of Tokushima, Tokushima, Japan, in 2001.
He received the M.S. degree in information systems from the Graduate School of Engineering, The University of Tokushima in
2003. He is now in the Doctoral Course of the Graduate School of Engineering, The University of Tokushima. His research interests
include evolutionary computation. He currently researches a framework to easily develop the GOGA models which efficiently
work on the grid.
Ryohei Morishita: He received his B.S. degree in the department of engineering from The University of Tokushima, Tokushima, Japan, in 2002.
He is now in the Master Course of the Graduate School of Engineering, The University of Tokushima. His research interest
is evolutionary computation. He currently researches GA for estimating genetic networks.
Isao Ono, Ph.D.: He received his B.S. degree from the Department of Control Engineering, Tokyo Institute of Technology, Tokyo, Japan, in
1994. He received Ph.D. of Engineering at Tokyo Institute of Technology, Yokohama, in 1997. He worked as a Research Fellow
from 1997 to 1998 at Tokyo Institute of Technology, and at University of Tokushima, Tokushima, Japan, in 1998. He worked as
a Lecturer from 1998 to 2001 at University of Tokushima. He is now Associate Professor at University of Tokushima. His research
interests include evolutionary computation, scheduling, function optimization, optical design and bioinformatics. He is a
member of JSAI, SCI, IPSJ and OSJ.
Norihiko Ono, Ph.D.: He received his B.S., M.S. and Ph.D. of Engineering in 1979, 1981 and 1986, respectively, from Tokyo Institute of Technology.
From 1986 to 1989, he was Research Associate at Faculty of Engineering, Hiroshima University. From 1989 to 1997, he was an
associate professor at Faculty of Engineering, University of Tokushima. He was promoted to Professor in the Department of
Information Science and Intelligent Systems in 1997. His current research interests include learning in multi-agent systems,
autonomous agents, reinforcement learning and evolutionary algorithms.
Masahiro Okamoto, Ph.D.: He is currently Professor of Graduate School of Systems Life Sciences, Kyushu University, Japan. He received his Ph.D. degree
in Biochemistry from Kyushu University in 1981. His major research field is nonlinear numerical optimization and systems biology.
His current research interests cover system identification of nonlinear complex systems using evolutionary optimization algorithms,
development of an integrated simulator for analyzing nonlinear dynamics, and design of fault-tolerant routing
networks by mimicking the metabolic control system. He has more than 90 peer-reviewed publications.
15.
Brian J. Ross 《New Generation Computing》2001,19(4):313-337
DCTG-GP is a genetic programming system that uses definite clause translation grammars. A DCTG is a logical version of an
attribute grammar that supports the definition of context-free languages, and it allows semantic information associated with
a language to be easily accommodated by the grammar. This is useful in genetic programming for defining the interpreter of
a target language, or incorporating both syntactic and semantic problem-specific constraints into the evolutionary search.
The DCTG-GP system improves on other grammar-based GP systems by permitting nontrivial semantic aspects of the language to
be defined with the grammar. It also automatically analyzes grammar rules in order to determine their minimal depth and termination
characteristics, which are required when generating random program trees of varied shapes and sizes. An application using
DCTG-GP is described.
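The minimal-depth analysis mentioned above can be sketched as a fixpoint computation over grammar rules; the dictionary-based grammar encoding here is an illustrative assumption, not DCTG-GP's actual Prolog representation.

```python
INF = float("inf")

def minimal_depths(grammar):
    """Minimal derivation depth of each nonterminal, by fixpoint iteration.

    grammar maps a nonterminal to a list of productions; each production is
    a list of symbols, where a symbol is a nonterminal iff it keys grammar.
    """
    depth = {nt: INF for nt in grammar}
    changed = True
    while changed:
        changed = False
        for nt, productions in grammar.items():
            for prod in productions:
                # A production costs 1 plus the deepest nonterminal it uses;
                # terminal-only productions cost exactly 1.
                d = 1 + max((depth[s] for s in prod if s in grammar), default=0)
                if d < depth[nt]:
                    depth[nt] = d
                    changed = True
    return depth

grammar = {
    "expr": [["term"], ["expr", "+", "term"]],
    "term": [["x"], ["(", "expr", ")"]],
}
print(minimal_depths(grammar))  # {'expr': 2, 'term': 1}
```

Such bounds tell a tree generator which productions can still terminate within a remaining depth budget, which is what allows random trees of varied shapes and sizes to be grown safely.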
Brian James Ross, Ph.D.: He is an associate professor of computer science at Brock University, where he has worked since 1992. He obtained his BCSc
at the University of Manitoba, Canada, in 1984, his MSc at the University of British Columbia, Canada, in 1988, and his PhD
at the University of Edinburgh, Scotland, in 1992. His research interests include evolutionary computation, machine learning,
language induction, concurrency, and logic programming.
16.
Real robots should be able to adapt autonomously to various environments in order to go on executing their tasks without breaking
down. They achieve this by learning how to abstract only useful information from a huge amount of information in the environment
while executing their tasks. This paper proposes a new architecture which performs categorical learning and behavioral learning
in parallel with task execution. We call the architecture Situation Transition Network System (STNS). In categorical learning, it makes a flexible state representation and modifies it according to the results of behaviors.
Behavioral learning is reinforcement learning on the state representation. Simulation results have shown that this architecture
is able to learn efficiently and adapt to unexpected changes of the environment autonomously.
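The behavioral-learning half, reinforcement learning over a discrete state representation, can be sketched with tabular Q-learning on a toy corridor task. The environment and all parameters below are illustrative assumptions; STNS also learns the state representation itself, which this sketch does not attempt.

```python
import random

def q_learning(n_states=4, episodes=150, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a 1-D corridor: start at state 0, goal at the right end."""
    rng = random.Random(seed)
    actions = (-1, 1)                       # step left / step right
    q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:
            if rng.random() < eps:
                a = rng.choice(actions)     # explore
            else:
                a = max(actions, key=lambda act: q[(s, act)])  # exploit
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0
            best_next = max(q[(s2, act)] for act in actions)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = q_learning()
# Greedy policy for the non-goal states; +1 means "move right toward the goal".
policy = [max((-1, 1), key=lambda a: q[(s, a)]) for s in range(3)]
print(policy)
```

In STNS the states fed to such an update rule are not fixed in advance but are created and revised by the categorical-learning component as behaviors succeed or fail.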
Atsushi Ueno, Ph.D.: He is a research associate in the Artificial Intelligence Laboratory at the Graduate School of Information Science at the
Nara Institute of Science and Technology (NAIST). He received the B.E., the M.E., and the Ph.D. degrees in aeronautics and
astronautics from the University of Tokyo in 1991, 1993, and 1997 respectively. His research interest is robot learning and
autonomous systems. He is a member of Japan Association for Artificial Intelligence (JSAI).
Hideaki Takeda, Ph.D.: He is an associate professor in the Artificial Intelligence Laboratory at the Graduate School of Information Science at the
Nara Institute of Science and Technology (NAIST). He received his Ph.D. in precision machinery engineering from the University
of Tokyo in 1991. He has conducted research on a theory of intelligent computer-aided design systems, in particular experimental
study and logical formalization of engineering design. He is also interested in multiagent architectures and ontologies for
knowledge base systems. 相似文献
17.
This paper introduces a new algorithm for mining association rules. The algorithm, RP, counts the itemsets of different sizes in the same pass of scanning over the database by dividing the database into m partitions. The total number of passes over the database is only (k + 2m − 2)/m, where k is the longest size among the itemsets. This is much less than k. 相似文献
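The claimed pass count is easy to check with a little arithmetic; `rp_passes` is a hypothetical helper for the formula, not part of the paper:

```python
def rp_passes(k, m):
    """Database scans claimed for the RP algorithm: (k + 2m - 2) / m,
    where k is the longest itemset size and m the number of partitions,
    versus k full scans for a one-size-per-pass Apriori-style algorithm."""
    return (k + 2 * m - 2) / m

# With k = 10 and m = 4 partitions, RP needs 4 scans instead of 10.
print(rp_passes(10, 4))   # -> 4.0
print(rp_passes(10, 1))   # m = 1 degenerates to k passes: 10.0
```

Note that with a single partition (m = 1) the formula reduces to k, so the saving comes entirely from counting several itemset sizes per scan across partitions.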
18.
Jose-Jesus Fernandez Jose-Roman Bilbao-Castro Roberto Marabini Jose-Maria Carazo Inmaculada Garcia 《New Generation Computing》2005,23(1):101-112
The present contribution describes a potential application of Grid Computing in Bioinformatics. High resolution structure
determination of biological specimens is critical in BioSciences to understanding the biological function. The problem is
computational intensive. Distributed and Grid Computing are thus becoming essential. This contribution analyzes the use of
Grid Computing and its potential benefits in the field of electron microscope tomography of biological specimens.
Jose-Jesus Fernandez, Ph.D.: He received his M.Sc. and Ph.D. degrees in Computer Science from the University of Granada, Spain, in 1992 and 1997, respectively.
He was a Ph.D. student at the Bio-Computing unit of the National Center for BioTechnology (CNB) from the Spanish National
Council of Scientific Research (CSIC), Madrid, Spain. He became an Assistant Professor in 1997 and, subsequently, Associate
Professor in 2000 in Computer Architecture at the University of Almeria, Spain. He is a member of the supercomputing-algorithms
research group. His research interests include high performance computing (HPC), image processing and tomography.
Jose-Roman Bilbao-Castro: He received his M.Sc. degree in Computer Science from the University of Almeria in 2001. He is currently a Ph.D. student at the BioComputing unit of the CNB (CSIC), through a CSIC Ph.D. grant in conjunction with the Dept. of Computer Architecture at the University of Malaga (Spain). His current research interests include tomography, HPC, and distributed and grid computing.
Roberto Marabini, Ph.D.: He received the M.Sc. (1989) and Ph.D. (1995) degrees in Physics from the University Autonoma de Madrid (UAM) and University
of Santiago de Compostela, respectively. He was a Ph.D. student at the BioComputing Unit at the CNB (CSIC). He worked at the
University of Pennsylvania and the City University of New York from 1998 to 2002. At present he is an Associate Professor
at the UAM. His current research interests include inverse problems, image processing and HPC.
Jose-Maria Carazo, Ph.D.: He received the M.Sc. degree from the Granada University, Spain, in 1981, and got his Ph.D. in Molecular Biology at the
UAM in 1984. He left for Albany, NY, in 1986, coming back to Madrid in 1989 to set up the BioComputing Unit of the CNB (CSIC).
He was involved in the Spanish Ministry of Science and Technology as Deputy General Director for Research Planning. Currently,
he keeps engaged in his activities at the CNB, the Scientific Park of Madrid and Integromics S.L.
Inmaculada Garcia, Ph.D.: She received her B.Sc. (1977) and Ph.D. (1986) degrees in Physics from the Complutense University of Madrid and the University of Santiago de Compostela, respectively. From 1977 to 1987 she was an Assistant Professor at the University of Granada, from 1987 to 1996 an Associate Professor at the University of Almeria, and since 1997 she has been a Full Professor and head of the Dept. of Computer Architecture. She heads the supercomputing-algorithms research group. Her research interest lies in HPC for irregular problems related to image processing, global optimization and matrix computation. 相似文献
19.
Arjen P. De Vries Menzo Windhouwer Peter M. G. Apers Martin Kersten 《New Generation Computing》2000,18(4):323-339
With the increasing popularity of the WWW, the main challenge in computer science has become content-based retrieval of multimedia
objects. Access to multimedia objects in databases has long been limited to the information provided in manually assigned
keywords. Now, with the integration of feature-detection algorithms in database systems software, content-based retrieval
can be fully integrated with query processing. We describe our experimentation platform under development, making database
technology available to multimedia. Our approach is based on the new notion of feature databases. Its architecture fully integrates
traditional query processing and content-based retrieval techniques.
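As a rough illustration of the idea (not the authors' platform), content-based retrieval reduces to extracting a feature vector per object with a detector and ranking stored objects by distance to the query's features. The histogram detector, file names, and pixel data below are invented:

```python
import math

def intensity_histogram(pixels, bins=4):
    """Toy feature detector: a normalized intensity histogram,
    standing in for the feature-detection algorithms the paper
    integrates into database query processing."""
    hist = [0] * bins
    for p in pixels:
        hist[min(int(p * bins), bins - 1)] += 1
    return [h / len(pixels) for h in hist]

def nearest(query_feat, db):
    """Content-based retrieval: return the stored object whose
    feature vector is closest (Euclidean distance) to the query."""
    def dist(feat):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(query_feat, feat)))
    return min(db, key=lambda item: dist(item[1]))

db = [("sunset.jpg", intensity_histogram([0.9, 0.8, 0.85, 0.7])),
      ("forest.jpg", intensity_histogram([0.2, 0.3, 0.1, 0.25]))]
query = intensity_histogram([0.75, 0.9, 0.8, 0.95])
print(nearest(query, db)[0])   # -> sunset.jpg
```

Storing the feature vectors as ordinary database attributes is what lets this kind of similarity ranking be handled by the regular query processor rather than by an external tool.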
Arjen P. de Vries, Ph.D.: He received his Ph.D. in Computer Science from the University of Twente in 1999, on the integration of content management
in database systems. He is especially interested in the new requirements on the design of database systems to support content-based
retrieval in multimedia digital libraries. He has continued to work on multimedia database systems as a postdoc at the CWI
in Amsterdam as well as University of Twente.
Menzo Windhouwer: He received his MSc in Computer Science and Management from the University of Amsterdam in 1997. Currently he is working
in the CWI Database Research Group on his Ph.D., which is concerned with multimedia indexing and retrieval using feature grammars.
Peter M.G. Apers, Ph.D.: He is a full professor in the area of databases at the University of Twente, the Netherlands. He obtained his MSc and Ph.D.
at the Free University, Amsterdam, and has been a visiting researcher at the University of California, Santa Cruz and Stanford
University. His research interests are query optimization in parallel and distributed database systems to support new application
domains, such as multimedia applications and WWW. He has served on the program committees of major database conferences: VLDB,
SIGMOD, ICDE, EDBT. In 1996 he was the chairman of the EDBT PC. In 2001 he will, for the second time, be the chairman of the
European PC of the VLDB. Currently he is coordinating Editor-in-Chief of the VLDB Journal, editor of Data & Knowledge Engineering,
and editor of Distributed and Parallel Databases.
Martin Kersten, Ph.D.: He received his Ph.D. in Computer Science from the Vrije Universiteit in 1985 for research in database security, after which he moved to CWI to establish the Database Research Group. Since 1994 he has been a professor at the University of Amsterdam. Currently he heads a department of 60 researchers in areas covering DBMS architectures, data mining, multimedia information systems, and quantum computing. In 1995 he co-founded Data Distilleries, specialized in data mining technology, and became a non-executive board member of the software company Consultdata Nederland. He has published ca. 130 scientific papers and is a member of the editorial boards of the VLDB Journal and Parallel and Distributed Systems. He acts as a reviewer for ESPRIT projects and is a trustee of the VLDB Endowment board. 相似文献
20.
Many algorithms in distributed systems assume that the size of a single message depends on the number of processors. In this paper, we assume in contrast that messages consist of a single bit. Our main goal is to explore how the one-bit translation of unbounded message algorithms can be sped up by pipelining. We consider two problems. The first is routing between two processors in an arbitrary network and in some special networks (ring, grid, hypercube). The second problem is coloring a synchronous ring with three colors. The routing problem is a very basic subroutine in many distributed algorithms; the three coloring problem demonstrates that pipelining is not always useful.
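The speedup pipelining buys for one-bit routing can be illustrated with a back-of-the-envelope round count. The two helpers are illustrative assumptions (a synchronous path where one bit crosses one link per round), not the paper's analysis:

```python
def rounds_sequential(bits, hops):
    """Without pipelining: each hop forwards the entire B-bit
    message before the next hop starts, so B rounds per link."""
    return bits * hops

def rounds_pipelined(bits, hops):
    """With pipelining: each intermediate processor forwards bit i
    while receiving bit i+1, so the bits stream through the path."""
    return hops + bits - 1

# Routing a 100-bit message across 10 links:
print(rounds_sequential(100, 10))   # -> 1000
print(rounds_pipelined(100, 10))    # -> 109
```

The gap grows with the message length, which is why translating unbounded-message algorithms into one-bit messages benefits so much from pipelining; the three-coloring result in the paper shows the technique does not always help.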
Amotz Bar-Noy received his B.Sc. degree in Mathematics and Computer Science in 1981, and his Ph.D. degree in Computer Science in 1987, both from the Hebrew University of Jerusalem, Israel. Between 1987 and 1989 he was a post-doctoral fellow in the Department of Computer Science at Stanford University. He is currently a visiting scientist at the IBM Thomas J. Watson Research Center. His current research interests include the theoretical aspects of distributed and parallel computing, computational complexity and combinatorial optimization.
Joseph (Seffi) Naor received his B.A. degree in Computer Science in 1981 from the Technion, Israel Institute of Technology. He received his M.Sc. in 1983 and Ph.D. in 1987 in Computer Science, both from the Hebrew University of Jerusalem, Israel. Between 1987 and 1988 he was a post-doctoral fellow at the University of Southern California, Los Angeles, CA. Since 1988 he has been a post-doctoral fellow in the Department of Computer Science at Stanford University. His research interests include combinatorial optimization, randomized algorithms, computational complexity and the theoretical aspects of parallel and distributed computing.
Moni Naor received his B.A. in Computer Science from the Technion, Israel Institute of Technology, in 1985, and his Ph.D. in Computer Science from the University of California at Berkeley in 1989. He is currently a visiting scientist at the IBM Almaden Research Center. His research interests include computational complexity, data structures, cryptography, and parallel and distributed computation.
Supported in part by a Weizmann fellowship and by contract ONR N00014-85-C-0731.
Supported by contract ONR N00014-88-K-0166 and by a grant from Stanford's Center for Integrated Systems. This work was done while the author was a post-doctoral fellow at the University of Southern California, Los Angeles, CA.
This work was done while the author was with the Computer Science Division, University of California at Berkeley, and supported by NSF grant DCR 85-13926. 相似文献