1.
Jose-Jesus Fernandez Jose-Roman Bilbao-Castro Roberto Marabini Jose-Maria Carazo Inmaculada Garcia 《New Generation Computing》2005,23(1):101-112
The present contribution describes a potential application of Grid Computing in Bioinformatics. High-resolution structure determination of biological specimens is critical in the biosciences for understanding biological function. The problem is computationally intensive, so distributed and Grid Computing are becoming essential. This contribution analyzes the use of Grid Computing and its potential benefits in the field of electron microscope tomography of biological specimens.
Jose-Jesus Fernandez, Ph.D.: He received his M.Sc. and Ph.D. degrees in Computer Science from the University of Granada, Spain, in 1992 and 1997, respectively.
He was a Ph.D. student at the Bio-Computing unit of the National Center for BioTechnology (CNB) of the Spanish National Council of Scientific Research (CSIC), Madrid, Spain. He became an Assistant Professor in 1997 and, subsequently, Associate
Professor in 2000 in Computer Architecture at the University of Almeria, Spain. He is a member of the supercomputing-algorithms
research group. His research interests include high performance computing (HPC), image processing and tomography.
Jose-Roman Bilbao-Castro: He received his M.Sc. degree in Computer Science from the University of Almeria in 2001. He is currently a Ph.D. student
at the BioComputing unit of the CNB (CSIC) through a CSIC Ph.D. grant in conjunction with the Department of Computer Architecture at the University of Malaga (Spain). His current research interests include tomography, HPC, and distributed and grid computing.
Roberto Marabini, Ph.D.: He received the M.Sc. (1989) and Ph.D. (1995) degrees in Physics from the Universidad Autónoma de Madrid (UAM) and the University
of Santiago de Compostela, respectively. He was a Ph.D. student at the BioComputing Unit at the CNB (CSIC). He worked at the
University of Pennsylvania and the City University of New York from 1998 to 2002. At present he is an Associate Professor
at the UAM. His current research interests include inverse problems, image processing and HPC.
Jose-Maria Carazo, Ph.D.: He received the M.Sc. degree from the University of Granada, Spain, in 1981, and his Ph.D. in Molecular Biology from the UAM in 1984. He left for Albany, NY, in 1986, returning to Madrid in 1989 to set up the BioComputing Unit of the CNB (CSIC). He served in the Spanish Ministry of Science and Technology as Deputy General Director for Research Planning. Currently, he remains engaged in his activities at the CNB, the Scientific Park of Madrid, and Integromics S.L.
Inmaculada Garcia, Ph.D.: She received her B.Sc. (1977) and Ph.D. (1986) degrees in Physics from the Complutense University of Madrid and the University of Santiago de Compostela, respectively. From 1977 to 1987 she was an Assistant Professor at the University of Granada, from 1987 to 1996 an Associate Professor at the University of Almeria, and since 1997 she has been a Full Professor and head of the Department of Computer Architecture. She is head of the supercomputing-algorithms research group. Her research interest lies in HPC for irregular problems related to image processing, global optimization, and matrix computation.
2.
Yago Saez Pedro Isasi Javier Segovia Julio C. Hernandez 《New Generation Computing》2005,23(2):129-142
Evolutionary Computation encompasses computational models that follow a biological evolution metaphor. The success of these techniques is based on the maintenance of genetic diversity, which normally requires working with large populations. However, it is not always possible to deal with such large populations, for instance when the fitness values must be estimated by a human being (Interactive Evolutionary Computation, IEC). This work introduces a new algorithm which performs very well with a very small number of individuals (micropopulations), which speeds up convergence when solving problems with complex evaluation functions. The new algorithm is compared with the canonical genetic algorithm in order to validate its efficiency. Two experimental frameworks have been chosen: table and logotype designs. An objective evaluation measure has been proposed to avoid user interaction in the experiments. In both cases the results show the efficiency of the new algorithm in terms of quality of solutions and convergence speed, two key issues in decreasing user fatigue.
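As a rough illustration of the micropopulation idea only (not the authors' algorithm, whose details are not given in this abstract), the sketch below runs a genetic algorithm on a tiny population, keeps the best individual via elitism, and re-seeds the population whenever diversity collapses. The `micro_ga` function, its parameters, and the onemax fitness stand-in are all illustrative assumptions:

```python
import random

def micro_ga(fitness, n_bits=16, pop_size=5, generations=200, seed=0):
    """Minimal micro-GA sketch: tiny population, elitism, and a
    restart (re-seeding) step whenever diversity collapses."""
    rng = random.Random(seed)
    rand_ind = lambda: [rng.randint(0, 1) for _ in range(n_bits)]
    pop = [rand_ind() for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        # Tournament selection + uniform crossover.
        new_pop = [best[:]]  # elitism: always keep the best individual
        while len(new_pop) < pop_size:
            a, b = (max(rng.sample(pop, 2), key=fitness) for _ in range(2))
            new_pop.append([x if rng.random() < 0.5 else y
                            for x, y in zip(a, b)])
        pop = new_pop
        best = max(pop, key=fitness)
        # Restart: if the population has converged, keep the best
        # individual and re-seed the rest at random.
        if all(ind == pop[0] for ind in pop):
            pop = [best[:]] + [rand_ind() for _ in range(pop_size - 1)]
    return best

# Maximize the number of ones ("onemax") as a stand-in for a costly
# (e.g. interactive, human-evaluated) fitness function.
solution = micro_ga(sum)
print(sum(solution))
```

Classic micro-GAs rely on exactly this restart step, rather than mutation, to recover the diversity that a five-individual population cannot maintain on its own.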
Yago Saez: He received the Computer Engineering degree from the Universidad Pontificia de Salamanca, Spain, in 1999. He is now a Ph.D. student and works as an assistant professor in the EVANNAI Group at the Computer Science Department of Universidad Carlos III de Madrid, Spain. His main research areas encompass interactive evolutionary computation, design applications, and optimization problems.
Pedro Isasi, Ph.D.: He received his Computer Science degree and Ph.D. degree from the Universidad Politécnica de Madrid (UPM), Spain, in 1994. He is now a professor in the EVANNAI Group at the Computer Science Department of Universidad Carlos III de Madrid, Spain. His main research areas are Machine Learning, Evolutionary Computation, Neural Networks, and applications to optimization problems.
Javier Segovia, Ph.D.: He is a physicist and received his Ph.D. degree in Computer Science (with honours) from the Universidad Politécnica de Madrid (UPM). He is currently Dean of the UPM School of Computer Science, and is editor and/or author of more than 70 scientific publications in the fields of genetic algorithms, data and web mining, artificial intelligence, and intelligent interfaces.
Julio C. Hernandez, Ph.D.: He received a degree in Mathematics and a Ph.D. degree in Computer Science. His main research area is artificial intelligence applied to cryptography and network security. His unofficial hobbies are chess and Go. Currently, he is working as an invited researcher at INRIA, France.
3.
Peter W. Arzberger Abbas Farazdel Akihiko Konagaya Larry Ang Shinji Shimojo Rick L. Stevens 《New Generation Computing》2004,22(2):97-110
Over the past quarter century, two revolutions, one in biomedicine, the other in computing and information technology leading
to cyberinfrastructure, have made the largest advances and the most significant impacts on science, technology, and society.
The interface between these areas is rich with opportunity for major advances. The Life Sciences Grid Research Group (LSG-RG)
of the Global Grid Forum recognized the opportunities and needs to bring the communities together to ensure the cyberinfrastructure
will be constructed for the benefit of science. This article gives an overview of the area, the activities of the LSG-RG,
and the minisymposium organized by LSG-RG, and introduces the papers in this Special Issue of New Generation Computing.
Peter Arzberger, Ph.D.: He is the Director of Life Sciences Initiatives, University of California San Diego; Director of the National Biomedical
Computation Resource (http://nbcr.ucsd.edu), funded by the National Center of Research Resource of NIH; and the Chair of the
Pacific Rim Application and Grid Middleware Assembly (http://www.pragma-grid.edu), an organization of 20 institutions around
the pacific rim whose mission is to establish sustained collaborations and to advance the use of grid technologies in applications.
He serves on the US National CODATA Committee and the National Advisory Board of the US Long Term Ecological Research. His
hobby is working on Lloyds.
Abbas Farazdel, Ph.D.: He is a Senior Scientist and an IT Solution Strategist in the Advanced Technologies unit at the IBM Life Sciences. Previously,
Dr. Farazdel held several positions at IBM, including Cluster System Strategist; Data Warehousing and Data Mining Solutions
Implementation Manager; and High Performance Computing Consultant. Abbas is the co-chair of the Global Grid Forum (GGF) Life
Sciences Grids Research Group. He serves on the Scientific Board of the European Health Grid and the Mid Hudson Technology
Council of New York. Abbas received his Ph.D. in Quantum Chemistry and M.Sc. in Computational Physics from the University
of Massachusetts concurrently.
Akihiko Konagaya, Dr. Eng.: He is Project Director of the Bioinformatics Group, RIKEN Genomic Sciences Center. He received his B.S. and M.S. in Informatics Science from the Tokyo Institute of Technology in 1978 and 1980, and joined NEC Corporation in 1980, the Japan Advanced Institute of Science and Technology in 1997, and RIKEN GSC in 2003. His research covers a wide area, from computer architectures to bioinformatics. He has been heavily involved in the Open Bioinformatics Grid project since 2002.
Larry Ang: As the Project Director at the Bioinformatics Institute (BII), he is in charge of major international collaborative projects on biomedical grids between BII and other research organizations (http://web.bii.a-star.edu.sg/ larry/). In particular, he works actively with bodies such as PRAGMA, where he serves on the Steering Committee. He is also the Secretary of the Life Sciences Grid Research Group of the GGF (Global Grid Forum). He serves on the Gelato Federation; Gelato was started by HP Labs and promotes open-source software on Linux platforms.
Shinji Shimojo, Ph.D.: He received his M.E. and Ph.D. degrees from Osaka University in 1983 and 1986, respectively. He was an Assistant Professor
with the Department of Information and Computer Sciences, Faculty of Engineering Science at Osaka University from 1986, and
an Associate Professor with the Computation Center from 1991 to 1998. During that period, he also worked as a visiting researcher at the University of California, Irvine for a year. He has been a Professor with the Cybermedia Center (formerly the Computation Center) at Osaka University since 1998. His current research focuses on a wide variety of multimedia applications, peer-to-peer
communication networks, ubiquitous network systems and Grid technologies. He is a member of ACM, IEEE and IEICE.
Rick L. Stevens, Ph.D.: He is Professor, University of Chicago; director, Mathematics and Computer Science Division/Argonne National Laboratory;
director, ANL/UC Computation Institute; project director for National Science Foundation supported TeraGrid project; head
of the Argonne/Chicago Futures Lab. He is interested in the development of innovative tools and techniques that enable computational
scientists to solve important large-scale problems effectively on advanced scientific computers. His research focuses on three
principal areas: advanced collaboration and visualization environments, high-performance computer architectures (including
Grids), and computational problems in life sciences and systems biology. He teaches courses on computer architecture, collaboration
technology, virtual reality, parallel computing, and computational science.
4.
Chunming Hu Yanmin Zhu Jinpeng Huai Yunhao Liu Lionel M. Ni 《Knowledge and Information Systems》2007,12(1):55-75
Information services play a key role in grid systems, handling resource discovery and management. Existing information service architectures suffer from poor scalability, long search response times, and large traffic overhead. In this paper, we propose a service club mechanism, called S-Club, for efficient service discovery. In S-Club, an overlay is built on the existing Grid Information Service (GIS) mesh network of CROWN, so that GISs are organized into service clubs. Each club serves a certain type of service, while each GIS may join one or more clubs. S-Club is adopted in our CROWN Grid, and its performance is evaluated by comprehensive simulations. The results show that the S-Club scheme significantly improves search performance and outperforms existing approaches.
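The club mechanism described above can be sketched in a few lines: one club per service type, each GIS free to join several clubs, so a query touches only the GISs that actually serve that type instead of flooding all of them. The class and method names below (`ClubIndex`, `GisNode`, `discover`) are hypothetical simplifications, not CROWN's actual interfaces:

```python
from collections import defaultdict

class GisNode:
    """A Grid Information Service node holding locally registered services."""
    def __init__(self, name):
        self.name = name
        self.services = defaultdict(list)  # service type -> endpoints

    def register(self, service_type, endpoint):
        self.services[service_type].append(endpoint)

class ClubIndex:
    """One 'club' per service type; each GIS may join several clubs."""
    def __init__(self):
        self.clubs = defaultdict(set)

    def join(self, gis, service_type):
        self.clubs[service_type].add(gis)

    def discover(self, service_type):
        # Query only the club members, not every GIS in the mesh.
        hits = []
        for gis in self.clubs[service_type]:
            hits.extend(gis.services[service_type])
        return hits

index = ClubIndex()
g1, g2, g3 = GisNode("gis-1"), GisNode("gis-2"), GisNode("gis-3")
g1.register("compute", "tcp://a:9000")
g3.register("compute", "tcp://b:9000")
g2.register("storage", "tcp://c:9000")
for gis in (g1, g3):
    index.join(gis, "compute")
index.join(g2, "storage")
print(sorted(index.discover("compute")))
```

The scalability gain comes from the fan-out: a "compute" query above visits two GISs rather than three, and the gap widens as the mesh grows.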
Chunming Hu is a research staff member in the Institute of Advanced Computing Technology at the School of Computer Science and Engineering, Beihang University, Beijing, China. He received his B.E. and M.E. from the Department of Computer Science and Engineering at Beihang University, and his Ph.D. from the School of Computer Science and Engineering of Beihang University, Beijing, China, in 2005. His research interests include peer-to-peer and grid computing, distributed systems, and software architectures.
Yanmin Zhu is a Ph.D. candidate in the Department of Computer Science, Hong Kong University of Science and Technology. He received his
B.S. degree in computer science from Xi’an Jiaotong University, Xi’an, China, in 2002. His research interests include grid
computing, peer-to-peer networking, pervasive computing and sensor networks. He is a member of the IEEE and the IEEE Computer
Society.
Jinpeng Huai is a Professor and Vice President of Beihang University. He serves as Chief Scientist on the Steering Committee for the Advanced Computing Technology Subject of the National High-Tech Program (863). He is a member of the Consulting Committee of the Central Government's Information Office, and Chairman of the Expert Committee of both the National e-Government Engineering Taskforce and the National e-Government Standard Office. Dr. Huai and his colleagues are leading key e-Science projects of the National Natural Science Foundation of China (NSFC)
and Sino-UK. He has authored over 100 papers. His research interests include middleware, peer-to-peer (P2P), grid computing,
trustworthiness and security.
Yunhao Liu received his B.S. degree in Automation from Tsinghua University, China, in 1995, an M.A. degree from Beijing Foreign Studies University, China, in 1997, and M.S. and Ph.D. degrees in computer science and engineering from Michigan State University in 2003 and 2004, respectively. He is now an assistant professor in the Department of Computer Science and
Engineering at Hong Kong University of Science and Technology. His research interests include peer-to-peer computing, pervasive
computing, distributed systems, network security, grid computing, and high-speed networking. He is a senior member of the
IEEE Computer Society.
Lionel M. Ni is chair professor and head of the Computer Science and Engineering Department at Hong Kong University of Science and Technology.
Lionel M. Ni received the Ph.D. degree in electrical and computer engineering from Purdue University, West Lafayette, Indiana,
in 1980. He was a professor of computer science and engineering at Michigan State University from 1981 to 2003, where he received
the Distinguished Faculty Award in 1994. His research interests include parallel architectures, distributed systems, high-speed
networks, and pervasive computing. A fellow of the IEEE and the IEEE Computer Society, he has chaired many professional conferences
and has received a number of awards for authoring outstanding papers.
5.
Wilfred W. Li Robert W. Byrnes Jim Hayes Adam Birnbaum Vicente M. Reyes Atif Shahab Coleman Mosley Dmitry Pekurovsky Greg B. Quinn Ilya N. Shindyalov Henri Casanova Larry Ang Fran Berman Peter W. Arzberger Mark A. Miller Philip E. Bourne 《New Generation Computing》2004,22(2):127-136
The ongoing global effort of genome sequencing is making large scale comparative proteomic analysis an intriguing task. The
Encyclopedia of Life (EOL; http://eol.sdsc.edu) project aims to provide current functional and structural annotations for
all available proteomes, a computational challenge never seen before in biology. Using an integrative genome annotation pipeline
(iGAP), we have produced 3D models and functional annotations for more than 100 proteomes thus far. This process is greatly
facilitated by grid compute resources, and especially by the development of grid application execution environments. AppLeS
(Application-Level Scheduling) Parameter Sweep Template (APST) has been adopted by the EOL project as a mediator to grid middleware.
APST has made the annotation process much more efficient, highly automated and scalable. Currently we are building a domain-specific
bioinformatics workflow management system (BWMS) on top of APST, which further streamlines grid deployment of life science
applications. With these developments in mind, we discuss some common problems and expectations of grid computing for high
throughput proteomics.
Henri Casanova, Ph.D.: He is an adjunct Professor of Computer Science and Engineering at the University of California, San Diego (UCSD), a Research
Scientist at the San Diego Supercomputer Center, and the founder and director of the Grid Research and Development Laboratory
(GRAIL) at UCSD. His research interests are in the area of parallel, distributed, Grid and Internet computing. He obtained
his B.S. from the Ecole Nationale Supérieure d’Electronique, d’Electrotechnique, d’Informatique et d’Hydraulique de Toulouse,
France in 1993, his M.S. from the Université Paul Sabatier, Toulouse, France in 1994, and his Ph.D. from the University of
Tennessee, Knoxville in 1998.
Francine Berman, Ph.D.: She is a Professor and High Performance Computing Endowed Chair at U.C. San Diego, Director of the San Diego Supercomputer
Center and a Fellow of the ACM. Her research over two decades has focused on High Performance and Grid Computing, in particular
in the areas of programming environments, adaptive middleware, scheduling and performance prediction. She has served on numerous
editorial boards, steering committees, and program and conference committees in the areas of Parallel and Grid computing.
She is one of the Principal Investigators of the NSF-supported TeraGrid, and directs NSF’s National Partnership for Advanced
Computing Infrastructure (NPACI).
Peter Arzberger, Ph.D.: He is the Director of Life Sciences Initiatives, University of California San Diego, Director of the National Biomedical
Computation Resource (http://nbcr.ucsd.edu), funded by the National Center of Research Resource of NIH and the Chair of the
Pacific Rim Application and Grid Middleware Assembly (http://www.pragma-grid.edu), an organization of 20 institutions around
the pacific rim whose mission is to establish sustained collaborations and to advance the use of grid technologies in applications.
He serves on the US National CODATA Committee and the National Advisory Board of the US Long Term Ecological Research. His
hobby is working on Lloyds.
Mark A. Miller, Ph.D.: He is Program Coordinator for the Integrative BioSciences Program at San Diego Supercomputer Center. He received his Ph.D.
in Biochemistry from Purdue University in 1984. His research interests have slowly moved towards computer driven analyses
and quantitative biology, and culminated in managing the BioInformatics Core of the Joint Center for Structural Biology where
he helped to plan and implement the informatics solutions for high throughput crystallography. He is currently working on
the specification, design and deployment of tools to enable biology research.
Philip Bourne, Ph.D.: He is a Professor of Pharmacology at the University of California, San Diego and co-director of the Protein Data Bank (PDB).
He is immediate past President of the International Society for Computational Biology, an Associate Editor of Bioinformatics
and on the Editorial Board of several other journals. He received his B.Sc. and Ph.D. in chemistry at the Flinders University,
South Australia. His research interests include bioinformatics, particularly structural bioinformatics. This implies algorithms,
metalanguages, biological databases, biological query languages and visualization with special interest in cell signaling
and apoptosis. Major projects ongoing in the Bourne Lab include the PDB, Encyclopedia of Life (EOL), Systematic Protein Annotation
and Modeling (SPAM), and the Tree of Life. Bourne’s personal interests include fishing, tennis, squash, walking, skiing, sports
cars, motor bikes, and writing.
6.
In this paper, we propose a new topology called the Dual Torus Network (DTN), which is constructed by adding interleaved edges to a torus. The DTN has many advantages over meshes and tori, such as better
extendibility, smaller diameter, higher bisection width, and robust link connectivity. The most important property of the
DTN is that it can be partitioned into sub-tori of different sizes. This is not possible for mesh and torus-based systems.
The DTN is investigated with respect to allocation, embedding, and fault-tolerant embedding. It is shown that the sub-torus
allocation problem in the DTN reduces to the sub-mesh allocation problem in the torus. With respect to embedding, it is shown
that a topology that can be embedded into a mesh with dilation δ can also be embedded into the DTN with less dilation. In
fault-tolerant embedding, a fault-tolerant embedding method based on rotation, column insertion, and column skip is proposed.
This method can embed any rectangular grid into its optimal square DTN when the number of faulty nodes is fewer than the number
of unused nodes. In conclusion, the DTN is a scalable topology well-suited for massively parallel computation.
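The dilation mentioned in the embedding result above is simply the longest host path that any single guest edge gets stretched over. A minimal sketch of computing it, assuming a 2D torus host and an identity placement of a mesh (the function names are illustrative, not from the paper):

```python
from itertools import product

def torus_distance(a, b, dims):
    # Shortest torus distance: per dimension, the shorter of the
    # two directions around the ring.
    return sum(min(abs(x - y), d - abs(x - y)) for x, y, d in zip(a, b, dims))

def dilation(guest_edges, placement, dims):
    """Dilation of an embedding: the longest host path any guest edge maps to."""
    return max(torus_distance(placement[u], placement[v], dims)
               for u, v in guest_edges)

# Identity placement of a 4x4 mesh onto a 4x4 torus: every mesh edge
# maps onto a torus edge, so the dilation is 1.
dims = (4, 4)
nodes = list(product(range(4), range(4)))
mesh_edges = [((x, y), (x + dx, y + dy))
              for x, y in nodes for dx, dy in ((1, 0), (0, 1))
              if x + dx < 4 and y + dy < 4]
placement = {n: n for n in nodes}
print(dilation(mesh_edges, placement, dims))
```

A dilation-δ mesh embedding composed with such a placement gives the kind of bound the abstract refers to: any topology embeddable into a mesh with dilation δ carries over to the host with no worse stretch.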
Sang-Ho Chae, M.S.: He received the B.S. degree in Computer Science and Engineering from the Pohang University of Science and Technology (POSTECH) in 1994, and the M.E. in 1996. Since 1996, he has worked as an Associate Research Engineer in the Central R&D Center of SK Telecom Co. Ltd. He took part in developing the SK Telecom Short Message Server, which now has over 3.5 million subscribers, and an Advanced Paging System for which he designed and implemented high-availability concepts. His research interests are Fault Tolerance, Parallel Processing, and Parallel Topologies.
Jong Kim, Ph.D.: He received the B.S. degree in Electronic Engineering from Hanyang University, Seoul, Korea, in 1981, the M.S. degree in
Computer Science from the Korea Advanced Institute of Science and Technology, Seoul, Korea, in 1983, and the Ph.D. degree
in Computer Engineering from Pennsylvania State University, U.S.A., in 1991. He is currently an Associate Professor in the
Department of Computer Science and Engineering, Pohang University of Science and Technology, Pohang, Korea. Prior to this
appointment, he was a research fellow in the Real-Time Computing Laboratory of the Department of Electrical Engineering and
Computer Science at the University of Michigan from 1991 to 1992. From 1983 to 1986, he was a System Engineer in the Korea
Securities Computer Corporation, Seoul, Korea. His major areas of interest are Fault-Tolerant Computing, Performance Evaluation,
and Parallel and Distributed Computing.
Sung Je Hong, Ph.D.: He received the B.S. degree in Electronics Engineering from Seoul National University, Korea, in 1973, the M.S. degree in
Computer Science from Iowa State University, Ames, U.S.A., in 1979, and the Ph.D. degree in Computer Science from the University
of Illinois, Urbana, U.S.A., in 1983. He is currently a Professor in the Department of Computer Science and Engineering, Pohang
University of Science and Technology, Pohang, Korea. From 1983 to 1989, he was a staff member of Corporate Research and Development,
General Electric Company, Schenectady, NY, U.S.A. From 1975 to 1976, he was with Oriental Computer Engineering, Korea, as
a Logic Design Engineer. His current research interest includes VLSI Design, CAD Algorithms, Testing, and Parallel Processing.
Sunggu Lee, Ph.D.: He received the B.S.E.E. degree with highest distinction from the University of Kansas, Lawrence, in 1985 and the M.S.E.
and Ph.D. degrees from the University of Michigan, Ann Arbor, in 1987 and 1990, respectively. He is currently an Associate
Professor in the Department of Electronic and Electrical Engineering at the Pohang University of Science and Technology (POSTECH),
Pohang, Korea. Prior to this appointment, he was an Associate Professor in the Department of Electrical Engineering at the
University of Delaware in Newark, Delaware, U.S.A. From June 1997 to July 1998, he spent one year as a Visiting Scientist
at the IBM T. J. Watson Research Center. His research interests are in Parallel, Distributed, and Fault-Tolerant Computing.
Currently, his main research focus is on the high-level and low-level aspects of Inter-Processor Communications for Parallel
Computers.
7.
A. Tchernykh A. Cristóbal-Salas V. Kober I. A. Ovseevich 《Pattern Recognition and Image Analysis》2007,17(3):390-398
In this paper, a partial evaluation technique to reduce the communication costs of distributed image processing is presented. It combines the application of incomplete structures and partial evaluation with classical program optimizations such as constant propagation, loop unrolling, and dead-code elimination. Through a detailed performance analysis, we establish conditions under which the technique is beneficial.
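Although the paper's transformations target distributed image-processing code, the underlying idea can be shown in miniature: specializing a convolution for a kernel known ahead of time propagates its constant taps, unrolls the tap loop, and eliminates the dead multiplications by zero. All names below are illustrative, not the authors' system:

```python
def conv(signal, kernel):
    # General code: loop over every kernel tap for every output sample.
    n, k = len(signal), len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(n - k + 1)]

def specialize_conv(kernel):
    """Hand-done partial evaluation: with the kernel known at
    'compile time', keep only the nonzero taps (dead-code
    elimination) and skip multiplications by one."""
    terms = [(j, w) for j, w in enumerate(kernel) if w != 0]
    k = len(kernel)
    def conv_spec(signal):
        return [sum(signal[i + j] if w == 1 else signal[i + j] * w
                    for j, w in terms)
                for i in range(len(signal) - k + 1)]
    return conv_spec

smooth = specialize_conv([1, 0, 1])   # residual program for this kernel
sig = [1, 2, 3, 4, 5]
print(smooth(sig))
print(conv(sig, [1, 0, 1]))
```

In a distributed setting the same move pays twice: the residual program does less arithmetic, and constants folded in at specialization time no longer need to be communicated at run time.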
Andrei Tchernykh received his Ph.D. degree in computer science from the Institute of Precise Mechanics and Computer Technology of the Russian Academy of Sciences (RAS), Russia, in 1986. From 1975 to 1995 he was with the Institute of Precise Mechanics and Computer Technology of the RAS, the Scientific Computer Center of the RAS, and the Institute for High Performance Computer Systems of the RAS, Moscow, Russia. Since 1995 he has been working in the Computer Science Department at the CICESE Research Center, Ensenada, Baja California, Mexico. His main interests include cluster and Grid computing, incomplete information processing, and on-line scheduling.
Vitaly Kober obtained his MS degree in Applied Mathematics from the Air-Space University of Samara (Russia) in 1984, and his PhD degree
in 1992 and Doctoral degree in 2004 in Image Processing from the Institute of Information Transmission Problems, Russian Academy
of Sciences. Now he is a titular researcher at the Centro de Investigación Científica y de Educación Superior de Ensenada (CICESE), México. His research interests include signal and image processing and pattern recognition.
Alfredo Cristóbal-Salas received his Ph.D. degree in computer science from the Computer Science Department at the CICESE Research Center, Ensenada,
Baja California, México. Now he is a researcher at School of Chemistry Sciences and Engineering, University of Baja California,
Tijuana, B.C., Mexico. His main interests include cluster and Grid computing, incomplete information processing, and online scheduling.
Iosif A. Ovseevich graduated from the Moscow Electrotechnical Institute of Telecommunications. He received his candidate's degree in 1953 and his doctoral degree in information theory in 1972. At present he is an Emeritus Professor at the Institute of Information Transmission Problems of the Russian Academy of Sciences. His research interests include information theory, signal processing, and expert systems. He is a member of the IEEE and the Popov Radio Society.
8.
Khayri A. M. Ali 《New Generation Computing》1998,16(2):201-221
This paper presents and empirically evaluates a generational real-time garbage collection scheme, which combines Baker's real-time scheme with a simple generational scheme by Andrew W. Appel.
Real World Computing Partnership.
Khayri A. M. Ali, Ph.D.: He currently works as Dean of the Faculty of Computer Science at October University for Modern Sciences and Arts, Egypt. He received his B.Sc. (1970) in Electronics and his M.Sc. (1977) in Automatic Control, both from Egypt. He received his Ph.D. in Computer Systems from the Royal Institute of Technology, Stockholm, in 1984. His research interests are in developing parallel and distributed logic, functional, object-oriented, and constraint programming systems.
9.
Multimedia records of meetings contain a rich amount of project information. However, finding detailed information in a meeting
record can be difficult because there is no structural information other than time to aid navigation. In this paper we survey
and discuss various ways of indexing meeting records by categorizing existing approaches along multiple dimensions. We then
introduce the notion of creating indices based upon user interaction with domain-specific artifacts. As an example to illustrate
the use of domain-specific artifacts to create meaningful pointers into the meeting record, we describe capture and access
in a prototype system that supports general meeting artifacts.
Werner Geyer is a Research Staff Member at the IBM T.J. Watson Research Lab in Cambridge, Massachusetts, in the Collaborative User Experience
Group (CUE). He is leading research projects in the areas of activity-centric collaboration, ad hoc collaboration, and virtual
meetings. His research focuses on the intersections of egocentric vs. public, informal vs. formal, unstructured vs. structured
types of collaboration. Before joining CUE, Werner was a Post Doc at IBM Research in New York where he worked on new web-based
team support technologies and on capture and access of distributed meetings. He holds a Ph.D. in Computer Science from the
University of Mannheim, Germany. He also earned an M.S. in Information Technology, which combines Computer Science and Business
Administration, from the University of Mannheim.
Heather Richter is an Assistant Professor in the Department of Software and Information Systems at the University of North Carolina at Charlotte.
She received her Ph.D. in Computer Science from the Georgia Institute of Technology in 2005, and her B.S. in Computer Science
from Michigan State University in 1995. Her research interests are in the areas of Human Computer Interaction, Computer Supported
Cooperative Work, Ubiquitous Computing, and Software Engineering.
Gregory D. Abowd is an Associate Professor in the College of Computing at the Georgia Institute of Technology. He leads the Ubiquitous Computing
Research Group in examining issues involved in building and evaluating ubiquitous computing applications that impact our everyday
lives. Dr. Abowd initiated, and now co-directs, the Aware Home Research Initiative at Georgia Tech. He is an Associate Editor
for the Human Computer Interaction Journal and the IEEE Pervasive Computing Magazine. He received a B.S. in Mathematics in
1986 from the University of Notre Dame and the degrees of M.Sc. in 1987 and D.Phil. in 1991 in Computation from Oxford University.
10.
The leakage current of CMOS circuits increases dramatically as technology scales down and has become a critical issue in high-performance systems. Subthreshold, gate, and reverse-biased junction band-to-band tunneling (BTBT) leakage are considered the three main determinants of total leakage current. Until now, accurately estimating the leakage current of large-scale circuits within an acceptable time has remained unsolved, even though accurate leakage models have been widely discussed. In this paper, the authors first examine the stack effect in CMOS technology and propose a new, simple gate-level leakage current model. A table-lookup-based total leakage current simulator is then built from this model. To validate the simulator, accurate leakage currents are simulated at the circuit level using the popular simulator HSPICE for comparison. Further studies, such as maximum leakage current estimation, minimum leakage current generation, and a high-level average leakage current macromodel, are presented in detail. Experiments on the ISCAS85 and ISCAS89 benchmarks demonstrate that the two proposed leakage current estimation methods are very accurate and efficient.
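The table-lookup idea can be sketched as follows. The gate types and leakage values below are invented for illustration (a real flow would characterize each cell with a circuit-level simulator such as HSPICE); the point is that keying the table on each gate's input vector captures the stack effect, since a gate with more series-off transistors leaks less:

```python
# Hypothetical pre-characterized table: (gate type, input vector) -> nA.
LEAKAGE_NA = {
    ("nand2", (0, 0)): 5.0,   # both pull-down transistors off: stack effect
    ("nand2", (0, 1)): 12.0,
    ("nand2", (1, 0)): 10.0,
    ("nand2", (1, 1)): 30.0,  # single off-device in the pull-up: worst case
    ("inv",   (0,)):   8.0,
    ("inv",   (1,)):   15.0,
}

def total_leakage(netlist, input_state):
    """Table-lookup leakage estimate: sum each gate's pre-characterized
    leakage under its current input vector."""
    return sum(LEAKAGE_NA[(gate, tuple(input_state[g_id]))]
               for g_id, gate in netlist.items())

netlist = {"g1": "nand2", "g2": "inv", "g3": "nand2"}
state = {"g1": (0, 0), "g2": (1,), "g3": (1, 1)}
print(total_leakage(netlist, state))
```

Because each gate costs a single dictionary lookup, the estimate scales linearly with circuit size, which is why such simulators can handle benchmarks far beyond what transistor-level simulation allows.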
11.
12.
This paper introduces a new algorithm for mining association rules. The algorithm, RP, counts itemsets of different sizes in the same pass of scanning over the database by dividing the database into m partitions. The total number of passes over the database is only (k + 2m - 2)/m, where k is the longest size among the itemsets; this is much less than k.
13.
D. J. Mavriplis Raja Das Joel Saltz R. E. Vermeland 《The Journal of supercomputing》1995,8(4):329-344
An efficient three-dimensional unstructured Euler solver is parallelized on a CRAY Y-MP C90 shared-memory computer and on an Intel Touchstone Delta distributed-memory computer. This paper relates the experiences gained and describes the software tools and hardware used in this study. Performance comparisons between the two differing architectures are made. This work was sponsored in part by ARPA (NAG-1-1485) and by NASA Contract No. NAS1-19480 while authors Mavriplis, Saltz, and Das were in residence at ICASE, NASA Langley Research Center, Hampton, Virginia. This research was performed in part using the Intel Touchstone Delta System operated by Caltech on behalf of the Concurrent Supercomputing Consortium. Access to this facility was provided by NASA Langley Research Center and the Center for Research in Parallel Processing. The content of the information does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.
14.
An Efficient VLSI Architecture for Motion Compensation of AVS HDTV Decoder
In Part 2 of the Advanced Audio Video coding Standard (AVS-P2), many efficient coding tools are adopted in motion compensation, such as new motion vector prediction, symmetric matching, quarter-precision interpolation, etc. However, these new features enormously increase the computational complexity and the memory bandwidth requirement, which make motion compensation a difficult component in the implementation of an AVS HDTV decoder. This paper proposes an efficient motion compensation architecture for the AVS-P2 video standard up to Level 6.2 of the Jizhun Profile. It has a macroblock-level pipelined structure consisting of an MV predictor unit, a reference fetch unit and a pixel interpolation unit. The proposed architecture exploits the parallelism in the AVS motion compensation algorithm to accelerate operations and uses a dedicated design to optimize memory access. It has been integrated in a prototype chip fabricated in TSMC 0.18-μm CMOS technology, and the experimental results show that this architecture achieves real-time AVS-P2 decoding for HDTV 1080i (1920×1088, 4:2:0, 60 field/s) video. The design works at a frequency of 148.5 MHz and the total gate count is about 225K.
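The macroblock-level pipeline described above can be sketched behaviorally: while macroblock i is being interpolated, the reference block for i+1 is fetched and the motion vector for i+2 is predicted, so the three units work in parallel. The stage functions below are hypothetical software placeholders; the real design is hardware, not code.

```python
# Behavioral sketch of a 3-stage macroblock MC pipeline (MV prediction ->
# reference fetch -> pixel interpolation). Stage functions are hypothetical
# stand-ins that just record which unit processed each macroblock.

def mv_predict(mb):       return f"mv({mb})"
def reference_fetch(mv):  return f"ref({mv})"
def interpolate(ref):     return f"pel({ref})"

def mc_pipeline(macroblocks):
    stages = [None, None]  # in-flight work: [fetched reference, predicted MV]
    out = []
    for mb in list(macroblocks) + [None, None]:  # extra None inputs flush the pipe
        ref, mv = stages
        if ref is not None:
            out.append(interpolate(ref))         # stage 3: interpolation
        stages = [reference_fetch(mv) if mv is not None else None,  # stage 2
                  mv_predict(mb) if mb is not None else None]       # stage 1
    return out

print(mc_pipeline([0, 1, 2]))  # ['pel(ref(mv(0)))', 'pel(ref(mv(1)))', 'pel(ref(mv(2)))']
```

The point of the pipelining is throughput: after a two-macroblock fill latency, one macroblock completes per cycle even though each macroblock passes through all three units.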
15.
16.
Haruki Nakamura Susumu Date Hideo Matsuda Shinji Shimojo 《New Generation Computing》2004,22(2):157-166
Recently, life scientists have expressed a strong need for computational power sufficient to complete their analyses within
a realistic time as well as for a computational power capable of seamlessly retrieving biological data of interest from multiple
and diverse bio-related databases for their research infrastructure. This need implies that life science strongly requires
the benefits of advanced IT. In Japan, the Biogrid project has been promoted since 2002 toward the establishment of a next-generation
research infrastructure for advanced life science. In this paper, the Biogrid strategy toward these ends is detailed along
with the role and mission imposed on the Biogrid project. In addition, we present the current status of the development of
the project as well as the future issues to be tackled.
Haruki Nakamura, Ph.D.: He is Professor of Protein Informatics at Institute for Protein Research, Osaka University. He received his B.S., M.A. and
Ph.D. from the University of Tokyo in 1975, 1977 and 1980 respectively. His research field is Biophysics and Bioinformatics,
and has so far developed several original algorithms in the computational analyses of protein electrostatic features and folding
dynamics. He is also a head of PDBj (Protein Data Bank Japan) to manage and develop the protein structure database, collaborating
with RCSB (Research Collaboratory for Structural Bioinformatics) in USA and MSD-EBI (Macromolecular Structure Database at
the European Bioinformatics Institute) in EU.
Susumu Date, Ph.D.: He is Assistant Professor of the Graduate School of Information Science and Technology, Osaka University. He received his
B.E., M.E. and Ph.D. degrees from Osaka University in 1997, 2000 and 2002, respectively. His research field is computer science
and his current research interests include application of Grid computing and related information technologies to life sciences.
He is a member of IEEE CS and IPSJ.
Hideo Matsuda, Ph.D.: He is Professor of the Department of Bioinformatic Engineering, the Graduate School of Information Science and Technology,
Osaka University. He received his B.S., M.Eng. and Ph.D. degrees from Kobe University in 1982, 1984 and 1987 respectively.
For M.Eng. and Ph.D. degrees, he majored in computer science. His research interests include computational analysis of genomic
sequences. He has been involved in the FANTOM (Functional Annotation of Mouse) Project for the functional annotation of RIKEN
mouse full-length cDNA sequences. He is a member of ISCB, IEEE CS and ACM.
Shinji Shimojo, Ph.D.: He received M.E. and Ph.D. degrees from Osaka University in 1983 and 1986 respectively. He was an Assistant Professor with
the Department of Information and Computer Sciences, Faculty of Engineering Science at Osaka University from 1986, and an
Associate Professor with Computation Center from 1991 to 1998. During the period, he also worked as a visiting researcher
at the University of California, Irvine for a year. He has been Professor with Cybermedia Center (then Computation Center)
at Osaka University since 1998. His current research focuses on a wide variety of multimedia applications, peer-to-peer
communication networks, ubiquitous network systems and Grid technologies. He is a member of ACM, IEEE and IEICE.
17.
Test Resource Partitioning Based on Efficient Response Compaction for Test Time and Tester Channels Reduction
Yin-He Han Xiao-Wei Li Hua-Wei Li Anshuman Chandra 《计算机科学技术学报》2005,20(2):201-209
This paper presents a test resource partitioning technique based on an efficient response compaction design called the quotient compactor (q-Compactor). Because the q-Compactor is a single-output compactor, high compaction ratios can be obtained even for chips with a small number of outputs. Some theorems for the design of the q-Compactor are presented to achieve full diagnostic ability, minimize error cancellation and handle unknown bits in the outputs of the circuit under test (CUT). The q-Compactor can also be moved to the load-board, so as to compact the output response of the CUT even during functional testing. Therefore, the number of tester channels required to test the chip is significantly reduced. The experimental results on the ISCAS '89 benchmark circuits and an MPEG-2 decoder SoC show that the proposed compaction scheme is very efficient.
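The idea of single-output response compaction can be illustrated generically. The sketch below is not the actual q-Compactor design (the abstract does not give its structure); it is a plain XOR space compactor, which shows why one tester channel can observe many scan outputs and why error cancellation, which q-Compactor is engineered to minimize, is the failure mode to worry about.

```python
# Generic single-output space compactor sketch (NOT the q-Compactor itself):
# each cycle, the n scan-output bits are XOR-reduced to a single bit, so the
# tester observes one channel. A fault flipping an odd number of bits in a
# cycle is detected; an even number of flips cancels out, which is exactly
# the "error cancellation" that dedicated compactor designs try to minimize.
from functools import reduce
from operator import xor

def compact(response_slices):
    """Compact per-cycle multi-bit scan responses into a single-bit stream."""
    return [reduce(xor, bits) for bits in response_slices]

golden = [(0, 1, 1), (1, 1, 0), (0, 0, 0)]
faulty = [(0, 1, 1), (1, 0, 0), (0, 0, 0)]  # one bit flipped in cycle 1
print(compact(golden))  # [0, 0, 0]
print(compact(faulty))  # [0, 1, 0] -> mismatch seen on a single channel
```

Since the compacted stream is one bit per cycle, test time and tester channel count drop together, which is the test-resource-partitioning benefit the paper quantifies.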
18.
New Meta-Heuristic for Combinatorial Optimization Problems: Intersection Based Scaling
Peng Zou Zhi Zhou Ying-Yu Wan Guo-Liang Chen Jun Gu 《计算机科学技术学报》2004,19(6):0-0
Combinatorial optimization problems are found in many application fields such as computer science, engineering and economics. In this paper, a new efficient meta-heuristic, Intersection-Based Scaling (IBS), is proposed; it can be applied to combinatorial optimization problems. The main idea of IBS is to scale down the size of the instance based on the intersection of several local optima, and to simplify the search space by extracting that intersection from the instance, which makes the search more efficient. The combination of IBS with local search heuristics for different combinatorial optimization problems, such as the Traveling Salesman Problem (TSP) and the Graph Partitioning Problem (GPP), is studied, and comparisons are made with some of the best heuristic and meta-heuristic algorithms. IBS is found to significantly improve the performance of existing local search heuristics and to significantly outperform the best known algorithms.
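The core IBS idea, freezing the elements shared by several local optima and searching only the remainder, can be sketched abstractly. Solutions are modeled as frozensets of elements (e.g., edges of a TSP tour), and `toy_local_search` is a hypothetical stand-in for a problem-specific heuristic, not one of the paper's algorithms.

```python
# Sketch of Intersection-Based Scaling (IBS): run a local-search heuristic
# from several starts, intersect the resulting local optima (the elements
# they all agree on), freeze those elements, and search only the reduced
# instance. Instances and solutions are abstract frozensets of elements.

def ibs(instance, local_search, num_starts=5):
    optima = [local_search(instance, seed=s) for s in range(num_starts)]
    fixed = frozenset.intersection(*optima)   # elements shared by all local optima
    reduced = instance - fixed                # the scaled-down search space
    best_rest = local_search(reduced, seed=0) # search only the remainder
    return fixed | best_rest

# Toy stand-in heuristic: keeps elements divisible by a seed-dependent value.
def toy_local_search(instance, seed):
    return frozenset(x for x in instance if x % (seed % 3 + 1) == 0)

solution = ibs(frozenset(range(1, 13)), toy_local_search, num_starts=3)
```

With three starts the toy optima are the full set, the even numbers, and the multiples of 3, so the frozen intersection is {6, 12}; only the remaining ten elements are searched in the second phase, which is the scaling effect IBS exploits.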
19.
Summary We study the relation between knowledge and space. That is, we analyze how much shared memory space is needed in order to learn certain kinds of facts. Such results are useful tools for reasoning about shared memory systems. In addition we generalize a known impossibility result, and show that results about how knowledge can be gained and lost in message passing systems also hold for shared memory systems.
Michael Merritt received a B.S. degree in Philosophy and in Computer Science from Yale College in 1978, the M.S. and Ph.D. degrees in Information and Computer Science in 1980 and 1983, respectively, from the Georgia Institute of Technology. Since 1983 he has been a member of technical staff at AT & T Bell Laboratories, and has taught as an adjunct or visiting lecturer at Stevens Institute of Technology, Massachusetts Institute of Technology, and Columbia University. In 1989 he was program chair for the ACM Symposium on Principles of Distributed Computing. His research interests include distributed and concurrent computation, both algorithms and formal methods for verifying their correctness, cryptography, and security. He is an editor for Distributed Computing and for Information and Computation, recently co-authored a book on database concurrency control algorithms, and is a member of the ACM and of Computer Professionals for Social Responsibility.
Gadi Taubenfeld received the B.A., M.Sc. and Ph.D. degrees in Computer Science from the Technion (Israel Institute of Technology), in 1982, 1984 and 1988, respectively. From 1988 to 1990 he was a research scientist at Yale University. Since 1991 he has been a member of technical staff at AT & T Bell Laboratories. His primary research interests are in concurrent and distributed computing. A preliminary version of this work appeared in the Proceedings of the Tenth Annual ACM Symposium on Principles of Distributed Computing, pages 189–200, Montreal, Canada, August 1991.
20.
Akihiko Konagaya Fumikazu Konishi Mariko Hatakeyama Kenji Satou 《New Generation Computing》2004,22(2):167-176
The grid design strongly depends on not only a network infrastructure but also a superstructure, that is, a social structure
of virtual organizations where people trust each other, share resources and work together. Open Bioinformatics Grid (OBIGrid)
is a grid aimed at building a cooperative bioinformatics environment for computer scientists and biologists. In October 2003,
OBIGrid consisted of 293 nodes with 492 CPUs provided by 27 sites at universities, laboratories and other enterprises, connected
by a virtual private network over the Internet. So many organizations have participated because OBIGrid has been conscious
of constructing a superstructure on a grid as well as a grid infrastructure. For the benefit of OBIGrid participants, we have
developed a series of life science application services: an open bioinformatics environment (OBIEnv), a scalable genome database
(OBISgd), a genome annotation system (OBITco), and a biochemical network simulator (OBIYagns), to name a few.
Akihiko Konagaya, Dr.Eng.: He is Project Director of Bioinformatics Group, RIKEN Genomic Sciences Center. He received his B.S. and M.S. from Tokyo
Institute of Technology in 1978 and 1980 in Informatics Science, and joined NEC Corporation in 1980, Japan Advanced Institute
of Science and Technology in 1997, RIKEN GSC in 2003. His research covers wide area from computer architectures to bioinformatics.
He has been much involved into the Open Bioinformatics Grid project since 2002.
Fumikazu Konishi, Dr.Eng.: He is researcher at Bioinformatics Group, RIKEN Genomic Sciences Center since 2000. He received his M.S. (1996) and Ph.D.
(2001) from Tokyo Metropolitan Institute of Technology. He served as an assistant in Department of Production and Information
Systems Engineering, Tokyo Metropolitan Institute of Technology since 2000. He also works in Structurome Research Group, RIKEN
Harima Institute from 2001. His research interests include concurrent engineering, bioinformatics and the Grid. He was deeply
involved in the design of OBIGrid.
Mariko Hatakeyama, Ph.D.: She received her Ph.D. degree from Tokyo University of Agriculture and Technology. She is a Research Scientist at the Bioinformatics
Group, RIKEN Genomic Sciences Center. Her research topics are: microbiology, enzymology and signal transduction of mammalian
cells. She is now working on computational simulation of signal transduction systems and on thermophilic bacteria project.
Kenji Satou, Ph.D.: He is Associate Professor of School of Knowledge Science at Japan Advanced Institute of Science and Technology. He received
B.S., M.E. and Ph.D. degrees from Kyushu University, in 1987, 1989 and 1995 respectively. For each degree, he majored in computer
engineering. His research interests have progressed from deductive database application through data mining to Grid computing
and natural language processing. His current field of research is bioinformatics. He prefers set-oriented manner of thinking,
and usually wonders how he can construct an intelligent-looking system based on large amount of heterogeneous data and computer
resources.