Similar documents
1.
Semantic scene classification is an open problem in computer vision, especially when information from only a single image is employed. In applications involving image collections, however, images are clustered sequentially, allowing surrounding images to be used as temporal context. We present a general probabilistic temporal context model in which the first-order Markov property is used to integrate content-based and temporal context cues. The model uses elapsed time-dependent transition probabilities between images to enforce the fact that images captured within a shorter period of time are more likely to be related. This model is generalized in that it allows arbitrary elapsed time between images, making it suitable for classifying image collections. In addition, we derived a variant of this model to use in ordered image collections for which no timestamp information is available, such as film scans. We applied the proposed context models to two problems, achieving significant gains in accuracy in both cases. The two algorithms used to implement inference within the context model, Viterbi and belief propagation, yielded similar results with a slight edge to belief propagation. Matthew Boutell received the BS degree in Mathematical Science from Worcester Polytechnic Institute, Massachusetts, in 1993, the MEd degree from University of Massachusetts at Amherst in 1994, and the PhD degree in Computer Science from the University of Rochester, Rochester, NY, in 2005. He served for several years as a mathematics and computer science instructor at Norton High School and Stonehill College and as a research intern/consultant at Eastman Kodak Company. Currently, he is Assistant Professor of Computer Science and Software Engineering at Rose-Hulman Institute of Technology in Terre Haute, Indiana. His research interests include image understanding, machine learning, and probabilistic modeling. 
Jiebo Luo received his PhD degree in Electrical Engineering from the University of Rochester, Rochester, NY in 1995. He is a Senior Principal Scientist with the Kodak Research Laboratories. He was a member of the Organizing Committee of the 2002 IEEE International Conference on Image Processing and 2006 IEEE International Conference on Multimedia and Expo, a guest editor for the Journal of Wireless Communications and Mobile Computing Special Issue on Multimedia Over Mobile IP and the Pattern Recognition journal Special Issue on Image Understanding for Digital Photos, and a Member of the Kodak Research Scientific Council. He is on the editorial boards of the IEEE Transactions on Multimedia, Pattern Recognition, and Journal of Electronic Imaging. His research interests include image processing, pattern recognition, computer vision, medical imaging, and multimedia communication. He has authored over 100 technical papers and holds over 30 granted US patents. He is a Kodak Distinguished Inventor and a Senior Member of the IEEE. Chris Brown (BA Oberlin 1967, PhD University of Chicago 1972) is Professor of Computer Science at the University of Rochester. He has published in many areas of computer vision and robotics. He wrote COMPUTER VISION with his colleague Dana Ballard, and influential work on the “active vision” paradigm was reported in two special issues of the International Journal of Computer Vision. He edited the first two volumes of ADVANCES IN COMPUTER VISION for Erlbaum and (with D. Terzopoulos) REAL-TIME COMPUTER VISION, from Cambridge University Press. He is the co-editor of VIDERE, the first entirely on-line refereed computer vision journal (MIT Press). His most recent PhD students have done research in infrared tracking and face recognition, features and strategies for image understanding, augmented reality, and three-dimensional reconstruction algorithms. 
He supervised the undergraduate team that twice won the AAAI Host Robot competition (and came third in the Robot Rescue competition in 2003).
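Inference in such a chain model is standard; a generic Viterbi decoder over per-image class scores can be sketched as follows. The input tables (log emissions, elapsed-time-dependent log transitions, log priors) are assumptions for illustration, not the authors' actual features:

```python
def viterbi(emission_logp, trans_logp, prior_logp):
    """Viterbi decoding for a first-order Markov temporal-context model.

    emission_logp[t][s]: log P(content cues of image t | class s)
    trans_logp[t][s1][s2]: log P(class s2 at image t | class s1 at image t-1);
        indexed by t because transitions depend on the elapsed time between images
    prior_logp[s]: log prior of class s for the first image
    Returns the most likely class sequence.
    """
    n, k = len(emission_logp), len(prior_logp)
    score = [prior_logp[s] + emission_logp[0][s] for s in range(k)]
    back = []
    for t in range(1, n):
        new_score, ptr = [], []
        for s2 in range(k):
            best = max(range(k), key=lambda s1: score[s1] + trans_logp[t][s1][s2])
            new_score.append(score[best] + trans_logp[t][best][s2] + emission_logp[t][s2])
            ptr.append(best)
        back.append(ptr)
        score = new_score
    # backtrack from the best final state
    path = [max(range(k), key=lambda s: score[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]
```

With strong transition weight on staying in the same class, the decoder smooths isolated misclassifications, which is the effect the temporal context model exploits.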

2.
An Algorithm Based on Tabu Search for Satisfiability Problem
In this paper, a computationally effective algorithm based on tabu search for solving the satisfiability problem (TSSAT) is proposed. Some novel and efficient heuristic strategies for generating the candidate neighborhood of the current assignment and for selecting the variables to be flipped are presented. In particular, the aspiration criterion and tabu-list structure of TSSAT differ from those of traditional tabu search. Computational experiments on a class of problem instances show that, in a reasonable amount of computer time, TSSAT yields better results than Novelty, which is currently among the fastest known algorithms. TSSAT is therefore feasible and effective.
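The ingredients the abstract names (a candidate neighborhood drawn from unsatisfied clauses, a tabu list, an aspiration criterion) can be illustrated with a generic tabu-search loop for SAT. This is a minimal sketch, not the paper's TSSAT; the tenure and flip limits are arbitrary assumptions:

```python
import random

def tabu_sat(clauses, n_vars, tabu_tenure=5, max_flips=10000, seed=0):
    """Toy tabu search for SAT. clauses: list of clauses, each a list of
    non-zero ints in DIMACS style (positive = variable, negative = negation).
    Returns a satisfying assignment as a list of bools, or None."""
    rng = random.Random(seed)
    assign = [False] + [rng.choice([False, True]) for _ in range(n_vars)]  # 1-indexed

    def clause_sat(clause):
        return any((lit > 0) == assign[abs(lit)] for lit in clause)

    def num_unsat():
        return sum(not clause_sat(c) for c in clauses)

    tabu_until = [0] * (n_vars + 1)   # flipping v is tabu while step < tabu_until[v]
    best_seen = num_unsat()
    for step in range(max_flips):
        if num_unsat() == 0:
            return assign[1:]
        # candidate neighborhood: variables occurring in unsatisfied clauses
        cands = sorted({abs(l) for c in clauses if not clause_sat(c) for l in c})
        best_var, best_score = None, None
        for v in cands:
            assign[v] = not assign[v]
            score = num_unsat()
            assign[v] = not assign[v]
            # aspiration criterion: a tabu move is allowed if it beats the best so far
            if step < tabu_until[v] and score >= best_seen:
                continue
            if best_score is None or score < best_score:
                best_var, best_score = v, score
        if best_var is None:              # every move tabu: pick a random candidate
            best_var = rng.choice(cands)
        assign[best_var] = not assign[best_var]
        tabu_until[best_var] = step + 1 + tabu_tenure
        best_seen = min(best_seen, num_unsat())
    return None
```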

3.
In this paper, region features and relevance feedback are used to improve the performance of content-based image retrieval (CBIR). Unlike existing region-based approaches, where either individual regions are used or only a simple spatial layout is modeled, the proposed approach simultaneously models both region properties and their spatial relationships in a probabilistic framework. Furthermore, the retrieval performance is improved by adaptive-filter-based relevance feedback. To illustrate the performance of the proposed approach, extensive experiments have been carried out on a large heterogeneous collection of 17,000 images, yielding promising results on a wide variety of queries.

4.
This paper describes a system for visual object recognition based on mobile augmented reality gear. The user can train the system to the recognition of objects online using advanced methods of interaction with mobile systems: Hand gestures and speech input control “virtual menus,” which are displayed as overlays within the camera image. Here we focus on the underlying neural recognition system, which implements the key requirement of an online trainable system—fast adaptation to novel object data. The neural three-stage architecture can be adapted in two modes: In a fast training mode (FT), only the last stage is adapted, whereas complete training (CT) rebuilds the system from scratch. Using FT, online acquired views can be added at once to the classifier, the system being operational after a delay of less than a second, though still with reduced classification performance. In parallel, a new classifier is trained (CT) and loaded to the system when ready. The text was submitted by the authors in English. Gunther Heidemann was born in 1966. He studied physics at the Universities of Karlsruhe and Münster and received his PhD (Eng.) from Bielefeld University in 1998. He is currently working within the collaborative research project “Hybrid Knowledge Representation” of the SFB 360 at Bielefeld University. His fields of research are mainly computer vision, robotics, neural networks, data mining, bonification, and hybrid systems. Holger Bekel was born in 1970. He received his BS degree from the University of Bielefeld, Germany, in 1997. In 2002 he received a diploma in Computer Science from the University of Bielefeld. He is currently pursuing a PhD program in Computer Science at the University of Bielefeld, working within the Neuroinformatics Group (AG Neuroinformatik) in the project VAMPIRE (Visual Active Memory Processes and Interactive Retrieval). His fields of research are active vision and data mining. Ingo Bax was born in 1976. 
He received a diploma in Computer Science from the University of Bielefeld in 2002. He is currently pursuing a PhD program in Computer Science at the Neuroinformatics Group of the University of Bielefeld, working within the VAMPIRE project. His fields of interest are cognitive computer vision and pattern recognition. Helge J. Ritter was born in 1958. He studied physics and mathematics at the Universities of Bayreuth, Heidelberg, and Munich. After a PhD in physics at the Technical University of Munich in 1988, he visited the Laboratory of Computer Science at Helsinki University of Technology and the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign. Since 1990 he has headed the Neuroinformatics Group at the Faculty of Technology, Bielefeld University. His main interests are principles of neural computation and their application to building intelligent systems. In 1999, he was awarded the SEL Alcatel Research Prize, and in 2001, the Leibniz Prize of the German Research Foundation (DFG).

5.
6.
The research outlined in this paper is part of a wider research program named SYSCOLAG (Coastal and LAGoonal SYStems in the Languedoc-Roussillon area, France) dedicated to sustainable coastal management. The main objective of this program is to build up a communication infrastructure to improve the exchange of information and knowledge between the various scientific disciplines involved in the research. In order to ensure the sharing of resources without affecting the autonomy and independence of the partners, we propose a three-level infrastructure (resources, federation, and knowledge access) based on a metadata service (using the ISO 19115 standard for geographic information metadata) complemented by a common vocabulary (ontology). The SYSCOLAG research program is funded by the Languedoc-Roussillon authority.
Julien Barde has been a Ph.D. student in Computer Science at LIRMM (the Computer Science, Robotics and Microelectronics Laboratory of the University of Montpellier II, France) under the guidance of Thérèse Libourel and Pierre Maurel since 2002. He works for a research program on Integrated Coastal Management to improve knowledge sharing between the stakeholders of the Languedoc-Roussillon coastal area. He received his engineering/M.Sc. degrees in Oceanology Sciences and Spatial Information Treatment from the National Superior Agronomic School of Rennes (ENSAR, Brittany, France) in 2000 and 2001. He has experience in computer science, remote sensing, GIS, and oceanology. Thérèse Libourel is a Senior Lecturer in Computer Science from the Conservatoire National des Arts et Métiers (CNAM), at LIRMM since 1994. She holds a Ph.D. and a habilitation thesis in Computer Science from the University of Montpellier II (France).
Among others, her research interests are oriented towards object-oriented design, reuse of software components, object-oriented databases and evolution, and data models for spatial and temporal information systems. Pierre Maurel is a research engineer at Cemagref (France). He received his Diploma in Agronomy Engineering from the ESAP high school (France) in 1986 and his M.Sc. in quantitative geography in 1990 from Avignon University (France). In the past, he performed research and teaching in satellite image processing and GIS for environmental and water applications. His current scientific interests include the development of methods for the design of multi-partner geographic information systems, the use of metadata within Spatial Data Infrastructures, and the integration of Geographic Information technologies to support public participation in the field of Integrated River Basin Management (the HarmoniCOP European project).

7.
In typical software development, a software reliability growth model (SRGM) is applied in each testing activity to determine the time to finish the testing. However, there are some cases in which the SRGM does not work correctly; that is, the SRGM sometimes misjudges the quality of products. In order to tackle this problem, we focused on the trend of the time series data of software defects among successive testing phases and tried to estimate software quality using the trend. First, we investigate the characteristics of the time series data on the detected faults by observing the change in the number of detected faults. Using the rank correlation coefficient, the data are classified into four kinds of trends. Next, with the intention of estimating software quality, we investigate the relationship between the trends of the time series data and software quality. Here, software quality is defined by the number of faults detected during the six months after shipment. Finally, we find a relationship between the trends and metrics data collected in the software design phase. Using logistic regression, we statistically show that two review metrics in the design and coding phases can determine the trend.
Sousuke Amasaki received the B.E. degree in Information and Computer Sciences from Okayama Prefectural University, Japan, in 2000 and the M.E. degree in Information and Computer Sciences from the Graduate School of Information Science and Technology, Osaka University, Japan, in 2003. He is in the Ph.D. course of the Graduate School of Information Science and Technology at Osaka University. His interests include the software process and software quality assurance techniques. He is a student member of IEEE and ACM. Takashi Yoshitomi received the B.E. degree in Information and Computer Sciences from Osaka University, Japan, in 2002. He has been working for Hitachi Software Engineering Co., Ltd. Osamu Mizuno received the B.E., M.E., and Ph.D.
degrees in Information and Computer Sciences from Osaka University, Japan, in 1996, 1998, and 2001, respectively. He is an Assistant Professor of the Graduate School of Information Science and Technology at Osaka University. His research interests include software process improvement techniques and software risk management techniques. He is a member of IEEE. Yasunari Takagi received the B.E. degree in Information and Computer Science from Nagoya Institute of Technology, Japan, in 1985. He has been working for OMRON Corporation, and has also been in the Ph.D. course of the Graduate School of Information Science and Technology at Osaka University since 2002. Tohru Kikuno received the B.E., M.Sc., and Ph.D. degrees in Electrical Engineering from Osaka University, Japan, in 1970, 1972, and 1975, respectively. He was with Hiroshima University from 1975 to 1987. Since 1990, he has been a Professor of the Department of Information and Computer Sciences at Osaka University. His research interests include the analysis and design of fault-tolerant systems, the quantitative evaluation of software development processes, and the design of procedures for testing communication protocols. He is a member of IEEE and ACM.
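The trend-classification step can be illustrated with Spearman's rank correlation between time and per-phase fault counts. The thresholds and the three-way labeling below are hypothetical (the paper distinguishes four trend classes); the sketch only shows the mechanism:

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation with average ranks for ties.
    Assumes ys is not constant (otherwise the denominator is zero)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0          # average 1-based rank for a tie group
            for k in range(i, j + 1):
                r[order[k]] = avg
            i = j + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

def classify_trend(fault_counts, up=0.6, down=-0.6):
    """Label a fault-count time series by its rank correlation with time.
    Thresholds and labels are illustrative assumptions, not the paper's scheme."""
    rho = spearman_rho(list(range(len(fault_counts))), fault_counts)
    if rho >= up:
        return "increasing"
    if rho <= down:
        return "decreasing"
    return "flat"
```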

8.
We study the relationships between a number of behavioural notions that have arisen in the theory of distributed computing. In order to sharpen the understanding of these relationships we apply the chosen behavioural notions to a basic net-theoretic model of distributed systems called elementary net systems. The behavioural notions considered here are trace languages, non-sequential processes, unfoldings and event structures. The relationships between these notions are brought out in the process of establishing that, for each elementary net system, the trace language representation of its behaviour agrees in a strong way with the event structure representation of its behaviour. M. Nielsen received a Master of Science degree in mathematics and computer science in 1973, and a Ph.D. degree in computer science in 1976, both from Aarhus University, Denmark. He has held academic positions at the Department of Computer Science, Aarhus University, Denmark since 1976, and was a visiting researcher at the Computer Science Department, University of Edinburgh, U.K., 1977–79, and the Computer Laboratory, Cambridge University, U.K., 1986. His research interest is in the theory of distributed computing. Grzegorz Rozenberg received a master of engineering degree from the Department of Electronics (section computers) of the Technical University of Warsaw in 1964 and a Ph.D. in mathematics from the Institute of Mathematics of the Polish Academy of Science in 1968. He has held academic positions at the Institute of Mathematics of the Polish Academy of Science, the Department of Mathematics of Utrecht University, the Department of Computer Science at SUNY at Buffalo, and the Department of Mathematics of the University of Antwerp. He is currently Professor at the Department of Computer Science of Leiden University and Adjoint Professor at the Department of Computer Science of the University of Colorado at Boulder.
His research interests include formal languages and automata theory, theory of graph transformations, and theory of concurrent systems. He is currently President of the European Association for Theoretical Computer Science (EATCS). P.S. Thiagarajan received the Bachelor of Technology degree from the Indian Institute of Technology, Madras, India in 1970. He was awarded the Ph.D. degree by Rice University, Houston, Texas, U.S.A., in 1973. He has been a Research Associate at the Massachusetts Institute of Technology, Cambridge, a Staff Scientist at the Gesellschaft für Mathematik und Datenverarbeitung, St. Augustin, a Lektor at Århus University, Århus, and an Associate Professor at the Institute of Mathematical Sciences, Madras. He is currently a Professor at the School of Mathematics, SPIC Science Foundation, Madras. His research interest is in the theory of distributed computing.

9.
Privacy-preserving SVM classification
Traditional Data Mining and Knowledge Discovery algorithms assume free access to data, either at a centralized location or in federated form. Increasingly, privacy and security concerns restrict this access, thus derailing data mining projects. What is required is distributed knowledge discovery that is sensitive to this problem. The key is to obtain valid results, while providing guarantees on the nondisclosure of data. Support vector machine classification is one of the most widely used classification methodologies in data mining and machine learning. It is based on solid theoretical foundations and has wide practical application. This paper proposes a privacy-preserving solution for support vector machine (SVM) classification, PP-SVM for short. Our solution constructs the global SVM classification model from data distributed at multiple parties, without disclosing the data of each party to others. Solutions are sketched out for data that is vertically, horizontally, or even arbitrarily partitioned. We quantify the security and efficiency of the proposed method, and highlight future challenges. Jaideep Vaidya received the Bachelor's degree in Computer Engineering from the University of Mumbai. He received the Master's and the Ph.D. degrees in Computer Science from Purdue University. He is an Assistant Professor in the Management Science and Information Systems Department at Rutgers University. His research interests include data mining and analysis, information security, and privacy. He has received best paper awards for papers in ICDE and SIGKDD. He is a Member of the IEEE Computer Society and the ACM. Hwanjo Yu received the Ph.D. degree in Computer Science in 2004 from the University of Illinois at Urbana-Champaign. He is an Assistant Professor in the Department of Computer Science at the University of Iowa. His research interests include data mining, machine learning, database, and information systems.
He is an Associate Editor of Neurocomputing and served on the NSF Panel in 2006. He has served on the program committees of the 2005 ACM SAC Data Mining track, 2005 and 2006 IEEE ICDM, 2006 ACM CIKM, and 2006 SIAM Data Mining. Xiaoqian Jiang received the B.S. degree in Computer Science from Shanghai Maritime University, Shanghai, in 2003. He received the M.C.S. degree in Computer Science from the University of Iowa, Iowa City, in 2005. Currently, he is pursuing a Ph.D. degree at the School of Computer Science, Carnegie Mellon University. His research interests are computer vision, machine learning, data mining, and privacy protection technologies.
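The paper's actual protocols are not reproduced here, but a common building block of privacy-preserving distributed computation, a secure sum via additive secret sharing, can be sketched as a toy single-process simulation. All names and the modulus are illustrative assumptions:

```python
import random

def secure_sum(private_values, modulus=2**61 - 1, seed=None):
    """Toy additive-secret-sharing secure sum among n parties.

    Each party splits its private value into n random shares (summing to the
    value mod m) and sends one share to every party. Each party only ever sees
    uniformly random shares, yet the sums of received shares reconstruct the
    global total. Illustrative only; not the paper's PP-SVM protocol."""
    rng = random.Random(seed)
    n = len(private_values)
    received = [0] * n                       # share-sums accumulated at each party
    for holder, v in enumerate(private_values):
        shares = [rng.randrange(modulus) for _ in range(n - 1)]
        shares.append((v - sum(shares)) % modulus)   # force shares to sum to v mod m
        for party, s in enumerate(shares):
            received[party] = (received[party] + s) % modulus
    return sum(received) % modulus
```

The reconstruction is exact as long as the true total is below the modulus; protocols like PP-SVM compose such primitives (secure sums, dot products) to train a model without pooling the raw data.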

10.
Information services play a key role in grid systems, handling the resource discovery and management process. Existing information service architectures suffer from poor scalability, long search response times, and large traffic overhead. In this paper, we propose a service club mechanism, called S-Club, for efficient service discovery. In S-Club, an overlay is built on the existing Grid Information Service (GIS) mesh network of CROWN, so that GISs are organized into service clubs. Each club serves a certain type of service, while each GIS may join one or more clubs. S-Club is adopted in our CROWN Grid and its performance is evaluated by comprehensive simulations. The results show that the S-Club scheme significantly improves search performance and outperforms existing approaches. Chunming Hu is a research staff member in the Institute of Advanced Computing Technology at the School of Computer Science and Engineering, Beihang University, Beijing, China. He received his B.E. and M.E. from the Department of Computer Science and Engineering at Beihang University, and the Ph.D. degree from the School of Computer Science and Engineering of Beihang University, Beijing, China, in 2005. His research interests include peer-to-peer and grid computing, and distributed systems and software architectures. Yanmin Zhu is a Ph.D. candidate in the Department of Computer Science, Hong Kong University of Science and Technology. He received his B.S. degree in computer science from Xi’an Jiaotong University, Xi’an, China, in 2002. His research interests include grid computing, peer-to-peer networking, pervasive computing and sensor networks. He is a member of the IEEE and the IEEE Computer Society. Jinpeng Huai is a Professor and Vice President of Beihang University. He serves on the Steering Committee for Advanced Computing Technology Subject, the National High-Tech Program (863), as Chief Scientist.
He is a member of the Consulting Committee of the Central Government’s Information Office, and Chairman of the Expert Committee in both the National e-Government Engineering Taskforce and the National e-Government Standard Office. Dr. Huai and his colleagues are leading key e-Science projects of the National Science Foundation of China (NSFC) and Sino-UK collaborations. He has authored over 100 papers. His research interests include middleware, peer-to-peer (P2P), grid computing, trustworthiness and security. Yunhao Liu received his B.S. degree from the Automation Department of Tsinghua University, China, in 1995, an M.A. degree from Beijing Foreign Studies University, China, in 1997, and an M.S. and a Ph.D. degree in computer science and engineering from Michigan State University in 2003 and 2004, respectively. He is now an assistant professor in the Department of Computer Science and Engineering at Hong Kong University of Science and Technology. His research interests include peer-to-peer computing, pervasive computing, distributed systems, network security, grid computing, and high-speed networking. He is a senior member of the IEEE Computer Society. Lionel M. Ni is chair professor and head of the Computer Science and Engineering Department at Hong Kong University of Science and Technology. He received the Ph.D. degree in electrical and computer engineering from Purdue University, West Lafayette, Indiana, in 1980. He was a professor of computer science and engineering at Michigan State University from 1981 to 2003, where he received the Distinguished Faculty Award in 1994. His research interests include parallel architectures, distributed systems, high-speed networks, and pervasive computing. A fellow of the IEEE and the IEEE Computer Society, he has chaired many professional conferences and has received a number of awards for authoring outstanding papers.
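The club idea can be sketched as a registry mapping service types to the GIS nodes that index them, so a lookup touches only the relevant club rather than the whole mesh. This is an illustration of the concept, not CROWN's actual protocol; all names are hypothetical:

```python
class SClubRegistry:
    """Toy sketch of the S-Club idea: GIS nodes join one club per service
    type they index, and a query for a type is routed only within that club."""

    def __init__(self):
        self.clubs = {}                       # service type -> set of GIS node ids

    def join(self, gis_node, service_type):
        """A GIS node joins the club for a service type (may join several)."""
        self.clubs.setdefault(service_type, set()).add(gis_node)

    def leave(self, gis_node, service_type):
        self.clubs.get(service_type, set()).discard(gis_node)

    def lookup(self, service_type):
        """Return only the GIS nodes that index the given service type."""
        return sorted(self.clubs.get(service_type, set()))
```

A query for "storage" services consults only the storage club's members, which is where the scheme's traffic and latency savings come from.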

11.
Data extraction from the web based on pre-defined schema
With the development of the Internet, the World Wide Web has become an invaluable information source for most organizations. However, most documents available on the Web are in HTML, which was originally designed for document formatting with little consideration of content. Effectively extracting data from such documents remains a non-trivial task. In this paper, we present a schema-guided approach to extracting data from HTML pages. Under this approach, the user defines a schema specifying what is to be extracted and provides sample mappings between the schema and the HTML page. The system induces the mapping rules and generates a wrapper that takes the HTML page as input and produces the required data in the form of XML conforming to the user-defined schema. A prototype system implementing the approach has been developed. Preliminary experiments indicate that the proposed semi-automatic approach is not only easy to use but also able to extract the required data from input pages with high accuracy.
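The wrapper idea can be sketched as follows: a schema maps field names to extraction rules, and the generated wrapper turns an HTML page into XML conforming to that schema. Here the rules are hand-written regexes purely for illustration; the paper's system induces its rules from user-supplied sample mappings instead:

```python
import re
import xml.etree.ElementTree as ET

def make_wrapper(schema):
    """Toy schema-guided wrapper generator.

    schema: dict mapping field names to regex patterns, each with one capture
    group (hypothetical hand-written rules standing in for induced ones).
    Returns a function that maps an HTML string to an XML string."""
    compiled = {field: re.compile(pat, re.S) for field, pat in schema.items()}

    def wrap(html, root_tag="record"):
        root = ET.Element(root_tag)
        for field, rx in compiled.items():
            m = rx.search(html)
            ET.SubElement(root, field).text = m.group(1).strip() if m else ""
        return ET.tostring(root, encoding="unicode")

    return wrap
```

For example, a schema of `{"title": r"<h1>(.*?)</h1>"}` applied to a product page yields `<record><title>…</title></record>`, i.e. data restructured to the user-defined schema.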

12.
In this paper, a novel set of techniques adopted in HarkMan is introduced. HarkMan is a keyword-spotter designed to automatically spot the given words of a vocabulary-independent task in unconstrained Chinese telephone speech. The speaking manner and the number of keywords are not limited. This paper focuses on the novel techniques, which address acoustic modeling, the keyword-spotting network, search strategies, robustness, and rejection. The underlying technologies used in HarkMan are useful not only for keyword spotting but also for continuous speech recognition. The system has achieved a figure-of-merit value over 90%.

13.
The simple least-significant-bit (LSB) substitution technique is the easiest way to embed secret data in the host image. To avoid image degradation of the simple LSB substitution technique, Wang et al. proposed a method using the substitution table to process image hiding. Later, Thien and Lin employed the modulus function to solve the same problem. In this paper, the proposed scheme combines the modulus function and the optimal substitution table to improve the quality of the stego-image. Experimental results show that our method can achieve better quality of the stego-image than Thien and Lin’s method does. The text was submitted by the authors in English. Chin-Shiang Chan received his BS degree in Computer Science in 1999 from the National Cheng Chi University, Taipei, Taiwan and the MS degree in Computer Science and Information Engineering in 2001 from the National Chung Cheng University, ChiaYi, Taiwan. He is currently a Ph.D. student in Computer Science and Information Engineering at the National Chung Cheng University, Chiayi, Taiwan. His research fields are image hiding and image compression. Chin-Chen Chang received his BS degree in applied mathematics in 1977 and his MS degree in computer and decision sciences in 1979, both from the National Tsing Hua University, Hsinchu, Taiwan. He received his Ph.D. in computer engineering in 1982 from the National Chiao Tung University, Hsinchu, Taiwan. During the academic years of 1980–1983, he was on the faculty of the Department of Computer Engineering at the National Chiao Tung University. From 1983–1989, he was on the faculty of the Institute of Applied Mathematics, National Chung Hsing University, Taichung, Taiwan. From 1989 to 2004, he has worked as a professor in the Institute of Computer Science and Information Engineering at National Chung Cheng University, Chiayi, Taiwan. 
Since 2005, he has worked as a professor in the Department of Information Engineering and Computer Science at Feng Chia University, Taichung, Taiwan. Dr. Chang is a Fellow of IEEE, a Fellow of IEE, and a member of the Chinese Language Computer Society, the Chinese Institute of Engineers of the Republic of China, and the Phi Tau Phi Society of the Republic of China. His research interests include computer cryptography, data engineering, and image compression. Yu-Chen Hu received his Ph.D. degree in Computer Science and Information Engineering from the Department of Computer Science and Information Engineering, National Chung Cheng University, Chiayi, Taiwan in 1999. Dr. Hu is currently an assistant professor in the Department of Computer Science and Information Engineering, Providence University, Sha-Lu, Taiwan. He is a member of SPIE and a member of the IEEE. He is also a member of the Phi Tau Phi Society of the Republic of China. His research interests include image and data compression, information hiding, and image processing.
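The baseline the paper improves on, plain k-bit LSB substitution, can be sketched as follows. The modulus-function and optimal-substitution-table refinements are not reproduced; this only shows the embedding mechanism being improved:

```python
def embed_lsb(pixels, bits, k=1):
    """Plain k-bit least-significant-bit substitution.

    pixels: list of 0-255 grayscale values; bits: '0'/'1' string with
    len(bits) <= k * len(pixels). Each group of k message bits overwrites
    the k low-order bits of one pixel."""
    out = list(pixels)
    mask = (1 << k) - 1
    for i in range(0, len(bits), k):
        chunk = bits[i:i + k].ljust(k, "0")        # pad the final partial chunk
        out[i // k] = (out[i // k] & ~mask) | int(chunk, 2)
    return out

def extract_lsb(pixels, n_bits, k=1):
    """Recover the first n_bits embedded with embed_lsb."""
    mask = (1 << k) - 1
    stream = "".join(format(p & mask, "0{}b".format(k)) for p in pixels)
    return stream[:n_bits]
```

Since only the k low-order bits change, each pixel is perturbed by at most 2^k − 1 gray levels; schemes like the one above combined with a substitution table or modulus function reduce that perturbation further, improving stego-image quality.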

14.
In this paper we discuss the paradigm of real-time processing at the lower level of computing systems. An arithmetic unit based on this principle containing addition, multiplication, division and square root operations is described. The development of the computation operators model is based on the imprecise computation paradigm and defines the concept of the adjustable calculation of a function, which manages delay and the precision of the results as an inherent and parameterized characteristic. The arithmetic function design is based on well-known algorithms and offers progressive improvement of the results. Advantages in the predictability of calculations are obtained by means of processing groups of k bits atomically and by using look-up tables. We report an evaluation of the operations in terms of path time, delay and computation error. Finally, we present an example of our real-time architecture working in a realistic context. Higinio Mora-Mora received the BS degree in computer science engineering and the BS degree in business studies from the University of Alicante, Spain, in 1996 and 1997, respectively. He received the PhD degree in computer science from the University of Alicante in 2003. Since 2002, he has been a member of the faculty of the Computer Technology and Computation Department at the same university, where he is currently an associate professor and a researcher in the Specialized Processors Architecture Laboratory. His areas of research interest include computer arithmetic, the design of floating-point units, and approximation algorithms related to VLSI design. Jerónimo Mora-Pascual received the BS degree in computer science engineering from the University of Valencia, Spain, in 1994. Since 1994, he has been a member of the faculty of the Computer Technology and Computation Department at the University of Alicante, where he is currently an associate professor. He completed his PhD in computer science at the University of Alicante in 2001.
He has worked on neural networks and their VLSI implementation. His current areas of research interest include the design of floating-point units and their application to real-time systems and processors for geometric calculus. Juan Manuel García-Chamizo received his BS in physics from the University of Granada (Spain) in 1980, and the PhD degree in Computer Science from the University of Alicante (Spain) in 1994. He is currently a full professor and director of the Computer Technology and Computation Department at the University of Alicante. His current research interests are computer vision, reconfigurable hardware, biomedical applications, computer networks and architectures, and artificial neural networks. He has directed several research projects related to the above-mentioned areas of interest. He is a member of a Spanish Consulting Commission on Electronics, Computer Science and Communications. He is also a member and editor of several conference program committees. Antonio Jimeno-Morenilla is an associate professor in the Computer Technology and Computation Department at the University of Alicante (Spain). He received his PhD from the University of Alicante in 2003. He concluded his bachelor studies at the EPFL (École Polytechnique Fédérale de Lausanne, Switzerland) and received his BS degree in computer science from the Polytechnical University of Valencia (Spain) in 1994. His research interests include sculptured surface manufacturing, CAD/CAM, computational geometry for design and manufacturing, rapid and virtual prototyping, 3D surface flattening, and high performance computer architectures. He has considerable experience in the development of 3D CAD systems for shoes. In particular, he has been involved in many government and industrial funded projects, most of them in collaboration with the Spanish Footwear Research Institute (INESCOP).
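The adjustable delay-versus-precision idea can be illustrated in software with an anytime Newton iteration for square root, where the caller chooses how many iterations (and hence how much precision) to spend. This illustrates the general imprecise-computation paradigm only, not the authors' k-bit hardware operators:

```python
def anytime_sqrt(x, iters):
    """Approximate sqrt(x) with a caller-chosen iteration budget.

    More iterations mean more delay but a smaller error, mirroring the
    'adjustable calculation' trade-off: the result is usable (imprecise)
    after any number of steps and improves progressively."""
    if x == 0:
        return 0.0
    y = x if x >= 1 else 1.0          # crude initial guess
    for _ in range(iters):
        y = 0.5 * (y + x / y)         # Newton step for y^2 = x
    return y
```

A real-time caller with a tight deadline would pick a small `iters` and accept the bounded error; a relaxed deadline allows full convergence.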

15.
Chinese-English machine translation is a significant and challenging problem in information processing. This paper presents an interlingua-based Chinese-English natural language translation system (ICENT). It introduces the mechanism of Chinese language analysis, which comprises syntactic parsing and semantic analysis, and gives the design of the interlingua in detail. Experimental results and a system evaluation are given; the results are satisfactory.
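The interlingua architecture described above separates analysis of the source language from generation of the target language via a language-neutral representation. The toy fragment below sketches that pipeline only; the lexicon, frame format and function names are illustrative inventions, not ICENT's actual design.

```python
# Toy illustration of the interlingua idea (not ICENT itself):
# analysis maps a source sentence to a language-neutral
# predicate-argument frame; generation realises the frame in English.

LEXICON = {"我": "I", "爱": "love", "北京": "Beijing"}  # assumed toy lexicon

def analyse(tokens):
    # Assume a simple subject-verb-object sentence for this toy fragment;
    # a real analyser performs full syntactic parsing and semantic analysis.
    subj, verb, obj = tokens
    return {"pred": LEXICON[verb], "agent": LEXICON[subj], "theme": LEXICON[obj]}

def generate(frame):
    # English generation from the interlingua frame.
    return f"{frame['agent']} {frame['pred']} {frame['theme']}"
```

The point of the intermediate frame is that a generator for any target language can consume it without knowing the source language.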

16.
Traditionally, direct marketing companies have relied on pre-testing to select the best offers to send to their audience. Companies systematically dispatch the offers under consideration to a limited sample of potential buyers, rank them with respect to their performance and, based on this ranking, decide which offers to send to the wider population. Though this pre-testing process is simple and widely used, the industry has recently come under increased pressure to further optimize learning, in particular when facing severe time and learning-space constraints. The main contribution of the present work is to demonstrate that direct marketing firms can exploit information on visual content to optimize the learning phase. This paper proposes a two-phase learning strategy based on a cascade of regression methods that takes advantage of visual and text features to improve and accelerate the learning process. Experiments in the domain of a commercial Multimedia Messaging Service (MMS) show the effectiveness of the proposed methods and a significant improvement over traditional learning techniques. The proposed approach can be used in any multimedia direct marketing domain in which offers comprise both a visual and a text component.
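The cascade structure described above can be sketched generically: a cheap first-phase scorer prunes the candidate offers, and a costlier second-phase scorer that also uses visual features re-ranks the survivors. This is a structural sketch under assumed interfaces (the scoring functions stand in for the paper's regression models; all names are mine).

```python
def cascade_rank(offers, cheap_score, rich_score, keep_top):
    """Two-phase cascade (illustrative, not the paper's exact method).

    Phase 1: rank all offers with an inexpensive score (e.g. text
    features only) and keep the best `keep_top`.
    Phase 2: re-rank the shortlist with a costlier score that also
    exploits visual features, and return it best-first.
    """
    shortlist = sorted(offers, key=cheap_score, reverse=True)[:keep_top]
    return sorted(shortlist, key=rich_score, reverse=True)
```

Because the expensive model only sees the shortlist, learning is both faster and cheaper than scoring every offer with the full feature set.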

Sebastiano Battiato   was born in Catania, Italy, in 1972. He received the degree in Computer Science (summa cum laude) in 1995 and his Ph.D. in Computer Science and Applied Mathematics in 1999. From 1999 to 2003 he led the “Imaging” team at STMicroelectronics in Catania. Since 2004 he has worked as a Researcher at the Department of Mathematics and Computer Science of the University of Catania. His research interests include image enhancement and processing, image coding and camera imaging technology. He has published more than 90 papers in international journals, conference proceedings and book chapters, and is co-inventor of about 15 international patents. He is a reviewer for several international journals and has regularly served on international conference committees. He has participated in many international and national research projects. He is an Associate Editor of the SPIE Journal of Electronic Imaging (specialty: digital photography and image compression). He is director of ICVSS (International Computer Vision Summer School). He is a Senior Member of the IEEE. Giovanni Maria Farinella   is currently a contract researcher at the Dipartimento di Matematica e Informatica, University of Catania, Italy (IPLAB research group). He has also been an associate member of the Computer Vision and Robotics Research Group at the University of Cambridge since 2006. His research interests lie in the fields of computer vision, pattern recognition and machine learning. In 2004 he received his degree in Computer Science (egregia cum laude) from the University of Catania. He was awarded a Ph.D. (Computer Vision) from the University of Catania in 2008. He has co-authored several papers in international journals and conference proceedings. He also serves as a reviewer for numerous international journals and conferences. He is currently the co-director of the International Computer Vision Summer School (ICVSS). Giovanni Giuffrida   is an assistant professor at the University of Catania, Italy.
He received a degree in Computer Science from the University of Pisa, Italy in 1988 (summa cum laude), a Master of Science in Computer Science from the University of Houston, Texas, in 1992, and a Ph.D. in Computer Science from the University of California, Los Angeles (UCLA) in 2001. He has extensive experience in both industry and academia: he has served as CTO and CEO in industry and as a consultant for various organizations. His research interest is optimizing content delivery on new media such as the Internet, mobile phones, and digital TV. He has published several papers on data mining and its applications. He is a member of the ACM and the IEEE. Catarina Sismeiro   is a senior lecturer at Imperial College Business School, Imperial College London. She received her Ph.D. in Marketing from the University of California, Los Angeles, and her Licenciatura in Management from the University of Porto, Portugal. Before joining Imperial College, Catarina was an assistant professor at the Marshall School of Business, University of Southern California. Her primary research interests include studying pharmaceutical markets, modeling consumer behavior in interactive environments, and modeling spatial dependencies. Other areas of interest are decision theory, econometric methods, and the use of image and text features to predict the effectiveness of marketing communications tools. Catarina’s work has appeared in numerous marketing and management science conferences. Her research has also been published in the Journal of Marketing Research, Management Science, Marketing Letters, Journal of Interactive Marketing, and International Journal of Research in Marketing. She received the 2003 Paul Green Award and was a finalist for the 2007 and 2008 O’Dell Awards. Catarina was also a 2007 Marketing Science Institute Young Scholar, and she received the D. Antonia Adelaide Ferreira award and the ADMES/MARKTEST award for scientific excellence.
Catarina is currently on the editorial boards of Marketing Science and the International Journal of Research in Marketing. Giuseppe Tribulato   was born in Messina, Italy, in 1979. He received the degree in Computer Science (summa cum laude) in 2004 and his Ph.D. in Computer Science in 2008. Since 2005 he has led the research team at Neodata Group. His research interests include data mining techniques, recommendation systems and customer targeting.

17.
In this paper we describe a machine vision system capable of high-resolution measurement of fluid velocity fields in complex 2D models of rock, providing essential data for validating the numerical models widely applied in the oil and petroleum industries. Digital models incorporating the properties of real rock are first generated, then physically replicated as layers of resin or aluminium (200 mm × 200 mm) encapsulated between transparent plates as a flow cell. This configuration enables the geometry to be permeated with fluid and the fluid motion to be visualised using particle image velocimetry. Fluid velocity fields are then computed using well-tested cross-correlation techniques. Rachel Cassidy is a Research Associate in Geophysics at the University of Ulster. Dr. Cassidy's research interests include percolation theory and its application to fluid flow in fractured rock, the fractal and multifractal properties of natural phenomena, and the development of experimental techniques for investigating fluid flow in porous fractured media with realistic structure exhibiting scale invariance. She is currently involved in the development of molecular tracer techniques for characterising reservoir heterogeneity. Philip Morrow is currently a Senior Lecturer in the School of Computing and Information Engineering at the University of Ulster. Dr. Morrow has a BSc in Applied Mathematics and Computer Science, an MSc in Electronics and a PhD in Parallel Image Processing, all from Queen's University Belfast. His main research interests lie in image processing, computer vision and parallel/distributed computing. He has published over 65 research papers in these areas. John McCloskey is Professor of Geophysics and Head of the School of Environmental Sciences at the University of Ulster. Prof. McCloskey's research interests are in the application of ideas from chaos and complexity to a variety of geophysical problems, including earthquake dynamics and fluid flow in fractured porous rock. He has published over 100 articles and is a regular contributor to the international press on matters connected with earth science.
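The cross-correlation step at the heart of particle image velocimetry can be illustrated in a few lines: the displacement between two interrogation windows is taken as the shift that maximises their correlation. This is a direct (brute-force, integer-pixel) sketch of the principle only; production PIV codes use FFT-based correlation with sub-pixel peak fitting, and the function name is mine.

```python
def piv_displacement(win_a, win_b, max_shift):
    """Estimate the integer-pixel displacement between two interrogation
    windows by direct cross-correlation: try every shift up to
    max_shift and return the one with the highest correlation sum."""
    h, w = len(win_a), len(win_a[0])
    best, best_shift = None, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            s = 0.0
            for y in range(h):
                for x in range(w):
                    y2, x2 = y + dy, x + dx
                    if 0 <= y2 < h and 0 <= x2 < w:
                        s += win_a[y][x] * win_b[y2][x2]
            if best is None or s > best:
                best, best_shift = s, (dy, dx)
    return best_shift  # (row shift, column shift)
```

Repeating this over a grid of windows yields the velocity field, with velocity = displacement divided by the inter-frame time.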

18.
This paper presents the design and implementation of a real-time solution for the global control of robotic highway safety markers. The problems addressed are: (1) poor scalability and predictability as the number of markers increases, (2) jerky movement of the markers, and (3) misidentification of safety markers caused by objects in the environment. An extensive analysis of the system and two solutions are offered: a basic solution and an enhanced solution, built respectively upon two task models: the periodic task model and the variable rate execution (VRE) task model. The former is characterized by four static parameters: phase, period, worst-case execution time and relative deadline. The latter has similar parameters, but the parameter values are allowed to change at arbitrary times. The use of real-time tasks and scheduling techniques solves the first two problems. The third problem is solved using a refined Hough transform algorithm and a horizon scanning window; this approach decreases the time complexity of traditional implementations of the Hough transform with only slightly increased storage requirements. Supported, in part, by grants from the National Science Foundation (CCR-0208619 and CNS-0409382) and the National Academy of Sciences Transportation Research Board-NCHRP IDEA Program (Project #90). Jiazheng Shi received the B.E. and M.E. degrees in electrical engineering from Beijing University of Posts and Telecommunications in 1997 and 2000, respectively. In 2000, he worked with the Global Software Group, Motorola Inc. Currently, he is a Ph.D. candidate in the Computer Science and Engineering Department at the University of Nebraska–Lincoln. His research interests are automated human face recognition, image processing, computer vision, approximation theory, and linear system optimization. Steve Goddard is a J.D. Edwards Associate Professor in the Department of Computer Science & Engineering at the University of Nebraska–Lincoln.
He received the B.A. degree in computer science and mathematics from the University of Minnesota (1985), and the M.S. and Ph.D. degrees in computer science from the University of North Carolina at Chapel Hill (1995, 1998). His research interests are embedded, real-time and distributed systems, with emphases on high-assurance systems engineering and real-time, rate-based scheduling theory. Anagh Lal received a B.S. degree in Computer Science from the University of Mumbai (Bombay) in 2001. He is currently a graduate research assistant at the University of Nebraska–Lincoln working on an M.S. in Computer Science, and a member of the ConSystLab. His research interests lie in databases, constraint processing and real-time systems. Anagh will be graduating soon and is looking for positions at research institutions. Jason Dumpert received a B.S. degree in electrical engineering from the University of Nebraska–Lincoln in 2001 and an M.S. degree in electrical engineering from the University of Nebraska–Lincoln in 2004. He is currently a graduate research assistant at the University of Nebraska–Lincoln working on a Ph.D. in biomedical engineering. His research interests include mobile robotics and surgical robotics. Shane M. Farritor is an Associate Professor in the University of Nebraska–Lincoln's Department of Mechanical Engineering. His research interests include space robotics, surgical robotics, biomedical sensors, and robotics for highway safety. He holds courtesy appointments in both the Department of Surgery and the Department of Orthopaedic Surgery at the University of Nebraska Medical Center, Omaha. He serves on both the AIAA Space Robotics and Automation technical committee and the ASME Dynamic Systems and Control Robotics Panel. He received M.S. and Ph.D. degrees from M.I.T.
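The periodic task model above, with its four static parameters, admits simple admission checks. As a sketch of how such a model is used (a textbook utilisation test under EDF with deadlines equal to periods, not the paper's full analysis of either task model):

```python
def edf_schedulable(tasks):
    """Classic utilisation-based test for periodic tasks under EDF.

    Each task is a tuple (phase, period, wcet, relative_deadline),
    matching the four static parameters of the periodic task model.
    Assuming relative_deadline == period, the task set is schedulable
    under EDF iff total utilisation (sum of wcet/period) <= 1.
    """
    return sum(wcet / period for _, period, wcet, _ in tasks) <= 1.0
```

A marker-control system could run this check before admitting a new marker's control task, which is what gives the basic solution its predictability as the marker count grows. The VRE model requires a more elaborate test, since its parameters may change at arbitrary times.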

19.
A new stick text segmentation method based on sub-connected-area analysis is introduced in this paper. The foundation of the method is the sub-connected-area representation of a text image, which can represent all connected areas in an image efficiently. The method consists of four main steps: sub-connected-area classification, finding an initial boundary-following point, finding the optimal segmentation point by boundary tracing, and text segmentation. The method is similar to the boundary analysis approach but is more efficient.
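The representation underlying the method above starts from the connected areas of a binary text image. As a simplified stand-in for the paper's sub-connected-area representation, the sketch below labels 4-connected components with a breadth-first flood fill; the interface is illustrative, not the authors' data structure.

```python
from collections import deque

def label_components(img):
    """4-connected component labelling on a binary image (list of lists).

    Returns (number of components, label image). Each foreground pixel
    gets the id of its connected area; a segmentation method can then
    classify and trace each area separately.
    """
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] and not labels[sy][sx]:
                next_label += 1
                labels[sy][sx] = next_label
                q = deque([(sy, sx)])
                while q:                      # flood-fill this component
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and img[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            q.append((ny, nx))
    return next_label, labels
```

Boundary tracing and segmentation-point search then operate per labelled area rather than on raw pixels.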

20.
A humanoid robot is flooded with sensed information when perceiving its environment, and it usually needs significant time to compute and process that information. In this paper, a selective attention-based contextual perception approach is proposed that allows humanoid robots to sense the environment with high efficiency. First, the notion of the attention window (AW) is extended to give a more general and abstract definition of the AW, and its four kinds of operations and state transformations are discussed. Second, the attention control policies are described; they integrate intention-guided selection of perceptual objects with distractor inhibition, which filters out unrelated information, and can deal with emergent events. Finally, attention policies are viewed as the robot's perceptual modes, which control and adjust perception efficiency. Experimental results show that the presented approach improves perceptual efficiency significantly, and that the perceptual cost can be effectively controlled by adopting different attention policies.
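The combination of intention-guided selection, distractor inhibition and a bounded perceptual budget can be sketched as a simple filtering policy. This is a schematic illustration of the control flow only; the object format, relevance scores and function name are assumptions, not the paper's API.

```python
def select_attention_windows(objects, intention, inhibit, budget):
    """Illustrative attention policy: keep objects relevant to the
    current intention, drop known distractor kinds, and attend to at
    most `budget` windows per perception cycle (the budget models the
    cost control of different attention policies)."""
    candidates = [o for o in objects if o["kind"] not in inhibit]
    candidates.sort(key=lambda o: o["relevance"].get(intention, 0.0),
                    reverse=True)
    return candidates[:budget]
```

Switching the robot's perceptual mode then amounts to changing `inhibit` and `budget` rather than reprocessing the whole sensory stream.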
