Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
A non-slicing floorplan representation, the Corner Block List (CBL), was presented recently. Since CBL can only represent floorplans without empty rooms, algorithms based on it cannot reach the optimum placement. In this paper, an extended corner block list, ECBLλ, is proposed; it can represent non-slicing floorplans that include empty rooms. Based on the optimum solution theorem of the BSG (bounded-sliceline grid), it is proved that the solution space of ECBLn, where n is the number of blocks, contains the optimum block placement with the minimum area. A placement algorithm based on ECBLλ is implemented, whose solution space can be controlled by setting the extending ratio λ; when λ is set to n, the algorithm based on ECBLn is an optimum placement search algorithm. Experiments show that λ has a reasonable constant range for the building block layout problem, so the algorithm can translate an ECBLλ representation into its corresponding placement in O(n) time. Experimental results on the MCNC benchmarks show promising performance, with a 7% improvement in wire length and a 2% decrease in dead space over algorithms based on CBL. Meanwhile, compared with other algorithms, the proposed algorithm obtains better results with less runtime.
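As an illustration of the two metrics this abstract optimizes, the sketch below computes dead space and half-perimeter wire length for a finished placement; the block coordinates and net pins are hypothetical, and decoding an ECBLλ string into a placement (the paper's actual contribution) is not attempted here.

```python
# A minimal sketch of the two placement metrics the abstract reports: dead
# space and half-perimeter wire length (HPWL). All data below is made up.
def dead_space_ratio(blocks):
    """blocks: list of (x, y, w, h). Dead space = chip area not covered."""
    chip_w = max(x + w for x, y, w, h in blocks)
    chip_h = max(y + h for x, y, w, h in blocks)
    used = sum(w * h for x, y, w, h in blocks)
    return 1 - used / (chip_w * chip_h)

def hpwl(net_pins):
    """Half-perimeter wire length of one net; pins are (x, y) points."""
    xs, ys = zip(*net_pins)
    return (max(xs) - min(xs)) + (max(ys) - min(ys))

blocks = [(0, 0, 2, 3), (2, 0, 3, 2), (2, 2, 1, 1)]
print(dead_space_ratio(blocks), hpwl([(1, 1), (3, 1), (2.5, 2.5)]))
```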

2.
The Bounded Slice-line Grid (BSG) is an elegant representation of block placement: it is intuitive and handles various placement constraints well. However, BSG has attracted little attention because its evaluation is very time-consuming. This paper proposes a simple algorithm, independent of the BSG size, that evaluates the BSG representation in O(n log log n) time, where n is the number of blocks. In the algorithm, the BSG-rooms are first assigned integral coordinates, and then a linear sorting algorithm is applied to the BSG-rooms to which blocks are assigned, producing two block sequences from which the block placement can be obtained in O(n log log n) time. The proposed algorithm is much faster than the previous graph-based O(n^2) algorithm, and the experimental results demonstrate its efficiency.
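The final step described above, deriving a placement from two block sequences, can be illustrated with a sequence-pair-style decoding. The sketch below uses a plain O(n^2) longest-path pass rather than the paper's O(n log log n) machinery; all block names and widths are made up.

```python
# Minimal sketch (not the paper's O(n log log n) routine): given two block
# sequences s1, s2 and block widths, compute each block's x-coordinate.
# Block a is left of block b iff a precedes b in BOTH sequences.
def x_coords_from_sequences(s1, s2, width):
    pos2 = {b: i for i, b in enumerate(s2)}  # position of each block in s2
    x = {}
    for i, b in enumerate(s1):
        # b's left edge = max right edge over blocks preceding b in both sequences
        x[b] = max((x[a] + width[a] for a in s1[:i] if pos2[a] < pos2[b]),
                   default=0)
    return x

# Example (hypothetical data):
print(x_coords_from_sequences(["A", "B", "C"], ["B", "A", "C"],
                              {"A": 2, "B": 3, "C": 1}))
# {'A': 0, 'B': 0, 'C': 3}
```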

3.
A Model for Slicing JAVA Programs Hierarchically
Program slicing can be used effectively to debug, test, analyze, understand, and maintain object-oriented software. In this paper, a new slicing model is proposed that slices Java programs based on their inherent hierarchical structure. The main idea of hierarchical slicing is to slice programs stepwise: from the package level to the class level, the method level, and finally the statement level. The stepwise slicing algorithm and the related graph-reachability algorithms are presented, the architecture of the Java program Analyzing TOol (JATO), which is based on the hierarchical slicing model, is described, and applications and a small case study are discussed.
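Stepwise slicing can be pictured as graph reachability repeated at successively finer levels. The following minimal sketch assumes per-level dependence graphs; the node names and edges are hypothetical, not JATO's actual data structures.

```python
# A minimal sketch of stepwise slicing as graph reachability, one dependence
# graph per level (package -> class -> method -> statement).
from collections import deque

def slice_backward(deps, criterion):
    """Nodes that the criterion (transitively) depends on."""
    seen, todo = {criterion}, deque([criterion])
    while todo:
        for m in deps.get(todo.popleft(), ()):
            if m not in seen:
                seen.add(m)
                todo.append(m)
    return seen

# Level 1: package-level dependences; only packages in this slice are expanded.
pkg_deps = {"app": ["util", "io"], "util": [], "io": []}
pkg_slice = slice_backward(pkg_deps, "app")

# Level 2: class-level dependences, restricted to classes of sliced packages.
cls_deps = {"app.Main": ["util.Str"], "util.Str": []}
cls_slice = slice_backward(cls_deps, "app.Main")
print(pkg_slice, cls_slice)
```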

4.
5.
This paper presents an edge detection method based on mathematical morphology. The proposed scheme consists of four steps: preprocessing, edge extraction, edge decision, and postprocessing. In the preprocessing step, a morphological central transformation is applied to remove noise. In the edge extraction and decision steps, a morphological edge extractor estimates the edge information of an image, and an edge decision criterion determines whether a pixel is an edge. In the postprocessing step, the morphological hit-or-miss transformation is used to improve the correctness of the detected edges. The correctness and effectiveness of the method for detecting ideal edges are proved theoretically. Experimental results show that the proposed method works well on both artificial and real images. The text was submitted by the authors in English. Chin-Pan Huang was born in 1959 in Taiwan, Republic of China. He received the B.S. and M.S. degrees in electrical engineering from Chung Cheng Institute of Technology, Taiwan, in 1981 and 1985, respectively. In 1996, he received the Ph.D. degree in electrical engineering from the University of Pittsburgh in the United States. From 1996 to 2002, he was an associate scientist in the Electronic System Division of the Chung Shan Institute of Science and Technology. He joined the Department of Computer and Communication Engineering at Ming Chuan University in August 2002 and is currently an assistant professor there. His recent research interests include data compression, computer vision, digital image processing, and pattern recognition. Ran-Zan Wang was born in 1972 in Fukien, Republic of China. He received his B.S. degree in computer engineering and science in 1994 and his M.S. degree in electrical engineering and computer science in 1996, both from Yuan-Ze University. In 2001, he received his Ph.D. degree in computer and information science from National Chiao Tung University. In 2001-2002, he was an assistant professor in the Department of Computer Engineering at the Van Nung Institute of Technology. He joined the Department of Computer and Communication Engineering at Ming Chuan University in August 2002 and is currently an assistant professor there. His recent research interests include data hiding and digital watermarking, image processing, and pattern recognition. Dr. Wang is a member of the Phi Tau Phi Scholastic Honor Society.
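For a concrete flavour of the extract/decide steps, the sketch below uses the classic morphological gradient (dilation minus erosion) as the edge extractor and a global threshold as the edge decision; the paper's central transform and hit-or-miss postprocessing are more elaborate than this.

```python
# A minimal sketch of morphological edge extraction + decision, assuming the
# standard morphological gradient and a fixed threshold (toy parameters).
import numpy as np
from scipy.ndimage import grey_dilation, grey_erosion

def morph_edges(img, thresh=30, size=3):
    grad = grey_dilation(img, size=(size, size)) - grey_erosion(img, size=(size, size))
    return grad > thresh  # edge decision: True where gradient exceeds threshold

img = np.zeros((8, 8), dtype=np.int32)
img[2:6, 2:6] = 100                      # a bright square on a dark background
print(morph_edges(img).astype(int))
```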

6.
Retiming is a technique for optimizing sequential circuits. In this paper, we discuss this problem and propose an improved retiming algorithm based on variable bounding. By computing lower and upper bounds on the variables, the algorithm can significantly reduce the number of constraints and speed up retiming. Furthermore, the elements of the matrices D and W are computed in a demand-driven way, which reduces memory usage. Experimental results on the ISCAS89 benchmarks show that our algorithm is very effective for large-scale sequential circuits.
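At its core, retiming reduces to solving a system of difference constraints of the form r(u) - r(v) <= w(e). The sketch below solves such a system with Bellman-Ford; the variable bounds and the demand-driven W/D computation that the paper contributes are omitted, and the example constraints are made up.

```python
# A minimal sketch of the constraint core of retiming: find integer labels
# r(v) satisfying r(u) - r(v) <= w for each constraint via Bellman-Ford on
# the constraint graph with a virtual source.
def solve_difference_constraints(nodes, edges):
    """edges: list of (u, v, w) meaning r(u) - r(v) <= w."""
    dist = {v: 0 for v in nodes}          # virtual source at distance 0
    for _ in range(len(nodes) - 1):
        for u, v, w in edges:
            if dist[v] + w < dist[u]:
                dist[u] = dist[v] + w
    for u, v, w in edges:
        if dist[v] + w < dist[u]:
            return None                   # negative cycle: infeasible
    return dist                           # a feasible retiming assignment

print(solve_difference_constraints(["a", "b", "c"],
                                   [("a", "b", 1), ("b", "c", 0), ("c", "a", 0)]))
```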

7.
This paper defines second-order and third-order permutation global functions and presents the corresponding higher-order cellular automaton approach to hyper-parallel lossless (undistorted) data compression. A genetic algorithm is successfully applied to find all the correct local compression rules for the higher-order cellular automaton. The correctness of the higher-order compression rules, the time complexity, and the systolic hardware implementation are discussed. Compared with the previously reported first-order automaton method, the proposed higher-order approach compresses much faster with almost the same degree of cellular-structure complexity for hardware implementation.
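To make "higher-order" concrete: in a second-order cellular automaton, a cell's next state depends on the current neighbourhood and on the cell's previous state. The sketch below steps an arbitrary reversible toy rule (second-order rule 90), not one of the paper's evolved compression rules.

```python
# A minimal sketch of one step of a second-order 1-D cellular automaton:
# next(i) = left(i) XOR right(i) XOR previous(i), on a ring of cells.
def step_second_order(prev, curr):
    n = len(curr)
    nxt = [curr[(i - 1) % n] ^ curr[(i + 1) % n] ^ prev[i] for i in range(n)]
    return curr, nxt  # shift the two-step time window forward

prev, curr = [0, 0, 0, 0, 0, 0], [0, 0, 1, 0, 0, 0]
for _ in range(3):
    prev, curr = step_second_order(prev, curr)
print(curr)
```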

8.
In this paper, the 1-D real-valued discrete Gabor transform (RDGT) proposed in previous work and its relationship to the complex-valued discrete Gabor transform (CDGT) are briefly reviewed. Block time-recursive RDGT algorithms are developed for the efficient and fast computation of the 1-D RDGT coefficients and for the fast reconstruction of the original signal from those coefficients, in both the critical-sampling and oversampling cases. Unified parallel lattice structures for implementing the algorithms are studied. A computational complexity analysis and comparison show that the proposed algorithms provide a more efficient and faster approach to computing discrete Gabor transforms.

9.
This paper proposes the use of more than one clustering method to improve clustering performance. Clustering is an optimization procedure based on a specific clustering criterion, so clustering combination can be regarded as a technique that constructs and processes multiple clustering criteria. Since global and local clustering criteria are complementary rather than competitive, combining the two types may enhance clustering performance. In our past work, a simultaneous clustering combination algorithm based on multi-objective programming was proposed, which incorporates multiple criteria into one objective function by a weighting method and solves the problem with constrained nonlinear optimization; however, this algorithm has high computational complexity. Here a sequential combination approach is investigated, which first uses global-criterion-based clustering to produce an initial result and then uses local-criterion-based information to improve that result with a probabilistic relaxation algorithm or a linear additive model. Compared with simultaneous combination, sequential combination has low computational complexity. Results on simulated data and standard test data are reported; it appears that clustering performance can be improved at low cost through sequential combination.
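A minimal sketch of sequential combination follows, assuming k-means as the global-criterion step and a one-pass nearest-neighbour majority vote as the local refinement; the paper's probabilistic relaxation and linear additive model are richer than this.

```python
# Sequential clustering combination sketch: global step (k-means), then a
# local step that relabels each point by majority vote among its neighbours.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)  # global step

# Local step: majority vote among each point's k nearest neighbours.
_, idx = NearestNeighbors(n_neighbors=6).fit(X).kneighbors(X)
refined = np.array([np.bincount(labels[nb]).argmax() for nb in idx])
print((labels != refined).sum(), "labels changed by local refinement")
```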

10.
This paper presents a test resource partitioning technique based on an efficient response compaction design called the quotient compactor (q-Compactor). Because the q-Compactor is a single-output compactor, high compaction ratios can be obtained even for chips with a small number of outputs. Theorems for the design of the q-Compactor are presented to achieve full diagnostic ability, minimize error cancellation, and handle unknown bits in the outputs of the circuit under test (CUT). The q-Compactor can also be moved to the load-board so as to compact the output response of the CUT even during functional testing; the number of tester channels required to test the chip is therefore significantly reduced. Experimental results on the ISCAS '89 benchmark circuits and an MPEG-2 decoder SoC show that the proposed compaction scheme is very efficient.
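Quotient-style compaction can be pictured as GF(2) polynomial division: the response stream is divided by a characteristic polynomial, and the quotient bits form the single compacted output. The divisor polynomial and response bits below are made up, and the paper's q-Compactor design details are not modeled.

```python
# A minimal sketch of quotient compaction as GF(2) polynomial long division.
def quotient_compact(response_bits, poly):  # poly: taps, MSB-first, poly[0] == 1
    state = list(response_bits)
    quotient = []
    for i in range(len(state) - len(poly) + 1):
        q = state[i]
        quotient.append(q)
        if q:  # subtract (XOR in GF(2)) the polynomial aligned at position i
            for j, p in enumerate(poly):
                state[i + j] ^= p
    return quotient  # remainder is left in the tail of `state`

print(quotient_compact([1, 0, 1, 1, 0, 0, 1], [1, 0, 1, 1]))  # divide by x^3+x+1
```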

11.
In this paper, a novel technique adopted in HarkMan is introduced. HarkMan is a keyword-spotter designed to automatically spot given words of a vocabulary-independent task in unconstrained Chinese telephone speech; neither the speaking manner nor the number of keywords is limited. The paper focuses on the novel techniques addressing acoustic modeling, the keyword-spotting network, search strategies, robustness, and rejection. The underlying technologies used in HarkMan are useful not only for keyword spotting but also for continuous speech recognition. The system has achieved a figure-of-merit value over 90%.

12.
Printed Arabic character recognition using HMM
The Arabic language has a very rich vocabulary. More than 200 million people speak it as their native language, and over 1 billion people use it in several religion-related activities. This paper presents a new technique for recognizing printed Arabic characters. After a word is segmented, each character/word is transformed into a feature vector. The features of printed Arabic characters include strokes and bays in various directions, endpoints, intersection points, loops, dots, and zigzags. The word skeleton is decomposed into a number of links in orthographic order and then transformed into a sequence of symbols using vector quantization. A single hidden Markov model is used to recognize the printed Arabic characters. Experimental results show that a high recognition rate depends on the number of states in each sample.
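The recognition step, scoring a vector-quantized symbol sequence against a character's HMM, can be sketched with a self-contained Viterbi pass; all model parameters below are toy values, not the paper's trained models.

```python
# A minimal Viterbi sketch: best state-path log-likelihood of an observation
# sequence under one character model (pi, A, B); recognition would pick the
# character whose model scores highest.
import numpy as np

def viterbi_log_score(obs, log_pi, log_A, log_B):
    d = log_pi + log_B[:, obs[0]]
    for o in obs[1:]:
        d = np.max(d[:, None] + log_A, axis=0) + log_B[:, o]
    return d.max()

log = np.log
pi = np.array([0.9, 0.1]); A = np.array([[0.7, 0.3], [0.0, 1.0]])
B = np.array([[0.8, 0.2], [0.3, 0.7]])           # 2 states x 2 VQ symbols
obs = [0, 0, 1, 1]                               # vector-quantized stroke codes
with np.errstate(divide="ignore"):               # allow log(0) -> -inf
    print(viterbi_log_score(obs, log(pi), log(A), log(B)))
```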

13.
This paper investigates the robust H∞ filtering problem for uncertain two-dimensional (2-D) systems described by the Roesser model. The parameter uncertainties considered are assumed to be of polytopic type. A new structured polynomially parameter-dependent method is utilized, based on homogeneous polynomially parameter-dependent matrices of arbitrary degree. The proposed method includes results in the quadratic framework and the linearly parameter-dependent framework as special cases, for degree zero and degree one, respectively. A numerical example illustrates the feasibility and advantage of the proposed filter design methods.

14.
In mobile database systems, the mobility of users has a significant impact on data replication; as a result, the replica control protocols used in traditional distributed and multidatabase environments are no longer suitable. To solve this problem, this paper puts forward a new mobile database replication scheme, the Transaction-Level Result-Set Propagation (TLRSP) model. The conflict detection and resolution strategy based on TLRSP is discussed in detail, and the implementation algorithm is proposed. To compare the performance of the TLRSP model with that of other mobile replication schemes, we have developed a detailed simulation model. Experimental results show that TLRSP provides efficient support for replicated mobile database systems by reducing reprocessing overhead while maintaining database consistency.
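The heart of transaction-level conflict detection can be pictured as intersecting a mobile transaction's result set with the updates committed at the server during disconnection. The sketch below is only that intersection test, with made-up item ids; TLRSP's timestamping and resolution rules are richer.

```python
# A minimal sketch of write/write conflict detection between a mobile host's
# result set and updates committed at the server while it was disconnected.
def detect_conflicts(mobile_writes, server_committed_writes):
    """Each argument maps item id -> new value; returns conflicting item ids."""
    return sorted(set(mobile_writes) & set(server_committed_writes))

mobile = {"acct:17": 120, "acct:42": 300}   # hypothetical mobile result set
server = {"acct:42": 280, "acct:99": 10}    # committed while disconnected
print(detect_conflicts(mobile, server))     # ['acct:42'] must be reconciled
```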

15.
This paper is devoted to a new algebraic modelling approach to distributed problem-solving in multi-agent systems (MAS), featuring a unified framework for describing and treating social behaviors, social dynamics, and social intelligence. A conceptual architecture for algebraic modelling is presented, and the algebraic modelling of typical social behaviors, social situations, and social dynamics is discussed in the context of distributed problem-solving in MAS. Comparison and simulation on distributed task allocation and resource assignment in MAS show that the algebraic approach has more advantages than other conventional methods.

16.
The Multi-Agent Distributed Goal Satisfaction (MADGS) system facilitates distributed mission planning and execution in complex dynamic environments, with a focus on distributed goal planning and satisfaction and mixed-initiative interactions with the human user. By understanding the fundamental technical challenges faced by our commanders on and off the battlefield, we can help ease the burden of decision-making. MADGS lays the foundations for retrieving, analyzing, synthesizing, and disseminating information to commanders. In this paper, we present an overview of the MADGS architecture and discuss the key components that formed our initial prototype and testbed. Eugene Santos, Jr. received the B.S. degree in mathematics and computer science and the M.S. degree in mathematics (specializing in numerical analysis) from Youngstown State University, Youngstown, OH, in 1985 and 1986, respectively, and the Sc.M. and Ph.D. degrees in computer science from Brown University, Providence, RI, in 1988 and 1992, respectively. He is currently a Professor of Engineering at the Thayer School of Engineering, Dartmouth College, Hanover, NH, and Director of the Distributed Information and Intelligence Analysis Group (DI2AG). Previously, he was on the faculty of the Air Force Institute of Technology, Wright-Patterson AFB, and the University of Connecticut, Storrs, CT. He has over 130 refereed technical publications and specializes in modern statistical and probabilistic methods with applications to intelligent systems, multi-agent systems, uncertain reasoning, planning and optimization, and decision science. Most recently, he has pioneered new research on user and adversarial behavioral modeling. He is an Associate Editor for the IEEE Transactions on Systems, Man, and Cybernetics: Part B and the International Journal of Image and Graphics. Scott DeLoach is currently an Associate Professor in the Department of Computing and Information Sciences at Kansas State University. His current research interests include autonomous cooperative robotics, adaptive multiagent systems, and agent-oriented software engineering. Prior to coming to Kansas State, Dr. DeLoach spent 20 years in the US Air Force, his last assignment being Assistant Professor of Computer Science and Engineering at the Air Force Institute of Technology. Dr. DeLoach received his BS in Computer Engineering from Iowa State University in 1982 and his MS and PhD in Computer Engineering from the Air Force Institute of Technology in 1987 and 1996. Michael T. Cox is a senior scientist in the Intelligent Distributed Computing Department of BBN Technologies, Cambridge, MA. Prior to this position, Dr. Cox was an assistant professor in the Department of Computer Science & Engineering at Wright State University, Dayton, Ohio, where he was the director of Wright State's Collaboration and Cognition Laboratory. He received his Ph.D. in Computer Science from the Georgia Institute of Technology, Atlanta, in 1996 and his undergraduate degree from the same institution in 1986. From 1996 to 1998, he was a postdoctoral fellow in the Computer Science Department at Carnegie Mellon University, Pittsburgh, working on the PRODIGY project. His research interests include case-based reasoning, collaborative mixed-initiative planning, intelligent agents, understanding (situation assessment), introspection, and learning. More specifically, he is interested in how goals interact with and influence these broader cognitive processes.
His approach to research follows both artificial intelligence and cognitive science directions.

17.
In this paper, region features and relevance feedback are used to improve the performance of content-based image retrieval (CBIR). Unlike existing region-based approaches, where either individual regions are used or only a simple spatial layout is modeled, the proposed approach simultaneously models both region properties and their spatial relationships in a probabilistic framework. Furthermore, retrieval performance is improved by adaptive-filter-based relevance feedback. To illustrate the performance of the proposed approach, extensive experiments have been carried out on a large heterogeneous collection of 17,000 images, yielding promising results on a wide variety of queries.
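Relevance feedback of the kind described above can be sketched as a Rocchio-style query update in feature space; the paper's adaptive-filter formulation is not reproduced here, and all vectors below are made up.

```python
# A minimal sketch of relevance feedback as a Rocchio-style update: move the
# query vector toward user-marked relevant images and away from irrelevant ones.
import numpy as np

def rocchio_update(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    q = alpha * query
    if len(relevant):
        q += beta * np.mean(relevant, axis=0)
    if len(irrelevant):
        q -= gamma * np.mean(irrelevant, axis=0)
    return q

query = np.array([0.2, 0.5, 0.1])                     # initial feature-space query
rel = np.array([[0.3, 0.6, 0.1], [0.25, 0.55, 0.2]])  # marked relevant
irr = np.array([[0.9, 0.1, 0.0]])                     # marked irrelevant
print(rocchio_update(query, rel, irr))
```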

18.
Multicast offers an efficient means of distributing video content/programs to multiple clients by batching their requests and having them share a server's video stream. Batching customers' requests is either client-initiated or server-initiated; most advanced client-initiated video multicast is implemented by patching, while periodic broadcast, a typical server-initiated approach, can be entirety-based or segment-based. This paper focuses on the performance of VoD service for popular videos. First, we analyze the limitation of conventional patching when the customer request rate is high. Then, by combining the advantages of the two broadcast schemes, we propose a hybrid broadcast scheme for popular videos that not only lowers service latency but also improves clients' interactivity through an active buffering technique. This is shown to be a good compromise between lowering service latency and improving VCR-like interactivity.
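To see why segment-based periodic broadcast cuts latency, consider a pyramid-style scheme in which segment sizes double per channel (a common textbook scheme; the paper's hybrid segmentation may differ). Worst-case startup delay is just the first segment's length, as the sketch below computes.

```python
# A minimal sketch: with k channels and segment lengths L/(2^k - 1) * 2^i,
# the worst-case startup wait equals the first (shortest) segment's length.
def startup_delay(video_minutes, channels):
    return video_minutes / (2 ** channels - 1)

for k in (1, 3, 5):
    print(f"{k} channels -> {startup_delay(120, k):.1f} min worst-case wait")
# 1 channel -> 120.0 min, 3 channels -> 17.1 min, 5 channels -> 3.9 min
```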

19.
20.
In this paper we introduce the logic programming language Disjunctive Chronolog, which combines the programming paradigms of temporal and disjunctive logic programming. Disjunctive Chronolog can express both dynamic behaviour and uncertainty, two notions that are very common in a variety of real systems. We present the minimal temporal model semantics and the fixpoint semantics for the new language and demonstrate their equivalence. We also show how proof procedures developed for disjunctive logic programs can easily be extended to Disjunctive Chronolog programs. Manolis Gergatsoulis, Ph.D.: He received his B.Sc. in Physics in 1983 and his M.Sc. and Ph.D. degrees in Computer Science in 1986 and 1995, respectively, all from the University of Athens, Greece. Since 1996 he has been a Research Associate in the Institute of Informatics and Telecommunications, NCSR 'Demokritos', Athens. His research interests include logic and temporal programming, program transformation and synthesis, and the theory of programming languages. Panagiotis Rondogiannis, Ph.D.: He received his B.Sc. from the Department of Computer Engineering and Informatics, University of Patras, Greece, in 1989, and his M.Sc. and Ph.D. from the Department of Computer Science, University of Victoria, Canada, in 1991 and 1994, respectively. From 1995 to 1996 he served in the Greek army. From 1996 to 1997 he was a visiting professor in the Department of Computer Science, University of Ioannina, Greece, and since 1997 he has been a Lecturer in the same department. In January 2000 he was elected Assistant Professor in the Department of Informatics at the University of Athens. His research interests include functional, logic, and temporal programming, as well as the theory of programming languages. Themis Panayiotopoulos, Ph.D.: He received his Diploma in Electrical Engineering from the National Technical University of Athens in 1984, and his Ph.D. on Artificial Intelligence from the same department in 1989. From 1991 to 1994 he was a visiting professor at the Department of Mathematics, University of the Aegean, Samos, Greece, and a Research Associate at the Institute of Informatics and Telecommunications of the "Democritos" National Research Center. Since 1995 he has been an Assistant Professor at the Department of Computer Science, University of Piraeus. His research interests include temporal programming, logic programming, expert systems, and intelligent agent architectures.
