Similar Documents
20 similar documents found.
1.
A hyperclique is an itemset whose items are strongly correlated with each other, based on a user-specified threshold. Hypercliques (HC) have been successfully used in a number of applications, for example clustering and noise removal. Even though the HC collection has been shown to respond well to datasets with skewed support distributions and low support thresholds, it may still grow very large for dense datasets and low h-confidence thresholds. Recently, we proposed a condensed representation of HC, the Non-Derivable Hypercliques (NDHC). NDHC was shown to have substantial advantages over HC, especially for dense datasets and lower h-confidence values. In this paper, we propose an approximation of the NDHC collection, called Approximate Non-Derivable Hypercliques (ANDHC). This collection is a subset of NDHC and is generated subject to a user-specified error tolerance. We present an efficient algorithm to mine all ANDHC sets, and an algorithm to approximately derive the HC collection from the ANDHC collection. Through experiments with several datasets and parameter combinations, the ANDHC collection is shown to be significantly smaller than NDHC even for small error values. We also show that a good approximation of the entire HC collection can be generated from ANDHC. Finally, we apply our proposed collections to enhance clustering, with encouraging results.
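As a concrete illustration (not the paper's mining algorithm), the Python sketch below spells out the h-confidence measure that defines hypercliques: the support of an itemset divided by the largest support of any single item in it. The naive enumeration is exponential and omits the anti-monotonicity pruning that practical miners depend on.

```python
from itertools import combinations

def support(itemset, transactions):
    # fraction of transactions (sets of items) containing the whole itemset
    return sum(itemset <= t for t in transactions) / len(transactions)

def h_confidence(itemset, transactions):
    # h-confidence(P) = supp(P) / max_{i in P} supp({i})
    return support(itemset, transactions) / max(
        support(frozenset([i]), transactions) for i in itemset)

def hypercliques(items, transactions, hc):
    # naive enumeration of all itemsets meeting the h-confidence threshold
    for size in range(2, len(items) + 1):
        for combo in combinations(sorted(items), size):
            p = frozenset(combo)
            if support(p, transactions) > 0 and h_confidence(p, transactions) >= hc:
                yield p

# e.g. list(hypercliques({'a', 'b', 'c'},
#                        [{'a', 'b'}, {'a', 'b', 'c'}, {'c'}], 0.5))
```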

2.
While hype around the benefits of ‘cloud computing’ increases, challenges in maintaining data security and data privacy have also been recognised as significant vulnerabilities (Ristenpart et al. in Proceedings of the 14th ACM conference on computer and communications security, pp 103–115, 2009; Pearson in CLOUD’09, pp 44–52, 2009; Vouk in J Comput Inf Technol 4:235–246, 2008). These vulnerabilities generate a range of questions relating to the capacity of organisations relying on cloud solutions to effectively manage risk. This has become particularly the case as the threats faced by organisations have moved increasingly away from indiscriminate malware to more targeted cyber-attack tools. From a forensic computing perspective, it has also been recognised that ‘cloud solutions’ pose additional challenges for forensic computing specialists, including discoverability and chain of evidence (Ruan et al. in Adv Digital Forensics VII:35–46, 2011; Reilly et al. in Int J Multimedia Image Process 1:26–34, 2011). However, to date there has been little consideration of how the differences between indiscriminate malware and targeted cyber-attack tools further problematize the capacity of organisations to manage risk. This paper considers these risks and differentiates between the technical, legal and ethical dilemmas they pose. It also highlights the need for organisations to be aware of these issues when deciding to move to cloud solutions.

3.
The routing of traffic between Internet domains, or Autonomous Systems (ASes), a task known as interdomain routing, is currently handled by the Border Gateway Protocol (BGP, Rekhter and Li in RFC 4271 of the Internet Engineering Task Force, 2006). Using BGP, ASes can apply semantically rich routing policies to choose interdomain routes in a distributed fashion. This expressiveness in routing-policy choice supports domains' autonomy in network operations and in business decisions, but it comes at a price: The interaction of locally defined routing policies can lead to unexpected global anomalies, including route oscillations or overall protocol divergence (see, e.g., Varadhan et al. in Comput Networks 32(1):1–16, 2000). Networking researchers have addressed this problem by devising constraints on policies that guarantee BGP convergence without unduly limiting expressiveness and autonomy (see, e.g., Gao and Rexford in IEEE/ACM Trans Network 9(6):681–692, 2001; Griffin et al. in Proceedings of 9th ACM Conference on Applications, Technologies, Architectures, and Protocols for Computer Communication (SIGCOMM'03), pp. 61–72. ACM Press, New York, 2003). In addition to taking this engineering or "protocol-design" approach, researchers have approached interdomain routing from an economic or "mechanism-design" point of view. It is known that lowest-cost-path (LCP) routing can be implemented in an incentive-compatible, BGP-compatible manner (Feigenbaum et al. in Distrib Comput 18(1):61–72, 2005; Shneidman and Parkes in Proceedings of 23rd ACM Symposium on Principles of Distributed Computing (PODC'04), pp. 88–97. ACM Press, New York, 2004) but that several other natural classes of policies cannot (Feigenbaum et al. in Theor Comput Sci 378(2):175–189, 2007; Feigenbaum et al. in Distrib Comput 18(4):293–305, 2006). In this paper, we present the first example of a class of interdomain-routing policies that is more general than LCP routing and for which BGP itself is both incentive-compatible and guaranteed to converge. We also present several steps toward a general theory of incentive-compatible, BGP-compatible interdomain routing.
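To make the LCP class concrete, here is a hedged Python sketch (our illustration, not the paper's mechanism) of BGP-style route computation under lowest-cost-path policies: every AS repeatedly adopts the cheapest loop-free route advertised by a neighbour, with the route cost being the sum of assumed per-AS transit costs. The synchronous rounds simplify BGP's asynchronous exchange.

```python
def lcp_routes(graph, transit_cost, dest):
    # graph: AS -> iterable of neighbouring ASes (links are bidirectional)
    # transit_cost: AS -> assumed cost of carrying traffic through that AS
    # Synchronous Bellman-Ford-style emulation of BGP route exchange:
    # each round, every AS adopts the cheapest loop-free route that its
    # neighbours advertised in the previous round.
    best = {dest: (0, (dest,))}
    for _ in range(len(graph)):  # |V| rounds suffice with positive costs
        prev = dict(best)
        for u in graph:
            if u == dest:
                continue
            offers = [(prev[v][0] + transit_cost[v], (u,) + prev[v][1])
                      for v in graph[u]
                      if v in prev and u not in prev[v][1]]
            if offers:
                best[u] = min(offers)
    return best  # AS -> (path cost, chosen AS path to dest)

# e.g. lcp_routes({'a': ['b'], 'b': ['a', 'd'], 'd': ['b']},
#                 {'a': 1, 'b': 2, 'd': 0}, 'd')
```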

4.
An ongoing line of research has shown super-polynomial lower bounds for uniform and slightly-non-uniform small-depth threshold and arithmetic circuits (Allender, in Chicago J. Theor. Comput. Sci. 1999(7), 1999; Koiran and Perifel, in Proceedings of the 24th Annual IEEE Conference on Computational Complexity (CCC 2009), pp. 35–40, 2009; Jansen and Santhanam, in Proceedings of the 38th International Colloquium on Automata, Languages and Programming (ICALP 2011), I, pp. 724–735, 2011). We give a unified framework that captures and improves each of the previous results. Our main results are that Permanent does not have threshold circuits of the following kinds.
  1. Depth O(1), n^{o(1)} bits of non-uniformity, and size n^{O(1)}.
  2. Depth O(1), polylog(n) bits of non-uniformity, and size s(n) such that for all constants c the c-fold composition of s, s^{(c)}(n), is less than 2^n.
  3. Depth o(log log n), polylog(n) bits of non-uniformity, and size n^{O(1)}.
(1) strengthens a result of Jansen and Santhanam (Jansen and Santhanam, in Proceedings of the 38th International Colloquium on Automata, Languages and Programming (ICALP 2011), I, pp. 724–735, 2011), who obtained similar parameters but for arithmetic circuits of constant depth rather than Boolean threshold circuits. (2) and (3) strengthen results of Allender (Allender, in Chicago J. Theor. Comput. Sci. 1999(7), 1999) and Koiran and Perifel (Koiran and Perifel, in Proceedings of the 24th Annual IEEE Conference on Computational Complexity (CCC 2009), pp. 35–40, 2009), respectively, who obtained results with similar parameters but for completely uniform circuits. Our main technical contribution is to simplify and unify earlier proofs in this area, and to adapt the proofs to handle some amount of non-uniformity. We also develop a notion of circuits with a small amount of non-uniformity that naturally interpolates between fully uniform and fully non-uniform circuits. We use this notion, which we term weak uniformity, rather than the earlier and essentially equivalent notion of succinctness used by Jansen and Santhanam, because weak uniformity interpolates more fully and easily between full uniformity and full non-uniformity of circuits.

5.
Danvy's functional unparsing problem (Danvy in J. Funct. Program. 8(6), 621–625, 1998) is to implement a type-safe "printf" function, which converts a sequence of heterogeneous arguments to a string according to a given format. The dual problem is to implement a type-safe "scanf" function, which extracts a sequence of heterogeneous arguments from a string by interpreting (Friedman and Wand in LFP, pp. 348–355, 1984 and in Essentials of Programming Languages, MIT Press, 2008) the same format as an equally heterogeneous sequence of patterns that binds zero or more variables. We derive multiple solutions to both problems (Wand in J. ACM 27(1), 164–180, 1980) from their formal specifications (Wand in Theor. Comput. Sci. 20(1), 3–32, 1982). On one hand, our solutions show how the Hindley-Milner type system, unextended, permits accessing heterogeneous sequences with the static assurance of type safety. On the other hand, our solutions demonstrate the use of control operators (Felleisen et al. in Proceedings of the 1988 ACM Conference on Lisp and Functional Programming, pp. 52–62, ACM Press, New York, 1988; Wand in POPL 85: Conference Record of the Annual ACM Symposium on Principles of Programming Languages, vol. 16, ACM Press, New York, 1985; Meyer and Wand in Logics of Programs, Lecture Notes in Computer Science, vol. 193, pp. 219–224, Springer, Berlin, 1985) to communicate with formats as coroutines (Wand in Proceedings of the 1980 ACM Conference on Lisp and Functional Programming, vol. 12, pp. 285–299, ACM Press, New York, 1980 and Haynes et al. in LFP, pp. 293–298, 1984).
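The continuation-passing structure underlying Danvy's solution can be sketched even without static types. The Python below is an illustration of that structure only (Python cannot supply the Hindley-Milner type-safety guarantee that is the point of the paper): format directives are plain functions, and their composition determines how many arguments printf awaits.

```python
def printf(fmt):
    # run the format against the identity continuation and an empty
    # accumulator; the result either is the string or awaits arguments
    return fmt(lambda acc: acc)("")

def lit(text):
    # literal directive: append fixed text, then continue
    return lambda k: lambda acc: k(acc + text)

def num(k):
    # numeric directive: yields a function awaiting one integer argument
    return lambda acc: lambda n: k(acc + str(n))

def seq(f, g):
    # sequential composition of two directives
    return lambda k: f(g(k))

# printf(lit("hi")) == "hi"
# printf(seq(lit("x = "), num))(42) == "x = 42"
```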

6.
The problems studied in this article originate from the Graph Motif problem introduced by Lacroix et al. (IEEE/ACM Trans. Comput. Biol. Bioinform. 3(4):360–368, 2006) in the context of biological networks. The problem is to decide whether a vertex-colored graph has a connected subgraph whose colors equal a given multiset of colors M. It is a variant of graph pattern matching in which the structure of the pattern's occurrence is not of interest; the only requirement is connectedness. Using an algebraic framework recently introduced by Koutis (Proceedings of the 35th International Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science, vol. 5125, pp. 575–586, 2008) and Koutis and Williams (Proceedings of the 36th International Colloquium on Automata, Languages and Programming (ICALP), Lecture Notes in Computer Science, vol. 5555, pp. 653–664, 2009), we obtain new FPT algorithms for Graph Motif and variants, with improved running times. We also obtain results on the counting versions of this problem, proving that the counting problem is FPT if M is a set, but becomes #W[1]-hard if M is a multiset with two colors. Finally, we present an experimental evaluation of this approach on real datasets, showing that its performance compares favorably with existing software.
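For concreteness, here is a brute-force Python sketch of the decision version (ours, exponential in the number of vertices; the paper's algebraic FPT algorithms confine the exponential dependence to the size of M):

```python
from itertools import combinations
from collections import Counter

def has_motif(graph, colour, motif):
    # graph: vertex -> iterable of neighbours; colour: vertex -> colour
    # motif: Counter giving the multiset of colours M
    k = sum(motif.values())
    for subset in combinations(graph, k):
        if Counter(colour[v] for v in subset) != motif:
            continue
        # check connectedness of the chosen vertices by DFS
        chosen, seen, stack = set(subset), {subset[0]}, [subset[0]]
        while stack:
            v = stack.pop()
            for w in graph[v]:
                if w in chosen and w not in seen:
                    seen.add(w)
                    stack.append(w)
        if seen == chosen:
            return True
    return False

# e.g. has_motif({1: [2], 2: [1, 3], 3: [2]},
#                {1: 'red', 2: 'green', 3: 'red'},
#                Counter({'red': 1, 'green': 1}))  # -> True
```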

7.
Reachability and shortest path problems are NL-complete for general graphs. They are known to be in L for graphs of tree-width 2 (Jakoby and Tantau in Proceedings of FSTTCS’07: The 27th Annual Conference on Foundations of Software Technology and Theoretical Computer Science, pp. 216–227, 2007). In this paper, we improve these bounds for k-trees, where k is a constant. In particular, the main results of our paper are log-space algorithms for reachability in directed k-trees, and for computation of shortest and longest paths in directed acyclic k-trees. Besides the path problems mentioned above, we also consider the problem of deciding whether a k-tree has a perfect matching (decision version), and if so, finding a perfect matching (search version), and prove that these two problems are L-complete. These problems are known to be in P and in RNC for general graphs, and in SPL for planar bipartite graphs, as shown in Datta et al. (Theory Comput. Syst. 47:737–757, 2010). Our results settle the complexity of these problems for the class of k-trees. The results are also applicable for bounded tree-width graphs, when a tree-decomposition is given as input. The technique central to our algorithms is a careful implementation of the divide-and-conquer approach in log-space, along with some ideas from Jakoby and Tantau (Proceedings of FSTTCS’07: The 27th Annual Conference on Foundations of Software Technology and Theoretical Computer Science, pp. 216–227, 2007) and Limaye et al. (Theory Comput. Syst. 46(3):499–522, 2010).

8.
9.
The last decade has seen an explosion in the number of people learning English as a second language (ESL). In China alone, it is estimated to be over 300 million (Yang in Engl Today 22, 2006). Even in predominantly English-speaking countries, the proportion of non-native speakers can be very substantial. For example, the US National Center for Educational Statistics reported that nearly 10% of the students in the US public school population speak a language other than English and have limited English proficiency (National Center for Educational Statistics (NCES) in Public school student counts, staff, and graduate counts by state: school year 2000–2001, 2002). As a result, the last few years have seen a rapid increase in the development of NLP tools to detect and correct grammatical errors so that appropriate feedback can be given to ESL writers, a large and growing segment of the world’s population. As a byproduct of this surge in interest, there have been many NLP research papers on the topic, a Synthesis Series book (Leacock et al. in Automated grammatical error detection for language learners. Synthesis lectures on human language technologies. Morgan Claypool, Waterloo 2010), a recurring workshop (Tetreault et al. in Proceedings of the NAACL workshop on innovative use of NLP for building educational applications (BEA), 2012), and a shared task competition (Dale et al. in Proceedings of the seventh workshop on building educational applications using NLP (BEA), pp 54–62, 2012; Dale and Kilgarriff in Proceedings of the European workshop on natural language generation (ENLG), pp 242–249, 2011). Despite this growing body of work, several issues affecting the annotation for and evaluation of ESL error detection systems have received little attention. In this paper, we describe these issues in detail and present our research on alleviating their effects.

10.
Wavelet frame based models for image restoration have been extensively studied for the past decade (Chan et al. in SIAM J. Sci. Comput. 24(4):1408–1432, 2003; Cai et al. in Multiscale Model. Simul. 8(2):337–369, 2009; Elad et al. in Appl. Comput. Harmon. Anal. 19(3):340–358, 2005; Starck et al. in IEEE Trans. Image Process. 14(10):1570–1582, 2005; Shen in Proceedings of the international congress of mathematicians, vol. 4, pp. 2834–2863, 2010; Dong and Shen in IAS lecture notes series, Summer program on “The mathematics of image processing”, Park City Mathematics Institute, 2010). The success of wavelet frames in image restoration is mainly due to their capability of sparsely approximating piecewise smooth functions like images. Most of the wavelet frame based models designed in the past are based on the penalization of the ℓ1 norm of wavelet frame coefficients, which, under certain conditions, is the right choice, as supported by theories of compressed sensing (Candes et al. in Appl. Comput. Harmon. Anal., 2010; Candes et al. in IEEE Trans. Inf. Theory 52(2):489–509, 2006; Donoho in IEEE Trans. Inf. Theory 52:1289–1306, 2006). However, the assumptions of compressed sensing may not be satisfied in practice (e.g. for image deblurring and CT image reconstruction). Recently in Zhang et al. (UCLA CAM Report, vol. 11-32, 2011), the authors propose to penalize the ℓ0 “norm” of the wavelet frame coefficients instead, and they have demonstrated significant improvements of their method over some commonly used ℓ1 minimization models in terms of the quality of the recovered images. In this paper, we propose a new algorithm, called the mean doubly augmented Lagrangian (MDAL) method, for ℓ0 minimizations based on the classical doubly augmented Lagrangian (DAL) method (Rockafellar in Math. Oper. Res. 97–116, 1976). Our numerical experiments show that the proposed MDAL method is not only more efficient than the method proposed by Zhang et al. (UCLA CAM Report, vol. 11-32, 2011), but can also generate recovered images of even higher quality. This study reaffirms the feasibility of using the ℓ0 “norm” for image restoration problems.
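The MDAL iterations themselves are not reproduced here, but the ingredient that separates ℓ0 from ℓ1 models can be: the proximal operator of λ‖·‖0 is hard thresholding, keeping a coefficient only when its magnitude exceeds √(2λ), versus the soft thresholding induced by ℓ1. A minimal NumPy sketch:

```python
import numpy as np

def hard_threshold(d, lam):
    # prox of lam * ||.||_0: argmin_y lam*||y||_0 + 0.5*||y - d||^2,
    # solved coefficient-wise: keep d_i iff d_i**2 / 2 > lam
    out = d.copy()
    out[np.abs(d) <= np.sqrt(2.0 * lam)] = 0.0
    return out

def soft_threshold(d, lam):
    # prox of lam * ||.||_1, shown for comparison
    return np.sign(d) * np.maximum(np.abs(d) - lam, 0.0)
```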

11.
12.
Given natural limitations on the length of DNA sequences, designing phylogenetic reconstruction methods that are reliable under limited information is a crucial endeavor. There have been two approaches to this problem: reconstructing partial but reliable information about the tree (Mossel in IEEE Comput. Biol. Bioinform. 4:108–116, 2007; Daskalakis et al. in SIAM J. Discrete Math. 25:872–893, 2011; Daskalakis et al. in Proc. of RECOMB 2006, pp. 281–295, 2006; Gronau et al. in Proc. of the 19th Annual SODA 2008, pp. 379–388, 2008), and reaching “deeper” in the tree through reconstruction of ancestral sequences. In the latter category, Daskalakis et al. (Proc. of the 38th Annual STOC, pp. 159–168, 2006) settled an important conjecture of M. Steel (My favourite conjecture. Preprint, 2001), showing that, under the CFN model of evolution, all trees on n leaves with edge lengths bounded by the Ising model phase transition can be recovered with high probability from genomes of length O(log n) with a polynomial time algorithm. Their methods had a running time of O(n^10). Here we enhance our methods from Daskalakis et al. (Proc. of RECOMB 2006, pp. 281–295, 2006) with the learning of ancestral sequences and provide an algorithm for reconstructing a sub-forest of the tree which is reliable given the available data, without requiring a priori known bounds on the edge lengths of the tree. Our methods are based on an intuitive minimum spanning tree approach and run in O(n^3) time. For the case of full reconstruction of trees with edges under the phase transition, we maintain the same asymptotic sequence length requirements as in Daskalakis et al. (Proc. of the 38th Annual STOC, pp. 159–168, 2006), despite the considerably faster running time.
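As a loose illustration of the minimum-spanning-tree intuition (our simplification, not the paper's algorithm), a Kruskal-style pass over estimated pairwise distances that only trusts estimates below a reliability cutoff yields a forest of leaf groups:

```python
def reliable_forest(distances, cutoff):
    # distances: dict mapping frozenset({u, v}) -> estimated distance
    # Kruskal-style union-find: join leaves in order of increasing
    # estimated distance, ignoring estimates above `cutoff` (too noisy)
    parent = {}
    for pair in distances:
        for u in pair:
            parent.setdefault(u, u)
    def find(u):
        while parent[u] != u:
            parent[u] = parent[parent[u]]  # path halving
            u = parent[u]
        return u
    for pair, d in sorted(distances.items(), key=lambda kv: kv[1]):
        if d > cutoff:
            break
        u, v = tuple(pair)
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    groups = {}
    for u in parent:
        groups.setdefault(find(u), []).append(u)
    return list(groups.values())  # connected groups of leaves
```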

13.
This work contrasts Giovanni Sartor’s view of inferential semantics of legal concepts (Sartor in Artif Intell Law 17:217–251, 2009) with a probabilistic model of theory formation (Kemp et al. in Cognition 114:165–196, 2010). The work further explores possibilities of implementing Kemp’s probabilistic model of theory formation in the context of mapping legal concepts between two individual legal systems. For implementing the legal concept mapping, we propose a cross-categorization approach that combines three mathematical models: the Bayesian Model of Generalization (BMG; Tenenbaum and Griffiths in Behav Brain Sci 4:629–640, 2001), the probabilistic model of theory formation, i.e., the Infinite Relational Model (IRM) first introduced by Kemp et al. (The twenty-first national conference on artificial intelligence, 2006; Cognition 114:165–196, 2010), and its extended model, the normal-IRM (n-IRM) proposed by Herlau et al. (IEEE International Workshop on Machine Learning for Signal Processing, 2012). We apply our cross-categorization approach to datasets where legal concepts related to educational systems are defined by the Japanese and the Danish authorities, respectively, according to the International Standard Classification of Education. The main contribution of this work is the proposal of a conceptual framework for the cross-categorization approach that, inspired by Sartor (Artif Intell Law 17:217–251, 2009), attempts to explain a reasoner’s inferential mechanisms.

14.
Matthias Möller, Computing 95(5):425–448, 2013
This paper is concerned with the extension of the algebraic flux-correction (AFC) approach (Kuzmin in Computational fluid and solid mechanics, Elsevier, Amsterdam, pp 887–888, 2001; J Comput Phys 219:513–531, 2006; Comput Appl Math 218:79–87, 2008; J Comput Phys 228:2517–2534, 2009; Flux-corrected transport: principles, algorithms, and applications, 2nd edn. Springer, Berlin, pp 145–192, 2012; J Comput Appl Math 236:2317–2337, 2012; Kuzmin et al. in Comput Methods Appl Mech Eng 193:4915–4946, 2004; Int J Numer Methods Fluids 42:265–295, 2003; Kuzmin and Möller in Flux-corrected transport: principles, algorithms, and applications. Springer, Berlin, 2005; Kuzmin and Turek in J Comput Phys 175:525–558, 2002; J Comput Phys 198:131–158, 2004) to nonconforming finite element methods for the linear transport equation. Accurate nonoscillatory approximations to convection-dominated flows are obtained by stabilizing the continuous Galerkin method with solution-dependent artificial diffusion, whose magnitude is controlled by a flux limiter. This concept dates back to flux-corrected transport schemes. The unique feature of AFC is that all information is extracted from the system matrices, which are manipulated to satisfy certain mathematical constraints. AFC schemes have been devised with conforming $P_1$ and $Q_1$ finite elements in mind, but this is not a prerequisite. Here, we consider their extension to the nonconforming Crouzeix–Raviart element (Crouzeix and Raviart in RAIRO R3 7:33–76, 1973) on triangular meshes and its quadrilateral counterpart, the class of rotated bilinear Rannacher–Turek elements (Rannacher and Turek in Numer Methods PDEs 8:97–111, 1992). The underlying design principles of AFC schemes are shown to hold for (some variant of) both elements. However, numerical tests for a purely convective flow and a convection–diffusion problem demonstrate that flux-corrected solutions are overdiffusive for the Crouzeix–Raviart element. Good resolution of smooth and discontinuous profiles is attested for $Q_1^\mathrm{nc}$ approximations on quadrilateral meshes. A synthetic benchmark is used to quantify the artificial diffusion present in conforming and nonconforming high-resolution schemes of AFC type. Finally, the implementation of efficient sparse matrix–vector multiplications is addressed.

15.
Teachers and students face many challenges in shifting from traditional classroom cultures to enacting the Knowledge-Building Communities model (KBC model) supported by the CSCL environment, Knowledge Forum (Bereiter, 2002; Bereiter & Scardamalia, 1993; Scardamalia, 2002; Scardamalia & Bereiter, 2006). Enacting the model involves socializing students into knowledge work, similar to disciplinary communities. A useful construct in the field of the Learning Sciences for understanding knowledge work is “epistemic games” (Collins & Ferguson, 1993; Morrison & Collins 1995; Perkins, 1997). We propose that a powerful means for supporting classroom enactments of the KBC model entails conceptualizing Knowledge Forum as a collective space for playing multi-player epistemic games. Participation in knowledge-building communities is then scaffolded through learning the moves of such games. We have designed scaffolding tools that highlight particular knowledge-building moves for practice and reflection as a means of supporting students and teachers in coming to understand how to collectively work together toward the progressive improvement of ideas. In order to examine our design theories in practice, we present research on Ideas First, a design-based research program involving enactments of the KBC model in Singaporean primary science classrooms (Bielaczyc & Ow, 2007, 2010; Ow & Bielaczyc, 2007; 2008).

16.
17.
In this document, we present an alternative to the method introduced by Ebner (Pattern Recognit 60–67, 2003; J Parallel Distrib Comput 64(1):79–88, 2004; Color constancy using local color shifts, pp 276–287, 2004; Color Constancy, 2007; Mach Vis Appl 20(5):283–301, 2009) for computing the local space average color. We show that when the problem is framed as a linear system and the resulting series is solved, a solution based on LU decomposition reduces the computing time by at least an order of magnitude.
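A hedged sketch of the linear-system view (our reconstruction, with an assumed mixing weight p): the iterative local averaging a = p·c + (1−p)·(neighbourhood mean of a) has the fixed point (I − (1−p)N)a = p·c, so one sparse LU factorisation per colour channel replaces the iteration:

```python
import numpy as np
from scipy.sparse import identity, lil_matrix
from scipy.sparse.linalg import splu

def local_avg_channel(c, p=0.05):
    # c: 2-D array holding one colour channel; p: mixing weight (assumed)
    # Build N, the row-stochastic 4-neighbour averaging operator, then
    # solve (I - (1 - p) * N) a = p * c once by LU instead of iterating.
    h, w = c.shape
    n = h * w
    N = lil_matrix((n, n))
    for y in range(h):
        for x in range(w):
            nbrs = [(y + dy, x + dx)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1))
                    if 0 <= y + dy < h and 0 <= x + dx < w]
            for yy, xx in nbrs:
                N[y * w + x, yy * w + xx] = 1.0 / len(nbrs)
    A = (identity(n) - (1.0 - p) * N).tocsc()
    return splu(A).solve(p * c.ravel()).reshape(h, w)
```

The system is strictly diagonally dominant (each row of (1−p)N sums to 1−p < 1), so the factorisation is well defined.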

18.
In the most popular logics combining knowledge and awareness, it is not possible to express statements about knowledge of unawareness, such as “Ann knows that Bill is aware of something Ann is not aware of”, without using a stronger statement such as “Ann knows that Bill is aware of \(p\) and Ann is not aware of \(p\)”, for some particular \(p\). Halpern and Rêgo (Proceedings of KR 2006; Games Econ Behav 67(2):503–525, 2009b) introduced a logic in which such statements about knowledge of unawareness can be expressed. The logic extends the traditional framework with quantification over formulae, and is thus very expressive. As a consequence, it is not decidable. In this paper we introduce a decidable logic which can be used to reason about certain types of unawareness. Our logic extends the traditional framework with an operator expressing full awareness, i.e., the fact that an agent is aware of everything, and another operator expressing relative awareness, i.e., the fact that one agent is aware of everything another agent is aware of. The logic is less expressive than Halpern and Rêgo’s logic. It is, however, expressive enough to express all of the motivating examples in Halpern and Rêgo (Proceedings of KR 2006; Games Econ Behav 67(2):503–525, 2009b). In addition to proving that the logic is decidable and that its satisfiability problem is PSPACE-complete, we present an axiomatisation which we show is sound and complete.
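In symbols (K for knowledge, A for awareness; R_{a,b} is our ad-hoc notation for the relative-awareness operator described above, read "a is aware of everything b is aware of"), the motivating statement and its renderings look roughly as follows:

```latex
% With quantification over formulae (Halpern and Rêgo):
%   "Ann knows that Bill is aware of something Ann is not aware of"
K_{\mathit{Ann}}\,\exists p\,\bigl(A_{\mathit{Bill}}\,p \wedge \neg A_{\mathit{Ann}}\,p\bigr)
% The weaker, quantifier-free statement about one particular p:
K_{\mathit{Ann}}\bigl(A_{\mathit{Bill}}\,p \wedge \neg A_{\mathit{Ann}}\,p\bigr)
% With a relative-awareness operator, the same knowledge of
% unawareness becomes expressible without quantifiers:
K_{\mathit{Ann}}\,\neg R_{\mathit{Ann},\mathit{Bill}}
```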

19.
This paper gives a precise mathematical analysis of the behaviour of “hiring above the median” strategies for a problem in the context of “on-line selection under uncertainty” that is known (at least in computer-science-related literature) as the “hiring problem”. Here a sequence of candidates is interviewed sequentially, and based on the “score” of the current candidate an immediate decision whether or not to hire him has to be made. Under “hiring above the median” selection rules, a new candidate is hired if he has a score better than the median score of the already recruited candidates. Under the natural probabilistic model assuming that the ranks of the first n candidates form a random permutation, we show exact and asymptotic results for various quantities of interest that describe the dynamics of the hiring process and the quality of the hired staff. In particular, we characterize the limiting distribution of the number of hired candidates out of a sequence of n candidates, which reveals the somewhat surprising effect that slight changes in the selection rule, i.e., taking the “lower” or the “upper” median as the threshold, have a strong influence on the asymptotic behaviour. We thereby considerably extend previous analyses (Krieger et al., Ann. Appl. Probab. 17:360–385, 2007; Broder et al., Proceedings of the 19th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pp. 1184–1193, ACM/SIAM, New York/Philadelphia, 2008; Archibald and Martinez, Proceedings of the 21st International Conference on Formal Power Series and Algebraic Combinatorics (FPSAC 2009), Discrete Mathematics and Theoretical Computer Science, pp. 63–76, 2009) of such selection rules. Furthermore, we discuss connections between the hiring process and the Chinese restaurant process introduced by Pitman (Combinatorial Stochastic Processes, Springer, Berlin, 2006).
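A quick Monte-Carlo sketch (an illustration, not the paper's analysis) makes the lower- versus upper-median sensitivity easy to observe:

```python
import random

def hire_above_median(n, upper=False):
    # Scores of n candidates form a random permutation of 1..n (higher is
    # better).  The first candidate is always hired; afterwards a candidate
    # is hired iff their score beats the current lower (or upper) median of
    # the already-hired staff.
    hired = []
    for score in random.sample(range(1, n + 1), n):
        if not hired:
            hired.append(score)
            continue
        s = sorted(hired)
        m = s[(len(s) - (0 if upper else 1)) // 2]  # lower vs upper median
        if score > m:
            hired.append(score)
    return hired

# e.g. compare len(hire_above_median(10**4)) for upper=False vs upper=True
# over many runs to see the differing growth of the hired-staff size
```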

20.
We address the problem of generating compact code from software-pipelined loops. Although software pipelining is a powerful technique to extract fine-grain parallelism, it generates lifetime intervals spanning multiple loop iterations. These intervals require periodic register allocation (also called variable expansion), which in turn yields a code generation challenge. We are looking for the minimal unrolling factor enabling the periodic register allocation of software-pipelined kernels. This challenge is generally addressed through one of: (1) hardware support in the form of rotating register files, which solve the unrolling problem but are expensive in hardware; (2) register renaming by inserting register moves, which increases the number of operations in the loop, and may damage the schedule of the software pipeline and reduce throughput; (3) post-pass loop unrolling, which does not compromise throughput but often leads to impractical code growth. The latter approach relies on the proof that MAXLIVE registers (the maximal number of values simultaneously alive) are sufficient for periodic register allocation (Eisenbeis et al. in PACT ’95: Proceedings of the IFIP WG10.3 working conference on Parallel Architectures and Compilation Techniques, pp. 264–267, Manchester, UK, 1995; Hendren et al. in CC ’92: Proceedings of the 4th International Conference on Compiler Construction, pp. 176–191, London, UK, 1992). However, the best existing heuristic for controlling this code growth—modulo variable expansion (Lam in SIGPLAN Not 23(7):318–328, 1988)—may not apply the correct amount of loop unrolling to guarantee that MAXLIVE registers are enough, which may result in register spills (Eisenbeis et al. in PACT ’95, pp. 264–267, 1995). This paper presents our research results on the open problem of minimal loop unrolling, allowing a software-only code generation that does not trade the optimality of the initiation interval (II) for the compactness of the generated code. Our novel idea is to use the registers remaining free after periodic register allocation to relax the constraints on register reuse. The problem of minimal loop unrolling arises either before or after software pipelining, either with a single or with multiple register types (classes). We provide a formal problem definition for each scenario, and we propose and study a dedicated algorithm for each problem. Our solutions are implemented within an industrial-strength compiler for a VLIW embedded processor from STMicroelectronics, and validated on multiple benchmark suites.
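For reference, a sketch of how modulo variable expansion picks its unrolling factor from value lifetimes and the initiation interval (our paraphrase of Lam's rule; the paper's contribution is to do better than this when MAXLIVE registers should suffice):

```python
from math import ceil, lcm

def mve_unroll(lifetimes, ii, simple_codegen=False):
    # Each value alive for l cycles needs ceil(l / II) rotating copies, so
    # the kernel is unrolled max(ceil(l / II)) times; taking the lcm of the
    # copy counts instead simplifies per-copy renaming at the cost of size.
    copies = [ceil(l / ii) for l in lifetimes]
    return lcm(*copies) if simple_codegen else max(copies)

# e.g. mve_unroll([7, 3, 12], ii=4) -> 3 copies of the kernel
```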
