Similar Documents
A total of 20 similar documents were found (search time: 500 ms).
1.
A Unified Primal-Dual Algorithm Framework Based on Bregman Iteration (total citations: 2; self-citations: 0; citations by others: 2)
In this paper, we propose a unified primal-dual algorithm framework for two classes of problems that arise from various signal and image processing applications. We also show the connections to existing methods, in particular Bregman iteration (Osher et al., Multiscale Model. Simul. 4(2):460–489, 2005) based methods, such as linearized Bregman (Osher et al., Commun. Math. Sci. 8(1):93–111, 2010; Cai et al., SIAM J. Imag. Sci. 2(1):226–252, 2009, CAM Report 09-28, UCLA, March 2009; Yin, CAAM Report, Rice University, 2009) and split Bregman (Goldstein and Osher, SIAM J. Imag. Sci., 2, 2009). The convergence of the general algorithm framework is proved under mild assumptions. Applications to ℓ1 basis pursuit, TV–L2 minimization, and matrix completion are demonstrated. Finally, numerical examples show that the proposed algorithms are easy to implement, efficient, stable, and flexible enough to cover a wide variety of applications.
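For orientation on the Bregman-type iterations this framework unifies, the following is the standard linearized Bregman iteration for ℓ1 basis pursuit (minimize ‖u‖₁ subject to Au = f) as it appears in the cited literature; this is a generic reminder rather than the paper's unified scheme, and δ, μ are the usual step-size and threshold parameters of the sketch:

```latex
v^{k+1} = v^{k} + A^{\top}\bigl(f - A u^{k}\bigr), \qquad
u^{k+1} = \delta \,\operatorname{shrink}\bigl(v^{k+1}, \mu\bigr), \qquad
\operatorname{shrink}(v,\mu)_i = \operatorname{sign}(v_i)\,\max\bigl(|v_i| - \mu,\, 0\bigr),
```

starting from u⁰ = v⁰ = 0.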

2.
Currently, embedded systems are widely used in ubiquitous computing environments, including digital set-top boxes, mobile phones, and USN (Ubiquitous Sensor Networks). The significance of security has been growing, as it must necessarily be embedded in all these systems. Up until now, many researchers have made efforts to verify the integrity of application binaries downloaded to embedded systems. Existing solutions fall into hardware-based and software-based methods. This research adopts a software-based approach. From the software perspective, unlike the existing papers (Seshadri et al., Proc. the IEEE symposium on security and privacy, 2004; Seshadri et al., Proc. the symposium on operating systems principles, 2005) based on the standardized model (TTAS.KO-11.0054, 2006) publicized in Korea, there is no extra verifier; the verification function is conducted within the target system itself. Contrary to the previous schemes (Jung et al., 2008; Lee et al., LNCS, vol. 4808, pp. 346–355, 2007), verification results are stored in a single validation check bit, instead of storing a signature value for each application binary file in the i-node structure, for the purpose of reducing run-time execution overhead. Consequently, the proposed scheme is more efficient because it dramatically reduces storage overhead, and in terms of computation it performs one hash computation on initial execution and thereafter compares only the single validation check bit, instead of running signature and hash algorithms for every application binary. Furthermore, in cases where there are frequent changes in the i-node structure or file data, the scheme can provide far more effective verification performance compared to the previous schemes.
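A minimal sketch of the validation-bit idea described above: hash the binary once on first execution, record the result in a single bit, and skip the expensive check afterwards. The function names and the in-memory "inode table" are illustrative assumptions, not the paper's actual data structures.

```python
import hashlib

# Hypothetical illustration of the 1-bit validation scheme: a full hash check
# is done only on the first run; later runs consult the stored validation bit.
inode_table = {}  # path -> {"validated": bool}

def verify_on_first_run(path: str, trusted_digest: str) -> bool:
    entry = inode_table.setdefault(path, {"validated": False})
    if entry["validated"]:            # subsequent runs: 1-bit check only
        return True
    with open(path, "rb") as f:       # first run: full hash verification
        digest = hashlib.sha256(f.read()).hexdigest()
    entry["validated"] = (digest == trusted_digest)
    return entry["validated"]
```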

3.
It is often the case that after a scheduling problem has been solved, some small changes occur that invalidate the solution of the original problem. Solving the new problem from scratch can result in a schedule that is very different from the original one. In applications such as a university course timetable or flight scheduling, one would be interested in a solution that requires minimal changes for the users. The present paper considers the minimal perturbation problem. It is motivated by scenarios in which a Constraint Satisfaction Problem (CSP) is subject to changes, in particular the case where some of the constraints are changed after a solution has been obtained. The goal is to find a solution to the changed problem that is as similar as possible to the previous solution (i.e., includes minimal perturbations). Previous studies proposed a formal model for this problem (Barták et al. 2004), a best-first search algorithm (Ross et al. 2000), complexity bounds (Hebrard et al. 2005), and branch-and-bound based algorithms (Barták et al. 2004; Hebrard et al. 2005). The present paper proposes a new approach for solving the minimal perturbation problem. The proposed method interleaves constraint optimization and constraint satisfaction techniques. Our experimental results demonstrate the advantage of the proposed algorithm over former algorithms. Experiments were performed both on random CSPs and on random instances of the Meeting Scheduling Problem.
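For illustration only: the "distance" minimized in a minimal perturbation problem is commonly the number of variables whose values differ from the previous solution (a Hamming distance over assignments). The variable names below are made up and this is not the paper's algorithm.

```python
def perturbations(old_solution: dict, new_solution: dict) -> int:
    """Count variables whose assigned value changed between two solutions."""
    return sum(1 for var, val in old_solution.items()
               if new_solution.get(var) != val)

# e.g. perturbations({"x": 1, "y": 2, "z": 3}, {"x": 1, "y": 5, "z": 3}) == 1
```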

4.
5.
The ant colony optimization (ACO) metaheuristic represents a new class of algorithms particularly suited to solving real-world combinatorial optimization problems. ACO algorithms, first published in 1991 by M. Dorigo [Optimization, learning and natural algorithms (in Italian). Ph.D. Thesis, Dipartimento di Elettronica, Politecnico di Milano, Milan, 1992] and his coworkers, have been applied, particularly from 1999 onwards (Bonabeau et al., Swarm intelligence: from natural to artificial systems, Oxford University Press, New York, 1999; Dorigo et al., Artificial Life 5(2):137–172, 1999; Dorigo and Di Caro, Ant colony optimization: a new metaheuristic, IEEE Press, Piscataway, NJ, 1999; Dorigo et al., Ant colony optimization and swarm intelligence, Springer, Berlin Heidelberg New York, 2004; Dorigo and Stützle, Ant colony optimization, MIT Press, Cambridge, MA, 2004), to several kinds of optimization problems, such as the traveling salesman problem, the quadratic assignment problem, vehicle routing, sequential ordering, scheduling, graph coloring, and the management of communication networks. The ant colony optimization metaheuristic takes inspiration from studies of the foraging behavior of real ant colonies. The main characteristic of such colonies is that individuals have no global knowledge of the problem being solved but communicate indirectly among themselves by depositing on the ground a chemical substance called pheromone, which probabilistically influences the choices of subsequent ants, which tend to follow paths where the pheromone concentration is higher. Such behavior, called stigmergy, is the basic mechanism that controls ant activity and allows the colony to find the shortest path connecting its nest to a food source. In this paper, it is shown how to convert natural ant behavior into algorithms able to escape from local minima and find global minimum solutions to constrained combinatorial problems. Some examples on plane trusses are also presented.
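A minimal sketch of the stigmergy mechanism described above (not the paper's code): an ant picks its next node with probability proportional to pheromone^α · (1/distance)^β, and pheromone evaporates and is reinforced along short tours. The parameter names α, β, ρ, q follow the usual ACO conventions and are assumptions of this sketch.

```python
import random

def choose_next(current, unvisited, pheromone, dist, alpha=1.0, beta=2.0):
    """Probabilistic edge choice biased by pheromone and inverse distance."""
    weights = [pheromone[current][j] ** alpha * (1.0 / dist[current][j]) ** beta
               for j in unvisited]
    return random.choices(unvisited, weights=weights, k=1)[0]

def evaporate_and_deposit(pheromone, tours, rho=0.5, q=1.0):
    """Evaporate all trails, then reinforce the edges of completed tours."""
    for i in pheromone:
        for j in pheromone[i]:
            pheromone[i][j] *= (1.0 - rho)          # evaporation
    for tour, length in tours:                      # shorter tours deposit more
        for a, b in zip(tour, tour[1:]):
            pheromone[a][b] += q / length
```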

6.
Programming robot behavior remains a challenging task. While it is often easy to abstractly define or even demonstrate a desired behavior, designing a controller that embodies the same behavior is difficult, time consuming, and ultimately expensive. The machine learning paradigm offers the promise of enabling “programming by demonstration” for developing high-performance robotic systems. Unfortunately, many “behavioral cloning” (Bain and Sammut in Machine intelligence agents. London: Oxford University Press, 1995; Pomerleau in Advances in neural information processing systems 1, 1989; LeCun et al. in Advances in neural information processing systems 18, 2006) approaches that utilize classical tools of supervised learning (e.g. decision trees, neural networks, or support vector machines) do not fit the needs of modern robotic systems. These systems are often built atop sophisticated planning algorithms that efficiently reason far into the future; consequently, ignoring these planning algorithms in favor of a supervised learning approach often leads to myopic and poor-quality robot performance. While planning algorithms have shown success in many real-world applications ranging from legged locomotion (Chestnutt et al. in Proceedings of the IEEE-RAS international conference on humanoid robots, 2003) to outdoor unstructured navigation (Kelly et al. in Proceedings of the international symposium on experimental robotics (ISER), 2004; Stentz et al. in AUVSI’s unmanned systems, 2007), such algorithms rely on fully specified cost functions that map sensor readings and environment models to quantifiable costs. Such cost functions are usually manually designed and programmed. Recently, a set of techniques has been developed that explores learning these functions from expert human demonstration. These algorithms apply an inverse optimal control approach to find a cost function for which planned behavior mimics an expert’s demonstration. The work we present extends the Maximum Margin Planning (MMP) framework (Ratliff et al. in Twenty-second international conference on machine learning (ICML06), 2006a) to admit learning of more powerful, non-linear cost functions. These algorithms, known collectively as LEARCH (LEArning to seaRCH), are simpler to implement than most existing methods, more efficient than previous attempts at non-linearization (Ratliff et al. in NIPS, 2006b), more naturally satisfy common constraints on the cost function, and better represent our prior beliefs about the function’s form. We derive and discuss the framework both mathematically and intuitively, and demonstrate practical real-world performance with three applied case studies covering legged locomotion, grasp planning, and autonomous outdoor unstructured navigation. The latter study includes hundreds of kilometers of autonomous traversal through complex natural environments. These case studies address key challenges in applying the algorithm in practical settings that utilize state-of-the-art planners and that may be constrained by efficiency requirements and imperfect expert demonstration.
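As a rough intuition for the inverse optimal control loop described above, the sketch below raises cost where the current planner deviates from the expert and lowers it along the expert's demonstration. It is an assumption-laden illustration of the general idea, not the authors' LEARCH implementation; `plan_path`, the grid cost map, and the multiplicative update are hypothetical placeholders.

```python
import math

def cost_update_step(cost, expert_path, plan_path, start, goal, lr=0.1):
    """One illustrative learning-from-demonstration step on a cell cost map."""
    planned = plan_path(cost, start, goal)          # plan under current costs
    for cell in planned:
        if cell not in expert_path:
            cost[cell] *= math.exp(lr)              # discourage planner-only cells
    for cell in expert_path:
        if cell not in planned:
            cost[cell] *= math.exp(-lr)             # encourage expert cells
    return cost
```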

7.
An emerging trend in DNA computing is the algorithmic analysis of new molecular biology technologies and, more generally, of more effective tools to tackle computational biology problems. An algorithmic understanding of the interaction between DNA molecules has become the focus of research that was initially aimed at solving mathematical problems by processing data within biomolecules. In this paper a novel mechanism of DNA recombination is discussed, which has turned out to be a good implementation key for developing new procedures for DNA manipulation (Franco et al., DNA extraction by cross pairing PCR, 2005; Franco et al., DNA recombination by XPCR, 2006; Manca and Franco, Math Biosci 211:282–298, 2008). It is called XPCR, as it is a variant of the polymerase chain reaction (PCR), which revolutionized molecular biology as a technique for the cyclic amplification of DNA segments. A few DNA algorithms are proposed that were experimentally validated in different contexts, such as mutagenesis (Franco, Biomolecular computing—combinatorial algorithms and laboratory experiments, 2006), multiple concatenation, gene-driven DNA extraction (Franco et al., DNA extraction by cross pairing PCR, 2005), and the generation of DNA libraries (Franco et al., DNA recombination by XPCR, 2006); some related ongoing work is also outlined.

8.
We present new efficient deterministic and randomized distributed algorithms for decomposing a graph with n nodes into a disjoint set of connected clusters with radius at most k−1 and having O(n^{1+1/k}) intercluster edges. We show how to implement our algorithms in the distributed CONGEST model of computation, i.e., with limited message size, which improves the time complexity of previous algorithms (Moran and Snir in Theor. Comput. Sci. 243(1–2):217–241, 2000; Awerbuch in J. ACM 32:804–823, 1985; Peleg in Distributed Computing: A Locality-Sensitive Approach, 2000) from O(n) to O(n^{1−1/k}). We apply our algorithms to constructing low-stretch graph spanners and network synchronizers in sublinear deterministic time in the CONGEST model.
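For background, here is a hedged, sequential sketch of the classic ball-growing decomposition (Awerbuch-style) that underlies results of this kind; it is not the paper's distributed CONGEST algorithm. A ball keeps growing while each new layer multiplies its size by more than n^{1/k}, which caps the radius at roughly k−1 and keeps intercluster edges near O(n^{1+1/k}). The adjacency representation is an assumption of the sketch.

```python
def cluster_decomposition(adj, k):
    """adj: dict node -> iterable of neighbor nodes; returns a list of clusters."""
    n = len(adj)
    growth = n ** (1.0 / k)
    unassigned = set(adj)
    clusters = []
    while unassigned:
        center = next(iter(unassigned))
        ball = {center}
        while True:
            frontier = ({v for u in ball for v in adj[u]} & unassigned) - ball
            if len(ball) + len(frontier) <= growth * len(ball):
                break                     # growth slowed below n^(1/k): stop
            ball |= frontier
        clusters.append(ball)
        unassigned -= ball
    return clusters
```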

9.
International Journal of Computer Mathematics, 2012, 89(11):2387–2397
Grid and multicluster computing environments are becoming increasingly popular for both scientific and commercial applications. Process scheduling remains a central issue that must be effectively resolved in order to exploit the full potential that a grid or multicluster environment can offer. We use a directed acyclic graph (DAG) to model a process or an application, where the nodes of the DAG represent the tasks of the process. Prior to the execution of a process in a multicluster environment, the tasks must be mapped onto the clusters. In this article, it is shown that the algorithm developed by He et al. [L. He, S.A. Jarvis, D.P. Spooner, D. Bacigalupo, G. Tan, and G.R. Nudd, Mapping DAG-based applications to multiclusters with background workload, Proceedings of the 2005 IEEE International Symposium on Cluster Computing and the Grid, Cardiff, 2005, pp. 855–862] for the multicluster DAG mapping problem can be significantly improved by incorporating a task duplication strategy. The proposed process scheduling algorithm has a time complexity of O(|V|^2(r+d+1)), where |V| is the number of tasks, r the number of clusters, and d the maximum in-degree of tasks.

10.
Distributed constraint satisfaction problems (DisCSPs) are composed of agents, each holding its own variables, that are connected by constraints to variables of other agents. Due to the distributed nature of the problem, message delay can have unexpected effects on the behavior of distributed search algorithms on DisCSPs. This has recently been shown in experimental studies of asynchronous backtracking algorithms (Bejar et al., Artif. Intell., 161:117–148, 2005; Silaghi and Faltings, Artif. Intell., 161:25–54, 2005). To evaluate the impact of message delay on the run of DisCSP search algorithms, a model for distributed performance measures is presented. The model counts the number of non-concurrent constraint checks needed to arrive at a solution as a non-concurrent measure of distributed computation. A simpler version measures distributed computation cost by the non-concurrent number of computation steps. An algorithm for computing these distributed measures of computational effort is described. The realization of the model for measuring the performance of distributed search algorithms is a simulator that includes the cost of message delays. Two families of distributed search algorithms on DisCSPs are investigated: algorithms that run a single search process, and multiple-search-process algorithms. The two families of algorithms are described and related to existing algorithms. The performance of three representative algorithms from these two families is measured on randomly generated instances of DisCSPs with delayed messages. Message delay is found to have a strong negative effect on single-search-process algorithms, whether synchronous or asynchronous. Multiple-search-process algorithms, on the other hand, are only lightly affected by message delay.
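A hedged sketch of the non-concurrent constraint checks (NCCC) idea described above, not the authors' simulator: each agent carries a logical clock that local constraint checks advance, a received message raises it to the sender's value (optionally plus a delay cost), and the run's cost is the largest clock at termination. Class and method names are assumptions.

```python
class Agent:
    def __init__(self):
        self.ncccs = 0                    # agent's logical clock

    def check_constraint(self, check) -> bool:
        self.ncccs += 1                   # every local check costs one unit
        return check()

    def receive(self, sender_ncccs: int, delay: int = 0):
        # non-concurrent accounting: inherit the sender's clock plus delay
        self.ncccs = max(self.ncccs, sender_ncccs + delay)

def run_cost(agents) -> int:
    """Non-concurrent cost of the whole run = maximum clock over all agents."""
    return max(a.ncccs for a in agents)
```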

11.
Semi-supervised graph clustering: a kernel approach (total citations: 6; self-citations: 0; citations by others: 6)
Semi-supervised clustering algorithms aim to improve clustering results using limited supervision. The supervision is generally given as pairwise constraints; such constraints are natural for graphs, yet most semi-supervised clustering algorithms are designed for data represented as vectors. In this paper, we unify vector-based and graph-based approaches. We first show that a recently proposed objective function for semi-supervised clustering based on Hidden Markov Random Fields, with squared Euclidean distance and a certain class of constraint penalty functions, can be expressed as a special case of the weighted kernel k-means objective (Dhillon et al., in Proceedings of the 10th International Conference on Knowledge Discovery and Data Mining, 2004a). A recent theoretical connection between weighted kernel k-means and several graph clustering objectives enables us to perform semi-supervised clustering of data given either as vectors or as a graph. For graph data, this result leads to algorithms for optimizing several new semi-supervised graph clustering objectives. For vector data, the kernel approach also enables us to find clusters with non-linear boundaries in the input data space. Furthermore, we show that recent work on spectral learning (Kamvar et al., in Proceedings of the 17th International Joint Conference on Artificial Intelligence, 2003) may be viewed as a special case of our formulation. We empirically show that our algorithm is able to outperform current state-of-the-art semi-supervised algorithms on both vector-based and graph-based data sets.
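For reference, here is a minimal sketch of one assignment step of weighted kernel k-means, the objective the paper builds on; it is illustrative rather than the authors' code, and the array-based interface (kernel matrix K, per-point weights w, current labels) is an assumption.

```python
import numpy as np

def assign_step(K, w, labels, n_clusters):
    """One weighted kernel k-means assignment pass.
    K: (n, n) kernel matrix; w: (n,) weights; labels: (n,) int array."""
    n = len(w)
    new_labels = np.empty(n, dtype=int)
    for i in range(n):
        best, best_dist = labels[i], np.inf
        for c in range(n_clusters):
            idx = np.where(labels == c)[0]
            sw = w[idx].sum()
            if sw == 0:
                continue
            # squared distance from point i to cluster c's center in feature space
            dist = (K[i, i]
                    - 2 * (w[idx] * K[i, idx]).sum() / sw
                    + (np.outer(w[idx], w[idx]) * K[np.ix_(idx, idx)]).sum() / sw ** 2)
            if dist < best_dist:
                best, best_dist = c, dist
        new_labels[i] = best
    return new_labels
```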

12.
Rapid advancement and wider availability of Grid technologies have encouraged many businesses and researchers to establish Virtual Organizations (VOs) and make use of their available desktop resources to solve computation-intensive problems. These VOs, however, work as disjointed and independent communities with no resource sharing between them. In previous work, we proposed a fully decentralized and reconfigurable Inter-Grid framework for resource sharing among such distributed and autonomous Grid systems (Rao et al. in ICCSA, 2006). The specific problem underlying such a system of collaborating Grids is the scheduling of resources, as there is very little knowledge about resource availability due to the distributed and autonomous nature of the underlying Grid entities. In this paper, we propose a probabilistic and adaptive scheduling algorithm that uses system-generated predictions for Inter-Grid resource sharing while keeping the collaborating Grid systems autonomous and independent. We first obtain system-generated job runtime estimates without actually submitting jobs to the target Grid system. This job execution estimate is then used to predict the feasibility of scheduling the job on the target system. Furthermore, our proposed algorithm adapts itself to the actual resource behavior and performance. Simulation results are presented to discuss the correctness and accuracy of our proposed algorithm.

13.
Matrix-Matrix Multiplication (MMM) is a highly important kernel in linear algebra algorithms, and the performance of its implementations depends on memory utilization and data locality. There are MMM algorithms, such as the standard algorithm and the Strassen–Winograd variant, and many recursive array layouts, such as Z-Morton or U-Morton; however, their data locality is lower than that of the proposed methodology. Moreover, several state-of-the-art (SOA) self-tuning libraries exist, such as ATLAS for the MMM algorithm, which tests many MMM implementations. During the installation of ATLAS, on the one hand an extremely complex empirical tuning step is required, and on the other hand a large number of compiler options are used, neither of which is within the scope of this paper. In this paper, a new methodology using the standard MMM algorithm is presented, achieving improved performance by focusing on data locality (both temporal and spatial). This methodology finds the scheduling that conforms with optimum memory management. Compared with (Chatterjee et al. in IEEE Trans. Parallel Distrib. Syst. 13:1105, 2002; Li and Garzaran in Proc. of Lang. Compil. Parallel Comput., 2005; Bilmes et al. in Proc. of the 11th ACM Int. Conf. Supercomput., 1997; Aberdeen and Baxter in Concurr. Comput. Pract. Exp. 13:103, 2001), the proposed methodology has two major advantages. Firstly, the scheduling used at the tile level differs from that at the element level and has better data locality, suited to the sizes of the memory hierarchy. Secondly, its exploration time is short, because it searches only for the number of tiling levels used and, within the range (1, 2) (Sect. 4), for the best tile size for each cache level. A software tool (C code) implementing the above methodology was developed, taking the hardware model and the matrix sizes as input. This methodology performs better than others across a wide range of architectures. Compared with the best existing related work, which we implemented, performance improvements of up to 55% over the standard MMM algorithm and up to 35% over Strassen's are observed, both under recursive data array layouts.
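An illustrative loop-tiling (blocking) sketch of the data-locality idea discussed above; it is not the paper's methodology or its tile-size search, and the tile size of 64 is an arbitrary assumption. Each tile of A, B and C is reused while it is still "hot" in cache, which is what blocking buys.

```python
def tiled_matmul(A, B, n, tile=64):
    """C = A @ B for n x n lists of lists, with one level of loop tiling."""
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, tile):
        for kk in range(0, n, tile):
            for jj in range(0, n, tile):
                for i in range(ii, min(ii + tile, n)):
                    for k in range(kk, min(kk + tile, n)):
                        a = A[i][k]                   # reused across the j-tile
                        for j in range(jj, min(jj + tile, n)):
                            C[i][j] += a * B[k][j]
    return C
```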

14.
Using Biologically Inspired Features for Face Processing (total citations: 1; self-citations: 0; citations by others: 1)
In this paper, we show that a new set of visual features, derived from a feed-forward model of the primate visual object recognition pathway proposed by Riesenhuber and Poggio (the R&P model) (Nature Neurosci. 2(11):1019–1025, 1999), is capable of matching the performance of some of the best current representations for face identification and facial expression recognition. Previous work has shown that the Riesenhuber and Poggio model features can achieve a high level of performance on object recognition tasks (Serre, T., et al. in IEEE Comput. Vis. Pattern Recognit. 2:994–1000, 2005). Here we modify the R&P model in order to create a new set of features useful for face identification and expression recognition. Results from tests on the FERET, ORL and AR datasets show that these features are capable of matching and sometimes outperforming other top visual features such as local binary patterns (Ahonen, T., et al. in 8th European Conference on Computer Vision, pp. 469–481, 2004) and histogram of oriented gradient features (Dalal, N., Triggs, B. in International Conference on Computer Vision & Pattern Recognition, pp. 886–893, 2005). Having a model based on shared lower-level features, and face- and object-recognition-specific higher-level features, is consistent with findings from electrophysiology and functional magnetic resonance imaging experiments. Thus, our model begins to address the complete recognition problem in a biologically plausible way.

15.
The grid is a promising infrastructure that allows scientists and engineers to access resources across geographically distributed environments. Grid computing is a new technology that focuses on aggregating resources (e.g., processor cycles, disk storage, and contents) into a large-scale computing platform. Making grid computing a reality requires a resource broker to manage and monitor available resources. This paper presents a workflow-based resource broker whose main functions are matching available resources with user requests and taking network status information into account during matchmaking in computational grids. The resource broker provides a graphical user interface for accessing available and appropriate resources via user credentials. The broker uses the Ganglia and NWS tools to monitor resource status and network-related information, respectively. We then propose a history-based execution time estimation model to predict the execution time of parallel applications based on previous execution results. The experimental results show that our model can accurately predict the execution time of embarrassingly parallel applications. We also report on using the Globus Toolkit to construct a grid platform called the TIGER project, which integrates resources distributed across five universities in Taichung City, Taiwan, where the resource broker was developed.
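A hedged sketch of a history-based execution time estimate like the one the abstract describes, not the TIGER broker's actual model: keep past runtimes keyed by (application, resource) and predict with their mean time per unit of problem size. The keying and the linear scaling by size are assumptions made for illustration.

```python
from collections import defaultdict

history = defaultdict(list)   # (app, resource) -> [(size, seconds), ...]

def record(app, resource, size, seconds):
    """Store an observed runtime for later predictions."""
    history[(app, resource)].append((size, seconds))

def estimate(app, resource, size):
    """Predict runtime as size * mean(seconds per size unit) from past runs."""
    runs = history[(app, resource)]
    if not runs:
        return None                                  # no history yet
    per_unit = [t / s for s, t in runs if s > 0]
    return size * sum(per_unit) / len(per_unit)
```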

16.
In the pool of cloud providers currently available, there is a lack of standardised APIs and brokering tools to effectively distribute high-throughput calculations among them. Moreover, current middleware tools are not able to straightforwardly provision the ephemeral and specific environments that certain codes and simulations require. These facts prevent the large-scale portability of legacy applications to cloud environments. Such issues can be overcome by effectively scheduling the distributed calculations using the basic capacities offered by cloud federations. In this work, a framework achieving this goal is presented: a pilot system (GWpilot) that has been improved with cloud computing capabilities (GWcloud). This framework profits from the expertise acquired in grid federations and provides features that make it more efficient, flexible and usable than other approaches. Thus, decentralisation, middleware independence, dynamic brokering, on-demand provisioning of specific virtual images, compatibility with legacy applications, and the efficient accomplishment of short tasks, among other features, are achieved. Moreover, the new framework is multi-user and multi-application, dynamically instantiating virtual machines depending on the available and demanded resources, i.e. allowing users to consolidate their own resource provisioning. Results presented in this work demonstrate these features by efficiently executing several legacy applications with very different requirements on the FedCloud infrastructure at the same time.

17.
Modeling the dependence of credit ratings is an important issue for portfolio credit risk analysis. Multivariate Markov chain models are a feasible mathematical tool for modeling the dependence of credit ratings. Here we develop a flexible multivariate Markov chain model for modeling the dependence of credit ratings. The proposed model provides a parsimonious way to capture both the cross-sectional and temporal associations among ratings of individual entities. The number of model parameters is of the order O(sm^2 + s^2m), where m is the number of rating categories and s is the number of entities in a credit portfolio. The proposed model is also easy to implement. The estimation method is formulated as a set of s linear programming problems, and the estimation algorithm can be implemented easily in a Microsoft EXCEL worksheet; see Ching et al., Int J Math Educ Sci Eng 35:921–932 (2004). We illustrate the practical implementation of the proposed model using real ratings data. We evaluate risk measures, such as Value at Risk and Expected Shortfall, for a credit portfolio using the proposed model and compare the risk measures with those arising from Ching et al., IMR Preprint Series (2007), and Siu et al., Quant Finance 5:543–556 (2005).

18.
In some hard real-time systems, relative timing constraints may be imposed on task executions, in addition to the release time and deadline constraints. Relative timing constraints such as separation or relative deadline constraints may be given between start or finish times of tasks (Gerber et al., 1995; Han and Lin, 1989; Han et al., 1992; Han and Lin, 1992; Han et al., 1996). One approach in real-time scheduling is to find a total order on a set of N tasks in a scheduling window, and cyclically use this order at run time to execute tasks. However, in the presence of relative timing constraints, if the task execution times are nondeterministic with defined lower and upper bounds, it is not always possible to statically assign task start times at pre-runtime for a given task ordering (Gerber et al., 1995). We develop a technique called dynamic cyclic dispatching as an extension of the parametric dispatching mechanism in (Gerber et al., 1995). An ordered set of N tasks is assumed to be given in a scheduling window, and this schedule (ordering) is cyclically repeated at runtime in consecutive scheduling windows. Relative timing constraints between tasks may be defined across scheduling window boundaries as well as within one scheduling window. A task set is defined to be dispatchable if there exists any way in which the tasks can be dispatched with all their timing constraints satisfied. An off-line algorithm is presented to check the dispatchability of a task set and to obtain parametric lower and upper bound functions for task start times if the task set is dispatchable. These parametric bound functions are evaluated at runtime to obtain a valid time interval during which a task can be started. The complexity of this off-line component is shown to be O(n^2 N^3), where n is the number of tasks in a scheduling window that have relative timing constraints with tasks in the next scheduling window. An online algorithm can evaluate these bounds in O(N) time. Unlike static approaches, which assign fixed start times to tasks in the scheduling window, our approach allows us to flexibly manage slack times at runtime without sacrificing the dispatchability of tasks. Also, a wider class of relative timing constraints can be imposed on the task set compared to traditional approaches.
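A hedged sketch of the runtime side of parametric dispatching as described above, not the paper's algorithm: the off-line phase is assumed to have produced, for each task i, bound functions of the start/finish times observed so far, and at run time each pair is evaluated to obtain the window in which the task may start. The callable interfaces and the use of a single monotonic clock for all bounds are assumptions of the sketch.

```python
import time

def dispatch(tasks, lower_bounds, upper_bounds, run_task):
    """Dispatch tasks of one scheduling window inside their parametric windows."""
    observed = []                          # (start, finish) times seen so far
    for i, task in enumerate(tasks):
        lo = lower_bounds[i](observed)     # earliest legal start (same clock)
        hi = upper_bounds[i](observed)     # latest legal start
        if lo > hi:
            raise RuntimeError(f"task {i} is not dispatchable")
        now = time.monotonic()
        if now < lo:
            time.sleep(lo - now)           # wait until the window opens
        start = time.monotonic()
        run_task(task)
        observed.append((start, time.monotonic()))
    return observed
```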

19.
This paper describes the development steps and core ideas used by the USP Farmers herding team, which participated in the 2010 edition of the Multi-Agent Programming Contest (MAPC 2010). This was the third year in which competitors had to design a team of herding agents whose global goal is to lead a maximum number of cows to their own corral. As this is a very complex task that requires team coordination, we developed the individual agents using the Jason (Bordini et al. 2007) interpreter for AgentSpeak(L) (Rao 1996). Moreover, the coordination strategy was defined using the MOISE+ (Hübner et al. 2002, 2007) organizational model. We also used the idea of artifacts (Ricci et al. 2007) to develop global services available to all the agents. Finally, it became clear that for this contest some purely procedural processing should be developed at a lower abstraction level (Hübner et al. 2008); therefore, some calculations and pre-defined global decisions were implemented in Java classes.

20.
The class of alternating group networks was introduced in the late 1990s as an alternative to the alternating group graphs as interconnection networks. Recently, additional properties of the alternating group networks have been published. In particular, Zhou et al., J. Supercomput. (2009), was published very recently in this journal. We show that this so-called new interconnection topology is in fact isomorphic to the (n,n−2)-star, a member of the well-known (n,k)-stars, 1 ≤ k ≤ n−1, a class of popular networks proposed earlier for which a large amount of work has already been done. Specifically, the problem in Zhou et al., J. Supercomput. (2009), was addressed in Lin and Duh, Inf. Sci. 178(3), 788–801, 2008, for the case k = n−2.
