Similar Documents
20 similar documents found (search time: 31 ms)
1.
Based on the method of (n,k)-universal sets, we present a deterministic parameterized algorithm for the weighted rd-matching problem with time complexity O(4^{(r−1)k+o(k)}), improving the previous best upper bound O(4^{rk+o(k)}). In particular, the algorithm applied to the unweighted 3d-matching problem results in a deterministic algorithm with time O(16^{k+o(k)}), improving the previous best result O(21.26^k). For the weighted r-set packing problem, we present a deterministic parameterized algorithm with time complexity O(2^{(2r−1)k+o(k)}), improving the previous best result O(2^{2rk+o(k)}). The algorithm, when applied to the unweighted 3-set packing problem, has running time O(32^{k+o(k)}), improving the previous best result O(43.62^{k+o(k)}). Moreover, for the weighted r-set packing and weighted rd-matching problems, we give a kernel of size O(k^r), which is the first kernelization algorithm for the weighted versions of these problems.
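To see how the general bounds specialize, substituting r = 3 recovers the stated 3d-matching and 3-set packing running times:

$$O\bigl(4^{(r-1)k+o(k)}\bigr)\Big|_{r=3} = O\bigl(4^{2k+o(k)}\bigr) = O\bigl(16^{k+o(k)}\bigr), \qquad O\bigl(2^{(2r-1)k+o(k)}\bigr)\Big|_{r=3} = O\bigl(2^{5k+o(k)}\bigr) = O\bigl(32^{k+o(k)}\bigr).$$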

2.
We consider a model for online computation in which the online algorithm receives, together with each request, some information regarding the future, referred to as advice. The advice is a function, defined by the online algorithm, of the whole request sequence. The advice provided to the online algorithm may allow an improvement in its performance, compared to the classical model of complete lack of information regarding the future. We are interested in the impact of such advice on the competitive ratio, and in particular, in the relation between the size b of the advice, measured in terms of bits of information per request, and the (improved) competitive ratio. Since b = 0 corresponds to the classical online model, and b = ⌈log |A|⌉, where A is the algorithm's action space, corresponds to the optimal (offline) one, our model spans a spectrum of settings ranging from classical online algorithms to offline ones. In this paper we propose the above model and illustrate its applicability by considering two of the most extensively studied online problems, namely, metrical task systems (MTS) and the k-server problem. For MTS we establish tight (up to constant factors) upper and lower bounds on the competitive ratio of deterministic and randomized online algorithms with advice for any choice of 1 ≤ b ≤ Θ(log n), where n is the number of states in the system: we prove that any randomized online algorithm for MTS has competitive ratio Ω(log(n)/b) and we present a deterministic online algorithm for MTS with competitive ratio O(log(n)/b). For the k-server problem we construct a deterministic online algorithm for general metric spaces with competitive ratio k^{O(1/b)} for any choice of Θ(1) ≤ b ≤ log k.
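As a rough illustration of the advice spectrum described above (a toy sketch of the model, not the paper's MTS or k-server algorithm): b advice bits per request let an oracle narrow an action space of size |A| down to about |A|/2^b candidates, with b = ⌈log2 |A|⌉ pinning down the offline-optimal action exactly.

```python
import math

def advice_spectrum(num_actions):
    """Toy illustration: b advice bits per request cut an action space
    of size |A| to at most ceil(|A| / 2**b) candidate actions, spanning
    classical online (b = 0) to offline (b = ceil(log2 |A|))."""
    b_max = math.ceil(math.log2(num_actions))
    for b in range(b_max + 1):
        candidates = math.ceil(num_actions / 2 ** b)
        print(f"b = {b:2d} bits/request -> <= {candidates} candidate actions")

advice_spectrum(16)   # b = 0: 16 candidates ... b = 4: exactly 1 candidate
```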

3.
4.
We show an O*((ℓ+1)^n)-time algorithm for the channel assignment problem, where ℓ is the maximum edge weight. This improves on the previous O*((ℓ+2)^n)-time algorithm by Král (2005) [1], as well as algorithms for important special cases, like L(2,1)-labeling. For the latter problem, our algorithm works in O*(3^n) time. The progress is achieved by applying the fast zeta transform in combination with the inclusion-exclusion principle.
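For reference, the fast zeta transform used here computes all subset sums of a function over the lattice of subsets in O(2^n · n) additions rather than the naive O(3^n). A minimal generic sketch (the transform itself, not the paper's full channel assignment algorithm):

```python
def fast_zeta_transform(f, n):
    """Given f indexed by the 2**n subsets of {0,...,n-1} (bitmasks),
    return g with g[S] = sum of f[T] over all subsets T of S,
    using O(2**n * n) additions instead of the naive O(3**n)."""
    g = list(f)
    for i in range(n):                 # fold in one coordinate at a time
        bit = 1 << i
        for S in range(1 << n):
            if S & bit:                # add the value of S without bit i
                g[S] += g[S ^ bit]
    return g

# subset sums over {0,1,2}; e.g. g[0b111] = 36, the sum of all entries
print(fast_zeta_transform([1, 2, 3, 4, 5, 6, 7, 8], 3))
```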

5.
This paper investigates the problem of minimizing makespan on a single batch-processing machine, where the machine can process multiple jobs simultaneously. Each job is characterized by a release time, a processing time, and a job size. We establish a mixed integer programming model and propose a valid lower bound for this problem. By introducing the notion of waste and idle space (WIS), the problem is shown to be equivalent to minimizing the WIS of the schedule. Since the problem is NP-hard, we propose a heuristic and an ant colony optimization (ACO) algorithm based on the theorems presented. A candidate list strategy and a new method to construct heuristic information were introduced for the ACO approach to achieve a satisfactory solution in a reasonable computational time. Through extensive computational experiments, appropriate ACO parameter values were chosen and the effectiveness of the proposed algorithms was evaluated by solution quality and run time. The results showed that the ACO algorithm combined with the candidate list was more robust and consistently outperformed a genetic algorithm (GA), CPLEX, and the other two heuristics, especially for large job instances.
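For intuition about the problem structure (release times, a batch capacity on total job size, and a batch processing time equal to the longest job it contains), here is a simple first-fit baseline; this is a hypothetical illustration, not the paper's heuristic or ACO algorithm:

```python
def greedy_batch_makespan(jobs, capacity):
    """First-fit baseline for single batch-machine makespan: sort jobs by
    release time, open a new batch when the next job does not fit, start
    each batch once the machine is free and all its jobs are released.
    jobs: list of (release_time, processing_time, size)."""
    jobs = sorted(jobs)                          # by release time
    batches, current, used = [], [], 0
    for r, p, s in jobs:
        if used + s > capacity:                  # job does not fit: close batch
            batches.append(current)
            current, used = [], 0
        current.append((r, p, s))
        used += s
    if current:
        batches.append(current)
    t = 0.0
    for batch in batches:
        start = max(t, max(r for r, _, _ in batch))   # wait for releases
        t = start + max(p for _, p, _ in batch)       # batch runs its max p
    return t

jobs = [(0, 4, 3), (1, 2, 2), (2, 5, 4), (6, 1, 1)]
print(greedy_batch_makespan(jobs, capacity=5))   # -> 11.0
```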

6.
7.
The k-set agreement problem is a generalization of the uniform consensus problem: each process proposes a value, and each non-faulty process has to decide a value such that a decided value is a proposed value, and at most k different values are decided. It has been shown that any algorithm that solves the k-set agreement problem in synchronous systems that can suffer up to t crash failures requires ⌊t/k⌋ + 1 rounds in the worst case. It has also been shown that it is possible to design early deciding algorithms where no process decides and halts after min(⌊f/k⌋ + 2, ⌊t/k⌋ + 1) rounds, where f is the number of actual crashes in a run (0 ≤ f ≤ t). This paper explores a new direction to solve the k-set agreement problem in a synchronous system. It considers that the system is enriched with base objects (denoted [m,ℓ]_SA objects) that allow solving the ℓ-set agreement problem in a set of m processes (m < n). The paper makes several contributions. It first proposes a synchronous k-set agreement algorithm that benefits from such underlying base objects. This algorithm requires O(tℓ/(mk)) rounds, more precisely ⌊t/Δ⌋ + 1 rounds, where Δ = m⌊k/ℓ⌋ + (k mod ℓ). The paper then shows that this bound, which involves all the parameters that characterize both the problem (k) and its environment (t, m and ℓ), is a lower bound. The proof of this lower bound sheds additional light on the deep connection between synchronous efficiency and asynchronous computability. Finally, the paper extends its investigation to the early deciding case. It presents a k-set agreement algorithm that directs the processes to decide and stop by round min(⌊f/Δ⌋ + 2, ⌊t/Δ⌋ + 1). These bounds generalize the bounds previously established for solving the k-set agreement problem in pure synchronous systems.
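Writing the bound out (the exact form of Δ above is our reconstruction of the paper's round complexity), one can sanity-check that it degenerates to the classical bound when the base objects give no extra power:

$$\Delta = m\left\lfloor\frac{k}{\ell}\right\rfloor + (k \bmod \ell), \qquad R_t = \left\lfloor\frac{t}{\Delta}\right\rfloor + 1;$$

with trivial base objects ($m = \ell = 1$) this gives $\Delta = k$, recovering the classical $\lfloor t/k\rfloor + 1$ worst-case round complexity.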

8.
In 2000, Li et al. introduced dual-cube networks, denoted DCn for n ≥ 1, using the hypercube family Qn, and showed the vertex symmetry and some fault-tolerant hamiltonian properties of DCn. In this article, we introduce a new family of interconnection networks called dual-cube extensive networks, denoted DCEN(G). Given any arbitrary graph G, DCEN(G) is generated from G using a structure similar to that of DCn. We show that if G is a nonbipartite and hamiltonian connected graph, then DCEN(G) is hamiltonian connected. In addition, if G has the property that for any two distinct vertices u, v of G, there exist three disjoint paths between u and v such that these three paths span the graph G, then DCEN(G) preserves the same property. Furthermore, we prove that similar results hold when G is a bipartite graph.

9.
A string-based negative selection algorithm is an immune-inspired classifier that infers a partitioning of a string space Σ^ℓ into "normal" and "anomalous" partitions from a training set S containing only samples from the "normal" partition. The algorithm generates a set of patterns, called "detectors", to cover regions of the string space containing none of the training samples. Strings that match at least one of these detectors are then classified as "anomalous". A major problem with existing implementations of this approach is that the detector generating step needs exponential time in the worst case. Here we show that for the two most widely used kinds of detectors, the r-chunk and r-contiguous detectors based on partial matching to substrings of length r, negative selection can be implemented more efficiently by avoiding generating detectors altogether: for each detector type, training set S ⊆ Σ^ℓ and parameter r ≤ ℓ one can construct an automaton whose acceptance behaviour is equivalent to the algorithm's classification outcome. The resulting runtime is O(|S|ℓr|Σ|) for constructing the automaton in the training phase and O(ℓ) for classifying a string.
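The idea of classifying without ever generating detectors can be seen in a small sketch for the r-chunk case (a naive set-based variant over equal-length strings for illustration; the paper's automaton construction achieves O(ℓ) classification and also handles r-contiguous detectors):

```python
def train(S, r):
    """For every position i, record the set of length-r substrings that
    occur at position i in some training ("normal") string. All strings
    are assumed to have the same length ell (the string space Sigma^ell)."""
    ell = len(next(iter(S)))
    return [{s[i:i + r] for s in S} for i in range(ell - r + 1)]

def classify(x, normal_chunks, r):
    """r-chunk classification without detectors: x is anomalous iff some
    length-r window of x occurs in no training string at that position,
    i.e. x would match an (implicit) r-chunk detector."""
    return any(x[i:i + r] not in normal_chunks[i]
               for i in range(len(x) - r + 1))

S = {"abba", "abab"}
chunks = train(S, r=2)
print(classify("abba", chunks, 2))  # False: every 2-chunk seen in training
print(classify("bbbb", chunks, 2))  # True: "bb" never occurs at position 0
```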

10.
In this paper, we consider the coefficient-based regularized least-squares regression problem with the l^q-regularizer (1 ≤ q ≤ 2) and data dependent hypothesis spaces. Algorithms in data dependent hypothesis spaces perform well owing to their flexibility. We conduct a unified error analysis by a stepping stone technique. An empirical covering number technique is also employed in our study to improve the sample error. Compared with existing results, we make a few improvements: first, we obtain a significantly sharper learning rate that can be arbitrarily close to O(m^{−1}) under reasonable conditions, which is regarded as the best learning rate in learning theory. Second, our results cover the case q = 1, which is novel. Finally, our results hold under very general conditions.
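For the q = 2 endpoint of the scheme the coefficient-based estimator has a closed form, which makes the data-dependent hypothesis space f(x) = Σ_i α_i K(x, x_i) concrete. A minimal sketch (hypothetical kernel and regularization choices; q < 2 would need an iterative solver instead of this closed form):

```python
import numpy as np

def coefficient_ridge(K, y, lam):
    """q = 2 special case: minimise (1/m)||K a - y||^2 + lam ||a||_2^2
    over coefficient vectors a in the data-dependent hypothesis space."""
    m = len(y)
    return np.linalg.solve(K.T @ K + lam * m * np.eye(m), K.T @ y)

# toy usage with a Gaussian kernel on 20 points of a noisy sine curve
X = np.linspace(0, 1, 20)[:, None]
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * np.random.default_rng(0).normal(size=20)
K = np.exp(-((X - X.T) ** 2) / 0.1)      # pairwise Gaussian kernel matrix
alpha = coefficient_ridge(K, y, lam=1e-3)
print(K @ alpha - y)                     # small residuals on training data
```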

11.
An edge-cut F of a connected graph G is called a restricted edge-cut if G − F contains no isolated vertices. The minimum cardinality over all restricted edge-cuts is called the restricted edge-connectivity λ′(G) of G. A graph G is said to be λ′-optimal if λ′(G) = ξ(G), where ξ(G) is the minimum edge-degree of G. A graph is said to be super-λ′ if every minimum restricted edge-cut isolates an edge. This article gives a sufficient condition for Cartesian product graphs to be super-λ′. Using this result, certain classes of networks which are recursively defined by the Cartesian product can be simply shown to be super-λ′.
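The definitions can be made concrete with a brute-force computation of λ′(G) on a tiny graph (illustrative only; exponential in the number of edges):

```python
from itertools import combinations

def is_connected(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    seen, stack = set(), [next(iter(vertices))]
    while stack:
        u = stack.pop()
        if u not in seen:
            seen.add(u)
            stack.extend(adj[u] - seen)
    return seen == set(vertices)

def restricted_edge_connectivity(vertices, edges):
    """Smallest |F| such that G - F is disconnected and has no isolated
    vertex, i.e. lambda'(G); returns None if no restricted edge-cut exists."""
    for size in range(1, len(edges) + 1):
        for F in combinations(edges, size):
            rest = [e for e in edges if e not in F]
            degree = {v: 0 for v in vertices}
            for u, v in rest:
                degree[u] += 1; degree[v] += 1
            if all(d > 0 for d in degree.values()) and not is_connected(vertices, rest):
                return size
    return None

# The 4-cycle C4: lambda'(C4) = 2 = xi(C4), so C4 is lambda'-optimal.
print(restricted_edge_connectivity([0, 1, 2, 3],
                                   [(0, 1), (1, 2), (2, 3), (3, 0)]))  # -> 2
```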

12.
Although there has been much research on cluster analysis considering feature (or variable) weights, little effort has been made regarding sample weights in clustering. In practice, not every sample in a data set has the same importance in cluster analysis. It is therefore interesting to obtain the proper sample weights for clustering a data set. In this paper, we consider a probability distribution over a data set to represent its sample weights. We then apply the maximum entropy principle to automatically compute these sample weights for clustering. Such a method can generate sample-weighted versions of most clustering algorithms, such as k-means, fuzzy c-means (FCM) and expectation-maximization (EM). The proposed sample-weighted clustering algorithms are robust for data sets with noise and outliers. Furthermore, we also analyze the convergence properties of the proposed algorithms. This study also uses some numerical data and real data sets for demonstration and comparison. Experimental results and comparisons demonstrate that the proposed sample-weighted clustering algorithms are effective and robust clustering methods.
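A minimal sketch of the idea for k-means (our own illustrative formulation, not necessarily the paper's exact updates): take the weights that minimize Σ_i w_i d_i + γ Σ_i w_i log w_i subject to Σ_i w_i = 1, which gives the maximum-entropy-style rule w_i ∝ exp(−d_i/γ) and automatically down-weights outliers.

```python
import numpy as np

def maxent_weighted_kmeans(X, k, gamma=1.0, iters=50, seed=0):
    """Sample-weighted k-means sketch: w_i proportional to exp(-d_i/gamma),
    where d_i is sample i's squared distance to its nearest centre, so
    noisy samples far from every centre get small weight."""
    X = np.asarray(X, dtype=float)
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=-1)
        labels = d2.argmin(axis=1)
        d_near = d2[np.arange(len(X)), labels]
        w = np.exp(-d_near / gamma)          # maximum-entropy sample weights
        w /= w.sum()
        for j in range(k):                   # weighted centroid update
            mask = labels == j
            if mask.any():
                centres[j] = np.average(X[mask], axis=0, weights=w[mask])
    return centres, labels, w
```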

13.
14.
Super connectivity of line graphs
The super connectivity κ′ and the super edge-connectivity λ′ are more refined network reliability indices than the connectivity κ and the edge-connectivity λ. This paper shows that for a connected graph G of order at least four that is not a star and its line graph L(G), κ′(L(G)) = λ′(G) if and only if G is not super-λ′. As a consequence, we obtain the result of Hellwig et al. [Note on the connectivity of line graphs, Inform. Process. Lett. 91 (2004) 7] that κ(L(G)) = λ′(G). Furthermore, the authors show that the line graph of a super-λ′ graph is super-λ if the minimum degree is at least three.

15.
A proper k-vertex coloring of a graph is an equitable k-coloring if the sizes of the color classes differ by at most 1. A graph G is equitably k-choosable if, for any k-uniform list assignment L, G is L-colorable and each color appears on at most ⌈|V(G)|/k⌉ vertices. We prove in this paper that outerplane graphs are equitably k-choosable whenever k ≥ Δ, where Δ is the maximum degree. Moreover, we discuss equitable colorings of some d-degenerate graphs.

16.
Blanchet-Sadri et al. have shown that Avoidability, or the problem of deciding the avoidability of a finite set of partial words over an alphabet of size k ≥ 2, is NP-hard [F. Blanchet-Sadri, R. Jungers, J. Palumbo, Testing avoidability on sets of partial words is hard, Theoret. Comput. Sci. 410 (2009) 968-972]. Building on their work, we analyze in this paper the complexity of natural variations on the problem. While some of them are NP-hard, others are shown to be efficiently decidable. Using some combinatorial properties of de Bruijn graphs, we establish a correspondence between lengths of cycles in such graphs and periods of avoiding words, resulting in a tight bound for periods of avoiding words. We also prove that Avoidability can be solved in polynomial space, and reduces in polynomial time to the problem of deciding the avoidability of a finite set of partial words of equal length over the binary alphabet. We give a polynomial bound on the period of an infinite avoiding word, in the case of sets of full words, in terms of two parameters: the length and the number of words in the set. We give a polynomial space algorithm to decide if a finite set of partial words is avoided by a non-ultimately periodic infinite word. The same algorithm also decides if the number of words of length n avoiding a given finite set of partial words grows polynomially or exponentially with n.

17.
This study proposes two optimization mathematical models for the clustering and selection of suppliers. Model 1 performs an analysis of supplier clusters according to customer demand attributes, including production cost, product quality and production time. Model 2 uses the supplier clusters obtained from Model 1 to determine the appropriate supplier combinations. The study additionally proposes a two-phase method to solve the two mathematical models. Phase 1 integrates k-means and a simulated annealing algorithm with the Taguchi method (TKSA) to solve Model 1. Phase 2 uses an analytic hierarchy process (AHP) to weight every factor and then uses a simulated annealing algorithm with the Taguchi method (ATSA) to solve Model 2. Finally, a case study of parts supplier segmentation and evaluation is presented, comparing the proposed approach with different heuristic methods. The results show that TKSA+ATSA provides a quality solution for this problem.

18.
Bryant [On the complexity of VLSI implementations and graph representations of boolean functions with applications to integer multiplication, IEEE Trans. Comput. 40 (1991) 205-213] has shown that any OBDD for the function MUL_{n−1,n}, i.e. the middle bit of the n-bit multiplication, requires at least 2^{n/8} nodes. In this paper a stronger lower bound of essentially 2^{n/2}/61 is proven by a new technique, using a universal family of hash functions. As a consequence, one cannot hope anymore to verify e.g. 128-bit multiplication circuits using OBDD-techniques, because the representation of the middle bit of such a multiplier requires more than 3·10^{17} OBDD-nodes. Further, a first non-trivial upper bound of (7/3)·2^{4n/3} for the OBDD-size of MUL_{n−1,n} is provided.
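The 128-bit figure follows directly from the new lower bound:

$$\left.\frac{2^{n/2}}{61}\right|_{n=128} = \frac{2^{64}}{61} \approx \frac{1.84\times10^{19}}{61} \approx 3.0\times10^{17}\ \text{OBDD-nodes}.$$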

19.
In this paper, the concepts of Janowski functions and the conic regions are combined to define a new domain which represents the conic type regions. Different views of this modified conic domain for specific values are shown graphically for better understanding of the behavior of this domain. The class of such functions which map the open unit disk E onto this modified conic domain is defined. Also the classes of k-uniformly Janowski convex and k-Janowski starlike functions are defined and their coefficient inequalities are formulated. The coefficient bound for a certain class of analytic functions, proved by Owa et al. (2006) in [16], has also been improved.

20.
In this paper, we present new multivariate quantile distributions and utilise likelihood-free Bayesian algorithms for inferring the parameters. In particular, we apply a sequential Monte Carlo (SMC) algorithm that is adaptive in nature and requires very little tuning compared with other approximate Bayesian computation algorithms. Furthermore, we present a framework for the development of multivariate quantile distributions based on a copula. We consider bivariate and time series extensions of the g-and-k distribution under this framework, and develop an efficient component-wise updating scheme free of likelihood functions to be used within the SMC algorithm. In addition, we trial the set of octiles as summary statistics as well as functions of these that form robust measures of location, scale, skewness and kurtosis. We show that these modifications lead to reasonably precise inferences that are more closely comparable to computationally intensive likelihood-based inference. We apply the quantile distributions and algorithms to simulated data and an example involving daily exchange rate returns.
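For context, the univariate g-and-k distribution underlying these extensions is defined through its quantile function (standard form with the conventional c = 0.8); it is trivial to simulate from even though its density has no closed form, which is exactly why likelihood-free (ABC/SMC) inference is attractive. A minimal sketch:

```python
import numpy as np
from scipy.stats import norm

def gk_quantile(p, a, b, g, k, c=0.8):
    """Quantile function of the univariate g-and-k distribution:
    a, b control location/scale, g skewness, k kurtosis; z(p) is the
    standard normal quantile. (1-exp(-gz))/(1+exp(-gz)) = tanh(gz/2)."""
    z = norm.ppf(p)
    return a + b * (1 + c * np.tanh(g * z / 2)) * (1 + z ** 2) ** k * z

# simulate draws by pushing uniforms through the quantile function
rng = np.random.default_rng(1)
u = rng.uniform(size=5)
print(gk_quantile(u, a=0.0, b=1.0, g=0.5, k=0.2))
```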

