Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
Based on the method of (n,k)-universal sets, we present a deterministic parameterized algorithm for the weighted r-d matching problem with time complexity O(4^((r−1)k+o(k))), improving the previous best upper bound O(4^(rk+o(k))). In particular, the algorithm applied to the unweighted 3-d matching problem results in a deterministic algorithm with running time O(16^(k+o(k))), improving the previous best result O(21.26^k). For the weighted r-set packing problem, we present a deterministic parameterized algorithm with time complexity O(2^((2r−1)k+o(k))), improving the previous best result O(2^(2rk+o(k))). Applied to the unweighted 3-set packing problem, the algorithm runs in time O(32^(k+o(k))), improving the previous best result O(43.62^(k+o(k))). Moreover, for the weighted r-set packing and weighted r-d matching problems, we give a kernel of size O(k^r), the first kernelization algorithm for the weighted versions of these problems.

2.
The main aim of this study is to derive the fuzzy Euler-Lagrange conditions for both unconstrained and constrained fuzzy variational problems, based on concepts of differentiability and integrability of a fuzzy mapping that may be parameterized by the left- and right-hand functions of its α-level sets.

3.
The selection of a subset of input variables is often based on the previous construction of a ranking to order the variables according to a given criterion of relevancy. The objective is then to linearize the search, estimating the quality of subsets containing the topmost ranked variables. An algorithm devised to rank input variables according to their usefulness in the context of a learning task is presented. This algorithm is the result of a combination of simple and classical techniques, like correlation and orthogonalization, which allow the construction of a fast algorithm that also deals explicitly with redundancy. Additionally, the proposed ranker is endowed with a simple polynomial expansion of the input variables to cope with nonlinear problems. The comparison with some state-of-the-art rankers showed that this combination of simple components is able to yield high-quality rankings of input variables. The experimental validation is made on a wide range of artificial data sets and the quality of the rankings is assessed using a ROC-inspired setting, to avoid biased estimations due to any particular learning algorithm.
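The combination described above, correlation-based scoring with orthogonalization to handle redundancy, can be sketched as a simple greedy ranker. This is a minimal illustration of the general idea, not the paper's exact algorithm (which also includes a polynomial expansion of the inputs):

```python
import numpy as np

def rank_variables(X, y):
    """Greedy ranking sketch: repeatedly pick the candidate column most
    correlated with the target, then orthogonalize the remaining columns
    against the pick so redundant (correlated) variables are demoted."""
    R = (X - X.mean(axis=0)).astype(float)   # centered working copy
    y = y - y.mean()
    remaining = list(range(R.shape[1]))
    ranking = []
    while remaining:
        # score each remaining candidate by |correlation| with the target
        scores = {}
        for j in remaining:
            c = np.corrcoef(R[:, j], y)[0, 1]
            scores[j] = 0.0 if np.isnan(c) else abs(c)
        best = max(scores, key=scores.get)
        ranking.append(best)
        remaining.remove(best)
        # deflate: remove the chosen direction from the other candidates
        norm = np.linalg.norm(R[:, best])
        if norm > 1e-12:
            v = R[:, best] / norm
            for j in remaining:
                R[:, j] -= (R[:, j] @ v) * v
    return ranking
```

After a variable is picked, an exact duplicate of it is deflated to (numerical) zero and can no longer outrank genuinely informative variables, which is the explicit redundancy handling the abstract refers to.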

4.
The problem of constructing optimal designs when some of the factors are not under the control of the experimenters is considered. The values of these factors may be known or unknown before the experiment is carried out. Several criteria are considered for finding optimal conditional designs given prior information on the factors, and a class of multiplicative algorithms is provided to compute them. Optimal designs are computed for illustrative, but simplistic, examples. Two real-life problems, concerning production models and a physical test for predicting morbidity in lung cancer surgery, motivate the procedures provided.
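As a point of reference for the multiplicative algorithms mentioned above, here is the classical multiplicative update for approximate D-optimal design weights (the Fellman/Titterington-style iteration for the unconditional case; the paper's conditional variants build on this kind of update):

```python
import numpy as np

def d_optimal_weights(X, iters=500):
    """Multiplicative algorithm sketch for approximate D-optimal weights.
    X: candidate design points as rows of the model matrix.
    Update rule: w_i <- w_i * d_i / m, where d_i = x_i' M^{-1} x_i is the
    variance function and m is the number of model parameters."""
    n, m = X.shape
    w = np.full(n, 1.0 / n)
    for _ in range(iters):
        M = X.T @ (w[:, None] * X)                 # information matrix
        Minv = np.linalg.inv(M)
        d = np.einsum('ij,jk,ik->i', X, Minv, X)   # variance function d_i
        w = w * d / m
        w = w / w.sum()                            # keep weights a distribution
    return w
```

For simple linear regression over the candidate points −1, 0, 1, the iteration concentrates all weight on the two extreme points, the known D-optimal design for that model.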

5.
This study proposes two optimization mathematical models for the clustering and selection of suppliers. Model 1 performs an analysis of supplier clusters according to customer demand attributes, including production cost, product quality and production time. Model 2 uses the supplier clusters obtained from Model 1 to determine appropriate supplier combinations. The study additionally proposes a two-phase method to solve the two models. Phase 1 integrates k-means and a simulated annealing algorithm with the Taguchi method (TKSA) to solve Model 1. Phase 2 uses an analytic hierarchy process (AHP) to weight every factor and then applies a simulated annealing algorithm with the Taguchi method (ATSA) to solve Model 2. Finally, a case study on parts-supplier segmentation and evaluation is performed, comparing different heuristic methods. The results show that TKSA+ATSA provides a quality solution for this problem.
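The core of Phase 1 is clustering suppliers by demand attributes. A plain k-means loop over (cost, quality, time) vectors illustrates the grouping step; the paper's TKSA method layers simulated annealing and Taguchi-tuned parameters on top of this basic idea:

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means sketch: assign each supplier (a row of attribute
    values) to its nearest center, then move each center to the mean of
    its assigned suppliers. A simplified stand-in for Phase 1."""
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # squared distance from every point to every center
        labels = np.argmin(((points[:, None] - centers) ** 2).sum(-1), axis=1)
        for c in range(k):
            if np.any(labels == c):
                centers[c] = points[labels == c].mean(axis=0)
    return labels, centers
```

Here each row is a hypothetical supplier described by (cost, quality, delivery time); with well-separated attribute profiles, two clear supplier clusters emerge.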

6.
A string-based negative selection algorithm is an immune-inspired classifier that infers a partitioning of a string space Σ^ℓ into “normal” and “anomalous” partitions from a training set S containing only samples from the “normal” partition. The algorithm generates a set of patterns, called “detectors”, to cover regions of the string space containing none of the training samples. Strings that match at least one of these detectors are then classified as “anomalous”. A major problem with existing implementations of this approach is that the detector-generating step needs exponential time in the worst case. Here we show that for the two most widely used kinds of detectors, the r-chunk and r-contiguous detectors based on partial matching to substrings of length r, negative selection can be implemented more efficiently by avoiding generating detectors altogether: for each detector type, training set S ⊆ Σ^ℓ and parameter r ≤ ℓ, one can construct an automaton whose acceptance behaviour is equivalent to the algorithm’s classification outcome. The resulting runtime is O(|S|·ℓ·r·|Σ|) for constructing the automaton in the training phase and O(ℓ) for classifying a string.
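The key observation, that classification can be decided without enumerating detectors, is easy to see for r-chunk detectors: a string matches some r-chunk detector exactly when one of its length-r windows was never observed at that position in the training set. A set-based sketch of that equivalence (not the paper's automaton construction, which achieves the stated runtimes):

```python
def train_chunks(S, r):
    """For each window position, collect the set of length-r substrings
    ('r-chunks') occurring in the training set S of equal-length strings."""
    ell = len(S[0])
    return [{s[i:i + r] for s in S} for i in range(ell - r + 1)]

def is_anomalous(x, chunks, r):
    """x matches some r-chunk detector iff one of its length-r windows
    never appears at that position in the training data."""
    return any(x[i:i + r] not in chunks[i] for i in range(len(chunks)))
```

For example, with training set {"aaaa", "aaab"} and r = 2, the string "aaab" is classified normal (every window is seen in training), while "abaa" is anomalous because the chunk "ab" never occurs at position 0 in training.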

7.
Fuzzy-clustering methods, such as fuzzy k-means and expectation maximization, allow an object to be assigned to multiple clusters with different degrees of membership. However, the memberships produced by fuzzy-clustering algorithms are difficult to analyze and visualize; they are usually converted to 0-1 values and visualized using parallel coordinates or different color shades. In this paper, we propose a new approach to visualizing fuzzy-clustered data. The scheme is based on a geometric visualization and works by grouping objects with similar cluster memberships towards the vertices of a hyper-tetrahedron. The proposed method shows clear advantages over existing methods, demonstrating its capability for viewing and navigating inter-cluster relationships in a spatial manner.
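The geometric idea can be illustrated in two dimensions: place one vertex per cluster on a regular polygon and map each object to the membership-weighted average of the vertices, so objects with similar memberships land near each other and crisp members sit at the vertices. This is a 2-D analogue of the hyper-tetrahedron scheme, not the paper's exact construction:

```python
import numpy as np

def simplex_coordinates(memberships):
    """Map fuzzy membership vectors (rows summing to 1) to 2-D points:
    cluster c gets a vertex on the unit circle, and each object is the
    membership-weighted average of those vertices."""
    k = memberships.shape[1]
    angles = 2 * np.pi * np.arange(k) / k
    vertices = np.column_stack([np.cos(angles), np.sin(angles)])
    return memberships @ vertices
```

An object fully in cluster 0 lands exactly on vertex 0, while an object with equal memberships in all clusters lands at the centroid of the polygon.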

8.
In this paper, we introduce a modified new hybrid projection method for finding the set of solutions of generalized mixed equilibrium problems and convex feasibility problems for an infinite family of closed and uniformly quasi-φ-asymptotically nonexpansive mappings. Strong convergence theorems are established in a uniformly smooth and strictly convex Banach space that also enjoys the Kadec-Klee property. Our results improve and extend the corresponding results announced by Qin et al. (2010) and many others.

9.
Elections are a central model in a variety of areas. This paper studies parameterized computational complexity of five control problems in the Maximin election. We obtain the following results: constructive control by adding candidates is W[2]-hard with respect to the parameter “number of added candidates”; both constructive and destructive control by adding/deleting voters are W[1]-hard with respect to the parameter “number of added/deleted voters”.

10.
11.
This paper proposes a new multi-robot coordinated exploration algorithm that applies a global optimization strategy based on K-means clustering to guarantee a balanced and sustained exploration of large workspaces. The algorithm optimizes the on-line assignment of robots to targets, keeps the robots working in separate areas, and efficiently reduces the variance of the average waiting time across those areas. The latter ensures that the different areas of the workspace are explored at a similar speed, avoiding situations in which some areas are explored much later than others; this is desirable for many exploration applications, such as search and rescue. The algorithm achieves the lowest variance of regional waiting time (WTV) and the lowest variance of regional exploration percentages (EPV). Both features are presented through a comparative evaluation of the proposed algorithm against different state-of-the-art approaches.

12.
In this paper, a necessary and sufficient condition is proved for the solvability of the parameter-insensitive disturbance-rejection problem with state feedback, which Bhattacharyya posed as an open problem. A constructive algorithm for simultaneously (A,B)-invariant subspaces for a finite number of linear systems, together with a relationship between simultaneously (A,B)-invariant subspaces and generalized (A,B)-invariant subspaces, plays an important role in proving the main result.

13.
Statistical models are often based on normal distributions, so procedures for testing this distributional assumption are needed. Many goodness-of-fit tests are available; however, most of them are quite insensitive in detecting non-normality when the alternative distribution is symmetric, while all of them are quite powerful against skewed alternatives. A new test for normality based on polynomial regression is presented. It is very effective in detecting non-normality when the alternative distribution is symmetric. A comparison between well-known tests and the new procedure is performed via a simulation study, and other properties are also investigated.

14.
This article is concerned with robust stability analysis of discrete-time systems and introduces a novel and powerful technique that we call noncausal linear periodically time-varying (LPTV) scaling. Based on discrete-time lifting together with the conventional but general scaling approach, we arrive at the notion of noncausal LPTV scaling for LPTV systems, and its effectiveness is demonstrated with a numerical example. To separate the effects of the noncausal and LPTV characteristics of noncausal LPTV scaling, and to see which is the more important source of this effectiveness, we then consider the case of LTI systems as a special case. We show that even static noncausal LPTV scaling can induce frequency-dependent scaling when viewed in the context of conventional LTI scaling, while causal LPTV scaling fails to do so. We further discuss how the effectiveness of the noncausal characteristics leading to this frequency-domain interpretation can be exploited even for LPTV systems by considering the νN-lifted transfer matrices of N-periodic systems.

15.
The goal of cluster analysis is to assign observations into clusters so that observations in the same cluster are similar in some sense. Many clustering methods have been developed in the statistical literature, but these methods are inappropriate for clustering family data, which possess intrinsic familial structure. To incorporate the familial structure, we propose a form of penalized cluster analysis with a tuning parameter controlling the tradeoff between the observation dissimilarity and the familial structure. The tuning parameter is selected based on the concept of clustering stability. The effectiveness of the method is illustrated via simulations and an application to a family study of asthma.
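The tradeoff controlled by the tuning parameter can be made concrete with a toy penalized criterion: within-cluster dissimilarity plus a penalty for each same-family pair split across clusters. This is an illustrative stand-in for the paper's criterion, whose exact form is not given in the abstract:

```python
import numpy as np
from itertools import combinations

def penalized_cost(D, families, labels, lam):
    """Toy penalized clustering criterion: total within-cluster
    dissimilarity plus lam times the number of same-family pairs that
    are split across clusters (the familial-structure penalty)."""
    idx_pairs = list(combinations(range(len(labels)), 2))
    cost = sum(D[i, j] for i, j in idx_pairs if labels[i] == labels[j])
    split = sum(1 for i, j in idx_pairs
                if families[i] == families[j] and labels[i] != labels[j])
    return cost + lam * split
```

With lam = 0 the criterion reduces to ordinary within-cluster dissimilarity; larger lam increasingly favors keeping family members together, which is the tradeoff the tuning parameter governs.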

16.
In this paper we argue that the solitary solution of the Liouville equation produced by the Exp-function method does not satisfy the original differential equation for all initial conditions. Moreover, the region where the solution is correct is located entirely on a curve in the parameter plane of initial conditions. We derive the explicit equation for this curve and argue that classical Exp-function type methods cannot identify such constraints related to initial conditions.

17.
Various design and model selection methods are available for supersaturated designs having more factors than runs but little research is available on their comparison and evaluation. Simulated experiments are used to evaluate the use of E(s2)-optimal and Bayesian D-optimal designs and to compare three analysis strategies representing regression, shrinkage and a novel model-averaging procedure. Suggestions are made for choosing the values of the tuning constants for each approach. Findings include that (i) the preferred analysis is via shrinkage; (ii) designs with similar numbers of runs and factors can be effective for a considerable number of active effects of only moderate size; and (iii) unbalanced designs can perform well. Some comments are made on the performance of the design and analysis methods when effect sparsity does not hold.

18.
In this work, we study the Abelian sandpile model from the point of view of computational complexity. We begin by studying the length distribution of sandpile avalanches triggered by the addition of two critical configurations: we prove that those avalanches are long on average, and that their length is bounded below by a constant fraction of the length of the longest critical avalanche, which is, in most cases, superlinear. We then analyze the algorithmic hardness of computing the addition of two critical configurations: we prove that this problem is P-complete, and that most algorithmic problems related to the Abelian sandpile model are NC-reducible to it.
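The operation whose hardness is studied, adding two configurations and stabilizing, is simple to state: sum the configurations cell-wise, then repeatedly topple any cell holding at least 4 grains, sending one grain to each of its 4 neighbours (grains falling off the boundary are lost). A minimal grid sketch:

```python
import numpy as np

def stabilize(config, threshold=4):
    """Topple a sandpile on a grid until stable: any cell with
    >= threshold grains sends one grain to each 4-neighbour; grains
    leaving the boundary are lost (sink boundary). Returns the stable
    configuration and the number of topplings (the avalanche size).
    By the abelian property, the toppling order does not matter."""
    c = config.copy()
    topplings = 0
    while True:
        unstable = np.argwhere(c >= threshold)
        if len(unstable) == 0:
            return c, topplings
        for i, j in unstable:
            c[i, j] -= threshold
            topplings += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < c.shape[0] and 0 <= nj < c.shape[1]:
                    c[ni, nj] += 1
```

The addition of two configurations a and b is then `stabilize(a + b)`; the paper's P-completeness result says this seemingly simple stabilization process is inherently sequential in the worst case.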

19.
A goodness-of-fit testing procedure for Archimedean copula (AC) models is developed based on right-censored data. The proposed approach extends an existing method, which is suitable for the Clayton model, to general AC models. Asymptotic properties of the proposed test statistics under the true model assumption are derived. Simulation analysis shows that the proposed test has reasonable performance. Finally, two real data examples are analyzed for illustrative purposes.

20.
This article is about testing the equality of several normal means when the variances are unknown and arbitrary, i.e., the setup of one-way ANOVA. Even though several tests are available in the literature, none of them perform well in terms of Type I error probability under various sample size and parameter combinations. In fact, Type I errors can be highly inflated for some of the commonly used tests, a serious issue that appears to have been overlooked. We propose a parametric bootstrap (PB) approach and compare it with three existing location-scale invariant tests: the Welch test, the James test, and the generalized F (GF) test. The Type I error rates and powers of the tests are evaluated using Monte Carlo simulation. Our studies show that the PB test is the best of the four tests with respect to Type I error rates. The PB test performs very satisfactorily even for small samples, while the Welch test and the GF test exhibit poor Type I error properties when the sample sizes are small and/or the number of means to be compared is moderate to large. The James test performs better than the Welch test and the GF test. It is also noted that the same tests can be used to test the significance of the random effect variance component in a one-way random model under unequal error variances. Such models are widely used to analyze data from inter-laboratory studies. The methods are illustrated using some examples.
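A parametric bootstrap test of this kind can be sketched as follows: compute a variance-weighted between-group statistic from the data, then approximate its null distribution by repeatedly simulating group means and variances from normal populations with the estimated variances and a common mean. This is a sketch in the spirit of the PB approach; the paper's exact statistic may differ:

```python
import numpy as np

def pb_anova(groups, B=2000, seed=0):
    """Parametric-bootstrap sketch for H0: all group means equal, with
    unequal variances. Statistic: variance-weighted between-group sum of
    squares. p-value: fraction of bootstrap statistics >= the observed one."""
    rng = np.random.default_rng(seed)
    n = np.array([len(g) for g in groups], dtype=float)
    xbar = np.array([np.mean(g) for g in groups])
    s2 = np.array([np.var(g, ddof=1) for g in groups])

    def stat(xb, v):
        w = n / v                               # precision weights
        mu = (w * xb).sum() / w.sum()           # weighted grand mean
        return (w * (xb - mu) ** 2).sum()

    t_obs = stat(xbar, s2)
    count = 0
    for _ in range(B):
        # simulate group means and variances under H0 (common mean 0)
        xb = rng.normal(0.0, np.sqrt(s2 / n))
        v = s2 * rng.chisquare(n - 1) / (n - 1)
        if stat(xb, v) >= t_obs:
            count += 1
    return count / B                            # bootstrap p-value
```

Identical groups give an observed statistic of zero (p-value 1), while groups with clearly separated means yield a statistic far in the tail of the bootstrap distribution (p-value near 0).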


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号