Similar Documents
20 similar documents were retrieved.
1.
The Fuzzy k-Means clustering model (FkM) is a powerful tool for classifying objects into a set of k homogeneous clusters by means of the membership degrees of an object in a cluster. In FkM, for each object, the sum of the membership degrees in the clusters must be equal to one. Such a constraint may cause meaningless results, especially when noise is present. To avoid this drawback, it is possible to relax the constraint, leading to the so-called Possibilistic k-Means clustering model (PkM). In particular, attention is paid to the case in which the empirical information is affected by imprecision or vagueness, handled by means of LR fuzzy numbers. An FkM model for LR fuzzy data is first developed, and a PkM model for the same type of data is then proposed. The results of a simulation experiment and of two applications to real-world fuzzy data confirm the validity of both models and indicate some advantages of the possibilistic approach.
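As background to this entry, the sketch below shows the standard fuzzy k-means updates (membership degrees and weighted centroids) on crisp numeric data. The fuzziness exponent m, the Euclidean distance, and the Dirichlet initialization are illustrative assumptions; the paper's LR-fuzzy-data distance and its possibilistic relaxation are not reproduced here.

```python
# A minimal sketch of the standard fuzzy k-means (FCM) updates on crisp data;
# it does not implement the LR-fuzzy-data distance or the possibilistic variant.
import numpy as np

def fuzzy_k_means(X, k, m=2.0, n_iter=100, tol=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(k), size=X.shape[0])       # memberships sum to 1 per object
    for _ in range(n_iter):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]      # membership-weighted means
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1.0))).sum(axis=2)
        if np.abs(U_new - U).max() < tol:
            U = U_new
            break
        U = U_new
    return centers, U
```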

2.
Although there has been much research on cluster analysis with feature (or variable) weights, little attention has been paid to sample weights in clustering. In practice, not every sample in a data set has the same importance in cluster analysis, so it is of interest to obtain proper sample weights for clustering a data set. In this paper, we represent the sample weights of a data set by a probability distribution over it and apply the maximum entropy principle to compute these weights automatically for clustering. The method yields sample-weighted versions of most clustering algorithms, such as k-means, fuzzy c-means (FCM) and expectation-maximization (EM). The proposed sample-weighted clustering algorithms are robust for data sets with noise and outliers. Furthermore, we analyze the convergence properties of the proposed algorithms. Numerical and real data sets are used for demonstration and comparison, and the experimental results confirm that the proposed sample-weighted clustering algorithms are effective and robust.
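The following is a hedged sketch of one way such sample weighting can be wired into k-means: the weights form a probability distribution over the samples and are given a maximum-entropy (exponential) form in the per-sample distances. The function name, the temperature parameter gamma, and the exact weight update are assumptions made for illustration, not the update rules derived in the paper.

```python
# A hedged sketch of sample-weighted k-means: sample weights form a probability
# distribution and take an exponential (maximum-entropy style) form in the
# per-sample distances. `gamma` and the weighting form are illustrative choices.
import numpy as np

def sample_weighted_kmeans(X, k, gamma=1.0, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    w = np.full(len(X), 1.0 / len(X))                     # uniform sample weights
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        w = np.exp(-d.min(axis=1) / gamma)                # downweight distant (outlier-like) samples
        w /= w.sum()                                      # keep weights a probability distribution
        for c in range(k):
            mask = labels == c
            if mask.any():
                centers[c] = np.average(X[mask], axis=0, weights=w[mask])
    return centers, labels, w
```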

3.
Based on the method of (n,k)-universal sets, we present a deterministic parameterized algorithm for the weighted r-dimensional matching problem with time complexity O(4^{(r−1)k+o(k)}), improving the previous best upper bound O(4^{rk+o(k)}). In particular, the algorithm applied to the unweighted 3-dimensional matching problem results in a deterministic algorithm with running time O(16^{k+o(k)}), improving the previous best result O(21.26^k). For the weighted r-set packing problem, we present a deterministic parameterized algorithm with time complexity O(2^{(2r−1)k+o(k)}), improving the previous best result O(2^{2rk+o(k)}). The algorithm, when applied to the unweighted 3-set packing problem, has running time O(32^{k+o(k)}), improving the previous best result O(43.62^{k+o(k)}). Moreover, for the weighted r-set packing and weighted r-dimensional matching problems, we give a kernel of size O(k^r), which is the first kernelization result for the weighted versions of these problems.

4.
We consider a model for online computation in which the online algorithm receives, together with each request, some information regarding the future, referred to as advice. The advice is a function, defined by the online algorithm, of the whole request sequence. The advice provided to the online algorithm may allow an improvement in its performance, compared to the classical model of complete lack of information regarding the future. We are interested in the impact of such advice on the competitive ratio, and in particular, in the relation between the size b of the advice, measured in terms of bits of information per request, and the (improved) competitive ratio. Since b = 0 corresponds to the classical online model, and b = ⌈log|A|⌉, where A is the algorithm's action space, corresponds to the optimal (offline) one, our model spans a spectrum of settings ranging from classical online algorithms to offline ones. In this paper we propose the above model and illustrate its applicability by considering two of the most extensively studied online problems, namely, metrical task systems (MTS) and the k-server problem. For MTS we establish tight (up to constant factors) upper and lower bounds on the competitive ratio of deterministic and randomized online algorithms with advice for any choice of 1 ≤ b ≤ Θ(log n), where n is the number of states in the system: we prove that any randomized online algorithm for MTS has competitive ratio Ω(log(n)/b) and we present a deterministic online algorithm for MTS with competitive ratio O(log(n)/b). For the k-server problem we construct a deterministic online algorithm for general metric spaces with competitive ratio k^{O(1/b)} for any choice of Θ(1) ≤ b ≤ log k.

5.
A proper k-vertex coloring of a graph is an equitable k-coloring if the sizes of the color classes differ by at most 1. A graph G is equitably k-choosable if, for any k-uniform list assignment L, G is L-colorable and each color appears on at most ⌈|V(G)|/k⌉ vertices. We prove in this paper that outerplane graphs are equitably k-choosable whenever k ≥ Δ, where Δ is the maximum degree. Moreover, we discuss equitable colorings of some d-degenerate graphs.

6.
7.
In this paper, we present new multivariate quantile distributions and utilise likelihood-free Bayesian algorithms for inferring the parameters. In particular, we apply a sequential Monte Carlo (SMC) algorithm that is adaptive in nature and requires very little tuning compared with other approximate Bayesian computation algorithms. Furthermore, we present a framework for the development of multivariate quantile distributions based on a copula. We consider bivariate and time series extensions of the g-and-k distribution under this framework, and develop an efficient component-wise updating scheme free of likelihood functions to be used within the SMC algorithm. In addition, we trial the set of octiles as summary statistics as well as functions of these that form robust measures of location, scale, skewness and kurtosis. We show that these modifications lead to reasonably precise inferences that are more closely comparable to computationally intensive likelihood-based inference. We apply the quantile distributions and algorithms to simulated data and an example involving daily exchange rate returns.
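For concreteness, the block below gives the standard univariate g-and-k quantile function (with the conventional c = 0.8) together with octile summaries of the kind trialled above; the SMC sampler, the copula-based multivariate construction, and the exchange-rate application are not reproduced, and the helper names are illustrative. Simulation via the quantile function is what makes the distribution convenient for likelihood-free inference: data are generated by pushing uniforms through the quantile function even though the density has no closed form.

```python
# Univariate g-and-k quantile function (A: location, B > 0: scale, g: skewness,
# k: kurtosis, c = 0.8 conventional) and octile summary statistics.
import numpy as np
from scipy.stats import norm

def gk_quantile(u, A, B, g, k, c=0.8):
    z = norm.ppf(u)
    # (1 - exp(-g z)) / (1 + exp(-g z)) == tanh(g z / 2)
    return A + B * (1 + c * np.tanh(g * z / 2)) * (1 + z**2) ** k * z

def simulate_gk(n, A, B, g, k, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    return gk_quantile(rng.uniform(size=n), A, B, g, k)

def octiles(x):
    # Octiles used as ABC summary statistics.
    return np.quantile(x, np.arange(1, 8) / 8)
```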

8.
This paper systematically studies the minimum input sensitivity analysis problem. The lowest level of sensitivity of system outputs to system inputs is defined as an H− index. A full characterization of the H− index is given, first in terms of matrix equalities and inequalities, and then in terms of linear matrix inequalities (LMIs), as a dual of the Bounded Real Lemma. A related problem of input observability is also studied, with new necessary and sufficient conditions given; these are necessary for a fault detection system to have a nonzero worst-case fault sensitivity. The above results are applied to the problem of fault detection filter analysis, with numerical examples given to show the effectiveness of the proposed approaches.
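For reference, two commonly used forms of the H− index are recalled below: the time-domain worst-case gain from input to output, and a frequency-restricted smallest-singular-value form. These are standard textbook definitions given for orientation, not the paper's exact characterization.

```latex
% Common forms of the H_- index for a system G mapping inputs w to outputs z;
% \underline{\sigma} denotes the smallest singular value and \Omega a frequency range.
\[
  \|G\|_{-} \;=\; \inf_{0 \neq w \in \mathcal{L}_2} \frac{\|z\|_2}{\|w\|_2},
  \qquad
  \|G\|_{-}^{\Omega} \;=\; \inf_{\omega \in \Omega} \underline{\sigma}\big(G(j\omega)\big).
\]
```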

9.
In this paper we prove that, under suitable conditions, Atanassov's Kα operators, which act on intervals, provide the same numerical results as OWA operators of dimension two. On the one hand, this allows us to recover OWA operators from Kα operators. On the other hand, by analyzing the properties of Atanassov's operators, we can generalize them. In this way, we introduce a class of aggregation functions - the generalized Atanassov operators - that, in particular, include two-dimensional OWA operators. We investigate under which conditions these generalized Atanassov operators satisfy some properties usually required for aggregation functions, such as bisymmetry, strictness, monotonicity, etc. We also show that if we apply these aggregation functions to interval-valued fuzzy sets, we obtain an ordered family of fuzzy sets.
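A minimal worked statement of the coincidence referred to above, under the usual conventions (an interval written [a,b] with a ≤ b and α ∈ [0,1]); the generalized operators introduced in the paper are not reproduced here.

```latex
% Atanassov's K_alpha operator on an interval versus a two-dimensional OWA operator.
\[
  K_\alpha([a,b]) \;=\; a + \alpha\,(b-a) \;=\; (1-\alpha)\,a + \alpha\,b,
  \qquad
  \mathrm{OWA}_{(w_1,w_2)}(a,b) \;=\; w_1\,\max(a,b) + w_2\,\min(a,b),
\]
\[
  \text{so for } a \le b \text{ the two give the same value when } (w_1,w_2) = (\alpha,\,1-\alpha).
\]
```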

10.
11.
The satisfiability problem is a basic NP-complete problem. In recent years, many heuristic algorithms have been developed to solve it, and many experiments have evaluated and compared their performance. However, rigorous theoretical analysis and comparison are rare. This paper analyzes and compares the expected runtime of three basic heuristic algorithms: RandomWalk, the (1+1) EA, and a hybrid algorithm. The runtime analysis of these algorithms on two 2-SAT instances shows that their expected runtime can be exponential or polynomial, and that each heuristic has its own advantages and disadvantages on different SAT instances. The paper also demonstrates that the expected runtime upper bound of RandomWalk on arbitrary k-SAT (k ≥ 3) is O(n^{k−1}), and presents a k-SAT instance with a Θ(n^{k−1}) expected runtime bound.
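For orientation, here is a minimal sketch of the RandomWalk heuristic analyzed above (pick an unsatisfied clause, flip a uniformly random variable from it); the clause encoding, the flip budget, and the function name are illustrative assumptions, and the (1+1) EA and hybrid algorithm are not shown.

```python
# A minimal sketch of the RandomWalk heuristic for SAT: clauses are tuples of
# non-zero signed literals, e.g. (1, -2, 3). The flip budget is for illustration.
import random

def random_walk_sat(clauses, n_vars, max_flips=10**5, seed=0):
    rng = random.Random(seed)
    assign = [rng.choice([False, True]) for _ in range(n_vars + 1)]  # 1-indexed variables
    sat = lambda lit: assign[abs(lit)] == (lit > 0)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not any(sat(l) for l in c)]
        if not unsat:
            return assign                      # satisfying assignment found
        lit = rng.choice(rng.choice(unsat))    # random unsatisfied clause, random literal
        assign[abs(lit)] = not assign[abs(lit)]
    return None                                # gave up after max_flips
```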

12.
In this paper, the concepts of Janowski functions and conic regions are combined to define a new domain representing conic-type regions. Different views of this modified conic domain for specific parameter values are shown graphically for a better understanding of its behavior. The class of functions that map the open unit disk E onto this modified conic domain is defined. The classes of k-uniformly Janowski convex and k-Janowski starlike functions are also defined and their coefficient inequalities are formulated. The coefficient bound for a certain class of analytic functions, proved by Owa et al. (2006) in [16], has also been improved.

13.
For a positive integer d, an L(d,1)-labeling f of a graph G is an assignment of integers to the vertices of G such that |f(u)−f(v)| ≥ d if uv ∈ E(G), and |f(u)−f(v)| ≥ 1 if u and v are at distance two. The span of an L(d,1)-labeling f of a graph is the absolute difference between the maximum and minimum integers used by f. The L(d,1)-labeling number of G, denoted by λd,1(G), is the minimum span over all L(d,1)-labelings of G. An L′(d,1)-labeling of a graph G is an L(d,1)-labeling of G which assigns different labels to different vertices; denote by λ′d,1(G) the L′(d,1)-labeling number of G. Georges et al. [Discrete Math. 135 (1994) 103-111] established a relationship between the L(2,1)-labeling number of a graph G and the path covering number of G^c, the complement of G. In this paper we first generalize the concept of the path covering of a graph to the t-group path covering. We then establish the relationship between the L(d,1)-labeling number of a graph G and the (d−1)-group path covering number of G^c. Using this result, we prove that λ′d,1(G) for bipartite graphs G can be computed in polynomial time.

14.
This paper considers input-affine nonlinear systems with matched disturbances and shows how to compute an a priori upper bound on the H∞ attenuation level achieved by the optimal L2 controller and by the suboptimal H∞ central controller. The case where the disturbance contains a constant term is also discussed. These bounds are shown to depend only on the function mapping the control input to the performance variable. This result is used to derive a robust control design for a special, but practically important, class of non-input-affine nonlinear systems consisting of the series connection of a nonlinear state- and input-dependent map and a nonlinear input-affine dynamical system. Approximate inversion of the nonlinear static map leads to a robust control problem which fits into this framework. The effectiveness of the theoretical results is demonstrated through the robust control design of a diesel engine test bench.
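As a reminder of what the attenuation level means here, the closed loop is said to achieve (L2-gain) attenuation level γ in the usual dissipation sense shown below, with β(·) a nonnegative bias accounting for a nonzero initial state; this is the standard definition, not a bound derived in the paper.

```latex
% L2-gain / H-infinity attenuation level gamma for a closed loop with disturbance w,
% performance variable z, and initial state x(0).
\[
  \int_0^T \|z(t)\|^2 \, dt \;\le\; \gamma^2 \int_0^T \|w(t)\|^2 \, dt \;+\; \beta\big(x(0)\big)
  \qquad \forall\, T \ge 0,\ \forall\, w \in \mathcal{L}_2 .
\]
```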

15.
A homomorphism from an oriented graph G to an oriented graph H is an arc-preserving mapping f from V(G) to V(H), that is, f(x)f(y) is an arc in H whenever xy is an arc in G. The oriented chromatic number of G is the minimum order of an oriented graph H such that G has a homomorphism to H. In this paper, we determine the oriented chromatic number of the class of partial 2-trees for every girth g ≥ 3. We also give an upper bound for the oriented chromatic number of planar graphs with girth at least 11.

16.
17.
The Voronoi diagram of a point set has been used extensively in various disciplines ever since it was first proposed. Its applications have been further extended to estimating the shape of point clouds since Edelsbrunner and Mücke introduced the concept of the α-shape, based on the Delaunay triangulation of a point set. In this paper, we present the theory of the β-shape for a set of three-dimensional spheres as a generalization of the well-known α-shape for a set of points. The proposed β-shape fully accounts for the size differences among spheres and is therefore more appropriate for efficient and correct solutions to applications in biological systems such as proteins. Once the Voronoi diagram of the spheres is given, the corresponding β-shape can be constructed efficiently, and various geometric computations on the sphere complex can be performed efficiently and correctly. It turns out that many important problems in biological systems such as proteins can be solved easily via the Voronoi diagram of the atoms in a protein and the β-shapes transformed from it.

18.
A modern problem from aerospace control involves the certification of a large set of potential controllers with either a single plant or a fleet of potential plant systems, with both plants and controllers being MIMO and, for the moment, linear. Experiments on a limited number of controller/plant pairs should establish the stability, and a certain level of margin, of the complete set. We consider this certification problem for a set of controllers and provide algorithms for selecting an efficient subset for testing. This is done for a finite set of candidate controllers and, at least for SISO plants, for a compact infinite set. Throughout, the ν-gap metric is the main tool. Computational examples are given, including the certification of an aircraft engine controller. The overarching aim is to introduce truly MIMO margin calculations and to understand their efficacy in certifying stability over a set of controllers and in replacing legacy single-loop gain and phase margin calculations.
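For reference, the SISO form of the ν-gap metric used as the main tool above is the frequency-wise chordal distance recalled below; it is valid under the usual winding-number condition (otherwise δν = 1), which is omitted here. The certification logic rests on the standard companion fact that a controller C achieving generalized stability margin b(P1,C) > δν(P1,P2) on a tested plant P1 also stabilizes P2.

```latex
% SISO nu-gap metric as a sup of pointwise chordal distances (subject to the
% usual winding-number condition; otherwise delta_nu(P1,P2) = 1).
\[
  \delta_\nu(P_1,P_2) \;=\; \sup_{\omega}\;
  \frac{\lvert P_1(j\omega) - P_2(j\omega) \rvert}
       {\sqrt{1+\lvert P_1(j\omega)\rvert^{2}}\;\sqrt{1+\lvert P_2(j\omega)\rvert^{2}}}.
\]
```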

19.
In this work, we study the Abelian Sandpile Model from the point of view of computational complexity. We begin by studying the length distribution of sandpile avalanches triggered by the addition of two critical configurations: we prove that those avalanches are long on average, with length bounded below by a constant fraction of the length of the longest critical avalanche, which is in most cases superlinear. We then turn to algorithmic hardness: we analyze the problem of computing the addition of two critical configurations, prove that this problem is P-complete, and prove that most algorithmic problems related to the Abelian Sandpile Model are NC-reducible to it.
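The sketch below shows the basic toppling/stabilization dynamics on an n×n grid with threshold 4 (grains falling off the boundary are lost); adding two critical configurations, as studied above, then amounts to summing them cell-wise and stabilizing the result, e.g. `stable, size = stabilize(a + b)`. The grid model, threshold, and toppling counter are the usual conventions, used here only for illustration.

```python
# Stabilization of a sandpile configuration on a rectangular grid with threshold 4.
# By the abelian property, the order in which unstable cells are toppled does not
# change the final stable configuration or the total number of topplings.
import numpy as np

def stabilize(config):
    grid = np.array(config, dtype=int)
    topplings = 0
    while True:
        unstable = np.argwhere(grid >= 4)
        if len(unstable) == 0:
            return grid, topplings             # stable configuration, number of topplings
        for i, j in unstable:
            grid[i, j] -= 4
            topplings += 1
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ni, nj = i + di, j + dj
                if 0 <= ni < grid.shape[0] and 0 <= nj < grid.shape[1]:
                    grid[ni, nj] += 1          # grains leaving the grid are lost
```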

20.
In this paper, we consider the coefficient-based regularized least-squares regression problem with the lq-regularizer (1 ≤ q ≤ 2) and data-dependent hypothesis spaces. Algorithms in data-dependent hypothesis spaces perform well thanks to their flexibility. We conduct a unified error analysis by a stepping-stone technique, and an empirical covering number technique is also employed to improve the sample error estimate. Compared with existing results, we make a few improvements: first, we obtain a significantly sharper learning rate that can be arbitrarily close to O(m^{−1}) under reasonable conditions, which is regarded as the best learning rate in learning theory. Second, our results cover the case q = 1, which is novel. Finally, our results hold under very general conditions.
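A hedged sketch of the learning scheme described above: the hypothesis is a kernel expansion over the sample, with its coefficients penalized by an lq term. The Gaussian kernel, its width, and the generic derivative-free optimizer are illustrative assumptions and are not part of the paper's analysis.

```python
# Coefficient-based l_q-regularized least squares in a data-dependent hypothesis
# space: f(x) = sum_i alpha_i K(x, x_i), with penalty lam * sum_i |alpha_i|^q.
import numpy as np
from scipy.optimize import minimize

def gaussian_kernel(X1, X2, sigma=1.0):
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma**2))

def fit_lq_ls(X, y, q=1.0, lam=0.1, sigma=1.0):
    K = gaussian_kernel(X, X, sigma)
    m = len(y)

    def objective(alpha):
        resid = K @ alpha - y
        return (resid @ resid) / m + lam * np.sum(np.abs(alpha) ** q)

    res = minimize(objective, np.zeros(m), method="Powell")  # derivative-free; fine for small m
    return res.x

# Prediction on new (2-D) inputs X_new: gaussian_kernel(X_new, X, sigma) @ alpha
```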
