Similar Documents
20 similar documents found (search time: 31 ms)
1.
Intelligent Data Analysis (1998) 2(1–4): 265–286
The main problem considered in this paper is binarizing categorical (nominal) attributes having a very large number of values (204 in our application). A small number of relevant binary attributes is gathered from each initial attribute. Suppose we want to binarize a categorical attribute v with L values, where L is large or very large. The total number of binary attributes that can be extracted from v is 2^(L−1) − 1, which for large L is prohibitive. Our idea is to select only those binary attributes that are predictive; these constitute a small fraction of all possible binary attributes. The key idea is to group the L values of the categorical attribute by means of a hierarchical clustering method. To do so, we need to define a similarity between values that reflects their predictive power. By clustering the L values into a small number of clusters J, we define a new categorical attribute with only J values. The hierarchical clustering method we use, AVL, allows a significant value of J to be chosen. We could now consider all 2^(J−1) − 1 binary attributes associated with this new categorical attribute. However, the J values are tree-structured, because a hierarchical clustering method was used; we exploit this and consider only about 2 × J binary attributes. If L is extremely large, for complexity and statistical reasons, we might not be able to apply a clustering algorithm directly. In this case, we start by "factorizing" v into a pair (v1, v2), each with about √L values. For a simple example, consider an attribute v with only four values m1, m2, m3, m4. Obviously, in this example there is no need to factorize the set of values of v, because it has very few values.
Nevertheless, for illustration purposes, v could be decomposed (factorized) into two attributes with two values each; the correspondence between the values of v and (v1, v2) is given in the table at the end of this listing.
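As an illustrative sketch (not the paper's AVL clustering step), the √L factorization described above amounts to an index split; `factorize_values` is a hypothetical helper name:

```python
import math

def factorize_values(values):
    """Map each of the L values of a categorical attribute to a pair
    (v1, v2) of new attribute values, each ranging over ~sqrt(L) values."""
    L = len(values)
    s = math.ceil(math.sqrt(L))
    # value with index i becomes (quotient + 1, remainder + 1)
    return {val: (i // s + 1, i % s + 1) for i, val in enumerate(values)}

pairs = factorize_values(["m1", "m2", "m3", "m4"])
# reproduces the correspondence table from the abstract:
# m1 -> (1, 1), m2 -> (1, 2), m3 -> (2, 1), m4 -> (2, 2)
```

With four values the split gives two binary attributes, matching the worked example in the abstract.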

2.
For high-order interpolation at both endpoints of two rational Bézier curves, we introduce the concept of C(v,u)-continuity and give a matrix expression of a necessary and sufficient condition for it. We then propose, in a unified approach, three new algorithms: for degree reduction of Bézier curves, for approximating rational Bézier curves by Bézier curves, and for degree reduction of rational Bézier curves; all work in the L2 norm and satisfy C(v,u)-continuity. The algorithms for the first and second problems obtain the best approximation results; the third uses the steepest-descent method from numerical optimization to iteratively produce a series of degree-reduced curves with decreasing approximation error. Compared with well-known algorithms for the degree reduction of rational Bézier curves, such as the uniformizing-weights algorithm, the algorithm canceling the best linear common divisor, and the shifted-Chebyshev-polynomials algorithm, the new one gives a smaller approximation error, performs multiple degrees of reduction at a time, and preserves high-order interpolation at both endpoints.
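The L2 flavour of degree reduction can be sketched as a discrete least-squares fit in the Bernstein basis, assuming NumPy is available; note that this simple version does not enforce the paper's C(v,u) endpoint constraints:

```python
import numpy as np
from math import comb

def bernstein(n, i, t):
    """Bernstein basis polynomial B_{i,n} evaluated at array t."""
    return comb(n, i) * t**i * (1 - t)**(n - i)

def degree_reduce(ctrl, samples=200):
    """Reduce a Bezier curve by one degree via a discrete L2 fit:
    sample the curve densely, then least-squares fit the lower basis."""
    ctrl = np.asarray(ctrl, dtype=float)
    n = len(ctrl) - 1                                  # current degree
    t = np.linspace(0.0, 1.0, samples)
    pts = sum(bernstein(n, i, t)[:, None] * ctrl[i] for i in range(n + 1))
    A = np.column_stack([bernstein(n - 1, i, t) for i in range(n)])
    reduced, *_ = np.linalg.lstsq(A, pts, rcond=None)
    return reduced

# A degree-2 curve that is actually a straight line reduces exactly:
quad = [(0.0, 0.0), (1.0, 1.0), (2.0, 2.0)]
reduced = degree_reduce(quad)   # ~ the line's control points (0,0), (2,2)
```

The unconstrained fit recovers the exact lower-degree curve whenever one exists; the paper's algorithms additionally impose endpoint continuity conditions.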

3.
Using a combinatorial characterization of digital convexity based on words, one defines the language of convex words. The complement of this language forms an ideal whose minimal elements, with respect to the factorial ordering, have a particular combinatorial structure very close to Christoffel words. In this paper, those words are completely characterized as the words of the form uw^k v where k ≥ 1, w = uv, and u, v, w are Christoffel words. Also, by considering the most balanced among the unbalanced words, we obtain a second characterization for a special class of minimal non-convex words, of the form u²v², corresponding to the case k = 1 of the previous form.

4.
This paper presents a new version of a fuzzy support vector classifier machine to diagnose nonlinear fuzzy fault systems with multi-dimensional input variables. Since complex fuzzy fault system modeling suffers from finite samples and uncertain data, the input and output variables are described as fuzzy numbers. By integrating fuzzy theory with the v-support vector classifier machine, the triangular fuzzy v-support vector classifier machine (TFv-SVCM) is proposed. To find the optimal parameters of the TFv-SVCM, particle swarm optimization (PSO) is applied. A diagnosis method based on TFv-SVCM and PSO is put forward. Its application to fault diagnosis of a car assembly line shows that the hybrid model is feasible and effective, and a comparison with other methods shows that it outperforms the standard v-SVCM.
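The parameter-search step can be sketched with a minimal particle swarm optimizer; the objective below is a toy stand-in for the TFv-SVCM validation error, not the actual model:

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=100, seed=0):
    """Minimal gbest-topology particle swarm optimization (minimization)."""
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective standing in for cross-validation error over two parameters:
best, err = pso(lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2, 2, (0.0, 10.0))
```

In the paper's setting, `f` would evaluate the classifier's diagnostic error for a candidate parameter vector.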

5.
Graphical Models (2001) 63(4): 228–244
We present an efficient and robust algorithm to compute the intersection curve of two ringed surfaces, each being the sweep ∪_u C_u generated by a moving circle. Given two ringed surfaces ∪_u C¹_u and ∪_v C²_v, we formulate the condition C¹_u ∩ C²_v ≠ ∅ (i.e., that the intersection of the two circles C¹_u and C²_v is nonempty) as a bivariate equation λ(u, v) = 0 of relatively low degree. Except for redundant solutions and degenerate cases, there is a rational map from each solution of λ(u, v) = 0 to the intersection point C¹_u ∩ C²_v. Thus it is trivial to construct the intersection curve once we have computed the zero-set of λ(u, v) = 0. We also analyze exceptional cases and consider how to construct the corresponding intersection curves. A similar approach produces an efficient algorithm for the intersection of a ringed surface and a ruled surface, which can play an important role in accelerating the ray tracing of ringed surfaces. For surfaces of linear extrusion and surfaces of revolution, the respective intersection algorithms reduce to simpler forms than those for ringed and ruled surfaces. In particular, the bivariate equation λ(u, v) = 0 reduces to a decomposable form, f(u) = g(v) or ‖f(u) − g(v)‖ = |r(u)|, which can be solved more efficiently than the general case.
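For coplanar circles, the nonemptiness condition that λ(u, v) encodes reduces to a pair of inequalities on the center distance; this 2D sketch omits the 3D circle geometry handled by the paper:

```python
from math import hypot

def circles_intersect(c1, r1, c2, r2):
    """Nonempty intersection of two coplanar circles (the curves, not disks):
    they meet iff |r1 - r2| <= dist(centers) <= r1 + r2."""
    d = hypot(c2[0] - c1[0], c2[1] - c1[1])
    return abs(r1 - r2) <= d <= r1 + r2
```

Sweeping such a test over the two circle families (u, v) traces out the zero-set that the bivariate equation λ(u, v) = 0 captures algebraically.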

6.
Let û be the solution of a boundary value problem for a second-order ordinary differential equation. Function bounds v and w are constructed for û such that v ≤ û ≤ w. From these, further bounds are derived for the derivatives û′ and û″. To this end a collocation method with finite elements is used. The inclusion property is proven with the aid of theorems on differential inequalities. Let h be the maximal step size and let k be an arbitrary natural number. Then the accuracy can be made of arbitrarily high order, namely w − v = O(h^(2k)).

7.
We study the problem of minimizing the number of late jobs on a single machine where job processing times are known precisely and due dates are uncertain. The uncertainty is captured through a set of scenarios. In this environment, an appropriate criterion for selecting a schedule is best worst-case performance: minimize the maximum number of late jobs over all scenarios. For a variable number of scenarios and two distinct due dates over all scenarios, the problem is proved NP-hard in the strong sense and non-approximable in pseudo-polynomial time with an approximation ratio less than 2. It is polynomially solvable if the number s of scenarios and the number v of distinct due dates over all scenarios are given constants. An O(n log n) time s-approximation algorithm is suggested for the general case, where n is the number of jobs, and a polynomial 3-approximation algorithm is suggested for the case of unit-time jobs and a constant number of scenarios. Furthermore, an O(n^(s+v−2)/(v−1)^(v−2)) time dynamic programming algorithm is presented for the case of unit-time jobs. The problem with unit-time jobs and the number of late jobs not exceeding a given constant value is solvable in polynomial time by an enumeration algorithm. The obtained results are related to a min-max assignment problem, an exact assignment problem, and a multi-agent scheduling problem.
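The min-max criterion can be sketched by brute force for tiny instances (this is an illustrative enumeration, not the paper's approximation or dynamic programming algorithms):

```python
from itertools import permutations

def minmax_late_jobs(proc, scenarios):
    """Minimize, over all schedules, the maximum number of late jobs over
    all due-date scenarios. Brute force over permutations (small n only).

    proc:      list of processing times, one per job
    scenarios: list of due-date vectors, one per scenario
    """
    n = len(proc)
    best = n + 1
    for order in permutations(range(n)):
        worst = 0
        for due in scenarios:
            t, late = 0, 0
            for j in order:
                t += proc[j]
                if t > due[j]:
                    late += 1
            worst = max(worst, late)
        best = min(best, worst)
    return best
```

For two unit-time jobs with opposing due-date scenarios [1, 2] and [2, 1], every schedule makes one job late in some scenario, so the min-max value is 1.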

8.
Schnyder woods are decompositions of simple triangulations into three edge-disjoint spanning trees crossing each other in a specific way. In this article, we generalize the definition of Schnyder woods to d-angulations (plane graphs with faces of degree d) for all d ≥ 3. A Schnyder decomposition is a set of d spanning forests crossing each other in a specific way, such that each internal edge is part of exactly d − 2 of the spanning forests. We show that a Schnyder decomposition exists if and only if the girth of the d-angulation is d. As in the case of Schnyder woods (d = 3), there are alternative formulations in terms of orientations ("fractional" orientations when d ≥ 5) and in terms of corner-labellings. Moreover, the set of Schnyder decompositions of a fixed d-angulation of girth d has a natural structure of distributive lattice. We also study the dual of Schnyder decompositions, defined on d-regular plane graphs of mincut d with a distinguished vertex v*: these are sets of d spanning trees rooted at v*, crossing each other in a specific way, such that each edge not incident to v* is used by two trees in opposite directions. Additionally, for even values of d, we show that a subclass of Schnyder decompositions, called even, enjoys additional properties that yield a reduced formulation; in the case d = 4, these correspond to well-studied structures on simple quadrangulations (2-orientations and partitions into 2 spanning trees). In the case d = 4, we obtain straight-line and orthogonal planar drawing algorithms by using the dual of even Schnyder decompositions. For a 4-regular plane graph G of mincut 4 with a distinguished vertex v* and n − 1 other vertices, our algorithm places the vertices of G∖v* on an (n−2)×(n−2) grid according to a permutation pattern, and in the orthogonal drawing each of the 2n − 4 edges of G∖v* has exactly one bend. The vertex v* can be embedded at the cost of 3 additional rows and columns and 8 additional bends.
We also describe a further compaction step for the drawing algorithms and show that the obtained grid size is strongly concentrated around 25n/32 × 25n/32 for a uniformly random instance with n vertices.

9.
Zeev Nutov, Algorithmica (2014) 70(2): 340–364
We consider degree-constrained survivable network problems. For the directed Degree Constrained k-Edge-Outconnected Subgraph problem, we slightly improve the best known approximation ratio by a simple proof. Our main contribution is a framework for handling node-connectivity degree-constrained problems with the iterative rounding method. In particular, for the degree-constrained versions of the Element-Connectivity Survivable Network problem on undirected graphs, and of the k-Outconnected Subgraph problem on both directed and undirected graphs, our algorithm computes a solution J of cost O(log k) times the optimal, with degrees O(2^k)·b(v). Similar results are obtained for the k-Connected Subgraph problem. The latter improves on the only previous degree approximation, O(k log n)·b(v) in O(n^k) time on undirected graphs, by Feder, Motwani, and Zhu.

10.
This paper presents a new version of a fuzzy wavelet support vector classifier machine for diagnosing nonlinear fuzzy fault systems with multi-dimensional input variables. Since complex fuzzy fault systems suffer from finite samples and uncertain data, the input and output variables are described as fuzzy numbers. By integrating fuzzy theory, wavelet analysis theory, and the v-support vector classifier machine, the fuzzy wavelet v-support vector classifier machine (FWv-SVCM) is proposed. To find the optimal parameters of the FWv-SVCM, a genetic algorithm (GA) is applied. A diagnosis method based on FWv-SVCM and GA is put forward. Its application to car assembly line diagnosis confirms the feasibility and validity of the method: compared with the traditional model and other SVCM methods, FWv-SVCM requires fewer samples and has better diagnostic precision.

11.
Commercial CFD codes are commonly used to simulate models that involve complicated geometries such as the human nasal cavity, which means the user has to work within the limitations of the code's available models. One such issue is the turbulent dispersion of particles in the Lagrangian frame: the Discrete Random Walk (DRW) model overpredicts the deposition of smaller inertial particles because of its inherently isotropic treatment of the wall-normal fluctuation v in the near-wall region. DNS data for channel flows have previously been used to create a function that reduces the turbulent kinetic energy (TKE) to match the v profile, which delivered improved particle deposition efficiency results. This paper presents an alternative approach for reducing the TKE to match v by taking the profile directly from the v²-f turbulence model. The approach is validated against experimental pipe flow in a 90° bend and then applied to particle dispersion in a human nasal cavity using Ansys Fluent, showing improved results compared to no modification.

12.
A finite set W of words over an alphabet A is cyclic if, whenever u, v ∈ A* and uv, vu ∈ W, we have u, v ∈ W. If the property is only assumed to hold for all u, v ∈ A* of large length, then W is called pseudo-cyclic; that is, there is N ∈ ℕ such that, whenever u, v ∈ A* with |u|, |v| ≥ N and uv, vu ∈ W, we have u, v ∈ W. We analyze the class of pseudo-cyclic sets and describe how it relates to the open question of whether every irreducible shift of finite type is conjugate to a renewal system.
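The cyclic property can be checked by brute force on small sets; a sketch assuming a binary alphabet:

```python
from itertools import product

def words_up_to(alphabet, n):
    """All nonempty words over `alphabet` of length at most n."""
    for k in range(1, n + 1):
        for p in product(alphabet, repeat=k):
            yield "".join(p)

def is_cyclic(W, alphabet="ab"):
    """Brute-force check of the cyclic property: whenever both uv and vu
    are in W, u and v must be in W too. Only words no longer than the
    longest word of W can matter, so the search is finite."""
    n = max(map(len, W))
    for u in words_up_to(alphabet, n - 1):
        for v in words_up_to(alphabet, n - len(u)):
            if u + v in W and v + u in W and not (u in W and v in W):
                return False
    return True
```

For example, {a, b, ab, ba} is cyclic, while {ab, ba} is not, since ab and ba belong to it but a and b do not.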

13.
The file allocation problem considers a file and a fully connected network having n nodes. The problem assumes that the overall file usage over a unit time period is known, and it asks for the optimal set of network sites at which to locate copies of the file. This paper considers the same problem but assumes that the behavior of the user access patterns changes over v planning periods in a manner known in advance. A model is presented which shows that there are (2^n − 1)^v possible file allocations. To assist in searching this large solution space, four theorems are presented and subsequently used to analyze the problem and solve an example case.
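The count (2^n − 1)^v — a nonempty subset of the n sites chosen in each of the v periods — can be cross-checked by enumeration for small instances; helper names below are hypothetical:

```python
from itertools import combinations, product

def allocation_count(n, v):
    """Closed form: one nonempty subset of the n sites per planning
    period, over v periods, gives (2^n - 1)^v allocations."""
    return (2 ** n - 1) ** v

def enumerate_allocations(n, v):
    """Cross-check by explicit enumeration (small n, v only)."""
    nonempty = [c for k in range(1, n + 1)
                for c in combinations(range(n), k)]
    return sum(1 for _ in product(nonempty, repeat=v))
```

For n = 3 sites and v = 2 periods, both give 7² = 49 allocations.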

14.
The hypercube has been widely used as the interconnection network in parallel computers. The n-dimensional hypercube Q_n is a graph with 2^n vertices, each labeled with a distinct n-bit binary string. Two vertices are linked by an edge if and only if their labels differ in exactly one bit position. Let f_v denote the number of faulty vertices in Q_n. In this paper we prove that, for n ≥ 3, every fault-free edge and fault-free vertex of Q_n lies on a fault-free cycle of every even length from 4 to 2^n − 2f_v inclusive, provided f_v ≤ n − 2. Our results are optimal.
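The claim can be spot-checked on Q_3 with one faulty vertex by exhaustive search (a verification sketch, not the paper's proof):

```python
from itertools import combinations

def qn_edges(n):
    """Edges of Q_n: pairs of vertices whose binary labels differ in one bit."""
    return [(u, v) for u, v in combinations(range(2 ** n), 2)
            if bin(u ^ v).count("1") == 1]

def cycle_through_edge(adj, u, v, length):
    """Exhaustive DFS: is edge (u, v) on a simple cycle of exactly `length`?"""
    def dfs(node, remaining, visited):
        if remaining == 1:
            return u in adj[node]
        return any(dfs(w, remaining - 1, visited | {w})
                   for w in adj[node] if w not in visited)
    return dfs(v, length - 1, frozenset({u, v}))

n, faulty = 3, {0}                       # f_v = 1 <= n - 2
adj = {x: set() for x in range(2 ** n) if x not in faulty}
for a, b in qn_edges(n):
    if a not in faulty and b not in faulty:
        adj[a].add(b)
        adj[b].add(a)

fault_free = [(a, b) for a, b in qn_edges(n)
              if a not in faulty and b not in faulty]
lengths = range(4, 2 ** n - 2 * len(faulty) + 1, 2)   # even lengths 4..6
ok = all(cycle_through_edge(adj, a, b, L)
         for a, b in fault_free for L in lengths)
```

Here every fault-free edge of Q_3 minus one vertex indeed lies on cycles of length 4 and 6 = 2³ − 2f_v, consistent with the theorem.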

15.
This paper presents a new version of a fuzzy wavelet support vector classifier machine for diagnosing nonlinear fuzzy fault systems with multi-dimensional input variables. Since complex fuzzy fault systems suffer from Gaussian noise and uncertain data, the input and output variables are described as fuzzy numbers. By integrating fuzzy theory, wavelet analysis theory, a Gaussian loss function, and the v-support vector classifier machine, the fuzzy Gaussian wavelet v-support vector classifier machine (TFGWv-SVCM) is proposed. To find its optimal parameters, a genetic algorithm (GA) is applied. A diagnosis method based on TFGWv-SVCM and GA is presented. Its application to car assembly line diagnosis confirms the feasibility and validity of the method: compared with the traditional model and other SVCM methods, TFGWv-SVCM requires fewer samples and has better diagnostic precision.

16.
J. Katajainen, Computing (1988) 40(2): 147–161
The following geometric proximity concepts are discussed: relative closeness and geographic closeness. Consider a set V = {v_1, v_2, …, v_n} of distinct points in a two-dimensional space. The point v_j is said to be a relative neighbour of v_i if d_p(v_i, v_j) ≤ max{d_p(v_i, v_k), d_p(v_j, v_k)} for all v_k ∈ V, where d_p denotes the distance in the L_p metric, 1 ≤ p ≤ ∞. After dividing the space around the point v_i into eight sectors (regions) of equal size, a closest point to v_i in some region is called an octant (region, or geographic) neighbour of v_i. For any L_p metric, a relative neighbour of v_i is always an octant neighbour in some region at v_i. This gives a direct method for computing all relative neighbours, i.e., for establishing the relative neighbourhood graph of V: for every point v_i of V, first search for the octant neighbours of v_i in each region, and then for each octant neighbour v_j found, check whether v_j is also a relative neighbour of v_i. In the L_p metric, 1 < p < ∞, the total number of octant neighbours is shown to be Θ(n) for any set of n points; hence even a straightforward implementation of the above method runs in Θ(n²) time. In the L_1 and L_∞ metrics the method can be refined to a Θ(n log n + m) algorithm, where m is the number of relative neighbours in the output, n − 1 ≤ m ≤ n(n − 1). The L_1 (L_∞) algorithm is optimal to within a constant factor.
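The relative-neighbour definition translates directly into an O(n³) brute-force construction of the relative neighbourhood graph (a sketch in the Euclidean L2 metric only, without the octant-neighbour speedup):

```python
from math import dist

def relative_neighbourhood_graph(points):
    """Brute force: (i, j) is an RNG edge iff d(i, j) <= max(d(i, k), d(j, k))
    for every third point k, i.e. no point is closer to both i and j
    than they are to each other."""
    n = len(points)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            dij = dist(points[i], points[j])
            if all(dij <= max(dist(points[i], points[k]),
                              dist(points[j], points[k]))
                   for k in range(n) if k != i and k != j):
                edges.add((i, j))
    return edges
```

On three collinear points the middle point "blocks" the long pair, so only the two short edges survive; on the unit square, the diagonals are blocked and the four sides remain.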

17.
We present a new approach to approximating node deletion problems by combining the local-ratio and greedy multicovering algorithms. For a function f defined on the nodes, our approach allows one to design a 2 + max_{v∈V(G)} log f(v) approximation algorithm for the problem of deleting a minimum number of nodes so that the degree of each node v in the remaining graph is at most f(v). This approximation ratio is shown to be asymptotically optimal. The new method is also used to design a 1 + (log 2)(k − 1) approximation algorithm for the problem of deleting a minimum number of nodes so that the remaining graph contains no k-bicliques.
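To make the problem concrete, here is a naive greedy baseline for the degree-bounded node-deletion problem (not the paper's local-ratio/multicovering algorithm, which achieves the stated ratio):

```python
def greedy_degree_deletion(adj, f):
    """Naive greedy: repeatedly delete the node whose degree exceeds its
    bound f(v) by the most, until every remaining degree is at most f(v).

    adj: dict mapping node -> set of neighbours (undirected)
    f:   dict mapping node -> allowed degree bound
    """
    adj = {v: set(ns) for v, ns in adj.items()}   # work on a copy
    deleted = []
    while True:
        over = [(len(adj[v]) - f[v], v) for v in adj if len(adj[v]) > f[v]]
        if not over:
            return deleted
        _, v = max(over)                          # largest excess first
        for w in adj[v]:
            adj[w].discard(v)
        del adj[v]
        deleted.append(v)
    return deleted
```

On a star with bound f ≡ 1, deleting the center alone suffices; on a triangle with f ≡ 2, nothing is deleted.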

18.
A controllability problem for a Fokker–Planck equation is termed Problem A. Under proper assumptions, a solution (v*, Φ*) to that problem is constructed via a theorem of Jamison. Theorem 2 gives a sufficient condition on the given initial and terminal data for that solution to exist. Theorem 3 states that v* is an optimal feedback control for a stochastic optimal control problem with a constraint on the end-state, termed Problem B. Further, v* corresponds to the minimum of an entropy distance. Finally, Problem A is transformed into a controllability problem for a stochastic differential equation, termed Problem C: the solution to Problem C corresponding to the one constructed in Problem A is the Markovian process satisfying the given end conditions in a set of reciprocal processes of Jamison.

19.
An algorithm is proposed for generating parity-check matrices of regular low-density parity-check (LDPC) codes based on permutation matrices and Steiner triple systems S(v, 3, 2), v = 2^m − 1. Estimates of the rate, minimum distance, and girth of the obtained code constructions are presented, together with simulation results for iterative belief-propagation (sum-product) decoding in the case of transmission over a binary channel with additive white Gaussian noise and BPSK modulation.
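For m = 3 (v = 7), the Steiner triple system is the Fano plane; its block-point incidence matrix illustrates the kind of sparse, regular parity-check matrix involved (the paper's construction additionally combines such systems with permutation matrices):

```python
# The Fano plane: the Steiner triple system on v = 2^3 - 1 = 7 points.
FANO = [(0, 1, 2), (0, 3, 4), (0, 5, 6), (1, 3, 5),
        (1, 4, 6), (2, 3, 6), (2, 4, 5)]

def incidence_matrix(blocks, v):
    """Block-point incidence matrix: rows are parity checks, columns are
    code bits; each row has weight 3, so the matrix is sparse and regular."""
    H = [[0] * v for _ in blocks]
    for r, block in enumerate(blocks):
        for p in block:
            H[r][p] = 1
    return H

H = incidence_matrix(FANO, 7)
```

Every row and every column of H has weight 3, and every pair of points is covered by exactly one triple, which is what keeps the associated Tanner graph free of 4-cycles.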

20.
The computational approximation of exact boundary controllability problems for the wave equation in two dimensions is studied. A numerical method is defined that is based on the direct solution of optimization problems introduced to determine unique solutions of the controllability problem. The uniqueness of the discrete finite-difference solutions obtained in this manner is demonstrated, and the convergence properties of the method are illustrated through computational experiments. Efficient implementation strategies are also discussed. It is shown that for smooth, minimum L2-norm Dirichlet controls, the method yields convergent approximations without the need for regularization. Furthermore, for the generic case of nonsmooth Dirichlet controls, convergence with respect to L2 norms is also numerically demonstrated. One of the strengths of the method is the flexibility it allows for treating other controls and other minimization criteria; such generalizations are discussed. In particular, the minimum H1-norm Dirichlet controllability problem is approximated and solved, as are minimum regularized L2-norm Dirichlet controllability problems with small penalty constants. Finally, the differences between our method and existing methods are discussed; these differences may explain why our method provides convergent approximations for problems on which existing methods diverge unless they are regularized in some manner.

Correspondence between the values of v and the factorized pair (v1, v2) from item 1:
m1 → (1, 1)
m2 → (1, 2)
m3 → (2, 1)
m4 → (2, 2)

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号