Similar documents
 20 similar documents retrieved (search time: 390 ms)
1.
This paper proposes a method for constructing ensembles of decision trees, random feature weights (RFW). The method is similar to Random Forest: both introduce randomness into the construction of the decision trees. Whereas Random Forest considers only a random subset of attributes at each node, RFW considers all of them. Its source of randomness is a weight associated with each attribute. All the nodes in a tree use the same set of random weights, but this set differs from tree to tree; hence the importance given to the attributes differs in each tree, which differentiates their construction. The method is compared to Bagging, Random Forest, Random Subspaces, AdaBoost and MultiBoost, with favourable results for the proposed method, especially on noisy data sets. RFW can also be combined with these methods; generally, the combination of RFW with another method produces better results than either method alone. Kappa-error diagrams and Kappa-error movement diagrams are used to analyse the relationship between the accuracies of the base classifiers and their diversity.
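RFW is not available in common libraries; the following minimal sketch illustrates the core idea on depth-1 trees (stumps), with each tree's split criterion multiplied by per-attribute random weights drawn as u^p. The stump restriction and all names are illustrative simplifications, not the paper's implementation.

```python
import random

def gini(labels):
    # Gini impurity of a list of class labels
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def majority(labels):
    return max(set(labels), key=labels.count)

def best_weighted_stump(X, y, feature_weights):
    # RFW's core idea: the split merit of each attribute is
    # multiplied by that attribute's random weight
    best = None
    base = gini(y)
    for f, w in enumerate(feature_weights):
        values = sorted(set(row[f] for row in X))
        for lo, hi in zip(values, values[1:]):
            t = (lo + hi) / 2
            left = [yi for row, yi in zip(X, y) if row[f] <= t]
            right = [yi for row, yi in zip(X, y) if row[f] > t]
            gain = base - (len(left) * gini(left) + len(right) * gini(right)) / len(y)
            if best is None or w * gain > best[0]:
                best = (w * gain, f, t)
    _, f, t = best
    left = [yi for row, yi in zip(X, y) if row[f] <= t]
    right = [yi for row, yi in zip(X, y) if row[f] > t]
    return f, t, majority(left), majority(right)

def rfw_ensemble(X, y, n_trees=11, p=2, seed=1):
    # every tree draws one random weight per attribute, shared by
    # all of the tree's nodes (here each "tree" is a single stump)
    rng = random.Random(seed)
    stumps = []
    for _ in range(n_trees):
        weights = [rng.random() ** p for _ in range(len(X[0]))]
        stumps.append(best_weighted_stump(X, y, weights))
    return stumps

def predict(stumps, x):
    votes = [ll if x[f] <= t else rl for f, t, ll, rl in stumps]
    return majority(votes)

X = [[0, 0], [1, 1], [2, 1], [8, 9], [9, 8], [10, 10]]
y = [0, 0, 0, 1, 1, 1]
stumps = rfw_ensemble(X, y)
preds = [predict(stumps, x) for x in X]
```

Because the weights differ per tree, different trees may split on different attributes even on identical training data, which is the diversity mechanism the abstract describes.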

2.
In this paper we conjecture that the edges of any non-trivial graph can be weighted with the integers 1, 2, 3 in such a way that, for every edge uv, the product of the weights of the edges incident to u is different from the product of the weights of the edges incident to v. This is proven here for cycles, paths, complete graphs and 3-colourable graphs. It is also shown that the edges of every non-trivial graph can be weighted with the integers 1, 2, 3, 4 so that adjacent vertices have different products of incident edge weights. In a total weighting of a simple graph G we assign positive integers to both the edges and the vertices of G. We consider the colouring of G obtained by assigning to each vertex v the product of its weight and the weights of its incident edges. The paper conjectures that a proper colouring can be obtained in this way using the weights 1, 2 for every simple graph; we show that it can be done using the weights 1, 2, 4 on edges and 1, 2 on vertices.
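The edge-weighting conjecture is easy to check exhaustively on small graphs; a brute-force sketch (function names illustrative):

```python
from itertools import product

def neighbour_products(vertices, edges, weighting):
    # product of the weights of the edges incident to each vertex
    prod = {v: 1 for v in vertices}
    for (u, v), w in zip(edges, weighting):
        prod[u] *= w
        prod[v] *= w
    return prod

def find_weighting(vertices, edges, weights=(1, 2, 3)):
    # exhaustive search for a weighting in which every pair of
    # adjacent vertices gets different incident-edge products
    for assignment in product(weights, repeat=len(edges)):
        prod = neighbour_products(vertices, edges, assignment)
        if all(prod[u] != prod[v] for u, v in edges):
            return assignment
    return None

c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]   # the 5-cycle
p4 = [(0, 1), (1, 2), (2, 3)]                   # a path on 4 vertices
w_c5 = find_weighting(range(5), c5)
w_p4 = find_weighting(range(4), p4)
```

Consistent with the paper's result for cycles and paths, both searches succeed with weights restricted to {1, 2, 3}.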

3.
In recent years, stepped beam resonators have found broad application in MEMS/NEMS devices. A beam resonator with an undercut at the support, produced by isotropic etching of the supporting substrate during fabrication, has also been characterized as a stepped beam in the literature. The present study deals with thermoelastic dissipation in clamped–clamped stepped beam resonators under adiabatic surface thermal conditions, having j (j = 1, 2, …, n) sections defined by (j − 1) steps along the length. Numerical results are obtained for three types of single-step beams of rectangular cross-section: beams whose cross-section changes at the step only in the lateral direction (type-1), only in the bending direction (type-2), and in both directions (type-3), where the section to the right of the step has the smaller cross-section. The results show that Q-factors vary significantly with step position in all three types of stepped beams. For constant length, the Q-factor increases in type-1 beams and decreases in the other two types as the step position moves from the left support towards the right. Moreover, Q-factors of a type-1 stepped beam depend on the widths of the different sections and, for some step positions, can exceed those of a uniform beam of the same thickness. For the step positions most common in real applications, close to the left support, type-1 stepped beams provide higher quality factors than the other stepped beams of the same cross-sectional area.

4.
A Monte Carlo method for digital computer simulation of the strength of (steel) members and structures is presented and applied to rolled steel beams and columns and to thin-walled cylinders. Input data are cumulative distribution functions (histograms) for the geometric and strength variables. The output (i.e. the scatter in structural strength) is printed as histograms and analysed statistically. Each output histogram is compared with the Gaussian normal distribution; using a nonparametric test of homogeneity, a number of histograms may then be compared with one another.

The case studies deal with the plastic strength of steel beams and the maximum load of axially loaded steel columns and thin-walled cylinders. Mathematical models are presented for beams subject to pure bending, bending and axial force, bending and shear, or uniform torsion. For the initially straight, centrally loaded column, a tangent modulus theory that accounts for residual stresses is used.

The simulations have been carried out for one HEA beam, four HEB beams and three IPE beams. Comparison of the results shows that the scatter in load-carrying capacity of the simulated beams and columns can be regarded as normally distributed; that the load-carrying capacities of beams and columns within the same group (HEB or IPE), and across the groups HEA and HEB, have distributions which differ very little from each other; and that the scatter in simulated beam strength, and in simulated column strength for short and medium-length columns, is affected much more by the variation in yield strength of the material than by the variation in cross-sectional data. This conclusion holds for ordinary distributions of yield strength in structural carbon steel.

Comparisons of simulation and test results show good agreement for the beams. Agreement is poorer for the columns, mainly because the tangent modulus theory assumes the columns to be initially straight. For the cylinders excellent agreement was achieved. The experience gained with the simulation system shows that a medium-sized computer can be used economically to run a relatively large number of simulation trials.
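The paper's member models are more elaborate; as a minimal sketch of the simulation idea, consider the plastic moment of a rectangular section, Mp = fy·Z with Z = b·h²/4. All distributions and numbers below are illustrative, not taken from the paper.

```python
import random
import statistics

def simulate_plastic_moment(n=20000, seed=42):
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        fy = rng.gauss(300.0, 24.0)   # yield strength [MPa], ~8% c.o.v.
        b = rng.gauss(0.10, 0.001)    # section width [m], ~1% c.o.v.
        h = rng.gauss(0.20, 0.002)    # section depth [m], ~1% c.o.v.
        z = b * h ** 2 / 4            # plastic section modulus [m^3]
        samples.append(fy * 1e6 * z)  # plastic moment [N*m]
    return samples

mp = simulate_plastic_moment()
mean = statistics.mean(mp)
cov = statistics.stdev(mp) / mean     # output coefficient of variation
```

With small geometric scatter, the output coefficient of variation stays close to the input c.o.v. of the yield strength, matching the paper's conclusion that yield-strength variation dominates.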

5.
Two architectures for optical processors designed to solve instances of NP-complete problems, trading space for time, are suggested. The first approach mimics the travelling salesman by an exponential number of travelling beams that simultaneously examine the different possible paths. The other approach uses a pre-processing stage in which O(n²) masks, each consisting of an exponential number of locations and representing a different edge in the graph, are constructed. The choice and combination of the appropriate (small) subset of these masks yields the solution: a candidate is rejected when the combination of masks completely blocks the light and accepted otherwise. We present detailed designs for the basic primitives of the optical processor, and propose designs for solving instances of Hamiltonian Path, Travelling Salesman, Clique, Independent Set, Vertex Cover, Partition, 3-SAT, 3D-Matching, and the Permanent.

6.
This paper proposes a novel edge detection method for both gray-level and color images. A 3×3 mask is considered around each pixel, and two pixel sets S0 and S1 within the mask are used to define an objective function. The values of the objective function for four directions determine the edge intensity and edge direction of the pixel at the center of the mask. After all pixels have been processed, an edge map and a direction map are generated; non-maxima suppression is then applied to both maps to extract the edge points. The proposed method detects edges successfully while avoiding double edges, thick edges, and speckles.
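The S0/S1 objective function is specific to the paper, but the non-maxima suppression step it feeds is standard; a minimal sketch with four quantized directions, treating out-of-range neighbours as zero (a generic implementation, not the paper's):

```python
# offsets of the two neighbours to compare against, per quantized
# gradient direction (a pixel survives only if it is a local maximum
# of edge intensity along its gradient direction)
NEIGHBOURS = {
    0:   [(0, -1), (0, 1)],    # horizontal gradient: compare left/right
    45:  [(-1, 1), (1, -1)],
    90:  [(-1, 0), (1, 0)],    # vertical gradient: compare up/down
    135: [(-1, -1), (1, 1)],
}

def non_maxima_suppression(mag, direction):
    rows, cols = len(mag), len(mag[0])
    def at(r, c):
        return mag[r][c] if 0 <= r < rows and 0 <= c < cols else 0.0
    out = [[0.0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            (dr1, dc1), (dr2, dc2) = NEIGHBOURS[direction[r][c]]
            if (mag[r][c] >= at(r + dr1, c + dc1)
                    and mag[r][c] >= at(r + dr2, c + dc2)):
                out[r][c] = mag[r][c]
    return out

mag = [[0.0] * 5 for _ in range(5)]
for r in range(5):
    mag[r][2] = 10.0                      # a one-pixel-wide vertical edge
direction = [[0] * 5 for _ in range(5)]   # horizontal gradient everywhere
out = non_maxima_suppression(mag, direction)
```

The ridge at column 2 survives while its neighbours are suppressed, which is how thick edges are thinned to single-pixel responses.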

7.
The authors have developed a beam finite element model for thin-walled beams with arbitrary cross-sections in the large-torsion context [1]. In that model, the trigonometric functions of the twist angle θx (c = cos θx − 1 and s = sin θx) were included as additional variables in the whole model without any assumption. In the present paper, three other 3D finite element beams are derived from three approximations based on truncated Taylor expansions of the functions c and s (cubic, quadratic and linear), and a finite element approach to these approximations is carried out. The promising results obtained in [1], [2] encouraged the authors to extend the formulation of the model to include load eccentricity effects. Solution of the non-linear equations is made possible by the Asymptotic Numerical Method (ANM) [3], used as an alternative to classical incremental-iterative methods. Many comparison examples are considered, concerning the non-linear behaviour of beams under twist moment, the post-buckling behaviour of struts under axial loads, and the lateral buckling of beams under eccentric bending loads. The results highlight the discrepancies between the various approximations often employed in the thin-walled beam literature for the geometrically non-linear analysis of beams in flexural–torsional behaviour.
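The effect of truncation level can be seen directly by evaluating the approximation errors of c = cos θx − 1 and s = sin θx at a moderately large twist angle. The assignment of terms to the "cubic", "quadratic" and "linear" variants below is an assumption for illustration; the paper's exact truncations may differ.

```python
import math

def approximation_errors(theta):
    # truncated Taylor expansions of s = sin(theta) and c = cos(theta) - 1
    s_exact = math.sin(theta)
    c_exact = math.cos(theta) - 1.0
    approx = {
        "linear":    (theta, 0.0),                       # s ~ theta, c ~ 0
        "quadratic": (theta, -theta**2 / 2),
        "cubic":     (theta - theta**3 / 6, -theta**2 / 2),
    }
    return {name: (abs(s - s_exact), abs(c - c_exact))
            for name, (s, c) in approx.items()}

errs = approximation_errors(0.5)   # theta_x = 0.5 rad, a large twist
```

At small angles all three variants agree; the discrepancies the paper reports grow with the twist angle, as the dropped Taylor terms become significant.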

8.
Computers & Structures, 1986, 23(5): 649–655
A semianalytical, seminumerical method of solution is presented for the governing partial differential equation of rectangular plates subjected to in-plane loads. The basis functions in the y-direction are chosen as the eigenfunctions of straight prismatic beams. The classical method of separation of variables is employed to obtain an ordinary differential equation, which is solved by a one-dimensional finite difference technique. The problem is thus reduced to a typical eigenvalue problem whose solution yields the buckling coefficient of the plate. The method is applied to plates with different edge conditions under various loading conditions, and the results are compared with existing solutions. Results for the case in which one loaded edge is fixed and the other simply supported are reported here for the first time.
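The reduction to a one-dimensional finite-difference eigenvalue problem can be shown on the simplest analogue, Euler buckling of a simply supported strip, y″ + (P/EI)·y = 0, whose smallest eigenvalue (π/L)² is recovered by central differences. This is a sketch of the machinery only; the paper's plate operator and beam eigenfunction basis differ.

```python
import numpy as np

def euler_buckling_coefficient(n=200, length=1.0):
    # central-difference approximation of -y'' on the interior grid,
    # with y = 0 at both supports (simply supported ends)
    h = length / (n + 1)
    main = 2.0 / h**2 * np.ones(n)
    off = -1.0 / h**2 * np.ones(n - 1)
    a = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    # smallest eigenvalue approximates (pi / length)**2 = P_cr / (E I)
    return np.linalg.eigvalsh(a)[0]

lam = euler_buckling_coefficient()
```

Refining the grid makes the discrete eigenvalue converge to π² ≈ 9.8696 from below, with O(h²) error.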

9.
Due to their compact binary codes and efficient search scheme, image hashing methods are suitable for large-scale image retrieval. In image hashing, the Hamming distance is used to measure the similarity between two points; for K-bit binary codes it is an integer bounded by K, so many returned images share the same Hamming distance to the query. In this paper, we propose two efficient image reranking methods: a distance-weights-based reranking method (DWR) and a bit-importance-based reranking method (BIR). DWR is aimed at reranking PCA hash codes; it obtains weights for the hash bits by averaging the Euclidean distances between points whose hash bits are equal and those whose bits differ. BIR is suitable for all types of binary codes: feedback is first used to detect the importance of each binary bit, and then large weights are assigned to important bits and small weights to minor ones. The main advantage of the proposed methods is computational efficiency. Evaluations on two large-scale image data sets demonstrate their efficacy.
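The tie-breaking idea can be sketched in a few lines; the per-bit weights below stand in for the importances BIR would learn from feedback (all values illustrative):

```python
def hamming(a, b):
    # a, b: integers holding K-bit hash codes
    return bin(a ^ b).count("1")

def weighted_hamming(a, b, bit_weights):
    # sum the weights of the bits on which the two codes differ
    diff = a ^ b
    return sum(w for i, w in enumerate(bit_weights) if diff >> i & 1)

query = 0b1010
gallery = [0b1011, 0b1110, 0b0010]
# plain Hamming distance: all three candidates are at distance 1,
# so the ranking cannot distinguish them
plain = [hamming(query, g) for g in gallery]
# per-bit importance weights, indexed from the least significant bit
# (illustrative, as if learned from feedback)
weights = [0.9, 0.1, 0.5, 0.2]
reranked = sorted(gallery, key=lambda g: weighted_hamming(query, g, weights))
```

The weighted distance turns the three-way tie into a strict order, which is exactly the reranking effect the abstract describes.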

10.
Straight-line fitting to sets of edge pixels is a commonly used technique in computer vision. This note suggests that if the edge pixels are characterized by gradient magnitudes and directions, the fitting process can be refined to give weight to the directions of the edge pixels as well as to their magnitudes. Specifically, we first fit a line L0 to the set of edge pixels, weighted by their magnitudes. We then weight each edge pixel by the cosine of the angle between its (tangential) direction and that of L0, and recompute the fit, obtaining a new line L1. This process can be iterated to yield a sequence of fits L2, L3, …. When fitting a line to a non-noisy edge that has wiggles, the process converges rapidly and yields a better fit than the original L0. In a noisy image, however, the iteration is often unstable; the method is therefore best used after noise cleaning has been performed on the edge data.
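The iteration is easy to reproduce; a small numpy sketch in which one "wiggle" pixel with a crosswise tangent is progressively downweighted (the synthetic data and names are illustrative):

```python
import numpy as np

def fit_line(points, weights):
    # weighted least-squares fit y = a x + b
    a, b = np.polyfit(points[:, 0], points[:, 1], 1, w=weights)
    return a, b

def direction_weighted_fit(points, magnitudes, directions, iterations=3):
    # directions: unit tangent vectors, one per edge pixel
    w = magnitudes.copy()
    for _ in range(iterations):
        a, b = fit_line(points, w)
        line_dir = np.array([1.0, a]) / np.hypot(1.0, a)
        cosines = np.abs(directions @ line_dir)
        w = magnitudes * cosines   # downweight pixels pointing across the line
    return a, b

pts = np.array([[0., 0.], [1., 2.], [2., 4.], [3., 6.], [4., 8.], [4., 10.]])
mags = np.ones(6)
t = np.array([1.0, 2.0]) / np.sqrt(5.0)    # tangent of the true edge y = 2x
p = np.array([-2.0, 1.0]) / np.sqrt(5.0)   # the wiggle pixel points across it
dirs = np.array([t, t, t, t, t, p])
a0, _ = fit_line(pts, mags)                # magnitude-only fit (L0)
a, b = direction_weighted_fit(pts, mags, dirs)
```

The L0 slope is pulled off by the wiggle pixel; after the cosine reweighting, the fit returns close to the true slope 2, matching the behaviour the note describes for non-noisy wiggly edges.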

11.
The effectiveness of consistent variational statements for discontinuous fields is illustrated for bending, buckling, and vibration of elastic beams. General variational statements are developed, from which finite-element formulations with piecewise-constant and piecewise-linear trial functions are obtained. Results are illustrated by means of numerical examples.

12.
A weighted-k-out-of-n:G system consists of n binary components, each with its own positive weight, and operates only when the total weight of the working components is at least k. Such a structure is useful when the components contribute differently to the performance of the entire system. This paper is concerned with the marginal and joint Birnbaum and Barlow–Proschan (BP) importances of the components in weighted-k-out-of-n:G systems. The method of the universal generating function is used for computing marginal and joint Birnbaum importances, while BP importance is computed by a direct probabilistic approach. Extensive numerical calculations are presented; with their help, it is possible to observe how the marginal and joint importances change with respect to the weights of the components.
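The universal-generating-function computation amounts to convolving per-component weight distributions; a minimal sketch, with Birnbaum importance obtained by pivoting on one component (names illustrative):

```python
def reliability(weights, probs, k):
    # distribution of the total weight of working components,
    # i.e. the universal generating function stored as {weight: probability}
    dist = {0: 1.0}
    for w, p in zip(weights, probs):
        new = {}
        for total, q in dist.items():
            new[total + w] = new.get(total + w, 0.0) + q * p   # component works
            new[total] = new.get(total, 0.0) + q * (1.0 - p)   # component fails
        dist = new
    return sum(q for total, q in dist.items() if total >= k)

def birnbaum(weights, probs, k, i):
    # I_B(i) = P(system works | x_i = 1) - P(system works | x_i = 0)
    up = probs[:i] + [1.0] + probs[i + 1:]
    down = probs[:i] + [0.0] + probs[i + 1:]
    return reliability(weights, up, k) - reliability(weights, down, k)

# two components with weights 2 and 3; the system needs total weight >= 3,
# so it works exactly when the weight-3 component works
r = reliability([2, 3], [0.9, 0.8], k=3)
```

Here the weight-3 component is pivotal in every state (Birnbaum importance 1) while the weight-2 component never changes the outcome (importance 0), showing how importance tracks the weights.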

13.
By considering the uncertainty in the edge weights of a network, fuzzy shortest path problems, one of the derivatives of the shortest path problem, emerge from practical applications in various areas. A path-finding model inspired by an amoeboid organism, Physarum polycephalum, has been shown to be an effective approach for deterministic shortest path problems. In this paper, a biologically inspired algorithm called the Fuzzy Physarum Algorithm (FPA) is proposed for fuzzy shortest path problems. FPA is developed from the path-finding model, using fuzzy arithmetic and fuzzy distance to deal with fuzzy quantities; as a result, it can represent and handle the fuzzy shortest path problem flexibly and effectively. Unlike many existing methods, the proposed FPA assumes no order relation. Several examples, including a tourist problem, illustrate the effectiveness and flexibility of the proposed method, and the results are compared with existing methods.

14.
We analyze a simple method for finding shortest paths in Euclidean graphs (where vertices are points in a Euclidean space and edge weights are Euclidean distances between points). For many graph models, the average running time of the algorithm to find the shortest path between a specified pair of vertices in a graph with V vertices and E edges is shown to be O(V), as compared with the O(E + V log V) required by the classical algorithm due to Dijkstra.
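The analysed method is essentially Dijkstra's algorithm guided by the straight-line distance to the target, so that search effort concentrates near the line between the endpoints; a sketch of that Euclidean-heuristic variant (graph and names illustrative):

```python
import heapq
import math

def euclidean_shortest_path(coords, adj, source, target):
    # A*-style search: priority = distance travelled so far plus the
    # straight-line distance to the target, an admissible lower bound
    # in a Euclidean graph
    def h(v):
        (x1, y1), (x2, y2) = coords[v], coords[target]
        return math.hypot(x1 - x2, y1 - y2)
    dist = {source: 0.0}
    heap = [(h(source), source)]
    done = set()
    while heap:
        _, u = heapq.heappop(heap)
        if u == target:
            return dist[u]
        if u in done:
            continue
        done.add(u)
        for v in adj[u]:
            (x1, y1), (x2, y2) = coords[u], coords[v]
            d = dist[u] + math.hypot(x1 - x2, y1 - y2)
            if v not in dist or d < dist[v]:
                dist[v] = d
                heapq.heappush(heap, (d + h(v), v))
    return math.inf

coords = {0: (0, 0), 1: (1, 0), 2: (2, 0), 3: (1, 1)}
adj = {0: [1, 3], 1: [0, 2, 3], 2: [1, 3], 3: [0, 1, 2]}
d = euclidean_shortest_path(coords, adj, 0, 2)
```

Because the Euclidean heuristic is consistent, the first time the target is popped its distance is optimal; vertices far off the source-target line tend never to be expanded, which is the intuition behind the O(V) average-case bound.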

15.
This article presents an approach to designing an adaptive, data-dependent committee of models applied to the prediction of several financial attributes for assessing a company's future performance. Current liabilities/Current assets, Total liabilities/Total assets, Net income/Total assets, and Operating income/Total liabilities are the attributes used in this paper. A self-organizing map (SOM), used for data mapping and analysis, enables building committees whose size and aggregation weights are specific to each SOM node. The number of basic models aggregated into a committee and the aggregation weights depend on the accuracy of the basic models and their ability to generalize in the vicinity of the SOM node. A random forest is used as the basic model in this study. The technique was tested on data concerning companies from ten sectors of the United States healthcare industry and compared with averaging and weighted-averaging committees. The proposed adaptivity of committee size and aggregation weights led to a statistically significant increase in prediction accuracy compared to the other types of committees.

16.
We consider the inapproximability of the correlation clustering problem, defined as follows: given a graph G = (V, E) in which each edge is labeled either “+” (similar) or “−” (dissimilar), correlation clustering seeks to partition the vertices into clusters so that the number of pairs correctly (resp., incorrectly) classified with respect to the labels is maximized (resp., minimized). The two complementary problems are called MaxAgree and MinDisagree, respectively, and have been studied on complete graphs, where every edge is labeled, and on general graphs, where some edges might not be labeled. Natural edge-weighted versions of both problems have been studied as well. Let S-MaxAgree denote the weighted problem where all weights are taken from a set S. We show that S-MaxAgree with weights bounded by O(|V|^(1/2−δ)) essentially belongs to the same hardness class in the following sense: if there is a polynomial-time algorithm that approximates S-MaxAgree within a factor of λ = O(log |V|) with high probability, then for any choice of S, S-MaxAgree can be approximated in polynomial time within a factor of (λ + ε), where ε > 0 can be arbitrarily small, with high probability. A similar statement also holds for S-MinDisagree. This result implies that it is hard (assuming NP ≠ RP) to approximate unweighted MaxAgree within a factor of 80/79 − ε, improving upon the previously known factor of 116/115 − ε by Charikar et al. [M. Charikar, V. Guruswami, A. Wirth, Clustering with qualitative information, Journal of Computer and System Sciences 71 (2005) 360–383].
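Given a candidate partition, the two objectives are straightforward to evaluate; a minimal sketch (the optimization itself is the hard part the abstract is about):

```python
def agreements(edges, labels, clustering):
    # MaxAgree objective: '+' edges inside a cluster and '-' edges
    # across clusters count as correctly classified pairs
    agree = 0
    for (u, v), lab in zip(edges, labels):
        same = clustering[u] == clustering[v]
        if (lab == '+') == same:
            agree += 1
    return agree

edges = [(0, 1), (1, 2), (0, 2)]
labels = ['+', '+', '-']              # an inconsistent triangle
one_cluster = {0: 0, 1: 0, 2: 0}      # put everything together
split = {0: 0, 1: 0, 2: 1}            # cut vertex 2 off
```

MinDisagree is just the complement, len(edges) minus the agreement count; on this inconsistent triangle every clustering must disagree with at least one label, which is the kind of unavoidable loss the approximation factors measure.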

17.
Priority relationships may exist among the criteria in multi-criteria decision making (MCDM) problems. Such problems, which we focus on in this paper, are called prioritized MCDM problems. In order to aggregate the evaluation values of the criteria for an alternative, we first develop weighted prioritized aggregation operators based on triangular norms (t-norms) together with the weights of the criteria, extending the prioritized aggregation operators proposed by Yager (Yager, R. R. (2004). Modeling prioritized multi-criteria decision making. IEEE Transactions on Systems, Man, and Cybernetics, 34, 2396–2404). After discussing how the concentration degrees of the evaluation values with respect to each criterion influence the priority relationships, we further develop a method for handling prioritized MCDM problems. Through a simple example, we show that this method can be used in wider situations than the existing prioritized MCDM methods. Finally, the relationships between the weights associated with criteria and the preference relations among alternatives are explored, and two quadratic programming models for determining weights based on multiplicative and fuzzy preference relations are developed.
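The paper's operators are t-norm-based generalizations; as a baseline, a common form of Yager-style prioritized averaging can be sketched as follows, where the importance of each criterion is throttled by the satisfaction of all higher-priority criteria (an illustration, not the paper's exact operator):

```python
def prioritized_average(satisfactions):
    # satisfactions: values in [0, 1], listed from highest priority down
    t = 1.0
    ts = []
    for s in satisfactions:
        ts.append(t)
        t *= s          # a low score on a high-priority criterion caps
                        # the influence of everything below it
    total = sum(ts)
    weights = [ti / total for ti in ts]
    score = sum(w * s for w, s in zip(weights, satisfactions))
    return score, weights

# top-priority criterion fully satisfied, so lower criteria still matter
score, weights = prioritized_average([1.0, 0.5, 0.8])
```

If the top-priority criterion scored poorly instead, the induced weights of all lower criteria would shrink with it; this is the qualitative behaviour that distinguishes prioritized aggregation from a fixed weighted average.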

18.
A novel and accurate method is presented for matching heterogeneous faces, such as sketch and near-infrared (NIR) images, against a visible (VIS) photo gallery, and vice versa. A new geometric edge-texture feature (GETF) is proposed which captures not only edge information but also texture information. GETF is constructed from the combined edge and texture images of the same individual: the local binary pattern (LBP) is used for texture and Canny edge detection for edges. Since edges are sensitive to illumination, the image is converted to an illumination-invariant gradient domain before the Canny operator is applied. For each pixel of the edge image, the nearest edge pixel is found; the Hamming distance between any pixel and the nearest edge pixel of the corresponding texture image gives the GETFDist feature, and the angle between them gives the GETFAng feature. To classify the heterogeneous faces we propose a multiple fuzzy-classifier system combining fuzzy partial least squares (FPLS) and fuzzy local feature-based discriminant analysis (FLFDA); statistical tests show that the combined classifier performs better than the individual classifiers. In sketch-photo matching, a rank-1 accuracy of 99.66% is achieved on a gallery of 606 photos drawn from the CUHK student, AR face, and XM2VTS datasets. In NIR-VIS matching, a rank-1 accuracy of 99.50% is achieved on a gallery of 400 VIS images from the CASIA-HFB dataset.

19.
Clustering networks plays a key role in many scientific fields, from Biology to Sociology and Computer Science. Some clustering approaches are called global because they exploit knowledge of the whole network topology, whereas so-called local methods require only partial knowledge of it. Global approaches yield accurate results but do not scale well to large networks; local approaches, conversely, are less accurate but computationally fast. We propose CONCLUDE (COmplex Network CLUster DEtection), a new clustering method that couples the accuracy of global approaches with the scalability of local ones. CONCLUDE generates random, non-backtracking walks of finite length to compute the importance of each edge in keeping the network connected, i.e., its edge centrality. Edge centralities allow vertices to be mapped onto points of a Euclidean space and all-pairs distances between vertices to be computed; those distances are then used to partition the network into clusters.

20.
Segmentation of anatomical structures in radiological images is one of the important steps in the computerized approach to bone age assessment. In this paper a method for correctly locating borders in the epi-metaphyseal regions of interest is described. Well-segmented bone structures are obtained using Gibbs random fields as the first segmentation step; however, this method does not prove adequate for correctly outlining the other tissues in the epi-metaphyseal area. To delineate the cartilage in this region correctly, a second segmentation step is applied, utilizing active contours as a post-segmentation edge-location technique. Controlling the tension and bending of the active contour requires a set of weights in the energy functional to be chosen. To adjust the weights and to test the methodology initially, a model of the region of interest containing three different anatomical structures corrupted with Gaussian noise was designed. The combined methodology of Gibbs random fields and active contours with the final set of weights was applied to 200 regions of interest randomly selected from 1100 left-hand radiographs. A meaningful improvement in ultimate contour location and smoothing was observed in regions with cartilage or bone convexity developed near the bottom of the epiphysis.
Arkadiusz Gertych
