2.
This paper exploits the properties of the commute time for the purposes of graph simplification and matching. Our starting point is the lazy random walk on the graph, which is determined by the heat kernel of the graph and can be computed from the spectrum of the graph Laplacian. We characterise the random walk using the commute time between nodes, and show how this quantity may be computed from the Laplacian spectrum using the discrete Green's function. In this paper, we explore two different, but essentially dual, simplified graph representations delivered by the commute time. The first representation decomposes graphs into concentric layers. To do this we augment the graph with an auxiliary node which acts as a heat source. We use the pattern of commute times from this node to decompose the graph into a sequence of layers. Our second representation is based on the minimum spanning tree of the commute time matrix. The spanning trees located using commute time prove to be stable to structural variations. We match the graphs by applying a tree-matching method to the spanning trees. We experiment with the method on synthetic and real-world image data, where it proves to be effective.
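The commute time referred to here has a standard closed form in terms of the Laplacian's Green's function: CT(u, v) = vol(G) * (G[u,u] + G[v,v] - 2*G[u,v]). As a minimal sketch (assuming a small, connected, unweighted graph supplied as a dense NumPy adjacency matrix, and using the pseudoinverse rather than an explicit spectral sum):

```python
import numpy as np

def commute_times(A):
    """Commute-time matrix of a connected, unweighted graph, via the
    Moore-Penrose pseudoinverse of the graph Laplacian (the discrete
    Green's function): CT(u, v) = vol(G) * (G[u,u] + G[v,v] - 2*G[u,v])."""
    degrees = A.sum(axis=1)
    L = np.diag(degrees) - A          # combinatorial Laplacian
    G = np.linalg.pinv(L)             # Green's function L^+
    vol = degrees.sum()               # volume = sum of degrees
    d = np.diag(G)
    return vol * (d[:, None] + d[None, :] - 2 * G)

# Path graph on 4 nodes: the commute time between adjacent nodes is 6.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(commute_times(A))
```

The second representation described in the abstract would then correspond to the minimum spanning tree of this matrix (computable, for instance, with scipy.sparse.csgraph.minimum_spanning_tree).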
3.
We investigate important combinatorial and algorithmic properties of G_{n,m,p} random intersection graphs. In particular, we prove that with high probability (a) random intersection graphs are expanders, (b) random walks on such graphs are "rapidly mixing" (in particular, they mix in logarithmic time) and (c) the cover time of random walks on such graphs is optimal (i.e. it is Θ(n log n)). All results are proved for p very close to the connectivity threshold and for the interesting, non-trivial range where random intersection graphs differ from classical G_{n,p} random graphs.
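For intuition about the model and the cover-time quantity (a simulation illustration only, not the paper's probabilistic analysis), a sketch that samples a G_{n,m,p} graph and measures the cover time of a simple random walk might look like:

```python
import random
from itertools import combinations

def random_intersection_graph(n, m, p, seed=None):
    """Sample G_{n,m,p}: each of n vertices keeps each of m labels
    independently with probability p; two vertices are adjacent
    iff their label sets intersect."""
    rng = random.Random(seed)
    labels = [{j for j in range(m) if rng.random() < p} for _ in range(n)]
    adj = [[] for _ in range(n)]
    for u, v in combinations(range(n), 2):
        if labels[u] & labels[v]:
            adj[u].append(v)
            adj[v].append(u)
    return adj

def cover_time(adj, start=0):
    """Steps until a simple random walk has visited every vertex
    (assumes the realized graph is connected)."""
    seen, u, steps = {start}, start, 0
    while len(seen) < len(adj):
        u = random.choice(adj[u])
        seen.add(u)
        steps += 1
    return steps

adj = random_intersection_graph(n=50, m=100, p=0.15, seed=7)
print(sum(cover_time(adj) for _ in range(20)) / 20)  # Monte Carlo mean
```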
5.
The analysis of complex networks is of major interest in various fields of science. In many applications we face the challenge that the exact topology of a network is unknown but we are instead given information about distances within this network. The theoretical approaches to this problem have so far focused on the reconstruction of graphs from shortest-path distance matrices. Often, however, movements in networks do not follow shortest paths but occur in a random fashion. In these cases an appropriate distance measure can be defined as the mean length of a random walk between two nodes — a quantity known as the mean first hitting time. In this contribution we investigate whether a graph can be reconstructed from its mean first hitting time matrix and put forward an algorithm for solving this problem. A heuristic method to reduce the computational effort is described and analyzed. In the case of trees we can even give an algorithm for reconstructing graphs from incomplete random walk distance matrices.
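The forward problem (computing the mean first hitting time matrix from a known graph) reduces to solving one linear system per target node; the paper addresses the inverse direction. A minimal sketch, assuming a connected, unweighted graph:

```python
import numpy as np

def hitting_time_matrix(A):
    """Mean first hitting times H[i, j] of a simple random walk on the
    graph with adjacency matrix A. For each target j, solve
    (I - P restricted to states != j) h = 1, with h(j) = 0.
    H is generally asymmetric; H + H.T gives the commute times."""
    n = len(A)
    P = A / A.sum(axis=1, keepdims=True)   # transition matrix
    H = np.zeros((n, n))
    for j in range(n):
        idx = [i for i in range(n) if i != j]
        M = np.eye(n - 1) - P[np.ix_(idx, idx)]
        H[idx, j] = np.linalg.solve(M, np.ones(n - 1))
    return H

# Path graph on 3 nodes
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
print(hitting_time_matrix(A))
```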
6.
Embeddings of paths have attracted much attention in parallel processing. Many-to-many communication is one of the most central issues in various interconnection networks. A graph G is globally two-equal-disjoint path coverable if for any two distinct pairs of vertices (u,v) and (w,x) of G, there exist two disjoint paths P and Q such that (1) P (respectively, Q) joins u and v (respectively, w and x), (2) |P| = |Q|, and (3) V(P ∪ Q) = V(G). The Matching Composition Network (MCN) is a family of networks in which two components are connected by a perfect matching. In this paper, we consider the globally two-equal-disjoint path cover property of MCNs. Applying our result, the Crossed cube CQ_n, the Twisted cube TQ_n, and the Möbius cube MQ_n can all be proven to be globally two-equal-disjoint path coverable for n ≥ 5.
7.
Recently, power shortages have become a major problem all over Japan, due to the Great East Japan Earthquake, which resulted in the shutdown of a nuclear power plant. As a consequence, production scheduling has become a problem for factories, due to considerations of the availability of electric power. For a factory, the contract with the electric power company sets the maximum power demand for a unit period, and in order to minimize this, it is necessary to consider the peak power when scheduling production. There are conventional studies on flowshop scheduling that consider peak power. However, these studies did not consider fluctuations in the processing times. Because the actual processing time is not constant, there is an increased probability of simultaneous operations on multiple machines. If the probability of simultaneous operations is high, the probability of increasing the peak power is high. Thus, we consider inserting idle time (delaying the input of parts) into the schedule in order to reduce the likelihood of simultaneous operations. We seek a robust schedule that limits the peak power in spite of unexpected fluctuations in the processing times. However, when we insert idle time, the makespan gets longer and the production efficiency decreases. Therefore, we performed simulations to investigate the optimal amount of idle time and the best points at which to insert it. We propose a more robust production scheduling model that considers random processing times and the peak power consumption. The results of experiments show that the schedule produced by the proposed method is superior to the initial schedule and to a schedule produced by another method. Thus, accounting for random processing times can limit the peak power.
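As a toy illustration of the tradeoff being simulated (a hypothetical two-machine flowshop with invented power draws and a uniform ±20% processing-time fluctuation, not the paper's model), one can estimate peak power and makespan for different amounts of inserted idle time:

```python
import random

def simulate(jobs, idle, power=(3.0, 2.0), rng=random):
    """One run of a 2-machine flowshop. Job k is released to machine 1
    at time k*idle; processing times fluctuate by +/-20% around their
    means. Returns (peak total power draw, makespan)."""
    intervals = []                      # (start, end, machine power)
    free1 = free2 = 0.0
    for k, (m1, m2) in enumerate(jobs):
        p1 = m1 * rng.uniform(0.8, 1.2)
        p2 = m2 * rng.uniform(0.8, 1.2)
        s1 = max(free1, k * idle); free1 = s1 + p1
        s2 = max(free2, free1);    free2 = s2 + p2
        intervals += [(s1, free1, power[0]), (s2, free2, power[1])]
    # Total power is piecewise constant; its maximum occurs at some
    # interval start time, so checking the starts suffices.
    peak = max(sum(p for s, e, p in intervals if s <= t < e)
               for t, _, _ in intervals)
    return peak, free2

jobs = [(4.0, 3.0)] * 5
for idle in (0.0, 4.0, 8.0):
    runs = [simulate(jobs, idle) for _ in range(500)]
    print(idle,
          sum(p for p, _ in runs) / 500,    # mean peak power
          sum(m for _, m in runs) / 500)    # mean makespan
```

Larger idle times reduce the chance that both machines draw power simultaneously, at the cost of a longer makespan, which is exactly the tradeoff the paper's simulations explore.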
8.
A thick-film resistor paste was prepared using LSCO, synthesized by the traditional solid-state method, as the functional phase and a calcium borosilicate glass as the inorganic binder phase. The effects of the glass-phase content and the peak firing temperature on the sheet resistance and the temperature coefficient of resistance (TCR) of the thick films were studied. The results show that when the mass fraction of the glass phase in the solid component of the paste is 3%–9% (a volume fraction of 11.11%–28.56%), the sheet resistance of the resulting thick-film resistors ranges from 1 kΩ/□ to 10 MΩ/□, and the TCR ranges from −8000×10^−6/°C to −5000×10^−6/°C.
9.
The sensitivity field of an electrical resistance tomography (ERT) system is affected by the distribution of the multiphase flow media, and the sensitivity field distribution data, required as prior data for image reconstruction, must be obtained by theoretical calculation. To reduce the soft-field error of the sensitivity field and improve the quality of the reconstructed images, an in-depth analysis of the sensitivity field distribution is essential. Building on the basic principles of electrical resistance tomography, this paper establishes a mathematical model of the sensitivity field using the finite element method. By studying field domains containing discrete media, the factors and regularities governing the sensitivity field distribution are analyzed, and the calculation and visual simulation of the sensitivity field distribution are carried out. Experiments show that the established finite element model is correct and that the computed sensitivity field distribution matches reality; the computation takes about 10 s. This provides a basis for related image reconstruction algorithms.
10.
Large-area land cover products generated from remotely sensed data are difficult to validate in a timely and cost-effective manner. As a result, pre-existing data are often used for validation. Temporal, spatial, and attribute differences between the land cover product and pre-existing validation data can result in inconclusive depictions of map accuracy. This approach may therefore misrepresent the true accuracy of the land cover product, as well as the accuracy of the validation data, which is not assumed to be without error. Hence, purpose-acquired validation data are preferred; however, logistical constraints often preclude their use — especially for large-area land cover products. Airborne digital video provides a cost-effective tool for collecting purpose-acquired validation data over large areas. An operational trial was conducted, involving the collection of airborne video for the validation of a 31,000 km² sub-sample of the Canadian large-area Earth Observation for Sustainable Development of Forests (EOSD) land cover map (Vancouver Island, British Columbia, Canada). In this trial, one form of agreement between the EOSD product and the airborne video data was defined as a match between the mode land cover class of a 3 × 3 pixel neighbourhood surrounding the sample pixel and the primary or secondary choice of land cover for the interpreted video. This scenario produced the highest level of overall accuracy, 77%, at level 4 of the classification hierarchy (13 classes). The coniferous treed class, which represented 71% of Vancouver Island, had an estimated user's accuracy of 86%. Purpose-acquired video was found to be a useful and cost-effective data source for validation of the EOSD land cover product. The impact of using multiple interpreters was also tested and documented. Improvements to the sampling and response designs that emerged from this trial will benefit a full-scale accuracy assessment of the EOSD product and also provide insights for other regional and global land cover mapping programs.
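The accuracy figures quoted here (overall and user's accuracy) are derived from a confusion matrix. A minimal sketch of those computations, assuming the common convention of rows = map classes and columns = reference classes (the paper's layout may differ):

```python
import numpy as np

def accuracy_report(cm):
    """Overall, user's, and producer's accuracies from a confusion
    matrix (rows = map classes, columns = reference classes)."""
    cm = np.asarray(cm, dtype=float)
    correct = np.diag(cm)
    overall = correct.sum() / cm.sum()
    users = correct / cm.sum(axis=1)       # 1 - commission error
    producers = correct / cm.sum(axis=0)   # 1 - omission error
    return overall, users, producers

# Two-class toy example: coniferous treed vs. everything else
cm = [[86, 14],
      [9, 41]]
print(accuracy_report(cm))
```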
11.
The first-order, untyped, functional logic language Babel is extended by polymorphic types and higher-order functions. A sophisticated incompatibility check, which is used to guarantee the non-ambiguity of Babel programs, is presented. For the implementation of the language, unification and backtracking are integrated into a programmed (functional) graph reduction machine. The implementation of this machine has been used for a comparison between Babel and Prolog based on the runtimes of some example programs.
12.
This paper extends the work on discovering fuzzy association rules with degrees of support and implication (ARsi). The effort is twofold: one part is to discover ARsi with hierarchy, so as to express richer semantics, since hierarchical relationships usually exist among the fuzzy sets associated with the attribute concerned; the other is to generate a "core" set of rules, namely the rule cover set, which is of particular interest in the sense that all other rules can be derived from it. Corresponding algorithms for ARsi with hierarchy and for the cover set are proposed, with pruning strategies incorporated to improve computational efficiency. Data experiments are conducted as well to show the effectiveness of the approach.
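As a hedged illustration only (one common textbook formulation of fuzzy support and implication degrees; the paper's ARsi definitions may differ), the two degrees for a rule A ⇒ B could be computed from per-record membership values like this:

```python
def degree_of_support(mu_a, mu_b):
    """Support of the fuzzy rule A => B: average of a t-norm (here
    min) of antecedent and consequent membership degrees."""
    return sum(min(a, b) for a, b in zip(mu_a, mu_b)) / len(mu_a)

def degree_of_implication(mu_a, mu_b):
    """Implication degree, here via the Kleene-Dienes operator
    max(1 - a, b), averaged over all records."""
    return sum(max(1.0 - a, b) for a, b in zip(mu_a, mu_b)) / len(mu_a)

# Hypothetical memberships of four records in two fuzzy sets
mu_young = [0.9, 0.7, 0.2, 0.0]
mu_low_income = [0.8, 0.6, 0.5, 0.1]
print(degree_of_support(mu_young, mu_low_income),
      degree_of_implication(mu_young, mu_low_income))
```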
13.
Several investigations indicate that the Bidirectional Reflectance Distribution Function (BRDF) contains information that can be used to complement spectral information for improved land cover classification accuracies. Prior studies on the addition of BRDF information to improve land cover classifications have been conducted primarily at local or regional scales. Thus, the potential benefits of adding BRDF information to improve global to continental scale land cover classification have not yet been explored. Here we examine the impact of multidirectional global scale data from the first Polarization and Directionality of Earth Reflectances (POLDER) spacecraft instrument, flown on the Advanced Earth Observing Satellite (ADEOS-1) platform, on overall classification accuracy and per-class accuracies for 15 land cover categories specified by the International Geosphere Biosphere Programme (IGBP). A set of 36,648 global training pixels (7 × 6 km spatial resolution) was used with a decision tree classifier to evaluate the performance of classifying POLDER data with and without the inclusion of BRDF information. BRDF 'metrics' for the eight-month POLDER on ADEOS-1 archive (10/1996–06/1997) were developed that describe the temporal evolution of the BRDF as captured by a semi-empirical BRDF model. The concept of BRDF 'feature space' is introduced and used to explore and exploit the bidirectional information content. The C5.0 decision tree classifier was applied with a boosting option, with the temporal metrics for spectral albedo as input for a first test, and with spectral albedo and BRDF metrics for a second test. Results were evaluated against 20 random subsets of the training data. Examination of the BRDF feature space indicates that coarse scale BRDF coefficients from POLDER provide information on land cover that is different from the spectral and temporal information of the imagery. The contribution of BRDF information to reducing classification errors is also demonstrated: the addition of BRDF metrics reduces the mean overall classification error rate by 3.15 percentage points (from 18.1% to 14.95%), with larger improvements for producer's accuracies of individual classes such as Grasslands (+8.71%), Urban areas (+8.02%), and Wetlands (+7.82%). User's accuracies for the Urban (+7.42%) and Evergreen Broadleaf Forest (+6.70%) classes are also increased. The methodology and results are widely applicable to current multidirectional satellite data from the Multi-angle Imaging Spectroradiometer (MISR), and to the next generation of POLDER-like multi-directional instruments.
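The semi-empirical BRDF models mentioned here are typically linear in a set of kernels, so the per-pixel coefficients can be fitted by ordinary least squares. A sketch under that assumption (kernel values per viewing geometry are taken as precomputed inputs; this illustrates the kind of coefficients such 'metrics' are built from, not the paper's exact procedure):

```python
import numpy as np

def fit_kernel_brdf(reflectance, k_vol, k_geo):
    """Least-squares fit of a linear semi-empirical (Ross-Li style)
    kernel BRDF model: R = f_iso + f_vol*K_vol + f_geo*K_geo."""
    X = np.column_stack([np.ones_like(k_vol), k_vol, k_geo])
    coeffs, *_ = np.linalg.lstsq(X, reflectance, rcond=None)
    return coeffs  # (f_iso, f_vol, f_geo)

# Toy multi-angular sample: six looks at one pixel
k_vol = np.array([-0.1, 0.0, 0.1, 0.2, 0.3, 0.4])
k_geo = np.array([-1.2, -1.0, -0.9, -0.7, -0.6, -0.4])
refl = 0.25 + 0.10 * k_vol + 0.05 * k_geo   # synthetic observations
print(fit_kernel_brdf(refl, k_vol, k_geo))  # recovers the coefficients
```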
14.
Many problems can be cast as statistical inference on an attributed random graph. Our motivation is change detection in communication graphs. We prove that tests based on a fusion of graph-derived and content-derived metadata can be more powerful than those based on graph or content features alone. For some basic attributed random graph models, we derive fusion tests from the likelihood ratio. We describe the regions in parameter space where the fusion improves power, using both numeric results from selected small examples and analytic results on asymptotically large graphs.
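As a toy sketch of likelihood-ratio fusion (an invented two-parameter model, not the models analyzed in the paper), one can sum a structural and a content log-ratio term per potential edge:

```python
import math

def fused_log_lr(edges, contents, p0, p1, mu):
    """Log-likelihood ratio for a toy attributed-graph change test:
    under H0 each potential edge is present with probability p0 and
    carries content ~ N(0, 1); under H1 the edge probability is p1
    and the content on present edges ~ N(mu, 1). Fusion = adding the
    graph-derived and content-derived log-ratio terms."""
    llr = 0.0
    for present, c in zip(edges, contents):
        if present:
            llr += math.log(p1 / p0)        # graph-derived term
            llr += mu * c - mu * mu / 2.0   # Gaussian content term
        else:
            llr += math.log((1 - p1) / (1 - p0))
    return llr

# Reject H0 when the fused statistic exceeds a threshold chosen
# for the desired false-alarm rate.
print(fused_log_lr([1, 0, 1, 1], [0.8, 0.0, 1.3, -0.2],
                   p0=0.2, p1=0.5, mu=1.0))
```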
15.
This paper determines upper bounds on the expected time complexity of a variety of parallel algorithms for undirected and directed random graph problems. For connectivity, biconnectivity, transitive closure, minimum spanning trees, and all-pairs minimum cost paths, we prove the expected time to be O(log log n) for the CRCW PRAM (this parallel RAM machine allows resolution of write conflicts) and O(log n · log log n) for the CREW PRAM (which allows simultaneous reads but not simultaneous writes). We also show that the problem of graph isomorphism has expected parallel time O(log log n) for the CRCW PRAM and O(log n) for the CREW PRAM. Most of these results follow from upper bounds on the mean depth of a graph, derived in this paper for more general graphs than was previously known. For undirected connectivity especially, we present a new probabilistic algorithm which runs on a randomized input and has an expected running time of O(log log n) on the CRCW PRAM, with an expected number of only O(n) processors. Our results also improve known upper bounds on the expected space required for sequential graph algorithms. For example, we show that the problems of finding connected components, transitive closure, minimum spanning trees, and minimum cost paths have expected sequential space O(log n · log log n) on a deterministic Turing machine. We use a simulation of the CRCW PRAM to obtain these expected sequential space bounds. This research was supported by National Science Foundation Grant DCR-85-03251 and Office of Naval Research Contract N00014-80-C-0647. This research was partially supported by National Science Foundation Grants MCS-83-00630 and DCR-8503497, by the Greek Ministry of Research and Technology, and by the ESPRIT Basic Research Actions Project ALCOM.
16.
Atmospheric general circulation model (AGCM) simulations predict that a complete deforestation of the Amazon basin would lead to a significant climate change; however, it is more difficult to determine the amount of deforestation that would lead to a detectable climate change. This paper examines whether cloudiness has already changed locally in the Brazilian arc of deforestation, one of the most deforested regions of the Amazon basin, where over 15% of the primary forest has been converted to pasture and agriculture. Three pairs of deforested/forested areas have been selected at a scale compatible with that of climate model grids to compare changes in land cover with changes in cloudiness observed in satellite data over a 10-year period from 1984 to 1993. Analysis of cloud cover trends suggests that a regional climate change may already be underway in the most deforested part of the arc of deforestation. Although changes in cloud cover over deforested areas are not significant for interannual variations, they are for the seasonal and diurnal distributions. During the dry season, observations show more low-level clouds in early afternoon and less convection at night and in early morning over deforested areas. During the wet season, convective cloudiness is enhanced in the early night over deforested areas. Generally speaking, the results suggest that deforestation may lead to increased seasonality; however, some of the differences observed between deforested and forested areas may be related to their different geographical locations.
17.
Information related to land cover is immensely important to global change science. In the past decade, data sources and methodologies for creating global land cover maps from remote sensing have evolved rapidly. Here we describe the datasets and algorithms used to create the Collection 5 MODIS Global Land Cover Type product, which is substantially changed relative to Collection 4. In addition to using updated input data, the algorithm and ancillary datasets used to produce the product have been refined. Most importantly, the Collection 5 product is generated at 500-m spatial resolution, providing a four-fold increase in spatial resolution relative to the previous version. In addition, many components of the classification algorithm have been changed. The training site database has been revised, land surface temperature is now included as an input feature, and ancillary datasets used in post-processing of ensemble decision tree results have been updated. Further, methods used to correct classifier results for bias imposed by training data properties have been refined, techniques used to fuse ancillary data based on spatially varying prior probabilities have been revised, and a variety of methods have been developed to address limitations of the algorithm for the urban, wetland, and deciduous needleleaf classes. Finally, techniques used to stabilize classification results across years have been developed and implemented to reduce year-to-year variation in land cover labels not associated with land cover change. Results from a cross-validation analysis indicate that the overall accuracy of the product is about 75% correctly classified, but that the range in class-specific accuracies is large. Comparison of Collection 5 maps with Collection 4 results shows substantial differences arising from increased spatial resolution and changes in the input data and classification algorithm.
19.
A hub set in a graph G is a set U ⊆ V(G) such that any two vertices outside U are connected by a path whose internal vertices lie in U. We prove that h(G) ≤ hc(G) ≤ γc(G) ≤ h(G) + 1, where h(G), hc(G), and γc(G), respectively, are the minimum sizes of a hub set in G, a hub set inducing a connected subgraph, and a connected dominating set. Furthermore, all graphs with γc(G) > hc(G) ≥ 4 are obtained by substituting graphs into three consecutive vertices of a cycle; this yields a polynomial-time algorithm to check whether hc(G) = γc(G).
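Verifying the hub-set condition for a given U is straightforward: two vertices outside U are linked through U exactly when they are adjacent or both have neighbours in a common connected component of the subgraph induced by U. A sketch of that check:

```python
from itertools import combinations

def is_hub_set(adj, U):
    """Check the hub-set condition: every pair of vertices outside U
    is adjacent or joined by a path with all internal vertices in U.
    adj: dict mapping each vertex to a set of neighbours."""
    U = set(U)
    # Label each vertex of U with its connected component in G[U].
    comp, label = {}, 0
    for s in U:
        if s in comp:
            continue
        label += 1
        stack = [s]
        while stack:
            v = stack.pop()
            if v in comp:
                continue
            comp[v] = label
            stack.extend(w for w in adj[v] if w in U and w not in comp)
    outside = [v for v in adj if v not in U]
    for u, v in combinations(outside, 2):
        if v in adj[u]:
            continue  # adjacent: no internal vertices needed
        # Otherwise u and v need neighbours in a common component of G[U].
        cu = {comp[w] for w in adj[u] if w in U}
        cv = {comp[w] for w in adj[v] if w in U}
        if not (cu & cv):
            return False
    return True

# 4-cycle a-b-c-d: {b} is a hub set (a and c connect through b).
G = {'a': {'b', 'd'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'a', 'c'}}
print(is_hub_set(G, {'b'}))  # True
```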
20.
An automated method was developed for mapping forest cover change using satellite remote sensing data sets. This multi-temporal classification method consists of a training data automation (TDA) procedure and uses the advanced support vector machines (SVM) algorithm. The TDA procedure automatically generates training data using input satellite images and existing land cover products. The derived high-quality training data allow the SVM to produce reliable forest cover change products. This approach was tested in 19 study areas selected from major forest biomes across the globe. In each area a forest cover change map was produced using a pair of Landsat images acquired around 1990 and 2000. High-resolution IKONOS images and independently developed reference data sets were available for evaluating the derived change products in 7 of those areas. The overall accuracy values were over 90% for 5 areas, and were 89.4% and 89.6% for the remaining two areas. The user's and producer's accuracies of the forest loss class were over 80% for all 7 study areas, demonstrating that this method is especially effective for mapping major disturbances with low commission errors. IKONOS images were also available in the remaining 12 study areas, but they were either located in non-forest areas or in forest areas that did not experience forest cover change between 1990 and 2000. For those areas the IKONOS images were used to assist visual interpretation of the Landsat images in assessing the derived change products. This visual assessment revealed that for most of those areas the derived change products were likely as reliable as those in the 7 areas where accuracy assessment was conducted. The results also suggest that images acquired during leaf-off seasons should not be used in forest cover change analysis in areas where deciduous forests exist. Being highly automatic and having demonstrated the capability to produce reliable change products, the TDA-SVM method should be especially useful for quantifying forest cover change over large areas.
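A heavily simplified, synthetic-data sketch of the TDA idea (auto-selecting high-confidence training pixels from an imperfect existing product, then training an SVM; all features, thresholds, and noise levels here are invented for illustration, not taken from the paper):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in: "change" pixels (class 1) have shifted features.
n = 4000
y_true = rng.integers(0, 2, size=n)                  # hidden truth
X = rng.normal(size=(n, 6)) + y_true[:, None] * 1.5  # image features
prior = np.where(rng.random(n) < 0.85, y_true,       # existing product,
                 1 - y_true)                         # with 15% label noise

# TDA idea (simplified): keep only pixels whose features strongly agree
# with the prior label, i.e. automatically generated training samples.
margin = X.mean(axis=1)
confident = ((margin > 1.2) & (prior == 1)) | ((margin < 0.3) & (prior == 0))

clf = SVC(kernel="rbf", gamma="scale").fit(X[confident], prior[confident])
pred = clf.predict(X)
print("agreement with hidden truth:", (pred == y_true).mean())
```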