2.
This paper exploits the properties of the commute time for the purposes of graph simplification and matching. Our starting point is the lazy random walk on the graph, which is determined by the heat kernel of the graph and can be computed from the spectrum of the graph Laplacian. We characterise the random walk using the commute time between nodes, and show how this quantity may be computed from the Laplacian spectrum using the discrete Green's function. In this paper, we explore two different, but essentially dual, simplified graph representations delivered by the commute time. The first representation decomposes graphs into concentric layers. To do this we augment the graph with an auxiliary node which acts as a heat source. We use the pattern of commute times from this node to decompose the graph into a sequence of layers. Our second representation is based on the minimum spanning tree of the commute time matrix. The spanning trees located using commute time prove to be stable to structural variations. We match the graphs by applying a tree-matching method to the spanning trees. We experiment with the method on synthetic and real-world image data, where it proves to be effective.
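The commute-time computation described above, via the discrete Green's function (the Moore-Penrose pseudoinverse of the graph Laplacian), can be sketched as follows; the 3-node path graph used for the check is an illustrative example, not from the paper:

```python
import numpy as np

def commute_times(A):
    """Commute-time matrix of an undirected graph with adjacency matrix A.

    Uses CT(u, v) = vol(G) * (G[u,u] + G[v,v] - 2*G[u,v]), where G is the
    discrete Green's function (pseudoinverse of the combinatorial Laplacian).
    """
    d = A.sum(axis=1)                  # node degrees
    vol = d.sum()                      # graph volume (twice the edge count)
    L = np.diag(d) - A                 # combinatorial Laplacian
    G = np.linalg.pinv(L)              # discrete Green's function
    g = np.diag(G)
    return vol * (g[:, None] + g[None, :] - 2.0 * G)

# Path graph 0 - 1 - 2: commute time equals vol * effective resistance,
# so CT(0,1) = 4*1 = 4 and CT(0,2) = 4*2 = 8.
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
CT = commute_times(A)
```

The commute-time matrix is symmetric with zero diagonal, which is what makes it usable directly as input to a minimum-spanning-tree routine, as in the paper's second representation.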
3.
We investigate important combinatorial and algorithmic properties of Gn,m,p random intersection graphs. In particular, we prove that with high probability (a) random intersection graphs are expanders, (b) random walks on such graphs are "rapidly mixing" (in particular they mix in logarithmic time) and (c) the cover time of random walks on such graphs is optimal (i.e. it is Θ(n log n)). All results are proved for p very close to the connectivity threshold and for the interesting, non-trivial range where random intersection graphs differ from classical Gn,p random graphs.
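As an illustration of the Θ(n log n) cover-time scaling, the sketch below estimates a cover time by simulation. The complete graph K_n is used purely for convenience (a walk on it reduces to coupon collecting, with expected cover time (n-1)·H_{n-1}); it is not a random intersection graph:

```python
import random

def cover_time(n, rng):
    """Steps until a simple random walk on the complete graph K_n visits all nodes."""
    visited = {0}
    current, steps = 0, 0
    while len(visited) < n:
        step = rng.randrange(n - 1)              # pick one of the n-1 neighbours
        nxt = step + 1 if step >= current else step   # skip the self-loop
        visited.add(nxt)
        current = nxt
        steps += 1
    return steps

rng = random.Random(0)
n = 50
# Empirical mean over 200 walks; theory gives (n-1) * H_{n-1}, about 219 for n = 50.
avg = sum(cover_time(n, rng) for _ in range(200)) / 200
```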
4.
Existing global manifold learning algorithms are all sensitive to the neighbourhood-size parameter, which is hard to choose efficiently; they all build the neighbourhood graph from Euclidean distances, which makes the graph prone to "short-circuit" edges. This paper proposes a global manifold learning algorithm based on the random walk model (Random walk-based isometric mapping, RW-ISOMAP). Compared with the Euclidean distance, the commute-time distance obtained from the random walk model combines all paths between two given points, weighted by probability; it is not only more robust but can also, to some extent, measure the similarity between data with nonlinear geometric structure. The RW-ISOMAP algorithm, which builds the neighbourhood graph from commute-time distances, is therefore no longer sensitive to the neighbourhood-size parameter, making that parameter easier to choose, and is more robust. Experimental results confirm the effectiveness of the algorithm.
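The core step of combining commute-time distances with an Isomap-style embedding can be sketched with classical MDS. The commute-time matrix below is that of the 3-node path graph, hard-coded as a toy input; commute time itself plays the role of a squared distance, since its square root is a metric:

```python
import numpy as np

def classical_mds(D2, dim=1):
    """Classical MDS: embed points given a matrix D2 of squared distances."""
    n = D2.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    B = -0.5 * J @ D2 @ J                   # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:dim]       # keep the top eigenpairs
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))

# Commute times on the path graph 0 - 1 - 2: CT(0,1) = CT(1,2) = 4, CT(0,2) = 8.
CT = np.array([[0.0, 4.0, 8.0],
               [4.0, 0.0, 4.0],
               [8.0, 4.0, 0.0]])
X = classical_mds(CT, dim=1)   # 1-D embedding, endpoints symmetric about 0
```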
6.
In an electrical resistance tomography (ERT) system, the excitation pattern and the number of electrodes directly affect the whole system. To improve reconstructed image quality and better guide ERT system design, a thorough analysis of the sensitivity field is necessary. An ERT electrode model was built in the electromagnetic finite-element simulation software COMSOL; simulations of the empty field and of discrete-medium field domains were used to analyse how each factor affects the sensitivity field, and images were reconstructed for four typical flow regimes. The simulation experiments show that the COMSOL-based image reconstruction is satisfactory and provides a new approach for image-reconstruction research.
7.
The analysis of complex networks is of major interest in various fields of science. In many applications we face the challenge that the exact topology of a network is unknown but we are instead given information about distances within this network. The theoretical approaches to this problem have so far focused on the reconstruction of graphs from shortest path distance matrices. Often, however, movements in networks do not follow shortest paths but occur in a random fashion. In these cases an appropriate distance measure can be defined as the mean length of a random walk between two nodes, a quantity known as the mean first hitting time. In this contribution we investigate whether a graph can be reconstructed from its mean first hitting time matrix and put forward an algorithm for solving this problem. A heuristic method to reduce the computational effort is described and analyzed. In the case of trees we can even give an algorithm for reconstructing graphs from incomplete random walk distance matrices.
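A minimal sketch of computing the mean first hitting time matrix from a transition matrix (the forward problem; the paper addresses the inverse, reconstruction problem): for each target node v, the hitting times solve the linear system h = 1 + Q h, where Q is the transition matrix restricted to the non-target states. The path graph is an illustrative example:

```python
import numpy as np

def mean_hitting_times(P):
    """H[u, v] = mean number of steps for a walk started at u to first reach v.

    P is a row-stochastic transition matrix. For each target v, solve
    (I - Q) h = 1 over the states other than v.
    """
    n = P.shape[0]
    H = np.zeros((n, n))
    for v in range(n):
        idx = [u for u in range(n) if u != v]
        Q = P[np.ix_(idx, idx)]
        h = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
        H[idx, v] = h
    return H

# Simple random walk on the path 0 - 1 - 2.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])
H = mean_hitting_times(P)
```

Note that the matrix is asymmetric (H[0,1] = 1 but H[1,0] = 3); only the symmetrised sum H[u,v] + H[v,u], the commute time, is a metric.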
9.
Embeddings of paths have attracted much attention in parallel processing. Many-to-many communication is one of the most central issues in various interconnection networks. A graph G is globally two-equal-disjoint path coverable if, for any two distinct pairs of vertices (u,v) and (w,x) of G, there exist two disjoint paths P and Q such that (1) P (respectively Q) joins u and v (respectively w and x), (2) |P| = |Q|, and (3) V(P ∪ Q) = V(G). The Matching Composition Network (MCN) is a family of networks in which two components are connected by a perfect matching. In this paper, we consider the globally two-equal-disjoint path cover property of MCNs. Applying our result, the Crossed cube CQn, the Twisted cube TQn, and the Möbius cube MQn can all be proven to be globally two-equal-disjoint path coverable for n ≥ 5.
10.
Recently, power shortages have become a major problem all over Japan due to the Great East Japan Earthquake, which resulted in the shutdown of a nuclear power plant. As a consequence, production scheduling has become a problem for factories, which must take the availability of electric power into account. A factory's contract with the electric power company sets the maximum power demand for a unit period, and to minimize this it is necessary to consider the peak power when scheduling production. There are conventional studies on flowshop scheduling that consider peak power, but they do not consider fluctuations in the processing time. Because the actual processing time is not constant, the probability of simultaneous operation of multiple machines increases, and with it the probability of a higher peak power. We therefore consider inserting idle time (delaying the input of parts) into the schedule to reduce the likelihood of simultaneous operations, yielding a robust schedule that limits the peak power in spite of unexpected fluctuations in the processing time. However, inserting idle time lengthens the makespan and decreases production efficiency, so we performed simulations to investigate the optimal amount of idle time and the best points at which to insert it. We propose a robust production scheduling model that considers random processing times and peak power consumption. Experiments show that the schedule produced by the proposed method is superior to the initial schedule and to a schedule produced by another method; accounting for random processing times can thus limit the peak power.
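The effect of inserting idle time can be illustrated with a deliberately tiny Monte Carlo model (two machines, uniform random processing times, a fixed per-machine power draw; all numbers are invented and this is not the paper's model): delaying the second machine's start reduces the chance that both run at once, and hence the expected peak power, at the cost of a longer makespan.

```python
import random

P_RUN = 10.0  # assumed power draw (watts) of one busy machine

def peak_power(idle, rng):
    """Peak power of one schedule realization with the given inserted idle time."""
    t1 = rng.uniform(4.0, 6.0)   # machine 1 busy on [0, t1), duration is random
    s2 = 5.0 + idle              # machine 2 starts at its nominal time plus idle
    # The makespan grows by `idle`, but the machines overlap only if t1 > s2.
    return 2 * P_RUN if t1 > s2 else P_RUN

rng = random.Random(1)
trials = 2000
mean_no_idle = sum(peak_power(0.0, rng) for _ in range(trials)) / trials
mean_idle = sum(peak_power(2.0, rng) for _ in range(trials)) / trials
# With 2 units of idle time the overlap becomes impossible in this toy model,
# so the expected peak drops from about 15 W to exactly 10 W.
```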
11.
We consider a problem of decentralized exploration of a faulty network by several simple, memoryless agents. The model we adopt for a network is a directed graph. We design an asynchronous algorithm that can cope with failures of network edges and nodes. The algorithm is self-stabilizing in the sense that it can be started with arbitrary initializations, and scalable in the sense that new agents can be added while other agents are already running. This revised version was published online in August 2006 with corrections to the Cover Date.
12.
A thick-film resistor paste was prepared using LSCO synthesized by the conventional solid-state method as the functional phase and a calcium borosilicate glass as the inorganic binder phase. The effects of glass-phase content and peak firing temperature on the sheet resistance and temperature coefficient of resistance (TCR) of the thick films were studied. The results show that when the glass phase accounts for 3%–9% by mass (11.11%–28.56% by volume) of the solid content of the paste, the sheet resistance of the prepared thick-film resistors ranges from 1 kΩ/□ to 10 MΩ/□, and the TCR ranges from -8000×10⁻⁶/°C to -5000×10⁻⁶/°C.
13.
This paper proposes, from the economic viewpoint of preventive maintenance in reliability theory, several preventive maintenance policies for an operating system that works for jobs at random times and is imperfectly maintained upon failure. When a failure occurs, the system suffers one of two types of failure based on a specific random mechanism: a type-I (repairable) failure is rectified by a minimal repair, and a type-II (non-repairable) failure is removed by a corrective replacement. First, a modified random and age replacement policy is considered in which the system is replaced at a planned time T, at a random working time, or at the first type-II failure, whichever occurs first. Next, as one extended model, the system may work continuously for N jobs with random working times. Finally, as another extended model, we consider replacing an operating system at the first working-time completion after a planned time T. For each policy, the optimal schedule of preventive replacement that minimizes the mean cost rate is presented analytically and discussed numerically. Because the framework and analysis are general, the proposed models extend several existing results.
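For the simplest special case of such policies (planned replacement at age T with minimal repair at every failure, and no random working times), the mean cost rate has the closed form C(T) = (c_m Λ(T) + c_r) / T, with Λ the cumulative failure intensity. A numeric sketch with an assumed Weibull intensity; all parameter values are invented for illustration:

```python
import numpy as np

# Illustrative costs and Weibull failure intensity (shape beta, scale eta):
c_m, c_r = 1.0, 10.0      # cost of a minimal repair / of a planned replacement
beta, eta = 2.0, 5.0

T = np.linspace(0.1, 50.0, 5000)
Lam = (T / eta) ** beta            # expected number of minimal repairs in [0, T]
C = (c_m * Lam + c_r) / T          # mean cost rate
T_star = T[np.argmin(C)]           # optimal planned replacement age

# For beta = 2 the optimum is analytic: T* = eta * sqrt(c_r / c_m), about 15.81.
```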
14.
The sensitivity field of an electrical resistance tomography (ERT) system is affected by the distribution of the multiphase-flow media, and the sensitivity-field distribution data required as prior data for image reconstruction must be obtained by theoretical calculation. To reduce the soft-field error of the sensitivity field and improve reconstructed image quality, a thorough analysis of the sensitivity-field distribution is essential. Building on the basic principles of ERT, this paper establishes a mathematical model of the sensitivity field with the finite-element method; by studying discrete-medium field domains, it analyses the factors and regularities governing the sensitivity-field distribution, and completes the calculation and visual simulation of that distribution. Experiments confirm that the finite-element model is correct, that the computed sensitivity-field distribution agrees with reality, and that the computation takes about 10 s, providing a basis for related image-reconstruction algorithms.
15.
Large area land cover products generated from remotely sensed data are difficult to validate in a timely and cost-effective manner. As a result, pre-existing data are often used for validation. Temporal, spatial, and attribute differences between the land cover product and pre-existing validation data can result in inconclusive depictions of map accuracy. This approach may therefore misrepresent the true accuracy of the land cover product, as well as the accuracy of the validation data, which is not assumed to be without error. Hence, purpose-acquired validation data is preferred; however, logistical constraints often preclude its use, especially for large area land cover products. Airborne digital video provides a cost-effective tool for collecting purpose-acquired validation data over large areas. An operational trial was conducted, involving the collection of airborne video for the validation of a 31,000 km² sub-sample of the Canadian large area Earth Observation for Sustainable Development of Forests (EOSD) land cover map (Vancouver Island, British Columbia, Canada). In this trial, one form of agreement between the EOSD product and the airborne video data was defined as a match between the mode land cover class of a 3 by 3 pixel neighbourhood surrounding the sample pixel and the primary or secondary choice of land cover for the interpreted video. This scenario produced the highest level of overall accuracy, 77%, for level 4 of the classification hierarchy (13 classes). The coniferous treed class, which represented 71% of Vancouver Island, had an estimated user's accuracy of 86%. Purpose-acquired video was found to be a useful and cost-effective data source for validation of the EOSD land cover product. The impact of using multiple interpreters was also tested and documented.
Improvements to the sampling and response designs that emerged from this trial will benefit a full-scale accuracy assessment of the EOSD product and also provide insights for other regional and global land cover mapping programs.
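The agreement rule used in the trial (modal class of the 3 by 3 neighbourhood versus the interpreter's primary or secondary class) can be sketched as follows; the labels are arbitrary integers and the tiny map is invented:

```python
import numpy as np
from collections import Counter

def mode_3x3(label_map, r, c):
    """Modal land-cover class of the 3x3 neighbourhood centred on (r, c)."""
    window = label_map[r - 1:r + 2, c - 1:c + 2].ravel()
    return Counter(window.tolist()).most_common(1)[0][0]

def agrees(label_map, r, c, primary, secondary):
    """Match if the modal class equals the primary or secondary interpreted class."""
    return mode_3x3(label_map, r, c) in (primary, secondary)

label_map = np.array([[1, 1, 2],
                      [1, 1, 2],
                      [1, 2, 2]])   # class 1 is modal (5 of 9) around (1, 1)
```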
16.
This paper extends the work on discovering fuzzy association rules with degrees of support and implication (ARsi). The effort is twofold: one is to discover ARsi with hierarchy so as to express more semantics due to the fact that hierarchical relationships usually exist among fuzzy sets associated with the attribute concerned; the other is to generate a "core" set of rules, namely the rule cover set, that are of more interest in a sense that all other rules could be derived by the cover set. Corresponding algorithms for ARsi with hierarchy and the cover set are proposed along with pruning strategies incorporated to improve the computational efficiency. Some data experiments are conducted as well to show the effectiveness of the approach.
17.
Several investigations indicate that the Bidirectional Reflectance Distribution Function (BRDF) contains information that can be used to complement spectral information for improved land cover classification accuracies. Prior studies on the addition of BRDF information to improve land cover classifications have been conducted primarily at local or regional scales. Thus, the potential benefits of adding BRDF information to improve global to continental scale land cover classification have not yet been explored. Here we examine the impact of multidirectional global scale data from the first Polarization and Directionality of Earth Reflectances (POLDER) spacecraft instrument flown on the Advanced Earth Observing Satellite (ADEOS-1) platform on overall classification accuracy and per-class accuracies for 15 land cover categories specified by the International Geosphere Biosphere Programme (IGBP). A set of 36,648 global training pixels (7 × 6 km spatial resolution) was used with a decision tree classifier to evaluate the performance of classifying POLDER data with and without the inclusion of BRDF information. BRDF ‘metrics’ for the eight-month POLDER on ADEOS-1 archive (10/1996–06/1997) were developed that describe the temporal evolution of the BRDF as captured by a semi-empirical BRDF model. The concept of BRDF ‘feature space’ is introduced and used to explore and exploit the bidirectional information content. The C5.0 decision tree classifier was applied with a boosting option, with the temporal metrics for spectral albedo as input for a first test, and with spectral albedo and BRDF metrics for a second test. Results were evaluated against 20 random subsets of the training data. Examination of the BRDF feature space indicates that coarse scale BRDF coefficients from POLDER provide information on land cover that is different from the spectral and temporal information of the imagery. 
The contribution of BRDF information to reducing classification errors is also demonstrated: the addition of BRDF metrics reduces the mean overall classification error rate by 3.15% (from 18.1% to 14.95% error), with larger improvements for producer's accuracies of individual classes such as Grasslands (+ 8.71%), Urban areas (+ 8.02%), and Wetlands (+ 7.82%). User's accuracies for the Urban (+ 7.42%) and Evergreen Broadleaf Forest (+ 6.70%) classes are also increased. The methodology and results are widely applicable to current multidirectional satellite data from the Multi-angle Imaging Spectroradiometer (MISR), and to the next generation of POLDER-like multi-directional instruments.
18.
The first-order, untyped, functional logic language Babel is extended by polymorphic types and higher-order functions. A sophisticated incompatibility check, used to guarantee the non-ambiguity of Babel programs, is presented. For the implementation of the language, unification and backtracking are integrated into a programmed (functional) graph reduction machine. The implementation of this machine has been used for a comparison between Babel and Prolog based on the runtimes of some example programs.
19.
A regularized generalized-inverse ERT image-reconstruction algorithm is proposed, and images are reconstructed from data produced by ERT simulation software. Compared with commonly used ERT reconstruction algorithms, after uniform threshold filtering of the reconstructed images, the average CSIE of the images reconstructed by the back-projection algorithm, the sensitivity-coefficient algorithm, and the regularized generalized-inverse algorithm is 12%, 9%, and 6%, respectively. The study shows that the regularized generalized-inverse algorithm reconstructs quickly and markedly improves reconstructed image quality, making it suitable for industrial process applications.
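In its simplest (Tikhonov) form, a regularized generalized inverse of the kind described reduces to g = (SᵀS + λI)⁻¹ Sᵀ b for a sensitivity matrix S and a boundary-measurement vector b. The sketch below uses a random toy sensitivity matrix, not a real ERT model:

```python
import numpy as np

def regularized_reconstruction(S, b, lam):
    """Tikhonov-regularized generalized inverse: solve (S^T S + lam*I) g = S^T b."""
    n = S.shape[1]
    return np.linalg.solve(S.T @ S + lam * np.eye(n), S.T @ b)

rng = np.random.default_rng(0)
S = rng.standard_normal((40, 16))        # toy sensitivity matrix (40 measurements)
g_true = np.zeros(16)
g_true[5] = 1.0                          # a single perturbed "pixel"
b = S @ g_true + 0.01 * rng.standard_normal(40)   # noisy boundary measurements
g = regularized_reconstruction(S, b, lam=0.1)     # recovers the perturbation
```

The regularization parameter λ trades reconstruction sharpness against noise amplification; in a real ERT system S is severely ill-conditioned, which is exactly when this damping matters.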
20.
Many problems can be cast as statistical inference on an attributed random graph. Our motivation is change detection in communication graphs. We prove that tests based on a fusion of graph-derived and content-derived metadata can be more powerful than those based on graph or content features alone. For some basic attributed random graph models, we derive fusion tests from the likelihood ratio. We describe the regions in parameter space where the fusion improves power, using both numeric results from selected small examples and analytic results on asymptotically large graphs.
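The benefit of fusing graph- and content-derived statistics can be illustrated with a toy Gaussian shift model (not one of the paper's attributed-graph models): if both statistics are N(0,1) under H0 and independently N(1,1) under H1, the log-likelihood ratios simply add, and the fused test is more powerful without paying for it in false alarms.

```python
import numpy as np

def llr(z, mu=1.0):
    """Log-likelihood ratio of N(mu, 1) vs N(0, 1): mu*z - mu**2/2."""
    return mu * z - mu ** 2 / 2.0

rng = np.random.default_rng(0)
n = 20000
x = rng.normal(1.0, 1.0, n)   # graph-derived statistic, sampled under H1
y = rng.normal(1.0, 1.0, n)   # content-derived statistic, sampled under H1

thr = 0.0
power_graph_only = np.mean(llr(x) > thr)        # about Phi(0.5), roughly 0.69
power_fused = np.mean(llr(x) + llr(y) > thr)    # about Phi(1/sqrt(2)), roughly 0.76
# At threshold 0 the fused test also has a *smaller* false-alarm rate under H0
# (about 0.24 vs 0.31), so the power gain is not bought with extra false alarms.
```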