Similar Documents
 20 similar documents found (search time: 31 ms)
1.
A Method for Determining the Most Important Node in a Communication Network   (cited 15 times: 0 self-citations, 15 by others)
A node-deletion method for determining the most important node in a communication network is proposed, together with a normalized expression. The most important node is the one whose removal, along with its incident links, minimizes the number of spanning trees of the graph. The node-deletion method reflects the degree to which the failure of a given node damages the reliability of the entire communication network. The method can evaluate node importance across the whole network, and by comparing spanning-tree counts, the relative importance of any two nodes in the network can be determined. Experimental results demonstrate the effectiveness of the node-deletion method.
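The spanning-tree criterion above can be illustrated directly from Kirchhoff's matrix-tree theorem (the number of spanning trees equals any cofactor of the graph Laplacian), deleting each node in turn. This is a minimal sketch of the idea, not the paper's normalized formulation; the function names are ours.

```python
from fractions import Fraction

def spanning_tree_count(nodes, edges):
    """Kirchhoff's matrix-tree theorem: the number of spanning trees equals
    any cofactor of the graph Laplacian (drop one row/column, take the det)."""
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    L = [[Fraction(0)] * n for _ in range(n)]
    for u, v in edges:
        i, j = idx[u], idx[v]
        L[i][i] += 1
        L[j][j] += 1
        L[i][j] -= 1
        L[j][i] -= 1
    M = [row[: n - 1] for row in L[: n - 1]]  # principal minor of the Laplacian
    det = Fraction(1)
    for c in range(n - 1):  # exact Gaussian elimination over rationals
        pivot = next((r for r in range(c, n - 1) if M[r][c] != 0), None)
        if pivot is None:
            return 0  # singular minor: the graph is disconnected
        if pivot != c:
            M[c], M[pivot] = M[pivot], M[c]
            det = -det
        det *= M[c][c]
        for r in range(c + 1, n - 1):
            f = M[r][c] / M[c][c]
            for k in range(c, n - 1):
                M[r][k] -= f * M[c][k]
    return int(det)

def most_important_node(nodes, edges):
    """Node-deletion method: the most important node is the one whose removal
    (together with its incident links) minimizes the spanning-tree count."""
    scores = {}
    for v in nodes:
        rest = [u for u in nodes if u != v]
        kept = [(a, b) for a, b in edges if v not in (a, b)]
        scores[v] = spanning_tree_count(rest, kept)
    return min(scores, key=scores.get), scores
```

For a star network, removing the hub disconnects the graph and leaves zero spanning trees, so the hub is ranked as the most important node.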

2.
Detecting evolution-based anomalies has emerged as an effective research topic in many domains, such as social and information networks, bioinformatics, and diverse security applications. However, the majority of research has focused on detecting anomalies using the evolutionary behavior of objects in a network. Real-world networks are omnipresent and heterogeneous in nature, and in these networks multiple types of objects co-evolve together with their attributes. To understand the anomalous co-evolution of multi-typed objects in a heterogeneous information network (HIN), we need an effective technique that can capture their abnormal co-evolution. For example, detecting co-evolution-based anomalies in a heterogeneous bibliographic information network (HBIN) can depict the object-oriented semantics better than scrutinizing the co-author or citation network alone. In this paper, we introduce the novel notion of a co-evolutionary anomaly in the HBIN, detect anomalies using co-evolution pattern mining (CPM), and study how multi-typed objects influence each other's anomalous behavior by following a special type of HIN called a star network. The influence of three pre-defined attributes, namely paper count, co-author, and venue, over target objects is measured to detect co-evolutionary anomalies in the HBIN. Anomaly scores are calculated for each of 510 target objects, and the individual influence of the attributes is measured for the two top target objects in case studies. It is observed that venue has the most influence on the target objects discussed in the case studies; for the remaining anomalies in the list, however, the most influential attribute can differ from the venue. The CABIN algorithm provides a way to identify the most influential attributes in co-evolutionary anomaly detection. Experiments on a bibliographic dataset validate the effectiveness of the model and the dominance of the algorithm. The proposed technique can be applied to various HINs, such as Facebook, Twitter, and Delicious, to detect co-evolutionary anomalies.

3.
Engineering, 2019, 5(5): 930-939
It has long been a challenging task to detect an anomaly in a crowded scene. In this paper, a self-supervised framework called the abnormal event detection network (AED-Net), which is composed of a principal component analysis network (PCAnet) and kernel principal component analysis (kPCA), is proposed to address this problem. Using surveillance video sequences of different scenes as raw data, the PCAnet is trained to extract high-level semantics of the crowd's situation. Next, kPCA, a one-class classifier, is trained to identify anomalies within the scene. In contrast to some prevailing deep learning methods, this framework is completely self-supervised because it utilizes only video sequences of a normal situation. Experiments in global and local abnormal event detection are carried out on the Monitoring Human Activity dataset from the University of Minnesota (UMN dataset) and the Anomaly Detection dataset from the University of California, San Diego (UCSD dataset), and competitive results that yield a better equal error rate (EER) and area under the curve (AUC) than other state-of-the-art methods are observed. Furthermore, by adding a local response normalization (LRN) layer, we propose an improvement to the original AED-Net. The results demonstrate that this proposed version performs better by improving the framework's generalization capacity.
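The reconstruction-error principle behind this kind of one-class detection can be sketched with plain PCA: fit a low-dimensional subspace on normal data and score test samples by their residual. This is our illustrative simplification, not the paper's AED-Net (the PCAnet feature extractor and the kernel step are omitted).

```python
import numpy as np

def pca_anomaly_scores(X_train, X_test, k=1):
    """Score test points by their distance to the top-k principal subspace
    fitted on normal training data; a larger residual means more anomalous."""
    mu = X_train.mean(axis=0)
    # principal directions come from the SVD of the centered training data
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    P = Vt[:k]                      # (k, d) top-k principal axes
    Z = (X_test - mu) @ P.T         # project onto the "normal" subspace
    recon = Z @ P + mu              # reconstruct back in input space
    return np.linalg.norm(X_test - recon, axis=1)
```

A point lying on the fitted subspace reconstructs exactly (score near zero); a point off the subspace gets a large residual and can be flagged against a threshold.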

4.
The stochastic block model (SBM) is a random graph model that partitions nodes into blocks or communities. A degree-corrected stochastic block model (DCSBM) additionally accounts for degree heterogeneity across nodes. Investigating the type of edge label can also be useful for studying networks. We propose a labeled degree-corrected stochastic block model (LDCSBM) that adds the probability of occurrence of each edge label, and we monitor the behavior of this network. The LDCSBM describes a dynamic network that varies over time; we therefore applied the monitoring process both to the US Senate voting network and to simulated networks with defined structural changes. We used the Shewhart control chart to detect changes and studied the effect of Phase I parameter estimation on Phase II performance. The efficiency of the model for surveillance is evaluated using the average run length under estimated parameters.
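A Shewhart individuals chart of the kind used for such monitoring can be sketched as follows: Phase I (assumed in-control) data estimate the mean and standard deviation, and Phase II observations outside the control limits signal a change. The monitored statistic and the 3-sigma width are illustrative assumptions, not the paper's exact setup.

```python
import statistics

def shewhart_limits(phase1, L=3.0):
    """Estimate lower/upper control limits from Phase I observations.
    Note: estimation error in Phase I carries over into Phase II performance."""
    mu = statistics.mean(phase1)
    sigma = statistics.stdev(phase1)
    return mu - L * sigma, mu + L * sigma

def out_of_control(phase2, lcl, ucl):
    """Return the time indices of Phase II observations that signal a change."""
    return [t for t, x in enumerate(phase2) if not lcl <= x <= ucl]
```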

5.
To address the problem that large ranging errors in wireless sensor networks severely degrade the accuracy and robustness of localization algorithms, a robust node localization algorithm based on local network topology features (the LFLS algorithm) is proposed, exploiting the topological characteristics of uniformly deployed networks. The algorithm constructs threshold parameters for gross ranging overestimation and underestimation errors; after preprocessing that identifies and removes gross errors from an unknown node's one-hop ranging data set, it localizes the node with a Gaussian-weighted least-squares algorithm. Simulation results show that the localization accuracy of the proposed algorithm is clearly better than that of a weighted least-squares algorithm without gross-error preprocessing based on local topology features, and the improvement is especially pronounced for nodes directly affected by gross ranging errors.
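The weighted least-squares stage can be sketched via the usual linearization of the range equations (subtracting the last anchor's equation from the others). The Gaussian weighting and the gross-error thresholds of LFLS are not reproduced here; the uniform weights below are placeholders.

```python
import numpy as np

def wls_locate(anchors, dists, weights):
    """Weighted least-squares position estimate from anchor positions and
    ranged distances, after linearizing against the last anchor's equation."""
    x_n, y_n = anchors[-1]
    d_n = dists[-1]
    A, b, w = [], [], []
    for (x_i, y_i), d_i, w_i in zip(anchors[:-1], dists[:-1], weights[:-1]):
        A.append([2 * (x_i - x_n), 2 * (y_i - y_n)])
        b.append(d_n**2 - d_i**2 + x_i**2 - x_n**2 + y_i**2 - y_n**2)
        w.append(w_i)
    A, b, W = np.array(A), np.array(b), np.diag(w)
    # normal equations of the weighted least-squares problem
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
```

With noise-free ranges from three non-collinear anchors the estimate recovers the true position exactly; in practice the weights would down-weight rangings flagged as unreliable.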

6.
Influence maximization of temporal social networks (IMT) is the problem of finding the most influential set of nodes in a temporal network so that their information spreads most widely. To solve the IMT problem, we propose an influence maximization algorithm based on an improved K-shell method, namely improved K-shell in temporal social networks (KT). The algorithm takes into account both the global and local structures of temporal social networks. First, to obtain the kernel value Ks of each node, in the global scope it layers the network according to the temporal characteristics of nodes by improving the K-shell method. Then, in the local scope, a method for calculating a comprehensive degree is proposed to weigh node influence. Finally, the node with the highest comprehensive degree in each core layer is selected as a seed. However, the seed selection strategy of KT can easily miss some influential nodes. Thus, by optimizing the seed selection strategy, this paper proposes an efficient heuristic algorithm called improved K-shell in temporal social networks for influence maximization (KTIM). According to the hierarchical distribution of cores, the algorithm adds nodes near the central core to a candidate seed set, then searches for seeds in the candidate set according to the comprehensive degree. Experiments show that KTIM is close in effectiveness to the best-performing baseline, the improved method for influence maximization of temporal graphs (IMIT) algorithm, but runs at least an order of magnitude faster. Therefore, considering effectiveness and efficiency together in temporal social networks, the KTIM algorithm works better than the other baseline algorithms.
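The classical K-shell decomposition that KT builds on (before its temporal and comprehensive-degree refinements) peels away minimum-degree layers of the graph; a plain, non-temporal sketch:

```python
def k_shell(adj):
    """K-shell (k-core) decomposition: repeatedly peel nodes whose degree is
    at most the current shell index k; returns each node's core number Ks.
    adj maps each node to the set of its neighbors."""
    g = {v: set(ns) for v, ns in adj.items()}  # working copy we can mutate
    ks, k = {}, 0
    while g:
        k = max(k, min(len(ns) for ns in g.values()))
        peel = [v for v, ns in g.items() if len(ns) <= k]
        while peel:
            for v in peel:
                ks[v] = k
                for u in g.pop(v):       # remove v and its incident edges
                    if u in g:
                        g[u].discard(v)
            peel = [v for v, ns in g.items() if len(ns) <= k]
    return ks
```

Nodes in the innermost (highest-Ks) shell are the classical candidates for influential seeds; KT additionally ranks nodes inside each shell by its comprehensive degree.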

7.
During the 2001 foot and mouth disease epidemic in the UK, initial dissemination of the disease to widespread geographical regions was attributed to livestock movement, especially of sheep. In response, recording schemes to provide accurate data describing the movement of large livestock in Great Britain (GB) were introduced. Using these data, we reconstruct directed contact networks within the sheep industry and identify key epidemiological properties of these networks. There is clear seasonality in sheep movements, with a peak of intense activity in August and September and an associated high risk of a large epidemic. The high correlation between the in- and out-degrees of nodes favours disease transmission. However, the contact networks were largely disassortative: highly connected nodes mostly connect to nodes with few contacts, effectively slowing the spread of disease. This is a result of bipartite-like network properties, with most links occurring between highly active markets and less active farms. When comparing sheep movement networks (SMNs) to randomly generated networks with the same number of nodes and node degrees, despite structural differences (such as disassortativity and a higher frequency of even path lengths in the SMNs), the characteristic path lengths within the SMNs are close to the values computed from the corresponding random networks, showing that SMNs have 'small-world'-like properties. Using the network properties, we show that targeted biosecurity or surveillance at highly connected nodes would be highly effective in preventing a large and widespread epidemic.
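Disassortativity of this kind is commonly quantified as the Pearson correlation between the degrees at the two ends of each edge; a minimal sketch (Newman's coefficient uses remaining degrees, but Pearson correlation is shift-invariant, so plain degrees give the same value):

```python
from collections import Counter

def degree_assortativity(edges):
    """Pearson correlation of the end-point degrees over all edges of an
    undirected graph; negative values indicate a disassortative network."""
    deg = Counter()
    for u, v in edges:
        deg[u] += 1
        deg[v] += 1
    xs, ys = [], []
    for u, v in edges:       # count each edge in both orientations for symmetry
        xs += [deg[u], deg[v]]
        ys += [deg[v], deg[u]]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    sx = (sum((x - mx) ** 2 for x in xs) / n) ** 0.5
    sy = (sum((y - my) ** 2 for y in ys) / n) ** 0.5
    return cov / (sx * sy)
```

A star topology, where a hub links only to degree-1 leaves (much like an active market linking to many low-activity farms), is maximally disassortative.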

8.
Opportunistic multihop networks with mobile relays have recently drawn much attention from researchers across the globe due to their wide applications in various challenging environments. However, because of their peculiar intrinsic features, such as lack of continuous connectivity, network partitioning, highly dynamic behavior, and long delays, it is very arduous to model and effectively capture the temporal variations of such networks with classical graph models. In this work, we utilize an evolving graph to model the dynamic network and propose a matrix-based algorithm to generate all minimal path sets between every node pair of such a network. We show that these time-stamped minimal path sets (TS-MPS) between each given source-destination node pair can be used, via the well-known Sum-of-Disjoint Products technique, to generate various reliability metrics of dynamic networks, i.e., the two-terminal reliability of a dynamic network and its related metrics, i.e., the two-terminal reliabilities of the foremost, shortest, and fastest TS-MPS, and the Expected Hop Count. We also introduce and compute a new network performance metric, the Expected Slot Count. We use two illustrative examples of dynamic networks, one of four nodes and the other of five nodes, to show the salient features of our technique for generating TS-MPS and reliability metrics.
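A minimal flavor of time-respecting ("foremost") reachability on an evolving graph can be sketched as follows. This is not the paper's matrix-based TS-MPS enumeration; it only finds the earliest arrival slot, assuming the network is given as per-slot edge sets and that a message traverses at most one hop per slot.

```python
def foremost_arrival(snapshots, src, dst):
    """Earliest time slot at which dst becomes reachable from src along a
    time-respecting path; snapshots[t] is the set of undirected edges (u, v)
    present during slot t. Returns None if dst is never reached."""
    reached = {src}
    for t, edges in enumerate(snapshots):
        frontier = {v for u, v in edges if u in reached}
        frontier |= {u for u, v in edges if v in reached}
        reached |= frontier
        if dst in reached:
            return t
    return None
```

Note that edges appearing in the wrong temporal order cannot be chained, which is exactly what distinguishes evolving-graph paths from static ones.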

9.
Many systems of interest in practice can be represented as complex networks. In biological systems, biomolecules do not perform their functions alone but interact with each other to form so-called biomolecular networks. A system is said to be controllable if it can be steered from any initial state to any other final state in finite time. Network controllability has become essential for studying the dynamics of networks and understanding the importance of individual nodes. Some interesting biological phenomena have been discovered in terms of the structural controllability of biomolecular networks. Most current studies investigate the structural controllability of networks in terms of minimum driver node sets (MDSs). In this study, the authors analyse network structural controllability in terms of minimum steering node sets (MSSs). They first develop a graph-theoretic algorithm to identify the MSS of a given network and then apply it to several biomolecular networks. Application results show that biomolecules identified in the MSSs play essential roles in the corresponding biological processes. Furthermore, the results indicate that the MSSs can reflect network dynamics and node importance in controlling the networks better than the MDSs.
Inspec keywords: molecular biophysics, biocontrol, graph theory
Other keywords: graph-theoretic algorithm, MSS, minimum driver node sets, structural controllability, network dynamics, network controllability, biological systems, biomolecular networks, complex networks, minimum steering node set
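For contrast with the MSS approach, the standard MDS construction (the maximum-matching characterization of structural controllability due to Liu et al.) can be sketched as follows; this is the baseline the study compares against, not the authors' MSS algorithm.

```python
def minimum_driver_nodes(nodes, edges):
    """Minimum driver nodes of a directed network via maximum bipartite
    matching: nodes without a matched incoming edge must be driven directly."""
    succ = {v: [] for v in nodes}
    for u, v in edges:
        succ[u].append(v)
    match = {}  # matched right-copy node -> the left-copy node driving it

    def augment(u, seen):
        # Kuhn's augmenting-path search for a maximum matching
        for v in succ[u]:
            if v not in seen:
                seen.add(v)
                if v not in match or augment(match[v], seen):
                    match[v] = u
                    return True
        return False

    for u in nodes:
        augment(u, set())
    drivers = sorted(v for v in nodes if v not in match)
    return drivers if drivers else [nodes[0]]  # a fully matched network still needs one driver
```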

10.
The heterogeneous nodes in the Internet of Things (IoT) are relatively weak in computing power and storage capacity; therefore, traditional network security algorithms are not suitable for the IoT. When these nodes alternate between normal and anomalous behavior, it is difficult for the network system to identify and isolate them quickly, so data transmission accuracy and the integrity of network functions are negatively affected. Based on the characteristics of the IoT, a lightweight local outlier factor detection method is used for node detection. To further determine whether a node is anomalous, the time-varying behavior of nodes is considered in this research, and a time series method is used so that the system can respond effectively, within a short period, to the randomness and selectiveness of anomalously behaving nodes. Simulation results show that the proposed method can improve the accuracy of the data transmitted by the network and achieve better performance.
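A minimal local outlier factor (LOF) computation of the kind referred to, without the paper's lightweight adaptations or the time-series stage, can be sketched as:

```python
import math

def lof_scores(points, k=2):
    """Classic LOF: compare each point's local reachability density with that
    of its k nearest neighbors; scores well above 1 indicate outliers."""
    n = len(points)
    d = [[math.dist(p, q) for q in points] for p in points]
    # k nearest neighbors of each point (index 0 is the point itself)
    knn = [sorted(range(n), key=lambda j: d[i][j])[1:k + 1] for i in range(n)]
    kdist = [d[i][knn[i][-1]] for i in range(n)]  # distance to the k-th neighbor

    def reach(i, j):  # reachability distance of i from neighbor j
        return max(kdist[j], d[i][j])

    lrd = [k / sum(reach(i, j) for j in knn[i]) for i in range(n)]
    return [sum(lrd[j] for j in knn[i]) / (k * lrd[i]) for i in range(n)]
```

Points inside a tight cluster score close to 1, while a node whose measurements sit far from its neighbors' density stands out with a much larger score.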

11.
Malicious social robots are the disseminators of malicious information on social networks, and they seriously affect information security and network environments. Efficient and reliable classification of social robots is crucial for detecting information manipulation in social networks. Supervised classification based on manual feature extraction has been widely used in social robot detection. However, these methods not only involve user privacy but also ignore hidden feature information, especially graph features, and the label utilization rate of semi-supervised algorithms is low. To address the problems of shallow feature extraction and low label utilization in existing social network robot detection methods, this paper proposes a robot detection scheme based on weighted network topology. It introduces an improved network representation learning algorithm to extract the local structural features of the network and combines it with a graph convolutional network (GCN) algorithm based on graph filters to obtain the global structural features of the network. An end-to-end semi-supervised combination model (Semi-GSGCN) is established to detect malicious social robots. Experiments on a social network dataset (cresci-rtbust-2019) show that the proposed method has high versatility and effectiveness in detecting social robots. In addition, this method has stronger insight into robots in social networks than other methods.
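The core propagation rule that graph-filter-based GCNs build on can be sketched in a few lines of numpy; this shows the standard Kipf-Welling renormalized filter, not Semi-GSGCN itself, and the weight matrix here is an illustrative placeholder rather than a trained one.

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: ReLU(D^{-1/2} (A + I) D^{-1/2} X W),
    i.e., feature propagation through a symmetrically normalized adjacency
    with self-loops, followed by a linear transform and nonlinearity."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))   # symmetric degree normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W)
```

Stacking two such layers lets each node's representation aggregate information from its two-hop neighborhood, which is how global structure enters a semi-supervised node classifier.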

12.
Diagnosing Anomalies and Identifying Faulty Nodes in Sensor Networks   (cited 1 time: 0 self-citations, 1 by others)
In this paper, an anomaly detection approach that fuses data gathered from different nodes in a distributed sensor network is proposed and evaluated. The emphasis of this work is placed on the data integrity and accuracy problems caused by compromised or malfunctioning nodes. The proposed approach applies Principal Component Analysis simultaneously to multiple metrics received from various sensors. One of its key features is an integrated methodology for effectively combining correlated sensor data, in a distributed fashion, in order to reveal anomalies that span a number of neighboring sensors. Furthermore, it allows the integration of results from neighboring network areas to detect correlated anomalies/attacks that involve multiple groups of nodes. The efficiency and effectiveness of the proposed approach are demonstrated on a real use case that utilizes meteorological data collected from a distributed set of sensor nodes.

13.
Human activity recognition is commonly used in several Internet of Things applications to recognize different contexts and respond to them. Deep learning has gained momentum for identifying activities through sensors, smartphones, or even surveillance cameras. However, it is often difficult to train deep learning models on constrained IoT devices. The focus of this paper is to propose an alternative model by constructing a deep-learning-based human activity recognition framework for edge computing, which we call DL-HAR. The goal of this framework is to exploit the capabilities of cloud computing to train a deep learning model and deploy it on less powerful edge devices for recognition. The idea is to conduct the training of the model in the cloud and distribute it to the edge nodes. We demonstrate how DL-HAR can perform human activity recognition at the edge while improving efficiency and accuracy. To evaluate the proposed framework, we conducted a comprehensive set of experiments to validate the applicability of DL-HAR. Experimental results on a benchmark dataset show a significant increase in performance compared with state-of-the-art models.

14.
Lv Yiqin, Xie Zheng, Zuo Xiaojing, Song Yiping. Scientometrics, 2022, 127(8): 4847-4872

The classification task for scientific papers can be implemented based on contents or citations. To improve performance on this task, we express papers as nodes and integrate scientific papers' contents and citations into a heterogeneous graph. It has two types of edges. One type represents the semantic similarity between papers, derived from papers' titles and abstracts. The other type represents the citation relationship between papers and the journals or conference proceedings of their references. We utilize a contrastive learning method to embed the nodes of the heterogeneous graph into a vector space. Then, we feed the paper node vectors into classifiers, such as the decision tree, the multilayer perceptron, and so on. We conduct experiments on three datasets of scientific papers: the Microsoft Academic Graph with 63,211 scientific papers in 20 classes, the Proceedings of the National Academy of Sciences with 38,243 scientific papers in 18 classes, and the American Physical Society with 443,845 scientific papers in 5 classes. The experimental results on the multi-class task show that our multi-view method achieves classification accuracy of up to 98%, outperforming the state of the art.


15.
The Wireless Sensor Network (WSN) is considered to be one of the fundamental technologies employed in the Internet of Things (IoT), enabling diverse applications that carry out real-time observations. Robot navigation in such networks was the main motivation for the introduction of the concept of landmarks. A robot can identify its own location by sending signals to obtain the distances between itself and the landmarks. Considering networks to be a type of graph, this concept was redefined as the metric dimension of a graph, which is the minimum number of nodes needed to identify all the nodes of the graph. This idea was extended to the concept of the edge metric dimension of a graph G, which is the minimum number of nodes needed to uniquely identify each edge of the network. Regular plane networks can be easily constructed by repeating regular polygons. This design is of extreme importance as it yields high overall performance; hence, it can be used in various networking and IoT domains. The honeycomb and the hexagonal networks are two such popular mesh-derived parallel networks. In this paper, it is proved that the minimum numbers of landmarks required for the honeycomb network HC(n) and the hexagonal network HX(n) are 3 and 6, respectively. Bounds for the landmarks required for the hex-derived network HDN1(n) are also proposed.
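The metric dimension definition can be illustrated with a brute-force check: a landmark set resolves the graph if every node has a distinct vector of distances to the landmarks. This exhaustive search is only feasible for tiny graphs; the paper's results for HC(n) and HX(n) are derived analytically, not by enumeration.

```python
from itertools import combinations

def bfs_dists(adj, s):
    """Hop distances from s to every node (adj: node -> list of neighbors)."""
    dist, q = {s: 0}, [s]
    for u in q:
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def metric_dimension(adj):
    """Smallest k and a landmark set S of size k such that the distance
    vectors to S are distinct for all nodes (brute force over subsets)."""
    nodes = sorted(adj)
    D = {s: bfs_dists(adj, s) for s in nodes}
    for k in range(1, len(nodes) + 1):
        for S in combinations(nodes, k):
            sigs = {tuple(D[s][v] for s in S) for v in nodes}
            if len(sigs) == len(nodes):
                return k, S
```

A path needs a single landmark at one end, while a cycle needs two, since opposite sides of a single landmark are indistinguishable by distance alone.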

16.
Graph convolutional networks (GCNs) have been developed as a general and powerful tool to handle various tasks related to graph data. However, current methods mainly consider homogeneous networks and ignore the rich semantics and multiple types of objects that are common in heterogeneous information networks (HINs). In this paper, we present a Heterogeneous Hyperedge Convolutional Network (HHCN), a novel graph convolutional network architecture that operates on HINs. Specifically, we extract the rich semantics by different metastructures and adopt hyperedges to model the interactions among metastructure-based neighbors. Due to the powerful information extraction capabilities of metastructures and hyperedges, HHCN has the flexibility to model the complex relationships in HINs by setting different combinations of metastructures and hyperedges. Moreover, a metastructure attention layer is also designed to allow each node to select the metastructures based on their importance and provide potential interpretability for graph analysis. As a result, HHCN can encode node features, metastructure-based semantics and hyperedge information simultaneously by aggregating features from metastructure-based neighbors in a hierarchical manner. We evaluate HHCN by applying it to the semi-supervised node classification task. Experimental results show that HHCN outperforms state-of-the-art graph embedding models and recently proposed graph convolutional network models.

17.
Dynamic networks require effective methods of monitoring and surveillance in order to respond promptly to unusual disturbances. In many applications, it is of interest to identify anomalous behavior within a dynamic interacting system. Such anomalous interactions are reflected by structural changes in the network representation of the system. In this paper, a dynamic random graph model is proposed that takes into account the past activities of the individuals in the social network and also represents the temporal dependency of the network. The model parameters are the appearance and disappearance probabilities of an edge, which are estimated using a maximum likelihood approach. A generalization of a single path-dependent likelihood ratio test is employed to detect changes in the parameters of the proposed model. By monitoring the estimated parameters, one can effectively detect structural changes in a temporally dependent network. The proposed model is employed to describe the behavior of a real network, and its parameters are monitored via the dependent likelihood ratio test and a multivariate exponentially weighted moving average control chart. Results indicate that the proposed dynamic random graph model is a reliable means of modeling and detecting changes in temporally dependent networks.
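The maximum likelihood estimates of the appearance and disappearance probabilities can be sketched by counting edge transitions between consecutive snapshots; this is the simple first-order case without the paper's path-dependent refinements.

```python
from itertools import combinations

def estimate_transition_probs(nodes, snapshots):
    """MLE of the edge appearance probability (absent -> present) and
    disappearance probability (present -> absent) from consecutive snapshots.
    snapshots[t] is a set of frozenset({u, v}) undirected edges."""
    possible = [frozenset(e) for e in combinations(sorted(nodes), 2)]
    a_num = a_den = d_num = d_den = 0
    for prev, curr in zip(snapshots, snapshots[1:]):
        for e in possible:
            if e in prev:
                d_den += 1
                if e not in curr:
                    d_num += 1
            else:
                a_den += 1
                if e in curr:
                    a_num += 1
    p_appear = a_num / a_den if a_den else 0.0
    p_disappear = d_num / d_den if d_den else 0.0
    return p_appear, p_disappear
```

Monitoring works by re-estimating these probabilities over a sliding window and testing (e.g., via a likelihood ratio statistic or a control chart) whether they have shifted from their in-control values.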

18.
The widespread usage of Cyber Physical Systems (CPSs) generates a vast volume of time series data, and precisely determining anomalies in the data is critical for practical production. The autoencoder is the mainstream method for time series anomaly detection, and the anomaly is judged by reconstruction error. However, due to the strong generalization ability of neural networks, some abnormal samples close to normal samples may be judged as normal, which fails to detect the abnormality. In addition, datasets rarely provide sufficient anomaly labels. This research proposes an unsupervised anomaly detection approach based on adversarial memory autoencoders for multivariate time series to solve the above problem. First, an encoder encodes the input data into a low-dimensional space to acquire a feature vector. Then, a memory module is used to learn the feature vector's prototype patterns and update the feature vectors. The updating process allows partial forgetting of information to prevent model overgeneralization. After that, two decoders reconstruct the input data. Finally, this research uses the Peak Over Threshold (POT) method to calculate the threshold that separates anomalous samples from normal samples. A two-stage adversarial training strategy is used during model training to enlarge the gap between the reconstruction errors of normal and abnormal samples. The proposed method achieves significant anomaly detection results on synthetic and real datasets from power systems, water treatment plants, and computer clusters. The F1 score reached an average of 0.9196 on the five datasets, which is 0.0769 higher than the best baseline method.

19.
Thoracic venous anomalies without congenital heart anomalies are present in a minority of the population, but they are frequent enough to be encountered while placing hemodialysis catheters through the jugular or subclavian veins. A persistent left superior vena cava is the most commonly seen anomaly, and it is rarely noticed before the observation of an unusual course of the hemodialysis catheter or guidewire on chest X-ray. We present two patients with previously unspotted persistent left superior vena cava and uncomplicated hemodialysis catheter insertions through the internal jugular veins with good catheter function. A review of the relevant literature from a nephrologist's perspective, with technical aspects, is provided.

20.
Asynchronous federated learning (AsynFL) can effectively mitigate the impact of the heterogeneity of edge nodes on joint training while satisfying participant user privacy protection and data security. However, the frequent exchange of massive data can lead to excessive communication overhead between edge and central nodes, regardless of whether the federated learning (FL) algorithm uses synchronous or asynchronous aggregation. Therefore, there is an urgent need for a method that can simultaneously account for device heterogeneity and reduce the energy consumption of edge nodes. This paper proposes a novel Fixed-point Asynchronous Federated Learning (FixedAsynFL) algorithm, which mitigates the resource consumption caused by frequent data communication while alleviating the effect of device heterogeneity. FixedAsynFL uses fixed-point quantization to compress the local and global models in AsynFL. To balance energy consumption and learning accuracy, a quantization scale selection mechanism is proposed. This paper examines the mathematical relationship between the quantization scale and the energy consumption of the computation/communication process in FixedAsynFL. Based on an upper bound on the quantization noise, the quantization scale is optimized by minimizing communication and computation consumption. Pertinent experiments are performed on the MNIST dataset with several edge nodes of different computing efficiency. The results show that the FixedAsynFL algorithm with 8-bit quantization can significantly reduce the communication data size by 81.3% and save 74.9% of the computation energy in the training phase without a significant loss of accuracy. According to the experimental results, the proposed FixedAsynFL algorithm can effectively address the device heterogeneity and energy consumption limitations of edge nodes.
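The fixed-point quantization at the heart of such schemes can be sketched as symmetric scaling of model parameters to small signed integers; this generic sketch illustrates the bit-width/error trade-off, not the paper's scale selection mechanism.

```python
def quantize(x, bits=8):
    """Symmetric fixed-point quantization: map floats to signed integers in
    [-(2^(bits-1)-1), 2^(bits-1)-1] with a single shared scale factor."""
    qmax = 2 ** (bits - 1) - 1
    m = max(abs(v) for v in x)
    if m == 0:
        return [0] * len(x), 0.0
    scale = m / qmax
    return [round(v / scale) for v in x], scale

def dequantize(q, scale):
    """Recover approximate floats; per-entry error is at most scale / 2."""
    return [v * scale for v in q]
```

With 8 bits, each parameter is sent as one byte instead of four (for float32), which is where the communication savings come from; a larger quantization scale saves more energy but adds more quantization noise.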
