Similar Literature
 20 similar documents found (search time: 15 ms)
1.
The Journal of Supercomputing - In this paper, a trust framework is proposed for misbehavior detection in software defined vehicular networks (TFMD-SDVN) to detect the correct events in the network...

2.
Non-negative tensor factorization (NTF) has been successfully used to extract significant characteristics from polyadic data, such as data in social networks. Because these polyadic data have multiple dimensions (e.g., the author, content, and timestamp of a blog post), NTF fits in naturally and extracts data characteristics jointly from the different data dimensions. In traditional NTF, all information comes from the observed data, so the end users have no control over the outcomes. However, in many applications the end users have certain prior knowledge, such as demographic information about individuals in a social network or a pre-constructed ontology on the contents, and therefore prefer the data characteristics extracted by NTF to be consistent with such prior knowledge. To allow users' prior knowledge to be naturally incorporated into NTF, in this paper we present a general framework, FacetCube, that extends standard NTF. The new framework allows the end users to control the factorization outputs at three different levels for each of the data dimensions. The proposed framework is intuitively appealing in that it has a close connection to probabilistic generative models. In addition to introducing the framework, we provide an iterative algorithm for computing its optimal solution. We also develop an efficient implementation of the algorithm that uses several techniques to make our framework scalable to large data sets. Extensive experimental studies on a paper citation data set and a blog data set demonstrate that the new framework effectively incorporates users' prior knowledge, improves on traditional NTF in the task of personalized recommendation, and scales to large data sets from real-life applications.
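The core mechanism of pinning a factor to encode prior knowledge can be illustrated with a minimal non-negative 3-way CP factorization in which one mode's factor matrix is fixed by the user. This is a sketch of the general idea, not the authors' FacetCube algorithm: the function names and the multiplicative-update rule are generic choices, and the paper's three-level control is reduced here to "fully fixed vs. free".

```python
import numpy as np

def khatri_rao(B, C):
    # Column-wise Kronecker product: (J*K) x R.
    R = B.shape[1]
    return np.einsum('jr,kr->jkr', B, C).reshape(-1, R)

def ntf_cp(X, rank, fixed=None, n_iter=200, eps=1e-9):
    # Non-negative 3-way CP factorization with multiplicative updates.
    # `fixed` maps a mode index to a non-negative factor matrix that
    # encodes the user's prior knowledge and is never updated.
    fixed = fixed or {}
    I, J, K = X.shape
    rng = np.random.default_rng(0)
    F = [fixed.get(m, rng.random((dim, rank)))
         for m, dim in enumerate((I, J, K))]
    unfold = [X.reshape(I, -1),                     # (i, j*K+k)
              np.moveaxis(X, 1, 0).reshape(J, -1),  # (j, i*K+k)
              np.moveaxis(X, 2, 0).reshape(K, -1)]  # (k, i*J+j)
    for _ in range(n_iter):
        for m in range(3):
            if m in fixed:
                continue  # this mode is pinned to the prior
            a, b = [F[i] for i in range(3) if i != m]
            KR = khatri_rao(a, b)
            F[m] *= (unfold[m] @ KR) / (F[m] @ (KR.T @ KR) + eps)
    return F

# Example: pin mode 1 (say, a content ontology) and learn the rest.
X = np.random.rand(5, 4, 3)
prior = np.abs(np.random.rand(4, 2))
A, B, C = ntf_cp(X, rank=2, fixed={1: prior})
```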

3.
Crime data mining: a general framework and some examples
Chen H., Chung W., Xu J.J., Wang G., Qin Y., Chau M. Computer, 2004, 37(4):50-56.
A major challenge facing all law-enforcement and intelligence-gathering organizations is accurately and efficiently analyzing the growing volumes of crime data. Detecting cybercrime can likewise be difficult because busy network traffic and frequent online transactions generate large amounts of data, only a small portion of which relates to illegal activities. Data mining is a powerful tool that enables criminal investigators who may lack extensive training as data analysts to explore large databases quickly and efficiently. We present a general framework for crime data mining that draws on experience gained with the Coplink project, which researchers at the University of Arizona have been conducting in collaboration with the Tucson and Phoenix police departments since 1997.

4.
Adiabatic quantum computing has evolved in recent years from a theoretical field into an immensely practical area, a change partially sparked by D-Wave System's quantum annealing hardware. These multimillion-dollar quantum annealers offer the potential to solve optimization problems millions of times faster than classical heuristics, prompting researchers at Google, NASA and Lockheed Martin to study how these computers can be applied to complex real-world problems such as NASA rover missions. Unfortunately, compiling (embedding) an optimization problem into the annealing hardware is itself a difficult optimization problem and a major bottleneck currently preventing widespread adoption. Additionally, while finding a single embedding is difficult, no generalized method is known for tuning embeddings to use minimal hardware resources. To address these barriers, we introduce a graph-theoretic framework for developing structured embedding algorithms. Using this framework, we introduce a biclique virtual hardware layer to provide a simplified interface to the physical hardware. Additionally, we exploit bipartite structure in quantum programs using odd cycle transversal (OCT) decompositions. By coupling an OCT-based embedding algorithm with new, generalized reduction methods, we develop a new baseline for embedding a wide range of optimization problems into fault-free D-Wave annealing hardware. To encourage the reuse and extension of these techniques, we provide an implementation of the framework and embedding algorithms.
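As a concrete reference point for the OCT decomposition this framework builds on, the sketch below computes a minimum odd cycle transversal by exhaustive search: the smallest vertex set whose removal leaves the graph bipartite. This is a toy illustration of the definition (exponential time, fit only for small instances), not the paper's embedding algorithm, and the function names are our own.

```python
from itertools import combinations

def is_bipartite(nodes, edges):
    # Two-color the graph with DFS; fail on an odd cycle.
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    color = {}
    for s in nodes:
        if s in color:
            continue
        color[s] = 0
        stack = [s]
        while stack:
            u = stack.pop()
            for w in adj[u]:
                if w not in color:
                    color[w] = 1 - color[u]
                    stack.append(w)
                elif color[w] == color[u]:
                    return False
    return True

def min_oct(nodes, edges):
    # Smallest vertex set whose removal makes the graph bipartite
    # (brute force over subsets; fine only for toy instances).
    nodes = list(nodes)
    for k in range(len(nodes) + 1):
        for removed in combinations(nodes, k):
            kept = set(nodes) - set(removed)
            sub = [(u, v) for u, v in edges if u in kept and v in kept]
            if is_bipartite(kept, sub):
                return set(removed)

nodes = [0, 1, 2, 3, 4]
edges = [(0, 1), (1, 2), (2, 0),   # triangle (odd cycle)
         (2, 3), (3, 4), (4, 0)]   # even part
print(min_oct(nodes, edges))       # one triangle vertex suffices
```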

5.
This article presents NePSim, an integrated system that includes a cycle-accurate architecture simulator, an automatic formal verification engine, and a parameterizable power estimator for network processors (NPs) consisting of clusters of multithreaded execution cores, memory controllers, I/O ports, packet buffers, and high-speed buses. To perform concrete simulation and provide reliable performance and power analysis, we defined our system to comply with Intel's IXP1200 processor specification, because academia has widely adopted it as a representative model for NP research.

6.
International Journal of Intelligent Systems, 2022, 37(1):1052.
Mostafa Al-Gabalawy, International Journal of Intelligent Systems 2021, 36(8) ( https://doi.org/10.1002/int.22457 ). The above article, published on 9 May 2021 in Wiley Online Library ( wileyonlinelibrary.com ), has been retracted by agreement between the journal's Editor-in-Chief, Professor Jin Li, and John Wiley & Sons Ltd. The retraction has been agreed due to unattributed overlap between this article and unpublished work posted on GitHub in November 2020 ( https://github.com/RhysAgombar/Optimization-for-Convolutional-Network-Layers-using-the-Viola-Jones-Framework ).

7.
Ren Fei, Chen Xiaoliang, Hao Fei, Du Yajun, Zheng Jianzhong. The Journal of Supercomputing, 2020, 76(7):5486-5500.
The Journal of Supercomputing - Network embedding technologies that transform the nodes of a network into a low-dimensional vector space have many potential applications, such as node...

8.
We describe a general framework for out-of-core rendering and management of massive terrain surfaces. The two key components of this framework are: view-dependent refinement of the terrain mesh and a simple scheme for organizing the terrain data to improve coherence and reduce the number of paging events from external storage to main memory. Similar to several previously proposed methods for view-dependent refinement, we recursively subdivide a triangle mesh defined over regularly gridded data using longest-edge bisection. As part of this single, per-frame refinement pass, we perform triangle stripping, view frustum culling, and smooth blending of geometry using geomorphing. Meanwhile, our refinement framework supports a large class of error metrics, is highly competitive in terms of rendering performance, and is surprisingly simple to implement. Independent of our refinement algorithm, we also describe several data layout techniques for providing coherent access to the terrain data. By reordering the data in a manner that is more consistent with our recursive access pattern, we show that visualization of gigabyte-size data sets can be realized even on low-end, commodity PCs without the need for complicated and explicit data paging techniques. Rather, by virtue of dramatic improvements in multilevel cache coherence, we rely on the built-in paging mechanisms of the operating system to perform this task. The end result is a straightforward, simple-to-implement, pointerless indexing scheme that dramatically improves the data locality and paging performance over conventional matrix-based layouts.
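A minimal sketch of the refinement core, longest-edge bisection driven by a pluggable error metric, is shown below. The triangle stripping, frustum culling, geomorphing, and out-of-core layout that the paper describes are all omitted, and the height function and threshold are stand-ins for real gridded terrain data.

```python
import math

def height(x, y):
    # Stand-in terrain; in practice this samples the gridded data.
    return math.sin(3 * x) * math.cos(2 * y)

def needs_split(v0, v1, va, tau=0.05):
    # Error metric: deviation of the true height at the midpoint of the
    # longest edge from its linear interpolation. Any metric with this
    # signature can be plugged in.
    mx, my = (v0[0] + v1[0]) / 2, (v0[1] + v1[1]) / 2
    interp = (height(*v0) + height(*v1)) / 2
    return abs(height(mx, my) - interp) > tau

def refine(v0, v1, va, depth, out):
    # v0-v1 is the longest edge (hypotenuse); va is the right-angle apex.
    if depth == 0 or not needs_split(v0, v1, va):
        out.append((v0, v1, va))
        return
    mid = ((v0[0] + v1[0]) / 2, (v0[1] + v1[1]) / 2)
    refine(va, v0, mid, depth - 1, out)   # left child
    refine(v1, va, mid, depth - 1, out)   # right child

tris = []
refine((0, 0), (1, 1), (1, 0), 12, tris)  # one half of the unit square
refine((1, 1), (0, 0), (0, 1), 12, tris)  # the other half
print(len(tris), "triangles after refinement")
```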

9.
A large family of algorithms, supervised or unsupervised, stemming from statistics or from geometry, has been designed to provide different solutions to the problem of dimensionality reduction. Despite the different motivations of these algorithms, we present in this paper a general formulation known as graph embedding that unifies them within a common framework. In graph embedding, each algorithm can be considered as the direct graph embedding, or a linear/kernel/tensor extension thereof, of a specific intrinsic graph that describes certain desired statistical or geometric properties of a data set, with constraints from scale normalization or from a penalty graph that characterizes a statistical or geometric property to be avoided. Furthermore, the graph embedding framework can be used as a general platform for developing new dimensionality reduction algorithms. Using this framework as a tool, we propose a new supervised dimensionality reduction algorithm called marginal Fisher analysis (MFA), in which the intrinsic graph characterizes intraclass compactness and connects each data point with its neighboring points of the same class, while the penalty graph connects the marginal points and characterizes interclass separability. We show that MFA effectively overcomes the limitations that the traditional linear discriminant analysis (LDA) algorithm inherits from its data distribution assumptions and its limited available projection directions. Real face recognition experiments show the superiority of MFA over LDA, as well as over their corresponding kernel and tensor extensions.
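The unifying recipe can be made concrete: choose an intrinsic affinity W and a penalty affinity Wp, form their graph Laplacians, and solve a generalized eigenproblem for the linear projection. The sketch below, with our own function names, follows that recipe and builds MFA-style graphs (same-class neighbours in W, cross-class nearest pairs in Wp); it omits the paper's scale normalization and the kernel/tensor extensions.

```python
import numpy as np
from scipy.linalg import eigh

def graph_embedding(X, W, Wp, dim):
    # Linear graph embedding: minimize tr(V^T X L X^T V) subject to a
    # penalty-graph constraint on tr(V^T X Lp X^T V), solved as a
    # generalized eigenproblem (smallest eigenvalues).
    # X: d x n data matrix; W, Wp: n x n intrinsic/penalty affinities.
    L = np.diag(W.sum(1)) - W
    Lp = np.diag(Wp.sum(1)) - Wp
    A = X @ L @ X.T
    B = X @ Lp @ X.T + 1e-6 * np.eye(X.shape[0])  # ridge for stability
    vals, vecs = eigh(A, B)
    return vecs[:, :dim]  # d x dim projection matrix

def mfa_graphs(X, y, k1=3, k2=5):
    # Intrinsic graph: k1 nearest neighbours within the same class.
    # Penalty graph: k2 nearest pairs across class boundaries.
    n = X.shape[1]
    D = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)
    W = np.zeros((n, n)); Wp = np.zeros((n, n))
    for i in range(n):
        same = np.where(y == y[i])[0]
        diff = np.where(y != y[i])[0]
        for j in same[np.argsort(D[i, same])[1:k1 + 1]]:  # skip self
            W[i, j] = W[j, i] = 1
        for j in diff[np.argsort(D[i, diff])[:k2]]:
            Wp[i, j] = Wp[j, i] = 1
    return W, Wp

# Toy usage: 2-class data in 3-D, project to one dimension.
rng = np.random.default_rng(0)
X = np.hstack([rng.normal(0, 1, (3, 10)), rng.normal(3, 1, (3, 10))])
y = np.array([0] * 10 + [1] * 10)
W, Wp = mfa_graphs(X, y)
V = graph_embedding(X, W, Wp, dim=1)
```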

10.
This paper presents a simplified approach for determining optimum machining parameters. Sensitivity analysis using this approach is illustrated through an example problem and extended to demonstrate a framework for an adaptive control structure.

11.
Kernel functions play a central role in kernel methods; accordingly, the optimization of kernel functions has long been a promising research area. Ideally, Fisher discriminant criteria can be used as an objective function to optimize the kernel function and thus widen the margin between different classes. Unfortunately, Fisher criteria are optimal only when all classes are generated from underlying multivariate normal distributions with a common covariance matrix but different means, and each class is represented by a single cluster. Because of these assumptions, Fisher criteria are not a suitable kernel optimization rule in some applications, such as multimodally distributed data. To solve this problem, many improved discriminant criteria (DC) have recently been developed. To apply these discriminant criteria to kernel optimization, in this paper we propose a unified kernel optimization framework based on a data-dependent kernel function, which can use any discriminant criterion formulated in a pairwise manner as the objective function. Under this framework, to employ a different discriminant criterion one only has to change the corresponding affinity matrices, without resorting to any complex derivations in feature space. Experimental results on benchmark data demonstrate the efficiency of our method.
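To make the "swap the affinity matrices, keep the machinery" point concrete, here is a compressed sketch of a data-dependent kernel K(x,z) = q(x)q(z)K0(x,z) whose scaling vector q is chosen by a generalized Rayleigh quotient built from two affinity matrices. The matrix construction and the names are our simplification, assuming the chosen criterion can be written pairwise; the paper's actual derivation differs in detail.

```python
import numpy as np
from scipy.linalg import eigh

def optimize_kernel(K0, S_between, S_within, eps=1e-6):
    # Data-dependent kernel: K(x,z) = q(x) q(z) K0(x,z). The factor
    # vector q (one value per training point) maximizes the pairwise
    # Rayleigh quotient q^T B q / q^T W q, where B and W come from
    # whichever affinity matrices the discriminant criterion prescribes.
    B = K0 @ S_between @ K0
    W = K0 @ S_within @ K0 + eps * np.eye(K0.shape[0])
    vals, vecs = eigh(B, W)
    q = vecs[:, -1]                 # leading generalized eigenvector
    return np.outer(q, q) * K0      # optimized kernel matrix

# Toy affinities for a labelled sample: swapping these matrices is
# exactly what swaps the discriminant criterion in this framework.
y = np.array([0, 0, 1, 1])
same = (y[:, None] == y[None, :]).astype(float)
X = np.random.rand(4, 2)
K0 = np.exp(-np.linalg.norm(X[:, None] - X[None, :], axis=2) ** 2)
K_opt = optimize_kernel(K0, 1.0 - same, same)
```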

12.
Yocam E.W. IT Professional, 2003, 5(2):32-36.
A network's edge is arguably one of its most important areas, because this is where it delivers services to network users, or, from the Internet service provider's perspective, to subscribers. Devices at the network's edge classify, prioritize, and mark packets for the rest of the network to understand, and ultimately admit them into the network. Given modern networks' complex internetworking, it is not surprising that developing and maintaining edge devices are tasks that head the agendas of most of today's network equipment manufacturers, carriers, service providers, and network administrators. This article examines key elements of current network edge devices and the network topologies that work best with them. It also discusses Multiprotocol Label Switching (MPLS), a standardized solution that combines the performance and virtual-circuit capabilities of data-link-layer switching with the proven scalability of the network-layer routing used by today's edge devices.

13.
Coherence in a distributed system is meant to offset the disadvantages of distribution. The paper explores four issues under coherence: preservation of knowledge consistency across the agents, reliability of the overall system, integration of local solutions, and global performance. It presents some general strategies that can be employed to improve coherence in a CKBS, including weak consistency with versions for knowledge revision and a recovery mechanism based on hierarchic three-stage coordination, which ensures the correct isolation of potentially hierarchic multiagent actions. The paper goes on to identify the sources and classes of conflicts in global integration and suggests remedies, which in the worst case involve negotiation. For global performance, it focuses on planning and result synthesis as the two most important problem domains and suggests strategies to ameliorate performance.

14.
A general feature extraction framework is proposed as an extension of conventional linear discriminant analysis. Two nonlinear feature extraction algorithms based on this framework are investigated. The first is a kernel function feature extraction (KFFE) algorithm. A disturbance term is introduced to regularize the algorithm. Moreover, it is revealed that some existing nonlinear feature extraction algorithms are the special cases of this KFFE algorithm. The second feature extraction algorithm, mean-STD1-norm feature extraction algorithm, is also derived from the framework. Experiments based on both synthetic and real data are presented to demonstrate the performance of both feature extraction algorithms.

15.

With the success of Cloud Computing (the Cloud), new research areas have appeared. Edge Computing (EC) is one of the recent paradigms expected to overcome the Quality of Service (QoS) and latency issues caused by the best-effort behaviour of the Cloud. EC aims to bring computation power as close to the end devices as possible and to reduce dependency on the Cloud. Bringing computing power close to the source also enables real-time applications. In this paper, we propose a novel software reference architecture for Edge Servers that is agnostic to the operating system (OS) and hardware. Edge Servers can collaborate and execute (near) real-time tasks on time, either by downscaling them, scheduling them according to their deadlines, or offloading them to other Edge Servers in the network. Decision making for offloading, resource planning, and task scheduling are challenging problems in decentralized systems. The paper explains how resource planning and task scheduling can be handled with a software approach. Finally, the article realises the architecture as a framework, called the Real-Time Edge Framework (RTEF), and validates its correctness with a use case.
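To illustrate the kind of decision such a framework has to make, the sketch below admits tasks in earliest-deadline-first order and offloads whatever cannot meet its deadline within the local compute budget. All names and fields (Task, schedule, capacity) are illustrative assumptions, not RTEF's API; real deadline scheduling and offloading cost models are far richer.

```python
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    deadline: float
    cost: float = field(compare=False)   # estimated execution time
    name: str = field(compare=False, default="")

def schedule(tasks, capacity, now=0.0):
    # Earliest-deadline-first admission: run locally what can finish by
    # its deadline within the compute budget; offload the rest to a
    # peer Edge Server (hypothetical policy, for illustration only).
    local, offloaded, t = [], [], now
    for task in sorted(tasks):            # EDF order by deadline
        if t + task.cost <= task.deadline and task.cost <= capacity:
            local.append(task.name)
            t += task.cost
            capacity -= task.cost
        else:
            offloaded.append(task.name)   # send to a peer
    return local, offloaded

tasks = [Task(5.0, 2.0, "sense"), Task(3.0, 2.5, "fuse"),
         Task(9.0, 4.0, "plan")]
print(schedule(tasks, capacity=6.0))      # (['fuse', 'sense'], ['plan'])
```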

16.
The NetMine framework characterizes traffic data by means of data mining techniques. NetMine performs generalized association rule extraction to profile communications, detect anomalies, and identify recurrent patterns. Association rule extraction is a widely used exploratory technique for discovering hidden correlations among data. However, it is usually driven by frequency constraints on the extracted correlations. Hence, it entails either (i) generating a huge number of rules that are difficult to analyze, or (ii) pruning rare itemsets even if their hidden knowledge might be relevant. To overcome these issues, NetMine exploits a novel algorithm that efficiently extracts generalized association rules, which provide a high-level abstraction of the network traffic and allow the discovery of unexpected and more interesting traffic rules. The proposed technique exploits user-provided taxonomies to drive the pruning phase of the extraction process. Extracted correlations are automatically aggregated into more general association rules according to a frequency threshold. Finally, extracted rules are classified into groups according to their semantic meaning, allowing a domain expert to focus on the most relevant patterns. Experiments performed on different network dumps showed the efficiency and effectiveness of the NetMine framework in characterizing traffic data.
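The effect of taxonomy-driven generalization can be shown with a toy miner: each transaction is counted both at the raw level and at the taxonomy's parent level, so a rare raw pair can be pruned while its generalized counterpart survives as a rule. This captures the spirit of the approach, not NetMine's algorithm; the taxonomy, thresholds, and flow items are invented for illustration.

```python
from itertools import combinations
from collections import Counter

TAXONOMY = {  # leaf -> more general concept (user-provided, invented here)
    "10.0.0.1": "subnet-10", "10.0.0.2": "subnet-10",
    "port-80": "web", "port-443": "web",
}

def generalized_rules(transactions, min_sup=2, min_conf=0.7):
    counts = Counter()
    for t in transactions:
        items = set(t)
        items |= {TAXONOMY.get(i, i) for i in items}  # add general level
        for k in (1, 2):
            counts.update(frozenset(c)
                          for c in combinations(sorted(items), k))
    rules = []
    for pair, sup in counts.items():
        if len(pair) == 2 and sup >= min_sup:
            a, b = tuple(pair)
            for x, y in ((a, b), (b, a)):
                conf = sup / counts[frozenset([x])]
                if conf >= min_conf:
                    rules.append((x, y, sup, round(conf, 2)))
    # A real miner would also prune trivial ancestor-descendant rules
    # such as "10.0.0.1 -> subnet-10"; omitted here for brevity.
    return rules

flows = [["10.0.0.1", "port-80"], ["10.0.0.2", "port-443"],
         ["10.0.0.1", "port-443"]]
for lhs, rhs, sup, conf in generalized_rules(flows):
    print(f"{lhs} -> {rhs}  (sup={sup}, conf={conf})")
```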

17.
Two kinds of robustness measures for networks are introduced and applied to the road network systems in Japan. One concerns the connectivity of a randomly chosen pair of vertices; the other concerns the shortest path length between pairs of connected vertices. We devise Monte Carlo methods for the computation of the two measures.
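The two measures translate directly into a Monte Carlo loop: sample a failure scenario, sample a vertex pair, record whether the pair is connected and, if so, its shortest-path length. The sketch below does this for random edge failures on a toy graph; the paper's failure model and estimators for the Japanese road networks may differ.

```python
import random
from collections import deque

def bfs_dist(adj, s):
    # Breadth-first search distances from s over the surviving edges.
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def robustness(nodes, edges, p_fail=0.1, trials=2000, rng=None):
    # Estimates: P(random pair connected under random edge failures),
    # and mean shortest-path length over the pairs that stay connected.
    rng = rng or random.Random(0)
    connected, lengths = 0, []
    for _ in range(trials):
        kept = [e for e in edges if rng.random() > p_fail]
        adj = {v: [] for v in nodes}
        for u, v in kept:
            adj[u].append(v); adj[v].append(u)
        s, t = rng.sample(nodes, 2)
        dist = bfs_dist(adj, s)
        if t in dist:
            connected += 1
            lengths.append(dist[t])
    return connected / trials, sum(lengths) / max(len(lengths), 1)

# 4-cycle with one chord as a toy "road network".
nodes = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(robustness(nodes, edges))
```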

18.
The Journal of Supercomputing - The complexity of state-of-the-art processor architectures and their consequent vast design spaces have made it difficult and time-consuming to explore the best...

19.
While worms and their propagation have been a major security threat over the past years, causing major financial losses and downtime for many enterprises connected to the Internet, we argue in this paper that valuable lessons can be learned from them and that network management, the activity that is supposed to prevent them, can actually benefit from their use. We focus on five lessons learned from current malware that can benefit the network management community. For each topic, we analyse how it has been addressed in standard management frameworks, identify their limits, and describe how current malware already provides efficient solutions to these limits. We illustrate our claim through a case study on a realistic application of worm-based network management, which is currently being developed in our group.

20.
A general framework for designing a fuzzy rule-based classifier
This paper presents a general framework for designing a fuzzy rule-based classifier. The structure and parameters of the classifier are evolved through a two-stage genetic search. To reduce the search space, the classifier structure is constrained by a tree created using the evolving SOM tree algorithm. Salient input variables are specific to each fuzzy rule and are found during the genetic search process. Computer simulations on four real-world problems show that a large number of rules and input variables can be eliminated from the model without deteriorating the classification accuracy; on the contrary, the classification accuracy on unseen data increases due to the elimination.
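For reference, the sketch below shows the inference side of such a classifier: each rule fires with the product of its active antecedent memberships, and pruned ("don't care") inputs simply drop out, which is how rule-specific salient variables behave. The rule base here is hand-written; in the paper both the rules and their fuzzy sets come out of the two-stage genetic search.

```python
def tri(x, a, b, c):
    # Triangular membership function over [a, c] with peak at b.
    return max(min((x - a) / (b - a + 1e-12),
                   (c - x) / (c - b + 1e-12)), 0.0)

# Each rule: (per-feature (a, b, c) fuzzy set, or None for "don't care",
# class label). Hand-written for illustration only.
RULES = [
    (((0.0, 0.2, 0.5), None), 0),
    (((0.4, 0.7, 1.0), (0.3, 0.6, 1.0)), 1),
]

def classify(x, rules=RULES):
    # Winner-takes-all fuzzy inference: the rule with the strongest
    # product of active memberships assigns its class label.
    best, label = -1.0, None
    for sets, cls in rules:
        strength = 1.0
        for xi, mf in zip(x, sets):
            if mf is not None:          # pruned inputs contribute nothing
                strength *= tri(xi, *mf)
        if strength > best:
            best, label = strength, cls
    return label

print(classify((0.15, 0.9)))  # -> 0
```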
