Similar Documents
20 similar documents found (search time: 31 ms)
1.
To enable a comparative analysis of the effectiveness of China's telecommunications/ICT regulation, to fill a gap in international research, and to identify differences between China and other countries so as to improve China's ICT regulation effectiveness, this paper develops an extended, integrated ICT regulation effectiveness assessment framework, named the IEP framework, which consists of three assessment dimensions: regulation institution, regulation enforcement, and industry performance. Based on this framework and using the entropy method, the paper selects ten sample countries, including China, six developed countries, and three other developing countries, and makes a comprehensive comparative evaluation of them. Finally, focusing on China's results, ranked 10th in institution, 6th in enforcement, 5th in performance, and 8th overall, the paper explains these results and presents suggestions for improving future ICT regulation in China.
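The entropy method used above is a standard objective-weighting technique for multi-criteria evaluation. As a hedged illustration (the paper's actual indicators and country scores are not reproduced here; the matrix below is hypothetical), a minimal sketch of entropy weighting:

```python
import math

def entropy_weights(matrix):
    """Entropy weight method: criteria whose values vary more across
    alternatives carry more information and receive larger weights."""
    n = len(matrix)        # alternatives (e.g. countries)
    m = len(matrix[0])     # criteria (e.g. regulation indicators)
    k = 1.0 / math.log(n)
    raw = []
    for j in range(m):
        col = [row[j] for row in matrix]
        total = sum(col)
        # proportion of each alternative under criterion j
        p = [v / total for v in col]
        # Shannon entropy of the criterion (0*ln 0 treated as 0)
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        raw.append(1.0 - e)          # degree of divergence
    s = sum(raw)
    return [w / s for w in raw]

# Hypothetical scores of 4 countries on 3 indicators
scores = [[0.9, 0.5, 0.6],
          [0.7, 0.5, 0.8],
          [0.8, 0.5, 0.4],
          [0.6, 0.5, 0.9]]
w = entropy_weights(scores)
# the second indicator is identical for all countries, so its weight is ~0
```

Because the second column carries no discriminating information, its entropy is maximal and its weight collapses toward zero, which is exactly why the method is considered "objective": the data, not the analyst, sets the weights.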

2.
Choosing the optimum diagnostic nodes helps increase the accuracy and efficiency of circuit fault diagnosis. This paper proposes a grey entropy relation algorithm for choosing the optimum diagnostic nodes of analogue circuits. Analogue circuits are treated as a grey system, and grey relation analysis is combined with grey entropy analysis to form the grey entropy relation algorithm, which quantifies the relationship between diagnostic nodes and faulty components. The relevance between diagnostic nodes and faulty components is evaluated by grey entropy relation degrees, and according to their rank order the optimum diagnostic nodes of analogue circuits can be selected objectively and accurately. A fault diagnosis example verifies the validity of the selected diagnostic nodes.
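A minimal sketch of the underlying grey relation analysis (the grey-entropy refinement and the circuit data are not reproduced here; the sequences below are hypothetical):

```python
def grey_relational_degree(reference, candidates, rho=0.5):
    """Grey relational analysis: score how closely each candidate
    sequence tracks the reference sequence (closer to 1 = more related).
    rho is the conventional distinguishing coefficient."""
    # absolute differences for every candidate at every sample point
    deltas = [[abs(r - c) for r, c in zip(reference, cand)]
              for cand in candidates]
    d_min = min(min(row) for row in deltas)
    d_max = max(max(row) for row in deltas)
    degrees = []
    for row in deltas:
        # grey relational coefficient at each point, then its mean
        coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in row]
        degrees.append(sum(coeffs) / len(coeffs))
    return degrees

ref = [1.0, 2.0, 3.0, 4.0]           # e.g. fault-component response
cands = [[1.1, 2.1, 2.9, 4.2],       # node tracking the reference closely
         [3.0, 1.0, 4.0, 1.0]]       # node weakly related to it
deg = grey_relational_degree(ref, cands)
# the first candidate node gets the higher relational degree
```

Ranking nodes by these degrees gives the objective ordering the abstract describes; the paper's entropy step additionally weights the per-point coefficients instead of taking a plain mean.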

3.
An adaptive detection method is proposed to detect SYN flooding attacks at source-end networks. The method adjusts itself to frequent changes in network conditions. Key features of its design include: (1) creating a detection statistic based on the protocol behavior of TCP SYN-SYN/ACK pairs; (2) forming on-line estimates of the statistical characteristics of the detection statistic; (3) adjusting the detection threshold according to variations in network traffic and the latest detection result; (4) decreasing the disturbance caused by random abnormalities in normal network traffic through consecutive cumulation of threshold violations. Performance analysis and simulation results show that the minimum detectable attack traffic is about 30% of the legitimate traffic, under the requirements that the probability of false alarms be less than 10^-6, the probability of a miss during an attack be less than 10^-2, and the detection delay be within 7 sampling periods.
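The four design features can be sketched in one small detector, assuming an EWMA-style on-line estimator and illustrative parameter values (the paper's exact estimator and thresholds are not given here):

```python
class AdaptiveSynDetector:
    """Sketch: the detection statistic is the per-interval difference
    between SYN and SYN/ACK counts; mean and deviation are estimated
    on-line (EWMA); an attack is reported only after `k` consecutive
    threshold violations, suppressing random spikes in normal traffic."""

    def __init__(self, alpha=0.1, factor=3.0, k=3):
        self.alpha = alpha     # EWMA smoothing factor
        self.factor = factor   # threshold = mean + factor * deviation
        self.k = k             # consecutive violations before alarm
        self.mean = 0.0
        self.dev = 1.0
        self.violations = 0

    def observe(self, syn, synack):
        x = syn - synack       # unbalanced SYN-SYN/ACK pairs this interval
        threshold = self.mean + self.factor * self.dev
        if x > threshold:
            self.violations += 1
        else:
            self.violations = 0
            # adapt the statistics only with non-violating samples,
            # so attack traffic does not poison the baseline
            self.mean = (1 - self.alpha) * self.mean + self.alpha * x
            self.dev = (1 - self.alpha) * self.dev + self.alpha * abs(x - self.mean)
        return self.violations >= self.k

det = AdaptiveSynDetector()
normal = [det.observe(100, 98) for _ in range(50)]   # balanced traffic
attack = [det.observe(100, 40) for _ in range(5)]    # many unanswered SYNs
```

With balanced traffic the statistic stays under the adaptive threshold; once the flood starts, the alarm fires only after the third consecutive violation.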

4.
As one of the challenges of network virtualization, virtual network embedding, which maps a Virtual network (VN) onto the substrate network and allocates resources efficiently according to the VN's requirements, has gained great attention. Existing algorithms generally make their decisions according to the currently available substrate network resources, especially bandwidth. This paper proposes a time-based VN embedding algorithm. A probability model is formulated to obtain the maximum probability that the available resources of the substrate network can serve succeeding VN requests. This probability is used as the node weight, and a greedy algorithm embeds the virtual nodes; the reciprocal of the probability is used as the link weight, and a shortest-path algorithm embeds the virtual links. Simulation experiments show that the proposed algorithm increases the acceptance rate and revenue compared to existing algorithms.
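A hedged sketch of the two mapping steps, assuming the probability weights are already computed by the paper's model (the model itself is not reproduced; all names and data are illustrative):

```python
import heapq

def greedy_node_map(vn_demand, sn_prob, sn_capacity):
    """Map each virtual node (largest demand first) to the unused
    substrate node with the highest probability weight that can host it."""
    mapping, cap = {}, dict(sn_capacity)
    for v, demand in sorted(vn_demand.items(), key=lambda kv: -kv[1]):
        free = [s for s in sn_prob
                if cap[s] >= demand and s not in mapping.values()]
        if not free:
            return None                      # VN request rejected
        best = max(free, key=lambda s: sn_prob[s])
        mapping[v] = best
        cap[best] -= demand
    return mapping

def cheapest_link_cost(edges, link_prob, src, dst):
    """Embed a virtual link on the substrate path minimizing the sum of
    reciprocal probabilities (Dijkstra with 1/probability edge weights)."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    dist, heap = {src: 0.0}, [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            return d
        for w in adj.get(u, []):
            nd = d + 1.0 / link_prob[tuple(sorted((u, w)))]
            if nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return float("inf")

mapping = greedy_node_map({"v1": 5, "v2": 3},
                          {"a": 0.9, "b": 0.6, "c": 0.8},
                          {"a": 10, "b": 10, "c": 4})
cost = cheapest_link_cost([("a", "b"), ("b", "c"), ("a", "c")],
                          {("a", "b"): 0.5, ("b", "c"): 0.5, ("a", "c"): 0.1},
                          "a", "c")
```

Using 1/probability as the link weight makes Dijkstra prefer paths through high-probability substrate links, mirroring the node-side greedy choice.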

5.
Packet classification is a critical data-plane task for modern routers to support value-added services, especially those requiring QoS and flow-based processing. However, classification at 10Gbps or higher using an algorithmic approach is still challenging. A new generation of Network processor units (NPUs) provides unprecedented processing power for network applications, opening a new avenue for exploiting thread-level parallelism to attack networking performance bottlenecks. This paper studies how an adaptive classification algorithm can be efficiently implemented on a multi-core, multithreaded NPU architecture. Our algorithm combines the best traits of the Recursive flow classification (RFC) algorithm with a bitmap compression technique to achieve deterministic classification performance while keeping memory growth in check. When mapping the algorithm onto the Intel IXP network processor, we consider the characteristics of the IXP architecture early in the algorithm-design phase to eliminate potential performance bottlenecks. The implemented algorithm is highly efficient and runs at 10Gbps or higher on a real IXP2800 chip.

6.
Threshold-based distributed Certification authority (CA) designs have been proposed to provide secure and efficient key management in Mobile ad hoc networks (MANETs), but most previous work ignores efficiency and effectiveness, assuming there are always honest nodes performing the service. Focusing on dynamically and optimally selecting a coalition of nodes to carry out the threshold key management service in MANETs, we formulate the dynamic node selection problem as a combinatorial optimization problem, with the objectives of maximizing the success ratio of the service and minimizing the nodes' security and energy costs, and then extend the payment structure of the classical Vickrey, Clarke and Groves (VCG) mechanism design framework to ensure that truth-telling is the dominant strategy for every node in our scenario. Compared with existing work in the presence of selfish nodes, the proposed model improves both the success ratio of the key management service and the lifetime of the network, while reducing both the cost of participating nodes and the compromising probability of MANETs.

7.
A Dynamic quantitative analysis model for network survivability (DQAMNS) is proposed to measure the survivability of Large-scale distributed information systems (LSDIS) based on the Analytic network process (ANP). Starting with the quantification and normalization of the survivability factors (resistance, recognition, recovery and adaptation, R3A), DQAMNS uses a probability model and dimensionless representation to assess the degree of each factor. DQAMNS then applies ANP to weight the survivability factors based on experts' experience, yielding a number between 0 and 1 that synthetically indicates network survivability. Analysis shows that DQAMNS can evaluate the four survivability properties and precisely assess survivability in different environments, thus providing theoretical guidance for improving survivability when designing and implementing LSDIS.

8.
A new feature selection method based on the data field is proposed for high-dimensional data clustering. Using potential entropy to evaluate the importance of feature subsets, features are filtered by removing unimportant features and noise from the original datasets. Experiments show that the proposed method can sharply reduce the number of dimensions and effectively improve clustering performance on the WDBC dataset.
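A minimal sketch of data-field potential entropy for feature scoring, assuming a Gaussian potential function and an illustrative impact-of-removal score (the paper's exact formulation may differ):

```python
import math

def potential_entropy(data, sigma=1.0):
    """Data-field potential entropy: each sample contributes a Gaussian
    potential at every other sample; the Shannon entropy of the
    normalized potentials measures how unevenly data mass is spread."""
    n = len(data)
    phi = []
    for i in range(n):
        p = sum(math.exp(-sum((a - b) ** 2
                              for a, b in zip(data[i], data[j])) / sigma ** 2)
                for j in range(n))
        phi.append(p)
    total = sum(phi)
    psi = [p / total for p in phi]
    return -sum(p * math.log(p) for p in psi if p > 0)

def rank_features(data, sigma=1.0):
    """Score each feature by how much removing it changes the potential
    entropy; a near-zero change suggests a redundant or noisy feature."""
    base = potential_entropy(data, sigma)
    scores = []
    for f in range(len(data[0])):
        reduced = [[v for k, v in enumerate(row) if k != f] for row in data]
        scores.append(abs(base - potential_entropy(reduced, sigma)))
    return scores

# feature 0 separates an outlier from a cluster; feature 1 is constant
data = [[0.0, 1.0], [0.2, 1.0], [0.3, 1.0], [5.0, 1.0]]
s = rank_features(data)
# removing the constant feature leaves the entropy unchanged
```

Filtering then amounts to dropping the features whose scores fall below a chosen cutoff.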

9.
Efficiently mapping multiple independent Virtual networks (VNs) onto a common infrastructure substrate is a challenging problem on cloud computing platforms and large-scale future Internet testbeds. Inspired by the idea of data fields, we apply a topological potential function to node ranking and propose an algorithm called Locality-aware node topological potential ranking (LNTPR), which precisely and efficiently reflects the relative importance of nodes. Using LNTPR and the concept of locality awareness, we develop the Locality-aware influence choosing node (LICN) algorithm based on a node influence model that considers the mutual influence between a mapped node and its candidate mapping nodes. LNTPR and LICN improve the integration of node and link mapping. Simulation results demonstrate that the proposed algorithms perform well in terms of revenue, acceptance ratio, and revenue/cost ratio.

10.
To investigate the effect of network connectivity on the performance of wireless network coding, we introduce percolation theory to construct a system model of a multi-hop wireless network with asymptotic connectivity. Concretely, we propose a normalization algorithm for random networks that layers the nodes of the largest connected component of a multi-hop wireless network, and derive the theoretical conditions for percolation to occur in the normalized hierarchical network of the largest connected component. Furthermore, according to the critical threshold of the percolation phenomenon, we derive the performance of wireless network coding for the largest connected component. The mean delay and throughput are quantified in terms of network coding parameters such as coding window size, transmission radius, and node density. These conclusions clarify the effective performance of wireless network coding in random networks and will contribute to evaluating its optimal performance.

11.
Network virtualization provides a powerful tool that allows multiple networks, each customized for a specific purpose, to run on a shared substrate. A big challenge, however, is mapping multiple virtual networks onto specific nodes and links in the shared substrate network, known as the virtual network embedding problem. Previous work on virtual network embedding falls into two classes: two-stage and one-stage embedding. In this paper, by pruning the topology of the virtual network using k-core decomposition, we propose a hybrid virtual network embedding algorithm that takes location constraints into account and leverages the respective advantages of both classes of algorithm simultaneously during the mapping process. In addition, a time-oriented scheduling policy is introduced to improve mapping performance. Extensive simulations show that the proposed algorithm performs better in the long run.
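The k-core pruning step can be sketched in pure Python; this is the standard peeling algorithm, not the paper's full embedding pipeline:

```python
def k_core_decomposition(adj):
    """Repeatedly peel the node of minimum remaining degree; a node's
    core number is the largest k such that it survives in the k-core.
    `adj` maps each node to its neighbor list (undirected graph)."""
    degree = {v: len(ns) for v, ns in adj.items()}
    core = {}
    remaining = set(adj)
    k = 0
    while remaining:
        v = min(remaining, key=lambda u: degree[u])  # lowest degree next
        k = max(k, degree[v])                        # core level never drops
        core[v] = k
        remaining.remove(v)
        for w in adj[v]:                             # peel v from the graph
            if w in remaining:
                degree[w] -= 1
    return core

# triangle a-b-c with a pendant node d: the triangle forms the 2-core
adj = {"a": ["b", "c", "d"], "b": ["a", "c"], "c": ["a", "b"], "d": ["a"]}
core = k_core_decomposition(adj)
```

Pruning a virtual network down to its high-core nodes keeps the densely connected backbone, which is what the hybrid algorithm maps first.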

12.
The Sybil attack is particularly harmful to Wireless sensor networks (WSNs): a malicious node illegitimately claims multiple identities and can forge an unbounded number of identities to defeat redundancy mechanisms. This paper proposes using the received signal strength of nodes to defend against Sybil attacks under a Jakes channel model. To improve detection accuracy and reduce power consumption, we introduce mutual supervision between head nodes and member nodes so that they detect Sybil attacks together, and we compare our approach with related detection schemes. Simulation results show that our scheme consumes less power while achieving a detection accuracy of over 90%.
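A hedged sketch of the received-signal-strength idea: identities whose RSS readings at several monitoring nodes are nearly identical are flagged as likely Sybil identities of one physical device (the Jakes fading model and the head/member supervision are omitted; the readings and threshold are illustrative):

```python
def detect_sybil(rss_vectors, threshold=1.0):
    """Flag identity pairs whose RSS vectors (one reading per monitoring
    node, in dBm) are closer than `threshold` in Euclidean distance:
    two radios at different positions should not look identical."""
    ids = list(rss_vectors)
    suspicious = set()
    for i in range(len(ids)):
        for j in range(i + 1, len(ids)):
            a, b = rss_vectors[ids[i]], rss_vectors[ids[j]]
            dist = sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
            if dist < threshold:
                suspicious.update((ids[i], ids[j]))
    return suspicious

readings = {
    "n1": [-40.0, -55.0, -62.0],
    "n2": [-70.0, -48.0, -51.0],
    "s1": [-40.2, -54.9, -61.8],   # nearly identical to n1 -> Sybil pair
}
flagged = detect_sybil(readings)
```

In practice the threshold must absorb fading variation, which is why the paper models the channel explicitly rather than using a fixed cutoff.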

13.
To emphasize the decisions of all users, and the total number of users sharing the same technique, we propose a novel Average cost sharing (ACS) pricing mechanism to study the game between Network coding (NC) and routing flows sharing a single link when users are price-anticipating. We characterize the worst-case efficiency bounds of the game relative to the optimum (i.e., the Price of anarchy, PoA), which can be as low as 4/9 when the number of routing users becomes sufficiently large. NC cannot improve the PoA significantly under ACS. However, to make more efficient use of limited resources, the mechanism gives sharing users a greater tendency to choose NC, although users will follow the majority's choice of data transmission technique.

14.
Text is very important for video retrieval, indexing, and understanding. However, its detection and extraction are challenging due to varying backgrounds, low contrast between text and non-text regions, and perspective distortion. In this paper, we propose a novel two-phase approach that tackles this problem with discriminative features and edge density. The first phase defines and extracts a novel feature called edge distribution entropy and uses it to remove most non-text regions. The second phase employs a Support vector machine (SVM) to further distinguish real text regions from non-text ones. To generate inputs for the SVM, three additional novel features are extracted from each region: foreground pixel distribution entropy, skeleton/size ratio, and edge density. After text regions have been detected, text is extracted from those regions that are surrounded by sufficient edge pixels. A comparative study on two publicly accessible datasets shows that the proposed method significantly outperforms four selected state-of-the-art methods in accurate text detection and extraction.

15.
Based on the sequence entropy of Shannon information theory, we study network coding in Wireless sensor networks (WSNs). We take into account the similarity of the transmission sequences at the network coding node in a multi-source, multi-receiver network in order to compress data redundancy. Theoretical analysis and computer simulation results show that the proposed scheme not only further improves network transmission efficiency and throughput, but also reduces the energy consumption of sensor nodes and extends the network life cycle.

16.
Distributed Luby Transform (DLT) codes have been proposed to improve robustness and system throughput for multi-source single-sink networks by exploiting the benefits of Network coding (NC) at a single relay. Proposed schemes such as DLT and Soliton-Like Rateless Coding (SLRC) attempt to maintain a Robust Soliton Distribution (RSD) or Soliton-Like Distribution (SLD) for the output data at the relay. This results in some source symbols being discarded, thereby degrading throughput. In this paper, we propose a novel method called the Full DLT (FDLT) coding scheme for the InterPlaNetary (IPN) Internet scenario comprising two sources and one relay. The aim of the proposed scheme is to fully utilize the source symbols so as to reduce overheads and improve energy efficiency at the sources while maintaining low overhead at the relay. In addition, almost no buffer is needed, so the relay can have limited storage space, and the scheme is resilient to node churn. Simulation results show that the proposed scheme outperforms the aforementioned schemes with respect to source overheads and total overhead while preserving the benefits of NC and LT codes.

17.
Stepped-frequency ground-penetrating radar suffers from severe trailing sidelobes caused by spectrum truncation and clutter interference, which degrade the system's ability to resolve targets. This paper analyzes the signal characteristics represented by entropy and, starting from the mechanism by which range sidelobes arise, proposes maximum-entropy and minimum-entropy spectral extrapolation techniques to extrapolate the truncated spectrum and reduce the trailing sidelobes it produces. Experimental results show that maximum entropy effectively removes the Gibbs oscillation caused by spectrum truncation, while minimum entropy better suppresses the peak sidelobe level; both effectively improve system resolution.

18.
Building on an analysis of Freeman chain codes for edge description, this paper proposes chain code entropy to describe the statistical features of a chain code, chain code spatial distribution entropy to describe its spatial distribution, and chain code correlation entropy to describe the correlation between chain codes, and combines these three features for image retrieval. Because the method considers the statistical, spatial distribution, and correlation features of chain codes, and because all three descriptors are invariant to scale, rotation, and translation and independent of the chain code's starting point, it achieves better retrieval results than traditional methods; experimental results confirm the effectiveness of the algorithm.
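The first descriptor can be sketched as the Shannon entropy of the 8-direction chain code histogram (the spatial distribution and correlation entropies are omitted; the contours below are illustrative):

```python
import math
from collections import Counter

def chain_code_entropy(codes):
    """Shannon entropy (in bits) of the 8-direction Freeman chain code
    histogram. Because it depends only on the histogram, it is
    independent of where the chain starts and of contour translation."""
    counts = Counter(codes)
    n = len(codes)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

square = [0, 0, 2, 2, 4, 4, 6, 6]   # two steps along each of four sides
line = [0, 0, 0, 0, 0, 0, 0, 0]     # straight segment, one direction
# the square uses four directions equally -> entropy 2 bits;
# the straight line uses one direction -> entropy 0 bits
```

A rotation by a multiple of 45 degrees merely permutes the eight direction labels, so the histogram entropy is unchanged, which is the invariance the paper relies on.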

19.
Because many uncertain factors affect changes in network traffic, a combined prediction model is constructed using the maximum entropy principle from information theory. The model selects among individual prediction models according to their historical prediction errors, takes their predictions as constraint information, and uses the maximum entropy principle to obtain the distribution of the prediction result. Prediction and measurement on real network traffic show that, compared with any single model, the combined model clearly improves prediction accuracy and yields more reasonable, objective results.
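A hedged sketch of a maximum-entropy-flavored combination: among weight distributions that penalize models with larger historical error, the least-committal (maximum-entropy) choice is exponential in the error, i.e. a softmax over the negated errors. This simplifies the paper's constrained formulation; all values are illustrative:

```python
import math

def combine_forecasts(predictions, past_errors, beta=1.0):
    """Weight each model's forecast by exp(-beta * historical error),
    normalized to sum to one; this is the Gibbs distribution obtained
    by maximizing entropy subject to an expected-error constraint."""
    ws = [math.exp(-beta * e) for e in past_errors]
    total = sum(ws)
    ws = [w / total for w in ws]
    combined = sum(w * p for w, p in zip(ws, predictions))
    return combined, ws

# three hypothetical traffic forecasts and their historical mean errors
pred, weights = combine_forecasts([100.0, 120.0, 90.0], [0.5, 2.0, 1.0])
# the first model has the smallest historical error -> the largest weight
```

Raising `beta` sharpens the distribution toward the single best model; `beta = 0` recovers a plain average.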

20.
Traditional busy-hour detection methods in mobile networks mostly estimate the busy hour of voice traffic statistically. After data services are introduced, there is a time offset between the busy hours of data and voice traffic, so purely statistical estimation is both time-consuming and inaccurate. This paper improves the original busy-hour detection algorithm using the principle of conditional relative entropy. The improved algorithm is not limited by the number of observations: it computes the conditional relative entropy to analyze EDGE network data and obtain the busy hour. The proposed algorithm is fast and accurate, is independent of the number of observations, enables better busy-hour management and detection, and provides a basis for network optimization.
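A minimal sketch of the relative entropy computation over hourly traffic distributions (the conditional form and the EDGE measurements are not reproduced; the traffic profiles below are hypothetical):

```python
import math

def relative_entropy(p, q):
    """KL divergence D(p||q) between two hourly traffic distributions;
    zero iff the distributions coincide."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def busy_hours(traffic, top=1):
    """Normalize hourly traffic into a distribution and return the
    hours carrying the largest share."""
    total = sum(traffic)
    shares = [t / total for t in traffic]
    return sorted(range(len(traffic)), key=lambda h: -shares[h])[:top]

voice = [10, 12, 30, 80, 40, 20]   # per-hour voice traffic (hypothetical)
data  = [15, 18, 25, 40, 90, 30]   # data busy hour lags the voice one
shift = relative_entropy([v / sum(voice) for v in voice],
                         [d / sum(data) for d in data])
# voice peaks in hour 3, data in hour 4 -> non-zero divergence
```

A non-zero divergence quantifies exactly the voice/data offset the abstract describes: estimating only the voice busy hour would misplace the data busy hour.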


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号