Similar Documents
Found 20 similar documents (search time: 62 ms)
1.
Safe and effective error rate monitors for SS7 signaling links  (Cited by: 1; self-citations: 0; others: 1)
This paper describes SS7 error monitor characteristics, discusses the existing SUERM (signal unit error rate monitor), and develops the recently proposed EIM (error interval monitor) for higher speed SS7 links. An SS7 error monitor is considered safe if it ensures acceptable link quality and effective if it is tolerant of short-term phenomena. Formal criteria for safe and effective error monitors are formulated. This paper develops models of changeover transients, the unstable component of queue length resulting from errors. These models take the form of recursive digital filters. Time is divided into sequential intervals; the filter's input is the number of errors that occurred in each interval, and its output is the corresponding change in transmit queue length. Engineered EIMs are constructed by comparing an estimated changeover transient with a threshold T, using a transient model modified to enforce SS7 standards. When this estimate exceeds T, a changeover is initiated and the link is removed from service. EIMs differ from SUERMs in that EIMs monitor errors over an interval while SUERMs count errored messages. EIMs offer several advantages over SUERMs: they are safe and effective, impose uniform standards of link quality, are easily implemented, and make minimal use of real-time resources.
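The interval-based monitoring described in this abstract reduces to a first-order recursive filter whose output is compared against a threshold. A minimal Python sketch of that loop; the coefficients (DECAY, GAIN) and the threshold T are illustrative placeholders, not values derived in the paper:

```python
# Illustrative sketch of an interval-based error monitor (EIM-like).
# DECAY, GAIN and T are hypothetical; the paper derives its parameters
# from SS7 changeover-transient models and standards constraints.
DECAY = 0.9   # leakage of the transient estimate per interval
GAIN = 1.0    # contribution of each observed error
T = 5.0       # changeover threshold

def run_monitor(errors_per_interval):
    """Return the interval index at which changeover is triggered, or None."""
    estimate = 0.0
    for i, errs in enumerate(errors_per_interval):
        # recursive digital filter: input = error count in this interval,
        # output = estimated change in transmit queue length
        estimate = DECAY * estimate + GAIN * errs
        if estimate > T:
            return i  # initiate changeover; remove link from service
    return None
```

An error-free link never trips the monitor, while a sustained error rate does so within a few intervals.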

2.
Motivated by the excessive link status changes observed in some field operations of the common channel signaling (CCS) network, the authors provide a detailed analysis of the signaling link error monitoring algorithms in the Signaling System No. 7 (SS7) protocol. These algorithms determine when to fail a link due to excessive error rates and when to put a failed link back into service. The analysis shows that, under current SS7 specifications of the error monitoring algorithms, the probability of a signaling link oscillating in and out of service can be high, depending on the traffic load, signal unit size, and the statistical nature of errors (bursty or random). The link oscillation phenomenon could worsen as longer Transaction Capability Application Part (TCAP) messages for transaction-based services (e.g., 800 Service) are carried in CCS networks. While the risk to the existing network may not be high, given the light loads carried at present, there is still a need to study the error monitoring issues thoroughly.

3.
A link failure in the path of a virtual circuit in a packet data network will lead to premature disconnection of the circuit by the end-points, while a soft failure will result in degraded throughput over the virtual circuit. If these failures can be detected quickly and reliably, then appropriate rerouting strategies can automatically reroute the virtual circuits that use the failed facility. In this paper, we develop a methodology for analysing and designing failure detection schemes for digital facilities. Based on errored-second data, we develop a Markov model for the error and failure behaviour of a T1 trunk. (T1 is the lowest level of the plesiochronous digital carrier hierarchy in the United States; a T1 carrier has a payload of 24 64 kbps PCM channels.) The performance of a detection scheme is characterized by its false alarm probability and its detection delay. Using the Markov model, we analyse the performance of detection schemes that use physical layer or link layer information. The schemes basically rely upon detecting the occurrence of severely errored seconds (SESs): a failure is declared when a counter, driven by the occurrence of SESs, reaches a certain threshold. For hard failures, the design problem reduces to a proper choice of the threshold at which failure is declared and of the connection-reattempt parameters of the virtual circuit end-points' session recovery procedures. For soft failures, the performance of a detection scheme depends, in addition, on how long and how frequent the error bursts are in a given failure mode. We also propose and analyse a novel Level 2 detection scheme that relies only upon anomalies observable at Level 2, i.e. CRC failures and idle-fill flag errors. Our results suggest that Level 2 schemes that perform as well as Level 1 schemes are possible.
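The counter-driven detection rule described above can be sketched as a simple leaky-bucket detector; the threshold, step, and decay values below are hypothetical placeholders, not the paper's engineered parameters:

```python
def detect_failure(ses_flags, threshold=8, step=1, decay=1):
    """Hypothetical leaky-bucket failure detector driven by severely
    errored seconds (SESs).

    ses_flags: iterable of booleans, one per second (True = SES observed).
    Returns the second at which failure is declared, or None.
    """
    counter = 0
    for t, is_ses in enumerate(ses_flags):
        if is_ses:
            counter += step          # SES pushes the counter up
        else:
            counter = max(0, counter - decay)  # clean seconds leak it down
        if counter >= threshold:
            return t                 # declare facility failure
    return None
```

The threshold trades false alarm probability against detection delay: a hard failure (every second an SES) is caught quickly, while isolated SESs leak away without a declaration.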

    4.
We consider the physical layer error performance parameters and design criteria for digital satellite systems established by ITU-R Recommendation S.1062, where the performance objectives are given in terms of the bit error rate (BER) divided by the average number of errors within a cluster. It is well known that errors on satellite links employing forward error correction (FEC) schemes tend to occur in clusters. The resulting block error rate is the same as if it were caused by randomly occurring bit errors with an error-event ratio of BER/α, where α is the average number of errors within a cluster. The factor α accounts for the burstiness of the errors and represents the ratio between the BER and the error-event ratio. This paper proposes theoretical methods to estimate the factor α. Using the weight distributions of the FEC codes, we derive a set of expressions for α as well as compact lower bounds on it. We present lower bounds for various FEC schemes, including binary BCH codes, block turbo codes, convolutional codes, and turbo codes. The simulation results show that the proposed lower bounds are good estimates in the high signal-to-noise ratio region. Copyright © 2008 John Wiley & Sons, Ltd.
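Under the clustered-error model above, the error-event ratio BER/α plugs directly into the usual random-error block error rate expression. A small illustrative helper (the independence assumption across error events is the model's; the function name and usage are mine):

```python
def block_error_rate(ber, alpha, n_bits):
    """Block error rate for an n-bit block when bit errors arrive in
    clusters averaging alpha errors each: the block sees independent
    error *events* at ratio BER/alpha (illustrative sketch of the
    S.1062-style model described above)."""
    event_ratio = ber / alpha
    return 1.0 - (1.0 - event_ratio) ** n_bits
```

For a single bit the result is just the event ratio itself, and for a fixed BER a burstier channel (larger α) yields a lower block error rate, which is the point of dividing BER by α.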

    5.
Motivated by field data showing a large number of link changeovers and incidences of link oscillation between in-service and out-of-service states in common channel signalling (CCS) networks, the authors performed a number of analyses of the link error monitoring procedures in the SS7 protocol. This paper summarizes the results obtained thus far, which include the following: (a) results of an exact analysis of the performance of the error monitoring procedures under both random and bursty errors; (b) a demonstration that there exists a range of error rates within which the error monitoring procedures of SS7 may induce frequent changeovers and changebacks; (c) an analysis of the performance of the SS7 level-2 transmission protocol to determine the tolerable error rates within which the delay requirements can be met; (d) a demonstration that the tolerable error rate depends strongly on various link and traffic characteristics, thereby implying that a single set of error monitor parameters will not work well in all situations; and (e) some recommendations on a customizable/adaptable scheme of error monitoring, with a discussion of its implementability. These issues may be particularly relevant given the anticipated increases in SS7 traffic due to widespread deployment of the advanced intelligent network (AIN) and personal communications service (PCS), as well as for developing procedures for the high-speed SS7 links currently under consideration by standards bodies.

    6.
Under the memoryless binary symmetric channel assumption, the author evaluates performance estimation schemes for DS1 transmission systems carrying live traffic. Bipolar violations, framing bit errors, and code-detected errors are commonly used to estimate bit error ratios and the respective numbers of errored seconds and severely errored seconds, which are fundamental parameters in characterizing the performance of DS1 transmission systems. A basic framework based on the coefficient of variation is proposed to evaluate several estimation schemes. Serious drawbacks of the existing estimation schemes based on the superframe (D4) format are identified. A new method for estimating the number of errored seconds is proposed; a computer simulation shows that this method performs much better than the conventional counting method. The performance of the cyclic redundancy check (CRC) code of the extended superframe (ESF) format is also evaluated using a computer simulation model. The simulation results show that all errored seconds are detected by the CRC code, a welcome feature for real-time performance monitoring. Furthermore, the results suggest a new threshold of 326 CRC errors per second for determining severely errored seconds.
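The CRC-based classification with the proposed 326-errors-per-second SES threshold can be sketched as follows. This is a simplification: real DS1 monitors also track other anomalies (bipolar violations, framing bit errors), and only the 326 threshold comes from the abstract:

```python
SES_CRC_THRESHOLD = 326  # CRC errors/second proposed above for an SES

def classify_seconds(crc_errors_per_second):
    """Count errored seconds (ES) and severely errored seconds (SES)
    from per-second ESF CRC error counts (illustrative sketch)."""
    es = sum(1 for c in crc_errors_per_second if c > 0)
    ses = sum(1 for c in crc_errors_per_second if c >= SES_CRC_THRESHOLD)
    return es, ses
```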

    7.
An international 64 kbps switched digital ISDN service has been commercially in place between Japan and the United States since October 1989. Service providers had to know about transmission quality to facilitate service planning, and other performance information was necessary for designing terminal equipment for both network users and telecommunication providers. Transmission performance was therefore characterized under real operating conditions. This paper evaluates overall transmission quality in terms of errored seconds (ES) and severely errored seconds (SES), and shows that there is very little difference between the performance of international end-to-end connections via fiber and via satellite. The daily variation of ES and SES reveals that these connections exhibit behavior typical of digital circuits. This study also characterizes the distributions of bit errors per errored second, error event inter-arrival time distributions, and the distributions of error event lengths and intensities. The time between freeze-frame events is estimated for 64 kbps compressed video, and the retransmission rate is estimated for Group IV facsimile.

    8.
This paper deals with a reliability analysis of a k-out-of-N:G redundant system with multiple critical errors and r repair facilities. The system is in a failed state when k units have failed or when any one of the multiple critical errors has occurred. Failed-system repair times are arbitrarily distributed. Formulae are derived for the reliability function (in terms of a Laplace transform), the steady-state availability and the mean time to failure.
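Ignoring the critical-error and repair-facility aspects the paper actually models, and assuming independent units, the k-out-of-N:G failure criterion ("failed once k units have failed") gives a textbook binomial expression for the probability the system is up; a sketch under those stated assumptions:

```python
from math import comb

def prob_system_up(n, k, unit_unavail):
    """P(system up) for a system that fails once k of its N units have
    failed, with independent units of unavailability unit_unavail.
    Simplified sketch: the paper's model additionally covers multiple
    critical errors and r repair facilities."""
    return sum(comb(n, j) * unit_unavail**j * (1 - unit_unavail)**(n - j)
               for j in range(k))  # fewer than k units failed
```

For example, a 3-unit system that fails on its first unit failure is up only when all three units are up.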

    9.
The characterization of catastrophic fault patterns (CFPs) and their enumeration have been studied by several authors. Given a linear array with a set of bypass links, an important problem is counting the number of CFPs. Enumeration of CFPs for the two-link redundancy G={1,g} has been solved for both the unidirectional and the bidirectional link cases. In this paper, we consider the more general link redundancy G={1,2,…,k,g}, 2k<g. Using random walks as a tool, we enumerate CFPs for both the unidirectional and bidirectional cases.

    10.
As technology scales down, shrinking geometry and layout dimensions, on-chip interconnects are exposed to noise sources such as crosstalk coupling, supply voltage fluctuation and temperature variation, which cause random and burst errors. These errors affect the reliability of the on-chip interconnects. Hence, error correction codes integrated with noise reduction techniques are incorporated to make the on-chip interconnects robust against errors. The proposed error correction code uses a triplication error correction scheme as a crosstalk avoidance code (CAC), with a parity bit added to enhance the error correction capability. The proposed code corrects all one-bit and two-bit error patterns, corrects 7 out of the 10 possible three-bit error patterns, and detects burst errors of length three. A Hybrid Automatic Repeat Request (HARQ) system is employed when a burst error of length three occurs. The performance of the proposed codec is evaluated for residual flit error rate, codec area, power, delay, average flit latency and link energy consumption. The proposed codec achieves a residual flit error rate four orders of magnitude lower and link energy savings of over 53% compared to other existing error correction schemes. Besides the low residual flit error rate and link energy savings, the proposed codec also achieves up to 4.2% less area and up to 6% less codec power consumption than other error correction codes. The low codec area, codec power consumption, link energy and residual flit error rate make the proposed code appropriate for on-chip interconnect links.
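A toy version of the triplication-plus-parity idea helps make the mechanism concrete: each data bit is sent three times and decoded by majority vote, with a parity bit giving extra detection. The actual flit layout, CAC mapping, and parity placement in the paper's codec may differ; this is only a sketch:

```python
def encode(bits):
    """Triplicate each data bit and append an even-parity bit over the
    original data (toy sketch of triplication + parity)."""
    coded = [b for bit in bits for b in (bit, bit, bit)]
    coded.append(sum(bits) % 2)
    return coded

def decode(coded):
    """Majority-vote each triple; return (data, parity_ok).
    Any single bit error inside a triple is corrected by the vote."""
    data = [1 if sum(coded[i:i + 3]) >= 2 else 0
            for i in range(0, len(coded) - 1, 3)]
    parity_ok = (sum(data) % 2) == coded[-1]
    return data, parity_ok
```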

    11.
Markov Z., Trenkic B. Electronics Letters, 1998, 34(7): 631-632
It is shown that CCS No. 7 link availability depends on both the signalling load and the signal unit length if the signal unit error rate monitor (SUERM) algorithm is built according to current ITU-T recommendations. A rearrangement of the SUERM algorithm is suggested so that the link availability takes the largest and most nearly constant possible value for the prescribed BER.

    12.
Outage performance is analyzed for opportunistic decode-and-forward cooperative networks employing orthogonal space-time block codes. Closed-form expressions for the diversity order and the end-to-end outage probability in the high signal-to-noise ratio regime are derived for an arbitrary number of relays (K) and antenna configuration (N antennas at the source and at each relay, N_D antennas at the destination) under independent but not necessarily identical Rayleigh fading channels. The analysis is carried out in terms of the availability of the direct link between the source and the destination. It is demonstrated that the diversity order is min{N, N_D} · KN if the direct link is blocked, and min{N, N_D} · KN + N·N_D if the direct link is available. Simulation and numerical results verify the analysis well. Copyright © 2011 John Wiley & Sons, Ltd.

    13.
The complexity and performance of three simple error detection and correction strategies frequently used for decoding the cross-interleaved Reed-Solomon codes in the digital compact disc (CD) are compared. It is assumed that the number of byte errors in the input codeword is always a multiple of l, a positive integer. By varying l from 1 to 32, the random-to-burst error performance of the various strategies is obtained. Specifically, it is shown that unless random errors are the main cause of concern, the best strategies appear to be those using erasure corrections. The results presented will be useful for deciding the decoding strategies to be adopted for CD players, as well as in potential applications such as optical mass storage devices using the CD format.
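The advantage of erasure-based strategies mentioned above follows from the classic minimum-distance bound: erasures (errors at known positions) cost half as much decoding power as errors at unknown positions. A one-line predicate captures the bound (the specific d_min values in the usage are illustrative, not taken from the paper):

```python
def decodable(d_min, errors, erasures):
    """A code with minimum distance d_min can decode a pattern of
    `errors` unknown-position errors plus `erasures` known-position
    losses iff 2*errors + erasures <= d_min - 1 (classic bound behind
    comparisons of error-only vs erasure-assisted decoding)."""
    return 2 * errors + erasures <= d_min - 1
```

With d_min = 5, for instance, a decoder can handle 2 errors, or 4 erasures, but not 2 errors plus an erasure.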

    14.
Discrete-event simulation is a powerful but underexploited alternative to many kinds of physical experimentation. It permits what is physically impossible or unaffordable: conducting and running related experiments in parallel, against each other. Comparative and Concurrent Simulation (CCS) is a parallel experimentation method that adds a comparative dimension to discrete-event simulation. As a methodology or style, CCS resembles a many-pronged rake; its effectiveness is proportional to the number of prongs, i.e. the number of parallel experiments. It yields information in parallel and in time order, rather than in the arbitrary order of one-pronged serial simulations. CCS takes advantage of the similarities between parallel experiments via the one-for-many simulation of their identical parts; if many experiments are simulated, it is normally hundreds to thousands of times faster than serial simulation. While CCS is a one-dimensional method, MDCCS is a more general, multi-dimensional or multi-domain version. MDCCS permits parent experiments to interact and produce offspring experiments, i.e., to produce more but smaller experiments, and many zero-size/zero-cost experiments. MDCCS is more general, more informative, and faster (usually over 100:1) than CCS for most applications. It handles more complex applications and experiments, such as multiple faults, variant executions of a software program, animation, and others. From a forthcoming book by E. Ulrich, V. Agrawal, and J. Arabian, and a Ph.D. thesis on MDCCS by K.P. Lentz, Northeastern University.

    15.
Cooperative diversity using distributed space-time codes has recently been proposed to form virtual antennas in order to achieve diversity gain. In this paper, we consider a multi-relay network operating in amplify-and-forward (AAF) mode. Motivated by Protocol III of Nabar et al. (2004), we propose a cooperative diversity protocol implementing space-time coding for an arbitrary number of relay nodes when the source-destination link contributes in the second phase. We consider the use of real-orthogonal and quasi-orthogonal space-time code designs, as they give better performance than random linear-dispersion codes. The pairwise error probability (PEP) is derived, and the theoretical analysis demonstrates that the proposed protocol achieves a diversity of order N + 1, where N is the number of relay nodes. No instantaneous channel state information is required at the relay nodes. The optimum power allocation that minimizes the PEP is obtained through numerical and theoretical analysis, with the aggregate system power constraint considered in the optimization. Simulation results demonstrate an improvement over the existing orthogonal protocols for different source-destination channel conditions. The results also show that the proposed scheme is robust to channel estimation errors.

    16.
We propose a dispersion flattened fiber (DFF) front-haul transmission system carrying a high-bitrate, polarization multiplexed (PM), quadrature amplitude modulation (QAM) signal at low input optical power. The modulation format is PM-16QAM and the bitrate is 256 Gbit/s. The transmission characteristics of the DFF link are studied experimentally and compared with those of a non-zero dispersion shifted fiber (NZDSF) link and a standard single mode fiber (SSMF) link. The experimental results show that the error vector magnitude (EVM) of the 256 Gbit/s PM-16QAM signal over the 25 km DFF link is at least 0.75% better than over the 25 km NZDSF link, and that the bit error rate (BER) and Q-factor are much better than those of the NZDSF. For both fibers, the EVM and BER decrease, and the Q-factor increases, as the input optical power increases; the corresponding characteristics over 25 km of SSMF are the worst under the same conditions. The larger the dispersion, the farther the constellation points deviate from their respective centers and the worse the constellation characteristics become. Likewise, greater DFF attenuation lowers the input power of the DFF, again spreading the constellation points away from their centers and degrading the constellation. This study provides a new idea and experimental support for long-span front-haul propagation in mobile communication.

    17.
The reference node (RN) is a central node that has minimum distance/hop count to all other nodes in the network. This central node can play several critical roles, such as serving as the time reference for synchronising computer nodes. For synchronisation, the main goal is to minimise the sum of synchronisation errors. The time synchronisation error, known for each link between two nodes, accumulates over each hop along the path used for synchronisation between two nodes. In this context, the best RN is the one with the minimal sum of time synchronisation errors between itself and every other node. Thus, the first step in error minimisation is to select a minimum spanning tree (MST), formed by the links with minimum synchronisation error, as the synchronisation path. The second step is to select as time reference an RN that minimises the sum of synchronisation errors to all nodes in the MST. In a dynamic network, where communication links appear and disappear and synchronisation accuracy improves as more packets are exchanged, a static RN would entail suboptimal synchronisation accuracy. All existing models in this area are limited to static RNs because of the computing cost of updating the RN, yielding a suboptimal total synchronisation error over time and causing problems if the selected node is removed from the dynamic network. This paper presents a novel and efficient method for dynamic RN selection in dynamic networks. The proposed approach improves the performance of RN computation and update in live mode for dynamic networks. The method concentrates on the path altered with respect to the RN each time the MST is updated. This provides an efficient way to find and maintain an RN incrementally, with an average time complexity of O(log n) per update, where n is the total number of nodes in the network.
The proposed approach was tested on a huge dynamic network containing 60 000 simulated nodes, in a number of different situations, and achieves excellent running time while minimising synchronisation error. Although this work is currently used for time synchronisation purposes, several dynamic network tools can benefit from an efficient incremental algorithm to calculate hop counts and select a central point for the network. Copyright © 2014 John Wiley & Sons, Ltd.
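The two-step construction (build an MST on synchronisation-error weights, then pick the node minimising the summed tree-path error) can be sketched in a brute-force form. The paper's contribution is updating the RN incrementally in O(log n) per MST change; this static O(n²) sketch does not attempt that, and all names are illustrative:

```python
from heapq import heappush, heappop
from collections import defaultdict, deque

def mst_edges(n, edges):
    """Prim's MST over nodes 0..n-1; edges = [(u, v, sync_error), ...]."""
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    seen, heap, tree = {0}, [], []
    for w, v in adj[0]:
        heappush(heap, (w, 0, v))
    while heap and len(seen) < n:
        w, u, v = heappop(heap)
        if v in seen:
            continue
        seen.add(v)
        tree.append((u, v, w))
        for w2, x in adj[v]:
            if x not in seen:
                heappush(heap, (w2, v, x))
    return tree

def best_reference_node(n, tree):
    """Pick the node minimising the sum of tree-path sync errors to all
    other nodes (brute force: one BFS per candidate)."""
    adj = defaultdict(list)
    for u, v, w in tree:
        adj[u].append((v, w))
        adj[v].append((u, w))
    def total_error(src):
        dist, q = {src: 0.0}, deque([src])
        while q:
            u = q.popleft()
            for v, w in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + w
                    q.append(v)
        return sum(dist.values())
    return min(range(n), key=total_error)
```

On a triangle where the 0-2 link has a large synchronisation error, the MST keeps the two cheap links and the middle node becomes the RN.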

    18.
Traditionally, congestion control in packet networks is performed by reducing the transmission rate when congestion is detected, in order to cut down the traffic that overwhelms the capacity of the network. However, if the bottleneck is a wireless link, congestion often accumulates because of retransmissions triggered by bit errors. In this case, it may be beneficial to deliver partly corrupted packets up to the application layer instead of reducing the transmission rate. This decreases the number of link layer retransmissions and therefore relieves congestion, at the cost of bit errors appearing in the packet payload. In this paper, we study a congestion control mechanism for streaming applications that combines traditional congestion control with selective link layer partial checksumming, allowing bit errors in the less sensitive parts of the data. We compared the performance of the proposed mechanism against traditional congestion control in a simulation study. The results show that the proposed approach can improve overall performance, both by increasing throughput over the wireless link and by improving video quality in terms of peak signal-to-noise ratio (PSNR) by up to 8 dB, depending on the error conditions and the content.
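The selective partial-checksum idea resembles UDP-Lite's checksum coverage: protect only the sensitive leading bytes and tolerate errors in the rest. A simplified sketch (the truncated summation here is illustrative and is not the real Internet checksum, nor the paper's link layer scheme):

```python
def partial_checksum(packet: bytes, coverage: int) -> int:
    """Toy 16-bit sum over only the first `coverage` bytes.
    Bytes beyond `coverage` (e.g. less sensitive payload) may be
    corrupted without changing the checksum, so no retransmission
    is triggered for them."""
    s = 0
    for b in packet[:coverage]:
        s = (s + b) & 0xFFFF
    return s
```

Corrupting a byte outside the covered region leaves the checksum unchanged, while corrupting a covered byte is detected.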

    19.
Distributed system-level diagnosis allows the fault-free components of a fault-tolerant distributed system to determine which components of the system are faulty and which are fault-free. The time it takes for nodes running the algorithm to diagnose a new event is called the algorithm's latency. In this paper we present a new distributed system-level diagnosis algorithm with a latency of O(log N) testing rounds for a system of N nodes; a previous hierarchical algorithm, Hi-ADSD, has a latency of O(log² N) testing rounds. Nodes are grouped into progressively larger logical clusters for the purpose of testing. The algorithm employs an isochronous testing strategy that forces all fault-free nodes to execute tests on clusters of the same size in each testing round. This strategy is based on two main principles: a tested node must test its tester in the same round, and a node only accepts tests according to a lexical priority order. We present formal proofs that the algorithm's latency is at most 2 log N − 1 testing rounds and that the testing strategy leads to the execution of isochronous tests. Simulation results are shown for systems of up to 64 nodes.

    20.
In this paper we consider an ATM transmission link to which CBR or VBR and ABR or UBR calls arrive according to independent Poisson processes. CBR/VBR calls (characterized by their equivalent bandwidth) are blocked and leave the system if the available link capacity is less than required at the time of arrival. ABR/UBR calls, however, accept partial blocking, meaning that they may enter service even if the available capacity is less than the specified required peak bandwidth, provided it is greater than the so-called minimal accepted bandwidth. Partially blocked ABR/UBR calls instead experience longer service times, since a smaller allotted bandwidth entails proportionally longer time spent in the system, as first suggested in [3] and analyzed in detail herein. Throughout the lifetime of an ABR/UBR connection, its bandwidth consumption fluctuates in accordance with the current load on the link, but always at the highest possible value up to its peak bandwidth (greedy sources). Additionally, if the minimal accepted bandwidth is unavailable at the time of arrival, ABR/UBR calls are allowed to wait in a finite queue. This system is modeled by a continuous time Markov chain (CTMC), and the CBR/VBR and ABR/UBR blocking probabilities and the mean ABR/UBR waiting and service times are derived.
