Similar Documents
20 similar documents found.
1.
A Method for Transmitting Meteorological Early-Warning Information via BeiDou   Cited by: 1 (0 self, 1 other)
To address the problems that a single meteorological warning message must be split across multiple BeiDou short-message broadcasts before its transmission completes, and that receivers take in large amounts of redundant data, the warning information is first encoded and matched against associated information; the LZSS compression algorithm is then improved according to the characteristics of meteorological data and cascaded with the LZW algorithm, reducing the data that must ultimately be transmitted to under 78.5 bytes. Experimental results show that the method efficiently delivers meteorological warnings to the corresponding receivers via BeiDou short messages, offering a new approach to warning dissemination.
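As a rough illustration of the encode-then-compress pipeline described above (the paper's improved LZSS and cascaded LZW are not publicly specified, so zlib stands in for the compression stage; the field codes and the single-message payload budget below are assumptions, not the paper's actual encoding):

```python
# Illustrative sketch only: zlib stands in for the improved LZSS + cascaded LZW
# stage; the field codes and payload budget are assumptions for demonstration.
import zlib

# Hypothetical compact field codes for a warning message (assumed, not from the paper).
WARNING_TYPE = {"rainstorm": 0x01, "typhoon": 0x02, "heat": 0x03}
SEVERITY = {"blue": 0, "yellow": 1, "orange": 2, "red": 3}

BEIDOU_PAYLOAD_BUDGET = 78  # bytes; single short-message budget assumed from the abstract

def encode_warning(kind: str, level: str, region_code: int, text: str) -> bytes:
    """Pack a warning into a compact fixed header plus compressed free text."""
    header = bytes([WARNING_TYPE[kind], SEVERITY[level]]) + region_code.to_bytes(3, "big")
    body = zlib.compress(text.encode("utf-8"), 9)  # stand-in for improved LZSS + LZW
    return header + body

if __name__ == "__main__":
    msg = encode_warning("rainstorm", "orange", 110105, "Heavy rain expected 18:00-22:00.")
    print(len(msg), "bytes;",
          "fits one short message" if len(msg) <= BEIDOU_PAYLOAD_BUDGET else "needs splitting")
```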

2.
郑鸿 《电子世界》2014,(2):166-167
This paper proposes a content-based redundant traffic elimination algorithm (RFECB). The algorithm uses a sliding window to compute chunk boundary points, then computes and matches fingerprints for the chunks delimited by consecutive boundary points. RFECB improves the percentage of bytes saved by redundancy elimination and reduces the amount of redundant traffic carried over the network.
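A minimal sketch of the general technique, content-defined chunk boundaries from a rolling hash over a sliding window followed by fingerprint matching against a cache; the window size, modulus, and boundary rule are assumptions, not RFECB's actual parameters:

```python
# Sketch of content-based redundancy elimination: a polynomial rolling hash marks
# chunk boundaries, and SHA-1 fingerprints of each chunk are matched against a cache.
import hashlib
import os

WINDOW, BASE, MOD = 48, 257, (1 << 31) - 1     # rolling-hash parameters (assumed)
DIVISOR, TARGET = 1 << 11, 0                   # boundary rule: ~2 KB expected chunks (assumed)

def chunk_boundaries(data: bytes):
    """Yield (start, end) offsets of chunks whose end the rolling hash selects."""
    h, power, start = 0, pow(BASE, WINDOW - 1, MOD), 0
    for i, byte in enumerate(data):
        if i >= WINDOW:
            h = (h - data[i - WINDOW] * power) % MOD   # drop the byte leaving the window
        h = (h * BASE + byte) % MOD                    # add the incoming byte
        if i >= WINDOW and h % DIVISOR == TARGET:
            yield (start, i + 1)
            start = i + 1
    if start < len(data):
        yield (start, len(data))

def eliminate_redundancy(data: bytes, cache: set):
    """Return (bytes actually sent, bytes saved) after fingerprint matching."""
    sent = saved = 0
    for lo, hi in chunk_boundaries(data):
        fp = hashlib.sha1(data[lo:hi]).digest()
        if fp in cache:
            saved += hi - lo      # receiver already holds this chunk; only the fingerprint is sent
        else:
            cache.add(fp)
            sent += hi - lo
    return sent, saved

if __name__ == "__main__":
    payload, cache = os.urandom(64_000), set()
    print(eliminate_redundancy(payload, cache))   # first pass: chunks are new
    print(eliminate_redundancy(payload, cache))   # second pass: fully redundant
```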

3.
Building an enterprise management system in a traditional network environment suffers from several problems, such as reduced operating and management efficiency and a lack of solid enterprise data management and analysis capability. This paper therefore analyzes an intelligent dynamic control model and introduces new intelligent management measures: a big-data processing module is used to store data and to transmit and collect it; the data flow is designed as a three-tier architecture; redundant information is removed by data cleaning; and a big-data behavior model analyzes time series and model faults so that system failures can be identified effectively. Tests on the system show that its predictions meet practical requirements and that the efficiency of intelligent enterprise management improves.

4.
With the volume of network system logs growing continuously, and in light of practical operations and maintenance requirements, a real-time massive-log processing system based on Spark Streaming is proposed using big-data technology. The design and implementation of its underlying log collection, transmission, computation, storage, and query functions are described in detail. The system not only parses log information accurately and in real time and performs statistical analysis on the data, but also stores historical log data in real time and processes it offline.

5.
In a PSTN covert file transmission system, the instability of the DSP chip's crystal oscillator frequency and its mismatch with the computer's oscillator cause sample points to be lost while voice data are transmitted. Steganographic algorithms are highly sensitive to sample points, and losing them prevents the receiver from correctly recovering the ciphertext. TI's RTDX, which transfers data between the DSP and the host in real time over the JTAG interface without interrupting DSP program execution and without losing data, is applied to the PSTN covert transmission system, providing an important guarantee that the receiver can recover the ciphertext correctly.

6.
Focusing on the statistical characteristics of incomplete streaming data in periodic sensor-network collection, this paper studies an energy-saving, model-matching-driven filtering mechanism (MMF) for heterogeneous data sources. Unlike traditional lossless, transmission-oriented fusion of homogeneous data, the mechanism acts during the data-collection stage and tolerates information loss: in an application-oriented network it applies lossy fusion to heterogeneous sources to further reduce overall network energy consumption and transmission delay. MMF operates in two phases: a basic data-collection phase in which the model is built, and a model-matching-driven adaptive filtering phase. The parameters of a Gaussian mixture model (GMM) describing the source distribution are first estimated with a semi-supervised learning algorithm; the in-network data communication frequency is then adaptively controlled according to the degree of model matching, performing lossy data collection and fusion under quality-of-service (QoS) constraints. Simulations show that, compared with several classic energy-saving data-collection algorithms, the proposed filtering mechanism exploits the statistical redundancy of heterogeneous streaming sources and drives the corresponding model-matching operations to effectively suppress redundant in-network transmissions and reduce transmission delay while satisfying system QoS requirements, ultimately achieving robust, energy-efficient sensor-network data collection.
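A simplified illustration of the model-matching idea, assuming a GMM fitted on an initial collection window with scikit-learn; the semi-supervised estimation and QoS-driven adaptation of MMF are not reproduced, and the component count and likelihood threshold are assumptions:

```python
# Illustrative sketch: a GMM fitted on an initial window stands in for the source
# model; a reading is forwarded only when its log-likelihood under the model falls
# below a threshold (i.e., the model no longer "matches"). Window size, component
# count, and threshold are assumptions.
import numpy as np
from sklearn.mixture import GaussianMixture

def build_source_model(training_window: np.ndarray, components: int = 3) -> GaussianMixture:
    """Fit a GMM describing the source distribution from an initial window of readings."""
    return GaussianMixture(n_components=components, random_state=0).fit(training_window)

def filter_stream(model: GaussianMixture, readings: np.ndarray, log_lik_threshold: float = -6.0):
    """Return indices of readings that must still be transmitted (model mismatch)."""
    log_lik = model.score_samples(readings)            # per-sample log-likelihood
    return np.where(log_lik < log_lik_threshold)[0]    # suppress well-explained readings

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    window = rng.normal([20.0, 55.0], [0.5, 2.0], (200, 2))      # e.g., temperature, humidity
    model = build_source_model(window)
    stream = np.vstack([rng.normal([20.0, 55.0], [0.5, 2.0], (95, 2)),
                        rng.normal([35.0, 90.0], [0.5, 2.0], (5, 2))])   # 5 anomalous readings
    print("transmit indices:", filter_stream(model, stream))
```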

7.
A new database-based broadcasting method is proposed: identical databases are set up at the broadcast and receiving ends, data records are packaged for transmission, and the receiver extracts the records and displays them using pre-stored templates. This largely avoids redundant data during transmission, greatly improves broadcasting efficiency, and makes effective use of the broadcast network bandwidth.

8.
To address lossless compression in image transmission for measurement and control equipment, a lossless compression method for high-sensitivity images based on an improved LZO (Lempel-Ziv-Oberhumer) algorithm is proposed. The principles of lossless compression are analyzed; according to the characteristics of high-sensitivity images, median edge prediction is used to remove spatial redundancy, and the LZO processing flow is modified accordingly. Tests show that the improved algorithm achieves high lossless compression efficiency and stable performance, and meets the requirement of real-time lossless transmission of mission images.
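A sketch of median edge prediction (the standard LOCO-I/JPEG-LS rule) followed by a generic back end; zlib stands in for the paper's improved LZO stage, whose modifications are not specified:

```python
# Median edge detector (MED) prediction to remove spatial redundancy, with zlib as
# a stand-in back end for the improved LZO stage.
import zlib
import numpy as np

def med_residuals(img: np.ndarray) -> np.ndarray:
    """Prediction residuals of the MED predictor used in LOCO-I/JPEG-LS."""
    img = img.astype(np.int32)
    res = img.copy()                      # first row/column kept unpredicted in this sketch
    for y in range(1, img.shape[0]):
        for x in range(1, img.shape[1]):
            a, b, c = img[y, x - 1], img[y - 1, x], img[y - 1, x - 1]  # left, above, upper-left
            if c >= max(a, b):
                pred = min(a, b)          # likely an edge
            elif c <= min(a, b):
                pred = max(a, b)
            else:
                pred = a + b - c          # smooth region: planar prediction
            res[y, x] = img[y, x] - pred
    return res

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    steps = rng.integers(-2, 3, size=(64, 64))
    image = np.cumsum(np.cumsum(steps, axis=0), axis=1).astype(np.int16)   # smooth synthetic image
    raw = zlib.compress(image.tobytes())
    decorrelated = zlib.compress(med_residuals(image).astype(np.int16).tobytes())
    print(f"raw: {len(raw)} B, after MED prediction: {len(decorrelated)} B")
```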

9.
A WBAN Time-Series Data Fusion Algorithm with Synchronized Prediction   Cited by: 1 (0 self, 1 other)
A time-series data fusion algorithm with synchronized prediction is proposed. Raw collected data are preprocessed using multi-resolution analysis to extract the essential features reflecting human physiological state; a synchronized prediction mechanism then builds lightweight prediction models at both the sensing nodes and the sink node, eliminating the transmission of redundant in-network data to reduce energy consumption. Results show that the proposed fusion algorithm achieves high prediction accuracy and enables low-overhead time-series data fusion in wireless body area networks (WBANs).
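A sketch of the synchronized (dual) prediction idea, with an exponential moving average standing in for the paper's lightweight model; the smoothing factor and error tolerance are assumptions:

```python
# Dual prediction sketch: sensor and sink run the same lightweight predictor, and a
# sample is transmitted only when the prediction error exceeds a tolerance.
class SyncPredictor:
    def __init__(self, alpha: float = 0.3, tolerance: float = 0.5):
        self.alpha, self.tolerance = alpha, tolerance
        self.estimate = None

    def predict(self):
        return self.estimate

    def update(self, value: float) -> None:
        self.estimate = value if self.estimate is None else (
            self.alpha * value + (1 - self.alpha) * self.estimate)

def run_sensor(samples, predictor: SyncPredictor):
    """Transmit a sample only when the shared predictor misses it by more than the tolerance."""
    transmitted = []
    for t, x in enumerate(samples):
        pred = predictor.predict()
        if pred is None or abs(x - pred) > predictor.tolerance:
            transmitted.append((t, x))   # sink will use the real value
            predictor.update(x)
        else:
            predictor.update(pred)       # sink updates with its own prediction, keeping both in sync
    return transmitted

if __name__ == "__main__":
    import math
    heart_rate = [70 + 3 * math.sin(t / 10) for t in range(200)]
    sent = run_sensor(heart_rate, SyncPredictor())
    print(f"sent {len(sent)} of {len(heart_rate)} samples")
```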

10.
Today's wide-area network (WAN) traffic contains large amounts of duplicate data, which severely wastes WAN bandwidth. To address this problem, a redundancy elimination strategy based on application-layer protocol identification is proposed. By combining data caching with application-layer protocol identification, redundant data in WAN transmissions are located so that, as far as possible, identical data cross the WAN only once, reducing the bandwidth wasted on redundant transmission.

11.
During a project's development, to support online FLASH updates of the DSP application in a data acquisition system, the application is first downloaded into the system controller's memory via the FTP server bundled with the Tornado development environment, then packaged into data packets and transmitted in batches, finally completing the application update. The scheme solves the problem that, in some applications, the staging buffer is smaller than the file to be transferred, so the file must be split into packets and transmitted in multiple passes. The entire software development was carried out in the Tornado development and compilation environment and achieves error-free, packetized transmission of the application file. Practical use shows that software designed according to this scheme implements online FLASH updates well, features packetized transmission with accurate data, and meets the design requirements.

12.
File transfer between a USB flash drive and an SD card is studied on an STM32F107VCT6 microcontroller running μC/OS-III. The controller communicates with the SD card over the SPI serial bus and with the USB drive over the OTG interface; using the FATFS file system, the control software reads file data from the USB drive into the controller's buffer and then writes it to the SD card, achieving data transfer between the two media. Experiments show that the scheme is simple in principle and stable in operation, and can be widely applied in small embedded devices in daily life, industry, and agriculture.

13.
We consider transmission control (rate and power) strategies for transferring a fixed-size file (finite number of bits) over fading channels under constraints on both transmit energy and transmission delay. The goal is to maximize the probability of successfully transferring the entire file over a time-varying wireless channel modeled as a finite-state Markov process. We study two implementations regarding the delay constraints: an average delay constraint and a strict delay constraint. We also investigate the performance degradation caused by the imperfect (delayed or erroneous) channel knowledge. The resulting optimal policies are shown to be a function of the channel-state information (CSI), the residual battery energy, and the number of residual information bits in the transmit buffer. It is observed that the probability of successful file transfer increases significantly when the CSI is exploited opportunistically. When the perfect instantaneous CSI is available at the transmitter, the faster channel variations increase the success probability under delay constraints. In addition, when considering the power expenditure in the pilot for channel estimation, the optimal policy shows that the transmitter should use the pilot only if there is sufficient energy left for packet transfer; otherwise, a channel-independent policy should be used.
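A simplified backward-induction sketch of the strict-delay case, maximizing the probability of emptying the transmit buffer within a deadline over a finite-state Markov channel under an energy budget; the per-slot energy cost model and the coarse discretization are assumptions, not the paper's exact formulation:

```python
# Backward induction over (remaining bits, remaining energy, channel state).
# The transition matrix and per-bit energy costs below are illustrative assumptions.
import itertools
import numpy as np

P = np.array([[0.7, 0.3],        # channel transitions: bad -> {bad, good}
              [0.4, 0.6]])       #                      good -> {bad, good}
COST_PER_BIT = [3, 1]            # energy units per bit in the bad / good state (assumed)

def success_probability(total_bits: int, energy: int, horizon: int):
    """V[t, b, e, g] = max probability of sending all b bits within the remaining slots."""
    n_states = P.shape[0]
    V = np.zeros((horizon + 1, total_bits + 1, energy + 1, n_states))
    V[horizon, 0, :, :] = 1.0                                   # buffer empty at the deadline
    for t in range(horizon - 1, -1, -1):
        for b, e, g in itertools.product(range(total_bits + 1), range(energy + 1), range(n_states)):
            if b == 0:
                V[t, b, e, g] = 1.0
                continue
            best = 0.0
            for r in range(b + 1):                              # bits sent this slot
                cost = r * COST_PER_BIT[g]
                if cost > e:
                    break
                best = max(best, P[g] @ V[t + 1, b - r, e - cost, :])
            V[t, b, e, g] = best
    return V[0, total_bits, energy, :]                          # per initial channel state

if __name__ == "__main__":
    print(success_probability(total_bits=6, energy=12, horizon=4))
```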

14.
Content defined chunking (CDC) is a prevalent data de-duplication algorithm for removing redundant data segments in archival storage systems. Current research on CDC does not consider the unique content characteristics of different file types; chunk boundaries are determined in an essentially random way and a single strategy is applied to all file types. It has been shown that such a method cannot achieve optimal performance for compound archival data. We analyze the content characteristics of different file types and propose the candidate anchor histogram (CAH) to capture them. We propose an improved strategy for determining chunk boundaries based on CAH and tune some key parameters of CDC based on the data layout of the underlying data de-duplication file system (TriDFS), which can efficiently store variable-sized chunks on fixed-sized physical blocks. These strategies are evaluated with representative archival data, and the results indicate that they increase the compression ratio by 16.3% and write throughput by 13.7% on average, while decreasing read throughput by only 2.5%.

15.
This paper presents the results of performance tests carried out for a file delivery system based on the file delivery over unidirectional transport (FLUTE) protocol. FLUTE is a file transport protocol used to deliver files over IP networks, including the Internet and unidirectional systems, from a sender to one or more receivers. FLUTE uses UDP, an unreliable transport protocol, and so reliable delivery must be guaranteed by other means. This paper shows how FLUTE manages to recover from packet losses using forward path redundancy (forward error correction (FEC) and repeat transmissions in a data carousel), and a simple HTTP-based point-to-point file repair scheme which is specified in 3GPP and DVB standards. The results presented in this paper show that careful optimization of FEC overhead, and the number of repeat transmissions, gives the best system performance in most cases. Based on the simplified error reception and distribution model depicted in this study, it is illustrated that the simple client-server point-to-point file repair is optimal only for small groups. Several options to improve the configuration of FLUTE senders are provided, to deliver reception guarantees with optimal data expense from the system point of view. Copyright © 2006 John Wiley & Sons, Ltd.

16.
Data wiping is a useful technology that can prevent data recovery in a file system, but growth in the amount of data usually leads to a decline in wiping efficiency. To improve on this, a novel data wiping scheme for the Ext4 file system, DWSE, is proposed. It includes two algorithms for wiping files and free space adaptively. According to a user-specified rate of remaining blocks, the file wiping algorithm WFile cleans only part of a selected file to save time. The free-space wiping algorithm WFree speeds up the cleaning of dirty blocks by a random sampling and hypothesis testing method, with two adjustable rates representing the status and content of a block group. A journal cleaning algorithm, CleanJ, is also proposed; it removes old records by creating and deleting temporary files so that data cannot be recovered from the journal file. Through these parameters, users can wipe their data with a chosen balance between security and efficiency. Experimental results show that the scheme can wipe files and free space at different levels of security and efficiency depending on the parameters, and achieves higher security and efficiency than two other data wiping schemes.

17.
A FAT32-Based File Hiding Method and Its Implementation on Linux   Cited by: 2 (0 self, 2 others)
To overcome the shortcomings of existing FAT32-based file hiding methods, a method is proposed that hides files by modifying directory entry attributes and reconstructing the sequence of FAT table entries. The key data structures and functions of Linux's FAT32 support are analyzed, and the method is implemented on Linux using its buffering mechanism. Analysis shows that the method not only achieves file hiding independently of the operating system, but is also simple, computationally lightweight, and provides strong hiding.

18.
To solve the problem that the large volume of data produced by long-term monitoring cannot be stored, a FAT16 file system based on an MSP430 microcontroller and an SD card was designed and developed. The SD card is accessed over the SPI bus for reading and writing, and a FAT16 file system is created on the card so that it can be recognized by Windows, making later data processing convenient. The system has broad application prospects in high-capacity field data acquisition and storage, and applying the design to a portable ECG monitor demonstrates its practical value.

19.
Lossless data compression systems are prone to errors during communication, which corrupt the code table and the reconstructed data and cause error propagation, limiting their use in file systems and wireless communication. Targeting LZW, a lossless compression algorithm widely used in general-purpose coding, this paper analyzes and exploits the redundancy in LZW-compressed data: check codes are carried by selecting some codewords and dynamically adjusting the length of the compressed symbol strings they correspond to, yielding CLZW, a lossless compression method with error-correction capability. The method adds no extra data and changes neither the data format nor the coding rules, so it remains compatible with standard LZW. Experimental results show that files compressed with the method can still be decompressed by a standard LZW decoder, and that the method effectively corrects bit errors in LZW-compressed data.
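For reference, a textbook LZW encoder/decoder showing the dictionary coding that CLZW builds on; CLZW's check-code embedding itself is not reproduced here:

```python
# Standard LZW: the encoder grows a string dictionary and emits codes; the decoder
# rebuilds the same dictionary from the code stream.
def lzw_compress(data: bytes) -> list:
    dictionary = {bytes([i]): i for i in range(256)}
    next_code, current, out = 256, b"", []
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in dictionary:
            current = candidate
        else:
            out.append(dictionary[current])
            dictionary[candidate] = next_code
            next_code += 1
            current = bytes([byte])
    if current:
        out.append(dictionary[current])
    return out

def lzw_decompress(codes: list) -> bytes:
    dictionary = {i: bytes([i]) for i in range(256)}
    next_code = 256
    prev = dictionary[codes[0]]
    out = bytearray(prev)
    for code in codes[1:]:
        entry = dictionary[code] if code in dictionary else prev + prev[:1]  # KwKwK corner case
        out += entry
        dictionary[next_code] = prev + entry[:1]
        next_code += 1
        prev = entry
    return bytes(out)

if __name__ == "__main__":
    sample = b"TOBEORNOTTOBEORTOBEORNOT"
    codes = lzw_compress(sample)
    assert lzw_decompress(codes) == sample
    print(len(sample), "bytes ->", len(codes), "codes")
```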

20.
The authors evaluate the performance of both the fixed buffer allocation (FBA) and the adaptive buffer allocation (ABA) schemes, in which the network nodes are allowed to offer less than the requested buffer size. The performance measures of interest are the blocking probability, file transfer delay, and the adaptation speed for ABA for a given buffer size and the offered load. The authors develop and analyze a quasi-birth-death model of the ABA scheme (with exponential file lengths and negligible delay in carrying out reservation and cancellation procedures). In particular, they develop a recursive computational scheme exploiting the structure of the underlying model. This is supplemented by a first-passage time analysis to evaluate the transient behavior of the control strategy. The authors use both analytic and simulation methods. The results demonstrate that the ABA schemes provide significant advantages over the FBA scheme if the parameters are appropriately chosen. They also provide guidelines on the choice of these parameters.
