Similar Articles
20 similar articles found (search time: 15 ms)
1.
Mobile data collectors (MDCs) are highly efficient for data collection in Internet of Things (IoT) sensor networks. They gather data at rendezvous points to reduce data collection latency, so determining those points is essential for real-time data collection, and meeting a specific deadline also requires accounting for IoT network characteristics. First, disconnected IoT sensor networks are a real challenge in IoT applications. Second, the optimal data collection points (DCPs) and the optimal number of MDCs must be determined simultaneously to collect data in real time. This study proposes a Deadline-based Data Collection using Optimal Mobile Data Collectors (DDC-OMDC) scheme, which collects data from a disconnected network with the optimal number of mobile data collectors within a specific deadline for delay-intolerant applications. DDC-OMDC works in two phases. In the first phase, the optimal number of MDCs is determined to collect data at the optimal DCPs, guaranteeing one-hop data collection from each cluster; the number of MDCs is derived from the optimal DCPs, the data collection stopping time, and the deadline. In the second phase, an optimal data collection trajectory is computed for each MDC using the nearest neighbor heuristic to collect data in real time. Simulation results show that the proposed scheme outperforms existing approaches in real-time data collection and determines the optimal number of MDCs and trajectories that meet the deadline for delay-intolerant applications.
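The trajectory-planning step in the second phase is a classic nearest-neighbor tour construction. A minimal sketch, assuming 2-D DCP coordinates and a Euclidean cost model (the coordinates, function names, and single-MDC setting here are illustrative, not taken from the paper):

```python
import math

def nearest_neighbor_tour(dcps, start):
    """Greedy tour over data collection points (DCPs), starting from `start`.

    dcps  -- list of (x, y) coordinates of data collection points
    start -- (x, y) position where the mobile data collector begins
    Returns the visiting order as a list of DCP indices.
    """
    unvisited = set(range(len(dcps)))
    tour, current = [], start
    while unvisited:
        # Pick the closest remaining DCP to the current position.
        nxt = min(unvisited, key=lambda i: math.dist(current, dcps[i]))
        tour.append(nxt)
        current = dcps[nxt]
        unvisited.remove(nxt)
    return tour

# Example: one MDC visiting four DCPs from a depot at the origin.
points = [(2.0, 1.0), (5.0, 4.0), (1.0, 3.0), (6.0, 0.5)]
print(nearest_neighbor_tour(points, start=(0.0, 0.0)))
```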

2.
This paper explores the potential for increased transmission efficiency of military tactical command and control data links through voice/data integration. The relationship between voice and data communications as they are employed in support of military command is examined, and the operational advantages of integrating voice and data are enumerated. A message processing/handling paradigm of the command and control process is introduced. Past analyses are extended to include tactical data links, and the relative advantages of various degrees of voice/data integration over tactical data links and networks are evaluated. It is shown that it is generally more efficient for voice and data traffic to share voice-equivalent channels on a statistical contention basis than for voice and data to operate over separate dedicated voice-equivalent channels; important exceptions to this rule are noted. Finally, a hybrid scheme, in which a thin-line capability is reserved for a special class of data and/or voice while the remainder supports integrated voice/data operation, is described and analyzed. This scheme is shown to be practical and to incur little loss of efficiency relative to the more complete voice/data integration analyzed earlier.
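The claimed gain from statistical sharing is the standard trunking-efficiency argument: one pooled group of channels blocks less traffic than two dedicated half-size groups carrying the same total load. A worked illustration using the Erlang B blocking formula; the channel counts and offered loads are made-up numbers, not figures from the paper:

```python
def erlang_b(servers, offered_load):
    """Erlang B blocking probability, computed with the stable recurrence."""
    b = 1.0
    for n in range(1, servers + 1):
        b = (offered_load * b) / (n + offered_load * b)
    return b

# Hypothetical scenario: 10 voice-equivalent channels in total,
# 4 erlangs of voice traffic and 4 erlangs of data traffic.
voice_load, data_load = 4.0, 4.0

# Separate dedicated groups: 5 channels for each traffic type.
split_voice = erlang_b(5, voice_load)
split_data = erlang_b(5, data_load)

# Fully shared pool: 10 channels carrying the combined load.
pooled = erlang_b(10, voice_load + data_load)

print(f"dedicated groups: voice {split_voice:.3f}, data {split_data:.3f}")
print(f"shared pool:      {pooled:.3f}")   # lower blocking for the same total load
```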

3.
The distributed ship-attitude/ship-position measurement architecture adopted on the new generation of space tracking ships improves data accuracy and reliability, but it also places new demands on data processing and use, such as consistency checking of attitude data and joint use of attitude data from multiple sources. A new method for reconstructing ship attitude data is therefore proposed. First, the centralized and distributed measurement schemes are compared, the reasons why traditional data processing methods have difficulty meeting the new requirements are analyzed, and it is pointed out that obtaining reconstructed attitude data is the key to solving these problems. Then, the reconstruction method for distributed hull attitude measurement data is derived in two steps, and the use of the reconstructed data to address the new requirements is explained. Finally, the reconstruction method is verified both in terms of the accuracy of the reconstructed data itself and its effect on spacecraft target positioning accuracy; the results show that the attitude data obtained with this method meet mission accuracy requirements.

4.
张玉珍  帅克 《信息技术》2007,31(2):100-101,105
A technique for migrating plain-text data into a database is presented. A program extracts the fields of each record in the plain-text source and inserts them into the target database, performing the necessary data type conversion, data extraction, and data generation according to application requirements; the migration is then completed with the target database's built-in data migration tool. Practical use shows that the technique is feasible and effective, and it is of great help for system cutover during information system development.
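A minimal sketch of the record-by-record migration described above, assuming a comma-delimited text file with id, name, and score fields and using Python's built-in sqlite3 module as the target database (the file layout, table name, and type conversions are illustrative assumptions):

```python
import sqlite3

def migrate(text_path, db_path):
    """Parse each record of a plain-text file and insert it into SQLite."""
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS records (id INTEGER, name TEXT, score REAL)"
    )
    with open(text_path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if not line:
                continue            # skip blank lines
            raw_id, name, raw_score = line.split(",")
            # Type conversion from text fields to the target column types.
            conn.execute(
                "INSERT INTO records (id, name, score) VALUES (?, ?, ?)",
                (int(raw_id), name.strip(), float(raw_score)),
            )
    conn.commit()
    conn.close()

migrate("legacy_records.txt", "target.db")
```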

5.
Application of data warehouse technology in supermarket sales management   (total citations: 1; self-citations: 0; other citations: 1)
The concept of the data warehouse is first introduced: a data warehouse is a subject-oriented, integrated, non-updatable, time-variant collection of data that supports decision analysis for an enterprise or organization. The two main analysis tools of the data warehouse, online analytical processing (OLAP) and data mining, are then described; OLAP is a complex analysis technique built on massive data, and data mining is the process of extracting implicit, previously unknown, and potentially useful information and knowledge from massive data. Finally, applications of the data warehouse in banking, finance, telecommunications, and retail are reviewed, and supermarket sales management is taken as an example to build a data model and architecture using data warehouse technology.
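To make the OLAP idea concrete, here is a small roll-up over a toy supermarket sales fact table, aggregating sales by store and month with pandas; the table contents and column names are invented for illustration and are not data from the paper:

```python
import pandas as pd

# Toy fact table: one row per sales transaction.
sales = pd.DataFrame({
    "store":   ["A", "A", "B", "B", "A", "B"],
    "month":   ["2024-01", "2024-02", "2024-01", "2024-02", "2024-01", "2024-01"],
    "product": ["milk", "bread", "milk", "milk", "bread", "bread"],
    "amount":  [3.5, 2.0, 3.5, 7.0, 4.0, 2.0],
})

# OLAP-style roll-up: total sales per store per month.
cube = sales.pivot_table(index="store", columns="month",
                         values="amount", aggfunc="sum", fill_value=0)
print(cube)
```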

6.
In the era of rapid informatization, data is a critical resource and a core asset of the enterprise, and data quality is key to improving business performance and profitability. To ensure enterprise data quality, a data quality framework is proposed covering data dimensions and their definitions, quality criteria and their definitions, and assessment methods. The framework can guide enterprises in obtaining high-quality data. In addition, to address the different data quality requirements of different users and tasks within an enterprise, a data quality management model is proposed, providing a reference for enterprises to carry out data quality management.
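A small sketch of how dimension-based quality scoring might be computed, using two common dimensions (completeness and validity) over a list of customer records; the record fields, the validity rule, and the scoring functions are illustrative assumptions, not the framework's actual definitions:

```python
import re

records = [
    {"id": 1, "name": "Alice", "email": "alice@example.com"},
    {"id": 2, "name": "",      "email": "bob@example"},       # incomplete / invalid
    {"id": 3, "name": "Carol", "email": "carol@example.com"},
]

def completeness(rows, field):
    """Fraction of records whose `field` is present and non-empty."""
    return sum(bool(r.get(field)) for r in rows) / len(rows)

def validity(rows, field, pattern):
    """Fraction of records whose `field` matches a validity rule (regex)."""
    return sum(bool(re.fullmatch(pattern, r.get(field, ""))) for r in rows) / len(rows)

print("name completeness :", completeness(records, "name"))                          # ~0.67
print("email validity    :", validity(records, "email", r"[^@]+@[^@]+\.[^@]+"))      # ~0.67
```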

7.

A large amount of data is generated every second around the world, and technology evolves every day to handle and store such enormous data efficiently. Even so, providing storage space for the gigantic volume of data generated globally each second poses a conundrum. One of the main problems that storage servers and data centers face is data redundancy: the same data, or slightly modified data, is stored on the same storage server repeatedly, so it occupies more space through its multiple copies. Data deduplication can overcome this issue, but existing deduplication techniques lack an efficient way of identifying duplicate data. The proposed method uses a mixed-mode analytical architecture to address data deduplication, introducing three levels of mapping. Each level deals with different aspects of the data and the operations carried out to place unique sets of data on the cloud server. The focus is on effectively solving deduplication to rule out duplicated data and optimize data storage on the cloud server.
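The paper's three-level mapping is not detailed in the abstract, but the core deduplication idea can be illustrated with a simple content-hash index that stores each unique chunk only once (a generic sketch, not the paper's architecture; the chunk size and class/function names are assumptions):

```python
import hashlib

class DedupStore:
    """Toy content-addressed store: identical chunks are kept only once."""

    def __init__(self, chunk_size=4096):
        self.chunk_size = chunk_size
        self.chunks = {}          # sha256 digest -> chunk bytes
        self.files = {}           # file name -> list of digests

    def put(self, name, data: bytes):
        digests = []
        for i in range(0, len(data), self.chunk_size):
            chunk = data[i:i + self.chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            # Store the chunk only if its hash has not been seen before.
            self.chunks.setdefault(digest, chunk)
            digests.append(digest)
        self.files[name] = digests

    def get(self, name) -> bytes:
        return b"".join(self.chunks[d] for d in self.files[name])

store = DedupStore()
store.put("a.txt", b"hello world" * 1000)
store.put("b.txt", b"hello world" * 1000)   # duplicate content adds no new chunks
print(len(store.chunks), "unique chunks stored for 2 files")
```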

8.
A fuzzy classifier with ellipsoidal regions for diagnosis problems   (total citations: 1; self-citations: 0; other citations: 1)
In our previous work, we developed a fuzzy classifier with ellipsoidal regions that has a training capability. In this paper, we extend the fuzzy classifier to diagnosis problems, in which training data belonging to abnormal classes are difficult to obtain while training data belonging to normal classes are easily obtained. Assuming that there are no data belonging to abnormal classes, we first train the fuzzy classifier using only the data belonging to normal classes. We then introduce a threshold on the minimum weighted distance from the cluster centers of the normal classes. If unknown data falls within the threshold, we classify it into a normal class; if not, into an abnormal class. The operator checks whether the diagnosis is correct. If the incoming data is classified into the same normal class by both the classifier and the operator, nothing is done. But if the data is classified into different normal classes by the classifier and the operator, or if it is classified into an abnormal class while the operator classifies it into a normal class, the slopes of the membership functions of the fuzzy rules are tuned. If the operator classifies the data into an abnormal class, the classifier is retrained with the newly obtained data added, irrespective of the classifier's classification result. Online training continues until a sufficient number of data belonging to abnormal classes are obtained; the threshold is then optimized using the data belonging to both normal and abnormal classes. We evaluate our method using the Fisher iris data, blood cell data, and thyroid data, treating some of the classes as abnormal.
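The core decision rule, distance from the center of an ellipsoidal normal-class region compared against a threshold, can be sketched with a Mahalanobis distance estimated from normal-class training data only. This is a simplified stand-in for the paper's weighted distance and tuned membership functions; the synthetic data and threshold value are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
normal_train = rng.normal(loc=[0.0, 0.0], scale=[1.0, 0.5], size=(200, 2))

# Fit one ellipsoidal region: center and covariance of the normal class.
center = normal_train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(normal_train, rowvar=False))

def mahalanobis(x):
    d = x - center
    return float(np.sqrt(d @ cov_inv @ d))

THRESHOLD = 3.0   # assumed cutoff; the paper tunes/optimizes this online

def diagnose(x):
    """Return 'normal' if x lies inside the ellipsoid, else 'abnormal'."""
    return "normal" if mahalanobis(x) <= THRESHOLD else "abnormal"

print(diagnose(np.array([0.2, 0.1])))    # near the center -> normal
print(diagnose(np.array([5.0, 4.0])))    # far outside the region -> abnormal
```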

9.
Building large-scale, high-performance, highly reliable, and highly scalable distributed storage systems in a rapidly developing network environment is a new challenge for distributed storage technology. The key idea of Shamir's threshold secret-splitting scheme is used to split the shared data; the resulting shares are then expanded into sub-data sets according to membership degrees from fuzzy mathematics, the sub-data sets are distributed to the participants for storage, and the data is recovered when the user needs it. The core idea is to expand the split sub-data into sub-data sets in order to improve the reliability of data recovery.
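A minimal sketch of the underlying Shamir (t, n) threshold splitting that the scheme builds on, over a small prime field; the prime, threshold, and share count are illustrative, and the fuzzy-membership expansion of shares into sub-data sets described in the paper is not reproduced here:

```python
import random

PRIME = 2 ** 61 - 1   # field modulus (illustrative choice of a Mersenne prime)

def make_shares(secret, t, n):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 to recover the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = make_shares(123456789, t=3, n=5)
print(recover(shares[:3]))          # any 3 of the 5 shares recover 123456789
```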

10.
刘晴  汤玮  刘旭 《信息技术》2020,(1):130-133,139
Traditional methods for integrating geographically distributed heterogeneous data suffer from poor query precision, so an integration method for heterogeneous data sources at different sites based on virtual database technology is proposed. A transmission load model and data storage structure for the heterogeneous data in the virtual database are built, the sparsity features of the storage structure are extracted, and attribute association rule features of the heterogeneous data are mined; these feature quantities are used to fuse the fuzzy information of the heterogeneous data and build a data integration model, realizing the integration of heterogeneous data sources in the virtual database. Simulation results show that the method has low integration time overhead and a high database query precision rate.

11.
张涛 《通信技术》2020,(1):221-224
In the era of big data, data assets have become one of the core development factors of the enterprise. On the one hand, enterprises are eager to integrate, analyze, and mine their data to achieve data-driven business, data-enabled innovation, and business transformation; on the other hand, an endless stream of data breach incidents constrains the progress of data sharing. A data sharing management system, supported by technical controls over data sharing, is therefore urgently needed to resolve the three difficulties of being "unwilling", "afraid", and "unable" to share data. In this context, a data-label-based provenance method for shared data is proposed: data labels mark legitimately authorized data sharing flows, a feature library of data sharing rules is used to trace illegitimate data sharing flows back to their source, and violations within legitimately authorized sharing flows can also be tracked.
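A minimal sketch of the label-and-rule idea: each outgoing flow carries a label identifying its authorized sharing scope, and a small rule base flags flows whose label does not authorize the destination. The label format, rule structure, and flow records are invented for illustration and are not the paper's feature library:

```python
# Hypothetical rule base: label -> destinations authorized to receive it.
SHARING_RULES = {
    "CUST-ANON": {"analytics", "partner_a"},
    "CUST-PII":  {"internal_crm"},
}

flows = [
    {"flow_id": 101, "label": "CUST-ANON", "dest": "analytics"},
    {"flow_id": 102, "label": "CUST-PII",  "dest": "partner_a"},   # violation
    {"flow_id": 103, "label": "CUST-PII",  "dest": "internal_crm"},
]

def audit(flow_log):
    """Check each labelled flow against the rule base and report violations."""
    for flow in flow_log:
        allowed = SHARING_RULES.get(flow["label"], set())
        if flow["dest"] not in allowed:
            print(f"flow {flow['flow_id']}: label {flow['label']} "
                  f"not authorized for destination {flow['dest']}")

audit(flows)   # reports flow 102 as an unauthorized sharing flow
```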

12.
庄磊 《信息技术》2022,(2):110-115
Current cloud platform data storage schemes ignore data redundancy and easily produce large amounts of duplicated data. To optimize storage performance, a data storage scheme is designed and implemented on a PaaS cloud platform: redundant data on the platform is pruned by partition, a weight factor is computed for the remaining data in each partition, the storage order on the PaaS platform is determined from the weight factors, and the storage scheme is generated dynamically; a Proxmox VE virtual environment is used as the virtual node, and the underlying servers implement ...

13.
Chen  Rentai 《Wireless Networks》2022,28(6):2785-2793

Traditional Hadoop-based anomaly detection algorithms for big data networks do not suppress the disturbance components in the data attributes of anomalous nodes. A new algorithm for locating and identifying fuzzy anomalous data in big data networks is proposed. In the big data network environment, an adaptive cascade notch filter is used to eliminate data interference, and a second-order lattice filter is used to locate abnormal node data. The parameters of a fuzzy linear regression model are estimated and the fuzzy Cook distance is computed; the data points with the largest fuzzy Cook distance are treated as fuzzy anomalous data, realizing data location and identification. Experimental results show that the average recall rate of the proposed algorithm for locating fuzzy outlier data is 93%, and its locating probability for fuzzy outlier data is 92% at a signal-to-noise ratio of −30 dB. The proposed algorithm accurately identifies fuzzy outlier data in big data networks via the Cook distance and achieves better locating and identification performance for fuzzy outlier data.
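The Cook-distance criterion used for flagging outliers can be illustrated with the classical (non-fuzzy) Cook's distance for ordinary least squares, computed from leverages and residuals. This is a sketch of the general technique, not the paper's fuzzy variant, and the synthetic data is invented:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 30)
y = 2.0 * x + 1.0 + rng.normal(scale=0.5, size=x.size)
y[12] += 8.0                       # inject one outlier

X = np.column_stack([np.ones_like(x), x])        # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # OLS fit
resid = y - X @ beta
p = X.shape[1]                                   # number of parameters
mse = resid @ resid / (len(y) - p)

# Leverages are the diagonal of the hat matrix H = X (X'X)^-1 X'.
H = X @ np.linalg.inv(X.T @ X) @ X.T
h = np.diag(H)

# Cook's distance: D_i = r_i^2 * h_i / (p * MSE * (1 - h_i)^2)
cooks_d = resid**2 * h / (p * mse * (1 - h)**2)
print("most influential point:", int(np.argmax(cooks_d)))   # expected: index 12
```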

14.
In wireless sensor networks, data aggregation protocols are used to prolong network lifetime; however, performing data aggregation while preserving data privacy is challenging. This paper presents a polynomial regression-based data aggregation protocol that preserves the privacy of sensor data. In the proposed protocol, sensor nodes represent their data as polynomial functions to reduce the amount of data transmission. To protect data privacy, sensor nodes secretly send the coefficients of the polynomial functions to data aggregators instead of their original data. Data aggregation is performed on the concealed polynomial coefficients, and the base station is able to extract a good approximation of the network data from the aggregation result. Security analysis and simulation results show that the proposed scheme reduces the amount of data transmission in the network while preserving data privacy. Copyright © 2013 John Wiley & Sons, Ltd.
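A minimal sketch of the representation step: a node fits a low-degree polynomial to its time-stamped readings and ships only the coefficients, which an aggregator can sum coefficient-wise to represent the sum of the fitted curves. The polynomial degree and sample data are assumptions, and the paper's concealment of the coefficients is omitted here:

```python
import numpy as np

t = np.arange(10, dtype=float)                  # common sampling times

# Two nodes' raw readings (invented values).
node_a = 20.0 + 0.5 * t + np.random.default_rng(2).normal(0, 0.2, t.size)
node_b = 18.0 + 0.8 * t + np.random.default_rng(3).normal(0, 0.2, t.size)

DEGREE = 2
coef_a = np.polyfit(t, node_a, DEGREE)          # node A transmits these 3 numbers
coef_b = np.polyfit(t, node_b, DEGREE)          # node B transmits these 3 numbers

# Aggregator adds coefficient vectors instead of forwarding raw samples.
coef_sum = coef_a + coef_b

# Base station evaluates the aggregate polynomial to approximate the summed readings.
approx_sum = np.polyval(coef_sum, t)
print(np.max(np.abs(approx_sum - (node_a + node_b))))   # small approximation error
```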

15.
With the development of big data and cloud computing technology, more and more users choose to store data on cloud servers, which brings much convenience to managing and using their data, but also the risk of data leakage. A common way to prevent data leakage is to encrypt the data before uploading it, but traditional encryption methods are often not conducive to data sharing and querying. In this paper, a new kind of attribute-based encryption (ABE) scheme, called the Sub-String Searchable ABE (SSS-ABE) scheme, is proposed for sharing and querying encrypted data. In the SSS-ABE scheme, the data owner encrypts the data under an access structure, and only a data user who satisfies the access structure can query and decrypt it. The data user can make a substring query on the whole ciphertext without setting keywords in advance. In addition, an outsourcing method is introduced to reduce the local computation of the decryption process so that the outsourcing SSS-ABE scheme can be applied to IoT devices.

16.
王铮  任华  方燕萍 《电信科学》2016,32(12):7-12
Telecom operators hold large amounts of data, but for a variety of reasons the data quality is not ideal, with much of the data incomplete or even missing. Mining of the existing data can only proceed when the data meet quality requirements and reach a sufficient sampling ratio. Relying on the existing nationwide log retention system, a template sample library of complete data is designed to identify data that fail to meet quality requirements; a random forest algorithm is then used to find the best-matching identical or related data, completing the data and improving data quality, and the template sample library is optimized and expanded through backtracking feedback. A data completion subsystem is built within the nationwide log retention system to achieve end-to-end data quality assurance and improvement, completing and improving the quality of historical and even real-time data, ultimately meeting the requirements of data processing and mining and enhancing the quality and value of operator data.
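A minimal sketch of random-forest-based completion of a missing field, trained on complete "template" records and applied to an incomplete one with scikit-learn; the feature names, records, and model settings are illustrative assumptions, not details from the system:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Complete "template" records: [call_duration_min, data_volume_mb] -> monthly_fee
X_complete = np.array([[120, 800], [300, 2500], [60, 300], [450, 4000], [200, 1500]])
y_complete = np.array([38.0, 88.0, 18.0, 128.0, 58.0])

# Train on the complete records.
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_complete, y_complete)

# A record whose monthly_fee field is missing: predict it from the other fields.
incomplete = np.array([[250, 2000]])
print("imputed monthly_fee:", model.predict(incomplete)[0])
```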

17.
At the end of the data life cycle there is still a risk of data leakage, because data stored in the cloud is usually removed only by logical deletion of the key. Therefore, a cloud data assured deletion scheme (WV-CP-ABE) based on ciphertext re-encryption and overwrite verification is proposed. When the data owner wants to delete the outsourced data, fine-grained deletion is realized by re-encrypting the ciphertext to change the access control policy. Second, a searchable path hash binary tree (DSMHT) based on dirty-data-block overwriting is built to verify the correctness of the data to be deleted. Finally, the dual mechanism of changing the ciphertext access control policy and overwriting the data guarantees assured deletion. Experimental analysis shows that, compared with previous logical-deletion methods, the scheme offers finer-grained control and more reliable security for assured deletion of data.
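The overwrite-verification idea can be illustrated with a plain Merkle-style hash tree over data blocks: after the target blocks are overwritten, recomputing the root shows whether the stored content really changed. This is a generic sketch, not the DSMHT structure from the paper; block contents and sizes are invented:

```python
import hashlib

def sha256(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(blocks):
    """Root hash of a binary hash tree built over the data blocks."""
    level = [sha256(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:                  # duplicate the last node on odd levels
            level.append(level[-1])
        level = [sha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"secret-block-2", b"block-3"]
root_before = merkle_root(blocks)

# Overwrite the block scheduled for deletion with dirty data.
blocks[2] = b"\x00" * len(b"secret-block-2")
root_after = merkle_root(blocks)

# A changed root is evidence that the stored block really was overwritten.
print("overwrite verified:", root_before != root_after)
```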

18.
Design of a data distribution service for virtual test system middleware   (total citations: 2; self-citations: 0; other citations: 2)
A data distribution service is designed to meet the bulk data exchange requirements of the middleware of a distributed virtual test system. Data domains are used to manage bulk data and support arbitrary combinations of SDO attributes across multiple nodes; a publish/subscribe mechanism establishes the interaction relationships between data domains and SDO attributes, and at run time SDO attribute updates are synchronized to the data domains according to these relationships. To further improve data exchange efficiency, a data filtering mechanism screens out invalid data and relieves the data processing load on the application layer; the filtering is implemented with C++ combined with Python, which is simpler, more flexible, and safer than a pure C++ implementation. The service is developed on ACE, and tests show that it works correctly and satisfies the bulk data exchange requirements.
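A minimal sketch of the publish/subscribe-with-filtering pattern described above, written in Python only (the original service combines C++ and ACE with Python filters); the topic names, filter rule, and callback are illustrative:

```python
from collections import defaultdict

class DataDomain:
    """Toy data domain: subscribers register callbacks, optionally with a filter."""

    def __init__(self):
        self._subs = defaultdict(list)     # attribute name -> [(filter, callback)]

    def subscribe(self, attr, callback, data_filter=None):
        self._subs[attr].append((data_filter, callback))

    def publish(self, attr, value):
        # Push the SDO attribute update to every subscriber whose filter accepts it.
        for data_filter, callback in self._subs[attr]:
            if data_filter is None or data_filter(value):
                callback(attr, value)

domain = DataDomain()
# Filter out invalid (e.g. negative) readings before they reach the application layer.
domain.subscribe("vehicle.speed",
                 lambda a, v: print(f"update {a} = {v}"),
                 data_filter=lambda v: v >= 0)

domain.publish("vehicle.speed", 42.0)    # delivered to the subscriber
domain.publish("vehicle.speed", -1.0)    # filtered out, never reaches the callback
```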

19.

Due to the impressive growth in the amount of data transmitted over heterogeneous sensor networks and related emerging technologies, especially the Internet of Things, in which the number of connected devices and data consumption are growing remarkably, big data has emerged as a widely recognized trend. The term big data refers not only to the volume of data but also to the high speed of transmission and the wide variety of information that is difficult to collect, store, and process with available classical technologies. Although the data generated by an individual sensor may not appear significant, the data generated by the many sensors of connected sensor networks add up to large volumes. Big data management imposes additional constraints on wireless sensor networks, especially on the data aggregation process, one of the essential paradigms in wireless sensor networks. Data aggregation can be a solution to the big data problem by combining data from different sources to eliminate redundant data and consequently reduce the amount of data and the consumption of available resources in the network. The main objective of this work is to propose a new approach for supporting big data in the data aggregation process in heterogeneous wireless sensor networks. The proposed approach aims to reduce the energy cost of data aggregation by balancing the data load across the heterogeneous nodes, and it is further improved by integrating a closed feedback control loop to reinforce the balance of the aggregation load on the nodes, maintaining an optimal delay and aggregation time.

20.
Cloud computing represents a new stage in the development of the IT industry, resulting from the development of parallel computing, grid computing, distributed computing, and network technology. Applying big data technology enables the integration of multi-source, heterogeneous marine environment monitoring data, which is highly significant for marine environment monitoring: the data can be shared to avoid information silos and can supply the data needed for analysis and mining. This work helps solve the problem of managing massive marine environment monitoring data, meets marine environment researchers' demand for big data, and further explores possible application scenarios of big data technology in marine environment monitoring. Finally, the architecture of a cloud-based big data processing platform for marine environment monitoring is discussed, with the goal of supporting data-driven decisions and improving the level of marine environment management.
