Similar Documents
20 similar documents found (search time: 15 ms).
1.
For real-time interactive multimedia operations, such as video uploading, video playback, fast-forward, and fast-rewind, solid state disk (SSD)-based storage systems for video streaming servers are becoming increasingly important. Random access rates in storage systems increase significantly with the number of users, so it is difficult to serve many users simultaneously with HDD-based storage systems, which have poor random access performance. Because NAND flash-based SSDs have no mechanical moving parts, they outperform HDDs in random access, and thanks to their multichannel architecture they perform comparably to HDDs in sequential access. In this paper, we propose a new SSD-based storage system for interactive media servers. The proposed method maximizes the channel utilization of the SSD's multichannel architecture, thereby improving the performance of SSD-based storage systems for interactive media operations.
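To make the channel-utilization idea concrete, here is a minimal Python sketch (not the paper's algorithm): video segments are hashed across SSD channels so that both sequential playback and strided patterns such as fast-forward keep many channels busy. The channel count and class names are illustrative assumptions.

```python
# Illustrative sketch only: spread video segments over SSD channels so
# interactive access patterns (play, fast-forward, rewind) keep the
# multichannel architecture busy instead of queueing on one channel.

NUM_CHANNELS = 8  # assumed channel count, for illustration

class MultiChannelPlacement:
    def __init__(self, num_channels=NUM_CHANNELS):
        self.num_channels = num_channels

    def channel_of(self, video_id: int, segment_no: int) -> int:
        # Hash-based placement: even a strided pattern (e.g. fast-forward
        # reading every 4th segment) still lands on many channels, whereas
        # plain segment_no % num_channels would hit only a few.
        return hash((video_id, segment_no)) % self.num_channels

    def schedule(self, requests):
        # Group pending segment requests per channel so the channels can
        # be serviced in parallel.
        queues = {c: [] for c in range(self.num_channels)}
        for video_id, segment_no in requests:
            queues[self.channel_of(video_id, segment_no)].append((video_id, segment_no))
        return queues

if __name__ == "__main__":
    placement = MultiChannelPlacement()
    fast_forward = [(7, s) for s in range(0, 64, 4)]   # every 4th segment of video 7
    queues = placement.schedule(fast_forward)
    print({ch: len(q) for ch, q in queues.items()})    # requests spread over channels
```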

2.
Global cache management in distributed multimedia storage systems
朱晴波  乔浩  陈道蓄 《电子学报》2002,30(12):1832-1835
A multimedia storage system must support access to both continuous and non-continuous media. Because of the real-time requirements of continuous media, the system must reserve a large amount of disk bandwidth for continuous-media accesses and hold it for long periods, which severely degrades the access performance of other file types. Based on the access characteristics of continuous media, this paper proposes GLNU, a cooperative caching policy for distributed multimedia storage systems, which makes full use of the memory resources available on other nodes to improve cache utilization and reduce disk I/O for continuous media, thereby improving the access performance of other media. Simulation experiments show that GLNU outperforms existing caching policies under a wide range of parameter settings and is well suited to distributed multimedia storage systems.
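GLNU itself is specified in the paper; as a generic illustration of cooperative caching, the Python sketch below serves a block from local memory first, then from another node's memory, and only then from disk, which is what shields the disk bandwidth reserved for continuous media. All names are illustrative.

```python
# Illustrative cooperative-cache lookup (not the GLNU algorithm itself):
# serve a block from local memory if possible, otherwise from another
# node's memory, and only fall back to disk I/O as a last resort.

class Node:
    def __init__(self, name):
        self.name = name
        self.memory_cache = {}   # block_id -> data

class CooperativeCache:
    def __init__(self, local, peers):
        self.local = local
        self.peers = peers

    def read(self, block_id, read_from_disk):
        if block_id in self.local.memory_cache:            # local memory hit
            return self.local.memory_cache[block_id], "local"
        for peer in self.peers:                            # remote memory hit
            if block_id in peer.memory_cache:
                data = peer.memory_cache[block_id]
                self.local.memory_cache[block_id] = data   # keep a local copy
                return data, f"remote:{peer.name}"
        data = read_from_disk(block_id)                    # disk miss
        self.local.memory_cache[block_id] = data
        return data, "disk"

if __name__ == "__main__":
    a, b = Node("A"), Node("B")
    b.memory_cache["movie-001:chunk-42"] = b"...frame data..."
    cache = CooperativeCache(local=a, peers=[b])
    _, where = cache.read("movie-001:chunk-42", read_from_disk=lambda bid: b"")
    print(where)   # remote:B -- no disk I/O needed
```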

3.
Peer-to-peer (P2P) file-sharing systems are characterized by highly replicated content that is distributed among nodes with enormous aggregate resources for storage and communication. File consistency is often compromised by undesirable changes, which should be detected and corrected in a timely fashion. The artificial immune system (AIS) is a novel evolutionary paradigm inspired by aspects of the biological immune system (BIS), such as protection, decentralization, autonomy, and anomaly detection. The AIS paradigm suggests a wide variety of mechanisms for solving complex computer problems. In this paper, we propose the ImmunoJXTA framework for file consistency management and file recovery using the main aspects of AIS in P2P systems. We implemented ImmunoJXTA on the JXTA P2P framework to recover distributed inconsistent files between peers efficiently. Promising results are achieved from experimental runs of the proposed framework. Copyright © 2010 John Wiley & Sons, Ltd.

4.
The development of proxy caching is essential in the area of video-on-demand (VoD) to meet users' expectations. VoD requires high bandwidth and creates heavy traffic due to the nature of the media. Many researchers have developed proxy caching models to reduce bandwidth consumption and traffic. Proxy caching keeps part of a media object so as to meet the viewing expectations of users without delay and to provide interactive playback. If caching is done continuously, the entire cache space will eventually be exhausted. Hence, the proxy server must apply cache replacement policies to replace existing objects and allocate cache space for incoming objects. Researchers have developed many cache replacement policies that consider parameters such as recency, access frequency, cost of retrieval, and object size. In this paper, the Weighted-Rank Cache replacement Policy (WRCP) is proposed. This policy uses parameters such as access frequency, aging, and mean access gap ratio, together with the size and retrieval cost of each object. The WRCP applies our previously developed proxy caching model, Hot-Point Proxy, at four levels of replacement, depending on the cache requirement. Simulation results show that the WRCP outperforms our earlier model, the Dual Cache Replacement Policy.
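The exact rank formula and weights of WRCP are given in the paper, not in this abstract; the Python sketch below only illustrates how a weighted score over access frequency, aging, access gap, size, and retrieval cost can drive eviction. All weights and field names are illustrative assumptions.

```python
# Illustrative weighted-rank eviction (the actual WRCP rank formula and
# weights are not given in the abstract; everything below is made up).
import time

def rank(obj, now, w_freq=1.0, w_age=1.0, w_gap=1.0):
    age = now - obj["last_access"]            # aging term
    mean_gap = obj["mean_access_gap"]         # mean access gap ratio proxy
    # Frequently accessed objects score high; old, sparsely accessed,
    # large, and cheap-to-refetch objects score low and are evicted first.
    score = (w_freq * obj["access_count"] - w_age * age - w_gap * mean_gap)
    return score * obj["retrieval_cost"] / obj["size"]

def evict(cache, bytes_needed, now=None):
    now = now or time.time()
    victims = sorted(cache.values(), key=lambda o: rank(o, now))
    freed, evicted = 0, []
    for obj in victims:
        if freed >= bytes_needed:
            break
        freed += obj["size"]
        evicted.append(obj["id"])
        del cache[obj["id"]]
    return evicted
```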

5.
In mobile wireless data access networks, remote data access is expensive in terms of bandwidth consumption. An efficient caching scheme can reduce the amount of data transmission and, hence, bandwidth consumption. However, an update event makes the associated cached data objects obsolete and useless for many applications. Data access frequency and update frequency therefore play a crucial role in deciding which data objects should be cached. Intuitively, frequently accessed but infrequently updated objects should be given higher preference for being preserved in the cache, while other objects should be given lower preference, be evicted, or not be cached at all, to accommodate higher-preference objects. In this paper, we propose Optimal Update-based Replacement, a replacement or eviction scheme for cache management in wireless data networks. To facilitate the replacement scheme, we also present two enhanced cache access schemes, named Update-based Poll-Each-Read and Update-based Call-Back. The proposed cache management schemes are supported by strong theoretical analysis. Both analysis and extensive simulation results are given to demonstrate that the proposed schemes guarantee an optimal amount of data transmission by increasing the number of effective hits, and outperform the popular Least Frequently Used scheme in terms of both effective hits and communication cost. Copyright © 2011 John Wiley & Sons, Ltd.
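The abstract's intuition is that objects accessed often but updated rarely deserve to stay cached. The minimal Python sketch below makes that intuition executable by ranking objects on their access-to-update ratio; the actual Optimal Update-based Replacement criterion is derived analytically in the paper and is not reproduced here.

```python
# Minimal sketch of update-aware eviction: keep objects whose access rate
# is high relative to their update rate, since each update invalidates the
# cached copy.  This ratio is only the abstract's intuition, not the
# paper's optimality criterion.

def preference(access_rate: float, update_rate: float) -> float:
    # Expected number of useful hits between two consecutive updates.
    return access_rate / max(update_rate, 1e-9)

def choose_victim(cache_stats):
    # cache_stats: {object_id: (access_rate, update_rate)}
    return min(cache_stats, key=lambda oid: preference(*cache_stats[oid]))

if __name__ == "__main__":
    stats = {"weather": (50.0, 40.0),    # hot but churns constantly
             "phonebook": (5.0, 0.1)}    # cooler but almost never updated
    print(choose_victim(stats))          # "weather" is evicted first
```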

6.
Low-density parity-check (LDPC) codes are very powerful error-correction codes with capabilities approaching the Shannon limit. In evaluating the error performance of an LDPC code, the computer simulation time becomes a primary concern when tens of millions of noise-corrupted codewords are to be decoded, particularly for codes with very long lengths. In this paper, we propose modeling the parity-check matrix of an LDPC code with compressed parity-check matrices in the check-node domain (CND) and in the bit-node domain (BND), respectively. Based on the compressed parity-check matrices, we create two message matrices, one in the CND and another in the BND, and two domain conversion matrices, one from CND to BND and another from BND to CND. With the proposed message matrices, the data used in the iterative LDPC decoding algorithm can be closely packed and stored within a small memory size. Consequently, such data can mostly be kept in the cache memory, reducing the need for the central processing unit to access the random access memory and hence improving the simulation time significantly. Furthermore, the messages in one domain can easily be converted to the other domain with the conversion matrices, allowing the central processing unit to access and update the messages efficiently. Copyright © 2011 John Wiley & Sons, Ltd.
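As a small illustration of the storage idea (a simplification, not the paper's exact CND/BND message matrices), a sparse parity-check matrix can be kept as two compressed index tables so that decoding messages live in small contiguous arrays:

```python
import numpy as np

# Simplified illustration: store the sparse parity-check matrix H as two
# compressed index tables -- one per check node (CND) and one per bit
# node (BND) -- so iterative-decoding messages can be packed into small,
# contiguous arrays instead of a large sparse matrix.

H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1]], dtype=np.uint8)

# Check-node domain: for each check, the bit positions it touches.
cnd = [np.flatnonzero(row) for row in H]
# Bit-node domain: for each bit, the checks it participates in.
bnd = [np.flatnonzero(col) for col in H.T]

# Messages mirror the compressed tables, so check-to-bit and bit-to-check
# updates walk dense memory that fits in cache.
check_to_bit = [np.zeros(len(idx)) for idx in cnd]
bit_to_check = [np.zeros(len(idx)) for idx in bnd]

# "Domain conversion": find where bit b sits inside check c's row.
def cnd_position(c, b):
    return int(np.where(cnd[c] == b)[0][0])

print(cnd)                 # [array([0, 1, 3]), array([1, 2, 4]), array([0, 4, 5])]
print(cnd_position(1, 4))  # bit 4 is entry 2 of check 1
```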

7.
The operation of a ferroelectric DRAM (dynamic random access memory) cell for nonvolatile RAM (NVRAM) applications is described. Because polarization reversal only occurs during nonvolatile store/recall operations and not during read/write operations, ferroelectric fatigue is not a serious endurance problem. For a 3-V power supply, the worst-case effective silicon dioxide thickness of the unoptimized lead zirconate titanate film studied is less than 17 Å. The resistivity and endurance properties of ferroelectric films can be optimized by modifying the composition of the film. This cell can be the basis of a very-high-density NVRAM with practically no read/write cycle limit and at least 10^10 nonvolatile store/recall cycles.

8.
To protect sensitive data outsourced to a cloud server, storing data in encrypted form has become popular. However, it is not easy to locate the desired ciphertext efficiently, especially among the large volume of ciphertext stored on a cloud server. Moreover, some data owners do not want users who attempt to decrypt to learn the sensitive access structure of the ciphertext, for business or privacy reasons. In addition, user attribute revocation and key updating are important issues that affect the application of ciphertext-policy attribute-based encryption (CP-ABE) in cloud storage systems. To overcome these problems, we present a searchable CP-ABE scheme with attribute revocation, in which access structures are partially hidden so that receivers cannot extract sensitive information from the ciphertext. The security of our scheme can be reduced to the decisional bilinear Diffie–Hellman (DBDH) assumption and the decisional linear (DL) assumption. Copyright © 2015 John Wiley & Sons, Ltd.

9.
Cooperative caching is an important technique to support pervasive Internet access. In order to ensure valid data access, the cache consistency must be maintained properly. However, this problem has not been sufficiently studied in mobile computing environments, especially those with ad hoc networks. There are two essential issues in cache consistency maintenance: consistency control initiation and data update propagation. Consistency control initiation not only decides the cache consistency provided to the users, but also impacts the consistency maintenance cost. This issue becomes more challenging in asynchronous and fully distributed ad hoc networks. To this end, we propose the predictive consistency control initiation (PCCI) algorithm, which adaptively initiates consistency control based on its online predictions of forthcoming data updates and cache queries. In order to efficiently propagate data updates through multi-hop wireless connections, the hierarchical data update propagation (HDUP) algorithm is proposed. Theoretical analysis shows that cooperation among the caching nodes facilitates data update propagation. Extensive simulations are conducted to evaluate performance of both PCCI and HDUP. Evaluation results show that PCCI cost-effectively initiates consistency control even when faced with dynamic changes in data update rate, cache query rate, node speed, and number of caching nodes. The evaluation results also show that HDUP saves cost for data update propagation by up to 66%. Copyright © 2009 John Wiley & Sons, Ltd.
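PCCI's actual predictors and decision rule are defined in the paper; the Python sketch below only illustrates the general idea of initiating consistency control from online estimates of the next update and next cache query times. Everything here (the exponential-average predictor, the threshold rule) is an assumption for illustration.

```python
# Hedged sketch of predictive consistency-control initiation: act on
# online estimates of update and query inter-arrival times.  Not PCCI's
# actual rule.

class Predictor:
    """Exponentially weighted estimate of an inter-arrival time."""
    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.estimate = None
        self.last_event = None

    def observe(self, t):
        if self.last_event is not None:
            gap = t - self.last_event
            self.estimate = gap if self.estimate is None else \
                (1 - self.alpha) * self.estimate + self.alpha * gap
        self.last_event = t

    def next_expected(self):
        return None if self.estimate is None else self.last_event + self.estimate

def should_initiate(update_pred, query_pred):
    next_update = update_pred.next_expected()
    next_query = query_pred.next_expected()
    if next_query is None:
        return False      # nobody is expected to read this item soon
    if next_update is None:
        return True       # reads happen but updates are rare: refresh now
    # Refresh only if a query is expected before the next update would
    # make the refreshed copy stale again.
    return next_query < next_update
```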

10.
Access control is one of the fundamental security mechanisms of IT systems. Most existing access control schemes rely on a centralized party to manage and enforce access control policies. As blockchain technologies, especially permissioned networks, find more applicability beyond cryptocurrencies in enterprise solutions, it is expected that the security requirements will increase. Therefore, it is necessary to develop an access control system that works in a decentralized environment without compromising the unique features of a blockchain. A straightforward method to support access control is to deploy a firewall in front of the enterprise blockchain application. However, this approach does not take advantage of the desirable features of blockchain. To address these concerns, we propose a novel blockchain-based access control scheme that preserves decentralization for access control-related operations. The newly proposed system also provides the capability to protect users' privacy by leveraging ring signatures. We implement a prototype of the scheme using Hyperledger Fabric and assess its performance to show that it is practical for real-world applications.

11.
An efficient cryptography mechanism should enforce an access control policy over the encrypted data to provide flexible, fine-grained, and secure data access control for secure sharing of data in cloud storage. To make a secure cloud data sharing solution, we propose a ciphertext-policy attribute-based proxy re-encryption scheme. In the proposed scheme, we design an efficient fine-grained revocation mechanism, which enables not only efficient attribute-level revocation but also efficient policy-level revocation to achieve backward secrecy and forward secrecy. Moreover, we use a multiauthority key attribute center in the key generation phase to overcome the single-point performance bottleneck problem and the key escrow problem. By formal security analysis, we illustrate that our proposed scheme achieves confidentiality, secure key distribution, multiple collusions resistance, and policy- or attribute-revocation security. By comprehensive performance and implementation analysis, we illustrate that our proposed scheme improves the practical efficiency of storage, computation cost, and communication cost compared to the other related schemes.

12.
In an attempt to make network managers' lives easier, we present M3Omon, a system architecture that helps to develop monitoring applications and perform network diagnosis. M3Omon behaves as an intermediate layer between the traffic and the monitoring applications that provides advanced features, high performance, and low cost. These advanced features come from a multi-granular and multi-purpose approach to the monitoring problem. Multi-granular monitoring supports tasks that use traffic aggregates to identify an event and then require flow records, packet data, or both to understand it and, eventually, take appropriate countermeasures. M3Omon provides a simple API to access traffic simultaneously at several granularities, i.e., packet-level, flow-level, and aggregate statistics. The multi-purpose design of M3Omon allows not only running tasks in parallel that target different traffic-related purposes (e.g., traffic classification and intrusion detection) but also sharing granularities between applications, e.g., several concurrent applications fed from the flow records that M3Omon provides. Finally, the low cost comes from off-the-shelf systems (a combination of open-source software and commodity hardware), and the high performance is achieved thanks to modifications in the standard NIC driver, low-level hardware interaction, efficient memory management, and programming optimization. Copyright © 2014 John Wiley & Sons, Ltd.
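The abstract does not show the M3Omon API itself; the sketch below is a hypothetical Python illustration of what a multi-granular interface can look like, with a single ingest path feeding packet-level subscribers, flow records, and aggregate counters. All class and field names are assumptions, not the real API.

```python
# Hypothetical multi-granular monitoring interface (not the real M3Omon
# API): the same captured packet stream feeds packet-level consumers,
# flow records keyed by 5-tuple, and aggregate counters, so concurrent
# applications can each pick the granularity they need.
from collections import defaultdict

class MultiGranularMonitor:
    def __init__(self):
        self.packet_subscribers = []
        self.flows = defaultdict(lambda: {"packets": 0, "bytes": 0})
        self.aggregate = {"packets": 0, "bytes": 0}

    def subscribe_packets(self, callback):
        self.packet_subscribers.append(callback)

    def ingest(self, pkt):
        # pkt: dict with src, dst, sport, dport, proto, length
        for cb in self.packet_subscribers:          # packet granularity
            cb(pkt)
        key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
        flow = self.flows[key]                      # flow granularity
        flow["packets"] += 1
        flow["bytes"] += pkt["length"]
        self.aggregate["packets"] += 1              # aggregate granularity
        self.aggregate["bytes"] += pkt["length"]

if __name__ == "__main__":
    mon = MultiGranularMonitor()
    mon.subscribe_packets(lambda p: None)           # e.g., an IDS hook
    mon.ingest({"src": "10.0.0.1", "dst": "10.0.0.2",
                "sport": 443, "dport": 51234, "proto": "tcp", "length": 1500})
    print(mon.aggregate, len(mon.flows))
```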

13.
Cloud storage has become the prevailing storage model in the modern age. Cloud-based electronic health record (EHR) systems have brought great convenience to health care. When a user visits a doctor for treatment, the doctor may need to access historical health records generated at other medical institutions. We therefore present a secure EHR searching scheme based on conjunctive keyword search with proxy re-encryption to realize data sharing between different medical institutions. First, we propose a framework for health data sharing among multiple medical institutions based on cloud storage. We employ public key encryption with conjunctive keyword search to encrypt the original data and store it in the cloud, ensuring data security while retaining searchability. Furthermore, we adopt an identity-based access control mechanism and a proxy re-encryption scheme to guarantee the legitimacy of access and the privacy of the original data. Overall, our work achieves authentication, keyword privacy, and privacy preservation. Moreover, the performance evaluation shows that the scheme achieves high computational efficiency.

14.
A large number of new data-consuming applications are emerging, and many of them involve mobile users. In the next generation of wireless communication systems, device-to-device (D2D) communication is introduced as a new paradigm to offload the increasing traffic to the user equipment. Before traffic transmission, the D2D discovery and access procedure is the first important step that needs to be completed. In this paper, our goal is to design a device discovery and access scheme for fifth-generation cellular networks. We first present two types of device discovery and access procedures. Then we provide a performance analysis based on a Markov process model. In addition, we present numerical simulations on the Vienna MATLAB platform. The simulation results demonstrate the viability of the proposed scheme. Copyright © 2015 John Wiley & Sons, Ltd.

15.
Caching frequently accessed data objects at the local buffer of a mobile user (MU) has been found to be very effective in improving information availability in mobile wireless environments. Several mechanisms have been proposed in the literature to address the challenging problem of cache consistency in cellular wireless networks. However, these mechanisms are limited to single-cell systems. In this paper, we develop a novel Dynamic Scalable Asynchronous Cache Consistency Scheme (DSACCS) that can adaptively maintain mobile data objects globally or locally in multi-cell systems, depending on which choice has the minimum consistency maintenance cost. The cost function is derived by taking into account each data object's update frequency, the MUs' access patterns and roaming frequency, and the number of cells and MUs in the system. Extensive simulation studies demonstrate that DSACCS outperforms three existing cache strategies extended to multi-cell environments: homogeneous invalidation reports (IRs), inhomogeneous IRs without roaming check, and inhomogeneous IRs with roaming check. Finally, an enhanced variant of DSACCS, called DSACCS-G, is proposed, which groups cells to facilitate effective cache consistency maintenance in multi-cell systems.
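The DSACCS cost function is derived in the paper; the Python sketch below is only a placeholder illustration of the decision it drives, comparing a rough "global maintenance" cost against a rough "local maintenance" cost built from the factors the abstract lists (update rate, roaming rate, and cell/MU counts). The formulas and numbers are assumptions, not the paper's model.

```python
# Hedged sketch: pick, per data object, between global and local
# consistency maintenance by comparing placeholder cost estimates.

def global_cost(update_rate, num_cells, broadcast_cost):
    # Every update triggers an invalidation broadcast in every cell.
    return update_rate * num_cells * broadcast_cost

def local_cost(update_rate, roaming_rate, num_mus, uplink_check_cost):
    # Updates are handled inside one cell, but each roaming MU must
    # re-validate its cached copies on arriving in a new cell.
    return update_rate * uplink_check_cost + \
           roaming_rate * num_mus * uplink_check_cost

def maintain_globally(obj):
    return global_cost(obj["update_rate"], obj["num_cells"],
                       obj["broadcast_cost"]) <= \
           local_cost(obj["update_rate"], obj["roaming_rate"],
                      obj["num_mus"], obj["uplink_check_cost"])

if __name__ == "__main__":
    print(maintain_globally({"update_rate": 0.1, "num_cells": 20,
                             "broadcast_cost": 1.0, "roaming_rate": 2.0,
                             "num_mus": 500, "uplink_check_cost": 0.5}))
```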

16.
In this paper, we investigate an incentive edge caching mechanism for an internet of vehicles (IoV) system based on the paradigm of software-defined networking (SDN). We start by proposing a distributed SDN-based IoV architecture. Then, based on this architecture, we focus on the economic side of caching by considering a competitive cache-enabler market composed of one content provider (CP) and multiple mobile network operators (MNOs). Each MNO manages a set of cache-enabled small base stations (SBSs). The CP incites the MNOs to store its popular contents in the cache-enabled SBSs with the highest access probability to enhance the satisfaction of its users. By leasing their cache-enabled SBSs, the MNOs aim to make more monetary profit. We formulate the interaction between the CP and the MNOs as a Stackelberg game, where the CP acts first as the leader by announcing the quantity of popular content it wishes to cache and by fixing the caching popularity threshold, i.e., the minimum access probability below which a content cannot be cached. The MNOs then act as followers, responding with the content quantity they accept to cache and the corresponding caching price. A noncooperative subgame is formulated to model the competition between the followers over the CP's limited content quantity. We analyze the leader's and the followers' optimization problems, and we prove the existence of a Stackelberg equilibrium (SE). Simulation results show that our game-based incentive caching model achieves optimal utilities and outperforms other incentive caching mechanisms with monopoly cache-enablers, while improving user satisfaction by 30% and reducing the caching cost.

17.
Nowadays, security and data access control are among the major concerns in cloud storage, especially in the medical field. Therefore, a security-aware mechanism and ontology-based data access control (SA-ODAC) scheme has been developed to improve security and access control in cloud computing. The model proposed in this research work is based on two operational methods, namely, the secure awareness technique (SAT) and ontology-based data access control (ODAC). The SAT technique provides security for medical data in cloud computing, based on encryption, splitting and adding of files, and decryption. The ODAC ontology is introduced to prevent unauthorized persons from accessing data in storage and to create owner and administrator rules that govern data access, thereby improving security and restricting access. To manage the keys of the SAT technique, a secret sharing scheme is introduced in the proposed framework. The algorithm is implemented in MATLAB, and its performance is evaluated in terms of delay, encryption time, and ontology processing time, and compared with role-based access control (RBAC), context-aware RBAC, context-aware task RBAC, and security analyses of the Advanced Encryption Standard and Data Encryption Standard. Ultimately, the proposed data access control and security scheme, SA-ODAC, achieves better performance and outperforms the conventional techniques.

18.
Inorganic phase change memories (PCMs) have attracted substantial attention as a next-generation storage node, due to their high level of performance, reliability, and scalability. To integrate the PCM on plastic substrates, the reset power should be minimized to avoid thermal degradation of polymers and adjacent cells. Additionally, flexible phase change random access memory remains unsolved due to the absence of the optimal transfer method and the selection device. Here, an Mo-based interfacial physical lift-off transfer method is introduced to realize a crossbar-structured flexible PCM array, which employs a Schottky diode (SD) selection device and a conductive filament PCM storage node. A 32 × 32 parallel array of 1 SD-1 CFPCM, which utilizes a Ni filament as a nanoheater for low-power phase transition, is physically exfoliated from the glass substrate at the face-centered cubic/body-centered cubic interface within the sacrificial Mo layer. First-principles density functional theory calculations are utilized to understand the mechanism of the Mo-based exfoliation phenomena and the observed metastable Mo phase. The flexible 1 SD-1 CFPCM shows reliable operation (e.g., a large resistance ratio of 17, excellent endurance over 100 cycles, and long retention over 10^4 s) with excellent flexibility. Furthermore, random access operation is confirmed by addressing tests of the characters "KAIST."

19.
Uploading and downloading content has recently become one of the major reasons for the growth of Internet traffic volume. With the increasing popularity of social networking tools and their video upload/download applications, as well as connectivity enhancements in wireless networks, it has become second nature for mobile users to access on-demand content on the go. Urban hot spots, usually implemented via wireless relays, answer the bandwidth needs of those users. On the other hand, the same popular content is usually acquired by a large number of users at different times, and fetching it from the original content source each and every time makes inefficient use of network resources. In-network caching provides a solution to this problem by bringing content closer to the users. Although in-network caching has previously been studied from latency and transport energy minimization perspectives, energy-efficient schemes to prolong user equipment lifetime have not been considered. To address this problem, we propose the cache-at-relay (CAR) scheme, which utilizes wireless relays for in-network caching of popular content with content access and caching energy minimization objectives. CAR consists of three integer linear programming models, namely, select relay, place content, and place relay, which respectively solve the content access energy minimization problem, the joint minimization of content access and caching energy, and the joint minimization of content access energy and relay deployment cost. We show that place relay significantly reduces the content access energy consumption of user equipments, while place content provides a compromise between the content access and caching energy budgets of the network. Copyright © 2015 John Wiley & Sons, Ltd.
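The paper formulates select relay, place content, and place relay as integer linear programs; the Python sketch below brute-forces a toy instance of the simplest of these ideas, choosing where to cache a single content so that total content access energy is minimized. The data layout and capacity parameter are illustrative assumptions.

```python
# Toy version of the "select relay" idea (the paper uses an integer
# linear program; here we brute-force a tiny instance).  Given per-user
# access energy to each relay and to the content source, pick which
# relay(s) should cache a content so total access energy is minimal.
import itertools

def best_relay(access_energy, source_energy, cache_capacity=1):
    """
    access_energy[u][r]: energy for user u to fetch the content from relay r
    source_energy[u]:    energy for user u to fetch it from the origin
    """
    users = range(len(source_energy))
    relays = range(len(access_energy[0]))
    best = (None, sum(source_energy))            # option: cache nowhere
    for chosen in itertools.combinations(relays, cache_capacity):
        total = sum(min(min(access_energy[u][r] for r in chosen),
                        source_energy[u]) for u in users)
        if total < best[1]:
            best = (chosen, total)
    return best

if __name__ == "__main__":
    access = [[1, 5], [2, 4], [6, 1]]            # 3 users x 2 relays
    source = [8, 8, 8]
    print(best_relay(access, source))            # ((0,), 9): caching at relay 0 wins
```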

20.
A class of applications, such as home energy management and control and utility data acquisition, is emerging in which smart meters, sensors, and appliances are networked together for intelligent management and coordination. Such applications rely on low-data-rate communication of monitoring and control information at large scale. For the underlying networking infrastructure to facilitate the real-time and intermittent packet traffic expected, random access-based protocols are regarded as suitable medium access control solutions. A key challenge is that random access protocols are prone to throughput degradation as the number of contending nodes grows, as expected with the infrastructures involved. In addition, some packets require a higher degree of criticality/priority than the rest. With this background, this paper analytically determines the criterion for throughput-optimal operation in a network based on a low-rate carrier sense multiple access protocol. Furthermore, ways to provide priority-wise access differentiation at arbitrary proportions, without a negative impact on the achievable throughput, are incorporated within a binary exponential backoff based collision avoidance scheme. Discrete-event simulations are performed to validate the accuracy of the approximations made in the analysis.
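To illustrate priority-wise access differentiation (not the paper's analytical criterion), here is a small slotted CSMA/CA simulation in Python where a high-priority class uses a smaller initial contention window than a low-priority class; all parameter values are assumptions.

```python
# Illustrative slotted CSMA/CA with binary exponential backoff: a smaller
# initial contention window gives the high-priority class a larger share
# of successful transmissions.  This does NOT reproduce the paper's
# throughput-optimal criterion; all parameters are assumed.
import random

def simulate(n_high=5, n_low=20, cw_high=8, cw_low=32,
             cw_max=1024, slots=200_000, seed=1):
    random.seed(seed)
    nodes = ([{"prio": "high", "cw0": cw_high} for _ in range(n_high)] +
             [{"prio": "low", "cw0": cw_low} for _ in range(n_low)])
    for n in nodes:
        n["cw"] = n["cw0"]
        n["backoff"] = random.randrange(n["cw"])
    wins = {"high": 0, "low": 0}
    for _ in range(slots):
        ready = [i for i, n in enumerate(nodes) if n["backoff"] == 0]
        # Nodes still counting down decrement their backoff this slot.
        for i, n in enumerate(nodes):
            if i not in ready:
                n["backoff"] -= 1
        if len(ready) == 1:                       # exactly one sender: success
            n = nodes[ready[0]]
            wins[n["prio"]] += 1
            n["cw"] = n["cw0"]                    # reset contention window
            n["backoff"] = random.randrange(n["cw"])
        elif len(ready) > 1:                      # collision: double the window
            for i in ready:
                n = nodes[i]
                n["cw"] = min(2 * n["cw"], cw_max)
                n["backoff"] = random.randrange(n["cw"])
    return wins

if __name__ == "__main__":
    # The 5 high-priority nodes typically win a disproportionate share.
    print(simulate())
```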
