Similar Documents
20 similar documents found
1.
Although the performance of a magnetic recording read channel can be improved by employing advanced iterative signal detection and coding techniques, the method nevertheless tends to incur significant silicon area and energy consumption overhead. Motivated by the recent significant improvement of high-density embedded dynamic random access memory (eDRAM) towards high manufacturability at low cost, we explored the potential of integrating eDRAM in read channel integrated circuits (ICs) to minimize the silicon area and energy consumption cost incurred by iterative signal detection and coding. Because of the memory-intensive nature of iterative signal detection and coding algorithms, the silicon cost can be reduced in a straightforward manner by directly replacing conventional SRAM with eDRAM. However, reducing the energy consumption is less trivial. In this paper, we present two techniques that trade eDRAM storage capacity for reduced energy consumption in the iterative signal detection and coding datapath. We demonstrate eDRAM's energy-saving potential by designing a representative iterative read channel at the 65 nm technology node. Simulation shows that we can eliminate over 99.99% of the post-processing computation for dominant error event detection and achieve up to a 67% reduction in decoding energy consumption.

2.
With the rapid development of information technology, blockchain technology has also advanced quickly. When blocks are verified in a blockchain network, verifying all transactions on chain causes on-chain data to accumulate, creating storage problems and challenging data security; this puts enormous pressure on blocks and leads to very low block communication efficiency. The traditional blockchain system uses a Merkle tree to store data: although it can verify data integrity and correctness, the proofs are large and cannot be verified in batches. Large proofs greatly reduce verification efficiency, causing end-to-end communication delays and seriously affecting the stability, efficiency, and security of the blockchain system. To solve this problem, this paper proposes replacing the Merkle tree with polynomial commitments, which exploit the properties of polynomials to reduce proof size and communication cost. By combining aggregated proofs with smart contracts, the verification efficiency of blocks is improved and the communication pressure on nodes is reduced.
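For readers unfamiliar with the baseline this abstract argues against, the sketch below (illustrative only, not from the paper) builds a plain Merkle tree over a batch of transactions and shows that each transaction needs its own log2(n)-hash proof, which is exactly the per-item overhead that a polynomial commitment with aggregated proofs is meant to avoid.

```python
# Minimal sketch (not from the paper): a plain Merkle tree, the baseline this
# work replaces. It shows why per-transaction proofs cost log2(n) hashes each
# and cannot be merged into one constant-size batch proof.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_levels(leaves):
    """Return all tree levels, bottom (leaf hashes) to top (root)."""
    level = [h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                      # duplicate last node if odd
            level = level + [level[-1]]
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Authentication path for one leaf: one sibling hash per level."""
    path = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        path.append(level[index ^ 1])           # sibling shares the same parent
        index //= 2
    return path

def verify(root, leaf, index, path):
    node = h(leaf)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

txs = [f"tx-{i}".encode() for i in range(1024)]
levels = build_levels(txs)
root = levels[-1][0]
proof = prove(levels, 5)
print(verify(root, txs[5], 5, proof))           # True
print(len(proof), "hashes per transaction")     # 10 = log2(1024), per item
```

In contrast, a polynomial commitment commits to the whole batch at once and can open many positions with a single constant-size proof, which is the saving the abstract targets.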

3.
In the digital era, the electronic medical record (EMR) has become the main way for hospitals to store patients’ medical data. Traditional centralized medical systems and semi-trusted cloud storage struggle to achieve a dynamic balance between privacy protection and data sharing. Blockchain storage capacity is limited, and single-blockchain schemes have poor scalability and low throughput. To address these issues, we propose a secure and efficient medical data storage and sharing scheme based on a double blockchain. In our scheme, we encrypt the original EMR and store it in the cloud. The storage blockchain stores the index of the complete EMR, and the sharing blockchain stores the index of the shared part of the EMR. Users with different attributes can request different parts from different blockchains according to their own permissions. Experiments show that combining cloud storage with blockchain not only solves the problem of limited blockchain storage capacity but also greatly reduces the risk of leaking the original EMR. Content Extraction Signatures (CES) combined with the double-blockchain design separate the private part of the original EMR from the shared part. Symmetric encryption combined with Ciphertext-Policy Attribute-Based Encryption (CP-ABE) not only ensures the safe storage of data in the cloud but also keeps data updates consistent and convenient, avoiding redundant backups. Security analysis and performance analysis verify the feasibility and effectiveness of our scheme.
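A minimal sketch of the storage split described above, using plain symmetric encryption (Fernet from the Python `cryptography` package) as a stand-in: the EMR ciphertext goes to the cloud, and only a compact index record would go on the storage blockchain. The CP-ABE key management, CES signing, and the sharing blockchain are not modeled; all names here are hypothetical.

```python
# Minimal sketch, not the paper's implementation: symmetrically encrypt an EMR,
# push the ciphertext to off-chain (cloud) storage, and keep only a small index
# record (ciphertext digest + storage locator) for the blockchain. CP-ABE key
# distribution and Content Extraction Signatures are out of scope here.
import hashlib
import json
from cryptography.fernet import Fernet   # pip install cryptography

cloud = {}          # hypothetical cloud object store: locator -> ciphertext

def store_emr(emr: dict, key: bytes) -> dict:
    ciphertext = Fernet(key).encrypt(json.dumps(emr).encode())
    locator = hashlib.sha256(ciphertext).hexdigest()
    cloud[locator] = ciphertext
    # only this compact record would be written to the storage blockchain
    return {"locator": locator, "digest": locator, "size": len(ciphertext)}

def load_emr(index_record: dict, key: bytes) -> dict:
    ciphertext = cloud[index_record["locator"]]
    assert hashlib.sha256(ciphertext).hexdigest() == index_record["digest"]
    return json.loads(Fernet(key).decrypt(ciphertext))

key = Fernet.generate_key()
record = store_emr({"patient": "P-001", "diagnosis": "..."}, key)
print(load_emr(record, key)["patient"])    # P-001
```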

4.
贾丹, 尤飞, 张庆立. 《包装工程》 (Packaging Engineering), 2018, 39(1): 190-195
Objective: To address the limited information-embedding capacity of black-and-white QR codes and the need for large-capacity information storage, a direct capacity-expansion technique for QR codes is proposed. Methods: A black-and-white QR code has coding modules of only two colors, each storing 1 bit of data; a color QR code has coding modules of 2^n colors, each storing n bits of data, expanding the QR code's storage capacity directly by a factor of n. Based on the HSV color space model, a triplet color-matching model is used to assign colors to the coding modules, reducing aliasing between colored modules. Image contrast enhancement and the K-Means clustering algorithm are used to handle color deviation of the colored coding modules. Results: Analysis of the data-storage performance of a color QR code with 16-color coding modules shows that, for the same amount of data, the color QR code uses a lower version and a smaller symbol than a black-and-white QR code, achieving a direct expansion of QR code storage capacity. Conclusion: The direct capacity-expansion technique multiplies QR code storage capacity; the K-Means clustering algorithm greatly reduces aliasing between color QR code modules and significantly improves decoding accuracy.
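A minimal sketch (synthetic data, assumed 16-color palette) of the K-Means step described above: sampled module colors with deviation are clustered back onto the palette so that each module still decodes to 4 bits. The HSV triplet color-matching model and contrast enhancement are not reproduced.

```python
# Minimal sketch (synthetic data, not the paper's code): cluster sampled module
# colours into a 16-colour palette with K-Means, so each module decodes to
# 4 bits even when printing/capture has shifted its colour.
import numpy as np
from sklearn.cluster import KMeans   # pip install scikit-learn

rng = np.random.default_rng(0)

# Hypothetical 16-colour palette in HSV-like coordinates: 4 bits per module.
palette = np.stack([np.linspace(0, 1, 16, endpoint=False),
                    np.full(16, 0.9), np.full(16, 0.9)], axis=1)

# Simulated captured modules: true palette colours plus colour deviation.
true_bits = rng.integers(0, 16, size=2000)
observed = palette[true_bits] + rng.normal(0, 0.01, size=(2000, 3))

km = KMeans(n_clusters=16, n_init=10, random_state=0).fit(observed)

# Map each cluster centre to the nearest palette entry, then decode modules.
centre_to_symbol = np.argmin(
    np.linalg.norm(km.cluster_centers_[:, None, :] - palette[None], axis=2), axis=1)
decoded = centre_to_symbol[km.labels_]
print("module decode accuracy:", (decoded == true_bits).mean())
```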

5.
At some nominal recording density, the read signal in digital magnetic recording resembles a Class IV partial response (PR4) signal and, hence, may be equalized to the PR4 shape with relatively little noise enhancement. When coding is added, for a fixed user density, the recording density must increase as a result of coding overhead, and the read signal will resemble PR4 to a lesser extent. Equalization to PR4 in this case will produce excessive noise enhancement. Thus, coding overhead (or rate) must be selected for optimum tradeoff between code strength and noise enhancement. Toward this end, we provide results for high-rate concatenated codes, assuming a Lorentzian recording channel model. In addition to examining optimal code rates, we compare parallel and serial concatenated code performance on the PR4 channel
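For context, the PR4 target referred to above is (1 - D)(1 + D) = 1 - D^2, so an ideally equalized, noise-free readback sample is y_k = a_k - a_{k-2}, a three-level signal for ±1 data. A short illustrative sketch:

```python
# Minimal sketch of the PR4 (Class IV partial response) target 1 - D^2:
# an ideally equalized, noise-free sample is y_k = a_k - a_{k-2}, which takes
# values in {-2, 0, +2} for +/-1 recorded data.
import numpy as np

rng = np.random.default_rng(1)
a = 2 * rng.integers(0, 2, size=22) - 1       # NRZ data levels +/-1

# PR4 target (1 - D)(1 + D) = 1 - D^2: y_k = a_k - a_{k-2}
y = a[2:] - a[:-2]

print("data     :", a[2:])
print("PR4 ideal:", y)                         # three-level: {-2, 0, +2}
print("levels   :", sorted(set(y.tolist())))
```

As the abstract notes, once coding overhead raises the recording density, the physical readback departs further from this target, and forcing it back to PR4 amplifies noise.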

6.
This paper examines the drivers of blockchain technology adoption and carbon performance using the technology-organization-environment (TOE) theory as the basis for a technology adoption framework. Blockchain technology has passed the proof-testing phase and is attracting early adopters who can benefit from it. Manufacturing firms that had adopted blockchain technology and voluntarily reported their carbon emissions were targeted as the sample, and survey data were collected from manufacturing firms registered with the Association of Malaysian Manufacturers. Weak top-management support and a lack of technological competence were the main barriers to blockchain adoption among manufacturing firms. The results indicate that firms did not achieve low-carbon performance, and that a lack of competitive pressure and of the technical competence needed to undertake blockchain technology were contributing factors. No evidence linked early adoption of blockchain technology with low-carbon performance. The study recommends that firms take the initiative to record energy consumption, engage in the transfer of carbon credits, and monitor carbon performance using reliable technology to improve business transparency and sustainability.

7.
Reduced-state trellis detection with decision feedback is widely used to reduce the energy consumption of trellis detectors, particularly soft-output trellis detectors, which are energy-hungry by nature. However, the decision feedback tends to lengthen the circuit critical path and, more importantly, makes it difficult to apply well-proven high-speed trellis detector design techniques such as bit-level pipelining. This paper presents a method, referred to as quasi-reduced-state trellis detection, to tackle such speed bottlenecks. The basic idea is simply to obviate the use of decision feedback by mapping only the data storage block of the trellis detector onto a reduced-state trellis while keeping the trellis state metric computation on the original full-state trellis. This makes sense because the data storage block tends to dominate the overall energy consumption, while the decision feedback arises from the reduced-state trellis metric computation. It is therefore intuitive that such quasi-reduced-state detectors may largely retain the energy-saving potential of reduced-state trellis detection without being subject to decision-feedback-induced speed bottlenecks. We demonstrate the effectiveness of the proposed design method by using soft-output Viterbi algorithm (SOVA) detection for a magnetic recording read channel as a test vehicle.

8.
Blockchain technology merges with the Internet of Things (IoT) to address security and privacy issues. However, conventional blockchain suffers from scalability problems because of its linear structure, which increases storage overhead, and existing intrusion detection handles attack severity poorly, degrading performance. To overcome these issues, we propose the MZWB (Multi-Zone-Wise Blockchain) model. First, all IoT nodes in the network prove their legitimacy using the Enhanced Blowfish Algorithm (EBA), which considers several metrics. The nodes deemed legitimate are then organized and the network managed using a Bayesian Directed Acyclic Graph (B-DAG), again based on several metrics. Intrusion detection is performed in two tiers. In the first tier, a Deep Convolutional Neural Network (DCNN) analyzes data packets by extracting packet-flow features and classifies them as normal, malicious, or suspicious. In the second tier, suspicious packets are classified as normal or malicious using a Generative Adversarial Network (GAN). Finally, attack scenario reconstruction (ASR) is performed to reduce attack severity: Improved Monkey Optimization (IMO) discovers attack paths based on several metrics, and a graph-cut algorithm reconstructs the attack scenario. The MZWB method is simulated on the UNSW-NB15 and BoT-IoT datasets using Network Simulator NS-3.26. Compared with previous works on performance metrics such as energy consumption, storage overhead, accuracy, response time, attack detection rate, precision, recall, and F-measure, the simulation results show that the proposed MZWB method achieves higher performance than existing works.

9.
Distributed storage stores data on multiple devices or servers to improve data security. However, amid today's explosive growth of network data, traditional distributed storage schemes face severe challenges such as insufficient performance, data tampering, and data loss. Distributed storage schemes based on blockchain have been proposed to improve the security and efficiency of traditional distributed storage, and this paper makes the following improvements to that approach. We first analyze the problems faced by distributed storage and then propose a new distributed storage blockchain scheme built on a sharded blockchain. The proposed scheme partitions the network and its nodes by means of blockchain sharding, which improves the efficiency of data verification between nodes. In addition, we use polynomial commitments to construct a new verifiable secret sharing scheme called PolyVSS. This new scheme is one of the foundations of our improved distributed storage blockchain scheme. Compared with previous schemes, it does not require a trusted third party and offers new features such as homomorphism and batch opening, further improving the security of VSS. Experimental comparisons show that the proposed scheme significantly reduces storage and communication costs.
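A minimal sketch of the polynomial secret sharing that verifiable schemes such as PolyVSS build on; the polynomial-commitment step that makes shares publicly verifiable, and the homomorphic/batch-opening features, are omitted, and the field modulus is a toy choice.

```python
# Minimal sketch of polynomial (Shamir) secret sharing, the base layer under
# verifiable secret sharing schemes such as PolyVSS. The commitment step that
# makes shares verifiable is omitted; the prime modulus is a toy value.
import random

P = 2**127 - 1          # toy prime modulus for the finite field

def make_shares(secret: int, t: int, n: int):
    """Split `secret` into n shares; any t of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = make_shares(secret=123456789, t=3, n=5)
print(reconstruct(shares[:3]))        # 123456789 from any 3 of the 5 shares
print(reconstruct(shares[2:]))        # 123456789
```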

10.
To address the need to reduce the ever-increasing static power consumption of on-chip caches, a low-static-power instruction cache structure based on the farthest block pair, FBPC, is proposed. Since instruction accesses exhibit strong temporal and spatial locality, instructions located far from the currently accessed address are unlikely to be accessed in the near future. Exploiting this access characteristic, FBPC modifies the memory banks to automatically identify the block pair farthest from the current access and puts those blocks into sleep mode to reduce static power. Simulations on 11 SPEC2000 benchmarks show that the scheme effectively reduces the static power consumption of the instruction cache.
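A minimal behavioural sketch of the farthest-block-pair idea (not the paper's hardware design): after each fetch, the two resident blocks whose addresses are farthest from the current fetch address are switched into sleep (drowsy) mode.

```python
# Minimal behavioural sketch (not the paper's hardware): after each instruction
# fetch, put the two resident cache blocks farthest from the current fetch
# address into a low-leakage sleep mode.
BLOCK = 64   # bytes per cache block (assumed)

def farthest_pair_sleep(resident_blocks, fetch_addr):
    """Return the set of block addresses to switch into sleep mode."""
    current = (fetch_addr // BLOCK) * BLOCK
    ranked = sorted(resident_blocks, key=lambda b: abs(b - current), reverse=True)
    return set(ranked[:2])             # the two farthest blocks sleep

resident = [0x1000, 0x1040, 0x1080, 0x4000, 0x8000]
trace = [0x1004, 0x1044, 0x1084]       # hypothetical fetch addresses

for pc in trace:
    sleeping = farthest_pair_sleep(resident, pc)
    awake = [hex(b) for b in resident if b not in sleeping]
    print(f"fetch {hex(pc)}: sleep={sorted(map(hex, sleeping))}, awake={awake}")
```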

11.
To improve the recognition rate of QR (quick response) codes under high-speed motion, factors that may affect the recognition rate were systematically analyzed and screened, and a central composite inscribed (CCI) experimental design based on the response surface method was carried out. Conveyor speed, reading distance, reading angle, and illuminance were each divided into five levels, and static and dynamic experiments as well as parameter optimization were performed. The static and dynamic results show that reading distance and the interaction between reading angle and illuminance significantly affect the static recognition rate (p < 0.05), while conveyor speed, reading distance, the interaction between reading distance and reading angle, and the interaction between reading angle and illuminance significantly affect the dynamic recognition rate. In theory, when the conveyor speed is 80.6 m/min, the reading distance is 144.3 mm, the reading angle is -25.6°, and the illuminance is 683.0 lx, a maximum optimized recognition rate of 98.4% is obtained. Experimental verification shows that parameter optimization of the influencing factors can effectively improve the QR code recognition rate.

12.
We describe a low-complexity noniterative detector for magnetic and optical multitrack high-density data storage. The detector is based on the M-algorithm architecture. It performs limited breadth-first detection on the equivalent one-dimensional (1-D) channel obtained by column-by-column helical unwinding of the two-dimensional (2-D) channel. The detection performance is optimized by the use of a specific 2-D minimum-phase factorization of the channel impulse response by the equalizer. An optimized path selection scheme maintains the complexity close to practical 1-D Viterbi. This scheme is based on an approximate path metric parallel sort network, taking advantage of the metrics' residual ordering from previous M-algorithm iterations. Such an architecture approaches maximum-likelihood performance on a high areal density uncoded channel for a practical number of retained paths M and a bit error rate (BER) below 10^-4. The performance of the system is evaluated when the channel is encoded with a multi-parity check (MPC) block inner code and an outer interleaved Reed-Solomon code. The inner code enhances the minimum error distance of the equalized channel and reduces the correct path losses of the M-algorithm path buffer. The decoding is performed noniteratively. Here, we compare the performance of the system to the soft iterative joint decoding of the read channels for data pages encoded with low-density parity check (LDPC) codes with comparable rates and block length. We provide an approximation of the 2-D channel capacity to further assess the performance of the system.
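A minimal sketch of M-algorithm (breadth-first, keep-best-M) detection on a short hypothetical 1-D ISI channel, standing in for the unwound 2-D channel; the parallel sort network and 2-D minimum-phase equalization described above are not modeled.

```python
# Minimal sketch of M-algorithm (breadth-first) detection on a short 1-D ISI
# channel. At every step only the M best partial paths survive, instead of the
# full Viterbi state set. Channel and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
h = np.array([1.0, 0.5, 0.2])                 # hypothetical ISI response
symbols = 2 * rng.integers(0, 2, 200) - 1     # +/-1 data
noisy = np.convolve(symbols, h)[: len(symbols)] + rng.normal(0, 0.2, len(symbols))

def m_algorithm(y, h, M=4):
    paths = [((), 0.0)]                        # (decided symbols, path metric)
    for k, yk in enumerate(y):
        candidates = []
        for seq, metric in paths:
            for s in (-1, +1):
                new = seq + (s,)
                # expected sample from the last len(h) decided symbols
                est = sum(h[i] * new[k - i] for i in range(len(h)) if k - i >= 0)
                candidates.append((new, metric + (yk - est) ** 2))
        candidates.sort(key=lambda c: c[1])    # keep the M best partial paths
        paths = candidates[:M]
    return np.array(paths[0][0])

detected = m_algorithm(noisy, h, M=4)
print("symbol errors:", int(np.sum(detected != symbols)))
```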

13.
We present the way in which we have constructed an implementation of a sparse Cholesky factorization based on a hypermatrix data structure. This data structure is a storage scheme that produces a recursive 2D partitioning of a sparse matrix and can be useful for some large sparse matrices. Subblocks are stored as dense matrices, so efficient BLAS3 routines can be used. However, since we are dealing with sparse matrices, some zeros may be stored in those dense blocks; the overhead introduced by operating on zeros can become large and considerably degrade performance. We present the ways in which we deal with this overhead. Using matrices from different areas (interior point methods for linear programming and finite element methods), we evaluate our sequential in-core hypermatrix sparse Cholesky implementation, compare its performance with several other codes, and analyze the results. In spite of using a simple fixed-size partitioning of the matrix, our code obtains competitive performance. This work was supported by the Ministerio de Educación y Ciencia of Spain (TIN2004-07739-C02-01).
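A minimal sketch of why dense subblocks pay off: a right-looking blocked Cholesky in which each step is a dense diagonal-block factorization, a triangular panel solve, and a matrix-matrix (BLAS3-style) trailing update. The recursive 2D hypermatrix partitioning and the handling of stored zeros are not reproduced; this runs on a dense SPD test matrix.

```python
# Minimal sketch of a blocked (right-looking) Cholesky: each step is a dense
# factorization, a triangular solve, and a rank-b matrix-matrix update, i.e.
# the BLAS3-style kernels that dense subblock storage enables. The hypermatrix
# partitioning and sparsity handling from the paper are not reproduced.
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def blocked_cholesky(A, b=64):
    """Lower-triangular L with A = L @ L.T, processed in b-by-b blocks."""
    A = A.copy()
    n = A.shape[0]
    L = np.zeros_like(A)
    for k in range(0, n, b):
        e = min(k + b, n)
        L[k:e, k:e] = cholesky(A[k:e, k:e], lower=True)          # diagonal block
        if e < n:
            # panel solve: L21 = A21 * L11^{-T}
            L[e:, k:e] = solve_triangular(L[k:e, k:e], A[e:, k:e].T,
                                          lower=True).T
            # trailing update: A22 -= L21 * L21^T   (matrix-matrix multiply)
            A[e:, e:] -= L[e:, k:e] @ L[e:, k:e].T
    return L

rng = np.random.default_rng(3)
M = rng.standard_normal((300, 300))
A = M @ M.T + 300 * np.eye(300)                                   # SPD test matrix
L = blocked_cholesky(A, b=64)
print(np.allclose(L @ L.T, A))                                    # True
```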

14.
A smart contract is a digital program that encodes a transaction protocol (the rules of a contract) on the consensus architecture of a blockchain. Smart contracts on blockchains have gained enormous attention in scientific and practical applications; the smart contract is the central element that lets a blockchain serve as a platform beyond the cryptocurrency domain. The development of blockchain technology, with a focus on smart contracts, has advanced significantly in recent years. However, research on the smart contract concept still has weaknesses in implementation areas that rely on a decentralized network sharing an identical state. This paper extensively reviews smart contracts in terms of multi-criteria analysis, challenges, and motivations. Applying blockchain in multi-criteria analysis is needed to make interaction between users more efficient by supporting trustworthy information exchange, detecting malfunctions, helping users with performance issues, reaching consensus, deploying distributed solutions, and allocating plans, tasks, and joint missions. Smart contracts that incorporate decision-making, planning, and execution improve implementation in terms of efficiency, sustainability, and management. Furthermore, addressing uncertainty and supply chain performance increases users’ confidence in adopting new solutions to problems in smart contracts. Evaluation covers code analysis and performance, while evaluation of development-stage performance remains an open area.

15.
To improve IP-SAN performance, a cache system for the iSCSI environment was designed and implemented on top of the Tsinghua University Mass Storage Network System (TH-MSNS) storage area network. The system uses server memory as the data cache, completes part of the read/write commands directly in memory, and synchronizes data to the remote network disk when storage requests are idle. Comparative performance tests show that the cache system can considerably improve the performance of the IP-SAN storage system, increasing storage bandwidth and reducing operation latency.
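A minimal behavioural sketch (hypothetical names, not TH-MSNS code) of the described cache: block reads and writes complete in server memory where possible, and dirty blocks are written back to the remote disk when the request stream is idle.

```python
# Minimal behavioural sketch (hypothetical, not TH-MSNS code): a server-memory
# block cache in front of a remote iSCSI disk. Reads and writes complete in RAM
# when possible; dirty blocks are written back when the request stream is idle.
class MemoryBlockCache:
    def __init__(self, remote_disk: dict):
        self.remote = remote_disk      # stand-in for the remote network disk
        self.cache = {}                # block number -> data held in server RAM
        self.dirty = set()             # blocks not yet synced to the remote disk

    def write(self, block_no: int, data: bytes):
        self.cache[block_no] = data    # completes in memory, no network round trip
        self.dirty.add(block_no)

    def read(self, block_no: int) -> bytes:
        if block_no not in self.cache:                 # miss: fetch from remote
            self.cache[block_no] = self.remote[block_no]
        return self.cache[block_no]

    def flush_when_idle(self):
        """Called when no storage requests are pending."""
        for block_no in sorted(self.dirty):
            self.remote[block_no] = self.cache[block_no]
        self.dirty.clear()

remote = {0: b"old"}
c = MemoryBlockCache(remote)
c.write(0, b"new")
print(c.read(0), remote[0])    # b'new' b'old'  (write not yet on the remote disk)
c.flush_when_idle()
print(remote[0])               # b'new'
```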

16.
As sensor network application scenarios continue to multiply, the security problems inherent in this approach have become obstacles to its wide application, even as the field attracts increasing attention from industry and academia. The blockchain is based on a distributed network, and its block data are tamper-resistant and traceable, so it is naturally suited to solving the security problems of sensor networks. Accordingly, this paper first analyzes the security risks associated with data storage in sensor networks and then proposes using blockchain technology to secure that data storage. In a traditional blockchain, the data layer uses a Merkle hash tree to store data; however, the Merkle hash tree cannot provide non-membership proofs, which leaves it unable to resist attacks by malicious nodes in the network. To solve this problem, this paper uses a cryptographic accumulator instead of a Merkle hash tree to provide both membership and non-membership proofs. Moreover, the number of elements in existing accumulators is bounded, which cannot meet the blockchain’s growth requirements. This paper therefore proposes a new type of unbounded accumulator, gives its definition and security model, constructs an unbounded accumulator scheme from bilinear pairings, and analyzes its performance.
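To illustrate only the accumulator interface that replaces the Merkle hash tree, the toy sketch below uses an RSA-style accumulator with membership witnesses; the paper's construction is different (bilinear pairings, unbounded, with non-membership proofs as well), and the parameters here are insecure placeholders.

```python
# Toy illustration of the accumulator interface only. The paper builds an
# unbounded accumulator from bilinear pairings with member and non-member
# proofs; this sketch uses a tiny RSA-style accumulator with membership
# witnesses only, and the modulus is an insecure toy value.
from math import prod

P, Q = 61, 53                  # toy primes; a real scheme hides the factorisation
N = P * Q
G = 3                          # base element

elements = [5, 7, 11, 13]      # accumulated set (elements represented as primes)

def accumulate(xs):
    return pow(G, prod(xs), N)

def membership_witness(xs, x):
    """Witness = accumulator computed over every element except x."""
    return pow(G, prod(v for v in xs if v != x), N)

def verify_membership(acc, x, witness):
    return pow(witness, x, N) == acc

acc = accumulate(elements)
w = membership_witness(elements, 7)
print(verify_membership(acc, 7, w))     # True:  7 is in the set
print(verify_membership(acc, 17, w))    # False: 17 is not accumulated
```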

17.
The write-in, read-out, and emission spectral distributions of SrS(Eu,Sm) and the dependence of the output luminescence on input light intensity are reported. Image storage and addition/subtraction experiments were carried out using an ~100 μm thick SrS(Eu,Sm) film. The results show that this material holds great promise for optical information storage and processing.

18.
Reading can be divided into intensive reading and speed reading; the reading patterns revealed by reading research can guide reading instruction and improve students' reading efficiency. In this study, participants' eye movements were recorded while they read Chinese and English texts using both reading types. The results showed that: (1) intensive reading was higher than speed reading in fixation duration, fixation count, and gaze count, and lower in fixation frequency, gaze frequency, saccade distance, and pupil diameter, with reading type showing significant main effects on gaze count and gaze frequency; (2) reading English was higher than reading Chinese in fixation duration, fixation count, fixation frequency, and gaze count, and lower in gaze frequency and saccade distance. These results suggest that speed reading is more efficient than intensive reading, that speed reading is an effective and mature reading type, and that it works better for reading Chinese.

19.
In distributed storage systems, file access efficiency has an important impact on the real-time nature of information forensics. Prefetching is a popular approach to improving file access efficiency: it fetches data before it is needed, according to the file access pattern, which reduces I/O waiting time and increases system concurrency. However, a prefetching model needs to mine the degree of association between files to ensure prefetching accuracy, and with massive numbers of small files the sheer volume poses a challenge to the efficiency and accuracy of this relevance mining. In this paper, we propose a prefetching model for massive numbers of files based on an LSTM neural network with a cache transaction strategy to improve file access efficiency. First, we propose a file clustering algorithm based on temporal and spatial locality to reduce computational complexity. Second, we define cache transactions according to the co-occurrence of files in the cache, instead of using time-offset-distance-based methods, to extract file block features accurately. Finally, we propose a file access prediction algorithm based on an LSTM neural network that predicts the files most likely to be accessed. Experiments show that, compared with traditional LRU and plain grouping methods, the proposed model notably increases the cache hit rate and effectively reduces I/O wait time.
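A minimal PyTorch sketch (synthetic access trace) of the prediction core: embed recent file IDs, run an LSTM over the window, and score which file is likely to be accessed next so a prefetcher could load it ahead of time. The locality-based clustering and cache-transaction feature extraction from the paper are omitted.

```python
# Minimal sketch (synthetic trace, not the paper's model): an LSTM that reads a
# window of recent file IDs and predicts the next file to access, which a
# prefetcher would then load ahead of time.
import torch
import torch.nn as nn

torch.manual_seed(0)
NUM_FILES, WINDOW = 50, 8

class NextFileLSTM(nn.Module):
    def __init__(self, num_files, emb=32, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(num_files, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_files)

    def forward(self, x):                     # x: (batch, WINDOW) of file IDs
        out, _ = self.lstm(self.embed(x))
        return self.head(out[:, -1])          # scores for the next file ID

# Synthetic access trace with a repeating pattern the model can learn.
trace = torch.tensor([(3 * i + 1) % NUM_FILES for i in range(2000)])
X = torch.stack([trace[i:i + WINDOW] for i in range(len(trace) - WINDOW)])
y = trace[WINDOW:]

model = NextFileLSTM(NUM_FILES)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                         # tiny full-batch training loop
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()

with torch.no_grad():
    pred = model(X[:1]).argmax(dim=1)
print("predicted next file:", int(pred), "actual:", int(y[0]))
```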

20.
Provable Data Possession (PDP) schemes have long been proposed to check the integrity of data stored in a cloud service without downloading it. However, with the emergence of networks consisting of low-performance devices, such as the Internet of Things, two obstacles still hinder the application of PDP schemes. The first is the heavy computational overhead of generating tags for data blocks, which is essential for setting up any PDP scheme. The second is how to resist collusion attacks from third-party auditors and any other entities participating in the auditing. In this paper, we propose a novel blockchain-based lightweight PDP scheme for low-performance devices, with an instance deployed on a cloud server. We design a secure outsourced tag-generation method for low-performance devices that enables a “hash-sign-switch” two-phase tag computation. With this method, users with low-performance devices can employ third-party auditors to compute the modular exponentiations that account for the largest portion of the computational overhead in tag generation, without leaking their data content. Chaincodes in the blockchain network ensure the correctness of this outsourcing and prevent collusion attacks. The security analysis and performance evaluation show that our scheme is both secure and efficient.
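A small timing sketch of why outsourcing helps: per block, PDP tag generation involves a large modular exponentiation, which dwarfs the hashing a low-performance device could keep local. Parameters are placeholders; the hash-sign-switch protocol and the chaincode checks themselves are not modeled.

```python
# Minimal sketch (placeholder parameters, not the paper's protocol): compare the
# per-block cost of the modular exponentiation that dominates PDP tag generation
# with the hashing a low-performance device could keep local. The hash-sign-switch
# outsourcing and the chaincode correctness checks are not modelled.
import hashlib
import secrets
import time

BITS = 2048
p = (1 << BITS) - 1          # placeholder modulus (NOT a safe choice; toy only)
g = 65537

blocks = [secrets.token_bytes(4096) for _ in range(50)]

t0 = time.perf_counter()
digests = [int.from_bytes(hashlib.sha256(b).digest(), "big") for b in blocks]
t_hash = time.perf_counter() - t0

t0 = time.perf_counter()
# one large modular exponentiation per block tag -- the part worth outsourcing
tags = [pow(g, d, p) for d in digests]
t_exp = time.perf_counter() - t0

print(f"hashing  50 blocks: {t_hash * 1e3:.1f} ms")
print(f"exponent 50 blocks: {t_exp * 1e3:.1f} ms")
print(f"modular exponentiation is ~{t_exp / t_hash:.0f}x the hashing cost")
```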
