Similar Documents (20 results)
1.
To address the display-speed problem in map browsing, this paper proposes a new map browsing method. Based on the idea of memory-mapped drawing, a small graphics support environment is designed; it uses memory buffers to process data quickly and output it in real time, and combined with API-level output functions it achieves fast map display. The method overcomes the slow rendering of traditional map browsing and reaches a high display speed without relying on external graphics support software, meeting the goal of fast map browsing. The application of the memory-mapped drawing idea is the main contribution of the paper.

2.
An XML-based document processing model   Cited by: 1 (self-citations: 0, by others: 1)
During the development of a military software system, inconsistent document formats and poor document structure made document management, database storage, and resource sharing inconvenient. To solve these problems, an XML-based document processing model is presented. Using XML and Oracle XML DB technology, documents are structured and mapped into a relational database, while the semantic constraints of the document schema and the fidelity of the documents are preserved during the mapping. The structure and implementation techniques of the model are described in detail, and an application example is given.
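The XML-to-relational mapping step can be pictured with a minimal sketch. The element names, table layout, and use of SQLite as the relational store are assumptions for illustration only; the paper's Oracle XML DB mapping and its schema-constraint preservation are not reproduced here.

```python
# Minimal sketch: map a structured XML document into relational rows.
# Element/table names are hypothetical; SQLite stands in for a generic
# relational store in place of the paper's Oracle XML DB.
import sqlite3
import xml.etree.ElementTree as ET

doc = ET.fromstring("""
<document id="D-001" title="Design Spec">
  <section no="1" heading="Scope">System scope ...</section>
  <section no="2" heading="Interfaces">Interface list ...</section>
</document>""")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE document(id TEXT PRIMARY KEY, title TEXT)")
conn.execute("CREATE TABLE section(doc_id TEXT, no INTEGER, heading TEXT, body TEXT)")

conn.execute("INSERT INTO document VALUES (?, ?)",
             (doc.get("id"), doc.get("title")))
for sec in doc.findall("section"):
    conn.execute("INSERT INTO section VALUES (?, ?, ?, ?)",
                 (doc.get("id"), int(sec.get("no")), sec.get("heading"), sec.text))

print(conn.execute("SELECT heading FROM section WHERE doc_id='D-001'").fetchall())
```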

3.
To overcome the uncertainty of storing semi-structured data, and based on the idea that the structural information of semi-structured data can be described by its schema, a dynamic-tree storage model is proposed. The Object Exchange Model (OEM) is traversed depth-first to find all maximal simple path expressions; these path expressions are then added one by one into a dynamic tree using an accumulating-count rule, producing the storage model. Finally, the model is mapped onto relational tables, so that semi-structured data can be stored and queried in a relational database. A village land-approval processing system is used as an example to demonstrate the effectiveness of the storage model.
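A rough sketch of the path-extraction step, assuming the OEM graph is given as a nested Python dict and simplifying the accumulating-count rule to plain occurrence counting; object identity and cycles are ignored.

```python
# Rough sketch: depth-first extraction of simple path expressions from an
# OEM-like object graph (here a nested dict), with occurrence counts.
# Data and labels are hypothetical.
from collections import Counter

def path_expressions(obj, prefix=""):
    if not isinstance(obj, dict):          # leaf value: emit the path so far
        yield prefix
        return
    for label, child in obj.items():
        children = child if isinstance(child, list) else [child]
        for c in children:
            yield from path_expressions(c, f"{prefix}.{label}" if prefix else label)

oem = {"parcel": [{"owner": {"name": "Zhang", "id": "P01"}, "area": 320},
                  {"owner": {"name": "Li"}, "area": 410, "status": "approved"}]}

counts = Counter(path_expressions(oem))
for path, n in counts.items():             # rows of the path/count relation
    print(path, n)
```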

4.
In the smart-city domain, as information technology continues to deepen, the massive data produced by the various information systems keeps growing, and semantic interoperability among these multi-source, heterogeneous data has become one of the important problems to be solved in developing urban intelligent applications. Building a knowledge graph is a common means of achieving such semantic interoperability. Once the ontology model of the graph has been established, the construction and evolution of the graph's instance model become the key technology supporting graph-based applications. The primary problem in graph construction is therefore how to add knowledge instances from continuously updated data sources into the knowledge graph as automatically as possible. Existing knowledge-instance generation tools offer limited support for data import: users must perform complex preprocessing to convert the source data into an import format supported by the platform, which makes preprocessing expensive and unable to keep up with continuously updated and growing data. Since the data produced by information systems in the smart-city domain is mostly structured or semi-structured, this paper proposes a method for constructing and evolving graph instance models through incremental mapping between the ontology model and data schemas; it generates instances from structured or semi-structured data and, as the data is updated, grows and evolves the graph instance model. The method combines machine recommendation with human-machine collaborative interaction: it extracts knowledge according to the characteristics of different data sources and maps it correctly onto the concept entities of the ontology model, realizing incremental expansion of the domain knowledge-graph instance model, and it supports continuous evolution of the instance model through entity alignment, relation completion, and related techniques. The proposed method has been …
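As a loose illustration of mapping structured records onto ontology concepts, the sketch below turns rows into triples through a hand-written column-to-property table and deduplicates them into an existing instance set. The concept, property, and column names are hypothetical, and none of the paper's recommendation or human-in-the-loop machinery is modelled.

```python
# Illustrative sketch only: rows from a structured source become
# (subject, predicate, object) triples, and a set union stands in for
# incremental instance expansion. All names below are hypothetical.
concept = "Camera"                                   # assumed target ontology concept
prop_mapping = {"cam_id": "hasId",                   # source column -> ontology property
                "road": "locatedOn",
                "vendor": "suppliedBy"}

def rows_to_triples(rows):
    for row in rows:
        subject = f"{concept}/{row['cam_id']}"       # instance IRI from the key column
        yield (subject, "rdf:type", concept)
        for col, prop in prop_mapping.items():
            if col in row:
                yield (subject, prop, row[col])

graph = set()                                        # existing instance model
batch = [{"cam_id": "C101", "road": "Main St"},
         {"cam_id": "C101", "vendor": "ACME"}]       # later update for the same entity

graph.update(rows_to_triples(batch))                 # incremental, duplicate-free growth
for t in sorted(graph):
    print(t)
```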

5.
A real-time big-data processing model with reception separated from processing   Cited by: 1 (self-citations: 0, by others: 1)
In big-data processing, the system must process data with very high efficiency. To meet the demand for real-time, efficient, and stable processing of big data, a data processing model that separates reception from processing is proposed. The model consists of a data receiving unit, an in-memory database, a raw-data dispatching unit, data processing units, a processed-data dispatching unit, and data merging units. The receiving unit receives and consolidates structured and unstructured data and puts each complete record into the in-memory database; the raw-data dispatching unit detects and fetches data from the in-memory database and distributes it to the processing units according to a load-balancing algorithm for massive data; the processing units process the data and put the results back into the in-memory database; the processed-data dispatching unit then fetches the processed data from the in-memory database and distributes it to the merging units using the same load-balancing algorithm. Experiments show that with this model the system maintains very high processing efficiency.
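A minimal sketch of the reception/processing separation using in-process queues; a plain dict stands in for the in-memory database, and the dispatch policy is simple FIFO hand-off rather than the paper's load-balancing algorithm.

```python
# Minimal sketch of the "reception separated from processing" pipeline.
# Dicts stand in for the in-memory database; queues connect the units.
import queue, threading

raw_db, processed = {}, {}                 # stand-ins for the in-memory database
raw_q, done_q = queue.Queue(), queue.Queue()

def receiver(records):                     # receiving unit: ingest complete records
    for i, rec in enumerate(records):
        raw_db[i] = rec
        raw_q.put(i)
    raw_q.put(None)                        # end-of-stream marker

def processor():                           # processing unit: consume, transform, store
    while (key := raw_q.get()) is not None:
        processed[key] = raw_db[key].upper()
        done_q.put(key)
    done_q.put(None)

def merger(result):                        # merging unit: collect processed records
    while (key := done_q.get()) is not None:
        result.append(processed[key])

result = []
threads = [threading.Thread(target=receiver, args=(["a", "b", "c"],)),
           threading.Thread(target=processor),
           threading.Thread(target=merger, args=(result,))]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(result)                              # ['A', 'B', 'C']
```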

6.
Converting convolution into matrix multiplication is an efficient implementation on FPGAs, but existing conversion methods cannot adapt dynamically to different convolution parameters, which limits the parallelism of the convolution computation. A new mapping model with dynamic remainder handling is proposed. The mapping model contains three sub-models: a feature-map mapping model, a weight mapping model, and an output mapping model. The feature-map mapping model converts the feature maps into a feature matrix, the weight mapping model converts the weights into a weight matrix, the two matrices are fed through a multiply-accumulate (MAC) array to produce the convolution results, and the output mapping model stores the results back to memory. During convolution, the number of output channels is usually not an integer multiple of the number of rows of the MAC array; the three sub-models dynamically adjust the mapping according to the resulting remainder, improving the utilization of the MAC array. Experiments show that the dynamic remainder-handling mapping model can increase the parallelism applied to the remainder by a factor of up to the kernel size, giving the whole accelerator higher practical throughput and energy efficiency.
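The conversion of convolution into matrix multiplication can be illustrated with the standard im2col transformation; the sketch below does exactly that in NumPy and makes no attempt to model the MAC array or the dynamic remainder handling.

```python
# Sketch of the standard im2col transformation underlying the
# convolution-as-matrix-multiply approach (hardware details not modelled).
import numpy as np

def im2col(x, k):                       # x: (C, H, W) input, k: kernel size
    C, H, W = x.shape
    out_h, out_w = H - k + 1, W - k + 1
    cols = np.empty((C * k * k, out_h * out_w))
    for i in range(out_h):
        for j in range(out_w):
            cols[:, i * out_w + j] = x[:, i:i + k, j:j + k].ravel()
    return cols, out_h, out_w

x = np.random.rand(3, 8, 8)             # 3 input channels
w = np.random.rand(5, 3, 3, 3)          # 5 output channels, 3x3 kernels

cols, oh, ow = im2col(x, 3)
weight_matrix = w.reshape(5, -1)        # weight mapping: one row per output channel
y = (weight_matrix @ cols).reshape(5, oh, ow)   # convolution as one matrix multiply
print(y.shape)                          # (5, 6, 6)
```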

7.
Schema discovery for semi-structured, hierarchical data   Cited by: 10 (self-citations: 0, by others: 10)
Web data resources and data integration have given rise to the problem of semi-structured data, that is, self-describing data whose structure is implicit or irregular. Because there is no schema independent of the data, it is difficult to query and browse such data effectively, and schema discovery for semi-structured data becomes a fundamental step toward solving this problem. The algorithm proposed in this paper can quickly and effectively discover regular structure in semi-structured hierarchical data. It uses top-down generation combined with an effective pruning strategy to build a schema tree from semi-structured hierarchical data expressed in the OEM model.

8.
With the arrival of the big-data era, the variety and scale of structured data keep increasing, yet there is so far little research on registering structured data. To solve this problem, a registration engine for structured data is designed using the ideas and techniques of data architecture (DA) together with a data registration center (DRC); a unified registration standard and registration method for structured data are proposed, enabling automatic collection and registration of structured data. Experiments and analysis show that the registration engine can accurately and efficiently collect structured-data registration information and write it into the DRC, providing a solution for registering commonly used databases at home and abroad, and laying a solid foundation for managing and applying structured-data registration information in the DRC.

9.
This paper analyzes project data backup in software project management systems and proposes SDB-Method, a project backup method based on semi-structured data. By analyzing the system's data model, the method establishes a mapping between the relational data model and the semi-structured OEM (Object Exchange Model), enabling bidirectional conversion between relational and semi-structured data and thereby solving project import and export. The method has been applied in the project management system SoftPM, where it supports multi-branch development, iterative development, and migration of software projects, effectively solving the project backup problem of software project management systems.

10.
Unaligned-address problems frequently arise in dynamic binary translation from CISC architectures to RISC architectures. Taking dynamic binary translation from I386 to the Alpha platform as an example, this paper studies misalignment in memory mapping and misalignment in data access, and proposes an improved memory-mapping method together with a scheme that handles unaligned memory accesses at the intermediate-representation level, effectively solving these problems. Experiments verify that the method is correct and reasonably efficient.
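As a generic illustration of the underlying problem (not the paper's IR-level scheme), the sketch below emulates an unaligned 32-bit little-endian load out of aligned word loads, the kind of expansion a translator must emit for a target without unaligned access.

```python
# Generic illustration: compose an unaligned 32-bit little-endian load from
# two aligned 32-bit loads, as a translator targeting an aligned-only
# architecture would have to do.
def load32_aligned(mem, addr):
    assert addr % 4 == 0
    return int.from_bytes(mem[addr:addr + 4], "little")

def load32_unaligned(mem, addr):
    base, off = addr & ~3, addr & 3
    if off == 0:
        return load32_aligned(mem, base)
    lo = load32_aligned(mem, base)            # aligned word holding the low bytes
    hi = load32_aligned(mem, base + 4)        # aligned word holding the high bytes
    return ((lo >> (8 * off)) | (hi << (8 * (4 - off)))) & 0xFFFFFFFF

mem = bytes(range(16))
assert load32_unaligned(mem, 5) == int.from_bytes(mem[5:9], "little")
print(hex(load32_unaligned(mem, 5)))          # 0x8070605
```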

11.
Non-volatile memory (NVM) offers read/write speeds close to DRAM, so it can replace traditional storage devices and improve storage-engine performance. However, traditional storage engines usually read and write data through the generic block interface, which leads to a long I/O software stack, adds read/write latency in the software layer, and thereby limits the performance advantage of NVM. To address this, based on the Ceph big-data storage system, this paper designs NVMStore, a new storage engine built on NVM. It accesses the storage device through memory mapping and exploits the byte-addressability and data-persistence properties of NVM to optimize the data read/write path, reducing write amplification and software-stack overhead. Experimental results show that, compared with a traditional storage engine running on NVM, NVMStore significantly improves Ceph's small-block read/write performance.
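The memory-mapped access path can be sketched as follows, with an ordinary file standing in for the NVM device and a flush standing in for a persistence barrier; this is not the NVMStore engine, only the byte-addressable write pattern it builds on.

```python
# Sketch of memory-mapped, byte-granular access: a plain file stands in for
# the NVM device. File name and record layout are hypothetical.
import mmap

path = "/tmp/nvm_demo.bin"
with open(path, "wb") as f:
    f.truncate(4096)                          # reserve one page of "device" space

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 4096) as m:
        record = b"object-123:payload"
        m[64:64 + len(record)] = record       # byte-granular in-place update
        m.flush()                             # persist to the backing store

with open(path, "rb") as f:
    f.seek(64)
    print(f.read(len(b"object-123:payload")))  # b'object-123:payload'
```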

12.
The performance and energy efficiency of current systems is influenced by accesses to the memory hierarchy. One important aspect of memory hierarchies is the introduction of different memory access times, depending on the core that requested the transaction, and which cache or main memory bank responded to it. In this context, the locality of the memory accesses plays a key role for the performance and energy efficiency of parallel applications. Accesses to remote caches and NUMA nodes are more expensive than accesses to local ones. With information about the memory access pattern, pages can be migrated to the NUMA nodes that access them (data mapping), and threads that communicate can be migrated to the same node (thread mapping). In this paper, we present LAPT, a hardware-based mechanism to store the memory access pattern of parallel applications in the page table. The operating system uses the detected memory access pattern to perform an optimized thread and data mapping during the execution of the parallel application. Experiments with a wide range of parallel applications (from the NAS and PARSEC Benchmark Suites) on a NUMA machine showed significant performance and energy efficiency improvements of up to 19.2% and 15.7%, respectively (6.7% and 5.3% on average).

13.
Li Jianjiang, Deng Zhaochu, Du Panpan, Lin Jie. The Journal of Supercomputing, 2022, 78(4): 4779-4798

The Sunway TaihuLight is the first supercomputer built entirely with domestic processors in China. On the Sunway TaihuLight, the local data memory (LDM) of each slave core is limited, so data transfers with main memory are frequent during computation and memory access efficiency is low. On the other hand, for many scientific computing programs, solving the storage problem of irregularly accessed data is the key to program optimization. A software cache (SWC) is one of the effective means to address these problems. Based on the characteristics of the Sunway TaihuLight architecture and of irregular accesses, this paper designs and implements a new software cache structure that uses part of the LDM space to emulate a cache; it adopts a new cache address mapping and conflict-resolution scheme to reduce the high data-access and storage overheads of a traditional cache. At the same time, the SWC uses register communication between slave cores to share the cache across the LDMs of different slave cores, increasing its capacity and improving the hit rate. In addition, a double-buffering strategy is adopted to access regular data in batches, hiding the communication overhead between the slave cores and main memory. Test results on the Sunway TaihuLight platform show that the proposed software cache structure can effectively reduce program running time, improve the software cache hit rate, and achieve a good optimization effect.

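A toy direct-mapped software cache along these lines is sketched below; the line size, placement function, and miss handling are simplifications chosen for illustration, not the paper's LDM design or its register-communication sharing.

```python
# Toy direct-mapped software cache: "main memory" lines are cached in a small
# scratchpad standing in for the LDM. Sizes and policy are illustrative only.
LINE, NLINES = 8, 4                              # line size and number of LDM lines

main_memory = list(range(256))                   # stands in for main memory
ldm = {"tags": [None] * NLINES, "lines": [None] * NLINES}
hits = misses = 0

def swc_read(addr):
    global hits, misses
    tag, offset = addr // LINE, addr % LINE
    idx = tag % NLINES                           # direct-mapped placement
    if ldm["tags"][idx] == tag:
        hits += 1
    else:                                        # miss: fetch the whole line
        misses += 1
        base = tag * LINE
        ldm["tags"][idx] = tag
        ldm["lines"][idx] = main_memory[base:base + LINE]
    return ldm["lines"][idx][offset]

for a in [0, 1, 2, 40, 41, 3, 200, 4]:           # mixed regular/irregular accesses
    swc_read(a)
print(f"hits={hits} misses={misses}")            # hits=5 misses=3
```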

14.
Although nonuniform memory access architecture provides better scalability for multicore systems, cores accessing memory on remote nodes take longer than those accessing on local nodes. Remote memory access accompanied by contention for internode interconnection degrades performance. Properly mapping threads to cores and data accessed to their nodes can substantially improve performance and energy efficiency. However, an operating system kernel's load-balancing activity may migrate threads across nodes, which thus messes up the thread mapping. Besides, subsequent data mapping behavior pays for the cost of page migration to reduce remote memory access. Once unsuitable threads are migrated, it is detrimental to system performance. This paper focuses on improving the kernel's internode load balancing on nonuniform memory access systems. We develop a memory-aware kernel mechanism and policies to reduce remote memory access incurred by internode thread migration. The Linux kernel's load balancing mechanism is modified to incorporate selection policies in the internode thread migration, and the kernel is modified to track the amount of memory used by each thread on each node. With this information, well-designed policies can then choose suitable threads for internode migration. The purpose is to avoid migrating a thread that might incur relatively more remote memory access and page migration. The experimental results show that with our mechanism and the proposed selection policies, the system performance is substantially increased when compared with the unmodified Linux kernel that does not consider memory usage and always migrates the first-fit thread in the runqueue that can be migrated to the target central processing unit.
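A memory-aware selection policy of the kind described can be sketched as follows; the per-thread, per-node page counters are hypothetical stand-ins for the bookkeeping the modified kernel would maintain.

```python
# Sketch of a memory-aware selection policy: when the balancer must pull a
# thread to another node, prefer the candidate with the least memory resident
# on its current node, so migration causes the least remote access and page
# migration. The data structures below are hypothetical stand-ins.
def pick_thread_for_migration(candidates, mem_pages, src_node):
    """candidates: runnable thread ids; mem_pages[tid][node] -> resident pages."""
    def penalty(tid):
        return mem_pages.get(tid, {}).get(src_node, 0)   # pages left behind on src
    return min(candidates, key=penalty)

mem_pages = {
    "T1": {0: 9000, 1: 100},     # T1 is memory-heavy on node 0: costly to move away
    "T2": {0: 50,   1: 4000},    # T2 keeps almost nothing on node 0: cheap to move
    "T3": {0: 1200, 1: 300},
}
print(pick_thread_for_migration(["T1", "T2", "T3"], mem_pages, src_node=0))  # T2
```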

15.
The main memory access latency can significantly slow down the overall performance of a computer system due to the fact that average cycle time of the main memory is typically a factor of 5–10 times higher than that of a processor. To cope with this problem, in addition to the use of caches, the main memory of a multiprocessor architecture is usually organized into multiple modules or banks. Although such organization enhances memory bandwidth, the amount of data that the multiprocessor can retrieve in the same memory cycle, conflicts due to simultaneous attempts to access the same memory module may reduce the effective bandwidth. Therefore, efficient mapping schemes are required to distribute data in such a way that regular patterns, called templates, of various structures can be retrieved in parallel without memory conflicts. Prior work on data mappings mostly dealt with conflict-free access to templates such as rows, columns, or diagonals of (multidimensional) arrays, and only limited attention has been paid to access templates of nonnumeric structures such as trees. In this paper, we study optimal and balanced mappings for accessing path and subtree templates of trees, where a mapping will be called optimal if it allows conflict-free access to templates with as few memory banks as possible. An optimal mapping will also be called balanced if it distributes as evenly as possible the nodes of the entire tree among the memory banks available. In particular, based on Latin squares, we propose an optimal and balanced mapping for leaf-to-root paths of q-ary trees. Another (recursive) mapping for leaf-to-root paths of binary trees raises interesting combinatorial problems. We also derive an optimal and balanced mapping to access complete t-ary subtrees of complete q-ary trees, where 2 ⩽ t ⩽ q, and an optimal mapping for subtrees of binomial trees.
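The conflict-freedom requirement alone can be illustrated with a naive mapping that assigns each tree level to its own bank, so every leaf-to-root path touches distinct banks; this toy mapping is badly unbalanced, and the paper's Latin-square construction, which also balances the load, is not reproduced here.

```python
# Naive illustration of conflict-freedom only: one bank per tree level makes
# every leaf-to-root path conflict-free (each level appears once per path),
# but it is far from balanced.
height = 3                                        # complete binary tree of height 3
banks = height + 1                                # one bank per level suffices

def bank_of(node):                                # heap-style numbering 1..2^(h+1)-1
    return node.bit_length() - 1                  # depth of node == floor(log2(node))

def leaf_to_root_path(leaf):
    path = []
    while leaf >= 1:
        path.append(leaf)
        leaf //= 2
    return path

for leaf in range(2 ** height, 2 ** (height + 1)):       # all leaves
    path_banks = [bank_of(n) % banks for n in leaf_to_root_path(leaf)]
    assert len(set(path_banks)) == len(path_banks)       # no two nodes share a bank
print("all leaf-to-root paths are conflict-free with", banks, "banks")
```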

16.
To resolve the conflict between memory consumption and access speed in existing report caching methods, a structured-data caching method is proposed that stores row/column-structured report data in blocks in a file. The file is divided into an index area and a data area; a data-blocking algorithm and write operations cache the report data into the file. When report data is read, the index area is used to locate the containing block directly, and the required data is then found quickly within that block, achieving a good balance between memory consumption and access speed.
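The index-area/data-area layout can be sketched as below: rows are written in fixed-size blocks to a data file and a small index records each block's first row and byte offset, so a read seeks straight to the right block. The field layout, block size, and file names are assumptions for illustration.

```python
# Sketch of the index-area / data-area layout. Layout details are hypothetical.
import json, bisect

BLOCK_ROWS = 100
rows = [f"row-{i},valA,valB" for i in range(1000)]     # row/column structured report

index = []                                             # (first_row_id, offset, length)
with open("/tmp/report.dat", "wb") as data:
    for start in range(0, len(rows), BLOCK_ROWS):
        blob = "\n".join(rows[start:start + BLOCK_ROWS]).encode()
        index.append((start, data.tell(), len(blob)))
        data.write(blob)
with open("/tmp/report.idx", "w") as f:                # the index area
    json.dump(index, f)

def read_row(row_id):
    with open("/tmp/report.idx") as f:
        idx = json.load(f)
    pos = bisect.bisect_right([e[0] for e in idx], row_id) - 1
    first, offset, length = idx[pos]
    with open("/tmp/report.dat", "rb") as data:
        data.seek(offset)                              # jump straight to the block
        block = data.read(length).decode().split("\n")
    return block[row_id - first]

print(read_row(437))                                   # row-437,valA,valB
```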

17.
In many scientific applications, array redistribution is usually required to enhance data locality and reduce remote memory access in many parallel programs on distributed memory multicomputers. Since the redistribution is performed at runtime, there is a performance trade-off between the efficiency of the new data decomposition for a subsequent phase of an algorithm and the cost of redistributing data among processors. In this paper, we present a generalized processor mapping technique to minimize the amount of data exchange for BLOCK-CYCLIC(kr) to BLOCK-CYCLIC(r) array redistribution and vice versa. The main idea of the generalized processor mapping technique is first to develop mapping functions for computing a new rank of each destination processor. Based on the mapping functions, a new logical sequence of destination processors can be derived. The new logical processor sequence is then used to minimize the amount of data exchange in a redistribution. The generalized processor mapping technique can handle array redistribution with arbitrary source and destination processor sets and can be applied to multidimensional array redistribution. We present a theoretical model to analyze the performance improvement of the generalized processor mapping technique. To evaluate the performance of the proposed technique, we have implemented the generalized processor mapping technique on an IBM SP2 parallel machine. The experimental results show that the generalized processor mapping technique can provide performance improvement over a wide range of redistribution problems.
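As background for the redistribution problem, the sketch below computes element ownership under BLOCK-CYCLIC distributions and counts how much data stays local when going from BLOCK-CYCLIC(kr) to BLOCK-CYCLIC(r) with an identity rank mapping; the paper's mapping functions that renumber destination ranks to maximize this count are not reproduced.

```python
# Helper sketch: element ownership under BLOCK-CYCLIC(b) over P processors,
# and a count of elements that stay local in a BLOCK-CYCLIC(kr) ->
# BLOCK-CYCLIC(r) redistribution with the identity rank mapping.
def owner(i, block, nprocs):
    return (i // block) % nprocs

P, r, k, N = 4, 2, 3, 96          # 4 processors, target block 2, source block 6

src = [owner(i, k * r, P) for i in range(N)]   # BLOCK-CYCLIC(kr) layout
dst = [owner(i, r, P) for i in range(N)]       # BLOCK-CYCLIC(r) layout

stay_local = sum(s == d for s, d in zip(src, dst))
print(f"{stay_local}/{N} elements need no communication with identity rank mapping")
```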

18.
Current FPGA implementations of affine transformation for real-time video are mostly based on the inverse-mapping principle: after buffering a whole frame, the affine image is obtained by bilinear interpolation over the four neighboring pixels. However, reading and writing the four neighboring pixels requires the FPGA to provide four times the bandwidth of the original image stream, so the FPGA cannot balance storage-resource consumption and data throughput at the same time. To address this, an FPGA implementation of affine transformation with interpolation moved up front is proposed: using a four-corner-point mapping, the bilinear interpolation is computed before the original image is buffered in memory, avoiding the bandwidth multiplication caused by interpolation reading and writing memory directly. Experimental results show that, compared with the inverse-mapping FPGA implementation of affine transformation, this method keeps high data throughput while reducing storage-resource consumption.
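For reference, the inverse-mapping baseline the paper improves on can be sketched in plain NumPy: each output pixel is mapped back to the source and bilinearly interpolated from its four neighbours. No attempt is made to model the FPGA buffering or the proposed four-corner forward mapping.

```python
# Reference sketch of the inverse-mapping baseline: affine warp with bilinear
# interpolation over the four neighbouring pixels.
import numpy as np

def affine_inverse_map(img, M):
    """M is the 2x3 inverse affine matrix mapping output coords to input coords."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            sx, sy = M @ np.array([x, y, 1.0])          # source coordinates
            x0, y0 = int(np.floor(sx)), int(np.floor(sy))
            if 0 <= x0 < w - 1 and 0 <= y0 < h - 1:
                dx, dy = sx - x0, sy - y0                # bilinear weights
                out[y, x] = ((1 - dx) * (1 - dy) * img[y0, x0] +
                             dx * (1 - dy) * img[y0, x0 + 1] +
                             (1 - dx) * dy * img[y0 + 1, x0] +
                             dx * dy * img[y0 + 1, x0 + 1])
    return out

img = np.arange(64, dtype=float).reshape(8, 8)
M = np.array([[0.8, 0.0, 1.0],                           # scale + translate
              [0.0, 0.8, 1.0]])
print(affine_inverse_map(img, M)[2:4, 2:4])
```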

19.
To effectively solve HDFS's low storage efficiency and slow retrieval when facing massive numbers of small files of many types, a storage and access scheme based on the EHDFS architecture is constructed. In the storage phase, an optimization strategy is introduced and a new merged-storage model is built, so that small files fill the Blocks as fully and evenly as possible, improving DataNode space utilization and reducing NameNode memory overhead. In the retrieval phase, the MapFile mapping structure, the index storage location, and …
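A generic illustration of the merged-storage idea (not the EHDFS scheme itself): small files are packed into fixed-size blocks with a simple first-fit-decreasing heuristic, and a per-file index of (block, offset, length) supports later retrieval.

```python
# Generic small-file packing sketch: first-fit-decreasing into fixed-size
# blocks, with a per-file index. Sizes and file names are hypothetical.
BLOCK_SIZE = 128 * 1024 * 1024                    # 128 MB HDFS-style block

def pack(files):
    """files: {name: size_in_bytes}; returns (blocks, index)."""
    blocks, index = [], {}                        # blocks[i] = bytes used so far
    for name, size in sorted(files.items(), key=lambda kv: -kv[1]):
        for i, used in enumerate(blocks):         # first block with enough room
            if used + size <= BLOCK_SIZE:
                index[name] = (i, used, size)
                blocks[i] = used + size
                break
        else:                                     # no room anywhere: open a new block
            index[name] = (len(blocks), 0, size)
            blocks.append(size)
    return blocks, index

files = {f"img_{i}.jpg": (i % 7 + 1) * 3_000_000 for i in range(50)}
blocks, index = pack(files)
print(len(blocks), "blocks;", index["img_3.jpg"])
```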

20.
In order to execute a parallel PDE (partial differential equation) solver on a shared-memory multiprocessor, we have to avoid memory conflicts in accessing multidimensional data grids. A new multicoloring technique is proposed for speeding sparse matrix operations. The new technique enables parallel access of grid-structured data elements in the shared memory without causing conflicts. The coloring scheme is formulated as an algebraic mapping which can be easily implemented with low overhead on commercial multiprocessors. The proposed multicoloring scheme has been tested on an Alliant FX/80 multiprocessor for solving 2D and 3D problems using the CGNR method. Compared to the results reported by Saad (1989) on an identical Alliant system, our results show a factor of 30 times higher performance in Mflops. Multicoloring transforms sparse matrices into ones with a diagonal diagonal block (DDB) structure, enabling parallel LU decomposition in solving PDE problems. The multicoloring technique can also be extended to solve other scientific problems characterized by sparse matrices.
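The basic form of the multicoloring idea can be shown with a red-black (two-colour) sweep over a 2D five-point-stencil grid: points of the same colour do not depend on each other, so each colour can be updated in parallel without conflicts. The paper's algebraic mapping generalizes this to memory-bank assignment with more colours.

```python
# Red-black (two-colour) ordering of a 2D five-point-stencil grid: points of
# one colour have no stencil dependence on each other, so each colour group
# can be updated in parallel. Problem size and boundary values are arbitrary.
import numpy as np

n = 6
u = np.zeros((n, n)); u[0, :] = 1.0               # toy boundary condition

colors = np.add.outer(np.arange(n), np.arange(n)) % 2   # colour(i, j) = (i + j) mod 2

for sweep in range(50):                           # Gauss-Seidel-like sweeps by colour
    for c in (0, 1):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                if colors[i, j] == c:             # all points of one colour: independent
                    u[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j] +
                                      u[i, j - 1] + u[i, j + 1])
print(np.round(u, 3))
```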
