Similar literature
20 similar documents found (search time: 15 ms)
1.
Large-scale tactile sensing applications in Robotics have become the focus of extensive research activities in the past few years, specifically for humanoid platforms. Research products include a variety of fundamentally different robot skin systems. The differences lie in technological (e.g., sensory modes and networking), system-level (e.g., modularity and scalability) and representation (e.g., data structures, coherency and access efficiency) aspects. However, differences within the same robot platform may be present as well. Different robot body parts (e.g., fingertips, forearms and a torso) may be endowed with robot skin that is tailored to meet specific design goals, which leads to local peculiarities as far as technological, system-level and representation solutions are concerned. This variety leads to the issue of designing a software framework able to: (i) provide a unified interface to access information originating from heterogeneous robot skin systems; (ii) ensure portability among different robot skin solutions. In this article, a real-time framework designed to address both these issues is discussed. The presented framework, which is referred to as Skinware, is able to acquire large-scale tactile data from heterogeneous networks in real time and to provide tactile information using abstract data structures for high-level robot behaviours. As a result, tactile-based robot behaviours can be implemented independently of the actual robot skin hardware and body-part-specific features. An extensive validation campaign has been carried out to investigate Skinware's capabilities with respect to real-time requirements, data coherency and data consistency when large-scale tactile information is needed.
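The abstract above describes a middleware that hides heterogeneous skin hardware behind one unified interface. A minimal sketch of that design pattern, with all class and method names invented for illustration (the real Skinware API is not shown in the abstract):

```python
from abc import ABC, abstractmethod

class SkinDriver(ABC):
    """Hardware-specific driver; each skin technology implements this."""
    @abstractmethod
    def read_raw(self):
        """Return raw taxel readings in the driver's native layout."""

class CapacitiveForearmDriver(SkinDriver):
    def read_raw(self):
        # Stand-in for a real bus read (e.g. a CAN or I2C skin network).
        return [0.0] * 64

class SkinMiddleware:
    """Unified access layer: registers heterogeneous drivers and exposes
    one coherent snapshot to high-level, hardware-independent behaviours."""
    def __init__(self):
        self._drivers = {}

    def register(self, body_part, driver):
        self._drivers[body_part] = driver

    def snapshot(self):
        """Acquire one coherent frame from every registered body part."""
        return {part: drv.read_raw() for part, drv in self._drivers.items()}

mw = SkinMiddleware()
mw.register("forearm", CapacitiveForearmDriver())
frame = mw.snapshot()
print(len(frame["forearm"]))  # 64 taxels
```

A behaviour written against `SkinMiddleware.snapshot()` never touches driver code, which is the portability property the abstract claims.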

2.
This paper proposes a massive-data processing model built on a large-scale platform of inexpensive commodity machines. Drawing on the basic ideas of the Map/Reduce computing paradigm and the Bigtable large-scale distributed data storage mechanism, it realizes an economical, data-centric, compute-intensive supercomputing platform. The system takes large-scale business data from the telecommunications sector as its subject of analysis, processing massive datasets of telecom calls and data services so as to provide valuable data-analysis services to operators and ordinary users. The platform is also applicable to the distributed processing of many other kinds of massive data, offering a demonstration of good reference value for other applications.
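The Map/Reduce paradigm the abstract builds on can be illustrated in a few lines. A pure-Python simulation of the map, shuffle, and reduce phases over toy call-detail records (the record layout and names are invented for illustration):

```python
from collections import defaultdict
from itertools import chain

# Toy call-detail records: (subscriber, call duration in seconds).
records = [("alice", 120), ("bob", 30), ("alice", 60), ("bob", 90)]

def map_phase(record):
    """Emit intermediate (key, value) pairs, as a Mapper would."""
    subscriber, duration = record
    yield subscriber, duration

def reduce_phase(key, values):
    """Aggregate all values for one key, as a Reducer would."""
    return key, sum(values)

# Shuffle: group intermediate pairs by key, as the framework does.
grouped = defaultdict(list)
for k, v in chain.from_iterable(map_phase(r) for r in records):
    grouped[k].append(v)

totals = dict(reduce_phase(k, vs) for k, vs in grouped.items())
print(totals)  # {'alice': 180, 'bob': 120}
```

In a real deployment the map and reduce phases run on different machines and the shuffle moves data over the network; the per-key grouping semantics are the same.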

3.
To address the large amount of redundant data in data centers, in particular the storage capacity wasted by backup data, a distributed data deduplication solution based on the Hadoop platform is proposed. By detecting and eliminating redundant data within a given dataset, the scheme significantly reduces the required storage capacity and improves storage-space utilization. Using the two data-management facilities of the Hadoop big-data platform, the Hadoop Distributed File System (HDFS) and the non-relational database HBase, a scalable distributed deduplication storage system is designed and implemented. The MapReduce parallel programming framework performs distributed, parallel deduplication; HDFS stores the deduplicated data; and an index table built in HBase enables efficient data-chunk index lookups. Finally, the system was tested on a dataset of virtual machine image files: the Hadoop-based distributed deduplication system achieves a high deduplication ratio while delivering high throughput and good scalability.
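The core of such a system is chunk fingerprinting against an index. A minimal single-process sketch, with a plain dict standing in for the HBase index table and a list standing in for HDFS chunk storage (chunk size and names are illustrative, not from the paper):

```python
import hashlib

CHUNK_SIZE = 4  # tiny for illustration; real systems use KB-scale chunks

def deduplicate(data, chunk_index, store):
    """Split data into fixed-size chunks and store only unseen chunks.
    chunk_index maps fingerprint -> chunk id (HBase table stand-in);
    store holds the unique chunk payloads (HDFS stand-in).
    Returns the recipe (list of fingerprints) needed for reassembly."""
    recipe = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in chunk_index:          # index lookup
            chunk_index[fp] = len(store)   # new chunk: record and store it
            store.append(chunk)
        recipe.append(fp)
    return recipe

index, store = {}, []
recipe = deduplicate(b"ABCDABCDXYZ!", index, store)
# "ABCD" appears twice but is stored once.
print(len(store), len(recipe))  # 2 unique chunks, 3-chunk recipe
```

Reassembly walks the recipe and fetches each chunk by fingerprint, so the original data survives deduplication losslessly.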

4.
5.
Data warehousing technology offers organizations the potential for much greater exploitation of informational assets. However, the evaluation of potential investments in this technology poses problems for organizations as traditional evaluation methods are constrained when dealing with strategic IT applications. Nevertheless, many organizations are procedurally obliged to use such methods for evaluating data warehousing investments. This paper identifies five problems with using such methods in these circumstances: evaluating intangible benefits; making the relationship between IT and profitability explicit; dealing with the vanishing status quo; dealing with the extended investment time frame; and evaluating infrastructural investments. The authors studied how four organizations in the UK and Ireland attempted to overcome these problems when introducing data warehousing, and propose a framework for evaluating data warehousing investments. This framework consists of a high-level analysis of the economic environment and of the information intensity of the relationship between the organization and its customers. Based on the outcome of this analysis, the authors propose four factors that have to be managed during the evaluation process in order to ensure that the limitations of the traditional evaluation techniques do not adversely affect the evaluation process. These factors are: commitment and sponsorship; the approach to evaluation; the time scale of benefits; and the appraisal techniques used.

6.
As processor performance increases and memory cost decreases, system intelligence continues to move away from the CPU and into peripherals. Storage system designers use this trend toward excess computing power to perform more complex processing and optimizations inside storage devices. To date, such optimizations take place at relatively low levels of the storage protocol. Trends in storage density, mechanics, and electronics eliminate the hardware bottleneck and put pressure on interconnects and hosts to move data more efficiently. We propose using an active disk storage device that combines on-drive processing and memory with software downloadability to allow disks to execute application-level functions directly at the device. Moving portions of an application's processing to a storage device significantly reduces data traffic and leverages the parallelism already present in large systems, dramatically reducing the execution time for many basic data mining tasks.
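The traffic-reduction argument can be made concrete with a toy comparison: shipping every record to the host versus evaluating a selective filter "on the drive" and shipping only matches. All numbers below are illustrative, not the paper's measurements:

```python
# Toy records "on disk": (key, attribute) pairs; 1 in 10 matches.
on_disk = [(i, i % 10) for i in range(1000)]
predicate = lambda rec: rec[1] == 0

# Conventional path: ship every record across the interconnect,
# then filter at the host.
shipped_host = len(on_disk)

# Active-disk path: run the filter where the data lives and
# ship only the matching records.
matches = [r for r in on_disk if predicate(r)]
shipped_active = len(matches)

print(shipped_host, shipped_active)  # 1000 vs 100 records moved
```

With many drives filtering in parallel, the host sees only the union of the (much smaller) match sets, which is where the claimed speedups for selective data mining scans come from.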

7.
Recent advances in computer technologies have made it feasible to provide multimedia services, such as news distribution and entertainment, via high-bandwidth networks. The storage and retrieval of large multimedia objects (e.g., video) becomes a major design issue of the multimedia information system. While most other works on multimedia storage servers assume an on-line disk storage system, we consider a two-tier storage architecture with a robotic tape library as the vast near-line storage and an on-line disk system as the front-line storage. Magnetic tapes are cheaper, more robust, and have a larger capacity; hence, they are more cost effective for large scale storage systems (e.g., video-on-demand (VOD) systems may store tens of thousands of videos). We study in detail the design issues of the tape subsystem and propose some novel tape-scheduling algorithms which give faster response and require less disk buffer space. We also study the disk-striping policy and the data layout on the tape cartridge in order to fully utilize the throughput of the robotic tape system and to minimize the on-line disk storage space.
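The abstract's scheduling algorithms are not given, but the general idea of ordering tape requests to reduce head movement can be sketched with a simple SCAN-like policy (serve everything ahead of the head in one forward pass, then rewind once). The positions and policy here are illustrative assumptions, not the paper's algorithms:

```python
def schedule_requests(head_pos, requests):
    """Order retrieval requests to reduce total tape movement:
    serve all requests ahead of the head in one forward pass,
    then handle the remainder after a single rewind."""
    ahead = sorted(r for r in requests if r >= head_pos)
    behind = sorted(r for r in requests if r < head_pos)
    return ahead + behind

def total_movement(head_pos, order):
    """Sum of absolute head displacements when serving in this order."""
    dist, pos = 0, head_pos
    for r in order:
        dist += abs(r - pos)
        pos = r
    return dist

reqs = [50, 10, 90, 30, 70]          # logical tape positions
fifo = total_movement(20, reqs)
scan = total_movement(20, schedule_requests(20, reqs))
print(fifo, scan)  # 250 vs 150: the SCAN-like order moves the head less
```

Less head movement means shorter response times and, because data arrives sooner, less disk buffer space held per pending request, which matches the benefits the abstract claims.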

8.
Multimedia Tools and Applications - We live in a world where there are a huge number of consumers and producers of multimedia content. In this sea of information, finding the right content is like...

9.
Multimedia Tools and Applications - By efficiently managing and utilizing the city’s multimedia big data, 3D analysis and visualization of the city’s multimedia information can be...

10.
For the K2-tree, a representation of large graph data, a corresponding compression optimization algorithm is proposed. The algorithm renumbers all nodes of the graph using a DFS coding with heuristic rules and adaptively adjusts the parameter K, so that the K2-tree can fully exploit the community structure found in networks and thereby reduce its space cost. A description of the K2-tree optimization algorithm is given, and experiments on a series of real and synthetic networks verify that the optimized algorithm achieves good compression.
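For background, the K2-tree itself partitions the adjacency matrix recursively into K x K submatrices, storing one bit per internal submatrix (1 if it contains any edge) level by level, plus the cells of the last level. A minimal construction sketch for K = 2 (the bitmap names T and L follow common K2-tree notation; the example matrix is invented):

```python
from collections import deque

def build_k2tree(matrix, k=2):
    """Level-order K2-tree construction: T holds the bits of the internal
    levels, L the cells of the last level. n must be a power of k."""
    n = len(matrix)

    def has_edge(r0, c0, size):
        return any(matrix[r][c]
                   for r in range(r0, r0 + size)
                   for c in range(c0, c0 + size))

    T, L = [], []
    queue = deque([(0, 0, n)])
    while queue:
        r0, c0, size = queue.popleft()
        sub = size // k
        for dr in range(k):
            for dc in range(k):
                r, c = r0 + dr * sub, c0 + dc * sub
                if sub == 1:
                    L.append(matrix[r][c])       # last level: raw cells
                elif has_edge(r, c, sub):
                    T.append(1)                  # non-empty submatrix
                    queue.append((r, c, sub))    # expand it one level down
                else:
                    T.append(0)                  # empty submatrix: prune
    return T, L

m = [[0, 1, 0, 0],
     [0, 0, 0, 0],
     [0, 0, 0, 1],
     [0, 0, 1, 0]]
T, L = build_k2tree(m)
print(T, L)  # [1, 0, 0, 1] [0, 1, 0, 0, 0, 1, 1, 0]
```

Because empty submatrices cost a single 0 bit, clustering the 1s, which is what the paper's community-aware node renumbering does, directly shrinks T and L.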

11.
With the arrival of the big-data era, support for Hadoop big-data access in distributed file systems has become a trend. To study a pNFS framework supporting Hadoop big-data access, this paper inserts a pNFS shim layer module between Hadoop and pNFS, implementing the HDFS APIs through which pNFS serves Hadoop big-data access. A write cache and a node-level data-layout awareness mechanism added to the pNFS shim layer optimize system performance. Tests of the proposed framework with Hadoop benchmark programs show write performance improved by more than 45% and read performance by more than 97%, demonstrating that the framework can effectively support Hadoop big-data access.

12.
《电子技术应用》2016,(9):118-121
To address the system complexity, development difficulty, long development cycle, and complicated application of new heterogeneous communication signal processing platforms, a new comprehensive automatic code generation tool is proposed. By automatically generating the platform's framework configuration files, element macro-definition files, hardware driver source-code skeletons, software component source skeletons, and assembly glue code, the tool not only satisfies the platform's real-time, distributed, and reliability requirements but also guarantees consistency between the platform's hardware and software programming, shortening the development cycle and greatly reducing the amount of code that must be written and tested by hand.

13.
Recent Large-Scale Multimedia Retrieval (LSMR) methods seem to rely heavily on analysing large amounts of data on high-performance machines. This paper aims to sound a warning about this research trend. We argue that such methods are useful only for recognising certain primitive meanings, and that knowledge about human interpretation is necessary to derive high-level meanings from primitive ones. We emphasise this by conducting a retrospective survey of machine-based methods, which build classifiers from features, and human-based methods, which exploit user annotation and interaction. Our survey reveals that, by prioritising generality and scalability for large-scale data, recent methods leave out knowledge about human interpretation, whereas classical methods used it fully. We therefore defend the importance of human-machine cooperation which incorporates this knowledge into LSMR. In particular, we define three future directions (cognition-based, ontology-based and adaptive learning) depending on the type of knowledge involved, and suggest exploring each direction by considering its relation to the others.

14.
Knowledge management has become a challenge for almost all e-government applications, where the efficient processing of large amounts of data is still a critical issue. In recent years, semantic techniques have been introduced to improve the fully automatic digitalization of documents, in order to facilitate access to the information embedded in very large document repositories. In this paper, we present a novel model for multimedia digital documents that aims to improve the effectiveness of digitalization activities within an information system supporting e-government organizations. To the best of our knowledge, the proposed model is one of the first attempts to give a single, unified characterization of the multimedia documents managed by e-government applications, in which semantic procedures and multimedia facilities are used to transform unstructured documents into structured information. Furthermore, we define an architecture for managing the multimedia document life cycle, which provides advanced functionalities for information extraction, semantic retrieval, indexing, storage and presentation, together with long-term preservation. Preliminary experiments on an e-health scenario are finally presented and discussed.

15.
A framework for synchronous delivery of time-dependent multimedia data
Multimedia data often have time dependencies that must be satisfied at presentation time. To support a general purpose multimedia information system, these timing relationships must be managed to provide utility to both the data presentation system and the multimedia author. Timing management encompasses specification, data representation, temporal access control, playout scheduling, and run-time intermedia synchronization. In this paper we describe the components of our framework for supporting time-dependent multimedia data encompassing these areas and how they are assembled into a unified system.
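The playout-scheduling component mentioned above can be illustrated with a minimal deadline computation: each media unit gets a playout deadline from the stream's start time and period, and late units are dropped to preserve the timeline. The drop policy and all numbers are illustrative assumptions, not the paper's scheduler:

```python
def playout_schedule(arrivals, start_time, period):
    """Assign each media unit a playout deadline; a unit arriving after
    its deadline is dropped so the presentation timeline is preserved."""
    schedule = []
    for i, arrival in enumerate(arrivals):
        deadline = start_time + i * period
        action = "play" if arrival <= deadline else "drop"
        schedule.append((i, deadline, action))
    return schedule

# Frame arrival times (seconds); 25 fps playout starting at t = 1.0.
arrivals = [0.9, 1.02, 1.10, 1.20]
sched = playout_schedule(arrivals, start_time=1.0, period=0.04)
for unit, deadline, action in sched:
    print(unit, round(deadline, 2), action)
```

Run-time intermedia synchronization then amounts to deriving the deadlines of every stream (audio, video, captions) from one shared clock so their units line up.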

16.
OLAP systems built on data warehouses are currently the main tool for analysing massive multidimensional data. As information technology develops, the scale of massive multidimensional data grows rapidly and its structure becomes increasingly complex, so OLAP system performance degrades severely and can no longer meet users' data-analysis needs. This paper presents new storage and query methods for massive multidimensional data based on the distributed computing system Hadoop. A columnar file format on HDFS, HCFile, is designed, and on top of HCFile a storage scheme for massive multidimensional data is given that improves aggregation efficiency and scales well. In addition, exploiting the hierarchical semantics of multidimensional data, a dimension-hierarchy index is designed, together with a method for performing aggregate computation using the index and MapReduce. Comparative experiments against Hive show that the storage scheme and query method effectively improve the performance of massive multidimensional data analysis.
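The hierarchical aggregation the abstract relies on can be shown with a two-level dimension hierarchy (region -> city) over toy fact rows; rolling up at a coarser level is just a group-by on a higher hierarchy level. The schema and data are invented for illustration:

```python
from collections import defaultdict

# Fact rows: (region, city, sales). The dimension hierarchy is
# region -> city; a dimension-hierarchy index lets an aggregation at
# the region level avoid touching city-level detail it does not need.
facts = [("east", "boston", 10), ("east", "nyc", 20),
         ("west", "sf", 30), ("west", "la", 5)]

def rollup(rows, level):
    """Aggregate the measure at the requested hierarchy level
    (0 = region, 1 = city), in the spirit of a map-side group-by."""
    totals = defaultdict(int)
    for row in rows:
        totals[row[level]] += row[2]
    return dict(totals)

print(rollup(facts, 0))  # {'east': 30, 'west': 35}
print(rollup(facts, 1))  # per-city totals
```

In the MapReduce formulation, the mapper emits `(dimension value at the requested level, measure)` and the reducer sums, so the same rollup runs in parallel over HCFile splits.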

17.
Multimedia Tools and Applications - Recognizing rotational and angular hand movements using a non-invasive way is a challenging task. In this paper, we present a web-based multimedia hand-Therapy...  相似文献   

18.
With the rapid development of Internet technology, network public-opinion monitoring systems are coming into wide use, and the volume of data they handle is expanding rapidly; how to efficiently store and manage this massive unstructured and semi-structured data has become a challenging problem in the development of such systems. Traditional data-processing approaches such as relational databases and conventional distributed computing are increasingly unable to cope with the ever-growing volume of network big data. Targeting the characteristics of microblog data, this paper builds a multi-layer architecture for a Hadoop storage platform oriented to microblog public-opinion applications, uses a column-oriented database to design table structures for several kinds of structured microblog data, and defines the relational model among the tables. Test results show that the storage management platform offers fast retrieval response and good scalability.

19.
Data processing complexity, partitionability, locality and provenance play a crucial role in the effectiveness of distributed data processing. The dynamics of data processing necessitate effective modeling that allows the fluidity of data processing to be understood and reasoned about. Through virtualization, resources have become scattered, heterogeneous, and dynamic in performance and networking. In this paper, we propose a new distributed data processing model based on automata, in which data processing is modeled as state transformations. This approach falls within the category of declarative concurrent paradigms, which differ fundamentally from imperative approaches in that communication and function order are not explicitly modeled. This abstracts away concurrency and is thus well suited to distributed systems. Automata give us a way to formally describe data processing independently of the underlying processes, while also providing routing information to route data, based on its current state, in a P2P fashion around networks of distributed processing nodes. Through an implementation of the model, named Pumpkin, we capture the automata schema and routing table in a data processing protocol and show how globally distributed resources can be brought together collaboratively to form a processing plane on which data objects are self-routable.
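The state-transformation idea can be sketched with a routing table that maps a data object's current state to the transformation applied at that hop and the resulting next state. The states and functions below are invented stand-ins, not Pumpkin's actual schema:

```python
# Routing table: current state -> (processing function, next state).
# A data object carries its own state and is routed hop by hop
# until it reaches a state with no outgoing transition.
def parse(x):  return x.strip()
def enrich(x): return x.upper()

ROUTES = {
    "raw":    (parse,  "parsed"),
    "parsed": (enrich, "done"),
}

def route(payload, state="raw"):
    """Self-routing data object: each hop looks up the current state,
    applies that node's transformation, and forwards to the next state."""
    while state in ROUTES:
        fn, state = ROUTES[state]
        payload = fn(payload)
    return payload, state

print(route("  hello "))  # ('HELLO', 'done')
```

Because each hop needs only the routing table and the object's current state, not a global execution order, the transformations can run on whichever distributed node currently holds the data, which is the declarative-concurrency property the abstract emphasizes.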

20.
Identifying biological characteristics by spectral analysis produces large volumes of data that, in practice, must be processed in real time. Partial least squares (PLS) is the most widely used discrimination algorithm, but for large-scale data streams it cannot meet real-time requirements. To resolve this conflict between the algorithm and its application, a parallel computing strategy based on the NVIDIA CUDA architecture is proposed: a graphics processing unit (GPU), with its massively parallel character, serves as the computing device, and the PLS algorithm is implemented by exploiting the strengths of the GPU memory hierarchy. Test results show that the CUDA implementation of PLS on the GPU runs 47 times faster than the same algorithm on the CPU, a substantial performance gain that makes it feasible to apply PLS to large-scale stream processing.
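For reference, the sequential baseline being accelerated is ordinary PLS regression. A compact NIPALS-style PLS1 sketch in NumPy (a common formulation; the paper's exact variant and its CUDA kernels are not shown in the abstract). Each step is dense matrix algebra, which is why the algorithm maps well onto a GPU:

```python
import numpy as np

def pls1_nipals(X, y, n_components):
    """PLS1 via NIPALS: extract components sequentially, deflating X and y,
    then return regression coefficients mapping X to y."""
    X, y = X.astype(float).copy(), y.astype(float).copy()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = X.T @ y                    # weight vector from covariance
        w /= np.linalg.norm(w)
        t = X @ w                      # score vector
        tt = t @ t
        p = X.T @ t / tt               # X loading
        q = (y @ t) / tt               # y loading
        X -= np.outer(t, p)            # deflate
        y -= q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.inv(P.T @ W) @ Q

# Noiseless toy data: with full components, PLS recovers the true model.
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
beta_true = np.array([1.0, -2.0, 0.5])
y = X @ beta_true
beta = pls1_nipals(X, y, n_components=3)
print(np.allclose(beta, beta_true))
```

The matrix-vector products and deflations dominate the cost, and each parallelizes over rows and columns, which is the structure a CUDA port exploits.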


Copyright © Beijing Qinyun Science and Technology Development Co., Ltd.  京ICP备09084417号