Full-text access type
Paid full text | 491 articles |
Free | 65 articles |
Free (domestic) | 163 articles |
Subject category
Electrical engineering | 13 articles |
General | 32 articles |
Chemical industry | 3 articles |
Machinery and instruments | 4 articles |
Architectural science | 2 articles |
Mining engineering | 3 articles |
Light industry | 1 article |
Water conservancy engineering | 2 articles |
Petroleum and natural gas | 1 article |
Weapons industry | 1 article |
Radio and electronics | 101 articles |
General industrial technology | 7 articles |
Automation technology | 549 articles |
Publication year
2024 | 1 article |
2023 | 7 articles |
2022 | 12 articles |
2021 | 14 articles |
2020 | 17 articles |
2019 | 18 articles |
2018 | 43 articles |
2017 | 52 articles |
2016 | 87 articles |
2015 | 117 articles |
2014 | 133 articles |
2013 | 92 articles |
2012 | 55 articles |
2011 | 46 articles |
2010 | 15 articles |
2009 | 7 articles |
2008 | 1 article |
2007 | 2 articles |
719 results found (search time: 15 ms)
1.
When clustering massive datasets, the limitations of the traditional serial model become increasingly apparent: it is difficult to obtain satisfactory results within an acceptable time. To address this problem, this paper proposes a parallel clustering model based on the MapReduce framework on the Hadoop platform. Theoretical analysis and experimental results show that the model achieves a near-linear speedup and high efficiency on massive data.
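As a concrete illustration of the idea above, one iteration of a clustering algorithm such as k-means maps naturally onto MapReduce: the map phase assigns each point to its nearest centroid, and the reduce phase averages the points in each group. The following is a minimal single-process sketch of that map/reduce structure (the function names and toy data are illustrative, not taken from the paper):

```python
from collections import defaultdict

def map_phase(points, centroids):
    """Map: emit (nearest-centroid-index, point) pairs."""
    for p in points:
        idx = min(range(len(centroids)), key=lambda i: (p - centroids[i]) ** 2)
        yield idx, p

def reduce_phase(pairs):
    """Reduce: average the points assigned to each centroid index."""
    groups = defaultdict(list)
    for idx, p in pairs:
        groups[idx].append(p)
    return {idx: sum(ps) / len(ps) for idx, ps in groups.items()}

points = [1.0, 1.2, 0.8, 9.0, 9.5, 8.5]
centroids = [0.0, 10.0]
new_centroids = reduce_phase(map_phase(points, centroids))
# new_centroids -> {0: 1.0, 1: 9.0}
```

On Hadoop, `map_phase` would run in parallel across input splits and `reduce_phase` would be sharded by centroid index, which is where the near-linear speedup comes from.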
2.
How to reduce the power consumption of data centers has received worldwide attention. By combining an energy-aware data placement policy with a locality-aware multi-job scheduling scheme, we propose a new multi-objective bi-level programming model based on MapReduce to improve the energy efficiency of servers. First, the variation of energy consumption with server performance is taken into account; second, data locality can be adjusted dynamically according to the current network state; last but not least, since task-scheduling strategies depend directly on data placement policies, we formulate the problem as an integer bi-level programming model. In order to solve the model efficiently, purpose-designed encoding and decoding methods are introduced. Based on these, a new effective multi-objective genetic algorithm based on MOEA/D is proposed. As there are usually tens of thousands of tasks to be scheduled in the cloud, this is a large-scale optimization problem, and a local search operator is designed to accelerate the convergence speed of the proposed algorithm. Finally, numerical experiments demonstrate the effectiveness of the proposed model and algorithm.
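MOEA/D, which the algorithm above builds on, decomposes a multi-objective problem into scalar subproblems, commonly via the Tchebycheff scalarization g(x | w, z*) = max_i w_i |f_i(x) - z*_i|. The sketch below shows that scalarization comparing two hypothetical placements scored on (energy, makespan); the objective values and weights are illustrative, not from the paper:

```python
def tchebycheff(f, weights, z_star):
    """Tchebycheff scalarization of objective vector f for one subproblem."""
    return max(w * abs(fi - zi) for w, fi, zi in zip(weights, f, z_star))

# Ideal point z* and two candidate placements scored on (energy, makespan):
z_star = (10.0, 5.0)
candidate_a = (12.0, 9.0)   # low energy, slower jobs
candidate_b = (20.0, 5.5)   # high energy, fast jobs
weights = (0.5, 0.5)        # one weight vector = one scalar subproblem

score_a = tchebycheff(candidate_a, weights, z_star)  # max(1.0, 2.0) = 2.0
score_b = tchebycheff(candidate_b, weights, z_star)  # max(5.0, 0.25) = 5.0
best = min((score_a, "a"), (score_b, "b"))[1]        # "a" wins this subproblem
```

Each weight vector defines one subproblem; neighboring subproblems share solutions, which is what makes the decomposition efficient at the scale of tens of thousands of tasks.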
3.
When Hadoop processes massive numbers of small images, it suffers from excessive input splits and the small-file storage problem. To address these issues, instead of adopting approaches such as HIPI or SequenceFile, a new parallel image-processing model is proposed. Exploiting Hadoop's suitability for plain-text data, the model uses a text file storing image paths as input in place of the image data itself, so no custom image data type needs to be designed. Reading, processing, and storing images are completed directly in the Map phase. To simplify the image-processing algorithms, OpenCV is integrated into the Map function, and a corresponding storage method is designed to store the small image files. Experiments show that on a Hadoop distributed platform, the model achieves good throughput and stability on both small and large test datasets.
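The path-list input scheme described above can be sketched as follows: the MapReduce input is a plain-text file of image paths, and each map call opens, processes, and stores one image. The `process` step here is a stand-in; in the paper it would invoke OpenCV (e.g. `cv2.imread` / `cv2.imwrite`) inside the Map function. All names and the fake "image" files are illustrative:

```python
import os
import tempfile

def mapper(path_line, process, out_dir):
    """One map call: read the file named on this input line, process it, store the result."""
    path = path_line.strip()
    with open(path, "rb") as f:
        data = f.read()
    result = process(data)                      # OpenCV work would go here
    out_path = os.path.join(out_dir, os.path.basename(path) + ".out")
    with open(out_path, "wb") as f:
        f.write(result)
    return out_path

# Demo with a fake "image": a plain file whose bytes we just upper-case.
tmp = tempfile.mkdtemp()
img = os.path.join(tmp, "cat.img")
with open(img, "wb") as f:
    f.write(b"pixels")
index_file = os.path.join(tmp, "paths.txt")
with open(index_file, "w") as f:
    f.write(img + "\n")

with open(index_file) as f:
    outputs = [mapper(line, lambda b: b.upper(), tmp) for line in f]
```

Because the input is a small text file, Hadoop generates one split per block of paths rather than one per image, which is exactly what avoids the excessive-splits problem.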
4.
Information Recording Materials, 2019, (12)
Big data processing has in recent years become one of the problems of particular concern to individuals, companies, and large corporations worldwide. For scale: Google has indexed 10 billion images, YouTube processes 35 hours of content every minute, and Twitter handles 600 million accesses per day. What follows is my discussion of big data. There was a time when data at such a scale was handled only by large companies that could afford expensive supercomputers and the staff to maintain them. Today, as the cost of storing data has fallen and data-processing capability has become commonplace, smaller companies and individuals have begun to store and mine data in much the same way. Distributed data storage across multiple hard disks offers large storage capacity and fast data access. However, maintaining a distributed system with many disks raises problems that must be solved, such as hardware failures and the analysis of data stored on other hardware. One of the technologies of the resulting big-data-mining revolution is the MapReduce programming model on the Hadoop platform. Within the framework of this paper, the author therefore introduces the programming model and presents an illustrative application of it.
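The canonical illustrative application of the MapReduce programming model introduced above is word count: map emits `(word, 1)` for every word, and reduce sums the counts per word. A single-process sketch of that logic (not the paper's code):

```python
from collections import Counter
from itertools import chain

def wc_map(line):
    """Map: emit (word, 1) for every word in one line of input."""
    return [(w, 1) for w in line.split()]

def wc_reduce(pairs):
    """Reduce: sum the counts emitted for each distinct word."""
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big", "data model"]
word_counts = wc_reduce(chain.from_iterable(wc_map(l) for l in lines))
# word_counts -> {'big': 2, 'data': 2, 'model': 1}
```

Hadoop runs the map calls in parallel over input splits and groups the pairs by key before the reduce calls, so the same two functions scale from this toy input to terabytes.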
5.
Li WEIGANG, Edans F. O. SANDES, Jianya ZHENG, Alba C. M. A. de MELO, Lorna UDEN. Journal of Zhejiang University SCIENCE C, 2014, 15(2): 81-90
Online social networks (OSNs) offer people the opportunity to join communities where they share a common interest or objective. This kind of community is useful for studying human behavior, the diffusion of information, and the dynamics of groups. As the members of a community are always changing, an efficient solution is needed to query information in real time. This paper introduces the Follow Model to represent the basic relationship between users in OSNs, and combines it with the MapReduce solution to develop new parallel algorithms for querying. Two models, for the reverse relation and the high-order relation of users, were implemented in the Hadoop system. Based on 75 GB of message data and 26 GB of relation network data from Twitter, a case study was carried out on two dynamic discussion communities: #musicmonday and #beatcancer. The querying performance demonstrates that the new solution, with its implementation in Hadoop, significantly improves the ability to find useful information in OSNs.
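The "reverse relation" query mentioned above amounts to inverting the follow graph with MapReduce: map reverses each (follower, followee) edge, and reduce collects each user's followers. A toy sketch of that structure (the edges are made up, not Twitter data):

```python
from collections import defaultdict

def map_edges(edges):
    """Map: reverse each follow edge so it is keyed by the followee."""
    for follower, followee in edges:
        yield followee, follower

def reduce_followers(pairs):
    """Reduce: gather the follower set of each followee."""
    followers = defaultdict(set)
    for followee, follower in pairs:
        followers[followee].add(follower)
    return followers

edges = [("alice", "bob"), ("carol", "bob"), ("bob", "alice")]
followers = reduce_followers(map_edges(edges))
# followers["bob"] -> {"alice", "carol"}
```

Higher-order relations (e.g. followers of followers) compose in the same way by chaining another map/reduce pass over the output.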
6.
7.
Various methods and techniques have been proposed in the past for improving the performance of queries on structured and unstructured data. This paper proposes a parallel B-Tree index in the MapReduce framework that improves the efficiency of random reads over existing approaches. The benefit of using the MapReduce framework is that it hides the complexity of implementing parallelism and fault tolerance from users and presents these capabilities in a user-friendly way. The proposed index reduces the number of data accesses for range queries and thus improves efficiency. The B-Tree index on MapReduce is implemented as a chained-MapReduce process, which reduces intermediate data-access time between successive map and reduce functions and improves efficiency. Finally, five performance metrics are used to validate the proposed index for range-search queries in MapReduce: the effect of varying cluster size and of range-query coverage on execution time, the number of map tasks, and the size of input/output (I/O) data. The effect of varying the Hadoop Distributed File System (HDFS) block size, together with an analysis of the heap memory and intermediate data generated during the map and reduce functions, also shows the superiority of the proposed index. Experimental results show that the parallel B-Tree index with a chained-MapReduce environment performs better than both the default non-indexed Hadoop dataset and a B-Tree-like Global Index (Zhao et al., 2012) in MapReduce.
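The reason an index reduces data accesses for range queries, as claimed above, is that sorted keys let the query touch only the matching slice instead of scanning everything. This single-node sketch captures that idea with a sorted key list and binary search; it is an illustration of the principle, not the paper's distributed implementation:

```python
import bisect

class RangeIndex:
    """A minimal sorted-key index supporting B-Tree-style range queries."""

    def __init__(self, records):
        # records: iterable of (key, value) pairs; keep them sorted by key
        self._sorted = sorted(records)
        self._keys = [k for k, _ in self._sorted]

    def range_query(self, lo, hi):
        """Return values with lo <= key <= hi, touching only that slice."""
        left = bisect.bisect_left(self._keys, lo)
        right = bisect.bisect_right(self._keys, hi)
        return [v for _, v in self._sorted[left:right]]

idx = RangeIndex([(5, "e"), (1, "a"), (3, "c"), (9, "i"), (7, "g")])
hits = idx.range_query(3, 7)
# hits -> ["c", "e", "g"]
```

In the paper's setting, the index is partitioned across nodes, so a range query is routed only to the partitions whose key ranges overlap [lo, hi].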
8.
Electronic Technology and Software Engineering, 2016, (4)
Owing to its self-describing nature and extensibility, XML has become one of the main data formats on the network and is being used ever more widely, and individual XML documents are growing ever larger. How to efficiently compute the results satisfying a given query semantics is a core problem of XML data querying, and the encoding applied to the document tree of an XML document is a key factor affecting query efficiency. The experiments in this paper were carried out on the MapReduce framework. Taking prefix-stream encoding as the object of study, a new encoding, the MINDewey code, is proposed. Compared with the Dewey code and the ED code, its encoding efficiency on a distributed cluster is improved by at least 10%.
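For readers unfamiliar with the baseline the abstract compares against: a Dewey code labels each XML node with its path of child positions, so ancestor/descendant relations reduce to prefix tests on the codes. A small sketch (the sample document is made up):

```python
import xml.etree.ElementTree as ET

def dewey_codes(elem, code="1"):
    """Yield (dewey_code, tag) for elem and all of its descendants."""
    yield code, elem.tag
    for i, child in enumerate(elem, start=1):
        yield from dewey_codes(child, f"{code}.{i}")

root = ET.fromstring("<a><b><d/></b><c/></a>")
codes = dict(dewey_codes(root))
# codes -> {"1": "a", "1.1": "b", "1.1.1": "d", "1.2": "c"}

def is_ancestor(code_a, code_d):
    """An ancestor's Dewey code is a dot-terminated prefix of its descendant's."""
    return code_d.startswith(code_a + ".")
```

Structural joins in query evaluation exploit exactly this prefix property, which is why the choice of encoding dominates query efficiency.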
9.
An important property of today's big data processing is that the same computation is often repeated on datasets that evolve over time, such as web and social network data. While repeating the full computation over the entire dataset is feasible with distributed computing frameworks such as Hadoop, it is obviously inefficient and wastes resources. In this paper, we present HadUP (Hadoop with Update Processing), a modified Hadoop architecture tailored to large-scale incremental processing with conventional MapReduce algorithms. Several approaches have been proposed to achieve a similar goal using task-level memoization. However, task-level memoization detects dataset changes at a coarse-grained level, which often makes such approaches ineffective. Instead, HadUP detects and computes dataset changes at a fine-grained level using a deduplication-based snapshot differential algorithm (D-SD) and update propagation. As a result, it provides high performance, especially in environments where task-level memoization yields no benefit. HadUP incurs only a small extra programming cost because it can reuse the code of Hadoop map and reduce functions, so developing HadUP applications is quite easy.
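The core of the fine-grained change detection described above can be illustrated with record-level content hashes: only records whose fingerprints differ between two snapshots need reprocessing. This is a simplified sketch of the idea; the actual D-SD algorithm operates on deduplicated chunks rather than whole records:

```python
import hashlib

def fingerprint(records):
    """Hash each record's content so changes can be detected cheaply."""
    return {key: hashlib.sha256(val.encode()).hexdigest()
            for key, val in records.items()}

def snapshot_diff(old, new):
    """Classify records as added, removed, or changed between snapshots."""
    old_fp, new_fp = fingerprint(old), fingerprint(new)
    added   = sorted(k for k in new_fp if k not in old_fp)
    removed = sorted(k for k in old_fp if k not in new_fp)
    changed = sorted(k for k in new_fp
                     if k in old_fp and new_fp[k] != old_fp[k])
    return added, removed, changed

old = {"p1": "hadoop", "p2": "mapreduce", "p3": "hdfs"}
new = {"p1": "hadoop", "p2": "mapreduce v2", "p4": "yarn"}
added, removed, changed = snapshot_diff(old, new)
# added -> ["p4"], removed -> ["p3"], changed -> ["p2"]
```

Update propagation then re-runs the map and reduce logic only on the added, removed, and changed records, which is what makes incremental processing cheap when most of the dataset is unchanged.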
10.
MapReduce is becoming an increasingly popular programming model. However, the most widely used implementation, Apache Hadoop, relies on the Hadoop Distributed File System (HDFS), which is currently not directly applicable to the majority of existing HPC environments, such as TeraGrid and NERSC, that support other distributed file systems. On such resource-rich High Performance Computing (HPC) infrastructures, the MapReduce model can rarely make use of the full resources: either special circumstances must be created for its adoption, or limited resources must be isolated to the same end. This paper not only presents a MapReduce implementation directly suitable for such environments, but also exposes the design choices that yield better performance in those settings. By leveraging the inherent functions of distributed file systems and abstracting them away from its MapReduce framework, MARIANE (MApReduce Implementation Adapted for HPC Environments) not only enables the use of the model in a growing number of HPC environments, but also shows better performance in such settings. This paper identifies the components and trade-offs necessary for this model, and quantifies the performance gains exhibited by our approach over Apache Hadoop in a data-intensive setting at the National Energy Research Scientific Computing Center (NERSC).
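The key design choice described above, replacing HDFS with the cluster's existing shared file system, can be sketched as follows: a file on a shared mount is split into byte ranges so each worker reads its own slice directly through POSIX I/O, with no separate data-distribution step. This is an illustration of the idea, not the MARIANE implementation:

```python
import os
import tempfile

def byte_splits(path, n_workers):
    """Partition [0, filesize) into up to n_workers contiguous byte ranges."""
    size = os.path.getsize(path)
    step = -(-size // n_workers)          # ceiling division
    return [(start, min(start + step, size))
            for start in range(0, size, step)]

def read_split(path, start, end):
    """A worker reads its slice directly from the shared file system."""
    with open(path, "rb") as f:
        f.seek(start)
        return f.read(end - start)

tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.write(b"abcdefghij")
tmp.close()

splits = byte_splits(tmp.name, 3)          # [(0, 4), (4, 8), (8, 10)]
chunks = [read_split(tmp.name, s, e) for s, e in splits]
```

Because every node already sees the same namespace through the shared file system, the split metadata is all that has to be communicated, which is what lets the model fit existing HPC storage instead of requiring HDFS.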