Similar Documents
Found 20 similar documents (search time: 0 ms)
1.
Part-of-speech (POS) tagging plays an important role in natural language information processing: it underpins syntactic parsing, information extraction, machine translation, and other NLP tasks, and Kazakh is no exception. Building on dictionary-based static tagging, this paper analyzes the choice of parameters for the Hidden Markov Model (HMM), data smoothing, and the handling of out-of-vocabulary words. A statistical approach is used to train the model on an annotated Kazakh corpus, and tagging is then performed with the Viterbi algorithm. Experimental results show that HMM-based POS tagging improves tagging accuracy.
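The Viterbi decoding step named in this abstract can be sketched as follows. This is a minimal toy illustration of HMM tagging, not the paper's Kazakh model; all probabilities below are invented for the example.

```python
def viterbi(words, tags, start_p, trans_p, emit_p, unk_p=1e-6):
    """Return the most probable tag sequence for `words` under a toy HMM.
    Unseen words/transitions fall back to a tiny probability (crude smoothing)."""
    # best[t] = (probability of best path ending in tag t, that path)
    best = {t: (start_p.get(t, 0.0) * emit_p[t].get(words[0], unk_p), [t]) for t in tags}
    for w in words[1:]:
        new_best = {}
        for t in tags:
            # pick the predecessor tag that maximizes the path probability
            p, path = max(
                (best[s][0] * trans_p[s].get(t, 1e-6) * emit_p[t].get(w, unk_p), best[s][1])
                for s in tags
            )
            new_best[t] = (p, path + [t])
        best = new_best
    return max(best.values())[1]
```

Real taggers work in log space and use proper smoothing; products keep the recursion readable here.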

2.
This work analyzes the relative advantages of different metaheuristic approaches to the well-known natural language processing problem of part-of-speech tagging. This consists of assigning to each word of a text its disambiguated part-of-speech according to the context in which the word is used. We have applied a classic genetic algorithm (GA), a CHC algorithm, and a simulated annealing (SA). Different ways of encoding the solutions to the problem (integer and binary) have been studied, as well as the impact of using parallelism for each of the considered methods. We have performed experiments on different linguistic corpora and compared the results obtained against other popular approaches plus a classic dynamic programming algorithm. Our results demonstrate the high performance achieved by the parallel algorithms compared with the sequential ones, and highlight the particular advantages of each technique. Our algorithms and some of their components can serve as a new set of state-of-the-art procedures for complex tagging scenarios.
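As one of the metaheuristics mentioned above, simulated annealing over tag assignments can be sketched generically; the scoring function, cooling schedule, and parameter values below are illustrative stand-ins, not the paper's fitness function or configuration.

```python
import math
import random

def anneal(words, candidates, score, T0=2.0, cooling=0.95, steps=300, seed=0):
    """Simulated annealing over tag assignments.
    `candidates[w]` lists the possible tags of word w; `score` evaluates a full
    tagging (higher is better). Worse moves are accepted with probability
    exp(delta / T), which shrinks as the temperature T cools."""
    rng = random.Random(seed)
    state = [rng.choice(candidates[w]) for w in words]
    best, best_s, T = list(state), score(state), T0
    for _ in range(steps):
        # propose a neighbor: re-draw the tag of one random position
        i = rng.randrange(len(words))
        neighbor = list(state)
        neighbor[i] = rng.choice(candidates[words[i]])
        delta = score(neighbor) - score(state)
        if delta >= 0 or rng.random() < math.exp(delta / T):
            state = neighbor
        if score(state) > best_s:
            best, best_s = list(state), score(state)
        T = max(T * cooling, 1e-3)
    return best
```

A GA or CHC variant would replace the single-state loop with a population and recombination, but the fitness evaluation plays the same role.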

3.
POS tagging is a long-standing problem in natural language understanding, but for large tagsets tagging accuracy remains low. We therefore study large-tagset POS tagging with the Hidden Markov Model (HMM) and the maximum entropy (MaxEnt) method, and on this basis propose a newer approach to POS tagging, a log-linear model, to improve tagging accuracy. In the experiments, we propose a new smoothing algorithm for the HMM; for the MaxEnt model, we integrate detailed local and long-distance contextual features; and in the log-linear model, we combine the HMM and MaxEnt models and compare the results. They show that the log-linear model, which integrates multiple information sources, reaches a tagging accuracy of 81.52%, better than the traditional HMM model.
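The log-linear combination of component models can be sketched generically: weight the log-probabilities of each component and renormalize. The distributions and weights below are illustrative, not the paper's trained HMM and MaxEnt models.

```python
import math

def log_linear_combine(dists, weights):
    """Combine component distributions P_i(tag) log-linearly:
    score(tag) = sum_i w_i * log P_i(tag), then renormalize with softmax.
    Missing tags get a tiny floor probability to avoid log(0)."""
    tags = set().union(*[d.keys() for d in dists])
    scores = {t: sum(w * math.log(d.get(t, 1e-12)) for d, w in zip(dists, weights))
              for t in tags}
    m = max(scores.values())  # subtract the max for numerical stability
    exp = {t: math.exp(s - m) for t, s in scores.items()}
    z = sum(exp.values())
    return {t: v / z for t, v in exp.items()}
```

With unit weights this reduces to a normalized product of the component probabilities; tuning the weights on held-out data is what makes the combination outperform either component.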

4.
    
The tremendous growth of data being generated today is making storage and computing a mammoth task. With its distributed processing capability, Hadoop gives an efficient solution for such large data. Hadoop's default data placement strategy places the data blocks randomly across the nodes without considering the execution parameters, resulting in several lacunas such as increased execution time and query latency. Also, most of the data required for a task execution may not be locally available, which creates a data-locality problem. Hence we propose an innovative data placement strategy based on the dependency of data blocks across the nodes. Our strategy dynamically analyses the history log and establishes the relationship between various tasks and the blocks required for each task through a Block Dependency Graph (BDG). Our CORE algorithm then re-organizes the HDFS layout by redistributing the data blocks to give an optimal data placement, resulting in improved performance for big data sets in a distributed environment. This strategy was tested in a 20-node cluster with different real-world MR applications. The results show that the proposed strategy reduces query execution time by 23% and improves data locality by 50.7% compared to the default.

5.
    
Automated building code compliance checking systems have been under development for many years. However, the excessive amount of human input needed to convert building codes from natural language to computer-understandable formats severely limited their range of applicable code requirements. To address that, automated code compliance checking systems need to enable automated regulatory rule conversion. Accurate Part-of-Speech (POS) tagging of building code texts is crucial to this conversion. Previous experiments showed that the state-of-the-art generic POS taggers do not perform well on building codes. In view of that, the authors are proposing a new POS tagger tailored to building codes. It utilizes a deep-learning neural network model and error-driven transformational rules. The neural network model contains a pre-trained model and one or more trainable neural layers. The pre-trained model was fine-tuned on Part-of-Speech Tagged Building Codes (PTBC), a POS tagged building codes dataset. The fine-tuning of the pre-trained model allows the proposed POS tagger to reach high precision with a small amount of available training data. Error-driven transformational rules were used to boost performance further by fixing errors made by the neural network model in the tagged building code. Through experimental testing, the authors found a well-performing POS tagger for building codes that had one bi-directional LSTM trainable layer, utilized the BERT_Cased_Base pre-trained model, and was trained for 50 epochs. This model reached a 91.89% precision without error-driven transformational rules and a 95.11% precision with error-driven transformational rules, which outperformed the 89.82% precision achieved by the state-of-the-art POS taggers.

6.
With the rapid development of Web 2.0 technologies, emerging services such as social networks, the Internet of Things, and the mobile Internet keep appearing, and Web data is growing explosively into much-discussed "big data". The enormous value of Web big data has drawn increasing attention to how such data can be acquired, mined, and exploited. In the big data setting, Web data is large in scale, diverse in type, and arrives as high-speed streams, which has deepened research on Web data extraction and integration, data analysis, and data interpretation. At the same time, the integration and mining of Web big data still face challenges in data scale, data diversity, data timeliness, and privacy protection.

7.
This paper introduces crawler-based HTML parsing of web pages, presents the process of extracting trending web terms and mining the data, and summarizes the application prospects of the method.
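The HTML parsing step of such a crawler can be sketched with the standard library; fetching pages and the actual trending-term ranking are out of scope, and the class name `TermExtractor` is hypothetical.

```python
from collections import Counter
from html.parser import HTMLParser

class TermExtractor(HTMLParser):
    """Extract visible text and hyperlinks from an HTML page -- the parsing
    stage of a crawler pipeline. Script/style content is skipped."""
    def __init__(self):
        super().__init__()
        self.links, self._chunks, self._skip = [], [], 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1
        if tag == "a":
            self.links += [v for k, v in attrs if k == "href"]

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip and data.strip():
            self._chunks.append(data.strip())

    def term_counts(self):
        # naive whitespace tokenization; real term mining would segment/filter
        return Counter(w.lower() for c in self._chunks for w in c.split())
```

Feeding each fetched page through `feed()` and aggregating `term_counts()` across pages gives the raw frequency data that trending-term mining would start from.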

8.
Association rule mining is one of the most common and important data mining tasks; classic algorithms include Apriori, FP-Growth, and Eclat. With the explosive growth of data, traditional algorithms can no longer meet the needs of big data mining, and distributed, parallel association rule mining algorithms are required. MapReduce is a popular distributed parallel computing model that has been widely adopted for its ease of use, good scalability, automatic load balancing, and automatic fault tolerance. This paper classifies and surveys existing parallel association rule mining algorithms based on the MapReduce model, summarizes their respective strengths, weaknesses, and applicable scopes, and outlines directions for future research.
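One round of Apriori-style candidate counting maps naturally onto MapReduce. The sketch below simulates a single "count distribution" round in plain Python (the map and reduce functions run locally here rather than on a cluster, and the function names are illustrative):

```python
from collections import defaultdict
from itertools import combinations

def map_count(partition, candidates):
    """Map phase: emit (itemset, 1) for each candidate contained in a transaction."""
    for txn in partition:
        s = set(txn)
        for c in candidates:
            if set(c) <= s:
                yield c, 1

def reduce_counts(pairs):
    """Reduce phase: sum the per-partition counts for each itemset key."""
    totals = defaultdict(int)
    for key, n in pairs:
        totals[key] += n
    return dict(totals)

def frequent_itemsets(partitions, items, k, minsup):
    """Every partition counts the same size-k candidates; the reducer sums
    the partial counts and filters by the global minimum support."""
    candidates = list(combinations(sorted(items), k))
    pairs = (p for part in partitions for p in map_count(part, candidates))
    return {c: n for c, n in reduce_counts(pairs).items() if n >= minsup}
```

A full parallel Apriori iterates this round, generating size-(k+1) candidates from the surviving size-k itemsets; SON-style variants instead mine each partition independently and verify the union of local results in a second pass.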

9.
With the rapid development of cloud computing, the Internet of Things, the mobile Internet, and related technologies, massive data is growing quickly in these new fields, and big data, as a disruptive technology, offers enormous potential for processing it. Traditional relational databases are no longer adequate, which has led to the emergence of distributed NoSQL databases. To address the practical difficulties in the big data field, this paper designs and implements a new distributed big data management system (DBDMS) based on Hadoop and NoSQL, providing real-time collection, retrieval, and permanent storage of big data. Experiments show that DBDMS significantly improves big data processing capability and is well suited to applications such as massive log backup and retrieval and massive network packet capture and analysis.

10.
    
Nowadays, many organizations analyze their data with the MapReduce paradigm, most of them using the popular Apache Hadoop framework. As the data size managed by MapReduce applications is steadily increasing, the need for improving the Hadoop performance also grows. Existing modifications of Hadoop (e.g., Mellanox Unstructured Data Accelerator) attempt to improve performance by changing some of its underlying subsystems. However, they are not always capable of coping with all its performance bottlenecks, or they hinder its portability. Furthermore, new frameworks like Apache Spark or DataMPI can achieve good performance improvements, but they do not keep compatibility with existing MapReduce applications. This paper proposes Flame-MR, a new event-driven MapReduce architecture that increases Hadoop performance by avoiding memory copies and pipelining data movements, without modifying the source code of the applications. The performance evaluation on two representative systems (an HPC cluster and a public cloud platform) has shown experimental evidence of significant performance increases, reducing the execution time by up to 54% on the Amazon EC2 cloud.

11.
Li Yan, Ma Junming, An Bo, Cao Donggang. 《计算机科学》 (Computer Science), 2018, 45(9): 60-64, 93
Researchers routinely use tools such as Excel and SPSS to analyze and process data and obtain domain knowledge. With the arrival of the big data era, however, common data processing software can no longer meet researchers' needs for big data analysis because of single-machine performance limits. Processing and visualizing big data requires a distributed computing environment, so to do so researchers must not only purchase and maintain a distributed cluster but also master programming in a distributed environment and the corresponding front-end data visualization techniques. This is very difficult and unnecessary for many data analysts without a computer science background. To address this problem, the paper proposes a lightweight Web-based tool for big data processing and visualization. With simple clicks and drags, analysts can easily open large data files (GB scale) in the browser, jump quickly to a given line of the file, conveniently invoke a distributed computing framework to sort the file contents or find maxima, and easily visualize the data. An empirical study shows that the solution is effective.

12.
Building smart cities has become a basic goal of urban development in the information age, and intelligent video surveillance is an important part of it: the aim is to extract useful information from video images to support public security work. Because video surveillance systems are used across many industries, surveillance video has become a typical kind of big data, and processing it efficiently has become a major challenge. Based on an analysis of the characteristics of video processing, this paper proposes and implements a distributed offline video processing method on the Hadoop MapReduce framework. The method is optimized for the characteristics of video processing and improves the processing efficiency of large-scale surveillance video.

13.
An important property of today’s big data processing is that the same computation is often repeated on datasets evolving over time, such as web and social network data. While repeating full computation of the entire datasets is feasible with distributed computing frameworks such as Hadoop, it is obviously inefficient and wastes resources. In this paper, we present HadUP (Hadoop with Update Processing), a modified Hadoop architecture tailored to large-scale incremental processing with conventional MapReduce algorithms. Several approaches have been proposed to achieve a similar goal using task-level memoization. However, task-level memoization detects the change of datasets at a coarse-grained level, which often makes such approaches ineffective. Instead, HadUP detects and computes the change of datasets at a fine-grained level using a deduplication-based snapshot differential algorithm (D-SD) and update propagation. As a result, it provides high performance, especially in an environment where task-level memoization has no benefit. HadUP requires only a small amount of extra programming cost because it can reuse the code for the map and reduce functions of Hadoop. Therefore, the development of HadUP applications is quite easy.
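The core idea behind a deduplication-based snapshot differential can be sketched as per-chunk hashing of two snapshots; D-SD itself is more sophisticated (and real systems use content-defined chunking at KB/MB granularity), so the fixed 4-byte chunks below are purely for the toy example.

```python
import hashlib

CHUNK = 4  # toy chunk size; real dedup systems use far larger chunks

def chunk_hashes(data, size=CHUNK):
    """Hash fixed-size chunks of a snapshot."""
    return [hashlib.sha256(data[i:i + size]).hexdigest()
            for i in range(0, len(data), size)]

def snapshot_delta(old, new, size=CHUNK):
    """Fine-grained change detection between two dataset snapshots: compare
    per-chunk hashes and return indices of chunks that changed or were added."""
    h_old, h_new = chunk_hashes(old, size), chunk_hashes(new, size)
    return [i for i, h in enumerate(h_new) if i >= len(h_old) or h != h_old[i]]
```

Only the changed chunks then need to be fed into incremental recomputation, which is what makes fine-grained detection pay off where task-level memoization does not.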

14.
To address the heavy computing resource consumption and low efficiency of big data clustering, this paper proposes a new distributed two-stage clustering method based on node sampling. First, each local node clusters its own data and, based on the local clustering result, draws representative data samples, which are transmitted to a central node. The central node then performs further clustering analysis on the merged samples and returns the sample clustering result to the local nodes. Finally, each local node combines its own local clustering result with the central sample clustering result to unify the final cluster labels. Through this procedure, the proposed method turns a centralized clustering algorithm into a distributed one and can cluster the global data quickly and consistently. Theoretical analysis and numerical experiments both show that, compared with traditional centralized clustering over the full data, the two-stage method effectively combines the efficiency of parallel processing with the accuracy of integrated analysis, significantly reducing computing resource consumption while preserving clustering quality, and is a feasible distributed solution for big data clustering.
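The two-stage flow above can be sketched end to end. The tiny 1-D k-means here stands in for whatever clustering algorithm each node would actually run, and sending only the local centroids plays the role of the representative samples; the sampling scheme in the paper may differ.

```python
def kmeans_1d(points, k, iters=20):
    """Tiny 1-D k-means with centers initialized across the sorted data."""
    pts = sorted(points)
    centers = [pts[i * (len(pts) - 1) // max(k - 1, 1)] for i in range(k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        centers = [sum(g) / len(g) if g else centers[i] for i, g in enumerate(groups)]
    return centers

def two_stage_cluster(node_data, k):
    """Stage 1: each node clusters locally and contributes its k centroids as
    samples. Stage 2: the central node clusters the merged samples; local
    nodes then label their points by the nearest global center."""
    samples = [c for data in node_data for c in kmeans_1d(data, k)]
    global_centers = kmeans_1d(samples, k)
    labels = [
        [min(range(k), key=lambda i: abs(p - global_centers[i])) for p in data]
        for data in node_data
    ]
    return global_centers, labels
```

Only k centroids per node cross the network instead of the full data, which is where the resource savings over centralized clustering come from.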

15.
In this paper, we describe the process of parallelizing an existing, production level, sequential Synthetic Aperture Radar (SAR) processor based on the Range-Doppler algorithmic approach. We show how, taking into account the constraints imposed by the software architecture and related software engineering costs, it is still possible with a moderate programming effort to parallelize the software, and we present a message-passing interface (MPI) implementation whose speedup is about 8 on 9 processors, achieving near real-time processing of raw SAR data even on a moderately aged parallel platform. Moreover, we discuss a hybrid two-level parallelization approach that involves the use of both MPI and OpenMP. We also present GridStore, a novel data grid service to manage raw, focused and post-processed SAR data in a grid environment. Indeed, another aim of this work is to show how the processed data can be made available in a grid environment to a wide scientific community, through the adoption of a data grid service providing both metadata and data management functionalities. In this way, along with near real-time processing of SAR images, we provide a data grid-oriented system for data storing, publishing, management, etc.
Corresponding author: Giovanni Aloisio.

16.
Most current work in natural language processing performs dependency parsing on top of word segmentation results, typically with end-to-end supervised models. This approach has two main problems: annotation schemes are numerous and relatively complex, and nested linguistic structures cannot be recognized. To solve these problems, this paper proposes phrase-window-based dependency annotation rules, annotates the Chinese Phrase Window Dataset (CPWD), and introduces a phrase-window model. The annotation rules take the phrase as the smallest unit, divide sentences into seven types of nestable phrases, and mark the syntactic dependency relations between phrases. The phrase-window model borrows the idea of object detection from computer vision: it detects the start and end positions of phrases and thereby recognizes nested phrases and their dependency relations simultaneously. Experimental results show that on the CPWD dataset the phrase-window model improves F1 by more than 1 percentage point over traditional end-to-end models. The method was applied to the CCL2018 Chinese metaphor sentiment analysis competition, where it improved F1 by more than 1 percentage point over the baseline and took first place.

17.
There are substantial benefits to be gained from building computing systems from a number of processors working in parallel. One of the frequently-stated advantages of parallel and distributed systems is that they may be scaled to the needs of the user. This paper discusses some of the problems associated with designing a general-purpose operating system for a scalable parallel computing engine and then describes the solutions adopted in our experimental parallel operating system. We explain why a parallel computing engine composed of a collection of processors communicating through point-to-point links provides a suitable vehicle in which to realize the advantages of scaling. We then introduce a parallel-processing abstraction which can be used as the basis of an operating system for such a computing engine. We consider how this abstraction can be implemented and retain the ability to scale. As a concrete example of the ideas presented here we describe our own experimental scalable parallel operating-system project, concentrating on the Wisdom nucleus and the Sage file system. Finally, after introducing related work, we describe some of the lessons learnt from our own project.

18.
Membrane systems are parallel distributed computing models that are used in a wide variety of areas. Use of a sequential machine to simulate membrane systems loses the advantage of parallelism in Membrane Computing. In this paper, an innovative classification algorithm based on a weighted network is introduced. Two new algorithms have been proposed for simulating membrane systems models on a Graphics Processing Unit (GPU). Communication and synchronization between threads and thread blocks in a GPU are time-consuming processes. In previous studies, dependent objects were assigned to different threads. This increases the need for communication between threads, and as a result, performance decreases. In previous studies, dependent membranes have also been assigned to different thread blocks, requiring inter-block communications and decreasing performance. The speedup of the proposed algorithm on a GPU that classifies dependent objects using a sequential approach, for example with 512 objects per membrane, was 82×, while for the previous approach (Algorithm 1), it was 8.2×. For a membrane system with high dependency among membranes, the speedup of the second proposed algorithm (Algorithm 3) was 12×, while for the previous approach (Algorithm 1) and the first proposed algorithm (Algorithm 2) that assign each membrane to one thread block, it was 1.8×.

19.
    
Forensic examiners are in an uninterrupted battle with criminals over the use of Big Data technology. The underlying storage system is the main scene for tracing criminal activities. The Big Data storage system has been identified as an emerging challenge to digital forensics, and it therefore requires the development of a sound methodology for its investigation. Since the use of Hadoop as a Big Data storage system continues to grow rapidly, an investigation process model for forensic analysis of Hadoop storage and attached client devices is compulsory. Moreover, forensic analysis of a Hadoop Big Data storage system may take additional time without knowing where the data remnants can reside. In this paper, a new forensic investigation process model for the Hadoop Big Data storage system is proposed and the discovered data remnants are presented. By conducting forensic research on the Hadoop Big Data storage system, the resulting data remnants assist forensic examiners and practitioners in generating evidence.

20.
Bag-of-words approaches to text sentiment analysis ignore how a sentence's syntactic structure shapes the understanding of its meaning; methods based on dependency parsing attempt to solve this problem, but the dependency relations they consider are usually chosen by ad-hoc human observation. Based on the prior polarity, modifying polarity, and dynamic polarity that influence sentence sentiment, this paper (1) identifies four parts of speech that affect sentence sentiment: adjectives, verbs, adverbs, and nouns; (2) analyzes, from the perspective of part of speech and Chinese sentence constituents, the influence of 24 dependency relations on sentence sentiment computation one by one, identifying eight dependency relations that may affect sentence sentiment; (3) designs six sentiment computation rules based on the possible POS combinations within these eight relations, proposes a binary-tree-based sentiment computation strategy, and designs algorithms for constructing the sentiment computation binary tree and for computing sentiment over it; (4) tests the method on Web financial information, with experimental results demonstrating its effectiveness.
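The binary-tree sentiment computation can be sketched as recursive rule application over a combination tree: leaves carry prior polarities from a lexicon, and each internal node applies a rule keyed by the dependency relation joining its children. The lexicon entries and the single "ADV" (adverbial) rule below are illustrative inventions, not the paper's six rules or eight relations.

```python
# Toy prior-polarity lexicon: 上涨 "rise" positive, 下跌 "fall" negative,
# 没有 "not" a negator, 大幅 "sharply" an intensifier.
LEXICON = {"上涨": 1.0, "下跌": -1.0, "没有": -1.0, "大幅": 1.5}

def polarity(node):
    """node is either a word (leaf) or a (relation, left, right) triple."""
    if isinstance(node, str):
        return LEXICON.get(node, 0.0)
    rel, left, right = node
    l, r = polarity(left), polarity(right)
    if rel == "ADV" and l < 0:
        return -r          # negating adverbial flips the head's polarity
    if rel == "ADV":
        return l * r if l else r   # intensifying adverbial scales it
    return l + r           # default rule: sum the sub-polarities
```

A real system would build the tree from the parser's dependency arcs and dispatch on all eight relations; the recursion structure stays the same.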


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号