Similar Articles
 20 similar articles found (search time: 15 ms)
1.
Semantic labelling refers to the problem of assigning known labels to the elements of structured information from a source such as an HTML table or an RDF dump with unknown semantics. In recent years it has become progressively more relevant due to the growth of structured information available in the Web of Data that needs to be labelled in order to integrate it into data systems. The existing approaches to semantic labelling have several drawbacks that make them unappealing, if not impossible, to use in certain scenarios: not accepting nested structures as input, being unable to label structural elements, not being customisable, requiring groups of instances when labelling, requiring that instances match named entities in a knowledge base, not detecting numeric data, or not supporting complex features. In this article, we propose TAPON-MT, a framework for machine-learning-based semantic labelling. Our framework does not have the aforementioned limitations, which makes it domain-independent and customisable. We have implemented it with a graphical interface that eases the creation and analysis of models, and we offer a web service API for their application. We have also validated it with a subset of the National Science Foundation awards dataset, and our conclusion is that the models TAPON-MT creates to label information are effective and efficient in practice.
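To make the labelling idea concrete, here is a minimal, self-contained sketch of feature-based semantic labelling in the spirit of the abstract; it is not TAPON-MT's actual implementation, and the features, labels, and training values are invented for illustration:

```python
# Hedged sketch: shallow features per field value feed a classifier that
# assigns a semantic label. Features/labels are illustrative, not TAPON-MT's.
from sklearn.ensemble import RandomForestClassifier

def field_features(value: str):
    """Shallow features of a field value (handles numeric data too)."""
    return [
        len(value),
        sum(c.isdigit() for c in value) / max(len(value), 1),
        sum(c.isalpha() for c in value) / max(len(value), 1),
        value.count("@"),                            # hints at e-mail fields
        float(value.replace(".", "", 1).isdigit()),  # crude numeric detection
    ]

train_values = ["$450,000", "jane@nsf.gov", "2015-06-01", "1200000"]
train_labels = ["amount", "email", "date", "amount"]

clf = RandomForestClassifier(n_estimators=50, random_state=0)
clf.fit([field_features(v) for v in train_values], train_labels)
print(clf.predict([field_features("bob@example.org")]))  # expected: ['email']
```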

2.
The nature of scientific and technological data collection is evolving rapidly: data volumes and rates grow exponentially, with increasing complexity and information content, and there has been a transition from static data sets to data streams that must be analyzed in real time. Interesting or anomalous phenomena must be quickly characterized and followed up with additional measurements via optimal deployment of limited assets. Modern astronomy presents a variety of such phenomena in the form of transient events in digital synoptic sky surveys, including cosmic explosions (supernovae, gamma-ray bursts), relativistic phenomena (black hole formation, jets), potentially hazardous asteroids, etc. We have been developing a set of machine learning tools to detect, classify, and plan a response to transient events for astronomy applications, using the Catalina Real-time Transient Survey (CRTS) as a scientific and methodological testbed. The ability to respond rapidly to the potentially most interesting events is a key bottleneck that limits the scientific returns from current and anticipated synoptic sky surveys. A similar challenge arises in other contexts, from environmental monitoring using sensor networks to autonomous spacecraft systems. Given the exponential growth of data rates and the need for time-critical response, a fully automated and robust approach is required. We describe the results obtained to date and possible future developments.

3.
We explore contextual and dispositional correlates of the motivation to contribute to open source initiatives. We examine how the context of the open source project, and the personal values of contributors, are related to the types of motivations for contributing. A web-based survey was administered to 300 contributors in two prominent open source contexts: software and content. As hypothesized, software contributors placed a greater emphasis on reputation-gaining and self-development motivations, compared with content contributors, who placed a greater emphasis on altruistic motives. Furthermore, the hypothesized relationships were found between contributors’ personal values and their motivations for contributing.

4.
Access to legal information and, in particular, to legal literature is examined with a view to creating a search and retrieval system for Italian legal literature. The design and implementation of services such as integrated access to a wide range of resources are described, with a particular focus on the importance of exploiting metadata assigned to disparate legal material. The integration of structured repositories and Web documents is the main purpose of the system: it is built as a federation system with service-provider functions, aiming at creating a centralized index of legal resources. The index is based on a uniform metadata view, created for structured data by means of the OAI approach and for Web documents by a machine learning approach, which is assessed in this paper with regard to document classification. Semantic searching is a major requirement for legal literature users, and a solution based on the exploitation of Dublin Core metadata, as well as on legal ontologies and related terms prepared for accessing indexed articles, has been implemented.

5.
Most data-mining algorithms assume static behavior of the incoming data. In the real world, the situation is different: most continuously collected data streams are generated by dynamic processes, which may change over time, in some cases even drastically. A change in the underlying concept, also known as concept drift, causes the data-mining model generated from past examples to become less accurate and relevant for classifying the current data. Most online learning algorithms deal with concept drift by generating a new model every time a drift is detected. On the one hand, this solution ensures accurate and relevant models at all times, implying an increase in classification accuracy. On the other hand, it suffers from a major drawback: the high computational cost of generating new models. The problem worsens when concept drift is detected frequently, and hence a compromise between computational effort and accuracy is needed. This work describes a series of incremental algorithms that are shown empirically to produce more accurate classification models than batch algorithms in the presence of concept drift, while being computationally cheaper than existing incremental methods. The proposed incremental algorithms are based on an advanced decision-tree learning methodology called “Info-Fuzzy Network” (IFN), which is capable of inducing compact and accurate classification models. The algorithms are evaluated on real-world streams of traffic and intrusion-detection data.
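The compromise between cheap incremental updates and costly rebuilds can be illustrated with a generic sketch (not the paper's Info-Fuzzy Network): a linear model is updated per chunk and rebuilt only when a crude accuracy-drop heuristic signals drift. The drift threshold and the simulated stream are assumptions for the demo:

```python
# Generic test-then-train loop with a naive drift trigger; the IFN algorithm
# in the paper is replaced here by an SGD classifier for brevity.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])
baseline = None                     # accuracy right after the last (re)build

for chunk_id in range(20):
    X = rng.normal(size=(200, 5))
    drifted = chunk_id >= 10        # simulate a concept drift halfway through
    y = (X[:, 1] > 0).astype(int) if drifted else (X[:, 0] > 0).astype(int)

    if baseline is not None:
        acc = model.score(X, y)     # test-then-train evaluation
        if acc < baseline - 0.2:    # crude drift signal: accuracy collapsed
            model = SGDClassifier(loss="log_loss")   # costly rebuild path
            baseline = None
    model.partial_fit(X, y, classes=classes)         # cheap incremental path
    if baseline is None:
        baseline = model.score(X, y)
```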

6.
Incremental learning has been developed for supervised classification, where knowledge is accumulated incrementally and represented in the learning process. However, labeling sufficient samples in each data chunk is costly, and incremental techniques are seldom discussed in the semi-supervised paradigm. In this paper we advance an Incremental Semi-Supervised classification approach via Self-Representative Selection (IS3RS) for data stream classification, which explores both the labeled and unlabeled dynamic samples. An incremental self-representative data selection strategy is proposed to find the most representative exemplars in each sequential data chunk. These exemplars are incrementally labeled to expand the training set, accumulating knowledge over time to benefit future predictions. Extensive experimental evaluations on several benchmarks demonstrate the effectiveness of the proposed framework.
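A hedged sketch of such a loop, with k-means centroids standing in for the paper's self-representative selection strategy; the clustering, the pseudo-labelling model, and the data are illustrative:

```python
# Per chunk: pick the unlabeled points closest to cluster centres as
# "representative exemplars", pseudo-label them, and grow the training set.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
X_train = rng.normal(size=(30, 4))
y_train = (X_train[:, 0] > 0).astype(int)
clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

for _ in range(5):                                   # incoming data chunks
    chunk = rng.normal(size=(200, 4))
    km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(chunk)
    # exemplar = point nearest to each centroid (most "representative")
    idx = [np.argmin(((chunk - c) ** 2).sum(axis=1))
           for c in km.cluster_centers_]
    exemplars = chunk[idx]
    pseudo = clf.predict(exemplars)                  # pseudo-label exemplars
    X_train = np.vstack([X_train, exemplars])        # accumulate knowledge
    y_train = np.concatenate([y_train, pseudo])
    clf = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)
```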

7.
We introduce the task of mapping search engine queries to DBpedia, a major linking hub in the Linking Open Data cloud. We propose and compare various methods for addressing this task, using a mixture of information retrieval and machine learning techniques. Specifically, we present a supervised machine-learning-based method to determine which concepts are intended by a user issuing a query. The concepts are obtained from an ontology and may be used to provide contextual information, related concepts, or navigational suggestions to the user submitting the query. Our approach first ranks candidate concepts using a language-modeling framework for information retrieval. We then extract query, concept, and search-history feature vectors for these concepts. Using manual annotations, we train a machine learning algorithm that learns how to select concepts from the candidates given an input query. Simply performing a lexical match between queries and concepts is found to perform poorly, as does using retrieval alone, i.e., omitting the concept selection stage. Our proposed method significantly improves upon these baselines, and we find that support vector machines achieve the best performance of the machine learning algorithms evaluated.
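A toy version of the two-stage pipeline, with TF-IDF retrieval standing in for the language-modeling ranker and an SVM doing the concept selection; the concepts, features, and training pairs are invented for the demo:

```python
# Stage 1 retrieves candidate concepts; stage 2 classifies each candidate as
# intended or not. Not the paper's feature set, just the same shape.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.svm import SVC

concepts = {
    "dbpedia:Jaguar": "jaguar big cat felid animal speed",
    "dbpedia:Jaguar_Cars": "jaguar british car manufacturer vehicle dealer",
    "dbpedia:Apple": "apple fruit tree",
}
names, texts = list(concepts), list(concepts.values())
vec = TfidfVectorizer().fit(texts)
C = vec.transform(texts)

def candidates(query, k=2):
    """Stage 1: retrieval-style ranking of candidate concepts."""
    sims = cosine_similarity(vec.transform([query]), C)[0]
    return [(names[i], sims[i]) for i in sims.argsort()[::-1][:k]]

def feats(query, name, score):
    """Stage 2 features: [retrieval score, query/concept term overlap]."""
    overlap = len(set(query.split()) & set(concepts[name].split()))
    return [score, overlap]

train = [("jaguar speed animal", "dbpedia:Jaguar", 1),
         ("jaguar speed animal", "dbpedia:Jaguar_Cars", 0),
         ("jaguar price dealer", "dbpedia:Jaguar_Cars", 1),
         ("jaguar price dealer", "dbpedia:Jaguar", 0)]
X = [feats(q, n, dict(candidates(q)).get(n, 0.0)) for q, n, _ in train]
svm = SVC().fit(X, [label for _, _, label in train])

query = "jaguar dealer near me"
print([(n, int(svm.predict([feats(query, n, s)])[0]))
       for n, s in candidates(query)])
```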

8.
Data mining techniques are traditionally divided into two distinct disciplines depending on the task to be performed by the algorithm: supervised learning and unsupervised learning. While the former aims at making accurate predictions after learning an underlying structure in the data, which requires the presence of a teacher during the learning phase, the latter aims at discovering regularly occurring patterns beneath the data without making any a priori assumptions about their underlying structure. The pure supervised model can construct a very accurate predictive model from data streams. However, in many real-world problems this paradigm may be ill-suited due to (1) the dearth of training examples and (2) the cost of labeling the information required to train the system. A representative use case arises when defining data replication and partitioning policies for storing the data that emerges in the Smart Grid domain, in order to adapt electric networks to current application demands (e.g., real-time consumption, network self-adaptation). As opposed to classic electrical architectures, Smart Grids encompass a fully distributed scheme with several diverse data generation sources. Current data storage and replication systems fail both to cope with such an overwhelming amount of heterogeneous data and to satisfy the stringent requirements posed by this technology (i.e., the dynamic nature of the physical resources, continuous flows of information, and demands for autonomous behavior). The purpose of this paper is to apply unsupervised learning techniques to enhance the performance of data storage in Smart Grids. More specifically, we have improved the eXtended Classifier System for Clustering (XCSc) algorithm to present a hybrid system that mixes data replication and partitioning policies by means of an online clustering approach. Conducted experiments show that the proposed system outperforms previous proposals and truly fits the Smart Grid premises.
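As a minimal illustration of the online-clustering ingredient (plain sequential k-means here, not the XCSc classifier system used in the paper), each incoming measurement is routed to a cluster that can be read as a candidate partition or replication group:

```python
# Sequential k-means over a continuous flow of readings: assign each sample
# to the nearest prototype and nudge that prototype with a running-mean step.
import numpy as np

rng = np.random.default_rng(2)
k, dim = 3, 2
centroids = rng.normal(size=(k, dim))      # random initial prototypes
counts = np.zeros(k)

def assign(x):
    """Route one sample to its partition and update that centroid online."""
    j = int(np.argmin(((centroids - x) ** 2).sum(axis=1)))
    counts[j] += 1
    centroids[j] += (x - centroids[j]) / counts[j]   # running-mean update
    return j

for _ in range(1000):                      # continuous flow of readings
    sample = rng.normal(loc=rng.choice([-3.0, 0.0, 3.0]), size=dim)
    partition = assign(sample)
```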

9.
With the growing popularity of Internet of Things (IoT) technologies and sensor deployments, more and more cities are leaning towards smart-city solutions that can leverage this rich source of streaming data to gather knowledge for solving domain-specific problems. A key challenge in this respect is the ability to automatically discover and integrate heterogeneous sensor data streams on the fly for applications to use. To provide a domain-independent platform and take full advantage of semantic technologies, in this paper we present an Automated Complex Event Implementation System (ACEIS), which serves as a middleware between sensor data streams and smart-city applications. ACEIS not only automatically discovers and composes IoT streams in urban infrastructures for users’ requirements expressed as complex event requests, but also automatically generates stream queries to detect the requested complex events, bridging the gap between high-level application users and low-level information sources. We also demonstrate the use of ACEIS in a smart travel-planner scenario using real-world sensor devices and datasets.
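A toy sketch of the kind of complex-event query such a middleware generates: two low-level streams are joined in a time window, and a user-level event fires when both conditions hold. The stream names, fields, thresholds, and the rule itself are invented; ACEIS derives such queries automatically from complex event requests:

```python
# Windowed join of two sensor streams with a hand-written complex-event rule.
from collections import deque

WINDOW = 300                       # seconds
traffic, rain = deque(), deque()

def expire(q, now):
    while q and q[0][0] < now - WINDOW:
        q.popleft()

def on_event(stream, t, value):
    q = traffic if stream == "traffic" else rain
    q.append((t, value))
    expire(traffic, t); expire(rain, t)
    # complex event: slow traffic and heavy rain observed in the same window
    if any(v < 10 for _, v in traffic) and any(v > 20 for _, v in rain):
        print(f"t={t}: complex event 'jam in heavy rain' detected")

on_event("traffic", 100, 8)        # speed 8 km/h -> slow
on_event("rain", 250, 25)          # 25 mm/h -> heavy; fires the event
```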

10.
Quickly, accurately, and comprehensively locating sentiment orientations in large volumes of Internet text is a major challenge currently facing the big data field. Text sentiment classification methods fall roughly into two categories: those based on semantic understanding and those based on supervised machine learning. The advantage of semantic understanding for sentiment classification is that it can classify text from any domain, but it is easily affected by the varied sentence patterns and collocations of Chinese, so its classification precision is low. Supervised machine learning can achieve fairly high sentiment classification precision, but a classifier that performs well in one domain does not adapt to sentiment classification in new domains. Building on the use of information gain for feature-dimensionality reduction of high-dimensional text, this work combines optimized semantic understanding with machine learning and designs a new hybrid algorithmic framework for Chinese sentiment classification. Multiple groups of comparative experiments based on this framework verify high and stable classification precision for text across different domains.
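A hedged sketch of the framework's two ingredients, using mutual information as the information-gain-style feature cut and a tiny hand-made lexicon standing in for the semantic-understanding side; the corpus and lexicon are invented for illustration:

```python
# Reduce a bag-of-words with mutual information, fuse a lexicon polarity
# score as a semantic feature, and train one classifier on the hybrid view.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression

docs = ["good great excellent service", "awful terrible slow service",
        "great fast excellent food", "terrible awful experience",
        "excellent good experience", "slow awful food"]
y = np.array([1, 0, 1, 0, 1, 0])                # 1 = positive sentiment

# machine-learning side: bag of words cut down by mutual information
X = CountVectorizer().fit_transform(docs).toarray()
X_red = SelectKBest(mutual_info_classif, k=4).fit_transform(X, y)

# semantic-understanding side: a (tiny, invented) polarity lexicon score
lexicon = {"good": 1, "great": 1, "excellent": 1, "awful": -1, "terrible": -1}
sem = np.array([[sum(lexicon.get(w, 0) for w in d.split())] for d in docs])

X_hybrid = np.hstack([X_red, sem])              # fuse both feature views
clf = LogisticRegression().fit(X_hybrid, y)
print(clf.score(X_hybrid, y))                   # accuracy on the toy set
```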

11.
Machine Learning (ML) applications need large volumes of data to train their models so that they can make high-quality predictions. Given digital-revolution enablers such as the Internet of Things (IoT) and Industry 4.0, this information is generated in large quantities as continuous data streams rather than as static datasets, which is what most AI (Artificial Intelligence) frameworks expect. Kafka-ML is a novel open-source framework that allows the complete management of ML/AI pipelines through data streams. In this article, we present new features of the Kafka-ML framework, such as support for the well-known ML/AI framework PyTorch, as well as for GPU acceleration at different points along the pipeline. The pipeline is described through a real Industry 4.0 use case in the petrochemical industry. Finally, a comprehensive evaluation with state-of-the-art deep learning models is carried out to demonstrate the feasibility of the platform.
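A generic sketch of the stream-to-model idea, not Kafka-ML's own API: feature vectors are consumed from a Kafka topic with kafka-python and scored by a PyTorch model, on GPU when available. The topic name, broker address, JSON message layout, and the stand-in model are assumptions for the demo:

```python
# Consume a Kafka stream and run PyTorch inference per message; Kafka-ML
# automates pipelines like this, but this code is a hand-rolled stand-in.
import json
import torch
from kafka import KafkaConsumer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = torch.nn.Sequential(                  # untrained stand-in model;
    torch.nn.Linear(8, 16), torch.nn.ReLU(),  # a real pipeline would load
    torch.nn.Linear(16, 1), torch.nn.Sigmoid(),  # trained weights here
).to(device).eval()

consumer = KafkaConsumer(
    "sensor-features",                        # hypothetical topic
    bootstrap_servers="localhost:9092",       # assumed broker address
    value_deserializer=lambda m: json.loads(m.decode("utf-8")),
)

with torch.no_grad():
    for msg in consumer:                      # endless stream of readings
        x = torch.tensor(msg.value["features"], dtype=torch.float32,
                         device=device).unsqueeze(0)
        print(model(x).item())                # per-message score
```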

12.
An Optimized Data Stream Clustering Algorithm under the Sliding Window Model (cited in total: 2; self-citations: 0; citations by others: 2)
胡彧, 闫巧梅. 《计算机应用》 (Journal of Computer Applications), 2008, 28(6): 1414-1416
To improve clustering quality and efficiency on evolving data streams, a clustering-feature exponential histogram is adopted to support data processing, reducing the number of histogram structures that must be maintained, and the data stream clustering algorithm under the sliding window is thereby improved. Experiments show that, compared with traditional clustering algorithms based on the landmark model, the optimized algorithm achieves better efficiency, lower memory overhead, and faster data processing, extending the range of applications of stream data mining techniques.
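The exponential-histogram idea behind such algorithms can be sketched with the classic counting variant (Datar et al.), not the paper's clustering-feature version: buckets of exponentially growing size approximate a count over the last W items while keeping only a logarithmic number of structures to maintain:

```python
# Classic exponential histogram: approximate count of 1s in a sliding window.
class ExpHistogram:
    def __init__(self, window: int, k: int = 2):
        self.window, self.k = window, k
        self.buckets = []            # (newest_timestamp, size), newest first
        self.now = 0

    def add(self, bit: int):
        self.now += 1
        # drop buckets whose newest element slid out of the window
        while self.buckets and self.buckets[-1][0] <= self.now - self.window:
            self.buckets.pop()
        if bit:
            self.buckets.insert(0, (self.now, 1))
            self._merge()

    def _merge(self):
        size = 1
        while True:                  # cascade merges up the size levels
            same = [i for i, (_, s) in enumerate(self.buckets) if s == size]
            if len(same) <= self.k:
                return
            i, j = same[-2], same[-1]            # two oldest of this size
            self.buckets[i] = (self.buckets[i][0], size * 2)
            del self.buckets[j]
            size *= 2

    def count(self) -> int:
        if not self.buckets:
            return 0
        total = sum(s for _, s in self.buckets)
        return total - self.buckets[-1][1] // 2  # classic EH estimate

eh = ExpHistogram(window=100)
for _ in range(1000):
    eh.add(1)
print(eh.count())                    # ~100, from only a handful of buckets
```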

13.
To address problems in the mining domain such as the difficulty of acquiring prior knowledge when fusing heterogeneous data, the poor real-time performance of IoT ontology bases, and the low efficiency of manually annotating instance data, an automatic semantic annotation method for the semantic IoT in mines is proposed. A framework for the semantic processing of sensor data is given. On the one hand, the professional domain and scope of the ontology are determined, and the domain ontology is built by reusing the Stream Annotation Ontology (SAO), serving as the basis for driving semantic annotation; on the other hand, machine learning methods are used for feature extraction and analysis of the sensor data streams, mining relationships between concepts from massive data. The knowledge obtained by data mining drives the updating and refinement of the ontology, achieving dynamic ontology updating, expansion, and more precise semantic annotation, and enhancing machine understanding. Taking the main-shaft fault of a mine hoisting system as an example, the semantic annotation process from ontology to instantiation is described. Combining domain expert knowledge with ontology reuse, the seven-step method is used to build the main-drive fault ontology of the mine hoisting system; to improve the accuracy of instance data property descriptions, principal component analysis (PCA) and K-means clustering are used to reduce the dimensionality of the dataset and group it, extracting the relationships between data properties and concepts; the Semantic Web Rule Language (SWRL) is then used to annotate the relations between specific antecedent conditions and consequent concepts, optimizing the domain ontology. Experimental results show that, in the ontology instantiation process, machine learning can be used to automatically extract concepts from sensor data and achieve automatic semantic annotation of sensor data.
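A minimal sketch of the PCA-plus-K-means analysis step on synthetic data; the real pipeline feeds mine sensor streams, and the cluster-to-concept mapping would be expressed with SWRL rules:

```python
# PCA reduces high-dimensional sensor features, K-means groups them, and
# each group becomes a candidate ontology concept (e.g., a fault state).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# synthetic high-dimensional vibration/temperature features
normal = rng.normal(0.0, 1.0, size=(100, 20))
faulty = rng.normal(3.0, 1.0, size=(100, 20))
X = np.vstack([normal, faulty])

X_low = PCA(n_components=2).fit_transform(X)      # dimensionality reduction
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_low)

# each cluster is a candidate concept to annotate (via SWRL rules linking
# antecedent sensor conditions to a consequent fault concept)
for c in range(2):
    print(f"cluster {c}: {np.sum(labels == c)} samples")
```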

14.
The rapid evolution of technology has led to the generation of high-dimensional data streams in a wide range of fields, such as genomics, signal processing, and finance. The combination of the streaming scenario and high dimensionality is particularly challenging for the outlier detection task. This is due to the special characteristics of data streams, such as concept drift and limited time and space budgets, in addition to the impact of the well-known curse of dimensionality in high-dimensional spaces. To the best of our knowledge, few studies have addressed these challenges simultaneously, and detecting anomalies in this context therefore requires a great deal of attention. The main objective of this work is to study the main approaches existing in the literature and to identify a set of comparison criteria, such as computational cost and the interpretability of outliers, which help reveal the different challenges and additional research directions associated with this problem. The study concludes with a summary that outlines the main limitations identified and details the research directions related to this issue, in order to promote research in this community.

15.
Can a Knowledge-Level layer be located in the Semantic Grid infrastructure? Is it possible to design an Agent Communication Language (ACL) which enables Knowledge-Level agents to cooperate in a geographically distributed Semantic Grid despite node failures or malfunctions? This paper addresses these Semantic Grid challenges by presenting an agent-based Open Service Architecture which integrates geographically distributed agents in a Semantic Grid. The architecture is well integrated with standard Internet components and technologies and supports communication among Knowledge-Level agents. The role of agents is to retrieve, execute, and compose available services, providing more sophisticated instances of them. Inter-agent communication is realized by exploiting an advanced Agent Communication Language which supports a fault-tolerant anonymous interaction protocol and satisfies a set of well-defined Knowledge-Level programming requirements. We present the design of the architecture and of the Agent Communication Language, as well as their implementation. The architecture is evaluated by means of several case studies which highlight the main features of our proposal. The main advantage of our approach is to demonstrate that different issues, such as high-level inter-agent communication and fault tolerance, can be successfully integrated in Grid infrastructures that provide Web Services, while maintaining a clean design of the architecture and a Knowledge-Level characterization.

16.
In distributed data streams, correlation analysis between streams can reveal intrinsic relationships among the monitored objects. A basic-window-based method for computing the correlation coefficient is proposed. The method first rewrites the correlation-coefficient formula into factors suitable for per-basic-window aggregation, and then aggregates each factor using the basic-window-based approach, which partitions the data items in a window into a series of basic windows and computes over each basic window separately. After the window slides, the aggregation over the new window can partially reuse the results of the previous window's aggregation. Simulation experiments show that, compared with re-aggregating all the data in the window each time, the basic-window-based method effectively reduces the time needed to compute correlation coefficients over data streams.
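A sketch of the decomposition: Pearson's r is rewritten in terms of the aggregable factors n, Σx, Σy, Σxy, Σx², and Σy², each pre-computed per basic window and recombined when the window slides, so only the newly arrived basic window has to be scanned. The window sizes and the simulated streams are illustrative:

```python
# Per-basic-window summaries are combined into Pearson's r without
# rescanning old data; sliding drops the oldest summary, adds the newest.
import math
import random
from collections import deque

def summarize(xs, ys):
    """Aggregate one basic window into the six correlation factors."""
    return (len(xs), sum(xs), sum(ys),
            sum(x * y for x, y in zip(xs, ys)),
            sum(x * x for x in xs), sum(y * y for y in ys))

def correlation(windows):
    """Combine per-basic-window summaries into Pearson's r."""
    n, sx, sy, sxy, sxx, syy = (sum(c) for c in zip(*windows))
    num = n * sxy - sx * sy
    den = math.sqrt(n * sxx - sx * sx) * math.sqrt(n * syy - sy * sy)
    return num / den

random.seed(0)
windows = deque(maxlen=4)             # sliding window = 4 basic windows
for _ in range(10):                   # stream arrives one basic window at a time
    xs = [random.random() for _ in range(25)]
    ys = [x + 0.1 * random.random() for x in xs]
    windows.append(summarize(xs, ys)) # only the new basic window is scanned
    if len(windows) == windows.maxlen:
        print(round(correlation(windows), 4))
```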

17.
Many engineering fields now produce large volumes of high-speed, real-time streaming data, and association-rule mining over streaming data is widely applied. Compared with traditional static data, association analysis over streaming data faces severe resource challenges. This paper presents a formal definition of association rules over streaming data together with basic mining algorithms, systematically reviews recent research progress in association-rule mining over streaming data, analyzes in detail the main problems in current mining algorithms and the approaches to solving them, and outlines future research directions.

18.
The relational database model is widely used in real applications. We propose a way of complementing such a database with an XML data warehouse. The approach we propose is generic and driven by a domain ontology. The XML data warehouse is built from data extracted from the Web, which are semantically tagged using terms belonging to the domain ontology. The semantic tagging is fuzzy: instead of tagging the values of a Web document with a single term from the domain ontology, we propose tags expressed as a possibility distribution representing a set of possible terms, each term being weighted by a possibility degree. The querying of the XML data warehouse is also fuzzy: end-users can express their preferences by means of fuzzy selection criteria. We present our approach on a first application domain: predictive microbiology.
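A toy sketch of possibility-distribution tags and a fuzzy selection over them; the terms, degrees, and threshold are invented for illustration (the domain echoes the paper's predictive-microbiology application):

```python
# A tag is a possibility distribution (term -> degree); a fuzzy query keeps
# documents whose possibility for the preferred term exceeds a threshold.
docs = [
    {"id": "d1", "tag": {"Listeria": 1.0, "Salmonella": 0.4}},
    {"id": "d2", "tag": {"Salmonella": 0.9, "E_coli": 0.6}},
]

def fuzzy_select(documents, term, threshold=0.5):
    """Return (id, degree) pairs whose possibility for `term` passes the cut."""
    hits = [(d["id"], d["tag"].get(term, 0.0)) for d in documents]
    return [(i, deg) for i, deg in hits if deg >= threshold]

print(fuzzy_select(docs, "Salmonella"))   # -> [('d2', 0.9)]
```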

19.
We present a method for the classification of multi-labeled text documents explicitly designed for data stream applications that require processing a virtually infinite sequence of data using constant memory and constant processing time. Our method is composed of an online procedure used to efficiently map text into a low-dimensional feature space and a partition of this space into a set of regions for which the system extracts and keeps statistics used to predict multi-label text annotations. Documents are fed into the system as a sequence of words, mapped to a region of the partition, and annotated using the statistics computed from the labeled instances colliding in the same region. This approach is referred to as clashing. We illustrate the method on real-world text data, comparing the results with those obtained using other text classifiers. In addition, we provide an analysis of the effect of the representation-space dimensionality on the predictive performance of the system. Our results show that the online embedding indeed approximates the geometry of the full corpus-wise TF and TF-IDF space. The model obtains competitive F measures with respect to the most accurate methods, using significantly fewer computational resources. In addition, the method achieves a higher macro-averaged F measure than methods with similar running time. Furthermore, the system is able to learn faster than the other methods from partially labeled streams.
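A minimal sketch of the clashing idea: documents are hashed into a low-dimensional vector with constant memory, the quantized vector serves as a region key, and per-region label counts annotate new documents that collide in the same region. The hashing dimension, quantization rule, and demo texts are illustrative, not the paper's online embedding:

```python
# Feature hashing into a fixed-size vector; colliding documents share the
# same region and therefore the same label statistics.
import hashlib
from collections import defaultdict

DIM = 16
stats = defaultdict(lambda: defaultdict(int))   # region -> label -> count

def bucket(word: str) -> int:
    return int(hashlib.md5(word.encode()).hexdigest(), 16) % DIM

def region(text: str) -> tuple:
    vec = [0] * DIM
    for word in text.lower().split():           # one pass, constant memory
        vec[bucket(word)] += 1
    return tuple(min(v, 3) for v in vec)        # quantized vector = region key

def learn(text, labels):
    for lab in labels:
        stats[region(text)][lab] += 1

def predict(text, top=2):
    counts = stats[region(text)]
    return sorted(counts, key=counts.get, reverse=True)[:top]

learn("concept drift in data streams", ["streams", "classification"])
print(predict("data streams concept drift in"))  # same bag of words -> clash
```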

20.
In computing with words (CWW), knowledge is linguistically represented and has an explicit semantics defined through fuzzy information granules. The linguistic representation, in turn, naturally bears an implicit semantics that belongs to the users reading the knowledge base; hence a necessary condition for achieving interpretability is that the implicit and explicit semantics be cointensive. Interpretability requirements are particularly stringent when knowledge must be acquired from data through inductive learning. Therefore, in this paper we propose a methodology for designing interpretable fuzzy models through semantic cointension. We focus our analysis on fuzzy rule-based classifiers (FRBCs), where we observe that rules resemble logical propositions, so that semantic cointension can be partially regarded as the fulfillment of the “logical view”, i.e. the set of basic logical laws required in any logical system. The proposed approach is grounded on a pair of tools: DCf, which extracts interpretable classification rules from data, and Espresso, which is capable of fast minimization of Boolean propositions. Our research demonstrates that it is possible to design models that exhibit good classification accuracy combined with high interpretability in the sense of semantic cointension. Moreover, structural parameters that quantify model complexity show that the derived models are also simple enough to be read and understood.
