Similar Literature (20 results found)
1.
The Sixth China Conference on Health Information Processing (CHIP 2020) organized six shared tasks on Chinese medical information processing. Task 2 was the Chinese medical text entity and relation extraction task, whose main goal was to automatically extract entity-relation triples from Chinese medical texts. A total of 174 teams registered for the task, and 17 teams ultimately submitted 42 sets of results. The task used the micro-averaged F1 score as the final evaluation criterion; the highest F1 among the submissions reached 0.6486.
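A short sketch of the micro-averaged F1 metric mentioned as the evaluation criterion: true positives, false positives and false negatives are pooled over all relation types before computing precision and recall, which for triple extraction amounts to comparing the sets of gold and predicted triples. The triples below are invented examples, not CHIP 2020 data.

```python
def micro_f1(gold_triples, pred_triples):
    # pool all predictions and gold triples across relation types
    gold, pred = set(gold_triples), set(pred_triples)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = [("阿司匹林", "治疗", "头痛"), ("青霉素", "不良反应", "过敏")]
pred = [("阿司匹林", "治疗", "头痛"), ("青霉素", "治疗", "过敏")]
print(round(micro_f1(gold, pred), 4))   # 0.5
```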

2.
Swarm intelligence (SI) is briefly defined as the collective behaviour of decentralized and self-organized swarms. Well-known examples of such swarms are bird flocks, fish schools and colonies of social insects such as termites, ants and bees. In the 1990s, two approaches in particular, one based on ant colonies and one based on fish schooling and bird flocking, attracted strong interest from researchers. Although the self-organization features required by SI are strongly and clearly present in honey bee colonies, researchers only began to draw on the behaviour of these swarm systems to describe new intelligent approaches around the beginning of the 2000s. Over the following decade, several algorithms were developed based on different intelligent behaviours of honey bee swarms. Among these, the artificial bee colony (ABC) algorithm is the one that has so far been most widely studied and applied to real-world problems, and the number of researchers interested in it continues to grow rapidly. This work presents a comprehensive survey of the advances in ABC and its applications. It is hoped that this survey will be of value to researchers studying SI, and the ABC algorithm in particular.
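A minimal, generic sketch of the ABC loop (employed, onlooker and scout bee phases) under standard parameter choices and a simple sphere objective; it is an illustration of the algorithm family the survey covers, not a reference implementation from the paper.

```python
import numpy as np

def abc_minimize(f, bounds, n_sources=20, limit=50, max_iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    X = rng.uniform(lo, hi, size=(n_sources, dim))     # food sources (solutions)
    cost = np.apply_along_axis(f, 1, X)
    trials = np.zeros(n_sources, dtype=int)

    def fitness(c):                                     # standard ABC fitness transform
        return 1.0 / (1.0 + c) if c >= 0 else 1.0 + abs(c)

    def neighbour_search(i):
        k = rng.choice([m for m in range(n_sources) if m != i])
        j = rng.integers(dim)
        v = X[i].copy()
        v[j] = X[i, j] + rng.uniform(-1, 1) * (X[i, j] - X[k, j])
        v = np.clip(v, lo, hi)
        cv = f(v)
        if cv < cost[i]:                                # greedy selection
            X[i], cost[i], trials[i] = v, cv, 0
        else:
            trials[i] += 1

    for _ in range(max_iters):
        for i in range(n_sources):                      # employed bee phase
            neighbour_search(i)
        fits = np.array([fitness(c) for c in cost])
        probs = fits / fits.sum()
        for _ in range(n_sources):                      # onlooker bee phase
            neighbour_search(rng.choice(n_sources, p=probs))
        worn = int(np.argmax(trials))                   # scout bee phase
        if trials[worn] > limit:
            X[worn] = rng.uniform(lo, hi, size=dim)
            cost[worn] = f(X[worn])
            trials[worn] = 0
    best = int(np.argmin(cost))
    return X[best], cost[best]

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))
    x_best, c_best = abc_minimize(sphere, ([-5.0] * 5, [5.0] * 5))
    print(x_best, c_best)
```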

3.
Semantic Web technologies must integrate with Web 2.0 services for both to leverage each other's strengths. We argue that the REST-based design methodologies [R.T. Fielding, R.N. Taylor, Principled design of the modern web architecture, ACM Trans. Internet Technol. (TOIT) 2 (2) (2002) 115–150] of the web present the ideal mechanism through which to align the publication of semantic data with the existing web architecture. We present the design and implementation of two solutions that combine REST-based design and RDF [D. Beckett (Ed.), RDF/XML Syntax Specification (Revised), W3C Recommendation, February 10, 2004] data access: one solution for integrating existing web services and one server-side solution for creating RDF REST services. Both of these solutions enable SPARQL [E. Prud’hommeaux, A. Seaborne (Eds.), SPARQL Query Language for RDF, W3C Working Draft, March 26, 2007] to serve as a unifying data access layer for aligning the Semantic Web and Web 2.0.
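A minimal sketch of using SPARQL as a REST-style data access layer: a plain HTTP GET against a SPARQL endpoint, following the standard SPARQL protocol conventions (a `query` parameter and a JSON results format). The endpoint URL is hypothetical and is not one of the services described in the paper.

```python
import requests

ENDPOINT = "http://example.org/sparql"   # hypothetical endpoint
QUERY = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT ?name ?homepage WHERE {
  ?person a foaf:Person ;
          foaf:name ?name .
  OPTIONAL { ?person foaf:homepage ?homepage }
} LIMIT 10
"""

resp = requests.get(
    ENDPOINT,
    params={"query": QUERY},
    headers={"Accept": "application/sparql-results+json"},
)
resp.raise_for_status()
for binding in resp.json()["results"]["bindings"]:
    print(binding["name"]["value"], binding.get("homepage", {}).get("value"))
```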

4.
This paper describes a new modelling language for the effective design and validation of Java annotations. Since their inclusion in the 5th edition of Java, annotations have grown from a useful tool for adding meta-data to playing a central role in many popular software projects. They are usually not conceived in isolation but in groups, with dependency and integrity constraints between them. However, the native support provided by Java for expressing this design is very limited. To overcome these deficiencies and make explicit the rich conceptual model that lies behind a set of annotations, we propose a domain-specific modelling language. The proposal has been implemented as an Eclipse plug-in, including an editor and an integrated code generator that synthesises annotation processors. The environment also integrates a model finder, able to detect unsatisfiable constraints between different annotations and to provide examples of correct annotation usage for validation. The language has been tested on a real set of annotations from the Java Persistence API (JPA). Within this subset we have found rich semantics that are expressible with Ann but currently omitted by the Java language, which shows the benefits of Ann in a relevant field of application.

5.
Microformats: the next (small) thing on the semantic Web?   (Total citations: 2; self-citations: 0; citations by others: 2)
Clever application of existing XHTML elements and class attributes can make it easier to describe people, places, events, and other semistructured information in human-readable form. In this paper, the author takes a more detailed look at some examples of microformats, the general principles by which they can be constructed, and how a community of users is forming around these seemingly ad hoc specifications to advance the cause of what some call an alternative to the semantic Web, the "lowercase semantic Web".

6.
7.
Over the past two decades, human action recognition from video has been an important area of research in computer vision. Its applications include surveillance systems, human–computer interaction and various real-world applications in which one of the actors is a human being. A number of reviews have been written by several researchers in the context of human action recognition. However, there is a gap in the literature when it comes to methodologies of STIP-based detectors for human action recognition. This paper presents a comprehensive review of STIP-based methods for human action recognition. STIP-based detectors are robust in detecting interest points from video in the spatio-temporal domain. The paper also summarizes related public datasets useful for comparing the performance of various techniques.

8.
In the Internet of Things (IoT), data-producing entities sense their environment and transmit these observations to a data processing platform for further analysis. Applications can achieve context awareness by combining this sensed data, or by processing the combined data. Combining data can consist both of merging the dynamic sensed data and of fusing the sensed data with background and historical data. Semantics can aid in this task, as they have proven their use in data integration, knowledge exchange and reasoning. Semantic services that perform reasoning on the integrated sensed data, combined with background knowledge such as profile data, allow useful information to be extracted and support intelligent decision making. However, advanced reasoning on the combination of this sensed data and background knowledge is still hard to achieve. Furthermore, collaboration between semantic services allows complex decisions to be reached, but the dynamic composition of such collaborative workflows that can adapt to the current context has not yet received much attention. In this paper, we present MASSIF, a data-driven platform for the semantic annotation of and reasoning on IoT data. It allows the integration of multiple modular reasoning services that can collaborate in a flexible manner to facilitate complex decision-making processes. Data-driven workflows are enabled by letting services specify the data they would like to consume. After thorough processing, these services can decide to share their decisions with other consumers. By defining the data they would like to consume, services can operate on a subset of the data, improving reasoning efficiency. Furthermore, each of these services can integrate the consumed data with background knowledge in its own context model, for rapid intelligent decision making. To show the strengths of the platform, two use cases are detailed and thoroughly evaluated.
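A minimal sketch of the data-driven workflow idea described above: services declare the kinds of (semantically annotated) data they want to consume, a bus routes matching observations to them, and services may publish derived decisions back on the bus for other consumers. Class and type names are illustrative only, not the MASSIF API.

```python
from collections import defaultdict

class SemanticBus:
    def __init__(self):
        self.subscribers = defaultdict(list)   # data type -> consuming services

    def subscribe(self, data_type, service):
        self.subscribers[data_type].append(service)

    def publish(self, data_type, payload):
        for service in self.subscribers[data_type]:
            service.consume(data_type, payload, self)

class TemperatureReasoner:
    """Consumes raw sensor observations and publishes a derived decision."""
    def consume(self, data_type, payload, bus):
        if payload["celsius"] > 38.0:
            bus.publish("FeverAlert", {"patient": payload["patient"]})

class NotificationService:
    """Consumes decisions shared by other services."""
    def consume(self, data_type, payload, bus):
        print("notify caregiver:", payload["patient"])

bus = SemanticBus()
bus.subscribe("TemperatureObservation", TemperatureReasoner())
bus.subscribe("FeverAlert", NotificationService())
bus.publish("TemperatureObservation", {"patient": "p42", "celsius": 38.6})
```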

9.
This paper presents some novel theoretical results as well as practical algorithms and computational procedures for fuzzy relation equations (FRE), which significantly refine and improve on what has already been reported. In a previous paper, the authors proved that the problem of solving a system of fuzzy relation equations is NP-hard; it is therefore practically impossible to determine all minimal solutions for a large system if P ≠ NP. In this paper, an existence theorem is proven: there exists a special branch-point solution that is greater than all minimal solutions and less than the maximum solution. Such a branch-point solution can be calculated from the solution-base matrix, and a procedure for determining all branch-point solutions is designed. We also provide efficient algorithms capable of determining and searching for certain types of minimal solutions. We have thus obtained: (1) a fast algorithm to determine whether a solution is a minimal solution, (2) an algorithm to search for the minimal solutions that have at least a minimum value at a given component of the solution vector, and (3) a procedure for determining whether a system of fuzzy relation equations has a unique minimal solution. Other properties are also investigated.
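A minimal sketch of one standard fact about fuzzy relation equations under max-min composition: for the system max_j min(a_ij, x_j) = b_i, the greatest solution (when the system is consistent) is x_j = min_i (a_ij → b_i), where a → b is the Gödel implication (1 if a ≤ b, else b). This illustrates the "maximum solution" the abstract refers to; the branch-point and minimal-solution procedures of the paper are not reproduced here.

```python
import numpy as np

def goedel_implication(a, b):
    return np.where(a <= b, 1.0, b)

def greatest_solution(A, b):
    """Candidate greatest solution x_hat of A o x = b (max-min composition)."""
    # x_hat_j = min over i of (A[i, j] -> b[i])
    return goedel_implication(A, b[:, None]).min(axis=0)

def max_min_compose(A, x):
    return np.max(np.minimum(A, x[None, :]), axis=1)

A = np.array([[0.8, 0.3], [0.5, 0.9]])
b = np.array([0.6, 0.5])
x_hat = greatest_solution(A, b)
print(x_hat, "is a solution:", np.allclose(max_min_compose(A, x_hat), b))
```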

10.
We present the MATCH corpus, a unique data set of 447 dialogues in which 26 older and 24 younger adults interact with nine different spoken dialogue systems. The systems varied in the number of options presented and the confirmation strategy used. The corpus also contains information about the users’ cognitive abilities and detailed usability assessments of each dialogue system. The corpus, which was collected using a Wizard-of-Oz methodology, has been fully transcribed and annotated with dialogue acts and “Information State Update” (ISU) representations of dialogue context. Dialogue act and ISU annotations were performed semi-automatically. In addition to describing the corpus collection and annotation, we present a quantitative analysis of the interaction behaviour of older and younger users and discuss further applications of the corpus. We expect that the corpus will provide a key resource for modelling older people’s interaction with spoken dialogue systems.

11.
This research presents a comprehensive analysis of U.S. counties’ adoption of e-Government and the functions of the websites. By using content analysis methodology, the services and functions of U.S. county e-Government portals are scrutinized. The investigation instrument is based on political and technological theories, an e-Government stage model, and the review of literature. The research finds that U.S. counties’ adoption of e-Government is highly associated with certain socioeconomic factors; in addition, the functionalities of U.S. county e-Government portals are significantly related to six socioeconomic factors according to the multiple regression analysis. The research provides insights for government officials and practitioners to understand and improve e-Government practice. It also sheds light on e-Government research by bringing in a valuable research instrument and comprehensive data about e-Government adoption. The implications for future research are discussed.

12.
This paper describes a superconductivity corpus for materials informatics (SC-CoMIcs) tailored for the extraction of superconducting-material information from the literature. Few corpora currently exist for materials informatics, in contrast to the situation in biomedical informatics; in particular, there is no sizable corpus that can be used to assist the search for superconducting materials. SC-CoMIcs consists of 1,000 abstracts with manually annotated named entities, main materials, and relations/events associated with superconductivity. The main-material annotation in particular is a distinctive feature of our corpus: it can be regarded as a hub that binds implicitly related physical entities and properties in an abstract. We conduct named entity recognition, main material identification, and relation/event extraction experiments to determine the effectiveness of the corpus. The experimental results show F1 scores of approximately 74%–95%, 84% and 73%–97% for the named entity recognition, main material identification and relation extraction tasks, respectively. We also demonstrate that the extracted doping information is consistent with the well-known Hume–Rothery rules, which implies that the constructed corpus can provide an opportunity to revisit or discover physical-chemical rules from the literature.

13.
14.
15.
In this paper, we describe a first version of a system for statistical translation and present experimental results. The statistical translation approach uses two types of information: a translation model and a language model. The language model used is a standard bigram model. The translation model is decomposed into lexical and alignment models. After presenting the details of the alignment model, we describe the search problem and present a dynamic programming-based solution for the special case of monotone alignments. So far, the system has been tested on two limited-domain tasks for which a bilingual corpus is available: the EuTrans traveller task (Spanish–English, 500-word vocabulary) and the Verbmobil task (German–English, 3000-word vocabulary). We present experimental results on these tasks. In addition to the translation of text input, we also address the problem of speech translation and the suitable integration of the acoustic recognition process and the translation process.
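A simplified dynamic-programming sketch in the spirit of the monotone search described above: each source word is translated left to right, and hypotheses are scored with a lexicon model p(f|e) plus a bigram language model p(e|e'). The toy vocabulary and probability tables are invented for illustration and are not from the EuTrans or Verbmobil systems, whose exact recursion is richer.

```python
import math

lex = {                      # p(source word | target word), toy values
    ("casa", "house"): 0.9, ("casa", "home"): 0.6,
    ("la", "the"): 0.9,
}
bigram = {                   # p(target word | previous target word), toy values
    ("<s>", "the"): 0.5, ("the", "house"): 0.4, ("the", "home"): 0.2,
}
target_vocab = ["the", "house", "home"]

def logp(table, key):
    return math.log(table.get(key, 1e-6))

def monotone_decode(source):
    # best[e] = (score of best hypothesis ending in target word e, backpointer)
    best = {"<s>": (0.0, None)}
    history = []
    for f in source:
        new_best = {}
        for e in target_vocab:
            cands = [(s + logp(bigram, (prev, e)) + logp(lex, (f, e)), prev)
                     for prev, (s, _) in best.items()]
            new_best[e] = max(cands)
        history.append(new_best)
        best = new_best
    # backtrace the highest-scoring hypothesis
    e, (score, prev) = max(best.items(), key=lambda kv: kv[1][0])
    out = [e]
    for step in reversed(history[:-1]):
        e, prev = prev, step[prev][1]
        out.append(e)
    return list(reversed(out)), score

print(monotone_decode(["la", "casa"]))   # expected: ['the', 'house']
```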

16.
李博, 徐泽水, 秦勇 《控制与决策》2022, 37(6): 1583-1590
On the occasion of the 35th anniversary of Control and Decision (《控制与决策》), a comprehensive bibliometric analysis of the journal's publications from 1986 to 2020 is carried out with visualization tools (including VOSviewer and CiteSpace). First, the basic characteristics of the publications are explored, including a time-series analysis of publication volume, an analysis of collaboration among authors, an institution co-occurrence analysis, and an analysis of article impact. Keyword co-occurrence analyses of the corresponding publications are then performed for each time period using the visualization tools, highlighting …

17.
In this paper, a web service composition and execution framework is presented for semantically annotated web services. A monolithic approach to the automated web service composition and execution problem is chosen, which provides some benefits by separating the composition and execution phases. An AI planning method based on a logical formalism, Abductive Event Calculus, is used for the composition phase. This formalism allows a narrative of actions and temporal orderings to be generated with abductive planning techniques, given a goal. The functional properties of services, namely input/output/precondition/effect (IOPE) descriptions, are taken into consideration in the composition phase, while non-functional properties, namely Quality of Service (QoS) parameters, are used to select the most appropriate solution to be executed. The repository of OWL-S semantic web services is translated into Event Calculus axioms, and the plans found by the Abductive Event Calculus Planner are converted to graphs. These graphs can be ranked according to a score calculated from the defined quality-of-service parameters of the atomic services in the composition, to determine the optimal solution. The selected graph is converted to an OWL-S file, which is then executed.
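A minimal sketch of the QoS-based selection step: each candidate composition (a sequence of atomic services found by the planner) is scored from the QoS parameters of its services, and the best-scoring plan is chosen for execution. The attribute names, aggregation rules and weights below are hypothetical, not those defined by the framework.

```python
from dataclasses import dataclass

@dataclass
class ServiceQoS:
    name: str
    response_time: float    # seconds, lower is better (additive along the plan)
    cost: float             # price units, lower is better (additive)
    reliability: float      # probability of success (multiplicative)

def plan_score(plan, w_time=0.4, w_cost=0.2, w_rel=0.4):
    total_time = sum(s.response_time for s in plan)
    total_cost = sum(s.cost for s in plan)
    rel = 1.0
    for s in plan:
        rel *= s.reliability
    # simple weighted score: penalize time and cost, reward reliability
    return w_rel * rel - w_time * total_time - w_cost * total_cost

candidates = [
    [ServiceQoS("geocodeA", 0.3, 1.0, 0.99), ServiceQoS("routeA", 0.5, 2.0, 0.95)],
    [ServiceQoS("geocodeB", 0.2, 3.0, 0.90), ServiceQoS("routeB", 0.4, 1.0, 0.97)],
]
best = max(candidates, key=plan_score)
print([s.name for s in best], plan_score(best))
```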

18.
In this paper, a new method, named L-tree match, is presented for extracting data from complex data sources. Firstly, based on the data extraction logic presented in this work, a new data extraction model is constructed in which model components are structurally correlated via a generalized template. Secondly, a database-populating mechanism is built, along with the object-manipulating operations needed for flexible database design, to support data extraction from huge text streams. Thirdly, top-down and bottom-up strategies are combined to design a new extraction algorithm that can extract data from data sources with optional, unordered, nested, and/or noisy components. Lastly, the method is applied to extract accurate data from 100 GB of biological documents for the first online integrated biological data warehouse in China.

19.
Computers & Geosciences, 2003, 29(9): 1101–1110
The three-dimensional reconstruction of basin sediments has become a major topic in the earth sciences and is now a necessary step for modelling and understanding the depositional context of sediments. Because data are generally scattered, the construction of any irregular, continuous surface involves the interpolation of a large number of points onto a regular grid. However, interpolation is a highly technical specialty that is still somewhat of a black art for most people. The lack of multi-platform contouring software that is easy to use, fast and automatic, without numerous abstruse parameters, motivated the development of a program called ISOPAQ. This program is an interactive desktop tool for the spatial analysis, interpolation and display (location, contour and surface mapping) of earth science data, especially stratigraphic data. It handles four-dimensional data sets, where the dimensions are usually longitude, latitude, thickness and time, stored in a single text file. The program uses functions written for the MATLAB® software. Data are managed through a user-friendly graphical interface, which allows the user to interpolate and generate maps for stratigraphic analyses. The program can apply and compare several interpolation methods (nearest neighbour, linear and cubic triangulations, inverse distance and surface splines) and some stratigraphic treatments, such as the decompaction of sediments. Moreover, the window interface helps the user to easily change parameters such as coordinates, grid cell size, contour-line equidistance and scale between files. Although primarily developed, through its graphical user interface, for non-specialists in interpolation, the program can easily be extended by practitioners with their own functions, since it is written in the open MATLAB language. As an example, the program is applied here to the Bajocian stratigraphic sequences of eastern France.
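A short Python analogue (ISOPAQ itself is written in MATLAB) of the core step the abstract describes: interpolating scattered thickness measurements onto a regular grid and comparing interpolation methods. It uses scipy.interpolate.griddata on synthetic sample points, not the program's own routines or data.

```python
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)
points = rng.uniform(0, 100, size=(200, 2))                        # scattered (x, y) locations
thickness = np.sin(points[:, 0] / 20) * 10 + points[:, 1] * 0.1    # synthetic thickness values

# regular grid to interpolate onto
gx, gy = np.meshgrid(np.linspace(0, 100, 101), np.linspace(0, 100, 101))

grids = {
    method: griddata(points, thickness, (gx, gy), method=method)
    for method in ("nearest", "linear", "cubic")
}
for method, grid in grids.items():
    print(method, "NaN cells outside convex hull:", int(np.isnan(grid).sum()))
```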

20.
In this paper, we introduce new architectures of genetically oriented fuzzy relation neural networks (FrNNs) and offer a comprehensive design methodology that supports their development. The proposed FrNNs are based on “if–then”-rule networks with an extended structure of the premise and consequence parts of the individual rules. We consider two types of FrNN topology, called FrNN-I and FrNN-II here, depending upon the usage of the inputs in the premise and the consequence of the fuzzy rules. Three different forms of regression polynomial (namely constant, linear, and quadratic) are used to construct the consequences of the rules. In order to develop optimal FrNNs, the structure and the parameters are optimized using genetic algorithms (GAs). The proposed methodology is compared under two development strategies, in which structure and parameters are optimized either separately or simultaneously. Given the large search space associated with these FrNN models, we enhance the search capabilities of the GAs by introducing dynamic variants of genetic optimization, which fully exploit the processing capabilities of the FrNNs by supporting their structural and parametric optimization. To evaluate the performance of the proposed FrNNs, we use a suite of representative numerical examples. A comparative analysis shows that the FrNNs exhibit higher accuracy and predictive capability as well as better modelling stability when compared with some other models in the literature.
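A generic sketch of the building block the abstract refers to: "if–then" fuzzy rules whose consequences are regression polynomials (here, linear), with the output computed as the membership-weighted average of the rule consequents. The rule parameters are invented for illustration; this is not the FrNN-I or FrNN-II topology itself, nor the genetic optimization step.

```python
import numpy as np

def gaussian_mf(x, center, sigma):
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

# each rule: premise = Gaussian membership per input, consequent = a0 + a1*x1 + a2*x2
rules = [
    {"centers": np.array([0.0, 0.0]), "sigmas": np.array([1.0, 1.0]),
     "coeffs": np.array([0.5, 1.0, -0.3])},
    {"centers": np.array([2.0, 1.0]), "sigmas": np.array([1.5, 1.0]),
     "coeffs": np.array([-0.2, 0.4, 0.8])},
]

def infer(x):
    x = np.asarray(x, dtype=float)
    firing = np.array([np.prod(gaussian_mf(x, r["centers"], r["sigmas"])) for r in rules])
    consequents = np.array([r["coeffs"][0] + r["coeffs"][1:] @ x for r in rules])
    return float(firing @ consequents / firing.sum())

print(infer([0.5, 0.2]))
```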
