Similar literature
20 similar documents found (search time: 6 ms)
1.
In reverse engineering, the information exchanged between surface digitization and model reconstruction travels through a single channel, and auxiliary information useful for subsequent modeling (implicit information) is lost because current data file formats cannot store or transfer it, which makes later model reconstruction difficult. To address this, a semantics-based measurement information transfer method is proposed: the implicit information captured during measurement that benefits model reconstruction is analyzed, a semantic information model is built, the semantic information is stored in an IGES-like data format, and the encapsulation and parsing of semantic information as well as semantics-based data preprocessing and model reconstruction are described. Experiments show that the semantics-based measurement information transfer method can effectively improve the efficiency of CAD model reconstruction.
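As an illustration of the storage idea, the sketch below round-trips one measurement annotation through a simple line-oriented record format in the spirit of the IGES-like semantic storage described above. The `SEM` record layout, field names, and annotation schema are illustrative assumptions, not the paper's actual format.

```python
# Hypothetical record format: SEM;<type>;<key=value,...>;<point id list>
# (an assumption for illustration, not the paper's IGES-like layout).

def encode(annotations):
    """Serialize a list of {type, params, point_ids} annotations."""
    lines = []
    for a in annotations:
        pts = ",".join(str(i) for i in a["point_ids"])
        params = ",".join(f"{k}={v}" for k, v in sorted(a["params"].items()))
        lines.append(f"SEM;{a['type']};{params};{pts}")
    return "\n".join(lines)

def decode(text):
    """Parse the records back into annotation dictionaries."""
    annotations = []
    for line in text.splitlines():
        _, typ, params, pts = line.split(";")
        annotations.append({
            "type": typ,
            "params": dict((kv.split("=")[0], float(kv.split("=")[1]))
                           for kv in params.split(",") if kv),
            "point_ids": [int(i) for i in pts.split(",") if i],
        })
    return annotations

# Example: points 12-14 were measured on a fillet of radius 5, a fact a
# plain point cloud cannot carry but a semantic record can.
ann = [{"type": "fillet", "params": {"radius": 5.0}, "point_ids": [12, 13, 14]}]
assert decode(encode(ann)) == ann
```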

2.
The paper presents a grammatical inference methodology for the generation of visual languages that benefits from the availability of semantic information about the sample sentences. Several well-known syntactic inference algorithms are shown to obey a general inference scheme, which the authors call the Gen-Inf scheme. All the algorithms of the Gen-Inf scheme are then modified in agreement with the introduced semantics-based inference methodology. The use of grammatical inference techniques in the design of adaptive user interfaces had previously been explored with the VLG system for visual language generation, a powerful tool for specifying, designing, and interpreting customized visual languages for different applications. The authors enhance the adaptivity of the VLG system to any visual environment by exploiting the proposed semantics-based inference methodology. A more general model of visual language generation is thus achieved, based on the Gen-Inf scheme, in which the end user may choose the algorithm that best fits his or her requirements within the particular application environment.

3.
Code obfuscation has become the primary means of constructing malware variants, and the flood of virus variants has greatly weakened traditional virus scanners based on program text features. This paper proposes a new semantics-based framework for deciding whether a program is a variant of a given piece of malware: first, the program's semantic states are collected via symbolic execution; then, the program is judged to be a variant by proving whether the variant relation holds between the two programs' semantics. The framework can recognize a program obtained through obfuscating transformations as a variant of the pre-transformation program, thereby reducing the need to update virus databases. Finally, a prototype system implementing the framework demonstrates the feasibility of the semantics-based malware variant decider.
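A minimal sketch of the decision idea (not the paper's system): collect each program's semantic state by interpreting straight-line code symbolically, then decide the variant relation by comparing normalized states, so that operand-reordering obfuscation is recognized as semantics-preserving. The tiny three-address instruction format and the commutativity-based normalization are illustrative assumptions.

```python
def symbolic_state(program, inputs):
    """Interpret straight-line three-address code symbolically.
    Each instruction is (dest, op, a, b); operands are input names,
    previously assigned names, or integer constants."""
    env = {v: ("sym", v) for v in inputs}

    def val(x):
        return env[x] if isinstance(x, str) else ("const", x)

    for dest, op, a, b in program:
        env[dest] = (op, val(a), val(b))
    return env

def normalize(expr):
    """Canonicalize commutative operators by sorting operand order."""
    if expr[0] in ("sym", "const"):
        return expr
    op, a, b = expr
    a, b = normalize(a), normalize(b)
    if op in ("add", "mul") and repr(a) > repr(b):
        a, b = b, a
    return (op, a, b)

def same_semantics(p1, p2, inputs, out):
    """Variant check: do both programs compute the same value for `out`?"""
    return normalize(symbolic_state(p1, inputs)[out]) == \
           normalize(symbolic_state(p2, inputs)[out])

# An "obfuscated" variant that renames a temporary and swaps operand order
# still matches the original after normalization.
orig = [("t", "mul", "x", "y"), ("r", "add", "t", "z")]
variant = [("u", "mul", "y", "x"), ("r", "add", "z", "u")]
assert same_semantics(orig, variant, ["x", "y", "z"], "r")
```

A real decider would also need to handle branches, loops, and stronger equivalence proofs; this only shows the state-collection-then-compare shape.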

4.
To address the inability of traditional information-leak detection techniques to detect the implicit information leaks present in Android applications, an Android implicit information flow (IIF) inference method is proposed that combines a control-structure ontology model with Semantic Web Rule Language (SWRL) inference rules. First, the key elements of control structures that can give rise to implicit information flows are analyzed and modeled to build a control-structure ontology. Second, by analyzing the main causes of implicit information leaks, judgment rules for implicit information flows based on strict control dependence (SCD) are given and translated into SWRL inference rules. Finally, the added control-structure ontology instances and the inference rules are loaded together into the Jess inference engine for reasoning. Experimental results show that the method can infer SCD implicit flows of several different kinds, reaches 83.3% accuracy on a public sample set, and keeps inference time within a reasonable range when the number of branches is limited. The proposed model and method can effectively assist traditional information-leak detection and improve its accuracy.
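The core of the SCD judgment can be sketched roughly as follows: a public variable assigned inside branches whose condition is tainted by a secret leaks information implicitly, even though no direct data flow exists. The branch record format below is an illustrative assumption, not the paper's ontology/SWRL encoding.

```python
def scd_implicit_flows(branches, secret_vars):
    """branches: list of dicts with 'cond_vars' (variables read by the
    branch condition) and 'assigned' (variables written in either arm).
    Returns the set of variables that receive an implicit flow."""
    leaked = set()
    for br in branches:
        if set(br["cond_vars"]) & secret_vars:   # condition depends on a secret
            leaked |= set(br["assigned"])        # assignments under it leak
    return leaked

# Example: `if (pin == guess) ok = 1 else ok = 0` leaks `pin` into `ok`,
# while a branch on an untainted UI flag leaks nothing.
branches = [
    {"cond_vars": ["pin", "guess"], "assigned": ["ok"]},
    {"cond_vars": ["mode"], "assigned": ["ui_state"]},
]
print(scd_implicit_flows(branches, {"pin"}))  # {'ok'}
```

In the paper this judgment is expressed declaratively as SWRL rules over ontology instances and evaluated by Jess; the procedural loop above only mirrors the rule's logic.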

5.
Multimedia Tools and Applications - Event detection has long been a fundamental problem in the computer vision community. Various datasets for recognizing human events and activities have been proposed...

6.
The Journal of Supercomputing - The rapid growth of multimedia data and the improvement of deep learning technology have allowed high-accuracy models to be trained for various fields. Video tools...

7.
8.
9.
In this work we propose methods that exploit context sensor data modalities for the task of detecting interesting events and extracting high-level contextual information about the recording activity in user generated videos. Indeed, most camera-enabled electronic devices contain various auxiliary sensors such as accelerometers, compasses, GPS receivers, etc. Data captured by these sensors during media acquisition have already been used to limit camera degradations such as shake and to provide basic tagging information such as location. However, exploiting the sensor-recordings modality for subsequent higher-level information extraction, such as detecting interesting events, has been the subject of rather limited research, further constrained to specialized acquisition setups. In this work, we show how these sensor modalities allow inferring information (camera movements, content degradations) about each individual video recording. In addition, we consider a multi-camera scenario, where multiple user generated recordings of a common scene (e.g., music concerts) are available. For this kind of scenario we jointly analyze the multiple video recordings and their associated sensor modalities in order to extract higher-level semantics of the recorded media: based on the orientation of the cameras we identify the region of interest of the recorded scene; by exploiting correlation in the motion of different cameras we detect generic interesting events and estimate their relative position. Furthermore, by analyzing the audio content captured by multiple users we detect more specific interesting events. We show that the proposed multimodal analysis methods perform well on various recordings obtained at real live music performances.
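The generic-event step based on motion correlation can be sketched as follows: with per-camera motion magnitudes aligned on a common timeline, frames where several cameras move strongly at once are flagged as candidate events. The signal format and thresholds are illustrative assumptions, not the paper's estimator.

```python
def detect_joint_events(signals, threshold, min_cameras):
    """signals: dict camera -> list of per-frame motion magnitudes,
    aligned on a common timeline. Flags time indices where at least
    `min_cameras` cameras exceed the motion threshold simultaneously."""
    n = len(next(iter(signals.values())))
    events = []
    for t in range(n):
        active = sum(1 for s in signals.values() if s[t] > threshold)
        if active >= min_cameras:
            events.append(t)
    return events

# Three hypothetical cameras panning toward the same stage action at t=3..4.
signals = {
    "cam_a": [0, 0, 1, 5, 5, 0],
    "cam_b": [0, 1, 0, 6, 4, 0],
    "cam_c": [0, 0, 0, 0, 5, 1],
}
print(detect_joint_events(signals, threshold=2, min_cameras=2))  # [3, 4]
```

The intuition matches the abstract: one camera shaking is a degradation, but many cameras moving together is evidence of something interesting in the shared scene.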

10.
A geographic-information change detection method based on web information retrieval
曾文华, 黄桦. 《计算机应用》, 2010, 30(4): 1132-1134
Geographic information changes frequently, and such changes are hard to discover in time. To address this, a geographic-information change detection method based on web information retrieval is proposed: search conditions are designed to collect qualifying web pages from the Internet, an evaluation method is designed to assess the credibility of the search results, and statistics and spatial analysis are performed on the final results, realizing geographic-information change detection based on web retrieval technology. Taking the Hangzhou area as an example, a Web-based ground-feature change detection system for the Hangzhou area was developed, verifying the feasibility and effectiveness of the method and providing a new approach to regional ground-feature change detection.
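The credibility-weighted aggregation step might look roughly like this: each retrieved page is scored from source authority and recency, and a reported change is accepted once the accumulated credibility for a place passes a threshold. The authority weights, decay factor, threshold, and example pages are all illustrative assumptions, not the paper's evaluation method.

```python
AUTHORITY = {"gov": 1.0, "news": 0.7, "forum": 0.3}  # assumed source weights

def credibility(page, current_year):
    """Score one page: source authority discounted by age."""
    age = max(0, current_year - page["year"])
    return AUTHORITY.get(page["source"], 0.2) * (0.9 ** age)

def detect_changes(pages, current_year, threshold=1.0):
    """Accept a reported change when total credibility passes the threshold."""
    totals = {}
    for p in pages:
        totals[p["place"]] = totals.get(p["place"], 0.0) + credibility(p, current_year)
    return {place for place, s in totals.items() if s >= threshold}

# Hypothetical search results: two corroborating recent pages vs. one
# old forum post.
pages = [
    {"place": "West Lake Rd", "source": "gov", "year": 2010},
    {"place": "West Lake Rd", "source": "news", "year": 2009},
    {"place": "Old Bridge", "source": "forum", "year": 2005},
]
print(detect_changes(pages, 2010))  # {'West Lake Rd'}
```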

11.
Beyond machine translation, parallel corpora play an important role in information retrieval, information extraction, and knowledge acquisition, but traditional parallel corpora are aligned only at the sentence level, which limits their usefulness for cross-lingual natural language processing research. Building on the OntoNotes Chinese-English parallel corpus, a high-quality Chinese-English parallel corpus for information extraction was constructed by combining automatic extraction and automatic mapping with manual annotation. The corpus not only contains Chinese and English entities and the relations between them, but also aligns Chinese and English at the entity and relation levels. It can therefore support contrastive studies of Chinese and English information extraction, reveal differences between the languages in semantic expression, and provide a valuable platform for research on cross-lingual information extraction.

12.
Dataspace technology is a further development of database management technology, and how to search the resources in a dataspace effectively is a problem worth studying. A semantics-based resource search mechanism for dataspaces (S-RSM, Semantics-based Resource Search Mechanism for Dataspace) is therefore proposed. A resource description model is defined that can describe and wrap heterogeneous data resources uniformly, and a semantics-based resource search strategy is proposed that uses the DBpedia semantic knowledge base to evaluate associations between resource objects and between semantic items. Compared with other search strategies, S-RSM shows advantages in both recall and precision.
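The two ingredients named above, a uniform resource description model and a semantic-association score, can be sketched as follows. A tiny one-hop relation table stands in for the DBpedia knowledge that S-RSM consults; the table, the weights, and the resources are illustrative assumptions.

```python
# Stand-in for DBpedia: symmetric one-hop "semantically related" pairs.
RELATED = {("car", "vehicle"), ("vehicle", "car"),
           ("photo", "image"), ("image", "photo")}

def describe(resource_id, kind, terms):
    """Uniform resource description: id, kind, and semantic terms,
    regardless of whether the resource is a file, email, table, etc."""
    return {"id": resource_id, "kind": kind, "terms": set(terms)}

def score(query_terms, resource):
    """Exact term matches count fully; one-hop semantic relations count half."""
    s = 0.0
    for q in query_terms:
        if q in resource["terms"]:
            s += 1.0
        elif any((q, t) in RELATED for t in resource["terms"]):
            s += 0.5
    return s

def search(query_terms, resources):
    ranked = sorted(resources, key=lambda r: score(query_terms, r), reverse=True)
    return [r["id"] for r in ranked if score(query_terms, r) > 0]

resources = [
    describe("r1", "file", ["vehicle", "engine"]),
    describe("r2", "email", ["image", "trip"]),
    describe("r3", "table", ["budget"]),
]
print(search(["car", "photo"], resources))  # ['r1', 'r2']
```

A keyword-only search would miss both hits here; the semantic relations are what recover them, which is the recall advantage the abstract claims for S-RSM.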

13.
14.
叶益林, 吴礼发, 颜慧颖. 《计算机科学》, 2017, 44(6): 161-167, 173
Native code is widely used in Android applications and offers malicious attackers a new attack vector, so its security problems cannot be ignored. Existing Android malware detection schemes mainly analyze Java code, or the Dalvik bytecode compiled from it, and neglect native code. To address this gap, a native-library security detection method based on two-layer semantics is proposed. First, at the Java layer, the method extracts the call paths of native methods, analyzes the data-flow dependencies between native methods and the Java layer, and identifies the entry points of native-method call paths. For native-code semantics, five classes of suspicious behaviors are defined: data upload, data download, reads and writes of sensitive paths, sensitive strings, and suspicious method calls; the internal behavior of native code is analyzed automatically on the basis of IDA Pro and IDA Python. Using the open-source machine learning toolkit Weka with the two semantic layers as features, 5336 benign and 3426 malicious applications were analyzed, and the best detection rate reached 92.4%, showing that the proposed method can effectively assess the security of native libraries.
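The shape of the classification step can be sketched by turning the two semantic layers into a feature vector (counts of the five native-code suspicious behaviors plus a Java-layer reachability fact) and scoring it. The feature names, weights, and threshold here are illustrative assumptions; the paper trains real classifiers on such features in Weka.

```python
FEATURES = ["upload", "download", "sensitive_rw", "sensitive_string",
            "suspicious_call", "reachable_from_java"]
WEIGHTS = [2.0, 1.5, 1.5, 1.0, 1.0, 1.0]   # assumed weights, not learned
THRESHOLD = 3.0                            # assumed decision boundary

def to_vector(app):
    """Map an app's extracted behavior counts onto the fixed feature order."""
    return [float(app.get(f, 0)) for f in FEATURES]

def is_malicious(app):
    """Toy linear classifier over the two-layer semantic features."""
    score = sum(w * x for w, x in zip(WEIGHTS, to_vector(app)))
    return score >= THRESHOLD

# Hypothetical extracted features for two apps.
benign = {"suspicious_call": 1, "reachable_from_java": 1}
mal = {"upload": 1, "sensitive_string": 2, "reachable_from_java": 1}
print(is_malicious(benign), is_malicious(mal))  # False True
```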

15.
Current inspection systems for passenger ropeways can only inspect the wire rope and cannot accurately provide information about the cabins and the passengers inside them. To solve this problem, a ropeway cabin information detection system was studied. The system is centered on a microcontroller, with its parts communicating over a wireless network. An encoder wheel measures the cabin's running speed and direction and accurately locates the cabin's distance from the ropeway operations center; photoelectric switches used together with limit switches detect information about the passengers inside the cabin, providing valuable data for rescuing people in case of an accident. The host-computer software, written in KingView (组态王), displays the cabin's operation vividly, dynamically, and in real time. Operating results show that the system runs stably, is convenient to use, and improves work efficiency.

16.
17.
Automatically identifying and extracting the target information of a webpage, especially its main text, is a critical task in many web content analysis applications, such as information retrieval and automated screen reading. However, compared with typical plain texts, the structure of information on the web is extremely complex and follows no single fixed template or layout. At the same time, the number of presentation elements on web pages, such as dynamic navigational menus, flashing logos, and a multitude of ad blocks, has increased rapidly in the past decade. In this paper, we propose a statistics-based approach that integrates the concept of fuzzy association rules (FAR) with that of the sliding window (SW) to efficiently extract the main text content from web pages. Our approach involves two separate stages. In Stage 1, the original HTML source is pre-processed and features are extracted for every line of text; supervised learning is then performed to detect fuzzy association rules in training web pages. In Stage 2, the HTML source preprocessing and text line feature extraction are conducted in the same way as in Stage 1, after which each text line is tested against the extracted fuzzy association rules to decide whether it belongs to the main text. Next, a sliding window is applied to segment the web page into several potential topical blocks. Finally, a simple selection algorithm selects the important blocks, which are united into the detected topical region (the main text). Experimental results on real-world data show that our approach outperforms existing Document Object Model (DOM)-based and vision-based approaches in both efficiency and accuracy.
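The Stage-2 pipeline described above can be sketched as follows: given per-line boolean labels (as the fuzzy association rules would produce), a sliding window scores line density and contiguous high-density windows are merged into topical blocks. The window size, density threshold, and label sequence are illustrative assumptions.

```python
def topical_blocks(labels, win=3, min_density=0.6):
    """Return (start, end) line spans covered by windows dense in main text."""
    dense = set()
    for i in range(len(labels) - win + 1):
        if sum(labels[i:i + win]) / win >= min_density:
            dense.update(range(i, i + win))
    # Merge consecutive dense line indices into contiguous blocks.
    blocks, start = [], None
    for i in range(len(labels) + 1):
        if i in dense and start is None:
            start = i
        elif i not in dense and start is not None:
            blocks.append((start, i - 1))
            start = None
    return blocks

# 1 = line classified as main text by the rules, 0 = boilerplate.
labels = [0, 0, 1, 1, 1, 0, 0, 0, 1, 0, 1, 1, 1, 1, 0]
print(topical_blocks(labels))  # [(1, 5), (8, 14)]
```

Note how the window tolerates isolated misclassified lines (the 0 at index 9 stays inside the second block), which is the point of smoothing line-level decisions before block selection.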

18.
A generic information extraction architecture for financial applications
The advent of computing has exacerbated the problem of overwhelming information. To manage the deluge of information, information extraction systems can be used to automatically extract relevant information from free-form text for update to databases or for report generation. One of the major challenges in information extraction is the representation of domain knowledge, that is, how to represent the meaning of the input text, the knowledge of the field of application, and the knowledge about the target information to be extracted. We have chosen a directed graph structure, a domain ontology, and a frame representation, respectively, and have further developed a generic information extraction (GIE) architecture that combines these knowledge structures for processing. The GIE system is able to extract information from free-form text and to further infer and derive new information. It analyzes the input text into a graph structure and subsequently unifies the graph with the ontology to extract relevant information, driven by the frame structure during a template filling process. The GIE system has been adopted for use in the message formatting expert system, a large-scale information extraction system for a specific financial application within a major bank in Singapore.
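The frame-driven template-filling step can be sketched roughly as follows: a frame declares the slots of the target information, an ontology maps surface tokens to concepts, and slots are filled by scanning the input. The frame, ontology entries, and sentence are illustrative assumptions; the real GIE system unifies a full text graph with its ontology rather than scanning tokens.

```python
# Toy ontology: token -> concept (a stand-in for the domain ontology).
ONTOLOGY = {"USD": "Currency", "SGD": "Currency",
            "DBS": "Bank", "HSBC": "Bank"}

# Toy frame: slot name -> concept the slot expects.
PAYMENT_FRAME = {"amount": "Number", "currency": "Currency", "payer": "Bank"}

def fill_frame(frame, tokens):
    """Fill each slot with the first token whose concept matches it."""
    filled = {}
    for tok in tokens:
        concept = "Number" if tok.replace(".", "", 1).isdigit() \
                  else ONTOLOGY.get(tok)
        for slot, wanted in frame.items():
            if concept == wanted and slot not in filled:
                filled[slot] = tok
    return filled

tokens = "DBS transfers 2500.00 USD to the beneficiary".split()
print(fill_frame(PAYMENT_FRAME, tokens))
```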

19.
Artificial Intelligence, 2006, 170(14-15): 1101-1122
To successfully embed statistical machine learning models in real world applications, two post-deployment capabilities must be provided: (1) the ability to solicit user corrections and (2) the ability to update the model from these corrections. We refer to the former capability as corrective feedback and the latter as persistent learning. While these capabilities have a natural implementation for simple classification tasks such as spam filtering, we argue that a more careful design is required for structured classification tasks. One example of a structured classification task is information extraction, in which raw text is analyzed to automatically populate a database. In this work, we augment a probabilistic information extraction system with corrective feedback and persistent learning components to assist the user in building, correcting, and updating the extraction model. We describe methods of guiding the user to incorrect predictions, suggesting the most informative fields to correct, and incorporating corrections into the inference algorithm. We also present an active learning framework that minimizes not only how many examples a user must label, but also how difficult each example is to label. We empirically validate each of the technical components in simulation and quantify the user effort saved. We conclude that more efficient corrective feedback mechanisms lead to more effective persistent learning.
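The "suggest the most informative fields to correct" idea can be sketched with a simple uncertainty-driven loop: given per-field confidences from the extractor, ask the user about the least confident remaining field first and stop once the rest look safe. The field names, probabilities, and stopping threshold are illustrative assumptions, not the paper's algorithm.

```python
def next_field_to_correct(confidences, already_corrected):
    """Pick the lowest-confidence field the user has not fixed yet."""
    candidates = {f: p for f, p in confidences.items()
                  if f not in already_corrected}
    return min(candidates, key=candidates.get) if candidates else None

# Hypothetical marginal confidences for one extracted record.
confidences = {"name": 0.97, "title": 0.55, "email": 0.80, "phone": 0.61}

order, fixed = [], set()
while True:
    f = next_field_to_correct(confidences, fixed)
    if f is None or confidences[f] > 0.9:   # stop once the rest look safe
        break
    order.append(f)
    fixed.add(f)
print(order)  # ['title', 'phone', 'email']
```

In the full system each accepted correction would also constrain the inference algorithm and update the model, so the remaining confidences would be recomputed between queries rather than held fixed as here.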

20.
Group event detection method based on an intuitionistic fuzzy ART neural network
林剑, 雷英杰. 《计算机应用》, 2009, 29(1): 130-131
This paper describes the target grouping problem in situation assessment systems, the processing flow for target groups, and the detection of group events. Combining the theory of intuitionistic fuzzy closeness degree, an intuitionistic fuzzy ART neural network is constructed, and the network's operating mechanism and the learning mechanism for its weight vectors are designed. A concrete example is given that verifies the target grouping performance of the intuitionistic fuzzy ART neural network, providing an effective approach to group event detection.
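The grouping step can be sketched as follows: targets are described by intuitionistic fuzzy attribute vectors (membership mu, non-membership nu); a target joins an existing group when its closeness to the group's prototype passes an ART-style vigilance threshold, and otherwise seeds a new group. The closeness formula below is a standard intuitionistic-fuzzy-set closeness degree; the vigilance value, the prototype rule, and the target data are illustrative assumptions.

```python
def closeness(a, b):
    """a, b: lists of (mu, nu) pairs; returns a similarity in [0, 1]."""
    n = len(a)
    diff = sum(abs(m1 - m2) + abs(v1 - v2)
               for (m1, v1), (m2, v2) in zip(a, b))
    return 1 - diff / (2 * n)

def group_targets(targets, vigilance=0.85):
    """ART-style grouping: each group keeps its first member as prototype."""
    groups = []
    for t in targets:
        for g in groups:
            if closeness(t, g[0]) >= vigilance:
                g.append(t)
                break
        else:
            groups.append([t])
    return groups

# Hypothetical targets with two intuitionistic fuzzy attributes each.
targets = [
    [(0.9, 0.05), (0.8, 0.1)],    # target 1
    [(0.88, 0.07), (0.82, 0.1)],  # close to target 1 -> same group
    [(0.2, 0.7), (0.3, 0.6)],     # very different -> new group
]
print([len(g) for g in group_targets(targets)])  # [2, 1]
```

A full ART network would also adapt the prototype's weight vector as members join; keeping the first member fixed is a simplification.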


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号