Similar Documents
Found 20 similar documents (search time: 31 ms)
1.
Model transformations are central components of most model-based software projects. While ensuring their correctness is vital to guarantee the quality of the solution, current transformation tools provide limited support to statically detect and fix errors. As a result, identifying and correcting errors are today mostly manual activities which incur high costs. The aim of this work is to improve this situation. Recently, we developed a static analyser that combines program analysis and constraint solving to identify errors in ATL model transformations. In this paper, we present a novel method and system that uses our analyser to propose suitable quick fixes for ATL transformation errors, notably some non-trivial, transformation-specific ones. Our approach supports speculative analysis to help developers select the most appropriate fix by creating a dynamic ranking of fixes, reporting on the consequences of applying a quick fix, and providing a pre-visualization of each quick fix application. The approach integrates seamlessly with the ATL editor. Moreover, we provide an evaluation based on existing faulty transformations built by a third party, and on automatically generated transformation mutants, which are then corrected with the quick fixes of our catalogue.

2.
Personal experience computing is an emerging research area in computing support for capturing, archiving, and editing. This paper presents our design, implementation, and evaluation of a mobile authoring tool called mProducer that enables everyday users to effectively and efficiently perform archiving and editing at, or immediately after, the point of capture of digital personal experiences on their camera-equipped mobile devices. This point-of-capture capability is crucial to enable immediate sharing of digital personal experiences anytime, anywhere. For example, we have seen everyday people use handheld camcorders to capture and report their personal, eye-witnessed experiences during the September 11, 2001 terrorist attack in New York (The September 11 Digital Archive). With mProducer, they would have been able to edit immediately after the point of capture, and then share these newsworthy, time-sensitive digital experiences on the Internet. To address the challenges of both user interface constraints and limited system resources on a mobile device, mProducer provides the following innovative system techniques and UI designs. (1) A keyframe-based editing UI enables everyday users to easily and efficiently edit recorded digital experiences on a mobile device using only key frames with the storyboard metaphor. (2) A storage-constrained uploading (SCU) algorithm archives continuous multimedia data by uploading it to remote storage servers at the point of capture, alleviating the problem of limited storage on a mobile device. (3) Sensor-assisted automated editing uses data from a GPS receiver and a tilt sensor attached to a mobile device to facilitate two manual editing steps at the point of capture: removal of blurry frames from hand-induced camera shake, and content search via location-based content management. We have conducted user studies to evaluate mProducer. Results from the user studies have shown that mProducer scores high in user satisfaction with editing experience, editing quality, task performance time, ease of use, and ease of learning.
Jane Yung-jen Hsu (Corresponding author)

3.
This paper investigates constraints for matching records from unreliable data sources. (a) We introduce a class of matching dependencies (mds) for specifying the semantics of unreliable data. As opposed to static constraints for schema design, mds are developed for record matching, and are defined in terms of similarity predicates and a dynamic semantics. (b) We identify a special case of mds, referred to as relative candidate keys (rcks), to determine what attributes to compare and how to compare them when matching records across possibly different relations. (c) We propose a mechanism for inferring mds, a departure from traditional implication analysis, such that when we cannot match records by comparing attributes that contain errors, we may still find matches by using other, more reliable attributes. Moreover, we develop a sound and complete system for inferring mds. (d) We provide a quadratic-time algorithm for inferring mds and an effective algorithm for deducing a set of high-quality rcks from mds. (e) We experimentally verify that the algorithms help matching tools efficiently identify keys at compile time for matching, blocking, or windowing, and that the md-based techniques effectively improve the quality and efficiency of various record matching methods.
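The comparing-attributes-with-similarity idea behind mds and rcks can be illustrated with a small sketch (the record fields, thresholds, and edit-distance predicate below are invented for illustration; the paper's actual predicates and inference system are far more general):

```python
# Hypothetical sketch of matching with a relative candidate key (rck):
# two records are identified as the same entity if they agree, up to
# similarity, on the rck attributes.
import difflib

def similar(a: str, b: str, threshold: float = 0.5) -> bool:
    """Similarity predicate: edit-distance-based ratio over a threshold."""
    return difflib.SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def match(rec1: dict, rec2: dict, rck) -> bool:
    """rck: list of (attr1, attr2, predicate) triples telling which
    attributes to compare across the two relations and how."""
    return all(pred(rec1[a1], rec2[a2]) for a1, a2, pred in rck)

card = {"name": "R. Smith", "phone": "555-1234"}
txn  = {"holder": "Robert Smith", "tel": "555-1234"}

# Match on exact phone equality plus approximate name similarity.
rck = [("phone", "tel", lambda x, y: x == y),
       ("name", "holder", similar)]
print(match(card, txn, rck))
```

Here the phone numbers must match exactly while the names need only be similar, so the two records from different relations are matched even though the name attribute contains variations.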

4.
Video compositing, the editing and integrating of many video sequences into a single presentation, is an integral part of advanced multimedia services. Single-user compositing systems have been suggested in the past, but when they are extended to accommodate many users, the amount of memory required quickly grows out of hand. We propose two new architectures for digital video compositing in a multiuser environment that are memory-efficient and can operate in real time. Both architectures decouple the task of memory management from compositing processing. We show that under hard throughput and bandwidth constraints, a memoryless solution for transferring data from many video sources to many users does not exist. We overcome this using (i) a dynamic memory buffering architecture and (ii) a constant memory bandwidth solution that transforms the sources-to-users transfer schedule into two schedules, then pipelines the computation. The architectures support opaque overlapping of images, arbitrarily shaped images, and images whose shapes dynamically change from frame to frame.

5.
The success of constraint-based approaches to drawing has been limited by difficulty in creating constraints, solving them, and presenting them to users. In this paper, we discuss techniques used in the Briar drawing program to address all of these issues. Briar's approach separates the problem of initially establishing constraints from the problem of maintaining them during subsequent editing. We describe how non-constraint-based drawing tools can be augmented to specify constraints in addition to positions. These constraints are then maintained as the user drags the model, which allows the user to explore configurations consistent with the constraints. Visual methods are provided for displaying and editing the constraints.

6.
Editors for visual languages should provide a user-friendly environment supporting end users in composing visual sentences in an effective way. Syntax-aware editors are a class of editors that prompt users into writing syntactically correct programs by exploiting information on the visual language syntax. In particular, they do not constrain users to enter only correct syntactic states in a visual sentence; they merely inform the user when visual objects are syntactically correct. This means detecting both syntax and potential semantic errors as early as possible and providing feedback on such errors in a non-intrusive way during editing. As a consequence, error-handling strategies are an essential part of this editing style for visual sentences. In this work, we develop a strategy for the construction of syntax-aware visual language editors by integrating incremental subsentence parsers into free-hand editors. The parser combines LR-based techniques for parsing visual languages with the more general incremental Generalized LR parsing techniques developed for string languages. This approach has been profitably exploited to introduce a noncorrecting error-recovery strategy, and to suggest, during editing, how to continue what the user is drawing.

7.
Timely acquisition and application of security vulnerability patches is critical to the security of server users. However, researchers and organizations have found that open-source software maintainers often fix security vulnerabilities silently: in 88% of cases, maintainers only disclose a vulnerability fix in the release notes of a new software version; only 9% of vulnerability-fix patches explicitly give the corresponding CVE (common vulnerabilities and exposures) identifier; and only 3% of fixes are proactively and promptly reported to security monitoring service providers. As a result, security engineers often cannot distinguish vulnerability fixes from ordinary bug fixes and feature patches using a patch's code and description alone, so vulnerability-fix patches are not identified and applied by users in time, and identifying them among large volumes of patch commits is very costly. Taking the representative Linux kernel as an example, this paper presents a method for automatically identifying vulnerability-fix patches. The method defines features for the code and description parts of a patch, builds a machine learning model, and trains a classifier that distinguishes security vulnerability patches. Experiments show that the method achieves 91.3% precision, 92% accuracy, and 87.53% recall, and reduces the false positive rate to 5.2%, a clear performance improvement.
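As a rough illustration of the idea (not the paper's actual features or model), a patch classifier of this kind extracts features from a patch's description and diff and scores them; the keyword weights, feature names, and cutoff below are invented:

```python
# Toy stand-in for a vulnerability-patch classifier: count security-related
# terms in the patch text and simple code-side signals in the diff, then
# apply a hand-set linear score. A real system would train these weights.
SECURITY_TERMS = {"overflow": 2.0, "use-after-free": 2.0, "cve": 3.0,
                  "leak": 1.0, "sanitize": 1.5, "bounds": 1.5}

def extract_features(description: str, diff: str) -> dict:
    text = (description + " " + diff).lower()
    feats = {t: text.count(t) for t in SECURITY_TERMS}
    # Code-side feature: added guard conditions in the diff.
    feats["added_if"] = sum(1 for ln in diff.splitlines()
                            if ln.startswith("+") and "if (" in ln)
    return feats

def is_security_fix(description: str, diff: str, cutoff: float = 2.0) -> bool:
    feats = extract_features(description, diff)
    score = sum(SECURITY_TERMS.get(k, 0.5) * v for k, v in feats.items())
    return score >= cutoff

desc = "Fix buffer overflow in packet parser (CVE-2021-1234)"
diff = "+    if (len > MAX_LEN)\n+        return -EINVAL;"
print(is_security_fix(desc, diff))
```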

8.
Data repair techniques based on editing rules and master data can automatically and precisely repair inconsistent data, but editing rules are currently defined mostly by domain experts. To fully automate data cleaning, mining data rules has become a research focus in recent years; mining algorithms proposed for conditional functional dependencies (CFDs) mainly include CFDMiner, CTANE, and FastCFD. Building on this work, this paper extends the definition of CFDs and, under the definition of editing rules, proposes an algorithm for mining editing rules from input samples and master data. The main idea is to mine CFDs from the input samples, then use domain similarity between attributes of the input samples and the master data to map sample attributes to their counterparts in the master data, thereby forming editing rules with pattern tuples. The algorithm mines editing rules effectively, and the mined rules repair data effectively according to the semantics of editing rules.
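The first step, mining rules from input samples, can be sketched as follows; the mine_constant_cfds helper, attribute names, and sample data are hypothetical and only illustrate mining constant rules of the form (zip = a → city = b) that hold on all sample tuples:

```python
# Toy illustration of mining constant CFDs from input samples: if every
# tuple with a given lhs value agrees on the rhs value, emit a pattern rule.
from collections import defaultdict

def mine_constant_cfds(rows, lhs, rhs):
    """Return rules (lhs, a, rhs, b) meaning: lhs = a implies rhs = b."""
    mapping = defaultdict(set)
    for row in rows:
        mapping[row[lhs]].add(row[rhs])
    return {(lhs, a, rhs, next(iter(bs)))
            for a, bs in mapping.items() if len(bs) == 1}

samples = [
    {"zip": "10001", "city": "NYC"},
    {"zip": "10001", "city": "NYC"},
    {"zip": "02139", "city": "Boston"},
]
rules = mine_constant_cfds(samples, "zip", "city")
print(sorted(rules))
```

In the full algorithm described above, such mined rules would then be aligned with master-data attributes via domain similarity to produce editing rules with pattern tuples.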

9.
mwKAT is an interactive knowledge acquisition tool for acquiring domain knowledge about multimedia components. It constructs knowledge bases for a consulting system that produces the design specification for a multimedia workstation according to the user requirements. mwKAT is generated from and executed in GAS, a primitives-based generic knowledge acquisition meta-tool. It contains three acquisition primitives, namely parameter proposing, constraint proposing, and fix proposing, to construct an intermediate knowledge base represented by a dependency model. These primitives identify necessary domain knowledge and guide users to propose significant components, constraints, and fix methods into the dependency model. mwKAT also invokes knowledge verification and validation primitives to verify the completeness, consistency, compilability, and correctness of the intermediate knowledge base.

10.
Toward an understanding of bug fix patterns
Twenty-seven automatically extractable bug fix patterns are defined using the syntax components and context of the source code involved in bug fix changes. Bug fix patterns are extracted from the configuration management repositories of seven open source projects, all written in Java (Eclipse, Columba, JEdit, Scarab, ArgoUML, Lucene, and MegaMek). Defined bug fix patterns cover 45.7% to 63.3% of the total bug fix hunk pairs in these projects. The frequency of occurrence of each bug fix pattern is computed across all projects. The most common individual patterns are MC-DAP (method call with different actual parameter values) at 14.9–25.5%, IF-CC (change in if conditional) at 5.6–18.6%, and AS-CE (change of assignment expression) at 6.0–14.2%. A correlation analysis on the extracted pattern instances on the seven projects shows that six have very similar bug fix pattern frequencies. Analysis of if conditional bug fix sub-patterns shows a trend towards increasing conditional complexity in if conditional fixes. Analysis of five developers in the Eclipse projects shows overall consistency with project-level bug fix pattern frequencies, as well as distinct variations among developers in their rates of producing various bug patterns. Overall, data in the paper suggest that developers have difficulty with specific code situations at surprisingly consistent rates. There appear to be broad mechanisms causing the injection of bugs that are largely independent of the type of software being produced.
E. James Whitehead Jr.
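A toy version of the pattern-frequency analysis described above might look like the following; the string heuristics standing in for the paper's syntax-based extraction are simplified inventions:

```python
# Classify bug-fix hunk pairs (old line, new line) into syntactic fix
# patterns and tally their frequencies. Real extraction works on Java
# syntax trees; these string checks are illustrative stand-ins.
from collections import Counter

def classify(old: str, new: str) -> str:
    old, new = old.strip(), new.strip()
    if old.startswith("if") and new.startswith("if") and old != new:
        return "IF-CC"   # change in if conditional
    if "=" in old and "=" in new and old.split("=")[0] == new.split("=")[0]:
        return "AS-CE"   # change of assignment expression
    if "(" in old and "(" in new and old.split("(")[0] == new.split("(")[0]:
        return "MC-DAP"  # same method call, different actual parameters
    return "OTHER"

hunks = [
    ("if (x > 0)", "if (x >= 0)"),
    ("total = a + b;", "total = a + b + c;"),
    ("log.write(msg)", "log.write(msg, level)"),
]
freq = Counter(classify(o, n) for o, n in hunks)
print(dict(freq))
```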

11.
With the usage of version control systems, many bug fixes have accumulated over the years. Researchers have proposed various automatic program repair (APR) approaches that reuse past fixes to fix new bugs. However, some fundamental questions, such as how new fixes overlap with old fixes, have not been investigated. Intuitively, the overlap between old and new fixes determines how APR approaches can construct new fixes from old ones. Based on this intuition, we systematically designed six overlap metrics, and performed an empirical study on 5,735 bug fixes to investigate the usefulness of past fixes when composing new fixes. For each bug fix, we created delta graphs (i.e., program dependency graphs for code changes), and identified how bug fixes overlap with each other in terms of the content, code structures, and identifier names of fixes. Our results show that if an APR approach knows all code name changes and composes new fixes by fully or partially reusing the content of past fixes, only 2.1% and 3.2% of new fixes can be created from single or multiple past fixes in the same project, compared with 0.9% and 1.2% of fixes created from past fixes across projects. However, if an APR approach knows all code name changes and composes new fixes by fully or partially reusing the code structures of past fixes, up to 41.3% and 29.7% of new fixes can be created. By making the above observations and revealing ten other findings, we investigated the upper bound of reusable past fixes and composable new fixes, exploring the potential of existing and future APR approaches.

12.
Usually visualization is applied to gain insight into data. Yet consuming the data in the form of a visual representation is not always enough. Instead, users need to edit the data, preferably through the same means used to visualize them. In this work, we present a semi-automatic approach to visual editing of graphs. The key idea is to use an interactive EditLens that defines where an edit operation affects an already customized and established graph layout. Locally optimal node positions within the lens and edge routes to connected nodes are calculated according to different criteria. This spares the user much manual work, but still provides sufficient freedom to accommodate application-dependent layout constraints. Our approach utilizes the advantages of multi-touch gestures, and is also compatible with classic mouse and keyboard interaction. Preliminary user tests have been conducted with researchers from bio-informatics who need to manually maintain a slowly but constantly growing molecular network. As the user feedback indicates, our solution significantly improves the editing procedure applied so far.

13.
Public auditing is an important issue in cloud storage service because a cloud service provider may try to hide management mistakes and system errors from users, or even steal or tamper with a user's data for monetary reasons. Without the protection of a proper auditing mechanism, cloud users would run a high risk of having their legal rights and interests harmed without their knowledge. Therefore, many data integrity, assurance, and correctness schemes have been proposed for data auditing. Most of these schemes work by randomly sampling and aggregating signatures from bilinear maps (for more efficiency) to check whether the cloud storage service is honest and whether the data stored in the cloud is correct. Although aggregating signatures can reduce the auditor's computing overhead and time, none of these schemes offers a workable way to give detailed information on where the errors are when the cloud data as a whole fails the audit. To fix this problem, we propose a new public auditing scheme with an integrated mechanism for locating the problematic data blocks when they exist. The proposed scheme efficiently gives not only an accurate pass/fail report but also detailed information on the locations of the detected errors.
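The block-locating idea can be conveyed with a much-simplified sketch; the real scheme uses aggregated bilinear-map signatures rather than the plain per-block digests below, which are only an illustration:

```python
# Audit per-block digests so a failed whole-file audit also pinpoints
# exactly which stored blocks were corrupted.
import hashlib

def digest(block: bytes) -> str:
    return hashlib.sha256(block).hexdigest()

def audit(stored_blocks, expected_digests):
    """Return indices of blocks whose digest no longer matches."""
    return [i for i, (blk, d) in enumerate(zip(stored_blocks, expected_digests))
            if digest(blk) != d]

original = [b"block-0", b"block-1", b"block-2", b"block-3"]
tags = [digest(b) for b in original]   # verification metadata kept by the auditor

# Simulate cloud-side corruption of block 2.
stored = list(original)
stored[2] = b"tampered"

print(audit(stored, tags))
```

The audit not only fails but reports that block 2 is the corrupted one, which is the kind of detailed error-location report the scheme aims to provide.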

14.
To optimize runway capacity and air traffic flow over terminal fixes, the arrival/departure fixes and runways of an airport terminal area are treated as a single system. Subject to the capacity limits of the whole system, a satisfaction criterion is introduced, and a multi-objective optimization model is established by analyzing the mutual influence between runway capacity and fix flow. A case study shows that, by coordinating the allocation of arrival and departure flight demand, all flights in the given period are scheduled effectively: the overall queue of flight demand is reduced by about 10%, and capacity-utilization satisfaction reaches 0.75, achieving effective use of runway capacity and reasonable allocation of fix flow in the terminal area.

15.
Recovery from human operator error is key to system dependability: users are usually the main source of incorrect data, and this data can render a system unusable. Although current management information systems allow for the reversal of some processes, they don't offer an undo function to correct the errors human operators might introduce while editing data. The authors propose a method for correcting this potential problem with a new undo function that allows the recovery of previous states even after records have changed. While developing a new open-source, Web-based enterprise resource planning (ERP) system, we at IT solutions provider Tecnicia realized the added value of such an undo function. The undo function is a basic feature of recovery-oriented computing (ROC), which aims to increase dependability by reducing repair and recovery time.

16.
In this paper we explore ways to study the zero-temperature limit of quantum statistical mechanics using Quantum Monte Carlo simulations. We develop a Quantum Monte Carlo method in which one fixes the ground-state energy as a parameter. The Hamiltonians we consider are of the form H = H0 + λV with ground-state energy E. For fixed H0 and V, one can view E as a function of λ, whereas we view λ as a function of E. We fix E and define a path-integral Quantum Monte Carlo method in which a path makes no reference to the times (discrete or continuous) at which transitions occur between states. For fixed E we can determine λ(E) and other ground-state properties of H.

17.
With the growing popularity of cloud computing, the demand for cloud-based collaborative editing services is rising. Encryption is used to protect and secure the data during the collaborative editing process, but encryption makes collaborative editing in the cloud more time-consuming. This paper proposes an efficient scheme for reducing the encryption burden on the cooperating users, since they may read and write data from any device. In the proposed scheme, the encrypted file sent by the data owner is split into smaller segments and stored in the cloud by the cloud service provider (CSP) along with specific tags. Once a cooperating user receives and decrypts the file from the CSP, it modifies and re-encrypts only the modified segment and resends it to the CSP. After verifying the signature, the CSP replaces the original file segment in the cloud with the modified segment based on the tag information. The proposed scheme builds on a modified ciphertext-policy hierarchical attribute-based encryption, and its security rests on attribute-based signature schemes. This work employs an efficient attribute-updating method to accomplish dynamic changes of users' attributes, including granting new attributes, revoking previous attributes, and re-granting formerly revoked attributes. Once a writer's attributes and keys are revoked, stale information can no longer be written.
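The segment-and-tag workflow can be sketched in a few lines; the toy XOR "cipher" and tag layout below are purely illustrative stand-ins for the scheme's CP-HABE encryption and attribute-based signatures:

```python
# Sketch of the workflow: the owner's ciphertext is split into tagged
# segments; an editor decrypts, modifies, and re-encrypts only one
# segment, which the CSP swaps in by tag.
KEY = 0x5A

def enc(data: bytes) -> bytes:   # toy XOR cipher, NOT secure
    return bytes(b ^ KEY for b in data)

def split_with_tags(ciphertext: bytes, seg_size: int):
    segs = [ciphertext[i:i + seg_size]
            for i in range(0, len(ciphertext), seg_size)]
    return {f"tag-{i}": s for i, s in enumerate(segs)}

# Data owner encrypts; the CSP stores the tagged segments.
store = split_with_tags(enc(b"hello world, cloud!"), seg_size=6)

# A cooperating user edits a single segment: decrypt, modify, re-encrypt.
seg = enc(store["tag-1"])                # XOR is its own inverse
store["tag-1"] = enc(seg.upper())        # re-encrypt only the edited segment

full = enc(b"".join(store[f"tag-{i}"] for i in range(len(store))))
print(full.decode())
```

Only the edited segment is re-encrypted and re-uploaded, which is the source of the scheme's savings over re-encrypting the whole file.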

18.
李淼  谷峪  陈默  于戈 《软件学报》2017,28(2):310-325
With the rapid development of geo-positioning technology, applications based on online location services are proliferating. This paper proposes a new query type: the reverse spatial preference top-k query. Like the traditional reverse spatial top-k query, for a given spatial query object, the query returns the users for whom that object achieves a top-k attribute score. The difference is that the object's attributes are not intrinsic properties; they are determined by computing spatial relations (e.g., distance) between the object and other preference objects. Such queries are needed in market analysis and many other important domains, for example to assess how popular a facility is within a region. However, because computing spatial relations among a large number of objects is very expensive, computing objects' spatial attribute scores in real time poses a major challenge for query processing. To address this problem, optimized query processing algorithms are proposed, including dataset pruning, batch processing of the dataset, and weight-based user grouping. Theoretical analysis and extensive experiments demonstrate the effectiveness of the proposed methods, which substantially improve execution time and I/O efficiency over the baseline approach.
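A minimal sketch of the query semantics, with invented coordinates and a distance-based score standing in for the paper's spatial attribute scores (the optimized pruning, batching, and grouping strategies are omitted):

```python
# Reverse spatial preference top-k, brute force: a facility's score for a
# user is derived from a spatial relation (here, negative distance to the
# user's nearest preferred object); return the users for whom the query
# facility ranks in their personal top-k.
import math

def score(facility, preferred):
    return -min(math.dist(facility, p) for p in preferred)

def reverse_topk(query_facility, facilities, users, k):
    result = []
    for name, preferred in users.items():
        ranked = sorted(facilities + [query_facility],
                        key=lambda f: score(f, preferred), reverse=True)
        if query_facility in ranked[:k]:
            result.append(name)
    return result

facilities = [(0.0, 0.0), (10.0, 10.0)]
users = {"u1": [(1.0, 1.0)], "u2": [(9.0, 9.0)]}
print(reverse_topk((1.0, 2.0), facilities, users, k=1))
```

The brute-force loop scores every facility for every user, which is exactly the cost the paper's pruning and batching strategies are designed to avoid.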

19.

Most existing recommender systems estimate users' preference levels from user-item interaction ratings. Rating-based recommendation systems mostly ignore negative users/reviewers (who give poor ratings). There are two types of negative users: some give negative or poor ratings randomly, while others give ratings according to the quality of items. Negative users who rate according to item quality are known as reliable negative users, and they are crucial for better recommendation. Similar characteristics also apply to positive users. From a user's poor rating of a specific item, existing recommender systems presume that the item is not in the user's preferred category. That may not always be correct. We should investigate whether the item is outside the user's preferred category, whether the user is dissatisfied with the quality of a favorite item, or whether the user rates randomly or casually. To overcome this problem, we propose a Social Promoter Score (SPS)-based recommendation. We construct two user-item interaction matrices with users' explicit SPS values and users' view activities as implicit feedback. With these matrices as inputs, our attention layer-based deep neural model deepCF_SPS learns a common low-dimensional space to represent the features of users and items and learns the way users rate items. Extensive experiments on online review datasets show that our method compares remarkably favorably with several popular baselines. The empirical evidence from the experimental results shows that our model is the best in terms of scalability and runtime over the baselines.


20.
谭振华  杨广明  王兴伟  程维  宁婧宇 《软件学报》2016,27(11):2912-2928
In recent years, the "data storage as a service" offered by cloud storage has given tenants cheap and efficient resource sharing. Because tenants lack absolute control over data in the cloud, data security, and especially the secure storage of confidential data, has become a major problem and a research focus in cloud storage security. For storing confidential data in the cloud, this paper proposes a distributed secret-sharing scheme based on the geometry of multi-dimensional spheres. In the distribution phase, combining information about the dealer and the cloud storage containers, the original secret is transformed into the center coordinates of an m-dimensional sphere, from which n shadow-secret coordinates on the same sphere are generated; these shadow secrets are then stored, as the confidential data, across the n cloud storage containers. In the recovery phase, it is proved that any k (k = m + 1) linearly independent coordinates determine a unique sphere center, which recovers the original secret. Performance analysis and simulation show that the scheme resists fake-data attacks and collusion attacks, requires no extra key-management overhead, gives tenants absolute control over their keys, strengthens tenant control over cloud data, and is correct and efficient in both computation and storage.
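The recovery argument can be demonstrated for m = 2, where the secret is a circle center and any k = 3 non-collinear shadow points determine it; all values below are invented:

```python
# Hide the secret as a circle center, hand out n points on the circle as
# shadow shares, and recover the center from any 3 non-collinear shares
# by solving the two linear equations |p_i - c|^2 = r^2 yield.
import math

def make_shares(center, radius, n):
    cx, cy = center
    return [(cx + radius * math.cos(2 * math.pi * i / n),
             cy + radius * math.sin(2 * math.pi * i / n)) for i in range(n)]

def recover_center(p0, p1, p2):
    # Subtracting |p0 - c|^2 = r^2 from the other two equations gives a
    # 2x2 linear system in (cx, cy), solved here by Cramer's rule.
    a1, b1 = 2 * (p1[0] - p0[0]), 2 * (p1[1] - p0[1])
    c1 = p1[0]**2 + p1[1]**2 - p0[0]**2 - p0[1]**2
    a2, b2 = 2 * (p2[0] - p0[0]), 2 * (p2[1] - p0[1])
    c2 = p2[0]**2 + p2[1]**2 - p0[0]**2 - p0[1]**2
    det = a1 * b2 - a2 * b1     # nonzero iff the shares are not collinear
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

secret = (3.0, -4.0)                      # the secret, encoded as a center
shares = make_shares(secret, radius=5.0, n=6)
cx, cy = recover_center(shares[0], shares[2], shares[4])
print(round(cx, 6), round(cy, 6))
```

Fewer than k shares leave the center underdetermined (any circle through two points is possible), which is what gives the scheme its threshold property.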


Copyright©北京勤云科技发展有限公司  京ICP备09084417号