Similar Documents
20 similar documents found (search time: 312 ms)
1.
2.
Abstract

This paper describes a case study of user participation focusing on the introduction of a new computer-based system in a large UK bank. We use Wall and Lischeron's (1977) characterization of participation as consisting of three interrelated elements (i.e., interaction, information, and influence) and Gowler and Legge's (1978) contextual interpretation exploring user participation as a ‘dependent’ rather than an ‘independent’ variable. The study examines the process of participation using a range of research methods. We argue that user participation in systems development can only be properly understood through consideration of the nature of the organizational context (e.g., structures and processes), the system and its users, and by analysis of the interactions between these elements.

3.
Context: Enterprise software systems (e.g., enterprise resource planning software) are often deployed in different contexts (e.g., different organizations or different business units or branches of one organization). However, even though organizations, business units or branches have the same or similar business goals, they may differ in how they achieve these goals. Thus, many enterprise software systems are subject to variability and adapted depending on the context in which they are used. Objective: Our goal is to provide a snapshot of variability in large-scale enterprise software systems. We aim at understanding the types of variability that occur in large industrial enterprise software systems. Furthermore, we aim at identifying how variability is handled in such systems. Method: We performed an exploratory case study in two large software organizations, involving two large enterprise software systems. Data were collected through interviews and document analysis. Data were analyzed following a grounded theory approach. Results: We identified seven types of variability (e.g., functionality, infrastructure) and eight mechanisms to handle variability (e.g., add-ons, code switches). Conclusions: We provide generic types for classifying variability in enterprise software systems, and reusable mechanisms for handling such variability. Some variability types and handling mechanisms for enterprise software systems found in the real world extend existing concepts and theories. Others confirm findings from previous research literature on variability in software in general and are therefore not specific to enterprise software systems. Our findings also offer a theoretical foundation for describing variability handling in practice. Future work needs to provide more evaluations of the theoretical foundations, and refine variability handling mechanisms into more detailed practices.
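To make one of the reported handling mechanisms concrete, here is a minimal sketch of a "code switch" (a configuration-driven toggle) that lets one enterprise code base serve contexts with different business rules; the configuration keys and the VAT-rounding rule are illustrative assumptions, not taken from the studied systems.

```python
# Minimal sketch of a "code switch": the same code base serves two
# business units that round VAT differently. All names and the rounding
# rules are illustrative, not from the studied systems.
from decimal import Decimal, ROUND_DOWN, ROUND_HALF_UP

config = {"unit": "DE", "switches": {"vat_round_down": False}}  # per-deployment

def vat(amount: Decimal, rate: Decimal) -> Decimal:
    raw = amount * rate
    if config["switches"]["vat_round_down"]:        # the variability point
        return raw.quantize(Decimal("0.01"), rounding=ROUND_DOWN)
    return raw.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

print(vat(Decimal("19.99"), Decimal("0.19")))       # -> 3.80 (half-up)
```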

4.
(Omega-)Regular model checking is the name of a family of techniques in which states are represented by words, sets of states by finite automata on these objects, and transitions by finite automata operating on pairs of state encodings, i.e., finite-state transducers. In this context, the problem of computing the set of reachable states of a system can be reduced to that of computing the iterative closure of the finite-state transducer representing its transition relation. In this tutorial article, we survey an extrapolation-based technique for computing the closure of a given transducer. The approach proceeds by comparing successive elements of a sequence of approximations of the iteration, detecting an “increment” that is added to move from one approximation to the next, and extrapolating the sequence by allowing arbitrary repetitions of this increment. The technique applies to finite-word and deterministic weak Büchi automata. Finally, we discuss the implementation of these results within the T(O)RMC toolsets and present case studies that show the advantages and the limits of the approach.
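The extrapolation step can be illustrated on explicit word sets rather than automata. The toy sketch below detects an "increment" between two successive reachability approximations and conjectures its arbitrary repetition; real implementations such as T(O)RMC operate on transducers and must verify the conjectured closure, so the word-set setting and the helper function are simplifying assumptions.

```python
# Toy illustration of the extrapolation idea on explicit word sets;
# real implementations (e.g., T(O)RMC) work on automata and must verify
# the conjectured closure against the transition relation.
def detect_increment(prev: set, curr: set):
    """Return a suffix I with curr == prev ∪ {w+I : w in prev}, else None."""
    for w in sorted(curr - prev):
        for p in sorted(prev, key=len, reverse=True):
            if w != p and w.startswith(p):
                inc = w[len(p):]
                if curr == prev | {x + inc for x in prev}:
                    return inc
    return None

a1 = {"a"}                        # 1st reachability approximation
a2 = a1 | {w + "b" for w in a1}   # 2nd approximation: increment "b" appears
inc = detect_increment(a1, a2)
print(f"detected increment {inc!r}; conjectured closure: a·({inc})*")
```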

5.
Abstract

Starting from individual fuzzy preference relations, some (sets of) socially best acceptable options are determined, directly or via a social fuzzy preference relation. An assumed fuzzy majority rule is given by a fuzzy linguistic quantifier, e.g., “most.” Here, as opposed to Part I, where we used a consensory-like pooling of individual opinions, we use an approach to linguistic quantifiers that leads to a competitive-like pooling. Some solution concepts are considered: cores, minimax (opposition) sets, consensus winners, and so forth.
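As an illustration of a fuzzy majority given by a linguistic quantifier, the sketch below scores the degree to which "most" individuals prefer one option over another; the piecewise-linear definition of "most" (with breakpoints 0.3 and 0.8) is a common textbook choice and not necessarily the one used in the paper.

```python
# Sketch of fuzzy-majority aggregation via Zadeh's relative quantifier
# "most"; the breakpoints (0.3, 0.8) are a common textbook choice.
def most(r: float) -> float:
    """Membership of proportion r in the fuzzy quantifier 'most'."""
    if r <= 0.3: return 0.0
    if r >= 0.8: return 1.0
    return (r - 0.3) / 0.5

def social_preference(individual_prefs: list) -> float:
    """Degree to which 'most' individuals prefer option x over y,
    given each individual's fuzzy preference degree mu_i(x, y)."""
    mean = sum(individual_prefs) / len(individual_prefs)
    return most(mean)

print(social_preference([0.9, 0.8, 0.7, 0.2]))   # -> most(0.65) = 0.7
```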

6.

In this paper we present an implemented account of multilingual linguistic resources for multilingual text generation that improves significantly on the degree of reuse of resources both across languages and across applications. We argue that this is a necessary step for multilingual generation in order to reduce the high cost of constructing linguistic resources and to make natural language generation relevant for a wider range of applications, particularly, in this paper, for multilingual software and user interfaces. We begin by contrasting a weak and a strong approach to multilinguality in the state of the art in multilingual text generation. Neither approach has provided sufficient principles for organizing multilingual work. We then introduce our framework, where multilingual variation is included as an intrinsic feature of all levels of representation. We provide an example of multilingual tactical generation using this approach and discuss some of the performance, maintenance, and development issues that arise.

7.
The distributed nature of the Web, as a decentralized system exchanging information between heterogeneous sources, has underlined the need to manage interoperability, i.e., the ability to automatically interpret information in Web documents exchanged between different sources, necessary for efficient information management and search applications. In this context, XML was introduced as a data representation standard that simplifies the tasks of interoperation and integration among heterogeneous data sources, allowing data to be represented in (semi-)structured documents consisting of hierarchically nested elements and atomic attributes. However, while XML has been shown most effective in exchanging data, i.e., in syntactic interoperability, it has proven limited when it comes to handling semantics, i.e., semantic interoperability, since it only specifies the syntactic and structural properties of the data without any further semantic meaning. As a result, XML semantic-aware processing has become a motivating challenge in Web data management, requiring dedicated semantic analysis and disambiguation methods to assign well-defined meaning to XML elements and attributes. In this context, most existing approaches: (i) ignore the problem of identifying ambiguous XML elements/nodes, (ii) only partially consider their structural relationships/context, (iii) use syntactic information in processing XML data regardless of the semantics involved, and (iv) are static in adopting fixed disambiguation constraints, thus limiting user involvement. In this paper, we provide a new XML Semantic Disambiguation Framework, titled XSDF, designed to address each of the above limitations, taking as input an XML document and producing as output a semantically augmented XML tree made of unambiguous semantic concepts extracted from a reference machine-readable semantic network. XSDF consists of four main modules for: (i) linguistic pre-processing of simple/compound XML node labels and values, (ii) selecting ambiguous XML nodes as targets for disambiguation, (iii) representing target nodes as special sphere neighborhood vectors including all XML structural relationships within a (user-chosen) range, and (iv) running context vectors through a hybrid disambiguation process combining two approaches, concept-based and context-based disambiguation, allowing the user to tune disambiguation parameters following her needs. Conducted experiments demonstrate the effectiveness and efficiency of our approach in comparison with alternative methods. We also discuss some practical applications of our method, ranging over semantic-aware query rewriting, semantic document clustering and classification, Mobile and Web services search and discovery, as well as blog analysis and event detection in social networks and tweets.
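A drastically simplified sketch of the context-based disambiguation step: each candidate sense of a node label is scored by the overlap between the node's structural neighborhood and the sense's gloss. XSDF's sphere-neighborhood vectors additionally weight neighbors by structural distance; the senses and labels below are invented for illustration.

```python
# Drastically simplified context-based disambiguation: score each
# candidate sense of an XML node label by the overlap between the node's
# structural neighborhood and the sense's gloss words. The senses below
# are illustrative, not from a real semantic network.
def disambiguate(label: str, neighborhood: set, senses: dict) -> str:
    return max(senses, key=lambda s: len(senses[s] & neighborhood))

senses = {
    "bank/finance": {"money", "account", "loan", "branch"},
    "bank/river":   {"water", "shore", "stream"},
}
# neighborhood = labels of parent/sibling/child nodes within the range
print(disambiguate("bank", {"account", "customer", "loan"}, senses))
# -> "bank/finance"
```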

8.
Context: Although metamodelling is generally accepted as important for our understanding of software and systems development, arguments about the validity and utility of ontological versus linguistic metamodelling continue. Objective: The paper examines the traditional, metamodel-focused construction of modelling languages in the context of language use, and particularly speech act theory. These concepts are then applied to the problems introduced by the “Orthogonal Classification Architecture” that is often called the ontological/linguistic paradox. The aim of the paper is to show how it is possible to overcome these problems. Method: The paper adopts a conceptual–analytical approach by revisiting the published arguments and developing an alternative metamodelling architecture based on language use. Results: The analysis shows that when we apply a language use perspective of meaning to traditional modelling concepts, a number of incongruities and misconceptions in the traditional approaches are revealed – issues that are not evident in previous work based primarily on set theory. Clearly differentiating between the extensional and intensional aspects of class concepts (as sets) and also between objects (in the social world) and things (in the physical world) allows a deeper understanding to be gained of the relationship between the ontological and linguistic views promulgated in the modelling world. Conclusions: We propose that a viewpoint that integrates language use ideas into traditional modelling (and metamodelling) is vital, and stress that meaning is not inherent in the physical world; meaning, and thus socially valid objects, are constructed by the use of language, which may or may not establish a one-to-one correspondence relationship between objects and physical things.

9.
Linguistic hedge sets are an effective means of characterizing uncertain information, describing decision makers' evaluations of alternatives in a form closer to natural-language usage. Compared with other linguistic term representations, linguistic hedge sets aim to modify the membership function so that expert decision information is expressed more effectively and objectively, which makes research based on linguistic hedge sets necessary. In view of this, this paper surveys the development of linguistic hedge sets. It first reviews their research background; it then reviews developments in operation laws, semantic quantification, fuzzy logic, and classifiers, and introduces applications of linguistic hedge sets in areas such as sentiment analysis and engineering risk management; finally, it discusses prospects for future research.
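For readers unfamiliar with linguistic hedges, the classic example below shows how a hedge modifies a membership function (Zadeh's concentration and dilation operators); the "tall" membership function itself is an illustrative assumption.

```python
# Classic example of what a linguistic hedge does: it modifies a
# membership function (Zadeh's concentration/dilation operators).
import math

def tall(height_cm: float) -> float:                # illustrative membership
    return min(1.0, max(0.0, (height_cm - 160) / 40))

def very_tall(h: float) -> float:                   # "very" concentrates
    return tall(h) ** 2

def more_or_less_tall(h: float) -> float:           # "more or less" dilates
    return math.sqrt(tall(h))

h = 180.0
print(tall(h), very_tall(h), more_or_less_tall(h))  # 0.5 0.25 0.707...
```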

10.
Context: Blocking bugs are bugs that prevent other bugs from being fixed. Previous studies show that blocking bugs take approximately two to three times longer to be fixed compared to non-blocking bugs. Objective: Thus, automatically predicting blocking bugs early on, so that developers are aware of them, can help reduce the impact of or avoid blocking bugs. However, a major challenge when predicting blocking bugs is that only a small proportion of bugs are blocking bugs, i.e., there is an unequal distribution between blocking and non-blocking bugs. For example, in Eclipse and OpenOffice, only 2.8% and 3.0% of bugs are blocking bugs, respectively. We refer to this as the class imbalance phenomenon. Method: In this paper, we propose ELBlocker to identify blocking bugs given training data. ELBlocker first randomly divides the training data into multiple disjoint sets and builds a classifier for each disjoint set. Next, it combines these multiple classifiers and automatically determines an appropriate imbalance decision boundary to differentiate blocking bugs from non-blocking bugs. With the imbalance decision boundary, a bug report will be classified as a blocking bug when its likelihood score is larger than the decision boundary, even if its likelihood score is low. Results: To examine the benefits of ELBlocker, we perform experiments on 6 large open source projects – namely, Freedesktop, Chromium, Mozilla, Netbeans, OpenOffice, and Eclipse – containing a total of 402,962 bugs. We find that ELBlocker achieves F1 and EffectivenessRatio@20% scores of up to 0.482 and 0.831, respectively. On average across the 6 projects, ELBlocker improves the F1 and EffectivenessRatio@20% scores over the state-of-the-art method proposed by Garcia and Shihab by 14.69% and 8.99%, respectively. Statistical tests show that the improvements are significant and the effect sizes are large. Conclusion: ELBlocker can help deal with the class imbalance phenomenon and improve the prediction of blocking bugs. ELBlocker achieves a substantial and statistically significant improvement over the state-of-the-art methods, i.e., Garcia and Shihab's method, SMOTE, OSS, and Bagging.
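A minimal sketch of the recipe as summarized above: train one classifier per disjoint split of the training data, average their likelihood scores, and tune the imbalance decision boundary; the classifier choice and the F1-maximizing threshold grid are simplifying assumptions, not ELBlocker's exact design.

```python
# Minimal sketch of the ensemble-plus-tuned-boundary idea. Assumes X, y
# are NumPy arrays and every split contains both classes; the classifier
# and threshold grid are illustrative simplifications.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score

def train_ensemble(X, y, n_splits=5):
    idx = np.random.permutation(len(y))
    models = [LogisticRegression(max_iter=1000).fit(X[part], y[part])
              for part in np.array_split(idx, n_splits)]   # disjoint subsets
    scores = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
    # pick the imbalance decision boundary that maximizes F1 on training data
    grid = np.linspace(0.01, 0.99, 99)
    threshold = max(grid, key=lambda t: f1_score(y, scores >= t))
    return models, threshold

def predict(models, threshold, X):
    scores = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
    return scores >= threshold      # blocking if above the tuned boundary
```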

11.
Context: How can the quality of software systems be predicted before deployment? In attempting to answer this question, prediction models are advocated in several studies. The performance of such models drops dramatically, with very low accuracy, when they are used in new software development environments or in new circumstances. Objective: The main objective of this work is to circumvent the model generalizability problem. We propose a new approach that substitutes traditional ways of building prediction models, which use historical data and machine learning techniques. Method: In this paper, the existing models are decision trees built to predict module fault-proneness within the NASA Critical Mission Software. A genetic algorithm is developed to combine and adapt expertise extracted from existing models in order to derive a “composite” model that performs accurately in a given software development context. Experimental evaluation of the approach is carried out in three different software development circumstances. Results: The results show that the derived prediction models work more accurately not only for a particular state of a software organization but also for evolving and modified ones. Conclusion: Our approach is well suited to the nature of software data and at the same time superior to model selection and data combination approaches. We conclude that learning from existing software models (i.e., software expertise) has two immediate advantages: circumventing the model generalizability problem and alleviating the lack of data in software engineering.
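A highly simplified sketch of the search component: a genetic algorithm evolves subsets of rules harvested from existing decision-tree models into a "composite" model fit to local data. Here rules are predicates over module metrics; the encoding, fitness function, and GA parameters are illustrative assumptions, not those of the paper.

```python
# Highly simplified GA: chromosomes are bit-masks over a pool of rules
# extracted from existing decision trees; fitness is accuracy on local
# data. Encoding, fitness, and parameters are illustrative.
import random

def fitness(mask, rules, X, y):
    """Accuracy of 'fault-prone iff any selected rule fires'."""
    hits = sum(any(r(xi) for r, m in zip(rules, mask) if m) == yi
               for xi, yi in zip(X, y))
    return hits / len(y)

def evolve(rules, X, y, pop=30, gens=50, p_mut=0.05):
    population = [[random.random() < 0.5 for _ in rules] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda m: fitness(m, rules, X, y), reverse=True)
        parents = population[: pop // 2]                 # elitist selection
        children = []
        while len(parents) + len(children) < pop:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(rules))        # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([g ^ (random.random() < p_mut) for g in child])
        population = parents + children
    return max(population, key=lambda m: fitness(m, rules, X, y))
```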

12.
Context: Application of a refactoring operation creates a new set of dependencies in the revised design, as well as a new set of further refactoring candidates. Studies of stepwise refactoring recommendation approaches have applied one refactoring at a time, but this is inefficient because identifying the best candidate in each iteration of the refactoring identification process is computation-intensive. Therefore, it is desirable to accurately identify multiple independent candidates to enhance the efficiency of the refactoring process. Objective: We propose an automated approach to identify multiple refactorings that can be applied simultaneously to maximize the maintainability improvement of software. Our approach attains the same degree of maintainability enhancement as identifying the single best refactoring each time, but in fewer iterations (at lower computation cost). Method: The concept of a maximal independent set (MIS) enables us to identify multiple refactoring operations that can be applied simultaneously. Each MIS contains a group of refactoring candidates that neither affect (i.e., enable or disable) one another nor influence one another's effect on maintainability. A refactoring effect delta table quantifies the degree of maintainability improvement of each elementary candidate. For each iteration of the refactoring identification process, the multiple refactorings that best improve maintainability are selected from among the sets of refactoring candidates (MISs). Results: We demonstrate the effectiveness and efficiency of the proposed approach by simulating the refactoring operations on several large-scale open source projects such as jEdit, Columba, and JGit. The results show that our proposed approach can improve maintainability by the same degree or to a better extent than the competing method, which chooses one refactoring candidate at a time, in a significantly smaller number of iterations. Thus, applying multiple refactorings at a time is both effective and efficient. Conclusion: Our proposed approach helps improve the maintainability of software as well as the productivity of refactoring identification.
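The core selection step can be sketched as a greedy independent-set construction over a conflict graph of refactoring candidates, ordered by their effect-delta values; the greedy strategy and the toy inputs are illustrative (it yields a maximal, not necessarily maximum, set).

```python
# Greedy sketch: build a maximal independent set of refactoring
# candidates from a conflict graph, preferring candidates with the
# largest maintainability delta (the "effect delta table").
def select_batch(candidates, delta, conflicts):
    """candidates: ids; delta: id -> maintainability gain;
    conflicts: set of frozenset({a, b}) pairs that cannot be co-applied."""
    chosen = []
    for c in sorted(candidates, key=lambda c: delta[c], reverse=True):
        if all(frozenset({c, d}) not in conflicts for d in chosen):
            chosen.append(c)       # independent of everything chosen so far
    return chosen                  # one batch, applied in a single iteration

deltas = {"extract_m1": 3.2, "move_m2": 2.1, "pull_up_f3": 1.4}
conflicts = {frozenset({"extract_m1", "move_m2"})}
print(select_batch(deltas, deltas, conflicts))  # ['extract_m1', 'pull_up_f3']
```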

13.
Context: An increasing number of publications in product line engineering address product derivation, i.e., the process of building products from reusable assets. Despite its importance, there is still no consensus regarding the requirements for product derivation support. Objective: Our aim is to identify and validate requirements for tool-supported product derivation. Method: We identify the requirements through a systematic literature review and validate them with an expert survey. Results: We discuss the resulting requirements and provide implementation examples from existing product derivation approaches. Conclusions: We conclude that key requirements are emerging in the research literature and are also considered relevant by experts in the field.

14.
Complex activities, e.g. pole vaulting, are composed of a variable number of sub-events connected by complex spatio-temporal relations, whereas simple actions can be represented as sequences of short temporal parts. In this paper, we learn hierarchical representations of activity videos in an unsupervised manner. These hierarchies of mid-level motion components are data-driven decompositions specific to each video. We introduce a spectral divisive clustering algorithm to efficiently extract a hierarchy over a large number of tracklets (i.e. local trajectories). We use this structure to represent a video as an unordered binary tree. We model this tree using nested histograms of local motion features. We provide an efficient positive definite kernel that computes the structural and visual similarity of two hierarchical decompositions by relying on models of their parent–child relations. We present experimental results on four recent challenging benchmarks: the High Five dataset (Patron-Perez et al., High five: recognising human interactions in TV shows, 2010), the Olympics Sports dataset (Niebles et al., Modeling temporal structure of decomposable motion segments for activity classification, 2010), the Hollywood 2 dataset (Marszalek et al., Actions in context, 2009), and the HMDB dataset (Kuehne et al., HMDB: A large video database for human motion recognition, 2011). We show that per-video hierarchies provide additional information for activity recognition. Our approach improves over unstructured activity models, baselines using other motion decomposition algorithms, and the state of the art.
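One plausible reading of the divisive step can be sketched as follows: recursively bipartition tracklets by the sign of the Fiedler vector of a graph Laplacian built from an affinity over tracklet descriptors, yielding an unordered binary tree per video. The RBF affinity and the stopping rule are assumptions for illustration.

```python
# Sketch of spectral divisive clustering over tracklet descriptors:
# recursively split by the sign of the Fiedler vector. The RBF affinity
# and min_size stopping rule are illustrative assumptions.
import numpy as np

def fiedler_split(A):
    """Index sets of the two sides of a spectral bipartition of A."""
    L = np.diag(A.sum(axis=1)) - A             # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)
    f = vecs[:, 1]                             # Fiedler vector
    return np.where(f >= 0)[0], np.where(f < 0)[0]

def divisive_tree(X, idx=None, min_size=2, gamma=1.0):
    idx = np.arange(len(X)) if idx is None else idx
    if len(idx) <= min_size:
        return {"leaf": idx.tolist()}
    D = ((X[idx, None] - X[None, idx]) ** 2).sum(-1)
    A = np.exp(-gamma * D)                     # RBF affinity between tracklets
    left, right = fiedler_split(A)
    if len(left) == 0 or len(right) == 0:      # degenerate split: stop
        return {"leaf": idx.tolist()}
    return {"left": divisive_tree(X, idx[left], min_size, gamma),
            "right": divisive_tree(X, idx[right], min_size, gamma)}

tree = divisive_tree(np.random.rand(16, 8))    # 16 toy tracklet descriptors
```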

15.
Context: During the definition of software product lines (SPLs) it is necessary to choose the components that appropriately fulfil a product's intended functionalities, including its quality requirements (e.g., security, performance, scalability). The selection of the appropriate set of assets from many possible combinations is usually done manually, turning this process into a complex, time-consuming, and error-prone task. Objective: Our main objective is to determine whether, with the use of modeling tools, we can simplify and automate the definition process of an SPL, improving the selection process of reusable assets. Method: We developed a model-driven strategy based on the identification of critical points (sensitivity points) inside the SPL architecture. This strategy automatically selects the components that appropriately match the product's functional and quality requirements. We validated our approach by experimenting with different real configuration and derivation scenarios in a mobile healthcare SPL on which we have worked for the last three years. Results: Through our SPL experiment, we established that our approach improved the selection of reusable assets by nearly 98% when compared with unassisted analysis and selection. However, using our approach there is an increase in the time required for configuration, corresponding to the learning curve of the proposed tools. Conclusion: We can conclude that our domain-specific modeling approach significantly improves the software architect's decision making when selecting the most suitable combinations of reusable components in the context of an SPL.
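A toy sketch of the automated selection idea: at a sensitivity point, keep only the candidate components whose quality attributes satisfy the product's requirements and prefer the best of them; the attribute names, scores, and tie-breaking rule are invented for illustration.

```python
# Toy sketch: at one sensitivity point of the architecture, pick the
# candidate component whose provided qualities satisfy the product's
# requirements. Attribute names and scoring are illustrative.
components = {
    "auth_basic":  {"security": 2, "performance": 9},
    "auth_oauth2": {"security": 8, "performance": 6},
}

def select(candidates, required):
    ok = [c for c, q in candidates.items()
          if all(q.get(k, 0) >= v for k, v in required.items())]
    if not ok:
        raise LookupError("no component satisfies the requirements")
    # prefer the candidate with the highest aggregate quality
    return max(ok, key=lambda c: sum(candidates[c].values()))

print(select(components, {"security": 5}))   # -> "auth_oauth2"
```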

16.
Objective: A light-field camera captures the 4D light-field data of a scene in a single shot, from which a focal stack of images can be rendered and depth information extracted using a focus detection function. However, different focus detection functions have different response characteristics and cannot adapt to all scenes, and the depth maps extracted by most existing methods suffer from large defocus errors and poor robustness. To address this problem, a new depth extraction method based on light-field focus detection functions is proposed to obtain high-precision depth information. Method: A windowed gradient mean-square-error focus detection function is designed to extract depth information from the focal-stack images. The all-in-focus color image and a defocus function are used to mark the defocused regions of the image, and a neighborhood search algorithm corrects the defocus errors. Finally, a Markov random field (MRF) fuses the corrected depth map extracted with the Laplacian operator and the depth map obtained with the gradient mean-square-error function, yielding a high-accuracy depth image. Results: On the Lytro dataset and self-collected test data, the depth maps extracted by the proposed method contain less noise than those of other state-of-the-art algorithms; accuracy is improved by about 9.29% on average and mean squared error is reduced by about 0.056 on average. Conclusion: The depth information extracted by the proposed method exhibits less granular noise, and, guided by the color information, defocus errors are effectively corrected. The method performs well on scenes with many smooth regions.
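A sketch of the depth-from-focus core described above, assuming a grayscale focal stack: a windowed mean of the squared gradient serves as the focus measure, and depth is the index of the best-focused slice. The window size and this particular form of the gradient mean-square-error measure are simplifications of the paper's design.

```python
# Depth-from-focus sketch: per-pixel windowed gradient energy on each
# focal-stack slice; depth = index of the slice of maximal focus.
import numpy as np
from scipy.ndimage import uniform_filter

def focus_measure(img, win=7):
    gy, gx = np.gradient(img.astype(float))
    g2 = gx ** 2 + gy ** 2                    # squared gradient magnitude
    return uniform_filter(g2, size=win)       # windowed local mean

def depth_from_stack(stack, win=7):
    """stack: (n_slices, H, W) grayscale focal stack -> (H, W) depth index."""
    measures = np.stack([focus_measure(s, win) for s in stack])
    return measures.argmax(axis=0)            # best-focused slice per pixel

stack = np.random.rand(12, 64, 64)            # stand-in focal stack
print(depth_from_stack(stack).shape)          # (64, 64)
```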

17.
Context: Inheritance is the cornerstone of object-oriented development, supporting conceptual modeling, subtype polymorphism, and software reuse. But inheritance can be used in subtle ways that make complex systems hard to understand and extend, due to the presence of implicit dependencies in the inheritance hierarchy. Objective: Although these dependencies often specify well-known schemas (i.e., recurrent design or coding patterns, such as hook and template methods), new unanticipated dependency schemas arise in practice and can consequently be hard to recognize and detect. Thus, a developer making changes or extensions to an object-oriented system needs to understand the implicit contracts defined by the dependencies between a class and its subclasses, or risk that seemingly innocuous changes break them. Method: To tackle this problem, we have developed an approach based on Formal Concept Analysis. Our Formal Concept Analysis-based Reverse Engineering methodology (FoCARE) identifies undocumented hierarchical dependencies in a hierarchy by taking into account the existing structure and behavior of classes and subclasses. Results: We validate our approach by applying it to a large and non-trivial case study, yielding a catalog of hierarchy schemas, each one composed of a set of dependencies over methods and attributes in a class hierarchy. We show how the discovered dependency schemas can be used not only to identify good design practices, but also to expose bad smells in design, thereby helping developers in initial reengineering phases to develop a first mental model of a system. Although some of the identified schemas are already documented in existing literature, with our approach based on Formal Concept Analysis (FCA) we are also able to identify previously unidentified schemas. Conclusions: FCA is an effective tool because it is an ideal classification mining tool for identifying commonalities between software artifacts, and these commonalities usually reveal known and unknown characteristics of the software artifacts. We also show that once a catalog of useful schemas stabilizes after several runs of FoCARE, the added cost of FCA is no longer needed.
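The FCA core can be sketched by naively enumerating the formal concepts of a small binary context; here objects stand for methods and attributes for hierarchy properties, with illustrative labels. Real FCA tools use far more efficient algorithms (e.g., NextClosure) than this exponential enumeration.

```python
# Naive FCA sketch: enumerate the formal concepts of a binary context.
# Objects are methods, attributes are hierarchy properties; the labels
# are illustrative, not FoCARE's actual context.
from itertools import combinations

context = {                                   # object -> set of attributes
    "m1": {"in_super", "called_by_template"},
    "m2": {"in_super", "overridden"},
    "m3": {"overridden", "called_by_template"},
}

def extent(attrs):    # objects having all the given attributes
    return {o for o, a in context.items() if attrs <= a}

def intent(objs):     # attributes shared by all the given objects
    sets = [context[o] for o in objs]
    return set.intersection(*sets) if sets else set()

concepts = set()
objs = list(context)
for r in range(len(objs) + 1):
    for combo in combinations(objs, r):
        e = extent(intent(set(combo)))        # closure of the object set
        concepts.add((frozenset(e), frozenset(intent(e))))
for e, i in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(e), sorted(i))
```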

18.
Abstract

We analyze functional models in the wide spectrum of model types. Our premise is that, in order to identify the merits of model types, their perceptions of reality are important. We classify modeling approaches into two categories, the kernel and the interpretative. The kernel approach, which assumes that the world can be modeled by a composition of predefined primitives, suffers from important drawbacks. The interpretative approach, to which the functional models belong, assumes that we know how things work. We investigate the characteristics of this approach from a diagnostic point of view and show that, although it provides a powerful tool for describing devices, it is hindered by serious disadvantages when used in isolation. We advocate the use of multiple models as a solution and provide an ontological semantic network as a framework for the integration of multiple model types.

19.
Context: In industrial settings, products are developed by more than one organization. Software vendors and suppliers typically maintain their own product lines, which contribute to a larger (multi) product line or software ecosystem. It is unrealistic to assume that the participating organizations will agree on using a specific variability modeling technique – they will rather use different approaches and tools to manage the variability of their systems. Objective: We aim to support product configuration in software ecosystems based on several variability models with different semantics that have been created using different notations. Method: We present an integrative approach that provides a unified perspective to users configuring products in multi product line environments, regardless of the different modeling methods and tools used internally. We also present a technical infrastructure and a prototype implementation based on web services. Results: We show the feasibility of the approach and its implementation by using it with the three most widespread types of variability modeling approaches in the product line community, i.e., feature-based, OVM-style, and decision-oriented modeling. To demonstrate the feasibility and flexibility of our approach, we present an example derived from industrial experience in enterprise resource planning. We further applied the approach to support the configuration of privacy settings in the Android ecosystem based on multiple variability models. We also evaluated the performance of different model enactment strategies used in our approach. Conclusions: Tools and techniques allowing stakeholders to handle variability in a uniform manner can considerably foster the initiation and growth of software ecosystems from the perspective of software reuse and configuration.
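The unified perspective can be sketched as an adapter interface that every organization's variability model implements, so one configurator can drive them all; the three methods and the feature-model adapter below are illustrative assumptions, not the paper's actual web-service API.

```python
# Sketch of the integration idea: each variability model is wrapped in
# an adapter exposing one uniform configuration interface. Method names
# are illustrative, not the paper's actual API.
from abc import ABC, abstractmethod

class VariabilityModelAdapter(ABC):
    @abstractmethod
    def open_questions(self) -> list: ...            # what is still unbound
    @abstractmethod
    def set_choice(self, question: str, value) -> None: ...
    @abstractmethod
    def is_valid(self) -> bool: ...                  # constraints satisfied

class FeatureModelAdapter(VariabilityModelAdapter):
    def __init__(self, features, constraints):
        self.features, self.constraints = features, constraints
        self.selection = {}
    def open_questions(self):
        return [f for f in self.features if f not in self.selection]
    def set_choice(self, question, value):
        self.selection[question] = bool(value)
    def is_valid(self):
        return all(c(self.selection) for c in self.constraints)

# A configurator only sees VariabilityModelAdapter; OVM-style and
# decision-oriented adapters would implement the same three methods.
```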

20.
Question-answering systems make good use of knowledge bases (KBs, e.g., Wikipedia) for responding to definition queries. Typically, systems extract relevant facts about the question from articles across KBs, which are then projected onto the candidate answers. However, studies have shown that the performance of this kind of method drops suddenly whenever the KBs supply narrow coverage. This work describes a new approach to deal with this problem by constructing context models for scoring candidate answers; these are, more precisely, statistical n-gram language models inferred from lexicalized dependency paths extracted from Wikipedia abstracts. Unlike state-of-the-art approaches, context models are created by capturing the semantics of candidate answers (e.g., “novel,” “singer,” “coach,” and “city”). This work is extended by investigating the impact on context models of extra linguistic knowledge such as part-of-speech tagging and named-entity recognition. Results showed the effectiveness of context models as n-gram lexicalized dependency paths and promising context indicators for the presence of definitions in natural language texts.
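A small sketch of a context model in the sense described above: a bigram language model with add-one smoothing trained on lexicalized dependency paths, used to compare candidate-answer contexts; the toy training paths stand in for paths extracted from Wikipedia abstracts.

```python
# Bigram context model with add-one smoothing over lexicalized
# dependency paths; training paths are toy stand-ins for Wikipedia data.
import math
from collections import Counter

paths = [["X", "is", "a", "novel"], ["X", "is", "a", "city"],
         ["X", "was", "a", "singer"]]           # toy dependency paths
unigrams, bigrams = Counter(), Counter()
for p in paths:
    unigrams.update(p)
    bigrams.update(zip(p, p[1:]))
V = len(unigrams)                               # vocabulary size

def score(path):
    """Add-one-smoothed bigram log-probability of a lexicalized path."""
    return sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + V))
               for a, b in zip(path, path[1:]))

print(score(["X", "is", "a", "city"]) > score(["X", "is", "a", "table"]))
# True: contexts resembling definition paths score higher
```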
