Similar documents
20 similar documents found.
1.
Ye Tong, Zhuang Yi, Qiao Gongzhe. Software and Systems Modeling (2023) 22(4): 1251-1280
Software and Systems Modeling - Nowadays, large-scale software systems in many domains, such as smart cities, involve multiple parties whose privacy policies may conflict with each other, and thus,...

2.
3.
In software engineering, and in requirements engineering specifically, user requirements are often uncertain. This has become one of the main problems in enterprise informatization practice, and it is an especially critical issue in enterprise information systems engineering. By extending the concept of a model and analyzing the models available in the enterprise domain, a model-based method for coping with the uncertainty of user requirements is proposed. The basic logic and main activities of applying the model-based method to determine the requirements of an enterprise information system are described, and an example of using the ARIS (Architecture of Integrated Information Systems) reference model library to solve a requirements problem is given. The study shows that the model-based method can effectively cope with uncertain requirements in enterprise information systems engineering.

4.
5.
Regeneration of templates from match scores has security and privacy implications for any biometric authentication system. We propose a novel paradigm to reconstruct face templates from match scores using a linear approach. It proceeds by first modeling the behavior of the given face recognition algorithm by an affine transformation. The goal of the modeling is to approximate the distances computed by a face recognition algorithm between two faces by distances between points, representing these faces, in an affine space. Given this space, templates from an independent (break-in) image set are matched only once with the enrolled template of the targeted subject and the match scores are recorded. These scores are then used to embed the targeted subject in the approximating affine (non-orthogonal) space. Given the coordinates of the targeted subject in the affine space, the original template of the targeted subject is reconstructed using the inverse of the affine transformation. We demonstrate our ideas using three fundamentally different face recognition algorithms: Principal Component Analysis (PCA) with the Mahalanobis cosine distance measure, the Bayesian intra-extrapersonal classifier (BIC), and a feature-based commercial algorithm. To demonstrate the independence of the break-in set from the gallery set, we select face templates from two different databases: the Face Recognition Grand Challenge (FRGC) and the Facial Recognition Technology (FERET) database. With an operational point set at 1 percent False Acceptance Rate (FAR) and 99 percent True Acceptance Rate (TAR) for 1,196 enrollments (FERET gallery), we show that at most 600 attempts (score computations) are required to achieve a 73 percent chance of breaking in as a randomly chosen target subject for the commercial face recognition system. With a similar operational setup, we achieve a 72 percent and a 100 percent chance of breaking in for the Bayesian and PCA-based face recognition systems, respectively.
With three different levels of score quantization, we achieve 69 percent, 68 percent, and 49 percent probability of break-in, indicating the robustness of our proposed scheme to score quantization. We also show that, with the same number of attempts, the proposed reconstruction scheme has a 47 percent higher probability of breaking in as a randomly chosen target subject for the commercial system than a hill-climbing approach. Given that the proposed template reconstruction method uses distinct face templates to reconstruct faces, this work exposes a more severe form of vulnerability than a hill-climbing attack, where incrementally different versions of the same face are used. Also, the ability of the proposed approach to reconstruct actual face templates of the users increases privacy concerns in biometric systems.
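The linear core of the embedding step described above can be illustrated with a toy sketch (a minimal 2-D example with illustrative names, not the paper's actual algorithm): exact distance scores from an unknown target to known break-in points yield linear equations that pin the target down, after which an inverse affine map would recover the template.

```python
# Toy sketch of distance-based point recovery (illustrative assumptions:
# 2-D space, three anchors, exact distances; the paper works in a
# higher-dimensional affine space learned from match scores).

def recover_point(anchors, dists):
    """Given anchors a_i and distances d_i from an unknown point y,
    subtracting ||y-a_i||^2 equations pairwise gives linear equations:
    2(a_i - a_0) . y = d_0^2 - d_i^2 + ||a_i||^2 - ||a_0||^2.
    Solve the resulting 2x2 system (3 anchors, 2-D case)."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = dists
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    b1 = d0**2 - d1**2 + (x1**2 + y1**2) - (x0**2 + y0**2)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b2 = d0**2 - d2**2 + (x2**2 + y2**2) - (x0**2 + y0**2)
    det = a11 * a22 - a12 * a21
    return ((b1 * a22 - b2 * a12) / det, (a11 * b2 - a21 * b1) / det)

# A target at (3, 4): with exact distances to three non-collinear anchors,
# the linear system recovers it.
anchors = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
target = (3.0, 4.0)
dists = [((target[0] - ax)**2 + (target[1] - ay)**2) ** 0.5 for ax, ay in anchors]
print(recover_point(anchors, dists))  # close to (3.0, 4.0)
```

With noisy match scores and more anchors, the same equations would be solved in a least-squares sense rather than exactly.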

6.
7.
Hypertext development is still, for the most part, at the ‘handcrafting’ level, where each hypertext document must be hand-designed. We present a compiler which takes hyperdocuments designed using a model-based approach and generates stacks executable in HyperCard. The compiler is implemented in standard SQL over a relational database representation of a hyperdocument designed using the Hypermedia Design Model (HDM). The compiling approach, even though illustrated with HDM, can be used with any ‘structured’ design methodology.

8.
Web requirements engineering is an essential phase of the software project life cycle for the project's results. This phase covers different activities and tasks that, in many situations, depend on the analyst's experience or intuition to obtain accurate specifications. One of these tasks is the conciliation of requirements in projects with different groups of users. This article presents an approach for the systematic conciliation of requirements in large projects using a model-based approach. The article presents a possible implementation of the approach in the context of the NDT (Navigational Development Techniques) methodology and shows an empirical evaluation in a real project by analysing the improvements obtained with our approach. The paper presents interesting results demonstrating a reduction in the time required to find conflicts between requirements, which implies a reduction in global development costs.

9.
Context: Following the evolution of business needs, the requirements of software systems change continuously and new requirements emerge frequently. Requirements documents are often textual artifacts whose structure is not explicitly given. When a change is introduced in a requirements document, the requirements engineer may have to manually analyze all the requirements for a single change. This may result in neglecting the actual impact of a change; consequently, the cost of implementing a change may become several times higher than expected. Objective: In this paper, we aim at improving change impact analysis in requirements by using formal semantics of requirements relations and requirements change types. Method: In our previous work we presented a requirements metamodel with commonly used requirements relation types and their semantics formalized in first-order logic. In this paper, a classification of requirements changes based on the structure of a textual requirement is provided with formal semantics. The formalization of requirements relations and changes is used for propagating proposed changes and for consistency checking of proposed changes in requirements models. Tool support for change impact analysis in requirements models is provided as an extension of our Tool for Requirements Inferencing and Consistency Checking (TRIC). Results: The described approach for change impact analysis helps eliminate some false positive impacts in change propagation and enables consistency checking of changes. Conclusion: We illustrate our approach with an example which shows that the formal semantics of requirements relations and change classification enables change alternatives to be proposed semi-automatically, some false positive impacts to be reduced, and contradicting changes in requirements to be determined.
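The idea of semantics-driven change propagation described above can be sketched as follows (a simplified toy, with assumed relation types and propagation rules; TRIC's actual semantics are formalized in first-order logic):

```python
# Hedged sketch of relation-based change propagation. The relation types
# and the PROPAGATES table are illustrative assumptions, not TRIC's rules.

# relations: (source, relation_type, target)
RELATIONS = [
    ("R1", "refines", "R2"),
    ("R3", "requires", "R1"),
    ("R4", "conflicts", "R2"),
    ("R5", "refines", "R3"),
]

# Only some relation types propagate a given change kind; impacts reached
# through other relation types would be false positives.
PROPAGATES = {
    "delete": {"refines", "requires"},  # dependents of a deleted req are impacted
    "weaken": {"refines"},              # only direct refinements must be re-checked
}

def impacted(changed, change_kind):
    """Transitively collect requirements impacted by `change_kind` on `changed`."""
    frontier, seen = [changed], set()
    while frontier:
        req = frontier.pop()
        for src, rel, dst in RELATIONS:
            if dst == req and rel in PROPAGATES[change_kind] and src not in seen:
                seen.add(src)
                frontier.append(src)
    return seen

print(sorted(impacted("R2", "delete")))  # ['R1', 'R3', 'R5'] -- not the conflict R4
print(sorted(impacted("R2", "weaken")))  # ['R1']
```

Filtering by relation semantics is what prunes impacts that a purely textual analysis would flag.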

10.
Reflectance from images: a model-based approach for human faces
In this paper, we present an image-based framework that acquires the reflectance properties of a human face. A range scan of the face is not required. Based on a morphable face model, the system estimates the 3D shape and establishes point-to-point correspondence across images taken from different viewpoints and across different individuals' faces. This provides a common parameterization of all reconstructed surfaces that can be used to compare and transfer BRDF data between different faces. Shape estimation from images compensates for deformations of the face during the measurement process, such as facial expressions. In the common parameterization, regions of homogeneous materials on the face surface can be defined a priori. We apply analytical BRDF models to express the reflectance properties of each region and estimate their parameters in a least-squares fit from the image data. For each surface point, the diffuse component of the BRDF is locally refined, which provides high detail. We present results for multiple analytical BRDF models, rendered at novel orientations and under novel lighting conditions.
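A minimal toy version of the per-region least-squares fitting step: estimating a single Lambertian albedo rho from intensities I_k = rho * max(0, n . l_k). The paper fits richer analytical BRDF models; this sketch (with assumed names and a single normal) shows only the closed-form least-squares idea.

```python
# Hedged sketch: closed-form least squares for a Lambertian albedo.
# rho = sum(I_k * s_k) / sum(s_k^2), with shading s_k = max(0, n . l_k).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def fit_albedo(normal, lights, intensities):
    s = [max(0.0, dot(normal, l)) for l in lights]
    return sum(i * sk for i, sk in zip(intensities, s)) / sum(sk * sk for sk in s)

# Synthetic check: render intensities with a known albedo, then recover it.
n = (0.0, 0.0, 1.0)
lights = [(0.0, 0.0, 1.0), (0.6, 0.0, 0.8), (0.0, 0.8, 0.6)]
true_rho = 0.7
I = [true_rho * max(0.0, dot(n, l)) for l in lights]
print(fit_albedo(n, lights, I))  # recovers ~0.7
```

In practice this fit is done per material region over many surface points and light/view pairs, and specular terms are fitted alongside the diffuse one.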

11.
Many software systems fail to address their intended purpose because of a lack of user involvement and requirements deficiencies. This paper discusses the elaboration of a requirements-analysis process that integrates a critical-parameter-based approach to task modeling within a user-centric design framework. On one hand, adapting task models to capture requirements bridges the gap between scenarios and critical parameters, which benefits design from the standpoint of user involvement and accurate requirements. On the other hand, using task models as reusable components leverages requirements reuse, which benefits design by increasing quality while simultaneously reducing development costs and time-to-market. First, we present the establishment of both a user-centric and a reuse-centric requirements process, along with its implementation within an integrated design tool suite. Second, we report the design, procedures, and findings of two user studies aimed at assessing the feasibility of novice designers conducting the process, as well as evaluating the resulting benefits for requirements-analysis deliverables, requirements quality, and requirements reuse.

12.
Provenance is metadata that records the evolution history of data. Researchers have recently proposed provenance-aware access control, which decides to grant or deny an access request by tracing and analyzing the provenance of the accessor or of the accessed object. Since provenance is usually recorded by the system at run time and presented as a complex directed graph, identifying, specifying, and managing provenance-aware access control policies is difficult. To address this, a UML-model-based analysis method for provenance-aware access control policies is proposed, including an abstraction and modeling technique for complex provenance graphs and a reference process guide for systematically building provenance models and specifying provenance-aware access control policies within an object-oriented software development process. Finally, a case study of an enterprise online training system illustrates how to apply the proposed method.

13.
This paper presents a congestion control protocol for ad-hoc Wireless LAN (WLAN) with Bandwidth-on-Demand (BoD) access. The novelty of this paper is in the extensive use of model-based control methodologies to simultaneously compute the capacity requests necessary to access the network (BoD) and the capacity allocations required to regulate the rates of the traffic flows (congestion control). The proposed scheme allows one to compute upper bounds on the queue lengths in all the network buffers (thus allowing proper buffer dimensioning and, therefore, overflow prevention), avoids leaving the assigned capacity unused (thus entailing full link utilization), and guarantees the recovery of satisfactory traffic behaviour as soon as congestion situations terminate (congestion recovery). The high-speed WLAN considered in the paper has been developed within the European Union (EU) project Wireless Indoor Flexible High Bitrate Modem Architecture (WINDFLEX). Extensive simulations prove the effectiveness of the proposed scheme.

14.
Creating a formal specification for a design is an error-prone process. At the same time, debugging incorrect specifications is difficult and time consuming. In this work, we propose a debugging method for formal specifications that does not require an implementation. We handle conflicts between a formal specification and the informal design intent using a simulation-based refinement loop, where we reduce the problem of debugging overconstrained specifications to that of debugging unrealizability. We show how model-based diagnosis can be applied to locate an error in an unrealizable specification. The diagnosis algorithm computes properties and signals that can be modified in such a way that the specification becomes realizable, thus pointing out potential error locations. In order to fix the specification, the user must understand the problem. We use counterstrategies to explain conflicts in the specification. Since counterstrategies may be large, we propose several ways to simplify them. First, we compute the counterstrategy not for the original specification but only for an unrealizable core. Second, we use a heuristic to search for a countertrace, i.e., a single input trace which necessarily leads to a specification violation. Finally, we present the countertrace or the counterstrategy as an interactive game against the user, and as a graph summarizing possible plays of this game. We introduce a user-friendly implementation of our debugging method and present experimental results for GR(1) specifications.

15.
At early phases of the product development lifecycle of large-scale Cyber-Physical Systems (CPSs), a large number of requirements need to be assigned to stakeholders from different organizations, or from different departments of the same organization, for review, clarification, and checking of their conformance to standards and regulations. These requirements have various characteristics, such as their importance to the organization, complexity, and dependencies on each other, and thereby require different effort (workload) to review and clarify. While working with our industrial partners in the CPS domain, we discovered an optimization problem in which an optimal solution is required for assigning requirements to various stakeholders, maximizing stakeholders' familiarity with their assigned requirements while balancing the overall workload of each stakeholder. In this direction, we propose a fitness function that takes all the above-mentioned factors into account to guide a search algorithm to an optimal solution. As a pilot experiment, we first investigated four commonly applied search algorithms (GA, (1 + 1) EA, AVM, RS) together with the proposed fitness function; results show that (1 + 1) EA performs significantly better than the other algorithms. Since our optimization problem is multi-objective, we further empirically evaluated the performance of the fitness function with six multi-objective search algorithms (CellDE, MOCell, NSGA-II, PAES, SMPSO, SPEA2), together with (1 + 1) EA (the best in the pilot study) and RS (as the baseline), in terms of finding an optimal solution, using a real-world case study and 120 artificial problems of varying complexity. Results show that, for both the real-world case study and the artificial problems, (1 + 1) EA achieved the best performance for each single objective and NSGA-II achieved the best performance for the overall fitness.
NSGA-II can solve a wide range of problems without significant performance degradation, whereas (1 + 1) EA is not fit for problems with fewer than 250 requirements. Therefore, we recommend that if a project manager is interested in a particular objective, then (1 + 1) EA should be used; otherwise, NSGA-II should be applied to obtain optimal solutions when the overall fitness is the first priority.
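A combined fitness of the kind described above can be sketched as follows (the weighting scheme and normalisation are assumptions for illustration, not the paper's exact formulation): reward average familiarity of stakeholders with their assigned requirements, penalise uneven workload.

```python
# Hedged sketch of a familiarity-vs-workload-balance fitness. Higher is better.

def fitness(assignment, stakeholders, familiarity, effort, w=0.5):
    """assignment: req -> stakeholder; familiarity[(stakeholder, req)] in [0, 1];
    effort[req]: review workload of a requirement; w: trade-off weight."""
    fam = sum(familiarity[(s, r)] for r, s in assignment.items()) / len(assignment)
    loads = {s: 0.0 for s in stakeholders}          # include idle stakeholders
    for r, s in assignment.items():
        loads[s] += effort[r]
    # workload balance term: 1 - normalised spread of per-stakeholder effort
    spread = (max(loads.values()) - min(loads.values())) / sum(loads.values())
    return w * fam + (1 - w) * (1 - spread)

familiarity = {("A", "r1"): 0.9, ("A", "r2"): 0.2,
               ("B", "r1"): 0.3, ("B", "r2"): 0.8}
effort = {"r1": 3.0, "r2": 3.0}
balanced = {"r1": "A", "r2": "B"}   # familiar reviewers, even load
skewed = {"r1": "A", "r2": "A"}     # one stakeholder does everything
print(fitness(balanced, ["A", "B"], familiarity, effort))  # ~0.925
print(fitness(skewed, ["A", "B"], familiarity, effort))    # lower
```

A search algorithm such as (1 + 1) EA or NSGA-II would then mutate the assignment map and keep candidates with higher fitness (or non-dominated objective vectors).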

16.
At present, great demands are posed on software dependability, but how to elicit the dependability requirements is still a challenging task. This paper proposes a novel approach to address this issue. The essential idea is to model a dependable software system as a feedforward-feedback control system, and to present the use cases + control cases model to express the requirements of dependable software systems. In this model, while the use cases are adopted to model the functional requirements, two kinds of control cases (namely the feedforward control cases and the feedback control cases) are designed to model the dependability requirements. The use cases + control cases model provides a unified framework to integrate the modeling of the functional requirements and the dependability requirements at a high abstraction level. To guide the elicitation of the dependability requirements, a HAZOP-based process is also designed. A case study is conducted to illustrate the feasibility of the proposed approach.

17.
Since its conception nearly two decades ago, cognitive load theory (CLT) has been a fertile ground for both empirical and theoretical investigations. The research accumulated over the years has contributed not only to the theory’s validation, but also generated new insights. These new insights helped to refine CLT, making it more precise, but also more complex. A formal (mathematical) simulation model is proposed as a new analytical tool for investigating CLT’s increasingly intricate postulates and their dynamic implications. This paper describes how the theoretical relationships between certain features of instruction and the cognitive capacities of learners can be expressed formally, and how the resulting model can help gain insights into the learning dynamics that arise from these relationships, providing a new aid for research, teaching and practice in the field of instructional design.

18.
Decision trees (DTs) are effective in extracting linguistically interpretable models from data. This paper shows that DTs can also be used to extract information from process models; for example, they can represent homogeneous operating regions of a complex process. To illustrate the usefulness of this novel approach, a detailed case study is shown where DTs are used for forecasting the development of runaway in an industrial fixed-bed tube reactor. Based on first-principles knowledge and historical process data, a steady-state simulator of the tube reactor has been identified and validated. A runaway criterion based on Lyapunov's indirect stability analysis has been applied to generate a database used for DT induction. Finally, the logical rules extracted from the DTs are used in an operator support system (OSS), since they have proven useful for describing the safe operating regions. A simulation study based on the dynamical model of the process is also presented. The results confirm that the synergistic combination of a DT-based expert system and the dynamic simulator yields a powerful tool for runaway forecasting and analysis, which can be used to work out safe operating strategies.
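Rules extracted from a decision tree are interpretable precisely because each root-to-leaf path is a conjunction of threshold tests. A sketch of how such rules could feed an operator support system (the variable names and thresholds here are illustrative assumptions, not values from the reactor study):

```python
# Hedged sketch: DT-extracted rules as interpretable safe-region checks.
# Each rule is a conjunction of threshold tests (a root-to-leaf path)
# paired with a verdict.

RULES = [
    (lambda s: s["inlet_temp"] <= 520.0 and s["coolant_flow"] > 1.2, "safe"),
    (lambda s: s["inlet_temp"] > 520.0, "runaway_risk"),
]

def classify(state, default="inspect"):
    """Return the verdict of the first matching rule, else a cautious default."""
    for pred, verdict in RULES:
        if pred(state):
            return verdict
    return default

print(classify({"inlet_temp": 500.0, "coolant_flow": 1.5}))  # safe
print(classify({"inlet_temp": 530.0, "coolant_flow": 1.5}))  # runaway_risk
print(classify({"inlet_temp": 500.0, "coolant_flow": 1.0}))  # inspect
```

The point of the DT representation is that an operator can read each rule directly, unlike the stability criterion it was induced from.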

19.
Human motion capture (MoCap) data can be used for animation of virtual human-like characters in distributed virtual reality applications and networked games. MoCap data compressed using the standard MPEG-4 encoding pipeline, comprising predictive encoding (and/or DCT decorrelation), quantization, and arithmetic/Huffman encoding, entails significant power consumption for decompression. In this paper, we propose a novel algorithm for compression of MoCap data based on smart indexing, exploiting structural information derived from the skeletal virtual human model. The indexing algorithm can be fine-controlled using three predefined quality control parameters (QCPs). We demonstrate how an efficient combination of the three QCPs results in a lower network bandwidth requirement and reduced power consumption for data decompression at the client end when compared to standard MPEG-4 compression. Since the proposed algorithm exploits structural information derived from the skeletal virtual human model, it is observed to result in virtual human animation of visually acceptable quality upon decompression.
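The quality-vs-bandwidth trade-off that a quality control parameter governs can be illustrated with a minimal uniform-quantisation sketch (the paper's indexing additionally exploits the skeletal structure; the `step` parameter and joint-angle framing here are assumptions):

```python
# Hedged sketch: quality-controlled quantisation of MoCap joint angles.
# A coarser step means fewer distinct indices (fewer bits, cheaper decoding)
# but a larger reconstruction error, bounded by step/2.

def compress(angles, step):
    """Map each joint angle (degrees) to a small integer index."""
    return [round(a / step) for a in angles]

def decompress(indices, step):
    return [i * step for i in indices]

frame = [10.2, 45.7, -30.1, 89.9]
idx = compress(frame, step=0.5)
restored = decompress(idx, step=0.5)
print(max(abs(a - b) for a, b in zip(frame, restored)))  # bounded by 0.25
```

Decompression is a single multiply per value, which is the kind of low-power client-side decoding the paper targets.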

20.
Building an enterprise reuse program: a model-based approach
Reuse is viewed as a realistically effective approach to solving the software crisis. For an organization that wants to build a reuse program, technical and non-technical issues must be considered in parallel. In this paper, a model-based approach to building a systematic reuse program is presented. Component-based reuse is currently a dominant approach to software reuse, and in this approach, building the right reusable component model is the first important step. In order to achieve systematic reuse, a set of component models should be built from different perspectives. Each of these models gives a specific view of the components so as to satisfy the different needs of the different persons involved in the enterprise reuse program. There already exist some component models for reuse from technical perspectives, but less attention has been paid to reusable components from a non-technical view, especially from the view of process and management. In our approach, a reusable component model--the FLP model for reusable component...
