Related Articles
20 related articles were retrieved.
1.
To solve the task-assignment problem in civil aircraft fault diagnosis, a novel discrete firefly algorithm is proposed for studying the problem, and on this basis Petri nets are used for visual modeling and simulation. The new discretization makes the firefly algorithm, originally designed for continuous problems, applicable to task assignment, which in turn reduces the complexity of the Petri-net model and makes modeling easier. Finally, a practical example is analyzed, modeled, and simulated; the dynamic simulation process and the final results verify the feasibility of the model and the correctness of the algorithm. The method therefore effectively reduces modeling complexity, improves the efficiency of task assignment, and can serve as the task-assignment module of a collaborative visual diagnosis platform.
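The abstract gives no algorithm listing; below is a minimal, hypothetical sketch of one common way to discretize a firefly algorithm for an assignment problem: fireflies are encoded as permutations (task i handled by agent assignment[i]), brightness is the negative assignment cost, and "movement" toward a brighter firefly copies a segment of its assignment and repairs the result into a valid permutation. The cost matrix, the parameters, and the omission of distance-based attractiveness decay are all illustrative assumptions, not details from the paper.

```python
import random

# Hypothetical discrete firefly algorithm for a small task-assignment problem.
# Encoding, cost values and parameters are illustrative assumptions.
COST = [      # COST[task][agent]
    [4, 2, 8],
    [4, 3, 7],
    [3, 1, 6],
]
N = len(COST)

def cost(assignment):
    # assignment[i] = agent index handling task i (a permutation of 0..N-1)
    return sum(COST[task][agent] for task, agent in enumerate(assignment))

def move_towards(follower, leader):
    """Copy a random segment of the brighter firefly, then repair to a permutation."""
    i, j = sorted(random.sample(range(N), 2))
    segment = leader[i:j + 1]
    rest = [a for a in follower if a not in segment]   # keep follower's order elsewhere
    return rest[:i] + segment + rest[i:]

def discrete_firefly(pop_size=20, iterations=100):
    fireflies = [random.sample(range(N), N) for _ in range(pop_size)]
    for _ in range(iterations):
        for a in range(pop_size):
            for b in range(pop_size):
                if cost(fireflies[b]) < cost(fireflies[a]):   # b is "brighter"
                    fireflies[a] = move_towards(fireflies[a], fireflies[b])
        # small random-walk step for the current best firefly
        best = min(fireflies, key=cost)
        i, j = random.sample(range(N), 2)
        candidate = best[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        if cost(candidate) < cost(best):
            fireflies[fireflies.index(best)] = candidate
    return min(fireflies, key=cost)

if __name__ == "__main__":
    solution = discrete_firefly()
    print(solution, cost(solution))
```

The segment-copy-and-repair move is only one possible discretization; swap-based moves or probability-vector encodings are equally common choices.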

2.
The development of software is a complex task frequently resulting in unfinished projects, project overruns and system failures. Software process improvement (SPI) approaches have been promoted as a promising remedy for this situation. The organizational implementation of such approaches is a crucial issue and attempts to introduce SPI into software organizations often fail. This paper presents a framework to understand, and subsequently successfully perform, the implementation of SPI innovations in software organizations. The framework consists of three perspectives on innovation: an individualist, a structuralist and an interactive process perspective. Applied to SPI, they emphasize different aspects of implementing SPI innovations. While the first focuses on leadership, champions and change agents, the second focuses on organization size, departmental and task differentiation and complexity, and the third perspective views the contents of the innovation, the social context and process of the implementation as related in an interactive process. We demonstrate the framework's applicability through two cases. We show that the three perspectives supplement each other and together provide a deeper understanding of the implementation process. Such understanding is crucial for the successful uptake of SPI approaches in software organizations.

3.
Over the past several years, more efficient approaches have been in increasing demand for designing, modeling, and implementing inter-organizational business processes. In process collaboration across organizational boundaries, organizations remain autonomous, which means each organization can freely modify its internal operations to meet its private goals while satisfying the mutual objectives agreed with its partners. Recently, artifact-centric process modeling has been shown to offer higher flexibility in process modeling and execution than traditional activity-centric modeling methods. Although some efforts have been put into exploring how artifact-centric modeling facilitates collaboration between organizations, the results are still far from satisfactory, particularly in the aspects of process modeling and validation. To fill these gaps, we propose a view framework for modeling and validating changes to inter-organizational business processes. The framework consists of an artifact-centric process meta-model, a public-view constructing mechanism, and private-view and change-validating mechanisms, which are specially designed to allow the participating organizations to customize their internal operations while ensuring the correctness of the collaborating processes. We also implement a software tool named Artifact-M to help organizations automatically construct a minimal and consistent public view from their processes.

4.
Software quality models can predict which modules will have high risk, enabling developers to target enhancement activities to the most problematic modules. However, many find collection of the underlying software product and process metrics a daunting task. Many software development organizations routinely use very large databases for project management, configuration management, and problem reporting which record data on events during development. These large databases can be an unintrusive source of data for software quality modeling. However, multiplied by many releases of a legacy system or a broad product line, the amount of data can overwhelm manual analysis. The field of data mining is developing ways to find valuable bits of information in very large databases. This aptly describes our software quality modeling situation. This paper presents a case study that applied data mining techniques to software quality modeling of a very large legacy telecommunications software system's configuration management and problem reporting databases. The case study illustrates how useful models can be built and applied without interfering with development.

5.
Boehm  B. 《Software, IEEE》1996,13(4):73-82
Software organizations need common milestones to serve as a basis for their software development processes. The author proposes three such milestones, gives an example of their use, and discusses why they are success-critical for software projects. To avoid the problems of previous milestone models (stakeholder mismatches, gold plating, inflexible point solutions, high-risk downstream capabilities, and uncontrolled developments), software projects need a mix of flexibility and discipline. The risk-driven content of the LCO, LCA, and IOC milestones lets you tailor them to specific software situations, and yet they remain general enough to apply to most software projects. And, because they emphasize stakeholder commitment to shared system objectives, they can provide your organization with a collaborative framework for successfully realizing software's most powerful capability: its ability to help people and organizations cope with change.

6.
We examine the impact of development process modeling on outcomes in software development projects, limiting our attention to process and product quality. Modeling the software development process requires a careful determination of tasks and their logical relationships. Essentially, the modeling is undertaken to establish a management framework for the project. We define and interrelate development process modeling, task uncertainty, and development outcomes, as assessed by product and process quality. A survey-based research design was used to collect data to test our model. The results suggest that development process modeling is positively related to both product and process quality, while task uncertainty is negatively related to them. Development process modeling reduces the negative impact of task uncertainty on quality-oriented development outcomes. Development projects operating with high levels of task uncertainty should consider defining development process models that provide a framework for management of the project by establishing tasks and their logical interrelationships. Such a model should promote shared understanding of the work process among development constituents and enhance resource utilization efficiency.

7.
Software reliability is increasingly important in today's marketplace. When traditional software development processes fail to deliver the level of reliability demanded by customers, radical changes in software development processes may be needed. Business process reengineering (BPR) is the popular term for comprehensive redesign of business processes. This paper focuses on the business processes that produce commercial software, and illustrates the central role that models have in implementation of BPR. Software metrics and software-quality modeling technology enable reengineering of software development processes, moving from a static process model to a dynamic one that adapts to the expected quality of each module. We present a method for cost-benefit analysis of BPR of software development processes as a function of model accuracy. The paper defines costs, benefits, profit, and return on investment from both short-term and long-term perspectives. The long-term perspective explicitly accounts for software maintenance efforts. A case study of a very large legacy telecommunications system illustrates the method. The dependent variable of the software-quality model was whether a module will have faults discovered by customers. The independent variables were software product and process metrics. In an example, the costs and benefits of using the model are compared to using random selection of modules for reliability enhancement. Such a cost-benefit analysis clarifies the implications of following model recommendations.
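The abstract names costs, benefits, profit, and return on investment without reproducing formulas; the block below is a hedged, generic formulation of that kind of analysis. The symbols and the linear cost and benefit forms are illustrative assumptions, not the paper's exact definitions.

```latex
% Illustrative cost-benefit quantities for a model-driven reliability-enhancement policy.
\begin{aligned}
\text{Cost} &= c \cdot n_{\mathrm{sel}}
  && \text{($c$: enhancement cost per module, $n_{\mathrm{sel}}$: modules selected by the model)}\\
\text{Benefit} &= d \cdot n_{\mathrm{tp}}
  && \text{($d$: avoided cost of a customer-discovered fault, $n_{\mathrm{tp}}$: fault-prone modules correctly selected)}\\
\text{Profit} &= \text{Benefit} - \text{Cost},
  && \qquad \text{ROI} = \frac{\text{Profit}}{\text{Cost}}.
\end{aligned}
```

A long-term variant would add avoided maintenance effort to the benefit term; a more accurate model raises $n_{\mathrm{tp}}$ for the same $n_{\mathrm{sel}}$, which is what makes profit a function of model accuracy.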

8.
A process model based on product function and physical topology structure is proposed. This model decomposes the top-down product design process into module design tasks, part design tasks, design parameters, and the dependencies among them. According to the characteristics of the design process, an integrated collaborative design software framework is designed, which includes module task agents, part task agents, and a sensitivity-based collaborative design method for the agents. The system is developed with Java and Web technologies and applied to a transmission design case to demonstrate its usability and efficiency.

9.
Commercial organizations increasingly need software processes sensitive to business value, quick to apply, supportive of multi-stakeholder collaboration, and capable of early analysis for the cost-effectiveness of process instances. This paper presents experience in applying a lightweight synthesis of a Value-Based Software Quality Achievement process and an Object-Petri-Net-based process model to achieve a stakeholder win-win outcome for software quality achievement in an on-going ERP software project in China. The application results confirmed that 1) the Object-Petri-Net-based process model provided project managers with a synchronization and stabilization framework for process activities, success-critical stakeholders and their value propositions; 2) process visualization and simulation tools significantly increased management visibility and controllability for the success of the software project.

10.
郭娜  刘聪  李彩虹  陆婷  闻立杰  曾庆田 《软件学报》2024,35(3):1341-1356
Predicting the remaining time of business processes is of great value for preventing and intervening in business anomalies. Existing remaining-time prediction methods achieve higher accuracy through deep learning, but most deep models have complex structures whose predictions are hard to explain, i.e., the interpretability problem. In addition, besides the key attribute of activity, remaining-time prediction selects several other attributes as input features of the prediction model according to domain knowledge; the lack of a general feature selection method affects both prediction accuracy and model interpretability. To address these problems, a remaining-time prediction framework based on an explainable feature-based hierarchical model (EFH model) is proposed. Specifically, a feature self-selection strategy is first proposed: through priority-based backward feature deletion and feature-importance-based forward feature selection, the attributes that positively influence the prediction task are obtained as model inputs. Then an explainable feature-based hierarchical model architecture is proposed, which obtains a prediction result at each layer by adding different features layer by layer, thereby explaining the intrinsic relationship between feature values and prediction results. The proposed method is instantiated with the LightGBM (light gradient boosting machine) and LSTM (long short-term memory) algorithms; the framework is general and not limited to these algorithms. Finally, the method is compared with state-of-the-art methods on eight real-life event logs. Experimental results show that the proposed method can select effective features, improve prediction accuracy, and explain the prediction results.
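As a rough illustration of importance-based forward feature selection of the kind the abstract mentions (not the EFH implementation: the use of LightGBM's scikit-learn interface, the data shapes, and the stopping rule are all assumptions), a minimal sketch could look like this:

```python
# Hypothetical sketch of importance-based forward feature selection for
# remaining-time prediction; data layout and stopping rule are assumptions.
import numpy as np
from lightgbm import LGBMRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

def forward_select(X, y, feature_names, tolerance=0.0):
    """Add features in descending importance while validation MAE keeps improving."""
    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)

    # rank candidate features by the importance of a model trained on all of them
    ranker = LGBMRegressor(n_estimators=200).fit(X_tr, y_tr)
    order = np.argsort(ranker.feature_importances_)[::-1]

    selected, best_mae = [], np.inf
    for idx in order:
        trial = selected + [idx]
        model = LGBMRegressor(n_estimators=200).fit(X_tr[:, trial], y_tr)
        mae = mean_absolute_error(y_va, model.predict(X_va[:, trial]))
        if mae < best_mae - tolerance:      # keep the feature only if it helps
            selected, best_mae = trial, mae
    return [feature_names[i] for i in selected], best_mae
```

The same loop run in reverse order of a priority list would give the backward-deletion counterpart described in the abstract.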

11.
李海峰  王栓奇  刘畅  郑军  李震 《软件学报》2013,24(4):749-760
To further improve the fitting and prediction accuracy of existing non-homogeneous Poisson process (NHPP) software reliability growth models, an NHPP-based software reliability modeling framework that simultaneously considers testing effort and test coverage is first proposed. On this basis, the inflected S-shaped testing-effort function (IS-TEF) and the Logistic test-coverage function (LO-TCF) are incorporated into the framework to establish a new software reliability growth model, IS-LO-SRGM. Two important issues in modeling with this framework are also described and analyzed, namely how to determine the specific TEF and TCF and how to estimate the model parameters. Then, on two sets of real failure data, the most suitable growth model, IS-LO-SRGM, is established with the framework and compared with eight classical NHPP models. The case results show that the proposed IS-LO-SRGM model has the best fitting and prediction performance, demonstrating the effectiveness and practicality of the new modeling framework. Finally, imperfect debugging is briefly discussed and analyzed.
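For orientation, the block below shows generic textbook forms of how a testing-effort function W(t) and a test-coverage function c(t) can enter an NHPP mean value function m(t). The exact IS-LO-SRGM combination is not given in the abstract, and the parameter names here are illustrative only.

```latex
% Generic forms, not the paper's exact model; parameter names are illustrative.
\begin{aligned}
m(t) &= a\,\bigl[1 - e^{-b\,(W(t)-W(0))}\bigr]
  && \text{(NHPP with testing effort, $a$ = total fault content)}\\
W(t) &= \frac{\alpha}{1 + \beta\, e^{-\gamma t}}
  && \text{(S-shaped, logistic-type testing-effort function)}\\
c(t) &= \frac{c_{\max}}{1 + A\, e^{-k t}}
  && \text{(logistic test-coverage function)}
\end{aligned}
```

A coverage-aware variant would, for example, replace the constant fault content $a$ with $a \cdot c(t)$, so that only faults in already-covered code can be exposed.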

12.
13.
Existing spatio-temporal-aware representation learning frameworks cannot provide a unified solution to the three questions of "When", "Where", and "What" that arise in real-world scenarios with strong spatio-temporal semantics. Moreover, existing approaches to temporal and spatial modeling have certain shortcomings and cannot achieve optimal performance in complex practical scenarios. To address these problems, this paper proposes a unified user representation framework, GTRL (geography and time aware representation learning), which jointly models users' historical behavior trajectories along both the temporal and spatial dimensions. For temporal modeling, GTRL adopts functional time encoding together with a continuous-time, context-aware graph attention network to flexibly capture high-order structured temporal information on dynamic user behavior graphs. For spatial modeling, GTRL employs hierarchical geographic encoding and a deep historical-trajectory modeling module to efficiently characterize users' location preferences. GTRL uses a unified joint optimization scheme that trains the model on three tasks simultaneously: interaction prediction, interaction time prediction, and interaction location prediction. Finally, extensive experiments on public and industrial datasets verify GTRL's advantages over academic baseline models and its effectiveness in real business scenarios.
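As a small illustration of "functional time encoding" of the kind mentioned above, a Fourier-feature mapping turns a continuous time gap into a vector that an attention layer can consume. The dimensionality, the fixed geometric frequency grid, and the use of NumPy here are assumptions for illustration, not GTRL's actual implementation.

```python
# Hypothetical sketch of a functional (Fourier-feature) time encoding: map an elapsed
# time into a d-dimensional vector. Frequencies are fixed here; they are often learned.
import numpy as np

def time_encoding(delta_t, dim=16, max_period=1.0e4):
    """Encode elapsed time delta_t (seconds) as [cos(w_i t), sin(w_i t)] features."""
    assert dim % 2 == 0, "dim must be even"
    freqs = 1.0 / (max_period ** (np.arange(dim // 2) / (dim // 2)))
    angles = delta_t * freqs
    return np.concatenate([np.cos(angles), np.sin(angles)])

# Example: encode the gaps between a user's consecutive interactions
gaps = [30.0, 3600.0, 86400.0]
encoded = np.stack([time_encoding(g) for g in gaps])   # shape (3, 16)
print(encoded.shape)
```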

14.
Characterizing the software process: a maturity framework
Humphrey  W.S. 《Software, IEEE》1988,5(2):73-79

15.
Testing takes a large share of software development efforts, and hence is of interest when seeking improvements. Several test process improvement frameworks exist, but they are extensive and much too large to be effective for smaller organizations. This paper presents a minimal test practice framework (MTPF) that allows the incremental introduction of appropriate practices at the appropriate time in rapidly expanding organizations. The process for introducing the practice framework tries to minimize resistance to change by maximizing the involvement of the entire organization in the improvement effort and ensuring that changes are made in small steps with a low threshold for each step. The practice framework created and its method of introduction have been evaluated at one company by applying the framework for a one-year period. Twelve local software development companies have also evaluated the framework in a survey.

16.
In the context of air quality monitoring of urban areas, the short-term prediction of key-pollutant concentrations is a daily activity of major importance. Automation of this process is desirable, but developing reliable predictive models with good performance to support this task on an operational basis presents many difficulties. In this paper we present and discuss the NEMO prototype, which has been built to support short-term prediction of NO2 maximum concentration levels in Athens, Greece. NEMO is based on a case-based reasoning approach combining heuristic and statistical techniques. The development process of the system, its architecture, and its performance are described in this paper. NEMO's performance is compared with that of a back-propagation neural network and a decision tree. The overall performance of NEMO makes it a good candidate to support air pollution experts under operational conditions.
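A minimal sketch of the retrieval step such a case-based-reasoning system might use is shown below: find the most similar past days by weighted feature distance and reuse their observed maxima. The features, weights, k, and aggregation rule are illustrative assumptions rather than NEMO's actual design.

```python
# Hypothetical CBR retrieval step for next-day NO2 maximum prediction.
# Each past case: (feature vector for the day, observed NO2 max on the next day).
import numpy as np

def predict_no2_max(query, case_features, case_outcomes, weights, k=5):
    """Return the similarity-weighted mean outcome of the k most similar past cases."""
    diffs = (case_features - query) * weights          # weighted per-feature differences
    dists = np.sqrt((diffs ** 2).sum(axis=1))          # weighted Euclidean distance
    nearest = np.argsort(dists)[:k]
    sim = 1.0 / (1.0 + dists[nearest])                 # closer cases get more weight
    return float(np.average(case_outcomes[nearest], weights=sim))

# Toy usage: features = [today's NO2 max, temperature, wind speed]
cases = np.array([[180.0, 25.0, 2.0], [120.0, 18.0, 6.0], [200.0, 30.0, 1.5]])
outcomes = np.array([190.0, 110.0, 220.0])
query = np.array([170.0, 27.0, 2.5])
print(predict_no2_max(query, cases, outcomes, weights=np.array([1.0, 0.5, 2.0]), k=2))
```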

17.
Software development is a fast-paced environment in which developers must constantly keep up with ever-changing technologies. Furthermore, process improvement initiatives have proven useful in increasing the productivity of a software organization. As such, these organizations need to decide where to invest their training budget: training the workforce in new technologies and training it in conformance with the organization's production processes become competing alternatives. This paper presents a system dynamics simulation of a software factory product line. The objective of this simulation is to understand the changes in behavior when selecting either one of the above training alternatives. The system dynamics model was validated with an expert panel, and the simulation results have been empirically validated, using statistical process control, against the performance baseline of a real software development organization. With the simulation under statistical control and performing like the baseline, the independent variables representing process conformance (process training) and technology skills (skills training) were modified to study their impact on product defects and process stability. Our results show that while both variables have a positive impact on defects and process stability, investment in process training results in a process with less variation and fewer defects.
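The validation against a performance baseline using statistical process control is described only at a high level; the sketch below shows a small, generic individuals-control-chart (XmR) check of the kind that could be used to decide whether simulated performance stays within a real baseline's control limits. The moving-range-based limits are the standard XmR formulas; the data values and the metric are illustrative only.

```python
# Generic XmR (individuals / moving range) control-chart check, a common SPC tool.
import numpy as np

def xmr_limits(baseline):
    """Individuals-chart center line and 3-sigma control limits from the baseline sample."""
    baseline = np.asarray(baseline, dtype=float)
    moving_range = np.abs(np.diff(baseline))
    center = baseline.mean()
    sigma_hat = moving_range.mean() / 1.128     # d2 constant for subgroups of size 2
    return center, center - 3 * sigma_hat, center + 3 * sigma_hat

def in_control(values, limits):
    center, lcl, ucl = limits
    return all(lcl <= v <= ucl for v in values)

baseline_defect_density = [4.1, 3.8, 4.4, 4.0, 3.9, 4.2, 4.3, 3.7]   # illustrative data
simulated = [4.0, 4.2, 3.9, 4.1]
print(in_control(simulated, xmr_limits(baseline_defect_density)))
```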

18.
Context: Building defect prediction models in large organizations has many challenges due to limited resources and tight schedules in the software development lifecycle. It is not easy to collect data, utilize any type of algorithm and build a permanent model at once. We have conducted a study in a large telecommunications company in Turkey to employ a software measurement program and to predict pre-release defects. Based on our prior publication, we have shared our experience in terms of the project steps (i.e. challenges and opportunities). We have further introduced new techniques that improve our earlier results.
Objective: In our previous work, we have built similar predictors using data representative for US software development. Our task here was to check if those predictors were specific solely to US organizations or to a broader class of software.
Method: We have presented our approach and results in the form of an experience report. Specifically, we have made use of different techniques for improving the information content of the software data and the performance of a Naïve Bayes classifier in the prediction model that is locally tuned for the company. We have increased the information content of the software data by using module dependency data and improved the performance by adjusting the hyper-parameter (decision threshold) of the Naïve Bayes classifier. We have reported and discussed our results in terms of defect detection rates and false alarms. We also carried out a cost–benefit analysis to show that our approach can be efficiently put into practice.
Results: Our general result is that general defect predictors, which exist across a wide range of software (in both US and Turkish organizations), are present. Our specific results indicate that concerning the organization subject to this study, the use of version history information along with code metrics decreased false alarms by 22%, the use of dependencies between modules further reduced false alarms by 8%, and the decision threshold optimization for the Naïve Bayes classifier using code metrics and version history information further improved false alarms by 30% in comparison to a prediction using only code metrics and a default decision threshold.
Conclusion: Implementing statistical techniques and machine learning on a real life scenario is a difficult yet possible task. Using simple statistical and algorithmic techniques produces an average detection rate of 88%. Although using dependency data improves our results, it is difficult to collect and analyze such data in general. Therefore, we would recommend optimizing the hyper-parameter of the proposed technique, Naïve Bayes, to calibrate the defect prediction model rather than employing more complex classifiers. We also recommend that researchers who explore statistical and algorithmic methods for defect prediction should spend less time on their algorithms and more time on studying the pragmatic considerations of large organizations.
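The abstract describes decision-threshold tuning for a Naïve Bayes defect predictor without giving code; a minimal sketch of the idea is shown below (choosing the probability cut-off on a validation set to balance detection rate against false alarms). The metric set, the selection rule, and the use of scikit-learn's GaussianNB are assumptions for illustration, not the paper's tooling.

```python
# Hypothetical sketch: tune the decision threshold of a Naive Bayes defect predictor
# to trade probability of detection (pd) against probability of false alarm (pf).
import numpy as np
from sklearn.naive_bayes import GaussianNB

def pd_pf(y_true, y_pred):
    tp = np.sum((y_true == 1) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    return tp / max(tp + fn, 1), fp / max(fp + tn, 1)

def tune_threshold(model, X_val, y_val, min_pd=0.85):
    """Pick the threshold with the lowest false-alarm rate that still reaches min_pd."""
    probs = model.predict_proba(X_val)[:, 1]
    best_t, best_pf = 0.5, None
    for t in np.linspace(0.05, 0.95, 19):
        pd, pf = pd_pf(y_val, (probs >= t).astype(int))
        if pd >= min_pd and (best_pf is None or pf < best_pf):
            best_t, best_pf = t, pf
    return best_t

# Usage sketch (X_* are module-metric matrices, y_* are defect labels):
# model = GaussianNB().fit(X_train, y_train)
# threshold = tune_threshold(model, X_val, y_val)
# flagged = model.predict_proba(X_test)[:, 1] >= threshold
```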

19.
How do practitioners use conceptual modeling in practice?
Davies I.  Green P.  Rosemann M.  Indulska M.  Gallo S. 《Data & Knowledge Engineering》2006,58(3):358-380
Much research has been devoted over the years to investigating and advancing the techniques and tools used by analysts when they model. In contrast to what academics, software providers and their resellers promote as what should be happening, the aim of this research was to determine whether practitioners still embraced conceptual modeling seriously. In addition, what are the most popular techniques and tools used for conceptual modeling? What are the major purposes for which conceptual modeling is used? The study found that the top six most frequently used modeling techniques and methods were ER diagramming, data flow diagramming, systems flowcharting, workflow modeling, UML, and structured charts. Modeling technique use was found to decrease significantly from smaller to medium-sized organizations, but then to increase significantly in larger organizations (proxying for large, complex projects). Technique use was also found to significantly follow an inverted U-shaped curve, contrary to some prior explanations. Additionally, an important contribution of this study was the identification of the factors that uniquely influence the decision of analysts to continue to use modeling, viz., communication (using diagrams) to/from stakeholders, internal knowledge (lack of) of techniques, user expectations management, understanding models’ integration into the business, and tool/software deficiencies. The highest ranked purposes for which modeling was undertaken were database design and management, business process documentation, business process improvement, and software development.

20.
The importance of high data quality and the need to consider data quality in the context of business processes are well acknowledged. Process modeling is mandatory for process-driven data quality management, which seeks to improve and sustain data quality by redesigning processes that create or modify data. A variety of process modeling languages exist, which organizations heterogeneously apply. The purpose of this article is to present a context-independent approach to integrate data quality into the variety of existing process models. The authors aim to improve communication of data quality issues across stakeholders while considering process model complexity. They build on a keyword-based literature review in 74 IS journals and three conferences, reviewing 1,555 articles from 1995 onwards. 26 articles, including 46 process models, were examined in detail. The literature review reveals the need for a context-independent and visible integration of data quality into process models. First, the authors derive the within-model integration, that is, enhancement of existing process models with data quality characteristics. Second, they derive the across-model integration, that is, integration of a data-quality-centric process model with existing process models. Since process models are mainly used for communicating processes, they consider the impact of integrating data quality and the application of patterns for complexity reduction on the models' complexity metrics. There is a need for further research on complexity metrics to improve the applicability of complexity reduction patterns. Missing knowledge about interdependencies between metrics and missing complexity metrics impede assessment and prediction of process model complexity and thus understandability. Finally, our context-independent approach can be used complementarily to data quality integration approaches focusing on specific process modeling languages.
