Similar Documents (20 results)
1.
Computer simulation is a well-established decision support tool in the manufacturing industry. The rapid development and deployment of simulation models, however, are inhibited by factors such as inefficient data collection, lengthy model documentation, and poorly planned experimentation. Typically, more than one third of project time is spent on identification, collection, validation, and analysis of input data. Whilst most research work has been focused on statistical techniques for data analysis, less attention has been paid to the development of systematic approaches to input data gathering. This paper presents a methodology for rapid identification and collection of input data in batch manufacturing environments. A functional module library and a reference data model, both developed using the IDEF (Integrated computer aided manufacturing DEFinition) family of constructs, are the core elements of the methodology. The paper also identifies the major causes behind the inefficient collection of data.

2.
An effective data collection method for evaluating software development methodologies and for studying the software development process is described. The method uses goal-directed data collection to evaluate methodologies with respect to the claims made for them. Such claims are used as a basis for defining the goals of the data collection, establishing a list of questions of interest to be answered by data analysis, defining a set of data categorization schemes, and designing a data collection form. The data to be collected are based on the changes made to the software during development, and are obtained when the changes are made. To ensure accuracy of the data, validation is performed concurrently with software development and data collection. Validation is based on interviews with those people supplying the data. Results from using the methodology show that data validation is a necessary part of change data collection. Without it, as much as 50 percent of the data may be erroneous. Feasibility of the data collection methodology was demonstrated by applying it to five different projects in two different environments. The application showed that the methodology was both feasible and useful.
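A minimal sketch of the kind of change-based record and concurrent validation pass this abstract describes; the field names, categorization values, and checks below are illustrative assumptions, not the authors' actual data collection form.

```python
# Hypothetical sketch of goal-directed change data collection: a change record
# captured when a modification is made, plus a validation pass of the kind the
# abstract says catches erroneous entries. Field names are illustrative only.
from dataclasses import dataclass, field
from datetime import date
from typing import List

VALID_TYPES = {"error_correction", "planned_enhancement", "requirements_change"}

@dataclass
class ChangeRecord:
    project: str
    change_type: str          # one of the assumed categorization schemes
    description: str
    modules_changed: List[str]
    effort_hours: float
    date_made: date = field(default_factory=date.today)

def validate_record(rec: ChangeRecord) -> List[str]:
    """Return a list of problems to raise in the follow-up interview."""
    problems = []
    if rec.change_type not in VALID_TYPES:
        problems.append(f"unknown change type: {rec.change_type!r}")
    if not rec.modules_changed:
        problems.append("no modules listed for the change")
    if rec.effort_hours <= 0:
        problems.append("effort must be positive")
    return problems

if __name__ == "__main__":
    rec = ChangeRecord("proj-A", "error_correction", "off-by-one in parser",
                       ["parser.c"], 1.5)
    print(validate_record(rec) or "record passes automated checks")
```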

3.
To address the difficulty of aggregating and supervising the various monitoring data of rural water supply projects over the long term, in real time, and effectively, this paper discusses the construction of an information aggregation system for rural drinking water safety projects. It proposes a construction approach centered on "data, users, and standards" and an architecture that merges e-government and water-resource platform resources. According to the scale of rural water supply projects and the status of platform construction, three technical schemes for aggregating rural water supply information are proposed: data aggregation and sharing for large-scale water supply projects, direct data transmission for small centralized water supply projects, and platform-side sharing for county-level rural drinking water safety management systems. These schemes achieve efficient aggregation of province-wide rural drinking water data, clearly delineate the supervisory responsibilities for monitoring data at each level, and improve the supervision of rural drinking water safety in Anhui Province. The results have been applied in the construction of Anhui's rural water supply safety monitoring project and provide technical support for the steady improvement of the province's rural drinking water safety supervision capability.

4.
Efficient utilisation of new mobility data-based services and promotion of acceptance of data collection from vehicles and people demand an understanding of mobility data privacy concerns, associated with increasing use of tracking technologies, diverse data usages and complex data collection environments. Understanding privacy concerns enables improved service and system development and identification of appropriate data management solutions that contribute to data subjects’ privacy protection, as well as efficient utilisation of the collected data. This study aimed to explore earlier research findings on privacy concerns evaluation and investigate their validity in mobility data collection. Explorative multimethod research was conducted in a mobility service pilot through data controller interviews, user interviews and a user survey. The study's results indicated the need to revise and complement existing privacy concerns evaluation in mobility data collection contexts. The primary findings were as follows: (1) Privacy concerns specific to the mobility data collection context exist. (2) Privacy concerns may change during the service use. (3) Users are not necessarily personally worried about their privacy although they ponder on privacy issues. (4) In contrast to traditional ‘privacy calculus’ thinking, users’ expected benefits from data disclosure may also be driven by altruistic motives.

5.
An effective data collection methodology for evaluating software development methodologies was applied to five different software development projects. Results and data from three of the projects are presented. Goals of the data collection included characterizing changes, errors, projects, and programmers, identifying effective error detection and correction techniques, and investigating ripple effects.

6.
Among the key factors for the success of a metrics program are the regularity of metrics collection, a seamless and efficient data collection methodology, and the presence of non-intrusive automated data collection tools. This paper presents the software process data warehousing architecture SPDW+ as a solution to the frequent, seamless, and automated capturing of software quality metrics, and their integration in a central repository for a full range of analyses. The striking features of the SPDW+ ETL (data extraction, transformation, and loading) approach are that it addresses heterogeneity issues related to the software development context, it is automatable and non-intrusive, and it allows different capturing frequency and latency strategies, hence allowing both analysis and monitoring of software metrics. The paper also provides a reference framework that details three orthogonal dimensions for considering ETL issues in the software development process context, used to develop SPDW+ ETL. The advantages of SPDW+ are: (1) flexibility to meet the requirements of the frequent changes in SDP environments; (2) support for monitoring, which implies the execution of frequent and incremental loads; (3) automation of the complex and time-consuming task of capturing metrics, making it seamless; (4) freedom of choice regarding management models and support tools used in projects; and (5) cohesion and consistency of the information stored in the metrics repository which will be used to compare data of different projects. The paper presents the reference framework, illustrates the key role played by the metrics capturing process in a metrics program using a case study, and presents the striking features of SPDW+ and its ETL approach, as well as an evaluation based on a prototype implementation.
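A toy extract-transform-load pass in the spirit of the capturing process described above; it is not the SPDW+ implementation, and the field names, canonical schema, and SQLite repository are assumptions made for illustration.

```python
# Minimal ETL sketch (not the actual SPDW+ code): extract metrics from
# heterogeneous per-project exports, transform them to a canonical schema,
# and load them incrementally into a central repository.
import sqlite3

def extract(raw: dict) -> dict:
    # Each project tool exports metrics under its own field names.
    return {"project": raw.get("proj") or raw.get("project_name"),
            "metric": raw.get("metric") or raw.get("name"),
            "value": float(raw.get("value", 0.0)),
            "captured_at": raw.get("ts") or raw.get("timestamp")}

def load(conn: sqlite3.Connection, rows: list) -> None:
    conn.execute("""CREATE TABLE IF NOT EXISTS metrics
                    (project TEXT, metric TEXT, value REAL, captured_at TEXT)""")
    conn.executemany(
        "INSERT INTO metrics VALUES (:project, :metric, :value, :captured_at)", rows)
    conn.commit()

if __name__ == "__main__":
    exports = [{"proj": "A", "metric": "defect_density", "value": "0.8", "ts": "2024-01-01"},
               {"project_name": "B", "name": "effort_hours", "value": "120", "timestamp": "2024-01-02"}]
    conn = sqlite3.connect(":memory:")
    load(conn, [extract(e) for e in exports])    # frequent, incremental loads
    print(conn.execute("SELECT COUNT(*) FROM metrics").fetchone()[0], "rows loaded")
```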

7.
Security and Privacy in Pervasive Computing
In this issue's Works in Progress department, we have six projects. The first two projects address an individual's privacy concerns and preferences. The next entry discusses a project on data protection for electronic passports. The remaining three projects are investigating various types of privacy protection mechanisms for data collected in pervasive computing environments, by attestation services, and by voice recording systems.

8.
In this paper, we present a Collaborative Object-oriented Visualization Environment (COVE) which provides a flexible and extensible framework for collaborative visualization. COVE integrates collaborative and parallel computing environments based on a distributed object model. It is built as a collection of concurrent objects: collaborative and application objects which interact with one another to construct collaborative parallel computing environments. The former enables COVE to execute various collaborative functions, while the latter allows it to execute fast parallel visualization in various modes. Also, flexibility and extensibility are provided by plugging the proper application objects into COVE at run-time, and making them interact with one another through collaboration objects. For our experiment, three visualization modes for volume rendering are designed and implemented to support the fast and flexible analysis of volume data in a collaborative environment. This work has been supported by KIPA-Information Technology Research Center, University research program by Ministry of Information & Communication, and Brain Korea 21 projects in 2005.

9.
Preprocessing Techniques for Mining User Access Patterns on E-commerce Websites
郭伟刚 《计算机应用》2005,25(3):691-694
This paper presents a comprehensive study of the techniques used in the data preprocessing stage of mining user access patterns on e-commerce websites, covering source data collection as well as data cleaning, user identification, session identification, transaction identification, and session subsequence generation. It also gives methods for filtering frame pages, recognizing access records produced by search engine robots, and generating semantic sequences of user sessions.
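An illustrative sketch (not the paper's code) of two of the preprocessing steps listed above: dropping requests from search engine robots and splitting a user's page views into sessions with a 30-minute inactivity timeout, a commonly used heuristic.

```python
# Illustrative web log preprocessing: filter robot requests and group a user's
# page views into sessions separated by more than 30 minutes of inactivity.
from collections import defaultdict

SESSION_TIMEOUT = 30 * 60  # seconds
ROBOT_HINTS = ("bot", "spider", "crawler")

def is_robot(user_agent: str) -> bool:
    ua = user_agent.lower()
    return any(h in ua for h in ROBOT_HINTS)

def sessionize(log_entries):
    """log_entries: iterable of (user_id, timestamp_sec, url, user_agent)."""
    by_user = defaultdict(list)
    for user, ts, url, ua in log_entries:
        if not is_robot(ua):
            by_user[user].append((ts, url))
    sessions = []
    for user, hits in by_user.items():
        hits.sort()
        current = [hits[0][1]]
        for (prev_ts, _), (ts, url) in zip(hits, hits[1:]):
            if ts - prev_ts > SESSION_TIMEOUT:
                sessions.append((user, current))
                current = []
            current.append(url)
        sessions.append((user, current))
    return sessions

if __name__ == "__main__":
    log = [("u1", 0, "/home", "Mozilla"), ("u1", 100, "/cart", "Mozilla"),
           ("u1", 4000, "/home", "Mozilla"), ("u2", 50, "/", "Googlebot")]
    print(sessionize(log))  # u1 is split into two sessions, u2 dropped as a robot
```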

10.
This article discusses several networked media projects that use sensor technology to transmit data from real-world environments to virtual environments. The Eolus One project uses an experimental virtual control room to run building systems and provide a better communication network among its users. A 3D application design group called Green Phosphor creates code for translating n-dimensional information into 3D interactive formats for real-time effects. The Parsec voice controller system uses sonic inputs to control 3D graphical objects on the Second Life virtual platform. This article focuses on the design principles applied by the three projects at this experimental stage of x-reality design.

11.
This paper presents an approach to automatically diagnosing rediscovered software failures using symptoms, in environments in which many users run the same procedural software system. The approach is based on the observation that the great majority of field software failures are rediscoveries of previously reported problems and that failures caused by the same defect often share common symptoms. Based on actual data, the paper develops a small software failure fingerprint, which consists of the procedure call trace, problem detection location, and the identification of the executing software. The paper demonstrates that over 60 percent of rediscoveries can be automatically diagnosed based on fingerprints; less than 10 percent of defects are misdiagnosed. The paper also discusses a pilot that implements the approach. Using the approach not only saves service resources by eliminating repeated data collection for and diagnosis of reoccurring problems, but it can also improve service response time for rediscoveries.
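A hedged sketch of the fingerprint idea: a small failure signature built from the procedure call trace, detection location, and software identification, used to look up previously diagnosed defects. The data structure and the exact-match lookup policy are illustrative assumptions, not the paper's implementation.

```python
# Illustrative failure-fingerprint lookup: a signature of the executing
# software, the detection location, and the call trace maps rediscoveries
# to defects that have already been diagnosed.
from dataclasses import dataclass
from typing import Optional, Tuple, Dict

@dataclass(frozen=True)
class Fingerprint:
    software_id: str                 # product and version of the executing software
    detection_location: str          # where the failure was detected
    call_trace: Tuple[str, ...]      # procedure call trace at failure time

known_defects: Dict[Fingerprint, str] = {}   # fingerprint -> defect ticket

def diagnose(fp: Fingerprint) -> Optional[str]:
    """Return the ticket of a previously reported defect, or None if new."""
    return known_defects.get(fp)

if __name__ == "__main__":
    fp = Fingerprint("dbms-7.1", "lock_manager", ("open", "acquire_lock", "wait"))
    known_defects[fp] = "DEFECT-1234"        # first report, diagnosed manually
    print(diagnose(fp))                      # a rediscovery resolves automatically
```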

12.
The integration and sharing of multi-source heterogeneous data in virtual research environments is a topic of active interest. Based on the application integration framework of the virtual research environment collaboration suite duckling, this paper implements DLM, a general-purpose tool for data integration and sharing. Built with Portlet technology as a browser/server web application, the tool provides generic data collection, data processing, data visualization, and data download functions, solving the problem of unified data collection, processing, and sharing in virtual research environments. The tool has already been applied to research projects in fields such as atmospheric science, meteorology, and biology, with encouraging initial results.

13.
14.
Project review information plays an important role in recommending review experts. In this paper, we aim to determine a review expert's rating by using historical rating records and the final decision results of previous projects, and by means of a set of rules we construct a rating matrix for projects and experts. To address the data sparseness problem of the rating matrix and the “cold start” problem of recommending new experts, we assume that projects/experts with similar topics have similar feature vectors and propose a review expert collaborative recommendation algorithm based on topic relationships. First, we obtain topics of projects/experts with a latent Dirichlet allocation (LDA) model and build the topic relationship network of projects/experts. Then, through the topic relationships between projects/experts, we find the neighbor collection that shares the largest similarity with the target project/expert and integrate this collection into a collaborative filtering recommendation algorithm based on matrix factorization. Finally, by learning the rating matrix to obtain feature vectors of the projects and experts, we can predict the ratings that a target project will give candidate review experts and thus achieve review expert recommendation. Experiments on a real data set show that the proposed method predicts review expert ratings more effectively and improves the recommendation of review experts.
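A toy sketch of the matrix factorization core mentioned above: learn latent feature vectors for projects and experts from a sparse rating matrix and predict the missing ratings. The LDA topic neighborhoods of the proposed algorithm are omitted, and the hyperparameters and data are invented for illustration.

```python
# Plain numpy matrix factorization on a small project-by-expert rating matrix
# (0 = unobserved); stochastic gradient descent with L2 regularization.
import numpy as np

def factorize(R, k=2, steps=2000, lr=0.01, reg=0.02, seed=0):
    """Return (P, Q) such that R is approximated by P @ Q.T."""
    rng = np.random.default_rng(seed)
    n_proj, n_exp = R.shape
    P = rng.normal(scale=0.1, size=(n_proj, k))   # project feature vectors
    Q = rng.normal(scale=0.1, size=(n_exp, k))    # expert feature vectors
    observed = np.argwhere(R > 0)
    for _ in range(steps):
        for i, j in observed:
            err = R[i, j] - P[i] @ Q[j]
            P[i] += lr * (err * Q[j] - reg * P[i])
            Q[j] += lr * (err * P[i] - reg * Q[j])
    return P, Q

if __name__ == "__main__":
    R = np.array([[5, 3, 0],
                  [4, 0, 1],
                  [0, 1, 5]], dtype=float)
    P, Q = factorize(R)
    print(np.round(P @ Q.T, 1))   # predicted ratings, including the unobserved cells
```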

15.
Risk identification is a knowledge-based process that requires the time-consuming and laborious identification of project-specific risk factors. Current practices for risk identification in construction rely heavily on an expert’s subjective knowledge of the current project and of similar historical projects to determine if a risk may affect the project under study. When quantitative risk-related data are available, they are often stored across multiple sources and in different types of documents complicating data sharing and reuse. The present study introduces an ontology-based approach for construction risk identification that maps and automates the representation of project context and risk information, thereby enhancing the storage, sharing, and reuse of knowledge for the purpose of risk identification. The study also presents a novel wind farm construction project risk ontology that has been validated by a group of industry experts. The resulting ontology-based risk identification approach is able to accommodate project context in the risk identification process and, through implementation of the proposed approach, has identified risk factors that affect the construction of onshore wind farm projects.
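A deliberately simplified sketch of ontology-driven risk identification: each risk factor is linked to the project-context conditions under which it applies, and a project description is matched against those links. The classes, relations, and risks below are invented for illustration and are not taken from the validated wind farm ontology described above.

```python
# Hypothetical context-to-risk mapping: a tiny stand-in for an ontology that
# links risk factors to the project conditions under which they apply.
RISK_ONTOLOGY = {
    "crane_availability":     {"applies_if": {"terrain": "mountainous"}},
    "turbine_delivery_delay": {"applies_if": {"site_access": "limited"}},
    "grid_connection_delay":  {"applies_if": {"grid_distance_km_min": 20}},
}

def identify_risks(project_context: dict) -> list:
    """Return the risk factors whose conditions the project context satisfies."""
    risks = []
    for risk, spec in RISK_ONTOLOGY.items():
        cond = spec["applies_if"]
        match = all(
            project_context.get(k.removesuffix("_min"), 0) >= v if k.endswith("_min")
            else project_context.get(k) == v
            for k, v in cond.items()
        )
        if match:
            risks.append(risk)
    return risks

if __name__ == "__main__":
    onshore_project = {"terrain": "mountainous", "site_access": "good",
                       "grid_distance_km": 35}
    print(identify_risks(onshore_project))
    # ['crane_availability', 'grid_connection_delay']
```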

16.
Addressing the particular requirements of weak current signal acquisition in a variety of application environments, this paper introduces a distributed multi-channel weak current acquisition system. Based on a "data-centric" distributed structure, the system logic is layered into functional modules such as current acquisition nodes, data storage and processing nodes, and the human-machine interface. The current acquisition nodes adopt a single-chip intelligent structure, which makes local digitization and transmission of signals easy to implement; the data storage and processing nodes are responsible for collecting, processing, and packaging the data streams from each acquisition node. For multi...

17.
尹子都  岳昆  张彬彬  李劲 《软件学报》2020,31(11):3540-3558
On the Internet, the large amount of unstructured data carried by web pages, social media, knowledge bases, and similar sources can be represented as online big graphs. Acquiring online big graph data, which includes both data collection and data updating, is an important foundation for big data analysis and knowledge engineering, but it faces challenges such as large data volume, wide distribution, heterogeneity, and rapid change. Based on sampling techniques, this paper proposes parallel and adaptive methods for collecting and updating online big graph data. First, combining the branch-and-bound method with quasi-Monte Carlo sampling, the HD-QMC algorithm is proposed to collect online big graph data adaptively. Then, so that the collected data can reflect the dynamic changes of real online big graphs, the EPP algorithm, based on information entropy and Poisson processes, is proposed to update online big graph data efficiently. The effectiveness of the algorithms is analyzed theoretically, and the acquired online big graph data of various kinds are uniformly represented as RDF triples, providing a convenient and easy-to-use data foundation for online big graph analysis and related research. The collection and update algorithms are implemented on Spark, and experimental results on both synthetic and real data demonstrate the effectiveness and efficiency of the proposed approach.
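A loose illustration of the update idea only, not the EPP algorithm itself: model each source's changes as a Poisson process and refresh first the sources most likely to have changed since the last visit. The change-rate estimates and source names are assumptions.

```python
# Prioritize recrawling by the probability that a source has changed,
# assuming changes arrive as a Poisson process with an estimated rate.
import math

def change_probability(rate_per_day: float, days_since_visit: float) -> float:
    """P(at least one change) under a Poisson process with the given rate."""
    return 1.0 - math.exp(-rate_per_day * days_since_visit)

def refresh_order(sources: dict) -> list:
    """sources: name -> (estimated change rate per day, days since last crawl)."""
    return sorted(sources,
                  key=lambda s: change_probability(*sources[s]),
                  reverse=True)

if __name__ == "__main__":
    sources = {"news_page": (3.0, 0.5),     # changes often, visited recently
               "wiki_entry": (0.1, 10.0),   # changes rarely, visited long ago
               "profile": (0.5, 1.0)}
    print(refresh_order(sources))           # most-likely-changed sources first
```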

18.
Rapid Prototyping (RP) is an emerging technology with different applications in the cycle of product and process development, used during the conceptual and specification phases. To support it, the use of a Data Base Management System (DBMS) and an Expert System (ES) is important to manage the data, support the identification of errors, and check product conformance and production viability during this cycle, seeking intelligent product development. The underlying concept of RP and ES is very straightforward, but a major problem remains concerning integration into the CAD/CAM environment. CAD data can have a variety of formats depending on the model used in the CAD system, as well as on the ES's data and knowledge structure, which usually depends on the expert system shell in which it was developed. To support and speed up the integration of these tools in a manufacturing environment, it is essential to have an integrator environment in which a standard is adopted for the data and knowledge exchange process, providing the software applications with a neutral-format interface and making these tools open and standards-compliant, ready to be used in any other standard environment. UNINOVA, in the scope of European and National projects, developed a STEP-based Platform for Integration of applications (SIP) to support and aid software integration in industrial environments using the standard ISO 10303-STEP, already used in different industrial domains. This paper addresses the integration aspects of using STEP within industrial environments from the technical point of view, and proposes an architecture and a toolkit for the integrator environment, highlighting the implementation effort required to focus RP technology, ES, and DBMS towards STEP-based intelligent product development.

19.
Open-source code hosting platforms have brought vitality and opportunity to the software development industry, but they also harbor many security risks. Problems such as the irregular quality of open-source code, the complexity of project dependency libraries, and the passive way in which vulnerability disclosure platforms collect vulnerabilities all affect the security of open-source projects and of closed-source projects that incorporate open-source components. Most vulnerability-fixing activity cannot be detected and identified in time, directly exposing the security risks of all kinds of projects to attackers. To discover vulnerability fixes in open-source projects comprehensively and promptly, a vulnerability identification system based on differences between project versions, VpatchFinder, was designed and implemented. The system automatically obtains updated code and content data from open-source projects and extracts and analyzes the code and textual descriptions before and after each update. Difference features based on security behavior and code characteristics are proposed, and a feature set of 40 features is constructed, covering four groups: project comment information, page statistics, code statistics, and vulnerability type. A random forest algorithm is used to train a classifier that can identify vulnerabilities. Tested on real vulnerability data, VpatchFinder achieves a precision of 84.35%, an accuracy of 85.46%, and a recall of 85.09%, outperforming other common machine learning models. Further experiments on a curated set of historical CVE vulnerabilities in open-source software show that 68.07% of the vulnerabilities could be discovered by VpatchFinder in advance. The results of this research can provide software security architecture design, development, composition analysis, and related fields with...
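A sketch of the classification step only, using scikit-learn's random forest; the feature values and names are synthetic stand-ins rather than the 40-feature set described above.

```python
# Train a random forest on commit-difference features to flag security fixes.
# The toy features and labels below are invented for illustration.
from sklearn.ensemble import RandomForestClassifier

# Toy features per commit: [comment mentions security terms (0/1),
#                           lines changed, files changed, touches test code (0/1)]
X_train = [[1, 12, 1, 0], [0, 340, 9, 1], [1, 5, 1, 0], [0, 60, 3, 0]]
y_train = [1, 0, 1, 0]            # 1 = vulnerability-fixing commit

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

new_commit = [[1, 8, 1, 0]]       # small, security-worded change to one file
print(clf.predict(new_commit))    # -> [1], flagged as a likely security fix
```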

20.
Research on practical design verification techniques has long been impeded by the lack of published, detailed error data. We have systematically collected design error data over the last few years from a number of academic microprocessor design projects. We analyzed this data and report on the lessons learned in the collection effort.

