Similar Documents
20 similar documents found (search time: 31 ms)
1.
A new and computationally efficient heuristic algorithm is developed for reliability and maintainability (R&M) allocation in complex hierarchical systems (CHS). This approach has an advantage over other available methods in that mean-time-to-repair (MTTR) and other R&M parameters are estimated at the component level before design. The R&M allocation is achieved by decomposing the system R&M state space into a hierarchy of subsystems and reformulating the problem as a reliability network equivalent.
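The abstract does not give the allocation formulas. As a minimal, generic illustration of component-level R&M allocation (not the authors' heuristic), the sketch below uses a classical ARINC-style apportionment for a series system, with hypothetical failure-rate estimates:

```python
import math

def arinc_allocate(lambda_est, r_sys_target, t):
    """Allocate a system reliability target to series components.

    ARINC-style apportionment: each component's share of the required
    system failure rate is proportional to its estimated failure rate.
    lambda_est   -- estimated component failure rates (per hour)
    r_sys_target -- required system reliability at mission time t
    """
    lam_total = sum(lambda_est)
    lam_sys_req = -math.log(r_sys_target) / t      # required system failure rate
    weights = [lam / lam_total for lam in lambda_est]
    lam_alloc = [w * lam_sys_req for w in weights]
    r_alloc = [math.exp(-lam * t) for lam in lam_alloc]
    return lam_alloc, r_alloc

lam_alloc, r_alloc = arinc_allocate([2e-4, 1e-4, 1e-4], 0.95, 100)
# Product of the allocated component reliabilities recovers the system target:
print(math.prod(r_alloc))  # ≈ 0.95
```

The hierarchical decomposition in the paper would apply such an allocation recursively down the subsystem tree rather than in one flat step.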

2.
The costs of research & development (R&D) and of quality management are generally regarded as two major parts of total cost. The variable performance of R&D and quality design is an important index reflecting the effectiveness of cost reduction. This research attempts to vary all of the variables simultaneously to achieve the global optimum in the selection of R&D and quality-design variables. A genetic algorithm (GA) can treat all of the variables in a global search. In this study, fuzzy refinement with orthogonal arrays was effective in improving the performance of the GA, and the results also showed the benefits of a good chromosome structure on the behavior of the GA. A postponement design with a temporal concept is also proposed to select the effective variables for cost reduction in R&D and quality management design. The experimental results showed that the tempo-postponement design increases flexibility and quick response for supply chain management. Hence, this approach can serve as a useful guideline for researchers working on the optimization of key variable selections for R&D and quality model design.
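The paper's GA with fuzzy refinement and orthogonal arrays is not specified in the abstract. The sketch below is a generic binary GA over hypothetical benefit/cost data, meant only to illustrate the global-search mechanism (selection, crossover, mutation) that such a variable-selection study builds on:

```python
import random

random.seed(42)

# Toy objective: choose which of 8 candidate design variables to activate;
# each has a hypothetical benefit and cost, and we maximize net benefit.
benefit = [5, 3, 8, 2, 7, 4, 6, 1]
cost    = [2, 2, 5, 1, 4, 3, 3, 2]

def fitness(chrom):
    return sum(b - c for g, b, c in zip(chrom, benefit, cost) if g)

def ga(pop_size=20, generations=40, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in range(8)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randint(1, 7)            # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = ga()
print(best, fitness(best))
```

The optimum here selects exactly the variables whose benefit exceeds their cost; the paper's orthogonal-array refinement would replace the random crossover with more structured exploration.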

3.
The maintenance of large information systems involves continuous modifications in response to evolving business conditions or changing user requirements. Based on evidence from a case study, it is shown that the system maintenance activity would benefit greatly if the process knowledge reflecting the teleology of a design could be captured and used to reason about the consequences of changing conditions or requirements. A formalism called REMAP (representation and maintenance of process knowledge), which accumulates design process knowledge to manage systems evolution, is described. To accomplish this, REMAP acquires and maintains dependencies among the design decisions made during a prototyping process, and is able to learn general domain-specific design rules on which such dependencies are based. This knowledge can not only be applied to prototype refinement and systems maintenance, but can also support the reuse of existing design or software fragments to construct similar ones using analogical reasoning techniques.

4.
Inventory management is increasingly discussed as one of the key elements for survival and success in today's production industry. Little effort has been made to solve inventory problems in the Hi R&D environment.

In this paper, a prototype of a microcomputer-based decision support system is introduced to support decisions for managing Hi R&D inventories. The decisions involve the timing and scheduling of purchased components, as well as the classification of those items.


5.
Most of the conventional concepts used by R&D project evaluation models do not seem appropriate for modeling the imprecision inherent in R&D project evaluation. This paper is concerned with project evaluation by aggregating multiple rank-ordered sets based on fuzzy set priority. First, the rank-ordered priority lists of the R&D projects were determined using majority-rule methods; then the aggregate fuzzy set rank order was computed and compared with the others. Finally, we developed a systematic and practical program suitable for simple and easy calculation of all the algorithms. The model was validated by comparative computations on various R&D project examples.
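As a rough stand-in for the paper's majority-rule aggregation of rank-ordered lists (the fuzzy-set priority step is not reproduced here), a Borda-style consensus ranking over hypothetical evaluator lists looks like this:

```python
def borda_aggregate(rankings):
    """Aggregate several rank-ordered project lists into one consensus order.

    rankings: list of lists, each a ranking of the same project names,
    best first. A project scores (n - position) points per list; the
    consensus order sorts by total score (a Borda-style stand-in for
    the paper's majority-rule aggregation).
    """
    n = len(rankings[0])
    score = {p: 0 for p in rankings[0]}
    for ranking in rankings:
        for pos, proj in enumerate(ranking):
            score[proj] += n - pos
    return sorted(score, key=score.get, reverse=True)

# Three evaluators rank four hypothetical R&D projects:
lists = [["A", "B", "C", "D"],
         ["B", "A", "C", "D"],
         ["A", "C", "B", "D"]]
print(borda_aggregate(lists))  # ['A', 'B', 'C', 'D']
```

A fuzzy-set version would replace the crisp positional scores with membership degrees before sorting.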

6.
A computerized quality function deployment approach for retail services
Product and service quality can only be effectively improved when the most important needs of customers are satisfied. Quality Function Deployment (QFD) is an approach used to guide R&D, manufacturing, and management toward the development of products and services that satisfy the needs of consumers. The QFD operations are performed by way of a diagram called the House of Quality (HOQ). The HOQ contains information about the customers' needs (the "whats"), mechanisms to address these needs (the "hows"), and the criteria for deciding which "what" is the most important and which "how" provides the greatest customer satisfaction. A less familiar application of QFD is the improvement of retail services. When QFD is applied to retail services, a computerized HOQ approach becomes integral to the process of providing continuous, iterative quality improvement. The objective of this research is to develop a formal QFD methodology for the retail industry and to build a computerized retail QFD system. The system provides a HOQ architecture for specifying and analyzing the customers' needs, deriving improvement strategies, and formalizing records of progress. Furthermore, two ranking methods that apply customer satisfaction theory are used to assist managers in improving retail services. This system provides an integrated workbench for building retail HOQs and designing retail service strategies.
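The HOQ prioritization the abstract describes — weighting each "how" by the importance of the "whats" it addresses — can be sketched with the conventional 9/3/1 relationship scale and purely hypothetical data:

```python
# Minimal House-of-Quality priority calculation (illustrative data only):
# rows = customer needs ("whats"), columns = service mechanisms ("hows"),
# cell = relationship strength (9 strong / 3 medium / 1 weak / 0 none).
importance = [5, 3, 4]          # customer-assigned weight of each need
relationship = [
    [9, 3, 0],                  # need 0 vs. the three mechanisms
    [1, 9, 3],
    [0, 3, 9],
]

def hoq_priorities(importance, relationship):
    """Technical priority of each 'how' = sum over 'whats' of
    (importance of the 'what') x (relationship strength)."""
    cols = range(len(relationship[0]))
    return [sum(importance[r] * relationship[r][c]
                for r in range(len(importance))) for c in cols]

print(hoq_priorities(importance, relationship))  # [48, 54, 45]
```

Here mechanism 1 wins because it strongly serves the mid-weight need and moderately serves the others; the paper's two satisfaction-theory ranking methods refine exactly this kind of score.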

7.
International standards for Open Systems Interconnection (OSI) services and protocols are well advanced. Complementary standardization work has begun on testing the conformance of products to OSI protocol standards, drawing on about seven years of R&D work on techniques and tools for testing protocol implementations. This paper presents the major aspects of the testing methodology and framework being standardized by ISO and CCITT. It relates some of the ideas to work done in the R&D community and identifies the main topics on which further work is needed.

8.
A novel technique for maximum a posteriori (MAP) adaptation of maximum entropy (MaxEnt) and maximum entropy Markov models (MEMM) is presented. The technique is applied to the problem of automatically capitalizing uniformly cased text. Automatic capitalization is a practically relevant problem: speech recognition output needs to be capitalized, and modern word processors perform capitalization among other text-proofing algorithms such as spelling correction and grammar checking. Capitalization can also be used as a preprocessing step in named entity extraction or machine translation. A "background" capitalizer trained on 20 M words of Wall Street Journal (WSJ) text from 1987 is adapted to two Broadcast News (BN) test sets from 1996 – one containing ABC Primetime Live text and the other NPR Morning News/CNN Morning Edition text. The "in-domain" performance of the WSJ capitalizer is 45% better relative to the 1-gram baseline when evaluated on a test set drawn from WSJ 1994. On the mismatched "out-of-domain" test data, the 1-gram baseline is outperformed by 60% relative; the improvement brought by the adaptation technique using a very small amount of matched BN data – 25–70 k words – is about 20–25% relative. Overall, an automatic capitalization error rate of 1.4% is achieved on BN data. The performance gain obtained by employing our adaptation technique using a tiny amount of out-of-domain training data on top of the background data is striking: as little as 0.14 M words of in-domain data brings more improvement than using 10 times more background training data (from 2 M words to 20 M words).
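The MaxEnt/MEMM MAP adaptation itself is beyond an abstract-level sketch. The toy below illustrates only the underlying idea — blending sparse in-domain evidence with a background model — using a unigram capitalizer with a simple count-interpolation prior (an assumed stand-in, not the paper's model; the corpora here are tiny made-up strings):

```python
from collections import Counter

def train(text):
    """Count the surface forms observed for each lowercase word type."""
    counts = {}
    for w in text.split():
        counts.setdefault(w.lower(), Counter())[w] += 1
    return counts

def map_adapt(background, in_domain, tau=5.0):
    """MAP-style blend: in-domain counts are smoothed toward the
    background distribution with prior strength tau, then the most
    probable surface form is chosen per word type."""
    model = {}
    for key in set(background) | set(in_domain):
        bg = background.get(key, Counter())
        blended = Counter(in_domain.get(key, Counter()))
        bg_total = sum(bg.values()) or 1
        for form, c in bg.items():
            blended[form] += tau * c / bg_total   # pseudo-counts from the prior
        model[key] = blended.most_common(1)[0][0]
    return model

bg = train("the Wall Street Journal reported the news")
ind = train("NPR Morning News aired the news")
model = map_adapt(bg, ind)
print(model["news"], model["morning"])  # news Morning
```

Note how the strong background evidence keeps "news" lowercase despite the ambiguous in-domain counts, while unseen-in-background words like "Morning" follow the in-domain data — the same trade-off the paper's MAP prior controls.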

9.
The process of determining user requirements for software systems is often plagued with uncertainty, ambiguity, and inconsistency. Rapid prototyping offers an iterative approach to requirements engineering that can be used to alleviate some of the problems with communication and understanding. Since the systems development process is characterized by changing requirements and assumptions, involving multiple stakeholders with often differing viewpoints, it is beneficial to capture the justifications for the decisions in the development process in a structured manner. This design rationale, captured during requirements engineering, can be used in conjunction with the rapid prototyping process to support the various stakeholders involved in systems development. CAPS (the Computer Aided Prototyping System) has been built to help software engineers rapidly construct prototypes of proposed software systems. REMAP (Representation and MAintenance of Process knowledge) provides a conceptual model and mechanisms to represent and reason with (design) rationale knowledge. In this paper, we describe how, in the context of evolving requirements, the CAPS system augmented with REMAP helps firm up software requirements through iterative negotiations via examination of executable prototypes and reasoning with design rationale knowledge.

10.
This work presents a method based on Computer Aided Design (CAD) for facilitating the synthesis of Revolute–Revolute (R–R) dyads with adjustable moving pivots. The CAD-based method presented in this work ensures that all prescribed rigid-body parameters used to synthesize the R–R dyad satisfy particular kinematic requirements of an R–R dyad. Through the application of this CAD method, five of the six general R–R dyad constraint equations are satisfied and are therefore not essential to the synthesis of the R–R dyad. By reducing the number of dyad design constraints from six to one, the user can synthesize R–R links with adjustable moving pivots for multi-phase motion and path-generation applications. The included example demonstrates the use of the CAD method in the synthesis of an RRSS path generator with adjustable moving pivots.

11.
Research and development on OSI has become a key issue in many Korean R&D projects related to computers, telecommunications, and services. The Korean computer and communications industry also sees a possible area of expansion in OSI-compatible products, but a national/government-level commitment to OSI as the future direction in Korea is not imminent. This short paper presents various present and future R&D projects on OSI in Korea, including ISDN, LAN, Videotex, Teletex, protocol testing, mixed-mode terminals, and network architecture.

12.
Support for the various stakeholders involved in software projects (designers, maintenance personnel, project managers and executives, end users) can be provided by capturing, in a structured manner, the history of design decisions made in the early stages of the system's development life cycle. Much of this knowledge, called process knowledge, involving the deliberation over alternative requirements and design decisions, is lost in the course of designing and changing such systems. Using an empirical study of the problem-solving behavior of individuals and groups of information systems professionals, a conceptual model called REMAP (representation and maintenance of process knowledge) that relates process knowledge to the objects created during the requirements engineering process has been developed. A prototype environment that provides assistance to the various stakeholders involved in the design and management of large systems has been implemented.

13.
TNO is doing research in many areas of industrial automation and is heavily involved in European projects financed by R&D programmes such as Esprit, Eureka and Brite, and in many ISO and CEN standardization activities. From this experience it has become clear that the "I" of Integration in CIM concerns not only the integration of the so-called "islands of automation" but also the integration of "islands of manufacturing": how we can improve the transfer of manufacturing knowledge. We have to increase the semantic content of our integration approaches, so that not only computer scientists can be involved, but also people from the companies we are trying to help and people who are responsible for the development of new CIM components. The real problem is not one of technical integration of computers, but much more a "conceptual modelling" problem. A fundamental question, for instance, is how we can model, at the semantic level really required, information transfer upstream and downstream in the product life cycle. Based on the analysis of existing CIM projects such as CAD*I, CIM-OSA, Open CAM Systems (Esprit I), IMPACT (Esprit II), CAM-I's CIM Architecture, the Danish Principal model for CIM, and more, we developed a generic and reusable CIM reference architecture. This architecture shows manufacturing activities, real and information-flow objects, CIM components, and industrial automation standards such as STEP, MAP, TOP, EDIFACT and MMS in an integrated way. In this paper we describe the CIM base model used to express the CIM reference architecture and give some details of the CIM reference architecture itself.

14.
Cross Impact Analysis (CIA), as a means of technological forecasting, has a critical weakness: it does not reflect the impact of time on forecast events.

The objective of this research is to modify current Cross Impact Analysis and to develop a new model that estimates the impact of time on the completion of R&D events when the interdependencies between the events are considered.
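The modified model is not given in the abstract. A toy cross-impact update over hypothetical event probabilities shows the basic mechanism the paper extends with completion-time estimates:

```python
def cross_impact_update(p0, impact, occurred):
    """Toy cross-impact update: when an event occurs, the occurrence
    probabilities of dependent events are rescaled by impact factors.

    p0           -- initial event probabilities (hypothetical)
    impact[i][j] -- multiplier applied to event j's probability
                    if event i occurs (hypothetical)
    occurred     -- indices of events that have occurred
    """
    p = list(p0)
    for i in occurred:
        for j in range(len(p)):
            if j != i:
                p[j] = min(1.0, p[j] * impact[i][j])
    return p

p0 = [0.5, 0.4, 0.3]
impact = [[1.0, 1.5, 0.8],
          [1.2, 1.0, 1.0],
          [1.0, 1.0, 1.0]]
updated = cross_impact_update(p0, impact, occurred=[0])
print([round(x, 3) for x in updated])  # [0.5, 0.6, 0.24]
```

The paper's contribution would replace the bare probabilities with time-to-completion estimates, so that event 0 occurring early versus late changes how strongly it accelerates or delays events 1 and 2.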


15.
This paper proposes an algorithm for the model-based design of a distributed protocol for fault detection and diagnosis in very large systems. The overall process is modeled as a set of Time Petri Net (TPN) models (each one modeling a local process) that interact with each other via guarded transitions that become enabled only when certain conditions (expressed as predicates over the marking of some places) are satisfied (the guard is true). In order to use this broad class of timed DES models for fault detection and diagnosis, we derive the timing analysis of TPN models with guarded transitions. We also extend the modeling capability of faults, calling some transitions faulty when the operations they represent take more or less time than the prescribed time interval corresponding to their normal execution. We consider that different local agents receive local observations as well as messages from neighboring agents. Each agent estimates the state of the part of the overall process for which it has a model and from which it observes events, by reconciling observations with model-based predictions. We design algorithms that use limited information exchange between agents and that can quickly answer questions such as "whether and where a fault occurred" and "whether or not some components of the local processes have operated correctly". The algorithms allow each local agent to generate a preliminary diagnosis prior to any communication, and we show that, after communicating, the agents recover the global diagnosis that a centralized agent would have derived. The algorithms are component-oriented, leading to computational efficiency.
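The enabling rule for a guarded TPN transition described above can be sketched directly (an illustrative encoding with hypothetical place names; the paper's full timing analysis is not reproduced):

```python
def enabled(transition, marking, clock):
    """A guarded Time Petri Net transition may fire only if
    (1) every input place holds enough tokens,
    (2) its guard predicate over the marking is true, and
    (3) the elapsed enabling time lies in its static interval [eft, lft].
    Firing outside the interval is what the paper flags as faulty timing."""
    tokens_ok = all(marking.get(p, 0) >= n for p, n in transition["pre"].items())
    guard_ok = transition["guard"](marking)
    eft, lft = transition["interval"]
    return tokens_ok and guard_ok and eft <= clock <= lft

t = {
    "pre": {"p1": 1},
    # guard over a place updated by a neighboring agent's messages:
    "guard": lambda m: m.get("p_remote", 0) >= 1,
    "interval": (2.0, 5.0),        # normal execution window
}
marking = {"p1": 1, "p_remote": 1}
print(enabled(t, marking, clock=3.0))  # True
print(enabled(t, marking, clock=6.0))  # False: past lft, faulty timing
```

Each local agent would evaluate such predicates against its own marking estimate, exchanging only the guard-relevant places with neighbors, which is what keeps the information exchange limited.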

16.
This research contributes to the theoretical basis for the appropriate design of computer-based, integrated planning information systems. The research provides a framework for integrating relevant knowledge, theory, methods, and technology. Criteria for appropriate system design are clarified. The requirements for a conceptual system design are developed based on "diffusion of innovation" theory, lessons learned in the adoption and use of existing planning information systems, current information-processing technology (including expert system technology), and methodology for evaluating mitigation strategies for disaster events. Research findings focus on the assessment of new information systems technology. Chief among these findings are the utility of case-based reasoning for discovering and formalizing the meta-rules needed by expert systems, the role of "diffusion of innovation" theory in establishing design criteria, and the definition of client interests served by integrated planning information systems. The work concludes with the selection of a prototyping exercise; the prototype is developed in a forthcoming technical paper (Masri & Moore, 1994).

17.
余愚, 邓志平. 《计算机应用》 (Journal of Computer Applications), 2007, 27(6): 1521-1523
System reliability and maintainability simulation falls within the scope of discrete-event simulation. Addressing the problem that reliability and maintainability simulation may involve a large number of random variables, this paper discusses how dynamic data structures can improve the efficiency of simulation runs and enlarge the size of problems that can be simulated. Using Turbo C as the implementation background, it discusses the creation of dynamic arrays, the management of dynamic memory, and the use of dynamic arrays, and presents an application of dynamic arrays in system reliability and maintainability simulation.

18.
We propose a cooperative multi-agent platform to support the invention process based on patent document analysis. It helps industrial knowledge managers retrieve and analyze existing patent documents and extract structured information from patents with the aid of ontology and natural language processing techniques. It allows the invention process to be carried out through cooperation and coordination among software agents delegated by the various domain experts in a complex industrial R&D environment. Furthermore, it integrates patent document analysis with the inventive problem-solving method known as TRIZ, which can suggest invention directions based on heuristics or principles for resolving the contradictions among design objectives and engineering parameters. We chose patent invention for chemical mechanical polishing (CMP) as our case study; however, the platform and techniques could be extended to most cooperative invention domains.

19.
This article gives a general overview of industry development and of the present status and future perspectives of manufacturing technology in Korea. The Advanced Manufacturing System project, one of the national R&D projects in manufacturing technology, is introduced. Problems for the future development of manufacturing technology in Korea are discussed.

20.
Takayuki, Mor, Alan. Performance Evaluation, 2005, 61(4): 347-369
We consider two processors, each serving its own M/GI/1 queue, where one of the processors (the "donor") can help the other processor (the "beneficiary") with its jobs during times when the donor processor is idle. That is, the beneficiary processor "steals idle cycles" from the donor processor. There is a switching time required for the donor processor to start working on the beneficiary's jobs, as well as a switching-back time. We also allow for threshold constraints on both the beneficiary and donor sides, whereby the decision to help is based not only on idleness but also on satisfying threshold criteria on the number of jobs.

We analyze the mean response time for the donor and beneficiary processors. Our analysis is approximate, but can be made as accurate as desired, and is validated via simulation. Results of the analysis illuminate principles regarding the general benefits of cycle stealing and the design of cycle-stealing policies.
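For intuition on why cycle stealing helps, the closed-form M/M/1 mean response time (a special case of the paper's M/GI/1 setting, ignoring switching times and thresholds) already shows the asymmetry between a lightly loaded donor and a heavily loaded beneficiary:

```python
def mm1_mean_response(lam, mu):
    """Mean response time of an M/M/1 queue: E[T] = 1 / (mu - lam).
    Requires lam < mu for stability. This is only the no-stealing
    baseline, not the paper's approximate analysis."""
    assert lam < mu, "queue must be stable (lam < mu)"
    return 1.0 / (mu - lam)

# Without cycle stealing, a heavily loaded beneficiary suffers badly
# while the lightly loaded donor idles most of the time:
donor = mm1_mean_response(lam=0.3, mu=1.0)        # lightly loaded
beneficiary = mm1_mean_response(lam=0.9, mu=1.0)  # heavily loaded
print(round(donor, 3), round(beneficiary, 3))     # 1.429 10.0
```

Since the donor is idle 70% of the time in this example, donating those idle cycles can cut the beneficiary's response time dramatically at a small cost to the donor; the paper quantifies that trade-off once switching times and thresholds are included.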



Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号