At GKN, fatigue monitoring of important components has been conducted since 1979. The monitoring methods depend on the damage mechanisms involved; both quasi-static and dynamic loads are considered. The components to be monitored were selected on the basis of a system analysis. The resulting monitoring data are used to continuously optimise the mode of operation. Experience shows that using monitoring data as input for fatigue assessment is the most realistic and cost-effective approach. The fatigue assessment uses global and local sensitivity studies to evaluate the load-stress relation for each component. These relations can be programmed to produce stress vs. time curves, which are then processed according to ASME rules to give a realistic fatigue usage.
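As a rough illustration of the last step, the sketch below shows how a monitored stress vs. time trace might be reduced to a cumulative fatigue usage factor: turning points are extracted, cycles are counted with a simplified rainflow procedure, and Miner's rule is applied against a design fatigue curve. The Basquin-type curve and its constants are placeholder assumptions, not the ASME design fatigue curves, and the code is not the monitoring software described above.

```python
def reversals(series):
    """Keep only the turning points (peaks and valleys) of a load history."""
    pts = [series[0]]
    for x in series[1:]:
        if len(pts) >= 2 and (pts[-1] - pts[-2]) * (x - pts[-1]) >= 0:
            pts[-1] = x          # same direction: extend the current excursion
        else:
            pts.append(x)
    return pts

def rainflow_ranges(series):
    """Simplified three-point rainflow counting; returns (range, count) pairs."""
    stack, cycles = [], []
    for p in reversals(series):
        stack.append(p)
        while len(stack) >= 3:
            x = abs(stack[-1] - stack[-2])
            y = abs(stack[-2] - stack[-3])
            if x < y:
                break
            if len(stack) == 3:          # range contains the starting point
                cycles.append((y, 0.5))  # count as a half cycle
                stack.pop(0)
            else:
                cycles.append((y, 1.0))  # count as a full cycle
                last = stack.pop()
                stack.pop()
                stack.pop()
                stack.append(last)
    for i in range(len(stack) - 1):      # leftover reversals are half cycles
        cycles.append((abs(stack[i + 1] - stack[i]), 0.5))
    return cycles

def allowable_cycles(stress_amplitude_mpa, A=4.0e11, m=3.0):
    """Placeholder Basquin-type curve N = A * S_a**(-m); a real assessment
    would interpolate the applicable ASME design fatigue curve instead."""
    return A * stress_amplitude_mpa ** (-m)

def fatigue_usage(stress_history_mpa):
    """Cumulative usage factor (Miner's rule) for one stress vs. time trace."""
    usage = 0.0
    for rng, count in rainflow_ranges(stress_history_mpa):
        if rng > 0:
            usage += count / allowable_cycles(rng / 2.0)
    return usage
```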
A CEC-funded project was carried out to develop an advanced Life Monitoring System (LMS) that calculates the creep and fatigue damage experienced by high-temperature pipework components. Four areas were identified in which existing Life Monitoring System technology could be improved:
1. the inclusion of creep relaxation
2. the inclusion of external loads on components
3. a more accurate method of calculating thermal stresses due to temperature transients
4. the inclusion of high-cycle fatigue terms.
The creep relaxation problem was solved using stress reduction factors in an analytical inelastic stress calculation. The stress reduction factors were produced for a number of common geometries and materials by means of non-linear finite element analysis. External loads were catered for by deriving influence coefficients from inelastic analysis of the particular piping system and using them to calculate bending moments at critical positions on the pipework from load and displacement measurements made at convenient points on the pipework. The thermal stress problem was solved with a completely new approach based on Green's functions and fast Fourier transforms, which allows the thermal stress in a complex component to be calculated from simple non-intrusive thermocouple measurements made on the outside of the component. The high-cycle fatigue problem was dealt with by precalculating the fatigue damage associated with standard transients and adding this damage to the cumulative total whenever such a transient occurred.
The site testing provided good practical experience and showed up problems which would not otherwise have been detected.
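As a rough illustration of the Green's function and fast-Fourier-transform approach described above, the sketch below evaluates a Duhamel-type convolution of a measured temperature history with a precomputed unit-step stress response. The Green's function shape, magnitudes and sampling rate are illustrative assumptions; in the project the response function would come from finite element analysis of the specific component and the temperatures from the installed thermocouples.

```python
import numpy as np

def thermal_stress_from_temperature(temp_c, green_unit_step, dt):
    """Stress history from a temperature history via Duhamel's integral:
    sigma(t) = integral_0^t G(t - tau) * dT/dtau dtau,
    where G(t) is the stress response to a unit step change in temperature
    (precomputed once, e.g. by finite element analysis). The linear
    convolution is evaluated with zero-padded FFTs for speed."""
    dT = np.gradient(temp_c, dt)                     # temperature rate dT/dt
    n = len(temp_c) + len(green_unit_step) - 1
    nfft = 1 << (n - 1).bit_length()                 # next power of two
    sigma = np.fft.irfft(
        np.fft.rfft(dT, nfft) * np.fft.rfft(green_unit_step, nfft), nfft
    )
    return sigma[: len(temp_c)] * dt                 # rectangle-rule integral

# Illustrative use: 1 Hz sampling, a 50 degC ramp, placeholder step response
dt = 1.0
t = np.arange(0, 600, dt)
green = 80.0 * np.exp(-t / 120.0)                    # MPa per degC step (assumed)
temp = 300 + 50 * np.clip(t / 300.0, 0, 1)
stress = thermal_stress_from_temperature(temp, green, dt)
```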
The trends in high density interconnection (HDI) multichip module (MCM) techniques that have the potential to reduce interconnection cost and production time are described. The implementation of a workstation processor core in laminated dielectric (MCM-L) technology illustrates current substrate technology capabilities. The design, routing, layout and thermal management of the processor core are described. Thin-film deposited dielectric (MCM-D) technology is discussed as a cost-effective method for future interconnection applications.
Given the enormous size of the genome and the many other types of measurements potentially needed to understand it, it has become necessary to pick and choose targets to measure, because it is still impossible to evaluate the entire genome all at once. What has emerged is a need for rapidly customizable microarrays. Two methods dominate custom microarray synthesis: Affymetrix-like microarrays manufactured using light projection (rather than the semiconductor-like masks Affymetrix uses to mass-manufacture its GeneChip™ arrays), and the ink-jet printing method employed by Agilent. These custom Affymetrix-like microarrays can now be manufactured on a digital optical chemistry (DOC) machine developed at the University of Texas Southwestern Medical Center, and this method offers much higher feature numbers and feature density than is possible with ink-jet printed arrays. On a microarray, each feature contains a single genetic measurement. The initial DOC prototype has been described in several publications and has now led to a second-generation machine. This machine reliably produces a number of arrays daily, has been deployed against a number of biomedical questions, is being used in new ways and has led to a number of spin-off technologies.
A distributed problem-solving system can be characterized as a group of individual cooperating agents working to solve common problems. As dynamic application domains continue to grow in scale and complexity, it becomes more difficult to control the purposeful behavior of agents, especially when unexpected events may occur. This article presents an information and knowledge exchange framework to support distributed problem solving. From the application viewpoint, the article concentrates on the stock trading domain; however, many of the presented solutions can be extended to other dynamic domains. It addresses two important issues: how individual agents should be interconnected so that their resources are used efficiently and their goals accomplished effectively; and how information and knowledge transfer should take place among the agents to allow them to respond successfully to user requests and unexpected external situations. The article introduces the MASST system architecture, which supports dynamic information and knowledge exchange among the cooperating agents. The architecture uses a dynamic blackboard as an inter-agent communication paradigm to facilitate the exchange of factual data, business rules, and commands between cooperating MASST agents. The critical components of the MASST architecture have been implemented and tested in the stock trading domain and have proven to be a viable solution for distributed problem solving based on cooperating agents.
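A minimal sketch of a blackboard-style exchange of facts, rules and commands between cooperating agents is shown below. The class names, topics and trading rule are hypothetical illustrations, not the MASST implementation.

```python
from collections import defaultdict
from typing import Any, Callable

class Blackboard:
    """Minimal dynamic blackboard: agents post items on named topics and
    subscribe to the topics they care about."""
    def __init__(self):
        self._entries = defaultdict(list)        # topic -> posted items
        self._subscribers = defaultdict(list)    # topic -> callbacks

    def subscribe(self, topic: str, callback: Callable[[str, Any], None]):
        self._subscribers[topic].append(callback)

    def post(self, topic: str, item: Any):
        self._entries[topic].append(item)        # keep a shared record
        for callback in self._subscribers[topic]:
            callback(topic, item)                # notify interested agents

class Agent:
    """Toy cooperating agent; real agents would add reasoning and roles."""
    def __init__(self, name: str, board: Blackboard, watches):
        self.name, self.board = name, board
        for topic in watches:
            board.subscribe(topic, self.on_item)

    def on_item(self, topic: str, item: Any):
        print(f"{self.name} received on '{topic}': {item}")

# Illustrative exchange in a stock-trading setting (names are hypothetical)
board = Blackboard()
trader = Agent("TraderAgent", board, watches=["price_alert", "command"])
monitor = Agent("MonitorAgent", board, watches=["business_rule"])
board.post("business_rule", "sell if price drops 5% below purchase price")
board.post("price_alert", {"ticker": "XYZ", "change_pct": -5.4})
board.post("command", "liquidate position XYZ")
```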
No generally accepted principles and guidelines currently exist to help engineers design local interaction mechanisms that result in a desired global behavior. However, several communities have developed ways of approaching this problem in the context of niche application areas. Because the ideas underlying these approaches are often obscured or underemphasized in technical papers, the authors review the role of self-organization in their work. In doing so, they provide a better picture of the status of the emerging field of self-organizing systems, or autonomic computing.