Similar Documents
20 similar documents found
1.
The so-called “thrashing effect”, well known from virtual storage but also reported from database systems and packet-switching networks, has turned out to be a common phenomenon of large systems with concurrent processing. It simply means that beyond a saturation point an increase in load (e.g. number of jobs) leads to a (sometimes sudden) decrease in performance (e.g. throughput). With the growing size and complexity of computer systems and the general trend towards distribution, overload phenomena of different origin can interfere and superimpose on one another, resulting in a composite overload effect that can hardly be broken down into its constituents. Because the complexity of such systems defies detailed modeling, it is more appropriate to look at them in a macroscopic, behavioral way, considering only the two externally measurable variables “load” and “throughput”. The resulting abstraction from internal details can smooth the way to more general treatment and application. The article deals with such overload phenomena and their prevention in a general way, using a control-theoretic approach. Special emphasis is placed on dynamic behavior, where load characteristics change with time, making feedback mechanisms necessary. The problem is approached as a dynamic optimum search problem, for which different algorithms are presented and compared by simulation.
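As a rough illustration of such a dynamic optimum search (a minimal sketch, not one of the paper's algorithms; `measure_throughput` is a hypothetical probe of the system), a feedback controller can hill-climb on the measured load/throughput curve and back off once throughput starts to fall:

```python
# Minimal sketch of feedback overload control as a dynamic optimum search:
# adjust the admitted load toward the throughput peak using only the two
# externally measurable variables, load and throughput.
def control_load(measure_throughput, load=10.0, step=1.0, rounds=50):
    """Hill-climb on the load/throughput curve; reverse direction on decline."""
    direction = +1.0
    prev = measure_throughput(load)
    for _ in range(rounds):
        load = max(1.0, load + direction * step)
        cur = measure_throughput(load)
        if cur < prev:            # past the saturation point: back off
            direction = -direction
        prev = cur
    return load                   # estimate of the optimum operating point

# Toy throughput curve with a saturation peak near load 40 (illustrative only):
peak = lambda x: x * max(0.0, 1.0 - x / 80.0)
print(control_load(peak))         # settles close to the peak at 40
```

A real controller would also have to cope with noisy measurements and time-varying curves, which is exactly where the paper's comparison of algorithms comes in.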

2.
The overview presented covers a wide spectrum of aspects of information systems. Consequently, we had to be very brief, and for detailed definitions and discussions we must refer the interested reader to the underlying literature. We have described how information systems present complex problems to their designers, and we argued that it is hardly possible for any one individual to acquire (and continuously update) sufficient skill over the whole spectrum of problems. It is shown how partitioning the design task into two major areas, the infological or behavioral area on the one hand and the datalogical, computer-technology-oriented area on the other, makes it possible to combine the skills of two (or more) groups of people. In addition, the users are to be directly involved in the (infological part of) design. Development in the “infological area”, as surveyed in the paper, has brought us to the situation where it is possible to apply a documentation technique that is computer independent and intelligible to lay users in its infological parts, yet precise enough to carry over to the data and program design stage. Current research problems in the infological area concern how one could develop the understanding and the motivation of the users so that they can better exploit the possibility, now offered to them, of controlling the design process. Such research is not covered by the paper. Development in the “datalogical area”, as presented, has increased the possibilities for using computers as an aid to designers and for basing the design on more system-wide information. A research field which is presently of high interest, but not presented in the paper, is the development of more formalized methods for handling the interface between the infological and the datalogical design stages. Such research is presently making promising progress in combining recent results from “structured programming” and “structured information analysis”.

3.
Reported here is a trouble-diagnosis system for the AN-24 aircraft engine, realized by encoding the experience of the engine's repair mechanics and experts as computer software. The system is composed of the following four sections, each called a “model”: a phenomena model, an inference model, a learning model, and an interpretation model. The system is therefore called a “model diagnosis system”. The four models are relatively independent, which makes parallel operation, easy debugging, and the addition of new knowledge possible.

The experience of the engine experts is initially stored in an outer knowledge base in the computer. Intermediate knowledge arising during inference is handled in an inner knowledge base, which adopts a blackboard structure. This enables the system not only to diagnose vague preconditioned causes, but also, by learning, to diagnose unpreconditioned ones. The validity of the system was demonstrated in several experiments.
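To make the architecture concrete, here is a minimal sketch of a blackboard-style inner knowledge base driven by rules from an outer knowledge base (all names, symptoms, and rules below are hypothetical illustrations, not taken from the paper):

```python
# Minimal sketch of a blackboard-driven diagnosis loop.
class Blackboard:
    """Shared store where inference posts and reads intermediate knowledge."""
    def __init__(self):
        self.facts = set()

    def post(self, fact):
        self.facts.add(fact)

# Outer knowledge base: expert repair rules as (preconditions -> conclusion).
# These rules and symptom names are invented for illustration.
RULES = [({"high_EGT", "normal_rpm"}, "fuel_nozzle_clogged"),
         ({"vibration", "oil_pressure_drop"}, "bearing_wear")]

def diagnose(symptoms):
    board = Blackboard()
    for s in symptoms:
        board.post(s)
    # Forward-chain until no rule adds new intermediate knowledge.
    changed = True
    while changed:
        changed = False
        for pre, concl in RULES:
            if pre <= board.facts and concl not in board.facts:
                board.post(concl)        # intermediate knowledge on the board
                changed = True
    return board.facts

print(diagnose({"high_EGT", "normal_rpm"}))
```

Because each knowledge source only reads and writes the shared board, new rules can be added without touching the inference loop, which mirrors the independence of the four models described above.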


4.
In this contribution we report on a study of a very versatile neural network algorithm known as “Self-organizing Feature Maps”, based on earlier work of Kohonen [1,2]. In its original version, the algorithm addresses a fundamental issue of brain organization, namely how topographically ordered maps of sensory information can be formed by learning.

This algorithm is investigated for a large number of neurons (up to 16 K) and for an input space of dimension d ≤ 900. To meet the computational demands, the algorithm was implemented on two parallel machines: a self-built Transputer systolic ring and a Connection Machine CM-2.
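For reference, a minimal serial sketch of the feature-map learning step (the decay schedules for the learning rate and neighbourhood width, the lateral-interaction extension, and the parallel decompositions studied in the paper are all omitted):

```python
# One Kohonen feature-map update: find the best-matching neuron and pull
# its grid neighbourhood toward the input, producing a topographic map.
import numpy as np

def som_step(weights, x, grid, sigma=3.0, eta=0.1):
    """weights: (N, d) synaptic vectors; grid: (N, 2) neuron positions."""
    dists = np.linalg.norm(weights - x, axis=1)
    winner = np.argmin(dists)                     # best-matching neuron
    # Gaussian neighbourhood on the neuron grid (shrinking sigma/eta over
    # time, as in the full algorithm, is omitted here).
    d2 = np.sum((grid - grid[winner]) ** 2, axis=1)
    h = np.exp(-d2 / (2 * sigma ** 2))
    weights += eta * h[:, None] * (x - weights)   # topographic update
    return weights

# Toy sizes: a 16x16 neuron grid mapping a 3-dimensional input space.
side = 16
grid = np.array([(i, j) for i in range(side) for j in range(side)], float)
weights = np.random.rand(side * side, 3)
for _ in range(1000):
    weights = som_step(weights, np.random.rand(3), grid)
```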

We will present below

(i) a simulation based on the feature-map algorithm modelling part of the synaptic organization in the “hand region” of the somatosensory cortex,
(ii) a study of the influence of the dimension of the input space on the learning process,
(iii) a simulation of the extended algorithm, which explicitly includes lateral interactions, and
(iv) a comparison of the Transputer-based “coarse-grained” implementation of the model with the “fine-grained” implementation of the same system on the Connection Machine.

5.
With the decline in the cost of hardware, more and more professionals are acquiring personal computers at their desks. The most common uses of these office computers are word processing and spreadsheet applications (e.g. LOTUS 123, MULTIMATE, FRAMEWORK, etc.). Professionals typically generate text directly on the personal computer (in lieu of handwritten copy) and use spreadsheet programs to tabulate and analyze collected field data. A problem in some offices is integrating text and data prepared on the personal computer with the dedicated word processing systems that have been in place in many office environments for some time. One solution is to have the text and tabulated data retyped into the word processing system; however, this approach is not an effective use of resources.

The Industrial Hygiene Section, Industrywide Studies Branch (DSHEFS), NIOSH has developed procedures for electronically linking personal microcomputers with an office-wide word processing system. Using a commercially available hardware “board” (which may be inserted into an open slot in an IBM-PC or compatible), the rough copy of a report and tabulated spreadsheet data can be electronically linked and “uploaded” from a microcomputer to the word processing system. At NIOSH, a WANG word processing system is the office-wide system for preparing and publishing final reports. This system is not readily compatible with IBM-PC (or similar) microcomputers; however, using MULTIMATE (a commercially available word processing program) and the hardware board, documents can be transferred to the WANG virtually unchanged from the copy generated on the microcomputer. Importantly, spreadsheet data can be similarly transferred and linked to a document on the WANG word processing system.

This paper describes the sequence of steps, along with the necessary hardware and software, for moving written documents and numerical data (analyzed by LOTUS 123) from a microcomputer to an office-wide WANG word processing system.


6.
7.
A model of program complexity is introduced which combines structural control flow measures with data flow measures. The complexity measure is based upon the prime program decomposition of a program written for a Hierarchical Abstract Computer. It is shown that the measure is consistent with the ideas of information hiding and data abstraction. Because the measure is sensitive to the linear form of a program, it can be used to compare different concrete representations of the same algorithm, such as structured and unstructured versions of the same program. Application of the measure as a model of system complexity is given for “upstream” processes (e.g. the specification and design phases), where there is no source program to measure by other techniques.
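The paper's prime-program measure is not reproduced here; as a crude illustration of the general idea of folding a control flow count and a data flow count into one score, consider this toy metric over Python source (entirely our own construction, not the paper's measure):

```python
# Toy composite complexity score: structural control flow plus data flow.
import ast

def complexity(source):
    tree = ast.parse(source)
    control = sum(isinstance(n, (ast.If, ast.For, ast.While, ast.Try))
                  for n in ast.walk(tree))         # control-flow constructs
    data = sum(isinstance(n, ast.Name)
               for n in ast.walk(tree))            # variable definitions/uses
    return control + data                          # combined measure

# Two concrete representations of the same loop score differently,
# echoing the measure's sensitivity to a program's linear form.
structured = "total = 0\nfor i in range(3):\n    total = total + i\n"
print(complexity(structured))
```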

8.
The emergence of parallel array processing, in both software methodology and hardware technology, opens new avenues for the implementation and optimization of systems for interactive computer graphics. The Q-spline interpolation method is presented, designed for incremental curve definition, local curve modification, “on-the-curve” control points, and computational efficiency in an array-processing environment. The implementation and performance of the algorithms in the environment of a general-purpose interactive computer graphics system are described.
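The Q-spline formulation itself is specific to the paper; as a generic analogue of an interpolating spline with “on-the-curve” control points evaluated in array fashion, here is a vectorized Catmull-Rom segment evaluator (numpy standing in for the array processor):

```python
# Catmull-Rom segment: interpolates p1 and p2, with p0 and p3 shaping the
# tangents, so every control point lies on the curve. All parameter values
# are evaluated in one array operation.
import numpy as np

def catmull_rom(p0, p1, p2, p3, ts):
    """Points on the segment between p1 and p2 for parameters ts in [0, 1]."""
    t = ts[:, None]
    return 0.5 * ((2 * p1) +
                  (-p0 + p2) * t +
                  (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2 +
                  (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

pts = np.array([[0, 0], [1, 2], [3, 3], [4, 1]], float)
curve = catmull_rom(*pts, np.linspace(0, 1, 8))
print(curve[0], curve[-1])   # exactly [1, 2] and [3, 3]: on-the-curve points
```

Local modification follows from the same structure: moving one control point only affects the few segments whose basis it enters.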

9.
Today's environments of increasing business change require more adaptable software development methodologies. This article examines how complex adaptive systems (CAS) theory can be used to increase our understanding of how agile software development practices can build this adaptability. A mapping of agile practices to CAS principles across three dimensions (product, process, and people) yields several recommendations for “best practices” in systems development.

10.
Most Western Governments (USA, Japan, EEC, etc.) have now launched national programmes to develop computer systems for use in the 1990s. These so-called Fifth Generation computers are viewed as “knowledge” processing systems which support the symbolic computation underlying Artificial Intelligence applications. The major driving force in Fifth Generation computer design is to efficiently support very high level programming languages (i.e. VHLL architecture).

Historically, however, commercial VHLL architectures have been largely unsuccessful. The driving force in computer design has principally been advances in hardware, which at the present time means architectures that exploit very large scale integration (i.e. VLSI architecture).

This paper examines VHLL architectures and VLSI architectures and their probable influences on Fifth Generation computers. Interestingly, the major problem for both architecture classes is parallelism: how to orchestrate a single parallel computation so that it can be distributed across an ensemble of processors.


11.
12.
The “shadow costs” capability of Linear Programming (LP) provides Office Automation (OA) managers with an instrument for determining optimal future allocations of money among resources such as computer workstations, software, training, awareness sessions, and user support. The approach can be implemented at any stage of OA development; based on current experience, it predicts how future restricted expenditures should be allocated among the listed resources in order to achieve the greatest return.
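A minimal sketch of the idea (all figures invented; assumes SciPy's HiGHS-based `linprog`, which exposes constraint duals as marginals): solve a small allocation LP and read off the shadow cost of each restriction, i.e. the marginal return of relaxing it by one dollar.

```python
# Hypothetical OA budget allocation with shadow costs from the LP dual.
from scipy.optimize import linprog

# Dollars spent on: workstations, software, training, awareness, support.
# Estimated return per dollar (invented for illustration):
returns = [1.4, 1.2, 1.8, 1.1, 1.3]
c = [-r for r in returns]          # linprog minimizes, so negate to maximize

A_ub = [[1, 1, 1, 1, 1],           # total spending <= budget
        [0, 0, 1, 0, 0]]           # cap on training spending (invented)
b_ub = [100_000, 30_000]

res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0, None)] * 5, method="highs")

print("optimal allocation:", res.x)
# Shadow cost of each constraint. HiGHS reports duals of the minimization
# problem, so negate to read them as marginal return per extra dollar.
print("shadow costs:", [-m for m in res.ineqlin.marginals])
```

Here the budget constraint's shadow cost (1.4) says an extra dollar of budget returns $1.40, while the training cap's (0.4) says raising that cap is worth $0.40 per dollar, which is exactly the guidance the abstract describes.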

13.
Some combinatory logics are examined as object code for functional programs. The worst-case performances of certain algorithms for abstracting variables from combinatory expressions are analysed. A lower bound on the performance of any abstraction algorithm for a finite set of combinators is given. Using the combinators S, K, I, B, C, S′, B′, C′ and Y, the problem of finding an optimal abstraction algorithm is shown to be NP-complete. Some methods of improving abstraction algorithms for those combinators are examined, including “balancing” (for asymptotic performance) and “peephole” optimisations (for smaller cases).
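For concreteness, here is a minimal sketch of classical S-K-I bracket abstraction, the naive algorithm whose worst case such analyses start from (the optimised rules involving B, C, S′, B′, C′ are omitted, and the term representation is our own):

```python
# Naive bracket abstraction over S, K, I. Terms: strings are variables or
# combinators; 2-tuples are applications.
def occurs(x, e):
    """True if variable x occurs free in term e."""
    if isinstance(e, tuple):
        return occurs(x, e[0]) or occurs(x, e[1])
    return e == x

def abstract(x, e):
    """Compute [x]e, so that applying the result to x reduces to e."""
    if e == x:
        return "I"                      # [x]x = I
    if not occurs(x, e):
        return ("K", e)                 # [x]e = K e      (x not free in e)
    f, a = e
    return (("S", abstract(x, f)),      # [x](f a) = S ([x]f) ([x]a)
            abstract(x, a))

# [x](y x)  ==>  S (K y) I
print(abstract("x", ("y", "x")))
```

The naive S rule duplicates structure at every application node, which is the source of the poor worst-case growth that balancing and peephole optimisations attack.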

14.
The integration of computers within the manufacturing environment has long been a method of enhancing productivity. Their use in many facets of a manufacturing enterprise has given industries the ability to deliver low-cost, high-quality, competitive products. As computer technology advances, we find more and more uses for new hardware and software in the enterprise. Over a period of time, we have seen many “islands” of computer integration. Distinct, fully functional hardware and software installations are a common base for many industries. Unfortunately, these islands are just that: separate, distinct and functional, but not integrated. The lack of integration within these information systems makes it difficult for end users to see the same manufacturing data. We are finding the need for a “single image” real-time information system to provide the enterprise with the data required to plan, justify, design, manufacture and deliver products to the customer. Unfortunately, many industries have a large installed base of hardware and software, and replacement of current systems is not a cost-justified business decision. An alternative is the migration of current systems to a more integrated solution. Migration to a computer-integrated manufacturing (CIM)-based architecture would provide that single-image real-time information system.

The effort and skills necessary for the implementation of a CIM-based architecture require active participation from two key organizations: manufacturing and information systems (I/S). The manufacturing engineers, process engineers and other manufacturing resources are the cornerstone for obtaining requirements. The ability to use I/S effectively is a critical success factor in the implementation of CIM: I/S has to be viewed as an equal partner, not just as a service organization. Manufacturing management needs to understand the justification process for integrating computer systems and the “real” cost of integration versus the cost of non-integrated manufacturing systems. The active participation of both organizations during all phases of CIM implementation will result in an effective and useful integrated information system.


15.
We develop an abstract model for our case study: software to support a “video rental service”. This illustrates how a visual formalism, constraint diagrams, may be used to specify such systems precisely.

16.
Hardware implementations of neuroprocessor architectures are currently enjoying commercial availability for the first time. This development has been driven in part by the requirement for real-time solutions to time-critical neural network applications. Massively parallel asynchronous neuromorphic representations are inherently capable of very high computational speeds when properly cast in the “right stuff”, i.e. electronic or optoelectronic hardware. However, hardware-based learning in such systems is still at a primitive stage. In practice, simulations are typically performed in software, and the resulting synaptic weights capturing the input-output transformation are subsequently quantized and downloaded onto the neural hardware. Because of the numerous discrepancies between the software and the hardware, however, such systems are inherently poor in performance. In this paper we report on chip-in-the-loop learning systems assembled from custom analog “building block” hardware.
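A minimal sketch of the chip-in-the-loop idea (our own toy construction: `chip_forward` simulates a hypothetical analog chip with quantization and noise, and weight perturbation stands in for whatever update rule the actual systems use). Because the forward pass runs through the hardware, its non-idealities are folded into the very error signal the software update descends:

```python
# Toy chip-in-the-loop training: gradients are estimated through the
# (simulated) hardware rather than a clean software model.
import numpy as np

rng = np.random.default_rng(0)

def chip_forward(w, x):
    """Stand-in for the analog chip: coarse quantization plus device noise."""
    wq = np.round(w * 16) / 16                     # 4-bit-style weight grid
    return np.tanh(wq @ x + rng.normal(0, 0.01))   # noisy analog neuron

def train(w, samples, eps=0.05, eta=0.5):
    for x, target in samples:
        for i in range(len(w)):
            base = (chip_forward(w, x) - target) ** 2
            w[i] += eps                            # perturb one weight
            bumped = (chip_forward(w, x) - target) ** 2
            w[i] -= eps                            # restore it
            w[i] -= eta * (bumped - base) / eps    # descend measured error
    return w

samples = [(np.array([1.0, 0.5]), 0.3)] * 20
print(train(np.zeros(2), samples))
```

Contrast this with the download-after-software-training scheme criticized above, where quantization and noise are never seen during learning.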

17.
Donnell-type stability equations for the buckling of stringer-stiffened cylindrical panels under combined axial compression and hydrostatic pressure are solved by the displacement approach of [6]. The solution is employed for a parametric study over a wide range of panel and stringer geometries to evaluate the combined influence of panel configuration and boundary conditions along the straight edges on the buckling behavior of the panel relative to a complete “counter” cylinder (i.e. a cylinder with identical skin and stiffener parameters).

The parametric studies reveal a “sensitivity” to the “weak in shear” (Nx = Nxφ = 0) SS1-type boundary conditions along the straight edges, for which the panel buckling loads are always smaller than those predicted for a complete “counter” cylinder. In the case of the “classical” SS3 B.C.s, there always exist values of the panel width 2φ0 for which ρ = 1, i.e. the panel buckling load equals that of the complete “counter” cylinder. For the SS2 and SS4 B.C. types, the manner in which the panel critical load approaches that of the complete cylinder appears to depend on the panel configuration.

Utilization of panels for the experimental determination of a complete cylinder's buckling load is found to be satisfactory for very lightly and very heavily stiffened panels, as well as for short panels, (L/R) = 0.2 and 0.5. Panels of moderate length and stiffening have to be ruled out, since they lead to nonconservative buckling load predictions.


18.
Constrained multibody system dynamics: an automated approach
The governing equations for constrained multibody systems are formulated in a manner suitable for their automated, numerical development and solution. Specifically, the “closed loop” problem of multibody chain systems is addressed.

The governing equations are developed by modifying dynamical equations obtained from Lagrange's form of d'Alembert's principle. This modification, which is based upon a solution of the constraint equations obtained through a “zero eigenvalues theorem,” is, in effect, a contraction of the dynamical equations.

It is observed that, for a system with n generalized coordinates and m constraint equations, the coefficients in the constraint equations may be viewed as “constraint vectors” in n-dimensional space. In this setting, the system itself is then free to move in the n − m directions which are “orthogonal” to the constraint vectors.
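In outline (notation ours; the paper's development may differ in detail), the contraction can be sketched as follows:

```latex
% Constraint equations: the m rows of B are the "constraint vectors".
\[
  B(q,t)\,\dot{q} = g(q,t), \qquad B \in \mathbb{R}^{m \times n}.
\]
% The zero-eigenvalues theorem supplies a matrix C with B C = 0, whose
% n - m columns span the directions "orthogonal" to the constraint vectors:
\[
  \dot{q} = C\,\dot{y} + \dot{q}_p, \qquad BC = 0, \qquad
  C \in \mathbb{R}^{n \times (n-m)}.
\]
% Premultiplying the dynamical equations by C^T eliminates the constraint
% forces, contracting the n equations to n - m independent ones:
\[
  C^{\mathsf{T}}\bigl(M(q)\,\ddot{q} - F(q,\dot{q},t)\bigr) = 0.
\]
```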


19.
The ADI method is hard to implement efficiently on distributed-memory parallel computers. We propose the “P-scheme”, which parallelizes the tridiagonal linear systems of equations arising in the ADI method, but its effectiveness is limited to cases where the problem size is large enough, mainly because of the communication cost of the scheme's propagation phase.

To overcome this difficulty, we propose an improved version of the P-scheme with “message vectorization”, which aggregates several communication messages into one and thus alleviates the communication cost. We also evaluate the effectiveness of message vectorization for the ADI method and show that the improved P-scheme works well even for smaller problems, achieving linear and super-linear speedups for 8,194 × 8,194 and 16,386 × 16,386 problems, respectively.
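A minimal sketch of message vectorization (assuming mpi4py; this illustrates only the batching idea, not the P-scheme itself): the per-system boundary values that a propagation phase would otherwise send one at a time are packed into a single buffer and sent in one message.

```python
# Run with, e.g.: mpiexec -n 2 python this_script.py
# One aggregated message replaces n_systems tiny ones, paying the per-message
# latency once instead of once per tridiagonal system.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
n_systems = 1024            # tridiagonal systems handled concurrently in ADI

if rank == 0:
    boundary = np.random.rand(n_systems)   # one boundary value per system
    comm.Send(boundary, dest=1, tag=0)     # vectorized: a single message
elif rank == 1:
    boundary = np.empty(n_systems)
    comm.Recv(boundary, source=0, tag=0)
```

Since message latency is paid per message while bandwidth is paid per byte, aggregation helps most for small problems, which matches the reported improvement.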


20.
This paper summarizes the major technical report (Williams et al., 1993) of the IFAC/IFIP Task Force on Architectures for Integrating Manufacturing Activities and Enterprises. It presents a synopsis of the investigations of pertinent architectures undertaken, and the findings on the suitability of various architectures for the integration task. It also presents the Task Force's recommendations for achieving a “complete” architecture, in terms of the necessary capabilities, by “completing” a currently available architecture. The Task Force also outlined how a “best” architecture could be achieved by selecting and combining the best features of the available architectures.
