Found 20 similar documents; search took 0 ms.
1.
Design of a Source-Level Debugger for the UNIX System. Cited by: 2 (self-citations: 0, citations by others: 2)
This paper discusses the principles and methods of designing and implementing a source-level debugger on the UNIX operating system. It first introduces the debugging-related system resources of the Solaris operating system, then discusses the debugger's internal data structures, and finally presents how the basic debugging operations are implemented.
2.
3.
The steps necessary to extend the UNIX time-sharing system (UNIX is a trademark of Bell Laboratories) to a network which includes a central processor and a set of satellite processors are described. Software interfaces permit a program in a satellite processor to behave as if it were running in the central processor. Tasks are executed in parallel in several processors, resulting in improved reliability and response time. The economics of such systems becomes more feasible with the reduction in the cost of CPUs and memories and the increasing demand for dedicated local computers.
4.
The paper describes methods for using Extremal Optimization (EO) for processor load balancing during execution of distributed applications. A load balancing algorithm for clusters of multicore processors is presented and discussed. In this algorithm the EO approach is used to periodically detect the best tasks as candidates for migration and to guide the selection of the best computing nodes to receive the migrating tasks. To decrease the complexity of selection for migration, the embedded EO algorithm assumes a two-step stochastic selection during the solution improvement, based on two separate fitness functions. The functions are based on specific models which estimate relations between the programs and the executive hardware. The proposed load balancing algorithm is assessed by experiments with simulated load balancing of distributed program graphs. The algorithm is compared against a greedy, fully deterministic approach, a genetic algorithm, and an EO-based algorithm with random placement of migrated tasks.
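The two-step stochastic selection described above can be illustrated with the power-law rule conventionally used in Extremal Optimization: rank candidates from worst fitness to best and pick rank k with probability proportional to k^(-tau). The sketch below is an illustrative reconstruction, not the authors' code; the function names, the value of `tau`, and the meaning of the fitness lists are assumptions.

```python
import random

def eo_select(fitness, tau=1.5, rng=random):
    """EO power-law selection: rank candidates from worst (lowest
    fitness) to best and pick rank k with probability ~ k**(-tau)."""
    order = sorted(range(len(fitness)), key=lambda i: fitness[i])  # worst first
    weights = [(k + 1) ** (-tau) for k in range(len(order))]
    r = rng.uniform(0.0, sum(weights))
    acc = 0.0
    for idx, w in zip(order, weights):
        acc += w
        if r <= acc:
            return idx
    return order[-1]

def pick_migration(task_fitness, node_fitness, tau=1.5):
    """Two-step stochastic selection, as in the abstract: first pick a
    candidate task to migrate (low fitness = badly placed), then pick a
    target node judged by a separate fitness function."""
    task = eo_select(task_fitness, tau)
    node = eo_select(node_fitness, tau)
    return task, node
```

A larger `tau` biases selection more strongly toward the worst-ranked candidate, which is the knob EO uses to trade exploration against exploitation.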
5.
W. M. Waite, Software, 1973, 3(1): 75-79
Periodic location counter sampling is a well-known technique for conducting performance measurements on operating systems. It is also extremely useful for applications programs. This paper describes a set of interface conventions for such a monitor. SPY, a monitor obeying these conventions, has been implemented on three different computers.
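SPY's actual interface conventions are not given in the abstract; the sketch below only illustrates the underlying idea of periodic location-counter sampling, using a POSIX profiling timer from Python. The function names and the sampling interval are assumptions.

```python
import collections
import signal

samples = collections.Counter()

def _probe(signum, frame):
    # Record where the program counter was when the profiling timer fired.
    samples[(frame.f_code.co_name, frame.f_lineno)] += 1

def start_monitor(interval=0.01):
    """Arm a timer that samples the current location every `interval`
    seconds of consumed CPU time (POSIX systems only)."""
    signal.signal(signal.SIGPROF, _probe)
    signal.setitimer(signal.ITIMER_PROF, interval, interval)

def stop_monitor():
    """Disarm the timer and return locations ordered hottest first."""
    signal.setitimer(signal.ITIMER_PROF, 0.0, 0.0)
    return samples.most_common()
```

Because the timer counts CPU time rather than wall-clock time, the sample histogram approximates where the program actually spends its cycles.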
6.
Larry Hughes, Software, 1988, 18(1): 15-27
The UNIX 4.3 operating system, like its predecessor UNIX 4.2, is designed to allow interprocess communications between pairs of processes using sockets — pairs of integers identifying specific hosts and ports — which are associated with a process. In this paper a software package which allows interprocess communications between many UNIX processes (as opposed to two) is presented.
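The package's interface is not reproduced in the abstract; as one illustrative way to get beyond pairwise sockets (the function name, lack of message framing, and single-threaded design are assumptions), a `select`-based hub can relay each message from one peer to all others:

```python
import select
import socket

def broadcast_hub(listener, rounds, timeout=0.5):
    """Accept connections on `listener` and relay every chunk received
    from one connected peer to all other connected peers."""
    peers = []
    for _ in range(rounds):
        ready, _, _ = select.select([listener] + peers, [], [], timeout)
        for s in ready:
            if s is listener:
                conn, _addr = s.accept()          # new peer joins the group
                peers.append(conn)
            else:
                data = s.recv(1024)
                if not data:                      # peer closed its end
                    peers.remove(s)
                    s.close()
                    continue
                for other in peers:               # relay to everyone else
                    if other is not s:
                        other.sendall(data)
    return peers
```

A real group-IPC package would add message framing and peer naming; the point of the sketch is that one process can serve many peers without threads.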
7.
A distributed version of the UNIX operating system is currently under development through a joint effort of New Mexico State University and the Hebrew University of Jerusalem. A microprocessor version of the UNIX kernel has been developed which will run on any PDP-11 or LSI-11 based processing element and allows processes to run in a UNIX ‘look-alike’ environment. Each process is fully transportable among all processors in the system. Although the preliminary version of the system was built in a star configuration, the system is currently being upgraded by the addition of a communication ring with 8-bit microprocessors as ring interface units. The current paper describes the software structure, the hardware structure and the communication protocol of the system.
8.
In this paper we describe the design and implementation of an integrated monitoring and debugging system for a distributed real-time computer system. The monitor provides continuous, transparent monitoring capabilities throughout a real-time system's lifecycle with bounded, minimal, predictable interference by using software support. The monitor is flexible enough to observe both high-level events that are operating system- and application-specific, as well as low-level events such as shared variable references. We present a novel approach to monitoring shared variable references that provides transparent monitoring with low overhead. The monitor is designed to support tasks such as debugging real-time applications, aiding real-time task scheduling, and measuring system performance. Since debugging distributed real-time applications is particularly difficult, we describe how the monitor can be used to debug distributed and parallel applications by deterministic execution replay.
9.
Summary Distributed Mutual Exclusion algorithms have been mainly compared using the number of messages exchanged per critical section execution. In such algorithms, no attention has been paid to the serialization order of the requests. Indeed, they adopt the FCFS discipline. Conversely, the insertion of priority serialization disciplines, such as Short-Job-First, Head-Of-Line, Shortest-Remaining-Job-First, etc., can be useful in many applications to optimize some performance indices. However, such priority disciplines are prone to starvation. The goal of this paper is to investigate and evaluate the impact of the insertion of a priority discipline in Maekawa-type algorithms. Priority serialization disciplines will be inserted by means of a gated batch mechanism which avoids starvation. In a distributed algorithm, such a mechanism needs synchronizations among the processes. In order to highlight the usefulness of the priority-based serialization discipline, we show how it can be used to improve the average response time compared to the FCFS discipline. The gated batch approach exhibits other advantages: algorithms are inherently deadlock-free and messages do not need to piggyback timestamps. We also show that, under heavy demand, algorithms using gated batch exchange fewer messages than Maekawa-type algorithms per critical section execution.
Roberto Baldoni was born in Rome on February 1, 1965. He received the Laurea degree in electronic engineering in 1990 from the University of Rome La Sapienza and the Ph.D. degree in Computer Science from the University of Rome La Sapienza in 1994. Currently, he is a researcher in computer science at IRISA, Rennes (France). His research interests include operating systems, distributed algorithms, network protocols and real-time multimedia applications.
Bruno Ciciani received the Laurea degree in electronic engineering in 1980 from the University of Rome La Sapienza. From 1983 to 1991 he was a researcher at the University of Rome Tor Vergata. He is currently full professor in Computer Science at the University of Rome La Sapienza. His research activities include distributed computer systems, fault-tolerant computing, languages for parallel processing, and computer system performance and reliability evaluation. He has published in IEEE Trans. on Computers, IEEE Trans. on Knowledge and Data Engineering, IEEE Trans. on Software Engineering and IEEE Trans. on Reliability. He is the author of a book titled Manufacturing Yield Evaluation of VLSI/WSI Systems, to be published by IEEE Computer Society Press. This research was supported in part by the Consiglio Nazionale delle Ricerche under grant 93.02294.CT12. This author is also supported by a grant of the Human Capital and Mobility project of the European Community under contract No. 3702 CABERNET.
10.
A common debugging strategy involves reexecuting a program (on a given input) over and over, each time gaining more information about bugs. Such techniques can fail on message-passing parallel programs. Because of nondeterminacy, different runs on the given input may produce different results. This nonrepeatability is a serious debugging problem, since an execution cannot always be reproduced to track down bugs. This paper presents a technique for tracing and replaying message-passing programs. By tracing the order in which messages are delivered, a reexecution can be forced to deliver messages in their original order, reproducing the original execution. To reduce the overhead of such a scheme, we show that the delivery order of only those messages involved in races need be traced (and not every message). Our technique makes run-time decisions to detect and trace racing messages and is usually optimal in the sense that the minimal number of racing messages is traced. Experiments indicate that often only 1% of the messages are traced, a reduction of two orders of magnitude over traditional techniques that trace every message. These traces allow an execution to be reproduced any number of times for debugging. Our work is novel in that we adaptively decide what to trace, and trace only those messages that introduce nondeterminacy. With our strategy, large reductions in trace size allow long-running programs to be replayed that were previously unmanageable. In addition, the reduced tracing requirements alleviate tracing bottlenecks, allowing executions to be debugged with substantially lower execution time overhead. This work was supported in part by National Science Foundation grants CCR-8815928 and CCR-9100968, Office of Naval Research grant N00014-89-J-1222, and a grant from Sequent Computer Systems, Inc.
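The race-detection machinery is the paper's contribution and is not shown here; the toy sketch below only reconstructs the basic record/replay idea on which it builds. The class and method names are invented for illustration.

```python
class ReplayChannel:
    """Record the delivery order of messages during a traced run,
    then force the same order on a replayed run."""
    def __init__(self, trace=None):
        self.trace = trace      # None => record mode; a list => replay mode
        self.log = []           # delivery order actually observed
        self.pending = {}       # sender -> queue of undelivered messages

    def deliver(self, sender, msg):
        self.pending.setdefault(sender, []).append(msg)

    def receive(self):
        if self.trace is None:
            # Record mode: take any sender with a pending message.
            sender = next(s for s, q in self.pending.items() if q)
        else:
            # Replay mode: the trace dictates whose message comes next.
            sender = self.trace[len(self.log)]
        self.log.append(sender)
        return sender, self.pending[sender].pop(0)
```

Tracing only racing messages, as the paper proposes, roughly amounts to logging `sender` only at receives where more than one queue is nonempty, since those are the receives whose outcome is nondeterministic.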
11.
This paper describes two mechanisms – reproduction sets and metafiles – that provide low-cost, semi-automatic file replication and location transparency facilities for an interconnected collection of machines running Berkeley Unix. A reproduction set is a collection of files that the system attempts to keep identical; this is done on a ‘best effort’ basis, with the system relying on the user to handle unusual situations. A metafile is a special file that contains symbolic pathnames of other files, each of which may be on any machine in the network; opening a metafile results in opening an available constituent file. Examples are given to illustrate the use of these mechanisms. Their implementation and performance are also described.
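A metafile is easy to picture in code; the sketch below is an illustrative reading of the mechanism, and the one-pathname-per-line format is an assumption rather than the paper's actual encoding.

```python
import os

def open_metafile(path, mode="r"):
    """Open the first available constituent of a metafile: a file that
    holds symbolic pathnames of candidate files, one per line."""
    with open(path) as mf:
        candidates = [line.strip() for line in mf if line.strip()]
    for candidate in candidates:
        if os.path.exists(candidate):       # first reachable constituent wins
            return open(candidate, mode)
    raise FileNotFoundError("no constituent of %s is available" % path)
```

Listing a local replica after a remote pathname gives the 'best effort' behavior the paper describes: the first-listed copy is preferred, and later entries serve as fallbacks.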
12.
Network researchers have dedicated a notable part of their efforts to the area of modeling traffic and to the implementation of efficient traffic generators. We feel that there is a strong demand for traffic generators capable of reproducing realistic traffic patterns according to theoretical models and at the same time with high performance. This work presents an open distributed platform for traffic generation that we called the distributed internet traffic generator (D-ITG), capable of producing traffic (network, transport and application layer) at packet level and of accurately replicating appropriate stochastic processes for both inter-departure time (IDT) and packet size (PS) random variables. We implemented two different versions of our distributed generator. In the first one, a log server is in charge of recording the information transmitted by senders and receivers, and these communications are based either on TCP or UDP. In the other one, senders and receivers make use of the MPI library. In this work a complete performance comparison among the centralized version and the two distributed versions of D-ITG is presented.
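The stochastic core of such a generator is small; the sketch below draws inter-departure times and packet sizes from two common choices of distribution. The distributions, parameter names and clamping range are assumptions for illustration; D-ITG itself supports many different processes for both random variables.

```python
import random

def generate_flow(n, idt_rate=100.0, ps_mean=512.0, ps_sigma=128.0, seed=None):
    """Draw n (departure_time, packet_size) pairs: exponential
    inter-departure times at `idt_rate` packets per second, and
    normally distributed packet sizes clamped to a plausible range."""
    rng = random.Random(seed)
    t, flow = 0.0, []
    for _ in range(n):
        t += rng.expovariate(idt_rate)                            # IDT sample
        size = int(min(1500.0, max(40.0, rng.gauss(ps_mean, ps_sigma))))  # PS sample
        flow.append((t, size))
    return flow
```

A sender would then sleep until each departure time and emit a packet of the drawn size, while a receiver logs arrivals for the kind of comparison the paper reports.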
13.
Jason Gait, Software, 1985, 15(6): 539-554
This paper reports on an experimental debugger for concurrent programs. Design objectives include a showing of greatest usefulness when dealing with multiprocess interactions, creation of a simplified more approachable interface for programmers, allowance for the systematic organization (and limitation) of debugging information by programmers, reflection of a natural view of concurrency, and portability. The design responds to a perceived need for debugging techniques applicable in systems of concurrent, communicating, asynchronous processes. During debugging sessions, a user is able to dynamically explore interprocess synchronization points, parallel actions and deadlock situations. The programmer interface is based on a transparent window multiplexer providing a set of windows for each concurrent process. The window manager interactively maps interesting windows to programmer-specified viewscreen locations, while relegating temporarily uninteresting windows to the background. In implementing a debugger for concurrent programs, a principal concern is the probe effect, or the possibility that the debugger itself masks synchronization errors in the program being debugged. For the examples explored, the probe effect was not observed to limit the localization of implanted synchronization errors.
14.
Summary We present a formal proof method for distributed programs. The semantics used to justify the proof method explicitly identifies equivalence classes of execution sequences which are equivalent up to permuting commutative operations. Each equivalence class is called an interleaving set or a run. The proof rules allow concluding the correctness of certain classes of properties for all execution sequences, even though such properties are demonstrated directly only for a subset of the sequences. The subset used must include a representative sequence from each interleaving set, and the proof rules, when applicable, guarantee that this is the case. By choosing a subset with appropriate sequences, simpler intermediate assertions can be used than in previous formal approaches. The method employs proof lattices, and is expressed using the temporal logic ISTL.
Shmuel Katz received his B.A. in Mathematics and English Literature from U.C.L.A., and his M.Sc. and Ph.D. in Computer Science (1976) from the Weizmann Institute in Rechovot, Israel. From 1976 to 1981 he was at the IBM Israel Scientific Center. Presently, he is on the faculty of the Computer Science Department at the Technion in Haifa, Israel. In 1977–1978 he visited for a year at the University of California, Berkeley, and in 1984–1985 was at the University of Texas at Austin. He has been a consultant and visitor at the MCC Software Technology Program, and in 1988–1989 was a visiting scientist at the I.B.M. Watson Research Center. His research interests include the methodology of programming, specification methods, program verification and semantics, distributed programming, data structures, and programming languages.
Doron Peled was born in 1962 in Haifa. He received his B.Sc. and M.Sc. in Computer Science from the Technion, Israel in 1984 and 1987, respectively. Between 1987 and 1991 he did his military service. He also completed his D.Sc. degree at the Technion during these years. Dr. Peled was with the Computer Science department at Warwick University in 1991–1992. He is currently a member of the technical staff at AT&T Bell Laboratories. His main research interests are specification and verification of programs, especially as related to partial order models, fault-tolerance and real-time. He is also interested in semantics and topology. This research was carried out while the second author was at the Department of Computer Science, The Technion, Haifa 32000, Israel.
15.
The UNIX operating system (UNIX is a trademark of AT&T Bell Laboratories) has been extended to support a crash-resistant file system. This paper describes the modifications which were made to the UNIX kernel to support such a file system and the associated recovery facilities for maintaining the internal consistency of files despite the failure of the host processor, transient hardware failures and decay of the storage medium.
16.
The Mixed-Language Programming (MLP) System is a simple system that facilitates construction of sequential programs in which procedures can be written in different programming languages to exploit heterogeneity in language functionality. In addition, MLP provides a simple remote procedure call (RPC) facility that allows heterogeneity in machine functionality to be exploited. To minimize implementation cost, the system does not solve all of the problems related to mixed-language programming; rather, MLP is designed to handle common situations well. Among the unique aspects of MLP are its advanced facilities, which allow complex situations to be handled with user intervention; for example, these facilities allow arguments of a type not defined by a language to be used by procedures written in that language. This paper overviews the use of MLP and describes its implementation. In addition, two programs that have been written using the MLP system—a small database system and a collection of plot routines—are discussed. The system executes on a collection of Vaxes and Suns running Berkeley UNIX. Currently supported languages are C, Pascal and Icon.
17.
Summary Methodological design of distributed programs is necessary if one is to master the complexity of parallelism. The class of control programs, whose purpose is to observe or detect properties of an underlying program, plays an important role in distributed computing. The detection of a property generally rests upon consistent evaluations of a predicate; such a predicate can be global, i.e. involve states of several processes and channels of the observed program. Unfortunately, in a distributed system, the consistency of an evaluation cannot be trivially obtained. This is a central problem in distributed evaluations. This paper addresses the problem of distributed evaluation, used as a basic tool for the solution of general distributed detection problems. A new evaluation paradigm is put forward, and a general distributed detection program is designed, introducing the iterative scheme of a guarded-waves sequence. The case of distributed termination detection is then taken to illustrate the proposed methodological design.
Jean-Michel Hélary is currently professor of Computer Science at the University of Rennes, France. He received a first Ph.D. degree in Numerical Analysis in 1968, and a second Ph.D. degree in Computer Science in 1988. His research interests include distributed algorithms and protocols, especially their methodological aspects. He is a member of an INRIA research group working at IRISA (Rennes) on distributed algorithms and applications. Professor Jean-Michel Hélary has published several papers on these subjects, and is co-author of a book with Michel Raynal. He has served as a PC member of international conferences.
Michel Raynal is currently professor of Computer Science at the University of Rennes, France. He received the Ph.D. degree in 1981. His research interests include distributed algorithms, operating systems, protocols and parallelism. He is the head of an INRIA research group working at IRISA (Rennes) on distributed algorithms and applications. Professor Michel Raynal has organized several international conferences and has served as a PC member in many international workshops, conferences and symposia. Over the past 9 years, he has written 7 books that constitute an introduction to distributed algorithms and distributed systems (among them: Algorithms for Mutual Exclusion, The MIT Press, 1986, and Synchronization and Control of Distributed Programs, Wiley, 1990, co-authored with J.-M. Hélary). He is currently involved in two European Esprit projects devoted to large-scale distributed systems. This work was supported by the French Research Program C3 on Parallelism and Distributed Computing. An extended abstract has been presented at ISTCS '92 [12].
18.
David S. H. Rosenthal, Computer-Aided Design, 1982, 14(3): 131-135
The suitability of the UNIX operating system as a basis for constructing and using CAD software is assessed in the light of several years' experience of using it for this purpose at EdCAAD.
19.
The paper concerns parallel methods for extremal optimization (EO) applied to processor load balancing in the execution of distributed programs. In these methods EO algorithms detect an optimized task-migration strategy leading to a reduction of program execution time. We use an improved EO algorithm with guided state changes (EO-GS) that provides a parallel search for the next solution state during solution improvement, based on some knowledge of the problem. The search is based on two-step stochastic selection using two fitness functions which account for computation and communication assessment of migration targets. Based on the improved EO-GS approach we propose and evaluate several versions of the parallelization methods of EO algorithms in the context of processor load balancing. Some of them use the crossover operation known from genetic algorithms. The quality of the proposed algorithms is evaluated by experiments with simulated load balancing in the execution of distributed programs represented as macro data-flow graphs. Load balancing based on the parallelized improved EO provides better convergence of the algorithm, a smaller number of task migrations and reduced execution time of applications.
20.
Jason Gait, Software, 1986, 16(3): 225-233
This paper reports on an experimental study of the probe effect, defined as an alteration in the frequency of run-time computational errors observed when delays are introduced into concurrent programs. If the concurrent program being studied has no synchronization errors, then there is no probe effect. In the presence of synchronization errors, the frequency of observable output errors for a sample experimental program starts at a high value for small delays, oscillates rapidly as the delay is increased, and apparently settles at zero errors for larger values of delay. Thus, for sufficiently large delays, the probe effect can almost completely mask synchronization errors in concurrent programs. For sufficiently large concurrent process sets, even small values of embedded delay may mask synchronization errors, provided side effects in shared memory are not included in the observation.
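The effect is easy to reproduce in miniature. The sketch below injects a delay into the window of an unsynchronized read-modify-write and counts lost updates; the program, delay values and error measure are illustrative assumptions, not the paper's experimental setup.

```python
import threading
import time

def racy_increment(delay, rounds=200, workers=2):
    """Unsynchronized workers increment a shared counter; `delay`
    widens the window between each read and the matching write, so
    the number of lost updates depends on the injected delay."""
    state = {"n": 0}
    def worker():
        for _ in range(rounds):
            v = state["n"]           # read the shared variable
            if delay:
                time.sleep(delay)    # injected probe delay
            state["n"] = v + 1       # write back; may lose a concurrent update
    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return workers * rounds - state["n"]   # lost updates = observed errors
```

Sweeping `delay` over a range and recording the returned error count against it is the kind of measurement the study reports.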