Similar Literature — 20 matching records
1.
Building on an analysis of existing models and algorithms, this paper proposes a mutual exclusion model for shared-memory MPSoCs. The model can describe and verify a wide range of mutual exclusion algorithms; it captures task priority and real-time constraints more faithfully; it accommodates algorithms that distinguish tasks by their source processor (i.e., treat tasks from different processors differently); it strictly separates concurrency from parallelism, allowing more precise description; it extends the notions of service period and inter-event relations; and it quantifies mutual exclusion performance metrics precisely, making it easier to compare algorithms. Finally, a simple instance of the model is given as guidance for applying it.

2.
Research and Improvement of a Process Mutual Exclusion Algorithm for Distributed Systems (cited by 2: 0 self-citations, 2 by others)
This paper analyzes and compares traditional mutual exclusion algorithms, proposes a new token-based algorithm, and details its design and data structures. The algorithm's main feature is introducing priorities and a tree structure into distributed mutual exclusion, which effectively reduces inter-process communication while guaranteeing mutual exclusion and preventing deadlock.

3.
4.
夏晨曦, 邱毓兰, 彭德纯. 《计算机工程》 2000, 26(3): 59-60, F003
A wide-area network can be viewed simply as multiple local-area networks interconnected by long-distance communication links. To suit the characteristics of a WAN environment, the paper proposes a two-layer distributed mutual exclusion model: the system is organized into a local layer, consisting of local networks of local processes, and a global layer, consisting of one coordinator process per local network. To access a shared resource mutually exclusively, a local process must first obtain the local token and then request the global token from its local coordinator; only a process holding both the local and the global token may enter the critical section. Possible extensions of the algorithm are also discussed.
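The two-layer acquisition order summarized above (local token first, then the global token via the coordinator) can be sketched in a few lines. This is a minimal single-process illustration: `LocalCoordinator` and the lock-based "tokens" are hypothetical stand-ins for the paper's message-passing protocol, not its actual implementation.

```python
import threading

class LocalCoordinator:
    """One coordinator per local network; it holds the local token and
    mediates access to the single global token shared by all coordinators."""

    # One global token for the whole WAN, shared by every coordinator.
    global_token = threading.Lock()

    def __init__(self):
        self.local_token = threading.Lock()  # one token per local network

    def enter_critical_section(self, process_work):
        # Step 1: a local process must first win its local token.
        with self.local_token:
            # Step 2: only the local-token holder competes for the
            # global token, so global contention stays low.
            with LocalCoordinator.global_token:
                process_work()  # critical section: both tokens held

results = []
coord = LocalCoordinator()
threads = [
    threading.Thread(target=coord.enter_critical_section,
                     args=(lambda i=i: results.append(i),))
    for i in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [0, 1, 2, 3]: every process eventually entered
```

The point of the layering is that processes on the same LAN contend only locally; only one process per LAN ever competes for the global token.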

5.
Fault tolerance in distributed object systems is achieved through object replication, which requires that the replicas of an object remain state-consistent; state consistency in turn depends on deterministic object behavior. The paper proposes a distributed mutual exclusion algorithm based on read-write exclusion that guarantees nodes access critical resources mutually exclusively, thereby ensuring deterministic object behavior; in read-heavy systems in particular, it greatly reduces message complexity.

6.
Building on several token-based algorithms, this paper proposes a distributed mutual exclusion algorithm that places no requirements on the network's logical structure. The algorithm synchronizes access to critical resources by sending messages and passing a token in networks of arbitrary logical topology, and it also handles lost requests and lost tokens gracefully. A performance analysis shows the algorithm to be efficient, and a correctness proof is given.

7.
Implementing Process Mutual Exclusion and Synchronization with P/V Operations (cited by 2: 0 self-citations, 2 by others)
This paper reviews the basic concepts of process mutual exclusion and synchronization in operating systems, presents the standard way to implement both with P and V operations, and works through related questions from the Software Designer certification exam.
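As a concrete illustration of the P/V mechanism this entry refers to, here is a minimal sketch in Python. The pairing of P (wait) and V (signal) around a shared counter is the standard textbook exercise, not code from the cited paper; `threading.Semaphore` plays the role of the semaphore variable.

```python
import threading

# A semaphore initialized to 1 acts as a mutex: acquire() is P, release() is V.
mutex = threading.Semaphore(1)

counter = 0

def worker():
    global counter
    for _ in range(100_000):
        mutex.acquire()   # P(mutex): enter the critical section
        counter += 1      # shared state is updated under mutual exclusion
        mutex.release()   # V(mutex): leave the critical section

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 400000: every increment was protected by P/V
```

Without the P/V pair the four threads race on `counter` and increments can be lost; with it, the final value is always 4 × 100 000.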

8.
This paper explains the basic concept of virtual shared memory and focuses on the structure and implementation of one virtual shared memory scheme for multiprocessor systems.

9.
A network lock protects mutually exclusive resources in a multi-machine system, and its efficiency is key to scaling the number of nodes. After analyzing existing mutual exclusion algorithms, this paper combines centralized and distributed approaches into an algorithm that selects a different control node for each resource. When a node fails, its valid requests are migrated and its invalid requests deleted; when a node comes up, part of the request load is transferred to it, achieving load balancing. Experiments show the approach has minimal message traffic and good fault tolerance: the system keeps running correctly until only a single node remains.

10.
11.
Shared memory is a simple yet powerful paradigm for structuring systems. Recently, there has been an interest in extending this paradigm to non-shared memory architectures as well. For example, the virtual address spaces for all objects in a distributed object-based system could be viewed as constituting a global distributed shared memory. We propose a set of primitives for managing distributed shared memory. We present an implementation of these primitives in the context of an object-based operating system as well as on top of Unix.

12.
There are two distinct types of MIMD (Multiple Instruction, Multiple Data) computers: the shared memory machine, e.g. Butterfly, and the distributed memory machine, e.g. Hypercubes, Transputer arrays. Typically these utilize different programming models: the shared memory machine has monitors, semaphores and fetch-and-add; whereas the distributed memory machine uses message passing. Moreover there are two popular types of operating systems: a multi-tasking, asynchronous operating system and a crystalline, loosely synchronous operating system.

In this paper I firstly describe the Butterfly, Hypercube and Transputer array MIMD computers, and review monitors, semaphores, fetch-and-add and message passing; then I explain the two types of operating systems and give examples of how they are implemented on these MIMD computers. Next I discuss the advantages and disadvantages of shared memory machines with monitors, semaphores and fetch-and-add, compared to distributed memory machines using message passing, answering questions such as "is one model 'easier' to program than the other?" and "which is 'more efficient'?". One may think that a shared memory machine with monitors, semaphores and fetch-and-add is simpler to program and runs faster than a distributed memory machine using message passing, but we shall see that this is not necessarily the case. Finally I briefly discuss which type of operating system to use and on which type of computer. This of course depends on the algorithm one wishes to compute.


13.
Process mutual exclusion and synchronization is an important topic in the operating systems course, but it is abstract and hard for students to grasp; course-timetable scheduling, meanwhile, is a difficult part of our school's academic administration. This paper reviews the principles of process mutual exclusion and synchronization, applies them to timetable scheduling, and designs an algorithm accordingly, which both aids teaching and provides practical support for scheduling, with good results.

14.
High speed networks and rapidly improving microprocessor performance make networks of workstations an extremely important tool for parallel computing, speeding up the execution of scientific applications. Shared memory is an attractive programming model for designing parallel and distributed applications, since the programmer can focus on algorithmic development rather than on data partitioning and communication. Systems have therefore been designed to provide the shared-memory abstraction on physically distributed memory machines, known as Distributed Shared Memory (DSM). DSM uses software to combine a number of computer hardware resources into one computing environment, which not only provides an easy way to execute parallel applications but also pools the available computational resources to speed up their execution. DSM systems must keep data in memory consistent, which usually incurs communication overhead, and a number of strategies exist to reduce this overhead and improve overall performance. Strategies such as prefetching have shown strong performance in DSM systems, since they reduce the latency of data accesses to remote nodes; on the other hand, they can also transfer unnecessary prefetched pages to remote nodes. In this paper, we focus on the access patterns observed during the execution of a parallel application, and analyze the data types and behavior of parallel applications. We propose an adaptive data classification scheme that improves the prefetching strategy with the goal of improving overall performance. The scheme classifies data according to the access sequence of pages, so that the home node can use the past access history of remote nodes to decide whether related pages need to be transferred to them.
Experimental results show that the proposed method increases the accuracy of data accesses under the prefetch strategy by reducing the number of page faults and mis-prefetches. With the proposed classification scheme, the same benchmark applications show a performance improvement of about 9–25% over an original JIAJIA DSM system.
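The core idea of the entry above — the home node deciding, from each remote node's past access sequence, whether to push a related page — can be sketched abstractly. The history structure and threshold below are illustrative assumptions, not JIAJIA's actual implementation.

```python
from collections import defaultdict

class HomeNode:
    """Tracks, per remote node, which page tends to be requested right
    after a given page, and suggests a prefetch once the pattern repeats."""

    def __init__(self, threshold=2):
        self.threshold = threshold
        # history[node][page][next_page] -> times next_page followed page
        self.history = defaultdict(
            lambda: defaultdict(lambda: defaultdict(int)))
        self.last_page = {}  # last page each remote node requested

    def on_request(self, node, page):
        """Record the access and return a page worth pushing, or None."""
        prev = self.last_page.get(node)
        if prev is not None:
            self.history[node][prev][page] += 1
        self.last_page[node] = page
        return self._prefetch_candidate(node, page)

    def _prefetch_candidate(self, node, page):
        followers = self.history[node][page]
        if not followers:
            return None
        best, count = max(followers.items(), key=lambda kv: kv[1])
        # Only push once the pattern has been seen often enough.
        return best if count >= self.threshold else None

home = HomeNode(threshold=2)
for _ in range(3):          # remote node 0 repeatedly scans page 5 then 6
    home.on_request(0, 5)
    home.on_request(0, 6)
hint = home.on_request(0, 5)
print(hint)  # 6: page 6 is now pushed alongside page 5
```

A history-driven scheme like this trades a little bookkeeping at the home node for fewer round trips from remote nodes; the paper's contribution is classifying data so such decisions are made only where the access pattern warrants it.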
Kuan-Ching Li (corresponding author)

15.
Large distributed systems often need to store different replicas of the same data in geographically separate databases, so data synchronization is essential. After analyzing and comparing data-consistency approaches, this paper studies the new functional modules provided by the Oracle9i database and, combining them with message queuing, proposes a practice-oriented data synchronization model based on Oracle Advanced Queuing and the JMS messaging service. The model's key techniques, workflow, and implementation are analyzed in detail, and the model is evaluated with directions for future work.

16.
Ray tracing is a well known technique for generating life-like images. Unfortunately, ray tracing complex scenes can require large amounts of CPU time and memory. Distributed memory parallel computers with large memory capacities and high processing speeds are ideal candidates for ray tracing. However, the computational cost of rendering pixels and the patterns of data access cannot be predicted until runtime; to parallelize such an application efficiently on distributed memory parallel computers, the issues of database distribution, dynamic data management and dynamic load balancing must be addressed. In this paper, we present a parallel implementation of a ray tracing algorithm on the Intel Delta parallel computer. In our database distribution, a small fraction of the database is duplicated on each processor, while the remainder is evenly distributed among groups of processors; thus multiple copies of the entire database exist across the memory of the processor groups. Dynamic data management is achieved by an ALRU cache scheme which exploits image coherence to reduce data movement when ray tracing consecutive pixels. Load is balanced among processors by distributing subimages in a global fashion based on previous workload requests. The success of our implementation depends crucially on a number of parameters which are experimentally evaluated. © 1997 John Wiley & Sons, Ltd.

17.
In this paper, we present a scalable three-dimensional hybrid parallel Delaunay image-to-mesh conversion algorithm (PDR.PODM) for distributed shared memory architectures. PDR.PODM is able to exploit parallelism early in the mesh generation process thanks to the aggressive speculative approach of the Parallel Optimistic Delaunay Mesh generation algorithm (PODM). In addition, it decreases communication overhead and improves data locality by using the data partitioning scheme of the Parallel Delaunay Refinement algorithm (PDR). PDR.PODM supports fully functional volume grading by creating elements of varying size: small elements near the boundary or inside critical regions to capture fine features, and large elements in the rest of the mesh. We tested PDR.PODM on Blacklight, a distributed shared memory (DSM) machine at the Pittsburgh Supercomputing Center. For uniform mesh generation, we observed a weak-scaling speedup of 163.8 and above for up to 256 cores, as opposed to PODM, whose weak-scaling speedup is only 44.7 on 256 cores; PDR.PODM thus scales well on uniform refinement cases on DSM supercomputers. The end result is that PDR.PODM generates 18 million elements per second, as opposed to 14 million per second in our earlier work. The varying-size version sharply reduces the number of elements compared to the uniform version, reducing the time to generate the mesh while keeping the same fidelity.

18.
This paper describes the computation of virtual resource blocks and their mapping to physical resource blocks, and on that basis proposes a scheme that can be implemented on a DSP. The scheme simplifies the implementation process, uses a simple algorithm, and has low time cost.

19.
Research and Implementation of a High-Performance, Highly Adaptable Distributed File Server (cited by 1: 0 self-citations, 1 by others)
With the development of computer networking and the wide adoption of Internet technology, distributed techniques and distributed systems have emerged. Yet neither traditional distributed file systems nor Internet-oriented ones manage to combine reliability, scalability, ease of use, and performance. This work targets file storage systems for distributed computing environments: it analyzes the current state and shortcomings of file servers and studies the theoretical and technical issues involved in building a high-performance, highly adaptable distributed file server. It proposes a virtual-machine-based design for the file server's distributed cache system, with a detailed design scheme; it also applies the Adapter design pattern to the file-transfer protocol module, strengthening the server's adaptability to different network environments and client requirements.

20.
D. B. Wortman, S. Zhou, S. Fink. 《Software》 1994, 24(1): 111-125
This paper describes the issues involved in sharing data among processes executing co-operatively in a heterogeneous computing environment. In a heterogeneous environment, the physical representation of a data object may need to be transformed when the object is moved from one type of processor to another. As a part of a larger project to build a heterogeneous distributed shared memory system we developed an automated tool for generating the conversion routines that are used to implement representation conversion for data objects. We developed a novel method for processing source programs that allowed us to extract detailed information about the physical representation of data objects without access to the source code of the compilers that were generating those representations. A performance comparison with the more general XDR heterogeneous data conversion package shows that the customized conversion routines that we generate are 4 to 8 times faster.
