Found 20 similar documents; search took 86 ms.
1.
2.
The I/O computer is a small, calculator-like handheld computer that provides intelligent control for industrial applications. It reduces hardware design time, lowers development cost, increases fault tolerance, and allows flexible system configuration. This article describes the I/O computer's hardware architecture, software design, functional features, and the I/O language.
3.
4.
The I/O subsystem has long been the bottleneck limiting overall computer system performance. This paper proposes an external-storage performance model for quantitatively analyzing external-storage I/O performance and helping overcome the I/O bottleneck, and on that basis proposes multi-channel I/O to overcome the PCI bus bottleneck. Using multithreaded control and asynchronous I/O, the disks on all channels work in parallel. Comparative experiments show that peak sequential-read performance improves by 46%, sequential write by 48%, random read by 4%, and random write by 57%.
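The multi-channel idea above can be sketched in a few lines. This is a hypothetical toy, not the paper's implementation: temporary files stand in for the per-channel disks, and one thread per channel issues the reads so the devices can work concurrently.

```python
# Toy multi-channel parallel read: one reader thread per "channel"
# (here, a temp file standing in for a disk on its own channel).
import os
import tempfile
from concurrent.futures import ThreadPoolExecutor

def read_channel(path: str) -> bytes:
    """Read one channel's data in full."""
    with open(path, "rb") as f:
        return f.read()

def parallel_read(paths):
    """Dispatch one reader per channel; results come back in channel order."""
    with ThreadPoolExecutor(max_workers=len(paths)) as pool:
        return list(pool.map(read_channel, paths))

if __name__ == "__main__":
    paths = []
    for i in range(4):  # four channels
        fd, path = tempfile.mkstemp()
        with os.fdopen(fd, "wb") as f:
            f.write(bytes([i]) * 1024)
        paths.append(path)
    chunks = parallel_read(paths)
    print(sum(len(c) for c in chunks))  # 4096
    for p in paths:
        os.remove(p)
```

With real disks the win comes from overlapping seek and transfer time across devices, which a thread pool over blocking reads approximates.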
5.
Traditional micro/minicomputers use a shared-bus architecture, which limits bandwidth, scalability, availability, and reliability; the bus thus becomes the system's internal communication bottleneck. The parallel PC is a new, efficient architecture centered on a switching network that addresses these shortcomings of the traditional PC. This paper introduces the parallel PC architecture, then uses an M/M/1 queueing model to evaluate and compare the I/O performance of the traditional PC and the parallel PC, and closes with conclusions and recommendations.
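For readers unfamiliar with the M/M/1 model used in the comparison above, its steady-state metrics have simple closed forms. The function and example numbers below are mine, not the paper's:

```python
# Classic M/M/1 closed forms: Poisson arrivals at rate lam,
# exponential service at rate mu, a single server, stable iff lam < mu.
def mm1_metrics(lam: float, mu: float):
    """Return (utilization, mean jobs in system, mean response time)."""
    assert lam < mu, "queue is unstable unless lam < mu"
    rho = lam / mu          # utilization
    n = rho / (1 - rho)     # mean number in system
    t = 1 / (mu - lam)      # mean response time (by Little's law, n = lam*t)
    return rho, n, t

# e.g. a bus handling 80 req/s against a 100 req/s service capacity:
rho, n, t = mm1_metrics(80.0, 100.0)
print(round(rho, 2), round(n, 1), round(t * 1000, 1))  # 0.8 4.0 50.0 (ms)
```

The model makes the bottleneck argument concrete: as utilization approaches 1, mean response time grows without bound, which is exactly the regime a shared bus reaches first.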
6.
This paper analyzes the logical model of the parallel computer I/O subsystem, and compares and analyzes its physical implementation in detail from three aspects: the I/O subsystem's physical interfaces, its distribution and interconnection, and the internal structure of I/O nodes.
7.
Parallel I/O Techniques in High-Performance Computing (cited 2 times: 0 self, 2 by others)
1. Introduction. High-performance computing capability has increasingly become an important component of a nation's economic, scientific, and defense strength. Driven by the demands of scientific and engineering computation and large-scale commercial transaction processing, the requirements on I/O capability in high-performance computing are essentially unbounded. Large-scale multimedia applications require large, fast storage systems; multi-user transaction-processing environments require fast I/O for real-time access; and some grand-challenge scientific computations pursue "3T" performance (1 Teraflops of compute power, 1 Terabyte of main memory, and 1 Terabyte/s of I/O bandwidth). I/O has always been the bottleneck in high-performance computing: typically, disks are ten thousand to a hundred thousand times slower than main memory. In recent years, with the rapid development of VLSI and networking technology, CPU performance has taken a major leap roughly every three years and network bandwidth has grown even faster, but the performance of I/O devices remains constrained by mech…
8.
1. Introduction. A computer's overall performance depends mainly on the performance of its five principal components: the processor, main memory, bus, storage, and display subsystem. Over the past decade or more, each of these components has improved to varying degrees; in particular the microprocessor, as the processing core, has advanced enormously. Yet the overall computer architecture still essentially follows the shared-…
9.
The Mach I/O System (cited 1 time: 0 self, 1 by others)
Sun Ninghui, 《计算机研究与发展》 (Journal of Computer Research and Development), 1994, 31(9): 30–35
The Mach I/O system adopts concepts and a structure completely different from UNIX's. Mach device management is built around two basic Mach concepts, ports and memory objects, and provides a convenient RPC user interface. A recent development in the Mach 3.0 I/O system treats I/O management as a user-level server. This paper describes Mach's device-management mechanism, the construction principles of "device-independent" drivers together with examples, methods for direct device control from user space, and the performance of device management after these new concepts are introduced.
10.
Su Jinshu, 《小型微型计算机系统》 (Mini-Micro Systems), 1990, 11(2): 1–6
Computer design has always pursued a balance among three traditional bottlenecks: processor execution speed, memory bandwidth, and I/O throughput. In recent years microprocessor execution speed has kept rising, while buses, controllers, and memory have advanced more slowly. At the same time, database management, computer-aided engineering design, software development, and similar workloads need I/O subsystems that match high-performance computers, so I/O throughput has become an increasingly acute bottleneck. The challenging problem facing architecture design is how to break through the I/O bottleneck, keep the three in proper balance, and realize the machine's full performance. This paper first analyzes the background behind the I/O bottleneck, then explores techniques for relieving it and raising throughput from the angles of microprocessors and high-speed memory, disk transfer, peripheral interface chips, and dedicated I/O computers.
11.
12.
Dynamic I/O-Aware Scheduling for Batch-Mode Applications on Chip Multiprocessor Systems of Cluster Platforms
Efficiency of batch processing is becoming increasingly important for many modern commercial service centers, e.g., clusters and cloud-computing datacenters. However, periodic resource contention has become the major performance obstacle for concurrently running applications on mainstream CMP servers. I/O contention is one such obstacle: it can seriously impede both the co-running performance of batch jobs and the system throughput. In this paper, a dynamic I/O-aware scheduling algorithm is proposed to lower the impact of I/O contention and to enhance co-running performance in batch processing. We set up our environment on an 8-socket, 64-core server in the Dawning Linux Cluster. Fifteen workloads ranging from 8 to 256 jobs are evaluated. Our experimental results show significant improvements in workload throughput, ranging from 7% to 431%, along with noticeable improvements in workload slowdown and in the average runtime of each job. These results show that a well-tuned dynamic I/O-aware scheduler is beneficial for batch-mode services and can also enhance resource utilization via throughput improvement on modern service platforms.
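The paper's algorithm is not reproduced in the abstract; the sketch below only illustrates the general I/O-aware idea with a heuristic of my own: rank jobs by I/O intensity and interleave heavy and light jobs so that I/O-hungry jobs are not co-scheduled with each other.

```python
# Hypothetical I/O-aware batch ordering: alternate the most and least
# I/O-intensive remaining jobs so contention on storage is spread out.
def io_aware_order(jobs):
    """jobs: list of (name, io_intensity in [0, 1]).
    Returns job names in an interleaved heavy/light order."""
    ranked = sorted(jobs, key=lambda j: j[1], reverse=True)
    order, lo, hi = [], 0, len(ranked) - 1
    while lo <= hi:
        order.append(ranked[lo]); lo += 1      # heaviest remaining I/O job
        if lo <= hi:
            order.append(ranked[hi]); hi -= 1  # lightest remaining I/O job
    return [name for name, _ in order]

print(io_aware_order([("a", 0.9), ("b", 0.1), ("c", 0.8), ("d", 0.2)]))
# → ['a', 'b', 'c', 'd']
```

A production scheduler would measure I/O intensity online (e.g. from block-layer counters) rather than take it as input, and would re-rank dynamically as jobs change phase.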
13.
Describes a numerical-simulation method for obtaining eddy-current testing signals of circumferential surface defects on stainless-steel tubes. Besides outlining the simulation's mathematical model, the finite-element method, and the computation steps, the computed results are compared with experimental results to demonstrate the feasibility of the simulation.
14.
Analyzes the dynamic load-balancing principles of the three work-pool schemes in multiprocessor systems: centralized, decentralized, and fully distributed. Based on simulation experiments, it proposes using work-pool techniques to promptly dispatch pending tasks to idle processors in a loosely coupled multiprocessor system, so that the machines in such a system (especially a heterogeneous cluster platform) cooperate organically and the processing speed for massive data is effectively improved.
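The centralized variant of the work-pool scheme is easy to sketch: idle workers pull tasks from a shared pool, so faster machines in a heterogeneous cluster naturally take on more work. This toy uses threads and a squaring task as stand-ins; it is not the paper's code.

```python
# Minimal centralized work pool: workers grab the next task whenever idle.
import queue
import threading

def run_pool(tasks, n_workers=4):
    pool = queue.Queue()
    for t in tasks:
        pool.put(t)
    results, lock = [], threading.Lock()

    def worker():
        while True:
            try:
                t = pool.get_nowait()   # an idle worker grabs the next task
            except queue.Empty:
                return                  # pool drained: worker retires
            r = t * t                   # stand-in for the real processing
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(n_workers)]
    for th in threads:
        th.start()
    for th in threads:
        th.join()
    return sorted(results)

print(run_pool([1, 2, 3, 4, 5]))  # [1, 4, 9, 16, 25]
```

The decentralized and fully distributed schemes replace the single shared queue with per-node pools plus work stealing or migration, trading the central bottleneck for coordination traffic.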
15.
We study the scheduling situation where n tasks, subject to release dates and due dates, have to be scheduled on m parallel processors. We show that, when tasks have unit processing times and require either 1 or m processors simultaneously, the minimum maximal tardiness can be computed in polynomial time. Two algorithms are described: the first is based on a linear programming formulation of the problem, while the second is a combinatorial algorithm. The complexity status of this tall/small task scheduling problem P | r_i, p_i = 1, size_i ∈ {1, m} | T_max was unknown before, even for two processors.
16.
17.
18.
Best-Fit Heuristic Fault-Tolerant Scheduling for Real-Time Multiprocessor Systems (cited 4 times: 0 self, 0 by others)
From the standpoint of efficient resource utilization, this paper proposes a Best-Fit heuristic fault-tolerant scheduling algorithm whose optimization goal is to minimize the number of processors. The algorithm uses primary/backup replication, combining active and passive execution of the backup copies, and schedules each real-time task's primary and backup copies onto different processors. Following the Best-Fit heuristic, it finds the "best-fitting" processor for each primary copy so that as many backup copies as possible can run in passive mode. The algorithm guarantees the system's timeliness and fault tolerance while saving processors. Both analysis and simulation results demonstrate its effectiveness.
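The two core placement constraints above — best-fit selection for the primary, and the backup on a different processor — can be sketched as follows. The capacity model and tie-breaking are my simplifications, not the paper's algorithm.

```python
# Hedged sketch of primary/backup placement: the primary goes to the
# best-fit processor (smallest remaining capacity that still suffices),
# and the backup must land on a *different* processor so a single
# processor failure cannot lose both copies.
def place(tasks, capacity, n_proc):
    """tasks: list of (name, load). Returns {name: (primary_cpu, backup_cpu)}
    or None if some task cannot be placed."""
    free = [capacity] * n_proc
    plan = {}
    for name, load in tasks:
        def best_fit(exclude=None):
            cands = [i for i in range(n_proc)
                     if free[i] >= load and i != exclude]
            return min(cands, key=lambda i: free[i]) if cands else None
        p = best_fit()
        if p is None:
            return None                 # no processor can host the primary
        free[p] -= load
        b = best_fit(exclude=p)         # backup on a different processor
        if b is None:
            return None
        free[b] -= load                 # pessimistic: reserve backup's load
        plan[name] = (p, b)
    return plan

plan = place([("t1", 3), ("t2", 2)], capacity=10, n_proc=3)
assert all(p != b for p, b in plan.values())
print(plan)
```

Reserving full capacity for every backup is pessimistic; the paper's passive-backup mode exists precisely so that backup reservations can be overlapped and reclaimed when primaries succeed.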
19.
Seng Lin Shee, Andrea Erdos, Sri Parameswaran, International Journal of Parallel Programming, 2008, 36(1): 140–162
Multicore processors have been utilized in embedded systems and general computing applications for some time. However, these multicore chips execute multiple applications concurrently, with each core carrying out a particular task in the system. Such systems can be found in gaming, automotive real-time systems, and video/image encoding devices, and are commonly deployed to overcome deadline misses, which are primarily due to overloading of a single multitasking core. In this paper, we explore the use of multiple cores for a single application, as opposed to multiple applications executing in a parallel fashion. A single application is parallelized using two different methods: one, a master-slave model; and two, a sequential pipeline model. The systems were implemented using Tensilica's Xtensa LX processors with queues as the means of communication between two cores. In the master-slave model, we utilized a coarse-grained approach whereby a main core distributes the workload to the remaining cores and reads the processed data before writing the results back to file. In the pipeline model, a lower granularity is used: the application is partitioned into multiple sequential blocks, each block representing a stage in a sequential pipeline. For both models we applied a number of differing configurations ranging from a single-core to a nine-core system. We found that, without any optimization, for the seven-core system the sequential pipeline approach has more efficient area usage, with an area-increase-to-speedup ratio of 1.83 compared to 4.34 for the master-slave approach. With selective optimization in the pipeline approach, we obtained speedups of up to 4.6× with an area increase of only 3.1× (an area-increase-to-speedup ratio of just 0.68).
National ICT Australia is funded through the Australian Government's Backing Australia's Ability initiative, in part through the Australian Research Council.
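The master-slave model above maps naturally onto queues between cores. The toy below uses Python threads and `queue.Queue` as a stand-in for the Xtensa LX hardware queues (the per-item computation is a placeholder of mine):

```python
# Toy master-slave model: the master pushes work items into a shared
# queue, slave cores process them, and results flow back through a
# second queue -- mirroring the two-queue link between cores.
import queue
import threading

def master_slave(items, n_slaves=3):
    work, done = queue.Queue(), queue.Queue()
    for it in items:
        work.put(it)
    for _ in range(n_slaves):
        work.put(None)          # one shutdown marker per slave

    def slave():
        while True:
            it = work.get()
            if it is None:
                return
            done.put(it + 1)    # stand-in for the real per-item computation

    threads = [threading.Thread(target=slave) for _ in range(n_slaves)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sorted(done.get() for _ in items)

print(master_slave([10, 20, 30]))  # [11, 21, 31]
```

The pipeline model would instead chain one queue per stage, with each "core" reading from its predecessor's queue and writing to its successor's — finer granularity, but more queue hops per item.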
20.
Proposes an integrated dynamic scheduling algorithm for real-time multiprocessor systems based on batch optimization. Each time the current partial schedule is extended, the algorithm optimally assigns a selected batch of tasks and applies a QoS (quality of service) degradation strategy to soft real-time tasks, thereby scheduling both hard and soft real-time tasks on a real-time multiprocessor system in a unified way. Extensive simulations show that, across a range of task-parameter settings, the new algorithm achieves a higher scheduling success ratio than the Myopic algorithm.
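The abstract does not give the algorithm, so the sketch below is only a much-simplified illustration of the two ideas it names: extend the schedule one batch at a time, and degrade a soft task's QoS (here, by shrinking its service time) instead of failing when it would miss its deadline. Hard-task misses still make the batch infeasible. All names and the EDF batch order are my assumptions.

```python
# Hypothetical batch extension with soft-task QoS degradation.
def schedule_batch(tasks, n_proc):
    """tasks: list of (name, exec_time, deadline, is_hard).
    Returns (assignments, degraded-name set), or None if a hard task
    would miss its deadline."""
    free = [0] * n_proc                  # per-processor earliest free time
    accepted, degraded = [], set()
    for name, c, d, hard in sorted(tasks, key=lambda t: t[2]):  # EDF order
        p = min(range(n_proc), key=lambda i: free[i])  # earliest-free CPU
        finish = free[p] + c
        if finish > d:
            if hard:
                return None              # hard deadline miss: infeasible
            c = max(0, d - free[p])      # degrade the soft task's QoS
            degraded.add(name)
            finish = free[p] + c
        free[p] = finish
        accepted.append((name, p))
    return accepted, degraded

res = schedule_batch([("h1", 2, 3, True), ("s1", 5, 4, False)], n_proc=1)
print(res)  # ([('h1', 0), ('s1', 0)], {'s1'})
```

The paper's algorithm optimizes the assignment of the whole batch jointly rather than greedily per task, which is where its advantage over the Myopic algorithm comes from.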