20 similar documents found; search took 15 ms
1.
Conventional methods of storing a K-dimensional array allow easy extension only along one dimension. We present a technique of allocating a linear sequence of contiguous storage locations for a K-dimensional extendible array by adjoining blocks of (K-1)-dimensional subarrays. Element access is by determination of the block header location and then the displacement within the block. For cubical and all practical cases of rectangular arrays considered, the storage requirement is O(N), where N is the array size. The element access cost is O(K) for the 2-step computed access function used.
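The header-plus-displacement scheme of this first paper can be illustrated with a minimal 2-D sketch (the class and method names below are illustrative, not from the paper): each row or column extension appends one contiguous block to the linear storage and records a header, and access first picks the owning block from the headers, then adds a displacement within it.

```python
class Extendible2D:
    """2-D extendible array stored as one linear sequence of blocks.

    Each extension along either dimension appends a contiguous block
    and records a header: (block start, extent of the other dimension
    at allocation time).  Illustrative sketch of the scheme only.
    """

    def __init__(self):
        self.store = []      # linear contiguous storage
        self.rows = 0
        self.cols = 0
        self.row_hdr = []    # row i -> (start, #cols when row i was added)
        self.col_hdr = []    # col j -> (start, #rows when col j was added)

    def add_row(self):
        self.row_hdr.append((len(self.store), self.cols))
        self.store.extend([None] * self.cols)
        self.rows += 1

    def add_col(self):
        self.col_hdr.append((len(self.store), self.rows))
        self.store.extend([None] * self.rows)
        self.cols += 1

    def _offset(self, i, j):
        start, ncols = self.row_hdr[i]
        if j < ncols:             # column j existed when row i was added,
            return start + j      # so (i, j) lives in row i's block
        start, _ = self.col_hdr[j]
        return start + i          # otherwise it lives in column j's block

    def get(self, i, j):
        return self.store[self._offset(i, j)]

    def set(self, i, j, v):
        self.store[self._offset(i, j)] = v
```

Storage is exactly rows × cols cells, O(N), and an access costs one header lookup plus a displacement, matching the 2-step O(K) access function for K = 2.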
2.
3.
Case studies of two worldwide winners, Ford Motor Company's Team Ranger and the Airolite Company, a mid-sized (US$10 million) engineer-to-order manufacturer of ventilator louvers, confirm the benefits of applying Concurrent Engineering to design processes. Both must confront and surmount global competition, increased labor costs, rising customer expectations, shorter product life cycles, and government regulation. This paper illustrates how Concurrent Engineering meets these demands by embracing supporting subsystems that include Computer Aided Drafting & Design, Quality Function Deployment, and Design for Manufacture & Assembly.
4.
Jaroslaw Nieplocha Robert J. Harrison Richard J. Littlefield 《The Journal of supercomputing》1996,10(2):169-189
Portability, efficiency, and ease of coding are all important considerations in choosing the programming model for a scalable parallel application. The message-passing programming model is widely used because of its portability, yet some applications are too complex to code in it while also trying to maintain a balanced computation load and avoid redundant computations. The shared-memory programming model simplifies coding, but it is not portable and often provides little control over interprocessor data transfer costs. This paper describes an approach, called Global Arrays (GAs), that combines the better features of both other models, leading to both simple coding and efficient execution. The key concept of GAs is that they provide a portable interface through which each process in a MIMD parallel program can asynchronously access logical blocks of physically distributed matrices, with no need for explicit cooperation by other processes. We have implemented the GA library on a variety of computer systems, including the Intel Delta and Paragon, the IBM SP-1 and SP-2 (all message passers), the Kendall Square Research KSR-1/2 and the Convex SPP-1200 (nonuniform access shared-memory machines), the CRAY T3D (a globally addressable distributed-memory computer), and networks of UNIX workstations. We discuss the design and implementation of these libraries, report their performance, illustrate the use of GAs in the context of computational chemistry applications, and describe the use of a GA performance visualization tool. (An earlier version of this paper was presented at Supercomputing'94.)
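The one-sided get/put access style that GAs provide can be mocked in a few lines. This is a single-process conceptual sketch, not the real GA API (whose calls live in the C/Fortran library); the class and method names are mine:

```python
class MockGlobalArray:
    """Single-process mock of a Global Array: a logically shared 2-D
    matrix that callers access by logical block, with no cooperation
    from any 'owning' process.  Conceptual sketch only; the real GA
    library physically distributes the blocks across MIMD processes."""

    def __init__(self, nrows, ncols):
        self.nrows, self.ncols = nrows, ncols
        self.data = [[0] * ncols for _ in range(nrows)]

    def put(self, lo, hi, block):
        # One-sided write of the logical block [lo, hi): the caller
        # needs no matching receive anywhere else.
        (r0, c0), (r1, c1) = lo, hi
        for r in range(r0, r1):
            for c in range(c0, c1):
                self.data[r][c] = block[r - r0][c - c0]

    def get(self, lo, hi):
        # One-sided read of the logical block [lo, hi), independent
        # of how the data would be physically laid out.
        (r0, c0), (r1, c1) = lo, hi
        return [row[c0:c1] for row in self.data[r0:r1]]
```

The point of the interface is that callers address logical matrix blocks; the physical distribution (which the mock collapses to one list of lists) is hidden behind get/put.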
5.
Several meetings of the Extremely Large Databases community for large-scale scientific applications advocate the use of multidimensional arrays as the appropriate model for representing scientific databases. Scientific databases gradually grow to massive sizes on the order of terabytes and petabytes. As such, storing them requires efficient dynamic storage schemes in which the array is allowed to arbitrarily extend the bounds of its dimensions. Conventional multidimensional array representations cannot extend or shrink their bounds without relocating elements of the data set; in general, extendibility is limited to only one dimension. This paper presents a technique for storing dense multidimensional arrays by chunks such that the array can be extended along any dimension without compromising the access time for an element. This is done with a computed access mapping function that maps the k-dimensional index onto a linear index of the storage locations. This concept forms the basis for the implementation of an array file of any number of dimensions, where the bounds of the array dimensions can be extended arbitrarily. Such a feature currently exists in the Hierarchical Data Format version 5 (HDF5); however, extending the bound of a dimension in an HDF5 array file can be unusually expensive in time. In our storage scheme for dense array files, such extensions can be performed while still accessing elements of the array orders of magnitude faster than in HDF5 or conventional array files. We also present theoretical and experimental analysis of our scheme with respect to access time and storage overhead. The mapping scheme can be readily integrated into existing PGAS models for parallel processing in a cluster networked computing environment.
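A chunked layout of this kind can be sketched as follows (illustrative Python; the fixed chunk shape and the dictionary directory are my assumptions, not the paper's computed mapping function): an index splits into chunk coordinates plus a row-major offset within the chunk, and extending a dimension only appends new chunks, so existing elements never move.

```python
CHUNK = (4, 4)   # fixed chunk shape, assumed for illustration

def split_index(idx):
    """Map a k-d index to (chunk coordinates, offset within chunk)."""
    coords = tuple(i // c for i, c in zip(idx, CHUNK))
    off = 0
    for i, c in zip(idx, CHUNK):
        off = off * c + (i % c)   # row-major within the chunk
    return coords, off

class ChunkedArray:
    def __init__(self):
        self.directory = {}   # chunk coords -> start of chunk in "file"
        self.store = []       # linear "file" of chunk payloads

    def _locate(self, idx):
        coords, off = split_index(idx)
        if coords not in self.directory:      # extending: append a chunk
            self.directory[coords] = len(self.store)
            cells = 1
            for c in CHUNK:
                cells *= c
            self.store.extend([None] * cells)
        return self.directory[coords] + off

    def set(self, idx, v):
        self.store[self._locate(idx)] = v

    def get(self, idx):
        return self.store[self._locate(idx)]
```

Because a bound extension only appends chunks at the end of the file, previously computed chunk offsets stay valid, which is the property that makes extension cheap.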
6.
We propose a systems approach to providing video service that integrates the multiresolution data generated by scalable compression algorithms with the high-bandwidth, high-capacity storage provided by disk arrays. We introduce two layout strategies for storing multiresolution video data on magnetic disk arrays, which vary in the degrees of parallelism and concurrency they use to satisfy requests. Our simulation results show that the storage of multiple video resolutions allows a video file server to satisfy considerably more user requests than a server that stores a single resolution of video data.
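One way to picture layouts that trade parallelism against concurrency (a hypothetical sketch, not the paper's exact strategies): striping a resolution layer's blocks across all disks engages many disks per request (parallelism), while pinning each layer to its own disk lets requests for different layers proceed independently (concurrency).

```python
def parallel_layout(nblocks, ndisks):
    """Stripe one layer's blocks round-robin across all disks:
    a single request engages many disks (high parallelism)."""
    return [b % ndisks for b in range(nblocks)]

def concurrent_layout(layer, ndisks):
    """Pin each resolution layer to one disk: requests for
    different layers proceed on different disks (high concurrency)."""
    return layer % ndisks
```
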
7.
Glenford Mapp Dhawal Thakker Orhan Gemikonakli 《Journal of Computer and System Sciences》2011,77(5):837-851
Gate-limited service is a type of service discipline found in queueing theory and can be used to describe a number of operational environments, for example large transport systems such as buses, trains, or taxis. It has recently been observed that such systems can also describe interactive Internet services that use a client/server interaction, and new services of this genre are being developed for the local area. One such service is a Network Memory Server (NMS) being developed at Middlesex University. Though there are several examples of real systems that can be modelled using gate-limited service, the analytical models that have been developed for gate-limited systems have been difficult to use, requiring many iterations before practical results can be generated. In this paper, a detailed gate-limited bulk service queueing model based on Markov chains is explored and a numerical solution is demonstrated for simple scenarios. Quantitative results are presented and compared with a mathematical simulation. The analysis is used to develop an algorithm based on the concept of optimum operational points. The algorithm is then employed to build a high-performance server capable of balancing the need to prefetch for streaming applications against promptly satisfying demand misses. The algorithm is further tested using a systems simulation and then incorporated into an Experimental File System (EFS), which showed that the algorithm can be used in a real networking environment.
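The gate-limited discipline itself is easy to simulate: each time the server returns to the gate, it admits at most K of the customers currently waiting and serves them as one batch while new arrivals accumulate. The sketch below is an illustrative Monte Carlo version, not the paper's Markov-chain model; all names are mine.

```python
import random

def simulate_gate_limited(lam, batch_time, K, n_cycles, seed=1):
    """Gate-limited bulk service: each cycle the server opens the
    gate, takes min(waiting, K) customers, and serves the batch for
    batch_time while Poisson(lam) arrivals accumulate.  Returns the
    average batch size over n_cycles."""
    rng = random.Random(seed)
    waiting = 0
    served_total = 0
    for _ in range(n_cycles):
        # arrivals during one service cycle: Poisson process at rate lam
        t = 0.0
        while True:
            t += rng.expovariate(lam)
            if t > batch_time:
                break
            waiting += 1
        batch = min(waiting, K)   # the gate limit
        waiting -= batch
        served_total += batch
    return served_total / n_cycles
```

When the system is stable (arrival rate below K per cycle), the mean batch size converges to the mean number of arrivals per cycle; when it saturates, batches lock at K, which is the regime the optimum-operational-point analysis navigates.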
8.
9.
10.
We propose an efficient writeback scheme that guarantees throughput in high-performance storage systems. The proposed scheme, called de-fragmented writeback (DFW), reduces the positioning time of storage devices under write workloads and thus enables fast writeback. We consider both types of storage media in designing DFW: traditional rotating disks and emerging solid-state disks. First, sorting and hole-filling methods are used for rotating disk media to achieve higher throughput; the scheme converts fragmented data blocks into sequential ones, reducing the number of write requests and unnecessary disk-head movements. Second, a flash-block-aware, clustering-based writeback scheme is used for solid-state disks, reflecting the characteristics of flash memory. The experimental results show that our schemes guarantee high system throughput while preserving data reliability.
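The rotating-disk path can be sketched in a few lines (an illustrative reading of DFW's sorting step, not the authors' code): sort the dirty block numbers, then coalesce adjacent blocks into sequential runs so each run becomes one write request and the head moves once per run.

```python
def defragment_writeback(dirty_blocks):
    """Turn a set of dirty block numbers into (start, length) runs of
    sequential writes, minimizing head movement between requests."""
    blocks = sorted(set(dirty_blocks))
    runs = []
    for b in blocks:
        if runs and b == runs[-1][0] + runs[-1][1]:
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)   # extend current run
        else:
            runs.append((b, 1))                          # start a new run
    return runs
```

For example, dirty blocks {7, 3, 9, 4, 5} collapse into three requests instead of five, one of them a 3-block sequential write.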
11.
The primary objective of DRAM (dynamic random-access memory) is to offer the largest memory capacity at the lowest possible cost. Designers achieve this by two means: first, they optimize the process and the design to minimize die area; second, they ensure that the device serves high-volume markets and can be mass-produced to achieve the greatest economies of scale. SLDRAM (synchronous-link DRAM) is a new memory interface specification developed through the cooperative efforts of leading semiconductor memory manufacturers and high-end computer architects and system designers. SLDRAM meets the high data bandwidth requirements of emerging processor architectures while retaining the low cost of earlier DRAM interface standards. These and other benefits suggest that SLDRAM will become the mainstream commodity memory of the early 21st century.
12.
Alexandre Brandwajn 《Performance Evaluation》1981,1(4):263-281
In this paper, devoted to modeling the performance of Direct Access Storage Devices (DASDs), a basic model of a block-multiplexor channel is developed. The goal is to properly and simply represent the missed reconnection effect together with its influence on the overall performance of a DASD subsystem. The approach taken is that of step-wise decomposition and analysis under loads of constant numbers of users. Computational algorithms for evaluating the solution obtained are considered. The use of the basic model as a building block for more complex systems is illustrated in the example of a symmetrical two-CPU system with shared disks. Finally, the limitations of the approach are discussed.
13.
14.
15.
Although lock-based critical sections are the synchronization method of choice, they have significant performance limitations and lack certain properties, such as failure atomicity and stability. Addressing both these limitations requires considerable software overhead. Transactional lock removal can dynamically eliminate synchronization operations and achieve transparent transactional execution by treating lock-based critical sections as lock-free optimistic transactions.
16.
《The Journal of Strategic Information Systems》2023,32(1):101756
High-reliability organizations (HROs) and their complex operating models have been a focus of scholarly work for more than three decades. Recently, HROs have been challenged by new market pressures that require them to digitally transform in ways that affect their identity and value-creation models while still maintaining high levels of security and efficiency. This longitudinal, in-depth single-case study of a major European utility company examines the role of HRO identity in digital transformation (DT), specifically in terms of tensions between innovation and transformation on the one hand and maintaining reliable operations on the other. Our findings show how tensions between HROs' identity and key features of DT give rise to threat perceptions and self-protective behaviors by the IT workforce that may eventually derail the transformation process. We develop a process model that highlights the sources and consequences of identity misalignment during major DT initiatives in HROs. In doing so, we extend the research on DT by highlighting the importance of bottom-up processes for DT success and failure, especially concerning the IT function's perception of organizational identity.
17.
The USB flash drive virus, also known as the Autorun virus, is a common and representative type of virus that spreads via the Autorun.inf file, exploiting Windows' AutoRun feature to execute automatically when the drive is opened. This paper analyzes in detail how USB-drive viruses work, designs a demonstration virus program based on that principle, and proposes methods for preventing and removing such viruses.
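The infection vector described above is an INI-style autorun.inf file whose [AutoRun] section names an executable to launch. A minimal defensive check (an illustrative sketch; the function name is mine) can parse that file from a drive root and report the payload it would launch:

```python
import configparser
import pathlib

def autorun_payload(drive_root):
    """Return the executable an autorun.inf would launch from the
    root of a removable drive, or None if no such entry exists.
    A first-pass check before trusting a USB drive."""
    inf = pathlib.Path(drive_root) / "autorun.inf"
    if not inf.is_file():
        return None
    cp = configparser.ConfigParser()
    try:
        cp.read(inf)
    except configparser.Error:
        return None   # malformed file: handle as suspicious elsewhere
    for section in cp.sections():
        if section.lower() == "autorun":   # section names are case-sensitive
            return cp[section].get("open") # option keys are lowercased on read
    return None
```

Deleting the autorun.inf (and the executable it names) after such a scan, with AutoRun disabled system-wide, is the removal approach the paper's prevention methods boil down to.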
18.
One of the persistently exciting control applications is that of disk drive servos. From the start in the early 1950s to the massive capacity commodity drives of the early 2000s, the problem of accessing data on rotating disk media has provided a wealth of control challenges to be solved. This survey paper traces the early history of disk drive control from the first disk drive in 1956 to the first commercial drive with Magneto-Resistive heads in 1990. Rather than the approach used in (Abramovitch and Franklin, 2002a) in which the histories of the components were outlined first, we will focus on the feedback loop itself in those early days. The paper will survey the different areas of the disk drive control problem and how they evolved.
19.
EED: Energy Efficient Disk drive architecture
Energy efficiency has become one of the most important challenges in designing future computing systems, and the storage system is one of their largest energy consumers. This paper proposes an Energy Efficient Disk (EED) drive architecture that integrates a relatively small NAND flash memory into a traditional disk drive to explore the impact of the flash memory on the performance and energy consumption of the disk. The EED monitors data access patterns and moves frequently accessed data from the magnetic disk to the flash memory. Thanks to this data migration, most data accesses can be satisfied from the flash memory, which extends the idle period of the disk drive and enables it to stay in a low-power state for an extended period of time. Because flash memory consumes considerably less energy and its read access is much faster than a magnetic disk's, the EED can save significant amounts of energy while reducing the average response time. Real trace-driven simulations are employed to validate the proposed disk drive architecture. An energy coefficient, the product of the average response time and the average energy consumption, is proposed as a performance metric for the EED. The simulation results, along with the energy coefficient, show that the EED can achieve an 89.11% energy consumption reduction and a 2.04% average response time reduction with the cello99 trace, a 7.5% energy consumption reduction and a 45.15% average response time reduction with the cello96 trace, and a 20.06% energy consumption reduction and a 6.02% average response time reduction with the TPC-D trace. Traditionally, energy conservation and performance improvement are contradictory; the EED strikes a good balance between the two.
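Since the energy coefficient is just the product of the two averages, the combined improvement follows directly from the two stated reductions. A quick check against the cello99 numbers (function names are mine):

```python
def energy_coefficient(avg_response_time, avg_energy):
    """EED's combined metric: average response time x average energy."""
    return avg_response_time * avg_energy

def coefficient_reduction(energy_reduction, time_reduction):
    """Relative drop in the energy coefficient given fractional
    reductions in energy consumption and response time."""
    return 1 - (1 - energy_reduction) * (1 - time_reduction)

# cello99: 89.11% less energy and 2.04% lower response time combine
# to roughly an 89.3% drop in the energy coefficient.
```
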