Similar Literature
1.
The emergence of multimedia and wireless applications as growth leaders has created an increased demand for embedded systems. Examples of such applications range from digital cellular telephones to high-performance avionics systems. Along with the increased market share of these products, however, has come an increase in system complexity and cost. For example, the complexity of embedded avionics systems has quickly grown from millions to billions of operations per second. These systems, formerly implemented in hardwired or uniprocessor architectures, now must consist of programmable-multiprocessor arrays to meet performance requirements. Time-to-market and life-cycle costs are key factors in the success of these products in the competitive electronics marketplace. These costs, therefore, should have a dominant influence on the design of embedded microelectronic systems. In addition, these systems must meet rigid form factor (such as size, power, and weight) constraints, which further complicate the design task. For designers of high-end embedded microsystems or large-volume consumer products, rapidly prototyping cost-effective implementations that meet stringent performance, functional, timing, and physical requirements is a formidable challenge.

2.
《Micro, IEEE》1998,18(4)
Posix standards represent approximately $80 billion of the $250+ billion Unix market over the last decade; they also provide the basis for significant growth in the future. This picture of the substantial success of Posix standards (and the business impact of standards in general) differs from that presented in Micro's May/June 1998 issue. The overall Unix market boasts billions of dollars in revenues, and is projected to continue growing into the future. Major areas for this growth feature high-end systems (data warehousing, servers, and supercomputing) rather than desktop applications.

3.
A.  H. 《Computer aided design》2001,33(14):1111-1122
In the 1990s, Case-Based Design (CBD) seemed an appealing approach to develop intelligent design support. Based on an alternative view of human cognition, CBD systems find new design solutions by adapting similar experiences from the past. Although several CBD applications have been built, a convincing breakthrough by these systems has yet to come. In search of reasons for this limited success, this article embarks on a critical review of the CBD approach. Its underlying cognitive model serves as a framework to analyse six CBD systems and to identify gaps in CBD research. The article focuses primarily on CBD applications for architecture, yet the findings may be relevant for other design domains as well.

4.
In the early 1990s, researchers at Sandia National Laboratories and the University of New Mexico began development of customized system software for massively parallel ‘capability’ computing platforms. These lightweight kernels have proven to be essential for delivering the full power of the underlying hardware to applications. This claim is underscored by the success of several supercomputers, including the Intel Paragon, Intel Accelerated Strategic Computing Initiative Red, and the Cray XT series of systems, each having established a new standard for high-performance computing upon introduction. In this paper, we describe our approach to lightweight compute node kernel design and discuss the design principles that have guided several generations of implementation and deployment. A broad strategy of operating system specialization has led to a focus on user-level resource management, deterministic behavior, and scalable system services. The relative importance of each of these areas has changed over the years in response to changes in applications and hardware and system architecture. We detail our approach and the associated principles, describe how our application of these principles has changed over time, and provide design and performance comparisons to contemporaneous supercomputing operating systems. Copyright © 2008 John Wiley & Sons, Ltd.

5.
File system metadata management has become a bottleneck for many data-intensive applications that rely on high-performance file systems. Part of the bottleneck is due to the limitations of an almost 50-year-old interface standard with metadata abstractions that were designed at a time when high-end file systems managed less than 100 MB. Today's high-performance file systems store 7–9 orders of magnitude more data, resulting in a number of data items for which these metadata abstractions are inadequate, such as directory hierarchies unable to handle complex relationships among data. Users of file systems have attempted to work around these inadequacies by moving application-specific metadata management to relational databases to make metadata searchable. Splitting file system metadata management into two separate systems introduces inefficiencies and systems management problems. To address this problem, we propose QMDS: a file system metadata management service that integrates all file system metadata and uses a graph data model with attributes on nodes and edges. Our service uses a query language interface for file identification and attribute retrieval. We present our metadata management service design and architecture and study its performance using a text analysis benchmark application. Results from our QMDS prototype show the effectiveness of this approach. Compared to the use of a file system and relational database, the QMDS prototype shows superior performance for both ingest and query workloads.
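As a rough illustration of the kind of data model the abstract describes (a graph with attributes on nodes and edges, queried by attribute), here is a minimal, hypothetical Python sketch; it is not the QMDS design or its query language, and all names are invented.

# Hypothetical sketch of a file-metadata graph with attributes on nodes and edges.
# This is NOT the QMDS implementation or its query language; names are illustrative.
class MetadataGraph:
    def __init__(self):
        self.nodes = {}            # node_id -> attribute dict
        self.edges = {}            # (src, dst) -> attribute dict

    def add_node(self, node_id, **attrs):
        self.nodes[node_id] = dict(attrs)

    def add_edge(self, src, dst, **attrs):
        self.edges[(src, dst)] = dict(attrs)

    def find(self, **criteria):
        # Return node ids whose attributes match all given criteria.
        return [n for n, a in self.nodes.items()
                if all(a.get(k) == v for k, v in criteria.items())]

g = MetadataGraph()
g.add_node("report.txt", type="file", owner="alice", size=2048)
g.add_node("project-x", type="directory")
g.add_edge("project-x", "report.txt", relation="contains")
print(g.find(owner="alice"))       # -> ['report.txt']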

6.
To increase the likelihood of expert systems' success it is useful to identify the more important determinants on which to focus managerial attention and resources. Seven major determinants of expert system success, here defined as desirable impact of the system on users' jobs, have been identified from the literature. DuPont's collection of expert systems (ES) provided the sample of 114 applications used in this study. The results corroborate much of what has been proposed in the literature, except for user characteristics, which were found to have no significant relationship to the measure of success. Based on the results, some recommendations are made to expert system development managers, including: (1) besides the recommendations already proposed in the literature, managers should deliberately seek to deal with large, labour-intensive and complex problems which are important to the organization; (2) the selection of a development shell with the characteristics described in this study, and which matches the problem at hand, is an important determinant of success; and (3) even though the primary source of knowledge is one or more domain experts, user involvement in the development process remains an important factor for ES to have a desirable impact on users' jobs.

7.
A Dynamic Adaptive Arbiter for Multiprocessor Systems-on-Chip
With the advance of deep-submicron process technology, it has become feasible to integrate multiple processors on a single chip. The communication architecture is the bottleneck of a multiprocessor system-on-chip, and an efficient arbiter can resolve the conflicts and contention that arise when several processors access shared resources simultaneously, thereby preventing degradation of system performance. This paper presents a dynamic adaptive arbiter with a simple algorithm that automatically adjusts the bus bandwidth occupied by each processor and avoids starvation. Experimental results on a multiprocessor simulation platform show that, compared with a conventional arbiter, it reduces task completion time by 68% and bus waiting time by 78%, while providing better control over each processor's bus bandwidth.
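As a loose illustration of the general idea of dynamic adaptive arbitration (adjusting each master's share of bus grants while avoiding starvation), a hypothetical Python sketch follows; it is not the arbiter algorithm proposed in the paper, and all names and numbers are invented.

# Hypothetical sketch of a dynamic adaptive bus arbiter: masters that have fallen
# below their target bandwidth share are preferred, with waiting time as a tiebreak
# so that no requester starves. Not the scheme from the paper; purely illustrative.
def arbitrate(requests, wait_cycles, bandwidth_share, grants):
    """Pick one requesting master to grant the bus to.

    requests        -- set of master ids currently requesting the bus
    wait_cycles     -- dict: master id -> cycles spent waiting so far
    bandwidth_share -- dict: master id -> target fraction of bus grants
    grants          -- dict: master id -> grants received so far
    """
    total = sum(grants.values()) or 1
    def priority(m):
        deficit = bandwidth_share[m] - grants[m] / total   # below its share -> boost
        return (deficit, wait_cycles[m])                   # break ties by waiting time
    return max(requests, key=priority)

shares = {"cpu0": 0.5, "cpu1": 0.3, "dsp": 0.2}
grants = {m: 0 for m in shares}
wait = {"cpu0": 3, "cpu1": 7, "dsp": 1}
winner = arbitrate({"cpu0", "cpu1"}, wait, shares, grants)
grants[winner] += 1
print(winner)                                              # -> cpu0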

8.
Data Models and Their Evolution
Database technology is the technology of data management and an important branch of computer science. After nearly half a century of development, database technology has established a solid theoretical foundation, mature commercial products, and a broad range of application areas. A data model describes how data in a database is stored and manipulated. By the form in which data is organized, data models can be divided into structured models, semi-structured models, OLAP analysis models, and big-data models. From the mid-to-late 1960s to the early 1990s, structured models were proposed first, mainly including the hierarchical model, the network model, the relational model, and the object-oriented model. In the late 1990s, with the rapid growth of Internet applications and complex applications such as scientific computing, semi-structured models emerged, including the XML model, the JSON model, and graph models. In the 21st century, with the continuing development of applications such as e-commerce and business intelligence, data analysis models became a research focus, chiefly relational ROLAP and multidimensional MOLAP. Since 2010, with the rapid growth of industrial big-data applications, big-data models represented by NoSQL and NewSQL database systems have become a new research focus. This paper surveys these data models and analyzes the performance of representative database systems for each model.

9.
Many control applications can be described as non-linear feedback systems whose stability properties can, for example, be analysed with the circle criterion, the Popov criterion or the off-axis circle criterion. In the SISO case these sector criteria became very popular because of their simple graphic interpretations; their significance for multivariable feedback systems has remained rather limited, since there are almost no practicable ways to evaluate them. In this paper the MIMO versions of the three stability criteria are derived using Popov's theory of hyperstability. The main contribution is the presentation of new methods for a frequency domain evaluation of the sector criteria in the multivariable case. The new methods are based on graphic elements: the 'positivity curves'.
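For orientation, the standard SISO textbook forms of two of these sector criteria are recalled below; these are not the MIMO evaluations derived in the paper.

Circle criterion (sector $[k_1,k_2]$ with $0<k_1<k_2$): the closed loop is absolutely stable if the Nyquist plot of $G(j\omega)$ does not enter the disk $D(k_1,k_2)$ whose diameter lies on the real axis between $-1/k_1$ and $-1/k_2$, and encircles it counterclockwise as many times as $G(s)$ has poles in the open right half-plane.

Popov criterion (sector $[0,k]$, $G(s)$ stable): absolute stability holds if there exists $q \ge 0$ such that
\[
\operatorname{Re}\bigl[(1+j\omega q)\,G(j\omega)\bigr] + \frac{1}{k} > 0 \quad \text{for all } \omega \ge 0 .
\]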

10.
Scientific Computing Kernels on the Cell Processor
In this work, we examine the potential of using the recently released STI Cell processor as a building block for future high-end scientific computing systems. Our work contains several novel contributions. First, we introduce a performance model for Cell and apply it to several key numerical kernels: dense matrix multiply, sparse matrix-vector multiply, stencil computations, and 1D/2D FFTs. Next, we validate our model by comparing results against published hardware data, as well as our own Cell blade implementations. Additionally, we compare Cell performance to benchmarks run on leading superscalar (AMD Opteron), VLIW (Intel Itanium2), and vector (Cray X1E) architectures. Our work also explores several different kernel implementations and demonstrates a simple and effective programming model for Cell's unique architecture. Finally, we propose modest microarchitectural modifications that could significantly increase the efficiency of double-precision calculations. Overall results demonstrate the tremendous potential of the Cell architecture for scientific computations in terms of both raw performance and power efficiency.
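As a plain, portable reference for one of the kernels listed (sparse matrix-vector multiply in CSR format), a hypothetical baseline sketch in Python is given below; it is not the Cell-tuned implementation evaluated in the paper.

# Hypothetical reference version of one of the kernels studied: sparse matrix-vector
# multiply in compressed sparse row (CSR) form. A simple portable baseline, not the
# Cell-optimized code the paper benchmarks.
def spmv_csr(values, col_idx, row_ptr, x):
    """Return y = A @ x for a sparse matrix A stored in CSR arrays."""
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# 2x2 example:  A = [[4, 0], [1, 3]]
print(spmv_csr([4.0, 1.0, 3.0], [0, 0, 1], [0, 1, 3], [1.0, 2.0]))   # [4.0, 7.0]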

11.
12.
Gallium arsenide (GaAs) is now being considered by the semiconductor industry as a serious challenger to the dominance of silicon in high-speed digital circuits. In addition, with the accelerated progress in developing GaAs ICs for microwave and optoelectronics applications, major future markets are projected. This paper surveys the product areas and associated system applications for GaAs and related III-V compound semiconductor devices and ICs. Particular attention is given to the three main IC product areas of digital, analogue and optoelectronics and their applications to both military and civil systems. The strongest market currently is for military analogue devices. The projected digital market in the 1990s, however, dominates the other IC product areas.

13.
朱涛  郭进伟  周欢  周烜  周傲英 《软件学报》2018,29(1):131-149
As applications grow in data volume and business workload, single-machine database systems find it increasingly difficult to meet real-world demands. Distributed databases can scale out dynamically according to business needs and have therefore gained growing favor among applications. In recent years, distributed database products have emerged in large numbers and are widely deployed in Internet applications. However, the system complexity of distributed databases is unprecedented. To keep a system usable, designers must make reasonable choices and trade-offs among multiple properties, which is why existing database products differ widely in form and show sharply contrasting strengths and weaknesses. To date, no one has carried out an in-depth analysis and organization of the design space and trade-offs of distributed databases. After studying a number of distributed database products in depth, the authors conclude that the design of a distributed database system can be essentially characterized by three properties: operation consistency, transaction consistency, and system availability. Although these three properties are not new, their meanings in the database context have not been fully clarified in the literature. This paper clarifies the three properties, uses them to summarize the landscape of typical database products, and surveys existing distributed database technology. In addition, the paper analyzes the interrelationships among the three properties in depth, in the hope of helping future developers make reasonable choices when designing distributed databases.

14.
The merger of three-dimensional graphics with the X Window System has recently been standardized by adapting PHIGS, the Programmer's Hierarchical Interactive Graphics System, to the X Window System with PEX, the PHIGS Extension to X. The standard programming library for PEX has been defined to be identical to PHIGS PLUS allowing PHIGS programs to port directly to the X environment. X uses a client server model to run applications as client processes which communicate with a server to perform graphical display and input. For improved performance, the PEX extension defines new server resources to reduce network traffic and to take advantage of graphics hardware existing on high-end servers. A side effect of this distributed model of computation is a distribution of PHIGS structures leading to a relaxation of the exclusive access which a PHIGS application usually maintains over its Central Structure Store. We exploit the distributed nature of a PEX/PHIGS client's Central Structure Store to provide access to it for other applications besides the originating PEX/PHIGS client. We refer to these other applications as tools since one of our primary goals is to create development tools for PHIGS programmers. Rather than concentrate on particular debugging tools, we focus upon easing the process of actually developing tools. Our goal is to supply a collection of routines which can be used by PHIGS programmers to create custom tools or other programs which require access to the graphics data of remote PHIGS processes. Our Tool Development Library provides the PHIGS programmer a small number of management routines which orchestrate the connection and mapping to the data of one or more remote PHIGS applications. Manipulation of remote PHIGS structures is accomplished just as easily as local operations and is performed using standard PHIGS calls. The remote application being accessed requires no changes to its source code. Obvious uses for the Tool Development Library are in the construction of PHIGS tools such as structure browsers, editors and debugging aids. Less obvious is the potential for developing collections of cooperating graphics applications which share graphics data.

15.
Abstract. Landsat was launched as a research satellite but it soon became obvious that it had operational potential. Throughout the establishment of the Landsat series, and the introduction of SPOT and other satellites, there has been a continual forecast of a large commercial market potential. In the past eighteen years this commercial boom has failed to materialize; companies have come and gone and a number have continued to be successful.

The customer base has not grown as expected. In the developed countries governments have invested in the establishment of space systems but seldom allocate funds for the application of data for national surveys which would, in turn, stimulate the commercial market. In the developing countries a market does exist but many have introduced, or are establishing, their own capabilities.

The contribution to the development of remote sensing by the value-added industry has therefore been small and mainly concerned with improving analysis and developing applications.

16.
Although the shared memory abstraction is gaining ground as a programming abstraction for parallel computing, the main platforms that support it, small-scale symmetric multiprocessors (SMPs) and hardware cache-coherent distributed shared memory systems (DSMs), seem to lie inherently at the extremes of the cost-performance spectrum for parallel systems. In this paper we examine whether shared virtual memory (SVM) clusters can bridge this gap by studying how application performance scales on a state-of-the-art shared virtual memory cluster. We find that: (i) the level of application restructuring needed is quite high compared to applications that perform well on a DSM system of the same scale, and larger problem sizes are needed for good performance; (ii) however, surprisingly, SVM performs quite well for a fairly wide range of applications, achieving at least half the parallel efficiency of a high-end DSM system at the same scale and often much more.

17.
Weber  M. Weisbrod  J. 《Software, IEEE》2003,20(1):16-24
In the automotive industry, especially in the high-end market, the complexity of electronic components is increasing rapidly. Currently, about a third of all development costs in high-end models go to electric and electronic system development, and the cost continues to grow. At the same time, many slightly different variations on components are each developed in a series of prototyping phases on different schedules. Consequently, the complexity of specification activities surpasses what conventional text-processing systems can support in terms of management and tracing functionality. Using real world projects as a foundation, the authors describe problems and solutions for requirements engineering in the automotive domain.

18.
This paper deals with system identification for control of linear parameter varying systems. In practical applications, it is often important to be able to identify small plant changes in an incremental manner without shutting down the system and/or disconnecting the controller; unfortunately, closed-loop system identification is more difficult than open-loop identification. In this paper we prove that the so-called Hansen scheme, a technique known from linear time-invariant systems theory for transforming closed-loop system identification problems into open-loop-like problems, can be extended to accommodate linear parameter varying systems as well. We investigate the identified subsystem's parameter dependency and observe that, under mild assumptions, the identified subsystem is affine in the parameter vector. Various identification methods are compared in direct and Hansen scheme setups in simulation studies, and the application of the Hansen scheme is seen to improve the identification performance.

19.
In recent decades there have been very significant developments in the theoretical understanding of Support Vector Machines (SVMs), as well as in algorithmic strategies for implementing them and in applications of the approach to practical problems. SVMs, introduced by Vapnik and others in the early 1990s, are machine learning systems that utilize a hypothesis space of linear functions in a high-dimensional feature space, trained with optimization algorithms that implement a learning bias derived from statistical learning theory. This paper reviews the state of the art and focuses on a wide range of applications of SVMs in the field of hydrology. To use SVM-aided hydrological models, whose use has expanded considerably in recent years, comprehensive knowledge of their theory and modelling approaches is necessary. Furthermore, this review provides a brief synopsis of the techniques of SVMs and other emerging ones (hybrid models), which have proven useful in the analysis of various hydrological parameters. Moreover, various examples of successful applications of SVMs for modelling different hydrological processes are also provided.
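As a purely illustrative, hypothetical example of SVM regression applied to a hydrological variable (here a synthetic rainfall-runoff relationship), the Python sketch below assumes scikit-learn is available; the data and parameters are invented and do not come from the review.

# Hypothetical illustration of SVM regression for a hydrological variable; the
# rainfall-runoff data are synthetic and chosen only to show the workflow.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
rainfall = rng.uniform(0, 100, size=(200, 1))             # mm of rainfall
runoff = (0.6 * rainfall.ravel()
          + 5 * np.sin(rainfall.ravel() / 10)
          + rng.normal(0, 2, size=200))                    # synthetic runoff response

model = SVR(kernel="rbf", C=10.0, epsilon=0.5)             # RBF-kernel support vector regression
model.fit(rainfall, runoff)
print(model.predict([[40.0]]))                             # predicted runoff for 40 mm of rain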

20.
Vector clocks (VC) are an inherent component of a rich class of distributed applications. In this paper, we first consider the problem of realistic implementation—more specifically, bounded-space and fault-tolerant—of applications that use vector clocks (VC). To this end, we generalize the notion of VC to resettable vector clocks (RVC), and provide a realistic implementation of RVC. Further, we identify an interface contract under which our RVC implementation can be substituted for VC in client applications, without affecting the client's correctness. This interface contract is designed for phase-based applications that provide certain communication guarantees. Based on such substitution and the use of the interface contract, we show how to transform the client so that it is itself realistically implemented. We illustrate our method in the context of Ricart–Agrawala's mutual exclusion program and Garg–Chase's predicate detection program.
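For context, a minimal Python sketch of a standard (unbounded) vector clock, the construct the paper generalizes to resettable vector clocks, is given below; the RVC protocol and its interface contract are not reproduced here.

# Hypothetical sketch of a standard vector clock: each process keeps one counter per
# process, increments its own entry on every event, and merges by element-wise max on
# receive. This is the classical VC, not the resettable variant from the paper.
class VectorClock:
    def __init__(self, process_id, n_processes):
        self.pid = process_id
        self.clock = [0] * n_processes

    def local_event(self):
        self.clock[self.pid] += 1

    def send(self):
        self.local_event()
        return list(self.clock)            # timestamp attached to the outgoing message

    def receive(self, msg_clock):
        self.clock = [max(a, b) for a, b in zip(self.clock, msg_clock)]
        self.local_event()

p0, p1 = VectorClock(0, 2), VectorClock(1, 2)
p1.receive(p0.send())
print(p0.clock, p1.clock)                  # [1, 0] [1, 1]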
