Similar Articles
20 similar articles found.
1.
Avizienis, A. Computer, 1971, 4(1): 5-8
The discipline of fault-tolerant computing would be unnecessary if computer hardware and programs always behaved in perfect agreement with the designer's or programmer's intentions. However, imperfections of computer systems and program "bugs" have been with us since the first computer was built, and fault tolerance will remain an important objective as long as computers are in demand.

2.
This article describes a framework that combines simple hardware traditionally used in manufacturing with sensor-based planning and design algorithms from robotics. For repetitive assembly, the authors argue that this combination can reduce start-up and maintenance costs, increase throughput, and greatly reduce the set-up and changeover times for new products. The proposed hardware bears a close resemblance to existing "hard" automation; what is new is the application of computational methods for robust design and control of these systems, and more extensive use of (simple) sensors. Clearly this enhances the capabilities of the hardware. A less obvious benefit is that software capability is also enhanced: algorithms for fine motion, grasp planning, and sensing that would be intractable on a general-purpose robot run in real time when applied to simple hardware. To describe this approach the authors chose the acronym RISC (Reduced Intricacy in Sensing and Control), by analogy with computer architecture. Analogously, the authors propose to use simple hardware elements that are coordinated by software to perform complex tasks.

3.
A decade of hardware/software codesign
Wolf, W. Computer, 2003, 36(4): 38-43
The term hardware/software codesign, coined about 10 years ago, describes a confluence of problems in integrated circuit design. By the 1990s, it became clear that microprocessor-based systems would be an important design discipline for IC designers as well. Large 16- and 32-bit microprocessors had already been used in board-level designs, and Moore's law ensured that chips would soon be large enough to include both a CPU and other subsystems. Multiple disciplines inform hardware/software codesign. Computer architecture tells us about the performance and energy consumption of single CPUs and multiprocessors. Real-time system theory helps analyze the deadline-driven performance of embedded systems. Computer-aided design assists hardware cost evaluation and design space exploration.

4.
The integration of computers within the manufacturing environment has long been a method of enhancing productivity. Their use in many facets of a manufacturing enterprise has given industries the ability to deliver low-cost, high-quality competitive products. As computer technology advances, we find more and more uses for new hardware and software in the enterprise. Over a period of time, we have seen many “islands” of computer integration. Distinct, fully functional hardware and software installations are a common base for many industries. Unfortunately, these islands are just that: separate, distinct, and functional, but not integrated. The lack of integration within these information systems makes it difficult for end users to see the same manufacturing data. We are finding the need for a “single image” real-time information system to provide the enterprise with the data that is required to plan, justify, design, manufacture and deliver products to the customer. Unfortunately, many industries have a large installed base of hardware and software. Replacement of current systems is not a cost-justified business decision. An alternative would be the migration of current systems to a more integrated solution. The migration to a computer-integrated manufacturing (CIM)-based architecture would provide that single image real-time information system.

The effort and skills necessary for the implementation of a CIM-based architecture would require active participation from two key organizations: manufacturing and information systems (I/S). The manufacturing engineers, process engineers, and other manufacturing resources would be the cornerstone for obtaining requirements. The ability to effectively use I/S is a critical success factor in the implementation of CIM. I/S has to be viewed as an equal partner, not just as a service organization. Manufacturing management needs to understand the justification process of integrating computer systems and the “real” cost of integration versus the cost of non-integrated manufacturing systems. The active participation of both organizations during all phases of CIM implementation will result in an effective and useful integrated information system.


5.
A novel approach to hardware fault tolerance is demonstrated that takes inspiration from the human immune system as a method of fault detection. The human immune system is a remarkable system of interacting cells and organs that protects the body from invasion and maintains reliable operation even in the presence of invading bacteria or viruses. This paper seeks to address the field of electronic hardware fault tolerance from an immunological perspective with the aim of showing how novel methods based upon the operation of the immune system can both complement and create new approaches to the development of fault detection mechanisms for reliable hardware systems. In particular, it is shown that by use of partial matching, as prevalent in biological systems, high fault coverage can be achieved with the added advantage of reducing memory requirements. The development of a generic finite-state-machine immunization procedure is discussed that allows any system that can be represented in such a manner to be "immunized" against the occurrence of faulty operation. This is demonstrated by the creation of an immunized decade counter that can detect the presence of faults in real time.
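As a rough illustration of the partial-matching idea mentioned in this abstract, the sketch below applies an r-contiguous matching rule (a partial-matching scheme commonly used in artificial-immune-system work) to encoded state transitions. The encoding, function names, and the rule itself are illustrative assumptions, not the paper's actual immunization procedure.

```python
# Illustrative sketch only: a generic r-contiguous partial matcher, not the
# paper's exact immunization procedure. Strings encode FSM transitions, e.g.
# "current_state|input|next_state" packed into a fixed-width bit string.

def r_contiguous_match(a: str, b: str, r: int) -> bool:
    """True if strings a and b agree in at least r contiguous positions."""
    assert len(a) == len(b)
    run = 0
    for x, y in zip(a, b):
        run = run + 1 if x == y else 0
        if run >= r:
            return True
    return False

def is_faulty(observed: str, valid_patterns: list[str], r: int) -> bool:
    """Flag a transition as faulty if it partially matches no stored pattern."""
    return not any(r_contiguous_match(observed, p, r) for p in valid_patterns)

# Hypothetical 12-bit encodings of valid decade-counter transitions.
valid = ["000100100011", "001000110100"]
print(is_faulty("000100100011", valid, r=8))   # False: matches a stored pattern
print(is_faulty("111011011000", valid, r=8))   # True: no partial match
```

Because a single stored pattern can partially match many similar transitions, fewer patterns need to be stored, which is one way the memory saving claimed in the abstract can arise.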

6.
The body of medical knowledge is so vast, and finding a fact within it is often so difficult and frustrating, that various systems have been proposed to help. Reviewing the various approaches reveals some of the difficulties of the retrieval task and why it has not been easy to devise a general approach. The imprecision of medical terminology is a major stumbling block. The development of specialized systems to serve particular medical interests has hampered the wider applicability of many such systems. This article offers eight characteristics for an ideal medical knowledge system. The key is for each medical term to be coded and independently defined, with synonymous terms to be equated through term codes. Any coded term could become the nucleus of a cluster of related, coded terms. The knowledge system would encompass all of medicine, would be in the public domain, and would be independent of hardware and software. While such a system is utopian, consideration of its characteristics can further the development of better interim systems.

7.
As the number of stages in chip design and production grows, the likelihood that a hardware Trojan is present in a chip keeps increasing, so chips with high security requirements must be checked for hardware Trojans. Reverse-engineering the chip and comparing it for consistency against the original GDSII file is one of the main methods for detecting whether a hardware Trojan has been implanted. Generating, from the GDSII file, images that correspond one-to-one with the photographs of the reverse-engineered chip (collectively referred to below as GDSII images) is an important step in Trojan analysis; for this purpose, a two-point positioning algorithm is proposed to segment the GDSII images. In addition, to deal with the positional offsets of image content introduced during chip photography and stitching, a correction algorithm based on deviation statistics is proposed. Practical engineering use has shown that the algorithm corrects the deviation between the GDSII images and the chip photographs well and eliminates the effects of the positional offsets.

8.
Soft errors (or transient faults) are temporary faults that arise in a circuit due to a variety of internal noise and external sources such as cosmic particle hits. Though soft errors still occur infrequently, they are rapidly becoming a major impediment to processor reliability. This is due primarily to processor scaling characteristics. In the past, systems designed to tolerate such faults utilized costly customized solutions, entailing the use of replicated hardware components to detect and recover from microprocessor faults. As the feature size keeps shrinking and with the proliferation of multiprocessors on a die in all segments of computer-based systems, the capability to detect and recover from faults is also desired for commodity hardware. For such systems, however, performance and power constitute the main drivers, so the traditional solutions prove inadequate and new approaches are required. We introduce two independent and complementary microarchitecture-level techniques: Double Execution and Double Decoding. Both exploit the typically low average processor resource utilization of modern processors to enhance processor reliability. Double Execution protects the Out-Of-Order part of the CPU by executing each instruction twice. Double Decoding uses a second, low-performance, low-power instruction decoder to detect soft errors in the decoder logic. These simple-to-implement techniques are shown to improve the processor's reliability with relatively low performance, power, and hardware overheads. Finally, the resulting “excessive” reliability can even be traded back for performance by increasing clock rate and/or reducing voltage, thereby improving upon single-execution approaches.
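Double Execution and Double Decoding are microarchitectural mechanisms, but the underlying idea, doing the same work twice and comparing the outcomes to expose transient faults, can be sketched at the software level. The following is a conceptual analogy only (all names are assumptions), not the paper's hardware design.

```python
# Conceptual, software-level analogy of Double Execution: run the same pure
# computation twice and treat a mismatch as a detected transient fault.

class TransientFaultDetected(Exception):
    pass

def double_execute(fn, *args):
    """Execute fn twice on identical inputs and compare the results.

    This only detects faults that perturb one of the two executions; a
    deterministic software bug produces the same wrong answer twice and
    passes unnoticed, mirroring the hardware technique's focus on soft
    errors rather than design errors.
    """
    first = fn(*args)
    second = fn(*args)
    if first != second:
        raise TransientFaultDetected(f"mismatch: {first!r} != {second!r}")
    return first

result = double_execute(sum, range(1000))   # 499500 when both runs agree
print(result)
```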

9.
Computer, 2006, 39(11): 12-14
Virtualization is a hot topic in the technology world. The technology enables a single computer to run multiple operating systems simultaneously. It lets companies use a single server for multiple tasks that would normally have to run on multiple servers, each running a different OS. Now, vendors are releasing products based on two lightweight virtualization approaches that also let a single operating system run several instances of the same OS or different OSs. However, today's new virtualization approaches do not try to emulate an entire hardware environment, as traditional virtualization does. They thus require fewer CPU and memory resources, which is why the technology is called "lightweight" virtualization. However, lightweight virtualization still faces several barriers to widespread adoption.

10.
High-end embedded systems, like their general-purpose counterparts, are turning to many-core cluster-based shared-memory architectures that provide a shared memory abstraction subject to non-uniform memory access costs. In order to keep the cores and memory hierarchy simple, many-core embedded systems tend to employ simple, scratchpad-like memories, rather than hardware-managed caches that require some form of cache coherence management. These “coherence-free” systems still require some means to synchronize memory accesses and guarantee memory consistency. Conventional lock-based approaches may be employed to accomplish the synchronization, but may lead to both usability and performance issues. Instead, speculative synchronization, such as hardware transactional memory, may be a more attractive approach. However, hardware speculative techniques traditionally rely on the underlying cache-coherence protocol to synchronize memory accesses among the cores. The lack of a cache-coherence protocol adds new challenges in the design of hardware speculative support. In this article, we present a new scheme for hardware transactional memory (HTM) support within a cluster-based, many-core embedded system that lacks an underlying cache-coherence protocol. We propose two alternative data versioning implementations for the HTM support, Full-Mirroring and Distributed Logging, and we conduct a performance comparison between them. To the best of our knowledge, these are the first designs for speculative synchronization for this type of architecture. Through a set of benchmark experiments using our simulation platform, we show that our designs can achieve significant performance improvements over traditional lock-based schemes.
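The abstract names two data-versioning schemes, Full-Mirroring and Distributed Logging, without detail. As a rough software illustration of log-based versioning in general (not the authors' hardware implementation; the class and memory layout below are assumptions), the sketch records old values in an undo log so a transaction can keep its in-place writes on commit or roll them back on abort.

```python
# Minimal sketch of undo-log data versioning for a transaction, using a plain
# dict as "memory". Not the paper's hardware scheme; conflict detection and
# multi-core arbitration are omitted entirely.

class Transaction:
    def __init__(self, memory: dict):
        self.memory = memory
        self.undo_log = []          # (address, old_value) pairs, oldest first

    def write(self, addr, value):
        self.undo_log.append((addr, self.memory.get(addr)))  # save old value
        self.memory[addr] = value                             # write in place

    def commit(self):
        self.undo_log.clear()       # keep the new values, drop the log

    def abort(self):
        for addr, old in reversed(self.undo_log):  # undo newest-first
            if old is None:
                self.memory.pop(addr, None)
            else:
                self.memory[addr] = old
        self.undo_log.clear()

mem = {"x": 1}
tx = Transaction(mem)
tx.write("x", 42)
tx.abort()
print(mem)   # {'x': 1}: the speculative write was rolled back
```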

11.
Composition of an integrated development environment for embedded systems
李林功, 李继凯, 谷金宏. 《计算机工程》 (Computer Engineering), 2001, 27(5): 146-147, 155
Traditional embedded-system design methods require the system design to be revised repeatedly, making them slow and inefficient. Hardware/software co-design can exploit a system's potential to the fullest and make effective use of existing resources, giving faster development and higher quality. The article describes the composition of an integrated development environment built to support co-design, its current status, and its development trends.

12.
Multicore Resource Management
Current resource management mechanisms and policies are inadequate for future multicore systems. Instead, a hardware/software interface based on the virtual private machine (VPM) abstraction would allow software policies to explicitly manage microarchitecture resources. VPM policies, implemented primarily in software, translate application and system objectives into VPM resource assignments. Then, VPM mechanisms securely multiplex, arbitrate, or distribute hardware resources to satisfy the VPM assignments.

13.
The next generation of complex software systems will be highly distributed, component-based, and service-oriented. They will need to operate in unattended mode, possibly in hostile environments, and they'll be composed of many "replaceable" components discoverable at runtime. Moreover, they will have to run on a multitude of unknown and heterogeneous hardware and network platforms. Three major requirements for such systems are performance, availability, and security. Performance requirements imply that these systems must be adaptable and self-configurable to changes in workload intensity. Availability and security requirements suggest that these systems also must adapt and reconfigure themselves to withstand attacks and failures. This paper focuses specifically on QoS requirements for performance and describes the framework for QoS-aware distributed applications.

14.
Walsh, L.; Tenold, S. Computer, 1977, 10(10): 54-63
Rapid advances in the semiconductor industry have prompted the increased use of computer technology to produce better and "smarter" military systems. Rugged computer hardware, able to pass mil-spec testing, is needed and used to solve problems in a wide range of application areas including space exploration, research in severe environments, military communications systems, navigation systems, and weapons guidance systems.

15.
McKim, J.C., Jr. Computer, 1996, 29(3): 109-111
"Why can't software be more like hardware?" has been the software engineer's lament for nearly as long as there have been large software systems. In particular, why isn't there a software components industry to rival the existing hardware components industry? Hardware components come with the following attributes: an interface that hides detail that would only confuse or at least distract me; an unambiguous interface specification written in a language I can understand (in the case of the integrated circuit, this may be a fairly complex language, but it's one I expect to learn if I'm going to work with that hardware); a guarantee: the component has been tested and/or validated against its specification. All three items, especially the last one, are notably lacking for software components. Indeed, software tends to come with an antiguarantee, otherwise known as a disclaimer. All of the above points rely on a rigorous specification of the hardware component's interface. In a nutshell, programming by contract is about providing just such specifications for software components (that is, classes), and it provides the best hope of a basis for a true software component industry. The discussion focuses on object-oriented software.
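Programming by contract, as described above, attaches a checkable specification to each operation: preconditions the caller must satisfy and postconditions the component then guarantees. Below is a minimal sketch using plain runtime assertions, not McKim's notation or a full contract language such as Eiffel; the class and clauses are illustrative assumptions.

```python
# Minimal design-by-contract illustration using runtime assertions. Real
# contract languages make these clauses part of the published interface;
# here they are ordinary checks inside the methods.

class BoundedStack:
    """A stack holding at most `capacity` items."""

    def __init__(self, capacity: int):
        assert capacity > 0, "precondition: capacity must be positive"
        self.capacity = capacity
        self._items = []

    def push(self, item):
        assert len(self._items) < self.capacity, "precondition: stack not full"
        old_size = len(self._items)
        self._items.append(item)
        # Postconditions: the guarantee the component makes to its client.
        assert len(self._items) == old_size + 1, "postcondition: size grew by one"
        assert self._items[-1] is item, "postcondition: item is on top"

    def pop(self):
        assert self._items, "precondition: stack not empty"
        return self._items.pop()

s = BoundedStack(2)
s.push("a")
print(s.pop())   # "a"; violating a precondition raises AssertionError
```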

16.

Context

Mutation testing is a fault-injection-based technique to help testers generate test cases for detecting specific and predetermined types of faults.

Objective

Before mutation testing can be effectively applied to embedded systems, traditional mutation testing needs to be modified. To inject a fault into an embedded system without causing any system failure or hardware damage is a challenging task as it requires some knowledge of the underlying layers such as the kernel and the corresponding hardware.

Method

We propose a set of mutation operators for embedded systems using kernel-based software and hardware fault simulation. These operators are designed for software developers so that they can use the mutation technique to test the entire system after the software is integrated with the kernel and hardware devices.

Results

A case study on a programmable logic controller for a digital reactor protection system in a nuclear power plant is conducted. Our results suggest that the proposed mutation operators are useful for fault injection, as evidenced by the fact that faults not injected by us were discovered in the subject software as a result of the case study.

Conclusion

We conclude that our mutation operators are useful for integration testing of an embedded system.
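The abstract does not list the proposed operators themselves. As a rough illustration of the general idea of mutating the hardware-facing layer rather than the application code (the device, function names, and fault model below are all assumptions, not the paper's operator set), a "stuck-at" mutation of a sensor-read routine might look like this:

```python
# Illustrative mutation-style fault simulation, not the paper's operator set:
# replace a (hypothetical) hardware-read routine with a mutated version that
# simulates a stuck-at fault, then check that the application-level test
# detects the simulated fault.

def read_temperature_sensor() -> float:
    """Stand-in for a real device read; here it just returns a fixed value."""
    return 25.0

def stuck_at_mutant(read_fn, stuck_value: float):
    """Mutation operator: the original read is ignored and the mutated
    routine always reports the stuck value."""
    def mutated_read() -> float:
        return stuck_value
    return mutated_read

def overheat_alarm(read_fn) -> bool:
    """Application logic under test: raise an alarm above 80 degrees."""
    return read_fn() > 80.0

# A test that expects the alarm to trip when the sensor is stuck high.
mutant = stuck_at_mutant(read_temperature_sensor, stuck_value=120.0)
assert overheat_alarm(mutant), "simulated stuck-at fault was not detected"
print("the alarm logic reacts to the simulated stuck-at fault")
```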

17.
Modern hardware is abundantly parallel and increasingly heterogeneous. The numerous processing cores have non-uniform access latencies to the main memory and processor caches, which causes variability in the communication costs. Unfortunately, database systems mostly assume that all processing cores are the same and that microarchitecture differences are not significant enough to appear in critical database execution paths. As we demonstrate in this paper, however, non-uniform core topology does appear in the critical path, and conventional database architectures achieve suboptimal and, even worse, unpredictable performance. We perform a detailed performance analysis of OLTP deployments in servers with multiple cores per CPU (multicore) and multiple CPUs per server (multisocket). We compare different database deployment strategies where we vary the number and size of independent database instances running on a single server, from a single shared-everything instance to fine-grained shared-nothing configurations. We quantify the impact of non-uniform hardware on various deployments by (a) examining how efficiently each deployment uses the available hardware resources and (b) measuring the impact of distributed transactions and skewed requests on different workloads. We show that no strategy is optimal for all cases and that the best choice depends on the combination of hardware topology and workload characteristics. Finally, we argue that transaction processing systems must be aware of the hardware topology in order to achieve predictably high performance.

18.
This paper describes a microprocessor interface based on a Texas Instruments TMS320 signal-processing chip, which permits information to be transferred between a manufacturer-supplied parallel interface on a GEC 4000 series computer and a Cambridge ring. The very high speed of the TMS microprocessor permitted features which would normally require hardware logic to be implemented in software. The interface is full duplex, with independence of the receive and transmit functions achieved by a simple process-switch mechanism. The design is generally applicable; only the parallel port wiring and the software status testing would need to be modified to use this interface on other computing systems.

19.
Research on dependency-graph-based hardware transactional memory
Transactional memory simplifies access control over shared resources in parallel programs and is one of the current research hot spots. At present, most hardware-based transactional memory systems adopt a concurrency-control protocol based on conflict detection and resolution: when a conflict between two transactions is detected, one of the two is aborted. A closer analysis of inter-transaction "conflicts", however, shows that some conflicts do not necessarily require a transaction to roll back; these are called "weak conflicts". Dependency-graph-based hardware transactional memory can avoid the unnecessary rollbacks caused by weak conflicts. Simulation experiments show that a dependency-graph-based transactional memory system has a clear performance advantage over systems based on conflict resolution.
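One way to picture the weak-conflict idea described above: conflicting accesses add ordering edges between transactions in a dependency graph, and only an edge that would close a cycle (an ordering that cannot be serialized) forces a rollback. The sketch below is a simplified software illustration under that assumption, not the paper's hardware design; the class and transaction names are invented for the example.

```python
# Simplified illustration: record ordering edges between transactions for each
# conflicting access and abort only when the dependency graph becomes cyclic.

from collections import defaultdict

class DependencyGraph:
    def __init__(self):
        self.edges = defaultdict(set)   # tx -> set of txs that must commit after it

    def add_dependency(self, before, after) -> bool:
        """Record that `before` must commit before `after`.

        Returns True if the edge is admissible (a "weak" conflict that only
        constrains commit order), False if it would create a cycle, in which
        case one of the transactions has to roll back.
        """
        if self._reachable(after, before):     # adding it would close a cycle
            return False
        self.edges[before].add(after)
        return True

    def _reachable(self, src, dst) -> bool:
        seen, stack = set(), [src]
        while stack:
            node = stack.pop()
            if node == dst:
                return True
            if node not in seen:
                seen.add(node)
                stack.extend(self.edges[node])
        return False

g = DependencyGraph()
print(g.add_dependency("T1", "T2"))   # True: weak conflict, just order T1 before T2
print(g.add_dependency("T2", "T1"))   # False: cycle, one transaction must abort
```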

20.
Firewalls are the primary, foundational facility in network security, and Cisco's PIX firewall is a typical enterprise-grade hardware firewall product. In higher vocational education, introducing the configuration and maintenance of firewalls into computer-networking programs meets the current demand for such skills. Drawing on the author's teaching practice, this paper discusses teaching content, teaching methods, and related issues, and offers ideas and approaches for teaching firewall technology courses in related higher vocational programs.
