Similar literature
Found 20 similar documents (search time: 31 ms)
1.
We present novel architectures for the modified K-best algorithm and its very-large-scale integration implementation for spatially multiplexed wireless multiple-input multiple-output systems. The objective was to propose a simplified architecture based on the algorithm and to significantly improve its suitability for hardware implementation. Two different architecture designs were proposed: a distributed-arithmetic-based tree-search detector and a breadth-first search sphere detector. The implementations were performed to obtain a configurable architectural solution for different antenna configurations and constellations. The synthesis analysis shows that the proposed architectures achieve a throughput of over 500 Mbps with reduced hardware complexity compared to previously reported architectures.
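The breadth-first K-best search described above keeps, at every level of the detection tree, only the K partial symbol vectors with the smallest accumulated distance. A minimal software sketch of that pruning rule, assuming a real-valued system model with an upper-triangular channel matrix R after QR decomposition (the function name and interfaces are our own, not the paper's):

```python
def k_best_detect(R, y, constellation, K):
    """Breadth-first K-best tree search: detect symbols from the last
    antenna down to the first, keeping the K best partial paths per level."""
    n = len(y)
    # Each survivor: (accumulated_metric, partial_symbol_list), where the
    # partial list covers tree levels `level` .. n-1.
    survivors = [(0.0, [])]
    for level in range(n - 1, -1, -1):
        candidates = []
        for metric, path in survivors:
            for s in constellation:
                symbols = [s] + path
                # Partial distance uses only row `level` of upper-triangular R.
                est = sum(R[level][level + j] * symbols[j]
                          for j in range(len(symbols)))
                candidates.append((metric + (y[level] - est) ** 2, symbols))
        # Breadth-first pruning: keep only the K best partial paths.
        survivors = sorted(candidates, key=lambda c: c[0])[:K]
    return survivors[0][1]
```

For a 2x2 upper-triangular R and BPSK symbols, the search recovers the transmitted vector exactly when the received vector is noiseless.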

2.
Secure information exchange in resource-constrained devices can be accomplished efficiently through elliptic curve cryptography (ECC). Due to the high computational complexity of ECC arithmetic, a dedicated high-performance hardware architecture is essential to provide sufficient performance in the computation of elliptic curve scalar multiplication. This paper presents high-performance hardware support for elliptic curve cryptography over a prime field GF(p). It exploits the best available parallelism of elliptic curve points in projective representation. The proposed hardware for ECC is implemented on Xilinx Virtex-4, Virtex-5 and Virtex-6 FPGAs. A 256-bit scalar multiplication is completed in 2.01 ms, 2.62 ms and 3.91 ms on the Virtex-6, Virtex-5 and Virtex-4 FPGA platforms, respectively. The results show that the proposed design is 1.96 times faster, with an insignificant increase in area consumption, compared to other reported designs. Therefore, it is a good choice for use in many ECC-based schemes.
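The operation being accelerated is elliptic curve scalar multiplication kP over GF(p). A minimal affine-coordinate double-and-add sketch (the paper parallelises the projective-coordinate form; this reference version only illustrates the underlying arithmetic, and the function names are illustrative):

```python
def scalar_mult(k, P, a, p):
    """Left-to-right double-and-add computation of kP on the curve
    y^2 = x^3 + a*x + b over GF(p), in affine coordinates.
    The point at infinity is represented as None."""
    def add(P, Q):
        if P is None:
            return Q
        if Q is None:
            return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                              # P + (-P) = infinity
        if P == Q:
            lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p   # tangent slope
        else:
            lam = (y2 - y1) * pow(x2 - x1, -1, p) % p          # chord slope
        x3 = (lam * lam - x1 - x2) % p
        return (x3, (lam * (x1 - x3) - y1) % p)

    R = None
    for bit in bin(k)[2:]:
        R = add(R, R)          # double for every bit
        if bit == '1':
            R = add(R, P)      # add when the bit is set
    return R
```

On the small textbook curve y^2 = x^3 + 2x + 2 over GF(17) with P = (5, 1), `scalar_mult(2, (5, 1), 2, 17)` returns (6, 3), and multiplying by the group order 19 yields the point at infinity.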

3.
Current high-end microprocessors achieve high performance by adding more features and therefore increasing complexity. This paper makes the case for a Chip-Multiprocessor based on the Data-Driven Multithreading (DDM-CMP) execution model in order to overcome the limitations of current design trends. Data-Driven Multithreading (DDM) is a multithreading model that effectively hides communication delays and synchronization overheads. DDM-CMP avoids the complexity of other designs by combining simple commodity microprocessors with a small hardware overhead for thread scheduling and an interconnection network. Preliminary experimental results show that a DDM-CMP chip with the same hardware budget as a high-end commercial microprocessor, clocked at the same frequency, achieves a speedup of up to 18.5 at 78–81% of the power consumption of the commercial chip. Overall, the estimated results for the proposed DDM-CMP architecture show a significant benefit in terms of both speedup and power consumption, making it an attractive architecture for future processors.

4.
5.
The latest versions of the “International Technology Roadmap for Semiconductors” (ITRS) highlight that verification has changed from playing a relatively minor supporting role in the design process to becoming the dominant cost. This situation is the result of the exponential growth of the functional complexity of designs and the historical emphasis of CAD tools on other areas of the design process, such as automatic synthesis or place-and-route. The problem is even worse in embedded systems, which normally integrate functionally complex hardware and software parts. This work presents a new verification technique based on interval analysis that can handle embedded designs described at the behavioural level. The proposed technique is able to verify assertions that users insert in software and hardware tasks. It shows very promising results in systems that cannot be efficiently verified with other tools (e.g. data-dominated designs).
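The core idea of interval-analysis verification is to propagate value ranges instead of concrete values, so an assertion is proved if it holds for the entire output interval. A toy sketch under that assumption (the class and helper below are illustrative, not the paper's tool):

```python
class Interval:
    """Minimal interval arithmetic: track [lo, hi] bounds through
    additions and multiplications."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # The product's bounds come from the four corner products.
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))


def assertion_holds(result, bound):
    """The assertion `result <= bound` is proved only if the whole
    output interval lies below the bound."""
    return result.hi <= bound
```

For inputs x in [-2, 3] and y in [1, 4], the expression x*y + y evaluates to the interval [-7, 16], so the assertion "<= 20" is proved while "<= 10" cannot be.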

6.
Evolvable Hardware in Evolutionary Robotics
In recent decades, research on Evolutionary Robotics (ER) has developed rapidly. This field is primarily concerned with the use of evolutionary computing techniques in the design of intelligent and adaptive controllers for robots. Meanwhile, much attention has been paid to a new class of integrated circuits named Evolvable Hardware (EHW), which is capable of reconfiguring its architecture an unlimited number of times based on artificial evolution techniques. This paper surveys the application of evolvable hardware in evolutionary robotics. Evolvable hardware is an emerging research field concerning the development of evolvable robot controllers at the hardware level to adapt to dynamic changes in environments. The context of evolvable hardware and evolutionary robotics is reviewed, and a few representative experiments in the field of robotic hardware evolution are presented. As an alternative to conventional robotic controller designs, the potential and limitations of EHW-based robotic systems are discussed and summarized.

7.
8.
In today’s competitive market, the design of digital systems (hardware as well as software) faces tremendous challenges. In fact, notwithstanding ever-decreasing project budgets, time to market and product lifetimes, designers are faced with ever-increasing system complexity and customer-expected quality. This situation calls for better and better formal verification techniques at all steps of the design flow. This special issue is devoted to publishing revised versions of contributions first presented at the 12th Advanced Research Working Conference on Correct Hardware Design and Verification Methods (CHARME), held 21–24 October 2003 in L’Aquila, Italy. Authors of well-regarded papers from CHARME’03 were invited to submit to this special issue. All papers included here have been suitably extended and have undergone an independent round of reviewing.

9.
Transaction-level modeling is used in hardware design for describing designs at a higher level compared to the register-transfer level (RTL) (e.g. Cai and Gajski in CODES+ISSS ’03: proceedings of the 1st IEEE/ACM/IFIP international conference on Hardware/software codesign and system synthesis, pp. 19–24, 2003; Chen et al. in FMCAD ’07: proceedings of the formal methods in computer aided design, pp. 53–61, 2007; Mahajan et al. in MEMOCODE ’07: proceedings of the 5th IEEE/ACM international conference on formal methods and models for codesign, pp. 123–132, 2007; Swan in DAC ’06: proceedings of the 43rd annual conference on design automation, pp. 90–92, 2006). Each transaction represents a unit of work, which is also a useful unit for design verification. In such models, there are many properties of interest which involve interactions between multiple transactions. Examples of this are ordering relationships in sequential processing and hazard checking in pipelined circuits. Writing such properties on the RTL design requires significant expertise in understanding the higher-level computation being done in a given RTL design and possible instrumentation of the RTL to express the property of interest. This is a barrier to the easy use of such properties in RTL designs.
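One of the transaction-level properties mentioned above, an ordering relationship, can be checked directly on a transaction trace rather than on the RTL. A hypothetical sketch (the trace format and names are our own, not from the paper):

```python
def check_ordering(trace, before, after):
    """Check a multi-transaction ordering property on a trace: every
    `after` transaction must be preceded by a matching `before`
    transaction with the same id (e.g. a response never precedes
    its request).  `trace` is a list of (kind, id) tuples."""
    seen = set()
    for kind, tid in trace:
        if kind == before:
            seen.add(tid)
        elif kind == after and tid not in seen:
            return False      # ordering violation found
    return True
```

For example, a trace in which response 2 appears before request 2 violates the request-before-response property, while a properly ordered trace passes.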

10.
We study web caching with request reordering. The goal is to maintain a cache of web documents so that a sequence of requests can be served at low cost. To improve cache hit rates, a limited reordering of requests is allowed. Feder et al. (Proceedings of the 13th ACM–SIAM Symposium on Discrete Algorithms, pp. 104–105, 2002), who recently introduced this problem, considered caches of size 1, i.e. a cache can store one document. They presented an offline algorithm based on dynamic programming as well as online algorithms that achieve constant factor competitive ratios. For arbitrary cache sizes, Feder et al. (Theor. Comput. Sci. 324:201–218, 2004) gave online strategies that have nearly optimal competitive ratios in several cost models.
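As a toy illustration of the request-reordering model (not the competitive algorithms of Feder et al.): within a bounded lookahead window, requests that hit the cache may be served out of order before a miss is taken. The greedy policy and all names below are our own simplification:

```python
from collections import deque

def serve_with_reordering(requests, cache_size, window):
    """Greedy sketch of web caching with limited request reordering:
    among the first `window` pending requests, serve cache hits first;
    only when none hits, take the oldest request as a miss (evicting
    in LRU order).  Returns the number of cache misses."""
    cache = []                # least recently used at the front
    pending = deque(requests)
    misses = 0
    while pending:
        # Reordering step: find a hit inside the lookahead window.
        hit = next((d for d in list(pending)[:window] if d in cache), None)
        if hit is not None:
            pending.remove(hit)       # serve the hit out of order
            cache.remove(hit)
            cache.append(hit)         # refresh its LRU position
        else:
            doc = pending.popleft()   # unavoidable miss
            misses += 1
            if len(cache) == cache_size:
                cache.pop(0)          # evict least recently used
            cache.append(doc)
    return misses
```

With a cache of size 1, the sequence a, b, a costs 3 misses without reordering (window 1) but only 2 with a window of 2, since the second request for a can be served before b evicts it.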

11.
The more recent computer developments cause us to take a new look at human intelligence. The prevailing occidental view of human intelligence represents a very one-sided, logocentric approach, so it is becoming more urgent to look for a more complete view. In this way, specific strengths of so-called human information processing are becoming particularly evident in a new way. To provide a general substantiation for this view, some elements of a phenomenological model for a dialectical coherence of human expressions of life are briefly outlined. The starting point is the everyday experience of constantly being confronted with contradictory situations. A model of polar, or dualistic, dialectic is proposed, which attempts to systematically establish the contradictions and contingencies of human life in theoretical structures. It is assumed that formal logic, strictly speaking, fails to work when applied to real situations of human interaction. Instead, definite negations are supposed to be involved. In this way, four polarities, pairs of concepts which are mutually dependent on each other and negate each other definitely, are presented: process and structure, the individual and the general (societal), acting and imagining, subject-subject relation and subject-object relation (love and work). The latter gives rise to the dialectic of emotion and cognition, which plays a decisive role in a comparison of human and computer “intelligence”. To cope with the specific human strengths, which are expressed through these polarities, self-learning computers would have to be able to act and to love like human beings and to grow up in a society like people. Furthermore, the learning process would have to implement the process-structure dialectic with its aspects of finality. In spite of the recent breathtaking achievements of computers, one cannot imagine how computers would ever achieve this.
Computers, however, are thought of as qualitatively different from machines by virtue of their complexity. But human abilities still represent a new level of complexity: the level of dialectic.

12.
A progressive evolution model is proposed in which evolution takes place stepwise to match environmental changes. This model was designed to accelerate evolution. Environmental complexity is defined, and the problem environment progresses in environmental complexity gradually from easy to difficult. A verification system for the model is constructed on a hardware evolution system called AdAM (adaptive architecture methodology) in which each individual circuit takes parallel input sequences and operates on this input. A measure that is suitable for such parallel simultaneous operations schemes is designed to express environmental complexity. Simulations using an artificial ant problem (a modified John Muir trail) show that in the progressive evolution model, circuits can easily evolve complex behaviors. This work was presented, in part, at the Second International Symposium on Artificial Life and Robotics, Oita, Japan, February 18–20, 1997
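The progressive evolution model above can be caricatured in a few lines: evolve a population against a sequence of environments ordered from easy to difficult, rather than against the hardest environment from the start. This sketch uses bit-string matching as a stand-in "environment"; every name and parameter is illustrative and unrelated to the AdAM system:

```python
import random

def progressive_evolve(targets, pop_size=30, generations=200, seed=3):
    """Evolve bit strings against targets of increasing length
    (easy -> difficult), using elitist truncation selection and
    single-bit mutation."""
    rng = random.Random(seed)
    n = len(targets[-1])
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]

    def fitness(ind, target):
        # Number of positions matching the (possibly shorter) target.
        return sum(a == b for a, b in zip(ind, target))

    for target in targets:                 # environments, easy to difficult
        for _ in range(generations):
            pop.sort(key=lambda ind: fitness(ind, target), reverse=True)
            parents = pop[:pop_size // 2]  # elitism: keep the best half
            children = []
            for p in parents:
                c = p[:]
                c[rng.randrange(n)] ^= 1   # single-bit mutation
                children.append(c)
            pop = parents + children
    return max(pop, key=lambda ind: fitness(ind, targets[-1]))
```

Run against targets of four and then eight ones, the population first masters the short prefix and then the full string.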

13.
Thomas Jefferson dedicated his later years to establishing the University of Virginia, believing that the availability of a public liberal education was essential to national prosperity and individual happiness. His design for the University stands as one of his greatest accomplishments and has been called “the proudest achievement of American architecture.” Taking Jefferson’s design drawings as a basis for study, this paper explores the possibility that he incorporated incommensurable geometric proportions in his designs for the Rotunda. Without actual drawings to illustrate specific geometric constructions, it cannot be said definitively that Jefferson utilized such proportions. But a comparative analysis between Jefferson’s plans and Palladio’s renderings of the Pantheon (Jefferson’s primary design source) suggests that both designs developed from similar geometric techniques.

14.
In recent years, developments in the microelectronics industry have focused on semiconductors and semiconductor processes. However, microcircuit assembly technologies have lagged behind chip development. This has spurred research in interconnection and packaging, creating many new technologies and enhancing integration. These new microelectronic technologies are enabling microsystems, resulting in products ranging from portable workstations to advanced automotive electronics. This evolution of technology has also created the need to re-examine how we achieve a reliable system. Clearly, in highly competitive marketplaces, reliability is a key element in achieving successful products. To achieve a reliable product in a cost-effective manner, “upstream problem solving” must be employed, which focuses on the root cause of failure. This paper provides an overview of the reliability assessment process needed to achieve effective microsystem development. A case study of reliability in a complex multi-chip module is presented, which includes an assessment of the stochastic nature of via fatigue by applying Monte Carlo simulations. Received: 30 October 1995 / Accepted: 20 May 1996
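The case study assesses the stochastic nature of via fatigue with Monte Carlo simulation. The general shape of such an assessment, sampling scatter in cycles-to-failure and counting survivors, can be sketched as follows; the distribution and all parameters are entirely illustrative, not the paper's data:

```python
import random

def via_fatigue_reliability(n_trials=10_000, cycles_required=5_000, seed=0):
    """Monte Carlo sketch of via-fatigue reliability: sample
    multiplicative (lognormal) scatter in cycles-to-failure and
    estimate the probability that a via survives the required
    number of thermal cycles."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n_trials):
        # Median life of 8000 cycles with lognormal scatter (illustrative).
        cycles_to_failure = 8_000 * rng.lognormvariate(0.0, 0.4)
        if cycles_to_failure >= cycles_required:
            survived += 1
    return survived / n_trials
```

The returned fraction is the estimated reliability; tightening the scatter or lowering the required cycle count raises it, which is the kind of trade-off the upstream assessment explores.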

15.
The entertainment industry, primarily the video games industry, continues to dictate the development and performance requirements of graphics hardware and computer graphics algorithms. However, despite the enormous progress in the last few years, it is still not possible to achieve some of the industry’s demands, in particular high-fidelity rendering of complex scenes in real time on a single desktop machine. A realisation that sound/music and other senses are important to entertainment led to an investigation of alternative methods, such as cross-modal interaction, in order to try to achieve the goal of “realism in real-time”. In this paper we investigate the cross-modal interaction between vision and audition for reducing the amount of computation required to compute visuals by introducing movement-related sound effects. Additionally, we look at the effect of camera movement speed on temporal visual perception. Our results indicate that slow animations are perceived as smoother than fast animations. Furthermore, introducing the sound effect of footsteps to walking animations further increased the perceived animation smoothness. This has the consequence that, under certain conditions, the number of frames that need to be rendered each second can be reduced, saving valuable computation time, without the viewer being aware of this reduction. The results presented are another step towards a full understanding of auditory-visual cross-modal interaction and its importance in helping to achieve “realism in real-time”.

16.
Network, system and service management has evolved into an important scientific discipline over the last 20 years. While management problem solving is expected to continue ad infinitum, one would have expected that, after 20 years of research and standardization, an agreement would have been reached regarding a common management framework and protocol. But despite relevant progress and various available solutions, there seems to exist a permanent quest for the all-encompassing next-generation management technology. This paper looks at the evolution of management approaches, frameworks and protocols over the last 20 years, proposes a relevant taxonomy, presents the salient features of the representative technologies and discusses relevant issues. The purpose of this paper is to document this evolution historically, highlight important design choices and explain the hows and whys behind the various frameworks and technologies. The paper concludes with a summary and future outlook.

17.
To achieve fully automatic control of an optical fiber connector polishing machine, a PLC-controlled stepper motor feeding system was designed. The hardware structure and software implementation of the system are described, and the stepper motor drive section is analyzed in detail. Practical application shows that the system is easy to operate and runs stably, and it can be applied to the design of similar products.

18.
Quantum Property Testing of Group Solvability
Testing efficiently whether a finite set Γ with a binary operation ⋅ over it, given as an oracle, is a group is a well-known open problem in the field of property testing. Recently, Friedl, Ivanyos and Santha made a significant step towards solving this problem by showing that it is possible to test efficiently whether the input (Γ,⋅) is an abelian group or is far, with respect to some distance, from any abelian group. In this paper, we go a step further and construct an efficient quantum algorithm that tests whether (Γ,⋅) is a solvable group, or is far from any solvable group. More precisely, the number of queries used by our algorithm is polylogarithmic in the size of the set Γ.
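In the property-testing query model, the algorithm only probes the operation oracle on sampled elements. As a classical (non-quantum) caricature of the much simpler abelian test, one can sample random pairs and check commutativity; the function and parameters are illustrative, not the paper's algorithm:

```python
import random

def looks_abelian(elements, op, trials=200, seed=1):
    """Property-testing sketch: query the operation oracle `op` on
    random pairs and reject as soon as a non-commuting pair is found.
    Accepting only means no witness was sampled."""
    rng = random.Random(seed)
    for _ in range(trials):
        a, b = rng.choice(elements), rng.choice(elements)
        if op(a, b) != op(b, a):
            return False      # witness of non-commutativity
    return True
```

Addition modulo 5 passes, while string concatenation (which is far from commutative on distinct elements) is rejected with high probability.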

19.
Fractal image compression (FIC) is a very popular coding technique used in image/video applications due to its simplicity and superior performance. The major drawback of FIC is that it is a very time-consuming algorithm, especially when a full search is attempted. Hence, it is very challenging to achieve real-time operation if this algorithm is implemented on general-purpose processors. In this paper, a new parallel architecture with a bit-width reduction scheme is implemented. The hardware is synthesized on an Altera Cyclone II FPGA and its architecture is optimized at the circuit level in order to achieve real-time operation. The performance of the proposed architecture is evaluated in terms of runtime, peak signal-to-noise ratio (PSNR) and compression efficiency. On average, a speedup of 3 was attained through bit-width reduction while the PSNR was maintained at an acceptable level. Empirical results demonstrate that this firmware is competitive with other existing hardware, with PSNR averaging 29.9 dB, 5.82% compression efficiency and a runtime equivalent to a video speed of 101 frames per second (fps).
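The full search that makes FIC so time-consuming pairs every range block with every candidate domain block under a least-squares affine map. A 1-D software sketch of that matching step (the hardware parallelises and bit-width-reduces exactly this kind of loop; the names are our own):

```python
def fic_full_search(ranges, domains):
    """Full-search fractal encoding sketch on 1-D blocks: for each
    range block r, find the domain block d and affine map r ~ s*d + o
    minimising the squared error.  Returns one (domain index, scale,
    offset) triple per range block."""
    codebook = []
    for r in ranges:
        best = None
        for i, d in enumerate(domains):
            n = len(d)
            # Least-squares contrast s and brightness o for r ~ s*d + o.
            sd, sr = sum(d), sum(r)
            sdd = sum(x * x for x in d)
            sdr = sum(x * y for x, y in zip(d, r))
            den = n * sdd - sd * sd
            s = (n * sdr - sd * sr) / den if den else 0.0
            o = (sr - s * sd) / n
            err = sum((y - (s * x + o)) ** 2 for x, y in zip(d, r))
            if best is None or err < best[0]:
                best = (err, i, s, o)
        codebook.append(best[1:])
    return codebook
```

For a range block that is exactly twice a domain block, the search recovers that domain with scale 2 and offset 0, at zero error.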

20.
Component-based software development is a promising approach for controlling the complexity and quality of software systems. Nevertheless, recent advances in quality control techniques do not seem to keep up with the growing complexity of embedded software; embedded systems often consist of dozens to hundreds of software/hardware components that exhibit complex interaction behavior. Unanticipated quality defects in a component can be a major source of system failure. To address this issue, this paper suggests a design verification approach integrated into the model-driven, component-based development methodology Marmot. The notion of abstract components—the basic building blocks of Marmot—helps to lift the level of abstraction, facilitates high-level reuse, and reduces verification complexity by localizing verification problems between abstract components before refinement and after refinement. This enables the identification of unanticipated design errors in the early stages of development. This work introduces the Marmot methodology, presents a design verification approach in Marmot, and demonstrates its application on the development of a μ-controller-based abstraction of a car mirror control system. An application on TinyOS shows that the approach helps to reuse models as well as their verification results in the development process.
