Similar Literature
20 similar documents found (search time: 203 ms)
1.
《电子技术应用》2016,(1):15-18
A functional verification platform for testing a DVB-S coding and modulation system is proposed. The main structure of the platform is built with the Universal Verification Methodology (UVM), and an external Matlab instance serves as the reference model for the complex digital signal processing. The main structure of the verification platform and the design of its components are introduced, with a detailed description of how UVM connects to Matlab through the Direct Programming Interface (DPI), using C++ as the bridge. Simulation comparisons show that a co-verification platform built this way shortens verification time by nearly 50% compared with a platform built purely in the Verilog hardware description language, avoids having to design complex signal-processing reference models in a hardware language, and improves the efficiency of building verification platforms for complex signal-processing systems.
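As a rough illustration of the DPI-C bridging described above, the sketch below shows a C++ function that a SystemVerilog testbench could import; the `matlab_process_sample` helper standing in for the Matlab connection is a hypothetical name, not the paper's code.

```cpp
// Sketch of a DPI-C bridge between a UVM environment and an external reference model.
// The SystemVerilog side could declare:
//   import "DPI-C" function real ref_model_step(real sample);
// and call it from the scoreboard for each DUT sample.

// Hypothetical helper that would forward one sample to an external Matlab
// session (e.g. via the Matlab Engine API) and return the reference output.
extern double matlab_process_sample(double sample);

extern "C" double ref_model_step(double sample) {
    // Hand the sample to the reference model and return the result to the
    // testbench, where it is compared against the DUT output.
    return matlab_process_sample(sample);
}
```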

2.
To address the key problems of channelized filters, namely the demand for high computation speed, heavy resource consumption, and the difficulty of real-time processing, a channelized filter algorithm model based on a polyphase bandpass structure is derived from an analysis of polyphase filters and of the structure, principle, and computational efficiency of channelized filters. The algorithm replaces the lowpass design in existing polyphase channelized filter models with a bandpass design, so that all complex multiplications are concentrated in the bandpass filtering stage, and adopts a coordinated, staged DFT implementation. This greatly saves hardware resources, improves computational efficiency, and enables real-time channelized filtering on general-purpose FPGA and DSP chips. Hardware simulation results verify the correctness and effectiveness of the algorithm model.
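For reference, the standard K-channel polyphase decomposition that this family of channelizers builds on (the textbook uniform filter bank formulation, not the paper's specific bandpass variant) is

$$H(z) = \sum_{p=0}^{K-1} z^{-p}\, E_p\!\left(z^{K}\right), \qquad E_p(z) = \sum_{n} h(nK + p)\, z^{-n},$$

with the k-th channel output obtained by applying a K-point DFT across the decimated branch outputs $v_p(m)$:

$$y_k(m) = \sum_{p=0}^{K-1} v_p(m)\, e^{\,j 2\pi k p / K}.$$

The contribution described above is to move the complex multiplications into bandpass polyphase branches instead of a prototype lowpass.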

3.
《微型机与应用》2019,(3):91-93
To address the problem that traditional median filtering consumes considerable resources when implemented in hardware with a large window, a low-resource bit-level median filtering hardware architecture for a 5×5 window is designed, based on a bit-level median comparison algorithm, and verified by simulation on the Xilinx Vivado 2017.1 platform. Compared with hardware designs of the traditional median filtering algorithm, this architecture greatly reduces hardware resources while remaining fast enough to meet real-time requirements.
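As a software reference for the kind of MSB-first bit-level median selection referred to above (a generic formulation assuming 8-bit pixels, not the paper's architecture):

```cpp
#include <array>
#include <cstdint>
#include <cstddef>

// Bit-serial median of a 5x5 window of 8-bit pixels: decide the median one bit
// at a time from MSB to LSB by counting how many samples are still >= the
// trial value on the bits decided so far. This maps to counters and
// comparators in hardware, which is why bit-level schemes save resources.
std::uint8_t bit_level_median(const std::array<std::uint8_t, 25>& win) {
    constexpr std::size_t rank = 13;       // median rank: 13th largest of 25
    std::uint8_t median = 0;
    for (int b = 7; b >= 0; --b) {
        const std::uint8_t trial = median | static_cast<std::uint8_t>(1u << b);
        std::size_t count = 0;
        for (std::uint8_t x : win)
            if (static_cast<std::uint8_t>(x >> b) >= static_cast<std::uint8_t>(trial >> b))
                ++count;                   // sample still at or above the trial value
        if (count >= rank)
            median = trial;                // enough samples above: this bit of the median is 1
    }
    return median;
}
```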

4.
For the diverse edge-computing requirements of the 6G era, FPGA-based reconfiguration can deliver lower latency while offering a variety of services. Following the idea of partial dynamic reconfiguration, the ICAP interface is used to reconfigure FPGA resources, realizing a partially and dynamically reconfigurable scheme in FPGA logic. Borrowing the software process management concept from operating systems, the notion of a hardware process is introduced into the Linux operating system: an entire FPGA is divided into several small resource blocks, and each small reconfigurable block is abstracted as a hardware process. The hardware process does not actually run on the CPU but in the FPGA logic region; only its software-language description resides in the operating system. On this basis, a CPU-plus-FPGA hardware scheme for a partially reconfigurable system is designed and verified on a Xilinx Zynq-series chip. Scheduling and allocating FPGA hardware resources as processes greatly improves the utilization and flexibility of FPGA hardware resources.
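A minimal sketch of the "hardware process" abstraction described above; the class shape and the `load_partial_bitstream` helper (which would drive ICAP or the platform's FPGA manager) are illustrative assumptions, not the paper's implementation.

```cpp
#include <string>
#include <stdexcept>
#include <utility>

// Hypothetical low-level helper that would push a partial bitstream into one
// reconfigurable region (e.g. through ICAP or the platform's FPGA manager).
bool load_partial_bitstream(int region_id, const std::string& bitstream_path);

// A "hardware process": a software-side handle for one reconfigurable FPGA
// region, scheduled and tracked by the OS much like a software process, while
// the actual computation runs in FPGA logic rather than on the CPU.
class HardwareProcess {
public:
    HardwareProcess(int region_id, std::string bitstream)
        : region_(region_id), bitstream_(std::move(bitstream)) {}

    void start() {                          // "exec": configure the region
        if (!load_partial_bitstream(region_, bitstream_))
            throw std::runtime_error("partial reconfiguration failed");
        running_ = true;
    }
    void stop() { running_ = false; }       // the region can now be reassigned
    bool running() const { return running_; }

private:
    int region_;
    std::string bitstream_;
    bool running_ = false;
};
```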

5.
A hardware/software co-verification environment for verifying embedded system hardware, software, and their interaction is introduced. Hardware and software functions are designed and described with the system's EHDL and ESDL languages, respectively. The co-verification environment consists of an embedded software debugger and an embedded hardware simulator and adopts an instruction-set-architecture co-simulation model. With this integrated environment, hardware and software design errors can be found and fixed before the system is manufactured.

6.
VHDL: A Standardized Hardware Design Language    (Cited: 4; self-citations: 0; by others: 4)
This paper introduces methods for describing hardware models in VHDL, illustrates with examples the flexibility and portability of this high-level hardware design language in logic design, and points out that VHDL will drive the standardization of logic design.

7.
Research on Key Techniques of Co-simulation Based on an ISS and a Hardware Simulator    (Cited: 1; self-citations: 0; by others: 1)
A hardware/software co-simulation method for verifying system functionality in embedded hardware/software co-design is proposed. The method uses an instruction set simulator (ISS) and an event-driven hardware simulator to simulate the software and the hardware, respectively, and builds a bus functional model of the processor in C to exchange information between the two simulators. The design and implementation of the ISS, the bus functional model, and the co-simulation interface of the hardware simulator are discussed in detail, and a synchronization algorithm between the software and hardware simulators is also given.
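The sketch below illustrates, in C++, the kind of bus functional model interface such a co-simulation exchanges between the ISS and the event-driven hardware simulator; the type and method names are assumptions for illustration, not the paper's code.

```cpp
#include <cstdint>

// One bus transaction as seen by the instruction set simulator: the ISS raises
// these whenever the simulated program touches an address mapped to modeled hardware.
struct BusTransaction {
    std::uint32_t address;
    std::uint32_t data;      // write data in, read data out
    bool          is_write;
};

// Interface implemented by the event-driven hardware simulator; the bus
// functional model forwards ISS accesses here and blocks until the hardware
// side responds, which is also where the two simulators synchronize time.
class HardwareSimulator {
public:
    virtual ~HardwareSimulator() = default;
    virtual void execute(BusTransaction& txn, std::uint64_t iss_time_ns) = 0;
};

// Bus functional model: translates ISS loads/stores into bus transactions.
class BusFunctionalModel {
public:
    explicit BusFunctionalModel(HardwareSimulator& hw) : hw_(hw) {}

    std::uint32_t read(std::uint32_t addr, std::uint64_t t) {
        BusTransaction txn{addr, 0, false};
        hw_.execute(txn, t);
        return txn.data;
    }
    void write(std::uint32_t addr, std::uint32_t value, std::uint64_t t) {
        BusTransaction txn{addr, value, true};
        hw_.execute(txn, t);
    }

private:
    HardwareSimulator& hw_;
};
```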

8.
Hardware Design and Implementation of a Road Recognition Algorithm    (Cited: 2; self-citations: 1; by others: 1)
Hardware implementation of vehicle vision navigation algorithms is of practical significance and is currently a research hotspot. To overcome the drawbacks of traditional hardware design methods, which are complex to implement, difficult to debug, and demanding on designers, this work studies the hardware design and implementation of complex algorithms based on a high-level language. The FPGA design and implementation of a road recognition algorithm in the Handel-C language is analyzed and verified experimentally. The results show that, compared with design methods such as VHDL, this approach offers flexible design, a short development cycle, and reasonable resource utilization, and is also well suited to hardware/software co-design.

9.
A hardware/software co-design method for application-specific instruction set processors is proposed that allows system-level exploration and verification of the processor at an early design stage. Based on the characteristics of elliptic curve cryptography (ECC) algorithms and following the design principles of application-specific instruction set processors, an instruction set architecture model for a very long instruction word (VLIW) ECC-specific processor is designed around the basic operations of elliptic curve cryptography and their computation and storage characteristics. On top of this instruction set architecture model and an instruction set simulator, a hardware/software co-verification platform is built, and the processor is verified at three levels: system design, RTL description, and an FPGA hardware prototype.
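For context, the basic operations such an ECC instruction set targets are affine point addition and doubling over a prime field $y^2 = x^3 + ax + b \pmod{p}$:

$$\lambda = \begin{cases} \dfrac{y_2 - y_1}{x_2 - x_1}, & P_1 \neq P_2,\\[6pt] \dfrac{3x_1^2 + a}{2y_1}, & P_1 = P_2, \end{cases} \qquad x_3 = \lambda^2 - x_1 - x_2, \qquad y_3 = \lambda\,(x_1 - x_3) - y_1 \pmod{p},$$

which reduce to sequences of modular multiplications, additions, and inversions, the natural candidates for dedicated VLIW instructions.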

10.
A hardware interface design between a processor and an LCD module is introduced. Using an I/O-mode connection, an interface between the SEP3203 processor and the JM12864G LCD controller is implemented, and by multiplexing the interface with the keyboard, processor pin resources are saved. The software flow of the main parts is given, and the design has been verified in a real system.

11.
A SystemC-based configurable rapid virtual prototyping platform for embedded systems is proposed. It has a typical system-on-chip structure and supports a multi-layer bus architecture. As a SystemC transaction-level model, the platform supports fast simulation and communication refinement. The platform has been applied to the design of an IEEE 802.11 media access controller, and that system is currently undergoing board-level debugging.
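As an illustration of what a SystemC transaction-level component on such a platform typically looks like, here is a generic TLM-2.0 target sketch (not the platform's actual code; no bounds checking is done):

```cpp
#include <systemc>
#include <tlm>
#include <tlm_utils/simple_target_socket.h>
#include <cstring>

// Minimal TLM-2.0 memory-mapped target: the bus model calls b_transport(),
// and communication can later be refined toward cycle accuracy without
// changing the initiators that talk to this socket.
struct SimpleMemory : sc_core::sc_module {
    tlm_utils::simple_target_socket<SimpleMemory> socket;
    unsigned char storage[0x1000] = {};

    SC_CTOR(SimpleMemory) : socket("socket") {
        socket.register_b_transport(this, &SimpleMemory::b_transport);
    }

    void b_transport(tlm::tlm_generic_payload& trans, sc_core::sc_time& delay) {
        unsigned char*      ptr  = trans.get_data_ptr();
        const sc_dt::uint64 addr = trans.get_address();
        const unsigned int  len  = trans.get_data_length();
        if (trans.is_read())
            std::memcpy(ptr, &storage[addr], len);   // read from modeled memory
        else
            std::memcpy(&storage[addr], ptr, len);   // write into modeled memory
        delay += sc_core::sc_time(10, sc_core::SC_NS);   // rough access latency
        trans.set_response_status(tlm::TLM_OK_RESPONSE);
    }
};
```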

12.
A System-Level SoC Design Method Based on a Configurable Processor    (Cited: 2; self-citations: 0; by others: 2)
This paper introduces and studies an improved rapid system-level SoC design method. Based on a configurable processor core, the SoC system is modeled quickly at an early design stage to obtain optimal performance for a specific application algorithm, and a hardware architecture model and a software development platform are derived through hardware/software co-design. Experimental results show that the method is flexible, shortens the design cycle, and reduces design effort.

13.
System reliability has become a main concern during the computer-based system design process. It is one of the most important characteristics of system quality. The continuous increase of system complexity makes reliability evaluation extremely costly; therefore, there is a need to develop new methods that require less cost and effort. Furthermore, the system is vulnerable to both software and hardware faults. While software faults are usually introduced by the programmer at either the design or the implementation stage of the software, hardware faults are caused by physical phenomena affecting the hardware components, such as environmental perturbations, manufacturing defects, and aging-related phenomena. Software faults can only impact the software components, whereas hardware faults can propagate through the different system layers and affect both the hardware and the software. This paper discusses the differences between software testing and the software fault injection techniques used for reliability evaluation. We describe mutation analysis as a method mainly used in software testing. Then, we detail fault injection as a technique to evaluate system reliability. Finally, we discuss how to use software mutation analysis in order to evaluate, at the software level, the system's reliability against hardware faults. The main advantage of this technique is its usability at an early design stage of the system, when the instruction set architecture is not available. Experiments run to evaluate faults occurring in the memory show that the proposed approach significantly reduces the complexity of system reliability evaluation in terms of time and cost.
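A toy illustration of the memory fault injection idea discussed above: flip single bits in a simulated memory image, re-run the workload, and count the faults that corrupt the observable output. The function and its signature are illustrative, not the paper's framework.

```cpp
#include <cstdint>
#include <cstddef>
#include <vector>
#include <functional>

// Run the workload once per single-bit fault in the memory image and count how
// many injected faults actually corrupt the observable result. The ratio of
// effective faults to injected faults gives a rough software-level estimate of
// the workload's sensitivity to memory faults.
std::size_t count_effective_faults(
    const std::vector<std::uint8_t>& memory_image,
    const std::function<std::uint64_t(const std::vector<std::uint8_t>&)>& workload) {
    const std::uint64_t golden = workload(memory_image);   // fault-free reference run
    std::size_t effective = 0;
    for (std::size_t byte = 0; byte < memory_image.size(); ++byte) {
        for (int bit = 0; bit < 8; ++bit) {
            std::vector<std::uint8_t> faulty = memory_image;
            faulty[byte] ^= static_cast<std::uint8_t>(1u << bit);  // inject one bit flip
            if (workload(faulty) != golden) ++effective;           // fault reached the output
        }
    }
    return effective;
}
```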

14.
金轶丰  於健 《计算机工程》2010,36(13):201-204
Limits on internal bus bandwidth and memory capacity can create system bottlenecks that are only discovered at the register-transfer-level coding stage, which increases the number of design iterations. To address this problem, a system resource budgeting technique is applied: the various hardware resources occupied by Asynchronous Transfer Mode (ATM) cells during switching are analyzed to determine the parameters of an ATM switch chip during its design. Simulation and hardware test results show that the chip meets the system's operating requirements.

15.
In this paper, we present an interface that allows hardware modeled in SystemC to access hardware modeled in QEMU on a QEMU- and SystemC-based virtual platform. By using QEMU as the instruction-accurate instruction set simulator (IA-ISS) and exploiting its ability to run a full-fledged operating system such as Linux, the virtual platform with the proposed interface can be used to facilitate the co-design of hardware models and device drivers at the early stage of the Electronic System Level (ESL) design flow. In other words, by using such a virtual platform, the hardware models and associated device drivers can be cross-verified while they are being developed, so that malfunctions in the hardware models or the device drivers can be easily detected. Moreover, the virtual platform with the proposed interface is capable of providing statistics on instructions executed, memory accessed, and I/O performed at the instruction-accurate level, which not only makes it easy to evaluate the performance of the hardware models but also makes design space exploration possible.
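A sketch of the kind of wire format a QEMU-side stub device might exchange with the SystemC simulation process over a local socket; the layout is an assumption for illustration, since the paper's interface is not specified at this level of detail.

```cpp
#include <cstdint>

// Illustrative message layout between a QEMU stub device and the SystemC side.
// Each guest MMIO access becomes one request/response pair on the socket.
#pragma pack(push, 1)
struct MmioRequest {
    std::uint64_t address;   // guest physical address of the access
    std::uint32_t value;     // write data (ignored for reads)
    std::uint8_t  size;      // access width in bytes: 1, 2, or 4
    std::uint8_t  is_write;  // 1 = write, 0 = read
};
struct MmioResponse {
    std::uint32_t value;     // read data returned by the SystemC model
};
#pragma pack(pop)
```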

16.
The concept of multi-Level-of-Development (multi-LOD) modelling represents a flexible approach to managing and compiling information in building information modelling (BIM) across a set of consistent levels. From an energy perspective during early architectural design, refining design parameters by adding information allows a more precise prediction of building performance. The need for energy-efficient buildings requires a designer to focus on the parameters in order of their ability to reduce uncertainty in energy performance, so as to prioritise energy-relevant decisions. However, there is no method for assigning and prioritising information for a particular level of multi-LOD. In this study, we performed a sensitivity analysis of energy models to estimate the uncertainty caused by the design parameters in energy prediction. This allows the design parameters to be ranked by their influence on the energy prediction and determines the information required at each level of the multi-LOD approach. We studied parametric energy models of different building shapes representing architectural design variation at the early design stage. A variance-based sensitivity analysis method is used to calculate the uncertainty contribution of each design parameter. Three levels of uncertainty contribution by groups of parameters are identified, which form the basis of the information required at each level of the multi-LOD BIM approach. The first level includes geometrical parameters, the second level includes technical specification and operational design parameters, and the third level includes window construction and system efficiency parameters. These findings will be specifically useful in developing a multi-LOD approach that prioritises performance-relevant decisions at early design phases.
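A common variance-based measure for this purpose is the first-order Sobol index: for design parameter $X_i$ and predicted energy use $Y$,

$$S_i = \frac{\operatorname{Var}_{X_i}\!\big[\operatorname{E}_{X_{\sim i}}(Y \mid X_i)\big]}{\operatorname{Var}(Y)},$$

the share of output variance attributable to $X_i$ alone; ranking parameters by $S_i$ gives the ordering described above. (Which specific variance-based estimator the study uses is not stated in this abstract.)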

17.
冯鑫  郭炜 《计算机仿真》2007,24(10):257-260
As the design complexity of SoC (System on Chip) systems keeps rising, early-stage hardware/software planning at the system level has an ever greater influence on SoC performance, and complex video-decoding SoC designs urgently need efficient performance analysis and verification platforms to optimize performance at the architectural level. An Electronic System Level (ESL) simulation methodology is applied to the hardware/software co-design of an MPEG-4 video-decoding SoC: the ARM SoC-Designer ESL platform is used to analyze the bottlenecks of the software algorithm and perform hardware/software partitioning, hardware units are modeled cycle-accurately in SystemC, and hardware/software co-simulation verification of the MPEG-4 decoder is finally achieved. Practice shows that ESL-based system design not only effectively increases simulation speed but also yields video-decoding hardware that effectively improves system performance.

18.
Past research has emphasized merely the avoidance of over-learning at the system level and ignored the problem of over-learning at the model level, which leads to the poor performance of evolutionary-computation-based stock trading decision-making systems. This study presents a new evaluation approach that focuses on evaluating the generalization capability at the model level. An empirical study is provided, and the results reveal four important findings. First, the decision-making system generated at the model design stage outperforms the system generated at the model validation stage, which shows over-learning at the model level. Second, for the decision-making system generated either at the model design stage or at the model validation stage, the investment performance in the training period is much better than that in the testing period, exhibiting over-learning at the system level. Third, employing a moving-timeframe approach is unable to improve the investment performance at the model validation stage. Fourth, reducing the number of evolution generations and input variables is unable to avoid over-learning at the model level. The major contribution of this study is to clarify the issue of over-learning at the model and system levels. For future research, this study develops a more reliable evaluation approach for examining the generalization capability of evolutionary-computation-based decision-making systems.

19.
A multilevel testing approach for embedded systems addresses mixed hardware and software implementations. Contrary to conventional approaches, it provides consistent generation of scenarios throughout all levels of testing, early assessment of alternative design implementations, integrated system and component testing, and performance assessment of design specifications starting from the system level.

20.
In the early 1990s, researchers at Sandia National Laboratories and the University of New Mexico began development of customized system software for massively parallel ‘capability’ computing platforms. These lightweight kernels have proven to be essential for delivering the full power of the underlying hardware to applications. This claim is underscored by the success of several supercomputers, including the Intel Paragon, Intel Accelerated Strategic Computing Initiative Red, and the Cray XT series of systems, each having established a new standard for high‐performance computing upon introduction. In this paper, we describe our approach to lightweight compute node kernel design and discuss the design principles that have guided several generations of implementation and deployment. A broad strategy of operating system specialization has led to a focus on user‐level resource management, deterministic behavior, and scalable system services. The relative importance of each of these areas has changed over the years in response to changes in applications and hardware and system architecture. We detail our approach and the associated principles, describe how our application of these principles has changed over time, and provide design and performance comparisons to contemporaneous supercomputing operating systems. Copyright © 2008 John Wiley & Sons, Ltd.
