Similar Documents (20 results)
1.
《Micro, IEEE》1988,8(5):64-75
The author offers a simple mathematical model of the interrelation of hardware and software to illustrate the necessity of knowledge of, and control over, the system under evaluation. He presents a list of hardware and software variables in Unix systems that should be considered when running benchmarks, together with a set of guidelines for benchmarking technique. While Unix is used as the case study, most (if not all) of the unwritten rules generally apply to other operating system environments. The author applies these guidelines to various benchmarks to illustrate the need for proper technique.

2.
Parallel integration of automatic speech recognition (ASR) models and statistical machine translation (MT) models is an unexplored research area in comparison to the large body of work on integrating them in series, i.e., speech-to-speech translation. Parallel integration of these models is possible when we have access to the speech of a target language text and to its corresponding source language text, as in a computer-assisted translation system. To our knowledge, only a few methods for integrating ASR models with MT models in parallel have been studied. In this paper, we systematically study a number of different translation models in the context of $N$-best list rescoring. As an alternative to $N$-best list rescoring, we use ASR word graphs in order to arrive at a tighter integration of ASR and MT models. The experiments are carried out on two tasks: English-to-German with an ASR vocabulary size of 17 K words, and Spanish-to-English with an ASR vocabulary of 58 K words. For the best method, the MT models reduce the ASR word error rate by 18% and 29% relative on the 17 K and the 58 K tasks, respectively.
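The $N$-best rescoring idea can be sketched as a log-linear combination of the ASR and MT model scores; the hypotheses, scores, and interpolation weight below are illustrative, not from the paper:

```python
def rescore_nbest(nbest, lambda_mt=0.4):
    """Rescore ASR N-best hypotheses by log-linearly combining the
    ASR log-probability with an MT model log-probability."""
    # Each entry: (hypothesis, asr_logprob, mt_logprob)
    scored = [(hyp, (1 - lambda_mt) * asr + lambda_mt * mt)
              for hyp, asr, mt in nbest]
    return max(scored, key=lambda item: item[1])[0]

# Hypothetical N-best list: the MT model disambiguates two acoustically
# similar German hypotheses.
nbest = [
    ("das ist ein test", -4.0, -9.0),   # ASR's top choice
    ("das ist ein fest", -4.5, -3.0),   # MT prefers this one
]
print(rescore_nbest(nbest))
```

With these scores, the MT weight flips the decision away from the acoustically best hypothesis, which is exactly the effect rescoring is meant to capture.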

3.
The work reported in this paper examined performance on a mixed pointing and data entry task using direct and indirect positioning devices for younger, middle-aged, and older adults (n=72) who were experienced mouse users. Participants used both preferred and non-preferred hands to perform an item selection and text entry task simulating a typical web page interaction. Older adults performed more slowly than middle-aged adults who in turn performed more slowly than young adults. Performance efficiency was superior with the mouse for older adults only on the first two trial blocks. Thereafter mouse and light pen yielded equivalent performance. For other age groups, mouse and light pen were equivalent at all points of practice. Contrary to prior research revealing superior performance with a light pen for pure pointing tasks, these results suggest that older adults may initially perform worse with a light pen than a mouse for mixed tasks.

4.
To meet the multi-model, multi-mission test requirements of the aerospace field, USB 3.0 high-speed bus technology was studied and the test requirements of the solid-state storage units in current satellite and spacecraft programs were analyzed in detail. A miniaturized, general-purpose test platform based on USB 3.0 is proposed, with detailed designs of the hardware circuitry, the general-purpose test software, and the hardware-software interaction protocol. The resulting platform has been applied in several programs. Application results show that the platform fully exploits USB 3.0's high performance and plug-and-play capability, supports online reconfiguration of functions, and achieves high-speed data transfer at 1.2 Gbps. It is reliable, stable, highly general, and low-cost, meeting the multi-model, multi-mission test needs of the aerospace field.

5.
This paper has two complementary focuses. The first is the system design and algorithmic development for air traffic control (ATC) using an associative SIMD processor (AP). The second is the comparison of this implementation with a multiprocessor implementation and the implications of these comparisons. This paper demonstrates how one application, ATC, can more easily, more simply, and more efficiently be implemented on an AP than is generally possible on other types of traditional hardware. The AP implementation of ATC will take advantage of its deterministic hardware to use static scheduling. The software will be dramatically smaller and cheaper to create and maintain. Likewise, a large AP system will be considerably simpler and cheaper than the MIMD hardware currently used. While APs were used for ATC-type applications earlier, these are no longer available. We use a ClearSpeed CSX600 accelerator to emulate the AP solutions of ATC on an ATC prototype consisting of eight data-intensive ATC real-time tasks. Its performance is compared with an 8-core multiprocessor (MP) using OpenMP. Our extensive experiments show that the AP implementation meets all deadlines while the MP will regularly miss a large number of deadlines. The AP code will be similar in size to sequential code for the same tasks and will avoid all of the additional support software needed with an MP to handle dynamic scheduling, load balancing, shared resource management, race conditions, false sharing, etc. At this point, essentially only MIMD systems are built. Many of the advantages of using an AP to solve an ATC problem would carry over to other applications. AP solutions for a wide variety of applications will be cited in this paper. 
Applications that involve a high degree of data parallelism such as database management, text processing, image processing, graph processing, bioinformatics, weather modeling, managing UAS (Unmanned Aircraft Systems or drones), etc., are good candidates for AP solutions. This raises the issue of whether we should routinely consider using non-multiprocessor hardware like the AP for applications where substantially simpler software solutions will normally exist. It also raises the question of whether the use of both AP and MIMD hardware in a single heterogeneous system could provide more versatility and efficiency. Either the AP or MIMD could serve as the primary system, but could hand off jobs it could not handle efficiently to the other system.
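The associative search that an AP performs in hardware in one logical step can be emulated in software: every cell compares its record against a key simultaneously, producing a mask of responders. The ATC-style track records and predicate below are hypothetical:

```python
def associative_select(records, field, predicate):
    """Emulate an AP associative search: every cell compares its
    record against the predicate 'simultaneously'; the returned
    boolean mask marks the responders."""
    return [predicate(rec[field]) for rec in records]

# Hypothetical track records (not from the paper's ATC prototype).
tracks = [
    {"id": "AA12", "alt": 31000},
    {"id": "BA07", "alt": 9500},
    {"id": "LH44", "alt": 30800},
]
mask = associative_select(tracks, "alt", lambda a: a > 30000)
responders = [t["id"] for t, m in zip(tracks, mask) if m]
print(responders)  # all tracks above 30,000 ft, found in one "step"
```

On real AP hardware the comparison is done by all cells in parallel, which is what gives the deterministic timing the paper exploits for static scheduling.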

6.
Current virtual reality technologies have not yet crossed the threshold of usability. Not surprisingly, VR has so far shown more promise than practical applications. Yet the promise looks bright for fields such as data visualization and analysis. For such problems, VR offers a natural interface between human and computer that will simplify complicated manipulations of the data. It also provides an opportunity to rely on the interplay of combined senses rather than on a single or even dominant sense. Still, we cannot yet say whether VR is better than other visualization and analysis approaches for certain classes of data and, if so, by how much. The payoff will come not for those applications or tasks for which VR is merely better, even if significantly, but for those applications or tasks for which it offers some unique advantage unavailable otherwise. To answer these questions, we embarked on a multipronged program involving the Graphics, Visualization, and Usability (GVU) Center, the Office of Information Technology Scientific Visualization Lab, and other research groups at Georgia Tech. Integration is mandatory since these questions involve basic considerations: how immersive environments affect user interfaces and human-computer interactions; the ranges and capabilities of sensors; computer graphics and the VR optical system; and applications' needs. We describe some of our results.

7.
This paper presents hardware designs, arithmetic algorithms, and numerical applications for variable-precision, interval arithmetic coprocessors. These coprocessors give the programmer the ability to set the initial precision of the computation, determine the accuracy of the results, and recompute inaccurate results with higher precision. Variable-precision, interval arithmetic algorithms are used to reduce the execution times of numerical applications. Three hardware designs with data paths of 16, 32, and 64 bits are examined. These designs are compared based on their estimated chip area, cycle time, and execution times for various numerical applications. Each coprocessor can be implemented on a single chip with a cycle time that is comparable to IEEE double-precision floating point coprocessors. For certain numerical applications, the coprocessors are two to four orders of magnitude faster than a conventional software package for variable-precision, interval arithmetic.
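The core interval operations such coprocessors implement can be sketched in a few lines; note that real interval hardware also applies outward directed rounding to each endpoint, which this sketch omits:

```python
def iadd(a, b):
    """Interval addition: add endpoints pairwise."""
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    """Interval multiplication: the result bounds all products of
    endpoints, handling sign changes automatically."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

x = (1.0, 2.0)
y = (-3.0, 0.5)
print(iadd(x, y))  # (-2.0, 2.5)
print(imul(x, y))  # (-6.0, 1.0)
```

The width of the result interval is exactly the accuracy indicator the paper's coprocessors use to decide whether to recompute at higher precision.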

8.
The proposed idea of software defined radios (SDRs) offers the potential to cope with the complexity and flexibility demands of future wireless communication devices. Unfortunately, the tight interaction of software and hardware as well as the high computational requirements make the development of SDRs one of the most challenging tasks system architects face today. The main challenge is to select the optimal or a near-optimal solution within the large design space spanned by the many design options. This paper introduces a novel design space exploration framework aimed particularly at early development stages. The key contribution is a pre-simulation mathematical analysis based on synchronous data flow (SDF) graphs that guides the software and hardware design decisions. This analysis can be utilized right from the start of the design cycle with only limited knowledge of the final implementation. In addition, the proposed technique seamlessly integrates into an electronic system level (ESL) based simulation framework. This allows for a smooth transition from pure mathematical analysis to the simulation of the final implementation. The practical usage of the framework and its capabilities are highlighted by a case study from a typical SDR design.
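A basic building block of any SDF analysis is solving the balance equations for a repetition vector (how often each actor fires per iteration). A minimal sketch on a hypothetical three-actor chain, not the paper's case study:

```python
from fractions import Fraction
from math import lcm

def repetition_vector(edges, actors):
    """Solve the SDF balance equations r[u]*prod = r[v]*cons by
    propagating rates from the first actor (assumes a connected graph)."""
    r = {actors[0]: Fraction(1)}
    changed = True
    while changed:
        changed = False
        for u, v, prod, cons in edges:
            if u in r and v not in r:
                r[v] = r[u] * prod / cons
                changed = True
            elif v in r and u not in r:
                r[u] = r[v] * cons / prod
                changed = True
    # Scale the fractional rates to the smallest integer vector.
    scale = lcm(*(f.denominator for f in r.values()))
    return {a: int(f * scale) for a, f in r.items()}

# Hypothetical chain: A produces 2 tokens per firing, B consumes 3, etc.
edges = [("A", "B", 2, 3), ("B", "C", 1, 2)]
print(repetition_vector(edges, ["A", "B", "C"]))  # {'A': 3, 'B': 2, 'C': 1}
```

Firing A three times produces 6 tokens, which B consumes in two firings, and so on; such rate information is what a pre-simulation analysis can extract before any implementation exists.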

9.
An approach to fault-tolerant execution of real-time application tasks in hypercubes is proposed. The approach is based on the distributed recovery block (DRB) scheme and does not require special hardware mechanisms in support of fault tolerance. Each task is assigned to a pair of processors forming a DRB computing station for execution in a dual-redundant and self-checking mode. Assignment of all tasks in an application in such a form is called the full DRB mapping. The DRB scheme was developed as an approach to uniform treatment of hardware and software faults with the effect of fast forward recovery. However, if the system developer is concerned with hardware fault possibilities only, then forming DRB stations becomes a mechanical process that does not burden the application software designer in any way. A procedure for converting an efficient nonredundant task-to-processor mapping into an efficient full DRB mapping is presented.
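The recovery block idea underlying the DRB scheme can be sketched on a single node; the routines and acceptance test below are hypothetical, and a real DRB runs the primary and alternate try blocks concurrently on partner processors so recovery is nearly instantaneous:

```python
def drb_execute(x, primary, alternate, acceptance_test):
    """Recovery block sketch: run the primary routine, validate the
    result with the acceptance test, and fall back to the alternate
    routine when the primary fails or is rejected."""
    try:
        result = primary(x)
        if acceptance_test(result):
            return result, "primary"
    except Exception:
        pass  # a crashed primary is treated like a rejected result
    result = alternate(x)
    if not acceptance_test(result):
        raise RuntimeError("both try blocks failed the acceptance test")
    return result, "alternate"

# Hypothetical routines: the primary is faulty for negative inputs.
primary = lambda x: x * x if x >= 0 else -1
alternate = lambda x: abs(x) * abs(x)
accept = lambda r: r >= 0
print(drb_execute(-3, primary, alternate, accept))  # (9, 'alternate')
```

The acceptance test is the self-checking element: it lets the scheme mask both software faults (a wrong result) and hardware faults (a crash) with the same mechanism.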

10.
A Scheduling Algorithm for Reconfigurable Hybrid Tasks with a Preconfiguration Strategy
Building on an abstraction of reconfigurable hardware resources, the application is described as a directed acyclic graph of mixed software and hardware tasks, and a list-based hybrid task scheduling algorithm is proposed. The algorithm derives a dynamic preconfiguration priority for hardware tasks from the order in which tasks become ready and the state of the reconfigurable resources, and preconfigures hardware tasks according to this priority. This hides hardware configuration time and yields hardware acceleration. Experimental results show that, for mixed software/hardware task scheduling in reconfigurable systems, the algorithm effectively reduces the impact of configuration time on application execution time.

11.
Automatic Speaker Recognition (ASR) refers to the task of identifying a person based on his or her voice with the help of machines. ASR finds potential applications in telephone-based financial transactions, credit card purchases, and in forensic science and social anthropology for the study of different cultures and languages. Results of ASR are highly dependent on the database, i.e., the results obtained in ASR are meaningless if the recording conditions are not known. In this paper, a methodology and a typical experimental setup used for the development of corpora for various tasks in text-independent speaker identification in different Indian languages, viz., Marathi, Hindi, Urdu and Oriya, are described. Finally, an ASR system is presented to evaluate the corpora.

12.

Cloud computing is a popular and widely adopted computing platform for the execution of scientific workflows, as it provides flexible infrastructure and offers access to a collection of autonomous heterogeneous resources. Effective scheduling of computationally complex workflows containing many interconnected tasks is a hard problem and becomes more challenging in a cloud environment. Good solutions must consider not only the heterogeneity of computation costs, but also the communication costs among tasks, so that the schedule length of the application is reduced. In this paper, we propose a list scheduling heuristic, namely minimal optimistic processing time (MOPT), with an optimized duplication approach. The duplication feature is introduced for the entry task and is applied only in scenarios in which duplication is practical and effective. The prioritization phase of the proposed work is based on an optimistic processing time matrix that is used for ranking the tasks. The algorithm has the same time complexity as state-of-the-art algorithms, but notable improvements are obtained in makespan and other performance evaluation parameters. Extensive experimental analysis of the proposed algorithm is carried out using synthesized graphs and graphs from real-world applications. The results show that MOPT achieves quality schedules with reduced makespans. As the communication cost among tasks grows, the proposed algorithm becomes more effective, providing evidence that MOPT is well-suited for communication-intensive applications.

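The prioritization idea behind this family of list heuristics can be sketched by ranking each task with its optimistic (minimum-over-processors) processing time plus the most expensive downstream path. The DAG and costs below are made up, and this is the generic HEFT-style upward rank, not the exact MOPT formulation:

```python
def upward_rank(tasks, succ, opt_time, comm):
    """Rank tasks by optimistic processing time plus the critical
    downstream path; higher-ranked tasks are scheduled first."""
    rank = {}
    def r(t):
        if t not in rank:
            rank[t] = opt_time[t] + max(
                (comm[(t, s)] + r(s) for s in succ.get(t, [])),
                default=0)
        return rank[t]
    for t in tasks:
        r(t)
    return sorted(tasks, key=lambda t: -rank[t])

# Hypothetical 4-task workflow DAG with communication costs per edge.
opt_time = {"T1": 2, "T2": 3, "T3": 1, "T4": 2}
succ = {"T1": ["T2", "T3"], "T2": ["T4"], "T3": ["T4"]}
comm = {("T1", "T2"): 1, ("T1", "T3"): 2,
        ("T2", "T4"): 1, ("T3", "T4"): 3}
order = upward_rank(["T1", "T2", "T3", "T4"], succ, opt_time, comm)
print(order)  # entry task first, exit task last
```

Because communication costs enter the rank directly, a communication-heavy edge pushes its upstream tasks earlier in the priority list, which matches the paper's observation that the approach pays off most for communication-intensive workflows.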

13.
Traditional information extraction systems for compound tasks adopt pipeline architectures, which are highly ineffective and suffer from several problems such as cascading accumulation of errors. In this paper, we propose a joint discriminative probabilistic framework to optimize all relevant subtasks simultaneously. This framework offers a great flexibility to incorporate the advantage of both uncertainty for sequence modeling and first-order logic for domain knowledge. The first-order logic model provides a more expressive formalism tackling the issue of limited expressiveness of traditional attribute-value representation. Our framework defines a joint probability distribution for both segmentations in sequence data and possible worlds of relations between segments in the form of an exponential family. Since exact parameter estimation and inference are prohibitively intractable in this model, a structured variational inference algorithm is developed to perform parameter estimation approximately. For inference, we propose a highly coupled, bi-directional Metropolis-Hastings (MH) algorithm to find the maximum a posteriori (MAP) assignments for both segmentations and relations. Extensive experiments on two real-world information extraction tasks, entity identification and relation extraction from Wikipedia, and citation matching show that (1) the proposed model achieves significant improvement on both tasks compared to state-of-the-art pipeline models and other joint models; (2) the bi-directional MH inference algorithm obtains boosted performance compared to the greedy, N-best list, and uni-directional MH sampling algorithms.
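A plain random-walk Metropolis-Hastings sampler illustrates the underlying inference primitive; the paper's bi-directional MH over segmentations and relations is far more structured, and the target density below is a toy example:

```python
import math
import random

def metropolis_hastings(log_target, x0, steps=5000, scale=1.0, seed=7):
    """Random-walk Metropolis-Hastings: propose a Gaussian step and
    accept it with probability min(1, target(prop)/target(x))."""
    random.seed(seed)
    x, samples = x0, []
    for _ in range(steps):
        prop = x + random.gauss(0.0, scale)
        # Accept/reject in log space to avoid underflow.
        if math.log(random.random()) < log_target(prop) - log_target(x):
            x = prop
        samples.append(x)
    return samples

# Toy target: a standard normal density (up to a constant).
samples = metropolis_hastings(lambda v: -0.5 * v * v, x0=0.0)
mean = sum(samples) / len(samples)
print(round(mean, 2))  # should be near 0
```

The same accept/reject skeleton carries over to discrete state spaces; the bi-directional variant in the paper alternates proposals that revise segmentations given relations and relations given segmentations.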

14.
For functional verification, software simulation provides full controllability and observability, whereas hardware emulation offers speed. This article describes a new platform that leverages the advantages of both. This platform implements an efficient scheme to record the internal behavior of an FPGA emulator and replay the relevant segment of a simulation in a software environment for debugging. Experimental results show an order-of-magnitude savings in debugging time compared to a software-only simulation approach.
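The record-and-replay scheme can be sketched as input tracing plus periodic state checkpoints, so that only the segment of interest is re-simulated in software. The toy accumulator "design" and checkpoint interval below are illustrative, not the article's mechanism:

```python
def record_run(step_fn, state, inputs):
    """Record the input sequence and periodic state checkpoints
    during the (emulated) run."""
    trace, checkpoints = [], {0: state}
    for i, inp in enumerate(inputs, 1):
        trace.append(inp)
        state = step_fn(state, inp)
        if i % 4 == 0:            # checkpoint interval (illustrative)
            checkpoints[i] = state
    return trace, checkpoints

def replay(step_fn, trace, checkpoints, start, end):
    """Replay only [start, end) from the nearest earlier checkpoint."""
    base = max(k for k in checkpoints if k <= start)
    state = checkpoints[base]
    for inp in trace[base:end]:
        state = step_fn(state, inp)
    return state

step = lambda s, x: s + x          # toy design under test: accumulator
trace, cps = record_run(step, 0, [1, 2, 3, 4, 5, 6, 7, 8])
print(replay(step, trace, cps, start=4, end=8))  # 36, same as full run
```

Replaying from a checkpoint reproduces the same final state as a full rerun while skipping most of the simulation, which is where the debugging-time savings come from.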

15.
Development and Validation of an ECU for an Integrated ABS/ASR Control System
The vehicle anti-slip control system integrating the anti-lock braking system (ABS) and the acceleration slip regulation system (ASR) is one of the cores of automotive active-safety control, and domestically developed ECUs cannot yet meet the requirements for industrializing integrated ABS/ASR control. The control requirements and control strategies of ABS/ASR were studied, the ECU hardware and software for the integrated ABS/ASR control system were designed and developed, and the ECU was validated through on-road ABS vehicle tests and ASR hardware-in-the-loop tests. The results verify the ECU's anti-lock braking and anti-slip functions; the hardware and software satisfy the control requirements, tolerate the complex electromagnetic environment of a real vehicle, and operate reliably.

16.
Design of a Hardware-in-the-Loop Simulation System for Automotive Traction Control
A hardware-in-the-loop simulation platform for vehicle traction (acceleration slip) control was built with the integrated Simulink/RTW/xPC Target toolchain in Matlab, and the hardware circuitry and control program of a microcontroller-based traction control unit were designed. Hardware-in-the-loop simulations under several typical driving conditions verify the control performance of the ECU hardware and its control program.

17.
To address the poor flexibility of task models, the long reconfiguration latency of hardware tasks, and the low FPGA resource utilization in reconfigurable systems, a partitioning scheme that divides applications into software tasks and hybrid tasks is proposed. On top of eCos, the embedded operating system framework eCos4RC supporting reconfigurable systems is designed through extensions in three areas: the reconfiguration control mechanism, the hybrid task management mechanism, and the communication mechanism. Simulation results show that eCos4RC manages hybrid tasks effectively and, while remaining compatible with the eCos multithreading mechanism, improves application execution speed and reconfigurable resource utilization, providing solid runtime support for reconfigurable computing platforms.

18.
Parameter-dependent software reads and parses configuration parameters at initialization and performs its task processing accordingly; aerospace TT&C (tracking, telemetry, and command) software is a typical example. Such software exhibits clear domain-software characteristics and commonly applies domain engineering analysis to separate business processing logic from mission-specific parameters, so that high-intensity mission workloads can be accommodated merely by modifying mission configuration parameters. Based on an analysis of the architecture and usage patterns of parameter-dependent software, a process, strategy, and method for acceptance testing of the parameter-dependence property and for parameter-change testing are proposed. The method was applied to test the parameter-dependence properties of a remote data exchange program; the results show that the method offers strong test coverage, clear testing focus, and high testing efficiency.

19.
Daily numerical data entry is subject to human errors, and errors in numerical data can cause serious losses in health care, safety and finance. The difficulty human operators have in detecting errors in numerical data entry necessitates an early error detection/prediction mechanism to proactively prevent severe accidents. To explore the possibility of using multi-channel electroencephalography (EEG) collected before movements/reactions to detect/predict human errors, a linear discriminant analysis (LDA) classifier was utilised to predict numerical typing errors before their occurrence in numerical typing. Single trial EEG data were collected from seven participants during numerical hear-and-type tasks, and three temporal features were extracted from six EEG sites in a 150-ms time window. The sensitivity of the LDA classifier was revealed by adjusting the critical ratio of two Mahalanobis distances as a classification criterion. On average, the LDA classifier was able to detect 74.34% of numerical typing errors in advance with only 34.46% false alarms, resulting in a sensitivity of 1.05. A cost analysis also showed that using the LDA classifier would be beneficial as long as the penalty is at least 15 times the cost of inspection when the error rate is 5%. LDA demonstrated its realistic potential in detecting/predicting relatively few errors in numerical data without heavy pre-processing. This is one step towards predicting and preventing human errors in perceptual-motor tasks before their occurrence.
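The distance-ratio criterion can be sketched with a simplified diagonal-covariance Mahalanobis distance; the class statistics and feature values below are hypothetical, not the study's EEG data:

```python
def mahalanobis_sq(x, mean, var):
    """Squared Mahalanobis distance assuming a diagonal covariance
    (a simplification of the full LDA setup)."""
    return sum((xi - mi) ** 2 / vi for xi, mi, vi in zip(x, mean, var))

def classify(x, stats, ratio_criterion=1.0):
    """Flag a trial as a predicted error when the distance to the
    'correct' class exceeds criterion * distance to the 'error' class;
    raising the criterion trades sensitivity for fewer false alarms."""
    d_correct = mahalanobis_sq(x, *stats["correct"])
    d_error = mahalanobis_sq(x, *stats["error"])
    return "error" if d_correct > ratio_criterion * d_error else "correct"

# Hypothetical per-class (mean, variance) of two EEG features.
stats = {
    "correct": ([0.0, 0.0], [1.0, 1.0]),
    "error":   ([2.0, 2.0], [1.0, 1.0]),
}
print(classify([1.8, 2.1], stats))  # 'error'
```

Sweeping `ratio_criterion` traces out the detection/false-alarm trade-off that the paper reports as sensitivity.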

20.
The mel-frequency cepstral coefficient (MFCC) or perceptual linear prediction (PLP) feature extraction typically used for automatic speech recognition (ASR) employs several principles which have known counterparts in the cochlea and auditory nerve: frequency decomposition, mel- or bark-warping of the frequency axis, and compression of amplitudes. It seems natural to ask if one can profitably employ a counterpart of the next physiological processing step, synaptic adaptation. We, therefore, incorporated a simplified model of short-term adaptation into MFCC feature extraction. We evaluated the resulting ASR performance on the AURORA 2 and AURORA 3 tasks, in comparison to ordinary MFCCs, MFCCs processed by RASTA, and MFCCs processed by cepstral mean subtraction (CMS), both in comparison to and in combination with Wiener filtering. The results suggest that our approach offers a simple, causal robustness strategy which is competitive with RASTA, CMS, and Wiener filtering and performs well in combination with Wiener filtering. Compared to the structurally related RASTA, our adaptation model provides superior performance on AURORA 2 and, if Wiener filtering is used prior to both approaches, on AURORA 3 as well.
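Short-term adaptation can be approximated by subtracting an exponentially decaying running mean from each feature trajectory, which emphasizes onsets and suppresses slowly varying components. This is a crude, causal stand-in for the paper's adaptation model; the time constant and trajectory below are illustrative:

```python
def adapt(features, alpha=0.95):
    """Causal adaptation sketch: subtract an exponential running mean
    from a single cepstral-coefficient trajectory, so steady portions
    decay toward zero while onsets pass through strongly."""
    out, m = [], 0.0
    for x in features:
        m = alpha * m + (1 - alpha) * x   # slowly tracking mean
        out.append(x - m)                 # adapted (onset-emphasized) value
    return out

# A step in a hypothetical cepstral coefficient: the adapted output
# responds strongly at the onset, then decays toward zero.
trajectory = [0.0] * 5 + [1.0] * 20
adapted = adapt(trajectory)
print(round(adapted[5], 3), round(adapted[-1], 3))
```

Like RASTA and CMS, this high-pass behavior along time removes slowly varying channel effects, which is why the approaches are structurally comparable; being strictly causal is what makes it attractive for online recognition.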
