Similar Articles
20 similar articles found.
1.
We focus here on the application of multi-criteria decision analysis (MCDA) techniques in hardware/software partitioning activities to be used in the design and deployment of embedded systems. Our goal is to identify the best existing methods and tools suitable to support the approach we have taken for the partitioning process. We provide this via a survey of the most well-known MCDA methods and tools (for a specific class of MCDA methods called multi-attribute decision making). We identify a set of criteria that need to be addressed, in some way, by the methods, and implemented by related tools. These “11 suitability criteria” help us decide the appropriateness of the analysed methods and tools for the envisaged partitioning approach. In brief, we require that the MCDA methods take into account multiple extra-functional properties, expressed in a variety of types and with possibly missing values, and that they enable dependency handling, decision traceability, etc. The conclusion is that there are criteria that are not fulfilled by any of the methods, and hence there is no method or tool that can be directly used for the partitioning. However, the results show the potential of using MCDA in the partitioning process and provide a good starting point for future research activities.
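As a minimal illustration of the multi-attribute decision making the surveyed methods perform, the sketch below scores partitioning alternatives with a weighted sum over normalized criteria and a pessimistic policy for missing values. The criteria names, weights and values are invented for illustration and are not taken from the paper.

```python
# Minimal multi-attribute decision-making (MADM) sketch: weighted-sum scoring of
# hardware/software partitioning alternatives. Criteria, weights and the
# missing-value policy are illustrative assumptions, not the paper's method.

CRITERIA = {"latency_ms": ("min", 0.4), "power_mw": ("min", 0.3), "flexibility": ("max", 0.3)}

alternatives = {
    "hw_accelerator": {"latency_ms": 2.0, "power_mw": 450.0, "flexibility": 2.0},
    "sw_on_cpu":      {"latency_ms": 9.0, "power_mw": 120.0, "flexibility": None},  # missing value
}

def normalize(value, lo, hi, direction):
    """Map a raw criterion value to [0, 1], where 1 is always 'better'."""
    if hi == lo:
        return 1.0
    score = (value - lo) / (hi - lo)
    return score if direction == "max" else 1.0 - score

def weighted_score(name):
    total = 0.0
    for crit, (direction, weight) in CRITERIA.items():
        observed = [a[crit] for a in alternatives.values() if a[crit] is not None]
        lo, hi = min(observed), max(observed)
        value = alternatives[name][crit]
        # Pessimistic policy for missing values: assume the worst possible score.
        total += weight * (0.0 if value is None else normalize(value, lo, hi, direction))
    return total

ranking = sorted(alternatives, key=weighted_score, reverse=True)
print(ranking)  # e.g. ['hw_accelerator', 'sw_on_cpu']
```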

2.
Fairness of competing transmission control protocol (TCP) flows is an integral and indispensable part of transport protocol design for next-generation, high-bandwidth-delay product networks. It is not just a protocol-intrinsic property; it can also have a severe impact on quality of experience (QoE). In this paper, we revisit FAST TCP fairness behavior based on a comprehensive performance evaluation study. We demonstrate that FAST TCP with proper parameter settings can always achieve fair behavior with HighSpeed TCP and Scalable TCP. We also show that this behavior is a rather robust property of the protocol with respect to different traffic mixes and network topologies. The dynamic behavior of reaching the fair equilibrium state can differ, which is demonstrated in the paper. Our study also emphasizes the need for a dynamics-sensitive fairness metric for performance evaluation of transport protocols for next-generation, high-bandwidth-delay product networks.
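For reference, the most widely used static fairness measure for competing flows is Jain's index; the sketch below computes it for illustrative throughput samples. The abstract argues that a dynamics-sensitive metric is also needed, which this snapshot index does not capture.

```python
# Jain's fairness index for n competing flows: (sum x)^2 / (n * sum x^2).
# It equals 1 when all flows get equal throughput and tends to 1/n under
# total unfairness. A static snapshot only; throughput values are illustrative.

def jain_index(throughputs):
    n = len(throughputs)
    s = sum(throughputs)
    return (s * s) / (n * sum(x * x for x in throughputs)) if s else 0.0

print(jain_index([95.0, 100.0, 105.0]))  # close to 1.0 -> fair share
print(jain_index([10.0, 10.0, 280.0]))   # well below 1.0 -> one flow dominates
```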

3.
Spatially correlating in-line inspection data and post process electrical test data is an effective approach for estimating the yield impact of different defect types and/or process steps. An estimator for the probability that a particular type of defect kills an electrically testable structure, the kill ratio, has been described in the literature. This estimator may be used to predict the yield impact immediately after inspection, providing a number of benefits. It may also be used to generate a yield loss pareto by defect type. This paper introduces a new estimator for the kill ratio, which takes into account the impact of tolerance, a parameter setting the maximum distance between a defect and structure under which they are considered spatially correlated. This estimator was developed for memory (bitmap) data, where the tolerance is very large relative to the size of the structure. The tolerance is often increased to accommodate misalignment between inspection tool sets and the electrical data. The problem with increasing the tolerance is that the chance of coincidental correlation between failed bits and defects increases as the square of the tolerance. Analytical and simulation results are presented to illustrate the danger of using the existing kill ratio estimator with too large a tolerance or overly sensitive inspection tool recipes. These same results illustrate the improved performance of the new estimator. Because the number of falsely attributed defects adds up over a number of inspections, a small error in the kill ratio estimator can have a major impact on the yield loss pareto.
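A toy Monte Carlo sketch of the effect the abstract warns about: when defect and failed-bit positions are unrelated, the number of coincidental box matches grows roughly with the square of the tolerance. The die size, counts and uniform spatial distribution are assumptions for illustration, not data from the paper.

```python
# Monte Carlo sketch: false (coincidental) matches between random defect
# coordinates and random failed-bit coordinates grow roughly as the square
# of the correlation tolerance. All numbers are illustrative assumptions.
import random

random.seed(1)
DIE = 10_000.0  # die edge length in microns (assumed)
defects = [(random.uniform(0, DIE), random.uniform(0, DIE)) for _ in range(200)]
fails   = [(random.uniform(0, DIE), random.uniform(0, DIE)) for _ in range(200)]

def correlated(tol):
    """Count defects that fall within +/- tol of any failed bit (box match)."""
    return sum(
        any(abs(dx - fx) <= tol and abs(dy - fy) <= tol for fx, fy in fails)
        for dx, dy in defects
    )

for tol in (50, 100, 200, 400):
    print(tol, correlated(tol))
# Doubling tol roughly quadruples the expected number of coincidental matches
# (until saturation), which is why an overly large tolerance inflates a naive
# kill-ratio estimate.
```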

4.
In this research, we first investigate the cross-layer interaction between TCP and routing protocols in the IEEE 802.11 ad hoc network. On-demand ad hoc routing protocols respond to network events such as channel noise, mobility, and congestion in the same manner, which, in association with TCP, deteriorates the quality of an existing end-to-end connection. The poor end-to-end connectivity deteriorates TCP's performance in turn. Based on the well-known TCP-friendly equation, we conduct a quantitative study on the TCP operation range using static routing and long-lived TCP flows and show that the additive-increase, multiplicative-decrease (AIMD) behavior of the TCP window mechanism is aggressive for a typical multihop IEEE 802.11 network with a low bandwidth-delay product. Then, to address these problems, we propose two complementary mechanisms, namely the TCP fractional window increment (FeW) scheme and the ROute-failure notification using BUlk-losS Trigger (ROBUST) policy. The TCP FeW scheme is a preventive solution used to reduce the congestion-driven wireless link loss. The ROBUST policy is a corrective solution that enables on-demand routing protocols to suppress overreactions induced by the aggressive TCP behavior. It is shown by computer simulation that these two mechanisms result in a significant improvement of TCP throughput without modifying the basic TCP window or the wireless MAC mechanisms.
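To make the contrast concrete, the sketch below compares per-RTT window growth under standard additive increase (one MSS per RTT) with a fractional increment of less than one MSS per RTT, which is the basic idea behind a FeW-style scheme. The increment value and the loop are illustrative, not the paper's simulation setup.

```python
# Sketch of window growth per RTT: standard TCP AIMD adds 1 MSS per RTT,
# while a fractional window increment (FeW-style) adds only a fraction of an
# MSS per RTT, making growth far less aggressive over a low bandwidth-delay
# product multihop path. The fraction and loss-free loop are illustrative.

def grow(cwnd, rtts, increment):
    trace = [cwnd]
    for _ in range(rtts):
        cwnd += increment          # additive increase per RTT
        trace.append(round(cwnd, 2))
    return trace

print("AIMD:", grow(2.0, 10, 1.0))    # reaches 12 MSS after 10 RTTs
print("FeW :", grow(2.0, 10, 0.01))   # stays near 2 MSS, reducing link-layer contention
```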

5.
Virtual prototyping environments are emerging as a new generation of EDA tools. Such a tool will allow the designer to interactively test complex electronic systems on an enhanced-reality virtual workbench, by concurrently running multidomain (mechanical, electrical, thermal, etc.) what-if experiments. Virtual prototyping will shorten the design cycle, improve the product quality, and reduce the time to market. Advanced computational techniques are needed to reduce the execution time, especially for the field (EM and thermal) models used in these virtual prototyping environments. This article shows how neural networks can be used efficiently for 3D EM field modeling. Neural network models have much better real-time performance than classical numerical EM-field modeling methods, which is particularly important when the field analysis is coupled with system optimization.
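A minimal sketch of the surrogate-modeling idea: train a small neural network on samples produced by an expensive solver, then query the network during interactive what-if experiments. The analytic "solver" below is only a stand-in for a real 3-D EM simulation, and the network size and sample counts are assumptions.

```python
# Neural-network surrogate sketch: fit a small MLP to samples from a slow
# field solver, then evaluate the MLP during interactive what-if experiments.
# The 1/r "solver" is a placeholder; sizes and counts are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def slow_field_solver(xyz):
    # Stand-in for an expensive EM/thermal solver: field magnitude ~ 1/r.
    r = np.linalg.norm(xyz, axis=1) + 0.1
    return 1.0 / r

X_train = rng.uniform(-1.0, 1.0, size=(2000, 3))
y_train = slow_field_solver(X_train)

surrogate = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=3000, random_state=0)
surrogate.fit(X_train, y_train)

X_test = rng.uniform(-1.0, 1.0, size=(5, 3))
print(np.c_[slow_field_solver(X_test), surrogate.predict(X_test)])  # solver vs surrogate
```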

6.
High Efficiency Video Coding (HEVC) surpasses its predecessors in encoding efficiency by introducing new coding tools at the cost of an increased encoding time complexity. The Coding Tree Unit (CTU) is the main building block used in HEVC. In the HEVC standard, frames are divided into CTUs with the predetermined size of up to 64 × 64 pixels. Each CTU is then divided recursively into a number of equally sized square areas, known as Coding Units (CUs). Although this diversity of frame partitioning increases encoding efficiency, it also causes an increase in the time complexity due to the increased number of ways to find the optimal partitioning. To address this complexity, numerous algorithms have been proposed to eliminate unnecessary searches during partitioning CTUs by exploiting the correlation in the video. In this paper, existing CTU depth decision algorithms for HEVC are surveyed. These algorithms are categorized into two groups, namely statistics and machine learning approaches. Statistics approaches are further subdivided into neighboring and inherent approaches. Neighboring approaches exploit the similarity between adjacent CTUs to limit the depth range of the current CTU, while inherent approaches use only the available information within the current CTU. Machine learning approaches try to extract and exploit similarities implicitly. Traditional methods like support vector machines or random forests use manually selected features, while recently proposed deep learning methods extract features during training. Finally, this paper discusses extending these methods to more recent video coding formats such as Versatile Video Coding (VVC) and AOMedia Video 1 (AV1).
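As a concrete example of a neighboring approach, the sketch below restricts the depth search of the current CTU using the depths already chosen for nearby CTUs. The neighbor set and the one-level margin are illustrative; the surveyed algorithms derive the range in different ways.

```python
# Sketch of a "neighboring" CTU depth-decision heuristic: restrict the depth
# search range of the current CTU using depths already chosen for its left,
# above and co-located neighbors. The margin and neighbor set are assumptions.

MAX_DEPTH = 3  # HEVC CTU quadtree depths 0..3 (64x64 down to 8x8 CUs)

def predicted_depth_range(neighbor_depths):
    if not neighbor_depths:                   # no coded neighbors (e.g. first CTU)
        return 0, MAX_DEPTH                   # fall back to the full search
    lower = min(neighbor_depths)              # skip depths shallower than any neighbor used
    upper = min(MAX_DEPTH, max(neighbor_depths) + 1)  # allow one level deeper than the deepest
    return lower, upper

# Example: left CTU used depth 1, above CTU depth 2, co-located CTU depth 1.
lo, hi = predicted_depth_range([1, 2, 1])
print(f"search depths {lo}..{hi} instead of 0..{MAX_DEPTH}")  # 1..3: the 64x64 test is skipped
```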

7.
The critical product planning phase, early in the product development cycle, requires a design tool to establish engineering priorities, select the preferred design concept, and set target levels of engineering performance while considering the needs of both the consumer and producer. The quality function deployment (QFD) method was developed as a design process tool to translate customer needs into engineering characteristics; however, limitations have been identified in using the QFD method for product planning. In this paper, a new design tool called product attribute function deployment (PAFD), based on the principles of decision-based design (DBD), is introduced as a decision-theoretic, enterprise-level process tool to guide the conceptual design phase. The PAFD method extends the qualitative matrix principles of QFD while utilizing the quantitative decision-making processes of DBD. The PAFD method is built upon established methods in engineering, marketing, and decision analysis to eliminate the need for user ratings and rankings of performance, priority, and attribute coupling in the QFD method. The differences between the QFD and PAFD processes are compared and contrasted, and the conceptual design of an automotive manifold absolute pressure sensor is used as a case study to demonstrate the features and benefits of the PAFD method.

8.
Hu Yong, Huang Benxiong. 《信息通信》 (Information & Communications), 2007, 20(2): 68-70
With the rapid development of wireless networks, improving the transmission performance of TCP over wireless links has become a hot research topic both in China and abroad. This article analyzes several existing representative TCP enhancement schemes and, on that basis, introduces a new cross-layer scheme. By inserting an ARQ Snoop agent between the transport layer and the link layer, ARQ segments are detected and retransmitted at the link layer while the ARQ mechanism of the WLAN MAC sublayer is coordinated with TCP's ARQ strategy.
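For context, the classic Snoop idea that this cross-layer scheme builds on can be sketched as follows: an agent at the base station caches TCP segments heading to the wireless host and, on duplicate ACKs, retransmits them locally instead of letting the fixed sender's end-to-end recovery kick in. The class interface, threshold and toy usage below are assumptions for illustration, not the scheme from the article.

```python
# Sketch of a Snoop-style agent: cache TCP segments at the base station and,
# on duplicate ACKs, retransmit locally over the WLAN link while suppressing
# the duplicate ACK so the fixed sender does not reduce its window.

class SnoopAgent:
    def __init__(self, wireless_send):
        self.cache = {}          # seq -> segment bytes, awaiting cumulative ACK
        self.dup_acks = 0
        self.last_ack = -1
        self.wireless_send = wireless_send

    def on_data_from_sender(self, seq, segment):
        self.cache[seq] = segment              # snoop a copy before forwarding
        self.wireless_send(seq, segment)

    def on_ack_from_mobile(self, ack):
        if ack > self.last_ack:                # new cumulative ACK: drop acked segments
            self.cache = {s: d for s, d in self.cache.items() if s >= ack}
            self.last_ack, self.dup_acks = ack, 0
            return True                        # forward ACK to the fixed sender
        self.dup_acks += 1
        if self.dup_acks >= 1 and ack in self.cache:
            self.wireless_send(ack, self.cache[ack])   # local link-layer retransmission
        return False                           # suppress dup ACK at the sender

sent = []
agent = SnoopAgent(lambda seq, seg: sent.append(seq))
agent.on_data_from_sender(0, b"x"); agent.on_data_from_sender(1, b"y")
agent.on_ack_from_mobile(1)   # segment 0 delivered
agent.on_ack_from_mobile(1)   # dup ACK: segment 1 resent locally, dup ACK suppressed
print(sent)                   # [0, 1, 1]
```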

9.
This paper presents an analysis of several Transmission Control Protocol (TCP) variants working over a digital video broadcasting-second generation (DVB-S2) satellite link with the support of the Differentiated Services (DiffServ) architecture to provide quality of service (QoS). This analysis is carried out using the NS-2 simulator tool. Three TCP variants are considered: SACK TCP, Hybla TCP, and CUBIC TCP. These TCP variants are taken as a starting point because they have proven to be the most suitable variants to deal with the long delays present in satellite links. The DVB-S2 link also introduces the challenge of dealing with variable bandwidth, whereas the DiffServ architecture introduces the challenge of dealing with different priorities. In this paper, we propose a DiffServ model that includes a modified queuing mechanism to enhance the goodput of the assured forwarding traffic class. This modified DiffServ model is simulated and tested, considering the interaction of the selected TCP variants. In addition, we present evaluation metrics, significant simulation results, and conclusions about the performance of these TCP variants evaluated over the proposed scenario. As a general conclusion, we show that CUBIC TCP is the TCP variant that shows the best performance in terms of goodput, latency, and friendliness. Copyright © 2012 John Wiley & Sons, Ltd.
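As background, a DiffServ node keeps separate per-class queues (e.g. EF, AF, BE) and serves them with some scheduling discipline; the paper's modified queuing mechanism for the assured forwarding class is not reproduced here. The sketch below is a generic weighted round-robin over three class queues with invented weights.

```python
# Generic sketch of per-class DiffServ queuing with EF, AF and BE queues
# served by weighted round-robin. Weights and packet labels are illustrative.
from collections import deque

queues  = {"EF": deque(), "AF": deque(), "BE": deque()}
weights = {"EF": 4, "AF": 3, "BE": 1}      # relative service shares (assumed)

def enqueue(dscp_class, packet):
    queues[dscp_class].append(packet)

def dequeue_round():
    """One WRR round: serve up to `weight` packets from each class in order."""
    served = []
    for cls in ("EF", "AF", "BE"):
        for _ in range(weights[cls]):
            if queues[cls]:
                served.append(queues[cls].popleft())
    return served

for i in range(6):
    enqueue("AF", f"af{i}")
    enqueue("BE", f"be{i}")
print(dequeue_round())  # ['af0', 'af1', 'af2', 'be0'] -- AF gets 3x the BE share
```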

10.
This paper will start with a discussion of why probe yield (the number of good chips per silicon wafer) is so important to financial success in integrated circuit manufacturing. Actual data will be quoted and a numerical example shown. A simple model will be given to demonstrate the main factors influencing yield and the relationship between yield and reliability of the final product. In the last few years a range of new tools have been deployed in manufacturing, and these have accelerated the pace of yield improvement, thus increasing competitive pressures. These tools will be described, along with examples of their use. Topics will include in-line inspection and control, automatic defect classification and data mining techniques. A proposal is made to extend these tools to the improvement of reliability of products already in manufacturing by maintaining absolute chip identity throughout the entire wafer fabrication, packaging and final testing steps.
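The abstract does not state which yield model it uses; a common textbook choice for relating die area and defect density to probe yield is the Poisson model Y = exp(-A·D0). The sketch below evaluates it for illustrative numbers only.

```python
# Textbook Poisson die-yield model: Y = exp(-A * D0), where A is die area
# (cm^2) and D0 is the killer-defect density (defects/cm^2). Numbers are
# illustrative and not taken from the paper.
import math

def poisson_yield(die_area_cm2, defect_density_per_cm2):
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

def good_dice(wafer_dice, die_area_cm2, d0):
    return wafer_dice * poisson_yield(die_area_cm2, d0)

for d0 in (0.2, 0.5, 1.0):
    print(f"D0={d0}: yield={poisson_yield(1.0, d0):.2%}, "
          f"good dice per wafer of 600 candidates ~ {good_dice(600, 1.0, d0):.0f}")
```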

11.
Chip-on-film (COF) and tape carrier package (TCP) are the packaging technologies commonly used for LCD driver ICs. Unlike the micro solder-ball techniques used in conventional packaging, the COF and TCP packaging processes use inner lead bonding (ILB) to make the electrical connection between the driver chip and the external circuitry, so the reliability of the ILB process is critical to package quality. Using an improved Taguchi design-of-experiments method combined with actual production data, an optimized production process was obtained, and the experimental parameters were verified by FEA (finite element) simulation. Actual production results show that the reliability of the ILB leads is greatly improved.
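For reference, the basic quantity behind a Taguchi design of experiments is the signal-to-noise ratio; for a larger-is-better response it is S/N = -10·log10(mean(1/y²)). The paper's improved Taguchi procedure is not reproduced here, and the pull-strength readings below are invented for illustration.

```python
# Standard Taguchi larger-is-better signal-to-noise ratio:
# S/N = -10 * log10(mean(1 / y_i^2)). Measurements are illustrative only.
import math

def sn_larger_is_better(measurements):
    return -10.0 * math.log10(sum(1.0 / (y * y) for y in measurements) / len(measurements))

trial_a = [48.0, 51.0, 50.0]   # e.g. hypothetical ILB lead pull strength under setting A
trial_b = [38.0, 60.0, 52.0]   # similar mean but much larger spread

print(f"A: {sn_larger_is_better(trial_a):.2f} dB")  # higher S/N -> more robust setting
print(f"B: {sn_larger_is_better(trial_b):.2f} dB")
```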

12.
When supporting both voice and TCP in a wireless multihop network, there are two conflicting goals: to protect the VoIP traffic and to completely utilize the remaining capacity for TCP. We investigate the interaction between these two popular categories of traffic and find that many solution approaches, such as enhanced TCP variants, priority queues, bandwidth limitation, and traffic shaping, do not always achieve the coexistence goals. Enhanced TCP variants (Reno, Vegas, C-TCP, CUBIC, Westwood) generally fail to protect VoIP in wired-wireless multihop scenarios. Priority schemes, including those built into the 802.11 MAC such as RTS/CTS or 802.11e, do not account for the interference nature of wireless multihop networks. Finally, bandwidth shaping and window control are valid tools to control TCP, but come with their own trade-offs.

13.
All over the world, Global System for Mobile Communication (GSM) cellular mobile networks have been upgraded to support the "always-on" general packet radio service (GPRS). Despite the apparent availability of levels of bandwidth not dissimilar to that provided by conventional fixed-wire telephone modems, the user experience over GPRS is still considerably poorer. In this paper, we examine the performance of protocols such as the transmission control protocol (TCP) over GPRS, and show how certain network characteristics interact badly with TCP to yield problems such as: link underutilization for short-lived flows, excess queueing for long-lived flows, acknowledgment bunching, poor loss recovery, and gross unfairness between competing flows. We present the design and implementation of a transparent TCP proxy that mitigates many of these problems without requiring any changes to the TCP implementations in either mobile or fixed-wire end systems. The proxy is interposed in the cellular provider's network, and splits TCP connections transparently into two halves: the wired and wireless sides. Connections destined for the same mobile host are treated as an aggregate due to their statistical dependence. We demonstrate packet scheduling and flow control algorithms that use information shared between the connections to maximize performance of the wireless link, while interworking with unmodified TCP peers. We also demonstrate how fairness between flows and response to loss are improved, and that queueing and, hence, network latency are reduced. We discuss how TCP-enhancing proxies could be transparently deployed, and conclude that installing such a proxy into a GPRS network would be of significant benefit to users.

14.
An adaptive layout partitioning method for dark-field alternating phase-shift mask design
A new layout partitioning method is proposed to accelerate the alternating phase-shift mask design flow for processes at 130 nm and below; the method can adaptively adjust the granularity of the layout partitioning. Approaches for eliminating phase conflicts and strategies for preserving phase compatibility during layout compaction are discussed. Tests on several industrial cases show that a prototype CAD system implementing these algorithms copes effectively with the phase-conflict complexity that grows rapidly with layout size, while providing good PSM design quality and meeting different requirements for solution accuracy and speedup.
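As background for the phase-conflict problem the abstract mentions: assigning alternating (0/180 degree) phases amounts to 2-coloring a conflict graph over critical features, and an unresolvable conflict corresponds to an odd cycle. The sketch below is a plain bipartiteness check on an illustrative graph; it is not the paper's adaptive partitioning algorithm.

```python
# Alternating-PSM phase assignment as graph 2-coloring: adjacent critical
# features must get opposite phases; a phase conflict exists exactly when the
# conflict graph contains an odd cycle. Minimal BFS bipartiteness check.
from collections import deque

def assign_phases(features, conflict_edges):
    adj = {f: [] for f in features}
    for a, b in conflict_edges:
        adj[a].append(b)
        adj[b].append(a)
    phase = {}
    for start in features:
        if start in phase:
            continue
        phase[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in phase:
                    phase[v] = 1 - phase[u]
                    queue.append(v)
                elif phase[v] == phase[u]:
                    return None          # odd cycle -> unresolvable phase conflict
    return phase

print(assign_phases("abcd", [("a", "b"), ("b", "c"), ("c", "d")]))   # 2-colorable
print(assign_phases("abc",  [("a", "b"), ("b", "c"), ("c", "a")]))   # None: odd cycle
```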

15.
This paper focuses on the product design process and integration of environmental performance criteria during this critical stage. Specifically, the authors explore environmentally responsible manufacturing (ERM) as perceived and acted on by two critical groups within this design process. The first consists of the champions and supporters of ERM, who either formally or informally act as advocates of ERM within the organization. The second consists of the users of design for environment (DfE) tools and procedures. Typically, these people consist of product designers and design engineers. The authors study these two groups through in-depth interviews with managers and engineers in a sample of ten firms drawn from the “best-in-class” environmental leaders. They found that a large gap exists between the ERM supporters and the users of ERM tools in terms of expectations, perceptions and orientations toward ERM principles, practices and tools. To overcome some of the many obstacles preventing effective integration of environmental criteria into the design process, they conclude with a process map that proposes the following steps: (1) enlisting support of a corporate champion; (2) defining environmental goals; (3) selecting a pilot project; (4) setting product launch goals and evaluation system; (5) enlisting support of team members; (6) providing DfE tools and training; (7) monitoring the project; and (8) celebrating successes.

16.
The TCP error control mechanism lacks the ability to detect with precision the nature of potential errors during communication. It is only capable of detecting the results of the errors, namely that segments are dropped. As a result, the protocol lacks the ability to implement an appropriate error recovery strategy cognizant of current network conditions and responsive to the distinctive error characteristics of the communication channel: the TCP sender always responds by shrinking the sending window. We show that probing mechanisms could enhance the error detection capabilities of the protocol. TCP could then flexibly adjust its window in a manner that permits the available bandwidth to be exploited without violating the requirements of stability, efficiency and fairness that need to be guaranteed during congestion. Our experiments have three distinct goals. First, to demonstrate the potential contribution of probing mechanisms: a simple probing mechanism and an immediate recovery strategy are grafted into TCP-Tahoe and TCP-Reno, and we show that, this way, standard TCP can improve its performance without requiring any further change. Second, to study the performance of adaptive strategies: an adaptive TCP with probing, responsive to the detected error conditions by alternating slow start, fast recovery and immediate recovery, is used; such an adaptive error recovery strategy can yield better performance. Third, to study the design limitations of the probing device itself: the aggressive or conservative nature of the probing mechanism can determine the aggressive or conservative behaviour of the protocol and, accordingly, exploit the energy/throughput trade-off. Copyright © 2002 John Wiley & Sons, Ltd.

17.
《Mechatronics》1999,9(4):349-361
Older machine tools are often mechanically sound, but their proprietary controls are outdated and can be economically upgraded to modern open architecture control. The objective of this paper is to present a procedure for integrating these new flexible control architectures into machine tools. As an example, this paper presents the integration of an open architecture controller and a diamond turning machine. The open architecture control permits research into various types of control, such as position, velocity, force, and adaptive control, that will improve the quality of manufactured optics and other high precision components. The goal of incorporating an open architecture controller into an existing diamond turning machine required the identification, integration and installation of many components. After installing these components, the machine was calibrated to improve its performance. The most important aspect in improving the overall performance of the machine was correcting its vibration characteristics. Spindle imbalance, the spindle shaft coupler and the table and slide amplifiers were found to play a significant role in producing vibration. The sidewise amplitude of acceleration was reduced from approximately 2.5 to 0.6 g due to the reduction in vibration after correcting these sources. The remaining level of vibration is normal for this type of diamond turning machine, allowing it to produce finishes with 90.24 nm (3.56 μin) RMS surface roughness using a radius tool and 24.13 nm (0.95 μin) RMS with the straight-edge tool.

18.
Nowadays, TFT-LCD manufacturing has become a very complex process, in which many different products are manufactured with many different tools. The ability to predict the quality of product in such a high-mix system is critical to developing and maintaining a high yield. In this paper, a statistical method is proposed for building a virtual metrology model from a number of products using a high-mix manufacturing process. Stepwise regression is used to select key variables that really affect the quality of th...
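A minimal sketch of the variable-selection step: forward stepwise selection greedily adds the process variable that most improves R² until the gain falls below a threshold. The synthetic data and the R²-gain stopping rule are simplifying assumptions; a full stepwise regression would also apply entry/removal significance tests.

```python
# Forward stepwise selection for a virtual-metrology style model: greedily add
# the process variable that most improves R^2. Synthetic data; the stopping
# rule is a simplification of full stepwise regression.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                                      # 8 candidate tool/process variables
y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.1 * rng.normal(size=300)     # only variables 1 and 4 matter

def forward_stepwise(X, y, min_gain=0.01):
    selected, best_r2 = [], 0.0
    while True:
        gains = {}
        for j in range(X.shape[1]):
            if j in selected:
                continue
            cols = selected + [j]
            r2 = LinearRegression().fit(X[:, cols], y).score(X[:, cols], y)
            gains[j] = r2 - best_r2
        j_best = max(gains, key=gains.get)
        if gains[j_best] < min_gain:
            return selected
        selected.append(j_best)
        best_r2 += gains[j_best]

print(forward_stepwise(X, y))   # expected to pick columns 1 and 4
```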

19.
In this paper, we propose a data embedding method based on the human visual system (HVS) and quadtree partitioning. For most HVS-based methods, the amount of embedded data is based on the measurement of differences of pixel pairs or the standard deviation of image blocks. However, these methods often result in larger image distortion and are vulnerable to statistical attacks. The proposed method employs a specially designed function to measure the complexity of image blocks, and uses quadtree partitioning to partition images into blocks of different sizes. Larger blocks are associated with smooth regions in images, whereas smaller blocks are associated with complex regions. Therefore, we embed less data into larger blocks to preserve the image quality and embed more data into smaller blocks to increase the payload. Data embedding is performed using the diamond encoding technique. Experimental results revealed that the proposed method provides better image quality and offers higher payload compared to other HVS-based methods.
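The sketch below shows the quadtree partitioning idea: recursively split a block into four quadrants while a complexity measure exceeds a threshold, so smooth regions stay as large blocks and busy regions become small ones. Variance stands in for the paper's specially designed complexity function, and the threshold, block sizes and toy image are assumptions.

```python
# Minimal quadtree partitioning sketch: split a block into four quadrants
# while its "complexity" (here: variance, as a stand-in for the paper's
# function) exceeds a threshold, down to a minimum block size.
import numpy as np

def quadtree(block, x, y, min_size=4, threshold=100.0):
    h, w = block.shape
    if h <= min_size or np.var(block) <= threshold:
        return [(x, y, w, h)]                     # leaf: keep this block whole
    hh, hw = h // 2, w // 2
    leaves = []
    for dy, dx in ((0, 0), (0, hw), (hh, 0), (hh, hw)):
        leaves += quadtree(block[dy:dy + hh, dx:dx + hw], x + dx, y + dy,
                           min_size, threshold)
    return leaves

rng = np.random.default_rng(0)
img = np.zeros((16, 16))
img[8:, 8:] = rng.integers(0, 256, size=(8, 8))   # one noisy (complex) quadrant
for leaf in quadtree(img, 0, 0):
    print(leaf)   # large leaves in smooth areas, 4x4 leaves in the noisy corner
```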

20.
With the exponential growth of the internet, wireless networks such as satellite networks are becoming increasingly popular. The characteristics of satellite networks, such as long latency, large delay-bandwidth product, high bit error rate over satellite links and variable round trip time, severely degrade TCP/IP performance. At the junction of the satellite link and the fixed link, the base station, the difference in capacity between the satellite link and the fixed link causes the base station to experience congestion losses that adversely impact TCP performance. We propose a technique that substantially reduces the congestion at the base station and enforces fairness among the TCP connections that are sharing the satellite link. The technique does not require any change in the TCP sender or the receiver. The stability of our algorithm is analytically proven and its performance is evaluated using ns-2 simulations. Preliminary results yield an almost null congestion loss rate, a 60% decrease in average queue length, and more than a 30% increase in throughput. Fairness is well enforced.
