Similar Documents
20 similar documents found (search time: 15 ms)
1.
A methodology for the automated development of fuzzy expert systems is presented. The idea is to start with a crisp model described by crisp rules and then transform them into a set of fuzzy rules, thus creating a fuzzy model. The adjustment of the model's parameters is performed via a stochastic global optimization procedure. The proposed methodology is tested by applying it to problems related to cardiovascular diseases, such as automated arrhythmic beat classification and automated ischemic beat classification, which, besides being well-known benchmarks, are of particular interest due to their obvious medical diagnostic importance. For both problems, the initial set of rules was determined by expert cardiologists, and the MIT-BIH arrhythmia database and the European ST-T database were used for optimizing the fuzzy model's parameters and evaluating the fuzzy expert system. In both cases, the results indicate an improvement in performance from the simple initial crisp model to the more sophisticated fuzzy models, demonstrating the added value of the proposed framework. Also, the ability to interpret the decisions of the created fuzzy expert systems is a major advantage compared to "black box" approaches such as neural networks.
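The pipeline the abstract describes (crisp rules made fuzzy, then tuned by stochastic global search) can be sketched as follows. This is a minimal illustration, not the paper's system: the feature names (RR interval, ST deviation), the single rule, and the plain random-perturbation search standing in for the paper's global optimizer are all assumptions for the example.

```python
import random

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_rule_score(features, params):
    """One fuzzified rule, 'RR short AND ST elevated -> abnormal beat',
    with AND taken as the minimum of the memberships."""
    rr, st = features
    m_rr = tri(rr, *params[0:3])
    m_st = tri(st, *params[3:6])
    return min(m_rr, m_st)

def classification_error(data, params, threshold=0.5):
    """Fraction of (features, label) pairs the fuzzy rule misclassifies."""
    wrong = 0
    for features, label in data:
        pred = 1 if fuzzy_rule_score(features, params) >= threshold else 0
        wrong += (pred != label)
    return wrong / len(data)

def random_search(data, init, iters=200, step=0.05, seed=0):
    """Toy stochastic optimizer: perturb membership parameters at random
    and keep any candidate that lowers the classification error."""
    rng = random.Random(seed)
    best, best_err = list(init), classification_error(data, init)
    for _ in range(iters):
        cand = [p + rng.uniform(-step, step) for p in best]
        err = classification_error(data, cand)
        if err < best_err:
            best, best_err = cand, err
    return best, best_err
```

Starting from expert-chosen parameters (the crisp rule boundaries) and letting the search only accept improvements mirrors the framework's "crisp first, then optimize" workflow.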

2.
Future computing devices are likely to be based on heterogeneous architectures, which comprise multi-core CPUs accompanied by GPUs or special-purpose accelerators. A challenging issue for such devices is how to effectively manage the resources to achieve high efficiency and low energy consumption. With multiple new programming models and advanced framework support for heterogeneous computing, many regular applications have benefited greatly from heterogeneous systems. However, migrating this success to irregular applications remains a challenge. An irregular program's attributes may vary during execution and are often unpredictable, making it difficult to allocate heterogeneous resources for the highest efficiency. Moreover, the irregularity in applications may cause control-flow divergence, load imbalance, and low efficiency in parallel execution. To resolve these issues, we propose phase-guided dynamic work partitioning, a lightweight and fast analysis technique that collects information during program phases at runtime in order to guide work partitioning in subsequent phases for more efficient work dispatching on heterogeneous systems. We implemented an adaptive runtime system based on this technique and used ray tracing to explore the performance potential of dynamic work distribution in our framework. Experiments show that this approach can run up to 5 times faster than the original system. The proposed techniques can be applied to other irregular applications with similar properties.
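The core feedback loop, measure one phase, then repartition the next so CPU and GPU finish together, can be sketched as below. This is an assumed simplification (per-item cost held fixed across phases, two devices, no dispatch overhead), not the paper's runtime system:

```python
def next_partition(cpu_share, cpu_time, gpu_time):
    """Given the fraction of work the CPU handled last phase and the measured
    phase times, return the CPU share that would equalize finish times next
    phase, assuming per-item cost stays similar between phases."""
    gpu_share = 1.0 - cpu_share
    cpu_rate = cpu_share / cpu_time   # work fraction per second on the CPU
    gpu_rate = gpu_share / gpu_time   # work fraction per second on the GPU
    return cpu_rate / (cpu_rate + gpu_rate)

def run_phases(cpu_cost, gpu_cost, phases=5, cpu_share=0.5):
    """Simulate phases with fixed device costs (seconds to process the whole
    workload alone); each phase's timings feed the next partition decision.
    Returns the per-phase latency (the slower side determines it)."""
    history = []
    for _ in range(phases):
        cpu_time = cpu_share * cpu_cost
        gpu_time = (1.0 - cpu_share) * gpu_cost
        history.append(max(cpu_time, gpu_time))
        cpu_share = next_partition(cpu_share, cpu_time, gpu_time)
    return history
```

With a GPU 4x faster than the CPU, a naive 50/50 split costs 2.0 s per phase; after one measured phase the split converges to 20/80 and the latency drops to the balanced 0.8 s, illustrating why phase feedback helps irregular workloads whose rates cannot be predicted offline.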

3.
Skin detection is frequently used as the first step for the tasks of face and gesture recognition in perceptual interfaces for human–computer interaction and communication. Thus, it is important for the researchers using skin detection to choose the optimal method for their specific task. In this paper, we propose a novel method of measuring the performance of skin detection for a task. We have created an evaluation framework for the task of hand detection and executed this assessment using a large dataset containing 17 million pixels from 225 images taken under various conditions. The parameter set of the skin detection has been trained extensively. Five colorspace transformations with and without the illuminance component, coupled with two color modeling approaches, have been evaluated. The results indicate that the best performance is achieved by transforming to SCT colorspace, using the illuminance component, and modeling the distribution with the histogram approach. Some conclusions, such as the SCT colorspace being one of the best colorspaces, are consistent with our previous work, while findings such as the YUV colorspace performing well in this work when it was one of the worst in our previous work are different. This indicates that the performance measured at the pixel-level might not be the ultimate indicator for the performance at the task-level of hand detection. We believe that the users of skin detection will find our task-based results to be more relevant than the traditional pixel-level results. However, we acknowledge that an evaluation is limited by its specific dataset and evaluation protocols.
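The "histogram approach" to skin color modeling can be sketched as a likelihood-ratio classifier over quantized color bins. For simplicity this sketch quantizes raw RGB rather than the SCT colorspace the paper found best; the bin size and threshold are illustrative assumptions:

```python
def quantize(pixel, bins=8):
    """Map an (r, g, b) pixel with 0-255 channels to a coarse bin index."""
    r, g, b = (c * bins // 256 for c in pixel)
    return (r * bins + g) * bins + b

def train_histograms(skin_pixels, nonskin_pixels, bins=8):
    """Build skin and non-skin color histograms from labeled pixels."""
    skin, nonskin = {}, {}
    for p in skin_pixels:
        k = quantize(p, bins)
        skin[k] = skin.get(k, 0) + 1
    for p in nonskin_pixels:
        k = quantize(p, bins)
        nonskin[k] = nonskin.get(k, 0) + 1
    return skin, nonskin, len(skin_pixels), len(nonskin_pixels)

def is_skin(pixel, model, threshold=1.0, bins=8):
    """Classify by the likelihood ratio P(color|skin) / P(color|nonskin)."""
    skin, nonskin, n_skin, n_nonskin = model
    k = quantize(pixel, bins)
    p_s = skin.get(k, 0) / n_skin
    p_n = nonskin.get(k, 0) / n_nonskin
    if p_n == 0:
        return p_s > 0
    return p_s / p_n > threshold
```

Sweeping the threshold trades false positives for false negatives at the pixel level; the paper's point is that the right operating point should be picked by task-level (hand detection) evaluation rather than pixel-level scores alone.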

4.
Recent work has shown that multicell cooperative signal processing in cellular networks can significantly increase system capacity and fairness. For example, multicell joint transmission and joint detection can be performed to combat intercell interference, often mentioned in the context of distributed antenna systems. Most publications in this field assume that an infinite amount of information can be exchanged between the cooperating base stations, neglecting the main downside of such systems, namely, the need for an additional network backhaul. In recent publications, we have thus proposed an optimization framework and algorithm that applies multicell signal processing to only a carefully selected subset of users for cellular systems with a strongly constrained backhaul. In this paper, we consider the cellular downlink and provide a comprehensive summary and extension of our previous and current work. We compare the performance obtained through centralized or decentralized optimization approaches, or through optimal or suboptimal calculation of precoding matrices, and identify reasonable performance–complexity trade-offs. It is shown that even low-complexity optimization approaches for cellular systems with a strongly constrained backhaul can yield major performance improvements over conventional systems.
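Selecting a subset of users for cooperation under a backhaul budget can be illustrated with a simple greedy knapsack heuristic: order users by rate gain per unit of backhaul cost and add them until the budget is spent. This is an assumed low-complexity stand-in, not the paper's optimization algorithm; the user tuples and units are invented for the example:

```python
def select_users(users, budget):
    """Greedy subset selection for multicell cooperation.
    users: list of (name, rate_gain, backhaul_cost) tuples;
    budget: total backhaul capacity available for cooperation.
    Returns the chosen user names and their total rate gain."""
    chosen, used, total_gain = [], 0.0, 0.0
    # Best gain-per-backhaul-cost first.
    for name, gain, cost in sorted(users, key=lambda u: u[1] / u[2], reverse=True):
        if used + cost <= budget:
            chosen.append(name)
            used += cost
            total_gain += gain
    return chosen, total_gain
```

Users whose interference situation makes joint processing most valuable per backhaul bit are served cooperatively; the rest fall back to conventional single-cell transmission, which is the performance-complexity trade-off the abstract describes.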

5.
In wireless networks, real‐time applications have strict QoS requirements for packet delay, packet loss, and reliability. However, most existing work has not considered these QoS metrics when allocating wireless resources, so the QoS requirements of real‐time applications may not be satisfied. To overcome this shortcoming, a rate and power allocation framework incorporating these QoS metrics is first proposed for slow‐fading systems. Second, two distributed algorithms are developed to solve this optimization framework even though it is nonconvex and nonseparable. Third, an improved framework is proposed to deal with rate and power allocation with QoS requirements for fast‐fading systems. It is shown that the fast‐fading state of the network does not need to be considered in this improved framework, and it can be solved using algorithms similar to those for the slow‐fading framework. Finally, simulations show that our algorithms converge closely to the globally optimal solution. By comparison with an existing model, simulations also verify the validity of our frameworks in dealing with rate and power allocation with QoS requirements. Copyright © 2012 John Wiley & Sons, Ltd.
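For context, the classical convex baseline for rate/power allocation (without the paper's QoS terms, which make the problem nonconvex) is water-filling: maximize the sum rate subject to a total power constraint. The bisection below is a standard textbook construction, not the paper's distributed algorithm:

```python
def water_filling(gains, total_power, tol=1e-9):
    """Allocate power over channels with gains g_i to maximize
    sum log(1 + g_i * p_i) subject to sum p_i = total_power.
    Optimal p_i = max(0, mu - 1/g_i); find the water level mu by bisection."""
    lo, hi = 0.0, total_power + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - 1.0 / g) for g in gains)
        if used > total_power:
            hi = mu
        else:
            lo = mu
    mu = (lo + hi) / 2
    return [max(0.0, mu - 1.0 / g) for g in gains]
```

Equal-gain channels split power evenly, while a very weak channel is shut off entirely; adding per-user delay and loss constraints, as the paper does, couples the users and breaks this separable structure.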

6.
In our previous work, we developed a rate-distortion (R-D) modeling framework for H.263 video coding by introducing the new concepts of characteristic rate curves and rate-curve decomposition. In this paper, we further show that it is a unified R-D analysis framework for all typical image/video transform coding systems, such as EZW, SPIHT, and JPEG image coding, and MPEG-2, H.263, and MPEG-4 video coding. Based on this framework, a unified R-D estimation and control algorithm is proposed for all typical transform coding systems. We also provide a theoretical justification for the unique properties of the characteristic rate curves. A linear rate regulation scheme is designed to further improve the estimation accuracy and robustness, as well as to reduce the computational complexity of the R-D estimation algorithm. Our extensive experimental results show that, with the proposed algorithm, we can accurately estimate the R-D functions and robustly control the output bit rate or picture quality of the image/video encoder.
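The flavor of model-based rate control can be shown with the common hyperbolic rate-quantizer approximation R(q) = a/q + b: fit the two parameters from probe encodings, then invert the model to pick the quantizer step for a target rate. This is a generic textbook model, not necessarily the paper's characteristic-rate-curve construction:

```python
def fit_rq(q1, r1, q2, r2):
    """Fit R(q) = a/q + b from two probe encodings
    (q = quantizer step, r = measured rate)."""
    a = (r1 - r2) / (1.0 / q1 - 1.0 / q2)
    b = r1 - a / q1
    return a, b

def q_for_rate(a, b, target_rate):
    """Invert R(q) = a/q + b to get the quantizer step for a target rate."""
    return a / (target_rate - b)
```

In a real controller the fit would be refreshed every frame or macroblock from the running statistics, which is where estimation accuracy and robustness (the abstract's linear rate regulation scheme) matter.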

7.
In this paper, we present the design and implementation of a cross-layer framework for evaluating power and performance tradeoffs for video streaming to mobile handheld systems. We utilize a distributed middleware layer to perform joint adaptations at all levels of the system hierarchy (applications, middleware, OS, network, and hardware) for optimized performance and energy benefits. Our framework utilizes an intermediate server in close proximity to the mobile device to perform end-to-end adaptations such as admission control, intelligent network transmission, and dynamic video transcoding. The knowledge of these adaptations is then used to drive "on-device" adaptations, which include CPU voltage scaling through OS-based soft real-time scheduling, LCD backlight intensity adaptation, and network card power management. We first present and evaluate each of these adaptations individually and subsequently report the performance of the joint adaptations. We have implemented our cross-layer framework (called DYNAMO) and evaluated it on a Compaq iPaq running Linux using streaming video applications. Our experimental results show that such joint adaptations can result in energy savings as high as 54% over the case where no optimizations are used, while substantially enhancing the user experience on handheld systems.
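The kind of accounting behind a headline number like "54% energy savings" can be sketched as a per-component power budget evaluated with and without the adaptations. The component names and milliwatt figures below are illustrative assumptions, not DYNAMO's measured values:

```python
def energy_savings(baseline_mw, adapted_mw, duration_s):
    """Compare total energy (joules) over a streaming session for two
    per-component average power budgets given in milliwatts, e.g.
    {'cpu': ..., 'lcd': ..., 'nic': ...}. Returns (joules saved, % saved)."""
    base = sum(baseline_mw.values()) * duration_s / 1000.0
    adapted = sum(adapted_mw.values()) * duration_s / 1000.0
    return base - adapted, 100.0 * (base - adapted) / base
```

Breaking the budget down by component is what lets a cross-layer design attribute savings to each adaptation (CPU scaling, backlight dimming, NIC power management) before reporting the joint result.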

8.
In modern communication systems, different users have different requirements for quality of service (QoS). In this work, QoS refers to the average codeword error probability experienced by the users in the network. Although several practical schemes (collectively referred to as unequal error protection schemes) have been studied in the literature and are implemented in existing systems, the corresponding performance limits have not been studied in an information-theoretic framework. In this paper, an information-theoretic framework is considered to study communication systems which provide heterogeneous reliabilities for the users. This is done by defining individual probabilities of error for the users in the network and obtaining the fundamental tradeoffs of the corresponding error exponents. In particular, we quantify the reliability tradeoff by introducing the notion of error exponent region (EER), which specifies the set of error exponent vectors that are simultaneously achievable by the users for a fixed vector of users' rates. We show the existence of a tradeoff among the users' error exponents by deriving inner and outer bounds for the EER. Using this framework, a system can be realized, which can provide a tradeoff of reliabilities among the users for a fixed vector of users' rates. This adds a completely new dimension to the performance tradeoff in such networks, which is unique to multiterminal communication systems, and is beyond what is given by the conventional performance-versus-rate tradeoff in single-user systems. Although this is a very general concept and can be applied to any multiterminal communication system, in this paper we consider Gaussian broadcast and multiple-access channels (MACs).

9.

10.
Validation and verification (V&V) are procedures used to evaluate system structure or behavior with respect to a set of requirements. Although expert systems are often developed as a series of prototypes without requirements, it is not possible to perform V&V on any system for which requirements have not been prepared. In addition, there are special problems associated with the evaluation of expert systems that do not arise in the evaluation of conventional systems, such as verification of the completeness and accuracy of the knowledge base. The criticality of most National Aeronautics and Space Administration (NASA) missions makes it important to be able to certify the performance of the expert systems used to support these missions. This paper presents recommendations for the most appropriate methods for integrating V&V into the Expert System Development Methodology (ESDM) and suggestions for the most suitable approaches for each stage of ESDM development.
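One concrete slice of knowledge-base verification, flagging duplicate and directly contradictory rules, can be sketched as a static check over the rule base. The rule encoding (condition literals, attribute/value conclusions) is an assumption for the example, not part of ESDM:

```python
def check_rule_base(rules):
    """Static verification of a rule base. Each rule is (conditions, conclusion):
    conditions is a set of (attribute, value) literals, conclusion is one
    (attribute, value) pair. Flags duplicate rules and direct contradictions
    (same conditions, same attribute, opposite values)."""
    issues = []
    seen = {}  # frozenset of conditions -> (rule index, conclusion)
    for i, (conds, concl) in enumerate(rules):
        key = frozenset(conds)
        if key in seen:
            j, prev = seen[key]
            if prev == concl:
                issues.append(("duplicate", j, i))
            elif prev[0] == concl[0] and prev[1] != concl[1]:
                issues.append(("contradiction", j, i))
        else:
            seen[key] = (i, concl)
    return issues
```

Completeness checks (unreachable rules, attribute values no rule covers) would follow the same pattern; the point is that such checks only make sense once the requirements, here the intended attribute vocabulary, have been written down, which is the abstract's argument.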

11.
A co-synthesis approach to embedded system design automation
Embedded systems are targeted for specific applications under constraints on the relative timing of their actions. For such systems, the use of pre-designed reprogrammable components such as microprocessors provides an effective way to reduce system cost by implementing part of the functionality as a program running on the processor. However, dedicated hardware is often necessary to achieve the requisite timing performance. Analysis of timing constraints is, therefore, key to determining an efficient hardware-software implementation. In this paper, we present a methodology for embedded system design as a co-synthesis of interacting hardware and software components. We present a decomposition of the co-synthesis problem into sub-problems that is useful in building a framework for embedded system CAD. In particular, we present operation-level timing constraints and develop the notion of satisfiability of constraints by a given implementation, in both the deterministic and probabilistic sense. Constraint satisfiability analysis is then used to define the hardware and software portions of the functionality. We describe algorithms and techniques used in developing a practical co-synthesis framework, Vulcan. Examples are presented to show the utility of our approach.
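Operation-level timing constraints of the form "t_j - t_i <= w" are a system of difference constraints, and the standard satisfiability test is to look for a negative-weight cycle in the constraint graph with Bellman-Ford. This is the classical deterministic check, a sketch of the idea rather than Vulcan's full (deterministic plus probabilistic) analysis:

```python
def constraints_satisfiable(n_ops, constraints):
    """Test whether operation start times t_0..t_{n_ops-1} exist satisfying
    every (i, j, w) constraint t_j - t_i <= w. Equivalent to the constraint
    graph having no negative-weight cycle; run Bellman-Ford from a virtual
    source that reaches every operation with distance 0."""
    dist = [0.0] * n_ops
    for _ in range(n_ops):
        changed = False
        for i, j, w in constraints:
            if dist[i] + w < dist[j]:
                dist[j] = dist[i] + w
                changed = True
        if not changed:
            return True
    # If anything still relaxes after n_ops passes, a negative cycle exists.
    return not any(dist[i] + w < dist[j] for i, j, w in constraints)
```

A min-separation requirement "t_j >= t_i + d" is encoded as (j, i, -d), so a max-delay bound together with a larger min-delay on the same pair produces the infeasible (negative) cycle the check detects.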

12.
13.
Membrane systems, commonly known as P systems, are a class of distributed parallel computing models. This paper proposes a new variant of tissue-like membrane systems: homeostatic tissue membrane systems with object evolution rules. In these systems, the assumption that the environment contains arbitrarily many copies of each object is removed, and object evolution rules are introduced. By simulating register machines, it is proved that any Turing-computable number can be generated by such membrane systems. To build computing systems with better fault tolerance, the notion of time-freeness is introduced into these systems, and it is proved that, in the time-free mode, the constructed recognizer homeostatic tissue membrane systems can obtain a uniform solution to the 3-coloring problem in linear time. These results show that this model solves NP-complete problems with good computational efficiency.
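The basic computational step of such systems, multiset rewriting applied in a maximally parallel way within a cell, can be sketched as follows. This is a heavy simplification (a single cell, rules tried in a fixed order instead of nondeterministically, no inter-cell communication), intended only to illustrate how object evolution rules consume and produce multisets:

```python
from collections import Counter

def apply_rules_max_parallel(cell, rules):
    """One maximally parallel evolution step in a single cell.
    cell: multiset of objects (dict/Counter of name -> count).
    rules: list of (consumed, produced) Counter pairs; each rule fires as
    many times as the remaining objects allow. Products only become
    available after the step, as in membrane computing semantics."""
    cell = Counter(cell)
    produced = Counter()
    for consume, produce in rules:
        while all(cell[o] >= n for o, n in consume.items()):
            for o, n in consume.items():
                cell[o] -= n
            for o, n in produce.items():
                produced[o] += n
    cell += produced
    return +cell  # drop zero counts
```

Time-freeness, in this picture, means the result of the computation must not depend on how long each rule application takes, which is what makes such systems attractive for fault tolerance.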

14.
The Effect of Narrowband Interference on Wideband Wireless Communication Systems
This paper evaluates the performance of wideband communication systems in the presence of narrowband interference. In particular, we derive closed-form bit-error probability expressions for spread-spectrum systems by approximating narrowband interferers as independent asynchronous tone interferers. The scenarios considered include additive white Gaussian noise channels, flat-fading channels, and frequency-selective multipath fading channels. For multipath fading channels, we develop a new analytical framework based on perturbation theory to analyze the performance of a Rake receiver in Nakagami-$m$ channels. Simulation results for narrowband interference such as GSM and Bluetooth are in good agreement with our analytical results, showing that the developed approach is useful for investigating the coexistence of ultrawide-bandwidth systems with existing wireless systems.
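A first-order feel for interference-limited BER can be had by treating the narrowband interferer as extra Gaussian noise, a much cruder approximation than the paper's tone-interferer analysis, shown here only as a baseline. BPSK over AWGN and the interference-to-noise ratio I/N are the assumed setup:

```python
import math

def q_function(x):
    """Gaussian tail probability Q(x) = P(N(0,1) > x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def bpsk_ber(eb_n0, interference_to_noise=0.0):
    """BER of BPSK when a narrowband interferer is lumped into the noise:
    effective N0' = N0 * (1 + I/N), so Pb = Q(sqrt(2 * Eb/N0')).
    Valid only under the Gaussian approximation of the interference."""
    effective = eb_n0 / (1.0 + interference_to_noise)
    return q_function(math.sqrt(2.0 * effective))
```

Comparing such a Gaussian-lumped curve with a proper tone-interferer analysis is exactly the kind of question the closed-form expressions in the paper address.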

15.
A new and practical way of profiling R&D projects is presented. The operational context is the investment of discretionary R&D funds by systems engineering firms. In this work-program positioning profile, R&D projects are identified as one of four types (targeting, reinforcing, enabling, or remodeling), according to the relationship between the R&D project and the systems engineering work program. A portfolio map is introduced to provide a visual representation of a set of R&D projects and to show likely transitions between project types over time. The new profile and portfolio map can support management efforts to characterize and assess an R&D portfolio for its contribution to ongoing systems engineering work. In addition to this new project profile, other profiling strategies are discussed in the framework of an R&D balanced scorecard. The multiple profiles are a natural consequence of the inherent performance multidimensionality of R&D activity in the wide-ranging discipline and practice of systems engineering.

16.
Secure group communication in wireless mesh networks
Jing, Kurt, Cristina. Ad hoc Networks, 2009, 7(8): 1563-1576
Wireless mesh networks (WMNs) have emerged as a promising technology that offers low-cost community wireless services. The community-oriented nature of WMNs facilitates group applications, such as webcast, distance learning, online gaming, video conferencing, and multimedia broadcasting. Security is critical for the deployment of these services. Previous work focused primarily on MAC and routing protocol security, while application-level security has received relatively little attention. In this paper we focus on providing data confidentiality for group communication in WMNs. Compared to other network environments, WMNs present new challenges and opportunities in designing such protocols. We propose a new protocol framework, Secure Group Overlay Multicast (SeGrOM), that employs decentralized group membership, promotes localized communication, and leverages the wireless broadcast nature to achieve efficient and secure group communication. We analyze the performance and discuss the security properties of our protocols. We demonstrate through simulations that our protocols provide good performance and incur a significantly smaller overhead than a baseline centralized protocol optimized for WMNs.
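Why decentralized, localized membership reduces overhead can be illustrated by counting rekey messages after a member leaves. The message counts below follow a generic "local groups with heads" model assumed for illustration, not SeGrOM's actual protocol messages:

```python
def rekey_messages(groups, leaver):
    """Count rekey messages after one member leaves.
    groups: list of sets of member names (local groups, one head each).
    Centralized scheme: every remaining member must receive the new group key.
    Decentralized scheme: only the leaver's local group rekeys, plus one
    update per other group head for the inter-group key."""
    total = sum(len(g) for g in groups)
    centralized = total - 1
    local = next(g for g in groups if leaver in g)
    decentralized = (len(local) - 1) + (len(groups) - 1)
    return centralized, decentralized
```

As groups grow, the centralized count scales with total membership while the decentralized count scales with the local group size plus the number of groups, which is the overhead gap the simulations in the paper quantify.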

17.
An analysis of steganographic systems subject to the following perfect undetectability condition is presented in this paper. Following embedding of the message into the covertext, the resulting stegotext is required to have exactly the same probability distribution as the covertext. Then no statistical test can reliably detect the presence of the hidden message. We refer to such steganographic schemes as perfectly secure. A few such schemes have been proposed in recent literature, but they have vanishing rate. We prove that communication performance can potentially be vastly improved; specifically, our basic setup assumes independent and identically distributed (i.i.d.) covertext, and we construct perfectly secure steganographic codes from public watermarking codes using binning methods and randomized permutations of the code. The permutation is a secret key shared between encoder and decoder. We derive (positive) capacity and random-coding exponents for perfectly secure steganographic systems. The error exponents provide estimates of the code length required to achieve a target low error probability. In some applications, steganographic communication may be disrupted by an active warden, modeled here by a compound discrete memoryless channel (DMC). The transmitter and warden are subject to distortion constraints. We address the potential loss in communication performance due to the perfect-security requirement. This loss is the same as the loss obtained under a weaker order-1 steganographic requirement that would just require matching of first-order marginals of the covertext and stegotext distributions. Furthermore, no loss occurs if the covertext distribution is uniform and the distortion metric is cyclically symmetric; steganographic capacity is then achieved by randomized linear codes. Our framework may also be useful for developing computationally secure steganographic systems that have near-optimal communication performance.
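The distribution-matching idea is easiest to see in the uniform-covertext special case the paper highlights: if covertext symbols are i.i.d. uniform on {0..q-1}, then a message shifted by a shared uniform key is itself uniform, so the stegotext's distribution exactly matches the covertext's. The modular shift below is a toy one-time-pad stand-in for the paper's binning and randomized-permutation construction:

```python
import random

def make_key(n, q=256, seed=None):
    """Shared secret key: n i.i.d. uniform symbols on {0..q-1}."""
    rng = random.Random(seed)
    return [rng.randrange(q) for _ in range(n)]

def embed(message, key, q=256):
    """stegotext_i = (message_i + key_i) mod q. With a uniform key this is
    uniform on {0..q-1} regardless of the message, i.e. exactly the
    distribution of i.i.d. uniform covertext: perfectly undetectable
    in this special case."""
    return [(m + k) % q for m, k in zip(message, key)]

def extract(stegotext, key, q=256):
    """The decoder holding the key recovers the message exactly."""
    return [(s - k) % q for s, k in zip(stegotext, key)]
```

For non-uniform covertext distributions this trick fails, and that is where the paper's binning of watermarking codes and secret permutations come in.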

18.
Parallelization of Digital Signal Processing (DSP) software is an important trend in Multiprocessor System-on-Chip (MPSoC) implementation. The performance of DSP systems composed of parallelized computations depends on the scheduling technique, which must in general allocate computation and communication resources for competing tasks and ensure that data dependencies are satisfied. In this paper, we formulate a new type of parallel task scheduling problem called Parallel Actor Scheduling (PAS) for MPSoC mapping of DSP systems that are represented as Synchronous Dataflow (SDF) graphs. In contrast to traditional SDF-based scheduling techniques, which focus on exploiting graph-level (inter-actor) parallelism, the PAS problem targets the integrated exploitation of both intra- and inter-actor parallelism for platforms in which individual actors can be parallelized across multiple processing units. We first address a special case of the PAS problem in which all of the actors in the DSP application or subsystem being optimized are parallel actors (i.e., they can be parallelized to exploit multiple cores). For this special case, we develop and experimentally evaluate a two-phase scheduling framework with three workflows that involve particle swarm optimization (PSO): PSO with a mixed-integer programming formulation, PSO with simulated annealing, and PSO with a fast heuristic based on list scheduling. Then, we extend our scheduling framework to support the general PAS problem, which considers both parallel actors and sequential actors (actors that cannot be parallelized) in an integrated manner. We demonstrate that our PAS-targeted scheduling framework provides a useful range of trade-offs between synthesis time requirements and the quality of the derived solutions. We also demonstrate the performance of our scheduling framework from two aspects: simulations on a diverse set of randomly generated SDF graphs, and implementations of an image processing application and a software-defined radio benchmark on a state-of-the-art multicore DSP platform.
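The "fast heuristic based on list scheduling" that one of the workflows pairs with PSO can be sketched generically: repeatedly start the highest-priority ready task on the earliest-free core, respecting dependencies. This baseline ignores intra-actor parallelism (each task runs on one core) and uses longest-task-first priority, both assumptions of the sketch rather than details from the paper:

```python
def list_schedule(tasks, n_cores):
    """Greedy list scheduling. tasks: list of (name, duration, deps) where
    deps names tasks that must finish first. Returns (makespan, start_times).
    Priority: longest duration first among ready tasks."""
    info = {name: (dur, set(deps)) for name, dur, deps in tasks}
    finish, start = {}, {}
    core_free = [0.0] * n_cores
    pending = set(info)
    while pending:
        ready = [t for t in pending if info[t][1] <= set(finish)]
        if not ready:
            raise ValueError("dependency cycle")
        ready.sort(key=lambda t: -info[t][0])
        t = ready[0]
        core = min(range(n_cores), key=lambda c: core_free[c])
        earliest = max([core_free[core]] + [finish[d] for d in info[t][1]])
        start[t] = earliest
        finish[t] = earliest + info[t][0]
        core_free[core] = finish[t]
        pending.remove(t)
    return max(finish.values()), start
```

In the paper's two-phase framework, such a cheap evaluator can score candidate actor-to-core allocations inside the PSO loop, trading solution quality for much lower synthesis time than the mixed-integer programming workflow.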

19.
Studies have considered the possible performance improvements when smart antennas are used in packet-switched data networks [1, 2, 3]. This work has included systems which operate using various ALOHA, polling, and reservation-based protocols. Recently, however, a single-beam system was described which uses a smart antenna basestation to communicate with a set of stations using a carrier sense multiple access (CSMA) protocol [4]. In this system, performance improvements are obtained by having the antenna dynamically point pattern nulls in the direction of interfering stations, thus reducing the effects of channel collisions. In this paper, we consider the performance of CSMA systems where stations access a smart antenna basestation using multibeam SDMA. As in other SDMA networks, the objective is for the basestation to transmit or receive multiple packets simultaneously. A basic CSMA/SDMA protocol is proposed for this purpose. Note that unlike conventional systems, the CSMA objective of isolating a single successful transmission is not desirable. Instead, our protocol uses carrier-sensing to synchronize various smart antenna operations. We also present a more sophisticated CSMA/SDMA protocol which incorporates novel basestation/portable signalling that mitigates the effects of hidden stations. The proposed mechanism takes into account the transient connectivity of such systems, using the coherence time of the channel as an operating parameter. The performance of these systems is characterized and compared using analytical throughput/capacity models and mean delay simulations. It is shown that when hidden station effects are present, the capacity performance of the more sophisticated protocol may be much higher than that of the basic version.
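The gain from multipacket reception can be quantified in a highly idealized slot model: each of n stations transmits with probability p, and if at most K (the beam count) transmit, all succeed, otherwise the slot is lost. This toy abstraction ignores carrier sensing and hidden stations, the very effects the paper's protocols address, and is offered only to show why extra beams raise throughput:

```python
from math import comb

def sdma_throughput(n, p, beams):
    """Expected packets delivered per slot in the idealized multibeam SDMA
    model: success iff the number of simultaneous transmitters k satisfies
    1 <= k <= beams, in which case all k packets get through."""
    return sum(k * comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(1, beams + 1))
```

With beams = 1 the expression reduces to the familiar slotted single-reception throughput, and each extra beam converts would-be collisions into simultaneous successes.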

20.
Research on monitored grid‐connected PV systems can lead to improved performance of PV systems. This view is based on monitoring results from PV systems in Western Europe which lag behind the expected values. However, current methods for analysing these systems do not allow investigation of the potential system efficiency improvement on the basis of field experience. Hence, we have developed a method for analysing monitored grid‐connected PV systems which meets this need. In this method, the common technical approach to analysing PV systems is broadened with an economic assessment. First, an energy loss analysis of the PV system is made using its monitored data. In our analysis the energy loss effects in the PV system are split up by simulation. This provides a profound insight into the actual performance of the system. Next, measures to enhance the performance of the system are identified. The costs involved in improving the performance are analysed. Finally, the cost‐effectiveness of the potential improvements is calculated. In this paper we present our method, TEAMS. Although we will not formulate strict rules, we provide a well‐defined frame and structure for the application of the TEAMS method. It is shown that applying TEAMS contributes to improved transparency in the evaluation of monitored grid‐connected PV systems. Copyright © 1999 John Wiley & Sons, Ltd.
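The technical and economic halves of such an analysis rest on two standard quantities: the performance ratio (measured yield versus the yield the irradiation would give at the rated STC efficiency) and the cost per extra kWh of a proposed improvement. These are generic PV metrics consistent with the abstract's steps, not TEAMS's specific formulas:

```python
def performance_ratio(yield_kwh, rated_kwp, irradiation_kwh_m2,
                      stc_irradiance_kw_m2=1.0):
    """PR = measured AC yield / yield expected from the in-plane irradiation
    at the array's STC rating. Irradiation in kWh/m^2 equals peak-sun hours
    numerically when STC irradiance is 1 kW/m^2."""
    expected = rated_kwp * irradiation_kwh_m2 / stc_irradiance_kw_m2
    return yield_kwh / expected

def cost_per_kwh_gained(measure_cost, extra_kwh_per_year, lifetime_years):
    """Cost-effectiveness of an improvement measure over its lifetime
    (ignoring discounting, for simplicity)."""
    return measure_cost / (extra_kwh_per_year * lifetime_years)
```

A low PR flags the losses the simulation step would then split into causes (shading, inverter, mismatch), and the second metric ranks the candidate fixes, mirroring the method's final cost-effectiveness step.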


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号