Similar Literature
20 similar documents found.
1.
A queueing model is introduced for a relay in a communication network that employs network coding. It is shown that communication networks with coding are closely related to queueing networks with positive and negative customers. The tradeoff between minimizing energy consumption and minimizing delay for a two-way relay is investigated. Analytical upper and lower bounds on the energy consumption and the delay are obtained using a Markov reward approach, and exact expressions are given for the minimum attainable energy consumption and delay.
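As a rough illustration of the energy–delay tradeoff described above, the sketch below simulates a slotted two-way relay under two simple policies: wait for a coding partner (fewer transmissions, longer delay) or send uncoded immediately. The arrival rates, slotted service model, and policies are illustrative assumptions, not the paper's exact queueing model.

```python
import random

# Minimal sketch: packets from directions A and B arrive at the relay as
# Bernoulli processes. Coding two opposite-direction packets into one
# transmission saves energy; waiting for a coding partner increases delay.
def simulate(p_a=0.3, p_b=0.3, wait_for_pair=True, slots=200_000, seed=1):
    random.seed(seed)
    qa, qb = [], []          # arrival slots of queued packets per direction
    tx, delay_sum, departed = 0, 0, 0
    for t in range(slots):
        if random.random() < p_a:
            qa.append(t)
        if random.random() < p_b:
            qb.append(t)
        if qa and qb:                     # coding opportunity: one coded tx
            delay_sum += (t - qa.pop(0)) + (t - qb.pop(0))
            departed += 2
            tx += 1
        elif (qa or qb) and not wait_for_pair:
            q = qa if qa else qb          # send uncoded: a full tx for one packet
            delay_sum += t - q.pop(0)
            departed += 1
            tx += 1
    return tx / max(departed, 1), delay_sum / max(departed, 1)

for policy in (True, False):
    energy, delay = simulate(wait_for_pair=policy)
    print(f"wait_for_pair={policy}: tx/packet={energy:.3f}, mean delay={delay:.1f} slots")
```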

2.
The time–cost tradeoff (TCT) problem in project scheduling studies how to schedule project activities to achieve a tradeoff between project cost and project completion time. It gives project planners both challenges and opportunities to work out the best plan that optimizes the time and cost to complete a project. In this paper, we present a novel method that examines the effects of project uncertainties on both the duration and the cost of the activities. The method integrates a fuzzy logic framework with a Hybrid Meta-Heuristic (HMH), an approach that hybridizes a multiobjective genetic algorithm and simulated annealing; the integration of HMH and fuzzy logic is referred to as 'integrated Fuzzy–HMH'. A rule-based fuzzy logic framework adjusts the duration and the cost of each activity for the input uncertainties, and HMH searches for the Pareto-optimal front (TCT profile) for a given set of time–cost pairs of each project activity. Two standard test problems from the literature are solved using HMH, and a case study of the TCT problem is solved using integrated Fuzzy–HMH. The method solves time–cost tradeoff problems within an uncertain environment and supports sensitivity analysis.
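To make the Pareto-front (TCT profile) idea concrete, here is a toy exhaustive scan over per-activity time–cost modes for a tiny serial project; it stands in for the paper's GA/SA hybrid on general activity networks, and all mode numbers are invented.

```python
import itertools

modes = [  # (duration, cost) alternatives per activity -- invented values
    [(4, 100), (3, 160), (2, 250)],
    [(6, 80),  (4, 140), (3, 220)],
    [(5, 120), (4, 170)],
]

points = []
for choice in itertools.product(*modes):
    t = sum(d for d, _ in choice)       # serial activities: durations add
    c = sum(c_ for _, c_ in choice)
    points.append((t, c))

# Keep the non-dominated (Pareto-optimal) time-cost pairs.
pareto = [p for p in points
          if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]
print(sorted(set(pareto)))
```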

3.
A novel algorithm is presented for the automated simultaneous exploration of datapath and Unrolling Factor (UF) during power–performance tradeoff in High Level Synthesis (HLS), using multi-dimensional particle swarm optimization (termed 'M-PSO') for control and data flow graphs (CDFGs). The major contributions are: (a) simultaneous exploration of datapath and loop UF through an integrated multi-dimensional particle encoding process using swarm intelligence; (b) an estimation model for the execution delay of a loop-unrolled CDFG (based on a visited resource configuration) that in most cases avoids tediously unrolling the entire CDFG for the specified loop value; (c) balancing the tradeoff between power–performance metrics, as well as between control states and execution delay, during loop unrolling; (d) sensitivity analysis of PSO parameters such as swarm size with respect to exploration time and Quality of Results (QoR) of the proposed design space exploration (DSE) process, which would assist the designer in pre-tuning the PSO parameters to optimum values for efficient exploration within a short runtime; and (e) analysis of design metrics such as power, execution time, and number of control steps of the global best particle found in every iteration with respect to increases or decreases in the unrolling factor. When tested on a variety of data flow graphs (DFGs) and CDFGs, the proposed approach showed an average QoR improvement of more than 28% and a runtime reduction of more than 94% compared to recent works.
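The following sketch illustrates the multi-dimensional particle encoding idea only: each particle encodes a (adders, multipliers, unroll factor) tuple and a standard PSO minimizes an invented power-delay cost. The cost model, bounds, and PSO constants are assumptions, not the paper's.

```python
import random

random.seed(0)
LO, HI = [1, 1, 1], [8, 8, 16]          # bounds for (adders, mults, UF)

def cost(x):
    adders, mults, uf = x
    power = 2.0 * adders + 5.0 * mults + 0.3 * uf   # invented power model
    delay = 100.0 / (adders + mults) / uf ** 0.5    # invented delay model
    return power * delay

def clamp(x):
    return [min(max(int(round(v)), lo), hi) for v, lo, hi in zip(x, LO, HI)]

swarm = [[random.randint(lo, hi) for lo, hi in zip(LO, HI)] for _ in range(20)]
vel = [[0.0] * 3 for _ in swarm]
pbest = [s[:] for s in swarm]
gbest = min(swarm, key=cost)

for _ in range(50):
    for i, x in enumerate(swarm):
        for d in range(3):              # inertia + cognitive + social terms
            vel[i][d] = (0.7 * vel[i][d]
                         + 1.5 * random.random() * (pbest[i][d] - x[d])
                         + 1.5 * random.random() * (gbest[d] - x[d]))
        swarm[i] = clamp([x[d] + vel[i][d] for d in range(3)])
        if cost(swarm[i]) < cost(pbest[i]):
            pbest[i] = swarm[i][:]
    gbest = min(pbest, key=cost)

print("best (adders, mults, UF):", gbest, "cost:", round(cost(gbest), 2))
```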

4.
Neural Computing and Applications - The tradeoff between speed and accuracy of human movements has been studied from many different perspectives, such as experimental psychology, workspace...

5.
Computer Networks, 2008, 52(7): 1365–1389
We study the throughput of multi-hop routes and the stability of forwarding queues in a wireless ad-hoc network with a random access channel, focusing on networks with static nodes, such as community wireless networks. Our main result is a characterization of the stability condition and the end-to-end throughput using the balance rate. We also investigate the impact of routing on end-to-end throughput and on the stability of intermediate nodes. We show that (i) as long as the intermediate queues in the network are stable, the end-to-end throughput of a connection does not depend on the load on the intermediate nodes, and (ii) if the weight of a link originating from a node is set to the number of neighbors of that node, then shortest-path routing maximizes the minimum probability of end-to-end packet delivery in a network of weighted fair queues. Numerical results support the analysis, and extensive simulations verify that the analytical results closely match the simulated ones.
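A small sketch of routing result (ii) under stated assumptions: links are weighted by the degree of the originating node, and a node's per-hop delivery probability is crudely taken as 1/degree (its fair share of the random-access channel). Dijkstra then finds the minimum-weight route and we report its end-to-end delivery probability. The topology and the 1/degree stand-in are assumptions for illustration.

```python
import heapq

adj = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3, 4], 3: [1, 2, 4], 4: [2, 3]}

def dijkstra(src, dst):
    dist, prev = {src: 0.0}, {}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            nd = d + len(adj[u])          # link weight = degree of origin node
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path = [dst]
    while path[-1] != src:
        path.append(prev[path[-1]])
    return path[::-1]

route = dijkstra(0, 4)
p = 1.0
for u in route[:-1]:
    p *= 1.0 / len(adj[u])                # crude per-hop delivery probability
print("route:", route, "end-to-end delivery probability ~", round(p, 4))
```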

6.
A new method is developed to estimate daily turbulent air–sea fluxes over the global ocean on a 0.25° grid. The required surface wind speed (w10) and specific air humidity (q10) at 10 m height are both estimated from remotely sensed measurements. w10 is obtained from the SeaWinds scatterometer on board the QuikSCAT satellite. A new empirical model relating brightness temperatures (Tb) from the Special Sensor Microwave Imager (SSM/I) to q10 is developed as an extension of the author's previous q10 model; in addition to Tb, it includes sea surface temperature (SST) and air–sea temperature difference data. The calibration of the new empirical q10 model uses q10 from the latest version of the National Oceanography Centre air–sea interaction gridded data set (NOCS2.0). Compared with mooring data, the new satellite q10 exhibits better statistics than previous estimates: the bias, root mean square (RMS) error, and correlation coefficient from comparisons between satellite and moorings in the northeast Atlantic and the Mediterranean Sea are –0.04 g kg⁻¹, 0.87 g kg⁻¹, and 0.95, respectively. The new satellite q10 is used in combination with the newly reprocessed QuikSCAT V3, the latest version of the SST analyses provided by the National Climatic Data Center (NCDC), and 10 m air temperature from the European Centre for Medium-Range Weather Forecasts (ECMWF) reanalyses (ERA-Interim) to determine three daily gridded turbulent quantities at 0.25° spatial resolution: surface wind stress, latent heat flux (LHF), and sensible heat flux (SHF). The resulting fields are validated through a comprehensive comparison with daily in situ LHF and SHF values from buoys. In the northeast Atlantic basin, the satellite-derived daily LHF has a bias, RMS error, and correlation of 5 W m⁻², 27 W m⁻², and 0.89, respectively; for SHF, the corresponding values are –2 W m⁻², 10 W m⁻², and 0.94. At global scale, the new satellite LHF and SHF are compared to NOCS2.0 daily estimates. Both exhibit similar spatial and seasonal variability, with the main departures found at latitudes south of 40° S, where the satellite latent and sensible heat fluxes are generally larger.
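For orientation, the standard bulk aerodynamic formulas below show how (w10, q10, SST, air temperature) turn into LHF and SHF; the exchange coefficients and sample inputs are typical textbook values, not the ones used in the article.

```python
# Bulk aerodynamic flux sketch; coefficients are typical, not the paper's.
RHO_AIR = 1.22        # air density, kg m^-3
L_E = 2.5e6           # latent heat of vaporization, J kg^-1
C_P = 1004.0          # specific heat of air, J kg^-1 K^-1
C_E = 1.2e-3          # moisture exchange coefficient (typical)
C_H = 1.0e-3          # heat exchange coefficient (typical)

def saturation_q(sst_c, pressure_hpa=1013.0):
    """Saturation specific humidity (kg/kg) at the sea surface (98% of the
    pure-water value to account for salinity), via Tetens' formula."""
    e_s = 0.98 * 6.112 * 10 ** (7.5 * sst_c / (237.7 + sst_c))
    return 0.622 * e_s / (pressure_hpa - 0.378 * e_s)

def latent_heat_flux(w10, q10_g_per_kg, sst_c):
    return RHO_AIR * L_E * C_E * w10 * (saturation_q(sst_c) - q10_g_per_kg / 1000.0)

def sensible_heat_flux(w10, t_air_c, sst_c):
    return RHO_AIR * C_P * C_H * w10 * (sst_c - t_air_c)

print("LHF ~", round(latent_heat_flux(8.0, 10.0, 20.0), 1), "W m^-2")
print("SHF ~", round(sensible_heat_flux(8.0, 18.5, 20.0), 1), "W m^-2")
```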

7.
8.
STT-RAM is considered a promising alternative to SRAM due to its low static power (non-volatility) and high density. However, the write operation of STT-RAM is inefficient in terms of energy and speed compared to SRAM, and various device-, circuit-, and architecture-level solutions have been proposed to tackle this inefficiency. One such solution is redesigning the STT-RAM cell for better write characteristics at the cost of a shortened retention time (volatile STT-RAM). Because retention failure of STT-RAM is stochastic, the extra overhead of periodic scrubbing with an error correcting code (ECC) is required to tolerate it: the more frequent the scrubbing and the stronger the ECC, the shorter the retention time that can be allowed. Using an analytic STT-RAM model, we have conducted extensive experiments on various volatile STT-RAM cache design parameters, including scrubbing period, ECC strength, and target failure rate. The experimental results show the impact of these parameter variations on last-level cache energy and performance and provide a guideline for designing a volatile STT-RAM with ECC and scrubbing.
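To show the scrubbing/ECC/retention interplay, the sketch below uses the standard analytic STT-RAM retention model (per-cell failure probability 1 - exp(-t / (τ0·e^Δ)), with attempt time τ0 ~ 1 ns and thermal stability factor Δ) and binomial word-failure probability under an ECC correcting t errors. The word size, scrub period, and Δ values are illustrative, not the paper's configuration.

```python
import math

TAU0 = 1e-9  # attempt time, seconds (standard assumption)

def bit_failure_prob(scrub_period_s, delta):
    # Probability a cell loses its value within one scrub period.
    return 1.0 - math.exp(-scrub_period_s / (TAU0 * math.exp(delta)))

def word_failure_prob(n_bits, t_corr, p_bit):
    # ECC fails only if more than t_corr bits flip before the next scrub.
    ok = sum(math.comb(n_bits, k) * p_bit**k * (1 - p_bit)**(n_bits - k)
             for k in range(t_corr + 1))
    return 1.0 - ok

for delta in (35, 40, 45):            # weaker cell -> shorter retention
    p = bit_failure_prob(scrub_period_s=0.1, delta=delta)
    print(f"Delta={delta}: p_bit={p:.2e}, "
          f"p_word(64b, 1-bit ECC)={word_failure_prob(64, 1, p):.2e}")
```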

9.
ECG steganography ensures the protection of patient data when ECG signals embedded with patient data are transmitted over the internet. Steganography algorithms strive to recover the embedded patient data entirely and to minimize the deterioration of the cover signal caused by the embedding. This paper presents a Continuous Ant Colony Optimization (CACO) based ECG steganography scheme using the Discrete Wavelet Transform and Singular Value Decomposition. Quantization techniques allow embedding the patient data into the ECG signal, and the scaling factor of the quantization governs the tradeoff between imperceptibility and robustness. The novelty of the proposed approach is to use CACO to identify Multiple Scaling Factors (MSFs) that provide a better tradeoff than a uniform Single Scaling Factor (SSF). The optimal MSFs significantly improve the performance of ECG steganography, measured by metrics such as Peak Signal to Noise Ratio, Percentage Residual Difference, Kullback–Leibler distance, and Bit Error Rate. Performance is demonstrated on the MIT-BIH database, and the results validate that the tradeoff curve obtained through MSFs is better than the tradeoff curve obtained for any SSF. The results also suggest appropriate SSFs for a target imperceptibility or robustness.
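A minimal quantization-index-modulation sketch of the embed step: a subband block is reshaped into a matrix, and its largest singular value is quantized to an even or odd multiple of a scaling factor `delta` to hide one bit. A larger `delta` is more robust but less imperceptible, which is the tradeoff the CACO search tunes; here `delta` is simply fixed, and the random block stands in for DWT coefficients of an ECG.

```python
import numpy as np

def embed_bit(block, bit, delta):
    u, s, vt = np.linalg.svd(block, full_matrices=False)
    q = np.round(s[0] / delta)
    if int(q) % 2 != bit:                 # force parity to encode the bit
        q += 1
    s[0] = q * delta
    return u @ np.diag(s) @ vt

def extract_bit(block, delta):
    s = np.linalg.svd(block, compute_uv=False)
    return int(np.round(s[0] / delta)) % 2

rng = np.random.default_rng(0)
block = rng.normal(size=(8, 8))           # stand-in for a DWT subband block
for bit in (0, 1):
    marked = embed_bit(block.copy(), bit, delta=0.5)
    distortion = np.linalg.norm(marked - block)
    print(f"bit={bit}: recovered={extract_bit(marked, 0.5)}, "
          f"distortion={distortion:.3f}")
```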

10.
In the portfolio selection problem, the expected return, risk, liquidity, etc. cannot be predicted precisely; the investor generally makes portfolio decisions according to experience and economic wisdom, so deterministic portfolio selection is not a good choice. In most recent work on this problem, fuzzy set theory is widely used to model the uncertain environment. This paper uses the concept of interval numbers from fuzzy set theory to extend the classical mean–variance (MV) portfolio selection model into a mean–variance–skewness (MVS) model that accounts for transaction costs. In addition, other criteria are considered, such as short- and long-term returns, liquidity, dividends, the number of assets in the portfolio, and the maximum and minimum capital allowed to be invested in the stock of any selected company. Three models are proposed, describing the future financial market optimistically, pessimistically, and in a combined form, to capture the fuzzy MVS portfolio selection problem. To solve the models, fuzzy simulation (FS) and an elitist genetic algorithm (EGA) are integrated into a more powerful and effective hybrid intelligence algorithm (HIA). Finally, the approaches are tested on a set of stock data from the Bombay Stock Exchange (BSE).
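To fix ideas, here is a toy scalarized mean–variance–skewness objective with proportional transaction costs, optimized by plain random search as a stand-in for the paper's FS+EGA hybrid. The returns matrix, cost rate, and trade-off weights are invented.

```python
import numpy as np

rng = np.random.default_rng(42)
R = rng.normal(0.01, 0.05, size=(250, 4))     # synthetic daily returns, 4 assets

def mvs_score(w, w_prev, cost_rate=0.002, l_var=2.0, l_skew=0.5):
    port = R @ w
    mean = port.mean()
    var = port.var()
    skew = ((port - mean) ** 3).mean() / port.std() ** 3
    cost = cost_rate * np.abs(w - w_prev).sum()   # proportional transaction cost
    return mean - l_var * var + l_skew * skew - cost

w_prev = np.full(4, 0.25)                         # current holdings
best = max((rng.dirichlet(np.ones(4)) for _ in range(5000)),
           key=lambda w: mvs_score(w, w_prev))
print("best weights:", np.round(best, 3),
      "score:", round(mvs_score(best, w_prev), 5))
```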

11.
Seasonal and interannual variation in rainfall can cause massive economic loss for farmers and pastoralists, not only because of deficient total rainfall amounts but also because of long dry spells within the rainy season. The semi-arid to sub-humid mountain climate of the North Ethiopian Highlands is especially vulnerable to rainfall anomalies. In this article, spatio-temporal rainfall patterns are analysed on a regional scale in the North Ethiopian Highlands using satellite-derived rainfall estimates (RFEs). To counter the weak correlation in the dry season, only rainy-season rainfall from March to September is used, which accounts for approximately 91% of the annual rainfall. Validation analysis demonstrates that the RFEs are well correlated with meteorological station (MS) rainfall data: 85% for RFE 1.0 (1996–2000) and 80% for RFE 2.0 (2001–2006). However, discrepancies indicate that RFEs generally underestimate MS rainfall, and the scatter around the trendlines indicates that individual RFE values can be in gross error. A local calibration of RFE with rain-gauge information is validated as a technique to improve RFEs for a regional mountainous study area. Slope gradient, slope aspect, and elevation have no added value in the calibration of the RFEs. The estimation of monthly rainfall using this calibration model improved on average by 8%. Based upon the calibration model, annual rainfall maps and an average isohyet map for the period 1996–2006 were constructed. The maps show a general northeast–southwest gradient of increasing rainfall in the study area and a sharp east–west gradient in its northern part. Slope gradient, slope aspect, elevation, easting, and northing were evaluated as explanatory factors for the spatial variability of annual rainfall in a stepwise multiple regression with the calibrated average of RFE 1.0 as the dependent variable. Easting and northing are the only significant contributing variables (R² = 0.86), of which easting proved the most important (R² = 0.72). The scatter around the individual trendlines of easting and northing corresponds to an increase in rainfall variability in the drier regions. Despite the remaining underestimation of rainfall in the southern part of the study area, the improved estimation of spatio-temporal rainfall variability in a mountainous region by RFEs is valuable as input to a wide range of scientific models.
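The core of the local calibration step is an ordinary least-squares fit of gauge rainfall on the satellite estimate; the sketch below shows that kind of correction on synthetic data in which the RFE underestimates gauge rainfall with noise, as the validation found. The data and fit are illustrative, not the article's.

```python
import numpy as np

rng = np.random.default_rng(7)
gauge = rng.gamma(shape=2.0, scale=40.0, size=120)          # monthly mm
rfe = 0.8 * gauge + rng.normal(0.0, 15.0, size=gauge.size)  # biased estimate

slope, intercept = np.polyfit(rfe, gauge, deg=1)            # OLS calibration
calibrated = slope * rfe + intercept

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

print(f"gauge ~ {slope:.2f} * RFE + {intercept:.1f}")
print("RMSE before:", round(rmse(rfe, gauge), 1),
      "after:", round(rmse(calibrated, gauge), 1))
```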

12.
The Amazon basin remains a major hotspot of tropical deforestation, presenting a clear need for timely, accurate and consistent data on forest cover change. We assessed the utility of a hybrid classification technique, iterative guided spectral class rejection (IGSCR), for accurately mapping Amazonian deforestation using annual imagery from the Landsat Thematic Mapper (TM) and Enhanced Thematic Mapper Plus (ETM+) from 1992 to 2002. The mean overall accuracy of the 11 annual classifications was 95% with a standard deviation of 1.4%, and z-score analysis revealed that all classifications were significant at the 0.05 level. The IGSCR thus seems inherently suitable for monitoring forest cover in the Amazon. The resulting classifications were sufficiently accurate to preliminarily assess the magnitude and causes of discrepancies between farmer-reported and satellite-based estimates of deforestation at the household level, using a sample of 220 farms in Rondônia mapped in the field in 1992 and 2002. The field- and satellite-derived estimates were significantly different only at the 0.10 level for the 220 farms studied, with the satellite-derived deforestation estimates 8.9% higher than the estimates derived from in situ survey methods. Some of this difference was due to a tendency of farmers to overestimate the amount of forest within their property in our survey. Given the objectivity and reduced expense of satellite-based deforestation monitoring, we recommend that it be an integral part of household-level analysis of the causes, patterns and processes of deforestation.
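As a sketch of the accuracy-assessment step, the snippet below computes overall accuracy from a forest/non-forest confusion matrix and a one-sided z-test that the classifier beats chance (here p0 = 0.5), loosely mirroring the article's z-score analysis; the matrix counts and the choice of p0 are assumptions.

```python
import math

confusion = [[4620, 180],   # rows: reference forest / non-forest (invented)
             [220, 4980]]   # cols: mapped forest / non-forest

n = sum(sum(row) for row in confusion)
correct = confusion[0][0] + confusion[1][1]
acc = correct / n

p0 = 0.5                    # assumed chance-level accuracy
z = (acc - p0) / math.sqrt(p0 * (1 - p0) / n)
print(f"overall accuracy = {acc:.3f}, z = {z:.1f} "
      "(> 1.645 -> significant at the 0.05 level, one-sided)")
```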

13.
This paper proposes detecting incipient fault conditions in complex dynamic systems using the Kullback–Leibler (KL) divergence. Subspace identification is used to identify dynamic models, and the KL divergence measures changes in probability density functions between a reference set and online data. Gaussian-distributed process variables produce a simple closed form of the KL divergence, while non-Gaussian-distributed process variables require density-ratio estimation to compute it. Applications to recorded data from a gearbox and two distillation processes confirm the increased sensitivity of the proposed approach to incipient faults compared to dynamic monitoring based on principal component analysis and the statistical local approach.
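The Gaussian case has the well-known closed form shown below: between two univariate normals, an incipient fault that slightly shifts the mean or variance produces a small but growing KL divergence between the reference and online estimates. The drift values are illustrative.

```python
import math

def kl_gauss(mu0, sig0, mu1, sig1):
    # KL( N(mu0, sig0^2) || N(mu1, sig1^2) ), standard closed form.
    return (math.log(sig1 / sig0)
            + (sig0**2 + (mu0 - mu1)**2) / (2.0 * sig1**2) - 0.5)

mu_ref, sig_ref = 0.0, 1.0
for drift in (0.0, 0.05, 0.1, 0.2, 0.5):          # incipient mean drift
    d = kl_gauss(mu_ref, sig_ref, mu_ref + drift, sig_ref)
    print(f"mean shift {drift}: KL = {d:.5f}")
```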

14.
15.
Recently, power-supply failures have caused major social losses, so power-supply systems need to be highly reliable. The objective of this study is to present an effective method of determining productive investments to protect a power-supply system from damage. The reliability and risks of each unit are evaluated with a variance–covariance matrix, and the effects and expenses of replacement are analyzed. The mean–variance analysis is formulated as a mathematical program with two objectives: (1) minimize the risk and (2) maximize the expected return. Finally, a structural learning model of a mutual-connection neural network is proposed to solve the resulting mixed-integer quadratic programming problems and is employed in the mean–variance analysis. The method is applied to a power system network in the Tokyo metropolitan area. It enables more effective selection of results and enhances decision making: decision-makers can select the investment rate and risk of each ward within a given total budget.
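A brute-force stand-in for the mixed-integer quadratic program described above (solved in the paper with a mutual-connection neural network): choose which units to reinforce so as to maximize expected return minus a covariance-based risk penalty, within a budget. All numbers and the scalarization are invented for illustration.

```python
import itertools
import numpy as np

mu = np.array([0.12, 0.09, 0.15, 0.07])      # expected return per unit
cov = np.array([[0.040, 0.006, 0.010, 0.002],
                [0.006, 0.030, 0.004, 0.003],
                [0.010, 0.004, 0.060, 0.005],
                [0.002, 0.003, 0.005, 0.020]])
cost = np.array([3.0, 2.0, 4.0, 1.5])
BUDGET, LAMBDA = 6.0, 1.0

best, best_val = None, -np.inf
for bits in itertools.product([0, 1], repeat=4):   # binary unit selection
    x = np.array(bits, dtype=float)
    if cost @ x > BUDGET:
        continue
    val = mu @ x - LAMBDA * x @ cov @ x            # return minus risk penalty
    if val > best_val:
        best, best_val = bits, val
print("reinforce units:", best, "objective:", round(best_val, 4))
```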

16.
This paper addresses the problem of fault detection and isolation in railway track circuits. A track circuit can be considered a large-scale system composed of a series of trimming capacitors located between a transmitter and a receiver. A defective capacitor affects not only its own inspection data (short-circuit current) but also the measurements related to all capacitors located downstream (between the defective capacitor and the receiver). Here, the global fault detection and isolation problem is broken down into several local pattern recognition problems, each dedicated to one capacitor. The outputs of local neural network or decision tree classifiers are expressed using Dempster–Shafer theory and combined to make a final decision on the detection and localization of a fault in the system. Experiments with simulated data show that correct detection rates over 99% and correct localization rates over 92% can be achieved with this approach, a major improvement over the state-of-the-art reference method.
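The fusion step rests on Dempster's rule of combination. The sketch below applies it to two simple mass functions over the frame {fault, ok} plus the ignorance mass on the whole frame, as a stand-in for fusing a local neural network and a decision tree; the mass values are illustrative.

```python
def combine(m1, m2):
    # Dempster's rule for masses over {fault}, {ok}, and theta = {fault, ok}.
    sets = ["fault", "ok", "theta"]
    def meet(a, b):
        if a == "theta":
            return b
        if b == "theta":
            return a
        return a if a == b else None            # empty intersection = conflict
    raw = {s: 0.0 for s in sets}
    conflict = 0.0
    for a in sets:
        for b in sets:
            inter = meet(a, b)
            if inter is None:
                conflict += m1[a] * m2[b]
            else:
                raw[inter] += m1[a] * m2[b]
    return {s: v / (1.0 - conflict) for s, v in raw.items()}, conflict

m_nn = {"fault": 0.6, "ok": 0.1, "theta": 0.3}   # e.g. neural network output
m_dt = {"fault": 0.5, "ok": 0.2, "theta": 0.3}   # e.g. decision tree output
fused, k = combine(m_nn, m_dt)
print("fused:", {s: round(v, 3) for s, v in fused.items()},
      "conflict:", round(k, 3))
```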

17.
An ion concentration polarization-based microfluidic sample preconcentration chip consisting of a polydimethylsiloxane microchannel and a graphene oxide (GO)–Nafion nanomembrane is fabricated using conventional MEMS techniques. The performance of the proposed device is evaluated using a fluorescein sample with an initial concentration of 10⁻⁵ M for membranes with three different GO–Nafion volume ratios (2:1, 3:1 and 4:1) and four different GO concentrations (0.3, 0.5, 1 and 2 wt%). It is shown that for a GO concentration of 0.3 wt%, a maximum preconcentration factor of approximately 50-fold is achieved using a GO–Nafion volume ratio of 3:1. Moreover, for the same volume ratio (3:1), a 60-fold enhancement of the sample concentration is obtained given a GO concentration of 0.5 wt%. Overall, the results show that compared to a pure Nafion membrane, the addition of GO yields an effective improvement in the sample preconcentration factor (i.e., 60-fold vs. 40-fold), but at the expense of a longer preconcentration time (30 min vs. 6 min 36 s). The superior concentration performance of the GO–Nafion membrane is attributed to the effects of dissociated carboxylate ions in attracting a greater number of cations into the membrane nanopores.

18.
19.

The decreasing volume of Lake Urmia, the largest inland water body in Iran, is a pressing environmental and water-resource management concern. This study obtains a reliable spaceborne water level (WL)–area–volume relationship for the lake using terrestrial, aerial and satellite-based data. The aim is to improve the lake's WL derived from satellite altimetry and, consequently, to estimate the volume of the lake more accurately for the last decade. To this end, improved WL is obtained from the Satellite with Argos and Altika (SARAL/AltiKa) and Jason-2 altimetry missions by a post-processing method comprising denoising, classification and appropriate retracking algorithms. The results are validated against in situ gauge data and compared with results from the Prototype Innovant de Système de Traitement pour les Applications Côtières et l'Hydrologie (PISTACH) and Prototype on AltiKa for Coastal, Hydrology and Ice (PEACHI) products. The Denoising–Classification–Retracking (DCR) method improves the root mean square error (RMSE) of WL relative to PISTACH and PEACHI by 54% and 24%, respectively. The surface area of the lake is determined from Landsat 7 Enhanced Thematic Mapper Plus (ETM+) images by calculating the normalized difference water index (NDWI), and is validated against the surface area obtained from aerial photogrammetry and a Cartosat high-resolution image. Moreover, based on the bathymetric map, a look-up table of the surface area and volume of the lake at specific levels is formed, and the obtained surface area is compared with the values in this table; the normalized root mean square error between the surface extent obtained from the proposed method and the corresponding table values is about 11%. The estimated lake volume is compared with the level–volume curve from the bathymetric data, with an RMSE of about 0.12 km³. Our validated results show that the lake lost 75% of its volume from late 2008 to early 2016, but by May 2017 its volume had increased to twice that of early 2016. These results support urgent and long-term restoration plans for Lake Urmia and highlight the important role of spaceborne sensors in hydrological applications.
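The lake-extent step uses McFeeters' NDWI, (green − NIR) / (green + NIR), with water flagged where NDWI > 0; for Landsat 7 ETM+ the green and NIR channels are bands 2 and 4. The tiny reflectance arrays below are made up purely to show the computation.

```python
import numpy as np

green = np.array([[0.08, 0.07, 0.10],
                  [0.09, 0.06, 0.05],
                  [0.10, 0.09, 0.04]])   # ETM+ band 2 reflectance (invented)
nir   = np.array([[0.02, 0.03, 0.30],
                  [0.02, 0.25, 0.33],
                  [0.03, 0.28, 0.35]])   # ETM+ band 4 reflectance (invented)

ndwi = (green - nir) / (green + nir)
water = ndwi > 0.0                       # positive NDWI -> open water
pixel_area_km2 = (30 * 30) / 1e6         # Landsat pixel is 30 m x 30 m
print("water mask:\n", water)
print("surface area ~", water.sum() * pixel_area_km2, "km^2")
```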

20.
We consider the mean–variance relationship of the number of flows under traffic aggregation, where flows are divided into several groups at random based on a predefined flow aggregation index, such as source IP address. We first derive a quadratic relationship between the mean and the variance of the number of flows belonging to a randomly chosen traffic aggregation group; the result also applies to sampled flows obtained through packet sampling. We then show that the analytically derived mean–variance relationship fits well those observed in actual packet trace data sets. Next, we present two applications of the relationship to traffic management. One is detecting network anomalies by monitoring a traffic time series: using the mean–variance relationship, we determine the traffic aggregation level so that monitoring meets predefined requirements on the false positive and false negative ratios simultaneously. The other is load balancing among network equipment that requires per-flow management, where we use the relationship to estimate the processing capability required of each device.
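The quadratic mean–variance relationship can be reproduced empirically: assign flows in each measurement interval to aggregation groups uniformly at random (as if hashed on source IP), record the per-group mean and variance of flow counts at several aggregation levels, and fit a quadratic. The synthetic traffic model below is an assumption, not the paper's trace data.

```python
import numpy as np

rng = np.random.default_rng(3)
totals = rng.negative_binomial(20, 0.01, size=500)   # flows per interval

means, variances = [], []
for n_groups in (2, 4, 8, 16, 32, 64, 128):
    counts = np.array([np.bincount(rng.integers(0, n_groups, size=n),
                                   minlength=n_groups)
                       for n in totals])              # intervals x groups
    means.append(counts.mean())
    variances.append(counts.var())

a, b, c = np.polyfit(means, variances, deg=2)         # quadratic fit
print(f"variance ~ {a:.4f}*mean^2 + {b:.3f}*mean + {c:.2f}")
```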

