Similar Documents
Found 20 similar documents (search time: 15 ms)
1.
2.
《Computer Networks》2001,35(5):537-549
The scenario for telecom services is undergoing rapid change. A new set of communication services is emerging: driven by the explosion of the Internet and the dramatic growth of mobile phone markets, more and more customers are demanding new Internet-based communication services. The ability to merge the ubiquity of the telephone service (terminal and personal mobility) with the friendliness (easy to use, easy to customize) of the Internet is recognized as a major driver for new classes of services. Users want to access multiple services over heterogeneous networks and from heterogeneous terminals. Higher flexibility in service offers, as well as the possibility of rapidly introducing new services (typical of the Internet and IT worlds), are key factors that give service providers (SPs) a strong competitive advantage. This article describes a supporting distributed architecture, in terms of functionality and components, that enables the integration of the IN architecture and the Internet. It also describes, by means of a service example – virtual presence (VP) – the main interactions among the components that form the architecture. The developed solutions prove that distributed platforms, open APIs, and object-oriented techniques are enabling factors in achieving full interoperability between heterogeneous networks and terminals.

3.
Floating-point fast Fourier transform (FFT) is in wide demand in scientific computing and high-resolution imaging applications because of its wide dynamic range and high processing precision. However, it suffers from high area and energy overhead compared with fixed-point implementations. To address these issues, this paper presents an area- and energy-efficient hybrid architecture for floating-point FFT computation. It minimizes the required arithmetic units and significantly reduces memory usage by combining three different parts. A serial radix-4 butterfly (SR4BF) is used in the single-path delay commutator (SDC) part to minimize the required arithmetic units, achieving a 100% adder utilization ratio. A modified single-path delay feedback (MSDF) architecture is proposed to trade off arithmetic resources against memory usage, using a new half radix-4 butterfly (HR4BF) with a 50% adder utilization ratio; the intermediate caching buffer in the MSDF part is modified accordingly. By combining the advantages of both parts (fewer arithmetic units and optimized memory usage), area and power are reduced without throughput loss. Logic synthesis results in a 65 nm CMOS technology show that the energy per FFT is about 331.5 nJ for 1024-point FFT computation at 400 MHz, and the total hardware overhead is equivalent to 460k NAND2 gates.
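The SR4BF and HR4BF datapaths in this architecture both build on the 4-point DFT kernel. As a minimal illustrative sketch (plain Python arithmetic, not the paper's hardware pipeline; the function names are my own), here is a radix-4 butterfly together with a direct DFT to check it against:

```python
import cmath

def radix4_butterfly(x):
    """One 4-point DFT (radix-4 butterfly) over inputs x[0..3].
    Uses only additions and multiplications by +/-1j, which is why
    radix-4 stages are cheap in hardware."""
    a, b, c, d = x
    return [
        a + b + c + d,
        a - 1j * b - c + 1j * d,
        a - b + c - d,
        a + 1j * b - c - 1j * d,
    ]

def dft(x):
    """Direct O(n^2) DFT, used here only as a reference check."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]
```

A full radix-4 FFT would apply this butterfly log4(N) times with twiddle factors between stages; the sketch shows only the stage kernel.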

4.
Although time and space are interrelated in every occurrence of real-world events, only spatial codes are used at the basic level of most computational architectures. Inspired by neurobiological facts and hypotheses that assign a primordial coding role to the temporal dimension, and developed to address both cognitive and engineering applications, guided propagation networks (GPNs) aim at a generic real-time machine based on time-space coincidence testing. The temporal parameters involved are introduced gradually, in relation to complementary applications in the field of human-machine communication: sensori-motor modeling, pattern recognition, and natural language processing.

5.
The paper presents a new algorithm for feature selection and classification. The algorithm is based on an immune metaphor and combines the negative and clonal selection mechanisms characteristic of B- and T-lymphocytes. Its main goal is to select the best subset of features for classification. A two-level evolution is used in the proposed system for detector creation and feature selection. Subpopulations of evolving detectors (T-lymphocytes) are able to discover subsets of features well suited for classification. The subpopulations cooperate during evolution by means of a novel suppression mechanism, which is compared with the traditional one. The proposed suppression method proved superior to traditional suppression in both recognition performance and its ability to select the proper number of subpopulations dynamically. Results for the task of ECG signal classification are presented, and the results for binary- and real-coded T-lymphocytes are compared and discussed.
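The clonal-selection half of such an algorithm can be sketched in a few lines: keep the best detectors, clone and hypermutate them, repeat. The sketch below is an illustrative toy (single population, single-bit mutation, an externally supplied fitness function), not the paper's two-level B-/T-lymphocyte system with suppression:

```python
import random

def clonal_feature_select(n_features, fitness, pop=8, clones=3, gens=30, seed=0):
    """Toy clonal-selection search for a good binary feature mask.

    fitness: callable mapping a 0/1 feature mask (tuple) to a score.
    Elitist: the best masks always survive, so fitness never decreases."""
    rng = random.Random(seed)
    population = [tuple(rng.randint(0, 1) for _ in range(n_features))
                  for _ in range(pop)]
    for _ in range(gens):
        scored = sorted(population, key=fitness, reverse=True)
        survivors = scored[:pop // 2]
        offspring = []
        for mask in survivors:          # clone and hypermutate the best detectors
            for _ in range(clones):
                m = list(mask)
                i = rng.randrange(n_features)
                m[i] ^= 1               # flip one feature bit
                offspring.append(tuple(m))
        population = survivors + offspring
    return max(population, key=fitness)
```

With a fitness that rewards two informative features and penalizes mask size, the search converges to exactly those features.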

6.
7.
A task-appropriate hybrid architecture for explanation (cited 2 times: 0 self-citations, 2 by others)

8.
When designing a very complex control strategy using hybrid technology, one usually faces the challenge of balancing effective multi-control modeling against design simplicity. To better manage this difficulty, we have used the agent paradigm as a simple and powerful bridge between asynchronous/distributed computation and the Matlab environment. The proposed architecture has been used to build a complex hybrid control environment that applies multi-objective, fuzzy c-means, and genetic-algorithm optimization to design hybrid control strategies suitable for managing the energy flows on board hybrid electric vehicles.

9.
Integral imaging is a promising technique for delivering high-quality three-dimensional content. However, the large amounts of data produced during acquisition prohibit direct transmission of integral image data. A number of highly efficient compression architectures have been proposed that outperform standard two-dimensional encoding schemes, but real-time compression for quality-demanding applications remains a critical issue for existing integral image encoders. In this work we propose a real-time FPGA-based encoder for integral image and integral video content transmission. The proposed encoder is based on a highly efficient compression algorithm used in integral imaging applications. Real-time performance is achieved by a pipelined architecture that takes into account the specific structure of an integral image. The required memory accesses are minimized by adopting a systolic data flow through the core processing elements, further boosting performance. The encoder targets real-time, broadcast-type, high-resolution integral image and video sequences and performs three orders of magnitude faster than the analogous software approach.

10.
The authors propose a new architecture that combines two existing technologies: lookup-table-based FPGAs and complex programmable logic devices based on PLA-like blocks. Their mapping results indicate that, on average, LUT-based FPGAs require 78% more area than the hybrid FPGA while providing roughly the same circuit depth.

11.
The special bandwidth-efficiency features of coded single-carrier multilevel data communication signals pave the way for high-speed data transmission over public analogue telephone networks. The minimum distance of nonredundant coded signals and the minimum free distance of redundant coded signals are discussed, and their influence on error rates is analysed. The security of data on public networks, where it is subject to noise and interference, is improved by advanced trellis coding. Combining trellis coding with multilevel signalling gives high-speed data modems protection against nearly all spurious signals, transient noise, and circuit impairments. The signal structure and coding algorithm of the CCITT's recent recommendations on 9600 bit/s and 14 400 bit/s modems for the PSTN and leased lines are also discussed.

12.
13.
Similarity measurement of content plays an important role in TV personalization, e.g., TV content group recommendation and similar-content retrieval, which are essentially content clustering and example-based retrieval. We define similar TV contents as those with similar semantic information, e.g., plot, background, and genre. Several similarity measures, notably vector-space-model-based and category-hierarchy-model-based schemes, have been proposed for data clustering and example-based retrieval; each has its own advantages and shortcomings for TV content. In this paper, we propose a hybrid approach to TV content similarity measurement that combines the vector space model and the category hierarchy model. The hybrid measure makes the most of TV metadata and takes advantage of both similarity measurements, measuring TV content similarity at the semantic level rather than the physical level. Furthermore, we propose an adaptive strategy for setting the combination parameters. Experimental results show that the hybrid similarity measure is superior to either measure alone for TV content clustering and example-based retrieval.
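The core idea of such a hybrid measure can be sketched as a weighted mix of a vector-space score (cosine over term vectors) and a category-hierarchy score (depth of the shared category prefix). This is a minimal sketch with invented function names and a fixed weight standing in for the paper's adaptive combination parameter:

```python
import math

def cosine_sim(u, v):
    """Vector-space-model similarity of two term-weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def category_sim(path_a, path_b):
    """Category-hierarchy similarity of two root-to-leaf category paths:
    length of the common prefix over the longer path length."""
    common = 0
    for a, b in zip(path_a, path_b):
        if a != b:
            break
        common += 1
    return common / max(len(path_a), len(path_b))

def hybrid_sim(vec_a, vec_b, cat_a, cat_b, alpha=0.5):
    """Weighted mix; alpha is a fixed stand-in for an adaptive weight."""
    return alpha * cosine_sim(vec_a, vec_b) + (1 - alpha) * category_sim(cat_a, cat_b)
```

Two contents with orthogonal term vectors can still score nonzero if they share a genre category, which is the point of mixing the two models.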

14.
Software watermarking technology is not yet mature and many problems remain. Collberg and Thomborson classified software watermarks and proposed a watermark embedding method based on data structures, but the encoding efficiency of PPCT dynamic graphs is low. This paper combines the radix-k enumeration encoding scheme with the PPCT encoding scheme into a hybrid encoding, in which a leaf node's right pointer may point to any node. Encoding with the leaf nodes raises the data embedding rate, and using the leaf nodes' left pointers for verification improves robustness.
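The radix-k enumeration half of this hybrid scheme amounts to splitting the watermark integer into base-k digits, each of which a leaf pointer can then represent. A minimal sketch of that digit encoding (the PPCT graph layout itself is omitted; function names are my own):

```python
def encode_radix_k(w, k, length):
    """Split watermark integer w into `length` base-k digits, least
    significant first. Assumes w < k**length, otherwise high digits are lost."""
    digits = []
    for _ in range(length):
        digits.append(w % k)
        w //= k
    return digits

def decode_radix_k(digits, k):
    """Recover the watermark integer from its base-k digits."""
    value = 0
    for d in reversed(digits):
        value = value * k + d
    return value
```

In the actual scheme each digit would be realized by where a leaf's right pointer points; here the digits are just returned as a list.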

15.
Analog neural signals must be converted into spike trains for transmission over electrically leaky axons. This spike encoding and subsequent decoding leads to distortion. We quantify this distortion by deriving approximate expressions for the mean square error between the inputs and outputs of a spiking link. We use integrate-and-fire and Poisson encoders to convert naturalistic stimuli into spike trains, and spike count and interspike interval decoders to generate reconstructions of the stimulus. The distortion expressions enable us to compare these spike coding schemes over a large parameter space. We verify that the integrate-and-fire encoder is more effective than the Poisson encoder. The disparity between the two encoders diminishes as the stimulus coefficient of variation (CV) increases, at which point the variability attributed to the stimulus overwhelms the variability attributed to Poisson statistics. When the stimulus CV is small, the interspike interval decoder is superior, as the distortion resulting from spike count decoding is dominated by a term attributed to the discrete nature of the spike count. In this regime, additive noise has a greater impact on the interspike interval decoder than on the spike count decoder. When the stimulus CV is large, the average signal excursion is much larger than the quantization step size, and spike count decoding is superior.
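One encoder/decoder pair from this setup is easy to sketch: an integrate-and-fire encoder that fires whenever the integrated stimulus crosses a threshold, and a spike count decoder that reconstructs the mean stimulus over a window. This is a simplified illustration (discrete-time, noiseless, invented parameter names), not the paper's derivation of the MSE expressions:

```python
def integrate_and_fire(stimulus, dt, threshold):
    """Encode a sampled stimulus into spike times: integrate the input and
    emit a spike (subtracting the threshold) whenever it is crossed."""
    spikes, acc, t = [], 0.0, 0.0
    for s in stimulus:
        acc += s * dt
        t += dt
        if acc >= threshold:
            spikes.append(t)
            acc -= threshold
    return spikes

def spike_count_decode(spikes, window, threshold):
    """Reconstruct the mean stimulus over the window from the spike count:
    each spike accounts for `threshold` units of integrated input."""
    return len(spikes) * threshold / window

def mse(a, b):
    """Empirical distortion between a stimulus and its reconstruction."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
```

For a constant stimulus the spike count decoder recovers the level up to the quantization step (one threshold's worth of input), which is exactly the discreteness term the abstract says dominates at small CV.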

16.
Irresponsible and negligent use of natural resources over the last five decades has made it a priority to adopt more intelligent ways of managing existing resources, especially those related to energy. The main objective of this paper is to explore the opportunities of integrating internal data already stored in data warehouses with external Big Data to improve energy consumption predictions. We propose an architecture that makes use of already stored energy data together with external unstructured information to improve knowledge acquisition and allow managers to make better decisions. This external knowledge is a torrent of information that, in many cases, is hidden across heterogeneous and unstructured data sources, from which it is retrieved by an information extraction system; alternatively, it is present in social networks in the form of user opinions. Furthermore, our approach applies data mining techniques to exploit the integrated data. The approach has been applied to a real case study and shows promising results. The experiments carried out in this work are twofold: (i) using and comparing diverse artificial intelligence methods, and (ii) validating the approach with data-source integration.

17.
In this paper, a new approach to time series forecasting is presented. The forecasting activity results from the interaction of a population of experts, each integrating genetic and neural technologies. Each such expert embodies a genetic classifier designed to control the activation of a feedforward artificial neural network performing a locally scoped forecasting activity. The genetic and neural components are supplied with different information: the former deal with inputs encoding information retrieved from technical analysis, whereas the latter process other relevant inputs, in particular past stock prices. To investigate the performance of the proposed approach on real data, a stock market forecasting system has been implemented and tested on two stock market indexes, taking realistic trading commissions into account. The results point to the good forecasting capability of the approach, which repeatedly outperformed the “Buy and Hold” strategy.

18.
A hybrid quantum evolutionary algorithm is proposed in which individuals are encoded by quantum probability amplitudes and the crossover operator of the classical genetic algorithm is applied to optimize the evolutionary targets. The quantum rotation angle is updated adaptively, and the concept of a "mutation degree" is introduced for the first time to define an adaptive mutation operator. Crossover is applied periodically to the evolutionary targets of the quantum individuals, which effectively exchanges and exploits evolutionary information, avoids premature convergence, and improves efficiency. Experimental results on numerical optimization problems show that the algorithm outperforms QEA and CGA, solves the "needle in a haystack" problem with very high success probability, is computationally efficient, and optimizes as fast as CGA.
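The quantum-amplitude encoding at the heart of such algorithms can be sketched compactly: each gene is an angle whose squared sine gives the probability of observing a 1, and a rotation operator pulls the amplitudes toward the best observed solution. The sketch below is a bare quantum-inspired EA on a onemax objective; the paper's adaptive rotation, mutation degree, and periodic crossover are all omitted, and every name and parameter here is invented for illustration:

```python
import math
import random

def observe(thetas, rng):
    """Collapse a Q-bit individual (list of angles) into a binary string:
    P(bit = 1) = sin(theta)^2."""
    return [1 if rng.random() < math.sin(t) ** 2 else 0 for t in thetas]

def qea_onemax(n=12, gens=60, delta=0.05 * math.pi, seed=1):
    """Minimal quantum-inspired EA maximizing the number of 1-bits."""
    rng = random.Random(seed)
    thetas = [math.pi / 4] * n               # equal amplitudes: P(1) = 0.5
    best = observe(thetas, rng)
    for _ in range(gens):
        x = observe(thetas, rng)
        if sum(x) > sum(best):               # elitist best-so-far
            best = x
        for i in range(n):                   # rotate toward the best bit value
            thetas[i] += delta if best[i] == 1 else -delta
            thetas[i] = min(max(thetas[i], 0.0), math.pi / 2)
    return best
```

Rotating only toward the best-so-far converges prematurely on hard landscapes, which is precisely the weakness the paper's crossover and adaptive mutation are meant to address.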

19.
Compared with conventional dynamic random access memory (DRAM), emerging non-volatile memory (NVM) technologies provide better density and energy efficiency. However, current NVM devices typically suffer from high write power, long write latency, and low write endurance. In this paper, we study the task allocation problem for a hybrid main memory architecture with both DRAM and PRAM, in order to balance system performance against the energy consumption of the memory subsystem by assigning a memory device to each individual task. For an embedded system with a static set of periodic tasks, we design an integer linear programming (ILP) based offline adaptive space allocation (offline-ASA) algorithm that obtains the optimal task allocation. Furthermore, we propose an online adaptive space allocation (online-ASA) algorithm for dynamic task sets where task arrivals are not known in advance. Experimental results show that our proposed schemes achieve 27.01% energy saving on average, at an additional performance cost of 13.6%.
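The intuition behind such an allocation is that write-heavy tasks belong in DRAM (which tolerates writes) while the rest can live in the denser PRAM. The paper solves this optimally with ILP; the following is only a greedy heuristic stand-in with an invented task representation, to make the trade-off concrete:

```python
def allocate_tasks(tasks, dram_capacity):
    """Greedy stand-in for an ILP-based DRAM/PRAM task allocator.

    tasks: list of (name, size, writes_per_period) tuples, size > 0.
    Places the most write-intensive tasks (per unit of space) in DRAM
    until its capacity runs out; everything else goes to PRAM."""
    placement = {}
    free = dram_capacity
    for name, size, writes in sorted(tasks, key=lambda t: t[2] / t[1], reverse=True):
        if size <= free:
            placement[name] = "DRAM"
            free -= size
        else:
            placement[name] = "PRAM"
    return placement
```

Unlike the ILP formulation, this greedy pass can miss the optimum when task sizes vary (a knapsack effect), which is one reason an exact offline solver is worth the cost for static task sets.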

20.
张震  付印金  胡谷雨 《计算机应用》2018,38(8):2230-2235
Phase-change memory (PCM), with its low power consumption, is a promising candidate for next-generation main memory, but its limited write endurance is a major obstacle to wide adoption. Existing DRAM caching techniques and wear-leveling schemes extend PCM lifetime from two angles: reducing the number of PCM writes and evening out the distribution of write operations. However, the former ignores the read/write tendency of data when writing it back, and the latter suffers from problems of data-exchange granularity, space overhead, and randomness under workloads with strong spatial locality. This paper therefore designs a new hybrid memory architecture and proposes a caching policy that distinguishes read-oriented from write-oriented data by combining the least recently used (LRU) algorithm with the LFU-Aging (least frequently used with aging) algorithm, together with a Bloom filter (BF) based dynamic wear-leveling algorithm for working sets with strong spatial locality; the design reduces redundant write operations while achieving inter-group wear leveling at low space overhead. Experimental results show that the policy reduces PCM writes by 13.4%–38.6% and evens out the write distribution of more than 90% of the groups.
