Search results: 9 articles found (search time: 62 ms).
Access: 9 subscription full-text, 0 free.
Subjects: Radio &amp; Electronics 5, Automation technology 4.
Years: 2023 (1), 2013 (1), 2012 (1), 2009 (1), 2008 (1), 2006 (1), 2003 (1), 1999 (2).
1.
In the context of future dynamic applications, systems will exhibit unpredictably varying platform resource requirements. To deal with this, they will not only need to be programmable in terms of instruction-set processors, but will also require at least partial reconfigurability. In this context, it is important for applications to optimally exploit the memory hierarchy under varying memory availability. This article presents a mapping strategy for wavelet-based applications: depending on the encountered conditions, it switches to different memory-optimized instantiations, or localizations, permitting up to 51% energy gains in memory accesses. Systematic and parameterized mapping guidelines indicate which localization should be selected under which conditions, for varying algorithmic wavelet parameters. The results have been formalized and generalized to be applicable to more general wavelet-based applications.
2.
A Scalable Architecture for MPEG-4 Wavelet Quantization (cited 3 times; self-citations: 0, citations by others: 3)
Wavelet-based image compression has been adopted in MPEG-4 for visual texture coding. All wavelet quantization schemes in MPEG-4 (Single Quantization (SQ), Multiple Quantization (MQ) and Bi-level Quantization) use Embedded Zero Tree (EZT) coding followed by an adaptive arithmetic coder for the compression and quantization of a wavelet image. This paper presents the OZONE chip, a dedicated hardware coprocessor for EZT and arithmetic coding. Realized in a 0.5 μm CMOS technology and operating at 32 MHz, the EZT coder is capable of processing up to 25.6 Mega pixel-bitplanes per second. This is equivalent to the lossless compression of 31.6 8-bit grayscale CIF images (352 × 288) per second. The adaptive arithmetic coder processes up to 10 Mbit per second. The combined performance of the EZT coder and the arithmetic coder allows the OZONE to perform visually lossless compression of more than 30 CIF images per second. Thanks to its novel and scalable architecture, parallel operation of multiple OZONEs is supported. The OZONE functionality is demonstrated on a PC-based compression system.
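As a quick sanity check of the quoted throughput (a back-of-the-envelope calculation, not part of the abstract), 25.6 Mega pixel-bitplanes per second is indeed consistent with roughly 31.6 8-bit CIF frames per second:

```python
# Back-of-the-envelope check of the EZT coder throughput quoted above (illustrative only).
width, height, bitplanes = 352, 288, 8                   # 8-bit grayscale CIF image
pixel_bitplanes_per_image = width * height * bitplanes   # 811,008 pixel-bitplanes per image
throughput = 25.6e6                                      # pixel-bitplanes per second at 32 MHz
print(throughput / pixel_bitplanes_per_image)            # ~31.6 CIF images per second
```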
3.
We present a new method to extract scale-invariant features from an image by using a Cosine Modulated Gaussian (CM-Gaussian) filter. Its balanced scale-space atom, with minimal spread in scale and space, leads to outstanding scale-invariant feature detection quality, albeit at reduced planar rotational invariance. Both sharp and distributed features, such as corners and blobs, are reliably detected, irrespective of various image artifacts and camera parameter variations, except for planar rotation. The CM-Gaussian filters are approximated with a sum of exponentials as a single, fixed-length filter with equal approximation error over all scales, providing constant-time, low-cost image filtering implementations. The approximation error of the corresponding digital signal processing is below the noise threshold. The approximation is scalable with the filter order, providing many quality-complexity trade-off working points. We validate the efficiency of the proposed feature detection algorithm on image registration applications over a wide range of testbench conditions.
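For intuition only, the sketch below builds a 1-D Cosine Modulated Gaussian kernel (a Gaussian envelope multiplied by a cosine carrier). The scale and modulation frequency are hypothetical values, and the paper's actual filter design and sum-of-exponentials approximation are not reproduced here.

```python
import numpy as np

def cm_gaussian_kernel(sigma, omega, radius=None):
    """1-D Cosine Modulated Gaussian: Gaussian envelope times a cosine carrier.
    sigma (scale) and omega (modulation frequency) are illustrative parameters,
    not values taken from the cited paper."""
    if radius is None:
        radius = int(np.ceil(4 * sigma))
    x = np.arange(-radius, radius + 1, dtype=float)
    kernel = np.exp(-x**2 / (2 * sigma**2)) * np.cos(omega * x)
    return kernel / np.sum(np.abs(kernel))   # simple normalization for illustration

# Example: filter one image row at a single (hypothetical) scale.
row = np.random.rand(512)
response = np.convolve(row, cm_gaussian_kernel(sigma=3.0, omega=0.8), mode="same")
```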
4.
A wide variety of terminals with large differences in capabilities currently consume information and multimedia content. Their differing processing capabilities make it challenging to guarantee satisfactory quality in all possible situations. This paper proposes a systematic methodology for interactive 3D graphics applications to adapt the complexity of the content automatically to the terminal's available resources. Our contribution is an off-line/online partitioned optimisation that increases the visual quality with respect to previous work at the same rendering cost, while keeping the overhead of the optimisation minimal.
5.
Most current multiplayer 3D games can only be played on dedicated platforms, requiring specifically designed content and communication over a predefined network. To overcome these limitations, the OLGA (On-Line GAming) consortium has devised a framework to develop distributed, multiplayer 3D games. Scalability at the level of content, platforms and networks is exploited to achieve the best trade-offs between complexity and quality.
6.
A folded very large scale integration (VLSI) architecture is presented for the implementation of the two-dimensional discrete wavelet transform, without constraints on the choice of the wavelet-filter bank. The proposed architecture is dedicated to flexible block-oriented image processing, such as the adaptive vector quantization used in wavelet image coding. We show that reading the image along a two-dimensional (2-D) pseudo-fractal scan creates a very modular and regular data flow and therefore considerably reduces the folding complexity and memory requirements for VLSI implementation. This leads to significant area savings for on-chip storage (up to a factor of two) and reduces the power consumption. Furthermore, data scheduling and memory management remain very simple. The end result is an efficient VLSI implementation with a reduced area cost compared to conventional approaches that read the input data line by line.
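The abstract does not spell out the scan itself; as an illustration of what a fractal-like, block-oriented traversal looks like, the sketch below generates Morton (Z-order) coordinates. This is an assumed stand-in for intuition only, not necessarily the pseudo-fractal scan of the cited architecture.

```python
def morton_scan(log2_size):
    """Yield (row, col) pairs of a 2^n x 2^n image in Morton (Z-order) order.
    Illustrative example of a fractal-like block scan, not necessarily the
    exact pseudo-fractal scan used in the cited architecture."""
    n = log2_size
    for index in range(1 << (2 * n)):
        row = col = 0
        for bit in range(n):
            col |= ((index >> (2 * bit)) & 1) << bit       # even bits -> column
            row |= ((index >> (2 * bit + 1)) & 1) << bit   # odd bits  -> row
        yield row, col

# Example: traversal order for an 8x8 image.
order = list(morton_scan(3))   # [(0, 0), (0, 1), (1, 0), (1, 1), (0, 2), ...]
```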
7.
The main implementations of the 2-D binary-tree discrete wavelet decomposition are theoretically analyzed and compared with respect to data-cache performance on instruction-set processor-based realizations. These implementations include various image-scanning techniques, from the classical row-column approach to the block-based and line-based methods proposed in the framework of multimedia-coding standards. Analytical parameterized equations for the prediction of data-cache misses under general, realistic assumptions are proposed. The accuracy and consistency of the theory are verified through simulations on test platforms and by comparison with results from a real platform.
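To give a feel for the kind of quantity such equations predict, the sketch below computes a deliberately simplified, first-order estimate of compulsory cache misses for a row-column 2-D DWT. It ignores capacity and conflict misses, and it is not one of the parameterized equations of the cited paper.

```python
def rowcol_cold_miss_estimate(n, levels, bytes_per_sample=2, line_bytes=32):
    """First-order estimate of compulsory data-cache misses for a row-column 2-D DWT,
    assuming each pass streams once through the current subimage.
    Purely illustrative; NOT the parameterized equations of the cited paper."""
    misses = 0
    size = n
    for _ in range(levels):
        image_bytes = size * size * bytes_per_sample
        misses += 2 * image_bytes // line_bytes   # one row pass plus one column pass
        size //= 2                                # next level works on the LL subband only
    return misses

print(rowcol_cold_miss_estimate(n=512, levels=3))   # e.g. 512x512 image, 3 decomposition levels
```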
8.
In this brief, we present a constant-time method for joint bilateral filtering. First, we propose an image data structure, coined the joint integral histogram (JIH). Extending the classic integral images and integral histograms, it represents the global information of two correlated images: in a JIH, the value at each bin is an integral determined by the two images. The joint bilateral filtering is then transformed into the computation and manipulation of histograms. Using JIHs, we are able to perform joint bilateral filtering in constant time. Its performance is validated in a digital photography application using flash/no-flash image pairs. Compared with the brute-force method, the proposed method achieves a speedup of two to three orders of magnitude while producing similar filtering results.
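A minimal sketch of the joint-integral-histogram idea as described above: the bin index is taken from one image, values are accumulated from the correlated image, and 2-D prefix sums give constant-time box queries. The bin count and data layout are assumptions, and the paper's filtering weights are not reproduced.

```python
import numpy as np

def joint_integral_histogram(guide, data, num_bins=16):
    """Per-bin 2-D prefix sums: the bin is chosen by `guide`, the accumulated value
    comes from `data`. Sketch of the joint-integral-histogram idea; the bin count
    and layout are illustrative assumptions, not the paper's exact structure."""
    h, w = guide.shape
    bins = np.clip(guide.astype(np.int64) * num_bins // 256, 0, num_bins - 1)
    jih = np.zeros((num_bins, h + 1, w + 1), dtype=np.float64)
    for b in range(num_bins):
        layer = np.where(bins == b, data, 0.0)
        jih[b, 1:, 1:] = layer.cumsum(axis=0).cumsum(axis=1)
    return jih

def box_sum(jih, b, r0, c0, r1, c1):
    """Sum of `data` over rows r0..r1-1 and columns c0..c1-1, restricted to guide-bin b, in O(1)."""
    s = jih[b]
    return s[r1, c1] - s[r0, c1] - s[r1, c0] + s[r0, c0]
```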
9.
Virtual Reality - Heads-up displays that are ‘see-through’ and ‘curved’ and capable of displaying 3D content are considered crucial for augmented-reality-based navigation...