In this paper, a fast Fourier transform (FFT) hardware architecture optimized for field-programmable gate arrays (FPGAs) is proposed. We refer to it as the single-stream FPGA-optimized feedforward (SFF) architecture. By using a stage that trades adders for shift registers, as compared with the single-path delay feedback (SDF) architecture, the efficient implementation of short shift registers in Xilinx FPGAs can be exploited. Moreover, this stage can be combined with ordinary or optimized SDF stages so that adders are traded for shift registers only when beneficial. The resulting structures are well suited for FPGA implementation, especially when an efficient implementation of short shift registers is available, as is the case for at least contemporary Xilinx FPGAs. The results show that the proposed architectures improve on the current state of the art.
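Although the abstract targets hardware, the role of the feedback shift register in a baseline radix-2 SDF stage can be illustrated with a small behavioral model (a sketch only; the stage control, stream framing, and the `sdf_stage` name are illustrative assumptions, not the paper's SFF design):

```python
from collections import deque

def sdf_stage(stream, D):
    """Behavioral model of one radix-2 single-path delay feedback stage.

    A length-D shift register alternates every D samples between a fill
    phase (store inputs, release previously stored differences) and a
    butterfly phase (emit the sum, feed the difference back).
    """
    buf = deque([0.0] * D)           # the feedback shift register
    out = []
    for n, x in enumerate(stream):
        a = buf.popleft()            # oldest stored value
        if (n // D) % 2 == 0:        # fill phase
            out.append(a)            # flush a previously stored result
            buf.append(x)            # store the new input
        else:                        # butterfly phase
            out.append(a + x)        # sum goes downstream
            buf.append(a - x)        # difference is fed back
    return out
```

Feeding `[1, 2, 0, 0]` with `D = 1` yields the 2-point butterfly outputs 1+2 and 1-2 after a one-sample latency; trading this feedback register for feedforward shift registers is the adder/storage balance the abstract discusses.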
Response surface models were developed for the effects of temperature (16 to 34 degrees C) and previous temperature (pretemperature; 16 to 34 degrees C) on lag time (lambda) and specific growth rate (mu) of Salmonella Typhimurium on cooked ground chicken breast. The primary objective was to determine whether pretemperature is a major factor affecting growth of Salmonella Typhimurium. Growth curves for model development (n = 32) and model testing (n = 18) were fit to a two-phase linear equation that directly estimated lambda and mu. Response surface models for ln lambda and ln mu as a function of temperature and pretemperature were obtained by regression analysis. Both lambda and mu of Salmonella Typhimurium were affected by temperature but not by pretemperature. Models were tested against data not used in their development. Prediction error (model accuracy) was 13.4% for lambda and 11.3% for mu, whereas the median relative error of predictions (model bias) was -3.0% for lambda and 6.8% for mu. Results indicated that the models provide reliable predictions of lambda and mu of Salmonella Typhimurium on cooked ground chicken breast within the matrix of conditions modeled. In addition, pretemperature (16 to 34 degrees C) is not a major factor affecting growth of Salmonella Typhimurium.
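The two-phase linear fit described above (a constant lag phase followed by linear growth in log counts) can be sketched as follows; the data, parameter values, and function names are synthetic illustrations, not the paper's measurements:

```python
import numpy as np

def two_phase(t, y0, lam, mu):
    # Two-phase linear growth: ln count stays at y0 during the lag
    # phase (t < lambda), then rises linearly with specific rate mu.
    return np.where(t < lam, y0, y0 + mu * (t - lam))

def fit_two_phase(t, y, lam_grid):
    # For each candidate lag, solve the linear least-squares problem
    # for (y0, mu) and keep the lag with the smallest residual.
    best = None
    for lam in lam_grid:
        X = np.column_stack([np.ones_like(t), np.maximum(t - lam, 0.0)])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        sse = np.sum((X @ coef - y) ** 2)
        if best is None or sse < best[0]:
            best = (sse, coef[0], lam, coef[1])
    return best[1:]  # y0, lambda, mu

# Synthetic growth curve (illustrative values, not the paper's data)
t = np.linspace(0.0, 24.0, 49)
rng = np.random.default_rng(0)
y = two_phase(t, 3.0, 4.0, 0.35) + rng.normal(0.0, 0.02, t.size)
y0, lam, mu = fit_two_phase(t, y, np.arange(0.0, 10.0, 0.25))
```

Estimating lambda and mu directly from one piecewise model, rather than in separate steps, mirrors the "directly estimated" fitting the abstract mentions.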
Most multimedia applications implement some kind of packet-loss analysis mechanism to trigger self-adaptation actions. However, error recovery mechanisms such as ARQ variants avoid packet losses at the IP level at the cost of added delay, which may in turn cause application-level losses when samples arrive later than the scheduled playout time. In most cases such degradations are not detected until they are so severe that losses appear even at the IP level. We propose a lightweight method that achieves a finer-grained estimation. We prove mathematically that modified versions of delay statistics can predict the error ratio of the wireless link and the load of the wired backhaul. After deriving simplified heuristics for the proposed method, we analyze their estimation capabilities and provide guidance for selecting appropriate parameters. Finally, we test and adjust the algorithm in a specific scenario including mobile video streaming and VoIP calls.
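A rough illustration of delay-statistics-based monitoring is sketched below; this is a generic EWMA scheme in the spirit of RFC 6298 RTT smoothing, not the modified statistics derived in the paper, and the threshold factor and names are assumptions:

```python
def delay_monitor(delays, alpha=0.125, k=4.0):
    # Running EWMA of packet delay and of its mean deviation. A sample
    # far above the smoothed delay flags link degradation before
    # IP-level losses appear.
    m, v = delays[0], 0.0
    flags = []
    for d in delays:
        err = d - m
        m += alpha * err           # smoothed delay
        v += alpha * (abs(err) - v)  # smoothed deviation
        flags.append(d > m + k * v)
    return flags
```

With stable delays around 10 ms, a sudden 100 ms sample is flagged immediately, even though no packet has been lost yet.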
The multi-hypothesis motion compensated filter (MHMCF) utilizes a number of hypotheses (temporal predictions) to estimate the current pixel, which is corrupted by noise. While showing remarkable denoising results, MHMCF is computationally intensive, as full search is employed to find good temporal predictions in the presence of noise. Within the MHMCF framework, a fast denoising algorithm, FMHMCF, is proposed in this paper. With edge-preserving low-pass prefiltering and noise-robust fast multihypothesis search, FMHMCF can find reliable hypotheses while checking very few search locations, so that the denoising process is dramatically accelerated. Experimental results show that FMHMCF is 10 to 14 times faster than MHMCF, while achieving the same or even better denoising performance, with up to 1.93 dB PSNR (peak signal-to-noise ratio) improvement.
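The core idea of combining temporal hypotheses can be sketched as follows; this is a simplified equal-weight version, and neither MHMCF's actual weighting nor FMHMCF's fast search is reproduced here:

```python
import numpy as np

def combine_hypotheses(noisy, hypotheses):
    # Average the noisy current patch with K temporal predictions.
    # For independent noise of variance s^2 in every input, the
    # combined estimate has variance s^2 / (K + 1).
    stack = np.stack([noisy] + list(hypotheses))
    return stack.mean(axis=0)

# Illustrative check: K = 3 hypotheses of a constant-128 patch
rng = np.random.default_rng(1)
clean = np.full((8, 8), 128.0)
frames = [clean + rng.normal(0, 10, clean.shape) for _ in range(4)]
denoised = combine_hypotheses(frames[0], frames[1:])
```

The denoised patch is markedly closer to the clean signal than the noisy observation, which is why finding reliable hypotheses cheaply (the point of FMHMCF) matters.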
Mobile Networks and Applications - Recently, Unmanned Aerial Vehicles (UAVs) have become a cheap alternative to sense pollution values in a certain area due to their flexibility and ability to...
The most challenging aspect of particle filtering hardware implementation is the resampling step, because of its high latency: it can be only partially executed in parallel with the other steps of particle filtering and has no inherent parallelism inside it. To reduce the latency, an improved resampling architecture is proposed that pre-fetches from the weight memory in parallel with fetching a value from a random function generator, along with architectures for realizing the pre-fetch technique. This enables a particle filter using M particles with otherwise streaming operation to accept new inputs more often than every 2M cycles, the rate of the previously best approach. Results show that a pre-fetch buffer of five values achieves the best area-latency trade-off, on average reducing the latency of the resampling step by 85% and the sample time by more than 40%. We also propose a generic division-free architecture for the resampling steps, which additionally removes the need to explicitly order the random values for an efficient multinomial resampling implementation. In addition, on-the-fly computation of the cumulative sum of weights is proposed, which helps reduce the word length of the particle weight memory. FPGA implementation results show that the memory size is reduced by up to 50%.
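In software terms, the cumulative-sum-based multinomial resampling the abstract refers to can be sketched as follows (the sorting of random draws here is the conventional software step that the proposed hardware architecture avoids, and the function name is illustrative):

```python
import numpy as np

def multinomial_resample(weights, rng):
    # Division-free variant: rather than normalizing each weight
    # (one division per particle), scale the uniform draws by the
    # weight total W, taken from the running cumulative sum.
    csum = np.cumsum(weights)      # cumulative sum of raw weights
    W = csum[-1]
    u = np.sort(rng.random(len(weights))) * W
    # each draw selects the first particle whose cumulative weight
    # exceeds it
    return np.searchsorted(csum, u, side="right")
```

With all the weight concentrated on one particle, every draw selects that particle's index, as multinomial resampling requires.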
The main goal of this work is the generation of ground-truth data for the validation of atrophy measurement techniques, commonly used in the study of neurodegenerative diseases such as dementia. Several techniques have been used to measure atrophy in cross-sectional and longitudinal studies, but it is extremely difficult to compare their performance since they have been applied to different patient populations. Furthermore, assessment of performance based on phantom measurements or simple scaled images overestimates these techniques' ability to capture the complexity of neurodegeneration of the human brain. We propose a method for atrophy simulation in structural magnetic resonance (MR) images based on finite-element methods. The method produces cohorts of brain images with known change that is physically and clinically plausible, providing data for objective evaluation of atrophy measurement techniques. Atrophy is simulated in different tissue compartments or in different neuroanatomical structures with a phenomenological model. This model of diffuse global and regional atrophy is based on volumetric measurements, such as of the brain or the hippocampus, from patients with known disease, and is guided by clinical knowledge of the relative pathological involvement of regions and tissues. The consequent biomechanical readjustment of structures is modelled using conventional physics-based techniques based on biomechanical tissue properties, simulating plausible tissue deformations with finite-element methods. A thermoelastic model of tissue deformation is employed, controlling the rate of progression of atrophy by means of a set of thermal coefficients, each one corresponding to a different type of tissue. Tissue characterization is performed by meshing a labelled brain atlas, creating a reference volumetric mesh that is then supplied to a finite-element solver to create the simulated deformations.
Preliminary work on the simulation of acquisition artefacts is also presented. Cross-sectional and longitudinal sets of simulated data are shown, and a visual classification protocol has been used by experts to rate real and simulated scans according to their degree of atrophy. Results confirm the potential of the proposed methodology.
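The thermoelastic analogy can be illustrated in a deliberately minimal 1D form; the tissue labels, coefficient values, and uniform "cooling" step below are illustrative assumptions, whereas the paper uses full 3D finite-element meshes:

```python
import numpy as np

def shrink_segments(lengths, labels, alpha, dT):
    # 1D thermoelastic analogue: each tissue segment contracts by a
    # linear strain alpha[label] * dT, so tissue-specific thermal
    # coefficients yield tissue-specific atrophy rates.
    strain = np.array([alpha[l] for l in labels]) * dT
    return lengths * (1.0 - strain)

# Illustrative tissues: grey matter atrophies faster than white matter
lengths = np.array([10.0, 20.0, 10.0])
labels = ["gm", "wm", "gm"]
alpha = {"gm": 0.02, "wm": 0.005}
shrunk = shrink_segments(lengths, labels, alpha, dT=1.0)
```

Assigning one coefficient per tissue label is the 1D counterpart of the per-tissue thermal coefficients that control atrophy progression in the abstract.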
Cod salted with various combinations of NaCl, KCl, MgCl2 and CaCl2 was soaked for the first 24 h with a 0.2 M carbonate buffer solution (pH 9.5) or for the first 5 h with 1% hydrogen peroxide (pH 7.16). Chloride, ion, and ash contents in muscle were measured during the soaking process. After soaking, the composition of soluble muscle protein extracted in 0.86 M NaCl was determined by SDS-PAGE. The divalent cation content was not affected by the salting and soaking processes. Protein extractability was low, especially when H2O2 was used. The use of an alkaline buffer solution produced better protein functional quality: when the alkaline solution was used for soaking, the soluble fraction comprised a larger variety of proteins. Actin was soluble only when MgCl2 and KCl were used for salting and the alkaline solution for soaking. The use of a carbonate/bicarbonate buffer solution for soaking could therefore be useful to improve the functional quality of muscle protein of desalted cod.
We show that some of the fundamental closure properties (such as concatenation) that hold for Turing machines (TMs) operating in space above log n do not hold for TMs operating in space below log n. We also compare the powers of TMs and sweeping TMs operating in space below log n. While the proof that the powers of TMs and sweeping TMs are the same is trivial for space greater than or equal to log n, it is not obvious when the space is sublogarithmic. To explore the nature of sublogarithmic space computations further, we introduce a nonuniform space complexity measure and study some of its fundamental properties (such as closure, hierarchy, and gap) in the sublogarithmic range.
This research was supported in part by NSF Grant DCR-8604603.