20 similar documents found.
1.
2.
From microarrays and next-generation sequencing to clinical records, the amount of biomedical data is growing at an exponential rate. Handling and analyzing these large amounts of data demands that computing power and methodologies keep pace. The goal of this paper is to illustrate how high-performance computing methods in SAS can be implemented easily, without extensive programming knowledge or access to supercomputing clusters, to help address the challenges posed by large biomedical datasets. We illustrate the utility of database connectivity, pipeline parallelism, multi-core parallel processing, and distributed processing across multiple machines. Simulation results are presented for parallel and distributed processing. Finally, the costs and benefits of these methods are compared with those of traditional HPC supercomputing clusters.
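The paper demonstrates these techniques in SAS; as a rough analogue only, the Python sketch below shows the same multi-core pattern, with a made-up per-row summarize task and random placeholder data standing in for a large expression matrix.

```python
# Minimal multi-core sketch (Python analogue, not the paper's SAS code).
from multiprocessing import Pool
import numpy as np

def summarize(row):
    """Return mean and standard deviation for one expression profile (hypothetical task)."""
    return float(np.mean(row)), float(np.std(row))

if __name__ == "__main__":
    data = np.random.rand(10000, 200)      # placeholder for a large expression matrix
    with Pool() as pool:                   # one worker per available core by default
        stats = pool.map(summarize, data)  # rows are processed in parallel
    print(len(stats), stats[0])
```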
3.
4.
Brahimi, Tahar; Boubchir, Larbi; Fournier, Régis; Naït-Ali, Amine 《Multimedia Tools and Applications》2017,76(15):16783-16805
In this paper, a new multimodal compression scheme is proposed with the aim of jointly compressing an image and a signal via a single codec. The key idea behind...
5.
Data compression techniques can be grouped into three categories: direct compression, transformation compression, and parameter extraction. The principles and details of ECG data compression using the fast Walsh transform are presented. Performance was evaluated on the basis of compression ratio and visual comparison. To assess the extent to which clinical information is preserved in the reconstructed signal, peak and boundary measurements were made on both the reconstructed and the original signal and then compared. As the number of retained fast Walsh transform coefficients is reduced, the compression ratio increases. At higher compression ratios, deviations in the R-peak are found to be larger than those in the other ECG peaks. Peak and boundary measurements show that the compression algorithm is clinically acceptable, because the errors are quite tolerable. It is also worth noting that artefacts such as electromyographic noise are better suppressed because of filtering, which also makes the signal more suitable for visual examination by cardiologists. The experimental results show that a compression ratio of four is acceptable for preserving the clinical information.
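As a hedged illustration of the approach (not the authors' implementation), the sketch below applies a fast Walsh-Hadamard transform in plain NumPy, keeps only the largest coefficients, reconstructs the signal, and reports the compression ratio and percentage RMS difference; the synthetic test signal is a placeholder, not real ECG.

```python
import numpy as np

def _wht(x):
    """Unnormalized fast Walsh-Hadamard butterfly; length must be a power of two."""
    a = np.asarray(x, dtype=float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def fwht(x):
    """Forward transform with 1/N normalization."""
    return _wht(x) / len(x)

def ifwht(c):
    """Inverse transform: the unnormalized butterfly undoes fwht exactly."""
    return _wht(c)

ecg = np.sin(np.linspace(0, 8 * np.pi, 512)) + 0.05 * np.random.randn(512)  # stand-in signal
coeffs = fwht(ecg)
kept = 128                                        # keep the 128 largest-magnitude coefficients
coeffs[np.argsort(np.abs(coeffs))[:-kept]] = 0.0  # zero out the rest
recon = ifwht(coeffs)

cr = ecg.size / kept                              # compression ratio (4 here)
prd = 100 * np.linalg.norm(ecg - recon) / np.linalg.norm(ecg)
print(f"CR = {cr:.1f}, PRD = {prd:.2f}%")
```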
6.
Nick J. Pizzi 《Theoretical Computer Science》2011,412(42):5909-5925
A fuzzy-set-based preprocessing method is described that may be used in the classification of patterns. This method, dispersion-adjusted fuzzy quartile encoding, determines the degrees to which a feature (attribute) belongs to a collection of fuzzy sets that overlap at the quartile boundaries of the feature. The fuzzy sets are adjusted to take into account the overall dispersion of values for the feature. The membership values are then used in place of the original feature value. This transformation has a normalizing effect on the feature space and is robust to feature outliers. The preprocessing method, empirically evaluated using five biomedical datasets, is shown to improve the discriminatory power of the underlying classifiers.
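A minimal sketch of the idea, with assumptions: triangular fuzzy sets centred on the quartiles and half-widths scaled by the interquartile range stand in for the paper's dispersion adjustment, which is not reproduced exactly.

```python
import numpy as np

def triangular(x, centre, width):
    """Triangular membership function with the given centre and half-width."""
    return np.clip(1.0 - np.abs(x - centre) / width, 0.0, 1.0)

def fuzzy_quartile_encode(values):
    """Encode a 1-D feature as memberships in quartile-centred fuzzy sets."""
    q1, q2, q3 = np.percentile(values, [25, 50, 75])
    width = max(q3 - q1, 1e-12)      # dispersion-adjusted half-width (assumption)
    centres = [q1, q2, q3]
    return np.column_stack([triangular(values, c, width) for c in centres])

feature = np.array([1.0, 2.0, 2.5, 3.0, 4.0, 10.0])  # note the outlier at 10.0
print(fuzzy_quartile_encode(feature))                # memberships replace the raw values
```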
7.
We propose a method for matching non-affinely related sparse model and data point-sets of identical cardinality, similar spatial distribution, and similar orientation. To establish a one-to-one match, we introduce a new similarity K-dimensional tree. We construct the tree for the model set using a spatial-sparsity priority order. A corresponding tree for the data set is then constructed, following the sparsity information embedded in the model tree. A matching sequence between the two point sets is generated by traversing the identically structured trees. Experiments on synthetic and real data confirm that the method is applicable to robust spatial matching of sparse point-sets under moderate non-rigid distortion and arbitrary scaling, thus contributing to non-rigid point-pattern matching.
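For contrast, a baseline one-to-one matcher is sketched below; it is not the similarity K-dimensional tree described above, just an optimal-assignment match on Euclidean distances, and it lacks the scale invariance the paper targets.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(0)
model = rng.random((20, 2))                          # sparse model point-set
data = model + 0.02 * rng.standard_normal((20, 2))   # mildly perturbed data set

cost = cdist(model, data)                 # pairwise Euclidean distances
rows, cols = linear_sum_assignment(cost)  # globally optimal one-to-one match
print(list(zip(rows.tolist(), cols.tolist()))[:5])
```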
8.
9.
The widespread use of distributed control systems and modern measurement and control instruments means that chemical production processes generate large amounts of data, and both enterprise informatization and networked control technology place new demands on the management, storage, and transmission of process data. Applying data compression to preprocess chemical process data not only saves storage space but also reduces data traffic and avoids network congestion, which is important for the successful deployment of fieldbus and networked control technologies. Taking the typical Tennessee Eastman (TE) process as the test case, this paper proposes an industrially oriented online data compression method based on the wavelet transform that accounts for both the time-domain and frequency-domain characteristics of process data. Simulation results for compressing the stripper pressure signal of the TE process verify the effectiveness of the algorithm.
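A hedged sketch of the general idea (not the paper's online TE-process algorithm): decompose a process signal with a wavelet transform, hard-threshold small detail coefficients, and reconstruct. The wavelet choice, threshold, and test signal are assumptions.

```python
import numpy as np
import pywt

t = np.linspace(0, 10, 1024)
signal = np.sin(2 * np.pi * 0.5 * t) + 0.05 * np.random.randn(t.size)  # stand-in pressure signal

coeffs = pywt.wavedec(signal, "db4", level=4)          # multilevel decomposition
threshold = 0.1                                        # assumed threshold
compressed = [coeffs[0]] + [pywt.threshold(c, threshold, mode="hard") for c in coeffs[1:]]
recon = pywt.waverec(compressed, "db4")[: signal.size]

kept = sum(int(np.count_nonzero(c)) for c in compressed)
total = sum(c.size for c in compressed)
rmse = np.sqrt(np.mean((signal - recon) ** 2))
print(f"kept {kept}/{total} coefficients, RMSE = {rmse:.4f}")
```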
10.
11.
Shang-Kuan Chen 《Computer Standards & Interfaces》2011,33(4):367-371
Data hiding methods can be divided into two types: (1) the extracted important data are lossy, and (2) the extracted important data are lossless. The proposed method belongs to the second type. In this paper, a module-based substitution method with a lossless secret-data compression function is used to conceal the smoother areas of the secret image by modifying fewer pixels in the generated stego-image. Compared with previous data hiding methods that extract lossless data, the stego-image generated by the proposed method always has better quality, unless the hidden image is highly random.
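A generic modulus-substitution sketch in the spirit of the module-based method above; the paper's lossless compression of the secret data and its smoothness-aware pixel selection are not reproduced, and the module size is an assumption.

```python
import numpy as np

M = 8  # module size: each cover pixel carries one base-8 digit (assumption)

def embed(cover, digits):
    """Replace each pixel with the nearest value whose remainder mod M equals the secret digit."""
    stego = cover.astype(int).copy()
    for i, d in enumerate(digits):
        p = stego[i]
        candidates = [p - (p % M) + d + k * M for k in (-1, 0, 1)]
        candidates = [c for c in candidates if 0 <= c <= 255]
        stego[i] = min(candidates, key=lambda c: abs(c - p))
    return stego.astype(np.uint8)

def extract(stego, n):
    """Recover the digits losslessly from the pixel remainders."""
    return [int(p) % M for p in stego[:n]]

cover = np.array([120, 121, 119, 200, 55, 13], dtype=np.uint8)
secret = [3, 7, 0, 5, 2, 6]
stego = embed(cover, secret)
assert extract(stego, len(secret)) == secret
print(cover, stego)
```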
12.
To address the poor bit-error performance of frequency-hopping (FH) communication under wideband noise jamming and comb jamming, a new short-wave frequency-hopping technique, pattern-matching frequency hopping, is proposed based on the idea that "the channel is the message". Taking Willard, Walsh, and Gold codes as examples, the characteristics of several pattern codes and their influence on the system's anti-jamming capability and information rate are analyzed. Requirements for selecting pattern codes are proposed, and pattern code formats are given for the system's two frame-synchronization strategies. FH/MFSK and the pattern-matching FH system are compared by simulation. The results show that pattern-matching FH has stronger anti-jamming capability; when hostile jamming severely degrades the bit-error performance of conventional FH, pattern-matching FH can significantly improve communication reliability.
13.
14.
Zhi-Hui Wang, Kuo-Nan Chen 《Journal of Systems and Software》2010,83(11):2073-2082
Reversible image data hiding means that the cover image can be fully recovered after the embedded secret data are extracted. In this paper, we propose a reversible image data hiding scheme based on vector quantization (VQ) compressed images. The secret bits are embedded into the VQ index table by modifying index values according to the differences between neighboring indices. The data hiding capacity and the size of the final codestream (the embedded result) are a trade-off that can be decided by the user; in other words, the proposed scheme offers flexible hiding capacity. To evaluate its performance, the proposed scheme was compared with the scheme of Wang and Lu (2009). The comparison showed that our scheme is superior to that of Wang and Lu in both data hiding capacity and bit rate.
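The sketch below only builds the substrate the scheme operates on: a VQ codebook and index table from 4x4 image blocks via a few Lloyd (k-means) iterations. The embedding rule based on neighboring index differences is not implemented, and the codebook size and test image are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
image = rng.integers(0, 256, size=(64, 64)).astype(float)  # stand-in cover image

# Split into non-overlapping 4x4 blocks, flattened to 16-dimensional vectors.
blocks = image.reshape(16, 4, 16, 4).swapaxes(1, 2).reshape(-1, 16)

K = 32                                                        # codebook size (assumption)
codebook = blocks[rng.choice(len(blocks), K, replace=False)]  # random initial codewords
for _ in range(10):                                           # a few Lloyd iterations
    d = np.linalg.norm(blocks[:, None, :] - codebook[None, :, :], axis=2)
    idx = d.argmin(axis=1)                                    # nearest codeword per block
    for k in range(K):
        if np.any(idx == k):
            codebook[k] = blocks[idx == k].mean(axis=0)

index_table = idx.reshape(16, 16)  # the VQ index table the paper embeds into
print(index_table[:2])
```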
15.
Fingerprint sensors come in many specifications, with differing principles and designs, and products are neither compatible nor interoperable, which greatly reduces matching performance across sensor types. A method is proposed that fuses the two most common types of fingerprint sensors (optical and capacitive) to improve matching performance. The two sensors each capture an image; after preprocessing, minutiae are extracted and each image is matched against the template fingerprint, yielding two match scores. These two scores are then combined by a fusion rule into a final match score. Comparison with single-sensor performance shows that the fused result greatly improves system performance.
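A minimal score-level fusion sketch in the spirit of the method above: the two matchers' scores are min-max normalized and combined with a weighted sum. The weights and score ranges are assumptions, not the paper's fusion rule.

```python
import numpy as np

def min_max(score, lo, hi):
    """Map a raw matcher score into [0, 1] given its expected range."""
    return float(np.clip((score - lo) / (hi - lo), 0.0, 1.0))

def fuse(optical_score, capacitive_score, w_optical=0.5):
    """Weighted-sum fusion of the two normalized match scores."""
    s1 = min_max(optical_score, lo=0, hi=100)     # optical matcher range (assumed)
    s2 = min_max(capacitive_score, lo=0, hi=100)  # capacitive matcher range (assumed)
    return w_optical * s1 + (1.0 - w_optical) * s2

print(fuse(optical_score=72, capacitive_score=55))  # fused score in [0, 1]
```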
16.
Sotoodeh, Mahmood; Tajeripour, Farshad; Teimori, Sadegh; Jorgensen, Kirk 《Multimedia Tools and Applications》2018,77(13):16833-16866
Optical Music Recognition (OMR) can be divided into three main phases: (i) staff line detection and removal. The goal of this phase is to detect and to remove...
17.
Abdul Khader Jilani Saudagar; Abdul Sattar Syed 《Neural Computing & Applications》2014,24(7-8):1725-1734
Image compression is applied in many fields, such as television dissemination, remote sensing, and image storage. Digitized images are compressed by methods that exploit the redundancy of the images so that the number of bits required to represent an image can be reduced with acceptable degradation of the decoded image. The permissible degradation of image quality depends on the application, and there are various biomedical applications where accuracy is a major concern. To improve performance with respect to decoded picture quality and compression ratio, in contrast to existing image compression techniques, an image coding technique is proposed that transforms the image into another domain with the ridgelet transform and then quantizes the coefficients with a hybrid neural network combining two different learning networks, an auto-associative multilayer perceptron and a self-organizing feature map. Ridge functions are effective in representing functions that have discontinuities along straight lines, whereas standard wavelet transforms fail to represent such functions effectively. The results obtained from combining the finite ridgelet transform with hybrid neural networks are found to be much better than those obtained from the JPEG2000 image compression system.
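Purely as an illustration of the transform-then-quantize structure: in the sketch below a 2-D DCT stands in for the finite ridgelet transform and a uniform quantizer stands in for the hybrid AAMLP/SOM quantizer, so this shows the coding pipeline, not the paper's method. The quantization step and test image are assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(2)
image = rng.random((64, 64))                # stand-in image with values in [0, 1)

coeffs = dctn(image, norm="ortho")          # transform stage (ridgelet in the paper)
step = 0.05                                 # quantization step (assumption)
quantized = np.round(coeffs / step)         # quantizer stage (hybrid NN in the paper)
recon = idctn(quantized * step, norm="ortho")

nonzero = int(np.count_nonzero(quantized))
psnr = 10 * np.log10(1.0 / np.mean((image - recon) ** 2))
print(f"nonzero coefficients: {nonzero}/{image.size}, PSNR = {psnr:.1f} dB")
```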
18.
FFT, filtering, and compression of ECG signals are implemented on the SP061A single-chip microcontroller. By organizing the SP061A's hardware resources appropriately, allowing a selectable data-segment length, avoiding computation of high-frequency components, and using a simple data compression algorithm, the storage overhead, computation speed, and accuracy meet practical requirements.
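A desktop illustration of the FFT-based filtering step mentioned above (the SP061A implementation is fixed-point and resource-constrained); the sampling rate, cutoff, and test signal are assumptions.

```python
import numpy as np

fs = 256                                  # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.2 * np.sin(2 * np.pi * 50 * t)  # beat plus mains hum

spectrum = np.fft.rfft(ecg)
freqs = np.fft.rfftfreq(ecg.size, d=1 / fs)
spectrum[freqs > 40] = 0                  # drop high-frequency components (assumed cutoff)
filtered = np.fft.irfft(spectrum, n=ecg.size)

idx_50 = np.argmin(np.abs(freqs - 50))    # bin nearest the 50 Hz interference
print(f"50 Hz magnitude before/after: {np.abs(np.fft.rfft(ecg))[idx_50]:.1f} / "
      f"{np.abs(np.fft.rfft(filtered))[idx_50]:.1e}")
```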
19.
The use of cross-correlation function for the alignment of ECG waveforms and rejection of extrasystoles
The cross-correlation function for alignment of ECG waveforms and rejection of artifacts was studied. ECG waveforms were recorded and digitized (10-bit resolution) at 1.28 kHz and were processed on a CDC 6600 computer. The cross-correlation function was calculated using the cross spectrum and the fast Fourier transform algorithm. The maximum value of the cross-correlation function and its time location were used to assess (a) the similarity between waveforms (for elimination of extrasystoles) and (b) the relative time delay between the ECG waveforms (for waveform alignment in the averaging process). Prior to the correlation procedure, each ECG waveform was filtered with a nonrecursive digital bandpass filter. Experiments with various filters indicated that, with a low-frequency cutoff of 30 Hz and a high-frequency cutoff of 250 Hz, the maximum value of the cross-correlation function was higher than 0.9 for most of the recorded waveforms and more accurate results were obtained.
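A sketch of the alignment and rejection logic described above: the normalized cross-correlation is computed through the cross spectrum and the FFT, its peak gives the relative delay, and beats whose peak falls below 0.9 are rejected. The template and test beat are synthetic placeholders.

```python
import numpy as np

def align_or_reject(template, beat, threshold=0.9):
    """Align a beat to the template via FFT cross-correlation, or reject it as an artefact."""
    a = template - template.mean()
    b = beat - beat.mean()
    n = len(a)
    xcorr = np.fft.irfft(np.fft.rfft(a) * np.conj(np.fft.rfft(b)), n=n)  # cross spectrum -> correlation
    xcorr /= np.linalg.norm(a) * np.linalg.norm(b)                       # normalize to [-1, 1]
    peak = xcorr.max()
    lag = int(np.argmax(xcorr))
    if lag > n // 2:
        lag -= n                         # interpret as a negative (circular) delay
    if peak < threshold:
        return None                      # likely an extrasystole or artefact: reject
    return np.roll(beat, lag), peak      # aligned beat and its similarity score

template = np.exp(-0.5 * ((np.arange(256) - 128) / 8.0) ** 2)  # synthetic QRS-like pulse
beat = np.roll(template, 10) + 0.01 * np.random.randn(256)
aligned, score = align_or_reject(template, beat)
print(f"similarity = {score:.3f}")
```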
20.
1 Introduction Functional magnetic resonance imaging (fMRI) is a non-invasive brain imaging technique that has been used in brain function research since the early 1990s[1]. However, it is often difficult to analyze fMRI data because of the low signal-to-noise ratio (SNR) (about 2%-4% at 1.5 T magnetic field strength) and the delay between the true neural activity and the stimulus-induced signal responses. The prevalent methods applied to fMRI data can be divided i…