2.
Multi-antenna beamforming and transmit diversity are two complementary techniques for the downlink of cellular systems. A beamforming system provides array gain but no diversity gain; conversely, a transmit diversity system provides diversity gain but no array gain. The performance of the two systems is compared by computing outage capacity, SNR distribution (CDF), and transmit power, with and without handover and under correlated fading.
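The array-gain versus diversity-gain tradeoff can be illustrated with a small Monte-Carlo sketch. This is our own simplified model, not the paper's system: beamforming is modelled as a fully correlated array steered at a single path (array gain, diversity order 1), transmit diversity as equal power split over independent Rayleigh branches (full diversity order, no array gain), both at 0 dB mean per-branch SNR.

```python
import random

def outage_prob(thr_db, n_tx, mode, trials=20000, seed=1):
    """Monte-Carlo outage probability under Rayleigh fading (illustrative model).

    'beamforming': SNR = n_tx * |h|^2, one correlated fade (array gain only).
    'diversity':   SNR = sum(|h_i|^2) / n_tx over independent branches
                   (diversity gain only).
    """
    rng = random.Random(seed)
    thr = 10 ** (thr_db / 10)          # dB threshold -> linear
    out = 0
    for _ in range(trials):
        if mode == "beamforming":
            snr = n_tx * rng.expovariate(1.0)          # |h|^2 is exponential
        else:
            snr = sum(rng.expovariate(1.0) for _ in range(n_tx)) / n_tx
        out += snr < thr
    return out / trials
```

At a low SNR threshold the diversity system's steeper outage slope wins; at a high threshold the beamformer's array gain wins, matching the complementary behaviour described in the abstract.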
3.
With the worldwide spread of GSM networks, the short message service (SMS) and its value-added services have entered many fields on the strength of their convenience, flexibility, speed, and low cost, and are gradually replacing traditional means of information transfer. Building a wireless dissemination platform on GSM communication modules and SMS to carry data transmission and monitoring is therefore a natural trend. This paper presents a low-volume data transmission scheme based on GSM short messages that supports stable data acquisition and remote monitoring.
4.
In this paper, we propose a new feature selection method called kernel F-score feature selection (KFFS), used as a pre-processing step in the classification of medical datasets. KFFS consists of two phases. In the first phase, the input features of a medical dataset are transformed into kernel space by means of a linear (Lin) or radial basis function (RBF) kernel function, raising the dataset to a high-dimensional feature space. In the second phase, the F-score of each feature in this high-dimensional space is calculated with the F-score formula, and the mean of these F-scores is computed. A feature is selected if its F-score exceeds the mean; otherwise, it is removed from the feature space. In this way, KFFS removes irrelevant or redundant features from the high-dimensional input space; the kernel functions serve to transform a non-linearly separable medical dataset into a linearly separable feature space. To test the performance of KFFS, we used the heart disease dataset, the SPECT (Single Photon Emission Computed Tomography) images dataset, and the Escherichia coli Promoter Gene Sequence dataset from the UCI (University of California, Irvine) machine learning repository. As classification algorithms, Least Squares Support Vector Machine (LS-SVM) and a Levenberg–Marquardt artificial neural network were used. The results show that KFFS produces very promising results compared with plain F-score feature selection.
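The two phases are easy to sketch. The code below is a minimal illustration (function names and the RBF feature map are our assumptions; the paper's exact formulation may differ): the columns of the kernel matrix act as high-dimensional features, each gets an F-score, and features scoring above the mean are kept.

```python
import numpy as np

def f_scores(X, y):
    """Per-feature F-score (Chen & Lin formulation) for a binary-labelled matrix."""
    pos, neg = X[y == 1], X[y == 0]
    num = (pos.mean(0) - X.mean(0)) ** 2 + (neg.mean(0) - X.mean(0)) ** 2
    den = pos.var(0, ddof=1) + neg.var(0, ddof=1)   # within-class sample variances
    return num / den

def kffs_select(X, y, gamma=0.5):
    """KFFS sketch: map samples into RBF kernel space (one column per training
    sample), score every kernel-space feature, keep those above the mean F-score."""
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)  # pairwise squared distances
    K = np.exp(-gamma * sq)                              # n x n RBF kernel matrix
    scores = f_scores(K, y)
    return scores > scores.mean()                        # boolean mask over kernel features
```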
5.
In financial markets, investors attempt to maximize profit within a constructed portfolio by optimizing the tradeoff between risk and return across many stocks. This requires proper handling of conflicting factors, which can benefit from the domain of multiple criteria decision making (MCDM). However, the indexes and factors representing stock performance are often imprecise or vague, and are better represented by linguistic terms characterized by fuzzy numbers. The aim of this research is first to develop three group MCDM methods, then to use them to select undervalued stocks from financial ratios and the subjective judgments of experts. The study proposes three versions of fuzzy TOPSIS (Technique for Order Preference by Similarity to Ideal Solution): conventional TOPSIS (C-TOPSIS), adjusted TOPSIS (A-TOPSIS), and modified TOPSIS (M-TOPSIS), in which a new fuzzy distance measure, derived from the confidence levels of the experts, and fuzzy performance ratings are included. The practical aspects of the proposed methods are demonstrated through a case study on the Tehran Stock Exchange (TSE), which is timely given the need for investors to select undervalued stocks in untapped markets in anticipation of the easing of economic sanctions following a change in government leadership.
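For orientation, the crisp (non-fuzzy) TOPSIS core that all three variants extend can be sketched as follows; the fuzzy ratings and the confidence-based distance measure of C-/A-/M-TOPSIS are omitted here.

```python
import numpy as np

def topsis(decision, weights, benefit):
    """Crisp TOPSIS sketch: rank alternatives by closeness to the ideal solution.

    `decision` is alternatives x criteria; `benefit[j]` marks larger-is-better
    criteria (cost criteria get the reversed ideal)."""
    norm = decision / np.sqrt((decision ** 2).sum(0))   # vector normalisation
    v = norm * weights                                  # weighted normalised matrix
    ideal = np.where(benefit, v.max(0), v.min(0))       # positive ideal solution
    anti = np.where(benefit, v.min(0), v.max(0))        # negative ideal solution
    d_pos = np.sqrt(((v - ideal) ** 2).sum(1))
    d_neg = np.sqrt(((v - anti) ** 2).sum(1))
    return d_neg / (d_pos + d_neg)                      # closeness coefficient in [0, 1]
```

Alternatives are then ranked by descending closeness coefficient; the fuzzy variants replace the crisp entries and the Euclidean distances with fuzzy-number counterparts.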
6.
To improve image segmentation performance, this paper presents a gray-level-jump segmentation algorithm. The algorithm defines the texture direction, computes the ridge-line width, derives distance features between textures, and establishes a mathematical model of the texture border, yielding a new texture segmentation algorithm that is compared with existing ones. Simulation results show that the algorithm offers higher segmentation precision, faster segmentation, stronger noise immunity, and less loss of target information; the segmented regions rarely contain other texture regions or background. Furthermore, characteristic points and parameters are extracted from each segmented region of a texture image to form a feature vector, which is compared with standard template vectors to identify the target type within a threshold range. Experimental results show that the proposed target recognition approach achieves a higher recognition rate and faster recognition than existing approaches. Advances in image processing through the study of texture segmentation are applicable not only to imaging but are also of theoretical value for target recognition. This work offers a theoretical reference and has practical significance for image-based target recognition systems in fields such as aerospace, public security, and road traffic.
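The notion of a gray-level jump can be illustrated minimally (our illustration only, not the paper's full border model): flag positions where intensity changes sharply between horizontally adjacent pixels, a simple proxy for a texture border.

```python
import numpy as np

def gray_level_jumps(img, thresh):
    """Mark gray-level jumps between horizontal neighbours.

    Returns a boolean array one column narrower than `img`: True where the
    absolute intensity difference exceeds `thresh`."""
    return np.abs(np.diff(img.astype(float), axis=1)) > thresh
```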
7.
The problem of finding the expected shortest path in stochastic networks, where the presence of each node is probabilistic and the arc lengths are random variables, has numerous applications, especially in communication networks. Since the problem is NP-hard, we use an ant colony system (ACS) to build a metaheuristic algorithm for finding the expected shortest path. A new local heuristic is formulated for the proposed algorithm to account for the probabilistic nodes. The arc lengths are randomly generated from the arc length distribution functions. Worked examples illustrate the applicability of the proposed approach.
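As a point of reference for the problem the ACS attacks, a brute-force Monte-Carlo baseline (our sketch, not the authors' algorithm) samples node presence and arc lengths and averages the shortest-path length over realisations in which the target is reachable.

```python
import heapq
import random

def dijkstra(adj, s, t):
    """Shortest s-t distance in a weighted digraph given as {u: [(v, w), ...]}."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == t:
            return d
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def expected_shortest_path(edges, node_prob, s, t, samples=2000, seed=0):
    """Monte-Carlo estimate: `edges` maps (u, v) to a length sampler taking an
    RNG; `node_prob` gives each intermediate node's presence probability.
    Terminals s and t are assumed always present."""
    rng = random.Random(seed)
    total, hits = 0.0, 0
    for _ in range(samples):
        present = {v for v, p in node_prob.items() if rng.random() < p}
        present |= {s, t}
        adj = {}
        for (u, v), sample_len in edges.items():
            if u in present and v in present:
                adj.setdefault(u, []).append((v, sample_len(rng)))
        d = dijkstra(adj, s, t)
        if d < float("inf"):
            total += d
            hits += 1
    return total / hits if hits else float("inf")
```

The ACS replaces this exhaustive sampling with pheromone-guided path construction, which is what makes larger instances tractable.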
8.
Image fusion methods based on a multiscale transform (MST) suffer from a high computational load due to the fast Fourier transforms (FFTs) used in the lowpass and highpass filtering steps. The lifting wavelet scheme, based on second-generation wavelets, has been proposed as a solution. The Lifting Wavelet Transform (LWT) is composed of split, prediction, and update operations, all implemented in the spatial domain using multiplications and additions, so computation time is greatly reduced. Since image fusion benefits from an undecimated transform, the scheme was later extended to the Stationary Lifting Wavelet Transform (SLWT). In this paper, we propose using the lattice filter for the MST analysis step. The lattice filter is composed of analysis and synthesis parts in which simultaneous lowpass and highpass operations are performed in the spatial domain using additions, multiplications, and delay operations, in a recursive structure that increases robustness to noise. Since the original filter is designed for the decimated case, we have developed undecimated lattice structures and applied them to the fusion of multifocus images. Fusion results and evaluation metrics show that the proposed method performs better, especially on noisy images, while having a computational load similar to that of SLWT-based fusion.
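One lifting level is simple to write down. The sketch below uses a Haar-like predict/update pair (an assumption for illustration; the filters in the paper differ) and shows why only additions and multiplications are needed, with no transform-domain step.

```python
def lwt_step(x):
    """One level of the lifting scheme (Haar-like sketch; len(x) must be even):
    split into even/odd samples, predict odd from even, update even."""
    even, odd = x[0::2], x[1::2]                         # split (lazy wavelet)
    detail = [o - e for e, o in zip(even, odd)]          # predict: odd ~ even
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update: preserve mean
    return approx, detail

def ilwt_step(approx, detail):
    """Inverse lifting: undo update, undo predict, interleave."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    return [v for pair in zip(even, odd) for v in pair]
```

Because each lifting step is trivially invertible by reversing the operations, perfect reconstruction holds regardless of the predict/update filters chosen.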
9.
An important property of today's big data processing is that the same computation is often repeated on datasets that evolve over time, such as web and social network data. While repeating full computation over the entire dataset is feasible with distributed computing frameworks such as Hadoop, it is obviously inefficient and wastes resources. In this paper, we present HadUP (Hadoop with Update Processing), a modified Hadoop architecture tailored to large-scale incremental processing with conventional MapReduce algorithms. Several approaches have been proposed to achieve a similar goal using task-level memoization. However, task-level memoization detects changes in the datasets at a coarse-grained level, which often makes such approaches ineffective. Instead, HadUP detects and computes changes at a fine-grained level using a deduplication-based snapshot differential algorithm (D-SD) and update propagation. As a result, it provides high performance, especially in environments where task-level memoization has no benefit. HadUP requires only a small amount of extra programming effort because it can reuse the code of existing Hadoop map and reduce functions, so developing HadUP applications is quite easy.
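The idea of fine-grained change detection plus update propagation can be sketched on a toy word count (our illustration; HadUP's D-SD algorithm operates on Hadoop snapshots, not Python lists): diff the record sets between runs, then patch only the affected reduce outputs instead of recomputing everything.

```python
from collections import Counter

def diff_records(old, new):
    """Snapshot differential sketch (not the paper's D-SD): report records
    added to and removed from the input between two runs."""
    added = list((Counter(new) - Counter(old)).elements())
    removed = list((Counter(old) - Counter(new)).elements())
    return added, removed

def incremental_wordcount(prev_counts, added, removed):
    """Update-propagation sketch: apply only the delta to the cached
    reduce output of the previous run."""
    counts = Counter(prev_counts)
    for rec in added:
        counts.update(rec.split())
    for rec in removed:
        counts.subtract(rec.split())
    return +counts   # unary + drops zero and negative entries
```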