Parallel machines are extensively used to increase computational speed in solving different scientific problems. Various topologies with different properties have been proposed so far, and each is suitable for specific applications. Pyramid interconnection networks offer a potentially powerful architecture for many applications such as image processing, visualization, and data mining. The major advantage of pyramids, which is important for image processing systems, is the hierarchical abstraction and transfer of data toward the apex node, much like the human visual system, which extracts an object from an image. There is a rapidly growing class of applications in which multidimensional datasets must be processed simultaneously. Such a system needs a symmetric and expandable interconnection network that processes data from different directions and forwards them toward the apex. In this paper, a new type of pyramid interconnection network called the Non-Flat Surface Level (NFSL) pyramid is proposed. An NFSL pyramid is constructed from L-level A-lateral-base pyramids, called basic pyramids, so the apex node is surrounded by the level-one surfaces of the NFSL, that is, the levels of nodes nearest to the apex in the basic pyramids. Two topologies, NFSL-T and NFSL-Q, originating from trilateral-base and quadrilateral-base basic pyramids, are studied to exemplify the proposed structure. To evaluate the proposed architecture, the most important properties of the networks are determined and compared with those of the standard pyramid network and its variants.
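As background for such comparisons, here is a minimal sketch (not from the paper; the NFSL-specific construction and formulas are not reproduced) that builds a standard L-level quadrilateral-base pyramid, the kind of basic pyramid NFSL-Q is assembled from, and measures the structural properties typically compared: node count, maximum degree, and diameter.

```python
# Sketch: standard L-level quadrilateral-base pyramid and its basic properties.
from collections import deque

def build_pyramid(L):
    """Nodes are (level, row, col); level l is a 2^l x 2^l mesh."""
    adj = {}
    for l in range(L + 1):
        n = 2 ** l
        for r in range(n):
            for c in range(n):
                adj.setdefault((l, r, c), set())
    for (l, r, c) in list(adj):
        # mesh links within a level
        for dr, dc in ((0, 1), (1, 0)):
            nb = (l, r + dr, c + dc)
            if nb in adj:
                adj[(l, r, c)].add(nb); adj[nb].add((l, r, c))
        # tree link to the parent one level up
        if l > 0:
            p = (l - 1, r // 2, c // 2)
            adj[(l, r, c)].add(p); adj[p].add((l, r, c))
    return adj

def diameter(adj):
    """Exact diameter via BFS from every node (fine for small L)."""
    best = 0
    for s in adj:
        dist = {s: 0}; q = deque([s])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1; q.append(v)
        best = max(best, max(dist.values()))
    return best

for L in (1, 2, 3):
    adj = build_pyramid(L)
    assert len(adj) == (4 ** (L + 1) - 1) // 3  # closed-form node count
    print(L, len(adj), max(map(len, adj.values())), diameter(adj))
```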
This paper proposes an adaptive Wiener filtering method for speech enhancement. The method adapts the filter transfer function from sample to sample based on the speech signal statistics: the local mean and the local variance. It is implemented in the time domain rather than in the frequency domain to accommodate the time-varying nature of speech signals. The proposed method is compared to the traditional frequency-domain Wiener filtering, spectral subtraction, and wavelet denoising methods using different speech quality metrics. The simulation results reveal the superiority of the proposed Wiener filtering method in the case of Additive White Gaussian Noise (AWGN) as well as colored noise.
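A minimal sketch of such a sample-by-sample time-domain Wiener filter, assuming the classical local-statistics form; the window length win, the function name, and the noise-variance estimate noise_var are illustrative, not the authors' exact formulation:

```python
import numpy as np

def adaptive_wiener(x, noise_var, win=64):
    """Time-domain Wiener filter adapted sample by sample from local statistics."""
    kernel = np.ones(win) / win
    local_mean = np.convolve(x, kernel, mode="same")
    local_var = np.convolve((x - local_mean) ** 2, kernel, mode="same")
    # Wiener gain: estimated signal variance over total variance, floored at 0
    gain = np.maximum(local_var - noise_var, 0.0) / np.maximum(local_var, 1e-12)
    return local_mean + gain * (x - local_mean)

# Usage: clean = adaptive_wiener(noisy, noise_var=sigma ** 2, win=64),
# with noise_var estimated from a speech-free segment of the recording.
```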
Human demand for better social media applications has increased tremendously. This increase has created the need for digital systems with larger storage capacity and more processing power. However, an increase in multimedia content size reduces overall processing performance, because storing and retrieving large files lengthens execution time. Therefore, it is extremely important to reduce the multimedia content size. This reduction can be achieved by image and video compression. There are two types of image or video compression: lossy and lossless. In lossless compression, the decompressed image is an exact copy of the original image, while in lossy compression, the original and the decompressed image differ from each other. Lossless compression is needed when every pixel matters, as in automatic image-processing applications. Lossy compression, on the other hand, is used in applications based on human visual perception, where not every single pixel is important; rather, the overall image quality is. Many video compression algorithms have been proposed, but the balance between compression rate and video quality still needs further investigation. The algorithm developed in this research focuses on this balance. The proposed algorithm applies distinct compression stages to each type of information: eliminating redundant and semi-redundant frames, eliminating data by manipulating consecutive XORed frames, and reducing the discrete cosine transform (DCT) coefficients according to the desired accuracy and compression ratio. A neural network is used to further reduce the frame size. The proposed method is lossy, but it can approach near-lossless behavior in terms of image quality and compression ratio, with comparable execution time.
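Two of the stages mentioned above are easy to illustrate. A minimal sketch, assuming 8-bit grayscale frames; the threshold, the keep fraction, and the function names are illustrative, not the paper's exact pipeline:

```python
import numpy as np
from scipy.fft import dctn, idctn

def is_redundant(prev, curr, thresh=0.01):
    """XOR consecutive 8-bit frames; drop the frame if few pixels changed."""
    changed = np.count_nonzero(np.bitwise_xor(prev, curr))
    return changed / curr.size < thresh

def truncate_dct(frame, keep=0.25):
    """Keep only the lowest-frequency `keep` fraction of DCT coefficients."""
    coeffs = dctn(frame.astype(float), norm="ortho")
    h, w = frame.shape
    mask = np.zeros_like(coeffs)
    mask[: int(h * keep), : int(w * keep)] = 1.0
    return idctn(coeffs * mask, norm="ortho")  # lossy reconstruction
```

Raising keep improves accuracy at the cost of compression ratio, which is the trade-off the abstract describes.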
We consider the reconstruction of a complex-valued object that is coherently illuminated and viewed through the same random-phase screen. The reconstruction is based on two intensity measurements: the intensity of the Fourier transform of the image and the intensity of the Fourier transform of the image when modulated with an exponential filter. The illumination beam has a Gaussian intensity profile of arbitrary width, and the phase screen is assumed to be described by a Gaussian random process of large variance and arbitrary correlation length. Computer-simulated examples of the reconstruction of a two-dimensional complex object demonstrate that the reconstruction is robust.
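A minimal sketch of the measurement model only (the reconstruction algorithm itself is in the paper). The grid size, the filter rate alpha, and the white, large-variance phase screen are simplifying assumptions; the paper allows an arbitrary correlation length:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 128
obj = rng.random((N, N)) * np.exp(1j * 2 * np.pi * rng.random((N, N)))

# Gaussian illumination beam and a large-variance random-phase screen
x = np.linspace(-1, 1, N)
beam = np.exp(-(x[None, :] ** 2 + x[:, None] ** 2) / 0.5)
screen = np.exp(1j * rng.normal(scale=10.0, size=(N, N)))

field = obj * beam * screen
I1 = np.abs(np.fft.fft2(field)) ** 2            # plain Fourier intensity

alpha = 0.05                                    # exponential filter rate
expf = np.exp(-alpha * np.arange(N))[None, :]   # 1-D exponential modulation
I2 = np.abs(np.fft.fft2(field * expf)) ** 2     # second intensity measurement
# The reconstruction task is to recover obj from the pair (I1, I2).
```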
The world of information technology is more than ever being flooded with huge amounts of data: nearly 2.5 quintillion bytes every day. This large stream of data is called big data, and the amount is increasing each day. This research uses a technique called sampling, which selects a representative subset of the data points, manipulates and analyzes this subset to identify patterns and trends in the larger dataset, and finally creates models. Sampling uses a small proportion of the original data for analysis and model training, so it is relatively fast while maintaining data integrity and achieving accurate results. Two deep neural networks, AlexNet and DenseNet, were used in this research to test two sampling techniques: sampling with replacement and reservoir sampling. The dataset used for this research was divided into three classes: acceptable, flagged as easy, and flagged as hard. The base models were trained on the whole dataset, whereas the other models were trained on 50% of the original dataset, giving four combinations of model and sampling technique. The F-measure for the AlexNet base model was 0.807, while that for the DenseNet base model was 0.808. Combination 1, the AlexNet model with sampling with replacement, achieved an average F-measure of 0.8852. Combination 3, the AlexNet model with reservoir sampling, had an average F-measure of 0.8545. Combination 2, the DenseNet model with sampling with replacement, achieved an average F-measure of 0.8017. Finally, combination 4, the DenseNet model with reservoir sampling, had an average F-measure of 0.8111. Overall, we conclude that both models trained on a sampled dataset gave equal or better results compared to the base models trained on the whole dataset.
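The two sampling schemes named above are standard; a minimal sketch (the dataset handling and the 50% split are illustrative, not the paper's exact experimental setup):

```python
import random

def sample_with_replacement(data, k):
    """k independent uniform draws; items may repeat."""
    return [random.choice(data) for _ in range(k)]

def reservoir_sample(stream, k):
    """Algorithm R: uniform k-subset of a stream of unknown length."""
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            reservoir.append(item)
        else:
            j = random.randrange(i + 1)  # uniform in 0..i
            if j < k:
                reservoir[j] = item
    return reservoir

# Usage: train on reservoir_sample(dataset, len(dataset) // 2), i.e. a 50% subset.
```

Reservoir sampling needs only one pass and O(k) memory, which is what makes it attractive for big-data streams that do not fit in memory.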
This paper deals with defining the concept of agent-based time delay margin and computing its value in multi-agent systems controlled by event-triggered controllers. The agent-based time delay margin, which specifies the time delay each agent can tolerate while consensus is still ensured in an event-triggered controlled multi-agent system, can be considered complementary to the concept of (network) time delay margin previously introduced in the literature. In this paper, an event-triggered control method for achieving consensus in multi-agent systems with time delay is considered, and it is shown that Zeno behavior is excluded by applying this method. Then, in a multi-agent system controlled by the considered event-triggered method, the concept of agent-based time delay margin in the presence of a fixed network delay is defined. Moreover, an algorithm for computing the value of the time delay margin for each agent is proposed. Numerical simulation results are also provided to verify the obtained theoretical results.
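A minimal sketch of the kind of setup being analyzed, assuming single-integrator agents, an undirected cycle graph, and a simple state-error trigger; the dynamics, threshold, and delay values are illustrative, not the paper's protocol. Sweeping the delay upward until consensus is lost would give a numerical estimate of a delay margin:

```python
import numpy as np

A = np.array([[0, 1, 0, 1],           # adjacency matrix of a 4-agent cycle
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], float)
dt, tau_steps, eps = 0.01, 20, 0.05    # step size, delay in steps, trigger threshold

x = np.array([1.0, -2.0, 3.0, 0.5])    # initial states
xhat = x.copy()                        # last broadcast (event-sampled) states
hist = [x.copy() for _ in range(tau_steps + 1)]  # delay buffer of broadcasts

for k in range(3000):
    delayed = hist[0]                  # neighbors' broadcasts, delayed by tau
    u = -(A * (x[:, None] - delayed[None, :])).sum(axis=1)
    x = x + dt * u
    # event trigger: re-broadcast only when the measurement error grows too large
    trig = np.abs(xhat - x) > eps
    xhat = np.where(trig, x, xhat)
    hist = hist[1:] + [xhat.copy()]

print(x)  # states cluster near a common value when tau is small enough
```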