There are many optimization problems in different branches of science that must be solved using an appropriate methodology. Population-based optimization algorithms are among the most efficient approaches for solving this type of problem. In this paper, a new optimization algorithm called the All Members-Based Optimizer (AMBO) is introduced to solve various optimization problems. The main idea in designing the proposed AMBO algorithm is to use more information from the population members, rather than just a few specific members (such as the best and worst members), to update the population matrix. Therefore, in AMBO, any member of the population can play a role in updating the population matrix. The theory of AMBO is described and then mathematically modeled for implementation on optimization problems. The performance of the proposed algorithm is evaluated on a set of twenty-three standard objective functions belonging to three categories: unimodal, high-dimensional multimodal, and fixed-dimensional multimodal functions. To analyze and compare the optimization results obtained by AMBO on these objective functions, eight other well-known algorithms have also been implemented. The optimization results demonstrate the ability of AMBO to solve various optimization problems, and comparison and analysis of the results show that AMBO is superior to and more competitive than the other algorithms in providing suitable solutions.
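The all-members idea described above, where any population member (not only the best or worst) can guide an update, can be sketched as a minimal population-based optimizer. The update rule below is illustrative, not the paper's actual AMBO formulation: each member moves toward a randomly chosen better member or away from a randomly chosen worse one, with greedy acceptance.

```python
import numpy as np

def ambo_like_optimizer(f, bounds, pop_size=30, iters=200, seed=0):
    """Minimise f over a box. Any member may serve as the guide for an
    update (a sketch of the all-members idea; the exact update rule is
    an assumption, not the paper's)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    dim = lo.size
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.apply_along_axis(f, 1, pop)
    for _ in range(iters):
        for i in range(pop_size):
            j = rng.integers(pop_size)          # any member may guide the update
            if j == i:
                continue
            r = rng.random(dim)
            if fit[j] < fit[i]:                 # move toward a better member
                cand = pop[i] + r * (pop[j] - pop[i])
            else:                               # move away from a worse member
                cand = pop[i] + r * (pop[i] - pop[j])
            cand = np.clip(cand, lo, hi)
            fc = f(cand)
            if fc < fit[i]:                     # greedy acceptance
                pop[i], fit[i] = cand, fc
    best = int(np.argmin(fit))
    return pop[best], fit[best]
```

On a unimodal test function such as the sphere function, this scheme contracts the population toward the best region even though no single member is designated as the sole attractor.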
Multidimensional Systems and Signal Processing - Algebraic structures and their hardware–software implementation gain considerable attention in the field of information security and coding...
The Pythagorean fuzzy set (PFS) is a powerful tool for dealing with imprecision and vagueness, and many aggregation operators based on PFSs have been proposed. However, the existing methods assume that the decision-makers (DMs) and the attributes are at the same priority level, whereas in real group decision-making problems the attributes and DMs may have different priority levels. Therefore, in this paper, we introduce multiattribute group decision-making (MAGDM) based on PFSs where a prioritization relationship exists over the attributes and DMs. First, we develop the Pythagorean fuzzy Einstein prioritized weighted average operator and the Pythagorean fuzzy Einstein prioritized weighted geometric operator, and study their desirable properties, such as idempotency, boundedness, and monotonicity, in detail. Moreover, we propose a MAGDM approach based on the developed operators under a Pythagorean fuzzy environment. Finally, an illustrative example is provided to demonstrate the practicality of the proposed approach.
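The prioritized-aggregation idea can be sketched as follows: priority weights are built from the scores of higher-priority values, then the values are aggregated. The sketch below uses the standard algebraic Pythagorean fuzzy weighted average rather than the Einstein variant the paper develops, and the normalised score function (1 + μ² − ν²)/2 is one of several variants in the literature; both choices are assumptions.

```python
import math

def pf_score(mu, nu):
    # Normalised score of a Pythagorean fuzzy value (mu, nu), mapped to
    # [0, 1]. Assumption: (1 + mu^2 - nu^2) / 2; other variants exist.
    return (1.0 + mu * mu - nu * nu) / 2.0

def prioritized_weights(pfvs):
    # T_1 = 1; T_k = product of scores of all higher-priority values;
    # weights are the normalised T_k (Yager-style prioritization).
    T, acc = [], 1.0
    for mu, nu in pfvs:
        T.append(acc)
        acc *= pf_score(mu, nu)
    total = sum(T)
    return [t / total for t in T]

def pfwa(pfvs, weights):
    # Algebraic Pythagorean fuzzy weighted average (not the Einstein form):
    # mu = sqrt(1 - prod (1 - mu_k^2)^w_k),  nu = prod nu_k^w_k
    pm, pn = 1.0, 1.0
    for (mu, nu), w in zip(pfvs, weights):
        pm *= (1.0 - mu * mu) ** w
        pn *= nu ** w
    return math.sqrt(1.0 - pm), pn
```

Idempotency, one of the properties the paper verifies, is easy to check here: aggregating identical values returns the same value.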
In this paper we propose a new multispectral image fusion architecture. The proposed method comprises two steps, each based on a neural network. First, the spatial information extracted from the panchromatic (Pan) image is injected into the upsampled multispectral (MS) image. In this step, the method employs a deep convolutional neural network (DCNN) to estimate the spatial information of the MS image according to a multi-resolution analysis (MRA) scheme. This DCNN is trained with the low-spatial-resolution version of the Pan image as input and the spatial information as target, and is called the 'fusion network' (FN). The FN adaptively estimates the spatial information of the MS images and operates as an injection gain in the MRA scheme. In the second step, spectral compensation is performed on the fused MS image. For this purpose, we use a novel loss function for this DCNN to reduce the spectral distortion in the fused images while simultaneously maintaining the spatial information; this network is called the 'spectral compensation network' (SCN). Finally, the proposed method is compared with several state-of-the-art methods on three datasets, using both full-reference and reduced-reference criteria. The experimental results show that the proposed method achieves competitive performance in both spatial and spectral terms.
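The MRA injection step described above has the generic form MS_fused = MS_up + G ⊙ (Pan − Pan_lowpass). A minimal sketch follows, with a box filter standing in for the learned FN and a simple per-band ratio as the injection gain; both substitutions are assumptions for illustration, not the paper's method.

```python
import numpy as np

def mra_inject(ms_up, pan, kernel_size=5, gain=None):
    """Fuse an upsampled MS image (H, W, B) with a Pan image (H, W) by
    injecting the Pan detail (Pan minus its low-pass version) into each
    band. A box filter replaces the paper's learned FN; `gain` defaults
    to a per-band mean ratio (illustrative choice, not the paper's)."""
    pad = kernel_size // 2
    padded = np.pad(pan, pad, mode="edge")
    # box low-pass filter via shifted sums (no SciPy dependency)
    low = np.zeros_like(pan, dtype=float)
    for dy in range(kernel_size):
        for dx in range(kernel_size):
            low += padded[dy:dy + pan.shape[0], dx:dx + pan.shape[1]]
    low /= kernel_size ** 2
    detail = pan - low                       # spatial information to inject
    if gain is None:
        # simple per-band injection gain: band mean over Pan mean
        gain = ms_up.mean(axis=(0, 1)) / (pan.mean() + 1e-12)
    return ms_up + detail[..., None] * gain
```

A sanity property of any MRA scheme holds here: a constant Pan image carries no detail, so the fused output equals the upsampled MS input.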
Social networking platforms provide a vital source for disseminating information across the globe, particularly in the case of a disaster, and are a great means of finding out the real account of a disaster. Twitter is an example of such a platform, which has been extensively utilized by the scientific community due to its unidirectional model. Identifying eyewitness tweets about an incident from the millions of tweets shared by Twitter users is considered a challenging task. The research community has proposed diverse sets of techniques to identify eyewitness accounts. A recent state-of-the-art approach proposed a comprehensive set of features to identify eyewitness accounts; however, this approach suffers from some limitations. Firstly, automatically extracting the feature-words for each feature identified by the approach remains a perplexing task. Secondly, not all identified features were incorporated in the implementation. This paper utilizes language structure, linguistics, and word relations to achieve automatic extraction of feature-words by creating grammar rules. Additionally, all identified features, including those left out by the state-of-the-art model, were implemented. A generic approach is taken to cover different types of disaster, and the proposed approach was evaluated for four disaster types: earthquakes, floods, hurricanes, and wildfires. Based on a static dictionary, the Zahra et al. approach produced an F-score of 0.92 for eyewitness identification in the earthquake category, while the proposed approach secured an F-score of 0.81 in the same category. This can be considered a significant score given that no static dictionary is used.
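Grammar-rule extraction of feature-words, as opposed to a static dictionary, can be sketched with a few pattern rules over tweet text. The rule names, patterns, and word lists below are assumptions invented for illustration; the paper's actual rule set is far richer.

```python
import re

# Illustrative grammar-style rules for pulling candidate eyewitness
# feature-words out of a tweet. These three rules and their word lists
# are hypothetical examples, not the paper's grammar.
RULES = {
    "perceptual_experience": re.compile(
        r"\bI\s+(?:just\s+)?(felt|saw|heard|witnessed)\b", re.I),
    "intensifier": re.compile(r"\b(huge|massive|violent|strong)\b", re.I),
    "locational_marker": re.compile(
        r"\b(here|near me|my (?:house|street|city))\b", re.I),
}

def extract_feature_words(tweet: str) -> dict:
    """Return feature -> matched words for each rule that fires."""
    hits = {}
    for name, pattern in RULES.items():
        found = pattern.findall(tweet)
        if found:
            hits[name] = [f.lower() for f in found]
    return hits
```

A first-person perception verb combined with a locational marker is the kind of signal such rules surface, while a detached news report triggers none of them.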
The scale-up of laboratory grinding data to industrial milling operations generally relies on tests carried out in cylindrical ball mills run in batch mode. This approach imposes no restriction on the diameter and length of the laboratory mill. In this work, the breakage characteristics of a copper ore were measured using two batch mills of different designs. For each mill, a number of feed samples with similar size distributions were prepared for testing under various conditions. Product size distributions were then measured after predefined milling time intervals. Finally, the selection function and breakage function parameters of the copper ore were back-calculated from the milling data. Results showed that the breakage function parameters from the two mills are statistically similar, indicating a normalisable copper ore. It was also found that the scale-up equations for batch grinding data described well the effect of mill diameter on the selection function parameters.
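The selection and breakage functions mentioned above enter the standard size-discretised batch-grinding population balance, dm_i/dt = −S_i m_i + Σ_{j<i} b_ij S_j m_j. The forward simulation below (the inverse of the back-calculation the study performs) is a minimal sketch with illustrative parameter values, not a scale-up model.

```python
import numpy as np

def batch_grind(m0, S, B, t, steps=1000):
    """Simulate batch grinding with the size-discretised population
    balance  dm_i/dt = -S_i m_i + sum_{j<i} b_ij S_j m_j.
    m0: initial mass fractions per size class (coarse -> fine);
    S:  selection-function rates (1/min);
    B:  strictly lower-triangular breakage matrix, b_ij = fraction of
        broken size-j material reporting to size i.
    Explicit Euler integration; a sketch, not a fitted model."""
    m = np.asarray(m0, dtype=float).copy()
    S = np.asarray(S, dtype=float)
    dt = t / steps
    for _ in range(steps):
        broken = S * m                        # mass broken out of each class
        m = m - dt * broken + dt * (B @ broken)
    return m
```

If each column of B sums to one for every class that breaks, total mass is conserved while coarse material depletes and fines accumulate.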
In this study, a new hybrid model, bootstrap multiple linear regression (BMLR), is proposed to investigate the potential of the bootstrap resampling technique for daily reservoir inflow prediction. The proposed model is compared with three other models: multiple linear regression (MLR), wavelet multiple linear regression (WMLR), and wavelet bootstrap multiple linear regression (WBMLR). River stage data for the monsoon season (1 July 2010 to 30 September 2010) from three gauging stations of the Chenab river basin are used. In the wavelet transformation, input vectors are decomposed using the discrete wavelet transformation (DWT) into discrete wavelet components (DWCs), and suitable DWCs are then used as input to the MLR model to develop the WMLR model. The bootstrap technique is coupled with the MLR model to build the BMLR model, while the WBMLR model conjoins suitable DWCs and the bootstrap technique with the MLR model. Performance indices, namely root mean square error (RMSE), mean absolute error (MAE), Nash-Sutcliffe coefficient of efficiency (NSC), and persistence index (CP), are used to evaluate model performance. Results showed that the hybrid BMLR model produces significantly better results on these indices than the MLR, WMLR, and WBMLR models.
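The bootstrap-plus-MLR coupling can be sketched directly: fit ordinary least squares on many resampled-with-replacement versions of the data and combine the coefficient vectors. Averaging the coefficients and the number of resamples are assumptions for this sketch; the study's exact coupling may differ. The NSC index mentioned above is also included.

```python
import numpy as np

def bootstrap_mlr(X, y, n_boot=200, seed=0):
    """Bootstrap multiple linear regression: fit OLS on n_boot bootstrap
    resamples and average the coefficient vectors (a sketch of the BMLR
    idea; the averaging and n_boot are illustrative assumptions)."""
    rng = np.random.default_rng(seed)
    Xd = np.column_stack([np.ones(len(y)), X])      # add intercept column
    coefs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), size=len(y))  # resample with replacement
        b, *_ = np.linalg.lstsq(Xd[idx], y[idx], rcond=None)
        coefs.append(b)
    return np.mean(coefs, axis=0)

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe coefficient of efficiency (1 is a perfect fit)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)
```

On synthetic near-linear data the averaged bootstrap coefficients recover the generating relationship and the NSC approaches one.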
Multivariate probability analysis of hydrological elements using copula functions can significantly improve the modeling of complex phenomena by considering several dependent variables simultaneously. The main objectives of this study were to: (i) develop a stand-alone and event-based rainfall-runoff (RR) model using the common bivariate copula functions (i.e. the BCRR model); (ii) improve the structure of the developed copula-based RR model by using a trivariate version of fully-nested Archimedean copulas (i.e. the FCRR model); and (iii) compare the performance of the developed copula-based RR models in an Iranian watershed. Results showed that both of the developed models had acceptable performance. However, the FCRR model outperformed the BCRR model and provided more reliable estimations, especially for lower joint probabilities. For example, when joint probabilities were increased from 0.5 to 0.8 for the peak discharge (qp) variable, the reliability criteria value increased from 0.0039 to 0.8000 in the FCRR model, but only from 0.0010 to 0.6400 in the BCRR model. This is likely because the FCRR considers more than one rainfall predictor, while the BCRR considers only one.
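The fully-nested trivariate Archimedean structure behind the FCRR model has the form C(u1, u2, u3) = C_outer(u1, C_inner(u2, u3)), built from a generator φ and its inverse. The sketch below uses the Clayton family as an illustrative choice; the copula families actually fitted in the study may differ.

```python
def clayton_phi(t, theta):
    # Clayton generator: phi(t) = (t^(-theta) - 1) / theta
    return (t ** (-theta) - 1.0) / theta

def clayton_phi_inv(s, theta):
    # Inverse generator: phi^{-1}(s) = (1 + theta * s)^(-1/theta)
    return (1.0 + theta * s) ** (-1.0 / theta)

def nested_clayton(u1, u2, u3, theta_outer, theta_inner):
    """Fully-nested trivariate Clayton copula
    C(u1, u2, u3) = C_outer(u1, C_inner(u2, u3)); validity requires
    theta_outer <= theta_inner. Clayton is an illustrative family here,
    not necessarily the one fitted in the study."""
    inner = clayton_phi_inv(clayton_phi(u2, theta_inner)
                            + clayton_phi(u3, theta_inner), theta_inner)
    return clayton_phi_inv(clayton_phi(u1, theta_outer)
                           + clayton_phi(inner, theta_outer), theta_outer)
```

The nesting reduces to the bivariate (BCRR-style) copula on the margins: setting two arguments to one returns the remaining marginal, and every joint probability stays below the smallest marginal.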
Wireless Personal Communications - In this article, enhanced chaotic range map is used for data hiding and multimedia security. By using complex properties of improved chaotic logistic map that...
Authenticating digital images is becoming increasingly important because digital images carry important information and are used in different areas, such as courts of law, as essential pieces of evidence. Nowadays, authenticating digital images is difficult because manipulating them has become easy as a result of powerful image processing software and human expertise. The importance and relevance of digital image forensics has attracted various researchers to establish different detection techniques. The core category of image forensics is passive image forgery detection, and one of the most important passive forgeries affecting the originality of an image is copy-move digital image forgery, which involves copying one part of an image onto another area of the same image. Various methods that use different types of transformations have been proposed to detect copy-move forgery. The goal of this paper is to determine which copy-move forgery detection methods perform best under different image attributes such as JPEG compression, scaling, and rotation. Thus, the current state-of-the-art image forgery detection techniques are discussed along with their advantages and drawbacks.
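The block-matching principle underlying most copy-move detectors can be sketched in its simplest form: slide a window over the image and flag pairs of identical blocks at non-trivial offsets. Exact byte matching, used below for brevity, is an assumption of the sketch; the methods surveyed above match robust block features (e.g. DCT coefficients or keypoints) precisely so that detection survives JPEG compression, scaling, and rotation.

```python
import numpy as np

def detect_copy_move(img, block=8):
    """Flag exact duplicate blocks in a grayscale image array (H, W).
    Sliding windows are hashed by their raw bytes; a duplicate found at
    a non-overlapping position is reported as a candidate copy-move
    pair. Exact matching is illustrative only and is not robust to
    compression, scaling, or rotation."""
    seen = {}
    matches = []
    H, W = img.shape
    for y in range(0, H - block + 1):
        for x in range(0, W - block + 1):
            key = img[y:y + block, x:x + block].tobytes()
            if key in seen:
                py, px = seen[key]
                # skip heavily overlapping windows (trivial self-matches)
                if abs(py - y) >= block or abs(px - x) >= block:
                    matches.append(((py, px), (y, x)))
            else:
                seen[key] = (y, x)
    return matches
```

Planting an exact copy of one region elsewhere in an otherwise random image produces exactly the kind of pair this routine reports.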