Shear connectors play a prominent role in the design of steel-concrete composite systems. The behavior of shear connectors is generally determined through push-out tests; however, these tests are costly and time-consuming. As an alternative, soft computing (SC) can be used to eliminate the need for push-out tests. This study investigates the application of artificial intelligence (AI) techniques, as sub-branches of SC methods, in predicting the behavior of an innovative type of C-shaped shear connector, called the tilted angle connector. For this purpose, several push-out tests are conducted on these connectors and the data required for the AI models are collected. Then, an adaptive neuro-fuzzy inference system (ANFIS) is developed to identify the parameters that most influence the shear strength of the tilted angle connectors. In total, six different models are created based on the ANFIS results. Finally, AI techniques, namely an artificial neural network (ANN), an extreme learning machine (ELM), and another ANFIS, are employed to predict the shear strength of the connectors in each of the six models. The results show that slip is the most influential factor on the shear strength of tilted angle connectors, followed by the inclination angle. Moreover, it is deduced that considering only four parameters in the predictive models is enough for a very accurate prediction. It is also demonstrated that ELM requires less training time and achieves slightly better performance indices than ANN and ANFIS.
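As a hedged sketch of the ELM step described above (not the paper's implementation; the input features, their count, and all dimensions below are illustrative placeholders): an ELM fixes random hidden-layer weights and solves only the output weights in closed form by least squares, which is why its training is fast.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=100):
    """Fit an ELM regressor: random fixed hidden layer, least-squares output weights."""
    W = 0.3 * rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = 0.3 * rng.normal(size=n_hidden)                # random biases (never trained)
    H = np.tanh(X @ W + b)                             # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)       # closed-form output weights
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

# Toy stand-in for four input parameters (e.g. slip, inclination angle, ...)
X = rng.normal(size=(200, 4))
y = X @ np.array([2.0, -1.0, 0.5, 0.2]) + 0.01 * rng.normal(size=200)

model = elm_fit(X, y)
rmse = float(np.sqrt(np.mean((elm_predict(model, X) - y) ** 2)))
```

Because only `beta` is optimized, there is no iterative backpropagation, which matches the abstract's observation that ELM needs less training time than ANN or ANFIS.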
Medical image compression has recently become essential for effectively handling large amounts of medical data for storage and communication. Vector quantization (VQ) is a popular image compression technique, and the most commonly used VQ model is Linde–Buzo–Gray (LBG), which constructs a locally optimal codebook for compressing images. Codebook construction is treated as an optimization problem and solved with a bioinspired algorithm. This article proposes a VQ codebook construction approach, called the L2-LBG method, that utilizes the lion optimization algorithm (LOA) and the Lempel–Ziv–Markov chain algorithm (LZMA). Once the LOA has constructed the codebook, LZMA is applied to compress the index table and further increase the compression performance. A set of experiments was carried out on benchmark medical images, and a comparative analysis was conducted with cuckoo search-based LBG (CS-LBG), firefly-based LBG (FF-LBG), and JPEG2000. Compression efficiency was validated in terms of compression ratio (CR), compression factor (CF), bit rate, and peak signal-to-noise ratio (PSNR). The proposed L2-LBG method obtained a higher CR of 0.3425375 and a PSNR of 52.62459 compared to the CS-LBG, FF-LBG, and JPEG2000 methods. The experimental values reveal that L2-LBG yields effective compression performance with a better-quality reconstructed image.
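A minimal sketch of the classic LBG codebook construction that L2-LBG builds on (the lion-optimization and LZMA stages are omitted, and the toy 2-D vectors stand in for flattened image blocks): assign each training vector to its nearest codeword, then move each codeword to the centroid of its cluster, and repeat.

```python
import numpy as np

rng = np.random.default_rng(1)

def lbg_codebook(vectors, codebook_size=2, iters=20):
    """LBG / generalized Lloyd iteration: nearest-codeword assignment + centroid update."""
    # simple deterministic init for the sketch: evenly strided training vectors
    codebook = vectors[:: max(1, len(vectors) // codebook_size)][:codebook_size].copy()
    assign = np.zeros(len(vectors), dtype=int)
    for _ in range(iters):
        # quantization step: distance from every vector to every codeword
        d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
        assign = d.argmin(axis=1)
        # codebook update: centroid of each cell (keep old codeword if cell is empty)
        for k in range(codebook_size):
            members = vectors[assign == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook, assign

# toy "image blocks": two well-separated clusters of 2-D vectors
a = rng.normal(loc=0.0, scale=0.1, size=(50, 2))
b = rng.normal(loc=5.0, scale=0.1, size=(50, 2))
vectors = np.vstack([a, b])

codebook, index_table = lbg_codebook(vectors, codebook_size=2)
```

The returned `index_table` is what LZMA would compress in the L2-LBG pipeline; in a real codec each entry indexes the codeword that replaces one image block.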
The Journal of Supercomputing - Wireless sensor networks (WSNs) are typically deployed in environments that are often very hostile and unattended. A certain level of security must be provided....
Neural Computing and Applications - Security is one of the primary concerns when designing wireless networks. Along with detecting user identity, it is also important to detect the devices at the...
Neural Computing and Applications - This paper presents an adaptive fuzzy fault-tolerant tracking control for a class of unknown multi-variable nonlinear systems, with external disturbances,...
In the era of Industry 4.0, easy access to precise real-time measurements and the availability of machine-learning (ML) techniques will play a vital role in building practical tools to isolate inefficiencies in energy-intensive processes. This paper develops an abnormal event diagnosis (AED) tool based on ML techniques for monitoring the operation of industrial processes. The tool makes it easier for operators to accomplish their tasks and to make quick, accurate decisions that keep processes highly efficient. One of the most popular ML approaches for AED is multivariate statistical control (MSC); it requires only a dataset of normal operating conditions (NOC) to detect abnormal events (AEs) and identify the variables that contribute to them. Despite the popularity of MSC, it is challenging to select a single method that detects and isolates all the abnormalities a complex industrial process can experience. To address this limitation and improve efficiency, we developed a generic methodology that integrates different ML techniques into a unified multiagent-based approach; the selected ML techniques are built using only NOC data. For demonstration, we chose a combination of two ML methods: principal component analysis (PCA) and k-nearest neighbors (k-NN). The k-NN method was integrated into the proposed multiagent framework to account for the nonlinearity and multimodality that frequently occur in industrial processes. In addition, we modified a k-NN method from the literature to reduce computation time during real-time detection and isolation. Finally, the proposed methodology was successfully validated by monitoring the energy efficiency of a reboiler in a thermomechanical pulp mill.
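A hedged sketch of the k-NN detection idea the abstract describes, trained on NOC data only (the threshold rule, a high NOC quantile, and the two toy process variables are illustrative assumptions, not the paper's exact design): a new sample is flagged as abnormal when its average distance to its k nearest NOC samples exceeds a threshold calibrated on the NOC set itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def knn_distances(train, X, k=5):
    """Average Euclidean distance from each row of X to its k nearest rows in train."""
    d = np.linalg.norm(X[:, None, :] - train[None, :, :], axis=2)
    d.sort(axis=1)
    return d[:, :k].mean(axis=1)

# NOC data: 300 samples of two process variables under normal operation
noc = rng.normal(size=(300, 2))

# Calibrate the threshold on NOC-to-NOC distances (99th percentile);
# columns 1..5 skip each point's zero distance to itself.
d_noc = np.sort(np.linalg.norm(noc[:, None, :] - noc[None, :, :], axis=2), axis=1)
threshold = float(np.quantile(d_noc[:, 1:6].mean(axis=1), 0.99))

abnormal = np.array([[8.0, 8.0]])   # clearly outside the NOC region
normal = np.array([[0.1, -0.2]])    # well inside the NOC region

flag_ab = bool(knn_distances(noc, abnormal)[0] > threshold)
flag_ok = bool(knn_distances(noc, normal)[0] > threshold)
```

Because the detector is purely distance-based, it handles the multimodal and nonlinear NOC regions that a single linear MSC model can miss, which is the motivation given for pairing k-NN with PCA.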
One of the important aspects of achieving better performance in transient stability assessment (TSA) of power systems using computational intelligence (CI) techniques is incorporating feature reduction. For a small power system the number of features may be small, but it grows as the size of the system increases. Apart from employing faster CI techniques to achieve fast and accurate TSA, feature reduction techniques are needed to reduce the number of input features while preserving the necessary information, so that the CI technique can be trained faster. This paper presents two feature reduction techniques, namely correlation analysis and principal component analysis, used to reduce the number of input features presented to two CI techniques for TSA: the probabilistic neural network (PNN) and least squares support vector machines (LS-SVM). The proposed feature reduction techniques are implemented and tested on the IEEE 39-bus test system and the 87-bus Malaysian power system. Numerical results demonstrate the performance of the feature reduction techniques and their effects on the accuracy and training time of the two CI techniques.
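A minimal sketch of PCA as a feature-reduction step of the kind described above (dimensions are illustrative, not taken from the 39-bus or 87-bus cases): center the feature matrix, take the leading eigenvectors of its covariance, and project onto them before training the classifier.

```python
import numpy as np

rng = np.random.default_rng(3)

def pca_reduce(X, n_components):
    """Project X onto its leading principal components; also return eigenvalues."""
    Xc = X - X.mean(axis=0)               # center features
    cov = np.cov(Xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)  # eigenvalues in ascending order
    eigval = eigval[::-1]                 # descending
    top = eigvec[:, ::-1][:, :n_components]
    return Xc @ top, eigval

# 200 samples of 10 raw features that actually live on a 2-D subspace (+ small noise)
latent = rng.normal(size=(200, 2))
mix = rng.normal(size=(2, 10))
X = latent @ mix + 0.01 * rng.normal(size=(200, 10))

Z, eigval = pca_reduce(X, n_components=2)
explained = float(eigval[:2].sum() / eigval.sum())
```

The reduced matrix `Z` is what would be fed to the PNN or LS-SVM; the `explained` variance ratio is the usual criterion for choosing how many components to keep.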
In this paper, we present an interactive edutainment system for children that leverages multimedia and RFID technologies in a seamless manner. The proposed system allows children to learn about new objects/entities by tapping on physical objects through a specially designed RFID-Bluetooth-based tangible user interface (TUI) tool. The output of the system is delivered as a set of appropriate multimedia representations related to the objects being tapped. The TUI uses RFID technology for object identification and Bluetooth communication to transmit data to the computer where the system's software runs. We incorporated the system into three games that allow children of different ages to benefit from its functionalities and encourage them to interact with it.
Combining accurate neural networks (NNs) in an ensemble with negative error correlation greatly improves generalization ability. Mixture of experts (ME) is a popular combining method that employs a special error function for the simultaneous training of NN experts so as to produce negatively correlated experts. Although ME can produce negatively correlated experts, it lacks a control parameter, like that of the negative correlation learning (NCL) method, for adjusting this correlation explicitly. In this study, an approach is proposed to introduce this advantage of NCL into the training algorithm of ME, called the mixture of negatively correlated experts (MNCE). In the proposed method, the NCL control parameter is incorporated into the error function of ME, which enables its training algorithm to strike a better balance in the bias-variance-covariance trade-off and thus improves generalization. The proposed hybrid ensemble method, MNCE, is compared with its constituent methods, ME and NCL, on several benchmark problems. The experimental results show that the proposed ensemble method significantly improves performance over the original methods.
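An illustrative sketch of the NCL penalty that MNCE folds into the ME error function (the formula follows standard NCL; its exact embedding in the ME gating network is simplified away here, so treat this as an assumption about the general shape, not the paper's exact objective): expert i's error adds a correlation penalty, scaled by the control parameter lambda that plain ME lacks.

```python
import numpy as np

def ncl_errors(f, d, lam):
    """Per-expert NCL error: 0.5*(f_i - d)^2 + lam * (f_i - f_bar) * sum_{j != i}(f_j - f_bar)."""
    f = np.asarray(f, dtype=float)
    f_bar = f.mean()
    # sum over the other experts' deviations from the ensemble mean
    penalty = np.array([(f[i] - f_bar) * (f.sum() - f[i] - (len(f) - 1) * f_bar)
                        for i in range(len(f))])
    return 0.5 * (f - d) ** 2 + lam * penalty

# lam = 0 recovers independent squared-error training; lam > 0 rewards
# experts whose errors deviate from the ensemble mean in opposite directions.
e0 = ncl_errors([1.0, 2.0, 3.0], d=2.0, lam=0.0)
e1 = ncl_errors([1.0, 2.0, 3.0], d=2.0, lam=0.5)
```

Note that the penalty term equals -(f_i - f_bar)^2, so increasing lambda explicitly trades individual accuracy for ensemble diversity, which is the bias-variance-covariance balance the abstract refers to.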
Cookies are the primary means for web applications to authenticate HTTP requests and maintain client state. Many web applications (such as those for electronic commerce) demand a secure cookie scheme. Such a scheme needs to provide four services: authentication, confidentiality, integrity, and anti-replay. Several secure cookie schemes have been proposed in the literature; however, none of them is completely satisfactory. In this paper, we propose a secure cookie scheme that is effective, efficient, and easy to deploy. In terms of effectiveness, our scheme provides all four of the above security services. In terms of efficiency, it involves no database lookup or public-key cryptography. In terms of deployability, it can easily be deployed on existing web services and requires no change to the Internet cookie specification. We implemented our secure cookie scheme in PHP and conducted experiments. The experimental results show that our scheme is very efficient on both the client side and the server side. A notable industrial adoption is that our cookie scheme has been used by WordPress, a widely used open-source content management system, since version 2.4.
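A hedged sketch in the spirit of such a scheme (the field layout, key handling, and omission of the confidentiality/anti-replay services are illustrative simplifications, not the paper's exact construction): the cookie binds a username and expiration time to a keyed MAC, so the server can verify integrity and freshness with one HMAC computation, with no database lookup and no public-key operations.

```python
import hashlib
import hmac
import time

SERVER_KEY = b"demo-secret-key"  # placeholder; a real key must be random and kept secret

def make_cookie(username, ttl_seconds=3600, now=None):
    """Build 'username|expires|HMAC(payload)'; expiration is bound into the MAC."""
    expires = int((now if now is not None else time.time()) + ttl_seconds)
    payload = f"{username}|{expires}"
    tag = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{tag}"

def verify_cookie(cookie, now=None):
    """Return the username if the MAC checks out and the cookie is fresh, else None."""
    try:
        username, expires, tag = cookie.rsplit("|", 2)
    except ValueError:
        return None
    payload = f"{username}|{expires}"
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(tag, expected):
        return None  # integrity check failed (tampered fields or wrong key)
    if (now if now is not None else time.time()) > int(expires):
        return None  # expired
    return username

cookie = make_cookie("alice", ttl_seconds=60, now=1000)
ok = verify_cookie(cookie, now=1030)
tampered = verify_cookie(cookie.replace("alice", "mallory"), now=1030)
stale = verify_cookie(cookie, now=2000)
```

Verification is a pure recomputation of the MAC, which illustrates why such schemes are cheap on the server side; the constant-time `hmac.compare_digest` avoids timing side channels when comparing tags.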