Shear connectors play a prominent role in the design of steel-concrete composite systems. The behavior of shear connectors is generally determined through push-out tests, which are costly and time-consuming. As an alternative, soft computing (SC) can be used to eliminate the need for conducting push-out tests. This study investigates the application of artificial intelligence (AI) techniques, as sub-branches of SC methods, in predicting the behavior of an innovative type of C-shaped shear connector, called Tilted Angle Connectors. For this purpose, several push-out tests are conducted on these connectors and the data required for the AI models are collected. Then, an adaptive neuro-fuzzy inference system (ANFIS) is developed to identify the parameters that most influence the shear strength of the tilted angle connectors. In total, six different models are created based on the ANFIS results. Finally, AI techniques, namely an artificial neural network (ANN), an extreme learning machine (ELM), and another ANFIS, are employed to predict the shear strength of the connectors in each of the six models. The results show that slip is the most influential factor in the shear strength of tilted connectors, followed by the inclination angle. Moreover, it is deduced that considering only four parameters in the predictive models is enough for a very accurate prediction. It is also demonstrated that ELM requires less training time and achieves slightly better performance indices than ANN and ANFIS.
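The ELM mentioned above trains a single-hidden-layer network by fixing random input weights and solving the output weights in closed form, which is why it trains faster than a backpropagated ANN. A minimal sketch of that idea follows; the four-feature synthetic data, variable names, and hidden-layer size are illustrative stand-ins, not the study's actual push-out dataset or model:

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=50):
    """Fit an extreme learning machine: random hidden layer, least-squares output."""
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights (never trained)
    b = rng.normal(size=n_hidden)                # random hidden biases (never trained)
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # output weights via pseudoinverse
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Illustrative stand-in for the data: 4 input parameters -> shear strength
X = rng.uniform(size=(200, 4))
y = X @ np.array([2.0, -1.0, 0.5, 1.5]) + 0.01 * rng.normal(size=200)
W, b, beta = elm_fit(X, y)
pred = elm_predict(X, W, b, beta)
print(np.sqrt(np.mean((pred - y) ** 2)))  # training RMSE
```

Because the only "training" is one pseudoinverse, fitting is a single linear-algebra call rather than an iterative optimization, which matches the abstract's observation that ELM needs less time.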
Combining accurate neural networks (NNs) in an ensemble with negative error correlation greatly improves generalization ability. Mixture of experts (ME) is a popular combining method that employs a special error function for the simultaneous training of NN experts so as to produce negatively correlated experts. Although ME can produce negatively correlated experts, it does not include a control parameter, as the negative correlation learning (NCL) method does, to adjust this correlation explicitly. In this study, an approach is proposed to introduce this advantage of NCL into the training algorithm of ME, termed the mixture of negatively correlated experts (MNCE). In the proposed method, the control parameter of NCL is incorporated into the error function of ME, which enables the training algorithm to establish a better balance in the bias-variance-covariance trade-off and thus improves generalization. The proposed hybrid ensemble method, MNCE, is compared with its constituent methods, ME and NCL, on several benchmark problems. The experimental results show that the proposed ensemble method significantly improves performance over the original ensemble methods.
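For intuition about the control parameter discussed above: in standard NCL, each expert's squared error is augmented with a correlation penalty weighted by an explicit coefficient (often written λ), and setting λ = 0 recovers independent training. A minimal sketch of that per-expert error, assuming scalar outputs (the function name and demo values are illustrative, and this is plain NCL, not the paper's MNCE error function):

```python
import numpy as np

def ncl_errors(outputs, target, lam=0.5):
    """Per-expert NCL error: squared error plus a weighted correlation penalty.

    outputs: shape (n_experts,), each expert's prediction for one sample.
    lam:     the explicit control parameter NCL provides; lam=0 reduces to
             training each expert independently on its own squared error.
    """
    f_bar = outputs.mean()               # ensemble (simple-average) output
    n = len(outputs)
    errors = []
    for i, f_i in enumerate(outputs):
        # penalty_i = (f_i - f_bar) * sum_{j != i} (f_j - f_bar)
        penalty = (f_i - f_bar) * (outputs.sum() - f_i - (n - 1) * f_bar)
        errors.append(0.5 * (f_i - target) ** 2 + lam * penalty)
    return np.array(errors)

errs = ncl_errors(np.array([0.9, 1.1, 1.4]), target=1.0, lam=0.5)
```

Since the deviations from the mean sum to zero, the penalty for expert i equals -(f_i - f_bar)^2, so increasing λ actively pushes experts apart around the ensemble mean; the paper's contribution is embedding such a tunable coefficient inside ME's gated error function.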
Importance sampling is a technique commonly used to speed up Monte Carlo simulation of rare events. However, little is known about the design of efficient importance sampling algorithms in the context of queueing networks. The standard approach, which simulates the system under an a priori fixed change of measure suggested by large deviations analysis, has been shown to fail in even the simplest network settings. Estimating the probabilities of rare events has long been a topic of great importance in queueing theory, and in applied probability at large. In this article, we analyse the performance of an importance sampling estimator for a rare-event probability in a Jackson network. The article applies strict deadlines to a two-node Jackson network with feedback whose arrival and service rates are modulated by an exogenous finite-state Markov process. We estimate the probability of network blocking for various sets of parameters, as well as the probability of customers missing their deadlines for different loads and deadlines. We finally show that the probability of total population overflow may be affected by the deadline values, service rates and arrival rates.
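To illustrate the change-of-measure idea behind the estimator above in the simplest possible setting: the sketch below estimates a Gaussian tail probability by sampling from a distribution shifted into the rare region and reweighting by the likelihood ratio. This is a generic exponential-tilting example for intuition only, not the article's Jackson-network estimator:

```python
import numpy as np

rng = np.random.default_rng(1)

def is_tail_prob(a, n=100_000):
    """Estimate P(Z > a), Z ~ N(0, 1), by importance sampling from N(a, 1).

    Each sample x is weighted by the likelihood ratio
    phi(x) / phi(x - a) = exp(-a*x + a^2/2), keeping the estimator unbiased.
    """
    x = rng.normal(loc=a, size=n)       # proposal centered on the rare region
    log_w = -a * x + 0.5 * a * a        # log likelihood ratio
    return float(np.mean((x > a) * np.exp(log_w)))

est = is_tail_prob(4.0)
print(est)  # unbiased estimate of the tail probability (true value ~3.2e-5)
```

Naive Monte Carlo with the same sample budget would see the event x > 4 only a handful of times; under the tilted measure roughly half the samples hit it, which is the variance reduction that makes rare-event estimation tractable. The hard part in queueing networks, as the abstract notes, is that a fixed tilt chosen from large deviations analysis can fail.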
Recently, physical layer security, commonly known as Radio Frequency (RF) fingerprinting, has been proposed to provide an additional layer of security for wireless devices. A unique RF fingerprint can be used to establish the identity of a specific wireless device in order to prevent masquerading/impersonation attacks. In the literature, the performance of RF fingerprinting techniques is typically assessed using high-end (expensive) receiver hardware. In most practical situations, however, receivers will not be high-end and will suffer from device-specific impairments that affect the RF fingerprinting process. This paper evaluates the accuracy of RF fingerprinting employing low-end receivers. The vulnerability to an impersonation attack is assessed for a modulation-based RF fingerprinting system employing low-end commodity hardware (by legitimate and malicious users alike). Our results suggest that receiver impairments effectively decrease the success rate of impersonation attacks on RF fingerprinting, and that the success rate is receiver dependent.
Innovations in Systems and Software Engineering - One of the most important modules of computer systems is the one that is responsible for user safety. It was proven that simple passwords and...
This article proposes a novel algorithm to improve the lifetime of a wireless sensor network. This algorithm employs swarm intelligence algorithms in conjunction with compressive sensing theory to build up the routing trees and to decrease the communication rate. The main contribution of this article is to extend swarm intelligence algorithms to build a routing tree in such a way that it can be utilized to maximize efficiency, thereby rectifying the delay problem of compressive sensing theory and improving the network lifetime. In addition, our approach offers accurate data recovery from small amounts of compressed data. Simulation results show that our approach can effectively extend the network lifetime of a large‐scale wireless sensor network.
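The "accurate data recovery from small amounts of compressed data" claimed above is the standard compressive sensing guarantee: a sparse signal can be reconstructed exactly from far fewer random measurements than signal samples. A minimal sketch using orthogonal matching pursuit as the recovery routine (the article does not specify its solver, so OMP, the dimensions, and the sparsity level here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        # pick the column most correlated with the current residual
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        sol, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ sol
    x = np.zeros(A.shape[1])
    x[support] = sol
    return x

# Illustrative: 100 sensor readings compressed into 40 random measurements
n, m, k = 100, 40, 5
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)   # random measurement matrix
x_hat = omp(A, A @ x_true, k)
print(np.linalg.norm(x_hat - x_true))      # near zero: exact sparse recovery
```

In the sensor-network setting, the 40 compressed measurements stand in for the reduced in-network traffic: nodes forward random projections along the routing tree instead of all 100 raw readings, and the sink reconstructs the field.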
In this paper, we present faster than real-time implementation of a class of dense stereo vision algorithms on a low-power massively parallel SIMD architecture, the CSX700. With two cores, each with 96 Processing Elements, this SIMD architecture provides a peak computation power of 96 GFLOPS while consuming only 9 Watts, making it an excellent candidate for embedded computing applications. Exploiting full features of this architecture, we have developed schemes for an efficient parallel implementation with minimum of overhead. For the sum of squared differences (SSD) algorithm and for VGA (640 × 480) images with disparity ranges of 16 and 32, we achieve a performance of 179 and 94 frames per second (fps), respectively. For the HDTV (1,280 × 720) images with disparity ranges of 16 and 32, we achieve a performance of 67 and 35 fps, respectively. We have also implemented more accurate, and hence more computationally expensive variants of the SSD, and for most cases, particularly for VGA images, we have achieved faster than real-time performance. Our results clearly demonstrate that, by developing careful parallelization schemes, the CSX architecture can provide excellent performance and flexibility for various embedded vision applications.
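The SSD algorithm benchmarked above compares, for each left-image pixel and each candidate disparity, a small window against the shifted right image, and keeps the disparity with the lowest sum of squared differences. A naive, clarity-first sketch of that computation (the CSX700 implementation is of course a hand-parallelized version, not this numpy loop; window size and shapes here are illustrative):

```python
import numpy as np

def ssd_disparity(left, right, max_disp=16, win=5):
    """Dense disparity map by SSD block matching, left image as reference."""
    h, w = left.shape
    kernel = np.ones(win)
    best_cost = np.full((h, w), np.inf)
    disp = np.zeros((h, w), dtype=np.int32)
    for d in range(max_disp):
        shifted = np.zeros_like(right)
        shifted[:, d:] = right[:, : w - d]   # align right pixel x-d under left pixel x
        sq = (left - shifted) ** 2
        # window sums via separable 1-D box filters over rows, then columns
        cost = np.apply_along_axis(lambda r: np.convolve(r, kernel, "same"), 1, sq)
        cost = np.apply_along_axis(lambda c: np.convolve(c, kernel, "same"), 0, cost)
        better = cost < best_cost            # winner-take-all over disparities
        best_cost[better] = cost[better]
        disp[better] = d
    return disp
```

The per-pixel work is independent across pixels and disparities, which is exactly why the algorithm maps so well onto the 96-PE-per-core SIMD layout the paper targets.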
Organisations have invested in self‐service information systems (IS) to provide a direct interface for service delivery. Enriching the usage of these systems can provide organisations with immense benefits. However, limited research has been directed towards understanding post‐adoption IS usage behaviour in general and specifically in the context of self‐service IS. This study proposes post‐adoption IS usage behaviour as a broader concept constituting feature level usage of IS, integration of IS in the work system and exploration of new uses of IS. We evaluate how the new conceptualisation can be used to classify users at different stages of self‐service IS usage. Further, we examine user perceptions that differentiate among the users situated at different self‐service IS usage stages. Data were collected in the context of a self‐service Web‐based IS to validate the post‐adoption IS usage constructs and to examine the proposed thesis. The newly developed conceptual structure and measures for post‐adoption IS usage behaviour exhibit strong psychometric properties. The analysis shows three distinct post‐adoption IS usage stages and highlights that usefulness, user‐initiated learning, ease of use, satisfaction and voluntariness of use differentiate users at the different stages of post‐adoption IS usage. The results show that these variables aggregate into value confirmation and learning orientation as two higher‐level concepts. Further, we evaluate the predictive efficacy of the research model in classifying users into different post‐adoption self‐service IS usage stages. Implications are drawn for future research.