A robust Fault Diagnosis (FD) scheme for a real quadrotor Unmanned Aerial Vehicle (UAV) is proposed in this paper. First, a novel Adaptive Thau observer (ATO) is developed to estimate the quadrotor system states and to build a set of offset residuals that indicate actuator faults. Based on these residuals, FD rules are designed to detect and isolate the faults as well as to estimate the fault offset parameters. Second, a synthetic robust optimization scheme is presented to improve Fault Estimation (FE) accuracy; three key issues are addressed: modeling uncertainties, magnitude-order imbalances, and noise. Finally, a typical rotor fault is simulated and injected into one of the four rotors of the quadrotor, and experiments on the FD scheme have been carried out. Unlike earlier work on FD schemes for quadrotors, the proposed ATO-based scheme can not only detect and isolate failed actuators but also estimate fault severities. Despite the roughness of the real flight data, the FD results still achieve sufficient FE accuracy.
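As context, the observer structure underlying such a scheme can be sketched as follows. This is the standard Thau observer with an additive actuator-offset term and a residual-driven adaptation law; the symbols (fault distribution matrix E, gains L and Γ, weighting W) are illustrative and not necessarily the paper's exact notation.

```latex
\begin{align}
\dot{x} &= A x + B u + g(x,u) + E f, \qquad y = C x,\\
\dot{\hat{x}} &= A \hat{x} + B u + g(\hat{x},u) + E \hat{f} + L\,(y - C\hat{x}),\\
r &= y - C\hat{x}, \qquad \dot{\hat{f}} = \Gamma\, W\, r,
\end{align}
```

where $f$ collects the actuator fault offsets, $r$ is the residual vector used for detection and isolation, and $\hat{f}$ converges to the fault offset parameters when the adaptation gain $\Gamma$ is chosen appropriately.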
Any sniffer can read the information sent through unprotected probe request and probe response messages in wireless local area networks (WLANs). A station (STA) can send probe requests to trigger probe responses simply by spoofing a genuine media access control (MAC) address to deceive the access control list of an access point (AP). Adversaries exploit these weaknesses to flood APs with probe requests, which can cause a denial of service (DoS) for genuine STAs. This research examines WLAN traffic using a supervised feed-forward neural-network classifier to distinguish genuine frames from rogue frames. The novel feature of this approach is that genuine-user and attacker training data are captured separately and labeled prior to training, without the network administrator's intervention. The model's performance is validated using self-consistency and fivefold cross-validation tests. The simulation is comprehensive and takes the real-world environment into account. The results show that this approach detects probe request attacks extremely well. The solution also detects an attack at an early stage of the communication, so it can prevent further attacks when an adversary attempts to break into the network.
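A minimal sketch of this pipeline (not the paper's actual model or data): a feed-forward classifier is trained on per-frame features to separate genuine from rogue probe-request traffic and is validated with fivefold cross-validation. The feature set and the synthetic data below are illustrative assumptions.

```python
# Sketch: supervised feed-forward classifier for probe-request traffic,
# validated with fivefold cross-validation. Features are hypothetical
# per-frame statistics (inter-arrival time, frame rate, RSSI, seq. gap).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
genuine = rng.normal(loc=[0.5, 10, -45, 1], scale=[0.1, 2, 5, 0.5], size=(200, 4))
rogue = rng.normal(loc=[0.05, 200, -70, 8], scale=[0.02, 40, 5, 3], size=(200, 4))
X = np.vstack([genuine, rogue])
y = np.array([0] * 200 + [1] * 200)  # 0 = genuine frame, 1 = attack frame

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                                  random_state=0))
scores = cross_val_score(clf, X, y, cv=5)  # fivefold cross-validation
print(f"mean CV accuracy: {scores.mean():.2f}")
```

In the paper's setting, the labels would come from the separately captured genuine and attacker traces rather than from synthetic draws.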
It is necessary to study the effect of dyebath additives on decolorization efficiency in order to optimize ozone-based decolorization processes, since ozone consumption can be reduced by selecting additives favorable to ozonation. The effect of five dyebath additives was investigated: electrolytes (sodium chloride and sodium sulfate), a chelating agent (ethylene diamine tetra-acetic acid, EDTA), a reducing agent (sodium dithionite), an optical brightener (Uvitex BHT), and a dispersing agent (Zetex DN-VL). Not all additives acted synergistically: the addition of sodium chloride, sodium dithionite, and Zetex DN-VL markedly improved decolorization efficiency, whereas EDTA and the optical brightener had a negative effect. Sodium sulfate showed neither a positive nor a negative effect on decolorization efficiency.
Early diagnosis of Alzheimer's disease (AD) is essential if treatments are to be administered before neurons degenerate to a stage beyond repair. For early detection to occur, the tools used to detect the disorder must be sensitive to the earliest cognitive impairments. Virtual reality technology offers opportunities to provide products that mimic daily-life situations, as far as possible, within a computational environment, which may be useful for detecting cognitive difficulties. We developed a virtual simulation designed to assess visuospatial memory in order to investigate cognitive function in a group of healthy elderly participants and in participants with a mild cognitive impairment (MCI). Participants were required to guide themselves along a virtual path to reach a virtual destination that they had to remember. The preliminary results indicate that this virtual simulation has the potential to be used for detection of early AD, since scores on the virtual environment correlated significantly with existing neuropsychological tests. Furthermore, the test discriminated between healthy elderly participants and those with an MCI.
Consideration is given to buoyancy effects on fully developed gaseous slip flow in a vertical rectangular microduct. Two cases of thermal boundary conditions are considered: uniform temperature at two facing duct walls with different temperatures and adiabatic remaining walls (case A), and uniform heat flux at two walls with uniform temperature at the other walls (case B). Rarefaction effects are treated using first-order slip boundary conditions. By means of the finite Fourier transform method, analytical solutions are obtained for the velocity and temperature distributions as well as the Poiseuille number. Furthermore, the threshold value of the mixed convection parameter at which flow reversal starts is evaluated. The results show that the Poiseuille number in case A is an increasing function of the mixed convection parameter and a decreasing function of the channel aspect ratio, whereas its dependence on the Knudsen number is not monotonic. In case B, the Poiseuille number decreases with increasing mixed convection parameter, Knudsen number, and channel aspect ratio.
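For reference, the first-order slip boundary conditions mentioned above commonly take the following nondimensional form (velocity slip and temperature jump at a wall); the accommodation coefficients and the exact nondimensionalization used in the paper may differ.

```latex
u_s - u_w = \frac{2-\sigma_v}{\sigma_v}\,\mathrm{Kn}\,
            \left.\frac{\partial u}{\partial n}\right|_w,
\qquad
T_s - T_w = \frac{2-\sigma_T}{\sigma_T}\,\frac{2\gamma}{\gamma+1}\,
            \frac{\mathrm{Kn}}{\Pr}\,
            \left.\frac{\partial T}{\partial n}\right|_w,
```

where $\sigma_v$ and $\sigma_T$ are the momentum and thermal accommodation coefficients, $\gamma$ is the specific heat ratio, $\Pr$ is the Prandtl number, and $n$ is the wall-normal coordinate.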
Diagnosis, detection, and classification of tumors in brain MRI images are important because misdiagnosis can lead to death. This paper proposes a method that diagnoses brain tumors in MRI images and classifies them using a Convolutional Neural Network (CNN). The proposed network uses a Convolutional Auto-Encoder Neural Network (CANN) to extract and learn deep features of the input images. Deep features extracted at each level are combined to form more discriminative features and improve results. To classify brain tumors into three categories (Meningioma, Glioma, and Pituitary), the proposed method was applied to the Cheng dataset and reached a considerable accuracy of 99.3%. To diagnose and grade Glioma tumors, the proposed method was applied to the IXI and BraTS 2017 datasets, and to classify brain images into six classes (Meningioma, Pituitary, Astrocytoma, High-Grade Glioma, Low-Grade Glioma, and Normal, i.e., no tumor), all datasets (IXI, BraTS 2017, Cheng, and Hazrat-e-Rassol) were used; the network reached accuracies of 99.1% and 98.5%, respectively.
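The multi-level feature combination described above can be illustrated schematically. The sketch below is not the paper's CANN: it uses a single fixed (untrained) kernel with ReLU and average pooling, purely to show how features from successive convolutional levels are concatenated into one descriptor.

```python
# Illustrative sketch of combining per-level convolutional features,
# as the CANN does with learned deep features. Pure NumPy, fixed kernel.
import numpy as np

def conv2d(img, k):
    """Valid 2-D convolution of a single-channel image with kernel k."""
    kh, kw = k.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def pool2(img):
    """2x2 average pooling (truncating odd borders)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

rng = np.random.default_rng(0)
mri_slice = rng.random((32, 32))            # stand-in for an MRI slice
edge_k = np.array([[1.0, 0.0, -1.0]] * 3)   # fixed edge-like kernel

level1 = pool2(np.maximum(conv2d(mri_slice, edge_k), 0))  # conv + ReLU + pool
level2 = pool2(np.maximum(conv2d(level1, edge_k), 0))
# Concatenate per-level features into one descriptor for the classifier head
features = np.concatenate([level1.ravel(), level2.ravel()])
print(features.shape)
```

In the actual network, each level's features are learned by the auto-encoder and the combined descriptor feeds the classification layers.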
Many recent software engineering papers have examined duplicate issue reports. Thus far, duplicate reports have been considered a hindrance to developers and a drain on their resources. As a result, prior research in this area focuses on proposing automated approaches to accurately identify duplicate reports. However, no studies have attempted to quantify the actual effort spent on identifying duplicate issue reports. In this paper, we empirically examine the effort needed to manually identify duplicate reports in four open source projects: Firefox, SeaMonkey, Bugzilla, and Eclipse-Platform. Our results show that: (i) more than 50% of the duplicate reports are identified within half a day, and most are identified without any discussion and with the involvement of very few people; (ii) a classification model built using a set of factors extracted from duplicate issue reports classifies duplicates according to the effort needed to identify them, with a precision of 0.60 to 0.77, a recall of 0.23 to 0.96, and an ROC area of 0.68 to 0.80; and (iii) factors that capture developer awareness of the duplicate issue's peers (i.e., other duplicates of that issue) and the textual similarity of a new report to prior reports are the most influential factors in our models. Our findings highlight the need for effort-aware evaluation of approaches that identify duplicate issue reports, since identifying a considerable share of duplicate reports (over 50%) appears to be a relatively trivial task for developers. To better assist developers, research on identifying duplicate issue reports should put greater emphasis on the effort-consuming duplicates.
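A short sketch of how the three reported metrics relate: precision and recall are computed from thresholded predictions, while the ROC area is computed from the raw scores. The labels and scores below are synthetic stand-ins, not the study's data.

```python
# Evaluating an effort classifier with precision, recall, and ROC area.
import numpy as np
from sklearn.metrics import precision_score, recall_score, roc_auc_score

y_true = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 0])  # 1 = effort-consuming duplicate
y_score = np.array([0.9, 0.8, 0.4, 0.3, 0.2, 0.6, 0.7, 0.1, 0.85, 0.35])
y_pred = (y_score >= 0.5).astype(int)              # threshold the model scores

print(precision_score(y_true, y_pred))  # → 0.8
print(recall_score(y_true, y_pred))     # → 0.8
print(roc_auc_score(y_true, y_score))   # → 0.96
```

Note that ROC area uses the continuous scores, so it is unaffected by the 0.5 threshold choice, while precision and recall trade off as the threshold moves.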
Reuse of software components, either closed or open source, is considered one of the most important best practices in software engineering, since it reduces development cost and improves software quality. However, since reused components are (by definition) generic, they need to be customized and integrated into a specific system before they can be useful. Because this integration is system-specific, the integration effort is non-negligible and increases maintenance costs, especially when more than one component must be integrated. This paper performs an empirical study of multi-component integration in the context of three successful open source distributions (Debian, Ubuntu, and FreeBSD). Such distributions integrate thousands of open source components with an operating system kernel to deliver a coherent software product to millions of users worldwide. We empirically identified seven major integration activities performed by the maintainers of these distributions, documented how these activities are performed, and then evaluated and refined the identified activities with input from six maintainers of the three studied distributions. The documented activities provide a common vocabulary for component integration in open source distributions and outline a roadmap for future research on software integration.
Applying model predictive control (MPC) to cases such as complicated process dynamics and/or rapid sampling leads to poorly conditioned numerical solutions and a heavy computational load. Furthermore, there is always mismatch between a model and the real process it describes. Therefore, to overcome these difficulties, this paper designs a robust MPC using the Laguerre orthonormal basis, which speeds up convergence while lowering the computational burden, at the cost of adding one extra tuning parameter "a" to the MPC. In addition, a Kalman state estimator is included in the prediction model, so the MPC design depends on the Kalman estimator parameters as well as the estimation error, which helps the controller react faster to unmeasured disturbances. Tuning the parameters of the Kalman estimator as well as the MPC is another contribution of this paper, which guarantees robustness against model mismatch and measurement noise. The MPC parameters are tuned by minimizing the sensitivity function at low frequency, since the lower its low-frequency magnitude, the better the command tracking and disturbance rejection. The integral absolute error (IAE) and the peak of the sensitivity function are used as constraints in the optimization procedure to ensure stability and robustness of the controlled process. The performance of the controller is examined on the level control of a tank and on a paper machine process.
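The Laguerre parameterization that compresses the MPC control horizon can be sketched as follows: the control-increment trajectory is expanded on N discrete Laguerre functions with pole "a" (the extra tuning parameter mentioned above), so only N coefficients are optimized instead of one increment per sample. This follows the standard discrete Laguerre-network construction; the numbers are illustrative.

```python
# Discrete Laguerre basis for MPC: du(k) = sum_i eta_i * l_i(k), so the
# optimizer searches over N coefficients eta instead of the full horizon.
import numpy as np

def laguerre(a, N, horizon):
    """(horizon x N) matrix whose columns are discrete Laguerre functions
    l_1..l_N with pole a, generated recursively via L(k+1) = A_l @ L(k)."""
    beta = 1.0 - a * a
    A = np.zeros((N, N))
    for i in range(N):
        A[i, i] = a
        for j in range(i):
            A[i, j] = ((-a) ** (i - j - 1)) * beta
    L = np.sqrt(beta) * np.array([(-a) ** i for i in range(N)])  # L(0)
    out = np.zeros((horizon, N))
    for k in range(horizon):
        out[k] = L
        L = A @ L
    return out

Phi = laguerre(a=0.5, N=4, horizon=200)
# The columns are orthonormal over a long horizon: Phi.T @ Phi ≈ identity
print(np.round(Phi.T @ Phi, 3))
# A few coefficients eta reproduce a whole control-increment trajectory
eta = np.array([1.0, -0.5, 0.2, 0.0])
du = Phi @ eta
```

Larger "a" makes the basis functions decay more slowly, trading faster convergence of the expansion against the shape of the achievable control moves, which is why the paper treats it as a tuning parameter.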