Mapping vulnerability to saltwater intrusion (SWI) in coastal aquifers is studied in this paper using the GALDIT framework, with the novelty of transforming the concept of vulnerability indexing into risk indexing. GALDIT is an acronym for six data layers that are combined by consensus to express the vulnerability of freshwater aquifers to saltwater intrusion. It is a scoring system of prescribed rates, which account for local variations, and prescribed weights, which account for the relative importance of each data layer; both suffer from subjectivity. A further novelty of the paper is to learn rate values with fuzzy logic and weight values with catastrophe theory; together these are implemented as one scheme, hence the Fuzzy-Catastrophe Scheme (FCS). The GALDIT data layers are divided into two groups, Passive Vulnerability Indices (PVI) and Active Vulnerability Indices (AVI), whose sum is the Total Vulnerability Index (TVI), equivalent to GALDIT. Two additional data layers (pumping and water-table decline) are also introduced to serve as the Risk Actuation Index (RAI). The product of TVI and RAI yields the Risk Index (RI). The paper applies these new concepts to a study area subject to groundwater decline and a possible saltwater-intrusion problem. The results provide a proof-of-concept for PVI, AVI, RAI and RI by studying their correlation with groundwater quality samples using the fraction of saltwater (f_sea), Groundwater Quality Indices (GQI) and the Piper diagram. Significant correlations between the appropriate values are found, and these provide new insight into the study area.
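The conventional GALDIT score that TVI generalizes is a weighted average of prescribed per-layer rates. The sketch below is illustrative only: the weights are the commonly cited GALDIT defaults and the rates are hypothetical; it is not the paper's FCS, in which rates and weights are learned rather than prescribed.

```python
# Illustrative sketch of the classical GALDIT index: a weighted average of
# prescribed rates over the six data layers. Weights are the commonly cited
# GALDIT defaults; the rates passed in below are hypothetical.

GALDIT_WEIGHTS = {
    "G_occurrence": 1,    # Groundwater occurrence (aquifer type)
    "A_conductivity": 3,  # Aquifer hydraulic conductivity
    "L_level": 4,         # Level (height) of groundwater above sea level
    "D_distance": 4,      # Distance from the shore
    "I_impact": 1,        # Impact of existing saltwater-intrusion status
    "T_thickness": 2,     # Aquifer thickness
}

def galdit_index(rates):
    """Weighted average of per-layer rates (typically 2.5, 5, 7.5 or 10)."""
    num = sum(GALDIT_WEIGHTS[layer] * rates[layer] for layer in GALDIT_WEIGHTS)
    return num / sum(GALDIT_WEIGHTS.values())

# Hypothetical grid cell rated 7.5 on every layer:
print(galdit_index({layer: 7.5 for layer in GALDIT_WEIGHTS}))  # -> 7.5
```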
In this paper, a new fuzzy group decision-making methodology that determines and incorporates the negotiation powers of decision-makers is developed. The proposed method combines interval type-2 fuzzy sets with a multi-criteria decision-making (MCDM) model, namely TOPSIS. To examine the applicability of the proposed methodology, it is used to find the best scenario for allocating water and reclaimed wastewater to the domestic, agricultural, and industrial water sectors and for restoring groundwater quantity and quality in the Varamin region, located in the Tehran metropolitan area in Iran. The results show that the selected scenario leads to an acceptable groundwater conservation level over a long-term planning horizon. Although the capital cost of this scenario is high, it restores the groundwater during the 34-year planning horizon and is therefore determined to be the best allocation scenario. This scenario also entails the second-lowest pumping cost, owing to less water being allocated from the groundwater. To evaluate the results of the proposed methodology, they are compared with those obtained using well-known interval type-2 decision-making approaches, including arithmetic-based, TOPSIS-based, and likelihood-based comparison methods. The Spearman correlation coefficient shows that the obtained results generally concur with those of the other methods. It is also concluded that the proposed methodology gives more reasonable results by calculating and considering the negotiation powers of decision-makers in an extended TOPSIS-based group decision-making model.
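For reference, the crisp TOPSIS ranking that the interval type-2 fuzzy extension builds on can be sketched as follows. The decision matrix, weights, and criteria below are hypothetical, and the type-2 fuzzy machinery and negotiation powers of the paper are not reproduced.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit):
    """Classical (crisp) TOPSIS: score alternatives by relative closeness
    to the positive ideal solution (higher score = better)."""
    X = np.asarray(decision_matrix, dtype=float)
    # Vector-normalize each criterion column, then apply criterion weights.
    V = X / np.linalg.norm(X, axis=0) * np.asarray(weights, dtype=float)
    benefit = np.asarray(benefit)
    # Positive / negative ideal solutions per criterion.
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    d_pos = np.linalg.norm(V - ideal, axis=1)
    d_neg = np.linalg.norm(V - anti, axis=1)
    return d_neg / (d_pos + d_neg)

# Hypothetical allocation scenarios scored on capital cost (to minimize)
# and groundwater conservation level (to maximize):
scores = topsis([[3.0, 0.8],
                 [1.0, 0.4],
                 [2.0, 0.7]],
                weights=[0.5, 0.5],
                benefit=[False, True])
print(scores)  # relative closeness of each scenario to the ideal
```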
Automation of deburring and cleaning of castings is desirable for many reasons, chief among them dangerous working conditions, difficulty in finding workers for cleaning sections, and improved profitability. A suitable robot cell capable of using different tools, such as cup grinders, disc grinders and rotary files, is the solution. This robot should be equipped with sensors in order to keep the quality of the cleaned surface at an acceptable level. Although using sensors simplifies both programming and quality control, other problems still need to be solved. These involve the selection of machining data, e.g. feed rate and grinding force in a force-controlled operation, based on parameters such as tool type, disc grinder and geometry. In order to decrease programming time, a process model for disc grinders has been developed. This article investigates this process model and pays attention to problems such as wavy or burned surfaces and the effect of a robot's repetition accuracy on the results obtained. Many aspects treated in this article are quite general and can be applied to other types of grinding operations.
In persistent homology, the persistence barcode encodes pairs of simplices marking the birth and death of homology classes. Persistence barcodes depend on the ordering of the simplices (called a filter) of the given simplicial complex. In this paper, we define the notion of a "minimal" barcode in terms of entropy. Starting from a given filtration of a simplicial complex K, an algorithm for computing a "proper" filter (a total ordering of the simplices that preserves the partial ordering imposed by the filtration while achieving a persistence barcode with small entropy) is detailed, by way of computing, and subsequently modifying, maximum matchings on subgraphs of the Hasse diagram associated to K. Examples demonstrating the utility of computing such a proper ordering of the simplices are given.
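The entropy by which barcodes are compared can be illustrated with the standard persistence-entropy formula: the Shannon entropy of the normalized bar lengths. This is a minimal sketch of that quantity only, not the paper's matching algorithm on the Hasse diagram.

```python
import math

def persistence_entropy(barcode):
    """Shannon entropy of a barcode: normalize each bar's length by the
    total persistence and take the entropy of that distribution. Lower
    entropy corresponds to a barcode dominated by a few long bars."""
    lengths = [death - birth for birth, death in barcode if death > birth]
    total = sum(lengths)
    probs = [length / total for length in lengths]
    return -sum(p * math.log(p) for p in probs)

# Two hypothetical barcodes with the same total persistence:
print(persistence_entropy([(0, 4), (0, 4)]))  # evenly spread bars
print(persistence_entropy([(0, 7), (0, 1)]))  # dominated by one long bar
```

The second barcode has lower entropy, which is the sense in which the paper's proper filters seek "minimal" barcodes.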
Wireless Multimedia Sensor Networks (WMSNs) consist of networks of interconnected devices that retrieve multimedia content, such as video, audio, acoustic, and scalar data, from the environment. The goal of these networks is optimized delivery of multimedia content based on quality-of-service (QoS) parameters such as delay, jitter and distortion. In multimedia communications each packet has strict playout deadlines, so late-arriving packets and lost packets are treated equally. It is a challenging task to guarantee soft delay deadlines along with energy minimization in resource-constrained, high-data-rate WMSNs. The conventional layered approach does not provide an optimal solution for guaranteeing soft delay deadlines, owing to the large amount of overhead involved at each layer. The cross-layer approach is fast gaining popularity due to its ability to exploit the interdependence between different layers to guarantee QoS constraints such as latency, distortion, reliability, throughput and error rate. This paper presents a channel utilization and delay aware routing (CUDAR) protocol for WMSNs. The protocol is based on a cross-layer approach, which provides soft end-to-end delay guarantees along with efficient utilization of resources. Extensive simulation analysis of CUDAR shows that it provides better delay guarantees than existing protocols and consequently reduces jitter and distortion in WMSN communication.
A bipartite state is classical with respect to party A if and only if party A can perform nondisruptive local state identification (NDLID) by a projective measurement. Motivated by this, we introduce a class of quantum correlation measures for an arbitrary bipartite state. The measures utilize the general Schatten p-norm to quantify the amount of departure from the necessary and sufficient condition of classicality of correlations provided by the concept of NDLID. We show that for the Hilbert–Schmidt norm, i.e., \(p=2\), a closed formula is available for an arbitrary bipartite state. The reliability of the proposed measures is checked from the information-theoretic perspective. The monotonicity behavior of these measures under LOCC is also exemplified. The results reveal that for general pure bipartite states these measures have an upper bound which is an entanglement monotone in its own right. This enables us to introduce a new measure of entanglement, for a general bipartite state, by convex roof construction. Some examples and comparisons with other quantum correlation measures are also provided.
In this paper, we present a faster-than-real-time implementation of a class of dense stereo vision algorithms on a low-power massively parallel SIMD architecture, the CSX700. With two cores, each with 96 processing elements, this SIMD architecture provides a peak computation power of 96 GFLOPS while consuming only 9 W, making it an excellent candidate for embedded computing applications. Exploiting the full features of this architecture, we have developed schemes for an efficient parallel implementation with a minimum of overhead. For the sum of squared differences (SSD) algorithm and VGA (640 × 480) images with disparity ranges of 16 and 32, we achieve a performance of 179 and 94 frames per second (fps), respectively. For HDTV (1,280 × 720) images with disparity ranges of 16 and 32, we achieve 67 and 35 fps, respectively. We have also implemented more accurate, and hence more computationally expensive, variants of the SSD, and for most cases, particularly for VGA images, we have achieved faster-than-real-time performance. Our results clearly demonstrate that, with careful parallelization schemes, the CSX architecture can provide excellent performance and flexibility for various embedded vision applications.
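As a point of reference, the core SSD matching step can be sketched serially as below. This is a naive single-threaded illustration of the algorithm being parallelized, not the CSX700 mapping described in the paper, and the test image is synthetic.

```python
import numpy as np

def ssd_disparity(left, right, max_disp=16, win=5):
    """Naive serial SSD block matching: for every pixel in the left image,
    choose the disparity whose right-image window minimizes the sum of
    squared differences over a win x win neighborhood."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int64)
            best_ssd, best_d = None, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.int64)
                ssd = int(((patch - cand) ** 2).sum())
                if best_ssd is None or ssd < best_ssd:
                    best_ssd, best_d = ssd, d
            disp[y, x] = best_d
    return disp

# A horizontal intensity ramp shifted by 3 pixels simulates uniform disparity:
left = np.tile(np.arange(40, dtype=np.int64), (20, 1))
right = np.roll(left, -3, axis=1)
dmap = ssd_disparity(left, right, max_disp=8)
print(dmap[10, 20])  # -> 3
```

The three nested loops make the data-parallel structure obvious: each pixel's disparity search is independent, which is what SIMD architectures such as the CSX700 exploit.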
Dysfluency and stuttering are breaks or interruptions of normal speech, such as repetition, prolongation, interjection of syllables, sounds, words or phrases, and involuntary silent pauses or blocks in communication. Stuttering assessment through manual classification of speech dysfluencies is subjective, inconsistent, time-consuming and prone to error. This paper proposes an objective evaluation of speech dysfluencies based on the wavelet packet transform with sample entropy features. Dysfluent speech signals are decomposed into six levels using the wavelet packet transform. Sample entropy (SampEn) features are extracted at every level of decomposition and used to characterize the speech dysfluencies (stuttered events). Three different classifiers, k-nearest neighbor (kNN), a linear discriminant analysis (LDA) based classifier and a support vector machine (SVM), are used to investigate the performance of the sample entropy features for the classification of speech dysfluencies. A 10-fold cross-validation method is used to test the reliability of the classifier results. The effect of different wavelet families on the classification performance is also investigated. Experimental results demonstrate that the proposed features and classification algorithms give a very promising classification accuracy of 96.67% with a standard deviation of 0.37, and that the proposed method can help speech-language pathologists classify speech dysfluencies.
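The SampEn feature itself can be sketched as follows. This is a plain, simplified sample-entropy routine using the common defaults m = 2 and r = 0.2·σ (not necessarily the paper's settings), and the wavelet packet decomposition that precedes feature extraction is omitted.

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """SampEn(m, r): negative log of the conditional probability that two
    subsequences matching for m points (Chebyshev distance <= r) also
    match for m + 1 points. Self-matches are excluded. Simplified sketch
    with the common defaults m = 2, r = 0.2 * std."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * x.std()
    def count_matches(length):
        templates = np.array([x[i:i + length] for i in range(len(x) - length + 1)])
        count = 0
        for i in range(len(templates) - 1):
            # Chebyshev distance from template i to every later template.
            dist = np.abs(templates[i + 1:] - templates[i]).max(axis=1)
            count += int((dist <= r).sum())
        return count
    b = count_matches(m)      # matching template pairs of length m
    a = count_matches(m + 1)  # matching template pairs of length m + 1
    return -np.log(a / b)

# A regular signal scores lower than white noise:
sine = np.sin(np.linspace(0, 8 * np.pi, 200))
noise = np.random.default_rng(1).standard_normal(200)
print(sample_entropy(sine), sample_entropy(noise))
```

Higher SampEn indicates a more irregular signal, which is what makes it a plausible discriminator between fluent and dysfluent speech segments.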
In this paper, we propose a source localization algorithm based on a sparse Fast Fourier Transform (FFT)-based feature extraction method and spatial sparsity. We represent the sound source positions as a sparse vector by discretely segmenting the space with a circular grid. The location vector is related to the microphone measurements through a linear equation, which can be estimated at each microphone. For this linear dimensionality reduction, we utilize compressive sensing (CS) and a two-level FFT-based feature extraction method that combines two sets of audio signal features, covering both short-time and long-time properties of the signal. The proposed feature extraction method leads to a sparse representation of audio signals; as a result, a significant reduction in the dimensionality of the signals is achieved. In comparison with state-of-the-art methods, the proposed method improves accuracy while reducing complexity in some cases.
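The spatial-sparsity recovery step can be illustrated with a generic greedy sparse solver such as orthogonal matching pursuit (OMP). The measurement matrix, grid size, and source amplitudes below are hypothetical, and the paper's microphone model and FFT-based feature extraction are not reproduced here.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from
    y = A @ x by repeatedly selecting the dictionary column most
    correlated with the residual, then re-fitting by least squares."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the selected support, then update residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

# Hypothetical setup: 64 circular-grid cells, 32 linear measurements,
# two active sources at cells 5 and 40.
rng = np.random.default_rng(0)
A = rng.standard_normal((32, 64)) / np.sqrt(32)
x_true = np.zeros(64)
x_true[[5, 40]] = [1.0, -0.7]
x_hat = omp(A, A @ x_true, k=2)
print(np.flatnonzero(x_hat))  # grid cells identified as sources
```

The nonzero entries of the recovered vector mark the occupied grid cells, which is how a sparse location vector translates into source positions.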