Phenolics have recently been of great concern because of their extreme toxicity and persistence in the environment. This study explores the possibility of using gastropod shell dust (GPSD) to remove phenol from aqueous solutions. The removal of phenol was investigated in batch mode. The influence of different experimental parameters—initial pH, adsorbent dose, initial concentration, contact time, stirring rate, temperature, and their interactions during phenol adsorption—was determined by response surface methodology based on a three-level four-factorial Box–Behnken design. Optimized values of initial phenol concentration, pH, adsorbent dose, and contact time were found to be 10.16 mg/L, 4.22, 0.50 g/L, and 33.47 min, respectively. The experimental equilibrium data were tested against four widely used isotherm models, namely Langmuir, Freundlich, Dubinin–Radushkevich (D–R), and Temkin. Adsorption of phenol on gastropod shell dust correlated with the Langmuir isotherm model, implying monolayer coverage of phenol on the surface of the adsorbent. The maximum adsorption capacity was found to be 56.89 mg g⁻¹ at 333 K. A regeneration study revealed that about 92% of the phenol can be recovered from the spent GPSD within 90 min. The kinetics of the adsorption process were tested with pseudo-first-order and pseudo-second-order kinetic models and an intra-particle diffusion mechanism. The pseudo-second-order kinetic model provided a better correlation for the experimental data than the pseudo-first-order model. Intra-particle diffusion was not the sole rate-controlling factor. The activation energy of the adsorption process (Ea) was found to be 2.68 kJ mol⁻¹, indicating the physisorption nature of phenol adsorption onto gastropod shell dust. A thermodynamic study showed the spontaneity and feasibility of the adsorption process. A negative enthalpy (ΔH°) value indicated that the adsorption process was exothermic.
The results revealed that gastropod shell dust can be used as an effective and low-cost adsorbent to remove phenol from aqueous solutions.
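The Langmuir fit described above can be sketched with a nonlinear least-squares regression. The equilibrium data below are synthetic placeholders (generated around the reported qmax of 56.89 mg/g with an assumed KL of 0.15 L/mg), not the study's measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * kl * ce / (1.0 + kl * ce)

# Hypothetical equilibrium data: Ce in mg/L, qe in mg/g (synthetic, noisy).
ce = np.array([2.0, 5.0, 10.0, 20.0, 40.0])
qe = langmuir(ce, 56.89, 0.15) + np.random.default_rng(0).normal(0, 0.3, ce.size)

# Fit the two Langmuir parameters to the data.
(qmax_fit, kl_fit), _ = curve_fit(langmuir, ce, qe, p0=[50.0, 0.1])
print(f"qmax = {qmax_fit:.2f} mg/g, KL = {kl_fit:.3f} L/mg")
```

A linearised fit of Ce/qe against Ce is the other common route; the nonlinear form avoids the error distortion that linearisation introduces.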
Air pollution is caused by a variety of sources such as industries, vehicles, cremation, bakeries, and open burning. Emissions from these sources vary on different time scales: industrial and bakery emissions vary by day or week, while the remaining sources, such as vehicles and the domestic sector, vary within a single day. In fact, vehicular emissions vary strongly with the time of day. The 24-h average concentration is much lower than the hourly concentration at peak times, when vehicular emissions are heavy, and the hourly concentration during off-peak or lean periods is very low owing to the low emissions in those periods. The air quality standards of India are prescribed for 24-h average concentrations, against which the average concentrations predicted by models are compared. However, the peak-time concentration may be much higher than the standard. At peak times the outdoor concentration is higher and, since a large proportion of the population is outdoors, exposure is also very high and can cause severe health effects. In this paper, vehicular pollution modeling has been carried out using AERMOD with meteorology simulated by the Weather Research and Forecasting (WRF) model. NOx and PM concentrations were 3.6 and 1.45 times higher in the peak period than in the off-peak and evening peak periods, respectively. The lean period had higher concentrations of both NOx and PM than the off-peak and evening peak periods. This shows how misleading it can be to compare 24-h average predicted concentrations against standards for vehicular sources.
Reconfigurable manufacturing systems are designed to deliver exactly the functionality and capacity that is needed, when it is needed. The reconfigurable machine tool (RMT) plays a pivotal role in accomplishing this objective through its built-in modular structure, consisting of basic and auxiliary modules, along with open-architecture software.
Various techniques have been proposed to enable organisations to assess the current quality level of their data. Unfortunately, organisations have many different requirements related to data quality (DQ) assessment. For example, some organisations may need to focus on ensuring that regulations are met rather than on reducing costs. As a result, organisations may be forced to follow an assessment technique that does not wholly fit their needs and current situation. Therefore, we propose and evaluate the Hybrid Approach to assessing DQ, which demonstrates how to dynamically configure an assessment technique as needed while leveraging best practices from existing assessment techniques.
Meta-schedulers map jobs to computational resources that are part of a Grid, such as clusters, which in turn have their own local job schedulers. Existing Grid meta-schedulers either target system-centric metrics, such as utilisation and throughput, or prioritise jobs based on utility metrics provided by the users. The system-centric approach gives less importance to users’ individual utility, while the user-centric approach may have adverse effects such as poor system performance and unfair treatment of users. Therefore, this paper proposes a novel meta-scheduler, based on the well-known double auction mechanism, that aims to satisfy users’ service requirements while ensuring balanced utilisation of resources across a Grid. We have designed valuation metrics that commodify both the complex resource requirements of users and the capabilities of available computational resources. Through simulation using real traces, we compare our scheduling mechanism with other common mechanisms widely used by both existing market-based and traditional meta-schedulers. The results show that our meta-scheduling mechanism not only satisfies up to 15% more user requirements than the others, but also improves system utilisation through load balancing.
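The core double-auction idea the abstract builds on can be sketched as a simple matching rule: jobs submit bids, resources submit asks, and the auctioneer pairs the highest bids with the lowest asks while a trade is profitable. The job/cluster names and valuations below are illustrative assumptions, not the paper's valuation metrics:

```python
def match_double_auction(bids, asks):
    """Match highest bids with lowest asks while bid >= ask.

    bids/asks: lists of (name, price). Returns (buyer, seller, price)
    triples, pricing each trade at the bid/ask midpoint.
    """
    bids = sorted(bids, key=lambda b: -b[1])   # best (highest) bids first
    asks = sorted(asks, key=lambda a: a[1])    # best (lowest) asks first
    trades = []
    for (buyer, bid), (seller, ask) in zip(bids, asks):
        if bid < ask:                          # no further profitable matches
            break
        trades.append((buyer, seller, (bid + ask) / 2))
    return trades

trades = match_double_auction(
    bids=[("job-A", 10.0), ("job-B", 6.0), ("job-C", 3.0)],
    asks=[("cluster-1", 4.0), ("cluster-2", 5.0), ("cluster-3", 9.0)],
)
```

Here job-C is priced out: its bid of 3.0 cannot meet the cheapest remaining ask, which is the mechanism's way of deferring low-utility work when resources are scarce.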
A system for the detection, segmentation and recognition of multi-class hand postures against complex natural backgrounds is presented. Visual attention, which is the cognitive process of selectively concentrating on a region of interest in the visual field, helps humans recognize objects in cluttered natural scenes. The proposed system utilizes a Bayesian model of visual attention to generate a saliency map, and to detect and identify the hand region. Feature-based visual attention is implemented using a combination of high-level (shape, texture) and low-level (color) image features. The shape and texture features are extracted from a skin similarity map, using a computational model of the ventral stream of the visual cortex. The skin similarity map, which represents the similarity of each pixel to human skin color in the HSI color space, enhances the edges and shapes within the skin-colored regions. The color features used are the discretized chrominance components in the HSI and YCbCr color spaces, and the skin similarity map. The hand postures are classified using the shape and texture features with a support vector machine classifier. A new 10-class complex-background hand posture dataset, the NUS hand posture dataset-II, is developed for testing the proposed algorithm (40 subjects, different ethnicities, various hand sizes, 2750 hand postures and 2000 background images). The algorithm is tested for hand detection and hand posture recognition using 10-fold cross-validation. The experimental results show that the algorithm is person independent and is reliable against variations in hand sizes and complex backgrounds. The algorithm provided a recognition rate of 94.36%. A comparison of the proposed algorithm with other existing methods demonstrates its superior performance.
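The skin-similarity-map idea can be illustrated with a per-pixel similarity to a reference skin hue. This sketch works in a normalised hue channel (a close cousin of the HSI hue the paper uses); the reference hue and Gaussian width are illustrative assumptions, not the paper's values:

```python
import numpy as np

def skin_similarity_map(hue, skin_hue=0.05, width=0.08):
    """Gaussian similarity of each pixel's hue (in [0, 1)) to a skin hue.

    Hue is circular, so the distance wraps around at 1.0.
    """
    d = np.minimum(np.abs(hue - skin_hue), 1.0 - np.abs(hue - skin_hue))
    return np.exp(-(d / width) ** 2)

# Toy 2x2 "image" of hue values: two skin-like pixels, two background pixels.
hue = np.array([[0.05, 0.50],
                [0.06, 0.90]])
sim = skin_similarity_map(hue)
```

High values mark skin-like pixels, so edges of the map line up with hand contours, which is what lets shape and texture features be extracted from it downstream.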
This paper proposes a new method for image binarization that uses an iterative partitioning approach. The proposed method has been tested on the binarization of both document and graphic images. Quantitative comparisons with other standard methods reveal that the proposed approach outperforms existing widely used binarization techniques in terms of binarization accuracy. The experimental results further establish the superiority of the proposed method, especially for degraded documents and graphic images. The proposed algorithm is suitable for a multi-core processing environment, as it can be split into multiple parallel units of execution after the initial partitioning.
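The abstract does not specify the paper's iterative-partitioning algorithm, but a related classical baseline conveys the flavour: the iterative (ISODATA-style) threshold repeatedly partitions pixels at a threshold T and moves T to the midpoint of the two class means until it stabilises. This is a sketch of that baseline, not the paper's method:

```python
import numpy as np

def iterative_threshold(img, eps=0.5):
    """ISODATA-style iterative threshold selection on grey levels."""
    t = img.mean()                       # initial guess: global mean
    while True:
        lo, hi = img[img <= t], img[img > t]
        t_new = 0.5 * (lo.mean() + hi.mean())   # midpoint of class means
        if abs(t_new - t) < eps:
            return t_new
        t = t_new

# Toy bimodal "image": dark text pixels (40) and light background (200).
img = np.concatenate([np.full(500, 40.0), np.full(500, 200.0)])
t = iterative_threshold(img)
binary = (img > t).astype(np.uint8)
```

The per-partition variant of such a rule is what makes the approach parallelisable: once the image is partitioned, each region's threshold can be computed on a separate core.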
The present paper deals with modeling the AC resistance of twisted litz wires used in high-frequency inverter-fed induction cookers. Several traditional approaches are available, most of which have concentrated on deriving analytical relationships between the AC resistance and the parameters of the wire. However, it is very difficult to obtain an exact relationship, for several reasons. An attempt is made in this paper to model the AC resistance using a three-layer feed-forward neural network. For this purpose, four inputs (wire type, number of strands, number of spiral turns and operating frequency) and one output, the AC resistance, have been considered. Since the performance of the neural network alone might not be optimal, it is optimized using a binary-coded genetic algorithm. The performance of the proposed approach was compared with the method of AC resistance computation proposed by Ferreira. The genetic–neural system gave results of very close accuracy, and its computational complexity was found to be very low. Thus, it is suitable for online implementation.
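The three-layer feed-forward mapping described above (four inputs to one output) can be sketched as follows. The weights here are random placeholders; in the paper they would be trained and then tuned by the binary-coded genetic algorithm, and the inputs would be scaled before use:

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)   # input layer -> 8 hidden units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # hidden layer -> output

def predict_ac_resistance(x):
    """One forward pass: tanh hidden layer, linear output."""
    h = np.tanh(x @ W1 + b1)
    return (h @ W2 + b2)[0]

# Illustrative, unscaled inputs: wire type code, strands, spiral turns, frequency (Hz).
x = np.array([1.0, 20.0, 5.0, 25e3])
r_ac = predict_ac_resistance(x)
```

A genetic algorithm would then encode the flattened weights as a binary chromosome and evolve them against a fitness function such as the mean squared error on measured AC resistance data.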
Early detection and diagnosis of faults in industrial machines reduces maintenance cost and also increases overall equipment effectiveness by increasing the availability of machinery systems. In this paper, a semi-nonparametric approach based on the hidden Markov model is introduced for fault detection and diagnosis in synchronous motors. In this approach, after training the hidden Markov model classifiers (parametric stage), two matrices, named the probabilistic transition frequency profile and the average probabilistic emission, are computed from the hidden Markov models for each signature (nonparametric stage) using probabilistic inference. These matrices are later used to form a similarity scoring function, which is the basis of classification in this approach. Moreover, a preprocessing method, named squeezing and stretching, is proposed, which rectifies the difficulty of dealing with various operating speeds in the classification process. Finally, the experimental results are provided and compared. Further investigations are carried out, providing sensitivity analysis on the length of signatures and the number of hidden state values, as well as statistical performance evaluation and comparison with a conventional hidden Markov model-based fault diagnosis approach. Results indicate that implementation of the proposed preprocessing, which unifies the signatures from various operating speeds, increases the classification accuracy by nearly 21%, and utilization of the proposed semi-nonparametric approach improves the accuracy by a further 6%.
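One way to read the "transition frequency profile" notion is as a row-normalised count of transitions in a decoded hidden-state sequence. The construction below is our illustrative interpretation of the abstract, not the paper's exact computation (which is probabilistic rather than hard-count based):

```python
import numpy as np

def transition_frequency_profile(states, n_states):
    """Row-normalised transition counts of a decoded hidden-state sequence."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    row = counts.sum(axis=1, keepdims=True)
    # Normalise each row to frequencies; rows with no outgoing transitions stay zero.
    return np.divide(counts, row, out=np.zeros_like(counts), where=row > 0)

# Toy decoded state sequence from a trained HMM (hypothetical).
profile = transition_frequency_profile([0, 0, 1, 2, 1, 1, 0], n_states=3)
```

A similarity score between a test signature and a class can then be computed by comparing such profiles, e.g. via an elementwise distance between the two matrices.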
In wireless sensor networks, data aggregation can cause increased transmission overhead, failures, data loss and security-related issues. Earlier works did not address both fault management and loss recovery. In order to overcome these drawbacks, this paper proposes a reliable data aggregation scheme that uses a support vector machine (SVM) to perform failure detection and loss recovery. Initially, a group head, selected based on node connectivity, splits the nodes into clusters based on their location information. In each cluster, the cluster member with the maximum node connectivity is chosen as the cluster head. When the aggregator receives data from a source, it identifies node failures in the received data by classifying the faulty data using the SVM. Furthermore, a reserve node-based fault recovery mechanism is developed to prevent data loss. Through simulations, we show that the proposed technique minimises transmission overhead and increases reliability.
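The SVM fault-classification step can be sketched as a two-class problem on sensor readings: train on labelled normal and faulty samples, then flag faults in newly aggregated data. The feature layout and the synthetic reading distributions below are illustrative assumptions, not the paper's setup:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Synthetic 2-D readings (e.g. temperature, humidity): normal vs faulty nodes.
normal = rng.normal(25.0, 1.0, size=(100, 2))   # readings near the true value
faulty = rng.normal(60.0, 5.0, size=(100, 2))   # stuck/outlier readings
X = np.vstack([normal, faulty])
y = np.array([0] * 100 + [1] * 100)             # 0 = normal, 1 = faulty

# Train an RBF-kernel SVM and classify two new aggregated readings.
clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict([[25.3, 24.8], [61.0, 58.2]])
```

In the scheme described above, readings flagged as faulty would then trigger the reserve-node recovery path instead of being merged into the aggregate.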