Fluidized beds are widely used in power generation and in the chemical, biochemical, and petroleum industries. 3D simulation of commercial-scale fluidized beds has been computationally impractical because of the memory and processor speeds required. In this study, a 3D computational fluid dynamics simulation of a gas-solid bubbling fluidized bed is performed to investigate the effect of using different inter-phase drag models. The drag correlations of Richardson and Zaki, Wen-Yu, Gibilaro, Gidaspow, Syamlal-O’Brien, Arastoopour, RUC, Di Felice, Hill-Koch-Ladd, Zhang and Reese, and adjusted Syamlal are reviewed using a multiphase Eulerian-Eulerian model to simulate the momentum transfer between phases. Furthermore, a method is proposed to adjust the Di Felice drag model in a three-dimensional domain, using the experimental value of the minimum fluidization velocity as a calibration point. Comparisons are made with both a 2D Cartesian simulation and experimental data. The experiments are performed on a Plexiglas rectangular fluidized bed containing spherical glass beads, with ambient air as the gas phase. Comparisons are based on solid volume fraction, expansion height, and pressure drop inside the fluidized bed at different superficial gas velocities. The results of the proposed drag model agree well with the experimental data. The effect of the restitution coefficient on the three-dimensional prediction of bed height is also investigated, and an optimum value of the restitution coefficient for modeling fluidized beds in the bubbling regime is proposed. Finally, a sensitivity analysis is performed on the grid interval size to obtain an optimum mesh size, balancing accuracy and time efficiency.
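A minimal sketch of one of the reviewed correlations, the Wen-Yu drag model, illustrates the form such inter-phase momentum-exchange terms take (the function name and the sample property values in the usage below are illustrative, not taken from the study):

```python
def wen_yu_drag(eps_g, rho_g, mu_g, d_p, u_slip):
    """Wen-Yu gas-solid momentum-exchange coefficient (beta).

    eps_g: gas volume fraction, rho_g: gas density [kg/m^3],
    mu_g: gas viscosity [Pa s], d_p: particle diameter [m],
    u_slip: magnitude of the gas-solid slip velocity [m/s].
    """
    eps_s = 1.0 - eps_g
    # particle Reynolds number (guarded against a zero slip velocity)
    re = max(eps_g * rho_g * u_slip * d_p / mu_g, 1e-12)
    if re < 1000.0:
        cd = 24.0 / re * (1.0 + 0.15 * re**0.687)  # Schiller-Naumann-type law
    else:
        cd = 0.44
    # 0.75 * Cd * eps_s * eps_g * rho_g * |u_slip| / d_p, with the
    # eps_g^-2.65 voidage correction characteristic of Wen-Yu
    return 0.75 * cd * eps_s * eps_g * rho_g * u_slip / d_p * eps_g**-2.65
```

For example, glass beads of 0.5 mm diameter in ambient air at a 0.3 m/s slip velocity and 50% voidage give a beta of a few thousand kg/(m^3 s); the coefficient grows with slip velocity, which is what couples the gas and solid momentum equations.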
The purpose of this study is to propose and examine a PI–fuzzy path planner and an associated low-level control system for a linear discrete dynamic model of omni-directional mobile robots, in order to obtain optimal inputs for the drivers. Velocity and acceleration filtering is also implemented in the path planner to satisfy planning prerequisites and prevent slippage. Regulated driver rotational velocities and torques greatly affect the ability of these robots to perform trajectory-planner tasks; these regulated values are examined here by setting up an optimal controller. Introducing optimal controllers, such as linear quadratic tracking for multi-input–multi-output control systems, during acceleration and deceleration is an essential subject in the motion control of omni-directional mobile robots. The main topics presented and discussed in this article are improvements to the discrete-time linear quadratic tracking approach as the low-level controller, and a combined PI–fuzzy path planner with an appropriate speed-monitoring algorithm as the high-level one, both with and without external disturbance. The low-level tracking controller provides an optimal solution that minimizes the difference between the reference trajectory and the system output. The efficiency of this approach is also compared with that of previous PID controllers that employ kinematic modeling. The new approach to trajectory-planning controller design yields more precise and appropriate outputs for the motion of four-wheeled omni-directional mobile robots, as the modeling and experimental results confirm.
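The discrete-time linear quadratic tracking idea can be sketched with the textbook backward Riccati sweep; the recursion below is the generic finite-horizon formulation, and the single-axis double-integrator model in the usage is an illustrative stand-in, not the paper's robot dynamics:

```python
import numpy as np

def lqt_design(A, B, Q, R, refs):
    """Backward sweep for finite-horizon discrete-time LQ tracking.

    Returns per-step feedback gains K_k, feedforward gains Kv_k, and
    auxiliary vectors v_k; the control is u_k = -K_k x_k + Kv_k v_{k+1}.
    """
    N = len(refs) - 1
    S = Q.copy()                       # S_N = Q (terminal weight)
    vs = [None] * (N + 1)
    vs[N] = Q @ refs[N]
    Ks, Kvs = [None] * N, [None] * N
    for k in range(N - 1, -1, -1):
        Minv = np.linalg.inv(R + B.T @ S @ B)
        Ks[k] = Minv @ B.T @ S @ A     # feedback gain
        Kvs[k] = Minv @ B.T            # feedforward gain
        vs[k] = (A - B @ Ks[k]).T @ vs[k + 1] + Q @ refs[k]
        S = A.T @ S @ (A - B @ Ks[k]) + Q
    return Ks, Kvs, vs

def simulate(A, B, Ks, Kvs, vs, x0):
    """Roll the closed loop forward from x0 under the LQT control law."""
    x, xs = x0, [x0]
    for k in range(len(Ks)):
        u = -Ks[k] @ x + Kvs[k] @ vs[k + 1]
        x = A @ x + B @ u
        xs.append(x)
    return xs
```

With a constant position reference, the closed loop drives the tracking error toward zero over the horizon, which is the property the low-level controller in the abstract relies on.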
One of the greatest challenges in developing image segmentation algorithms is finding a comprehensive measure to evaluate their accuracy. Although several measures exist for this task, each considers only one aspect of segmentation in the evaluation process. The performance of evaluation measures can be improved by combining single measures; however, such a combination does not always lead to an appropriate criterion, and the efficiency of the new measure must be considered alongside its effectiveness. In this paper, a new combined evaluation measure based on genetic programming (GP) is sought. Owing to the nature of evolutionary approaches, the proposed method allows both linear and nonlinear combinations of single evaluation measures and can search among many different combinations of basic operators to find a sufficiently good one. We also propose a new fitness function that enables GP to search the space effectively and efficiently. The method is tested on the Berkeley and Weizmann datasets in several different experiments. Experimental results demonstrate that the GP-based approach is suitable for effectively combining single evaluation measures.
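The core idea of searching over operator combinations of single measures can be sketched as follows; random sampling of candidate expressions stands in for full GP evolution here, and the measure values and reference ranking are synthetic, not from the paper's datasets:

```python
import operator
import random

# Candidate combined measures are expressions over two single scores,
# e.g. a region measure m1 and a boundary measure m2.
OPS = [operator.add, operator.mul, lambda a, b: a - b]

def random_expr():
    """Sample one candidate combination: op(w1 * m1, w2 * m2)."""
    op = random.choice(OPS)
    w1, w2 = random.random(), random.random()
    return lambda m1, m2, op=op, w1=w1, w2=w2: op(w1 * m1, w2 * m2)

def fitness(expr, samples, ref_rank):
    """Score a candidate by how well it reproduces a reference ranking
    of segmentations (a stand-in for the paper's fitness function)."""
    scores = [expr(m1, m2) for m1, m2 in samples]
    rank = sorted(range(len(scores)), key=lambda i: -scores[i])
    return sum(a == b for a, b in zip(rank, ref_rank))

random.seed(0)
# four segmentations scored by the two single measures (synthetic)
samples = [(0.9, 0.8), (0.7, 0.9), (0.4, 0.3), (0.2, 0.5)]
ref_rank = [0, 1, 3, 2]  # desired ordering, best first (synthetic)
best = max((random_expr() for _ in range(200)),
           key=lambda e: fitness(e, samples, ref_rank))
```

A full GP would evolve deeper expression trees with crossover and mutation; the point of the sketch is only that the search space is combinations of operators over single measures, ranked by agreement with a reference.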
Time-based Software Transactional Memory (STM) exploits a global clock to validate transactional data and guarantee the consistency of transactions. While this method is simple to implement, it causes contention over the clock when transactions commit simultaneously. The alternative, the thread local clock (TLC), uses local variables to maintain transaction consistency; however, TLC may increase false aborts and degrade STM performance. In this paper, we analyze the global clock and TLC in the context of STM systems, highlighting both the implementation trade-offs and the performance implications of the two techniques. We demonstrate that neither the global clock nor TLC is optimal across applications. To counter this challenge, we introduce two optimization techniques. The first, Adaptive Clock (AC), dynamically selects one of the two validation techniques based on the probability of conflicts; AC is a speculative approach that relies on software O-GEHL predictors to anticipate future conflicts. The second, AC+, reduces the timing overhead of the O-GEHL predictors by implementing them in hardware. In addition, we exploit information theory to eliminate unnecessary computational resources and reduce the storage requirements of the O-GEHL predictors. Our evaluation with TL2 and the STAMP benchmark suite reveals that AC is effective and improves the execution time of transactional applications by up to 65%.
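The adaptive-selection idea can be sketched as follows; a two-bit saturating counter stands in for the O-GEHL predictor, and the class and method names are illustrative, not the paper's implementation:

```python
class AdaptiveClock:
    """Toy sketch of the Adaptive Clock idea: pick between global-clock
    and thread-local-clock (TLC) validation based on predicted conflicts.
    The real AC uses O-GEHL predictors; a 2-bit saturating counter
    stands in for the predictor here."""

    def __init__(self):
        self.counter = 0  # 0..3; high means conflicts are likely

    def choose_validation(self):
        # conflicts likely  -> pay the contended global-clock cost for
        #                      precise validation;
        # conflicts unlikely -> cheap TLC validation, accepting the risk
        #                      of occasional false aborts
        return "global_clock" if self.counter >= 2 else "tlc"

    def update(self, conflicted):
        """Train the predictor with the observed commit outcome."""
        if conflicted:
            self.counter = min(3, self.counter + 1)
        else:
            self.counter = max(0, self.counter - 1)
```

The saturating counter hysteresis keeps the choice stable under noisy outcomes, which is the same role the (much richer) O-GEHL history tables play in AC.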
Cops and Robbers is a pursuit-and-evasion game played on graphs that has received much attention. We consider an extension, distance k Cops and Robbers, in which the cops win if at least one of them is at distance at most k from the robber in G. The cop number of a graph G is the minimum number of cops needed to capture the robber in G; the distance k analogue of the cop number, written c_k(G), equals the minimum number of cops needed to win at the given distance k. We study the parameter c_k from algorithmic, structural, and probabilistic perspectives. We supply a classification result for graphs with bounded c_k(G) values and develop an O(n^(2s+3)) algorithm for determining whether c_k(G) ≤ s for fixed s. We prove that if s is not fixed, then computing c_k(G) is NP-hard. Upper and lower bounds are found for c_k(G) in terms of the order of G. We prove that
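The game itself can be decided by brute force on small graphs; the retrograde fixpoint below handles the single-cop case and is only an illustration of the distance-k win condition, not the paper's O(n^(2s+3)) algorithm:

```python
from itertools import product

def one_cop_wins(adj, k):
    """Decide whether one cop wins distance-k Cops and Robbers on a
    finite connected graph, by a fixpoint over game states.
    adj maps each vertex to its list of neighbours."""
    V = list(adj)

    def bfs(s):  # graph distances from s
        d, frontier = {s: 0}, [s]
        while frontier:
            nxt = []
            for u in frontier:
                for v in adj[u]:
                    if v not in d:
                        d[v] = d[u] + 1
                        nxt.append(v)
            frontier = nxt
        return d

    dist = {s: bfs(s) for s in V}
    closed = {v: [v] + list(adj[v]) for v in V}  # a player may stay put
    # state (c, r, t): cop at c, robber at r; t == 0 means cop to move.
    # Base case: the cop has already won if dist(c, r) <= k.
    win = {(c, r, t) for c, r, t in product(V, V, (0, 1))
           if dist[c].get(r, 10**9) <= k}
    changed = True
    while changed:
        changed = False
        for c, r, t in product(V, V, (0, 1)):
            if (c, r, t) in win:
                continue
            if t == 0:   # cop wins if some move reaches a winning state
                ok = any((c2, r, 1) in win for c2 in closed[c])
            else:        # robber loses if every move reaches one
                ok = all((c, r2, 0) in win for r2 in closed[r])
            if ok:
                win.add((c, r, t))
                changed = True
    # cop picks a start; the robber then picks the worst start for the cop
    return any(all((c0, r0, 0) in win for r0 in V) for c0 in V)
```

For instance, one cop wins on any path at k = 0, loses on the 4-cycle at k = 0 (its cop number is 2), but wins on the 4-cycle at k = 1, which shows how raising k can lower c_k(G).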
International Journal of Computer Vision - Visual place recognition (VPR) is the process of recognising a previously visited place using visual information, often under varying appearance...
This paper updates a method for generating small, accurate kinetic models for use in computational fluid dynamics programs. The method first uses a time-integrated, flux-based algorithm to generate the smallest possible skeletal model from the detailed kinetic model. It then applies a multi-stage optimization process in which multiple runs of a genetic algorithm optimize the rate-constant parameters of the retained reactions. This optimization technique gives the user the flexibility to balance the fidelity of the model against their time constraints. The updated method was applied to the reduction of a methane-air model under conditions approximating the end of the compression stroke of an internal combustion engine. Compared with previous techniques, this method produced a more accurate model in considerably less time. The best model obtained in this study had relative errors ranging from 0.22% to 1.14% on all six optimization targets. This reduced model also adequately predicted the optimization targets at certain operating conditions that were not included in the optimization process.
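The second stage, a genetic algorithm tuning the retained rate constants to match detailed-model targets, can be sketched as follows; the "model" here is a stand-in polynomial with synthetic targets, not a real kinetic mechanism:

```python
import random

def model(mults):
    """Stand-in for the reduced mechanism's predictions as a function of
    rate-constant multipliers (a real run would integrate the kinetics)."""
    return [mults[0] * 2.0, mults[1] + mults[2], mults[0] * mults[2]]

TARGETS = [2.0, 2.0, 1.0]  # detailed-model values to reproduce (synthetic)

def error(mults):
    """Summed relative error over the optimization targets."""
    return sum(abs(p - t) / t for p, t in zip(model(mults), TARGETS))

def ga(pop_size=40, gens=60, seed=1):
    """Elitist GA with averaging crossover and Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.5, 2.0) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=error)
        elite = pop[: pop_size // 4]           # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            children.append([(x + y) / 2 + rng.gauss(0, 0.05)
                             for x, y in zip(a, b)])
        pop = elite + children
    return min(pop, key=error)

best = ga()
```

In the paper's multi-stage scheme, several such runs are chained with different targets or tolerances, which is where the fidelity-versus-time trade-off comes from.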
In this paper, a new method for handling occlusion in face recognition is presented. The faces are partitioned into blocks and a sequential recognition structure is developed. A spatial attention control strategy over the blocks is then learned using reinforcement learning; the outcome of this learning is a list of blocks sorted by their average importance to the face recognition task. In recall mode, the sorted blocks are employed sequentially until a confident decision is made. Results of various experiments on the AR face database demonstrate the superior performance of the proposed method compared with the holistic approach in recognizing occluded faces.
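The sequential recall stage can be sketched as follows; the block names, similarity scores, and margin-based confidence test are synthetic stand-ins for the RL-learned ordering and the paper's decision rule:

```python
def sequential_recognize(block_scores, order, threshold):
    """Consult face blocks in their learned importance order, accumulating
    per-identity similarity, and stop once the leading identity's margin
    over the runner-up clears the confidence threshold.

    block_scores[b] maps identity -> similarity for block b.
    Returns (identity, number of blocks used)."""
    totals = {}
    for used, b in enumerate(order, start=1):
        for ident, s in block_scores[b].items():
            totals[ident] = totals.get(ident, 0.0) + s
        ranked = sorted(totals.values(), reverse=True)
        # confident when the best identity clearly leads the runner-up
        if len(ranked) > 1 and ranked[0] - ranked[1] >= threshold:
            return max(totals, key=totals.get), used
    return max(totals, key=totals.get), len(order)
```

Stopping early is what makes the method robust to occlusion: blocks that an occluder corrupts tend to sit late in the importance ordering and are often never consulted.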
A Data Grid is a geographically distributed environment for large-scale, data-intensive applications. Effective scheduling in a Grid can reduce the amount of data transferred among nodes by submitting a job to the node where most of the requested data files are available, and data replication is another key optimization technique for reducing access latency and managing large data sets by storing data judiciously. In this paper, two algorithms are proposed. The first is a novel job scheduling algorithm, the Combined Scheduling Strategy (CSS), which uses hierarchical scheduling to reduce the search time for an appropriate computing node; it considers the number of jobs waiting in the queue, the location of the data required by each job, and the computing capacity of the sites. The second is a dynamic data replication strategy, the Modified Dynamic Hierarchical Replication Algorithm (MDHRA), which improves file access time and is an enhanced version of the Dynamic Hierarchical Replication (DHR) strategy. Because the storage capacity of each Grid site is limited, replication must be used wisely, so an effective replica replacement strategy is essential. MDHRA replaces replicas based on the time each replica was last requested, its number of accesses, and its size. It selects the best replica location from among the many replicas based on response time, determined from the data transfer time, the storage access latency, the replica requests waiting in the storage queue, and the distance between nodes. Simulation results demonstrate that the proposed replication and scheduling strategies outperform the other algorithms.
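The replacement decision that weighs recency, access count, and size can be sketched as follows; the weight values, their normalizations, and the assumption that larger replicas are costlier to re-fetch (and hence worth retaining) are illustrative choices, not taken from the paper:

```python
def retention_value(r, now, w=(0.5, 0.3, 0.2)):
    """MDHRA-style retention score for a replica: recently requested,
    frequently accessed, and expensive-to-re-fetch replicas score higher.
    The weights and scalings are illustrative assumptions."""
    recency = 1.0 / (1.0 + now - r["last_request"])  # newer -> closer to 1
    frequency = r["accesses"] / 100.0
    refetch_cost = r["size_mb"] / 1000.0  # assumed: big files cost more to re-fetch
    return w[0] * recency + w[1] * frequency + w[2] * refetch_cost

def eviction_candidate(replicas, now):
    """When a site's storage is full, evict the replica whose loss hurts
    least, i.e. the one with the lowest retention value."""
    return min(replicas, key=lambda r: retention_value(r, now))
```

A stale, rarely requested replica is evicted before a hot one regardless of size, which matches the abstract's intent of keeping each site's limited storage filled with the replicas most likely to be requested again.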