Similar documents
 20 similar documents found (search time: 15 ms)
1.
In this paper, an automatic grid generator based on STL models is proposed. A staircase boundary treatment is implemented to handle irregular geometries, and the computational domain is discretized using a regular Cartesian grid. With this generator, staircase grids suitable for fast and accurate finite difference analysis can be generated. Employing the slicing algorithm of RP technologies [1], the STL models are sliced with a set of parallel planes to produce 2D slices after the STL files obtained from a CAD system undergo topology reconstruction. To decrease the staircase error (i.e., increase accuracy) and improve efficiency, the cross-section at the middle of each layer is taken to represent the cross-section of the whole layer. The scan-line filling technique of computer graphics [2] is used to perform grid generation after slicing. Finally, we demonstrate an application of the method to generate staircase grids that enable successful FDM simulation in the field of explosion. The example shows that the automatic grid generator based on STL models is fast and gives simulation results in agreement with practical observations.
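The slice-then-fill pipeline above can be sketched in a few lines: an even-odd (parity) point-in-polygon test, the rule underlying scan-line filling, marks which Cartesian cells of one 2D slice lie inside the geometry. This is an illustrative sketch, not the paper's implementation; the function names and the cell-centre sampling rule are assumptions.

```python
# Minimal sketch (not the paper's code): mark Cartesian grid cells whose
# centres fall inside a 2D slice polygon, using the even-odd parity rule
# that scan-line filling is based on.

def point_in_polygon(px, py, poly):
    """Even-odd rule: count edge crossings of a rightward ray from (px, py)."""
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        # Does this edge straddle the horizontal line y = py?
        if (y1 > py) != (y2 > py):
            # x-coordinate where the edge crosses that line
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:
                inside = not inside
    return inside

def rasterise_slice(poly, nx, ny, xmin, ymin, dx, dy):
    """Return an ny-by-nx boolean grid; True = cell centre inside the slice."""
    return [[point_in_polygon(xmin + (i + 0.5) * dx,
                              ymin + (j + 0.5) * dy, poly)
             for i in range(nx)] for j in range(ny)]

# Example: a unit-square slice on a 4x4 grid covering [0,1] x [0,1]
grid = rasterise_slice([(0, 0), (1, 0), (1, 1), (0, 1)],
                       4, 4, 0.0, 0.0, 0.25, 0.25)
```

A full generator would repeat this per slice and stack the resulting layers into a 3D staircase grid.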

2.
RoboCup middle-size league robot soccer matches are highly adversarial and real-time. During a match, robots must switch roles and select tasks according to the changing game situation. In this environment, path-planning methods based on the traditional artificial potential field, or on common improved variants, fail to produce satisfactory results. This paper improves the traditional potential field function by introducing into the artificial potential field method the relative velocity between obstacle and robot and the relative velocity between target and robot; the potential field function is further corrected with fuzzy logic according to the robot's role and task, yielding an improved artificial potential field path-planning method for multi-role, multi-task environments. Simulation experiments and practical application verify the feasibility of this algorithm in robot soccer competition systems.
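As a rough sketch of the velocity-augmented potential field idea described above, the force below combines the classical attractive/repulsive terms with relative-velocity terms. All gains, the influence radius, and the exact form of the velocity terms are illustrative assumptions, and the fuzzy-logic role-dependent correction is omitted.

```python
# Hedged sketch of a velocity-augmented artificial potential field: the
# attractive force gains a term along the goal's velocity relative to the
# robot, and the repulsive force gains a term along the obstacle's relative
# velocity. Gains k_p, k_v, eta and radius rho0 are illustrative only.
import math

def apf_force(robot, goal, obstacle, v_rel_goal, v_rel_obs,
              k_p=1.0, k_v=0.5, eta=1.0, rho0=2.0):
    # Attractive part: pull toward the goal and track its relative velocity
    f = [k_p * (goal[i] - robot[i]) + k_v * v_rel_goal[i] for i in range(2)]
    # Repulsive part: active only inside the obstacle's influence radius
    rho = math.dist(robot, obstacle)
    if 0 < rho < rho0:
        scale = eta * (1.0 / rho - 1.0 / rho0) / rho ** 3
        for i in range(2):
            f[i] += scale * (robot[i] - obstacle[i]) - k_v * v_rel_obs[i]
    return f

# Far obstacle, static goal: pure attraction toward the goal
f = apf_force(robot=(0.0, 0.0), goal=(4.0, 0.0), obstacle=(10.0, 10.0),
              v_rel_goal=(0.0, 0.0), v_rel_obs=(0.0, 0.0))
```

The robot would follow the resultant force at each control step; the paper's role/task-dependent correction would rescale these terms via fuzzy rules.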

3.
Advanced Robotics, 2013, 27(9): 943-959
An adaptive control scheme is proposed for the end-effector trajectory tracking control of free-floating space robots. In order to cope with the nonlinear parameterization problem of the dynamic model of the free-floating space robot system, the system is modeled as an extended robot which is composed of a pseudo-arm representing the base motions and a real robot arm. An on-line estimation of the unknown parameters along with a computed-torque controller is used to track the desired trajectory. The proposed control scheme does not require measurement of the accelerations of the base and the real robot arm. A two-link planar space robot system is simulated to illustrate the validity and effectiveness of the proposed control scheme.

4.
Data hiding in images has evolved as one of the trusted methods of secure data communication, and numerous approaches have been introduced over the years using grayscale images as the cover media. Most of the methods are based on data hiding in the least significant bit planes of cover images. Many such methods rely purely on data substitution algorithms that define a pattern in which data is embedded; one can gain access to the secret data in a few attempts if the algorithm is known. Keeping this in view, several approaches based on secret keys have also been proposed. This paper proposes an efficient data embedding scheme using a key and an embedding pattern generated through the midpoint circle generation algorithm. The pattern can be applied to a carrier that is mapped onto a grid/image. The cryptosystem uses the concept of steganography and is computationally light and secure. The secret key is generated in such a way that the Avalanche effect is ensured except in very rare cases. The proposed data embedding method is shown to be robust and highly secure while maintaining good hiding capacity and imperceptibility. It is applicable to data hiding in a generic grid that could be of pixels or bits.
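The midpoint circle generation algorithm named above is the classic Bresenham-style integer rasteriser; a minimal version is sketched below. How the resulting point set is turned into the paper's embedding pattern and key is not shown here.

```python
# Sketch of the midpoint (Bresenham) circle generation algorithm: it yields
# the set of integer grid points on a circle of radius r, which could then
# index cells of a carrier grid as an embedding pattern.

def midpoint_circle(r):
    """Return the set of integer points on a circle of radius r at the origin."""
    points = set()
    x, y = r, 0
    d = 1 - r                     # decision variable at the first midpoint
    while x >= y:
        # Mirror the computed octant point into all eight octants
        for px, py in ((x, y), (y, x)):
            points |= {(px, py), (-px, py), (px, -py), (-px, -py)}
        y += 1
        if d < 0:
            d += 2 * y + 1        # midpoint inside the circle: keep x
        else:
            x -= 1
            d += 2 * (y - x) + 1  # midpoint outside: step x inward
    return points

pattern = midpoint_circle(3)
```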

5.
A novel manifold learning approach is presented to efficiently identify low-dimensional structures embedded in high-dimensional MRI data sets. These low-dimensional structures, known as manifolds, are used in this study for predicting brain tumor progression. The data sets consist of a series of high-dimensional MRI scans for four patients with tumor and progressed regions identified. We attempt to classify tumor, progressed and normal tissues in low-dimensional space. We also attempt to verify if a progression manifold exists—the bridge between tumor and normal manifolds. By identifying and mapping the bridge manifold back to MRI image space, this method has the potential to predict tumor progression. This could be greatly beneficial for patient management. Preliminary results have supported our hypothesis: normal and tumor manifolds are well separated in a low-dimensional space. Also, the progressed manifold is found to lie roughly between the normal and tumor manifolds.

6.
This paper addresses team formation in the RoboCup Rescue centered on task allocation. We follow a previous approach based on so-called extreme teams, which have four key characteristics: agents act in domains that are dynamic; agents may perform multiple tasks; agents have overlapping functionality regarding the execution of each task but differing levels of capability; and some tasks may impose constraints such as simultaneous execution. So far these four characteristics have not been fully tested in domains such as the RoboCup Rescue. We use a swarm intelligence based approach, address all four characteristics, and compare it to two other GAP-based algorithms. Experiments in which computational effort, communication load, and the score obtained in the RoboCup Rescue are measured show that our approach outperforms the others.
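A common mechanism in swarm-based task allocation is the response-threshold rule, sketched below as an illustrative stand-in: it is not necessarily the authors' exact mechanism, and the agent/task names are invented for the example. It captures the "overlapping functionality with differing capability" characteristic: every agent can engage every task, but with different thresholds.

```python
# Illustrative response-threshold model used in swarm task allocation: an
# agent engages a task with probability s^2 / (s^2 + theta^2), where s is
# the task stimulus and theta the agent's threshold (lower = more capable).

def engage_probability(stimulus, threshold):
    return stimulus ** 2 / (stimulus ** 2 + threshold ** 2)

def allocate(agents, tasks):
    """Greedy sketch: each task goes to the agent most likely to engage it."""
    return {task: max(agents, key=lambda a: engage_probability(s, agents[a][task]))
            for task, s in tasks.items()}

# Two agents with overlapping functionality but differing capability
agents = {"a1": {"rescue": 1.0, "clear": 4.0},
          "a2": {"rescue": 4.0, "clear": 1.0}}
assignment = allocate(agents, {"rescue": 2.0, "clear": 2.0})
```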

7.
As we delve deeper into the 'Digital Age', we witness explosive growth in the volume, velocity, and variety of the data available on the Internet. For example, in 2012 about 2.5 quintillion bytes of data were created daily, originating from a myriad of sources and applications including mobile devices, sensors, individual archives, social networks, the Internet of Things, enterprises, cameras, software logs, etc. Such 'data explosions' have led to one of the most challenging research issues of the current Information and Communication Technology era: how to optimally manage (e.g., store, replicate, filter, and the like) such large amounts of data and identify new ways to analyze them for unlocking information. It is clear that such large data streams cannot be managed by setting up on-premises enterprise database systems, as this leads to a large up-front cost in buying and administering the hardware and software systems. Therefore, next-generation data management systems must be deployed on the cloud. The cloud computing paradigm provides scalable and elastic resources, such as data and services accessible over the Internet. Every cloud service provider must ensure that data is efficiently processed and distributed in a way that does not compromise end-users' Quality of Service (QoS) in terms of data availability, data search delay, data analysis delay, and the like. From this perspective, data replication is used in the cloud to improve the performance (e.g., read and write delay) of applications that access data. Through replication, a data-intensive application or system can achieve high availability, better fault tolerance, and data recovery. In this paper, we survey data management and replication approaches (from 2007 to 2011) developed by both industrial and research communities.
The focus of the survey is to discuss and characterize the existing approaches to data replication and management that tackle resource usage and QoS provisioning with different levels of efficiency. Moreover, the breakdown of both influential expressions (data replication and management) to provide different QoS attributes is deliberated. Furthermore, the performance advantages and disadvantages of data replication and management approaches in cloud computing environments are analyzed. Open issues and future challenges related to data consistency, scalability, load balancing, processing and placement are also reported.

8.
This paper considers binary classification. We assess a classifier in terms of the area under the ROC curve (AUC). We estimate three important parameters: the conditional AUC (conditional on a particular training set) and the mean and variance of this AUC. We derive, as well, a closed-form expression for the variance of the estimator of the AUC. This expression exhibits several components of variance that facilitate an understanding of the sources of uncertainty in that estimate. In addition, we estimate this variance, i.e., the variance of the conditional AUC estimator. Our approach is nonparametric and based on general methods from U-statistics; it addresses the case where the data distribution is neither known nor modeled and where there are only two available data sets, the training and testing sets. Finally, we illustrate some simulation results for these estimators.
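The U-statistic AUC estimator the paper builds on is the Mann-Whitney form: the fraction of positive-negative score pairs the classifier orders correctly, with ties counting one half. A minimal sketch is below; the paper's variance decomposition itself is not reproduced.

```python
# Nonparametric (U-statistic) AUC estimate from classifier scores: count
# correctly ordered positive-negative pairs, ties counting one half.

def auc_u_statistic(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

auc = auc_u_statistic([0.9, 0.8, 0.4], [0.7, 0.3, 0.2])  # 8 of 9 pairs correct
```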

9.
Today's security threats, such as malware, are more sophisticated and targeted than ever, and they are growing at an unprecedented rate. Various approaches have been introduced to deal with them. One of them is signature-based detection, an effective and widely used method for detecting malware; however, it has a substantial problem in detecting new instances. In other words, it is only useful from the second malware attack onwards. Given the rapid proliferation of malware and the human effort needed to extract signatures, this approach is a tedious solution; thus, an intelligent malware detection system is required to deal with new malware threats. Most intelligent detection systems utilise data mining techniques to distinguish malware from benign programs. One of the pivotal phases of these systems is extracting features from malware samples and benign ones in order to build a learning model. This phase, called "malware analysis", plays a significant role in these systems. Since the API call sequence is an effective feature for recognising unknown malware, this paper focuses on extracting this feature from executable files. There are two major kinds of approach to analysing an executable file. The first is "static analysis", which analyses a program at the source code level. The second is "dynamic analysis", which extracts features by observing the program's activities, such as system requests, during its execution. Static analysis has to traverse the program's execution paths in order to find called APIs; because it lacks sufficient information about the decision-making points in the given executable, it cannot extract the real sequence of called APIs. Although dynamic analysis does not have this drawback, it suffers from execution overhead, so the feature extraction phase takes noticeable time.
In this paper, a novel hybrid approach, HDM-Analyser, is presented which takes advantage of both dynamic and static analysis methods to raise speed while preserving accuracy at a reasonable level. HDM-Analyser is able to predict the majority of decision-making points by utilising statistical information gathered by dynamic analysis; therefore, there is no execution overhead. The main contribution of this paper is taking the accuracy advantage of dynamic analysis and incorporating it into static analysis in order to augment the accuracy of static analysis. In fact, the execution overhead is tolerated in the learning phase; thus, it does not burden the feature extraction phase performed during scanning. The experimental results demonstrate that HDM-Analyser attains better overall accuracy and time complexity than static and dynamic analysis methods.

10.
To make full use of the massive historical data held by a telecommunications enterprise and to provide strong support for managers' scientific decision-making, data warehouse technology is applied to extract potentially useful information. A performance synthesis analysis system (PSAS) based on a data warehouse is designed and implemented. The design and implementation workflow of the data warehouse is described in detail, and technical problems such as data warehouse construction and data analysis are solved. Practice shows that the system performs well.

11.
12.
Performance assessment and robustness analysis using an ARMarkov approach
Application of the ARMarkov model-based formulation offers significant advantages for the assessment/monitoring and robustness analysis of process systems. The ARMarkov method does not require a priori specification of the system time delay/interactor matrix, needs only an approximate estimate of model order, and can be applied to open- or closed-loop process data. By appropriate use of standard linear model estimation techniques, it directly produces statistically consistent estimates of the first few, user-specified number of Markov parameters even in the presence of colored noise. It is shown in this paper that the Markov parameters and the ARMarkov model can be used to calculate the interactor matrix and several process performance metrics, including sensitivity/complementary-sensitivity functions and time-domain criteria such as speed of response, minimum variance values, etc. In addition, it is shown that model-based predictive control (MPC) systems formulated using ARMarkov models have a special state-space structure that leads to less conservative robustness bounds for specific types of uncertainties (such as gain mismatch, uncertainty in the fast or slow dynamics, etc.) than applying the Small Gain Theorem directly to the conventional state-space model structure.

13.
Interactive multidimensional visualisation based on parallel coordinates has been studied previously as a tool for process historical data analysis. Here attention is given to improving the technique by introducing dimension reduction and upper and lower limits for separating abnormal data on the coordinate plots. Dimension reduction using independent component analysis transforms the original variables into a smaller number of latent variables which are statistically independent of each other. This enables the visualisation technique to handle a large number of variables more effectively, particularly when the original variables have recycling and interacting correlations and dependencies. Statistical independence between the parallel coordinates also makes it possible to calculate upper and lower limits (UL and LL) for each coordinate, separating abnormal data from normal data. Calculating the UL and LL limits requires each coordinate to follow a Gaussian distribution. In this work the Box–Cox transformation is proposed to transform a non-Gaussian coordinate to a Gaussian distribution before the UL and LL limits are calculated.
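The two steps described above can be sketched directly: a Box–Cox transform brings a coordinate closer to Gaussian, then UL/LL limits are computed on the transformed values. The 3-sigma limit width below is an illustrative choice, not taken from the paper.

```python
# Sketch: Box-Cox transform of a coordinate, then upper/lower control
# limits (UL/LL) as mean +/- k standard deviations in transformed units.
# The k = 3 default is an illustrative assumption.
import math
import statistics

def box_cox(x, lam):
    """Box-Cox transform of positive data x for parameter lambda."""
    if lam == 0:
        return [math.log(v) for v in x]
    return [(v ** lam - 1) / lam for v in x]

def control_limits(x, lam, k=3.0):
    z = box_cox(x, lam)
    mu, sigma = statistics.mean(z), statistics.pstdev(z)
    return mu - k * sigma, mu + k * sigma   # (LL, UL) in transformed units

# Log-normally spread data becomes symmetric under lambda = 0 (log transform)
ll, ul = control_limits([1.0, 2.0, 4.0, 8.0], lam=0)
```

Points whose transformed value falls outside [LL, UL] would be flagged as abnormal on that parallel coordinate.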

14.
A new pseudospectral technique for integrating the incompressible Navier-Stokes equations with one nonperiodic boundary in a Cartesian or cylindrical coordinate system is presented. The algorithm makes use of the Chebyshev collocation technique in the nonperiodic direction. Special attention is paid to the approximate factorization of the discrete Navier-Stokes equations in cylindrical geometry, leading to a highly fast and robust numerical procedure that provides spectral accuracy. The new approach is an efficient tool for further investigation of turbulent shear flows and for testing physical hypotheses and alternative algorithms. Classical problems of incompressible fluid flow in an infinite plane channel and in annuli at transitional Reynolds numbers are taken as model problems.

15.
Performance analysis and research on GridFTP-based parallel data transfer
The requirements for data transfer in wide-area grid networks and the performance characteristics of the grid data transfer protocol GridFTP are analysed, with a focus on GridFTP's most important feature, the parallel transfer mechanism. Extensive experiments compare transfer time, bandwidth, throughput, total data transferred, and other performance parameters at different degrees of parallelism. The influence of the degree of parallelism on transfer performance is discussed, and the limitations of the parallel transfer mechanism in improving transfer performance, along with issues deserving attention, are pointed out.

16.
Data envelopment analysis (DEA) is a mathematical approach for evaluating the efficiency of decision-making units (DMUs) that convert multiple inputs into multiple outputs. Traditional DEA models assume that all input and output data are known exactly. In many situations, however, some inputs and/or outputs take imprecise data. In this paper, we present optimistic and pessimistic perspectives for obtaining an efficiency evaluation for the DMU under consideration with imprecise data. Additionally, slacks-based measures of efficiency are used for direct assessment of efficiency in the presence of imprecise data with slack values. Finally, the geometric average of the two efficiency values is used to determine the DMU with the best performance. A ranking approach based on degree of preference is used for ranking the efficiency intervals of the DMUs. Two numerical examples are used to show the application of the proposed DEA approach.
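The final two steps described above can be sketched numerically: the optimistic and pessimistic efficiency scores are combined by their geometric average, and efficiency intervals are ranked by a degree-of-preference measure. The preference formula below is one common choice for comparing intervals and may differ from the paper's exact definition.

```python
# Hedged sketch: geometric average of pessimistic/optimistic efficiencies,
# and a degree-of-preference measure for ranking efficiency intervals.
import math

def geometric_average(pessimistic, optimistic):
    return math.sqrt(pessimistic * optimistic)

def degree_of_preference(a, b):
    """P(a > b) for intervals a = (aL, aU), b = (bL, bU), in [0, 1]."""
    aL, aU = a
    bL, bU = b
    width = (aU - aL) + (bU - bL)
    if width == 0:                      # both degenerate: compare points
        return 1.0 if aL > bL else (0.5 if aL == bL else 0.0)
    return (max(0.0, aU - bL) - max(0.0, aL - bU)) / width

score = geometric_average(0.64, 1.0)          # combined efficiency = 0.8
p = degree_of_preference((0.6, 0.9), (0.5, 0.7))
```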

17.
Objective: Parallel coordinates is a classic method for visualising multidimensional data, but when applied to geospatial multidimensional data analysis it often suffers from missing spatial position information and uncertain spatial-association analysis. This paper designs a visual analysis method for geospatial multidimensional data that effectively links parallel coordinates with a map. Method: Spatial locations are clustered according to their multidimensional attribute information; a Voronoi diagram and a light-dark colour mapping are introduced to prominently mark each class of spatial region; parallel coordinates present the geospatial multidimensional attribute information; mutual information is introduced to measure the correlation between spatial clusters and attribute categories and to dynamically determine the ordering of the parallel coordinate axes; the binding positions of the data lines between the attribute axes and the map are then computed, and the line layout is optimised to reduce the clutter of data lines between the map and the parallel coordinate system. Results: Integrating the above visualisation designs and data analysis methods, a geospatial multidimensional data visual analysis system based on dynamically ordered parallel coordinate axes is designed and implemented, providing convenient user interaction; tests on two data sets with distinct geospatial multidimensional attribute characteristics verify the effectiveness and practicality of the proposed visual analysis method. Conclusion: The proposed visual analysis method and tool help users quickly analyse the spatial distribution characteristics and association patterns of geospatial multidimensional attributes, providing an effective means of exploring geospatial multidimensional data.
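The mutual-information step described above can be sketched concretely: MI between the spatial cluster label and each (discretised) attribute scores how strongly that attribute relates to the spatial clustering, and the parallel-coordinate axes can then be ordered by that score. This is a generic MI computation, not the paper's code.

```python
# Minimal mutual information (in bits) between two discrete label sequences,
# e.g. spatial cluster labels vs. a discretised attribute, usable as an
# axis-ordering score for parallel coordinates.
import math
from collections import Counter

def mutual_information(xs, ys):
    """MI in bits between two equal-length sequences of discrete labels."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log2(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

clusters  = ["A", "A", "B", "B"]
attribute = ["hi", "hi", "lo", "lo"]   # perfectly aligned with the clusters
mi = mutual_information(clusters, attribute)
```

Attributes with high MI against the spatial clustering would be placed on adjacent (or map-nearest) axes.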

18.
Stochastic timed Petri nets are a useful tool in the performance analysis of concurrent systems such as parallel computers, communication networks and flexible manufacturing systems. In general, performance measures of stochastic timed Petri nets are difficult to obtain for practical problems due to their sizes. In this paper, we provide a method to efficiently compute upper and lower bounds for the throughputs and mean token numbers for a large class of stochastic timed Petri nets. Our approach is based on uniformization technique and linear programming.
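The uniformization technique underlying the bounds above converts a continuous-time Markov chain with generator Q into a discrete-time chain P = I + Q/Λ (with Λ at least the maximum exit rate) and expresses transient probabilities at time t as a Poisson-weighted sum of DTMC powers. A minimal two-state sketch, not tied to any particular Petri net:

```python
# Uniformization sketch: transient distribution of a small CTMC at time t,
# computed as sum_k Poisson(Lambda*t; k) * p0 * P^k with P = I + Q/Lambda.
import math

def uniformize(Q, p0, t, terms=60):
    n = len(Q)
    Lam = max(-Q[i][i] for i in range(n))      # uniformization rate
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / Lam for j in range(n)]
         for i in range(n)]
    pi = list(p0)                  # distribution after k DTMC steps
    result = [0.0] * n
    weight = math.exp(-Lam * t)    # Poisson probability of k = 0
    for k in range(terms):
        result = [result[j] + weight * pi[j] for j in range(n)]
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        weight *= Lam * t / (k + 1)
    return result

# Two-state chain: rate 1 from state 0 to 1 and back; start in state 0
p = uniformize([[-1.0, 1.0], [1.0, -1.0]], [1.0, 0.0], t=10.0)
```

For this symmetric chain the exact answer is p0(t) = (1 + e^(-2t))/2, which is essentially 0.5 at t = 10.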

19.
When designing products and environments, detailed data on body size and shape are seldom available for the specific user population. One way to mitigate this issue is to reweight available data such that they provide an accurate estimate of the target population of interest. This is done by assigning a statistical weight to each individual in the reference data, increasing or decreasing their influence on statistical models of the whole. This paper presents a new approach to reweighting these data. Instead of stratified sampling, the proposed method uses a clustering algorithm to identify relationships between the detailed and reference populations using their height, mass, and body mass index (BMI). The newly weighted data are shown to provide more accurate estimates than traditional approaches. The improved accuracy that accompanies this method provides designers with an alternative to data synthesis techniques as they seek appropriate data to guide their design practice.
Practitioner Summary: Design practice is best guided by data on body size and shape that accurately represents the target user population. This research presents an alternative to data synthesis (e.g. regression or proportionality constants) for adapting data from one population for use in modelling another.
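The basic reweighting mechanism described above can be sketched simply: each reference individual receives the weight (target share of its group) / (reference share of its group), so weighted group proportions match the target. The grouping here is a single hypothetical BMI category; the paper's method instead forms groups with a clustering algorithm over height, mass and BMI.

```python
# Sketch of population reweighting: weights make the reference sample's
# group proportions match a target population's proportions. The group
# labels below are illustrative, not from the paper's data.
from collections import Counter

def reweight(reference_groups, target_shares):
    n = len(reference_groups)
    ref_counts = Counter(reference_groups)
    # weight = target share / reference share, per group
    return [target_shares[g] / (ref_counts[g] / n) for g in reference_groups]

groups  = ["normal", "normal", "normal", "high"]          # reference: 75%/25%
weights = reweight(groups, {"normal": 0.5, "high": 0.5})  # target: 50%/50%
```

Any statistic (mean stature, a regression fit, etc.) computed with these weights then estimates the target population rather than the reference sample.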

20.
A method of adaptive artificial viscosity (AAV2D-3D) for the solution of the two- and three-dimensional equations of gas dynamics in Euler variables in a Cartesian coordinate system is considered. This paper continues works [1, 2]. The computational scheme is described in detail and the results of the test case are given.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号