Similar Documents (20 results)
1.
Scientific applications require large computing power, traditionally exceeding the amount available within the premises of a single institution. Clouds can therefore be used to provide extra resources whenever required. Achieving this vision, however, requires both policies defining when and how cloud resources are allocated to applications and a platform implementing not only these policies but also the whole software stack supporting management of applications and resources. Aneka is a cloud application platform capable of provisioning resources obtained from a variety of sources, including private and public clouds, clusters, grids, and desktop grids. In this paper, we present Aneka's deadline-driven provisioning mechanism, which is responsible for supporting quality of service (QoS)-aware execution of scientific applications in hybrid clouds composed of resources obtained from a variety of sources. Experimental results evaluating this mechanism show that Aneka is able to efficiently allocate resources from different sources in order to reduce application execution times.
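The abstract does not spell out the provisioning rule itself; below is a minimal sketch of one plausible deadline-driven policy, where the function name, the uniform-task assumption, and the numbers are illustrative only, not Aneka's actual implementation.

```python
# Hypothetical illustration of a deadline-driven provisioning rule; names and
# the formula are assumptions, not Aneka's actual mechanism.
import math

def extra_vms_needed(pending_tasks: int, avg_task_runtime: float,
                     seconds_to_deadline: float, local_workers: int) -> int:
    """Estimate how many cloud VMs must be leased so that the remaining
    tasks can finish before the deadline."""
    if seconds_to_deadline <= 0:
        raise ValueError("deadline already passed")
    # Workers required to process all pending tasks in the remaining time,
    # assuming independent tasks evenly divisible among workers.
    required = math.ceil(pending_tasks * avg_task_runtime / seconds_to_deadline)
    return max(0, required - local_workers)

# Example: 500 tasks of ~60 s each, 1 hour left, 4 local desktops available.
print(extra_vms_needed(500, 60.0, 3600.0, 4))  # -> 5 extra VMs
```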

2.
In traditional CAD (computer-aided design) systems, the manipulation of points and lines is often difficult because designers manipulate virtual objects through their vision system alone. Nowadays, designers can explore and manipulate virtual objects in haptic-enabled CAD systems using haptic devices. Haptic devices can present force feedback to pull or push the users' hands toward desirable targets. Of course, the intent is for the user to experience the same sensations in the virtual realm as they would in the real world. Thus, sub-threshold forces, which cannot be perceived by users, should be incorporated in the control of users' movements. As a result, our attention is directed to the effect of sub-threshold forces on the accuracy of movement in a haptic-enabled virtual reality (VR) system. In this study, our goal is to manipulate users' hands using controlled forces such that users cannot notice the forces. With this in mind, we have constructed a haptic-enabled virtual environment (VE) to carry out a multi-modal Fitts'-type task. In the task, subjects could see the position of the haptic probe in the VE while forces were applied to their hands. The accuracy of subjects was measured using a performance index as the intensity and direction of the forces changed. A psychophysical method was utilized to ensure that the forces were below the threshold of human force perception. Results indicate that accuracy is affected by the intensity and direction of sub-threshold forces even when users are allowed to control their actions through visual feedback.
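For readers unfamiliar with Fitts'-type tasks, the sketch below computes the standard performance index (throughput); the Shannon formulation is assumed, since the abstract does not state which variant the study used.

```python
# A minimal sketch of the Fitts'-law performance index used in pointing
# studies; the Shannon variant is assumed here.
import math

def index_of_difficulty(distance: float, width: float) -> float:
    """Shannon formulation: ID = log2(D/W + 1), in bits."""
    return math.log2(distance / width + 1)

def performance_index(distance: float, width: float, movement_time: float) -> float:
    """Throughput in bits per second: IP = ID / MT."""
    return index_of_difficulty(distance, width) / movement_time

# Example: a 0.20 m reach to a 0.02 m target completed in 0.8 s.
print(round(performance_index(0.20, 0.02, 0.8), 2))  # ~4.32 bits/s
```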

3.
Cloud computing has established itself as an interesting computational model that provides a wide range of resources such as storage, databases and computing power for several types of users. Recently, the concept of cloud computing was extended with the concept of federated clouds, where several resources from different cloud providers are inter-connected to perform a common action (e.g. execute a scientific workflow). Users can benefit from both single-provider and federated cloud environments to execute their scientific workflows, since they can get the necessary amount of resources on demand. Several of these workflows demand high performance and parallelism techniques, since many activities are data and computing intensive and can execute for hours, days or even weeks. Some Scientific Workflow Management Systems (SWfMS) already provide parallelism capabilities for scientific workflows in single-provider clouds. Most of them rely on creating a virtual cluster to execute the workflow in parallel, but they also rely on the user to estimate the number of virtual machines to allocate for this virtual cluster. Most SWfMS use this initial virtual cluster configuration made by the user for the entire workflow execution. Dimensioning the virtual cluster to execute the workflow in parallel is therefore a top-priority task, since an under- or over-dimensioned virtual cluster can degrade workflow performance or unnecessarily increase financial costs. This dimensioning is far from trivial in a single-provider cloud, and especially in federated clouds, due to the huge number of virtual machine types to choose from at each location and provider. In this article, we propose an approach named GraspCC-fed to produce an optimal (or near-optimal) estimate of the number of virtual machines to allocate for each workflow. GraspCC-fed extends a previously proposed GRASP-based heuristic for executing standalone applications to consider scientific workflows executed in both single-provider and federated clouds. For the experiments, GraspCC-fed was coupled to an adapted version of the SciCumulus workflow engine for federated clouds. We believe that GraspCC-fed can be an important decision-support tool for users, helping to determine an optimal virtual cluster configuration for parallel cloud-based scientific workflows.
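As a rough illustration of the GRASP pattern (greedy randomized construction plus local search) applied to cluster sizing, here is a toy sketch; the cost model and all parameters are invented and are not GraspCC-fed's actual formulation.

```python
# A minimal GRASP-style sketch for sizing a virtual cluster; the toy cost
# model, bounds, and parameter names are assumptions, not GraspCC-fed itself.
import random

def cost(n: int, workload_hours: float, price: float,
         boot_overhead: float = 0.1, time_weight: float = 0.05) -> float:
    """Toy objective: rental cost (with per-VM boot overhead) plus a
    weighted makespan term, so more VMs is not always better."""
    makespan = workload_hours / n
    return n * (makespan + boot_overhead) * price + time_weight * makespan

def grasp_vm_count(workload_hours: float, price: float, max_vms: int = 64,
                   iterations: int = 50, rcl_size: int = 5) -> int:
    best = 1
    best_cost = cost(best, workload_hours, price)
    ranked = sorted(range(1, max_vms + 1),
                    key=lambda n: cost(n, workload_hours, price))
    for _ in range(iterations):
        # Greedy randomized construction: sample from the restricted
        # candidate list (RCL) of the cheapest cluster sizes.
        candidate = random.choice(ranked[:rcl_size])
        # Local search: walk to better neighbouring sizes.
        improved = True
        while improved:
            improved = False
            for neighbour in (candidate - 1, candidate + 1):
                if (1 <= neighbour <= max_vms and
                        cost(neighbour, workload_hours, price)
                        < cost(candidate, workload_hours, price)):
                    candidate, improved = neighbour, True
        if cost(candidate, workload_hours, price) < best_cost:
            best, best_cost = candidate, cost(candidate, workload_hours, price)
    return best

print(grasp_vm_count(workload_hours=120.0, price=0.10))  # e.g. ~24 VMs
```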

4.
In recent years, scientific workflows have emerged as a fundamental abstraction for structuring and executing scientific experiments in computational environments. Scientific workflows are becoming increasingly complex and more demanding in terms of computational resources, thus requiring parallel techniques and high performance computing (HPC) environments. Meanwhile, clouds have emerged as a new paradigm where resources are virtualized and provided on demand. By using clouds, scientists have expanded beyond single parallel computers to hundreds or even thousands of virtual machines. Although the initial focus of clouds was to provide high throughput computing, clouds are already being used to provide an HPC environment where elastic resources can be instantiated on demand during the course of a scientific workflow. However, this model also raises many open, yet important, challenges, such as scheduling workflow activities. Scheduling parallel scientific workflows in the cloud is a very complex task, since many different criteria must be taken into account and the cloud's elasticity must be exploited to optimize workflow execution. In this paper, we introduce an adaptive scheduling heuristic for parallel execution of scientific workflows in the cloud that is based on three criteria: total execution time (makespan), reliability and financial cost. Besides scheduling workflow activities based on a 3-objective cost model, this approach also scales resources up and down according to the restrictions imposed by scientists before workflow execution. This tuning is based on provenance data captured and queried at runtime. We conducted a thorough validation of our approach using a real bioinformatics workflow. The experiments were performed in SciCumulus, a cloud workflow engine for managing scientific workflow execution.
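A 3-objective cost model of this kind is commonly realized as a weighted sum; the sketch below illustrates that idea with invented weights and fields, not the paper's exact model (in practice each criterion would be normalized before weighting).

```python
# A minimal sketch of a 3-objective scheduling score (makespan, reliability,
# financial cost); weights and fields are assumptions, not the paper's model.
from dataclasses import dataclass

@dataclass
class VmEstimate:
    runtime_hours: float   # predicted activity runtime on this VM
    failure_rate: float    # predicted probability the activity fails
    price_per_hour: float

def score(vm: VmEstimate, w_time=0.5, w_rel=0.3, w_cost=0.2) -> float:
    """Lower is better; a real model would normalize each criterion first."""
    return (w_time * vm.runtime_hours
            + w_rel * vm.failure_rate
            + w_cost * vm.runtime_hours * vm.price_per_hour)

candidates = [VmEstimate(2.0, 0.05, 0.10), VmEstimate(1.2, 0.20, 0.34)]
print(min(candidates, key=score))  # the faster, riskier VM wins here
```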

5.
The purpose of this study is twofold: first, to investigate user goals in social virtual worlds; second, to introduce a methodological alternative (i.e., a means-end chain approach) for analyzing user goals in cyberspaces. The data were acquired from a web survey and analyzed by means-end chain analysis (MECA), which produces users' goal structure as a hierarchical system of interrelated goals (Olson & Reynolds, 1983). The results show that people come to social virtual worlds to satisfy their social and hedonic needs and to escape from real-world constraints, as do virtual community members and virtual gamers; they also pursue unique activities, such as creating virtual objects and selling them. Moreover, by clarifying the relations among users' goals, MECA provides a richer explanation of user goals than prior research, which only offers separate user goals for cyberspace users without explaining the relationships among them.
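MECA aggregates individual "ladders" of goals into an implication matrix; the sketch below shows that core step on invented sample data (requires Python 3.10+ for itertools.pairwise).

```python
# A minimal sketch of aggregating means-end "ladders" into an implication
# matrix, the core data structure behind MECA; the sample ladders are invented.
from collections import Counter
from itertools import pairwise  # Python 3.10+

ladders = [
    ["create objects", "self-expression", "enjoyment"],
    ["create objects", "sell objects", "income"],
    ["chat", "make friends", "social belonging"],
]

implications = Counter()
for ladder in ladders:
    for source, target in pairwise(ladder):
        implications[(source, target)] += 1  # how often one goal leads to another

for (source, target), count in implications.items():
    print(f"{source} -> {target}: {count}")
```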

6.
To get more results or greater accuracy, computational scientists execute their applications on distributed computing platforms such as clusters, grids, and clouds. These platforms differ in terms of hardware and software resources as well as locality: some span multiple sites and multiple administrative domains, whereas others are limited to a single site/domain. As a consequence, in order to scale their applications up, scientists have to manage technical details for each target platform. From our point of view, this complexity should be hidden from the scientists, who, in most cases, would prefer to focus on their research rather than spend time dealing with platform configuration concerns. In this article, we advocate a system management framework that aims to automatically set up the whole run-time environment according to the applications' needs. The main difference with regard to usual approaches is that they generally focus only on the software layer, whereas we address both the hardware and the software expectations through a single system. For each application, scientists describe their requirements through the definition of a virtual platform (VP) and a virtual system environment (VSE). Relying on the VP/VSE definitions, the framework is in charge of (i) the configuration of the physical infrastructure to satisfy the VP requirements, (ii) the set-up of the VP, and (iii) the customization of the execution environment (VSE) upon the former VP. We propose a new formalism that the system can rely upon to successfully perform each of these three steps without burdening the user with the specifics of the configuration of physical resources and system management tools. This formalism leverages Goldberg's theory of recursive virtual machines (Goldberg, 1973 [6]) by introducing new concepts based on system virtualization (identity, partitioning, aggregation) and emulation (simple, abstraction). This enables the definition of complex VP/VSE configurations without making assumptions about the hardware and software resources. For each requirement, the system executes the corresponding operation with the appropriate management tool. As a proof of concept, we implemented a first prototype that currently interacts with several system management tools (e.g., OSCAR, the Grid'5000 toolkit, and XtreemOS) and that can be easily extended to integrate new resource brokers or cloud systems such as Nimbus, OpenNebula, or Eucalyptus.
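To make the VP/VSE split concrete, here is a hypothetical sketch of what such descriptions could look like in code; all field names and the three stubbed deployment steps are illustrative, not the paper's formalism.

```python
# A minimal, hypothetical sketch of a VP/VSE description; field names are
# invented for illustration and do not reflect the paper's actual formalism.
from dataclasses import dataclass, field

@dataclass
class VirtualPlatform:
    nodes: int
    cores_per_node: int
    memory_gb: int
    virtualization: str = "identity"   # identity | partitioning | aggregation

@dataclass
class VirtualSystemEnvironment:
    os_image: str
    packages: list[str] = field(default_factory=list)

def deploy(vp: VirtualPlatform, vse: VirtualSystemEnvironment) -> None:
    """The three steps from the abstract: configure the infrastructure,
    set up the VP, then customize the VSE on top of it (stubbed here)."""
    print(f"(i)   reserving {vp.nodes} nodes ({vp.virtualization})")
    print(f"(ii)  instantiating VP: {vp.cores_per_node} cores, {vp.memory_gb} GB/node")
    print(f"(iii) installing {vse.os_image} + {vse.packages}")

deploy(VirtualPlatform(8, 16, 64),
       VirtualSystemEnvironment("debian-12", ["openmpi", "xtreemos-client"]))
```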

7.
A number of studies have examined virtual worlds, which can facilitate knowledge sharing, education, and enjoyment, among other things. However, no study has provided an insightful research model for evaluating virtual worlds. This study suggests that users' identification with virtual communities and avatars plays a critical role in the construction of attractive virtual worlds. The proposed model measures the level of the user's identification with virtual communities, through which the user builds his or her trust in other community members. In addition, the study suggests that users' identification with avatars is an important element of their satisfaction with virtual worlds. The results indicate that users' identification with virtual communities as well as with avatars can enhance their efficacy and trust and thus facilitate their sustained use of virtual services. The results have important theoretical and practical implications.

8.
Scientists often need to execute experiments that demand high performance computing environments and parallel techniques. This is the scenario found in many bioinformatics experiments modeled as scientific workflows, such as phylogenetic and phylogenomic analyses. To execute these experiments, scientists have adopted virtual machines (VMs) instantiated in clouds. Estimating the number of VMs to instantiate is a crucial task, since under- or overestimation can hurt execution performance and inflate financial costs. Previously, the necessary number of VMs to execute bioinformatics workflows was estimated by a GRASP heuristic coupled to a cloud-based parallel Scientific Workflow Management System. Although this work was a step forward, the approach only provided a static dimensioning: if the characteristics of the environment change (processing capacity, network speed), this static dimensioning may no longer be suitable. It is therefore desirable to adjust the dimensioning at runtime. To achieve this, we developed a novel framework for monitoring and dynamically dimensioning resources during the execution of parallel scientific workflows in clouds, called the Dynamic Dimensioning of Cloud Computing Framework (DDC-F). We evaluated DDC-F in real executions of bioinformatics workflows. Experiments showed that DDC-F is able to efficiently calculate the number of VMs necessary to execute bioinformatics workflows of Comparative Genomics (CG), while also reducing financial costs compared with other works in the related literature.
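One plausible shape for runtime re-dimensioning is a feedback rule that compares observed throughput with the rate still required; the sketch below is such a rule with invented names and thresholds, not DDC-F's actual algorithm.

```python
# A minimal sketch of runtime re-dimensioning: grow or shrink the virtual
# cluster when observed throughput drifts from the plan. Names and thresholds
# are assumptions, not DDC-F's algorithm.
import math

def adjust_cluster(current_vms: int, tasks_remaining: int,
                   observed_rate: float, seconds_left: float,
                   min_vms: int = 1, max_vms: int = 128) -> int:
    """observed_rate: tasks finished per second per VM, measured at runtime."""
    required_rate = tasks_remaining / max(seconds_left, 1.0)
    needed = math.ceil(required_rate / max(observed_rate, 1e-9))
    target = max(min_vms, min(max_vms, needed))
    # Small hysteresis so the cluster is not resized on every minor drift.
    return current_vms if abs(target - current_vms) <= 1 else target

print(adjust_cluster(current_vms=10, tasks_remaining=900,
                     observed_rate=0.02, seconds_left=1800))  # -> 25
```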

9.
Despite the growth and commercial potential of virtual worlds, relatively little is known about what drives users' motivations to engage in virtual worlds. This paper proposes and empirically tests a conceptual model aimed at filling this research gap. Given the multipurpose nature of virtual worlds, the model integrates extrinsic and intrinsic motivation as behavioral determinants. Drawing on the literature on information system value and motivation theory, four important system-specific virtual world characteristics (economic value, ease of use, escapism, visual attractiveness) are added as motivational drivers. The hypotheses were tested using structural equation modeling on a sample of 846 users of the virtual world Second Life. The results support the model; they confirm the role of extrinsic and intrinsic motivation as behavioral determinants and show how and to what extent the four system-specific elements function as a motivational basis. Implications for research and practice are discussed.

10.
While prior research has investigated the main effects of external factors on user perceptions of a new IT, little work has examined the interaction effects of external factors on user perceptions. In a longitudinal experimental study, we examined the effects of the quality of persuasive arguments, user training, and first-hand use on user perceptions of a new technology over time. We found that the effect of argument quality on users' perceived ease of use was greater when users had no training; however, we did not find a corresponding effect on perceived usefulness. We also found that first-hand use changed users' perceived usefulness more over time when users received high-quality arguments or when they had no training. While first-hand use changed users' perceived ease of use more when users received high-quality arguments, it did not change perceived ease of use differently whether or not users had received prior training.

11.
Cloud computing is emerging as an increasingly popular computing paradigm, allowing dynamic scaling of resources available to users as needed. This requires a highly accurate demand prediction and resource allocation methodology that can provision resources in advance, thereby minimizing the virtual machine downtime required for resource provisioning. In this paper, we present a dynamic resource demand prediction and allocation framework for multi-tenant service clouds. The novel contribution of our proposed framework is that it classifies service tenants according to whether their resource requirements will increase; based on this classification, our framework prioritizes prediction for those service tenants whose resource demand will increase, thereby minimizing the time needed for prediction. Furthermore, our approach assigns the service tenants to matched virtual machines and allocates the virtual machines to physical host machines using a best-fit heuristic. Performance results demonstrate how our best-fit heuristic can efficiently allocate virtual machines to hosts so that the hosts are utilized to their fullest capacity.
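Best-fit is a standard bin-packing heuristic; the sketch below shows it for a single CPU dimension, whereas the paper's framework would consider fuller resource profiles.

```python
# A minimal best-fit sketch for packing VMs onto hosts; capacities are a
# single CPU dimension for brevity, not the paper's full model.
def best_fit(vm_demands: list[int], host_capacity: int) -> list[list[int]]:
    """Place each VM on the host with the least remaining capacity that
    still fits it; open a new host when none fits."""
    hosts: list[int] = []            # remaining capacity per host
    placement: list[list[int]] = []  # VM demands placed on each host
    for demand in vm_demands:
        candidates = [i for i, free in enumerate(hosts) if free >= demand]
        if candidates:
            i = min(candidates, key=lambda i: hosts[i])  # tightest fit
            hosts[i] -= demand
            placement[i].append(demand)
        else:
            hosts.append(host_capacity - demand)
            placement.append([demand])
    return placement

print(best_fit([8, 4, 2, 6, 3, 5], host_capacity=10))  # [[8, 2], [4, 6], [3, 5]]
```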

12.
Traditional hypermedia applications present the same content and provide identical navigational support to all users. Adaptive Hypermedia Systems (AHS) make it possible to construct presentations personalized to each user, according to identified preferences and needs. In this paper we present an alternative approach to educational AHS in which a virtual character personalizes the interaction with the user through a recommender system. The character has natural language communication abilities; it can learn students' profiles and use this knowledge to recommend appropriate contents and activities. Through its interaction with the user, the character collects and organizes information about students in order to identify appropriate content suggestions. The recommender system employs a knowledge representation scheme that is easy to understand and to modify, enabling teachers/tutors to explore the types of recommendations being made and to appreciate why they are made. An experiment with computer science students was carried out to validate the proposed approach. The results showed that the presentation of personalized links through a virtual character had a positive impact on the users' perception of the system as a learning tool. The combination of a virtual character with a recommender system proved to be a good alternative for delivering personalized contents without making constant changes to the main user interface. This approach provides mechanisms to guide users through paths of study followed by students with similar profiles, without violating the human-computer interaction principle of perceived stability.
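A simple way to "guide users through paths of study followed by students with similar profiles" is nearest-profile recommendation; the sketch below illustrates that idea with invented profiles and is not the paper's knowledge representation scheme.

```python
# A minimal sketch of recommending content from similar students' paths;
# the profile features and sample data are invented for illustration.
import math

profiles = {
    "ana":   {"algorithms": 0.9, "databases": 0.2, "graphics": 0.4},
    "bruno": {"algorithms": 0.8, "databases": 0.3, "graphics": 0.5},
    "carla": {"algorithms": 0.1, "databases": 0.9, "graphics": 0.2},
}
completed = {
    "ana": {"sorting", "graphs"},
    "bruno": {"sorting", "dynamic-programming"},
    "carla": {"sql-joins"},
}

def cosine(a: dict, b: dict) -> float:
    dot = sum(a[k] * b[k] for k in a)
    return dot / (math.sqrt(sum(v * v for v in a.values()))
                  * math.sqrt(sum(v * v for v in b.values())))

def recommend(student: str) -> set[str]:
    """Suggest activities completed by the most similar other student."""
    peer = max((p for p in profiles if p != student),
               key=lambda p: cosine(profiles[student], profiles[p]))
    return completed[peer] - completed[student]

print(recommend("ana"))  # bruno is closest -> {"dynamic-programming"}
```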

13.
Meta-schedulers map jobs to computational resources that are part of a Grid, such as clusters, which in turn have their own local job schedulers. Existing Grid meta-schedulers either target system-centric metrics, such as utilisation and throughput, or prioritise jobs based on utility metrics provided by the users. The system-centric approach gives less importance to users' individual utility, while the user-centric approach may have adverse effects such as poor system performance and unfair treatment of users. Therefore, this paper proposes a novel meta-scheduler, based on the well-known double auction mechanism, that aims to satisfy users' service requirements as well as to ensure balanced utilisation of resources across a Grid. We have designed valuation metrics that commodify both the complex resource requirements of users and the capabilities of available computational resources. Through simulation using real traces, we compare our scheduling mechanism with other common mechanisms widely used by both existing market-based and traditional meta-schedulers. The results show that our meta-scheduling mechanism not only satisfies up to 15% more user requirements than the others, but also improves system utilisation through load balancing.
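The matching core of a double auction can be stated compactly; the sketch below uses midpoint clearing, one common convention, and is not necessarily the paper's exact mechanism.

```python
# A minimal sketch of the matching step in a double auction; prices and the
# clearing rule are simplified, not the paper's exact mechanism.
def double_auction(bids: list[float], asks: list[float]) -> list[tuple[float, float]]:
    """Match highest bids with lowest asks while the bid covers the ask.
    Returns (bid, clearing_price) pairs; each trade clears at the midpoint."""
    bids = sorted(bids, reverse=True)   # users' willingness to pay
    asks = sorted(asks)                 # resource providers' reserve prices
    trades = []
    for bid, ask in zip(bids, asks):
        if bid < ask:
            break  # no further profitable matches
        trades.append((bid, (bid + ask) / 2))
    return trades

# Example: 4 job bids and 4 cluster asks, in price per CPU-hour.
print(double_auction([0.9, 0.7, 0.5, 0.2], [0.3, 0.4, 0.6, 0.8]))
# -> [(0.9, 0.6), (0.7, 0.55)]
```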

14.
The Butler system is an e-health platform designed to improve the elderly population's quality of life. The Butler system has three applications: diagnostic, therapeutic and playful. The objective of this work is to present the influence of the use of the platform on elderly users' mood states and its degree of acceptance. These measures were assessed through visual analogue scales before and after platform use, and MANOVA analyses were performed on the outcome measures. Seventeen users (aged 58 to 79) completed four sessions with the system; they could choose which technological resources they used in each session. The results showed that after using the system, the participants' positive emotions increased and their negative ones decreased. In addition, they reported high levels of satisfaction and low levels of subjective difficulty, and would recommend Butler to other users. The results therefore indicate that Butler might be an effective system for improving emotional states in the elderly population.

15.
After analyzing the shortcomings of existing outline extraction methods, this paper proposes an automatic building outline extraction method based on a virtual grid. The method generates a virtual grid from the building point cloud and performs binary filling; boundary grid cells are then marked and traced using neighborhood analysis. To avoid boundary-tracing errors, a direction-based single-edge cell suppression method and a distance-based connectivity adjustment method are designed to improve the quality of the extracted result. Based on the grid tracing result, real outline points are extracted from the original building point cloud to preserve the original outline shape. Finally, random sample consensus (RANSAC) estimation and least-squares fitting are applied to regularize the outline, achieving automatic extraction of building outlines. Experimental results show that the method can quickly extract outlines from building point clouds and offers a feasible solution for automatic building outline extraction.
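The virtual-grid steps (binary filling and boundary-cell marking) can be illustrated compactly; the sketch below uses a 4-neighborhood test and invented sample points, and omits the paper's suppression, adjustment, and regularization stages.

```python
# A minimal sketch of the virtual-grid step: rasterize 2D point-cloud
# footprints into a binary grid and mark boundary cells via 4-neighborhood
# analysis. Grid size and sample points are invented for illustration.
import numpy as np

def binarize(points: np.ndarray, cell: float) -> np.ndarray:
    """Fill a binary occupancy grid from (x, y) points."""
    mins = points.min(axis=0)
    idx = ((points - mins) / cell).astype(int)
    grid = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    grid[idx[:, 0], idx[:, 1]] = True
    return grid

def boundary_cells(grid: np.ndarray) -> np.ndarray:
    """A filled cell is a boundary cell if any 4-neighbour is empty."""
    padded = np.pad(grid, 1, constant_values=False)
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1]
                & padded[1:-1, :-2] & padded[1:-1, 2:])
    return grid & ~interior

pts = np.array([[x / 10, y / 10] for x in range(20) for y in range(12)])
grid = binarize(pts, cell=0.1)
print(boundary_cells(grid).sum(), "boundary cells of", grid.sum())  # 60 of 240
```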

16.
17.
18.
Admission control for smart grid service demands with elastic response times

This paper considers admission control for multiple types of service demands in a smart grid, exploiting the elasticity of response times to smooth fluctuations in service load and minimize the long-run average cost of grid operation. To capture the stochastic nature of service demands and user behavior, a system model is built as a continuous-time Markov control process. Combining sample-path-based estimation of the performance potential, a simulation-based policy iteration algorithm is proposed, which effectively mitigates the curse of dimensionality caused by the system's large state space and exhibits fast convergence and good practical performance. Simulation results verify the effectiveness of the proposed method.
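The flavor of simulation-based policy iteration can be conveyed with a toy admission-control queue; the model below (states, rates, costs) is entirely invented and much cruder than the paper's performance-potential estimation.

```python
# A minimal, hypothetical sketch of simulation-based policy improvement for
# an admission-control problem; states, probabilities, and costs are toy
# values, not the paper's smart-grid model.
import random

random.seed(0)
STATES = range(6)           # backlog of admitted requests (capacity 5)
ARRIVE, SERVE = 0.6, 0.5    # arrival / service probabilities per step

def step(state: int, admit: bool) -> tuple[int, float]:
    """One simulated transition; cost = backlog plus a rejection penalty."""
    cost = float(state)
    if random.random() < ARRIVE:                 # a request arrives
        if admit and state < max(STATES):
            state += 1
        else:
            cost += 2.0                          # penalty for rejecting it
    if state > 0 and random.random() < SERVE:    # one request completes
        state -= 1
    return state, cost

def avg_cost(policy: list[bool], horizon: int = 20_000) -> float:
    """Estimate long-run average cost along a simulated sample path."""
    s, total = 0, 0.0
    for _ in range(horizon):
        s, c = step(s, policy[s])
        total += c
    return total / horizon

def policy_iteration(rounds: int = 5) -> list[bool]:
    policy = [True] * len(STATES)          # start by admitting everywhere
    for _ in range(rounds):
        for s in STATES:                   # greedy per-state improvement
            trial = policy.copy()
            trial[s] = not policy[s]
            if avg_cost(trial) < avg_cost(policy):  # noisy comparison
                policy = trial
    return policy

print(policy_iteration())  # e.g. admit at low backlog, reject near capacity
```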

19.
Many scientific workflows are data intensive: large volumes of intermediate datasets are generated during their execution. Some valuable intermediate datasets need to be stored for sharing or reuse. Traditionally, they are selectively stored according to the system storage capacity, determined manually. As doing science on clouds has become popular, more intermediate datasets in scientific cloud workflows can be stored by different storage strategies based on a pay-as-you-go model. In this paper, we build an intermediate data dependency graph (IDG) from the data provenance in scientific workflows. With the IDG, deleted intermediate datasets can be regenerated, and as such we develop a novel algorithm that can find a minimum-cost storage strategy for the intermediate datasets in scientific cloud workflow systems. The strategy achieves the best trade-off between computation cost and storage cost by automatically storing the most appropriate intermediate datasets in cloud storage. This strategy can be utilised on demand as a minimum-cost benchmark for all other intermediate dataset storage strategies in the cloud. We utilise Amazon's cloud cost model and apply the algorithm to both general random and specific astrophysics pulsar-searching scientific workflows for evaluation. The results show that the benchmark effectively demonstrates cost effectiveness relative to other representative storage strategies.
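The underlying store-versus-regenerate trade-off can be sketched per dataset; the prices, fields, and greedy rule below are invented, and the paper's algorithm reasons over the whole IDG rather than dataset by dataset.

```python
# A minimal sketch of the store-vs-regenerate trade-off on an intermediate
# data dependency graph (IDG); datasets, rates, and the greedy rule are
# invented for illustration, not the paper's minimum-cost algorithm.
from dataclasses import dataclass

@dataclass
class Dataset:
    name: str
    size_gb: float
    regen_cpu_hours: float   # compute needed to rebuild it from its parents
    uses_per_month: int      # how often the dataset is accessed

STORAGE_PER_GB_MONTH = 0.023   # assumed storage price
CPU_PER_HOUR = 0.10            # assumed compute price

def monthly_cost(d: Dataset, stored: bool) -> float:
    if stored:
        return d.size_gb * STORAGE_PER_GB_MONTH
    return d.uses_per_month * d.regen_cpu_hours * CPU_PER_HOUR

def plan(datasets: list[Dataset]) -> dict[str, str]:
    """Greedy per-dataset choice; the real algorithm reasons over the whole
    IDG, since regenerating a dataset may require its deleted ancestors."""
    return {d.name: ("store" if monthly_cost(d, True) <= monthly_cost(d, False)
                     else "regenerate")
            for d in datasets}

print(plan([Dataset("de-dispersed", 200.0, 8.0, 1),
            Dataset("candidates", 1.0, 20.0, 4)]))
# -> {'de-dispersed': 'regenerate', 'candidates': 'store'}
```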

20.
In current virtual desktop deployments, the tension between end users' ever-growing demand for 3D graphics processing and the limited GPU capability of virtual machines has become increasingly apparent. To address this problem, typical GPU virtualization approaches were studied. Building on an analysis of these virtualization techniques, an improved scheme combining exclusive device assignment (pass-through) and API remoting is introduced. A hypervisor creates two kinds of virtual machines: one parent virtual machine (GVM) and multiple child virtual machines (DVMs). The GVM has exclusive access to the physical GPU, while the DVMs have no direct interaction with it. The two kinds of VMs share GPU memory and an instruction channel: GPU call instructions issued in a DVM are forwarded to the GVM, which invokes the physical GPU on their behalf and writes the results back to the shared memory space for presentation to the user. Finally, the improved GPU virtualization method is compared with typical virtualization methods, its advantages and disadvantages are summarized, and future research directions are outlined.
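The DVM-to-GVM forwarding pattern can be mimicked with an in-process queue standing in for the shared instruction channel; this is a conceptual sketch only, since a real implementation lives in a hypervisor with shared GPU memory.

```python
# A minimal, conceptual sketch of the DVM -> GVM instruction channel, using
# an in-process queue and dict in place of shared memory; it illustrates the
# forwarding pattern only, not an actual hypervisor implementation.
import queue
import threading

commands: queue.Queue = queue.Queue()   # stands in for the shared channel
results: dict[int, str] = {}            # stands in for shared result memory

def gvm_worker() -> None:
    """The GVM drains the channel and executes calls on the 'physical GPU'."""
    while True:
        call_id, op, args = commands.get()
        if op == "shutdown":
            break
        results[call_id] = f"executed {op}{args} on physical GPU"
        commands.task_done()

def dvm_call(call_id: int, op: str, *args) -> None:
    """A DVM enqueues a GPU call instead of touching the device directly."""
    commands.put((call_id, op, args))

threading.Thread(target=gvm_worker, daemon=True).start()
dvm_call(1, "glDrawArrays", "TRIANGLES", 0, 3)
dvm_call(2, "glClear", "COLOR_BUFFER_BIT")
commands.join()                          # wait until the GVM processed both
print(results)
commands.put((0, "shutdown", ()))        # let the worker exit cleanly
```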
