Similar Documents
18 similar documents found.
1.
Fog computing, or a fog network, is a decentralized network placed between data sources and the cloud to minimize network latency and thus support timely service delivery for Internet of Things (IoT) applications. However, placing the computational tasks of IoT applications on fog infrastructure is challenging. The state of the art focuses on application placement driven by quality of service (QoS) and quality of experience (QoE). In this article, we design a hierarchical-fuzzy, QoE-aware application placement strategy for mapping IoT applications to compatible instances in the fog network. The proposed method considers user expectation parameters for each application together with the metrics of available fog instances, and assigns application priorities using hierarchical fuzzy logic. It then uses the Hungarian assignment algorithm, in its maximization form, to map applications to compatible instances. Simulation results show that the proposed policy outperforms existing baseline algorithms in terms of resource gain (RG), processing time reduction ratio (PTRR), and network relaxation ratio. With 10 applications in the fog network, the proposed method improves RG by 70.00%, 22.44%, and 37.83%, and PTRR by 28.46%, 37.5%, and 23.07%, compared with the QoE-aware, randomized, and FIFO algorithms, respectively.
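The assignment step lends itself to a compact illustration. Below is a minimal sketch of mapping applications to fog instances with the Hungarian algorithm in its maximization form, using SciPy's linear_sum_assignment; the compatibility-score matrix and its dimensions are hypothetical, not taken from the paper.

```python
# Sketch: QoE-aware application-to-instance mapping via the Hungarian algorithm.
# The score matrix below is illustrative; the paper derives scores from
# hierarchical fuzzy priorities, which are not reproduced here.
import numpy as np
from scipy.optimize import linear_sum_assignment

# score[i][j]: hypothetical compatibility of application i with fog instance j
score = np.array([
    [0.9, 0.4, 0.3],
    [0.2, 0.8, 0.5],
    [0.6, 0.3, 0.7],
])

# linear_sum_assignment minimizes cost by default; maximize=True gives the
# maximization variant used for mapping applications to compatible instances.
apps, instances = linear_sum_assignment(score, maximize=True)

for a, inst in zip(apps, instances):
    print(f"application {a} -> fog instance {inst} (score {score[a, inst]:.2f})")
```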

2.
3.
A Web service quality measurement method based on the superposition of uncertainty factors
To overcome the subjectivity of existing service quality measurement methods, and to reflect both the uncertainty inherent in service invocation and the intrinsic relationships among the influencing factors, three factors are defined for atomic services: invocation rate, success rate, and efficiency. The value of each factor is computed statistically from the historical records of atomic service invocations. On this basis, an atomic service quality measurement method based on the superposition of uncertainty factors is proposed, together with a quality measurement criterion for coarse-grained Web services based on the average quality of their constituent atomic services, and a method for ranking service priority. Performance analysis verifies the efficiency, scalability, and feasibility of the proposed method.
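A minimal sketch of how such per-factor statistics might be computed from an invocation log follows; the record format, field names, and the equal-weight averaging used to superpose the factors are illustrative assumptions, not the paper's exact formulas.

```python
# Sketch: deriving invocation rate, success rate, and efficiency for an atomic
# service from its invocation history, then superposing them into one score.
# Field names and the equal-weight superposition are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Invocation:
    succeeded: bool
    response_time: float  # seconds

def service_quality(history: list[Invocation],
                    attempts: int,
                    max_acceptable_time: float = 2.0) -> float:
    if attempts == 0:
        return 0.0
    invocation_rate = len(history) / attempts          # calls that reached the service
    success_rate = sum(i.succeeded for i in history) / max(len(history), 1)
    # Efficiency: fraction of the acceptable-latency budget left unused, on average.
    avg_time = sum(i.response_time for i in history) / max(len(history), 1)
    efficiency = max(0.0, 1.0 - avg_time / max_acceptable_time)
    # Equal-weight superposition of the three uncertainty factors (assumed).
    return (invocation_rate + success_rate + efficiency) / 3.0

history = [Invocation(True, 0.4), Invocation(True, 0.9), Invocation(False, 1.8)]
print(f"quality = {service_quality(history, attempts=4):.3f}")
```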

4.
The advancement of wireless networks offers mobile users a diversity of connectivity options, but choosing the best connection should take into account classic QoS aspects and, with the growth of multimedia applications, QoE metrics as well. Energy efficiency is another important selection criterion, reducing device battery consumption and CO2 emissions (green networking). This paper validates a Markovian policy for distributing user load across femtocell/macrocell networks that considers QoS/QoE and energy consumption while providing quality for multimedia applications. Simulation results demonstrate the benefits of the approach.

5.
For many years, video content delivery has established itself as the killer application. Much effort is being devoted to improving QoE in adaptive streaming, in the quest for optimized methods and metrics that enable QoE-driven adaptation. Questions such as whether adaptive systems based on Scalable Video Coding improve subjective quality, and in which situations or to what degree, are still open issues. Tolerance and indifference thresholds for each type of content, condition, or viewer category with regard to adaptive systems are critical success factors that remain unresolved. We compare the performance of a complete adaptive system with the traditional, i.e. non-adaptive, approach in subjective terms. Results from surveying 75 participants show that adaptation improves QoE under most of the evaluated conditions. Tolerance thresholds for triggering adaptation events have been identified. Users accustomed to Internet video are more critical than users who only watch TV, and the under-35 subset of the surveyed population is generally more satisfied with the adaptive system than the older subset.

6.
The term user experience (UX) encompasses the concepts of usability and affective engineering, yet UX has not been clearly defined. In this study, a literature survey, user interviews, and indirect observation were conducted to develop definitions of UX and its elements. The literature survey investigated 127 articles considered helpful for defining the concept of UX. In-depth interviews targeted 14 hands-on workers in the Korean mobile phone industry. Indirect observation captured the daily experiences of eight end-users with mobile phones. Through these three approaches, the study collected views on UX from academia, industry, and end-users. As a result, this article proposes definitions of UX and its elements: usability, affect, and user value. These results are expected to help in designing products or services with greater levels of UX.

7.
This paper presents a model of (en)action from a conceptual and theoretical point of view. The model is used to provide a solid basis for overcoming the complexity of designing virtual environments for learning (VEL). It provides common ground for trans-disciplinary collaborations in which embodiment is the cornerstone of the project. Where virtual environments are concerned, both computer scientists and educationalists have to deal with the learner/user's body; the model therefore provides tools with which to approach both human actions and learning processes within a threefold framework. It is based mainly on neuroscientific research, including enaction and the neurophysiology of action.

8.
Existing Web service quality models mainly consider generic quality-of-service attributes and ignore the role that service-specific attributes play in evaluation. To address this, a new Web service quality model is proposed that introduces a domain-specific quality attribute and consists of three sub-models. The model builds a service quality tree and a target object tree; the target objects are quantified through a quantification-metric sub-model, producing a corresponding service weight tree. The model can be applied both to individual Web service requests and to requests composed of multiple services. An application example verifies the correctness and usability of the model.
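As an illustration only, the sketch below scores a service against a two-level quality tree by propagating weighted leaf scores up to the root; the attribute names, weights, and scores are hypothetical, and the paper's actual quantification sub-model is not reproduced.

```python
# Sketch: scoring a Web service with a weighted quality tree.
# Attribute names, weights, and leaf scores are illustrative assumptions.

def tree_score(node: dict) -> float:
    """Weighted average of children, or the leaf's own score."""
    if "children" in node:
        total_w = sum(c["weight"] for c in node["children"])
        return sum(c["weight"] * tree_score(c) for c in node["children"]) / total_w
    return node["score"]

quality_tree = {
    "name": "service quality",
    "children": [
        {"name": "generic QoS", "weight": 0.6, "children": [
            {"name": "response time", "weight": 0.5, "score": 0.8},
            {"name": "availability", "weight": 0.5, "score": 0.9},
        ]},
        # Domain-specific attribute, the model's key addition.
        {"name": "domain-specific attribute", "weight": 0.4, "score": 0.7},
    ],
}

print(f"overall quality = {tree_score(quality_tree):.3f}")
```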

9.
This paper first reviews current ergonomics design approaches to delivering digital solutions that achieve a unified experience, from the perspectives of interaction and business process design. It then analyses the opportunities that new technologies may bring for enhancing current ergonomics design approaches, from the perspectives of integration and intelligence design. To address the challenges in today's ergonomics practices in delivering digital solutions, an interaction, process, integration and intelligence (IPII) design approach is proposed. A case study that implemented the IPII approach is presented. The quantitative data gathered from the case study demonstrate that the IPII approach achieves significant advantages in reaching the goal of a unified experience and operational benefits for delivering digital solutions. The IPII approach also demonstrates improvements over today's ergonomics design approaches to digital solutions, such as user-centred design. Finally, the paper highlights the contributions of the IPII approach for future ergonomics practices in delivering digital solutions.

Practitioner Summary: In addition to the interaction design for the UI of digital solutions, as is the case in current typical ergonomics practice, the IPII adds three additional design components: process, integration and intelligence design. The case study demonstrates the advantages of the IPII, providing an enhanced approach for designing digital solutions.

Abbreviations: IPII: interaction, process, integration and intelligence; IEA: International Ergonomics Association; HFE: human factors/ergonomics; HCD: human-centred design; UX: user experience; UI: user interface; ISO: International Organization for Standardization; UCD: user-centred design; ERP: enterprise resource planning; E2E experience: end-to-end experience; UXD: user experience design; AI: artificial intelligence; ML: machine learning; HCI: human-computer interaction; IaaS: infrastructure as a service; PaaS: platform as a service; SaaS: software as a service; CRM: customer relationship management; SCM: supply chain management; HCM: human capability management; BI: business intelligence; BOMA: Bill of Materials Application; POC: proof of concept; TCM: transition change management; SMEs: subject matter experts; PMO: program management office; UAT: user acceptance test; iBPMS: intelligent business process management suite


10.
This article reviews research work on the set of experience knowledge structure (SOEKS) and decisional DNA (DDNA): work done in the past, work ongoing, and work planned for the future. First, the knowledge representation technique of SOEKS-DDNA is discussed, and the past research related to it is organized in chronological order. This part of the work reviews SOEKS-DDNA, its application in different domains, its various implementation platforms, and its benefits and limitations. The second part of this article outlines the SOEKS-DDNA research endeavors we are currently carrying out, and the last part is a sneak peek into our planned future work.

11.
To identify the most commonly used external factors of the Technology Acceptance Model (TAM) in the context of e-learning adoption, a quantitative meta-analysis of 107 papers covering the last ten years was performed. The results show that Self-Efficacy, Subjective Norm, Enjoyment, Computer Anxiety, and Experience are the most commonly used external factors of TAM. The effects of these external factors on TAM's two main constructs, Perceived Ease of Use (PEOU) and Perceived Usefulness (PU), have been studied across a range of e-learning technology types and e-learning user types. The results show that the best predictor of students' PEOU of e-learning systems is Self-Efficacy (β = 0.352), followed by Enjoyment (β = 0.341), Experience (β = 0.221), Computer Anxiety (β = −0.199), and Subjective Norm (β = 0.195). The best predictor of students' PU of e-learning systems is Enjoyment (β = 0.452), followed by Subjective Norm (β = 0.301), Self-Efficacy (β = 0.174), and Experience (β = 0.169). Using these external factors and their effect sizes on PEOU and PU, this study proposes a General Extended Technology Acceptance Model for E-Learning (GETAMEL).
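To make the reported effect sizes concrete, the sketch below uses them as coefficients of a linear predictor for PEOU and PU, as a back-of-the-envelope reading of the meta-analysis; treating the standardized βs as directly composable weights over standardized factor scores is a simplifying assumption, not the paper's model.

```python
# Sketch: combining the reported standardized betas into linear predictors of
# PEOU and PU. Using them this way over standardized factor scores is a
# simplifying assumption for illustration.
PEOU_BETAS = {"self_efficacy": 0.352, "enjoyment": 0.341, "experience": 0.221,
              "computer_anxiety": -0.199, "subjective_norm": 0.195}
PU_BETAS = {"enjoyment": 0.452, "subjective_norm": 0.301,
            "self_efficacy": 0.174, "experience": 0.169}

def predict(betas: dict[str, float], z_scores: dict[str, float]) -> float:
    """Weighted sum of standardized factor scores."""
    return sum(beta * z_scores.get(name, 0.0) for name, beta in betas.items())

# Hypothetical student: confident, enjoys e-learning, somewhat anxious.
student = {"self_efficacy": 1.0, "enjoyment": 0.5,
           "experience": 0.0, "computer_anxiety": 0.8, "subjective_norm": 0.2}

print(f"predicted PEOU (z): {predict(PEOU_BETAS, student):+.3f}")
print(f"predicted PU   (z): {predict(PU_BETAS, student):+.3f}")
```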

12.
Researchers have developed visual discrimination models (VDMs) that can predict a human observer's ability to detect a target object superposed on an image. These models incorporate sophisticated knowledge of the properties of the human visual system. In the predictive approach, termed conventional VDM usage, two input images, with and without a target, are analyzed by an algorithm that calculates a just-noticeable-difference (JND) index, which is taken as a measure of the detectability of the target. A new way of using the VDM is described, termed the channelized VDM, which involves finding the linear combination of the VDM-generated channels (not used in conventional VDM analysis) that has optimal classification ability between normal and abnormal images. The classification ability can be measured using receiver operating characteristic (ROC) or two-alternative forced-choice (2AFC) experiments, and in special cases it can also be predicted by model-observer methods based on signal detection theory (SDT). In this study, simulated background and nodule-containing regions were used to validate the new method. The channelized VDM predictions were found to be in excellent qualitative agreement with human-observer-validated SDT predictions. Either VDM method (conventional or channelized) is potentially applicable to soft-copy display optimization. An advantage of any VDM-based approach is that complex effects, such as visual masking, are automatically accounted for; such effects are usually not included in SDT-based methods.
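The channelized step, finding the linear combination of channel outputs that best separates normal from abnormal images, can be sketched with a Fisher linear discriminant; the channel responses below are synthetic, and the paper's specific VDM channels are not reproduced.

```python
# Sketch: optimal linear combination of channel outputs for two-class
# separation, via a Fisher linear discriminant. Channel data are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n, channels = 200, 6
normal = rng.normal(0.0, 1.0, (n, channels))
abnormal = rng.normal(0.3, 1.0, (n, channels))   # small mean shift per channel

# Fisher discriminant: w proportional to S_w^{-1} (mu1 - mu0),
# with pooled within-class scatter S_w.
mu0, mu1 = normal.mean(axis=0), abnormal.mean(axis=0)
s_w = np.cov(normal, rowvar=False) + np.cov(abnormal, rowvar=False)
w = np.linalg.solve(s_w, mu1 - mu0)

# Score each image by projecting its channel vector onto w.
scores0, scores1 = normal @ w, abnormal @ w

# Percent correct in a paired 2AFC reading: P(score_abnormal > score_normal).
pc_2afc = (scores1[:, None] > scores0[None, :]).mean()
print(f"2AFC percent correct: {pc_2afc:.3f}")
```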

13.
The paper describes the results of a longitudinal study of developments in software product and process quality improvement within a Hungarian software company, IQSOFT Ltd. The company has been active in this area since 1993, trying to build, introduce, and maintain an efficiently working quality management system that, for example, fulfils the ISO 9001 requirements, allows steady software process improvement, and at the same time conforms to the company's own needs. Over the last eight years, five phases could be distinguished. Each phase is described briefly, following the same structure: basic starting points, key problem areas, literature consulted, activities and design executed, and reflections on what happened and why. The lessons resulting from the analysis of this case are formulated as guidelines, which we feel are applicable to any low-maturity software development organisation embarking on a product or process quality improvement endeavour. The guidelines are developed around a framework containing the basic issues of software production (project management, technical processes, and products). They advocate a careful step-by-step development of definitions, quality characteristics, and metrics related to these objects while developing and introducing the associated process.

14.
Although quality requirements (QRs) have become a major driver in today's software development, very few real-world examples in the literature demonstrate how to meet them. This paper presents such an example. Specifically, it describes the design of a partition-based distributed stock trading service system that satisfies a set of QRs related to resource utilization, performance, scalability, and availability. The paper evaluates this design through detailed experiments and discusses some design alternatives and the lessons learned. Central to the design are a static load distribution strategy and a dynamic load balancing strategy: the first achieves an initially balanced workload on the system's server cluster at initialization time, while the second maintains this balanced workload throughout system execution. Together, these two strategies work in unison to ensure that server resources are efficiently utilized, user requests are processed with the required speed, the application is partitioned with sufficient room to scale, and the system is highly available.
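A toy sketch of the two strategies follows: a static, round-robin partition of trading symbols across servers at startup, plus a dynamic step that migrates a partition off the most loaded server. The partitioning key, load numbers, and migration rule are hypothetical stand-ins for the paper's strategies.

```python
# Sketch: static load distribution at startup plus a dynamic rebalancing step.
# Partition key (stock symbol), load values, and the migration rule are
# illustrative assumptions.
from collections import defaultdict

def static_distribution(symbols: list[str], servers: int) -> dict[int, list[str]]:
    """Round-robin the partitions across servers for an initially balanced load."""
    assignment = defaultdict(list)
    for i, sym in enumerate(symbols):
        assignment[i % servers].append(sym)
    return dict(assignment)

def rebalance_once(assignment: dict[int, list[str]],
                   load: dict[int, float]) -> None:
    """Move one partition from the most to the least loaded server, in place."""
    busiest = max(load, key=load.get)
    idlest = min(load, key=load.get)
    if busiest != idlest and assignment[busiest]:
        moved = assignment[busiest].pop()
        assignment[idlest].append(moved)
        print(f"migrated {moved}: server {busiest} -> server {idlest}")

assignment = static_distribution(["AAPL", "MSFT", "GOOG", "AMZN", "TSLA", "META"], 3)
rebalance_once(assignment, load={0: 0.9, 1: 0.4, 2: 0.5})
print(assignment)
```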

15.
Time series data are widely used in many applications, including critical decision support systems. The goodness of a dataset, called its fitness of use (FoU), has a direct bearing on the quality of the information and knowledge generated from it, and hence on the quality of the decisions based on them. Unlike traditional data quality, which is independent of the application in which the data are used, FoU is a function of the application. As the use of geospatial time series datasets increases in many critical applications, it is important to develop formal methodologies to compute their FoU and propagate it to the derived information, knowledge, and decisions. In this paper, we propose a formal framework to compute the FoU of time series datasets. We present three techniques built on the Dempster–Shafer belief theory framework, each investigating FoU through a different aspect of the data: data attributes, data stability, and the impact of gap periods. The effectiveness of each approach is shown using an application to hydrological datasets that measure streamflow. While we use hydrological information analysis as our application domain in this research, the techniques can be used in many other domains as well.
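As a toy illustration of the belief-theoretic machinery, the sketch below combines two mass functions over {fit, unfit} with Dempster's rule of combination, as one might fuse evidence from data attributes and data stability; the mass values and the two-hypothesis frame are assumptions, not the paper's actual assessors.

```python
# Sketch: Dempster's rule of combination on a frame {fit, unfit} for FoU.
# Mass values are illustrative; the paper's evidence sources are not reproduced.
from itertools import product

def dempster_combine(m1: dict[frozenset, float],
                     m2: dict[frozenset, float]) -> dict[frozenset, float]:
    combined: dict[frozenset, float] = {}
    conflict = 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    # Normalize by the non-conflicting mass (Dempster's rule).
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

FIT, UNFIT = frozenset({"fit"}), frozenset({"unfit"})
THETA = FIT | UNFIT  # ignorance: mass on the whole frame

attributes_evidence = {FIT: 0.6, UNFIT: 0.1, THETA: 0.3}
stability_evidence = {FIT: 0.5, UNFIT: 0.2, THETA: 0.3}

fused = dempster_combine(attributes_evidence, stability_evidence)
print({tuple(sorted(k)): round(v, 3) for k, v in fused.items()})
```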

16.
This note points out and corrects an error in the algorithm proposed in [Ting-Yem Ho, Yue-Li Wang and Ming-Tsan Juan, A linear time algorithm for finding all hinge vertices of a permutation graph, Information Processing Letters 59 (2) (1996) 103-107].

17.
K-means clustering can be highly accurate when the number of clusters and the initial cluster centres are appropriate; an inappropriate choice of either decreases its accuracy. However, determining these values is problematic. To solve this, we used density-based spatial clustering of applications with noise (DBSCAN), because it does not require a predetermined number of clusters; however, it has significant drawbacks of its own: its accuracy decreases on high-dimensional data and on data whose clusters have different densities. The objective of this research is therefore to improve on DBSCAN through a selection of region clusters based on density DBSCAN, to automatically find the appropriate number of clusters and initial cluster centres for K-means clustering. In the proposed method, DBSCAN performs the clustering, appropriate clusters are selected by considering the density of each cluster, and the appropriate region data are then chosen from the selected clusters. The experimental results yield the appropriate number of clusters and the appropriate initial cluster centres for K-means clustering. In addition, the results of the selection of region clusters based on density DBSCAN are more accurate than those of traditional methods, including DBSCAN and K-means, and of related methods such as partitioning-based DBSCAN (PDBSCAN) and PDBSCAN with the ant clustering algorithm (PACA-DBSCAN).
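The core pipeline, running DBSCAN first and then seeding K-means with the number and centres of the discovered clusters, can be sketched with scikit-learn; the synthetic data and the use of plain cluster means as centres are simplifications, since the paper additionally filters clusters and region data by density.

```python
# Sketch: seeding K-means with DBSCAN's cluster count and centres.
# Synthetic data; the paper's density-based cluster/region selection step is
# simplified to taking the mean of every non-noise DBSCAN cluster.
import numpy as np
from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=4, cluster_std=0.6, random_state=0)

labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(X)
cluster_ids = sorted(set(labels) - {-1})          # -1 marks noise points

# Initial centres: mean of each DBSCAN cluster (density filtering omitted).
centres = np.array([X[labels == c].mean(axis=0) for c in cluster_ids])

km = KMeans(n_clusters=len(centres), init=centres, n_init=1, random_state=0)
km.fit(X)
print(f"DBSCAN found {len(centres)} clusters; K-means inertia: {km.inertia_:.2f}")
```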

18.
A correlation between learning and fuzzy entropy is introduced, using the control of a robotic part macro-assembly (part-bringing) task as an example. Two intelligent part-bringing algorithms are presented for bringing a part from an initial position to an assembly hole or receptacle (the target) for part mating, in a partially unknown environment containing obstacles. An entropy function, a useful measure of variability and of information in terms of uncertainty, is introduced to measure the overall performance of the part-bringing task; the degree of uncertainty associated with the task is used as an optimality criterion, e.g. minimum entropy, for a specific task execution. Fuzzy set theory, well suited to the management of uncertainty, is used to address the uncertainty associated with the macro-assembly procedure. The first algorithm locates variously shaped assembly holes (targets) in the workspace corresponding to the shapes of the parts and then brings each part to its corresponding target despite obstacles. This is accomplished by combining a neural network control strategy, coordinated with a mobile rectilinear grid of optical sensors, with fuzzy optimal controls. Depending on the topological relationships among the part's current position, the positions of obstacles, and the target position in the workspace, a specific rulebase from a family of distinct fuzzy rulebases for obstacle avoidance is activated. The higher the probability that the neural network's input pattern is identified as the desired output, the lower the fuzzy entropy; the fuzzy entropy thus measures the degree of identification between the input pattern and the desired output. The second algorithm combines a learning algorithm with sensor fusion to bring the part to the target; the learning approach reduces the uncertainty associated with the task, and again, the higher the probability of success, the lower the fuzzy entropy. The results clearly show the correlation between the probability of success in part-bringing and the fuzzy entropy, and demonstrate the effectiveness of these methodologies. The proposed technique is not only a useful tool for measuring learning behaviour but is also applicable to a wide range of robotic tasks, including motion planning and pick-and-place operations with variously shaped parts and targets.
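One common fuzzy entropy, used in the sketch below for illustration, is the Shannon-style measure of De Luca and Termini over membership degrees; applying it to a success probability is an assumption about how the abstract's "higher probability of success, lower entropy" relationship might be quantified, not the paper's exact formulation.

```python
# Sketch: De Luca-Termini fuzzy entropy over membership degrees. Low entropy
# when memberships are near 0 or 1 (confident), high near 0.5 (uncertain).
# Applying it to a task's success probability is an illustrative assumption.
import math

def fuzzy_entropy(memberships: list[float]) -> float:
    def h(mu: float) -> float:
        if mu in (0.0, 1.0):
            return 0.0
        return -(mu * math.log2(mu) + (1 - mu) * math.log2(1 - mu))
    return sum(h(mu) for mu in memberships) / len(memberships)

# As the success probability rises toward 1, the fuzzy entropy falls toward 0.
for p_success in (0.5, 0.7, 0.9, 0.99):
    print(f"P(success)={p_success:.2f}  entropy={fuzzy_entropy([p_success]):.3f}")
```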
