Related Articles
20 related articles found (search time: 15 ms)
1.
We explored the reliability of detecting a learner's affect from conversational features extracted from interactions with AutoTutor, an intelligent tutoring system (ITS) that helps students learn by holding a conversation in natural language. Training data were collected in a learning session with AutoTutor, after which the affective states of the learner were rated by the learner, a peer, and two trained judges. Inter-rater reliability scores indicated that the classifications of the trained judges were more reliable than those of the novice judges. Seven data sets that temporally integrated the affective judgments with the dialogue features of each learner were constructed. The first four data sets corresponded to the judgments of the learner, a peer, and two trained judges, while the remaining three data sets combined the judgments of two or more raters. Multiple regression analyses confirmed the hypothesis that dialogue features could significantly predict the affective states of boredom, confusion, flow, and frustration. Machine learning experiments indicated that standard classifiers were moderately successful in discriminating the affective states of boredom, confusion, flow, frustration, and neutral, yielding a peak accuracy of 42% with neutral (chance = 20%) and 54% without neutral (chance = 25%). Individual detections of boredom, confusion, flow, and frustration, when contrasted with neutral affect, had maximum accuracies of 69, 68, 71, and 78%, respectively (chance = 50%). The classifiers that operated on the emotion judgments of the trained judges and the combined models outperformed those based on the judgments of the novices (i.e., the self and peer). Follow-up classification analyses that assessed the degree to which machine-generated affect labels correlated with affect judgments provided by humans revealed that human-machine agreement was on par with the novice judges (self and peer) but quantitatively lower than with the trained judges. We discuss the prospects of extending AutoTutor into an affect-sensing ITS.
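As an illustration of the kind of machine learning experiment described above, the sketch below cross-validates a standard classifier on per-turn dialogue features against affect labels and compares the result with chance. The feature set and data are synthetic stand-ins, not the authors' actual pipeline.

```python
# Minimal sketch (not the authors' pipeline): cross-validating a standard
# classifier that discriminates affective states from dialogue features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-turn dialogue features (e.g., response time, answer quality,
# feedback polarity, turn length) as stand-ins for the paper's attributes.
X = rng.normal(size=(500, 4))
# Affect labels: boredom, confusion, flow, frustration, neutral (5 classes).
y = rng.integers(0, 5, size=500)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=10)
print(f"mean accuracy: {scores.mean():.2f} (chance = {1/5:.2f})")
```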

2.
Patents contain a large quantity of technical information not available elsewhere and are therefore of great interest to both academia and industry. The purpose of this research is to detect and extract information about the functions, the physical behaviours and the states of the system directly from the text of a patent in an automatic way. These three categories constitute a well-known set of relevant entities in the theory of engineering design, and their study allows powerful analysis of individual artefacts as well as of groups of products or technologies. The focus is on providing a handy tool that can speed up and facilitate human analysis and also allow tackling large corpora of documents. A second goal is to develop a protocol based on free software and database resources, so that it can be replicated with limited effort by anyone without having to rely on commercial databases. Extracting technical and design information from a document whose aim is more legal than technical, and that is written in a specific jargon, is not a trivial task. The approach chosen to overcome the various issues is to support state-of-the-art computational linguistic tools with a large knowledge base. The latter has been constructed both manually and automatically and comprises not only keywords but also concepts, relationships and regular expressions. A case study of a very recent patent describing a mechanical device is included to show the functioning and output of the entire system.
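A toy sketch of the underlying idea follows. The knowledge base entries, regular expressions and sample sentences are invented for illustration; the actual system couples full computational-linguistics tools with a far larger knowledge base.

```python
# Illustrative only: tag patent sentences as function / behaviour / state
# statements by matching them against a tiny hand-made knowledge base.
import re

KNOWLEDGE_BASE = {
    "function":  [r"\b(configured|adapted|used|means)\s+to\b", r"\bfor\s+\w+ing\b"],
    "behaviour": [r"\b(rotates?|slides?|moves?|transmits?|deforms?)\b"],
    "state":     [r"\b(engaged|locked|open|closed|at rest|in contact)\b"],
}

def tag_sentences(text):
    """Return (category, sentence) pairs for sentences matching the KB."""
    hits = []
    for sentence in re.split(r"(?<=[.;])\s+", text):
        for category, patterns in KNOWLEDGE_BASE.items():
            if any(re.search(p, sentence, re.IGNORECASE) for p in patterns):
                hits.append((category, sentence.strip()))
    return hits

sample = ("The lever is configured to transmit torque to the shaft. "
          "The shaft rotates inside the housing; the pawl remains engaged.")
for category, sentence in tag_sentences(sample):
    print(category, "->", sentence)
```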

3.
Many smartphone apps routinely gather various private user data and send them to advertisers. Despite recent studies on protection mechanisms and analyses of apps' behavior, the understanding of the consequences of such privacy losses remains limited. In this paper, we investigate how much an advertiser can infer about users' social and community relationships. After a one-month user study involving about 190 of the most popular Android apps, we find that an advertiser can infer 90% of the social relationships. We further propose a privacy leakage inference framework and use real mobility traces and Foursquare data to quantify the consequences of privacy leakage. We find that achieving 90% inference accuracy for social and community relationships requires merely three weeks' worth of user data. Finally, we present a real-time privacy leakage visualization tool that captures and displays the spatial-temporal characteristics of the leakages. These discoveries underscore the importance of early adoption of privacy protection mechanisms.
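The sketch below shows, in very simplified form, the kind of inference an advertiser could run from leaked location data: a social tie is guessed whenever two users' reports co-occur in the same place and time window often enough. The records, time slots and threshold are illustrative assumptions, not the paper's framework.

```python
# Toy co-location inference (assumed data and threshold, illustration only).
from collections import defaultdict
from itertools import combinations

# Hypothetical leaked records: (user, place_id, hour_slot).
records = [
    ("alice", "cafe_12", 9), ("bob", "cafe_12", 9),
    ("alice", "gym_3", 18),  ("bob", "gym_3", 18),
    ("carol", "mall_7", 14), ("alice", "cafe_12", 10),
]

by_slot = defaultdict(set)
for user, place, hour in records:
    by_slot[(place, hour)].add(user)

co_locations = defaultdict(int)
for users in by_slot.values():
    for pair in combinations(sorted(users), 2):
        co_locations[pair] += 1

THRESHOLD = 2  # assumption: two or more co-locations suggest a tie
ties = [pair for pair, n in co_locations.items() if n >= THRESHOLD]
print("inferred ties:", ties)
```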

4.
Although a few studies have examined mobile value from the perspective of the distinctive features of mobile technology, limited attempts have been made from the perspective of a mobile user's value tendency. In this study, building upon prior research on the productivity-oriented and pleasure-oriented nature of systems, we categorize mobile value into utilitarian and hedonic use. Based on these two values, we conceptualize two types of tendency in mobile users' application use, namely utilitarian tendency and hedonic tendency. The goal of this study is to examine the relationships between mobile consumers' value tendency and their perceptions of mobile Internet service quality in terms of three mobile quality dimensions (i.e., connection quality, design quality, and information quality). In addition, drawing upon the "digital divide" literature, the relationships between mobile users' personal dispositions (i.e., maturity and socio-economic status) and their mobile value tendency are also tested. The empirical results of the study, the interpretation of the results, research contributions, and limitations are discussed.

5.
Computer users have different levels of system skills. Moreover, each user has different levels of skill across different applications and even in different portions of the same application. Additionally, users’ skill levels change dynamically as users gain more experience in a user interface. In order to adapt user interfaces to the different needs of user groups with different levels of skills, automatic methods of skill detection are required. In this paper, we present our experiments and methods, which are used to build automatic skill classifiers for desktop applications. Machine learning algorithms were used to build statistical predictive models of skill. Attribute values were extracted from high frequency user interface events, such as mouse motions and menu interactions, and were used as inputs to our models. We have built both task-independent and task-dependent classifiers with promising results.
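A hedged sketch of the general idea follows: low-level UI events are summarized into per-task features, and a classifier labels users as novice or skilled. The feature names and synthetic data are assumptions, not the paper's exact attributes.

```python
# Illustrative skill classifier on assumed UI-event features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical features: mean mouse speed, pause count, menu dwell time,
# number of corrective (back-and-forth) motions.
X_skilled = rng.normal([900, 3, 0.8, 2], [100, 1, 0.2, 1], size=(80, 4))
X_novice  = rng.normal([400, 9, 2.5, 7], [100, 2, 0.5, 2], size=(80, 4))
X = np.vstack([X_skilled, X_novice])
y = np.array([1] * 80 + [0] * 80)          # 1 = skilled, 0 = novice

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```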

6.
Automated semantic web service composition is one of the critical research challenges of service-oriented computing, since it allows users to create an application simply by specifying the inputs that the application requires, the outputs it should produce, and any constraints it should respect. The composition problem has been handled using a variety of techniques, from artificial intelligence planning to optimization algorithms. However, no approach so far has focused on handling three composition dimensions simultaneously, producing solutions that (1) are fully functional (i.e., fully executable) by using a mechanism of semantic matching between the services involved in the solutions, (2) are optimized according to non-functional quality-of-service (QoS) measurements, and (3) respect global QoS constraints. This paper presents a novel approach based on a Harmony Search algorithm that addresses these three dimensions simultaneously through a fitness function, to select the optimal or near-optimal solution in semantic web service composition. In our approach, the search space is modeled as a planning-graph structure which encodes all the possible composition solutions for a given user request. To improve the selection process we have compared the original Harmony Search algorithm with its recently developed variants, the Improved Harmony Search (IHS) algorithm and the Global Best Harmony Search (GHS) algorithm. Experiments conducted with an extended version of the Web Service Challenge 2009 dataset showed that (1) our approach is efficient and effective at extracting the optimal or near-optimal composition in diverse scenarios, and (2) both the IHS and GHS variants brought improvements in terms of fitness and execution time.
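To make the search procedure concrete, here is a minimal Harmony Search sketch: each "harmony" picks one candidate service per abstract task and the fitness aggregates assumed QoS scores. The encoding, QoS values and parameters (HMS, HMCR, PAR) are illustrative, not the paper's planning-graph model.

```python
# Basic Harmony Search over a toy service-composition encoding.
import random

random.seed(0)
N_TASKS, N_CANDIDATES = 5, 4
# Hypothetical QoS score in [0, 1] for candidate j of task i (higher = better).
qos = [[random.random() for _ in range(N_CANDIDATES)] for _ in range(N_TASKS)]

def fitness(harmony):
    return sum(qos[i][s] for i, s in enumerate(harmony)) / N_TASKS

HMS, HMCR, PAR, ITERATIONS = 10, 0.9, 0.3, 2000
memory = [[random.randrange(N_CANDIDATES) for _ in range(N_TASKS)]
          for _ in range(HMS)]

for _ in range(ITERATIONS):
    new = []
    for i in range(N_TASKS):
        if random.random() < HMCR:                 # memory consideration
            value = random.choice(memory)[i]
            if random.random() < PAR:              # pitch adjustment
                value = (value + random.choice([-1, 1])) % N_CANDIDATES
        else:                                      # random selection
            value = random.randrange(N_CANDIDATES)
        new.append(value)
    worst = min(range(HMS), key=lambda k: fitness(memory[k]))
    if fitness(new) > fitness(memory[worst]):      # replace the worst harmony
        memory[worst] = new

best = max(memory, key=fitness)
print("best composition:", best, "fitness:", round(fitness(best), 3))
```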

7.
Automatic Georeferencing of Images Acquired by UAV's
This paper implements and experimentally evaluates a procedure for automatically georeferencing images acquired by unmanned aerial vehicles (UAVs), in the sense that ground control points (GCPs) are not necessary. Since the camera model is needed for georeferencing, the paper also proposes a completely automatic procedure for collecting corner pixels in the model plane image to solve the camera calibration problem, i.e., to estimate the camera and lens distortion parameters. The performance of the complete georeferencing system is evaluated with real flight data obtained from a typical UAV.
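For the calibration step only, the sketch below uses OpenCV's standard chessboard routine as a stand-in for the paper's automatic corner-collection procedure. The image folder and board size are assumptions.

```python
# Camera calibration from model-plane images (standard OpenCV route).
import glob
import cv2
import numpy as np

PATTERN = (9, 6)                       # assumed inner-corner grid of the model plane
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calibration_images/*.jpg"):   # hypothetical folder
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        obj_points.append(objp)
        img_points.append(corners)

# Estimate the intrinsic matrix and lens-distortion coefficients.
ret, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("reprojection error:", ret)
print("camera matrix:\n", camera_matrix)
```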

8.
The arrival of 360° video in everyday life creates the need to assess both the audiovisual production and the playback environment offered to the final user. Leveraging the standard Experience API (xAPI), which is designed to collect micro-interactions with e-learning content, we propose a platform that automatically collects users' interactions with applications based on interactive 360° multimedia. To validate the platform, we introduce an example of educational activities based on interactive 360° videos and the tools used to, first, annotate these videos and convert them into interactive activities; second, perform the activity and collect users' behavior via xAPI statements; and finally, convert these statements into meaningful information in the form of user metrics and charts, both at the individual level and aggregated by activity, creating the possibility of finding individual and group behavior patterns. This work concludes that the presented platform helps to analyze how users behave with omnidirectional interactive productions, with the aim of validating and improving their usability, and ends with a discussion of future work.
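For readers unfamiliar with xAPI, the sketch below records a single 360° video interaction as an actor/verb/object statement and posts it to a Learning Record Store. The LRS endpoint, credentials, activity IDs and the viewport extension are placeholders; only the general statement structure follows the xAPI specification.

```python
# Posting one hypothetical xAPI statement for a 360-video interaction.
import requests

statement = {
    "actor": {"mbox": "mailto:student@example.org", "name": "Student"},
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/interacted",
        "display": {"en-US": "interacted"},
    },
    "object": {
        "id": "http://example.org/activities/360-video-lesson-1",
        "definition": {"name": {"en-US": "Interactive 360 video lesson 1"}},
    },
    "result": {"extensions": {
        # Hypothetical extension: where the viewer was looking when interacting.
        "http://example.org/xapi/extensions/viewport-yaw-pitch": [135.0, -10.5],
    }},
}

response = requests.post(
    "https://lrs.example.org/xapi/statements",       # placeholder LRS endpoint
    json=statement,
    headers={"X-Experience-API-Version": "1.0.3"},
    auth=("lrs_user", "lrs_password"),               # placeholder credentials
)
print(response.status_code)
```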

9.
Multimedia Tools and Applications - Learning concepts from examples presented in a user's query and inferring the other items that belong to this query is still a significant challenge for images...

10.
11.
A recommender system is a Web technology that proactively suggests items of interest to users based on their objective behavior or explicitly stated preferences. Evaluations of recommender systems (RS) have traditionally focused on the performance of algorithms. However, many researchers have recently started investigating system effectiveness and evaluation criteria from users' perspectives. In this paper, we survey the state of the art of user experience research in RS by examining how researchers have evaluated design methods that augment an RS's ability to help users find the information or product that they truly prefer, interact with ease with the system, and form trust with the RS through system transparency, control and privacy-preserving mechanisms; finally, we examine how these system design features influence users' adoption of the technology. We summarize existing work concerning three crucial interaction activities between the user and the system: the initial preference elicitation process, the preference refinement process, and the presentation of the system's recommendation results. Additionally, we also cover recent evaluation frameworks that measure a recommender system's overall perceived qualities and how these qualities influence users' behavioral intentions. The key results are summarized in a set of design guidelines that can provide useful suggestions to scholars and practitioners concerning the design and development of effective recommender systems. The survey also lays groundwork for researchers to pursue future topics that have not been covered by existing methods.

12.
As social networks' popularity has increased, so have the attendant problems. The purpose of this study is to identify the important problems by exploring the determinants of Facebook continuance intention from a negative perspective. A questionnaire survey and interviews are used to provide a deep understanding of both the problems and their causes. The research hypotheses are empirically evaluated using responses from a field survey of 555 undergraduates. The results indicate that previous usage behaviour is the most important determinant of continuance intention. There is a positive causal relationship between perceived privacy self-protection and usage behaviour. In addition to a common privacy issue, this study uncovers problems such as underestimating potential risks, holding misconceptions, and lacking legal and information security knowledge. Moreover, this study presents a diagnostic system that administrators can use to detect students' key problems and understand the reasons behind students' behaviour.

13.
Web accessibility can help reduce the digital divide faced by persons with disabilities by providing easy access to information on the Internet. Providing web accessibility can be an important element that manifests a firm's corporate social responsibility (CSR), and employees can play a vital role in this process. This paper examines how employees can influence a firm's decision to fulfil its CSR regarding web accessibility. We propose that employees' intention to exert pressure on a firm is primarily influenced by three psychological needs, namely the need for control, the need for belonging, and the need for meaningful existence. Additionally, the perceived importance of CSR moderates the relationship between the need for meaningful existence and intention. We empirically test the research model using data collected from 106 Chinese employees. The results suggest that for employees to pressure their firms to improve the accessibility of their websites, it is imperative to enhance their perceived importance of web accessibility and their needs for belonging and for a meaningful existence. We present the theoretical and managerial implications arising from our findings.

14.
With the advance of technology, users increasingly expect relevant, high-quality results from the web through search engines, and retrieval performance can often be improved using several algorithms and methods. The abundance of content on the web has driven the demand for better search systems, and categorizing web pages helps considerably in addressing this issue. The anatomy of web pages, their links, the categorization of text, and the relations among them have received growing attention over time. Search engines perform critical analysis using several inputs for a given keyword to obtain quality results in the shortest possible time. Categorization is mostly done by separating content using the web link structure. We estimate two different page weights, (a) Page Retaining Weight (PRW) and (b) Page Forwarding Weight (PFW), for each web page and group them for categorization. Using these experimental results we classify web pages into four different groups: (a) simple, (b) axis-shifted, (c) fluctuated, and (d) oscillating types. Such a categorization can improve the performance of search engines and also informs the study of web modeling.
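The abstract does not define PRW and PFW, so the sketch below operationalizes them, purely as an assumption, from a page's link structure: internal links retain the reader on the site while external links forward them elsewhere.

```python
# Assumed operationalization of PRW/PFW (illustration only, not the paper's formulas).
def page_weights(internal_links, external_links):
    total = internal_links + external_links
    if total == 0:
        return 0.0, 0.0
    prw = internal_links / total      # assumed Page Retaining Weight
    pfw = external_links / total      # assumed Page Forwarding Weight
    return prw, pfw

for name, internal, external in [("docs_page", 40, 5), ("link_hub", 3, 57)]:
    prw, pfw = page_weights(internal, external)
    print(f"{name}: PRW={prw:.2f} PFW={pfw:.2f}")
```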

15.
16.
International Journal of Control, Automation and Systems - Range of motion (ROM) has been measured using several indices to judge the progression of ankylosing spondylitis (AS). However, measuring...

17.
Early and accurate diagnosis of Parkinson's disease (PD) is important for early management, proper prognostication, and for initiating neuroprotective therapies once they become available. Recent neuroimaging techniques, such as dopaminergic imaging using single photon emission computed tomography (SPECT) with 123I-Ioflupane (DaTSCAN), have been shown to detect even early stages of the disease. In this paper, we use the striatal binding ratio (SBR) values calculated from 123I-Ioflupane SPECT scans (obtained from the Parkinson's Progression Markers Initiative (PPMI) database) to develop automatic classification and prediction/prognostic models for early PD. We used support vector machines (SVM) and logistic regression in the model-building process. We observe that the SVM classifier with an RBF kernel produced a high accuracy of more than 96% in classifying subjects into early PD and healthy normal, and the logistic model for estimating the risk of PD also produced a high degree of fit with statistical significance, indicating its usefulness in PD risk estimation. Hence, we infer that such models have the potential to aid clinicians in the PD diagnostic process.
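The classification step can be sketched as follows with an RBF-kernel SVM on SBR-style features. The data here is synthetic, not PPMI data, and the feature layout (caudate and putamen values) is an assumption for illustration.

```python
# SVM with RBF kernel separating early PD from healthy controls (synthetic data).
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Hypothetical SBR features: [left caudate, right caudate, left putamen, right putamen].
healthy  = rng.normal([3.0, 3.0, 2.2, 2.2], 0.35, size=(100, 4))
early_pd = rng.normal([2.2, 2.3, 1.0, 1.2], 0.35, size=(100, 4))
X = np.vstack([healthy, early_pd])
y = np.array([0] * 100 + [1] * 100)       # 0 = healthy, 1 = early PD

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
scores = cross_val_score(model, X, y, cv=10)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```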

18.
19.
Collaborative recommender systems select potentially interesting items for each user based on the preferences of like-minded individuals. In particular, e-commerce has become a major domain in this research field due to its business interest, since identifying the products the users may like or find useful can boost consumption. In recent years, a great number of works in the literature have focused on improving these tools. Expertise, trust and reputation models are incorporated into collaborative recommender systems to increase their accuracy and reliability. However, current approaches require extra data from the users that is often not available. In this paper, we present two contributions that apply a semantic approach to improve recommendation results transparently to the users. On the one hand, we automatically build implicit trust networks in order to incorporate trust and reputation into the selection of the set of like-minded users that will drive the recommendation. On the other hand, we propose a measure of practical expertise by exploiting the data available in any e-commerce recommender system: the consumption histories of the users.
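A simplified sketch of the idea follows (not the authors' exact model): user-based collaborative filtering in which each neighbour's rating is weighted both by taste similarity and by an implicit trust score derived from consumption overlap. The ratings, similarity measure and trust proxy are illustrative assumptions.

```python
# Trust-weighted user-based collaborative filtering on toy data.
import numpy as np

ratings = {                     # user -> {item: rating}
    "u1": {"a": 5, "b": 3, "c": 4},
    "u2": {"a": 4, "b": 2, "c": 5, "d": 4},
    "u3": {"b": 5, "d": 2},
}

def similarity(u, v):
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    x = np.array([ratings[u][i] for i in common], float)
    y = np.array([ratings[v][i] for i in common], float)
    return 1.0 / (1.0 + np.linalg.norm(x - y))     # simple distance-based similarity

def implicit_trust(u, v):
    # Assumed proxy: share of u's consumption history also consumed by v.
    return len(set(ratings[u]) & set(ratings[v])) / len(ratings[u])

def predict(user, item):
    neighbours = [v for v in ratings if v != user and item in ratings[v]]
    weights = [similarity(user, v) * implicit_trust(user, v) for v in neighbours]
    if not any(weights):
        return None
    return sum(w * ratings[v][item] for w, v in zip(weights, neighbours)) / sum(weights)

print("predicted rating of 'd' for u1:", round(predict("u1", "d"), 2))
```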

20.

The purpose of this paper is to describe a procedure for the automatic selection of control points in remote-sensing images of high-relief terrains for alignment with a reference map. This problem is of strategic importance whenever remote-sensing images have to be integrated into a Geographic Information System (GIS) and processed in real time. The procedure described here is based on the recognition of shadow structures in the satellite image and on their comparison with the computer-generated shadows obtained from the Digital Terrain Model (DTM) of the region. The procedure was developed for a Landsat TM image of the Aurina Valley (in the Pusteresi Alps) with the DTM obtained from an IGM (Istituto Geografico Militare) 1:25000 reference map, but with minor changes it can be extended to other remote-sensing images. Comparison of the shadow structures is performed by evaluating the similarity of a simplified model of their shapes described by means of inertia ellipses. Each pair of shadow structures, recognized as similar and meeting a number of positional constraints, yields a pair of corresponding points whose coordinates provide input values for determining the parameters of the transformation of the input image into a planimetrically corrected one. The performance and robustness of the method and the limits of its applicability are assessed. An example is given in which the automatically determined control points are directly inserted in a warping function, with reasonably good results.
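The shape model used for comparison can be sketched as follows: the inertia ellipse of a binary shadow mask, summarized by its centroid, axis lengths and orientation. The mask below is synthetic, not a Landsat shadow, and serves only to show the second-moment computation.

```python
# Inertia ellipse of a binary shadow mask via second central moments.
import numpy as np

mask = np.zeros((60, 60), dtype=bool)
mask[20:40, 10:50] = True            # synthetic elongated "shadow" region

ys, xs = np.nonzero(mask)
centroid = np.array([xs.mean(), ys.mean()])

# Covariance of the pixel coordinates (second central moments).
cov = np.cov(np.vstack([xs, ys]))
eigvals, eigvecs = np.linalg.eigh(cov)

# Semi-axes proportional to the square roots of the eigenvalues; orientation
# from the eigenvector of the largest eigenvalue.
semi_axes = 2.0 * np.sqrt(eigvals)
orientation = np.degrees(np.arctan2(eigvecs[1, -1], eigvecs[0, -1]))

print("centroid:", centroid)
print("semi-axes:", semi_axes.round(2))
print("orientation (deg):", round(orientation, 1))
```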
