Similar Articles
20 similar articles found.
1.
With the development of digital music technologies, recommending the ‘favored music’ from large collections of digital music is an interesting and useful problem. Some Web-based music stores can recommend popular music that has been rated by many people. However, three problems remain to be resolved in current methods: (a) how to recommend ‘favored music’ that has not yet been rated by anyone, (b) how to avoid repeatedly recommending ‘disfavored music’ to users, and (c) how to recommend music that is more interesting to users beyond what they are used to listening to. To achieve these goals, we propose a novel method called personalized hybrid music recommendation, which combines content-based, collaboration-based and emotion-based methods by computing the weights of the methods according to users’ interests. Furthermore, to evaluate recommendation accuracy, we constructed a system that recommends music to users after mining their music-listening logs. Based on user feedback, the proposed method accommodates variations in users’ musical interests and promptly recommends favored and more interesting music through consecutive recommendations. Experimental results show that the recommendation accuracy achieved by our method reaches 90%. Hence, the method is helpful for recommending the ‘favored music’ to users, provided that each music object is annotated with the related music emotions. The framework in this paper could serve as a useful basis for studies on music recommendation.
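As a rough illustration of the weighted hybrid idea described above, the sketch below blends per-track scores from three hypothetical component recommenders and nudges the weights toward the methods whose suggestions the user accepted. The function names, normalization, and feedback rule are assumptions for illustration, not the paper's actual formulas.

```python
import numpy as np

def hybrid_scores(content_s, collab_s, emotion_s, weights):
    """Blend per-track scores from three recommenders with user-specific weights."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # keep the mixture a convex combination
    return w[0] * content_s + w[1] * collab_s + w[2] * emotion_s

def update_weights(weights, per_method_hits, lr=0.1):
    """Shift weight toward methods whose recommendations the user accepted (hypothetical rule)."""
    w = np.asarray(weights, dtype=float) + lr * np.asarray(per_method_hits, dtype=float)
    return w / w.sum()

# toy example: scores for 5 candidate tracks from each method (synthetic)
content_s = np.array([0.9, 0.1, 0.4, 0.7, 0.2])
collab_s  = np.array([0.2, 0.8, 0.5, 0.6, 0.1])
emotion_s = np.array([0.3, 0.2, 0.9, 0.1, 0.6])
weights = [1.0, 1.0, 1.0]
scores = hybrid_scores(content_s, collab_s, emotion_s, weights)
print(np.argsort(scores)[::-1])          # track indices, best first
```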

2.
Contextual factors greatly influence users’ musical preferences, so they are remarkably beneficial to music recommendation and retrieval tasks. However, how to obtain and utilize contextual information still needs to be studied. In this paper, we propose a context-aware music recommendation approach that can recommend music pieces appropriate for users’ contextual preferences. In analogy to matrix factorization methods for collaborative filtering, the proposed approach does not require music pieces to be represented by features in advance; instead, it learns the representations from users’ historical listening records. Specifically, the proposed approach first learns music pieces’ embeddings (feature vectors in a low-dimensional continuous space) from music listening records and the corresponding metadata. It then infers and models users’ global and contextual preferences for music from their listening records using the learned embeddings. Finally, it recommends appropriate music pieces according to the target user’s preferences to satisfy her/his real-time requirements. Experimental evaluations on a real-world dataset show that the proposed approach outperforms baseline methods in terms of precision, recall, F1 score, and hit rate. In particular, our approach performs better on sparse datasets.
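A minimal sketch of the general idea, assuming track embeddings have already been learned from listening logs (for example by a word2vec-style model over listening sessions): a contextual preference vector is built from the tracks a user played in the target context and candidates are ranked by cosine similarity. The embedding dimensionality, the averaging rule, and all names are illustrative assumptions, not the paper's model.

```python
import numpy as np

def l2_normalize(m, axis=-1):
    return m / (np.linalg.norm(m, axis=axis, keepdims=True) + 1e-9)

def context_profile(track_emb, history, context_of, target_context):
    """Average the embeddings of tracks the user played in the target context."""
    idx = [t for t in history if context_of[t] == target_context]
    return track_emb[idx].mean(axis=0) if idx else track_emb[history].mean(axis=0)

def recommend(track_emb, profile, exclude, k=5):
    sims = l2_normalize(track_emb) @ l2_normalize(profile[None, :]).ravel()
    sims[list(exclude)] = -np.inf        # do not re-recommend already-played tracks
    return np.argsort(sims)[::-1][:k]

# toy data: 100 tracks with 16-d embeddings (in practice learned from listening logs)
rng = np.random.default_rng(0)
track_emb = rng.normal(size=(100, 16))
history = [3, 7, 42, 65]
context_of = {t: ("evening" if t % 2 else "morning") for t in range(100)}
profile = context_profile(track_emb, history, context_of, "evening")
print(recommend(track_emb, profile, exclude=history))
```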

3.

Over the past decades, a large number of music pieces have been uploaded to the Internet every day through social networks that concentrate on music and videos, such as Last.fm, Spotify and YouTube, and we have been witnessing an ever-increasing amount of music data. At the same time, with this huge amount of online music data, users face a daily struggle to find the music pieces that interest them. To solve this problem, music search and recommendation systems help users find their favorite content in a huge repository of music. However, social influence, which contains rich information about shared interests between users and users’ frequently correlated actions, has been largely ignored in previous music recommender systems. In this work, we explore the effects of social influence on developing effective music recommender systems and focus on the problem of social influence aware music recommendation, which aims at recommending a list of music tracks to a target user. To exploit social influence, we first construct a heterogeneous social network, propose a novel meta path-based similarity measure called WPC, and define the framework of similarity measurement in this network. As a further step, we use the topological potential approach to mine social influence in heterogeneous networks. Finally, in order to improve music recommendation by incorporating social influence, we present a factor graph model based on social influence. Our experimental results on a real-world dataset verify that our proposed approach substantially outperforms current state-of-the-art music recommendation methods.
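The WPC measure and the factor graph model are specific to the paper, but topological potential is commonly computed with a Gaussian-decay kernel over node distances. The sketch below shows that generic form, assuming pairwise hop distances and optional node "masses" (e.g., activity or degree) are already available; it is not the paper's exact formulation.

```python
import numpy as np

def topological_potential(dist, mass=None, sigma=1.0):
    """Gaussian-decay topological potential of each node.

    dist : (n, n) matrix of hop distances between nodes (np.inf if unreachable).
    mass : optional per-node weights, e.g. activity or degree; defaults to 1.
    """
    n = dist.shape[0]
    mass = np.ones(n) if mass is None else np.asarray(mass, dtype=float)
    kernel = np.exp(-(dist / sigma) ** 2)    # influence decays with distance
    np.fill_diagonal(kernel, 0.0)            # a node does not contribute to its own potential
    return kernel @ mass

# toy 4-node example with hop distances
dist = np.array([[0, 1, 2, np.inf],
                 [1, 0, 1, 2],
                 [2, 1, 0, 1],
                 [np.inf, 2, 1, 0]], dtype=float)
print(topological_potential(dist, sigma=1.5))
```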


4.
Music is the language of emotions. In recent years, music emotion recognition has attracted widespread attention in academia and industry because it can be widely used in fields such as recommendation systems, automatic music composition, psychotherapy, and music visualization. With the rapid development of artificial intelligence in particular, deep learning-based music emotion recognition is gradually becoming mainstream. This paper gives a detailed survey of music emotion recognition. Starting with some preliminary knowledge of music emotion recognition, it first introduces commonly used evaluation metrics. Then a three-part research framework is put forward, and the knowledge and algorithms involved in each part are introduced with detailed analysis, including commonly used datasets, emotion models, feature extraction, and emotion recognition algorithms. After that, the open problems and development trends of music emotion recognition technology are discussed, and finally the whole paper is summarized.

5.
With the advent of the ubiquitous era, many studies have been devoted to various situation-aware services in the semantic web environment. One of the most challenging of these involves implementing a situation-aware personalized music recommendation service that considers the user’s situation and preferences. Situation-aware music recommendation requires multidisciplinary efforts including low-level feature extraction and analysis, music mood classification, and human emotion prediction. In this paper, we propose a new scheme for a situation-aware/user-adaptive music recommendation service in the semantic web environment. To do this, we first discuss how knowledge can be utilized for analyzing and retrieving music content semantically, and present a user-adaptive music recommendation scheme based on semantic web technologies that facilitates the development of domain knowledge and a rule set. Based on this discussion, we describe our Context-based Music Recommendation (COMUS) ontology for modeling the user’s musical preferences and contexts, and for supporting reasoning about the user’s desired emotions and preferences. Basically, COMUS defines an upper music ontology that captures concepts for the general properties of music, such as titles, artists and genres. In addition, for extensibility, it allows domain-specific ontologies, such as music features, moods and situations, to be added in a hierarchical manner. Using this context ontology, high-level (implicit) knowledge such as situations can be inferred from low-level (explicit) knowledge through logical reasoning rules. As an innovation, our ontology can express detailed and complicated relations among music clips, moods and situations, which enables users to find appropriate music. We present some of the experiments we performed as a case study for music recommendation.

6.
7.
In this paper, we suggest a new genetic programming approach for music emotion classification. Our approach is based on Thayer’s arousal-valence plane, one of the representative human emotion models, in which human emotion is determined by psychological arousal and valence. We map music pieces onto the arousal-valence plane and classify music emotion in that space. We extract 85 acoustic features from music signals, rank them by information gain, and choose the top k features in the feature selection process. To map music pieces in the feature space onto the arousal-valence space, we apply genetic programming, which is designed to find an optimal formula that maps given music pieces to the arousal-valence space so that music emotions are effectively classified. k-NN and SVM, which are widely used classification methods, are used to classify music emotions in the arousal-valence space. To verify our method, we compare it with six existing methods on the same music dataset, and the experiment confirms that the proposed method is superior to the others.
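A simplified sketch of the feature selection and classification steps only (the genetic-programming mapping itself is not reproduced): information gain is approximated here with scikit-learn's mutual information estimator, the top-k features are kept, and a k-NN classifier is trained. The synthetic data and all parameter values are placeholders.

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 85))                  # 85 acoustic features per clip (synthetic)
y = rng.integers(0, 4, size=200)                # 4 emotion classes on the arousal-valence plane

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

selector = SelectKBest(mutual_info_classif, k=20)   # keep the top-k most informative features
X_tr_k = selector.fit_transform(X_tr, y_tr)
X_te_k = selector.transform(X_te)

clf = KNeighborsClassifier(n_neighbors=5).fit(X_tr_k, y_tr)
print("accuracy:", clf.score(X_te_k, y_te))
```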

8.

Music categorization based on acoustic features extracted from music clips and user-defined tags forms the basis of recent music recommendation applications, because relevant tags can be assigned automatically based on the feature values and their relation to tags. In practice, especially for lightweight handheld mobile devices, there is a certain limit on computational capacity, owing to consumers’ usage behavior or battery consumption. This also limits the maximum number of acoustic features that can be extracted, and makes it necessary to identify a compact feature subset to be used in the music categorization process. In this study, we propose an approach to compact feature subset-based multi-label music categorization for mobile music recommendation services. Experimental results on various multi-labeled music datasets reveal that the proposed approach yields better performance than the conventional approach.
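One plausible way to realize compact feature subset-based multi-label categorization, sketched with scikit-learn: features are ranked by mutual information averaged over the tags, only a small subset is kept, and a one-vs-rest classifier assigns multiple tags per clip. The ranking rule, subset size, and data are assumptions, not the paper's method.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 60))                        # 60 acoustic features per clip (synthetic)
Y = (rng.random(size=(300, 5)) < 0.3).astype(int)     # 5 binary tags per clip (multi-label)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=1)

# rank features by mutual information averaged over the individual tags
mi = np.mean([mutual_info_classif(X_tr, Y_tr[:, j], random_state=1)
              for j in range(Y.shape[1])], axis=0)
keep = np.argsort(mi)[::-1][:15]                      # compact 15-feature subset

clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X_tr[:, keep], Y_tr)
pred = clf.predict(X_te[:, keep])
print("micro-F1:", f1_score(Y_te, pred, average="micro"))
```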


9.
Music often plays an important role in people’s daily lives. Because it has the power to affect human emotion, music has gained a place in work environments and in sports training as a way to enhance the performance of particular tasks. Studies have shown that office workers perform certain jobs better and joggers run longer distances when listening to music. However, a personalized music system that can automatically recommend songs according to a user’s physiological response has been lacking. This study therefore aims to establish an intelligent music selection system for individual users to enhance their learning performance. We first created an emotional music database using data analytics classifications. During testing, wearable sensing devices were used to detect heart rate variability (HRV), which subsequently guided music selection. User emotions were then analyzed and appropriate songs were selected using the proposed application software (App). Machine learning was used to record user preferences, ensuring accurate and precise classification. Significant results generated through experimental validation indicate that this system yields high satisfaction levels, does not increase mental workload, and improves users’ performance. With the trend of the Internet of Things (IoT) and the continuing development of wearable devices, the proposed system could stimulate innovative applications for smart factories, homes, and health care.
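The abstract does not give the HRV computation, but standard time-domain HRV features are well defined. The sketch below derives them from RR intervals and applies a purely hypothetical threshold rule for switching playlists, standing in for the paper's App logic and learned preferences.

```python
import numpy as np

def hrv_features(rr_ms):
    """Standard time-domain HRV features from a series of RR intervals (milliseconds)."""
    rr = np.asarray(rr_ms, dtype=float)
    diff = np.diff(rr)
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),           # average heart rate
        "sdnn": rr.std(ddof=1),                       # overall variability
        "rmssd": np.sqrt(np.mean(diff ** 2)),         # short-term (beat-to-beat) variability
        "pnn50": np.mean(np.abs(diff) > 50) * 100.0,  # % of successive differences > 50 ms
    }

# toy RR series around 800 ms (~75 bpm)
rng = np.random.default_rng(2)
rr = 800 + rng.normal(0, 40, size=120)
features = hrv_features(rr)
print(features)

# hypothetical selection rule: lower RMSSD (more stress) -> switch to a relaxing playlist
playlist = "relaxing" if features["rmssd"] < 25 else "neutral"
print("suggested playlist:", playlist)
```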

10.
Emotion recognition of music objects is a promising and important research issue in the field of music information retrieval. Usually, music emotion recognition can be considered a training/classification problem. However, even given a benchmark (training data with ground truth) and effective classification algorithms, music emotion recognition remains challenging. Most previous relevant work focuses only on acoustic music content without considering individual differences (i.e., personalization issues). In addition, assessment of emotions is usually self-reported (e.g., emotion tags), which might introduce inaccuracy and inconsistency. Electroencephalography (EEG) is a non-invasive brain-machine interface that allows external machines to sense neurophysiological signals from the brain without surgery. Such unintrusive EEG signals, captured from the central nervous system, have been utilized for exploring emotions. This paper proposes an evidence-based and personalized model for music emotion recognition. In the training phase, for model construction and personalized adaptation, we use the IADS (the International Affective Digitized Sound system, a set of acoustic emotional stimuli for experimental investigations of emotion and attention) to construct two generic predictive models, ANN1 (“EEG recordings of a standardized group vs. emotions”) and ANN2 (“music audio content vs. emotion”), both trained with artificial neural networks. We then collect a subject’s EEG recordings while listening to the selected IADS samples and apply ANN1 to determine the subject’s emotion vector. With the generic model and the corresponding individual differences, we construct the personalized model H by projective transformation. In the testing phase, given a music object, the processing steps are: (1) extract features from the music audio content, (2) apply ANN2 to calculate the vector in the arousal-valence emotion space, and (3) apply the transformation matrix H to determine the personalized emotion vector. Moreover, for a music object of moderate length, we apply a sliding window to obtain a sequence of personalized emotion vectors, which are fitted and organized into an emotion trail that reveals the dynamics of the affective content of the music object. Experimental results suggest that the proposed approach is effective.
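A compact sketch of the described pipeline under simplifying assumptions: a generic audio-to-emotion regressor plays the role of ANN2, pairs of generic and subject-specific emotion vectors are used to fit a transformation H by least squares (an affine stand-in for the paper's projective transformation), and the personalized emotion vector is obtained by applying H to the generic prediction. All data here are synthetic.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# ANN2: generic mapping from audio features to (arousal, valence) -- synthetic training data
X_audio, Y_av = rng.normal(size=(300, 20)), rng.uniform(-1, 1, size=(300, 2))
ann2 = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=3).fit(X_audio, Y_av)

# Personalization: pair generic predictions with this subject's own emotion vectors
# (synthetic here; in the paper these come from the subject's EEG via ANN1 on IADS clips)
generic = ann2.predict(rng.normal(size=(40, 20)))
personal = generic @ np.array([[0.9, 0.1], [-0.1, 1.1]]) + 0.05   # pretend individual difference

# Fit an affine transform H (the paper uses a projective transform; affine is a simpler stand-in)
A = np.hstack([generic, np.ones((len(generic), 1))])
H, *_ = np.linalg.lstsq(A, personal, rcond=None)

def personalized_emotion(audio_features):
    g = ann2.predict(audio_features.reshape(1, -1))
    return np.hstack([g, np.ones((1, 1))]) @ H                    # personalized (arousal, valence)

print(personalized_emotion(rng.normal(size=20)))
```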

11.
Human emotion recognition using brain signals is an active research topic in the field of affective computing. Music is considered a powerful tool for arousing emotions in human beings. This study recognized happy, sad, love, and anger emotions in response to audio music tracks from the electronic, rap, metal, rock, and hip-hop genres. Participants were asked to listen to a one-minute audio track for each genre in a noise-free environment. The main objectives of this study were to determine the effect of different genres of music on human emotions and to identify the age group that is most responsive to music. Thirty men and women of three different age groups (15–25 years, 26–35 years, and 36–50 years) underwent the experiment, which also included self-reported emotional state after listening to each type of music. Features from three different domains, i.e., time, frequency, and wavelet, were extracted from the recorded EEG signals and then used by the classifier to recognize human emotions. The results show that MLP gives the best accuracy in recognizing human emotion in response to audio music tracks using hybrid features of brain signals. It is also observed that the rock and rap genres generated happy and sad emotions, respectively, in the subjects under study. The brain signals of the 26–35 years age group gave the best emotion recognition accuracy in accordance with the self-reported emotions.
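A minimal sketch of hybrid feature extraction and MLP classification for single-channel EEG epochs, assuming a 128 Hz sampling rate: time-domain statistics plus band powers from a Welch spectrum feed an MLP. Wavelet-domain features, the study's actual channels and dataset are omitted, and all numbers are placeholders.

```python
import numpy as np
from scipy.signal import welch
from sklearn.neural_network import MLPClassifier

FS = 128  # sampling rate in Hz (assumed)

def eeg_features(epoch):
    """Hybrid time + frequency features for one single-channel EEG epoch."""
    feats = [epoch.mean(), epoch.std(), np.ptp(epoch)]              # time domain
    f, psd = welch(epoch, fs=FS, nperseg=FS * 2)
    for lo, hi in [(4, 8), (8, 13), (13, 30), (30, 45)]:            # theta/alpha/beta/gamma
        band = (f >= lo) & (f < hi)
        feats.append(np.trapz(psd[band], f[band]))                  # band power
    return np.array(feats)
    # wavelet-domain features (e.g. wavelet-decomposition energies) would be appended here

rng = np.random.default_rng(4)
X = np.array([eeg_features(rng.normal(size=FS * 10)) for _ in range(160)])   # 10-s epochs (synthetic)
y = rng.integers(0, 4, size=160)                                    # happy/sad/love/anger labels

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=4).fit(X, y)
print("train accuracy:", clf.score(X, y))
```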

12.

Recommender systems have become ubiquitous over the last decade, providing users with personalized search results, video streams, news excerpts, and purchasing hints. Human emotions are widely regarded as important predictors of behavior and preference. They are a crucial factor in decision making, but until recently relatively little has been known about the effectiveness of using human emotions to personalize real-world recommender systems. In this paper we introduce the Emotion Aware Recommender System (EARS), a large-scale system for recommending news items using users’ self-assessed emotional reactions. Our original contributions include the formulation of a multi-dimensional model of emotions for news item recommendation, the introduction of affective item features that can be used to describe recommended items, the construction of affective similarity measures, and validation of the EARS on a large corpus of real-world Web traffic. We collect over 13,000,000 page views from 2,700,000 unique users of two news sites and gather over 160,000 emotional reactions to 85,000 news articles. We discover that incorporating pleasant emotions into collaborative filtering recommendations consistently outperforms all other algorithms. We also find that targeting recommendations by selected emotional reactions is a promising direction for further research. As an additional contribution we share our experiences in designing and developing a real-world emotion-based recommendation engine, pointing out various challenges posed by the practical aspects of deploying emotion-based recommenders.
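The paper's affective item features and similarity measures are its own contribution; the sketch below shows one plausible instantiation, in which each item's self-assessed reactions are normalized into an affective profile and items are compared by cosine similarity. The emotion dimensions and reaction counts are made up for illustration.

```python
import numpy as np

EMOTIONS = ["joy", "sadness", "anger", "fear", "surprise"]   # dimensions are illustrative

def affective_profile(reactions):
    """Normalize an item's counts of self-assessed reactions into a distribution."""
    v = np.array([reactions.get(e, 0) for e in EMOTIONS], dtype=float)
    return v / v.sum() if v.sum() > 0 else v

def affective_similarity(a, b):
    """Cosine similarity between two items' affective profiles."""
    na, nb = np.linalg.norm(a), np.linalg.norm(b)
    return float(a @ b / (na * nb)) if na > 0 and nb > 0 else 0.0

item_a = affective_profile({"joy": 120, "surprise": 30})
item_b = affective_profile({"joy": 80, "sadness": 10, "surprise": 25})
item_c = affective_profile({"anger": 60, "fear": 40})
print(affective_similarity(item_a, item_b))   # high: both mostly pleasant reactions
print(affective_similarity(item_a, item_c))   # low: opposite affective profiles
```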


13.
With the continuing advance of music technology research, music emotion recognition has been widely practiced and applied in music recommendation, music psychotherapy, the construction of sound-and-light scenes, and other areas. Simulating the process by which humans perceive the emotions expressed by music, and addressing the long-range dependency and low training efficiency of long short-term memory networks in music emotion recognition, a new network model called CBSA (CNN BiLSTM self attention) is proposed and applied to regression training for long-range music emotion recognition. The model uses a two-dimensional convolutional neural network to capture local key features of musical emotion, a bidirectional long short-term memory network to extract sequential emotion information from these local features, and a self-attention model to dynamically reweight the sequential information and highlight the global key points of musical emotion. Experimental results show that the CBSA model shortens the training time needed to learn the patterns in music emotion data and effectively improves the accuracy of music emotion recognition.
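A rough PyTorch sketch of the architecture family named above (2-D CNN, then BiLSTM, then attention over time steps feeding a regression head). The layer sizes, the simple learned attention pooling, and the two-dimensional output are assumptions for illustration and not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class CBSA(nn.Module):
    """CNN -> BiLSTM -> attention pooling -> regression head (a rough sketch of the CBSA idea)."""
    def __init__(self, n_mels=64, hidden=128, out_dim=2):
        super().__init__()
        self.cnn = nn.Sequential(                       # local key features from the spectrogram
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),                       # pool frequency, keep time resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        self.lstm = nn.LSTM(input_size=32 * (n_mels // 4), hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)            # learned attention scores over time
        self.head = nn.Linear(2 * hidden, out_dim)      # e.g. arousal / valence regression

    def forward(self, x):                               # x: (batch, 1, n_mels, time)
        h = self.cnn(x)                                 # (batch, 32, n_mels//4, time)
        b, c, f, t = h.shape
        h = h.permute(0, 3, 1, 2).reshape(b, t, c * f)  # sequence of per-frame features
        h, _ = self.lstm(h)                             # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)          # dynamic weights over time steps
        return self.head((w * h).sum(dim=1))            # weighted summary -> emotion values

model = CBSA()
dummy = torch.randn(4, 1, 64, 200)                      # batch of 4 log-mel spectrograms (synthetic)
print(model(dummy).shape)                               # torch.Size([4, 2])
```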

14.
15.
16.

Emotion is considered a physiological state that appears whenever an individual observes a change in their environment or body. A review of the literature shows that combining the electrical activity of the brain with other physiological signals for the accurate analysis of human emotions is yet to be explored in greater depth. On the basis of physiological signals, this work proposes a model using machine learning approaches for the calibration of music mood and human emotion. The proposed model consists of three phases: (a) prediction of the mood of a song from its audio signals, (b) prediction of human emotion from physiological signals recorded with EEG, GSR, ECG, and a pulse detector, and (c) mapping between music mood and human emotion and classifying them in real time. Extensive experiments have been conducted on different music mood and human emotion datasets for influential feature extraction, training, testing, and performance evaluation. An effort has been made to observe and measure human emotions with a reasonable degree of accuracy and efficiency by recording a person’s bio-signals in response to music. Further, to test the applicability of the proposed work, playlists are generated based on the user’s real-time emotion, determined from features generated by different physiological sensors, and the mood depicted by musical excerpts. This work could prove helpful for improving mental and physical health by scientifically analyzing physiological signals.


17.
A music recommendation system is a new type of Web service that recommends music a user is likely to enjoy from a candidate library based on the user's historical browsing data. The key to such a system is classifying the entire database by musical style. Based on this, a new music feature processing method is proposed to classify the music library and thereby enable effective music recommendation. The method first builds a conventional music feature dataset for the candidate library, then performs attribute reduction on the dataset based on fractal theory to obtain a recommendation feature vector for each piece of music, and defines a new distance measure tailored to the characteristics of these feature vectors. In experiments on a music database covering six styles, simulation results demonstrate the effectiveness of the proposed recommendation features and distance measure. Compared with existing content-based music retrieval research, the use of the recommendation features greatly reduces database storage requirements, which makes the approach valuable for the Web deployment of music recommendation systems.

18.
Previous studies examining the intention and level of music piracy among young people have postulated that collective attitudes, optimistic biases toward risk, and beliefs about copyright laws are the key factors in their decisions. We extend the analysis by studying the impact of music streaming services. Music streaming is an alternative business model in which, for a small fee, the consumer has access to a large set of songs without downloading them onto their devices. Results from a Logit model show that college students who are frequent users of music streaming are also more likely to download music illegally. A plausible explanation is that those engaged in music streaming are also heavy users of computer technology, software downloading, and digital sharing, factors that facilitate the conditions for music piracy. Demographics, collective attitudes, and beliefs about risk and rewards continue to play a key role in explaining music piracy, but their relevance is slightly reduced once music streaming usage is controlled for.
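For readers unfamiliar with Logit estimation, the sketch below fits a logistic regression on synthetic survey-style data with statsmodels. The variables (streaming use, risk bias, age) and effect sizes are invented stand-ins, not the study's dataset or results.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(5)
n = 500
# hypothetical survey variables (not the paper's actual dataset)
streaming = rng.integers(0, 2, size=n)          # frequent music-streaming user
risk_bias = rng.normal(size=n)                  # optimistic bias toward risk
age = rng.integers(18, 25, size=n)

# simulate a piracy outcome with a positive streaming effect, then re-estimate it
logit_p = -1.0 + 0.8 * streaming + 0.5 * risk_bias + 0.02 * (age - 21)
pirated = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(np.column_stack([streaming, risk_bias, age]))
model = sm.Logit(pirated, X).fit(disp=False)
print(model.params)                              # const, streaming, risk_bias, age coefficients
```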

19.
彭程  常相茂  仇媛 《计算机应用》2020,40(5):1539-1544
Existing sleep-monitoring research mainly proposes non-intrusive methods for monitoring sleep quality, while methods for actively regulating sleep quality have received little attention. Studies of mental state and sleep staging based on heart rate variability (HRV) analysis focus mostly on acquiring these two kinds of information, which requires wearing professional medical devices, and they rarely apply or act on the acquired information. Music can serve as a non-pharmacological way to address sleep problems, but existing music recommendation methods do not consider individual differences in sleep and mental state. To address these problems, a mobile-device-based music recommendation system driven by mental stress and sleep state is proposed. First, a smartwatch collects photoplethysmography signals to extract features and compute heart rate; second, the collected signals are transmitted to a smartphone via Bluetooth, and the phone assesses the user's mental stress and sleep state from these signals to play regulating music; finally, music is recommended according to the individual's nightly sleep-onset time. Experimental results show that after using the sleep music recommendation system, users' total sleep duration increased by 11.0% compared with before.

20.
Due to content overload, users have difficulty selecting items. Social cataloging services allow users to consume items and share their opinions, which influences not only the users themselves but also other users in choosing new items. A recommendation system reduces the problem of choice by recommending items based on people's behavior and the characteristics of the items. In this study, we propose a tag-based recommendation method that considers the emotions reflected in users' tags. Since a user evaluates an item after consuming it, the feelings obtained during consumption are directly reflected in ratings and tags. The rating carries the overall valence toward the item, while the tags represent the detailed feelings. Therefore, we assume that a user's rating for an item is the basic emotion of the tags attached to the item, and that the emotion of each tag is adjusted by the tag's unique emotion value. We represent the relationships between users, items, and tags as a third-order tensor and apply tensor factorization. The experimental results show that the proposed method achieves better recommendation performance than the baselines.
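A small sketch of third-order tensor factorization in the CP (canonical polyadic) form, trained by stochastic gradient descent over observed (user, item, tag, value) entries. The rank, learning rate, regularization, and scoring rule are illustrative assumptions rather than the paper's exact factorization model.

```python
import numpy as np

def cp_sgd(entries, shape, rank=4, lr=0.02, reg=0.01, epochs=200, seed=0):
    """Fit a CP factorization of a sparse 3rd-order tensor by SGD.

    entries: list of (user, item, tag, value) observations, e.g. emotion-adjusted ratings.
    Returns factor matrices U (users x rank), I (items x rank), T (tags x rank).
    """
    rng = np.random.default_rng(seed)
    U, I, T = (0.1 * rng.normal(size=(d, rank)) for d in shape)
    for _ in range(epochs):
        for u, i, t, v in entries:
            pred = np.sum(U[u] * I[i] * T[t])          # CP reconstruction of one entry
            err = v - pred
            U[u] += lr * (err * I[i] * T[t] - reg * U[u])
            I[i] += lr * (err * U[u] * T[t] - reg * I[i])
            T[t] += lr * (err * U[u] * I[i] - reg * T[t])
    return U, I, T

# toy observations: (user, item, tag, emotion-adjusted rating)
obs = [(0, 0, 1, 4.5), (0, 1, 0, 2.0), (1, 0, 1, 4.0), (2, 2, 2, 3.5), (1, 2, 0, 1.5)]
U, I, T = cp_sgd(obs, shape=(3, 3, 3))
# score of user 0 for item 2 aggregated over all tags (one simple ranking criterion)
print(sum(np.sum(U[0] * I[2] * T[t]) for t in range(3)))
```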
