Similar Documents
1.
Financially motivated kernels based on EURUSD currency data are constructed from limit order book volumes, commonly used technical analysis methods and canonical market microstructure models—the latter in the form of Fisher kernels. These kernels are used through their incorporation into support vector machines (SVM) to predict the direction of price movement for the currency over multiple time horizons. Multiple kernel learning is used to replicate the signal combination process that trading rules embody when they aggregate multiple sources of financial information. Significant outperformance relative to both the individual SVM and benchmarks is found, along with an indication of which features are the most informative for financial prediction tasks. An average accuracy of 55% is achieved when classifying the direction of price movement into one of three categories for a 200 s predictive time horizon.
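A minimal sketch of the kernel-combination idea, assuming scikit-learn precomputed kernels; the feature matrices, labels and weights below are placeholders, not the paper's data or learned weighting.

```python
# Sketch only: several Gram matrices combined with fixed weights and fed to an
# SVM with a precomputed kernel; a true MKL step would learn `weights`.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_book = rng.normal(size=(300, 10))     # e.g. limit-order-book volume features
X_tech = rng.normal(size=(300, 5))      # e.g. technical-analysis indicators
y = rng.integers(0, 3, size=300)        # direction: down / flat / up

kernels = [rbf_kernel(X_book), rbf_kernel(X_tech), linear_kernel(X_tech)]
weights = np.array([0.5, 0.3, 0.2])     # placeholder kernel weights
K = sum(w * k for w, k in zip(weights, kernels))

clf = SVC(kernel="precomputed").fit(K, y)
print("train accuracy:", clf.score(K, y))
```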

2.
Federated Learning (FL) is an emerging distributed machine learning paradigm that allows mobile devices to collaboratively train a global model in a decentralized way while keeping the training data on the devices. However, because thousands of heterogeneous distributed end devices participate in FL tasks, communication efficiency is a major challenge for FL. To address this problem, FL based on edge computing, i.e. edge federated learning, has been proposed: edge nodes near the end devices perform the distribution and aggregation of model parameters, thereby reducing communication time. Despite these substantial benefits, incentive mechanisms for multi-task edge federated learning have not yet been well addressed. This paper therefore proposes an incentive mechanism that combines contract theory with matching games; experimental results on three datasets verify the effectiveness of the proposed incentive mechanism and matching algorithm.
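An illustrative sketch of the hierarchical aggregation that edge federated learning relies on, assuming simple sample-count-weighted averaging (FedAvg style); the contract-theory and matching-game incentive mechanism itself is not modeled, and all shapes and counts are made up.

```python
# Sketch: devices send updates to their edge node, edge nodes aggregate locally,
# and the cloud averages the edge models into a global model.
import numpy as np

def weighted_average(params, counts):
    counts = np.asarray(counts, dtype=float)
    return sum(p * c for p, c in zip(params, counts)) / counts.sum()

# Each edge node serves a group of devices; each device holds (model, n_samples).
edge_groups = [
    [(np.array([0.9, 1.1]), 120), (np.array([1.0, 0.8]), 80)],   # edge node 1
    [(np.array([1.2, 1.0]), 200), (np.array([0.7, 1.3]), 100)],  # edge node 2
]

edge_models, edge_counts = [], []
for group in edge_groups:
    models, counts = zip(*group)
    edge_models.append(weighted_average(models, counts))  # edge-level aggregation
    edge_counts.append(sum(counts))

global_model = weighted_average(edge_models, edge_counts)  # cloud-level aggregation
print(global_model)
```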

3.
4.
In this paper, we present an approach to object recognition and scene understanding which integrates low-level image processing and high-level knowledge-based components. A novel machine learning system is presented which is used to acquire knowledge relating to a specific task. Learned feedback from high-level to low-level processes is introduced as a means of achieving robust task-specific segmentation. The system has been implemented and trained on a number of scenarios with differing tasks from which results are presented and discussed.

5.
Datasets with missing values are frequent in real-world classification problems. Imputation of missing values can naturally be viewed as a set of secondary tasks, while classification remains the main purpose of any machine learning system dealing with such datasets. Consequently, Multi-Task Learning (MTL) schemes offer an interesting alternative approach to solving missing data problems. In this paper, we propose an MTL-based method for training and operating a modified Multi-Layer Perceptron (MLP) architecture to work in incomplete data contexts. The proposed approach achieves a balance between classification and imputation by exploiting the advantages of MTL. Extensive experimental comparisons with well-known imputation algorithms show that this approach provides excellent results. The method is never worse than the traditional algorithms, an important robustness property, and it clearly outperforms them on several problems.
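A minimal sketch of the general idea under assumed architecture choices (not the authors' exact modified MLP): a shared hidden layer with one head for classification and one head for reconstructing the inputs, with missing entries zero-filled and flagged by a mask.

```python
# Sketch: joint classification + imputation with a shared representation.
import torch
import torch.nn as nn

class MTLImputeNet(nn.Module):
    def __init__(self, n_features, n_classes, hidden=32):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(2 * n_features, hidden), nn.ReLU())
        self.cls_head = nn.Linear(hidden, n_classes)    # main task
        self.imp_head = nn.Linear(hidden, n_features)   # secondary (imputation) task

    def forward(self, x_filled, mask):
        h = self.shared(torch.cat([x_filled, mask], dim=1))
        return self.cls_head(h), self.imp_head(h)

def mtl_loss(logits, y, x_hat, x_filled, mask, alpha=0.5):
    cls = nn.functional.cross_entropy(logits, y)
    # Reconstruction loss only on observed entries (mask == 1).
    imp = ((x_hat - x_filled) ** 2 * mask).sum() / mask.sum().clamp(min=1)
    return cls + alpha * imp

# Toy usage
x = torch.randn(64, 8)
mask = (torch.rand(64, 8) > 0.2).float()   # 1 = observed, 0 = missing
x_filled = x * mask
y = torch.randint(0, 2, (64,))
model = MTLImputeNet(8, 2)
logits, x_hat = model(x_filled, mask)
mtl_loss(logits, y, x_hat, x_filled, mask).backward()
```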

6.
To address the slow training of the genetic-algorithm-based selective-ensemble back-propagation neural network (GASEN-BPNN) model and the unstable modeling accuracy of the selective-ensemble extreme learning machine (GASEN-ELM) model, a genetic-algorithm-based selective-ensemble kernel extreme learning machine (GASEN-KELM) modeling method is proposed. The method first obtains sub-model training sets by randomly sampling the training data; it then builds candidate sub-models with the kernel extreme learning machine (KELM) algorithm, which offers good generalization and stability, and selects the best sub-models with a standard genetic algorithm toolbox according to a preset threshold and an evolutionary strategy; finally, the GASEN-KELM model is obtained by simple average-weighted ensembling. The effectiveness of the proposed method is verified on a standard concrete compressive strength dataset, and comparisons with the GASEN-BPNN and GASEN-ELM selective-ensemble algorithms show that the proposed method achieves a good balance between learning speed and prediction stability.
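A bare-bones kernel extreme learning machine (KELM), the kind of candidate sub-model the genetic algorithm would select among; the dataset, kernel width and regularization constant C below are placeholders.

```python
# Sketch of a single KELM sub-model with the standard closed-form solution.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

class KELM:
    def __init__(self, C=1.0, gamma=0.1):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X_ = X
        K = rbf_kernel(X, X, gamma=self.gamma)
        # beta = (K + I/C)^{-1} y, the usual KELM output weights.
        self.beta_ = np.linalg.solve(K + np.eye(len(X)) / self.C, y)
        return self

    def predict(self, X):
        return rbf_kernel(X, self.X_, gamma=self.gamma) @ self.beta_

# Toy regression usage (e.g. concrete compressive strength prediction)
rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 8)), rng.normal(size=100)
print(KELM(C=10.0).fit(X, y).predict(X[:5]))
```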

7.
王源  陈亚军  蔡彪 《微机发展》2006,16(8):51-54
In nonlinear classification algorithms, one of the key techniques is the use of the kernel trick, which remains an open problem. To relax the strong constraints imposed by Mercer kernels, conditionally positive definite (c.p.d.) kernels are introduced. Experiments show that discriminant methods such as KPCA and KDDA retain their inherent classification performance when c.p.d. kernels are used, thereby extending the range of applicability of these methods.
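A small illustration of the idea rather than the paper's experiments: the kernel k(x, y) = -||x - y||^2 is only conditionally positive definite, yet after the centering that kernel PCA performs internally it behaves like a positive definite kernel, so KPCA can still be run through a precomputed Gram matrix.

```python
# Sketch: kernel PCA on a conditionally positive definite (non-Mercer) kernel.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import euclidean_distances

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))

K_cpd = -euclidean_distances(X, squared=True)   # c.p.d. kernel, not PSD itself
Z = KernelPCA(n_components=2, kernel="precomputed").fit_transform(K_cpd)
print(Z.shape)   # (200, 2): projections in the implicitly defined feature space
```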

8.
In nonlinear classification algorithms, one of the key techniques is the use of the kernel trick, which remains an open problem. To relax the strong constraints imposed by Mercer kernels, conditionally positive definite (c.p.d.) kernels are introduced. Experiments show that discriminant methods such as KPCA and KDDA retain their inherent classification performance when c.p.d. kernels are used, thereby extending the range of applicability of these methods.

9.
To address the complexity and slow inference of traditional deep convolutional neural network models, a face attribute recognition method based on multi-task learning is proposed. A lightweight residual module is used to build the backbone network, and shared branch networks are designed according to the correlations among attribute groups, greatly reducing the number of network parameters and the computational cost. The parameters of the branch networks and the backbone are jointly optimized in a multi-task learning manner, and the features shared by correlated attributes are exploited for face attribute recognition. A weighted cross-entropy loss is used to supervise the training of the network model and to mitigate the imbalance between positive and negative samples. Experimental results on the public CelebA dataset show that the method achieves a recognition error rate as low as 8.45%, requires only 2.7 MB of storage, and takes as little as 15 ms per image on a CPU, making it easy to deploy on resource-constrained mobile or portable devices and giving it practical application value.
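A sketch of the loss idea only, with the network omitted: multi-label attribute prediction with per-attribute positive weights to counter class imbalance. Computing the weights from label frequencies is an assumption, not necessarily the paper's exact weighting scheme.

```python
# Sketch: weighted binary cross-entropy for imbalanced multi-label face attributes.
import torch
import torch.nn as nn

labels = torch.randint(0, 2, (256, 40)).float()     # 40 binary face attributes
logits = torch.randn(256, 40, requires_grad=True)   # stand-in for network output

pos_freq = labels.mean(dim=0).clamp(1e-3, 1 - 1e-3)
pos_weight = (1 - pos_freq) / pos_freq              # rarer positives weigh more

criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)
loss = criterion(logits, labels)
loss.backward()
print(loss.item())
```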

10.
Traditional reinforcement learning algorithms usually assume that both the state space and the action space are discrete, whereas in practice many problems have continuous state spaces, which greatly limits the practical application of reinforcement learning. To overcome this limitation, this paper proposes a kernel-based reinforcement learning algorithm that can directly handle problems with continuous state spaces. The algorithm is validated on the mountain car problem, which has a continuous state space and a discrete action space. Experiments show that, compared with the traditional approach of first discretizing the state space, the algorithm converges to a better policy with less training data.

11.
In this paper, we formulate object tracking in a particle filter framework as a structured multi-task sparse learning problem, which we denote as Structured Multi-Task Tracking (S-MTT). Since we model particles as linear combinations of dictionary templates that are updated dynamically, learning the representation of each particle is considered a single task in Multi-Task Tracking (MTT). By employing popular sparsity-inducing $\ell_{p,q}$ mixed norms (specifically $p\in\{2,\infty\}$ and $q=1$), we regularize the representation problem to enforce joint sparsity and learn the particle representations together. As compared to previous methods that handle particles independently, our results demonstrate that mining the interdependencies between particles improves tracking performance and overall computational complexity. Interestingly, we show that the popular $L_1$ tracker (Mei and Ling, IEEE Trans Pattern Anal Mach Intell 33(11):2259–2272, 2011) is a special case of our MTT formulation (denoted as the $L_{11}$ tracker) when $p=q=1$. Under the MTT framework, some of the tasks (particle representations) are often more closely related and more likely to share common relevant covariates than other tasks. Therefore, we extend the MTT framework to take into account pairwise structural correlations between particles (e.g. spatial smoothness of representation) and denote the novel framework as S-MTT. The problem of learning the regularized sparse representation in MTT and S-MTT can be solved efficiently using an Accelerated Proximal Gradient (APG) method that yields a sequence of closed form updates. As such, S-MTT and MTT are computationally attractive. We test our proposed approach on challenging sequences involving heavy occlusion, drastic illumination changes, and large pose variations. Experimental results show that S-MTT is much better than MTT, and both methods consistently outperform state-of-the-art trackers.
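A generic accelerated proximal gradient sketch for an $\ell_{2,1}$-regularized multi-task least-squares problem, illustrating the joint-sparsity machinery the tracker builds on; it is not the tracking code itself, and the data and regularization weight are placeholders.

```python
# Sketch: APG (FISTA-style) for min 0.5*||X W - Y||_F^2 + lam*||W||_{2,1}.
import numpy as np

def prox_l21(W, t):
    # Row-wise group soft-thresholding: prox of t * sum_i ||W_{i,:}||_2.
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    return W * np.maximum(1 - t / np.maximum(norms, 1e-12), 0)

def apg(X, Y, lam=0.1, n_iter=200):
    L = np.linalg.norm(X, 2) ** 2          # Lipschitz constant of the gradient
    W = Z = np.zeros((X.shape[1], Y.shape[1]))
    t = 1.0
    for _ in range(n_iter):
        grad = X.T @ (X @ Z - Y)
        W_next = prox_l21(Z - grad / L, lam / L)
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        Z = W_next + (t - 1) / t_next * (W_next - W)   # momentum step
        W, t = W_next, t_next
    return W

rng = np.random.default_rng(0)
X, Y = rng.normal(size=(80, 50)), rng.normal(size=(80, 10))
W = apg(X, Y)
print("nonzero rows:", int((np.linalg.norm(W, axis=1) > 1e-8).sum()))
```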

12.
We study support vector machines (SVM) for which the kernel matrix is not specified exactly and is only known to belong to a given uncertainty set. We consider uncertainties that arise from two sources: (i) data measurement uncertainty, which stems from the statistical errors of input samples; (ii) kernel combination uncertainty, which stems from the weights of the individual kernels that need to be optimized in a multiple kernel learning (MKL) problem. Previous work has studied uncertainty sets that allow the corresponding SVMs to be reformulated as semi-definite programs (SDPs), which are, however, very computationally expensive. Our focus in this paper is to identify uncertainty sets that allow the corresponding SVMs to be reformulated as second-order cone programs (SOCPs), since both the worst-case complexity and the practical computational effort required to solve SOCPs are at least an order of magnitude less than those needed to solve SDPs of comparable size. In the main part of the paper we propose four uncertainty sets that meet this criterion. Experimental results are presented to confirm the validity of these SOCP reformulations.
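A toy second-order cone formulation, meant only to show why SOCPs are cheap to state and solve: a linear robust SVM under spherical input uncertainty of radius rho. This is not the paper's kernel-matrix uncertainty set, and the data, rho and C are made up.

```python
# Sketch: robust soft-margin linear SVM as an SOCP in cvxpy.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, d, rho, C = 60, 5, 0.1, 1.0
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0] + 0.3 * rng.normal(size=n))

w, b, xi = cp.Variable(d), cp.Variable(), cp.Variable(n, nonneg=True)
constraints = [
    # Worst-case margin over a ball of radius rho around each sample.
    cp.multiply(y, X @ w + b) >= 1 - xi + rho * cp.norm(w, 2)
]
prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi)),
                  constraints)
prob.solve()
print("status:", prob.status)
```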

13.
Domain adaptation aims to correct the mismatch in statistical properties between the source domain on which a classifier is trained and the target domain to which the classifier is to be applied. In this paper, we address the challenging scenario of unsupervised domain adaptation, where the target domain does not provide any annotated data to assist in adapting the classifier. Our strategy is to learn robust features which are resilient to the mismatch across domains and then use them to construct classifiers that will perform well on the target domain. To this end, we propose novel kernel learning approaches to infer such features for adaptation. Concretely, we explore two closely related directions. In the first direction, we propose unsupervised learning of a geodesic flow kernel (GFK). The GFK summarizes the inner products in an infinite sequence of feature subspaces that smoothly interpolates between the source and target domains. In the second direction, we propose supervised learning of a kernel that discriminatively combines multiple base GFKs. Those base kernels model the source and the target domains at fine-grained granularities. In particular, each base kernel pivots on a different set of landmarks—the most useful data instances that reveal the similarity between the source and the target domains, thus bridging them to achieve adaptation. Our approaches are computationally convenient, automatically infer important hyper-parameters, and are capable of learning features and classifiers discriminatively without demanding labeled data from the target domain. In extensive empirical studies on standard benchmark recognition datasets, our approaches yield state-of-the-art results compared to a variety of competing methods.

14.
Kernel methods are a class of well established and successful algorithms for pattern analysis thanks to their mathematical elegance and good performance. Numerous nonlinear extensions of pattern recognition techniques have been proposed so far based on the so-called kernel trick. The objective of this paper is twofold. First, we derive an additional kernel tool that is still missing, namely the kernel quadratic discriminant (KQD). We discuss different formulations of KQD based on the regularized kernel Mahalanobis distance in both complete and class-related subspaces. Second, we propose suitable extensions of kernel linear and quadratic discriminants to indefinite kernels. We provide classifiers that are applicable to kernels defined by any symmetric similarity measure. This is important in practice because problem-suited proximity measures often violate the requirement of positive definiteness. As in the traditional case, KQD can be advantageous for data with unequal class spreads in the kernel-induced spaces, which cannot be well separated by a linear discriminant. We illustrate this on artificial and real data for both positive definite and indefinite kernels.

15.
In this paper, we propose a multi-task system that can identify dish types, food ingredients, and cooking methods from food images with deep convolutional neural networks. We built a dataset of 360 classes of different foods with at least 500 images per class. To reduce noise in the data, which was collected from the Internet, outlier images were detected and eliminated through a one-class SVM trained on deep convolutional features. We simultaneously trained a dish identifier, a cooking method recognizer, and a multi-label ingredient detector, which share a few low-level layers in the deep network architecture. The proposed framework shows higher accuracy than traditional methods with handcrafted features, and the cooking method recognizer and ingredient detector can be applied to dishes that are not included in the training dataset to provide reference information for users.
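A sketch of the data-cleaning step only: fit a one-class SVM on the feature vectors of one food class and drop the images it flags as outliers. The random vectors stand in for CNN embeddings, and the nu value is an assumption.

```python
# Sketch: one-class SVM outlier filtering on deep feature vectors.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
features = rng.normal(size=(500, 128))            # deep features of one class
features[:10] += 6.0                              # a few off-topic images

detector = OneClassSVM(nu=0.05, kernel="rbf", gamma="scale").fit(features)
keep = detector.predict(features) == 1            # +1 = inlier, -1 = outlier
clean_features = features[keep]
print("kept", int(keep.sum()), "of", len(features), "images")
```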

16.
Kernel methods have been widely applied in machine learning to solve complex nonlinear problems. Kernel selection is one of the key issues in kernel methods, since it is vital for improving generalization performance. Traditionally, the selected kernel is restricted to be positive definite, which partially limits the applicability of these methods. In many real applications such as gene identification and object recognition, indefinite kernels frequently emerge and can achieve better performance. However, compared to positive definite ones, indefinite kernels are more complicated due to the non-convexity of the subsequent optimization problems, which renders most existing kernel algorithms inapplicable. Some indefinite kernel methods have been proposed based on the dual of the support vector machine (SVM); they mostly emphasize how to make the non-convex optimization convex by using positive definite kernels to approximate indefinite ones. In fact, a duality gap usually exists in SVM in the case of indefinite kernels, and therefore these algorithms do not truly solve the indefinite kernel problem itself. In this paper, we present a novel framework for indefinite kernel learning derived directly from the primal of SVM, which establishes several new models not only for a single indefinite kernel but also for multiple indefinite kernel scenarios. Several algorithms are developed to handle the non-convex optimization problems in these models. We further provide a constructive approach for kernel selection in the algorithms by using the theory of similarity functions. Experiments on real-world datasets demonstrate the superiority of our models.

17.
Shot boundary detection is a fundamental step of video indexing. One crucial issue of this step is discriminating abrupt shot changes from flashlights, because flashlights often induce false shot boundaries. The support vector machine (SVM) is a supervised learning technique for data classification. In this paper, we propose an SVM-based technique to detect flashlights in video. Our approach to flashlight detection is based on the facts that a flashlight is short in duration and that the video content before and after a flashlight should be similar. Therefore, we design a sliding window in the temporal domain to monitor the instantaneous video variation and extract color and edge features to compare the visual content of two video segments. An SVM is then employed to classify the luminance variation into flashlight or shot cut. Experimental results indicate that the proposed approach is effective and outperforms some existing techniques.
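A rough sketch of the feature idea under stated assumptions: the "frames" are stand-in arrays, the feature is a simple grey-level histogram difference rather than the paper's exact color and edge features, and the labels are random. It compares the segments before and after the centre of a sliding temporal window and classifies the variation with an SVM.

```python
# Sketch: sliding-window content-similarity features + SVM (flashlight vs. cut).
import numpy as np
from sklearn.svm import SVC

def hist_feature(frames, bins=16):
    # Average grey-level histogram over a short video segment.
    hists = [np.histogram(f, bins=bins, range=(0, 255), density=True)[0]
             for f in frames]
    return np.mean(hists, axis=0)

def window_feature(window):
    # Flashlights keep the two halves of the window similar; real cuts do not.
    half = len(window) // 2
    return np.abs(hist_feature(window[:half]) - hist_feature(window[half:]))

rng = np.random.default_rng(0)
windows = [rng.integers(0, 256, size=(10, 32, 32)) for _ in range(100)]
labels = rng.integers(0, 2, size=100)             # 0 = flashlight, 1 = shot cut
X = np.array([window_feature(w) for w in windows])
clf = SVC(kernel="rbf").fit(X, labels)
print("train accuracy:", clf.score(X, labels))
```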

18.
We have applied the inductive learning of statistical decision trees and relaxation labeling to the Natural Language Processing (NLP) task of morphosyntactic disambiguation (Part Of Speech Tagging). The learning process is supervised and obtains a language model oriented to resolve POS ambiguities, consisting of a set of statistical decision trees expressing distribution of tags and words in some relevant contexts. The acquired decision trees have been directly used in a tagger that is both relatively simple and fast, and which has been tested and evaluated on the Wall Street Journal (WSJ) corpus with competitive accuracy. However, better results can be obtained by translating the trees into rules to feed a flexible relaxation labeling based tagger. In this direction we describe a tagger which is able to use information of any kind (n-grams, automatically acquired constraints, linguistically motivated manually written constraints, etc.), and in particular to incorporate the machine-learned decision trees. Simultaneously, we address the problem of tagging when only limited training material is available, which is crucial in any process of constructing, from scratch, an annotated corpus. We show that high levels of accuracy can be achieved with our system in this situation, and report some results obtained when using it to develop a 5.5 million words Spanish corpus from scratch.

19.
A Learning Approach to Robotic Table Tennis
We propose a method of controlling a table tennis robot so as to return the incoming ball to a desired point on the table with specified flight duration. The proposed method consists of the following three input–output maps implemented by means of locally weighted regression: 1) a map for predicting the impact time of the ball hit by the paddle and the ball position and velocity at that moment according to input vectors describing the state of the incoming ball; 2) a map representing a change in ball velocities before and after impact; and 3) a map giving the relation between the ball velocity just after impact and the landing point and time of the returned ball. We also propose a feed-forward control scheme based on iterative learning control to accurately achieve the stroke movement of the paddle as determined by using these maps. Experimental results including rallies with a human opponent are also reported to demonstrate the effectiveness of our approach.
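A minimal locally weighted regression predictor in the spirit of the input-output maps described above; the Gaussian bandwidth and the toy data are assumptions, not the robot's actual ball-state measurements.

```python
# Sketch: locally weighted (affine) regression around a query point.
import numpy as np

def lwr_predict(Xtr, ytr, x_query, bandwidth=0.5):
    # Gaussian weights centred on the query point.
    w = np.exp(-np.sum((Xtr - x_query) ** 2, axis=1) / (2 * bandwidth ** 2))
    A = np.hstack([Xtr, np.ones((len(Xtr), 1))])        # local affine model
    sw = np.sqrt(w)
    theta, *_ = np.linalg.lstsq(A * sw[:, None], ytr * sw, rcond=None)
    return np.append(x_query, 1.0) @ theta

# Toy map: ball state -> predicted impact time (fabricated data, shapes only)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))            # e.g. ball position and velocity
y = X @ np.array([0.2, -0.1, 0.05, 0.3]) + 0.01 * rng.normal(size=200)
print(lwr_predict(X, y, X[0]))
```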

20.
A Doubly Sparse Relational Learning Algorithm Based on Multiple Kernel Learning
In relational learning, samples cannot be represented in R^n space. This makes relational learning very different from other machine learning problems and exceptionally hard to solve, because the geometric structure of R^n cannot be exploited. We apply multiple kernel learning to relational learning. First, we prove that when multiple kernel learning is performed with kernel matrices generated from logical rules, every kernel can be equivalently reduced to a linear kernel. On this basis, rules are generated iteratively with a modified FOIL algorithm, the corresponding linear kernels are constructed, and multiple kernel optimization is then performed, thereby realizing a linear classifier in the feature space induced by the rules. The algorithm has a "double sparsity" property: it yields both support vectors and support rules at the same time. Furthermore, we prove that multiple kernel learning in the rule-induced feature space can be reduced to a squared l1 SVM, a new type of SVM proposed here for the first time. Comparative experiments on six biochemical and cheminformatics datasets show not only a clear improvement in prediction accuracy but also smaller rule sets with more direct interpretations.
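An illustration of the rule-induced feature space only, with FOIL and the multiple kernel optimization omitted: each learned rule becomes a binary feature, the linear kernel on those features is precomputed, and a standard SVM is trained on it. The rule-firing matrix and labels below are fabricated.

```python
# Sketch: linear kernel over binary rule-activation features + precomputed SVM.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_rules = 150, 12
R = rng.integers(0, 2, size=(n_samples, n_rules)).astype(float)  # rule firings
y = (R[:, 0] + R[:, 3] >= 1).astype(int)                         # toy labels

K = R @ R.T                                   # linear kernel induced by the rules
clf = SVC(kernel="precomputed").fit(K, y)
# Weight vector in rule space; its nonzero components play the role of "support rules".
w_rules = (clf.dual_coef_ @ R[clf.support_]).ravel()
print("support vectors:", len(clf.support_),
      "active rule features:", int((np.abs(w_rules) > 1e-6).sum()))
```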
