Similar Literature
20 similar documents retrieved.
1.
Gupta  Umesh  Gupta  Deepak 《Applied Intelligence》2021,51(10):7058-7093

Better prediction ability is the main objective of any regression-based model. The Large margin Distribution Machine for Regression (LDMR) is an efficient approach that minimizes both the ε-insensitive loss and the quadratic loss in order to diminish the effect of outliers. However, it still has a significant drawback: high computational complexity. To achieve better generalization of the regression model at a lower computational cost, we propose an enhanced form of LDMR, named Least Squares Large margin Distribution Machine-based Regression (LS-LDMR), obtained by relaxing the inequality constraints to equality constraints. The solution is obtained by solving a system of linear equations, so only a matrix inverse needs to be computed. Hence, there is no need to solve a large quadratic programming problem, unlike other regression algorithms such as SVR, Twin SVR, and LDMR. Numerical experiments have been performed on benchmark real-life datasets as well as synthetically generated datasets, using linear and Gaussian kernels. All experiments on the presented LS-LDMR are compared against standard SVR, Twin SVR, primal least squares Twin SVR (PLSTSVR), ε-Huber SVR (ε-HSVR), ε-support vector quantile regression (ε-SVQR), minimum deviation regression (MDR), and LDMR, which shows the effectiveness and usability of LS-LDMR. The approach is also statistically validated and verified in terms of various metrics.
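A minimal sketch of the kind of least-squares kernel solve the abstract refers to, using the classical LS-SVM regression system as a stand-in (this is not the authors' exact LS-LDMR formulation; the kernel choice, regularization parameter C, and system layout below are illustrative assumptions):

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq = np.sum(A**2, axis=1)[:, None] + np.sum(B**2, axis=1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def ls_kernel_regression_fit(X, y, C=10.0, gamma=0.5):
    """Fit by solving one linear (KKT) system instead of a QP:
    [[0, 1^T], [1, K + I/C]] [b; alpha] = [0; y]  (LS-SVM regression form)."""
    n = X.shape[0]
    K = rbf_kernel(X, X, gamma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / C
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]          # alpha, b

def ls_kernel_regression_predict(X_train, alpha, b, X_new, gamma=0.5):
    """Predict with f(x) = sum_i alpha_i * k(x_i, x) + b."""
    return rbf_kernel(X_new, X_train, gamma) @ alpha + b

# Example usage on a toy 1-D regression problem
X = np.linspace(0, 3, 40).reshape(-1, 1)
y = np.sin(2 * X).ravel() + 0.05 * np.random.randn(40)
alpha, b = ls_kernel_regression_fit(X, y)
y_hat = ls_kernel_regression_predict(X, alpha, b, X)
```

The point of such formulations is exactly the one made in the abstract: the equality constraints turn training into a single dense linear solve rather than a quadratic program.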


2.
Zhao  Guodong  Wu  Yan 《Neural Processing Letters》2019,50(2):1257-1279

Supervised feature extraction aims to find a discriminative low-dimensional space in which samples of the same class cluster tightly and samples of different classes stay far apart. For most algorithms, it is difficult during the transformation to push samples that lie on the class margin or inside another class (called hard samples in this paper) back towards their own class. These hard samples frequently degrade the performance of most methods, so handling them well is important for an effective method. However, few methods in recent years have been specifically proposed to address the hard-sample problem. In this study, large margin nearest neighbor (LMNN) and weighted local modularity (WLM) from complex network analysis are introduced to deal with these hard samples, pushing them quickly towards their class while samples with the same label shrink into the class as a whole, which results in small within-class distances and a large margin between classes. Combining WLM with LMNN, a novel feature extraction method named WLMLMNN is proposed, which takes into account both the global and local consistency of the input data in the projected space. Comparative experiments with other popular methods on various real-world data sets demonstrate the effectiveness of the proposed method.
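For reference, the standard LMNN objective that this method builds on pulls target neighbours together and pushes differently labelled impostors outside a unit margin (shown here in its usual form; the paper combines it with a WLM term whose exact expression is not given in the abstract):

```latex
\min_{L}\;\sum_{i,\;j \rightsquigarrow i}\bigl\lVert L(x_i - x_j)\bigr\rVert^2
\;+\; \mu \sum_{i,\;j \rightsquigarrow i}\;\sum_{l:\,y_l \neq y_i}
\Bigl[\,1 + \lVert L(x_i - x_j)\rVert^2 - \lVert L(x_i - x_l)\rVert^2\Bigr]_{+}
```

Here $j \rightsquigarrow i$ denotes that $x_j$ is a target neighbour of $x_i$, $[\cdot]_+ = \max(0, \cdot)$, and $L$ is the learned linear transformation.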


3.
Understanding the empirical success of boosting algorithms is an important theoretical problem in machine learning. One of the most influential works is the margin theory, which provides a series of upper bounds on the generalization error of any voting classifier in terms of the margins of the training data. Recently, an equilibrium margin (Emargin) bound was proposed that is sharper than previously known margin bounds. In this paper, we conduct extensive experiments to test the Emargin theory. Specifically, we develop an efficient algorithm that, given a boosting classifier (or a voting classifier in general), learns a new voting classifier which usually has a smaller Emargin bound. We then compare the performance of the two classifiers and find that the new classifier often has smaller test error, which agrees with what the Emargin theory predicts.
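The (normalized) margin these bounds are stated in terms of is, for a voting classifier $f(x)=\sum_t \alpha_t h_t(x)$ with weights $\alpha_t \ge 0$, base classifiers $h_t$ taking values in $\{-1,+1\}$, and a labelled example $(x_i, y_i)$:

```latex
\operatorname{margin}(x_i, y_i) \;=\; \frac{y_i \sum_t \alpha_t\, h_t(x_i)}{\sum_t \alpha_t}
```

It lies in $[-1, 1]$ and is positive exactly when the example is classified correctly; the Emargin bound refines how the whole distribution of these margins enters the generalization bound.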

4.
This paper proposes a new classifier called density-induced margin support vector machines (DMSVMs). DMSVMs belong to a family of SVM-like classifiers and thus inherit good properties from support vector machines (SVMs), e.g., a unique and global solution and a sparse representation of the decision function. For a given data set, DMSVMs require extracting relative density degrees for all training data points; these density degrees can be taken as relative margins of the corresponding training data points. Moreover, we propose a method for estimating relative density degrees using the K nearest neighbor method. We also derive and prove an upper bound on the leave-one-out error of DMSVMs for a binary classification problem. Promising results are obtained on toy as well as real-world data sets.
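A small sketch of a K-nearest-neighbour density degree of the kind the abstract mentions (the paper's exact formula may differ; this is one plausible, illustrative choice):

```python
import numpy as np

def relative_density_degrees(X, k=5):
    """Illustrative KNN-based relative density degrees: points in dense regions
    receive values near 1, isolated points receive smaller values."""
    diffs = X[:, None, :] - X[None, :, :]
    D = np.sqrt((diffs ** 2).sum(axis=2))               # pairwise Euclidean distances
    np.fill_diagonal(D, np.inf)                         # exclude each point itself
    knn_mean = np.sort(D, axis=1)[:, :k].mean(axis=1)   # mean distance to k nearest neighbours
    return knn_mean.min() / knn_mean                    # relative degree in (0, 1]

# Example: the degrees could then scale the per-sample margins in an SVM-like trainer
X = np.vstack([np.random.randn(30, 2), np.random.randn(5, 2) * 3 + 6])
degrees = relative_density_degrees(X, k=5)
```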

5.

The generalized eigenvalue proximal support vector regression machine (GEPSVR) is an effective kernel regression algorithm, but solving its optimization problem easily leads to singularity issues. To address this, a support vector regression machine based on eigenvalue decomposition, abbreviated IGEPSVR, is proposed. Compared with GEPSVR, the main advantages of IGEPSVR are: a new distance metric criterion is obtained by combining the maximum margin criterion with the geometric idea of GEPSVR; a Tikhonov regularization term is introduced into the optimization model, which overcomes the potential singularity problem; and IGEPSVR only needs to solve two standard eigenvalue problems, which reduces the computational complexity. Experimental results show that, compared with the GEPSVR algorithm, IGEPSVR not only improves the learning ability but also shortens the training time.


6.
Uniform offsetting is an important geometric operation for computer-aided design and manufacturing (CAD/CAM) applications such as rapid prototyping, NC machining, coordinate measuring machines, robot collision avoidance, and Hausdorff error calculation. We present a novel method for offsetting (growing and shrinking) a solid model by an arbitrary distance r. First, offset polygons are computed directly for each face, edge, and vertex of the input solid model. The computed polygonal meshes form a continuous boundary; however, such a boundary is invalid, since it contains meshes that are closer to the original model than the given distance r as well as self-intersections. Based on the problematic polygonal meshes, we construct a well-structured point-based model, the Layered Depth-Normal Image (LDNI), in three orthogonal directions. The accuracy of the generated point-based model can be controlled by setting the tessellation and sampling rates during the construction process. We then process all the sampling points in the model with a set of point filters to delete all the invalid points. Based on the remaining points, we construct a two-manifold polygonal contour as the resulting offset boundary. Our method is general, simple and efficient. We report experimental results on a variety of CAD models and discuss various applications of the developed uniform offsetting method.
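An illustrative version of the point-filtering idea described above, simplified to a point-cloud setting (the paper operates on an LDNI representation; the distance query and tolerance here are assumptions for the sketch):

```python
import numpy as np

def filter_offset_samples(candidates, boundary_points, r, tol=1e-3):
    """Drop candidate offset samples that lie closer to the original boundary
    than the offset distance r. The original model is approximated here by a
    dense point sampling of its boundary."""
    d = np.linalg.norm(candidates[:, None, :] - boundary_points[None, :, :], axis=2)
    nearest = d.min(axis=1)                   # distance of each sample to the model
    return candidates[nearest >= r - tol]     # keep only valid offset samples
```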

7.
Recently, due to the imprecise nature of the data generated by a variety of streaming applications, such as sensor networks, query processing on uncertain data streams has become an important problem. However, all the existing work on uncertain data streams studies unbounded streams. In this paper, we take the first step towards the important and challenging problem of answering sliding-window queries on uncertain data streams, with a focus on one of the most important types of queries: top-k queries. It is nontrivial to find an efficient solution for answering sliding-window top-k queries on uncertain data streams, because challenges not only stem from the strict space and time requirements of processing both arriving and expiring tuples in high-speed streams, but also arise from the exponential blowup in the number of possible worlds induced by the uncertain data model. In this paper, we design a unified framework for processing sliding-window top-k queries on uncertain streams. We show that all the existing top-k definitions in the literature can be plugged into our framework, resulting in several succinct synopses that use space much smaller than the window size while being highly efficient in terms of processing time. We also extend our framework to answering multiple top-k queries. In addition to the theoretical space and time bounds that we prove for these synopses, we present a thorough experimental report to verify their practical efficiency on both synthetic and real data.

8.
In this paper, we present an a posteriori error analysis for the finite element approximation of a variational inequality. We derive a posteriori error estimators of residual type, which are shown to provide upper bounds on the discretization error for a class of variational inequalities provided the solutions are sufficiently regular. Furthermore, we derive sharp a posteriori error estimators with both lower and upper error bounds for a subclass of the obstacle problem that is frequently met in many physical models. For sufficiently regular solutions, these estimates are shown to be equivalent to the discretization error in an energy-type norm. Our numerical tests show that these sharp error estimators are both reliable and efficient in guiding mesh adaptivity for computing the free boundaries.
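As a point of reference, a typical residual-type estimator for a model second-order problem $-\Delta u = f$ has the form below (the paper's estimators for variational inequalities and the obstacle problem contain additional, constraint-related terms):

```latex
\eta_T^2 \;=\; h_T^2\,\bigl\lVert f + \Delta u_h \bigr\rVert_{L^2(T)}^2
\;+\; \tfrac12 \sum_{E \subset \partial T} h_E\,
\bigl\lVert\, [\![\,\partial_n u_h\,]\!] \,\bigr\rVert_{L^2(E)}^2 ,
\qquad
\eta \;=\; \Bigl(\sum_{T \in \mathcal{T}_h} \eta_T^2\Bigr)^{1/2}
```

The first term measures the element residual of the PDE and the second the jumps of the discrete normal derivative across inter-element edges; mesh adaptivity refines the elements with the largest $\eta_T$.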

9.
We introduce a new method to separate counting classes of a special type by oracles. Among the classes for which this method is applicable are NP, coNP, US (also called 1-NP), ⊕P, all other MOD-classes, PP, and C=P, classes of Boolean hierarchies over the named classes, classes of finite acceptance type, and many more. As an important special case, we completely characterize all relativizable inclusions between the classes NP(k) from the Boolean hierarchy over NP and other classes defined by what we call bounded counting.

10.

Applying deep neural networks (DNNs) in mobile and safety-critical systems, such as autonomous vehicles, demands a reliable and efficient execution on hardware. The design of the neural architecture has a large influence on the achievable efficiency and bit error resilience of the network on hardware. Since there are numerous design choices for the architecture of DNNs, with partially opposing effects on the preferred characteristics (such as small error rates at low latency), multi-objective optimization strategies are necessary. In this paper, we develop an evolutionary optimization technique for the automated design of hardware-optimized DNN architectures. For this purpose, we derive a set of inexpensively computable objective functions, which enable the fast evaluation of DNN architectures with respect to their hardware efficiency and error resilience. We observe a strong correlation between predicted error resilience and actual measurements obtained from fault injection simulations. Furthermore, we analyze two different quantization schemes for efficient DNN computation and find one providing a significantly higher error resilience compared to the other. Finally, a comparison of the architectures provided by our algorithm with the popular MobileNetV2 and NASNet-A models reveals an up to seven times improved bit error resilience of our models. We are the first to combine error resilience, efficiency, and performance optimization in a neural architecture search framework.
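A skeleton of the kind of multi-objective evolutionary loop the abstract describes (illustrative only: the objective functions, mutation operator, and selection details below are placeholders, not the paper's actual operators):

```python
import random

def dominates(a, b):
    """Pareto dominance for minimisation over tuples of objective values."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def evolve(population, mutate, objectives, generations=50):
    """Multi-objective evolutionary search skeleton. `objectives(arch)` is assumed
    to return a tuple of cheap estimates, e.g. (predicted_error, latency_proxy,
    bit_error_resilience_proxy); `mutate` produces a perturbed architecture."""
    pop = list(population)
    size = len(pop)
    for _ in range(generations):
        children = [mutate(random.choice(pop)) for _ in range(size)]
        candidates = pop + children
        scored = [(c, objectives(c)) for c in candidates]
        # keep the non-dominated front, padded with remaining candidates if needed
        front = [c for c, s in scored
                 if not any(dominates(other, s) for _, other in scored)]
        pop = (front + [c for c in candidates if c not in front])[:size]
    return pop
```

Cheap proxy objectives are what make such a loop practical, since each generation evaluates many candidate architectures.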


11.
Xu  Xianghua  Zhao  Chengwei  Jiang  Zichen  Cheng  Zongmao  Chen  Jinjun 《World Wide Web》2020,23(2):1361-1380

Barrier coverage is an important sensor deployment issue in many industrial, consumer and military applications, and barrier coverage in bistatic radar sensor networks has recently attracted many researchers. A bistatic radar (BR) consists of a radar signal transmitter and a radar signal receiver. The effective detection area of a bistatic radar is a Cassini oval determined by the distance between transmitter and receiver and a predefined detection SNR threshold. Most existing work on bistatic radar barrier coverage focuses on homogeneous radar sensor networks; however, cooperation among sensors of different types or with different physical parameters is necessary in many practical application scenarios. In this paper, we study the optimal deployment problem in heterogeneous bistatic radar networks. The objective is to maximize the detection ability of a bistatic radar barrier given the number of radar sensors and the barrier length. First, we investigate the optimal placement strategy for a single transmitter with multiple receivers and propose patterns of aggregate deployment. Then we study the optimal deployment of heterogeneous transmitters and receivers and introduce optimal placement sequences for them. Finally, we design an efficient greedy algorithm that realizes the optimal barrier deployment of M heterogeneous transmitters and N receivers on a boundary of length L and maximizes the detection ability of the barrier. We theoretically prove that the placement sequence constructed by the algorithm is the optimal deployment solution for a heterogeneous bistatic radar sensor barrier, and we validate the effectiveness of the algorithm through comprehensive simulation experiments.
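In the usual bistatic radar model (stated here for context; the paper's exact SNR constants are not given in the abstract), a point p is covered by a transmitter T and a receiver R when the product of its distances to the two nodes stays below a threshold derived from the SNR requirement:

```latex
\lVert p - T\rVert \cdot \lVert p - R\rVert \;\le\; c,
\qquad
c \;=\; \sqrt{\tfrac{K}{\gamma}}
```

Here $\gamma$ is the detection SNR threshold and $K$ collects transmit power and antenna constants; the boundary of this region is a Cassini oval with foci T and R, which is why receiver placement relative to the transmitter shapes the coverage of the barrier.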


12.
程昊翔  王坚 《控制与决策》2016,31(5):949-952
To improve the generalization ability of the twin support vector machine, a new twin large margin distribution machine algorithm is proposed, which increases the influence of the margin distribution on the trained model. Theoretical studies show that the margin distribution has a very important influence on the generalization performance of a model. The algorithm adds the influence of the margin distribution to the optimization objective of the standard twin support vector machine, where the margin distribution is characterized by first- and second-order statistics of the data. Experimental results on standard data sets show that the proposed algorithm achieves higher classification accuracy than the SVM, TWSVM and TBSVM algorithms.
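The first- and second-order margin statistics referred to above are, in the large margin distribution machine literature, the margin mean and margin variance over the training set (shown here in the standard single-hyperplane form for reference; the twin formulation applies analogous terms to each of its two hyperplanes):

```latex
\bar{\gamma} \;=\; \frac{1}{n}\sum_{i=1}^{n} y_i\,(w^\top x_i + b),
\qquad
\hat{\gamma} \;=\; \frac{1}{n}\sum_{i=1}^{n}\bigl(y_i\,(w^\top x_i + b) - \bar{\gamma}\bigr)^2
```

The training objective then adds a term of the form $-\lambda_1 \bar{\gamma} + \lambda_2 \hat{\gamma}$ to the usual regularized hinge-loss objective, rewarding a large margin mean and penalizing margin variance.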

13.
We consider the problem [art gallery problem (AGP)] of minimizing the number of vertex guards required to monitor an art gallery whose boundary is an n‐vertex simple polygon. In this paper, we compile and extend our research on exact approaches for solving the AGP. In prior works, we proposed and tested an exact algorithm for the case of orthogonal polygons. In that algorithm, a discretization that approximates the polygon is used to formulate an instance of the set cover problem, which is subsequently solved to optimality. Either the set of guards that characterizes this solution solves the original instance of the AGP, and the algorithm halts, or the discretization is refined and a new iteration begins. This procedure always converges to an optimal solution of the AGP and, moreover, the number of iterations executed highly depends on the way we discretize the polygon. Notwithstanding that the best known theoretical bound for convergence is Θ(n³) iterations, our experiments show that an optimal solution is always found within a small number of them, even for random polygons of many hundreds of vertices. Herein, we broaden the family of polygon classes to which the algorithm is applied by including non‐orthogonal polygons. Furthermore, we propose new discretization strategies leading to additional trade‐off analysis of preprocessing vs. processing times and achieving, in the case of the novel Convex Vertices strategy, the most efficient overall performance so far. We report on experiments with both simple and orthogonal polygons of up to 2500 vertices showing that, in all cases, no more than 15 minutes are needed to reach an exact solution, on a standard desktop computer. Ultimately, we more than doubled the size of the largest instances solved to optimality compared with our previous experiments, which were already five times larger than those previously reported in the literature.
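The discretize-solve-refine loop described above can be summarized as follows (all helper functions are hypothetical placeholders standing in for the paper's components, e.g. an ILP solver for the set-cover step and a visibility check for coverage):

```python
def solve_agp(polygon, discretize, solve_set_cover, uncovered_regions, refine):
    """Skeleton of the iterative exact scheme: discretize the polygon into
    witness points, solve the induced set-cover instance to optimality,
    and refine the discretization until the chosen guards see everything."""
    witnesses = discretize(polygon)
    while True:
        guards = solve_set_cover(polygon, witnesses)   # optimal for the discretized instance
        missing = uncovered_regions(polygon, guards)   # parts of the polygon not yet covered
        if not missing:
            return guards                              # also optimal for the original AGP
        witnesses = refine(witnesses, missing)         # enlarge the witness set and repeat
```

The choice of `discretize` and `refine` is exactly the discretization strategy whose trade-offs the paper analyzes.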

14.
We study heuristic learnability of classes of Boolean formulas, a model proposed by Pitt and Valiant. In this type of example-based learning of a concept class C by a hypothesis class H, the learner seeks a hypothesis h ∈ H that agrees with all of the negative (resp. positive) examples and with a maximum number of positive (resp. negative) examples. This learning task is equivalent to maximizing agreement with a training sample under the constraint that the misclassifications be limited to examples with positive (resp. negative) labels. Several recent papers have studied the more general problem of maximizing agreement without this one-sided error constraint. We show that for many classes (though not all), the maximum agreement problem with one-sided error is more difficult than the general maximum agreement problem. We then provide lower bounds on the approximability of these one-sided error problems for many concept classes, including Halfspaces, Decision Lists, XOR, k-term DNF, and neural nets.

15.

In this work, we investigate the challenging problem of estimating credit risk measures of portfolios with exposure concentration under the multi-factor Gaussian and multi-factor t-copula models. Monte Carlo (MC) methods are well known to be computationally demanding in these situations. We present efficient and robust numerical techniques based on Haar wavelet theory for recovering the cumulative distribution function of the loss variable from its characteristic function. To the best of our knowledge, this is the first time that multi-factor t-copula models are considered outside the MC framework. The analysis of the approximation error and the results obtained in the numerical experiments section show a reliable and useful machinery for credit risk capital measurement purposes in line with Pillar II of the Basel Accords.
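For context, recovering a distribution function from its characteristic function classically rests on the Gil-Pelaez inversion formula, which the Haar-wavelet machinery in the paper approximates numerically:

```latex
F_L(x) \;=\; \frac{1}{2} \;-\; \frac{1}{\pi}\int_0^{\infty}
\operatorname{Im}\!\Bigl(\frac{e^{-\mathrm{i}tx}\,\varphi_L(t)}{t}\Bigr)\,\mathrm{d}t
```

Here $\varphi_L$ is the characteristic function of the portfolio loss variable L and $F_L$ its cumulative distribution function, from which risk measures such as VaR quantiles can then be read off.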

16.
刘忠宝  王士同 《控制与决策》2012,27(12):1870-1875
Inspired by spatial geometry and by the beam angle from optics, a beam-angle-based maximum margin learning machine (BAMLM) is proposed. The method tries to find a "light source" in the pattern space that illuminates the two classes of samples separately, and then determines the class membership of a sample according to the region it is illuminated in. Analysis shows that the kernelized form of BAMLM is equivalent to a kernelized center-constrained minimum enclosing ball (CCMEB); by introducing the core vector machine, BAMLM is extended to a core-vector-machine-based BAMLM (BACVM), which effectively solves the classification problem for large-scale samples. Experiments on standard and synthetic data sets demonstrate the effectiveness of BAMLM and BACVM.

17.
Redko  Ievgen  Habrard  Amaury  Sebban  Marc 《Machine Learning》2019,108(8-9):1635-1652

In many real-world applications, it may be desirable to benefit from a classifier trained on a given source task from some largely annotated dataset in order to address a different but related target task for which only weakly labeled data are available. Domain adaptation (DA) is the framework which aims at leveraging the statistical similarities between the source and target distributions to learn well. Current theoretical results show that the efficiency of DA algorithms depends on (i) their capacity of minimizing the divergence between the source and target domains and (ii) the existence of a good hypothesis that commits few errors in both domains. While most of the work in DA has focused on new divergence measures, the second aspect, often modeled as the capability term, remains surprisingly under-investigated. In this paper, we show that the problem of the best joint hypothesis estimation can be reformulated using a Wasserstein distance-based error function in the context of multi-source DA. Based on this idea, we provide a theoretical analysis of the capability term and derive inequalities allowing us to estimate it from finite samples. We empirically illustrate the proposed idea on different data sets.
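The "good joint hypothesis" requirement referred to above is usually made precise through the λ term of the classical domain adaptation bound (shown here in its generic form; the paper re-expresses this term with a Wasserstein-distance-based error function in the multi-source setting):

```latex
\varepsilon_T(h) \;\le\; \varepsilon_S(h) \;+\; d(\mathcal{D}_S, \mathcal{D}_T) \;+\; \lambda,
\qquad
\lambda \;=\; \min_{h' \in \mathcal{H}} \bigl(\varepsilon_S(h') + \varepsilon_T(h')\bigr)
```

Here $\varepsilon_S$ and $\varepsilon_T$ denote the source and target errors and $d(\cdot,\cdot)$ is a divergence between the two domains; λ is the capability term that the paper shows how to estimate from finite samples.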


18.
We study the applicability of the discontinuous Petrov–Galerkin (DPG) variational framework for thin-body problems in structural mechanics. Our numerical approach is based on discontinuous piecewise polynomial finite element spaces for the trial functions and approximate, local computation of the corresponding 'optimal' test functions. For the Timoshenko beam problem, the proposed method is shown to provide the best approximation in an energy-type norm which is equivalent to the L2-norm for all the unknowns, uniformly with respect to the thickness parameter. The same formulation remains valid also for the asymptotic Euler–Bernoulli solution. As another one-dimensional model problem we consider the modelling of the so-called basic edge effect in shell deformations. In particular, we derive a special norm for the test space which leads to a robust method in terms of the shell thickness. Finally, we demonstrate how an a posteriori error estimator arising directly from the discontinuous variational framework can be utilized to generate an optimal hp-mesh for resolving the boundary layer.

19.
We tackle the structured output classification problem using Conditional Random Fields (CRFs). Unlike the standard 0/1 loss case, we consider a cost-sensitive learning setting where we are given a non-0/1 misclassification cost matrix at the individual output level. Although cost-sensitive classification has many interesting practical applications that retain domain-specific scales in the output space (e.g., hierarchical or ordinal scales), most CRF learning algorithms cannot effectively deal with cost-sensitive scenarios, as they merely assume a nominal scale (hence 0/1 loss) in the output space. In this paper, we incorporate the cost-sensitive loss into the large margin learning framework. By large margin learning, the proposed algorithm inherits most benefits of SVM-like margin-based classifiers, such as provable generalization error bounds. Moreover, the soft-max approximation employed in our approach yields a convex optimization similar to standard CRF learning, with only a slight modification of the potential functions. We also provide a theoretical cost-sensitive generalization error bound. We demonstrate the improved prediction performance of the proposed method over existing approaches on a diverse set of sequence/image structured prediction problems that often arise in pattern recognition and computer vision domains.
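A generic way to write the cost-sensitive large-margin requirement for structured outputs, together with the soft-max relaxation mentioned above (stated in standard structured-prediction notation as an illustration; the paper's exact potentials and cost decomposition are not reproduced here):

```latex
\forall\, y' :\quad
w^\top \phi(x, y) \;-\; w^\top \phi(x, y') \;\ge\; \Delta(y, y') \;-\; \xi
```

The resulting structured hinge loss $\max_{y'}\bigl[\Delta(y,y') + w^\top\phi(x,y')\bigr] - w^\top\phi(x,y)$ is upper-bounded by its soft-max counterpart $\log \sum_{y'} \exp\bigl(\Delta(y,y') + w^\top \phi(x,y')\bigr) - w^\top \phi(x,y)$, which is smooth and convex in $w$ and can be evaluated with CRF-style inference.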

20.
In this paper we address several issues arising from a singularly perturbed fourth-order problem with a small parameter ε. First, we introduce a new family of non-conforming elements. We then prove that the corresponding finite element method is robust with respect to the parameter ε and uniformly convergent of order h^{1/2}. In addition, we analyze the effect of treating the Neumann boundary condition weakly by Nitsche's method. We show that such treatment is superior when the parameter ε is smaller than the mesh size h and obtain sharper error estimates. Such error analysis is not restricted to the proposed elements and can easily be carried out for other elements as long as the Neumann boundary condition is imposed weakly. Finally, we discuss the local error estimates and the pollution effect of the boundary layers in the interior of the domain.
