Search results: 13 articles.
1.
In this brief, prior knowledge over general nonlinear sets is incorporated into nonlinear kernel classification problems as linear constraints in a linear program. These linear constraints are imposed at arbitrary points, not necessarily where the prior knowledge is given. The key tool in this incorporation is a theorem of the alternative for convex functions that converts nonlinear prior knowledge implications into linear inequalities without the need to kernelize these implications. Effectiveness of the proposed formulation is demonstrated on publicly available classification data sets, including a cancer prognosis data set. Nonlinear kernel classifiers for these data sets exhibit marked improvements upon the introduction of nonlinear prior knowledge compared to nonlinear kernel classifiers that do not utilize such knowledge.
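For intuition, one way such knowledge constraints can be imposed "at arbitrary points" is to sample points from the knowledge region and add one linear inequality on the classifier parameters (u, γ) per sampled point. The sketch below illustrates only that discretization step with a Gaussian kernel; the function and parameter names (`knowledge_constraints`, `g`, `sampler`) are illustrative, not from the paper, and the paper's theorem-of-the-alternative machinery is not reproduced here.

```python
import numpy as np

def knowledge_constraints(A, g, sampler, n_pts, gamma_kernel=1.0, seed=0):
    # Sample candidate points and keep those inside the knowledge region
    # {x : g(x) <= 0}, on which the classifier K(x, A) @ u - gamma should be >= 1.
    rng = np.random.default_rng(seed)
    pts = np.array([p for p in sampler(rng, n_pts) if g(p) <= 0])
    # Gaussian kernel row K(x, A) for each retained knowledge point x.
    K = np.exp(-gamma_kernel * ((pts[:, None, :] - A[None, :, :]) ** 2).sum(-1))
    # Each row [K(x, A), -1] encodes the linear inequality K(x,A) u - gamma >= 1
    # in the variables z = (u, gamma), ready to append to a linear program.
    return np.hstack([K, -np.ones((len(pts), 1))])
```

Each returned row would be added to the classification linear program as a constraint on (u, γ), alongside the usual data-fitting constraints.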
2.
Robust linear and support vector regression (cited 5 times; 0 self-citations, 5 by others)
The robust Huber M-estimator, a differentiable cost function that is quadratic for small errors and linear otherwise, is modeled exactly, in the original primal space of the problem, by an easily solvable simple convex quadratic program for both linear and nonlinear support vector estimators. Previous models were significantly more complex or formulated in the dual space, and most involved specialized numerical algorithms for solving the robust Huber linear estimator. Numerical test comparisons with these algorithms indicate the computational effectiveness of the new quadratic programming model for both linear and nonlinear support vector problems. Results are shown on problems with as many as 20000 data points, with considerably faster running times on larger problems.
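To illustrate why the Huber cost is robust (quadratic center, linear tails that cap an outlier's influence), here is a minimal linear Huber fit by iteratively reweighted least squares. This is only a sketch of the estimator itself, not the paper's quadratic-programming model:

```python
import numpy as np

def huber_fit(x, y, delta=1.0, iters=50):
    # Huber M-estimator for a line y ~ w[0]*x + w[1] via iteratively
    # reweighted least squares: weight 1 for residuals below delta,
    # delta/|r| above it (quadratic center, linear tails).
    X1 = np.column_stack([x, np.ones(len(x))])
    w = np.linalg.lstsq(X1, y, rcond=None)[0]          # ordinary LS start
    for _ in range(iters):
        r = y - X1 @ w
        s = np.where(np.abs(r) <= delta, 1.0,
                     delta / np.maximum(np.abs(r), 1e-12))
        # weighted normal equations: (X^T S X) w = X^T S y
        w = np.linalg.solve(X1.T @ (X1 * s[:, None]), X1.T @ (s * y))
    return w
```

On data with a gross outlier, the outlier's weight shrinks like delta/|r|, so the fitted slope stays close to the clean-data slope where ordinary least squares would be pulled away.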
3.
Mathematical Programming in Data Mining (cited 14 times; 0 self-citations, 14 by others)
Mathematical programming approaches to three fundamental problems will be described: feature selection, clustering and robust representation. The feature selection problem considered is that of discriminating between two sets while recognizing irrelevant and redundant features and suppressing them. This creates a lean model that often generalizes better to new unseen data. Computational results on real data confirm improved generalization of leaner models. Clustering is exemplified by the unsupervised learning of patterns and clusters that may exist in a given database and is a useful tool for knowledge discovery in databases (KDD). A mathematical programming formulation of this problem is proposed that is theoretically justifiable and computationally implementable in a finite number of steps. A resulting k-Median Algorithm is utilized to discover very useful survival curves for breast cancer patients from a medical database. Robust representation is concerned with minimizing trained model degradation when applied to new problems. A novel approach is proposed that purposely tolerates a small error in the training process in order to avoid overfitting data that may contain errors. Examples of applications of these concepts are given.
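The k-Median Algorithm mentioned above alternates two finite steps: assign each point to its nearest center in the 1-norm, then move each center to the coordinate-wise median of its assigned points. A minimal numpy sketch (the initialization scheme and `init` parameter are illustrative choices):

```python
import numpy as np

def k_median(X, k, iters=20, init=None, seed=0):
    # Alternate: (1) assign each point to the nearest center in the 1-norm,
    # (2) move each center to the coordinate-wise median of its points.
    # Both steps decrease the sum of 1-norm distances, so the loop terminates
    # in a finite number of distinct configurations.
    rng = np.random.default_rng(seed)
    if init is None:
        init = X[rng.choice(len(X), k, replace=False)]
    centers = np.array(init, dtype=float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        d = np.abs(X[:, None, :] - centers[None, :, :]).sum(axis=2)  # 1-norm
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = np.median(X[labels == j], axis=0)
    return centers, labels
```

On two well-separated clusters the centers converge to the clusters' coordinate-wise medians within a few sweeps.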
4.
Oxidative DNA damage by a model Cr(V) complex, [CrO(ehba)2]-, with and without added H2O2, was investigated for the formation of base and sugar products derived from C1', C4', and C5' hydrogen atom abstraction mechanisms. EPR studies with 5,5-dimethylpyrroline N-oxide (DMPO) have shown that Cr(V)-ehba alone can oxidize the spin trap via a direct chromium pathway, whereas reactions of Cr(V)-ehba in the presence of H2O2 generated the hydroxyl radical. Direct (or metal-centered) Cr(V)-ehba oxidation of single-stranded (ss) and double-stranded (ds) calf thymus DNA demonstrated the formation of thiobarbituric acid-reactive species (TBARS) and glycolic acid in an O2-dependent manner, consistent with abstraction of the C4' H atom. A minor C1' H atom abstraction mechanism was also observed for direct Cr(V) oxidation of DNA, but no C5' H atom abstraction product was observed. Direct Cr(V) oxidation of ss- and ds-DNA also caused the release of all four nucleic acid bases with a preference for the pyrimidines cytosine and thymine in ds-DNA, but no base release preference was observed in ss-DNA. This base release was O2-independent and could not be accounted for by the H atom abstraction mechanisms in this study. Reaction of Cr(V)-ehba with H2O2 and DNA yielded products consistent with all three DNA oxidation pathways measured, namely, C1', C4', and C5' H atom abstractions. Cr(V)-ehba and H2O2 also mediated a nonpreferential release of DNA bases with the exception of the oxidatively sensitive purine, guanine. Direct and H2O2-induced Cr(V) DNA oxidation had opposing substrate preferences, with direct Cr(V) oxidation favoring ss-DNA while H2O2-induced Cr(V) oxidative damage favored ds-DNA. These results may help explain the carcinogenic mechanism of chromium(VI) and serve to highlight the differences and similarities in DNA oxidation between high-valent chromium and oxygen-based radicals.
5.
Nonlinear Knowledge in Kernel Approximation (cited 1 time; 0 self-citations, 1 by others)
Prior knowledge over arbitrary general sets is incorporated into nonlinear kernel approximation problems in the form of linear constraints in a linear program. The key tool in this incorporation is a theorem of the alternative for convex functions that converts nonlinear prior knowledge implications into linear inequalities without the need to kernelize these implications. Effectiveness of the proposed formulation is demonstrated on two synthetic examples and an important lymph node metastasis prediction problem. All these problems exhibit marked improvements upon the introduction of prior knowledge over nonlinear kernel approximation approaches that do not utilize such knowledge.
6.
A new approach to support vector machine (SVM) classification is proposed wherein each of two data sets is proximal to one of two distinct planes that are not parallel to each other. Each plane is generated such that it is closest to one of the two data sets and as far as possible from the other data set. Each of the two nonparallel proximal planes is obtained by a single MATLAB command as the eigenvector corresponding to a smallest eigenvalue of a generalized eigenvalue problem. Classification by proximity to two distinct nonlinear surfaces generated by a nonlinear kernel also leads to two simple generalized eigenvalue problems. The effectiveness of the proposed method is demonstrated by tests on simple examples as well as on a number of public data sets. These examples show the advantages of the proposed approach in both computation time and test set correctness.
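A minimal sketch of the linear case: the plane closest to one data set and farthest from the other minimizes a Rayleigh quotient, whose minimizer is the eigenvector of the smallest eigenvalue of a generalized eigenvalue problem. The Tikhonov term `delta` and the numpy-based eigensolve below are illustrative choices, not the paper's exact regularized formulation:

```python
import numpy as np

def gepsvm_plane(A, B, delta=1e-3):
    # Fit the plane w.x = gamma closest to the rows of A and farthest from
    # the rows of B by minimizing ||A w - e*gamma||^2 / ||B w - e*gamma||^2
    # over z = (w, gamma). delta keeps the denominator matrix nonsingular.
    Ea = np.column_stack([A, -np.ones(len(A))])
    Eb = np.column_stack([B, -np.ones(len(B))])
    G = Ea.T @ Ea + delta * np.eye(Ea.shape[1])
    H = Eb.T @ Eb + delta * np.eye(Eb.shape[1])
    # Generalized eigenproblem G z = lambda H z, solved via H^{-1} G.
    vals, vecs = np.linalg.eig(np.linalg.solve(H, G))
    z = vecs[:, np.argmin(vals.real)].real  # eigenvector of smallest eigenvalue
    return z[:-1], z[-1]                    # w, gamma
```

The second plane comes from swapping the roles of A and B; a new point is then assigned to the class whose plane is nearer.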
7.
Large Scale Kernel Regression via Linear Programming (cited 1 time; 0 self-citations, 1 by others)
The problem of tolerant data fitting by a nonlinear surface, induced by a kernel-based support vector machine, is formulated as a linear program with fewer variables than other linear programming formulations. A generalization of the linear programming chunking algorithm for arbitrary kernels is implemented for solving problems with very large datasets, wherein chunking is performed on both data points and problem variables. The proposed approach tolerates a small error, which is adjusted parametrically, while fitting the given data. This leads to improved fitting of noisy data (over ordinary least error solutions) as demonstrated computationally. Comparative numerical results indicate an average time reduction as high as 26.0% over other formulations, with a maximal time reduction of 79.7%. Additionally, linear programs with as many as 16,000 data points and more than a billion nonzero matrix elements are solved.
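A minimal sketch of tolerant kernel fitting as a linear program. The variable splitting, Gaussian kernel, tolerance `eps`, penalty `C`, and the `scipy.optimize.linprog` solver are illustrative; the paper's chunked formulation is more economical in variables and scales far beyond this toy setup:

```python
import numpy as np
from scipy.optimize import linprog

def kernel_lp_fit(X, y, eps=0.05, C=1000.0, gamma=1.0):
    # Tolerant data fitting by a kernel surface as a linear program:
    #   min ||alpha||_1 + C * sum(xi)   s.t.  |K @ alpha + b - y| <= eps + xi
    # split into nonnegative LP variables z = [a+, a-, b+, b-, xi].
    m = len(X)
    K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)  # Gaussian kernel matrix
    c = np.concatenate([np.ones(2 * m), [0.0, 0.0], C * np.ones(m)])
    e, I = np.ones((m, 1)), np.eye(m)
    A_ub = np.vstack([np.hstack([K, -K, e, -e, -I]),    #  (K a + b - y) <= eps + xi
                      np.hstack([-K, K, -e, e, -I])])   # -(K a + b - y) <= eps + xi
    b_ub = np.concatenate([y + eps, -y + eps])
    z = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None)).x
    return z[:m] - z[m:2 * m], z[2 * m] - z[2 * m + 1], K  # alpha, b, K
```

With a large `C`, the optimizer drives the slacks `xi` toward zero, so the fitted surface stays within the tolerance band `eps` of the data while keeping `alpha` sparse in the 1-norm sense.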
8.
The main purpose of this work is to give explicit sparsity-preserving SOR (successive overrelaxation) algorithms for the solution of separable quadratic and linear programming problems. The principal and computationally-distinguishing feature of the present SOR algorithms is that they preserve the sparsity structure of the problem and do not require the computation of the product of the constraint matrix by its transpose as is the case in earlier SOR algorithms for linear and quadratic programming.
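For intuition, the basic SOR sweep on a linear system Ax = b updates one component at a time using only the entries of row i, which is why a sparse problem can stay sparse: no AᵀA product is ever formed. The dense-storage sketch below shows the sweep itself (a sparse implementation would visit only the nonzeros of each row), not the paper's quadratic/linear programming variant:

```python
import numpy as np

def sor(A, b, omega=1.2, iters=200, x0=None):
    # Successive overrelaxation for A x = b: sweep through the rows,
    # overrelaxing each Gauss-Seidel update by the factor omega (0 < omega < 2
    # for convergence on symmetric positive definite A).
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.astype(float).copy()
    for _ in range(iters):
        for i in range(n):
            sigma = A[i] @ x - A[i, i] * x[i]          # row i, excluding x[i]
            gs = (b[i] - sigma) / A[i, i]              # Gauss-Seidel value
            x[i] += omega * (gs - x[i])                # overrelaxed update
    return x
```

Each update reads only row i of A, so the algorithm's memory access pattern matches the sparsity structure of the problem.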
9.
The Nef protein of primate lentiviruses down-regulates the cell surface expression of CD4 and probably MHC I by connecting these receptors with the endocytic machinery. Here, we reveal that Nef interacts with the mu chains of adaptor complexes, key components of clathrin-coated pits. For human immunodeficiency virus type 2 (HIV-2) and simian immunodeficiency virus (SIV) Nef, this interaction occurs via tyrosine-based motifs reminiscent of endocytosis signals. Mutating these motifs prevents the binding of SIV Nef to the mu chain of plasma membrane adaptor complexes, abrogates its ability to induce CD4 internalization, suppresses the accelerated endocytosis of a chimeric integral membrane protein harboring Nef as its cytoplasmic domain and confers a dominant-negative phenotype to the viral protein. Taken together, these data identify mu adaptins as downstream mediators of the down-modulation of CD4, and possibly MHC I, by Nef.
10.
This article describes the use of computer-based analytical techniques to define nuclear size, shape, and texture features. These features are then used to distinguish between benign and malignant breast cytology. The benign and malignant cell samples used in this study were obtained by fine needle aspiration (FNA) from a consecutive series of 569 patients: 212 with cancer and 357 with fibrocystic breast masses. Regions of FNA preparations to be analyzed were converted by a video camera to computer files that were displayed on a computer monitor. Nuclei to be analyzed were roughly outlined by an operator using a mouse. Next, the computer generated a "snake" that precisely enclosed each designated nucleus. The computer calculated 10 features for each nucleus. The ability to correctly classify samples as benign or malignant on the basis of these features was determined by inductive machine learning and logistic regression. Cross-validation was used to test the validity of the predicted diagnosis. The logistic regression cross-validated classification accuracy was 96.2% and the inductive machine learning cross-validated classification accuracy was 97.5%. Our computerized system provides a probability that a sample is malignant. Should this probability fall between 30% and 70%, the sample is considered "suspicious," in the same way a visually graded FNA may be termed suspicious. All of the 128 consecutive cases obtained since the introduction of this system were correctly diagnosed, but nine benign aspirates fell into the suspicious category.(ABSTRACT TRUNCATED AT 250 WORDS)
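A minimal sketch of the classification-and-validation step described above, on synthetic stand-in features (the image analysis, "snake" boundary, and real FNA measurements are outside this sketch): logistic regression trained by gradient descent and scored by k-fold cross-validation.

```python
import numpy as np

def fit_logistic(X, y, lr=0.1, iters=500):
    # Logistic regression by batch gradient descent on features X (one row
    # per nucleus sample) with 0/1 labels y; a bias column is appended.
    X1 = np.column_stack([X, np.ones(len(X))])
    w = np.zeros(X1.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X1 @ w))      # predicted malignancy probability
        w -= lr * X1.T @ (p - y) / len(y)      # gradient of the log-loss
    return w

def cv_accuracy(X, y, folds=5):
    # k-fold cross-validation: train on all folds but one, test on the held-out
    # fold, and average the resulting classification accuracies.
    idx = np.arange(len(y))
    acc = []
    for f in range(folds):
        test = idx % folds == f
        w = fit_logistic(X[~test], y[~test])
        Xt = np.column_stack([X[test], np.ones(test.sum())])
        p = 1.0 / (1.0 + np.exp(-Xt @ w))
        acc.append(np.mean((p > 0.5) == y[test]))
    return float(np.mean(acc))
```

The probability output `p` is what supports a "suspicious" band: predictions falling between fixed lower and upper thresholds (30% and 70% in the article) would be flagged rather than classified.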
Copyright © Beijing Qinyun Technology Development Co., Ltd. (京ICP备09084417号)