42 search results found (search time: 406 ms).
1.
In this paper we consider several variants of Valiant's learnability model that have appeared in the literature. We give conditions under which these models are equivalent in terms of the polynomially learnable concept classes they define. These equivalences allow comparisons of most of the existing theorems in Valiant-style learnability and show that several simplifying assumptions on polynomial learning algorithms can be made without loss of generality. We also give a useful reduction of learning problems to the problem of finding consistent hypotheses, and give comparisons and equivalences between Valiant's model and the prediction learning models of Haussler, Littlestone, and Warmuth (in "29th Annual IEEE Symposium on Foundations of Computer Science," 1988).
2.
Recently much work has been done analyzing online machine learning algorithms in a worst-case setting, where no probabilistic assumptions are made about the data. This is analogous to the $H^{\infty}$ setting used in adaptive linear filtering. Bregman divergences have become a standard tool for analyzing online machine learning algorithms. Using these divergences, we motivate a generalization of the least mean squares (LMS) algorithm. The loss bounds for these so-called p-norm algorithms involve norms other than the standard 2-norm. The bounds can be significantly better if a large proportion of the input variables are irrelevant, i.e., if the weight vector we are trying to learn is sparse. We also prove results for nonstationary targets. We only know how to apply kernel methods to the standard LMS algorithm (i.e., p=2). However, even in the general p-norm case, we can handle generalized linear models where the output of the system is a linear function combined with a nonlinear transfer function (e.g., the logistic sigmoid).
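As a rough illustration of the update family this abstract describes, here is a minimal Python sketch of an LMS-style online step driven through a q-norm link function (the gradient of $\frac{1}{2}\|\theta\|_q^2$); with q = 2 the link is the identity and the step reduces to standard LMS. The learning rate `eta`, the exponent `q`, and any data fed to it are illustrative assumptions, not the paper's tuned choices, and the sketch shows only the mechanical shape of the update, not the analysis behind the loss bounds.

```python
import numpy as np

def q_norm_link(theta, q):
    """Gradient of (1/2)*||theta||_q^2: maps dual weights to primal ones.
    For q = 2 this is the identity map, recovering standard LMS."""
    norm = np.linalg.norm(theta, q)
    if norm == 0.0:
        return np.zeros_like(theta)
    return np.sign(theta) * np.abs(theta) ** (q - 1) / norm ** (q - 2)

def lms_style_step(theta, x, y, q=2.0, eta=0.05):
    """One online step: predict with the linked weights, then take an
    additive gradient step on the squared loss in the dual parameters.
    q and eta are illustrative assumptions, not the paper's tuning."""
    w = q_norm_link(theta, q)
    y_hat = w @ x                        # linear prediction
    return theta - eta * (y_hat - y) * x
```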
3.
bcr-abl, the oncogene causing chronic myeloid leukemia, encodes a fusion protein with constitutively active tyrosine kinase and transforming capacity in hematopoietic cells. Various intracellular signaling intermediates are activated by and/or associate with Bcr-Abl, including the Src family kinase Hck. To elucidate some of the structural requirements and functional consequences of the association of Bcr-Abl with Hck, their interaction was investigated in transiently transfected COS7 cells. Neither the complex formation of Hck kinase with Bcr-Abl nor the activation of Hck by Bcr-Abl was dependent on the Abl kinase activity. Both inactivating point mutations of Hck and dephosphorylation of Hck enhanced its complex formation with Bcr-Abl, indicating that their physical interaction was negatively regulated by Hck (auto)phosphorylation. Finally, experiments with a series of kinase-negative Bcr-Abl mutants showed that Hck phosphorylated Bcr-Abl and induced the binding of Grb2 to Tyr177 of Bcr-Abl. Taken together, our results suggest that Bcr-Abl preferentially binds inactive forms of Hck by an Abl kinase-independent mechanism. This physical interaction stimulates the Hck tyrosine kinase, which may then phosphorylate the Grb2-binding site in Bcr-Abl.
4.
Azoury, K. S., and Warmuth, M. K. Machine Learning 43(3): 211-246, 2001.
We consider on-line density estimation with a parameterized density from the exponential family. The on-line algorithm receives one example at a time and maintains a parameter that is essentially an average of the past examples. After receiving an example the algorithm incurs a loss, which is the negative log-likelihood of the example with respect to the current parameter of the algorithm. An off-line algorithm can choose the best parameter based on all the examples. We prove bounds on the additional total loss of the on-line algorithm over the total loss of the best off-line parameter. These relative loss bounds hold for an arbitrary sequence of examples. The goal is to design algorithms with the best possible relative loss bounds. We use a Bregman divergence to derive and analyze each algorithm. These divergences are relative entropies between two exponential distributions. We also use our methods to prove relative loss bounds for linear regression.
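To give a concrete feel for the protocol described above, the following Python sketch runs its simplest instance: on-line estimation of the mean of a unit-variance Gaussian, where the maintained parameter is an average of past examples and each trial charges the negative log-likelihood of the next example (written up to its additive constant). The smoothing constant `a` and the synthetic data are illustrative assumptions, not the paper's exact regularization.

```python
import numpy as np

def online_gaussian_mean(stream, a=1.0):
    """On-line density estimation for a unit-variance Gaussian.
    The learner's parameter is a (slightly smoothed) average of all
    past examples; `a` is an illustrative smoothing constant."""
    total, count, losses = 0.0, 0, []
    for x in stream:
        mu = total / (count + a)          # parameter before seeing x
        losses.append(0.5 * (x - mu) ** 2)  # neg. log-likelihood up to a constant
        total += x
        count += 1
    best_offline = total / count          # best fixed parameter in hindsight
    return best_offline, losses

# Illustrative run on synthetic data (an assumption, not the paper's setup).
rng = np.random.default_rng(0)
mu_star, losses = online_gaussian_mean(rng.normal(2.0, 1.0, size=1000))
```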
5.
We consider the following type of online variance minimization problem: in every trial t our algorithms get a covariance matrix $\boldsymbol{C}^{t}$ and try to select a parameter vector $\boldsymbol{w}^{t-1}$ such that the total variance over a sequence of trials, $\sum_{t=1}^{T} (\boldsymbol{w}^{t-1})^{\top} \boldsymbol{C}^{t} \boldsymbol{w}^{t-1}$, is not much larger than the total variance of the best parameter vector u chosen in hindsight. Two parameter spaces in $\mathbb{R}^{n}$ are considered: the probability simplex and the unit sphere. The first space is associated with the problem of minimizing risk in stock portfolios, and the second space leads to an online calculation of the eigenvector with minimum eigenvalue of the total covariance matrix $\sum_{t=1}^{T} \boldsymbol{C}^{t}$. For the first parameter space we apply the Exponentiated Gradient algorithm, which is motivated by a relative entropy regularization. In the second case, the algorithm has to maintain uncertainty information over all unit directions u. For this purpose, directions are represented as dyads $\boldsymbol{u}\boldsymbol{u}^{\top}$ and the uncertainty over all directions as a mixture of dyads, which is a density matrix. The motivating divergence for density matrices is the quantum version of the relative entropy, and the resulting algorithm is a special case of the Matrix Exponentiated Gradient algorithm. In each of the two cases we prove bounds on the additional total variance incurred by the online algorithm over the best offline parameter.
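For the simplex case, the Exponentiated Gradient step mentioned above has a very compact form: multiply each weight by the exponentiated negative gradient and renormalize. The Python sketch below, with an illustrative learning rate `eta` and random covariance matrices, shows that shape only; it is not the paper's tuned algorithm, and it does not cover the density-matrix case on the unit sphere.

```python
import numpy as np

def eg_variance_step(w, C, eta=0.1):
    """Exponentiated Gradient step for on-line variance minimization on
    the probability simplex. The gradient of w^T C w is 2*C@w; eta is
    an illustrative learning rate (assumption, not the paper's tuning)."""
    v = w * np.exp(-eta * 2.0 * (C @ w))
    return v / v.sum()

# Illustrative trial loop with random positive semidefinite covariances.
rng = np.random.default_rng(1)
n = 5
w = np.full(n, 1.0 / n)                  # start at the uniform distribution
for _ in range(100):
    A = rng.standard_normal((n, n))
    w = eg_variance_step(w, A @ A.T)     # A@A.T is positive semidefinite
```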
6.
A novel, to our knowledge, integrated wavelength-division multiplexing passive optical network demultiplexer that uses an arrayed-waveguide grating and diffractive optical elements is presented. The demultiplexer is used to distribute 1.3-μm wavelength signals and to multiplex an eight-channel wavelength-division multiplexer spectrum at a 1.55-μm wavelength. The device shows high functionality and good optical performance. The measured cross talk was less than -21 dB, and the 3-dB bandwidth was determined to be 97 GHz, which is close to the theoretical value of 93 GHz. Average losses of 4.5 and 8 dB were measured for the 1.3- and the 1.55-μm signals, respectively.
7.
BACKGROUND: The infrared coagulator, a by-product of laser technology, has been used in dermatology in a variety of settings. During hair transplantation sessions, we observed a significant reduction of the donor ellipse width while performing hemostasis with the infrared coagulator. OBJECTIVE: Quantitative assessment of the donor wound width after infrared coagulator use, and correlation to the number of previous transplant sessions and patients' age. METHODS: Twenty-four patients (22 men, 2 women) underwent hair transplantation. The infrared coagulator was utilized for hemostasis with a pulse duration of 2.5 seconds. RESULTS: The infrared coagulator produced an average donor area decrease of 42%, while achieving rapid hemostasis. No correlation was demonstrated to the number of previous transplant sessions or patients' age. CONCLUSIONS: The infrared coagulator significantly decreases the donor wound width while providing hemostasis. Advantages include the potential of larger donor strip harvest, minimal tissue manipulation, and less traumatic closure.
8.
Abstract: Simple relations for the ballistic demagnetization factor of cylindrical rods are derived with the help of the demagnetization factors of ellipsoids. I thank Dr. H. Neumann and Dr. W. Dannöhl for their kind advice during the course of this work.
9.
The field of dynamic covalent nanocapsule synthesis is very young, and most contributions to the development of reliable approaches for the assembly of dynamic covalent capsules have been made during the past five years. In 1991, Quan and Cram published the first Schiff base molecular container compound. Over the past six years, a large number of multi-component polyimine hemicarcerand and polyhedron syntheses have been developed. This review will focus primarily on recent achievements in the area of pure Schiff base nanocapsules and highlight different synthetic approaches and design strategies, as well as first applications of these capsules in molecular recognition, gas storage, and gas separation.
10.
We study the problem of parallel computation of a schedule for a system of n unit-length tasks on m identical machines, when the tasks are related by a set of precedence constraints. We present NC algorithms for computing an optimal schedule in the case where m, the number of available machines, does not vary with time and the precedence constraints are represented by a collection of outtrees. The algorithms run on an exclusive-read, exclusive-write (EREW) PRAM. Their complexities are O(log n) and O((log n)²) parallel time using O(n²) and O(n) processors, respectively. The schedule computed by our algorithms is a height-priority schedule. As a complementary result, we show that it is very unlikely that computing such a schedule is in NC when any of the above conditions is significantly relaxed. We prove that the problem is P-complete under logspace reductions when the precedence constraints are a collection of intrees and outtrees, or for a collection of outtrees when the number of available machines is allowed to increase with time. The time span of a height-priority schedule for an arbitrary precedence constraint graph is at most 2 - 1/(m - 1) times longer than the optimal (N. F. Chen and C. L. Liu, Proc. 1974 Sagamore Computer Conference on Parallel Processing, T. Feng (Ed.), Springer-Verlag, Berlin, 1975, pp. 1-16). Whereas it is P-complete to produce the classical height-priority schedules even for very restricted precedence constraint graphs, we present a simple NC parallel algorithm which produces a different schedule that is only 2 - 1/m times the optimal.
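To make the priority rule concrete, here is a minimal sequential sketch of height-priority list scheduling in Python. The adjacency-list representation `succ` is an assumption for illustration, and the sketch deliberately does not reproduce the paper's NC parallel algorithms; it only computes the kind of height-priority schedule those algorithms target.

```python
from collections import deque

def height_priority_schedule(succ, n, m):
    """Sequential sketch: schedule n unit-length tasks with precedence
    edges succ[u] = [v, ...] (u must finish before v starts) on m
    machines, always running the ready tasks of greatest height.
    Illustrative only; the paper's NC algorithms are not reproduced."""
    indeg = [0] * n
    for u in range(n):
        for v in succ[u]:
            indeg[v] += 1
    # Topological order via Kahn's algorithm.
    deg = indeg[:]
    queue = deque(u for u in range(n) if deg[u] == 0)
    topo = []
    while queue:
        u = queue.popleft()
        topo.append(u)
        for v in succ[u]:
            deg[v] -= 1
            if deg[v] == 0:
                queue.append(v)
    # Height = length of the longest chain of successors below a task.
    height = [0] * n
    for u in reversed(topo):
        for v in succ[u]:
            height[u] = max(height[u], height[v] + 1)
    # Greedy simulation: each time step runs the m highest ready tasks.
    remaining = indeg[:]
    ready = [u for u in range(n) if remaining[u] == 0]
    schedule = []
    while ready:
        ready.sort(key=lambda u: -height[u])
        step, ready = ready[:m], ready[m:]
        schedule.append(step)
        for u in step:
            for v in succ[u]:
                remaining[v] -= 1
                if remaining[v] == 0:
                    ready.append(v)
    return schedule  # schedule[t] = tasks executed at time step t
```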