Subscription full text: 24
Free: 1
Chemical industry: 4
Mechanical engineering and instruments: 1
Radio and electronics: 6
General industrial technology: 2
Automation technology: 12
By year: 2021 (1), 2018 (1), 2016 (1), 2014 (1), 2013 (1), 2011 (4), 2009 (1), 2008 (1), 2007 (2), 2006 (4), 2000 (1), 1997 (1), 1996 (2), 1994 (1), 1989 (1), 1988 (1), 1975 (1)
Sort order: 25 results found (search time: 31 ms)
1.
Node-Capability-Aware Replica Management for Peer-to-Peer Grids   (Cited by: 1; self-citations: 0; citations by others: 1)
Data objects have to be replicated in large-scale distributed systems for reasons of fault tolerance, availability, and performance. Furthermore, computations may have to be scheduled on these objects when they are part of a grid computation. Although replication mechanisms for unstructured peer-to-peer (P2P) systems can place replicas on capable nodes, they may not be able to provide deterministic guarantees on searching. Replication mechanisms in structured P2P systems provide deterministic guarantees on searching but do not address node capability in replica placement. We propose Virat, a node-capability-aware P2P middleware for managing replicas in large-scale distributed systems. Virat uses a unique two-layered architecture that builds a structured overlay over an unstructured P2P layer, combining the advantages of both structured and unstructured P2P systems. A detailed performance comparison is made with a replication mechanism realized over OpenDHT, a state-of-the-art structured P2P system. We show that the 99th percentile response time for Virat does not exceed 600 ms, whereas for OpenDHT it goes beyond 2000 ms in our test bed, created specifically for the aforementioned comparison.
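The two-layer idea in the abstract can be sketched in a few lines: the unstructured layer supplies capability information for placement, while a structured index records where each object's replicas live so lookups are deterministic. This is an illustrative sketch only; the class and method names below are hypothetical, not Virat's actual API.

```python
import heapq

class TwoLayerReplicaManager:
    """Sketch of a two-layer design: an unstructured layer tracks node
    capability; a structured (DHT-like) index maps each object to its
    replica locations so that lookups are deterministic."""

    def __init__(self, node_capabilities):
        self.capabilities = dict(node_capabilities)  # node -> capability score
        self.index = {}  # object_id -> replica nodes (the "structured" layer)

    def place_replicas(self, object_id, k):
        # Unstructured layer: choose the k most capable nodes.
        nodes = heapq.nlargest(k, self.capabilities, key=self.capabilities.get)
        # Structured layer: record the placement for deterministic search.
        self.index[object_id] = nodes
        return nodes

    def lookup(self, object_id):
        # Deterministic lookup via the structured index.
        return self.index.get(object_id, [])

mgr = TwoLayerReplicaManager({"a": 10, "b": 50, "c": 30, "d": 5})
replicas = mgr.place_replicas("obj1", 2)   # lands on the two most capable nodes
```

The point of the combination is visible even here: placement consults capability (which a plain DHT ignores), while lookup never floods the network (which a plain unstructured overlay would).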
2.
We introduce a class of Gaussian mixture models (GMMs) in which the covariances or the precisions (inverse covariances) are restricted to lie in subspaces spanned by rank-one symmetric matrices. The rank-one bases are shared among the Gaussians according to a sharing structure. We describe an algorithm for estimating the parameters of the GMM in a maximum likelihood framework, given a sharing structure. We employ these models for modeling the observations in the hidden states of a hidden Markov model based speech recognition system. We show that this class of models provides improvements in accuracy and computational efficiency over well-known covariance modeling techniques such as classical factor analysis, shared factor analysis, and maximum likelihood linear transformation based models, which are special instances of this class. We also investigate different sharing mechanisms. We show that, for the same number of parameters, modeling precisions leads to better performance than modeling covariances. Modeling precisions also gives a distinct advantage in computational and memory requirements.
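The computational advantage of modeling precisions can be seen directly: if a precision matrix is a weighted sum of shared rank-one terms, P = Σ_k λ_k b_k b_kᵀ, then the Gaussian quadratic form reduces to K dot products instead of a full matrix-vector product. A minimal numpy sketch (dimensions and weights are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
d, K = 4, 6                          # feature dimension, number of shared bases

# Shared rank-one directions b_k and positive per-Gaussian weights lambda_k
# (positive weights with K > d keep the precision positive definite here).
B = rng.standard_normal((K, d))      # rows are the b_k
lam = rng.uniform(0.5, 2.0, size=K)

# Precision restricted to the span of rank-one symmetric matrices:
P = sum(lam[k] * np.outer(B[k], B[k]) for k in range(K))

x = rng.standard_normal(d)
mu = rng.standard_normal(d)

# Full quadratic form vs. the cheap rank-one evaluation:
q_full = (x - mu) @ P @ (x - mu)
q_fast = np.sum(lam * (B @ (x - mu)) ** 2)   # K dot products, no d x d matrix
```

Because the b_k are shared across Gaussians, the projections B @ (x - mu) can be reused, which is the source of the memory and speed advantage the abstract mentions.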
3.
Resolution analysis for the problem of signal recovery from finitely many linear measurements is the subject of this paper. The classical Rayleigh limit serves only as a lower bound on resolution, since it does not assume any recovery strategy and is based only on observed data. We show that details finer than the Rayleigh limit can be recovered by simple linear processing that incorporates prior information. We first define a measure of resolution based on allowable levels of error that is more appropriate for current signal recovery strategies than the Rayleigh definition. In the practical situation in which only finitely many noisy observations are available, we have to restrict the class of signals in order to make the resolution measure meaningful. We consider the set of bandlimited and essentially timelimited signals, since it describes most signals encountered in practice. For this set, we show how to precompute resolution limits from knowledge of the measurement functionals, signal-to-noise ratio, passband, energy concentration regions, energy concentration factor, and a prescribed level of error tolerance. In the process, we also derive an algorithm for high-resolution signal recovery. We illustrate the results with examples in one and two dimensions.
4.
5.
Model Context Selection for Run-to-Run Control   (Cited by: 1; self-citations: 0; citations by others: 1)
In the design of run-to-run controllers, one is usually faced with the problem of selecting a model structure that best explains the variability in the data. The variable selection problem often becomes more complex when there are large numbers of candidate variables and the usual regression modeling assumptions are not satisfied. This paper proposes a model selection approach that uses ideas from the statistical linear models and stepwise regression literature to identify the context variables that contribute most to the autocorrelation and to the offsets in the data. A simulation example and an application to lithography alignment control are presented to illustrate the approach.
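The stepwise-regression idea at the core of this approach can be sketched as a greedy forward search: at each step, add the candidate variable that most reduces the residual sum of squares. This is a generic forward-selection sketch, not the paper's specific procedure (which targets autocorrelation and offset terms); all data below are synthetic.

```python
import numpy as np

def forward_stepwise(X, y, max_vars):
    """Greedy forward selection: at each step add the candidate column
    of X that most reduces the residual sum of squares of an OLS fit."""
    n, p = X.shape
    selected = []
    for _ in range(max_vars):
        best_j, best_sse = None, np.inf
        for j in range(p):
            if j in selected:
                continue
            cols = selected + [j]
            beta, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            sse = np.sum((y - X[:, cols] @ beta) ** 2)
            if sse < best_sse:
                best_j, best_sse = j, sse
        selected.append(best_j)
    return selected

# Synthetic example: y depends only on columns 0 and 2 of five candidates.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 5))
y = 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.01 * rng.standard_normal(200)
chosen = forward_stepwise(X, y, 2)
```

In practice one would stop the search using an information criterion (e.g. BIC) or an F-test rather than a fixed `max_vars`, which is where the statistical-linear-models machinery the abstract cites comes in.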
6.
The problem of extrapolating discrete-index bandlimited signals from a finite number of samples is addressed in this paper. The algorithm presented here exploits the fact that the set of bandlimited signals that are also essentially timelimited is approximated well by a low-dimensional linear subspace. This fact, which is well known for one-dimensional (1-D) signals with contiguous passbands and time-concentration intervals, is established for a more general class of multidimensional (m-D) signals with discontiguous passbands and discontiguous time-concentration regions. A criterion is presented for determining the dimension of the approximating subspace, and the minimax optimal subspace itself, based on knowledge of the passband, time-concentration regions, energy concentration factor, and bounds on the tolerable extrapolation error. The extrapolation is constrained to lie in this subspace, and parameters characterizing the extrapolation are obtained from the data by solving a linear system of equations. For certain sampling patterns, the system is ill conditioned, and a second rank reduction is needed to reduce the deleterious effects of observation noise and modeling error. A novel criterion for rank selection based on known bounds on noise power and modeling error is presented. The effectiveness of the new algorithm and the rank selection criterion are demonstrated by means of computer simulations.
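The subspace-constrained extrapolation described above can be illustrated in 1-D with numpy: the eigenvectors of the composite band-limit/time-limit operator with eigenvalues near one span the "bandlimited and essentially timelimited" signals, and the extrapolation is the least-squares fit of the samples within that subspace. This is a simplified noiseless sketch (fixed sizes, no second rank reduction), not the paper's full m-D algorithm.

```python
import numpy as np

N = 128
band = np.abs(np.fft.fftfreq(N, 1.0 / N)) <= 8     # passband mask (DFT bins)
time_region = np.zeros(N, bool)
time_region[32:96] = True                           # time-concentration region

# Band-limiting projector P_B = (1/N) F^H M F and time-limiting projector D.
F = np.fft.fft(np.eye(N))
PB = np.real(np.conj(F.T) @ np.diag(band.astype(float)) @ F) / N
D = np.diag(time_region.astype(float))

# Eigenvectors of P_B D P_B with eigenvalue near 1 span signals that are
# bandlimited and essentially timelimited; keep a low-dimensional basis.
w, V = np.linalg.eigh(PB @ D @ PB)
r = 5
V = V[:, np.argsort(w)[::-1][:r]]                   # N x r subspace basis

# Observe a few noiseless samples and solve a linear system for the
# subspace coefficients; the fitted signal extrapolates everywhere.
rng = np.random.default_rng(2)
x_true = V @ rng.standard_normal(r)                 # signal inside the subspace
obs = np.arange(32, 96, 2)                          # observed sample indices
c, *_ = np.linalg.lstsq(V[obs], x_true[obs], rcond=None)
x_hat = V @ c                                       # full-length extrapolation
```

With noisy data the matrix `V[obs]` can be ill conditioned for unfavorable sampling patterns, which is exactly where the paper's second rank reduction (truncating small singular values of `V[obs]`) comes in.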
7.
As manufacturing transitions to real-time sensing, it becomes more important to handle multiple, high-dimensional (non-stationary) time series that generate thousands of measurements for each batch. Predictive models are often challenged by such high-dimensional data, and it is important to reduce the dimensionality for better performance. With thousands of measurements, even wavelet coefficients do not reduce the dimensionality sufficiently. We propose a two-stage method that uses energy statistics from a discrete wavelet transform to identify process variables and appropriate resolutions of wavelet coefficients in an initial (screening) model. Variable importance scores from a modern random forest classifier are exploited in this stage. Coefficients that correspond to the identified variables and resolutions are then selected for a second-stage predictive model. The approach is shown to provide good performance, along with interpretable results, in an example where multiple time series are used to indicate the need for preventive maintenance. In general, the two-stage approach can handle high dimensionality and still provide interpretable features linked to the relevant process variables and wavelet resolutions that can be used for further analysis. Copyright © 2011 John Wiley & Sons, Ltd.
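The energy statistics used for screening are simply the summed squared detail coefficients of a wavelet transform at each resolution level. A minimal sketch with a hand-rolled Haar transform (the paper does not specify Haar; it is used here only because it is the simplest wavelet to implement without external libraries):

```python
import numpy as np

def haar_level_energies(x):
    """Energy of the Haar DWT detail coefficients at each resolution.
    Index 0 of the result is the finest detail level."""
    x = np.asarray(x, float)
    energies = []
    while len(x) >= 2:
        approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # smooth part
        detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail part
        energies.append(float(np.sum(detail ** 2)))
        x = approx                                     # recurse on the smooth part
    return energies

# A rapidly alternating signal concentrates all its energy in the finest
# detail level, so an energy screen would keep that resolution's coefficients.
sig = np.tile([1.0, -1.0], 32)        # length-64 alternating signal
e = haar_level_energies(sig)
print(int(np.argmax(e)))              # → 0 (the finest level dominates)
```

In the two-stage method, these per-variable, per-level energies become the inputs to the random forest screening model; only coefficients at the levels (and variables) scored as important survive to the second-stage predictor.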
8.
Composite tissue injuries (CTI) are common among US Military Service members during combat operations and carry a high risk of morbidity. Furthermore, CTI are often complicated by an altered wound healing response, resulting in part from dysregulation of the innate and adaptive immune responses. Unlike in normal wound healing, in CTI disruptions occur in innate immune responses, altering neutrophil functions and macrophage activation and polarization, and further impairing the functions of regulatory T cells. The biological underpinnings of these unfavorable wound healing conditions are multifactorial, including processes such as ischemia, hypoxia, low nutrient levels, and altered cell metabolic pathways, all of which are thought to trigger anergy in immune cells and destabilize adaptive immune responses. As a result, impaired wound healing is common in CTI. Herein, we review the altered innate and adaptive immune cells, and their metabolic status and responses, following CTI, and discuss the role a multi-pronged immunomodulatory approach may play in facilitating improved outcomes for afflicted patients.
9.
Tolerancing of parts using an expert system   (Cited by: 1; self-citations: 0; citations by others: 1)
Tolerancing of parts is an important aspect involving both the designer and the manufacturer. The designer has a tendency to specify tolerances tighter than those required by the function of a part in order to increase its reliability. But tolerances greatly influence manufacturing costs. Tolerance specification of parts should therefore consider the manufacturing cost alongside the functional requirement of the part. A good tolerance specification strikes an appropriate balance between these two. The paper presents a rational approach to tolerance specification of parts, considering functional requirements as well as manufacturing costs. The approach is implemented using the expert system EXDEM, developed on the ICL 2960 computer at IIT, New Delhi. This system assigns tolerances to dimensions of mechanical parts automatically.
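The cost-versus-function balance described above can be sketched as a simple rule: assign the loosest tolerance that still satisfies the functional requirement, produced by the cheapest process capable of achieving it. The process table and values below are purely illustrative, not EXDEM's actual knowledge base.

```python
# Hypothetical process capability table: (process, minimum achievable
# tolerance in mm, relative cost). Values are illustrative only.
PROCESSES = [
    ("drilling", 0.10, 1.0),
    ("turning",  0.05, 2.0),
    ("milling",  0.02, 3.0),
    ("grinding", 0.005, 5.0),
]

def assign_tolerance(functional_tolerance):
    """Pick the cheapest process whose achievable tolerance meets the
    functional requirement, and assign the loosest acceptable tolerance
    rather than an unnecessarily tight one."""
    feasible = [p for p in PROCESSES if p[1] <= functional_tolerance]
    if not feasible:
        raise ValueError("no process can meet the required tolerance")
    process, achievable, cost = min(feasible, key=lambda p: p[2])
    return process, functional_tolerance, cost
```

An expert system like EXDEM encodes many such rules (also covering material, feature type, and fit requirements), but the core trade-off is this one: tightening beyond the functional requirement only adds cost.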
10.
Various mechanisms for hardening the Linux kernel (for example, enforcing system call policies, device driver failure recovery, and protection against exploitation of bugs in code) have been proposed in the literature. The main problem with these mechanisms is that they require changes in the kernel code, leading to the possibility of introducing new bugs and hence increasing the testing time. We propose a message filter model, as an extension to object-oriented wrappers for the Linux kernel, to dynamically provide various filtering capabilities to the kernel. This model works as a comprehensive framework for specifying system call policies, handling device driver faults, protecting the kernel against exploits of bugs in code, etc., without modifying the existing kernel code. This considerably reduces the possibility of creating new bugs in the kernel code. We have integrated policies for system call interception and device driver failure handling into the Linux kernel (2.6.9) using the message filter model. Our experiments show that the overhead due to our filter objects is very low, making it a useful mechanism for providing filtering capabilities to the Linux kernel. Copyright © 2010 John Wiley & Sons, Ltd.  
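The wrapper-plus-filter idea can be illustrated abstractly: policy filter objects are attached to a wrapper at run time, so the wrapped code itself never changes. This is a language-agnostic sketch of the pattern in Python (the paper's implementation is in-kernel C++-style wrappers); all names here are hypothetical.

```python
# Sketch of the message filter pattern: a wrapper intercepts each
# "system call" message and consults a dynamically extensible chain of
# filter objects before dispatching to the unmodified kernel function.

class SyscallWrapper:
    def __init__(self, kernel_call):
        self.kernel_call = kernel_call   # the unmodified kernel entry point
        self.filters = []                # policies attached at run time

    def add_filter(self, policy):
        self.filters.append(policy)      # no kernel-code change needed

    def __call__(self, name, *args):
        for policy in self.filters:      # every filter must allow the call
            if not policy(name, args):
                raise PermissionError(f"{name}{args} denied by policy")
        return self.kernel_call(name, *args)

def fake_kernel(name, *args):
    return f"executed {name}"            # stand-in for the real kernel

def deny_shadow_reads(name, args):
    return not (name == "open" and args and args[0] == "/etc/shadow")

sys_call = SyscallWrapper(fake_kernel)
sys_call.add_filter(deny_shadow_reads)   # policy injected without kernel edits
```

Because filters are ordinary objects in a chain, the same mechanism serves system-call policies, driver-fault containment, and exploit mitigation: each is just another filter inspecting the intercepted message.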
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号