31.
The choice of an appropriate resolution for landslide susceptibility mapping is an issue worth considering. On the one hand, a coarse spatial resolution may describe the terrain's morphologic properties with low accuracy; on the other hand, at very fine resolutions, some of the DEM-derived morphometric factors may carry an excess of detail. Moreover, landslide inventory maps are represented as geospatial vector data, so a vector-to-raster conversion procedure is required. This work investigates the effects of raster resolution on susceptibility mapping in conjunction with the use of different vector-to-raster conversion algorithms. The Artificial Neural Network (ANN) technique is used to carry out these analyses on two Sicilian basins. Seven resolutions and three conversion algorithms are investigated. Results indicate that the finest resolutions do not lead to the highest model performance, whereas the conversion algorithm may significantly affect the ANN training procedure at coarse resolutions.
32.
This paper analyzes in detail the resampling techniques used in multirate signal processing, from two perspectives: basic principles and implementing circuits. Both fixed-ratio resampling circuits and adaptive resampling circuits are analyzed. Since the key to implementing resampling is the modulo operation, the paper also analyzes in detail how to implement the modulo operation with an accumulator.
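The accumulator-based modulo operation the abstract highlights can be sketched in a few lines of Python; the linear interpolation, function name, and ratio handling below are illustrative assumptions, not the circuits described in the paper:

```python
def resample_fixed_ratio(x, p, q):
    """Resample sequence x by ratio p/q (p: interpolation, q: decimation)
    using a phase accumulator; the wrap (carry) of the accumulator is
    the modulo operation the abstract refers to."""
    out = []
    acc = 0.0        # fractional phase in [0, 1)
    idx = 0          # integer input index
    step = q / p     # input samples advanced per output sample
    while idx + 1 < len(x):
        # linear interpolation between neighboring input samples
        out.append((1.0 - acc) * x[idx] + acc * x[idx + 1])
        acc += step
        carry = int(acc)   # modulo: move the integer part into idx
        idx += carry
        acc -= carry
    return out
```

With `p == q` the sketch passes samples through unchanged; with `p > q` it interpolates extra samples between inputs.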
33.
The data produced by high-throughput genomic techniques are often high dimensional and undersampled. In these settings, statistical analyses that require the inversion of covariance matrices, such as those pursuing supervised dimension reduction or the assessment of interdependence structures, are problematic. In this article we show how the idea of adding noise to the bootstrap, pioneered by Efron and by Silverman and Young in the late seventies and eighties, can be used to overcome undersampling and effectively estimate the inverse covariance matrix for data sets in which the number of observations is small relative to the number of variables. We demonstrate the performance of this approach, which we call the augmented bootstrap, on simulated data and on data derived from genomic DNA sequences and microarray experiments. This work was partially supported by NIH grant HG02238 to W. Miller, NIH grant R01-GM072264 to K. Makova, and NSF grant DMS-0704621 to R.D. Cook, B. Li and F. Chiaromonte.
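A minimal sketch of the noise-augmented bootstrap idea for inverse covariance estimation; the augmented resample size `m`, the noise scale, and the averaging of inverses are assumptions not specified in the abstract:

```python
import numpy as np

def augmented_bootstrap_inv_cov(X, m=None, n_boot=100, noise_sd=0.5, seed=0):
    """Estimate an inverse covariance matrix for undersampled data
    (n rows << p columns) by averaging inverses of covariance matrices
    computed on noise-augmented bootstrap resamples. Drawing m > p
    noisy pseudo-observations makes each resampled covariance
    invertible even though the raw sample covariance is singular."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    if m is None:
        m = 2 * p + 2  # augmented resample size (assumption)
    acc = np.zeros((p, p))
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=m)                      # bootstrap rows
        Xb = X[idx] + noise_sd * rng.standard_normal((m, p))  # add noise
        acc += np.linalg.inv(np.cov(Xb, rowvar=False))        # now full rank
    return acc / n_boot
```

With `n = 5` observations in `p = 10` dimensions, `np.cov(X)` itself would be singular; the augmented resamples are not.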
34.
The objective of this research was twofold. First, the performance of the tetrad protocol was compared to that of the triangle test under conditions that could plausibly lower its sensitivity and thereby erase its theoretical power advantage. Second, the same samples were compared in a preference test to investigate whether a no-difference conclusion from a discrimination test would consistently correspond to a non-significant preference (consumer relevance). The investigation involved sensory differences that could be deemed small (d′ values below 1.0) as well as a comparison of resampling vs. no-resampling conditions. 456 consumers performed tests using apple and orange juices in which slight sensory differences had been created through dilution. In all conditions, the tetrad elicited more correct answers than the triangle, confirming its greater statistical power. It was therefore concluded that even for small sensory differences, and in conditions where sensory fatigue could play a greater role (resampling allowed), the tetrad test still appears to be a good alternative to the triangle. The theoretical increase in performance predicted when allowing sample resampling was also confirmed.
For the preference study, the same stimuli were evaluated by 208 subjects. Consumer relevance was defined as a significant result between two products in a preference test (assuming no population segmentation). Such significant preferences were found for three of the four conditions, including the one with the smallest difference, for which no significant result had been found with either the tetrad or the triangle. The non-significant preference in the fourth condition was attributed to segmentation in the population. This investigation therefore provides further confirmation that the tetrad test is a viable alternative to the triangle test, exhibiting greater statistical power even in conditions that could potentially affect it negatively. It was also shown that a non-significant sensory difference can still yield a significant preference, underlining the necessity of going beyond the simple use of a 'more powerful' discrimination test when making decisions, and of establishing the actual consumer relevance of an underlying sensory difference.
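Both the triangle and the tetrad protocols have a 1/3 guessing level (the tetrad's power advantage comes from its lower perceptual variance, not a different chance level), so the significance of a count of correct answers can be checked with a one-sided exact binomial test; the function name and form below are assumptions, not the paper's analysis:

```python
from math import comb

def binomial_p_value(correct, n, p_guess=1.0 / 3.0):
    """One-sided exact binomial test: P(X >= correct) when panelists
    are purely guessing with probability p_guess of a correct answer."""
    return sum(comb(n, k) * p_guess**k * (1 - p_guess)**(n - k)
               for k in range(correct, n + 1))
```

For example, more correct answers out of the same number of trials always yields a smaller p-value, which is why the tetrad's higher correct counts translate into greater power.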
35.
Resampling-based software for estimating optimal sample size
The SISSI program implements a novel approach to estimating the optimal sample size in experimental data collection. It provides a visual evaluation system for sample size determination, derived from a resampling-based (jackknife) procedure. The approach makes intensive use of the sample data by systematically taking sub-samples of the original data set and calculating the mean and standard deviation of each sub-sample. It thereby avoids the typical limitation of conventional methods, which require statistical assumptions that the data must satisfy. Visual, easy-to-interpret displays show how the means and standard deviations vary as the size of the generated samples increases. An automatic option identifies the optimal sample size as the size at which the rate of change of the means becomes negligible; alternatively, a manual option can be applied. An ideal application of SISSI is supporting the collection of plant and soil samples from field-grown crops, but it also holds potential for more general use. SISSI is developed in Visual Basic and runs under the Windows operating systems. The installation package includes the executable files and a hypertext help file. SISSI is freely available for non-profit applications.
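The subsample-and-inspect procedure attributed to SISSI can be sketched as follows; the draw count, stopping tolerance, and function names are assumptions, not SISSI's actual implementation:

```python
import random
import statistics

def subsample_profile(data, n_draws=200, seed=1):
    """For each subsample size k = 2..len(data), draw random subsamples
    and record the average of their means -- the curve the program is
    described as plotting (standard deviations could be profiled the
    same way)."""
    rng = random.Random(seed)
    profile = {}
    for k in range(2, len(data) + 1):
        means = [statistics.mean(rng.sample(data, k)) for _ in range(n_draws)]
        profile[k] = statistics.mean(means)
    return profile

def optimal_size(profile, tol=0.01):
    """Smallest size whose mean changes by less than tol (relative)
    from the previous size -- mimicking the automatic option."""
    sizes = sorted(profile)
    for prev, cur in zip(sizes, sizes[1:]):
        denom = abs(profile[prev]) or 1.0
        if abs(profile[cur] - profile[prev]) / denom < tol:
            return cur
    return sizes[-1]
```

The manual option described in the abstract would correspond to inspecting the `profile` curve by eye instead of applying `optimal_size`.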
36.
In this paper, we describe resource-efficient hardware architectures for software-defined radio (SDR) front-ends. These architectures are made efficient by using a polyphase channelizer that performs arbitrary sample-rate changes, frequency selection, and bandwidth control. We discuss area, time, and power optimization for field-programmable gate array (FPGA) based architectures in an M-path polyphase filter bank with a modified N-path polyphase filter. Such systems allow resampling by arbitrary ratios while simultaneously performing baseband aliasing from center frequencies at Nyquist zones that are not multiples of the output sample rate. A non-maximally decimated polyphase filter bank, in which the number of data loads is not equal to the number of subfilters M, processes the M subfilters in a time period that is either less than or greater than the M data-loads' time period. We present a load-process architecture (LPA) and a runtime architecture (RA), based on a serial polyphase structure, with different scheduling. In LPA, N subfilters are loaded and then M subfilters are processed at a clock rate that is a multiple of the input data rate; this is necessary to meet the output time constraint on the down-sampled data. In RA, the processing of the M subfilters is efficiently scheduled within the N data-load time while the N subfilters are simultaneously loaded; this requires lower clock rates than LPA and potentially consumes less power. A polyphase filter bank using different resampling factors for maximally decimated, under-decimated, over-decimated, and combined up- and down-sampled scenarios serves as a case study, and an analysis of area, time, and power for the corresponding FPGA architectures is given. For resource-optimized SDR front-ends, RA is superior in reducing operating clock rates and dynamic power consumption. RA is also superior in reducing area resources, except when indices are pre-stored in LUTs.
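The M-path polyphase partition underlying such channelizers can be illustrated in software; this didactic sketch implements plain polyphase decimation (each path filters one interleaved input stream at the low rate), not the paper's LPA/RA hardware scheduling:

```python
import numpy as np

def polyphase_decimate(x, h, m):
    """Decimate x by integer factor m with an m-path polyphase
    partition of FIR filter h. Path k holds taps h[k], h[k+m], ...
    and filters the input stream x[n*m - k]; summing the path
    outputs equals filtering then downsampling, but each path runs
    at 1/m of the input rate."""
    x = np.asarray(x, dtype=float)
    h = np.asarray(h, dtype=float)
    hp = np.concatenate([h, np.zeros((-len(h)) % m)])  # pad to multiple of m
    e = hp.reshape(-1, m).T                            # e[k] = h[k::m]
    n_out = (len(x) + len(h) - 1 + m - 1) // m         # len of decimated output
    y = np.zeros(n_out)
    for k in range(m):
        xk = np.concatenate([np.zeros(k), x])[::m]     # x_k[n] = x[n*m - k]
        yk = np.convolve(e[k], xk)[:n_out]
        y[:len(yk)] += yk
    return y
```

The payoff, mirrored in the FPGA architectures, is that all multiplications happen at the output rate rather than the input rate.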
37.
The performance of a learning-based method depends strongly on the quality of its training set. However, collecting an efficient and effective training set for a good classifier is very challenging because of the high dimensionality of the feature space and the complexity of the decision boundaries. In this research, we study how to automatically obtain an optimal training set for robust face detection by resampling the collected training set. We propose a genetic algorithm (GA) and manifold-based method to resample a given training set for more robust face detection. The motivation is twofold: (1) the dynamic optimization, diversity, and consistency of the training samples are cultivated by the evolutionary nature of the GA, and (2) the desirable non-linearity of the training set is preserved by the manifold-based resampling. We demonstrate the effectiveness of the proposed method through experiments and comparisons with other existing face detectors. The system trained on the training set produced by the proposed method achieved 90.73% accuracy with no false alarms on the MIT+CMU frontal face test set, the best result reported so far to our knowledge. Moreover, as a fully automatic technique, the proposed method can significantly facilitate the preparation of training sets for building well-performing object detection systems in different applications.
38.
To mitigate the particle degeneracy and particle impoverishment problems in the resampling step of the Unscented FastSLAM 2.0 algorithm, this paper proposes an Unscented FastSLAM 2.0 algorithm based on gravitational-field optimization. First, an unscented particle filter replaces the extended Kalman filter for estimating the posterior probability of the mobile robot's path; then an extended Kalman filter updates the estimate of the environment; finally, the resampling step is optimized with ideas from gravitational-field optimization. In the resampling step, each sampled particle is treated as a piece of cosmic dust: the gravitational field's move factor drives the particle set to approach the robot's true pose state more quickly, alleviating particle degeneracy, while the self-rotation of the rotation factor prevents the particles from over-concentrating, preserving particle diversity. Experimental results demonstrate the effectiveness of the algorithm.
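For reference, the standard systematic (low-variance) resampling step that the gravitational-field operators modify can be sketched as follows; the gravitational move and rotation factors themselves are not reproduced here:

```python
import random

def systematic_resample(particles, weights, seed=0):
    """Systematic (low-variance) resampling: one uniform draw, then
    n evenly spaced pointers sweep the cumulative weight distribution.
    Heavily weighted particles are duplicated, lightly weighted ones
    dropped -- the behavior that causes the degeneracy/impoverishment
    trade-off the abstract addresses."""
    rng = random.Random(seed)
    n = len(particles)
    total = sum(weights)
    step = total / n
    u = rng.uniform(0, step)  # single random offset
    out, cum, i = [], weights[0], 0
    for _ in range(n):
        while u > cum:        # advance to the particle covering pointer u
            i += 1
            cum += weights[i]
        out.append(particles[i])
        u += step
    return out
```

A particle carrying all the weight is duplicated n times, illustrating the impoverishment the proposed rotation factor counteracts.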
39.
In real-world problems, the sample pairs used for similarity learning are imbalanced: the number of similar pairs is far smaller than the number of dissimilar pairs. To address this problem, this paper proposes two pair-construction methods: dissimilar K-nearest-neighbor with similar K-nearest-neighbor (DKNN-SKNN) and dissimilar K-nearest-neighbor with similar K-farthest-neighbor (DKNN-SKFN). These methods select similarity-learning sample pairs in a targeted way, which not only speeds up the training of support vector machines but also alleviates, to some extent, the imbalance between sample pairs. Comparative experiments against classical resampling methods on several data sets show that DKNN-SKNN and DKNN-SKFN perform well.
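A sketch of pair construction in the spirit of DKNN-SKNN/DKNN-SKFN: dissimilar pairs come from each point's k nearest other-class neighbors, similar pairs from its k nearest (or, for SKFN, k farthest) same-class neighbors. The distance metric, tie handling, and function names are assumptions beyond the abstract:

```python
import numpy as np

def build_pairs(X, y, k=2, farthest_similar=False):
    """Return (similar, dissimilar) index-pair lists selected by
    class-aware k-nearest neighbors, instead of enumerating all pairs."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)  # pairwise dist
    similar, dissimilar = [], []
    for i in range(len(X)):
        same = np.where((y == y[i]) & (np.arange(len(X)) != i))[0]
        diff = np.where(y != y[i])[0]
        order_same = same[np.argsort(d[i, same])]
        if farthest_similar:               # SKFN variant: farthest same-class
            order_same = order_same[::-1]
        similar += [(i, int(j)) for j in order_same[:k]]
        dissimilar += [(i, int(j)) for j in diff[np.argsort(d[i, diff])][:k]]
    return similar, dissimilar
```

Since each point contributes at most k pairs of each kind, the two pair types stay balanced by construction.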
40.
Resampling of band-limited signals with a variable carrier frequency generally reduces to interpolating and decimating the original sample sequence by a conversion ratio P/Q (P the interpolation factor, Q the decimation factor). When P is large, a multi-branch interpolation filter is needed, and because of the anti-imaging requirement the filter coefficient matrix becomes very large, making high-order resampling difficult to implement. This paper proposes a polynomial-approximation filter method that approximates the interpolation-filter coefficient matrix with a set of low-order polynomials, simplifying the filter structure, achieving high computational efficiency, and allowing the interpolation delay to be changed arbitrarily. Computer simulation results show that this structure is suitable for high-order band-limited interpolation filters with variable delay, and that under certain conditions the error remains within an acceptable range.
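The core idea, replacing a large interpolation-filter coefficient matrix with low-order polynomials in the fractional delay, can be illustrated with a cubic Lagrange interpolator whose taps are polynomials in the delay mu; the Lagrange choice is an illustrative assumption, not necessarily the paper's polynomials:

```python
def lagrange_frac_delay(x, n, mu):
    """Evaluate a variable fractional-delay interpolant at x[n + mu],
    0 <= mu < 1, from the four samples x[n-1..n+2]. Each tap is a
    cubic polynomial in mu, so only the polynomial coefficients are
    stored -- no per-phase coefficient matrix, and mu (the delay)
    can change arbitrarily from sample to sample."""
    lm1 = -mu * (mu - 1.0) * (mu - 2.0) / 6.0
    l0 = (mu + 1.0) * (mu - 1.0) * (mu - 2.0) / 2.0
    l1 = -(mu + 1.0) * mu * (mu - 2.0) / 2.0
    l2 = (mu + 1.0) * mu * (mu - 1.0) / 6.0
    return lm1 * x[n - 1] + l0 * x[n] + l1 * x[n + 1] + l2 * x[n + 2]
```

Driving `n` and `mu` from a phase accumulator then yields resampling by an arbitrary, even time-varying, ratio without a large stored coefficient matrix.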