31.
Interannual and intra-annual variations in snow cover strongly affect the regional and global water balance, and the snow-albedo feedback also exerts a significant influence on climate change. Long time series of gridded snow depth data currently come mainly from passive microwave remote sensing and reanalysis products, but there are clear discrepancies among the different datasets. Evaluations based on multi-source snow depth data, especially of their spatial characteristics, are still scarce. This study therefore selected five snow depth datasets (AMSR-E, WESTDC, GlobSnow, ERA-Interim, and MERRA2) and, taking station observations as the reference truth, compared their spatial errors over China and analyzed their relative performance by error ranking. The preliminary evaluation results show that: (1) WESTDC performs well in the snow-covered regions of Northwest and Northeast China and is suitable for snow depth studies in northern China; (2) MERRA2 also performs well in the Northwest and Northeast snow regions, but its coarse resolution lacks detailed spatial information, so it is considered more appropriate for statistical analyses over large regions; (3) AMSR-E performs best in central and southeastern China and is therefore considered suitable for snow depth studies in those areas.
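The comparison workflow sketched below is a hedged illustration of the kind of station-based evaluation the abstract describes: each gridded product is sampled at the grid cell nearest every station, bias and RMSE are computed against the station truth, and products are ranked by error. The grids, station coordinates, and values are random stand-ins; none of the real AMSR-E/WESTDC/GlobSnow/ERA-Interim/MERRA2 data are read here.

```python
# Minimal sketch (not the paper's code) of a station-based snow depth comparison:
# sample each gridded product at the nearest grid cell to every station, compute
# bias/RMSE per product, and rank products by error. Shapes and values are toy data.
import numpy as np

def nearest_cell_values(grid, grid_lat, grid_lon, st_lat, st_lon):
    """Pick the grid value nearest to each station (regular lat/lon grid assumed)."""
    i = np.abs(grid_lat[:, None] - st_lat[None, :]).argmin(axis=0)
    j = np.abs(grid_lon[:, None] - st_lon[None, :]).argmin(axis=0)
    return grid[i, j]

def error_stats(pred, obs):
    diff = pred - obs
    return {"bias": diff.mean(), "rmse": np.sqrt((diff ** 2).mean())}

# toy data standing in for one day of gridded products and station observations
rng = np.random.default_rng(0)
grid_lat, grid_lon = np.linspace(25, 50, 100), np.linspace(75, 130, 220)
st_lat, st_lon = rng.uniform(25, 50, 300), rng.uniform(75, 130, 300)
station_obs = rng.gamma(2.0, 3.0, 300)                 # station snow depth (cm)
products = {name: rng.gamma(2.0, 3.0, (100, 220))      # stand-ins for the real products
            for name in ["AMSR-E", "WESTDC", "GlobSnow", "ERA-Interim", "MERRA2"]}

scores = {name: error_stats(nearest_cell_values(g, grid_lat, grid_lon, st_lat, st_lon),
                            station_obs)
          for name, g in products.items()}
for name, s in sorted(scores.items(), key=lambda kv: kv[1]["rmse"]):   # rank by RMSE
    print(f"{name:12s} bias={s['bias']:+.2f} cm  rmse={s['rmse']:.2f} cm")
```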
32.
Feature selection is a process aimed at filtering out unrepresentative features from a given dataset, usually allowing the later data mining and analysis steps to produce better results. However, different feature selection algorithms use different criteria to select representative features, making it difficult to find the best algorithm for different domain datasets. The limitations of single feature selection methods can be overcome by the application of ensemble methods, combining multiple feature selection results. In the literature, feature selection algorithms are classified as filter, wrapper, or embedded techniques. However, to the best of our knowledge, there has been no study focusing on combining these three types of techniques to produce ensemble feature selection. Therefore, the aim here is to answer the question as to which combination of different types of feature selection algorithms offers the best performance for different types of medical data including categorical, numerical, and mixed data types. The experimental results show that a combination of filter (i.e., principal component analysis) and wrapper (i.e., genetic algorithms) techniques by the union method is a better choice, providing relatively high classification accuracy and a reasonably good feature reduction rate.
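As a rough illustration of the union ensemble described above, the sketch below lets a filter-style selector (features ranked by their absolute loadings on the leading principal components) and a wrapper selector each pick a subset and then merges the two by union. The wrapper here is sklearn's forward sequential selection scored by k-NN, standing in for the paper's genetic algorithm, and the breast-cancer dataset and subset sizes are assumptions for illustration only.

```python
# Sketch of a filter + wrapper ensemble combined by union (illustrative settings).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.decomposition import PCA
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_breast_cancer(return_X_y=True)
k = 8                                                    # features per selector (assumed)

# filter part: rank original features by absolute loading on the top principal components
pca = PCA(n_components=5).fit(X)
importance = np.abs(pca.components_).sum(axis=0)
filter_set = set(np.argsort(importance)[-k:].tolist())

# wrapper part: forward sequential selection scored by k-NN (stand-in for the GA wrapper)
sfs = SequentialFeatureSelector(KNeighborsClassifier(), n_features_to_select=k, cv=3)
wrapper_set = set(np.flatnonzero(sfs.fit(X, y).get_support()).tolist())

# ensemble by union of the two subsets
union = sorted(filter_set | wrapper_set)
acc = cross_val_score(KNeighborsClassifier(), X[:, union], y, cv=5).mean()
print("filter set:  ", sorted(filter_set))
print("wrapper set: ", sorted(wrapper_set))
print(f"union ({len(union)} features), 5-fold accuracy: {acc:.3f}")
```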
33.
The algorithm for halftoning 2D gray scale images based on subdivision was extended to the processing of colour volume datasets. Two main improvements were made. The first is adding a procedure to deal with cases with large errors so as to reduce the total quantizing error. The second is randomizing the directions in which errors are propagated, assuring the even distribution of the halftoned binary voxels. In addition, a method used to process large volume datasets was also proposed. The new algorithm is simple in principle, but produces good halftoning results, especially in the boundary regions. It is particularly applicable to data preparation for the rapid forming of coloured models and heterogeneous objects.
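A much-simplified 2D sketch of the randomized error-propagation idea is given below: each pixel is thresholded to 0/1 and its quantization error is handed to a randomly chosen not-yet-processed neighbour, which spreads the error more evenly than a fixed scan order. The paper's algorithm operates on colour volume data with subdivision and a dedicated large-error procedure; none of that is reproduced here.

```python
# Simplified 2D sketch of randomized error propagation (a grey-scale image stands in
# for the colour volume case). Illustrative only, not the authors' algorithm.
import numpy as np

def randomized_halftone(img, seed=0):
    rng = np.random.default_rng(seed)
    work = img.astype(float).copy()
    out = np.zeros_like(work, dtype=np.uint8)
    h, w = work.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = 1 if work[y, x] >= 0.5 else 0
            err = work[y, x] - out[y, x]
            # candidate neighbours that have not been processed yet
            cands = [(y, x + 1), (y + 1, x - 1), (y + 1, x), (y + 1, x + 1)]
            cands = [(r, c) for r, c in cands if 0 <= r < h and 0 <= c < w]
            if cands:
                r, c = cands[rng.integers(len(cands))]   # randomly chosen direction
                work[r, c] += err                        # propagate the quantization error
    return out

gradient = np.tile(np.linspace(0, 1, 64), (64, 1))       # toy input: horizontal ramp
binary = randomized_halftone(gradient)
print("mean grey level:", gradient.mean().round(3), "-> binary density:", binary.mean().round(3))
```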
34.
Classification of imbalanced datasets is one of the hot research topics in machine learning. In recent years researchers have proposed many theories and algorithms to improve the performance of traditional classification techniques on imbalanced datasets; among these, using a threshold criterion to determine the decision threshold of a neural network is an important approach. Commonly used threshold criteria have drawbacks, such as being unable to maximize the classification accuracy of the minority and majority classes simultaneously, or being overly biased toward majority-class accuracy. A new threshold criterion is therefore proposed, under which the accuracies of both the minority and majority classes can be maximized simultaneously without being affected by the class ratio of the samples. Using a classifier trained by a neural network combined with a genetic algorithm, with the new criterion serving as both the threshold selection condition and the classifier evaluation measure, good results are obtained.
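The sketch below illustrates the general idea of a class-ratio-insensitive threshold criterion: candidate thresholds are scanned and the one maximizing the geometric mean of minority- and majority-class accuracy is kept. The geometric-mean criterion, the logistic-regression scorer, and the synthetic data are assumptions for illustration; the paper defines its own criterion and trains the network with a genetic algorithm.

```python
# Sketch: pick a decision threshold by a criterion that treats both class accuracies
# symmetrically (geometric mean used here as an assumed stand-in for the paper's criterion).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

def per_class_accuracy(y_true, y_pred):
    acc_min = (y_pred[y_true == 1] == 1).mean()   # minority-class accuracy (recall)
    acc_maj = (y_pred[y_true == 0] == 0).mean()   # majority-class accuracy (specificity)
    return acc_min, acc_maj

best_t, best_g = 0.5, -1.0
for t in np.linspace(0.01, 0.99, 99):             # scan candidate thresholds
    a1, a0 = per_class_accuracy(y_te, (scores >= t).astype(int))
    g = np.sqrt(a1 * a0)                          # high only if both accuracies are high
    if g > best_g:
        best_t, best_g = t, g

default = per_class_accuracy(y_te, (scores >= 0.5).astype(int))
chosen = per_class_accuracy(y_te, (scores >= best_t).astype(int))
print(f"default 0.50 threshold: minority={default[0]:.2f}, majority={default[1]:.2f}")
print(f"selected {best_t:.2f} threshold: minority={chosen[0]:.2f}, majority={chosen[1]:.2f}")
```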
35.
黄再祥  周忠眉  何田中 《计算机科学》2014,41(2):111-113,122
Many studies have shown that associative classification achieves high classification accuracy. However, most associative classification methods are based on the support-confidence framework, and on imbalanced datasets both support and confidence are biased toward generating majority-class rules, so minority-class instances are easily misclassified. To address this problem, an associative classification algorithm for imbalanced data based on correlated rules is proposed. The algorithm mines frequent and mutually correlated itemsets and, among the classification rules whose antecedent is such an itemset, selects the rule with the largest lift. Rules are ranked by a rule strength that combines lift, confidence, and complement class support (CCS). Experiments show that the algorithm achieves a high average classification accuracy and is more accurate when classifying minority-class instances.
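The toy sketch below computes, for candidate rules of the form antecedent → class, the support, confidence, lift, and complement class support (CCS, taken here as the antecedent's support outside the rule's class), and ranks rules by a combined strength. The particular combination used (lift × confidence × (1 − CCS)) and the tiny transaction table are assumptions for illustration; the paper defines its own rule-strength measure and first mines frequent, mutually correlated itemsets.

```python
# Toy sketch of rule measures for associative classification on imbalanced data.
from itertools import combinations

# each record: (set of items, class label); "pos" is the minority class
records = [
    ({"a", "b"}, "pos"), ({"a", "c"}, "pos"), ({"a", "b", "c"}, "pos"),
    ({"b"}, "neg"), ({"b", "c"}, "neg"), ({"c"}, "neg"),
    ({"a", "b"}, "neg"), ({"b", "c"}, "neg"), ({"c", "d"}, "neg"), ({"b", "d"}, "neg"),
]
n = len(records)

def support(itemset, label=None):
    hits = [r for r in records if itemset <= r[0] and (label is None or r[1] == label)]
    return len(hits) / n

rules = []
items = sorted({i for r in records for i in r[0]})
for size in (1, 2):
    for combo in combinations(items, size):
        ante = set(combo)
        for label in ("pos", "neg"):
            sup_x = support(ante)                     # P(antecedent)
            sup_xy = support(ante, label)             # P(antecedent and class)
            sup_y = support(set(), label)             # P(class)
            if sup_xy == 0:
                continue
            conf = sup_xy / sup_x
            lift = conf / sup_y
            ccs = sup_x - sup_xy                      # antecedent support in the other classes
            strength = lift * conf * (1 - ccs)        # assumed combination for this sketch
            rules.append((strength, sorted(ante), label, conf, lift, ccs))

for strength, ante, label, conf, lift, ccs in sorted(rules, key=lambda r: r[0], reverse=True)[:5]:
    print(f"{ante} -> {label}: strength={strength:.2f} conf={conf:.2f} lift={lift:.2f} ccs={ccs:.2f}")
```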
36.
This work studies the prediction of three kinds of temporal centrality of nodes in complex networks. Analysis of node centrality values at different times in real datasets shows that a node's temporal centrality at different times is strongly correlated. Based on this observation, several prediction methods are proposed to forecast the future temporal centrality of nodes in real datasets. By analyzing the errors between the true and predicted values, the prediction performance of the different methods on different real datasets is compared. The results show that the weighted average over recent time windows performs best on the MIT dataset, while the plain average over recent time windows performs best on the Infocom 06 dataset.
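The two window-based predictors compared in the abstract can be sketched as follows: a node's centrality in the next time window is predicted either as the plain average or as a recency-weighted average of its centrality over the last w windows, and both are scored by mean absolute error. The synthetic centrality series stands in for the MIT and Infocom 06 traces, and w and the weighting scheme are assumptions.

```python
# Sketch of recent-window average vs. recent-window weighted average prediction
# of temporal centrality, scored by mean absolute error on a synthetic series.
import numpy as np

rng = np.random.default_rng(1)
T, n_nodes, w = 40, 50, 5
base = rng.random(n_nodes)
series = np.clip(base[None, :] + 0.1 * rng.standard_normal((T, n_nodes)), 0, 1)  # (time, node)

def predict_mean(hist):                       # plain average of the last w windows
    return hist[-w:].mean(axis=0)

def predict_weighted(hist):                   # linearly increasing weights: recent counts more
    weights = np.arange(1, w + 1, dtype=float)
    return (hist[-w:] * weights[:, None]).sum(axis=0) / weights.sum()

errors = {"recent-window average": [], "recent-window weighted average": []}
for t in range(w, T - 1):
    hist, truth = series[: t + 1], series[t + 1]
    errors["recent-window average"].append(np.abs(predict_mean(hist) - truth).mean())
    errors["recent-window weighted average"].append(np.abs(predict_weighted(hist) - truth).mean())

for name, errs in errors.items():
    print(f"{name}: MAE = {np.mean(errs):.4f}")
```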
37.
38.
Aggregated Conformal Prediction is used as an effective alternative to other, more complicated and/or ambiguous methods involving various balancing measures when modelling severely imbalanced datasets. Additional explicit balancing measures, beyond those already part of the Conformal Prediction framework, are shown not to be required. The Aggregated Conformal Prediction procedure appears to be a promising approach for severely imbalanced datasets, retrieving a large majority of active minority-class compounds while avoiding information loss or distortion.
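A minimal sketch of an aggregated inductive conformal predictor on an imbalanced two-class problem is given below: several random proper-training/calibration splits each produce class-wise p-values for the test points, the p-values are averaged across splits, and a prediction set keeps every class whose averaged p-value exceeds the significance level. The random-forest scorer, class-conditional calibration, number of splits, and significance level are assumptions for illustration, not the paper's setup.

```python
# Minimal aggregated (inductive) conformal prediction sketch for an imbalanced problem.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4000, weights=[0.95, 0.05], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

def icp_pvalues(seed):
    X_p, X_cal, y_p, y_cal = train_test_split(X_tr, y_tr, test_size=0.3,
                                              stratify=y_tr, random_state=seed)
    clf = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X_p, y_p)
    alpha_cal = 1.0 - clf.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]  # nonconformity
    proba_te = clf.predict_proba(X_te)
    pvals = np.zeros((len(X_te), 2))
    for c in (0, 1):                                   # class-conditional calibration
        cal_c = alpha_cal[y_cal == c]
        alpha_te = 1.0 - proba_te[:, c]
        pvals[:, c] = (1 + (cal_c[None, :] >= alpha_te[:, None]).sum(axis=1)) / (len(cal_c) + 1)
    return pvals

p = np.mean([icp_pvalues(s) for s in range(10)], axis=0)   # aggregate over 10 ICPs
eps = 0.1                                                  # significance level (assumed)
pred_sets = p > eps
minority = y_te == 1
print("minority-class coverage:", pred_sets[minority, 1].mean().round(3))
print("average prediction-set size:", pred_sets.sum(axis=1).mean().round(2))
```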
39.
Modern digital data production methods, such as computer simulation and remote sensing, have vastly increased the size and complexity of data collected over spatial domains. Analysis of these large spatial datasets for scientific inquiry is typically carried out using the Gaussian process. However, nonstationary behavior and computational requirements for large spatial datasets can prohibit efficient implementation of Gaussian process models. To perform computationally feasible inference for large spatial data, we consider partitioning a spatial region into disjoint sets using hierarchical clustering of observations and finite differences as a measure of dissimilarity. Intuitively, directions with large finite differences indicate directions of rapid increase or decrease and are, therefore, appropriate for partitioning the spatial region. Spatial contiguity of the resulting clusters is enforced by only clustering Voronoi neighbors. Following spatial clustering, we propose a nonstationary Gaussian process model across the clusters, which allows the computational burden of model fitting to be distributed across multiple cores and nodes. The methodology is primarily motivated and illustrated by an application to the validation of digital temperature data over the city of Houston as well as simulated datasets. Supplementary materials for this article are available online.
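The partition-then-model idea can be sketched as below: observations are clustered agglomeratively, with merges restricted to Delaunay (Voronoi) neighbours so that clusters stay spatially contiguous, using a local finite-difference feature so that regions of rapid change separate; an independent Gaussian process is then fitted within each cluster. The synthetic surface, feature definition, kernel, and cluster count are assumptions for illustration only.

```python
# Sketch: spatially contiguous clustering via Delaunay-neighbour connectivity,
# followed by one local Gaussian process per cluster (illustrative settings).
import numpy as np
from scipy.sparse import coo_matrix
from scipy.spatial import Delaunay
from sklearn.cluster import AgglomerativeClustering
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
pts = rng.uniform(0, 1, (400, 2))                     # observation locations
z = np.where(pts[:, 0] > 0.5, 5.0, 0.0) + np.sin(6 * pts[:, 1]) + 0.1 * rng.standard_normal(400)

# Voronoi-neighbour (Delaunay) connectivity between observations
tri = Delaunay(pts)
rows, cols = [], []
for simplex in tri.simplices:
    for i in simplex:
        for j in simplex:
            if i != j:
                rows.append(i); cols.append(j)
conn = coo_matrix((np.ones(len(rows)), (rows, cols)), shape=(len(pts), len(pts)))

# finite-difference feature: mean absolute difference of z to Delaunay neighbours
neigh_diff = np.zeros(len(pts))
for i, nbrs in enumerate(conn.tolil().rows):
    neigh_diff[i] = np.abs(z[list(nbrs)] - z[i]).mean()

features = np.column_stack([z, neigh_diff])
labels = AgglomerativeClustering(n_clusters=4, connectivity=conn).fit_predict(features)

# fit one (locally stationary) GP inside each spatially contiguous cluster
for c in np.unique(labels):
    idx = labels == c
    gp = GaussianProcessRegressor(kernel=RBF(0.2) + WhiteKernel(0.1), normalize_y=True)
    gp.fit(pts[idx], z[idx])
    print(f"cluster {c}: {idx.sum()} points, fitted kernel = {gp.kernel_}")
```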
40.
Heart disease (HD) is a serious widespread life-threatening disease. The heart of patients with HD fails to pump sufficient amounts of blood to the entire body. Diagnosing the occurrence of HD early and efficiently may prevent the manifestation of the debilitating effects of this disease and aid in its effective treatment. Classical methods for diagnosing HD are sometimes unreliable and insufficient in analyzing the related symptoms. As an alternative, noninvasive medical procedures based on machine learning (ML) methods provide reliable HD diagnosis and efficient prediction of HD conditions. However, the existing models of automated ML-based HD diagnostic methods cannot satisfy clinical evaluation criteria because of their inability to recognize anomalies in extracted symptoms represented as classification features from patients with HD. In this study, we propose an automated heart disease diagnosis (AHDD) system that integrates a binary convolutional neural network (CNN) with a new multi-agent feature wrapper (MAFW) model. The MAFW model consists of four software agents that operate a genetic algorithm (GA), a support vector machine (SVM), and Naïve Bayes (NB). The agents instruct the GA to perform a global search on HD features and adjust the weights of the SVM and NB during initial classification. A final tuning of the CNN is then performed to ensure that the best set of features is included in HD identification. The CNN consists of five layers that categorize patients as healthy or with HD according to the analysis of optimized HD features. We evaluate the classification performance of the proposed AHDD system against 12 common ML techniques and conventional CNN models by using a cross-validation technique and six evaluation criteria. The AHDD system achieves the highest accuracy of 90.1%, whereas the other ML and conventional CNN models attain only 72.3%–83.8% accuracy on average. Therefore, the AHDD system proposed herein has the highest capability to identify patients with HD. This system can be used by medical practitioners to diagnose HD efficiently.
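A rough sketch of the wrapper-then-network pipeline is given below: a small evolutionary search (standing in for the GA agent) scores candidate feature subsets with an SVM and Naïve Bayes, and the winning subset is passed to a neural network; an MLP and the breast-cancer dataset stand in for the paper's CNN and heart-disease data. All agents, settings, and data here are assumptions for illustration.

```python
# Sketch of evolutionary feature selection scored by SVM + NB, then a neural network
# trained on the selected features (illustrative stand-ins, not the AHDD system).
import numpy as np
from sklearn.datasets import load_breast_cancer          # stand-in for an HD dataset
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
rng = np.random.default_rng(0)
n_features = X.shape[1]

def subset_score(mask):
    """Average cross-validated accuracy of the SVM and NB 'agents' on a feature subset."""
    if mask.sum() == 0:
        return 0.0
    Xs = X[:, mask]
    svm = make_pipeline(StandardScaler(), SVC())
    return (cross_val_score(svm, Xs, y, cv=3).mean() +
            cross_val_score(GaussianNB(), Xs, y, cv=3).mean()) / 2

pop = rng.random((10, n_features)) < 0.4                  # initial random feature subsets
for _ in range(10):                                       # simplified evolutionary loop
    scores = np.array([subset_score(m) for m in pop])
    parents = pop[np.argsort(scores)[-5:]]                # keep the best half
    children = parents[rng.integers(0, 5, 5)].copy()
    flip = rng.random(children.shape) < 0.05              # mutate a few bits
    children[flip] = ~children[flip]
    pop = np.vstack([parents, children])
best = pop[np.argmax([subset_score(m) for m in pop])]

# final classifier on the selected features (MLP as a stand-in for the binary CNN)
X_tr, X_te, y_tr, y_te = train_test_split(X[:, best], y, stratify=y, random_state=0)
net = make_pipeline(StandardScaler(), MLPClassifier(hidden_layer_sizes=(32, 16),
                                                    max_iter=1000, random_state=0))
print("selected features:", int(best.sum()), "of", n_features)
print("hold-out accuracy:", round(net.fit(X_tr, y_tr).score(X_te, y_te), 3))
```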