Full-text access type
Paid full text | 197 papers |
Free | 32 papers |
Free (domestic) | 25 papers |
Subject classification
Electrical engineering | 4 papers |
General | 9 papers |
Chemical industry | 9 papers |
Machinery and instrumentation | 3 papers |
Energy and power | 3 papers |
Light industry | 4 papers |
Water conservancy engineering | 6 papers |
Petroleum and natural gas | 2 papers |
Radio electronics | 20 papers |
General industrial technology | 6 papers |
Metallurgical industry | 1 paper |
Automation technology | 187 papers |
Publication year
2025 | 6 papers |
2024 | 21 papers |
2023 | 15 papers |
2022 | 14 papers |
2021 | 13 papers |
2020 | 19 papers |
2019 | 10 papers |
2018 | 13 papers |
2017 | 12 papers |
2016 | 16 papers |
2015 | 3 papers |
2014 | 16 papers |
2013 | 11 papers |
2012 | 15 papers |
2011 | 12 papers |
2010 | 10 papers |
2009 | 10 papers |
2008 | 8 papers |
2007 | 4 papers |
2006 | 8 papers |
2005 | 1 paper |
2004 | 4 papers |
2003 | 4 papers |
2002 | 3 papers |
2001 | 2 papers |
1997 | 1 paper |
1993 | 1 paper |
1986 | 2 papers |
11.
An incremental sequential pattern mining algorithm based on projected databases  Total citations: 1 (self-citations: 0, citations by others: 1)
An incremental sequence update algorithm, Inc_SPM, based on projected databases is proposed; it builds on the PrefixSpan algorithm. The algorithm first uses existing knowledge to obtain the frequent 1-sequences, then generates projected databases to iteratively produce the frequent k-sequences. Meanwhile, to control the size of the projected databases, equivalent projected databases are used to improve the projection termination condition.
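As a rough illustration of the projected-database idea that PrefixSpan (and hence Inc_SPM) relies on, the following minimal Python sketch builds the projection of a sequence database with respect to a prefix item. The function and variable names are hypothetical; Inc_SPM's incremental bookkeeping and its refined termination condition are not shown.

```python
# Minimal sketch of PrefixSpan-style projection (hypothetical names).
# A sequence is a list of items; the projected database w.r.t. an item keeps,
# for each sequence containing it, the suffix after its first occurrence.

def project(database, item):
    """Return the projected database of `database` with respect to `item`."""
    projected = []
    for seq in database:
        if item in seq:
            pos = seq.index(item)      # first occurrence of the prefix item
            suffix = seq[pos + 1:]     # suffix that follows it
            if suffix:                 # empty suffixes carry no information
                projected.append(suffix)
    return projected

def frequent_items(database, min_support):
    """Items occurring in at least `min_support` sequences (frequent 1-sequences)."""
    counts = {}
    for seq in database:
        for it in set(seq):
            counts[it] = counts.get(it, 0) + 1
    return {it for it, c in counts.items() if c >= min_support}

db = [list("abcb"), list("abbc"), list("bca")]
print(frequent_items(db, 2))   # frequent 1-sequences, e.g. {'a', 'b', 'c'}
print(project(db, "a"))        # [['b', 'c', 'b'], ['b', 'b', 'c']]
```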
12.
Video tracking is a complex problem because the environment in which video motion needs to be tracked varies widely with the application and imposes several constraints on the design and performance of the tracking system. Current datasets used to evaluate and compare video motion tracking algorithms rely on a cumulative performance measure without thoroughly analyzing the effect of the different constraints imposed by the environment; these constraints need to be analyzed as explicit parameters. The objective of this paper is to identify these parameters and define quantitative measures for them so that video datasets for motion tracking can be compared.
13.
Tomographic inversion of near-surface velocity models mostly uses iterative inversion based on first-arrival traveltime ray tracing. MPI parallelism over shared storage is commonly used to improve computational efficiency, but once the number of compute nodes grows beyond a certain scale, network I/O pressure becomes a bottleneck. To address this, a fast and robust Spark-based method for near-surface velocity model tomographic inversion is proposed: distributed in-memory data management persists the data that would otherwise be recomputed in each iteration, improving runtime efficiency. Meanwhile, to resolve the network I/O congestion that shared storage suffers as the node count grows, resilient distributed datasets (RDDs) are organized in a distributed storage environment, the basic reduction unit is designed as one-dimensional inversion data along the depth direction, reduction is performed in distributed parallel fashion during the Spark Shuffle stage, and the Spark scheduler assigns tasks to each process to realize parallel computation. Results on field data show that, with inversion accuracy unchanged, the method greatly reduces the network I/O generated during iteration compared with conventional MPI parallelism; with many compute nodes, computational efficiency improves by more than a factor of four, and the parallel speedup grows approximately linearly.
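A minimal PySpark sketch of the pattern the abstract describes: persisting data reused across iterations in memory, and reducing depth-direction 1-D arrays via a shuffle-side reduction. All names (the traveltime records, the model update) are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch of iterative tomographic updates on Spark.
import numpy as np
from pyspark import SparkContext, StorageLevel

sc = SparkContext(appName="tomo-sketch")

# (column_id, depth_profile): 1-D inversion data along depth is the reduction unit.
records = [(i % 10, np.random.rand(50)) for i in range(1000)]
rdd = sc.parallelize(records)

# Persist data reused in every iteration so it is not recomputed from scratch.
rdd.persist(StorageLevel.MEMORY_ONLY)

model = {}
for it in range(5):
    # Shuffle-side parallel reduction: sum the depth profiles per column,
    # distributing the reduction work instead of funneling it through one node.
    updates = rdd.reduceByKey(lambda a, b: a + b).collectAsMap()
    for col, profile in updates.items():
        model[col] = profile / 100.0   # placeholder model update

sc.stop()
```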
14.
Francisco Fernández-Navarro; César Hervás-Martínez 《Pattern Recognition》2011,44(8):1821-1833
Classification with imbalanced datasets poses a new challenge for researchers in the framework of machine learning. This problem appears when the number of patterns representing one of the classes of the dataset (usually the concept of interest) is much lower than in the remaining classes. Thus, the learning model must be adapted to this situation, which is very common in real applications. In this paper, a dynamic over-sampling procedure is proposed for improving the classification of imbalanced datasets with more than two classes. This procedure is incorporated into a memetic algorithm (MA) that optimizes radial basis function neural networks (RBFNNs). To handle class imbalance, the training data are resampled in two stages. In the first stage, an over-sampling procedure is applied to the minority class to partially balance the size of the classes. Then the MA is run and the data are over-sampled in different generations of the evolution, generating new patterns of the minimum-sensitivity class (the class with the worst accuracy for the best RBFNN of the population). The proposed methodology is tested on 13 imbalanced benchmark classification datasets from well-known machine learning problems and one complex problem of microbial growth. It is compared to other neural network methods specifically designed for handling imbalanced data, including different over-sampling procedures in the preprocessing stage, a threshold-moving method where the output threshold is moved toward the inexpensive classes, and ensemble approaches combining the models obtained with these techniques. The results show that the proposal improves sensitivity on the generalization set and obtains both a high accuracy level and a good classification level for each class.
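A minimal sketch of the two resampling ingredients the abstract names: finding the minimum-sensitivity class and over-sampling it. This is a simple random over-sampler under assumed names, not the authors' memetic/RBFNN procedure, in which the resampling is interleaved with generations of the evolutionary search.

```python
# Sketch only: locate the class the current model classifies worst and
# duplicate random patterns of it (the paper generates new patterns instead).
import numpy as np
from sklearn.metrics import recall_score

def min_sensitivity_class(y_true, y_pred, classes):
    """Class with the lowest per-class recall (sensitivity)."""
    recalls = recall_score(y_true, y_pred, labels=classes, average=None)
    return classes[int(np.argmin(recalls))]

def oversample(X, y, target_class, n_extra, rng):
    """Append `n_extra` randomly duplicated patterns of `target_class`."""
    idx = np.flatnonzero(y == target_class)
    extra = rng.choice(idx, size=n_extra, replace=True)
    return np.vstack([X, X[extra]]), np.concatenate([y, y[extra]])
```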
15.
This paper deals with the problem of supervised wrapper-based feature subset selection in datasets with a very large number of attributes. Recently the literature has contained numerous references to the use of hybrid selection algorithms: based on a filter ranking, they perform an incremental wrapper selection over that ranking. Though working fine, these methods still have their problems: (1) depending on the complexity of the wrapper search method, the number of wrapper evaluations can still be too large; and (2) they rely on a univariate ranking that does not take into account interaction between the variables already included in the selected subset and the remaining ones.

Here we propose a new approach whose main goal is to drastically reduce the number of wrapper evaluations while maintaining good performance (e.g. accuracy and size of the obtained subset). To do this we propose an algorithm that iteratively alternates between filter ranking construction and wrapper feature subset selection (FSS). Thus, the FSS only uses the first block of ranked attributes, and the ranking method uses the current selected subset in order to build a new ranking where this knowledge is considered. The algorithm terminates when no new attribute is selected in the last call to the FSS algorithm. The main advantage of this approach is that only a few blocks of variables are analyzed, and so the number of wrapper evaluations decreases drastically.

The proposed method is tested on eleven high-dimensional datasets (2400-46,000 variables) using different classifiers. The results show an impressive reduction in the number of wrapper evaluations without degrading the quality of the obtained subset.
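A hedged sketch of the alternating loop the abstract describes: rank the remaining features with a filter, wrapper-evaluate only the first block, then re-rank and repeat until nothing is added. The univariate mutual-information filter and the classifier are our choices for illustration; the paper's ranking conditions on the already-selected subset.

```python
# Sketch of alternating filter ranking / incremental wrapper FSS (assumed names).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=60, n_informative=8,
                           random_state=0)
clf = GaussianNB()
selected, block = [], 10

def cv_acc(feats):
    return cross_val_score(clf, X[:, feats], y, cv=5).mean()

while True:
    remaining = [f for f in range(X.shape[1]) if f not in selected]
    if not remaining:
        break
    # Filter step: rank the remaining features.
    rank = np.argsort(-mutual_info_classif(X[:, remaining], y, random_state=0))
    candidates = [remaining[i] for i in rank[:block]]
    # Incremental wrapper step over the first block only.
    added = False
    best = cv_acc(selected) if selected else 0.0
    for f in candidates:
        acc = cv_acc(selected + [f])
        if acc > best:
            selected, best, added = selected + [f], acc, True
    if not added:          # stop when the last FSS call adds nothing
        break

print("selected features:", selected)
```

Only a few blocks are ever wrapper-evaluated, which is where the drastic reduction in wrapper calls comes from.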
16.
There are various algorithms for binary classification, where cases are classified into one of two non-overlapping classes. The area under the receiver operating characteristic (ROC) curve is the most widely used metric for evaluating the performance of alternative binary classifiers. In this study, for application domains where a high degree of imbalance is the main characteristic and identification of the minority class is more important, we show that hit-rate-based measures are more appropriate for assessing model performance, and that they should be measured on out-of-time samples. We also try to identify the optimum composition of the training set. Logistic regression, neural network, and CHAID algorithms are implemented for a real marketing problem of a bank, and their performances are compared.
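To make the distinction concrete, the sketch below contrasts overall AUC with a hit rate on the top-scored fraction of cases, using synthetic data (the 2% imbalance and the top-decile cutoff are assumed for illustration, not taken from the paper's bank data).

```python
# AUC (global ranking quality) versus top-decile hit rate (minority capture).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = (rng.random(10_000) < 0.02).astype(int)        # ~2% minority class
scores = rng.random(10_000) + 0.5 * y              # mildly informative scores

def hit_rate(y_true, y_score, top_frac=0.10):
    """Share of actual positives captured in the top `top_frac` of scores."""
    k = int(len(y_score) * top_frac)
    top = np.argsort(-y_score)[:k]
    return y_true[top].sum() / max(y_true.sum(), 1)

print("AUC:                ", round(roc_auc_score(y, scores), 3))
print("Top-decile hit rate:", round(hit_rate(y, scores), 3))
```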
17.
Marios Theristis, Kevin Anderson, Julian Ascencio-Vasquez, Joshua S. Stein 《Solar RRL》2024,8(2):2300815
Different data pipelines and statistical methods are applied to photovoltaic (PV) performance datasets to quantify the performance loss rate (PLR). Since the real values of PLR are unknown, a variety of unvalidated values are reported. As such, the PV industry commonly assumes PLR based on statistically extracted ranges from the literature. However, the accuracy and uncertainty of PLR depend on several parameters including seasonality, local climatic conditions, and the response of a particular PV technology. In addition, the specific data pipeline and statistical method used affect the accuracy and uncertainty. To provide insights, a framework of (≈200 million) synthetic simulations of PV performance datasets using data from different climates is developed. Time series with known PLR and data quality are synthesized, and large parametric studies are conducted to examine the accuracy and uncertainty of different statistical approaches over the contiguous US, with an emphasis on the publicly available and “standardized” library, RdTools. In the results, it is confirmed that PLRs from RdTools are unbiased on average, but the accuracy and uncertainty of individual PLR estimates vary with climate zone, data quality, PV technology, and choice of analysis workflow. Best practices and improvement recommendations based on the findings of this study are provided.
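The study's validation idea, that a synthetic series with a known PLR lets you check an estimator, can be sketched in a few lines. Below is a pandas-only version of the year-on-year (YOY) degradation estimate that libraries such as RdTools implement; it illustrates the concept only, and RdTools itself should be used for real analyses.

```python
# Hedged sketch: YOY degradation estimate on a synthetic series with known PLR.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = pd.date_range("2015-01-01", "2023-12-31", freq="D")
true_plr = -0.6  # %/year, known by construction in this synthetic series

# Normalized energy: linear degradation x seasonality x noise.
t_years = np.arange(len(days)) / 365.25
energy = ((1 + true_plr / 100 * t_years)
          * (1 + 0.1 * np.sin(2 * np.pi * t_years))
          * (1 + 0.02 * rng.standard_normal(len(days))))
series = pd.Series(energy, index=days)

# YOY: compare each day with the same day one year earlier, take the median.
ratio = series / series.shift(365)      # seasonality largely cancels
yoy_plr = 100 * (ratio - 1)             # %/year for each day pair
print(f"estimated PLR: {yoy_plr.median():.2f} %/yr")
print(f"true PLR:      {true_plr:.2f} %/yr")
```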
18.
《Chinese Journal of Electronics》2024,33(3)
The current booming development of the Internet has put the public in an era of information overload, in which false information is mixed in and spreads unchecked. This phenomenon has seriously disturbed the order of social networks. Thus, a substantial amount of research is now devoted to the effective management of fake information. We analyze the abnormal characteristics of fake information through its mechanisms of generation and dissemination. In view of the different anomalous features, we systematically review and evaluate existing studies on false-content detection. The commonly used public datasets, metrics, and performance results are categorized and compared, in the hope of providing a basis and guidance for related research. The study found that currently active social platforms exhibit distinct novel characteristics. Future work should therefore mine platform features across multi-domain sources, multiple data forms, and multilingual heterogeneity to provide more valuable clues for fake-information detection.
19.
Modern digital data production methods, such as computer simulation and remote sensing, have vastly increased the size and complexity of data collected over spatial domains. Analysis of these large spatial datasets for scientific inquiry is typically carried out using the Gaussian process. However, nonstationary behavior and computational requirements for large spatial datasets can prohibit efficient implementation of Gaussian process models. To perform computationally feasible inference for large spatial data, we consider partitioning a spatial region into disjoint sets using hierarchical clustering of observations and finite differences as a measure of dissimilarity. Intuitively, directions with large finite differences indicate directions of rapid increase or decrease and are, therefore, appropriate for partitioning the spatial region. Spatial contiguity of the resulting clusters is enforced by only clustering Voronoi neighbors. Following spatial clustering, we propose a nonstationary Gaussian process model across the clusters, which allows the computational burden of model fitting to be distributed across multiple cores and nodes. The methodology is primarily motivated and illustrated by an application to the validation of digital temperature data over the city of Houston as well as simulated datasets. Supplementary materials for this article are available online.
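A hedged sketch of the partitioning step: Voronoi neighbors (obtained through the dual Delaunay triangulation) constrain an agglomerative clustering whose features include the response, so large jumps in the field, a crude stand-in for the paper's finite-difference dissimilarity, tend to separate clusters. This illustrates contiguity-constrained partitioning only, not the authors' nonstationary GP model.

```python
# Spatially contiguous clustering via a Delaunay/Voronoi connectivity graph.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.spatial import Delaunay
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(2)
coords = rng.random((400, 2))                        # spatial locations
z = np.sin(6 * coords[:, 0]) + (coords[:, 1] > 0.5)  # field with a sharp jump

# Voronoi neighbors = edges of the Delaunay triangulation (its dual).
tri = Delaunay(coords)
conn = lil_matrix((len(coords), len(coords)), dtype=int)
for simplex in tri.simplices:
    for i in range(3):
        a, b = simplex[i], simplex[(i + 1) % 3]
        conn[a, b] = conn[b, a] = 1

# Cluster on (location, response): only Voronoi neighbors may merge, so the
# resulting clusters are spatially contiguous.
features = np.column_stack([coords, z])
labels = AgglomerativeClustering(
    n_clusters=4, connectivity=conn.tocsr(), linkage="ward"
).fit_predict(features)
print(np.bincount(labels))   # cluster sizes
```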
20.
Aggregated Conformal Prediction is used as an effective alternative to other, more complicated and/or ambiguous methods involving various balancing measures when modelling severely imbalanced datasets. Explicit balancing measures beyond those already part of the Conformal Prediction framework are shown not to be required. The Aggregated Conformal Prediction procedure appears to be a promising approach for severely imbalanced datasets, retrieving a large majority of active minority-class compounds while avoiding information loss or distortion.
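A minimal sketch of aggregated (inductive) conformal prediction for an imbalanced binary problem: p-values from several random calibration splits are averaged. The nonconformity score, classifier, and significance level are our assumptions for illustration, not the paper's exact setup.

```python
# Aggregated inductive conformal prediction (assumed names, binary case).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, weights=[0.95], random_state=0)
X_fit, X_test, y_fit, y_test = train_test_split(X, y, random_state=0)

def icp_pvalues(X_tr, y_tr, X_cal, y_cal, X_new, label):
    clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
    # Nonconformity: 1 - predicted probability of the tentative label.
    cal = 1 - clf.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]
    new = 1 - clf.predict_proba(X_new)[:, label]
    # p-value: fraction of calibration scores at least as nonconforming.
    ge = (cal[None, :] >= new[:, None]).sum(axis=1)
    return (ge + 1) / (len(cal) + 1)

B = 5  # number of aggregated conformal predictors
p1 = np.zeros(len(X_test))
for b in range(B):
    X_tr, X_cal, y_tr, y_cal = train_test_split(X_fit, y_fit, random_state=b)
    p1 += icp_pvalues(X_tr, y_tr, X_cal, y_cal, X_test, label=1)
p1 /= B

# A label enters the prediction set when its p-value exceeds the significance.
print("minority class included at eps=0.2:", int((p1 > 0.2).sum()))
```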