Full-text access type
Paid full text | 144 articles |
Free | 42 articles |
Free (domestic) | 28 articles |
Subject category
Electrical engineering | 2 articles |
General | 9 articles |
Chemical industry | 9 articles |
Machinery and instrumentation | 2 articles |
Energy and power | 2 articles |
Light industry | 4 articles |
Hydraulic engineering | 3 articles |
Petroleum and natural gas | 1 article |
Radio and electronics | 15 articles |
General industrial technology | 6 articles |
Metallurgical industry | 1 article |
Automation technology | 160 articles |
Publication year
2024 | 4 articles |
2023 | 13 articles |
2022 | 14 articles |
2021 | 12 articles |
2020 | 15 articles |
2019 | 7 articles |
2018 | 11 articles |
2017 | 10 articles |
2016 | 16 articles |
2015 | 3 articles |
2014 | 15 articles |
2013 | 10 articles |
2012 | 14 articles |
2011 | 12 articles |
2010 | 10 articles |
2009 | 10 articles |
2008 | 8 articles |
2007 | 4 articles |
2006 | 8 articles |
2005 | 1 article |
2004 | 4 articles |
2003 | 4 articles |
2002 | 3 articles |
2001 | 2 articles |
1997 | 1 article |
1993 | 1 article |
1986 | 2 articles |
Sort by: 214 results found; search took 15 ms
101.
Association rules have been widely used in many application areas to extract new and useful information, expressed in a comprehensible way for decision makers, from raw data. However, raw data may not always be available; they can be distributed across multiple datasets, and the resulting number of association rules to be inspected is then overwhelming. In light of these observations, we propose meta-association rules, a new framework for mining association rules over previously discovered rules in multiple databases. Meta-association rules are a new tool that conveys information about the patterns extracted from multiple datasets and gives a "summarized" representation of the most frequent patterns. We propose and compare two different algorithms based respectively on crisp rules and on fuzzy rules, concluding that fuzzy meta-association rules are suitable for incorporating into the meta-mining procedure the quality assessment provided by the rules in the first step of the process, although they consume more time than the crisp approach. In addition, fuzzy meta-rules yield a more manageable set of rules for subsequent analysis, and they allow the use of fuzzy items to express additional knowledge about the original databases. The proposed framework is illustrated with real-life data about crime incidents in the city of Chicago. Issues such as the differences from traditional approaches are discussed using synthetic data.
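The core meta-mining step, counting which previously discovered rules co-occur across databases, can be sketched in a few lines of Python. This is a toy illustration of the idea, not the paper's crisp or fuzzy algorithm; the rule identifiers and the minimum-support threshold are invented:

```python
from itertools import combinations
from collections import Counter

def meta_frequent_rulesets(rule_sets, min_support):
    """Find single rules and rule pairs that appear in at least a
    min_support fraction of the per-database rule sets."""
    counts = Counter()
    for rules in rule_sets:
        for size in (1, 2):
            for combo in combinations(sorted(rules), size):
                counts[combo] += 1
    n = len(rule_sets)
    return {combo: c / n for combo, c in counts.items() if c / n >= min_support}

# Rules discovered independently in three databases (toy identifiers).
rule_sets = [
    {"A=>B", "B=>C"},
    {"A=>B", "C=>D"},
    {"A=>B", "B=>C"},
]
frequent = meta_frequent_rulesets(rule_sets, min_support=2 / 3)
```

A fuzzy variant would replace the 0/1 occurrence count with a membership degree derived from each rule's quality measures, which is what makes the fuzzy approach slower but more informative.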
102.
Stephen Dill Nadav Eiron David Gibson Daniel Gruhl R. Guha Anant Jhingran Tapas Kanungo Kevin S. McCurley Sridhar Rajagopalan Andrew Tomkins John A. Tomlin Jason Y. Zien 《Journal of Web Semantics》2003,1(1):115-132
This paper describes Seeker, a platform for large-scale text analytics, and SemTag, an application written on the platform to perform automated semantic tagging of large corpora. We apply SemTag to a collection of approximately 264 million web pages and generate approximately 434 million automatically disambiguated semantic tags, published to the web as a label bureau providing metadata regarding the 434 million annotations. To our knowledge, this is the largest-scale semantic tagging effort to date. We describe the Seeker platform, discuss the architecture of the SemTag application, describe a new disambiguation algorithm specialized to support ontological disambiguation of large-scale data, evaluate the algorithm, and present our final results along with information about acquiring and making use of the semantic tags. We argue that automated large-scale semantic tagging of ambiguous content can bootstrap and accelerate the creation of the Semantic Web.
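The paper's disambiguation algorithm is specialized for ontological tagging at scale; as a hedged illustration of the general idea only (scoring a mention's surrounding text against reference text for each candidate taxonomy label), a bag-of-words cosine sketch might look like this. The candidate labels and texts are invented, and this is not the algorithm SemTag uses:

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def disambiguate(context, candidates):
    """Pick the taxonomy label whose reference text is most similar
    to the mention's surrounding context."""
    ctx = Counter(context.lower().split())
    scored = {label: cosine(ctx, Counter(text.lower().split()))
              for label, text in candidates.items()}
    return max(scored, key=scored.get)

candidates = {
    "Jaguar/Animal": "big cat wildlife jungle predator",
    "Jaguar/Car": "luxury car vehicle engine motor",
}
label = disambiguate("the sleek car has a powerful engine", candidates)
```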
103.
Spark is a fast, unified analysis engine for big data and machine learning, in which memory is a crucial resource. Resilient Distributed Datasets (RDDs) are parallel data structures that allow users to explicitly persist intermediate results in memory or on disk, and each RDD can be divided into several partitions. During task execution, Spark automatically monitors cache usage on each node, and when an RDD must be stored in a cache whose space is insufficient, the system drops old data partitions in least-recently-used (LRU) fashion to release space. However, Spark has no mechanism designed specifically for caching RDDs, and LRU takes into account neither the dependencies between RDDs nor their need in future stages. In this paper, we propose an optimization approach for RDD caching and LRU based on partition features, comprising three parts: a prediction mechanism for persistence, a weight model built with the entropy method, and an update mechanism for weights and memory based on RDD partition features. Verification on the Spark platform shows experimentally that our strategy can effectively reduce execution time and improve memory usage.
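The contrast with plain LRU can be made concrete with a minimal sketch of weight-based eviction. The weights here are supplied by the caller rather than computed from the paper's entropy-based model, and this is plain Python, not Spark code:

```python
class WeightedCache:
    """Toy partition cache that evicts the lowest-weight entry instead of
    the least recently used one — the general idea behind weight-based
    RDD-partition replacement."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = {}  # partition_id -> (weight, data)

    def put(self, pid, weight, data):
        # Evict the lowest-weight partition when the cache is full.
        if len(self.entries) >= self.capacity and pid not in self.entries:
            victim = min(self.entries, key=lambda p: self.entries[p][0])
            del self.entries[victim]
        self.entries[pid] = (weight, data)

cache = WeightedCache(capacity=2)
cache.put("rdd1-p0", weight=0.9, data=[1, 2])  # needed by future stages
cache.put("rdd2-p0", weight=0.1, data=[3, 4])  # used once, low weight
cache.put("rdd3-p0", weight=0.5, data=[5, 6])  # evicts rdd2-p0, not the oldest
```

Under LRU the victim would have been rdd1-p0 (the oldest entry); weighting by predicted future use keeps it resident.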
104.
Targeting the characteristics of regular volume data, we propose a slicing method, intermediate between direct volume rendering and surface rendering, to visualize such data: attribute values on the cutting plane are obtained by bilinear interpolation and represented by color coding. We also discuss the application of 3D wavelet algorithms to regular-volume visualization and give a general method for compressing regular volume data with 3D wavelets.
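The bilinear interpolation used to obtain attribute values on a cutting plane can be written directly. This is a generic sketch on a toy 2x2 slice (for a position strictly inside a grid cell), not the paper's implementation:

```python
def bilinear(grid, x, y):
    """Bilinearly interpolate the attribute value at fractional position
    (x, y) on a 2-D slice of a regular grid, grid[row][col] = value."""
    x0, y0 = int(x), int(y)
    x1, y1 = x0 + 1, y0 + 1
    fx, fy = x - x0, y - y0
    return (grid[y0][x0] * (1 - fx) * (1 - fy)
            + grid[y0][x1] * fx * (1 - fy)
            + grid[y1][x0] * (1 - fx) * fy
            + grid[y1][x1] * fx * fy)

slice_ = [[0.0, 10.0],
          [20.0, 30.0]]
value = bilinear(slice_, 0.5, 0.5)  # 15.0, the average of the four corners
```

Color coding then maps the interpolated value through a transfer function to an RGB color.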
105.
Moving-time hierarchical clustering is a potential-based clustering algorithm with good clustering quality, but it cannot identify noise points in a dataset. We therefore propose a noise-resistant moving-time potential clustering algorithm. The parent node of each data point is found from the point's potential value and the similarity between points; the distance from each point to its parent is computed; a λ value is derived from this distance and the point's potential; the λ values are arranged into an increasing curve, and the knee point of this curve identifies the noise points, which are assigned to a new cluster. On the dataset with noise removed, hierarchical clustering based on the point-to-parent distances yields the final result. Experimental results show that the algorithm identifies the noise points in a dataset and thereby achieves better clustering quality.
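The knee-detection step on the increasing λ curve can be sketched with a standard heuristic: the knee is the point farthest from the straight line joining the curve's endpoints, and points past it are treated as noise candidates. The λ values below are invented, and the paper may use a different knee criterion:

```python
def knee_index(values):
    """Index of the point on an increasing curve farthest from the
    straight line joining its endpoints (a common knee heuristic)."""
    n = len(values)
    x0, y0, x1, y1 = 0, values[0], n - 1, values[-1]

    def dist(i):
        # Perpendicular distance (up to a constant factor) from
        # (i, values[i]) to the endpoint line.
        return abs((y1 - y0) * i - (x1 - x0) * values[i] + x1 * y0 - y1 * x0)

    return max(range(n), key=dist)

lambdas = [0.1, 0.12, 0.15, 0.18, 0.2, 1.5, 3.0]  # sorted λ values
k = knee_index(lambdas)
noise_candidates = lambdas[k + 1:]  # points after the knee
```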
106.
We present a design method for creating close-fitting customized covers for given three-dimensional (3D) objects such as cameras, toys, and figurines. The system first clusters the input vertices and generates multiple convex hulls from the clusters. It then obtains a cover geometry as the set union of these hulls, with the resulting intersection curves set as seam lines. However, as some of the regions created are not necessarily suitable for flattening, the user can redesign seam lines by drawing and erasing. The system flattens the patches of the target cover geometry after segmentation, allowing the user to obtain a corresponding 2D pattern and sew the shapes in actual fabric. This paper's contribution lies in its proposal of a clustering method for generating multiple convex hulls, i.e., a set of convex hulls that individually cover part of the input mesh and together cover all of it. The method is based on vertex clustering, which allows it to handle mesh models with poor vertex connectivity such as those obtained by 3D scanning, as well as conventional meshes with multiple connected components and point-based models with no connectivity information. Use of the system to design actual covers confirmed that it functions as intended.
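The covering primitive here is the convex hull of a vertex cluster. The paper works with 3-D hulls; as a self-contained illustration of the same idea, Andrew's monotone-chain algorithm computes the 2-D hull of a point cluster, dropping interior points:

```python
def convex_hull(points):
    """2-D convex hull via Andrew's monotone chain, returned in
    counter-clockwise order without the repeated first point."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]

hull = convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)])  # interior point dropped
```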
107.
A dataset is a collection of related data elements. How to partition datasets scientifically and effectively, so as to ensure unique categories, a reasonable structure, a clear hierarchy, and little redundancy, while keeping the classification framework able to accommodate the datasets' ever richer content and growing variety and volume, is a problem that dataset management in a metadata registration system must solve. We first propose basic principles and methods for dataset classification, and then apply them to the classification of health information datasets. …
108.
As an important branch of computer vision, abnormal-behavior recognition and detection has been widely applied in intelligent security, medical monitoring, traffic control, and other fields. How abnormal behavior is defined and judged is closely tied to the scene; in practice, it is crucial to choose feature-extraction and recognition/detection methods appropriate to the characteristics of each application scenario so as to guarantee warning accuracy. On this basis, this paper surveys video-based methods for recognizing and detecting abnormal human behavior. First, the definition, characteristics, and classification of abnormal human behavior are given. Second, feature-extraction methods are summarized; the choice of method and the quality of the extracted features directly affect subsequent judgments. Third, judgment methods are analyzed and discussed from the two angles of abnormal-behavior recognition and abnormal-behavior detection, and commonly used detection datasets and the performance of related algorithms are presented. Finally, future research directions in this field are suggested.
109.
110.
A GRASP algorithm for fast hybrid (filter-wrapper) feature subset selection in high-dimensional datasets  Total citations: 2 (self-citations: 0, citations by others: 2)
Feature subset selection (FSS) is a key problem in the data-mining classification task that helps to obtain more compact and understandable models without degrading (or even while improving) their performance. In this work we focus on FSS in high-dimensional datasets, that is, datasets with a very large number of predictive attributes. In this case, standard sophisticated wrapper algorithms cannot be applied because of their complexity, and computationally lighter filter-wrapper algorithms have recently been proposed. We propose a stochastic algorithm based on the GRASP meta-heuristic, with the main goal of speeding up the feature subset selection process, basically by reducing the number of wrapper evaluations to carry out. GRASP is a multi-start constructive method that builds a solution in a first stage and then runs an improvement stage over that solution. Several instances of the proposed GRASP method are experimentally tested and compared with state-of-the-art algorithms over 12 high-dimensional datasets. The statistical analysis of the results shows that our proposal is comparable in accuracy and cardinality of the selected subset to previous algorithms, but requires significantly fewer evaluations.
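The two-stage GRASP loop (greedy-randomized construction guided by a cheap filter score, then local improvement scored by the expensive wrapper) can be sketched as follows. The `filter_score` and `wrapper_eval` functions are caller-supplied stand-ins for a real filter measure and classifier evaluation, and the restricted-candidate-list size and subset-size cap are toy assumptions, not the paper's settings:

```python
import random

def grasp_fss(features, filter_score, wrapper_eval, iters=10, rcl_size=3, seed=0):
    """GRASP sketch for feature subset selection."""
    rng = random.Random(seed)
    best, best_score = None, float("-inf")
    ranked = sorted(features, key=filter_score, reverse=True)
    for _ in range(iters):
        # Construction: pick at random from the restricted candidate
        # list (the top filter-scored features not yet chosen).
        subset = set()
        while len(subset) < len(features) // 2:
            rcl = [f for f in ranked if f not in subset][:rcl_size]
            subset.add(rng.choice(rcl))
        # Improvement: try dropping each feature; keep drops that help.
        score = wrapper_eval(subset)
        for f in list(subset):
            trial = subset - {f}
            s = wrapper_eval(trial)
            if s >= score:
                subset, score = trial, s
        if score > best_score:
            best, best_score = subset, score
    return best, best_score

# Toy problem: features 0-2 are informative, 3-5 are noise; the wrapper
# rewards relevance and penalizes subset size.
relevance = {0: 1.0, 1: 0.9, 2: 0.8, 3: 0.0, 4: 0.0, 5: 0.0}
best, best_score = grasp_fss(
    list(relevance),
    filter_score=relevance.get,
    wrapper_eval=lambda s: sum(relevance[f] for f in s) - 0.1 * len(s),
)
```

The wrapper is called only a handful of times per restart, which is the source of the speedup the abstract claims over pure wrapper search.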