931.
Rough set theory is a mathematical tool for handling imprecise, uncertain, and incomplete knowledge, and it is now widely applied in artificial intelligence, pattern recognition, machine learning, decision support, and data mining. This paper introduces rough set theory and its characteristics, surveys its applications across these fields, and discusses likely directions for its future development.
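The lower and upper approximations at the core of rough set theory can be computed directly from an equivalence-class partition of the universe. A minimal sketch (function and variable names are illustrative, not from the abstract):

```python
def approximations(partition, target):
    """Lower/upper approximations of `target` w.r.t. a partition of the universe.

    partition: iterable of equivalence classes (sets of objects)
    target:    the set of objects to approximate
    """
    target = set(target)
    lower, upper = set(), set()
    for cls in partition:
        cls = set(cls)
        if cls <= target:      # class entirely inside target: certainly members
            lower |= cls
        if cls & target:       # class overlaps target: possibly members
            upper |= cls
    return lower, upper
```

Objects in the lower approximation certainly belong to the target set; objects between the two approximations form the boundary region, which is where the "imprecise, uncertain" knowledge lives.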
932.
Mining association rules is an important research direction in data mining. This paper first introduces a level-wise Apriori algorithm and a search-based algorithm, QAIS. By comparing the two, it identifies the strengths and weaknesses of QAIS, then proposes targeted remedies, yielding the Improved QAIS algorithm.
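For reference, the level-wise mining that Apriori performs can be sketched minimally as follows. This is the textbook algorithm, not the QAIS or Improved QAIS variants described in the paper:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Minimal level-wise Apriori: returns frequent itemsets with their counts."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    frequent = {}
    candidates = [frozenset([i]) for i in items]
    k = 1
    while candidates:
        # count step: support of each candidate k-itemset
        counts = {c: sum(c <= t for t in transactions) for c in candidates}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # join step: combine frequent k-itemsets into (k + 1)-item candidates
        candidates = list({a | b for a, b in combinations(survivors, 2)
                           if len(a | b) == k + 1})
        k += 1
    return frequent
```

The loop terminates because any superset of an infrequent itemset is itself infrequent, so candidate generation eventually produces nothing.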
933.
Machine learning and data mining technologies are often used in network intrusion detection systems. An intrusion detection system based on machine learning uses a classifier to infer the current state from observed traffic attributes. The problem with learning-based intrusion detection is that it produces false positives, which incur unnecessary additional operating costs. This paper investigates a method for decreasing the false positives generated by an intrusion detection system that employs a decision tree as its classifier. The paper first points out that the information-gain criterion used in previous studies to select attributes in the tree-construction algorithm is not effective at achieving low false-positive rates. In place of information gain, the paper proposes a new function that evaluates the goodness of an attribute by considering the significance of each error type. The proposed function successfully chooses attributes that suppress false positives from the given attribute set, and its effectiveness is confirmed experimentally. The paper also examines the simpler leaf-rewriting approach as a benchmark for the proposed method. The comparison shows that the proposed attribute-evaluation function yields better solutions than leaf rewriting.
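The idea of replacing information gain with an error-type-aware criterion can be illustrated with a toy cost-weighted split evaluation. The weights, names, and exact functional form below are illustrative assumptions, not the function proposed in the paper:

```python
import math

def entropy(labels):
    """Shannon entropy of a label list, the quantity behind information gain."""
    n = len(labels)
    probs = [labels.count(c) / n for c in set(labels)]
    return -sum(p * math.log2(p) for p in probs)

def split_cost(left, right, fp_weight=5.0, fn_weight=1.0):
    """Cost of a candidate split when each branch predicts its majority class.

    A false positive (normal traffic landing on an 'attack'-majority branch)
    is weighted more heavily than a false negative.
    """
    cost = 0.0
    for side in (left, right):
        majority = max(set(side), key=side.count)
        for label in side:
            if label != majority:
                # predicting the branch majority misclassifies this record
                cost += fp_weight if majority == 'attack' else fn_weight
    return cost
```

Ranking attributes by such a cost (lower is better) rather than by entropy reduction lets the tree builder prefer splits whose residual errors are cheap false negatives rather than expensive false positives.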
Satoru Ohta
934.
935.
Data stream values are often associated with multiple aspects. For example, each value observed at a given time-stamp from environmental sensors may have an associated type (e.g., temperature, humidity) as well as a location. Time-stamp, type, and location are three aspects, which can be modeled using a tensor (high-order array). However, the time aspect is special, with a natural ordering and with successive time-ticks usually having correlated values. Standard multiway analysis ignores this structure. To capture it, we propose 2 Heads Tensor Analysis (2-heads), which provides a qualitatively different treatment of time. Unlike most existing approaches, which use a PCA-like summarization scheme for all aspects, 2-heads treats the time aspect carefully. 2-heads combines the power of classic multilinear analysis with wavelets, leading to a powerful mining tool. Furthermore, 2-heads has several other advantages: (a) it can be computed incrementally in a streaming fashion, (b) it has a provable error guarantee, and (c) it achieves a significant compression ratio against competitors. Finally, we show experiments on real datasets and illustrate how 2-heads reveals interesting trends in the data. This is an extended abstract of an article published in the Data Mining and Knowledge Discovery journal.
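The wavelet treatment of the time mode can be sketched with a single level of the Haar transform, which splits a time series into coarse averages and fine details. This illustrates only the wavelet half of the idea; 2-heads combines it with multilinear analysis of the non-time modes:

```python
def haar_step(x):
    """One level of the (unnormalized) Haar wavelet transform.

    Returns (averages, details) of consecutive pairs; applying this
    recursively to the averages yields a full multi-resolution summary,
    which suits the ordered, correlated nature of the time aspect.
    """
    assert len(x) % 2 == 0, "needs an even-length window"
    avg = [(a + b) / 2 for a, b in zip(x[0::2], x[1::2])]
    det = [(a - b) / 2 for a, b in zip(x[0::2], x[1::2])]
    return avg, det
```

Small detail coefficients can be dropped for compression, which is one way a wavelet treatment of time can outperform a PCA-like summary on smooth sensor streams.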
936.
Data co-clustering refers to the problem of simultaneously clustering two data types. Typically, the data is stored in a contingency or co-occurrence matrix C whose rows and columns represent the data types to be co-clustered. An entry C_ij of the matrix signifies the relation between the data type represented by row i and that represented by column j. Co-clustering is the problem of deriving sub-matrices from the larger data matrix by simultaneously clustering its rows and columns. In this paper, we present a novel graph-theoretic approach to data co-clustering. The two data types are modeled as the two vertex sets of a weighted bipartite graph. We then propose the Isoperimetric Co-clustering Algorithm (ICA), a new method for partitioning the bipartite graph. ICA requires only the solution of a sparse system of linear equations instead of the eigenvalue or SVD problem of the popular spectral co-clustering approach. Our theoretical analysis and extensive experiments on publicly available datasets demonstrate the advantages of ICA over other approaches in terms of quality, efficiency, and stability in partitioning the bipartite graph.
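The computational core, solving a linear system on a grounded graph Laplacian rather than an eigenproblem, can be sketched on a toy dense example. This is an illustrative isoperimetric-style partition, not the paper's implementation, which uses sparse solvers and a more careful threshold choice:

```python
import numpy as np

def isoperimetric_partition(B):
    """Bipartition the bipartite graph given by weight matrix B (rows x cols).

    Builds the full (m + n)-vertex Laplacian, grounds vertex 0, solves the
    reduced linear system L0 x = d0, and thresholds x at its median.
    Returns a boolean membership vector over row vertices then column vertices.
    """
    m, n = B.shape
    A = np.zeros((m + n, m + n))
    A[:m, m:] = B          # row-vertex to column-vertex edges
    A[m:, :m] = B.T
    d = A.sum(axis=1)      # vertex degrees
    L = np.diag(d) - A     # graph Laplacian
    x = np.linalg.solve(L[1:, 1:], d[1:])   # ground vertex 0, solve
    x = np.concatenate([[0.0], x])
    return x < np.median(x)
```

On a weight matrix with two strong diagonal blocks, the solution vector separates the row/column pairs that belong together, recovering the co-clusters from one linear solve.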
937.
Content distribution networks (CDNs) improve scalability and reliability by replicating content to the "edge" of the Internet. Apart from the pure networking issues relevant to establishing the infrastructure, some crucial data management issues must be resolved to exploit the full potential of CDNs to reduce "last mile" latencies. A very important issue is the selection of the content to be prefetched to the CDN servers. All approaches developed so far assume the existence of adequate content-popularity statistics to drive the prefetch decisions. Such information, though, is not always available, or it is extremely volatile, making such methods problematic. To address this issue, we develop self-adaptive techniques for selecting the outsourced content in a CDN infrastructure that require no a priori knowledge of request statistics. We identify clusters of "correlated" Web pages in a site, called Web site communities, and make these communities the basic outsourcing unit. Through a detailed simulation environment, using both real and synthetic data, we show that the proposed techniques are robust and effective in reducing user-perceived latency, performing very close to an infeasible off-line policy that has full knowledge of content popularity.
938.
Developing an efficient algorithm that can maintain discovered information as a database changes is quite important in data mining. Many proposed algorithms focus on a single level and do not utilize previously mined information in incrementally growing databases. In the past, we proposed an incremental mining algorithm for maintaining multiple-level association rules as new transactions are inserted. Deletion of records, however, is also common in real-world databases. In this paper, we therefore extend our previous approach to handle deletions. The concept of pre-large itemsets is used to reduce the need for rescanning the original database and to save maintenance costs. A pre-large itemset is not truly large, but promises to be large in the future. A lower support threshold and an upper support threshold are used to realize this concept. The two user-specified thresholds make pre-large itemsets act as a buffer, preventing small itemsets from becoming large in the updated database when transactions are deleted. A new algorithm based on this concept is proposed to maintain discovered multiple-level association rules under record deletion. The proposed algorithm does not need to rescan the original database until a certain number of records have been deleted, and can thus save considerable maintenance time.
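The two-threshold idea can be sketched directly: each itemset falls into one of three regions depending on where its support lies relative to the lower and upper thresholds. The threshold values below are illustrative, not from the paper:

```python
def classify_itemset(count, total, lower=0.3, upper=0.5):
    """Classify an itemset as 'large', 'pre-large', or 'small'.

    count: number of transactions containing the itemset
    total: number of transactions in the database
    lower/upper: the two user-specified support thresholds
    """
    support = count / total
    if support >= upper:
        return 'large'        # truly frequent
    if support >= lower:
        return 'pre-large'    # kept as a buffer; may become large
    return 'small'
```

Because deletions can only move an itemset's support downward gradually, a small itemset cannot jump past the pre-large buffer into the large region without the algorithm noticing, which is what bounds the number of deletions tolerated before a rescan.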
939.
This study proposed a novel PSO–SVM model that hybridizes particle swarm optimization (PSO) and support vector machines (SVM) to improve classification accuracy with a small, appropriate feature subset. The optimization mechanism combines discrete PSO with continuous-valued PSO to simultaneously optimize the input feature-subset selection and the SVM kernel-parameter setting. The hybrid PSO–SVM data mining system was implemented on a distributed architecture using web-service technology to reduce computation time: in a heterogeneous computing environment, the PSO optimization runs on the application server while the SVM model is trained on the client (agent) computers. The experimental results showed that the proposed approach can correctly select discriminating input features and achieve high classification accuracy.
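The discrete-PSO half of the hybrid can be sketched as follows, with a stand-in fitness function in place of SVM cross-validation accuracy. The swarm size, coefficients, and update structure are illustrative assumptions, not the paper's exact configuration:

```python
import math
import random

def binary_pso(fitness, n_bits, n_particles=6, iters=20, seed=0):
    """Toy binary PSO: evolves bit vectors (feature masks) to maximize fitness."""
    rng = random.Random(seed)
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    pos = [[rng.random() < 0.5 for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = max(pos, key=fitness)[:]
    for _ in range(iters):
        for i in range(n_particles):
            for j in range(n_bits):
                # velocity pulled toward personal and global best bit values
                vel[i][j] += 2 * rng.random() * (pbest[i][j] - pos[i][j]) \
                           + 2 * rng.random() * (gbest[j] - pos[i][j])
                # sigmoid of velocity gives the probability that the bit is 1
                pos[i][j] = rng.random() < sig(vel[i][j])
            if fitness(pos[i]) > fitness(pbest[i]):
                pbest[i] = pos[i][:]
                if fitness(pos[i]) > fitness(gbest):
                    gbest = pos[i][:]
    return gbest
```

In the full hybrid, `fitness` would train an SVM on the features selected by the bit mask and return validation accuracy, while a parallel continuous-valued swarm tunes the kernel parameters; distributing those fitness evaluations is what the web-service architecture parallelizes.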
940.
Traditional computer forensics is a static approach. This paper addresses the limitations of static forensics from a dynamic perspective: it studies the technologies relevant to dynamic computer forensics, proposes a system model for dynamic forensics, and designs its main modules. Given the characteristics of dynamic forensics, the paper applies data mining techniques within the system. To address the shortcomings of the basic mining algorithms in practical forensic analysis, it proposes corresponding algorithmic improvements and demonstrates their effectiveness for dynamic computer forensics through experiments.
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号