61.
A sorting-based associative classification algorithm   (Total citations: 1; self-citations: 0; other citations: 1)
A sorting-based associative classification algorithm is proposed. Drawing on the idea, used in the rule-selection strategies of rule-based classifiers, of favoring high-accuracy rules while retaining as many rules as possible, it remedies the one-sidedness of CBA (Classification Based on Associations), which builds its classifier from only the few rules that cover the training set. The algorithm first applies an association rule mining algorithm to generate rules whose consequents are class labels, then sorts the rules by length, confidence, support, and lift, discarding during sorting any rule that has no effect on the classification result. The sorted rules, together with a default class, form the final classifier. Experiments on 20 public UCI data sets show that the proposed algorithm achieves higher average classification accuracy than CBA.
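A minimal sketch in Python of the sort-then-prune scheme described above, assuming the class-labelled association rules have already been mined; the field names, the exact tie-breaking order of the sort key, and the coverage-style pruning criterion are illustrative rather than taken from the paper:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    items: frozenset     # antecedent item set
    label: str           # consequent class label
    confidence: float
    support: float
    lift: float

def sort_rules(rules):
    """Order rules by confidence, support, lift and antecedent length,
    so that high-precision, more specific rules are tried first."""
    return sorted(rules,
                  key=lambda r: (r.confidence, r.support, r.lift, len(r.items)),
                  reverse=True)

def prune_rules(sorted_rules, training_set):
    """Drop rules that never become the first match for any training
    instance, i.e. rules with no influence on the classification result.
    training_set is a list of (item_set, label) pairs."""
    kept, covered = [], set()
    for rule in sorted_rules:
        matches = {i for i, (items, _) in enumerate(training_set)
                   if rule.items <= items and i not in covered}
        if matches:
            kept.append(rule)
            covered |= matches
    return kept

def classify(instance_items, rules, default_label):
    """Apply the first matching rule; fall back to the default class."""
    for rule in rules:
        if rule.items <= instance_items:
            return rule.label
    return default_label
```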
62.
Solving EEG source parameters with an LS-SVM modular decision system   (Total citations: 1; self-citations: 1; other citations: 0)
王志芳  吴清 《计算机仿真》2009,26(8):204-207
Estimating the sources of electrical activity inside the brain from the distribution of scalp EEG potentials is an important topic in EEG research. The work spans information science, electromagnetic field computation, biomedical engineering, and other disciplines, and its results matter for diagnosing neurological diseases and for exploring human perception and cognition. A modular decision system is built on the least squares support vector machine (LS-SVM) algorithm: the EEG data are first classified, data samples are then extracted according to the classification results to build regression models, and finally the parameters of several dipole sources are solved. This establishes the intrinsic relationship between scalp voltages and EEG source parameters and offers a real-time approach to dynamic EEG analysis. Computer simulation results confirm the effectiveness of the method.
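As a rough illustration of the regression building block mentioned above, here is a minimal least squares SVM regressor in Python; the modular classify-then-regress pipeline, the dipole-source parameterization, and the kernel and hyperparameter choices of the paper are not reproduced and should be read as placeholders:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma=1.0):
    """Gaussian (RBF) kernel matrix between two sample sets."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM regression: solve the linear KKT system
    [[0, 1^T], [1, K + I/gamma]] [b, alpha]^T = [0, y]^T."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, rhs)
    return sol[1:], sol[0]          # alpha, bias b

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    """Predict targets for new samples from the fitted alpha and bias."""
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b
```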
63.
In response to the growing number of attacks on the Internet, network security technologies such as firewalls and intrusion detection systems have matured steadily. In practice, however, some attacks still succeed, so it is necessary to study techniques for analyzing network vulnerability under attack and for timely recovery, in order to minimize the adverse impact on the network. This paper introduces the concept of computer network vulnerability, analyzes the underlying causes of vulnerability, and provides a preliminary understanding of it.
64.
In recent years, classification learning for data streams has become an important and active research topic. A major challenge posed by data streams is that their underlying concepts can change over time, which requires current classifiers to be revised accordingly and in a timely manner. To detect concept change, a common methodology is to observe the online classification accuracy: if accuracy drops below some threshold value, a concept change is deemed to have taken place. An implicit assumption behind this methodology is that any drop in classification accuracy can be interpreted as a symptom of concept change. Unfortunately, however, this assumption is often violated in the real world, where data streams carry noise that can also cause a significant reduction in classification accuracy. To compound the problem, traditional noise-cleansing methods are ill-suited to data streams: they normally need to scan the data multiple times, whereas learning from data streams can afford only a one-pass scan because of the data's high speed and huge volume. Another open problem in data stream classification is how to deal with missing values. When new instances containing missing values arrive, how a learning model classifies them and how it updates itself according to them is an issue whose solution is far from being explored. To solve these problems, this paper proposes a novel classification algorithm, flexible decision tree (FlexDT), which extends fuzzy logic to data stream classification. The advantages are three-fold. First, FlexDT offers a flexible structure to handle concept change effectively and efficiently. Second, FlexDT is robust to noise, so noise does not interfere with classification accuracy and an accuracy drop can be safely attributed to concept change. Third, it deals with missing values in an elegant way. Extensive evaluations compare FlexDT with representative existing data stream classification algorithms using a large suite of data streams and various statistical tests. Experimental results suggest that FlexDT offers a significant benefit to data stream classification in real-world scenarios where concept change, noise and missing values coexist.
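The "accuracy drop" heuristic that the paper argues against is easy to sketch, and the sketch makes the weakness visible: noise lowers windowed accuracy just as real concept change does. A minimal version in Python (window size and threshold are arbitrary, and FlexDT itself is not reproduced here):

```python
from collections import deque

class AccuracyDropDetector:
    """Naive concept-change detector: signal change whenever windowed
    online accuracy falls below a fixed threshold.  It cannot tell
    noise from genuine drift, which is the problem FlexDT addresses."""
    def __init__(self, window=200, threshold=0.70):
        self.outcomes = deque(maxlen=window)
        self.threshold = threshold

    def update(self, y_true, y_pred):
        self.outcomes.append(y_true == y_pred)
        if len(self.outcomes) == self.outcomes.maxlen:
            accuracy = sum(self.outcomes) / len(self.outcomes)
            return accuracy < self.threshold   # True => change signalled
        return False                           # window not yet full
```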
65.
李雪婵 《计算机科学》2008,35(6):299-300
This paper surveys, analyzes, and compares the leading classification methods. Building on this, and borrowing the fast classification property of decision trees, it proposes a classification algorithm for massive data based on database sampling, presents the algorithm's design ideas and implementation principles, and discusses optimizations for multiprocessing environments. Experiments show that the algorithm significantly improves classification efficiency on massive databases.
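A minimal sketch in Python of the general pattern described above: draw a random sample from the large data set and train a fast decision tree on it. The paper's own sampling strategy and its multiprocessing optimizations are not reproduced here:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_on_sample(X, y, sample_fraction=0.05, random_state=0):
    """Train a decision tree on a uniform random sample of a large data set."""
    rng = np.random.default_rng(random_state)
    n = len(y)
    size = max(1, int(n * sample_fraction))
    idx = rng.choice(n, size=size, replace=False)   # sample without replacement
    clf = DecisionTreeClassifier(random_state=random_state)
    clf.fit(X[idx], y[idx])
    return clf
```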
66.
After introducing the TV3D engine, this paper focuses on the methods and steps for implementing virtual reality with the TV3D engine in Visual C#.NET. A dynamic camera is used: when the keyboard or mouse is operated to move the target object, the scene viewpoint and the camera adjust automatically, so virtual roaming is achieved very simply. Experimental results show that when the TV3D engine is used to implement virtual reality, many low-level functions are easy to realize.
67.
Sub-Riemannian geometry is the geometry of a distribution of k-planes on an n-dimensional manifold with a smoothly varying inner product on the k-planes. Singular curves are singularities of the space of paths tangent to the distribution and joining two fixed points. This survey is devoted to the singular curves, which can be length-minimizing geodesics independent of the choice of inner product.
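For reference, the objects named in the abstract can be written out with the standard definitions (these formulas are textbook conventions, not quoted from the survey):

```latex
% A sub-Riemannian structure: a rank-k distribution D \subset TM on an
% n-dimensional manifold M, with a smoothly varying inner product g on D.
% A path is horizontal (tangent to the distribution) when
\[
  \dot\gamma(t) \in D_{\gamma(t)} \quad \text{for a.e. } t \in [0,1],
\]
% and its length and the induced distance between fixed endpoints are
\[
  L(\gamma) = \int_0^1 \sqrt{g_{\gamma(t)}\bigl(\dot\gamma(t),\dot\gamma(t)\bigr)}\,dt,
  \qquad
  d(p,q) = \inf\bigl\{ L(\gamma) : \gamma \text{ horizontal},\ \gamma(0)=p,\ \gamma(1)=q \bigr\}.
\]
% Singular curves are critical points of the endpoint map on the space of
% horizontal paths; the notion depends only on D, not on g, which is why a
% singular curve can be a length minimizer for every choice of inner product.
```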
68.
In recent years, point cloud representation has become one of the research hotspots in the field of computer vision, and has been widely used in many fields, such as autonomous driving, virtual reality, and robotics. Although deep learning techniques have achieved great success in processing regular, structured 2D grid image data, there are still great challenges in processing irregular, unstructured point cloud data. Point cloud classification is the basis of point cloud analysis, and many deep learning-based methods have been widely used in this task. Therefore, the purpose of this paper is to provide researchers in this field with the latest research progress and future trends. First, we introduce point cloud acquisition, characteristics, and challenges. Second, we review 3D data representations, storage formats, and commonly used datasets for point cloud classification. We then summarize deep learning-based methods for point cloud classification and complement recent research work. Next, we compare and analyze the performance of the main methods. Finally, we discuss some challenges and future directions for point cloud classification.
69.
Coronary artery disease (CAD) is a condition in which the heart does not receive a sufficient blood supply as a result of the accumulation of fatty deposits. As reported by the World Health Organization, around 32% of all deaths in the world are caused by CAD, and it is estimated that approximately 23.6 million people will die from this disease in 2030. CAD develops over time, and it is difficult to diagnose until a blockage or a heart attack occurs. In order to bypass the side effects and high costs of current methods, researchers have proposed diagnosing CAD with computer-aided systems, which analyze some physical and biochemical values at a lower cost. In this study, for CAD diagnosis, (i) seven different computational feature selection (FS) methods, one domain knowledge-based FS method, and different classification algorithms have been evaluated; (ii) an exhaustive ensemble FS method and a probabilistic ensemble FS method have been proposed. The proposed approach is tested on three publicly available CAD data sets using six different classification algorithms and four different variants of voting algorithms. The performance metrics have been comparatively evaluated with numerous combinations of classifiers and FS methods. The multi-layer perceptron classifier obtained satisfactory results on all three data sets. Performance evaluations show that the proposed approach resulted in 91.78%, 85.55%, and 85.47% accuracy for the Z-Alizadeh Sani, Statlog, and Cleveland data sets, respectively.
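A minimal sketch of the general pattern the study evaluates, i.e. a feature selection step feeding several classifiers combined by voting, using scikit-learn building blocks. The specific FS methods, the proposed ensemble-FS procedures, and the CAD data sets are not reproduced; the data set below is a stand-in and the hyperparameters are placeholders:

```python
from sklearn.datasets import load_breast_cancer     # stand-in tabular data, not a CAD set
from sklearn.ensemble import VotingClassifier
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)

# One computational FS method (mutual information) feeding a soft-voting ensemble.
model = make_pipeline(
    StandardScaler(),
    SelectKBest(mutual_info_classif, k=15),
    VotingClassifier(
        estimators=[
            ("mlp", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)),
            ("svm", SVC(probability=True, random_state=0)),
            ("lr", LogisticRegression(max_iter=2000)),
        ],
        voting="soft",
    ),
)

print(cross_val_score(model, X, y, cv=5).mean())   # average accuracy over 5 folds
```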
70.
Existing segmentation-based scene text detection methods still struggle to separate adjacent text regions, and the complex post-processing applied to the segmentation maps keeps detection efficiency low. To address these problems, this paper proposes a novel scene text detection model based on a fully convolutional network. First, a feature extractor produces multi-scale feature maps from the input image. Second, a bidirectional feature fusion module fuses the semantic information of two parallel branches and encourages them to be optimized jointly. The model then distinguishes adjacent text effectively by predicting, in parallel, a shrunken text-region map and a complete text-region map: the former keeps different text instances separable, while the latter effectively guides network optimization. Finally, to speed up detection, a fast and effective post-processing algorithm generates the text bounding boxes. Experiments show that the proposed method achieves the best results on the relevant data sets, improving F-measure by up to 1.0% over the previous best method while running at near real-time speed, which demonstrates both its effectiveness and its efficiency.
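A minimal sketch of the kind of post-processing the abstract describes: use the shrunken text-region map to keep adjacent instances apart, then grow each instance inside the complete text-region map to recover its full extent. The thresholds, the dilation budget, and the axis-aligned output boxes are simplifications; the paper's actual algorithm may differ:

```python
import numpy as np
from scipy import ndimage

def boxes_from_maps(shrunk_prob, full_prob, thr=0.5, grow_iters=10):
    """shrunk_prob, full_prob: HxW probability maps predicted by the two branches."""
    # 1. Connected components on the shrunken map separate adjacent text instances.
    seeds, num = ndimage.label(shrunk_prob > thr)
    full_mask = full_prob > thr
    boxes = []
    for i in range(1, num + 1):
        # 2. Dilate each seed, constrained to the full text mask, to recover the region.
        grown = ndimage.binary_dilation(seeds == i, iterations=grow_iters, mask=full_mask)
        ys, xs = np.nonzero(grown)
        boxes.append((int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max())))
    return boxes
```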