1.
Power-system maintenance is an important safeguard for the stable operation of the power grid, and UAV-based power-line inspection driven by intelligent algorithms makes such maintenance far more convenient. Power-line extraction is a key technology both for autonomous inspection and for the low-altitude flight safety of the aircraft, and combining it with deep learning is an important breakthrough point for power-line inspection. This paper applies deep learning to the power-line extraction task and, drawing on the characteristics of power-line images, embeds an improved image-input strategy and an attention module, yielding a power-line extraction model based on a stage attention mechanism (SA-Unet). In the encoding stage, SA-Unet adopts a stage input fusion strategy (SIFS) that makes full use of the image's multi-scale information to reduce the loss of spatial position information. In the decoding stage, an embedded stage attention module (SAM) focuses on power-line features, quickly filtering high-value information out of a large volume of data. Experimental results show that the method performs well across multiple scenes with complex backgrounds.
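The abstract does not spell out the internals of the stage attention module (SAM), but such modules typically reweight a feature map with a learned, sigmoid-squashed score map. A minimal NumPy sketch of that generic gating pattern follows; the function names, weights, and toy data are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def attention_gate(features, score_weights):
    """Generic attention gating (illustrative, not the paper's SAM):
    project the feature map to a scalar score per position, squash it
    with a sigmoid, and reweight the features so that high-score
    positions (e.g. power-line pixels) dominate.
    features: (H, W, C) array; score_weights: (C,) projection vector."""
    scores = features @ score_weights          # (H, W) raw score map
    attention = sigmoid(scores)                # values in (0, 1)
    return features * attention[..., None]     # broadcast over channels

# Toy 2x2 feature map with 3 channels and a hypothetical projection.
feats = np.array([[[1.0, 0.0, 2.0], [0.0, 0.0, 0.0]],
                  [[3.0, 1.0, 1.0], [0.0, 1.0, 0.0]]])
w = np.array([1.0, -1.0, 0.5])
gated = attention_gate(feats, w)
```

Positions with strong positive scores keep most of their activation, while low-score positions are attenuated toward zero, which is the "focus on high-value information" behaviour the abstract describes.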
2.
Clinical narratives such as progress summaries, lab reports, surgical reports, and other narrative texts contain key biomarkers about a patient's health. Evidence-based preventive medicine needs accurate semantic and sentiment analysis to extract and classify medical features as the input to appropriate machine learning classifiers. However, the traditional approach of using single classifiers is limited by the need for dimensionality reduction techniques, statistical feature correlation, a faster learning rate, and the lack of consideration of the semantic relations among features. Hence, extracting semantic and sentiment-based features from clinical text and combining multiple classifiers to create an ensemble intelligent system overcomes many limitations and provides a more robust prediction outcome. The selection of an appropriate approach and its interparameter dependency becomes key for the success of the ensemble method. This paper proposes a hybrid knowledge and ensemble learning framework for prediction of venous thromboembolism (VTE) diagnosis consisting of the following components: a VTE ontology, a semantic extraction and sentiment assessment of risk factor framework, and an ensemble classifier. A component-based analysis approach was adopted for evaluation on a data set of 250 clinical narratives, where the knowledge-and-ensemble framework achieved the following results with and without semantic extraction and sentiment assessment of risk factors, respectively: a precision of 81.8% and 62.9%, a recall of 81.8% and 57.6%, an F measure of 81.8% and 53.8%, and a receiver operating characteristic of 80.1% and 58.5% in identifying cases of VTE.
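The ensemble-combination step and the reported metrics (precision, recall, F measure) can be illustrated with a minimal sketch. The `majority_vote` helper and the toy predictions below are hypothetical stand-ins, not the paper's classifiers or data:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier predictions by majority vote.
    predictions: list of lists, one inner list per base classifier."""
    n_samples = len(predictions[0])
    combined = []
    for i in range(n_samples):
        votes = Counter(p[i] for p in predictions)
        combined.append(votes.most_common(1)[0][0])
    return combined

def precision_recall_f1(y_true, y_pred, positive=1):
    """Standard precision / recall / F1 for a binary label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Three hypothetical base classifiers voting on five narratives (1 = VTE).
preds = [[1, 0, 1, 1, 0],
         [1, 1, 1, 0, 0],
         [0, 0, 1, 1, 0]]
ensemble = majority_vote(preds)                       # [1, 0, 1, 1, 0]
p, r, f = precision_recall_f1([1, 0, 1, 0, 0], ensemble)
```

The voting step is only one of several possible combination rules; weighted or stacked combiners follow the same interface.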
3.
曾招鑫, 刘俊. 《计算机应用》, 2020, 40(5): 1453-1459
Automatic and accurate computer-based analysis of the morphological parameters of Caenorhabditis elegans (C. elegans) depends critically on segmenting the worm's body from micrographs. However, microscope images are noisy, the worm's edge pixels resemble the surrounding background, and flagella and other attachments must be separated from the body, so designing a robust C. elegans segmentation algorithm remains challenging. To address these problems, a deep-learning-based segmentation method is proposed that trains a Mask Region-based Convolutional Neural Network (Mask R-CNN) to learn the worm's morphological features and segment it automatically. First, multi-level feature pooling is improved to fuse high-level semantic features with low-level edge features, and the loss computation is improved with the Large-Margin Softmax Loss (LMSL); then, non-maximum suppression is improved; finally, a fully connected fusion branch and other refinements further optimize the segmentation results. Experimental results show that, compared with the original Mask R-CNN, the method improves Average Precision (AP) by 4.3 percentage points and mean Intersection over Union (mIOU) by 4 percentage points, indicating that the proposed deep-learning segmentation method effectively improves segmentation accuracy and segments the worm body more precisely in micrographs.
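The mIOU metric that the paper reports improvement on can be computed as follows. The `iou`/`mean_iou` helpers and the toy 4x4 label maps are illustrative, not the paper's evaluation code:

```python
import numpy as np

def iou(mask_a, mask_b):
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 1.0

def mean_iou(pred, target, num_classes):
    """Mean IoU over classes for integer label maps."""
    scores = [iou(pred == c, target == c) for c in range(num_classes)]
    return float(np.mean(scores))

# Toy 4x4 label maps: 0 = background, 1 = worm body.
pred = np.array([[0, 0, 1, 1],
                 [0, 1, 1, 1],
                 [0, 1, 1, 0],
                 [0, 0, 0, 0]])
gt   = np.array([[0, 0, 0, 1],
                 [0, 1, 1, 1],
                 [0, 1, 1, 1],
                 [0, 0, 0, 0]])
score = mean_iou(pred, gt, 2)   # (0.8 + 0.75) / 2 = 0.775
```

A "4 percentage point" mIOU gain means this averaged score rises by 0.04 in absolute terms.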
4.
Massive Open Online Courses (MOOCs) are becoming an essential source of information for both students and teachers. Noticeably, MOOCs have to adapt to the fast development of new technologies; they also have to satisfy the current generation of online students. The current MOOC management systems, such as Coursera, Udacity, edX, etc., use content management platforms where content is organized in a hierarchical structure. We envision a new generation of MOOCs that support interpretability with formal semantics by using the Semantic Web and online social networks. Semantic technologies support more flexible information management than that offered by the current MOOC platforms. Annotated information about courses, video lectures, assignments, students, teachers, etc., can be composed from heterogeneous sources, including contributions from the communities in the forum space. These annotations, combined with legacy data, build foundations for more efficient information discovery in MOOC platforms. In this article we review various Collaborative Semantic Filtering technologies for building a semantic MOOC management system; we then present a prototype of a semantic middle-sized platform implemented at Western Kentucky University that meets the aforementioned requirements.
5.
Image color clustering is a basic technique in image processing and computer vision, which is often applied in image segmentation, color transfer, contrast enhancement, object detection, skin color capture, and so forth. Various clustering algorithms have been employed for image color clustering in recent years. However, most of the algorithms require a large amount of memory or a predetermined number of clusters, and some of the existing algorithms are sensitive to their parameter configurations. In order to tackle these problems, we propose an image color clustering method named Student's t-based density peaks clustering with superpixel segmentation (tDPCSS), which obtains clustering results automatically, without requiring a large amount of memory, and does not depend on parameter settings or a predetermined number of clusters. In tDPCSS, superpixels are obtained through automatic and constrained simple non-iterative clustering, to automatically decrease the image data volume. A Student's t kernel function and a cluster center selection method are adopted to eliminate the dependence of density peak clustering on parameters and on the number of clusters, respectively. The experiments undertaken in this study confirmed that the proposed approach outperforms k-means, fuzzy c-means, mean-shift clustering, and density peak clustering with superpixel segmentation in the accuracy of the cluster centers and the validity of the clustering results.
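The density-peaks step at the heart of tDPCSS follows Rodriguez and Laio's scheme: each point gets a local density rho and a distance delta to the nearest denser point, and points with large rho * delta become cluster centers. A minimal NumPy sketch on toy 2-D points follows; the cluster assignment is simplified to nearest-center, and all names and data are illustrative, not the paper's (kernelized, superpixel-based) version:

```python
import numpy as np

def density_peaks(points, d_c, n_clusters):
    """Minimal density-peaks clustering sketch:
    rho_i  = number of neighbours within cutoff d_c;
    delta_i = distance to the nearest point of higher density
              (or the maximum distance, for the densest point).
    Centers are the n_clusters points with the largest rho * delta."""
    d = np.linalg.norm(points[:, None] - points[None, :], axis=-1)
    rho = (d < d_c).sum(axis=1) - 1          # exclude self
    delta = np.zeros(len(points))
    for i in range(len(points)):
        higher = np.where(rho > rho[i])[0]
        delta[i] = d[i, higher].min() if len(higher) else d[i].max()
    centers = np.argsort(rho * delta)[::-1][:n_clusters]
    # Simplified assignment: each point joins its nearest center.
    labels = np.argmin(d[:, centers], axis=1)
    return centers, labels

# Two toy blobs, each with a denser central point.
pts = np.array([[0, 0], [0.1, 0], [-0.1, 0], [0, 0.1], [0, -0.1],
                [5, 5], [5.1, 5], [4.9, 5], [5, 5.1], [5, 4.9]])
centers, labels = density_peaks(pts, d_c=0.15, n_clusters=2)
```

tDPCSS replaces the hard cutoff density with a Student's t kernel and selects centers automatically, which removes the `d_c` and `n_clusters` knobs shown here.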
6.
Most current semantic parsing methods are based on compositional semantics, and at their core lies the lexicon: the set of lexical entries that map words in a natural-language sentence to predicates in the knowledge-base ontology. Semantic parsing has long faced the problem of insufficient lexical coverage. To address this problem, building on existing work, this paper proposes a bridging-based lexicon learning method that can automatically introduce and learn new lexical entries during training. To further improve the accuracy of the newly learned entries, we design new word/binary-predicate feature templates and use a voting-based method to obtain a core lexicon. Comparative experiments on two public datasets (WebQuestions and Free917) show that the method learns new lexical entries and improves lexical coverage, thereby improving the performance of the semantic parsing system, especially its recall.
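The voting-based core-lexicon step can be sketched as follows: each training run proposes word-to-predicate mappings, and only mappings proposed by enough runs survive. The helper and the Freebase-style predicate strings are illustrative assumptions, not the paper's actual lexicon data:

```python
from collections import Counter

def core_lexicon(candidate_lexicons, min_votes):
    """Voting-based core-lexicon extraction (sketch): each run proposes
    word -> predicate mappings; keep only mappings that at least
    `min_votes` runs agree on."""
    votes = Counter()
    for lexicon in candidate_lexicons:
        votes.update(set(lexicon.items()))
    return {word: pred for (word, pred), n in votes.items() if n >= min_votes}

# Three hypothetical training runs proposing lexicon entries.
runs = [
    {"born in": "people.person.place_of_birth", "wife": "people.person.spouse"},
    {"born in": "people.person.place_of_birth", "wife": "people.person.sibling"},
    {"born in": "people.person.place_of_birth"},
]
core = core_lexicon(runs, min_votes=2)   # keeps only the stable mapping
```

Unstable entries (here, the two conflicting mappings for "wife") are filtered out, which is how voting raises the precision of newly learned entries.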
7.
To address the over-segmentation problem of the classical watershed image segmentation algorithm, a color image segmentation algorithm combining bitmap cutting and region merging is proposed. A gradient image of the original color image is first computed with a spatial gradient operator and then reconstructed using bitmap cutting; watershed pre-segmentation is applied to the new gradient image; finally, regions of the pre-segmented image are merged under the minimum-heterogeneity criterion to obtain the final segmentation result. Compared with existing methods of the same kind, the introduction of bitmap cutting suppresses the influence of noise on the segmentation result, segments accurately at blurred edges, yields a smaller number of regions consistent with human vision, and improves running efficiency.
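The region-merging step under a minimum-heterogeneity criterion can be sketched as a greedy loop: repeatedly merge the adjacent pair of regions whose mean-color difference is smallest, until the smallest difference exceeds a threshold. The function, data structures, and toy regions below are illustrative, not the paper's implementation:

```python
import numpy as np

def merge_regions(means, sizes, adjacency, threshold):
    """Greedy region merging (sketch): repeatedly merge the adjacent
    region pair with the smallest mean-colour difference.
    means: {region_id: colour array}; sizes: {region_id: pixel count};
    adjacency: set of frozenset({a, b}) neighbour pairs."""
    while adjacency:
        candidates = [(float(np.linalg.norm(means[a] - means[b])), a, b)
                      for a, b in (sorted(p) for p in adjacency)]
        diff, a, b = min(candidates)
        if diff > threshold:
            break
        # Merge b into a: size-weighted mean colour, then rewire adjacency.
        total = sizes[a] + sizes[b]
        means[a] = (means[a] * sizes[a] + means[b] * sizes[b]) / total
        sizes[a] = total
        del means[b], sizes[b]
        adjacency = {frozenset(a if r == b else r for r in pair)
                     for pair in adjacency if pair != frozenset((a, b))}
        adjacency = {p for p in adjacency if len(p) == 2}
    return means, sizes

# Toy watershed output: regions 1 and 2 are similar, region 3 is not.
means = {1: np.array([10.0]), 2: np.array([12.0]), 3: np.array([100.0])}
sizes = {1: 5, 2: 5, 3: 10}
adjacency = {frozenset((1, 2)), frozenset((2, 3))}
means, sizes = merge_regions(means, sizes, adjacency, threshold=5.0)
```

The over-segmented pair (1, 2) collapses into one region while the distinct region 3 survives, which is exactly how merging counteracts watershed over-segmentation.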
8.
Semantic search is gradually establishing itself as the next generation search paradigm, which meets a wider range of information needs better than traditional full-text search. At the same time, however, expanding search towards document structure and external, formal knowledge sources (e.g. LOD resources) remains challenging, especially with respect to efficiency, usability, and scalability. This paper introduces Mímir, an open-source framework for integrated semantic search over text, document structure, linguistic annotations, and formal semantic knowledge. Mímir supports complex structural queries, as well as basic keyword search. Exploratory search and sense-making are supported through information visualisation interfaces, such as co-occurrence matrices and term clouds. There is also an interactive retrieval interface, where users can save, refine, and analyse the results of a semantic search over time. The more well-studied precision-oriented information seeking searches are also well supported. The generic and extensible nature of the Mímir platform is demonstrated through three different, real-world applications, one of which required indexing and search over tens of millions of documents and fifty to a hundred times as many semantic annotations. Scaling up to over 150 million documents was also accomplished, via index federation and cloud-based deployment.
9.
Explicit extraction of the retinal vessels is one of the most significant tasks in medical imaging, used to analyze both ophthalmological diseases, such as glaucoma, Diabetic Retinopathy (DR), Retinopathy of Prematurity (ROP), and Age-Related Macular Degeneration (AMD), and non-retinal conditions such as stroke, hypertension, and cardiovascular disease. The state of the retinal vasculature is a significant diagnostic element in the field of ophthalmology. Retinal vessel extraction in fundus imaging is a difficult task because of varying vessel sizes, relatively low contrast, and the presence of pathologies such as hemorrhages and microaneurysms. Manual vessel extraction is challenging due to the complicated nature of the retinal vessel structure, and it also requires a strong skill set and training. In this paper, a supervised technique for blood vessel extraction in retinal images using a Modified Adaboost Extreme Learning Machine (MAD-ELM) is proposed. First, the fundus image is preprocessed for contrast enhancement and inhomogeneity correction. Then, a set of core features is extracted, and the best features are selected using minimal-Redundancy-maximum-Relevance (mRMR). Vessel and non-vessel pixels are then classified with the MAD-ELM method. The DRIVE and DR-HAGIS datasets are used for evaluation, and performance is assessed in terms of accuracy, sensitivity, and specificity. The proposed technique attains an accuracy of 0.9619 on the DRIVE database and 0.9519 on the DR-HAGIS database, which contains pathological images. Our results show that, in addition to healthy retinal images, the proposed method performs well in extracting blood vessels from pathological images and is therefore comparable with state-of-the-art methods.
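The mRMR selection step greedily picks features that are maximally relevant to the label and minimally redundant with the features already chosen. The sketch below substitutes absolute Pearson correlation for the mutual information used in the original mRMR criterion (a stated simplification), and the features and labels are a toy example, not fundus-image data:

```python
import numpy as np

def mrmr_select(X, y, k):
    """Greedy mRMR-style feature selection (sketch).  Relevance and
    redundancy are measured with absolute Pearson correlation here,
    standing in for the mutual information of the original criterion.
    Picks the most relevant feature first, then repeatedly adds the
    feature maximizing relevance minus mean redundancy."""
    n_features = X.shape[1]
    relevance = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                          for j in range(n_features)])
    selected = [int(np.argmax(relevance))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for j in range(n_features):
            if j in selected:
                continue
            redundancy = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1])
                                  for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
    return selected

# Toy data: feature 1 is an exact affine copy of feature 0 (redundant);
# feature 2 is less relevant but carries independent signal.
y  = np.array([0, 0, 1, 1, 0, 1, 1, 0], dtype=float)
f0 = np.array([0.2, 0.1, 0.7, 0.9, 0.3, 0.8, 0.6, 0.2])
f1 = 2 * f0 + 1
f2 = np.array([0, 0, 1, 0, 0, 1, 1, 0], dtype=float)
X  = np.column_stack([f0, f1, f2])
selected = mrmr_select(X, y, 2)   # second pick skips the redundant copy
```

Whichever of the two identical twins (features 0 and 1) is picked first, the redundancy penalty steers the second pick to feature 2 instead of the copy.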
10.
Clip-art image segmentation is widely used as an essential step in solving many vision problems, such as colorization and vectorization. Many of these applications not only demand accurate segmentation results, but also have little tolerance for time cost, which leads to the main challenge of this kind of segmentation. However, most existing segmentation techniques are insufficient for this purpose due to either their high computation cost or low accuracy. To address these issues, we propose a novel segmentation approach, ECISER, which is well suited to this context. The basic idea of ECISER is to take advantage of the particular nature of cartoon images and connect image segmentation with aliased rasterization. Based on this relationship, a clip-art image can be quickly segmented into regions by re-rasterization of the original image and several other computationally efficient techniques developed in this paper. Experimental results show that our method achieves dramatic computational speedups over current state-of-the-art approaches, while preserving almost the same quality of results.
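A natural baseline for the region-extraction step that ECISER accelerates is plain flood-fill labelling of uniformly colored, 4-connected regions. The sketch below shows only that baseline; the paper's re-rasterization technique is not reproduced here, and the function name and toy image are illustrative:

```python
def color_regions(image):
    """Segment a clip-art-style image into 4-connected regions of
    identical colour via iterative flood fill (baseline sketch only).
    image: list of rows of hashable colour values (e.g. strings)."""
    h, w = len(image), len(image[0])
    labels = [[-1] * w for _ in range(h)]
    current = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy][sx] != -1:
                continue
            # Flood-fill one region of uniform colour with a stack.
            stack, colour = [(sy, sx)], image[sy][sx]
            labels[sy][sx] = current
            while stack:
                y, x = stack.pop()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny][nx] == -1
                            and image[ny][nx] == colour):
                        labels[ny][nx] = current
                        stack.append((ny, nx))
            current += 1
    return labels, current

# Toy 3x3 clip-art image: R = red, B = blue, G = green.
img = ["RRB",
       "RBB",
       "GGB"]
labels, n_regions = color_regions(img)   # 3 uniform-colour regions
```

ECISER's contribution is doing this kind of region extraction far faster on large images by exploiting aliased rasterization, rather than scanning pixel-by-pixel as this baseline does.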