991.

Image completion plays a vital role in compressed sensing, machine learning, and computer vision applications. Rank minimization algorithms are commonly used to perform image completion, but their major problem is the loss of information in the recovered image at high corruption ratios. To overcome this problem, Lifting wavelet transform based Rank Minimization (LwRM) and Discrete wavelet transform based Rank Minimization (DwRM) methods are proposed, which can recover the image even when more than 80% of the observations are corrupted. The proposed methods are evaluated with Full Reference Image Quality Assessment (FRIQA) and No Reference Image Quality Assessment (NR-IQA) metrics, and their simulation results are superior to state-of-the-art methods.
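A minimal sketch of the rank-minimization step that methods of this family build on, assuming the standard singular value thresholding (SVT) scheme; the wavelet-domain variants (LwRM/DwRM) would run a step like this on wavelet subbands (e.g., obtained with `pywt.dwt2`) rather than on raw pixels, and the threshold, step size, and iteration count below are illustrative, not the paper's settings:

```python
import numpy as np

def svt_complete(observed, mask, tau=500.0, step=1.2, iters=200):
    """Singular value thresholding for low-rank image completion.
    observed: 2-D image with corrupted entries zeroed; mask: True where known."""
    Y = np.zeros_like(observed, dtype=float)
    Z = Y
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s = np.maximum(s - tau, 0.0)            # soft-threshold the singular values
        Z = (U * s) @ Vt                        # current low-rank estimate
        Y = Y + step * mask * (observed - Z)    # enforce agreement on known pixels
    return Z
```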

992.

An image captured by a low dynamic range (LDR) camera fails to capture the entire exposure range of a scene and instead covers only a certain range of exposures. To cover the entire exposure range in a single image, bracketed-exposure LDR images are combined. The differing exposure ranges across images cause information loss in certain regions, which must be addressed; based on this motivation, a novel layer-based fusion methodology is proposed to generate a high dynamic range image. High- and low-frequency layers are formed by dividing each image according to pixel intensity variations, and regions are identified from the information-loss sections created in the differently exposed images. High-frequency layers are combined using region-based fusion with Dense SIFT as the activity-level measure, while low-frequency layers are combined using a weighted sum. Finally, the combined high- and low-frequency layers are merged pixel by pixel to synthesize the fused image. Objective analysis comparing the quality of the proposed method with the state of the art indicates its superiority.
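A compact sketch of the layer-based fusion idea under stated simplifications: Gaussian smoothing stands in for the paper's low/high-frequency split, and smoothed gradient energy stands in for Dense SIFT as the activity-level measure; the sigma value is illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fuse_exposures(images, sigma=5.0):
    """Layer-based exposure fusion sketch: split each LDR image into low/high
    frequency layers, select high-frequency content by local activity, and
    combine low-frequency layers with a (here equal-weight) weighted sum."""
    imgs = [im.astype(float) for im in images]
    lows = [gaussian_filter(im, sigma) for im in imgs]        # low-frequency layers
    highs = [im - lo for im, lo in zip(imgs, lows)]           # high-frequency layers
    # Activity measure: smoothed gradient energy stands in for Dense SIFT here
    acts = np.stack([gaussian_filter(np.hypot(*np.gradient(im)), sigma)
                     for im in imgs])
    idx = acts.argmax(axis=0)                  # most "active" exposure per pixel
    high_f = np.choose(idx, np.stack(highs))   # region-wise high-frequency fusion
    low_f = np.mean(lows, axis=0)              # weighted sum of low-frequency layers
    return np.clip(high_f + low_f, 0.0, 255.0)
```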

993.
Multimedia Tools and Applications - Detection and clustering of commercial advertisements play an important role in multimedia indexing as well as in the creation of personalized user content. In...
994.

A digital image watermarking technique based on LSB substitution and the Hill cipher is presented and examined in this paper. For better imperceptibility, the watermark is inserted in the spatial domain, and it is embedded in the cover-image block having the highest entropy value. To improve the security of the watermark, Hill cipher encryption is used. Both subjective and objective image quality assessment techniques have been used to evaluate the imperceptibility of the proposed scheme. Furthermore, the perceptual quality of the watermarked images achieved by the proposed framework has been compared with some state-of-the-art watermarking strategies. Test results demonstrate that the presented method is robust against various image processing attacks such as salt-and-pepper noise, Gaussian filter attacks, and median filter attacks.
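An illustrative sketch of this pipeline, with loudly hypothetical parameters: the Hill cipher is applied modulo 2 over bit pairs so the encrypted watermark stays binary, and the key and block size are made-up examples (the paper's exact cipher arithmetic and block size may differ):

```python
import numpy as np

def block_entropy(block):
    """Shannon entropy of an 8-bit image block."""
    hist = np.bincount(block.ravel(), minlength=256) / block.size
    p = hist[hist > 0]
    return -np.sum(p * np.log2(p))

def embed_watermark(cover, wm_bits, key=np.array([[3, 3], [2, 5]]), block=32):
    """Hill-cipher-encrypt the watermark bits, then LSB-embed them in the
    highest-entropy block of the cover image. wm_bits must have even length,
    and key must be invertible mod 2 so the watermark can be extracted."""
    pairs = wm_bits.reshape(-1, 2)
    enc = ((pairs @ key) % 2).ravel().astype(np.uint8)   # Hill cipher over bit pairs
    # Locate the highest-entropy block of the cover image
    H, W = cover.shape
    best, by, bx = -1.0, 0, 0
    for y in range(0, H - block + 1, block):
        for x in range(0, W - block + 1, block):
            e = block_entropy(cover[y:y+block, x:x+block])
            if e > best:
                best, by, bx = e, y, x
    out = cover.copy()
    region = out[by:by+block, bx:bx+block].ravel()
    region[:enc.size] = (region[:enc.size] & 0xFE) | enc  # LSB substitution
    out[by:by+block, bx:bx+block] = region.reshape(block, block)
    return out
```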

995.
Multimedia Tools and Applications - The design of a 2-D digital differentiator is a multimodal and high-dimensional problem that requires a large number of differentiator coefficients to be...
996.

Over the last few years, there has been rapid growth in digital data, and images with quotes spread virally through online social media platforms. Misquotes found online often spread like wildfire through social media, which highlights the lack of responsibility of web users who circulate poorly cited quotes. It is therefore important to authenticate the content of images circulated online: the text within such images must be retrieved and the quotes verified before use, in order to distinguish a fake or misquote from an authentic one. Optical Character Recognition (OCR) is used in this paper to convert textual images into readable text, but no OCR tool extracts information from images with perfect accuracy. This paper proposes a post-processing method for the retrieved text that improves the accuracy of the text detected in images. Google Cloud Vision has been used to recognize text in images, and post-processing the extracted text improved text-recognition accuracy by approximately 3.5%. A web-based text similarity approach (URLs and domain names) has been used to examine the authenticity of the content of the quoted images, achieving approximately 96.26% accuracy in classifying quoted images as verified or misquoted. A ground-truth dataset of authentic site names has also been created. In this research, images with quotes by famous celebrities and global leaders have been used, and a comparative analysis has been performed to show the effectiveness of the proposed algorithm.
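A minimal sketch of the verification step, assuming the OCR text has already been extracted and post-processed; the whitelist entries, the input format of the search results, and the similarity threshold are hypothetical stand-ins for the paper's ground-truth dataset of authentic site names:

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical whitelist standing in for the paper's ground-truth site list
AUTHENTIC_DOMAINS = {"brainyquote.com", "goodreads.com"}

def verify_quote(ocr_text, search_results, threshold=0.85):
    """Classify an OCR-extracted quote as verified if a sufficiently similar
    quote appears in a search result hosted on a whitelisted domain.
    search_results: list of (url, quote_text) pairs (assumed input format)."""
    for url, candidate in search_results:
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        similarity = SequenceMatcher(None, ocr_text.lower(),
                                     candidate.lower()).ratio()
        if domain in AUTHENTIC_DOMAINS and similarity >= threshold:
            return "verified"
    return "misquoted"
```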

997.

With the advancement of image acquisition devices and social networking services, a huge volume of image data is generated. These images are manipulated with various image and video processing applications, and the originals thus become tampered. Such tampered images are a prime source of fake news, of defamation, and, when used as evidence, of misleading law enforcement. Hence, before relying entirely on image data, the authenticity of the image must be verified. Works in the literature verify the authenticity of an image based on noise inconsistency; however, they suffer from confusion between edges and noise, the need for post-processing operations for localization, and the need for prior knowledge about the image. To handle these limitations, a noise-inconsistency-based technique is presented here to detect and localize a forged region in an image. The work consists of three major steps: pre-processing, noise estimation, and post-processing. Two publicly available datasets are used for the experiments, and results are reported in terms of pixel-level precision, recall, accuracy, and F1-score. The results are also compared with recent state-of-the-art techniques; the average accuracy of the proposed work across the datasets is 91.70%, the highest among the compared techniques.
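A sketch of block-wise noise-inconsistency localization under common assumptions: an Immerkær-style high-pass residual serves as the noise estimator and a median-absolute-deviation test as the post-processing threshold; the paper's exact estimator, block size, and threshold may differ:

```python
import numpy as np
from scipy.ndimage import convolve

# Immerkær-style high-pass kernel that suppresses image structure, leaving noise
KERNEL = np.array([[1, -2, 1], [-2, 4, -2], [1, -2, 1]], dtype=float)

def noise_inconsistency_map(gray, block=32):
    """Estimate a per-block noise level; blocks whose noise deviates strongly
    from the global median are flagged as candidate forged regions."""
    resid = np.abs(convolve(gray.astype(float), KERNEL))
    H, W = gray.shape
    levels = np.zeros((H // block, W // block))
    for i in range(H // block):
        for j in range(W // block):
            patch = resid[i*block:(i+1)*block, j*block:(j+1)*block]
            levels[i, j] = np.median(patch)     # robust local noise estimate
    med = np.median(levels)
    mad = np.median(np.abs(levels - med)) + 1e-9
    return np.abs(levels - med) / mad > 3.0     # boolean mask of suspect blocks
```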

998.
Pattern Analysis and Applications - The performance of graph-based learning techniques largely relies on the edges defined between the vertices of the graph. These edges, which represent the affinity...
999.
In this paper, three new Gramians are introduced, namely limited-time interval impulse response Gramians (LTIRG), generalized limited-time Gramians (GLTG), and generalized limited-time impulse response Gramians (GLTIRG). GLTG and GLTIRG are applicable both to unstable systems and to systems which have eigenvalues of opposite polarities and equal magnitude. The concept of these Gramians is utilized to develop model reduction algorithms for linear time-invariant continuous-time single-input single-output (SISO) systems. In the cases of GLTIRG- and GLTG-based model reduction, the standard time-limited Gramians are generalized to apply to unstable systems by transforming the original system into a new system, which requires the solution of two Riccati equations. Two numerical examples are included to illustrate the proposed methods, and the results are compared with standard techniques.
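For reference, the standard time-limited Gramians of a stable LTI system (A, B, C) that the paper's limited-time definitions generalize are given below; this is the textbook definition, not the paper's new Gramians:

```latex
W_c(t_1, t_2) = \int_{t_1}^{t_2} e^{A\tau} B B^{\top} e^{A^{\top}\tau}\, d\tau,
\qquad
W_o(t_1, t_2) = \int_{t_1}^{t_2} e^{A^{\top}\tau} C^{\top} C\, e^{A\tau}\, d\tau
```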
1000.

Biometric applications are highly sensitive to process quality because of the complexity of presenting unstructured input to the processing pipeline. Existing image processing applications are built from separate program segments such as image acquisition, segmentation, feature extraction, and final output. The proposed model is designed with 2 convolutional layers and 3 dense layers. We evaluated the model on 5 datasets: 3 benchmark datasets (CASIA, UBIRIS, and MMU), a random dataset, and live video, calculating the FPR, FNR, precision, recall, and accuracy for each. The calculated accuracy of the proposed system is 82.8% on CASIA, 86% on UBIRIS, 84% on MMU, and 84% on the random dataset; on low-resolution live video, the accuracy is 72.4%. The proposed system achieves better accuracy than existing state-of-the-art systems.
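A sketch of the stated topology (2 convolutional layers, 3 dense layers) in Keras; the filter counts, kernel sizes, input resolution, and class count are illustrative assumptions, not the paper's exact configuration:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_iris_model(input_shape=(64, 64, 1), n_classes=100):
    """2 convolutional + 3 dense layers, matching the abstract's topology;
    all hyperparameters here are assumptions for illustration."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),   # conv layer 1
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),   # conv layer 2
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),      # dense layer 1
        layers.Dense(128, activation="relu"),      # dense layer 2
        layers.Dense(n_classes, activation="softmax"),  # dense layer 3
    ])

model = build_iris_model()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```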
