991.

Images captured by a low dynamic range (LDR) camera fail to capture the entire exposure range of a scene and instead cover only a limited band of exposures. To cover the full exposure range in a single image, bracketed-exposure LDR images are combined. Because each image covers a different exposure range, information is lost in certain regions. To address these regions, a novel layer-based fusion methodology is proposed to generate a high dynamic range image. Each image is divided into high- and low-frequency layers based on pixel intensity variations, and the affected regions are identified from the information-loss sections created in the differently exposed images. The high-frequency layers are combined by region-based fusion, with Dense SIFT used as the activity-level measure; the low-frequency layers are combined by a weighted sum. Finally, the combined high- and low-frequency layers are merged pixel by pixel to synthesize the fused image. An objective analysis comparing the quality of the proposed method with the state of the art indicates its superiority.
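The weighted-sum step for the low-frequency layers can be sketched as below. This is an illustrative reconstruction, not the authors' code: the well-exposedness weight (a Gaussian centered at mid-intensity, a common choice in exposure fusion) is an assumption, as the abstract does not specify the weighting function.

```python
import math

def well_exposedness(p, sigma=0.2):
    """Gaussian weight favoring mid-range intensities (assumed weighting)."""
    return math.exp(-((p - 0.5) ** 2) / (2 * sigma ** 2))

def fuse_low_frequency(layers):
    """Weighted-sum fusion of low-frequency layers; pixel values in [0, 1]."""
    h, w = len(layers[0]), len(layers[0][0])
    fused = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            weights = [well_exposedness(layer[y][x]) for layer in layers]
            total = sum(weights) or 1.0
            fused[y][x] = sum(wt * layer[y][x]
                              for wt, layer in zip(weights, layers)) / total
    return fused
```

With two layers holding an under- and an over-exposed value of the same pixel, the symmetric weights average them toward a well-exposed mid-tone.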

992.
Multimedia Tools and Applications - Detection and clustering of commercial advertisements play an important role in multimedia indexing and in the creation of personalized user content. In...
993.

A digital image watermarking technique based on LSB substitution and the Hill cipher is presented and examined in this paper. For better imperceptibility, the watermark is inserted in the spatial domain, and it is embedded in the cover-image block with the highest entropy value. To improve the security of the watermark, Hill-cipher encryption is used. Both subjective and objective image-quality assessment techniques have been used to evaluate the imperceptibility of the proposed scheme. Further, the perceptual quality of the watermarked images achieved by the proposed framework has been compared with several state-of-the-art watermarking strategies. Test results demonstrate that the presented method is robust against various image-processing attacks such as salt-and-pepper noise, Gaussian filtering, and median filtering.
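The entropy-based block selection and LSB embedding can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Hill-cipher encryption step is omitted (the bits are assumed to be already encrypted), and blocks are represented as flat pixel lists for simplicity.

```python
import math

def block_entropy(pixels):
    """Shannon entropy of a block's intensity histogram."""
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def embed_lsb(pixels, bits):
    """Replace the least significant bit of each pixel with a watermark bit."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def embed_in_max_entropy_block(blocks, bits):
    """Embed the (already encrypted) watermark bits in the highest-entropy block."""
    idx = max(range(len(blocks)), key=lambda i: block_entropy(blocks[i]))
    blocks = list(blocks)
    blocks[idx] = embed_lsb(blocks[idx], bits)
    return idx, blocks
```

A uniform block has zero entropy, so a textured block is always preferred as the embedding target, which helps imperceptibility.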

994.
Multimedia Tools and Applications - The design of a 2-D digital differentiator is a multimodal, high-dimensional problem that requires a large number of differentiator coefficients to be...
995.

Over the last few years, there has been rapid growth in digital data. Images bearing quotes spread virally through online social media platforms, and misquotes often spread like wildfire, highlighting the lack of responsibility of web users who circulate poorly cited quotes. It is therefore important to authenticate the content of images circulated online: the text within such images must be retrieved and verified before use so that fake or misattributed quotes can be distinguished from authentic ones. Optical Character Recognition (OCR) is used in this paper to convert textual images into machine-readable text, but no OCR tool extracts information from images perfectly. This paper therefore proposes a post-processing method applied to the retrieved text to improve the accuracy of the detected text. Google Cloud Vision has been used to recognize text in images, and post-processing the extracted text improved recognition accuracy by approximately 3.5%. A web-based text-similarity approach (URLs and domain names) has been used to examine the authenticity of the quoted images, achieving approximately 96.26% accuracy in classifying them as verified or misquoted. A ground-truth dataset of authentic site names has also been created. In this research, images with quotes by famous celebrities and global leaders have been used, and a comparative analysis demonstrates the effectiveness of the proposed algorithm.
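The verification idea, comparing OCR output against a reference quote from a trusted source, can be sketched with a simple string-similarity check. This is a hedged illustration: the paper's actual method is web-based (URLs and domain names), and the 0.85 threshold here is an assumed value, not one reported by the authors.

```python
from difflib import SequenceMatcher

def normalize(text):
    """Lowercase and collapse whitespace before comparison, so trivial
    OCR formatting differences do not affect the score."""
    return " ".join(text.lower().split())

def is_verified(ocr_text, reference, threshold=0.85):
    """Classify a quoted image as verified if its OCR output is
    sufficiently similar to a reference quote from an authentic site."""
    ratio = SequenceMatcher(None, normalize(ocr_text),
                            normalize(reference)).ratio()
    return ratio >= threshold
```

Post-processing (here, the `normalize` step) is what lifts noisy OCR output over the similarity threshold, mirroring the accuracy gain the abstract reports.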

996.

With the advancement of image-acquisition devices and social-networking services, a huge volume of image data is generated. Using various image- and video-processing applications, these data can be manipulated, and original images thus get tampered with. Tampered images are a prime source of fake news, are used to defame individuals and, in some cases (when used as evidence), mislead law-enforcement bodies. Hence, before image data are relied upon, their authenticity must be verified. Works in the literature verify the authenticity of an image based on noise inconsistency; however, they suffer from confusion between edges and noise, require post-processing operations for localization, and need prior knowledge about the image. To handle these limitations, a noise-inconsistency-based technique is presented here to detect and localize forged regions in an image. The work consists of three major steps: pre-processing, noise estimation, and post-processing. Two publicly available datasets are used for the experiments, and results are reported in terms of pixel-level precision, recall, accuracy, and F1-score. The presented work is also compared with recent state-of-the-art techniques; its average accuracy across the datasets is 91.70%, the highest among them.
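The core intuition, a spliced region carrying a different noise level than the rest of the image, can be sketched with a per-block noise estimate. This is an illustrative stand-in, not the paper's estimator: both the noise proxy (median absolute first difference) and the decision rule (deviation from the median block estimate by a factor of 2) are assumptions.

```python
import statistics

def block_noise(block):
    """Crude noise proxy: median absolute first difference along each row."""
    diffs = [abs(row[i + 1] - row[i])
             for row in block for i in range(len(row) - 1)]
    return statistics.median(diffs)

def inconsistent_blocks(blocks, factor=2.0):
    """Flag blocks whose noise estimate deviates strongly from the median
    estimate over all blocks (a hypothetical localization rule)."""
    estimates = [block_noise(b) for b in blocks]
    med = statistics.median(estimates)
    return [i for i, e in enumerate(estimates) if med > 0 and e > factor * med]
```

A block pasted in from a noisier source image stands out against the median estimate and is flagged, which localizes the forgery at block granularity.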

997.
Pattern Analysis and Applications - The performance of graph-based learning techniques largely relies on the edges defined between the vertices of the graph. These edges, which represent the affinity...
998.
In this paper, three new Gramians are introduced: limited-time interval impulse-response Gramians (LTIRG), generalized limited-time Gramians (GLTG), and generalized limited-time impulse-response Gramians (GLTIRG). GLTG and GLTIRG are applicable both to unstable systems and to systems that have eigenvalues of equal magnitude and opposite sign. These Gramians are used to develop model-reduction algorithms for linear time-invariant continuous-time single-input single-output (SISO) systems. In GLTIRG- and GLTG-based model reduction, the standard time-limited Gramians are generalized to unstable systems by transforming the original system into a new one, which requires solving two Riccati equations. Two numerical examples illustrate the proposed methods, and the results are compared with standard techniques.
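For reference, the standard time-limited Gramians that these definitions generalize are, for a stable system $\dot{x} = Ax + Bu$, $y = Cx$ on the interval $[0, T]$:

```latex
W_c[0,T] = \int_{0}^{T} e^{A t}\, B B^{\top}\, e^{A^{\top} t}\, dt ,
\qquad
W_o[0,T] = \int_{0}^{T} e^{A^{\top} t}\, C^{\top} C\, e^{A t}\, dt .
```

These integrals are well defined only for stable $A$ over a finite horizon, which is exactly why the abstract's generalizations (GLTG, GLTIRG) transform the original unstable system before applying them; the paper's own extended definitions are not reproduced here.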
999.

Biometric applications are highly sensitive to the processing pipeline because of the complexity of presenting unstructured input. Existing image-processing applications are built from separate program segments such as image acquisition, segmentation, feature extraction, and final output. The proposed model is designed with 2 convolutional layers and 3 dense layers. The module was examined on 5 datasets: 3 benchmark datasets (CASIA, UBIRIS, and MMU), a random dataset, and live video. The FPR, FNR, precision, recall, and accuracy were calculated for each dataset. The accuracy of the proposed system is 82.8% on CASIA, 86% on UBIRIS, 84% on MMU, and 84% on the random dataset; on low-resolution live video, it is 72.4%. The proposed system achieved better accuracy than existing state-of-the-art systems.
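The metrics reported above all follow from a binary confusion matrix. As a quick reference (the counts below are hypothetical, not from the paper), they can be computed as:

```python
def classification_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    return {
        "precision": tp / (tp + fp),
        "recall": tp / (tp + fn),              # recall = 1 - FNR
        "fpr": fp / (fp + tn),                 # false positive rate
        "fnr": fn / (fn + tp),                 # false negative rate
        "accuracy": (tp + tn) / (tp + fp + tn + fn),
    }
```

For example, 50 true positives, 10 false positives, 30 true negatives, and 10 false negatives yield an accuracy of 0.8 and an FPR of 0.25.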

1000.
A survival model of mandatory lane-change duration in expressway work zones
To study mandatory lane-change behavior and its influencing factors in expressway work zones, a multiplicative hazard-rate model of lane-change duration was built using the semi-parametric method of survival analysis. Lane-change durations and influencing-factor data in expressway work zones were collected by drone video, a Cox proportional-hazards model of lane-change duration was established, and Cox regression analysis was performed on the duration data. The results show that nearly 77% of lane-changing vehicles complete the maneuver within 10 s; no significant difference in lane-change duration through the maintenance work zone was found between light and medium vehicles; and, for the same lane-change duration, the cumulative survival rate in off-peak periods is clearly lower than in peak and transition periods, with the peak period highest. The proposed survival model can effectively quantify the influence of vehicle type and traffic period on lane-change behavior in expressway work zones, providing a theoretical basis for traffic management and control in work zones and for modeling and simulating lane-change behavior.
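The Cox model fitted above maximizes a partial likelihood over the observed durations. A minimal single-covariate sketch of that objective (an illustration of the standard Cox partial log-likelihood, not the paper's multi-covariate model; tied event times are not handled):

```python
import math

def cox_partial_loglik(times, events, x, beta):
    """Cox proportional-hazards partial log-likelihood, one covariate.

    times  -- observed durations (e.g. lane-change times in seconds)
    events -- 1 if the lane change completed (event observed), 0 if censored
    x      -- covariate values (e.g. vehicle type coded numerically)
    beta   -- regression coefficient being evaluated
    """
    ll = 0.0
    for i, (t, d) in enumerate(zip(times, events)):
        if not d:
            continue  # censored observations contribute only via risk sets
        # risk set: all subjects whose lane change is still ongoing at time t
        risk = sum(math.exp(beta * x[j])
                   for j in range(len(times)) if times[j] >= t)
        ll += beta * x[i] - math.log(risk)
    return ll
```

At `beta = 0` each event term reduces to minus the log of its risk-set size, which gives a convenient sanity check; in practice `beta` is found by maximizing this function numerically.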