Advances in digital technology allow people to capture and share video. At present, rather than capturing still images, people are increasingly interested in recording video footage for exploring information. Retrieving a video from a large database is challenging because of its continuous stream of frames. To overcome these challenges, this research proposes a likelihood-based regression approach for video processing. To improve the retrieval accuracy of video sequences, the proposed method integrates a likelihood estimation technique with a regression model: the likelihood estimate provides a rough measure of the pixel level to estimate the pixel range, after which the regression step measures the pixel level to transform blurred and unwanted pixels. In the proposed likelihood regression approach, each video is converted into frames and stored in a database. Query frames are matched against the database using the features extracted from the video to be retrieved. Simulation results show that the proposed likelihood-based regression model achieves significant video retrieval performance and outperforms other state-of-the-art techniques.
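A minimal sketch of the frame-retrieval idea: each stored frame is reduced to a grey-level histogram, and a query frame is matched by a likelihood-style score (histogram intersection). This is a simplified stand-in for the paper's likelihood-plus-regression pipeline, not its actual algorithm; all function names are hypothetical.

```python
def histogram(frame, bins=8, max_val=256):
    """Normalized grey-level histogram of a frame (flat list of pixel values)."""
    counts = [0] * bins
    width = max_val // bins
    for px in frame:
        counts[min(px // width, bins - 1)] += 1
    total = len(frame)
    return [c / total for c in counts]

def likelihood(h_query, h_frame):
    """Histogram intersection: 1.0 means identical pixel distributions."""
    return sum(min(a, b) for a, b in zip(h_query, h_frame))

def retrieve(query, database):
    """Return the index of the stored frame most similar to the query frame."""
    h_q = histogram(query)
    scores = [likelihood(h_q, histogram(f)) for f in database]
    return max(range(len(scores)), key=scores.__getitem__)

# tiny synthetic "database" of three frames
db = [[10] * 100, [200] * 100, [10] * 50 + [200] * 50]
print(retrieve([12] * 100, db))  # frame 0 is the closest match
```

A real system would extract frames with a video library and use richer features, but the retrieval loop has this shape.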
Human liver disorder is a genetic problem caused by habitual alcohol consumption or by viral infection. It can lead to liver failure or liver cancer if not detected at an early stage. The aim of the proposed method is to detect liver disorder at an early stage using liver function test datasets. A problem with many real-world datasets, including liver disease diagnosis data, is class imbalance: the number of observations belonging to one class is much larger or smaller than that of the other class(es). The traditional K-Nearest Neighbor (KNN) and Fuzzy KNN classifiers do not work well on imbalanced datasets because they treat all neighbors equally. Weighted variants of Fuzzy KNN address this by assigning a larger weight to neighbors from the minority class and a relatively smaller weight to neighbors from the majority class. In this paper, the Variable-Neighbor Weighted Fuzzy K-Nearest Neighbor approach (Variable-NWFKNN), an improved variant of Fuzzy-NWKNN, is proposed. The proposed Variable-NWFKNN method is evaluated on three real-world imbalanced liver function test datasets: BUPA and ILPD from UCI, and MPRLPD. Compared with the existing NWKNN and Fuzzy-NWKNN methods, Variable-NWFKNN achieved accuracies of 73.91% (BUPA), 77.59% (ILPD), and 87.01% (MPRLPD). Further, the TL_RUS method was used for preprocessing, which improved the accuracies to 78.46% (BUPA), 78.46% (ILPD), and 95.79% (MPRLPD).
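The class-dependent weighting can be sketched as follows: neighbors from the minority class receive a larger vote weight, in the spirit of NWKNN-style classifiers. This illustrates the weighting idea only; the paper's Variable-NWFKNN additionally uses fuzzy memberships and a variable neighbor weighting, which are omitted here.

```python
from collections import Counter

def weighted_knn_predict(train, labels, query, k=3):
    """KNN vote where each neighbor's weight is (smallest class size / its class size)."""
    sizes = Counter(labels)
    smallest = min(sizes.values())
    # squared Euclidean distance to every training point, paired with its label
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(x, query)), y)
        for x, y in zip(train, labels)
    )
    votes = Counter()
    for _, y in dists[:k]:
        votes[y] += smallest / sizes[y]  # minority neighbors count for more
    return votes.most_common(1)[0][0]

# imbalanced toy data: four points of class 0, one of class 1
train = [[0, 0], [1, 0], [0, 1], [1, 1], [5, 5]]
labels = [0, 0, 0, 0, 1]
print(weighted_knn_predict(train, labels, [4, 4], k=3))
```

With plain majority voting the query [4, 4] would be outvoted by majority-class neighbors; the weighting lets the single nearby minority-class point win.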
Image completion plays a vital role in compressed sensing, machine learning, and computer vision applications. Rank minimization algorithms are commonly used to perform image completion, but they suffer from loss of information in the recovered image at high corruption ratios. To overcome this problem, Lifting wavelet transform based Rank Minimization (LwRM) and Discrete wavelet transform based Rank Minimization (DwRM) methods are proposed, which can recover an image even when more than 80% of the observations are corrupted. The proposed methods are evaluated with Full-Reference Image Quality Assessment (FR-IQA) and No-Reference Image Quality Assessment (NR-IQA) metrics. Simulation results show that the proposed methods are superior to state-of-the-art methods.
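A toy sketch of completion by rank minimization: missing entries are filled by repeatedly projecting onto the best rank-1 approximation, found with power iteration. The paper's LwRM/DwRM methods additionally apply a wavelet transform before rank minimization and handle general low rank; both refinements are omitted here, and the function name is hypothetical.

```python
def rank1_complete(M, mask, iters=200):
    """M: 2-D list; mask[i][j] is True where the entry is observed.
    Alternates a power-iteration rank-1 fit with re-imposing observed entries."""
    m, n = len(M), len(M[0])
    # start with observed values, zeros in the missing positions
    X = [[M[i][j] if mask[i][j] else 0.0 for j in range(n)] for i in range(m)]
    v = [1.0] * n
    for _ in range(iters):
        # one power-iteration step for the dominant singular pair of X
        u = [sum(X[i][j] * v[j] for j in range(n)) for i in range(m)]
        nu = sum(x * x for x in u) ** 0.5 or 1.0
        u = [x / nu for x in u]
        v = [sum(X[i][j] * u[i] for i in range(m)) for j in range(n)]
        s = sum(x * x for x in v) ** 0.5 or 1.0
        v = [x / s for x in v]
        # rank-1 reconstruction, keeping the observed entries fixed
        for i in range(m):
            for j in range(n):
                if not mask[i][j]:
                    X[i][j] = s * u[i] * v[j]
    return X

# rank-1 matrix [[1,2,3],[2,4,6]] with the entry (1,2) hidden
M = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]]
mask = [[True, True, True], [True, True, False]]
X = rank1_complete(M, mask)
```

On this exactly rank-1 example the hidden entry converges to its true value 6; real images need a higher target rank and many more observations.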
A digital image watermarking technique based on LSB substitution and the Hill cipher is presented and examined in this paper. For better imperceptibility, the watermark is inserted in the spatial domain, and it is embedded in the cover-image block having the highest entropy value. To improve the security of the watermark, Hill cipher encryption is used. Both subjective and objective image quality assessment techniques have been used to evaluate the imperceptibility of the proposed scheme. Further, the perceptual quality of the watermarked images achieved by the proposed framework has been compared with some state-of-the-art watermarking strategies. Test results demonstrate that the presented method is robust against various image processing attacks such as salt-and-pepper noise, Gaussian filtering, and median filtering.
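The core LSB-substitution step can be sketched in a few lines: each watermark bit replaces the least significant bit of a cover pixel, changing the pixel value by at most 1. The paper additionally selects the highest-entropy cover block and encrypts the watermark with a Hill cipher before embedding; both steps are omitted in this sketch.

```python
def embed_lsb(cover, bits):
    """Replace the LSB of the first len(bits) pixels with the watermark bits."""
    stego = list(cover)
    for i, bit in enumerate(bits):
        stego[i] = (stego[i] & ~1) | bit  # clear LSB, then set it to the bit
    return stego

def extract_lsb(stego, n_bits):
    """Recover the first n_bits watermark bits from the stego pixels."""
    return [p & 1 for p in stego[:n_bits]]

cover = [120, 121, 130, 131, 200, 201]   # illustrative 8-bit pixel values
wm = [1, 0, 1, 1]                        # illustrative watermark bits
stego = embed_lsb(cover, wm)
print(extract_lsb(stego, 4))  # [1, 0, 1, 1]
```

Because only the lowest bit changes, the stego image is visually indistinguishable from the cover, which is why LSB embedding is favored for imperceptibility.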
Over the last few years, there has been rapid growth in digital data. Images with quotes spread virally through online social media platforms, and misquotes found online often spread like wildfire, highlighting the lack of responsibility of web users who circulate poorly cited quotes. It is therefore important to authenticate the content of images being circulated online, and to retrieve the text within such images so that quotes can be verified before use, distinguishing fake or misattributed quotes from authentic ones. Optical Character Recognition (OCR) is used in this paper to convert textual images into machine-readable text, but no OCR tool extracts information from images perfectly. This paper proposes a post-processing method applied to the retrieved text to improve the accuracy of the text detected in images. Google Cloud Vision has been used to recognize text from images, and post-processing the extracted text improved text recognition accuracy by approximately 3.5%. A web-based text similarity approach (URLs and domain names) has been used to examine the authenticity of the content of the quoted images, achieving approximately 96.26% accuracy in classifying quoted images as verified or misquoted. A ground-truth dataset of authentic site names has also been created. In this research, images with quotes by famous celebrities and global leaders have been used, and a comparative analysis has been performed to show the effectiveness of the proposed algorithm.
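A hedged sketch of the verification step: OCR output is normalized and then compared against known authentic quotes with a string-similarity ratio. The paper uses Google Cloud Vision for OCR and a web-based (URL/domain) similarity check against a ground-truth site list; here `difflib` stands in for the similarity measure, and the threshold is illustrative.

```python
import difflib
import string

def normalize(text):
    """Lowercase, strip punctuation, and collapse whitespace in OCR output."""
    text = text.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(text.split())

def best_match(ocr_text, known_quotes, threshold=0.8):
    """Return (quote, score) for the closest authentic quote, or None if no
    known quote is similar enough to the OCR'd text."""
    query = normalize(ocr_text)
    scored = [
        (q, difflib.SequenceMatcher(None, query, normalize(q)).ratio())
        for q in known_quotes
    ]
    quote, score = max(scored, key=lambda pair: pair[1])
    return (quote, score) if score >= threshold else None

quotes = ["Be the change that you wish to see in the world."]
noisy_ocr = "Be the chang3 that you w1sh to see in the world"
match = best_match(noisy_ocr, quotes)
```

The fuzzy ratio absorbs typical OCR character errors (e.g. `e`→`3`, `i`→`1`), which is the role post-processing plays before the authenticity check.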
Biometric applications are highly sensitive to the processing pipeline because of the complexity of handling unstructured input. Existing image processing applications are built from separate program segments such as image acquisition, segmentation, feature extraction, and final output. The proposed model is designed with 2 convolution layers and 3 dense layers. The module was examined on 5 datasets: 3 benchmark datasets (CASIA, UBIRIS, and MMU), a random dataset, and live video, and the FPR, FNR, precision, recall, and accuracy were calculated for each dataset. The accuracy of the proposed system is 82.8% on CASIA, 86% on UBIRIS, 84% on MMU, and 84% on the random dataset; on low-resolution live video, the accuracy is 72.4%. The proposed system achieved better accuracy than existing state-of-the-art systems.
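The evaluation metrics named in the abstract (FPR, FNR, precision, recall, accuracy) all follow from a binary confusion matrix; the sketch below computes them from raw counts. The counts in the example are illustrative only, not the paper's actual figures.

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    return {
        "FPR": fp / (fp + tn),                      # false positive rate
        "FNR": fn / (fn + tp),                      # false negative rate
        "Precision": tp / (tp + fp),
        "Recall": tp / (tp + fn),
        "Accuracy": (tp + tn) / (tp + fp + tn + fn),
    }

# illustrative counts for a hypothetical 200-sample evaluation
m = binary_metrics(tp=80, fp=10, tn=90, fn=20)
print(m["Accuracy"])
```

Note that on imbalanced biometric datasets, accuracy alone can be misleading, which is why the paper also reports FPR and FNR.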