991.
The purpose of this paper is to introduce an effective and structured methodology for carrying out a biometric system sensitivity analysis. The goal of sensitivity analysis is to provide the researcher/developer with insight and understanding of the key factors—algorithmic, subject-based, procedural, image quality, environmental, among others—that affect the matching performance of the biometric system under study. The proposed methodology consists of two steps: (1) the design and execution of orthogonal fractional factorial experiment designs, which allow the scientist to efficiently investigate the effect of a large number of factors—and interactions—simultaneously, and (2) the use of a select set of statistical graphical analysis procedures which are fine-tuned to unambiguously highlight important factors, important interactions, and locally optimal settings. We illustrate this methodology by applying it to a study of VASIR (Video-based Automated System for Iris Recognition), a NIST iris-based biometric system. In particular, we investigated k = 8 algorithmic factors from the VASIR system by constructing a (2^(6-1) × 3^1 × 4^1) orthogonal fractional factorial design, generating the corresponding performance data, and applying an appropriate set of analysis graphics to determine the relative importance of the eight factors, the relative importance of the 28 two-term interactions, and the locally best settings of the eight factors. The results showed that VASIR's performance was primarily driven by six of the eight factors, along with four two-term interactions. A virtue of our two-step methodology is that it is systematic and general, and hence may be applied with equal rigor and effectiveness to other biometric systems, such as fingerprint, face, voice, and DNA.
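As a concrete illustration of step (1), the two-level half-fraction underlying such a design can be sketched in a few lines. This is our own minimal sketch, not code from the VASIR study; the choice of defining relation I = ABCDEF is an assumption made for illustration.

```python
# Sketch: a 2^(6-1) orthogonal fractional factorial design.
# Half-fraction of a 2^6 full factorial: 32 runs instead of 64.
# The sixth factor F is aliased with the five-way interaction
# F = A*B*C*D*E (hypothetical defining relation I = ABCDEF).
from itertools import product

def fractional_factorial_2_6_1():
    runs = []
    for levels in product((-1, 1), repeat=5):
        # Derive the sixth column from the product of the first five.
        f = levels[0] * levels[1] * levels[2] * levels[3] * levels[4]
        runs.append(levels + (f,))
    return runs

design = fractional_factorial_2_6_1()
```

Each of the six main-effect columns is balanced and pairwise orthogonal, which is what makes the subsequent effect estimates unambiguous.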
992.
Aligning shapes is essential in many computer vision problems, and generalized Procrustes analysis (GPA) is one of the most popular algorithms for aligning shapes. However, if some of the shape data are missing, GPA cannot be applied. In this paper, we propose EM-GPA, which extends GPA to handle shapes with hidden (missing) variables by using the expectation-maximization (EM) algorithm. For example, 2D shapes can be considered as 3D shapes with missing depth information due to the projection of 3D shapes onto the image plane. For a set of 2D shapes, EM-GPA finds scales, rotations and 3D shapes along with their mean and covariance matrix for 3D shape modeling. A distinctive characteristic of EM-GPA is that it does not enforce the rank constraint often imposed in other work; instead, it uses GPA constraints to resolve the ambiguity in finding scales, rotations, and 3D shapes. The experimental results show that EM-GPA can recover depth information accurately even when the noise level is high and there are a large number of missing variables. Using images from the FRGC database, we show that EM-GPA can successfully align 2D shapes by taking the missing information into consideration. We also demonstrate that the 3D mean shape and its covariance matrix are accurately estimated. As an application of EM-GPA, we construct a 2D + 3D AAM (active appearance model) using the 3D shapes obtained by EM-GPA, and it gives a similar success rate in model fitting compared to the method using real 3D shapes. EM-GPA is not limited to the case of missing depth information, but can be easily extended to more general cases.
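For readers unfamiliar with the complete-data building block that EM-GPA generalizes, ordinary Procrustes alignment of one 2D shape onto another can be written compactly with complex arithmetic, where a single complex factor encodes both scale and rotation. This is an illustrative sketch, not the authors' EM-GPA code; the example shapes are made up.

```python
# Minimal ordinary (two-shape) Procrustes alignment in 2D.
# Points are complex numbers: multiplication by a complex scalar
# applies scale + rotation simultaneously.
def procrustes_align(src, dst):
    """Least-squares complex scale-rotation a and translation b
    such that a * src + b best matches dst."""
    n = len(src)
    mu_s = sum(src) / n
    mu_d = sum(dst) / n
    zs = [p - mu_s for p in src]          # centered source shape
    zd = [p - mu_d for p in dst]          # centered target shape
    num = sum(d * s.conjugate() for s, d in zip(zs, zd))
    den = sum(abs(s) ** 2 for s in zs)
    a = num / den                         # complex scale-rotation
    b = mu_d - a * mu_s                   # translation
    return a, b

# Example: dst is src rotated 90 degrees (multiplied by 1j) and shifted.
src = [0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j]
dst = [p * 1j + (2 + 3j) for p in src]
a, b = procrustes_align(src, dst)
```

EM-GPA's E-step and M-step iterate essentially this kind of closed-form alignment while filling in the missing (depth) coordinates.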
993.
Topology control can enhance energy efficiency and prolong network lifetime for wireless sensor networks. Several studies that attempted to solve the topology control problem focused only on topology construction or maintenance. This work designs a novel distributed and reliable energy-efficient topology control (RETC) algorithm for both topology construction and maintenance in real application environments, where many intermittent links and accidents may result in packet loss. A reliable topology can ensure connectivity and energy efficiency, prolonging network lifetime. Thus, in the topology construction phase, a reliable topology is generated to increase the network's reachable probability. In the topology maintenance phase, this work applies a novel dynamic topology maintenance scheme that balances energy consumption using a multi-level energy threshold. This maintenance scheme triggers the topology construction algorithm to build a new network topology with high reachable probability when needed. Experimental results demonstrate the superiority of the RETC algorithm in terms of average energy consumption and network lifetime.
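The multi-level energy-threshold trigger can be sketched as follows. The three levels (75%, 50%, 25% of initial energy) are hypothetical values chosen for illustration, not parameters from the paper.

```python
# Sketch of a multi-level energy-threshold maintenance trigger:
# reconstruction fires each time a node's residual energy crosses
# the next threshold level (levels here are illustrative).
def crossed_level(initial, residual, levels=(0.75, 0.5, 0.25)):
    """Index of the deepest threshold crossed, or -1 if above all."""
    frac = residual / initial
    level = -1
    for i, t in enumerate(levels):
        if frac < t:
            level = i
    return level

def needs_rebuild(prev_level, initial, residual):
    """Return (trigger?, new level): trigger topology construction
    only when a node drops past a threshold it had not yet crossed."""
    cur = crossed_level(initial, residual)
    return cur > prev_level, cur
```

Spacing out reconstruction over several thresholds, rather than rebuilding on every energy change, is what lets the scheme balance energy consumption against reconstruction overhead.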
994.
In order to remove physiological artefacts and obtain improved evoked potentials, we propose a filtering method using the multi-resolution wavelet transform. The wavelet transform is performed repeatedly until all resolution levels are obtained. It decomposes the measured evoked potentials into scaling coefficients, corresponding to low-frequency components, and wavelet coefficients, corresponding to high-frequency components. In the wavelet domain, artefacts are dispersed mainly in the wavelet coefficients rather than the scaling coefficients. Thus, before the inverse wavelet transform is performed, the method shrinks the wavelet coefficients with shrinkage functions to reduce artefacts. By repeatedly performing the inverse wavelet transform, an evoked potential with reduced artefacts and background noise is obtained. In this study, quantitative evaluations with both simulation data and actual clinical data were conducted. As a result, characteristic peaks of the evoked potential could be recovered by removing background EEG and artefacts with the suggested shrinkage function, an improvement of 0.2–1.6 dB over the conventional averaging method. A DSP-based system for measuring and analyzing evoked potentials was also implemented.
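A toy version of the decompose–shrink–reconstruct loop can clarify the idea. Here we use a one-level Haar transform and soft thresholding in place of the paper's multi-resolution filter bank; the signal values and threshold are made up.

```python
# Toy wavelet-shrinkage denoising: one-level Haar decomposition,
# soft-threshold the detail (wavelet) coefficients, then invert.
import math

def haar_forward(x):
    """Split x (even length) into approximation and detail coefficients."""
    approx = [(a + b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(x[0::2], x[1::2])]
    return approx, detail

def soft_threshold(coeffs, t):
    """Shrink each coefficient toward zero by t (soft thresholding)."""
    return [math.copysign(max(abs(c) - t, 0.0), c) for c in coeffs]

def haar_inverse(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.append((a + d) / math.sqrt(2))
        out.append((a - d) / math.sqrt(2))
    return out

signal = [1.0, 1.2, 0.9, 1.1, 5.0, 1.0, 1.1, 0.9]  # spike = artefact
a, d = haar_forward(signal)
denoised = haar_inverse(a, soft_threshold(d, 1.0))
```

Without thresholding the transform reconstructs the signal exactly; with it, the sharp artefact-like spike is attenuated while the smooth background is largely preserved.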
995.
Digital forensics in the ubiquitous era can enhance and protect the reliability of multimedia content, where this content is accessed, manipulated, and distributed using high-quality computer devices. Color laser printer forensics is a kind of digital forensics that identifies the printing source of color-printed materials such as fine arts, money, and documents, and helps to catch criminals. This paper presents a new color laser printer forensic algorithm, based on noisy texture analysis and a support vector machine classifier, that can detect which color laser printer was used to print an unknown image. Since each printer vendor uses its own printing process, documents printed by different vendors contain small, invisible, noise-like differences. In our identification scheme, this invisible noise is estimated with a Wiener filter and a 2D discrete wavelet transform (DWT) filter. Then, a gray-level co-occurrence matrix (GLCM) is calculated to analyze the texture of the noise. From the GLCM, 384 statistical features are extracted and used to train and test the support vector machine classifier for identifying the color laser printer. In the experiment, a total of 4,800 images from 8 color laser printer models were used, where half of the images were for training and the other half for classification. Results show that the presented algorithm performs well, achieving 99.3%, 97.4%, and 88.7% accuracy for brand, toner, and model identification, respectively.
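One building block of the scheme, the gray-level co-occurrence matrix, is easy to sketch for a single 0-degree, distance-1 offset. The tiny image and the single contrast feature below are illustrative only; the paper extracts 384 statistical features over multiple offsets.

```python
# Sketch: gray-level co-occurrence matrix (GLCM) for horizontally
# adjacent pixel pairs, plus one derived texture feature (contrast).
def glcm(image, levels):
    """Count occurrences of each (left, right) gray-level pair."""
    m = [[0] * levels for _ in range(levels)]
    for row in image:
        for a, b in zip(row, row[1:]):
            m[a][b] += 1
    return m

def contrast(m):
    """Sum of (i-j)^2 weighted pair counts: high for noisy texture."""
    return sum((i - j) ** 2 * m[i][j]
               for i in range(len(m)) for j in range(len(m)))

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
g = glcm(img, 4)
```

Feature vectors of this kind, computed on the estimated noise residual rather than on the image itself, are what the SVM classifier is trained on.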
996.
This study proposes an intelligent algorithm with a tri-state architecture for real-time car body extraction and color classification. The algorithm is capable of managing the difficulties of both viewpoint and light reflection. Because the influence of light reflection differs significantly for bright, dark, and colored cars, three different strategies are designed for the respective color categories to acquire a more intact car body. A SARM (Separating and Re-Merging) algorithm is proposed to separate the car body from the background and recover the entire car body more completely. A robust selection algorithm is also applied to determine the correct color category and car body. The color type of the vehicle is then decided using only the pixels in the extracted car body. The experimental results show that the tri-state method can extract almost 90% of car body pixels from a car image. Over 98% of car images are assigned to the correct category, and the average accuracy of the 10-color-type classification is higher than 93%. Furthermore, the computational load of the proposed method is light, making it applicable to real-time systems.
998.
We consider the semi-online parallel machine scheduling problem of minimizing the makespan given a priori information: the total processing time, the largest processing time, the combination of the previous two, or the optimal makespan. We propose a new algorithm that can be applied to the problem with known total or largest processing time and prove that it achieves improved competitive ratios for the cases with a small number of machines. Improved lower bounds on the competitive ratio are also provided by presenting adversary lower-bound examples.
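For context, the classical fully online baseline that semi-online algorithms are measured against is Graham's list scheduling, which greedily places each arriving job on the currently least-loaded machine. The job sizes in the example are made up; this sketches the baseline, not the paper's algorithm.

```python
# Graham's list scheduling: assign each job, in arrival order,
# to the machine with the smallest current load.
def list_schedule(jobs, m):
    loads = [0.0] * m          # current load of each machine
    assignment = []            # machine index chosen for each job
    for p in jobs:
        i = min(range(m), key=loads.__getitem__)
        loads[i] += p
        assignment.append(i)
    return loads, assignment

loads, assignment = list_schedule([3, 2, 2, 1, 1], m=2)
```

Knowing the total or largest processing time in advance (the semi-online setting) lets an algorithm reserve capacity and beat the 2 - 1/m competitive ratio of this greedy rule.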
999.
The 3GPP Long Term Evolution (LTE) Advanced and IEEE 802.16j specifications adopt the mobile multi-hop relaying (MMR) mechanism to enlarge the service area and improve wireless transmission quality simultaneously. By deploying different types of Relay Stations (RSs), MMR brings several advantages: (1) the signal fading and wireless interference of a single long wireless link are noticeably reduced; (2) the ranges of the wireless access and relay areas are extended; etc. MMR can offer high data-rate transmission for packet services and can increase system capacity. Note that MMR can be applied to public transportation systems, e.g., a mobile RS mounted on a high-speed train. A mobile RS handoff initiates multiple handoff requests for different traffic types, which becomes a critical handoff issue in 4G MMR. Thus, MMR handoff needs a new efficient Connection Admission Control (CAC) scheme to guarantee quality for the various traffic types and to increase system revenue, objectives that traditional CACs have difficulty fulfilling. This paper therefore proposes the Dynamic Cost-Reward-based (DCR) CAC, which consists of two key mechanisms: (1) a Markov decision process-based (MDP) cost function, and (2) different reward functions for different types of nodes and connections. Additionally, an analytical Markov chain model is developed for DCR. The simulation results are very close to the analysis results, which justifies the correctness of the analytical model. Numerical results demonstrate that DCR outperforms the compared CACs in the probabilities of new-connection blocking, MS-handoff and RS-handoff dropping, FRL, GoS, and system reward.
1000.
As social media services such as Twitter and Facebook gain popularity, the amount of information published through them is growing explosively. Most of these services use feeds to facilitate distribution of the huge volume of content they publish. In this context, many users subscribe to feeds to acquire up-to-date information through feed aggregation services, and recent real-time search engines also increasingly utilize feeds to promptly find new web content as it is produced. Accordingly, such services need to fetch feeds effectively, minimizing fetching delay while at the same time maximizing the number of fetched entries. Fetching delay is the time lag between entry publication and retrieval, incurred primarily by the finiteness of fetching resources. In this paper, we consider a polling-based approach among the methods applicable to fetching feeds, which is based on a specific schedule for visiting feeds. While existing polling-based approaches have focused on allocating fetching resources to feeds so as to either reduce the fetching delay or increase the number of fetched entries, we propose a resource allocation policy that optimizes both objectives. Extensive experiments have been carried out to evaluate the proposed model in comparison with the existing alternative methods.
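An illustrative sketch of the polling-based setting (not the paper's policy): allocate a fixed budget of fetches per cycle across feeds in proportion to each feed's observed posting rate, so fast feeds are visited more often and fetching delay stays low. The rates and budget below are made up.

```python
# Sketch: proportional allocation of a per-cycle fetch budget,
# with largest-remainder rounding and a minimum of one visit per feed.
def allocate_fetches(post_rates, budget):
    n = len(post_rates)
    assert budget >= n, "need at least one fetch per feed"
    total = sum(post_rates)
    # Ideal (fractional) share of the remaining budget for each feed.
    raw = [(budget - n) * r / total for r in post_rates]
    alloc = [1 + int(x) for x in raw]          # floor + guaranteed visit
    remainder = budget - sum(alloc)
    # Hand leftover fetches to the feeds with the largest fractions.
    order = sorted(range(n), key=lambda i: raw[i] - int(raw[i]),
                   reverse=True)
    for i in order[:remainder]:
        alloc[i] += 1
    return alloc

fetches = allocate_fetches([10.0, 5.0, 1.0], budget=16)
```

A policy of this shape trades off the two objectives in the abstract: visiting prolific feeds often reduces delay, while the guaranteed minimum visit keeps slow feeds from being starved of fetched entries.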