991.
Band selection plays an important role in identifying the most useful and valuable information contained in hyperspectral images for further data analysis such as classification and clustering. The memetic algorithm (MA), among other metaheuristic search methods, has been shown to achieve competitive performance in solving the NP-hard band selection problem. In this paper, we propose a formal probabilistic memetic algorithm for band selection that adaptively controls the degree of global exploration against local exploitation as the search progresses. To verify the effectiveness of the proposed probabilistic mechanism, we present empirical studies on five well-known hyperspectral images, comparing against two recently proposed state-of-the-art MAs for band selection.
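The abstract does not spell out the algorithm, so the following is only a minimal Python sketch of a memetic loop for band selection under stated assumptions: a band subset is encoded as a set of band indices, `fitness` is a user-supplied scoring callable (hypothetical), and the exploration/exploitation probability `p_local` is fixed here, whereas the paper controls it adaptively and probabilistically during the search.

```python
import random

def memetic_band_selection(n_bands, fitness, pop_size=20, generations=50,
                           p_local=0.5, k_selected=30):
    """Minimal memetic sketch: each individual is a set of selected band indices.

    `fitness` scores a band subset (higher is better). `p_local` is the
    probability of local exploitation instead of global exploration; the paper
    adapts this probability, here it is kept constant for simplicity.
    """
    def random_individual():
        return set(random.sample(range(n_bands), k_selected))

    def mutate(ind):  # global exploration: swap one selected band for an unused one
        new = set(ind)
        new.remove(random.choice(list(new)))
        new.add(random.choice([b for b in range(n_bands) if b not in new]))
        return new

    def local_search(ind):  # local exploitation: a few greedy single-band swaps
        best, best_fit = ind, fitness(ind)
        for _ in range(5):
            cand = mutate(best)
            f = fitness(cand)
            if f > best_fit:
                best, best_fit = cand, f
        return best

    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [local_search(p) if random.random() < p_local else mutate(p)
                    for p in survivors]
        pop = survivors + children
    return max(pop, key=fitness)
```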
992.
With the development of modern image processing techniques, the number of images transmitted over networks is growing rapidly. As a form of visual communication, images are widely used in network transmission; however, image information may be lost during transmission. Motivated by this, we aim to restore such images effectively and efficiently in order to save network bandwidth. At present there are two main families of digital image restoration methods: texture-based and non-texture-based. Among texture-based methods, the Criminisi algorithm is widely used, but its inaccurate completion order and its inefficiency in searching for matching patches are two main limitations. To overcome these shortcomings, this paper proposes an exemplar-based image completion method driven by an evolutionary algorithm. Among non-texture-based methods, total variation is a typical approach; we also propose an improved total variation algorithm in which the diffusion coefficients are defined according to the distance and direction between a damaged pixel and each of its neighbouring pixels. Experimental results show that the proposed algorithms achieve better overall performance in image completion, and that they can improve the network browsing experience while reducing communication cost.
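The paper's exact diffusion coefficients are not reproduced in the abstract; the sketch below only illustrates the general neighbourhood-diffusion idea, under the assumption that each weight is the inverse distance to the neighbour (the proposed method additionally makes the coefficients direction-dependent).

```python
import numpy as np

def diffusion_inpaint(image, mask, iterations=200):
    """Generic diffusion-style inpainting sketch (not the paper's exact scheme).

    `image` is a 2-D float array and `mask` is True where pixels are damaged.
    Each damaged pixel is iteratively replaced by a weighted average of its
    8-neighbourhood; the weight here is the inverse Euclidean distance to the
    neighbour (1 for axial, 1/sqrt(2) for diagonal), a simple stand-in for
    distance- and direction-dependent diffusion coefficients.
    """
    u = image.astype(float).copy()
    offsets = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]
    weights = [1.0 / np.hypot(dy, dx) for dy, dx in offsets]
    ys, xs = np.nonzero(mask)
    h, w = u.shape
    for _ in range(iterations):
        for y, x in zip(ys, xs):
            num, den = 0.0, 0.0
            for (dy, dx), wgt in zip(offsets, weights):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    num += wgt * u[ny, nx]
                    den += wgt
            u[y, x] = num / den
    return u
```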
993.
Over the past decade process mining has emerged as a new analytical discipline able to answer a variety of questions based on event data. Event logs have a very particular structure; events have timestamps, refer to activities and resources, and need to be correlated to form process instances. Process mining results tend to be very different from classical data mining results, e.g., process discovery may yield end-to-end process models capturing different perspectives rather than decision trees or frequent patterns. A process mining tool like ProM provides hundreds of different process mining techniques ranging from discovery and conformance checking to filtering and prediction. Typically, a combination of techniques is needed and, for every step, there are different techniques that may be very sensitive to parameter settings. Moreover, event logs may be huge and may need to be decomposed and distributed for analysis. These aspects make it very cumbersome to analyze event logs manually. Process mining should be repeatable and automated. Therefore, we propose a framework to support the analysis of process mining workflows. Existing scientific workflow systems and data mining tools are not tailored towards process mining and the artifacts used for analysis (process models and event logs). This paper structures the basic building blocks needed for process mining and describes various analysis scenarios. Based on these requirements we implemented RapidProM, a tool supporting scientific workflows for process mining. Examples illustrating the different scenarios are provided to show the feasibility of the approach.
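RapidProM itself runs inside RapidMiner and is not shown here; purely as an illustration of the "repeatable, automated workflow" idea of chaining discovery and conformance checking in a script, the sketch below uses the open-source pm4py library instead (assuming its simplified top-level API; the log file name is a placeholder).

```python
import pm4py

# Scripted chain of process mining steps: load log, discover model, check conformance.
# This is an illustrative pm4py script, not the RapidProM workflow described in the paper.
log = pm4py.read_xes("example-log.xes")  # placeholder path to an XES event log

# Discovery step: mine a Petri net with the inductive miner.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)

# Conformance-checking step: replay the log on the discovered model.
fitness = pm4py.fitness_token_based_replay(log, net, initial_marking, final_marking)
print(fitness)
```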
994.
A Nom historical document recognition system is being developed for digital archiving; it uses image binarization, character segmentation, and character recognition. It incorporates two versions of off-line character recognition: one for automatic recognition of scanned and segmented character patterns (7,660 categories) and the other for user-handwritten input (32,695 categories). This separation is used because including less frequently appearing categories in automatic recognition increases the misrecognition rate when reliable statistics on the Nom language are unavailable. Moreover, a user must be able to check the results, identify the correct categories from an extended set of categories, and input characters by hand. Both versions use the same recognition method but are trained on different sets of training patterns. Recursive XY cut and Voronoi diagrams are used for segmentation; a k-d tree and generalized learning vector quantization are used for coarse classification; and the modified quadratic discriminant function is used for fine classification. The system provides an interface through which a user can check the results, change binarization methods, rectify segmentation, and input correct character categories by hand. An evaluation on a limited number of Nom historical documents, after ground truths were provided for them, showed that the two stages of recognition, together with user checking and correction, improved the recognition results significantly.
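For reference, the modified quadratic discriminant function (MQDF) used for fine classification is commonly written in the following form, where \(\boldsymbol{\mu}_c\) is the class mean, \((\lambda_{c,i}, \boldsymbol{\varphi}_{c,i})\) are the \(k\) principal eigenvalue/eigenvector pairs of the class covariance matrix, \(d\) is the feature dimension, and the minor eigenvalues are replaced by a constant \(\delta\); the paper's exact parameterization may differ.

\[
g_c(\mathbf{x}) = \sum_{i=1}^{k} \frac{\bigl[(\mathbf{x}-\boldsymbol{\mu}_c)^{\top}\boldsymbol{\varphi}_{c,i}\bigr]^2}{\lambda_{c,i}}
+ \frac{1}{\delta}\Bigl( \lVert \mathbf{x}-\boldsymbol{\mu}_c \rVert^2 - \sum_{i=1}^{k}\bigl[(\mathbf{x}-\boldsymbol{\mu}_c)^{\top}\boldsymbol{\varphi}_{c,i}\bigr]^2 \Bigr)
+ \sum_{i=1}^{k}\ln \lambda_{c,i} + (d-k)\ln \delta
\]

A pattern \(\mathbf{x}\) is assigned to the category \(c\) that minimizes \(g_c(\mathbf{x})\).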
995.
In this paper, we address applications of textual image compression in which a high compression ratio and maintaining or improving the visual quality and readability of the compressed images are the main concerns. In textual images, most of the information lies in the edge regions; therefore, the compression problem can be studied in the framework of region-of-interest (ROI) coding. We use the Set Partitioning in Hierarchical Trees (SPIHT) coder in an ROI-coding framework, together with some image enhancement techniques, to remove the leakage effect that occurs in wavelet-based low-bit-rate compression. We evaluated the compression performance of the proposed method with respect to qualitative and quantitative measures. The qualitative measures include averaged mean opinion score (MOS) curves along with example outputs under different conditions. The quantitative measures include two proposed modified PSNR measures and the conventional one. Comparing the results of the proposed method with those of three conventional approaches, DjVu, JPEG2000, and SPIHT coding, showed that the proposed method considerably outperformed the others, especially in the qualitative evaluation. The proposed method improved the MOS by 20% and 30% on average for high- and low-contrast textual images, respectively. In terms of the modified and conventional PSNR measures, the proposed method outperformed DjVu and JPEG2000 by up to 0.4 dB for high-contrast textual images at low bit rates. In addition, compressing high-contrast images with the proposed ROI technique, compared to without it, improved the average textual PSNR by up to 0.5 dB at low bit rates.
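The two modified PSNR measures are not defined in the abstract; as a hedged illustration of the general idea of restricting PSNR to the textual (edge) regions, one possible "textual PSNR" could be computed as follows, assuming 8-bit images and a binary text/edge mask.

```python
import numpy as np

def textual_psnr(original, compressed, text_mask, peak=255.0):
    """PSNR computed only over masked (textual/edge) pixels.

    Illustrative definition, not necessarily the paper's; `text_mask` is a
    boolean array marking the region of interest.
    """
    diff = original.astype(float) - compressed.astype(float)
    mse = np.mean(diff[text_mask] ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)
```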
996.
The non-symmetric similarity relation-based rough set model (NS-RSM) is a mathematical tool for analyzing imprecise and uncertain information in incomplete information systems containing "?" values. NS-RSM relies on a non-symmetric similarity relation to group equivalent objects and generate knowledge granules that are then used to approximate the target set. However, NS-RSM yields an unsatisfactory approximation space when addressing inconsistent data sets with many boundary objects, because objects in the same similarity class are not necessarily similar to each other and may belong to different target classes. To enhance NS-RSM, we introduce the maximal limited similarity-based rough set model (MLS-RSM), which describes the maximal collection of indistinguishable objects that are in limited tolerance to each other within similarity classes. This allows the approximation space to be computed accurately. Furthermore, approximation accuracy comparisons have been conducted between NS-RSM and MLS-RSM. The results demonstrate that MLS-RSM outperforms NS-RSM and approximates the target set more efficiently.
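The exact approximation operators of NS-RSM and MLS-RSM are not reproduced here; the Python sketch below only illustrates the underlying idea of similarity classes over an incomplete table (with "?" as the missing value) and of lower/upper approximations built from them. The relation shown and the toy table are illustrative assumptions, not the paper's definitions.

```python
UNKNOWN = "?"

def similar(x, y):
    """Similarity with missing values: x is similar to y if every known value of x agrees with y."""
    return all(xv == UNKNOWN or xv == yv for xv, yv in zip(x, y))

def similarity_class(i, objects):
    """Objects that object i is similar to (one possible reading; NS-RSM also uses the inverse)."""
    return {j for j, y in enumerate(objects) if similar(objects[i], y)}

def approximations(objects, target):
    """Lower/upper approximations of a target set of object indices."""
    lower, upper = set(), set()
    for i in range(len(objects)):
        cls = similarity_class(i, objects)
        if cls <= target:
            lower.add(i)
        if cls & target:
            upper.add(i)
    return lower, upper

# Toy incomplete information table: rows are objects, columns are attribute values.
objects = [("a", "?"), ("a", "b"), ("c", "b")]
print(approximations(objects, target={0, 1}))
```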
997.
998.
When a realistic model of radioactive contaminant transport in flowing groundwater is required, very large systems of coupled partial and ordinary differential equations can arise that have to be solved numerically. For that purpose, the software package r3t has been developed, in which several advanced numerical methods are implemented to solve such models efficiently and accurately. Using the software tools of r3t, one can successfully treat nontrivial mathematical problems such as advection-dominated systems with a different retardation of transport for each component and with nonlinear Freundlich sorption and/or precipitation. Additionally, long-time simulations on complex 3D geological domains using unstructured grids can be carried out. In this paper we introduce and summarize the most important and novel features of the numerical simulation of radioactive contaminant transport in porous media with r3t.
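As context (not taken from the paper, and ignoring the decay-chain coupling between components that r3t also handles), a single-component transport equation with nonlinear Freundlich sorption typically reads:

\[
\frac{\partial}{\partial t}\bigl(\theta c + \rho_b s\bigr)
+ \nabla\cdot\bigl(\mathbf{q}\,c - \mathbf{D}\,\nabla c\bigr)
= -\lambda\bigl(\theta c + \rho_b s\bigr) + Q,
\qquad
s = K_F\,c^{\,p},
\]

where \(c\) is the dissolved concentration, \(s\) the sorbed concentration given by the Freundlich isotherm, \(\theta\) the water content, \(\rho_b\) the bulk density, \(\mathbf{q}\) the Darcy flux, \(\mathbf{D}\) the dispersion tensor, \(\lambda\) the decay constant, and \(Q\) a source term; the nonlinear isotherm makes the effective retardation concentration-dependent and component-specific.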
999.
Evolution-in-materio uses evolutionary algorithms to exploit the properties of materials to solve computational problems without requiring a detailed understanding of those properties. We show that, using a purpose-built hardware platform called Mecobo, it is possible to solve computational problems by evolving the voltages and signals applied to an electrode array covered with a carbon nanotube-polymer composite. We demonstrate for the first time that this methodology can be applied to function optimization and also to the tone discriminator problem (TDP). For function optimization, we evaluate the approach on a suite of optimization benchmarks and obtain results that in some cases come very close to the global optimum or are comparable with those obtained using a well-known software-based evolutionary approach. We also obtain good results in comparison with prior work on the tone discriminator problem. For the TDP we also investigated the relative merits of different mixtures of materials and different organizations of the electrode array.
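As a hedged sketch of the evolution-in-materio search loop described above, the genome simply encodes which voltages are applied to which electrodes. The Mecobo API is not shown; `apply_configuration_and_measure` is a hypothetical stand-in for the hardware call, replaced by a dummy function so the sketch runs without hardware, and the electrode count and voltage levels are illustrative.

```python
import random

N_ELECTRODES = 12                      # assumed array size (illustrative)
VOLTAGE_LEVELS = [-3.0, -1.5, 0.0, 1.5, 3.0]

def apply_configuration_and_measure(config):
    """Hypothetical stand-in for the hardware call: apply the voltages in
    `config` to the electrode array and return a measured error (lower is better).
    Here it is a dummy so the sketch is runnable without the Mecobo platform."""
    return sum(abs(v) for v in config) + random.random()

def evolve(pop_size=20, generations=100, mutation_rate=0.2):
    pop = [[random.choice(VOLTAGE_LEVELS) for _ in range(N_ELECTRODES)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=apply_configuration_and_measure)  # lower error first
        parents = scored[: pop_size // 2]
        children = []
        for p in parents:
            child = list(p)
            for i in range(N_ELECTRODES):
                if random.random() < mutation_rate:
                    child[i] = random.choice(VOLTAGE_LEVELS)
            children.append(child)
        pop = parents + children
    return min(pop, key=apply_configuration_and_measure)

best_configuration = evolve()
```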
1000.
In the context of information retrieval (IR) from text documents, the term weighting scheme (TWS) is a key component of the matching mechanism when using the vector space model. In this paper, we propose a new TWS that is based on computing the average term occurrences of terms in documents; it also uses a discriminative approach based on the document centroid vector to remove less significant weights from the documents. We call our approach Term Frequency With Average Term Occurrence (TF-ATO). An analysis of commonly used document collections shows that test collections are not fully judged, since achieving that is expensive and may be infeasible for large collections. A document collection is fully judged when every document in the collection has been assessed for relevance against a specific query or group of queries. The discriminative approach used in our proposal is a heuristic for improving IR effectiveness and performance, and it has the advantage of not requiring previous knowledge about relevance judgements. We compare the performance of the proposed TF-ATO with the well-known TF-IDF approach and show that using TF-ATO results in better effectiveness on both static and dynamic document collections. In addition, this paper investigates the impact that stop-word removal and our discriminative approach have on TF-IDF and TF-ATO. The results show that both stop-word removal and the discriminative approach have a positive effect on both term-weighting schemes. More importantly, it is shown that using the proposed discriminative approach is beneficial for improving IR effectiveness and performance with no information on the relevance judgements for the collection.
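The exact TF-ATO formula is not given in the abstract; under the description above (weights based on average term occurrence per document, then pruned against the document centroid), one plausible reading is sketched below. The normalization and the thresholding rule are assumptions made for illustration, not necessarily the paper's definitions.

```python
from collections import Counter

def tf_ato_weights(documents):
    """Sketch of TF-ATO-style weighting for a list of tokenized documents.

    Each term weight is its frequency divided by the document's average term
    occurrence; weights not exceeding the corresponding centroid component are
    then dropped (the discriminative pruning step). Illustrative reading only.
    """
    vocab = sorted({t for doc in documents for t in doc})
    weighted = []
    for doc in documents:
        counts = Counter(doc)
        avg_occurrence = sum(counts.values()) / len(counts)  # avg occurrences per distinct term
        weighted.append({t: counts[t] / avg_occurrence for t in counts})

    # Document centroid: mean weight of each term across all documents.
    centroid = {t: sum(w.get(t, 0.0) for w in weighted) / len(weighted) for t in vocab}

    # Discriminative step: remove weights that do not exceed the centroid component.
    return [{t: w for t, w in doc_w.items() if w > centroid[t]} for doc_w in weighted]

docs = [["network", "image", "image"], ["network", "retrieval", "term", "term"]]
print(tf_ato_weights(docs))
```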