Full-Text Access Type
Paid full text | 20886 articles |
Free | 2552 articles |
Free (domestic) | 1599 articles |
Subject Classification
Electrical engineering | 734 articles |
Theory of technology | 1 article |
General | 1273 articles |
Chemical industry | 658 articles |
Metalworking | 542 articles |
Machinery and instrumentation | 1997 articles |
Architecture and construction | 1756 articles |
Mining engineering | 299 articles |
Energy and power | 225 articles |
Light industry | 331 articles |
Water conservancy engineering | 230 articles |
Petroleum and natural gas | 337 articles |
Weapons industry | 149 articles |
Radio and electronics | 3959 articles |
General industrial technology | 1570 articles |
Metallurgy | 337 articles |
Atomic energy technology | 248 articles |
Automation technology | 10391 articles |
Publication Year
2024 | 78 articles |
2023 | 331 articles |
2022 | 496 articles |
2021 | 714 articles |
2020 | 693 articles |
2019 | 479 articles |
2018 | 504 articles |
2017 | 672 articles |
2016 | 836 articles |
2015 | 875 articles |
2014 | 1415 articles |
2013 | 1190 articles |
2012 | 1399 articles |
2011 | 1609 articles |
2010 | 1270 articles |
2009 | 1356 articles |
2008 | 1255 articles |
2007 | 1455 articles |
2006 | 1345 articles |
2005 | 1205 articles |
2004 | 1006 articles |
2003 | 968 articles |
2002 | 781 articles |
2001 | 536 articles |
2000 | 453 articles |
1999 | 406 articles |
1998 | 352 articles |
1997 | 277 articles |
1996 | 200 articles |
1995 | 153 articles |
1994 | 117 articles |
1993 | 110 articles |
1992 | 70 articles |
1991 | 58 articles |
1990 | 48 articles |
1989 | 43 articles |
1988 | 41 articles |
1987 | 22 articles |
1986 | 27 articles |
1985 | 38 articles |
1984 | 26 articles |
1983 | 35 articles |
1982 | 17 articles |
1981 | 25 articles |
1980 | 16 articles |
1979 | 10 articles |
1978 | 9 articles |
1974 | 2 articles |
1973 | 5 articles |
1959 | 2 articles |
Sort order: 10000 results found, search time 453 ms
11.
In this paper, a new inverse identification method for constitutive parameters is developed from full-field kinematic and thermal measurements. It consists in reconstructing the heat source field in two different ways using the heat diffusion equation. The first approach requires the measured temperature field and the values of the thermophysical parameters. The second is based on the measured kinematic field and the choice of a thermo-hyperelastic model containing the parameters to be identified. The identification is carried out at the local scale, i.e., at any point of the heat source field, without using the boundary conditions. In the present work, the method is applied to the challenging case of hyperelasticity using a heterogeneous test. Because of the large deformations undergone by the rubber specimen tested, a motion compensation technique is developed to map the kinematic and thermal fields onto the same material points before reconstructing the heterogeneous heat source field. In the present case, the constitutive parameter of the neo-Hookean model is identified, and its distribution is characterized with respect to the strain state at the surface of a cross-shaped specimen.
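The first reconstruction approach above can be sketched numerically: given a temperature-field movie and known thermophysical parameters, the heat source follows from the heat diffusion equation, s = ρc ∂T/∂t − k ∇²T, evaluated by finite differences. The material constants, grid, and synthetic field below are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Illustrative material constants (assumed, not from the paper)
rho_c = 2.0e6   # volumetric heat capacity rho*c  [J/(m^3 K)]
k = 0.2         # thermal conductivity            [W/(m K)]

def heat_source(T, dt, dx):
    """Reconstruct s = rho*c*dT/dt - k*laplacian(T) from a temperature
    movie T[t, y, x] via central finite differences."""
    dTdt = (T[2:] - T[:-2]) / (2 * dt)              # central difference in time
    lap = (T[1:-1, 2:, 1:-1] + T[1:-1, :-2, 1:-1]
         + T[1:-1, 1:-1, 2:] + T[1:-1, 1:-1, :-2]
         - 4 * T[1:-1, 1:-1, 1:-1]) / dx**2         # 5-point Laplacian in space
    return rho_c * dTdt[:, 1:-1, 1:-1] - k * lap

# Synthetic check: T = a*t + b*(x^2 + y^2) gives s = rho_c*a - 4*k*b exactly,
# because finite differences are exact for linear/quadratic fields.
dt, dx, a, b = 0.1, 1e-3, 0.5, 3.0
t = np.arange(5)[:, None, None] * dt
y = np.arange(8)[None, :, None] * dx
x = np.arange(8)[None, None, :] * dx
T = a * t + b * (x**2 + y**2)
s = heat_source(T, dt, dx)
```

The interior-only slicing mirrors the paper's point: the source is recovered locally, without boundary conditions.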
12.
Steganography is the science of hiding a secret message in a digital multimedia carrier such that the existence of the embedded message is invisible to anyone other than the sender or the intended recipient. This paper presents an irreversible scheme for hiding a secret image in a cover image that improves both the visual quality and the security of the stego-image while still providing a large embedding capacity. This is achieved by a hybrid steganography scheme that combines the Noise Visibility Function (NVF) with an optimal chaos-based encryption scheme. In the embedding process, to reduce image distortion and increase embedding capacity, the payload of each region of the cover image is first determined dynamically according to the NVF, which analyzes local image properties to identify the complex areas where more secret bits should be embedded; this maintains high visual quality of the stego-image as well as a large embedding capacity. Second, the security of the secret image is provided by an optimal chaos-based encryption scheme that transforms the secret image into an encrypted image. Third, the optimal encryption is obtained through a hybrid optimization combining Particle Swarm Optimization (PSO) and a Genetic Algorithm (GA), which finds an optimal secret key. The optimal secret key encrypts the secret image so that the rate of changes caused by the embedding process is decreased, which in turn increases the quality of the stego-image. In the extraction process, the secret image can be extracted from the stego-image losslessly, without reference to the original cover image. The experimental results confirm that the proposed scheme not only achieves a good trade-off between payload and stego-image quality, but also resists statistical and image-processing attacks.
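The chaos-based encryption step can be sketched with a logistic-map keystream XORed against the secret image. The logistic map and the fixed key `(x0, r)` are assumptions for illustration; the paper's PSO/GA search for an optimal key is not reproduced here.

```python
import numpy as np

def logistic_keystream(x0, r, n):
    """Generate n pseudo-random bytes from the logistic map x <- r*x*(1-x).
    (x0, r) play the role of the secret key; the PSO/GA key optimization
    described in the paper is not reproduced in this sketch."""
    x, out = x0, np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def encrypt(secret, x0=0.3141, r=3.99):
    """XOR the secret image with the chaotic keystream. XOR is involutive,
    so calling encrypt() again with the same key decrypts."""
    ks = logistic_keystream(x0, r, secret.size).reshape(secret.shape)
    return secret ^ ks

secret = np.arange(64, dtype=np.uint8).reshape(8, 8)
cipher = encrypt(secret)
restored = encrypt(cipher)  # XOR twice with the same keystream
```

Because decryption needs only the key, this matches the abstract's claim that extraction does not require the original cover image.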
13.
This paper thoroughly reviews and analyzes the current state of television technology development in China and abroad, and analyzes and introduces the image-related technical parameters of ultra-high-definition television systems.
14.
In the field of imaging, super-resolution (SR) reconstruction is a technique that converts one or more low-resolution (LR) images into a high-resolution (HR) image. The two classical types of SR methods are based on a single image or on multiple images captured by a single camera. A microarray camera is small, captures multiple views, and can be integrated into portable devices; it has therefore become a research hotspot in image processing. In this paper, we propose SR reconstruction of images based on a microarray camera, with sharpening and registration of the array images. The array images are first interpolated to obtain an initial HR image, which is then enhanced by a convolutional neural network (CNN). The convolution layers of our network are 3×3 or 1×1 layers, the 1×1 layers being used specifically to improve network performance. A bottleneck structure reduces the number of parameters in the nonlinear mapping and improves the nonlinear capability of the whole network. Finally, we use a 3×3 deconvolution layer, which significantly reduces the number of parameters compared with the deconvolution layer of FSRCNN-s. Experiments show that the proposed method not only effectively improves the texture quality of the target image based on the array image information, but also further enhances the quality of the initial high-resolution image through the improved CNN.
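The parameter saving of the bottleneck structure is easy to quantify: squeezing the channel count with 1×1 layers before the 3×3 mapping and expanding afterwards replaces one wide 3×3 layer with three much cheaper ones. The channel widths below are assumed for illustration, not taken from the paper's architecture.

```python
def conv_params(c_in, c_out, k):
    """Weight count of a k x k convolution layer (biases omitted)."""
    return c_in * c_out * k * k

# Hypothetical channel widths for illustration
c = 64        # feature channels entering the mapping stage
c_mid = 12    # bottleneck width set by the 1x1 "squeeze" layer

plain = conv_params(c, c, 3)                     # single wide 3x3 layer
bottleneck = (conv_params(c, c_mid, 1)           # 1x1 squeeze
            + conv_params(c_mid, c_mid, 3)       # cheap 3x3 mapping
            + conv_params(c_mid, c, 1))          # 1x1 expand
```

With these widths the bottleneck needs 2832 weights against 36864 for the plain layer, an order-of-magnitude reduction, while the two extra nonlinearities add modeling capacity.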
15.
We present a new image reconstruction method for Electrical Capacitance Tomography (ECT). ECT image reconstruction is generally ill-posed because the number of measurements is small whereas the image dimensions are large. Here, we present a sparsity-inspired approach that achieves better ECT image reconstruction from a small number of measurements. Our approach is based on Total Variation (TV) regularization, and we apply an efficient Split-Bregman Iteration (SBI) scheme to solve the resulting problem. We also propose three metrics to evaluate reconstruction performance: a joint metric of positive reconstruction rate (PRR) and false reconstruction rate (FRR), a correlation coefficient, and a shape-and-location metric. Results on both synthetic and real data show that the proposed TV-SBI method better preserves image edges and better resolves distinct objects within reconstructed images than a representative state-of-the-art ECT reconstruction algorithm, Projected Landweber Iteration with Linear Back Projection initialization (LBP-PLI).
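The PRR/FRR pair can be sketched as simple overlap ratios between a binarized reconstruction and the true object mask. The exact definitions below are an assumption (true-object pixels recovered, and reconstructed pixels falling outside the object); the paper's formulas may differ.

```python
import numpy as np

def prr_frr(recon, truth):
    """Positive/false reconstruction rates for binary ECT images.
    Assumed definitions: PRR = fraction of true object pixels recovered,
    FRR = fraction of reconstructed pixels lying outside the object."""
    recon, truth = recon.astype(bool), truth.astype(bool)
    prr = (recon & truth).sum() / truth.sum()
    frr = (recon & ~truth).sum() / recon.sum()
    return prr, frr

truth = np.zeros((6, 6), dtype=int); truth[1:4, 1:4] = 1   # 9-pixel object
recon = np.zeros((6, 6), dtype=int); recon[2:5, 2:5] = 1   # shifted estimate
p, f = prr_frr(recon, truth)
```

Reporting the two rates jointly penalizes both missed object pixels and spurious artifacts, which neither rate captures alone.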
16.
In this paper, we propose a novel change detection method for synthetic aperture radar images based on unsupervised artificial immune systems. After generating the difference image from the multitemporal images, we take each pixel as an antigen and build an immune model to process the antigens. By continuously stimulating the immune model, the antigens are classified into two groups, changed and unchanged. First, the proposed method incorporates local information in order to restrain the impact of speckle noise. Second, it simulates the immune response process in a fuzzy way to obtain an accurate result while retaining more image detail: we introduce a fuzzy membership for each antigen and update the antibodies and memory cells according to that membership. Compared with the clustering algorithms proposed in our previous work, the new method inherits immunological properties from immune systems and is robust to speckle noise thanks to the use of local information and the fuzzy strategy. Experiments on real synthetic aperture radar images show that the proposed method performs well on several kinds of difference images and yields more robust results than the other methods compared.
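The role of the fuzzy membership can be illustrated with a plain two-class fuzzy c-means partition of a difference image, used here only as a stand-in for the paper's immune-response simulation: each pixel keeps a soft membership to the changed/unchanged classes rather than a hard label.

```python
import numpy as np

def fuzzy_memberships(diff, m=2.0, iters=20):
    """Two-class fuzzy partition of a difference image (unchanged/changed).
    A standard fuzzy c-means update, used as a simplified stand-in for the
    antibody/memory-cell updates in the paper's immune model."""
    x = diff.ravel().astype(float)
    c = np.array([x.min(), x.max()])            # unchanged / changed centers
    for _ in range(iters):
        d = np.abs(x[:, None] - c[None, :]) + 1e-12
        w = d ** (-2.0 / (m - 1.0))             # FCM membership weights
        u = w / w.sum(axis=1, keepdims=True)    # normalize to sum to 1
        c = (u ** m).T @ x / (u ** m).sum(axis=0)   # update class centers
    return u.reshape(*diff.shape, 2)

diff = np.array([[0, 1, 0],
                 [9, 10, 9]])
u = fuzzy_memberships(diff)
```

Soft memberships let ambiguous pixels (e.g. near region borders) influence both classes, which is how detail is retained before the final hard decision.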
17.
Clip-art image segmentation is widely used as an essential step in solving many vision problems such as colorization and vectorization. Many of these applications not only demand accurate segmentation results but also have little tolerance for time cost, which is the main challenge of this kind of segmentation. However, most existing segmentation techniques prove insufficient for this purpose, owing to either high computation cost or low accuracy. To address these issues, we propose a novel segmentation approach, ECISER, which is well suited to this context. The basic idea of ECISER is to exploit the particular nature of cartoon images and connect image segmentation with aliased rasterization. Based on this relationship, a clip-art image can be quickly segmented into regions by re-rasterization of the original image together with several other computationally efficient techniques developed in this paper. Experimental results show that our method achieves dramatic computational speedups over the current state-of-the-art approaches while preserving almost the same quality of results.
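The core observation, that aliased cartoon regions are flat-colored and can be grouped cheaply, can be sketched as a connected-component flood fill over uniform-color pixels. This is a simplified stand-in for ECISER's rasterization-based region extraction, not the actual algorithm.

```python
from collections import deque
import numpy as np

def segment_regions(img):
    """Label 4-connected regions of uniform colour via BFS flood fill,
    a simplified stand-in for ECISER's region extraction on clip-art."""
    h, w = img.shape
    labels = -np.ones((h, w), dtype=int)
    n = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] >= 0:
                continue                       # pixel already labelled
            q = deque([(sy, sx)])
            labels[sy, sx] = n
            while q:                           # BFS flood fill of one region
                y, x = q.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny, nx] < 0
                            and img[ny, nx] == img[y, x]):
                        labels[ny, nx] = n
                        q.append((ny, nx))
            n += 1
    return labels, n

img = np.array([[0, 0, 1],
                [0, 1, 1],
                [2, 2, 2]])
labels, n_regions = segment_regions(img)
```

On flat-colored clip-art this runs in linear time in the pixel count, which is why rasterization-style grouping is so much cheaper than general-purpose segmentation.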
18.
This paper summarizes the basics of pulsed thermal nondestructive testing (TNDT), including theoretical solutions, data processing algorithms, and practical implementation. Typical defects are discussed along with 1D analytical and multi-dimensional numerical solutions. Special emphasis is placed on defect characterization by means of inverse solutions. A list of TNDT terms is provided. Applications of active TNDT, mainly in the aerospace industry, are discussed briefly, and some trends in the further development of this technique are described.
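A representative 1D analytical solution of the kind the paper discusses is the front-surface response of a semi-infinite body to a Dirac heat pulse, where the temperature rise decays as t^(−1/2). The numerical values below are illustrative assumptions.

```python
import math

def surface_temperature_rise(Q, e, t):
    """Classical 1D solution for a Dirac heat pulse on a semi-infinite body:
    front-surface temperature rise T(t) = Q / (e * sqrt(pi * t)).
    Q: absorbed energy density [J/m^2]
    e: thermal effusivity [W s^0.5 / (m^2 K)]
    t: time after the pulse [s]"""
    return Q / (e * math.sqrt(math.pi * t))

# Illustrative values: the t^(-1/2) decay means T(t) halves when t quadruples,
# and a defect shows up as a deviation from this ideal slope.
T1 = surface_temperature_rise(1000.0, 500.0, 1.0)
T4 = surface_temperature_rise(1000.0, 500.0, 4.0)
```

In log-log coordinates the defect-free response is a straight line of slope −1/2; departures from that line are the basis of many TNDT defect-detection criteria.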
19.
Simulating the psychological experience of human vision, a road extraction model based on Gestalt theory is proposed to extract roads from high-resolution remote sensing images from a morphological perspective. First, based on spectral and texture information, candidate road targets are extracted using segmentation techniques. These targets are then classified according to their reliability, and road targets are extracted for each category. Finally, the three types of identified road information are verified and merged to obtain continuous, smooth road extraction results. Experiments on real high-resolution images show that the results are consistent with human visual perception and that the overall classification accuracy is high, indicating that the algorithm is effective, feasible, and of good practical value.
20.
Haibo Zhang, Guohua Geng, Kang Li, Cheng Liu, Yuqing Hou. Journal of Modern Optics, 2018, 65(20): 2278-2289
Cone-beam X-ray luminescence computed tomography (CB-XLCT) is an attractive hybrid imaging modality with the potential to monitor the metabolic processes of nanophosphor-based drugs in vivo. However, XLCT imaging suffers from a severely ill-posed reconstruction problem. In this work, a sparse non-convex Lp (0 < p < 1) regularization is utilized for efficient reconstruction aimed at early detection of small tumours in CB-XLCT imaging. Specifically, we transform the non-convex optimization problem into an iteratively reweighted scheme based on L1 regularization. Further, an iteratively reweighted split augmented Lagrangian shrinkage algorithm (IRW_SALSA-Lp) is proposed to efficiently solve the non-convex Lp model. We study eight different non-convex p-values (1/16, 1/8, 1/4, 3/8, 1/2, 5/8, 3/4, 7/8) in both 3D digital mouse experiments and in vivo experiments. The results demonstrate that the proposed non-convex methods outperform L2 and L1 regularization in accurately recovering sparse targets in CB-XLCT, and among all the non-convex p-values, the Lp (1/4 ≤ p ≤ 1/2) methods give the best performance.
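The iteratively reweighted idea behind the non-convex Lp penalty can be sketched with a plain IRLS loop: each step solves a weighted ridge problem whose diagonal weights locally approximate |x|^p. This is a simplified stand-in for the paper's IRW_SALSA-Lp solver, and the toy problem (A = identity) only illustrates how the non-convex penalty suppresses small spurious entries while leaving large ones nearly unbiased; all numerical values are assumptions.

```python
import numpy as np

def irls_lp(A, b, p=0.5, lam=1e-3, iters=50, eps=1e-6):
    """Minimize ||A x - b||^2 + lam * sum |x_i|^p via iteratively
    reweighted least squares: each iteration solves a weighted ridge
    system whose weights come from the previous iterate."""
    x = A.T @ b                                    # warm start
    for _ in range(iters):
        # diagonal weights approximating the nonconvex |x|^p term
        W = np.diag(p * (np.abs(x) + eps) ** (p - 2.0))
        x = np.linalg.solve(A.T @ A + lam * W, A.T @ b)
    return x

# Toy denoising problem: large entries are true signal, the small
# nonzero entries play the role of noise/artifacts to be suppressed.
b = np.array([2.0, 0.01, -1.5, -0.008, 1.0, 0.005])
x_hat = irls_lp(np.eye(6), b)
```

The weights blow up as an entry approaches zero, so small components are driven to (numerical) zero while large ones are shrunk only negligibly, which mirrors why Lp with p < 1 recovers sparse targets more sharply than L1.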