51.
Without non-linear basis functions, many problems cannot be solved by linear algorithms. This article proposes a method to construct such basis functions automatically with slow feature analysis (SFA). Non-linear optimization of this unsupervised learning method generates an orthogonal basis on the unknown latent space for a given time series. In contrast to methods such as PCA, SFA is thus well suited for techniques that make direct use of the latent space. Real-world time series can be complex, and current SFA algorithms are either not powerful enough or tend to over-fit. We use the kernel trick in combination with sparsification to develop a kernelized SFA algorithm that provides a powerful function class for large data sets. Sparsity is achieved by a novel matching-pursuit approach that can be applied to other tasks as well. For small data sets, however, the kernel SFA approach leads to over-fitting and numerical instabilities. To enforce a stable solution, we introduce regularization into the SFA objective. We hypothesize that our algorithm generates a feature space that resembles a Fourier basis in the unknown space of latent variables underlying a given real-world time series. We evaluate this hypothesis on a vowel classification task in comparison with sparse kernel PCA. Our results show excellent classification accuracy and demonstrate the superiority of kernel SFA over kernel PCA in encoding latent variables.
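The core SFA idea can be illustrated in its plain linear form: whiten the signal, then find the projection whose time derivative has minimal variance. This is a minimal sketch with synthetic data; the kernelization, sparsification, and regularization described in the abstract are omitted.

```python
import numpy as np

def linear_sfa(x, n_features=1):
    """Minimal linear slow feature analysis: find unit-variance projections
    of the whitened input whose temporal derivative has smallest variance."""
    x = x - x.mean(axis=0)                      # center
    d, u = np.linalg.eigh(np.cov(x, rowvar=False))
    w = u / np.sqrt(d)                          # whitening matrix
    z = x @ w                                   # whitened, decorrelated signal
    dz = np.diff(z, axis=0)                     # finite-difference derivative
    dd, du = np.linalg.eigh(np.cov(dz, rowvar=False))
    return z @ du[:, :n_features]               # slowest features first

# a slowly varying latent mixed into two channels with a fast distractor
t = np.linspace(0, 4 * np.pi, 2000)
slow, fast = np.sin(t), np.sin(37 * t)
x = np.column_stack([slow + 0.5 * fast, slow - 0.5 * fast])
y = linear_sfa(x, n_features=1).ravel()
corr = abs(np.corrcoef(y, slow)[0, 1])          # should be close to 1
```

The slowest extracted feature recovers the slow latent up to sign, which is exactly the property that makes SFA attractive for direct use of the latent space.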
52.
This paper presents a multi-view acquisition system using multi-modal sensors, composed of time-of-flight (ToF) range sensors and color cameras. Our system captures multiple pairs of color images and depth maps from multiple viewing directions. To ensure acceptable measurement accuracy, we compensate for errors in sensor measurement and calibrate the multi-modal devices. Through extensive experiments and analysis, we identify the major sources of systematic error in sensor measurement and construct an error model for compensation. As a result, we provide a practical solution for the real-time error compensation of depth measurement. Moreover, we implement a calibration scheme for the multi-modal devices, unifying the spatial coordinate system of the multi-modal sensors. The main contribution of this work is a thorough analysis of systematic error in sensor measurement, which provides a reliable methodology for robust error compensation. The proposed system offers real-time multi-modal sensor calibration and is therefore applicable to the 3D reconstruction of dynamic scenes.
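The idea of a distance-dependent systematic error model can be sketched with synthetic calibration data: record raw ToF depths against known reference distances, fit a smooth correction, and apply it at runtime. The sinusoidal "wiggling" term and the polynomial correction below are illustrative assumptions, not the paper's actual error model.

```python
import numpy as np

# Hypothetical calibration data: true distances vs. raw ToF readings with a
# smooth, distance-dependent systematic error plus a constant offset.
true_depth = np.linspace(0.5, 5.0, 50)                    # metres
raw_depth = true_depth + 0.03 * np.sin(1.5 * true_depth) + 0.01

# Fit a low-order polynomial correction raw -> (true - raw), then apply it.
coeffs = np.polyfit(raw_depth, true_depth - raw_depth, deg=5)
correct = lambda d: d + np.polyval(coeffs, d)

before = np.max(np.abs(raw_depth - true_depth))           # uncorrected error
residual = np.max(np.abs(correct(raw_depth) - true_depth))  # after compensation
```

Because the correction is just a polynomial evaluation per pixel, it is cheap enough for the real-time compensation the abstract describes.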
53.
54.
Mass-spring and particle systems have been widely employed in computer graphics to model deformable objects because they allow fast numerical solutions. In this work, we establish a link between these discrete models and classical mathematical elasticity. It turns out that discrete systems can be derived from a continuum model by a finite-difference formulation, and that they approximate classical continuum models well unless the deformations are large. We present the derivation of a particle system from a continuum model, compare it with the models of classical elasticity theory, and assess its accuracy. In this way we gain insight into how discrete systems work and can specify the correct scaling when the discretization is changed. Physical material parameters that describe materials in continuum mechanics are also used in the derived particle system.
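The continuum-to-discrete link and the scaling issue can be seen in the simplest 1D case: discretizing an elastic rod into a spring chain with continuum-derived stiffness k = EA/h reproduces the classical elongation ΔL = FL/(EA) independently of the resolution. This toy static solve is a sketch of the principle, not the paper's derivation.

```python
import numpy as np

# Rod: Young's modulus E, cross-section A, length L, end load F, n segments.
E, A, L, F, n = 2.0e9, 1.0e-4, 1.0, 500.0, 8
h = L / n
k = E * A / h              # spring stiffness derived from the continuum model

# Static equilibrium of the fixed-free chain: tridiagonal system K u = f.
K = np.zeros((n, n))
for i in range(n):
    K[i, i] = 2 * k if i < n - 1 else k
    if i > 0:
        K[i, i - 1] = K[i - 1, i] = -k
f = np.zeros(n)
f[-1] = F                  # load applied at the free end
u = np.linalg.solve(K, f)

elongation = u[-1]
continuum = F * L / (E * A)   # classical linear elasticity result
```

Halving h doubles k, which is exactly the "correct scaling when the discretization is changed": with any other choice of k the discrete elongation would drift away from the continuum answer as the mesh is refined.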
55.
This paper describes a method for volume data compression and rendering based on wavelet splats. The underlying concept is designed especially for distributed and networked applications, where we assume a remote server maintains large-scale volume data sets that are inspected, browsed, and rendered interactively by a local client. We therefore encode the server's volume data using a newly designed wavelet-based volume compression method. A local client can render the volumes directly from the compression domain using wavelet footprints, a method proposed earlier. In addition, our setup features full progression: the rendered image is refined progressively as data arrives. Furthermore, frame-rate constraints are respected by controlling the image quality both locally and globally, depending on the current network bandwidth or the computational capabilities of the client. Importantly, the client does not need to provide storage for the volume data and can be implemented as a network application. The underlying framework makes it possible to exploit all the advantageous properties of the wavelet transform and forms a basis for both sophisticated lossy compression and rendering. Although it provides only simple illumination with constant exponential decay, the rendering method is especially suited for fast interactive inspection of large data sets and can easily be supported by graphics hardware.
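The lossy-compression side of such a scheme rests on a standard observation: for smooth data, most wavelet coefficients are negligible and can be dropped, and refinement can proceed coefficient by coefficient. A minimal 1D Haar sketch on a synthetic "scanline" (the paper uses a purpose-built 3D wavelet codec, not plain Haar):

```python
import numpy as np

def haar_1d(v):
    """Full orthonormal 1-D Haar decomposition (length must be a power of 2)."""
    v = v.astype(float).copy()
    n = len(v)
    while n > 1:
        half = n // 2
        a = (v[:n:2] + v[1:n:2]) / np.sqrt(2)   # pairwise averages
        d = (v[:n:2] - v[1:n:2]) / np.sqrt(2)   # pairwise details
        v[:half], v[half:n] = a, d
        n = half
    return v

def ihaar_1d(v):
    """Inverse of haar_1d."""
    v = v.copy()
    n = 1
    while n < len(v):
        a, d = v[:n].copy(), v[n:2 * n].copy()
        v[0:2 * n:2] = (a + d) / np.sqrt(2)
        v[1:2 * n:2] = (a - d) / np.sqrt(2)
        n *= 2
    return v

# smooth synthetic scanline: most detail coefficients are tiny
x = np.sin(np.linspace(0, np.pi, 256))
c = haar_1d(x)
c[np.abs(c) < 0.05] = 0.0                       # lossy: drop small coefficients
kept = np.count_nonzero(c)                      # what actually gets transmitted
err = np.max(np.abs(ihaar_1d(c) - x))           # reconstruction error
```

Transmitting the surviving coefficients in decreasing magnitude order gives the full progression described above: each arriving coefficient refines the client's reconstruction.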
56.
The main task of digital image processing is to recognize properties of real objects based on their digital images. These images are obtained by some sampling device, such as a CCD camera, and represented as finite sets of points that are assigned some value on a gray-level or color scale. Based on the technical properties of sampling devices, these points are usually assumed to form a square grid and are modeled as finite subsets of Z². A fundamental question in digital image processing is therefore which features in the digital image correspond, under certain conditions, to properties of the underlying objects. In practical applications this question is mostly answered by visually judging the obtained digital images. In this paper we present a comprehensive answer to this question with respect to topological properties. In particular, we derive conditions relating properties of real objects to the grid size of the sampling device which guarantee that a real object and its digital image are topologically equivalent. These conditions also imply that two digital images of a given object are topologically equivalent. This means, for example, that shifting or rotating an object or the camera cannot lead to topologically different images; i.e., topological properties of the obtained digital images are invariant under shifting and rotation.
57.
We propose a numerical simulation technique to model the diffusional creep and stress relaxation that occur in Cu-damascene interconnects of integrated-circuit devices during the processing stage. The mass-flow problem is coupled to the stress analysis through the vacancy flux and the equilibrium vacancy concentration. The technique is implemented in a software package that seamlessly integrates the problem-oriented code with the commercially available finite element program MSC.Marc. It is used to model the Coble creep phenomenon by introducing a nanoscale grain-boundary region whose thickness is on the order of several atomic layers. As an illustration, the two-dimensional problem of stress relaxation in a single grain subjected to prescribed displacements and tractions is examined.
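For context, the grain-boundary diffusional (Coble) creep that the nanoscale boundary region is intended to capture has, in its standard textbook form, the strain-rate scaling

```latex
\dot{\varepsilon}_{\mathrm{Coble}} \;\propto\; \frac{\sigma\,\Omega\,\delta\,D_{gb}}{k_{B}\,T\,d^{3}},
```

where \(\sigma\) is the applied stress, \(\Omega\) the atomic volume, \(\delta\) the grain-boundary width, \(D_{gb}\) the grain-boundary diffusivity, \(T\) the temperature, and \(d\) the grain size. This scaling is the classical result from creep theory, quoted here for orientation rather than taken from the paper; note in particular the \(d^{-3}\) dependence, which is why grain-boundary diffusion dominates at the small grain sizes typical of interconnects.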
58.
Performance limits of electron holography   (cited: 1; self-citations: 0; citations by others: 1)
Lichte H. Ultramicroscopy 2008, 108(3): 256-262
Transmission electron microscopy is wave optics. The object exit wave contains the full object information. However, in the usual intensity images, recorded either in real space or in Fourier space, the phases are missing. In many applications at medium and at high resolution, electron holography has shown its unique ability to solve the "missing phase problem" and to utilize the recovered phase for complete interpretation of the object structure. The question is "What are the performance limits?" with respect to field of view, lateral resolution, and signal resolution. In this article, these performance limits are derived and discussed.
59.
The prediction of formability is one of the most important tasks in sheet-metal-forming process simulation. The common criterion for ductile fracture in industrial applications is the Forming Limit Diagram (FLD), which is only applicable to linear strain paths. In most industrial simulation cases, however, non-linear strain paths occur. To resolve this problem, a phenomenological approach is introduced: the so-called Generalized Forming Limit Concept (GFLC). The GFLC enables the prediction of localized necking on arbitrary non-linear strain paths. Another possibility is the use of the Time Dependent Evaluation Method (TDEM) within the simulation as a failure criterion. For the Numisheet 2014 Benchmark 1, a two-stage forming process was performed with three typical sheet materials (AA5182, DP600 and TRIP780) and three different blank shapes. The task was to determine the point in time and space of local instability. To this end, the strain path at the point of maximum local thinning is evaluated. To predict the onset of local necking, the GFLC, the TDEM, and a modified TDEM were applied. The simulation results are compared with the results of the benchmark experiment.
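The baseline FLD check that the GFLC generalizes is simple: compare the major strain of each element against a forming limit curve FLC(minor strain). The V-shaped curve and the element strains below are illustrative assumptions, not measured data for AA5182/DP600/TRIP780, and a plain check like this is exactly what fails on non-linear strain paths.

```python
import numpy as np

def flc(eps_minor):
    """Illustrative forming limit curve: minimum FLC0 at plane strain
    (eps_minor = 0), rising on both the drawing and stretching sides."""
    flc0 = 0.25
    return flc0 + np.where(eps_minor < 0, -0.8 * eps_minor, 0.45 * eps_minor)

def necking_risk(eps_major, eps_minor):
    """Ratio of major strain to the forming limit; > 1 predicts necking."""
    return eps_major / flc(eps_minor)

# three hypothetical elements: clearly safe, marginal, and failed
major = np.array([0.10, 0.24, 0.40])
minor = np.array([-0.05, 0.00, 0.05])
risk = necking_risk(major, minor)
```

The GFLC replaces the single curve lookup with an accumulation rule along the actual (possibly non-linear) strain path, which is why it can handle the two-stage process of the benchmark.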
60.
Insight into service response time is important for service-oriented architectures and service management. However, directly measuring service response time is not always feasible and can be very costly. This paper extends an analytical modeling method that uses enterprise architecture modeling to support the analysis. The extensions consist of (i) a formalization using the Hybrid Probabilistic Relational Model formalism, (ii) an implementation in an analysis tool for enterprise architecture, and (iii) a data-collection approach using expert assessments gathered via interviews and questionnaires. The accuracy and cost-effectiveness of the method were tested empirically by comparing it with direct performance measurements of five services of a geographical information system at a Swedish utility company. The tests indicate that the proposed method can be a viable option for rapid service response-time estimates when a moderate accuracy of within 15% is sufficient.
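The expert-elicitation side of such a method can be sketched as follows: each expert provides optimistic, most-likely, and pessimistic response times, and a Monte Carlo draw over triangular distributions turns these into an estimate with an uncertainty band. The expert figures are hypothetical, and the paper's Hybrid Probabilistic Relational Model is far richer than this single-variable sketch.

```python
import random

random.seed(0)
# (optimistic, most likely, pessimistic) response times in seconds,
# one triple per interviewed expert -- invented numbers for illustration
experts = [(0.8, 1.2, 2.5), (0.9, 1.1, 2.0), (0.7, 1.4, 3.0)]

# Monte Carlo: pick an expert uniformly, then sample their triangular
# distribution; the pooled samples approximate the combined belief.
samples = sorted(
    random.triangular(lo, hi, mode)
    for _ in range(10_000)
    for lo, mode, hi in [random.choice(experts)]
)

median = samples[len(samples) // 2]        # point estimate
p90 = samples[int(0.9 * len(samples))]     # pessimistic bound
```

Reporting a median together with a high percentile is one simple way to express the "moderate accuracy" trade-off: the estimate is cheap to obtain but comes with an explicit uncertainty band instead of a measured value.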