101.
Modern MRI measurements deliver volumetric and time-varying blood-flow data of unprecedented quality. Visual analysis of these data can potentially lead to better diagnosis and risk assessment of various cardiovascular diseases. Recent advances have considerably improved the speed and quality of the imaging data. Nevertheless, the data remain compromised by noise and a lack of spatiotemporal resolution. Besides imaging data, numerical simulations are also employed. These are based on mathematical models of specific features of physical reality, but the models require realistic parameters and boundary conditions derived from measurements. We propose to use data assimilation to bring measured data and physically-based simulation together and to harness their mutual benefits. The accuracy and noise robustness of the coupled approach are validated using an analytic flow field. Furthermore, we present a comparative visualization that conveys the differences between conventional interpolation and our coupled approach.
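The abstract does not state which assimilation scheme couples the measurements with the simulation, so the following is only a generic illustration: a noisy measured velocity field is blended with a simulated prediction using a scalar Kalman-style gain. The grid size, noise levels, and function names are assumptions made for this sketch.

```python
import numpy as np

def assimilate(simulated, measured, sigma_sim, sigma_meas):
    """Blend a simulated velocity field with a noisy measurement.

    A scalar Kalman-style gain weights the measurement by the relative
    confidence in simulation vs. measurement (illustrative only; the
    paper's actual assimilation scheme may differ).
    """
    gain = sigma_sim**2 / (sigma_sim**2 + sigma_meas**2)
    return simulated + gain * (measured - simulated)

# Hypothetical 3D velocity field on a 16^3 grid (x, y, z components).
rng = np.random.default_rng(0)
truth = rng.normal(size=(16, 16, 16, 3))
simulated = truth + rng.normal(scale=0.05, size=truth.shape)  # model error
measured = truth + rng.normal(scale=0.20, size=truth.shape)   # MRI noise
fused = assimilate(simulated, measured, sigma_sim=0.05, sigma_meas=0.20)
print(np.abs(fused - truth).mean(), np.abs(measured - truth).mean())
```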
102.
Recent years have seen increasing attention and significant progress in many-light rendering, a class of methods for the efficient computation of global illumination. The many-light formulation offers a unified mathematical framework for the problem, reducing the full light transport simulation to the calculation of direct illumination from many virtual light sources. These methods are unrivaled in their scalability: they can produce plausible images in a fraction of a second and still converge to the full solution over time. In this state-of-the-art report, we give an easy-to-follow, introductory tutorial on many-light theory; provide a comprehensive, unified survey of the topic with a comparison of the main algorithms; discuss limitations regarding materials and light transport phenomena; and present a vision to motivate and guide future research. We cover the fundamental concepts as well as improvements, extensions, and applications of many-light rendering.
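As a rough illustration of the many-light formulation (not any specific algorithm from the survey), the sketch below gathers direct illumination at one shading point from a set of virtual point lights. The Lambertian BRDF, the clamped geometry term, and the omission of visibility/shadow rays are simplifying assumptions.

```python
import numpy as np

def gather_vpls(x, n, albedo, vpl_pos, vpl_normal, vpl_flux, clamp=10.0):
    """Direct illumination at point x (normal n) from virtual point lights.

    Illustrative many-light gather: Lambertian BRDF, clamped geometry term,
    and no visibility test (a real renderer would trace shadow rays).
    """
    d = vpl_pos - x                                  # vectors to each VPL
    r2 = np.sum(d * d, axis=1)
    w = d / np.sqrt(r2)[:, None]                     # normalized directions
    cos_x = np.maximum(np.dot(w, n), 0.0)            # cosine at the receiver
    cos_y = np.maximum(-np.sum(w * vpl_normal, axis=1), 0.0)  # cosine at the VPL
    g = np.minimum(cos_x * cos_y / r2, clamp)        # clamped geometry term
    return (albedo / np.pi) * np.sum(vpl_flux * g[:, None], axis=0)

# Hypothetical scene data: 100 VPLs with positions, normals and RGB flux.
rng = np.random.default_rng(1)
vpl_pos = rng.uniform(-1, 1, size=(100, 3))
vpl_normal = rng.normal(size=(100, 3))
vpl_normal /= np.linalg.norm(vpl_normal, axis=1, keepdims=True)
vpl_flux = rng.uniform(0, 0.1, size=(100, 3))
radiance = gather_vpls(np.zeros(3), np.array([0.0, 0.0, 1.0]),
                       np.array([0.7, 0.7, 0.7]), vpl_pos, vpl_normal, vpl_flux)
print(radiance)
```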
103.
Interactive rigid body simulation is an important part of many modern computer tools; no authoring tool or game engine can do without it. Such high-performance tools open up new possibilities for changing how designers, engineers, modelers, and animators work on their design problems. This paper is a self-contained state-of-the-art report on the physics, the models, the numerical methods, and the algorithms used in interactive rigid body simulation, all of which have evolved and matured over the past 20 years. The paper communicates the mathematical and theoretical details in a pedagogical manner. It is not only a survey of what has been done; it also seeks to give the reader deeper insights to help guide future research.
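To give a flavor of the numerical machinery such a report covers, here is a minimal sketch of one semi-implicit Euler step for a single unconstrained rigid body. The state layout, helper names, and demo values are made up for the example, and contact and constraint handling, the hard part of interactive rigid body simulation, is omitted entirely.

```python
import numpy as np

def quat_mul(a, b):
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return np.array([aw*bw - ax*bx - ay*by - az*bz,
                     aw*bx + ax*bw + ay*bz - az*by,
                     aw*by - ax*bz + ay*bw + az*bx,
                     aw*bz + ax*by - ay*bx + az*bw])

def quat_to_matrix(q):
    w, x, y, z = q
    return np.array([[1-2*(y*y+z*z), 2*(x*y-w*z),   2*(x*z+w*y)],
                     [2*(x*y+w*z),   1-2*(x*x+z*z), 2*(y*z-w*x)],
                     [2*(x*z-w*y),   2*(y*z+w*x),   1-2*(x*x+y*y)]])

def integrate(x, q, v, w, mass, inertia_body, force, torque, dt):
    """One semi-implicit Euler step for an unconstrained rigid body.

    x: position, q: orientation quaternion (w, x, y, z), v: linear velocity,
    w: angular velocity in world space. Contacts and joints are omitted.
    """
    v = v + dt * force / mass
    R = quat_to_matrix(q)                              # body-to-world rotation
    I_inv = R @ np.diag(1.0 / inertia_body) @ R.T      # world-space inverse inertia
    w = w + dt * (I_inv @ torque)
    x = x + dt * v
    dq = 0.5 * quat_mul(np.array([0.0, *w]), q)        # dq/dt = 0.5 * (0, w) * q
    q = q + dt * dq
    return x, q / np.linalg.norm(q), v, w

# Demo: a 2 kg body spinning about z while falling under gravity.
state = integrate(np.zeros(3), np.array([1.0, 0.0, 0.0, 0.0]), np.zeros(3),
                  np.array([0.0, 0.0, 1.0]), 2.0, np.array([1.0, 2.0, 3.0]),
                  np.array([0.0, -19.62, 0.0]), np.zeros(3), dt=1.0 / 60.0)
print(state)
```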
104.
We present a novel approach to recording and computing panorama light fields. In contrast to previous methods that estimate panorama light fields from focal stacks or from naive multi-perspective image stitching, our approach is the first to process ray entries directly and does not require depth reconstruction or matching of image features. Arbitrarily complex scenes can therefore be captured while preserving correct occlusion boundaries, anisotropic reflections, refractions, and other light effects that go beyond the diffuse reflections of Lambertian surfaces.
105.
Sparse localized decomposition is a useful technique for extracting meaningful deformation components from a training set of mesh data. However, existing methods cannot capture large rotational motion in the given mesh dataset. In this paper we present a new decomposition technique based on deformation gradients. Given a mesh dataset, the deformation gradient field is extracted and split, through polar decomposition, into a rotation field and a stretching field. These two groups of deformation information are then processed by the sparse localized decomposition into the desired components. The resulting sparse localized components can be linearly combined to form a meaningful deformation gradient field, which is used to reconstruct the mesh through a least-squares optimization step. Our experiments show that the proposed method addresses the rotation problem of traditional deformation decomposition techniques, making it suitable for handling not only stretched deformations but also articulated motions that involve large rotations.
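The polar decomposition mentioned above splits each deformation gradient into a rotation and a symmetric stretch. A minimal SVD-based sketch is shown below; extracting per-element deformation gradients from the meshes and the subsequent sparse localized decomposition are not reproduced here.

```python
import numpy as np

def polar_decompose(F):
    """Split a 3x3 deformation gradient F into rotation R and stretch S, F = R @ S.

    Uses the SVD F = U diag(s) Vt; the sign fix keeps R a proper rotation
    (det R = +1) even for reflective inputs.
    """
    U, s, Vt = np.linalg.svd(F)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # avoid reflections
        U[:, -1] *= -1
        s[-1] *= -1
        R = U @ Vt
    S = Vt.T @ np.diag(s) @ Vt        # symmetric stretch part
    return R, S

# Example: a rotation about z combined with anisotropic stretching.
theta = np.pi / 3
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
F = Rz @ np.diag([1.5, 0.8, 1.0])
R, S = polar_decompose(F)
print(np.allclose(R @ S, F), np.allclose(R, Rz))
```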
106.
Since indoor scenes change frequently in daily life, for example when furniture is rearranged, their 3D reconstructions should be flexible and easy to update. We present an automatic 3D scene update algorithm for indoor scenes that captures scene variation with RGBD cameras. We assume an initial scene has been reconstructed in advance, manually or in a semi-automatic way, before the change, and we automatically update the reconstruction from newly captured RGBD images of the changed scene. The method starts with an automatic segmentation step that requires no manual interaction and benefits from accurate labels trained on the initial 3D scene. After segmentation, the objects captured by the RGBD camera are extracted to form a locally updated scene. We formulate an optimization problem that compares this local scene with the initial one to locate moved objects. The moved objects are then integrated with the static objects of the initial scene to generate a new 3D scene. We demonstrate the efficiency and robustness of our approach by updating the 3D reconstructions of several real-world scenes.
107.
We present a data-driven method for automatically recoloring a photo to enhance its appearance or to change a viewer's emotional response to it. A compact representation called a RegionNet summarizes the color and geometric features of image regions and the geometric relationships between them. Correlations between color property distributions and the geometric features of regions are learned from a database of well-colored photos. A probabilistic factor graph model summarizes the distributions of color properties and generates an overall probability distribution for color suggestions. Given a new input image, we can generate multiple recolored results that, unlike previous automatic results, are both natural and artistic and are compatible with the spatial arrangement of the image regions.
108.
This paper proposes a new approach to color transfer between two images. Our method is unique in its consideration of the scene illumination and of the constraint that the mapped image must lie within the color gamut of the target image. Specifically, our approach first performs a white-balance step on both images to remove color casts caused by different illuminations in the source and target image. We then align the two images to share the same 'white axis' and perform a gradient-preserving histogram matching along this axis to match the tone distribution between them. We show that this illuminant-aware strategy gives better results than working directly with the luminance channels of the original source and target images, as many previous methods do. Afterwards, our method performs a full gamut-based mapping rather than processing each channel separately, which guarantees that the colors of the transferred image lie within the target gamut. Our experimental results show that this combined illuminant-aware and gamut-based strategy produces more compelling results than previous methods. We detail our approach and demonstrate its effectiveness on a number of examples.
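For reference, plain CDF-based histogram matching along a single axis could look like the sketch below. It only illustrates the matching step: the paper's gradient-preserving variant, the white-balance alignment, and the gamut-based mapping are not reproduced, and the sample data is invented.

```python
import numpy as np

def match_histogram(source, target, bins=256):
    """Map 1D source values so their histogram matches the target's.

    Classic CDF-based histogram matching (the paper additionally preserves
    gradients, which this plain version does not).
    """
    lo = min(source.min(), target.min())
    hi = max(source.max(), target.max())
    edges = np.linspace(lo, hi, bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    src_cdf = np.cumsum(np.histogram(source, edges)[0]) / source.size
    tgt_cdf = np.cumsum(np.histogram(target, edges)[0]) / target.size
    # For each source value: find its quantile, then the target value there.
    quantiles = np.interp(source, centers, src_cdf)
    return np.interp(quantiles, tgt_cdf, centers)

# Hypothetical luminance ("white axis") samples from two images.
rng = np.random.default_rng(2)
src = rng.normal(0.3, 0.1, size=10000).clip(0, 1)
tgt = rng.normal(0.6, 0.2, size=10000).clip(0, 1)
matched = match_histogram(src, tgt)
print(src.mean(), tgt.mean(), matched.mean())
```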
109.
Typical high dynamic range (HDR) imaging approaches based on multiple images have difficulty handling moving objects and camera shake, suffering from ghosting and loss of sharpness in the output HDR image. While a variety of solutions exist for these limitations, most existing algorithms are susceptible to complex motions, saturation, and occlusions. In this paper, we propose an HDR imaging approach using a coded electronic shutter, which can capture a scene with row-wise varying exposures in a single image. Our approach directly extends the dynamic range of the captured image without using multiple images, by photometrically calibrating rows with different exposures. Because the multiple exposures are captured concurrently, misalignments of moving objects are naturally avoided, significantly reducing the ghosting effect. To handle under-/over-exposure, noise, and blur, we present a coherent HDR imaging process in which these problems are resolved one by one at each step. Experimental results with real photographs captured using a coded electronic shutter demonstrate that our method produces high-quality HDR images without ghosting and blur artifacts.
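A much simplified version of merging row-wise varying exposures into a radiance estimate might look like the following sketch. The alternating exposure pattern, the hat weighting, and the assumption of a linear (already calibrated) sensor response are illustrative choices, not the paper's actual pipeline.

```python
import numpy as np

def merge_rowwise_exposures(image, row_exposure, eps=1e-6):
    """Estimate radiance from one image whose rows have different exposure times.

    Simplified merge: each pixel's value/exposure estimate is combined with the
    estimates from the rows directly above and below (which carry the other
    exposures), weighted by a hat function that downweights under-/over-exposed
    pixels. Assumes a linear sensor response.
    """
    w = 1.0 - np.abs(2.0 * image - 1.0)             # confidence per pixel
    est = image / row_exposure[:, None]             # per-pixel radiance estimate
    w_pad = np.pad(w, ((1, 1), (0, 0)), mode="edge")
    e_pad = np.pad(est, ((1, 1), (0, 0)), mode="edge")
    rows = image.shape[0]
    num = sum(w_pad[i:i + rows] * e_pad[i:i + rows] for i in range(3))
    den = sum(w_pad[i:i + rows] for i in range(3))
    return num / (den + eps)

# Hypothetical example: rows alternate short/long exposure of a dark-to-bright ramp.
rows, cols = 8, 6
exposure = np.where(np.arange(rows) % 2 == 0, 1.0, 4.0)       # alternating exposures
radiance = np.linspace(0.05, 0.9, cols)[None, :].repeat(rows, axis=0)
image = np.clip(radiance * exposure[:, None], 0.0, 1.0)        # saturating sensor
print(merge_rowwise_exposures(image, exposure)[2])
```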
110.
In this paper, we present a new approach for shape-grammar-based generation and rendering of huge cities in real time on the graphics processing unit (GPU). Traditional approaches rely on evaluating a shape grammar and storing the produced geometry as a preprocessing step; during rendering, the pregenerated data is then streamed to the GPU. By interweaving generation and rendering, we overcome the problems and limitations of streaming pregenerated data. Using our methods of visibility pruning and adaptive level of detail, we dynamically generate only the geometry needed to render the current view, in real time and directly on the GPU. We also present a robust and efficient way to dynamically update a scene's derivation tree and geometry, which lets us exploit frame-to-frame coherence. Our combined generation and rendering is significantly faster than all previous work. For detailed scenes, we can generate geometry more rapidly than even just copying pregenerated data from main memory, enabling us to render cities with thousands of buildings at up to 100 frames per second, even with the camera moving at supersonic speed.
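A shape-grammar derivation of the kind evaluated here can be illustrated with a tiny CPU-side example that recursively splits a facade into floors and window cells. The rule names and split sizes are invented for illustration; the paper's contribution lies in performing such derivation and the rendering on the GPU, which this sketch does not attempt.

```python
def derive(symbol, box, out):
    """Expand one shape-grammar symbol over an axis-aligned box (x, y, w, h).

    Toy rules: a facade splits vertically into floors, a floor splits
    horizontally into window cells; terminal symbols are emitted as geometry.
    """
    x, y, w, h = box
    if symbol == "facade":
        floor_h = 3.0
        n = max(1, int(h // floor_h))
        for i in range(n):
            derive("floor", (x, y + i * h / n, w, h / n), out)
    elif symbol == "floor":
        cell_w = 2.0
        n = max(1, int(w // cell_w))
        for i in range(n):
            derive("window", (x + i * w / n, y, w / n, h), out)
    else:                        # terminal symbol -> collect its box as geometry
        out.append((symbol, box))

geometry = []
derive("facade", (0.0, 0.0, 10.0, 9.0), geometry)
print(len(geometry), geometry[0])
```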