101.
4D Video Textures (4DVT) introduce a novel representation for rendering video‐realistic interactive character animation from a database of 4D actor performances captured in a multiple‐camera studio. 4D performance capture reconstructs dynamic shape and appearance over time but is limited to free‐viewpoint video replay of the same motion. Interactive animation from 4D performance capture has so far been limited to surface shape only. 4DVT is the final piece in the puzzle enabling video‐realistic interactive animation through two contributions: a layered view‐dependent texture map representation which supports efficient storage, transmission and rendering from multiple‐view video capture; and a rendering approach that combines multiple 4DVT sequences in a parametric motion space, maintaining video‐quality rendering of dynamic surface appearance whilst allowing high‐level interactive control of character motion and viewpoint. 4DVT is demonstrated for multiple characters and evaluated both quantitatively and through a user study which confirms that the visual quality of captured video is maintained. The 4DVT representation achieves a more than 90% reduction in size and halves the rendering cost.
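The view‐dependent half of this idea can be illustrated with a small sketch: the texture layer from each capture camera is blended with a weight that grows as the requested render viewpoint approaches that camera's viewing direction. The weighting scheme and sharpness exponent below are assumptions for illustration, not the paper's formulation.

```python
import math

def view_dependent_weights(view_dir, cam_dirs, sharpness=8.0):
    """Blend weights for a layered view-dependent texture map.

    Each capture camera's layer is weighted by how closely its viewing
    direction matches the requested viewpoint; weights are normalized
    so the blended appearance sums to one.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    raw = [max(0.0, dot(view_dir, c)) ** sharpness for c in cam_dirs]
    total = sum(raw)
    if total == 0.0:  # viewpoint faces away from every capture camera
        return [1.0 / len(cam_dirs)] * len(cam_dirs)
    return [w / total for w in raw]

# Rendering from exactly one capture camera's viewpoint should weight
# that camera's layer exclusively (the others are orthogonal here).
cams = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]
w = view_dependent_weights((1.0, 0.0, 0.0), cams)
```

Raising the dot product to a power concentrates weight on the best‐aligned cameras, which is what keeps view‐dependent effects such as specularities sharp under interpolation.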
102.
This paper introduces a system for the direct editing of highlights produced by anisotropic BRDFs, which we call anisotropic highlights. We first provide a comprehensive analysis of the link between the direction of anisotropy and the shape of highlight curves for arbitrary object surfaces. The gained insights provide the required ingredients to infer BRDF orientations from a prescribed highlight tangent field. This amounts to a non‐linear optimization problem, which is solved at interactive framerates during manipulation. Taking inspiration from sculpting software, we provide tools that give the impression of manipulating highlight curves while actually modifying their tangents. Our solver produces desired highlight shapes for a host of lighting environments and anisotropic BRDFs.
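The core inverse step can be caricatured at a single surface point: given a prescribed highlight tangent, find the anisotropy orientation angle that aligns with it by gradient descent on a misalignment energy. The paper solves a much larger non‐linear problem over the whole surface at interactive rates; the energy, step size and iteration count here are assumptions for a minimal sketch.

```python
import math

def fit_anisotropy_angle(target_tangent, steps=200, lr=0.5):
    """Toy inference of a BRDF anisotropy orientation from a prescribed
    highlight tangent: minimize E(theta) = 1 - (d(theta) . t)^2 where
    d(theta) = (cos theta, sin theta), by gradient descent."""
    tx, ty = target_tangent
    theta = 0.1  # arbitrary starting orientation
    for _ in range(steps):
        c, s = math.cos(theta), math.sin(theta)
        d_dot_t = c * tx + s * ty
        # dE/dtheta = -2 (d . t) d'(theta) . t, with d' = (-sin, cos)
        grad = -2.0 * d_dot_t * (-s * tx + c * ty)
        theta -= lr * grad
    return theta

theta = fit_anisotropy_angle((0.0, 1.0))  # prescribed tangent along +y
```

The minimum of this energy is reached when the anisotropy direction is parallel (up to sign) to the prescribed tangent, which is exactly the per‐point condition the full tangent‐field solver enforces everywhere.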
103.
This study aims to develop a controller for use in the online simulation of two interacting characters. This controller is capable of generalizing two sets of interaction motions of the two characters based on the relationships between the characters. The controller can exhibit motions similar to a captured human motion while reacting in a natural way to the opponent character in real time. To achieve this, we propose a new type of physical model called a coupled inverted pendulum on carts, which comprises two inverted‐pendulum‐on‐a‐cart models, one for each individual, coupled by a relationship model. The proposed framework is divided into two steps: motion analysis and motion synthesis. Motion analysis is an offline preprocessing step, which optimizes the control parameters to move the proposed model along a motion capture trajectory of two interacting humans. The optimization procedure generates a coupled pendulum trajectory which represents the relationship between the two characters for each frame, and is used as a reference in the synthesis step. In the motion synthesis step, a new coupled pendulum trajectory is planned reflecting the effects of the physical interaction, and the captured reference motions are edited based on the planned trajectory produced by the coupled pendulum trajectory generator. To validate the proposed framework, we used a motion capture data set showing two people performing kickboxing. The proposed controller is able to generalize the behaviors of two humans to different situations, such as different speeds and turning rates, in a realistic way in real time.
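A single building block of this model, the balanced inverted pendulum on a cart, can be sketched in a few lines: a PD controller accelerates the cart to keep the pendulum upright. The paper couples two such models through a relationship model and optimizes the control parameters against captured motion; the simplified dynamics and gains below are purely illustrative.

```python
import math

def simulate_pendulum_on_cart(theta0, steps=2000, dt=0.005):
    """Minimal single inverted-pendulum-on-a-cart balance simulation
    with a PD controller (semi-implicit Euler integration). Gains and
    the reduced dynamics are assumed, not taken from the paper."""
    g, l = 9.81, 1.0    # gravity, pendulum length
    kp, kd = 40.0, 8.0  # PD gains (assumed)
    theta, omega = theta0, 0.0
    for _ in range(steps):
        u = -kp * theta - kd * omega  # cart acceleration command
        # reduced dynamics: gravity tips the pendulum over,
        # the commanded cart acceleration corrects it
        alpha = (g / l) * math.sin(theta) + (u / l) * math.cos(theta)
        omega += alpha * dt
        theta += omega * dt
    return theta

final = simulate_pendulum_on_cart(0.2)  # starts tilted 0.2 rad
```

Starting from a 0.2 rad tilt, the controller drives the pendulum back to vertical; in the paper's framework, each character's balance state evolves like this while the coupling term additionally pulls the two pendula toward the captured interaction relationship.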
104.
Image calibration requires both linearization of pixel values and scaling so that values in the image correspond to real‐world luminances. In this paper we focus on the latter and rather than rely on camera characterization, we calibrate images by analysing their content and metadata, obviating the need for expensive measuring devices or modeling of lens and camera combinations. Our analysis correlates sky pixel values to luminances that would be expected based on geographical metadata. Combined with high dynamic range (HDR) imaging, which gives us linear pixel data, our algorithm allows us to find absolute luminance values for each pixel, effectively turning digital cameras into absolute light meters. To validate our algorithm we have collected and annotated a calibrated set of HDR images and compared our estimation with several other approaches, showing that our approach is able to more accurately recover absolute luminance. We discuss various applications and demonstrate the utility of our method in the context of calibrated color appearance reproduction and lighting design.
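The scaling step reduces to a single global factor once the pixels are linear: map the observed sky level to the luminance predicted from geographic metadata, then apply that factor everywhere. The sketch below assumes a flat list of linear pixel values, a precomputed sky mask, and a made‐up expected sky luminance; the paper's actual sky‐luminance prediction is far more involved.

```python
def absolute_calibration(linear_pixels, sky_mask, expected_sky_luminance):
    """Scale linear HDR pixel values so the mean masked sky pixel
    matches the luminance expected from metadata, turning relative
    values into absolute luminances (cd/m^2)."""
    sky_values = [p for p, m in zip(linear_pixels, sky_mask) if m]
    observed = sum(sky_values) / len(sky_values)
    scale = expected_sky_luminance / observed
    return [p * scale for p in linear_pixels]

pixels = [0.2, 0.4, 0.8, 1.6]       # linear but uncalibrated values
mask = [False, True, True, False]   # middle two pixels are sky
calibrated = absolute_calibration(pixels, mask, 6000.0)  # assumed cd/m^2
```

Because the factor is global, relative relationships between pixels are preserved exactly; only the absolute level changes, which is what lets the same camera serve as a light meter across scenes.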
105.
Natural‐looking insect animation is very difficult to simulate. The fast movement and small scale of insects often defeat standard motion capture techniques, while manual key‐framing and physics‐driven methods demand significant amounts of time and effort owing to the delicate structure of the insect, which prevents practical applications. In this paper, we address this challenge by presenting a two‐level control framework to efficiently automate the modeling and authoring of insect locomotion. On the top level, we design a Triangle Placement Engine to automatically determine the location and orientation of insects' foot contacts, given the user‐defined trajectory and settings including speed, load, path and terrain. On the low level, we relate the Central Pattern Generator to the triangle profiles with the assistance of a Controller Look‐Up Table to quickly simulate the physically‐based movement of insects. With our approach, animators can directly author insect behavior across a wide locomotion repertoire, including walking along a specified path or on uneven terrain, dynamically adjusting to external perturbations, and collectively transporting prey back to the nest.
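A Central Pattern Generator of the kind referenced here can be sketched as a bank of phase oscillators with fixed offsets. For a hexapod tripod gait, the two leg tripods run half a cycle apart. The leg grouping, frequency, and sinusoidal form below are assumptions for illustration; the paper drives its CPG from a Controller Look‐Up Table rather than fixed offsets.

```python
import math

def tripod_gait_phases(t, freq=2.0):
    """Toy CPG for a hexapod tripod gait: six leg oscillators at a
    common frequency, with the two tripods (legs 0,3,4 vs 1,2,5)
    offset by half a cycle so they alternate stance and swing."""
    offsets = [0.0, 0.5, 0.5, 0.0, 0.0, 0.5]  # cycle offset per leg
    return [math.sin(2.0 * math.pi * (freq * t + o)) for o in offsets]

phases = tripod_gait_phases(0.125)  # a quarter period into the cycle
```

At any instant the two tripods are exactly out of phase, which is the defining property of the gait: one triangle of legs supports the body while the other swings forward.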
107.
In this paper, we propose a new continuous self‐collision detection (CSCD) method for a deformable surface that interacts with a simple solid model. The method builds on the radial‐view‐based culling method and is suitable for deformable surfaces that have a large contact region with the solid model. The deformable surface may contain small round‐shaped holes. At the pre‐processing stage, the holes of the deformable surface are filled with ghost triangles so as to make its mesh watertight, and an observer primitive (i.e. a point or a line segment) is computed so that it lies inside the solid model. At the runtime stage, the orientations of triangles with respect to the observer primitive are evaluated, and the collision status of the deformable surface is then determined. We evaluated our method on several animations, including virtual garments. Experimental results show that our method improves the process of CSCD.
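The per‐triangle orientation test at the heart of radial‐view‐based culling can be sketched directly: classify each surface triangle by whether its normal points away from the observer primitive inside the solid. The point observer, winding convention, and the idea that consistently outward‐facing regions can be culled from self‐collision checks are a simplified reading of the method.

```python
def faces_away_from_observer(triangle, observer):
    """Test whether a triangle's normal points away from an observer
    point (counter-clockwise winding assumed). Regions whose triangles
    all face away from the interior observer are candidates for
    culling in continuous self-collision detection."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = triangle
    # triangle normal via the cross product of two edge vectors
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx = uy * vz - uz * vy
    ny = uz * vx - ux * vz
    nz = ux * vy - uy * vx
    # direction from the observer to the triangle
    dx, dy, dz = ax - observer[0], ay - observer[1], az - observer[2]
    # positive dot product: the normal points away from the observer
    return nx * dx + ny * dy + nz * dz > 0.0

tri = ((1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (1.0, 0.0, 1.0))
outward = faces_away_from_observer(tri, (0.0, 0.0, 0.0))
```

Reversing the winding order flips the normal and hence the classification, which is why the pre‐processing step must make the mesh watertight and consistently oriented before this test is meaningful.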
108.
Multi‐view reconstruction aims at computing the geometry of a scene observed by a set of cameras. Accurate 3D reconstruction of dynamic scenes is a key component for a large variety of applications, ranging from special effects to telepresence and medical imaging. In this paper we propose a method based on Moving Least Squares surfaces which robustly and efficiently reconstructs dynamic scenes captured by a calibrated set of hybrid color+depth cameras. Our reconstruction provides spatio‐temporal consistency and seamlessly fuses color and geometric information. We illustrate our approach on a variety of real sequences and demonstrate that it favorably compares to state‐of‐the‐art methods.
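One ingredient of a Moving Least Squares surface can be sketched in isolation: each query point sees nearby samples through a smooth Gaussian kernel, giving a locally weighted centroid. A full MLS projection would additionally fit a local plane or polynomial and project onto it; the kernel bandwidth below is an assumed parameter.

```python
import math

def mls_weighted_centroid(query, points, h=0.5):
    """Gaussian-weighted local centroid of 3D sample points around a
    query point, the smoothing kernel underlying Moving Least Squares
    surface definitions."""
    weight_sum, acc = 0.0, [0.0, 0.0, 0.0]
    for p in points:
        d2 = sum((q - c) ** 2 for q, c in zip(query, p))
        w = math.exp(-d2 / (h * h))
        weight_sum += w
        for i in range(3):
            acc[i] += w * p[i]
    return tuple(a / weight_sum for a in acc)

pts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
c = mls_weighted_centroid((0.5, 0.0, 0.0), pts)
```

Because the kernel decays smoothly with distance, the reconstructed surface varies smoothly as samples move between frames, which is one reason MLS lends itself to the spatio‐temporally consistent fusion of noisy depth data described above.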
109.
Capturing exposure sequences to compute high dynamic range (HDR) images causes motion blur in cases of camera movement. This also applies to light‐field cameras: frames rendered from multiple blurred HDR light‐field perspectives are also blurred. While the recording times of exposure sequences cannot be reduced for a single‐sensor camera, we demonstrate how this can be achieved for a camera array. Thus, we decrease capturing time and reduce motion blur for HDR light‐field video recording. Applying a spatio‐temporal exposure pattern while capturing frames with a camera array reduces the overall recording time and enables the estimation of camera movement within one light‐field video frame. By estimating depth maps and local point spread functions (PSFs) from multiple perspectives with the same exposure, regional motion deblurring can be supported. Missing exposures at various perspectives are then interpolated.
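A spatio‐temporal exposure pattern of the kind described can be sketched as a staggered schedule: at every frame each camera in the array picks one exposure time, arranged so that every exposure is covered by some camera in every frame and each camera cycles through all exposures over time. The round‐robin arrangement below is an assumption, not the paper's exact pattern.

```python
def exposure_schedule(num_cameras, exposures, num_frames):
    """Round-robin spatio-temporal exposure pattern for a camera array:
    schedule[f][cam] is the exposure used by camera `cam` at frame `f`.
    With num_cameras >= len(exposures), every frame covers the full
    exposure set, so no frame must wait for a temporal bracket."""
    schedule = []
    for f in range(num_frames):
        frame = [exposures[(cam + f) % len(exposures)]
                 for cam in range(num_cameras)]
        schedule.append(frame)
    return schedule

sched = exposure_schedule(4, [1, 2, 4, 8], 4)  # relative exposure times
```

Since all exposures exist simultaneously at different perspectives, HDR fusion no longer needs a temporal bracket per camera, which is exactly what shortens the recording time and bounds motion blur to a single exposure rather than a whole sequence.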