1.
The system described in this paper provides a real-time 3D visual experience by using an array of 64 video cameras and an integral photography display with 60 viewing directions. The live 3D scene in front of the camera array is reproduced by the full-color, full-parallax autostereoscopic display with interactive control of viewing parameters. The main technical challenge is fast and flexible conversion of the data from the 64 multicamera images to the integral photography format. Based on image-based rendering techniques, our conversion method first renders 60 novel images corresponding to the viewing directions of the display, and then arranges the rendered pixels to produce an integral photography image. For real-time processing on a single PC, all the conversion processes are implemented on a GPU with GPGPU techniques. The conversion method also allows a user to interactively control viewing parameters of the displayed image for reproducing the dynamic 3D scene with desirable parameters. This control is performed as a software process, without reconfiguring the hardware system, by changing the rendering parameters such as the convergence point of the rendering cameras and the interval between the viewpoints of the rendering cameras.
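A minimal sketch (not the authors' GPU implementation) of the second stage described above: arranging pixels from the rendered directional views into an integral photography image. It assumes each lens cell is an 8x8 block whose (u, v) sub-pixel comes from directional view index u*8 + v, with only 60 of the 64 slots used; the real mapping depends on the lens array and display, so treat this purely as an illustration of the pixel rearrangement.

```python
import numpy as np

def views_to_integral_photography(views, cell=8):
    """views: array of shape (n_views, H, W, 3) -- one image per viewing direction."""
    n_views, H, W, _ = views.shape
    ip = np.zeros((H * cell, W * cell, 3), dtype=views.dtype)
    for d in range(n_views):
        u, v = divmod(d, cell)                 # sub-pixel position inside each lens cell
        ip[u::cell, v::cell, :] = views[d]     # scatter view d behind every lens
    return ip

# Example: 60 directional views of a 60x80 scene -> one integral photography frame
views = np.random.rand(60, 60, 80, 3).astype(np.float32)
ip_image = views_to_integral_photography(views)
print(ip_image.shape)  # (480, 640, 3)
```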
2.
This paper proposes a novel and general method of glare generation based on wave optics. A glare image is regarded as a result of Fraunhofer diffraction, which is equivalent to a 2D Fourier transform of the image of given apertures or obstacles. In conventional methods, the shapes of glare images are categorized according to their source apertures, such as pupils and eyelashes, and their basic shapes (e.g. halos, coronas, or radial streaks) are manually generated as templates, mainly based on statistical observation. Realistic variations of these basic shapes often depend on the use of random numbers. Our proposed method computes glare images fully automatically from aperture images and can be applied universally to all kinds of apertures, including camera diaphragms. It can handle dynamic changes in the position of the aperture relative to the light source, which enables subtle movement or rotation of glare streaks. Spectra can also be simulated in the glare, since the intensity of diffraction depends on the wavelength of light. The resulting glare image is superimposed onto a given computer-generated image containing high-intensity light sources or reflections, aligning the center of the glare image to the high-intensity areas. Our method is implemented as multipass rendering software. By precomputing the dynamic glare image set and putting it into texture memory, the software runs at an interactive rate.
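A minimal sketch of the core idea (glare as Fraunhofer diffraction of the aperture): the far-field pattern is the squared magnitude of the 2D Fourier transform of the aperture image, and its spatial scale varies with wavelength. The parameter values and the simple rescaling used for the spectral component are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

def diffraction_pattern(aperture):
    """aperture: 2D array, 1 where light passes, 0 where blocked."""
    field = np.fft.fftshift(np.fft.fft2(aperture))
    return np.abs(field) ** 2          # Fraunhofer intensity

def glare_for_wavelength(pattern, wavelength, ref_wavelength=550e-9):
    # Longer wavelengths diffract more: zoom the reference pattern accordingly.
    scale = wavelength / ref_wavelength
    h, w = pattern.shape
    ys = np.clip((np.arange(h) - h / 2) / scale + h / 2, 0, h - 1).astype(int)
    xs = np.clip((np.arange(w) - w / 2) / scale + w / 2, 0, w - 1).astype(int)
    return pattern[np.ix_(ys, xs)]

# Example: a circular, camera-diaphragm-like aperture
y, x = np.mgrid[-128:128, -128:128]
aperture = (x**2 + y**2 < 40**2).astype(float)
base = diffraction_pattern(aperture)
red_glare = glare_for_wavelength(base, 650e-9)   # red component of the simulated spectrum
```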
3.
This paper presents an efficient layout method for a high-speed multiplier. The Wallace-tree method is generally used for high-speed multipliers. In the conventional Wallace tree, however, every partial product is added in a single direction from top to bottom. Therefore, the number of adders increases as the adding stage moves forward. As a result, it generates a dead area when the multiplier is laid out in a rectangle. To solve this problem, we propose a rectangular Wallace-tree construction method. In our method, the partial products are divided into two groups and added in opposite directions. The partial products in the first group are added downward, and the partial products in the second group are added upward. Using this method, we eliminate the dead area. We also optimized the carry propagation between the two groups to realize high speed and a simple layout. We applied the method to a 54×54-bit multiplier. An area of 980 μm×1000 μm and a clock speed of 600 MHz were achieved using 0.18 μm CMOS technology.
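A minimal behavioral sketch of the two-group idea: split the partial products of a 54×54 multiplier into two groups, accumulate them in opposite directions, then combine the two group sums with one final addition. This models only the arithmetic, not the carry-save adder tree or the physical layout optimized in the paper.

```python
def multiply_two_group(a, b, bits=54):
    pps = [((a >> i) & 1) * (b << i) for i in range(bits)]   # 54 partial products
    top, bottom = pps[:bits // 2], pps[bits // 2:]
    sum_top = 0
    for pp in top:                 # first group: accumulated "downward"
        sum_top += pp
    sum_bottom = 0
    for pp in reversed(bottom):    # second group: accumulated "upward"
        sum_bottom += pp
    return sum_top + sum_bottom    # carry propagation between the two groups

a, b = 0x3FFFFFFFFFFFF, 12345678901234
assert multiply_two_group(a, b) == a * b
```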
4.
Although it has been observed that motion-compensated frame differences increase toward block boundaries and overlapped block motion compensation (OBMC) has been shown to provide reduced blocking artifacts as well as improved prediction accuracy, there is almost no satisfactory theoretical basis that clearly interprets the space-dependent characteristics of motion-compensated frame differences, nor have the theoretical aspects of OBMC been investigated thoroughly. We first interpret the space-dependent characteristics of motion-compensated frame differences based on a novel statistical motion distribution model. We then apply the statistical motion distribution model to the analysis of prediction efficiency of OBMC. Through the analysis, we prove theoretically that OBMC can reduce and equalize the motion-compensated frame differences across a block. The analytical results are justified by empirical experiments with typical image sequences.
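A minimal 1-D sketch of overlapped block motion compensation: each pixel's prediction is a position-dependent weighted sum of predictions made with the motion vectors of the current block and its neighbors, which is why prediction error near block boundaries is reduced and equalized. The raised-cosine-style window below is an illustrative choice, not necessarily the weighting analyzed in the paper.

```python
import numpy as np

def obmc_predict_1d(ref, mv_left, mv_cur, mv_right, start, bsize):
    idx = np.arange(start, start + bsize)
    t = (np.arange(bsize) + 0.5) / bsize                  # position inside the block, 0..1
    w_cur = 0.5 + 0.5 * np.sin(np.pi * t)                 # strongest in the block center
    w_left = (1.0 - w_cur) * (1.0 - t)                    # left neighbor dominates near the left edge
    w_right = (1.0 - w_cur) * t                           # right neighbor dominates near the right edge
    return (w_cur * ref[idx + mv_cur] +
            w_left * ref[idx + mv_left] +
            w_right * ref[idx + mv_right])                # weights sum to 1 at every pixel

ref = np.sin(np.linspace(0, 8, 256))                      # toy 1-D "reference frame"
pred = obmc_predict_1d(ref, mv_left=-2, mv_cur=1, mv_right=3, start=64, bsize=16)
```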
5.
Interactions between low-energy ions and solid rare gases have been investigated by observing desorbed ions. When low-energy ions were injected into solid Ne, we found that the kinetic energy of desorbed cluster ions significantly increased with irradiation time for a film thickness of several hundred monolayers. This result can be explained by charge-up (i.e., electronic holes) in the solid Ne. The kinetic energy of the desorbed ions is considered to be proportional to the number of holes created in the solid by the ion impacts. The solid’s temperature effects on the desorbed-ion kinetic energy can be understood as a dependence on the diffusion rate of the holes. The temperature was controlled between 4.7 and 7.0 K. The activation energy of hole-hopping transport is estimated at 0.7 meV based on the slope of Arrhenius plots.
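A minimal sketch of how an activation energy is read off an Arrhenius plot: a thermally activated hopping rate follows r ∝ exp(-Ea / (kB T)), so the slope of ln(r) versus 1/T equals -Ea/kB. The rate values below are made up purely to illustrate the fit; only the 4.7–7.0 K temperature range comes from the abstract.

```python
import numpy as np

kB = 8.617e-5                                     # Boltzmann constant in eV/K
T = np.array([4.7, 5.2, 5.8, 6.4, 7.0])           # temperatures (K)
rate = np.array([1.0, 1.4, 1.9, 2.5, 3.1])        # hypothetical hopping rates (arbitrary units)

slope, _ = np.polyfit(1.0 / T, np.log(rate), 1)   # ln(rate) = -(Ea/kB) * (1/T) + const
Ea_meV = -slope * kB * 1e3
print(f"activation energy ~ {Ea_meV:.2f} meV")
```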
6.
Lumisight Table: an interactive view-dependent tabletop display
A novel tabletop display provides different images to different users surrounding the system. It can also capture users' gestures and physical objects on the tabletop. The Lumisight Table approach is based on the optical design of a special screen system composed of a building material called Lumisty and a Fresnel lens. The system combines these films and a lens with four projectors to display four different images, one for each user's view. In addition, we need appropriate input methods for this display medium. In the current state of the project, we can control computers by placing physical objects on the display or placing our hands over the display. This screen system also makes it possible to use a camera to capture the appearance of the tabletop from inside the system. Our other main idea is to develop attractive and specialized applications for the Lumisight Table, including games and applications for computer-supported cooperative work (CSCW) environments. The projected images can be completely different from each other, or partially identical and partially different. Users can share the identical parts as public information, because all users can see them. This article is available with a short video documentary on CD-ROM.
7.
8.
R. Yoshihashi, R. Kawakami, M. Iida, T. Naemura. Wind Energy, 2017, 20(12): 1983–1995
Collisions of birds, especially endangered species, with wind turbines are a major environmental concern. Automatic bird monitoring can help resolve the issue, particularly in environmental risk assessments and real-time collision avoidance. For automatic recognition of birds in images, a clean, detailed, and realistic dataset for learning features and classifiers is crucial for any machine-learning-based method. Here, we constructed a bird image dataset that is derived from the actual environment of a wind farm and that is useful for examining realistic challenges in bird recognition in practice. It consists of high-resolution images covering a wide monitoring area around a turbine. The birds captured in these images are at relatively low resolution and are hierarchically labeled by experts for fine-grained species classification. We evaluated state-of-the-art image recognition methods using this dataset. The evaluations revealed that a deep-learning-based method and a simpler traditional learning method were almost equally successful at detection, while the former captures more generalized features. In classification, the most promising results were provided by the deep-learning-based method. The best methods in our experiments recorded a 0.98 true positive rate for bird detection at a false positive rate of 0.05 and a 0.85 true positive rate for species classification at a false positive rate of 0.1.
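A minimal sketch of how operating points like "0.98 true positive rate at a 0.05 false positive rate" can be read from detector scores: sweep the ROC curve and report the TPR at the target FPR. The scores and labels below are placeholders, not the paper's data, and the paper's exact evaluation protocol may differ.

```python
import numpy as np

def tpr_at_fpr(scores, labels, target_fpr):
    order = np.argsort(-scores)                    # sort detections by descending confidence
    labels = labels[order]
    tps = np.cumsum(labels)                        # true positives accepted so far
    fps = np.cumsum(1 - labels)                    # false positives accepted so far
    tpr = tps / max(labels.sum(), 1)
    fpr = fps / max((1 - labels).sum(), 1)
    ok = fpr <= target_fpr
    return tpr[ok].max() if ok.any() else 0.0

scores = np.array([0.95, 0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2])
labels = np.array([1, 1, 1, 0, 1, 0, 0, 0])        # 1 = bird, 0 = background
print(tpr_at_fpr(scores, labels, target_fpr=0.25))
```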
9.
In this paper, the authors have developed a system that animates 3D facial agents based on real-time facial expression analysis techniques and research on synthesizing facial expressions and text-to-speech capabilities. This system combines visual, auditory, and primary interfaces to communicate one coherent multimodal chat experience. Users can represent themselves using agents they select from a group that we have predefined. When a user shows a particular expression while typing a text, the 3D agent at the receiving end speaks the message aloud while it replays the recognized facial expression sequences and also augments the synthesized voice with appropriate emotional content. Because the visual data exchange is based on the MPEG-4 high-level Facial Animation Parameter for facial expressions (FAP 2), rather than real-time video, the method requires very low bandwidth.
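A rough, illustrative back-of-the-envelope comparison behind the low-bandwidth claim: sending a few facial-expression parameters per frame costs orders of magnitude less than streaming video. The parameter count, sizes, and rates below are assumptions for illustration, not the MPEG-4 FAP bitstream format.

```python
frame_rate = 15                      # assumed animation update rate (frames/s)
params_per_frame = 4                 # e.g. expression indices + intensities (assumed)
bytes_per_param = 2                  # assumed encoding size per parameter
fap_kbps = frame_rate * params_per_frame * bytes_per_param * 8 / 1000
video_kbps = 384                     # a typical low-rate video-conference stream (assumed)
print(f"expression parameters: ~{fap_kbps:.1f} kbit/s vs video: ~{video_kbps} kbit/s")
```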
10.
Our work targets 3D scenes in motion. In this article, we propose a method for view-dependent layered representation of 3D dynamic scenes. Using densely arranged cameras, we've developed a system that can perform processing in real time from image pickup to interactive display, using video sequences instead of static images, at 10 frames per second. In our system, images on layers are view dependent, and we update both the shape and image of each layer in real time. This lets us use the dynamic layers as the coarse structure of the dynamic 3D scenes, which improves the quality of the synthesized images. In this sense, our prototype system may be one of the first full real-time image-based modelling and rendering systems. Our experimental results show that this method is useful for interactive 3D rendering of real scenes.
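A minimal back-to-front compositing sketch for a layered scene representation: each layer carries a texture and an alpha mask, and a novel view is synthesized by stacking layers from far to near. Layer estimation, the view-dependent texture update, the camera array, and the real-time aspects of the paper's system are not modeled here; this only illustrates how layers combine into a rendered image.

```python
import numpy as np

def composite_layers(layers):
    """layers: list of (rgb, alpha) ordered far -> near; rgb: HxWx3, alpha: HxW."""
    h, w, _ = layers[0][0].shape
    out = np.zeros((h, w, 3))
    for rgb, alpha in layers:                  # paint nearer layers over farther ones
        out = out * (1.0 - alpha[..., None]) + rgb * alpha[..., None]
    return out

h, w = 120, 160
far = (np.full((h, w, 3), 0.2), np.ones((h, w)))             # opaque background layer
near = (np.full((h, w, 3), 0.8), np.tril(np.ones((h, w))))   # partially covering foreground layer
novel_view = composite_layers([far, near])
```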