Full-text access
Paid full text | 97 articles |
Free | 0 articles |
Subject category
Chemical industry | 1 article |
Energy and power engineering | 3 articles |
Radio electronics | 12 articles |
General industrial technology | 5 articles |
Metallurgical industry | 1 article |
Automation technology | 75 articles |
Publication year
2024 | 1 article |
2020 | 2 articles |
2018 | 2 articles |
2016 | 1 article |
2012 | 4 articles |
2011 | 4 articles |
2010 | 5 articles |
2009 | 3 articles |
2008 | 7 articles |
2007 | 7 articles |
2006 | 3 articles |
2005 | 4 articles |
2004 | 2 articles |
2003 | 4 articles |
2002 | 1 article |
2001 | 2 articles |
2000 | 5 articles |
1999 | 3 articles |
1998 | 5 articles |
1997 | 7 articles |
1996 | 1 article |
1994 | 1 article |
1993 | 2 articles |
1992 | 3 articles |
1991 | 3 articles |
1990 | 3 articles |
1989 | 3 articles |
1988 | 4 articles |
1986 | 1 article |
1985 | 1 article |
1984 | 2 articles |
1980 | 1 article |
Sort by: 97 results found (search time: 19 ms)
71.
K.G. Kanade, Jin-Ook Baeg, Ki-jeong Kong, B.B. Kale, Sang Mi Lee, Sang-Jin Moon, Chul Wee Lee, Songhun Yoon 《International Journal of Hydrogen Energy》2008
We report here a novel approach to the synthesis of the layered perovskite photocatalysts Pb2Ga2Nb2O10 and RbPb2Nb2O7 using a solid-state route (SSR) and a molten salt synthesis (MSS) method. The modified MSS method reported here has advantages over the conventional SSR method in terms of uniform particle size, well-defined crystal structure, and controlled morphology and stoichiometry vis-à-vis photocatalysis. The structural study was performed using X-ray diffractometry (XRD) and computations based on density functional theory (DFT). The simulations showed that both compounds belong to the Ruddlesden-Popper phase (A′2An−1BnO3n+1; n = 2 or 3). The surface morphology of the materials was studied using field emission scanning electron microscopy (FESEM) and high resolution transmission electron microscopy (HRTEM). The average particle sizes of the perovskites Pb2Ga2Nb2O10 and RbPb2Nb2O7 were in the ranges 20–40 nm and 70–90 nm, respectively. The efficacy of these materials as visible-light-driven photocatalysts for hydrogen generation from H2S was studied in relation to particle size and morphology. Electronic band structure calculations with density of states (DOS) were also performed for both materials. Being stable single-phase ternary layered oxide perovskites with a band gap (2.75 eV) in the visible domain, they may have potential applications in electronic devices as well.
72.
A close relationship exists between the advancement of face recognition algorithms and the availability of face databases that vary, in a controlled manner, the factors affecting facial appearance. The CMU PIE database has been very influential in advancing research in face recognition across pose and illumination. Despite its success, the PIE database has several shortcomings: a limited number of subjects, a single recording session, and only a few captured expressions. To address these issues we collected the CMU Multi-PIE database. It contains 337 subjects, imaged under 15 viewpoints and 19 illumination conditions in up to four recording sessions. In this paper we introduce the database and describe the recording procedure. We furthermore present results from baseline experiments using PCA and LDA classifiers to highlight similarities and differences between PIE and Multi-PIE.
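A PCA baseline of the kind used in these experiments can be sketched in a few lines of NumPy. The random matrix below merely stands in for vectorized face images (no Multi-PIE data is assumed), and the subspace dimension k is an arbitrary illustrative choice:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for vectorized face images: 100 samples, 64 dimensions
X = rng.normal(size=(100, 64))

# PCA: center the data, then take the top-k right singular vectors
mean = X.mean(axis=0)
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
k = 10
components = Vt[:k]            # (k, 64) principal axes ("eigenfaces")

# Project a probe image into the k-dim subspace and reconstruct it
probe = X[0]
coeffs = components @ (probe - mean)
recon = mean + components.T @ coeffs
```

In a recognition baseline, the low-dimensional `coeffs` vectors (rather than raw pixels) would be compared between gallery and probe images.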
73.
Shuntaro Yamazaki, Srinivasa G. Narasimhan, Simon Baker, Takeo Kanade 《International Journal of Computer Vision》2009,81(3):259-280
Acquiring 3D models of intricate objects (like tree branches, bicycles, and insects) is a challenging task due to severe self-occlusions, repeated thin structures, and surface discontinuities. In theory, a shape-from-silhouettes (SFS) approach can overcome these difficulties and reconstruct visual hulls that are close to the actual shapes, regardless of the complexity of the object. In practice, however, SFS is highly sensitive to errors in silhouette contours and in the calibration of the imaging system, and has therefore not been used for obtaining accurate shapes with a large number of views. In this work, we present a practical approach to SFS using a novel technique called coplanar shadowgram imaging that allows us to use dozens to even hundreds of views for visual hull reconstruction. A point light source is moved around an object and the shadows (silhouettes) cast onto a single background plane are imaged. We characterize this imaging system in terms of image projection, reconstruction ambiguity, epipolar geometry, and shape and source recovery. The coplanarity of the shadowgrams yields unique geometric properties that are not possible in traditional multi-view camera-based imaging systems. These properties allow us to derive a robust and automatic algorithm to recover the visual hull of an object and the 3D positions of the light source simultaneously, regardless of the complexity of the object. We demonstrate the acquisition of several intricate shapes with severe occlusions and thin structures, using 50 to 120 views.
Electronic Supplementary Material The online version of this article () contains supplementary material, which is available to authorized users.
This is an extension and consolidation of our previous work on the coplanar shadowgram imaging system (Yamazaki et al. 2007), presented at the IEEE International Conference on Computer Vision 2007.
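The core SFS idea, intersecting back-projected silhouettes to obtain a visual hull, can be illustrated with a deliberately simplified sketch: three axis-aligned orthographic views of a synthetic sphere on a voxel grid, not the paper's calibrated shadowgram setup:

```python
import numpy as np

# Ground-truth object: a sphere of radius 10 in a 32^3 voxel grid
n = 32
ax = np.arange(n) - n / 2 + 0.5
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
sphere = X**2 + Y**2 + Z**2 <= 10**2

# Orthographic silhouettes along the three axes:
# any() along an axis is the "shadow" of the object in that direction
sil_x = sphere.any(axis=0)   # (y, z) silhouette
sil_y = sphere.any(axis=1)   # (x, z) silhouette
sil_z = sphere.any(axis=2)   # (x, y) silhouette

# Visual hull: keep only voxels that project inside every silhouette.
# The hull always contains the object; with few views it overestimates it.
hull = sil_x[None, :, :] & sil_y[:, None, :] & sil_z[:, :, None]
```

With only three views the hull is a rounded-box superset of the sphere; adding views (as the shadowgram system does with 50-120) tightens it toward the true shape.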
74.
Quasiconvex optimization for robust geometric reconstruction (Total citations: 1; self-citations: 0; by others: 1)
Geometric reconstruction problems in computer vision are often solved by minimizing a cost function that combines the reprojection errors in the 2D images. In this paper, we show that, for various geometric reconstruction problems, their reprojection error functions share a common and quasiconvex formulation. Based on the quasiconvexity, we present a novel quasiconvex optimization framework in which the geometric reconstruction problems are formulated as a small number of small-scale convex programs that are ready to solve. Our final reconstruction algorithm is simple and has an intuitive geometric interpretation. In contrast to existing local minimization approaches, our algorithm is deterministic and guarantees a predefined accuracy of the minimization result. The quasiconvexity also provides an intuitive method to handle directional uncertainties and outliers in measurements. When there are outliers in the measurements, our method provides a mechanism to locate the global minimum of a robust error function. For large-scale problems and when computational resources are constrained, we provide an efficient approximation that gives a good upper bound (but not the global minimum) on the reconstruction error. We demonstrate the effectiveness of our algorithm by experiments on both synthetic and real data.
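The key consequence of quasiconvexity is that the global minimum can be found by bisection on the error level: each sublevel set is convex, so "is an error of γ achievable?" is a convex feasibility problem. The toy 1D instance below (invented coefficients, fractional errors of the quadratic-over-linear kind reprojection errors take) shows the mechanism; in 1D each feasibility subproblem reduces to an interval intersection rather than a full convex program:

```python
import numpy as np

# Fractional errors f_i(x) = |a_i*x - b_i| / (c_i*x + d_i), quasiconvex on
# the domain where the denominators are positive (illustrative coefficients)
a = np.array([1.0, 2.0, 0.5])
b = np.array([3.0, 5.0, 2.0])
c = np.array([0.1, 0.2, 0.05])
d = np.array([1.0, 1.5, 1.2])

def feasible(gamma, lo=-100.0, hi=100.0):
    """Is there an x in [lo, hi] with all f_i(x) <= gamma?
    |a_i x - b_i| <= gamma*(c_i x + d_i) splits into two linear inequalities,
    so feasibility is a nonempty interval intersection."""
    for ai, bi, ci, di in zip(a, b, c, d):
        for coef, rhs in [(ai - gamma * ci, bi + gamma * di),
                          (-ai - gamma * ci, -bi + gamma * di)]:
            if coef > 0:
                hi = min(hi, rhs / coef)
            elif coef < 0:
                lo = max(lo, rhs / coef)
            elif rhs < 0:
                return False          # 0*x <= rhs with rhs < 0: impossible
    return lo <= hi

# Bisection on gamma converges to the global minimum of max_i f_i(x)
lo_g, hi_g = 0.0, 10.0
for _ in range(60):
    mid = 0.5 * (lo_g + hi_g)
    if feasible(mid):
        hi_g = mid
    else:
        lo_g = mid
```

In the paper's setting x is a multi-dimensional geometric unknown and each feasibility test is a small convex program, but the bisection shell is the same.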
75.
Mei Han, Kanade T. 《IEEE transactions on pattern analysis and machine intelligence》2003,25(7):884-894
In this paper, we describe a reconstruction method for multiple motion scenes, which are scenes containing multiple moving objects, from uncalibrated views. Assuming that the objects are moving with constant velocities, the method recovers the scene structure, the trajectories of the moving objects, the camera motion, and the camera intrinsic parameters (except skews) simultaneously. We focus on the case where the cameras have unknown and varying focal lengths while the other intrinsic parameters are known. The number of the moving objects is automatically detected without prior motion segmentation. The method is based on a unified geometrical representation of the static scene and the moving objects. It first performs a projective reconstruction using a bilinear factorization algorithm and, then, converts the projective solution to a Euclidean one by enforcing metric constraints. Experimental results on synthetic and real images are presented.
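The factorization idea underlying such methods is easiest to see in its classic static-scene, orthographic form (the Tomasi-Kanade rank-3 factorization), a simpler cousin of the bilinear projective multi-body algorithm described above. A minimal synthetic sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic static scene: P points observed in F orthographic views
P, F = 30, 6
S = rng.normal(size=(3, P))                       # 3D shape (3 x P)
W = np.zeros((2 * F, P))                          # stacked 2D measurements
for f in range(F):
    R, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random rotation
    W[2*f:2*f+2] = R[:2] @ S                      # orthographic projection

# The registered measurement matrix has rank <= 3, so a truncated SVD
# factors it into motion and shape, up to a 3x3 affine ambiguity
W -= W.mean(axis=1, keepdims=True)
U, sv, Vt = np.linalg.svd(W, full_matrices=False)
M_hat = U[:, :3] * sv[:3]      # (2F, 3) affine camera motion
S_hat = Vt[:3]                 # (3, P) affine shape
```

The Euclidean upgrade step mentioned in the abstract (enforcing metric constraints) resolves exactly this remaining affine ambiguity.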
76.
A locally adaptive window for signal matching (Total citations: 5; self-citations: 4; by others: 5)
This article presents a signal matching algorithm that can select an appropriate window size adaptively so as to obtain both precise and stable estimation of correspondences.

Matching two signals by calculating the sum of squared differences (SSD) over a certain window is a basic technique in computer vision. Given the signals and a window, there are two factors that determine the difficulty of obtaining precise matching. The first is the variation of the signal within the window, which must be large enough, relative to noise, that the SSD values exhibit a clear and sharp minimum at the correct disparity. The second factor is the variation of disparity within the window, which must be small enough that signals of corresponding positions are duly compared. These two factors present conflicting requirements to the size of the matching window, since a larger window tends to increase the signal variation, but at the same time tends to include points of different disparity. A window size must be adaptively selected depending on local variations of signal and disparity in order to compute a most-certain estimate of disparity at each point.

There has been little work on a systematic method for automatic window-size selection. The major difficulty is that, while the signal variation is measurable from the input, the disparity variation is not, since disparities are what we wish to calculate. We introduce here a statistical model of disparity variation within a window, and employ it to establish a link between the window size and the uncertainty of the computed disparity. This allows us to choose the window size that minimizes uncertainty in the disparity computed at each point. This article presents a theory for the model and the resultant algorithm, together with analytical and experimental results that demonstrate their effectiveness.
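The fixed-window SSD matching that the paper takes as its starting point (and then makes adaptive) can be sketched on a 1D signal; the shift, window size, and noise level below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(2)
# Two 1D signals related by an integer shift, plus a little noise
signal = rng.normal(size=200)
true_shift = 7
other = np.roll(signal, true_shift) + 0.01 * rng.normal(size=200)

def ssd_disparity(f, g, center, half_window, max_disp=15):
    """Estimate the disparity at `center` by minimizing the SSD
    between a fixed window in f and shifted windows in g."""
    w = f[center - half_window: center + half_window + 1]
    best, best_cost = 0, np.inf
    for disp in range(-max_disp, max_disp + 1):
        v = g[center + disp - half_window: center + disp + half_window + 1]
        cost = np.sum((w - v) ** 2)
        if cost < best_cost:
            best, best_cost = disp, cost
    return best

d = ssd_disparity(signal, other, center=100, half_window=8)
```

The paper's contribution is choosing `half_window` per point from a statistical model of disparity variation, instead of fixing it as this sketch does.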
77.
The measurement of highlights in color images (Total citations: 4; self-citations: 3; by others: 4)
Gudrun J. Klinker, Steven A. Shafer, Takeo Kanade 《International Journal of Computer Vision》1988,2(1):7-32
In this paper, we present an approach to color image understanding that accounts for color variations due to highlights and shading. We demonstrate that the reflected light from every point on a dielectric object, such as plastic, can be described as a linear combination of the object color and the highlight color. The colors of all light rays reflected from one object then form a planar cluster in the color space. The shape of this cluster is determined by the object and highlight colors and by the object shape and illumination geometry. We present a method that exploits the difference between object color and highlight color to separate the color of every pixel into a matte component and a highlight component. This generates two intrinsic images, one showing the scene without highlights, and the other one showing only the highlights. The intrinsic images may be a useful tool for a variety of algorithms in computer vision, such as stereo vision, motion analysis, shape from shading, and shape from highlights. Our method combines the analysis of matte and highlight reflection with a sensor model that accounts for camera limitations. This enables us to successfully run our algorithm on real images taken in a laboratory setting. We show and discuss the results.

This material is based upon work supported by the National Science Foundation under Grant DCR-8419990 and by the Defense Advanced Research Projects Agency (DOD), ARPA Order No. 4976, monitored by the Air Force Avionics Laboratory under contract F33615-84-K-1520. Any opinions, findings, and conclusions or recommendations expressed in this publication are those of the authors and do not necessarily reflect the views of the National Science Foundation, the Defense Advanced Research Projects Agency, or the US Government.
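The matte/highlight separation described above amounts to expressing each pixel in the 2D color plane spanned by the object color and the highlight color. The sketch below assumes both basis colors are already known (the paper estimates them from the shape of the planar cluster) and uses invented synthetic pixels:

```python
import numpy as np

# Dichromatic model: each pixel is a nonnegative combination of the
# object's body color and the illuminant (highlight) color.
body = np.array([0.8, 0.2, 0.1])        # matte object color (assumed known)
highlight = np.array([1.0, 1.0, 1.0])   # illuminant color (assumed known)

rng = np.random.default_rng(3)
m_true = rng.uniform(0.2, 1.0, size=50)         # per-pixel shading factors
h_true = rng.uniform(0.0, 0.5, size=50)         # per-pixel highlight factors
pixels = np.outer(m_true, body) + np.outer(h_true, highlight)  # (50, 3)

# Decompose every pixel into matte + highlight by least squares
# against the 2D basis of the color plane
A = np.stack([body, highlight], axis=1)         # (3, 2) basis matrix
coef, *_ = np.linalg.lstsq(A, pixels.T, rcond=None)
matte = np.outer(coef[0], body)                 # intrinsic image w/o highlights
spec = np.outer(coef[1], highlight)             # intrinsic highlight image
```

On synthetic pixels that lie exactly in the color plane the decomposition is exact; real pixels scatter around the plane due to sensor noise, which is why the paper adds an explicit sensor model.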
78.
Long Quan, Kanade T. 《IEEE transactions on pattern analysis and machine intelligence》1997,19(8):834-845
This paper presents a linear algorithm for recovering 3D affine shape and motion from line correspondences with uncalibrated affine cameras. The algorithm requires a minimum of seven line correspondences over three views. The key idea is the introduction of a one-dimensional projective camera. This converts the 3D affine reconstruction of "line directions" into the 2D projective reconstruction of "points". In addition, a line-based factorization method is also proposed to handle redundant views. Experimental results on both simulated and real image sequences validate the robustness and the accuracy of the algorithm.
79.
Three-dimensional scene flow (Total citations: 2; self-citations: 0; by others: 2)
Vedula S, Baker S, Rander P, Collins R, Kanade T 《IEEE transactions on pattern analysis and machine intelligence》2005,27(3):475-480
Just as optical flow is the two-dimensional motion of points in an image, scene flow is the three-dimensional motion of points in the world. The fundamental difficulty with optical flow is that only the normal flow can be computed directly from the image measurements, without some form of smoothing or regularization. In this paper, we begin by showing that the same fundamental limitation applies to scene flow, however many cameras are used to image the scene. There are then two choices when computing scene flow: 1) perform the regularization in the images, or 2) perform the regularization on the surface of the object in the scene. In this paper, we choose to compute scene flow using regularization in the images. We describe three algorithms: the first two for computing scene flow from optical flows, and the third for constraining scene structure from the inconsistencies in multiple optical flows.
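The basic relation used when computing scene flow from optical flows is linear: each camera's 2D flow is (to first order) the camera's projection Jacobian applied to the 3D motion of the point. A minimal sketch with random stand-in Jacobians (real ones would come from calibrated camera projections):

```python
import numpy as np

rng = np.random.default_rng(4)
# A 3D point moves by dX; camera i observes a 2D optical flow
# u_i ≈ J_i @ dX, where J_i is the 2x3 Jacobian of that camera's
# projection function evaluated at the point.
dX_true = np.array([0.3, -0.1, 0.05])
jacobians = [rng.normal(size=(2, 3)) for _ in range(4)]   # stand-in J_i
flows = [J @ dX_true for J in jacobians]                  # observed flows

# Stack all cameras and solve the over-determined linear system;
# two or more cameras in general position make the 3D motion observable
A = np.vstack(jacobians)          # (2N, 3)
b = np.concatenate(flows)         # (2N,)
dX, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With noisy flows the least-squares residual is exactly the "inconsistency" that the paper's third algorithm exploits to constrain scene structure.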
80.