41.
A regularized iterative image restoration algorithm   Cited 11 times (0 self-citations, 11 by others)
The development of the algorithm is based on a set theoretic approach to regularization. Deterministic and/or statistical information about the undistorted image and statistical information about the noise are directly incorporated into the iterative procedure. The restored image is the center of an ellipsoid bounding the intersection of two ellipsoids. The proposed algorithm, which has the constrained least squares algorithm as a special case, is extended into an adaptive iterative restoration algorithm. The spatial adaptivity is introduced to incorporate properties of the human visual system. Convergence of the proposed iterative algorithms is established. For the experimental results which are shown, the adaptively restored images have better quality than the nonadaptively restored ones based on visual observations and on an objective criterion of merit which accounts for the noise masking property of the visual system.
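As a sketch of the constrained least-squares special case mentioned above, the regularized Landweber-type iteration below converges to the constrained least-squares solution. The 1-D circulant blur, first-difference regularizer, weight `alpha`, and step size `beta` are all assumptions for illustration, not the paper's ellipsoid-bounding construction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
# Hypothetical circulant 3-tap blur H and first-difference regularizer C
H = sum(np.roll(np.eye(n), k, axis=1) for k in (-1, 0, 1)) / 3.0
C = np.eye(n) - np.roll(np.eye(n), 1, axis=1)
x_true = np.zeros(n)
x_true[10:20] = 1.0
y = H @ x_true + 0.01 * rng.standard_normal(n)

alpha, beta = 0.1, 0.5   # assumed regularization weight and step size
x = np.zeros(n)
for _ in range(200):     # regularized Landweber-type iteration
    x = x + beta * (H.T @ y - (H.T @ H + alpha * C.T @ C) @ x)

# Its fixed point solves the constrained least-squares normal equations
x_cls = np.linalg.solve(H.T @ H + alpha * C.T @ C, H.T @ y)
print(np.max(np.abs(x - x_cls)))
```

The step size must satisfy 0 < beta < 2/λmax(HᵀH + αCᵀC) for the iteration to converge, which the values above do.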
42.
Clinical angiography requires hundreds of X-ray images, putting the patients and particularly the medical staff at risk. Dosage reduction involves an inevitable sacrifice in image quality. In this work, the latter problem is addressed by first modeling the signal-dependent, Poisson-distributed noise that arises as a result of this dosage reduction. The commonly utilized noise model for single images is shown to be obtainable from the new model. Stochastic temporal filtering techniques are proposed to enhance clinical fluoroscopy sequences corrupted by quantum mottle. The temporal versions of these filters as developed here are more suitable for filtering image sequences, as correlations along the time axis can be utilized. For these dynamic sequences, the problem of displacement field estimation is treated in conjunction with the filtering stage to ensure that the temporal correlations are taken along the direction of motion to prevent object blur.
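A minimal numeric sketch of the noise model: for Poisson-distributed quantum mottle the variance tracks the signal level, and averaging along the temporal axis reduces it. The frame size, photon count rate, and sequence length are assumed values, and the scene here is static, so the motion-compensation step the paper pairs with the filter is unnecessary.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical mean photon count per pixel of a low-dose frame (assumed value)
clean = np.full((64, 64), 50.0)
# Quantum mottle: Poisson noise, so the variance follows the signal level
frames = rng.poisson(clean, size=(8, 64, 64)).astype(float)
print(frames[0].mean(), frames[0].var())   # both near 50

# Temporal filtering along a (here static) motion trajectory: averaging 8
# independent Poisson frames cuts the noise variance by a factor of 8
filtered = frames.mean(axis=0)
print(filtered.var())                      # near 50 / 8
```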
43.
The authors provide a general framework for performing processing of stationary multichannel (MC) signals that is linear shift-invariant within channel and shift varying across channels. Emphasis is given to the restoration of degraded signals. It is shown that, by utilizing the special structure of semiblock circulant and block diagonal matrices, MC signal processing can be easily carried out in the frequency domain. The generalization of many frequency-domain single-channel (SC) signal processing techniques to the MC case is presented. It is shown that in MC signal processing each frequency component of a signal and system is represented, respectively, by a small vector and a matrix (of size equal to the number of channels), while in SC signal processing each frequency component in both cases is a scalar.
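The structural point, that in the MC case each frequency bin carries a small matrix and vector rather than two scalars, can be verified numerically. The two-channel filters below are arbitrary assumptions; the sketch checks that per-bin matrix-vector products in the frequency domain reproduce time-domain circular convolution across channels.

```python
import numpy as np

rng = np.random.default_rng(2)
n, ch = 16, 2
x = rng.standard_normal((ch, n))
# Hypothetical filters: h[i, j] maps channel j to channel i (LSI within channel)
h = rng.standard_normal((ch, ch, n)) * np.exp(-np.arange(n))

# Time domain: y_i = sum_j (h_ij circularly convolved with x_j)
y_time = np.zeros((ch, n))
for i in range(ch):
    for j in range(ch):
        for t in range(n):
            y_time[i, t] += np.sum(h[i, j] * x[j, (t - np.arange(n)) % n])

# Frequency domain: every bin k is a ch-by-ch matrix acting on a ch-vector,
# instead of the scalar-times-scalar product of the single-channel case
X = np.fft.fft(x, axis=1)
Hf = np.fft.fft(h, axis=2)
Y = np.einsum('ijk,jk->ik', Hf, X)
y_freq = np.real(np.fft.ifft(Y, axis=1))
print(np.max(np.abs(y_time - y_freq)))
```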
44.
In this paper the application of image prior combinations to the Bayesian Super Resolution (SR) image registration and reconstruction problem is studied. Two sparse image priors, a Total Variation (TV) prior and a prior based on the ℓ1 norm of horizontal and vertical first-order differences (f.o.d.), are combined with a non-sparse Simultaneous Auto Regressive (SAR) prior. Since, for a given observation model, each prior produces a different posterior distribution of the underlying High Resolution (HR) image, the use of variational approximation will produce as many posterior approximations as priors we want to combine. A unique approximation is obtained here by finding the distribution on the HR image given the observations that minimizes a linear convex combination of Kullback–Leibler (KL) divergences. We find this distribution in closed form. The estimated HR images are compared with the ones obtained by other SR reconstruction methods.
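For scalar Gaussians the minimizer of a convex combination of KL divergences (taken here in the direction KL(q‖p_i)) has a simple closed form, which gives a flavor of how the per-prior posteriors merge. All the numbers below are made up for illustration; the actual method operates on full HR images with estimated hyperparameters.

```python
# Two hypothetical Gaussian posterior approximations for one scalar HR pixel
m1, v1 = 0.8, 0.05   # mean/variance from an assumed sparse (TV-like) posterior
m2, v2 = 1.2, 0.20   # mean/variance from an assumed SAR posterior
lam = 0.7            # assumed convex combination weight

# argmin_q lam*KL(q||p1) + (1-lam)*KL(q||p2) over Gaussians q is the
# normalized geometric mean: precisions add with weights lam, 1 - lam,
# and the mean is the matching precision-weighted average
prec = lam / v1 + (1 - lam) / v2
v = 1.0 / prec
m = v * (lam * m1 / v1 + (1 - lam) * m2 / v2)
print(m, v)
```

Note how the combined variance is pulled toward the more confident (lower-variance) posterior, and the mean toward its estimate.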
45.
In this paper, we present a new shape-coding approach, which decouples the shape information into two independent signal data sets; the skeleton and the boundary distance from the skeleton. The major benefit of this approach is that it allows for a more flexible tradeoff between approximation error and bit budget. Curves of arbitrary order can be utilized for approximating both the skeleton and distance signals. For a given bit budget for a video frame, we solve the problem of choosing the number and location of the control points for all skeleton and distance signals of all boundaries within a frame, so that the overall distortion is minimized. An operational rate-distortion (ORD) optimal approach using Lagrangian relaxation and a four-dimensional directed acyclic graph (DAG) shortest path algorithm is developed for solving the problem. To reduce the computational complexity from O(N^5) to O(N^3), where N is the number of admissible control points for a skeleton, a suboptimal greedy-trellis search algorithm is proposed and compared with the optimal algorithm. In addition, an even more efficient algorithm with computational complexity O(N^2) that finds an ORD optimal solution using a relaxed distortion criterion is also proposed and compared with the optimal solution. Experimental results demonstrate that our proposed approaches outperform existing ORD optimal approaches, which do not follow the same decomposition of the source data.
46.
The problem of application-layer error control for real-time video transmission over packet lossy networks is commonly addressed via joint source-channel coding (JSCC), where source coding and forward error correction (FEC) are jointly designed to compensate for packet losses. In this paper, we consider hybrid application-layer error correction consisting of FEC and retransmissions. The study is carried out in an integrated joint source-channel coding (IJSCC) framework, where error resilient source coding, channel coding, and error concealment are jointly considered in order to achieve the best video delivery quality. We first show the advantage of the proposed IJSCC framework as compared to a sequential JSCC approach, where error resilient source coding and channel coding are not fully integrated. In the IJSCC framework, we also study the performance of different error control scenarios, such as pure FEC, pure retransmission, and their combination. Pure FEC and application layer retransmissions are shown to each achieve optimal results depending on the packet loss rates and the round-trip time. A hybrid of FEC and retransmissions is shown to outperform each component individually due to its greater flexibility.
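A back-of-the-envelope sketch of the pure-FEC versus pure-retransmission comparison in terms of residual packet loss. The (n, k) code parameters, loss rate, and retransmission budget are assumptions, and a real system must additionally respect the round-trip-time deadline the paper factors in.

```python
from math import comb

def fec_residual_loss(n, k, p):
    # (n, k) FEC: decoding fails when more than n - k of the n packets are lost
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(n - k + 1, n + 1))

def retx_residual_loss(p, max_retx):
    # Packet still missing after the original send plus max_retx retransmissions
    return p ** (1 + max_retx)

# Assumed operating point: 10% i.i.d. packet loss, (12, 10) FEC vs. one retransmission
print(fec_residual_loss(12, 10, 0.1))   # FEC alone struggles at this loss rate
print(retx_residual_loss(0.1, 1))       # one retransmission wins, if the RTT allows it
```

At low loss rates the ordering flips, which is the rate-dependent tradeoff the abstract describes.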
47.
Traditional visual communication systems convey only two-dimensional (2-D) fixed field-of-view (FOV) video information. The viewer is presented with a series of flat, nonstereoscopic images, which fail to provide a realistic sense of depth. Furthermore, traditional video is restricted to only a small part of the scene, based on the director's discretion, and the user is not allowed to "look around" in an environment. The objective of this work is to address both of these issues and develop new techniques for creating stereo panoramic video sequences. A stereo panoramic video sequence should be able to provide the viewer with stereo vision at any direction (complete 360-degree FOV) at video rates. In this paper, we propose a new technique for creating stereo panoramic video using a multicamera approach, thus creating a high-resolution output. We present a setup that is an extension of a previously known approach, developed for the generation of still stereo panoramas, and demonstrate that it is capable of creating high-resolution stereo panoramic video sequences. We further explore the limitations involved in a practical implementation of the setup, namely the limited number of cameras and the nonzero physical size of real cameras. The relevant tradeoffs are identified and studied.
48.
Subspace and similarity metric learning are important issues for image and video analysis in the scenarios of both computer vision and multimedia fields. Many real-world applications, such as image clustering/labeling and video indexing/retrieval, involve feature space dimensionality reduction as well as feature matching metric learning. However, the loss of information from dimensionality reduction may degrade the accuracy of similarity matching. In practice, such basic conflicting requirements for both feature representation efficiency and similarity matching accuracy need to be appropriately addressed. In the style of “Thinking Globally and Fitting Locally”, we develop Locally Embedded Analysis (LEA) based solutions for visual data clustering and retrieval. LEA reveals the essential low-dimensional manifold structure of the data by preserving the local nearest neighbor affinity, and allowing a linear subspace embedding through solving a graph embedded eigenvalue decomposition problem. A visual data clustering algorithm, called Locally Embedded Clustering (LEC), and a local similarity metric learning algorithm for robust video retrieval, called Locally Adaptive Retrieval (LAR), are both designed upon the LEA approach, with variations in local affinity graph modeling. For large size database applications, instead of learning a global metric, we localize the metric learning space with kd-tree partition to localities identified by the indexing process. Simulation results demonstrate the effective performance of proposed solutions in both accuracy and speed aspects.
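A toy in the spirit of the graph-embedded eigenvalue step: build a local nearest-neighbor affinity graph and embed via the smallest eigenvectors of its Laplacian. This uses a Laplacian-eigenmap-style embedding rather than LEA's linear projection, and omits the kd-tree localization; the data layout and neighborhood size k = 3 are assumptions.

```python
import numpy as np

# Toy data: two well-separated clusters laid out along a line in 5-D
t = np.arange(10) * 0.1
X = np.zeros((20, 5))
X[:10, 0] = t          # cluster 1
X[10:, 0] = 5.0 + t    # cluster 2

# Local affinity: symmetric k-nearest-neighbor graph (k = 3 assumed)
d = np.linalg.norm(X[:, None] - X[None], axis=2)
W = np.zeros_like(d)
for i in range(len(X)):
    for j in np.argsort(d[i])[1:4]:   # 3 nearest neighbors, skipping self
        W[i, j] = W[j, i] = 1.0

# Graph Laplacian; its smallest eigenvectors give an embedding that
# preserves the local neighbor affinity (cluster structure included)
L = np.diag(W.sum(axis=1)) - W
vals, vecs = np.linalg.eigh(L)
embedding = vecs[:, 1:3]
print(vals[:3], embedding.shape)
```

Because the two clusters are disconnected in the affinity graph, the null-space eigenvectors are constant within each cluster, so the embedding separates them exactly.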
49.
Signal compression is an important problem encountered in many applications. Various techniques have been proposed over the years for addressing the problem. In this paper, we present a time domain algorithm based on the coding of line segments which are used to approximate the signal. These segments are fit in a way that is optimal in the rate distortion sense. Although the approach is applicable to any type of signal, we focus, in this paper, on the compression of electrocardiogram (ECG) signals. ECG signal compression has traditionally been tackled by heuristic approaches. However, it has been demonstrated [1] that exact optimization algorithms outperform these heuristic approaches by a wide margin with respect to reconstruction error. By formulating the compression problem as a graph theory problem, known optimization theory can be applied in order to yield optimal compression. In this paper, we present an algorithm that will guarantee the smallest possible distortion among all methods applying linear interpolation given an upper bound on the available number of bits. Using a varied signal test set, extensive coding experiments are presented. We compare the results from our coding method to traditional time domain ECG compression methods, as well as to more recently developed frequency domain methods. Evaluation is based both on percentage root-mean-square difference (PRD) performance measure and visual inspection of the reconstructed signals. The results demonstrate that the exact optimization methods have superior performance compared to both traditional ECG compression methods and the frequency domain methods.
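The graph formulation can be made concrete with a toy dynamic program: nodes are sample indices, an edge (i, j) means "keep samples i and j and linearly interpolate in between", and a Lagrangian weight stands in for the bit budget. The signal and weight are invented, and the paper's coder additionally handles quantization and exact bit counting.

```python
import numpy as np

def optimal_segments(sig, lam):
    """Single-source shortest path on a DAG whose nodes are sample indices.
    Edge (i, j) costs the squared error of linearly interpolating sig[i..j]
    plus a Lagrangian penalty lam per retained point."""
    n = len(sig)
    cost = np.full(n, np.inf)
    cost[0] = 0.0
    prev = np.zeros(n, dtype=int)
    for j in range(1, n):
        for i in range(j):
            t = np.linspace(0.0, 1.0, j - i + 1)
            interp = sig[i] + t * (sig[j] - sig[i])
            c = cost[i] + np.sum((sig[i:j + 1] - interp) ** 2) + lam
            if c < cost[j]:
                cost[j], prev[j] = c, i
    path = [n - 1]                 # backtrack the shortest path
    while path[-1] != 0:
        path.append(prev[path[-1]])
    return [int(v) for v in path[::-1]]

# A made-up piecewise-linear "ECG-like" spike: the DP keeps its breakpoints
sig = np.array([0, 1, 2, 3, 10, 3, 2, 1, 0, 0], dtype=float)
print(optimal_segments(sig, lam=0.5))
```

Raising `lam` prunes more points at the price of distortion, which is the rate-distortion tradeoff the abstract optimizes exactly.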
50.
Image restoration using a modified Hopfield network   Cited 12 times (0 self-citations, 12 by others)
A modified Hopfield neural network model for regularized image restoration is presented. The proposed network allows negative autoconnections for each neuron. A set of algorithms using the proposed neural network model is presented, with various updating modes: sequential updates; n-simultaneous updates; and partially asynchronous updates. The sequential algorithm is shown to converge to a local minimum of the energy function after a finite number of iterations. Since an algorithm which updates all n neurons simultaneously is not guaranteed to converge, a modified algorithm is presented, which is called a greedy algorithm. Although the greedy algorithm is not guaranteed to converge to a local minimum, the ℓ1 norm of the residual at a fixed point is bounded. A partially asynchronous algorithm is presented, which allows a neuron to have a bounded time delay to communicate with other neurons. Such an algorithm can eliminate the synchronization overhead of synchronous algorithms.
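A minimal sketch of the sequential update mode: one "neuron" per pixel, a ±1 grey-level move accepted only when the quadratic restoration energy strictly decreases, and sweeps repeated until a local minimum. The 1-D blur, ridge regularizer, and grey-level range are assumptions, and the greedy and asynchronous variants are not shown.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 16
# Assumed circulant 3-tap blur and a small ridge regularizer (not the paper's)
H = sum(np.roll(np.eye(n), k, axis=1) for k in (-1, 0, 1)) / 3.0
x_true = rng.integers(0, 8, n).astype(float)   # grey levels 0..7
y = H @ x_true
A = H.T @ H + 0.01 * np.eye(n)
b = H.T @ y

def energy(x):
    # E(x) = 0.5 x'Ax - b'x, i.e. 0.5||y - Hx||^2 + 0.005||x||^2 up to a constant
    return 0.5 * x @ A @ x - b @ x

x = np.zeros(n)
changed = True
while changed:                    # sequential sweeps until a local minimum
    changed = False
    for i in range(n):
        g = A[i] @ x - b[i]       # gradient of E in coordinate i
        for step in (1.0, -1.0):  # candidate +-1 grey-level updates
            if step * g + 0.5 * A[i, i] < 0:   # exact energy change of the move
                x[i] += step
                changed = True
                break
print(energy(x))
```

Because the energy strictly decreases with every accepted move and the states lie on an integer lattice, the sweep terminates after finitely many iterations, mirroring the convergence result for the sequential algorithm.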