116 search results in total (showing items 51–60).
51.
Social bookmarking enables knowledge sharing and efficient discovery on the web, where users collaborate by tagging documents of interest. Much attention has recently been given to utilizing social bookmarking data to enhance traditional IR tasks, yet far less to estimating the effectiveness of an individual bookmark for a specific task. In this work, we propose a novel framework for social bookmark weighting that allows us to estimate the effectiveness of each bookmark individually for several IR tasks. We show that by weighting bookmarks according to their estimated quality, we can significantly improve social search effectiveness. We further demonstrate that the same framework yields solutions to several recommendation tasks, such as tag recommendation, user recommendation, and document recommendation. Empirical evaluation on real data gathered from two large bookmarking systems demonstrates the effectiveness of the new social bookmark weighting framework.
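The abstract does not spell out the weighting model, so the sketch below uses a hypothetical stand-in: each bookmark is weighted by how many other users assigned the same tag to the same document, and documents are ranked for a tag query by the summed weights of matching bookmarks. The `bookmark_weights` and `search` helpers are illustrative names, not the paper's API.

```python
from collections import defaultdict

# Hypothetical stand-in weighting: a bookmark's weight is the number of
# *other* users who tagged the same document with the same tag.
def bookmark_weights(bookmarks):
    agree = defaultdict(int)
    for user, doc, tag in bookmarks:
        agree[(doc, tag)] += 1
    return {(u, d, t): agree[(d, t)] - 1 for u, d, t in bookmarks}

def search(query_tags, weights):
    """Rank documents by the summed weights of bookmarks matching the query."""
    score = defaultdict(float)
    for (user, doc, tag), w in weights.items():
        if tag in query_tags:
            score[doc] += w
    return sorted(score, key=score.get, reverse=True)

bm = [("u1", "d1", "python"), ("u2", "d1", "python"), ("u3", "d2", "python")]
print(search({"python"}, bookmark_weights(bm)))   # d1 outranks d2
```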
52.
The separation of image content into semantic parts plays a vital role in applications such as compression, enhancement, and restoration. In recent years, several pioneering works have based such a separation on a variational formulation, while others have used independent component analysis and sparsity. This paper presents a novel method for separating images into texture and piecewise smooth (cartoon) parts, exploiting both the variational and the sparsity mechanisms. The method combines the basis pursuit denoising (BPDN) algorithm and the total-variation (TV) regularization scheme. The basic idea is the use of two appropriate dictionaries, one for the representation of textures and the other for the natural scene parts, assumed to be piecewise smooth. Both dictionaries are chosen such that they lead to sparse representations over one type of image content (either texture or piecewise smooth). The use of BPDN with the two amalgamated dictionaries leads to the desired separation, along with noise removal as a by-product. As choosing proper dictionaries is generally hard, TV regularization is employed to better direct the separation process and reduce ringing artifacts. We present a highly efficient numerical scheme to solve the combined optimization problem posed by our model and show several experimental results that validate the algorithm's performance.
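A minimal one-dimensional sketch of the two-dictionary idea, assuming an orthonormal DCT as the texture dictionary and normalized step functions as the piecewise-smooth dictionary, with alternating iterative soft-thresholding (ISTA-style) updates standing in for a full BPDN solver; the paper's TV regularization term and its 2D dictionaries are omitted for brevity.

```python
import numpy as np
from scipy.fft import dct, idct

rng = np.random.default_rng(0)
n = 256
t = np.arange(n)
cartoon = np.where(t < n // 2, 1.0, -0.5)                 # piecewise constant
texture = 0.3 * np.sin(2 * np.pi * 40 * t / n)            # oscillatory
y = cartoon + texture + 0.02 * rng.standard_normal(n)     # noisy mixture

# Two dictionaries, each sparse for one content type: the orthonormal DCT for
# texture, and normalized step functions for the piecewise-constant part.
H = np.tril(np.ones((n, n)))
H /= np.linalg.norm(H, axis=0)

def soft(x, thr):
    return np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)

at = np.zeros(n)                      # DCT (texture) coefficients
ac = np.zeros(n)                      # step-basis (cartoon) coefficients
lam = 0.05
step_t = 1.0                          # DCT is orthonormal: Lipschitz constant 1
step_c = 1.0 / np.linalg.norm(H, 2) ** 2

for _ in range(300):                  # alternating ISTA (proximal) updates
    r = y - idct(at, norm="ortho") - H @ ac
    at = soft(at + step_t * dct(r, norm="ortho"), lam * step_t)
    r = y - idct(at, norm="ortho") - H @ ac
    ac = soft(ac + step_c * (H.T @ r), lam * step_c)

texture_hat = idct(at, norm="ortho")  # separated parts; residual noise is
cartoon_hat = H @ ac                  # absorbed by the soft thresholding
```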
53.
The three main tools in single-image restoration theory are the maximum likelihood (ML) estimator, the maximum a posteriori probability (MAP) estimator, and the set-theoretic approach using projection onto convex sets (POCS). This paper uses these known tools to propose a unified methodology for the more complicated problem of superresolution restoration, in which an improved-resolution image is restored from several geometrically warped, blurred, noisy, and downsampled measured images. The superresolution restoration problem is modeled and analyzed from the ML, MAP, and POCS points of view, yielding a generalization of the known superresolution restoration methods. The proposed restoration approach is general but assumes explicit knowledge of the linear space- and time-variant blur, the (additive Gaussian) noise, the different measured resolutions, and the (smooth) motion characteristics. A hybrid method combining the simplicity of ML with the incorporation of nonellipsoid constraints is presented, giving improved restoration performance compared with the ML and POCS approaches. The hybrid method is shown to converge to the unique optimal solution of a new definition of the optimization problem. Superresolution restoration from motionless measurements is also discussed. Simulations demonstrate the power of the proposed methodology.
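A simplified sketch of the ML (least-squares) restoration step under the stated assumptions: known translational motion, a Gaussian PSF standing in for the blur, and decimation by a fixed factor. The box-constraint projection after each gradient step gestures at the ML/POCS hybrid; all parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, shift

S, BLUR = 2, 1.0                       # decimation factor and PSF width

def forward(x, d):
    # y_k = D B W_k x : warp by known motion d, blur, then decimate
    return gaussian_filter(shift(x, d, order=1), BLUR)[::S, ::S]

def adjoint(r, d, shape):
    # approximate adjoint: zero-fill upsample, blur (symmetric PSF), inverse warp
    up = np.zeros(shape)
    up[::S, ::S] = r
    return shift(gaussian_filter(up, BLUR), tuple(-v for v in d), order=1)

def ml_restore(measurements, motions, shape, iters=100, mu=0.2):
    """Least-squares (ML) superresolution by steepest descent, with a box
    constraint projected after every step as a minimal POCS-style hybrid."""
    x = np.zeros(shape)
    for _ in range(iters):
        g = np.zeros(shape)
        for y, d in zip(measurements, motions):
            g += adjoint(forward(x, d) - y, d, shape)
        x = np.clip(x - mu * g, 0.0, 1.0)   # project onto [0, 1] intensity set
    return x

# usage sketch: four sub-pixel-shifted, blurred, decimated views of a scene
# motions = [(0, 0), (0.5, 0), (0, 0.5), (0.5, 0.5)]
# hr = ml_restore([forward(truth, d) for d in motions], motions, truth.shape)
```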
54.
Model Checking with Strong Fairness
In this paper we present a coherent framework for symbolic model checking of linear-time temporal logic (LTL) properties over finite-state reactive systems, taking full fairness constraints into consideration. We use the computational model of a fair discrete system (FDS), which takes into account both justice (weak fairness) and compassion (strong fairness). The approach presented here reduces the model-checking problem to the question of whether a given FDS is feasible (i.e., has at least one computation). The contribution of the paper is twofold: on the methodological level, it presents a direct, self-contained exposition of full LTL symbolic model checking without resorting to reductions to either the μ-calculus or CTL. On the technical level, it extends previous methods by dealing with compassion at the algorithmic level instead of either adding it to the specification or transforming compassion to justice. Finally, we extend CTL with past operators, and show that the basic symbolic feasibility algorithm presented here can be used to model check an arbitrary CTL formula over an FDS with full fairness constraints. This research was supported in part by an infrastructure grant from the Israeli Ministry of Science and Art, a grant from the U.S.-Israel Binational Science Foundation, and a gift from Intel.
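The paper's algorithm is symbolic (BDD-based); the sketch below is an explicit-state analogue of the feasibility check, assuming the transition relation is given as a networkx digraph. An FDS is feasible iff some reachable, nontrivial strongly connected subgraph meets every justice set and, for each compassion pair (p, q), either contains a q-state or can shed its p-states and still retain a fair cycle.

```python
import networkx as nx

def feasible(g, init, justice, compassion):
    """Explicit-state analogue of the symbolic feasibility check.
    g: nx.DiGraph transition relation; init: iterable of initial states;
    justice: list of state sets; compassion: list of (p, q) pairs of sets."""
    reach = set()
    for s in init:
        if s in g:
            reach |= {s} | nx.descendants(g, s)
    sub = g.subgraph(reach)
    return any(_fair(sub.subgraph(c), justice, compassion)
               for c in nx.strongly_connected_components(sub)
               if _has_cycle(sub, c))

def _has_cycle(g, c):
    # a component supports an infinite run only if it contains a cycle
    return len(c) > 1 or any(g.has_edge(s, s) for s in c)

def _fair(scc, justice, compassion):
    states = set(scc)
    if any(not (J & states) for J in justice):
        return False                       # some justice set is never visited
    bad = set()
    for p, q in compassion:
        if (p & states) and not (q & states):
            bad |= p & states              # p recurs but q never does: drop p
    if not bad:
        return True
    rest = scc.subgraph(states - bad)      # re-decompose without bad states
    return any(_fair(rest.subgraph(c), justice, compassion)
               for c in nx.strongly_connected_components(rest)
               if _has_cycle(rest, c))
```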
55.
An efficient approach for face compression is introduced. Restricting the family of images to frontal facial mug shots enables us first to geometrically deform a given face into a canonical form in which the same facial features are mapped to the same spatial locations. Next, we break the image into tiles and model each image tile in a compact manner. Modeling the tile content relies on clustering the same tile location over many training images. A tree of vector-quantization dictionaries is constructed per location, and lossy compression is achieved using bit allocation according to the significance of each tile. Repeating this modeling/coding scheme over several scales, the resulting multiscale algorithm is demonstrated to compress facial images at very low bit rates while maintaining high visual quality, significantly outperforming JPEG-2000.
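A single-scale sketch of the per-tile vector-quantization step, assuming aligned (canonical-form) face images and using scipy's k-means in place of the paper's tree of dictionaries; the multiscale repetition and significance-driven bit allocation are omitted.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def train_tile_codebooks(faces, tile=8, k=32, seed=0):
    """faces: (N, H, W) array of geometrically aligned face images.
    Trains one VQ codebook per tile location across the training set."""
    n, h, w = faces.shape
    books = {}
    for i in range(0, h, tile):
        for j in range(0, w, tile):
            patches = faces[:, i:i + tile, j:j + tile].reshape(n, -1)
            books[(i, j)] = kmeans2(patches.astype(float), k,
                                    minit="++", seed=seed)[0]
    return books

def encode(face, books, tile=8):
    """Code each tile as the index of its nearest codeword (log2(k) bits)."""
    codes = {}
    for (i, j), cb in books.items():
        p = face[i:i + tile, j:j + tile].reshape(-1)
        codes[(i, j)] = int(np.argmin(((cb - p) ** 2).sum(axis=1)))
    return codes

def decode(codes, books, shape, tile=8):
    out = np.zeros(shape)
    for (i, j), idx in codes.items():
        out[i:i + tile, j:j + tile] = books[(i, j)][idx].reshape(tile, tile)
    return out
```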
56.
It is well known that discrimination response variability increases with stimulus intensity, a relationship closely related to Weber's law. It is also axiomatic that sensation magnitude increases with stimulus intensity. Following earlier researchers such as Thurstone, Garner, and Durlach and Braida, we explored a new method of exploiting these relationships to estimate the power-function exponent relating sound pressure level to loudness, using the accuracy with which listeners could identify the intensity of pure tones. The log standard deviation of the normally distributed identification errors increases linearly with the stimulus range in decibels, and the slope, a, of the regression is proportional to the loudness exponent, n. Interestingly, in a demonstration experiment, the loudness exponent estimated in this way was greater for females than for males.
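A sketch of the estimation step as described: regress the log standard deviation of identification errors on the stimulus range and scale the slope to obtain the exponent. Both the data values and the proportionality constant K below are placeholders, not values from the paper.

```python
import numpy as np

# Illustrative data only: stimulus range in dB vs. log10 SD of identification
# errors; neither the values nor the constant K come from the paper.
ranges_db = np.array([10.0, 20.0, 40.0, 60.0, 80.0])
log_sd = np.array([0.15, 0.33, 0.70, 1.05, 1.42])

a, b = np.polyfit(ranges_db, log_sd, 1)   # slope a of the linear regression
K = 0.05                                  # placeholder proportionality constant
n_hat = a / K                             # exponent estimate, since n ∝ a
print(f"slope a = {a:.4f}, estimated exponent n = {n_hat:.2f}")
```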
57.
On the origin of the bilateral filter and ways to improve it
Additive noise removal from a given signal is an important problem in signal processing. Among the most appealing aspects of this field are the ability to ground it in a well-established theory and the fact that the proposed algorithms are efficient and practical. Adaptive methods based on anisotropic diffusion (AD), weighted least squares (WLS), and robust estimation (RE) were proposed as iterative, locally adaptive machines for noise removal. Tomasi and Manduchi (see Proc. 6th Int. Conf. Computer Vision, New Delhi, India, p.839-46, 1998) proposed an alternative, noniterative bilateral filter for removing noise from images. This filter was shown to give results similar to, and possibly better than, those obtained by iterative approaches. However, the bilateral filter was proposed as an intuitive tool without a theoretical connection to the classical approaches. We propose such a bridge and show that the bilateral filter also emerges from the Bayesian approach, as a single iteration of a well-known iterative algorithm. Based on this observation, we also show how the bilateral filter can be improved and extended to treat more general reconstruction problems.
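For reference, a plain numpy implementation of the standard (noniterative) bilateral filter discussed here; under the paper's reading, one such pass corresponds to a single iteration of a weighted-least-squares style scheme. Parameter values are illustrative.

```python
import numpy as np

def bilateral(img, sigma_s=2.0, sigma_r=0.1, radius=5):
    """Standard single-pass bilateral filter for a grayscale float image."""
    h, w = img.shape
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    yy, xx = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma_s ** 2))  # domain weights
    pad = np.pad(img, radius, mode="reflect")
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            nb = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            # range weight penalizes intensity differences (edge preservation)
            wgt = spatial[dy + radius, dx + radius] * \
                  np.exp(-(nb - img) ** 2 / (2 * sigma_r ** 2))
            out += wgt * nb
            norm += wgt
    return out / norm
```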
58.
Experiments on the absolute identification of pure tones were conducted at a single frequency with three subjects (aged 18, 25, and 52 years) to explore several effects. We measured the change in transmitted information as the stimulus range was varied, as well as the change as the number of categories within a fixed range was increased. In the former case, information increased with increasing range. In the latter case, information increased with the number of categories, but the increase was due to a purely mathematical effect. Transmitted information was estimated by means of a computer simulation designed to overcome, in part, small-sample bias. This simulator may help others calculate transmitted or mutual information accurately using a minimum number of experimental trials. The graph of calculated information against number of trials was found to assume a characteristic shape.
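A sketch of the plug-in estimate of transmitted information from a confusion matrix, followed by a small simulation showing the small-sample bias the abstract's simulator is designed to correct: even a channel carrying no information yields a positive estimate that shrinks only as the number of trials grows.

```python
import numpy as np

def transmitted_info(confusion):
    """Mutual information (bits) of a stimulus-response confusion matrix."""
    p = confusion / confusion.sum()
    ps = p.sum(axis=1, keepdims=True)   # stimulus marginal
    pr = p.sum(axis=0, keepdims=True)   # response marginal
    nz = p > 0
    return float((p[nz] * np.log2(p[nz] / (ps @ pr)[nz])).sum())

# Small-sample bias demo: responses are independent of stimuli, so the true
# transmitted information is zero, yet the plug-in estimate is positive and
# only approaches zero as the number of trials grows.
rng = np.random.default_rng(0)
k = 10                                  # identification categories
for trials in (50, 200, 1000, 10000):
    stim = rng.integers(0, k, trials)
    resp = rng.integers(0, k, trials)
    cm = np.zeros((k, k))
    np.add.at(cm, (stim, resp), 1)      # accumulate the confusion matrix
    print(trials, round(transmitted_info(cm), 3))
```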