Full-text access type
Paid full text | 284 articles |
Free | 83 articles |
Free (domestic) | 72 articles |
Subject category
Electrical engineering | 8 articles |
General | 31 articles |
Chemical industry | 1 article |
Metal processing | 1 article |
Machinery and instrumentation | 31 articles |
Building science | 6 articles |
Mining engineering | 1 article |
Light industry | 1 article |
Water resources engineering | 5 articles |
Oil and natural gas | 1 article |
Weapons industry | 4 articles |
Radio and electronics | 89 articles |
General industrial technology | 19 articles |
Metallurgy | 1 article |
Automation technology | 240 articles |
Publication year
2024 | 1 article |
2023 | 2 articles |
2022 | 13 articles |
2021 | 9 articles |
2020 | 19 articles |
2019 | 15 articles |
2018 | 10 articles |
2017 | 12 articles |
2016 | 21 articles |
2015 | 20 articles |
2014 | 29 articles |
2013 | 33 articles |
2012 | 39 articles |
2011 | 31 articles |
2010 | 18 articles |
2009 | 31 articles |
2008 | 21 articles |
2007 | 30 articles |
2006 | 16 articles |
2005 | 18 articles |
2004 | 17 articles |
2003 | 11 articles |
2002 | 3 articles |
2001 | 1 article |
2000 | 3 articles |
1999 | 3 articles |
1998 | 3 articles |
1997 | 2 articles |
1996 | 1 article |
1994 | 1 article |
1992 | 3 articles |
1990 | 1 article |
1988 | 1 article |
1980 | 1 article |
Sort order: 439 results in total; search took 15 ms
1.
Differences between oculomotor and perceptual artifacts for temporally limited head mounted displays
Alexander Goettker Kevin J. MacKenzie T. Scott Murdison 《Journal of the Society for Information Display》2020,28(6):509-519
We used perceptual and oculomotor measures to understand the negative impacts of low (phantom array) and high (motion blur) duty cycles with a high-speed, AR-like head-mounted display prototype. We observed large intersubject variability in the detection of phantom array artifacts but a highly consistent and systematic effect on saccadic eye movement targeting during low-duty-cycle presentations. This adverse effect on saccade endpoints was also related to an increased error rate in a perceptual discrimination task, showing a direct effect of display duty cycle on perceptual quality. For high duty cycles, the probability of detecting motion blur increased during head movements, and this effect was elevated at lower refresh rates. We did not find an impact of the temporal display characteristics on compensatory eye movements during head motion (e.g., VOR). Together, our results allow us to quantify the tradeoffs among the different negative spatiotemporal impacts of user movements and to make recommendations for optimized temporal HMD parameters.
2.
Objective: Microscopic optical imaging suffers from a small depth of field and is prone to blur, making it difficult to accurately assess image blur from the point spread function of geometric optics and, in turn, to compute scene depth. Moreover, traditional methods that use edge-detection operators to measure changes in image blur lack a functional relationship with scene depth, which limits depth-estimation accuracy. This paper therefore proposes a method for obtaining the curve relating imaging blur to scene depth in a microscopic optical system. Method: Starting from the optical transfer characteristics of the microscopic system, we establish a mathematical relationship among the optical path difference in the optical transfer function, a high-frequency energy parameter, and scene depth, and obtain an analytic function between imaging blur and scene depth through normalization and curve fitting. Results: To verify the derived relationship, depth was first computed from blurred images of a nanoscale square grating; the measured mean depth error was 0.008 μm (a relative error of 0.8%), an accuracy improvement of about 73% over a method that compares pixel-by-pixel brightness between sharp and blurred images and searches for the minimum brightness difference by least squares. Sharp reconstruction of the blurred grating images based on the measured depth then showed clear improvements in both mean gradient and Laplacian value, with higher accuracy and better stability than conventional reconstruction based on a Gaussian point spread function. Finally, depth computation on blurred grating images with various shapes and brightness characteristics demonstrated the generality of the blur-depth curve for different scenes. Conclusion: The derived functional relationship reflects more intuitively how system parameters affect the optically blurred imaging process. Characterizing blur with the high-frequency energy parameter both measures image blur accurately and relates it directly to scene depth. Once the optical system parameters are fixed, the normalized blur-depth relationship is unaffected by differences in image texture, brightness, and other characteristics, making it robust, convenient, and time-saving.
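The idea of using a high-frequency energy parameter as a blur measure can be illustrated with a toy sketch. Here "high-frequency energy" is approximated by summed squared neighbor differences (gradient energy) rather than the paper's OTF-based definition, and `box_blur` plus a checkerboard pattern are hypothetical stand-ins for defocus blur and scene texture:

```python
def high_freq_energy(img):
    """Sum of squared horizontal/vertical pixel differences: a simple
    proxy for the high-frequency energy of a 2-D image (list of rows)."""
    h, w = len(img), len(img[0])
    e = 0.0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                e += (img[y][x + 1] - img[y][x]) ** 2
            if y + 1 < h:
                e += (img[y + 1][x] - img[y][x]) ** 2
    return e

def box_blur(img):
    """3x3 mean filter, standing in for defocus blur."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [img[j][i]
                    for j in range(max(0, y - 1), min(h, y + 2))
                    for i in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)
    return out

# A sharp checkerboard loses high-frequency energy when blurred,
# which is what lets the energy parameter track blur (and hence depth).
sharp = [[float((x + y) % 2) for x in range(8)] for y in range(8)]
blurred = box_blur(sharp)
print(high_freq_energy(sharp) > high_freq_energy(blurred))  # True
```

In the paper this energy would be normalized and fit against known depths to produce the blur-depth curve; the sketch only shows why the measure decreases monotonically with blur.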
3.
Objective: The analysis and recognition of blurred images is an important direction in the field of image analysis and recognition. Some images are formed while the imaging system and the object rotate relative to each other, e.g., the rotational motion blur in guidance images caused by the high-speed spin of a missile. Most recognition methods for such images require a "deblurring" preprocessing step, which has high computational time complexity and is ill-posed. We therefore propose extracting invariant features directly from rotational-motion-blurred images for target retrieval and recognition. Method: Starting from the degradation model of rotational motion blur, we propose rotational-motion-blur Gaussian-Hermite (GH) moments and construct a feature vector (rotational motion blur Gaussian-Hermite moment invariants, RMB_GHMI-5) of five GH moment invariants that are invariant to both rotation and rotational motion blur, enabling target retrieval and recognition directly from rotated and rotationally blurred images without a complex deblurring preprocessing stage. Results: Invariance experiments on the USC-SIPI (University of Southern California — Signal and Image Processing Institute) dataset, in which the original images were subjected to various degrees of rotation combined with rotational motion blur, show that RMB_GHMI-5 is stable and invariant under both transformations. In image-retrieval comparisons with four similar methods on two datasets at 80% recall, our feature vector has fewer dimensions; relative to the second-best feature vector, accuracy on the Flavia dataset improves by 25.89%, 39.95%, 22.79%, and 35.80% under Gaussian, salt-and-pepper, Poisson, and multiplicative noise, respectively, and on the Butterfly Image dataset by 4.79%, 7.63%, 5.65%, and 18.31%. Comparison experiments on the above eight test sets further verify the effectiveness of the fusion algorithm: the proposed combination of GH moments and geometric moments significantly improves retrieval performance. Conclusion: The proposed RMB_GHMI-5 feature vector is invariant and stable under rotation and rotational motion blur and performs excellently in noise-robust image retrieval, giving it greater practical value than comparable methods.
4.
5.
This paper addresses issues in visual tracking where videos contain object intersections, pose changes, occlusions, illumination changes, motion blur, and backgrounds with similar color distributions. We apply the structural local sparse representation method to analyze the background region around the target. After that, we reduce the probability of prominent features in the background and add new information to the target model. In addition, a weighted search method is proposed to find the best candidate target region; to a certain extent, this method mitigates the local optimization problem. The proposed scheme, designed to track a single human through complex scenarios, has been tested on several video sequences, and existing tracking methods were applied to the same videos for comparison. Experimental results show that the proposed tracking scheme demonstrates very promising performance in terms of robustness to occlusions, appearance changes, and similarly colored backgrounds.
6.
《Displays》2016
Human 3D perception provides an important clue to removing redundancy in stereoscopic 3D (S3D) videos. Because objects outside the binocular fusion limit cannot be fused on the retina, the human visual system (HVS) blurs them according to the depth-of-focus (DOF) effect to increase the binocular fusion limit and suppress diplopia, i.e., double vision. Based on human depth perception, we propose a disparity-based just-noticeable-difference (DJND) model to save bit-rate and improve visual comfort in S3D videos. We combine the DOF blur effect with conventional JND models in the pixel domain. First, we use disparity information to obtain the average disparity value of each block. Then, we integrate the DOF blur effect into luminance JND (LJND) by a selective low-pass Gaussian filter to minimize the visual stimulus in S3D videos. Finally, we incorporate disparity information into the filtered JND models to obtain DJND. Experimental results demonstrate that the proposed method successfully improves both image quality and visual comfort when viewing S3D videos without increasing the bit-rate.
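The block-wise, disparity-driven selection of the DOF low-pass filter can be sketched as follows. The mapping from average disparity to Gaussian standard deviation, and the `fusion_limit` and `gain` constants, are illustrative assumptions, not values from the paper:

```python
def block_avg_disparity(disp_block):
    """Average disparity of one block of the disparity map."""
    flat = [d for row in disp_block for d in row]
    return sum(flat) / len(flat)

def dof_sigma(avg_disparity, fusion_limit=1.0, gain=0.5):
    """Hypothetical mapping from a block's average disparity to the
    std-dev of the selective DOF low-pass Gaussian: blocks inside the
    binocular fusion limit stay sharp (sigma = 0); beyond it, the blur
    strength grows with the excess disparity."""
    excess = abs(avg_disparity) - fusion_limit
    return gain * excess if excess > 0 else 0.0

near = [[0.2, 0.3], [0.1, 0.2]]  # within the fusion limit: no DOF blur
far = [[3.0, 3.2], [2.8, 3.0]]   # beyond it: Gaussian filtering applied
print(dof_sigma(block_avg_disparity(near)))  # 0.0
```

In the abstract's pipeline, blocks with nonzero sigma would have their LJND thresholds computed on the Gaussian-filtered luminance before the disparity term is folded in to form DJND.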
7.
Hyungki Hong 《Journal of the Society for Information Display》2017,25(7):450-457
The current 3D crosstalk equation was defined from the characteristics of glasses-based 3D displays. This equation is not suitable for multi-view 3D displays with large view numbers, as it yields inappropriately large values. In glasses-based 3D displays, double images occur at large depths; in multi-view 3D displays with large view numbers, however, blur spreads over a greater width as depth increases. Hence, the blur phenomenon of multi-view 3D displays was investigated to understand their unique characteristics. For this purpose, ray-tracing software was used to simulate the 3D display image seen at the designed viewing distance, to calculate the relative luminance distribution, and to quantify the relation between blur and depth. The calculated results showed that incomplete image separation caused the overlap of multiple view images and the resulting blur. The blur edge width (BEW) was proportional to the horizontal disparity and related to the depth. A new quantity, BEWR = (BEW) / (binocular disparity), was defined, and its usefulness for 3D characterization was investigated. BEW and BEWR may be useful as new measurement items for characterizing multi-view 3D displays with respect to 3D crosstalk.
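The BEWR quantity defined in this abstract is a simple ratio, sketched below; the sample values (12 and 30 pixels) are purely illustrative:

```python
def bewr(blur_edge_width, binocular_disparity):
    """BEWR = (blur edge width) / (binocular disparity), as defined in
    the abstract above; both quantities in the same units (e.g. pixels)."""
    if binocular_disparity == 0:
        raise ValueError("binocular disparity must be non-zero")
    return blur_edge_width / binocular_disparity

# Since BEW was found proportional to horizontal disparity, BEWR should
# stay roughly constant across depths for a given multi-view display,
# which is what makes it attractive as a characterization metric.
print(bewr(12.0, 30.0))  # 0.4
```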
8.
Anisotropic blur and mis-registration frequently happen in multi-focus images due to object or camera motion. These factors severely degrade the fusion quality of multi-focus images. In this paper, we present a novel multi-scale weighted gradient-based fusion method to solve this problem. This method is based on a multi-scale structure-based focus measure that reflects the sharpness of edge and corner structures at multiple scales. This focus measure is derived based on an image structure saliency and introduced to determine the gradient weights in the proposed gradient-based fusion method for multi-focus images with a novel multi-scale approach. In particular, we focus on a two-scale scheme, i.e., a large scale and a small scale, to effectively solve the fusion problems raised by anisotropic blur and mis-registration. The large-scale structure-based focus measure is used first to attenuate the impacts of anisotropic blur and mis-registration on the focused region detection, and then the gradient weights near the boundaries of the focused regions are carefully determined by applying the small-scale focus measure. Experimental results clearly demonstrate that the proposed method outperforms the conventional fusion methods in the presence of anisotropic blur and mis-registration.
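The core idea of focus-measure-weighted gradient fusion can be shown with a toy sketch. A single-scale gradient energy stands in for the paper's multi-scale structure-based focus measure, and a per-pixel weighted average replaces the paper's careful two-scale boundary handling; both are simplifying assumptions:

```python
def grad_energy(img, x, y):
    """Local gradient magnitude (forward differences): a simple
    single-scale stand-in for a structure-based focus measure."""
    h, w = len(img), len(img[0])
    gx = img[y][min(x + 1, w - 1)] - img[y][x]
    gy = img[min(y + 1, h - 1)][x] - img[y][x]
    return gx * gx + gy * gy

def weighted_gradient_fuse(img_a, img_b):
    """Toy fusion of two multi-focus images: each output pixel is a
    focus-measure-weighted average of the inputs, so the locally
    sharper source dominates."""
    h, w = len(img_a), len(img_a[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            wa = grad_energy(img_a, x, y)
            wb = grad_energy(img_b, x, y)
            s = wa + wb
            out[y][x] = ((wa * img_a[y][x] + wb * img_b[y][x]) / s
                         if s else 0.5 * (img_a[y][x] + img_b[y][x]))
    return out

# Where one input is textured (in focus) and the other flat (defocused),
# the fused pixel follows the textured input.
a = [[0.0, 1.0], [1.0, 0.0]]  # "sharp" source
b = [[0.5, 0.5], [0.5, 0.5]]  # "blurred" (flat) source
fused = weighted_gradient_fuse(a, b)
```

The paper's two-scale scheme would compute the large-scale measure first to localize focused regions robustly, then refine the weights near region boundaries with the small-scale measure; the sketch collapses both steps into one.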
9.
10.