This paper presents a robotic head that enables social robots to attend to scene saliency with bio-inspired saccadic behaviors. Scene saliency is determined from low-level static scene information, motion, and object prior knowledge. With the proposed control scheme, the robotic head shifts its gaze toward the extracted saliency spots in a saccadic manner while obeying eye–head coordination laws. Results from a simulation study and real-world applications show the effectiveness of the proposed method in discovering scene saliency and producing human-like head motion. The proposed techniques could be applied to social robots to improve social awareness and user experience in human–robot interaction.
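The abstract describes fusing three cues (static saliency, motion, and object prior knowledge) and directing gaze toward the resulting saliency spots. A minimal sketch of such a cue-fusion step is below; the function names, the normalization scheme, and the weights `w` are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def combined_saliency(static_map, motion_map, prior_map, w=(0.4, 0.4, 0.2)):
    """Fuse three saliency cues into one map.

    Each cue is normalized to [0, 1] and combined as a weighted sum.
    The weights are illustrative; the paper's actual fusion rule may differ.
    """
    maps = [static_map, motion_map, prior_map]
    normed = [m / m.max() if m.max() > 0 else m for m in maps]
    return sum(wi * mi for wi, mi in zip(w, normed))

def gaze_target(saliency):
    """Saccade target = location of the most salient pixel (row, col)."""
    return np.unravel_index(np.argmax(saliency), saliency.shape)
```

In a full system the `(row, col)` target would then be converted to pan/tilt angles for the head controller, subject to the eye–head coordination constraints the abstract mentions.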
Mathematical morphology offers popular image processing tools, successfully used for binary and grayscale images. Recently, its extension to color images has attracted interest and several approaches have been proposed. Owing to various issues arising from the vectorial nature of the data, none of them has established itself as a generally valid solution. We propose a probabilistic pseudo-morphological approach that estimates two pseudo-extrema based on the Chebyshev inequality. The framework embeds a parameter that controls the linear versus non-linear behavior of the probabilistic pseudo-morphological operators. We compare our approach for grayscale images with classical morphology and emphasize the impact of this parameter on the results. We then extend the approach to color images using principal component analysis. As validation criteria, we use estimation of the color fractal dimension, color textured image segmentation, and color texture classification. Furthermore, we compare the proposed method against two widely used approaches, one morphological and one pseudo-morphological.
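The Chebyshev inequality bounds the probability of deviating from the mean: P(|X − μ| ≥ kσ) ≤ 1/k², so μ − kσ and μ + kσ act as probabilistic stand-ins for the window minimum and maximum used by classical erosion and dilation. A minimal grayscale sketch of this idea follows; the exact estimator and the role of `k` here are assumptions for illustration, not the paper's formulation:

```python
import numpy as np

def pseudo_extrema(window, k=2.0):
    """Estimate a (pseudo-min, pseudo-max) pair for a sample window.

    By Chebyshev's inequality, values rarely fall outside mu +/- k*sigma,
    so these bounds stand in for the true extrema. The parameter k is an
    illustrative stand-in for the paper's linear/non-linear control knob.
    """
    mu = float(np.mean(window))
    sigma = float(np.std(window))
    return mu - k * sigma, mu + k * sigma

def pseudo_erode(img, size=3, k=2.0):
    """Pseudo-erosion: replace each pixel by the window's pseudo-minimum."""
    pad = size // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = pseudo_extrema(padded[i:i + size, j:j + size], k)[0]
    return out
```

Note that for a constant window σ = 0, so both pseudo-extrema collapse to the mean, matching the behavior of classical erosion/dilation on flat regions.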
Over the last few years, there has been growing interest in augmented reality (AR) technology for education. However, current AR education applications are often used merely as a new type of knowledge display platform and cannot fully participate in educational activities to improve educational outcomes. To enable AR technology to participate in educational activities more effectively, and following learning-by-doing theory, we explore the form of a future experimental course and propose a new AR-based multimedia environment for experimental education. The framework of the multimedia environment consists of three components: the AR experiment authoring tool, the AR experiment application, and the management application. In this AR-based multimedia environment, teachers can independently create AR experiments using the what-you-see-is-what-you-get (WYSIWYG) editing method. Students can manipulate the AR-based experimental objects to complete experiments in class. Moreover, teachers can observe students' experimental behaviour, obtain evaluations in real time, and even guide students remotely. We also present an application case of a chemistry experiment and report the results of a usability test, demonstrating improved participation of AR technology in educational activities.