In this paper, the dynamic characteristics of a piezoelectric inkjet print head were studied by analyzing its vibration modes and resonant frequencies. Modal analysis was carried out using the multiphysics simulation software COMSOL, and the vibration characteristics of the print head were measured experimentally. The piezoelectric actuator and the print head were excited by two basic excitation methods: a fast sine sweep and white noise, respectively. The velocity response was measured with a laser Doppler vibrometer, and the power spectrum was computed in MATLAB to obtain the first-order resonant frequency. The results show that the first-order mode is the most suitable for droplet formation and ejection, and that the print head in the working state indeed operates in the first-order mode. The fluid–solid coupling effect strongly influences the shape and amplitude of the vibration curve: the presence of liquid lowers the resonance peak, and strong liquid motion affects the number of resonance peaks.
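The abstract's measurement step, estimating the first resonant frequency as the dominant peak of the power spectrum of a measured velocity signal, can be sketched as follows. This is a minimal illustration in Python (the paper used MATLAB); the sampling rate, the 25 kHz resonance, and the noise level are invented for the example and are not values from the paper.

```python
import numpy as np
from scipy.signal import welch

def first_resonant_frequency(velocity, fs):
    """Estimate the dominant resonant frequency of a velocity signal
    as the peak of its Welch power spectral density estimate."""
    freqs, psd = welch(velocity, fs=fs, nperseg=4096)
    return freqs[np.argmax(psd)]

# Synthetic stand-in for a vibrometer trace: a 25 kHz tone buried in
# white noise (illustrative values, not taken from the paper).
fs = 1_000_000                      # 1 MHz sampling rate
t = np.arange(0, 0.05, 1 / fs)
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 25_000 * t) + 0.2 * rng.standard_normal(t.size)
print(first_resonant_frequency(signal, fs))  # close to 25 kHz
```

In a white-noise excitation test, the same peak-picking on the response spectrum (or on the response/excitation ratio) yields the resonance; for a sine sweep, one would instead track the response amplitude against the instantaneous drive frequency.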
Facial expression and emotion recognition from thermal infrared images has attracted increasing attention in recent years. However, the features adopted in current work are either temperature statistics extracted from facial regions of interest or hand-crafted features commonly used in the visible spectrum; to date, no image features have been designed specifically for thermal infrared images. In this paper, we propose using a deep Boltzmann machine to learn thermal features for emotion recognition from thermal infrared facial images. First, the face is located and normalized in the thermal infrared images. Then, a two-layer deep Boltzmann machine model is trained, and its parameters are fine-tuned for emotion recognition after the pre-training stage of feature learning. Comparative experiments on the NVIE database demonstrate that our approach outperforms approaches using temperature statistics or hand-crafted features borrowed from the visible domain. The features learned from the forehead, eye, and mouth regions are more effective for discriminating the valence dimension of emotion than those from other facial areas. In addition, our study shows that adding unlabeled data from another database during training can further improve feature learning performance.
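The pipeline described above, unsupervised pre-training of a two-layer Boltzmann-machine model followed by supervised fine-tuning for emotion classification, can be sketched roughly as below. As an assumption for illustration, the sketch substitutes greedy layer-wise training of two stacked restricted Boltzmann machines (scikit-learn's `BernoulliRBM`) for the joint DBM training used in the paper, and a logistic-regression head stands in for the fine-tuned classifier; the data are random stand-ins for normalized thermal face patches, not NVIE images.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Synthetic stand-in for normalized thermal patches: 200 samples of
# 64 "pixels" in [0, 1] with two fake emotion classes (hypothetical data).
rng = np.random.default_rng(0)
X = rng.random((200, 64))
y = (X[:, :32].mean(axis=1) > X[:, 32:].mean(axis=1)).astype(int)

# Two stacked RBMs learn features layer by layer (greedy pre-training,
# an approximation of DBM training); the logistic regression on top
# plays the role of the supervised fine-tuning stage.
model = Pipeline([
    ("rbm1", BernoulliRBM(n_components=32, learning_rate=0.05,
                          n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=16, learning_rate=0.05,
                          n_iter=20, random_state=0)),
    ("clf", LogisticRegression(max_iter=1000)),
])
model.fit(X, y)
print(model.score(X, y))
```

Each RBM's hidden-unit activations (probabilities in [0, 1]) feed the next layer, which is why Bernoulli units chain cleanly here; a faithful DBM would instead train both layers jointly with mean-field inference.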