Similar Documents
20 similar documents found (search time: 25 ms)
1.
Two host materials, SFCA and SFCC, consisting of a diphenylamine or carbazole unit linked to a spiro-fused phenyl carbazole (SFC) backbone, were designed and synthesized. By choosing a meta linkage between the diphenylamine/carbazole units and the SFC ring, higher triplet energies are easily achieved for the two new materials, meaning they can serve as effective hosts for the popular blue phosphorescent emitter iridium(III) bis[(4,6-difluorophenyl)pyridinato-N,C2′] picolinate (FIrpic, ET = 2.65 eV). In addition, the sterically demanding SFC structure guarantees good thermal stability. Their thermal, photophysical, and electroluminescent properties were systematically investigated. Blue phosphorescent OLEDs using the two materials as hosts and FIrpic as the dopant exhibited excellent performance, with maximum current efficiencies of 33.9 and 40.8 cd/A, respectively.

2.
The synthesis, characterization, and solar-cell performance of PCDTBT and its highly soluble analogue hexyl-PCDTBT, bearing cross-conjugated benzoyl moieties at the carbazole comonomer, are presented. Through the use of model reactions and time-controlled microwave-assisted Suzuki polycondensation, base-induced cleavage of the benzoyl group from the polymer backbone is successfully suppressed. Compared to the commonly used symmetrically branched alkyl motif, the benzoyl substituent lowers the energy levels of PCDTBT as well as the band gap, and consequently increases the energy of the charge-transfer state in blends with PC71BM. As a result, photovoltaic diodes with high open-circuit voltages above 1 V are realized.

3.
Microelectronics Journal, 2015, 46(7): 598-616
Classical manufacturing test verifies that a circuit is fault-free at fabrication time; however, it cannot detect faults that occur after deployment or during operation. As integration complexity rises, the frequency of such failures increases, making on-line testing (OLT) an essential part of design for testability. Most work on OLT considers the single stuck-at fault model, which in modern integration technology captures only a small fraction of real defects; as a remedy, advanced fault models such as bridging faults, transition faults, and delay faults are now being considered. In this paper we concentrate on bridging faults for OLT. Reported OLT schemes using the bridging fault model have considered non-feedback faults only, the rationale being that feedback bridging faults may cause oscillations and are therefore difficult to detect on-line with logic testing. However, not all feedback bridging faults create oscillations, and even for those that do, there are test patterns for which the fault effect is manifested logically. This paper shows that the number of such cases is significant, and that discarding them hurts OLT in terms of fault coverage and detection latency. The present work develops an OLT scheme covering those bridging faults, feedback ones included, that can be detected with logic test patterns. The scheme is based on Binary Decision Diagrams, which enables it to handle fairly large circuits. Results on the ISCAS'89 benchmarks show that considering feedback bridging faults along with non-feedback ones improves fault coverage, while the increase in area overhead is marginal compared to schemes involving non-feedback faults only.
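As a concrete illustration of why logic detection can work for bridges, the sketch below brute-forces a hypothetical three-input circuit (all gates and net names are made up, and this is not the paper's BDD-based scheme) and lists the input patterns under which a wired-AND bridge between two internal nets is logically visible at the output.

```python
# A brute-force sketch (not the paper's BDD-based scheme): enumerate the
# input patterns of a hypothetical 3-input circuit that logically expose a
# wired-AND bridging fault between internal nets n1 and n2.
from itertools import product

def good_circuit(a, b, c):
    n1 = a & b          # net n1
    n2 = b ^ c          # net n2
    return n1 | n2      # primary output

def bridged_circuit(a, b, c):
    n1 = a & b
    n2 = b ^ c
    v = n1 & n2         # wired-AND bridge drags both nets to n1 AND n2
    return v | v        # the same OR gate now sees the bridged value twice

detecting = [p for p in product((0, 1), repeat=3)
             if good_circuit(*p) != bridged_circuit(*p)]
print("patterns that logically detect the bridge:", detecting)
```

For (a, b, c) = (0, 1, 0), for instance, the fault-free output is 1 while the bridged output is 0, so an on-line comparison flags the fault.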

4.
This article reports magneto-optical effects on singlet fission in the p-type organic semiconductor tetracene arising from a ferromagnetic/semiconductor interface between thin films of cobalt and tetracene. We show experimentally that this interface has two effects on the tetracene thin films: spin interactions and electrical polarization. The experimental tools used to study the interface include magnetic field effect photoluminescence (MFEPL), photoluminescence, and absorption. Spin interaction effects appear in the MFEPL data, where we observe a large increase in the maximum MFEPL when cobalt is introduced, as well as changes in the hyperfine interactions at low magnetic fields. Electrical polarization is analyzed with photoluminescence and absorption measurements, which show small changes in the energy difference between the HOMO and LUMO levels of tetracene and an increase in the electron-phonon coupling; electrical polarization is also shown to strengthen electrical interactions between tetracene molecules. We therefore conclude that spin interactions and electrical polarization from the ferromagnetic/organic semiconductor interface can tune the properties of tetracene and ultimately enhance singlet fission. This work gives new insight into the singlet fission process at a ferromagnetic interface. These effects can be exploited in photovoltaic applications based on this singlet fission material and applied to other similar singlet fission organic semiconductors.

5.
In this work, cellulose acetate (CA) nanostructures were synthesized by electrospinning. Silver nanoparticles (Ag NPs) were synthesized using silver nitrate as the precursor, ethanol as the solvent, and polyvinylpyrrolidone (PVP) as the capping agent. The Ag NPs were added to the CA nanostructures both before and after electrospinning. The resulting CA and Ag-CA composites were characterized by Fourier-transform infrared (FTIR) spectroscopy, Raman spectroscopy, scanning electron microscopy (SEM), and differential scanning calorimetry (DSC). It was found that Ag NPs can be effectively coated on or embedded into the electrospun CA, and that the PVP leads to a noticeable change in morphology and structure.

6.
In 3D model retrieval, preprocessing of 3D models is needed, and alignment is a key step that significantly affects retrieval performance. In particular, rotation-invariant image features can yield an alignment effect for 3D model views. In practice, many users of 3D models care not only about retrieval performance but also about using the aligned models for other purposes. In this paper, we propose a method, Sample Based Alignment (SBA), for better 3D model alignment and retrieval. In SBA, a sample model of a given class is used as the alignment target, and every 3D model in that class is then aligned to it one by one, i.e., each 3D model is actually rotated. Experimental results on two 3D model datasets, with comparisons against other methods, demonstrate the superiority of SBA over state-of-the-art methods in both 3D model retrieval and classification.
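The abstract does not spell out the rotation step, but a common way to rotate one model onto a sample is the Kabsch/SVD best-fit rotation; the sketch below assumes point clouds already in one-to-one correspondence, which a real SBA-style alignment would first have to establish.

```python
# A minimal sketch of rotating a model onto a class sample with the
# Kabsch/SVD best-fit rotation. Assumes both models are (N, 3) point sets
# already in one-to-one correspondence; the paper's SBA pipeline may differ.
import numpy as np

def align_to_sample(model, sample):
    m = model - model.mean(axis=0)
    s = sample - sample.mean(axis=0)
    H = m.T @ s                                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return m @ R.T + sample.mean(axis=0)

rng = np.random.default_rng(0)
sample = rng.normal(size=(100, 3))
theta = 0.7                                     # rotate the sample about z
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
model = sample @ Rz.T
print("max alignment error:",
      np.abs(align_to_sample(model, sample) - sample).max())
```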

7.
The main focus of this review is to introduce the relevant parameters of spray coating processes and provide a better understanding of how to control the morphology of spray-coated thin films for high-performance polymer solar cells (PSCs). Three parameters are identified as the major influences on spray coating: nozzle-to-substrate distance, solvent and mixed-solvent effects, and substrate temperature together with annealing treatment. Spray coating shows great potential for large-scale production, since it places no limitation on substrate size and uses little polymer, making it a promising substitute for conventional spin coating. Currently available printing and coating methods are also briefly discussed.

8.
One of the classic problems of digital image processing is to encode true-color images for optimal viewing on displays with a limited set of colors. A major aspect of optimal viewing is maximally removing parasitic artifacts, such as the contouring effect, from the degraded encoded images. Several robust attempts have been made to solve this problem over the past 50 years, and the first contribution of this paper is a simple yet effective novel solution based on soft vector clustering. The other contribution is to apply the soft clustering methodology deployed in our color-encoding solution to the dithering of multidimensional signals. Dithering adds controlled noise to an analog signal upon digitization so that the resulting quantization noise is dispersed over a much wider band of the frequency domain and is therefore less perceptible in the digitized signal; this comes, of course, at the price of more overall quantization noise. Dithering is a vital operation performed via well-known simple schemes in the analog-to-digital conversion of one-dimensional signals; however, the published literature still lacks a general, neat scheme for dithering multidimensional signals that can handle arbitrary dimensionality, an arbitrary number and distribution of quantization centroids, and computable, controllable noise power. This gap is also filled by this paper.
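As background for the multidimensional generalization, here is a minimal sketch of the classic one-dimensional case: non-subtractive uniform dither turns a signal-dependent quantization error into a larger but signal-independent one, so the true value survives averaging. The step size and test value below are arbitrary choices.

```python
# One-dimensional non-subtractive dithering: the quantizer alone is biased
# for values between levels, while uniform dither makes the error
# signal-independent at the cost of more total noise.
import numpy as np

rng = np.random.default_rng(1)
step = 0.25                 # quantizer step size (arbitrary)
x = 0.1                     # true value, between the levels 0.0 and 0.25
n = 100_000

def quantize(v, step):
    return step * np.round(v / step)

plain = quantize(np.full(n, x), step)
dithered = quantize(x + rng.uniform(-step / 2, step / 2, size=n), step)

print("plain mean   :", plain.mean())       # stuck at 0.0 (biased)
print("dithered mean:", dithered.mean())    # ~0.1 (bias dithered away)
print("dithered error power:", np.mean((dithered - x) ** 2))
```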

9.
To address the facts that the structural similarity (SSIM) image quality assessment algorithm neither accounts for the multi-channel nature of human vision nor evaluates highly distorted images stably, an adaptive-fusion image quality assessment method based on visually salient distortion (VSAP) is proposed. The method first applies log-Gabor filtering to extract high-, mid-, and low-frequency visual features from the image, and computes feature similarity using the scale and orientation weighting coefficients of the log-Gabor transform. It then computes the feature distortion by superposing visual thresholds across resolutions. Finally, the similarity and distortion evaluations are adaptively fused according to the visual distortion degree to yield the final objective quality score. Experimental results show that VSAP not only correlates more strongly with subjective perception across different distortion types, but is also more stable over different distortion levels on the three main indices, the Spearman rank-order correlation coefficient (SROCC), the curve-fitting correlation coefficient (CC), and the root mean square error (RMSE), clearly outperforming other assessment methods.

10.
With the development of modern imaging techniques, every medical examination produces a huge volume of image data. Analysis, storage, and transmission of these data demand high compression without any loss of diagnostically significant information. Although various 3-D compression techniques have been proposed, they have not met current requirements. This paper proposes a novel method to compress 3-D medical images based on a human vision model that removes visually insignificant information. A block matching algorithm exploiting anatomical symmetry removes spatial redundancies. The results are compared with those of lossless compression techniques and show better compression without any degradation in visual quality. The rate-distortion performance of the proposed coders is compared with that of state-of-the-art lossy coders, and subjective evaluation by medical experts confirms that the visual quality of the reconstructed images is excellent.
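The paper's coder is not reproduced here, but the idea of predicting one side of an anatomically symmetric slice from the mirrored other side via block matching can be sketched as follows; block size, search range, and the synthetic "slice" are illustrative assumptions.

```python
# A toy sketch of symmetry-exploiting block matching: predict each block
# from the best match in the horizontally mirrored image; only residuals
# would then be coded.
import numpy as np

def symmetry_residual(img, block=8, search=4):
    mirror = img[:, ::-1].astype(np.float64)
    h, w = img.shape
    resid = np.zeros((h, w))
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            target = img[y:y+block, x:x+block].astype(np.float64)
            best_err, best = np.inf, None
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = mirror[yy:yy+block, xx:xx+block]
                        err = np.sum((target - cand) ** 2)
                        if err < best_err:
                            best_err, best = err, cand
            resid[y:y+block, x:x+block] = target - best
    return resid

rng = np.random.default_rng(2)
half = rng.integers(0, 256, size=(64, 32))
img = np.hstack([half, half[:, ::-1]])      # a perfectly symmetric "slice"
r = symmetry_residual(img)
print("residual energy / image energy:",
      np.sum(r ** 2) / np.sum(img.astype(np.float64) ** 2))   # 0.0 here
```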

11.
This paper addresses the use of independent component analysis (ICA) for image compression. Our goal is to study the adequacy, for lossy transform compression, of bases learned from data using ICA. Since these bases are in general non-orthogonal, two methods are considered to obtain image representations: matching-pursuit-type algorithms, and orthogonalization of the ICA bases followed by standard orthogonal projection. Several coder architectures are evaluated and compared using both the usual SNR and a perceptual quality measure called the picture quality scale. We consider four classes of images (natural, faces, fingerprints, and synthetic) to study the generalization and adaptation abilities of the data-dependent ICA bases. We observe that bases learned from natural images generalize well to other classes, while bases learned from specific classes show good specialization. For fingerprint images, for example, our coders perform close to the special-purpose WSQ coder developed by the FBI. For some classes, the visual quality of the images obtained with our coders is similar to that obtained with JPEG2000, currently the state-of-the-art coder and much more sophisticated than a simple transform coder. We conclude that ICA provides an excellent tool for learning a coder for a specific image class, which can even be done from a single image of that class; this is an alternative to hand-tailoring a coder for a given class (as was done, for example, in WSQ for fingerprint images). Another conclusion is that a coder learned from natural images acts as a universal coder, generalizing very well over a wide range of image classes.
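A rough sketch of the orthogonalized-projection route described above (not the authors' exact coders): learn ICA bases from image patches, orthogonalize them by QR, project, and keep only the largest coefficients. The heavy-tailed random "image" and all sizes are stand-in assumptions.

```python
# ICA bases learned from patches, then coding by orthogonal projection
# with the k largest-magnitude coefficients kept per patch.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(3)
image = rng.laplace(size=(128, 128))         # stand-in for a natural image

patches = np.array([image[y:y+8, x:x+8].ravel()
                    for y in range(0, 120, 4) for x in range(0, 120, 4)])
patches -= patches.mean(axis=1, keepdims=True)

ica = FastICA(n_components=32, random_state=0, max_iter=1000)
ica.fit(patches)
bases = ica.mixing_                          # 64 x 32, non-orthogonal in general

Q, _ = np.linalg.qr(bases)                   # orthonormalized bases
coeff = patches @ Q                          # analysis by orthogonal projection
k = 8
idx = np.argsort(-np.abs(coeff), axis=1)[:, :k]
sparse = np.zeros_like(coeff)
np.put_along_axis(sparse, idx, np.take_along_axis(coeff, idx, axis=1), axis=1)
recon = sparse @ Q.T                         # synthesis
print("MSE keeping", k, "of 32 coefficients:", np.mean((recon - patches) ** 2))
```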

12.
Color image quality assessment based on human visual characteristics
Judging the performance of an image processing system usually requires a reasonable and fast image quality assessment algorithm. Because traditional algorithms do not fully account for the characteristics of human vision, their scores disagree with the quality the human eye actually perceives. Exploiting the fact that the eye is highly sensitive to edge information, this paper proposes an algorithm that combines edge and background similarity (EBS) to assess color image quality: the quality of a distorted color image is determined by comparing its edges, and the background outside those edges, with the original reference image. Experiments on a quality assessment database of 779 images covering five distortion types show that the proposed algorithm agrees with subjective scores (expressed as DMOS values, the statistical average of ratings that observers of different backgrounds assign to the distorted images) better than PSNR, MSSIM, IFC, and pixel-domain VIF; that is, its scores are closer to the actual perceived visual quality.
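A minimal sketch of the edge/background split that EBS relies on might look as follows; the Sobel-based edge mask, the pointwise similarity formula, and the 0.7 edge weight are illustrative assumptions rather than the paper's exact definitions.

```python
# Score edge pixels and background pixels separately against the
# reference, then combine the two scores.
import numpy as np
from scipy import ndimage

def pixel_similarity(a, b, c=1e-3):
    return (2 * a * b + c) / (a * a + b * b + c)

def ebs_score(ref, dist, edge_weight=0.7):
    gx = ndimage.sobel(ref, axis=0)
    gy = ndimage.sobel(ref, axis=1)
    grad = np.hypot(gx, gy)
    edges = grad > grad.mean() + grad.std()   # crude edge mask on the reference
    sim = pixel_similarity(ref, dist)
    edge_sim = sim[edges].mean() if edges.any() else 1.0
    back_sim = sim[~edges].mean()
    return edge_weight * edge_sim + (1 - edge_weight) * back_sim

rng = np.random.default_rng(4)
ref = ndimage.gaussian_filter(rng.random((96, 96)), 2.0)
noisy = ref + rng.normal(0, 0.05, ref.shape)
print("EBS(ref, ref)  =", ebs_score(ref, ref))    # 1.0 by construction
print("EBS(ref, noisy)=", ebs_score(ref, noisy))  # below 1.0
```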

13.
No-reference image quality assessment using structural activity
Presuming that human visual perception is highly sensitive to the structural information in a scene, we propose the concept of structural activity (SA), together with a model of an SA indicator, in a new framework for no-reference (NR) image quality assessment (QA). The framework estimates image quality by quantifying SA information of different visual significance. We propose several alternative implementations of the SA indicator as examples to demonstrate the effectiveness of the SA-motivated framework. Comprehensive testing shows that the SA indicator model performs satisfactorily in comparison with subjective quality scores as well as representative full-reference (FR) image quality measures.

14.
No-reference (NR) image quality assessment (QA) presumes no prior knowledge of reference (distortion-free) images and seeks to predict visual quality quantitatively from the distorted images alone. In this paper we develop kurtosis-based NR quality measures for JPEG2000-compressed images. The proposed measures are based on either 1-D or 2-D kurtosis in the discrete cosine transform (DCT) domain of general image blocks. Comprehensive testing demonstrates their good consistency with subjective quality scores and their satisfactory performance in comparison with both representative full-reference (FR) and state-of-the-art NR image quality measures.
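The measures themselves are not reproduced here, but the underlying signal is easy to demonstrate: blur concentrates block energy into few DCT coefficients, which raises the kurtosis of the AC coefficients. The block size and the Gaussian blur standing in for JPEG2000 degradation are assumptions.

```python
# Mean kurtosis of per-block DCT AC coefficients as a blur indicator.
import numpy as np
from scipy.fft import dctn
from scipy.ndimage import gaussian_filter
from scipy.stats import kurtosis

def mean_dct_kurtosis(img, block=16):
    vals = []
    h, w = img.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            coefs = dctn(img[y:y+block, x:x+block], norm="ortho")
            vals.append(kurtosis(coefs.ravel()[1:]))   # AC coefficients only
    return float(np.mean(vals))

rng = np.random.default_rng(5)
sharp = rng.random((128, 128))
blurred = gaussian_filter(sharp, sigma=2.0)    # stand-in for JPEG2000 blur
print("mean AC kurtosis, sharp  :", mean_dct_kurtosis(sharp))
print("mean AC kurtosis, blurred:", mean_dct_kurtosis(blurred))
```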

15.
The performance of computer vision algorithms can degrade severely in the presence of a variety of distortions. While image enhancement algorithms have evolved to optimize image quality as measured by human visual perception, their relevance to maximizing the success of computer vision algorithms operating on the enhanced image has been much less investigated. We consider image enhancement to combat Gaussian noise and low resolution for the specific application of image retrieval from a dataset. We define image quality as determined by the success of image retrieval and design a deep convolutional neural network (CNN) to predict this quality. This network is then cascaded with a deep CNN designed for image denoising or super-resolution, allowing the enhancement CNN to be optimized to maximize retrieval performance; this framework couples enhancement to the retrieval problem. We also consider adapting image features for robust retrieval in the presence of distortions. Experiments on distorted images of the Oxford and Paris buildings datasets show that our algorithms yield improved mean average precision compared to enhancement methods that are oblivious to the retrieval task.

16.
This paper describes a multistage perceptual quality assessment (MPQA) model for compressed images. The motivation for developing a perceptual quality assessment is to measure (in)visible differences between original and processed images. The MPQA produces visible distortion maps and quantitative error measures informed by considerations of the human visual system (HVS). Original and decompressed images are decomposed into different spatial frequency bands and orientations, modeling the human cortex. Contrast errors are calculated for each frequency and orientation, and masked as a function of contrast sensitivity and background uncertainty. Spatially masked contrast error measurements are then made across frequency bands and orientations to produce a single perceptual distortion visibility map (PDVM). A perceptual quality rating (PQR) is calculated from the PDVM and transformed onto a one-to-five scale, PQR(1-5), for direct comparison with the mean opinion score generally used in subjective ratings. The proposed MPQA model builds on existing perceptual quality assessment models, differentiated by the inclusion of contrast masking as a function of background uncertainty. A pilot study of clinical experiments on wavelet-compressed digital angiograms has been performed on a sample set of images to identify diagnostically acceptable reconstructions. Our results show that the PQR(1-5) of diagnostically acceptable lossy reconstructions agrees better with cardiologists' responses than objective error measures such as peak signal-to-noise ratio. A perceptual thresholding and CSF-based uniform quantization (PCU) method is also proposed using the vision models presented in this paper. The vision models are implemented in the thresholding and quantization stages of a compression algorithm and shown to produce an improved compression ratio with less visible distortion than the embedded zerotree wavelet (EZW) coder.

17.
Signal Processing, 2013, 93(11): 3182-3191
Full-reference image quality assessment (FR-IQA) algorithms aim to establish generic measures of perceptual image quality independent of distortion type. Recent developments in FR-IQA have marked the use of phase congruency features. Phase congruency is a dimensionless, normalized feature calculated from the log-Gabor energy of the image, designed to be relatively insensitive to noise variations through the calculation of a noise circle. The assumptions about the nature of the noise used in this calculation affect the performance of phase-congruency-based FR-IQA measures. In this work, we (a) test the hypothesis that using phase-deviation-sensitive energy features obtained from the log-Gabor-filtered image, instead of the noise-adjusted, normalized phase congruency features, improves the general applicability of an FR-IQA measure; (b) reduce execution time by omitting the noise circle calculation; and (c) study how modifications of the parameter values change the correlation between subjective scores and objective image quality values. Experiments on six benchmark databases demonstrate the effectiveness of the proposed method, which improves over existing phase-congruency-based algorithms, achieves competitive performance with state-of-the-art methods, and delivers the best average performance across all databases in terms of prediction monotonicity and accuracy.
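For orientation, here is a minimal sketch of a log-Gabor energy feature (radial filter only, no orientation bank, no noise circle) with a similarity pooling over it; the center frequency, bandwidth ratio, and pooling constant are illustrative assumptions, not the proposed measure.

```python
# Radial log-Gabor energy envelope and an SSIM-style similarity pooling.
import numpy as np

def log_gabor_energy(img, f0=0.1, sigma_ratio=0.55):
    h, w = img.shape
    fy = np.fft.fftfreq(h)[:, None]
    fx = np.fft.fftfreq(w)[None, :]
    radius = np.hypot(fy, fx)
    radius[0, 0] = 1.0                           # avoid log(0) at DC
    lg = np.exp(-(np.log(radius / f0) ** 2) / (2 * np.log(sigma_ratio) ** 2))
    lg[0, 0] = 0.0                               # zero DC response
    return np.abs(np.fft.ifft2(np.fft.fft2(img) * lg))   # energy envelope

rng = np.random.default_rng(6)
ref = rng.random((128, 128))
dist = ref + rng.normal(0, 0.1, ref.shape)
e_ref, e_dist = log_gabor_energy(ref), log_gabor_energy(dist)
c = 1e-4
sim = (2 * e_ref * e_dist + c) / (e_ref ** 2 + e_dist ** 2 + c)
print("energy-similarity score:", sim.mean())    # 1.0 only if identical
```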

18.
A highly promising approach to assessing the quality of an image is to compare the perceptually important structural information in the image with that in its reference image. Extracting this structural information is, however, challenging. This paper employs a sparse-representation-based approach to extract such information and proposes a new metric, the sparse representation-based quality (SPARQ) index, that measures the visual quality of an image. The approach learns the inherent structures of the reference image as a set of basis vectors, obtained such that any structure in the image can be efficiently represented by a linear combination of only a few of them. Such a sparse strategy is known to produce basis vectors qualitatively similar to the receptive fields of the simple cells in the mammalian primary visual cortex. To estimate the visual quality of the distorted image, structures in its visually important areas are compared with those in the reference image in terms of the learnt basis vectors. The approach is evaluated on six publicly available subject-rated image quality assessment datasets. The SPARQ index consistently exhibits high correlation with the subjective ratings on all datasets and, overall, performs better than a number of popular image quality metrics.
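In the spirit of SPARQ (not its exact construction), the sketch below learns a small dictionary from reference patches, sparse-codes both images with OMP, and pools the cosine similarity of corresponding codes; patch, atom, and sparsity settings are assumptions.

```python
# Dictionary learned from the reference; both images sparse-coded with
# orthogonal matching pursuit; codes compared patch by patch.
import numpy as np
from sklearn.decomposition import DictionaryLearning

def get_patches(img, size=6, step=6):
    ps = np.array([img[y:y+size, x:x+size].ravel()
                   for y in range(0, img.shape[0] - size + 1, step)
                   for x in range(0, img.shape[1] - size + 1, step)])
    return ps - ps.mean(axis=1, keepdims=True)

rng = np.random.default_rng(7)
ref = rng.random((64, 64))
dist = ref + rng.normal(0, 0.1, ref.shape)

dl = DictionaryLearning(n_components=24, transform_algorithm="omp",
                        transform_n_nonzero_coefs=4, max_iter=15,
                        random_state=0)
codes_ref = dl.fit_transform(get_patches(ref))
codes_dist = dl.transform(get_patches(dist))

# cosine similarity between the sparse codes of corresponding patches
num = np.sum(codes_ref * codes_dist, axis=1)
den = (np.linalg.norm(codes_ref, axis=1)
       * np.linalg.norm(codes_dist, axis=1) + 1e-12)
print("mean sparse-code similarity:", np.mean(num / den))
```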

19.
Quality assessment of natural images is influenced by perceptual mechanisms such as attention and contrast sensitivity, and quality perception can be generated in a hierarchical process. This paper proposes Attention Integrated Hierarchical Image Quality networks (AIHIQnet) for no-reference quality assessment. AIHIQnet consists of three components: a general backbone network, a perceptually guided neck network, and a head network. Multi-scale features extracted from the backbone are fused to simulate image quality perception hierarchically, while an attention module modelling attention and contrast sensitivity captures the information essential for quality perception. Since image rescaling can affect perceived quality, appropriate pooling in the non-convolutional layers of AIHIQnet allows images of arbitrary resolution to be accepted. Comprehensive experiments on publicly available databases demonstrate the outstanding performance of AIHIQnet compared with state-of-the-art models, and ablation experiments on variants of the architecture reveal the importance of the individual components.

20.
A screen content image (SCI) is a composite of textual and pictorial regions, which creates many difficulties for image quality assessment (IQA). Large SCIs are divided into image patches to increase the number of training samples for CNN-based IQA models, and this raises two problems: (1) the local quality of an image patch does not equal the subjective differential mean opinion score (DMOS) of the entire image; and (2) different image patches do not matter equally for quality assessment. In this paper, we propose a novel no-reference (NR) IQA model based on a convolutional neural network (CNN) for assessing the perceptual quality of SCIs. Our model addresses the two problems with two corresponding strategies. First, to imitate the behavior of full-reference (FR) CNN-based models, a CNN is designed for both FR and NR IQA, and the performance of the NR part improves when the image patch scores predicted by the FR part are adopted as ground truth for training the NR part. Second, the patch qualities of an entire SCI are fused into the SCI quality with an adaptive weighting method that accounts for the differing content of the patches. Experimental results verify that our model outperforms all tested NR IQA methods and most FR IQA methods on the screen content image quality assessment database (SIQAD). In cross-database evaluation, the proposed method outperforms the existing NR IQA method by at least 2.4 percent in PLCC and 2.8 percent in SRCC, showing the high generalization ability and effectiveness of our model.
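The learned adaptive weighting is not reproduced here, but the fusion step it refines can be sketched with a simple content-adaptive proxy: weighting each patch score by the patch's variance, so busy (e.g., textual) regions count more than flat ones.

```python
# Content-adaptive fusion of patch scores into one image score; patch
# variance as the weight is an illustrative proxy, not the paper's
# learned weighting.
import numpy as np

def fuse_patch_scores(patches, scores, eps=1e-6):
    """patches: (N, h, w) image patches; scores: (N,) patch qualities."""
    weights = patches.reshape(len(patches), -1).var(axis=1) + eps
    weights /= weights.sum()
    return float(np.sum(weights * scores))

rng = np.random.default_rng(8)
flat = np.full((10, 32, 32), 0.5) + rng.normal(0, 0.01, (10, 32, 32))
busy = rng.random((10, 32, 32))                  # high-variance "text" patches
patches = np.concatenate([flat, busy])
scores = np.concatenate([np.full(10, 80.0), np.full(10, 40.0)])
print("uniform mean :", scores.mean())           # 60.0
print("adaptive fuse:", fuse_patch_scores(patches, scores))  # near 40.0
```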
