31.
Texture analysis and classification remain among the biggest challenges in computer vision and pattern recognition. This article presents a robust hybrid combination technique for building a combined classifier able to tackle the classification of rotation-invariant 2D textures. Diversity among the components of the combined classifier is enforced by varying parameters of both the architecture design and the training stages of a neural network classifier. A boosting algorithm is used to perturb the training set, with a Multi-Layer Perceptron (MLP) as the base classifier. The final decision of the proposed combined classifier is based on majority voting. Experimental results on a standard benchmark database of rotated textures show that the proposed hybrid combination method is very robust and provides excellent texture discrimination for all considered classes, outperforming traditional texture classification methods.
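The abstract names the ingredients (boosting-style perturbation of the training set, MLP base classifiers, majority voting) without giving implementation details. The following is a minimal sketch of that kind of pipeline using scikit-learn; the resampling heuristic, network sizes, and hyper-parameters are assumptions, not the paper's exact procedure.

```python
# Sketch of a boosting-style ensemble of MLPs combined by majority voting.
# The resampling heuristic and all hyper-parameters are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

def train_boosted_mlps(X, y, n_rounds=5, hidden=(64,), seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    weights = np.full(n, 1.0 / n)        # per-sample weights, updated each round
    models = []
    for _ in range(n_rounds):
        # Resample the training set according to the current weights
        # (a simple resampling variant of boosting).
        idx = rng.choice(n, size=n, replace=True, p=weights)
        clf = MLPClassifier(hidden_layer_sizes=hidden, max_iter=300,
                            random_state=int(rng.integers(1 << 31)))
        clf.fit(X[idx], y[idx])
        models.append(clf)
        # Emphasise misclassified samples in the next round.
        wrong = clf.predict(X) != y
        weights[wrong] *= 2.0
        weights /= weights.sum()
    return models

def predict_majority(models, X):
    # Assumes integer class labels 0..K-1.
    votes = np.stack([m.predict(X) for m in models])   # (n_models, n_samples)
    return np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, votes)
```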
32.
Multimedia Tools and Applications - This paper presents two proposed approaches for enhancing the visibility of infrared (IR) night vision images. The first approach is based on merging gamma...
33.
In recent years, big data has been one of the hottest development directions in the information field. With the development of artificial intelligence technology, mobile smart terminals, and high-bandwidth wireless Internet, various types of data are increasing exponentially. Huge amounts of data contain a great deal of potential value, so storing and processing data efficiently becomes very important. The Hadoop Distributed File System (HDFS) has emerged as a typical representative of data-intensive distributed big data file systems; it offers high fault tolerance and high throughput and can be deployed on low-cost hardware. HDFS nodes communicate with each other through the Remote Procedure Call (RPC) mechanism to keep the big data system working properly. However, the RPC mechanism in HDFS still falls short in terms of network throughput and abnormal response time. This paper presents an optimization method to improve the performance of HDFS. The proposed method dynamically adjusts the RPC configuration between the NameNode and DataNodes by sensing the characteristics of the data stored on the DataNodes. This method can effectively reduce the processing pressure on the NameNode, improve the network throughput of the information transmitted between the NameNode and DataNodes, and reduce the abnormal response time of the whole system. Finally, extensive experiments show the effectiveness and efficiency of the proposed method.
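The abstract does not say which RPC parameters are adjusted or how the data characteristics are sensed. As a rough illustration of the general idea only, the sketch below derives an RPC-related setting from observed data characteristics: the heuristic and metric names are hypothetical; `dfs.namenode.handler.count` is a real HDFS configuration property, but whether the paper tunes it is an assumption.

```python
# Illustrative sketch only: derive a NameNode RPC handler count from observed
# data characteristics. The heuristic below is hypothetical, not the paper's method.
import math
import xml.etree.ElementTree as ET

def suggest_handler_count(avg_block_size_mb, rpc_calls_per_sec, datanodes):
    # Hypothetical heuristic: many small blocks and a high RPC rate call for
    # more handler threads; clamp the result to a sane range.
    small_block_factor = max(1.0, 128.0 / max(avg_block_size_mb, 1.0))
    load_factor = math.log2(max(rpc_calls_per_sec, 2))
    return int(min(200, max(10, datanodes * small_block_factor * load_factor / 4)))

def emit_hdfs_site_property(count):
    # Produce the <property> element for hdfs-site.xml.
    prop = ET.Element("property")
    ET.SubElement(prop, "name").text = "dfs.namenode.handler.count"
    ET.SubElement(prop, "value").text = str(count)
    return ET.tostring(prop, encoding="unicode")

print(emit_hdfs_site_property(suggest_handler_count(16, 5000, 20)))
```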
34.
This research presents three new approaches for enhancing the visibility of infrared (IR) night vision images. The first approach is a hybrid of Adaptive Gamma Correction (AGC) with Histogram Matching (HGCHM). The second approach merges Gamma Correction with Contrast Limited Adaptive Histogram Equalization (MGCCLAHE). The histogram matching step uses a reference visual image to convert night vision images into daytime-like images. The third approach combines the benefits of CLAHE with the undecimated Additive Wavelet Transform (AWT) using homomorphic processing (CSAWUH). The quality assessments for the suggested approaches are entropy, average gradient, contrast improvement factor, Sobel edge magnitude, spectral entropy, lightness order error, and edge similarity. Simulation results show that the third approach gives results superior to the other two approaches in terms of entropy, average gradient, contrast improvement factor, Sobel edge magnitude, spectral entropy, and computation time. The second approach, on the other hand, takes a longer computation time than the other two approaches. The second approach gives better results than the first in terms of entropy, average gradient, contrast improvement factor, Sobel edge magnitude, and spectral entropy. The first approach gives better results than the other two in terms of lightness order error and edge similarity.
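None of the three pipelines is fully specified in the abstract. As a minimal sketch of two building blocks they share, gamma correction and CLAHE, the following uses OpenCV; the gamma value, CLAHE settings, and the way the two steps are chained are assumptions.

```python
# Minimal sketch: gamma correction followed by CLAHE on a grayscale IR image.
# Gamma value, CLAHE settings, and step order are illustrative assumptions.
import cv2
import numpy as np

def gamma_correct(img, gamma=0.6):
    # Lookup table mapping each 8-bit level through the gamma curve.
    lut = np.array([((i / 255.0) ** gamma) * 255 for i in range(256)],
                   dtype=np.uint8)
    return cv2.LUT(img, lut)

def enhance_ir(path):
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    img = gamma_correct(img)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(img)

# enhanced = enhance_ir("ir_frame.png")   # hypothetical file name
```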
35.
Although the electrocardiogram (ECG) has been a reliable diagnostic tool for decades, its deployment in the context of biometrics is relatively recent. Its robustness to falsification, the evidence it carries about aliveness, and its rich feature space have made ECG-based biometrics an interesting prospect. The rich feature space contains fiducial information such as characteristic peaks, which reflect the underlying physiological properties of the heart. The principal goal of this study is to quantitatively evaluate the information content of the fiducial feature set in terms of its effect on subject and heartbeat classification accuracy (with ECG data acquired from the PhysioNet ECG repository). To this end, a comprehensive set of fiducial features was extracted from a collection of ECG records. This feature set was subsequently reduced using a variety of feature extraction/selection methods, such as principal component analysis (PCA), linear discriminant analysis (LDA), information-gain ratio (IGR), and rough sets (in conjunction with the PASH algorithm). The performance of the reduced feature set was examined, and the results were evaluated against the full feature set in terms of overall classification accuracy and false acceptance/rejection rates (FAR/FRR). The results of this study indicate that the PASH algorithm, deployed within the context of rough sets, reduced the dimensionality of the feature space maximally while maintaining maximal classification accuracy.
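The study compares reduced feature sets against the full set; rough sets and PASH have no standard library implementation, so the sketch below illustrates the comparison with PCA, one of the listed reducers. The classifier, split, and dimensionality are assumptions.

```python
# Sketch: compare classification accuracy of the full fiducial feature set with
# a PCA-reduced version, as one of the reduction methods the study lists.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

def compare_full_vs_reduced(X, y, n_components=10, seed=0):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                              random_state=seed, stratify=y)
    # Accuracy with the full feature set.
    full_acc = accuracy_score(
        y_te, KNeighborsClassifier().fit(X_tr, y_tr).predict(X_te))
    # Accuracy with the PCA-reduced feature set.
    pca = PCA(n_components=n_components).fit(X_tr)
    red_acc = accuracy_score(
        y_te, KNeighborsClassifier().fit(pca.transform(X_tr), y_tr)
                                    .predict(pca.transform(X_te)))
    return full_acc, red_acc
```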
36.
Extending our previous work on applications of biotechnology in textile wet processing, this article reports the impact of post-biopolishing on the performance properties of pigment-printed cellulosic fabrics. The data demonstrate that the extent of enzymatic attack is determined by the nature of the cellulase enzyme, the depth of the pigment prints, the nature of the cellulosic substrate, the fabric structure, and the contents of the finishing formulation. Furthermore, the loss in weight, the decrease in depth of shade, and the improvement in softness of the biopolished pigment prints follow these descending orders, respectively:

Loss in weight: Acid cellulases > Acid cellulases/softener > Neutral cellulase > None
Decrease in depth of shade: None > Neutral cellulase > Acid cellulases/softener > Acid cellulases
Improvement in softness: Acid cellulases/softener > Acid cellulases > Neutral cellulase >> None

The effect of enzymatic treatment on the fastness properties of the treated pigment prints was also investigated.
37.
Multimedia Tools and Applications - This paper presents a proposed approach for the enhancement of infrared (IR) night vision images. The approach is based on a trilateral contrast enhancement in...
38.
Recommender systems are becoming increasingly important and prevalent because of their ability to alleviate information overload. In recent years, researchers have been paying increasing attention to aggregate diversity as a key metric beyond accuracy, because improving aggregate recommendation diversity may increase long-tail exposure and sales diversity. Trust is often used to improve recommendation accuracy; however, how to use trust to improve aggregate recommendation diversity remains unexplored. In this paper, we focus on this problem and propose a novel trust-aware recommendation method that incorporates a time factor into the similarity computation. The rationale underlying the proposed method is that trustees whose trust relations were created later can bring more diverse items to recommend to their trustors than trustees whose trust relations were created earlier. Through experiments on a publicly available dataset, we demonstrate that the proposed method outperforms the baseline method in terms of aggregate diversity while maintaining almost the same recall.
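The abstract does not give the similarity formula. The sketch below shows one plausible way a trust-creation-time factor could weight a trustor-trustee similarity, purely as an illustration; the exponential recency weighting, decay constant, and names are assumptions, not the paper's method.

```python
# Illustrative sketch: weight a trustor-trustee similarity by how recently the
# trust relation was created. The recency weight is an assumption.
import numpy as np

def time_weighted_similarity(ratings_u, ratings_v, trust_created_at,
                             now, half_life_days=180.0):
    # Cosine similarity over co-rated items.
    mask = (ratings_u > 0) & (ratings_v > 0)
    if not mask.any():
        return 0.0
    a, b = ratings_u[mask], ratings_v[mask]
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    # Later-created trust relations get a weight closer to 1.
    age_days = (now - trust_created_at) / 86400.0
    recency = 0.5 ** (age_days / half_life_days)
    return cos * recency
```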
39.
Thermoluminescence glow curves of TLD-100 revealed three peaks, at 373, 460 and 518 K, for all samples irradiated with gamma-ray doses of 0.5 to 700 Gy. The total thermoluminescence response and the height of the main peak at 460 K showed similar dependences on radiation dose. On the other hand, the total area under the glow curve increases continuously with radiation dose up to 1000 Gy. All irradiated samples investigated showed no significant fading over 28 days. The activation energy, E, and the escape frequency factor, s, of the main glow peak were calculated by the modified empirical equation as well as by methods based on the shape of the glow peak. E was found to lie between 1.33 and 1.83 eV, and s between 5.8×10¹³ and 3.06×10¹⁹ s⁻¹, depending on the method used.
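The abstract does not state which peak-shape method was applied; a widely used formulation of this kind is Chen's peak-shape method, reproduced below for reference, together with the standard first-order relation giving s once E, the peak temperature, and the heating rate β are known. Whether these exact expressions were the ones used in the paper is an assumption.

```latex
% Chen's peak-shape method (a common choice; the abstract does not name the variant used).
% T_m: peak temperature; T_1, T_2: temperatures at half maximum; k: Boltzmann constant.
\[
  \tau = T_m - T_1, \qquad \delta = T_2 - T_m, \qquad \omega = T_2 - T_1,
  \qquad \mu_g = \delta/\omega,
\]
\[
  E_\alpha = c_\alpha \frac{k T_m^{2}}{\alpha} - b_\alpha \,(2 k T_m),
  \qquad \alpha \in \{\tau,\ \delta,\ \omega\},
\]
% e.g. for the low-temperature half-width:
\[
  c_\tau = 1.51 + 3.0(\mu_g - 0.42), \qquad b_\tau = 1.58 + 4.2(\mu_g - 0.42).
\]
% For first-order kinetics and a linear heating rate \beta, the frequency factor follows from
\[
  s = \frac{\beta E}{k T_m^{2}} \exp\!\left(\frac{E}{k T_m}\right).
\]
```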
40.
User authentication is a crucial requirement for cloud service providers to prove that the outsourced data and services are safe from impostors. Keystroke dynamics is a promising behavioral biometric for strengthening user authentication; however, current keystroke-based solutions are designed for particular datasets, for example, fixed-length text typed on a traditional personal computer keyboard, and their authentication performance is not acceptable for other input devices or for free-length text. Moreover, they suffer from a high-dimensional feature space that degrades authentication accuracy and performance. In this paper, a keystroke-dynamics-based authentication system is proposed for cloud environments that is applicable to both fixed and free text typed on traditional and touch-screen keyboards. The proposed system utilizes different feature extraction methods, as a preprocessing step, to reduce the dimensionality of the feature space. Moreover, different fusion rules are evaluated for combining the different feature extraction methods so that a set of the most relevant features is chosen. Because of the huge number of user samples, a clustering method is applied to the users' profile templates to reduce the verification time. The proposed system is applied to three different benchmark datasets using three different classifiers. Experimental results demonstrate the effectiveness and efficiency of the proposed system. Copyright © 2015 John Wiley & Sons, Ltd.
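The abstract names the building blocks (dimensionality reduction, fusion of feature extractors, clustering of profile templates to cut verification time) without specifying them. The sketch below strings those blocks together in one plausible way; the choice of PCA and LDA, fusion by concatenation, k-means, and all parameter values are assumptions.

```python
# Sketch: reduce keystroke feature dimensionality, fuse two reduced views by
# concatenation, and cluster profile templates so verification only searches
# the claimed user's cluster. All component choices are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.cluster import KMeans

def build_templates(X, y, n_pca=20, n_lda=5, n_clusters=8, seed=0):
    # Two feature-extraction views, fused by concatenation (one possible fusion rule).
    # n_lda must be <= n_classes - 1; the value here is a placeholder.
    pca = PCA(n_components=n_pca).fit(X)
    lda = LinearDiscriminantAnalysis(n_components=n_lda).fit(X, y)
    fused = np.hstack([pca.transform(X), lda.transform(X)])
    # One profile template per user: the mean of that user's fused vectors.
    users = np.unique(y)
    templates = np.vstack([fused[y == u].mean(axis=0) for u in users])
    # Cluster the templates so a probe is compared only within its nearest cluster.
    km = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10).fit(templates)
    return pca, lda, users, templates, km

def verify(probe, claimed_user, pca, lda, users, templates, km, threshold=1.0):
    fused = np.hstack([pca.transform(probe), lda.transform(probe)]).mean(axis=0)
    idx = int(np.where(users == claimed_user)[0][0])
    # Reject early if the probe falls in a different cluster than the claimed template.
    if km.predict(fused.reshape(1, -1))[0] != km.labels_[idx]:
        return False
    return float(np.linalg.norm(fused - templates[idx])) <= threshold
```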