1.
In this paper, we propose a novel change detection method for synthetic aperture radar (SAR) images based on an unsupervised artificial immune system. After generating the difference image from the multitemporal images, we treat each pixel as an antigen and build an immune model to process the antigens. By continuously stimulating the immune model, the antigens are classified into two groups, changed and unchanged. First, the proposed method incorporates local information to restrain the impact of speckle noise. Second, it simulates the immune response process in a fuzzy way, retaining more image detail to obtain an accurate result: a fuzzy membership is introduced for each antigen, and the antibodies and memory cells are updated according to that membership. Compared with the clustering algorithms proposed in our previous work, the new method inherits immunological properties from immune systems and is robust to speckle noise thanks to the use of local information and the fuzzy strategy. Experiments on real SAR images show that the proposed method performs well on several kinds of difference images and yields more robust results than the compared methods.
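The fuzzy strategy can be illustrated with a minimal sketch. This is not the paper's immune model: the antibody and memory-cell updates are replaced here by a plain fuzzy c-means update on a locally averaged log-ratio difference image, and the function name and all parameters are illustrative.

```python
import numpy as np

def fuzzy_change_map(im1, im2, m=2.0, iters=20):
    """Toy two-class fuzzy partition of a log-ratio difference image."""
    # Log-ratio difference image, a common choice for SAR image pairs.
    d = np.log((im2 + 1.0) / (im1 + 1.0))
    H, W = d.shape
    # 3x3 local mean: the "local information" that restrains speckle noise.
    pad = np.pad(d, 1, mode="edge")
    local = sum(pad[1 + i:H + 1 + i, 1 + j:W + 1 + j]
                for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0
    x = local.ravel()
    c = np.array([x.min(), x.max()])          # unchanged / changed centers
    for _ in range(iters):
        dist = np.abs(x[:, None] - c[None, :]) + 1e-12
        u = 1.0 / dist ** (2.0 / (m - 1.0))   # fuzzy memberships
        u /= u.sum(axis=1, keepdims=True)
        c = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return (u[:, 1] > 0.5).reshape(H, W)      # hard decision: changed pixels
```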
2.
Fine-grained image recognition aims to visually distinguish the subordinate subcategories within a traditional semantic category. It has important scientific significance and application value in areas such as the smart new economy and the industrial Internet of Things (e.g., smart cities, public security, ecological protection, and agricultural production and safety). Fine-grained image recognition has made great progress with the help of deep learning, but its dependence on large-scale, high-quality fine-grained image data has become a bottleneck restricting its wider adoption. With the rapid development of the Internet and big data, webly supervised image data, as a free data source, offers a feasible way to alleviate deep learning's dependence on big data, and how to effectively exploit web supervision has become a hot topic for improving the transferability and generalization of fine-grained image recognition. Centered on fine-grained image recognition, with an emphasis on recognition under web supervision, this paper first introduces fine-grained recognition datasets, traditional fine-grained recognition methods, and the characteristics and methods of webly supervised fine-grained recognition, and then reviews the first worldwide webly supervised fine-grained image recognition competition and its champion solution. Finally, it summarizes and discusses future trends in this field.
3.
In this paper, we present a hyperspectral image compression system based on the lapped transform and Tucker decomposition (LT-TD). In the proposed method, each band of a hyperspectral image is first decorrelated by a lapped transform. The transformed coefficients of different frequencies are rearranged into three-dimensional (3D) wavelet sub-band structures. The 3D sub-bands are viewed as third-order tensors and decomposed by Tucker decomposition into a core tensor and three factor matrices. The core tensor preserves most of the energy of the original tensor and is encoded into bit-streams using a bit-plane coding algorithm. Comparison experiments are presented, along with an analysis of the factors that affect compression performance, such as the rank of the core tensor and the quantization of the factor matrices.
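The Tucker step can be sketched with a truncated higher-order SVD (HOSVD), a standard way to compute a Tucker decomposition; the lapped transform, bit-plane coding, and quantization from the paper are omitted.

```python
import numpy as np

def hosvd(T, ranks):
    """Truncated HOSVD: core tensor plus one factor matrix per mode."""
    factors = []
    for mode, r in enumerate(ranks):
        # Mode-n unfolding: move `mode` to the front, flatten the rest.
        unfold = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfold, full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for mode, U in enumerate(factors):
        # Mode-n product with U.T projects onto the leading subspace.
        core = np.moveaxis(np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
    return core, factors

def tucker_reconstruct(core, factors):
    """Multiply the core tensor back by the factor matrices."""
    T = core
    for mode, U in enumerate(factors):
        T = np.moveaxis(np.tensordot(U, np.moveaxis(T, mode, 0), axes=1), 0, mode)
    return T
```

For a tensor of exact multilinear rank (r1, r2, r3), truncated HOSVD at those ranks reconstructs it exactly; real sub-band tensors are only approximately low-rank, which is where the rank choice affects compression performance.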
4.
Conventional machine learning methods such as neural networks (NNs) rely on empirical risk minimization (ERM), which assumes abundant samples; this is a disadvantage for gait learning control from small sample sizes for biped robots walking in unstructured, uncertain, and dynamic environments. Aiming at the stable walking control problem in dynamic environments, this paper puts forward a gait control method based on support vector machines (SVMs), which offers a solution to the small-sample learning control problem. The SVM is equipped with a mixed kernel function for gait learning. Using the ankle and hip trajectories as inputs and the corresponding trunk trajectory as output, the SVM is trained on small sample sizes to learn the dynamic kinematic relationship between the legs and the trunk of the biped robot. The robustness of the gait control is enhanced, which helps realize stable biped walking, and the proposed method shows superior performance compared to SVMs with radial basis function (RBF) kernels and polynomial kernels, respectively. Simulation results demonstrate the superiority of the proposed method.
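The mixed kernel idea can be sketched as a convex combination of an RBF kernel (good local interpolation) and a polynomial kernel (good global extrapolation). The paper's exact mixing rule and hyperparameters are not given in the abstract, so the weights below are illustrative guesses; a convex combination of valid Mercer kernels is itself a valid kernel.

```python
import numpy as np

def mixed_kernel(X, Y, lam=0.6, gamma=0.5, degree=2, coef0=1.0):
    """Gram matrix of lam * RBF + (1 - lam) * polynomial kernel."""
    # Pairwise squared Euclidean distances for the RBF term.
    sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    rbf = np.exp(-gamma * sq)                 # local component
    poly = (X @ Y.T + coef0) ** degree        # global component
    return lam * rbf + (1.0 - lam) * poly
```

The resulting Gram matrix stays symmetric positive semidefinite, so it can be plugged into any kernel SVM trainer as a precomputed kernel.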
5.
Faults in a rotor-bearing system due to bearings and unbalance have been classified using support vector machines (SVMs). Vibration signals were measured simultaneously at five different rotating speeds using seven transducers. The most sensitive features of the vibration signals were determined using the compensation distance evaluation technique. A multi-class SVM classification algorithm was then applied, with SVMs built from the possible combinations of the two most sensitive features for each fault type. Using optimal SVM parameters, the most effective transducer location among the seven was investigated; any single transducer was found to yield a classification rate of 75% or better, and the rate increases when more transducers are considered. This paper provides a robust SVM-based technique for classifying bearing and unbalance faults using only time-domain data, without any additional preprocessing.
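As a hedged stand-in for the compensation distance evaluation technique (whose exact formula is not reproduced in the abstract), a Fisher-style sensitivity score illustrates the underlying idea: rank features by how well they separate the fault classes, so the most sensitive ones can be picked for the SVMs.

```python
import numpy as np

def fisher_score(features, labels):
    """Score each feature by between-class vs. within-class scatter.

    Illustrative substitute for the compensation distance evaluation
    technique; higher scores mean more class-discriminative features.
    """
    classes = np.unique(labels)
    mu = features.mean(axis=0)
    between = np.zeros(features.shape[1])
    within = np.zeros(features.shape[1])
    for c in classes:
        fc = features[labels == c]
        between += len(fc) * (fc.mean(axis=0) - mu) ** 2
        within += ((fc - fc.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)
```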
6.
In full-reference image quality assessment (IQA), undistorted images are employed as references, but the structures in both the reference and the distorted images are often ignored and all pixels are treated equally. In addition, the role of the human visual system (HVS) is not taken into account. In this paper, a weighted full-reference image quality metric is proposed in which a weight imposed on each pixel indicates its importance to IQA. The weights are estimated via visual saliency computation, which approximates subjective IQA by exploiting the HVS. In the experiments, the proposed metric is compared with several objective IQA metrics on the LIVE release 2 and TID2008 databases. The results show that the SROCC and PLCC of the proposed metric are 0.9647 and 0.9721, respectively, higher than those of the other methods, and that it takes only 427.5 s, less than most of the other methods.
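The weighting idea can be sketched as a saliency-weighted distortion measure. Here `weights` stands in for the visual-saliency map, and the simple squared-error form is illustrative rather than the paper's actual metric: errors on salient pixels count more, instead of all pixels being treated equally.

```python
import numpy as np

def weighted_mse(ref, dist, weights):
    """Per-pixel squared error, weighted by a (saliency) importance map."""
    w = weights / weights.sum()               # normalize to a probability map
    return float((w * (ref - dist) ** 2).sum())
```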
7.
Dynamic time-linkage optimization problems (DTPs) are a special class of dynamic optimization problems (DOPs) characterized by time-linkage: the decisions taken now influence the problem states in the future. Although DTPs are common in practice, they have received little attention in the field of evolutionary optimization, where prediction is to date the major approach to solving them. However, existing studies have not addressed how to handle unreliable predictions in the complete black-box optimization (BBO) case. In this paper, the prediction approach EA + predictor, proposed by Bosman, is improved to handle this situation. A stochastic-ranking selection scheme based on prediction accuracy is designed to improve EA + predictor under unreliable prediction, where the prediction accuracy is based on the ranks of the individuals rather than their fitness. Experimental results show that, compared with the original prediction approach, the performance of the improved algorithm is competitive.
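The stochastic-ranking selection idea can be sketched as a bubble-sort ranking in which neighbouring individuals are compared by predicted fitness with probability equal to the prediction accuracy, and by current fitness otherwise. This is a hedged reading of the scheme (minimization assumed), not the paper's exact operator.

```python
import random

def stochastic_rank(pred_fit, cur_fit, accuracy, sweeps=None):
    """Rank individuals, trusting the predictor in proportion to `accuracy`.

    accuracy = 1.0 ranks purely by predicted fitness; 0.0 purely by
    current fitness; values in between mix the two stochastically.
    """
    n = len(pred_fit)
    idx = list(range(n))
    for _ in range(sweeps if sweeps is not None else n):
        swapped = False
        for i in range(n - 1):
            a, b = idx[i], idx[i + 1]
            # Pick the comparison key stochastically per pair.
            key = pred_fit if random.random() < accuracy else cur_fit
            if key[a] > key[b]:
                idx[i], idx[i + 1] = b, a
                swapped = True
        if not swapped:
            break
    return idx
```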
8.
To cluster data sets with the character of symmetry, a point symmetry-based clonal selection clustering algorithm (PSCSCA) is proposed in this paper. First, an immune vaccine operator is introduced into the classical clonal selection algorithm; it injects a priori knowledge of the problem at hand to accelerate convergence. Second, a point symmetry-based similarity measure is used to evaluate the similarity between two samples. Finally, kd-tree-based approximate nearest-neighbor search and a k-nearest-neighbor consistency strategy are used to reduce the computational complexity and improve the clustering accuracy. In the experiments, four real-life data sets and four synthetic data sets are used to test the performance of PSCSCA, which is compared with several existing algorithms in terms of clustering accuracy and convergence speed. In addition, PSCSCA is applied to a real-world task, natural image compression, with good performance being obtained.
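The point symmetry-based similarity measure can be sketched after a classic point-symmetry distance that this line of work builds on: a point is close to a cluster center in the symmetry sense if its reflection through the center lies near some other data point. The kd-tree acceleration and the immune operators are omitted here.

```python
import numpy as np

def ps_distance(x, center, points):
    """Point-symmetry distance of x w.r.t. a candidate cluster center.

    Compares the reflection of x through the center against every data
    point; 0 means a perfectly symmetric counterpart exists.
    """
    x = np.asarray(x, float)
    c = np.asarray(center, float)
    pts = np.asarray(points, float)
    # (x - c) + (p - c) vanishes when p is the mirror image of x about c.
    num = np.linalg.norm((x - c) + (pts - c), axis=1)
    den = np.linalg.norm(x - c) + np.linalg.norm(pts - c, axis=1) + 1e-12
    return float((num / den).min())
```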
9.
The box-covering method is widely used to measure the fractal properties of complex networks. Finding the minimum number of boxes to tile a network is NP-hard, and many algorithms have been proposed to solve this problem. All current box-covering algorithms treat box-number minimization as the only objective. However, the fractal modularity of the network partition produced by box covering has been shown to be strongly related to information transport in complex networks. Maximizing the fractal modularity is therefore also important; it can be divided into two objectives, maximization of the ratio association and minimization of the ratio cut. In this paper, to resolve the dilemma of simultaneously minimizing the box number and maximizing the fractal modularity, a multiobjective discrete particle swarm optimization box-covering (MOPSOBC) algorithm is proposed. MOPSOBC applies a decomposition approach to the two objectives to approximate the Pareto front. It has been applied to six benchmark networks and compared with state-of-the-art algorithms, including two classical box-covering algorithms, four single-objective optimization algorithms, and six multiobjective optimization algorithms. The experimental results show that MOPSOBC attains box numbers similar to the current best algorithm while outperforming the state-of-the-art algorithms on fractal modularity and normalized mutual information.
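A classical single-objective baseline of the kind MOPSOBC is compared against can be sketched as greedy box burning: repeatedly pick an uncovered node and place a box over everything within radius r of it. This sketch uses a fixed node order instead of the usual random center choice, and is not the paper's multiobjective algorithm.

```python
from collections import deque

def ball(adj, src, r):
    """All nodes within graph distance r of src (BFS on an adjacency dict)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        if dist[u] == r:
            continue
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return set(dist)

def burn_boxes(adj, r):
    """Greedy covering of the whole network with radius-r boxes."""
    uncovered = set(adj)
    boxes = []
    for seed in sorted(adj):
        if seed not in uncovered:
            continue
        box = ball(adj, seed, r) & uncovered
        uncovered -= box
        boxes.append(box)
    return boxes
```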
10.
Automatic network clustering is an important technique for mining the meaningful communities (or clusters) of a network: groups of nodes where the intra-cluster connection density is high and the inter-cluster connection density is low. The most popular scheme partitions all nodes into clusters by maximizing a criterion function known as modularity, but modularity is known to suffer from the resolution limit problem, which remains an open challenge. In this paper, automatic network clustering is formulated as a constrained optimization problem: maximizing a criterion function under a density constraint. Under this scheme, the resulting algorithm is free from the resolution limit problem. Furthermore, the density constraint is found to improve the detection accuracy of modularity optimization. The efficiency of the proposed scheme is verified by comparative experiments on large-scale benchmark networks.
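The constrained formulation can be sketched as standard Newman-Girvan modularity plus a feasibility check. The modularity formula below is standard; the form of the density constraint (intra-cluster edge density of at least rho) is a hedged guess at the paper's constraint, for illustration only.

```python
def modularity(adj, communities):
    """Newman-Girvan modularity of a partition of an undirected graph.

    Q = sum over communities of (e_c / m - (d_c / 2m)^2), where e_c is the
    number of intra-community edges and d_c the total degree.
    """
    m = sum(len(vs) for vs in adj.values()) / 2.0
    q = 0.0
    for com in communities:
        inside = sum(1 for u in com for v in adj[u] if v in com) / 2.0
        degsum = float(sum(len(adj[u]) for u in com))
        q += inside / m - (degsum / (2.0 * m)) ** 2
    return q

def satisfies_density(adj, communities, rho):
    """Hypothetical density constraint: each cluster's edge density >= rho."""
    for com in communities:
        n = len(com)
        if n < 2:
            continue
        inside = sum(1 for u in com for v in adj[u] if v in com) / 2.0
        if inside / (n * (n - 1) / 2.0) < rho:
            return False
    return True
```

A constrained optimizer would then search only over partitions passing `satisfies_density`, which rules out the over-merged clusters behind the resolution limit.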