Similar Articles
20 similar articles found.
1.
Target Re-identification Based on Online Multiple Kernel Learning in Video Surveillance
陈方  许允喜 《光电工程》2012,39(9):65-71
In non-overlapping multi-camera or single-camera video surveillance, recognizing the reappearance of a tracked target is important. To address the shortcomings of traditional support vector machine methods in feature fusion, this paper proposes a new person re-identification method based on online multiple kernel learning. The method extracts two complementary features from the foreground image sequence of the tracked target, a visual-word-tree histogram and a global color histogram, and then uses online multiple kernel learning to train the person's visual appearance, yielding a multi-kernel feature-fusion model. Experimental results show that the method trains person appearance models quickly enough to meet the real-time requirements of video surveillance, and that the multi-kernel fusion model achieves higher recognition performance than single-feature models and single-kernel SVM methods.
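The kernel-fusion step can be illustrated with a minimal sketch. This is not the paper's online MKL trainer: it assumes fixed, already-learned kernel weights, uses the histogram intersection kernel for both histogram features, and all function names are illustrative.

```python
import numpy as np

def hist_intersection_kernel(X, Y):
    """Histogram intersection kernel: K[i, j] = sum_k min(X[i, k], Y[j, k])."""
    return np.array([[np.minimum(x, y).sum() for y in Y] for x in X])

def fused_kernel(feature_sets_a, feature_sets_b, betas):
    """Convex combination of per-feature kernels: K = sum_m beta_m * K_m."""
    betas = np.asarray(betas, dtype=float)
    betas = betas / betas.sum()          # keep the weights on the simplex
    K = np.zeros((len(feature_sets_a[0]), len(feature_sets_b[0])))
    for beta, Xa, Xb in zip(betas, feature_sets_a, feature_sets_b):
        K += beta * hist_intersection_kernel(Xa, Xb)
    return K
```

With L1-normalized histograms, each per-feature kernel of a sample with itself is 1, so the fused diagonal stays 1 for any simplex weights.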

2.
Because materials data are typically small-sample, high-dimensional, and noisy, machine learning models built on them often produce results inconsistent with domain experts' knowledge. Embedding materials domain knowledge throughout the machine learning workflow is an effective way to address this problem, and the accuracy of materials data directly affects the reliability of data-driven property prediction. Targeting the data-preprocessing stage of machine learning applications, this study proposes a data-accuracy detection method that incorporates materials domain knowledge. The method first builds a materials domain knowledge base from expert knowledge, then combines it with data-driven accuracy detection to examine the dataset from both the data and domain-knowledge perspectives: single-dimension correctness checks based on descriptor value rules, multi-dimension consistency checks based on descriptor correlation rules, and full-dimension reliability checks based on a multi-dimensional similar-sample identification strategy. Anomalous data identified at each stage are corrected with reference to domain knowledge, and domain knowledge is woven into the whole detection process so that the dataset is accurate from the start. Experiments on a dataset for predicting the activation energy of NASICON-type solid electrolytes show that the method effectively identifies anomalous data and corrects them reasonably. Compared with the original dataset, all six machine learning models trained on the corrected dataset improved in prediction accuracy, with R2 on the best model improving by 33%.
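The single-dimension, rule-based correctness check described above can be sketched minimally. The descriptor names and ranges here are hypothetical illustrations, not values from the paper's knowledge base.

```python
# Hypothetical descriptor rules; names and ranges are illustrative only.
RULES = {
    "ionic_radius_A": (0.3, 2.5),        # plausible physical bounds, in angstroms
    "activation_energy_eV": (0.0, 2.0),  # plausible range for solid electrolytes
}

def check_row(row, rules=RULES):
    """Single-dimension correctness check: flag descriptors outside their rule range."""
    violations = []
    for name, (lo, hi) in rules.items():
        value = row.get(name)
        if value is not None and not (lo <= value <= hi):
            violations.append(name)
    return violations
```

Rows with violations would then be routed to the domain-knowledge-based correction step rather than dropped outright.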

3.
Despite advances over the last decades in the field of smart grids, forecasting energy consumption from meteorological features remains challenging. This paper proposes a genetic algorithm-based adaptive error curve learning ensemble (GA-ECLE) model, a machine-learning ensemble approach that copes with stochastic variation to improve energy consumption forecasting. A modified ensemble model that uses the model's error as a feature improves forecast accuracy. The approach combines three models: CatBoost (CB), Gradient Boost (GB), and Multilayer Perceptron (MLP). Internally, the ensembled CB-GB-MLP model generates meta-data from the Gradient Boosting and CatBoost models and computes the final predictions with the Multilayer Perceptron network. A genetic algorithm selects the optimal features for the model. To demonstrate the model's effectiveness, we used a four-phase evaluation on Jeju Island's real energy consumption data: in the first phase we obtained results with the CB-GB-MLP model; in the second we used a GA-ensembled model with optimal features; the third phase compares the energy forecasting results with the proposed ECL-based model; and in the fourth and final phase we applied the GA-ECLE model. We obtained a mean absolute error of 3.05 and a root mean square error of 5.05. Extensive experimental results demonstrate the superiority of the proposed GA-ECLE model over traditional ensemble models.
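The meta-feature idea behind the CB-GB-MLP stack can be sketched with stand-in components: two simple base learners replace CatBoost and Gradient Boost, and a least-squares fit replaces the MLP meta-learner, keeping the sketch dependency-free. For brevity the meta-model is fit on in-sample base predictions, which a real stack would avoid via out-of-fold predictions.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(200, 1))
y = 2.0 * X[:, 0] + 1.0 + rng.normal(0, 0.1, 200)

# Two simple base learners stand in for CatBoost / Gradient Boost.
def base_linear(Xtr, ytr, Xte):
    A = np.c_[Xtr, np.ones(len(Xtr))]
    coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
    return np.c_[Xte, np.ones(len(Xte))] @ coef

def base_knn(Xtr, ytr, Xte, k=5):
    preds = []
    for x in Xte:
        idx = np.argsort(np.abs(Xtr[:, 0] - x[0]))[:k]
        preds.append(ytr[idx].mean())
    return np.array(preds)

# Meta level: stack the base predictions as features for a final fit
# (the paper uses an MLP here; least squares keeps the sketch minimal).
train, test = slice(0, 150), slice(150, 200)
meta_train = np.c_[base_linear(X[train], y[train], X[train]),
                   base_knn(X[train], y[train], X[train])]
meta_test = np.c_[base_linear(X[train], y[train], X[test]),
                  base_knn(X[train], y[train], X[test])]
w, *_ = np.linalg.lstsq(np.c_[meta_train, np.ones(len(meta_train))], y[train], rcond=None)
final = np.c_[meta_test, np.ones(len(meta_test))] @ w
```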

4.
Quantum machine learning (QML) is a rapidly rising research field that combines ideas from quantum computing and machine learning to develop emerging tools for scientific research and data processing. Efficiently controlling or manipulating a quantum system is a fundamental and vexing problem in quantum computing, and it can be described as learning or approximating a unitary operator. Motivated by the recent success of hybrid quantum machine learning models, we apply QML techniques to this problem. Using the Choi–Jamiołkowski isomorphism, we transform the original problem of learning a unitary operator into a min-max optimization problem, which can also be viewed as a quantum generative adversarial network. In addition, we use the spectral norm between the target and generated unitary operators as a regularization term in the loss function. Following the hybrid quantum-classical framework widely used in quantum machine learning, we employ a variational quantum circuit and gradient-descent-based optimizers to solve the min-max problem. Our numerical experiments show that the proposed method successfully approximates the desired unitary operator and dramatically reduces the number of quantum gates relative to the traditional approach. The average fidelity between the states produced by applying the target and generated unitaries to random input states is around 0.997.
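The fidelity figure can be illustrated with a standard closeness measure between two unitaries, |Tr(U†V)|/d; this is a generic sketch closely related to, but not necessarily identical with, the paper's exact evaluation.

```python
import numpy as np

def unitary_fidelity(U, V):
    """|Tr(U† V)| / d, a standard measure of how close two unitaries are;
    it is 1 exactly when V equals U up to a global phase."""
    d = U.shape[0]
    return abs(np.trace(U.conj().T @ V)) / d

# Example: a small rotation approximating the identity.
theta = 0.05
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
I = np.eye(2)
```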

5.
Electric kickboards have been popularized, and their penetration rate keeps increasing, primarily because they are clean and efficient; they are gradually gaining popularity in tourist and education-centric localities. As electric kickboards arrive at scale, deploying a customer rental service becomes essential. Owing to its free-floating nature, the shared electric kickboard is a common and practical means of transportation, but relocation plans are required to maintain service quality, which makes forecasting demand in a specific region crucial. Predicting demand accurately from small data is troublesome, since machine learning algorithms need extensive data for effective training; data generation is a way to expand the amount of data available for training. In this work, we propose a model that takes time-series customer demand data for electric kickboards as input, pre-processes it, and generates synthetic data matching the original data distribution using generative adversarial networks (GAN). Combining the synthetic data with the original data reduced the demand-prediction error. We propose Tabular-GAN-Modified-WGAN-GP to generate synthetic data for better prediction results: we modify the Wasserstein GAN with gradient penalty (WGAN-GP) to use the RMSprop optimizer and employ spectral normalization (SN) for more stable training and faster convergence. Finally, we apply a regression-based blending ensemble technique to further improve demand-prediction performance. We compare the proposed model's performance using various evaluation criteria and visual representations, and we also evaluate the synthetic data generated by the proposed GAN model.
The TGAN-Modified-WGAN-GP model mitigates overfitting and mode collapse and converges faster than previous GAN models for synthetic data creation. Compared with existing ensemble and baseline models, the experimental findings show that combining synthetic and actual data significantly reduces prediction error, achieving a mean absolute percentage error (MAPE) of 4.476, and increases prediction accuracy.
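The RMSprop optimizer adopted in the modified WGAN-GP can be sketched in isolation; this is the generic update rule demonstrated on a toy quadratic, not the paper's GAN training loop.

```python
import numpy as np

def rmsprop(grad_fn, w0, lr=0.05, rho=0.9, eps=1e-8, steps=200):
    """RMSprop: scale each step by a running root-mean-square of recent gradients."""
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(steps):
        g = grad_fn(w)
        v = rho * v + (1 - rho) * g * g   # exponential average of squared gradients
        w = w - lr * g / (np.sqrt(v) + eps)
    return w

# Minimize f(w) = (w - 3)^2, whose gradient is 2 (w - 3).
w_star = rmsprop(lambda w: 2 * (w - 3.0), np.array([0.0]))
```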

6.
A big hurdle for financial institutions is deciding which candidates should receive a line of credit, i.e., identifying the right people without incurring credit risk. For such a crucial decision, past demographic and financial data of debtors are important for building an automated, machine-learning-based credit score prediction model. In addition, building robust and accurate machine learning models requires selecting the important input predictors (debtor information). The present computational work focuses on building a credit scoring prediction model using the publicly available German credit dataset. We show that credit scoring prediction improves with different feature selection techniques (Information Gain, Gain Ratio, and Chi-Square) and machine learning classifiers (Bayesian, Naive Bayes, Random Forest, Decision Tree (C5.0), and Support Vector Machine (SVM)). We then compare the classifiers and the feature selection techniques against each other, using evaluation metrics including accuracy, F-measure, false positive rate, false negative rate, and training time. The analysis identifies the best combination of classifier and feature selection technique: Random Forest (RF) with Chi-Square (CS) performs well in accuracy, F-measure, and low false positive and false negative rates, although its training time is slightly higher; C5.0 with Chi-Square was comparable to the best. This study gives financial institutions an opportunity to build an automated model for better credit scoring.
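Information gain, one of the listed feature-selection criteria, is easy to sketch in plain Python; gain ratio and chi-square scoring follow the same per-feature pattern.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy H(Y) in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature_values, labels):
    """IG(Y; X) = H(Y) - sum_v p(X = v) * H(Y | X = v), for a discrete feature."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [y for x, y in zip(feature_values, labels) if x == v]
        cond += (len(subset) / n) * entropy(subset)
    return entropy(labels) - cond
```

Features would be ranked by this score and the top-scoring ones passed to the classifier.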

7.
Because of their outstanding ability to process large quantities of high-dimensional data, machine learning models have been used in many settings, such as pattern recognition, classification, spam filtering, data mining, and forecasting. K-Nearest Neighbor (KNN) is an outstanding machine learning algorithm that has been widely applied, yet using it to select qualified applicants for funding is almost new. The major problem lies in accurately determining the importance of attributes. In this paper, we propose a Feature-weighted Gradient Descent K-Nearest Neighbor (FGDKNN) method to classify funding applicants into two classes: approved or not approved. FGDKNN uses a gradient descent learning algorithm to update the feature weights, iteratively minimizing the error rate so that the importance of each attribute is described better. We evaluate the performance of FGDKNN on Beijing Innofund data. The results show that FGDKNN performs about 23%, 20%, 18%, and 15% better than KNN, SVM, DT, and ANN, respectively. Moreover, FGDKNN converges quickly under different training scales and performs well under different settings.
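The core idea of FGDKNN, a per-feature weight inside the KNN distance, can be sketched as follows. The gradient-descent weight-learning loop is omitted; `w_learned` simply shows the kind of weights such learning should approach on toy data where only one feature is informative.

```python
import numpy as np
from collections import Counter

def weighted_knn_predict(Xtr, ytr, x, w, k=3):
    """KNN with per-feature weights w inside the distance metric."""
    d = np.sqrt((((Xtr - x) ** 2) * w).sum(axis=1))
    idx = np.argsort(d)[:k]
    return Counter(ytr[idx].tolist()).most_common(1)[0][0]

# Toy data: only feature 0 separates the classes; feature 1 is pure noise.
rng = np.random.default_rng(1)
X = np.c_[np.r_[rng.normal(0, 0.3, 50), rng.normal(3, 0.3, 50)],
          rng.normal(0, 5.0, 100)]
y = np.array([0] * 50 + [1] * 50)

w_uniform = np.array([0.5, 0.5])
w_learned = np.array([0.99, 0.01])   # what weight learning should approach here
```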

8.
Crowd anomaly detection has become a challenge in intelligent video surveillance and security, and intelligent video surveillance systems make extensive use of data mining, machine learning, and deep learning methods. This paper proposes a novel deep learning approach to identifying abnormal occurrences in crowded scenes: an adaptive GoogleNet neural network classifier combined with a multi-objective whale optimization algorithm predicts the abnormal video frames. We use multiple instance learning (MIL) to dynamically develop a deep anomaly ranking framework, which predicts higher anomaly scores for abnormal video frames by treating regular and irregular videos as bags and video segments as instances. The multi-objective whale optimization algorithm optimizes the entire process for the best results. We evaluate the proposed technique with accuracy, precision, recall, and F-score using the Python simulation tool. Our simulation results show that the proposed method outperforms conventional methods on a public live video dataset.
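A common form of the MIL ranking objective for video anomaly detection scores each bag by its top segment and asks the anomalous bag to outrank the normal one by a margin. The abstract does not give the exact loss used, so this hinge form is an assumption for illustration.

```python
def mil_ranking_loss(scores_abnormal_bag, scores_normal_bag, margin=1.0):
    """Hinge-style MIL ranking loss: the highest-scoring segment of an anomalous
    bag should exceed the highest-scoring segment of a normal bag by `margin`."""
    return max(0.0, margin - max(scores_abnormal_bag) + max(scores_normal_bag))
```

In training, segment scores come from the network, and minimizing this loss pushes anomalous segments above normal ones without segment-level labels.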

9.
张煜莹  陆艺  赵静 《计量学报》2022,43(11):1456-1463
To address fault diagnosis when the spindle bearings and cutting tools of a CNC machine tool fail simultaneously, or when the spindle speed changes, a deep convolutional diagnosis model based on incremental learning is proposed. First, vibration datasets of spindle bearings and tools at common speeds are fed into a one-dimensional convolutional neural network with batch normalization, enabling fault diagnosis at a single speed. Then, unknown fault types encountered in cross-speed diagnosis are labeled manually and fed back into the network; incremental learning transfers the existing knowledge while the model learns the features of the new data. The model achieves cross-speed diagnosis accuracy of 76.49% to 86.09%, and compared with the classic cross-domain algorithms Fine Tuning and Joint Training, the incremental-learning deep convolutional diagnosis model improves accuracy and shortens training time.
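The batch normalization step mentioned above has a simple forward pass; a minimal training-time sketch (the inference-time running statistics are omitted):

```python
import numpy as np

def batch_norm_forward(x, gamma=1.0, beta=0.0, eps=1e-5):
    """Batch normalization over the batch axis: normalize each feature to
    zero mean and unit variance, then scale by gamma and shift by beta."""
    mu = x.mean(axis=0)
    var = x.var(axis=0)
    x_hat = (x - mu) / np.sqrt(var + eps)
    return gamma * x_hat + beta
```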

10.
Routine immunization (RI) of children is the most effective and timely public health intervention for decreasing child mortality rates around the globe. Pakistan, a low-and-middle-income country (LMIC), has one of the highest child mortality rates in the world, mainly due to vaccine-preventable diseases (VPDs). To improve RI coverage, a critical need is to identify potential RI defaulters at an early stage, so that appropriate interventions can be targeted at children at risk of missing their scheduled vaccinations. In this paper, a machine learning (ML) based predictive model is proposed to predict defaulting and non-defaulting children for upcoming immunization visits and to examine the underlying contributing factors. The model uses data from the Paigham-e-Sehat study, comprising immunization records of 3,113 children. The model is designed to balance accuracy, specificity, and sensitivity, so that its outcomes remain practically relevant to the problem; it is further optimized by selecting significant features and removing data bias. Nine machine learning algorithms were applied to predict defaulting children for the next immunization visit. The random forest model achieved the best accuracy of 81.9%, with 83.6% sensitivity and 80.3% specificity. The main determinants of vaccination coverage were vaccine coverage at birth, parental education, and the socio-economic conditions of the defaulting group. This information can help policy makers take proactive and effective measures to develop evidence-based, targeted, and timely interventions for defaulting children.
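The sensitivity and specificity figures reported above are standard confusion-matrix ratios; a minimal sketch, where label 1 marks a defaulter:

```python
def sens_spec(y_true, y_pred):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```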

11.
In recent times, images and videos have emerged as one of the most important sources of information about real-world scenes. Digital images now serve as input for many applications, replacing manual methods thanks to their ability to represent a 3D scene in a 2D plane. Combined with machine learning methodologies, digital images are showing promising accuracy in many prediction and pattern recognition applications. One such field is the detection of plant diseases that are destroying widespread fields. Traditionally, disease detection was done by a domain expert through manual examination and laboratory tests, a tedious and time-consuming process with insufficient accuracy. This creates room for research into automated methods in which images captured through sensors and cameras are used to detect disease and control its spread. Images captured in the field form the dataset that trains machine learning models to predict the nature of the disease. The accuracy of these models is greatly affected by the amount of noise and artifacts in the input images, the segmentation methodology, the construction of the feature vector, and the choice of machine learning algorithm. To ensure high performance, research is moving towards fine-tuning each stage separately while considering its dependencies on subsequent stages. The most effective pipeline therefore applies image processing to improve image quality, then statistical methods for feature extraction and selection; the resulting training vectors capture the relationship between the feature values and the target class.
In this article, a highly accurate system for detecting diseases in citrus fruits using a hybrid feature development approach is proposed, and the overall improvement in accuracy is measured and reported.

12.
A method is proposed for predicting the concentrations of blood components using an extreme learning machine optimized by an adaptive differential-evolution artificial bee colony algorithm. First, the artificial bee colony algorithm iteratively optimizes the input weights and hidden-layer biases; differential evolution is then incorporated to further improve model accuracy and avoid the tendency to fall into local optima in later iterations. Because the crossover and mutation rates of differential evolution are conventionally set by experience and thus uncertain, an adaptive adjustment scheme is finally introduced, yielding an extreme learning machine optimized by an adaptive differential-evolution artificial bee colony algorithm, which is applied to the quantitative analysis of blood components. Experiments show that the resulting model achieves high prediction accuracy and strong robustness.
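The extreme learning machine being optimized can be sketched in a few lines: the input weights and hidden biases are random (these are exactly the parameters the bee-colony/differential-evolution search would tune instead), and the output weights are solved analytically by least squares.

```python
import numpy as np

def train_elm(X, y, n_hidden=50, seed=0):
    """Extreme learning machine: random input weights, analytic output weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))  # random input weights
    b = rng.normal(size=n_hidden)                # random hidden biases
    H = np.tanh(X @ W + b)                       # hidden-layer activations
    beta = np.linalg.pinv(H) @ y                 # least-squares output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

rng = np.random.default_rng(42)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])
W, b, beta = train_elm(X, y)
```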

13.
The remaining useful life (RUL) of a machine is key information for predictive maintenance; without a predictive maintenance strategy, maintenance and breakdown costs increase. We apply transfer learning to develop a new method that predicts the RUL of target data using degradation trends learned from complete bearing test data, called source data. The training length of the model plays a crucial role in RUL prediction. First, an exponentially weighted moving average (EWMA) chart is used to identify anomalous points in the bearing signal and determine the starting point for training. Second, we propose transfer learning based on a bidirectional long short-term memory with attention mechanism (BiLSTMAM) model to estimate the RUL of ball bearings. A public dataset is used to compare the estimation performance of the BiLSTMAM model with several published models. The BiLSTMAM model with the EWMA chart achieves a score of 0.6702 on 11 target bearings. Accurate RUL estimation enables a reliable maintenance strategy that reduces unpredictable failures.
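The EWMA chart used to find the onset of degradation can be sketched as follows; treating the first half of the series as in-control for estimating the baseline is an assumption of this sketch, not necessarily the paper's procedure.

```python
def ewma_chart(values, lam=0.2, L=3.0):
    """EWMA control chart: flag indices where the smoothed statistic leaves
    mu0 +/- L * sigma * sqrt(lam / (2 - lam)) (asymptotic control limits)."""
    n0 = len(values) // 2                      # assume the first half is in-control
    baseline = values[:n0]
    mu0 = sum(baseline) / n0
    sigma = (sum((v - mu0) ** 2 for v in baseline) / n0) ** 0.5
    limit = L * sigma * (lam / (2 - lam)) ** 0.5
    z, alarms = mu0, []
    for i, v in enumerate(values):
        z = lam * v + (1 - lam) * z            # exponentially weighted average
        if abs(z - mu0) > limit:
            alarms.append(i)
    return alarms
```

The first alarm index after the shift would serve as the starting point of model training.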

14.
L. Leydesdorff 《Scientometrics》1990,19(3-4):297-324
The study discusses the application of various forms of time series analysis to national performance data for EEC countries and the US. First, it is shown that at the aggregated level a straightforward relation exists between output and input, which varies with time, and various analytical techniques to account for the time factor are discussed. Using information theory, a simple formula can be derived that gives the best prediction for the following year's data; this model is then extended to multivariate forecasting of distributions. The method also shows that, in terms of percentage of world share of publications, the hypothesis that the EEC develops as a single publication system must be rejected. When co-authorship relations among EEC member countries are used as an indicator, however, the predominance of a single system is suggested.

15.
Load forecasting has received considerable research attention as a way to reduce peak load and contribute to power grid stability using machine learning or deep learning models. In particular, an adequate model is needed to forecast the maximum load duration under time-of-use, the electricity pricing policy, in order to achieve goals such as peak load reduction. However, single machine learning or deep learning forecasters cannot easily avoid overfitting, and a majority of ensemble or hybrid models do not achieve optimal results for forecasting maximum load duration based on time-of-use. To overcome these limitations, we propose a hybrid deep learning architecture for this task. Experimental results indicate that the architecture achieves the highest average of recall and accuracy (83.43%) compared to benchmark models. To verify its effectiveness, a further experiment shows that an energy storage system (ESS) scheduled according to the forecasts of the proposed model (LSTM-MATO) could provide peak load cost savings of 17,535,700 KRW each year compared with the original peak load costs. The proposed architecture could therefore be used in practical applications such as peak load reduction in the grid.

16.
Rapid development and progress in deep machine learning techniques have become a key factor in solving future challenges. Vision-based target detection and object classification have improved with the development of deep learning algorithms, and data fusion from multiple sensors is a prerequisite preprocessing task in autonomous driving for precise, well-engineered, and complete detection of objects, scenes, or events. The goal of the current study is an in-vehicle information system that prevents, or at least mitigates, traffic issues related to parking detection and traffic congestion. We address these problems by (1) extracting regions of interest in the images, (2) detecting vehicles via instance segmentation, and (3) building a deep learning model on key features obtained from input parking images. The resulting deep machine learning system collects real video feeds from vision sensors and predicts free parking spaces. Image augmentation was performed using edge detection and cropping, refined by rotation, thresholding, resizing, and color augmentation, to predict bounding box regions. A deep convolutional neural network model, F-MTCNN, is proposed that can be compiled, trained, validated, and tested on parking video frames from the camera. On the publicly available PK-Lot parking dataset, the optimized model achieved an accuracy of 97.6%, higher than previously reported methods. The article also presents mathematical and simulation results using state-of-the-art deep learning technologies for smart parking space detection, verified using the Python, TensorFlow, and OpenCV frameworks.

17.
Traditional theoretical research, experiments, and computational simulation can no longer satisfy scientists' needs in exploring and designing new materials. Data-driven machine learning algorithms can accelerate materials screening and property prediction. Applying machine learning to materials informatics, this work builds a thermal-conductivity prediction model on an existing thermal-conductivity dataset and evaluates the machine learning regression models by cross-validation. The learned mapping from descriptors to thermal conductivity can be used for large-scale materials screening to guide experimental research.

18.
This paper proposes a novel forecasting method that combines a deep learning method, long short-term memory (LSTM) networks, with random forests (RF). The proposed method can model complex relationships of both temporal and regression type, which gives it an edge in accuracy over other forecasting methods. We evaluated the new method on a real-world multivariate dataset from a multi-channel retailer, benchmarking its forecasting performance against neural networks, multiple regression, ARIMAX, LSTM networks, and RF. Using forecasting performance metrics that measure bias, accuracy, and variance, the empirical evidence suggests that the new method is statistically significantly better. Furthermore, our method ranks the explanatory variables by their relative importance. The empirical evaluations are replicated for longer forecasting horizons and for online and offline channels, and the same conclusions hold, supporting the robustness of our forecasting method and its suitability for multi-channel retail demand forecasting.

19.
Intelligent recognition of underwater acoustic targets is an important part of the intelligentization of sonar equipment, and deep learning is one of the key techniques for achieving it. Underwater acoustic target recognition often faces insufficient training samples due to small datasets. To address the poor generalization caused by overfitting on small datasets, and the inconsistent formats of the two-dimensional spectrograms used as input, this paper proposes an underwater acoustic target recognition method based on the VGGish neural network. The method uses the VGGish network as a feature extractor, adds a signal preprocessing module in front of it, and designs a joint classifier based on traditional machine learning algorithms; together these measures resolve the overfitting problem and the inconsistent-spectrogram problem. Experimental results show that the method achieves a recognition accuracy of 94.397% on the ShipsEar dataset, higher than the best accuracy of 90.977% obtained with the traditional pretrain-finetune approach, while under the same conditions its training time is only about 0.5% of that of pretrain-finetune, effectively improving both recognition accuracy and training speed.

20.
Accurate and stable prediction of protein domain boundaries is an important avenue for the prediction of protein structure, function, evolution, and design. Recent research on protein domain boundary prediction has mainly been based on well-known machine learning techniques. In this paper, we propose a new machine-learning-based domain predictor, DomNet, that achieves more accurate and stable predictive performance than existing state-of-the-art models. DomNet is trained on a novel compact domain profile, secondary structure, solvent accessibility information, and an interdomain linker index to detect possible domain boundaries for a target sequence. Its performance was compared against nine machine learning models on the Benchmark_2 dataset in terms of accuracy, sensitivity, specificity, and correlation coefficient. DomNet achieved the best performance, with 71% accuracy for domain boundary identification in multidomain proteins. On the CASP7 benchmark dataset, it again outperformed contemporary domain boundary predictors such as DOMpro, DomPred, DomSSEA, DomCut, and DomainDiscovery.

