Similar Literature
 Found 9 similar documents (search time: 0 ms)
1.
One of the main benefits of unsupervised learning is that it requires no labelled data. As a method in this category, latent Dirichlet allocation (LDA) effectively estimates the semantic relations between the words of a text and can play an important role in various tasks, including sentiment analysis in combination with other parameters. In this study, three novel topic models are proposed: date sentiment LDA (DSLDA), author–date sentiment LDA (ADSLDA), and pack–author–date sentiment LDA (PADSLDA). The proposed models extend LDA with extra parameters such as date, author, helpfulness, sentiment, and subtopic, and they incorporate helpfulness into the Gibbs sampling algorithm; here, helpfulness is the proportion of readers who found a review helpful. The models divide the words into two categories: those more affected by the subtopic distribution and those more affected by the main topic. The study also introduces a new concept called a pack, and the PADSLDA model performs sentiment analysis at the pack level. The proposed models outperformed the baseline models: the evaluation results show that the extra parameters appropriately influence the generative process of words in a review. Document-level sentiment analysis, perplexity, and topic coherence are the main evaluation criteria.
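The abstract does not reproduce the samplers' update equations; purely as an illustration of how a per-review helpfulness weight can enter collapsed Gibbs sampling for LDA, the following minimal Python sketch scales each review's count contributions by its helpfulness. The function name, the weighting scheme, and all parameter values are assumptions for illustration; the actual DSLDA/ADSLDA/PADSLDA samplers also condition on date, author, sentiment, and subtopic, which are not modelled here.

```python
import numpy as np

def gibbs_lda(docs, helpfulness, V, K=5, alpha=0.1, beta=0.01, iters=200):
    """Collapsed Gibbs sampling for plain LDA, with every token's count
    contribution scaled by its review's helpfulness (fraction of readers
    who found the review helpful). Illustrative sketch only."""
    rng = np.random.default_rng(0)
    ndk = np.zeros((len(docs), K))          # review-topic counts (weighted)
    nkw = np.zeros((K, V))                  # topic-word counts (weighted)
    nk = np.zeros(K)                        # topic totals (weighted)
    z = [rng.integers(K, size=len(d)) for d in docs]
    for d, doc in enumerate(docs):          # initialise weighted counts
        h = helpfulness[d]
        for i, w in enumerate(doc):
            k = z[d][i]
            ndk[d, k] += h; nkw[k, w] += h; nk[k] += h
    for _ in range(iters):
        for d, doc in enumerate(docs):
            h = helpfulness[d]
            for i, w in enumerate(doc):
                k = z[d][i]                 # remove the token, then resample
                ndk[d, k] -= h; nkw[k, w] -= h; nk[k] -= h
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = rng.choice(K, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += h; nkw[k, w] += h; nk[k] += h
    return ndk, nkw

# e.g. gibbs_lda(docs=[[0, 1, 2], [2, 3, 3]], helpfulness=[0.9, 0.4], V=4)
```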

2.
The literature on supervised machine-learning (ML) approaches for classifying text-based safety reports in the construction sector has been growing. Recent studies have emphasized the need to build ML approaches that balance high classification accuracy against management criteria such as resource intensiveness. However, despite being highly accurate, the heavily studied supervised ML approaches may not perform well on management criteria, since many factors contribute to their resource intensiveness. The potential for semi-supervised ML approaches to achieve this balance has rarely been explored in the construction safety literature. The current study contributes to this scarce knowledge by demonstrating the applicability of a state-of-the-art semi-supervised learning approach: Yet Another Keyword Extractor (YAKE) integrated with guided latent Dirichlet allocation (GLDA) for construction safety report classification. Construction-safety-specific knowledge is extracted as keywords through YAKE, relying on accessible literature with minimal manual intervention. Keywords from YAKE are then seeded into the GLDA model for automatic classification of safety reports without requiring a large prelabeled dataset. The YAKE-GLDA classification performance (F1 score of 0.66) is superior to existing unsupervised methods on the benchmark data containing injury narratives from the Occupational Safety and Health Administration (OSHA). The YAKE-GLDA approach is also applied to near-miss safety reports from a construction site, where a moderately high F1 score of 0.86 for a few categories demonstrates a high degree of generality. Unlike existing supervised approaches, the semi-supervised YAKE-GLDA approach can consistently achieve reasonably good classification performance across various construction-specific safety datasets while remaining resource-efficient. Results from an objective comparative and sensitivity analysis provide much-needed insight into the functioning and applicability of YAKE-GLDA. The results will help construction organizations implement and optimize an efficient ML-based knowledge-mining strategy for domains beyond safety and across sites where the availability of a prelabeled dataset is a significant limitation.
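Assuming the open-source Python packages yake and guidedlda, the pipeline described here might be wired together roughly as below. The category names, reference texts, and reports are placeholders, and the seeding confidence is illustrative rather than the study's tuned value.

```python
import numpy as np
import yake
import guidedlda
from sklearn.feature_extraction.text import CountVectorizer

# 1) Extract safety-domain keywords per category from reference literature
#    (lower YAKE score = more relevant keyword).
category_docs = {"fall": "fall protection scaffold guardrail harness anchor",
                 "struck-by": "struck by crane load vehicle equipment swing"}
extractor = yake.KeywordExtractor(lan="en", n=1, top=10)
seed_words = {c: [kw for kw, _ in extractor.extract_keywords(t)]
              for c, t in category_docs.items()}

# 2) Build a document-term count matrix for the unlabeled safety reports
reports = ["worker fell from scaffold without harness",
           "crane load swung and struck worker on site"]
vec = CountVectorizer()
X = vec.fit_transform(reports).toarray().astype(np.int64)

# 3) Seed one GLDA topic per safety category with the YAKE keywords
seed_topics = {vec.vocabulary_[w]: t
               for t, words in enumerate(seed_words.values())
               for w in words if w in vec.vocabulary_}
model = guidedlda.GuidedLDA(n_topics=len(seed_words), n_iter=100,
                            random_state=7, refresh=50)
model.fit(X, seed_topics=seed_topics, seed_confidence=0.15)

# 4) Each report takes the category of its dominant topic
labels = model.doc_topic_.argmax(axis=1)
```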

3.
An image classification algorithm based on the latent Dirichlet allocation model
杨赛, 赵春霞. 《计算机工程》 (Computer Engineering), 2012, 38(14): 181-183
The probabilistic latent semantic analysis (pLSA) model does not scale to large image datasets, so an image classification algorithm based on the latent Dirichlet allocation (LDA) model is proposed. Bag-of-features (BOF) descriptors serve as the initial representation of image content; Gibbs sampling is used to approximate the LDA model parameters and obtain each image's latent topic distribution; and a k-nearest-neighbour classifier then assigns the image to a class. Experimental results show that the algorithm outperforms classification based on the pLSA model.
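As a rough sketch of this pipeline (bag-of-features representation, topic inference, k-NN classification), the following fragment uses scikit-learn. Note that scikit-learn's LDA is fitted with variational Bayes rather than the Gibbs sampling used in the paper, and the BOF matrix and labels here are random placeholders standing in for quantized local descriptors.

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.neighbors import KNeighborsClassifier

# Assume X_bof is an (n_images, n_visual_words) bag-of-features count
# matrix, e.g. from quantizing SIFT descriptors against a visual codebook.
rng = np.random.default_rng(0)
X_bof = rng.integers(0, 5, size=(200, 500))
y = rng.integers(0, 10, size=200)           # placeholder class labels

# The paper estimates LDA with Gibbs sampling; scikit-learn's version
# uses variational Bayes instead, but yields the same kind of
# image-level latent topic distribution.
lda = LatentDirichletAllocation(n_components=30, random_state=0)
theta = lda.fit_transform(X_bof)            # per-image topic proportions

# Classify images in topic space with k-nearest neighbours
knn = KNeighborsClassifier(n_neighbors=5).fit(theta[:150], y[:150])
print(knn.score(theta[150:], y[150:]))
```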

4.
Software flexibility and project efficiency are deemed desirable but conflicting goals during software development. We considered the links between project performance, software flexibility, and management interventions. Specifically, we examined software flexibility as a mediator between two recommended management control mechanisms (management review and change control) and project performance. The model was empirically evaluated using data collected from 212 project managers in the Project Management Institute. Our results confirmed that the level of control activities during the system development process was a significant facilitator of software flexibility, which, in turn, enhanced project success. The mediating role of software flexibility implies that higher levels of management control can achieve higher levels of software flexibility, benefiting not only the maintainability of complex applications but also project performance.
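The abstract does not state the estimation method (survey studies like this one typically use structural equation modelling); purely as a toy, Baron-and-Kenny-style illustration of testing a mediator with statsmodels on synthetic data, with all variable names and coefficients invented, one could proceed as follows.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical survey data: management controls (X), software
# flexibility (M, the proposed mediator), project performance (Y).
rng = np.random.default_rng(0)
n = 212                               # sample size matching the study
X = rng.normal(size=n)                # level of management control
M = 0.6 * X + rng.normal(size=n)      # flexibility driven by controls
Y = 0.5 * M + 0.1 * X + rng.normal(size=n)

# Classic three-regression mediation check:
total = sm.OLS(Y, sm.add_constant(X)).fit()                    # X -> Y
a_path = sm.OLS(M, sm.add_constant(X)).fit()                   # X -> M
b_path = sm.OLS(Y, sm.add_constant(np.column_stack([X, M]))).fit()

# If X's direct coefficient shrinks once M is controlled for,
# flexibility mediates the control -> performance link.
print(total.params[1], b_path.params[1], b_path.params[2])
```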

5.
苏莹, 张勇, 胡珀, 涂新辉. 《计算机应用》 (Journal of Computer Applications), 2016, 36(6): 1613-1618
To address the difficulty that sentiment analysis requires large amounts of manually annotated corpora, a hierarchical generative model for unsupervised sentiment analysis is proposed. The model combines naive Bayes (NB) with latent Dirichlet allocation (LDA): given only a suitable sentiment lexicon, and with no document-level or sentence-level annotation, it analyses the sentiment orientation of online reviews at the document level and the sentence level simultaneously. The model assumes that each sentence, rather than each word, carries a latent sentiment variable, which then generates a set of independent features in the naive Bayes manner. Introducing the naive Bayes assumption allows the model to incorporate natural language processing (NLP) techniques such as dependency parsing and syntactic parsing to improve unsupervised sentiment analysis. Experimental results on two sentiment corpora show that the model automatically infers sentiment polarity at both the document and sentence levels, with accuracy significantly better than other unsupervised methods and even approaching some semi-supervised and supervised approaches.
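A minimal sketch of the generative assumption described here, i.e., each sentence draws one latent sentiment that then emits its word features independently in the naive Bayes manner, might look as follows. The vocabulary size, mixture weights, and word distributions are invented for illustration and do not come from the paper.

```python
import numpy as np

# Each *sentence* draws a latent sentiment s, then emits its words
# independently given s (naive Bayes). A sentiment lexicon would seed
# the per-sentiment word distributions phi in the real model.
V, S = 1000, 2                            # vocabulary; {negative, positive}
rng = np.random.default_rng(0)
phi = rng.dirichlet(np.ones(V), size=S)   # per-sentiment word distributions
pi = np.array([0.4, 0.6])                 # document-level sentiment mixture

def generate_sentence(length=8):
    s = rng.choice(S, p=pi)               # latent sentence-level sentiment
    words = rng.choice(V, size=length, p=phi[s])  # words independent given s
    return s, words

def sentence_posterior(words):
    """Infer P(s | sentence) under the naive Bayes assumption."""
    loglik = np.log(pi) + np.log(phi[:, words]).sum(axis=1)
    p = np.exp(loglik - loglik.max())
    return p / p.sum()
```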

6.
The use of data mining in software engineering has been the subject of several research papers, most of which focus on using historical data for decision-making activities such as cost estimation and the prediction of product or project attributes. This paper addresses the ability to predict faulty software modules and to correlate faulty modules with product attributes using statistics. Correlations between the attributes and the categorical class variable are studied by generating a pool of records from each dataset, repeatedly selecting two samples, and comparing them: whether the module defect attribute changes from faulty to non-faulty (or the opposite) between the two records, and how each evaluated attribute changes (equal, larger, or smaller). The goal was to determine whether certain attributes consistently accompany a change in module state from faulty to non-faulty or the opposite. Results indicated that this technique can be very useful for studying the correlation between each attribute and the defect-status attribute. A prediction algorithm is also developed based on statistics of the module and the overall dataset; it produces, for each attribute, separate predictions for the true class and the faulty class. We found that dividing each attribute's prediction capability into these two parts (i.e., correct and faulty module prediction) makes the impact of attribute values on the class easier to understand, and hence improves the overall prediction relative to previous studies and data mining algorithms. Results were evaluated and compared with other algorithms and previous studies, using ROC metrics to assess the developed metrics. These results showed that accuracy computed traditionally, as the number of correctly predicted records divided by the total number of records in the dataset, does not necessarily indicate a good metric or algorithm; such figures can be misleading if other metrics are not considered alongside them. The ROC metrics revealed other important aspects of performance and accuracy.
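As an illustration of the pairwise-comparison idea, i.e., drawing pairs of records whose defect status differs and tallying how each attribute moved, a sketch might look as follows. The attribute names and data are hypothetical, not the datasets used in the paper.

```python
import itertools
import numpy as np
import pandas as pd

# Tally, per attribute, how the value moved (smaller / equal / larger)
# across pairs of records whose defect status flips.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "loc": rng.integers(10, 500, 100),
    "complexity": rng.integers(1, 30, 100),
    "defective": rng.integers(0, 2, 100),
})
attrs = ["loc", "complexity"]
tally = {a: {"smaller": 0, "equal": 0, "larger": 0} for a in attrs}

for i, j in itertools.combinations(df.index, 2):
    if df.at[i, "defective"] == df.at[j, "defective"]:
        continue                     # keep only pairs whose status flips
    if df.at[i, "defective"] == 1:   # orient the pair: i is non-faulty
        i, j = j, i
    for a in attrs:
        d = df.at[j, a] - df.at[i, a]
        tally[a]["larger" if d > 0 else "smaller" if d < 0 else "equal"] += 1

print(tally)   # attributes that rise consistently with defects stand out
```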

7.
Software changes for numerous reasons, such as changing requirements, changing technology, increasing customer demands, and defect fixes. Identifying and analyzing the change-prone classes of software during its evolution is therefore gaining wide importance in software engineering, as it helps developers judiciously allocate the resources used for testing and maintenance. Software metrics can be used to construct classification models for the timely identification of change-prone classes. Search-based algorithms, a subset of machine learning algorithms, can be utilized to construct such prediction models; they use a fitness function to find the best solution among all possible solutions. In this work, we analyze the effectiveness of hybridized search-based algorithms for change prediction; in other words, the aim is to find whether search-based algorithms can construct accurate models for predicting change-prone classes. We also constructed models using machine learning techniques and compared their performance with the models built using search-based algorithms. The validation was carried out on two open-source Apache projects, Rave and Commons Math. The results prove the effectiveness of hybridized search-based algorithms in predicting change-prone classes, so software developers can utilize them to produce more efficient, better-developed software.
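The abstract does not specify which hybridized algorithms were used; purely as an illustration of the search-based idea, i.e., evolving candidate solutions under a fitness function, the following sketch uses a small genetic algorithm to select features for a change-prediction classifier, with cross-validated accuracy as the fitness. The dataset, model, and GA settings are placeholders, not the paper's setup.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for a class-level software-metrics dataset.
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
rng = np.random.default_rng(0)

def fitness(mask):
    """Cross-validated accuracy of a model on the selected features."""
    if not mask.any():
        return 0.0
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X[:, mask], y, cv=3).mean()

pop = rng.integers(0, 2, size=(20, X.shape[1])).astype(bool)
for _ in range(15):                                   # generations
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]           # selection
    cut = rng.integers(1, X.shape[1], size=10)
    children = np.array([np.concatenate([parents[i % 10][:c],
                                         parents[(i + 1) % 10][c:]])
                         for i, c in enumerate(cut)])  # one-point crossover
    flips = rng.random(children.shape) < 0.05          # mutation
    pop = np.vstack([parents, children ^ flips])

best = pop[np.argmax([fitness(m) for m in pop])]
print("selected features:", np.flatnonzero(best))
```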

8.
Susan, James, Dan, Gerald. Journal of Systems and Software, 2009, 82(10): 1568-1577
This paper introduces an executable system dynamics simulation model developed to help project managers comprehend the complex impacts of requirements volatility on a software development project. The simulator extends previous research by adding results from an empirical survey, including over 50 new parameters derived from the survey data, to a base model. The paper discusses detailed results from two cases that show significant cost, schedule, and quality impacts resulting from requirements volatility. The simulator can be used as an effective tool to demonstrate the complex set of factor relationships and effects related to requirements volatility.
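The simulator itself is not reproduced in the abstract; the following is a deliberately tiny stock-and-flow sketch of the underlying system dynamics idea, in which volatility re-opens a share of completed work and thereby inflates schedule and effort. All rates here are invented, whereas the paper's model carries over 50 survey-derived parameters.

```python
# Stocks: backlog (open work) and done (accepted work); flows per week.
weeks, productivity, volatility = 60, 40.0, 0.15
backlog, done, effort = 1000.0, 0.0, 0.0
for week in range(weeks):
    completed = min(productivity, backlog)   # outflow: work finished
    rework = volatility * completed          # churn re-opens part of it
    backlog += rework - completed
    done += completed - rework
    effort += completed                      # effort spent, rework included
    if backlog < 1:                          # less than one task remaining
        break

# Without churn the project needs 1000/40 = 25 weeks; a volatility of
# 0.15 stretches this to roughly 1000/(40*0.85) ~ 30 weeks of effort.
print(f"done after {week + 1} weeks, {effort:.0f} task-units of effort")
```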

9.
This paper comparatively analyzes a method to automatically classify case studies of building information modeling (BIM) in construction projects by BIM use. Identifying a project that has used BIM in a manner of interest generally takes from thirty minutes to several hours of collection and review across an average of four information sources. To automate and expedite these analysis tasks, this study deployed natural language processing (NLP) and two commonly used unsupervised learning methods for text classification, latent semantic analysis (LSA) and latent Dirichlet allocation (LDA). The results were validated against a representative supervised learning method for text classification, the support vector machine (SVM). When LSA or LDA detected phrases in a BIM case study whose similarity to the definition of a BIM use exceeded the threshold value, the system determined that the project had deployed BIM in the detected way. The BIM uses specified by Pennsylvania State University were adopted for the classification. The approach was validated using 240 BIM case studies (512,892 features); a project was labeled "1" when a BIM use was employed and "0" when it was not. Performance was analyzed while varying several parameters: document segmentation, feature weighting, the dimensionality-reduction coefficient (k-value), the number of topics, and the number of iterations. LDA yielded the highest F1 score, 80.75% on average. LDA and LSA yielded high recall and low precision in most cases; conversely, SVM yielded high precision, low recall, and fluctuating F1 scores.
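As a sketch of the LSA branch of this method, i.e., projecting BIM-use definitions and case-study text into a shared latent space and assigning a use whenever cosine similarity clears a threshold, the following fragment uses scikit-learn. The definition texts, case study, k-value, and threshold are placeholders, not the Penn State definitions or the tuned values from the paper.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

bim_use_defs = ["3d coordination clash detection among trades and systems",
                "4d modeling to plan the construction sequence and schedule"]
case_studies = ["the team ran weekly clash detection in the federated model"]

# Project definitions and case studies into the same latent space
vec = TfidfVectorizer(stop_words="english")
X = vec.fit_transform(bim_use_defs + case_studies)
lsa = TruncatedSVD(n_components=2, random_state=0)   # the k-value
Z = lsa.fit_transform(X)

defs_z, cases_z = Z[:len(bim_use_defs)], Z[len(bim_use_defs):]
sim = cosine_similarity(cases_z, defs_z)

threshold = 0.5
labels = (sim > threshold).astype(int)   # 1 = BIM use detected, 0 = not
print(labels)
```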

