Similar Articles
20 similar articles retrieved
1.
Santini, Cristian; Gesese, Genet Asefa; Peroni, Silvio; Gangemi, Aldo; Sack, Harald; Alam, Mehwish. Scientometrics (2022) 127(8): 4887-4912
Scholarly data is growing continuously, containing information about articles from a plethora of venues, including conferences, journals, etc. Many initiatives have been taken to...

2.
3.
With modern technology developing fast, most entities can be observed from different perspectives. This multi-view information allows us to find better patterns, provided we integrate the views in an appropriate way. Clustering by integrating multi-view representations that describe the same class of entities has therefore become a crucial issue for knowledge discovery. We integrate multi-view data in a tensor model and present a hybrid clustering method based on the Tucker-2 model, which can be regarded as an extension of spectral clustering. We apply our hybrid clustering method to scientific publication analysis by integrating citation links and lexical content. Clustering experiments are conducted on a large-scale journal set retrieved from the Web of Science (WoS) database. Several relevant hybrid clustering methods are cross-compared with ours. The analysis of the clustering results demonstrates the effectiveness of the proposed algorithm. Furthermore, we provide a cognitive analysis of the clustering results as well as a visualization as a mapping of the journal set.
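As a minimal illustration of the hybrid idea, the sketch below fuses two per-view similarity matrices and applies a two-way spectral cut to the fused graph. It is a deliberate simplification of the Tucker-2 tensor approach described above; the toy matrices, the fusion weights, and the `spectral_bipartition` helper are illustrative assumptions rather than the authors' implementation.

```python
import numpy as np

def spectral_bipartition(S):
    """Two-way spectral cut: sign of the Fiedler vector (second-smallest
    eigenvector of the graph Laplacian) of similarity matrix S."""
    L = np.diag(S.sum(axis=1)) - S          # unnormalized graph Laplacian
    _, vecs = np.linalg.eigh(L)             # eigenvalues in ascending order
    return (vecs[:, 1] > 0).astype(int)     # sign pattern gives the cut

# Two toy "views" (say, citation links and lexical similarity) over six
# journals: 0-2 form one community and 3-5 another in both views.
view_a = np.zeros((6, 6))
view_b = np.zeros((6, 6))
view_a[:3, :3] = view_a[3:, 3:] = 1.0
view_b[:3, :3] = view_b[3:, 3:] = 0.8
fused = 0.5 * (view_a + view_b)             # simple integration of the views
fused[2, 3] = fused[3, 2] = 0.01            # weak bridge keeps the graph connected
labels = spectral_bipartition(fused)
```

The sign of the Fiedler vector recovers the two planted communities regardless of which view is weighted more heavily.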

4.
Objective: To address the destruction of node similarity and the high dependence of nodes on their neighborhoods caused by deterministic propagation during feature aggregation, we construct a graph neural network classification algorithm based on randomly reconstructed graph structures. Methods: First, a random feature transform enhances a randomly retained subset of node features according to learned weights, generating random features. Then, fusion coefficients computed from the generated features are used to adaptively fuse the original graph with a k-nearest-neighbor graph, reconstructing a random graph structure. Finally, multi-branch shallow features are added to the convolutional layers of the reconstructed graph, so that the model is supplemented with shallow information as depth increases. In addition, the classification loss and a self-supervised learning loss are jointly optimized to preserve node similarity and smoothness. Results: In semi-supervised and fully supervised experiments on the Cora, Citeseer, and Pubmed datasets, the proposed algorithm improves accuracy by 0.9% to 2.3% over other node classification methods. Conclusion: The classification algorithm based on randomly reconstructed graph structures achieves good performance on node classification tasks.
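The adaptive graph-fusion step can be sketched as follows: build a k-nearest-neighbor graph from node features and blend it with the original adjacency through a fusion coefficient. Here `lam` is a fixed stand-in for the coefficient the paper learns from the generated random features, and all names and data are hypothetical.

```python
import numpy as np

def knn_graph(X, k):
    """Symmetric k-nearest-neighbour adjacency built from feature matrix X."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)             # a node is not its own neighbour
    A = np.zeros_like(d)
    for i in range(len(X)):
        A[i, np.argsort(d[i])[:k]] = 1.0
    return np.maximum(A, A.T)               # symmetrise

def fuse_graphs(A_orig, A_knn, lam):
    """Convex combination of the original and kNN graphs; lam stands in
    for the learned fusion coefficient."""
    return lam * A_orig + (1.0 - lam) * A_knn

# Four nodes, two natural pairs in feature space:
X = np.array([[0., 0.], [0., 1.], [5., 5.], [5., 6.]])
A_orig = np.array([[0, 1, 0, 0],
                   [1, 0, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 0]], dtype=float)
A = fuse_graphs(A_orig, knn_graph(X, 1), 0.5)
```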

5.
Indicators in a research institute ought to be readable at several decision levels, particularly with different breakdowns of the publication set chosen as reference. Citation transactions between journals have been widely used to structure scientific subfields in ISI databases. We tried a seed-free structuring of SCI/CMCI journals (a) to test the convergence of purely citation-built specialties (roughly 150) of SCI/CMCI journals with existing classifications at the subfield level and (b) to explore the interest and the limits of this approach at upper levels of aggregation (roughly 30 fields). A few limits of journal-level classification are addressed. At the subfield level, the convergence is large, with some discrepancies worth noticing. At the subdiscipline level, the method is not sufficient to achieve a satisfactory 30-field delineation, but it gives a good basis for informed expert validation.

6.

Probabilistic topic modeling algorithms like Latent Dirichlet Allocation (LDA) have become powerful tools for the analysis of large collections of documents (such as papers, projects, or funding applications) in science, technology and innovation (STI) policy design and monitoring. However, selecting an appropriate and stable topic model for a specific application (by adjusting the hyperparameters of the algorithm) is not a trivial problem. Common validation metrics like coherence or perplexity, which focus on the quality of topics, are not a good fit in applications where the quality of the document similarity relations inferred from the topic model is especially relevant. Relying on graph analysis techniques, our work states a new methodology for selecting hyperparameters that is specifically oriented to optimize the similarity metrics emanating from the topic model. To this end, we propose two graph metrics: the first measures the variability of the similarity graphs that result from different runs of the algorithm for a fixed value of the hyperparameters, while the second measures the alignment between the graph derived from the LDA model and another obtained from metadata available for the corresponding corpus. Through experiments on various corpora related to STI, we show that the proposed metrics provide relevant indicators for selecting the number of topics and building persistent topic models that are consistent with the metadata. Their use, which can be extended to other topic models beyond LDA, could facilitate the systematic adoption of such techniques in STI policy analysis and design.

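The first of the two graph metrics can be sketched with plain cosine-similarity graphs: compare the document-similarity matrices produced by repeated runs and report their mean pairwise Frobenius distance. The function names and the toy topic matrices are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def similarity_graph(theta):
    """Document-similarity graph from a topic matrix theta
    (documents x topics) using cosine similarity."""
    n = theta / np.linalg.norm(theta, axis=1, keepdims=True)
    return n @ n.T

def run_variability(thetas):
    """Mean pairwise Frobenius distance between the similarity graphs of
    repeated runs; 0 means the runs are perfectly stable."""
    graphs = [similarity_graph(t) for t in thetas]
    dists = [np.linalg.norm(graphs[i] - graphs[j])
             for i in range(len(graphs)) for j in range(i + 1, len(graphs))]
    return float(np.mean(dists))

# Two identical "runs" give zero variability; a perturbed run increases it.
t1 = np.array([[0.9, 0.1], [0.1, 0.9]])
t2 = t1.copy()
t3 = np.array([[0.5, 0.5], [0.1, 0.9]])
```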

7.
The reviewer recommendation problem in research usually refers to inviting experts to comment on the quality of papers, proposals, etc. How to effectively and accurately recommend reviewers for submitted papers and proposals is a meaningful and still tough task. At present, many unsupervised recommendation methods have been studied for this task. In this paper, a novel classification method named Word Mover’s Distance–Constructive Covering Algorithm (WMD–CCA for short) is proposed to solve the reviewer recommendation problem as a classification issue. A submission or a reviewer is described by tags such as keywords, research interests, and so on. First, each tag describing a submission or a reviewer is represented as a vector by a word embedding method. Second, the Word Mover’s Distance (WMD) method is used to measure the minimum distances between submissions and reviewers. Papers usually carry research field information, and exploiting it can improve reviewer recommendation accuracy. Finally, the reviewer recommendation task is therefore transformed into a classification problem solved by a supervised learning method, the Constructive Covering Algorithm (CCA). Comparative experiments on 4 public datasets and a synthetic dataset from Baidu Scholar show that the proposed WMD–CCA effectively solves the reviewer recommendation task as a classification issue and improves recommendation accuracy.
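A lightweight sketch of the distance step, using the well-known nearest-neighbor relaxation of WMD (a cheap lower bound on the full optimal-transport distance) instead of exact WMD; the toy embeddings and names are assumptions for illustration.

```python
import numpy as np

def relaxed_wmd(A, B):
    """Relaxed Word Mover's Distance: each embedding in A travels to its
    nearest embedding in B, and the result is symmetrised by taking the
    maximum of the two directions."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(d.min(axis=1).mean(), d.min(axis=0).mean())

# Toy tag embeddings for one submission and two candidate reviewers;
# reviewer r1 is closer to the submission's topics than r2.
sub = np.array([[1.0, 0.0], [0.9, 0.1]])
r1 = np.array([[1.0, 0.1], [0.8, 0.0]])
r2 = np.array([[0.0, 1.0], [0.1, 0.9]])
```

Ranking reviewers by this relaxed distance approximates the WMD-based ordering at a fraction of the cost.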

8.
To address the difficulty of time-slot allocation caused by the large number of vehicles in heterogeneous vehicular networks, a time-slot allocation scheme based on graph coloring theory is proposed. The scheme models the network as a graph that accounts for nodes within two hops and allocates slots by graph coloring, effectively reducing packet loss caused by hidden terminals. In addition, an efficient and practical slot-reuse allocation algorithm is presented, which defines weights according to node degree to determine the number of slots allocated to each vehicle. This guarantees fairness, improves slot reuse, and thus improves the reliability of message delivery, while also suiting vehicular network scenarios with rapidly changing topology. Simulation results show that with 200 vehicles and 100 slots, the scheme greatly improves the average packet reception rate compared with traditional slot allocation methods. Moreover, as slot reuse increases, inter-vehicle interference grows and the average packet reception rate drops. The study also finds that when vehicle transmit power is increased, the receiver's signal-to-interference-plus-noise ratio first rises and then levels off, so the average packet reception rate likewise first rises and then levels off.
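A minimal sketch of the slot-assignment idea: build a two-hop conflict graph and greedily color it so that vehicles within two hops never share a slot, while slots are reused farther away. The paper's degree-based weighting for per-vehicle slot counts is omitted; the helper names and the toy chain topology are illustrative.

```python
def two_hop_conflicts(adj):
    """Conflict sets in which vehicles within two hops of each other must
    not share a slot (hidden-terminal avoidance)."""
    conf = [set(nb) for nb in adj]
    for v in range(len(adj)):
        for u in adj[v]:
            conf[v] |= set(adj[u])          # neighbours of neighbours
        conf[v].discard(v)
    return conf

def assign_slots(adj):
    """Greedy colouring: each vehicle takes the smallest slot index not
    used inside its two-hop neighbourhood."""
    conf = two_hop_conflicts(adj)
    slot = {}
    for v in range(len(adj)):
        used = {slot[u] for u in conf[v] if u in slot}
        s = 0
        while s in used:
            s += 1
        slot[v] = s
    return slot

# A chain 0-1-2-3: vehicles 0 and 2 are two hops apart, so they need
# distinct slots, while 0 and 3 (three hops apart) can reuse a slot.
slots = assign_slots([[1], [0, 2], [1, 3], [2]])
```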

9.
10.
Methods to link academic research achievements with innovative industries have gained considerable attention worldwide in recent years. Consequently, responding to industries' demand to reinforce the linkage between scientific research and industry is an issue awaiting urgent resolution for the government. Previous studies pertaining to the linkage between scientific fields (academic papers) and technological fields (technology patents) primarily focus on non-patent research or university–industry collaboration. However, these studies failed to highlight the type of linkages between scientific and technological fields. Therefore, we conducted a pilot study to identify the core scientific fields in different technological fields. In addition to the proposed network maps linking scientific and technological fields, this study identified the core scientific fields for patent development, including materials science, multidisciplinary; engineering, chemical; physics, applied; nanoscience and nanotechnology; and chemistry, physical. Owing to the scarcity of research on the linkage between scientific and technological fields, the government, research and development units, and universities lack a framework for linking fundamental scientific research with the development of industry technologies. Therefore, in this study, we used an author–inventor network to analyze this research topic, expecting that the results can serve as a reference for further research.

11.
To address the high detection cost, low precision, and slow speed of existing visual measurement, this paper proposes a dense-interpolation sub-pixel measurement method based on computer vision. Built on the principle of linear interpolation, the method combines conventional edge detection with the image's grey-level profile, and uses threshold segmentation and a standard length for adaptive sub-pixel threshold selection. To verify its effectiveness, measurement experiments were carried out on standard gauge blocks, and the error sources of the measurement system were analyzed. Compared with the detection results of the traditional Canny operator, the average measurement accuracy of the method improves by 46.2%. The experimental results show that the algorithm achieves high measurement precision and can measure the geometric dimensions of objects quickly and accurately.
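The core linear-interpolation step can be sketched in a few lines: scan a 1-D grey-level profile for the pixel pair that brackets the threshold and interpolate the fractional crossing position. The function and the toy profile are illustrative assumptions; the paper's adaptive threshold selection is not reproduced.

```python
def subpixel_edge(profile, threshold):
    """Locate the first threshold crossing of a 1-D grey-level profile with
    sub-pixel precision by linear interpolation between the two pixels
    that bracket the threshold."""
    for i in range(len(profile) - 1):
        a, b = profile[i], profile[i + 1]
        if (a - threshold) * (b - threshold) < 0:
            return i + (threshold - a) / (b - a)   # fractional position
    return None                                    # no crossing found

# Grey levels rising across an edge; threshold 100 is crossed between
# pixel 2 (value 60) and pixel 3 (value 180), one third of the way.
edge = subpixel_edge([20, 30, 60, 180, 240, 250], 100)
```

Measuring between two such sub-pixel edge positions, scaled by a calibrated pixel size, yields a length estimate finer than one pixel.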

12.
A new multiscale computational method is developed for the elasto-plastic analysis of heterogeneous continuum materials with both periodic and random microstructures. In the method, multiscale base functions, which efficiently capture the small-scale features of the elements, are constructed numerically and employed to establish the relationship between the macroscopic and microscopic variables. Thus, the detailed microscopic stress fields within the elements can be obtained easily. For the construction of the numerical base functions, several different kinds of boundary conditions are introduced and their influences investigated. In this context, a two-scale computational model with a successive iteration scheme is proposed. The new method can be implemented conveniently and applied to general problems without scale-separation or periodicity assumptions. Extensive numerical experiments are carried out and the results are compared with direct FEM. It is shown that the method provides excellent precision of the nonlinear response for heterogeneous materials. Moreover, the computational cost is reduced dramatically.

13.
We introduce a novel heterogeneous multiscale method for the elastic analysis of two-dimensional domains with a complex microstructure. To this end, the multiscale finite element method is revisited and originally upgraded by introducing virtual element discretizations at the microscale, hence allowing for generalized polygonal and nonconvex elements. The microscale is upscaled through the numerical evaluation of a set of multiscale basis functions. The solution of the equilibrium equations is performed at the coarse scale at a reduced computational cost. We discuss the computation of the multiscale basis functions and corresponding virtual projection operators. The performance of the method in terms of accuracy and computational efficiency is evaluated through a set of numerical examples.

14.
Advanced Powder Technology (2021) 32(11): 4004-4016
Aggregate is an important component of asphalt mixtures, and its shape has a significant influence on road quality. In this study, a single industrial camera was used to collect images of aggregate particles as they fell; their morphologies captured from multiple views were then analyzed. Using an equivalent geometric model, four shape characterization parameters (area variety factor, minor diameter variety factor, maximum elongation factor, and Strip-Block area variety factor) were proposed to compose the multi-view shape feature. On this basis, a general regression neural network was adopted to classify the aggregate particles. The results show that the classification differs slightly across equivalent geometric models, while the aggregate shape can still be effectively classified. Classification accuracy can be improved by fusing parameters from different equivalent models using principal component analysis, or by increasing the frame rate of image collection, which increases the number of views. Overall, the findings indicate that the proposed detection method can be applied to actual road engineering, which is of great significance for guaranteeing pavement quality.

15.
This paper provides a simple, scientific testing method for the management and quality assurance of the high-temperature rapid developer used in automatic film processors, offering a reliable guarantee for improving image quality. The method is scientific, practical, and easy to popularize.

16.
A method for region-level change detection in remote sensing images based on vector maps is proposed. The method matches the historical vector map with the current image and overlays them, sets a global change coefficient and a local change coefficient, and performs statistical analysis of the image DN values within each vector polygon region to detect regional change. Trial analysis shows that the global change coefficient is more sensitive than the local one and more likely to affect the number of misjudgments. Using historical vector maps for change detection not only avoids complex radiometric correction of the images but also makes full use of the accumulated vector database, enabling data reuse and facilitating the historical inheritance and development of the vector library.
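A simplified sketch of the per-region statistic: compare the mean DN difference inside each vector polygon mask with a global change coefficient times the image-wide mean difference. The local coefficient and the matching/overlay steps are omitted, and all names and data are hypothetical.

```python
import numpy as np

def changed_regions(img_old, img_new, regions, global_coef):
    """Flag a region as changed when the mean DN difference inside its
    polygon mask exceeds global_coef times the image-wide mean difference."""
    diff = np.abs(img_new.astype(float) - img_old.astype(float))
    base = diff.mean()                       # image-wide mean DN difference
    return [r_id for r_id, mask in regions.items()
            if diff[mask].mean() > global_coef * base]

# A 4x4 toy image pair where DN values change only in one corner.
img_old = np.zeros((4, 4))
img_new = np.zeros((4, 4))
img_new[:2, :2] = 100.0
regions = {"A": np.zeros((4, 4), bool), "B": np.zeros((4, 4), bool)}
regions["A"][:2, :2] = True                  # region A covers the change
regions["B"][2:, 2:] = True                  # region B is unchanged
flagged = changed_regions(img_old, img_new, regions, 1.5)
```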

17.
Multivariate time series classification is of significance in the machine learning area. In this paper, we present a novel time series classification algorithm, which adopts a triangle distance function as the similarity measure, extracts meaningful patterns from the original data, and uses a traditional machine learning algorithm to build a classifier on the extracted patterns. During pattern extraction, a Gini function is used to determine the starting position in the original data and the length of each pattern. To improve computing efficiency, we also apply a sampling method to reduce the search space of patterns. Common datasets are used to evaluate our algorithm and compare it with naive algorithms. Experimental results reveal that much improvement can be gained in terms of interpretability, simplicity, and accuracy.
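The Gini function used during pattern extraction can be sketched as the standard Gini impurity of a label multiset; how the paper applies it to choose start positions and pattern lengths is not reproduced here.

```python
def gini_impurity(labels):
    """Gini impurity: 1 minus the sum of squared class frequencies;
    0 for a pure set, larger for mixed sets."""
    n = len(labels)
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

pure = gini_impurity([1, 1, 1, 1])      # a pure set scores 0
mixed = gini_impurity([0, 1, 0, 1])     # an evenly mixed set scores higher
```

Candidate pattern positions that split the data into purer groups score lower, which is how an impurity criterion can guide where patterns start and how long they run.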

18.
In this paper, a collocation method with mixed degrees of freedom (DOFs) is proposed for heterogeneous structures. Local tractions on the outer and interface boundaries are introduced as DOFs in the mixed collocation scheme. The equilibrium equations of all nodes and the outer boundary conditions are then discretized and assembled into the global stiffness matrix. A local force equilibrium equation modeling the stress discontinuity across the interface is developed and added to the global stiffness matrix as well. With these contributions, a statically determined stiffness matrix is obtained. Numerical examples show that the present method is superior to the classical mixed collocation method for heterogeneous structures because it improves accuracy and convergence while retaining efficiency. Moreover, almost constant convergence rates of displacements and stresses are observed in all examples, even for three-dimensional problems.

19.
Yair, Gad; Goldstein, Keith. Scientometrics (2020) 124(2): 887-902
This paper defines the ‘miraculous year’ as the most productive year in academics’ scientific careers. It tests the hypothesis that annual productivity is...

20.
Cost-sensitive learning is widely applied to class-imbalance problems, but cost-sensitive algorithms have long lacked an objective evaluation standard. This paper proposes a classification accuracy measure for cost-sensitive algorithms that replaces overall accuracy with balanced accuracy to evaluate classification performance effectively. Unlike traditional overall accuracy, balanced accuracy does not ignore the contribution of minority-class samples. Comparative experiments classifying gene expression data with a cost-sensitive extreme learning machine show that balanced accuracy reflects the classification performance of cost-sensitive algorithms more objectively and reasonably.
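Balanced accuracy, as the mean of per-class recalls, can be computed in a few lines; the toy labels below show how it exposes a classifier that ignores the minority class even while overall accuracy stays high. (The cost-sensitive extreme learning machine itself is not sketched.)

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; unlike overall accuracy it does not let
    the majority class swamp the minority class."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# 8 majority-class samples all correct, 2 minority samples both missed:
y_true = [0] * 8 + [1] * 2
y_pred = [0] * 10
overall = sum(int(t == p) for t, p in zip(y_true, y_pred)) / len(y_true)  # 0.8
balanced = balanced_accuracy(y_true, y_pred)                              # 0.5
```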
