Found 20 similar documents; search took 159 ms.
1.
Recommender systems suggest personalized information, products, and services according to user preferences, helping users cope with information overload. Content-based collaborative filtering lacks a suitable measure for computing the similarity between items. This paper proposes an item recommendation algorithm based on coupled object similarity, which captures both the similarity of item feature frequency distributions and the aggregated similarity of feature dependencies. Key features are first extracted from item text; an item similarity model is then built using coupled object similarity; finally, collaborative filtering recommends items the active user is likely to be interested in. Experiments on real data sets show that the coupled-object-similarity algorithm effectively addresses the item similarity measurement problem of content-based recommender systems and improves recommendation quality over traditional content-based systems when a large share of item feature data is missing.
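The abstract does not reproduce the similarity formulas. As an illustration only, one published formulation of intra-coupled attribute-value similarity scores two values of a feature by their occurrence frequencies; the genre column and item values below are invented:

```python
from collections import Counter

def intra_coupled_similarity(value_a, value_b, column):
    # Frequency of each feature value across all items.
    counts = Counter(column)
    fa, fb = counts[value_a], counts[value_b]
    # Values with similar, high occurrence frequencies score closer to 1.
    return (fa * fb) / (fa + fb + fa * fb)

# Hypothetical "genre" feature extracted from six item descriptions.
genres = ["action", "action", "comedy", "action", "drama", "comedy"]
s_frequent = intra_coupled_similarity("action", "action", genres)  # 9/15 = 0.6
s_rare = intra_coupled_similarity("action", "drama", genres)       # 3/7
```

A full coupled object similarity would combine this intra-coupled term with an inter-coupled term that aggregates dependencies between different features.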
2.
To address the problem that data samples from spaces of different scales cannot be matched directly, a multi-scale collaborative coupled metric learning method is proposed that fuses class and structure information. Class labels serve as the primary supervision and the structural distribution of samples as auxiliary supervision to build a correlation matrix. Linear and nonlinear optimization objectives are then constructed from this matrix, and solving them transforms samples from data sets of different scales into a common space with a unified scale, so that samples from different-scale spaces can finally be measured against each other. Face recognition experiments show that nonlinear collaborative coupled metric learning across multi-scale spaces is an effective measurement method: it is computationally simple and achieves a high recognition rate.
3.
4.
In component-based software systems, coupling measures the degree of interdependence among components. To measure inter-component coupling effectively, this paper first gives a formal representation of a component-based software system, then analyzes the coupling relationships among its components, and finally introduces the concept of structural entropy and a structural-entropy-based measure of component coupling.
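The abstract does not give the paper's definition of structural entropy. A generic Shannon-entropy sketch over a made-up component dependency graph shows the kind of quantity such a measure computes: a system whose coupling links are spread evenly over components has higher entropy than one dominated by a single hub.

```python
import math
from collections import Counter

def structural_entropy(links):
    # links: (source, target) coupling links between components.
    # Shannon entropy of each component's share of all link endpoints.
    degree = Counter()
    for source, target in links:
        degree[source] += 1
        degree[target] += 1
    total = sum(degree.values())
    return -sum((d / total) * math.log2(d / total) for d in degree.values())

links = [("A", "B"), ("A", "C"), ("B", "C"), ("A", "D")]
h = structural_entropy(links)  # degrees A:3 B:2 C:2 D:1 -> ~1.906 bits
```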
5.
6.
7.
A New Method for Measuring Program Complexity
By analyzing the shortcomings of the traditional McCabe and Halstead metrics, this paper proposes a new axiom-based metric of testing complexity. The new metric is clearly superior to both the McCabe and the Halstead methods.
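The axiom-based metric itself is not specified in this abstract; for context, the two baselines it criticizes are easy to state. McCabe's cyclomatic complexity is V(G) = E - N + 2P over the control-flow graph, and Halstead's program length is the total count of operator and operand occurrences:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    # McCabe: V(G) = E - N + 2P for a control-flow graph.
    return edges - nodes + 2 * components

def halstead_length(operator_occurrences, operand_occurrences):
    # Halstead: N = N1 + N2, total occurrences of operators and operands.
    return operator_occurrences + operand_occurrences

# "if (a > b) max = a; else max = b;" -- a toy one-decision CFG
# with entry, decision, two branches, join, and exit nodes.
v = cyclomatic_complexity(edges=6, nodes=6)  # one decision -> V(G) = 2
n = halstead_length(operator_occurrences=6, operand_occurrences=6)
```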
8.
Existing coupled metric learning methods based on linear transformations run into the curse of dimensionality on practical problems and cannot describe nonlinear models well. By introducing kernel methods, a kernel coupled metric learning method is proposed. Nonlinear transformations first project data from different sets into a single high-dimensional coupled space, so that correlated elements of the two sets lie as close as possible after projection; conventional kernel methods then operate in this common coupled space. The method is applied to gait recognition to solve the matching problem between different sets. Experiments on the CASIA(B) gait database show that the method achieves satisfactory recognition results.
9.
Component-based software reuse is regarded as an effective way to raise software productivity and quality, and as one of the remedies for the software crisis. With the development of component-based software engineering in recent years, component metrics have advanced considerably, but most work targets reusability, with little attention to component cohesion and coupling. Effective system decomposition is the main means of obtaining components and provides strong support for their reuse. After decomposing the system, this paper takes the measurement of component cohesion and coupling as its focus and uses the results to study component reusability. Based on a weighted directed dependency graph, it applies graph spectral partitioning together with an information-entropy-based cohesion and coupling measure to cluster and measure components automatically. Experimental results show that spectral partitioning decomposes a system reasonably and effectively, and that the proposed measure is a fairly accurate software component metric, providing reliable data for weighing component design quality.
10.
Measuring the uncertainty of data is a topic of great interest in both academia and industry. Most uncertainty measures in common use are based on variance or on information entropy. Building on the variance computation, this paper proposes a simple measure of the uncertainty of random data. The measure is based on cumulative variance but differs in form from traditional variance-based measures; it carries a meaning similar to information entropy while being faster to compute. Analysis shows that it can serve as an alternative when measuring the uncertainty of discrete random data. The properties and effectiveness of the proposed measure are also discussed.
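The cumulative-variance measure itself is not given in the abstract, but the two baselines it builds on are standard. The sketch below (with invented samples) also shows why neither subsumes the other: variance reacts to the magnitude of deviations, entropy only to the shape of the frequency distribution.

```python
import math
from collections import Counter

def variance(xs):
    # Population variance of a numeric sample.
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def shannon_entropy(xs):
    # Entropy of the empirical frequency distribution, in bits.
    n = len(xs)
    return -sum((c / n) * math.log2(c / n) for c in Counter(xs).values())

uniform = [1, 2, 3, 4]   # all values distinct -> maximal entropy
peaked = [1, 1, 1, 4]    # one outlier -> larger variance, lower entropy
```

Here `variance(peaked) > variance(uniform)` while `shannon_entropy(peaked) < shannon_entropy(uniform)`, so a variance-derived measure and an entropy-like measure can rank the same data sets in opposite orders.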
11.
Dr. habil. Björn Wolle, WIRTSCHAFTSINFORMATIK, 2003, 45(1): 29-40
Many recently developed software systems are implemented in Java. For these systems, activity presently centers on software development tasks rather than on dedicated software maintenance tasks, so experimental confirmation of established metrics for measuring code quantities related to software maintenance is not available for Java systems. This also includes very basic size measures such as the LOC metric and the Halstead length. In this article, the application of these metrics to Java systems and some of the associated difficulties are outlined. The presented results are based on experimental data and include empirical correlations between the basic size metrics as well as newly derived scaling laws suitable for maintenance-related software measurement.
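The LOC metric and the Halstead length mentioned above can be approximated crudely from raw source text. Real tools distinguish operators from operands using the Java grammar, so the token regexes below are simplifying assumptions:

```python
import re

def loc(source):
    # Physical lines of code: non-blank lines that are not // comments.
    return sum(1 for line in source.splitlines()
               if line.strip() and not line.strip().startswith("//"))

def halstead_length(source):
    # Crude Halstead length N: total operator plus operand tokens.
    code = "\n".join(line for line in source.splitlines()
                     if line.strip() and not line.strip().startswith("//"))
    operands = re.findall(r"[A-Za-z_]\w*|\d+", code)   # identifiers, numbers
    operators = re.findall(r"[{}()+\-*/=;,<>]", code)  # punctuation tokens
    return len(operands) + len(operators)

java = """\
// add two ints
int add(int a, int b) {
    return a + b;
}
"""
```

Counting keywords such as `int` and `return` as operands is one of the simplifications; the article's point is that even such basic size measures need empirical calibration for Java.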
12.
One purpose of software metrics is to measure the quality of programs. The results can, for example, be used to predict maintenance costs or to improve code quality. An emerging view is that if software metrics are to be used to improve quality, they must help in finding code that should be refactored. Often refactoring, or applying a design pattern, is related to the role of the class to be refactored. In client-based metrics, a project gives the class a context; these metrics measure how a class is used by other classes in that context. We present a new client-based metric, LCIC (Lack of Coherence in Clients), which analyses whether the class being measured has a coherent set of roles in the program. Interfaces represent the roles of classes. If a class does not have a coherent set of roles, it should be refactored, or a new interface should be defined for the class. We have implemented a tool for measuring LCIC for Java projects in the Eclipse environment. We calculated LCIC values for classes of several open source projects, compare these results with the results of other related metrics, and inspect the measured classes to find out what kinds of refactorings are needed. We also analyse the relation of different design patterns and refactorings to our metric. Our experiments reveal the usefulness of client-based metrics in improving the quality of code.
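The paper's exact LCIC formula is not reproduced in this abstract. As a hedged illustration of the idea only, an LCOM-style analogue can score the fraction of client pairs whose usage of the measured class is disjoint, i.e. that see the class in entirely different roles; the class, clients, and member names below are invented:

```python
from itertools import combinations

def lack_of_coherence(client_usage):
    # client_usage: client name -> set of members of the measured class
    # that the client uses. Returns the fraction of client pairs with
    # disjoint usage sets (clients seeing fully distinct roles).
    pairs = list(combinations(client_usage.values(), 2))
    if not pairs:
        return 0.0
    disjoint = sum(1 for a, b in pairs if not a & b)
    return disjoint / len(pairs)

usage = {
    "Renderer": {"draw", "resize"},
    "Saver": {"serialize"},
    "Editor": {"draw", "serialize"},
}
score = lack_of_coherence(usage)  # only Renderer/Saver are disjoint -> 1/3
```

A high score suggests the class plays unrelated roles for different clients and may deserve splitting, or a new interface, which matches the refactoring guidance in the abstract.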
13.
We present a coverage metric targeted at shared-memory concurrent programs: the Location Pairs (LP) coverage metric. The goals of this metric are (i) to measure how thoroughly a program has been tested from a concurrency standpoint, i.e., whether enough qualitatively different thread interleavings have been explored, and (ii) to guide testing towards unexplored concurrency scenarios. The metric was inspired by an access pattern known to lead to high-level concurrency errors in industrial software and in the literature. We built a monitoring tool to measure LP coverage of test programs, used the LP metric for interactive debugging, and compared LP coverage with other concurrency coverage metrics on Java benchmarks. We demonstrate that LP coverage corresponds better to concurrency errors, is a better measure of how well a program is exercised concurrency-wise by a test set, reaches saturation later than other coverage metrics, and is viable and useful as an interactive testing and debugging tool.
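A simplified reading of the metric can be sketched from an access trace: record a location pair whenever consecutive accesses to the same shared variable come from different threads. The trace format and code locations below are invented, and the published definition is more refined:

```python
def location_pairs(trace):
    # trace: (thread_id, code_location, shared_variable) tuples in the
    # order the accesses were observed during execution.
    last = {}      # variable -> (thread, location) of its previous access
    pairs = set()  # ordered (location, location) pairs covered so far
    for thread, loc, var in trace:
        if var in last and last[var][0] != thread:
            pairs.add((last[var][1], loc))
        last[var] = (thread, loc)
    return pairs

trace = [(1, "A", "x"), (2, "B", "x"), (1, "C", "y"), (1, "A", "x")]
covered = location_pairs(trace)  # {("A", "B"), ("B", "A")}
```

Coverage is then the size of this set relative to the feasible pairs, and a test set that saturates it late has explored more qualitatively different interleavings.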
14.
Yong Yan, Xiaodong Zhang, Qian Ma, IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997, 23(1): 4-16
Parallel computing scalability evaluates the extent to which parallel programs and architectures can effectively utilize increasing numbers of processors. In this paper, we compare a group of existing scalability metrics and evaluation models with an experimental metric which uses network latency to measure and evaluate the scalability of parallel programs and architectures. To provide insight into dynamic system performance, we have developed an integrated software environment prototype for measuring and evaluating multiprocessor scalability performance, called Scale-Graph. Scale-Graph uses a graphical instrumentation monitor to collect, measure and analyze latency-related data, and to display scalability performance based on various program execution patterns. The graphical software tool is X-Windows based and is currently implemented on standard workstations to analyze performance data of the KSR-1, a hierarchical ring-based shared-memory architecture.
15.
Jean Mayrand, Jean-François Patenaude, Ettore Merlo, Michel Dagenais, Bruno Laguë, Annals of Software Engineering, 2000, 9(1-2): 117-141
This paper presents an assessment method to evaluate the quality of object-oriented software systems. The assessment method is based on source code abstraction, object-oriented metrics and graphical representation. The metrics used and the underlying model representing the software are presented. The assessment method experiment is part of an industrial research effort with the Bell Canada Quality Engineering and Research Group. It helps evaluators assess the quality and risks associated with software by identifying code fragments presenting unusual characteristics. The assessment method evaluates object-oriented software systems at three levels of granularity: system level, class level and method level. One large C++ and eight Java software systems, for a total of over one million lines of code, are presented as case studies. A critical analysis of the results is presented comparing the systems and the two languages.
16.
17.
Power-Laws in a Large Object-Oriented Software System
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007, 33(10): 687-708
We present a comprehensive study of an implementation of the Smalltalk object-oriented system, one of the first and purest object-oriented programming environments, searching for scaling laws in its properties. We study ten system properties, including the distributions of variable and method names, inheritance hierarchies, class and method sizes, and the system architecture graph. We systematically found Pareto (or sometimes log-normal) distributions in these properties. This denotes that the programming activity, even when modeled from a statistical perspective, can in no way be modeled simply as a random addition of independent increments with finite variance, but exhibits strong organic dependencies on what has already been developed. We compare our results with similar ones obtained for large Java systems, reported in the literature or computed by ourselves for those properties never studied before, showing that the behavior found is similar in all studied object-oriented systems. We show how the Yule process is able to stochastically model the generation of several of the power-laws found, identifying the process parameters and comparing theoretical and empirical tail indices. Lastly, we discuss how the distributions found are related to existing object-oriented metrics, like Chidamber and Kemerer's, and how they could provide a starting point for measuring the quality of a whole system, versus that of single classes. In fact, the usual evaluation of systems based on the mean and standard deviation of metrics can be misleading; it is more interesting to measure differences in the shape and coefficients of the data's statistical distributions.
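The Yule process mentioned above can be simulated as preferential attachment: each new element either starts a new class with small probability, or joins an existing class with probability proportional to its size, which yields the Pareto-like tails the study reports. The step count and new-class probability below are arbitrary choices for illustration:

```python
import random

def yule_sizes(steps, p_new=0.1, seed=0):
    # Simulate a Yule (preferential-attachment) process and return the
    # resulting class sizes; the distribution is strongly heavy-tailed.
    random.seed(seed)
    sizes = [1]  # start with a single class of size one
    for _ in range(steps):
        if random.random() < p_new:
            sizes.append(1)  # found a new class
        else:
            # join an existing class, chosen proportionally to its size
            r = random.uniform(0, sum(sizes))
            acc = 0.0
            for i, s in enumerate(sizes):
                acc += s
                if r <= acc:
                    sizes[i] += 1
                    break
    return sizes

sizes = yule_sizes(5000)  # a few classes end up dominating the mass
```

Fitting the tail index of the resulting size distribution and comparing it with the theoretical value is the kind of check the paper performs on the real system data.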
18.
19.
The year 2000 problem is omnipresent, fast approaching, and will present us with something we're not used to: a deadline that can't slip. It will also confront us with two problems, one technical, the other managerial. My cyclomatic complexity measure, implemented using my company's tools, can address both of these concerns directly. The technical problem is that most of the programs using a date or time function have abbreviated the year field to two digits. Thus, as the rest of society progresses into the 21st century, our software will think it's the year 00. The managerial problem is that date references in software are everywhere; every line of code in every program in every system will have to be examined and made date compliant. In this article, I elaborate on an adaptation of the cyclomatic complexity measure to quantify and derive the specific tests for date conversion. I originated the use of cyclomatic complexity as a software metric. The specified data-complexity metric is calculated by first removing all control constructs that do not interact with the referenced data elements in the specified set, and then computing cyclomatic complexity. Specifying all global data elements gives an external coupling measure that determines encapsulation. Specifying all the date elements would quantify the effort for a year-2000 upgrade. This effort will vary depending on the quality of the code that must be changed.
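The specified data-complexity computation described above can be sketched as follows, assuming the reduced graph's complexity is computed from its decision count via V(G) = D + 1; the predicate sets and data-element names below are invented:

```python
def specified_data_complexity(decision_refs, specified):
    # decision_refs: for each predicate (control construct) in the
    # program, the set of data elements it references. Remove predicates
    # that do not interact with the specified set, then compute
    # cyclomatic complexity of the reduced graph as D + 1.
    relevant = [refs for refs in decision_refs if refs & specified]
    return len(relevant) + 1

decision_refs = [{"year", "month"}, {"count"}, {"year"}, {"name"}]
date_elements = {"year", "month", "day"}
v_date = specified_data_complexity(decision_refs, date_elements)  # 2 + 1 = 3
```

Specifying all global data elements instead of the date fields gives the external-coupling variant the author mentions.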
20.
Jean Mayrand, Jean-François Patenaude, Ettore Merlo, Michel Dagenais, Bruno Laguë, Annals of Software Engineering, 2000, 9(1-4): 117-141
This paper presents an assessment method to evaluate the quality of object-oriented software systems. The assessment method is based on source code abstraction, object-oriented metrics and graphical representation. The metrics used and the underlying model representing the software are presented. The assessment method experiment is part of an industrial research effort with the Bell Canada Quality Engineering and Research Group. It helps evaluators assess the quality and risks associated with software by identifying code fragments presenting unusual characteristics. The assessment method evaluates object-oriented software systems at three levels of granularity: system level, class level and method level. One large C++ and eight Java software systems, for a total of over one million lines of code, are presented as case studies. A critical analysis of the results is presented comparing the systems and the two languages.
This revised version was published online in June 2006 with corrections to the Cover Date.