Search results: 34 articles
1.
DeMIMA: A Multilayered Approach for Design Pattern Identification
Design patterns are important in object-oriented programming because they offer design motifs, elegant solutions to recurrent design problems, that improve the quality of software systems. Design motifs facilitate system maintenance by helping maintainers understand design and implementation. However, after implementation, design motifs are spread throughout the source code and are thus not directly available to maintainers. We present DeMIMA, an approach to semi-automatically identify micro-architectures in source code that are similar to design motifs, and to ensure the traceability of these micro-architectures between implementation and design. DeMIMA consists of three layers: two layers to recover an abstract model of the source code, including binary class relationships, and a third layer to identify design patterns in the abstract model. We apply DeMIMA to five open-source systems and observe, on average, 34% precision for the 12 design motifs considered. Through the use of explanation-based constraint programming, DeMIMA ensures 100% recall on the five systems. We also apply DeMIMA to 33 industrial components.
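To give a flavour of the third layer, identifying motifs in a recovered model of classes and binary relationships, here is a minimal constraint-matching sketch. It is not DeMIMA's implementation (which uses explanation-based constraint programming over a full source-code model); the toy model, the motif encoding, and all names are hypothetical.

```python
from itertools import permutations

# Toy abstract model recovered from source code: classes plus binary
# class relationships as (source, kind, target) triples. Hypothetical system.
CLASSES = {"Component", "Shape", "Group", "Canvas"}
RELATIONSHIPS = {
    ("Shape", "inherits", "Component"),
    ("Group", "inherits", "Component"),
    ("Group", "aggregates", "Component"),
}

# The Composite motif as constraints over role variables: a leaf and a
# composite both inherit from the component role, which the composite aggregates.
COMPOSITE_MOTIF = [
    ("leaf", "inherits", "component"),
    ("composite", "inherits", "component"),
    ("composite", "aggregates", "component"),
]

def find_motif(motif, classes, relationships):
    """Try every binding of motif roles to classes; yield the consistent ones."""
    roles = sorted({r for s, _, t in motif for r in (s, t)})
    for candidate in permutations(classes, len(roles)):
        binding = dict(zip(roles, candidate))
        if all((binding[s], kind, binding[t]) in relationships
               for s, kind, t in motif):
            yield binding

for b in find_motif(COMPOSITE_MOTIF, CLASSES, RELATIONSHIPS):
    print(b)  # {'component': 'Component', 'composite': 'Group', 'leaf': 'Shape'}
```

Each yielded binding is a candidate micro-architecture "similar to" the motif; a real identifier would also tolerate approximate matches rather than demand that every constraint hold exactly.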
2.
Context: How can the quality of software systems be predicted before deployment? In attempting to answer this question, several studies advocate prediction models. The performance of such models drops dramatically, with very low accuracy, when they are used in new software development environments or circumstances. Objective: The main objective of this work is to circumvent the model generalizability problem. We propose a new approach that replaces the traditional way of building prediction models, which relies on historical data and machine-learning techniques. Method: In this paper, the existing models are decision trees built to predict module fault-proneness within the NASA Critical Mission Software. A genetic algorithm is developed to combine and adapt the expertise extracted from the existing models in order to derive a "composite" model that performs accurately in a given software development context. Experimental evaluation of the approach is carried out in three different software development circumstances. Results: The results show that the derived prediction models work accurately not only for a particular state of a software organization but also for evolving and modified ones. Conclusion: Our approach suits the nature of software data and at the same time outperforms model selection and data combination approaches. We conclude that learning from existing software models (i.e., software expertise) has two immediate advantages: circumventing the model generalizability problem and alleviating the lack of data in software engineering.
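As a rough sketch of the method's core loop, a genetic algorithm tuning how existing models are combined, consider the following. The stand-in "experts" are hand-written rules, not decision trees from the NASA data, and the weighted-vote composite, fitness function, and data are all assumptions made for illustration.

```python
import random

random.seed(0)

# Stand-ins for decision trees learned in other contexts: each maps a
# module's metrics to a fault-proneness vote in {0, 1}. Hypothetical rules.
experts = [
    lambda m: int(m["loc"] > 300),
    lambda m: int(m["complexity"] > 10),
    lambda m: int(m["loc"] > 100 and m["complexity"] > 5),
]

# Hypothetical labelled modules from the *new* development context.
data = [
    ({"loc": 500, "complexity": 12}, 1), ({"loc": 120, "complexity": 9}, 1),
    ({"loc": 80, "complexity": 3}, 0),   ({"loc": 350, "complexity": 4}, 0),
    ({"loc": 150, "complexity": 11}, 1), ({"loc": 60, "complexity": 2}, 0),
]

def predict(weights, module):
    score = sum(w * e(module) for w, e in zip(weights, experts))
    return int(score >= sum(weights) / 2)  # weighted majority vote

def fitness(weights):
    return sum(predict(weights, m) == label for m, label in data)

def evolve(pop_size=20, generations=30):
    pop = [[random.random() for _ in experts] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = random.sample(parents, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # crossover
            i = random.randrange(len(child))               # mutation
            child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.2)))
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, "accuracy:", fitness(best), "/", len(data))
```

The point of the design is that no single inherited model need fit the new context; the GA searches for a combination of them that does.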
3.
The fragile base-class problem (FBCP) has been described in the literature as a consequence of "misusing" inheritance and composition in object-oriented programming when (re)using frameworks. Much research has focused on preventing the FBCP by proposing alternative reuse mechanisms, but, to the best of our knowledge, no previous work has studied the prevalence and impact of the FBCP in real-world software systems. The goal of our work is thus threefold: (1) assess, in different systems, the prevalence of micro-architectures, called FBCS, that could lead to two aspects of the FBCP; (2) investigate the relation between the detected occurrences and the quality of the systems in terms of change and fault proneness; and (3) assess whether there exist bugs in these systems that are related to the FBCP. We therefore perform a quantitative and a qualitative study. Quantitatively, we analyse multiple versions of seven different open-source systems that use 58 different frameworks, resulting in 301 configurations. We detect 112,263 FBCS occurrences in these systems and analyse whether the classes playing the role of sub-classes in FBCS occurrences are more change- and/or fault-prone than other classes. Results show that classes participating in the analysed FBCS occurrences are neither more likely to change nor more likely to have faults. Qualitatively, we conduct a survey to confirm or refute that some bugs are related to the FBCP. The survey involves 41 participants who analyse a total of 104 bugs from three open-source systems. Results indicate that none of the analysed bugs is related to the FBCP. Thus, despite large, rigorous quantitative and qualitative studies, we must conclude that the two aspects of the FBCP that we analyse may not be as problematic in terms of change and fault proneness as previously thought in the literature. We propose reasons why the FBCP may not be so prevalent in the analysed systems and in other systems in general.
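For readers unfamiliar with the FBCP itself, the textbook illustration goes like this (hypothetical code, not drawn from the studied systems): a client subclass depends on an undocumented call pattern inside the base class, so a framework upgrade that preserves the API contract can still break it.

```python
# Framework release 1: add_all() happens to delegate to add().
class Container:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def add_all(self, items):
        for item in items:
            self.add(item)

# Client subclass that relies on that *undocumented* delegation to keep a count.
class CountingContainer(Container):
    def __init__(self):
        super().__init__()
        self.count = 0

    def add(self, item):
        self.count += 1
        super().add(item)

# Framework release 2 "optimises" add_all() without changing its contract:
#     def add_all(self, items):
#         self.items.extend(items)   # no longer calls self.add()
# CountingContainer still compiles and runs, but its count is now wrong:
# the subclass broke without any change to its own code.

c = CountingContainer()
c.add_all([1, 2, 3])
print(c.count)  # 3 under release 1; would be 0 under release 2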
4.
Design-code traceability recovery: selecting the basic linkage properties
Traceability ensures that software artifacts of subsequent phases of the development cycle are consistent. Few works have so far addressed the problem of automatically recovering traceability links between object-oriented (OO) design and code entities. Such a recovery process is required whenever there is no explicit support for traceability in the development process. The recovered information can drive the evolution of the available design so that it corresponds to the code, thus providing an up-to-date, and therefore still useful, high-level view of the system.

Automatic recovery of traceability links can be achieved by determining the similarity of paired elements from design and code. The choice of the properties involved in the similarity computation is crucial to the success of the recovery process: design and code objects are complex artifacts with several properties attached. The basic anchors of the recovered traceability links should be those properties (or property combinations) that are expected to be preserved during the transformation of design into code. This may depend on specific practices and/or the development environment, which should therefore be properly accounted for.

In this paper, different categories of basic properties of design and code entities are analyzed with respect to the contribution they make to traceability recovery. Several industrial software components are employed as a benchmark on which the performance of the alternatives is measured.
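To make the similarity-based recovery concrete, here is a minimal sketch under stated assumptions: each design or code entity is flattened into a set of basic properties (name, attribute names, method names), and candidate links are ranked with a set-overlap measure, here the Dice coefficient. The entities and the property choice are hypothetical, not the paper's industrial benchmark or its selected property combinations.

```python
def properties(entity):
    """Flatten an entity into a set of tagged basic properties."""
    return ({("name", entity["name"])}
            | {("attr", a) for a in entity["attributes"]}
            | {("method", m) for m in entity["methods"]})

def dice(a, b):
    """Dice coefficient: 1.0 for identical sets, 0.0 for disjoint ones."""
    return 2 * len(a & b) / (len(a) + len(b)) if a or b else 0.0

# Hypothetical design entity and two candidate code classes.
design = {"name": "Order", "attributes": ["id", "total"], "methods": ["close"]}
code = [
    {"name": "Order", "attributes": ["id", "total", "vat"],
     "methods": ["close", "reopen"]},
    {"name": "Invoice", "attributes": ["number"], "methods": ["print"]},
]

d = properties(design)
links = sorted(((dice(d, properties(c)), c["name"]) for c in code), reverse=True)
print(links)  # the Order/Order pair scores highest and becomes the recovered link
```

Swapping which properties `properties()` extracts is exactly the experimental knob the paper studies: anchors that survive the design-to-code transformation yield reliable links, anchors that do not survive yield noise.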

5.
We present a method for estimating the size, and consequently the effort and duration, of object-oriented software development projects. Different estimates may be made in different phases of the development process, according to the available information. We define an adaptation of traditional function points, called Object Oriented Function Points, to enable the measurement of object-oriented analysis and design specifications. Tools have been constructed to automate the counting method. The novel aspect of our method is its flexibility: an organization can experiment with different counting policies to find the most accurate predictors of size, effort, etc. in its environment. The method and preliminary results of its application in an industrial environment are presented and discussed.
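A simplified sketch of how such a count could be computed from a class model follows; the weights and the complexity classification are invented placeholders standing in for a counting policy, not the calibrated Object Oriented Function Points rules. Note how the policy lives in one table and one function, which is the kind of knob an organization could tune.

```python
# Hypothetical OOFP-style count: classes act as logical files, methods as
# transactions. The weights below are illustrative, not a calibrated policy.
WEIGHTS = {"simple": 7, "average": 10, "complex": 15, "method": 4}

def class_complexity(cls):
    """Classify a class by how much data it carries (attributes + associations)."""
    n = len(cls["attributes"]) + len(cls["associations"])
    if n <= 2:
        return "simple"
    return "average" if n <= 5 else "complex"

def oofp(model):
    points = 0
    for cls in model:
        points += WEIGHTS[class_complexity(cls)]           # data side
        points += WEIGHTS["method"] * len(cls["methods"])  # transaction side
    return points

# A toy analysis-level model of two classes.
model = [
    {"name": "Customer", "attributes": ["name", "email"],
     "associations": ["Order"], "methods": ["register"]},
    {"name": "Order", "attributes": ["total"],
     "associations": ["Customer"], "methods": ["add_line", "close"]},
]
print(oofp(model))  # 10 + 4 + 7 + 8 = 29 illustrative points
```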
6.
Frameworks are widely used in modern software development to reduce development costs. They are accessed through their Application Programming Interfaces (APIs), which specify the contracts with client programs. When frameworks evolve, API backward-compatibility cannot always be guaranteed and client programs must upgrade to use the new releases. Because framework upgrades are not cost-free, observing API changes and usages together at fine-grained levels is necessary to help developers understand, assess, and forecast the cost of each framework upgrade. Whereas previous work studied API changes in frameworks and API usages in client programs separately, we analyse and classify API changes and usages together in 22 framework releases from the Apache and Eclipse ecosystems and their client programs. We find that (1) missing classes and methods happen more often in frameworks and affect client programs more often than the other API change types do, (2) missing interfaces occur rarely in frameworks but affect client programs often, (3) framework APIs are used on average in 35% of client classes and interfaces, (4) most such usages could be encapsulated locally and reduced in number, and (5) about 11% of API usages could cause ripple effects in client programs when these APIs change. Based on these findings, we provide suggestions for developers and researchers to reduce the impact of API evolution through language mechanisms and design strategies.
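A minimal sketch of the detection side, assuming each release's public API has already been flattened into a set of class and method entries (real tools work on source or bytecode models; the APIs and the client below are hypothetical):

```python
# Hypothetical flattened APIs of two framework releases:
# bare entries are classes, "Class#method(signature)" entries are methods.
old_api = {"HttpClient", "HttpClient#send(Request)", "HttpClient#close()",
           "Cache", "Cache#get(Key)"}
new_api = {"HttpClient", "HttpClient#send(Request,Timeout)", "HttpClient#close()"}

def classify_changes(old, new):
    """Everything in the old API but not the new one is a breaking change."""
    changes = []
    for entry in sorted(old - new):
        kind = "missing method" if "#" in entry else "missing class"
        changes.append((kind, entry))
    return changes

def impacted(client_usages, changes):
    """Client classes whose usages touch a removed entry must be fixed."""
    removed = {entry for _, entry in changes}
    return {site for site, used in client_usages.items() if used & removed}

changes = classify_changes(old_api, new_api)
# Hypothetical client program: which API entries each client class uses.
client = {"Downloader": {"HttpClient#send(Request)"},
          "Reporter": {"HttpClient#close()"}}
print(changes)                    # Cache, Cache#get(Key), old send() all missing
print(impacted(client, changes))  # {'Downloader'} breaks on upgrade
```

Intersecting change sets with usage sets, as `impacted()` does, is what lets upgrade cost be estimated per client class rather than per framework release.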
7.
Comparison and Evaluation of Clone Detection Tools
Many techniques for detecting duplicated source code (software clones) have been proposed in the past. However, it is not yet clear how these techniques compare in terms of recall and precision as well as space and time requirements. This paper presents an experiment that evaluates six clone detectors on eight large C and Java programs (altogether almost 850 KLOC). Their clone candidates were evaluated by one of the authors as an independent third party. The selected techniques cover the whole spectrum of the state of the art in clone detection: they work on text, lexical and syntactic information, software metrics, and program dependency graphs.
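As a flavour of the token-based end of that spectrum, here is a minimal sketch, far cruder than the evaluated tools and with a hypothetical normalisation: identifiers and literals are abstracted away, and identical windows of normalised tokens across fragments become clone candidates, so consistently renamed code still matches.

```python
import re
from collections import defaultdict

def tokens(code):
    """Crude lexer: normalise identifiers/literals so renamed clones still match."""
    out = []
    for tok in re.findall(r"[A-Za-z_]\w*|\d+|\S", code):
        if re.fullmatch(r"\d+", tok):
            out.append("NUM")
        elif re.fullmatch(r"[A-Za-z_]\w*", tok) and tok not in {"if", "for", "return"}:
            out.append("ID")
        else:
            out.append(tok)  # keywords and punctuation kept verbatim
    return out

def clone_pairs(fragments, window=8):
    """Hash every window of normalised tokens; collisions are clone candidates."""
    index = defaultdict(list)
    for name, code in fragments.items():
        toks = tokens(code)
        for i in range(len(toks) - window + 1):
            index[tuple(toks[i:i + window])].append((name, i))
    return {key: sites for key, sites in index.items() if len(sites) > 1}

# Two fragments that differ only in identifiers and literals (a "type-2" clone).
fragments = {
    "a.c": "if (count > 10) return total + 1;",
    "b.c": "if (size > 99) return sum + 2;",
}
for sites in clone_pairs(fragments).values():
    print(sites)  # each list pairs sites that share an identical normalised window
```

Real detectors add suffix trees or hashing over normalised streams for scale, and merge overlapping windows into maximal clones; this sketch only shows why token normalisation trades precision for recall.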
8.
EEG data compression techniques
Electroencephalograph (EEG) and Holter EEG data compression techniques that allow perfect reconstruction of the recorded waveform from the compressed one are presented and discussed. Data compression permits significant reductions in the space required to store signals and in transmission time. The Huffman coding technique, in conjunction with derivative computation, reaches high compression ratios (on average 49% on Holter and 58% on EEG signals) with low computational complexity. Exploiting this result, a simple and fast encoder/decoder scheme capable of real-time performance on a PC was implemented. This simple technique is compared with other predictive transformations, vector quantization, the discrete cosine transform (DCT), and repetition-count compression methods. Finally, it is shown that adopting a collapsed Huffman tree for the encoding/decoding operations allows one to choose the maximum codeword length without significantly affecting the compression ratio. Therefore, low-cost commercial microcontrollers and storage devices can be effectively used to store long Holter EEGs in a compressed format.
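A compact sketch of the core pipeline, derivative (delta) computation followed by Huffman coding, using a made-up ten-sample signal in place of real EEG data. Both steps are exactly invertible (cumulative sum undoes the deltas; the code is prefix-free), which is what makes the scheme lossless; the collapsed-tree and codeword-length-limiting refinements are omitted here.

```python
import heapq
from collections import Counter

def deltas(samples):
    """First difference: EEG samples are highly correlated, so deltas cluster near 0."""
    return [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]

def huffman_code(symbols):
    """Build a Huffman codebook (symbol -> bit string) from symbol frequencies."""
    heap = [(n, i, {s: ""}) for i, (s, n) in enumerate(Counter(symbols).items())]
    heapq.heapify(heap)
    next_id = len(heap)  # tiebreaker so tuples never compare dicts
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + bits for s, bits in c1.items()}
        merged.update({s: "1" + bits for s, bits in c2.items()})
        heapq.heappush(heap, (n1 + n2, next_id, merged))
        next_id += 1
    return heap[0][2]

# Hypothetical 8-bit samples standing in for an EEG trace.
signal = [100, 101, 103, 102, 102, 101, 103, 104, 104, 103]
d = deltas(signal)
book = huffman_code(d)
encoded = "".join(book[s] for s in d)
print(f"{len(encoded)} bits vs {8 * len(signal)} raw bits")  # deltas compress well
```

Because the deltas take few distinct values, the Huffman codebook stays small and fast, which is why the combination suits real-time encoding on modest hardware.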