Similar documents
20 similar documents found.
1.
In this paper the pathlisting mechanism is developed as a new tool for performing efficient data flow analysis of programs for a wide variety of problems. An algorithm using this tool for forward-flow code-improvement problems is presented. It is shown that, for all practical purposes, this algorithm is linear in the size of its input, which is, generally speaking, a reducible flow graph modeling the given program. Pathlistings generalize the nodelisting approach, introduced by Kennedy, for solving data flow problems. The efficiency of the pathlisting algorithm is due to the reuse of intermediate values and to the fact that the cycles of a reducible flow graph can be ordered. Other advantages of the approach are also discussed. (Work supported by National Science Foundation grant DCR73-00365-AO.)
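The pathlisting algorithm itself is not reproduced in the abstract above. As a point of reference only, the sketch below shows a plain iterative solver for a standard forward data flow problem (reaching definitions) on a small reducible flow graph; the graph, the gen/kill sets and all names are illustrative assumptions, and the paper's pathlisting technique is precisely a way to avoid this kind of naive fixed-point iteration.

```python
# Minimal iterative forward data-flow solver (reaching definitions).
# Illustrative stand-in only: the paper's pathlisting algorithm exploits the
# structure of reducible flow graphs instead of naive fixed-point iteration.

def reaching_definitions(succ, gen, kill):
    """succ: node -> list of successors; gen/kill: node -> set of definitions."""
    nodes = list(succ)
    pred = {n: [] for n in nodes}
    for n, targets in succ.items():
        for t in targets:
            pred[t].append(n)

    out = {n: set() for n in nodes}
    changed = True
    while changed:                       # iterate until a fixed point is reached
        changed = False
        for n in nodes:
            in_n = set().union(*(out[p] for p in pred[n])) if pred[n] else set()
            new_out = gen[n] | (in_n - kill[n])
            if new_out != out[n]:
                out[n] = new_out
                changed = True
    return out

# Hypothetical reducible flow graph: entry -> b1 -> b2 -> exit, back edge b2 -> b1.
succ = {"entry": ["b1"], "b1": ["b2"], "b2": ["b1", "exit"], "exit": []}
gen  = {"entry": {"d0"}, "b1": {"d1"}, "b2": {"d2"}, "exit": set()}
kill = {"entry": set(), "b1": {"d0"}, "b2": set(), "exit": set()}
print(reaching_definitions(succ, gen, kill))
```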

2.
3.
In the era of Grid computing, data-driven experiments and simulations have become very advanced and complicated. To allow specialists from various domains to deal with large datasets, it is necessary not only to develop efficient extraction techniques but also to provide computational facilities for visualizing and interacting with the results of an extraction process. With this in mind, we developed an Interactive Visualization Framework that supports a service-oriented architecture. This framework allows, on the one hand, visualization experts to construct visualizations to view and interact with large datasets and, on the other hand, end-users (e.g., medical specialists) to explore these visualizations irrespective of their geographical location and available computing resources. The image-based analysis of vascular disorders served as a case study for this project. The paper presents the main research findings and reports on the current implementation status.

4.
Video document retrieval is now an active part of the domain of multimedia retrieval. Unlike for other media, however, managing a collection of video documents adds the problem of efficiently handling an overwhelming volume of temporal data. Challenges include balancing efficient content modeling and storage against fast access at various levels. In this paper, we detail the framework we have built to accommodate our developments in content-based multimedia retrieval. We show that our framework not only facilitates the development of processing and indexing algorithms but also opens the way to several other possibilities, such as rapid interface prototyping or retrieval algorithm benchmarking. Here, we discuss our developments in relation to wider contexts such as MPEG-7 and the TREC Video Track. This work is funded by EU-FP6 IST-NoE SIMILAR and the Swiss NCCR IM2 (Interactive Multimodal Information Management).

5.
The aim of this paper is to present a model for the Computer Centre of a major Italian banking group. The model groups data and transactions to cope with the large size of the Centre. Transaction arrivals are modeled as Poisson stochastic variables and the corresponding probability values are estimated. Some computational results are given.
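The abstract does not detail how the data and transactions are grouped; as a hedged illustration of the arrival model it mentions, the snippet below evaluates Poisson probabilities for a hypothetical transaction class, where the arrival rate is an assumed value rather than one from the paper.

```python
import math

def poisson_pmf(k, lam):
    """P(N = k) for a Poisson arrival process with mean rate lam per interval."""
    return math.exp(-lam) * lam**k / math.factorial(k)

lam = 4.2  # assumed mean number of transaction arrivals per second for one group
for k in range(8):
    print(f"P(N={k}) = {poisson_pmf(k, lam):.4f}")

# probability of more than 7 arrivals in the interval
print("P(N>7) =", 1 - sum(poisson_pmf(k, lam) for k in range(8)))
```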

6.
A signal processing technique is presented and applied to annual patterns of the Global Vegetation Index (GVI), derived from the Advanced Very High Resolution Radiometer (AVHRR), to examine the frequency distribution of the multi-temporal signal. It is shown that the frequencies of the signal are linked to the integrated GVI, the seasonal variability and the subseasonal variability of the land cover type. These characteristics are used to derive a land cover classification.
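The abstract does not name the transform used. A common way to obtain such a frequency description is a discrete Fourier decomposition of the annual composite series; the sketch below, using synthetic data as a stand-in for real GVI composites, separates the mean level (integrated signal), the annual harmonic (seasonal variability) and the higher harmonics (subseasonal variability). All values and the choice of transform are assumptions for illustration.

```python
import numpy as np

# Synthetic stand-in for one year of 10-day GVI composites (36 samples).
t = np.arange(36)
gvi = 0.35 + 0.20 * np.cos(2 * np.pi * t / 36 - 1.0) + 0.02 * np.random.randn(36)

spec = np.fft.rfft(gvi) / len(gvi)
integrated = spec[0].real                 # mean level, proxy for integrated GVI
seasonal = 2 * np.abs(spec[1])            # amplitude of the annual cycle
subseasonal = 2 * np.abs(spec[2:]).sum()  # energy in higher-frequency terms

print(f"mean GVI        : {integrated:.3f}")
print(f"annual harmonic : {seasonal:.3f}")
print(f"sub-seasonal    : {subseasonal:.3f}")
```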

7.
This paper describes VIPER, the video image-processing system developed in Erlangen. It consists of a general-purpose microcomputer, commercially available image-processing hardware modules connected directly to the computer, video input/output modules such as a TV camera, video recorders and monitors, and a software package. The modular structure and the capabilities of this system are explained. The software is user-friendly and menu-driven, and performs image acquisition, transfers, greyscale processing, arithmetic, logical operations, filtering, display, colour assignment, graphics, and a number of management functions. More than 100 image-processing functions are implemented. They are available either by typing a key or by a simple call to the function-subroutine library from application programs. Examples are supplied from the area of biomedical research, e.g. in-vivo microscopy.

8.
9.
The stability test of polynomials whose coefficients depend multilinearly on interval parameters is considered. The authors describe and compare four brute-force solution approaches: eigenvalue calculation, zero exclusion from a specified value set, algebraic tests of real and complex Hurwitz roots, and the parameter space method. They are applied to a simple example with two parameters and a third-order polynomial. An interesting feature of the example is that it can have an isolated unstable point. The example may be useful as a benchmark for future approaches to the multilinear problem. All four methods are shown to be feasible for the simple example, but they require effort.
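The paper's benchmark polynomial is not reproduced in the abstract, so the sketch below uses a hypothetical third-order polynomial with two interval parameters and applies only the simplest of the four approaches, an algebraic Hurwitz test, brute-forced on a grid over the parameter box. For a cubic a3*s^3 + a2*s^2 + a1*s + a0 with a3 > 0, Routh-Hurwitz stability requires all coefficients positive and a2*a1 > a3*a0; the coefficient functions and interval bounds here are assumptions.

```python
import itertools

def hurwitz_stable_cubic(a3, a2, a1, a0):
    """Routh-Hurwitz conditions for a3*s^3 + a2*s^2 + a1*s + a0 with a3 > 0."""
    return a3 > 0 and a2 > 0 and a1 > 0 and a0 > 0 and a2 * a1 > a3 * a0

# Hypothetical multilinear dependence of the coefficients on parameters q1, q2.
def coeffs(q1, q2):
    return (1.0, 2.0 + q1, 1.0 + q1 * q2, 0.5 + q2)

# Brute-force grid over the interval box q1 in [0, 2], q2 in [0, 3].
grid = 51
unstable = []
for i, j in itertools.product(range(grid), repeat=2):
    q1 = 2.0 * i / (grid - 1)
    q2 = 3.0 * j / (grid - 1)
    if not hurwitz_stable_cubic(*coeffs(q1, q2)):
        unstable.append((q1, q2))

print("unstable grid points:", len(unstable))
```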

10.
Intelligent data analysis applied to debug complex software systems
Emilio, Jorge J., Juan A., Juan. Neurocomputing, 2009, 72(13-15): 2785
The emergent behavior of complex systems, which arises from the interaction of multiple entities, can be difficult to validate, especially when the number of entities or their relationships grows. This validation requires understanding what happens inside the system. In the case of multi-agent systems, which are complex systems as well, this understanding requires analyzing and interpreting execution traces containing agent-specific information, deducing how the entities relate to each other, guessing which acquaintances are being built, and deciding how the total amount of data can be interpreted. The paper introduces some techniques which have been applied in developments made with an agent-oriented methodology, INGENIAS, which provides a framework for modeling complex agent-oriented systems. These techniques can be regarded as intelligent data analysis techniques, all of which are oriented towards providing simplified representations of the system. They range from raw data visualization to clustering and the extraction of association rules.
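The abstract mentions clustering over execution traces without giving details. The fragment below is a minimal, assumption-laden sketch of such a clustering step, grouping hypothetical per-agent message counts with k-means via scikit-learn; it is not the INGENIAS tooling itself, and the feature layout is invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature vectors extracted from execution traces:
# one row per agent, columns = counts of message types sent during a run.
features = np.array([
    [120,  3,  0],   # agents that mostly negotiate
    [115,  5,  1],
    [  8, 90, 40],   # agents that mostly report/monitor
    [ 10, 85, 35],
    [  2,  1,  1],   # nearly idle agents
    [  1,  0,  2],
])

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)
for agent, lab in enumerate(labels):
    print(f"agent {agent}: behaviour cluster {lab}")
```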

11.
Ordination is a powerful method for analysing complex data sets but has been largely ignored in sequence analysis. This paper shows how to use principal coordinates analysis to find low-dimensional representations of distance matrices derived from aligned sets of sequences. The method takes a matrix of Euclidean distances between all pairs of sequences and finds a coordinate space in which the distances are exactly preserved. The main problem is to find a measure of distance between aligned sequences that is Euclidean. The simplest distance function is the square root of the percentage difference (as measured by identities) between two sequences, ignoring any positions in the alignment where there is a gap in any sequence. If positions with a gap are not ignored, the distances cannot be guaranteed to be Euclidean, but the deleterious effects are trivial. Two examples of using the method are shown. A set of 226 aligned globins was analysed, and the resulting ordination very successfully represents the known patterns of relationship between the sequences. In the other example, a set of 610 aligned 5S rRNA sequences was analysed. Sequence ordinations complement phylogenetic analyses; they should not be viewed as a complete alternative.
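A compact way to reproduce the ordination step described here is classical multidimensional scaling (principal coordinates analysis): double-centre the matrix of squared distances and take the leading eigenvectors. The sketch below assumes a small, made-up matrix of pairwise percentage differences and uses its square root as the distance, as the abstract suggests.

```python
import numpy as np

def pcoa(d, k=2):
    """Principal coordinates analysis of a symmetric distance matrix d."""
    n = d.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n        # centring matrix
    b = -0.5 * j @ (d ** 2) @ j                # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1]             # largest eigenvalues first
    vals, vecs = vals[order][:k], vecs[:, order][:, :k]
    return vecs * np.sqrt(np.maximum(vals, 0)) # coordinates, one row per sequence

# Assumed pairwise percentage differences between four aligned sequences.
pct_diff = np.array([
    [ 0, 10, 40, 45],
    [10,  0, 38, 42],
    [40, 38,  0, 12],
    [45, 42, 12,  0],
], dtype=float)

coords = pcoa(np.sqrt(pct_diff))               # sqrt of % difference, as in the text
print(np.round(coords, 2))
```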

12.
13.
The morphological properties of axons, such as their branching patterns and oriented structures, are of great interest to biologists studying the synaptic connectivity of neurons. In these studies, researchers use triple immunofluorescent confocal microscopy to record morphological changes of neuronal processes. Three-dimensional (3D) microscopy image analysis is then required to extract morphological features of the neuronal structures. In this article, we propose a highly automated 3D centerline extraction tool to assist in this task. The most difficult part of this project is that some axons overlap such that the boundaries distinguishing them are barely visible. Our approach combines a 3D dynamic programming (DP) technique with a marker-controlled watershed algorithm to solve this problem. The approach consists of tracking and updating along the navigation directions of multiple axons simultaneously. The experimental results show that the proposed method can rapidly and accurately extract multiple axon centerlines and can handle complicated axon structures such as cross-over sections and overlapping objects.

14.
Assessing loan risks: a data mining case study
Gerritsen, R. IT Professional, 1999, 1(6): 16-21
The USDA's (US Department of Agriculture) Rural Housing Service administers a loan program that lends or guarantees mortgage loans to people living in rural areas. To administer these nearly 600,000 loans, the department maintains extensive information about each one in its data warehouse. As with most lending programs, some USDA loans perform better than others. Basic data mining techniques helped the USDA's Rural Housing Service better understand and classify problem loans.
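The case study's actual models are not described in this abstract. As a generic stand-in for the kind of classification it mentions, the sketch below trains a small decision tree on a hypothetical loan table to separate problem loans from performing ones; the column names, values and library choice are all assumptions.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical loan records: [income (k$), loan-to-value %, months delinquent]
X = [
    [22, 95, 4], [31, 90, 6], [28, 88, 3],   # problem loans
    [45, 70, 0], [52, 65, 0], [38, 75, 1],   # performing loans
]
y = [1, 1, 1, 0, 0, 0]                       # 1 = problem loan

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["income", "ltv", "delinquent_months"]))
print("prediction for a new applicant:", tree.predict([[30, 85, 2]])[0])
```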

15.
Software evolution studies have traditionally focused on individual products. In this study we scale up the idea of software evolution by considering software compilations composed of a large quantity of independently developed products, engineered to work together. With the success of libre (free, open source) software, these compilations have become common in the form of ‘software distributions’, which group hundreds or thousands of software applications and libraries into an integrated system. We have performed an exploratory case study on one of them, Debian GNU/Linux, finding some significant results. First, Debian has been doubling in size every 2 years, totalling about 300 million lines of code as of 2007. Second, the mean size of packages has remained stable over time. Third, the number of dependencies between packages has been growing quickly. Finally, while C is still by far the most commonly used programming language for applications, use of the C++, Java, and Python languages has significantly increased. The study helps not only to understand the evolution of Debian, but also yields insights into the evolution of mature libre software systems in general.

Jesus M. Gonzalez-Barahona teaches and does research at the Universidad Rey Juan Carlos, Mostoles (Spain). His research interests include libre software development, with a focus on quantitative and empirical studies, and distributed tools for collaboration in libre software projects. He works in the GSyC/LibreSoft research team. Gregorio Robles is Associate Professor at the Universidad Rey Juan Carlos, where he earned his PhD in 2006. His research interests lie in the empirical study of libre software, ranging from technical issues to those related to the human resources of the projects. Martin Michlmayr has been involved in various free and open source software projects for well over 10 years. He acted as the leader of the Debian project for two years and currently serves on the board of the Open Source Initiative (OSI). Martin works for HP as an Open Source Community Expert and acts as the community manager of FOSSBazaar. Martin holds Master's degrees in Philosophy, Psychology and Software Engineering, and earned a PhD from the University of Cambridge. Juan José Amor has an M.Sc. in Computer Science from the Universidad Politécnica de Madrid and is currently pursuing a Ph.D. at the Universidad Rey Juan Carlos, where he is also a project manager. His research interests are related to libre software engineering, mainly effort and schedule estimation in libre software projects. Since 1995 he has collaborated with several libre software organizations; he is also co-founder of LuCAS, the best known libre software documentation portal in Spanish, and Hispalinux, the biggest Spanish Linux user group. He also collaborates with Linux+. Daniel M. German is associate professor of computer science at the University of Victoria, Canada. His main areas of interest are software evolution, open source software engineering and intellectual property.

16.
Windows of vulnerability: a case study analysis
Arbaugh, W.A., Fithen, W.L., McHugh, J. Computer, 2000, 33(12): 52-59
The authors propose a life-cycle model for system vulnerabilities, then apply it to three case studies to reveal how systems often remain vulnerable long after security fixes are available. For each case, we provide background information about the vulnerability, such as how attackers exploited it and which systems were affected. We then tie the case to the life-cycle model by identifying the dates for each state within the model. Finally, we use a histogram of reported intrusions to show the life of the vulnerability, and we conclude with an analysis specific to that particular vulnerability.

17.
The architecture of a large software system is widely considered important for such reasons as: providing a common goal to the stakeholders in realising the envisaged system; helping to organise the various development teams; and capturing foundational design decisions early in the development. Studies have shown that defects originating in system architectures can consume twice as much correction effort as other defects. Clearly, then, scientific studies of architectural defects are important for their improved treatment and prevention. Previous research has focused on the extent of architectural defects in software systems. For this paper, we were motivated to ask the following two complementary questions in a case study: (i) How do multiple-component defects (MCDs), which are of architectural importance, differ from other types of defects in terms of (a) complexity and (b) persistence across development phases and releases? and (ii) How do highly MCD-concentrated components (the so-called architectural hotspots) differ from other types of components in terms of their (a) interrelationships and (b) persistence across development phases and releases? Results indicate that MCDs are complex to fix and persistent across phases and releases. In comparison to a non-MCD, an MCD requires over 20 times more changes to fix and is 6 to 8 times more likely to cross a phase or a release. These findings have implications for defect detection and correction. Results also show that 20% of the subject system's components contain over 80% of the MCDs and that these components are 2 to 3 times more likely to persist across multiple system releases than other components in the system. Such MCD-concentrated components constitute architectural “hotspots” which management can focus on for preventive maintenance and architectural quality improvement. The findings described are from an empirical study of a large legacy software system of over 20 million lines of code and over 17 years of age.

18.
In this paper we present a methodology for rural and semi-urban data network placement. In order to place the network optimally and to ensure that it is realistic and viable, we address four key issues: demographic and socio-economic factors, geographical estimation, optimization of the network placement, and financial optimization. A digital representation of the map of the region where the network is to be placed is used. A continuous optimization algorithm is applied to optimally place the backbone rings, and a combinatorial optimization algorithm is applied to obtain the optimal rollout order for the network. Mathematical formulations for both optimization problems are presented. Optimal financial indicators are obtained.

19.
Computers & Structures, 2006, 84(17-18): 1164-1171
A large number of telecommunication towers were installed during the rollout of cellular telephony services in Brazil. Some of those towers presented problems such as excessive displacements, residual displacements and cracking, and some accidents happened. At the same time, computing large displacements in slender reinforced concrete structures is a very difficult task, as the flexural stiffness of the sections changes continuously as the bending moment increases, owing to the highly non-linear material behavior of concrete, which involves phenomena such as crack formation and plastification. The goal of this paper is to present some initial results of applying optimization techniques to experimental data in order to determine the effective bending stiffness of the transverse sections of reinforced concrete structures. The objective is to determine stiffness-reduction parameters for unstressed sections that allow the displacements of those structures to be calculated correctly. The results of a test on a 30 m reinforced concrete telecommunications tower with a circular ring cross-section of 50 cm diameter were used. The effective stiffness was computed for several cross-sections along the axis of this structure. For analysis purposes, the structure was discretized and the differential equation of the elastic line was integrated to obtain the rotations and displacements. The values of the effective stiffness of the cross-sections were obtained using optimization techniques. The effective stiffness is presented in graphs as a function of the loading level (the ratio between the characteristic bending moment and the ultimate moment of the cross-section). The section with the largest stiffness loss is the one that indeed collapsed in a similar real structure. Directions for future research are presented.
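The calculation chain described above (discretize the member, integrate the differential equation of the elastic line, compare with measured displacements) can be sketched as follows. The tower geometry, load and stiffness values below are assumed numbers, and a real analysis would use the moment-dependent effective stiffness that the paper sets out to identify rather than a constant EI.

```python
import numpy as np

# Cantilever tower idealization: x measured from the base, horizontal tip load P.
L = 30.0                     # m, tower height (as in the test described)
P = 2.0e3                    # N, assumed horizontal tip load
n = 301
x = np.linspace(0.0, L, n)

M = P * (L - x)              # bending moment along the axis (maximum at the base)
EI = np.full(n, 6.0e7)       # N*m^2, assumed effective stiffness per section
# (in the paper this EI would vary with the loading level of each cross-section)

curvature = M / EI           # elastic line: v'' = M / EI
dx = np.diff(x)
theta = np.concatenate(([0.0], np.cumsum((curvature[1:] + curvature[:-1]) / 2 * dx)))
v = np.concatenate(([0.0], np.cumsum((theta[1:] + theta[:-1]) / 2 * dx)))

print(f"tip rotation    : {theta[-1]:.5f} rad")
print(f"tip displacement: {v[-1]:.3f} m")
```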

20.
A modified scattering model-based speckle filter (SMBSF) based on the spatial proximity principle was applied to the analysis of phased array type L-band synthetic aperture radar (PALSAR) polarimetric data in the coastal environment of North Carolina, USA. The modified filter preserved polarimetric characteristics and further reduced speckle noise qualitatively and quantitatively. Classification accuracy using the SAR data filtered by the modified filter was improved, especially for the forest class.
