1.
Markus Krätzig 《Computational statistics & data analysis》2007,52(2):618-634
The open-source Java software framework JStatCom is presented which supports the development of rich desktop clients for data analysis in a rather general way. The concept is to solve all recurring tasks with the help of reusable components and to enable rapid application development by adopting a standards based approach which is readily supported by existing programming tools. Furthermore, JStatCom allows to call external procedures from within Java that are written in other languages, for example Gauss, Ox or Matlab. This way it is possible to reuse an already existing code base for numerical routines written in domain-specific programming languages and to link them with the Java world. A reference application for JStatCom is the econometric software package JMulTi, which will shortly be introduced. 相似文献
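The cross-language bridging described above can be illustrated, in spirit, by a minimal dispatcher that hands a named routine and its arguments to a registered external engine and converts the result back at the boundary. This is a hypothetical sketch in Python, not JStatCom's actual API; the class and function names are invented, and a plain Python function stands in for an external Gauss/Ox/Matlab process.

```python
# Hypothetical sketch of the engine-bridge pattern: a client registers
# "engines" and dispatches named numerical routines to them. All names
# here are illustrative, not JStatCom's.

class EngineBridge:
    def __init__(self):
        self._engines = {}

    def register(self, name, call):
        """Register one external engine; call(routine, args) -> result."""
        self._engines[name] = call

    def run(self, engine, routine, *args):
        """Dispatch a named routine to a registered engine."""
        if engine not in self._engines:
            raise KeyError(f"no engine registered under {engine!r}")
        return self._engines[engine](routine, args)

# A stand-in "engine" implemented in Python for illustration; a real
# bridge would marshal the call to an external process instead.
def toy_engine(routine, args):
    table = {"mean": lambda xs: sum(xs) / len(xs),
             "ssq":  lambda xs: sum(x * x for x in xs)}
    return table[routine](args)

bridge = EngineBridge()
bridge.register("toy", toy_engine)
print(bridge.run("toy", "mean", 1.0, 2.0, 3.0))  # -> 2.0
```

The point of the pattern is that the client code only ever sees the `run` call; which language implements the routine is a registration detail.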
2.
A stochastic process-based server consolidation approach for dynamic workloads in cloud data centers
Monshizadeh Naeen, Hossein; Zeinali, Esmaeil; Toroghi Haghighat, Abolfazl 《The Journal of Supercomputing》2020,76(3):1903-1930
With the development of information technology, there is a need for computational work everywhere and at all times. Thus, people should be able to carry out their...
3.
This paper presents ST-Hadoop, the first full-fledged open-source MapReduce framework with native support for spatio-temporal data. ST-Hadoop is a comprehensive extension to Hadoop and SpatialHadoop that injects spatio-temporal data awareness inside each of their layers, mainly the language, indexing, and operations layers. In the language layer, ST-Hadoop provides built-in spatio-temporal data types and operations. In the indexing layer, ST-Hadoop spatio-temporally loads and divides data across computation nodes in the Hadoop Distributed File System in a way that mimics spatio-temporal index structures, which results in orders-of-magnitude better performance than Hadoop and SpatialHadoop when dealing with spatio-temporal data and queries. In the operations layer, ST-Hadoop ships with support for three fundamental spatio-temporal queries, namely spatio-temporal range, top-k nearest-neighbor, and join queries. The extensibility of ST-Hadoop allows others to add features and operations easily using approaches similar to those described in the paper. Extensive experiments conducted on a large-scale dataset of 10 TB containing over 1 billion spatio-temporal records show that ST-Hadoop achieves orders-of-magnitude better performance than Hadoop and SpatialHadoop when dealing with spatio-temporal data and operations. The key idea behind the performance gains of ST-Hadoop is its ability to index spatio-temporal data within the Hadoop Distributed File System.
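The co-partitioning idea behind the indexing layer can be sketched in a few lines: bin records by a (time slice, spatial cell) key so that a spatio-temporal range query only needs to touch the partitions overlapping the query box. This toy sketch is not ST-Hadoop code; the function names and the uniform-grid scheme are simplifying assumptions.

```python
# Toy illustration (not ST-Hadoop) of spatio-temporal co-partitioning:
# records are binned by (time slice, spatial cell), and a range query
# enumerates only the partitions that overlap the query box.
from collections import defaultdict

def partition(records, t_slice, cell):
    """records: iterable of (t, x, y, payload); returns key -> payloads."""
    parts = defaultdict(list)
    for t, x, y, payload in records:
        key = (int(t // t_slice), int(x // cell), int(y // cell))
        parts[key].append(payload)
    return parts

def range_query(parts, t_slice, cell, t0, t1, x0, x1, y0, y1):
    """Collect payloads from every partition overlapping the query box."""
    out = []
    for tb in range(int(t0 // t_slice), int(t1 // t_slice) + 1):
        for xb in range(int(x0 // cell), int(x1 // cell) + 1):
            for yb in range(int(y0 // cell), int(y1 // cell) + 1):
                out.extend(parts.get((tb, xb, yb), []))
    return out

recs = [(5, 1.0, 1.0, "a"), (15, 9.0, 9.0, "b"), (25, 1.5, 1.2, "c")]
parts = partition(recs, t_slice=10, cell=5)
print(range_query(parts, 10, 5, 0, 9, 0, 4, 0, 4))  # -> ['a']
```

Note that this returns a candidate set from overlapping partitions; a real system would follow it with a refinement step that filters records against the exact query box.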
4.
5.
《Computers & Geosciences》2006,32(6):767-775
We present an integrated software framework for geophysical data processing, based on an updated seismic data processing program package originally developed at the Program for Crustal Studies at the University of Wyoming. Unlike other systems, this processing monitor supports structured multi-component seismic data streams and multi-dimensional data traces, and employs a unique backpropagation execution logic. This results in an unusual flexibility of processing, allowing the system to handle nearly any geophysical data. A modern and feature-rich graphical user interface (GUI) was developed for the system, allowing editing and submission of processing flows and interaction with running jobs. Multiple jobs can be executed in distributed multi-processor networks and controlled from the same GUI. Jobs, in turn, can also be parallelized to take advantage of parallel processing environments such as local area networks and Beowulf clusters.
6.
The mixed model approach to semiparametric regression is considered for stochastic frontier models, with focus on clustered data. Standard assumptions about the model component representing the inefficiency effect lead to a closed skew normal distribution for the response. Model parameters are estimated by a generalization of restricted maximum likelihood, and random effects are estimated by an orthodox best linear unbiased prediction procedure. The method is assessed by means of Monte Carlo studies, and illustrated by an empirical application on hospital productivity.
7.
Four-dimensional variational data assimilation (4D-Var) is used in environmental prediction to estimate the state of a system from measurements. When 4D-Var is applied in the context of high resolution nested models, problems may arise in the representation of spatial scales longer than the domain of the model. In this paper we study how well 4D-Var is able to estimate the whole range of spatial scales present in one-way nested models. Using a model of the one-dimensional advection–diffusion equation we show that small spatial scales that are observed can be captured by a 4D-Var assimilation, but that information in the larger scales may be degraded.We propose a modification to 4D-Var which allows a better representation of these larger scales.
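The core 4D-Var idea, fitting an initial state so that the forward model best explains observations spread over a time window, can be sketched with a deliberately tiny stand-in: pure advection as the forward model and a single scalar amplitude as the control variable. This is an illustrative sketch, not the paper's system; a real 4D-Var would minimise the cost with an adjoint-based gradient method rather than the brute-force scan used here.

```python
# Toy sketch of 4D-Var: choose the initial state (here a scalar
# amplitude times a fixed shape) minimising the summed squared misfit
# to observations taken at several times in the assimilation window.
import numpy as np

def forward(u0, steps):
    """Forward model: pure advection, one grid-cell circular shift per step."""
    return np.roll(u0, steps)

def cost(amplitude, shape, obs):
    """4D-Var-style cost: sum of squared misfits over the window."""
    u0 = amplitude * shape
    return sum(float(np.sum((forward(u0, k) - y) ** 2))
               for k, y in obs.items())

# Synthetic truth and observations at two times in the window.
shape = np.array([0.0, 1.0, 0.0, 0.0])
truth0 = 2.0 * shape
obs = {1: forward(truth0, 1), 3: forward(truth0, 3)}

# Brute-force scan over the scalar control (a stand-in for the
# gradient-based minimisation a real 4D-Var system would use).
amps = np.linspace(0.0, 4.0, 401)
best = min(amps, key=lambda a: cost(a, shape, obs))
print(round(float(best), 2))  # -> 2.0
```

The recovered amplitude matches the truth because the observations fully constrain this one-parameter control; the paper's point is precisely that in nested models some (large) scales are *not* well constrained by the window of observations.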
8.
《Advanced Engineering Informatics》2014,28(2):127-137
Construction work typically means producing at shifting locations. Moving materials, equipment and men efficiently from place to place, within and between projects, depends on good coordination and requires specialized information systems. The key to such information systems is an appropriate approach to collecting de-centralized sensor readings and processing and distributing them to multiple end users at different locations, both during the construction process and after the project is finished. This paper introduces a framework to support such distributed data collection and management, fostering real-time data collection and processing along with opportunities to retain highly precise data for post-process analyses. In particular, the framework suggests a scheme for exploiting readings from the same sensors at varying levels of detail to inform different levels of decision making: operational, tactical, and strategic. The sensor readings collected in this way are not only potentially useful to track, assess, and analyse construction operations, but can also serve as a reference during the maintenance stage. To this extent, the framework contributes to the existing body of knowledge of construction informatics. The operationality of the framework is demonstrated by developing and applying two on-site information systems to track asphalt paving operations.
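The multi-level-of-detail scheme above amounts to serving the same sensor stream at different aggregation granularities for different decision levels. A minimal sketch, assuming fixed-size averaging windows (the level names and window sizes are illustrative choices, not the paper's scheme):

```python
# Serve one sensor stream at several levels of detail: raw readings
# for operational use, coarser averages for tactical and strategic use.
def aggregate(readings, window):
    """Average consecutive readings in fixed-size windows."""
    return [sum(readings[i:i + window]) / min(window, len(readings) - i)
            for i in range(0, len(readings), window)]

raw = [10, 12, 11, 13, 40, 42, 41, 43]   # operational: full-rate stream
tactical = aggregate(raw, 4)              # e.g. per-shift averages
strategic = aggregate(raw, 8)             # e.g. per-day average
print(tactical, strategic)                # -> [11.5, 41.5] [26.5]
```

Keeping the raw stream alongside the aggregates is what allows the "highly precise data for post-process analyses" the abstract mentions.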
9.
To support analysis and modelling of large amounts of spatio-temporal data having the form of spatially referenced time series (TS) of numeric values, we combine interactive visual techniques with computational methods from machine learning and statistics. Clustering methods and interactive techniques are used to group TS by similarity. Statistical methods for TS modelling are then applied to representative TS derived from the groups of similar TS. The framework includes interactive visual interfaces to a library of modelling methods supporting the selection of a suitable method, adjustment of model parameters, and evaluation of the models obtained. The models can be externally stored, communicated, and used for prediction and in further computational analyses. From the visual analytics perspective, the framework suggests a way to externalize spatio-temporal patterns emerging in the mind of the analyst as a result of interactive visual analysis: the patterns are represented in the form of computer-processable and reusable models. From the statistical analysis perspective, the framework demonstrates how TS analysis and modelling can be supported by interactive visual interfaces, particularly, in a case of numerous TS that are hard to analyse individually. From the application perspective, the framework suggests a way to analyse large numbers of spatial TS with the use of well-established statistical methods for TS analysis.
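The cluster-then-model workflow above can be sketched end to end: group series by similarity, derive a representative per group, and fit a simple model to the representative. This toy uses nearest-seed assignment and a least-squares linear trend as stand-ins for the interactive clustering and the modelling library the framework actually provides; all names and data are invented for illustration.

```python
# Toy cluster-then-model workflow: assign series to the nearest seed,
# average each group into a representative, and fit a trend model to it.
import numpy as np

def assign(series, seeds):
    """Assign each series to the nearest seed (Euclidean distance)."""
    return [int(np.argmin([np.linalg.norm(s - c) for c in seeds]))
            for s in series]

def linear_trend(ts):
    """Fit ts[t] ~ a*t + b by least squares; return the slope a."""
    t = np.arange(len(ts))
    a, b = np.polyfit(t, ts, 1)
    return float(a)

series = [np.array([0.0, 1.0, 2.0, 3.0]),
          np.array([0.1, 1.1, 2.1, 3.1]),
          np.array([3.0, 2.0, 1.0, 0.0])]
seeds = [series[0], series[2]]            # one rising, one falling seed
labels = assign(series, seeds)
rep = np.mean([s for s, l in zip(series, labels) if l == 0], axis=0)
print(labels, round(linear_trend(rep), 2))  # -> [0, 0, 1] 1.0
```

Modelling only the group representative, rather than every series, is what makes the approach scale to "numerous TS that are hard to analyse individually".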
10.
Raimundo F. Dos Santos Jr., Sumit Shah, Arnold Boedihardjo, Feng Chen, Chang-Tien Lu, Patrick Butler, Naren Ramakrishnan 《GeoInformatica》2016,20(2):285-326
Social media have ushered in alternative modalities to propagate news and developments rapidly. Just as traditional IR matured to modeling storylines from search results, we are now at a point to study how stories organize and evolve in additional mediums such as Twitter, a new frontier for intelligence analysis. This study takes news articles as well as social media feeds as input, and extracts and connects entities into interesting storylines not explicitly stated in the underlying data. First, it proposes a novel method of spatio-temporal analysis on induced concept graphs that models storylines propagating through spatial regions in a time sequence. Second, it describes a method to control search-space complexity by providing regions of exploration. And third, it describes ConceptRank as a ranking strategy that differentiates strongly-typed connections from weakly-bound ones. Extensive experiments on the Boston Marathon Bombings of April 15, 2013 as well as socio-political and medical events in Latin America, the Middle East, and the United States demonstrate storytelling’s high application potential, showcasing its use in event summarization and association analysis that identifies events before they hit the newswire.
11.
A framework for evaluating software technology
Many software development organizations struggle to make informed decisions when investing in new software technologies. The authors' experimental framework can help companies evaluate a new software technology by examining its features in relation to its peers and competitors through a systematic approach that includes modeling experiments.
12.
Bernhardt K 《Evolutionary computation》2008,16(1):63-88
This paper addresses the problem of model complexity commonly arising in constructing and using process-based models with intricate interactions. Apart from complex process details, the dynamic behavior of such systems is often limited to a discrete number of typical states. Thus, models reproducing the system's processes in full detail are often too complex and over-parameterized. In order to reduce simulation times and to get a better impression of the important mechanisms, simplified formulations are desirable. In this work, a data-adaptive model reduction scheme that automatically builds simple models from complex ones is proposed. The method can be applied to the transformation and reduction of systems of ordinary differential equations. It consists of a multistep approach using a low-dimensional projection of the model data followed by a Genetic Programming/Genetic Algorithm hybrid to evolve new model systems. As the resulting models again consist of differential equations, their process-based interpretation in terms of new state variables becomes possible. Transformations of two simple models with oscillatory dynamics, simulating a mathematical pendulum and predator-prey interactions respectively, serve as introductory examples of the method's application. The resulting equations of force indicate the predator-prey system's equivalence to a nonlinear oscillator. In contrast to the simple pendulum, it contains driving and damping forces that produce a stable limit cycle.
13.
Many developing countries grapple with the problem of rapid informal settlement emergence and expansion. This exacts considerable costs from neighbouring urban areas, largely as a result of environmental, sustainability and health-related problems associated with such settlements, which can threaten the local economy. Hence, there is a need to understand the nature of, and to be able to predict, future informal settlement emergence locations as well as the rate and extent of such settlement expansion in developing countries. A novel generic framework is proposed in this paper for machine learning-inspired prediction of future spatio-temporal informal settlement population growth. This data-driven framework comprises three functional components which facilitate informal settlement emergence and growth modelling within an area under investigation. The framework outputs are based on a computed set of influential spatial feature predictors pertaining to the area in question. The objective of the framework is ultimately to identify those spatial and other factors that influence the location, formation and growth rate of an informal settlement most significantly, by applying a machine learning modelling approach to multiple data sets related to the households and spatial attributes associated with informal settlements. Based on the aforementioned influential spatial features, a cellular automaton transition rule is developed, enabling the spatio-temporal modelling of the rate and extent of future formations and expansions of informal settlements.
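A cellular automaton transition rule of the kind described above can be sketched as: a cell becomes settled when enough of its neighbours are settled and its suitability score passes a threshold. The specific rule, thresholds, and the hard-coded suitability grid below are assumptions for illustration, not the learned features the paper derives.

```python
# Illustrative CA transition rule for settlement growth: an unsettled
# cell becomes settled when >= `needed` of its 8 neighbours are settled
# and its suitability meets `threshold`. Updates are synchronous.
def step(grid, suitability, threshold=0.5, needed=2):
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            if grid[i][j]:
                continue
            neigh = sum(grid[a][b]
                        for a in range(max(0, i - 1), min(n, i + 2))
                        for b in range(max(0, j - 1), min(n, j + 2))
                        if (a, b) != (i, j))
            if neigh >= needed and suitability[i][j] >= threshold:
                new[i][j] = 1
    return new

grid = [[1, 1, 0],
        [0, 0, 0],
        [0, 0, 0]]
suit = [[0.9, 0.9, 0.9],
        [0.9, 0.9, 0.1],
        [0.1, 0.1, 0.1]]
print(step(grid, suit))  # -> [[1, 1, 0], [1, 1, 0], [0, 0, 0]]
```

In the paper's framework, the suitability surface would come from the machine-learned spatial feature predictors rather than being fixed by hand.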
14.
The protection of software applications is one of the most important problems to solve in information security because it has a crucial effect on other security issues. We can find in the literature many research initiatives that have tried to solve this problem, many of them based on the use of tamperproof hardware tokens. This type of solution depends on two basic premises: (i) increasing physical security by using tamperproof devices and (ii) increasing the complexity of the analysis of the software. The first premise is reasonable. The second one is closely related to the first: its main goal is that the pirate user not be able to modify the software to bypass an operation that is crucial, namely checking the presence of the token. However, experience shows that the second premise is not realistic, because analysis of the executable code is always possible. Moreover, the techniques used to obstruct the analysis process are not enough to discourage an attacker with average resources. In this paper, we review the most relevant works related to software protection, present a taxonomy of those works, and, most importantly, introduce a new and robust software protection scheme. This solution, called SmartProt, is based on the use of smart cards and cryptographic techniques, and its security relies only on the first of the premises given above; that is, SmartProt has been designed to avoid attacks based on code analysis and software modification. The entire system is described following a lifecycle approach, explaining in detail the card setup, production, authorization, and execution phases. We also present some interesting applications of SmartProt as well as the protocols developed to manage licences. Finally, we provide an analysis of its implementation details.
15.
Context: Software Process Engineering promotes the systematic production of software by following a set of well-defined technical and management processes. A comprehensive management of these processes involves the accomplishment of a number of activities such as model design, verification, validation, deployment and evaluation. However, the deployment and evaluation activities need more research effort in order to achieve greater automation. Objective: With the aim of minimizing the time required to adapt the tools at the beginning of each new project and reducing the complexity of constructing mechanisms for automated evaluation, the Software Process Deployment & Evaluation Framework (SPDEF) has been elaborated and is described in this paper. Method: The proposed framework is based on the application of well-known techniques in Software Engineering, such as Model Driven Engineering and Information Integration through Linked Open Data. It comprises a systematic method for deployment and evaluation, a number of models and relationships between models, and some software tools. Results: Automated deployment of the OpenUP methodology is tested through the application of the SPDEF framework and support tools to enable the automated quality assessment of software development or maintenance projects. Conclusions: Making use of the method and the software components developed in the context of the proposed framework, the alignment between the definition of the processes and the supporting tools is improved, while the existing complexity is reduced when it comes to automating the quality evaluation of software processes.
16.
《Information and Software Technology》2013,55(11):1925-1947
Context: A Software Product Line is a set of software systems that are built from a common set of features. These systems are developed in a prescribed way and can be adapted to fit the needs of customers. Feature models specify the properties of the systems that are meaningful to customers. A semantics that models the feature level has the potential to support the automatic analysis of entire software product lines. Objective: The objective of this paper is to define a formal framework for Software Product Lines. This framework needs to be general enough to provide a formal semantics for existing frameworks like FODA (Feature Oriented Domain Analysis), but also to be easily adaptable to new problems. Method: We define an algebraic language, called SPLA, to describe Software Product Lines. We provide the semantics for the algebra in three different ways. The approach followed to give the semantics is inspired by the semantics of process algebras. First we define an operational semantics, next a denotational semantics, and finally an axiomatic semantics. We have also defined a representation of the algebra in propositional logic. Results: We prove that the three semantics are equivalent. We also show how FODA diagrams can be automatically translated into SPLA. Furthermore, we have developed a tool, called AT, that implements the formal framework presented in this paper. This tool uses a SAT solver to check the satisfiability of an SPL. Conclusion: This paper defines a general formal framework for software product lines. We have defined three different semantics that are equivalent; this means that, depending on the context, we can choose the most convenient approach: operational, denotational or axiomatic. The framework is flexible enough because it is closely related to process algebras. Process algebras are a well-known paradigm for which many extensions have been defined.
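The translation-to-propositional-logic step mentioned above can be illustrated with a tiny invented feature model: encode the constraints as a Boolean predicate and check satisfiability (here by brute-force enumeration, where the paper's tool would call a SAT solver). The model, its features, and the constraint set are made up for this sketch and are not taken from the paper.

```python
# Toy feature-model-to-propositional-logic check: each configuration is
# an assignment of booleans to features; `valid` encodes the model's
# constraints; satisfiability = at least one valid configuration.
from itertools import product

features = ["root", "gui", "cli", "net"]

def valid(cfg):
    """cfg: dict feature -> bool. Constraints of the invented toy model."""
    return (cfg["root"]                            # root is mandatory
            and (cfg["gui"] or cfg["cli"])         # or-group under root
            and (not cfg["gui"] or cfg["net"])     # gui requires net
            and not (cfg["gui"] and cfg["cli"]))   # gui excludes cli

configs = [dict(zip(features, bits))
           for bits in product([False, True], repeat=len(features))
           if valid(dict(zip(features, bits)))]
print(len(configs))  # number of valid product configurations -> 3
```

Enumerating all valid configurations also yields the number of products in the line, which is one of the automated analyses a feature-level semantics enables.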
17.
We present a framework for testing applications for mobile computing devices. When a device is moved into and attached to a new network, the proper functioning of applications running on the device often depends on the resources and services provided locally in the current network. This framework provides an application-level emulator for mobile computing devices to solve this problem. Since the emulator is constructed as a mobile agent, it can carry applications across networks on behalf of its target device and allow the applications to connect to local servers in its current network in the same way as if they had been moved with and executed on the device itself. This paper also demonstrates the utility of this framework by describing the development of typical network-dependent applications in mobile and ubiquitous computing settings.
18.
A framework for hardware/software codesign
It is argued that a hardware/software codesign methodology should support the following capabilities: integration of the hardware and software design processes; exploration of hardware/software tradeoffs and evaluation of hardware/software alternatives; and model continuity. A codesign methodology that supports many of these capabilities is outlined. The methodology is iterative in nature and serves to guide codesign exploration with the uninterpreted/interpreted modeling approach. It integrates performance (uninterpreted) models and functional (interpreted) models in a common simulation environment.
19.
《Advances in Engineering Software (1978)》1987,9(3):150-161
A new hybrid stress finite plate vibration capability, providing high accuracy for coarse meshes, is presented with a view to enhancing the behavioural characteristics of the standard hybrid FEM. The software meets the demands of the real-life user for reliable and cost-objective identification of a wide range of vibration modes. The FE matrices are constructed by introducing a system of algorithms which provides an efficient and easily implemented capability that can be translated into any of the existing high-level computing languages, viz. FORTRAN. The computational scheme enables the development of a large number of increasingly sophisticated elements from a single element module as easily as possible by providing it with a library of datasets. All the previously recognised advantages of the hybrid FEM are retained, whilst an exact analytical integrator returns the requisite information for the elemental matrices, thereby obviating an aliasing problem that has plagued the cost-objectiveness of conventional hybrid stress implementations. Extensive numerical tests manifest the numerical potential of the present hybrid FE computational strategy.