Similar Documents
20 similar documents retrieved.
1.
2.
3.
The automated analysis of large volumes of seismic data is a data mining problem in which a large database of 3D images is searched by content to identify the regions of greatest interest to the oil industry. In this paper we perform this search using the 3D orientation histogram as a texture analysis tool to represent and identify regions within the data that are compatible with a query texture.
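To make the idea concrete, the sketch below builds a gradient-orientation histogram for a 3D block and compares it with a query block by cosine similarity. It is a minimal illustration with assumed bin counts, weighting, and similarity measure, not the paper's implementation.

```python
# Minimal sketch (not the authors' implementation): a 3D gradient-orientation
# histogram as a texture signature, compared against a query by cosine similarity.
import numpy as np

def orientation_histogram(volume, bins=(18, 36)):
    """Histogram of gradient orientations (polar, azimuthal angles) of a 3D block."""
    gz, gy, gx = np.gradient(volume.astype(float))
    mag = np.sqrt(gx**2 + gy**2 + gz**2)
    mask = mag > 1e-9                                               # ignore flat voxels
    theta = np.arccos(np.clip(gz[mask] / mag[mask], -1.0, 1.0))     # polar angle [0, pi]
    phi = np.arctan2(gy[mask], gx[mask])                            # azimuth [-pi, pi]
    hist, _, _ = np.histogram2d(theta, phi, bins=bins,
                                range=[[0, np.pi], [-np.pi, np.pi]],
                                weights=mag[mask])
    hist = hist.ravel()
    return hist / (np.linalg.norm(hist) + 1e-12)

def similarity(block, query_block):
    """Cosine similarity between the orientation histograms of two 3D blocks."""
    return float(orientation_histogram(block) @ orientation_histogram(query_block))
```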

4.
In this paper we propose an approach in which interactive visualization and analysis are combined with batch tools for the processing of large data collections. Large and heterogeneous data collections are difficult to analyze and pose specific problems for interactive visualization. Traditional interactive processing and visualization approaches, as well as batch processing, encounter considerable drawbacks for such collections because of the amount and type of data: computing resources are not sufficient for interactive exploration, and automated analysis gives the user only limited control over, and feedback on, the analysis process. In our approach, an analysis procedure with the features and attributes of interest is defined interactively. This procedure is then used for off-line processing of large collections of data sets. The results of the batch process, along with "visual summaries", are used for further analysis. Visualization is used not only for the presentation of the results but also as a tool to monitor the validity and quality of the operations performed during the batch process. Operations such as feature extraction and attribute calculation on the collected data sets are validated by visual inspection. The approach is illustrated by an extensive case study in which a collection of confocal microscopy data sets is analyzed.
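As a loose illustration of the batch side of such a pipeline (assumed file layout and made-up feature names, not the authors' system), an interactively defined procedure can be applied off-line to every data set in a collection and summarized in one table:

```python
# Illustrative sketch only: apply an "analysis procedure" (a list of named feature
# extractors) to every data set in a collection and write one summary row each.
import csv
import glob
import numpy as np

def load_volume(path):
    return np.load(path)                            # placeholder loader (assumption)

procedure = {                                       # procedure defined up front
    "mean_intensity": lambda v: float(v.mean()),
    "max_intensity": lambda v: float(v.max()),
    "voxels_above_t": lambda v: int((v > 0.5).sum()),
}

with open("summary.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["dataset"] + list(procedure))
    for path in sorted(glob.glob("collection/*.npy")):
        volume = load_volume(path)
        writer.writerow([path] + [fn(volume) for fn in procedure.values()])
```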

5.
This paper deals with the dynamics of jointed flexible structures in multibody simulations. Joints are areas where the surfaces of substructures come into contact, for example, screwed or bolted joints. Depending on the spatial distribution of the joint, the overall dynamic behavior can be influenced significantly. It is therefore essential to consider the nonlinear contact and friction phenomena over the entire joint. In multibody dynamics, flexible bodies are often treated with reduction methods such as component mode synthesis (CMS). For jointed flexible structures, it is important to compute the local deformations inside the joint accurately in order to obtain a realistic representation of the nonlinear contact and friction forces. CMS alone cannot capture these local nonlinearities and is therefore extended in this paper with problem-oriented trial vectors. The computation of these trial vectors is based on trial vector derivatives of the CMS reduction basis. The paper describes the application of this extended reduction method to general multibody systems, taking the contact and friction forces into account in the vector of generalized forces and in the Jacobian. To ensure accuracy and numerical efficiency, different contact and friction models are investigated and evaluated. The complete strategy is applied to a multibody system containing a multilayered flexible structure. The numerical results confirm that the method yields accurate results with low computational effort.
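In compact form, the general idea of augmenting a CMS basis can be sketched as follows; the notation and the static-modal-derivative simplification are assumptions for illustration, not the paper's exact derivation.

```latex
% Sketch of an augmented reduction basis (assumed notation): flexible displacements u
% are approximated by CMS modes plus joint-oriented trial vectors \Theta, here
% illustrated with static derivatives of the CMS trial vectors.
\begin{align}
  \mathbf{u} &\approx
    \begin{bmatrix} \boldsymbol{\Phi}_{\mathrm{CMS}} & \boldsymbol{\Theta} \end{bmatrix}
    \mathbf{q}, \\
  \mathbf{K}\,\boldsymbol{\theta}_{ij} &=
    -\frac{\partial \mathbf{K}}{\partial q_j}\,\boldsymbol{\phi}_i
    \qquad \text{(static derivative of trial vector } \boldsymbol{\phi}_i
    \text{ with respect to coordinate } q_j\text{)}.
\end{align}
```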

6.
In recent years, fog computing has emerged as a new distributed system model for a large class of applications that are data-intensive or delay-sensitive. By exploiting widely distributed computing infrastructure located closer to the network edge, communication cost and service response time can be reduced significantly. However, developing this class of applications is not straightforward and requires addressing three key challenges: supporting the dynamic nature of the edge network, managing the context-dependent characteristics of application logic, and dealing with the large scale of the system. In this paper, we present a case study in building fog computing applications using our open source platform Distributed Node-RED (DNR). In particular, we show how applications can be decomposed and deployed to a geographically distributed infrastructure using DNR, and how existing software components can be adapted and reused to participate in fog applications. We present a lab-based implementation of a fog application built using DNR that addresses the first two of the issues highlighted above. To validate that our approach also deals with large scale, we augment our live trial with a large-scale simulation of the application model, conducted in Omnet++, which shows the scalability of the model and how it supports the dynamic nature of fog applications.

7.
Peter Freeman, Software, 1978, 8(5): 501-511
In an attempt to improve the representation used for software design, this paper describes the representation used on an actual project. Improvement through the use of case studies such as this one requires explicit analysis; that analysis is presented in the following paper, so the two should be read together.

8.

A tool was developed for structured and detailed analysis of video data from user tests of interactive systems. It uses a table format to represent an interaction at multiple levels of abstraction. Interactions are segmented based on threshold times for pauses between actions. Usability problems are found using a list of observable indications of the occurrence of problems. The tool was evaluated by having two analysts apply it to three data sets from user tests on two different products. The segmentation technique proved to yield meaningful segments that helped in understanding the interaction. The interaction table was explicit enough to allow a detailed discussion of what had caused the differences between the analysts' lists of usability problems. The results suggested that the majority of differences were caused by unavoidable differences in the interpretation of subjects' behaviour and that only minor improvements should be expected from refining the tool.
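The pause-based segmentation rule is simple enough to sketch directly; the data format and threshold below are assumptions, not taken from the paper's tool.

```python
# Minimal sketch (assumed data format): split a sequence of time-stamped user actions
# into segments whenever the pause between consecutive actions exceeds a threshold.
def segment_actions(actions, pause_threshold=2.0):
    """actions: list of (timestamp_seconds, label); returns a list of segments."""
    segments, current = [], []
    for i, (t, label) in enumerate(actions):
        if current and t - actions[i - 1][0] > pause_threshold:
            segments.append(current)
            current = []
        current.append((t, label))
    if current:
        segments.append(current)
    return segments

# Example: a 5-second pause splits the interaction into two segments.
log = [(0.0, "open menu"), (1.2, "select item"), (6.5, "scroll"), (7.0, "click")]
print(segment_actions(log, pause_threshold=2.0))
```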

9.
10.
As data volumes increase at high speed in more and more application fields of science, engineering, information services, etc., the challenges posed by data-intensive computing gain increasing importance. The emergence of highly scalable infrastructures, e.g. for cloud computing and for petascale computing and beyond, introduces additional issues for which scalable data management becomes an immediate need. This paper makes several contributions. First, it proposes a set of principles for designing highly scalable distributed storage systems that are optimized for heavy data access concurrency. In particular, we highlight the potentially large benefits of using versioning in this context. Second, based on these principles, we propose a set of versioning algorithms, both for data and metadata, that enable high throughput under concurrency. Finally, we implement and evaluate these algorithms in the BlobSeer prototype, which we integrate as a storage backend into the Hadoop MapReduce framework. We perform extensive microbenchmarks as well as experiments with real MapReduce applications; they demonstrate that applying the principles defended in our approach brings substantial benefits to data-intensive applications.
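To make the versioning idea concrete, the sketch below shows a copy-on-write store in which every write publishes a new immutable snapshot, so readers never block writers. It is an illustrative toy, not BlobSeer's actual data or metadata algorithms.

```python
# Illustrative copy-on-write versioning sketch (not BlobSeer's algorithm): every
# write creates a new immutable version of the chunk map, so concurrent readers
# keep seeing a consistent snapshot while writers publish new versions.
import threading

class VersionedBlob:
    def __init__(self):
        self._versions = [{}]          # version 0: empty chunk map {offset: bytes}
        self._lock = threading.Lock()  # serializes version publication only

    def write(self, offset, data):
        with self._lock:
            new_map = dict(self._versions[-1])   # copy-on-write of the chunk map
            new_map[offset] = data
            self._versions.append(new_map)
            return len(self._versions) - 1       # new version number

    def read(self, offset, version=None):
        snapshot = self._versions[-1 if version is None else version]
        return snapshot.get(offset)              # reads never take the lock

blob = VersionedBlob()
v1 = blob.write(0, b"hello")
v2 = blob.write(4096, b"world")
print(blob.read(0, version=v1), blob.read(4096, version=v2))
```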

11.
Peng Haijun, Zhang Mengru, Song Ningning, Kan Ziyun, Multibody System Dynamics, 2022, 54(3): 345-371
A large number of collisions and friction events often occur in multibody systems. These nonsmooth events lead to discontinuous or piecewise-continuous dynamic equations of...

12.
13.
The adjoint method is an elegant approach for computing the gradient of a cost function in order to identify a set of parameters. An additional set of differential equations has to be solved to compute the adjoint variables, which are then used for the gradient computation. However, the accuracy of the numerical solution of the adjoint differential equation has a great impact on the gradient. Hence, an alternative approach is the discrete adjoint method, in which the adjoint differential equations are replaced by algebraic equations. A finite difference scheme is therefore constructed for the adjoint system directly from the numerical time integration method. The method provides the exact gradient of the discretized cost function subject to the discretized equations of motion.
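For orientation, the discrete adjoint recursion for a generic one-step explicit scheme can be written down directly; the notation below is assumed and this is not the paper's specific integrator.

```latex
% Generic discrete-adjoint sketch (assumed notation): states x_k evolve by a one-step
% scheme x_{k+1} = f(x_k, p); the cost is a sum over the time grid.
\begin{align}
  J(p) &= \sum_{k=0}^{N} h(x_k), \qquad x_{k+1} = f(x_k, p), \quad x_0 \text{ given},\\
  \lambda_N &= \left(\frac{\partial h}{\partial x}(x_N)\right)^{\!\top}, \qquad
  \lambda_k = \left(\frac{\partial f}{\partial x}(x_k, p)\right)^{\!\top}\!\lambda_{k+1}
            + \left(\frac{\partial h}{\partial x}(x_k)\right)^{\!\top},
  \quad k = N-1,\dots,1,\\
  \frac{\mathrm{d}J}{\mathrm{d}p} &= \sum_{k=0}^{N-1}
     \lambda_{k+1}^{\top}\,\frac{\partial f}{\partial p}(x_k, p).
\end{align}
```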

14.
This paper presents a quasistatic problem of an elastic body in frictional contact with a moving foundation. The model takes into account wear of the contact surface of the body caused by the friction. We recall the existence and uniqueness results obtained in Sofonea et al. (2017). The main aim of this paper is to present a fully discrete scheme for numerical approximation, together with an error estimate for the solution of this problem. Finally, computational simulations are performed to illustrate the mathematical model.
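Models of this type commonly describe the wear with an Archard-type rate law; the following is only an illustrative form with assumed notation, not necessarily the exact law used in the paper.

```latex
% Archard-type wear rate (illustrative, notation assumed): the wear depth w grows with
% the normal contact pressure p_\nu and the relative sliding speed of the foundation.
\begin{equation}
  \dot{w}(t) = k_w \, p_\nu(t)\, \lVert v^*(t) \rVert, \qquad w(0) = 0.
\end{equation}
```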

15.
Computers & Education, 2002, 39(3): 271-282
A case study of student engagement with simulations in a materials engineering course is presented. The aim of the work was to better understand the characteristics of simulations that support learning: the attributes, qualities, and circumstances of their use which lead to improved understanding. Simulations were introduced into the teaching of engineering heat transfer in 2000 and were modified for 2001 based on feedback received in 2000. The responses of the two student cohorts to the simulations were recorded by observation and questioning during class sessions, by questionnaires administered before, during, and after the teaching period, and by informal interviews after the teaching period. The responses of the two cohorts on the final questionnaire were compared using the Mann–Whitney U test. The features found in the current study to be important for engagement were the complexity of the simulation, the learning environment as a whole (as distinct from simply the software), and overcoming the 'barrier' of navigational opacity. Allowing sufficient time for engagement to develop was found to be critical to achieving learning outcomes. Significantly, the study highlights the need to differentiate carefully between the replication of a scenario and the provision of a simulation designed with learning as the primary goal. The objective of the simulation must be congruent with the objective of the learner and must support it.
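As an aside on the statistical comparison (a minimal sketch with invented Likert-scale responses, not the study's data), two cohorts' questionnaire scores can be compared with SciPy's Mann–Whitney U test:

```python
# Hypothetical example of the cohort comparison (data invented for illustration):
# Mann-Whitney U test on Likert-scale questionnaire scores from two student cohorts.
from scipy.stats import mannwhitneyu

cohort_2000 = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]    # invented questionnaire scores
cohort_2001 = [4, 5, 4, 5, 3, 4, 5, 4, 4, 5]

stat, p_value = mannwhitneyu(cohort_2000, cohort_2001, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.3f}")      # small p suggests the cohorts differ
```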

16.
Sun Xiao, He Jiajin, Multimedia Tools and Applications, 2020, 79(9-10): 5439-5459
Owing to the complexity of language structure and semantics, and the relative scarcity of labeled data and context information, sentiment analysis has...

17.
We introduce the ClusterTree, a new indexing approach for representing clusters generated by any existing clustering approach. A cluster is decomposed into several subclusters and represented as the union of the subclusters. The subclusters can be further decomposed, which isolates the most related groups within the clusters. A ClusterTree is a hierarchy of clusters and subclusters that incorporates the cluster representation into the index structure to achieve effective and efficient retrieval. Our cluster representation is highly adaptive to any kind of cluster. It is well accepted that most existing indexing techniques degrade rapidly as the dimensionality increases. The ClusterTree provides a practical solution for indexing clustered data sets and supports nearest-neighbor retrieval effectively without having to linearly scan the high-dimensional data set. We also discuss an approach to dynamically reconstruct the ClusterTree when new data is added. We present a detailed analysis of this approach and justify it extensively with experiments.
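To illustrate how a cluster hierarchy can answer nearest-neighbor queries without a linear scan, here is a generic branch-and-bound sketch; the centroid/radius bounding, class names, and toy data are assumptions, not the authors' ClusterTree algorithm.

```python
# Generic sketch (not the authors' exact ClusterTree): nearest-neighbor search over
# a hierarchy of clusters, pruning any subcluster whose bounding ball cannot contain
# a point closer than the best candidate found so far.
import numpy as np

class Cluster:
    def __init__(self, points, children=()):
        self.points = np.asarray(points, dtype=float)  # all points in this (sub)cluster
        self.children = list(children)                 # subclusters, possibly empty
        self.centroid = self.points.mean(axis=0)
        self.radius = float(np.linalg.norm(self.points - self.centroid, axis=1).max())

def nearest(node, query, best=(None, np.inf)):
    """Branch-and-bound NN: skip subtrees using dist(query, centroid) - radius."""
    best_point, best_dist = best
    if np.linalg.norm(query - node.centroid) - node.radius >= best_dist:
        return best                                    # whole subcluster pruned
    if not node.children:                              # leaf: scan its points
        d = np.linalg.norm(node.points - query, axis=1)
        i = int(d.argmin())
        if d[i] < best_dist:
            best_point, best_dist = node.points[i], float(d[i])
        return best_point, best_dist
    for child in sorted(node.children,
                        key=lambda c: np.linalg.norm(query - c.centroid)):
        best_point, best_dist = nearest(child, query, (best_point, best_dist))
    return best_point, best_dist

# Toy usage: two subclusters under one root.
a = Cluster([[0, 0], [0, 1], [1, 0]])
b = Cluster([[10, 10], [10, 11], [11, 10]])
root = Cluster(np.vstack([a.points, b.points]), children=[a, b])
print(nearest(root, np.asarray([9.5, 9.5])))
```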

18.
Neal M. Bengtson, Software, 1989, 19(10): 957-965
A comparison was made between using a simulation language to run models on a mainframe computer and on a microcomputer with a hard disk. The study was performed at NASA Langley using both the mainframe and PC versions of SLAM II. The procedure for executing SLAM II on a PC is given. A batch job was created to simplify this procedure and allow PCs with a hard disk to execute simulations with a single command. NASA's space transportation system operations model and the examples in the SLAM II text were used as the basis for the comparison. The PC simulations completed in predictable times, which were almost always faster than the less predictable mainframe times.

19.
To meet the huge demands for computation power and storage space, a future data center may have to include up to millions of servers. The conventional hierarchical tree-based data center network architecture faces several challenges in scaling a data center to that size. Previous research has shown that a server-centric architecture, where servers are not only computation and storage workstations but also intermediate nodes relaying traffic for other servers, scales well to a huge number of servers. This paper presents a server-centric data center network called DPillar, whose topology is inspired by the classic butterfly network. DPillar provides several nice properties and strikes a balance between topological scalability, network performance, and cost efficiency, which makes it suitable for building large-scale future data centers. Using only commodity hardware, a DPillar network can easily accommodate millions of servers. The structure of a DPillar network is symmetric, so that any network bottleneck is eliminated at the architectural level. With each server having only two ports, DPillar is able to provide the bandwidth to support communication-intensive distributed applications. The paper studies the interconnection features of DPillar, how to compute routes, and how to forward packets in DPillar. Extensive simulation experiments have been performed to evaluate the performance of DPillar. The results show that DPillar performs well even in the presence of a large number of server and switch failures.
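As a rough illustration of the digit-correcting routing style used in butterfly-like topologies (a toy sketch with an assumed addressing scheme, not DPillar's actual addressing or routing algorithm):

```python
# Toy sketch of butterfly-style, digit-correcting routing (illustrative of the idea
# behind topologies like DPillar, NOT its actual algorithm). A server address is
# (column, digits); each hop advances to the next column and may rewrite the digit
# associated with the column being crossed.
def digit_correcting_path(src, dst, k):
    """src, dst: (column, tuple of k digits). Returns the list of hops taken."""
    col, digits = src[0], list(src[1])
    path = [(col, tuple(digits))]
    while (col, tuple(digits)) != (dst[0], tuple(dst[1])):
        digits[col] = dst[1][col]          # correct the digit owned by this column
        col = (col + 1) % k                # move to the next column (cyclically)
        path.append((col, tuple(digits)))
    return path

# Example with k = 3 columns and base-4 digits; the path has at most 2k - 1 hops.
print(digit_correcting_path((0, (1, 2, 3)), (2, (3, 2, 0)), k=3))
```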

20.
Dynamic analysis through execution traces is frequently used to analyze the runtime behavior of software systems. However, tracing long-running executions generates voluminous data, which are complicated to analyze and manage. Extracting interesting performance or correctness characteristics out of large traces of data from several processes and threads is a challenging task. Trace abstraction and visualization are potential solutions to alleviate this challenge. Several efforts have been made over the years in many subfields of computer science for trace data collection, maintenance, analysis, and visualization. Many analyses start with an inspection of an overview of the trace before digging deeper and studying more focused and detailed data. These techniques are common and well supported in geographical information systems, which automatically adjust the level of detail depending on the scale. However, most trace visualization tools operate at a single level of representation, which is not adequate to support multilevel analysis. Sophisticated techniques and heuristics are needed to address this problem. Multi-scale (multilevel) visualization with support for zoom and focus operations is an effective way to enable this kind of analysis. Considerable research and several surveys have been published in the field of trace visualization; however, multi-scale visualization has so far received little attention. In this paper, we provide a survey and a methodological structure for categorizing tools and techniques aimed at multi-scale abstraction and visualization of execution trace data, and we discuss the requirements and challenges faced in meeting evolving user demands.
