20 similar documents found (search time: 171 ms)
1.
Calphad, 2020
Emerging modern data analytics attracts much attention in materials research and shows great potential for enabling data-driven design. Data populated from the high-throughput CALPHAD approach enables researchers to better understand underlying mechanisms and facilitates the generation of novel hypotheses, but the increasing volume of data makes the analysis extremely challenging. Herein, we introduce an easy-to-use, versatile, and open-source data analytics frontend, ASCENDS (Advanced data SCiENce toolkit for Non-Data Scientists), designed with the intent of accelerating data-driven materials research and development. The toolkit is also of value beyond materials science, as it can analyze the correlation between input features and target values, train machine learning models, and make predictions from the trained surrogate models for any scientific dataset. The various algorithms implemented in ASCENDS allow users to perform quantified correlation analyses and supervised machine learning on any dataset of interest without an extensive computing and data science background. The detailed usage of ASCENDS is introduced with an example of experimental high-temperature alloy data.
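The abstract does not show ASCENDS's own API, so the following is a minimal, hypothetical sketch of the workflow it describes (quantified correlation analysis followed by surrogate-model training), written against scikit-learn with invented feature data rather than the actual ASCENDS interface.

```python
# Hypothetical sketch of the workflow described above (correlation analysis
# plus a surrogate model); not the actual ASCENDS API.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))       # stand-in for alloy composition features
y = 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

# Quantified correlation between each input feature and the target value
for j in range(X.shape[1]):
    r = np.corrcoef(X[:, j], y)[0, 1]
    print(f"feature {j}: Pearson r = {r:+.2f}")

# Train a surrogate model and estimate its predictive quality
model = RandomForestRegressor(n_estimators=200, random_state=0)
print("cross-validated R^2:", cross_val_score(model, X, y, cv=5).mean())
model.fit(X, y)
print("prediction for first sample:", model.predict(X[:1])[0])
```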
2.
Timo Engelke, Mario Becker, Harald Wuest, Jens Keil, Arjan Kuijper. Expert Systems with Applications, 2013, 40(7): 2704-2714
We present a novel generic approach for interfacing web components on mobile devices in order to rapidly develop Augmented Reality (AR) applications using HTML5, JavaScript, X3D, and a vision engine. A general concept is presented that exposes a generalized abstraction of the components to be integrated, allowing the creation of AR-capable interfaces on widely available mobile devices. Requirements are given, yielding a set of abstractions, components, and helpful interfaces that allow rapid prototyping, research at the application level, as well as commercial applications. A selection of applications (including commercial ones) built with the developed framework is given, demonstrating the generality of the architecture of our MobileAR Browser. Using this concept, a large number of developers can be reached. The system is designed to work with different standards and allows for domain separation of tracking algorithms, rendered content, interaction, and GUI design. This can help groups of developers and researchers with different competences create their applications in parallel, while the declarative content remains exchangeable.
3.
Occupational postures are considered to be an important group of risk factors for musculoskeletal pain. However, the exposure-outcome association is not clear yet. Therefore, we aimed to determine the exposure-outcome association of working postures and musculoskeletal symptoms. Also, we aimed to establish exposure limits for working postures. In a prospective cohort study among 789 workers, intensity, frequency and duration of postures were assessed at baseline using observations. Musculoskeletal pain was assessed cross-sectionally and longitudinally and associations of postures and pain were addressed using logistic regression analyses. Cut-off points were estimated based on ROC-curve analyses. Associations were found for kneeling/crouching and low-back pain, neck flexion and rotation and neck pain, trunk flexion and low-back pain, and arm elevation and neck and shoulder pain. The results provide insight into exposure-outcome relations between working postures and musculoskeletal symptoms as well as evidence-based working posture exposure limits that can be used in future guidelines and risk assessment tools.
Practitioner Summary: Our study gives insight into exposure-outcome associations of working postures and musculoskeletal symptoms (kneeling/crouching and low-back pain, neck flexion/rotation and neck pain, trunk flexion and low-back pain, and arm elevation and neck and shoulder pain). Results furthermore deliver evidence-based postural exposure limits that can be used in guidelines and risk assessments.
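As a rough illustration of the analysis pipeline this abstract describes (logistic regression for the exposure-outcome association, then a ROC-based exposure cut-off), here is a sketch on synthetic data; the variable names, coefficients, and sample values are invented, not the study's.

```python
# Sketch of the reported analysis: logistic regression for the
# exposure-outcome association, then a ROC-based exposure cut-off.
# Synthetic data; names and numbers are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve

rng = np.random.default_rng(1)
trunk_flexion_pct = rng.uniform(0, 50, 789)   # % of work time with trunk flexed
p = 1 / (1 + np.exp(-(-2.0 + 0.06 * trunk_flexion_pct)))
low_back_pain = rng.binomial(1, p)            # 1 = pain reported at follow-up

X = trunk_flexion_pct.reshape(-1, 1)
fit = LogisticRegression().fit(X, low_back_pain)
print("odds ratio per % time exposed:", np.exp(fit.coef_[0][0]))

# Cut-off point from the ROC curve (Youden's J), as in ROC-curve analyses
fpr, tpr, thr = roc_curve(low_back_pain, trunk_flexion_pct)
print("suggested exposure limit:", thr[np.argmax(tpr - fpr)])
```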
4.
Computer Speech and Language, 2003, 17(1): 69-85
This paper focuses on modeling pronunciation variation in two different ways: data-derived and knowledge-based. The knowledge-based approach consists of using phonological rules to generate variants. The data-derived approach consists of performing phone recognition, followed by smoothing using decision trees (D-trees) to alleviate some of the errors in the phone recognition. Using phonological rules led to a small improvement in word error rate (WER); a data-derived approach in which the phone recognition was smoothed using D-trees prior to lexicon generation led to larger improvements over the baseline. The lexicon was employed in two different recognition systems, a hybrid HMM/ANN system and an HMM-based system, to ascertain whether pronunciation variation was truly being modeled. This proved to be the case, as no significant differences were found between the results obtained with the two systems. A comparison between the knowledge-based and data-derived methods showed that 17% of the variants generated by the phonological rules were also found using phone recognition, rising to 46% when the phone recognition output is smoothed using D-trees.
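To make the knowledge-based route concrete, here is a toy sketch of applying phonological rewrite rules to a canonical transcription to generate pronunciation variants; the two rules and the example transcription are invented illustrations, not the paper's rule set.

```python
# Minimal sketch of the knowledge-based approach: phonological rewrite rules
# applied to a canonical transcription to enumerate pronunciation variants.
import itertools
import re

RULES = [
    (r"ə n$", "n̩"),     # schwa deletion with syllabic /n/ ("-en" endings)
    (r"t j", "tʃ"),      # /t j/ coalescence (as in "don't you")
]

def variants(pron: str) -> set[str]:
    """Return the canonical form plus every combination of rule applications."""
    forms = {pron}
    for subset in itertools.chain.from_iterable(
            itertools.combinations(RULES, k) for k in range(1, len(RULES) + 1)):
        v = pron
        for pattern, repl in subset:
            v = re.sub(pattern, repl, v)
        forms.add(v)
    return forms

print(variants("k ɑ t ə n"))   # canonical form plus the schwa-deleted variant
```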
5.
Jörg Becker, Patrick Delfmann, Hanns-Alexander Dietrich, Matthias Steinhorst, Mathias Eggert. Information Systems Frontiers, 2016, 18(2): 359-405
Given the strong increase in regulatory requirements for business processes, the management of business process compliance is becoming an increasingly prominent field in IS research. Several methods have been developed to support compliance checking of conceptual models. However, their focus on distinct modeling languages and on mostly linear (i.e., predecessor-successor related) compliance rules may hinder widespread adoption and application in practice. Furthermore, hardly any of them has been evaluated in a real-world setting. We address this issue by applying a generic pattern matching approach for conceptual models to business process compliance checking in the financial sector. It consists of a model query language, a search algorithm, and a corresponding modeling tool prototype. It is applicable (1) to all graph-based conceptual modeling languages and (2) to different kinds of compliance rules. Furthermore, based on an applicability check, we (3) evaluate the approach in a financial industry project setting against its relevance for decision support of audit and compliance management tasks.
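As a rough, plain-Python illustration of graph-based compliance checking (the paper's approach uses a dedicated model query language and search algorithm, not shown here), the sketch below checks a non-linear rule of the kind mentioned: every path into "execute payment" must pass a "four-eyes approval" node. The process graph and activity names are invented.

```python
# Illustrative compliance check over a graph-based process model:
# "every 'execute payment' must be preceded by 'four-eyes approval'".
from collections import deque

# Process model as a directed graph: node -> list of successors
edges = {"start": ["record order"],
         "record order": ["execute payment"],   # violates the rule
         "execute payment": ["end"]}

def predecessors(graph, node):
    return {u for u, vs in graph.items() if node in vs}

def violates(graph, target="execute payment", required="four-eyes approval"):
    """True if some path reaches `target` without passing `required`."""
    # Backward BFS from the target; reaching a source node without ever
    # crossing `required` means the rule is violated on that path.
    seen, queue = set(), deque([target])
    while queue:
        n = queue.popleft()
        preds = predecessors(graph, n)
        if not preds and n != required:
            return True
        for p in preds - seen:
            if p != required:
                seen.add(p)
                queue.append(p)
    return False

print("compliance violation:", violates(edges))   # -> True for this model
```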
6.
This paper presents SCRAM–CK, a method to elicit requirements by means of strong user involvement supported by prototyping activities. The method integrates two existing approaches, SCRAM and CK theory. SCRAM provides the framework for requirements management, while CK theory provides a framework for reasoning about design and its evolution. The method is demonstrated through the definition and refinement of requirements for the BioVeL web toolkit. The objective of BioVeL is to allow scientists to understand, run, modify, and construct workflows for data analysis with minimal training using a web-based interface. The proposed method is supported by prototyping activities for gathering user feedback and refining requirements and design proposals. Using this method, the prototypes evolved from simple workflow execution enablers to include, in later versions, more complex functionalities for reviewing, modifying, and building workflows. This paper presents a contribution to the application of techniques for requirements engineering. SCRAM–CK is an amalgamated method that combines a user-centred continuous refinement approach with support for design evolution through prototyping. The paper also shows the influence of the requirements engineering process on the evolution of design proposals.
7.
Advances in computer technology, coupled with the rapid emergence of multicore processor technology, have made many-core personal computers available and more affordable. The availability of networks of workstations and clusters of many-core SMPs has made them an attractive solution for high-performance computing, providing computational power equal or superior to supercomputers or mainframes at an affordable cost using commodity components. Finding ways to extract unused and idle computing power from these resources to improve overall performance, and to fully utilize the underlying new hardware platforms, are major topics in this field of research. This paper introduces the design rationale and implementation of an effective toolkit for performance measurement and analysis of parallel applications in cluster environments; it not only generates a timing-graph representation of a parallel application, but also provides charts of the application's execution performance data. The goal in developing this toolkit is to give application developers a better understanding of an application's behavior among the computing nodes selected for a particular execution. Additionally, multiple execution results of a given application under development can be combined and overlapped, permitting application developers to perform "what-if" analysis, i.e., to understand more deeply the utilization of allocated computational resources. Experiments with this toolkit have shown its effectiveness in the development and performance tuning of parallel applications, extending its use to the teaching of message-passing and shared-memory parallel programming courses.
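A hedged sketch of the measurement idea follows: per-node timing events collected during a run, then two runs overlaid side by side for "what-if" comparison. Function names, the simulated workloads, and the output layout are invented for illustration; the paper's toolkit is considerably richer.

```python
# Toy illustration: record wall-clock duration per compute node for each
# run, then overlap two runs for a per-node "what-if" comparison.
import time
from collections import defaultdict

def timed_run(label, work_by_node):
    """Simulate one execution and record a duration for each node."""
    durations = {}
    for node, seconds in work_by_node.items():
        t0 = time.perf_counter()
        time.sleep(seconds)                 # stands in for real computation
        durations[node] = time.perf_counter() - t0
    return label, durations

runs = [timed_run("run-A", {"node0": 0.02, "node1": 0.05}),
        timed_run("run-B", {"node0": 0.03, "node1": 0.02})]

# Overlap the runs: per-node timings side by side
table = defaultdict(dict)
for label, durations in runs:
    for node, d in durations.items():
        table[node][label] = d
for node, cols in sorted(table.items()):
    print(node, {k: round(v, 3) for k, v in cols.items()})
```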
8.
Flood risk management must rely on a proper and encompassing flood risk assessment, one that ideally reflects the individual characteristics of all elements at risk of being flooded. In addition to prevalent expert knowledge, such an approach must also rely on local knowledge. In this context, stakeholder preferences for risk assessment indicators and assessment deliverables hold great importance but are often neglected. This paper proposes to put this body of information into operation in the form of a knowledge base, thereby making it accessible and reusable in multi-criteria risk assessment. Selected use cases discuss the advantages of such a semantically enhanced assessment approach.
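A minimal sketch of the underlying idea, under invented assumptions: stakeholder preferences stored as reusable per-element indicator weights (the "knowledge base") and applied in a weighted multi-criteria risk score. The indicators, element types, and weights are illustrative, not from the paper.

```python
# Stakeholder preferences captured as reusable indicator weights,
# then applied in a simple weighted multi-criteria risk score.
knowledge_base = {
    "residential": {"water_depth": 0.5, "flow_velocity": 0.2, "warning_time": 0.3},
    "hospital":    {"water_depth": 0.3, "flow_velocity": 0.2, "warning_time": 0.5},
}

def risk_score(element_type, indicators):
    """Weighted sum of normalized indicator values (all in [0, 1])."""
    weights = knowledge_base[element_type]
    return sum(weights[k] * indicators[k] for k in weights)

print(risk_score("hospital", {"water_depth": 0.8,
                              "flow_velocity": 0.1,
                              "warning_time": 0.9}))
```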
9.
Proper planning and execution of mass vaccination at the onset of a pandemic outbreak is important for local health departments. Mass vaccination clinics must be set up and run for naturally occurring pandemic outbreaks or in response to terrorist attacks, e.g., an anthrax attack. Walk-in clinics have often been used to administer vaccines. When a large percentage of a population must be vaccinated to mitigate the ill effects of an attack or pandemic, drive-through clinics appear to be more effective because a much higher throughput can be achieved than with walk-in clinics. There are other benefits as well; for example, the spread of the disease can be minimized because infected patients are not exposed to uninfected patients. This research extends the simulation modeling work that was done for a mass vaccination drive-through clinic in the city of Louisville in November 2009. This clinic is one of the largest ever set up, with more than 19,000 patients served, over two-thirds of them via ten drive-through lanes. The intent of the model in this paper is to illustrate a general tool that can be customized for a community of any size. The simulation-optimization tool allows decision makers to investigate several interacting control variables simultaneously; any of several criterion models, in which various performance measures are either optimized or constrained, can be investigated. The model helps the decision maker determine the required number of Points of Dispense (POD) lanes, the number and length of the lanes for consent hand-out and fill-in, the staff needed at the consent hand-out stations and PODs, and the average user waiting time in the system.
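The core trade-off the model explores can be illustrated with a toy queueing simulation: given an arrival rate and a number of drive-through POD lanes, estimate the average wait. All parameter values below are invented; the paper's simulation-optimization model is far richer.

```python
# Toy discrete-event sketch: average patient wait vs. number of POD lanes.
import heapq
import random

def simulate(lanes, arrival_rate=8.0, service_min=4.0, n_patients=2000, seed=0):
    """arrival_rate in patients/hour; service_min is mean service time (min)."""
    rng = random.Random(seed)
    free_at = [0.0] * lanes                 # time at which each lane frees up
    heapq.heapify(free_at)
    t, total_wait = 0.0, 0.0
    for _ in range(n_patients):
        t += rng.expovariate(arrival_rate / 60.0)   # next arrival (minutes)
        lane_free = heapq.heappop(free_at)
        start = max(t, lane_free)                   # wait if all lanes busy
        total_wait += start - t
        heapq.heappush(free_at, start + rng.expovariate(1.0 / service_min))
    return total_wait / n_patients

for lanes in (1, 2, 3):
    print(lanes, "lanes -> avg wait", round(simulate(lanes), 1), "min")
```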
10.
This paper describes the development of a Climate Change Toolkit (CCT) to perform the tasks needed in a climate change study, as well as projection of extreme weather conditions by analyzing historical weather patterns. CCT consists of Data Extraction, Global Climate Data Management, Bias Correction and Statistical Downscaling, Spatial Interpolation, and a Critical Consecutive Day Analyzer (CCDA). CCDA uses a customized data mining approach to recognize spatial and temporal patterns of extreme events. CCT is linked to an archive of 0.5° historical global daily data (CRU, 1970–2005) and GCM data (1960–2099) for five models and four carbon scenarios. Application of CCT in California using ensemble results of scenario RCP8.5 showed a probable increase in the frequency of dry periods in the southern part of the region, while decreasing in the north. The frequency of wet periods may suggest higher risks of flooding in the north and along coastal strips. We further found that every county in northern California may experience flooding conditions like those of 1986 at least once between 2020 and 2050.
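The consecutive-day analysis at the heart of CCDA can be sketched as a scan for runs of dry days exceeding a critical length; the dryness threshold and minimum run length below are illustrative assumptions, not the toolkit's defaults.

```python
# Sketch of the CCDA idea: find runs of consecutive dry days in a daily
# precipitation series that meet or exceed a critical length.
def critical_dry_periods(precip_mm, dry_threshold=1.0, min_run=10):
    """Yield (start_index, length) for each dry spell of >= min_run days."""
    start = None
    for i, p in enumerate(precip_mm + [float("inf")]):  # sentinel ends last run
        if p < dry_threshold:
            start = i if start is None else start
        else:
            if start is not None and i - start >= min_run:
                yield start, i - start
            start = None

series = [0.0] * 12 + [5.0] + [0.2] * 15 + [3.0]
print(list(critical_dry_periods(series)))   # -> [(0, 12), (13, 15)]
```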
11.
This article presents an overview of COSA, a cognitive system architecture, which is a generic framework proposing a unified architecture for cognitive systems. Conventional automation and similar systems lack the ability to cooperate and to exhibit cognition, leading to serious deficiencies when acting in complex environments, especially in the context of human-computer interaction. Cognitive systems based on cognitive automation can overcome these deficiencies. Designing such artificial cognitive systems can be considered a very complex software development process. Although a number of artificial cognitive systems have already demonstrated great functional potential in field tests, the engineering approach for this kind of software is still a candidate for further improvement. Therefore, widespread application of cognitive systems has not been achieved yet. This article presents a new engineering approach for cognitive systems, implemented by the COSA framework, which may be a crucial step towards achieving widespread application of cognitive systems. The approach is based on a new concept of generating cognitive behaviour, the cognitive process (CP). The CP can be regarded as a model of the human information processing loop whose behaviour is driven solely by "a-priori knowledge". The main features of COSA are the implementation of the CP as its kernel and the separation of architecture from application, leading to reduced development time and increased knowledge reuse. Additionally, separating the knowledge modelling process from behaviour generation enables the knowledge designer to use the knowledge representation best suited to the modelling problem. A first application based on COSA implements an autonomous unmanned air vehicle accomplishing a military reconnaissance mission. Some of the application experiences with the new approach are presented.
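A highly simplified sketch of the separation the abstract emphasizes: declarative a-priori knowledge (here, plain condition-action rules) kept apart from a generic processing loop that generates behaviour from it. COSA's actual kernel is far more elaborate; the rule contents below are invented.

```python
# Declarative a-priori knowledge, separated from the generic loop below.
A_PRIORI_KNOWLEDGE = [
    # (condition over current beliefs, action to take)
    (lambda b: b.get("threat_detected"), "evade"),
    (lambda b: b.get("target_in_sight") and not b.get("threat_detected"), "observe"),
]

def cognitive_step(beliefs):
    """One pass of the processing loop: interpret beliefs, select behaviour."""
    for condition, action in A_PRIORI_KNOWLEDGE:
        if condition(beliefs):
            return action
    return "continue_mission"

print(cognitive_step({"target_in_sight": True}))    # -> observe
print(cognitive_step({"threat_detected": True}))    # -> evade
```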
12.
Zakaria Maamar, Djamal Benslimane, Michael Mrissa, Chirine Ghedira. Software and Systems Modeling, 2006, 5(2): 219-229
This paper presents CooPS, a method for Coordinating Personalized Services. These services are primarily offered to mobile users. The concept of services is the object of intense investigation in both academia and industry. However, very little has been accomplished so far regarding, first, personalizing services for the benefit of mobile users and, second, providing the appropriate methodological support for those (i.e., designers) who will be specifying the operations of personalization. Various obstacles still exist, such as the lack of techniques for modeling and specifying the integration of personalization into services, and the fact that existing approaches to service composition typically facilitate orchestration only, while neglecting the contexts of users and services. CooPS consists of several steps ranging from service definition and personalization to service deployment. Each step has representation techniques that aim at facilitating the specification and validation of the operations of coordinating personalized services.
Zakaria Maamar is an associate professor in computer sciences at Zayed University, Dubai, United Arab Emirates. His research interests include Web services, software agents, and context-aware computing. Maamar has a PhD in computer sciences from Laval University.
Djamal Benslimane is a full professor in computer sciences at Claude Bernard Lyon 1 University and a member of the Laboratoire d'InfoRmatique en Images et Systèmes d'information- Centre National De la Recherche Scientifique (LIRIS-CNRS), both in Lyon, France. His research interests include interoperability, Web services, and ontologies. Benslimane has a PhD in computer sciences from Blaise Pascal University.
Michael Mrissa is a Ph.D. candidate in computer sciences at Claude Bernard Lyon 1 University and a member of the Laboratoire d'InfoRmatique en Images et Systèmes d'information - Centre National De la Recherche Scientifique (LIRIS-CNRS), both in Lyon, France. His research interests include semantic Web services, interoperability and peer-to-peer networks.
Chirine Ghedira is an associate professor in computer sciences at Claude Bernard Lyon 1 University and a member of the Laboratoire d'InfoRmatique en Images et Systèmes d'information - Centre National De la Recherche Scientifique (LIRIS-CNRS), both in Lyon, France. Her research interests include Web services and context-aware computing. Ghedira has a PhD in computer sciences from the National Institute for Applied Sciences (INSA).
13.
Advanced Robotics, 2013, 27(12-13): 1641-1662
The goal of this work is to develop a control framework that provides assistance to subjects in such a manner that the interaction between the subject and a robot-assisted rehabilitation system remains smooth during therapy. To achieve smoothness of interaction, the control framework is designed to automatically adjust the control gains of the robot-assisted rehabilitation system, thereby modifying the interaction dynamics between the system and the subject. An artificial neural network (ANN)-based proportional–integral (PI) gain scheduling controller is proposed to automatically determine the appropriate control gains for each individual subject. The human arm model is integrated with the ANN-based PI gain scheduling controller, where the ANN uses estimated human arm parameters to select the appropriate PI gains for each subject such that the resulting interaction dynamics between the subject and the rehabilitation system yield smooth interaction. Experimental results involving unimpaired subjects on a PUMA robot-based rehabilitation system are presented to demonstrate the efficacy of the proposed controller.
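Structurally, the controller described can be sketched as follows: estimated human-arm parameters feed a gain-scheduling function (an ANN in the paper; a hand-set linear map here as a stand-in), whose output sets the gains of a PI control law. All coefficients and parameter values are invented.

```python
# Structural sketch of ANN-based PI gain scheduling; the trained ANN is
# replaced by an illustrative linear map from arm parameters to gains.
def schedule_gains(arm_stiffness, arm_damping):
    """Stand-in for the trained ANN: map arm parameters to (Kp, Ki)."""
    kp = 5.0 + 0.8 * arm_stiffness       # illustrative coefficients only
    ki = 0.5 + 0.1 * arm_damping
    return kp, ki

def pi_step(error, integral, kp, ki, dt=0.01):
    """One step of the PI control law: u = Kp*e + Ki * integral of e dt."""
    integral += error * dt
    return kp * error + ki * integral, integral

kp, ki = schedule_gains(arm_stiffness=12.0, arm_damping=3.0)
integ = 0.0
for error in (0.30, 0.22, 0.15):         # shrinking tracking error
    u, integ = pi_step(error, integ, kp, ki)
    print(f"control output u = {u:.3f}")
```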
14.
Irina Măriuca Asăvoae, Mihail Asăvoae, Adrián Riesco. International Journal on Software Tools for Technology Transfer (STTT), 2018, 20(6): 739-769
We describe Chisel, a tool that synthesizes a program slicer directly from a given algebraic specification of a programming language's operational semantics \(\mathcal {S}\). \(\mathcal {S}\) is assumed to be a rewriting logic specification, given in Maude, while the program is a ground term of this specification. Chisel takes \(\mathcal {S}\) and synthesizes language constructs, i.e., instructions, that produce features relevant for slicing, e.g., data dependency. We implement syntheses adjusted to each feature as model checking properties over an abstract representation of \(\mathcal {S}\). The synthesis results are used by a traditional interprocedural slicing algorithm that we parameterize by the synthesized language features. We present the tool on two language paradigms: high-level imperative and low-level assembly languages. Computing program slices for these languages allows for extracting traceability properties in standard compilation chains and makes our tool well suited to the validation of embedded system designs. Chisel's slicing benchmark evaluation is based on benchmarks used in avionics.
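The downstream step that Chisel's synthesized features feed, a backward slice over data dependencies, can be illustrated in miniature. The toy program and its def/use table below are invented stand-ins; in Chisel, such dependency information is derived from the language semantics rather than written by hand.

```python
# Minimal backward slicing over data dependencies (intraprocedural toy).
def backward_slice(defs_uses, criterion):
    """defs_uses: line -> (defined_var, used_vars). Slice on `criterion`."""
    relevant, sliced = {criterion}, set()
    for line in sorted(defs_uses, reverse=True):     # walk program backwards
        defined, used = defs_uses[line]
        if defined in relevant:
            sliced.add(line)
            relevant |= set(used)
    return sorted(sliced)

program = {1: ("a", []),        # a = input()
           2: ("b", []),        # b = input()
           3: ("c", ["a"]),     # c = a + 1
           4: ("d", ["b"]),     # d = b * 2  (irrelevant to r)
           5: ("r", ["c"])}     # r = result computed from c

print(backward_slice(program, "r"))   # -> [1, 3, 5]; lines 2 and 4 drop out
```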
15.
16.
E-Science has the potential to transform school science by enabling learners, teachers and research scientists to engage together in authentic scientific enquiry, collaboration and learning. However, if we are to reap the benefits of this potential as part of everyday teaching and learning, we need to explicitly think about and support the work required to set up and run e-Science experiences within any particular educational context. In this paper, we present a framework for identifying and describing the resources, tools and services necessary to move e-Science into the classroom, together with examples of these. This framework is derived from previous experiences conducting educational e-Science projects and systematic analysis of the categories of 'hidden work' needed to run these projects. The articulation of resources, tools and services based on these categories provides a starting point for more methodical design and deployment of future educational e-Science projects, reflection on which can also help further develop the framework. It also points to the technological infrastructure from which such tools and services could be built. As such it provides an agenda of work to develop both processes and technologies that would make it practical for teachers to deliver active and collaborative e-Science learning experiences on a larger scale within and across schools. Routine school e-Science will only be possible if such support is specified, implemented and made available to teachers within their work contexts in an appropriate and usable form.
17.
Control Engineering Practice, 2000, 8(10): 1119-1133
This paper presents a generic natural language interface that can be applied to the teleoperation of different kinds of complex interactive systems. Through this interface the operators can ask for simple actions or more complex tasks to be executed by the system. Complex tasks will be decomposed into simpler actions, generating a network of actions whose execution will result in the accomplishment of the required task. As a practical application, the system has been applied to the teleoperation of a real mobile robot, allowing the operator to move the robot in a partially structured environment through natural language sentences.
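The task-to-action-network decomposition can be sketched as follows; the task phrase, the decomposition table, and the dispatch step are invented illustrations rather than the paper's mechanism.

```python
# Toy sketch: a complex spoken task mapped to a network of simpler actions
# with ordering dependencies, then executed in dependency order.
TASK_LIBRARY = {
    "bring the box to room B": [
        ("go to box", []),
        ("grasp box", ["go to box"]),
        ("go to room B", ["grasp box"]),
        ("release box", ["go to room B"]),
    ],
}

def execute(task):
    """Run actions in an order that respects the dependency network."""
    actions = dict(TASK_LIBRARY[task])
    done = set()
    while len(done) < len(actions):
        for action, deps in actions.items():
            if action not in done and all(d in done for d in deps):
                print("executing:", action)   # a real system would dispatch
                done.add(action)              # this to the robot controller

execute("bring the box to room B")
```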
18.
A unified approach for structural reanalysis of all types of topological modifications is presented. The modifications considered include various cases of deletion and addition of members and joints. The most challenging problem, where the structural model is itself allowed to vary, is presented. The two cases, where the number of degrees of freedom is decreased and increased, are considered. Various types of modified topologies are discussed, including the common conditionally unstable structures. The solution procedure is based on the combined approximations approach and involves small computational effort. Numerical examples show that accurate results are achieved for significant topological modifications. Exact solutions are obtained efficiently for modifications in a small number of members.
19.
In Germany, bridges have an average age of 40 years. A bridge consumes between 0.4% and 2% of its construction cost per year over its entire life cycle, meaning that up to 80% of the construction cost is additionally needed for operation, inspection, maintenance, and demolition. Current practices rely either on paper-based inspections or on abstract specialist software. Every application in the inspection and maintenance sector uses its own data model for structures, inspections, defects, and maintenance. Because of this, data and properties have to be transferred manually; otherwise, a converter is necessary for every data exchange between two applications. To overcome this issue, an adequate model standard for inspections, damage, and maintenance is necessary. Modern 3D models may serve as a single source of truth, as suggested by the Building Information Modeling (BIM) concept. Further, these models offer a clear visualization of the built infrastructure and improve not only the planning and construction phases but also the operation phase of construction projects. BIM is established mostly in the Architecture, Engineering, and Construction (AEC) sector to plan and construct new buildings. Currently, BIM does not cover the whole life cycle of a building, especially not inspection and maintenance. Creating damage models requires the building model first, because a defect depends on the building component, its properties, and its material. Hence, a building information model is necessary to obtain meaningful conclusions from damage information. This paper analyzes the requirements that arise from practice and the research that has been done on modeling damage and related information for bridges. With a look at damage categories and use cases related to inspection and maintenance, scientific literature is discussed and synthesized. Finally, research gaps and needs are identified and discussed.
20.
U. Kirsch. Structural and Multidisciplinary Optimization, 2000, 20(2): 97-106
The Combined Approximations (CA) method, developed recently, is an effective reanalysis approach providing high-quality results. In the solution process, the terms of the binomial series, used as basis vectors, are first calculated by forward and back substitutions. Utilizing a Gram–Schmidt orthogonalization procedure, a new set of uncoupled basis vectors is then generated and normalized. Consequently, accurate results can be achieved by considering additional vectors, without modifying the calculations that were already carried out. In previous studies, the CA method has been used to obtain efficient and accurate approximations of the structural response in problems of linear reanalysis. It is shown in this paper that the method is well suited to a wide range of structural optimization problems, including linear reanalysis, nonlinear reanalysis, and eigenvalue reanalysis. Some considerations related to the efficiency of the solution process and the accuracy of the results are discussed, and numerical examples are presented. It is shown that efficient and accurate approximations are achieved even for very large changes in the design.
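The steps the abstract lists can be sketched numerically for a linear reanalysis problem K u = f with K = K0 + dK: binomial-series terms as basis vectors, Gram–Schmidt orthonormalization (via QR here), then a small reduced system. The matrices below are random stand-ins for stiffness matrices, and the explicit inverse stands in for the forward/back substitutions of a factorized K0.

```python
# Numerical sketch of Combined Approximations (CA) reanalysis.
import numpy as np

rng = np.random.default_rng(2)
n, k = 30, 4                                   # dofs, number of basis vectors
A = rng.normal(size=(n, n))
K0 = A @ A.T + n * np.eye(n)                   # original stiffness (SPD stand-in)
dK = 0.3 * (lambda B: (B + B.T) / 2)(rng.normal(size=(n, n)))  # modification
f = rng.normal(size=n)

# Basis vectors from the binomial series: r1 = K0^-1 f, r_{i+1} = -K0^-1 dK r_i
K0_inv = np.linalg.inv(K0)                     # stands in for forward/back subs.
R = [K0_inv @ f]
for _ in range(k - 1):
    R.append(-K0_inv @ (dK @ R[-1]))
R = np.linalg.qr(np.column_stack(R))[0]        # Gram-Schmidt via reduced QR

# Small k x k reduced system gives the combined approximation
K = K0 + dK
y = np.linalg.solve(R.T @ K @ R, R.T @ f)
u_ca, u_exact = R @ y, np.linalg.solve(K, f)
print("relative error:",
      np.linalg.norm(u_ca - u_exact) / np.linalg.norm(u_exact))
```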