Similar Documents
1.
This paper deals with the verification of Reconfigurable Control Applications, and their assignment into the execution environment, following the component-based international industrial standard IEC61499. According to this standard, a Function Block (FB) is an event-triggered component, and an application is an FB network that has to meet temporal properties derived from user requirements. If a reconfiguration scenario is applied at run-time, the FB network implementing the application is changed in whole or in part. To cover all possible cases, we classify such scenarios into three classes and define an agent-based architecture, designed with nested state machines, that automatically handles all possible reconfigurations. To verify and assign the Function Blocks corresponding to each reconfiguration scenario into the execution environment, we define an approach based on the exploration of reachability graphs to verify temporal properties. This approach constructs feasible operating system (OS) tasks encoding the FB network of each scenario. The application is therefore treated as sets of OS tasks, where each set is loaded into memory when the corresponding reconfiguration scenario is applied by the agent. We developed the tool XAssign to support these contributions and applied it to the FESTO production system available in our research laboratory. Copyright © 2009 John Wiley and Sons Asia Pte Ltd and Chinese Automatic Control Society

2.
The Internet of Things (IoT) is an emerging technology paradigm where millions of sensors and actuators help monitor and manage physical, environmental, and human systems in real time. The inherent closed-loop responsiveness and decision making of IoT applications make them ideal candidates for low-latency and scalable stream processing platforms. Distributed stream processing systems (DSPS) hosted in cloud data centers are becoming the vital engine for real-time data processing and analytics in any IoT software architecture. But the efficacy and performance of contemporary DSPS have not been rigorously studied for IoT applications and data streams. Here, we propose RIoTBench, a real-time IoT benchmark suite, along with performance metrics, to evaluate DSPS for streaming IoT applications. The benchmark includes 27 common IoT tasks classified across various functional categories and implemented as modular microbenchmarks. Further, we define four IoT application benchmarks composed from these tasks, based on common patterns of data preprocessing, statistical summarization, and predictive analytics that are intrinsic to the closed-loop IoT decision-making life cycle. These are coupled with four stream workloads sourced from real IoT observations on smart cities and smart health, with peak stream rates that range from 500 to 10,000 messages/second from up to 3 million sensors. We validate the RIoTBench suite for the popular Apache Storm DSPS on the Microsoft Azure public cloud and present empirical observations. This suite can be used by DSPS researchers for performance analysis and resource scheduling, by IoT practitioners to evaluate DSPS platforms, and even reused within IoT solutions.
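The microbenchmark idea is straightforward to prototype outside any particular DSPS. Below is a minimal, framework-agnostic sketch of one such dataflow (parse, range filter, windowed statistical summary) instrumented for per-message latency; the task names and structure here are illustrative assumptions, not RIoTBench's actual API, whose tasks are implemented as Apache Storm bolts.

```python
import statistics
import time

def parse(msg):
    """Parse task: decode a sensor reading from a text message."""
    return float(msg)

def range_filter(value, lo=0.0, hi=100.0):
    """Filter task: drop readings outside a plausible sensor range."""
    return value if lo <= value <= hi else None

def run_pipeline(messages, window=50):
    """Chain the tasks and record per-message latency, the kind of
    metric a DSPS benchmark would report."""
    buffer, latencies, summaries = [], [], []
    for msg in messages:
        t0 = time.perf_counter()
        value = range_filter(parse(msg))
        if value is not None:
            buffer.append(value)
            if len(buffer) == window:          # windowed summarization task
                summaries.append((statistics.mean(buffer),
                                  statistics.stdev(buffer)))
                buffer.clear()
        latencies.append(time.perf_counter() - t0)
    return summaries, latencies
```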

3.
This paper addresses scheduling problems for tasks with release and execution times. We present a number of efficient, easy-to-implement algorithms for constructing schedules of minimum makespan when the number of distinct task execution times is fixed. For a set of independent tasks, our algorithm in the single-processor case runs in time linear in the number of tasks; with precedence constraints, our algorithm runs in time linear in the sum of the number of tasks and the size of the precedence constraints. In the multiprocessor case, our algorithm constructs minimum-makespan schedules for independent tasks with uniform execution times; it runs in O(n log m) time, where n is the number of tasks and m is the number of processors.
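For the independent-task, uniform-execution-time case, the flavor of such an algorithm can be sketched with the classic earliest-available-processor rule over a min-heap of processors. This is a plausible reading of the O(n log m) bound rather than the paper's exact algorithm; the initial sort adds O(n log n) when tasks are not already ordered by release time.

```python
import heapq

def schedule_uniform(release, p, m):
    """Greedy minimum-makespan schedule for independent tasks with release
    times and a uniform execution time p on m identical processors: always
    start the earliest-released waiting task on the earliest-free processor.
    The heap operations cost O(n log m); sorting adds O(n log n)."""
    order = sorted(range(len(release)), key=lambda i: release[i])
    free = [(0.0, j) for j in range(m)]        # (time processor is free, id)
    heapq.heapify(free)
    start = {}
    for i in order:
        t, j = heapq.heappop(free)
        s = max(t, release[i])                 # earliest feasible start
        start[i] = (s, j)
        heapq.heappush(free, (s + p, j))
    makespan = max(s + p for s, _ in start.values())
    return start, makespan

starts, cmax = schedule_uniform([0, 0, 2, 3, 5], p=2, m=2)   # cmax == 7
```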

4.
Recovering network connectivity structure from high-dimensional observations is of increasing importance in statistical learning applications. A prominent approach is to learn a sparse Gaussian Markov random field by optimizing a regularized maximum likelihood, where sparsity is induced by imposing an L1 norm on the entries of the precision matrix. In this article, we shed light on an alternative objective in which, instead of the precision matrix, its Cholesky factor is penalized by the L1 norm. We show that this objective criterion possesses attractive properties that allowed us to develop a very fast Scale-Free Networks Estimation Through Cholesky factorization (SNETCH) optimization algorithm based on coordinate descent, which is highly parallelizable and can exploit an active-set approach. The approach is particularly suited to problems whose structure admits a sparse Cholesky factor, an important example being scale-free networks. Evaluation on synthetically generated examples and on high-impact applications from the biomedical domain with more than 900,000 variables provides evidence that for such tasks the SNETCH algorithm can learn the underlying structure more accurately, and an order of magnitude faster, than state-of-the-art approaches based on the L1-penalized precision matrix.
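The penalized objective is easy to state concretely. The sketch below evaluates a Gaussian negative log-likelihood in which the precision matrix is parameterized through its Cholesky factor L, with the L1 penalty placed on the entries of L itself; this mirrors the criterion described above, though the exact SNETCH formulation and its coordinate-descent updates may differ in detail.

```python
import numpy as np

def penalized_nll(L, S, lam):
    """Negative Gaussian log-likelihood with precision Omega = L @ L.T,
    where L is lower triangular with positive diagonal, plus an L1 penalty
    on the Cholesky factor L rather than on Omega:
        f(L) = -2 * sum(log diag(L)) + trace(S @ L @ L.T) + lam * ||L||_1
    S is the sample covariance. (Implementations often exclude the diagonal
    of L from the penalty; it is included here for brevity.)"""
    d = np.diag(L)
    if np.any(d <= 0):
        return np.inf                       # outside the feasible region
    log_det = 2.0 * np.log(d).sum()         # log det(L @ L.T)
    fit = np.trace(S @ L @ L.T)
    return -log_det + fit + lam * np.abs(L).sum()
```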

5.
Interruptions are prevalent phenomena in modern working environments; yet, few interruption studies have been conducted across different types of human tasks. A study using computer-based human behavior tasks was carried out to investigate the effects of interruptions with four different primary-interrupting task sets. The tasks were defined by cognitive and motor skill processes based on human behavior classification theory. The results showed that interruption effects on task completion time were most negative for cognitive/cognitive task sets and smallest for skill/cognitive task sets, F(3, 332) = 77.88, p < .001. For error rate, skill/skill task sets showed the highest rate, while error rates for the cognitive primary task sets (cognitive/cognitive and cognitive/skill) were not significantly different, F(1, 42) = 41.18, p < .001; F(1, 42) = 3.56, p = .0661. Interruption frequency also negatively affected task performance, F(1, 412) = 89.88, p < .001, although the quantitative performance of skill tasks did not differ significantly across interruption frequencies, F(1, 187) = 3.78, p = .0534. In short, interruptions add more completion time to cognitive tasks and produce more errors in skill tasks, and pairs of primary and interrupting tasks of similar type were more susceptible to interruption effects. Thus, based on the task composition of a work process, the differing effects of interruptions can be estimated; memory load and the similarity between primary and interrupting tasks were considered the main factors.

6.
For an industrial application in the chemical industry, we were confronted with the planning of experiments in which the human intervention of a chemist is required to start and terminate each experiment. This gives rise to a new type of scheduling problem: finding schedules with time periods during which tasks can neither start nor finish. We consider in this paper the natural case of small periods, where the duration of each period is smaller than any processing time. This assumption corresponds to our chemical experiments, which last several days, whereas the operator unavailability periods are typically single days or weekends. These problems are analyzed on a single machine with the makespan as criterion.
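Under the small-period assumption, the feasibility rule for a single start time is simple: a job may span an unavailability period entirely, but may neither start nor finish strictly inside one. A minimal sketch of that rule (not the paper's full makespan algorithm):

```python
def earliest_feasible_start(t, p, periods):
    """Earliest start time >= t for a job of length p such that neither the
    start nor the completion falls strictly inside an operator-unavailability
    period (b, e). Periods are shorter than any processing time, so a job
    may span a period entirely."""
    moved = True
    while moved:
        moved = False
        for b, e in periods:
            if b < t < e:                 # cannot start inside a period
                t, moved = e, True
            elif b < t + p < e:           # cannot finish inside a period
                t, moved = e - p, True    # finish exactly at the period's end
    return t

earliest_feasible_start(0.0, 2.5, [(2.0, 3.0)])   # -> 0.5 (finish at 3.0)
```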

7.
In some hard real-time systems, relative timing constraints may be imposed on task executions in addition to the release time and deadline constraints. Relative timing constraints such as separation or relative deadline constraints may be given between start or finish times of tasks (Gerber et al., 1995; Han and Lin, 1989; Han et al., 1992; Han and Lin, 1992; Han et al., 1996). One approach in real-time scheduling is to find a total order on a set of N tasks in a scheduling window and cyclically use this order at run time to execute tasks. However, in the presence of relative timing constraints, if the task execution times are nondeterministic with defined lower and upper bounds, it is not always possible to statically assign task start times at pre-runtime for a given task ordering (Gerber et al., 1995). We develop a technique called dynamic cyclic dispatching as an extension of the parametric dispatching mechanism in (Gerber et al., 1995). An ordered set of N tasks is assumed to be given in a scheduling window, and this schedule (ordering) is cyclically repeated at runtime in consecutive scheduling windows. Relative timing constraints between tasks may be defined across scheduling window boundaries as well as within one scheduling window. A task set is defined to be dispatchable if there exists any way in which the tasks can be dispatched with all their timing constraints satisfied. An off-line algorithm is presented to check the dispatchability of a task set and to obtain parametric lower and upper bound functions for task start times if the task set is dispatchable. These parametric bound functions are evaluated at runtime to obtain a valid time interval during which a task can be started. The complexity of this off-line component is shown to be O(n²N³), where n is the number of tasks in a scheduling window that have relative timing constraints with tasks in the next scheduling window. An online algorithm can evaluate these bounds in O(N) time. Unlike static approaches, which assign fixed start times to tasks in the scheduling window, our approach allows us to flexibly manage slack times at runtime without sacrificing the dispatchability of tasks. A wider class of relative timing constraints can also be imposed on the task set compared to traditional approaches.
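The runtime side of this scheme is lightweight: the off-line algorithm produces, for each task, parametric lower- and upper-bound functions of the already-observed start times, and the dispatcher merely evaluates them. A minimal sketch of that online step, with hypothetical bound functions standing in for the off-line output:

```python
def start_window(bounds, history):
    """Online step of parametric dispatching: evaluate the precomputed
    lower/upper bound functions for a task's start time against the actual
    start times observed so far, yielding the valid interval in which the
    task may be started. Runs in O(N) per task for linear bound functions."""
    lo_fn, hi_fn = bounds
    lo, hi = lo_fn(history), hi_fn(history)
    if lo > hi:
        raise RuntimeError("no feasible start time from this state")
    return lo, hi

# Hypothetical constraint: task 2 must start 3 to 10 time units after task 1.
bounds_t2 = (lambda h: h["s1"] + 3.0, lambda h: h["s1"] + 10.0)
print(start_window(bounds_t2, {"s1": 5.0}))   # -> (8.0, 15.0)
```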

8.
Since a static work distribution does not allow for satisfactory speed-ups of parallel irregular algorithms, there is a need for a dynamic distribution of work and data that can be adapted to the runtime behavior of the algorithm. Task pools are data structures which can distribute tasks dynamically to different processors, where each task specifies computations to be performed and provides the data for these computations. This paper discusses the characteristics of task-based algorithms and describes the implementation of selected types of task pools for shared-memory multiprocessors. Several task pools have been implemented in C with POSIX threads and in Java. The task pools differ in the data structures used to store the tasks, the mechanism to achieve load balance, and the memory manager used to store the tasks. Runtime experiments have been performed on three different shared-memory systems using a synthetic algorithm, the hierarchical radiosity method, and a volume rendering algorithm. Copyright © 2004 John Wiley & Sons, Ltd.
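A central-queue task pool, the simplest of the variants such a study compares, can be sketched as follows (in Python with threads for brevity, rather than the paper's C/POSIX threads or Java; distributed queues, work stealing, and custom memory managers are beyond this sketch):

```python
import queue
import threading

class TaskPool:
    """Minimal central task pool: one shared FIFO queue from which workers
    pull tasks dynamically, so load balance adapts to irregular task costs.
    Running tasks may enqueue further tasks."""
    def __init__(self, n_workers):
        self.q = queue.Queue()
        self.threads = [threading.Thread(target=self._worker)
                        for _ in range(n_workers)]

    def put(self, fn, *args):
        self.q.put((fn, args))

    def _worker(self):
        while True:
            item = self.q.get()
            if item is None:              # poison pill terminates a worker
                break
            fn, args = item
            fn(*args)
            self.q.task_done()

    def run(self):
        for t in self.threads:
            t.start()
        self.q.join()                     # wait until all tasks are done
        for _ in self.threads:
            self.q.put(None)
        for t in self.threads:
            t.join()

pool = TaskPool(n_workers=4)
for i in range(10):
    pool.put(print, "task", i)
pool.run()
```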

9.
Time-series data is a common target for visual analytics, as it appears in a wide range of application domains. Typical tasks in analyzing time-series data include identifying cyclic behavior, outliers, trends, and periods of time that share distinctive shape characteristics. Many methods for visualizing time-series data exist, generally mapping the data values to positions or colors. While each can be used to perform a subset of the above tasks, none to date is a complete solution. In this paper we present a novel approach to time-series data visualization: creating multivariate data records out of short subsequences of the data and then using multivariate visualization methods to display and explore the data in the resulting shape space. We borrow ideas from text analysis, where the use of N-grams is a common approach to decomposing and processing unstructured text. By mapping each temporal N-gram to a glyph and then positioning the glyphs via PCA (essentially a projection in shape space), many different kinds of patterns in the sequence can be readily identified. Interactive selection via brushing, in conjunction with linking to other visualizations, provides a wide range of tools for exploring the data. We validate the usefulness of this approach with examples from several application domains and tasks, comparing our methods with traditional time-series visualizations.
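The core construction is compact enough to sketch: slide a window of length N over the series, z-normalize each subsequence so that glyphs compare shape rather than level, and project with PCA. A minimal version follows; the paper's glyph design and brushing interactions are not reproduced here.

```python
import numpy as np

def ngram_shape_space(series, n=8, step=1):
    """Decompose a time series into overlapping length-n subsequences
    ("temporal N-grams"), z-normalize each so that shape rather than level
    is compared, and project to 2-D with PCA for glyph placement."""
    X = np.array([series[i:i + n]
                  for i in range(0, len(series) - n + 1, step)], dtype=float)
    X = (X - X.mean(axis=1, keepdims=True)) / (X.std(axis=1, keepdims=True) + 1e-9)
    Xc = X - X.mean(axis=0)                    # center for PCA
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T                       # one 2-D glyph position per N-gram

t = np.linspace(0, 8 * np.pi, 400)
coords = ngram_shape_space(np.sin(t) + 0.1 * np.random.randn(400), n=16)
```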

10.
A new kind of robotic mechanism is proposed for inspection tasks in complex setups of industrial plants. We propose a multi-articulated, snake-like mobile robot with a body consisting of repeating modules, capable of both moving efficiently and reaching points inside complicated or unstructured areas where human personnel cannot reach or work properly. An analysis of the basic design, along with most of the component specifications, is presented. This mechanical system is subject to nonholonomic constraints. The kinematic model for planar motion of the mobile robot is derived by taking these constraints into consideration. The nonholonomic motion planning problem is partially solved by converting the multiple-input system to a multiple-chain, single-generator chained form via state feedback and a coordinate transformation. Stabilization and trajectory tracking issues are also considered, and we treat the general case of the n-trailer (or n-module) robotic snake. Simulation results are provided for various test cases. ©1999 John Wiley & Sons, Inc.

11.
International Journal of Computer Mathematics, 2012, 89(11): 2387-2397
Grids and multicluster computing environments are becoming increasingly popular for both scientific and commercial applications. Process scheduling remains a central issue that must be resolved effectively in order to exploit the full potential a grid or multicluster environment can offer. We use a directed acyclic graph (DAG) to model a process or an application, where the nodes of the DAG represent the tasks of the process. Prior to the execution of a process in a multicluster environment, the tasks must be mapped onto the clusters. In this article, it is shown that the algorithm developed by He et al. [L. He, S.A. Jarvis, D.P. Spooner, D. Bacigalupo, G. Tan, and G.R. Nudd, Mapping DAG-based applications to multiclusters with background workload, Proceedings of the 2005 IEEE International Symposium on Cluster Computing and the Grid, Cardiff, 2005, pp. 855-862] for the multicluster DAG mapping problem can be significantly improved by incorporating a task duplication strategy. The proposed process scheduling algorithm has a time complexity of O(|V|²(r+d+1)), where |V| is the number of tasks, r the number of clusters, and d the maximum in-degree of tasks.

12.
Navigation is the process by which people control their movement in virtual environments and is a core functional requirement for all virtual environment (VE) applications. Users require the ability to move, controlling orientation, direction of movement, and speed, in order to achieve a particular goal within a VE. Navigation is rarely the end point in itself (which is typically interaction with the visual representations of data), but applications often place a high demand on navigation skills, which in turn means that a high level of support for navigation is required from the application. On desktop systems, navigation in non-immersive systems is usually supported through the usual hardware devices of mouse and keyboard. Previous work by the authors shows that many users experience frustration when trying to perform even simple navigation tasks: users complain about getting lost, becoming disorientated, and finding the interface 'difficult to use'. In this paper we report on work in progress in exploiting natural language processing (NLP) technology to support navigation in non-immersive virtual environments. A multi-modal system has been developed which supports a range of high-level (spoken) navigation commands, and indications are that spoken dialogue interaction is an effective alternative to mouse and keyboard interaction for many tasks. We conclude that multi-modal interaction, combining technologies such as NLP with mouse and keyboard, may offer the most effective interaction with VEs, and we identify a number of areas where further work is necessary. ACM CSS: I.3.6 Computer Graphics Methodology and Techniques—Interaction Techniques; I.3.7 Three-Dimensional Graphics and Realism—Virtual Reality; I.2.7 Natural Language Processing—Speech Recognition and Synthesis
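To make the flavor of such a command set concrete, here is a toy interpreter mapping recognized utterances to motion actions. The vocabulary, action names, and structure are illustrative assumptions, not the system described in the paper, which sits on top of a full speech recognizer and dialogue manager.

```python
import re

# Toy command vocabulary: regex pattern -> (action, parameter).
COMMANDS = {
    r"(move|go) (forward|ahead)": ("translate", +1),
    r"(move|go) back(ward)?":     ("translate", -1),
    r"turn left":                 ("rotate", +90),
    r"turn right":                ("rotate", -90),
    r"stop":                      ("halt", 0),
}

def interpret(utterance):
    """Map a recognized utterance to a motion action, or report failure."""
    for pattern, action in COMMANDS.items():
        if re.search(pattern, utterance.lower()):
            return action
    return ("unknown", 0)

interpret("please go forward")   # -> ("translate", 1)
```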

13.
In this paper, we present a framework in which a learning rule can be optimized within a parametric learning rule space. We define what we call parametric learning rules and present a theoretical study of their generalization properties when estimated from a set of learning tasks and tested over another set of tasks. We corroborate the results of this study with practical experiments.

14.
Recent robotics efforts have automated simple, repetitive tasks to increase execution speed and lessen an operator's cognitive load, allowing them to focus on higher-level objectives. However, an autonomous system will eventually encounter something unexpected, and if this exceeds the tolerance of automated solutions, there must be a way to fall back to teleoperation. Our solution is a largely autonomous system with the ability to determine when it is necessary to ask a human operator for guidance. We call this approach human-guided autonomy. Our design emphasizes human-on-the-loop control, where an operator expresses a desired high-level goal for which the reasoning component assembles an appropriate chain of subtasks. We introduce our work in the context of the DARPA Robotics Challenge (DRC) Finals. We describe the software architecture Team TROOPER developed and used to control an Atlas humanoid robot. We employ perception, planning, and control automation for the execution of subtasks. If subtasks fail, or if changing environmental conditions invalidate the planned subtasks, the system automatically generates a new task chain. The operator is able to intervene at any stage of execution, providing input and adjustment to any control layer, which enables operator involvement to increase as confidence in automation decreases. We present our performance at the DRC Finals and a discussion of lessons learned.

15.
Fast multipole methods (FMMs) have linear complexity, are compute bound, and require very little synchronization, which makes them a favorable algorithm for next-generation supercomputers. Their most common application is to accelerate N-body problems, but they can also be used to solve boundary integral equations. When the particle distribution is irregular and the tree structure is adaptive, load balancing becomes a non-trivial question. A common strategy for load balancing FMMs is to use the work load from the previous step as weights to statically repartition the next step. The authors discuss in this paper another approach, based on data-driven execution, to efficiently tackle this challenging load balancing problem. The core idea consists of breaking the most time-consuming stages of the FMM into smaller tasks. The algorithm can then be represented as a directed acyclic graph where nodes represent tasks and edges represent dependencies among them. The execution of the algorithm is performed by asynchronously scheduling the tasks using the QUARK (QUeueing And Runtime for Kernels) runtime environment, in a way such that data dependencies are not violated for numerical correctness purposes. This asynchronous scheduling results in an out-of-order execution. The performance results of the data-driven FMM execution outperform the previous strategy and show linear speedup on a quad-socket quad-core Intel Xeon system. Copyright © 2013 John Wiley & Sons, Ltd.
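The scheduling idea, independent of the FMM specifics, is to submit a task the moment its dependencies complete and accept whatever completion order results. A small sketch using Python's ThreadPoolExecutor (the paper uses the QUARK runtime; task granularity and data movement are not modeled here):

```python
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def run_dag(tasks, deps, workers=4):
    """Data-driven execution: run each task as soon as all of its
    prerequisites have finished; completion is out-of-order by design.
    `tasks` maps name -> callable, `deps` maps name -> prerequisite names.
    Assumes the dependency graph is acyclic."""
    done, submitted = set(), set()
    pending = {}                               # future -> task name
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while len(done) < len(tasks):
            for name in tasks:                 # submit every newly ready task
                if name not in submitted and set(deps.get(name, ())) <= done:
                    pending[pool.submit(tasks[name])] = name
                    submitted.add(name)
            finished, _ = wait(pending, return_when=FIRST_COMPLETED)
            for fut in finished:
                fut.result()                   # propagate task exceptions
                done.add(pending.pop(fut))
    return done

# Hypothetical diamond-shaped graph: b and c run concurrently after a.
run_dag({n: (lambda n=n: print("ran", n)) for n in "abcd"},
        {"b": ["a"], "c": ["a"], "d": ["b", "c"]})
```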

16.
Many tasks in geometry processing are modelled as variational problems solved numerically using the finite element method. For solid shapes, this requires a volumetric discretization, such as a boundary-conforming tetrahedral mesh. Unfortunately, tetrahedral meshing remains an open challenge, and existing methods either struggle to conform to complex boundary surfaces or require manual intervention to prevent failure. Rather than create a single volumetric mesh for the entire shape, we advocate for solid geometry processing on deconstructed domains, where a large and complex shape is composed of overlapping solid subdomains. As each smaller and simpler part is now easier to tetrahedralize, the question becomes how to account for overlaps during problem modelling and how to couple solutions on each subdomain together algebraically. We explore how and why previous coupling methods fail, and propose a method that couples solid domains only along their boundary surfaces. We demonstrate the superiority of this method through empirical convergence tests and qualitative applications to solid geometry processing on a variety of popular second-order and fourth-order partial differential equations.

17.
Several classes of scientific and commercial applications require the execution of a large number of independent tasks. One highly successful and low-cost mechanism for acquiring the necessary computing power for these applications is the 'public-resource computing', or 'desktop Grid', paradigm, which exploits the computational power of private computers. So far, this paradigm has not been applied to data mining applications for two main reasons. First, it is not straightforward to decompose a data mining algorithm into truly independent sub-tasks. Second, the large volume of the involved data makes it difficult to handle the communication costs of a parallel paradigm. This paper introduces a general framework for distributed data mining applications called Mining@home. In particular, we focus on one of the main data mining problems: the extraction of closed frequent itemsets from transactional databases. We show that it is possible to decompose this problem into independent tasks, which however need to share a large volume of data. We thus introduce a data-intensive computing network, which adopts a P2P topology based on super peers with caching capabilities, aiming to support the dissemination of large amounts of information. Finally, we evaluate the execution of a pattern extraction task on such a network. Copyright © 2009 John Wiley & Sons, Ltd.

18.
Consider a distributed system consisting of n computers connected by a number of identical broadcast channels. All computers may receive messages from all channels. We distinguish between two kinds of systems: those in which the computers may send on any channel (dynamic allocation) and those where the send port of each computer is statically allocated to a particular channel. A distributed task (application) is executed on the distributed system; a task performs execution as well as communication between its subtasks. We compare the completion time of the communication for such a task under dynamic allocation with its completion time under static allocation. Some distributed tasks benefit greatly from allowing dynamic allocation, whereas others work fine with static allocation. In this paper we define optimal upper and lower bounds on the gain (or loss) of using dynamic allocation compared to static allocation. Our results show that, for some tasks, the gain of permitting dynamic allocation is substantial: there are tasks that will complete 1.89 times faster using dynamic allocation than with the best possible static allocation, but there are no tasks with a higher such ratio.

19.
The most efficient way to parallelize computation is to build and evaluate the task graph constrained only by the data dependencies between the tasks. Both Intel's C++ Concurrent Collections (CnC) and Threading Building Blocks (TBB) libraries allow such task-based parallel programming. CnC also adopts the macro data flow model by providing only single-assignment data objects in its global data space. Although CnC makes parallel programming easier, by specifying data flow dependencies only through single-assignment data objects, its macro data flow model incurs overhead. Intel's C++ CnC library is implemented on top of its C++ TBB library, so we can measure the overhead of CnC by comparing its performance with that of TBB. In this paper, we analyze all three types of data dependencies in the tiled in-place Gauss-Jordan elimination algorithm for the first time. We implement the task-based parallel tiled Gauss-Jordan algorithm in TBB using the data dependencies analyzed and compare its performance with that of the CnC implementation. We find that the overhead of CnC over TBB is only 12%-15% of the TBB time, and CnC can deliver as much as 87%-89% of the TBB performance for Gauss-Jordan elimination using the optimal tile size. Copyright © 2012 John Wiley & Sons, Ltd.
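For reference, a sketch of the untiled in-place Gauss-Jordan elimination that such benchmarks tile: the tiled formulation applies the same updates tile by tile, and the dependencies among the pivot tile, its row and column, and the trailing tiles are what CnC and TBB must respect. This version omits pivoting, so it assumes nonzero pivots; the paper's tiled layout differs.

```python
import numpy as np

def gauss_jordan_inplace(A):
    """Overwrite the square matrix A with its inverse using in-place
    Gauss-Jordan elimination (no pivoting: assumes nonzero pivots)."""
    n = A.shape[0]
    for k in range(n):
        piv = A[k, k]
        A[k, k] = 1.0
        A[k, :] /= piv                    # scale the pivot row
        for i in range(n):
            if i != k:
                f = A[i, k]
                A[i, k] = 0.0
                A[i, :] -= f * A[k, :]    # eliminate column k from row i
    return A

A = np.array([[4.0, 7.0], [2.0, 6.0]])
Ainv = gauss_jordan_inplace(A.copy())     # np.allclose(A @ Ainv, np.eye(2))
```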

20.
The objective of this study was to establish models to predict maximum endurance time (MET) for one-handed carrying tasks. A one-handed carrying experiment, under three loads and two walking speeds, was performed. Each participant carried a dumbbell at the side, arm straight down, using either the dominant or nondominant hand, and the MET values were collected. Both power and exponential functions were adopted to establish simplified MET models in terms of the relative force (fMVC) applied. The simplified models were further expanded to full models that also include body weight and walking speed. It was found that the power models provided better estimates than the exponential models when fMVC was below approximately 0.17, for both the simplified and full models. Considering the effects of walking speed, the full models are recommended when fMVC is 0.3 or lower.
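Both simplified model forms can be fit directly from (fMVC, MET) observations. A sketch with SciPy, using hypothetical data and common parameterizations of the power and exponential endurance models; the paper's fitted coefficients and exact forms are not reproduced here.

```python
import numpy as np
from scipy.optimize import curve_fit

# Common forms of endurance-time models; a and b are fit from data.
def power_model(fmvc, a, b):
    return a * fmvc ** (-b)            # MET = a * fMVC^(-b)

def exp_model(fmvc, a, b):
    return a * np.exp(-b * fmvc)       # MET = a * exp(-b * fMVC)

# Hypothetical observations (relative force, endurance time in minutes).
fmvc = np.array([0.10, 0.15, 0.20, 0.25, 0.30])
met  = np.array([38.0, 21.0, 14.0, 10.0,  7.5])

(pa, pb), _ = curve_fit(power_model, fmvc, met, p0=(1.0, 1.0))
(ea, eb), _ = curve_fit(exp_model,  fmvc, met, p0=(60.0, 5.0))
```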

