Similar Documents
20 similar documents found (search time: 31 ms)
1.
Security is becoming an increasingly important issue in the design of multimedia applications, which are widely used in industry and academia. However, existing scheduling schemes for real-time multimedia service in heterogeneous networks generally do not take security requirements into account when making allocation and control decisions. In this paper, we develop and evaluate a security-critical multimedia scheduling scheme in the framework of heterogeneous networks. First, we construct a general media distortion model according to the parameters observed in each network as well as each application's characteristics. Next, we exploit a scalable graph-based authentication method that achieves a good trade-off between flexibility and efficiency. Furthermore, a security-critical scheduling scheme is proposed that takes into account applications' timing and security requirements in addition to precedence constraints. The proposed scheme heuristically finds resource allocations that maximize security quality and the probability of meeting deadlines for all multimedia applications running on heterogeneous networks. Extensive simulations demonstrate the effectiveness and feasibility of the proposed scheme.
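The abstract does not give the scheduling heuristic itself; the sketch below is only a minimal illustration of the kind of greedy, security-aware allocation it describes, assuming hypothetical per-network fields (security_level, deadline_meet_prob) and a weighted objective.

```python
from dataclasses import dataclass

@dataclass
class Network:
    name: str
    security_level: float      # normalized 0..1 (hypothetical field)
    deadline_meet_prob: float  # estimated P(meet deadline), 0..1
    free_slots: int            # remaining capacity on this network

def schedule(apps, networks, w_sec=0.5, w_time=0.5):
    """Greedy security-aware allocation: each application (in priority order)
    is placed on the feasible network that maximizes a weighted combination of
    security quality and deadline-meeting probability."""
    allocation = {}
    for app in apps:                       # apps assumed pre-sorted by priority/precedence
        candidates = [n for n in networks if n.free_slots > 0]
        if not candidates:
            allocation[app] = None         # reject: no capacity left
            continue
        best = max(candidates,
                   key=lambda n: w_sec * n.security_level
                               + w_time * n.deadline_meet_prob)
        best.free_slots -= 1
        allocation[app] = best.name
    return allocation

# toy usage
nets = [Network("WLAN", 0.6, 0.9, 2), Network("Cellular", 0.9, 0.7, 3)]
print(schedule(["video_call", "surveillance_feed"], nets))
```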

2.
Trends in big data analytics
One of the major applications of future generation parallel and distributed systems is in big-data analytics. Data repositories for such applications currently exceed exabytes and are rapidly increasing in size. Beyond their sheer magnitude, these datasets and associated applications’ considerations pose significant challenges for method and software development. Datasets are often distributed and their size and privacy considerations warrant distributed techniques. Data often resides on platforms with widely varying computational and network capabilities. Considerations of fault-tolerance, security, and access control are critical in many applications (Dean and Ghemawat, 2004; Apache hadoop). Analysis tasks often have hard deadlines, and data quality is a major concern in yet other applications. For most emerging applications, data-driven models and methods, capable of operating at scale, are as-yet unknown. Even when known methods can be scaled, validation of results is a major issue. Characteristics of hardware platforms and the software stack fundamentally impact data analytics. In this article, we provide an overview of the state-of-the-art and focus on emerging trends to highlight the hardware, software, and application landscape of big-data analytics.  相似文献   

3.
To get more results or greater accuracy, computational scientists execute their applications on distributed computing platforms such as clusters, grids, and clouds. These platforms differ in terms of hardware and software resources as well as locality: some span multiple sites and multiple administrative domains, whereas others are limited to a single site/domain. As a consequence, in order to scale their applications up, scientists have to manage technical details for each target platform. From our point of view, this complexity should be hidden from the scientists, who, in most cases, would prefer to focus on their research rather than spend time dealing with platform configuration concerns. In this article, we advocate a system management framework that aims to automatically set up the whole run-time environment according to the applications' needs. The main difference with regard to usual approaches is that they generally focus only on the software layer, whereas we address both the hardware and the software expectations through a single system. For each application, scientists describe their requirements through the definition of a virtual platform (VP) and a virtual system environment (VSE). Relying on the VP/VSE definitions, the framework is in charge of (i) the configuration of the physical infrastructure to satisfy the VP requirements, (ii) the set-up of the VP, and (iii) the customization of the execution environment (VSE) upon the former VP. We propose a new formalism that the system can rely upon to successfully perform each of these three steps without burdening the user with the specifics of configuring the physical resources and system management tools. This formalism leverages Goldberg's theory of recursive virtual machines (Goldberg, 1973 [6]) by introducing new concepts based on system virtualization (identity, partitioning, aggregation) and emulation (simple, abstraction). This enables the definition of complex VP/VSE configurations without making assumptions about the hardware and software resources. For each requirement, the system executes the corresponding operation with the appropriate management tool. As a proof of concept, we implemented a first prototype that currently interacts with several system management tools (e.g., OSCAR, the Grid'5000 toolkit, and XtreemOS) and can easily be extended to integrate new resource brokers or cloud systems such as Nimbus, OpenNebula, or Eucalyptus.
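As an illustration only (the paper's formalism is not given in the abstract), the sketch below shows how a VP/VSE requirement description might be modeled and dispatched to a management tool; all class names, fields, and the tool registry are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class VirtualPlatform:
    """Hardware-level expectations (VP): what the physical infrastructure must provide."""
    nodes: int
    cores_per_node: int
    network: str = "ethernet"          # e.g. "ethernet", "infiniband"
    composition: str = "partitioning"  # identity | partitioning | aggregation (cf. the paper's concepts)

@dataclass
class VirtualSystemEnvironment:
    """Software-level expectations (VSE) layered on top of the VP."""
    os_image: str
    packages: list = field(default_factory=list)

# hypothetical mapping from requirement kind to a concrete management tool
TOOL_REGISTRY = {
    "partitioning": "Grid5000-toolkit",
    "aggregation":  "OSCAR",
    "identity":     "XtreemOS",
}

def deploy(vp: VirtualPlatform, vse: VirtualSystemEnvironment):
    # (i) configure the physical infrastructure, (ii) set up the VP, (iii) customize the VSE
    tool = TOOL_REGISTRY[vp.composition]
    print(f"[{tool}] reserving {vp.nodes}x{vp.cores_per_node} cores over {vp.network}")
    print(f"[{tool}] installing image '{vse.os_image}' with packages {vse.packages}")

deploy(VirtualPlatform(nodes=8, cores_per_node=16),
       VirtualSystemEnvironment(os_image="debian-11", packages=["openmpi"]))
```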

4.
Artificial immune systems (AIS) are computational systems inspired by the principles and processes of the vertebrate immune system. AIS-based algorithms typically mimic the human immune system's characteristics of learning and adaptability to solve complicated problems. Here, an artificial immune multi-objective optimization framework is formulated and applied to synthetic aperture radar (SAR) image segmentation. The main innovations of the framework are as follows: (1) an efficient and robust immune multi-objective optimization algorithm is proposed, featuring adaptive rank clones and diversity maintenance via a K-nearest-neighbor list; (2) two conflicting fuzzy clustering validity indices are incorporated into the framework and optimized simultaneously; and (3) an effective fused feature set for texture representation and discrimination is constructed, which combines the Gabor filter's ability to precisely extract texture features in the low- and mid-frequency components with the gray level co-occurrence probability's (GLCP) ability to measure information in the high-frequency components. Two experiments with synthetic texture images and SAR images are implemented to evaluate the performance of the proposed framework in comparison with five other clustering algorithms: fuzzy C-means (FCM), single-objective genetic algorithm (SOGA), self-organizing map (SOM), wavelet-domain hidden Markov models (HMTseg), and spectral clustering ensemble (SCE). Experimental results show that the proposed framework segments SAR images better than the five other algorithms and is insensitive to speckle noise.
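The fused texture descriptor is only named in the abstract; the following is a rough sketch (not the authors' exact feature set) of combining Gabor filter responses with a GLCP-style co-occurrence statistic, using plain numpy/scipy and made-up filter parameters.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(freq, theta, sigma=3.0, size=15):
    """Real part of a Gabor kernel (captures low/mid-frequency texture)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def glcm_contrast(patch, levels=8, dx=1, dy=0):
    """Contrast of the gray-level co-occurrence matrix, a GLCP-style
    statistic that is sensitive to high-frequency texture content."""
    q = (patch.astype(float) / (patch.max() + 1e-9) * (levels - 1)).astype(int)
    glcm = np.zeros((levels, levels))
    h, w = q.shape
    for i in range(h - dy):
        for j in range(w - dx):
            glcm[q[i, j], q[i + dy, j + dx]] += 1
    glcm /= glcm.sum()
    ii, jj = np.indices((levels, levels))
    return float(((ii - jj) ** 2 * glcm).sum())

def fused_features(patch):
    """Concatenate mean Gabor magnitudes at four orientations with a GLCM statistic."""
    patch = patch.astype(float)
    gabor = [np.abs(convolve(patch, gabor_kernel(0.2, t))).mean()
             for t in (0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)]
    return np.array(gabor + [glcm_contrast(patch)])

print(fused_features(np.random.randint(0, 255, (32, 32))))
```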

5.
A new approach for supporting reactive capability is described in the context of an advanced object-oriented database system called ADOME-II. Besides having a rich set of pre-defined composite event expressions and a well-defined execution model, ADOME-II supports an extensible approach to reactive processing so as to be able to gracefully accommodate dynamic applications’ requirements. In this approach, production rules combined with methods are used as a unifying mechanism to process rules, to enable incremental detection of composite events, and to allow new composite event expressions to be introduced into the system declaratively. This allows the definition of new production rules each time an extension of the model takes place. Methods of supporting new composite event expressions are described, and comparisons with other relevant approaches are also conducted. A prototype of ADOME-II has been constructed, which has as its implementation base an ordinary (passive) OODBMS and a production rule base system.  相似文献   
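The abstract describes incremental detection of composite events through production-rule-like handlers; the toy sketch below (not ADOME-II's actual API) shows the idea for a sequence event "E1 then E2".

```python
class SequenceDetector:
    """Incrementally detects the composite event 'first then second': each
    primitive event advances a small amount of state instead of re-scanning
    the whole event history (the incremental-detection idea)."""
    def __init__(self, first, second, action):
        self.first, self.second, self.action = first, second, action
        self.pending = []                 # occurrences of `first` awaiting a `second`

    def on_event(self, name, payload):
        if name == self.first:
            self.pending.append(payload)
        elif name == self.second and self.pending:
            start = self.pending.pop(0)
            self.action(start, payload)   # fire the rule's action part

# usage: rule "when a deposit is followed by a withdrawal, log the pair"
rule = SequenceDetector("deposit", "withdraw",
                        lambda a, b: print("composite event fired:", a, b))
for ev in [("deposit", 100), ("balance_check", None), ("withdraw", 40)]:
    rule.on_event(*ev)
```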

6.
This paper addresses the problem of selecting the best connectivity alternative for the user in a generic Heterogeneous Wireless Multi-hop Network (HWMN), integrating distinct wireless technologies and multi-mode cooperating stations. We propose a Connectivity opportunity Selection Algorithm (CSA) that uses network state information and mobility profile information to select the best connectivity based on the applications’ requirements. We provide a simulation-based performance evaluation of the CSA and compare it with a greedy network selection scheme. Furthermore, we propose an extended reference model that allows the integration of the concept of connectivity opportunity and our proposed CSA with the framework being defined by the upcoming IEEE 802.21 standard for Media Independent Handover services.  相似文献   
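As a purely illustrative sketch (the CSA's actual inputs, metrics, and weights are not given in the abstract), selecting a connectivity opportunity from network state, mobility profile, and application requirements might look like this:

```python
def select_connectivity(options, app_req, mobility):
    """Pick the best connectivity opportunity in a heterogeneous wireless
    multi-hop network: filter by application requirements, then score by
    network state and how long the user is expected to stay in coverage."""
    feasible = [o for o in options
                if o["bandwidth"] >= app_req["min_bandwidth"]
                and o["delay"] <= app_req["max_delay"]]
    if not feasible:
        return None
    def score(o):
        residence = mobility["expected_residence_s"].get(o["name"], 0)
        return o["bandwidth"] / (1 + o["load"]) + 0.1 * residence
    return max(feasible, key=score)["name"]

options = [
    {"name": "wifi_hop2", "bandwidth": 20.0, "delay": 30, "load": 0.7},
    {"name": "cellular",  "bandwidth": 5.0,  "delay": 60, "load": 0.2},
]
print(select_connectivity(options,
                          {"min_bandwidth": 2.0, "max_delay": 80},
                          {"expected_residence_s": {"wifi_hop2": 15, "cellular": 600}}))
```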

7.
Over the last few years, adaptability has become an essential characteristic of grid applications because it allows them to cope with the dynamic and changing nature of grid systems. This adaptive capability is applied within different grid processes such as resource monitoring, resource discovery, or resource selection. In this regard, the present approach provides a self-adaptive ability to grid applications, focusing on enhancing the resource selection process. This contribution proposes an Efficient Resources Selection model to determine the resources that best fit the application requirements. Hence, the model guides applications during their execution without modifying or controlling grid resources. In the evaluation phase, the experiments were carried out on a real European grid infrastructure. Finally, the results show that the model not only provides self-adaptive ability but also reduces the applications' execution time and improves the rate of successfully completed tasks.

8.
The squeeze film behavior of MEMS torsion mirrors is modeled, analyzed and discussed. Effects of gas rarefaction (a first-order slip-flow model with non-symmetric accommodation coefficients, ACs) and surface roughness are considered simultaneously by using the average Reynolds type equation (ARTE). Based on operating conditions with small variations in film thickness and pressure, the ARTE is linearized. A coordinate transformation, stretching or contracting the axes according to the roughness flow factors, is proposed to transform the linearized ARTE into a diffusion-type modal equation. The dynamic coefficients (stiffness and damping coefficients) are then derived and expressed in analytical form. The results show that the tilting frequency (or squeeze number Γ0), the roughness parameters (Peklenik numbers γ, standard deviation of composite roughness σ) and the gas rarefaction parameters (inverse Knudsen number D, ACs) are all important parameters in analyzing the dynamic performance of MEMS torsion mirrors.

9.
张春元, 朱清新 (Zhang Chunyuan, Zhu Qingxin). 《控制与决策》 (Control and Decision), 2015, 30(12): 2161-2167

To address the slow convergence and poor convergence quality of traditional Actor-Critic (AC) methods on sequential decision problems in continuous spaces, an AC algorithm framework based on symmetric perturbation sampling is proposed. First, the framework adopts a Gaussian distribution as the policy distribution and, at each time step, symmetrically perturbs the current action mean to generate two actions that interact with the environment in parallel. Then, the agent's behavioral action is selected according to the larger of the two temporal-difference (TD) errors, and the value-function parameters are updated. Finally, the policy parameters are updated using the average of the two regular gradients or the incremental natural gradient. Theoretical analysis and simulation results show that the proposed framework achieves good convergence and computational efficiency.

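A minimal sketch of the symmetric-perturbation actor-critic described above, on a 1-D continuous-action problem; the environment, features, step sizes, and the stability clipping are all placeholders, and only the average-regular-gradient variant is shown (not the incremental natural gradient).

```python
import numpy as np

np.random.seed(0)
theta, w = 0.0, np.zeros(2)        # policy-mean parameter, linear critic weights
sigma, gamma, a_lr, c_lr = 0.5, 0.95, 0.01, 0.02

def features(s):                   # toy features for the linear value function
    return np.array([1.0, s])

def env_step(s, a):                # placeholder 1-D environment: keep the state near 0
    s_next = float(np.clip(0.9 * s + a + np.random.normal(0, 0.05), -3.0, 3.0))
    return s_next, -s_next ** 2

s = 1.0
for t in range(3000):
    mu = theta * s                           # Gaussian policy: a ~ N(mu, sigma^2)
    d = abs(np.random.normal(0.0, sigma))
    a_plus, a_minus = mu + d, mu - d         # symmetric perturbation of the action mean

    # interact with the environment with both actions "in parallel"
    (s_p, r_p), (s_m, r_m) = env_step(s, a_plus), env_step(s, a_minus)
    v = w @ features(s)
    td_p = np.clip(r_p + gamma * (w @ features(s_p)) - v, -5, 5)   # clipped for toy stability
    td_m = np.clip(r_m + gamma * (w @ features(s_m)) - v, -5, 5)

    # behave with the action having the larger TD error; update the critic along it
    td, s_next = (td_p, s_p) if td_p >= td_m else (td_m, s_m)
    w += c_lr * td * features(s)

    # actor: average of the two regular policy gradients (natural-gradient variant omitted)
    grad_p = td_p * (a_plus - mu) / sigma ** 2 * s   # d log pi(a_plus|s) / d theta
    grad_m = td_m * (a_minus - mu) / sigma ** 2 * s
    theta += a_lr * 0.5 * (grad_p + grad_m)
    s = s_next

print("learned policy-mean parameter theta:", round(float(theta), 3))
```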

10.
Today’s software for laser-based additive manufacturing compensates for the finite dimensions of the laser spot by insetting the contours of a solid part. However, features having smaller dimensions are removed by this operation, which may significantly alter the structure of thin-walled parts. To avoid potential production errors, this work describes in detail an algorithmic framework that makes beam compensation more reliable by computing laser scan paths for thin features. The geometry of the features can be adjusted by the scan paths by means of five intuitive parameters, which are illustrated with examples. Benchmarks show that the scan path generation comes at a reasonable cost without altering the computational complexity of the overall beam compensation framework. The framework was applied to Selective Laser Melting (SLM) to demonstrate that it can significantly improve the robustness of additive manufacturing. Besides robustness, the framework is expected to allow further improvements to the accuracy of additive manufacturing by enabling a geometry-dependent determination of the laser parameters.  相似文献   

11.
Next Generation Networks (NGNs) will comprise different access technologies. We are already seeing the emergence of mobile devices capable of connecting to heterogeneous networks with different capabilities and constraints. In addition, many bandwidth-intensive applications have rather relaxed real-time constraints, allowing for alternative scheduling mechanisms that can take into account user preferences, network characteristics, and future network resource availability to better exploit network heterogeneity. Current approaches either simply react to changes or assume that availability predictions are perfect. In this paper, we propose a scheduling scheme based on stochastic modeling to account for prediction errors. The scheme optimizes overall user utility gain, considering imperfect predictions taken over realistic time intervals while catering for different applications' needs. We use 180 days of real data from many users to demonstrate that the scheme consistently outperforms other non-stochastic and greedy approaches in typical networking environments.
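A toy sketch of the stochastic idea (not the paper's model): predicted future bandwidth is treated as a Gaussian random variable, and a transfer is deferred only if its expected utility under that uncertainty beats transferring now. All numbers and utility values are hypothetical.

```python
import math

def phi(z):  # standard normal CDF via math.erf
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def expected_utility(pred_bw, pred_std, size_mb, deadline_s, util_on_time=1.0, util_late=0.2):
    """Expected utility of deferring a transfer to a predicted connectivity window,
    where the predicted bandwidth (MB/s) has Gaussian error with std pred_std."""
    needed_bw = size_mb / deadline_s                 # bandwidth required to finish on time
    p_on_time = 1.0 - phi((needed_bw - pred_bw) / pred_std)
    return p_on_time * util_on_time + (1.0 - p_on_time) * util_late

# defer to a cheap WiFi window only if its expected utility beats transferring now
now_utility = 0.6          # e.g. on-time over cellular, discounted by cost
defer = expected_utility(pred_bw=2.0, pred_std=1.5, size_mb=50, deadline_s=60)
print("defer" if defer > now_utility else "send now", round(defer, 3))
```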

12.
In recent years, the spectral clustering method has gained attention because of its superior performance. To the best of our knowledge, existing spectral clustering algorithms cannot incrementally update the clustering results given a small change of the data set. However, the capability of incremental updating is essential to some applications such as websphere or blogsphere. Unlike traditional stream data, these applications require incremental algorithms to handle not only insertion/deletion of data points but also similarity changes between existing points. In this paper, we extend standard spectral clustering to such evolving data by introducing the incidence vector/matrix to represent the two kinds of dynamics in the same framework and by incrementally updating the eigen-system. Our incremental algorithm, initialized by a standard spectral clustering, continuously and efficiently updates the eigenvalue system and generates instant cluster labels as the data set evolves. The algorithm is applied to a blog data set. Compared with recomputing the solution by standard spectral clustering, it achieves similar accuracy at much lower computational cost. It can discover not only the stable blog communities but also the evolution of individual multi-topic blogs. The core technique of incrementally updating the eigenvalue system is a general algorithm with a wide range of applications beyond incremental spectral clustering wherever dynamic graphs are involved, which demonstrates the wide applicability of our incremental approach.
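The update rule itself is not given in the abstract; the sketch below uses generic first-order eigen-perturbation (not necessarily the authors' exact derivation) to refresh the leading eigenpairs of a graph Laplacian after a small similarity change between existing points.

```python
import numpy as np

def laplacian(W):
    return np.diag(W.sum(axis=1)) - W

def incremental_eigs(vals, vecs, dL, k):
    """First-order update of the k leading eigenpairs of L after a small
    perturbation dL: dlambda_i = v_i' dL v_i,
    dv_i = sum_{j != i} (v_j' dL v_i) / (lambda_i - lambda_j) * v_j."""
    new_vals, new_vecs = vals.copy(), vecs.copy()
    for i in range(k):
        vi = vecs[:, i]
        new_vals[i] += vi @ dL @ vi
        dv = np.zeros_like(vi)
        for j in range(len(vals)):
            if j != i and abs(vals[i] - vals[j]) > 1e-8:
                dv += (vecs[:, j] @ dL @ vi) / (vals[i] - vals[j]) * vecs[:, j]
        new_vecs[:, i] = vi + dv
        new_vecs[:, i] /= np.linalg.norm(new_vecs[:, i])
    return new_vals, new_vecs

# toy graph: two 3-node communities, then one cross-community similarity changes slightly
W = np.kron(np.eye(2), np.ones((3, 3))) - np.eye(6) + 0.01
L0 = laplacian(W)
vals, vecs = np.linalg.eigh(L0)

dW = np.zeros_like(W); dW[0, 3] = dW[3, 0] = 0.05      # similarity change between existing points
v2, u2 = incremental_eigs(vals, vecs, laplacian(W + dW) - L0, k=3)
print("updated vs. exact 2nd eigenvalue:",
      round(v2[1], 4), round(np.linalg.eigh(laplacian(W + dW))[0][1], 4))
```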

13.
This paper presents the design and development of an object-oriented framework for computational mechanics. The framework addresses some of the major deficiencies of existing computational mechanics software packages by (a) having a sound design based on the state of the art in software engineering, and (b) providing model manipulation features that are common to a large set of computational mechanics problems. The domain-specific features provided by the framework are a geometry sub-system specifically designed for computational mechanics, an interpreted Computational Mechanics Language (CML), a structure for the management of analysis projects, a comprehensive data model, model development, model query, and analysis management. The domain-independent features are a drawing subsystem for data visualization, a database server, a quantity subsystem, a simple GUI, and an online help server. It is demonstrated that the framework can be used to develop applications that: (a) extend or modify important parts of the framework to suit their own needs; (b) use CML for rapid prototyping and for extending the functionality of the framework; (c) significantly ease the task of conducting parametric studies; (d) significantly ease the task of modeling evolutionary problems; (e) are easily interfaced with existing analysis programs; and (f) can be used to carry out basic computational mechanics research. It is hoped that the framework will substantially ease the task of creating families of software applications that apply existing and upcoming theories of computational mechanics to solve both academic and real-world interdisciplinary simulation problems.

14.
Morley's theorem states that for any triangle, the intersections of its adjacent angle trisectors form an equilateral triangle. The construction of Morley's triangle by straightedge and compass is impossible because of the well-known impossibility result for angle trisection. However, an angle trisector can be constructed by origami, and hence so can Morley's triangle. In this paper we present a computational origami construction of Morley's triangle and an automated correctness proof of the generalized Morley's theorem. During the computational origami construction, geometric constraints in symbolic representation are generated and accumulated. Those constraints are then transformed into algebraic form, i.e. a set of polynomials, which in turn is used to prove the correctness of the construction. The automated proof is based on the Gröbner bases method. Timings of the Gröbner bases computations for our proofs are given; they vary greatly depending on the origami construction method, the algorithm used for the Gröbner bases computation, and the variable ordering.
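To illustrate the Gröbner-basis proof pattern (with a far simpler theorem than Morley's), the sketch below checks that a conclusion polynomial lies in the ideal generated by the hypothesis polynomials, here for "a point equidistant from A and B and from A and C is equidistant from B and C"; it uses SymPy rather than the system described in the paper.

```python
from sympy import symbols, groebner, expand

x, y, u1, u2, u3 = symbols('x y u1 u2 u3')

# Triangle A=(0,0), B=(u1,0), C=(u2,u3); P=(x,y) is the point in question.
h1 = expand((x**2 + y**2) - ((x - u1)**2 + y**2))                   # hypothesis |PA|^2 = |PB|^2
h2 = expand((x**2 + y**2) - ((x - u2)**2 + (y - u3)**2))            # hypothesis |PA|^2 = |PC|^2
goal = expand(((x - u1)**2 + y**2) - ((x - u2)**2 + (y - u3)**2))   # conclusion |PB|^2 = |PC|^2

# The conclusion holds if its polynomial lies in the ideal generated by the hypotheses.
G = groebner([h1, h2], x, y, u1, u2, u3, order='lex')
print("conclusion follows from the hypotheses:", G.contains(goal))
```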

15.
Dynamic distributed real-time applications run on clusters with varying execution times, so re-allocation of resources is critical to meeting the applications' deadlines. In this paper we present two adaptive resource management techniques for dynamic real-time applications that employ prediction of the responses of real-time tasks operating in a time-sharing environment and run-time analysis of scheduling policies. Response-time prediction for resource reallocation is accomplished by historical profiling of the applications' resource usage to estimate resource requirements on the target machine, and a probabilistic approach is applied to calculate the queuing delay that a process will experience on distributed hosts. Results show that, compared to statistical and worst-case approaches, our technique uses system resources more efficiently.
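The profiling and queuing formulas are not spelled out in the abstract; a toy sketch of the general idea (hypothetical parameters, an M/M/1-style delay assumption rather than the paper's actual probabilistic model) might look like this:

```python
def predict_execution_time(history, alpha=0.3):
    """Exponentially weighted profile of past execution times on the target machine."""
    est = history[0]
    for t in history[1:]:
        est = alpha * t + (1 - alpha) * est
    return est

def expected_queuing_delay(arrival_rate, service_rate):
    """M/M/1-style expected waiting time as a simple probabilistic delay model."""
    if arrival_rate >= service_rate:
        return float('inf')        # host saturated
    return arrival_rate / (service_rate * (service_rate - arrival_rate))

def meets_deadline(history, arrival_rate, service_rate, deadline):
    response = predict_execution_time(history) + expected_queuing_delay(arrival_rate, service_rate)
    return response <= deadline

print(meets_deadline([2.1, 2.4, 2.2, 2.8], arrival_rate=0.5, service_rate=0.8, deadline=6.0))
```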

16.
One of the widely used methods for solving a nonlinear system of equations is the quasi-Newton method. The basic idea underlying this type of method is to approximate the solution of Newton's equation by approximating the Jacobian matrix via a quasi-Newton update. Applying quasi-Newton methods to large-scale problems requires, in principle, vast computational resources to form and store an approximation to the Jacobian matrix of the underlying problem. Hence, this paper proposes an approximation to the Newton step based on an update that requires computational effort similar to that of matrix-free settings. This is made possible by approximating the Jacobian by a diagonal matrix using the least-change secant updating strategy commonly employed in the development of quasi-Newton methods. Under suitable assumptions, local convergence of the proposed method is proved for nonsingular systems. Numerical experiments on popular test problems confirm the effectiveness of the approach in comparison with Newton's, Chord Newton's, and Broyden's methods.
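As an illustration only, the sketch below uses one common diagonal least-change secant update (enforcing the weak secant condition s'D+s = s'y); the paper's exact update formula may differ, and the test system here is a made-up two-equation example.

```python
import numpy as np

def F(x):   # example nonlinear system: F(x) = 0 has solution x = (1, 1)
    return np.array([x[0]**2 + x[1] - 2.0,
                     x[0] + x[1]**3 - 2.0])

def diagonal_quasi_newton(x, max_iter=50, tol=1e-10):
    n = len(x)
    D = np.ones(n)                          # diagonal Jacobian approximation
    fx = F(x)
    for _ in range(max_iter):
        step = -fx / D                      # "Newton" step with a diagonal matrix: O(n) work
        x_new = x + step
        f_new = F(x_new)
        s, y_vec = x_new - x, f_new - fx
        # weak-secant least-change diagonal update: D+ = D + ((s'y - s'Ds)/sum(s^4)) * s^2
        denom = np.sum(s**4)
        if denom > 1e-14:
            D = D + ((s @ y_vec - s @ (D * s)) / denom) * s**2
        x, fx = x_new, f_new
        if np.linalg.norm(fx) < tol:
            break
    return x

print(diagonal_quasi_newton(np.array([1.5, 0.5])))
```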

17.
In advancing discrete-based computational cancer models towards clinical applications, one faces the dilemma of how to deal with an ever growing amount of biomedical data that ought to be incorporated eventually in one form or another. Model scalability becomes of paramount interest. In an effort to start addressing this critical issue, here, we present a novel multi-scale and multi-resolution agent-based in silico glioma model. While ‘multi-scale’ refers to employing an epidermal growth factor receptor (EGFR)-driven molecular network to process cellular phenotypic decisions within the micro-macroscopic environment, ‘multi-resolution’ is achieved through algorithms that classify cells to either active or inactive spatial clusters, which determine the resolution they are simulated at. The aim is to assign computational resources where and when they matter most for maintaining or improving the predictive power of the algorithm, onto specific tumor areas and at particular times. Using a previously described 2D brain tumor model, we have developed four different computational methods for achieving the multi-resolution scheme, three of which are designed to dynamically train on the high-resolution simulation that serves as control. To quantify the algorithms’ performance, we rank them by weighing the distinct computational time savings of the simulation runs vs. the methods’ ability to accurately reproduce the high-resolution results of the control. Finally, to demonstrate the flexibility of the underlying concept, we show the added value of combining the two highest-ranked methods. The main finding of this work is that by pursuing a multi-resolution approach, one can reduce the computation time of a discrete-based model substantially while still maintaining a comparably high predictive power. This hints at even more computational savings in the more realistic 3D setting over time, and thus appears to outline a possible path to achieve scalability for the all-important clinical translation.  相似文献   

18.
In this paper, we present a novel computational modeling and simulation framework based on dynamic spherical volumetric simplex splines. The framework can handle the modeling and simulation of genus-zero objects with real physical properties. Within this framework, we first develop an accurate and efficient algorithm to reconstruct a high-fidelity digital model of a real-world object with spherical volumetric simplex splines, which can accurately represent the geometric, material, and other properties of the object simultaneously. Through tight coupling with Lagrangian mechanics, the dynamic volumetric simplex splines representing the object can accurately simulate its physical behavior, because they unify the geometric and material properties in the simulation. The visualization can be computed directly from the object's geometric or physical representation based on the dynamic spherical volumetric simplex splines during simulation, without interpolation or resampling. We have applied the framework to the biomechanical simulation of brain deformations, such as brain shift during surgery and brain injury under blunt impact. We have compared our simulation results with ground truth obtained through intra-operative magnetic resonance imaging and real biomechanical experiments. The evaluations demonstrate the excellent performance of our new technique.

19.
Rewriting logic is a flexible and expressive logical framework that unifies algebraic denotational semantics and structural operational semantics (SOS) in a novel way, avoiding their respective limitations and allowing succinct semantic definitions. The fact that a rewrite logic theory’s axioms include both equations and rewrite rules provides a useful “abstraction dial” to find the right balance between abstraction and computational observability in semantic definitions. Such semantic definitions are directly executable as interpreters in a rewriting logic language such as Maude, whose generic formal tools can be used to endow those interpreters with powerful program analysis capabilities.  相似文献   

20.
A new machine learning framework is introduced in this paper, based on the hidden Markov model (HMM), designed to provide scheduling in dynamic wireless push systems. In realistic wireless systems, the clients’ intentions change dynamically; hence a cognitive scheduling scheme is needed to estimate the desirability of the connected clients. The proposed scheduling scheme is enhanced with self-organized HMMs, supporting the network with an estimated expectation of the clients’ intentions, since the system’s environment characteristics alter dynamically and the base station (server side) has no a priori knowledge of such changes. Compared to the original pure scheme, the proposed machine learning framework succeeds in predicting the clients’ information desires and overcomes the limitation of the original static scheme, in terms of mean delay and system efficiency.  相似文献   
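The HMM structure used in the paper is not detailed in the abstract; the fragment below is a generic forward-algorithm sketch for estimating a client's hidden "interest" state from observed requests, with made-up states, observations, and probabilities.

```python
import numpy as np

states = ["interested", "not_interested"]        # hidden client intention (hypothetical)
obs_symbols = {"request": 0, "idle": 1}          # what the base station actually observes

pi = np.array([0.5, 0.5])                        # initial state distribution
A = np.array([[0.8, 0.2],                        # state transition probabilities
              [0.3, 0.7]])
B = np.array([[0.9, 0.1],                        # P(observation | state)
              [0.2, 0.8]])

def forward(observations):
    """HMM forward algorithm: filtered belief over the client's hidden state."""
    alpha = pi * B[:, obs_symbols[observations[0]]]
    alpha /= alpha.sum()
    for o in observations[1:]:
        alpha = (alpha @ A) * B[:, obs_symbols[o]]
        alpha /= alpha.sum()                     # normalize to get P(state | observations so far)
    return dict(zip(states, alpha.round(3)))

# the scheduler can prioritize pushes to clients most likely to be interested
print(forward(["request", "request", "idle", "request"]))
```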
