1.
An algorithm with space dilation is presented; it is the circumscribed ellipsoid method under a certain choice of the dilation coefficient, and the Yudin–Nemirovsky–Shor ellipsoid method is shown to be its special case. Applications of the algorithm to solving a convex programming problem and to finding a saddle point of a convex–concave function are described.
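The flavor of the circumscribed ellipsoid method can be illustrated with a minimal sketch of the classic central-cut variant (the Yudin–Nemirovsky–Shor case); the update formulas are standard, but the oracle names `f` and `subgrad` and all parameter values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def ellipsoid_min(f, subgrad, x0, R, iters=200):
    """Minimize a convex f over a ball B(x0, R) with the central-cut
    ellipsoid method; subgrad returns any subgradient of f."""
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    P = (R ** 2) * np.eye(n)              # shape matrix of current ellipsoid
    x_best, f_best = x.copy(), f(x)
    for _ in range(iters):
        g = subgrad(x)
        Pg = P @ g
        denom = np.sqrt(g @ Pg)
        if denom < 1e-12:                 # ellipsoid has degenerated: stop
            break
        gt = Pg / denom                   # dilated (normalized) direction
        x = x - gt / (n + 1)              # move center against the cut
        P = (n * n / (n * n - 1.0)) * (P - (2.0 / (n + 1)) * np.outer(gt, gt))
        if f(x) < f_best:                 # the guarantee is on the best iterate
            x_best, f_best = x.copy(), f(x)
    return x_best
```

The ellipsoid volume shrinks by a fixed factor per step, which is what yields the method's dimension-dependent convergence rate.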
2.
Stochastic behavioral models are specified by a difference evolutionary equation for the probabilities of binary alternatives. The models are classified according to the limiting behavior of these probabilities, with the classification characterized by three types of equilibrium: attractive, repulsive, and dominant. The classification is carried out using stochastic approximation.
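A minimal sketch of how such a difference evolutionary equation can be simulated by stochastic approximation, assuming a hypothetical response function `g` that gives the probability of choosing the first alternative (an attractive equilibrium corresponds to a fixed point of `g` at which the drift pushes the probability back toward the fixed point):

```python
import random

def simulate(g, p0=0.9, steps=20000, seed=0):
    """Robbins-Monro recursion p_{t+1} = p_t + a_t * (xi_t - p_t), where
    xi_t ~ Bernoulli(g(p_t)) and a_t = 1/(t+1); fixed points of g are the
    equilibria of the limiting dynamics."""
    rng = random.Random(seed)
    p = p0
    for t in range(1, steps + 1):
        xi = 1.0 if rng.random() < g(p) else 0.0
        p += (xi - p) / (t + 1)           # decreasing-step averaging
    return p
```

With a contracting `g` the iterates settle near the fixed point; a `g` with slope greater than one at the fixed point would instead model a repulsive equilibrium, driving the probability toward a dominant alternative.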
3.
E. R. Smol’yakov, Cybernetics and Systems Analysis, 2018, 54(4): 552–562
A new concept of equilibrium is offered. It can be used to define a fair distribution in cooperative games on intersecting game sets and to refine the hierarchical dependence between known equilibria.
4.
Within the qualitative theory of realization of infinite-dimensional dynamical systems, the authors present results on the geometric properties of families of continuous controlled dynamic processes (“input–output” mappings) in the problem of solvability of differential realization in the class of linear ordinary non-stationary differential equations in a separable Hilbert space.
5.
In order to obtain a deeper understanding of the state of collaboration in the big data field, we investigated the author collaboration groups, the core author collaboration groups, and the collaboration trends in big data by combining bibliometric analysis with social network analysis. A total of 4130 papers from 13,759 authors during the period 2011–2015 were collected. The main results indicate that 3483 of the papers are coauthored (i.e., 84.33% of all papers) by 12,016 coauthors (i.e., 87.33% of all authors), which represents a reputable level of collaboration. On the other hand, 91.83% of all the identified coauthors have published only one paper so far, reflecting a poor level of maturity of such authors. Through social network analysis, we observed that the author collaboration network is composed of small author collaboration groups and that the authors come mainly from the computer science and technology field. As an important contribution of our study, we further analyzed the author collaboration network and generalized four subnet modes: ‘dual-core’, ‘complete’, ‘bridge’, and ‘sustainable development’. The dual-core mode corresponds to the stage at which researchers have just begun to study big data, while the complete mode tends toward joint research; both modes mostly involve authors engaged in the same project. The bridge mode and the sustainable development mode represent, respectively, the popular and the valued directions in the big data field. The results of this study can be useful for researchers interested in finding suitable partners in the big data field. By tracking the core authors and the key author collaboration groups, one can learn about current developments in the big data field and predict its development prospects. We therefore expect the results summarized in this paper to contribute to a faster development of the big data field.
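The basic bookkeeping behind such a coauthorship analysis can be sketched as follows; representing papers as author-name lists and using connected components as a stand-in for "author collaboration groups" are simplifying assumptions, not the exact methodology of the study:

```python
from collections import defaultdict
from itertools import combinations

def coauthor_components(papers):
    """papers: list of author-name lists. Returns the connected components
    of the coauthorship graph, a crude proxy for collaboration groups."""
    adj = defaultdict(set)
    for authors in papers:
        for a, b in combinations(set(authors), 2):
            adj[a].add(b)
            adj[b].add(a)
        for a in authors:
            adj[a]                        # ensure single authors get a node
    seen, comps = set(), []
    for v in list(adj):
        if v in seen:
            continue
        stack, comp = [v], set()          # iterative depth-first search
        while stack:
            u = stack.pop()
            if u in comp:
                continue
            comp.add(u)
            stack.extend(adj[u] - comp)
        seen |= comp
        comps.append(comp)
    return comps
```

A two-author component would correspond to the 'dual-core' mode, and a fully interconnected one to the 'complete' mode, in the terminology above.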
6.
A solution to the integrity verification problem is proposed for arithmetic programs with branching and looping statements executed on a remote computational resource. The solution is to replace arithmetic operations such as multiplication and division by corresponding procedures of the addition machine introduced by R. Floyd and D. Knuth. Instruction sequencing and current values of variables are signed by dynamic digital signatures homomorphic with respect to addition and subtraction. A modification of the Benaloh scheme is used for the implementation of digital signatures. Verification of digital signatures of the results of executing a program ensures the detection of any unauthorized changes in the source code of the program.
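The idea of replacing multiplication by an addition-only procedure can be sketched as below. The original Floyd–Knuth addition machine uses a Fibonacci-based scheme; this simplified doubling variant merely illustrates that multiplication is expressible using only addition, subtraction, and comparison (the operations the homomorphic signatures can track):

```python
def add_mul(a, b):
    """Multiply a by a non-negative integer b using only addition,
    subtraction, and comparison (no * or //)."""
    result = 0
    while b > 0:
        d, w = a, 1                # invariant: d == a * w
        while w + w <= b:          # double until w would overshoot b
            d = d + d
            w = w + w
        result = result + d        # consume w copies of a at once
        b = b - w
    return result
```

Each outer pass removes at least half of the remaining multiplier, so the procedure runs in O(log^2 b) additions.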
7.
We investigate the dynamics of a bypass with the aim of determining the optimal parameters of shunt operation. The equations that describe the propagation of pressure pulse waves in blood vessels are presented in differential form and in an approximate form for averaged values based on the conservation laws. The correspondence of these forms is shown by solving the initial–boundary-value problem for a vascular junction. The influence of parameters such as the diameters, wall thicknesses, and elastic properties of blood vessels on shunt effectiveness is investigated. In particular, we study the influence of the shunt shape and junction area, as well as the angle at which the shunt connects to the vessel, on the throughput of the coronary vessels and on blood coagulation. On the basis of the calculations, conclusions are drawn about the influence of these parameters on the dynamics of the bypass.
8.
S. I. Lyashko, D. A. Klyushin, V. V. Onotskyi, N. I. Lyashko, Cybernetics and Systems Analysis, 2018, 54(3): 357–365
This article considers the simulation and optimization of transdermal drug delivery from systems of soluble microneedles. A two-dimensional problem of vertical transport of soluble drugs through a porous medium with point sources imitating soluble microneedles is solved. It is shown that, by controlling the intensity and specifying the coordinates of the sources, the problem of optimal control of transdermal drug transport can be solved and a desired distribution of drugs in the epidermis can be achieved with acceptable accuracy. To solve the initial–boundary-value problems, finite-difference methods and a two-step symmetrizable algorithm are used.
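A much simplified one-dimensional analogue of such a transport problem (explicit finite differences with a single point source, rather than the paper's two-dimensional porous-medium model and two-step symmetrizable algorithm) might look like this; all grid and source parameters are illustrative assumptions:

```python
import numpy as np

def diffuse(n=101, steps=2000, D=1.0, dx=0.01, src_idx=50, q=1.0):
    """Explicit finite-difference solution of u_t = D u_xx + q*delta(x - x_s)
    with zero (absorbing) boundary conditions."""
    dt = 0.2 * dx * dx / D            # satisfies the stability bound dt <= dx^2/(2D)
    u = np.zeros(n)
    for _ in range(steps):
        # second-difference Laplacian on interior nodes
        u[1:-1] += dt * (D * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx ** 2)
        u[src_idx] += dt * q / dx     # point source (a single "microneedle")
        u[0] = u[-1] = 0.0            # absorbing boundaries
    return u
```

Moving `src_idx` or scaling `q` changes where and how much drug accumulates, which is the one-dimensional shadow of the optimal-control question studied in the paper.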
9.
Graph-based modeling has emerged as a powerful abstraction capable of capturing, in a single unified framework, many of the relational, spatial, topological, and other characteristics present in a variety of datasets and application areas. Computationally efficient algorithms that find patterns corresponding to frequently occurring subgraphs play an important role in developing data-mining-driven methodologies for analyzing the graphs resulting from such datasets. This paper presents two algorithms, based on the horizontal and vertical pattern discovery paradigms, that find the connected subgraphs that have a sufficient number of edge-disjoint embeddings in a single large undirected labeled sparse graph. These algorithms use three different methods for determining the number of edge-disjoint embeddings of a subgraph and employ novel algorithms for candidate generation and frequency counting, which allow them to operate on datasets with different characteristics and to quickly prune unpromising subgraphs. Experimental evaluation on real datasets from various domains shows that both algorithms achieve good performance, scale well to sparse input graphs with more than 120,000 vertices or 110,000 edges, and significantly outperform previously developed algorithms.
10.
V. V. Semenov, Cybernetics and Systems Analysis, 2018, 54(6): 936–943
In this paper, new algorithms are proposed to solve operator inclusion problems with maximal monotone operators acting in a Hilbert space. The algorithms are based on inertial extrapolation and three well-known methods: Tseng’s forward–backward splitting and two hybrid algorithms for approximating fixed points of nonexpansive operators. Theorems on the strong convergence of the sequences generated by the algorithms are proved.
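A minimal Euclidean sketch of Tseng's forward–backward splitting with inertial extrapolation, for an inclusion 0 ∈ A(x) + B(x) where `prox` is the resolvent of A and `grad` evaluates the single-valued operator B; the step size, inertia parameter, and box-constrained test problem are illustrative assumptions rather than the paper's exact algorithms:

```python
import numpy as np

def inertial_tseng(grad, prox, x0, lam=0.5, theta=0.3, iters=200):
    """Tseng's forward-backward splitting with inertial extrapolation for
    the inclusion 0 in A(x) + B(x); prox is the resolvent of A and grad
    evaluates B (assumed Lipschitz, with lam below 1/L)."""
    x_prev = x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        w = x + theta * (x - x_prev)               # inertial extrapolation
        y = prox(w - lam * grad(w))                # forward-backward step
        x_prev, x = x, y - lam * (grad(y) - grad(w))  # Tseng correction
    return x
```

With A the normal cone of a box (resolvent = coordinatewise clipping) and B the gradient of a quadratic, the iterates converge to the projection of the unconstrained minimizer onto the box.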
11.
This paper contributes to the practice of path planning by introducing a new hierarchical extension of the D* algorithm. A hierarchical graph, stratified into several abstraction levels, is used to model environments for path planning. The hierarchical D* algorithm uses a bottom-up strategy and a set of pre-calculated trajectories to improve performance, preserving optimality while notably reducing computation time. It is experimentally shown that hierarchical search algorithms and on-line path planning algorithms based on topological abstractions can be combined successfully.
12.
E. R. Smol’yakov, Cybernetics and Systems Analysis, 2016, 52(4): 524–535
Complicated concepts of equilibrium are given for static and dynamic conflict problems described by differential equations. The problems are considered both on a game set common to all participants and on partially overlapping game sets. The concepts are useful in searching for the strongest equilibrium in game problems and for determining a fair division of cooperative income.
13.
The learning-based automated Assume–Guarantee reasoning paradigm has been applied in recent years to the compositional verification of concurrent systems. Specifically, L* has been used to learn the assumption, based on strings derived from counterexamples that are supplied by a model checker attempting to verify the Assume–Guarantee rules. We suggest three optimizations to this paradigm. First, we derive from each counterexample multiple strings for L*, rather than a single one as in previous approaches. This small improvement saves candidate queries and hence model-checking runs. Second, we observe that in existing instances of this paradigm the learning algorithm is only weakly coupled with the teacher: the learner completely ignores the details of the internal structure of the system and specification being verified, even though they are already available to the teacher. We suggest an optimization that uses this information to avoid many unnecessary membership queries (it reduces the number of such queries by more than an order of magnitude). Finally, we develop a method for minimizing the alphabet used by the assumption, which reduces the size of the assumption and the number of queries required to construct it. We present these three optimizations in the context of verifying trace containment for concurrent systems composed of finite state machines. We have implemented our approach in the ComFoRT tool and experimented with real-life examples. Our results exhibit an average speedup of 4 to 11 times, depending on the Assume–Guarantee rule used and the set of activated optimizations.
This research was supported by the Predictable Assembly from Certifiable Components (PACC) initiative at the Software Engineering Institute, Pittsburgh.
14.
Recently, a new class of data mining methods, known as privacy preserving data mining (PPDM) algorithms, has been developed by the research community working on security and knowledge discovery. The aim of these algorithms is the extraction of relevant knowledge from large amounts of data while at the same time protecting sensitive information. Several data mining techniques incorporating privacy protection mechanisms have been developed that allow one to hide sensitive itemsets or patterns before the data mining process is executed. Privacy preserving classification methods, instead, prevent a miner from building a classifier able to predict sensitive data. Additionally, privacy preserving clustering techniques have recently been proposed, which distort sensitive numerical attributes while preserving general features for clustering analysis. A crucial issue is to determine which of these privacy-preserving techniques better protect sensitive information. However, this is not the only criterion by which these algorithms can be evaluated. It is also important to assess the quality of the data resulting from the modifications applied by each algorithm, as well as the performance of the algorithms. There is thus a need to identify a comprehensive set of criteria with respect to which to assess the existing PPDM algorithms and determine which algorithm meets specific requirements.
In this paper, we present a first evaluation framework for estimating and comparing different kinds of PPDM algorithms. We then apply our criteria to a specific set of algorithms and discuss the evaluation results we obtain. Finally, some considerations about future work and promising directions in the context of privacy preservation in data mining are discussed.
*The work reported in this paper has been partially supported by the EU under the IST Project CODMINE and by the Sponsors of CERIAS.
Editor: Geoff Webb
Elisa Bertino (corresponding author), Igor Nai Fovino, Loredana Parasiliti Provenza
15.
A Version of the Mirror Descent Method to Solve Variational Inequalities*
V. V. Semenov, Cybernetics and Systems Analysis, 2017, 53(2): 234–243
Nemirovski and Yudin proposed the mirror descent algorithm in the late 1970s to solve convex optimization problems, and the method is well suited to huge-scale problems. In this paper, we describe a new version of the mirror descent method for solving variational inequalities with pseudomonotone operators. The method can be interpreted as a modification of Popov’s two-step algorithm with the use of Bregman projections onto the feasible set. We prove the convergence of the sequences generated by the proposed method.
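For intuition, here is a sketch of Popov's two-step method in the Euclidean special case (ordinary projections standing in for the Bregman projections used in the paper), applied to a monotone rotation operator whose variational inequality has the solution x* = 0; the step size and iteration count are illustrative assumptions:

```python
import numpy as np

def popov(F, proj, x0, lam=0.2, iters=1000):
    """Popov's two-step method for the variational inequality
    <F(x*), x - x*> >= 0 over a set C given by its projection proj.
    The operator value F(y_k) is reused in both updates, so only one
    operator evaluation per iteration is needed."""
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    for _ in range(iters):
        Fy = F(y)
        x_new = proj(x - lam * Fy)        # main iterate
        y = proj(x_new - lam * Fy)        # leading (extrapolated) iterate
        x = x_new
    return x
```

Unlike the extragradient method, which calls the operator twice per iteration, Popov's scheme calls it once, which is the source of its appeal for expensive operators.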
16.
Mathematical tools are proposed for optimizing the chemical compositions of alloys and the technological parameters of their processing during the development of new steel grades under considerable uncertainty. Because of this uncertainty, statistical models of the mechanical properties of the new alloys are used in the optimization problems. In particular, CVaR is used to estimate the right tail of the distribution of CVN (Charpy V-notch impact toughness). This implies the use of nonconcave (convex) objective functions in maximization problems, and mathematical methods are proposed for solving them.
17.
D. A. Rachkovskij, Cybernetics and Systems Analysis, 2018, 54(2): 320–335
This survey considers index structures for fast similarity search over objects represented by real-valued vectors. Structures both for exact and for faster but approximate similarity search are considered. Index structures based on partitioning into regions (including hierarchical ones) and on proximity graphs are mainly presented. The acceleration of similarity search through transformations of the initial data is also discussed. The ideas of concrete algorithms, including recently proposed ones, are outlined. Approaches to accelerating similarity search in index structures of the considered types, and also on the basis of similarity-preserving hashing, are discussed and compared.
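The greedy routing idea behind proximity-graph indexes can be sketched in a few lines: walk the graph, always moving to the neighbor closest to the query, and stop at a local minimum, which serves as the (approximate) nearest neighbor. The path-graph test below is a deliberately simple assumption, not a realistic index:

```python
import numpy as np

def greedy_search(points, neighbors, query, start=0):
    """Greedy best-first walk on a proximity graph: points is an array of
    vectors, neighbors maps an index to its adjacent indices. Returns the
    index of a local minimum of the distance to query."""
    cur = start
    d_cur = float(np.linalg.norm(points[cur] - query))
    while True:
        best, d_best = cur, d_cur
        for nb in neighbors[cur]:
            d = float(np.linalg.norm(points[nb] - query))
            if d < d_best:
                best, d_best = nb, d
        if best == cur:          # no neighbor is closer: local minimum
            return cur
        cur, d_cur = best, d_best
```

Real proximity-graph indexes add long-range edges and multi-entry or beam search so that such local minima are close to the true nearest neighbor with high probability.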
18.
Every rectilinear Steiner tree problem admits an optimal tree T* which is composed of tree stars. Moreover, the currently fastest algorithms for the rectilinear Steiner tree problem proceed by composing an optimum tree T* from tree star components in the cheapest way. The efficiency of such algorithms depends heavily on the number of tree stars (candidate components). Fößmeier and Kaufmann (Algorithmica 26, 68–99, 2000) showed that any problem instance with k terminals has a number of tree stars between 1.32^k and 1.38^k (modulo polynomial factors) in the worst case. We determine the exact bound O*(ρ^k), where ρ ≈ 1.357, and mention some consequences of this result.
19.
The authors analyze the asymptotic dissipation of a pre-limit normalized stochastic evolutionary system in an ergodic Markov environment, which significantly influences the behavior of the limit process.
20.
A minimax modification of the fuzzy constraint satisfaction problem is considered, in which constraints determine not whether a given solution is feasible but a numerical degree of satisfiability. An algorithm is proposed that, for a subclass of problems whose constraints are invariant under some majority operator, finds a given number of solutions with the highest degree of satisfiability in polynomial time. Importantly, the operator itself need not be known, and its existence need not even be guaranteed: for any system of fuzzy constraints, the algorithm either finds a given number of best solutions or rejects the problem, the latter being possible only when no such operator exists.