Similar Documents
20 similar documents found (search time: 62 ms)
1.
Model checking LTL with regular valuations for pushdown systems
Recent works have proposed pushdown systems as a tool for analyzing programs with (recursive) procedures, and the model-checking problem for LTL has received special attention. However, all these works impose a strong restriction on the possible valuations of atomic propositions: whether a configuration of the pushdown system satisfies an atomic proposition or not can depend only on the current control state of the pushdown automaton and on its topmost stack symbol. In this paper we consider LTL with regular valuations: the set of configurations satisfying an atomic proposition can be an arbitrary regular language. The model-checking problem is solved via two different techniques, with an eye on efficiency. The resulting algorithms are polynomial in certain measures of the problem which are usually small, but can be exponential in the size of the problem instance. However, we show that this exponential blowup is inevitable. The extension to regular valuations makes it possible to model problems in different areas; for instance, we show an application to the analysis of systems with checkpoints. We claim that our model-checking algorithms provide a general, unifying and efficient framework for solving them.
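As a concrete illustration of the regular-valuation idea, the sketch below evaluates an atomic proposition on a pushdown configuration (control state, stack word) by running a small DFA over the stack content, so that satisfaction may depend on the whole stack rather than only on the topmost symbol. The DFA, the proposition name, and all identifiers are hypothetical examples, not taken from the paper.

```python
# Minimal sketch (not the paper's algorithm): evaluating a "regular valuation"
# for an atomic proposition on a pushdown configuration (control state, stack word).

def make_dfa(transitions, start, accepting):
    """transitions: dict mapping (state, symbol) -> state; returns a membership test."""
    def accepts(word):
        state = start
        for sym in word:
            if (state, sym) not in transitions:
                return False
            state = transitions[(state, sym)]
        return state in accepting
    return accepts

# Hypothetical atomic proposition: "the stack content is gamma* followed by the bottom symbol".
ap_holds_stack = make_dfa(
    transitions={("s0", "gamma"): "s0", ("s0", "bottom"): "s1"},
    start="s0",
    accepting={"s1"},
)

def valuation(atomic_prop, control_state, stack):
    # Regular valuation: membership may depend on the whole stack word,
    # not only on the control state and the topmost stack symbol.
    if atomic_prop == "p":
        return control_state == "q0" and ap_holds_stack(stack)
    return False

print(valuation("p", "q0", ["gamma", "gamma", "bottom"]))  # True
print(valuation("p", "q0", ["delta", "bottom"]))           # False
```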

2.
Service innovation is focused on customer value creation. At its core, customer-centric service innovation is technology-enabled, human-centered, and process-oriented. To profit from such innovation, firms need an integrated, cross-disciplinary, holistic method to design and commercialize service innovation. From diverse but interrelated strands of theory in the service science, strategic management, organization science and information systems literatures, this article develops a new integrated design method, known as iSIM (integrated Service Innovation Method), for simultaneous service innovation and business model design for sustained customer value co-creation with the firm. Following the design science research method, the article theoretically defines and integrates iSIM's seven constitutive design process-elements (service strategy, customer type / value proposition, service concept, service system, customer experience, service architecture and monetization) into a coherent, end-to-end aligned integrated design method. It explains how iSIM would be holistically and iteratively practiced by practitioners, and conceptually exemplifies its utility via telco and Amazon case studies using secondary data. Perspectives on iSIM from selected practitioners are discussed, which confirm iSIM's potential utility for their business. Managerial implications of implementing iSIM and potential areas for further research are also discussed.

3.
This work suggests a method for systematically constructing a software-level environment model for safety checking of automotive operating systems by introducing a constraint specification language, OSEK_CSL. OSEK_CSL is designed to specify the usage constraints of automotive operating systems using a pre-defined set of constraint types identified from the international standard OSEK/VDX. Each constraint specified in OSEK_CSL is interpreted as either a regular language or a context-free language that can be checked by a finite automaton or a pushdown automaton. The set of usage constraints is used to systematically classify the universal usage model of OSEK/VDX-based operating systems and to generate test sequences with varying degrees of constraint satisfaction using LTL model checking. With pre-defined constraint patterns and full automation support, test engineers can choose the degree of constraint satisfaction and generate test cases using combinatorial intersections of selected constraints that cover all corner cases classified by the constraints. A series of experiments on an open-source automotive operating system shows that our approach finds safety issues more effectively than conventional specification-based testing, scenario-based testing, and conformance testing.

4.
This paper proposes a game-theoretic model of the two-player best-choice problem with incomplete information. The players (experts) choose between objects by observing their quality in the form of two components forming a sequence of random variables (x_i, y_i), i = 1, ..., n. By assumption, the first quality component x_i is known to the players and the second one y_i is hidden. A player accepts or declines an object based on the first quality component only. A player with the maximal sum of the components becomes the winner in the game. The optimal strategies are derived in the cases of independent and correlated quality components.

5.
A mathematical model f(x) defined on the unit n-dimensional cube, where x = (x_1, ..., x_n), is considered. How can one estimate the global sensitivity of f(x) with respect to x_i? If f(x) ∈ L_2, global sensitivity indices help answer this question. Although less reliable, derivative-based sensitivity criteria are sometimes easier to compute. In this work, a new derivative-based global sensitivity criterion is compared with the respective global sensitivity index. These estimates are proven to coincide in the particular case when f(x) depends linearly on x_i. However, Monte Carlo approximations to the derivative-based criterion converge faster. Thus, the global derivative-based sensitivity criterion can prove useful when f(x) depends on x_i almost linearly. It can also be used to identify nonessential variables x_i.
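The following sketch illustrates the two kinds of estimates being compared, for a made-up test function on the unit cube: a pick-freeze Monte Carlo estimate of the first-order global sensitivity index and a Monte Carlo estimate of a simple derivative-based quantity E[(∂f/∂x_i)²]. The paper's exact derivative-based criterion and its normalization may differ; this only shows how such Monte Carlo approximations are formed.

```python
# Illustrative sketch only; the test function and the derivative-based quantity
# are assumptions for demonstration, not the paper's definitions.
import random

def f(x):
    # Test model: linear in x_0, nonlinear in x_1
    return 2.0 * x[0] + x[1] ** 2

def sobol_first_order(i, n_dims, n_samples=100_000):
    """Pick-freeze estimator of S_i = Var(E[f | x_i]) / Var(f)."""
    num, f_sum, f_sq = 0.0, 0.0, 0.0
    for _ in range(n_samples):
        x = [random.random() for _ in range(n_dims)]
        x_prime = [random.random() for _ in range(n_dims)]
        x_mixed = list(x_prime)
        x_mixed[i] = x[i]                  # freeze coordinate i
        fx, fmix = f(x), f(x_mixed)
        num += fx * fmix
        f_sum += fx
        f_sq += fx * fx
    mean = f_sum / n_samples
    var = f_sq / n_samples - mean ** 2
    return (num / n_samples - mean ** 2) / var

def derivative_measure(i, n_dims, n_samples=100_000, h=1e-4):
    """Monte Carlo estimate of E[(df/dx_i)^2] via central differences."""
    acc = 0.0
    for _ in range(n_samples):
        x = [random.random() for _ in range(n_dims)]
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        acc += ((f(xp) - f(xm)) / (2 * h)) ** 2
    return acc / n_samples

print("S_0 ~", sobol_first_order(0, 2), "  E[(df/dx_0)^2] ~", derivative_measure(0, 2))
```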

6.
We consider a geographic optimization problem in which we are given a region R, a probability density function f(·) defined on R, and a collection of n utility density functions u_i(·) defined on R. Our objective is to divide R into n sub-regions R_i so as to “balance” the overall utilities on the regions, which are given by the integrals \(\iint_{R_i} f(x)u_i(x)\, dA\). Using a simple complementary slackness argument, we show that (depending on what we mean precisely by “balancing” the utility functions) the boundary curves between optimal sub-regions are level curves of either the difference function u_i(x) − u_j(x) or the ratio u_i(x)/u_j(x). This allows us to solve the problem of optimally partitioning the region efficiently by reducing it to a low-dimensional convex optimization problem. This result generalizes, and gives very short and constructive proofs of, several existing results in the literature on equitable partitioning for particular forms of f(·) and u_i(·). We next give two economic applications of our results in which we show how to compute a market-clearing price vector in an aggregate demand system or a variation of the classical Fisher exchange market. Finally, we consider a dynamic problem in which the density function f(·) varies over time (simulating population migration or transport of a resource, for example) and derive a set of partial differential equations that describe the evolution of the optimal sub-regions over time. Numerical simulations for both static and dynamic problems confirm that such partitioning problems become tractable when using our methods.
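One way to make the level-curve characterization concrete (an assumption for illustration, not the paper's algorithm) is to encode a candidate partition by offsets λ_i and assign each point x to the region maximizing u_i(x) + λ_i; the boundary between R_i and R_j is then the level curve u_i(x) − u_j(x) = λ_j − λ_i, and adjusting the offsets shifts the boundaries to rebalance the utilities. The density and utilities below are hypothetical.

```python
# Sketch under assumptions: discretize R = [0,1]^2, assign each grid cell to the
# region with the largest offset-shifted utility, and report the per-region
# integrals of f * u_i, which the offsets are meant to balance.
def f(x, y):                       # probability density on R (unnormalized here)
    return 1.0

utilities = [
    lambda x, y: x + y,            # u_1 (hypothetical)
    lambda x, y: 2.0 * (1.0 - x),  # u_2 (hypothetical)
]

def region_utilities(lambdas, grid=200):
    """Approximate the integrals of f*u_i over the induced sub-regions on a grid."""
    totals = [0.0 for _ in utilities]
    cell = 1.0 / grid
    for a in range(grid):
        for b in range(grid):
            x, y = (a + 0.5) * cell, (b + 0.5) * cell
            scores = [u(x, y) + lam for u, lam in zip(utilities, lambdas)]
            i = max(range(len(utilities)), key=lambda k: scores[k])
            totals[i] += f(x, y) * utilities[i](x, y) * cell * cell
    return totals

print(region_utilities([0.0, 0.0]))   # unbalanced split
print(region_utilities([0.3, 0.0]))   # shifting the level curve rebalances the utilities
```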

7.
The non-configurational geometrization of the electromagnetic field can be realized using the Model of Embedded Spaces (MES). This model assumes the existence of proper 4D space-time manifolds of particles with a nonzero rest mass and declares that physical space-time is the metric result of the dynamic embedding of these manifolds: the value of the partial contribution of an element manifold is determined by element interactions. The space of the model is provided with a Riemann-like geometry, whose differential formalism is described by a generalization of the gradient operator \(\partial/\partial x_i \to \partial/\partial x_i + 2u_k\,\partial^2/\partial x_{[i}\partial u_{k]}\), where \(u_i = dx_i/ds\) is the matter velocity. In the paper, the redshift effect existing in the space of MES is considered, and its electromagnetic component is analyzed. It is shown that for the cold matter of the modern Universe this component reduces to a shift in electric fields and is described by the expression \(\Delta\omega_e/\omega \simeq \mp\sqrt{k}\,\Delta\varphi_e/c^2 = \mp 0.861\cdot 10^{-21}\,\Delta\varphi_e\,(\mathrm{V})\), where the potential is measured in volts and the sign must be determined experimentally. Testing of the effect is the “experimentum crucis” for MES.

8.
Systems of equations of the form X_i = φ_i(X_1, …, X_n) (1 ≤ i ≤ n) are considered, in which the unknowns are sets of natural numbers. The expressions φ_i may contain the operations of union, intersection and elementwise addition \(S+T=\{m+n \mid m\in S,\ n\in T\}\). A system with an EXPTIME-complete least solution is constructed in the paper through a complete arithmetization of EXPTIME-completeness. At the same time, it is established that the least solutions of all such systems are in EXPTIME. The general membership problem for these equations is proved to be EXPTIME-complete. Among the consequences of the result is the EXPTIME-completeness of the compressed membership problem for conjunctive grammars.
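The sketch below illustrates the kind of equation system involved: a Kleene (least-fixed-point) iteration over sets of naturals with union and elementwise addition, truncated to a finite universe so the iteration terminates. The example system and the truncation are for illustration only; the paper works over all natural numbers and is concerned with the complexity of least solutions, not with this naive iteration.

```python
# Minimal sketch: least solution of X_i = phi_i(X_1, ..., X_n) over sets of naturals,
# computed by Kleene iteration on a finite universe {0, ..., LIMIT}.
LIMIT = 50

def elementwise_add(s, t):
    """S + T = {m + n | m in S, n in T}, truncated to the finite universe."""
    return {m + n for m in s for n in t if m + n <= LIMIT}

def least_solution(equations, n_vars):
    """equations: list of functions mapping the current tuple of sets to a new set."""
    sets = [set() for _ in range(n_vars)]
    while True:
        new = [eq(sets) for eq in equations]
        if new == sets:
            return sets
        sets = new

# Example system (hypothetical, for illustration):
#   X_1 = (X_1 + {2}) union {0}     -- the even numbers
#   X_2 = X_1 + {1}                 -- the odd numbers
solution = least_solution(
    [
        lambda s: elementwise_add(s[0], {2}) | {0},
        lambda s: elementwise_add(s[0], {1}),
    ],
    n_vars=2,
)
print(sorted(solution[0])[:6])  # [0, 2, 4, 6, 8, 10]
print(sorted(solution[1])[:6])  # [1, 3, 5, 7, 9, 11]
```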

9.
Hatem M. Bahig, Computing 91(4):335–352, 2011
An addition chain for a natural number n is a sequence \({1=a_0 < a_1 < \cdots < a_r=n}\) of numbers such that for each 0 < i ≤ r, a_i = a_j + a_k for some 0 ≤ k ≤ j < i. The minimal length of an addition chain for n is denoted by ℓ(n). If j = i − 1, then step i is called a star step. We show that there is a minimal-length addition chain for n whose last four steps are star steps. We then conjecture that there is a minimal-length addition chain for n whose last \({\lfloor\frac{\ell(n)}{2}\rfloor}\) steps are star steps, and verify that the conjecture is true for all numbers up to 2^18. Applying the result and the conjecture to the generation of a minimal-length addition chain reduces the average CPU time by 23–29% and 38–58%, respectively, and memory storage by 16–18% and 26–45%, respectively, for m-bit numbers with 14 ≤ m ≤ 22.
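To make the definitions concrete, the sketch below computes one minimal-length addition chain for a small n by iterative-deepening search and counts how many trailing steps are star steps. The theorem asserts the existence of some minimal chain whose last four steps are stars; the particular chain found here need not be that chain.

```python
# Illustrative sketch, practical only for small n (the paper targets much larger inputs).
def minimal_addition_chain(n):
    """Return a shortest addition chain 1 = a_0 < ... < a_r = n via iterative deepening."""
    def search(chain, max_steps):
        last = chain[-1]
        if last == n:
            return chain
        steps_left = max_steps - (len(chain) - 1)
        if steps_left == 0 or last << steps_left < n:   # even doubling cannot reach n
            return None
        candidates = sorted({a + b for a in chain for b in chain if last < a + b <= n},
                            reverse=True)
        for c in candidates:
            found = search(chain + [c], max_steps)
            if found:
                return found
        return None

    r = 0
    while True:
        found = search([1], r)
        if found:
            return found
        r += 1

def star_suffix_length(chain):
    """Number of trailing steps i with a_i = a_{i-1} + a_j for some j (star steps)."""
    count = 0
    for i in range(len(chain) - 1, 0, -1):
        if any(chain[i] == chain[i - 1] + chain[j] for j in range(i)):
            count += 1
        else:
            break
    return count

chain = minimal_addition_chain(31)
print(chain, "length:", len(chain) - 1, "trailing star steps:", star_suffix_length(chain))
```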

10.
A flow-shop batching problem with consistent batches is considered in which the processing times of all jobs on each machine are equal to p and all batch set-up times are equal to s. In such a problem, one has to partition the set of jobs into batches and to schedule the batches on each machine. The processing time of a batch B_i is the sum of the processing times of the operations in B_i, and the earliest start of B_i on a machine is the finishing time of B_i on the previous machine plus the set-up time s. Cheng et al. (Naval Research Logistics 47:128–144, 2000) provided an O(n) pseudopolynomial-time algorithm for solving the special case of the problem with two machines. Mosheiov and Oron (European Journal of Operational Research 161:285–291, 2005) developed an algorithm of the same time complexity for the general case with more than two machines. Ng and Kovalyov (Journal of Scheduling 10:353–364, 2007) improved the pseudopolynomial complexity to \(O(\sqrt{n})\). In this paper, we provide a polynomial-time algorithm of time complexity O(log^3 n).
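The following sketch evaluates the makespan of a given consistent batching under the rules quoted above (equal processing times p, equal set-up times s); charging the set-up before every batch on every machine, including the first, is an assumption made here for illustration.

```python
# Sketch: makespan of a consistent batching in the flow-shop batching model above.
def makespan(batch_sizes, n_machines, p, s):
    """batch_sizes: sizes of the consistent batches B_1, ..., B_k (same on every machine)."""
    finish = [[0.0] * len(batch_sizes) for _ in range(n_machines)]
    for m in range(n_machines):
        for i, size in enumerate(batch_sizes):
            done_prev_machine = finish[m - 1][i] if m > 0 else 0.0
            done_prev_batch = finish[m][i - 1] if i > 0 else 0.0
            # Set-up s is charged before each batch on each machine (assumption for the
            # first machine / first batch; the abstract states the rule only relative to
            # "the previous machine").
            finish[m][i] = max(done_prev_machine, done_prev_batch) + s + size * p
    return finish[-1][-1]

# Same 6 jobs, two different batchings on 3 machines:
print(makespan([2, 2, 2], n_machines=3, p=1.0, s=0.5))  # 12.5
print(makespan([3, 3],    n_machines=3, p=1.0, s=0.5))  # 14.0
```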

11.
12.
A Steiner triple system of order n (for short, STS(n)) is a system of three-element blocks (triples) of elements of an n-set such that each unordered pair of elements occurs in precisely one triple. Assign to each triple (i,j,k) ∈ STS(n) a topological triangle with vertices i, j, and k. Gluing together like sides of the triangles that correspond to a pair of disjoint STS(n) of a special form yields a black-and-white tiling of some closed surface. For each n ≡ 3 (mod 6) we prove that there exist nonisomorphic tilings of nonorientable surfaces by pairs of Steiner triple systems of order n. We also show that for half of the values n ≡ 1 (mod 6) there are nonisomorphic tilings of nonorientable closed surfaces.
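As a small illustration of the definition, the sketch below checks that a candidate block set is an STS(7) (the Fano plane), i.e., that every unordered pair of points lies in exactly one triple.

```python
# Verify the STS property for a candidate set of triples.
from itertools import combinations

fano = [(1, 2, 3), (1, 4, 5), (1, 6, 7), (2, 4, 6), (2, 5, 7), (3, 4, 7), (3, 5, 6)]

def is_steiner_triple_system(triples, n):
    counts = {pair: 0 for pair in combinations(range(1, n + 1), 2)}
    for t in triples:
        for pair in combinations(sorted(t), 2):
            counts[pair] += 1
    return all(c == 1 for c in counts.values())

print(is_steiner_triple_system(fano, 7))  # True
```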

13.
Intermittent Fault Diagnosability of Interconnection Networks
An interconnection network's diagnosability is an important metric for measuring its self-diagnostic capability. Permanent faults and intermittent faults are two different fault models that exist in an interconnection network. In this paper, we focus on the problem pertaining to the diagnosability of interconnection networks in an intermittent fault situation. First, we study a class of interconnection networks called crisp three-cycle networks, in which the cn_in-number (the number of common vertices each pair of vertices shares) is no more than one. Necessary and sufficient conditions are derived for the diagnosability of crisp three-cycle networks under the PMC (Preparata, Metze, and Chien) model. A simple check shows that many well-known interconnection networks are crisp three-cycle networks. Second, we prove that an interconnection network S is a t_i-fault diagnosable system without repair if and only if its minimum in-degree is greater than t_i under the BGM (Barsi, Grandoni, and Maestrini) model. Finally, we extend the necessary and sufficient conditions for determining whether an interconnection network S is t_i-fault diagnosable without repair under the MM (Maeng and Malek) model from the permanent fault situation to the intermittent fault situation.

14.
Recent years have witnessed the rapid growth of text data, and thus the increasing importance of in-depth analysis of text data for various applications. Text data are often organized in a database with documents labeled by attributes such as time and location. Different documents manifest different topics. The topics of the documents may change along the attributes of the documents, and such changes have been the subject of research in the past. However, previous analysis techniques, such as topic detection and tracking, topic lifetime, and burstiness, all focus on the topic behavior of the documents in a given attribute range without contrasting it with the documents in the overall range. This paper introduces the concept of unique topics, referring to topics that appear frequently only within a small range of documents but not in the whole range. These unique topics may reflect some unique characteristics of documents in this small range that are not found outside the range. The paper aims at an efficient pruning-based algorithm that, for a user-given set of keywords and a user-given attribute, finds the maximal ranges along the given attribute and their unique topics that are highly related to the given keyword set. Thorough experiments show that the algorithm is effective in various scenarios.

15.
In this paper, we address the problem of verifying probabilistic and epistemic properties of concurrent probabilistic systems expressed in PCTLK. PCTLK is an extension of the Probabilistic Computation Tree Logic (PCTL) augmented with Knowledge (K). In fact, PCTLK enjoys two epistemic modalities: K_i for knowledge and \(Pr_{\triangledown b}K_{i}\) for probabilistic knowledge. The approach presented for verifying PCTLK specifications in such concurrent systems is based on a transformation technique. More precisely, we convert PCTLK model checking into the problem of model checking Probabilistic Branching Time Logic (PBTL), which enjoys path quantifiers ranging over adversaries. We then prove that model checking a formula of PCTLK in concurrent probabilistic programs is PSPACE-complete. Furthermore, we represent models associated with PCTLK logic symbolically with Multi-Terminal Binary Decision Diagrams (MTBDDs), which are supported by the probabilistic model checker PRISM. Finally, an application, namely the NetBill online shopping payment protocol, and an example about synchronization illustrated through the dining philosophers problem are implemented with the MTBDD engine of this model checker to verify probabilistic epistemic properties and to evaluate the practical complexity of this verification.

16.
An optimal probabilistic-planning algorithm solves a problem, usually modeled by a Markov decision process, by finding an optimal policy. In this paper, we study the k best policies problem: finding the k best policies of a discrete Markov decision process. The k best policies, k > 1, cannot be found directly using dynamic programming. Naïvely, finding the k-th best policy can be Turing-reduced to the optimal planning problem, but the number of problems queried in the naïve algorithm is exponential in k. We show empirically that solving the k best policies problem by using this reduction requires unreasonable amounts of time even when k = 3. We then provide two new algorithms. The first is a complete algorithm, based on our theoretical contribution that the k-th best policy differs from the i-th best policy, for some i < k, on exactly one state. The second is an approximate algorithm that skips many less useful policies. We show that both algorithms have good scalability. We also show that the approximate algorithm runs much faster and finds interesting, high-quality policies.
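A brute-force illustration of the k best policies problem on a tiny, made-up MDP: enumerate all deterministic policies, evaluate each by iterative policy evaluation, and sort them. This is not the paper's algorithm, and the ranking criterion used here (sum of state values) is an assumption.

```python
# Enumerate and rank all deterministic policies of a toy MDP (illustration only).
from itertools import product

STATES = [0, 1]
ACTIONS = ["a", "b"]
GAMMA = 0.9
# transition[s][a] = list of (probability, next_state, reward)
TRANSITIONS = {
    0: {"a": [(0.8, 0, 1.0), (0.2, 1, 0.0)], "b": [(1.0, 1, 0.3)]},
    1: {"a": [(1.0, 0, 0.0)], "b": [(0.5, 1, 2.0), (0.5, 0, 0.0)]},
}

def evaluate(policy, sweeps=500):
    """Iterative policy evaluation for a deterministic policy (dict state -> action)."""
    v = {s: 0.0 for s in STATES}
    for _ in range(sweeps):
        v = {s: sum(p * (r + GAMMA * v[s2]) for p, s2, r in TRANSITIONS[s][policy[s]])
             for s in STATES}
    return v

policies = [dict(zip(STATES, choice)) for choice in product(ACTIONS, repeat=len(STATES))]
ranked = sorted(policies, key=lambda pi: sum(evaluate(pi).values()), reverse=True)
for rank, pi in enumerate(ranked[:3], start=1):   # the 3 best policies
    print(rank, pi, round(sum(evaluate(pi).values()), 3))
```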

17.
Mutually independent Hamiltonian cycles in dual-cubes
The hypercube family Q_n is one of the most well-known interconnection networks for parallel computers. Based on Q_n, the dual-cube network, denoted by DC_n, was introduced and shown to be an (n+1)-regular, vertex-symmetric graph with some fault-tolerant Hamiltonian properties. In addition, DC_n was shown to be superior to Q_n in many aspects. In this article, we prove that the n-dimensional dual-cube DC_n contains n+1 mutually independent Hamiltonian cycles for n ≥ 2. More specifically, let v_i ∈ V(DC_n) for 0 ≤ i ≤ |V(DC_n)| − 1 and let \(\langle v_{0},v_{1},\ldots ,v_{|V(\mathit{DC}_{n})|-1},v_{0}\rangle\) be a Hamiltonian cycle of DC_n. We prove that DC_n contains n+1 Hamiltonian cycles of the form \(\langle v_{0},v_{1}^{k},\ldots,v_{|V(\mathit{DC}_{n})|-1}^{k},v_{0}\rangle\) for 0 ≤ k ≤ n, in which v_i^k ≠ v_i^{k'} whenever k ≠ k'. The result is optimal since each vertex of DC_n has only n+1 neighbors.
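The positional condition in the definition can be checked directly, as in the sketch below; the example cycles are Hamiltonian cycles of the complete graph K_4, used only to illustrate the check, not cycles of a dual-cube.

```python
# Check the "mutually independent" condition: cycles starting at the same vertex v_0
# never visit the same vertex at the same position (except at position 0).
def mutually_independent(cycles):
    length = len(cycles[0])
    for k in range(len(cycles)):
        for k2 in range(k + 1, len(cycles)):
            for i in range(1, length):          # position 0 is the common start v_0
                if cycles[k][i] == cycles[k2][i]:
                    return False
    return True

# Three Hamiltonian cycles of K_4 starting at vertex 0:
cycle_a = [0, 1, 2, 3]
cycle_b = [0, 3, 1, 2]
cycle_c = [0, 2, 3, 1]
print(mutually_independent([cycle_a, cycle_b, cycle_c]))  # True
```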

18.
We study a network formation game where players wish to send traffic to other players. Players can be seen as nodes of an undirected graph whose edges are defined by contracts between the corresponding players. Each player can contract bilaterally with others to form bidirectional links or unilaterally break contracts to eliminate the corresponding links. Our model is an extension of the traffic routing model considered in Arcaute, Johari, and Mannor (IEEE Trans. Automat. Contr. 54(8):1765–1778, 2009) in which we do not require the traffic to be uniform and all-to-all. Player i specifies the amount of traffic t_ij ≥ 0 that it wants to send to player j. Our notion of stability is network pairwise Nash stability: no node wishes to deviate unilaterally and no pair of nodes can obtain benefit from deviating bilaterally. We give a characterization of the topologies that are pairwise Nash stable for a given traffic matrix. We prove that the best response problem is NP-hard and devise a myopic dynamics in which the deviation of the active node can be computed in polynomial time. We show the convergence of the dynamics to pairwise Nash configurations when the contracting functions are anti-symmetric and affine, and that the expected convergence time is polynomial in the number of nodes when the node activation process is uniform.

19.
We study the efficiency of the proportional allocation mechanism that is widely used to allocate divisible resources. Each agent submits a bid for each divisible resource and receives a fraction proportional to her bids. We quantify the inefficiency of Nash equilibria by studying the Price of Anarchy (PoA) of the induced game under complete and incomplete information. When agents' valuations are concave, we show that Bayesian Nash equilibria can be arbitrarily inefficient, in contrast to the well-known 4/3 bound for pure equilibria of Johari and Tsitsiklis (Math. Oper. Res. 29(3):407–435, 2004). Next, we upper-bound the PoA over Bayesian equilibria by 2 when agents' valuations are subadditive, generalizing and strengthening previous bounds on lattice submodular valuations. Furthermore, we show that this bound is tight and cannot be improved by any simple or scale-free mechanism. Then we switch to settings with budget constraints, and we show an improved upper bound on the PoA over coarse-correlated equilibria. Finally, we prove that the PoA is exactly 2 for pure equilibria in the polyhedral environment.
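The sketch below sets up the proportional allocation mechanism for a single divisible resource in its standard Kelly form (each agent pays her bid; this payment rule is an assumption here), approximates a pure Nash equilibrium by best-response dynamics for concave valuations, and compares the equilibrium welfare with the optimum, which by the 4/3 bound can exceed it by at most a factor of 4/3 in this complete-information concave setting.

```python
# Illustrative sketch with two hypothetical agents and concave valuations.
import math

valuations = [lambda x: 2.0 * math.sqrt(x), lambda x: math.sqrt(x)]

def best_response(i, bids, grid=2000, b_max=2.0):
    """Grid search over agent i's bid, keeping the other bids fixed."""
    others = sum(b for j, b in enumerate(bids) if j != i)
    best_b, best_u = 0.0, 0.0
    for k in range(1, grid + 1):
        b = b_max * k / grid
        u = valuations[i](b / (b + others)) - b      # utility = valuation of share - payment
        if u > best_u:
            best_b, best_u = b, u
    return best_b

bids = [0.1, 0.1]
for _ in range(100):                                 # best-response dynamics
    for i in range(len(bids)):
        bids[i] = best_response(i, bids)

shares = [b / sum(bids) for b in bids]               # proportional allocation
eq_welfare = sum(v(x) for v, x in zip(valuations, shares))
opt_welfare = max(valuations[0](t) + valuations[1](1.0 - t)
                  for t in (k / 1000 for k in range(1001)))
print("equilibrium shares:", [round(x, 3) for x in shares])
print("welfare ratio (at least 3/4 by the 4/3 PoA bound):", round(eq_welfare / opt_welfare, 3))
```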

20.
In this paper, an innovative framework termed cooperative cognitive maritime big data systems (CCMBDSs) on the sea is developed to provide opportunistic channel access and secure communication. A two-phase frame structure is applied to let secondary users (SUs) fully utilize the transmission opportunities for a portion of time as a reward for cooperation with primary users (PUs). Amplify-and-forward (AF) relaying is employed at the SU nodes, and a backward-induction-based Stackelberg game is used to optimally determine the SU, the power consumption, and the time portion of cooperation, for both the non-secure and the secure communication scenarios. Specifically, a jammer-based secure communication scheme is developed to maximize the secure utility of the PU in the situation where an eavesdropper could overhear the signals from SU_i and the jammer. Closed-form solutions for the best access time portion as well as the powers of SU_i and the jammer are derived to attain the Nash equilibrium. Simulation results validate the effectiveness of the proposed strategy.
