1.
Lakshmi Manasa Shankara Narayanan Krishna Chinmay Jain 《Theory of Computing Systems》2011,48(3):648-679
Weighted timed automata (WTA), introduced in Alur et al. (Proceedings of HSCC’01, LNCS, vol. 2034, pp. 49–62, Springer, Berlin,
2001), Behrmann et al. (Proceedings of HSCC’01, LNCS, vol. 2034, pp. 147–161, Springer, Berlin, 2001) are an extension of Alur and Dill (Theor. Comput. Sci. 126(2):183–235, 1994) timed automata, a widely accepted formalism for the modelling and verification of real time systems. Weighted timed automata
extend timed automata by allowing costs on the locations and edges. There has been a lot of interest (Bouyer et al., Inf. Process. Lett. 98(5):188–194, 2006; Bouyer et al., Log. Methods Comput. Sci. 4(2):9, 2008; Brihaye et al., Proceedings of FORMATS/FTRTFT’04, LNCS, vol. 3253, pp. 277–292, Springer, Berlin, 2004; Brihaye et al., Inf. Comput. 204(3):408–433, 2006) in studying the model checking problem of weighted timed automata. The properties of interest are written using the logic weighted CTL (WCTL), an extension of CTL with costs. It has been shown by Bouyer et al. (Log. Methods Comput. Sci. 4(2):9, 2008) that the problem of model checking WTAs with a single clock using WCTL with no external cost variables is decidable, while 3 clocks render the problem undecidable (Bouyer et al., Inf. Process. Lett. 98(5):188–194, 2006). The question of 2 clocks is open. In this paper, we introduce a subclass of weighted timed automata called weighted integer
reset timed automata (WIRTA) and study the model checking problem. We give a clock reduction technique for WIRTA. Given a
WIRTA 𝒜 with n≥1 clocks, we show that a single-clock WIRTA 𝒜′ preserving the paths and costs of 𝒜 can be obtained. This gives us the decidability of model checking WIRTA with n≥1 clocks and m≥1 costs using WCTL with no external cost variables. We then show that for a restricted version of WCTL with external cost variables, the model checking problem is undecidable for WIRTA with 3 stopwatch costs and 1 clock. Finally, we show that model checking
WTA with 2 clocks and 1 stopwatch cost against WCTL with no external cost variables is undecidable, thereby answering a question that has long remained open.
2.
In this paper, we propose a unified primal-dual algorithm framework for two classes of problems that arise from various signal
and image processing applications. We also show the connections to existing methods, in particular Bregman iteration (Osher
et al., Multiscale Model. Simul. 4(2):460–489, 2005) based methods, such as linearized Bregman (Osher et al., Commun. Math. Sci. 8(1):93–111, 2010; Cai et al., SIAM J. Imag. Sci. 2(1):226–252, 2009, CAM Report 09-28, UCLA, March 2009; Yin, CAAM Report, Rice University, 2009) and split Bregman (Goldstein and Osher, SIAM J. Imag. Sci., 2, 2009). The convergence of the general algorithm framework is proved under mild assumptions. The applications to ℓ1 basis pursuit, TV−L2 minimization and matrix completion are demonstrated. Finally, the numerical examples show that the proposed algorithms are easy to implement, efficient, stable and flexible enough to cover a wide variety of applications.
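The linearized Bregman method cited above centers on a soft-thresholding (shrinkage) step. The following is a minimal sketch for a toy ℓ1 basis pursuit problem; the problem data, step size, and parameter choices are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def shrink(v, mu):
    """Soft-thresholding operator: the closed-form proximal map of mu*||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu=1.0, delta=None, iters=2000):
    """Minimal linearized Bregman sketch for min ||x||_1 s.t. Ax = b.
    The step size delta is kept small relative to ||A||^2 (an illustrative choice)."""
    if delta is None:
        delta = 1.0 / np.linalg.norm(A, 2) ** 2
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v = v + A.T @ (b - A @ x)   # Bregman/gradient update on the auxiliary variable
        x = delta * shrink(v, mu)   # shrinkage step
    return x

# Tiny illustrative problem: the minimum-l1 solution of Ax = b is [2, 0, 0].
A = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
b = np.array([2.0, 0.0])
x = linearized_bregman(A, b, mu=1.0, iters=100)
```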
3.
Generating economical single-part flow-line (SPFL) configurations as candidates for a given demand period is an important
optimization problem for reconfigurable manufacturing systems (RMS). The optimization problem addresses the questions of selecting
the number of workstations, the number and type of parallel identical machines, as well as the operation setups (OSs) for each workstation. The inputs include a precedence graph for the part, relationships between OSs and operations, and machine options for each OS. The
objective is to minimize the capital costs of the SPFL configurations. A 0–1 nonlinear programming (NLP) model is developed
to handle the key issue of sharing machine utilization over consecutive OSs, which is ignored in the existing 0–1 integer linear programming (ILP) model. Then a GA-based approach is proposed to identify a set of economical solutions. To overcome the complexity
of search space, a novel procedure is presented to guide GA to search within a refined solution space comprising the optimal
configurations associated with feasible OS sequences. A case study shows that the best solution derived from the 0–1 NLP model through GA is better than the optimum of the existing 0–1 ILP model. The results illustrate the effectiveness of our model and the efficiency of the GA-based approach.
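The GA-based search described above can be sketched at a generic level. The encoding, operators, and parameters below are illustrative assumptions for a toy minimization problem, not the paper's actual RMS configuration model:

```python
import random

def genetic_search(cost, n_bits, pop_size=30, generations=60, p_mut=0.02, seed=0):
    """Generic GA sketch: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        a, b = rng.sample(pop, 2)
        return a if cost(a) <= cost(b) else b

    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]                          # one-point crossover
            child = [b ^ (rng.random() < p_mut) for b in child]  # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return min(pop, key=cost)

# Toy cost: candidate configurations are bitstrings; cost counts deviations
# from a hypothetical ideal setup (purely illustrative).
target = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
best = genetic_search(lambda c: sum(x != t for x, t in zip(c, target)), 10)
```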
4.
Chun-Che Lin Jue-Liang Hsu Chin-Chung Tseng Gwo-Bin Lee 《Microfluidics and nanofluidics》2011,10(5):1055-1067
This study presents an integrated microfluidic system for the determination of microalbuminuria (MAU) through the measurements
of the albumin-to-creatinine ratios (ACRs) in patients’ urinary samples. Albumin concentrations are determined based on a non-immunological
dye binding assay in which the dyes react specifically with albumin to undergo a strong fluorescence enhancement. Creatinine
concentrations are determined based on the Jaffé reaction in which the reagents react specifically with creatinine to form
orange–red colored complexes. Two calibration curves for determining the concentrations of urinary albumin and creatinine
are constructed with assay ranges of 5–220 and 1–100 mg/l, respectively. Using this system to determine the ACRs of collected
clinical urine samples, statistical tools including Bland–Altman bias plot and Passing–Bablok regression analysis show that
the results obtained by the proposed microfluidic system are in good agreement with those obtained by conventional methods.
This simple, automatic, inexpensive, and microchip-based platform demonstrates a promising alternative to the conventional
assays for determining MAU and may be suited for clinical applications.
5.
R. Vyzhikovski Yu. S. Kanevskii O. V. Maslennikov 《Cybernetics and Systems Analysis》1996,32(2):190-204
Conclusion We have proposed a modification of the orthogonal Faddeev method [6] for solving various SLAE and also for inversion and pseudoinversion
of matrices. The proposed version of the method relies on Householder and Jordan-Gauss methods and its computational complexity
is approximately half that of [6]. This method, combined with the matrix-graph method [9] of formalized SPPC structure design,
has been applied to synthesize a number of AP architectures that efficiently implement the proposed method. Goal-directed
isomorphic and homeomorphic transformations of the LFG of the original algorithm (5) lead to a one-dimensional (linear) AP
of fixed size, with minimum hardware and time costs and with minimized input-output channel width.
The proposed algorithm (5) has been implemented using a 4-processor AP, with Motorola DSP96002 processors as PEs (Fig. 7).
Application of the algorithm (5) to solve an SLAE with a coefficient matrix A with M=N=100 and one right-hand side on this AP produced a load factor η=0.82; for inversion of the matrix A of the same size we achieved η=0.77.
The sequence of transformations and the partitioning of a trapezoidal planar LFG described in this article have been generalized to the case of other LA algorithms described by triangular planar LFGs and executed on linear APs. It is shown that the AP
structures synthesized in this study execute all the above-listed algorithms no less efficiently than the modified Faddeev
algorithm, provided their PEs are initially tuned to the execution of the corresponding operators.
Translated from Kibernetika i Sistemnyi Analiz, No. 2, pp. 47–66, March–April, 1996.
6.
Laurent Bienvenu 《Theory of Computing Systems》2010,46(3):598-617
Merkle et al. (Ann. Pure Appl. Logic 138(1–3):183–210, 2006) showed that all Kolmogorov-Loveland stochastic infinite binary sequences have constructive Hausdorff dimension 1. In this
paper, we go even further, showing that from an infinite sequence of dimension less than ℋ(1/2 + δ) (ℋ being the Shannon entropy function) one can extract by an effective selection rule a biased subsequence with bias at least δ. We also prove an analogous result for finite strings.
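The function ℋ in the dimension bound above is the binary Shannon entropy. A quick sketch of it, with an illustrative numeric check of the threshold it defines:

```python
import math

def binary_entropy(p):
    """H(p) = -p*log2(p) - (1-p)*log2(1-p): the entropy of a coin with bias p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# Per the theorem above, a sequence of constructive dimension below H(1/2 + delta)
# admits an effectively selected subsequence with bias at least delta.
delta = 0.1
dim_threshold = binary_entropy(0.5 + delta)
```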
7.
Mark Huber 《Algorithmica》2006,44(3):183-193
We present the first algorithm for generating random variates
exactly uniformly from the set of perfect matchings of a bipartite
graph with a polynomial expected running time over a nontrivial
set of graphs. Previous Markov chain results obtain
approximately uniform variates for arbitrary graphs in polynomial
time, but their general running time is Θ(n^10 (ln n)^2). Other algorithms (such as Kasteleyn's O(n^3) algorithm for planar graphs)
concentrated on restricted versions of the problem.
Our algorithm employs acceptance/rejection together with a new upper
limit on the permanent of a form similar to Bregman's theorem.
For graphs with 2n nodes, where the degree of every node is γn for a constant γ, the expected running time is O(n^(1.5+0.5/γ)). Under these conditions, Jerrum and Sinclair showed that a Markov chain of Broder can generate approximately uniform variates in Θ(n^(4.5+0.5/γ) ln n) time, making our algorithm significantly faster on this class of graphs. The problem of counting the number of perfect matchings in these types of graphs is #P-complete. In addition, we give a 1+σ approximation algorithm for finding the permanent of 0–1 matrices with identical row and column sums that runs in O(n^(1.5+0.5/γ) (1/σ^2) log(1/δ)), where the probability that the output is within 1+σ of the permanent is at least 1−δ.
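The abstract's new bound is described only as "of a form similar to Bregman's theorem"; as background, the classical Bregman–Minc bound perm(A) ≤ ∏ᵢ (rᵢ!)^(1/rᵢ) over row sums rᵢ can be checked against a brute-force permanent. The matrix below is an illustrative example, not from the paper:

```python
import math
from itertools import permutations

def permanent(A):
    """Brute-force permanent; exponential time, fine for tiny 0-1 matrices."""
    n = len(A)
    return sum(
        math.prod(A[i][p[i]] for i in range(n))
        for p in permutations(range(n))
    )

def bregman_minc_bound(A):
    """Bregman's theorem: perm(A) <= prod_i (r_i!)^(1/r_i) for a 0-1 matrix
    with positive row sums r_i."""
    return math.prod(math.factorial(sum(row)) ** (1.0 / sum(row)) for row in A)

A = [[1, 1, 0],
     [1, 1, 1],
     [0, 1, 1]]
```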
8.
Quantum key distribution (QKD) refers to specific quantum strategies which permit the secure distribution of a secret key
between two parties that wish to communicate secretly. Quantum cryptography has proven unconditionally secure in ideal scenarios
and has been successfully implemented using quantum states with finite (discrete) as well as infinite (continuous) degrees
of freedom. Here, we analyze the efficiency of QKD protocols that use as a resource entangled gaussian states and gaussian
operations only. In this framework, it has already been shown that QKD is possible [1] but the issue of its efficiency has
not been considered. We propose a figure of merit (the efficiency E) to quantify the number of classical correlated bits that can be used to distill a key from a sample of N entangled states. We relate the efficiency of the protocol to the entanglement and purity of the states shared between the
parties.
Presented at the 38th Symposium on Mathematical Physics “Quantum Entanglement & Geometry”, Toruń, June 4–7, 2006.
9.
We offer evidence disproving the continuity of the length of minimum inner spanning trees with respect to a parameter vector having a zero component. The continuity property is the key step of the proof of the conjecture in Du and Hwang (Proc. Nat. Acad. Sci. U.S.A. 87:9464–9466, 1990; Algorithmica 7(1):121–135, 1992). Therefore the Steiner ratio conjecture proposed by Gilbert and Pollak (SIAM J. Appl. Math. 16(1):1–29, 1968) has not been proved yet. The Steiner ratio of a round sphere has been discussed in Rubinstein and Weng (J. Comb. Optim. 1:67–78, 1997) by assuming the validity of the conjecture on a Euclidean plane in Du and Hwang (Proc. Nat. Acad. Sci. U.S.A. 87:9464–9466, 1990; Algorithmica 7(1):121–135, 1992). Hence the results in Rubinstein and Weng (J. Comb. Optim. 1:67–78, 1997) have not been proved yet.
10.
Zoran Stejić Yasufumi Takama Kaoru Hirota 《Soft Computing - A Fusion of Foundations, Methodologies and Applications》2007,11(7):669-678
We systematically compare four variants of evolutionary algorithms (EAs) for relevance feedback (RF) in image retrieval. We
focus on the small sample size (SSS) performance, i.e., the requirement to learn the user’s information need based on a small
– between 2 and 25 – number of positive and negative training images. Despite this being a fundamental requirement, none of
the existing works dealing with EAs for RF in image retrieval addresses it. Common for the four variants is the hierarchical,
region-based image similarity model, with region and feature weights as parameters. The difference between the variants is
in the objective function of the EA used to adjust the model parameters. The objective functions include: (O-1) precision;
(O-2) average rank; (O-3) ratio of within-class (i.e., positive images) and between-class (i.e., positive and negative images)
scatter; and (O-4) combination of O-2 and O-3. We note that – unlike O-1 and O-2 – O-3 and O-4 are not used in any of the
existing works dealing with EAs for RF. The four variants are evaluated on five test databases, containing 61,895 general-purpose
images, in 619 semantic categories. Results of the evaluation reveal that variants with objective functions O-3 and O-4 consistently
outperform those with O-1 and O-2. Furthermore, comparison with the representative of the existing RF methods shows that EAs
are both effective and efficient approaches for SSS learning in region-based image retrieval.
11.
Maurine Malak Ahmed Hisham Morshed Khaled Hassan Tarik Bourouina Hanan Anis Diaa Khalil 《Microsystem Technologies》2010,16(7):1139-1156
New designs for 1 × 4 and 1 × 8 CWDM multiplexers based on cascaded groups of series-coupled ring resonators (Little et
al. in J Lightwave Technol 15:998–1005, 1997; IEEE Photon Technol Lett 10:2263–2265, 2004; Hryniewicz et al. in IEEE Photon
Technol Lett 12:320–322, 2000) are presented. Compared to other integrated optical alternatives such as MMI phasars (Paiam
and MacDonald in Appl Opt 36: 5097–5108, 1997), cascaded Mach–Zehnder interferometers (Wang and He in J Lightwave Technol
23:1284–1290, 2005) and cascaded AWG (Dragone in IEEE Photon Technol Lett 3:812–815, 1991; Uetsuka in IEEE J Sel Top Quant
Electron 10:393–402, 2004), the proposed circuits offer superior performance in their very sharp roll-off factor that exceeds
0.75, their reduced crosstalk level that lies below −60 dB and their negligible insertion loss for the 1 × 4 design. For the
1 × 8 design, the worst case insertion loss is 4 dB. However, the performances obtained exhibit passband ripples in the order
of 5 dB, and besides, they are not very tolerant to fabrication errors. Being designed for SOI technology, the proposed circuits
are compact, as the circuit areas are 130 × 130 and 90 × 150 μm² for the 1 × 4 and 1 × 8 designs, respectively. They also have a high potential for MEMS tunability.
12.
13.
Much work on skewed, stochastic, high-dimensional, and biased datasets usually implicitly solves each problem separately. Recently,
we have been approached by Texas Commission on Environmental Quality (TCEQ) to help them build highly accurate ozone level
alarm forecasting models for the Houston area, where these technical difficulties come together in one single problem. Key characteristics that make this problem challenging and interesting include: (1) the dataset is sparse (72 features, and 2% or 5% positives depending on the criteria of “ozone days”), (2) it evolves over time from year to year, (3) it is limited in collected
data size (7 years or around 2,500 data entries), (4) contains a large number of irrelevant features, (5) is biased in terms
of “sample selection bias”, and (6) the true model is stochastic as a function of measurable factors. Besides solving a difficult
application problem, this dataset offers a unique opportunity to explore new and existing data mining techniques, and to provide
experience, guidance and solutions for similar problems. Our main technical focus addresses how to estimate reliable probabilities given both sample selection bias and a large number of irrelevant features, and how to choose the most reliable decision threshold to predict the unknown future with a different distribution. On the application side, the prediction accuracy of our chosen
approach (bagging probabilistic decision trees and random decision trees) is 20% higher in recall (correctly detects 1–3 more
ozone days, depending on the year) and 10% higher in precision (15–30 fewer false alarm days per year) than state-of-the-art
methods used by air quality control scientists, and these results are significant for TCEQ. On the technical side of data
mining, extensive empirical results demonstrate that, at least for this problem, and probably other problems with similar
characteristics, these two straight-forward non-parametric methods can provide significantly more accurate and reliable solutions
than a number of sophisticated and well-known algorithms, such as SVM and AdaBoost among many others.
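The recall and precision gains cited above can be made concrete. A minimal sketch of the two metrics on hypothetical forecast labels; the data below is invented for illustration and is not TCEQ data:

```python
def precision_recall(predicted, actual):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN), with 1 marking an ozone day."""
    tp = sum(p == 1 and a == 1 for p, a in zip(predicted, actual))
    fp = sum(p == 1 and a == 0 for p, a in zip(predicted, actual))
    fn = sum(p == 0 and a == 1 for p, a in zip(predicted, actual))
    return tp / (tp + fp), tp / (tp + fn)

# Hypothetical stretch of 10 days: 3 true ozone days, 4 alarms raised.
actual    = [0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
predicted = [0, 1, 1, 0, 1, 0, 0, 0, 1, 0]
precision, recall = precision_recall(predicted, actual)
```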
14.
Previous work has demonstrated that the use of structured abstracts can lead to greater completeness and clarity of information,
making it easier for researchers to extract information about a study. In academic year 2007/08, Durham University’s Computer Science Department revised the format of
the project report that final year students were required to write, from a ‘traditional dissertation’ format, using a conventional
abstract, to that of a 20-page technical paper, together with a structured abstract. This study set out to determine whether
inexperienced authors (students writing their final project reports for computing topics) find it easier to produce good abstracts, in terms of completeness and clarity, when using a structured form rather than a conventional form. We performed
a controlled quasi-experiment in which a set of ‘judges’ each assessed one conventional and one structured abstract for its
completeness and clarity. These abstracts were drawn from those produced by four cohorts of final year students: two preceding
the change, and the two following. The assessments were performed using a form of checklist that is similar to those used
for previous experimental studies. We used 40 abstracts (10 per cohort) and 20 student ‘judges’ to perform the evaluation.
Scored on a scale of 0.1–1.0, the mean for completeness increased from 0.37 to 0.61 when using a structured form. For clarity,
using a scale of 1–10, the mean score increased from 5.1 to 7.2. For a minimum goal of scoring 50% for both completeness and
clarity, only 3 from 19 conventional abstracts achieved this level, while only 3 from 20 structured abstracts failed to reach
it. We conclude that the use of a structured form for organising the material of an abstract can assist inexperienced authors
with writing technical abstracts that are clearer and more complete than those produced without the framework provided by
such a mechanism.
15.
Magdalena Stobińska G. J. Milburn Krzysztof Wódkiewicz 《Open Systems & Information Dynamics》2007,14(1):81-90
We present an effective method of coherent state superposition (cat state) generation using a single trapped ion in a Paul trap.
The method is experimentally feasible for coherent states with amplitude α ≤ 2 using available technology. It works both in
and beyond the Lamb-Dicke regime.
Presented at the 38th Symposium on Mathematical Physics “Quantum Entanglement & Geometry”, Toruń, June 4–7, 2006.
16.
Chung-Chih Li 《Theory of Computing Systems》2009,45(4):880-896
A classic result known as the speed-up theorem in machine-independent complexity theory shows that there exist some computable
functions that do not have best programs for them (Blum in J. ACM 14(2):322–336, 1967 and J. ACM 18(2):290–305, 1971). In this paper we lift this result into type-2 computations. Although the speed-up phenomenon is essentially inherited from
type-1 computations, we observe that a direct application of the original proof to our type-2 speed-up theorem is problematic
because the oracle queries can interfere with the speed of the programs and hence the cancellation strategy used in the original
proof is no longer correct at type-2. We also argue that a type-2 analog of the operator speed-up theorem (Meyer and Fischer
in J. Symb. Log. 37:55–68, 1972) does not hold, which suggests that this curious speed-up phenomenon disappears in higher-typed computations beyond type-2.
The result of this paper adds one more piece of evidence to support the general type-2 complexity theory under the framework
proposed in Li (Proceedings of the Third International Conference on Theoretical Computer Science, pp. 471–484, 2004 and Proceedings of Computability in Europe: Logical Approach to Computational Barriers, pp. 182–192, 2006) and Li and Royer (On type-2 complexity classes: Preliminary report, pp. 123–138, 2001) as a reasonable setup.
17.
Wei Chen Zhenming Liu Xiaorui Sun Yajun Wang 《Data mining and knowledge discovery》2010,21(2):224-240
In this paper, we introduce a game-theoretic framework to address the community detection problem based on the structures
of social networks. We formulate the dynamics of community formation as a strategic game called community formation game: Given an underlying social graph, we assume that each node is a selfish agent who selects communities to join or leave based
on her own utility measurement. A community structure can be interpreted as an equilibrium of this game. We formulate the
agents’ utility by the combination of a gain function and a loss function. We allow each agent to select multiple communities,
which naturally captures the concept of “overlapping communities”. We propose a gain function based on the modularity concept
introduced by Newman (Proc Natl Acad Sci 103(23):8577–8582, 2006), and a simple loss function that reflects the intrinsic
costs incurred when people join the communities. We conduct extensive experiments under this framework, and our results show
that our algorithm is effective in identifying overlapping communities and is often better than the other algorithms we evaluated, especially when many people belong to multiple communities. To the best of our knowledge, this is the first time the community detection problem has been addressed by a game-theoretic framework that considers community formation as the result of individual agents’ rational behaviors.
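The Newman modularity underlying the gain function above has a simple closed form, Q = Σ_c (e_c/m − (d_c/2m)²), where e_c counts intra-community edges and d_c sums the degrees of community c. A minimal check on two disjoint triangles; the graph is an illustrative toy, not from the paper:

```python
def modularity(edges, communities):
    """Newman modularity Q = sum_c (e_c/m - (d_c/2m)^2) over communities c."""
    m = len(edges)
    q = 0.0
    for comm in communities:
        nodes = set(comm)
        e_c = sum(u in nodes and v in nodes for u, v in edges)   # intra edges
        d_c = sum((u in nodes) + (v in nodes) for u, v in edges) # total degree
        q += e_c / m - (d_c / (2 * m)) ** 2
    return q

# Two disjoint triangles: the natural two-community split scores Q = 0.5.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
q = modularity(edges, [[0, 1, 2], [3, 4, 5]])
```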
18.
We develop an algorithm to compute asset allocations for Kahneman and Tversky’s (Econometrica, 47(2), 263–291, 1979) prospect theory. An application to benchmark data as in Fama and French (Journal of Finance, 47(2), 427–465, 1992) shows that the equity premium puzzle is resolved for parameter values similar to those found in the laboratory
experiments of Kahneman and Tversky (Econometrica, 47(2), 263–291, 1979). While previous studies like Benartzi and Thaler (The Quarterly Journal of Economics, 110(1), 73–92, 1995), Barberis, Huang and Santos (The Quarterly Journal of Economics, 116(1), 1–53, 2001), and Grüne and Semmler (Asset prices and loss aversion, Germany, Mimeo Bielefeld University, 2005) focussed on dynamic aspects of asset pricing but only used loss aversion to explain
the equity premium puzzle, our paper explains the unconditional moments of asset pricing by a static two-period optimization problem. However, we incorporate asymmetric risk aversion. Our approach allows reducing the degree of loss aversion from 2.353
to 2.25, which is the value found by Tversky and Kahneman (Journal of Risk and Uncertainty, 5, 297–323, 1992) while increasing the risk aversion from 1 to 0.894, which is a slightly higher value than the 0.88 found
by Tversky and Kahneman (Journal of Risk and Uncertainty, 5, 297–323, 1992). The equivalence of these parameter settings is robust to incorporating the size and the value portfolios
of Fama and French (Journal of Finance, 47(2), 427–465, 1992). However, the optimal prospect theory portfolios found on this larger set of assets differ drastically
from the optimal mean-variance portfolio.
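The loss aversion (λ = 2.25) and risk aversion (α = 0.88) parameters discussed above enter through Tversky and Kahneman's (1992) value function, v(x) = x^α for gains and −λ(−x)^α for losses. A minimal sketch with those laboratory estimates; the gamble is illustrative:

```python
def prospect_value(x, alpha=0.88, lam=2.25):
    """Tversky-Kahneman (1992) value function: v(x) = x^alpha for gains,
    -lam * (-x)^alpha for losses; lam > 1 encodes loss aversion."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A 50/50 gamble of +100 / -100 has negative prospect value: losses loom larger.
gamble_value = 0.5 * prospect_value(100) + 0.5 * prospect_value(-100)
```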
19.
In this paper, we extend to Galerkin methods the adjoint error correction of Pierce and Giles (SIAM Rev. 42:247–264, 2000) for obtaining superconvergent approximations of functionals. We illustrate the technique
in the framework of discontinuous Galerkin methods for ordinary differential and convection–diffusion equations in one space
dimension. It is well known that approximations to linear functionals obtained by discontinuous Galerkin methods with polynomials
of degree k can be proven to converge with order 2k + 1 and 2k for ordinary differential and convection–diffusion equations, respectively. In contrast, the order of convergence of the
adjoint error correction method can be proven to be 4k + 1 and 4k, respectively. Since both approaches have a computational complexity of the same order, the adjoint error correction method
is clearly a competitive alternative. Numerical results which confirm the theoretical predictions are presented.
20.
This paper reports the findings of a detailed study of Web-based systems design (WBSD) practices in Ireland based on data
collected over a 3-year period (2002–2005), the objectives of which were to (1) contribute towards a richer understanding
of the current “real-world” context of WBSD by characterising the profile of a typical project (team size, timeframe, nature
of requirements, etc.) and identifying the key challenges, constraints, and imperatives (i.e. “mediating factors”) faced by
Web-based system designers, and (2) understand how those contextual parameters and mediating factors influence the activity
of WBSD as regards the selection and enactment of whatever design practices are therefore engaged (i.e. the use of methods,
procedures, etc.). Data was gathered through a survey which yielded 165 usable responses, and later through a series of semi-structured
qualitative interviews. Using grounded theory, an explanatory conceptual framework is derived, based on an extension of the
“method-in-action” model, the application of which to WBSD has not been previously investigated in depth. It is proposed that
this framework of WBSD issues is valuable in a number of ways to educators, researchers, practitioners, and method engineers.