Similar Documents
20 similar documents found; search took 15 ms.
1.
We consider the problem of deciding whether a fine-grained access control policy for tree updates allows a particular document to be constructed. This problem arises from a number of natural questions related to document security, authenticity, and verifiability. Fine-grained access control is the problem of specifying the set of operations that may be performed on a complex structure. For tree-structured databases and documents, particularly XML, a rule-based approach is most common. In this model, access control policies consist of rules that select the allowed or disallowed targets of queries or updates based on their hierarchical relationships to other nodes. We show that, for a typical form of rule-based fine-grained access control policies based on a simple fragment of XPath, this problem is undecidable. We also prove lower bounds on the complexity of various restrictions of this problem, and demonstrate deterministic and nondeterministic polynomial-time algorithms for two restrictions in particular. These results show that, for sufficiently complex access control languages, certain forms of analysis are very difficult or even impossible, limiting the ability to verify documents, audit existing policies, and evaluate new policies. Thus rule-based access control policies based on XPath are, in some sense, too powerful, demonstrating the need for a model of access control of tree updates that bridges the gap between expressive and analyzable policies.

2.
Fix a number K of colors. We consider the usual backtrack algorithm for the decision problem of K-colorability of a graph G. We show that the algorithm operates in average time that is O(1) as the number of vertices of G approaches infinity. For instance, a backtrack search tree for 3-coloring a graph has an average of about 197 nodes, averaged over all graphs of all sizes.
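As an illustration of the backtrack search the abstract analyses, here is a minimal, hedged sketch in Python (not the paper's code); the node counter tracks the size of the search tree, the quantity whose average the abstract reports.

```python
# Hedged sketch: backtrack search for K-colorability of a graph given as an
# adjacency list {vertex: set(neighbours)}. 'nodes' counts search-tree nodes,
# the quantity whose average behaviour the abstract discusses.

def k_colorable(graph, K):
    vertices = list(graph)
    nodes = 0

    def extend(i, colouring):
        nonlocal nodes
        nodes += 1
        if i == len(vertices):          # every vertex coloured: success
            return True
        v = vertices[i]
        for c in range(K):
            if all(colouring.get(u) != c for u in graph[v]):
                colouring[v] = c
                if extend(i + 1, colouring):
                    return True
                del colouring[v]        # backtrack
        return False

    return extend(0, {}), nodes

# Example: a triangle is 3-colourable but not 2-colourable.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
print(k_colorable(triangle, 3))   # (True, search-tree node count)
print(k_colorable(triangle, 2))   # (False, search-tree node count)
```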

3.
Artificial neural networks and fuzzy systems have gradually established themselves as popular tools for approximating complicated nonlinear systems and for time series forecasting. This paper investigates the hypothesis that the nonlinear mathematical models of multilayer perceptron and radial basis function neural networks and the Takagi–Sugeno (TS) fuzzy system are able to provide a more accurate out-of-sample forecast than the traditional autoregressive moving average (ARMA) and ARMA with generalized autoregressive conditional heteroskedasticity (ARMA-GARCH) linear models. Using series of Brazilian exchange rate (R$/US$) returns at 15-minute, 60-minute, 120-minute, daily, and weekly frequencies, the one-step-ahead forecast performance is compared. Results indicate that forecast performance is strongly related to the series' frequency, and the forecasting evaluation shows that nonlinear models perform better than their linear counterparts. In a trading strategy based on the forecasts, nonlinear models achieve higher returns than a buy-and-hold strategy and than the linear models.
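To make the comparison concrete, here is a hedged sketch of a one-step-ahead evaluation on synthetic data, pitting a linear AR benchmark against a small multilayer perceptron; the data, lag count, and model settings are illustrative assumptions, not the paper's setup.

```python
# Hedged sketch of a one-step-ahead forecast comparison on synthetic "returns":
# a linear AR(5) model fitted by least squares versus a small MLP on the same lags.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
r = rng.standard_normal(1000) * 0.01                      # stand-in for a returns series
lags = 5
X = np.column_stack([r[i:len(r) - lags + i] for i in range(lags)])
y = r[lags:]
split = 800
Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]

# Linear AR(5) benchmark fitted by ordinary least squares.
A = np.column_stack([np.ones(len(Xtr)), Xtr])
coef, *_ = np.linalg.lstsq(A, ytr, rcond=None)
ar_pred = np.column_stack([np.ones(len(Xte)), Xte]) @ coef

# Nonlinear alternative: a small multilayer perceptron on the same lags.
mlp = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0).fit(Xtr, ytr)
mlp_pred = mlp.predict(Xte)

rmse = lambda p: np.sqrt(np.mean((yte - p) ** 2))
print(rmse(ar_pred), rmse(mlp_pred))                      # out-of-sample one-step-ahead RMSE
```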

4.
Many problems consist of splitting a set of objects into groups so that each group satisfies certain properties. In practice, a partition is often encoded as an array mapping each object to its group number. In fact, the group number of an object does not really matter, and one can simply rename each group to obtain a new encoding of the same partition. That is what we call the symmetry of the search space in a partitioning problem. This property can be detrimental to optimization methods such as evolutionary algorithms (EAs), which require some diversity during the search.
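A small illustrative sketch (not from the paper) of the renaming symmetry: two label arrays that differ only by a permutation of group numbers canonicalise to the same encoding.

```python
# Hedged sketch: two partition encodings that differ only by renaming groups
# represent the same partition. Canonicalisation relabels groups in order of
# first appearance, exposing the symmetry the abstract describes.

def canonical(labels):
    mapping, nxt, out = {}, 0, []
    for g in labels:
        if g not in mapping:
            mapping[g] = nxt
            nxt += 1
        out.append(mapping[g])
    return tuple(out)

a = [2, 2, 0, 1, 0]   # objects 0..4 assigned to groups 2,2,0,1,0
b = [1, 1, 3, 0, 3]   # same partition, groups renamed
print(canonical(a) == canonical(b))   # True: identical up to renaming
```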

5.
The tensor-product (TP) model transformation is a recently proposed numerical method capable of transforming linear parameter-varying state-space models to the higher-order singular value decomposition (HOSVD) based canonical form of polytopic models. It is also capable of generating various types of convex TP models, a type of polytopic model, for linear matrix inequality based controller design. The crucial point of the TP model transformation is that its computational load grows exponentially with the dimensionality of the parameter vector of the parameter-varying state-space model. In this paper we propose a modified TP model transformation that leads to a considerable reduction in computation. The key idea of the method is that instead of transforming the whole system matrix at once over the whole parameter space, we decompose the problem, perform the transformation element-wise, and restrict the computation to the subspace where the given element of the model varies. The modified TP model transformation can readily be executed in higher-dimensional cases where the original TP model transformation fails. The effectiveness of the new method is illustrated with numerical examples.
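For intuition, the following hedged numpy sketch shows the core numerical step such a transformation rests on: sampling a toy parameter-varying matrix over a grid and compressing each parameter dimension with an SVD of the corresponding mode unfolding (HOSVD). It is not the authors' modified, element-wise algorithm.

```python
# Hedged sketch of the HOSVD step behind TP model transformations: sample a
# parameter-varying matrix over a grid, then keep the dominant singular
# directions of each parameter-dimension unfolding.
import numpy as np

def S(p1, p2):                       # toy 2x2 parameter-varying system matrix
    return np.array([[1.0 + p1, p2],
                     [p1 * p2,  2.0 - p2]])

g1 = np.linspace(-1, 1, 21)          # sampling grid for p1
g2 = np.linspace(-1, 1, 21)          # sampling grid for p2
T = np.array([[S(a, b) for b in g2] for a in g1])    # shape (21, 21, 2, 2)

def mode_basis(T, mode, tol=1e-9):
    M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)   # mode unfolding
    U, s, _ = np.linalg.svd(M, full_matrices=False)
    return U[:, s > tol * s[0]]      # keep only significant singular directions

U1 = mode_basis(T, 0)                # weighting basis along p1
U2 = mode_basis(T, 1)                # weighting basis along p2
print(U1.shape[1], U2.shape[1])      # number of kept components per parameter dimension
```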

6.
7.
An experimental consensus conference on the topic of gene therapy was held in order to discover whether the method, a form of participatory technology assessment that originated in Denmark in 1986, would be feasible in Japan. This article summarises the overall experience of this experiment and concludes that the method is indeed feasible in Japan. Enumerating some issues and problems we faced in this project, I discuss their meaning and significance from the viewpoint of a practitioner and initiator of participatory technology assessment in Japan.

8.
9.
Ant colony optimization is a meta-heuristic inspired by knowledge sharing amongst ants using pheromone, which serves as a kind of collective memory. In the past few years, there have been several successful applications of this approach to finding approximate solutions to computationally difficult problems in reasonable time. In this paper, we study the generalized minimum spanning tree problem, which involves the design of a minimum-weight connected network spanning at least one node from each of the disjoint subsets of the nodes in a graph. This problem is relevant to a wide range of applications in different areas. As the problem is known to be computationally challenging, we adopt the ant colony optimization strategy and present a new solution method, called Ant-Tree, to develop approximate solutions. Our study provides an initial investigation of the ant colony optimization approach to tree optimization problems. Through computational experiments, we compare the performance of our approach with that of the method available in the literature. Numerical results indicate that the proposed method is effective in producing quality approximate solutions.
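A hedged sketch of the general idea (illustrative only, not the Ant-Tree method itself): each ant picks one node per cluster under pheromone guidance, the picked nodes are joined by a minimum spanning tree, and pheromone is reinforced along the best selection found so far.

```python
# Hedged sketch of an ACO scheme for the generalized minimum spanning tree problem.
import random

def mst_weight(nodes, w):
    """Prim's algorithm on the complete graph over `nodes` with weights w[u][v]."""
    in_tree, total = {nodes[0]}, 0.0
    while len(in_tree) < len(nodes):
        u, v = min(((a, b) for a in in_tree for b in nodes if b not in in_tree),
                   key=lambda e: w[e[0]][e[1]])
        in_tree.add(v)
        total += w[u][v]
    return total

def aco_gmst(clusters, w, ants=20, iters=100, rho=0.1):
    tau = [{v: 1.0 for v in c} for c in clusters]      # pheromone per cluster/node
    best, best_pick = float("inf"), None
    for _ in range(iters):
        for _ in range(ants):
            pick = [random.choices(list(t), weights=list(t.values()))[0] for t in tau]
            cost = mst_weight(pick, w)
            if cost < best:
                best, best_pick = cost, pick
        for t, v in zip(tau, best_pick):                # evaporate, then reinforce the best pick
            for u in t:
                t[u] *= (1 - rho)
            t[v] += rho
    return best, best_pick

# Tiny usage example: two clusters, pick one node from each.
clusters = [["a", "b"], ["c", "d"]]
w = {"a": {"c": 3.0, "d": 1.0}, "b": {"c": 2.0, "d": 5.0},
     "c": {"a": 3.0, "b": 2.0}, "d": {"a": 1.0, "b": 5.0}}
print(aco_gmst(clusters, w, ants=5, iters=20))   # very likely (1.0, ['a', 'd'])
```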

10.
Recommender systems apply data mining and machine learning techniques to filter unseen information and can predict whether a user would like a given item. This paper focuses on the gray-sheep users problem, which is responsible for an increased error rate in collaborative filtering based recommender systems. This paper makes the following contributions: we show that (1) the presence of gray-sheep users can affect the performance – accuracy and coverage – of collaborative filtering based algorithms, depending on the data sparsity and distribution; (2) gray-sheep users can be identified using clustering algorithms in an offline fashion, where the similarity threshold to isolate these users from the rest of the community can be found empirically. We propose various improved centroid selection approaches and distance measures for the K-means clustering algorithm; (3) content-based profiles of gray-sheep users can be used for making accurate recommendations. We offer a hybrid recommendation algorithm to make reliable recommendations for gray-sheep users. To the best of our knowledge, this is the first attempt to propose a formal solution to the gray-sheep users problem. Extensive experimental results on two different datasets (MovieLens and the community of movie fans on the FilmTrust website) show that the proposed approach reduces the recommendation error rate for gray-sheep users while maintaining reasonable computational performance.
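A minimal sketch of the clustering idea, assuming scikit-learn and a dense user-item rating matrix: users whose cosine similarity to every cluster centroid falls below an empirically chosen threshold are flagged as gray sheep. The names and threshold are illustrative, not the paper's exact procedure.

```python
# Hedged sketch: cluster users by their rating vectors and flag as "gray sheep"
# those whose similarity to every cluster centroid stays below a threshold.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import cosine_similarity

def find_gray_sheep(ratings, k=8, threshold=0.3, seed=0):
    """ratings: (n_users, n_items) matrix with 0 for unrated items."""
    km = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(ratings)
    sims = cosine_similarity(ratings, km.cluster_centers_)   # user-to-centroid similarity
    return np.where(sims.max(axis=1) < threshold)[0]         # users dissimilar to all centroids

# Toy usage with a random rating matrix (200 users, 50 items, ratings 0-5).
rng = np.random.default_rng(0)
ratings = rng.integers(0, 6, size=(200, 50)).astype(float)
print(find_gray_sheep(ratings, k=5, threshold=0.7))
```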

11.
The online computational burden of linear model predictive control (MPC) can be moved offline by using multi-parametric programming, so-called explicit MPC. The solution to the explicit MPC problem is a piecewise affine (PWA) state feedback function defined over a polyhedral subdivision of the set of feasible states. The online evaluation of such a control law needs to determine the polyhedral region in which the current state lies. This procedure is called point location; its computational complexity is challenging and determines the minimum possible sampling time of the system. A new flexible algorithm is proposed which enables the designer to trade off between time and storage complexity. Utilizing the concept of hash tables and the associated hash functions, the proposed method solves an aggregated point location problem that avoids prohibitive complexity growth with the number of polyhedral regions, while the storage–processing trade-off can be tuned via scaling parameters. The flexibility and power of this approach are supported by several numerical examples.
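A hedged sketch of hash-based point location: each polyhedral region is indexed by the grid cells its bounding box covers, so the online step checks membership only for the few regions hashed to the query state's cell. The data layout and grid hashing are illustrative assumptions, not the paper's exact aggregation scheme.

```python
# Hedged sketch: grid-hashed point location over polyhedral regions {x : A x <= b},
# each carrying a PWA feedback law u = K x + k.
import numpy as np
from collections import defaultdict

class PointLocator:
    def __init__(self, regions, cell=0.5):
        """regions: list of dicts with keys A, b (region Ax<=b), box (lo, hi), K, k."""
        self.regions, self.cell = regions, cell
        self.table = defaultdict(list)                       # hash table: grid cell -> region ids
        for i, r in enumerate(regions):
            lo = np.floor(r["box"][0] / cell).astype(int)
            hi = np.floor(r["box"][1] / cell).astype(int)
            for c in np.ndindex(*(hi - lo + 1)):
                self.table[tuple(lo + np.array(c))].append(i)

    def control(self, x):
        key = tuple(np.floor(np.asarray(x) / self.cell).astype(int))
        for i in self.table.get(key, []):                    # only candidates in this cell
            r = self.regions[i]
            if np.all(r["A"] @ x <= r["b"] + 1e-9):
                return r["K"] @ x + r["k"]                   # PWA feedback law of region i
        raise ValueError("state outside the feasible set")

# Tiny 1-D example: two regions [-1,0] and [0,1] with different affine laws.
regs = [
    {"A": np.array([[1.0], [-1.0]]), "b": np.array([0.0, 1.0]),
     "box": (np.array([-1.0]), np.array([0.0])), "K": np.array([[-2.0]]), "k": np.array([0.0])},
    {"A": np.array([[1.0], [-1.0]]), "b": np.array([1.0, 0.0]),
     "box": (np.array([0.0]), np.array([1.0])), "K": np.array([[-1.0]]), "k": np.array([0.5])},
]
loc = PointLocator(regs)
print(loc.control(np.array([0.4])))   # uses the second region's law
```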

12.
This paper considers the lot scheduling problem in the flexible flow shop with limited intermediate buffers, with the objective of minimizing total cost, which includes inventory holding and setup costs. The only available mathematical model for this problem, by Akrami et al. (2006), suffers not only from being non-linear but also from high size complexity. In this paper, two new mixed integer linear programming models are developed for the problem. Moreover, a fruit fly optimization algorithm is developed to effectively solve large problems. To evaluate the models, this paper experimentally compares the proposed models with the existing one. The proposed algorithm is also evaluated by comparison with two well-known algorithms from the literature (tabu search and a genetic algorithm) and with adaptations of three recent algorithms for the flexible flow shop problem. All the results and analyses show the high performance of the proposed mathematical models as well as of the fruit fly optimization algorithm.

13.
Considerable attention has been given to strategies for variable selection in spectroscopic analysis. Here we introduce a different approach, the self-organising map as a feature compressor, which also helps reduce the dimensionality of the problem. The method is straightforward and does not need prior knowledge about the regions of the spectra that contain relevant variables or information, so it applies generally. We coupled the method to multiple linear regression, principal component analysis and partial least squares, and used it to quantitatively analyse two-component liquid samples using FTIR spectroscopy. The predicted concentrations of the species within the mixture were extremely accurate (the correlation coefficients of estimated versus real concentrations were 0.997 and 0.995 for methanol and p-xylene, respectively). Furthermore, when the feature compression step is applied, calibration models become more stable, since they are better able to estimate concentrations not present in the training set.
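One plausible reading of the feature-compression step, sketched below under the assumption that the MiniSom package is available: spectral variables are clustered by a SOM trained on the transposed data matrix, each SOM node becomes one compressed feature (the mean of its variables), and a linear calibration model is fitted on the result. This is an interpretation, not the authors' exact pipeline.

```python
# Hedged sketch of SOM-based feature compression for spectra followed by a
# linear calibration model; data and grid size are illustrative.
import numpy as np
from minisom import MiniSom
from sklearn.linear_model import LinearRegression

def som_compress(X, grid=(4, 4), iters=5000, seed=0):
    """X: (n_samples, n_variables) spectra. Returns (n_samples, n_nodes) compressed features."""
    V = X.T                                                   # one row per spectral variable
    som = MiniSom(grid[0], grid[1], V.shape[1], sigma=1.0,
                  learning_rate=0.5, random_seed=seed)
    som.train_random(V, iters)
    node_of = [som.winner(v) for v in V]                      # BMU node of each variable
    nodes = sorted(set(node_of))
    feats = np.column_stack(
        [X[:, [i for i, n in enumerate(node_of) if n == node]].mean(axis=1) for node in nodes])
    return feats

# Toy usage with synthetic "spectra" driven by one analyte concentration.
rng = np.random.default_rng(0)
conc = rng.uniform(0, 1, size=(60, 1))                        # analyte concentration
X = conc @ rng.normal(size=(1, 200)) + 0.01 * rng.normal(size=(60, 200))
Z = som_compress(X)
model = LinearRegression().fit(Z, conc.ravel())
print(round(model.score(Z, conc.ravel()), 3))                 # in-sample R^2 of the calibration
```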

14.
Digital business model innovation (BMI) is critical to achieving and sustaining competitiveness in technology-driven environments. In those environments, firms must not only sense changes to identify opportunities but also effectively seize them through BMI. Therefore, sensing and seizing cannot be considered isolated dynamic capabilities, but must be combined for successful BMI. However, research on sensing and seizing does not offer compelling suggestions for firms that struggle to connect the two while pursuing digital BMI. We use qualitative configurational analysis (QCA) to analyze a sample of 49 case studies on digital BMI and identify the antecedents that firms sense before seizing these changes with digital BMI. Based on ten configurations of sensing (represented by six antecedents) and seizing (represented by four BMI types), we explain the relationship between sensed antecedents and seized digital BMI. In addition, we derive four variables that explain “what” and “how” firms connect sensing and seizing. Based on the sensing-seizing connection, we introduce consolidating BMI as a new type of BMI unique to the digital context. This novel type enables firms to exploit and explore new business models and subsequent digital BMIs through digital infrastructure. This study extends the understanding of how different business models emerge and how firms create digital BMIs.

15.
Abstract. Many recent studies have shown that computer-based systems continue to ‘fail’ at a number of different levels (Romtec, 1988; KPMG, 1990) and it is increasingly apparent (Maclaren et al., 1991) that the most serious failures of information technology (IT) lie in the continuing inability to address those concerns which are central to the successful achievement of individual, organizational and social goals. It is the contention of this paper that this failing occurs precisely because these are the areas which are ignored or inadequately treated by conventional system development methods. There is, of course, a vast body of literature concerned with the understanding of complex human activity systems. This literature often reflects a mass of contradictions at the epistemological and the ontological level about the behaviour of such systems and has also spawned numerous methods (and methodologies) which seek to guide the individual in making successful interventions into organizational situations (Rosenhead, 1989). Despite this multiplicity of viewpoints, many writers have posited a dichotomy between so-called ‘soft’ and ‘hard’ approaches to problem situations and use this dichotomy to inform the choice of an appropriate problem-solving methodology (Checkland, 1985). In this paper we characterize these two approaches as being concerned with either the purpose(s) of the human activity system (i.e. ‘doing the right thing’) or with the design of the efficient means of achieving such purpose(s) (i.e. ‘doing the thing right’). It is our belief that much of the literature and work in either area has not concerned itself with the issues of the other. Writers on ‘hard’ engineering methods often assume the question of purpose to be either straightforward (e.g. given in the project brief) or, paradoxically, too difficult (e.g. it is not our concern as mere systems analysts). Writers on ‘soft’ methods, on the other hand, rarely have anything to say about the design and implementation of well-engineered computer-based systems, giving the impression that this is a somewhat mundane activity better left to technical experts. This paper, therefore, attempts to set out a rationale for bringing together principles from both ‘hard’ engineering and ‘soft’ inquiry methods without doing epistemological damage to either. To illustrate our argument we concentrate on JSD (Jackson system development) as an example of system engineering (Cameron, 1983) and SSM (soft systems methodology) as an example of system inquiry (Checkland, 1981; Checkland & Scholes, 1990). Our general thesis, however, does not depend upon either of these two approaches per se but applies to the overall issue of bringing together insights from two apparently opposed epistemological positions in an effort to better harness the power of IT in pursuit of purposeful human activity.

16.
Spatial interaction models are frequently used to predict and explain interregional commodity flows. Studies suggest that the effects of spatial structure significantly influence spatial interaction models, often resulting in model misspecification. Competing destinations and intervening opportunities have been used to mitigate this issue. Some recent studies also show that the effects of spatial structure can be successfully modeled by incorporating network autocorrelation among flow data. The purpose of this paper is to investigate the existence of network autocorrelation among commodity origin–destination flow data and its effect on model estimation in spatial interaction models. This approach is demonstrated using commodity origin–destination flow data for 111 regions of the United States from the 2002 Commodity Flow Survey. The results empirically show how network autocorrelation affects the modeling of interregional flows and how it can be successfully captured in spatial autoregressive model specifications.
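To illustrate the modelling idea, here is a hedged sketch of a log-linear gravity model on synthetic flows, augmented with a network-autocorrelation term built from flows that share an origin or destination; it is fitted naively by OLS purely for illustration, whereas a proper spatial autoregressive specification would be estimated by maximum likelihood or spatial two-stage least squares.

```python
# Hedged sketch: log-linear spatial interaction (gravity) model with a
# network-autocorrelation (spatial lag of flows) term, on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 15                                                   # regions
O = rng.uniform(1, 10, n)                                # origin masses (e.g. production)
D = rng.uniform(1, 10, n)                                # destination masses (e.g. consumption)
dist = rng.uniform(1, 5, (n, n)); dist = (dist + dist.T) / 2
pairs = [(i, j) for i in range(n) for j in range(n) if i != j]
flows = np.array([O[i] * D[j] / dist[i, j] ** 2 for i, j in pairs])
y = np.log(flows)

# Network weight matrix: two flows are "neighbours" if they share an origin or destination.
m = len(pairs)
W = np.zeros((m, m))
for a, (i, j) in enumerate(pairs):
    for b, (k, l) in enumerate(pairs):
        if a != b and (i == k or j == l):
            W[a, b] = 1.0
W /= W.sum(axis=1, keepdims=True)                        # row-normalise

X = np.column_stack([np.ones(m),
                     np.log([O[i] for i, _ in pairs]),
                     np.log([D[j] for _, j in pairs]),
                     np.log([dist[i, j] for i, j in pairs]),
                     W @ y])                             # network-autocorrelation (lag) term
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 3))   # intercept, origin, destination, distance, lag coefficients
```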

17.
We investigate the effects of individual differences within the framework of task-individual-technology fit in a multi-DSS-model context, using a two-phase view. Our research question is: in addition to task-technology fit, do individual-technology fit and task-individual fit matter for users' attitudes and performance in a multi-task, multi-DSS-model context? We first divide the concept of task-individual-technology fit into three dimensions – task-technology fit (TTF), individual-technology fit (ITeF), and task-individual fit (TaIF) – so that we can explore the mechanisms and effects of interaction among these factors (i.e., task, individual differences, and technology). We then propose a two-phase view of task-individual-technology fit (i.e., a pre-paradigm phase and a paradigm phase) based on Kuhn's concept of revolutionary science. We conducted a controlled laboratory experiment with multiple DSS models and decision-making tasks to test our hypotheses. Results confirmed our arguments that in the paradigm phase the effects of individual differences on user attitudes toward DSS models can be ignored, whereas in the pre-paradigm phase individual differences play an important role. In addition, we found that the effects of individual differences can be a double-edged sword: ITeF can enhance, but TaIF can diminish, users' attitudes toward the DSS model (i.e., the technology). Our results also suggest that different dimensions of fit may affect performance directly or indirectly.

18.
19.
20.
Research into software design models in general, and into the UML in particular, focuses on answering the question of how design models are used, completely ignoring the question of whether they are used. There is an assumption in the literature that the UML is the de facto standard, and that the use of design models has had a profound and substantial effect on how software is designed, by virtue of models giving the ability to do model checking, code generation, or automated test generation. However, for this assumption to be true, there has to be significant use of design models in practice by developers. This paper presents the results of a survey summarizing the answers of 3785 developers to the simple question of the extent to which design models are used before coding. We relate their use of models to (i) total years of programming experience, (ii) open or closed development, (iii) educational level, (iv) programming language used, and (v) development type. The answer to our question was that design models are not used very extensively in industry, and where they are used, the use is informal and without tool support, and the notation is often not UML. The use of models decreased with an increase in experience and increased with a higher level of qualification. Overall we found that models are used primarily as a communication and collaboration mechanism where there is a need to solve problems and/or to get a joint understanding of the overall design in a group. We also conclude that models are seldom updated after initially being created and are usually drawn on a whiteboard or on paper.
