Similar Documents
20 similar documents found (search time: 31 ms)
1.
2.
In their works on the theoretical side of Polymer, Ligatti and his co-authors have identified a new class of enforcement mechanisms based on the notion of edit automata, which can transform sequences and enforce more than simple safety properties. We show that there is a gap between the edit automata that one can possibly write (e.g., by Ligatti et al. in their IJIS running example) and the edit automata that are actually constructed according to the theorems from Ligatti's IJIS paper or from Talhi et al. "Ligatti's automata" are just a particular kind of edit automata. Thus, we re-open a question that seemed to have received a definitive answer: you have written your security enforcement mechanism (i.e., your edit automaton); does it really enforce the security policy you wanted?
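The gap the authors point out is easiest to appreciate with a concrete mechanism in hand. Below is a minimal, hypothetical edit automaton in Python (not the paper's construction, and the policy is invented for illustration): it suppresses any `send` action occurring after a `read_secret` action and passes everything else through.

```python
def edit_automaton(actions):
    """Minimal edit automaton: suppress 'send' once a secret has been read.

    Illustrative policy only; a general edit automaton may also *insert*
    actions, and this hand-written machine is exactly the kind of artifact
    whose relation to the enforcement theorems the paper questions.
    """
    output, secret_read = [], False
    for action in actions:
        if action == "read_secret":
            secret_read = True
            output.append(action)
        elif action == "send" and secret_read:
            continue  # suppress the dangerous action, keep processing the stream
        else:
            output.append(action)
    return output
```

Whether such a hand-written automaton coincides with one derivable from the published enforcement theorems is precisely the question the abstract re-opens.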

3.
The amount of money spent in a store is positively correlated with the amount of time spent inside. We argue this is an opportunity for multimedia installations that can entertain shoppers and promote interaction with the shop's products. This was the main principle behind our design idea for two interactive installations specifically conceived for shoe shops. We present two applications of interactive multimedia to shoe shopping: an interactive semantic mirror and an interactive window logo. We also describe the results of ethnographic studies conducted before and after the design process. Our contribution is two-fold: (i) we develop and apply a new multimedia architecture that combines in-store RFID technology with high-end motion detection algorithms, and (ii) we describe one of the first few studies on multimedia installations for improving the shoe-shopping experience, in what we call "foot-turistic" interactions.

4.
This paper describes the workshop held during the international symposium "Digital Fabrication – a State of Art", which took place at the School of Technology and Architecture ISCTE-IUL on 15–16 September 2011. Its main goal was to introduce a group of about twenty people to the use of digital fabrication in architecture, in a country where these technologies are not yet fully implemented in architecture schools and curricula.

5.
6.
A data stream is a massive unbounded sequence of data elements continuously generated at a rapid rate. Query processing for such a data stream should also be continuous and rapid, which requires strict time and space constraints. In order to guarantee these constraints, we proposed a scheme called an Attribute Selection Construct (ASC) for an attribute of a data stream in our previous study (Lee and Lee, Information Sciences 178:2416–2432, 2008). As its optimization technique, this paper proposes a new strategy that determines the evaluation order of multiple ASCs for a given query set at two different levels: the macro and micro levels. Based on these two levels, it also proposes two strategies, macro-sequence and hybrid-sequence, that find the optimized full evaluation sequence of all the ASCs. In addition, it provides an adaptive strategy that periodically rearranges the evaluation sequence of multiple ASCs. The performance of the proposed technique is verified by a series of experiments.
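The paper's ASC construct and its macro/micro sequencing are not reproduced here, but the underlying problem of ordering stream predicates can be sketched. A classic heuristic (an assumption on our part, not the paper's exact strategy) ranks independent selection predicates by (selectivity − 1)/cost, so cheap, highly selective predicates are evaluated first.

```python
def order_predicates(preds):
    """Order (name, selectivity, cost) triples by rank (selectivity - 1) / cost.

    Cheap predicates that drop many tuples come first; this textbook ordering
    heuristic is assumed here only to illustrate evaluation-sequence tuning.
    """
    return sorted(preds, key=lambda p: (p[1] - 1.0) / p[2])

def evaluate(item, ordered, checks):
    """Short-circuit evaluation of the ordered predicates on one stream element."""
    return all(checks[name](item) for name, _, _ in ordered)

# Hypothetical predicate statistics: (name, selectivity, cost).
preds = [("rare", 0.1, 1.0), ("common", 0.9, 1.0), ("cheap_common", 0.9, 0.1)]
ordered = order_predicates(preds)  # cheap_common first, then rare, then common
```

An adaptive variant, as in the abstract, would periodically re-estimate the selectivities and costs and re-run the ordering.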

7.
In this study, we evaluated the read-out signal quality from narrow track patterns, utilizing linearly arranged slender track patterns, while changing the track width from 10 to 0.1 μm and the pattern density from 200 nm line-and-space (L/S) to 120 nm L/S. To acquire narrow-track readout signals, we adjusted the aperture's in-plane position to cross a linear track at shallow angles of less than 1°, and we could successfully transform the directly acquired signals into those of an aperture crossing tracks at right angles. The results of an experiment utilizing a 10-μm-wide track (which is thought to represent an infinitely wide track) clarified that the stationary field spread over a region approximately 1.2–1.6 times larger than the typical aperture size of 330 nm. The results also clarified that this "field spread" depended on the pattern density; that is, in the case in which the polarization direction θ equals 0° or 45°, the field spread increased monotonically as the line or space width became smaller. When the polarization direction equals 90°, the field spread had a local maximum when the line or space width was approximately 150 nm. An approximate prediction of the read-out signal amplitude was based on the rule that the signal amplitude is proportional to the net field spread that passes across the track pattern, and this prediction corresponded well to the experimental results, except when the interaction between the stationary field and the track side walls was not taken into account.

8.
In this paper we present Caesar, an intelligent domestic service robot. In domestic settings, service robots must accomplish complex tasks. Those tasks benefit from deliberation, from robust action execution, and from flexible methods for human–robot interaction that account for qualitative notions used in natural language as well as for human fallibility. Our robot Caesar deploys AI techniques on several levels of its system architecture. On the low-level side, system modules for localization or navigation make use of, for instance, path-planning methods, heuristic search, and Bayesian filters. For face recognition and human–machine interaction, random trees and well-known methods from natural language processing are deployed. For deliberation, we use the robot programming and plan language Readylog, which was developed for the high-level control of agents and robots; it allows combining programmed behaviour with planning to find a course of action. Readylog is a variant of the robot programming language Golog. We extended Readylog to cope with qualitative notions of space frequently used by humans, such as "near" and "far". This facilitates human–robot interaction by bridging the gap between human natural language and the numerical values needed by the robot. Further, we use Readylog to increase the flexibility of interpreting human commands with decision-theoretic planning. We give an overview of the different methods deployed in Caesar and show the applicability of a system equipped with these AI techniques in domestic service robotics.

9.
The fulfillment rate of guide dogs for visually handicapped persons is roughly 10% nationwide. The reasons for this low rate are the long training period and the expense of obtaining a guide dog in Japan. Motivated by these two reasons, we are developing guide-dog robots. The major objective is to develop an intelligent human–robot interface. This paper describes two novel interface algorithms and a strategy to guide a visually handicapped person. We developed a new leading-edge searching method, which uses a single laser range finder (LRF) to find the center of the corridor in an indoor environment. We also developed a new twin-cluster trace method that can recognize the led person's walking conditions as measured by the LRF. The algorithm allows the guide-dog robot to accurately estimate and anticipate the led person's next move. We experimentally verified these algorithms. The results show that the algorithms are reliable enough to enable the guide-dog robot and the led person to maneuver in a complex corridor environment.

10.
The "digital fabrication" revolution being lived today, both in knowledge creation and in technological development, will become more than a simple formal exploration in architecture and design, or a set of tools exclusive to advanced industries. New tools and processes are becoming more accessible to the masses and are being shared all over the world through Internet platforms, with an open-source philosophy in both software and hardware. The collective mind that is being empowered every day will define the future of production in the life of mankind and its relation with the environment. The roles of architects, engineers, designers, and many other professionals will be reshaped and reconfigured to fit into new models of production and creation. These will need to be supported by new manufacturing platforms that generate knowledge and share know-how.

11.
A propagation method for the time-dependent Schrödinger equation was studied, leading to a general scheme for solving ODE-type equations. Standard space discretization of time-dependent PDEs usually results in a system of ODEs of the form $$u_t - G u = s \qquad (0.1)$$ where G is an operator (matrix) and u is a time-dependent solution vector. Highly accurate methods, based on polynomial approximation of a modified exponential evolution operator, have already been developed for this type of problem, where G is a linear, time-independent matrix and s is a constant vector. In this paper we describe a new algorithm for the more general case where s is a time-dependent right-hand-side vector. An iterative version of the new algorithm can be applied to the general case where G depends on t or u. Numerical results for the Schrödinger equation with a time-dependent potential and for the non-linear Schrödinger equation are presented.
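The polynomial-propagator idea can be illustrated with the simplest such polynomial: a truncated Taylor expansion of the evolution operator exp(Δt·G) applied to u. The paper's method uses a more accurate modified operator and handles a time-dependent right-hand side; this sketch only conveys the structure for the homogeneous case u_t = Gu.

```python
import numpy as np

def expm_apply(G, u, dt, terms=25):
    """Approximate exp(dt*G) @ u by a truncated Taylor polynomial in G.

    Only matrix-vector products are needed, which is what makes polynomial
    propagators attractive for large discretized operators. Illustrative
    sketch, not the paper's (more accurate) polynomial scheme.
    """
    result = u.astype(complex).copy()
    term = u.astype(complex).copy()
    for k in range(1, terms):
        term = (dt / k) * (G @ term)  # next Taylor term: (dt*G)^k u / k!
        result += term
    return result

# Homogeneous test problem u_t = G u with a rotation generator:
G = np.array([[0.0, 1.0], [-1.0, 0.0]])
u0 = np.array([1.0, 0.0])
u_half_turn = expm_apply(G, u0, np.pi / 2)  # exact answer is [0, -1]
```

For a time-dependent source term s(t), as in the abstract, one would add a quadrature of the inhomogeneous Duhamel integral on top of this propagation step.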

12.
A journey across the lands that were part of Persia long ago offers a friendly introduction to symmetry and symmetry groups, as presented in Hermann Weyl's seminal and popular book, Symmetry (1952). Weyl's intent was to show how geometrical transformations first, then mathematical structures, could be better understood from a cultural point of view through art and architecture. Our intent is to provide a complementary set of selected pictures of Persian monuments to illustrate Weyl's ideas. Following the master, we have focused on different kinds of symmetries, starting from the simplest and oldest and moving to those that are more complex, disregarding chronology or geography within the lands of Persia.

13.
In this paper, we propose a new algorithm based on a semantic model for inference in CLG Bayesian networks, strongly inspired by the architecture of Madsen (Int J Approx Reason 49:503–521, 2008). By performing semantic modeling before physical computation, the proposed algorithm takes advantage of the semantic knowledge induced by the structure of the graph and the evidence. Thus, iteration between semantic modeling and physical computation can be avoided. Also, the presented architecture can exploit some remaining independencies in the relevant potentials that were ignored by the previous architecture. The correctness of the proposed algorithm has been proved, and the resulting benefits are illustrated by examples. The results indicate a significant potential in semantic knowledge.

14.
In this paper a new clustering algorithm is presented: a complex-based Fuzzy c-means (CFCM) algorithm. While Fuzzy c-means uses a real vector as the prototype characterizing a cluster, the CFCM prototype is generalized to be a complex vector (complex center). CFCM uses a new real distance measure derived from a complex one. CFCM's formulas for the fuzzy membership are derived. These formulas are extended to derive the complex Gustafson–Kessel algorithm (CGK). Cluster validity measures are used to assess the goodness of the partitions obtained by the complex centers compared to those obtained by the real centers. The validity measures used in this paper are the Partition Coefficient, Classification Entropy, Partition Index, Separation Index, Xie and Beni's Index, and Dunn's Index. It is shown that CFCM gives better partitions of the data than the FCM and GK algorithms. It is also shown that the CGK algorithm outperforms CFCM, but at the expense of much higher computational complexity.
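The idea of complex centers can be sketched by running the standard fuzzy c-means updates over complex scalars, with the modulus of the complex difference as the real distance measure. The paper derives its own distance and membership formulas; this is only the textbook FCM update carried out in complex arithmetic.

```python
import numpy as np

def cfcm(X, c, m=2.0, iters=60, seed=0):
    """Fuzzy c-means over complex-valued data points X (1-D complex array).

    Centers are complex numbers; distance is |x - center|. A sketch of the
    complex-center idea, not the paper's exact CFCM derivation.
    """
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                      # memberships sum to 1 per point
    for _ in range(iters):
        W = U ** m
        centers = (W @ X) / W.sum(axis=1)   # weighted complex means
        d = np.abs(X[None, :] - centers[:, None]) + 1e-12
        p = 2.0 / (m - 1.0)
        U = d ** (-p) / np.sum(d ** (-p), axis=0)
    return centers, U

# Two well-separated clusters in the complex plane:
X = np.array([0.0, 0.1, 0.2, 10 + 10j, 10.1 + 10j, 9.9 + 10.1j])
centers, U = cfcm(X, c=2)
```

The Gustafson–Kessel extension mentioned in the abstract would additionally adapt a per-cluster norm-inducing matrix, at the higher computational cost the authors note.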

15.
A multi-class classifier based on the Bradley–Terry model predicts the multi-class label of an input by combining the outputs from multiple binary classifiers, where the combination is designed a priori as a code word matrix. The code word matrix was originally designed to consist of +1 and −1 codes, and was later extended to deal with the ternary code {+1, 0, −1}, that is, allowing 0 codes. This extension has seemed to work effectively but, in fact, contains a problem: a binary classifier forcibly categorizes examples with 0 codes into either +1 or −1, and this forcible decision makes the prediction of the multi-class label obscure. In this article, we propose a Boosting algorithm that deals with three categories by allowing a "don't care" category corresponding to 0 codes, and present a modified decoding method called a "ternary" Bradley–Terry model. In addition, we propose a couple of fast decoding schemes that reduce the heavy computation required by the existing Bradley–Terry model-based decoding.
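The ternary code word matrix and the "don't care" role of 0 codes can be made concrete. Below is a hypothetical one-vs-one code matrix for three classes with a simple score-based decoder that skips 0 entries; the paper's Bradley–Terry decoding is probabilistic, so this only illustrates how 0 codes are ignored rather than forcibly assigned.

```python
import numpy as np

# Rows = classes, columns = binary classifiers (1-vs-2, 1-vs-3, 2-vs-3).
# A 0 entry means "don't care": that classifier never saw this class.
CODE = np.array([
    [+1, +1,  0],   # class 0
    [-1,  0, +1],   # class 1
    [ 0, -1, -1],   # class 2
])

def decode(binary_outputs):
    """Pick the class whose nonzero code entries best agree with the outputs."""
    scores = []
    for row in CODE:
        mask = row != 0                             # skip don't-care positions
        scores.append(np.sum(row[mask] * binary_outputs[mask]))
    return int(np.argmax(scores))
```

In the problematic binary-only setting criticized by the abstract, the masked positions would instead contribute arbitrary ±1 votes, which is what blurs the multi-class prediction.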

16.
Studying how collaborative activity takes shape interactionally in the context of technological settings is one of the main challenges in the field of Computer-Supported Collaborative Learning (CSCL). It requires us, amongst other things, to look into the "black box" of how technical artifacts are brought into use, or rather, how they are attuned to, interacted with, and shaped in various and varied practices. This article explores the establishment of a purposeful connection between human agents and technical artifacts in CSCL, which we call "the agent-artifact connection". In order to contribute to a grounded conception of this connection, we reviewed three theoretical positions: affordance, structures, and instrument. Although these three positions differ in how they conceptualise the connection, they share the assumption that a technical artifact carries a potential for action that becomes available when artifact and agent connect, and that the availability of action opportunities is relative to the ones who interact with the artifact. In this article, we map out the conceptual and methodological implications of each position. We argue that the rationale of "shaping" collaborative interactions that underlies a part of CSCL research should be replaced by a rationale of "mutual shaping" of human agents and technical artifacts.

17.
Two problems may arise when an intelligent (recommender) system elicits users' preferences. First, there may be a mismatch between the quantitative preference representations in most preference models and the users' mental preference models. Giving exact numbers, e.g., "I like 30 days of vacation 2.5 times better than 28 days", is difficult for people. Second, the elicitation process can greatly influence the acquired model (e.g., people may prefer different options based on whether a choice is represented as a loss or a gain). We explored these issues in three studies. In the first experiment we presented users with different preference elicitation methods and found that cognitively less demanding methods were perceived as low in effort and high in liking. However, for methods enabling users to be more expressive, the perceived effort was not an indicator of how much the methods were liked. We thus hypothesized that users are willing to spend more effort if the feedback mechanism enables them to be more expressive. We examined this hypothesis in two follow-up studies. In the second experiment, we explored the trade-off between giving detailed preference feedback and effort. We found that familiarity with and opinion about an item are important factors mediating this trade-off. Additionally, affective feedback was preferred over a finer-grained one-dimensional rating scale for giving additional detail. In the third study, we explored the influence of the interface on the elicitation process in a participatory set-up. People considered it helpful to be able to explore the link between their interests, preferences, and the desirability of outcomes. We also confirmed that people do not want to spend additional effort in cases where it seems unnecessary. Based on these findings, we propose four design guidelines to foster the interface design of preference elicitation from a user's point of view.

18.
The SHARC framework for data quality in Web archiving   (total citations: 1; self-citations: 0; external citations: 1)
Web archives preserve the history of born-digital content and offer great potential for sociologists, business analysts, and legal experts on intellectual property and compliance issues. Data quality is crucial for these purposes. Ideally, crawlers should gather coherent captures of entire Web sites, but the politeness etiquette and the completeness requirement mandate very slow, long-duration crawling while Web sites undergo changes. This paper presents the SHARC framework for assessing the data quality in Web archives and for tuning capturing strategies toward better quality with given resources. We define data quality measures, characterize their properties, and develop a suite of quality-conscious scheduling strategies for archive crawling. Our framework includes single-visit and visit–revisit crawls. Single-visit crawls download every page of a site exactly once, in an order that aims to minimize the "blur" in capturing the site. Visit–revisit strategies revisit pages after their initial downloads to check for intermediate changes. The revisiting order aims to maximize the "coherence" of the site capture (the number of pages that did not change during the capture). The quality notions of blur and coherence are formalized in the paper. Blur is a stochastic notion that reflects the expected number of page changes that a time-travel access to a site capture would accidentally see, instead of the ideal view of an instantaneously captured, "sharp" site. Coherence is a deterministic quality measure that counts the number of unchanged, and thus coherently captured, pages in a site snapshot. Strategies that aim to either minimize blur or maximize coherence are based on prior knowledge of, or predictions for, the change rates of individual pages. Our framework includes fairly accurate classifiers for change predictions. All strategies are fully implemented in a testbed and shown to be effective by experiments with both synthetically generated sites and a periodic crawl series for different Web sites.
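The deterministic coherence measure can be sketched directly from its description: a page is coherently captured if it did not change between the start and end of the site capture. The function below assumes change timestamps are known exactly; in practice SHARC works with predicted change rates.

```python
def coherence(download_times, change_times):
    """Count pages with no change inside the capture interval.

    download_times: one download timestamp per page.
    change_times: per page, the timestamps at which that page changed.
    A sketch of the deterministic coherence measure described in the
    abstract, not the paper's exact formalization.
    """
    t_start, t_end = min(download_times), max(download_times)
    return sum(
        1
        for changes in change_times
        if not any(t_start <= t <= t_end for t in changes)
    )

# Three pages downloaded at t = 0, 1, 2; only page 0 changed mid-capture,
# so two of the three pages are coherently captured.
score = coherence([0, 1, 2], [[0.5], [5.0], []])
```

A revisit strategy would re-download pages whose change probability inside the interval is high, to verify (rather than assume) their coherence.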

19.
We consider here both fault identification and fault signal estimation. Regarding fault identification, we seek either exact or almost fault identification. Regarding fault signal estimation, we seek either H2 optimal, H2 suboptimal, or H∞ suboptimal estimation. By appropriately combining these two aspects, we formulate five different fault identification and fault signal estimation problems. We analyse and synthesize appropriate residual generators for fault identification and estimators that generate fault signal estimates for all these problems. Solvability conditions for all these problems are given. Also, for each of these problems, the architecture integrating the residual generator that isolates the fault and the estimators that estimate the fault signals is developed. Copyright © 2002 John Wiley & Sons, Ltd.

20.
The newest surveillance applications attempt more complex tasks, such as the analysis of the behavior of individuals and crowds. These complex tasks may use a distributed visual sensor network in order to gain coverage and exploit the inherent redundancy of the overlapped fields of view. This article presents a multi-agent architecture based on the Belief-Desire-Intention (BDI) model for processing the information and fusing the data in a distributed visual sensor network. Instead of exchanging raw images between the agents involved in the visual network, local signal processing is performed and only the key observed features are shared. After a registration or calibration phase, the proposed architecture performs tracking, data fusion, and coordination. Using the proposed multi-agent architecture, we focus on the means of fusing the estimated positions on the ground plane from different agents applied to the same object. This fusion process serves two different purposes: (1) to obtain continuity in the tracking along the fields of view of the cameras involved in the distributed network, and (2) to improve the quality of the tracking by means of data fusion techniques and by discarding non-reliable sensors. Experimental results on two different scenarios show that the designed architecture can successfully track an object even when occlusions or sensor errors take place. The sensor errors are reduced by exploiting the inherent redundancy of a visual sensor network with overlapped fields of view.
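One common way to realize position fusion with sensor discarding, as described above (an assumption on our part; the article's exact scheme may differ), is inverse-variance weighting of the ground-plane estimates after gating out cameras that disagree with the consensus.

```python
import numpy as np

def fuse_positions(estimates, variances, gate=3.0):
    """Fuse ground-plane (x, y) estimates of one object from several cameras.

    Sensors whose estimate lies further than `gate` standard deviations from
    the coordinate-wise median are discarded as non-reliable; the rest are
    combined by inverse-variance weighting. Illustrative sketch only.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    median = np.median(estimates, axis=0)
    deviation = np.linalg.norm(estimates - median, axis=1)
    keep = deviation <= gate * np.sqrt(variances)   # gate out outlier sensors
    w = 1.0 / variances[keep]
    return (w[:, None] * estimates[keep]).sum(axis=0) / w.sum()

# Two agreeing cameras and one grossly wrong one (e.g. an occlusion):
fused = fuse_positions([[1.0, 1.0], [1.2, 0.9], [10.0, 10.0]], [1.0, 1.0, 1.0])
```

With equal variances the surviving estimates are simply averaged; unequal variances let more reliable cameras dominate the fused track.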


Copyright©北京勤云科技发展有限公司  京ICP备09084417号