Similar Literature
20 similar records found (search time: 15 ms)
1.
The efficiency of an experimental design is ultimately measured in terms of the time and resources needed for the experiment. Optimal sequential (multi-stage) design is studied in the situation where each stage involves a fixed cost. The problem is motivated by switching measurements on superconducting Josephson junctions. In this quantum-mechanical experiment, sequences of current pulses are applied to the Josephson junction sample and a binary response indicating the presence or absence of a voltage response is measured. The binary response can be modeled by a generalized linear model with the complementary log-log link function. The other models considered are the logit model and the probit model. For these three models, the approximately optimal sample size for the next stage is determined as a function of the current Fisher information and the stage cost. The cost-efficiency of the proposed design is demonstrated in simulations based on real data from switching measurements. The results can be directly applied to switching measurements and may lead to substantial savings in the time needed for the experiment.
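The stage-size rule described above depends on the Fisher information that one binary trial carries under the complementary log-log link. A minimal sketch of those two quantities (function names are illustrative, not from the paper):

```python
import math

def cloglog_prob(eta):
    """P(response = 1) under the complementary log-log link."""
    return 1.0 - math.exp(-math.exp(eta))

def fisher_info(eta):
    """Fisher information about eta carried by a single binary trial.

    For a Bernoulli response, I(eta) = (dp/deta)^2 / (p (1 - p)),
    where dp/deta = exp(eta - exp(eta)) for the cloglog link.
    """
    p = cloglog_prob(eta)
    dp = math.exp(eta - math.exp(eta))
    return dp * dp / (p * (1.0 - p))
```

The optimal next-stage sample size would then trade the accumulated information against the fixed per-stage cost; that trade-off itself is specific to the paper and is not sketched here.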

2.
3.
《Information Systems》1987,12(2):215-221
We introduce the fact structure of a relation: a user-defined semantic structure describing the information (called facts) represented by a tuple. We argue that regarding tuples as units of information leads to many problems in the relational model; instead, we propose facts as units of information, with tuples used to represent facts. These ideas are further refined by introducing primary facts, namely those facts that are considered more important by a user. We then study the problem of updates in such a fact-oriented environment, characterize “deletable” and “nondeletable” facts, and prove that deletable facts can be deleted without side effects.

4.
We present a theoretical basis for supporting subjective and conditional probabilities in deductive databases. We design a language that allows a user greater expressive power than classical logic programming. In particular, a user can express the fact that A is possible (i.e. A has non-zero probability), B is possible, but (A ∧ B) as a whole is impossible. A user can also freely specify probability annotations that may contain variables. The focus of this paper is to study the semantics of programs written in such a language in relation to probability theory. Our model theory, which is founded on the classical one, captures the uncertainty described in a probabilistic program at the level of Herbrand interpretations. Furthermore, we develop a fixpoint theory and a proof procedure for such programs and present soundness and completeness results. Finally, we characterize the relationships between probability theory and the fixpoint, model, and proof theory of our programs.

5.
Active database management systems (DBMSs) are a fast-growing area of research, mainly due to the large number of applications that can benefit from this active dimension. These applications are far from homogeneous, requiring different kinds of functionality. However, most of the active DBMSs described in the literature provide only a fixed, hard-wired execution model to support the active dimension. In object-oriented DBMSs, event-condition-action rules have been proposed for providing active behaviour. This paper presents EXACT, a rule manager for object-oriented DBMSs which provides a variety of options from which the designer can choose the one that best fits the semantics of the concept to be supported by rules. Because future requirements are difficult to foresee, special attention has been paid to making rule management easily extensible, so that the user can tailor it to specific applications. This has been borne out by an implementation in ADAM, an object-oriented DBMS. An example shows how the default mechanism can easily be extended to support new requirements. Edited by Y. Vassiliou. Received May 26, 1994 / Revised January 26, 1995, June 22, 1996 / Accepted November 4, 1996

6.
A simple parametrization, built from the definition of cubic splines, is shown to facilitate the implementation and interpretation of penalized spline models, whatever configuration of knots is used. The parametrization is termed the value-first derivative parametrization. Inference is Bayesian and explores the natural link between quadratic penalties and Gaussian priors. However, a full Bayesian analysis seems feasible only for some penalty functionals. Alternatives include empirical Bayes inference methods involving model-selection-type criteria. The proposed methodology is illustrated by an application to survival analysis where the usual Cox model is extended to allow for time-varying regression coefficients.
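A parametrization by values and first derivatives at the knots corresponds to the classical cubic Hermite form, in which each spline piece is fully determined by (value, slope) at its two endpoints. A sketch of one such piece, as an illustration of the idea rather than the authors' code:

```python
def hermite_cubic(x0, x1, f0, d0, f1, d1, x):
    """Evaluate the unique cubic on [x0, x1] with value f0 and slope d0
    at x0, and value f1 and slope d1 at x1 (Hermite basis form)."""
    h = x1 - x0
    t = (x - x0) / h
    h00 = (1 + 2 * t) * (1 - t) ** 2   # weight on f0
    h10 = t * (1 - t) ** 2             # weight on d0 (scaled by h)
    h01 = t ** 2 * (3 - 2 * t)         # weight on f1
    h11 = t ** 2 * (t - 1)             # weight on d1 (scaled by h)
    return h00 * f0 + h * h10 * d0 + h01 * f1 + h * h11 * d1
```

A quadratic penalty on the derivative coefficients then maps directly onto a Gaussian prior over these same parameters, which is the link the abstract exploits.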

7.
《Ergonomics》2012,55(12):907-916
The psychophysical method used by Snook (1978) to determine maximum acceptable workloads for repetitive lifting during an 8-hour work-day in industrial populations was evaluated for application in military ergonomics. Under the conditions of the present experiment, the mean load selected by 10 soldiers (17.5 kg) was lower than that reported by Snook (1978) for industrial workers and by Garg and Saxena (1979) for college students. When the soldiers lifted and lowered their selected load for an 8-hour work-day, the average heart rate was 92 beats min⁻¹ and the mean oxygen cost was 21% of their maximum oxygen uptake (determined for uphill treadmill running). There was no evidence of cardiovascular, metabolic or subjective fatigue. A subjective rating method tended to identify slightly lower loads than the psychophysical method. The results indicate that with good subject cooperation and firm experimental control in a laboratory, the psychophysical method can identify loads that soldiers can lift repetitively for an 8-hour work-day without metabolic, cardiovascular or subjective evidence of fatigue.

8.
Moving objects produce trajectories. We describe a data model for trajectories and trajectory samples and an efficient way of modeling uncertainty via beads for trajectory samples. We study transformations of the ambient space for which important physical properties of trajectories, such as speed, are invariant. We also determine which transformations preserve beads. We give conceptually simple first-order complete query languages and computationally complete query languages for trajectory databases, which allow one to talk directly about speed and uncertainty in terms of beads. The queries expressible in these languages are invariant under speed- and bead-preserving transformations.

9.
The development of efficient algorithms for learning from large relational databases is an important task in applied machine learning. In this paper, we study knowledge discovery in relational databases and develop an attribute-oriented learning method which extracts generalization rules from relational databases. The method adopts the artificial-intelligence “learning-from-examples” paradigm and applies in the learning process an attribute-oriented concept-tree ascending technique which integrates database operations with the learning process and provides a simple and efficient way of learning from databases. The method learns both characteristic rules and classification rules of a learning concept: a characteristic rule characterizes the properties shared by all the facts of the class being learned, while a classification rule characterizes the properties that distinguish the class being learned from other classes. The learning result can be a conjunctive rule or a rule with a small number of disjuncts. Moreover, learning can be performed with databases containing noisy data and exceptional cases using database statistics. Our analysis of the algorithms shows that attribute-oriented induction substantially reduces the computational complexity of the database learning process.
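One ascension step of the concept-tree technique can be pictured as replacing each attribute value by its parent concept and merging the tuples that thereby become identical, keeping a vote count. The sketch below is our own illustration of that step (data and names are hypothetical, not from the paper):

```python
def generalize_step(tuples, concept_tree, attr):
    """Climb one level of the concept tree on attribute index `attr`
    and merge duplicate tuples, accumulating their vote counts.

    `tuples` maps a tuple of attribute values to its current vote count;
    `concept_tree` maps a value to its parent concept (missing = leave as-is).
    """
    merged = {}
    for t, votes in tuples.items():
        g = list(t)
        g[attr] = concept_tree.get(g[attr], g[attr])
        key = tuple(g)
        merged[key] = merged.get(key, 0) + votes
    return merged
```

Starting from one vote per database row and ascending attribute by attribute yields the small set of generalized tuples from which conjunctive or few-disjunct rules are read off.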

10.
Managerial decision-making processes often involve data of a temporal nature and require understanding complex temporal associations among events. Extending classical association rule mining approaches to take time into account, so as to obtain temporal information and knowledge, is important for decision support and is nowadays one of the key issues in business intelligence. This paper presents the notion of multi-temporal patterns with four temporal predicates, namely before, during, equal and overlap, and discusses a number of related properties, based on which a mining algorithm is designed. This enables us to effectively discover multi-temporal patterns in large-scale temporal databases by reducing the number of database scans in the generation of candidate patterns. The proposed approach is then applied to stock markets, aimed at exploring possible associative movements between the stock markets of the Chinese mainland and Hong Kong so as to provide helpful knowledge for investment decisions.
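The four predicates named above admit a natural interval-algebra reading. The paper's exact definitions are not reproduced in the abstract, so the classifier below assumes a common convention (A before B: A ends before B starts; A during B: A lies inside B; overlap: A starts first and ends inside B):

```python
def temporal_relation(a_start, a_end, b_start, b_end):
    """Classify interval A against interval B with the four predicates
    before / during / equal / overlap; anything else is 'other'."""
    if a_start == b_start and a_end == b_end:
        return "equal"
    if a_end < b_start:
        return "before"
    if a_start >= b_start and a_end <= b_end:
        return "during"
    if a_start < b_start and b_start <= a_end < b_end:
        return "overlap"
    return "other"
```

Candidate multi-temporal patterns would then be counted by evaluating such predicates over event intervals in a single pass per candidate set, which is where the reduced number of database scans pays off.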

11.
12.
Neuroscience is generating vast amounts of highly diverse data which are of potential interest to researchers beyond the laboratories in which they are collected. In particular, quantitative neuroanatomical data are relevant to a wide variety of areas, including studies of development, aging and pathology, and to biophysically oriented computational modelling. Moreover, the relatively discrete and well-defined nature of the data makes it an ideal application for developing systems designed to facilitate data archiving, sharing and reuse. At present, the only widely used forms of dissemination are figures and tables in published papers, which suffer from inaccessibility and the loss of machine readability. They may also present only an averaged or otherwise selected subset of the available data. Numerous database projects are in progress to address these shortcomings. They employ a variety of architectures and philosophies, each with its own merits and disadvantages. One axis on which they may be distinguished is the degree of top-down control, or curation, involved in data entry. Here we consider one extreme of this scale, in which there is no curation, minimal standardization and a wide degree of freedom in the form of the records used to document data. Such a scheme has advantages in the ease of database creation and in the equitable assignment of perceived intellectual property, by keeping control of data in the hands of the experts who collected it. It does, however, require a more sophisticated infrastructure than conventional databases, since the software must be capable of organizing diverse and differently documented data sets in an effective way. Several components of a software system to provide this infrastructure are now in place. Examples are presented, showing how these tools can be used to archive and publish neuronal morphology data, and how they can give an integrated view of data stored at many different sites.

13.
Knowledge and Information Systems - The literature on the modeling and management of data generated through the lifecycle of a manufacturing system is split into two main paradigms: product...

14.
15.
For the analysis of caries experience in seven-year-old children, the association between the presence or absence of caries experience among deciduous molars within each child is explored. Some of the high associations have an etiological basis (e.g., between symmetrically opponent molars), while others (diagonally opponent molars) are assumed to be the result of the transitivity of association and to disappear once conditioned on the caries experience status of the other deciduous molars, covariates and random effects. However, using discrete models for multivariate binary data, conditioning does not remove the diagonal association. When the association is explored on a latent scale, e.g., by a multivariate probit model, conditional independence can be concluded. This contrast is confirmed when using other models on the (observed) binary scale and on the latent scale. Depending on the point of view, the differences in conditional independence might be seen as a consequence of different types of measurements or as a consequence of different models. An example shows that the results and conclusions can be markedly different, with important consequences for model building. The explanation for this result is worked out mathematically and illustrated using dental data from the Signal-Tandmobiel® study.

16.
This paper introduces a generic framework for defining instructions, programs, and the semantics of their instantiation by operations in a multiprocessor environment. The framework captures information flow between operations in a multiprocessor program by means of a reads-from mapping from read operations to write operations. Two fundamental relations are defined on the operations: a program order between operations which instantiate the program of some processor, and view orders which are specific to each shared memory model. An operation cannot read from the "hidden" past or from the future; future and past causality can be examined either relative to the program order or relative to the view orders. A shared memory model specifies, for a given program, the permissible transformations of resource states. The memory model should reflect the programmer's view by citing the guaranteed behavior of the multiprocessor in the interface visible to the programmer. The model should refrain from dictating the design practices that should be followed by the implementation. Our framework allows an architect to reveal the programming view induced by a shared-memory architecture; it serves programmers exploring the limits of the programming interface and guides architecture-level verification. The framework is applicable to complex, commercial architectures, as it can capture subtle programming-interface details, exposing the underlying aggressive microarchitecture mechanisms. As an illustration, we define the shared memory model supported by the PowerPC architecture within our framework.

17.
The Web offers a single user interface to data sharing across heterogeneous and autonomous databases, but it was not designed to handle the rigid DBMS protocols and data formats used by relational and object-oriented databases. WebFindIt is an ongoing project to develop the database equivalent of the World Wide Web, namely a World Wide Database, through a middleware infrastructure for describing, locating, and accessing data from any kind of Web-accessible database. A special-purpose language, Web-Tassili, supports the definition and manipulation of middleware constructs for organizing the information space. An implementation of WebFindIt combines Java, CORBA and database technologies.

18.
ES-RU is a system for video sequence indexing. Video frames are annotated according to the identities of the subjects who appear in them. The system architecture distributes the different processing steps across dedicated modules, which interact with each other to accomplish the final task. This modularity also gives the system a high degree of flexibility, because each component can be independently replaced by a different one that performs the same task using a different method. For example, face detection is presently performed by the Viola–Jones algorithm, but the corresponding module could be replaced by one exploiting neural networks or support vector machines (which are actually more computationally demanding). In detail, ES-RU implements both face location and analysis, and an algorithm to select the most representative templates for the selected identities. The novelty of the algorithm for template analysis and selection lies in the proposed use of the concept of entropy. This concept is the basis of most techniques that exploit relative entropy to estimate the degree of uniqueness assured by a biometric trait when processed by a Feature Extraction Technique (FET). In this paper, entropy is introduced as a tool to evaluate the contribution of each sample in guaranteeing a suitable diversification of the templates that make up the gallery of a relevant subject. Video-surveillance activities gather a huge number of templates to be used for tracking and re-identifying subjects. However, most of these templates are not informative enough to be useful. The aim of our approach is to provide an effective technique to keep only the most “representative” of them, i.e. those that provide a sufficient level of diversification. This allows faster processing (fewer comparisons) and better results (it becomes possible to recognize a subject under different conditions).
ES-RU was tested on six video clips and on a subset of the SCFace database to assess its performance.
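As a rough illustration of entropy-guided template pruning (the paper's actual criterion is defined over FET templates and is not reproduced in the abstract; the quantization and selection rule below are our own simplification):

```python
import math
from collections import Counter

def shannon_entropy(values, bins=8):
    """Shannon entropy (bits) of a feature vector after uniform quantization.

    A flat, uninformative vector scores 0; a well-spread one scores higher.
    """
    lo, hi = min(values), max(values)
    width = (hi - lo) / bins or 1.0          # guard against a constant vector
    counts = Counter(min(int((v - lo) / width), bins - 1) for v in values)
    n = len(values)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def select_templates(templates, k):
    """Keep the k templates with the highest feature entropy,
    a simplified stand-in for the diversification criterion."""
    return sorted(templates, key=shannon_entropy, reverse=True)[:k]
```

Pruning the gallery this way keeps matching fast (fewer comparisons) while retaining the samples that add the most variety.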

19.
In object-oriented database systems where the superclass-subclass concept is supported, an instance of a subclass is also an instance of its superclass. Consequently, the access scope of a query against a class in general includes the access scope of all its subclasses, unless specified otherwise. An index that supports the superclass-subclass relationship efficiently must provide efficient associative retrieval of objects from a single class or from several classes in a class hierarchy. This paper presents an efficient index called the hierarchical tree (the H-tree). For each class, an H-tree is maintained, allowing efficient search on a single class. These H-trees are appropriately linked to capture the superclass-subclass relationships, thus allowing efficient retrieval of instances from a class hierarchy. Both experimental and analytical results indicate that the H-tree is an efficient indexing structure. Edited by Ron Sacks-Davis. Received December 1992 / Revised May 1994 / Accepted May 1995

20.
A new statistical language model is presented which combines collocational dependencies with two important sources of long-range statistical dependence: the syntactic structure and the topic of a sentence. These dependencies, or constraints, are integrated using the maximum entropy technique. Substantial improvements over a trigram model are demonstrated in both perplexity and speech recognition accuracy on the Switchboard task. A detailed analysis of the performance of this language model is provided in order to characterize the manner in which it outperforms a standard N-gram model. It is shown that topic dependencies are most useful in predicting words which are semantically related to the subject matter of the conversation. Syntactic dependencies, on the other hand, are found to be most helpful in positions where the best predictors of the following word are not within N-gram range because of an intervening phrase or clause. It is also shown that these two methods individually enhance an N-gram model in complementary ways and that the overall improvement from their combination is nearly additive.
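Perplexity, the evaluation measure used above, is the exponentiated average negative log-likelihood per word; lower is better. A minimal sketch (illustrative, not taken from the paper):

```python
import math

def perplexity(word_probs):
    """Perplexity of a model over a test sequence, given the probability
    the model assigned to each word: exp(-(1/N) * sum(log p_i))."""
    n = len(word_probs)
    return math.exp(-sum(math.log(p) for p in word_probs) / n)
```

A model that spreads probability uniformly over k choices at every position has perplexity exactly k, which is why perplexity is often read as an effective branching factor.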
