Similar Documents
20 similar documents found (search time: 10 ms)
1.
We present a physical design methodology for network model databases based on the theory of separability. In particular, we present a cost model and a usage specification scheme that are suitable for describing the network model database environment. We subsequently prove that, under these usage and cost models, a large subset of practically important access structures that are available in network model database systems satisfies the conditions for separability. The theory of separability was introduced in an earlier work, in the context of relational systems, as a formal basis for partitioning the problem of designing the optimal physical database. The theory proves that, given a certain set of access structures and a usage specification scheme, the problem of optimal assignment of access structures to the entire database can be reduced to the subproblem of optimizing individual record types independently of one another. The approach we present significantly reduces the complexity of the design problem, which has the potential of being combinatorially explosive. A heuristic extension of the formal methodology to the access structures not incorporated in the theory is also discussed.

2.
In this paper, a model which combines relational databases with self-processing networks is proposed in order to improve the performance of very large databases. The proposed model uses an approach which is radically different from all other distributed database models, where each computer processes a portion of the database. In the self-processing network model, the network structure which consists of nodes and connections, captures the data and the relationships by assigning them unique, connected control, and data nodes. The network activity is the mechanism that performs the relational algebra operations. No data transmission is needed, and since data nodes are common to all the relations, integrity and elimination of data redundancy are achieved. An extension of the model, by interconnecting the data nodes via weighted links, provides us with properties that are embedded in neural networks, such as fuzziness and learning.

3.
An extended authorization model for relational databases
We propose two extensions to the authorization model for relational databases defined originally by P.G. Griffiths and B. Wade (1976). The first extension concerns a new type of revoke operation, called the noncascading revoke operation. The original model contains a single, cascading revoke operation, meaning that when a privilege is revoked from a user, a recursive revocation takes place that deletes all authorizations granted by this user that do not have other supporting authorizations. The new type of revocation avoids the recursive revocation of authorizations. The second extension concerns negative authorization, which permits specification of explicit denial for a user to access an object under a particular mode. We also address the management of views and groups with respect to the proposed extensions.
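The contrast between cascading revocation and a noncascading alternative can be illustrated with a toy grant graph. This is a minimal sketch with hypothetical names, not the Griffiths-Wade algorithm itself:

```python
# Toy grant graph: each grant is a (grantor, grantee) pair for one
# privilege on one object. A user's privilege is "supported" if a
# chain of grants leads back to the object's owner.

def supported(grants, owner):
    """Return every user whose privilege traces back to the owner."""
    holders, changed = {owner}, True
    while changed:
        changed = False
        for grantor, grantee in grants:
            if grantor in holders and grantee not in holders:
                holders.add(grantee)
                changed = True
    return holders

def revoke_cascading(grants, owner, grantor, grantee):
    """Delete the grant, then recursively drop grants that lose all support."""
    grants = [g for g in grants if g != (grantor, grantee)]
    while True:
        alive = supported(grants, owner)
        pruned = [(a, b) for a, b in grants if a in alive]
        if pruned == grants:
            return pruned
        grants = pruned

# Owner A grants to B, who grants to C; revoking A->B cascades to C.
print(revoke_cascading([("A", "B"), ("B", "C")], "A", "A", "B"))  # []
```

A noncascading revoke, by contrast, would delete only the A→B grant while keeping C's authorization alive, avoiding exactly this recursive deletion.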

4.
Li Dongyan, Li Shaozi, Ke Xiao. 《计算机应用》 (Journal of Computer Applications), 2010, 30(10): 2610-2613
To address the data imbalance in the datasets used for image annotation, a new automatic balancing model based on an external database is proposed. The model first identifies low-frequency words from the word-frequency distribution of the original database; then, following the automatic balancing scheme, it adds corresponding images from the external database for each low-frequency word. Features are then extracted from the images, and the 47,065 visual words of the Corel 5k dataset are clustered together with the 996 visual words extracted from the images appended from the external database. Finally, the images are annotated using the external-database-based automatic annotation improvement model. This method overcomes the imbalance problem in image annotation databases and clearly improves the number of words correctly annotated at least once, as well as precision and recall.

5.
Vertical partitioning can be used to enhance the performance of relational database systems by reducing the number of disk accesses. The authors identify the key parameters for capturing the behavior of an access plan and propose a two-step methodology consisting of a query analysis step to estimate the parameters and a binary partitioning step which can be applied recursively. The partitioning uses an integer linear programming technique to minimize the number of disk accesses. Significant performance benefit is achieved for joins if the partitioned (inner) relation, but not the original relation, fits into the memory buffer under the inner-outer loop join method, or into the sort buffer under the sort-merge join method. For cases where a segment scan or a cluster index scan is used, vertical partitioning of the relation with the algorithm described is still often found to lead to substantial performance improvement.
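The flavor of the cost trade-off can be sketched by brute-forcing a single two-way split under a deliberately crude cost model (scan cost proportional to fragment width times query frequency). The paper itself estimates access-plan parameters and solves an integer linear program, so everything below is illustrative only:

```python
from itertools import combinations

# Each query is (set of attributes touched, frequency). A query pays
# the full width of every fragment holding an attribute it needs.

def scan_cost(fragment, queries):
    return sum(freq * len(fragment)
               for attrs, freq in queries
               if attrs & fragment)

def best_binary_split(attributes, queries):
    """Exhaustively try every 2-way split and return (cost, fragments)."""
    attrs = frozenset(attributes)
    best = (scan_cost(attrs, queries), (attrs, frozenset()))  # no split
    for k in range(1, len(attributes)):
        for left in combinations(sorted(attributes), k):
            a, b = frozenset(left), attrs - frozenset(left)
            cost = scan_cost(a, queries) + scan_cost(b, queries)
            if cost < best[0]:
                best = (cost, (a, b))
    return best

queries = [({"id", "name"}, 10), ({"salary"}, 5)]
cost, (a, b) = best_binary_split(["id", "name", "salary"], queries)
print(cost)  # 25: splitting off "salary" beats scanning the full relation (45)
```

Applied recursively to each resulting fragment, this mirrors the shape (though not the cost model or optimizer) of the binary partitioning step the abstract describes.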

6.
Database systems employ physical structures such as indexes and materialized views to improve query performance, potentially by orders of magnitude. It is therefore important for a database administrator to choose the appropriate configuration of these physical structures for a given database. XML database systems are increasingly being used to manage semi-structured data, and XML support has been added to commercial database systems. In this paper, we address the problem of automatic physical design for XML databases, which is the process of automatically selecting the best set of physical structures for a database and a query workload. We focus on recommending two types of physical structures: XML indexes and relational materialized views of XML data. We present a design advisor for recommending XML indexes, one for recommending materialized views, and an integrated design advisor that recommends both indexes and materialized views. A key characteristic of our advisors is that they are tightly coupled with the query optimizer of the database system, and they rely on the optimizer for enumerating and evaluating physical designs. We have implemented our advisors in a prototype version of IBM DB2 V9, and we experimentally demonstrate the effectiveness of their recommendations using this implementation.

7.
In this paper, a new approach based on Differential Evolution (DE) for the automatic classification of items in medical databases is proposed. Based on it, a tool called DEREx is presented, which automatically extracts explicit knowledge from the database in the form of IF-THEN rules containing AND-connected clauses on the database variables. Each DE individual encodes a set of rules. For each class, the individual can contain more than one rule, and these rules can be seen as logically connected in OR. Furthermore, all the classifying rules for all the classes are found at once, in a single step. DEREx is intended as a useful support for decision making whenever explanations of why an item is assigned to a given class must be provided, as is the case for diagnosis in the medical domain. The major contribution of this paper is that DEREx is the first classification tool in the literature that is based on DE and automatically extracts sets of IF-THEN rules without the intervention of any other mechanism. All other DE-based classification tools in the literature either simply find centroids for the classes rather than extracting rules, or are hybrid systems in which DE merely optimizes some parameters while the classification capability is provided by other mechanisms. For the experiments, eight databases from the medical domain were considered. First, among ten classical DE variants, the most effective in terms of classification accuracy under ten-fold cross-validation was identified. Second, the tool was compared on the same eight databases against a set of fifteen classifiers widely used in the literature. The results prove the effectiveness of the proposed approach: DEREx turns out to be the best-performing tool in terms of classification accuracy, and statistical analysis confirms that DEREx is the best classifier.
Compared to the other rule-based classification tools used here, DEREx needs the lowest average number of rules to handle a problem, and the average number of clauses per rule is not very high. In conclusion, the tool presented here is preferable to the other classifiers because it shows good classification accuracy, automatically extracts knowledge, and provides it to users in an easily comprehensible form.
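As a rough illustration of the DE machinery underlying such a tool, here is one generation of the classic DE/rand/1/bin variant on a toy numeric objective. DEREx itself encodes IF-THEN rule sets in its individuals, which this sketch does not attempt:

```python
import random

def de_step(pop, fitness, f=0.5, cr=0.9, rng=random):
    """One DE/rand/1/bin generation: mutate, crossover, greedy select."""
    new_pop = []
    for i, target in enumerate(pop):
        # three distinct individuals other than the target
        a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
        j_rand = rng.randrange(len(target))  # force at least one mutated gene
        trial = [a[k] + f * (b[k] - c[k])
                 if (rng.random() < cr or k == j_rand) else target[k]
                 for k in range(len(target))]
        # greedy selection: the trial replaces the target only if no worse
        new_pop.append(trial if fitness(trial) <= fitness(target) else target)
    return new_pop

sphere = lambda v: sum(x * x for x in v)  # toy objective to minimize
random.seed(0)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(10)]
for _ in range(100):
    pop = de_step(pop, sphere)
best = min(map(sphere, pop))
```

Because selection is greedy, the best fitness in the population is non-increasing across generations, which is the property the variant comparison in the paper exploits.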

8.
We introduce a fuzzy set theoretic approach for dealing with uncertainty in images in the context of spatial and topological relations existing among the objects in the image. We propose an object-oriented graph theoretic model for representing an image and this model allows us to assess the similarity between images using the concept of (fuzzy) graph matching. Sufficient flexibility has been provided in the similarity algorithm so that different features of an image may be independently focused upon.

9.
As training data grows and models become more complex, the cost of training deep neural networks keeps rising, placing ever-higher demands on the compute power of the underlying platforms; parallelizing model training has become an urgent need for keeping applications timely. In recent years, AI accelerators for distributed training (e.g., FPGAs, TPUs, and dedicated AI chips) have emerged in large numbers, providing the hardware foundation for parallel training of deep neural networks. To make full use of these hardware resources, researchers need to perform model-parallel training on platforms that combine AI accelerators of differing compute capabilities and hardware architectures; how to use such heterogeneous accelerator resources efficiently, and how to balance the training load across them, have therefore become pressing research questions. This paper proposes an automatic generation method for model-splitting strategies for model-parallel training. The method automatically generates a splitting strategy from a static network model, assigning network layers to different AI accelerators. The automatically generated assignment strategy makes efficient use of all computing resources on a single platform and keeps the training load balanced across devices. Compared with the manual splitting strategies currently in use, it is far more time-efficient, cutting strategy-generation time by a factor of more than 100, and it reduces the uncertainty introduced by human factors.
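A drastically simplified stand-in for split-strategy generation: brute-force the contiguous assignment of layers to k devices that minimizes the maximum per-device load. The layer costs and device count below are hypothetical, and the real method must also weigh heterogeneous accelerator speeds and inter-device communication:

```python
from itertools import combinations

def best_split(costs, k):
    """Split a layer-cost sequence into k contiguous blocks (one block
    per device), minimizing the heaviest block's total cost."""
    n = len(costs)
    best = (float("inf"), None)
    for cuts in combinations(range(1, n), k - 1):
        bounds = (0,) + cuts + (n,)
        loads = [sum(costs[bounds[i]:bounds[i + 1]]) for i in range(k)]
        best = min(best, (max(loads), bounds))
    return best

# Six layers with estimated costs, three accelerators.
load, bounds = best_split([4, 2, 3, 7, 1, 3], 3)
print(load, bounds)  # 9 (0, 3, 4, 6) -> blocks [4,2,3] | [7] | [1,3]
```

Contiguity keeps each device holding a pipeline stage; dynamic programming would replace the exhaustive search at realistic layer counts.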

10.
Computer models are widely used to simulate dynamic systems in the automobile industry. It is imperative to have high quality CAE models with good predictive capability. This requires CAE engineers to calibrate their models against physical tests. The challenges in occupant restraint system model calibration are: (1) the dynamic system usually consists of multiple responses, (2) most of the responses are functional data or time histories, and (3) the traditional trial-and-error calibration approach is time consuming and highly dependent on the analyst's expertise. These call for the development of an automatic and effective model calibration method. This paper presents a newly developed automatic model calibration method, based on the Error Assessment of Response Time Histories (EARTH) metric. The EARTH metric is used to perform model assessment on various important features of the functional responses. A new multi-objective optimization problem is formulated and solved by a Non-dominated Sorting Genetic Algorithm to automatically update CAE model parameters. A real-world example is used to demonstrate the use of the proposed method.

11.
An algebra for probabilistic databases
An algebra is presented for a simple probabilistic data model that may be regarded as an extension of the standard relational model. The probabilistic algebra is developed in such a way that (restricted to α-acyclic database schemes) the relational algebra is a homomorphic image of it. Strictly probabilistic results are emphasized. Variations on the basic probabilistic data model are discussed. The algebra is used to explicate a commonly used statistical smoothing procedure and is shown to be potentially very useful for decision support with uncertain information.
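A toy rendering of one probabilistic operator: a natural join that multiplies tuple probabilities under an independence assumption. The paper's algebra is considerably richer (and tied to α-acyclic schemes); this sketch only conveys the basic idea:

```python
# A probabilistic relation maps a tuple (sorted attribute/value pairs)
# to its probability of membership.

def pjoin(r, s):
    """Natural join; tuples assumed independent, so probabilities multiply."""
    out = {}
    for t1, p1 in r.items():
        for t2, p2 in s.items():
            d1, d2 = dict(t1), dict(t2)
            if all(d1[k] == d2[k] for k in d1.keys() & d2.keys()):
                merged = tuple(sorted({**d1, **d2}.items()))
                # alternative derivations of the same tuple are summed here
                out[merged] = out.get(merged, 0.0) + p1 * p2
    return out

r = {(("dept", "db"), ("name", "alice")): 0.9}
s = {(("dept", "db"), ("floor", 3)): 0.5}
joined = pjoin(r, s)  # alice works on floor 3 with probability 0.45
```

Summing over derivations is itself a modeling choice; a full probabilistic algebra must state precisely when probabilities may be multiplied or added.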

12.
Peng Yuzhong, Gong Daoqing, Deng Chuyan, Li Hongya, Cai Hongguo, Zhang Hao. Applied Intelligence, 2022, 52(3): 2703-2719
Applied Intelligence - Deep neural networks (DNNs) have achieved remarkable success on many rainfall prediction tasks in recent years. However, the performance of DNNs relies heavily upon the...

13.
Semiconductor wafer defect inspection is an important process before die packaging. The defective regions are usually identified through visual judgment with the aid of a scanning electron microscope. Dozens of people visually check wafers and hand-mark their defective regions. Consequently, potential misjudgment may be introduced due to human fatigue. In addition, the process can incur significant personnel costs. Prior work has proposed automated visual wafer defect inspection based on supervised neural networks. Since it requires learned patterns specific to each application, its disadvantage is the lack of product flexibility. Self-organizing neural networks (SONNs) have been proven to have the capability of unsupervised auto-clustering. In this paper, an automatic wafer inspection system based on a self-organizing neural network is proposed. Experimental results on real-world data show that the proposed method successfully identifies the defective regions on wafers with good performance.

14.
The traditional approach to network management involves managing each vendor's equipment and network segment in isolation through its own proprietary element management system. A new network management architecture is needed that consolidates operations across vendor and technology boundaries. In this paper, an architectural model for Intelligent Network Management (INM) is presented. The INM system includes a manager system, which controls all subsystems and coordinates different management tasks; an expert system, which is responsible for handling particularly difficult problems; and intelligent agents, which bring management closer to applications and user requirements by being spread across network segments or domains. Within the proposed expert system model, an intelligent fault management system is described in particular. The architectural model is intended to build an INM system that meets the needs of managing modern network systems.

15.
Automatic layout of the nodes and edges of network diagrams has long been an important topic in visualization research, and automatic layout algorithms based on force-directed models are the most widely applied and most extensively published family of methods in this area. Following the chronological order in which the research directions appeared, this paper surveys the representative methods, research progress, and branches of force-directed network layout algorithms from five perspectives: the basic model, layout algorithms based on multidimensional scaling, multilevel iterative layout algorithms, node layout algorithms in non-Euclidean spaces, and automatic layout algorithms for constrained graphs. Frontiers of development are also discussed.
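The basic force-directed model such surveys start from can be sketched in a few lines. This is a single Fruchterman-Reingold-style iteration; real implementations add cooling schedules, displacement caps, and the multilevel or MDS refinements the survey covers:

```python
import math

def step(pos, edges, k=1.0, dt=0.05):
    """One force-directed iteration: all pairs repel with k^2/d,
    linked pairs attract with d^2/k, then positions move by dt * force."""
    disp = {v: [0.0, 0.0] for v in pos}
    nodes = list(pos)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = k * k / d  # repulsive force magnitude
            disp[u][0] += f * dx / d; disp[u][1] += f * dy / d
            disp[v][0] -= f * dx / d; disp[v][1] -= f * dy / d
    for u, v in edges:
        dx, dy = pos[u][0] - pos[v][0], pos[u][1] - pos[v][1]
        d = math.hypot(dx, dy) or 1e-9
        f = d * d / k  # attractive force magnitude
        disp[u][0] -= f * dx / d; disp[u][1] -= f * dy / d
        disp[v][0] += f * dx / d; disp[v][1] += f * dy / d
    return {v: (pos[v][0] + dt * disp[v][0], pos[v][1] + dt * disp[v][1])
            for v in pos}

pos = {"a": (0.0, 0.0), "b": (0.1, 0.0), "c": (3.0, 0.0)}
new = step(pos, [("a", "b"), ("b", "c")])
```

The repulsion/attraction pair has its equilibrium at distance k, so repeated iterations spread crowded nodes apart while drawing distant neighbors together.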

16.
Li Peng, Li Ling, Li Min. 《计算机应用研究》 (Application Research of Computers), 2013, 30(4): 1240-1243
A new Bayesian network model for automatic and interactive image segmentation is proposed, in which belief propagation updates the probability of each node given the Bayesian model and various image measurements. A multilayer Bayesian network is constructed from the statistical dependencies among superpixel regions, edge segments, vertices, and measurements in an over-segmentation model. Beyond automatic segmentation, the Bayesian network model can also be used for interactive segmentation: whereas existing interactive methods passively rely on corrections supplied by the user, a new active input-selection scheme is proposed to obtain those corrections. Experiments on the Weizmann dataset and the VOC 2006 image set show that the Bayesian network model achieves better automatic segmentation, and that active input selection improves overall segmentation accuracy.

17.
An interactive design tool for designing CODASYL databases is described. The system is composed of three main modules: a user interface, a transaction analyzer, and a core module. The user interface allows a designer to interactively enter information concerning a database design which is to be evaluated. The transaction analyzer allows the designer to specify the processing requirements in terms of typical logical transactions to be executed against the database and translates these logical transactions into physical transactions which access and manipulate the physical databases. The core module is the implementation of a set of analytical models and cost formulas developed for the manipulation of indexed sequential and hash-based files and CODASYL sets. These models and formulas account for the situation in which occurrences of multiple record types are stored in the same area. Also presented are the results of a series of experiments in which key design parameters are varied. The system is implemented in UCSD Pascal running on IBM PCs.

18.
A novel online approach to exact string matching and filtering of large databases is presented. String matching/filtering is based on artificial neural networks and operates in two stages: initially, a self-organizing map retrieves the cluster of database strings that are most similar to the query string; subsequently, a harmony theory network compares the retrieved strings with the query string and determines whether an exact match exists. The similarity measure is configured to the specific characteristics of the database so as to expose overall string similarity rather than character coincidence at homologous string locations. The experimental results demonstrate foolproof, fast, and practically database-size independent operation that is especially robust to database modifications. The proposed approach is put forward for general-purpose (directory, catalogue, glossary search) as well as Internet-oriented (e-mail blocking, URL, username classification) applications. © 2010 Wiley Periodicals, Inc.

19.
Constraint relational databases use constraints to both model and query data. A constraint relation contains a finite set of generalized tuples. Each generalized tuple is represented by a conjunction of constraints on a given logical theory and, depending on the logical theory and the specific conjunction of constraints, it may possibly represent an infinite set of relational tuples. Owing to these characteristics, constraint databases are well suited to model multidimensional and structured data, like spatial and temporal data. The definition of an algebra for constraint relational databases is important in order to make constraint databases a practical technology. We extend the previously defined constraint algebra (called the generalized relational algebra). First, we show that the relational model is not the only possible semantic reference model for constraint relational databases, and we show how constraint relations can be interpreted under the nested relational model. Then, we introduce two distinct classes of constraint algebras, one based on the relational algebra and one based on the nested relational algebra, and we present an algebra of the latter type. The algebra is proved equivalent to the generalized relational algebra when input relations are modified by introducing generalized tuple identifiers; however, from a user point of view, it is more suitable. Thus, the difference between these algebras is similar to the difference between the relational algebra and the nested relational algebra with one level of nesting. We also show how external functions can be added to the proposed algebra.

20.
Faudemay P., Mhiri M. IEEE Micro, 1991, 11(6): 22-34
The RAPID-1 (relational access processor for intelligent data), an associative accelerator that recognizes tuples and logical formulas, is presented. It evaluates logical formulas instantiated by the current tuple, or record, and operates on whole relations or on hashing buckets. RAPID-1 uses a reduced instruction set and hardwired control and executes all comparisons in a bit-parallel mode. It speeds up the database by a significant factor and will adapt to future generations of microprocessors. The principal design issues, data structures, instruction set, architecture, environments and performance are discussed.


Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号