Similar Documents
20 similar documents found (search time: 31 ms)
1.
2.
Abstract

There is a level of sensitivity to almost all data, but which matters more: keeping sensitive data confidential, or guaranteeing the integrity of the data? When it comes to sensitivity or integrity, access control models offer several options. Yet wouldn't a combination of access control models provide a more desirable result than settling for just one? Is it possible to have a slice of data sensitivity with a dollop of data integrity? To answer this question, we must first understand exactly what an access control model is and, in terms of sensitivity and integrity, which models provide the best security.
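As a rough illustration of how two access control models can be combined (a sketch of the general idea, not the paper's proposal), the check below grants access only when both a Bell-LaPadula confidentiality rule and a Biba integrity rule agree; the levels and subjects are invented for the example.

```python
# Illustrative sketch: combining Bell-LaPadula (confidentiality) with
# Biba (integrity) mandatory access-control checks. Levels are plain
# integers; higher means more sensitive / more trusted.

def blp_allows(subj_conf, obj_conf, op):
    # Bell-LaPadula: no read up, no write down.
    if op == "read":
        return subj_conf >= obj_conf
    if op == "write":
        return subj_conf <= obj_conf
    return False

def biba_allows(subj_int, obj_int, op):
    # Biba (strict integrity): no read down, no write up.
    if op == "read":
        return subj_int <= obj_int
    if op == "write":
        return subj_int >= obj_int
    return False

def combined_allows(subject, obj, op):
    # Access is granted only when both models agree.
    return (blp_allows(subject["conf"], obj["conf"], op)
            and biba_allows(subject["integ"], obj["integ"], op))

analyst = {"conf": 2, "integ": 1}   # hypothetical subject labels
report  = {"conf": 1, "integ": 2}   # hypothetical object labels

print(combined_allows(analyst, report, "read"))   # True: read down (BLP) and read up (Biba)
print(combined_allows(analyst, report, "write"))  # False: writing down violates BLP
```

The conjunction is the strictest possible combination; a real design might instead negotiate between the two policies per object class.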

3.

Goal-oriented requirements engineering aims to capture the desired goals and strategies of relevant stakeholders during early requirements engineering stages, using goal models. Goal-oriented modeling techniques support the analysis of system requirements (especially non-functional ones) from an operationalization perspective, through the evaluation of alternative design options. However, conflicts and undesirable interactions between requirements produced from goals are inevitable, especially as stakeholders often pursue different objectives. In this paper, we propose an approach based on game theory and the Goal-oriented Requirement Language (GRL) to reconcile interacting stakeholders (captured as GRL actors), leading to reasonable trade-offs. The approach consists of building a payoff bimatrix that considers all actors' valid GRL strategies, and computing its Nash equilibrium. Furthermore, we use two optimization techniques to reduce the size of the payoff bimatrix, hence reducing the computational cost of finding the Nash equilibrium. The approach goes beyond existing work by supporting nonzero-sum games, multiple alternatives, and inter-actor dependencies. We demonstrate the applicability of our game-theoretic modeling and analysis approach using a running example and two GRL models from the literature, with positive results on feasibility and applicability, including performance results.
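The core equilibrium computation can be illustrated in miniature. The sketch below brute-forces the pure-strategy Nash equilibria of a two-actor payoff bimatrix; the payoffs are an invented prisoner's-dilemma-style example, not a GRL model, and the paper's bimatrix-reduction optimizations are omitted.

```python
# Hedged sketch: exhaustive search for pure-strategy Nash equilibria of a
# two-actor payoff bimatrix (toy payoffs, not derived from GRL strategies).

def pure_nash_equilibria(A, B):
    """A[i][j], B[i][j]: payoffs of actors 1 and 2 for strategy pair (i, j)."""
    rows, cols = len(A), len(A[0])
    eqs = []
    for i in range(rows):
        for j in range(cols):
            # Neither actor can improve by unilaterally deviating.
            best_row = all(A[i][j] >= A[k][j] for k in range(rows))
            best_col = all(B[i][j] >= B[i][l] for l in range(cols))
            if best_row and best_col:
                eqs.append((i, j))
    return eqs

# Prisoner's-dilemma-style payoffs: strategy 1 dominates for both actors.
A = [[-1, -3], [0, -2]]
B = [[-1, 0], [-3, -2]]
print(pure_nash_equilibria(A, B))  # [(1, 1)]
```

The search is quadratic in the number of strategy pairs, which is why reducing the bimatrix size matters for larger models.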


4.
The purpose of this research is to examine whether decision-theoretic planning techniques can help managers evaluate strategic options in complex and uncertain environments. Firms faced with choices such as whether to acquire a start-up, develop a new product, or invest in updated production technology continue to make decisions based on unreliable heuristics, “gut feel,” or misleading financial measures such as net present value (NPV). In this paper we show that decision-theoretic planning techniques originally developed for robot planning let us gain the insights provided by real options analysis without working within the restrictions of models designed to price financial options or incurring the overhead of constructing huge decision trees. A biotechnology licensing problem similar to those addressed elsewhere in the real options literature illustrates the methodology and demonstrates its feasibility.
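A toy calculation (with invented numbers, not the paper's biotechnology case) shows the kind of insight real options analysis provides: deferring a decision can be worth more than committing now, even when the immediate expected NPV is zero.

```python
# Illustrative sketch: the option value of waiting one period before
# licensing, with made-up payoffs and probabilities.

def expected_value_now(p_good, payoff_good, payoff_bad, cost):
    # Commit immediately: plain NPV-style expected payoff.
    return p_good * payoff_good + (1 - p_good) * payoff_bad - cost

def expected_value_wait(p_good, payoff_good, payoff_bad, cost, discount):
    # Wait one period, observe the market, then license only if profitable.
    return discount * (p_good * max(payoff_good - cost, 0)
                       + (1 - p_good) * max(payoff_bad - cost, 0))

now  = expected_value_now(0.5, 100, 20, 60)        # 0.0: NPV says indifferent
wait = expected_value_wait(0.5, 100, 20, 60, 0.9)  # 18.0: waiting has value
print(now, wait)
```

The gap between the two numbers is exactly the flexibility that an NPV calculation ignores; a decision-theoretic planner recovers it by modeling the deferred decision as a state-contingent policy.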

5.

Local learning algorithms use a neighborhood of training data close to a given testing query point in order to learn the local parameters and create on the fly a local model specifically designed for this query point. The local approach delivers breakthrough performance in many application domains. This paper considers local learning versions of regularization networks (RN) and investigates several options for improving their online prediction performance, in both accuracy and speed. First, we exploit the interplay between locally optimized and globally optimized hyper-parameters (the regularization parameter and the kernel width) that each new predictor needs to optimize online. There is a substantial reduction in operating cost when we use two globally optimized hyper-parameters that are common to all local models. We also demonstrate that this global optimization of the two hyper-parameters produces more accurate models than the alternatives that locally optimize online either the regularization parameter, the kernel width, or both. Then, by comparing eigenvalue decomposition (EVD) with Cholesky decomposition specifically for the local learning training and testing phases, we show that the Cholesky-based implementations are faster than their EVD counterparts in all training cases. While EVD is suitable for cost-effectively validating several regularization parameters, Cholesky should be preferred when validating several neighborhood sizes (the number of k-nearest neighbors) and when the local network operates online. We then exploit parallelism in a multi-core system for these local computations, demonstrating that the execution times are further reduced. Finally, although using pre-computed stored local models instead of learning local models online is even faster, this option degrades accuracy.
Evidently, there is a substantial gain in waiting for a testing point to arrive before building a local model; hence the online local learning RNs are more accurate than their pre-computed stored counterparts. To support these findings, we present extensive experimental results and comparisons on several benchmark datasets.
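A minimal sketch of one local RN prediction, assuming a Gaussian kernel and illustrative hyper-parameter values: the k nearest neighbors of the query form the local training set, and the regularized system is solved with the Cholesky factorization the abstract recommends for online operation.

```python
import numpy as np

def gaussian_kernel(a, b, width):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * width ** 2))

def local_rn_predict(x_train, y_train, x_query, k=10, width=0.2, lam=1e-6):
    # Neighborhood: the k training points nearest to the query.
    idx = np.argsort(np.abs(x_train - x_query))[:k]
    xn, yn = x_train[idx], y_train[idx]
    # Solve (K + lam*I) alpha = y via Cholesky: factor once, two triangular solves.
    L = np.linalg.cholesky(gaussian_kernel(xn, xn, width) + lam * np.eye(k))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, yn))
    return float(gaussian_kernel(np.array([x_query]), xn, width) @ alpha)

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x)
print(local_rn_predict(x, y, 0.5))  # close to sin(pi) = 0
```

Because the model is built only after the query arrives, the factorization cost is paid per prediction, which is exactly the trade-off the paper quantifies against pre-computed stored local models.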


6.

This paper presents a new method for reconstructing a surface model from scanned data as a C⁰ composite surface. The surface is represented by intersecting underlying surfaces (U-surfaces), which are reconstructed from their respective parts segmented according to sweep-based modeling. However, even if each U-surface is successfully reconstructed, a surface model for the scanned data cannot be guaranteed to be desirably represented by a composite surface of those U-surfaces. Therefore, the proposed method reconstructs adjacent U-surfaces such that their intersection curve represents part of the feature lines, which are slightly offset from the scanned data, and has as small a torsion as possible. Compared with conventional approaches that naively fit a single patch to the whole model or to some segmented parts, the method provides a guiding principle for generating surface models that are better suited to styling design. Experimental results demonstrate that desirable models can be generated from real-world scanned data.


7.
Abstract

Until recently, hardware was upgraded simply by ordering a larger, faster, more expensive computer. Now, upgrading a system is more complex but potentially more rewarding. The complexity results from the many alternative approaches available, the advent of distributed processing with its networking implications, the proliferation of microcomputers and office systems and the potential need to integrate them, and the options available for obtaining equipment. An understanding of an organization's needs and careful planning can make an equipment upgrade successful.

8.

In this study we represent malware as opcode sequences and detect it using a deep belief network (DBN). Compared with traditional shallow neural networks, DBNs can use unlabeled data to pretrain a multi-layer generative model, which can better represent the characteristics of data samples. We compare the performance of DBNs with that of three baseline malware detection models, which use support vector machines, decision trees, and the k-nearest neighbor algorithm as classifiers. The experiments demonstrate that the DBN model provides more accurate detection than the baseline models. When additional unlabeled data are used for DBN pretraining, the DBNs outperform the other detection models. We also use the DBN as an autoencoder to extract feature vectors of executables. The experiments indicate that the autoencoder can effectively model the underlying structure of input data and significantly reduce the dimensionality of the feature vectors.
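The baseline side of the comparison can be sketched compactly. The toy example below (synthetic opcode sequences, not real malware, and a plain nearest-neighbor classifier standing in for the study's baselines) represents each program by its opcode bigram counts and classifies a query by cosine similarity to labeled samples.

```python
from collections import Counter
import math

# Toy sketch: opcode-sequence features for malware detection.
# Sequences and labels are invented; real pipelines use disassembled binaries.

def bigram_counts(opcodes):
    # Feature vector: counts of consecutive opcode pairs.
    return Counter(zip(opcodes, opcodes[1:]))

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def knn_predict(train, query):
    # train: list of (opcode list, label); return label of the nearest sample.
    qv = bigram_counts(query)
    return max(train, key=lambda t: cosine(bigram_counts(t[0]), qv))[1]

train = [
    (["push", "mov", "call", "ret"], "benign"),
    (["xor", "jmp", "xor", "jmp", "call"], "malware"),
]
print(knn_predict(train, ["push", "mov", "call", "call", "ret"]))  # benign
print(knn_predict(train, ["xor", "jmp", "xor", "jmp"]))            # malware
```

A DBN replaces these hand-built count vectors with features learned from (possibly unlabeled) data, which is where the reported accuracy gains come from.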


9.
Abstract

Functions that control access to data in groupware should be designed flexibly, offering different options to end users. However, conflicts might arise among end users in the process of selecting one of these options. To support users in finding a consensual solution to these conflicts, we propose a metafunction called ‘negotiability’. We define and explore the concept of negotiability and discuss its application to access control, concurrency control, and consistency control. We assume that negotiability can play an important role in tailoring these mechanisms and supporting cooperative use of the system's flexibility.

10.

Processes such as growth and atrophy cause changes through time that can be visible in a series of medical images, following the hypothesis that form follows function. As was hypothesised by D’Arcy Thompson more than 100 years ago, models of the changes inherent in these actions can aid understanding of the processes at work. We consider how image registration using finite-dimensional planar Lie groups (in contrast to general diffeomorphisms) can be used in this process. The deformations identified can be described as points in the Lie algebra, thus enabling processes such as evolutionary change, growth, and deformation from disease, to be described in a linear space. The choice of appropriate Lie group becomes a modelling choice and can be selected using model selection; Occam’s razor suggests that groups with the smallest number of parameters (which Thompson referred to as ‘simple transformations’) are to be preferred. We demonstrate our method on an example from Thompson of the cannon-bones of three hoofed mammals and a set of outline curves of the development of the human skull, with promising results.
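For the simplest planar Lie group beyond pure rotation, the log and exp maps have closed forms. The sketch below (our illustration, not the paper's implementation) maps the linear part of a rotation-plus-uniform-scale transform into its Lie algebra, where deformations live in a linear space and can be scaled, averaged, or interpolated.

```python
import numpy as np

# Hedged sketch: the 2x2 linear part of a planar similarity transform
# (rotation + uniform scale) forms a Lie group. Its matrix logarithm has
# the closed form log(s)*I + theta*J, a point in a 2-D linear space.

J = np.array([[0.0, -1.0], [1.0, 0.0]])  # generator of planar rotations

def sim2_log(M):
    s = np.sqrt(np.linalg.det(M))          # uniform scale factor
    theta = np.arctan2(M[1, 0], M[0, 0])   # rotation angle
    return np.log(s) * np.eye(2) + theta * J

def sim2_exp(V):
    s, theta = np.exp(V[0, 0]), V[1, 0]
    c, si = np.cos(theta), np.sin(theta)
    return s * np.array([[c, -si], [si, c]])

M = 1.5 * np.array([[np.cos(0.3), -np.sin(0.3)],
                    [np.sin(0.3),  np.cos(0.3)]])
V = sim2_log(M)
print(np.allclose(sim2_exp(V), M))  # round trip recovers M
half = sim2_exp(0.5 * V)            # "half" the deformation, taken linearly
```

Scaling `V` linearly and exponentiating back is exactly the sense in which growth or evolutionary change becomes describable in a linear space once a finite-dimensional group is chosen.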


11.

The competitive layer model (CLM), implemented by Lotka–Volterra recurrent neural networks (LV RNNs), is characterized by its ability to bind neurons with similar features into the same layer through competition among neurons at different layers within a column. This paper proposes using the CLM of the LV RNN to detect activated brain regions in fMRI data: correlated voxels can be identified and clusters in the fMRI time series uncovered. Experiments on synthetic and real fMRI data demonstrate the effectiveness of binding activated voxels into the ‘active’ layers of the CLM, and the proposed method detects activated voxels more accurately than several existing methods.


12.
Liu Yu-Yao, Yang Bo, Pei Hong-Bin, Huang Jing. Journal of Computer Science and Technology, 2020, 35(6): 1446-1460

Explainable recommendation, which can provide reasonable explanations for recommendations, is increasingly important in many fields. Although traditional embedding-based models can learn many implicit features, resulting in good performance, they cannot provide the reasons for their recommendations. Existing explainable recommender methods can be divided into two main types: the first highlights reviews written by users to provide an explanation, while the second takes attribute information into consideration. Each of these approaches considers only one aspect and does not make the best use of the available information. In this paper, we propose a novel neural explainable recommender model based on attributes and reviews (NERAR) that combines the processing of attribute features and review features. We employ a tree-based model to extract and learn attribute features from auxiliary information, and we use a time-aware gated recurrent unit (T-GRU) to model user review features and a convolutional neural network (CNN) to process item review features. Extensive experiments on Amazon datasets demonstrate that our model outperforms state-of-the-art recommendation models in recommendation accuracy. The presented examples also show that our model can offer more reasonable explanations, and crowd-sourced evaluations verify our model's superiority in explainability.


13.
Liang Qi, Xiao Mengmeng, Song Dan. Multimedia Tools and Applications, 2021, 80(11): 16173-16184

The classification and retrieval of 3D models are widely used in multimedia and computer vision. With the rapid development of computer graphics, different algorithms corresponding to different representations of 3D models have achieved strong performance, and advances in deep learning have encouraged a variety of deep models for 3D feature representation. For the multi-view, point cloud, and PANORAMA-view representations, different models have shown significant performance on 3D shape classification. However, little work has considered fusing multi-modal information for 3D shape classification. We propose a novel multi-modal information fusion method for 3D shape classification that fully exploits the advantages of the different modalities to predict class labels. More specifically, the proposed method effectively fuses information from multiple modalities and is easy to apply in similar applications. We evaluated our framework on the popular ModelNet40 dataset for the 3D shape classification task. A series of experimental results and comparisons with state-of-the-art methods demonstrate the validity of our approach.
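One common way to fuse modalities, shown here only as a hedged illustration of the general idea (the paper's fusion scheme may differ), is late fusion: average the per-class scores produced by each modality's classifier and take the argmax.

```python
import numpy as np

# Minimal late-fusion sketch with invented per-modality scores: average the
# class-probability vectors from each modality's classifier, then argmax.

def late_fusion(score_lists, weights=None):
    scores = np.array(score_lists)               # (n_modalities, n_classes)
    w = np.ones(len(scores)) if weights is None else np.array(weights)
    fused = (w[:, None] * scores).sum(0) / w.sum()
    return int(np.argmax(fused)), fused

view_scores  = [0.6, 0.3, 0.1]   # hypothetical multi-view classifier output
cloud_scores = [0.3, 0.5, 0.2]   # hypothetical point-cloud classifier output
label, fused = late_fusion([view_scores, cloud_scores])
print(label, fused)  # 0, since (0.6 + 0.3) / 2 = 0.45 is the largest fused score
```

Per-modality weights let a stronger modality dominate; learned fusion layers generalize the same idea inside the network.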


14.
Yu Haiping, He Fazhi, Pan Yiteng. Multimedia Tools and Applications, 2020, 79(9-10): 5743-5765

Image segmentation plays an important role in computer vision. However, it is extremely challenging for images with low resolution, high noise, and blurry boundaries. Recently, region-based models have been widely used to segment such images, but existing models often rely on Gaussian filtering, which loses edge gradient information. Accordingly, this paper presents a novel local region model based on an adaptive bilateral filter for segmenting noisy images. Specifically, we first construct a range-based adaptive bilateral filter, which preserves edge structures while resisting noise. Second, we present a data-driven energy model that uses local information from regions centered at each pixel to approximate intensities inside and outside the circular contour; this estimation approach improves the accuracy of noisy image segmentation. Third, while keeping the original shape of the image, a regularization function is used to accelerate convergence and smooth the segmentation contour. Experimental results on both synthetic and real images demonstrate that the proposed model is more efficient and more robust to noise than state-of-the-art region-based models.
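The filtering idea can be seen in one dimension. The sketch below (a plain, non-adaptive bilateral filter with illustrative parameters, simpler than the paper's range-based adaptive version) combines spatial and intensity weights so that a noisy step edge is denoised without being blurred across the discontinuity.

```python
import numpy as np

# Toy 1-D bilateral filter: each output sample is a weighted mean of its
# window, with weights that fall off both with spatial distance and with
# intensity (range) difference, so averaging never crosses a strong edge.

def bilateral_1d(signal, radius=3, sigma_s=2.0, sigma_r=0.3):
    out = np.empty_like(signal)
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        window = signal[lo:hi]
        spatial = np.exp(-((np.arange(lo, hi) - i) ** 2) / (2 * sigma_s ** 2))
        range_w = np.exp(-((window - signal[i]) ** 2) / (2 * sigma_r ** 2))
        w = spatial * range_w
        out[i] = (w * window).sum() / w.sum()
    return out

gen = np.random.default_rng(0)
step = np.concatenate([np.zeros(20), np.ones(20)])
noisy = step + 0.05 * gen.standard_normal(40)
filtered = bilateral_1d(noisy)
# noise on each plateau is reduced while the edge at index 20 stays sharp
print(abs(filtered[10]), abs(filtered[30] - 1.0))
```

A Gaussian filter with the same spatial width would smear the step over several samples, which is precisely the edge-gradient loss the abstract attributes to Gaussian pre-filtering.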


15.

Sensory processing relies on efficient computation driven by a combination of low-level unsupervised, statistical structural learning and high-level task-dependent learning. In the earliest stages of sensory processing, sparse and independent coding strategies can model neural processing using the same coding strategy with only a change in the input (e.g., grayscale images, color images, and audio). We present a consolidated review of Independent Component Analysis (ICA) as an efficient neural coding scheme capable of modeling early visual and auditory neural processing. We created a self-contained, accessible Jupyter notebook in Python to demonstrate the efficient coding principle for different modalities following a consistent five-step strategy. For each modality, receptive field models derived from natural and non-natural inputs are contrasted, demonstrating that neural codes do not emerge when the inputs deviate sufficiently from those the animals evolved to process. Additionally, the demonstration shows that ICA produces more neurally appropriate receptive field models than those based on common compression strategies, such as Principal Component Analysis. The five-step strategy not only produces neural-like models but also promotes code reuse, emphasizing its input-agnostic nature: each modality can be modeled with only a change in inputs. The notebook can be used to readily observe the links between unsupervised machine learning strategies and early sensory neuroscience, improving our understanding of flexible, data-driven neural development in nature and future applications.
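A compact version of the demonstration, assuming a basic deflationary FastICA with a tanh nonlinearity (the notebook itself may well use library routines such as scikit-learn's FastICA): two independent sources are mixed, whitened, and then recovered one component at a time.

```python
import numpy as np

# Hedged sketch of deflationary FastICA (tanh contrast) on synthetic data:
# a sine and a square wave are mixed, whitened, and unmixed.

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.vstack([np.sin(3 * t), np.sign(np.sin(5 * t))])   # independent sources
X = np.array([[1.0, 0.5], [0.5, 1.0]]) @ S               # observed mixtures

# Whiten: zero mean, identity covariance.
Xc = X - X.mean(1, keepdims=True)
d, E = np.linalg.eigh(np.cov(Xc))
Z = E @ np.diag(d ** -0.5) @ E.T @ Xc

W = np.zeros((2, 2))
for i in range(2):
    w = rng.standard_normal(2)
    for _ in range(200):                                  # fixed-point updates
        wx = w @ Z
        w_new = (Z * np.tanh(wx)).mean(1) - (1 - np.tanh(wx) ** 2).mean() * w
        w_new -= W[:i].T @ (W[:i] @ w_new)                # deflate found components
        w = w_new / np.linalg.norm(w_new)
    W[i] = w
Y = W @ Z                                                # recovered sources

# Each recovered component should correlate strongly with one source
# (sign and order are arbitrary, hence the absolute value).
corr = np.abs(np.corrcoef(np.vstack([S, Y]))[:2, 2:])
print(corr.max(0))
```

Swapping the sine/square inputs for image patches and keeping the same unmixing loop is the input-agnostic reuse the five-step strategy emphasizes.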


16.
Yokoi Soma, Otsuka Takuma, Sato Issei. Machine Learning, 2020, 109(9-10): 1903-1923

Stochastic gradient Langevin dynamics (SGLD) is a computationally efficient sampler for Bayesian posterior inference given a large-scale dataset and a complex model. Although SGLD is designed for unbounded random variables, practical models often involve variables restricted to a bounded domain, such as non-negative values or a finite interval. Variable transformation is a typical way to handle such bounded variables. This paper reveals, from both theoretical and empirical perspectives, that several mapping approaches commonly used in the literature produce erroneous samples. We show that changing the random variable in the discretization using an invertible Lipschitz mapping function overcomes this pitfall and attains weak convergence, whereas the other methods are numerically unstable or cannot be justified theoretically. Experiments demonstrate the method's efficacy for widely used models with bounded latent variables, including Bayesian non-negative matrix factorization and binary neural networks.
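The transformation idea can be illustrated on a single bounded variable. The sketch below (our toy example, with full-batch gradients standing in for SGLD's stochastic ones) samples a non-negative variable by running Langevin updates on theta = log(x) with the Jacobian-corrected log-density.

```python
import numpy as np

# Hedged illustration: Langevin sampling of x > 0 via the transform
# theta = log(x). Target: Gamma(2, 1), p(x) ∝ x^(a-1) e^(-x) with a = 2.
# With x = exp(theta), log p(theta) = a*theta - exp(theta) + const.

def grad_log_p_theta(theta, a=2.0):
    # Gradient of the Jacobian-corrected log-density in theta-space.
    return a - np.exp(theta)

rng = np.random.default_rng(1)
eps, theta, xs = 0.01, 0.0, []
for step in range(50000):
    theta += (0.5 * eps * grad_log_p_theta(theta)
              + np.sqrt(eps) * rng.standard_normal())
    if step > 5000:                       # discard burn-in
        xs.append(np.exp(theta))
print(np.mean(xs))  # ≈ 2.0, the Gamma(2, 1) mean
```

The chain never needs to reject or clip negative proposals, because the bounded variable is only ever reconstructed through the invertible map.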


17.
ABSTRACT

In this article, I argue that increased mobility, a continued emphasis on business process management, expanded options for sourcing enterprise system software, and IS maturity models are trends that will require new capabilities and skills from tomorrow's IS organization.

18.

In this study, the capacity of artificial neural networks (ANNs) and genetic programming (GP) to make fast and reliable predictions of the equilibrium compositions of alkane binary mixtures is investigated. A data set comprising 847 data points was gathered and used both to train the proposed ANN and to generate the closed-form expressions of the GP procedure. The results demonstrate the relative precision of the proposed ANN while showing that the GP model, although less precise, affords high CPU-time efficiency and simplicity. Concisely, the proposed models can serve as close first estimates for more thermodynamically rigorous vapor–liquid equilibrium calculation procedures and obviate the need for a large set of experimental binary interaction coefficients. Mean absolute errors of 0.0100 and 0.0404 for liquid compositions, and of 0.0054 and 0.0254 for vapor-phase mole fractions, for the proposed ANN and GP models respectively, attest to the reliability of the proposed models.


19.
Abstract

There are two main approaches to improving the effectiveness of database interfaces. One is to raise the level of abstraction of the user-database interaction. The relational model, belonging to the logical level, has replaced the hierarchical and network models that belong to the lower physical level, and it is likely that the relational model will eventually be replaced by models belonging to the even higher conceptual level, such as entity-relationship models and object-oriented models. The second approach is to enhance the interaction process itself. This can be done by providing better feedback to the user, for example in the form of more comprehensible error messages or a natural-language interpretation of the user's query. Such a feedback system was developed, and its effectiveness was tested in an experiment. The results showed that the feedback system greatly enhanced user performance: users who used it were 12.9% more accurate than those without it, were 41.2% more confident of their answers, and took 29.0% less time.

20.

We demonstrate refinement-based formal development of the hybrid, ‘fixed virtual block’ approach to train movement control for the emerging European Rail Traffic Management System (ERTMS) level 3. Our approach uses iUML-B diagrams as a front end to the Event-B modelling language. We use abstraction to verify the principle of movement authority before gradually developing the details of the Virtual Block Detector component in subsequent refinements, thus verifying that it preserves the safety properties. We animate the refined models to demonstrate their validity using the scenarios from the Hybrid ERTMS Level 3 (HLIII) specification. We reflect on our team-based approach to finding useful modelling abstractions and demonstrate a systematic modelling method based on the state and class diagrams of iUML-B. The component and control flow architectures of the application, its environment and interacting systems emerge through the layered refinement process. The runtime semantics of the specification’s state-machine behaviour are modelled in the final refinements. We discuss how the model could be used to generate an implementation using code generation tools and techniques.



Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号