Similar Articles
20 similar articles retrieved (search time: 15 ms)
1.
Rich Internet Applications (RIAs) have become a common platform for Web development. Their adoption has been accelerated by several factors, among them the appearance of patterns for typical RIA behaviors and the extension of various Model-Driven Web Engineering methodologies to incorporate RIA concepts. More and more developers are switching to RIA technologies, and the modernization of legacy Web applications into RIAs has therefore become a trending topic. However, this modernization process lacks a systematic approach: it is currently done in an ad hoc manner, which is expensive and error-prone. This work presents a systematic process for modernizing legacy Web applications into RIAs. The process is based on traceability matrices that relate modernization requirements, RIA features, and patterns. By performing operations on these matrices, the analyst obtains the information needed to judge the suitability of a pattern, or set of patterns, for addressing a given requirement. This work also introduces two measures, the degree of requirement realization and the degree of pattern realization, which are used to guide pattern selection. Finally, the applicability of the approach is evaluated on several Web systems.
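The matrix operations can be sketched as follows. This is only an illustration of the idea, not the paper's definitions: the matrices, their entries, and the coverage formula below are made up for the example.

```python
# Sketch: toy traceability matrices relating requirements to RIA features
# and features to patterns (1 = the row item involves the column item).
# Hypothetical data: 2 requirements x 3 features, 3 features x 2 patterns.
req_feat = [
    [1, 1, 0],   # R1 needs features F1, F2
    [0, 1, 1],   # R2 needs features F2, F3
]
feat_pat = [
    [1, 0],      # F1 is provided by pattern P1
    [1, 1],      # F2 is provided by P1 and P2
    [0, 1],      # F3 is provided by P2
]

def realization_degrees(req_feat, feat_pat):
    """For each (requirement, pattern) pair, compute the fraction of the
    requirement's features that the pattern covers -- a stand-in for the
    paper's 'degree of requirement realization' measure."""
    degrees = []
    for r_row in req_feat:
        needed = [i for i, v in enumerate(r_row) if v]
        row = []
        for p in range(len(feat_pat[0])):
            covered = sum(1 for f in needed if feat_pat[f][p])
            row.append(covered / len(needed) if needed else 0.0)
        degrees.append(row)
    return degrees

print(realization_degrees(req_feat, feat_pat))
```

With these toy matrices, pattern P1 fully realizes R1 and half-realizes R2, and symmetrically for P2, which is the kind of information the analyst would use to select patterns.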

2.
Forward stepwise regression analysis selects critical attributes using the same data set throughout. Regression analysis, however, cannot split the data to construct piecewise regression models. Regression trees are known to be an effective data-mining tool for constructing piecewise models by iteratively splitting the data set and selecting attributes into a hierarchical tree model. However, the sample size shrinks sharply after a few levels of data splitting, making attribute selection unreliable. In this research, we propose a method to effectively construct a piecewise regression model by extending the sample-efficient regression tree (SERT) approach, which combines forward selection in regression analysis with regression-tree methodology. The proposed method attempts to maximize the use of the data set's degrees of freedom while attaining unbiased model estimates. Hypothetical and actual semiconductor yield-analysis cases illustrate the method and its effective search for the critical factors in the data set's underlying model.
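The tree-splitting step can be illustrated in isolation. This is a minimal sketch of the splitting idea only, with made-up data; SERT itself interleaves such splits with forward stepwise selection, which is not reproduced here.

```python
def best_split(x, y):
    """Pick the threshold on a single attribute that minimizes the total
    squared error of two piecewise-constant fits -- the basic move a
    regression tree makes when it splits the data."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)
    pairs = sorted(zip(x, y))
    best_err, best_thr = float("inf"), None
    for i in range(1, len(pairs)):
        left = [v for _, v in pairs[:i]]
        right = [v for _, v in pairs[i:]]
        err = sse(left) + sse(right)
        if err < best_err:
            best_err = err
            best_thr = (pairs[i - 1][0] + pairs[i][0]) / 2
    return best_thr

# Toy piecewise data: y jumps from about 0 to about 10 near x = 5
x = [1, 2, 3, 4, 6, 7, 8, 9]
y = [0.1, 0.0, 0.2, 0.1, 10.2, 9.9, 10.1, 10.0]
print(best_split(x, y))  # → 5.0
```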

3.
4.
We introduce a generic problem component that captures the most common, difficult kernel of many problems. This kernel involves general prefix computations (GPC). A lower bound of Ω(n log n) time is established for GPC, and we give optimal solutions on the sequential model in O(n log n) time, on the CREW PRAM model in O(log n) time, on the BSR (broadcasting with selective reduction) model in constant time, and on mesh-connected computers in O(n) time, all with n processors, plus an O(log² n)-time solution on the hypercube model. We show that GPC techniques can be applied to a wide variety of geometric (point-set and tree) problems, including triangulation of point sets, two-set dominance counting, ECDF searching, finding two- and three-dimensional maximal points, reconstruction of trees from their traversals, counting inversions in a permutation, and matching parentheses. Work partially supported by NSF IRI/8709726 and by NSERC.
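One of the listed applications, matching parentheses, reduces to a prefix computation: scan +1/-1 depth values and take running sums. This is only an illustration of the prefix-scan idea on one example; the paper's GPC framework is considerably more general.

```python
from itertools import accumulate

# Parenthesis matching as a prefix computation: map '(' to +1 and ')' to -1,
# then take prefix sums to get the nesting depth after each character.
s = "(()(()))"
depths = list(accumulate(1 if c == "(" else -1 for c in s))
print(depths)  # running nesting depth after each character

# The string is balanced iff the final depth is 0 and no prefix dips below 0.
balanced = depths[-1] == 0 and min(depths) >= 0
print(balanced)
```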

5.
Algorithmic DNA self-assembly is capable of forming complex patterns and shapes, as has been shown both theoretically and experimentally. Experimental demonstrations, although improving in recent years, have been limited by significant assembly errors. Since 2003 there have been several designs of error-resilient tile sets, but all of these existing error-resilient tile systems assume directional growth of the tiling assembly. This is a very strong assumption, because experiments show that tile self-assembly does not necessarily behave in such a fashion: tiles may also attach in the reverse of the intended direction. The assumption of directional growth also underlies the growth model in theoretical assembly models such as the TAM. What is needed is a means to enforce this directionality constraint, which would allow assembly errors to be reduced. In this paper we describe a protection/deprotection strategy that strictly enforces the direction of tiling-assembly growth, making the assembly process robust against errors. Initially, we start with (1) a single “activated” tile whose output pads can bind with other tiles, along with (2) a set of “deactivated” tiles, whose output pads are protected and cannot bind with other tiles. After other tiles bind to a “deactivated” tile’s input pads, the tile transitions to an active state and its output pads are exposed, allowing further growth. When tiles are activated in the desired order, directional assembly is enforced at the same scale as the original one. Such a system can be built with minimal modifications to existing DNA tile nanostructures. We propose a new type of tile, the activatable tile, and describe its role in compact proofreading. Activatable tiles can be thought of as a particular case of the more recent signal tile assembly model, in which signals transmit binding/unbinding instructions across tiles upon binding to one or more input sites.
We describe abstract and kinetic models of activatable tile assembly and show that the error rate can be decreased significantly with respect to Winfree’s original kinetic tile assembly model without a considerable decrease in assembly growth speed. We prove that an activatable tile set is an instance of a compact, error-resilient, and self-healing tile set. We describe a DNA design of activatable tiles and a mechanism of deprotection using DNA polymerization and strand displacement. We also perform detailed stepwise simulations using the DNA tile simulator Xgrow and show that the activatable-tiles mechanism can reduce error rates in self-assembly. We conclude with a brief discussion of applications of activatable tiles beyond computational tiling, both as (1) a novel system for concentration of molecules and (2) a catalyst in sequentially triggered chemical reactions.
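The protection/deprotection idea can be caricatured in a few lines. This is a toy illustration of the directionality mechanism only (a 1-D row of tiles, no kinetics); the attachment rule below is an assumption for the example, not the paper's model.

```python
import random

def grow(n, seed=0):
    """Tiles attempt to attach in random order, but a 'deactivated' tile
    can bind only when its predecessor is already active (i.e. that
    predecessor's output pads have been deprotected), so the assembly
    still ends up strictly directional."""
    rng = random.Random(seed)
    active = {0}                      # the single pre-activated seed tile
    attach_order = [0]
    pending = list(range(1, n))
    while pending:
        i = rng.choice(pending)       # a random tile tries to attach...
        if i - 1 in active:           # ...and succeeds only in order
            active.add(i)
            attach_order.append(i)
            pending.remove(i)
    return attach_order

print(grow(6))  # → [0, 1, 2, 3, 4, 5]
```

Even though attachment attempts arrive in random order, out-of-order attempts are rejected, so the recorded attachment sequence is always left-to-right.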

6.
Nowadays, SMS is a very popular communication channel for numerous value-added services (VAS) and business and commercial applications, so the security of SMS is a critical aspect of such applications. Researchers have recently proposed approaches that provide end-to-end security for SMS during its transmission over the network. Many SMS-based frameworks and protocols, such as Marko's SMS framework, Songyang's SMS framework, Alfredo's SMS framework, the SSMS protocol, and Marko and Konstantin's protocol, have been proposed in this direction, but these frameworks/protocols do not justify themselves in terms of security analysis, communication and computation overheads, prevention of various threats and attacks, and bandwidth utilization. Two further protocols, SMSSec and PK-SIM, also provide end-to-end security and fare somewhat better under security analysis than the frameworks/protocols mentioned above. In this paper, we propose a new secure and efficient protocol called SecureSMS, which incurs lower communication and computation overheads. We also discuss possible threats and attacks and provide justified prevention against them. The proposed protocol also outperforms the above two protocols in bandwidth utilization: on average, SecureSMS reduces the total bandwidth used in the authentication process by 71% and 59% compared with the SMSSec and PK-SIM protocols, respectively. The paper additionally proposes a scheme to store and run the cryptographic algorithms on the SIM card. The proposed scheme provides end-to-end SMS security with authentication (via the SecureSMS protocol), confidentiality (AES/Blowfish encryption; AES-CTR preferred), integrity (SHA-1/MD5; SHA-1 preferred), and non-repudiation (ECDSA/DSA; ECDSA preferred).
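The integrity piece can be sketched with standard-library primitives. This illustrates message integrity with SHA-1 only; the key and message are made up, and SecureSMS itself additionally covers authentication, AES/Blowfish confidentiality, and ECDSA non-repudiation, none of which is shown here.

```python
import hashlib
import hmac

# Hypothetical shared key and message, for illustration only.
key = b"shared-sim-key"
sms = b"Transfer 100 to account 42"

# Sender attaches an HMAC-SHA1 tag to the SMS.
tag = hmac.new(key, sms, hashlib.sha1).hexdigest()

def verify(key, msg, tag):
    """Receiver recomputes the tag and compares in constant time."""
    expected = hmac.new(key, msg, hashlib.sha1).hexdigest()
    return hmac.compare_digest(expected, tag)

print(verify(key, sms, tag))                             # unmodified: True
print(verify(key, b"Transfer 900 to account 42", tag))   # tampered: False
```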

7.
Applications of the random recursive partitioning (RRP) method are described. The method generates a proximity matrix that can be used in non-parametric matching problems such as hot-deck missing-data imputation and average-treatment-effect estimation. RRP is a Monte Carlo procedure that randomly generates non-empty recursive partitions of the data and estimates the proximity between two observations as the empirical frequency with which they fall in the same cell of these random partitions over all replications. The method is invariant under monotonic transformations of the data, even in the presence of missing values, but no other formal properties of the method are known yet; Monte Carlo experiments were therefore conducted to explore its performance. Companion software is available as a package for the R statistical environment.
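The proximity computation can be sketched as follows. This is a deliberately simplified version, assuming one random cut per replication rather than a full recursive partition, and the data are made up; it only shows how same-cell frequencies yield a proximity matrix.

```python
import random

def rrp_proximity(data, replications=200, seed=0):
    """Toy RRP-style proximity: on each replication, cut a randomly chosen
    variable at a random threshold and count how often each pair of
    observations lands in the same cell."""
    rng = random.Random(seed)
    n = len(data)
    counts = [[0] * n for _ in range(n)]
    for _ in range(replications):
        j = rng.randrange(len(data[0]))            # random variable
        vals = [row[j] for row in data]
        cut = rng.uniform(min(vals), max(vals))    # random threshold
        cell = [v <= cut for v in vals]
        for a in range(n):
            for b in range(n):
                if cell[a] == cell[b]:
                    counts[a][b] += 1
    return [[c / replications for c in row] for row in counts]

# Two nearby observations and one distant one.
data = [[0.0, 0.1], [0.1, 0.0], [5.0, 5.1]]
prox = rrp_proximity(data)
print(prox[0][1], prox[0][2])  # the close pair is far more proximate
```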

8.
9.
This paper reports an application of the Kolmogorov–Smirnov test for detecting changes in the distribution of a sequence of measurements. The ‘probability chart’ of the title gave a value reflecting whether the distribution was constant in the neighbourhood of each observation. The chart had the advantage that the threshold for detecting a change was a dimensionless probability value, and the non-parametric nature of the test made it suitable for measurements sampled from a non-Gaussian distribution.
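The underlying statistic can be computed directly. This sketch implements the plain two-sample Kolmogorov–Smirnov statistic on made-up data; the paper's chart goes further, converting such values into probabilities tracked along the sequence.

```python
def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the largest gap between
    the two empirical CDFs, evaluated over all observed values."""
    grid = sorted(set(a) | set(b))
    d = 0.0
    for x in grid:
        fa = sum(v <= x for v in a) / len(a)
        fb = sum(v <= x for v in b) / len(b)
        d = max(d, abs(fa - fb))
    return d

before = [1, 2, 3, 4, 5]
after = [11, 12, 13, 14, 15]          # clear distribution shift
print(ks_statistic(before, after))    # → 1.0 (maximal separation)
print(ks_statistic(before, before))   # → 0.0 (identical samples)
```

Because the statistic depends only on the empirical CDFs, no Gaussian assumption is needed, which is exactly the property the paper exploits.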

10.
Three kinds of quantum optimization are introduced in this paper: quantum minimization (QM), neuromorphic quantum-based optimization (NQO), and logarithmic search with quantum existence testing (LSQET). To compare their ability to train adaptive support vector regression, performance is evaluated on the basis of forecasting complex time series in two real-world experiments. The model used for this complex time-series prediction combines BPNN-Weighted Grey-C3LSP (BWGC) with nonlinear generalized autoregressive conditional heteroscedasticity (NGARCH), tuned by quantum-optimized adaptive support vector regression. Finally, based on the predictive accuracy of the time-series forecasts and the cost in computational complexity, concluding remarks compare and discuss these quantum optimizations.

11.
12.
13.
Hierarchical task analysis: developments, applications, and extensions
Hierarchical task analysis (HTA) is a core ergonomics approach with a pedigree of over 30 years of continuous use. At its heart, HTA is based upon a theory of performance and has only three governing principles. Although originally developed as a means of determining training requirements, HTA has succeeded to an extent its initial pioneers could not have foreseen. It has endured as a way of representing a system's sub-goal hierarchy for extended analysis and has been used for a range of applications, including interface design and evaluation, allocation of function, job-aid design, error prediction, and workload assessment. Ergonomists are still developing new ways of using HTA, which assures the continued use of the approach for the foreseeable future.
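A sub-goal hierarchy of the kind HTA produces can be represented as nested data. The task, goals, and plan below are a made-up textbook-style example, not from the paper; HTA proper also attaches a plan to every non-terminal goal, which is only hinted at here.

```python
# A tiny HTA-style sub-goal hierarchy: each goal has an optional plan
# (when/in what order its sub-goals are carried out) and sub-goals.
hta = {
    "0. Make a phone call": {
        "plan": "do 1, then 2, then 3",
        "subgoals": {
            "1. Look up number": {},
            "2. Dial number": {},
            "3. Speak to contact": {},
        },
    }
}

def leaf_goals(nodes):
    """Collect the bottom-level operations of the hierarchy."""
    out = []
    for name, node in nodes.items():
        kids = node.get("subgoals", {})
        out.extend(leaf_goals(kids) if kids else [name])
    return out

print(leaf_goals(hta))  # the operations an analyst would examine further
```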

14.
Variable selection based on regression coefficients for near-infrared spectral analysis
A stepwise variable selection method based on regression coefficients is proposed. After computing the regression coefficient for each variable in the spectrum, the variables are ranked by the absolute value of their coefficients, and PLS cross-validation with forward selection is used to choose the optimal variable subset step by step. The method was applied to near-infrared spectral data of corn and diesel: 14, 12, and 30 optimal variables were selected for modeling corn protein content, diesel cetane number, and diesel viscosity, respectively, and in every case the resulting predictions were better than those obtained from modeling with the full spectrum. The method is thus an effective and practical approach to variable selection for near-infrared spectroscopy.
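The ranking step can be illustrated in isolation. As a simplification, plain least-squares slopes on toy data stand in for the PLS regression coefficients, and the forward-selection/cross-validation stage is omitted; all data below are made up.

```python
def slope(x, y):
    """Simple least-squares slope of y on a single variable x,
    used here as a stand-in for a PLS regression coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Toy "spectrum" with 3 variables; only the first two track the response y.
X = [[1, 10, 3], [2, 8, 1], [3, 6, 4], [4, 4, 1], [5, 2, 5]]
y = [1.0, 2.1, 2.9, 4.2, 5.0]

cols = list(zip(*X))
ranked = sorted(range(len(cols)),
                key=lambda j: abs(slope(cols[j], y)), reverse=True)
print(ranked)  # variable indices ordered by |coefficient|, largest first
```

The selection stage would then try prefixes of this ranking, keeping the subset with the best cross-validated prediction error.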

15.
According to Lindley’s paradox, most point null hypotheses will be rejected when the sample size is very large. In this paper, a two-stage block testing procedure is proposed for regression analysis of massive data. New variable selection criteria, incorporated into the classical stepwise procedure, are also developed to select significant explanatory variables. Our approach is not only computationally simple for massive data; the simulation study also confirms that it is more accurate in the sense of achieving the nominal significance level for huge data sets. A real example with a moderate sample size verifies that the proposed procedure is accurate compared with the classical method, and selection of appropriate regressors is also demonstrated on a huge real data set.
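The motivating paradox is easy to reproduce numerically. This sketch is an illustration of the problem, not of the authors' two-stage procedure: with toy data, a practically negligible mean shift still produces an enormous z score once the sample is large enough.

```python
import random

# Lindley's-paradox illustration: a mean shift of only 0.01 (sigma = 1,
# known) is detected as "highly significant" purely because n is huge.
rng = random.Random(0)
n = 1_000_000
xs = [rng.gauss(0.01, 1.0) for _ in range(n)]

mean = sum(xs) / n
z = mean / (1.0 / n ** 0.5)   # z = mean / (sigma / sqrt(n)) with sigma = 1

print(round(z, 1))            # far beyond the 1.96 cutoff despite a tiny effect
```

Block-based procedures such as the one proposed here test at a scale where the nominal significance level remains meaningful instead of pooling all n observations into one test.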

16.

17.
18.
In this paper, a bibliographical review of the last decade is presented on the application of Bayesian networks to dependability, risk analysis, and maintenance. An increasing trend in the literature related to these domains is shown. This trend is due to the benefits that Bayesian networks offer over classical dependability-analysis methods such as Markov chains, fault trees, and Petri nets: the capability to model complex systems, to make predictions as well as diagnoses, to compute exactly the occurrence probability of an event, to update calculations as evidence arrives, to represent multimodal variables, and to support user-friendly modeling through a graphical, compact approach. The review is based on 200 specific references in dependability, risk-analysis, and maintenance applications, extracted from a database of 7000 Bayesian-network references. The most representative works are presented and discussed, and some perspectives for future work are provided.
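The prediction/diagnosis benefit the review highlights can be shown on the smallest possible network. The two-node structure and all probabilities below are made up for illustration; real dependability models are far larger, but the inference pattern is the same.

```python
# A two-node dependability BN: Component failure -> System failure.
# Hypothetical probabilities, chosen only for the example.
p_comp_fail = 0.1
p_sys_fail_given = {True: 0.9, False: 0.05}   # P(system fails | comp state)

# Prediction: marginal probability that the system fails,
# by exact enumeration over the component's two states.
p_sys = sum(p_sys_fail_given[c] * (p_comp_fail if c else 1 - p_comp_fail)
            for c in (True, False))

# Diagnosis: probability the component failed given the system failed,
# by Bayes' rule (updating on evidence).
p_diag = p_sys_fail_given[True] * p_comp_fail / p_sys

print(round(p_sys, 3))    # → 0.135
print(round(p_diag, 3))   # → 0.667
```

The same network thus answers both the predictive question ("how likely is a system failure?") and the diagnostic one ("given a failure, which component is to blame?").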

19.
The energy of a network of strings subject to feedback at a root decreases to zero if the network is non-degenerate, and this decrease is never exponential. In this paper we study the degenerate case (i.e. the ratio of the lengths of two strings is a rational number) with feedback at the root of the network. Under this assumption, we determine the limit of the energy and identify the best rate of decay with the spectral abscissa of the underlying semigroup generator, which we express in terms of the largest modulus of the roots of a polynomial.

20.