Similar Literature
20 similar documents found (search time: 734 ms)
1.
This paper is an investigation into the performance of E-commerce applications. E-commerce has become one of the most popular applications of the web, as a large population of web users now benefits from various online services including product search, product purchase and product comparison. E-commerce provides users with 24/7 shopping facilities. The consequence of these benefits, however, is excessive load on E-commerce web servers and performance degradation of the E-commerce (eCom) requests they process. This paper addresses this issue and proposes a class-based priority scheme which classifies eCom requests into high- and low-priority requests. In E-commerce, some requests (e.g. payment) are generally considered more important than others (e.g. search or browse). We believe that by assigning class-based priorities at multiple service levels, E-commerce web servers can perform better and can improve the performance of high-priority eCom requests. In this paper, we formally specify and implement the proposed scheme and evaluate its performance using multiple servers. Experimental results demonstrate that the proposed scheme significantly improves the performance of high-priority eCom requests.
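As a rough sketch of the class-based idea (not the authors' multi-level implementation), the snippet below serves requests from a priority queue in which payment-type requests outrank browse/search-type requests; the request-type names and payloads are hypothetical.

```python
import heapq
import itertools

# Hypothetical request classes: payment-type requests get high priority,
# browse/search-type requests get low priority.
HIGH, LOW = 0, 1
PRIORITY = {"payment": HIGH, "checkout": HIGH, "search": LOW, "browse": LOW}

class ClassBasedQueue:
    """Serve high-priority eCom requests before low-priority ones."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # FIFO tie-break within a class

    def submit(self, request_type, payload):
        prio = PRIORITY.get(request_type, LOW)
        heapq.heappush(self._heap, (prio, next(self._seq), request_type, payload))

    def next_request(self):
        prio, _, request_type, payload = heapq.heappop(self._heap)
        return request_type, payload

q = ClassBasedQueue()
q.submit("browse", {"page": 1})
q.submit("payment", {"order": 42})
print(q.next_request())   # ('payment', {'order': 42}) is served first
```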

2.
The evolution of the web has outpaced itself: A growing wealth of information and increasingly sophisticated interfaces necessitate automated processing, yet existing automation and data extraction technologies have been overwhelmed by this very growth. To address this trend, we identify four key requirements for web data extraction, automation, and (focused) web crawling: (1) interact with sophisticated web application interfaces, (2) precisely capture the relevant data to be extracted, (3) scale with the number of visited pages, and (4) readily embed into existing web technologies. We introduce OXPath as an extension of XPath for interacting with web applications and extracting data thus revealed, matching all the above requirements. OXPath's page-at-a-time evaluation guarantees memory use independent of the number of visited pages, yet remains polynomial in time. We experimentally validate the theoretical complexity and demonstrate that OXPath's resource consumption is dominated by page rendering in the underlying browser. With an extensive study of sublanguages and properties of OXPath, we pinpoint the effect of specific features on evaluation performance. Our experiments show that OXPath outperforms existing commercial and academic data extraction tools by a wide margin.
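The snippet below is not OXPath; it is a plain-XPath sketch (using lxml, assumed installed) of the page-at-a-time property described above: each page is parsed, its matches are yielded, and the DOM is discarded, so memory does not grow with the number of visited pages. The toy pages and XPath expression are hypothetical.

```python
from lxml import etree  # assumes lxml is installed; OXPath itself is not used here

# Toy stand-ins for pages of a paginated web source.
pages = [
    "<html><body><div class='item'>A</div><div class='item'>B</div></body></html>",
    "<html><body><div class='item'>C</div></body></html>",
]

def extract(page_stream, xpath="//div[@class='item']/text()"):
    """Page-at-a-time evaluation: parse one page, yield its matches,
    then drop the DOM so memory does not grow with the number of pages."""
    for html in page_stream:
        tree = etree.HTML(html)
        for value in tree.xpath(xpath):
            yield value
        # 'tree' goes out of scope here and can be garbage-collected.

print(list(extract(pages)))  # ['A', 'B', 'C']
```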

3.
4.
Despite a large body of work on XPath query processing in relational environments, queries containing not-predicates have received little systematic attention in the literature. In particular, the XML support of several industrial-strength commercial RDBMS fails to evaluate such queries efficiently. In this paper, we present an efficient and novel strategy to evaluate not-twig queries in a tree-unaware relational environment. Not-twig queries are XPath queries with ancestor–descendant and parent–child axes that contain one or more not-predicates. We propose a novel Dewey-based encoding scheme called Andes (ANcestor Dewey-based Encoding Scheme), which enables us to efficiently filter out elements satisfying a not-predicate by comparing their ancestor group identifiers. In this approach, the set of elements under the same common ancestor at a specific level in the XML tree is assigned the same ancestor group identifier. Based on this scheme, we propose a novel SQL translation algorithm for not-twig query evaluation. Experiments confirm that our approach, built on top of an off-the-shelf commercial RDBMS, significantly outperforms state-of-the-art relational and native approaches. We also explore the query plans selected by a commercial relational optimizer to evaluate our translated queries for different input cardinalities. This exploration further validates the performance benefits of Andes.
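To illustrate only the grouping idea (a much-simplified sketch, not Andes itself), the toy code below labels elements with Dewey paths and filters by ancestor group identifier; the element names and the example not-predicate are hypothetical.

```python
# Hypothetical Dewey labels for an XML tree (root = "1"); Andes encodes more
# information than this, the sketch only shows the ancestor-group idea.
elements = {
    "1.1.1": "title", "1.1.2": "author",
    "1.2.1": "title",                      # second book has no <author>
}

def ancestor_group(dewey, level):
    """All elements under the same ancestor at `level` share this identifier."""
    return ".".join(dewey.split(".")[:level])

# //book[not(author)]/title  ~  keep titles whose level-2 group has no author
groups_with_author = {ancestor_group(d, 2) for d, tag in elements.items() if tag == "author"}
titles = [d for d, tag in elements.items()
          if tag == "title" and ancestor_group(d, 2) not in groups_with_author]
print(titles)  # ['1.2.1'] -- the title of the book without an author
```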

5.
Noise filtering is most frequently used in data preprocessing to improve the accuracy of induced classifiers. The focus of this work is different: we aim at detecting noisy instances for improved data understanding, data cleaning and outlier identification. The paper is composed of three parts. The first part presents an ensemble-based noise ranking methodology for explicit noise and outlier identification, named NoiseRank, which was successfully applied to a real-life medical problem, as demonstrated by domain-expert evaluation. The second part is concerned with quantitative performance evaluation of noise detection algorithms on data with randomly injected noise. A methodology for visual performance evaluation of noise detection algorithms in the precision-recall space, named Viper, is presented and compared to standard evaluation practice. The third part presents the implementation of the NoiseRank and Viper methodologies in a web-based platform for the composition and execution of data mining workflows. This implementation makes the developed approaches publicly accessible, allows the presented experiments to be repeated and shared, and includes web services through which new noise detection algorithms can be incorporated into the proposed noise detection and performance evaluation workflows.
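A minimal ensemble-based noise-ranking sketch in the spirit described above (not the NoiseRank implementation): instances misclassified by more members of a small, diverse ensemble receive a higher noise rank. The choice of classifiers and dataset is illustrative and assumes scikit-learn is installed.

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_predict
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
ensemble = [GaussianNB(), KNeighborsClassifier(), DecisionTreeClassifier(random_state=0)]

votes = np.zeros(len(y))
for clf in ensemble:
    pred = cross_val_predict(clf, X, y, cv=10)   # out-of-fold predictions
    votes += (pred != y)                          # one "noise vote" per misclassification

ranking = np.argsort(-votes)                      # most-voted instances first
print(ranking[:5], votes[ranking[:5]])            # candidate noisy instances / outliers
```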

6.
Advance reservation is important to guarantee the quality of service of jobs by allowing exclusive access to resources over a defined time interval. It is a challenge for the scheduler to organize available resources efficiently and to allocate them appropriately to parallel advance reservation jobs with deadline constraints. This paper provides a slot-based data structure that organizes the available resources of multiprocessor systems in a way that enables efficient search and update operations, and formulates a suite of scheduling policies to allocate resources to dynamically arriving advance reservation requests. The performance of the scheduling algorithms was investigated by simulations with different job sizes and durations, system loads, and scheduling flexibilities. Simulation results show that job sizes and durations, system load and scheduling flexibility all affect the performance metrics of the scheduling algorithms; the PE Worst Fit algorithm gives the scheduler the highest acceptance rate of advance reservation requests, while jobs scheduled with the First Fit algorithm experience the lowest average slowdown. The data structure and scheduling policies can be used to organize and allocate resources for parallel advance reservation jobs with deadline constraints in large-scale computing systems.
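A toy contrast between the two placement policies mentioned above (First Fit versus Worst Fit over free processing elements); the real slot-based structure also tracks time intervals and deadlines, which this sketch omits, and the slot contents are made up.

```python
free_slots = [4, 8, 2, 6]   # free processing elements per candidate slot (hypothetical)

def first_fit(slots, request):
    """Return the first slot with enough free PEs, or None if rejected."""
    for i, free in enumerate(slots):
        if free >= request:
            return i
    return None

def worst_fit(slots, request):
    """Return the feasible slot with the most free PEs, or None if rejected."""
    candidates = [i for i, free in enumerate(slots) if free >= request]
    return max(candidates, key=lambda i: slots[i], default=None)

req = 3
print(first_fit(free_slots, req))    # 0 -> earliest slot that fits
print(worst_fit(free_slots, req))    # 1 -> slot leaving the most headroom
```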

7.
Onion routing is a privacy-enabling protocol that allows users to establish anonymous channels over a public network. In such a protocol, parties send their messages through $n$ anonymizing servers (called a circuit) using several layers of encryption. Several proposals for onion routing have been published in recent years, and TOR, a real-life implementation, provides an onion routing service to thousands of users over the Internet. This paper puts forward a new onion routing protocol which outperforms TOR by achieving forward secrecy in a fully non-interactive fashion, without requiring any communication between the routers and/or the users and the service provider to update time-related keys. We compare this to TOR, which requires $O(n^2)$ rounds of interaction to establish a circuit of size $n$. In terms of the computational effort required of the parties, our protocol is comparable to TOR, but the network latency associated with TOR's high round complexity ends up dominating the running time. Compared to other recently proposed alternatives to TOR, such as the PB-OR (PETS 2007) and CL-OR (CCS 2009) protocols, our scheme still has the advantage of being non-interactive (both PB-OR and CL-OR require some interaction to update time-sensitive information) and achieves similar computational performance. We performed implementation and simulation tests that confirm our theoretical analysis. Additionally, while comparing our scheme to PB-OR, we discovered a flaw in the security of that scheme, which we repair in this paper. Our solution is based on the application of forward-secure encryption. We design a forward-secure encryption scheme (of independent interest) to be used as the main encryption scheme in our onion routing protocol.
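For intuition only, the sketch below shows layered (onion) encryption and decryption with symmetric Fernet keys standing in for the paper's forward-secure scheme; it assumes the `cryptography` package is installed and is not the proposed protocol.

```python
from cryptography.fernet import Fernet   # symmetric stand-in, not forward-secure

router_keys = [Fernet.generate_key() for _ in range(3)]   # circuit of n = 3 routers

def wrap(message: bytes, keys):
    """Sender encrypts for the exit router first, then wraps outwards."""
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def route(onion: bytes, keys):
    """Each router peels exactly one layer with its own key."""
    for key in keys:
        onion = Fernet(key).decrypt(onion)
    return onion

onion = wrap(b"hello", router_keys)
print(route(onion, router_keys))   # b'hello' reaches the exit of the circuit
```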

8.
Kemeny Rank Aggregation is a consensus-finding problem important in many areas, ranging from classical voting over web search and databases to bioinformatics. The underlying decision problem, Kemeny Score, is NP-complete even for four input rankings to be aggregated into a "median ranking". We analyze efficient polynomial-time data reduction rules with provable performance bounds that allow us to find not only one but all optimal median rankings. We show that our reduced instances contain a number of candidates that is linear in \(d_a\), where \(d_a\) denotes the average Kendall's tau distance between the input votes. On the theoretical side, this improves a corresponding result for a "partial problem kernel" from quadratic to linear size. In this context we provide a theoretical analysis of a commonly used data reduction. On the practical side, we provide experimental results with data based on web search and sport competitions, e.g., computing optimal median rankings for real-world instances with more than 100 candidates within milliseconds. Moreover, we perform experiments with randomly generated data based on two random distribution models for permutations.
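The snippet below shows the two basic ingredients in plain Python: the Kendall tau distance between two rankings and a brute-force search for a Kemeny-optimal (median) ranking. The search is exponential in the number of candidates, which is exactly why the data reduction above matters; the example votes are made up.

```python
from itertools import combinations, permutations

def kendall_tau(r1, r2):
    """Number of candidate pairs ordered differently by the two rankings."""
    pos1 = {c: i for i, c in enumerate(r1)}
    pos2 = {c: i for i, c in enumerate(r2)}
    return sum((pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0
               for a, b in combinations(r1, 2))

def kemeny_median(votes):
    """Brute force over all permutations -- feasible only for few candidates;
    data reduction shrinks instances before such exact search."""
    candidates = votes[0]
    return min(permutations(candidates),
               key=lambda r: sum(kendall_tau(r, v) for v in votes))

votes = [("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c")]
print(kemeny_median(votes))   # ('a', 'b', 'c'), total Kendall tau distance 2
```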

9.
Information-theoretically secure (ITS) authentication is needed in quantum key distribution (QKD). In this paper, we study the security of an ITS authentication scheme proposed by Wegman and Carter in the case of a partially known authentication key. This scheme uses a new authentication key in each authentication attempt to select a hash function from an Almost Strongly Universal\(_2\) hash function family. The partial knowledge of the attacker is measured as the trace distance between the authentication key distribution and the uniform distribution; this is the usual measure in QKD. We provide direct proofs of the security of the scheme when using a partially known key, first in the information-theoretic setting and then in terms of witness indistinguishability as used in the universal composability (UC) framework. We find that if the authentication procedure has a failure probability \(\varepsilon\) and the authentication key has an \(\varepsilon^{\prime}\) trace distance to the uniform distribution, then under ITS, the adversary's success probability conditioned on an authentic message-tag pair is bounded by \(\varepsilon+|\mathcal{T}|\varepsilon^{\prime}\), where \(|\mathcal{T}|\) is the size of the set of tags. Furthermore, the trace distance between the authentication key distribution and the uniform distribution increases to \(|\mathcal{T}|\varepsilon^{\prime}\) after an authentic message-tag pair has been seen. Despite this, we are able to prove directly that the authenticated channel is indistinguishable from an (ideal) authentic channel (the desired functionality), except with probability less than \(\varepsilon+\varepsilon^{\prime}\). This proves that the scheme is \((\varepsilon+\varepsilon^{\prime})\)-UC-secure, without using the composability theorem.
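As a toy illustration of Wegman–Carter-style tagging (a polynomial-evaluation hash over a prime field combined with a one-time key), the sketch below is illustrative only and is not the Almost Strongly Universal\(_2\) family analysed above; the prime and message are arbitrary choices.

```python
import secrets

P = 2**61 - 1                      # a Mersenne prime (illustrative field size)

def tag(message: bytes, key: int, otp: int) -> int:
    """`key` selects the hash function; `otp` masks the digest (both used once)."""
    h = 0
    for byte in message:
        h = (h * key + byte) % P   # evaluate the message polynomial at `key`
    return (h + otp) % P

key, otp = secrets.randbelow(P), secrets.randbelow(P)
t = tag(b"qkd classical channel message", key, otp)
# Verifier with the same shared key material recomputes and compares the tag:
print(t == tag(b"qkd classical channel message", key, otp))   # True
```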

10.
With the exponential growth of Internet services over the past few decades, traffic to major websites has risen sharply. Massive numbers of user requests mean that the request rate of a popular site can increase dramatically within seconds. Once servers cannot withstand such highly concurrent requests, the resulting network congestion and latency severely degrade the user experience. Load balancing is a key component of highly available network infrastructure: by introducing a load balancer in front of the back end, the workload is distributed across multiple servers, relieving the enormous pressure that massive concurrent requests place on a single server and improving the performance and reliability of back-end servers and databases. Nginx, a high-performance HTTP and reverse-proxy server, is being applied in practice more and more widely. This paper analyses the load-balancing architecture of the Nginx server, studies the default weighted round-robin algorithm, and proposes an improved dynamic load-balancing algorithm that collects load information in real time and recomputes and reassigns weights. Experimental tests comparing load-balancing performance under different algorithms show that the improved algorithm effectively improves the performance of the server cluster.
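A minimal sketch of the dynamic idea described above: recompute per-server weights from live load metrics and, in a real deployment, push them to the Nginx upstream configuration. The metric fields, coefficients and thresholds are hypothetical, not the paper's formula.

```python
# Hypothetical load metrics reported by each back-end server.
servers = {
    "10.0.0.1": {"cpu": 0.35, "mem": 0.50, "conns": 120},
    "10.0.0.2": {"cpu": 0.80, "mem": 0.70, "conns": 300},
    "10.0.0.3": {"cpu": 0.20, "mem": 0.30, "conns": 60},
}

def recompute_weights(metrics, max_weight=10):
    """Lower load -> higher weight; a real agent would push these to Nginx."""
    weights = {}
    for host, m in metrics.items():
        load = 0.5 * m["cpu"] + 0.3 * m["mem"] + 0.2 * min(m["conns"] / 500, 1.0)
        weights[host] = max(1, round(max_weight * (1.0 - load)))
    return weights

print(recompute_weights(servers))
# {'10.0.0.1': 6, '10.0.0.2': 3, '10.0.0.3': 8}
```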

11.
This paper presents new schedulability tests for preemptive global fixed-priority (FP) scheduling of sporadic tasks on an identical multiprocessor platform. One of the main challenges in deriving a schedulability test for global FP scheduling is identifying the worst-case runtime behavior, i.e., the critical instant, at which the release of a job suffers the maximum interference from the jobs of its higher-priority tasks. Unfortunately, the critical instant is not yet known for sporadic tasks under global FP scheduling. To overcome this limitation, pessimism is introduced during the schedulability analysis to safely approximate the worst case. This paper reduces such pessimism by proposing three new schedulability tests for global FP scheduling. Another challenge for global FP scheduling is assigning the fixed priorities to the tasks, because no efficient method for finding the optimal priority ordering in this setting is currently known. Each of the proposed schedulability tests can be used to determine the priority of each task based on Audsley's approach. It is shown that the proposed tests not only theoretically dominate but also empirically perform better than the state-of-the-art schedulability test for global FP scheduling of sporadic tasks.
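For reference, a skeleton of Audsley's lowest-priority-first assignment with a pluggable schedulability test; the density check used here is a deliberately crude placeholder, not one of the three tests proposed in the paper, and the task set is made up.

```python
def placeholder_test(task, higher_priority, m):
    """Toy sufficient check: own density plus averaged higher-priority density."""
    density = lambda t: t["wcet"] / t["deadline"]
    return density(task) + sum(density(h) for h in higher_priority) / m <= 1.0

def audsley(tasks, m, test=placeholder_test):
    """Assign priorities lowest-first; returns tasks ordered high -> low priority."""
    unassigned, ordered = list(tasks), []
    while unassigned:
        for task in unassigned:
            others = [t for t in unassigned if t is not task]
            if test(task, others, m):          # schedulable at the lowest free level?
                unassigned.remove(task)
                ordered.insert(0, task)
                break
        else:
            return None                        # no feasible ordering found by this test
    return ordered

tasks = [{"name": "t1", "wcet": 1, "deadline": 4},
         {"name": "t2", "wcet": 2, "deadline": 5},
         {"name": "t3", "wcet": 3, "deadline": 10}]
print(audsley(tasks, m=2))                     # priority order t3 > t2 > t1
```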

12.
Scientific applications are getting increasingly complex, e.g., to improve their accuracy by taking more phenomena into account. Meanwhile, computing infrastructures continue their fast evolution. Software engineering is therefore becoming a major issue: it must offer ease of development, portability and maintainability while achieving high performance. Component-based software engineering offers a promising approach that enables the manipulation of the software architecture of applications. However, existing models do not provide adequate support for the performance portability of HPC applications. This paper proposes a low-level component model (L\(^2\)C) that supports inter-component interactions for typical high performance computing scenarios, such as process-local shared memory and function invocation (C++ and Fortran), MPI, and CORBA. To study the benefits of using L\(^2\)C, this paper walks through an example of stencil computation, i.e., a structured-mesh Jacobi implementation of the 2D heat equation parallelized through domain decomposition. The experimental results obtained on the Grid'5000 testbed and on the Curie supercomputer show that L\(^2\)C can achieve performance similar to that of native implementations, while easing performance portability.
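The computational kernel of the case study above, written as a plain NumPy Jacobi sweep for the 2D heat equation; no component framework or domain decomposition is used in this sketch, and the grid size, boundary values and iteration count are arbitrary.

```python
import numpy as np

def jacobi_step(u):
    """One Jacobi iteration: each interior point becomes the mean of its 4 neighbours."""
    new = u.copy()
    new[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
    return new

u = np.zeros((64, 64))
u[0, :] = 100.0                     # hot boundary on one edge
for _ in range(500):
    u = jacobi_step(u)
print(round(u[32, 32], 3))          # temperature at the centre after 500 sweeps
```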

13.
We study the stability of some finite difference schemes for symmetric hyperbolic systems in two space dimensions. For the so-called upwind scheme and the Lax–Wendroff scheme with a stabilizer, we show that stability is equivalent to strong stability, meaning that both schemes are either unstable or $\ell^2$-decreasing. These results improve on a series of partial results on strong stability. We also show that, for the Lax–Wendroff scheme without a stabilizer, strong stability may not occur no matter how small the CFL parameters are chosen. This partially invalidates some of Turkel's conjectures in Turkel (16(2):109–129, 1977).
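A one-dimensional scalar illustration of the $\ell^2$-decrease ("strong stability") property discussed above, using the first-order upwind scheme with a CFL number below one on periodic data; the paper itself treats two-dimensional symmetric systems, which this sketch does not reproduce, and the initial profile is arbitrary.

```python
import numpy as np

a, cfl, nx = 1.0, 0.8, 200           # advection speed, CFL number, grid points
dx = 1.0 / nx
dt = cfl * dx / a
u = np.exp(-200 * (np.linspace(0, 1, nx) - 0.3) ** 2)   # smooth bump

l2 = [np.sqrt(dx * np.sum(u ** 2))]
for _ in range(100):
    u = u - a * dt / dx * (u - np.roll(u, 1))            # first-order upwind, periodic
    l2.append(np.sqrt(dx * np.sum(u ** 2)))

# The discrete l2 norm never increases for 0 <= CFL <= 1:
print(all(l2[i + 1] <= l2[i] + 1e-12 for i in range(len(l2) - 1)))   # True
```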

14.
The integration of cloud computing and mobile computing has recently resulted in the Mobile Cloud Computing (MCC) paradigm, defined as the availability of cloud services over a mobile ecosystem. Platform as a Service (PaaS) is a cloud computing model that refers to high-level software systems delivered over the Internet; it typically enables developers to deliver Web applications as Software as a Service. With the aim of supporting MCC, this work proposes a PaaS called MobiCloUP! for mobile Web and native applications based on third-party cloud services such as Netflix, Instagram and Pinterest, to mention but a few. Unlike commercial solutions such as force.com and Google\(^{\mathrm{TM}}\) App Engine, and academic proposals like MOSAIC, MobiCloUP! implements an automatic code generation programming model targeting rich mobile applications based on both Web standards such as HTML5, CSS3 and AJAX and Rich Internet Application frameworks like Adobe\(^{\textregistered}\) Flex. The MobiCloUP! core is a wizard tool that covers the design, development, publication/deployment and maintenance phases of the mobile development life cycle. To validate our proposal, Web 2.0 services-based Web and native mobile applications were developed and deployed to the cloud using MobiCloUP!. Finally, a qualitative comparative evaluation was performed to assess our proposal against similar commercial offerings.

15.
16.
This paper presents a distributed (Bulk-Synchronous Parallel, or BSP) algorithm to compute on the fly whether a structured model of a security protocol satisfies a CTL\(^*\) formula. Using the structured nature of security protocols allows us to design a simple method to distribute the state space under consideration in a need-driven fashion. Based on this distribution of the states, the algorithm for checking an LTL formula can be simplified and optimised, allowing, with a few non-trivial modifications, the design of an efficient algorithm for CTL\(^*\) checking. Prototype implementations have been developed, allowing us to run benchmarks that investigate the parallel behaviour of our algorithms.
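A toy sketch of the state-distribution idea: each discovered state is owned by the processor given by a hash, and successors are grouped into one outgoing message per owner for the next BSP superstep. This is not the prototype implementation; the number of processors and the successor function are hypothetical.

```python
P = 4                                  # number of BSP processors (hypothetical)

def owner(state) -> int:
    """The processor that owns a state."""
    return hash(state) % P

def superstep(local_states, successors):
    """Group newly discovered states into one outgoing message per processor."""
    outboxes = {p: set() for p in range(P)}
    for s in local_states:
        for t in successors(s):
            outboxes[owner(t)].add(t)
    return outboxes                    # exchanged during the BSP communication phase

succ = lambda s: {(s[0] + 1, s[1]), (s[0], s[1] + 1)}   # hypothetical protocol steps
print({p: len(box) for p, box in superstep({(0, 0), (1, 0)}, succ).items()})
```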

17.
We propose a scheme for generating atomic NOON states via adiabatic passage. In the scheme, a double-\(\Lambda\)-type three-level atom is trapped in a bimodal cavity, and two sets of \(\Lambda\)-type three-level atoms are moved into and out of two single-mode cavities, respectively. The three cavities, connected by optical fibers, remain in vacuum states throughout. After a series of operations and a suitable interaction time, we can obtain arbitrary large-\(n\) NOON states of the two sets of \(\Lambda\)-type three-level atoms in distant cavities by performing a single projective measurement on the double-\(\Lambda\)-type three-level atom. Our scheme is robust against the spontaneous emission of atoms, the decay of fibers, and photon leakage of cavities, owing to the adiabatic elimination of atomic excited states and the application of adiabatic passage.
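For reference, the standard textbook form of an \(N\)-excitation NOON state on two modes \(a\) and \(b\) (with an optional relative phase \(\theta\)); in the scheme above the two "modes" are realized by the two sets of atoms, so this is a generic definition rather than the paper's exact state:

```latex
|\mathrm{NOON}\rangle \;=\; \frac{1}{\sqrt{2}}\Bigl(|N\rangle_a|0\rangle_b + e^{iN\theta}\,|0\rangle_a|N\rangle_b\Bigr)
```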

18.
19.
We present and analyze a finite volume scheme of arbitrary order for elliptic equations in the one-dimensional setting. In this scheme, the control volumes are constructed by using the Gauss points in subintervals of the underlying mesh. We provide a unified proof of the inf-sup condition and show that our finite volume scheme has an optimal convergence rate in the energy and $L^2$ norms of the approximation error. Furthermore, we prove that the derivative error is superconvergent at all Gauss points and that in some special cases the convergence rate can reach $h^{r+2}$ and even $h^{2r}$, compared with the $h^{r+1}$ rate of the counterpart finite element method. Here $r$ is the polynomial degree of the trial space. All theoretical results are justified by numerical tests.
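A quick way to check rates such as $h^{r+1}$ or $h^{r+2}$ in numerical tests is to compute the observed order from the errors on two successive meshes; the error values below are made up for illustration, not taken from the paper.

```python
from math import log

def observed_order(e_coarse, e_fine, h_coarse, h_fine):
    """Observed convergence order from errors on two mesh sizes."""
    return log(e_coarse / e_fine) / log(h_coarse / h_fine)

# Hypothetical errors for a degree r = 2 trial space, halving h each refinement:
print(round(observed_order(1.6e-4, 2.0e-5, 1 / 16, 1 / 32), 2))   # ~3.0, i.e. h^{r+1}
```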

20.
A combination of Newton's method and a two-level piecewise linear finite element algorithm is applied to solve second-order nonlinear elliptic partial differential equations numerically. Newton's method finds a finite element solution by solving $m$ Newton equations on a fine mesh. The two-level Newton's method solves $m-1$ Newton equations on a coarse mesh and performs one Newton iteration on a fine mesh. Moreover, optimal error estimates for Newton's method and the two-level Newton's method are provided to justify the efficiency of the two-level method. If we choose $H$ such that $h=O(|\log h|^{1-2/p}H^2)$ for the $W^{1,p}(\Omega)$-error estimates, the two-level Newton's method is asymptotically as accurate as Newton's method on the fine mesh. The numerical investigations support the theoretical analysis and show that the proposed method is efficient for solving nonlinear elliptic problems.
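A bare Newton iteration for a small nonlinear system $F(u)=0$, the building block used on both the coarse and fine levels above; meshes, finite elements and the two-level splitting itself are omitted, and the example system is made up.

```python
import numpy as np

def newton(F, J, u, tol=1e-12, max_iter=20):
    """Solve F(u) = 0 by Newton's method with Jacobian J."""
    for _ in range(max_iter):
        du = np.linalg.solve(J(u), -F(u))     # one Newton equation per iteration
        u = u + du
        if np.linalg.norm(du) < tol:
            break
    return u

# Example system: u1^2 + u2^2 = 4, u1 * u2 = 1
F = lambda u: np.array([u[0]**2 + u[1]**2 - 4, u[0]*u[1] - 1])
J = lambda u: np.array([[2*u[0], 2*u[1]], [u[1], u[0]]])
print(newton(F, J, np.array([2.0, 0.5])))
```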
