Similar Literature
20 similar documents were retrieved.
1.
In the era of big data, using data analysis to capture customer needs and make product optimization more scientific is of vital strategic importance to enterprises. This paper applies online review data to the assisted optimization of enterprise products and proposes techniques and methods for obtaining, from the reviews, the optimization information that product improvement requires. First, indicators such as customer attention and satisfaction are computed from the online reviews, and a weighting model for customer opinions is constructed. Next, word pairs of product features and customer opinions are extracted, and the weight of each customer opinion is computed with the weighting model. Then, the corresponding product-optimization information is located through an association matrix. Finally, a case study verifies the feasibility of the method.
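A minimal Python sketch of the kind of opinion weighting described above; the particular formula (weight = attention × (1 − satisfaction)) and the mined (feature, opinion, sentiment) triples are illustrative assumptions, not the paper's model.

```python
from collections import defaultdict

# Hypothetical (feature, opinion, sentiment) triples mined from reviews;
# sentiment is in [0, 1], where 1 means fully satisfied.
review_pairs = [
    ("battery", "drains fast", 0.2),
    ("battery", "drains fast", 0.1),
    ("screen", "very sharp", 0.9),
    ("battery", "charges slowly", 0.3),
]

stats = defaultdict(lambda: {"count": 0, "sent_sum": 0.0})
for feature, opinion, sentiment in review_pairs:
    key = (feature, opinion)
    stats[key]["count"] += 1
    stats[key]["sent_sum"] += sentiment

total = sum(v["count"] for v in stats.values())
for (feature, opinion), v in stats.items():
    attention = v["count"] / total             # how often the opinion is voiced
    satisfaction = v["sent_sum"] / v["count"]  # average sentiment for the pair
    weight = attention * (1.0 - satisfaction)  # assumed form: frequent + unsatisfied => high priority
    print(f"{feature:8s} {opinion:15s} weight={weight:.3f}")
```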

2.
Multimedia Tools and Applications - Matrix coding based data hiding (MCDH) using linear codes (syndrome coding) is an efficient coding method for steganographic schemes to improve their embedding...
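A standard matrix-embedding (syndrome-coding) example with the [7,4] Hamming code, given here only to illustrate the MCDH idea; the abstract above is truncated, so this is not the paper's scheme.

```python
import numpy as np

# Parity-check matrix of the [7,4] Hamming code; column j is the binary
# representation of j, so any nonzero syndrome points at a single bit to flip.
H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]], dtype=np.uint8)

def embed(cover_bits, message_bits):
    """Embed 3 message bits into 7 cover bits with at most one change."""
    c = cover_bits.copy()
    syndrome = (H @ c) % 2
    diff = syndrome ^ message_bits
    if diff.any():
        # Flip the cover bit whose column of H equals the difference.
        col = int("".join(map(str, diff)), 2) - 1
        c[col] ^= 1
    return c

def extract(stego_bits):
    """The receiver recovers the message as the syndrome of the stego block."""
    return (H @ stego_bits) % 2

cover = np.array([1, 0, 1, 1, 0, 0, 1], dtype=np.uint8)  # e.g. LSBs of 7 pixels
msg = np.array([1, 0, 1], dtype=np.uint8)
stego = embed(cover, msg)
assert np.array_equal(extract(stego), msg)
print("changed bits:", int((cover != stego).sum()))      # at most 1
```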

3.
Cubegrades: Generalizing Association Rules
Cubegrades are a generalization of association rules which represent how a set of measures (aggregates) is affected by modifying a cube through specialization (rolldown), generalization (rollup) and mutation (which is a change in one of the cube's dimensions). Cubegrades are significantly more expressive than association rules in capturing trends and patterns in data because they can use other standard aggregate measures, in addition to COUNT. Cubegrades are atoms which can support sophisticated what-if analysis tasks dealing with the behavior of arbitrary aggregates over different database segments. As such, cubegrades can be useful in marketing, sales analysis, and other typical data mining applications in business. In this paper we introduce the concept of cubegrades. We define them and give examples of their usage. We then describe in detail an important task for computing cubegrades: generation of significant cubes, which is analogous to generating frequent sets. A novel Grid Based Pruning (GBP) method is employed for this purpose. We experimentally demonstrate the practicality of the method. We conclude with a number of open questions and possible extensions of the work.
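An illustrative cubegrade on a toy fact table, showing how COUNT and AVG change when a cube is specialized (rolled down); this sketch does not implement the paper's Grid Based Pruning.

```python
# Toy fact table: each row is (region, product, sales).
rows = [
    ("West", "laptop", 1200), ("West", "phone", 800),
    ("East", "laptop", 900),  ("East", "phone", 1100),
    ("West", "laptop", 1500), ("East", "laptop", 700),
]

def aggregate(rows, **conditions):
    """COUNT and AVG(sales) for the cube selected by the given dimension values."""
    idx = {"region": 0, "product": 1}
    cell = [r for r in rows
            if all(r[idx[d]] == v for d, v in conditions.items())]
    count = len(cell)
    avg = sum(r[2] for r in cell) / count if count else 0.0
    return count, avg

# Source cube: region = West; target cube: specialize (rolldown) by product = laptop.
src_count, src_avg = aggregate(rows, region="West")
tgt_count, tgt_avg = aggregate(rows, region="West", product="laptop")

# A cubegrade reports how the measures change between the two cubes.
print(f"COUNT: {src_count} -> {tgt_count} (ratio {tgt_count / src_count:.2f})")
print(f"AVG(sales): {src_avg:.0f} -> {tgt_avg:.0f} (ratio {tgt_avg / src_avg:.2f})")
```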

4.
于宝琴  曹满超  白玉坤 《计算机仿真》2021,38(4):230-235,337
Taking online-shopping express-delivery logistics as its starting point, this work studies how the ecological practices of express-delivery firms affect the ecosystem and, in turn, the firms' own economic benefits. The ecological logistics process of online-shopping express delivery is analyzed to determine the boundary, structure, and hierarchy of the system, as well as each member's inputs to and outputs from the overall system. System dynamics theory and methods are used to describe the causal feedback mechanisms of the whole system and of its internal subsystems. Separate models are built for logistics economics, the logistics process, logistics...
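A minimal stock-and-flow sketch of the kind of system dynamics simulation the abstract refers to; the stocks, flows, feedback form, and all parameter values are assumptions, not the paper's model.

```python
# Minimal stock-and-flow sketch: parcel backlog as a stock, with a demand inflow
# and a delivery outflow, plus a service-quality feedback loop (assumed values).
dt, horizon = 0.25, 30.0            # time step and simulated days
backlog = 1000.0                    # stock: undelivered parcels
capacity = 400.0                    # parcels deliverable per day

t = 0.0
while t < horizon:
    service_quality = max(0.0, 1.0 - backlog / 5000.0)    # big backlog hurts quality
    demand_inflow = 350.0 * (0.8 + 0.4 * service_quality)  # better service attracts orders
    delivery_outflow = min(capacity, backlog / dt)
    backlog += dt * (demand_inflow - delivery_outflow)     # Euler integration of the stock
    t += dt

print(f"backlog after {horizon:.0f} days: {backlog:.0f} parcels")
```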

5.
This paper describes the design principles and process of a terminate-and-stay-resident (TSR) program for capturing screen graphics from the special-purpose software GBCUV. The study finds that, even when software includes anti-screen-capture measures, the contents of a specified page of the graphics buffer can still be read and written by programming the graphics port registers.

6.
7.
In the k-Apex problem the task is to find at most k vertices whose deletion makes the given graph planar. The graphs for which there exists a solution form a minor-closed class of graphs, hence by the deep results of Robertson and Seymour (J. Comb. Theory, Ser. B 63(1):65–110, 1995; J. Comb. Theory, Ser. B 92(2):325–357, 2004), there is a cubic algorithm for every fixed value of k. However, the proof is extremely complicated and the constants hidden by the big-O notation are huge. Here we give a much simpler algorithm for this problem with quadratic running time, by iteratively reducing the input graph and then applying techniques for graphs of bounded treewidth.

8.
The numerical performance of many linear algebra algorithms (even numerically stable ones) can be significantly improved by proper initial scaling of the matrices they operate on. In the case of state-space representations, the most difficult aspect of this is finding appropriate scaling for the system states. This problem is studied here, and a new numerically reliable scaling algorithm is derived which minimizes the Gerschgorin discs of the matrix A, so yielding tight eigenvalue bounds in a computationally cheap way. Examples are used to illustrate the superiority of this algorithm over existing methods.
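A sketch of Gerschgorin disc radii and of how a diagonal similarity scaling changes them; the balancing heuristic used to pick D here is a generic one-sweep row/column balance, not the algorithm derived in the paper, and the matrix entries are illustrative.

```python
import numpy as np

def gerschgorin_radii(A):
    """Gerschgorin disc radii: off-diagonal absolute row sums."""
    return np.sum(np.abs(A), axis=1) - np.abs(np.diag(A))

# Poorly scaled state matrix (illustrative values only).
A = np.array([[-1.0, 1e3, 0.0],
              [1e-3, -2.0, 1e2],
              [0.0, 1e-2, -3.0]])

# A diagonal similarity As = D^{-1} A D keeps the eigenvalues but rescales the
# off-diagonal entries, and hence the discs.  D here comes from one sweep of a
# simple row/column balancing heuristic, not from the paper's algorithm.
r = gerschgorin_radii(A)                               # off-diagonal row sums
c = np.sum(np.abs(A), axis=0) - np.abs(np.diag(A))     # off-diagonal column sums
d = np.sqrt(r / c)
As = A * d[None, :] / d[:, None]                       # (D^{-1} A D)[i, j] = A[i, j] * d[j] / d[i]

print("disc radii before:", gerschgorin_radii(A))
print("disc radii after: ", gerschgorin_radii(As))
```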

9.
This paper describes a process for determining the value of the gradient of the real outputs of a program with respect to its real parameters. Called Gradient Instrumentation, it is a mechanical process of insertion into the program's source code. The resulting program yields the gradient without re-execution of the program. The sample path derivatives of many discrete event dynamical system simulations can be found using Gradient Instrumentation, by treating them as deterministic programs. The technique can also be applied to continuous simulations. The subject of a patent, Gradient Instrumentation yields derivatives of any order.
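Gradient Instrumentation, as described, resembles forward-mode automatic differentiation; the dual-number sketch below illustrates that idea generically and is not the patented procedure.

```python
import math
from dataclasses import dataclass

@dataclass
class Dual:
    """Value together with its derivative w.r.t. one chosen real parameter."""
    val: float
    der: float = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def dsin(x: Dual) -> Dual:
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# "Instrumented" program: same source, but the parameter carries a derivative seed of 1.
def program(theta: Dual) -> Dual:
    return theta * theta + 3.0 * dsin(theta)   # y = theta^2 + 3 sin(theta)

out = program(Dual(2.0, 1.0))
print(out.val)   # program output at theta = 2
print(out.der)   # dy/dtheta = 2*theta + 3*cos(theta), obtained in the same run
```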

10.
Online ranking by projecting
We discuss the problem of ranking instances. In our framework, each instance is associated with a rank or a rating, which is an integer from 1 to k. Our goal is to find a rank-prediction rule that assigns each instance a rank that is as close as possible to the instance's true rank. We discuss a group of closely related online algorithms, analyze their performance in the mistake-bound model, and prove their correctness. We describe two sets of experiments, with synthetic data and with the EachMovie data set for collaborative filtering. In the experiments we performed, our algorithms outperform online algorithms for regression and classification applied to ranking.
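A sketch in the spirit of threshold-based online ranking (e.g., PRank-style updates); it may not match the exact algorithms analyzed in the paper, and the synthetic data are only for illustration.

```python
import numpy as np

class OnlineRanker:
    """Threshold-based online ranker for k ranks (PRank-style sketch)."""
    def __init__(self, dim, k):
        self.w = np.zeros(dim)
        self.b = np.zeros(k - 1)          # ordered thresholds between ranks

    def predict(self, x):
        score = self.w @ x
        return int(1 + np.sum(score >= self.b))   # 1 + number of thresholds exceeded

    def update(self, x, true_rank):
        score = self.w @ x
        # y[r] = +1 if the true rank lies above threshold r, else -1.
        y = np.where(np.arange(1, len(self.b) + 1) < true_rank, 1.0, -1.0)
        violated = (score - self.b) * y <= 0
        tau = np.where(violated, y, 0.0)
        self.w += tau.sum() * x            # move the projection direction
        self.b -= tau                      # and shift the violated thresholds

rng = np.random.default_rng(0)
ranker = OnlineRanker(dim=3, k=5)
for _ in range(2000):
    x = rng.normal(size=3)
    true_rank = int(np.clip(np.ceil((x @ np.array([1.0, 2.0, -1.0])) / 1.5) + 3, 1, 5))
    if ranker.predict(x) != true_rank:
        ranker.update(x, true_rank)
print("learned thresholds:", np.round(ranker.b, 2))
```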

11.
Conjoint choice experiments elicit individuals’ preferences for the attributes of a good by asking respondents to indicate repeatedly their most preferred alternative in a number of choice sets. However, conjoint choice experiments can be used to obtain more information than that revealed by the individuals’ single best choices. A way to obtain extra information is by means of best-worst choice experiments in which respondents are asked to indicate not only their most preferred alternative but also their least preferred one in each choice set. To create D-optimal designs for these experiments, an expression for the Fisher information matrix for the maximum-difference model is developed. Semi-Bayesian D-optimal best-worst choice designs are derived and compared with commonly used design strategies in marketing in terms of the D-optimality criterion and prediction accuracy. Finally, it is shown that best-worst choice experiments yield considerably more information than choice experiments.
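A small sketch of the maximum-difference (best-worst) choice probabilities that underlie such designs; the utilities are assumed values, and the paper's Fisher information expression is not reproduced here.

```python
import numpy as np

def best_worst_probs(utilities):
    """Maximum-difference model: P(best=i, worst=j) is proportional to exp(u_i - u_j), i != j."""
    u = np.asarray(utilities, dtype=float)
    n = len(u)
    z = sum(np.exp(u[i] - u[j]) for i in range(n) for j in range(n) if i != j)
    return {(i, j): np.exp(u[i] - u[j]) / z
            for i in range(n) for j in range(n) if i != j}

# A choice set of 4 alternatives with assumed utilities (illustrative only).
p = best_worst_probs([0.8, 0.1, -0.3, -0.6])
best, worst = max(p, key=p.get)
print(f"most likely (best, worst) pair: ({best}, {worst}) with prob {p[(best, worst)]:.3f}")
print("probabilities sum to", round(sum(p.values()), 6))
```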

12.
Regular languages (RL) are the simplest family in Chomsky’s hierarchy. Thanks to their simplicity they enjoy various nice algebraic and logic properties that have been successfully exploited in many application fields. Practically all of their related problems are decidable, so that they support automatic verification algorithms. Also, they can be recognized in real time. Context-free languages (CFL) are another major family well-suited to formalize programming, natural, and many other classes of languages; their increased generative power w.r.t. RL, however, causes the loss of several closure properties and of the decidability of important problems; furthermore they need complex parsing algorithms. Thus, various subclasses thereof have been defined with different goals, spanning from efficient, deterministic parsing to closure properties, logic characterization and automatic verification techniques. Among CFL subclasses, so-called structured ones, i.e., those where the typical tree structure is visible in the sentences, exhibit many of the algebraic and logic properties of RL, whereas deterministic CFL have been thoroughly exploited in compiler construction and other application fields. After surveying and comparing the main properties of those various language families, we go back to operator precedence languages (OPL), an old family through which R. Floyd pioneered deterministic parsing, and we show that they offer unexpected properties in two fields so far investigated in totally independent ways: they enable parsing parallelization in a more effective way than traditional sequential parsers, and exhibit the same algebraic and logic properties so far obtained only for less expressive language families.

13.
Scientific visualization has many effective methods for examining and exploring scalar and vector fields, but rather fewer for bivariate fields. We report the first general purpose approach for the interactive extraction of geometric separating surfaces in bivariate fields. This method is based on fiber surfaces: surfaces constructed from sets of fibers, the multivariate analogues of isolines. We show simple methods for fiber surface definition and extraction. In particular, we show a simple and efficient fiber surface extraction algorithm based on Marching Cubes. We also show how to construct fiber surfaces interactively with geometric primitives in the range of the function. We then extend this to build user interfaces that generate parameterized families of fiber surfaces with respect to arbitrary polygons. In the special case of isovalue‐gradient plots, fiber surfaces capture features geometrically for quantitative analysis that have previously only been analysed visually and qualitatively using multi‐dimensional transfer functions in volume rendering. We also demonstrate fiber surface extraction on a variety of bivariate data.
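One simple way to realize fiber surface extraction, assuming scikit-image is available: build a scalar field measuring the distance in range space from (f, g) to a control primitive and extract its zero level set with Marching Cubes. The synthetic fields and the circular primitive below are assumptions, not the paper's data or interface.

```python
import numpy as np
from skimage.measure import marching_cubes  # assumed dependency (scikit-image)

# Bivariate field (f, g) sampled on a 64^3 grid (synthetic example).
ax = np.linspace(-1.0, 1.0, 64)
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
f = X**2 + Y**2 + Z**2           # first scalar component
g = X + Y - Z                    # second scalar component

# Control primitive in range space: a circle of radius r around (f0, g0).
f0, g0, r = 0.5, 0.2, 0.15
dist = np.sqrt((f - f0)**2 + (g - g0)**2) - r   # signed distance to the circle

# The fiber surface is the preimage of the circle, i.e. the zero level set
# of the distance field, which ordinary Marching Cubes can extract.
verts, faces, normals, values = marching_cubes(dist, level=0.0)
print(f"fiber surface: {len(verts)} vertices, {len(faces)} triangles")
```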

14.
15.
Robust stabilization is studied within the context of time-domain feedback control of single-input single-output distributed-parameter systems using approximate models. Using the operational methods of Mikusinski (1983), one can easily obtain strong versions of several famous theorems on robust stabilization. The operational methods of Mikusinski can play a powerful role in control theory. Using these methods, one is able to avoid the technical difficulties of convergence, existence, inversion, etc. associated with the Laplace transform. Unlike similar estimates obtained via the contraction mapping and small gain theorems, one is able to obtain robust stabilization free of restrictions on the size of certain error operators.

16.
王大玲  于戈  鲍玉斌  张沫  沈洲 《软件学报》2010,21(1):1083-1097
Building on existing classifications of user search intent, this work further analyzes the information needs of each type of intent and proposes a dynamic generalization model for Web pages based on user search intent. For the retrieved Web pages, concept hierarchies are built dynamically over document fragments, keywords, navigation types, and document formats; generalizing page content, type, and format then provides further search navigation for different access intents, so that results more relevant to the search intent are returned. In contrast to related work, the focus is neither on acquiring user intent nor on classifying it, but on building the intent-based dynamic generalization model and implementing the Web-page generalization process. Experimental results show that the generalization model can not only acquire the user's search intent automatically through navigation, but also provide relevant search results and further search navigation based on that intent.
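A minimal sketch of generalizing retrieved pages along an assumed keyword concept hierarchy; the paper's model also covers document fragments, navigation types, and formats, which are omitted here, and the hierarchy itself is a toy assumption.

```python
# Assumed toy concept hierarchy over keywords (child -> parent).
hierarchy = {
    "python": "programming language", "java": "programming language",
    "programming language": "software", "mysql": "database", "database": "software",
}

def generalize(keyword, levels=1):
    """Climb the concept hierarchy a given number of levels."""
    for _ in range(levels):
        keyword = hierarchy.get(keyword, keyword)
    return keyword

# Retrieved pages described by their keywords and format.
pages = [
    {"url": "a.html", "keywords": ["python", "mysql"], "format": "html"},
    {"url": "b.pdf",  "keywords": ["java"],            "format": "pdf"},
]

# Group pages under generalized concepts to offer navigation choices.
navigation = {}
for page in pages:
    for kw in page["keywords"]:
        navigation.setdefault(generalize(kw), []).append(page["url"])
print(navigation)   # e.g. {'programming language': ['a.html', 'b.pdf'], 'database': ['a.html']}
```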

17.
Dynamic Generalization of Web Pages Based on User Search Intent
Building on existing classifications of user search intent, this work further analyzes the information needs of each type of intent and proposes a dynamic generalization model for Web pages based on user search intent. For the retrieved Web pages, concept hierarchies are built dynamically over document fragments, keywords, navigation types, and document formats; generalizing page content, type, and format then provides further search navigation for different access intents, so that results more relevant to the search intent are returned. In contrast to related work, the focus is neither on acquiring user intent nor on classifying it, but on building the intent-based dynamic generalization model and implementing the Web-page generalization process. Experimental results show that the generalization model can not only acquire the user's search intent automatically through navigation, but also provide relevant search results and further search navigation based on that intent.

18.
The Chinese Proposition Bank (CPB) is a corpus annotated with semantic roles for the arguments of verbal and nominalized predicates. The semantic roles for the core arguments are defined in a predicate-specific manner. That is, a set of semantic roles, numerically identified, are defined for each sense of a predicate lemma and recorded in a valency lexicon called frame files. The predicate-specific manner in which the semantic roles are defined reduces the cognitive burden on the annotators, since they only need to internalize a few roles at a time, and this has contributed to the consistency in annotation. It was also a sensible approach given the contentious issue of how many semantic roles are needed if one were to adopt a set of global semantic roles that apply to all predicates. A downside of this approach, however, is that the predicate-specific roles may not be consistent across predicates, and this inconsistency has a negative impact on training automatic systems. Given the progress that has been made in defining semantic roles in the last decade or so, the time is ripe for adopting a set of general semantic roles. In this article, we describe our effort to “re-annotate” the CPB with a set of “global” semantic roles that are predicate-independent and investigate their impact on automatic semantic role labeling systems. When defining these global semantic roles, we strive to make them compatible with a recently published ISO standard on the annotation of semantic roles (ISO 24617-4:2014 SemAF-SR) while taking the linguistic characteristics of the Chinese language into account. We show that in spite of the much larger number of global semantic roles, the accuracy of an off-the-shelf semantic role labeling system retrained on the data re-annotated with global semantic roles is comparable to that trained on the data set with the original predicate-specific semantic roles. We also argue that the re-annotated data set, together with the original data, provides the user with more flexibility when using the corpus.
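An illustrative frame-file-style mapping from predicate-specific numbered arguments to global roles; the predicates, senses, and role labels are hypothetical and do not reproduce the CPB inventory.

```python
# Hypothetical frame-file fragment: per predicate sense, map numbered
# arguments to global semantic roles (labels are illustrative only).
frame_files = {
    ("给", "give.01"):   {"ARG0": "Agent", "ARG1": "Theme", "ARG2": "Recipient"},
    ("打破", "break.01"): {"ARG0": "Agent", "ARG1": "Patient"},
}

def relabel(predicate, sense, arguments):
    """Re-annotate predicate-specific roles with the global ones from the frame file."""
    mapping = frame_files[(predicate, sense)]
    return {span: mapping.get(role, role) for span, role in arguments.items()}

instance = {"张三": "ARG0", "一本书": "ARG1", "李四": "ARG2"}
print(relabel("给", "give.01", instance))
# {'张三': 'Agent', '一本书': 'Theme', '李四': 'Recipient'}
```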

19.
Image Data Reading Methods and Threshold Segmentation Techniques
The BMP file format is analyzed and the image data are read; the data are then processed, and the image edges are obtained with a threshold segmentation technique.
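A minimal sketch for reading an uncompressed 24-bit BMP and extracting edges by threshold segmentation; the file name "input.bmp" and the threshold of 128 are placeholders, and only this one BMP variant is handled.

```python
import struct

def read_bmp24(path):
    """Read an uncompressed 24-bit BMP and return rows of (B, G, R) pixels."""
    with open(path, "rb") as fh:
        data = fh.read()
    assert data[:2] == b"BM", "not a BMP file"
    pixel_offset = struct.unpack_from("<I", data, 10)[0]
    width, height = struct.unpack_from("<ii", data, 18)
    bpp = struct.unpack_from("<H", data, 28)[0]
    assert bpp == 24, "sketch handles only uncompressed 24-bit BMPs"
    row_size = (width * 3 + 3) & ~3              # rows are padded to 4-byte multiples
    rows = []
    for y in range(abs(height)):
        start = pixel_offset + y * row_size
        rows.append([tuple(data[start + 3 * x: start + 3 * x + 3]) for x in range(width)])
    return rows[::-1] if height > 0 else rows    # positive height => stored bottom-up

def threshold_edges(rows, threshold=128):
    """Binarize by luminance, then mark pixels whose right/lower neighbor differs."""
    binary = [[(0.114 * b + 0.587 * g + 0.299 * r) >= threshold for (b, g, r) in row]
              for row in rows]
    h, w = len(binary), len(binary[0])
    return [[1 if (x + 1 < w and binary[y][x] != binary[y][x + 1]) or
                  (y + 1 < h and binary[y][x] != binary[y + 1][x]) else 0
             for x in range(w)] for y in range(h)]

edges = threshold_edges(read_bmp24("input.bmp"))   # "input.bmp" is a placeholder path
print(sum(map(sum, edges)), "edge pixels")
```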

20.
The concept of electronic signatures emerged decades ago, but they are still not prevalent owing to the lack of a reliable infrastructure. Although a signature is hard to imitate perfectly, it is easy to copy the original with image-editing software and paste it onto a document. On the other hand, advances in touchscreen technology may open a new era by providing simple interfaces that record and validate electronic signatures together with biometric features. In this paper, we therefore propose a novel online signature-analysis methodology for touchscreens that starts with signing on an interface containing a signature silhouette. The frequency spectrum produced during the signing process is extracted unobtrusively, and spectrograms are created by short-time Fourier transforms. Since the spectrograms are obtained as RGB images that provide valuable information about frequency versus time, grid histograms are formed by quantization for the genuine signature sample. For discrimination, a fuzzified surface is designed to compute the closeness of grid histograms.
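A sketch of the spectrogram-to-grid-histogram step with a fuzzy closeness measure; the sampling rate, STFT window, grid size, and Gaussian membership spread are assumptions, and SciPy's stft stands in for whatever tooling the authors used.

```python
import numpy as np
from scipy.signal import stft  # assumed dependency

def grid_histogram(signal, fs=200.0, grid=(8, 8)):
    """STFT magnitude quantized onto a coarse time-frequency grid, then normalized."""
    _, _, Zxx = stft(signal, fs=fs, nperseg=64)
    mag = np.abs(Zxx)
    gf, gt = grid
    hist = np.zeros(grid)
    f_bins = np.array_split(np.arange(mag.shape[0]), gf)
    t_bins = np.array_split(np.arange(mag.shape[1]), gt)
    for i, fb in enumerate(f_bins):
        for j, tb in enumerate(t_bins):
            hist[i, j] = mag[np.ix_(fb, tb)].sum()
    return hist / hist.sum()

def fuzzy_closeness(h1, h2, spread=0.02):
    """Gaussian membership of per-cell differences, averaged over the grid."""
    return float(np.mean(np.exp(-((h1 - h2) ** 2) / (2 * spread ** 2))))

rng = np.random.default_rng(1)
t = np.linspace(0, 2.0, 400)                       # 2 s of pen motion at 200 Hz (assumed)
genuine = np.sin(2 * np.pi * 5 * t) + 0.1 * rng.normal(size=t.size)
attempt = np.sin(2 * np.pi * 5 * t + 0.3) + 0.1 * rng.normal(size=t.size)
print("closeness:", round(fuzzy_closeness(grid_histogram(genuine), grid_histogram(attempt)), 3))
```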
