Similar Literature
20 similar documents found (search time: 15 ms)
1.
The estimation of average (or mean) treatment effects is one of the most widely used methods in the statistical literature. If observations are available directly from the treatment and control groups, the simple t-statistic can be used when the underlying distributions are close to normal. If the underlying distributions are skewed, however, the median difference or the Wilcoxon statistic is preferable. In observational studies, each individual's choice of treatment is not completely at random; it may depend on baseline covariates. To obtain an unbiased estimate, one has to adjust for the choice probability function, or propensity score function. In this paper, we study the median treatment effect. The empirical likelihood method is used to calibrate baseline covariate information effectively. An economic dataset is used for illustration.
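As a minimal illustration of a median treatment effect estimated from two observed groups, one can take the median of all pairwise treated-minus-control differences (a Hodges–Lehmann-style estimator). This is only a sketch of the basic quantity; it is not the paper's empirical-likelihood calibration, and the function name is ours:

```python
import statistics

def median_treatment_effect(treated, control):
    """Median of all pairwise treated-minus-control differences.

    Robust to skewed outcome distributions, unlike the mean difference.
    """
    diffs = [t - c for t in treated for c in control]
    return statistics.median(diffs)
```

With covariate-dependent treatment choice, each difference would additionally need propensity-score adjustment before such an estimator is unbiased.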

2.
The generalised median string is defined as a string that has the smallest sum of distances to the elements of a given set of strings. It is a valuable tool for representing a whole set of objects by a single prototype and has interesting applications in pattern recognition. All algorithms for computing generalised median strings known from the literature are static in nature; that is, they require all elements of the underlying set of strings to be given when the algorithm is started. In this paper, we present a novel approach that is able to operate in a dynamic environment, where new strings belonging to the considered set arrive steadily. Rather than computing the median from scratch upon the arrival of each new string, the proposed algorithm needs only the previously computed median together with the new string to produce an updated median string for the new set. Our approach is experimentally compared to a greedy algorithm and the set median using both synthetic and real data.
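For reference, the static set median (an element of the set itself that minimises the summed edit distance), which the abstract uses as a comparison baseline, can be sketched as follows. The generalised median and the incremental update are substantially more involved and are not reproduced here:

```python
def edit_distance(a, b):
    """Levenshtein distance via the classic dynamic program (two rows)."""
    prev = list(range(len(b) + 1))
    for i in range(1, len(a) + 1):
        cur = [i] + [0] * len(b)
        for j in range(1, len(b) + 1):
            cur[j] = min(prev[j] + 1,                      # deletion
                         cur[j - 1] + 1,                   # insertion
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitution
        prev = cur
    return prev[len(b)]

def set_median(strings):
    """The member of the set with the smallest sum of edit distances."""
    return min(strings, key=lambda s: sum(edit_distance(s, t) for t in strings))
```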

3.
This paper explores the issue of choosing the best data to use when running a scheduling model to select a permanent workforce for a service facility. Because demand is assumed to vary over the day and throughout the week, the choice of the input data is crucial. If a week of low volume is selected, the solution might call for an insufficient number of workers; if a week of high volume is chosen, excessive idle time might be the result. Staff scheduling at mail processing and distribution centers (P&DCs) in the United States provides the backdrop. In operating these facilities, a critical objective is to manage overtime, part-time workers, and temporaries so that when volumes are high, additional costs are kept to a minimum, and when volumes are low the permanent workforce is almost never idle. In quantitative terms, this means selecting the size and composition of the workforce so that over the year, no more than a total number of overtime hours, part-time hours and temporary worker hours are used when demand exceeds some baseline and absenteeism is taken into account. To solve the problem, an engineering approach is proposed in which estimates of productivity are made based on a single run of the optimization model and the final data set is chosen to satisfy a small error tolerance. The full methodology is illustrated with data provided by the Dallas P&DC.

4.
To address the heavy comparison workload, low processing efficiency, and poor real-time performance of conventional median filtering, this paper studies the sorting algorithm and implementation scheme of the filter on the basis of the median filtering principle, and proposes a fast median filtering algorithm based on System Generator. By sorting the rows and then the columns of the 3×3 window, finding the median of its 9 pixels is reduced to a sorting operation over 3 pixels, cutting the number of comparisons needed to locate the median in a single window from 36 in a conventional sorting algorithm to as few as 14. Built with the System Generator system modelling tool, the design achieves a processing speed nearly 6 times that of the conventional method, suppresses noise quickly, and meets the requirements of real-time image processing.
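The row/column sorting idea can be illustrated in plain Python (a software sketch of the principle, not the System Generator/FPGA design): after sorting each row and then each column of a 3×3 window, the overall median is the median of the three anti-diagonal elements, so only a handful of small sorts remain.

```python
def median9(win):
    """Median of a 3x3 window given as a list of 3 rows of 3 pixels.

    Sort rows, then columns; the window median then lies on the
    anti-diagonal (max of col 0, middle of col 1, min of col 2).
    """
    rows = [sorted(r) for r in win]            # sort each row
    cols = [sorted(c) for c in zip(*rows)]     # sort each column
    return sorted([cols[0][2], cols[1][1], cols[2][0]])[1]
```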

5.
Delivering successful projects requires establishing a well-drafted contract by setting balanced rights and obligations. Nowadays, Standard Forms of Construction Contracts (SFCCs) are being utilized to manifest the contractual relationships between the parties. This research aims to develop a model that helps in selecting the most appropriate SFCC using text analysis and multi-criteria decision-making techniques. Three SFCCs are studied: 1) FIDIC Red Book 2017, 2) JCT Standard Building Contract with Quantities 2016, and 3) NEC4 Engineering and Construction Contract. The research compares the SFCCs in terms of five aspects: 1) readability, 2) sharing common goals, 3) loss and rewards, 4) dealing with uncertainties, and 5) suspension by parties. The literature emphasizes that these aspects have been proven to be crucial in achieving a successful contract agreement. The research starts with contract analysis to extract insights for the selected SFCCs, where the clauses and provisions of the three SFCCs are compared based on the output of the text-analysis process. Accordingly, questions are developed to assist the user in selecting the most suitable SFCC based on a hybrid multi-criteria decision-making model. Finally, a recommendation system is developed to advise the user on critical success factors to be considered in the contract formation stage. This study adds to the body of knowledge of contract management and aids in proactively avoiding or mitigating conflicts and disputes by reducing the contract-related problems that may arise from inappropriate contract types or provisions.

6.
This paper introduces the concept of set deviation as a tool to characterise the deviation of a set of strings around its set median. The set deviation is defined as the set median of the positive edit sequences between any string and the set median. We show that the set deviation has the same properties as the classic second-order statistical moment. This approach is generalised to higher-order moments of a set of strings. We then show how the set deviation can be efficiently used in well-known statistical algorithms to improve the computation of the set median of a set of strings, illustrating this concept with several examples, particularly in post-processing of texts extracted from video sequences.

7.
Research on the Selection of Knowledge Sources in Domain Ontology Construction
Ontology has been a research focus in the field of information processing in recent years, and building a domain ontology is the foundation of ontology research and application; within this process, the selection of knowledge sources is crucial. Three methods are currently used to select domain knowledge sources, but each has clear shortcomings. Addressing these shortcomings, this paper presents a new selection method and applies it to build an actual domain ontology. The results show that the new method is correct, feasible, and effective.

8.
In [1], Algorithm III is proposed as a solution to the problem of finding the median of a multiset distributed on two processes. In this note it is shown how this algorithm fails to find the median and how it can be corrected.

9.
Jong Soo Park, Myunghwan Kim. Software, 1989, 19(11): 1105–1110
A selection algorithm to find the kth smallest of n elements is presented. The algorithm mainly consists of the partition procedure and an improved method to choose a partitioning element. The algorithm estimates the partitioning element from a small sample so that the kth element is contained in the smaller of the two partitions resulting from partitioning. The partitioning element is determined using an estimate of the cumulative frequency distribution from the theory of nonparametric statistics. The expected number of comparisons is found through experimental tests to be n + min(k, n−k) + O(n^(2/3)), where the sample size is approximately n^(2/3). The experimental results show that the performance of the algorithm is improved compared to two known selection algorithms, particularly when the selection index is near the median.
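The overall idea can be sketched as a partition-based selection whose pivot is estimated from a small sample of size about n^(2/3). The sample-rank formula below is a simple proportional stand-in for the nonparametric cumulative-frequency estimate described in the abstract, and all names are ours:

```python
import random

def select(arr, k):
    """Return the k-th smallest element of arr (k is 1-based).

    The pivot is picked from a sorted sample of ~n**(2/3) elements at the
    rank proportional to k, so the k-th element tends to fall in the
    smaller partition.
    """
    arr = list(arr)
    while True:
        n = len(arr)
        if n <= 20:                      # small base case: just sort
            return sorted(arr)[k - 1]
        sample = sorted(random.sample(arr, max(3, round(n ** (2 / 3)))))
        pivot = sample[min(len(sample) - 1, (k - 1) * len(sample) // n)]
        lo = [x for x in arr if x < pivot]
        eq = [x for x in arr if x == pivot]
        if k <= len(lo):
            arr = lo                     # answer lies below the pivot
        elif k <= len(lo) + len(eq):
            return pivot
        else:
            k -= len(lo) + len(eq)       # answer lies above the pivot
            arr = [x for x in arr if x > pivot]
```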

10.
Algorithmic solutions of parametric problems of two types with a parameter in the objective function and with a parameter in the system of constraints are considered in a Euclidean combinatorial set of combinations with repetitions. Translated from Kibernetika i Sistemnyi Analiz, No. 6, pp. 160–165, November–December, 1999.

11.
The median graph has been presented as a useful tool to represent a set of graphs. Nevertheless, its computation is very complex, and the existing algorithms are restricted to limited amounts of data. In this paper we propose a new approach to the computation of the median graph based on graph embedding. Graphs are embedded into a vector space and the median is computed in the vector domain. We have designed a procedure based on the weighted mean of a pair of graphs to go from the vector domain back to the graph domain in order to obtain a final approximation of the median graph. Experiments on three different databases containing large graphs show that we succeed in computing good approximations of the median graph. We have also applied the median graph to perform some basic classification tasks, achieving reasonably good results. These experiments on real data open the door to the application of the median graph in a number of more complex machine learning algorithms where a representative of a set of graphs is needed.
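Once the graphs are embedded as vectors, a natural median in the vector domain is the geometric median (the point minimising the sum of Euclidean distances), which Weiszfeld's iteration computes. This is a generic sketch of that vector-domain step only; the embedding itself and the weighted-mean mapping back to a graph are the paper's contribution and are omitted here:

```python
def geometric_median(points, iters=100, eps=1e-9):
    """Weiszfeld iteration: point minimising summed Euclidean distance.

    points: list of equal-length vectors (the embedded graphs).
    """
    m = [sum(c) / len(points) for c in zip(*points)]   # start at centroid
    for _ in range(iters):
        num = [0.0] * len(m)
        den = 0.0
        for p in points:
            d = max(eps, sum((a - b) ** 2 for a, b in zip(p, m)) ** 0.5)
            w = 1.0 / d                                # inverse-distance weight
            den += w
            num = [s + w * a for s, a in zip(num, p)]
        m = [s / den for s in num]
    return m
```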

12.
A variation of ranked set sampling (RSS), multistage RSS (MSRSS), is investigated for the estimation of the distribution function and some of its quantiles, in particular the median. It is shown that this method is significantly more efficient than simple random sampling (SRS). The method becomes more and more effective as the number of stages r increases. Two estimators of the median based on MSRSS are proposed and compared to the sample median obtained by SRS.
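For orientation, ordinary one-stage RSS with perfect ranking can be sketched as follows (this is the r = 1 baseline, not the multistage MSRSS estimators of the paper, and ranking here uses the true values, which real applications replace with cheap judgment ranking):

```python
import random
import statistics

def rss_sample(population, set_size, cycles, rng=random):
    """One-stage ranked set sample with perfect ranking.

    For each rank r = 0..set_size-1, draw set_size units, rank them,
    and keep the r-th order statistic; repeat for the given cycles.
    """
    out = []
    for _ in range(cycles):
        for r in range(set_size):
            chunk = sorted(rng.sample(population, set_size))
            out.append(chunk[r])
    return out

def rss_median(population, set_size=3, cycles=10, rng=random):
    """Sample median of a ranked set sample."""
    return statistics.median(rss_sample(population, set_size, cycles, rng))
```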

13.
A conceptual problem that appears in different contexts of clustering analysis is that of measuring the degree of compatibility between two sequences of numbers. This problem is usually addressed by means of numerical indexes referred to as sequence correlation indexes. This paper elaborates on why some specific sequence correlation indexes may not be good choices depending on the application scenario at hand. A variant of the product-moment correlation coefficient and a weighted formulation for the Goodman–Kruskal and Kendall indexes are derived that may be more appropriate for some particular application scenarios. The proposed and existing indexes are analyzed from different perspectives, such as their sensitivity to the ranks and magnitudes of the sequences under evaluation, among other relevant aspects of the problem. The results help suggest scenarios within the context of clustering analysis that are possibly more appropriate for the application of each index.
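As a concrete reference point, the plain (unweighted) Kendall rank correlation between two sequences counts concordant versus discordant pairs; the paper's weighted formulations modify each pair's contribution. A minimal sketch for tie-free sequences:

```python
def kendall_tau(a, b):
    """Kendall rank correlation (tau-a) for equal-length, tie-free sequences.

    Each pair (i, j) votes concordant if both sequences order it the same
    way, discordant if they disagree.
    """
    assert len(a) == len(b)
    n = len(a)
    conc = disc = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (a[i] - a[j]) * (b[i] - b[j])
            if s > 0:
                conc += 1
            elif s < 0:
                disc += 1
    return (conc - disc) / (n * (n - 1) // 2)
```

A weighted variant would replace each pair's unit vote with a weight depending, for example, on the pair's ranks or magnitudes.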

14.
P. V. Poblete. Algorithmica, 2001, 29(1–2): 227–237
Given a set S of N distinct elements in random order and a pivot x ∈ S, we study the problem of simultaneously finding the left and the right neighbors of x, i.e., L = max{u | u < x} and R = min{v | v > x}. We analyze an adaptive algorithm that solves this problem by scanning the set S while maintaining current values for the neighbors L and R. Each new element inspected is compared first against the neighbor on the most populous side, then (if necessary) against the neighbor on the other side, and finally (if necessary) against the pivot. This algorithm may require 3N comparisons in the worst case, but it performs well on average. If the pivot has rank αN, where α is fixed and α < 1/2, the algorithm does (1+α)N + Θ(log N) comparisons on average, with a variance of 3 ln N + Θ(1). However, in the case where the pivot is the median, the average becomes (3/2)N + Θ(√N), while the variance grows to (1/2 − π/8)N + Θ(log N). We also prove that, in the αN case, the limit distribution is Gaussian. This work has been supported in part by Grants FONDECYT (Chile) 1950622 and 1981029. Online publication October 6, 2000.
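The scan described above can be sketched directly (a straightforward reading of the comparison order in the abstract, with our own bookkeeping for the side counts):

```python
def neighbors(S, x):
    """Left/right neighbors of pivot x in S:
    L = max{u in S : u < x}, R = min{v in S : v > x}.

    Each element is compared first against the neighbor on the currently
    more populous side, then the other neighbor, then the pivot.
    """
    L = R = None
    nL = nR = 0                      # elements seen on each side so far
    for y in S:
        if y == x:
            continue
        if nL >= nR:                 # left side more populous: try L first
            if L is not None and y <= L:
                nL += 1              # below L, cannot improve either bound
            elif y < x:
                L, nL = y, nL + 1    # new best left neighbor
            elif R is None or y < R:
                R, nR = y, nR + 1    # new best right neighbor
            else:
                nR += 1
        else:                        # right side more populous: try R first
            if R is not None and y >= R:
                nR += 1
            elif y > x:
                R, nR = y, nR + 1
            elif L is None or y > L:
                L, nL = y, nL + 1
            else:
                nL += 1
    return L, R
```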

15.
In this paper we consider the problem of finding two parallel rectangles in arbitrary orientation that cover a given set of n points in a plane, such that the area of the larger rectangle is minimized. We propose an algorithm that solves the problem in O(n^3) time using O(n^2) space. Without altering the complexity, our approach can be used to solve another optimization problem, namely, minimizing the sum of the areas of two arbitrarily oriented parallel rectangles covering a given set of points in a plane.

16.
17.
The number of shifts is one of the most important criteria for production planners seeking to minimize production costs. The application was carried out at a facility producing B12, a special kind of chemical used for paint binders, with the first objective being to optimize the manpower rotation in B12 production. Seasonal variations in the demand for B12 add further importance to the number of shifts in the facility. The purpose of this article is to optimize the shift periods subject to raw material, shipping date, inventory, and demand constraints. The model is developed for a paint factory's reactor facility. The production system has a raw material inventory, a reactor, and a finished goods inventory. We use fuzzy control to optimize the number of shifts under the constraints given above.

18.
This paper presents a new design method, based on fuzzy inference, for a median filter that removes salt-and-pepper noise from images. In image restoration, the ideal is to process only the degraded parts of an image and leave the undegraded parts untouched, but in practice whether a given pixel is a noise point is inherently fuzzy. Fuzzy inference is therefore used to estimate the degree to which a pixel is degraded, and several fuzzy filters are used in combination. Experimental results show that the method is applicable to restoring images corrupted by salt-and-pepper noise over a wide range of noise rates.

19.
Cables offer interesting possibilities in bridge design, but are rather susceptible to damage. Since damage in a cable changes its natural vibration frequency, it can be assessed with a vibration-based finite element updating procedure. However, the natural frequency of a cable is also influenced by the temperature, and the measured frequencies are prone to measurement errors. Therefore, it is useful to check the sensitivity of the identified damage with respect to these factors. This paper presents a methodology based on fuzzy numbers to investigate the propagation of measurement errors and uncertainty on the structural temperature throughout the updating procedure.

20.
An interesting new approach for selecting a distribution to describe the observed data has been presented in a paper recently published in Computers and Industrial Engineering. While appreciating the new contribution of the paper, we highlight an important difficulty with their method and suggest that data envelopment analysis (DEA) could be advantageously used to improve the multi-criterion evaluation.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号