Similar Literature
20 similar records found.
1.
The study of confusion data is a well established practice in psychology. Although many types of analytical approaches for confusion data are available, among the most common methods are the extraction of 1 or more subsets of stimuli, the partitioning of the complete stimulus set into distinct groups, and the ordering of the stimulus set. Although standard commercial software packages can sometimes facilitate these types of analyses, they are not guaranteed to produce optimal solutions. The authors present a MATLAB *.m file for preprocessing confusion matrices, which includes fitting of the similarity-choice model. Two additional MATLAB programs are available for optimally clustering stimuli on the basis of confusion data. The authors also developed programs for optimally ordering stimuli and extracting subsets of stimuli using information from confusion matrices. Together, these programs provide several pragmatic alternatives for the applied researcher when analyzing confusion data. Although the programs are described within the context of confusion data, they are also amenable to other types of proximity data.
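To make the kind of analysis described above concrete, here is a loose Python sketch (the paper itself supplies MATLAB programs and guarantees optimal solutions; the hierarchical clustering and dendrogram seriation below are only heuristic stand-ins, and the toy confusion matrix is invented):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster, leaves_list
from scipy.spatial.distance import squareform

rng = np.random.default_rng(5)
C = rng.integers(0, 5, (8, 8)) + np.diag(rng.integers(30, 60, 8))  # toy confusion counts
P = C / C.sum(axis=1, keepdims=True)        # row-normalized confusion proportions
S = (P + P.T) / 2.0                         # symmetrized similarity
D = 1.0 - S / S.max()                       # simple similarity-to-distance conversion
np.fill_diagonal(D, 0.0)

Z = linkage(squareform(D, checks=False), method="average")
print("cluster labels:", fcluster(Z, t=3, criterion="maxclust"))
print("stimulus order (dendrogram leaves):", leaves_list(Z))
```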

2.
A number of important applications require the clustering of binary data sets. Traditional nonhierarchical cluster analysis techniques, such as the popular K-means algorithm, can often be successfully applied to these data sets. However, the presence of masking variables in a data set can impede the ability of the K-means algorithm to recover the true cluster structure. The author presents a heuristic procedure that selects an appropriate subset from among the set of all candidate clustering variables. Specifically, this procedure attempts to select only those variables that contribute to the definition of true cluster structure while eliminating variables that can hide (or mask) that true structure. Experimental testing of the proposed variable-selection procedure reveals that it is extremely successful at accomplishing this goal.
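The masking problem and the idea of screening variables before K-means can be illustrated with a toy example. The greedy, silhouette-based elimination below is not the author's heuristic, just a hedged sketch; the data-generating choices are assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(6)
n = 150
true = rng.integers(0, 3, n)
# Three informative binary variables (each tied to one cluster) plus five masking variables.
p_on = np.where(true[:, None] == np.arange(3), 0.9, 0.1)
informative = (rng.random((n, 3)) < p_on).astype(float)
masking = rng.integers(0, 2, (n, 5)).astype(float)
X = np.hstack([informative, masking])

def criterion(cols):
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X[:, cols])
    return silhouette_score(X[:, cols], labels)

# Greedy backward elimination: drop a variable whenever dropping it improves the criterion.
cols = list(range(X.shape[1]))
improved = True
while improved and len(cols) > 2:
    improved = False
    for c in list(cols):
        trial = [k for k in cols if k != c]
        if criterion(trial) > criterion(cols):
            cols, improved = trial, True
print("selected variables:", cols)   # ideally the informative columns 0, 1, 2
```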

3.
Evaluations of several commercial presence-absence (P-A) test kits were performed over a 6-month period in 1990 by using the Ontario Ministry of the Environment (MOE) P-A test for comparison. The general principles of the multiple-tube fermentation technique formed the basis for conducting the product evaluations. Each week, a surface water sample was diluted and inoculated into twenty-five 99-ml dilution blanks for each of three dilutions. The inoculated dilution blanks from each dilution series were randomly sorted into sets of five. Three of these sets were inoculated into the P-A test kits or vice versa, as required. The other two sets were passed through membrane filters, and one set of five membrane filters was placed onto m-Endo agar LES to give replicate total coliform counts and the other set was placed onto m-TEC agar to give replicate fecal coliform results. A statistical analysis of the results was performed by a modified logistic transform method, which provided an improved way to compare binary data obtained from the different test kits. The comparative test results showed that three of the four commercial products tested gave very good levels of recovery and that the fourth commercial product gave only fair levels of recovery when the data were compared with the data from MOE P-A tests and membrane filter tests. P-A bottles showing positive results after 18 h of incubation that were subcultured immediately in ECMUG tubes frequently could be confirmed as containing total coliforms, fecal coliforms, or Escherichia coli after 6 h of incubation; thus, the total incubation time was only 24 h. (ABSTRACT TRUNCATED AT 250 WORDS)

4.
5.
Grouping discontinuities (structural planes) into dominant sets is the foundation of rock mass engineering stability analysis. To this end, a spectral clustering algorithm is used to partition discontinuities into dominant sets according to their orientation data. Compared with the widely used K-means clustering, this algorithm can converge to the global optimum. The squared sine of the acute angle between discontinuity normal vectors is adopted as the measure of similarity between discontinuities, and the spectral clustering algorithm is applied to solve the optimization; at the same time, the Silhouette index is introduced to evaluate clustering validity and determine the optimal number of groups. Results computed with the spectral clustering method on artificially generated discontinuity data verify the reliability of the approach. Finally, the algorithm is applied to the grouping of dominant discontinuity sets of the rock mass at the Sanshandao gold mine, where it achieves satisfactory classification results and provides a reliable data basis for further rock mass stability analysis.
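A minimal sketch of the workflow described above, using synthetic orientations rather than the Sanshandao data: the sin² of the acute angle between discontinuity normals gives a pairwise measure, spectral clustering runs on a precomputed affinity, and the Silhouette index picks the number of sets. The dip/dip-direction conversion and the use of 1 − sin² as the affinity are my assumptions:

```python
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics import silhouette_score

def normals_from_orientation(dip_deg, dip_dir_deg):
    """Unit normal (pole) vectors from dip / dip-direction angles in degrees."""
    dip, ddir = np.radians(dip_deg), np.radians(dip_dir_deg)
    return np.column_stack([np.sin(dip) * np.sin(ddir),
                            np.sin(dip) * np.cos(ddir),
                            np.cos(dip)])

def sin2_distance(normals):
    """D_ij = sin^2 of the acute angle between normals i and j (0 for parallel planes)."""
    cosang = np.abs(normals @ normals.T).clip(0.0, 1.0)
    return 1.0 - cosang ** 2

rng = np.random.default_rng(0)
# Synthetic discontinuity data: three sets with scattered orientations.
dips = np.concatenate([rng.normal(m, 5, 60) for m in (30, 60, 80)])
ddirs = np.concatenate([rng.normal(m, 8, 60) for m in (40, 150, 260)])
D = sin2_distance(normals_from_orientation(dips, ddirs))
A = 1.0 - D                      # affinity: cos^2, large for nearly parallel planes

best_k, best_score = None, -1.0
for k in range(2, 7):
    labels = SpectralClustering(n_clusters=k, affinity="precomputed",
                                random_state=0).fit_predict(A)
    score = silhouette_score(D, labels, metric="precomputed")
    if score > best_score:
        best_k, best_score = k, score
print("best number of discontinuity sets by Silhouette:", best_k)
```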

6.
The interaction among three independent data sets (anatomy/morphology, cytology, molecules) has been evaluated within the controversial genus Trichomanes (Hymenophyllaceae). Anatomy/morphology, cytology, and rbcL sequences, despite their high and significant level of incongruence, were thus empirically combined with differential weighting in a cladistic analysis within Trichomanes in order to give an appreciation of the contribution of each data set to the resulting topologies and to study more precisely the nature of potential conflicts. Results show that standard statistical values (such as bootstrap support) do not appear to be objectively useful for choosing the "best" topology or the "good" clades provided by the combination. This weighting approach reveals three cases: (i) some clades (such as subgenus Didymoglossum) are always retrieved, corresponding to the absence of conflict between the different data sets; (ii) some new clades (such as subgenus Achomanes) are either produced or reinforced as a "synergetic" result of combining the data; and (iii) the remaining conflicting clades reflect persistent incongruence between the data sets, whatever the weighting.

7.
In many countries, the most widely used method for timing plan selection and implementation is the time-of-day (TOD) method. In TOD mode, a few traffic patterns that exist in the historical volume data are recognized and used to find the signal timing plans needed to achieve optimum performance of the intersections during the day. Traffic engineers usually determine TOD breakpoints by analyzing 1 or 2 days' worth of traffic data and relying on their engineering judgment. Current statistical methods, such as hierarchical and K-means clustering, determine TOD breakpoints but introduce a large number of transitions. This paper proposes using the Z-scores of the traffic flow and time variables in K-means clustering to reduce the number of transitions. The optimum number of breakpoints is chosen based on a microscopic simulation model considering a set of performance measures. Using simulation and the K-means algorithm, it was found that five clusters are optimal for a major arterial in Al-Khobar, Saudi Arabia. As an alternative to the simulation-based approach, a subtractive-algorithm-based K-means technique is introduced to determine the optimum number of TOD intervals. Through simulation, it was found that both approaches result in almost the same measure-of-effectiveness (MOE) values. The two proposed approaches seem promising for similar studies in other regions, and both can be extended to different types of roads. The paper also suggests a procedure for considering the cyclic nature of daily traffic in the clustering effort.
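A compact sketch of the core idea, with invented hourly volumes (not the Al-Khobar data): standardize flow and a time-of-day variable, run K-means, and read TOD breakpoints off the label changes:

```python
import numpy as np
from scipy.stats import zscore
from sklearn.cluster import KMeans

hours = np.arange(24)
# Hypothetical hourly volumes with morning and evening peaks.
flow = 300 + 900 * np.exp(-(hours - 8) ** 2 / 4) + 1100 * np.exp(-(hours - 17) ** 2 / 5)

# Clustering z-scored flow together with the hour helps keep clusters contiguous in time.
X = np.column_stack([zscore(flow), zscore(hours.astype(float))])
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(X)

# Breakpoints = hours where the cluster label changes.
breakpoints = hours[1:][labels[1:] != labels[:-1]]
print("TOD breakpoints (hour of day):", breakpoints)
```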

8.
A systematic analysis of the localization of objects in extra-personal space requires a three-dimensional method of documenting location. In auditory localization studies the location of a sound source is often reduced to a directional vector with constant magnitude with respect to the observer, data being plotted on a unit sphere with the observer at the origin. This is an attractive form of data representation as the relevant spherical statistical and graphical methods are well described. In this paper we collect together a set of spherical plotting and statistical procedures to visualize and summarize these data. We describe methods for visualizing auditory localization data without assuming that the principal components of the data are aligned with the coordinate system. As a means of comparing experimental techniques and having a common set of data for the verification of spherical statistics, the software (implemented in MATLAB) and database described in this paper have been placed in the public domain. Although originally intended for the visualization and summarization of auditory psychophysical data, these routines are sufficiently general to be applied in other situations involving spherical data.
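As a rough illustration of the kind of summary such routines provide (this is not the published MATLAB toolbox), the snippet below computes the spherical mean direction and resultant length of simulated unit-vector localization responses; the azimuth/elevation convention is an assumption:

```python
import numpy as np

def mean_direction(unit_vectors):
    """Mean direction and resultant length R of unit vectors on the sphere."""
    r = unit_vectors.sum(axis=0)
    R = np.linalg.norm(r) / len(unit_vectors)
    return r / np.linalg.norm(r), R

rng = np.random.default_rng(1)
# Hypothetical responses scattered around azimuth 30 deg, elevation 10 deg.
az = np.radians(rng.normal(30, 5, 200))
el = np.radians(rng.normal(10, 5, 200))
v = np.column_stack([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])

mu, R = mean_direction(v)
print("mean direction:", np.round(mu, 3), "resultant length:", round(R, 3))
```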

9.
Although the availability of discrete-element method (DEM) codes has improved, the need still exists to solve simple verification problems to obtain an understanding of these codes. Different DEM codes may have subtle differences in the manner in which the method is implemented, and the significance of these differences may be problem dependent. This paper investigates a series of simple, one- and two-particle contact problems. These problems, which employ various types of damping, are shown to be equivalent to classical one-dimensional vibration problems. The solutions are discussed in the context of the DEM, and results from the DEM are shown to compare very well with the classical solutions. It is demonstrated that results from a well-known commercial two-dimensional code (PFC2D) and the open-source three-dimensional code (YADE) yield identical solutions to these problems, provided the solution process is set up consistently. A discussion of the differences in how gravity and damping are implemented may be of interest to users of PFC2D.
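The flavor of these verification problems can be reproduced in a few lines. The sketch below (my own example, not a PFC2D or YADE input file) integrates a single particle on a linear contact spring with viscous damping using an explicit DEM-style update and compares it with the classical underdamped solution; all parameter values are assumptions:

```python
import numpy as np

k, m, c = 1.0e4, 1.0, 5.0          # contact stiffness (N/m), mass (kg), damping (N*s/m)
x0 = 0.01                           # initial displacement (m)
dt = 1.0e-4                         # time step, far below the critical step ~ 2*sqrt(m/k)

wn = np.sqrt(k / m)                 # natural frequency
zeta = c / (2.0 * np.sqrt(k * m))   # damping ratio (underdamped here)
wd = wn * np.sqrt(1.0 - zeta ** 2)

times = np.arange(0.0, 0.2, dt)
x, v, numeric = x0, 0.0, []
for _ in times:
    numeric.append(x)
    a = (-k * x - c * v) / m        # spring + dashpot force -> acceleration
    v += a * dt                     # explicit (semi-implicit Euler) velocity update
    x += v * dt                     # position update, as in typical DEM cycles
numeric = np.array(numeric)

# Classical underdamped free-vibration solution with x(0) = x0, v(0) = 0.
analytic = np.exp(-zeta * wn * times) * (
    x0 * np.cos(wd * times) + (zeta * wn * x0 / wd) * np.sin(wd * times))
print("max |numeric - analytic|:", np.abs(numeric - analytic).max())
```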

10.
A new method that makes possible, for the first time, simultaneous acquisition of individual dissociation mass spectra of isomeric ions in mixtures is presented. This method exploits the exquisite sensitivity of blackbody infrared radiative dissociation kinetics to minor differences in ion structure. Instead of separating precursor ions based on mass (isomers have identical mass), fragment ions are related to their original precursor ions on the basis of rate constants for dissociation. Mixtures of the peptide isomers des-R1 and des-R9 bradykinin are dissociated simultaneously at several temperatures. By fitting the kinetic data to double-exponential functions, the dissociation rate constant and abundance of each isomer in the mixture are obtained. To overcome the difficulty of fitting double-exponential functions, a novel global analysis method is used in which several dissociation data sets collected at different temperatures are simultaneously fit. The kinetic data measured at multiple temperatures are modeled with the preexponentials (corresponding to the abundance of each isomer) as "global" parameters which are constant for all data sets and the exponentials (rate constants) as "local" variables which differ for each data set. The use of global parameters significantly improves the accuracy with which abundances and dissociation rate constants of each individual compound can be obtained from the mixture data. Fragment ions produced from a mixture of these two isomers are related back to their respective precursor ions from the kinetic data. Thus, not only can the composition of the isomeric mixture be determined but an individual tandem mass spectrum of each component in the mixture can be obtained.
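A schematic version of the global analysis, with synthetic kinetic data rather than the bradykinin measurements: the isomer abundance is a single "global" parameter shared by every temperature, while each temperature gets its own "local" pair of rate constants. Which fitted component corresponds to which isomer is set only by the initial guess:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
t = np.linspace(0.0, 30.0, 40)                       # dissociation delay (s), synthetic
true_a1 = 0.6                                        # abundance of isomer 1
true_k = [(0.30, 0.05), (0.45, 0.09), (0.70, 0.15)]  # (k1, k2) at three temperatures
data = [true_a1 * np.exp(-k1 * t) + (1 - true_a1) * np.exp(-k2 * t)
        + rng.normal(0.0, 0.01, t.size) for k1, k2 in true_k]

def residuals(p):
    a1 = p[0]                                        # global: shared by every data set
    res = []
    for i, y in enumerate(data):
        k1, k2 = p[1 + 2 * i], p[2 + 2 * i]          # local: one pair per temperature
        res.append(a1 * np.exp(-k1 * t) + (1.0 - a1) * np.exp(-k2 * t) - y)
    return np.concatenate(res)

# Initial guess: slot 1 is the faster-decaying component in every data set.
p0 = [0.5] + [0.5, 0.1] * len(data)
lower = [0.0] * len(p0)
upper = [1.0] + [np.inf] * (len(p0) - 1)
fit = least_squares(residuals, p0, bounds=(lower, upper))
print("fitted abundance of isomer 1:", round(fit.x[0], 3))
print("fitted (k1, k2) per temperature:\n", fit.x[1:].reshape(-1, 2))
```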

11.
Simplified Trial Wedge Method for Soil Nailed Wall Analysis
This paper presents a new approach that allows soil nailed walls to be analyzed using a trial wedge method. Most soil nailed wall analysis methods are rooted in traditional slope stability solutions with curvilinear or bilinear slip surfaces. This has led to limited access to these methods due to the cost of commercial software. In addition, there are at least two well-documented test walls brought to failure that indicated evidence of relatively steep, approximately linear slip surfaces instead of the more complex surfaces assumed by most software packages. The simplified trial wedge method is intended as a relatively simple and inexpensive method for preliminary or supplemental design calculations for soil nailed walls. The method stems from the existing Federal Highway Administration analysis guidelines. Procedures are outlined for implementing the trial wedge method using a spreadsheet-based approach. The method is applied to two test walls that were intentionally brought to failure, the Amherst Test Wall in clay, and the Clouterre Test Wall No. 1 in sand. In each case, the trial wedge analysis produces results consistent with the failure mode of the wall.
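To show the spreadsheet-style calculation the method implies, here is a stripped-down trial wedge sweep for a vertical cut. It deliberately omits the nail force terms that the actual method adds (which is why the computed factor of safety comes out well below 1), and all soil parameters are assumed values:

```python
import numpy as np

H = 8.0            # wall height, m
gamma = 18.0       # soil unit weight, kN/m^3
c = 10.0           # cohesion, kPa
phi = np.radians(30.0)

theta = np.radians(np.arange(35.0, 89.0, 0.5))       # trial planar slip surface angles
L = H / np.sin(theta)                                 # slip surface length per metre run
W = 0.5 * gamma * H ** 2 / np.tan(theta)              # wedge weight per metre run

driving = W * np.sin(theta)                           # weight component along the plane
resisting = c * L + W * np.cos(theta) * np.tan(phi)   # cohesion + friction (no nails!)
FS = resisting / driving

i = np.argmin(FS)
print(f"critical wedge angle: {np.degrees(theta[i]):.1f} deg, FS = {FS[i]:.2f}")
```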

12.
[Correction Notice: An erratum for this article was reported in Vol 105(2) of Psychological Bulletin (see record 2008-10620-001). Information was inadvertently left out of the author note on page 417. The information that should have been included in the author note is provided in the erratum.] Evaluation of the fit of a data set to an algebraic model often relies on the squared correlation coefficient (r²). Algebraic models commonly prescribe functions between two variables that are monotonic. We created data sets that are monotonic but otherwise random and computed their r² values. These values are shown to far exceed those for data sets not constrained to be monotonic. We tabulated selected cumulants of the distribution of r² for monotone data fit to two common models, linear and power functions, for several conditions of randomness for data sets comprising 3, 9, and 15 points. Random monotone data fit all these functions well. The common experimental practices of averaging several sets of data and using regular spacing in the values of the independent variable both tend to produce slight additional improvements in fit. Consideration of these results reveals that correlation alone is inadequate to test the fit of monotone data to algebraic models. Some helpful auxiliary techniques are recommended.
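The central observation is easy to reproduce. The simulation below (my own, not the paper's tabulated cumulants) draws random monotone data sets of 3, 9, and 15 points and fits a straight line; the resulting r² values are typically very high:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(3)
for n in (3, 9, 15):
    r2 = []
    for _ in range(2000):
        y = np.sort(rng.uniform(0, 1, n))   # monotone but otherwise random responses
        x = np.linspace(0, 1, n)            # regularly spaced predictor
        r2.append(pearsonr(x, y)[0] ** 2)
    print(f"n={n:2d}  median r^2 of random monotone data: {np.median(r2):.3f}")
```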

13.
"Two equated groups of 96 Ss rated the self-concept on 40 semantic scales. Each set of data was separately factor analyzed and rotated by three objective procedures. Corresponding pairs of rotated solutions were compared to determine which method yielded greatest invariance. Kaiser's normal varimax method of rotation provided the most satisfactory factor structure for interpretation. Six dimensions of the self-concept were identified, and were called Self-Confidence, Social Worth, Corpulence, Potency, Independence, and Tension-Discomfort." (PsycINFO Database Record (c) 2010 APA, all rights reserved)  相似文献   

14.
New Equipment and Technical Features of Continuous Cold Rolling
Drawing on a project example, this paper introduces the various technical schemes adopted in a coupled pickling and cold rolling line.

15.
The transit route network design (TRND) problem seeks a set of bus routes and schedules that is optimal in the sense that it maximizes the utility of an urban bus system for passengers while minimizing operator cost. Because of the computational intractability of the problem, finding an optimal solution for most systems is not possible. Instead, a wide variety of heuristic and meta-heuristic approaches have been applied to the problem to attempt to find near-optimal solutions. This paper presents an optimization system that synthesizes aspects of previous approaches into a scalable, flexible, intelligent agent architecture. This architecture has successfully been applied to other transportation and logistics problems in both research studies and commercial applications. This study shows that this intelligent agent system outperforms previous solutions for both a benchmark Swiss bus network system and the very large bus system in Delhi, India. Moreover, the system produces in a single run a set of Pareto equivalent solutions that allow a transit operator to evaluate the trade-offs between operator costs and passenger costs.

16.
The role of cognitive mediators in identifying differences in aggression was examined. Male and female adolescents incarcerated for antisocial aggression offenses and high-school students rated as either high or low in aggression were compared in terms of two sets of cognitive mediators: skills in solving social problems and beliefs supporting aggression. Antisocial-aggressive individuals were most likely (and low-aggressive individuals were least likely) to solve social problems by: defining problems in hostile ways; adopting hostile goals; seeking few additional facts; generating few alternative solutions; anticipating few consequences for aggression; and choosing few "best" and "second best" solutions that were rated as "effective." Antisocial-aggressive individuals were also most likely to hold a set of beliefs supporting the use of aggression, including beliefs that aggression: is a legitimate response; increases self-esteem; helps avoid a negative image; and does not lead to suffering by the victim. The ways in which these findings further elaborate a model of social-cognitive development and extend it to antisocial-aggressive adolescents are discussed.

17.
This paper describes a cost-effective technique that optimally utilizes all available diagnostic studies for three-dimensional treatment planning. A simulator unit modified to produce cross-sectional images (simulator-CT unit) is used to create a reference data set with the patient in the treatment position. Registration software (qsh) brings other diagnostic studies into agreement with this reference data set. Two cases are presented as examples of the use of this technique. Registration of abdominal scans from the same patient demonstrates the warping of a nontreatment position study to the treatment position. The second case is based on paired data sets through the head, in which the diagnostic study was obtained by using a gantry tilt to follow the base of the skull and to avoid sections passing through the teeth. The registration software provides a method for combining diagnostic studies into a single "master" data set. The success of the transformation depends on the operator's ability to identify corresponding anatomic landmarks for different data sets and on the magnitude of the variation in the patient's position from one procedure to the next. Limitations in image quality and the number of cross-sections obtainable from a simulator-CT unit can be partially overcome by using the described technique. Thus, the information contained in nontreatment position diagnostic tests can be used accurately for treatment planning at limited cost.

18.
There are generally three stages to the development of rules for matching vital events data from two sources covering the same population: (a) establishing a set of "true" matches and nonmatches; (b) determining the best tolerance limits for each single characteristic which might be used in matching; and (c) experimenting to determine the set or sets of characteristics and the weights to be used in classifying a pair of records as matched or nonmatched. Specific examples, based on early matching experiments with data from the dual record system of the Mindanao Center for Population Studies (MCPS), are presented. Successive application of different sets of characteristics (differential valence rule) to the remaining unmatched events produced an acceptable rule for matching in this study.
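The three-stage logic translates naturally into a scoring rule. The sketch below is schematic only; the field names, tolerances, weights, and cutoff are invented, not the MCPS differential valence rule:

```python
from dataclasses import dataclass

@dataclass
class Record:
    name: str
    birth_year: int
    village: str

# (field, weight, comparator) triples; tolerance limits enter through the comparators.
RULES = [
    ("name",       3.0, lambda a, b: a.name.lower() == b.name.lower()),
    ("birth_year", 2.0, lambda a, b: abs(a.birth_year - b.birth_year) <= 1),
    ("village",    1.0, lambda a, b: a.village == b.village),
]
CUTOFF = 4.0   # total weight needed to classify a pair as matched

def match_score(a, b):
    return sum(w for _, w, ok in RULES if ok(a, b))

a = Record("Maria Cruz", 1974, "Davao")
b = Record("maria cruz", 1975, "Davao")
score = match_score(a, b)
print(score, "->", "matched" if score >= CUTOFF else "unmatched")
```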

19.
The use of multiple imputation for the analysis of missing data.
This article provides a comprehensive review of multiple imputation (MI), a technique for analyzing data sets with missing values. Formally, MI is the process of replacing each missing data point with a set of m > 1 plausible values to generate m complete data sets. These complete data sets are then analyzed by standard statistical software, and the results combined, to give parameter estimates and standard errors that take into account the uncertainty due to the missing data values. This article introduces the idea behind MI, discusses the advantages of MI over existing techniques for addressing missing data, describes how to do MI for real problems, reviews the software available to implement MI, and discusses the results of a simulation study aimed at finding out how assumptions regarding the imputation model affect the parameter estimates provided by MI.
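A minimal sketch of the MI workflow the article reviews (not its software survey): impute m times with a stochastic imputer, analyze each completed data set, and pool with Rubin's rules. The use of scikit-learn's IterativeImputer and the toy regression data are assumptions:

```python
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(7)
x = rng.normal(0, 1, (300, 1))
y = 2.0 + 1.5 * x[:, 0] + rng.normal(0, 1, 300)
data = np.column_stack([x, y])
data[rng.random(300) < 0.3, 1] = np.nan            # make ~30% of y missing

m = 5
estimates, variances = [], []
for i in range(m):
    imputer = IterativeImputer(sample_posterior=True, random_state=i)
    completed = imputer.fit_transform(data)
    yi = completed[:, 1]
    estimates.append(yi.mean())                    # analysis step: estimate the mean of y
    variances.append(yi.var(ddof=1) / len(yi))     # its within-imputation variance

# Rubin's rules: total variance = within + (1 + 1/m) * between.
qbar = np.mean(estimates)
within = np.mean(variances)
between = np.var(estimates, ddof=1)
total_se = np.sqrt(within + (1 + 1 / m) * between)
print(f"pooled mean of y: {qbar:.3f} +/- {total_se:.3f}")
```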

20.
The use of mathematical software packages provides a number of benefits to an engineering user. In general, the packages provide a platform that supports iterative design and parametric analysis through a flexible, transparent interface combined with extensive computing power. This enables an engineering user to develop design equations that are based on fundamental mechanics theories, rather than relying on the “black-box” approach of most commercial design packages. As an example, a closed-form solution for obtaining effective length factors for the design of stepped columns is presented. In the example, a series of formulas is used to demonstrate the transparency of Mathcad, including the ability to use real engineering units in the calculations, to write formulas as they appear in textbooks or in design codes, and to hide and password-protect selected areas. This facilitates easier automation of the design and design-checking processes. Most commercial structural design packages can be classified as black-box packages: the analyst inputs data at one end and receives results at the other without fully appreciating the process the input data have gone through. This tends to reduce the engineer to a technician, blindly implementing the ideas of the software designer. The Mathcad package discussed in this paper, and similar mathematical packages, return the engineer to being in control of the design process.
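In the same spirit, a unit-aware calculation can be written transparently outside Mathcad as well. The snippet below is not the paper's stepped-column solution; it is a generic Euler buckling check using the pint units library, with an assumed section, length, and effective length factor K:

```python
import math
import pint

u = pint.UnitRegistry()
E = 200 * u.GPa        # steel elastic modulus
I = 8.0e6 * u.mm**4    # second moment of area (assumed section)
L = 6.0 * u.m          # unbraced column length
K = 2.1                # effective length factor (assumed end conditions)

# Euler buckling load with units carried through the calculation.
P_cr = (math.pi**2 * E * I / (K * L) ** 2).to(u.kN)
print(f"Euler buckling load: {P_cr:.1f}")
```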

