Primary graft rejection after marrow transplantation occurs more frequently in patients receiving HLA-haploidentical transplants than in those receiving HLA-identical sibling transplants. Both human and experimental animal data suggest that the cells responsible for this phenomenon are host natural killer (NK) cells, T cells, or both. To investigate the mechanisms of graft rejection, we have developed a canine model of marrow transplantation that uses DLA-nonidentical unrelated donors in the absence of postgrafting immunosuppression. In this model most animals rejected their marrow grafts after a preparative regimen of 9.2 Gy total body irradiation (TBI). However, engraftment of DLA-nonidentical marrow can be facilitated when the recipients are pretreated with monoclonal antibody (MoAb) S5, which recognizes CD44. In this report, we extended these observations by first cloning canine CD44 and then mapping the epitope recognized by S5, which was located in a region conserved between human and canine CD44 and was distinct from the hyaluronan-binding domain. However, in vitro binding of S5 caused a conformational change in CD44 that allowed increased hyaluronan binding. We then reexamined the in vivo model of marrow transplantation and compared results obtained with MoAb S5 to those obtained with two other anti-CD44 MoAbs, IM7 and S3. Only MoAb S5 significantly increased the engraftment rate of DLA-nonidentical unrelated marrow, whereas the two other anti-CD44 MoAbs were ineffective. The enhanced in vivo effect was not related to differences in the MoAbs' avidities, since S5 and IM7 bound CD44 equivalently, but most likely to the specific epitope that S5 recognizes. Thus, this study shows that the engraftment-facilitating effect of the anti-CD44 MoAb S5 is epitope specific; if an anti-CD44 MoAb is to be used to facilitate marrow engraftment in humans, one cannot assume that any anti-CD44 MoAb would work.
Tension and vascular headache patients, initially treated with biofeedback and/or relaxation training in either a minimal therapist contact protocol (3 visits) or an intensive individual protocol (10 or 16 visits), were followed up prospectively for 2 years. In the first study, for the first 6 months of follow-up, half of all patients continued to keep headache diaries and were seen monthly, while the other half had only minimal contact. The results at 1-year follow-up, based on 4 weeks of daily headache diaries, revealed equally good maintenance from both treatment protocols and from both follow-up conditions. In Study 2, we found that patients remained improved over pretreatment baseline levels at the 2-year follow-up regardless of initial treatment intensity. Approximately three quarters of vascular patients who were initially improved at posttreatment remained improved at 2 years.
Face datasets are considered a primary tool for evaluating the efficacy of face recognition methods. Here we show that in many of the commonly used face datasets, face images can be recognized at a rate significantly higher than chance even when no face, hair, or clothes features appear in the image. The experiments were done by cutting a small background area from each face image, so that each face dataset provided a new image dataset consisting only of seemingly blank images. An image classification method was then used to measure the classification accuracy. Experimental results show that the classification accuracy ranged from 13.5% (color FERET) to 99% (YaleB). These results indicate that the performance of face recognition methods measured using face image datasets may be biased. Compilable source code used for this experiment is freely available for download via the Internet.
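As an illustration of the experimental procedure described above, the following sketch crops a small, presumably blank corner patch from each image and checks whether a generic classifier can still identify the subject. The directory layout, patch size and location, and the nearest-neighbor classifier are illustrative assumptions, not the authors' exact setup.

```python
# Sketch of the background-patch experiment: crop a small, presumably
# blank corner from each face image and test whether a generic classifier
# can still identify the subject. Directory layout, patch geometry, and
# the 1-NN classifier are illustrative assumptions.
import numpy as np
from pathlib import Path
from PIL import Image
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def load_background_patches(root, patch=10):
    """Assumes each subdirectory of `root` holds the images of one subject."""
    X, y = [], []
    subjects = sorted(p for p in Path(root).iterdir() if p.is_dir())
    for label, subject_dir in enumerate(subjects):
        for img_path in subject_dir.glob("*.png"):  # hypothetical file format
            img = np.asarray(Image.open(img_path).convert("L"), dtype=float)
            X.append(img[:patch, :patch].ravel())   # top-left corner only
            y.append(label)
    return np.array(X), np.array(y)

X, y = load_background_patches("face_dataset/")     # hypothetical path
acc = cross_val_score(KNeighborsClassifier(n_neighbors=1), X, y, cv=5).mean()
print(f"accuracy on background-only patches: {acc:.3f}")  # chance = 1/#subjects
```

If the accuracy on these seemingly blank patches is well above chance, the dataset carries subject-identifying background cues, which is exactly the bias the paper reports.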
The algorithm selection problem is defined as identifying the best-performing machine learning (ML) algorithm for a given combination of dataset, task, and evaluation measure. The human expertise required to evaluate the increasing number of ML algorithms available has resulted in the need to automate the algorithm selection task. Various approaches have emerged to handle the automatic algorithm selection challenge, including meta-learning. Meta-learning is a popular approach that leverages accumulated experience for future learning and typically involves dataset characterization. Existing meta-learning methods often represent a dataset using predefined features and thus cannot be generalized across different ML tasks, or alternatively, learn a dataset's representation in a supervised manner and therefore are unable to deal with unsupervised tasks. In this study, we propose a novel learning-based task-agnostic method for producing dataset representations. Then, we introduce TRIO, a meta-learning approach that utilizes the proposed dataset representations to accurately recommend top-performing algorithms for previously unseen datasets. TRIO first learns graphical representations for the datasets, using four tools to learn the latent interactions among dataset instances, and then utilizes a graph convolutional neural network technique to extract embedding representations from the graphs obtained. We extensively evaluate the effectiveness of our approach on 337 datasets and 195 ML algorithms, demonstrating that TRIO significantly outperforms state-of-the-art methods for algorithm selection for both supervised (classification and regression) and unsupervised (clustering) tasks.
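The following is a hedged sketch of the pipeline described above: each dataset is represented as a k-NN graph over its instances, an embedding is derived with a single GCN-style propagation step, and the algorithm that performed best on the nearest previously seen dataset is recommended. The single untrained layer, the single graph-construction method (instead of four), and all names are simplifying assumptions, not the paper's architecture.

```python
# Hedged sketch of a TRIO-style pipeline: a dataset becomes a k-NN graph
# over its instances, a GCN-style propagation (A_hat @ X @ W) yields
# instance embeddings, and mean-pooling gives a dataset vector used to
# find the nearest previously seen dataset. One untrained layer and one
# graph-construction method stand in for the paper's full setup;
# datasets are assumed to share a feature dimensionality.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def dataset_embedding(X, k=10, dim=32, seed=0):
    A = kneighbors_graph(X, k, mode="connectivity").toarray()
    A_hat = A + np.eye(len(X))                   # add self-loops
    d = A_hat.sum(axis=1)
    A_norm = A_hat / np.sqrt(np.outer(d, d))     # symmetric normalization
    W = np.random.default_rng(seed).standard_normal((X.shape[1], dim))
    H = np.maximum(A_norm @ X @ W, 0)            # one ReLU(GCN-style) layer
    return H.mean(axis=0)                        # pool instances -> dataset vector

def recommend(new_X, past_embeddings, past_best_algos):
    """Recommend the best algorithm of the nearest previously seen dataset."""
    z = dataset_embedding(new_X)
    dists = [np.linalg.norm(z - e) for e in past_embeddings]
    return past_best_algos[int(np.argmin(dists))]
```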
We present a new concept—Wikiometrics—the derivation of metrics and indicators from Wikipedia. Wikipedia provides an accurate representation of the real world due to its size, structure, editing policy and popularity. We demonstrate an innovative "mining" methodology, in which different elements of Wikipedia—content, structure, editorial actions and reader reviews—are used to rank items in a manner which is by no means inferior to rankings produced by experts or other methods. We test our proposed method by applying it to two real-world ranking problems: top world universities and academic journals. Our proposed ranking methods were compared to leading and widely accepted benchmarks and were found to be highly correlated with them, with the advantage that the underlying data are publicly available.
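The evaluation step described above, comparing a Wikipedia-derived ranking against an accepted benchmark, can be illustrated with a rank correlation; the item lists below are placeholders, not the paper's data.

```python
# Illustrative check of the evaluation idea: measure how well a
# Wikipedia-derived ranking agrees with a benchmark ranking using
# Spearman rank correlation. Rankings here are placeholders.
from scipy.stats import spearmanr

benchmark_rank = {"MIT": 1, "Stanford": 2, "Harvard": 3, "Oxford": 4}
wiki_rank      = {"MIT": 1, "Harvard": 2, "Stanford": 3, "Oxford": 4}

items = sorted(benchmark_rank)
rho, p = spearmanr([benchmark_rank[i] for i in items],
                   [wiki_rank[i] for i in items])
print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
```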
The aggregation (sorting) of individual solar cells into an array is commonly based on a single operating point on the current-voltage (I-V) characteristic curve. An alternative approach to cell performance prediction and cell screening is to model the cell with an equivalent electrical circuit, in which the parameters involved are related to the physical phenomena in the device. These analytical models may be represented by a double exponential I-V characteristic with seven parameters, by a double exponential model with five parameters, or by a single exponential equation with four or five parameters. In this article we address methodologies for determining solar cell parameters from measured data points of the I-V characteristic, and introduce a procedure for screening solar cells for arrays. We show that common curve-fitting techniques, e.g. least squares, may produce many combinations of parameter values while maintaining a good fit between the fitted and measured I-V characteristics of the cell. Therefore, techniques relying on curve-fitting criteria alone cannot be used directly for cell parameterization. We propose a consistent procedure that takes into account the entire set of parameter values for a batch of cells. This procedure is based on the definition of a mean cell representing the batch, and takes into account the relative contribution of each parameter to the overall goodness of fit. The procedure is demonstrated on a batch of 50 silicon cells for Space Station Freedom.
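The curve-fitting ambiguity discussed above can be reproduced with the five-parameter single-exponential (single-diode) model: a least-squares fit of the implicit I-V equation can converge to different parameter sets from different starting points while keeping comparable residuals. The model form below is the standard single-diode equation; the measurements are synthetic placeholders.

```python
# Fitting the five-parameter single-exponential (single-diode) model
#   I = Iph - I0*(exp((V + I*Rs)/(n*VT)) - 1) - (V + I*Rs)/Rsh
# by least squares on the implicit residual. Rerunning from different
# starting points x0 illustrates the paper's point: distinct parameter
# sets can give similarly good fits. Data are synthetic placeholders.
import numpy as np
from scipy.optimize import least_squares

VT = 0.0259  # thermal voltage at ~300 K [V]

def implicit_residual(p, V, I):
    Iph, I0, Rs, Rsh, n = p
    return Iph - I0 * np.expm1((V + I * Rs) / (n * VT)) - (V + I * Rs) / Rsh - I

V_meas = np.linspace(0.0, 0.6, 30)                    # placeholder voltage sweep [V]
I_meas = 3.0 - 5e-8 * np.expm1(V_meas / (1.3 * VT))   # synthetic I-V curve [A]

x0 = [3.0, 1e-9, 0.01, 100.0, 1.3]                    # Iph, I0, Rs, Rsh, n
fit = least_squares(implicit_residual, x0, args=(V_meas, I_meas),
                    bounds=([0, 0, 0, 1, 0.5], [10, 1e-3, 1, 1e4, 3]))
print(dict(zip(["Iph", "I0", "Rs", "Rsh", "n"], fit.x)))
```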
Ensemble methods combine several individual pattern classifiers in order to achieve better classification. The challenge is to choose the minimal number of classifiers that achieves the best performance. An ensemble that contains too many members might incur large storage requirements and even reduce classification performance. The goal of ensemble pruning is to identify a subset of ensemble members that performs at least as well as the original ensemble and to discard all other members. In this paper, we introduce the Collective-Agreement-based Pruning (CAP) method. Rather than ranking individual members, CAP ranks subsets by considering the individual predictive ability of each member along with the degree of redundancy among them. Subsets whose members highly agree with the class while having low inter-agreement are preferred.
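A hedged reading of the selection criterion above is a CFS-style merit applied to classifier outputs: reward each member's agreement with the true class, penalize agreement among members, and grow the subset greedily. The exact merit formula is an illustrative assumption, not necessarily the paper's definition.

```python
# Sketch of collective-agreement-style pruning: greedily grow a subset
# using a CFS-like merit that rewards agreement of each member's
# predictions with the true class (r_cf) and penalizes agreement among
# members (r_ff). The formula is an illustrative reading of the abstract.
import numpy as np

def agreement(a, b):
    return float(np.mean(a == b))

def merit(subset, preds, y):
    k = len(subset)
    r_cf = np.mean([agreement(preds[i], y) for i in subset])
    if k == 1:
        return r_cf
    r_ff = np.mean([agreement(preds[i], preds[j])
                    for i in subset for j in subset if i < j])
    return k * r_cf / np.sqrt(k + k * (k - 1) * r_ff)

def cap_prune(preds, y, target_size):
    """preds: list of per-member prediction arrays on a validation set."""
    chosen, remaining = [], set(range(len(preds)))
    while remaining and len(chosen) < target_size:
        best = max(remaining, key=lambda i: merit(chosen + [i], preds, y))
        chosen.append(best)
        remaining.remove(best)
    return chosen
```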
Projection matrices from projective space P^3 to the projective plane P^2 have long been used in multiple-view geometry to model the perspective projection created by the pin-hole camera. In this work we introduce higher-dimensional mappings P^k → P^2 for the representation of various applications in which the world we view is no longer rigid. We also describe the multi-view constraints arising from these new projection matrices (where k > 3) and methods for extracting the (non-rigid) structure and motion for each application.
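As a minimal illustration, a 3 × (k+1) matrix plays the role of such a higher-dimensional projection, mapping homogeneous points of P^k to image points in P^2 and generalizing the familiar 3 × 4 pin-hole camera (k = 3); the matrix and points below are random placeholders.

```python
# Minimal illustration of a higher-dimensional projection matrix: a
# 3 x (k+1) matrix maps homogeneous points of P^k to image points in P^2,
# generalizing the 3 x 4 pin-hole camera (k = 3). Matrix and points are
# random placeholders, not data from the paper.
import numpy as np

k = 4                                  # e.g., one extra dimension for non-rigidity
rng = np.random.default_rng(0)
M = rng.standard_normal((3, k + 1))    # projection matrix P^k -> P^2
X = rng.standard_normal((k + 1, 10))   # 10 points in homogeneous k-D coordinates

x = M @ X                              # homogeneous image points
x = x[:2] / x[2]                       # dehomogenize to 2-D image coordinates
print(x.shape)                         # (2, 10)
```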