Examined the relationship between aphasia type and lesion site in 80 subcortical stroke patients. Subjects were classified as aphasic, dysarthric, or neither aphasic nor dysarthric, and lesion sites were identified by means of computerized tomography (CT) scan. No correlation between lesion site and category group was found: lesions of the same subcortical structures yielded different neurolinguistic impairments, whereas comparable linguistic patterns were observed with lesions of different deep areas.
This paper discusses several quantitative issues that arise in the analysis of health risks, beginning with principles such as de minimis and zero risk. It also provides a probabilistic definition of risk in terms of hazard, context, consequence, magnitude, and uncertainty, and relies on this definition to investigate, through sensitivity analysis, the effect that uncertainty has on the results obtained. The results, from a case study based on waterborne total arsenic, show that the choice of dose-response function causes more uncertainty than any other component of the risk analysis. Chemical carcinogenesis provides the basis for discussing inability to know, as distinct from uncertainty. The conclusion is that risk analysis keeps uncertainty and inability to know separate; through this function, it provides a much needed method for presenting information to decision makers and the public.
The suppression of apoptosis may contribute to the carcinogenicity of the peroxisome proliferators (PPs), a class of non-genotoxic rodent hepatocarcinogens. Our previous work demonstrated that the PP nafenopin suppressed both spontaneous and transforming growth factor beta1 (TGFbeta1)-induced hepatocyte apoptosis both in vivo and in vitro. Here, we extend these observations by demonstrating the ability of nafenopin to suppress apoptosis induced by other major candidates for the signalling of cell death in the liver. Treatment of rat or mouse hepatocyte monolayers with TGFbeta1 or the DNA damaging drugs etoposide or hydroxyurea induced high levels of apoptosis. Western blot analysis did not support a role for either p53 or p21waf1 in etoposide-induced apoptosis in rat hepatocytes. Treatment of mouse hepatocytes with an agonistic anti-Fas antibody also resulted in an induction of high levels of apoptosis. Pre-addition and continued exposure to nafenopin suppressed apoptosis induced by each of these stimuli. Overall, our studies demonstrate that the ability of nafenopin to protect hepatocytes from apoptosis is not restricted to a particular species or apoptotic stimulus. It is possible, therefore, that the PPs may suppress apoptosis by acting on diverse signalling pathways. However, it seems more likely that nafenopin suppresses hepatocyte apoptosis elicited by each death stimulus by impinging on a core apoptotic mechanism.
The exploitation of the salient features of capability-based addressing environments leads to a high number of small objects existing in memory at the same time. It is thus necessary to enhance the efficiency of the mechanisms for object relocation, and to avoid congestion of input/output devices due to swapping. In this paper, we present an approach to the management of a large virtual memory space aimed at solving these problems. We insert partial information concerning the physical allocation of each object into the virtual identifier of this object. Objects are grouped into large swapping units, called pages. The page size is independent of the average object size. This results in enhanced efficiency in managing the relocation information both with regard to memory requirements and access times. The allocation of objects into pages, and the movement of pages through the memory hierarchy, are controlled by user processes. This means that programs which have knowledge of their own use of virtual memory can increase their locality of reference, diminish the number of swap operations and reduce fragmentation.
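The grouping of small objects into large swapping units can be illustrated with a minimal sketch. This is a hypothetical toy, not the paper's system: the class name, the tuple-shaped identifier, and the allocation policy are all assumptions made for illustration. The one idea taken from the abstract is that the virtual identifier carries partial placement information (here, the page number), so locating an object needs no global lookup table.

```python
# Hypothetical sketch: many small objects packed into fixed-size pages,
# with the page number embedded in each object's virtual identifier.
class PagedStore:
    def __init__(self, page_size):
        self.page_size = page_size
        self.pages = [[]]            # each page is a list of object payloads
        self.free = [page_size]      # remaining bytes per page

    def allocate(self, payload):
        """Place an object in the first page with room; return a virtual
        identifier (page_no, slot) carrying the page as a location hint."""
        size = len(payload)
        for page_no, room in enumerate(self.free):
            if size <= room:
                break
        else:                        # no existing page fits: open a new one
            page_no = len(self.pages)
            self.pages.append([])
            self.free.append(self.page_size)
        slot = len(self.pages[page_no])
        self.pages[page_no].append(payload)
        self.free[page_no] -= size
        return (page_no, slot)

    def fetch(self, vid):
        page_no, slot = vid          # the identifier alone names the page
        return self.pages[page_no][slot]

store = PagedStore(page_size=16)
a = store.allocate(b"tiny")          # fits in page 0
b = store.allocate(b"also-small")    # still fits in page 0
c = store.allocate(b"0123456789")    # page 0 is full: opens page 1
```

In the paper this placement is driven by user processes that know their own reference patterns; the first-fit policy above merely stands in for such a policy.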
This paper extends the Bayesian direct deconvolution method to the case in which the convolution kernel depends on a few unknown nuisance parameters. Moreover, an acceleration procedure is proposed that drastically reduces the computational burden. Finally, the implementation of the method by means of the fast Fourier transform is fully discussed in the multidimensional case.
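The FFT implementation mentioned above can be caricatured with a much simpler stand-in: a Tikhonov-regularized direct deconvolution in the Fourier domain. This is not the paper's Bayesian estimator (which also handles the unknown kernel parameters); the regularization constant below merely plays the role that a prior/noise trade-off would play.

```python
import numpy as np

def deconvolve_fft(observed, kernel, reg=1e-3):
    """Regularized direct deconvolution in the Fourier domain
    (Wiener-like inverse filter; a sketch, not the Bayesian method)."""
    H = np.fft.fft(kernel, n=len(observed))
    Y = np.fft.fft(observed)
    X = Y * np.conj(H) / (np.abs(H) ** 2 + reg)
    return np.real(np.fft.ifft(X))

# toy check: circularly blur a spike train with a box kernel, then recover it
signal = np.zeros(64)
signal[[10, 30]] = 1.0
kernel = np.ones(5) / 5.0
blurred = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel, 64)))
estimate = deconvolve_fft(blurred, kernel, reg=1e-6)
```

The multidimensional case the paper treats is analogous, with `fft2`/`fftn` replacing the one-dimensional transforms.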
Studying the changes of shape is a common concern in many scientific fields. We address here two problems: (1) quantifying the deformation between two given shapes and (2) transporting this deformation to morph a third shape. These operations can be done with or without point correspondence, depending on the availability of a surface matching algorithm and on the type of mathematical procedure adopted. In computer vision, the re-targeting of emotions mapped on faces is a common application. We contrast here four different methods for transporting the deformation toward a target once it has been estimated from the matching of two shapes. These methods come from very different fields such as computational anatomy, computer vision and biology. We used large deformation diffeomorphic metric mapping and thin plate splines to estimate deformations in a deformational trajectory of a human face experiencing different emotions. Then we used naive transport (NT), linear shift (LS), direct transport (DT) and the fanning scheme (FS) to transport the estimated deformations toward four alien faces, each consisting of 240 homologous points with a triangulation structure of 416 triangles. We used both local and global criteria for evaluating the performance of the four methods, e.g., the maintenance of the original deformation. We found DT, LS and FS very effective in recovering the original deformation, while NT fails in several respects in transporting the shape change. As the best method may differ depending on the application, we recommend carefully testing different methods in order to choose the best one for any specific application.
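The simplest of the four methods, naive transport, can be sketched directly: the per-landmark displacement estimated between two shapes is added unchanged to a third. The shapes and numbers below are toy assumptions, not the paper's data; the sketch only illustrates why NT can fail, since it ignores the third shape's own geometry, which the parallel-transport methods (DT, FS) are designed to account for.

```python
import numpy as np

def naive_transport(source, target, third):
    """Naive transport (NT): the displacement field estimated between
    source and target landmarks is applied verbatim to a third shape.
    All shapes are (n_points, dim) arrays of homologous landmarks."""
    displacement = target - source
    return third + displacement

# toy 2D example with 4 homologous landmarks
source = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])
target = source + np.array([0.2, 0.0])   # deformation: a rightward push
alien = source * 2.0                      # a differently proportioned shape
morphed = naive_transport(source, target, alien)
```

Because the displacement is copied without rescaling or reorientation, the same absolute push is applied to the twice-as-large alien shape, illustrating how NT disregards the target's local frame.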
The projection of a photographic data set on a 3D model is a robust and widely applicable way to acquire appearance information of an object. The first step of this procedure is the alignment of the images on the 3D model. While any reconstruction pipeline aims at avoiding misregistration by improving camera calibrations and geometry, in practice a perfect alignment cannot always be reached. Depending on the way multiple camera images are fused on the object surface, remaining misregistrations show up either as ghosting or as discontinuities at transitions from one camera view to another. In this paper we propose a method, based on the computation of optical flow between overlapping images, to correct the local misalignment by determining the necessary displacement. The goal is to correct the symptoms of misregistration, instead of searching for a globally consistent mapping, which might not exist. The method scales up well with the size of the data set (both photographic and geometric) and is quite independent of the characteristics of the 3D model (topology cleanliness, parametrization, density). The method is robust and can handle real world cases that have different characteristics: low-level geometric details and images that lack enough features for global optimization or manual methods. It can be applied to different mapping strategies, such as texture or per-vertex attribute encoding.
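The displacement-estimation step can be caricatured with a far simpler global estimator: phase correlation recovering a single integer translation between two overlapping images. This is only an illustration under strong assumptions (pure translation, full overlap), not the paper's dense per-pixel optical flow, which recovers a spatially varying displacement field.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer (dy, dx) translation that aligns img_b to img_a.
    A global stand-in for dense optical flow: the normalized cross-power
    spectrum of the two images peaks at the relative displacement."""
    F = np.fft.fft2(img_a) * np.conj(np.fft.fft2(img_b))
    corr = np.real(np.fft.ifft2(F / (np.abs(F) + 1e-12)))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image into negative offsets
    if dy > img_a.shape[0] // 2:
        dy -= img_a.shape[0]
    if dx > img_a.shape[1] // 2:
        dx -= img_a.shape[1]
    return int(dy), int(dx)

# toy overlap: the same random patch, circularly displaced
rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, shift=(-3, 2), axis=(0, 1))   # b is a shifted copy of a
```

Applying the recovered shift back to `b` (e.g. with `np.roll`) restores alignment with `a`; the paper's method does the analogous correction locally, per overlap region, before fusing the images on the surface.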