Related Articles (20 results)
1.
This paper describes an autostereoscopic image overlay technique that is integrated into a surgical navigation system to superimpose a real three-dimensional (3-D) image onto the patient via a half-silvered mirror. The images are created by employing a modified version of integral videography (IV), which is an animated extension of integral photography. IV records and reproduces 3-D images using a microconvex lens array and flat display; it can display geometrically accurate 3-D autostereoscopic images and reproduce motion parallax without the need for special devices. The use of semitransparent display devices makes it appear that the 3-D image is inside the patient's body. This is the first report of applying an autostereoscopic display with an image overlay system in surgical navigation. Experiments demonstrated that the fast IV rendering technique and patient-image registration method produce an average registration accuracy of 1.13 mm. Experiments using a target in phantom agar showed that the system can guide a needle toward a target with an average error of 2.6 mm. Improvement in the quality of the IV display will make this system practical and its use will increase surgical accuracy and reduce invasiveness.

2.
Image overlay projection is a form of augmented reality that allows surgeons to view underlying anatomical structures directly on the patient surface. It improves intuitiveness of computer-aided surgery by removing the need for sight diversion between the patient and a display screen and has been reported to assist in 3-D understanding of anatomical structures and the identification of target and critical structures. Challenges in the development of image overlay technologies for surgery remain in the projection setup. Calibration, patient registration, view direction, and projection obstruction remain unsolved limitations to image overlay techniques. In this paper, we propose a novel, portable, and handheld-navigated image overlay device based on miniature laser projection technology that allows images of 3-D patient-specific models to be projected directly onto the organ surface intraoperatively without the need for intrusive hardware around the surgical site. The device can be integrated into a navigation system, thereby exploiting existing patient registration and model generation solutions. The position of the device is tracked by the navigation system's position sensor and used to project geometrically correct images from any position within the workspace of the navigation system. The projector was calibrated using modified camera calibration techniques, and images for projection are rendered using a virtual camera defined by the projector's extrinsic parameters. Verification of the device's projection accuracy yielded a mean projection error of 1.3 mm. Visibility testing of the projection performed on pig liver tissue found the device suitable for the display of anatomical structures on the organ surface. The feasibility of use within the surgical workflow was assessed during open liver surgery. We show that the device could be quickly and unobtrusively deployed within the sterile environment.
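As a rough illustration of the virtual-camera rendering idea mentioned above, the sketch below projects 3-D model points into projector pixel coordinates with a pinhole model; the intrinsics K and extrinsics (R, t) are hypothetical stand-ins, not the device's calibration.

    # Minimal sketch, assuming a calibrated projector treated as a pinhole
    # "virtual camera": K is the intrinsic matrix, (R, t) the tracked extrinsics.
    import numpy as np

    def project_points(pts_3d, K, R, t):
        """Project Nx3 world points into projector pixels via x ~ K (R X + t)."""
        cam = R @ pts_3d.T + t.reshape(3, 1)   # world -> projector frame
        uv = K @ cam                           # pinhole projection
        return (uv[:2] / uv[2]).T              # dehomogenize to pixel coordinates

    # All values below are made up for illustration.
    K = np.array([[1400.0, 0.0, 640.0],
                  [0.0, 1400.0, 360.0],
                  [0.0, 0.0, 1.0]])
    R = np.eye(3)
    t = np.array([0.0, 0.0, 300.0])            # projector roughly 300 mm from the organ
    model_pts = np.array([[10.0, -5.0, 0.0], [0.0, 0.0, 0.0], [-12.0, 8.0, 2.0]])
    print(project_points(model_pts, K, R, t))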

3.
Surgeries of the skull base require accuracy to safely navigate the critical anatomy. This is particularly the case for endoscopic endonasal skull base surgery (ESBS), where the surgeons work within millimeters of neurovascular structures at the skull base. Today's navigation systems provide approximately 2 mm accuracy. Accuracy is limited by the indirect relationship among the navigation system, the image, and the patient. We propose a method to directly track the position of the endoscope using video data acquired from the endoscope camera. Our method first tracks image feature points in the video and reconstructs them as 3D points, and then registers the reconstructed point cloud to a surface segmented from preoperative computed tomography (CT) data. After the initial registration, the system tracks image features and maintains the 2D-3D correspondence of image features and 3D locations. These data are then used to update the current camera pose. We present a method for validation of our system, which achieves submillimeter (0.70 mm mean) target registration error (TRE) results.
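The point-cloud-to-surface registration step lends itself to an ICP-style sketch like the one below; this is a generic illustration (nearest-neighbour matching plus an SVD rigid fit), not the paper's actual algorithm, and it assumes the two point sets are already roughly pre-aligned.

    # Generic ICP sketch: iteratively match reconstructed points to their
    # nearest surface points and solve the rigid transform by SVD (Arun/Kabsch).
    import numpy as np
    from scipy.spatial import cKDTree

    def best_rigid(src, dst):
        """Least-squares rigid transform (R, t) mapping src onto dst."""
        cs, cd = src.mean(0), dst.mean(0)
        H = (src - cs).T @ (dst - cd)
        U, _, Vt = np.linalg.svd(H)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ D @ U.T
        return R, cd - R @ cs

    def icp(cloud, surface_pts, iters=30):
        """Register an Nx3 reconstructed cloud to Mx3 points sampled from the CT surface."""
        tree = cKDTree(surface_pts)
        R_tot, t_tot = np.eye(3), np.zeros(3)
        for _ in range(iters):
            moved = cloud @ R_tot.T + t_tot
            _, idx = tree.query(moved)          # closest-point correspondences
            R, t = best_rigid(moved, surface_pts[idx])
            R_tot, t_tot = R @ R_tot, R @ t_tot + t
        return R_tot, t_tot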

4.
The alignment performance of a full-field X-ray exposure system is discussed. A “single-mask alignment” technique is presented, which extracts an array of local misalignments from two exposures of the same mask. A computer program is described which decomposes the array of vector misalignments into mask- and system-related components. The system overlay precision is shown to be 0.06 micron in x and 0.06 micron in y (1 σ), across a 40 mm x 40 mm array, using the Hewlett-Packard prototype X-ray mask aligner to overlay one pattern onto itself. Sources of error in overlays of two different masks are categorized. With feedback control of in-plane mask distortion, system overlay accuracy of 0.12 micron in x or y, for two-mask overlays, is achievable. The random component of die placement by the Perkin-Elmer MEBES II, used to generate the X-ray masks, is inferred to be 0.05 micron, RMS.
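One common way to separate such an overlay-vector map into global terms (translation, rotation, per-axis magnification) and a local residual is a linear least-squares fit, sketched below; the model and variable names are illustrative assumptions and are not taken from the HP program described above.

    # Hedged sketch: fit dx = tx - theta*y + mx*x and dy = ty + theta*x + my*y
    # to the measured misalignment vectors; the residual is the local component.
    import numpy as np

    def fit_overlay_model(xy, dxy):
        """xy: Nx2 site positions; dxy: Nx2 measured misalignment vectors."""
        x, y = xy[:, 0], xy[:, 1]
        n = len(x)
        zeros, ones = np.zeros(n), np.ones(n)
        # Unknowns: tx, ty, theta (rotation), mx, my (per-axis magnification)
        A = np.block([[ones[:, None], zeros[:, None], -y[:, None], x[:, None], zeros[:, None]],
                      [zeros[:, None], ones[:, None],  x[:, None], zeros[:, None], y[:, None]]])
        b = np.concatenate([dxy[:, 0], dxy[:, 1]])
        params, *_ = np.linalg.lstsq(A, b, rcond=None)
        residual = b - A @ params               # local (random) misalignment component
        return params, residual.reshape(2, -1).T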

5.
We describe a registration and tracking technique to integrate cardiac X-ray images and cardiac magnetic resonance (MR) images acquired from a combined X-ray and MR interventional suite (XMR). Optical tracking is used to determine the transformation matrices relating MR image coordinates and X-ray image coordinates. Calibration of X-ray projection geometry and tracking of the X-ray C-arm and table enable three-dimensional (3-D) reconstruction of vessel centerlines and catheters from bi-plane X-ray views. We can, therefore, combine single X-ray projection images with registered projection MR images from a volume acquisition, and we can also display 3-D reconstructions of catheters within a 3-D or multi-slice MR volume. Registration errors were assessed using phantom experiments. Errors in the combined projection images (two-dimensional target registration error, TRE) were found to be 2.4 to 4.2 mm, and the errors in the integrated volume representation (3-D TRE) were found to be 4.6 to 5.1 mm. These errors are clinically acceptable for alignment of images of the great vessels and the chambers of the heart. Results are shown for two patients. The first involves overlay of a catheter used for invasive pressure measurements on an MR volume that provides anatomical context. The second involves overlay of invasive electrode catheters (including a basket catheter) on a tagged MR volume in order to relate electrophysiology to myocardial motion in a patient with an arrhythmia. Visual assessment of these results suggests the errors were of a similar magnitude to those obtained in the phantom measurements.
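A minimal sketch of the kind of transform chaining that the optical tracking makes possible here: mapping a point from MR image coordinates into the X-ray frame by composing tracked 4x4 transforms. The matrix names and values below are hypothetical stand-ins, not the XMR suite's calibration.

    # Illustrative chaining of homogeneous transforms: MR image -> tracker -> X-ray.
    import numpy as np

    def hom(R, t):
        T = np.eye(4); T[:3, :3] = R; T[:3, 3] = t
        return T

    def mr_point_to_xray(p_mr, T_tracker_from_mr, T_xray_from_tracker):
        """Map a 3-D MR-space point into X-ray C-arm coordinates."""
        p = np.append(p_mr, 1.0)
        return (T_xray_from_tracker @ T_tracker_from_mr @ p)[:3]

    # Hypothetical calibration/tracking results for illustration only.
    T_tracker_from_mr = hom(np.eye(3), np.array([120.0, -40.0, 15.0]))
    T_xray_from_tracker = hom(np.eye(3), np.array([-300.0, 0.0, 55.0]))
    print(mr_point_to_xray(np.array([10.0, 20.0, 30.0]),
                           T_tracker_from_mr, T_xray_from_tracker))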

6.
X-ray fluoroscopically guided cardiac electrophysiological procedures are routinely carried out for diagnosis and treatment of cardiac arrhythmias. X-ray images have poor soft tissue contrast and, for this reason, overlay of static 3-D roadmaps derived from preprocedural volumetric data can be used to add anatomical information. However, the registration between the 3-D roadmap and the 2-D X-ray image can be compromised by patient respiratory motion. Three methods were designed and evaluated to correct for respiratory motion using features in the 2-D X-ray images. The first method is based on tracking either the diaphragm or the heart border using the image intensity in a region of interest. The second method detects the tracheal bifurcation using the generalized Hough transform and a 3-D model derived from 3-D preoperative volumetric data. The third method is based on tracking the coronary sinus (CS) catheter. This method uses blob detection to find all possible catheter electrodes in the X-ray image. A cost function is applied to select one CS catheter from all catheter-like objects. All three methods were applied to X-ray images from 18 patients undergoing radiofrequency ablation for the treatment of atrial fibrillation. The 2-D target registration errors (TRE) at the pulmonary veins were calculated to validate the methods. A TRE of 1.6 mm ± 0.8 mm was achieved for the diaphragm tracking; 1.7 mm ± 0.9 mm for heart border tracking, 1.9 mm ± 1.0 mm for trachea tracking, and 1.8 mm ± 0.9 mm for CS catheter tracking. We present a comprehensive comparison between the techniques in terms of robustness, as computed by tracking errors, and accuracy, as computed by TRE using two independent approaches.  相似文献   
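To make the catheter-electrode step concrete, the sketch below runs OpenCV blob detection on a fluoroscopy frame and picks a candidate with a toy cost (distance to the previous frame's position). The thresholds and the cost are placeholders; the paper's actual cost function is not reproduced here.

    # Loose sketch of electrode candidate detection + a toy selection cost.
    import cv2
    import numpy as np

    def electrode_candidates(frame_gray):
        """frame_gray: 8-bit grayscale fluoroscopy frame (electrodes appear dark)."""
        params = cv2.SimpleBlobDetector_Params()
        params.filterByArea = True
        params.minArea, params.maxArea = 10, 200   # electrode-sized blobs (pixels)
        params.filterByCircularity = False
        detector = cv2.SimpleBlobDetector_create(params)
        return [kp.pt for kp in detector.detect(frame_gray)]

    def pick_cs_candidate(candidates, prev_xy):
        """Choose the blob with the lowest (toy) cost: distance to the last position."""
        if not candidates:
            return None
        costs = [np.hypot(x - prev_xy[0], y - prev_xy[1]) for x, y in candidates]
        return candidates[int(np.argmin(costs))]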

7.
Surgical navigation systems are used widely among all fields of modern medicine, including, but not limited to, ENT and maxillofacial surgery. As a fundamental prerequisite for image-guided surgery, intraoperative registration, which maps image to patient coordinates, has been the subject of many studies and developments. While registration methods have evolved from invasive procedures like fixed stereotactic frames and implanted fiducial markers toward surface-based registration and noninvasive markers fixed to the patient's skin, even the most sophisticated registration techniques produce an imperfect result. Due to errors introduced during the registration process, the projection of navigated instruments into image data deviates by up to several millimeters from the actual position, depending on the applied registration method and the distance between the instrument and the fiducial markers. We propose a method that automatically and continually improves registration accuracy during intraoperative navigation after the actual registration process has been completed. The projections of navigated instruments into image data are inspected and validated by the navigation software. Errors in image-to-patient registration are identified by calculating intersections between the virtual instruments' axes and surfaces of hard bone tissue extracted from the patient's image data. The information gained from the identification of such registration errors is then used to improve registration accuracy by adding an additional pair of registration points at every location where an error has been detected. The proposed method was integrated into a surgical navigation system based on paired-point registration with anatomical landmarks. Experiments were conducted in which registrations with deliberately misplaced point pairs were corrected by the automatic error correction. Results showed an improvement in registration quality in all cases.
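Detecting that a virtual instrument axis passes through extracted bone can be done with a standard ray/triangle test; the Möller–Trumbore routine below is a generic sketch of that check, not the navigation software's implementation, and the surrounding mesh handling is assumed.

    # Generic ray/triangle intersection, sketching how an instrument axis could
    # be tested against triangles of a segmented bone surface; an unexpected hit
    # would flag a possible image-to-patient registration error.
    import numpy as np

    def ray_hits_triangle(orig, direc, v0, v1, v2, eps=1e-9):
        """Return the hit distance t along the ray, or None if there is no hit."""
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(direc, e2)
        det = np.dot(e1, p)
        if abs(det) < eps:
            return None                         # ray parallel to the triangle
        inv = 1.0 / det
        s = orig - v0
        u = np.dot(s, p) * inv
        if u < 0.0 or u > 1.0:
            return None
        q = np.cross(s, e1)
        v = np.dot(direc, q) * inv
        if v < 0.0 or u + v > 1.0:
            return None
        t = np.dot(e2, q) * inv
        return t if t > eps else None

    def axis_intersects_bone(tip, axis_dir, triangles):
        """triangles: iterable of (v0, v1, v2) vertex arrays from the bone mesh."""
        return any(ray_hits_triangle(tip, axis_dir, *tri) is not None
                   for tri in triangles)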

8.
This paper describes an extrinsic-point-based, interactive image-guided neurosurgical system designed at Vanderbilt University, Nashville, TN, as part of a collaborative effort among the Departments of Neurological Surgery, Computer Science, and Biomedical Engineering. Multimodal image-to-image (II) and image-to-physical (IP) registration is accomplished using implantable markers. Physical space tracking is accomplished with optical triangulation. The authors investigate the theoretical accuracy of point-based registration using numerical simulations, the experimental accuracy of their system using data obtained with a phantom, and the clinical accuracy of their system using data acquired in a prospective clinical trial by 6 neurosurgeons at 4 medical centers from 158 patients undergoing craniotomies to resect cerebral lesions. The authors can determine the position of their markers with an error of approximately 0.4 mm in X-ray computed tomography (CT) and magnetic resonance (MR) images and 0.3 mm in physical space. The theoretical registration error using 4 such markers distributed around the head in a configuration that is clinically practical is approximately 0.5-0.6 mm. The mean CT-physical registration error for the phantom experiments is 0.5 mm and for the clinical data obtained with rigid head fixation during scanning is 0.7 mm. The mean CT-MR registration error for the clinical data obtained without rigid head fixation during scanning is 1.4 mm, which is the highest mean error that the authors observed. These theoretical and experimental findings indicate that this system is an accurate navigational aid that can provide real-time feedback to the surgeon about anatomical structures encountered in the surgical field.
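For a rough sense of where theoretical estimates like the 0.5-0.6 mm figure come from, the sketch below implements the widely used expected-TRE approximation for point-based registration (Fitzpatrick et al.), TRE^2 ~ (FLE^2 / N) * (1 + (1/3) * sum_k d_k^2 / f_k^2); the marker positions, target, and FLE value are invented, not those of the Vanderbilt system.

    # Hedged sketch of the expected-TRE estimate for point-based registration.
    import numpy as np

    def expected_tre(fiducials, target, fle_rms):
        """Approximate RMS TRE at `target` for N fiducials with RMS FLE `fle_rms`."""
        N = len(fiducials)
        c = fiducials.mean(0)
        _, _, Vt = np.linalg.svd(fiducials - c, full_matrices=False)
        ratio_sum = 0.0
        for a in Vt:                              # principal axes of the marker set
            def dist2_from_axis(p):
                v = p - c
                return np.dot(v, v) - np.dot(v, a) ** 2
            d2 = dist2_from_axis(target)          # target distance^2 from axis
            f2 = np.mean([dist2_from_axis(p) for p in fiducials])  # RMS fiducial distance^2
            ratio_sum += d2 / f2
        return np.sqrt((fle_rms ** 2 / N) * (1.0 + ratio_sum / 3.0))

    # Four hypothetical markers around the head (mm) and a deep-seated target.
    fids = np.array([[80.0, 0.0, 0.0], [-80.0, 0.0, 0.0],
                     [0.0, 90.0, 40.0], [0.0, -90.0, 40.0]])
    print(expected_tre(fids, target=np.array([0.0, 0.0, 60.0]), fle_rms=0.5))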

9.
Fluoroscopic overlay images rendered from preoperative volumetric data can provide additional anatomical details to guide physicians during catheter ablation procedures for treatment of atrial fibrillation (AFib). As these overlay images are often compromised by cardiac and respiratory motion, motion compensation methods are needed to keep the overlay images in sync with the fluoroscopic images. So far, these approaches have either required simultaneous biplane imaging for 3-D motion compensation or, in the case of monoplane X-ray imaging, provided only a limited 2-D functionality. To overcome the downsides of the previously suggested methods, we propose an approach that facilitates a full 3-D motion compensation even if only monoplane X-ray images are available. To this end, we use a training phase that employs a biplane sequence to establish a patient-specific motion model. Afterwards, a constrained model-based 2-D/3-D registration method is used to track a circumferential mapping catheter. This device is commonly used for AFib catheter ablation procedures. Based on experiments on real patient data, we found that our constrained monoplane 2-D/3-D registration outperformed the unconstrained counterpart and yielded an average 2-D tracking error of 0.6 mm and an average 3-D tracking error of 1.6 mm. The unconstrained 2-D/3-D registration technique yielded a similar 2-D performance, but the 3-D tracking error increased to 3.2 mm, mostly due to wrongly estimated 3-D motion components in the X-ray viewing direction. Compared to the conventional 2-D monoplane method, the proposed method provides a more seamless workflow by removing the need for catheter model re-initialization otherwise required when the C-arm view orientation changes. In addition, the proposed method can be straightforwardly combined with the previously introduced biplane motion compensation technique to obtain a good trade-off between accuracy and radiation dose reduction.
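The "patient-specific motion model" idea can be illustrated with a small PCA sketch: learn a low-dimensional motion subspace from biplane training displacements, then project later (poorly observed) motion estimates onto it. This illustrates the constraint concept only and is not the paper's constrained 2-D/3-D registration.

    # Illustrative motion-model constraint via PCA of training displacements.
    import numpy as np

    def learn_motion_subspace(training_disp, n_modes=1):
        """training_disp: T x (3N) stacked 3-D displacements over T training frames."""
        mean = training_disp.mean(0)
        _, _, Vt = np.linalg.svd(training_disp - mean, full_matrices=False)
        return mean, Vt[:n_modes]                 # mean motion + principal modes

    def constrain(disp, mean, modes):
        """Project a raw displacement estimate onto the learned motion model."""
        coeff = modes @ (disp - mean)
        return mean + modes.T @ coeff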

10.
Accurate and fast localization of a predefined target region inside the patient is an important component of many image-guided therapy procedures. This problem is commonly solved by registration of intraoperative 2-D projection images to 3-D preoperative images. If the patient is not fixed during the intervention, the 2-D image acquisition is repeated several times during the procedure, and the registration problem can be cast instead as a 3-D tracking problem. To solve the 3-D problem, we propose in this paper to apply 2-D region tracking to first recover the components of the transformation that are in-plane to the projections. The 2-D motion estimates of all projections are backprojected into 3-D space, where they are then combined into a consistent estimate of the 3-D motion. We compare this method to intensity-based 2-D to 3-D registration and a combination of 2-D motion backprojection followed by a 2-D to 3-D registration stage. Using clinical data with a fiducial marker-based gold-standard transformation, we show that our method is capable of accurately tracking vertebral targets in 3-D from 2-D motion measured in X-ray projection images. Using a standard tracking algorithm (hyperplane tracking), tracking is achieved at video frame rates but fails relatively often (32% of all frames tracked with target registration error (TRE) better than 1.2 mm, 82% of all frames tracked with TRE better than 2.4 mm). With intensity-based 2-D to 2-D image registration using normalized mutual information (NMI) and pattern intensity (PI), accuracy and robustness are substantially improved. NMI tracked 82% of all frames in our data with TRE better than 1.2 mm and 96% of all frames with TRE better than 2.4 mm. This comes at the cost of a reduced frame rate: 1.7 s average processing time per frame and projection device. Results using PI were slightly more accurate, but required on average 5.4 s per frame. These results are still substantially faster than 2-D to 3-D registration. We conclude that motion backprojection from 2-D motion tracking is an accurate and efficient method for tracking 3-D target motion, but tracking 2-D motion accurately and robustly remains a challenge.
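As a pointer to how the NMI similarity above is typically computed, the snippet below evaluates Studholme-style normalized mutual information from a joint histogram of two images; the bin count and implementation details are generic choices, not the study's.

    # Normalized mutual information NMI = (H(A) + H(B)) / H(A, B).
    import numpy as np

    def nmi(a, b, bins=64):
        """a, b: two equally sized images (numpy arrays)."""
        joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pab = joint / joint.sum()
        pa, pb = pab.sum(1), pab.sum(0)           # marginal distributions
        eps = 1e-12                               # avoid log(0)
        ha = -np.sum(pa * np.log(pa + eps))
        hb = -np.sum(pb * np.log(pb + eps))
        hab = -np.sum(pab * np.log(pab + eps))
        return (ha + hb) / hab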

11.
张慧娟, 熊芝, 劳达宝, 周维虎. 《红外与激光工程》 (Infrared and Laser Engineering), 2019, 48(5): 517005-0517005(6)
Computer-vision-based attitude measurement is widely used in modern control, navigation, tracking, and many other fields. A monocular-vision attitude measurement method is investigated and designed that combines a planar target with four control points (P4P) in a rectangular layout with the EPnP algorithm. First, a single camera captures an image of the planar target; after image processing, the pixel coordinates of the four feature points are obtained, and the EPnP algorithm is used to solve for the attitude. Second, a simulation analysis of the attitude-angle measurement error is carried out, providing theoretical guidance and a basis for improving the attitude measurement accuracy. Finally, a coordinate-system registration method based on a high-precision two-axis turntable is proposed and used to verify the accuracy of the attitude angles about the three axes. Experimental results show that when the rotation angles about the x and y axes lie within [-6°, 6°], the attitude measurement error is less than 0.1°, which meets the requirements of the intended measurement applications.
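A minimal sketch of the pipeline described above, using OpenCV's EPnP solver on a hypothetical rectangular four-point target; the intrinsics, point coordinates, and the roll/pitch/yaw convention are all assumptions made for illustration.

    # EPnP pose estimation from four coplanar target points (P4P layout).
    import cv2
    import numpy as np

    obj_pts = np.array([[-50.0, -30.0, 0.0], [50.0, -30.0, 0.0],
                        [50.0, 30.0, 0.0], [-50.0, 30.0, 0.0]])   # target corners, mm
    img_pts = np.array([[310.2, 262.7], [492.6, 258.1],
                        [495.4, 371.9], [308.8, 375.5]])          # detected pixel coords
    K = np.array([[1200.0, 0.0, 400.0],
                  [0.0, 1200.0, 320.0],
                  [0.0, 0.0, 1.0]])
    dist = np.zeros(5)

    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist, flags=cv2.SOLVEPNP_EPNP)
    R, _ = cv2.Rodrigues(rvec)
    # One common z-y-x Euler extraction for the attitude angles (convention assumed).
    pitch = np.degrees(np.arcsin(-R[2, 0]))
    roll = np.degrees(np.arctan2(R[2, 1], R[2, 2]))
    yaw = np.degrees(np.arctan2(R[1, 0], R[0, 0]))
    print(ok, roll, pitch, yaw)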

12.
We present a gradient-based method for rigid registration of a patient's preoperative computed tomography (CT) to its intraoperative situation with a few fluoroscopic X-ray images obtained with a tracked C-arm. The method is noninvasive, anatomy-based, requires simple user interaction, and includes validation. It is generic and easily customizable for a variety of routine clinical uses in orthopaedic surgery. Gradient-based registration consists of three steps: 1) initial pose estimation; 2) coarse geometry-based registration on bone contours; and 3) fine gradient projection registration (GPR) on edge pixels. It optimizes speed, accuracy, and robustness. Its novelty resides in using volume gradients to eliminate outliers and foreign objects in the fluoroscopic X-ray images, in speeding up computation, and in achieving higher accuracy. It overcomes the drawbacks of intensity-based methods, which are slow and have a limited convergence range, and of geometry-based methods, which depend on the image segmentation quality. Our simulated, in vitro, and cadaver experiments on a human pelvis CT, dry vertebra, dry femur, fresh lamb hip, and human pelvis under realistic conditions show a mean target registration accuracy of 0.5-1.7 mm (0.5-2.6 mm maximum).

13.
This paper presents a new method for image-guided surgery called image-enhanced endoscopy. Registered real and virtual endoscopic images (perspective volume renderings generated from the same view as the endoscope camera using a preoperative image) are displayed simultaneously; when combined with the ability to vary tissue transparency in the virtual images, this provides surgeons with the ability to see beyond visible surfaces and, thus, provides additional exposure during surgery. A mount with four photoreflective spheres is rigidly attached to the endoscope, and its position and orientation are tracked using an optical position sensor. Generation of virtual images that are accurately registered to the real endoscopic images requires calibration of the tracked endoscope. The calibration process determines intrinsic parameters (that represent the projection of three-dimensional points onto the two-dimensional endoscope camera imaging plane) and extrinsic parameters (that represent the transformation from the coordinate system of the tracker mount attached to the endoscope to the coordinate system of the endoscope camera), and determines radial lens distortion. The calibration routine is fast, automatic, accurate and reliable, and is insensitive to rotational orientation of the endoscope. The routine automatically detects, localizes, and identifies dots in a video image snapshot of the calibration target grid and determines the calibration parameters from the sets of known physical coordinates and localized image coordinates of the target grid dots. Using nonlinear lens-distortion correction, which can be performed at real-time rates (30 frames per second), the mean projection error is less than 0.5 mm at distances up to 25 mm from the endoscope tip, and less than 1.0 mm up to 45 mm. Experimental measurements and point-based registration error theory show that the tracking error is about 0.5-0.7 mm at the tip of the endoscope and less than 0.9 mm for all points in the field of view of the endoscope camera at a distance of up to 65 mm from the tip. It is probable that much of the projection error is due to endoscope tracking error rather than calibration error. Two examples of clinical applications are presented to illustrate the usefulness of image-enhanced endoscopy. This method is a useful addition to conventional image-guidance systems, which generally show only the position of the tip (and sometimes the orientation) of a surgical instrument or probe on reformatted image slices.
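The projection-error check described above can be sketched with OpenCV's projectPoints, which applies intrinsics, extrinsics, and lens distortion to known 3-D grid points; the grid coordinates, detected pixel positions, and camera parameters below are placeholders, not the paper's calibration results.

    # Mean reprojection error between projected grid points and their detected positions.
    import cv2
    import numpy as np

    def mean_projection_error(obj_pts, img_pts, K, dist, rvec, tvec):
        proj, _ = cv2.projectPoints(obj_pts, rvec, tvec, K, dist)
        return float(np.mean(np.linalg.norm(proj.reshape(-1, 2) - img_pts, axis=1)))

    # Hypothetical calibration-grid points (mm) and localized image dots (pixels).
    obj = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
    img = np.array([[320.4, 240.1], [399.6, 240.5], [320.2, 319.3]])
    K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
    print(mean_projection_error(obj, img, K, np.zeros(5),
                                np.zeros(3), np.array([0.0, 0.0, 50.0])))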

14.
Synchrotron-radiation X-ray lithography (XRL), a high-resolution lithography technique and one of the key microfabrication technologies, can be applied to lithography at resolutions of 100 nm and below the 100 nm node. High-precision alignment is critical to XRL and directly affects the production quality of the subsequent devices. Current domestic XRL alignment systems mainly acquire images with a CCD camera and a microscope objective and perform automatic alignment through a computer image-processing program; image edge resolution is the key part of the image processing and directly determines the alignment accuracy. Three different image edge-enhancement methods are studied with respect to mask and wafer recognition accuracy and alignment accuracy, and several alignment marks are preliminarily designed.
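As a loose illustration of edge enhancement for alignment-mark images, the snippet below computes three common edge maps (Sobel magnitude, Laplacian, Canny) with OpenCV; these are generic operators and are not claimed to be the three methods compared in the paper.

    # Three common edge-enhancement operators applied to an alignment-mark image.
    import cv2
    import numpy as np

    def edge_maps(img_gray):
        """img_gray: 8-bit grayscale mask/wafer image (Canny requires 8-bit input)."""
        sobel = cv2.magnitude(cv2.Sobel(img_gray, cv2.CV_32F, 1, 0),
                              cv2.Sobel(img_gray, cv2.CV_32F, 0, 1))
        lap = cv2.Laplacian(img_gray, cv2.CV_32F)
        canny = cv2.Canny(img_gray, 50, 150)
        return sobel, lap, canny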

15.
Intraoperative brain deformations decrease accuracy in image-guided neurosurgery. Approaches to quantify these deformations based on 3-D reconstruction of cortectomy surfaces have been described and have shown promising results regarding the extrapolation to the whole brain volume using additional prior knowledge or sparse volume modalities. Quantification of brain deformations from surface measurements requires the registration of surfaces at different times along the surgical procedure, with different challenges according to the patient and surgical step. In this paper, we propose a new flexible surface registration approach for any textured point cloud computed by a stereoscopic or laser-range approach. This method includes three terms: the first term is related to image intensities, the second to Euclidean distance, and the third to anatomical landmarks automatically extracted and continuously tracked in the 2-D video flow. Performance evaluation was carried out on both phantom and clinical cases. The global method, including textured point cloud reconstruction, had accuracy within 2 mm, which is the usual rigid registration error of neuronavigation systems before deformations. Its main advantage is that it considers all the available data, including the microscope video flow, which has higher temporal resolution than previously published methods.

16.
Brain shift during open cranial surgery presents a challenge for maintaining registration with image-guidance systems. Ultrasound (US) is a convenient intraoperative imaging modality that may be a useful tool in detecting tissue shift and updating preoperative images based on intraoperative measurements of brain deformation. We have quantitatively evaluated the ability of spatially tracked freehand US to detect displacement of implanted markers in a series of three in vivo porcine experiments, where both US and computed tomography (CT) image acquisitions were obtained before and after deforming the brain. Marker displacements ranged from 0.5 to 8.5 mm. Comparisons between CT and US measurements showed a mean target localization error of 1.5 mm and a mean vector error for displacement of 1.1 mm. The mean error in the magnitude of displacement was 0.6 mm. For one of the animals studied, the US data were used in conjunction with a biomechanical model to nonrigidly re-register a baseline CT to the deformed brain. The mean error between the actual and deformed CTs was 1.2 and 1.9 mm at the marker locations, depending on the extent of the deformation induced. These findings indicate the potential accuracy of coregistered freehand US displacement tracking in brain tissue and suggest that the resulting information can be used to drive a modeling re-registration strategy to comparable levels of agreement.

17.
18.
The use of stereotactic systems has been one of the main approaches for image-based guidance of the surgical tool within the brain. The main limitation of stereotactic systems is that they are based on preoperative images that might become outdated and invalid during the course of surgery. Ultrasound (US) is considered the most practical and cost-effective intraoperative imaging modality, but US images inherently have a low signal-to-noise ratio. Integrating intraoperative US with stereotactic systems has recently been attempted. In this paper, we present a new system for interactively registering two-dimensional US and three-dimensional magnetic resonance (MR) images. This registration is based on tracking the US probe with a DC magnetic position sensor. We have performed an extensive analysis of the errors of our system by using a custom-built phantom. The registration error between the MR and the position sensor space was found to have a mean value of 1.78 mm and a standard deviation of 0.18 mm. The registration error between US and MR space was dependent on the distance of the target point from the US probe face. For a 3.5-MHz one-dimensional phased-array transducer and a depth of 6 cm, the mean value of the registration error was 2.00 mm and the standard deviation was 0.75 mm. The registered MR images were reconstructed using either zeroth-order or first-order interpolation.
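For readers unfamiliar with the interpolation orders mentioned at the end, the sketch below contrasts zeroth-order (nearest-neighbour) and first-order (bilinear) sampling of a 2-D image at non-integer coordinates; it is a generic illustration, independent of the paper's reconstruction code.

    # Zeroth-order vs. first-order sampling of an image at a fractional location.
    import numpy as np

    def sample_nearest(img, x, y):
        return img[int(round(y)), int(round(x))]

    def sample_bilinear(img, x, y):
        x0, y0 = int(np.floor(x)), int(np.floor(y))
        dx, dy = x - x0, y - y0
        p00, p01 = img[y0, x0], img[y0, x0 + 1]
        p10, p11 = img[y0 + 1, x0], img[y0 + 1, x0 + 1]
        return (p00 * (1 - dx) * (1 - dy) + p01 * dx * (1 - dy)
                + p10 * (1 - dx) * dy + p11 * dx * dy)

    slice_mr = np.arange(16, dtype=float).reshape(4, 4)   # toy 4x4 "MR slice"
    print(sample_nearest(slice_mr, 1.3, 2.6), sample_bilinear(slice_mr, 1.3, 2.6))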

19.
Displaying anatomical and physiological information derived from preoperative medical images in the operating room is critical in image-guided neurosurgery. This paper presents a new approach referred to as augmented virtuality (AV) for displaying intraoperative views of the operative field over three-dimensional (3-D) multimodal preoperative images onto an external screen during surgery. A calibrated stereovision system was set up between the surgical microscope and the binocular tubes. Three-dimensional surface meshes of the operative field were then generated using stereopsis. These reconstructed 3-D surface meshes were directly displayed, without any additional geometrical transform, over preoperative images of the patient in the physical space. Performance evaluation was achieved using a physical skull phantom. Accuracy of the reconstruction method itself was shown to be within 1 mm (median: 0.76 mm ± 0.27 mm), whereas accuracy of the overall approach was shown to be within 3 mm (median: 2.29 mm ± 0.59 mm), including the image-to-physical space registration error. We report the results of six surgical cases where AV was used in conjunction with augmented reality. AV not only enabled vision beyond the cortical surface but also gave an overview of the surgical area. This approach facilitated understanding of the spatial relationship between the operative field and the preoperative multimodal 3-D images of the patient.
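A minimal sketch of the stereopsis step: triangulating matched pixel pairs from two calibrated views into 3-D surface points with OpenCV. The projection matrices and pixel matches below are invented for illustration and do not come from the paper's calibrated microscope rig.

    # Stereo triangulation of matched pixels into 3-D points.
    import cv2
    import numpy as np

    K = np.array([[1000.0, 0.0, 320.0], [0.0, 1000.0, 240.0], [0.0, 0.0, 1.0]])
    P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P_right = K @ np.hstack([np.eye(3), np.array([[-5.0], [0.0], [0.0]])])  # 5 mm baseline

    pts_left = np.array([[320.0, 240.0], [400.0, 250.0]]).T    # 2xN matched pixels
    pts_right = np.array([[270.0, 240.0], [350.5, 250.0]]).T

    X_h = cv2.triangulatePoints(P_left, P_right, pts_left, pts_right)
    X = (X_h[:3] / X_h[3]).T                                   # Nx3 surface points (mm)
    print(X)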

20.
Transcatheter aortic valve implantation is a minimally invasive alternative to open-heart surgery for aortic stenosis in which a stent-based bioprosthetic valve is delivered into the heart on a catheter. Limited visualization during this procedure can lead to severe complications. Improved visualization can be provided by live registration of transesophageal echo (TEE) and fluoroscopy images intraoperatively. Since the TEE probe is always visible in the fluoroscopy image, it is possible to track it using fiducial-based single-perspective pose estimation. In this study, inherent probe tracking performance was assessed, and TEE to fluoroscopy registration accuracy and robustness were evaluated. Results demonstrated probe tracking errors of below 0.6 mm and 0.2°, a 2-D RMS registration error of 1.5 mm, and a tracking failure rate of below 1%. In addition to providing live registration and better accuracy and robustness compared to existing TEE probe tracking methods, this system is designed to be suitable for clinical use. It is fully automatic, requires no additional operating room hardware, does not require intraoperative calibration, maintains existing procedure and imaging workflow without modification, and can be implemented in all cardiac centers at extremely low cost.
