Similar Documents
20 similar documents retrieved (search time: 46 ms)
1.
We describe a registration and tracking technique to integrate cardiac X-ray images and cardiac magnetic resonance (MR) images acquired from a combined X-ray and MR interventional suite (XMR). Optical tracking is used to determine the transformation matrices relating MR image coordinates and X-ray image coordinates. Calibration of X-ray projection geometry and tracking of the X-ray C-arm and table enable three-dimensional (3-D) reconstruction of vessel centerlines and catheters from bi-plane X-ray views. We can, therefore, combine single X-ray projection images with registered projection MR images from a volume acquisition, and we can also display 3-D reconstructions of catheters within a 3-D or multi-slice MR volume. Registration errors were assessed using phantom experiments. Errors in the combined projection images (two-dimensional target registration error, TRE) were found to be 2.4 to 4.2 mm, and the errors in the integrated volume representation (3-D TRE) were found to be 4.6 to 5.1 mm. These errors are clinically acceptable for alignment of images of the great vessels and the chambers of the heart. Results are shown for two patients. The first involves overlay of a catheter used for invasive pressure measurements on an MR volume that provides anatomical context. The second involves overlay of invasive electrode catheters (including a basket catheter) on a tagged MR volume in order to relate electrophysiology to myocardial motion in a patient with an arrhythmia. Visual assessment of these results suggests the errors were of a similar magnitude to those obtained in the phantom measurements.
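Once the projection geometry of the two X-ray views is calibrated, reconstructing a catheter point from bi-plane views reduces to triangulation. A minimal sketch of linear (DLT) triangulation; the toy projection matrices and function name are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two calibrated
    views: P1 and P2 are 3x4 projection matrices, x1 and x2 the 2-D
    image coordinates of the same point in each view."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3-D point is the right singular vector belonging
    # to the smallest singular value; dehomogenize before returning.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]

# Two toy cameras: an identity view and the same camera shifted 1 unit in x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
point = triangulate(P1, P2, (0.2, 0.4), (0.0, 0.4))
```

In practice each centerline or catheter point detected in both views is triangulated this way, after the C-arm pose from the tracker has been folded into the projection matrices.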

2.
This paper describes an autostereoscopic image overlay technique that is integrated into a surgical navigation system to superimpose a real three-dimensional (3-D) image onto the patient via a half-silvered mirror. The images are created by employing a modified version of integral videography (IV), which is an animated extension of integral photography. IV records and reproduces 3-D images using a microconvex lens array and flat display; it can display geometrically accurate 3-D autostereoscopic images and reproduce motion parallax without the need for special devices. The use of semitransparent display devices makes it appear that the 3-D image is inside the patient's body. This is the first report of applying an autostereoscopic display with an image overlay system in surgical navigation. Experiments demonstrated that the fast IV rendering technique and patient-image registration method produce an average registration accuracy of 1.13 mm. Experiments using a target in phantom agar showed that the system can guide a needle toward a target with an average error of 2.6 mm. Improvement in the quality of the IV display will make this system practical and its use will increase surgical accuracy and reduce invasiveness.

3.
The problem of providing surgical navigation using image overlays on the operative scene can be split into four main tasks: calibration of the optical system; registration of preoperative images to the patient; system and patient tracking; and display using a suitable visualization scheme. To achieve a convincing result in the magnified microscope view, a very high alignment accuracy is required. We have simulated an entire image overlay system to establish the most significant sources of error and improved each of the stages involved. The microscope calibration process has been automated. We have introduced bone-implanted markers for registration and incorporated a locking acrylic dental stent (LADS) for patient tracking. The LADS can also provide a less-invasive registration device with mean target error of 0.7 mm in volunteer experiments. These improvements have significantly increased the alignment accuracy of our overlays. Phantom accuracy is 0.3-0.5 mm and clinical overlay errors were 0.5-1.0 mm on the bone fiducials and 0.5-4 mm on target structures. We have improved the graphical representation of the stereo overlays. The resulting system provides three-dimensional surgical navigation for microscope-assisted guided interventions (MAGI).

4.
Surgeries of the skull base require accuracy to safely navigate the critical anatomy. This is particularly the case for endoscopic endonasal skull base surgery (ESBS) where the surgeons work within millimeters of neurovascular structures at the skull base. Today's navigation systems provide approximately 2 mm accuracy. Accuracy is limited by the indirect relationship of the navigation system, the image and the patient. We propose a method to directly track the position of the endoscope using video data acquired from the endoscope camera. Our method first tracks image feature points in the video and reconstructs the image feature points to produce 3D points, and then registers the reconstructed point cloud to a surface segmented from preoperative computed tomography (CT) data. After the initial registration, the system tracks image features and maintains the 2D-3D correspondence of image features and 3D locations. These data are then used to update the current camera pose. We present a method for validation of our system, which achieves submillimeter (0.70 mm mean) target registration error (TRE) results.
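Registering a reconstructed point cloud to a CT-derived surface is typically done with an ICP-style scheme. Its core building block is the least-squares rigid transform between corresponding point sets; the sketch below (Kabsch/Procrustes, a standard technique, not code from the paper) computes it:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (Kabsch/Procrustes) mapping point
    set src onto dst.  Returns rotation R and translation t such that
    dst ~= src @ R.T + t.  In a full ICP loop this step alternates with
    closest-point matching against the segmented surface."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

# Recover a known 60-degree rotation about z plus a translation.
theta = np.pi / 3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
src = np.array([[0.0, 0, 0], [1.0, 0, 0], [0.0, 1, 0], [0.0, 0, 1]])
dst = src @ R_true.T + t_true
R, t = rigid_transform(src, dst)
```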

5.
Registration of intraoperative fluoroscopy images with preoperative 3D CT images can be used for several purposes in image-guided surgery. On the one hand, it can be used to display the position of surgical instruments, which are being tracked by a localizer, in the preoperative CT scan. On the other hand, the registration result can be used to project preoperative planning information or important anatomical structures visible in the CT image on to the fluoroscopy image. For this registration task, a novel voxel-based method in combination with a new similarity measure (pattern intensity) has been developed. The basic concept of the method is explained using the example of 2D/3D registration of a vertebra in an X-ray fluoroscopy image with a 3D CT image. The registration method is described, and the results for a spine phantom are presented and discussed. Registration has been carried out repeatedly with different starting estimates to study the capture range. Information about registration accuracy has been obtained by comparing the registration results with a highly accurate “ground-truth” registration, which has been derived from fiducial markers attached to the phantom prior to imaging. In addition, registration results for different vertebrae have been compared. The results show that the rotation parameters and the shifts parallel to the projection plane can be accurately determined from a single projection. Because of the projection geometry, the accuracy of the height above the projection plane is significantly lower.
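The pattern intensity measure is evaluated on the difference image between the fluoroscopy and a scaled simulated projection of the CT; the registration is good when that difference image contains little local structure. A sketch following the published idea, with illustrative (assumed) values for the constants `sigma` and `radius`:

```python
import numpy as np

def pattern_intensity(diff, sigma=10.0, radius=3):
    """Pattern-intensity similarity score of a difference image
    (fluoroscopy minus scaled pseudo-projection).  Each pixel is compared
    with its neighbours within `radius`; structure-free regions score
    close to the maximum.  sigma and radius are illustrative choices."""
    h, w = diff.shape
    total = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if (dx == 0 and dy == 0) or dx * dx + dy * dy > radius * radius:
                continue
            # Overlapping region of the image and a copy shifted by (dy, dx).
            shifted = diff[max(0, dy):h + min(0, dy), max(0, dx):w + min(0, dx)]
            base = diff[max(0, -dy):h - max(0, dy), max(0, -dx):w - max(0, dx)]
            d = base - shifted
            total += np.sum(sigma ** 2 / (sigma ** 2 + d ** 2))
    return total

# A structure-free difference image scores higher than a noisy one.
rng = np.random.default_rng(0)
flat_score = pattern_intensity(np.zeros((16, 16)))
noisy_score = pattern_intensity(rng.normal(0.0, 50.0, (16, 16)))
```

Because each term saturates at one, a few strongly differing pixels (e.g. an interventional instrument overlapping the vertebra) cannot dominate the score, which is the measure's advantage over plain intensity differences.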

6.
Intraoperative freehand three-dimensional (3-D) ultrasound (3D-US) has been proposed as a noninvasive method for registering bones to a preoperative computed tomography image or computer-generated bone model during computer-aided orthopedic surgery (CAOS). In this technique, an US probe is tracked by a 3-D position sensor and acts as a percutaneous device for localizing the bone surface. However, variations in the acoustic properties of soft tissue, such as the average speed of sound, can introduce significant errors in the bone depth estimated from US images, which limits registration accuracy. We describe a new self-calibrating approach to US-based bone registration that addresses this problem, and demonstrate its application within a standard registration scheme. Using realistic US image data acquired from 6 femurs and 3 pelves of intact human cadavers, and accurate Gold Standard registration transformations calculated using bone-implanted fiducial markers, we show that self-calibrating registration is significantly more accurate than a standard method, yielding an average root mean squared target registration error of 1.6 mm. We conclude that self-calibrating registration results in significant improvements in registration accuracy for CAOS applications over conventional approaches where calibration parameters of the 3D-US system remain fixed to values determined using a preoperative phantom-based calibration.
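The depth error the paper targets comes from the scanner's fixed speed-of-sound assumption: echo time is converted to depth with a nominal speed, so a different true average speed scales every depth by the speed ratio. A one-line illustration (the speed values are assumptions for the example, not figures from the paper):

```python
def corrected_depth(displayed_depth_mm, assumed_speed=1540.0, true_speed=1575.0):
    """An ultrasound scanner converts echo time to depth assuming a
    nominal speed of sound (commonly 1540 m/s).  If the true average
    speed in the overlying tissue differs, the displayed bone depth is
    off by the ratio of the two speeds; self-calibrating registration
    estimates this scale factor instead of fixing it preoperatively."""
    return displayed_depth_mm * true_speed / assumed_speed

# At 40 mm displayed depth, a 35 m/s speed mismatch shifts the bone
# surface by roughly 0.9 mm.
bias_mm = corrected_depth(40.0) - 40.0
```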

7.
Fluoroscopic overlay images rendered from preoperative volumetric data can provide additional anatomical details to guide physicians during catheter ablation procedures for treatment of atrial fibrillation (AFib). As these overlay images are often compromised by cardiac and respiratory motion, motion compensation methods are needed to keep the overlay images in sync with the fluoroscopic images. So far, these approaches have either required simultaneous biplane imaging for 3-D motion compensation, or in case of monoplane X-ray imaging, provided only a limited 2-D functionality. To overcome the downsides of the previously suggested methods, we propose an approach that facilitates a full 3-D motion compensation even if only monoplane X-ray images are available. To this end, we use a training phase that employs a biplane sequence to establish a patient-specific motion model. Afterwards, a constrained model-based 2-D/3-D registration method is used to track a circumferential mapping catheter. This device is commonly used for AFib catheter ablation procedures. Based on experiments on real patient data, we found that our constrained monoplane 2-D/3-D registration outperformed the unconstrained counterpart and yielded an average 2-D tracking error of 0.6 mm and an average 3-D tracking error of 1.6 mm. The unconstrained 2-D/3-D registration technique yielded a similar 2-D performance, but the 3-D tracking error increased to 3.2 mm, mostly due to wrongly estimated 3-D motion components in the X-ray view direction. Compared to the conventional 2-D monoplane method, the proposed method provides a more seamless workflow by removing the need for catheter model re-initialization otherwise required when the C-arm view orientation changes. In addition, the proposed method can be straightforwardly combined with the previously introduced biplane motion compensation technique to obtain a good trade-off between accuracy and radiation dose reduction.

8.
Surgical navigation systems are used widely among all fields of modern medicine, including, but not limited to, ENT and maxillofacial surgery. As a fundamental prerequisite for image-guided surgery, intraoperative registration, which maps image to patient coordinates, has been subject to many studies and developments. While registration methods have evolved from invasive procedures like fixed stereotactic frames and implanted fiducial markers toward surface-based registration and noninvasive markers fixed to the patient's skin, even the most sophisticated registration techniques produce an imperfect result. Due to errors introduced during the registration process, the projection of navigated instruments into image data deviates by up to several millimeters from the actual position, depending on the applied registration method and the distance between the instrument and the fiducial markers. We propose a method that automatically and continually improves registration accuracy during intraoperative navigation after the actual registration process has been completed. The projections of navigated instruments into image data are inspected and validated by the navigation software. Errors in image-to-patient registration are identified by calculating intersections between the virtual instruments' axes and surfaces of hard bone tissue extracted from the patient's image data. The information gained from the identification of such registration errors is then used to improve registration accuracy by adding an additional pair of registration points at every location where an error has been detected. The proposed method was integrated into a surgical navigation system based on paired points registration with anatomical landmarks. In experiments, registrations with deliberately misplaced point pairs were corrected by the automatic error correction. Results showed an improvement in registration quality in all cases.
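The error-detection step above hinges on intersecting a virtual instrument axis with the extracted bone surface. For a triangulated surface this is a per-triangle ray intersection test; below is a sketch using the standard Moller-Trumbore algorithm, a common choice for this task but not confirmed as the paper's actual implementation:

```python
import numpy as np

def ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection: the geometric test
    needed to decide whether a navigated instrument's axis crosses a
    triangle of the bone surface mesh.  Returns the distance along the
    ray, or None when there is no intersection."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = e1.dot(p)
    if abs(det) < eps:
        return None                      # ray parallel to triangle plane
    s = origin - v0
    u = s.dot(p) / det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = direction.dot(q) / det
    if v < 0.0 or u + v > 1.0:
        return None
    t = e2.dot(q) / det
    return t if t > eps else None

# A ray pointing straight down hits the unit triangle in the z=0 plane.
v0, v1, v2 = np.array([0.0, 0, 0]), np.array([1.0, 0, 0]), np.array([0.0, 1, 0])
hit = ray_triangle(np.array([0.2, 0.2, 1.0]), np.array([0.0, 0.0, -1.0]), v0, v1, v2)
miss = ray_triangle(np.array([2.0, 2.0, 1.0]), np.array([0.0, 0.0, -1.0]), v0, v1, v2)
```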

9.
In three-dimensional reconstruction measurement based on color pseudo-random coding, the camera's limited field of view makes it impossible to capture a clear image of the coded projection on the whole surface of the measured object in a single shot, so the acquired images must be registered and stitched. Conventional registration methods operate on gray-scale images, whereas pseudo-random coded projection images carry distinctive color features. Exploiting these features, a new registration method for this kind of coded projection image is proposed, which registers such images well.

10.
Immersive projection technology has become very popular as a virtual reality display system. A 2.5-D video avatar method was proposed and developed. The 2.5-D video avatar was created using a depth map generated by a stereo camera, and it was superimposed on the shared virtual world in real time. A 2.5-D video avatar was also transmitted between two immersive projection displays, computer augmented booth for image navigation (CABIN) and COSMOS, which were connected by a high bandwidth ATM network. In addition, we experimentally evaluated the accuracy of pointing when using the 2.5-D video avatar.

11.
Image overlay guidance for needle insertion in CT scanner
We present an image overlay system to aid needle insertion procedures in computed tomography (CT) scanners. The device consists of a display and a semitransparent mirror that is mounted on the gantry. Looking at the patient through the mirror, the CT image appears to be floating inside the patient with correct size and position, thereby providing the physician with two-dimensional (2-D) "X-ray vision" to guide needle insertions. The physician inserts the needle following the optimal path identified in the CT image rendered on the display and, thus, reflected in the mirror. The system promises to reduce X-ray dose, patient discomfort, and procedure time by significantly reducing faulty insertion attempts. It may also increase needle placement accuracy. We report the design and implementation of the image overlay system followed by the results of phantom and cadaver experiments in several clinical applications.

12.
Accurate and fast localization of a predefined target region inside the patient is an important component of many image-guided therapy procedures. This problem is commonly solved by registration of intraoperative 2-D projection images to 3-D preoperative images. If the patient is not fixed during the intervention, the 2-D image acquisition is repeated several times during the procedure, and the registration problem can be cast instead as a 3-D tracking problem. To solve the 3-D problem, we propose in this paper to apply 2-D region tracking to first recover the components of the transformation that are in-plane to the projections. The 2-D motion estimates of all projections are backprojected into 3-D space, where they are then combined into a consistent estimate of the 3-D motion. We compare this method to intensity-based 2-D to 3-D registration and a combination of 2-D motion backprojection followed by a 2-D to 3-D registration stage. Using clinical data with a fiducial marker-based gold-standard transformation, we show that our method is capable of accurately tracking vertebral targets in 3-D from 2-D motion measured in X-ray projection images. Using a standard tracking algorithm (hyperplane tracking), tracking is achieved at video frame rates but fails relatively often (32% of all frames tracked with target registration error (TRE) better than 1.2 mm, 82% of all frames tracked with TRE better than 2.4 mm). With intensity-based 2-D to 2-D image registration using normalized mutual information (NMI) and pattern intensity (PI), accuracy and robustness are substantially improved. NMI tracked 82% of all frames in our data with TRE better than 1.2 mm and 96% of all frames with TRE better than 2.4 mm. This comes at the cost of a reduced frame rate: 1.7 s average processing time per frame and projection device. Results using PI were slightly more accurate, but required on average 5.4 s per frame. These results are still substantially faster than 2-D to 3-D registration. We conclude that motion backprojection from 2-D motion tracking is an accurate and efficient method for tracking 3-D target motion, but tracking 2-D motion accurately and robustly remains a challenge.
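Combining the backprojected 2-D motions into one 3-D estimate can be posed as a small least-squares problem: each projection's image axes constrain the components of the 3-D translation lying in that view's plane. A simplified orthographic, translation-only sketch of this idea, not the paper's exact formulation:

```python
import numpy as np

def backproject_motion(views):
    """Combine in-plane 2-D displacements measured in several projections
    into one 3-D translation (orthographic small-motion sketch).  Each
    view contributes its image axes u, v (unit 3-vectors) and the tracked
    2-D displacement (du, dv); every measurement constrains one component
    of the 3-D motion, and the views are merged by linear least squares."""
    rows, rhs = [], []
    for u, v, (du, dv) in views:
        rows.extend([u, v])
        rhs.extend([du, dv])
    t, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return t

# Two orthogonal toy views observing a target that moved by (1, 2, 3).
views = [
    (np.array([1.0, 0, 0]), np.array([0.0, 1, 0]), (1.0, 2.0)),  # frontal view
    (np.array([1.0, 0, 0]), np.array([0.0, 0, 1]), (1.0, 3.0)),  # lateral view
]
motion = backproject_motion(views)
```

A single view leaves the out-of-plane component unconstrained, which is why monoplane setups recover 3-D motion poorly; two well-separated views make the system full rank.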

13.
A major limitation of the use of endoscopes in minimally invasive surgery is the lack of relative context between the endoscope and its surroundings. The purpose of this work was to fuse images obtained from a tracked endoscope to surfaces derived from three-dimensional (3-D) preoperative magnetic resonance or computed tomography (CT) data, for assistance in surgical planning, training and guidance. We extracted polygonal surfaces from preoperative CT images of a standard brain phantom and digitized endoscopic video images from a tracked neuro-endoscope. The optical properties of the endoscope were characterized using a simple calibration procedure. Registration of the phantom (physical space) and CT images (preoperative image space) was accomplished using fiducial markers that could be identified both on the phantom and within the images. The endoscopic images were corrected for radial lens distortion and then mapped onto the extracted surfaces via a two-dimensional (2-D) to 3-D mapping algorithm. The optical tracker has an accuracy of about 0.3 mm at its centroid, which allows the endoscope tip to be localized to within 1.0 mm. The mapping operation allows multiple endoscopic images to be "painted" onto the 3-D brain surfaces, as they are acquired, in the correct anatomical position. This allows panoramic and stereoscopic visualization, as well as navigation of the 3-D surface, painted with multiple endoscopic views, from arbitrary perspectives.
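Correcting radial lens distortion, as done before the texture mapping above, usually means inverting a polynomial distortion model, which has no closed-form inverse but yields to fixed-point iteration. A sketch using the common model x_d = x_u(1 + k1 r^2 + k2 r^4) in normalized image coordinates; the model and coefficients are standard assumptions, not necessarily the paper's exact calibration:

```python
def undistort(xd, yd, k1, k2=0.0, iters=15):
    """Invert the polynomial radial-distortion model
    x_d = x_u * (1 + k1*r^2 + k2*r^4) by fixed-point iteration, where
    (xd, yd) are normalized distorted coordinates.  Endoscopes typically
    show strong barrel distortion (negative k1)."""
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / scale, yd / scale
    return xu, yu

# Distort a point with the forward model, then recover it.
k1 = -0.2
xu0, yu0 = 0.3, 0.2
s = 1.0 + k1 * (xu0 ** 2 + yu0 ** 2)
xu, yu = undistort(xu0 * s, yu0 * s, k1)
```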

14.
X-ray fluoroscopically guided cardiac electrophysiological procedures are routinely carried out for diagnosis and treatment of cardiac arrhythmias. X-ray images have poor soft tissue contrast and, for this reason, overlay of static 3-D roadmaps derived from preprocedural volumetric data can be used to add anatomical information. However, the registration between the 3-D roadmap and the 2-D X-ray image can be compromised by patient respiratory motion. Three methods were designed and evaluated to correct for respiratory motion using features in the 2-D X-ray images. The first method is based on tracking either the diaphragm or the heart border using the image intensity in a region of interest. The second method detects the tracheal bifurcation using the generalized Hough transform and a 3-D model derived from 3-D preoperative volumetric data. The third method is based on tracking the coronary sinus (CS) catheter. This method uses blob detection to find all possible catheter electrodes in the X-ray image. A cost function is applied to select one CS catheter from all catheter-like objects. All three methods were applied to X-ray images from 18 patients undergoing radiofrequency ablation for the treatment of atrial fibrillation. The 2-D target registration errors (TRE) at the pulmonary veins were calculated to validate the methods. A TRE of 1.6 mm ± 0.8 mm was achieved for diaphragm tracking; 1.7 mm ± 0.9 mm for heart border tracking; 1.9 mm ± 1.0 mm for trachea tracking; and 1.8 mm ± 0.9 mm for CS catheter tracking. We present a comprehensive comparison between the techniques in terms of robustness, as computed by tracking errors, and accuracy, as computed by TRE using two independent approaches.
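The TRE figures quoted above are summary statistics over per-frame target errors. For clarity, a minimal sketch of how such mean ± standard deviation values are computed from tracked and gold-standard target positions (illustrative helper, not from the paper):

```python
import numpy as np

def tre_stats(tracked, gold):
    """2-D target registration error: Euclidean distance between each
    motion-compensated target position and its gold-standard location,
    summarized as (mean, standard deviation) -- the form in which
    results such as 1.6 mm +/- 0.8 mm are reported."""
    err = np.linalg.norm(np.asarray(tracked, float) - np.asarray(gold, float), axis=1)
    return err.mean(), err.std()

# Two targets, off by 1 mm and 2 mm respectively.
mean_tre, std_tre = tre_stats([[1.0, 0.0], [0.0, 2.0]], [[0.0, 0.0], [0.0, 0.0]])
```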

15.
In order to use pre-operative images during an intervention for navigation, they must be registered to the patient's co-ordinate system in the operating theatre or to an intra-operative image. For the registration to be valid in the case of patient movements, the registration must be updated or the patient movement must be tracked. One problem in this area is the registration of intra-operatively acquired X-ray fluoroscopies with 3D CT images obtained before the intervention as well as motion tracking for this setup. The result can be used to support the placement of pedicle screws in spine surgery or aortic endoprostheses in transfemoral endovascular aneurysm management (TEAM). The different approaches to 2D/3D registration are discussed and a novel voxel-based method is presented: using a small part of the CT image covering only the vertebra of interest, pseudo-projections are computed and the resulting vertebra template is compared to the X-ray projection using a new similarity measure which is called pattern intensity. Application, performance and registration accuracy are discussed and demonstrated by application to images of a TEAM procedure and of a spine phantom.

16.
One of the most important technical challenges in image-guided intervention is to obtain a precise transformation between the intrainterventional patient's anatomy and corresponding preinterventional 3-D image on which the intervention was planned. This goal can be achieved by acquiring intrainterventional 2-D images and matching them to the preinterventional 3-D image via 3-D/2-D image registration. A novel 3-D/2-D registration method is proposed in this paper. The method is based on robustly matching 3-D preinterventional image gradients and coarsely reconstructed 3-D gradients from the intrainterventional 2-D images. To improve the robustness of finding the correspondences between the two sets of gradients, hypothetical correspondences are searched for along normals to anatomical structures in 3-D images, while the final correspondences are established in an iterative process, combining the robust random sample consensus algorithm (RANSAC) and a special gradient matching criterion function. The proposed method was evaluated using the publicly available standardized evaluation methodology for 3-D/2-D registration, consisting of 3-D rotational X-ray, computed tomography, magnetic resonance (MR), and 2-D X-ray images of two spine segments, and standardized evaluation criteria. In this way, the proposed method could be objectively compared to the intensity, gradient, and reconstruction-based registration methods. The obtained results indicate that the proposed method performs favorably both in terms of registration accuracy and robustness. The method is especially superior when just a few X-ray images and when MR preinterventional images are used for registration, which are important advantages for many clinical applications.
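RANSAC, the consensus algorithm the paper combines with its gradient matching criterion, is easiest to see on the classic line-fitting task: fit a model to a random minimal sample, count inliers, and keep the model with the largest consensus set. A minimal sketch of that general principle (not the paper's gradient-matching variant):

```python
import random

def ransac_line(points, iters=200, tol=0.5, seed=0):
    """Minimal RANSAC: repeatedly fit a line y = a*x + b to two random
    points and keep the (a, b) supported by the most inliers, i.e. points
    whose vertical distance to the line is below tol."""
    rng = random.Random(seed)
    best, best_inliers = None, []
    for _ in range(iters):
        (x1, y1), (x2, y2) = rng.sample(points, 2)
        if x1 == x2:
            continue                     # degenerate vertical sample
        a = (y2 - y1) / (x2 - x1)        # slope
        b = y1 - a * x1                  # intercept
        inliers = [p for p in points if abs(p[1] - (a * p[0] + b)) < tol]
        if len(inliers) > len(best_inliers):
            best, best_inliers = (a, b), inliers
    return best, best_inliers

# Ten points on y = 2x + 1 plus three gross outliers.
pts = [(x, 2 * x + 1) for x in range(10)] + [(0, 50), (5, -40), (9, 100)]
model, inliers = ransac_line(pts)
```

The appeal for registration is the same as here: a handful of wildly wrong correspondences cannot pull the estimate off, because they never join the consensus set.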

17.
The accuracy of image-guided neurosurgery generally suffers from brain deformations due to intraoperative changes. These deformations cause significant changes of the anatomical geometry (organ shape and spatial interorgan relations), thus making intraoperative navigation based on preoperative images error prone. In order to improve the navigation accuracy, we developed a biomechanical model of the human head based on the finite element method, which can be employed for the correction of preoperative images to cope with the deformations occurring during surgical interventions. At the current stage of development, the two-dimensional (2-D) implementation of the model comprises two different materials, though the theory holds for the three-dimensional (3-D) case and is capable of dealing with an arbitrary number of different materials. For the correction of a preoperative image, a set of homologous landmarks must be specified which determine correspondences. These correspondences can be easily integrated into the model and are maintained throughout the computation of the deformation of the preoperative image. The necessary material parameter values have been determined through a comprehensive literature study. Our approach has been tested for the case of synthetic images and yields physically plausible deformation results. Additionally, we carried out registration experiments with a preoperative MR image of the human head and a corresponding postoperative image simulating an intraoperative image. We found that our approach yields good prediction results, even in the case when correspondences are given in a relatively small area of the image only.

18.
Displaying anatomical and physiological information derived from preoperative medical images in the operating room is critical in image-guided neurosurgery. This paper presents a new approach referred to as augmented virtuality (AV) for displaying intraoperative views of the operative field over three-dimensional (3-D) multimodal preoperative images onto an external screen during surgery. A calibrated stereovision system was set up between the surgical microscope and the binocular tubes. Three-dimensional surface meshes of the operative field were then generated using stereopsis. These reconstructed 3-D surface meshes were directly displayed without any additional geometrical transform over preoperative images of the patient in the physical space. Performance evaluation was achieved using a physical skull phantom. Accuracy of the reconstruction method itself was shown to be within 1 mm (median: 0.76 mm ± 0.27), whereas accuracy of the overall approach was shown to be within 3 mm (median: 2.29 mm ± 0.59), including the image-to-physical space registration error. We report the results of six surgical cases where AV was used in conjunction with augmented reality. AV not only enabled vision beyond the cortical surface but also gave an overview of the surgical area. This approach facilitated understanding of the spatial relationship between the operative field and the preoperative multimodal 3-D images of the patient.

19.
The two challenges for three-dimensional (3-D) display are designing the optics for wide fields of view, and delivering pixels at the rates needed to support this. Getting such pixel rates at low cost is merely an extension of the key challenge for two-dimensional (2-D) displays, and the cost advantage of projection in this respect over alternatives increases considerably at the data rates needed for 3-D. Both 2-D and 3-D projection concepts are bulky, so the authors describe how to project images within a flat panel. Flat projection is not only inexpensive: it can generate virtual as well as real images, and allows the screen to take images and input from the viewer as well as vice versa. Real images are created by pointing a projector into a wedge-shaped light guide, and either the projector or the screen can be shuttered in order to time-multiplex a 3-D image on a large screen. Virtual images are created by pointing a projector into a slab embossed with a grating and can deliver the collimated illumination needed if a liquid crystal display is to time-multiplex a 3-D image with the high off-screen resolution provided by holograms.

20.
This paper presents a new method for image-guided surgery called image-enhanced endoscopy. Registered real and virtual endoscopic images (perspective volume renderings generated from the same view as the endoscope camera using a preoperative image) are displayed simultaneously; when combined with the ability to vary tissue transparency in the virtual images, this provides surgeons with the ability to see beyond visible surfaces and, thus, provides additional exposure during surgery. A mount with four photoreflective spheres is rigidly attached to the endoscope and its position and orientation is tracked using an optical position sensor. Generation of virtual images that are accurately registered to the real endoscopic images requires calibration of the tracked endoscope. The calibration process determines intrinsic parameters (that represent the projection of three-dimensional points onto the two-dimensional endoscope camera imaging plane) and extrinsic parameters (that represent the transformation from the coordinate system of the tracker mount attached to the endoscope to the coordinate system of the endoscope camera), and determines radial lens distortion. The calibration routine is fast, automatic, accurate and reliable, and is insensitive to rotational orientation of the endoscope. The routine automatically detects, localizes, and identifies dots in a video image snapshot of the calibration target grid and determines the calibration parameters from the sets of known physical coordinates and localized image coordinates of the target grid dots. Using nonlinear lens-distortion correction, which can be performed at real-time rates (30 frames per second), the mean projection error is less than 0.5 mm at distances up to 25 mm from the endoscope tip, and less than 1.0 mm up to 45 mm. Experimental measurements and point-based registration error theory show that the tracking error is about 0.5-0.7 mm at the tip of the endoscope and less than 0.9 mm for all points in the field of view of the endoscope camera at a distance of up to 65 mm from the tip. It is probable that much of the projection error is due to endoscope tracking error rather than calibration error. Two examples of clinical applications are presented to illustrate the usefulness of image-enhanced endoscopy. This method is a useful addition to conventional image-guidance systems, which generally show only the position of the tip (and sometimes the orientation) of a surgical instrument or probe on reformatted image slices.
