Similar Articles
1.
A major limitation of the use of endoscopes in minimally invasive surgery is the lack of relative context between the endoscope and its surroundings. The purpose of this work was to fuse images obtained from a tracked endoscope to surfaces derived from three-dimensional (3-D) preoperative magnetic resonance or computed tomography (CT) data, for assistance in surgical planning, training, and guidance. We extracted polygonal surfaces from preoperative CT images of a standard brain phantom and digitized endoscopic video images from a tracked neuro-endoscope. The optical properties of the endoscope were characterized using a simple calibration procedure. Registration of the phantom (physical space) and CT images (preoperative image space) was accomplished using fiducial markers that could be identified both on the phantom and within the images. The endoscopic images were corrected for radial lens distortion and then mapped onto the extracted surfaces via a two-dimensional (2-D) to 3-D mapping algorithm. The optical tracker has an accuracy of about 0.3 mm at its centroid, which allows the endoscope tip to be localized to within 1.0 mm. The mapping operation allows multiple endoscopic images to be "painted" onto the 3-D brain surfaces, as they are acquired, in the correct anatomical position. This allows panoramic and stereoscopic visualization, as well as navigation of the 3-D surface, painted with multiple endoscopic views, from arbitrary perspectives.
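The radial lens distortion correction described above can be sketched with a one-coefficient polynomial model. This is a common simplification, not the paper's actual calibration procedure; the function name and the single-coefficient model are illustrative assumptions.

```python
def undistort_point(xd, yd, k1, cx=0.0, cy=0.0):
    """Correct a radially distorted image point with a one-coefficient
    polynomial model: x_u = x_d * (1 + k1 * r^2), where r is the radius
    from the distortion centre (cx, cy).  Illustrative sketch only."""
    dx, dy = xd - cx, yd - cy
    r2 = dx * dx + dy * dy
    scale = 1.0 + k1 * r2
    return cx + dx * scale, cy + dy * scale
```

In practice the coefficient k1 (and usually higher-order terms) would come from the endoscope calibration step mentioned in the abstract.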

2.
In image-guided therapy, high-quality preoperative images serve for planning and simulation, and intraoperatively as "background", onto which models of surgical instruments or radiation beams are projected. The link between a preoperative image and the intraoperative physical space of the patient is established by image-to-patient registration. In this paper, we present a novel 3-D/2-D registration method. First, a 3-D image is reconstructed from a few 2-D X-ray images; next, the preoperative 3-D image is brought into the best possible spatial correspondence with the reconstructed image by optimizing a similarity measure (SM). Because the quality of the reconstructed image is generally low, we introduce a novel SM, which is able to cope with low image quality as well as with different imaging modalities. The novel 3-D/2-D registration method has been evaluated and compared to the gradient-based method (GBM) using standardized evaluation methodology and publicly available 3-D computed tomography (CT), 3-D rotational X-ray (3DRX), magnetic resonance (MR), and 2-D X-ray images of two spine phantoms, for which gold standard registrations were known. For each of the 3DRX, CT, or MR images and each set of X-ray images, 1600 registrations were performed from starting positions, defined by the mean target registration error (mTRE), randomly generated and uniformly distributed in the interval of 0-20 mm around the gold standard. The capture range was defined as the distance from the gold standard for which the final TRE was less than 2 mm in at least 95% of all cases. In terms of success rate as a function of initial misalignment, and in terms of capture range, the proposed method outperformed the GBM. TREs of the novel method and the GBM were approximately the same. For the registration of 3DRX and CT images to X-ray images, as few as 2-3 X-ray views were sufficient to obtain approximately 0.4 mm TREs, a 7-9 mm capture range, and 80%-90% successful registrations. To obtain similar results for MR to X-ray registrations, an image reconstructed from at least 11 X-ray images was required. Reconstructions from more than 11 images had no effect on the registration results.
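The evaluation metric used throughout this entry, the target registration error, can be sketched as the mean distance between target points mapped by the gold-standard transform and by the transform under test. The rigid-transform representation below (rotation matrix plus translation) is an assumption for illustration.

```python
import math

def apply_rigid(T, p):
    """Apply a rigid transform T = (R, t) to a 3-D point p."""
    R, t = T
    return tuple(sum(R[i][j] * p[j] for j in range(3)) + t[i]
                 for i in range(3))

def mean_tre(T_gold, T_est, targets):
    """Mean target registration error: average distance between target
    points mapped by the gold-standard and the estimated transform."""
    err = 0.0
    for p in targets:
        a, b = apply_rigid(T_gold, p), apply_rigid(T_est, p)
        err += math.dist(a, b)
    return err / len(targets)
```

A starting position displaced from the gold standard would be characterized by evaluating this quantity over anatomical target points, as in the mTRE protocol the abstract cites.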

3.
Image overlay projection is a form of augmented reality that allows surgeons to view underlying anatomical structures directly on the patient surface. It improves the intuitiveness of computer-aided surgery by removing the need for sight diversion between the patient and a display screen, and has been reported to assist in 3-D understanding of anatomical structures and the identification of target and critical structures. Challenges in the development of image overlay technologies for surgery remain in the projection setup. Calibration, patient registration, view direction, and projection obstruction remain unsolved limitations of image overlay techniques. In this paper, we propose a novel, portable, handheld, navigated image overlay device based on miniature laser projection technology that allows images of 3-D patient-specific models to be projected directly onto the organ surface intraoperatively without the need for intrusive hardware around the surgical site. The device can be integrated into a navigation system, thereby exploiting existing patient registration and model generation solutions. The position of the device is tracked by the navigation system's position sensor and used to project geometrically correct images from any position within the workspace of the navigation system. The projector was calibrated using modified camera calibration techniques, and images for projection are rendered using a virtual camera defined by the projector's extrinsic parameters. Verification of the device's projection accuracy yielded a mean projection error of 1.3 mm. Visibility testing of the projection performed on pig liver tissue found the device suitable for the display of anatomical structures on the organ surface. The feasibility of use within the surgical workflow was assessed during open liver surgery. We show that the device could be quickly and unobtrusively deployed within the sterile environment.
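The "virtual camera defined by the projector's extrinsic parameters" amounts to a standard pinhole projection: a 3-D model point is moved into the projector's coordinate frame by its pose (R, t) and then projected through its intrinsics K. A minimal sketch, with hypothetical parameter names:

```python
def project(K, R, t, X):
    """Project 3-D point X through a pinhole model with intrinsics K
    and extrinsic pose (R, t) -- the 'virtual camera' used to render
    geometrically correct overlay images.  Illustrative sketch."""
    # transform the point into camera (projector) coordinates
    Xc = [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]
    # perspective division and application of focal length / principal point
    u = K[0][0] * Xc[0] / Xc[2] + K[0][2]
    v = K[1][1] * Xc[1] / Xc[2] + K[1][2]
    return u, v
```

Rendering the overlay from the tracked device pose then means re-evaluating (R, t) from the navigation system each frame.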

4.
We present a system to assist in the treatment of cardiac arrhythmias by catheter ablation. A patient-specific three-dimensional (3-D) anatomical model, constructed from magnetic resonance images, is merged with fluoroscopic images in an augmented reality environment that enables the transfer of electrocardiography (ECG) measurements and cardiac activation times onto the model. Accurate mapping is realized through the combination of: a new calibration technique, adapted to catheter-guided treatments; a visual matching registration technique, allowing the electrophysiologist to align the model with contrast-enhanced images; and the use of virtual catheters, which enable the annotation of multiple ECG measurements on the model. These annotations can be visualized by color coding on the patient model. We provide an accuracy analysis of each of these components independently. Based on simulation and experiments, we determined a segmentation error of 0.6 mm, a calibration error on the order of 1 mm, and a target registration error of 1.04 +/- 0.45 mm. The system provides a 3-D visualization of the cardiac activation pattern, which may facilitate and improve diagnosis and treatment of the arrhythmia. Because of its low cost and comparable capabilities, we believe our approach can compete with existing commercial solutions, which rely on dedicated hardware and costly catheters. We provide qualitative results of the first clinical use of the system in 11 ablation procedures.

5.
3-D/2-D registration of CT and MR to X-ray images   (cited by 6: 0 self-citations, 6 by others)
A crucial part of image-guided therapy is registration of preoperative and intraoperative images, by which the precise position and orientation of the patient's anatomy is determined in three dimensions. This paper presents a novel approach to register three-dimensional (3-D) computed tomography (CT) or magnetic resonance (MR) images to one or more two-dimensional (2-D) X-ray images. The registration is based solely on the information present in the 2-D and 3-D images. It does not require fiducial markers, intraoperative X-ray image segmentation, or time-consuming construction of digitally reconstructed radiographs. The originality of the approach lies in using normals to bone surfaces, preoperatively defined in 3-D MR or CT data, and gradients of intraoperative X-ray images at locations defined by the X-ray source and 3-D surface points. The registration consists of finding the rigid transformation of a CT or MR volume that provides the best match between surface normals and back-projected gradients, considering their amplitudes and orientations. We have thoroughly validated our registration method by using MR, CT, and X-ray images of a cadaveric lumbar spine phantom for which "gold standard" registration was established by means of fiducial markers, and its accuracy assessed by target registration error. Volumes of interest, containing single vertebrae L1-L5, were registered to different pairs of X-ray images from different starting positions, chosen randomly and uniformly around the "gold standard" position. CT/X-ray (MR/X-ray) registration, which is fast, was successful in more than 91% (82%, except for L1) of trials if started from the "gold standard" translated or rotated by less than 6 mm or 17 degrees (3 mm or 8.6 degrees), respectively. Root-mean-square target registration errors were below 0.5 mm for the CT to X-ray registration and below 1.4 mm for MR to X-ray registration.

6.
Minimally invasive robotically assisted cardiac surgical systems currently do not routinely employ 3-D image guidance. However, preoperative magnetic resonance and computed tomography (CT) images have the potential to be used in this role, if appropriately registered with the patient anatomy and animated synchronously with the motion of the actual heart. This paper discusses the fusion of optical images of a beating heart phantom obtained from an optically tracked endoscope, with volumetric images of the phantom created from a dynamic CT dataset. High-quality preoperative dynamic CT images are created by first extracting the motion parameters of the heart from the series of temporal frames, and then applying this information to animate a high-quality heart image acquired at end systole. Temporal synchronization of the endoscopic and CT model is achieved by selecting the appropriate CT image from the dynamic set, based on an electrocardiographic trigger signal. The spatial error between the optical and virtual images is 1.4 +/- 1.1 mm, while the time discrepancy is typically 50-100 ms. Index terms: image guidance, image warping, minimally invasive cardiac surgery, virtual endoscopy, virtual reality.
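The ECG-triggered frame selection described above can be sketched as a nearest-phase lookup: given the time since the last R-wave trigger and the R-R interval, pick the CT frame whose cardiac phase is closest (with wrap-around, since phase is cyclic). Function and parameter names are hypothetical.

```python
def select_ct_frame(frame_phases, time_since_trigger, rr_interval):
    """Pick the index of the CT frame whose cardiac phase (fraction of
    the R-R interval) best matches the elapsed time since the ECG
    trigger.  Phase distance wraps around the cycle.  Sketch only."""
    phase = (time_since_trigger % rr_interval) / rr_interval
    return min(range(len(frame_phases)),
               key=lambda i: min(abs(frame_phases[i] - phase),
                                 1.0 - abs(frame_phases[i] - phase)))
```

In the system described, the selected frame would then be rendered behind (or fused with) the corresponding endoscopic video frame.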

7.
Fluoroscopic overlay images rendered from preoperative volumetric data can provide additional anatomical details to guide physicians during catheter ablation procedures for treatment of atrial fibrillation (AFib). As these overlay images are often compromised by cardiac and respiratory motion, motion compensation methods are needed to keep the overlay images in sync with the fluoroscopic images. So far, these approaches have either required simultaneous biplane imaging for 3-D motion compensation, or, in the case of monoplane X-ray imaging, provided only a limited 2-D functionality. To overcome the downsides of the previously suggested methods, we propose an approach that facilitates full 3-D motion compensation even if only monoplane X-ray images are available. To this end, we use a training phase that employs a biplane sequence to establish a patient-specific motion model. Afterwards, a constrained model-based 2-D/3-D registration method is used to track a circumferential mapping catheter. This device is commonly used for AFib catheter ablation procedures. In experiments on real patient data, we found that our constrained monoplane 2-D/3-D registration outperformed the unconstrained counterpart and yielded an average 2-D tracking error of 0.6 mm and an average 3-D tracking error of 1.6 mm. The unconstrained 2-D/3-D registration technique yielded a similar 2-D performance, but the 3-D tracking error increased to 3.2 mm, mostly due to wrongly estimated 3-D motion components in the X-ray view direction. Compared to the conventional 2-D monoplane method, the proposed method provides a more seamless workflow by removing the need for catheter model re-initialization otherwise required when the C-arm view orientation changes. In addition, the proposed method can be straightforwardly combined with the previously introduced biplane motion compensation technique to obtain a good trade-off between accuracy and radiation dose reduction.

8.
This paper describes augmented reality visualization for the guidance of breast-conserving cancer surgery using ultrasonic images acquired in the operating room just before surgical resection. Using an optical three-dimensional (3-D) position sensor, the position and orientation of each ultrasonic cross section are precisely measured to reconstruct geometrically accurate 3-D tumor models from the acquired ultrasonic images. Similarly, the 3-D position and orientation of a video camera are obtained to integrate video and ultrasonic images in a geometrically accurate manner. Superimposing the 3-D tumor models onto live video images of the patient's breast enables the surgeon to perceive the exact 3-D position of the tumor, including irregular cancer invasions which cannot be perceived by touch, as if it were visible through the breast skin. Using the resultant visualization, the surgeon can determine the region for surgical resection in a more objective and accurate manner, thereby minimizing the risk of a relapse and maximizing breast conservation. The system was shown to be effective in experiments using phantom and clinical data.
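Reconstructing 3-D tumor models from tracked 2-D ultrasound slices rests on mapping each image pixel into world coordinates via the measured probe pose. A minimal sketch, assuming the image plane is the probe's local x-y plane and a uniform pixel size (both assumptions; names are hypothetical):

```python
def ultrasound_pixel_to_world(u, v, scale, R, t):
    """Map an ultrasound image pixel (u, v) into 3-D world coordinates
    using the tracked probe pose (R, t) and pixel size `scale`
    (mm/pixel).  The slice is assumed to lie in the probe's local
    x-y plane -- an illustrative simplification."""
    local = (u * scale, v * scale, 0.0)
    return tuple(sum(R[i][j] * local[j] for j in range(3)) + t[i]
                 for i in range(3))
```

Accumulating the mapped contour points over many slices yields the point set from which the 3-D tumor surface is built.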

9.
Accurate and fast localization of a predefined target region inside the patient is an important component of many image-guided therapy procedures. This problem is commonly solved by registration of intraoperative 2-D projection images to 3-D preoperative images. If the patient is not fixed during the intervention, the 2-D image acquisition is repeated several times during the procedure, and the registration problem can be cast instead as a 3-D tracking problem. To solve the 3-D problem, we propose in this paper to apply 2-D region tracking to first recover the components of the transformation that are in-plane to the projections. The 2-D motion estimates of all projections are backprojected into 3-D space, where they are then combined into a consistent estimate of the 3-D motion. We compare this method to intensity-based 2-D to 3-D registration and a combination of 2-D motion backprojection followed by a 2-D to 3-D registration stage. Using clinical data with a fiducial marker-based gold-standard transformation, we show that our method is capable of accurately tracking vertebral targets in 3-D from 2-D motion measured in X-ray projection images. Using a standard tracking algorithm (hyperplane tracking), tracking is achieved at video frame rates but fails relatively often (32% of all frames tracked with target registration error (TRE) better than 1.2 mm, 82% of all frames tracked with TRE better than 2.4 mm). With intensity-based 2-D to 2-D image registration using normalized mutual information (NMI) and pattern intensity (PI), accuracy and robustness are substantially improved. NMI tracked 82% of all frames in our data with TRE better than 1.2 mm and 96% of all frames with TRE better than 2.4 mm. This comes at the cost of a reduced frame rate: 1.7 s average processing time per frame and projection device. Results using PI were slightly more accurate, but required on average 5.4 s per frame. These results are still substantially faster than 2-D to 3-D registration. We conclude that motion backprojection from 2-D motion tracking is an accurate and efficient method for tracking 3-D target motion, but tracking 2-D motion accurately and robustly remains a challenge.
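Combining the in-plane 2-D motion estimates from several projections into one 3-D motion can be posed as a small least-squares problem: each projection constrains the 3-D translation along its two in-plane directions. A sketch under that simplified translation-only model (the paper's actual backprojection handles the full transform; names are hypothetical):

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gauss-Jordan elimination with
    partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][3] / M[i][i] for i in range(3)]

def backproject_motion(views):
    """Least-squares 3-D translation from per-projection 2-D motion.
    Each view is (u, v, a, b): two in-plane unit direction vectors and
    the measured 2-D shift components along them.  Builds the normal
    equations sum(d d^T) x = sum(d m) and solves them."""
    A = [[0.0] * 3 for _ in range(3)]
    rhs = [0.0] * 3
    for u, v, a, b in views:
        for d, m in ((u, a), (v, b)):   # each equation: d . x = m
            for i in range(3):
                rhs[i] += d[i] * m
                for j in range(3):
                    A[i][j] += d[i] * d[j]
    return solve3(A, rhs)
```

With two roughly orthogonal projections the system is well conditioned; with a single projection the out-of-plane component is unobservable, which mirrors the monoplane limitation discussed elsewhere in this list.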

10.
X-ray fluoroscopically guided cardiac electrophysiological procedures are routinely carried out for diagnosis and treatment of cardiac arrhythmias. X-ray images have poor soft tissue contrast and, for this reason, overlay of static 3-D roadmaps derived from preprocedural volumetric data can be used to add anatomical information. However, the registration between the 3-D roadmap and the 2-D X-ray image can be compromised by patient respiratory motion. Three methods were designed and evaluated to correct for respiratory motion using features in the 2-D X-ray images. The first method is based on tracking either the diaphragm or the heart border using the image intensity in a region of interest. The second method detects the tracheal bifurcation using the generalized Hough transform and a 3-D model derived from 3-D preoperative volumetric data. The third method is based on tracking the coronary sinus (CS) catheter. This method uses blob detection to find all possible catheter electrodes in the X-ray image. A cost function is applied to select one CS catheter from all catheter-like objects. All three methods were applied to X-ray images from 18 patients undergoing radiofrequency ablation for the treatment of atrial fibrillation. The 2-D target registration errors (TRE) at the pulmonary veins were calculated to validate the methods. A TRE of 1.6 mm ± 0.8 mm was achieved for diaphragm tracking; 1.7 mm ± 0.9 mm for heart border tracking; 1.9 mm ± 1.0 mm for trachea tracking; and 1.8 mm ± 0.9 mm for CS catheter tracking. We present a comprehensive comparison between the techniques in terms of robustness, as computed by tracking errors, and accuracy, as computed by TRE using two independent approaches.

11.
This paper describes an autostereoscopic image overlay technique that is integrated into a surgical navigation system to superimpose a real three-dimensional (3-D) image onto the patient via a half-silvered mirror. The images are created by employing a modified version of integral videography (IV), which is an animated extension of integral photography. IV records and reproduces 3-D images using a microconvex lens array and flat display; it can display geometrically accurate 3-D autostereoscopic images and reproduce motion parallax without the need for special devices. The use of semitransparent display devices makes it appear that the 3-D image is inside the patient's body. This is the first report of applying an autostereoscopic display with an image overlay system in surgical navigation. Experiments demonstrated that the fast IV rendering technique and patient-image registration method produce an average registration accuracy of 1.13 mm. Experiments using a target in an agar phantom showed that the system can guide a needle toward a target with an average error of 2.6 mm. Improvement in the quality of the IV display will make this system practical, and its use will increase surgical accuracy and reduce invasiveness.

12.
To solve the problem of 3-D deep engraving of hard materials, a new algorithm is proposed for laser-engraving true 3-D graphics on solid material, based on a thorough analysis of the spatial occlusion relationships in 3-D images. First, a box algorithm maps the 3-D image to a large bounding box in space; then rays parallel to the z axis are cast against the triangular facets to obtain the complete envelope points of the STL (stereolithography) model file. At the same time, the points are normalized into small boxes, and the corresponding occluded small-box columns are flagged, yielding the STL solid-engraving data. The algorithm has been applied in a laser 3-D engraving system: processing an STL file with on the order of 130,000 facets, of size 268 mm × 422 mm × 253 mm, took only 2 s to 5 s. The results show that the algorithm is practical and efficient.
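The core geometric step above, casting z-parallel rays against the STL's triangular facets to find where each voxel column enters and leaves the solid, can be sketched with the standard Möller-Trumbore ray/triangle intersection test (a common choice; the paper does not name its intersection routine):

```python
def ray_triangle(orig, dirv, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray/triangle intersection.  Returns the ray
    parameter t at the hit point, or None if the ray misses.  With
    dirv = (0, 0, 1) this gives the z values where a voxel column
    crosses each facet of the STL model."""
    e1 = [v1[i] - v0[i] for i in range(3)]
    e2 = [v2[i] - v0[i] for i in range(3)]
    p = [dirv[1] * e2[2] - dirv[2] * e2[1],
         dirv[2] * e2[0] - dirv[0] * e2[2],
         dirv[0] * e2[1] - dirv[1] * e2[0]]
    det = sum(e1[i] * p[i] for i in range(3))
    if abs(det) < eps:          # ray parallel to the triangle plane
        return None
    s = [orig[i] - v0[i] for i in range(3)]
    u = sum(s[i] * p[i] for i in range(3)) / det
    if u < 0 or u > 1:
        return None
    q = [s[1] * e1[2] - s[2] * e1[1],
         s[2] * e1[0] - s[0] * e1[2],
         s[0] * e1[1] - s[1] * e1[0]]
    v = sum(dirv[i] * q[i] for i in range(3)) / det
    if v < 0 or u + v > 1:
        return None
    return sum(e2[i] * q[i] for i in range(3)) / det
```

Sorting the hit parameters along each column and pairing them as entry/exit then marks the occluded voxel runs, which is what the engraving data encodes.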

13.
Image-guided neurosurgery relies on accurate registration of the patient, the preoperative image series, and the surgical instruments in the same coordinate space. Recent clinical reports have documented the magnitude of gravity-induced brain deformation in the operating room and suggest these levels of tissue motion may compromise the integrity of such systems. We are investigating a model-based strategy which exploits the wealth of readily available preoperative information in conjunction with intraoperatively acquired data to construct and drive a three-dimensional (3-D) computational model which estimates volumetric displacements in order to update the neuronavigational image set. Using model calculations, the preoperative image database can be deformed to generate a more accurate representation of the surgical focus during an operation. In this paper, we present a preliminary study of four patients that experienced substantial brain deformation from gravity and correlate cortical shift measurements with model predictions. Additionally, we illustrate our image deforming algorithm and demonstrate that preoperative image resolution is maintained. Results over the four cases show that the brain shifted, on average, 5.7 mm in the direction of gravity and that model predictions could reduce this misregistration error to an average of 1.2 mm.

14.
Intraoperative brain deformations decrease accuracy in image-guided neurosurgery. Approaches to quantify these deformations based on 3-D reconstruction of cortectomy surfaces have been described and have shown promising results regarding the extrapolation to the whole brain volume using additional prior knowledge or sparse volume modalities. Quantification of brain deformations from surface measurement requires the registration of surfaces at different times along the surgical procedure, with different challenges according to the patient and surgical step. In this paper, we propose a new flexible surface registration approach for any textured point cloud computed by a stereoscopic or laser range approach. This method includes three terms: the first term is related to image intensities, the second to Euclidean distance, and the third to anatomical landmarks automatically extracted and continuously tracked in the 2-D video flow. Performance was evaluated on both phantom and clinical cases. The global method, including textured point cloud reconstruction, had accuracy within 2 mm, which is the usual rigid registration error of neuronavigation systems before deformations. Its main advantage is to consider all the available data, including the microscope video flow with higher temporal resolution than previously published methods.
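The three-term formulation above can be sketched as a weighted sum over matched point pairs: an intensity term, a Euclidean distance term, and a landmark term. The weights, data layout, and function name below are illustrative assumptions, not the paper's actual cost function.

```python
import math

def registration_cost(pairs, w_int, w_dist, w_lm, landmark_pairs):
    """Combined registration cost for textured point clouds.
    `pairs` holds matched ((point, intensity), (point, intensity))
    tuples; `landmark_pairs` holds matched landmark positions.
    Returns the weighted sum of the three squared-error terms."""
    c_int = sum((a_i - b_i) ** 2 for (_, a_i), (_, b_i) in pairs)
    c_dist = sum(math.dist(a, b) ** 2 for (a, _), (b, _) in pairs)
    c_lm = sum(math.dist(a, b) ** 2 for a, b in landmark_pairs)
    return w_int * c_int + w_dist * c_dist + w_lm * c_lm
```

An optimizer would minimize this cost over the transform applied to one of the two textured point clouds; the landmark term keeps the solution anchored when intensity or geometry alone is ambiguous.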

15.
Two-dimensional or 3-D visual guidance is often used for minimally invasive cardiac surgery and diagnosis. This visual guidance suffers from several drawbacks such as limited field of view, loss of signal from time to time, and, in some cases, difficulty of interpretation. These limitations become more evident in beating-heart procedures, when the surgeon has to perform a surgical procedure in the presence of heart motion. In this paper, we propose dynamic 3-D virtual fixtures (DVFs) to augment the visual guidance system with haptic feedback, to provide the surgeon with more helpful guidance by constraining the surgeon's hand motions, thereby protecting sensitive structures. DVFs can be generated from preoperative dynamic magnetic resonance (MR) or computed tomography (CT) images and then mapped to the patient during surgery. We have validated the feasibility of the proposed method on several simulated surgical tasks using a volunteer's cardiac image dataset. Validation results show that the integration of visual and haptic guidance can permit a user to perform surgical tasks more easily and with a reduced error rate. We believe this is the first work presented in the field of virtual fixtures that explicitly considers heart motion.
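A forbidden-region virtual fixture of the kind described can be sketched as a one-sided spring: when the tool tip penetrates behind a protected surface, a restoring force along the surface normal pushes it back out. This is a generic simplification; the paper's dynamic fixtures additionally update the surface per cardiac phase, and all names here are hypothetical.

```python
def fixture_force(tool_pos, surface_point, surface_normal, stiffness):
    """Haptic force for a forbidden-region virtual fixture.  Returns a
    spring force along `surface_normal` when the tool tip lies behind
    the local tangent plane (signed distance d < 0), else zero force."""
    d = sum((tool_pos[i] - surface_point[i]) * surface_normal[i]
            for i in range(3))          # signed distance to the plane
    if d >= 0:
        return (0.0, 0.0, 0.0)
    return tuple(-stiffness * d * surface_normal[i] for i in range(3))
```

For a dynamic fixture, `surface_point` and `surface_normal` would be resampled each haptic cycle from the cardiac-phase-indexed model.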

16.
17.
The problem of providing surgical navigation using image overlays on the operative scene can be split into four main tasks: calibration of the optical system; registration of preoperative images to the patient; system and patient tracking; and display using a suitable visualization scheme. To achieve a convincing result in the magnified microscope view, a very high alignment accuracy is required. We have simulated an entire image overlay system to establish the most significant sources of error and improved each of the stages involved. The microscope calibration process has been automated. We have introduced bone-implanted markers for registration and incorporated a locking acrylic dental stent (LADS) for patient tracking. The LADS can also provide a less-invasive registration device, with a mean target error of 0.7 mm in volunteer experiments. These improvements have significantly increased the alignment accuracy of our overlays. Phantom accuracy is 0.3-0.5 mm, and clinical overlay errors were 0.5-1.0 mm on the bone fiducials and 0.5-4 mm on target structures. We have improved the graphical representation of the stereo overlays. The resulting system provides three-dimensional surgical navigation for microscope-assisted guided interventions (MAGI).

18.
Fiducial markers are reference points used in the registration of image space(s) with physical (patient) space. As applied to interactive, image-guided surgery, the registration of image space with physical space allows the current location of a surgical tool to be indicated on a computer display of patient-specific preoperative images. This intrasurgical guidance information is particularly valuable in surgery within the brain, where visual feedback is limited. The accuracy of the mapping between physical and image space depends upon the accuracy with which the fiducial markers are located in each coordinate system. To effect accurate space registration for interactive, image-guided neurosurgery, the use of permanent fiducial markers implanted into the surface of the skull is proposed in this paper. These small cylindrical markers are composed of materials that make them visible in the image sets. The challenge lies in locating the subcutaneous markers in physical space. This paper presents an ultrasonic technique for transcutaneously detecting the location of these markers. The technique incorporates an algorithm based on detection of characteristic properties of the reflected A-mode ultrasonic waveform. The results demonstrate that ultrasound is an appropriate technique for accurate transcutaneous marker localization. The companion paper to this article describes an automatic, enhanced implementation of the marker-localization theory described here.
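The A-mode detection idea can be illustrated with a deliberately simplified stand-in: find the first echo sample exceeding an amplitude threshold and convert its two-way travel time to a depth. The paper's algorithm keys on characteristic waveform properties rather than a bare threshold, so treat this purely as a sketch; the speed-of-sound default and names are assumptions.

```python
def detect_marker_echo(signal, dt, threshold, speed_of_sound=1540.0):
    """Locate a marker echo in an A-mode ultrasound trace: return the
    depth (metres) of the first sample whose amplitude exceeds
    `threshold`, assuming two-way travel at `speed_of_sound` (m/s) and
    sample spacing `dt` (s).  Returns None if no echo is found."""
    for i, amp in enumerate(signal):
        if abs(amp) >= threshold:
            return 0.5 * speed_of_sound * i * dt
    return None
```

A real implementation would first compute the signal envelope and match the expected marker signature before converting time of flight to depth.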

19.
Super-resolution in PET imaging   (cited by 1: 0 self-citations, 1 by others)
This paper demonstrates a super-resolution method for improving the resolution in clinical positron emission tomography (PET) scanners. Super-resolution images were obtained by combining four data sets with spatial shifts between consecutive acquisitions and applying an iterative algorithm. Super-resolution attenuation-corrected PET scans of a phantom were obtained using the two-dimensional (2-D) and three-dimensional (3-D) acquisition modes of a clinical PET/computed tomography (CT) scanner (Discovery LS, GEMS). In a patient study, following a standard 18F-FDG PET/CT scan, a super-resolution scan around one small lesion was performed using axial shifts without increasing the patient radiation exposure. In the phantom study, smaller features (3 mm) could be resolved axially with the super-resolution method than without (6 mm). The super-resolution images had better resolution than the original images and provided higher contrast ratios in coronal images and in 3-D acquisition transaxial images. The coronal super-resolution images had superior resolution and contrast ratios compared to images reconstructed by merely interleaving the data to the proper axial location. In the patient study, super-resolution reconstructions displayed a more localized 18F-FDG uptake. A new approach for improving the resolution of PET images using a super-resolution method has been developed and experimentally confirmed, employing a clinical scanner. The improvement in axial resolution requires no changes in hardware.
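The baseline the abstract compares against, merely interleaving the shifted acquisitions to their proper axial locations, can be sketched as a shift-and-add: samples from several data sets taken with evenly spaced sub-sample shifts are woven into one finer grid. The paper's actual reconstruction is iterative; this 1-D sketch (names assumed) only shows the interleaving idea.

```python
def shift_and_add(acquisitions):
    """Interleave several 1-D acquisitions, assumed taken with axial
    shifts of 1/len(acquisitions) of a sample in list order, into one
    grid with len(acquisitions)-times finer sampling."""
    n_sets = len(acquisitions)
    n = len(acquisitions[0])
    hires = [0.0] * (n * n_sets)
    for k, data in enumerate(acquisitions):
        for i, val in enumerate(data):
            hires[i * n_sets + k] = val
    return hires
```

An iterative super-resolution method would instead treat each acquisition as a blurred, shifted, down-sampled view of the unknown high-resolution image and refine that image to explain all four data sets.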

20.
The recovery of a three-dimensional (3-D) model from a sequence of two-dimensional (2-D) images is very useful in medical image analysis. Image sequences obtained from the relative motion between the object and the camera or the scanner contain more 3-D information than a single image. Methods to visualize computed tomograms can be divided into two approaches: the surface rendering approach and the volume rendering approach. In this paper, a new surface rendering method using optical flow is proposed. Optical flow is the apparent motion in the image plane produced by the projection of real 3-D motion onto the 2-D image. The 3-D motion of an object can be recovered from the optical-flow field using additional constraints. By extracting the surface information from 3-D motion, it is possible to obtain an accurate 3-D model of the object. Both synthetic and real image sequences have been used to illustrate the feasibility of the proposed method. The experimental results suggest that the proposed method is suitable for the reconstruction of 3-D models from ultrasound medical images as well as other computed tomograms.
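The optical-flow field this entry builds on can be estimated per pixel with the classic Lucas-Kanade least-squares fit over a small window, one standard way to compute flow (the paper does not specify its estimator, so this is an illustrative choice):

```python
def lucas_kanade_flow(I1, I2, x, y, win=1):
    """Estimate optical flow (u, v) at pixel (x, y) between two
    grey-level frames (lists of rows) by a Lucas-Kanade least-squares
    fit over a (2*win+1)^2 window.  Spatial gradients use central
    differences on I1; the temporal gradient is I2 - I1.  Returns
    None when the normal matrix is singular (aperture problem)."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for j in range(y - win, y + win + 1):
        for i in range(x - win, x + win + 1):
            ix = (I1[j][i + 1] - I1[j][i - 1]) / 2.0
            iy = (I1[j + 1][i] - I1[j - 1][i]) / 2.0
            it = I2[j][i] - I1[j][i]
            a11 += ix * ix; a12 += ix * iy; a22 += iy * iy
            b1 += ix * it;  b2 += iy * it
    det = a11 * a22 - a12 * a12
    if abs(det) < 1e-12:
        return None
    # solve [a11 a12; a12 a22] [u v]^T = -[b1 b2]^T by Cramer's rule
    u = (-b1 * a22 + b2 * a12) / det
    v = (-b2 * a11 + b1 * a12) / det
    return u, v
```

Recovering 3-D motion from the resulting flow field then requires the additional constraints the abstract mentions (for example, a rigidity assumption and known camera geometry).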
