Full-text (subscription): 10; Free: 0
By subject: Machinery & Instruments: 2; General Industrial Technology: 1; Automation Technology: 7
By year: 2020: 3; 2018: 1; 2012: 1; 2011: 1; 2010: 1; 2008: 2; 2004: 1
10 results found (search time: 15 ms)
1.
In this paper, we present an automatic segmentation method that detects virus particles of various shapes in transmission electron microscopy images. The method is based on a statistical analysis of the local neighbourhood of every pixel in the image, followed by object-width discrimination and, finally, for elongated objects, a border refinement step. It requires only one input parameter: the approximate width of the virus particles searched for. The proposed method is evaluated on a large number of viruses and successfully segments them regardless of shape, from polyhedral to highly pleomorphic.
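The pipeline sketched above (local-statistics screening followed by width discrimination) can be illustrated with a small numpy/scipy example; the window size, threshold, and width tolerances below are illustrative assumptions, not the published parameters:

```python
import numpy as np
from scipy import ndimage

def segment_particles(image, particle_width):
    """Sketch: flag pixels whose local-neighbourhood statistics stand out from
    the background, then keep objects whose extent matches the expected
    particle width. Window size and thresholds are illustrative assumptions."""
    win = int(particle_width)
    img = image.astype(float)
    local_mean = ndimage.uniform_filter(img, size=win)
    local_sq = ndimage.uniform_filter(img ** 2, size=win)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 0.0))

    # Pixels that are locally "busy" compared with the image as a whole.
    candidate = local_std > local_std.mean() + local_std.std()

    # Width discrimination: keep connected components whose smaller extent
    # is close to the requested particle width.
    labels, n = ndimage.label(candidate)
    keep = np.zeros_like(candidate)
    for obj_slice, idx in zip(ndimage.find_objects(labels), range(1, n + 1)):
        h = obj_slice[0].stop - obj_slice[0].start
        w = obj_slice[1].stop - obj_slice[1].start
        if 0.5 * particle_width <= min(h, w) <= 2.0 * particle_width:
            keep[labels == idx] = True
    return keep
```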
2.
An automatic image analysis method for describing, segmenting, and classifying human cytomegalovirus capsids in transmission electron microscopy (TEM) images of host cell nuclei has been developed. Three stages of the capsid assembly process in the host cell nucleus are investigated. Each class is described by a radial density profile, i.e. the average grey level at each radial distance from the center. A template constructed from the profile is used to find possible capsid locations by correlation-based matching. The matching results are further refined by size and distortion analysis of each candidate capsid, resulting in a final segmentation and classification.
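The two central ingredients, a radial density profile and correlation-based matching with a template built from it, can be sketched as follows (function names and the FFT-based correlation are illustrative assumptions, not the authors' code):

```python
import numpy as np
from scipy.signal import fftconvolve

def radial_profile(patch):
    """Average grey level at each integer radial distance from the patch center."""
    cy, cx = (np.array(patch.shape) - 1) / 2.0
    y, x = np.indices(patch.shape)
    r = np.hypot(y - cy, x - cx).astype(int)
    sums = np.bincount(r.ravel(), weights=patch.ravel())
    counts = np.bincount(r.ravel())
    return sums / np.maximum(counts, 1)

def template_from_profile(profile, size):
    """Rotationally symmetric template built from a radial density profile."""
    c = (size - 1) / 2.0
    y, x = np.indices((size, size))
    r = np.clip(np.hypot(y - c, x - c).astype(int), 0, len(profile) - 1)
    return profile[r]

def match(image, template):
    """Correlation map (zero-mean template) used to propose capsid locations."""
    t = template - template.mean()
    return fftconvolve(image, t[::-1, ::-1], mode='same')
```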
3.
4.
Manual detection of small uncalcified pulmonary nodules (diameter < 4 mm) in thoracic computed tomography (CT) scans is a tedious and error-prone task. Automatic detection of disperse micronodules is therefore highly desirable for improved characterization of fatal and incurable occupational pulmonary diseases. Here, we present a novel computer-assisted detection (CAD) scheme dedicated to detecting micronodules. The proposed scheme consists of a candidate-screening module and a false-positive (FP) reduction module. The candidate-screening module starts with a lung segmentation algorithm, followed by a combination of 2D/3D feature-based thresholding to identify plausible micronodules. The FP reduction module employs a 3D convolutional neural network (CNN) to classify each identified candidate; it automatically encodes discriminative representations by exploiting the volumetric information of each candidate. A set of 872 micronodules in 598 CT scans, each marked by at least two radiologists, was extracted from the Lung Image Database Consortium and Image Database Resource Initiative to test our CAD scheme. The scheme achieves a detection sensitivity of 86.7% (756/872) with only 8 FPs/scan and an AUC of 0.98. Our proposed CAD scheme efficiently identifies micronodules in thoracic scans with only a small number of FPs, and the results provide evidence that the features automatically learned by the 3D CNN are highly discriminative, making it a well-suited FP reduction module for a CAD scheme.
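A minimal PyTorch sketch of the FP-reduction idea, a small 3D CNN that classifies a candidate volume as nodule versus false positive, is shown below; the layer sizes and the 20-voxel cube are assumptions, not the paper's architecture:

```python
import torch
import torch.nn as nn

class Micronodule3DCNN(nn.Module):
    """Sketch of a 3D CNN that classifies a candidate volume (e.g. a 20x20x20
    voxel cube around a screened candidate) as nodule vs. false positive.
    Layer sizes are illustrative, not the published architecture."""
    def __init__(self, cube=20):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
        )
        feat = 32 * (cube // 4) ** 3
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(feat, 64), nn.ReLU(),
            nn.Linear(64, 2),          # nodule vs. false positive
        )

    def forward(self, x):              # x: (batch, 1, D, H, W)
        return self.classifier(self.features(x))

# Usage: scores = Micronodule3DCNN()(torch.randn(8, 1, 20, 20, 20))
```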
5.
Intensity normalization is important in quantitative image analysis, especially when extracting intensity-based features. In automated microscopy, particularly in large cellular screening experiments, each image contains objects of similar type (e.g. cells), but the object density (number and size of the objects) may vary markedly from image to image. Standard intensity normalization methods, such as matching the grey-value histogram of an image to a target histogram from, e.g., a reference image, only work well if both object type and object density are similar in the images to be matched. This is typically not the case in cellular screening, and in many other types of images, where object type varies little from image to image but object density may vary dramatically. In this paper, we propose an improved form of intensity normalization that uses gradient information as well as grey values. This method is very robust to differences in object density. We compare and contrast our method with standard histogram normalization across a range of image types, and show that the modified procedure performs much better when object density varies between images.
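A simplified numpy sketch of the general idea, weighting the histogram-matching step by gradient magnitude so that edge pixels rather than object density drive the mapping, is given below; it is a didactic simplification assuming 8-bit grey levels, not the published algorithm:

```python
import numpy as np

def gradient_weighted_histogram_match(image, reference, bins=256):
    """Sketch: match the grey-value distribution of `image` to `reference`,
    weighting each pixel by its gradient magnitude so that edge pixels
    dominate the mapping regardless of how many objects each image contains.
    Assumes 8-bit grey levels; a simplification of the idea only."""
    def weighted_cdf(img):
        gy, gx = np.gradient(img.astype(float))
        w = np.hypot(gx, gy) + 1e-6
        hist, edges = np.histogram(img, bins=bins, range=(0, 255), weights=w)
        return np.cumsum(hist) / hist.sum(), edges

    cdf_src, edges = weighted_cdf(image)
    cdf_ref, _ = weighted_cdf(reference)
    src_bins = np.clip(np.digitize(image, edges[:-1]) - 1, 0, bins - 1)
    # For each source grey level, pick the reference level with the same CDF value.
    mapping = np.interp(cdf_src, cdf_ref, np.arange(bins))
    return mapping[src_bins]
```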
6.
This paper introduces an accurate real-time soft shadow algorithm that uses sample-based visibility. Initially, we present a GPU-based alias-free hard shadow map algorithm that typically requires only a single render pass from the light, in contrast to using depth peeling with one pass per layer. For closed objects, we also remove the need for a bias. The method is extended to soft-shadow sampling for an arbitrarily shaped area or volumetric light source using 128-1024 light samples per screen pixel. The alias-free shadow map guarantees that visibility is accurately sampled per screen-space pixel, even for arbitrarily shaped (e.g. non-planar) surfaces or solid objects. A further contribution is a smooth, coherent shading model that avoids the light leakage near shadow borders commonly caused by normal interpolation.
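The underlying notion of sample-based visibility for an area light can be illustrated with a CPU-side numpy sketch: a disc light, a single spherical occluder, and one shadow ray per light sample. This shows the sampling idea only, not the GPU alias-free shadow-map implementation:

```python
import numpy as np

def soft_shadow(point, light_center, light_radius, occluder_c, occluder_r,
                n=256, rng=np.random.default_rng(0)):
    """Fraction of an area light visible from `point`, estimated by casting
    shadow rays to random samples on a disc light and testing them against a
    single spherical occluder. A didactic sketch of sample-based visibility."""
    # Random samples on a disc light lying in the xy-plane (simplification).
    ang = rng.uniform(0, 2 * np.pi, n)
    rad = light_radius * np.sqrt(rng.uniform(0, 1, n))
    samples = light_center + np.stack(
        [rad * np.cos(ang), rad * np.sin(ang), np.zeros(n)], axis=1)
    visible = 0
    for s in samples:
        d = s - point
        t_max = np.linalg.norm(d)
        d = d / t_max
        # Ray/sphere intersection: |point + t*d - c|^2 = r^2
        oc = point - occluder_c
        b = np.dot(oc, d)
        disc = b * b - (np.dot(oc, oc) - occluder_r ** 2)
        hit = disc > 0 and 1e-4 < (-b - np.sqrt(disc)) < t_max
        visible += 0 if hit else 1
    return visible / n
```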
7.
Stochastic transparency provides a unified approach to order-independent transparency, antialiasing, and deep shadow maps. It augments screen-door transparency with a random sub-pixel stipple pattern, where each fragment of transparent geometry covers a random subset of pixel samples whose size is proportional to alpha. This yields correct alpha-blended colors on average, in a single render pass with fixed memory size and no sorting, but introduces noise. We reduce this noise with an alpha-correction pass and with an accumulation pass that uses a stochastic shadow map from the camera. At the pixel level, the algorithm does not branch and contains no read-modify-write loops other than traditional z-buffer blend operations, which makes it an excellent match for modern massively parallel GPU hardware. Stochastic transparency is very simple to implement, supports all types of transparent geometry, and can mix hair, smoke, foliage, windows, and transparent cloth in a single scene without special-case code.
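A small single-pixel simulation illustrates why the random stipple converges to the correct blended colour; the sample count and the plain per-sample z-test are illustrative choices, not the paper's implementation:

```python
import numpy as np

def stochastic_composite(fragments, samples=64, rng=np.random.default_rng(0)):
    """Sketch of stochastic transparency for one pixel: each fragment
    (colour, alpha, depth) covers a random subset of the pixel's samples of
    size round(alpha * samples); per sample, the nearest covering fragment
    wins (plain z-buffering). Averaging the samples approximates sorted
    alpha blending without any sorting, at the cost of noise."""
    colour = np.zeros((samples, 3))          # background = black
    depth = np.full(samples, np.inf)
    for c, a, z in fragments:                # arbitrary (unsorted) order
        covered = rng.permutation(samples)[: int(round(a * samples))]
        closer = z < depth[covered]
        depth[covered[closer]] = z
        colour[covered[closer]] = c
    return colour.mean(axis=0)               # noisy estimate of the blended colour

# A red fragment (alpha 0.5) in front of a green one (alpha 0.5):
print(stochastic_composite([((0, 1, 0), 0.5, 2.0), ((1, 0, 0), 0.5, 1.0)]))
```

On average the example yields roughly (0.5, 0.25, 0), the colour that correctly sorted alpha blending of the two fragments over a black background would produce.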
8.
An application may have to load an unknown 3D model and, for enhanced realistic rendering, precompute values over the surface domain, such as light maps, ambient occlusion, or other global-illumination parameters. High-quality uv-unwrapping has several problems, such as seams, distortions, and wasted texture space. Additionally, procedurally generated scene content, perhaps created on the fly, can make manual uv-unwrapping impossible, and even when artist manipulation is feasible, good uv layouts require expertise and can be highly labor intensive. This paper investigates how to use Sparse Voxel DAGs (DAGs for short) as one alternative to avoid uv mapping. The result is an algorithm enabling high compression ratios of both the voxel structure and the colors, which can be important for a baked scene to fit in GPU memory. Specifically, we enable practical usage in an automatic system by targeting efficient real-time mipmap filtering with compressed textures and by adding support for individual mesh voxelizations and resolutions within the same DAG. The latter increases texture-compression ratios by up to 32% compared to using one global voxelization, improves DAG compression by 10-15% compared to using one DAG per mesh, and reduces color-bleeding problems for large mipmap filter sizes. Voxel filtering is more costly than standard hardware 2D texture filtering; however, for full HD with deferred shading, it is optimized down to 2.5 ± 0.5 ms for a custom multisampling filter (e.g. targeted at minification of low-frequency textures) and 5 ± 2 ms for quad-linear mipmap filtering (e.g. for high-frequency textures). Multiple textures sharing a voxelization can amortize the majority of this cost; hence, these numbers cover 1-3 textures per pixel (Fig. 1c).
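The core compression step, hash-consing identical subtrees of a sparse voxel octree so that each distinct subtree is stored once and the tree becomes a DAG, can be sketched in a few lines of Python; nested tuples stand in for the pointer-based GPU representation, and this is an illustration of the general DAG idea, not the paper's code:

```python
def build_svo(grid, x0, y0, z0, size):
    """Build a sparse voxel octree node for the cubic region of `grid`
    (a boolean 3D array) starting at (x0, y0, z0) with edge length `size`
    (a power of two). Returns a nested tuple, True (full leaf), or None."""
    if size == 1:
        return True if grid[x0][y0][z0] else None
    h = size // 2
    children = tuple(
        build_svo(grid, x0 + dx, y0 + dy, z0 + dz, h)
        for dx in (0, h) for dy in (0, h) for dz in (0, h)
    )
    return None if all(c is None for c in children) else children

def compress_to_dag(node, pool=None):
    """Deduplicate identical subtrees so the octree becomes a DAG:
    structurally equal subtrees map to a single shared object."""
    if pool is None:
        pool = {}
    if node is None or node is True:
        return node
    children = tuple(compress_to_dag(c, pool) for c in node)
    key = tuple(id(c) if isinstance(c, tuple) else c for c in children)
    return pool.setdefault(key, children)
```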
9.
We describe a method that uses Spherical Gaussians with free directions and arbitrary sharpness and amplitude to approximate the precomputed local light field at any point on a surface in a scene. This allows a high-quality reconstruction of these light fields that can be used to render the surfaces with precomputed global illumination in real time at very low cost in both memory and performance. We also extend this concept to represent illumination-weighted environment visibility, allowing high-quality reflections of the distant environment that take both surface material properties and visibility into account. We treat obtaining the Spherical Gaussians as an optimization problem, for which we train a Convolutional Neural Network to produce appropriate values for each Spherical Gaussian's parameters. The CNN is defined so that the produced parameters can be interpolated between adjacent local light fields while keeping the illumination at intermediate points coherent.
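For reference, a Spherical Gaussian lobe and the reconstruction of a light field as a sum of lobes can be written down directly; the lobe list below stands in for the per-point parameters that a trained network would output (hypothetical data, not the paper's values):

```python
import numpy as np

def spherical_gaussian(v, mu, sharpness, amplitude):
    """Evaluate a Spherical Gaussian G(v) = a * exp(lambda * (dot(mu, v) - 1))
    for a unit direction v; mu is the unit lobe direction."""
    return amplitude * np.exp(sharpness * (np.dot(v, mu) - 1.0))

def reconstruct_radiance(v, lobes):
    """Local light field approximated as a sum of Spherical Gaussians.
    `lobes` is a list of (mu, sharpness, amplitude_rgb) triples, e.g. the
    parameters a trained CNN would produce for one surface point."""
    return sum(np.asarray(a) * np.exp(s * (np.dot(v, mu) - 1.0))
               for mu, s, a in lobes)

# Example with two hypothetical lobes:
lobes = [((0.0, 0.0, 1.0), 8.0, (1.0, 0.9, 0.8)),
         ((0.0, 1.0, 0.0), 2.0, (0.2, 0.3, 0.5))]
print(reconstruct_radiance(np.array([0.0, 0.0, 1.0]), lobes))
```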
10.
This paper presents an algorithm for fast sorting of large lists on modern GPUs. The method achieves high speed by efficiently exploiting the parallelism of the GPU throughout the whole algorithm. Initially, a GPU-based bucketsort or quicksort splits the list into sublists that are then sorted in parallel using merge sort. The algorithm has complexity O(n log n). For lists of 8M elements on a single GeForce 8800 GTS-512, it is 2.5 times as fast as the bitonic sort algorithm, with its standard complexity of O(n (log n)^2), which was long considered the fastest approach to GPU sorting. It is 6 times faster than single-CPU quicksort and 10% faster than the recent GPU-based radix sort. Finally, the algorithm is further parallelized to use two graphics cards, yielding another 1.8x speedup.
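The split-then-sort structure can be sketched on the CPU as below; Python's built-in sort stands in for the GPU merge sort of each sublist, and the bucket count and boundaries are illustrative choices rather than the paper's scheme:

```python
def hybrid_sort(values, buckets=8):
    """Sketch of the split-then-sort idea: one bucketsort-style partitioning
    pass splits the input into independent sublists, which can then be sorted
    in parallel (here serially, with Python's sort standing in for the GPU
    merge sort) and concatenated."""
    lo, hi = min(values), max(values)
    width = (hi - lo) / buckets or 1
    bins = [[] for _ in range(buckets)]
    for v in values:
        bins[min(int((v - lo) / width), buckets - 1)].append(v)
    # Each bucket covers a disjoint value range, so sorted buckets concatenate
    # directly into the fully sorted list.
    return [v for b in bins for v in sorted(b)]

# hybrid_sort([5, 3, 9, 1, 7, 2, 8]) -> [1, 2, 3, 5, 7, 8, 9]
```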