Search results: 10 records found (search time: 31 ms).
1.
Network coding-based protection of many-to-one wireless flows   (Total citations: 1; self-citations: 0; other citations: 1)
This paper addresses the problem of survivability of many-to-one flows in wireless networks, such as wireless mesh networks (WMNs) and wireless sensor networks (WSNs). Traditional protection schemes are either resource-hungry, like the (1+1) protection scheme, or introduce a delay and interrupt network operation, like the (1:N) protection scheme. In this paper, we present a network coding-based protection technique that overcomes the deficiencies of the traditional schemes. We derive and prove the necessary and sufficient conditions for our solution on a restricted network topology. Then we relax these connectivity requirements and show how to generalize the necessary and sufficient conditions to work with any other topology. We also show how to perform deterministic coding with {0,1} coefficients to achieve linear independence. Moreover, we discuss some of the practical considerations related to our approach. Specifically, we show how to adapt our solution when the network has a limited min-cut; we therefore define a more general problem that takes this constraint into account, which we prove to be NP-complete. Furthermore, we discuss the decoding process at the sink, and show how to make use of our solution in the upstream communication (from sink to sources). We also study the effect of the proposed scheme on network performance. Finally, we consider the implementation of our approach when all network nodes have single transceivers, and we solve the problem through a greedy algorithm that constructs a feasible schedule for the transmissions from the sources.
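To make the {0,1}-coefficient coding concrete, here is a minimal sketch of the core idea: a single protection path carries the bitwise XOR (a linear combination over GF(2)) of the sources' packets, from which the sink can recover any one lost packet. It assumes equal-length packets and at most one working-path failure per round; the function names are illustrative, not from the paper.

```python
# Illustrative sketch of {0,1}-coefficient (XOR) protection coding for
# many-to-one flows. Assumes equal-length packets and at most one lost
# working path per round; this is not the paper's full construction.
from functools import reduce

def encode_protection(packets: list[bytes]) -> bytes:
    """XOR all source packets together (coding over GF(2))."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover_lost(received: list[bytes], protection: bytes) -> bytes:
    """Recover the single missing packet: XOR of survivors and protection."""
    return encode_protection(received + [protection])

# Three sources; the packet from source 1 is lost on its working path.
pkts = [b"\x01\x02", b"\x10\x20", b"\xaa\xbb"]
prot = encode_protection(pkts)
assert recover_lost([pkts[0], pkts[2]], prot) == pkts[1]
```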
2.
This paper presents a method that exploits rank statistics to improve fully automatic tracing of neurons from noisy digital confocal microscope images. Previously proposed exploratory tracing (vectorization) algorithms work by recursively following the neuronal topology, guided by the responses of multiple directional correlation kernels. These algorithms were found to fail when the data were of lower quality (noisier, lower contrast, weak signal, or more discontinuous structures). This type of data is commonly encountered in the study of neuronal growth on microfabricated surfaces. We show that partitioning the correlation kernels in the tracing algorithm into multiple subkernels and using the median of their responses as the guiding criterion improves the tracing precision from 41% to 89% for low-quality data, with a 5% improvement in recall. Improved handling was observed for artifacts such as discontinuities and/or hollowness of structures. The new algorithms require slightly more computation, but are still acceptably fast, typically consuming less than 2 seconds on a personal computer (Pentium III, 500 MHz, 128 MB). They produce a labeling of all somas present in the field, and a graph-theoretic representation of all dendritic/axonal structures that can be edited. Topological and size measurements such as area, length, and tortuosity are derived readily. The efficiency, accuracy, and fully automated nature of the proposed method make it attractive for large-scale applications such as high-throughput assays in the pharmaceutical industry and the study of neuron growth on nano/micro-fabricated structures. A careful quantitative validation of the proposed algorithms against manually derived tracings is provided, using a performance measure that combines the precision and recall metrics.
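As a rough illustration of the rank-statistic idea: splitting a directional correlation kernel into subkernels and taking the median of their responses keeps one outlying subkernel, caused by noise or a local gap in the structure, from dominating the tracing decision. The sketch below assumes 2D arrays and a horizontal-strip partitioning; the paper's kernels and partitioning scheme may differ.

```python
# Minimal sketch: partition a directional correlation kernel into
# subkernels (here, horizontal strips) and use the median of their
# responses instead of the full-kernel sum. Shapes and the partitioning
# scheme are illustrative assumptions.
import numpy as np

def median_subkernel_response(patch: np.ndarray, kernel: np.ndarray,
                              n_sub: int = 4) -> float:
    """Correlate each strip of the kernel separately; return the median."""
    responses = []
    for rows in np.array_split(np.arange(kernel.shape[0]), n_sub):
        responses.append(float(np.sum(kernel[rows] * patch[rows])))
    return float(np.median(responses))
```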
3.
A 3 MV General Ionex Tandetron accelerator has recently been tested at the newly established Energy Research Laboratory at the King Fahd University of Petroleum and Minerals. The accelerator features a very stable solid-state power supply which delivers about 3 MV of terminal high voltage. A beam resolution of about 400 eV was measured. Ions over a wide range of masses, from hydrogen to gold, were accelerated. The configuration of this Tandetron is described, along with a discussion of the facility and its research programs.
4.
Confocal microscopy is a three‐dimensional (3D) imaging modality, but the specimen thickness that can be imaged is limited by depth‐dependent signal attenuation. Both software and hardware methods have been used to correct the attenuation in reconstructed images, but previous methods do not increase the image signal‐to‐noise ratio (SNR) using conventional specimen preparation and imaging. We present a practical two‐view method that increases the overall imaging depth, corrects signal attenuation and improves the SNR. This is achieved by a combination of slightly modified but conventional specimen preparation, image registration, montage synthesis and signal reconstruction methods. The specimen is mounted in a symmetrical manner between a pair of cover slips, rather than between a slide and a cover slip. It is imaged sequentially from both sides to generate two 3D image stacks from perspectives separated by approximately 180° with respect to the optical axis. An automated image registration algorithm performs a precise 3D alignment, and a model‐based minimum mean‐squared‐error algorithm synthesizes a montage, combining the content of both 3D views. Experiments with images of individual neurones contrasted with a space‐filling fluorescent dye in thick brain tissue slices produced precise 3D montages that are corrected for depth‐dependent signal attenuation. The SNR of the reconstructed image is maximized by the method, and it is significantly higher than in the single views after applying our attenuation model. We also compare our method with simpler two‐view reconstruction methods and quantify the SNR improvement. The reconstructed images are a more faithful qualitative visualization of the specimen's structure and are quantitatively more accurate, providing a more rigorous basis for automated image analysis.
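A much-simplified sketch of the two-view combination step, assuming a single-exponential attenuation model I(z) = I₀·exp(−αz) and that the second view has already been registered into the first view's frame; the weighting is a crude inverse-variance stand-in for the paper's model-based minimum mean-squared reconstruction.

```python
# Simplified two-view attenuation correction. Assumes view_b is already
# registered into view_a's coordinate frame and a single attenuation
# coefficient alpha; both are illustrative assumptions.
import numpy as np

def reconstruct_two_view(view_a: np.ndarray, view_b: np.ndarray,
                         alpha: float = 0.02) -> np.ndarray:
    z = np.arange(view_a.shape[0], dtype=float)[:, None, None]
    gain_a = np.exp(alpha * z)          # undo attenuation from the near side
    gain_b = np.exp(alpha * z[::-1])    # view B attenuates from the far side
    corr_a, corr_b = view_a * gain_a, view_b * gain_b
    # The gains amplify noise as well, so weight each corrected view by
    # 1/gain^2 (a stand-in for inverse noise variance).
    w_a, w_b = gain_a ** -2, gain_b ** -2
    return (w_a * corr_a + w_b * corr_b) / (w_a + w_b)
```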
5.
This paper presents automated and accurate algorithms based on high‐order transformation models for registering three‐dimensional (3D) confocal images of dye‐injected neurons. The algorithms improve upon prior methods in several ways, and meet the more stringent image registration needs of applications such as two‐view attenuation correction recently developed by us. First, they achieve high accuracy (≈ 1.2 voxels, equivalent to 0.4 µm) by using landmarks, rather than intensity correlations, and by using a high‐dimensional affine and quadratic transformation model that accounts for 3D translation, rotation, non‐isotropic scaling, modest curvature of field, distortions and mechanical inconsistencies introduced by the imaging system. Second, they use a hierarchy of models and iterative algorithms to eliminate potential instabilities. Third, they incorporate robust statistical methods to achieve accurate registration in the face of inaccurate and missing landmarks. Fourth, they are fully automated, even estimating the initial registration from the extracted landmarks. Finally, they are computationally efficient, taking less than a minute on a 900‐MHz Pentium III computer for registering two images roughly 70 MB in size. The registration errors represent a combination of modelling, estimation, discretization and neuron tracing errors. Accurate 3D montaging is described; the algorithms have broader applicability to images of vasculature and other structures with distinctive point, line and surface landmarks.
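The core fit can be pictured as ordinary linear least squares on landmark pairs, since an affine-plus-quadratic model is linear in its coefficients. The sketch below shows only that core, with hypothetical names; the paper adds the model hierarchy, iterative refinement, and robust statistics.

```python
# Least-squares fit of a 3D affine-plus-quadratic landmark transformation.
# Only the core estimation step; the hierarchy, iteration, and robust
# weighting described in the paper are omitted.
import numpy as np

def quad_features(p: np.ndarray) -> np.ndarray:
    """Design-matrix rows [1, x, y, z, x^2, y^2, z^2, xy, xz, yz]."""
    x, y, z = p.T
    return np.column_stack([np.ones(len(p)), x, y, z,
                            x*x, y*y, z*z, x*y, x*z, y*z])

def fit_transform(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Solve for the 10x3 coefficient matrix mapping src onto dst."""
    coeffs, *_ = np.linalg.lstsq(quad_features(src), dst, rcond=None)
    return coeffs

def apply_transform(coeffs: np.ndarray, pts: np.ndarray) -> np.ndarray:
    return quad_features(pts) @ coeffs
```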
6.
Algorithms are presented for fully automatic three-dimensional (3D) tracing of neurons that are imaged by fluorescence confocal microscopy. Unlike previous voxel-based skeletonization methods, the present approach works by recursively following the neuronal topology, using a set of 4 × N² directional kernels (e.g., N = 32), guided by a generalized 3D cylinder model. This method extends our prior work on exploratory tracing of retinal vasculature to 3D space. Since the centerlines are of primary interest, the 3D extension can be accomplished by four rather than six sets of kernels. Additional modifications, such as dynamic adaptation of the correlation kernels and adaptive step-size estimation, were introduced to achieve robustness to photon noise, varying contrast, and apparent discontinuity and/or hollowness of structures. The end product is a labeling of all somas present, graph-theoretic representations of all dendritic/axonal structures, and image statistics such as soma volume and centroid, soma interconnectivity, the longest branch, and the lengths of all graph branches originating from a soma. This method is able to work directly with unprocessed confocal images, without expensive deconvolution or other preprocessing. It is much faster than skeletonization, typically consuming less than a minute to trace a 70 MB image on a 500 MHz computer. These properties make it attractive for large-scale automated tissue studies that require rapid on-line image analysis, such as high-throughput neurobiology/angiogenesis assays, and initiatives such as the Human Brain Project.
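One step of such exploratory tracing can be sketched as a directional search: evaluate template responses over a cone of candidate directions around the current heading and advance along the best one. The sketch below is a 2D simplification, with a user-supplied response function standing in for the 4 × N² kernel correlations; the step size and cone width are illustrative.

```python
# One schematic step of exploratory (vectorization) tracing, reduced to 2D.
# response_fn(point, unit_dir) stands in for the directional kernel
# correlation; step size and cone width are illustrative assumptions.
import numpy as np

def trace_step(response_fn, point, direction, n_dirs=32, step=2.0):
    heading = np.arctan2(direction[1], direction[0])
    angles = heading + np.linspace(-np.pi / 4, np.pi / 4, n_dirs)
    dirs = np.stack([np.cos(angles), np.sin(angles)], axis=1)
    scores = [response_fn(point, d) for d in dirs]
    best = dirs[int(np.argmax(scores))]    # strongest template response
    return point + step * best, best       # new centerline point, heading
```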
7.
In this paper, we consider the reliability of n-channel MOSFETs using the Substrate Hot Electron (SHE) technique. We confirm that there is a dependence of oxide degradation upon the current density during SHE injection (as previously observed by ourselves and others). In order to explain this effect, the detrapping of previously trapped electrons must be taken into account. A new theoretical model is presented which accounts for the main features of the phenomenon. We consider the technologically important low-field case (< 2 MV cm⁻¹) for a range of current densities (from 0.05 to 2 mA cm⁻²) and injected charge densities up to 10 C/cm². The device lifetime for these different conditions is calculated and shown to be also a function of the current density. It is clear that, in order to calculate the lifetime during normal operation from accelerated testing, the precise hot-electron injection current density must be known; furthermore, it must be demonstrated that the same degradation mechanisms hold at very high fields and/or current densities. This result has profound implications for device reliability predictions made using accelerated hot-electron measurements and calls into question lifetime predictions made where the effect is not taken into account.
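The abstract does not give the model's equations; one simple first-order trapping–detrapping balance that reproduces a current-density dependence (our illustration, not necessarily the paper's model) is:

```latex
\frac{dn_t}{dt} = \sigma \frac{J}{q}\,(N_T - n_t) - k_d\, n_t
\qquad\Rightarrow\qquad
n_t(\infty) = N_T \, \frac{\sigma J / q}{\sigma J / q + k_d}
```

where $n_t$ is the trapped-electron density, $N_T$ the trap density, $\sigma$ a capture cross-section, $J$ the injection current density, $q$ the electron charge, and $k_d$ the detrapping rate. Because the steady-state trapped charge depends on $J$, two stresses with the same total injected charge $Q = Jt$ but different $J$ degrade the oxide differently.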
8.
We describe an information extraction and retrieval system, called History Assistant, which extracts rulings from court opinions and retrieves relevant prior cases from a citator database. The technology employed is similar to that adopted in the Message Understanding Conferences, but attempts a fuller parse in order to distinguish current rulings from previous rulings reported in a case. In addition, we employ a combination of information retrieval and machine learning techniques to link each new case to related documents that it may impact. We present experimental results, in terms of precision and recall, for all tasks performed by the extraction and linking programs. Part of the finished system has been deemed worthy of further development into a computer-assisted database update tool to help editors assimilate historical relationships between cases into a concordance of court decisions, called a citator.
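For reference, the two reported metrics have their standard definitions, with TP, FP, and FN the true-positive, false-positive, and false-negative counts:

```latex
\text{precision} = \frac{TP}{TP + FP},
\qquad
\text{recall} = \frac{TP}{TP + FN}
```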
9.
Image change detection algorithms: a systematic survey.   (Total citations: 33; self-citations: 0; other citations: 33)
Detecting regions of change in multiple images of the same scene taken at different times is of widespread interest due to a large number of applications in diverse disciplines, including remote sensing, surveillance, medical diagnosis and treatment, civil infrastructure, and underwater sensing. This paper presents a systematic survey of the common processing steps and core decision rules in modern change detection algorithms, including significance and hypothesis testing, predictive models, the shading model, and background modeling. We also discuss important preprocessing methods, approaches to enforcing the consistency of the change mask, and principles for evaluating and comparing the performance of change detection algorithms. It is hoped that our classification of algorithms into a relatively small number of categories will provide useful guidance to the algorithm designer.
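As a minimal example of the significance-testing family of decision rules surveyed here: per-pixel hypothesis testing on the image difference, flagging change where a z-statistic exceeds a threshold. Gaussian noise of known standard deviation is assumed, and the preprocessing and change-mask consistency steps are omitted.

```python
# Per-pixel significance test for change detection: under H0 (no change)
# the difference image is zero-mean Gaussian noise; reject H0 where the
# z-statistic exceeds the threshold. noise_sigma is assumed known.
import numpy as np

def change_mask(img1: np.ndarray, img2: np.ndarray,
                noise_sigma: float = 5.0,
                z_thresh: float = 3.0) -> np.ndarray:
    diff = img1.astype(float) - img2.astype(float)
    # The difference of two noisy images has variance 2 * sigma^2.
    z = np.abs(diff) / (np.sqrt(2.0) * noise_sigma)
    return z > z_thresh
```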
10.
Quantitative studies of dynamic behaviors of live neurons are currently limited by the slowness, subjectivity, and tedium of manual analysis of changes in time-lapse image sequences. Challenges to automation include the complexity of the changes of interest, the presence of obfuscating and uninteresting changes due to illumination variations and other imaging artifacts, and the sheer volume of recorded data. This paper describes a highly automated approach that not only detects the interesting changes selectively, but also generates quantitative analyses at multiple levels of detail. Detailed quantitative neuronal morphometry is generated for each frame. Frame-to-frame neuronal changes are measured and labeled as growth, shrinkage, merging, or splitting, as would be done by a human expert. Finally, events unfolding over longer durations, such as apoptosis and axonal specification, are automatically inferred from the short-term changes. The proposed method is based on a Bayesian model selection criterion that leverages a set of short-term neurite change models and takes into account additional evidence provided by an illumination-insensitive change mask. An automated neuron tracing algorithm is used to identify the objects of interest in each frame. A novel curve distance measure and weighted bipartite graph matching are used to compare and associate neurites in successive frames. A separate set of multi-image change models drives the identification of longer term events. The method achieved frame-to-frame change labeling accuracies ranging from 85% to 100% when tested on 8 representative recordings performed under varied imaging and culturing conditions, and successfully detected all higher order events of interest. Two sequences were used for training the models and tuning their parameters; the learned parameter settings can be applied to hundreds of similar image sequences, provided imaging and culturing conditions are similar to the training set. The proposed approach is a substantial innovation over manual annotation and change analysis, accomplishing in minutes what would take an expert hours to complete.
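The association step can be sketched with a stand-in curve distance and an off-the-shelf solver for the weighted bipartite matching (SciPy's Hungarian-algorithm implementation); the paper's actual curve distance measure and cost weighting are more elaborate.

```python
# Frame-to-frame neurite association as weighted bipartite matching.
# curve_distance is a crude symmetric mean closest-point distance,
# standing in for the paper's curve distance measure.
import numpy as np
from scipy.optimize import linear_sum_assignment

def curve_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric mean closest-point distance between polylines (Nx2, Mx2)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def associate(neurites_t: list, neurites_t1: list) -> list:
    """Match neurites in frame t to frame t+1 by minimum total distance."""
    cost = np.array([[curve_distance(a, b) for b in neurites_t1]
                     for a in neurites_t])
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows.tolist(), cols.tolist()))
```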