Subscription full text: 143
Free: 5
Domestic free: 2

By subject:
General: 1
Chemical industry: 11
Machinery and instruments: 4
Architecture and civil engineering: 4
Energy and power: 2
Light industry: 29
Petroleum and natural gas: 1
Radio and electronics: 6
General industrial technology: 11
Nuclear technology: 3
Automation technology: 78

By year:
2022: 2
2020: 4
2019: 1
2018: 3
2017: 3
2016: 4
2015: 4
2014: 3
2013: 8
2012: 13
2011: 23
2010: 11
2009: 17
2008: 16
2007: 14
2006: 12
2005: 5
2004: 2
2003: 2
2002: 1
2000: 1
1998: 1

150 query results in total; search took 31 ms.
1.
Hyperspectral imaging (HSI) is a spectroscopic method that uses densely sampled measurements along the electromagnetic spectrum to identify the unique molecular composition of an object. Traditionally, HSI has been associated with remote-sensing applications, but it has recently found increased use in biomedicine, in investigations from the cellular to the tissue level. One of the main challenges in the analysis of HSI is estimating the proportions, also called abundance fractions, of each of the molecular signatures. While HSI holds great promise in biomedicine, large variability in the measurements and instrumentation-related artifacts have slowed its adoption into more widespread practice. In this article, we propose a novel regularization and variable selection method called the spatial LASSO (SPLASSO). The SPLASSO incorporates spatial information via a graph Laplacian-based penalty to help improve the model estimation process for multivariate response data. We show the strong performance of this approach on a benchmark HSI dataset, with considerable improvement in predictive accuracy over the standard LASSO. Supplementary materials for this article are available online.
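The core idea of a graph Laplacian-penalized LASSO can be sketched as follows. This is a minimal illustration under assumed notation, not the SPLASSO implementation from the article: it solves 0.5*||y - Xb||^2 + lam1*||b||_1 + 0.5*lam2*b'Lb by proximal gradient descent, where L is a graph Laplacian built from an assumed spatial adjacency.

```python
import numpy as np

def soft_threshold(v, t):
    """Elementwise soft-thresholding (proximal operator of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def spatial_lasso(X, y, L, lam1=0.1, lam2=0.1, n_iter=500):
    """Proximal-gradient solver for
       0.5*||y - X b||^2 + lam1*||b||_1 + 0.5*lam2*b' L b,
    where L is a graph Laplacian encoding spatial adjacency."""
    n, p = X.shape
    b = np.zeros(p)
    # step size = 1 / Lipschitz constant of the smooth part's gradient
    step = 1.0 / np.linalg.eigvalsh(X.T @ X + lam2 * L).max()
    for _ in range(n_iter):
        grad = X.T @ (X @ b - y) + lam2 * (L @ b)
        b = soft_threshold(b - step * grad, step * lam1)
    return b

# toy example: 5 features on a chain graph, only the first two are active
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))
y = X[:, 0] * 2.0 + X[:, 1] * 1.5 + 0.1 * rng.standard_normal(100)
A = np.diag(np.ones(4), 1) + np.diag(np.ones(4), -1)   # chain adjacency
L = np.diag(A.sum(1)) - A                              # graph Laplacian
beta = spatial_lasso(X, y, L)
```

The Laplacian term b'Lb equals the sum of (b_i - b_j)^2 over graph edges, so it pulls coefficients of spatially adjacent variables toward each other while the l1 term keeps the solution sparse.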
2.
We address the problems of noise and huge data sizes in microarray images. First, we propose a mixture model for describing the statistical and structural properties of microarray images. Then, based on the microarray image model, we present methods for denoising and for compressing microarray images. The denoising method is based on a variant of the translation-invariant wavelet transform. The compression method introduces the notion of approximate contexts (rather than traditional exact contexts) in modeling the symbol probabilities in a microarray image. This inexact context modeling approach is important in dealing with the noisy nature of microarray images. Using the proposed denoising and compression methods, we describe a near-lossless compression scheme suitable for microarray images. Results on both denoising and compression are included and show the performance of the proposed methods. Further experiments that fed the output of the proposed near-lossless compression scheme into gene clustering on cell-cycle microarray data for S. cerevisiae showed a general improvement in clustering performance compared with using the original data. This provides an indirect validation of the effectiveness of the proposed denoising method.
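Translation-invariant wavelet denoising can be illustrated with cycle spinning: denoise every circular shift of the signal with an ordinary (decimated) wavelet threshold, unshift, and average. The sketch below is a generic one-level Haar version under assumed parameters, not the authors' specific variant:

```python
import numpy as np

def haar_denoise(x, t):
    """One-level Haar wavelet soft-threshold denoising (even-length x)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)           # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)           # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0)  # soft-threshold the details
    out = np.empty_like(x, dtype=float)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

def cycle_spin_denoise(x, t):
    """Translation-invariant denoising: average Haar denoising over all
    circular shifts, suppressing the shift-dependence artifacts of the
    decimated wavelet transform."""
    n = len(x)
    acc = np.zeros(n)
    for s in range(n):
        acc += np.roll(haar_denoise(np.roll(x, s), t), -s)
    return acc / n

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 4.0, 0.0, 2.0], 16)        # piecewise-constant signal
noisy = clean + 0.5 * rng.standard_normal(64)
denoised = cycle_spin_denoise(noisy, t=0.7)
```

Averaging over shifts matters because a decimated transform treats a discontinuity differently depending on where it falls relative to the subsampling grid; the average removes that dependence.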
3.
Biochips are miniaturized, highly ordered analysis systems that offer the unique advantage of analyzing thousands of analytes in parallel. Although this technique has been developed enthusiastically and promises to improve and speed up numerous biological assays, the quality control of chip manufacture, chip analysis and data management has received less attention.

The following article compares three epoxy-containing chip surfaces (ARChip Epoxy, 3D-Link™, and EasySpot) with respect to their autofluorescence, immobilization capacity, background fluorescence and hybridization efficiency. Since data collected from biochip experiments are error-prone snapshots, inherently noisy and incomplete, we evaluated the technical factors causing variability and set up quality-control procedures for chip manufacture and chip analysis. The variability caused by arraying, the glass substrate and polymer coating, the fluorescent label, and the experimental conditions is discussed in detail.

4.
Gene expression microarray is a rapidly maturing technology that provides the opportunity to assay the expression levels of thousands or tens of thousands of genes in a single experiment. We present a new heuristic to select relevant gene subsets in order to further use them for the classification task. Our method is based on the statistical significance of adding a gene from a ranked list to the final subset. The efficiency and effectiveness of our technique are demonstrated through extensive comparisons with other representative heuristics. Our approach shows excellent performance, not only in identifying relevant genes, but also with respect to computational cost.
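The general shape of such a ranked-list forward selection can be sketched as follows. This is an illustrative stand-in, not the paper's heuristic: genes are ranked by a two-sample t-like score, and a gene from the ranking is kept only if adding it improves the accuracy of a simple nearest-centroid classifier (the classifier and the acceptance rule are assumptions here):

```python
import numpy as np

def t_scores(X, y):
    """Per-gene two-sample t-like score between classes 0 and 1."""
    a, b = X[y == 0], X[y == 1]
    num = np.abs(a.mean(0) - b.mean(0))
    den = np.sqrt(a.var(0) / len(a) + b.var(0) / len(b)) + 1e-12
    return num / den

def centroid_accuracy(X, y, genes):
    """Training accuracy of a nearest-centroid classifier on a gene subset."""
    Xs = X[:, genes]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return np.mean(pred == y)

def ranked_forward_select(X, y, min_gain=0.0):
    """Walk down the t-score ranking, keeping a gene only when adding it
    improves classification accuracy by more than min_gain."""
    order = np.argsort(-t_scores(X, y))
    chosen, best = [], 0.0
    for g in order:
        acc = centroid_accuracy(X, y, chosen + [g])
        if acc > best + min_gain:
            chosen.append(g)
            best = acc
    return chosen

rng = np.random.default_rng(2)
y = np.repeat([0, 1], 20)
X = rng.standard_normal((40, 50))
X[y == 1, :3] += 2.0        # genes 0-2 carry the class signal
subset = ranked_forward_select(X, y)
```

In practice the acceptance test would use a statistical significance criterion and held-out data rather than training accuracy; the sketch only shows the rank-then-filter structure.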
5.
Extracting significant features from high-dimensional, small-sample-size biological data is a challenging problem. Recently, Michał Dramiński proposed the Monte Carlo feature selection (MC) algorithm, which searches over large feature spaces and achieves better classification accuracy. However, in MC the information about feature-rank variation is not utilized, and feature ranks are not dynamically updated. Here, we propose a novel feature selection algorithm that integrates ideas from professional tennis rankings, such as seed players and dynamic ranking, into Monte Carlo simulation. Seed players make the feature selection game more competitive and selective. The dynamic ranking strategy ensures that only the current best players take part in each competition. The proposed algorithm is tested on 8 biological datasets. Results demonstrate that the proposed method is computationally efficient, stable and has favorable performance in classification.
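The seed-player idea can be sketched with a generic Monte Carlo ranking loop. This is an assumed illustration, not the authors' algorithm: each round, the current top-ranked features (the "seeds") are fielded together with random challengers, and every team member is credited with its marginal contribution to the team's score, so ranks update dynamically as evidence accumulates:

```python
import numpy as np

def subset_score(X, y, feats):
    """Training accuracy of a nearest-centroid classifier on a feature subset."""
    Xs = X[:, list(feats)]
    c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
    pred = np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)
    return np.mean(pred == y)

def mc_feature_ranking(X, y, n_rounds=200, k=5, n_seeds=2, seed=3):
    """Monte Carlo feature ranking with 'seed players' and dynamic ranks."""
    rng = np.random.default_rng(seed)
    p = X.shape[1]
    points = np.zeros(p)
    for _ in range(n_rounds):
        seeds = np.argsort(-points)[:n_seeds]              # dynamic ranking
        challengers = rng.choice(p, size=k, replace=False)
        team = np.unique(np.concatenate([seeds, challengers]))
        base = subset_score(X, y, team)
        for f in team:
            rest = [g for g in team if g != f]
            points[f] += base - subset_score(X, y, rest)   # marginal value
    return np.argsort(-points)

rng = np.random.default_rng(4)
y = np.repeat([0, 1], 25)
X = rng.standard_normal((50, 30))
X[y == 1, 7] += 2.5                # feature 7 carries the class signal
ranking = mc_feature_ranking(X, y)
```

Because seeds re-enter every round, an informative feature that surfaces once keeps accumulating credit, which is the self-reinforcing behavior the tennis-ranking analogy describes.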
6.
7.
Qi Yungao, Sun Huaijiang. Computer Science (《计算机科学》), 2010, 37(12): 203-205
We propose a microarray data classification method based on principal curves (PC). A principal curve is a nonlinear generalization of the first principal component: the curve is the "skeleton" of a dataset, and the dataset is the "cloud" around it. The method first computes the principal curve of each class in the training set using a specially designed algorithm, and then assigns a test sample to a class according to the expected variance of the distances between the sample and each class's principal curve. Experimental results show that the method outperforms existing approaches in classifying small-sample microarray data.
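The distance-to-curve classification rule can be illustrated with the linear special case, where each class's principal curve degenerates to its first principal component line. Everything below (the line fit, the decision rule, the toy data) is an assumed simplification of the paper's method:

```python
import numpy as np

def pc_line(X):
    """First-principal-component line of a class: (mean, unit direction).
    A straight line is the linear special case of a principal curve."""
    mu = X.mean(0)
    # leading right singular vector = first principal direction
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[0]

def dist_to_line(x, mu, v):
    """Euclidean distance from point x to the line mu + t*v."""
    d = x - mu
    return np.linalg.norm(d - (d @ v) * v)

def classify(x, models):
    """Assign x to the class whose principal line is nearest."""
    return int(np.argmin([dist_to_line(x, mu, v) for mu, v in models]))

rng = np.random.default_rng(5)
# two elongated 2-D classes stretched along different directions
c0 = rng.standard_normal((60, 2)) * [3.0, 0.3]
c1 = rng.standard_normal((60, 2)) * [0.3, 3.0] + [6.0, 0.0]
models = [pc_line(c0), pc_line(c1)]
pred = classify(np.array([6.2, 2.0]), models)
```

A true principal-curve classifier replaces each straight line with a smooth curve fitted through the middle of the class, which captures elongated but nonlinear class shapes the same way.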
8.
Gene expression technology, namely microarrays, offers the ability to measure the expression levels of thousands of genes simultaneously in biological organisms. Microarray data are expected to be of significant help in the development of an efficient cancer diagnosis and classification platform. A major problem in these data is that the number of genes greatly exceeds the number of tissue samples, and the data also contain noisy genes. It has been shown in the literature that selecting a small subset of informative genes can lead to improved classification accuracy. Therefore, this paper aims to select a small subset of informative genes that are most relevant for cancer classification. To achieve this aim, an approach using two hybrid methods is proposed. The approach is evaluated on two well-known microarray datasets, showing competitive results. This work was presented in part at the 13th International Symposium on Artificial Life and Robotics, Oita, Japan, January 31–February 2, 2008.
9.
Statistical tests are often performed to discover which experimental variables are reacting to specific treatments. Time-series statistical models usually require the researcher to make assumptions about the distribution of measured responses, and these assumptions may not hold. Randomization tests can be applied to data in order to generate null distributions non-parametrically. However, large numbers of randomizations are required for the precise p-values needed to control false discovery rates. When testing tens of thousands of variables (genes, chemical compounds, or otherwise), significant q-value cutoffs can be extremely small (on the order of 10^-5 to 10^-8). This requires high-precision p-values, which in turn require large numbers of randomizations. The NVIDIA Compute Unified Device Architecture (CUDA) platform for general-purpose computing on the graphics processing unit (GPGPU) was used to implement an application that performs high-precision randomization tests via Monte Carlo sampling, for quickly screening custom test statistics in experiments with large numbers of variables, such as microarrays, next-generation sequencing read counts, chromatographic signals, or other abundance measurements. The software achieves a speedup of more than 12-fold on a graphics processing unit (GPU) compared to a powerful central processing unit (CPU). The main limitation is concurrent random access of shared memory on the GPU. The software is available from the authors.
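The test being accelerated can be sketched on the CPU. The following is a plain Monte Carlo randomization test for a difference in means (a generic illustration, not the authors' GPU implementation or their custom statistics); the GPU version parallelizes exactly this inner permutation loop across thousands of variables:

```python
import numpy as np

def perm_pvalue(a, b, n_perm=10000, seed=6):
    """Monte Carlo randomization test for a difference in means:
    permute group labels, recompute the statistic, and count how often
    the permuted statistic is at least as extreme as the observed one."""
    rng = np.random.default_rng(seed)
    obs = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    n = len(a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(pooled[:n].mean() - pooled[n:].mean()) >= obs:
            hits += 1
    # add-one correction keeps the p-value estimate away from zero
    return (hits + 1) / (n_perm + 1)

rng = np.random.default_rng(7)
treated = rng.standard_normal(30) + 1.0
control = rng.standard_normal(30)
p = perm_pvalue(treated, control)
```

The add-one correction also shows why precision is the bottleneck: with 10,000 permutations the smallest reportable p-value is about 10^-4, so q-value cutoffs near 10^-8 require on the order of 10^8 randomizations per variable, which is what motivates the GPU.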
10.
Antibodies are, among other things, important components of the immune system. This paper proposes using the specific recognition capability exhibited by antibodies for computation, in particular for solving the stable marriage problem, which has been studied as a combinatorial computational problem. Antibody-based computation is realized by integrating the recognition capabilities of antibodies. The computation is carried out in an array form that is suitable not only for expressing stable marriage problems, but also for further integration with antibody microarrays. This work was presented in part at the 12th International Symposium on Artificial Life and Robotics, Oita, Japan, January 25–27, 2007.
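For context on the combinatorial problem itself (not the antibody-based method), the stable marriage problem is classically solved by the Gale-Shapley deferred-acceptance algorithm; the preference lists below are made-up example data:

```python
from collections import deque

def gale_shapley(men_prefs, women_prefs):
    """Classical deferred-acceptance algorithm: men propose in preference
    order; each woman keeps her best proposer so far. Returns a stable
    matching as {man: woman}."""
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_idx = {m: 0 for m in men_prefs}           # next woman to propose to
    engaged_to = {}                                # woman -> man
    free = deque(men_prefs)
    while free:
        m = free.popleft()
        w = men_prefs[m][next_idx[m]]
        next_idx[m] += 1
        if w not in engaged_to:
            engaged_to[w] = m
        elif rank[w][m] < rank[w][engaged_to[w]]:  # w prefers the new proposer
            free.append(engaged_to[w])
            engaged_to[w] = m
        else:
            free.append(m)
    return {m: w for w, m in engaged_to.items()}

men = {"A": ["x", "y", "z"], "B": ["y", "x", "z"], "C": ["x", "z", "y"]}
women = {"x": ["B", "A", "C"], "y": ["A", "B", "C"], "z": ["A", "B", "C"]}
match = gale_shapley(men, women)
```

A matching is stable when no man and woman both prefer each other to their assigned partners; the array form described in the abstract encodes exactly these pairwise preference comparisons.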
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号