121.
There is a significant need for nationally consistent information to assist land managers and scientists with property planning, vegetation monitoring, risk assessment, and conservation at an appropriate spatial scale. We created maps of woody vegetation cover of Australia using a consistent method applied across the continent, and made them accessible. We classified pixels as woody or not woody, quantified their foliage projective cover, and classed them as forest or other wooded lands based on their cover density. The maps provide, for the first time, cover density estimates of Australian forests and other wooded lands with the spatial detail required for local-scale studies. The maps were created by linking field data, collected by a network of collaborators across the continent, to a time series of Landsat-5 TM and Landsat-7 ETM+ images for the period 2000–2010. The fractions of green vegetation cover, non-green vegetation cover, and bare ground were calculated for each pixel using a previously developed spectral unmixing approach. Time-series statistics of the green vegetation cover were used to classify each pixel as either woody or not woody with a random forest classifier. Woody foliage projective cover was estimated by calibration with field measurements, and woody pixels were classed as forest where the foliage cover was at least 0.1. Validation of the foliage projective cover against field measurements gave a coefficient of determination (R²) of 0.918 and a root mean square error of 0.070. The user's and producer's accuracies for areas mapped as forest were high, at 92.2% and 95.9%, respectively; for other wooded lands they were lower, at 75.7% and 61.3%. Further research is needed into methods to better separate areas with sparse woody vegetation from those without woody vegetation. The maps provide information that will assist in gaining a better understanding of our natural environment, with applications ranging from the continental-scale task of estimating national carbon stocks to the local-scale tasks of assessing habitat suitability and property planning.
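The woody/not-woody step maps naturally onto an off-the-shelf random forest. The sketch below is a minimal illustration with synthetic stand-in features and labels; the real inputs would be the per-pixel time-series statistics and field observations described above, and this is not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical stand-in data: rows are pixels, columns are time-series
# statistics of the green-cover fraction (e.g. mean, min, max, percentiles
# over the 2000-2010 Landsat series); labels are field woody/not-woody calls.
rng = np.random.default_rng(0)
X = rng.random((1000, 6))
y = (X[:, 0] > 0.4).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)

woody = clf.predict(X).astype(bool)  # per-pixel woody mask
# Pixels whose calibrated foliage projective cover is at least 0.1 would
# then be classed as forest, per the threshold quoted in the abstract.
```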
122.
Scale-invariant interest points have found several highly successful applications in computer vision, in particular for image-based matching and recognition. This paper presents a theoretical analysis of the scale selection properties of a generalized framework for detecting interest points from scale-space features presented in Lindeberg (Int. J. Comput. Vis. 2010, under revision) and comprising:
  • an enriched set of differential interest operators at a fixed scale including the Laplacian operator, the determinant of the Hessian, the new Hessian feature strength measures I and II and the rescaled level curve curvature operator, as well as
  • an enriched set of scale selection mechanisms including scale selection based on local extrema over scale, complementary post-smoothing after the computation of non-linear differential invariants and scale selection based on weighted averaging of scale values along feature trajectories over scale.
It is shown how the selected scales of different linear and non-linear interest point detectors can be analyzed for Gaussian blob models. Specifically, it is shown that for a rotationally symmetric Gaussian blob model, the scale estimates obtained by weighted scale selection will be similar to the scale estimates obtained from local extrema over scale of scale-normalized derivatives for each of the pure second-order operators. In this respect, no scale compensation is needed between the two types of scale selection approaches. When using post-smoothing, the scale estimates may, however, differ between different types of interest point operators, and it is shown how relative calibration factors can be derived to enable comparable scale estimates for each purely second-order operator and for different amounts of self-similar post-smoothing. A theoretical analysis of the sensitivity to affine image deformations is presented, and it is shown that the scale estimates obtained from the determinant of the Hessian operator are affine covariant for an anisotropic Gaussian blob model. Among the other purely second-order operators, the Hessian feature strength measure I has the lowest sensitivity to non-uniform scaling transformations, followed by the Laplacian operator and the Hessian feature strength measure II. The predictions from this theoretical analysis agree with experimental results on the repeatability of the different interest point detectors under affine and perspective transformations of real image data. A number of less complete results are derived for the level curve curvature operator.
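As a concrete reference point for scale selection based on local extrema over scale, the following minimal Python sketch computes the scale-normalized Laplacian (one of the pure second-order operators above) and picks joint space-scale extrema. It illustrates only the standard operator, not the weighted-averaging or post-smoothing mechanisms analyzed in the paper.

```python
import numpy as np
from scipy import ndimage

def scale_normalized_laplacian(image, sigmas):
    """t * (L_xx + L_yy) with t = sigma**2, stacked over the given scales."""
    responses = []
    for sigma in sigmas:
        smoothed = ndimage.gaussian_filter(image, sigma=sigma)
        responses.append((sigma ** 2) * ndimage.laplace(smoothed))
    return np.stack(responses)  # shape: (scale, row, col)

def laplacian_interest_points(image, sigmas, rel_thresh=0.1):
    """Local extrema over space *and* scale of the normalized response."""
    resp = np.abs(scale_normalized_laplacian(image, sigmas))
    local_max = ndimage.maximum_filter(resp, size=3)
    peaks = (resp == local_max) & (resp > rel_thresh * resp.max())
    return np.argwhere(peaks)  # rows of (scale_index, row, col)
```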
    123.
A novel preparation method for dichroic dye-doped polymer-dispersed liquid crystals has been developed. A porous polymer matrix is first created by washing the liquid crystal out of a polymer-dispersed liquid crystal (PDLC); the matrix is then refilled with dye-doped liquid crystal. Optimizing the liquid crystal used in the refilling decreases the turn-on voltage and shortens the response time. Poster-standard reflectivity and newspaper-standard contrast were demonstrated with a 3.8-in. QVGA reflective TFT display at a drive voltage of 10 V.
    124.
Understanding changes in customer behavior is essential for survival in a fast-changing business environment. Particularly in electronic commerce (EC) management, many companies have developed on-line shopping stores to serve customers and immediately collect buying logs in databases, a trend that has driven the development of data-mining applications. Fuzzy time-interval sequential pattern mining is one serviceable data-mining technique that discovers customer behavioral patterns over time. To take a shopping example, the pattern (Bread, Short, Milk, Long, Jam) means that Bread is bought before Milk within a Short period, and Jam is bought after Milk within a Long period, where Short and Long are linguistic terms predetermined by managers. Such patterns give managers general, concise knowledge that supports quick-response decisions, especially in business. However, no studies, to our knowledge, have yet addressed the issue of change in fuzzy time-interval sequential patterns. The pattern (Bread, Short, Milk, Long, Jam) may have held last year yet no longer be a trend this year, having been replaced by (Bread, Short, Yogurt, Short, Jam). Without updating this knowledge, managers might map out inappropriate marketing plans for products or services and dated inventory strategies with respect to time intervals. To deal with this problem, we propose a novel change-mining model, MineFuzzChange, to detect changes in fuzzy time-interval sequential patterns. Experiments using a brick-and-mortar transactional dataset collected from a retail chain in Taiwan and a B2C EC dataset evaluate the proposed model, and we demonstrate empirically how it helps managers understand the changing behaviors of their customers and formulate timely marketing and inventory strategies.
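To make the notation concrete: a fuzzy time-interval sequential pattern can be held as a tuple alternating items and linguistic interval terms, and the simplest notion of change is a set difference between two mining periods. The sketch below uses made-up patterns and illustrates only this representation; the MineFuzzChange model itself detects change in a more refined way.

```python
# A fuzzy time-interval sequential pattern alternates items with
# linguistic interval terms, e.g. (Bread, Short, Milk, Long, Jam).
last_year = {("Bread", "Short", "Milk", "Long", "Jam"),
             ("Milk", "Short", "Jam")}
this_year = {("Bread", "Short", "Yogurt", "Short", "Jam"),
             ("Milk", "Short", "Jam")}

emerged = this_year - last_year   # new behavior worth acting on
perished = last_year - this_year  # dated knowledge to retire

print("emerged: ", sorted(emerged))
print("perished:", sorted(perished))
```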
    125.
To address the class-imbalance problem in breast cancer data, the standard AdaBoost algorithm is improved: a BP neural network is introduced as the base learner, the strong global search ability and fast convergence speed of a simulated-annealing genetic algorithm (SA-GA) are incorporated, and the weights are reallocated appropriately, yielding the BP-GamysBoost algorithm. To verify the soundness of the proposed BP-GamysBoost algorithm, the WBCD database was obtained from the UCI machine learning repository, and the stability, accuracy, missed-diagnosis rate, sensitivity, and other performance indicators of the BP-GamysBoost model were compared with those of the BP, BP-GA, and BP-Adaboost models. The final results show that the BP-GamysBoost model performs well on the breast cancer database and outperforms the other three models.
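Since the abstract only names its components, here is a minimal sketch of the general idea of boosting neural-network base learners, assuming weighted bootstrap resampling because scikit-learn's MLPClassifier does not accept per-sample weights; the SA-GA weight optimization that distinguishes BP-GamysBoost is not reproduced here.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def boost_mlps(X, y, n_rounds=10, seed=0):
    """AdaBoost-style loop with small MLP base learners.

    Each round trains on a bootstrap sample drawn according to the
    current example weights, then reweights misclassified examples.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    w = np.full(n, 1.0 / n)               # example weights
    learners, alphas = [], []
    for _ in range(n_rounds):
        idx = rng.choice(n, size=n, replace=True, p=w)
        clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                            random_state=seed).fit(X[idx], y[idx])
        miss = clf.predict(X) != y
        err = float(w[miss].sum())
        if err <= 0 or err >= 0.5:        # perfect or worse than chance: stop
            break
        alpha = 0.5 * np.log((1 - err) / err)
        w *= np.exp(np.where(miss, alpha, -alpha))
        w /= w.sum()                       # renormalize
        learners.append(clf)
        alphas.append(alpha)
    return learners, alphas
```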
    126.
The Protein Processor Associative Memory (PPAM) is a novel hardware architecture for a distributed, decentralised, robust, scalable, bidirectional, hetero-associative memory that can adapt online to changes in the training data. The PPAM uses the location of data in memory to identify relationships and is therefore fundamentally different from traditional processing methods, which tend to perform computation with arithmetic operations. This paper presents the hardware architecture and details a sample digital-logic implementation, with an analysis of the implications of using existing techniques for such architectures. It also presents the results of applying the PPAM to a robotic application that involves learning forward and inverse kinematics. The results show that, contrary to most other techniques, the PPAM benefits from higher-dimensional data, and that the choice of quantisation intervals is crucial to its performance.
    127.
The suitability of existing copper extractants for feed solutions of various concentrations was examined. The results show that modified, stronger aldoxime extractants outperform blended extractants that contain weaker ketoximes; of all the extractants examined, the ester-modified ACORGA M5640 is the best suited to treating concentrated feed solutions. ACORGA M5640 shows clear advantages in recovery, selectivity over iron, and resistance to hydrolytic degradation. A case study was carried out to confirm its suitability for concentrated feed solutions.
    128.
    129.
Software engineering activities are information-intensive. Research proposes information retrieval (IR) techniques to support engineers in daily tasks such as establishing and maintaining traceability links, fault identification, and software maintenance. We describe an engineering task, test case selection, and illustrate our problem analysis and solution discovery process. The objective of the study is to understand to what extent IR techniques (one potential solution) can be applied to test case selection and provide decision support in a large-scale industrial setting. We analyze, in the context of the studied company, how test case selection is performed, and design a series of experiments evaluating the performance of different IR techniques. Each experiment provides lessons learned from implementation, execution, and results, feeding into its successor. The three experiments led to the following observations: 1) there is a lack of research on scalable parameter optimization of IR techniques for software engineering problems; 2) scaling IR techniques to industry data is challenging, in particular for latent semantic analysis; 3) the IR context poses constraints on the empirical evaluation of IR techniques, requiring more research on valid statistical approaches. We believe our experience in conducting a series of IR experiments with industry-grade data is valuable to peer researchers, helping them avoid the pitfalls we encountered. Furthermore, we identify challenges that must be addressed to bridge the gap between laboratory IR experiments and real industrial applications of IR.
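For orientation, the sketch below shows the textbook form of latent semantic analysis applied to retrieval-style test case selection: TF-IDF vectors, a truncated SVD projection, and cosine ranking against a query such as a change description. The corpus, query, and dimensionality are toy assumptions; the paper's experiments concern scaling exactly this kind of pipeline to industrial data.

```python
from sklearn.decomposition import TruncatedSVD
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus: one document per test case (name plus description).
test_cases = [
    "verify login with expired password",
    "measure throughput of message broker under load",
    "check session timeout redirects to login page",
    "validate broker reconnect after network failure",
]
query = "defect fix in login session handling"  # hypothetical change request

vec = TfidfVectorizer(stop_words="english")
tfidf = vec.fit_transform(test_cases + [query])

# LSA: project TF-IDF vectors into a low-dimensional latent space.
lsa = TruncatedSVD(n_components=2, random_state=0).fit_transform(tfidf)

# Rank test cases by cosine similarity to the query in latent space.
scores = cosine_similarity(lsa[-1:], lsa[:-1]).ravel()
for i in scores.argsort()[::-1]:
    print(f"{scores[i]:+.2f}  {test_cases[i]}")
```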
    130.