61.
62.
Journal of Inorganic and Organometallic Polymers and Materials - Due to their excellent properties, polyimides (PIs) are promising high-performance materials in different technological fields....
63.
This article is an edited transcript of a talk from Riccardo Rubino's (Marangoni) 2010 lecture tour of Chinese art universities, delivered in the distinguished-lecturer series at the Beijing Institute of Fashion Technology. In a wide-ranging exchange with faculty and students, he addressed the historical origins of the fashion stylist profession, the skills and qualities it requires, its place within the fashion system, stylists' collaboration with designers and photographers, and the important role this work plays in the fashion industry as a whole.
64.
The recent availability of reliable schemes for physically unclonable constants (PUCs) opens interesting possibilities in the field of security. In this paper, we explore the possibility of using PUCs to embed random permutations in a chip, to be used, for example, as building blocks in cryptographic constructions such as sponge functions and substitution–permutation networks. We show that the most difficult part is the generation of random integers using the bit-string produced by the PUC as the only source of randomness. To solve this integer-generation problem, we propose a partial rejection method that allows the designer to trade off entropy against efficiency. The results show that the proposed schemes can be implemented with reasonable complexity.
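As a concrete illustration of the integer-generation step, here is a minimal Python sketch that drives a Fisher–Yates shuffle from a raw bit stream using plain rejection sampling; the paper's partial-rejection variant accepts a wider range of draws to waste fewer bits, at the cost of some bias. The `secrets`-based bit source merely stands in for a real PUC.

```python
import secrets  # stands in for the PUC bit-string in this sketch

def bits_to_int(bitstream, k):
    """Consume k bits from an iterator and pack them into an integer."""
    v = 0
    for _ in range(k):
        v = (v << 1) | next(bitstream)
    return v

def rand_below(bitstream, n):
    """Unbiased integer in [0, n) via full rejection sampling (n >= 2)."""
    k = (n - 1).bit_length()  # bits needed to cover [0, n)
    while True:
        v = bits_to_int(bitstream, k)
        if v < n:             # reject draws outside the target range
            return v

def puc_permutation(bitstream, n):
    """Fisher-Yates shuffle driven only by the supplied bit stream."""
    perm = list(range(n))
    for i in range(n - 1, 0, -1):
        j = rand_below(bitstream, i + 1)
        perm[i], perm[j] = perm[j], perm[i]
    return perm

# Emulate the PUC with an OS entropy source for demonstration.
bits = iter(lambda: secrets.randbits(1), 2)  # never hits the sentinel 2
print(puc_permutation(bits, 8))              # e.g. [3, 6, 0, 7, 2, 5, 1, 4]
```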
65.
We propose two models for improving the performance of rule-based classification in unbalanced and highly imprecise domains. Both models are probabilistic frameworks aimed at boosting the performance of basic rule-based classifiers. The first implements a global-to-local scheme, where the response of a global rule-based classifier is refined through a probabilistic analysis of the coverage of its rules. In particular, the coverage of the individual rules is used to learn local probabilistic models, which ultimately refine the predictions of the corresponding rules of the global classifier. The second implements the dual local-to-global strategy, in which single classification rules are combined within an exponential probabilistic model so that overall performance is boosted as a side effect of their mutual influence. Several variants of the basic ideas are studied, and their performance is thoroughly evaluated and compared with state-of-the-art algorithms on standard benchmark datasets.
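To make the global-to-local idea concrete, here is a minimal, self-contained Python sketch (not the authors' model): each rule of a toy global classifier gets a local class-frequency model learned from the training instances it covers, and that local distribution refines the rule's own prediction. The rules, features, and data are all illustrative.

```python
from collections import Counter

# Toy global classifier: ordered (predicate, predicted_class) rules.
rules = [
    (lambda x: x["len"] > 10, "long"),
    (lambda x: x["len"] <= 10, "short"),
]

def fit_local_models(rules, X, y):
    """Learn, per rule, the class distribution over the instances it covers."""
    local = []
    for predicate, _ in rules:
        counts = Counter(label for x, label in zip(X, y) if predicate(x))
        total = sum(counts.values())
        local.append({c: n / total for c, n in counts.items()} if total else {})
    return local

def predict(rules, local, x):
    """The first firing rule answers, refined by its local distribution."""
    for (predicate, rule_class), dist in zip(rules, local):
        if predicate(x):
            return max(dist, key=dist.get) if dist else rule_class
    return None

X = [{"len": 15}, {"len": 12}, {"len": 4}, {"len": 14}]
y = ["long", "short", "short", "long"]          # noisy labels
local = fit_local_models(rules, X, y)
print(predict(rules, local, {"len": 20}))       # "long": 2/3 of covered instances
```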
66.
We examine four approaches for dealing with the logical omniscience problem and their potential applicability: the syntactic approach, awareness, algorithmic knowledge, and impossible possible worlds. Although in some settings these approaches are equi-expressive and can capture all epistemic states, in other settings of interest (especially with probability in the picture) we show that they are not equi-expressive. We then consider the pragmatics of dealing with logical omniscience: how to choose an approach and construct an appropriate model.
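For readers unfamiliar with the awareness approach, the following toy Python model (my construction, not the paper's) shows how it blocks logical omniscience: the agent explicitly knows a proposition only if it holds in every accessible world and the agent is aware of it.

```python
# Single-agent awareness structure over atomic propositions.
access = {"w1": {"w1", "w2"}, "w2": {"w2"}}         # accessibility relation
truth = {"w1": {"p": True, "r": True},
         "w2": {"p": True, "r": True}}
aware = {"w1": {"p"}, "w2": {"p"}}                  # the agent is unaware of r

def implicitly_knows(w, prop):
    """Standard Kripke knowledge: prop holds in every accessible world."""
    return all(truth[v][prop] for v in access[w])

def explicitly_knows(w, prop):
    """Explicit knowledge = implicit knowledge gated by awareness."""
    return prop in aware[w] and implicitly_knows(w, prop)

print(implicitly_knows("w1", "r"))  # True: r holds in all accessible worlds
print(explicitly_knows("w1", "r"))  # False: unaware of r, omniscience blocked
print(explicitly_knows("w1", "p"))  # True
```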
67.
In Very Long Baseline Interferometry (VLBI), signals from distant radio sources are simultaneously recorded at different antennas, with the purpose of investigating their physical properties. The recorded signals are generally modeled as realizations of Gaussian processes whose power is dominated by the system noise at the receiving antennas. The actual signal coming from the radio source can be detected only after cross-correlation of the various data streams. The signals received at each antenna are digitized after low-noise amplification and frequency down-conversion, to allow subsequent digital post-processing. The applied quantization is coarse, with generally 1 or 2 bits associated with the signal amplitude. In modern applications the sampling is typically performed at a high rate, and subchannels are then generated by filtering, followed by decimation and requantization of the signal streams. The redigitized streams are then cross-correlated to extract the physical observables. While the classical effect of quantization has been widely studied in the past, the decorrelation induced by the filtering and requantization process has so far been characterized only experimentally, mainly because of its inherent mathematical complexity. In the present work we analyze this problem and provide algorithms and analytical formulas for predicting the induced decorrelation for a wide class of quantization schemes, under the sole assumption of weakly correlated signals, which is typically fulfilled in VLBI and radio astronomy applications.
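A quick Monte-Carlo sketch in Python (with NumPy) of the basic phenomenon the paper treats analytically: coarse 2-bit quantization of two weakly correlated Gaussian streams shrinks the measured correlation by a roughly constant efficiency factor, about 0.88 for a 4-level quantizer with near-optimal settings. The threshold and level values here are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.1, 1_000_000          # weak true correlation, sample count

# Two unit-variance Gaussian streams with correlation rho.
common = rng.standard_normal(n)
x = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.standard_normal(n)
y = np.sqrt(rho) * common + np.sqrt(1 - rho) * rng.standard_normal(n)

def quantize_2bit(s, v0=0.98):
    """4-level quantizer: thresholds -v0, 0, +v0; output levels -3,-1,+1,+3."""
    return np.select([s < -v0, s < 0.0, s < v0], [-3.0, -1.0, 1.0], default=3.0)

rho_q = np.corrcoef(quantize_2bit(x), quantize_2bit(y))[0, 1]
print(f"quantized/true correlation ~ {rho_q / rho:.3f}")  # roughly 0.88
```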
68.
The growing size and complexity of cloud systems create scalability issues for resource monitoring and management. While most existing solutions consider each Virtual Machine (VM) as a black box with independent characteristics, we embrace a new perspective in which VMs with similar behavior in terms of resource usage are clustered together. We argue that this approach has the potential to address scalability issues in cloud monitoring and management. In this paper, we propose a technique to cluster VMs based on their usage of multiple resources, assuming no knowledge of the services executed on them. This technique models VM behavior through the probability histograms of their resource usage, and performs smoothing-based noise reduction and selection of the most relevant information to feed the clustering process. Through extensive evaluation, we show that our proposal achieves high and stable performance in automatic VM clustering, and can reduce the monitoring requirements of cloud systems.
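A minimal sketch of the histogram-plus-smoothing pipeline in Python, using synthetic CPU traces and scikit-learn's k-means; the paper's relevant-information selection step is omitted, and all parameter values are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def vm_signature(samples, bins=20, win=3):
    """Probability histogram of a VM's resource usage, box-smoothed."""
    hist, _ = np.histogram(samples, bins=bins, range=(0.0, 100.0))
    p = hist / hist.sum()
    kernel = np.ones(win) / win
    return np.convolve(p, kernel, mode="same")   # smoothing-based noise reduction

rng = np.random.default_rng(1)
# Hypothetical CPU% traces: two web-like VMs, two batch-like VMs.
vms = [rng.normal(20, 5, 1000), rng.normal(22, 5, 1000),
       rng.normal(75, 10, 1000), rng.normal(70, 8, 1000)]
X = np.vstack([vm_signature(np.clip(v, 0, 100)) for v in vms])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # e.g. [0 0 1 1]: VMs with similar behavior cluster together
```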
69.
The scale of a shot, i.e. the apparent distance of the camera from the main subject of a scene, is one of the main stylistic and narrative devices of audiovisual products, conveying meaning and inducing the viewer's emotional state. The statistical distribution of shot scales in a film may be an important identifier of an individual film, of an individual author, and of various narrative and affective functions of a film. To understand at which level the shot scale distribution (SSD) of a movie might become its fingerprint, automatic recognition of shot scale on a large movie corpus is necessary. In this work we propose an automatic framework for estimating the SSD of a movie using inherent characteristics of shots that carry information about camera distance, without the need to recover the 3D structure of the scene. In the experimental investigation, comparison of the obtained results with manual SSD annotations confirms the validity of the framework. Experiments conducted on movies by Michelangelo Antonioni from different stylistic periods (1950–57, 1960–64, 1966–75, 1980–82) show a strong similarity in shot scale distributions within each period, opening interesting research lines on the possible aesthetic and cognitive sources of such regularity.
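Once per-shot scale labels are available (from the paper's automatic recognizer or from manual annotation), computing the SSD and comparing films reduces to a few lines. This Python sketch uses a deliberately coarse three-class vocabulary and hypothetical labels:

```python
from collections import Counter

SCALES = ["close", "medium", "long"]   # coarse shot-scale vocabulary (illustrative)

def ssd(shot_labels):
    """Shot scale distribution: relative frequency of each scale in a film."""
    counts = Counter(shot_labels)
    total = sum(counts.values())
    return [counts[s] / total for s in SCALES]

def l1_distance(d1, d2):
    """Small L1 distance suggests stylistically similar films."""
    return sum(abs(a - b) for a, b in zip(d1, d2))

film_a = ["close", "medium", "medium", "long", "medium"]
film_b = ["medium", "medium", "long", "medium", "close"]
print(ssd(film_a))                                  # [0.2, 0.6, 0.2]
print(l1_distance(ssd(film_a), ssd(film_b)))        # 0.0: identical SSDs
```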
70.
Boosting text segmentation via progressive classification
A novel approach is proposed for reconciling tuples stored as free text with an existing attribute schema. The basic idea is to subject the available text to progressive classification, i.e., a multi-stage classification scheme in which, at each intermediate stage, a classifier is learnt that analyzes the textual fragments left unreconciled by the previous stages. Classification is accomplished by an ad hoc exploitation of traditional association-mining algorithms, and is supported by a data-transformation scheme that takes advantage of domain-specific dictionaries/ontologies. A key feature is the capability of progressively enriching the available ontology with the results of the previous classification stages, which significantly improves overall classification accuracy. An extensive experimental evaluation shows the effectiveness of our approach.
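The cascading control flow behind progressive classification can be sketched as below (plain Python; the per-stage classifiers here are trivial dictionary matchers standing in for the paper's association-mining models, and the ontology-enrichment step is omitted):

```python
def progressive_classify(fragments, stages):
    """Each stage labels what it can; the rest flows to the next stage.

    `stages` is a list of (classify, threshold) pairs, where classify
    returns a (label, confidence) pair for a text fragment.
    """
    resolved, pending = {}, list(fragments)
    for classify, threshold in stages:
        still_pending = []
        for frag in pending:
            label, conf = classify(frag)
            if conf >= threshold:
                resolved[frag] = label      # reconciled at this stage
            else:
                still_pending.append(frag)  # retry with the next classifier
        pending = still_pending
    return resolved, pending                # pending = never reconciled

# Hypothetical stages: an exact dictionary lookup, then a looser matcher.
cities = {"rome": "city", "milan": "city"}
stage1 = (lambda f: (cities.get(f.lower(), "?"),
                     1.0 if f.lower() in cities else 0.0), 0.9)
stage2 = (lambda f: ("street", 0.6) if "via" in f.lower() else ("?", 0.0), 0.5)
print(progressive_classify(["Rome", "Via Appia", "xyz"], [stage1, stage2]))
# ({'Rome': 'city', 'Via Appia': 'street'}, ['xyz'])
```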