Article search
Subscription full text: 3,395 articles
Free: 225 articles
Domestic free: 108 articles
By subject area:
Electrical engineering: 38 articles
General: 93 articles
Chemical industry: 116 articles
Metalworking: 38 articles
Machinery and instrumentation: 94 articles
Building science: 658 articles
Mining engineering: 189 articles
Energy and power: 82 articles
Light industry: 107 articles
Hydraulic engineering: 23 articles
Petroleum and natural gas: 54 articles
Armament industry: 6 articles
Radio and electronics: 217 articles
General industrial technology: 146 articles
Metallurgical industry: 53 articles
Atomic energy technology: 16 articles
Automation technology: 1,798 articles
By publication year:
2024: 5 articles
2023: 35 articles
2022: 65 articles
2021: 69 articles
2020: 85 articles
2019: 80 articles
2018: 69 articles
2017: 82 articles
2016: 112 articles
2015: 110 articles
2014: 203 articles
2013: 187 articles
2012: 211 articles
2011: 283 articles
2010: 170 articles
2009: 256 articles
2008: 240 articles
2007: 234 articles
2006: 242 articles
2005: 178 articles
2004: 133 articles
2003: 124 articles
2002: 90 articles
2001: 50 articles
2000: 49 articles
1999: 63 articles
1998: 51 articles
1997: 52 articles
1996: 29 articles
1995: 37 articles
1994: 18 articles
1993: 11 articles
1992: 9 articles
1991: 6 articles
1990: 6 articles
1989: 4 articles
1986: 7 articles
1985: 6 articles
1984: 6 articles
1983: 7 articles
1982: 6 articles
1981: 6 articles
1980: 8 articles
1979: 7 articles
1978: 3 articles
1977: 5 articles
1976: 7 articles
1975: 3 articles
1974: 2 articles
1973: 3 articles
A total of 3,728 results were retrieved (search time: 265 ms).
81.
Boosting text segmentation via progressive classification   (Total citations: 5; self-citations: 4; citations by others: 1)
A novel approach for reconciling tuples stored as free text into an existing attribute schema is proposed. The basic idea is to subject the available text to progressive classification, i.e., a multi-stage classification scheme where, at each intermediate stage, a classifier is learnt to analyze the textual fragments not reconciled at the end of the previous stages. Classification is accomplished by an ad hoc exploitation of traditional association mining algorithms, and is supported by a data transformation scheme which takes advantage of domain-specific dictionaries/ontologies. A key feature is the capability of progressively enriching the available ontology with the results of the previous classification stages, thus significantly improving the overall classification accuracy. An extensive experimental evaluation shows the effectiveness of our approach.
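A minimal sketch of such a progressive-classification loop, assuming a generic scikit-learn text classifier in place of the association-mining classifiers and ontology enrichment described in the abstract; the function name, the confidence threshold, and the stage schedule are all illustrative.

```python
# Hypothetical progressive classification: at each stage a classifier is trained on the
# fragments reconciled so far, only confident predictions are accepted, and the remainder
# is deferred to the next stage.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB

def progressive_classify(seed_texts, seed_labels, fragments, stages=3, threshold=0.8):
    texts, labels = list(seed_texts), list(seed_labels)
    remaining, assigned = list(fragments), {}
    for stage in range(stages):
        if not remaining:
            break
        vec = TfidfVectorizer().fit(texts + remaining)
        clf = MultinomialNB().fit(vec.transform(texts), labels)
        probs = clf.predict_proba(vec.transform(remaining))
        still_open = []
        for frag, p in zip(remaining, probs):
            if p.max() >= threshold:              # confident: reconcile now, enrich next stage
                label = clf.classes_[p.argmax()]
                assigned[frag] = label
                texts.append(frag)
                labels.append(label)
            else:                                 # defer to a later stage
                still_open.append(frag)
        remaining = still_open
        threshold *= 0.9                          # later stages may relax the acceptance bar
    return assigned, remaining
```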
82.
Learning often occurs through comparing. In classification learning, in order to compare data groups, most existing methods compare either raw instances or learned classification rules against each other. This paper takes a different approach, namely conceptual equivalence: groups are equivalent if their underlying concepts are equivalent, even though their instance spaces need not overlap and their rule sets need not look alike. A new comparison methodology is proposed that learns a representation of each group's underlying concept and cross-examines each group's instances against the other group's concept representation. The innovation is fivefold. First, it is able to quantify the degree of conceptual equivalence between two groups. Second, it is able to retrace the source of discrepancy at two levels: an abstract level of underlying concepts and a specific level of instances. Third, it applies to numeric as well as categorical data. Fourth, it circumvents direct comparisons between (possibly a large number of) rules, which demand substantial effort. Fifth, it reduces dependency on the accuracy of the employed classification algorithms. Empirical evidence suggests that this new methodology is effective and yet simple to use in scenarios such as noise cleansing and concept-change learning.
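A toy sketch of the cross-examination idea, assuming each group's concept is represented by a decision tree trained on that group's labelled instances (the abstract does not prescribe a particular learner); the mean cross-accuracy serves as an equivalence score.

```python
# Illustrative cross-examination for conceptual equivalence: each group's concept is a
# classifier trained on its own instances, and the other group's instances are scored
# against it; the mean cross-accuracy quantifies how equivalent the two concepts are.
from sklearn.tree import DecisionTreeClassifier

def conceptual_equivalence(Xa, ya, Xb, yb):
    concept_a = DecisionTreeClassifier().fit(Xa, ya)   # representation of group A's concept
    concept_b = DecisionTreeClassifier().fit(Xb, yb)   # representation of group B's concept
    b_under_a = concept_a.score(Xb, yb)                # how well B's data fits A's concept
    a_under_b = concept_b.score(Xa, ya)                # how well A's data fits B's concept
    return (b_under_a + a_under_b) / 2.0               # 1.0 = fully equivalent concepts
```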
83.
Automatic Construction and Verification of Isotopy Invariants   (Total citations: 1; self-citations: 0; citations by others: 0)
We extend our previous study of the automatic construction of isomorphic classification theorems for algebraic domains by considering the isotopy equivalence relation. Isotopism is an important generalisation of isomorphism, and is studied by mathematicians in domains such as loop theory. This extension was not straightforward, and we had to solve two major technical problems, namely, generating and verifying isotopy invariants. Concentrating on the domain of loop theory, we have developed three novel techniques for generating isotopy invariants, using the notion of universal identities and constructions based on subblocks. In addition, given the complexity of the theorems that verify that a conjunction of the invariants forms an isotopy class, we have developed ways of simplifying the problem of proving these theorems. Our techniques employ an interplay of computer algebra, model generation, theorem proving, and satisfiability-solving methods. To demonstrate the power of the approach, we generate isotopic classification theorems for loops of size 6 and 7, which extend the previously known enumeration results. This work was previously beyond the capabilities of automated reasoning techniques. The author's work was supported by EPSRC MathFIT grant GR/S31099.
84.
Interoperability is a key property of enterprise applications, and it is hard to achieve due to the large number of interoperating components and to semantic heterogeneity. The inherent complexity of interoperability problems implies that there is no silver bullet for solving them. Rather, the knowledge about how to solve wicked interoperability problems is hidden in the application cases that expose those problems. The paper addresses the question of how to organise and use method knowledge to resolve interoperability problems. We propose the structure of a knowledge-based system that can deliver situation-specific solutions, called method chunks. Situational Method Engineering promotes modularisation and formalisation of method knowledge in the form of reusable method chunks, which can be combined to compose a situation-specific method. The method chunks are stored in a method chunk repository. In order to cater for management and retrieval, we introduce an Interoperability Classification Framework, which is used to classify and tag method chunks and to assess the project situation in which they are to be used. The classification framework incorporates technical as well as business and organisational aspects of interoperability. This is an important feature, as interoperability problems are typically multifaceted, spanning multiple aspects. We have applied the approach to analyse an industry case from the insurance sector and to identify and classify a set of method chunks.
85.
Several researchers have recently investigated the connection between reinforcement learning and classification. We are motivated by proposals of approximate policy iteration schemes without value functions, which focus on policy representation using classifiers and address policy learning as a supervised learning problem. This paper proposes variants of an improved policy iteration scheme which addresses the core sampling problem of evaluating a policy through simulation by treating it as a multi-armed bandit machine. The resulting algorithm offers performance comparable to that of the previous algorithm, achieved, however, with significantly less computational effort. An order of magnitude improvement is demonstrated experimentally in two standard reinforcement learning domains: inverted pendulum and mountain-car.
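A rough sketch of treating rollout allocation as a multi-armed bandit, with a UCB rule spending the simulation budget on promising actions; `simulate_return` is a hypothetical caller-supplied function that performs one rollout of a candidate action from the state under evaluation, and this is not the authors' exact sampling scheme.

```python
# UCB-style allocation of rollouts among candidate actions at a sampled state; the winning
# action becomes the training label for the policy classifier.
import math

def ucb_action_label(simulate_return, actions, budget=100, c=1.4):
    counts = {a: 0 for a in actions}
    sums = {a: 0.0 for a in actions}
    for a in actions:                                  # pull each arm once to initialise
        sums[a] += simulate_return(a)
        counts[a] += 1
    for t in range(len(actions) + 1, budget + 1):
        ucb = {a: sums[a] / counts[a] + c * math.sqrt(math.log(t) / counts[a])
               for a in actions}
        a = max(ucb, key=ucb.get)                      # next rollout goes to the best UCB score
        sums[a] += simulate_return(a)
        counts[a] += 1
    return max(actions, key=lambda a: sums[a] / counts[a])  # empirically best action
```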
86.
It is well recognized that impact-acoustic emissions contain information that can indicate the presence of adhesive defects in bonding structures. In our previous papers, an artificial neural network (ANN) was adopted to assess the bonding integrity of tile walls, with features extracted from the power spectral density (PSD) of the impact-acoustic signals acting as the input of the classifier. However, in addition to the inconvenience posed by general drawbacks such as long training time and the large number of training samples needed, the performance of the classic ANN classifier is degraded by the similar spectral characteristics of different bonding statuses caused by abnormal impacts. In this paper, our previous work is extended by employing a least-squares support vector machine (LS-SVM) classifier instead of the ANN, yielding a bonding integrity recognition approach with better reliability and enhanced immunity to surface roughness. With the help of specially designed artificial sample slabs, experimental results obtained with the proposed method are provided and compared with those using the ANN classifier, demonstrating the effectiveness of the present strategy.
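A bare-bones LS-SVM binary classifier with an RBF kernel, solving the standard LS-SVM linear system in closed form; this is a generic sketch (labels in {-1, +1}, arbitrary hyper-parameters), not the paper's feature extraction or tuning.

```python
# Minimal LS-SVM: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] and classify with
# the resulting kernel expansion. X is an (n_samples, n_features) array, e.g. PSD features.
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)   # pairwise squared distances
    return np.exp(-d / (2.0 * sigma ** 2))

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)), K + np.eye(n) / gamma]])
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    # returned function labels new samples by the sign of the kernel expansion
    return lambda X_new: np.sign(rbf_kernel(X_new, X, sigma) @ alpha + b)
```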
87.
Autonomous mobile robots need to adapt their behavior to the terrain over which they drive, and to predict the traversability of the terrain so that they can effectively plan their paths. Such robots usually make use of a set of sensors to investigate the terrain around them and build up an internal representation that enables them to navigate. This paper addresses the question of how to use sensor data to learn properties of the environment and use this knowledge to predict which regions of the environment are traversable. The approach makes use of sensed information from range sensors (stereo or ladar), color cameras, and the vehicle’s navigation sensors. Models of terrain regions are learned from subsets of pixels that are selected by projection into a local occupancy grid. The models include color and texture as well as traversability information obtained from an analysis of the range data associated with the pixels. The models are learned without supervision, deriving their properties from the geometry and the appearance of the scene. The models are used to classify color images and assign traversability costs to regions. The classification does not use the range or position information, but only color images. Traversability determined during the model-building phase is stored in the models. This enables classification of regions beyond the range of stereo or ladar using the information in the color images. The paper describes how the models are constructed and maintained, how they are used to classify image regions, and how the system adapts to changing environments. Examples are shown from the implementation of this algorithm in the DARPA Learning Applied to Ground Robots (LAGR) program, and an evaluation of the algorithm against human-provided ground truth is presented.
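A very rough sketch of the model-and-classify idea, assuming each terrain model pairs a color histogram with a traversability cost obtained elsewhere from the range-data analysis; region classification then needs only the color image. The names and the histogram distance are illustrative, not the LAGR implementation.

```python
# Terrain models as color histograms tagged with traversability costs; new image regions
# are scored against the nearest model so traversability extends beyond sensor range.
import numpy as np

def color_histogram(pixels, bins=8):
    # pixels: (N, 3) array of RGB values in [0, 255]
    h, _ = np.histogramdd(pixels, bins=(bins, bins, bins), range=[(0, 256)] * 3)
    return (h / h.sum()).ravel()

def build_model(pixels, traversability_cost):
    # the cost is assumed to come from range data during the model-building phase
    return {"hist": color_histogram(pixels), "cost": traversability_cost}

def classify_region(pixels, models):
    # classify a region of a color image by its closest stored terrain model
    hist = color_histogram(pixels)
    best = min(models, key=lambda m: np.abs(m["hist"] - hist).sum())
    return best["cost"]
```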
88.
In this paper, we propose a scheme that integrates independent component analysis (ICA) and neural networks for electrocardiogram (ECG) beat classification. ICA is used to decompose ECG signals into a weighted sum of basic components that are statistically mutually independent. The projections on these components, together with the RR interval, then constitute a feature vector for the subsequent classifier. Two neural networks, a probabilistic neural network (PNN) and a back-propagation neural network (BPNN), are employed as classifiers. ECG samples attributed to eight different beat types were drawn from the MIT-BIH arrhythmia database for the experiments. The results show high classification accuracy of over 98% with either of the two classifiers. Between them, the PNN shows slightly better performance than the BPNN in terms of accuracy and robustness to the number of ICA bases. These results indicate that the integration of independent component analysis and neural networks, especially the PNN, is a promising scheme for computer-aided diagnosis of heart diseases based on ECG.
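A hedged sketch of the pipeline, with scikit-learn's FastICA providing the independent components and an MLP standing in for the paper's PNN/BPNN; array shapes, the number of components, and the network size are illustrative.

```python
# ICA projections plus the RR interval form the feature vector; a neural network classifies.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.neural_network import MLPClassifier

def train_ecg_classifier(beats, rr_intervals, labels, n_components=20):
    # beats: (n_beats, n_samples) matrix of segmented, time-aligned ECG beats
    ica = FastICA(n_components=n_components, random_state=0).fit(beats)
    features = np.hstack([ica.transform(beats),                      # projections on ICA bases
                          np.asarray(rr_intervals).reshape(-1, 1)])  # plus the RR interval
    clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=500).fit(features, labels)
    return ica, clf

def classify_beat(ica, clf, beat, rr_interval):
    feature = np.hstack([ica.transform(beat.reshape(1, -1)), [[rr_interval]]])
    return clf.predict(feature)[0]
```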
89.
Feature selection is about finding useful (relevant) features to describe an application domain. Selecting relevant and sufficient features to effectively represent and index a given dataset is an important task for solving classification and clustering problems intelligently. This task is, however, quite difficult to carry out, since it usually requires a very time-consuming search to obtain the desired features. This paper proposes a bit-based feature selection method for finding the smallest feature set that represents the indexes of a given dataset. The proposed approach originates from bitmap indexing and rough set techniques. It consists of two phases. In the first phase, the given dataset is transformed into a bitmap indexing matrix with some additional data information. In the second phase, a set of relevant and sufficient features is selected and used to represent the classification indexes of the given dataset. After the relevant and sufficient features are selected, they can be judged by domain experts and the final feature set of the given dataset is thus proposed. Finally, experimental results on different datasets also show the efficiency and accuracy of the proposed approach.
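A simplified two-phase sketch of the idea, assuming categorical data: phase one encodes each attribute as bitmap columns, phase two greedily adds attributes until the selected bits index the class labels unambiguously (a rough-set-style reduct heuristic). This is illustrative and not the paper's exact algorithm.

```python
# rows: list of dicts mapping attribute name -> categorical value; labels: class per row.
from collections import defaultdict

def bitmap_encode(rows, attr):
    # phase 1: one bit column per (attribute, value) pair
    values = sorted({r[attr] for r in rows})
    return {v: [1 if r[attr] == v else 0 for r in rows] for v in values}

def purity(rows, labels, attrs):
    # fraction of bit-pattern groups whose members all share one class label
    groups = defaultdict(set)
    for r, y in zip(rows, labels):
        groups[tuple(r[a] for a in attrs)].add(y)
    return sum(len(ys) == 1 for ys in groups.values()) / len(groups)

def select_features(rows, labels):
    # phase 2: greedily grow the feature set until it discriminates the classes
    remaining, selected = set(rows[0].keys()), []
    while remaining:
        best = max(remaining, key=lambda a: purity(rows, labels, selected + [a]))
        selected.append(best)
        remaining.remove(best)
        if purity(rows, labels, selected) == 1.0:   # selected bits already index every class
            break
    return selected, {a: bitmap_encode(rows, a) for a in selected}
```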
90.
Based on the hierarchical structure of the International Patent Classification (IPC), category feature vectors for the different levels are built from the categories' own descriptive information and refined by training with existing patents; the classic KNN algorithm is then applied at each level to classify patents automatically. Experimental results show that the method performs well at the section, class, and subclass levels, and that classification performance improves after the refinement training.
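An illustrative sketch of level-wise KNN over the IPC hierarchy, with TF-IDF vectors of patent text standing in for the category feature vectors built from category descriptions; the function name, the choice of k, and the code-slicing convention are all assumptions.

```python
# One KNN classifier per IPC level, trained on patents labelled with their code at that level.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import KNeighborsClassifier

def train_ipc_level(patent_texts, level_codes, k=5):
    vec = TfidfVectorizer().fit(patent_texts)
    knn = KNeighborsClassifier(n_neighbors=k).fit(vec.transform(patent_texts), level_codes)
    return vec, knn

# Hypothetical usage: separate classifiers for the section, class and subclass levels,
# e.g. IPC code "H04L" -> section "H", class "H04", subclass "H04L".
# section_model  = train_ipc_level(texts, [c[0]  for c in ipc_codes])
# class_model    = train_ipc_level(texts, [c[:3] for c in ipc_codes])
# subclass_model = train_ipc_level(texts, [c[:4] for c in ipc_codes])
```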