  Subscription full text   764 articles
  Free access   52 articles
  Free (domestic)   1 article
Electrical Engineering   16 articles
General / Interdisciplinary   13 articles
Chemical Industry   201 articles
Metalworking   10 articles
Machinery and Instrumentation   14 articles
Building Science   47 articles
Mining Engineering   2 articles
Energy and Power   35 articles
Light Industry   55 articles
Hydraulic Engineering   7 articles
Radio and Electronics   63 articles
General Industrial Technology   156 articles
Metallurgical Industry   30 articles
Nuclear Technology   3 articles
Automation Technology   165 articles
  2024   1 article
  2023   18 articles
  2022   39 articles
  2021   68 articles
  2020   37 articles
  2019   29 articles
  2018   36 articles
  2017   33 articles
  2016   44 articles
  2015   31 articles
  2014   53 articles
  2013   50 articles
  2012   43 articles
  2011   70 articles
  2010   52 articles
  2009   36 articles
  2008   34 articles
  2007   19 articles
  2006   18 articles
  2005   16 articles
  2004   12 articles
  2003   12 articles
  2002   11 articles
  2001   3 articles
  2000   9 articles
  1999   3 articles
  1998   10 articles
  1997   7 articles
  1996   4 articles
  1995   7 articles
  1994   1 article
  1993   2 articles
  1992   1 article
  1989   1 article
  1988   1 article
  1985   1 article
  1980   1 article
  1978   1 article
  1976   1 article
  1974   1 article
  1973   1 article
Sort order: 817 query results in total, search time 15 ms
81.
Humans can handle a deformable object and damp its vibration with considerable skill. For an industrial robot, however, handling a deformable object subject to acute vibration is often a difficult task. This paper addresses the problem of active damping for the handling of deformable linear objects (DLOs) using a strategy inspired by human manipulation skills. The strategy is expressed as several rules, which are implemented with a fuzzy controller and a P controller; a proportional-integral-derivative (PID) controller is also used to implement the rules for comparison. The controller outputs are translated into high-level commands in the robotic language V+. A standard industrial robot with a force/torque sensor mounted on the wrist was used to demonstrate the skill. Experimental results showed that the fuzzy-based damping skill is effective and stable even without any prior knowledge of the deformable linear objects.
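As an illustration of the comparison controller mentioned in this abstract, here is a minimal sketch of a discrete-time PID damping loop. The gains, sampling time, and sensor readings are hypothetical placeholders, not values or interfaces from the paper.

```python
# Minimal sketch of a discrete-time PID damping loop (illustrative only).
# The gains and the hypothetical wrist-sensor readings below are assumptions,
# not taken from the paper.

def pid_damping_step(error, state, kp=1.0, ki=0.1, kd=0.05, dt=0.01):
    """One PID update; `state` carries the integral and the previous error."""
    integral = state["integral"] + error * dt
    derivative = (error - state["prev_error"]) / dt
    command = kp * error + ki * integral + kd * derivative
    return command, {"integral": integral, "prev_error": error}

# Example usage: drive the measured vibration force toward zero.
state = {"integral": 0.0, "prev_error": 0.0}
for force in [0.8, 0.5, 0.2, -0.1, 0.0]:   # hypothetical wrist-sensor readings
    cmd, state = pid_damping_step(error=-force, state=state)
    print(f"correction command: {cmd:.3f}")
```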
82.
In this paper, we develop the a posteriori error estimation of mixed discontinuous Galerkin finite element approximations of the Stokes problem. In particular, we derive computable upper bounds on the error, measured in terms of a natural (mesh-dependent) energy norm. This is done by rewriting the underlying method in a non-consistent form using appropriate lifting operators, and by employing a decomposition result for the discontinuous spaces. A series of numerical experiments highlighting the performance of the proposed a posteriori error estimator on adaptively refined meshes is presented. Paul Houston: funded by the EPSRC (Grant GR/R76615). Thomas P. Wihler: funded by the Swiss National Science Foundation (Grant PBEZ2-102321). This revised version was published online in July 2005 with corrected volume and issue numbers.
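A computable upper bound of the kind described above typically has the generic reliability form sketched below; the concrete estimator terms are specific to the paper and are not reproduced here.

```latex
% Generic shape of a residual-type a posteriori reliability bound
% (schematic only; the concrete indicator terms are defined in the paper).
\[
  \|(\mathbf{u}-\mathbf{u}_h,\; p-p_h)\|_{\mathrm{DG}}
  \;\le\; C \Bigl( \sum_{K \in \mathcal{T}_h} \eta_K^2 \Bigr)^{1/2},
\]
% where $\eta_K$ is a locally computable error indicator on element $K$,
% $\|\cdot\|_{\mathrm{DG}}$ is the mesh-dependent energy norm, and $C$ is
% a constant independent of the mesh size.
```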
83.
84.
Software and Systems Modeling - The notation of a modeling language is of paramount importance for its efficient use and the correct comprehension of created models. A graphical notation,...
85.
In this paper we analyze the effect of including price competition into a classical (market entrant’s) competitive location problem. The multinomial logit approach is applied to model the decision process of utility maximizing customers. We provide complexity results and show that, given the locations of all facilities, a fixed-point iteration approach that has previously been introduced in the literature can be adapted to reliably and quickly determine local price equilibria. We present examples of problem instances that demonstrate the potential non-existence of price equilibria and the case of multiple local equilibria in prices. Furthermore, we show that different price sensitivity levels of customers may actually affect optimal locations of facilities, and we provide first insights into the performance of heuristic algorithms for the location problem.
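For readers unfamiliar with the multinomial logit customer model mentioned above, the sketch below shows the standard choice probabilities. The utility specification (linear in price and distance with sensitivity parameters) is a common textbook form assumed here for illustration, not the exact model of the paper.

```python
import math

# Minimal sketch of multinomial logit (MNL) choice probabilities.
# The utility u = -beta*price - gamma*distance is an assumed textbook form.

def mnl_choice_probabilities(prices, distances, beta=1.0, gamma=0.5):
    """Probability that a customer patronizes each facility."""
    utilities = [-beta * p - gamma * d for p, d in zip(prices, distances)]
    weights = [math.exp(u) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

# Example: three facilities with different prices and distances to a customer.
print(mnl_choice_probabilities(prices=[10.0, 12.0, 9.0],
                               distances=[2.0, 1.0, 4.0]))
```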
86.
Classifiers based on radial basis function neural networks have a number of useful properties that can be exploited in many practical applications. Using sample data, it is possible to adjust their parameters (weights), to optimize their structure, and to select appropriate input features (attributes). Moreover, interpretable rules can be extracted from a trained classifier, and input samples can be identified that cannot be classified with a sufficient degree of “certainty”. These properties support an analysis of radial basis function classifiers and allow for adaptation to “novel” kinds of input samples in a real-world application. In this article, we outline these properties and show how they can be exploited in the field of intrusion detection (detection of network-based misuse). Intrusion detection plays an increasingly important role in securing computer networks. In this case study, we first compare the classification abilities of radial basis function classifiers, multilayer perceptrons, the neuro-fuzzy system NEFCLASS, decision trees, classifying fuzzy-k-means, support vector machines, Bayesian networks, and nearest neighbor classifiers. Then, we investigate the interpretability and understandability of the best paradigms found in the previous step. We show how structure optimization and feature selection for radial basis function classifiers can be done by means of evolutionary algorithms and compare this approach to decision trees optimized using certain pruning techniques. Finally, we demonstrate that radial basis function classifiers are basically able to detect novel attack types. The many advantageous properties of radial basis function classifiers could certainly be exploited in other application fields in a similar way.
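To make the classifier family discussed above concrete, here is a minimal sketch of a radial basis function network: Gaussian hidden units followed by a linear output layer fitted by least squares. The fixed centers, shared width, and toy data are simplifying assumptions for illustration, not the configuration used in the article.

```python
import numpy as np

# Minimal RBF network classifier sketch: Gaussian hidden units plus a
# regularized least-squares output layer. Centers, width, and data are
# illustrative assumptions only.

def rbf_features(X, centers, width):
    """Gaussian activations of each sample with respect to each center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_rbf_classifier(X, y_onehot, centers, width=1.0):
    """Fit output weights by regularized least squares."""
    Phi = rbf_features(X, centers, width)
    reg = 1e-6 * np.eye(Phi.shape[1])
    return np.linalg.solve(Phi.T @ Phi + reg, Phi.T @ y_onehot)

def predict(X, centers, width, W):
    return np.argmax(rbf_features(X, centers, width) @ W, axis=1)

# Example usage with toy 2-D data: two clusters, cluster means as RBF centers.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(2, 0.3, (20, 2))])
y = np.repeat([0, 1], 20)
centers = np.array([[0.0, 0.0], [2.0, 2.0]])
W = train_rbf_classifier(X, np.eye(2)[y], centers, width=1.0)
print((predict(X, centers, 1.0, W) == y).mean())   # training accuracy
```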
87.
We present a filter-and-refine method to speed up nearest neighbor searches with the Kullback–Leibler divergence for multivariate Gaussians. This combination of features and similarity estimation is of special interest in the field of automatic music recommendation, as it is widely used to compute music similarity. However, the non-vectorial features and a non-metric divergence make using it with large corpora difficult, as standard indexing algorithms cannot be used. This paper proposes a method for fast nearest neighbor retrieval in large databases which relies on the above approach. At its core, the method rescales the divergence and uses a modified FastMap implementation to speed up nearest-neighbor queries. Overall the method accelerates the search for similar music pieces by a factor of 10–30 and yields high recall values of 95–99% compared to a standard linear search.
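The similarity computation referred to above rests on the well-known closed-form Kullback–Leibler divergence between two multivariate Gaussians, sketched below. Only the standard formula is reproduced here; the paper's rescaling and FastMap filter-and-refine steps are not.

```python
import numpy as np

# Standard closed-form KL divergence between two multivariate Gaussians
# N(mu0, S0) and N(mu1, S1). This is the textbook formula only, not the
# paper's rescaled variant or its FastMap-based indexing.

def kl_gaussian(mu0, S0, mu1, S1):
    k = mu0.shape[0]
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    trace_term = np.trace(S1_inv @ S0)
    quad_term = diff @ S1_inv @ diff
    logdet_term = np.log(np.linalg.det(S1) / np.linalg.det(S0))
    return 0.5 * (trace_term + quad_term - k + logdet_term)

# Example with two 2-D Gaussians.
mu0, S0 = np.zeros(2), np.eye(2)
mu1, S1 = np.array([1.0, 0.0]), np.diag([2.0, 0.5])
print(kl_gaussian(mu0, S0, mu1, S1))
```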
88.
Social resource sharing systems are central elements of the Web 2.0 and use the same kind of lightweight knowledge representation, called folksonomy. Their large user communities and ever-growing networks of user-generated content have made them an attractive object of investigation for researchers from different disciplines like Social Network Analysis, Data Mining, Information Retrieval or Knowledge Discovery. In this paper, we summarize and extend our work on different aspects of this branch of Web 2.0 research, demonstrated and evaluated within our own social bookmark and publication sharing system BibSonomy, which is currently among the three most popular systems of its kind. We structure this presentation along the different interaction phases of a user with our system, coupling the relevant research questions of each phase with the corresponding implementation issues. This approach reveals in a systematic fashion important aspects and results of the broad bandwidth of folksonomy research like capturing of emergent semantics, spam detection, ranking algorithms, analogies to search engine log data, personalized tag recommendations and information extraction techniques. We conclude that when integrating a real-life application like BibSonomy into research, certain constraints have to be considered; but in general, the tight interplay between our scientific work and the running system has made BibSonomy a valuable platform for demonstrating and evaluating Web 2.0 research.
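To make the folksonomy structure concrete, the sketch below models a folksonomy as a set of (user, resource, tag) assignments and suggests tags by simple co-occurrence counting. This is a generic illustration only; it is not BibSonomy's actual data model or recommendation algorithm, and the example data are invented.

```python
from collections import Counter

# A folksonomy as (user, resource, tag) assignments, with a naive
# co-occurrence tag recommender. Illustrative toy data only.

assignments = [
    ("alice", "paper-1", "web2.0"),
    ("alice", "paper-1", "folksonomy"),
    ("bob",   "paper-1", "folksonomy"),
    ("bob",   "paper-2", "ranking"),
    ("carol", "paper-2", "folksonomy"),
]

def recommend_tags(resource, assignments, top_k=3):
    """Suggest the tags most often assigned to the given resource."""
    counts = Counter(tag for _, res, tag in assignments if res == resource)
    return [tag for tag, _ in counts.most_common(top_k)]

print(recommend_tags("paper-1", assignments))   # ['folksonomy', 'web2.0']
```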
89.
90.
Attribute selection with fuzzy decision reducts   Total citations: 2 (self-citations: 0, citations by others: 2)
Rough set theory provides a methodology for data analysis based on the approximation of concepts in information systems. It revolves around the notion of discernibility: the ability to distinguish between objects based on their attribute values. It makes it possible to infer data dependencies that are useful in the fields of feature selection and decision model construction. In many cases, however, it is more natural, and more effective, to consider a gradual notion of discernibility. Therefore, within the context of fuzzy rough set theory, we present a generalization of the classical rough set framework for data-based attribute selection and reduction using fuzzy tolerance relations. The paper unifies existing work in this direction and introduces the concept of fuzzy decision reducts, dependent on an increasing attribute subset measure. Experimental results demonstrate the potential of fuzzy decision reducts to discover shorter attribute subsets, leading to decision models with better coverage and with comparable, or even higher, accuracy.
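For the classical (crisp) notion of discernibility that the paper generalizes, a minimal sketch of the rough-set dependency degree (the fraction of objects in the positive region) is shown below. The toy decision table is purely illustrative; the fuzzy tolerance relations introduced in the paper are not reproduced here.

```python
# Classical rough-set dependency degree: the fraction of objects whose
# equivalence class (under the chosen attributes) maps to a single decision
# value. Toy decision table for illustration only.

def dependency_degree(table, attributes, decision):
    """gamma(attributes -> decision) = |positive region| / |universe|."""
    classes = {}
    for row in table:
        key = tuple(row[a] for a in attributes)
        classes.setdefault(key, []).append(row[decision])
    consistent = sum(len(v) for v in classes.values() if len(set(v)) == 1)
    return consistent / len(table)

toy_table = [
    {"a1": 0, "a2": 1, "d": "yes"},
    {"a1": 0, "a2": 1, "d": "yes"},
    {"a1": 1, "a2": 0, "d": "no"},
    {"a1": 1, "a2": 1, "d": "yes"},
]
print(dependency_degree(toy_table, ["a1"], "d"))        # 0.5
print(dependency_degree(toy_table, ["a1", "a2"], "d"))  # 1.0
```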