  Subscription full text   5,844 articles
  Free access   454 articles
  Free (domestic)   12 articles
By subject:
  Electrical engineering   73 articles
  General   3 articles
  Chemical industry   1,592 articles
  Metalworking   74 articles
  Machinery & instrumentation   165 articles
  Building science   223 articles
  Mining engineering   10 articles
  Energy & power   238 articles
  Light industry   964 articles
  Water resources engineering   76 articles
  Petroleum & natural gas   30 articles
  Radio & electronics   391 articles
  General industrial technology   913 articles
  Metallurgy   315 articles
  Nuclear technology   30 articles
  Automation & computer technology   1,213 articles
By year (articles):
  2024: 13    2023: 45    2022: 147   2021: 265   2020: 174
  2019: 201   2018: 247   2017: 211   2016: 278   2015: 206
  2014: 285   2013: 491   2012: 405   2011: 472   2010: 317
  2009: 359   2008: 316   2007: 279   2006: 235   2005: 191
  2004: 155   2003: 126   2002: 135   2001: 63    2000: 62
  1999: 50    1998: 112   1997: 66    1996: 56    1995: 43
  1994: 40    1993: 35    1992: 24    1991: 24    1990: 21
  1989: 14    1988: 16    1987: 17    1986: 11    1985: 11
  1984: 19    1983: 9     1982: 12    1980: 12    1979: 4
  1978: 6     1977: 6     1976: 9     1975: 5     1974: 4
Sort order: 6,310 results found (search time: 46 ms)
111.
In this article, the authors compare offshore outsourcing with the internal offshoring of software development. Empirical evidence is presented from a case study conducted at five companies. Based on a detailed literature review, a framework was developed that guided the authors' analysis of the differences in the challenges faced by the companies and of the patterns of evolution in software development practice under each business model.
112.
Supporting holistic operations within a horizontal digital administration requires a number of preliminary steps that guarantee the viability of services in the domain. One of these is the proper management of documents. Documents are a key element of any democratic administration, and their digital management is a clear prerequisite for digital government. This article addresses how this service can be provided with semantics as its technological cornerstone. The approach is implemented in a tool called cPortfolio. The platform is discussed in depth, and details of its design and implementation are provided. The system is designed to manage both citizens' personal information and the documents they possess. Tests on the prototype showed promising results regarding ease of use and the interoperability support provided to third-party agents.
113.
The discrete evolutionary transform (DET) has usually been applied to signals blindly, without using any parameters that characterize the signal. For this reason it is not optimal and can be improved by exploiting information about the signal. In this paper, we propose an improvement to the discrete evolutionary transform that yields a sparse representation, and we redefine the generalized time-bandwidth product optimal short-time Fourier transform as a special case of it. For linear FM-type signals, the optimized kernel function of the transform is determined from signal parameters, including the instantaneous frequency. The performance of the adaptive DET is illustrated in three distinct cases. For multi-component LFM signals, comparing the concentration of the proposed distribution with that of the ordinary sinusoidal DET gives an improvement of 28% in terms of the ratio of norms. Furthermore, we define a new and general class of distribution functions, the short-time generalized discrete Fourier transform (ST-GDFT), a larger set of signal representations that includes the adaptive DET.
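For readers unfamiliar with the ratio-of-norms figure of merit quoted above, the sketch below shows how such a concentration measure can be computed for a time-frequency representation of a chirp, and why matching the analysis kernel to the signal's sweep rate improves it. It is only an illustration: it uses an ordinary STFT with a chirp-demodulation step as a stand-in for the adaptive kernel, and the signal parameters (fs, f0, rate) are invented, not taken from the paper.

```python
# A minimal sketch (not the authors' implementation) of the "ratio of norms"
# concentration measure used to compare time-frequency representations of a
# linear-FM (chirp) signal. Requires only NumPy and SciPy.
import numpy as np
from scipy.signal import stft

fs = 1000.0                     # sampling rate (Hz), illustrative value
t = np.arange(0, 1.0, 1.0 / fs)
f0, rate = 50.0, 200.0          # chirp start frequency and sweep rate (assumed)
x = np.exp(1j * 2 * np.pi * (f0 * t + 0.5 * rate * t**2))  # LFM signal

def ratio_of_norms(tfd):
    """L4/L2 norm ratio of a time-frequency distribution;
    larger values indicate a more concentrated representation."""
    m = np.abs(tfd)
    return (np.sum(m**4) ** 0.25) / (np.sum(m**2) ** 0.5)

# Ordinary short-time representation (analogous to a sinusoidal-kernel transform).
_, _, S_plain = stft(x, fs=fs, nperseg=128, return_onesided=False)

# "Matched" variant: demodulate with the known chirp rate before the STFT,
# mimicking an analysis kernel adapted to the signal's instantaneous frequency.
x_demod = x * np.exp(-1j * np.pi * rate * t**2)
_, _, S_adapt = stft(x_demod, fs=fs, nperseg=128, return_onesided=False)

print("plain concentration:         ", ratio_of_norms(S_plain))
print("matched-kernel concentration:", ratio_of_norms(S_adapt))
```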
114.
Automated airborne collision-detection systems are a key enabling technology for facilitating the integration of unmanned aerial vehicles (UAVs) into the national airspace. These safety-critical systems must be sensitive enough to provide timely warnings of genuine airborne collision threats, but not so sensitive as to cause excessive false alarms. Hence, an accurate characterization of detection and false-alarm sensitivity is essential for understanding performance tradeoffs, and system designers can exploit this characterization to achieve a desired balance in system performance. In this paper, we experimentally evaluate a sky-region, image-based aircraft collision-detection system based on morphological and temporal processing techniques. (Note that the examined detection approaches are not suitable for detecting potential collision threats against a ground-clutter background.) A novel methodology for collecting realistic airborne collision-course target footage in both head-on and tail-chase engagement geometries is described. Under (hazy) blue-sky conditions, the proposed system achieved detection ranges greater than 1540 m in three flight test cases with no false-alarm events in 14.14 h of nontarget data (under cloudy conditions, it achieved detection ranges greater than 1170 m in four flight test cases with no false-alarm events in 6.63 h of nontarget data). Importantly, this paper is the first documented presentation of detection-range versus false-alarm curves generated from airborne target and nontarget image data. © 2012 Wiley Periodicals, Inc.
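The morphological processing referred to above is often realized, in this class of detectors, with a close-minus-open (CMO) filter that suppresses smooth sky background and emphasizes small, point-like targets. The sketch below is a generic illustration of that idea under invented parameters (kernel size, threshold, synthetic image); it is not the evaluated system, which additionally relies on temporal filtering across frames.

```python
# A minimal sketch, not the system described above, of close-minus-open (CMO)
# morphological filtering for highlighting small targets against a smooth sky.
# Image, structuring-element size, and threshold are illustrative assumptions.
import numpy as np
from scipy import ndimage

def cmo_filter(frame, size=5):
    """Close-minus-open filter: emphasises structures smaller than the
    structuring element while suppressing smooth background."""
    closed = ndimage.grey_closing(frame, size=(size, size))
    opened = ndimage.grey_opening(frame, size=(size, size))
    return closed - opened

def detect_candidates(frame, size=5, k_sigma=5.0):
    """Threshold the CMO response; returns a boolean candidate mask.
    A temporal stage (e.g. track-before-detect across frames) would
    normally follow to reject transient false alarms."""
    response = cmo_filter(frame.astype(np.float64), size)
    thr = response.mean() + k_sigma * response.std()
    return response > thr

# Synthetic example: nearly flat sky with a faint 2x2 "target".
sky = np.full((120, 160), 180.0) + np.random.normal(0, 1.0, (120, 160))
sky[60:62, 80:82] += 12.0
mask = detect_candidates(sky)
print("candidate pixels:", int(mask.sum()))
```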
115.
A fault detection and correction methodology for personal positioning systems in outdoor environments is presented. We demonstrate its successful use in a system consisting of a global positioning system receiver and an inertial measurement unit. Localization is based on dead reckoning. To obtain more reliable information from the data fusion, which is carried out with Kalman filtering, the proposed methodology involves (1) evaluating the information provided by the sensors and (2) making the filtering adaptive. By carefully analyzing these factors, we detect faults both in the different sources of information and in the filtering itself, which allows corrections to be applied whenever the system requires them. The methodology therefore consists of two stages. In the first stage, the evaluation is conducted: we apply the principles of causal diagnosis using possibility theory, defining states for normal behavior and for faults; when a fault occurs, corrective measures are applied according to empirical knowledge. In the second stage, a consistency test of the filtering is performed; if the filter is inconsistent, principles of adaptive Kalman filtering are applied, i.e., the process and measurement noise matrices are tuned. Our results indicate a reasonable improvement in the estimated trajectory. At the same time, consistent filtering is achieved, yielding a more robust system and more reliable information.
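As a concrete, simplified illustration of the "consistency test plus adaptive tuning" step, the sketch below gates the normalized innovation squared (NIS) of a toy one-dimensional Kalman filter against a chi-square threshold and inflates the measurement-noise covariance R when the test fails. The model, gate value, and inflation rule are assumptions for demonstration only and do not reproduce the authors' GPS/IMU fusion or the possibility-theoretic diagnosis stage.

```python
# A minimal sketch, under simplifying assumptions, of an innovation-based
# consistency test for a Kalman filter with a crude adaptive correction:
# if the normalized innovation squared (NIS) exceeds a chi-square gate,
# the measurement-noise covariance R is inflated before the update.
import numpy as np

def kf_step(x, P, z, F, H, Q, R, gate=6.63):   # 6.63 ~ chi2(1 dof) at 99%
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Innovation and its covariance
    y = z - H @ x
    S = H @ P @ H.T + R
    nis = float(y.T @ np.linalg.inv(S) @ y)
    if nis > gate:
        # Inconsistent: inflate R (simple adaptive measure) and recompute S.
        R = R * (nis / gate)
        S = H @ P @ H.T + R
    # Update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P, R, nis

# Toy 1D constant-position example with a single faulty measurement at k=10.
F = np.array([[1.0]]); H = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[1.0]])
x = np.array([0.0]); P = np.array([[1.0]])
rng = np.random.default_rng(0)
for k in range(20):
    z = np.array([rng.normal(0.0, 1.0) + (25.0 if k == 10 else 0.0)])
    x, P, R, nis = kf_step(x, P, z, F, H, Q, R)
    print(f"k={k:2d}  NIS={nis:7.2f}  R={R[0, 0]:.2f}  x={x[0]:+.2f}")
```

In a full system, the same gating logic would feed the fault-diagnosis stage, so that a persistent inconsistency triggers a sensor-level correction rather than only a covariance adjustment.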
116.
Over the last decades the Web has become the greatest repository of digital information, and several text categorization methods have been developed to organize it, achieving accurate results in most cases and in very different domains. With the rise of the Internet as a communication medium, short texts such as news items, tweets, blog posts, and product reviews become more common every day. In this context there are two main challenges: on the one hand, these documents are short, so word frequencies are not informative enough, making text categorization even harder than usual; on the other hand, topics change constantly and quickly, causing a lack of adequate amounts of training data. To deal with these two problems, we consider a text classification method built on the idea that similar documents may belong to the same category. Specifically, we propose a neighborhood consensus classification method that classifies documents by considering their own information as well as information about the categories assigned to other similar documents from the same target collection. The short texts used in our evaluation are news titles with an average length of 8 words. Experimental results are encouraging; they indicate that leveraging information from similar documents helps to improve classification accuracy, and that the proposed method is especially useful when labeled training resources are limited.
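The following sketch illustrates the general neighborhood-consensus idea with invented data: a base classifier scores each short text, and the final decision mixes that score with the scores of the most similar documents from the same target collection. The features, classifier, neighborhood size k, and mixing weight alpha are all illustrative assumptions, not the configuration used in the paper.

```python
# A minimal sketch of a "neighborhood consensus" style classifier for short
# texts: each target document is classified from its own evidence plus the
# evidence of its nearest neighbors within the target collection.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

train_texts = ["stocks fall on rate fears", "team wins championship final",
               "new phone model released", "league announces season schedule"]
train_labels = ["business", "sports", "tech", "sports"]
target_texts = ["rates rise and markets drop", "final match ends in victory",
                "markets react to rate decision"]

vec = TfidfVectorizer().fit(train_texts + target_texts)
Xtr, Xtg = vec.transform(train_texts), vec.transform(target_texts)

base = LogisticRegression(max_iter=1000).fit(Xtr, train_labels)
proba = base.predict_proba(Xtg)                  # each document's own evidence

sim = cosine_similarity(Xtg)                     # similarity within target set
np.fill_diagonal(sim, 0.0)
k, alpha = 1, 0.5                                # neighborhood size, mixing weight
consensus = np.zeros_like(proba)
for i in range(sim.shape[0]):
    neighbors = np.argsort(sim[i])[::-1][:k]     # most similar target documents
    consensus[i] = proba[neighbors].mean(axis=0)

final = alpha * proba + (1 - alpha) * consensus  # own + neighborhood evidence
print([base.classes_[j] for j in final.argmax(axis=1)])
```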
117.
Hierarchical clustering is a stepwise clustering method usually based on proximity measures between objects or sets of objects from a given data set; the most common proximity measures are distances. The derived proximity matrices can be used to build graphs, which provide the basic structure for some clustering methods. We present a new proximity matrix based on an entropic measure, together with a clustering algorithm (LEGClust) that builds layers of subgraphs from this matrix and combines them with a hierarchical agglomerative clustering technique to form the clusters. Our approach thus capitalizes on both a graph structure and a hierarchical construction. Moreover, by using entropy as a proximity measure we are able to capture the local structure of the data without any assumption about cluster shapes, forcing the clustering method to reflect this structure. Several experiments on artificial and real data sets provide evidence of the superior performance of this new algorithm compared with competing ones.
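To make the pipeline concrete, the toy sketch below derives a pairwise proximity from an entropy estimate (here, the Renyi quadratic entropy of the joint k-neighborhoods of two points via a Gaussian Parzen window, which is a simplifying assumption rather than the paper's definition) and feeds the resulting matrix into standard hierarchical agglomerative clustering. It conveys the spirit of entropy-driven proximities, not the LEGClust algorithm itself.

```python
# A toy sketch, not LEGClust: (1) build an entropy-based pairwise proximity,
# (2) run hierarchical agglomerative clustering on that proximity matrix.
import numpy as np
from scipy.spatial.distance import cdist, squareform
from scipy.cluster.hierarchy import linkage, fcluster

def renyi_h2(points, sigma=0.5):
    """Parzen-window estimate of the Renyi quadratic entropy of a point set."""
    d2 = cdist(points, points, "sqeuclidean")
    ip = np.mean(np.exp(-d2 / (4 * sigma**2)))   # information potential
    return -np.log(ip)

def entropic_proximity(X, k=5, sigma=0.5):
    """Proximity(i, j) = entropy of the union of the k-neighborhoods of i and j.
    Lower entropy -> the two points share a compact local structure."""
    n = len(X)
    nn = np.argsort(cdist(X, X), axis=1)[:, :k]  # k nearest (including self)
    P = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            joint = X[np.union1d(nn[i], nn[j])]
            P[i, j] = P[j, i] = renyi_h2(joint, sigma)
    return P

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
P = entropic_proximity(X)
labels = fcluster(linkage(squareform(P, checks=False), method="average"),
                  t=2, criterion="maxclust")
print(labels)
```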
118.
In this paper we discuss models and methods for solving the rooted distance-constrained minimum spanning tree problem, defined as follows: given a graph G = (V, E) with node set V = {0, 1, ..., n} and edge set E, two integer weights for each edge e of E, a cost c_e and a delay w_e, and a natural number (time limit) H, we wish to find a spanning tree T of minimum total cost such that the unique path from a specified root node, node 0, to any other node has total delay not greater than H. This problem generalizes the well-known hop-constrained spanning tree problem and arises in the design of centralized networks with quality-of-service constraints, as well as in package shipment with service-guarantee constraints. We present three theoretically equivalent modeling approaches: a column generation scheme and a Lagrangian relaxation combined with a subgradient optimization procedure, both based on a path formulation of the problem, and a shortest-path (compact) reformulation that views the underlying subproblem as defined on a layered extended graph. We report results for complete graph instances with up to 40 nodes. The results indicate that the layered-graph path reformulation remains quite good when the arc weights are reasonably small, while the Lagrangian relaxation with subgradient optimization appears to work much better than column generation and is a quite reasonable approach for large-weight, and even small-weight, instances.
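A small sketch of the layered-graph idea mentioned above: a state (v, h) means "node v reached with accumulated delay h ≤ H", and a delay-constrained least-cost path from the root is a shortest path over these states. This dynamic program is the kind of subproblem that appears inside the path-based schemes; the instance below is an invented toy graph, and the code is not the paper's implementation.

```python
# A minimal sketch of the layered-graph view: compute, for every node, the
# minimum-cost path from the root whose total delay does not exceed H.
import math

# edges: (u, v, cost c_e, delay w_e); undirected toy instance, root = node 0
edges = [(0, 1, 4, 2), (0, 2, 1, 3), (1, 2, 1, 1),
         (1, 3, 3, 2), (2, 3, 6, 1)]
n, H = 4, 4

adj = {v: [] for v in range(n)}
for u, v, c, w in edges:
    adj[u].append((v, c, w))
    adj[v].append((u, c, w))

# best[v][h] = minimum cost to reach v from the root with total delay exactly h
best = [[math.inf] * (H + 1) for _ in range(n)]
best[0][0] = 0
for h in range(H + 1):              # delays are positive integers, so states in
    for u in range(n):              # layer h are final before layer h+1 is built
        if best[u][h] == math.inf:
            continue
        for v, c, w in adj[u]:
            if h + w <= H and best[u][h] + c < best[v][h + w]:
                best[v][h + w] = best[u][h] + c

for v in range(1, n):
    print(f"node {v}: min-cost path with delay <= {H} costs {min(best[v])}")
```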
119.
This research introduces a new optimality criterion for motion planning of wheeled mobile robots, based on a cost index that assesses the nearness to singularity of the forward and inverse kinematic models. Slip motions, infinite estimation errors, and impossible control actions are avoided by escaping from singularities, while high amplification of wheel velocity errors and high wheel velocities are avoided by staying far from them. The proposed cost index can be used directly to complement path-planning and motion-planning techniques (e.g., tree graphs, roadmaps) in order to select the optimal collision-free path or trajectory among several possible solutions. To illustrate the approach, an industrial forklift, equivalent to a tricycle-like mobile robot, is considered in a simulated environment. Several results are validated for the proposed optimality criterion and extensively compared with those obtained with classical criteria such as shortest path, minimum time, and minimum energy.
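To illustrate how a singularity-nearness cost index can rank candidate motions, the sketch below uses the condition number of a simplified tricycle kinematic Jacobian and selects, among two collision-free candidate trajectories, the one whose worst-case index is smallest. The Jacobian, wheelbase, and trajectories are illustrative assumptions and do not correspond to the forklift model used in the paper.

```python
# An illustrative sketch, not the paper's formulation: a singularity-nearness
# cost index (Jacobian condition number) used to pick the candidate trajectory
# that stays farthest from singular configurations.
import numpy as np

L = 1.2  # wheelbase (m), illustrative value

def jacobian(v_w, delta):
    """Jacobian of (drive-wheel speed v_w, steering angle delta) -> (v, omega)
    for a simplified tricycle model: v = v_w*cos(delta), omega = v_w*sin(delta)/L."""
    return np.array([[np.cos(delta),      -v_w * np.sin(delta)],
                     [np.sin(delta) / L,   v_w * np.cos(delta) / L]])

def singularity_cost(v_w, delta):
    """Condition number of the kinematic Jacobian: large values mean the
    configuration is close to singular (errors are strongly amplified)."""
    return np.linalg.cond(jacobian(v_w, delta))

def worst_case_cost(trajectory):
    """Cost index of a whole trajectory = its worst (largest) pointwise cost."""
    return max(singularity_cost(v_w, delta) for v_w, delta in trajectory)

# Two collision-free candidate trajectories as (v_w, delta) samples;
# candidate B passes close to a kinematic singularity (very low wheel speed).
traj_A = [(1.0, 0.1), (1.0, 0.3), (0.8, 0.2)]
traj_B = [(1.0, 0.1), (0.05, 1.2), (0.8, 0.2)]

for name, traj in [("A", traj_A), ("B", traj_B)]:
    print(f"trajectory {name}: worst-case cost index = {worst_case_cost(traj):.1f}")
print("selected:", min([("A", traj_A), ("B", traj_B)],
                       key=lambda p: worst_case_cost(p[1]))[0])
```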
120.
In this paper, we propose the use of a multiobjective evolutionary approach to generate a set of linguistic fuzzy-rule-based systems with different tradeoffs between accuracy and interpretability in regression problems. Accuracy and interpretability are measured in terms of approximation error and rule base (RB) complexity, respectively. The proposed approach concurrently learns the RBs and the parameters of the membership functions of the associated linguistic labels. To manage the size of the search space, we integrate the linguistic two-tuple representation model, which allows the symbolic translation of a label to be expressed with a single parameter, with an efficient modification of the well-known (2 + 2) Pareto archived evolution strategy (PAES). We tested our approach on nine real-world datasets of different sizes and with different numbers of variables. Besides the (2 + 2)PAES, we also used the well-known nondominated sorting genetic algorithm (NSGA-II) and an accuracy-driven single-objective evolutionary algorithm (EA), employing these optimization techniques both to concurrently learn rules and parameters and to learn rules only. We compared the different approaches by applying a nonparametric statistical test for pairwise comparisons, taking into consideration three representative points from the obtained Pareto fronts in the case of the multiobjective EAs. Finally, a data complexity measure, typically used in pattern recognition to evaluate data density in terms of the average number of patterns per variable, is introduced to characterize regression problems. The results confirm the effectiveness of our approach, particularly for (possibly high-dimensional) datasets with high values of the complexity metric.
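The accuracy-interpretability trade-off at the heart of this approach can be pictured with a small, self-contained sketch: each candidate rule base is reduced to two objectives to be minimized (approximation error and number of rules), and the nondominated (Pareto-optimal) candidates are extracted. The candidate values below are invented; the sketch shows Pareto dominance only, not (2 + 2)PAES or NSGA-II.

```python
# A small sketch, independent of the paper's algorithms, of extracting the
# Pareto front over (approximation error, rule-base complexity) pairs.
from typing import List, Tuple

Candidate = Tuple[str, float, int]   # (name, mean squared error, number of rules)

def dominates(a: Candidate, b: Candidate) -> bool:
    """a dominates b if it is no worse in both objectives and better in at least one."""
    return (a[1] <= b[1] and a[2] <= b[2]) and (a[1] < b[1] or a[2] < b[2])

def pareto_front(cands: List[Candidate]) -> List[Candidate]:
    return [c for c in cands
            if not any(dominates(other, c) for other in cands if other is not c)]

candidates = [("RB1", 0.42, 5), ("RB2", 0.35, 9), ("RB3", 0.35, 12),
              ("RB4", 0.28, 18), ("RB5", 0.50, 4), ("RB6", 0.27, 30)]

for name, err, rules in sorted(pareto_front(candidates), key=lambda c: c[2]):
    print(f"{name}: error={err:.2f}, rules={rules}")
```

A multiobjective EA maintains and refines exactly such a nondominated archive while searching over rule bases and membership-function parameters.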