Sort order: 3,926 results retrieved (search time: 15 ms)
91.
The Lucas–Kanade tracker (LKT) is a commonly used method to track target objects over 2D images. The key principle behind the object tracking of an LKT is to warp the object appearance so as to minimize the difference between the warped object’s appearance and a pre-stored template. Accordingly, the 2D pose of the tracked object in terms of translation, rotation, and scaling can be recovered from the warping. To extend the LKT to 3D pose estimation, a model-based 3D LKT assumes a 3D geometric model for the target object in 3D space and tries to infer the 3D object motion by minimizing the difference between the projected 2D image of the 3D object and the pre-stored 2D image template. In this paper, we propose an extended model-based 3D LKT for estimating 3D head poses by tracking human heads in video sequences. In contrast to the original model-based 3D LKT, which uses a template with each pixel represented by a single intensity value, the proposed model-based 3D LKT exploits an adaptive template with each template pixel modeled by a continuously updated Gaussian distribution during head tracking. This probabilistic template modeling improves the tracker’s ability to handle temporal fluctuation of pixels caused by continuous environmental changes such as varying illumination and dynamic backgrounds. Due to the new probabilistic template modeling, we reformulate head pose estimation as a maximum likelihood estimation problem rather than the original difference-minimization procedure. Based on the new formulation, an algorithm to estimate the best head pose is derived. The experimental results show that the proposed extended model-based 3D LKT achieves higher accuracy and reliability than the conventional one. In particular, the proposed LKT is very effective in handling varying illumination, which the original LKT cannot handle well.
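The adaptive-template idea can be sketched as a per-pixel running Gaussian: each template pixel keeps a mean and a variance that are blended with new observations, and candidate poses are then scored by likelihood instead of squared difference. This is a minimal illustration; the update rate `alpha` and the variance floor `min_var` are illustrative assumptions, not values from the paper.

```python
import math

def update_gaussian_template(mean, var, pixel, alpha=0.05, min_var=1e-4):
    """Running update of one template pixel's Gaussian (mean, variance).

    alpha and min_var are illustrative choices, not the paper's values.
    """
    new_mean = (1 - alpha) * mean + alpha * pixel
    new_var = (1 - alpha) * var + alpha * (pixel - new_mean) ** 2
    return new_mean, max(new_var, min_var)

def log_likelihood(mean, var, pixel):
    """Log-likelihood of an observed pixel under the template Gaussian;
    summing this over pixels scores a candidate warp."""
    return -0.5 * (math.log(2 * math.pi * var) + (pixel - mean) ** 2 / var)
```

A pixel that drifts (e.g. under slowly changing illumination) pulls its template mean along with it, so later observations near the new intensity remain likely rather than being penalized as in a fixed-intensity template.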
92.
As one of the major steps toward a fully intelligent autonomous robotic weapon, we have made progress in three major areas: (1) design of the surveillance system with an AVR microcontroller, (2) implementation of the design mechanism, and (3) realization of the human–machine interface surveillance system in the LabVIEW graphical programming environment, so that the supervisor can control the vehicle with a keyboard or a specially adapted mouse. Accomplishing these goals required major additions and overhauls to both the system software and the system circuit boards. All of these developments, including a new algorithm and its hardware implementation, are described in this article. The experimental results show the practicality of the AVR microcontroller, the LabVIEW graphical programming environment, and ZigBee wireless technology applied to a robotic weapon.
93.
e-Learning tools have profoundly transformed modern pedagogical approaches. Vendors provide different types of systems, such as self-paced (SP) and instructor–student interactive (ISI) e-Learning tools. Although both types of tools represent promising solutions to facilitate the learning process, it is important to theoretically identify a framework to evaluate the success of these tools and assess whether one type of tool is more effective than another. Toward this end, we (1) propose a model to evaluate e-Learning tools’ success by extending and contextualizing Seddon’s information systems (IS) success model for the e-Learning environment and (2) formulate four hypotheses to predict the differences in the success factors between SP and ISI tools. We test the model and hypotheses using data from 783 students across seven higher education institutions in Hong Kong. The results support the proposed e-Learning tool success model and three of the four hypotheses. ISI tools outperform SP tools in terms of system quality, perceived usefulness, satisfaction, and learning outcome.
94.
This paper presents a knowledge exchange framework that can leverage the interoperability among semantically heterogeneous learning objects. With the release of various e-Learning standards, learning content and digital courses can readily be shared, exchanged, and even reorganized across platforms. However, knowledge sharing at the semantic level remains a challenge because learning materials can be presented in any form, such as audio, video, web pages, and even Flash files. The proposed knowledge exchange framework allows users to share their learning materials (also called “learning objects”) at the semantic level automatically. This framework contains two methodologies: the first is a semantic mapping between knowledge bases (i.e., ontologies) that have essentially similar concepts, and the second is an ontology-based classification algorithm for sharable learning objects. The proposed algorithm adopts the IMS DRI standard and classifies sharable learning objects from heterogeneous repositories into a local knowledge base by their inner meaning instead of by keyword matching. The significance of this research lies in the semantic inference rules for ontology mapping and learning-object classification, as well as in the fully automatic processing and self-optimizing capability. Focused on digital learning materials, and in contrast to other traditional technologies, the proposed approach has experimentally demonstrated significant improvement in performance.
95.
Feature selection aims at finding the most relevant features of a problem domain. It is very helpful in improving computational speed and prediction accuracy. However, identifying useful features from hundreds or even thousands of related features is a nontrivial task. In this paper, we introduce a hybrid feature selection method which combines two families of feature selection methods: filters and wrappers. Candidate features are first selected from the original feature set via computationally efficient filters. The candidate feature set is then further refined by more accurate wrappers. This hybrid mechanism takes advantage of both approaches. The mechanism is examined on two bioinformatics problems, namely protein disordered-region prediction and gene selection in microarray cancer data. Experimental results show that equal or better prediction accuracy can be achieved with a smaller feature set, and that these feature subsets can be obtained in a reasonable time.
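The filter-then-wrapper pipeline can be illustrated as follows (a generic sketch, not the paper's specific filters or wrappers): the filter stage ranks features by a cheap relevance score and keeps the top few; the wrapper stage then exhaustively scores small subsets of the survivors using a user-supplied accuracy estimate, such as cross-validated classifier accuracy.

```python
from itertools import combinations

def filter_stage(X, y, k):
    """Filter: rank features by absolute Pearson correlation with y
    and keep the top k. X is a list of rows (pure Python, for clarity)."""
    n, d = len(X), len(X[0])
    def corr(j):
        col = [row[j] for row in X]
        mx, my = sum(col) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(col, y))
        vx = sum((a - mx) ** 2 for a in col) ** 0.5
        vy = sum((b - my) ** 2 for b in y) ** 0.5
        return abs(cov / (vx * vy)) if vx and vy else 0.0
    return sorted(range(d), key=corr, reverse=True)[:k]

def wrapper_stage(candidates, score, max_size=3):
    """Wrapper: exhaustively score small subsets of the filtered
    candidates; `score` is a user-supplied accuracy estimator."""
    best, best_score = (), float("-inf")
    for r in range(1, max_size + 1):
        for subset in combinations(candidates, r):
            s = score(subset)
            if s > best_score:
                best, best_score = subset, s
    return list(best)
```

The filter keeps the wrapper's search space small, which is the source of the hybrid's speed advantage: the expensive `score` function is called only on subsets drawn from the k survivors instead of from all d features.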
96.
To estimate a summarized dose–response relation across different exposure levels from epidemiologic data, meta-analysis often needs to take into account heterogeneity across studies beyond the variation associated with fixed effects. We extended a generalized-least-squares method and a multivariate maximum likelihood method to estimate the summarized nonlinear dose–response relation taking into account random effects. These methods are readily suited to fitting and testing models with covariates and curvilinear dose–response relations.
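As a simplified illustration of the least-squares side of such an analysis, the sketch below fits a quadratic dose–response curve log RR = b1·d + b2·d² (no intercept, since log RR = 0 at the reference dose) with inverse-variance weights. This is the diagonal-weight, fixed-effects special case only; the paper's contribution (random effects and the full covariance structure across correlated log relative risks) is not shown.

```python
def gls_dose_response(doses, logrr, variances):
    """Weighted least-squares fit of log RR = b1*d + b2*d**2 with
    diagonal weights 1/variance. A fixed-effects sketch, not the
    paper's random-effects method."""
    w = [1.0 / v for v in variances]
    # Normal equations (X'WX)b = X'Wy for design matrix X = [d, d^2]
    a11 = sum(wi * d ** 2 for wi, d in zip(w, doses))
    a12 = sum(wi * d ** 3 for wi, d in zip(w, doses))
    a22 = sum(wi * d ** 4 for wi, d in zip(w, doses))
    c1 = sum(wi * d * y for wi, d, y in zip(w, doses, logrr))
    c2 = sum(wi * d ** 2 * y for wi, d, y in zip(w, doses, logrr))
    det = a11 * a22 - a12 * a12
    b1 = (c1 * a22 - c2 * a12) / det
    b2 = (a11 * c2 - a12 * c1) / det
    return b1, b2
```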
97.
In this paper, a novel clustering method in the kernel space is proposed. It effectively integrates several existing algorithms into an iterative clustering scheme that can handle clusters with arbitrary shapes. In our proposed approach, a reasonable initial core for each cluster is estimated. This allows us to adopt a cluster-growing technique, and the growing cores offer partial hints on the cluster association. Consequently, methods used for classification, such as support vector machines (SVMs), can be useful in our approach. To obtain initial clusters effectively, the notion of the incomplete Cholesky decomposition is adopted so that fuzzy c-means (FCM) can be used to partition the data in an approximation of the kernel-defined feature space. Then a one-class and a multiclass soft-margin SVM are adopted to detect the data within the main distributions (the cores) of the clusters and to repartition the data into new clusters iteratively. The structure of the data set is explored by pruning the data in the low-density regions of the clusters. Data are then gradually added back to the main distributions to assure exact cluster boundaries. Unlike the ordinary SVM algorithm, whose performance relies heavily on the kernel parameters given by the user, in our approach the parameters are estimated naturally from the data set. Experimental evaluations on two synthetic data sets and four University of California, Irvine real-data benchmarks indicate that the proposed algorithms outperform several popular clustering algorithms, such as FCM, support vector clustering (SVC), hierarchical clustering (HC), self-organizing maps (SOM), and non-Euclidean-norm fuzzy c-means (NEFCM).
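One common data-driven way to set a Gaussian-kernel width without user tuning is the median heuristic, shown below. This is an illustrative stand-in for the abstract's claim that "the parameters are estimated naturally from the data set," not the paper's actual estimation formula.

```python
def median_heuristic_sigma(points):
    """Median pairwise Euclidean distance over the data set, a common
    data-driven choice for the Gaussian-kernel width sigma."""
    dists = []
    for i in range(len(points)):
        for j in range(i + 1, len(points)):
            d = sum((a - b) ** 2
                    for a, b in zip(points[i], points[j])) ** 0.5
            dists.append(d)
    dists.sort()
    m = len(dists)
    if m % 2:
        return dists[m // 2]
    return 0.5 * (dists[m // 2 - 1] + dists[m // 2])
```

Because the width tracks the typical inter-point distance, the resulting kernel adapts to the scale of the data rather than requiring the user to guess it.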
98.
The star graph is viewed as an attractive alternative to the hypercube. In this paper, we investigate the Hamiltonicity of an n-dimensional star graph. We show that for any n-dimensional star graph (n≥4) with at most 3n−10 faulty edges in which each node is incident with at least two fault-free edges, there exists a fault-free Hamiltonian cycle. Our result improves on the previously best known result for the case where the number of tolerable faulty edges is bounded by 2n−7. We also demonstrate that our result is optimal with respect to the worst case scenario, where every other node of a cycle of length 6 is incident with exactly n−3 faulty noncycle edges.
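For reference, the n-dimensional star graph S_n has n! vertices, the permutations of 1..n, each of degree n−1: two permutations are adjacent iff one is obtained from the other by swapping the first symbol with the symbol in some position i (2 ≤ i ≤ n). A minimal construction sketch:

```python
from itertools import permutations

def star_graph(n):
    """Adjacency lists of the n-dimensional star graph S_n.

    Vertices are permutations of 1..n; each neighbor swaps the first
    symbol with the symbol in position i (1-indexed i from 2 to n).
    """
    adj = {}
    for p in permutations(range(1, n + 1)):
        nbrs = []
        for i in range(1, n):
            q = list(p)
            q[0], q[i] = q[i], q[0]
            nbrs.append(tuple(q))
        adj[p] = nbrs
    return adj
```

For n = 4 this gives 24 vertices of degree 3, so the paper's bound of 3n−10 = 2 tolerable faulty edges already exceeds the previous bound of 2n−7 = 1.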
99.
Although the contract net protocol answers some of the questions in cooperative distributed problem solving (CDPS), it raises many others that CDPS researchers are still trying to answer. In the contract net protocol, an agent may play the role of a manager or a bidder. Without a coordination mechanism, a manager may acquire excessive resources from the bidders in forming a collaborative network to execute the assigned task, and thus hinder the progress of tasks assigned to other managers due to resource contention. As a result, applying the contract net protocol may not always lead to feasible solutions that accomplish tasks effectively. As a general framework for exchanging messages, the original contract net protocol does not prescribe how agents should cooperate; how to develop a collaborative mechanism that performs tasks effectively is therefore an important issue. This paper aims to remedy this insufficiency of the contract net by developing a mechanism that facilitates cooperation among agents to accomplish their tasks while avoiding undesirable states and enhancing overall system performance in manufacturing systems. To achieve these objectives, detailed process models of how agents accomplish their tasks are required. Because of their advantages in modeling concurrent, synchronous, and/or asynchronous activities, Petri nets are adopted in this paper. Based on Petri net models, we study the information agents need to make cooperative decisions, the mechanism that makes agents cooperate, and how to enhance performance at the system level by taking advantage of the agents’ cooperation capabilities. To characterize the condition for cooperation, we represent the collaborative networks formed under the contract net protocol with Petri nets and then find the condition for a collaborative network to be feasible. This feasibility condition also serves as the basis for developing the cooperation mechanism for managers. We propose a cooperation mechanism based on the idea of resource donation, including unilateral resource donation and reciprocal resource donation. An implementation architecture is also proposed to realize our methodology.
100.
The advent of the Internet has led to significant growth in the amount of information available, resulting in information overload: individuals have too much information to make a decision. To resolve this problem, collaborative tagging systems form a categorization called a folksonomy in order to organize web resources. A folksonomy aggregates the results of personal free tagging of information and objects to form a categorization structure that utilizes the collective intelligence of crowds. A folksonomy is more appropriate for organizing huge amounts of information on the Web than traditional taxonomies established by expert cataloguers. However, the attributes of collaborative tagging systems and their folksonomies make them impractical for organizing resources in personal environments. This work designs a desktop collaborative tagging (DCT) system that enables collaborative workers to tag their documents, and proposes an application of the DCT system to patent analysis. The folksonomy in DCT is built by aggregating personal tagging results and is represented by a concept space. Concept spaces provide synonym control, tag recommendation, and relevant search. Additionally, to protect the privacy of authors and to decrease transmission cost, relations between tagged and untagged documents are constructed by extracting each document’s features rather than using the full text. Experimental results reveal that the adoption rate of recommended tags for new documents increases by 10% after users have tagged five or six documents. Furthermore, DCT can recommend tags with higher adoption rates when given new documents whose topics are similar to previously tagged ones. The relevant search in DCT is observed to be superior to keyword search when frequently used tags are adopted as queries: the average precision, recall, and F-measure of DCT are 12.12%, 23.08%, and 26.92% higher than those of keyword search. DCT allows a multi-faceted categorization of resources for collaborative workers and recommends tags for categorizing resources, making categorization easier. Additionally, the DCT system provides relevance search, which is more effective than traditional keyword search for personal resources.
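The reported precision, recall, and F-measure figures follow the standard retrieval definitions, which can be computed per query as below (a generic sketch; the evaluation protocol details are the paper's own):

```python
def precision_recall_f1(retrieved, relevant):
    """Standard retrieval metrics for one query; `retrieved` and
    `relevant` are sets of document ids."""
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```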