101.
Once segmentation of 3D surface data of a rock pile has been performed, the next task is to determine the visibility of the surface rocks. A region boundary-following algorithm that accommodates irregularly spaced 3D coordinate data is presented for determining this visibility. We examine 3D surface segmentations of laboratory rock piles and determine which regions in the segmentation correspond to entirely visible rocks and which correspond to overlapped, partially visible rocks. This distinction is significant: it allows accurate size determination of entirely visible rocks, permits separate handling of partially visible rocks, and prevents the erroneous bias that results from treating partially visible rocks as smaller, entirely visible ones. A literature review indicates that other rock pile sizing techniques fail to make this distinction. The rock visibility results are quantified by comparison with manual surface classifications of the laboratory piles, and the size results are quantified by comparison with sieve sizing.
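The abstract does not give the boundary-following algorithm for irregularly spaced 3D data itself; as a rough illustration of the general idea only, the sketch below extracts the boundary pixels of each labelled region of a segmentation on a regular 2D grid. The function name, the grid representation and the 4-neighbour rule are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only: extract the boundary pixels of each labelled
# region on a regular 2D grid (a much simpler setting than the paper's
# irregularly spaced 3D surface points).
import numpy as np

def region_boundaries(labels: np.ndarray) -> dict[int, list[tuple[int, int]]]:
    """Return, for each region label, the pixels that border a different label.

    A pixel is a boundary pixel if any of its 4-neighbours (or the image
    edge) carries a different label.
    """
    h, w = labels.shape
    boundaries: dict[int, list[tuple[int, int]]] = {}
    for y in range(h):
        for x in range(w):
            lab = labels[y, x]
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < h and 0 <= nx < w) or labels[ny, nx] != lab:
                    boundaries.setdefault(int(lab), []).append((y, x))
                    break
    return boundaries

# Tiny synthetic segmentation: background 0 and one 3x3 region labelled 1.
seg = np.zeros((6, 6), dtype=int)
seg[2:5, 2:5] = 1
print(len(region_boundaries(seg)[1]))  # 8 boundary pixels (all but the centre)
```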
102.
Liu F, Quek C, Ng GS. Neural Computation, 2007, 19(6): 1656-1680.
There are two important issues in neuro-fuzzy modeling: (1) interpretability, the ability to describe the behavior of the system in an interpretable way, and (2) accuracy, the ability to approximate the outcome of the system accurately. As these two objectives usually place contradictory requirements on the neuro-fuzzy model, a compromise has to be struck. This letter proposes a novel rule reduction algorithm, Hebb rule reduction, together with an iterative tuning process, to balance interpretability and accuracy. The Hebb rule reduction algorithm uses Hebbian ordering, which represents the degree to which a rule covers the samples, as an importance measure of each rule when merging membership functions, and hence reduces the number of rules. Similar membership functions (MFs) are merged according to a specified similarity measure in order of Hebbian importance, and the resulting duplicate rules are deleted from the rule base; among a set of equivalent rules, the one with the highest Hebbian importance is retained. The MFs are then tuned with the least mean square (LMS) algorithm to reduce the modeling error. The tuning of the MFs and the reduction of the rules proceed iteratively to achieve a balance between interpretability and accuracy. Three published data sets by Nakanishi (Nakanishi, Turksen, & Sugeno, 1993), the Pal synthetic data set (Pal, Mitra, & Mitra, 2003), and a traffic flow density prediction data set are used as benchmarks to demonstrate the effectiveness of the proposed method. Good interpretability and high modeling accuracy are achieved simultaneously, and the results are benchmarked against other well-established neuro-fuzzy models.
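As a hedged illustration of the merging step described above (not the published Hebb rule reduction algorithm), the toy sketch below ranks rules by a coverage-based importance, merges Gaussian membership functions whose pairwise similarity exceeds a threshold, and retains the more important rule of each merged pair. The similarity measure, the threshold and all names are assumptions.

```python
# Toy sketch of importance-ordered MF merging; not the published algorithm.
from dataclasses import dataclass
import math

@dataclass
class GaussianMF:
    centre: float
    sigma: float

def similarity(a: GaussianMF, b: GaussianMF) -> float:
    """Crude similarity in (0, 1]: 1 when identical, decaying with distance."""
    return math.exp(-abs(a.centre - b.centre) / (a.sigma + b.sigma))

def merge(a: GaussianMF, b: GaussianMF) -> GaussianMF:
    return GaussianMF((a.centre + b.centre) / 2, (a.sigma + b.sigma) / 2)

def reduce_rules(rules, importance, threshold=0.8):
    """rules: one MF per rule (for simplicity); importance: coverage counts."""
    # Process rules from most to least important so the retained MF follows
    # the rule with higher sample coverage.
    order = sorted(range(len(rules)), key=lambda i: -importance[i])
    kept: list[GaussianMF] = []
    for i in order:
        mf = rules[i]
        for j, existing in enumerate(kept):
            if similarity(mf, existing) >= threshold:
                kept[j] = merge(existing, mf)  # absorb into the more important MF
                break
        else:
            kept.append(mf)
    return kept

rules = [GaussianMF(0.0, 1.0), GaussianMF(0.1, 1.0), GaussianMF(3.0, 1.0)]
print(len(reduce_rules(rules, importance=[10, 4, 7])))  # 2 rules remain
```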
103.
In this paper we consider the single machine batch scheduling problem with family setup times and release dates, with the objective of minimizing the makespan. We show that this problem is strongly NP-hard, and we give two dynamic programming algorithms for it whose running times are functions of n, m, k and P, where n is the number of jobs, m is the number of families, k is the number of distinct release dates, and P is the sum of the setup times of all the families and the processing times of all the jobs. We further give a heuristic with a performance ratio of 2, as well as a polynomial-time approximation scheme (PTAS) for the problem.
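To make the problem statement concrete, the sketch below evaluates the makespan of a given job sequence under sequence-independent family setups and release dates; it is a minimal illustration of the model only, not the paper's dynamic programs, its ratio-2 heuristic or its PTAS. The data layout and field names are assumptions.

```python
# Minimal model sketch: a family setup is paid whenever the next job belongs
# to a different family than the previous one, and no job may start before
# its release date.
from dataclasses import dataclass

@dataclass
class Job:
    family: int
    processing: float
    release: float

def makespan(sequence: list[Job], setup: dict[int, float]) -> float:
    t = 0.0
    current_family = None
    for job in sequence:
        t = max(t, job.release)           # wait until the job is released
        if job.family != current_family:  # family changed -> pay its setup time
            t += setup[job.family]
            current_family = job.family
        t += job.processing
    return t

jobs = [Job(0, 2, 0), Job(0, 3, 0), Job(1, 1, 4)]
# setup(1) + 2 + 3 = 6, job 3 already released at 4, setup(2) + 1 -> 9.0
print(makespan(jobs, setup={0: 1, 1: 2}))
```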
104.
With the increasing importance of XML, LDAP directories, and text-based information sources on the Internet, there is an ever-greater need to evaluate queries involving (sub)string matching. In many cases, matches need to be on multiple attributes/dimensions, with correlations between the multiple dimensions. Effective query optimization in this context requires good selectivity estimates. In this paper, we use pruned count-suffix trees (PSTs) as the basic data structure for substring selectivity estimation. For the 1-D problem, we present a novel technique called MO (Maximal Overlap). We then develop and analyze two 1-D estimation algorithms, MOC and MOLC, based on MO and a constraint-based characterization of all possible completions of a given PST. For the k-D problem, we first generalize PSTs to multiple dimensions and develop a space- and time-efficient probabilistic algorithm to construct k-D PSTs directly. We then show how to extend MO to multiple dimensions. Finally, we demonstrate, both analytically and experimentally, that MO is both practical and substantially superior to competing algorithms.
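As a hedged illustration of the underlying data structure (not the MO, MOC or MOLC estimators), the sketch below builds a count-suffix trie over a small set of strings, prunes low-count nodes, and approximates the selectivity of a substring query as its count divided by the root count. The pruning threshold and names are assumptions.

```python
# Toy count-suffix trie: each node counts how many stored suffixes pass
# through it; pruning low-count nodes yields a compact summary.
class Node:
    def __init__(self):
        self.count = 0
        self.children: dict[str, "Node"] = {}

def build_count_suffix_trie(strings: list[str]) -> Node:
    root = Node()
    for s in strings:
        for i in range(len(s)):        # insert every suffix s[i:]
            node = root
            node.count += 1
            for ch in s[i:]:
                node = node.children.setdefault(ch, Node())
                node.count += 1
    return root

def prune(node: Node, threshold: int) -> None:
    node.children = {ch: c for ch, c in node.children.items() if c.count >= threshold}
    for child in node.children.values():
        prune(child, threshold)

def selectivity(root: Node, q: str) -> float:
    node = root
    for ch in q:
        if ch not in node.children:
            return 0.0                 # pruned or absent; a real estimator interpolates
        node = node.children[ch]
    return node.count / root.count

root = build_count_suffix_trie(["data", "database", "dates"])
prune(root, threshold=2)
print(selectivity(root, "dat"))        # fraction of stored suffixes beginning with "dat"
```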
105.
106.
A laser-induced fluorescence system, in combination with a glass-frit nebulizer and a photovoltaic cell detector, is described for single-molecule detection. The glass-frit nebulizer continuously generates a large number of droplets with an average diameter of three micrometers. Rhodamine 6G molecules were detected at the 10⁻¹² M level. Concentrations of 10⁻¹²–10⁻¹⁰ M would provide mostly single molecules (0, 1, 2, 3, ...) in the individual droplets, as described by the Poisson distribution.
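A quick back-of-envelope check of the Poisson argument, using only the droplet diameter and concentrations quoted in the abstract together with standard constants: at 10⁻¹² M a 3 µm droplet contains, on average, well below one molecule, so the occupied droplets almost always hold a single molecule.

```python
# Back-of-envelope Poisson check for molecules per 3-micrometre droplet.
import math

N_A = 6.022e23                      # Avogadro's number, 1/mol
diameter_m = 3e-6                   # droplet diameter from the abstract
volume_L = (4 / 3) * math.pi * (diameter_m / 2) ** 3 * 1e3  # m^3 -> litres

for conc in (1e-12, 1e-10):         # mol/L
    mean = conc * volume_L * N_A    # expected molecules per droplet
    p0 = math.exp(-mean)            # Poisson P(k = 0)
    p1 = mean * math.exp(-mean)     # Poisson P(k = 1)
    print(f"{conc:.0e} M: mean = {mean:.2e} molecules/droplet, "
          f"P(0) = {p0:.4f}, P(1) = {p1:.2e}")
```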
107.
In recent years, there has been much debate about the concept of digital natives, in particular the differences between digital natives' knowledge and adoption of digital technologies in informal versus formal educational contexts. This paper investigates the knowledge about educational technologies of a group of undergraduate students taking the course Introduction to eLearning at a university in Australia, and how they adopt unfamiliar technologies into their learning. The study explores the 'digital nativeness' of these students by investigating their degree of digital literacy and the ease with which they learn to make use of unfamiliar technologies. The findings show that the undergraduates were generally able to use unfamiliar technologies easily in their learning to create useful artefacts. They need, however, to be made aware of what constitutes educational technologies and to be given opportunities to use them for meaningful purposes. The self-perception measures of the study indicated that digital natives can be taught digital literacy.
108.
Future chip multiprocessors (CMPs) may have hundreds to thousands of threads competing to access shared resources and will require quality-of-service (QoS) support to improve system utilization. This paper introduces Globally-Synchronized Frames (GSF), a framework for providing guaranteed QoS in on-chip networks in terms of minimum bandwidth and a maximum delay bound. The GSF framework can be integrated into a conventional virtual channel (VC) router without significantly increasing hardware complexity, and we exploit a fast on-chip barrier network to implement GSF efficiently. The performance guarantees are verified by analysis and simulation: in our simulations, all concurrent flows receive their guaranteed minimum share of bandwidth in compliance with a given bandwidth allocation, and the average throughput degradation of GSF on an 8×8 mesh network is within 10% of a conventional best-effort VC router.
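As a hedged illustration of frame-based bandwidth reservation in general (not the GSF mechanism or its router implementation), the toy sketch below grants each flow a fixed slot budget per frame and hands out leftover slots as best effort, so a greedy flow cannot deprive others of their reserved share. The shares, frame size and demands are assumptions.

```python
# Generic frame-based bandwidth reservation toy; not the GSF router design.
def serve_frames(demands: dict[str, int], shares: dict[str, float],
                 slots_per_frame: int, frames: int) -> dict[str, int]:
    """Serve `frames` frames of `slots_per_frame` link slots each.

    Each flow is guaranteed floor(share * slots_per_frame) slots per frame;
    leftover slots in a frame go to the flows with the most backlog.
    Returns the slots actually delivered to each flow.
    """
    remaining = dict(demands)
    served = {f: 0 for f in demands}
    for _ in range(frames):
        budget = slots_per_frame
        # Guaranteed portion first.
        for f, share in shares.items():
            grant = min(int(share * slots_per_frame), remaining[f], budget)
            remaining[f] -= grant
            served[f] += grant
            budget -= grant
        # Leftover slots as best effort.
        for f in sorted(remaining, key=remaining.get, reverse=True):
            extra = min(remaining[f], budget)
            remaining[f] -= extra
            served[f] += extra
            budget -= extra
    return served

# Flow "a" reserves 25% but is greedy; flow "b" reserves 50% and still gets
# all 40 of its slots within its guaranteed share.
print(serve_frames({"a": 1000, "b": 40}, {"a": 0.25, "b": 0.5},
                   slots_per_frame=10, frames=8))
```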
109.

Context: Practitioners may use design patterns to organize program code. Various empirical studies have investigated the effects of pattern deployment and work experience on the effectiveness and efficiency of program maintenance. However, results from these studies are not all consistent. Moreover, these studies have not considered some interesting factors, such as a maintainer's prior exposure to the program under maintenance.

Objective: This paper aims to identify the factors that contribute to the productivity of maintainers when they make correct software changes to programs with deployed design patterns.

Method: We performed an empirical study involving 118 human subjects carrying out three change tasks on a medium-sized program to explore the possible effects of a suite of six human and program factors on the productivity of maintainers, measured by the time taken to produce a correctly revised program in a course-based setting. The factors studied include the deployment of design patterns and the presence of pattern-unaware solutions, as well as the maintainer's prior exposure to design patterns, to the subject program and to the programming language, and prior work experience.

Results: Among the factors examined, the deployment of design patterns, prior exposure to the program and the presence of pattern-unaware solutions are strongly correlated with the time taken to correctly complete the maintenance tasks. We also report some further observations from the experiment.

Conclusion: A new factor, the presence of pattern-unaware solutions, contributes to the efficient completion of maintenance tasks on programs with deployed design patterns. Moreover, we conclude that neither prior exposure to design patterns nor prior exposure to the programming language is supported by sufficient evidence as a significant factor, whereas the subjects' exposure to the program under maintenance is notably more important.
110.
This paper presents a disturbance decoupled fault reconstruction (DDFR) scheme using cascaded sliding mode observers (SMOs). The processed signals from an SMO are treated as the output of a fictitious system that takes the faults and disturbances as inputs; these 'outputs' are then fed into the next SMO. The process is repeated until a fictitious system is obtained that satisfies the conditions guaranteeing DDFR. This scheme is less restrictive and enables DDFR for a wider class of systems than previous work, which used only one or two SMOs. The paper also presents a systematic routine to check the feasibility of the scheme, to determine the required number of SMOs from the outset, and to design the DDFR scheme. A design example verifies its effectiveness.
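As a hedged, textbook-style illustration of fault reconstruction with a single sliding mode observer (not the paper's cascaded DDFR scheme), the sketch below simulates a scalar plant with a step fault; the observer's discontinuous injection, once low-pass filtered, approximates the fault. The plant, gains and fault signal are assumptions.

```python
# Scalar plant x' = -x + f(t), measured output y = x, observed by a
# first-order sliding mode observer; the filtered injection reconstructs f.
import math

dt, T = 1e-4, 5.0
rho = 2.0        # injection gain, chosen larger than the fault magnitude
tau = 0.01       # time constant of the low-pass filter on the injection

x = x_hat = f_hat = 0.0
t = 0.0
while t < T:
    f = 1.0 if t > 1.0 else 0.0                # step fault at t = 1 s
    y = x
    v = rho * math.copysign(1.0, y - x_hat)    # discontinuous injection term
    # Euler integration of the plant, the observer and the filter.
    x += dt * (-x + f)
    x_hat += dt * (-x_hat + v)
    f_hat += dt * (v - f_hat) / tau            # filtered injection ~ fault estimate
    t += dt

print(f"reconstructed fault at t = {T} s: {f_hat:.2f} (true value 1.0)")
```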