  Subscription full text: 404
  Free full text: 9
  Electrical engineering: 7
  Chemical industry: 38
  Metalworking: 6
  Machinery & instrumentation: 19
  Building science: 6
  Energy & power: 21
  Light industry: 10
  Radio electronics: 113
  General industrial technology: 62
  Metallurgical industry: 48
  Nuclear technology: 3
  Automation: 80
  2023: 1
  2022: 2
  2021: 6
  2020: 2
  2019: 2
  2018: 3
  2017: 5
  2016: 4
  2015: 6
  2014: 10
  2013: 16
  2012: 11
  2011: 24
  2010: 24
  2009: 25
  2008: 30
  2007: 27
  2006: 13
  2005: 18
  2004: 7
  2003: 9
  2002: 11
  2001: 8
  2000: 10
  1999: 7
  1998: 16
  1997: 19
  1996: 10
  1995: 8
  1994: 9
  1993: 8
  1992: 6
  1991: 7
  1990: 6
  1989: 9
  1988: 9
  1987: 3
  1986: 4
  1985: 3
  1984: 1
  1983: 3
  1982: 3
  1981: 3
  1980: 1
  1978: 1
  1977: 1
  1976: 2
Sorted by relevance: 413 results in total (search time: 31 ms)
1.
A mathematical formulation of uncertain information   (Total citations: 1; self-citations: 0; citations by others: 1)
This paper introduces a mathematical model of uncertain information. Each body of uncertain information is an information quadruplet, consisting of a code space, a message space, an interpretation function, and an evidence space. Each quadruplet contains prior information as well as possible new evidence that may appear later. The definitions of basic probability and belief function are based on the prior information; given new evidence, Bayes' rule is used to update it. The paper also introduces the notion of independent information and derives a formula for combining it. Both the conventional Bayesian approach and the Dempster-Shafer approach belong to this mathematical model: a Bayesian prior probability measure is the prior information of a special information quadruplet, and Bayesian conditioning is the combination of special independent information; likewise, a Dempster belief function is the belief function of a different information quadruplet, and the Dempster combination rule is the combination rule of independent quadruplets. This paper is a mathematical study of handling uncertainty and shows that both the conventional Bayesian approach and the Dempster-Shafer approach originate from the same mathematical theory. This work was supported in part by the National Science Foundation under grant number IRI-8505735 and a summer research grant of Ball State University.
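The Dempster combination rule referred to above (normalizing the conflict-free products of two independent mass functions) can be sketched as follows; the frame of discernment and mass values in the usage example are illustrative:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two independent mass functions by Dempster's rule.

    m1, m2: dicts mapping frozenset focal elements to masses summing to 1.
    Products over non-empty intersections are kept; the conflicting mass K
    is discarded and the remainder renormalized by 1 - K.
    """
    combined = {}
    conflict = 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + mb * mc
        else:
            conflict += mb * mc
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {a: v / (1.0 - conflict) for a, v in combined.items()}
```

For example, combining `{frozenset({'a'}): 0.6, frozenset({'a','b'}): 0.4}` with `{frozenset({'b'}): 0.5, frozenset({'a','b'}): 0.5}` discards the conflicting mass 0.3 and renormalizes the remaining products by 0.7.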
2.
The authors have achieved a 2.488 Gb/s, 318 km repeaterless transmission without any fiber dispersion penalty through a nondispersion-shifted fiber in a direct-detection system. The system was loss limited, with a transmitter-receiver power budget of 57 dB. Three key components enabled this result: (1) a Ti:LiNbO3 external amplitude modulator enabling dispersion-free transmission, (2) erbium-doped fiber amplifiers boosting the transmitted power to +16 dBm, and (3) an erbium-doped fiber preamplifier enabling a high receiver sensitivity of -41 dBm for a 10^-9 BER. To the authors' knowledge, this is the longest repeaterless transmission span ever reported for direct detection at this bit rate. From the experimental results and a theoretical model, the authors identified the sources of receiver sensitivity degradation from the quantum limit (-48.6 dBm) and estimated a practically achievable receiver sensitivity of about -44 dBm (~124 photons/bit) for 2.5 Gb/s optical preamplifier detection.
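As a sanity check on the quoted figures, the loss budget is launch power minus receiver sensitivity; assuming the sensitivity is -41 dBm (consistent with the stated 57 dB budget), the arithmetic is:

```python
p_launch_dbm = 16.0   # EDFA-boosted launch power (+16 dBm)
p_sens_dbm = -41.0    # assumed preamplified receiver sensitivity at 1e-9 BER
budget_db = p_launch_dbm - p_sens_dbm  # transmitter-receiver power budget
print(budget_db)  # 57.0, matching the reported 57 dB budget
```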
3.
Efficient solutions to the problem of optimally selecting recovery points are developed. The solutions apply to models of computation in which task precedence has a tree structure and a task may fail due to the presence of faults. For the binary tree model, an algorithm has been developed that minimizes the expected computation time of the task system in a uniprocessor environment. The algorithm has time complexity O(N^2), where N is the number of tasks, whereas previously reported procedures have exponential time requirements. The results are generalized to an arbitrary tree model.
4.
A system-on-chip (SOC) usually consists of many memory cores with different sizes and functionality; these typically represent a significant portion of the SOC and therefore dominate its yield. Diagnostics for yield enhancement of the memory cores is thus a very important issue. In this paper we present two data compression techniques that can be used to speed up the transmission of diagnostic data from an embedded RAM built-in self-test (BIST) circuit with diagnostic support to the external tester. The proposed syndrome-accumulation approach compresses the faulty-cell address and March syndrome to about 28% of the original size on average under the March-17N diagnostic test algorithm. The key component of the compressor is a novel syndrome-accumulation circuit, which can be realized by a content-addressable memory. Experimental results show that the area overhead is about 0.9% for a 1 Mb SRAM with 164 faults. A tree-based compression technique for word-oriented memories is also presented: by using a simplified Huffman coding scheme and partitioning each 256-bit Hamming syndrome into fixed-size symbols, the average compression ratio (size of original data to that of compressed data) is about 10, assuming 16-bit symbols. The additional hardware needed to implement the tree-based compressor is also very small. The proposed compression techniques effectively reduce memory diagnosis time as well as the tester storage requirement.
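The idea behind the tree-based scheme, Huffman-coding the fixed-size symbols cut from each syndrome, can be sketched with a generic Huffman construction (this is an illustration of the coding principle, not the authors' simplified hardware scheme; the symbol stream is made up):

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) from a symbol stream."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries: (frequency, tiebreak id, partial codebook for the subtree).
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + b for s, b in c1.items()}
        merged.update({s: "1" + b for s, b in c2.items()})
        heapq.heappush(heap, (f1 + f2, counter, merged))
        counter += 1
    return heap[0][2]
```

With 16-bit symbols, the compression ratio is `16 * len(stream)` divided by the total coded length; skewed symbol distributions (few distinct fault syndromes) are what make ratios around 10 plausible.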
5.
Measured and calculated voltages induced on an unenergized overhead power line by lightning return strokes at distances greater than 5 km from the line are presented. The experiment was performed at the NASA Kennedy Space Center during the summer of 1985 and involved simultaneous measurement of the voltage induced at one end of the top phase of a three-phase power line and of the two horizontal components of the return-stroke magnetic field incident on the line. The effective ground conductivity was determined from previous simultaneous measurements of the vertical and horizontal electric fields. Experiments were performed for two cases: (1) all phases of the power line open-circuited, and (2) one end of the top line terminated at 600 Ω with the other end open-circuited and the other two phases open-circuited at both ends. The waveshapes of the measured and calculated voltages are in reasonably good agreement, and the reasons for the observed discrepancies are discussed.
6.
The effects of formaldehyde, propionaldehyde, and benzaldehyde additives on the deposition of tin from an acidic tin(II) sulfate solution have been investigated. The effects of these additives on cathodic polarization and a.c. impedance were measured by galvanostatic and potentiostatic methods, respectively. The reduction products of the aldehydes during deposition and the diffusion coefficient of Sn(II) in the various solutions were also determined.
7.
This research proposes a revised discrete particle swarm optimization (RDPSO) algorithm for the permutation flow-shop scheduling problem with the objective of minimizing makespan (PFSP-makespan), one of the most studied NP-complete scheduling problems. RDPSO introduces new particle swarm learning strategies that determine how the global best solution and the personal best solutions guide the search. A new filtered local search is developed to filter out solution regions that have already been reviewed and steer the search toward new regions, keeping it from premature convergence. Computational experiments on Taillard's benchmark problem sets demonstrate that RDPSO significantly outperforms all the existing PSO algorithms.
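The makespan objective being minimized follows the standard permutation flow-shop recurrence C(j, k) = max(C(j-1, k), C(j, k-1)) + p(j, k): each job visits the machines in order, and an operation starts when both its machine and its job's previous operation are free. A minimal sketch (function and data names are illustrative, not from the paper):

```python
def makespan(perm, proc):
    """Makespan of a job permutation in a permutation flow shop.

    perm: sequence of job indices; proc[j][k] is the processing time of
    job j on machine k; all jobs visit machines 0..M-1 in the same order.
    """
    m = len(proc[0])
    finish = [0.0] * m                     # completion time per machine
    for j in perm:
        for k in range(m):
            # finish[k] still holds the previous job's time on machine k;
            # finish[k-1] already holds this job's time on machine k-1.
            start = max(finish[k], finish[k - 1] if k else 0.0)
            finish[k] = start + proc[j][k]
    return finish[-1]
```

For `proc = [[3, 2], [1, 4]]`, the order `[0, 1]` gives makespan 9 while `[1, 0]` gives 7, which is the kind of difference a PSO variant searches over.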
8.
This study presents a spiral polishing method and a device for micro-finishing purposes. This novel finishing process has wider application than traditional processes: it offers both automation and flexibility in final machining operations for deburring, polishing, and removing recast layers, thereby producing compressive residual stresses even in difficult-to-reach areas. Applying this method yields a finely polished surface: tiny fragments are removed by micro-lapping as an abrasive medium is driven through a screw rod. The ultra-precise process does not depend on the size of the workpiece's application area, and the approach can be extended to various precision ball-bearing lead screw products. The proposed method produces products with greater precision and efficiency than traditional processes, in terms of both processing precision and surface quality. The parameters governing maximum material removal rate (MRR) and minimum surface roughness (SR) are abrasive particle size, abrasive concentration, gap, revolution speed, and machining time.
9.
Traditionally, twist drills are reconditioned by thinning the web so that the correct chisel edge length is restored. Recently, thinning has also been included in the original design of drills to reduce torque and tool force. Because the International Organization for Standardization (ISO) has a system that can comprehensively model conventional twist drills but cannot model thinning specifications, this paper presents a system for precise mathematical modeling and CNC control of a 6-axis grinding workstation for drill thinning. The method determines the position and orientation of the grinding wheel from the evaluated rake and clearance angles of the ISO standards for 2-flute twist drills. The mathematical model and its background are discussed. For verification and demonstration, two experimental drills are produced to the identical ISO standard, except that one is thinned. The modeling herein is of value to industry and research if incorporated into computer software for drill design and manufacture. It is suitable for linear notch-type cutting with a controlled, variable rake angle along the secondary cutting edge for purposes of thinning, notching, dubbing, and advanced drill research.
10.
Greening the supply chain is an increasingly important concern for many business enterprises and a challenge for logistics management. Critical functions within green supply chain management are internal improvement and the selection of green suppliers. This study proposes a novel hybrid model that addresses the dependent relationships among criteria and the vague information coming from decision-makers. The Decision-Making Trial and Evaluation Laboratory (DEMATEL) technique structures the relationships among criteria, constructing an influential network relationship map (INRM), while the DEMATEL-based analytic network process (ANP) yields the influential weights of the criteria. Decision-makers may hold diverse opinions and preferences due to incomplete information, differences in knowledge, or conflicts inherent between departments, which can make it difficult to judge the performance of alternatives. One remedy is a modified COmplex PRoportional ASsessment of alternatives with Grey relations, which is applied to each criterion to integrate the performance values obtained from different expert opinions and, based on the INRM, to close the gap to the aspiration level. An empirical example using data from a Taiwanese electronics company demonstrates the proposed method. The results can provide firms with a knowledge-based understanding of the sources of problems, thus reducing performance gaps and approaching the aspiration levels. Finally, certain managerial implications are discussed.
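The core DEMATEL computation behind the INRM (not detailed in the abstract, but standard) normalizes the expert direct-influence matrix and forms the total-relation matrix T = N(I - N)^(-1). A minimal stdlib-only sketch that evaluates T via the convergent power series N + N^2 + ... (the example matrix is made up; classic DEMATEL variants may also normalize by the largest column sum):

```python
def dematel_total_relation(direct, tol=1e-12):
    """Total-relation matrix T = N + N^2 + ... for a direct-influence matrix.

    direct: square matrix (list of lists) of pairwise influence scores.
    Normalizing by the largest row sum keeps the series convergent for
    typical DEMATEL inputs.
    """
    n = len(direct)
    s = max(sum(row) for row in direct)
    N = [[x / s for x in row] for row in direct]
    T = [row[:] for row in N]
    P = [row[:] for row in N]                # current power N^k
    for _ in range(10000):
        P = [[sum(P[i][k] * N[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        delta = 0.0
        for i in range(n):
            for j in range(n):
                T[i][j] += P[i][j]
                delta = max(delta, abs(P[i][j]))
        if delta < tol:                      # series has converged
            break
    return T
```

Row sums of T minus column sums of T then give each criterion's net influence, which is what the INRM visualizes.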