92.
Current real-time collaborative applications use dedicated infrastructure to carry out all communication and synchronization activities; specifically, they require all end nodes to communicate directly with and through a central server. In this paper, we investigate scenarios in which the most resource-intensive functionality, the continuous communication among collaborators to disseminate changes, is decentralized by using the end users themselves as relays. We observe that the communication characteristics of real-time collaboration make existing multicast mechanisms unsuitable. Because collaborative editing sessions are typically long and repeated, it is possible to gather and leverage information about node behavior (instability and update frequency) and communication links (latency and average cost). Several criteria determine the quality of a multicast tree: cost, latency, and instability. We analyze the complexity of finding optimal communication topologies and propose approximation algorithms for optimizing them. We also consider the multiobjective problem of finding a topology that offers a good trade-off between these sometimes conflicting measures. Validation of the proposed algorithms on numerous graphs shows that the multiobjective formulation matters: a solution that is optimal for one performance measure can be far from optimal for the other metrics. Finally, we briefly present an implementation of a communication library that uses the proposed algorithms to periodically adjust the dissemination tree.
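A minimal sketch of how the three tree-quality criteria named in this abstract might be scored on a candidate dissemination tree. The edge attributes, node "churn" values, and aggregation choices below are illustrative assumptions, not the paper's exact formulation:

```python
# Score a rooted dissemination tree on cost, latency, and instability.
# Edge/node attributes ("cost", "latency", "churn") are hypothetical.
import networkx as nx

def tree_metrics(tree: nx.DiGraph, root):
    """Return (cost, latency, instability) for a rooted dissemination tree."""
    cost = sum(d["cost"] for _, _, d in tree.edges(data=True))
    # Latency: worst-case root-to-leaf delay along tree edges.
    delay = nx.shortest_path_length(tree, source=root, weight="latency")
    latency = max(delay.values())
    # Instability: unstable interior relays hurt more, since a relay's
    # failure disconnects all of its descendants.
    instability = sum(
        tree.nodes[v]["churn"] * len(nx.descendants(tree, v))
        for v in tree.nodes
    )
    return cost, latency, instability

def dominates(a, b):
    """Pareto dominance for the multiobjective search: a is no worse on
    every criterion and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
```

A multiobjective search would keep only the Pareto front of candidate trees under `dominates`, then pick a trade-off point from that front.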
93.
The objective of the current research is to model trends in video game playing, overall computer use, and communication technology use in a longitudinal sample of youths aged 11-16 over a 3-year interval. In addition, individual-difference characteristics that may predict these trends were included, namely socio-demographic characteristics (gender, ethnicity, and parental income) and personality characteristics (self-esteem and the Big Five personality factors). Findings suggested that youths increased their overall computer and communication technology use but decreased their video game playing over time. Many of the individual-difference characteristics predicted mean levels of technology use, while fewer predicted the slopes. Conclusions, implications, and limitations are discussed.
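An illustrative sketch of the kind of growth-curve model this abstract describes: technology use regressed on time, with individual-difference characteristics predicting both level (intercept) and slope. The file name and column names (`use`, `wave`, `id`, `gender`, `self_esteem`) are assumptions for illustration, not the study's variables:

```python
# Random-intercept, random-slope growth model with statsmodels.
import pandas as pd
import statsmodels.formula.api as smf

data = pd.read_csv("youth_panel.csv")  # hypothetical long-format panel

# wave = 0, 1, 2 for the three yearly measurements; the wave:gender term
# lets the slope of use-over-time differ by gender.
model = smf.mixedlm(
    "use ~ wave + gender + self_esteem + wave:gender",
    data,
    groups=data["id"],     # repeated measures nested within participant
    re_formula="~wave",    # random intercept and random slope per youth
)
result = model.fit()
print(result.summary())   # a positive wave coefficient => increasing use
```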
94.
We have developed a compensated capacitive pressure and temperature sensor for kraft pulp digesters (pH 13.5, temperatures of 25–175°C with local maxima of 180°C, and pressures up to 2 MPa). The gauge capacitive pressure sensor was fabricated by bonding silicon and Pyrex chips, using a high-temperature, low-viscosity ultraviolet (UV) adhesive as the gap-controlling layer and a heat-curing adhesive as the bonding agent. A simple chip-bonding technique, involving insertion of the adhesive into the gap between the two chips, was developed. A platinum thin-film wire was patterned on top of the silicon chip to form a resistance temperature detector (RTD) with a nominal resistance of 1,500 Ω. A silicon dioxide layer and a thin layer of Parylene were deposited to passivate the pressure sensor diaphragm, and the sensors were embedded in epoxy for protection against the caustic environment in kraft digesters. The sensors were tested up to 2 MPa and 170°C in an environmental chamber. A maximum thermal error of ±1% of full-scale output (FSO) (±20 kPa in absolute terms) and an average sensitivity of 0.554 fF/kPa were measured. Parylene-coated silicon chips survived a full kraft pulping cycle with no signs of corrosion.
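A sketch of what a compensated readout for such a sensor could look like: the platinum RTD gives temperature via the standard Callendar-Van Dusen equation (IEC 60751 coefficients, an assumption here, not stated in the abstract), and pressure is recovered from the capacitance change using the reported average sensitivity of 0.554 fF/kPa. The linear thermal-offset correction is an illustrative placeholder for the paper's compensation:

```python
# Compensated readout sketch: RTD resistance -> temperature,
# capacitance + temperature -> pressure.
A = 3.9083e-3   # 1/degC, IEC 60751 platinum coefficient (assumed)
B = -5.775e-7   # 1/degC^2
R0 = 1500.0     # ohms, nominal RTD resistance

def rtd_temperature(r_meas: float) -> float:
    """Invert R(T) = R0 * (1 + A*T + B*T^2) for T >= 0 degC."""
    disc = A * A - 4.0 * B * (1.0 - r_meas / R0)
    return (-A + disc ** 0.5) / (2.0 * B)

SENS_FF_PER_KPA = 0.554   # reported average sensitivity
TEMP_COEFF = 0.0          # fF/degC thermal drift; calibrated per device

def pressure_kpa(c_ff: float, c_zero_ff: float, temp_c: float) -> float:
    """Capacitance change -> pressure, with a simple thermal correction."""
    return (c_ff - c_zero_ff - TEMP_COEFF * (temp_c - 25.0)) / SENS_FF_PER_KPA
```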
95.
Hazard perception is one of the most important facets of driving and, given an appropriate diagnostic tool, it can discriminate between novice and experienced drivers. In this study, video clips of actual driving scenarios were shown to novice and experienced drivers. The clips were stopped just prior to hazard onset, and either the screen went black or the final still image stayed on the screen. Participants were then asked five questions about what happened next. This variant of the hazard perception test allowed the influence of processing time to be included and the level of situation awareness to be measured. Experienced drivers anticipated significantly more correct hazardous outcomes than novice drivers when the screen went black. Novice drivers benefited from the extra processing time afforded by the image remaining on the screen, anticipating significantly more hazards when the image remained than when the screen went black. The findings indicate that when processing time is manipulated, hazard perception accuracy reveals experiential differences. These differences are discussed with reference to hazard perception and situation awareness. This research informs the current controversy over whether hazard perception is a good diagnostic tool for driving performance: it identifies potential confounds in previous work and demonstrates that experiential differences can be found when appropriate tests are used. Further, it suggests improvements for new hazard perception tests.
97.
Frequently, user interface (UI) designers must choose between modifying an established but suboptimal and familiar UI and leaving it unchanged. Changing the UI's organization may frustrate users who have become familiar with the original design, whereas failing to make changes may force users to perform at an unsatisfactory level. This paper presents two studies that investigate whether users familiar with a poorly designed UI would find items faster in, and prefer, a reorganized UI that conformed to domain-expert knowledge, or whether their familiarity with the original UI would yield faster performance and higher satisfaction. The paper describes activities to redesign a menu structure in a simulator instructor-operator station (IOS) using hierarchical card sorting and cluster analysis (Romesburg, 2004). This analysis was used to reorganize the menu structure to reflect the knowledge representations of domain experts, in accordance with the principle of proximity compatibility (Wickens and Carswell, 1995; Rothrock et al., 2006). The new design was validated with a separate set of users through a reaction-time experiment and preference selection.
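A minimal sketch of the card-sort analysis pipeline this abstract names: a co-occurrence matrix from expert sorts is converted to distances and clustered hierarchically with scipy. The card names, sorter count, and co-occurrence counts are hypothetical illustrations, not the study's data:

```python
# Hierarchical clustering of a card-sort co-occurrence matrix.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

cards = ["fuel", "weather", "radio", "engine", "map", "traffic"]
n_sorters = 10

# cooccur[i, j] = how many experts placed cards i and j in the same pile.
cooccur = np.array([
    [10,  1,  2,  9,  1,  1],
    [ 1, 10,  3,  1,  7,  6],
    [ 2,  3, 10,  2,  4,  5],
    [ 9,  1,  2, 10,  1,  1],
    [ 1,  7,  4,  1, 10,  8],
    [ 1,  6,  5,  1,  8, 10],
])

# Distance: cards rarely sorted together are seen as unrelated.
dist = 1.0 - cooccur / n_sorters
condensed = dist[np.triu_indices(len(cards), k=1)]  # pdist-style vector

tree = linkage(condensed, method="average")
groups = fcluster(tree, t=2, criterion="maxclust")  # e.g. two top-level menus
for card, g in zip(cards, groups):
    print(f"{card}: menu {g}")
```

The resulting clusters would then be mapped onto menu groupings so that items experts judge as related sit together, per the proximity-compatibility principle.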
98.
We describe an experiment in which art and illustration experts evaluated six 2D vector visualization methods. We found that these expert critiques mirrored previously recorded experimental results; this finding suggests that having artists, visual designers, and illustrators critique scientific visualizations can be faster and more productive than quantitative user studies. Our participants successfully evaluated how well the given methods would let users complete a given set of tasks. Our results show a statistically significant correlation with a previous objective study: the designers' subjective predictions of user performance matched users' measured performance. The experts improved the evaluation by providing insights into the reasons for the effectiveness of each visualization method and by suggesting specific improvements.
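A tiny sketch of the comparison this abstract reports: rank-correlating the designers' predicted ordering of the six methods with measured user performance. The numbers below are placeholders, not the study's data:

```python
# Rank correlation between expert predictions and measured performance.
from scipy.stats import spearmanr

predicted_rank = [1, 2, 3, 4, 5, 6]                    # 1 = predicted best
measured_score = [0.92, 0.88, 0.79, 0.75, 0.70, 0.55]  # e.g. task accuracy

rho, p = spearmanr(predicted_rank, measured_score)
# rho near -1 here means the predicted-best methods scored highest.
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```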
99.
A multi-level cache model for run-time optimization of remote visualization
Remote visualization is an enabling technology that aims to overcome the barrier of physical distance. While many researchers have developed innovative algorithms for remote visualization, previous work has paid little attention to systematically investigating optimal configurations of remote visualization architectures. In this paper, we study caching and prefetching, an important aspect of such architecture design, in order to optimize fetch time in a remote visualization system. Unlike a processor cache or web cache, a cache for remote visualization is unique and complex. Through actual experimentation and numerical simulation, we have discovered ways to systematically evaluate and search for optimal configurations of remote visualization caches under various scenarios, such as different network speeds, sizes of requested data, prefetch schemes, and cache depletion schemes. We have also designed practical infrastructure software that adaptively optimizes the caching architecture of general remote visualization systems when a different application is started or the network condition varies. The lower bound on achievable latency discovered with our approach can aid the design of remote visualization algorithms and the selection of suitable network layouts for a remote visualization system.
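A minimal sketch of the kind of cache simulation such a study runs: estimating average fetch time for a stream of frame requests under an LRU cache with one-step-ahead prefetch. The network parameters, object size, request trace, and prefetch policy are illustrative assumptions, not the paper's configurations:

```python
# Simulate average fetch time for an LRU cache with simple prefetch.
from collections import OrderedDict

def simulate(requests, cache_size, net_latency, obj_size, bandwidth,
             prefetch=True):
    cache = OrderedDict()                    # frame id -> None, in LRU order
    total = 0.0
    fetch_time = net_latency + obj_size / bandwidth

    def load(frame):
        cache[frame] = None
        cache.move_to_end(frame)
        while len(cache) > cache_size:
            cache.popitem(last=False)        # evict least recently used

    for frame in requests:
        if frame in cache:
            cache.move_to_end(frame)         # hit: negligible local latency
        else:
            total += fetch_time              # miss: pay the network cost
            load(frame)
        if prefetch:
            load(frame + 1)  # background prefetch of the guessed next frame;
                             # its bandwidth cost is ignored in this sketch

    return total / len(requests)

# A user scrubbing forward through frames mostly hits the prefetched data.
trace = list(range(50)) + list(range(25, 75))
print(simulate(trace, cache_size=32, net_latency=0.05,
               obj_size=2e6, bandwidth=1e7))
```

Sweeping `cache_size`, `bandwidth`, and the prefetch policy over such a simulator is one way to search for the optimal configurations the abstract describes.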
100.
Hyperglycaemia is prevalent in critical illness and increases the risk of further complications and mortality, while tight glycaemic control can reduce mortality by up to 43%. Adaptive control methods are capable of highly accurate, targeted blood glucose regulation using the limited number of manual measurements available, a limit imposed by patient discomfort and labour intensity. The option to obtain greater data density using emerging continuous glucose sensing devices is therefore attractive. However, the few such systems currently available can have errors in excess of 20-30%, whereas typical bedside testing kits have errors of approximately 7-10%. Despite the greater measurement frequency, these larger errors significantly impact the resulting glucose and patient-specific parameter estimates, and thus the control actions determined, creating an important safety and performance issue. This paper models the impact of the continuous glucose monitoring system (CGMS, Medtronic, Northridge, CA) on model-based parameter identification and glucose prediction. An integral-based fitting and filtering method is developed to reduce the effect of these errors. A noise model is developed based on CGMS data reported in the literature; it is slightly conservative, with a mean Clarke Error Grid (CEG) correlation of R=0.81 (range: 0.68-0.88), compared to a reported value of R=0.82 in a critical care study. Using 17 virtual patient profiles developed from retrospective clinical data, this noise model was used to test the methods developed. Monte-Carlo simulation for each patient resulted in an average absolute 1-h glucose prediction error of 6.20% (range: 4.97-8.06%), with an average per-patient standard deviation of 5.22% (range: 3.26-8.55%). All the methods and results generalize to similar applications outside of critical care, such as less acute wards and, eventually, ambulatory individuals. Clinically, the results demonstrate one possible computational method for managing the larger errors encountered in emerging continuous blood glucose sensors, thus enabling their more effective use in clinical glucose regulation studies.
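A sketch of the shape of the Monte-Carlo noise experiment described above: a synthetic "true" glucose trace is perturbed with multiplicative sensor noise, and the absolute error of a smoothed estimate is averaged over many runs. The noise magnitude and the moving-average filter are illustrative stand-ins for the paper's CGMS noise model and integral-based fitting method:

```python
# Monte-Carlo evaluation of prediction error under sensor noise.
import numpy as np

rng = np.random.default_rng(0)

t = np.arange(0, 24 * 60, 5)                        # 5-min CGMS samples
true_g = 7.0 + 2.0 * np.sin(2 * np.pi * t / 480)    # mmol/L, synthetic trace

errors = []
for _ in range(1000):                               # Monte-Carlo runs
    noisy = true_g * (1 + rng.normal(0.0, 0.10, true_g.shape))  # ~10% CV noise
    # Crude filtering stand-in: moving average over the last 6 samples.
    kernel = np.ones(6) / 6
    est = np.convolve(noisy, kernel, mode="same")
    errors.append(np.mean(np.abs(est - true_g) / true_g) * 100)

print(f"mean abs error: {np.mean(errors):.2f}% (sd {np.std(errors):.2f}%)")
```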