Full-text access type
Paid full text | 6549 articles |
Free | 229 articles |
Free (domestic) | 45 articles |
Subject categories
Electrical engineering | 133 articles |
General | 24 articles |
Chemical industry | 827 articles |
Metalworking | 110 articles |
Machinery and instrumentation | 198 articles |
Architecture and building science | 347 articles |
Mining engineering | 3 articles |
Energy and power engineering | 148 articles |
Light industry | 301 articles |
Hydraulic engineering | 23 articles |
Petroleum and natural gas | 25 articles |
Weapons industry | 3 articles |
Radio electronics | 1473 articles |
General industrial technology | 1027 articles |
Metallurgical industry | 1118 articles |
Atomic energy technology | 33 articles |
Automation technology | 1030 articles |
Publication year
2023 | 40 articles |
2022 | 76 articles |
2021 | 92 articles |
2020 | 58 articles |
2019 | 70 articles |
2018 | 106 articles |
2017 | 82 articles |
2016 | 135 articles |
2015 | 112 articles |
2014 | 134 articles |
2013 | 319 articles |
2012 | 305 articles |
2011 | 322 articles |
2010 | 238 articles |
2009 | 297 articles |
2008 | 328 articles |
2007 | 317 articles |
2006 | 305 articles |
2005 | 253 articles |
2004 | 205 articles |
2003 | 213 articles |
2002 | 158 articles |
2001 | 171 articles |
2000 | 167 articles |
1999 | 189 articles |
1998 | 474 articles |
1997 | 274 articles |
1996 | 209 articles |
1995 | 173 articles |
1994 | 108 articles |
1993 | 126 articles |
1992 | 77 articles |
1991 | 62 articles |
1990 | 52 articles |
1989 | 49 articles |
1988 | 48 articles |
1987 | 41 articles |
1986 | 39 articles |
1985 | 60 articles |
1984 | 27 articles |
1983 | 24 articles |
1982 | 39 articles |
1981 | 34 articles |
1980 | 29 articles |
1979 | 24 articles |
1978 | 17 articles |
1977 | 22 articles |
1976 | 44 articles |
1975 | 17 articles |
1973 | 19 articles |
Sort by: 6823 results found, search time 15 ms
131.
A confidence-based framework for business to consumer (B2C) mobile commerce adoption   Cited by: 1 (self-citations: 0, others: 1)
The Technology Acceptance Model (TAM) has been considered fundamental in determining the acceptance of new technology over the past decades. However, the model's two beliefs, ease of use and usefulness, may not fully explain consumer behavior in an emerging environment such as mobile commerce (m-commerce). This paper aims to develop a framework for m-commerce adoption in consumer decision-making processes. TAM is adopted and extended to analyze successful m-commerce adoption. The key elements of the proposed confidence-based framework for B2C m-commerce adoption include psychological and behavioral factors: psychological factors comprise history-based, institution-based, and personality-based confidence, while behavioral factors comprise the perceived ease of use and perceived usefulness of the mobile application and technology.
132.
Liu D, Cao Y, Kim KH, Stanek S, Doungratanaex-Chai B, Lin K, Tavanapong W, Wong J, Oh J, de Groen PC. Computer Methods and Programs in Biomedicine, 2007, 88(2): 152-163
Colonoscopy is an endoscopic technique that allows physicians to inspect the inside of the human colon. During a colonoscopic procedure, a tiny video camera at the tip of the endoscope generates a video signal of the internal mucosa of the colon. In current practice, the entire colonoscopic procedure is not routinely captured. Software tools providing easy access to important contents of videos that are digitally captured during colonoscopy are not available. Hence, it is very time consuming to review an entire video, locate important contents, annotate them, and extract the annotated contents for research, teaching, and training purposes. Arthemis, a software application, was developed to facilitate this process. For convenient data sharing, Arthemis allows annotation according to the Minimal Standard Terminology (MST) of the European Society of Gastrointestinal Endoscopy (ESGE), an internationally accepted standard for digestive endoscopy. Arthemis is part of our integrated capturing and content analysis system for colonoscopy called the Endoscopic Multimedia Information System (EMIS). This paper presents Arthemis as a component of EMIS, the design and implementation of Arthemis, and key lessons learned from the development process.
133.
Multiregression is one of the most common approaches used to discover dependency patterns among attributes in a database. Nonadditive set functions have been applied to deal with the interactive predictive attributes involved, and some nonlinear integrals with respect to nonadditive set functions are employed to establish a nonlinear multiregression model describing the relation between the objective attribute and predictive attributes. The values of the nonadditive set function play the role of unknown regression coefficients in the model and are determined by an adaptive genetic algorithm from the data of predictive and objective attributes. Furthermore, such a model is now improved by a new numericalization technique so that the model can accommodate both categorical and continuous numerical attributes. The traditional dummy binary method for dealing with mixed-type data can be regarded as a very special case of our model when there is no interaction among the predictive attributes and the Choquet integral is used. When running the algorithm, a technique of maintaining diversity in the population is adopted to avoid premature convergence during the evolutionary procedure. A test example shows that the algorithm and the relevant program have good reversibility for the data. © 2001 John Wiley & Sons, Inc. 16: 949–962 (2001)
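The discrete Choquet integral mentioned above is straightforward to compute once the set function is fixed: sort the attributes by value and accumulate value increments weighted by the set function of the "at least this large" coalitions. A minimal Python sketch; the toy set function `mu` and the attribute values are invented for illustration and are not taken from the paper:

```python
def choquet_integral(values, mu):
    """Discrete Choquet integral of nonnegative attribute values with
    respect to a (possibly nonadditive) set function mu, given as a dict
    mapping frozensets of attribute indices to weights."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])  # ascending values
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        # A_k: the attributes whose value is >= values[i]
        a_k = frozenset(order[k:])
        total += (values[i] - prev) * mu[a_k]
        prev = values[i]
    return total

# Hypothetical set function on two interacting attributes.
mu = {
    frozenset(): 0.0,
    frozenset({0}): 0.3,
    frozenset({1}): 0.4,
    frozenset({0, 1}): 1.0,  # > 0.3 + 0.4: positive interaction
}
print(choquet_integral([0.5, 0.8], mu))  # -> 0.62
```

With an additive set function (mu({0,1}) = 0.7) the integral collapses to the plain weighted sum 0.3·0.5 + 0.4·0.8, which is the "no interaction" special case the abstract refers to.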
134.
This paper illustrates two strategies for the detection and classification of abnormal process operating conditions in which multiple process variable trends are available. The first strategy uses a hidden Markov model (HMM) for overall process classification while the second method uses a back-propagation neural network (BPNN) to determine the overall process classification. The methods are compared in terms of their ability to detect and correctly diagnose a variety of abnormal operating conditions for a non-isothermal CSTR simulation. For the case study problem, the BPNN method resulted in better classification accuracy with a moderate increase in training time compared with the HMM approach.
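To illustrate the HMM side of such a comparison: a discretized sequence of process readings can be classified by scoring it under competing models with the scaled forward algorithm and picking the maximum-likelihood model. The two-state "normal" and "fault" models below are invented toy parameters, not the paper's CSTR models:

```python
import math

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward algorithm: log P(obs) under an HMM with initial
    distribution pi, transition matrix A, and emission matrix B."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    c = sum(alpha)
    logp = math.log(c)
    alpha = [a / c for a in alpha]           # rescale to avoid underflow
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
        c = sum(alpha)
        logp += math.log(c)
        alpha = [a / c for a in alpha]
    return logp

pi = [0.5, 0.5]
A = [[0.9, 0.1], [0.1, 0.9]]
B_normal = [[0.9, 0.1], [0.8, 0.2]]   # mostly emits symbol 0
B_fault  = [[0.2, 0.8], [0.1, 0.9]]   # mostly emits symbol 1
obs = [0, 0, 0, 1, 0, 0]
scores = {
    "normal": forward_log_likelihood(obs, pi, A, B_normal),
    "fault":  forward_log_likelihood(obs, pi, A, B_fault),
}
print(max(scores, key=scores.get))  # classify by maximum likelihood
```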
135.
This study examined different fluid replacement quantities during intermittent work in the heat (35 °C, 50% relative humidity) while wearing firefighting protective clothing and self-contained breathing apparatus. Twelve firefighters walked at 4.5 km/h with 0% elevation on an intermittent work (50 min) and rest (30 min) schedule until they reached a rectal temperature of 39.5 °C during work or 40.0 °C during rest, a heart rate of 95% of maximum, and/or exhaustion. During the heat-stress trials subjects received one of four fluid replacement quantities: high (H), moderate (M), low (L), or no hydration (NH), where H, M and L represented 78%, 63% and 37% of fluid loss, respectively. Total tolerance time (work + rest, min) was significantly greater during H (111.8 ± 3.5), M (112.9 ± 5.2) and L (104.2 ± 5.8) than during NH (95.3 ± 3.8). In addition, work time (min), which excluded rest periods, was significantly greater in H (82.6 ± 3.5) and M (82.9 ± 5.2) than in NH (65.3 ± 3.8). It is concluded that even partial fluid replacement while wearing firefighting protective clothing and self-contained breathing apparatus in the heat improves tolerance time.
136.
Chase JG, Hann CE, Jackson M, Lin J, Lotz T, Wong XW, Shaw GM. Computer Methods and Programs in Biomedicine, 2006, 82(3): 238-247
Hyperglycaemia is prevalent in critical illness and increases the risk of further complications and mortality, while tight control can reduce mortality by up to 43%. Adaptive control methods are capable of highly accurate, targeted blood glucose regulation using the limited number of manual measurements available given patient discomfort and labour intensity. Therefore, the option to obtain greater data density using emerging continuous glucose sensing devices is attractive. However, the few such systems currently available can have errors in excess of 20-30%. In contrast, typical bedside testing kits have errors of approximately 7-10%. Despite the greater measurement frequency, larger errors significantly impact the resulting glucose and patient-specific parameter estimates, and thus the control actions determined, creating an important safety and performance issue. This paper models the impact of the continuous glucose monitoring system (CGMS, Medtronic, Northridge, CA) on model-based parameter identification and glucose prediction. An integral-based fitting and filtering method is developed to reduce the effect of these errors. A noise model is developed based on CGMS data reported in the literature, and is slightly conservative with a mean Clarke Error Grid (CEG) correlation of R=0.81 (range: 0.68-0.88) as compared to a reported value of R=0.82 in a critical care study. Using 17 virtual patient profiles developed from retrospective clinical data, this noise model was used to test the methods developed. Monte-Carlo simulation for each patient resulted in an average absolute 1-h glucose prediction error of 6.20% (range: 4.97-8.06%) with an average standard deviation per patient of 5.22% (range: 3.26-8.55%). Note that all the methods and results are generalizable to similar applications outside of critical care, such as less acute wards and eventually ambulatory individuals.
Clinically, the results show one possible computational method for managing the larger errors encountered in emerging continuous blood glucose sensors, thus enabling their more effective use in clinical glucose regulation studies.
137.
As music can be represented symbolically, most of the existing methods extend some string matching algorithms to retrieve musical patterns in a music database. However, not all retrieved patterns are perceptually significant because some of them are, in fact, inaudible. Music is perceived in groupings of musical notes called streams. The process of grouping musical notes into streams is called stream segregation. Stream-crossing musical patterns are perceptually insignificant and should be pruned from the retrieval results. This can be done if all musical notes in a music database are segregated into streams and musical patterns are retrieved from the streams. Findings in auditory psychology are utilized in this paper, in which stream segregation is modelled as a clustering process and an adapted single-link clustering algorithm is proposed. Supported by experiments on real music data, streams are identified by the proposed algorithm with considerable accuracy.
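The stream segregation step described above can be sketched as single-link agglomerative clustering over notes, where two streams merge whenever any pair of their notes is close in both onset time and pitch. The distance weights, threshold, and example notes below are illustrative choices, not the paper's actual model:

```python
def single_link_streams(notes, threshold=4.0, w_time=1.0, w_pitch=0.5):
    """Single-link clustering of notes (onset_time, midi_pitch) into
    streams: clusters merge while their minimum pairwise distance under
    a weighted time/pitch metric is below `threshold`."""
    def dist(a, b):
        return w_time * abs(a[0] - b[0]) + w_pitch * abs(a[1] - b[1])

    clusters = [[n] for n in notes]
    merged = True
    while merged:
        merged = False
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                # single-link: minimum distance over all cross pairs
                if min(dist(a, b)
                       for a in clusters[i]
                       for b in clusters[j]) < threshold:
                    clusters[i].extend(clusters.pop(j))
                    merged = True
                    break
            if merged:
                break
    return clusters

# Two interleaved voices: a low melodic line and a high one.
notes = [(0, 60), (1, 84), (2, 62), (3, 86), (4, 64), (5, 88)]
streams = single_link_streams(notes)
print(len(streams))  # the wide pitch gap keeps the voices apart -> 2
```

A pattern matcher restricted to within-stream note sequences would then skip "stream-crossing" matches such as (0, 60) followed by (1, 84).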
138.
In conventional video-on-demand systems, video data are stored in a video server for delivery to multiple receivers over a communications network. The video server's hardware limits the maximum storage capacity as well as the maximum number of video sessions that can concurrently be delivered. Clearly, these limits will eventually be exceeded by the growing need for better video quality and larger user populations. This paper studies a parallel video server architecture that exploits server parallelism to achieve incremental scalability. First, unlike data partition and replication, the architecture employs data striping at the server level to achieve fine-grain load balancing across multiple servers. Second, a client-pull service model is employed to eliminate the need for interserver synchronization. Third, an admission-scheduling algorithm is proposed to further control the instantaneous load at each server so that linear scalability can be achieved. This paper analyzes the performance of the architecture by deriving bounds for server service delay, client buffer requirement, prefetch delay, and scheduling delay. These performance metrics and design tradeoffs are further evaluated using numerical examples. Our results show that the proposed parallel video server architecture can be linearly scaled up to more concurrent users simply by adding more servers and redistributing the video data among the servers.
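The fine-grain, server-level striping idea reduces to round-robin block placement: because every video's blocks cycle through all servers, load spreads evenly regardless of which videos are popular. A minimal sketch under assumed block and server counts (the numbers are arbitrary, not from the paper):

```python
def stripe_placement(num_blocks, num_servers):
    """Round-robin striping: block i goes to server i mod num_servers,
    so each video's blocks are spread evenly over all servers."""
    return [i % num_servers for i in range(num_blocks)]

def server_load(placement, num_servers):
    """Number of blocks stored (and hence streamed) per server."""
    load = [0] * num_servers
    for s in placement:
        load[s] += 1
    return load

placement = stripe_placement(12, 4)
print(server_load(placement, 4))  # -> [3, 3, 3, 3], balanced
```

In the client-pull model each client then requests block i directly from server i mod num_servers on its own schedule, which is what removes the need for interserver synchronization.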
139.
We study the grouping-by-swapping problem, which occurs in memory compaction and in computing the exponential of a matrix. In this problem we are given a sequence of n numbers drawn from {0, 1, 2, ..., m−1}, with repetitions allowed; we are to rearrange them, using as few swaps of adjacent elements as possible, into an order in which all the like numbers are grouped together. This problem is known to be NP-hard. We present a probabilistic analysis of a grouping algorithm called MEDIAN that works by sorting the numbers in the sequence according to their median positions. Our results show that the expected behavior of MEDIAN is within 10% of optimal and is asymptotically optimal as n/m → ∞ or as n/m → 0.
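The MEDIAN heuristic is easy to sketch: key each element by the median of the positions at which its value occurs, stable-sort on that key, and count the adjacent swaps the sort implies (the number of inversions in the key sequence). A small Python illustration with an invented example sequence:

```python
from collections import defaultdict
from statistics import median

def median_keys(seq):
    """Median of the positions at which each distinct value occurs."""
    positions = defaultdict(list)
    for idx, v in enumerate(seq):
        positions[v].append(idx)
    return {v: median(p) for v, p in positions.items()}

def median_grouping(seq):
    """MEDIAN heuristic: stable-sort elements by the median position of
    their value. The minimum number of adjacent swaps needed to reach
    that order equals the inversion count of the key sequence."""
    med = median_keys(seq)
    keys = [med[v] for v in seq]
    swaps = sum(1 for i in range(len(keys))
                  for j in range(i + 1, len(keys))
                  if keys[i] > keys[j])
    return sorted(seq, key=lambda v: med[v]), swaps

grouped, swaps = median_grouping([1, 0, 1, 0, 0])
print(grouped, swaps)  # -> [1, 1, 0, 0, 0] 1
```

Here the 1s have median position 1 and the 0s median position 3, so one adjacent swap (of positions 1 and 2) groups the sequence.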
140.
Stochastic neural networks   Cited by: 2 (self-citations: 0, others: 2)
Eugene Wong. Algorithmica, 1991, 6(1): 466-478
The first purpose of this paper is to present a class of algorithms for finding the global minimum of a continuous-variable function defined on a hypercube. These algorithms, based on both diffusion processes and simulated annealing, are implementable as analog integrated circuits. Such circuits can be viewed as generalizations of neural networks of the Hopfield type, and are called diffusion machines. Our second objective is to show that learning in these networks can be achieved by a set of three interconnected diffusion machines: one that learns, one to model the desired behavior, and one to compute the weight changes. This research was supported in part by U.S. Army Research Office Grant DAAL03-89-K-0128.
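A discrete-time software stand-in for the diffusion-based global search is plain simulated annealing with Gaussian proposals on the unit hypercube. The cooling schedule, step size, and multimodal test function below are illustrative choices, not the paper's analog-circuit dynamics:

```python
import math
import random

def anneal_minimize(f, dim, steps=20000, t0=1.0, seed=0):
    """Simulated-annealing search for the global minimum of f on the
    unit hypercube [0, 1]^dim, with logarithmic cooling and Gaussian
    proposal steps clipped to the box."""
    rng = random.Random(seed)
    x = [rng.random() for _ in range(dim)]
    fx = f(x)
    best, fbest = x[:], fx
    for k in range(1, steps + 1):
        temp = t0 / math.log(k + 1)  # logarithmic cooling schedule
        y = [min(1.0, max(0.0, xi + rng.gauss(0, 0.1))) for xi in x]
        fy = f(y)
        # Accept downhill moves always, uphill moves with Boltzmann prob.
        if fy < fx or rng.random() < math.exp(-(fy - fx) / temp):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x[:], fx
    return best, fbest

def f(x):
    """Multimodal toy objective: quadratic bowl plus a sinusoidal
    ripple that creates local minima along the first coordinate."""
    return sum((xi - 0.8) ** 2 for xi in x) + 0.1 * math.cos(20 * x[0])

best, fbest = anneal_minimize(f, dim=2)
print(round(fbest, 3))
```

The ripple term means pure gradient descent can stall in a local dip; the annealing noise, like the diffusion term in the paper's dynamics, lets the search escape such traps before the temperature decays.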