991.
992.
993.
Antonio Torralba, Rob Fergus, William T. Freeman. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(11): 1958-1970.
With the advent of the Internet, billions of images are now freely available online and constitute a dense sampling of the visual world. Using a variety of non-parametric methods, we explore this world with the aid of a large dataset of 79,302,017 images collected from the Internet. Motivated by psychophysical results showing the remarkable tolerance of the human visual system to degradations in image resolution, the images in the dataset are stored as 32 x 32 color images. Each image is loosely labeled with one of the 75,062 non-abstract nouns in English, as listed in the WordNet lexical database. Hence the image database gives comprehensive coverage of all object categories and scenes. The semantic information from WordNet can be used in conjunction with nearest-neighbor methods to perform object classification over a range of semantic levels, minimizing the effects of labeling noise. For certain classes that are particularly prevalent in the dataset, such as people, we are able to demonstrate a recognition performance comparable to class-specific Viola-Jones style detectors.
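The paper's full pipeline votes over WordNet semantic levels, but its core operation is brute-force nearest-neighbor search in pixel space. Below is a minimal Python sketch of that operation, with random arrays standing in for the 79-million-image corpus; all names and data are illustrative, not the authors' code.

```python
import numpy as np

def nearest_neighbors(query, dataset, k=5):
    """Indices of the k nearest images under sum-of-squared-differences."""
    diffs = dataset.astype(np.float64) - query.astype(np.float64)
    d2 = (diffs ** 2).reshape(len(dataset), -1).sum(axis=1)
    return np.argsort(d2)[:k]

rng = np.random.default_rng(0)
images = rng.integers(0, 256, size=(1000, 32, 32, 3), dtype=np.uint8)  # stand-in corpus
labels = rng.integers(0, 10, size=1000)                                # stand-in noun labels
query = rng.integers(0, 256, size=(32, 32, 3), dtype=np.uint8)

idx = nearest_neighbors(query, images)
votes = np.bincount(labels[idx], minlength=10)  # simple label voting over the neighbors
print("predicted label:", votes.argmax())
```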
994.
OBJECTIVE: The performance costs associated with cell phone use while driving were assessed meta-analytically using standardized measures of effect size along five dimensions. BACKGROUND: There have been many studies on the impact of cell phone use on driving, showing some mixed findings. METHODS: Twenty-three studies (contributing 47 analysis entries) met the appropriate conditions for the meta-analysis. The statistical results from each of these studies were converted into effect sizes and combined in the meta-analysis. RESULTS: Overall, there were clear costs to driving performance when drivers were engaged in cell phone conversations. However, subsequent analyses indicated that these costs were borne primarily by reaction time tasks, with far smaller costs associated with tracking (lane-keeping) performance. Hands-free and handheld phones revealed similar patterns of results for both measures of performance. Conversation tasks tended to show greater costs than did information-processing tasks (e.g., word games). There was a similar pattern of results for passenger and remote (cell phone) conversations. Finally, there were some small differences between simulator and field studies, though both exhibited costs in performance for cell phone use. CONCLUSION: We suggest that (a) there are significant costs to driver reactions to external hazards or events associated with cell phone use, (b) hands-free cell phones do not eliminate or substantially reduce these costs, and (c) different research methodologies or performance measures may underestimate these costs. APPLICATION: Potential applications of this research include the assessment of performance costs attributable to different types of cell phones, cell phone conversations, experimental measures, or methodologies.
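The abstract does not state which pooling model the authors used; as a reminder of how standardized effect sizes are commonly combined, here is a minimal fixed-effect (inverse-variance) pooling sketch in Python. The numbers are illustrative, not values from the 23 studies, and the paper may well use a different weighting scheme.

```python
import math

def fixed_effect_meta(effects, variances):
    """Pool per-study effect sizes with inverse-variance weights.

    Returns the pooled effect and its standard error.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))
    return pooled, se

# Illustrative numbers only -- not data from the meta-analysis.
effects = [0.50, 0.35, 0.62, 0.41]     # standardized mean differences
variances = [0.02, 0.03, 0.05, 0.01]   # per-study sampling variances
d, se = fixed_effect_meta(effects, variances)
print(f"pooled effect d = {d:.3f} (SE = {se:.3f})")
```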
995.
Wankang Zhao, William Kreahling, David Whalley, Christopher Healy, Frank Mueller. Real-Time Systems, 2006, 34(2): 129-152.
It is advantageous to perform compiler optimizations that attempt to lower the worst-case execution time (WCET) of an embedded application, since tasks with lower WCETs are easier to schedule and more likely to meet their deadlines. Compiler writers in recent years have used profile information to detect the frequently executed paths in a program, and there has been considerable effort to develop compiler optimizations to improve these paths in order to reduce the average-case execution time (ACET). In this paper, we describe an approach to reduce the WCET by adapting and applying optimizations designed for frequent paths to the worst-case (WC) paths in an application. Instead of profiling to find the frequent paths, our WCET path optimization uses feedback from a timing analyzer to detect the WC paths in a function. Since these path-based optimizations may increase code size, the subsequent effects on the WCET due to these optimizations are measured to ensure that the worst-case path optimizations actually improve the WCET before committing to a code size increase. We evaluate these WC path optimizations and present results showing the decrease in WCET versus the increase in code size.

A preliminary version of this paper, entitled "Improving WCET by optimizing worst-case paths," appeared in the 2005 Real-Time and Embedded Technology and Applications Symposium.
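A minimal runnable sketch in Python of the measure-then-commit loop the abstract describes. The `Program` model, the cycle counts, and the `toy_superblock` transform are illustrative stand-ins, not the authors' compiler or timing analyzer.

```python
from dataclasses import dataclass

@dataclass
class Program:
    paths: dict   # path name -> estimated worst-case cycles
    size: int     # code size in bytes

def wcet(prog):
    return max(prog.paths.values())

def worst_case_path(prog):
    return max(prog.paths, key=prog.paths.get)

def optimize_wc_paths(prog, optimizations):
    """Apply each optimization to the current worst-case path and commit
    it only if the timing estimate reports a lower WCET afterwards."""
    best = wcet(prog)
    for opt in optimizations:
        candidate = opt(prog, worst_case_path(prog))  # may grow code size
        if wcet(candidate) < best:                    # commit on improvement
            prog, best = candidate, wcet(candidate)
        # otherwise discard the candidate, avoiding a useless size increase
    return prog, best

# Toy transform: shaves 20 cycles off the WC path at a 64-byte size cost.
# All numbers are made up.
def toy_superblock(prog, path):
    new_paths = dict(prog.paths)
    new_paths[path] -= 20
    return Program(new_paths, prog.size + 64)

p = Program({"A": 100, "B": 140, "C": 90}, size=4096)
p, w = optimize_wc_paths(p, [toy_superblock])
print("WCET:", w, "size:", p.size)  # WCET: 120 size: 4160
```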
Wankang Zhao received his PhD in Computer Science from Florida State University in 2005. He was an associate professor at Nanjing University of Posts and Telecommunications. He is currently working for Datamaxx Corporation.

William Kreahling received his PhD in Computer Science from Florida State University in 2005. He is currently an assistant professor in the Mathematics and Computer Science department at Western Carolina University. His research interests include compilers, computer architecture, and parallel computing.

David Whalley received his PhD in Computer Science from the University of Virginia in 1990. He is currently the E.P. Miles Professor and chair of the Computer Science department at Florida State University. His research interests include low-level compiler optimizations, tools for supporting the development and maintenance of compilers, program performance evaluation tools, predicting execution time, computer architecture, and embedded systems. Some of the techniques that he developed for new compiler optimizations and diagnostic tools are currently being applied in industrial and academic compilers. His research is currently supported by the National Science Foundation. More information about his background and research can be found on his home page, http://www.cs.fsu.edu/~whalley. Dr. Whalley is a member of the IEEE Computer Society and the Association for Computing Machinery.

Chris Healy earned a PhD in Computer Science from Florida State University in 1999 and is currently an associate professor of computer science at Furman University. His research interests include static and parametric timing analysis, real-time and embedded systems, compilers, and computer architecture. He is committed to research experiences for undergraduate students, and his work has been supported by funding from the National Science Foundation. He is a member of the ACM and the IEEE Computer Society.

Frank Mueller is an Associate Professor in Computer Science and a member of the Centers for Embedded Systems Research (CESR) and High Performance Simulations (CHiPS) at North Carolina State University. Previously, he held positions at Lawrence Livermore National Laboratory and Humboldt University Berlin, Germany. He received his PhD from Florida State University in 1994. He has published papers in the areas of embedded and real-time systems, compilers, and parallel and distributed systems. He is a founding member of the ACM SIGBED board and the steering committee chair of the ACM SIGPLAN LCTES conference. He is a member of the ACM, ACM SIGPLAN, ACM SIGBED, and the IEEE Computer Society. He is a recipient of an NSF CAREER Award.
996.
A framework for dialectal Chinese speech recognition is proposed and studied, in which a relatively small dialectal Chinese (that is, Chinese influenced by the native dialect) speech corpus and dialect-related knowledge are adopted to transform a standard Chinese (or Putonghua, abbreviated as PTH) speech recognizer into a dialectal Chinese speech recognizer. Two kinds of knowledge sources are explored: one is expert knowledge and the other is a small dialectal Chinese corpus. These knowledge sources provide information at four levels: the phonetic level, the lexicon level, the language level, and the acoustic decoder level. This paper takes Wu dialectal Chinese (WDC) as an example target language. The goal is to establish a WDC speech recognizer from an existing PTH speech recognizer, based on the Initial-Final structure of the Chinese language and a study of how dialectal Chinese speakers speak Putonghua. The authors propose to use context-independent PTH-IF mappings (where IF means either a Chinese Initial or a Chinese Final), context-independent WDC-IF mappings, and syllable-dependent WDC-IF mappings (obtained from either experts or data), and to combine them with the supervised maximum likelihood linear regression (MLLR) acoustic model adaptation method. To reduce the size of the multi-pronunciation lexicon introduced by the IF mappings, which might otherwise increase lexicon confusion and hence degrade performance, a Multi-Pronunciation Expansion (MPE) method based on the accumulated unigram probability (AUP) is proposed. In addition, some commonly used WDC words are selected and added to the lexicon. Compared with the original PTH speech recognizer, the resulting WDC speech recognizer achieves a 10-18% absolute Character Error Rate (CER) reduction when recognizing WDC, with only a 0.62% CER increase when recognizing PTH. The proposed framework and methods are expected to work not only for Wu dialectal Chinese but also for other dialectal Chinese languages and even other languages.
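The abstract does not spell out the AUP criterion. One plausible reading of the AUP-based MPE step is to expand pronunciations only for the words that cover the top of the accumulated unigram probability mass; here is a small Python sketch under that assumption (the coverage threshold and the interpretation itself are guesses, not the paper's specification).

```python
def select_mpe_words(unigram_probs, coverage=0.8):
    """Pick the words whose pronunciation entries get expanded.

    Words are taken in decreasing unigram probability until their
    accumulated probability mass reaches `coverage`; only those words
    receive the extra dialect-induced pronunciations, keeping the
    lexicon (and its confusability) small.
    """
    selected, acc = [], 0.0
    for word, p in sorted(unigram_probs.items(), key=lambda kv: -kv[1]):
        if acc >= coverage:
            break
        selected.append(word)
        acc += p
    return selected

# Toy unigram distribution; real counts would come from the WDC corpus.
probs = {"的": 0.30, "是": 0.20, "了": 0.15, "在": 0.10, "有": 0.05}
print(select_mpe_words(probs, coverage=0.6))  # ['的', '是', '了']
```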
997.
This essay continues my investigation of 'syntactic semantics': the theory that, pace Searle's Chinese-Room Argument, syntax does suffice for semantics (in particular, for the semantics needed for a computational cognitive theory of natural-language understanding). Here, I argue that syntactic semantics (which is internal and first-person) is what has been called a conceptual-role semantics: the meaning of any expression is the role that it plays in the complete system of expressions. Such a 'narrow', conceptual-role semantics is the appropriate sort of semantics to account (from an 'internal', or first-person, perspective) for how a cognitive agent understands language. Some have argued for the primacy of external, or 'wide', semantics, while others have argued for a two-factor analysis. But although two factors can be specified (one internal and first-person, the other only specifiable in an external, third-person way), only the internal, first-person one is needed for understanding how someone understands. A truth-conditional semantics can still be provided, but only from a third-person perspective.
998.
Ashraf Qadir, William Semke, Jeremiah Neubert. Journal of Intelligent and Robotic Systems, 2014, 74(3-4): 1029-1047.
This paper presents the development of a vision-based neuro-fuzzy controller for a two-axis gimbal system mounted on a small Unmanned Aerial Vehicle (UAV). The controller uses vision-based object detection as input and generates pan and tilt motion and velocity commands for the gimbal in order to keep the object of interest at the center of the image frame. A radial basis function based neuro-fuzzy system and a learning algorithm are developed for the controller to address the dynamic and non-linear characteristics of the gimbal movement. The controller uses two separate but identical radial basis function networks, one for pan and one for tilt motion of the gimbal. Each network is initialized with a fixed number of neurons that act as the rule basis for the fuzzy inference system. The membership functions and rule strengths are then adjusted with feedback from the visual tracking system. The controller is trained off-line until a desired error level is achieved; training then continues on-line to allow the system to accommodate air speed changes. The algorithm learns from the error computed from the detected position of the object in the image frame and generates position and velocity commands for the gimbal movement. Several tests, including lab tests and actual flight tests of the UAV, have been carried out to demonstrate the effectiveness of the controller. Test results show that the controller is able to converge effectively and generate accurate position and velocity commands to keep the object at the center of the image frame.
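A minimal single-axis sketch of such an RBF controller in Python (the abstract calls for two identical networks, one per axis). The centers, width, learning rate, and the proportional "teacher" signal below are all illustrative assumptions, not the paper's parameters or learning procedure.

```python
import numpy as np

class RBFController:
    """Radial-basis-function controller for one gimbal axis: maps the
    object's pixel offset from the image center to a velocity command."""

    def __init__(self, centers, width=50.0):
        self.centers = np.asarray(centers, dtype=float)
        self.width = width
        self.weights = np.zeros(len(self.centers))

    def _phi(self, x):
        # Gaussian membership values of the input against each rule center.
        return np.exp(-((x - self.centers) ** 2) / (2 * self.width ** 2))

    def command(self, pixel_error):
        return float(self._phi(pixel_error) @ self.weights)

    def train_step(self, pixel_error, target_cmd, lr=0.05):
        """One gradient step on squared error -- a crude stand-in for the
        paper's off-line/on-line adjustment of rule strengths."""
        phi = self._phi(pixel_error)
        err = target_cmd - phi @ self.weights
        self.weights += lr * err * phi

pan = RBFController(centers=np.linspace(-160, 160, 9))
for _ in range(200):                      # proportional teacher, made up here
    e = np.random.uniform(-160, 160)
    pan.train_step(e, target_cmd=-0.01 * e)
print(pan.command(80.0))                  # roughly -0.8 after training
```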
999.
William Winder. Computers and the Humanities, 2002, 36(3): 295-306.
Parallel to, and to some degree in reaction to, French poststructuralist theorization (as championed by Derrida, Foucault, and Lacan, among others) is a French neo-structuralism built directly on the achievements of structuralism using electronic means. This paper examines some exemplary approaches to text analysis in this neo-structuralist vein: SATOR's topoi dictionary, the WinBrill POS tagger, and François Rastier's interpretative semantics. I consider how a computer-assisted "Wissenschaft" accumulation of expertise complements the neo-structuralist approach. Ultimately, electronic critical studies will be defined by their strategic position at the intersection of the two chief technologies shaping our society: the new information-processing technology of computers and the representational techniques that have accumulated for centuries in texts. Understanding how these two information-management paradigms complement each other is a key issue for the humanities and for computer science, and vital to industry, even beyond the narrow realm of the language industries. The direction of critical studies, a small planet long orbiting in only rarefied academic circles, will be radically altered by the sheer size of the economic stakes implied by a new kind of text, the industrial text, the technological heart of an information society.
1000.
William Gasarch, James Glenn, Clyde P. Kruskal. Journal of Computer and System Sciences, 2008, 74(4): 628-655.
There has been much work on the following question: given n, how large can a subset of {1, ..., n} be that has no arithmetic progression of length 3? We call such sets 3-free. Most of the work has been asymptotic. In this paper we sketch applications of large 3-free sets, present techniques to find large 3-free sets of {1, ..., n} for small n, and give empirical results obtained by coding up those techniques. In the sequel we survey the known techniques for finding large 3-free sets of {1, ..., n} for large n, discuss variants of them, and give empirical results obtained by coding up those techniques and variants.
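As a concrete baseline for what "3-free" means, the classic greedy construction is easy to code: scanning upward and adding each number that completes no 3-term arithmetic progression yields, on {0, ..., n-1}, exactly the numbers with no digit 2 in base 3. This is only a baseline sketch, not one of the paper's techniques.

```python
def greedy_3_free(n):
    """Greedily build a subset of {0, ..., n-1} with no 3-term arithmetic
    progression: x is rejected iff chosen a < b exist with b - a == x - b."""
    chosen, in_set = [], set()
    for x in range(n):
        # x would complete an AP iff a = 2*b - x is already chosen
        # for some chosen b (with a != b).
        if any(2 * b - x in in_set and 2 * b - x != b for b in chosen):
            continue
        chosen.append(x)
        in_set.add(x)
    return chosen

s = greedy_3_free(30)
print(s)       # [0, 1, 3, 4, 9, 10, 12, 13, 27, 28] -- no base-3 digit 2
print(len(s))  # 10
```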