122.
Martin Bauer, Florian Schornbaum, Christian Godenschwager, Matthias Markl, Daniela Anderl, Harald Köstler 《International Journal of Parallel, Emergent and Distributed Systems》2016,31(6):529-542
We present a Python extension to the massively parallel HPC simulation toolkit waLBerla. waLBerla is a framework for stencil-based algorithms operating on block-structured grids, with its main application field being fluid simulations in complex geometries using the lattice Boltzmann method. Careful performance engineering results in excellent node performance and good scalability to over 400,000 cores. To increase the usability and flexibility of the framework, a Python interface was developed. Python extensions are used at all stages of the simulation pipeline: they simplify and automate scenario setup, evaluation, and plotting. We show how our Python interface improves on the existing text-file-based configuration mechanism, providing features such as automatic nondimensionalization of physical quantities and handling of complex parameter dependencies. Furthermore, Python is used to process and evaluate results while the simulation is running, leading to smaller output files and the possibility of adjusting parameters depending on the current simulation state. C++ data structures are exported such that seamless interfacing to other numerical Python libraries is possible. The expressive power of Python and the performance of C++ make it possible to develop efficient code with little effort.
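As an illustration of the nondimensionalization feature this abstract mentions, here is a minimal sketch in plain Python. The function name, units, and structure are assumptions for illustration, not waLBerla's actual API; the conversion formulas are the standard lattice Boltzmann (BGK) relations.

```python
# A hypothetical helper (names are illustrative, not waLBerla's API) showing
# the kind of automatic nondimensionalization a Python setup script can do:
# physical parameters in SI units are converted to lattice units using the
# grid spacing dx [m] and the time step dt [s].
def to_lattice_units(nu_phys, u_phys, dx, dt):
    """Convert kinematic viscosity [m^2/s] and velocity [m/s] to lattice units."""
    nu_l = nu_phys * dt / dx**2          # lattice viscosity
    u_l = u_phys * dt / dx               # lattice velocity
    omega = 1.0 / (3.0 * nu_l + 0.5)     # BGK relaxation rate
    return {"nu": nu_l, "u": u_l, "omega": omega}

# Water-like scenario: nu = 1e-6 m^2/s, u = 0.01 m/s, dx = 1 mm, dt = 0.1 ms.
params = to_lattice_units(1e-6, 0.01, 1e-3, 1e-4)
```

A setup script can derive all lattice parameters from a handful of physical inputs this way, which is exactly the kind of parameter dependency a flat text-file configuration cannot express.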
124.
Paul Luff, Christian Heath, Marcus Sanchez Svensson 《International Journal of Human-Computer Interaction》2013,29(4):410-436
Alongside the emergence of fieldwork studies for design, there has been a discussion of how best these studies can inform system development. Concerns have been expressed as to whether their most appropriate contribution is a list of requirements or a set of design recommendations. This article explores awareness, a recurrent issue that has emerged from fieldwork studies in Computer-Supported Cooperative Work, and, with respect to a particular system development project, discusses some of the implications for the development and deployment of one particular kind of technology, image recognition systems, in particular organizational settings. In the settings in question, surveillance centers or operations rooms, staff use a range of practices to maintain awareness. Rather than extending field studies so that they can better assist design, it may be worth considering how workplace studies can contribute to a respecification of key concepts, such as awareness, that are critical to an understanding of how technologies are used and deployed in everyday environments.
125.
Christian Meske, Konstantin Wilms, Stefan Stieglitz 《Information Systems Management》2013,30(4):350-367
In this study, we first show that while both the perceived usefulness and the perceived enjoyment of enterprise social networks affect employees' intentions for continued participation, the utilitarian value significantly outpaces the hedonic value. Second, we show that the network's utilitarian value is constituted by its digital infrastructure characteristics: versatility, adaptability, interconnectedness, and invisibility-in-use. The study is set within a software engineering company and is based on quantitative survey research, applying partial least squares structural equation modeling.
126.
Tim Furche, Georg Gottlob, Giovanni Grasso, Christian Schallhart, Andrew Sellers 《The VLDB Journal: The International Journal on Very Large Data Bases》2013,22(1):47-72
The evolution of the web has outpaced itself: a growing wealth of information and increasingly sophisticated interfaces necessitate automated processing, yet existing automation and data extraction technologies have been overwhelmed by this very growth. To address this trend, we identify four key requirements for web data extraction, automation, and (focused) web crawling: (1) interact with sophisticated web application interfaces, (2) precisely capture the relevant data to be extracted, (3) scale with the number of visited pages, and (4) readily embed into existing web technologies. We introduce OXPath, an extension of XPath for interacting with web applications and extracting the data thus revealed, matching all of the above requirements. OXPath's page-at-a-time evaluation guarantees memory use independent of the number of visited pages, yet remains polynomial in time. We experimentally validate the theoretical complexity and demonstrate that OXPath's resource consumption is dominated by page rendering in the underlying browser. With an extensive study of sublanguages and properties of OXPath, we pinpoint the effect of specific features on evaluation performance. Our experiments show that OXPath outperforms existing commercial and academic data extraction tools by a wide margin.
127.
Kostas Tzoumas, Amol Deshpande, Christian S. Jensen 《The VLDB Journal: The International Journal on Very Large Data Bases》2013,22(1):3-27
Query optimizers rely on statistical models that succinctly describe the underlying data. Models are used to derive cardinality estimates for intermediate relations, which in turn guide the optimizer to choose the best query execution plan. The quality of the resulting plan is highly dependent on the accuracy of the statistical model that represents the data. It is well known that small errors in the model estimates propagate exponentially through joins, and may result in the choice of a highly sub-optimal query execution plan. Most commercial query optimizers make the attribute value independence assumption: all attributes are assumed to be statistically independent. This reduces the statistical model of the data to a collection of one-dimensional synopses (typically in the form of histograms), and it permits the optimizer to estimate the selectivity of a predicate conjunction as the product of the selectivities of the constituent predicates. However, this independence assumption is more often than not wrong, and is considered to be the most common cause of sub-optimal query execution plans chosen by modern query optimizers. We take a step towards a principled and practical approach to performing cardinality estimation without making the independence assumption. By carefully using concepts from the field of graphical models, we are able to factor the joint probability distribution over all the attributes in the database into small, usually two-dimensional distributions, without a significant loss in estimation accuracy. We show how to efficiently construct such a graphical model from the database using only two-way join queries, and we show how to perform selectivity estimation in a highly efficient manner. We integrate our algorithms into the PostgreSQL DBMS. Experimental results indicate that estimation errors can be greatly reduced, leading to orders of magnitude more efficient query execution plans in many cases. Optimization time is kept in the range of tens of milliseconds, making this a practical approach for industrial-strength query optimizers.
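The independence-assumption error this abstract describes is easy to see on a toy table. The sketch below (an illustration, not the paper's algorithm, and the table values are invented) compares the product-of-marginals estimate with the true selectivity when two attributes are perfectly correlated, and shows that keeping a small two-dimensional distribution for the pair, as a graphical model would, recovers the exact answer.

```python
# Toy table of (make, fuel) pairs: the two attributes are perfectly correlated.
rows = [("tesla", "electric")] * 50 + [("hilux", "diesel")] * 50

def selectivity(pred):
    """Fraction of rows satisfying a predicate."""
    return sum(1 for r in rows if pred(r)) / len(rows)

# Independence assumption: multiply per-attribute selectivities.
est_indep = (selectivity(lambda r: r[0] == "tesla")
             * selectivity(lambda r: r[1] == "electric"))  # 0.5 * 0.5 = 0.25

# True selectivity of the conjunction: half the rows qualify.
true_sel = selectivity(lambda r: r[0] == "tesla" and r[1] == "electric")  # 0.5

# A small two-dimensional distribution over the correlated pair, as kept by a
# graphical model that does not assume independence, is exact here.
joint = {}
for r in rows:
    joint[r] = joint.get(r, 0) + 1 / len(rows)
est_joint = joint.get(("tesla", "electric"), 0.0)  # 0.5
```

Even in this two-attribute case the independence estimate is off by a factor of two; through a chain of joins such errors compound multiplicatively, which is the propagation effect the abstract refers to.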
128.
Chris Parnin, Christian Bird, Emerson Murphy-Hill 《Empirical Software Engineering》2013,18(6):1047-1089
Support for generic programming was added to the Java language in 2004, representing perhaps the most significant change to one of the most widely used programming languages today. Researchers and language designers anticipated this addition would relieve many long-standing problems plaguing developers, but surprisingly, no one has yet measured how generics have been adopted and used in practice. In this paper, we report on the first empirical investigation into how Java generics have been integrated into open source software by automatically mining the history of 40 popular open source Java programs, traversing more than 650 million lines of code in the process. We evaluate five hypotheses and research questions about how Java developers use generics. For example, our results suggest that generics sometimes reduce the number of type casts and that generics are usually adopted by a single champion in a project, rather than by all committers. We also offer insights into why some features may be adopted sooner and other features may be held back.
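As a flavor of the kind of mining the abstract describes, here is a toy detector (not the paper's tooling; the regex and sample snippets are illustrative assumptions) that counts parameterized-type uses in Java source text, a signal one could track across a project's commit history to measure generics adoption.

```python
import re

# Matches parameterized type uses such as List<String> or Map<String, Integer>.
# A deliberately simple heuristic; real Java parsing needs a proper grammar.
PARAMETERIZED = re.compile(r'\b[A-Z]\w*\s*<\s*[A-Z][\w<>,\s]*>')

def count_parameterized(java_source: str) -> int:
    """Number of parameterized-type uses found in a Java source string."""
    return len(PARAMETERIZED.findall(java_source))

# Pre-generics style: raw types force an explicit cast on retrieval.
pre_generics = 'List names = new ArrayList(); String s = (String) names.get(0);'
# Generic style: the type parameter makes the cast unnecessary.
with_generics = 'List<String> names = new ArrayList<String>(); String s = names.get(0);'
```

Comparing the two snippets also illustrates the cast-reduction effect the study measures: the generic version declares the element type once and drops the `(String)` cast.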
129.
When implementing a propagator for a constraint, one must decide about variants: When implementing min, should one also implement max? Should one implement linear constraints both with unit and non-unit coefficients? Constraint variants are ubiquitous: implementing them requires considerable (if not prohibitive) effort and decreases maintainability, but will deliver better performance than resorting to constraint decomposition. This paper shows how to use views to derive propagator variants, combining the efficiency of dedicated propagator implementations with the simplicity and effortlessness of decomposition. A model for views and derived propagators is introduced. Derived propagators are proved to be perfect in that they inherit essential properties such as correctness and domain and bounds consistency. Techniques for systematically deriving propagators such as transformation, generalization, specialization, and type conversion are developed. The paper introduces an implementation architecture for views that is independent of the underlying constraint programming system. A detailed evaluation of views implemented in Gecode shows that derived propagators are efficient and that views often incur no overhead. Views have proven essential for implementing Gecode, substantially reducing the amount of code that needs to be written and maintained.
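The min/max question the abstract opens with has a concrete answer under views. Below is a minimal Python sketch (a simplification for illustration; Gecode's actual architecture is templated C++) of deriving a max propagator from a min propagator through a minus view, using max(x, y) = -min(-x, -y).

```python
class IntVar:
    """A variable with simple integer bounds [lo, hi]."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

class MinusView:
    """Presents a variable as its negation: bounds swap and change sign."""
    def __init__(self, var):
        self._v = var
    @property
    def lo(self):
        return -self._v.hi
    @lo.setter
    def lo(self, val):
        self._v.hi = -val
    @property
    def hi(self):
        return -self._v.lo
    @hi.setter
    def hi(self, val):
        self._v.lo = -val

def propagate_min(z, x, y):
    """Bounds propagator for z = min(x, y)."""
    z.hi = min(z.hi, x.hi, y.hi)          # z cannot exceed either argument
    z.lo = max(z.lo, min(x.lo, y.lo))     # z is at least the smaller lower bound
    x.lo = max(x.lo, z.lo)                # each argument is at least z
    y.lo = max(y.lo, z.lo)

def propagate_max(z, x, y):
    """Derived propagator: reuse the min propagator on minus views."""
    propagate_min(MinusView(z), MinusView(x), MinusView(y))

x, y, z = IntVar(0, 5), IntVar(3, 7), IntVar(-10, 10)
propagate_max(z, x, y)   # prunes z to [3, 7]
```

Only the min propagator is written and maintained; the max variant is obtained for free through the view, which is the code-reduction benefit the abstract reports for Gecode.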
130.
Kevin Curran, Michelle Murray, David Stephen Norrby, Martin Christian 《New Review of Information Networking》2013,18(1-2):47-59
Libraries as we know them today can be defined by the term Library 1.0: resources are kept on shelves or on a computer behind a login. These resources can be taken from a shelf, checked out by the library staff, taken home for a certain length of time and absorbed, and then returned to the library for someone else to use. Library 1.0 is a one-directional service that takes people to the information they require. Library 2.0, or L2 as it is now more commonly known, aims to take the information to the people by bringing the library service to the Internet and getting users more involved by encouraging feedback and participation. This paper presents an overview of Library 2.0 and introduces Web 2.0 concepts.