941.
Josh Tenenberg Wolff-Michael Roth David Socha 《Computer Supported Cooperative Work (CSCW)》2016,25(4-5):235-278
Awareness is one of the central concepts in Computer Supported Cooperative Work, though it has often been used in several different senses. Recently, researchers have begun to provide a clearer conceptualization of awareness that provides concrete guidance for the structuring of empirical studies of awareness and the development of tools to support awareness. Such conceptions, however, do not take into account newer understandings of shared intentionality among cooperating actors that recently have been defined by philosophers and empirically investigated by psychologists and psycho-linguists. These newer conceptions highlight the common ground and socially recursive inference that underwrites cooperative behavior. And it is this inference that is often seamlessly carried out in collocated work, so easy to take for granted and hence overlook, that will require computer support if such work is to be partially automated or carried out at a distance. Ignoring the inferences required in achieving common ground may thus focus a researcher or designer on surface forms of “heeding” that miss the underlying processes of intention shared in and through activity that are critical for cooperation to succeed. Shared intentionality thus provides a basis for reconceptualizing awareness in CSCW research, building on and augmenting existing notions. In this paper, we provide a philosophically grounded conception of awareness based on shared intentionality, demonstrate how it accounts for behavior in an empirical study of two individuals in collocated, tightly-coupled work, and provide implications of this conception for the design of computational systems to support tightly-coupled collaborative work.
942.
The WAT (wafer acceptance test) is the last examination performed before a wafer or chip leaves the fab, ensuring the quality and stability of chip performance. In 55 nm CIS (CMOS image sensor) technology, a highly smooth wafer surface is critical for the BSI (backside illumination) process. The traditional WAT process cannot be used; instead, in-line WAT must be performed during the copper-interconnect formation process. However, the added processing time lengthens the period during which the copper interconnect is exposed to air, called the Q-time, which affects the reliability of the copper interconnect. Nitrogen-doped silicon carbide (NDC, also called SiCN) has been used to fabricate copper diffusion barrier films; PECVD SiCN dielectric has a promisingly low dielectric constant for use as a copper diffusion barrier. Copper diffusion barrier films comprise one or more layers of silicon carbide. Covering the copper layer with a single thin NDC pre-layer significantly increases the maximum allowable Q-time for wafer probing; however, after the Q-time, a void forms between the NDC layer and the NDC pre-layer. This work proposes a new two-step NDC process and optimizes the thickness of the NDC pre-layer. The process offers high stability for parametric testing and a long allowable Q-time; these advantages are achieved by tuning the thickness of the NDC pre-layer. The new approach has been analyzed using TEM and parametric tests, and its feasibility has been confirmed experimentally. No void forms between the NDC layers, and high test stability is achieved, when the NDC pre-layer is 120 Å thick.
943.
Christiane Jasmin Reinert‐Weiss Holger Baur Sheikh Abdullah Al Nusayer David Duhme Norbert Frühauf 《Journal of the Society for Information Display》2017,25(2):90-97
Conventional adaptive driving beam headlamps are limited in the number of switchable pixels they can achieve by the number of LEDs and movable elements required. In this paper, it is shown that by integrating an active-matrix liquid crystal display module, fully adaptive high-resolution headlights with 30 k switchable pixels can be realized without mechanical elements and with only a finite number of LEDs.
944.
The hemispherical lens antenna is a candidate for satellite communications-on-the-move, offering good scan performance in a reduced height. A short focal length minimizes height but presents challenges in illuminating the lens. Aperture efficiency is dominated by both the primary feed and dielectric loss. Feed effects are investigated in a threefold approach: spherical wave theory, commercial solver, and measurements. Gain and loss in a 432 mm diameter polyethylene/polystyrene lens are also measured. Gain for a waveguide-fed array of two lenses is 36.3, 38.8, and 41.1 dBi, respectively, at 12.5, 20, and 30 GHz. The performance of a proposed four-element array of equivalent area is then estimated.
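As a rough cross-check of the gain figures quoted above, measured gain can be converted to aperture efficiency via the standard relation η = Gλ²/(4πA). The sketch below is a back-of-envelope estimate, not taken from the paper: it assumes the two-lens array's physical aperture is simply two full 432 mm circular apertures, which may over- or under-state the effective area.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def aperture_efficiency(gain_dbi: float, freq_hz: float, area_m2: float) -> float:
    """Aperture efficiency from measured gain: eta = G * lambda^2 / (4 * pi * A)."""
    g = 10 ** (gain_dbi / 10)  # dBi -> linear gain
    lam = C / freq_hz          # wavelength in metres
    return g * lam ** 2 / (4 * math.pi * area_m2)

# Assumed physical aperture: two full 432 mm circular lens apertures.
area = 2 * math.pi * (0.432 / 2) ** 2

for g_dbi, f_ghz in [(36.3, 12.5), (38.8, 20.0), (41.1, 30.0)]:
    eta = aperture_efficiency(g_dbi, f_ghz * 1e9, area)
    print(f"{f_ghz} GHz: eta ~ {eta:.2f}")
```

Under this assumption the implied efficiency falls from roughly two-thirds at 12.5 GHz to about one-third at 30 GHz, consistent with the abstract's point that feed illumination and dielectric loss dominate.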
945.
Ahilan Kanagasundaram David Dean Sridha Sridharan Houman Ghaemmaghami Clinton Fookes 《International Journal of Speech Technology》2017,20(2):247-259
This paper studies the performance degradation of a Gaussian probabilistic linear discriminant analysis (GPLDA) speaker verification system when only short-utterance data is available for system development. A number of techniques, including utterance partitioning and source-normalised weighted linear discriminant analysis (SN-WLDA) projections, are introduced to improve speaker verification performance in such conditions. Experimental studies have found that when only short-utterance data is available for development, the GPLDA system achieves its best overall performance with a lower number of universal background model (UBM) components. This is a useful observation, since fewer UBM components significantly reduce the computational complexity of the speaker verification system. In limited-session-data conditions, we propose a simple utterance-partitioning technique which, when applied to the LDA-projected GPLDA system, shows over 8% relative improvement in EER over the baseline system on the NIST 2008 truncated 10–10 s conditions. We conjecture that this improvement arises from the apparent increase in the number of sessions produced by our partitioning technique, which helps to better model the GPLDA parameters. Further, partitioning the SN-WLDA-projected GPLDA shows over 16% and 6% relative improvement in EER over LDA-projected GPLDA systems on the NIST 2008 truncated 10–10 s interview-interview condition, and the NIST 2010 truncated 10–10 s interview-interview and telephone-telephone conditions, respectively.
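The EER (equal error rate) figures quoted above are computed from verification trial scores. The sketch below is illustrative only, not the authors' evaluation code: it assumes higher scores indicate a more likely target (same-speaker) trial and approximates the EER as the operating point where false acceptance and false rejection rates meet, by scanning candidate thresholds.

```python
import numpy as np

def eer(genuine, impostor):
    """Approximate equal error rate from genuine/impostor score lists
    (higher score = more likely a target trial)."""
    g = np.asarray(genuine, dtype=float)
    i = np.asarray(impostor, dtype=float)
    best_gap, best_eer = float("inf"), None
    # Scan every distinct score as a candidate decision threshold.
    for t in np.unique(np.concatenate([g, i])):
        frr = np.mean(g < t)   # genuine trials wrongly rejected
        far = np.mean(i >= t)  # impostor trials wrongly accepted
        if abs(far - frr) < best_gap:
            best_gap, best_eer = abs(far - frr), (far + frr) / 2.0
    return best_eer

# Toy scores: mostly well-separated distributions with one outlier on each side.
gen = [2.1, 1.8, 2.5, 1.9, 2.2, 0.4]
imp = [0.3, 0.5, -0.2, 0.1, 1.95, 0.0]
print(eer(gen, imp))  # one error on each side of the threshold -> EER = 1/6
```

A relative EER improvement such as the paper's "over 8%" is then simply `(eer_baseline - eer_new) / eer_baseline`.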
946.
Jing Jiang David Lo Jiahuan He Xin Xia Pavneet Singh Kochhar Li Zhang 《Empirical Software Engineering》2017,22(1):547-578
Forking is the creation of a new software repository by copying another repository. Though forking is controversial in the traditional open source software (OSS) community, it is encouraged and is a built-in feature in GitHub. Developers freely fork repositories, use the code as their own, and make changes. A deep understanding of repository forking can provide important insights for the OSS community and GitHub. In this paper, we explore why and how developers fork, what they fork, and from whom, in GitHub. We collect a dataset containing 236,344 developers and 1,841,324 forks. We conduct surveys, and analyze the programming languages and owners of forked repositories. Our main observations are: (1) Developers fork repositories to submit pull requests, fix bugs, add new features, keep copies, etc. Developers find repositories to fork from various sources: search engines, external sites (e.g., Twitter, Reddit), social relationships, etc. More than 42% of the developers we surveyed agree that an automated recommendation tool would help them pick repositories to fork, while more than 44.4% of developers do not value such a tool. Developers care about repository owners when they fork repositories. (2) A repository written in a developer's preferred programming language is more likely to be forked. (3) Developers mostly fork repositories from their creators. Compared with unattractive repository owners, attractive repository owners have a higher percentage of organizations, more followers, and earlier registration dates in GitHub. Our results show that forking is mainly used to contribute back to original repositories and is beneficial for the OSS community. Moreover, our results show the value of recommendation and provide important insights for GitHub in recommending repositories.
947.
Chen Hajaj Noam Hazon David Sarne 《Autonomous Agents and Multi-Agent Systems》2017,31(3):696-714
The plethora of comparison shopping agents (CSAs) in today's markets enables buyers to query more than a single CSA when shopping, thus expanding the list of sellers whose prices they obtain. This potentially decreases the chance of a purchase within any single interaction between a buyer and a CSA, and consequently decreases each CSA's expected revenue per query. Obviously, a CSA can improve its competitiveness in such settings by acquiring more sellers' prices, potentially yielding a more attractive "best price". In this paper we suggest a complementary approach that improves the attractiveness of the best result returned by intelligently controlling the order in which results are presented to the user, in a way that exploits several known cognitive biases of human buyers. The advantage of this approach is its ability to affect the buyer's tendency to terminate her search for a better price, and hence avoid querying further CSAs, without spending valuable resources on finding additional prices to present. The effectiveness of our method is demonstrated using real data collected from four CSAs for five products. Our experiments confirm that the suggested method effectively influences buyers in a way that is highly advantageous to the CSA compared with the common method of presenting prices. Furthermore, we experimentally show that all components of our method are essential to its success.
948.
949.
Shaio Yan Huang Chi-Chen Lin An-An Chiu David C. Yen 《Information Systems Frontiers》2017,19(6):1343-1356
The objective of this study is to identify financial statement fraud factors and rank their relative importance. First, this study reviews previous studies to identify possible fraud indicators. Expert questionnaires are then distributed. After the questionnaires are collected, Lawshe's approach is employed to eliminate those factors whose CVR (content validity ratio) values do not meet the criterion. The remaining 32 factors are then reviewed by experts as measurements suitable for assessing fraud detection. The Analytic Hierarchy Process (AHP) is utilized to determine the relative weights of the individual items. The AHP result shows that the most important dimension is Pressure/Incentive and the least important is Attitude/Rationalization. In addition, the top five measurements are "Poor performance", "The need for external financing", "Financial distress", "Insufficient board oversight", and "Competition or market saturation". The result provides a significant advantage to auditors and managers in enhancing the efficiency of fraud detection and critical evaluation.
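The AHP weighting step described above can be sketched in a few lines. The pairwise-comparison matrix below is hypothetical (the paper's actual expert judgments are not given); the code uses the common geometric-mean approximation to Saaty's principal-eigenvector method and reports a consistency ratio (CR), where CR < 0.1 is conventionally taken as acceptable.

```python
import numpy as np

# Saaty's random consistency index for matrix sizes 1..9 (standard published values).
RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}

def ahp_weights(pairwise):
    """Priority weights from a reciprocal pairwise-comparison matrix, using the
    geometric-mean (row) approximation to the principal eigenvector."""
    A = np.asarray(pairwise, dtype=float)
    n = A.shape[0]
    gm = A.prod(axis=1) ** (1.0 / n)  # geometric mean of each row
    w = gm / gm.sum()                 # normalise so weights sum to 1
    lam = float((A @ w / w).mean())   # estimate of the principal eigenvalue
    ci = (lam - n) / (n - 1) if n > 2 else 0.0
    cr = ci / RI[n] if RI[n] > 0 else 0.0
    return w, cr

# Hypothetical 3-factor judgment matrix: the first factor is judged 3x as
# important as the second and 5x as important as the third.
A = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
w, cr = ahp_weights(A)
print(w, cr)  # w[0] dominates; a small CR indicates consistent judgments
```

Ranking the 32 measurements then amounts to sorting them by the product of their dimension weight and their within-dimension weight.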
950.
One of the recurring questions in designing dynamic control environments is whether providing more information leads to better operational decisions. The idea of having every piece of information is increasingly tempting (and in safety-critical domains often mandatory) but has become a potential obstacle for designers and operators. The present study examined this challenge of appropriate information design and usability within a railway control setting. A laboratory study was conducted to investigate the presentation of different levels of information (taken from the data-processing framework of Dadashi et al. in Ergonomics 57(3):387–402, 2014) and their association with, and potential prediction of, the performance of a human operator completing a cognitively demanding problem-solving scenario within railways. Results indicated that presenting users only with information corresponding to their cognitive task, in the absence of other, non-task-relevant information, improves their problem-solving/alarm-handling performance. Knowing the key features of interest to the various agents (machine or human) and using the data-processing framework to determine the optimal level of information required by each of these agents could potentially lead to safer and more usable designs.