31.
We investigated intrinsic noise in plasmonic sensors caused by adsorption and desorption (AD) of gaseous analytes on the sensor surface. We analyzed the general situation in which a large number of different analyte species is present, and applied our model to various analyte mixtures, including environmental pollutants and toxic and hazardous substances. The spectral density of the mean square refractive index fluctuations follows a dependence similar to that of generation-recombination noise in photodetectors: flat at lower frequencies and sharply decreasing at higher ones. Some of the calculated noise levels are well within the detection range of conventional surface plasmon resonance sensors. A peak of AD noise is observed in the temperature dependence of the mean square refractive index fluctuations, so the sensor operating temperature may be optimized to obtain a larger signal-to-noise ratio. A significant property of AD noise is that it rises as the plasmonic sensor area decreases, which means that it will be even more pronounced in modern nanoplasmonic devices. Our analysis is valid both for conventional surface plasmon resonance devices and for general nanoplasmonic devices.
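For orientation, the frequency dependence described here (flat at low frequencies, sharply decreasing at high frequencies) is characteristic of a Lorentzian spectrum of the kind associated with generation-recombination noise. A generic illustrative form, with an effective adsorption-desorption time constant \(\tau\) and low-frequency plateau \(S_0\) introduced here only for the sketch (the paper's exact expression may differ), is

\[
S_{\delta n}(f) \;=\; \frac{S_0}{1 + (2\pi f \tau)^2},
\]

which is approximately constant for \(f \ll 1/(2\pi\tau)\) and falls off as \(1/f^2\) well above that corner frequency.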
32.
This article investigates portfolio management in double unknown situations. Double unknown refers to a situation in which the level of uncertainty is high and both technology and markets are as yet unknown. This situation can be an opportunity for new discoveries, the creation of new performance solutions, and the setting of a direction for portfolio structuring. The literature highlights that the double unknown situation is a prerequisite to designing generic technologies that are able to address many existing and emerging markets and create value across a broad range of applications. The purpose of this paper is to investigate the initial phases of generic technology governance and the associated portfolio structuring in multi-project firms. We studied three empirical contexts of portfolio structuring at the European semiconductor provider STMicroelectronics. The results demonstrate that (1) portfolio management for generic technologies is highly transversal and comprises creating both modules to address market complementarities and the core element of a technological system, the platform, and (2) the design of generic technologies requires 'cross-application' managers who are able to supervise the interactions among innovative concepts developed in different business and research groups and who are responsible for structuring and managing technological and marketing exploration portfolios within the organizational structures of a company.
33.
The objective of this paper is to elucidate an organizational process for the design of generic technologies (GTs). While recognizing the success of GTs, the literature on innovation management generally describes their design according to evolutionary strategies featuring multiple and uncertain trials, resulting in the discovery of common features among multiple applications. This random walk depends on multiple market and technological uncertainties that are considered exogenous: however smart the 'gambler' may be, he must play in a given probability space. But what happens when the innovator is not a gambler but a designer, i.e., when the actor is able to establish new links between previously independent emerging markets and technologies? Formally speaking, the actor designs a new probability space. Building on a case study of two technological development programmes at the French Center for Atomic Energy, we present cases of GTs that correspond to this logic of designing the probability space, i.e. the logic of intentionally designing common features that bridge the gap between a priori heterogeneous applications and technologies. This study provides another example showing that the usual trial-and-learning strategy is not the only way to design GTs and that these technologies can be designed by intentionally building new interdependencies between markets and technologies. Our main result is that building these interdependencies requires organizational patterns that correspond to a 'design of exploration' phase in which multiple technology suppliers and application providers are involved in designing both the probability space itself and the instruments to explore and benefit from this new space.
34.
The Northern Eurasian land mass encompasses a diverse array of land cover types including tundra, boreal forest, wetlands, semi-arid steppe, and agricultural land use. Despite the well-established importance of Northern Eurasia in the global carbon and climate system, the distribution and properties of land cover in this region are not well characterized. To address this knowledge and data gap, a hierarchical mapping approach was developed that encompasses the study area for the Northern Eurasia Earth System Partnership Initiative (NEESPI). The Northern Eurasia Land Cover (NELC) database developed in this study follows the FAO Land Cover Classification System and provides nested groupings of land cover characteristics, with separate layers for land use, wetlands, and tundra. The database implementation is substantially different from other large-scale land cover datasets that provide maps based on a single set of discrete classes. By providing a database consisting of nested maps and complementary layers, the NELC database provides a flexible framework that allows users to tailor maps to suit their needs. The methods used to create the database combine empirically derived climate–vegetation relationships with results from supervised classifications based on Moderate Resolution Imaging Spectroradiometer (MODIS) data. The hierarchical approach provides an effective framework for integrating climate–vegetation relationships with remote sensing-based classifications, and also allows sources of error to be characterized and attributed to specific levels in the hierarchy. The cross-validated accuracy was 73% for the land cover map and 73% and 91% for the agriculture and wetland classifications, respectively. These results support the use of hierarchical classification and climate–vegetation relationships for mapping land cover at continental scales.
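To make the hierarchical idea concrete, the sketch below shows a schematic two-level supervised classification with cross-validated accuracy, in the spirit of nested land-cover mapping. The feature arrays, class counts, and the random-forest choice are illustrative placeholders only; they are not the NELC methodology, which combines climate–vegetation relationships with supervised classification of MODIS data.

# Minimal sketch of a two-level (hierarchical) classification with cross-validation.
# All data below are synthetic stand-ins for MODIS-derived predictors and FAO-LCCS labels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))            # stand-in for per-pixel spectral/climate predictors
y_top = rng.integers(0, 3, size=500)     # level 1: broad cover type (e.g. forest / wetland / agriculture)
y_sub = rng.integers(0, 2, size=500)     # level 2: refinement within a broad type

# Level 1: broad land-cover classes, accuracy estimated by cross-validation
top = RandomForestClassifier(n_estimators=100, random_state=0)
print("level-1 CV accuracy:", cross_val_score(top, X, y_top, cv=5).mean())

# Level 2: a separate classifier per broad class (the nested refinement step)
for c in np.unique(y_top):
    mask = y_top == c
    sub = RandomForestClassifier(n_estimators=100, random_state=0)
    print("class", c, "level-2 CV accuracy:",
          cross_val_score(sub, X[mask], y_sub[mask], cv=5).mean())

A practical benefit of this nesting, as the abstract notes, is that errors can be attributed to a specific level of the hierarchy rather than to a single flat classification.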
35.
The present paper focuses on a finite-time nonzero-sum differential game of m players. The principal difference from the standard setting is that the players' states are governed by a boundary value system of ordinary differential equations (rather than an initial value system). By the outcome of the game we understand an equilibrium situation, and our purpose is to design a well-founded method for finding the equilibrium controls.
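As a generic formulation of this setting (standard notation chosen here only for illustration; the authors' exact system and equilibrium concept may differ), the state obeys a boundary value problem rather than an initial value problem, and each of the m players minimizes an individual cost:

\[
\dot{x}(t) = f\bigl(t, x(t), u_1(t), \dots, u_m(t)\bigr), \qquad g\bigl(x(0), x(T)\bigr) = 0,
\]
\[
J_i(u_1, \dots, u_m) = \phi_i\bigl(x(T)\bigr) + \int_0^T L_i\bigl(t, x(t), u_1(t), \dots, u_m(t)\bigr)\,dt, \qquad i = 1, \dots, m.
\]

An equilibrium is a profile \((u_1^{*}, \dots, u_m^{*})\) such that no player can decrease \(J_i\) by unilaterally changing \(u_i\); the method sought in the paper is a procedure for computing such equilibrium controls under the boundary value constraint.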
36.
With the increasing importance of XML, LDAP directories, and text-based information sources on the Internet, there is an ever-greater need to evaluate queries involving (sub)string matching. In many cases, matches need to be on multiple attributes/dimensions, with correlations between the multiple dimensions. Effective query optimization in this context requires good selectivity estimates. In this paper, we use pruned count-suffix trees (PSTs) as the basic data structure for substring selectivity estimation. For the 1-D problem, we present a novel technique called MO (Maximal Overlap). We then develop and analyze two 1-D estimation algorithms, MOC and MOLC, based on MO and a constraint-based characterization of all possible completions of a given PST. For the k-D problem, we first generalize PSTs to multiple dimensions and develop a space- and time-efficient probabilistic algorithm to construct k-D PSTs directly. We then show how to extend MO to multiple dimensions. Finally, we demonstrate, both analytically and experimentally, that MO is both practical and substantially superior to competing algorithms. Received April 28, 2000 / Accepted July 11, 2000
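The sketch below illustrates the underlying data structure only: a count-suffix structure with pruning, plus a naive substring selectivity estimate that falls back to the longest retained prefix. It is a simplified stand-in (a suffix trie rather than a compressed tree, and a fallback heuristic rather than the paper's MO/MOC/MOLC algorithms); the string sample, pruning threshold, and method names are illustrative assumptions.

# Minimal sketch: count-suffix trie with pruning and a naive substring selectivity estimate.
from collections import defaultdict

class CountSuffixTrie:
    """Suffix trie with per-substring occurrence counts (a simplification of a
    pruned count-suffix tree; real implementations use compressed edges)."""
    def __init__(self):
        self.counts = defaultdict(int)   # substring -> number of occurrences

    def add(self, s, max_len=8):
        # register every substring (every prefix of every suffix) up to max_len
        for i in range(len(s)):
            for j in range(i + 1, min(len(s), i + max_len) + 1):
                self.counts[s[i:j]] += 1

    def prune(self, threshold):
        # keep only substrings whose count reaches the pruning threshold
        self.counts = defaultdict(int, {k: v for k, v in self.counts.items()
                                        if v >= threshold})

    def estimate(self, query):
        # naive estimate: return the count of the query if it survived pruning,
        # otherwise the count of its longest retained prefix
        # (MO instead combines all maximal overlapping retained substrings).
        for length in range(len(query), 0, -1):
            c = self.counts.get(query[:length])
            if c:
                return c
        return 0

trie = CountSuffixTrie()
for name in ["jones", "johnson", "johansson", "smith"]:
    trie.add(name)
trie.prune(threshold=2)
print(trie.estimate("john"))   # falls back to the retained prefix "joh"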
37.
The Internet and the WWW play an increasingly important role in our information society and are now among the major sources of information at every level of society. The overwhelming accessibility of data on a global scale, however, does not necessarily translate into widespread utility: we often find that we are drowning in data, with few tools to help manage the data relevant to our various activities. In this paper, we argue that the WWW and its end-users could benefit from the existence of a conceptual web site schema. We propose such a conceptual web site schema, which describes what information is available in a web site and how this information is structured into pages and links. To allow this information to be communicated over the web, we developed an XML Document Type Definition (DTD) for this conceptual web site schema. We also illustrate the feasibility of the approach with a simple application program developed using the XML Document Object Model (DOM). This revised version was published online in August 2006 with corrections to the Cover Date.
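To suggest how such a schema might be consumed, the snippet below parses a small, entirely hypothetical site-schema document with the XML DOM and lists the information items and links each page exposes. The element and attribute names are invented for illustration; they are not the DTD proposed in the paper.

# Minimal sketch: reading a hypothetical conceptual web site schema via the XML DOM.
from xml.dom.minidom import parseString

schema = """<site name="example.org">
  <page id="home">
    <info item="news"/>
    <link target="catalog"/>
  </page>
  <page id="catalog">
    <info item="products"/>
  </page>
</site>"""

doc = parseString(schema)
for page in doc.getElementsByTagName("page"):
    items = [i.getAttribute("item") for i in page.getElementsByTagName("info")]
    links = [l.getAttribute("target") for l in page.getElementsByTagName("link")]
    print(page.getAttribute("id"), "provides", items, "and links to", links)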
38.
Functions that optimize Laplacian-based energies have become popular in geometry processing, e.g. for shape deformation, smoothing, multiscale kernel construction and interpolation. Minimizers of Dirichlet energies, or solutions of Laplace equations, are harmonic functions that enjoy the maximum principle, which ensures that no spurious local extrema occur in the interior of the solved domain. However, these functions are only C0 at the constrained points, which often causes smoothness problems. For this reason, many applications optimize higher-order Laplacian energies such as the biharmonic or triharmonic energy. Their minimizers exhibit increasing orders of continuity but lose the maximum principle and show oscillations. In this work, we identify characteristic artifacts caused by spurious local extrema, and provide a framework for minimizing quadratic energies on manifolds while constraining the solution to obey the maximum principle in the solved region. Our framework allows the user to specify locations and values of desired local maxima and minima, while preventing any other local extrema. We demonstrate our method on the smoothness energies corresponding to popular polyharmonic functions and show its usefulness for fast handle-based shape deformation, controllable color diffusion, and topologically-constrained data smoothing.
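For reference, the standard textbook definitions behind these statements (generic forms, not notation specific to this paper): the Dirichlet energy and its higher-order analogues over a domain \(\Omega\) are

\[
E_1(u) = \tfrac{1}{2}\int_\Omega \|\nabla u\|^2\, dA, \qquad
E_2(u) = \tfrac{1}{2}\int_\Omega (\Delta u)^2\, dA, \qquad
E_3(u) = \tfrac{1}{2}\int_\Omega \|\nabla \Delta u\|^2\, dA,
\]

whose constrained minimizers satisfy the harmonic, biharmonic and triharmonic equations \(\Delta u = 0\), \(\Delta^2 u = 0\) and \(\Delta^3 u = 0\), respectively. Only the harmonic case obeys the maximum principle; the higher-order minimizers gain smoothness at the constraints (\(C^1\), \(C^2\)) but may oscillate and develop spurious extrema, which is exactly the artifact the constrained framework in this paper targets.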
39.
The present study examined the impact of a judicial warning, witness age, and the method of testimony presentation on mock jurors' perceptions of the credibility of witnesses and the accused, and on guilty verdicts. The participants were 435 undergraduate university students who listened to an audio-taped summary of a theft trial followed by abbreviated instructions to the jury. Witness age was 7, 10, or 23; the jury warning was either present or absent when the witnesses were children; and the testimony of the prosecution eyewitness and the accused was either presented or summarized. For the taped testimony conditions, the mock witnesses and the accused read a fact pattern describing the events in the case and were audiotaped as they answered a series of questions, which constituted direct and cross-examination. The testimony of the 7-year-old child, compared to that of the 10-year-old, was associated with lower cognitive competence and higher suggestibility, but also with higher accuracy of recall (lower mistaken recollection) and lower credibility of the accused. The pattern of results for the appraisal of the older child was more similar to that for an adult witness. The young adult was judged to be less trustworthy than children of either age. While the presence of a warning had no impact on guilty verdicts when a 7-year-old was a witness, there were fewer guilty verdicts when the witness was 10 years old. Participants also returned fewer guilty verdicts when a young adult's testimony, compared to conditions involving child witnesses, was presented, but not when it was summarized. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
40.
A modified microwave-assisted polyol method was applied to prepare nanoparticulate ceramic powders of different oxides, i.e. Gd2O3, AlO(OH) (boehmite) and TiO2. Owing to the good dielectric properties of the solvents used, such as ethylene glycol, diethylene glycol and 1,4-butanediol, a significant decrease in reaction time was achieved under microwave heating. In the case of AlO(OH) and Gd2O3, primary particle sizes of <5 nm were obtained. The boehmite was found to be intercalated with the solvent. The general applicability of the process is shown and the advantages in terms of properties and processability are described. The powders thus prepared were investigated using X-ray diffractometry, electron microscopy and physisorption.