11.
The characterization of a simple, dual-fiber quartz capillary/fiber optical sensor (C/FOS) for remote excitation and collection of Raman signals is presented. The Raman signals acquired with the C/FOS exhibit a 70-fold sensitivity enhancement and a 50-fold improvement in detectability relative to those obtained with the corresponding conventional dual-fiber sensor without the capillary. A background spectral feature at 790 cm⁻¹ is related to the optical fiber background and is not due to the capillary tube. With no focusing lenses or filters needed at the sample site, the remote Raman C/FOS is easy to assemble and use, and it is relatively inexpensive compared to other designs.
12.
This article investigates portfolio management in double unknown situations. Double unknown refers to a situation in which the level of uncertainty is high and both technology and markets are as yet unknown. This situation can be an opportunity for new discoveries, the creation of new performance solutions, and new directions for portfolio structuring. The literature highlights that the double unknown situation is a prerequisite to designing generic technologies that are able to address many existing and emerging markets and create value across a broad range of applications. The purpose of this paper is to investigate the initial phases of generic technology governance and associated portfolio structuring in multi-project firms. We studied three empirical contexts of portfolio structuring at the European semiconductor provider STMicroelectronics. The results demonstrate that (1) portfolio management for generic technologies is highly transversal and comprises creating both modules to address market complementarities and the core element of a technological system (the platform), and (2) the design of generic technologies requires 'cross-application' managers who are able to supervise the interactions among innovative concepts developed in different business and research groups and who are responsible for structuring and managing technological and marketing exploration portfolios within the organizational structures of a company.
13.
The objective of this paper is to elucidate an organizational process for the design of generic technologies (GTs). While recognizing the success of GTs, the literature on innovation management generally describes their design according to evolutionary strategies featuring multiple and uncertain trials, resulting in the discovery of common features among multiple applications. This random walk depends on multiple market and technological uncertainties that are considered exogenous: as smart as he can be, the 'gambler' must play in a given probability space. However, what happens when the innovator is not a gambler but a designer, i.e., when the actor is able to establish new links between previously independent emerging markets and technologies? Formally speaking, the actor designs a new probability space. Building on a case study of two technological development programmes at the French Center for Atomic Energy, we present cases of GTs that correspond to this logic of designing the probability space, i.e. the logic of intentionally designing common features that bridge the gap between a priori heterogeneous applications and technologies. This study provides another example showing that the usual trial-and-learning strategy is not the only strategy to design GTs and that these technologies can be designed by intentionally building new interdependences between markets and technologies. Our main result is that building these interdependences requires organizational patterns that correspond to a 'design of exploration' phase in which multiple technology suppliers and application providers are involved in designing both the probability space itself and the instruments to explore and benefit from this new space.
14.
The general objective of this paper is to investigate the separation, with microfluidics, of the components of a ternary mixture, when using vacuum or purge gas pervaporation. The ternary mixture considered is a mixture of methanol (MeOH), water (H2O) and hydrogen peroxide (H2O2). In a previous work (Ziemecka in Lab Chip 15:504–511, 2015), we presented the proof of concept of a microfluidic device, which was able to partially separate MeOH from the other components of such a mixture, by using vacuum pervaporation. Here, our goal is to optimize the operation of this device, by considering vacuum pervaporation, but also purge gas pervaporation. First, we provide a mathematical model of the device. This model is used to discuss the influence of the operating parameters on the device operation. To apply this model to the considered mixture, we determined the MeOH and H2O permeability coefficients of PDMS membranes prepared from different concentrations of the curing agent. The model is then successfully compared to experimental data. The model and the experiments show that high efficiencies can be reached for both vacuum and purge gas pervaporation, provided a fine-tuning of the operating parameters. For instance, a good efficiency of the vacuum pervaporation is reached at high temperature and low pressure. For purge gas pervaporation, it is reached for low temperature and high pressure.
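The abstract does not reproduce the paper's model. As a hedged illustration of the kind of solution-diffusion flux law that pervaporation models of this type typically build on, the Python sketch below compares vacuum and purge-gas driving forces for MeOH and H2O transport through a dense membrane. All numerical values (permeabilities, thickness, partial pressures) are invented for illustration and are not the paper's measured PDMS coefficients.

```python
def flux(P, l, p_feed, p_perm):
    """Solution-diffusion molar flux through a dense membrane:
    J = (P / l) * (p_feed - p_perm), with P the permeability, l the
    membrane thickness, and p_* the species partial pressures."""
    return (P / l) * max(p_feed - p_perm, 0.0)

# Hypothetical values, NOT the paper's measured PDMS coefficients.
l = 20e-6                       # membrane thickness, m
P_meoh, P_h2o = 3e-13, 5e-14    # permeabilities, mol*m/(m^2*s*Pa)
p_meoh, p_h2o = 8e3, 2e3        # feed-side partial pressures, Pa

# Vacuum pervaporation: downstream partial pressures are ~0, so the
# driving force is the full feed-side partial pressure.
J_meoh_vac = flux(P_meoh, l, p_meoh, 0.0)
J_h2o_vac = flux(P_h2o, l, p_h2o, 0.0)

# Purge-gas pervaporation: the sweep gas sets a nonzero downstream
# partial pressure, reducing the driving force.
J_meoh_purge = flux(P_meoh, l, p_meoh, 1e3)

# Ideal selectivity toward MeOH (ratio of permeances).
selectivity = (J_meoh_vac / p_meoh) / (J_h2o_vac / p_h2o)
print(selectivity)
```

In this simple law the ideal selectivity reduces to the permeability ratio P_meoh/P_h2o; the temperature and pressure trade-offs reported in the abstract enter through the temperature dependence of the permeabilities and of the feed-side partial pressures.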
15.
One has a large computational workload that is “divisible” (its constituent tasks’ granularity can be adjusted arbitrarily) and one has access to p remote computers that can assist in computing the workload. How can one best utilize the computers? Two features complicate this question. First, the remote computers may differ from one another in speed. Second, each remote computer is subject to interruptions of known likelihood that kill all work in progress on it. One wishes to orchestrate sharing the workload with the remote computers in a way that maximizes the expected amount of work completed. We deal with three versions of this problem. The simplest version ignores communication costs but allows computers to differ in speed (a heterogeneous set of computers). The other two versions account for communication costs, first with identical remote computers (a homogeneous set of computers), and then with computers that may differ in speed. We provide exact expressions for the optimal work expectation for all three versions of the problem: via explicit closed-form expressions for the first two versions, and via a recurrence that computes this optimal value for the last, most general version.
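The paper's closed-form expressions are not reproduced in the abstract. As a hedged illustration of the simplest version (heterogeneous speeds, no communication costs), the sketch below assumes all-or-nothing chunks and exponentially distributed interruptions, so a chunk w assigned to a computer with speed s and interruption rate lam completes with probability exp(-lam*w/s). The cluster parameters are hypothetical.

```python
import math

def expected_work(w, speed, lam):
    """Expected work from assigning chunk w to one computer: the chunk
    counts only if no interruption occurs during its processing time
    w/speed, which (with rate-lam exponential interruptions) has
    probability exp(-lam * w / speed)."""
    return w * math.exp(-lam * w / speed)

def optimal_chunk(speed, lam):
    """Setting d/dw [w * exp(-lam*w/speed)] = 0 gives w* = speed/lam."""
    return speed / lam

# Hypothetical heterogeneous cluster: (speed, interruption rate) pairs.
computers = [(2.0, 0.5), (1.0, 0.25), (4.0, 1.0)]
chunks = [optimal_chunk(s, lam) for s, lam in computers]
total = sum(expected_work(w, s, lam) for w, (s, lam) in zip(chunks, computers))
print(total)
```

With no communication costs and an ample workload, the allocations decouple: each computer's optimal expected contribution is (speed/lam)/e, and the total is just their sum.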
16.
This paper presents a novel technique for three-dimensional (3D) human motion capture using a set of two non-calibrated cameras. The user’s five extremities (head, hands and feet) are extracted, labeled and tracked after silhouette segmentation. As they are the minimal number of points that can be used to enable whole-body gestural interaction, we will henceforth refer to these features as crucial points. The crucial point candidates are defined as the local maxima of the geodesic distance with respect to the center of gravity of the actor region that lie on the silhouette boundary; candidates are subsequently labeled using 3D triangulation and inter-image tracking. Due to its low computational complexity, the system can run in real time on standard personal computers, with an average error rate between 4% and 9% in realistic situations, depending on the context and segmentation quality.
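As an illustration of the crucial-point definition above (local maxima of the geodesic distance from the region's center of gravity, restricted to the silhouette boundary), here is a minimal Python sketch on a toy binary mask. The cross-shaped mask and the 4-connected BFS are assumptions for illustration, not the paper's implementation.

```python
from collections import deque

# Toy 2D silhouette mask (1 = actor region); the tips of this cross
# should emerge as crucial-point candidates.
mask = [
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
]
H, W = len(mask), len(mask[0])
fg = [(r, c) for r in range(H) for c in range(W) if mask[r][c]]

# Center of gravity of the actor region, snapped to the nearest
# foreground pixel so the BFS starts inside the silhouette.
cr = sum(r for r, _ in fg) / len(fg)
cc = sum(c for _, c in fg) / len(fg)
start = min(fg, key=lambda p: (p[0] - cr) ** 2 + (p[1] - cc) ** 2)

# Geodesic distance: BFS restricted to the silhouette (4-connectivity).
dist = {start: 0}
queue = deque([start])
while queue:
    r, c = queue.popleft()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nb = (r + dr, c + dc)
        if 0 <= nb[0] < H and 0 <= nb[1] < W and mask[nb[0]][nb[1]] and nb not in dist:
            dist[nb] = dist[(r, c)] + 1
            queue.append(nb)

def on_boundary(p):
    """A silhouette pixel is on the boundary if a 4-neighbour is background."""
    return any(
        not (0 <= p[0] + dr < H and 0 <= p[1] + dc < W and mask[p[0] + dr][p[1] + dc])
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
    )

def neighbors8(p):
    return [(p[0] + dr, p[1] + dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)]

# Candidates: boundary pixels that are local maxima of the geodesic distance.
candidates = [p for p in fg if on_boundary(p)
              and all(dist[p] >= dist[nb] for nb in neighbors8(p) if nb in dist)]
print(sorted(candidates))
```

On this toy mask the four arm tips of the cross are recovered as candidates; on a real silhouette the analogous maxima fall at the head, hands and feet.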
17.
The Children's National Medical Center is located in the inner-city area of Washington, DC. As is now well publicized nationally, the drug-related violence in Washington has earned the area the dubious title of "murder capital of the world." Our outpatient child and adolescent psychiatry clinic at Children's Hospital provides walk-in services during daytime hours, Monday through Friday. Access to services is available at other times through the emergency room.
18.
Computational fluid dynamics and meteorology in particular are among the major consumers of high performance computer technology. The next generation of atmospheric models will be capable of representing fluid flow phenomena at very small scales in the atmosphere. The mesoscale compressible community (MC2) model represents one of the first successful applications of a semi-implicit, semi-Lagrangian scheme to integrate the compressible governing equations for atmospheric flow in a limited area domain. A distributed-memory SPMD implementation of the MC2 model is described and the convergence rates of various parallel preconditioners for a Krylov-type GMRES elliptic solver are reported. Parallel performance of the model on the Cray T3E MPP and NEC SX-4/32 SMP is also presented.
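The paper's preconditioners and parallel implementation are not detailed in the abstract. As a hedged, serial illustration of why preconditioning matters for a Krylov elliptic solve, the sketch below implements a minimal unrestarted GMRES in NumPy and compares iteration counts with and without a Jacobi (diagonal) preconditioner on a hypothetical diagonally dominant test matrix; this is a stand-in, not the MC2 solver.

```python
import numpy as np

def gmres(A, b, M_inv=None, tol=1e-8, maxiter=200):
    """Minimal unrestarted GMRES with optional left preconditioning:
    solves (M_inv A) x = M_inv b from x0 = 0 via Arnoldi + least squares."""
    n = len(b)
    if M_inv is None:
        M_inv = np.eye(n)
    r0 = M_inv @ b
    beta = np.linalg.norm(r0)
    Q = np.zeros((n, maxiter + 1))
    Q[:, 0] = r0 / beta
    H = np.zeros((maxiter + 1, maxiter))
    for k in range(maxiter):
        # Arnoldi step with modified Gram-Schmidt orthogonalization.
        v = M_inv @ (A @ Q[:, k])
        for j in range(k + 1):
            H[j, k] = Q[:, j] @ v
            v = v - H[j, k] * Q[:, j]
        H[k + 1, k] = np.linalg.norm(v)
        # Minimize ||beta*e1 - H_k y|| over the current Krylov subspace.
        e1 = np.zeros(k + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:k + 2, :k + 1], e1, rcond=None)
        resid = np.linalg.norm(H[:k + 2, :k + 1] @ y - e1)
        if resid < tol * beta or H[k + 1, k] < 1e-12:
            return Q[:, :k + 1] @ y, k + 1
        Q[:, k + 1] = v / H[k + 1, k]
    return Q[:, :maxiter] @ y, maxiter

# Hypothetical SPD, diagonally dominant matrix with a widely varying
# diagonal (a crude stand-in for a variable-coefficient elliptic operator).
n = 50
d = np.linspace(10.0, 1000.0, n)
A = np.diag(d) - np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
b = np.random.default_rng(0).standard_normal(n)

x_plain, it_plain = gmres(A, b)
x_jac, it_jac = gmres(A, b, M_inv=np.diag(1.0 / d))  # Jacobi preconditioner
print(it_plain, it_jac)
```

Jacobi scaling clusters the spectrum near 1, so the preconditioned solve needs far fewer Arnoldi iterations; the paper's parallel preconditioners play the same role for the distributed elliptic solve.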
20.
We consider pipelined real-time systems that consist of a chain of tasks executing on a distributed platform. The processing of the tasks is pipelined: each processor executes only one interval of consecutive tasks. We are interested in minimizing both the input–output latency and the period of application mapping. For dependability reasons, we are also interested in maximizing the reliability of the system. We therefore assign several processors to each interval of tasks, so as to increase the reliability of the system. Both processors and communication links are unreliable and subject to transient failures. We assume that the arrival of the failures follows a constant parameter Poisson law, and that the failures are statistically independent events. We study several variants of this multiprocessor mapping problem, with several hypotheses on the target platform (homogeneous/heterogeneous speeds and/or failure rates). We provide NP-hardness complexity results, and optimal mapping algorithms for polynomial problem instances. Efficient heuristics are presented to solve the general case, and experimental results are provided.
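As a hedged numerical illustration of the reliability model described above (constant-rate Poisson failures, statistically independent, several replicas per interval), the Python sketch below computes the success probability of a hypothetical mapping. The failure rates and execution times are invented for illustration, and communication-link failures are omitted for brevity.

```python
import math

def replica_reliability(replicas):
    """Reliability of one interval replicated on several processors:
    processor i survives its execution time t_i with probability
    exp(-lam_i * t_i) (constant-rate Poisson failures, independent);
    the interval succeeds if at least one replica survives."""
    fail_all = 1.0
    for lam, t in replicas:
        fail_all *= 1.0 - math.exp(-lam * t)
    return 1.0 - fail_all

def mapping_reliability(intervals):
    """A chain mapping succeeds only if every interval succeeds."""
    r = 1.0
    for interval in intervals:
        r *= replica_reliability(interval)
    return r

# Hypothetical 3-interval mapping, two replicas per interval,
# each replica given as (failure rate lam, execution time t).
mapping = [
    [(1e-4, 10.0), (2e-4, 12.0)],
    [(1e-4, 20.0), (1e-4, 20.0)],
    [(5e-5, 15.0), (3e-4, 8.0)],
]
print(mapping_reliability(mapping))
```

Adding a replica to an interval always increases its reliability (at the cost of extra processors and communication), which is exactly the latency/period/reliability trade-off the mapping variants in the paper explore.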