71.
72.
Driving macrophage (MΦ) polarization into the M2 phenotype provides potential against inflammatory diseases. Interleukin‐4 (IL‐4) promotes polarization into the M2‐MΦ phenotype, but its systemic use is constrained by dose‐limiting toxicity. Consequently, we developed IL‐4‐decorated surfaces aiming at sustained and localized activity. IL‐4 muteins were generated by genetic code expansion; Lys42 was replaced by unnatural amino acids (uAAs). Both muteins showed cell‐stimulation ability and binding affinity to IL4Rα similar to those of wt‐IL‐4. Copper‐catalyzed (CuAAC) and copper‐free strain‐promoted (SPAAC) 1,3‐dipolar azide–alkyne cycloadditions were used to site‐selectively anchor IL‐4 to agarose surfaces. These surfaces had sustained IL‐4 activity, as demonstrated by TF‐1 cell proliferation and M2, but not M1, polarization of M‐CSF‐generated human MΦ. The approach provides a blueprint for the engineering of cytokine‐activated surfaces profiled for sustained and spatially controlled activity.
73.
We coupled the radiation emitted by arrays of Josephson junction oscillators to detector arrays of small Josephson junctions. The number of junctions in the detector array ranges up to 1536, which is typical for 1 V standard array operation. Evidence is presented that both uniform coupling of the emitted radiation over all the small-junction arrays and coherent emission of the Josephson oscillators can be achieved. PACS numbers: 74.50.+r, 74.40.+k.
74.
Visualization algorithms can have a large number of parameters, making the space of possible rendering results rather high-dimensional. Only a systematic analysis of the perceived quality can truly reveal the optimal setting for each such parameter. However, an exhaustive search in which all possible parameter permutations are presented to each user within a study group would be infeasible to conduct. Additional complications may result from possible parameter co-dependencies. Here, we will introduce an efficient user study design and analysis strategy that is geared to cope with this problem. The user feedback is fast and easy to obtain and does not require exhaustive parameter testing. To enable such a framework we have modified a preference measuring methodology, conjoint analysis, that originated in psychology and is now also widely used in market research. We demonstrate our framework by a study that measures the perceived quality in volume rendering within the context of large parameter spaces.
75.
Mueller, Frank. Real-Time Systems, 2000, 18(2-3): 217-247.
This paper contributes a comprehensive study of a framework to bound worst-case instruction cache performance for caches with arbitrary levels of associativity. The framework is formally introduced, operationally described, and its correctness is shown. Results of incorporating instruction cache predictions within pipeline simulation show that timing predictions for set-associative caches remain just as tight as predictions for direct-mapped caches. The low cache simulation overhead allows interactive use of the analysis tool and scales well with increasing associativity. The approach taken is based on a data-flow specification of the problem and provides another step toward worst-case execution time prediction of contemporary architectures and its use in schedulability analysis for hard real-time systems.
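As a toy illustration of the hit/miss behavior such a framework must bound, the sketch below simulates an LRU set-associative instruction cache over an address trace. All parameters (line size, set count, associativity) are illustrative assumptions, not the paper's configuration, and this concrete simulation stands in for — it is not — the paper's static data-flow analysis.

```python
# Minimal LRU set-associative instruction-cache simulator (illustrative only).
from collections import OrderedDict

def simulate(trace, num_sets=4, assoc=2, line_size=16):
    """Return (hits, misses) for a sequence of instruction addresses."""
    sets = [OrderedDict() for _ in range(num_sets)]  # per-set tag store in LRU order
    hits = misses = 0
    for addr in trace:
        block = addr // line_size
        idx, tag = block % num_sets, block // num_sets
        s = sets[idx]
        if tag in s:
            hits += 1
            s.move_to_end(tag)          # refresh LRU position
        else:
            misses += 1
            if len(s) >= assoc:
                s.popitem(last=False)   # evict the least recently used tag
            s[tag] = None
    return hits, misses

# A tight loop re-executing the same three cache lines: only cold misses occur.
loop = [0x100, 0x110, 0x120] * 10
print(simulate(loop))  # (27, 3)
```

In a real WCET framework the same hit/miss classification is derived statically for every possible execution, rather than measured on one trace as here.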
76.
An analysis of cryogenic liquefaction and storage methods for in-situ produced propellants (oxygen and methane) on Mars is presented. The application is to a subscale precursor sample return mission, intended to demonstrate critical cryogenic technologies prior to a human mission. A heat transfer analysis is included, resulting in predicted cryogenic tank surface temperatures and heat leak values for different conditions. Insulation thickness is traded off against cryocooler capacity to find optimum combinations for various insulation configurations, including multilayer insulation and microspheres. Microsphere insulation is shown to have promise, and further development is recommended.
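The insulation-versus-cryocooler trade described above can be sketched with a simple Fourier-conduction model, Q = k·A·ΔT/t. Every number below (effective insulation conductivity, tank area, temperatures, cooler lift) is an assumed placeholder for illustration and not a value from the paper's analysis.

```python
# Back-of-the-envelope insulation-vs-cryocooler trade (all values assumed).
def heat_leak_w(k, area_m2, dT_K, thickness_m):
    """Conduction heat leak Q = k*A*dT/t through insulation of thickness t."""
    return k * area_m2 * dT_K / thickness_m

def min_thickness_m(k, area_m2, dT_K, cooler_lift_w):
    """Thinnest insulation for which the cryocooler can remove the leak."""
    return k * area_m2 * dT_K / cooler_lift_w

k_eff = 5e-5        # W/(m*K), effective insulation conductivity (assumed)
area = 3.0          # m^2 tank surface area (assumed)
dT = 210.0 - 90.0   # K, ambient (~210 K) minus cryogen (~90 K) (assumed)
lift = 2.0          # W, cryocooler lift at the cold temperature (assumed)

t = min_thickness_m(k_eff, area, dT, lift)
print(f"minimum thickness: {t*1000:.1f} mm")                           # 9.0 mm
print(f"leak at 20 mm: {heat_leak_w(k_eff, area, dT, 0.020):.2f} W")   # 0.90 W
```

Thicker insulation cuts the leak (and hence required cooler capacity, mass, and power) at the cost of volume, which is exactly the optimization the abstract refers to.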
77.
Embedded control systems with hard real-time constraints require that deadlines are met at all times or the system may malfunction with potentially catastrophic consequences. Schedulability theory can assure deadlines for a given task set when periods and worst-case execution times (WCETs) of tasks are known. While periods are generally derived from the problem specification, a task's code needs to be statically analyzed to derive safe and tight bounds on its WCET. Such static timing analysis abstracts from program input and considers loop bounds and architectural features, such as pipelining and caching. However, unpredictability due to dynamic memory (DRAM) refresh cannot be accounted for by such analysis, which limits its applicability to systems with static memory (SRAM). In this paper, we assess the impact of DRAM refresh on task execution times and demonstrate how predictability is adversely affected leading to unsafe hard real-time system design. We subsequently contribute a novel and effective approach to overcome this problem through software-initiated DRAM refresh. We develop (1) a pure software and (2) a hybrid hardware/software refresh scheme. Both schemes provide predictable timings and fully replace the classical hardware auto-refresh. We discuss implementation details based on this design for multiple concrete embedded platforms and experimentally assess the benefits of different schemes on these platforms. We further formalize the integration of variable latency memory references into a data-flow framework suitable for static timing analysis to bound a task's memory latencies with regard to their WCET. The resulting predictable execution behavior in the presence of DRAM refresh combined with the additional benefit of reduced access delays is unprecedented, to the best of our knowledge.
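A rough sketch of why hardware auto-refresh hurts WCET analysis: in the worst case, every refresh interval tREFI can stall the task for the refresh cycle time tRFC, so a safe bound must inflate the analytic WCET accordingly. The timing values below are typical DDR-style numbers assumed for illustration, not measurements from the paper's platforms.

```python
# Pessimistic WCET inflation from DRAM auto-refresh (illustrative values).
import math

def refresh_inflated_wcet(wcet_us, t_refi_us=7.8, t_rfc_us=0.35):
    """Upper-bound WCET assuming one tRFC stall per tREFI window."""
    stalls = math.ceil(wcet_us / t_refi_us)
    return wcet_us + stalls * t_rfc_us

base = 1000.0  # µs, analytically derived WCET without refresh (assumed)
print(refresh_inflated_wcet(base))  # ~1045 µs: ~4.5% pessimism added
```

Software-initiated refresh, as proposed in the paper, removes this pessimism by making refresh timing explicit to the analysis instead of an asynchronous hardware event.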
78.
Rapid advances in image acquisition and storage technology underline the need for real-time algorithms that are capable of solving large-scale image processing and computer-vision problems. The minimum s-t cut problem, a classical combinatorial optimization problem, is a prominent building block in many vision and imaging algorithms such as video segmentation, co-segmentation, stereo vision, multi-view reconstruction, and surface fitting, to name a few. That is why finding a real-time algorithm that optimally solves this problem is of great importance. In this paper, we introduce to computer vision Hochbaum's pseudoflow (HPF) algorithm, which optimally solves the minimum s-t cut problem. We compare the performance of HPF, in terms of execution times and memory utilization, with three leading published algorithms: (1) Goldberg's and Tarjan's push-relabel (PRF); (2) Boykov's and Kolmogorov's augmenting paths (BK); and (3) Goldberg's partial augment-relabel. While the common practice in computer vision is to use either the BK or PRF algorithm for solving the problem, our results demonstrate that, in general, the HPF algorithm is more efficient and uses less memory than these three algorithms. This strongly suggests that HPF is a great option for many real-time computer-vision problems that require solving the minimum s-t cut problem.
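By the max-flow/min-cut theorem, the minimum s-t cut value equals the maximum s-t flow. As a minimal illustration of the problem these algorithms solve — far simpler and slower than HPF, BK, or push-relabel — here is a BFS-based augmenting-path (Edmonds-Karp) solver:

```python
# Edmonds-Karp max-flow: the optimal value equals the minimum s-t cut.
from collections import deque

def max_flow(capacity, s, t):
    """capacity: dict[u] -> dict[v] -> edge capacity. Returns max s-t flow."""
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:                       # ensure reverse residual edges exist
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent, q = {s: None}, deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow                      # no augmenting path: flow is optimal
        path, v = [], t
        while parent[v] is not None:         # reconstruct the path's edges
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:                    # augment along the path
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        flow += bottleneck

g = {'s': {'a': 3, 'b': 2}, 'a': {'b': 1, 't': 2}, 'b': {'t': 3}, 't': {}}
print(max_flow(g, 's', 't'))  # 5
```

Vision-scale grid graphs have millions of nodes, which is why specialized algorithms such as HPF and BK, rather than textbook augmenting paths, matter in practice.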
79.
E-procurement and supplier-relationship management systems have helped to substantially advance process execution in supply management. However, current supply network systems still face challenges of high data integration effort, as well as the decoupling of structured data and processes from the growing amount of digitalized unstructured interactions of supply management professionals. Inspired by the room for improvement posed by these challenges, our research proposes a design for a supply network artifact in supplier qualification that addresses these problems by enabling holistic integration of data, processes, and people. The artifact is developed following an action design research approach. Building on a set of meta-requirements derived from literature and practice explorations, we conceptualize two design principles and derive corresponding design decisions that have been implemented in a software artifact. Finally, we formulate testable hypotheses and evaluate the artifact and its design in the context of supplier qualification. Our results show that the proposed design reduces the mental effort of supply management professionals and significantly increases efficiency when performing typical supply network tasks such as supplier qualification.
80.
The radiation budget at the earth's surface is an essential climate variable for climate monitoring and analysis as well as for verification of climate model output and reanalysis data. Accurate solar surface irradiance data are a prerequisite for an accurate estimation of the radiation budget and for efficient planning and operation of solar energy systems.
This paper describes a new approach for the retrieval of the solar surface irradiance from satellite data. The method is based on radiative transfer modelling and enables the use of extended information about the atmospheric state. Accurate analysis of the interaction between the atmosphere, surface albedo, transmission and the top-of-atmosphere albedo has been the basis for the new method, characterised by a combination of parameterisations and "eigenvector" look-up tables. The method combines high computing performance with high accuracy. The performed validation shows that the mean absolute deviation is of the same magnitude as the confidence level of the BSRN (Baseline Surface Radiation Network) ground-based measurements and significantly lower than the CM-SAF (Climate Monitoring Satellite Application Facility) target accuracy of 10 W/m2. The mean absolute difference between monthly means of ground measurements and satellite-based solar surface irradiance is 5 W/m2, with a mean bias deviation of −1 W/m2 and an RMSD (Root Mean Square Deviation) of 5.4 W/m2 for the investigated European sites. The results for the investigated African sites, obtained by comparing instantaneous values, are also encouraging. The mean absolute difference, at 2.8%, is even lower than for the European sites (3.9%), but the mean bias deviation, at −1.1%, is slightly higher than for the European sites (0.8%). Validation results over the ocean in the Mediterranean Sea using shipboard data complete the validation; the mean bias there is −3.6 W/m2 (2.3%).
The slightly higher mean bias deviation over the ocean results at least partly from inherent differences due to the movement of the ship (shadowing, allocation of satellite pixels). The validation results demonstrate that the high accuracy of the surface solar irradiance holds in different climate regions. The discussed method also has the potential to improve the treatment of radiation processes in climate and Numerical Weather Prediction (NWP) models, because of its high accuracy combined with high computing speed.
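The validation statistics quoted above (mean absolute deviation, mean bias deviation, RMSD) can be computed from paired satellite/ground series as in this sketch; the sample values below are invented for illustration, not the study's data.

```python
# Standard validation statistics for paired satellite vs. ground series.
import math

def validation_stats(satellite, ground):
    """Return (MAD, bias, RMSD) for equally indexed measurement pairs."""
    diffs = [s - g for s, g in zip(satellite, ground)]
    n = len(diffs)
    mad = sum(abs(d) for d in diffs) / n                 # mean absolute deviation
    bias = sum(diffs) / n                                # mean bias deviation
    rmsd = math.sqrt(sum(d * d for d in diffs) / n)      # root mean square deviation
    return mad, bias, rmsd

sat = [180.0, 210.0, 195.0, 250.0]   # W/m^2, illustrative monthly means
gnd = [184.0, 206.0, 198.0, 251.0]
mad, bias, rmsd = validation_stats(sat, gnd)
print(f"MAD={mad:.1f} W/m2  bias={bias:.1f} W/m2  RMSD={rmsd:.2f} W/m2")
```

Note that a small bias can coexist with a larger MAD/RMSD, which is why the abstract reports all three measures.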