Full-text availability
Paid full text | 5,254 articles |
Free | 327 articles |
Free (domestic) | 45 articles |
Subject category
Electrical engineering | 117 articles |
General | 51 articles |
Chemical industry | 1,280 articles |
Metalworking | 156 articles |
Machinery & instrumentation | 150 articles |
Building science | 262 articles |
Mining engineering | 34 articles |
Energy & power | 341 articles |
Light industry | 769 articles |
Hydraulic engineering | 43 articles |
Petroleum & natural gas | 79 articles |
Weapons industry | 12 articles |
Radio electronics | 225 articles |
General industrial technology | 798 articles |
Metallurgical industry | 508 articles |
Nuclear technology | 48 articles |
Automation technology | 753 articles |
Publication year
2024 | 12 articles |
2023 | 43 articles |
2022 | 116 articles |
2021 | 195 articles |
2020 | 135 articles |
2019 | 150 articles |
2018 | 206 articles |
2017 | 217 articles |
2016 | 216 articles |
2015 | 191 articles |
2014 | 245 articles |
2013 | 506 articles |
2012 | 283 articles |
2011 | 347 articles |
2010 | 313 articles |
2009 | 253 articles |
2008 | 243 articles |
2007 | 224 articles |
2006 | 154 articles |
2005 | 104 articles |
2004 | 138 articles |
2003 | 135 articles |
2002 | 161 articles |
2001 | 97 articles |
2000 | 82 articles |
1999 | 70 articles |
1998 | 145 articles |
1997 | 100 articles |
1996 | 56 articles |
1995 | 46 articles |
1994 | 60 articles |
1993 | 47 articles |
1992 | 25 articles |
1991 | 16 articles |
1990 | 22 articles |
1989 | 19 articles |
1988 | 17 articles |
1987 | 10 articles |
1986 | 12 articles |
1985 | 26 articles |
1984 | 20 articles |
1983 | 15 articles |
1982 | 20 articles |
1981 | 20 articles |
1980 | 15 articles |
1977 | 12 articles |
1976 | 23 articles |
1975 | 9 articles |
1974 | 11 articles |
1973 | 6 articles |
Sort order: 5,626 results found (search time: 19 ms)
91.
Estimating the parameters of the statistical processes underlying measured network traffic is important, as the estimates can then be used for the statistical analysis and modeling of network traffic in simulation tools. For this reason, different estimation methods have been proposed that allow the statistical processes of network traffic to be estimated. One of them is our own estimation method based on histogram comparison (EMHC), which can be used to estimate the parameters of the statistical data-length process from measured packet traffic. The main part of the EMHC method is the Mapping Algorithm with Fragmentation Mimics (MAFM).
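The abstract does not give the details of the EMHC method, but the general idea of histogram-comparison estimation can be illustrated with a generic sketch (not the authors' algorithm): build an empirical histogram of measured packet lengths and pick, among candidate parameterizations of an assumed length model, the one whose model histogram is closest. The exponential length model, the bin layout, and the chi-square distance below are all illustrative assumptions.

```python
import math
from collections import Counter

def histogram(samples, bin_width=100, max_len=1500):
    """Empirical packet-length histogram, normalized to probabilities."""
    bins = Counter(int(min(s, max_len) // bin_width) for s in samples)
    n = len(samples)
    n_bins = max_len // bin_width + 1
    return [bins.get(i, 0) / n for i in range(n_bins)]

def model_histogram(mean_len, bin_width=100, max_len=1500):
    """Histogram of an assumed exponential length model with the given mean."""
    n_bins = max_len // bin_width + 1
    lam = 1.0 / mean_len
    probs = []
    for i in range(n_bins):
        lo, hi = i * bin_width, (i + 1) * bin_width
        probs.append(math.exp(-lam * lo) - math.exp(-lam * hi))
    total = sum(probs)
    return [p / total for p in probs]

def chi2(p, q, eps=1e-12):
    """Chi-square distance between two normalized histograms."""
    return sum((a - b) ** 2 / (a + b + eps) for a, b in zip(p, q))

def estimate_mean(samples, candidates):
    """Pick the candidate mean whose model histogram best matches the data."""
    emp = histogram(samples)
    return min(candidates, key=lambda m: chi2(emp, model_histogram(m)))
```

For example, feeding in lengths drawn from an exponential distribution with mean 400 (here generated deterministically by inverse-CDF sampling) should make the estimator prefer 400 over the other candidate means.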
92.
In this paper, we present a novel method for fast lossy or lossless compression and decompression of regular height fields. The method is suitable for SIMD-parallel implementation and thus inherently suited to modern GPU architectures. Lossy compression is achieved by approximating the height field with a set of quadratic Bézier surfaces; lossless compression is achieved by superimposing the residuals over the lossy approximation. We validated the method's efficiency through a CUDA implementation of the compression and decompression algorithms. The method allows independent decompression of individual data points, as well as progressive decompression, and even in the case of lossy decompression the decompressed surface is inherently seamless. In comparison with the GPU-oriented state-of-the-art method, the proposed method, combined with a widely available lossless compression method (such as DEFLATE), achieves comparable compression ratios. It slightly outperforms the state-of-the-art method for very high workloads and considerably outperforms it for lower workloads.
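As a generic illustration of the lossy-plus-residual scheme the abstract describes (not the paper's GPU implementation), the sketch below evaluates a biquadratic Bézier patch from a 3×3 grid of control heights and shows that storing residuals against the lossy approximation permits exact reconstruction. The patch layout and parameterization are illustrative assumptions.

```python
def bezier2(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve at parameter t in [0, 1]."""
    s = 1.0 - t
    return s * s * p0 + 2.0 * s * t * p1 + t * t * p2

def patch(ctrl, u, v):
    """Evaluate a biquadratic Bezier patch (3x3 control heights) at (u, v)."""
    rows = [bezier2(*row, u) for row in ctrl]
    return bezier2(*rows, v)

def compress(heights, ctrl):
    """Residuals of a square height grid against its Bezier approximation."""
    n = len(heights)
    return [[heights[i][j] - patch(ctrl, j / (n - 1), i / (n - 1))
             for j in range(n)] for i in range(n)]

def decompress(residuals, ctrl):
    """Lossless reconstruction: lossy Bezier surface plus stored residuals."""
    n = len(residuals)
    return [[patch(ctrl, j / (n - 1), i / (n - 1)) + residuals[i][j]
             for j in range(n)] for i in range(n)]
```

Because each grid point depends only on its own residual and the shared control points, individual points can be decompressed independently, which mirrors the random-access property claimed above.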
93.
Esen Gökçe Özdamar 《Digital Creativity》2013,24(3):206-212
ABSTRACT This exhibition review focuses on the quest to weave boundaries between the relationships of body, form, space, and material using immersive technologies. Emerging as an architectural counterpoint, the exhibition Universal Everything: Fluid Bodies, held at Borusan Contemporary, focuses on how we perceive motion and form in relation to it, as well as on fusing kinaesthetic and synaesthetic senses through a data-driven, motion-based visual representation. Through these algorithms, the exhibition shows how neuroarchitecture reminds us of the senses of perception. The review also considers the architectural counterpoint as an interaction and encounter of the body with 'the machine' as the 'voyeur body', and how this observational dialogue becomes a research methodology for understanding the nature of movement in space through digital tools.
94.
Daniel Dominguez Gouvêa Cyro de A. Assis D. Muniz Gilson A. Pinto Alberto Avritzer Rosa Maria Meri Leão Edmundo de Souza e Silva Morganna Carmem Diniz Vittorio Cortellessa Luca Berardinelli Julius C. B. Leite Daniel Mossé Yuanfang Cai Michael Dalton Lucia Happe Anne Koziolek 《Software and Systems Modeling》2013,12(4):765-787
In this paper, we report on our experience with the application of validated models to assess the performance, reliability, and adaptability of a complex mission-critical system that is being developed to dynamically monitor and control the position of an oil-drilling platform. We present real-time modeling results showing that all tasks are schedulable. We performed stochastic analysis of the distribution of task execution time as a function of the number of system interfaces, and we report on the variability of task execution times for the expected system configurations. In addition, we executed a system library for an important task inside the performance-model simulator, and we report on the measured algorithm convergence as a function of the number of vessel thrusters. We also studied the system architecture's adaptability by comparing the documented system architecture with the implemented source code, and we report on the adaptability findings and the recommendations we were able to provide to the system's architect. Finally, we developed models of hardware and software reliability, and we report hardware and software reliability results based on the evaluation of the system architecture.
95.
96.
We obtain subquadratic algorithms for 3SUM on integers and rationals in several models. On a standard word RAM with w-bit words, we obtain a subquadratic running time. In the circuit RAM with one nonstandard AC⁰ operation, we obtain a further speedup. In external memory, we achieve O(n²/(MB)), even under the standard assumption of data indivisibility. Cache-obliviously, we also obtain a subquadratic running time. In all cases, our speedup is almost quadratic in the "parallelism" the model can afford, which may be the best possible. Our algorithms are Las Vegas randomized; time bounds hold in expectation and, in most cases, with high probability.
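For context, the subquadratic results above are measured against the classical quadratic-time algorithm. A standard sort-and-two-pointers O(n²) 3SUM check (the textbook baseline, not the paper's word-RAM algorithm) looks like this:

```python
def three_sum_exists(nums):
    """Classic O(n^2) 3SUM check: do a, b, c in nums satisfy a + b + c == 0?"""
    a = sorted(nums)              # O(n log n) sort dominates nothing below
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1     # scan the suffix with two pointers
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1           # sum too small: move the low pointer up
            else:
                hi -= 1           # sum too large: move the high pointer down
    return False
```

Each outer iteration does a linear two-pointer sweep, giving the quadratic bound that the paper's algorithms improve on by roughly the model's available parallelism.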
97.
The paper presents a new method that introduces an anchored discrete convolution for calculating the length of a digital curve. The method is based on discrete convolution using convolution masks and the anchoring of points within a pixel. Ordinary convolution distorts the curve shape and gives large errors in the length calculation; the advantage of anchoring is that it limits point shifting within the pixel while the curve length is computed. The method is applied to an analytical arc and various calculations are performed. In addition, different methods from the literature are compared and a real sample is tested.
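The anchored-convolution algorithm itself is not given in the abstract. For comparison, the classical local (chain-code) length estimator that convolution-based methods aim to improve on can be sketched as follows, using Kulpa's corrected step weights from the literature (the weights are the standard published values, not taken from this paper):

```python
def chain_length(pixels, straight=0.948, diagonal=1.343):
    """Length estimate for an 8-connected digital curve given as pixel coords.

    Classical local estimator: axis-aligned moves count `straight` and
    diagonal moves count `diagonal` (Kulpa's bias-corrected coefficients).
    """
    total = 0.0
    for (x0, y0), (x1, y1) in zip(pixels, pixels[1:]):
        dx, dy = abs(x1 - x0), abs(y1 - y0)
        if dx > 1 or dy > 1 or (dx == 0 and dy == 0):
            raise ValueError("curve must be 8-connected with unit steps")
        total += diagonal if dx == 1 and dy == 1 else straight
    return total
```

Such purely local estimators are exactly where the shape distortion mentioned above comes from, since each step is weighted independently of the curve's larger-scale direction.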
98.
Ayşe Akbalık Sekoun Kebe Bernard Penz Najiba Sbihi 《International Transactions in Operational Research》2008,15(2):195-214
In this paper we study the coordination of different activities in a supply chain, based on a real case. Multiple suppliers send raw materials (RMs) to a distribution center (DC) that delivers them to a single plant where storage of the RMs and the finished goods is not possible. The finished goods are then shipped directly to multiple customers with just-in-time (JIT) demands. Under these hypotheses, we show that the problem can be reduced to one with multiple suppliers and one DC. We then analyze two cases: in the first, we consider uncapacitated storage at the DC; in the second, we analyze the capacitated-storage case. For the first case, we show that the problem is NP-hard in the ordinary sense, using a reduction from the Knapsack decision problem. We then propose two exact methods: a mixed-integer linear program (MILP) and a pseudopolynomial dynamic program. A classical dynamic program and an improved one using the idea of Shaw and Wagelmans are given; numerical tests show that the dynamic program gives the optimal solution in reasonable time for quite large instances compared with the MILP. For the second case, a capacity limitation at the DC is assumed, which makes the problem more challenging to solve. We propose an MILP and a dynamic-programming-based heuristic that provides solutions close to the optimum in very short times.
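The paper's own dynamic program is not reproduced in the abstract. As a generic illustration of the pseudopolynomial style it refers to, and of the Knapsack problem used in the NP-hardness reduction, here is the textbook 0/1 knapsack DP (not the authors' formulation):

```python
def knapsack(capacity, items):
    """Pseudopolynomial 0/1 knapsack DP: items are (weight, value) pairs.

    dp[c] holds the best value achievable with total weight <= c.
    Runs in O(len(items) * capacity) time, i.e. polynomial in the
    numeric value of `capacity` rather than in its encoding length.
    """
    dp = [0] * (capacity + 1)
    for weight, value in items:
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            dp[c] = max(dp[c], dp[c - weight] + value)
    return dp[capacity]
```

"NP-hard in the ordinary sense" is precisely the regime where such a DP is useful: exponential in the input's bit length in the worst case, but fast whenever the capacities involved are moderate.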
99.
A new image segmentation system is presented to automatically segment and label brain magnetic resonance (MR) images, distinguishing normal and abnormal brain tissues, using self-organizing maps (SOM) and a knowledge-based expert system. The elements of a feature vector are formed from image intensities, first-order features, texture features extracted from the gray-level co-occurrence matrix, and multiscale features. This feature vector is used as the input to the SOM, which over-segments the images; the knowledge-based expert system then joins and labels the segments. The spatial distributions of the segments extracted from the SOM are considered as well as their gray-level properties. Segments are labeled as background, skull, white matter, gray matter, cerebrospinal fluid (CSF), or suspicious regions.
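As a toy illustration of the SOM stage only (scalar features and a 1-D map; the system above uses multidimensional feature vectors and a knowledge-based relabeling step not sketched here), a minimal self-organizing map can be written as:

```python
import math
import random

def train_som(data, n_units=4, epochs=200, lr0=0.5, sigma0=1.5, seed=0):
    """Minimal 1-D self-organizing map for scalar features.

    Each sample pulls the best-matching unit (BMU) toward it; neighboring
    units are pulled too, through a Gaussian neighborhood whose width and
    learning rate both shrink over the epochs.
    """
    rng = random.Random(seed)
    w = [rng.uniform(min(data), max(data)) for _ in range(n_units)]
    for epoch in range(epochs):
        frac = epoch / epochs
        lr = lr0 * (1.0 - frac)
        sigma = sigma0 * (1.0 - frac) + 1e-3
        for x in data:
            bmu = min(range(n_units), key=lambda i: abs(w[i] - x))
            for i in range(n_units):
                h = math.exp(-((i - bmu) ** 2) / (2.0 * sigma ** 2))
                w[i] += lr * h * (x - w[i])
    return w

def label(x, w):
    """Assign a sample to its best-matching unit (its cluster index)."""
    return min(range(len(w)), key=lambda i: abs(w[i] - x))
```

After training on bimodal data, samples from the two modes map to different units, which is the over-segmentation that a rule-based stage would then merge and name.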
100.
Anders A. Larsen Martin Bendsøe Jesper Hattel Henrik Schmidt 《Structural and Multidisciplinary Optimization》2009,38(3):289-299
The aim of this paper is to optimize a thermal model of a friction stir welding process by finding optimal welding parameters. The optimization is performed using space mapping and manifold mapping techniques, in which a coarse model is used along with the fine model to be optimized. Different coarse models are applied, and the results and computation times are compared to gradient-based optimization using the full model. It is found that the use of space and manifold mapping reduces the computational cost significantly, because fewer function evaluations are needed and no fine-model gradient information is required.
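The space-mapping idea can be illustrated on a toy scalar problem (hypothetical linear "fine" and "coarse" responses, not the welding model): the expensive fine model is evaluated once per iteration, while all inversion work happens on the cheap coarse model, mirroring the savings in fine-model evaluations reported above.

```python
def fine(x):
    """'Expensive' fine model (stand-in for the full thermal simulation)."""
    return 2.0 * x + 0.3

def coarse(x):
    """Cheap coarse surrogate of the same response."""
    return 2.1 * x

def invert(model, y, lo=-100.0, hi=100.0, tol=1e-10):
    """Bisection inverse of a monotone-increasing scalar model."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if model(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def space_mapping(target, x0=0.0, iters=20):
    """Drive fine(x) to `target` using only coarse-model inversions.

    Each step performs parameter extraction (find z with coarse(z) equal to
    the observed fine response) and shifts x by the coarse-space mismatch,
    so the fine model is evaluated just once per iteration.
    """
    x_star_coarse = invert(coarse, target)   # coarse-model solution
    x = x0
    for _ in range(iters):
        z = invert(coarse, fine(x))          # parameter extraction
        x += x_star_coarse - z               # space-mapping update
    return x
```

Because the coarse model misestimates both slope and offset, the coarse solution alone is wrong, but the iteration converges to the fine-model solution without ever differentiating the fine model.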