Article search results: 248 records matched; the retrieved abstracts are listed below.
1.

Announcements

Preliminary course programme, International Centre for Mechanical Sciences.
2.
Prolonged high-energy ball-milling of hilgenstockite (tetracalcium phosphate, TTCP) was found to decrease both particle and crystallite size, mechanically activating the compound. The activated material was highly reactive: in contrast to highly crystalline TTCP, it underwent a setting reaction with water to nanocrystalline hydroxyapatite (HA) and Ca(OH)2 at 37°C. Crystalline TTCP, by contrast, is practically unreactive at physiological temperature, because a thin HA layer forms on the particle surface and prevents further reaction.
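For reference, the setting reaction described in this abstract corresponds to the standard balanced hydration of TTCP (the stoichiometry below is textbook chemistry, not stated explicitly in the abstract itself):

```latex
% Hydration of tetracalcium phosphate (TTCP) to hydroxyapatite (HA)
% and calcium hydroxide; balanced in Ca, P, O and H.
\[
3\,\mathrm{Ca_4(PO_4)_2O} + 3\,\mathrm{H_2O}
  \;\longrightarrow\;
  \mathrm{Ca_{10}(PO_4)_6(OH)_2} + 2\,\mathrm{Ca(OH)_2}
\]
```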
3.
Summary. Measurements of autocorrelation functions extending over a broad time range are reported for a sample of polystyrene in ethyl acetate as a function of temperature between –44°C (the Θ-temperature) and 70°C. The corresponding spectra of decay times are obtained by two mathematical methods. Three dynamic processes are shown to exist, and their temperature and angular behaviour is studied.
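Recovering a spectrum of decay times from a measured autocorrelation function amounts to an ill-posed inverse Laplace transform. Below is a minimal sketch of one common approach (regularized non-negative least squares on a discretized decay-time grid); all data and parameters are synthetic illustrations, not values from the paper:

```python
import numpy as np
from scipy.optimize import nnls  # non-negative least squares

# Toy reconstruction of a decay-time spectrum A(tau) from an
# autocorrelation function g(t) = sum_i A_i * exp(-t / tau_i),
# in the spirit of the inversion methods mentioned in the abstract.

t = np.logspace(-6, 1, 200)                    # lag times (s)
g = 0.6*np.exp(-t/1e-4) + 0.4*np.exp(-t/1e-1)  # synthetic two-mode decay

taus = np.logspace(-6, 1, 60)                  # trial decay times
K = np.exp(-t[:, None] / taus[None, :])        # kernel K[i, j] = exp(-t_i / tau_j)

# Tikhonov-style regularization: append lam*I rows to stabilize the
# ill-posed inversion (CONTIN uses a smoothness penalty instead).
lam = 1e-2
K_reg = np.vstack([K, lam * np.eye(len(taus))])
g_reg = np.concatenate([g, np.zeros(len(taus))])

A, _ = nnls(K_reg, g_reg)                      # non-negative spectrum A(tau)
for tau, a in zip(taus, A):
    if a > 0.05:
        print(f"mode: tau ~ {tau:.2e} s, amplitude {a:.2f}")
```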
4.
Web proxy caches reduce the load that contemporary web traffic places on web servers and network bandwidth providers. In this research, a novel approach to web proxy cache replacement that uses neural networks for replacement decisions is developed and analyzed. Neural networks are trained to classify cacheable objects from real-world data sets using information known to be important in web proxy caching, such as frequency and recency. Correct classification ratios between 0.85 and 0.88 are obtained on both the training data and held-out data. Our approach is compared with Least Recently Used (LRU), Least Frequently Used (LFU) and the optimal policy, which rates each object by its number of future requests. Performance is evaluated in simulation for various neural network structures and cache conditions. The final neural networks achieve hit rates that are 86.60% of the optimal in the worst case and 100% of the optimal in the best case. Byte-hit rates are 93.36% of the optimal in the worst case and 99.92% of the optimal in the best case. We examine the input-to-output mappings of individual neural networks and analyze the resulting caching strategy with respect to specific cache conditions.
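To make the idea concrete, here is a minimal sketch of a neural cache-replacement policy: a small classifier is trained on recency/frequency-style features to predict re-access, and the cache evicts the object with the lowest predicted reuse probability. The features, network size and synthetic data are assumptions for illustration, not the paper's actual setup:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Illustrative stand-in for the paper's approach: a small MLP learns
# to predict re-access from caching features (recency, frequency, size).
rng = np.random.default_rng(0)
n = 5000
recency   = rng.exponential(1.0, n)   # time since last request (normalized)
frequency = rng.poisson(3, n)         # past request count
size_kb   = rng.lognormal(3, 1, n)    # object size

X = np.column_stack([recency, frequency, np.log1p(size_kb)])
# Synthetic label: frequently and recently used objects tend to be re-requested.
y = (frequency / (1.0 + recency) + rng.normal(0, 0.5, n)) > 2.0

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=500).fit(X, y)

def evict_candidate(objects):
    """Evict the cached object the network deems least likely to be reused.

    `objects` is a hypothetical list of dicts with the same feature keys.
    """
    feats = np.array([[o["recency"], o["frequency"], np.log1p(o["size_kb"])]
                      for o in objects])
    p_reuse = clf.predict_proba(feats)[:, 1]
    return objects[int(np.argmin(p_reuse))]
```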
7.
In this paper we present a hierarchical and contextual model for aerial image understanding. Our model organizes objects (cars, roofs, roads, trees, parking lots) in aerial scenes into hierarchical groups whose appearances and configurations are determined by statistical constraints (e.g. relative position, relative scale, etc.). Our hierarchy is a non-recursive grammar for objects in aerial images, comprised of layers of nodes that can each decompose into a number of different configurations. This allows us to generate and recognize a vast number of scenes with relatively few rules. We present a minimax entropy framework for learning the statistical constraints between objects and show that this learned context allows us to rule out unlikely scene configurations and hallucinate undetected objects during inference. A similar algorithm was proposed for texture synthesis (Zhu et al. in Int. J. Comput. Vis. 2:107–126, 1998) but did not incorporate hierarchical information. We use a range of bottom-up detectors (AdaBoost, TextonBoost, Compositional Boosting (Freund and Schapire in J. Comput. Syst. Sci. 55, 1997; Shotton et al. in Proceedings of the European Conference on Computer Vision, pp. 1–15, 2006; Wu et al. in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1–8, 2007)) to propose locations of objects in new aerial images, and employ a cluster sampling algorithm (C4 (Porway and Zhu, 2009)) to choose the subset of detections that best explains the image according to our learned prior model. The C4 algorithm can quickly and efficiently switch between alternate competing sub-solutions, for example whether an image patch is better explained by a parking lot with cars or by a building with vents. We also show that our model can predict the locations of objects our detectors missed. We conclude by presenting parsed aerial images and experimental results showing that our cluster sampling and top-down prediction algorithms use the learned contextual cues from our model to improve detection results over traditional bottom-up detectors alone.
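As a rough illustration of the contextual inference described here (choosing a subset of detections under unary detector scores plus pairwise context), the following sketch runs a simple Gibbs sampler over binary inclusion variables. The scores and pair potentials are invented placeholders; the paper's actual model (minimax entropy prior, C4 cluster sampling) is substantially richer:

```python
import numpy as np

# Candidate detections get a unary detector score; learned context adds
# pairwise support ("cars co-occur with parking lots") or suppression.
unary = np.array([1.2, 0.4, 0.9, -0.3])   # detector log-odds per candidate
pair = np.zeros((4, 4))
pair[0, 1] = pair[1, 0] = 0.8             # candidates 0 and 1 support each other
pair[2, 3] = pair[3, 2] = -1.5            # candidates 2 and 3 are incompatible

def energy(x):
    """Higher is better: unary evidence plus pairwise context."""
    return unary @ x + 0.5 * x @ pair @ x

x = np.zeros(4)                           # binary inclusion variables
rng = np.random.default_rng(0)
for _ in range(200):                      # simple single-site Gibbs sweeps
    i = rng.integers(4)
    on, off = x.copy(), x.copy()
    on[i], off[i] = 1.0, 0.0
    p_on = 1.0 / (1.0 + np.exp(energy(off) - energy(on)))
    x[i] = 1.0 if rng.random() < p_on else 0.0

print("selected detections:", np.flatnonzero(x))
```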
8.
Using satellite data for flood forecasting in mid-latitude catchments is challenging for engineers and model developers, in no small part because of the plethora of data sets that must be retrieved, combined, calibrated and used for simulation in real time. The differences between the various satellite rainfall products, and the continuous improvement in their quantity and quality, make it particularly difficult to develop a single software tool able to read and process all the different data sets. Even if such an endeavour were undertaken, the flexibility and extensibility such a tool would require to accommodate future versions of data sets, available in different file formats and at different temporal and spatial resolutions, should not be underestimated. This paper describes the development of a flood forecasting system that addresses this issue through a modular architecture based on the Open Modeling Interface (OpenMI) standard, which facilitates interaction between a number of separate software components. It is suggested that this approach greatly simplifies programming and debugging, and eliminates the need to create spatial and temporal transformation functions, without significantly compromising overall execution speed. The approach and system were tested by forecasting flood events in a particularly challenging transboundary catchment, the Evros catchment, which extends across Greece, Bulgaria and Turkey. As an example, the system uses two data sources (NASA's TRMM 3B42 and 3B42RT satellite data sets) to forecast flooding in the Evros catchment. Results indicate that OpenMI greatly facilitates the complex interaction of the various software components and considerably increases the flexibility and extensibility of the overall system, and hence its operational value and sustainability.
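OpenMI's central pattern is a pull-based chain of "linkable components": a downstream model requests values from upstream providers for the time it needs, and each provider can transform or resample on the way. The sketch below imitates that pattern in Python purely for illustration; the class and method names are invented, not the actual OpenMI API (real OpenMI SDKs are .NET/Java and considerably more involved):

```python
from dataclasses import dataclass, field

class Component:
    """Toy analogue of an OpenMI linkable component (names are illustrative)."""
    def get_values(self, t: float) -> float:
        raise NotImplementedError

@dataclass
class SatelliteRainfall(Component):
    """Stands in for a reader of a gridded rainfall product (e.g. TRMM 3B42)."""
    series: dict = field(default_factory=lambda: {0.0: 2.0, 3.0: 5.0, 6.0: 1.0})
    def get_values(self, t):
        # nearest-neighbour lookup as a stand-in for real temporal interpolation
        return self.series[min(self.series, key=lambda k: abs(k - t))]

@dataclass
class RainfallRunoff(Component):
    rainfall: Component
    coeff: float = 0.4          # illustrative runoff coefficient
    def get_values(self, t):
        return self.coeff * self.rainfall.get_values(t)   # pull from upstream

@dataclass
class ChannelRouting(Component):
    inflow: Component
    lag: float = 3.0            # hours of routing delay (illustrative)
    def get_values(self, t):
        return self.inflow.get_values(t - self.lag)

model = ChannelRouting(RainfallRunoff(SatelliteRainfall()))
print("forecast discharge at t=6h:", model.get_values(6.0))
```

The point of the pattern is that swapping in a new rainfall product only requires replacing one component; no other part of the chain changes.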
9.
A comparative study of low-complexity motion estimation algorithms is presented. The algorithms included in the study are the 1-bit transform, the 2-bit transform, the constrained 1-bit transform and the multiplication-free 1-bit transform, which use matching strategies different from the standard combination of an exhaustive search with the mean absolute difference criterion. These techniques offer a lower computational load than traditional algorithms. Although their motion compensation accuracy is only slightly lower than that of the other techniques, their results in terms of objective quality (peak signal-to-noise ratio) and entropy are comparable. This makes them suitable candidates for embedded-device applications, where lower complexity translates to lower power consumption and hence improved device autonomy.
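The appeal of these transforms is that block matching reduces to bit operations. Here is a minimal sketch of 1-bit-transform block matching: each frame becomes a bit-plane by comparison against a local mean, and blocks are matched by counting non-matching bits (XOR) instead of summing absolute differences. The filter size, block size and search range are illustrative choices, not the exact parameters from the literature:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def one_bit_transform(frame, size=9):
    """Bit-plane: 1 where a pixel exceeds its local mean (mean filter as a
    simple stand-in for the multiband-pass filter used in the 1BT papers)."""
    return frame >= uniform_filter(frame.astype(np.float32), size=size)

def match_block(cur_bits, ref_bits, y, x, block=16, search=7):
    """Full search over +/-search, scored by the number of non-matching points."""
    cur = cur_bits[y:y+block, x:x+block]
    best, best_mv = block * block + 1, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            ry, rx = y + dy, x + dx
            if ry < 0 or rx < 0 or ry+block > ref_bits.shape[0] or rx+block > ref_bits.shape[1]:
                continue
            nnmp = np.count_nonzero(cur ^ ref_bits[ry:ry+block, rx:rx+block])
            if nnmp < best:
                best, best_mv = nnmp, (dy, dx)
    return best_mv, best

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (64, 64)).astype(np.uint8)
cur = np.roll(ref, (2, -3), axis=(0, 1))   # fabricate known motion by shifting
mv, cost = match_block(one_bit_transform(cur), one_bit_transform(ref), 24, 24)
print("estimated displacement into reference:", mv, "cost:", cost)
```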
10.
This paper illustrates a hierarchical generative model for representing and recognizing compositional object categories with large intra-category variance. In this model, objects are broken into their constituent parts, and the variability of the configurations and relationships between these parts is modeled by stochastic attribute graph grammars, which are embedded in an And-Or graph for each compositional object category. The model combines the power of a stochastic context-free grammar (SCFG), which expresses the variability of part configurations, with a Markov random field (MRF), which represents the pictorial spatial relationships between these parts. As a generative model, different object instances of a category can be realized as a traversal through the And-Or graph arriving at a valid configuration (by analogy, like a valid sentence in a language). The inference/recognition procedure is intimately tied to the structure of the model and follows a probabilistic formulation consisting of bottom-up detection steps for the parts, which in turn recursively activate the grammar rules for top-down verification and searches for missing parts. We present experiments comparing our results to state-of-the-art methods and demonstrate the potential of the proposed framework on compositional objects with cluttered backgrounds, using training and testing data from the public Lotus Hill and Caltech datasets.
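The generative traversal is easy to picture in miniature: an And node expands all of its children, an Or node chooses one child according to branch probabilities, and leaves are terminal parts. The toy "clock" grammar below is an invented example, not a category from the Lotus Hill or Caltech data:

```python
import random

# A tiny And-Or graph: (node_type, children). Or branches carry weights.
AOG = {
    "clock":  ("AND", ["frame", "hands"]),
    "frame":  ("OR",  [("round", 0.7), ("square", 0.3)]),
    "hands":  ("AND", ["hour_hand", "minute_hand"]),
}

def sample(node):
    """Return the list of terminal parts for one random configuration."""
    if node not in AOG:                      # terminal part
        return [node]
    kind, children = AOG[node]
    if kind == "AND":                        # And node: expand every child
        return [p for c in children for p in sample(c)]
    branch = random.choices([c for c, _ in children],
                            weights=[w for _, w in children])[0]
    return sample(branch)                    # Or node: choose one branch

random.seed(0)
print(sample("clock"))   # e.g. ['round', 'hour_hand', 'minute_hand']
```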