31.
Our work on active vision has recently focused on the computational modelling of navigational tasks, where our investigations were guided by the idea of approaching vision for behavioural systems in the form of modules that are directly related to perceptual tasks. These studies led us to branch in various directions and inquire into the problems that have to be addressed in order to obtain an overall understanding of perceptual systems. In this paper, we present our views about the architecture of vision systems, about how to tackle the design and analysis of perceptual systems, and about promising future research directions. Our suggested approach to understanding behavioural vision and realizing the relationship between perception and action builds on two earlier approaches, the Medusa philosophy [1] and the Synthetic approach [2]. The resulting framework calls for synthesizing an artificial vision system by studying vision competences of increasing complexity and, at the same time, pursuing the integration of the perceptual components with action and learning modules. We expect that computer vision research in the future will progress in tight collaboration with many other disciplines that are concerned with empirical approaches to vision, i.e. the understanding of biological vision. Throughout the paper, we describe biological findings that motivate computational arguments which we believe will influence studies of computer vision in the near future.
32.
Sentiment lexicons and word embeddings constitute well-established sources of information for sentiment analysis in online social media. Although their effectiveness has been demonstrated in state-of-the-art sentiment analysis and related tasks in the English language, such publicly available resources are much less developed and evaluated for the Greek language. In this paper, we tackle the problems arising when analyzing text in such an under-resourced language. We present and make publicly available a rich set of such resources, ranging from a manually annotated lexicon, to semi-supervised word embedding vectors and annotated datasets for different tasks. Our experiments using different algorithms and parameters on our resources show promising results over standard baselines; on average, we achieve a 24.9% relative improvement in F-score on the cross-domain sentiment analysis task when training the same algorithms with our resources, compared to training them on more traditional feature sources, such as n-grams. Importantly, while our resources were built with the primary focus on the cross-domain sentiment analysis task, they also show promising results in related tasks, such as emotion analysis and sarcasm detection.
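The 24.9% figure above is a relative (not absolute) F-score gain. A minimal sketch of how such a number is derived, with hypothetical precision/recall values standing in for the paper's actual results:

```python
def f_score(precision, recall):
    """Harmonic mean of precision and recall (F1)."""
    return 2 * precision * recall / (precision + recall)

def relative_improvement(f_new, f_baseline):
    """Relative F-score gain of the new resources over a baseline,
    expressed as a fraction of the baseline score."""
    return (f_new - f_baseline) / f_baseline

# Hypothetical numbers: n-gram baseline vs. lexicon/embedding features.
f_base = f_score(0.60, 0.55)
f_ours = f_score(0.74, 0.70)
gain = relative_improvement(f_ours, f_base)   # a fraction, e.g. 0.249 = 24.9%
```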
33.
With the proliferation of smartphones and social media, journalistic practices are increasingly dependent on information and images contributed by local bystanders through Internet-based applications and platforms. Verifying the images produced by these sources is integral to forming accurate news reports, given that there is very little or no control over the type of user-contributed content, and hence, images found on the Web are always likely to be the result of image tampering. In particular, image splicing, i.e. the process of taking an area from one image and placing it in another, is a typical tampering practice, often used with the goal of misinforming or manipulating Internet users. Currently, the localization of splicing traces in images found on the Web is a challenging task. In this work, we present the first, to our knowledge, exhaustive evaluation of today's state-of-the-art algorithms for splicing localization, that is, algorithms attempting to detect which pixels in an image have been tampered with as the result of such a forgery. As our aim is the application of splicing localization on images found on the Web and social media environments, we evaluate a large number of algorithms aimed at this problem on datasets that match this use case, while also evaluating algorithm robustness in the face of image degradation due to JPEG recompressions. We then extend our evaluations to a large dataset we formed by collecting real-world forgeries that have circulated on the Web during the past years. We review the performance of the implemented algorithms and attempt to draw broader conclusions with respect to the robustness of splicing localization algorithms for application in Web environments, their current weaknesses, and the future of the field. Finally, we openly share the framework and the corresponding algorithm implementations to allow for further evaluations and experimentation.
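A common way to score a splicing-localization algorithm is pixel-level F1 between its predicted tampering map and a binary ground-truth mask. The abstract does not specify the exact metric used, so the threshold and the metric choice below are assumptions, shown only as a minimal sketch:

```python
import numpy as np

def localization_f1(pred_map, gt_mask, threshold=0.5):
    """Pixel-level F1 between a predicted tampering-probability map and a
    binary ground-truth splicing mask (True = tampered pixel)."""
    pred = np.asarray(pred_map) >= threshold   # binarize the probability map
    gt = np.asarray(gt_mask).astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    if tp == 0:
        return 0.0                             # no correctly localized pixels
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Robustness to JPEG recompression can then be measured by recompressing each test image at decreasing quality factors and re-running this score.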
34.
Real-world datasets often consist of data expressed through multiple modalities. Clustering such datasets is in most cases a challenging task, as the involved modalities are often heterogeneous. In this paper we propose a graph-based multimodal clustering approach. The proposed approach utilizes a relevant example clustering in order to learn a model of the "same cluster" relationship between a pair of items. This model is subsequently used in order to organize the items of the collection to be clustered in a graph, where the nodes represent the items and a link between a pair of nodes exists if the model predicted that the corresponding pair of items belong to the same cluster. Eventually, a graph clustering algorithm is applied on the graph in order to produce the final clustering. The proposed approach is applied to two problems that are typically treated using clustering techniques; in particular, it is applied to the problem of detecting social events and to the problem of discovering different landmark views in collections of social multimedia.
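The pipeline above (pairwise same-cluster predictions → graph → graph clustering) can be sketched as follows. The abstract does not name the graph clustering algorithm used, so this sketch substitutes connected components as the simplest stand-in, with `same_cluster` standing in for the learned pairwise model:

```python
from itertools import combinations

def graph_cluster(items, same_cluster):
    """Build a graph with an edge for every pair the model predicts to be
    in the same cluster, then return its connected components as clusters."""
    adj = {item: set() for item in items}
    for a, b in combinations(items, 2):
        if same_cluster(a, b):          # learned pairwise model (assumed)
            adj[a].add(b)
            adj[b].add(a)
    seen, clusters = set(), []
    for start in items:                 # depth-first traversal per component
        if start in seen:
            continue
        stack, comp = [start], set()
        while stack:
            node = stack.pop()
            if node in comp:
                continue
            comp.add(node)
            stack.extend(adj[node] - comp)
        seen |= comp
        clusters.append(comp)
    return clusters
```

For example, with a toy model that groups integers by parity, `graph_cluster([1, 2, 3, 4], lambda a, b: a % 2 == b % 2)` yields two clusters, the odds and the evens.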
35.
This study reports on the impact of curing conditions on the mechanical properties and leaching behaviour of inorganic polymer (IP) mortars made from a water-quenched fayalitic slag. Three similar IP mortars were produced by mixing slag, aggregate and activating solution, and cured in three different environments for 28 d: a) at 20 °C and relative humidity (RH) ~50% (T20RH50), b) at 20 °C and RH ≥ 90% (T20RH90) and c) at 60 °C and RH ~20% (T60RH20). Compressive strength (EN 196-1) varied between 19 MPa (T20RH50) and 31 MPa (T20RH90); the difference was attributed to cracks formed during curing. Geochemical modelling and two leaching tests were performed: the EA NEN 7375 tank test and the BS EN 12457-1 single batch test. Results show that Cu, Ni, Pb, Zn and As leaching occurred even at high pH, which varied between 10 and 11 in the tank test's leachates and between 12 and 12.5 in the single batch test's leachates. Leaching values obtained were below the requirements for non-shaped materials of the Flemish legislation for As, Cu and Ni in the single batch test.
36.
37.
A technique to produce spherical and monodisperse particles of selected polymers is presented. Liquid precursors, either mixtures of organic monomers and initiator catalysts or polymers dissolved in organic solvents, were sprayed inside a vertical thermal reactor. The temperature range in the reactor was 400–670 K and the experiments were conducted in a nitrogen atmosphere. Atomization was achieved by an acoustically excited aerosol generator. Batches of equal-size particles of two thermoplastic materials, poly(styrene) and poly(methyl methacrylate), were obtained in the range of 30–60 μm in diameter. Elemental analysis showed that the C and H composition of the produced particles was very close to theoretically expected values. The thermal environment, atomization conditions, and residence times the particles experienced in the reactor were explored using numerical techniques; residence times on the order of 4–10 s were estimated.
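A first-order estimate of such residence times comes from a plug-flow balance: divide reactor length by the mean gas velocity, after expanding the standard-condition flow rate to the reactor temperature. The sketch below uses this textbook relation; all reactor dimensions and flow rates fed in are illustrative assumptions, not values from the paper:

```python
import math

def residence_time(length_m, radius_m, flow_slpm, temp_k):
    """Plug-flow estimate of carrier-gas residence time in a vertical
    tube reactor. The standard-condition volumetric flow is expanded to
    the reactor temperature with the ideal-gas law."""
    area = math.pi * radius_m ** 2        # tube cross-section, m^2
    q_std = flow_slpm / 1000.0 / 60.0     # m^3/s at 273.15 K
    q_hot = q_std * (temp_k / 273.15)     # thermal expansion of the gas
    velocity = q_hot / area               # mean gas speed, m/s
    return length_m / velocity            # seconds
```

Note that doubling the feed flow rate halves the estimated residence time, which is why atomization conditions and thermal environment must be explored together.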
38.
Water-soluble functionalized carbon nanotubes (CNTs) have been prepared and further conjugated through a stable covalent bond with two peptidomimetics. The structural design of the peptidomimetics covalently grafted to the CNTs is based on their structural similarity with the metabolites of antagonist G of substance P. A variety of analytical spectroscopic methods, in combination with electron microscopy and thermal analysis, aided the structural and morphological characterization of the four newly synthesized peptidomimetic–CNT conjugates. It is demonstrated that the trypsin inhibitory effect of the peptidomimetic–CNT is enhanced as a result of the high loading of peptidomimetics onto the skeleton of the modified water-soluble CNTs. Additionally, the peptidomimetic-functionalized CNT conjugates can be recovered and re-employed for up to six biological evaluation cycles while showing the same trypsin inhibitory activity. Such a nanosized system is extremely advantageous for the inhibition of inflammation or malignancy and could find potential future biological applications in the area of drug delivery systems.
39.
External sorting of large files of records involves use of disk space to store temporary files, processing time for sorting, and transfer time between CPU, cache, memory, and disk. Compression can reduce disk and transfer costs and, in the case of external sorts, cut merge costs by reducing the number of runs. It is therefore plausible that the overall costs of external sorting could be reduced through use of compression. In this paper, we propose new compression techniques for data consisting of sets of records. The best of these techniques, based on building a trie of variable-length common strings, provides fast compression and decompression and allows random access to individual records. We show experimentally that our trie-based compression leads to a significant reduction in sorting costs; that is, it is faster to compress the data, sort it, and then decompress it than to sort the uncompressed data. While the degree of compression is not quite as great as can be obtained with adaptive techniques such as Lempel-Ziv methods, these cannot be applied to sorting. Our experiments show that, in comparison to approaches such as Huffman coding of fixed-length substrings, our novel trie-based method is faster and provides greater size reductions. Preliminary versions of parts of this paper, not including the work on "vargram" compression, appeared in [41].
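The key properties claimed above are that each record compresses independently (preserving random access, so compressed records can still be compared and merged) and that compression is reversible. The sketch below illustrates both with a flat codebook of frequent fixed-width substrings; this deliberately simplifies the paper's trie of variable-length common strings, and assumes ASCII records so literal bytes never collide with the escape codes:

```python
from collections import Counter

def build_codebook(records, width=4, size=8):
    """Map the most frequent fixed-width substrings to single-byte escape
    codes (a crude stand-in for the paper's trie of common strings)."""
    counts = Counter()
    for r in records:
        for i in range(len(r) - width + 1):
            counts[r[i:i + width]] += 1
    return {s: bytes([0xF0 + k])
            for k, (s, _) in enumerate(counts.most_common(size))}

def compress(record, codebook):
    """Greedy left-to-right substitution; each record is compressed on its
    own, so individual records remain randomly accessible."""
    out, i = bytearray(), 0
    while i < len(record):
        for s, code in codebook.items():      # most frequent strings first
            if record.startswith(s, i):
                out += code
                i += len(s)
                break
        else:
            out += record[i:i + 1].encode()   # literal byte (ASCII assumed)
            i += 1
    return bytes(out)

def decompress(data, codebook):
    """Invert the escape codes back to their substrings."""
    inv = {code[0]: s for s, code in codebook.items()}
    return "".join(inv.get(b, chr(b)) for b in data)
```

Because the code for a given substring is fixed for the whole file, compressed records can be sorted and merged without decompressing them first, which is the source of the reported savings.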
40.
Moments constitute a well-known tool in the field of image analysis and pattern recognition, but they suffer from the drawback of high computational cost. Efforts for the reduction of the required computational complexity have been reported, mainly focused on binary images, but recently some approaches for gray images have also been presented. In this study, we propose a simple but effective approach for the computation of gray image moments. The gray image is decomposed into a set of binary images. Some of these binary images are substituted by an ideal image, which is called a "half-intensity" image. The remaining binary images are represented using the image block representation concept and their moments are computed fast using block techniques. The proposed method computes approximated moment values within 2–3% of the exact values and operates in real time (i.e., at video rate). The procedure is parameterized by the number m of "half-intensity" images used, which controls the approximation error and the speed gain of the method. The computational complexity is O(kL²), where k is the number of blocks and L is the moment order.
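The decomposition underlying the method can be checked directly in its exact form: an 8-bit gray image is the sum of its bit planes weighted by 2^bit, so summing the (binary) bit-plane moments with those weights reproduces the direct gray-image moment. The sketch below shows only this exact identity; the block representation and the "half-intensity" approximation that give the paper its speed-up are omitted:

```python
import numpy as np

def gray_moment_direct(img, p, q):
    """Geometric moment m_pq computed directly on the gray image."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    return float((img * (x ** p) * (y ** q)).sum())

def gray_moment_bitplanes(img, p, q):
    """Same moment via binary decomposition: each bit plane is a binary
    image whose moment is weighted by its bit significance."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    xy = (x ** p) * (y ** q)
    total = 0.0
    for bit in range(8):                 # 8 bit planes for an 8-bit image
        plane = (img >> bit) & 1         # binary image for this bit
        total += (2 ** bit) * float((plane * xy).sum())
    return total
```

In the paper, the per-plane sums are the part accelerated by block techniques, and replacing the m least significant planes with "half-intensity" images trades the 2–3% error for additional speed.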