1.
Understanding of the underlying mechanism of rubber-metal adhesion has advanced significantly over the last few decades. Researchers have investigated the effect of various ingredients, such as hexamethoxymethyl melamine, resorcinol, cobalt stearate, and silica, on the rubber-metal interface, yet the role of each ingredient in rubber-metal interfacial adhesion remains a subject of scrutiny. In this article, a typical belt skim compound of a truck radial tire is selected and the effect of each adhesive ingredient on adhesion strength is explored. Among these ingredients, the effect of cobalt stearate is noteworthy: it improved adhesion strength over the control compound by 12% (unaged) and 11% (humid-aged). For a detailed understanding of the effect of cobalt stearate on adhesion, scanning electron microscopy and energy dispersive spectroscopy are used to assess rubber coverage and the distribution of elements. X-ray photoelectron spectroscopy results clarify the impact of CuxS layer depth on rubber-metal adhesion; the depth profile of the CuxS layer is found to be one of the dominant factors in the retention of rubber-metal adhesion. This study thus examines how different adhesive ingredients affect the formation and depth of the CuxS layer at the rubber-metal interface and establishes a correlation with adhesion strength.
2.
In a wireless sensor network (WSN), random occurrences of faulty nodes degrade the quality of service of the network. In this paper, we propose an efficient fault detection and routing (EFDR) scheme to manage a large WSN. Faulty nodes are detected using the temporal and spatial correlation of neighbouring nodes' sensed data together with heartbeat messages passed by the cluster head. In the EFDR scheme, three linear cellular automata (CA) are used to represent transmitter-circuit/battery/microcontroller faults, receiver-circuit faults, and sensor-circuit faults. In addition, an L-system-rule-based data routing scheme is proposed to determine the optimal routing path between the cluster head and the base station. The proposed EFDR technique detects and manages faulty nodes efficiently; simulation results show an 86% improvement in the rate of energy loss compared to an existing algorithm.
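The detection step described here, comparing a node's reading against the spatially correlated readings of its neighbours and checking for missed heartbeats from the cluster head, can be illustrated with a short sketch. This is not the paper's EFDR algorithm; the neighbourhood structure, deviation threshold, and heartbeat bookkeeping below are illustrative assumptions.

```python
from statistics import median

def detect_faulty_nodes(readings, neighbours, missed_heartbeats,
                        max_deviation=5.0, max_missed=3):
    """Flag nodes whose readings deviate from their neighbourhood median
    or that have missed too many heartbeat messages (illustrative sketch)."""
    faulty = set()
    for node, value in readings.items():
        neigh_values = [readings[n] for n in neighbours.get(node, []) if n in readings]
        # Spatial correlation check: compare against the neighbourhood median.
        if neigh_values and abs(value - median(neigh_values)) > max_deviation:
            faulty.add(node)
        # Heartbeat check: too many consecutive heartbeats missed at the cluster head.
        if missed_heartbeats.get(node, 0) >= max_missed:
            faulty.add(node)
    return faulty

# Example: node "n3" reports an outlier reading and has missed heartbeats.
readings = {"n1": 21.0, "n2": 21.5, "n3": 48.0, "n4": 20.8}
neighbours = {"n1": ["n2", "n3"], "n2": ["n1", "n3"], "n3": ["n1", "n2", "n4"], "n4": ["n3"]}
missed = {"n3": 4}
print(detect_faulty_nodes(readings, neighbours, missed))  # {'n3'}
```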
3.
Informally stated, we present a randomized algorithm that, given black-box access to the polynomial f computed by an unknown/hidden arithmetic formula \(\phi\), reconstructs, on average, an equivalent or smaller formula \(\hat{\phi}\) in time polynomial in the size of its output \(\hat{\phi}\). Specifically, we consider arithmetic formulas whose underlying tree is a complete binary tree, whose leaf nodes are labeled by affine forms (i.e., degree-one polynomials) over the input variables, and whose internal nodes consist of alternating layers of addition and multiplication gates. We call these alternating normal form (ANF) formulas. If a polynomial f can be computed by an arithmetic formula μ of size s, it can also be computed by an ANF formula \(\phi\), possibly of slightly larger size \(s^{O(1)}\). Our algorithm gets as input black-box access to the output polynomial f (i.e., for any point x in the domain, it can query the black box and obtain f(x) in one step) of a random ANF formula \(\phi\) of size s (wherein the coefficients of the affine forms in the leaf nodes of \(\phi\) are chosen independently and uniformly at random from a large enough subset of the underlying field). With high probability (over the choice of coefficients in the leaf nodes), the algorithm efficiently (i.e., in time \(s^{O(1)}\)) computes an ANF formula \(\hat{\phi}\) of size s computing f. This is then the strongest model of arithmetic computation for which a reconstruction algorithm is presently known, albeit efficient in a distributional sense rather than in the worst case.
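A random ANF formula of the kind described, a complete binary tree with random affine forms at the leaves and alternating +/× layers, can itself serve as the black box that such a reconstruction algorithm queries. The sketch below only builds and evaluates one such formula; it does not implement the reconstruction procedure, and the prime field and depth are illustrative assumptions.

```python
import random

P = 10**9 + 7  # a large prime field, chosen arbitrarily for illustration

def random_anf(depth, num_vars):
    """Build a random ANF formula: a complete binary tree of the given depth with
    random affine-form leaves and alternating +/* layers (illustrative sketch)."""
    if depth == 0:
        # Leaf: an affine form c0 + c1*x1 + ... + cn*xn with random coefficients.
        return ("affine", [random.randrange(P) for _ in range(num_vars + 1)])
    op = "+" if depth % 2 == 0 else "*"          # alternate gate types layer by layer
    return (op, random_anf(depth - 1, num_vars), random_anf(depth - 1, num_vars))

def evaluate(node, x):
    """Black-box evaluation of the formula at the point x (a list of field elements)."""
    if node[0] == "affine":
        coeffs = node[1]
        return (coeffs[0] + sum(c * xi for c, xi in zip(coeffs[1:], x))) % P
    left, right = evaluate(node[1], x), evaluate(node[2], x)
    return (left + right) % P if node[0] == "+" else (left * right) % P

phi = random_anf(depth=4, num_vars=3)            # formula size s grows with the depth
point = [random.randrange(P) for _ in range(3)]
print(evaluate(phi, point))                      # one black-box query f(x)
```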
5.
Subgraph querying has wide applications in fields such as cheminformatics and bioinformatics. Given a query graph q, a subgraph-querying algorithm retrieves from a graph database D all graphs D(q) that have q as a subgraph. Subgraph querying is costly because it relies on subgraph isomorphism tests, which are NP-complete. Graph indices are commonly used to improve the performance of subgraph querying in graph databases: subgraph-querying algorithms first construct a candidate answer set by filtering out a set of false answers and then verify each candidate graph using subgraph isomorphism tests. To build graph indices, various kinds of substructure (subgraph, subtree, or path) features have been proposed with the goal of maximizing the filtering rate. Each of them works with a specifically designed index structure; for example, discriminative and frequent subgraph features work with gIndex, δ-TCFG features work with FG-index, etc. We propose Lindex, a graph index that indexes subgraphs contained in database graphs. Nodes in Lindex represent key-value pairs, where the key is a subgraph in the database and the value is the list of database graphs containing the key. We propose two heuristics, used in the construction of Lindex, that allow us to answer subgraph queries while conducting fewer subgraph isomorphism tests. Consequently, Lindex improves subgraph-querying efficiency. In addition, Lindex is compatible with any choice of features. Empirically, we demonstrate that Lindex used in conjunction with subgraph indexing features proposed in previous works outperforms other specifically designed index structures. As a novel index structure, Lindex (1) is effective in filtering false graphs, (2) provides fast index lookups, (3) is fast with respect to index construction and maintenance, and (4) can be constructed using any set of substructure index features. These four properties result in a fast and scalable subgraph-querying infrastructure. We substantiate the benefits of Lindex and its disk-resident variant Lindex+ both theoretically and empirically.
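The filter-then-verify pattern that Lindex accelerates can be sketched with a plain inverted index over substructure features and a subgraph isomorphism check from networkx. This is not the Lindex data structure itself (no key-subgraph lattice or construction heuristics), only the surrounding query pipeline; the edge-label features and toy graphs are illustrative choices.

```python
import networkx as nx
from networkx.algorithms import isomorphism

def edge_features(g):
    """A crude substructure feature: the set of sorted node-label pairs over all edges."""
    return {tuple(sorted((g.nodes[u]["label"], g.nodes[v]["label"]))) for u, v in g.edges}

def build_index(database):
    index = {}
    for gid, g in database.items():
        for feat in edge_features(g):
            index.setdefault(feat, set()).add(gid)
    return index

def query(q, database, index):
    # Filtering: a graph can contain q only if it contains every feature of q.
    candidates = set(database)
    for feat in edge_features(q):
        candidates &= index.get(feat, set())
    # Verification: run the (expensive) subgraph isomorphism test on the survivors only.
    answers = []
    for gid in candidates:
        gm = isomorphism.GraphMatcher(database[gid], q,
                                      node_match=lambda a, b: a["label"] == b["label"])
        if gm.subgraph_is_isomorphic():
            answers.append(gid)
    return answers

def labeled_path(labels):
    g = nx.Graph()
    g.add_nodes_from((i, {"label": l}) for i, l in enumerate(labels))
    g.add_edges_from(zip(range(len(labels) - 1), range(1, len(labels))))
    return g

database = {"g1": labeled_path("CCO"), "g2": labeled_path("CNN")}
index = build_index(database)
print(query(labeled_path("CO"), database, index))  # ['g1']
```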
6.
Discharges of combined sewer overflows (CSOs) and stormwater are recognized as an important source of environmental contamination. However, the harsh sewer environment and the particular hydraulic conditions during rain events reduce the reliability of traditional flow measurement probes. An in situ system for sewer water flow monitoring based on video images was evaluated. Algorithms to determine water velocities were developed based on image-processing techniques: the image-based water velocity algorithm identifies surface features and measures their positions with respect to real-world coordinates. A web-based user interface and a three-tier system architecture enable remote configuration of the cameras and of the image-processing algorithms so that flow velocity is calculated automatically on-line. Results of investigations conducted in a CSO are presented. The system was found to measure water velocities reliably, thereby providing the means to understand particular hydraulic behaviors.
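A minimal version of this image-based velocity estimate, tracking surface features between consecutive frames and converting the pixel displacement into a velocity, can be sketched with OpenCV's Lucas-Kanade optical flow. The pixel-to-metre scale, frame rate, and feature parameters below are illustrative assumptions, not the calibration or algorithm used by the authors.

```python
import numpy as np
import cv2

def surface_velocity(prev_frame, frame, metres_per_pixel, fps):
    """Estimate a mean surface flow velocity (m/s) from two consecutive frames
    by tracking corner features with pyramidal Lucas-Kanade optical flow."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

    # Detect surface features (foam, floating debris, ripples) to track.
    p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200, qualityLevel=0.01, minDistance=7)
    if p0 is None:
        return None

    # Track the features into the next frame.
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None)
    good_new, good_old = p1[status.flatten() == 1], p0[status.flatten() == 1]
    if len(good_new) == 0:
        return None

    # Median pixel displacement is robust to spurious matches; convert to m/s.
    displacement_px = np.median(np.linalg.norm(good_new - good_old, axis=-1))
    return displacement_px * metres_per_pixel * fps

# Example (hypothetical calibration): 1 px = 2 mm on the water surface, 25 fps video.
# v = surface_velocity(frame_a, frame_b, metres_per_pixel=0.002, fps=25)
```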
7.
Authors use images to present a wide variety of important information in documents. For example, two-dimensional (2-D) plots display important data in scientific publications. End-users often seek to extract this data and convert it into a machine-processable form so that it can be analyzed automatically or compared with other existing data. Existing document data extraction tools are semi-automatic and require users to provide metadata and interactively extract the data. In this paper, we describe a system that extracts data from documents fully automatically, completely eliminating the need for human intervention. The system uses a supervised learning-based algorithm to classify figures in digital documents into five classes: photographs, 2-D plots, 3-D plots, diagrams, and others. An integrated algorithm then extracts numerical data from the data points and lines in the 2-D plot images, along with the axes and their labels and the data symbols in the figure's legend and their associated labels. We demonstrate via an empirical evaluation that the proposed system and its component algorithms are effective. Our data extraction system has the potential to be a vital component in high-volume digital libraries.
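The first stage, classifying document figures into the five classes before any data extraction, can be approximated with a standard supervised pipeline. The sketch below uses hypothetical hand-crafted image features and scikit-learn's SVM; it is not the authors' feature set or classifier, only an illustration of the supervised learning step.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

CLASSES = ["photograph", "2d_plot", "3d_plot", "diagram", "other"]

def figure_features(image):
    """Hypothetical hand-crafted features for a greyscale figure (2-D numpy array):
    fractions of near-white and near-black pixels, grey-level variance, and the
    fraction of rows/columns dominated by dark runs (axes, grid lines, frames)."""
    img = image.astype(float) / 255.0
    near_white = (img > 0.95).mean()
    near_black = (img < 0.05).mean()
    row_runs = (np.mean(img < 0.2, axis=1) > 0.5).mean()
    col_runs = (np.mean(img < 0.2, axis=0) > 0.5).mean()
    return [near_white, near_black, img.var(), row_runs, col_runs]

def train_figure_classifier(images, labels):
    """Train an SVM on the features above; `labels` holds entries from CLASSES."""
    X = np.array([figure_features(im) for im in images])
    X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.2, random_state=0)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X_train, y_train)
    print("held-out accuracy:", clf.score(X_test, y_test))
    return clf
```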
8.
Selection of a robot for a specific industrial application is one of the most challenging problems in a real-time manufacturing environment. It has become more and more complicated owing to the increasing complexity and the advanced features and facilities that different manufacturers continuously incorporate into their robots. At present, different types of industrial robots with diverse capabilities, features, facilities, and specifications are available in the market. The manufacturing environment, product design, production system, and cost involved are some of the most influential factors that directly affect the robot selection decision. The decision maker needs to identify and select the best-suited robot in order to achieve the desired output with minimum cost and the specific application ability required. This paper attempts to solve the robot selection problem using two of the most appropriate multi-criteria decision-making (MCDM) methods and compares their relative performance for a given industrial application. The first MCDM approach is 'VIsekriterijumsko KOmpromisno Rangiranje' (VIKOR), a compromise ranking method, and the other is 'ELimination Et Choix Traduisant la REalité' (ELECTRE), an outranking method. Two real-time examples are cited in order to demonstrate and validate the applicability and potential of both MCDM methods. It is observed that the relative rankings of the alternative robots obtained using these two MCDM methods match quite well with those derived by past researchers.
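To make the compromise-ranking step concrete, here is a small VIKOR sketch: it normalizes the decision matrix against the best and worst value of each criterion, computes the group utility S, the individual regret R, and the compromise index Q, and ranks alternatives by Q. The decision matrix, criterion weights, and v = 0.5 below are illustrative values, not the robot data used in the paper.

```python
import numpy as np

def vikor(matrix, weights, benefit, v=0.5):
    """Rank alternatives with VIKOR. `matrix` is alternatives x criteria,
    `benefit[j]` is True if criterion j is to be maximized, and v balances
    the group utility S against the individual regret R."""
    m = np.asarray(matrix, dtype=float)
    w = np.asarray(weights, dtype=float)
    best = np.where(benefit, m.max(axis=0), m.min(axis=0))
    worst = np.where(benefit, m.min(axis=0), m.max(axis=0))
    # Weighted, normalized distance of each alternative from the ideal value.
    d = w * (best - m) / (best - worst)
    S = d.sum(axis=1)            # group utility
    R = d.max(axis=1)            # individual regret
    Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())
    return np.argsort(Q)         # alternative indices, best (smallest Q) first

# Hypothetical example: 3 robots, criteria = [load capacity (kg), repeatability (mm), cost (k$)]
matrix = [[60, 0.40, 45],
          [55, 0.15, 60],
          [70, 0.30, 50]]
ranking = vikor(matrix, weights=[0.4, 0.35, 0.25], benefit=[True, False, False])
print("robots ranked best to worst:", ranking)
```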
9.
In a one-and-a-half-year prospective study, we investigated the role of different water sources, used for both drinking and domestic purposes, in diarrheal disease transmission in diarrhea-endemic foci of urban slums in Kolkata, India. Of 517 water samples collected from different sources, stored washing water showed a higher prevalence of fecal coliforms (58%, p < 0.0001) than stored drinking water (28%) and tap/tubewell water (8%). Among the different sources, stored washing-water samples also had the largest proportion of physico-chemical parameters outside permissible ranges. Fecal coliform levels in household washing-water containers were comparatively high, and almost two-thirds of these samples failed to reach a satisfactory level of residual chlorine. Interestingly, 7% of stored washing-water samples were found to harbor Vibrio cholerae. Improper use of stored water and unsafe or poor sanitation practices, such as inadequate hand washing, are highlighted as contributory factors in sustained diarrheal episodes. The vulnerability of stored water for domestic use, a hitherto unexplored source at the domiciliary level, in an urban slum where enteric infections are endemic is reported for the first time. This work highlights the impact of the quality of stored water at the domiciliary level on fecal-oral contamination and disease transmission.