Full-text access type
Paid full text | 1224 articles |
Free | 80 articles |
Domestic free | 1 article |
Subject classification
Electrical engineering | 12 articles |
General | 1 article |
Chemical industry | 281 articles |
Metalworking | 23 articles |
Machinery and instruments | 24 articles |
Building science | 44 articles |
Mining engineering | 1 article |
Energy and power | 56 articles |
Light industry | 85 articles |
Hydraulic engineering | 8 articles |
Petroleum and natural gas | 2 articles |
Radio engineering | 115 articles |
General industrial technology | 256 articles |
Metallurgical industry | 135 articles |
Nuclear technology | 12 articles |
Automation technology | 250 articles |
Publication year
2023 | 14 articles |
2022 | 25 articles |
2021 | 48 articles |
2020 | 26 articles |
2019 | 46 articles |
2018 | 44 articles |
2017 | 34 articles |
2016 | 48 articles |
2015 | 44 articles |
2014 | 59 articles |
2013 | 91 articles |
2012 | 87 articles |
2011 | 132 articles |
2010 | 77 articles |
2009 | 71 articles |
2008 | 85 articles |
2007 | 54 articles |
2006 | 59 articles |
2005 | 39 articles |
2004 | 25 articles |
2003 | 26 articles |
2002 | 17 articles |
2001 | 3 articles |
2000 | 7 articles |
1999 | 5 articles |
1998 | 17 articles |
1997 | 14 articles |
1996 | 15 articles |
1995 | 10 articles |
1994 | 8 articles |
1993 | 8 articles |
1992 | 2 articles |
1991 | 6 articles |
1990 | 5 articles |
1989 | 2 articles |
1988 | 3 articles |
1987 | 2 articles |
1986 | 2 articles |
1984 | 2 articles |
1983 | 3 articles |
1982 | 5 articles |
1980 | 4 articles |
1979 | 2 articles |
1977 | 4 articles |
1976 | 2 articles |
1958 | 2 articles |
1954 | 3 articles |
1948 | 2 articles |
1947 | 2 articles |
1946 | 2 articles |
Sort order: 1305 results found (search time: 812 ms)
31.
Marc Baboulin, Alfredo Buttari, Jack Dongarra, Jakub Kurzak, Julie Langou, Julien Langou, Piotr Luszczek, Stanimire Tomov. Computer Physics Communications, 2009, 180(12): 2526-2533
On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here can apply not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGA), Graphical Processing Units (GPU), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented.
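The mixed-precision refinement idea the abstract describes can be sketched in a few lines. The snippet below is an illustrative outline in Python/NumPy, not the ITER-REF code itself: the initial solve runs in single precision, while residuals and corrections are accumulated in double precision (a production version would factor the matrix once and reuse the LU factors for every correction step).

```python
import numpy as np

def mixed_precision_solve(A, b, tol=1e-12, max_iter=50):
    """Illustrative mixed-precision iterative refinement (not ITER-REF)."""
    # Initial solve entirely in single precision: the O(n^3) work runs
    # at single-precision speed.
    A32 = A.astype(np.float32)
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        # Residual computed in double precision: only O(n^2) work.
        r = b - A @ x
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        # Correction obtained from a cheap single-precision solve.
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d
    return x
```

Provided the system is not too ill-conditioned, each pass shrinks the error until the solution is accurate to double precision, which is the behaviour the paper exploits.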
Program summary
Program title: ITER-REF
Catalogue identifier: AECO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 7211
No. of bytes in distributed program, including test data, etc.: 41 862
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: desktop, server
Operating system: Unix/Linux
RAM: 512 Mbytes
Classification: 4.8
External routines: BLAS (optional)
Nature of problem: On modern architectures, the performance of 32-bit operations is often at least twice as fast as the performance of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution.
Solution method: Mixed precision algorithms stem from the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved. A common approach to the solution of linear systems, either dense or sparse, is to perform the LU factorization of the coefficient matrix using Gaussian elimination. First, the coefficient matrix A is factored into the product of a lower triangular matrix L and an upper triangular matrix U. Partial row pivoting is generally used to improve numerical stability, resulting in a factorization PA = LU, where P is a permutation matrix. The solution of the system is obtained by first solving Ly = Pb (forward substitution) and then solving Ux = y (backward substitution). Due to round-off errors, the computed solution x carries a numerical error magnified by the condition number of the coefficient matrix A.
To improve the computed solution, an iterative process can be applied that produces a correction to the computed solution at each iteration; this yields the method commonly known as the iterative refinement algorithm. Provided that the system is not too ill-conditioned, the algorithm produces a solution correct to the working precision.
Running time: seconds/minutes
32.
Partial 3D Shape Retrieval by Reeb Pattern Unfolding (cited by: 2; self-citations: 0; other citations: 2)
This paper presents a novel approach for fast and efficient partial shape retrieval on a collection of 3D shapes. Each shape is represented by a Reeb graph associated with geometrical signatures. Partial similarity between two shapes is evaluated by computing a variant of their maximum common sub-graph.
By investigating Reeb graph theory, we take advantage of its intrinsic properties at two levels. First, we show that the segmentation of a shape by a Reeb graph provides charts with disk or annulus topology only. This topology control enables the computation of concise and efficient sub-part geometrical signatures based on parameterisation techniques. Secondly, we introduce the notion of a Reeb pattern on a Reeb graph, along with its structural signature. We show that this information discards Reeb graph structural distortion while still depicting the topology of the related sub-parts. The number of combinations to evaluate in the matching process is then dramatically reduced by considering only the combinations of topology-equivalent Reeb patterns.
The proposed framework is invariant under rigid transformations and robust against non-rigid transformations and surface noise. It queries the collection in interactive time (from 4 to 30 seconds for the largest queries). It outperforms the competing methods of the SHREC 2007 contest in terms of the NDCG vector, providing gains of 14.1% and 40.9% over the approaches by Biasotti et al. [BMSF06] and Cornea et al. [CDS*05], respectively.
As an application, we present an intelligent modelling-by-example system that enables a novice user to rapidly create new 3D shapes by composing shapes of a collection having similar sub-parts.
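The pruning step described in the abstract, comparing only topology-equivalent Reeb patterns, can be illustrated with a small sketch. Here `signature` is a hypothetical function mapping a pattern to a hashable topological key (e.g. genus and boundary count); it is not part of the authors' published code.

```python
from collections import defaultdict

def candidate_pairs(patterns_a, patterns_b, signature):
    """Only sub-parts whose structural (topological) signatures match
    are paired for geometric comparison, which sharply reduces the
    number of combinations evaluated in matching."""
    # Bucket the second collection by topological signature.
    groups = defaultdict(list)
    for p in patterns_b:
        groups[signature(p)].append(p)
    # Pair each query pattern only with topology-equivalent candidates.
    pairs = []
    for p in patterns_a:
        for q in groups.get(signature(p), []):
            pairs.append((p, q))
    return pairs
```

With `len` as a stand-in signature, `candidate_pairs(["aa", "b", "cc"], ["dd", "e"], len)` pairs only strings of equal length, skipping all cross-length comparisons.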
33.
34.
Several studies have stressed that even expert operators who are aware of a machine's limits could adopt its proposals without questioning them (i.e., the complacency phenomenon). In production scheduling for manufacturing, this is a significant problem, as it is often suggested that the machine be allowed to build the production schedule, confining the human role to that of rescheduling. This article evaluates the characteristics of scheduling algorithms on human rescheduling performance, the quality of which was related to complacency. It is suggested that scheduling algorithms be characterized as having result comprehensibility (the result respects the scheduler's expectations in terms of the discourse rules of the information display) or algorithm comprehensibility (the complexity of the algorithm hides some important constraints). The findings stress, on the one hand, that result comprehensibility is necessary to achieve good production performance and to limit complacency. On the other hand, algorithm comprehensibility leads to poor performance due to the very high cost of understanding the algorithm. © 2008 Wiley Periodicals, Inc.
35.
The paper describes a software method to extend ITK (Insight ToolKit, supported by the National Library of Medicine), leading to ITK++. This method, which is based on an extension of the iterator design pattern, allows the processing of regions of interest with arbitrary shapes without modifying the existing ITK code. We experimentally evaluate this work by considering the practical case of liver vessel segmentation from CT-scan images, where it is pertinent to constrain processing to the liver area. Experimental results clearly demonstrate the value of this work: for instance, the anisotropic filtering of this area is performed in only 16 s with our proposed solution, while it takes 52 s using the native ITK framework. A major advantage of this method is that only add-ons are performed: this facilitates the further evaluation of ITK++ while preserving the native ITK framework.
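The extended-iterator idea can be conveyed with a minimal sketch: visit only the pixels inside an arbitrarily shaped region of interest, so a filter touches far fewer pixels than a whole-image pass. The class and names below are illustrative only and do not reflect the actual ITK++ API.

```python
import numpy as np

class MaskedRegionIterator:
    """Iterate only over pixels inside an arbitrarily shaped region
    of interest, given here as a boolean mask (illustrative sketch,
    not the ITK++ implementation)."""
    def __init__(self, image, mask):
        self.image = image
        self.coords = np.argwhere(mask)  # precompute ROI coordinates once
    def __iter__(self):
        for y, x in self.coords:
            yield (y, x), self.image[y, x]

# Example: apply a pointwise operation to a circular ROI only.
img = np.zeros((8, 8))
yy, xx = np.mgrid[0:8, 0:8]
mask = (yy - 4) ** 2 + (xx - 4) ** 2 <= 4
for (y, x), _ in MaskedRegionIterator(img, mask):
    img[y, x] = 1.0
```

Pixels outside the mask are never visited, which is the source of the reported speed-up when a filter is constrained to the liver area.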
36.
This work addresses the soundtrack indexing of multimedia documents. Our purpose is to detect and locate sound units in order to structure the audio data flow in broadcast programmes (reports). We present two audio classification tools that we have developed. The first one, a speech/music classification tool, is based on three original features: entropy modulation, stationary segment duration (computed with a Forward–Backward Divergence algorithm) and number of segments. These are merged with the classical 4 Hz modulation energy. The tool performs two classifications (speech/non-speech and music/non-music) and provides more than 90% accuracy for speech detection and 89% for music detection. The second system, a jingle identification tool, uses a Euclidean distance in the spectral domain to index the audio data flow. Results show that it is efficient: of 132 jingles to recognize, 130 were detected. The systems were tested on TV and radio corpora (more than 10 h). They are simple, robust and can be applied to every corpus without training or adaptation.
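The 4 Hz modulation energy feature mentioned above can be sketched generically: speech energy is modulated at roughly the syllabic rate (about 4 Hz), whereas music usually is not. The computation below is an illustration of that classical feature, not the authors' implementation; the frame length and the 2–6 Hz band are assumed values.

```python
import numpy as np

def modulation_energy_4hz(signal, sr, frame_ms=10):
    """Fraction of envelope spectral energy near 4 Hz (illustrative)."""
    hop = int(sr * frame_ms / 1000)
    n = len(signal) // hop
    # Short-term energy envelope, one value per frame.
    env = np.array([np.sum(signal[i*hop:(i+1)*hop] ** 2) for i in range(n)])
    env = env - env.mean()                       # drop the DC component
    env_sr = 1000.0 / frame_ms                   # envelope sample rate
    spec = np.abs(np.fft.rfft(env)) ** 2
    freqs = np.fft.rfftfreq(len(env), d=1.0 / env_sr)
    band = (freqs >= 2.0) & (freqs <= 6.0)       # energy around 4 Hz
    return spec[band].sum() / max(spec.sum(), 1e-12)
```

A signal whose amplitude is modulated at 4 Hz scores close to 1, while a steady tone scores near 0, which is why the feature separates speech from music.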
Régine André-Obrecht
37.
The Dutch company Chess develops a wireless sensor network (WSN) platform using an epidemic communication model. One of the greatest challenges in the design is to find suitable mechanisms for clock synchronization. In this paper, we study a proposed clock synchronization protocol for the Chess platform. First, we model the protocol as a network of timed automata and verify various instances using the Uppaal model checker. Next, we present a full parametric analysis of the protocol for the special case of cliques (networks with full connectivity), that is, we give constraints on the parameters that are both necessary and sufficient for correctness. These results have been checked using the proof assistant Isabelle. We report on the exhaustive analysis of the protocol for networks with four nodes, and we present a negative result for the special case of line topologies: for any instantiation of the parameters, the protocol will eventually fail if the network grows.
38.
Raheel Hassan Syed, Jasmina Pazardzievska, Julien Bourgeois. The Journal of Supercomputing, 2012, 62(2): 804-827
Due to the extensive growth of grid computing networks, security is becoming a challenge. Usual solutions are not enough to prevent sophisticated attacks fabricated by multiple users, especially when the number of nodes connected to the network changes over time. Attackers can use multiple nodes to launch DDoS attacks, which generate a large number of security alerts. On the one hand, this large number of security alerts degrades the overall performance of the network and creates instability in the operation of the security management solutions. On the other hand, they can help in camouflaging other, real attacks. To address these issues, a correlation mechanism is proposed which reduces the security alerts while continuing to detect attacks in grid computing networks. To obtain more accurate results, a major portion of the experiments were performed by launching DDoS and Brute Force (BF) attacks in a real grid environment, i.e., the Grid'5000 (G5K) network.
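The alert-correlation idea can be sketched as fusing raw alerts that share an attack signature and target within a time window into a single meta-alert, which cuts the alert volume a DDoS or brute-force attack generates. The field names and the window parameter below are illustrative, not the authors' design.

```python
def correlate_alerts(alerts, window=60):
    """Fuse alerts with the same (signature, target) arriving within
    `window` seconds of each other into one meta-alert (sketch)."""
    alerts = sorted(alerts, key=lambda a: a["time"])
    groups = {}   # active meta-alert per (signature, target)
    meta = []
    for a in alerts:
        key = (a["sig"], a["target"])
        g = groups.get(key)
        if g and a["time"] - g["last"] <= window:
            # Same attack still ongoing: fold into the existing meta-alert.
            g["count"] += 1
            g["last"] = a["time"]
        else:
            # New attack episode: open a fresh meta-alert.
            g = {"sig": a["sig"], "target": a["target"],
                 "first": a["time"], "last": a["time"], "count": 1}
            groups[key] = g
            meta.append(g)
    return meta
```

Thousands of raw brute-force alerts against one node collapse into a single meta-alert carrying a count, so the detection signal survives while the alert flood does not.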
39.
Shear Stress in Smooth Rectangular Open-Channel Flows (cited by: 1; self-citations: 0; other citations: 1)
The average bed and sidewall shear stresses in smooth rectangular open-channel flows are determined after solving the continuity and momentum equations. The analysis shows that the shear stresses are functions of three components: (1) gravitational; (2) secondary flows; and (3) interfacial shear stress. An analytical solution in terms of a series expansion is obtained for the case of constant eddy viscosity without secondary currents. In comparison with laboratory measurements, it slightly overestimates the average bed shear stress but underestimates the average sidewall shear stress by 17% when the width–depth ratio becomes large. A second approximation is formulated after introducing two empirical correction factors. The second approximation agrees very well (R2 > 0.99 and average relative error less than 6%) with experimental measurements over a wide range of width–depth ratios.
40.
Julien Mouli-Castillo, Stuart R. Haszeldine, Kevin Kinsella, Mark Wheeldon, Angus McIntosh. International Journal of Hydrogen Energy, 2021, 46(29): 16217-16231
The increased reliance on natural gas for heating worldwide makes the search for carbon-free alternatives imperative, especially if international decarbonisation targets are to be met. Hydrogen does not release carbon dioxide (CO2) at the point of use, which makes it an appealing candidate for decarbonising domestic heating. Hydrogen can be produced either 1) from the electrolysis of water, with no associated carbon emissions, or 2) from steam methane reformation, which produces CO2 that can be captured and stored during production. Hydrogen could be transported to the end-user via gas distribution networks similar to, and adapted from, those in use today. This would reduce both installation costs and end-user disruption. However, before hydrogen can provide domestic heat, it is necessary to assess the risk associated with its distribution in direct comparison to natural gas. Here we develop a comprehensive and multi-faceted quantitative risk assessment tool to assess the difference in risk between current natural gas distribution networks and a potential conversion to a hydrogen-based system. The approach uses novel experimental and modelling work, scientific literature, and findings from historic large-scale testing programmes. As a case study, the risk assessment tool is applied to the newly proposed H100 demonstration (100% hydrogen network) project. The assessment includes the comparative risk of gas releases both upstream and downstream of the domestic gas meter. This research finds that the risk associated with the proposed H100 network (based on its current design) is lower than that of the existing natural gas network by a factor of 0.88.