Full-text access type
Paid full text | 1,828 articles |
Free | 74 articles |
Subject category
Electrical engineering | 4 articles |
General | 1 article |
Chemical industry | 435 articles |
Metalworking | 32 articles |
Machinery & instruments | 25 articles |
Building science | 73 articles |
Mining engineering | 2 articles |
Energy & power engineering | 17 articles |
Light industry | 268 articles |
Hydraulic engineering | 24 articles |
Petroleum & natural gas | 6 articles |
Radio electronics | 61 articles |
General industrial technology | 270 articles |
Metallurgical industry | 441 articles |
Atomic energy technology | 3 articles |
Automation technology | 240 articles |
Publication year
2024 | 2 articles |
2023 | 19 articles |
2022 | 63 articles |
2021 | 62 articles |
2020 | 30 articles |
2019 | 47 articles |
2018 | 57 articles |
2017 | 44 articles |
2016 | 50 articles |
2015 | 51 articles |
2014 | 58 articles |
2013 | 102 articles |
2012 | 104 articles |
2011 | 158 articles |
2010 | 126 articles |
2009 | 98 articles |
2008 | 132 articles |
2007 | 113 articles |
2006 | 70 articles |
2005 | 67 articles |
2004 | 58 articles |
2003 | 42 articles |
2002 | 48 articles |
2001 | 41 articles |
2000 | 27 articles |
1999 | 30 articles |
1998 | 21 articles |
1997 | 20 articles |
1996 | 21 articles |
1995 | 12 articles |
1994 | 19 articles |
1993 | 16 articles |
1992 | 17 articles |
1990 | 14 articles |
1989 | 6 articles |
1988 | 4 articles |
1987 | 12 articles |
1986 | 6 articles |
1985 | 6 articles |
1984 | 4 articles |
1983 | 4 articles |
1982 | 3 articles |
1981 | 2 articles |
1980 | 3 articles |
1977 | 3 articles |
1976 | 3 articles |
1975 | 1 article |
1974 | 1 article |
1970 | 1 article |
1969 | 1 article |
Sort order: 1,902 results found, search time 0 ms
101.
The effects of a segmented presentation applied to a visually structured text were examined in the context of the rapid spread of small-screen devices. Empirical research on the influence of text signaling on text processing suggests that a text's visual structure may influence comprehension by facilitating the construction of a coherent text representation. Undergraduate students read a text under different segmented conditions that varied in the type of information provided about the text's visual structure and in the segmentation unit. When the segmented presentation supplied no information, or only local information, about the text's visual structure, comprehension depended on the segmentation unit: a unit that did not fit the visual structure led to an erroneous text representation, whereas a compatible unit led to correct comprehension. When the segmented presentation conveyed the global visual structure, the segmentation unit had no effect on comprehension and more readers constructed a correct text representation. Thus, the visual structure of a text appears to play a role in comprehension, and this role should be taken into account in segmented text presentation.
102.
Marc Baboulin, Alfredo Buttari, Jack Dongarra, Jakub Kurzak, Julie Langou, Julien Langou, Piotr Luszczek, Stanimire Tomov. Computer Physics Communications, 2009, 180(12): 2526-2533
On modern architectures, the performance of 32-bit operations is often at least twice that of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution. The approach presented here applies not only to conventional processors but also to other technologies such as Field Programmable Gate Arrays (FPGAs), Graphics Processing Units (GPUs), and the STI Cell BE processor. Results on modern processor architectures and the STI Cell BE are presented.
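The mixed-precision idea described above can be illustrated outside the paper's own code. The following is a minimal NumPy sketch of mixed-precision iterative refinement, not the authors' ITER-REF program: `np.linalg.solve` on a float32 copy stands in for the single-precision LU factorization and triangular solves, while residuals are computed in float64.

```python
import numpy as np

def mixed_precision_solve(A, b, tol=1e-12, max_iter=30):
    """Solve Ax = b via iterative refinement.

    Cheap solves are done in single precision; the residual, which
    drives the correction, is accumulated in double precision.
    (Real implementations factor A once as PA = LU and reuse the
    factors; here each np.linalg.solve call stands in for that.)
    """
    A32 = A.astype(np.float32)
    # Initial solution computed entirely in single precision.
    x = np.linalg.solve(A32, b.astype(np.float32)).astype(np.float64)
    for _ in range(max_iter):
        r = b - A @ x  # residual in double precision
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        # Correction solve done cheaply in single precision.
        d = np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
        x += d
    return x
```

For a reasonably well-conditioned system, a few refinement steps recover full double-precision accuracy from single-precision solves, which is exactly the observation the abstract exploits.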
Program summary
Program title: ITER-REF
Catalogue identifier: AECO_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AECO_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 7211
No. of bytes in distributed program, including test data, etc.: 41 862
Distribution format: tar.gz
Programming language: FORTRAN 77
Computer: desktop, server
Operating system: Unix/Linux
RAM: 512 Mbytes
Classification: 4.8
External routines: BLAS (optional)
Nature of problem: On modern architectures, the performance of 32-bit operations is often at least twice that of 64-bit operations. By using a combination of 32-bit and 64-bit floating point arithmetic, the performance of many dense and sparse linear algebra algorithms can be significantly enhanced while maintaining the 64-bit accuracy of the resulting solution.
Solution method: Mixed precision algorithms stem from the observation that, in many cases, a single precision solution of a problem can be refined to the point where double precision accuracy is achieved. A common approach to the solution of linear systems, either dense or sparse, is to perform the LU factorization of the coefficient matrix using Gaussian elimination. First, the coefficient matrix A is factored into the product of a lower triangular matrix L and an upper triangular matrix U. Partial row pivoting is generally used to improve numerical stability, resulting in a factorization PA = LU, where P is a permutation matrix. The system is then solved by first solving Ly = Pb (forward substitution) and then solving Ux = y (backward substitution). Due to round-off errors, the computed solution x carries a numerical error magnified by the condition number of the coefficient matrix A. To improve the computed solution, an iterative process can be applied that produces a correction to the solution at each step; this is the method commonly known as iterative refinement. Provided the system is not too ill-conditioned, the algorithm produces a solution correct to the working precision.
Running time: seconds/minutes
103.
Discusses the decision to eliminate the term "neurosis" from the DSM-III. The history of the term is traced; weaknesses of DSM-II pertaining to neurosis are presented; theoretical and political processes in the deletion procedure are described; and an overview is given of the current resolution as presented in DSM-III. Instead of neurosis, "neurotic disorder" and "neurotic process" were distinguished to reduce potential theoretical bias. The process may or may not be seen by the clinician as causal in the disorder, but those of all theoretical persuasions should be able to agree on what the disorder is. (13 ref) (PsycINFO Database Record (c) 2010 APA, all rights reserved)
104.
Abstract: Picture sorts were used to investigate perceptions of women's office clothes, with a sample of ten male and ten female subjects who normally worked in an office environment. The pictures on the cards were taken from catalogues, and showed women's outfits which might be worn in an office. The subjects sorted the cards repeatedly and generated criteria and categories of their own choice. Some of the criteria and categories had not been previously reported in the clothing research literature. Over half of the male subjects, but none of the female subjects, used 'married/unmarried woman' as a sorting criterion, although only one of the images sorted showed a wedding ring. A significantly higher proportion of male than of female subjects used dichotomous categorization (i.e. sorting the cards into two piles for one or more of the criteria). The reasons for this are obscure, but do not appear to be a simple outcome of males not knowing much about female clothing. Previous research into clothing has tended to involve researcher-centred approaches such as semiotics; the results from this study suggest that there would be advantages in wider use of subject-centred approaches such as card sorts, both in this domain and elsewhere. It was concluded that card sorts were a useful method and should be more widely used.
105.
In early or preparatory design stages, an architect or designer sketches out rough ideas, not only about the object or structure being considered, but also about its relation to its spatial context. This is an iterative process, where the sketches are the primary means not only for testing and refining ideas, but also for communicating among a design team and to clients. Hence, sketching is the preferred medium for artists and designers during the early stages of design, albeit with a major drawback: sketches are 2D, and effects such as view perturbations or object movement are not supported, thereby inhibiting the design process. We present an interactive system that allows for the creation of a 3D abstraction of a designed space, built primarily by sketching in 2D within the context of an anchoring design or photograph. The system is progressive in the sense that the interpretations are refined as the user continues sketching. As a key technical enabler, we reformulate the sketch interpretation process as a selection optimization from a set of context-generated canvas planes in order to retrieve a regular arrangement of planes. We demonstrate our system (available at http://geometry.cs.ucl.ac.uk/projects/2016/smartcanvas/) with a wide range of sketches and design studies.
106.
Pillai Karthik, Ganesh Ramaswamy, Radhakrishnan Kanthavel, Ramakrishnan Dhaya, Yesudhas Harold Robinson, Eanoch Golden Julie, Kumar Raghvendra, Long Hoang Viet, Son Le Hoang. Multimedia Tools and Applications, 2021, 80(5): 7077-7101
Multimedia Tools and Applications - Detection and clustering of commercial advertisements plays an important role in multimedia indexing and in the creation of personalized user content. In...
107.
John L. Van Hemert, Julie A. Dickerson. Computer Methods and Programs in Biomedicine, 2011, 101(1): 80-86
Statistical tests are often performed to discover which experimental variables are reacting to specific treatments. Time-series statistical models usually require the researcher to make assumptions about the distribution of measured responses, which may not hold. Randomization tests can be applied to data in order to generate null distributions non-parametrically. However, large numbers of randomizations are required for the precise p-values needed to control false discovery rates. When testing tens of thousands of variables (genes, chemical compounds, or otherwise), significant q-value cutoffs can be extremely small (on the order of 10−5 to 10−8). This requires high-precision p-values, which in turn require large numbers of randomizations. The NVIDIA® Compute Unified Device Architecture® (CUDA®) platform for General Programming on the Graphics Processing Unit (GPGPU) was used to implement an application that performs high-precision randomization tests via Monte Carlo sampling for quickly screening custom test statistics in experiments with large numbers of variables, such as microarrays, Next-Generation sequencing read counts, chromatographic signals, or other abundance measurements. The software has been shown to achieve more than a 12-fold speedup on a Graphics Processing Unit (GPU) compared to a powerful Central Processing Unit (CPU). The main limitation is concurrent random access of shared memory on the GPU. The software is available from the authors.
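The core statistical procedure here, a randomization test with a Monte Carlo sample of permutations, can be sketched on the CPU; the paper's contribution is running very many such tests in parallel on the GPU. Below is a minimal NumPy version for a two-sample difference-of-means statistic (the function name and statistic are illustrative, not the authors' API):

```python
import numpy as np

def randomization_test(x, y, n_perm=10000, seed=0):
    """Two-sample randomization test on |mean(x) - mean(y)|.

    The null distribution is built non-parametrically by shuffling the
    pooled observations; p-value precision scales with n_perm, which is
    why high-precision p-values demand very large numbers of
    randomizations.
    """
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = abs(x.mean() - y.mean())
    n = len(x)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if abs(pooled[:n].mean() - pooled[n:].mean()) >= observed:
            hits += 1
    # Add-one smoothing avoids reporting an exact zero p-value.
    return (hits + 1) / (n_perm + 1)
```

The smallest reportable p-value is 1/(n_perm + 1), so screening at q-value cutoffs near 10−5 to 10−8 forces millions of randomizations per variable, which is the workload the GPU implementation accelerates.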
108.
109.
110.
Outsourcing continues to capture the attention of researchers as more companies move to outsourcing models as part of their business practice. Two areas frequently researched and reported in the literature are the reasons why a company decides to outsource, and outsourcing success factors. This paper describes an in-depth, longitudinal case study that explores both the reasons why the company decided to outsource and the factors that impact on success. The paper describes how Alpha, a very large Australian communications company, approached outsourcing and how its approach matured over a period of 9 years. The paper concludes that although a number of reasons are proposed for a company's decision to outsource, lowering costs was the predominant driver in this case. We also describe other factors identified as important for outsourcing success, such as how contracts are implemented, the type of outsourcing partner arrangement, and outsourcing vendor capabilities.
Robert Jacobs