Article search: 2160 results found (search time: 15 ms).
81.
In this paper, we verify how far electric disturbance signals can be compressed without compromising the analysis of the encoded fault records. A recently proposed compression algorithm, referred to as Damped Sinusoidal Matching Pursuit (DSMP), has the remarkable feature of producing representations that are both compact and physically interpretable. However, for fault analysis applications, one is primarily interested in how accurately the analysis can be performed on compressed signals, rather than in mean-squared error figures. Unlike previous work on digital fault record compression, the performance of the DSMP method is evaluated using a protocol based on fault analysis procedures commonly performed by expert engineers. This protocol is applied to compare the results obtained in the analysis of uncompressed records and their compressed versions at different compression ratios. The results show that DSMP is a reliable compression system, since it achieves high compression ratios (6.4:1) without causing misinterpretation in fault analysis.
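The greedy atom-selection principle underlying matching-pursuit coders such as DSMP can be sketched as follows. This is a generic matching pursuit over a toy dictionary of damped sinusoids, not the paper's DSMP algorithm; the frequencies, damping factors, and test signal are hypothetical.

```python
import numpy as np

def matching_pursuit(signal, dictionary, n_atoms):
    """Greedy matching pursuit: repeatedly pick the dictionary atom most
    correlated with the residual and subtract its projection."""
    residual = signal.astype(float).copy()
    coeffs = []
    for _ in range(n_atoms):
        correlations = dictionary @ residual      # atoms are unit-norm rows
        k = int(np.argmax(np.abs(correlations)))
        c = correlations[k]
        residual -= c * dictionary[k]
        coeffs.append((k, c))
    return coeffs, residual

# Toy dictionary of unit-norm damped sinusoids (hypothetical parameters)
t = np.linspace(0, 1, 256)
atoms = []
for f in (5, 12, 30):            # frequencies (Hz)
    for d in (1.0, 5.0):         # damping factors
        a = np.exp(-d * t) * np.sin(2 * np.pi * f * t)
        atoms.append(a / np.linalg.norm(a))
D = np.array(atoms)

x = 2.0 * D[2] + 0.5 * D[5]      # signal built from two dictionary atoms
coeffs, res = matching_pursuit(x, D, n_atoms=2)
print(coeffs[0][0])              # strongest atom recovered first
```

Because the atoms are not orthogonal, two iterations leave a small (but not exactly zero) residual; DSMP-style coders stop once the residual energy is negligible for the application.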
82.
Searching a dataset for elements similar to a given query element is a core problem in applications that manage complex data, and it has been aided by metric access methods (MAMs). A growing number of applications require indices that can be built quickly and repeatedly, while also providing fast responses to similarity queries. The increase in main memory capacity and its falling cost further motivate the use of memory-based MAMs. In this paper, we propose the Onion-tree, a new and robust dynamic memory-based MAM that slices the metric space into disjoint subspaces to provide quick indexing of complex data. It introduces three major characteristics: (i) a partitioning method that controls the number of disjoint subspaces generated at each node; (ii) a replacement technique that can change the leaf-node pivots during insertion operations; and (iii) extended range and k-NN query algorithms that support the new partitioning method, including a new visiting order of the subspaces in k-NN queries. Performance tests with both real-world and synthetic datasets showed that the Onion-tree is very compact. Comparisons with the MM-tree and a memory-based version of the Slim-tree showed that the Onion-tree was always the fastest to build. The experiments also showed that the Onion-tree significantly improved range and k-NN query processing performance and was the most efficient MAM, followed by the MM-tree, which in turn outperformed the Slim-tree in almost all tests.  相似文献
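The pruning idea shared by pivot-based MAMs can be illustrated with a minimal index that is far simpler than the Onion-tree itself: precomputing each object's distance to a pivot lets a range query use the triangle inequality to skip most distance computations. The class and dataset below are hypothetical illustrations, not the paper's structures.

```python
import math

class PivotIndex:
    """Minimal pivot-based metric index (not the Onion-tree): precompute
    each object's distance to a single pivot, then prune with the
    triangle inequality |d(q,p) - d(x,p)| <= d(q,x)."""
    def __init__(self, objects, dist):
        self.dist = dist
        self.pivot = objects[0]
        self.items = [(x, dist(x, self.pivot)) for x in objects]

    def range_query(self, q, radius):
        dqp = self.dist(q, self.pivot)
        hits, computed = [], 0
        for x, dxp in self.items:
            if abs(dqp - dxp) > radius:   # pruned: x cannot be within radius
                continue
            computed += 1
            if self.dist(q, x) <= radius:
                hits.append(x)
        return hits, computed

euclid = lambda a, b: math.dist(a, b)
pts = [(float(i), 0.0) for i in range(100)]
idx = PivotIndex(pts, euclid)
hits, computed = idx.range_query((10.0, 0.0), 2.5)
print(len(hits))       # objects at x = 8..12 → 5 hits
print(computed < 100)  # pruning skipped most distance computations
```

Hierarchical MAMs such as the Onion-tree extend this idea recursively, slicing the space into nested disjoint subspaces rather than filtering a flat list.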
83.
Research in the area of collision detection permeates most of the literature on simulations, interaction and agent planning, and it is commonly regarded as one of the main bottlenecks for large-scale systems. To this day, despite its importance, most subareas of collision detection lack a common ground for testing and validating solutions, reference implementations and widely accepted benchmark suites. In this paper, we delve into the broad phase of collision detection systems, providing both an open-source framework, named Broadmark, to test, compare and validate algorithms, and an in-depth analysis of the main techniques used so far to tackle the broad-phase problem. The technical challenges of building this framework from the software and hardware perspectives are also described. Within our framework, several original and state-of-the-art implementations of CPU and GPU algorithms are bundled, alongside three benchmark scenes that stress algorithms under several conditions. Furthermore, the system is designed to be easily extensible. We use our framework to carry out an extensive performance comparison among the assembled solutions, detailing the current CPU and GPU state of the art on a common ground. We believe that Broadmark encompasses the principal insights and tools needed to derive and evaluate novel algorithms, thus greatly facilitating discussion about successful broad-phase collision detection solutions.  相似文献
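As a concrete example of the kind of broad-phase algorithm benchmarked in such frameworks, here is a minimal sweep-and-prune sketch over axis-aligned bounding boxes. It is a generic textbook technique, not Broadmark code, and the boxes are hypothetical.

```python
def sweep_and_prune(boxes):
    """1-D sweep and prune: sort boxes by min-x, then test only pairs
    whose x-intervals overlap, checking y-overlap for those candidates.
    boxes: list of (min_x, max_x, min_y, max_y)."""
    order = sorted(range(len(boxes)), key=lambda i: boxes[i][0])
    pairs = []
    for a in range(len(order)):
        i = order[a]
        for b in range(a + 1, len(order)):
            j = order[b]
            if boxes[j][0] > boxes[i][1]:   # sorted: no further x-overlap
                break
            if boxes[i][2] <= boxes[j][3] and boxes[j][2] <= boxes[i][3]:
                pairs.append(tuple(sorted((i, j))))
    return pairs

boxes = [(0, 2, 0, 2), (1, 3, 1, 3), (5, 6, 0, 1), (1.5, 2.5, 4, 5)]
print(sweep_and_prune(boxes))   # → [(0, 1)]
```

The early `break` on the sorted axis is what makes the broad phase cheap: only overlapping candidate pairs are forwarded to the (expensive) narrow phase.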
84.
Applied Intelligence - The 17 Sustainable Development Goals (SDGs) established by the United Nations Agenda 2030 constitute a global blueprint and instrument for peace and prosperity...  相似文献
85.
3-D Container Packing Heuristics
In this paper, we study the 3-D container packing problem. The problem is divided into box selection, space selection, box orientation and new-space generation sub-problems. As a first step, a basic heuristic is devised. From results obtained with this heuristic, problems are categorized as homogeneous or heterogeneous. Two augmenting heuristics are then formulated to deal with these categories. They are complementary in their ability to handle a range of practical problems and in their computational cost. Results using our algorithms exceed the benchmark by 4.5% on average.  相似文献
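The sub-problem decomposition described above can be sketched as a minimal greedy packer: box selection by decreasing volume, first-fit space selection, and guillotine-style new-space generation. Box orientation is omitted as a simplifying assumption, and the container and box sizes are hypothetical; this is not the paper's heuristics.

```python
def pack(container, boxes):
    """Greedy 3-D packing sketch: place the largest unplaced box into the
    first free space that fits it (no rotation), then split the leftover
    volume of that space into three disjoint new spaces."""
    W, H, D = container
    spaces = [(0, 0, 0, W, H, D)]            # (x, y, z, w, h, d)
    placements = []
    for box in sorted(boxes, key=lambda b: b[0] * b[1] * b[2], reverse=True):
        bw, bh, bd = box
        for idx, (x, y, z, w, h, d) in enumerate(spaces):
            if bw <= w and bh <= h and bd <= d:
                placements.append(((x, y, z), box))
                del spaces[idx]
                # guillotine-style split of the remaining volume
                spaces += [(x + bw, y, z, w - bw, h, d),
                           (x, y + bh, z, bw, h - bh, d),
                           (x, y, z + bd, bw, bh, d - bd)]
                break
    return placements

placed = pack((10, 10, 10), [(5, 10, 10), (5, 5, 10), (5, 5, 10)])
print(len(placed))   # all three boxes fit in the 10x10x10 container
```

Practical heuristics of the kind studied in the paper refine each of these decisions (which box, which space, which orientation, how to merge wasted spaces); the sketch only shows how the sub-problems compose.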
86.
Cloud computing systems handle large volumes of data by using almost unlimited computational resources, while spatial data warehouses (SDWs) are multidimensional databases that store huge volumes of both spatial and conventional data. Cloud computing environments have been considered adequate to host voluminous databases, process analytical workloads and deliver database as a service, while spatial online analytical processing (spatial OLAP) queries issued over SDWs are intrinsically analytical. However, hosting a SDW in the cloud and processing spatial OLAP queries over such a database impose novel obstacles. In this article, we introduce the novel concepts of cloud SDW and spatial OLAP as a service, and then detail the design of novel schemas for cloud SDWs and for spatial OLAP query processing over them. Furthermore, we evaluate the performance of spatial OLAP query processing in cloud SDWs using our own query processor aided by a cloud spatial index. Moreover, we describe the cloud spatial bitmap index, designed to improve the performance of spatial OLAP query processing in cloud SDWs, and assess it through an experimental evaluation. Results derived from our experiments revealed that this index reduced query response time by 58.20% to 98.89%.  相似文献
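The idea behind bitmap indexing, which a spatial bitmap index builds on, can be sketched in a few lines: one bit vector per attribute value, so conjunctive OLAP predicates reduce to bitwise AND. The class and toy columns below are a generic illustration, not the paper's cloud spatial bitmap index.

```python
class BitmapIndex:
    """Minimal bitmap index sketch: one Python integer per attribute
    value acts as a bit vector over the row ids, so a conjunctive
    predicate is a single bitwise AND of two integers."""
    def __init__(self, column):
        self.bitmaps = {}
        for row, value in enumerate(column):
            self.bitmaps[value] = self.bitmaps.get(value, 0) | (1 << row)

    def rows(self, value):
        return self.bitmaps.get(value, 0)

# Hypothetical fact-table columns
regions = ["north", "south", "north", "east", "south", "north"]
years   = [2020, 2021, 2021, 2020, 2020, 2021]
by_region, by_year = BitmapIndex(regions), BitmapIndex(years)

match = by_region.rows("north") & by_year.rows(2021)   # AND of bitmaps
hit_rows = [r for r in range(len(regions)) if match >> r & 1]
print(hit_rows)   # → [2, 5]
```

A spatial variant additionally precomputes which rows satisfy spatial predicates (e.g., containment in a query window), so the spatial filter also becomes a bitmap that can be AND-ed with conventional ones.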
87.
The emergence of gestural interaction devices has given rise to various studies on multimodal human-computer interaction aimed at improving user experience (UX). However, there is a knowledge gap regarding the use of these devices to enhance learning. We present an exploratory study which analysed UX with a multimodal immersive videogame prototype based on a Portuguese historical/cultural episode. Evaluation tests took place in high school environments and at public videogaming events. Two users were present simultaneously in the same virtual reality (VR) environment: one as the helmsman aboard Vasco da Gama's fifteenth-century Portuguese ship, and the other as the mythical Adamastor stone giant at the Cape of Good Hope. The helmsman player wore a VR headset to explore the environment, whereas the giant player used body motion to control the giant and observed the results on a screen, with no headset. This allowed a preliminary characterisation of UX, identifying challenges and the potential use of these devices in multi-user virtual learning contexts. We also discuss the combined use of such devices for the future development of similar systems, and its implications for improving learning through multimodal human-computer interaction.  相似文献
88.
The code STATFLUX, implementing a new and simple statistical procedure for the calculation of transfer coefficients in radionuclide transport to animals and plants, is proposed. The method is based on the general multiple-compartment model, which uses a system of linear equations involving geometrical volume considerations. Flow parameters were estimated by employing two different least-squares procedures, the Derivative and Gauss-Marquardt methods, with the available experimental data of radionuclide concentrations as the input functions of time. The solution of the inverse problem, which relates a given set of flow parameters to the time evolution of the concentration functions, is achieved via a Monte Carlo simulation procedure.

Program summary

Title of program: STATFLUX
Catalogue identifier: ADYS_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADYS_v1_0
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Licensing provisions: none
Computer for which the program is designed and others on which it has been tested: Micro-computer with Intel Pentium III, 3.0 GHz
Installation: Laboratory of Linear Accelerator, Department of Experimental Physics, University of São Paulo, Brazil
Operating system: Windows 2000 and Windows XP
Programming language used: Fortran-77 as implemented in Microsoft Fortran 4.0. NOTE: Microsoft Fortran includes non-standard features which are used in this program. Standard Fortran compilers such as g77, f77, ifort and NAG95 are not able to compile the code, and therefore it has not been possible for the CPC Program Library to test the program.
Memory required to execute with typical data: 8 Mbytes of RAM and 100 MB of hard disk space
No. of bits in a word: 16
No. of lines in distributed program, including test data, etc.: 6912
No. of bytes in distributed program, including test data, etc.: 229 541
Distribution format: tar.gz
Nature of the physical problem: The investigation of transport mechanisms for radioactive substances through environmental pathways is very important for the radiological protection of populations. One such pathway, associated with the food chain, is the grass-animal-man sequence. The distribution of trace elements in humans and laboratory animals has been intensively studied over the past 60 years [R.C. Pendlenton, C.W. Mays, R.D. Lloyd, A.L. Brooks, Differential accumulation of iodine-131 from local fallout in people and milk, Health Phys. 9 (1963) 1253-1262]. In addition, investigations on the incidence of cancer in humans, and a possible causal relationship to radioactive fallout, have been undertaken [E.S. Weiss, M.L. Rallison, W.T. London, W.T. Carlyle Thompson, Thyroid nodularity in southwestern Utah school children exposed to fallout radiation, Amer. J. Public Health 61 (1971) 241-249; M.L. Rallison, B.M. Dobyns, F.R. Keating, J.E. Rall, F.H. Tyler, Thyroid diseases in children, Amer. J. Med. 56 (1974) 457-463; J.L. Lyon, M.R. Klauber, J.W. Gardner, K.S. Udall, Childhood leukemia associated with fallout from nuclear testing, N. Engl. J. Med. 300 (1979) 397-402]. Of the pathways by which radionuclides enter the human (or animal) body, ingestion is the most important because it is closely related to life-long alimentary (or dietary) habits. Radionuclides that are able to enter living cells by metabolic or other processes give rise to localized doses which can be very high. The evaluation of these internally localized doses is of paramount importance for the assessment of radiobiological risks and for radiological protection. The time behavior of trace concentrations in organs is the principal input for the prediction of internal doses after acute or chronic exposure. The General Multiple-Compartment Model (GMCM) is a powerful and widely accepted method for biokinetic studies, allowing the calculation of the concentration of trace elements in organs as a function of time when the flow parameters of the model are known. However, few biokinetic data exist in the literature, and the determination of flow and transfer parameters by statistical fitting for each system is an open problem.
Restrictions on the complexity of the problem: This version of the code works with the constant-volume approximation, which is valid for many situations where the biological half-life of a trace element is shorter than the volume rise time. Another restriction concerns the central-flux model: the code assumes that there is one central compartment (e.g., blood) connecting the flow to all other compartments; flow between the other compartments is not included.
Typical running time: Depends on the calculation chosen. Using the Derivative Method, the running time is very short (a few minutes) for any number of compartments. When the iterative Gauss-Marquardt method is used, the calculation can take approximately 5-6 hours when ~15 compartments are considered.  相似文献
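The central-flux restriction described above can be sketched as a small linear compartment model. The rate constants below are hypothetical, and simple Euler time stepping stands in for the code's actual solvers; this is an illustration of the model structure, not the STATFLUX implementation.

```python
import numpy as np

def concentrations(k, c0, times):
    """Central-flux compartment model: compartment 0 (e.g. blood)
    exchanges with every peripheral compartment, and peripherals do not
    exchange with each other. k[i] = (k_to_i, k_from_i) are hypothetical
    rate constants for flow central->i and i->central."""
    n = len(c0)
    A = np.zeros((n, n))
    for i in range(1, n):
        k_to_i, k_from_i = k[i]
        A[i, 0] += k_to_i       # gain of compartment i from the centre
        A[0, 0] -= k_to_i       # matching loss of the central compartment
        A[0, i] += k_from_i     # return flow i -> central
        A[i, i] -= k_from_i
    # first-order (Euler) time stepping; adequate for a sketch
    out, c, t_prev = [np.array(c0, float)], np.array(c0, float), times[0]
    for t in times[1:]:
        c = c + (t - t_prev) * (A @ c)
        out.append(c.copy())
        t_prev = t
    return np.array(out)

times = np.linspace(0.0, 10.0, 1001)
traj = concentrations({1: (0.5, 0.2), 2: (0.3, 0.1)}, [1.0, 0.0, 0.0], times)
print(abs(traj[-1].sum() - 1.0) < 1e-6)   # columns of A sum to 0: mass conserved
```

Fitting the rate constants to measured concentration curves is the inverse problem the code addresses with least-squares and Monte Carlo procedures.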
90.
Generalized topology design of structures with a buckling load criterion
Material-based models for topology optimization of linear elastic solids with a low volume constraint generate very slender structures composed mainly of bar and beam elements. For this type of structure, the value of the critical buckling load becomes one of the most important design criteria, and its control is essential for meaningful practical designs. This paper addresses this problem, presenting an approach that introduces critical load control into the topology optimization model. Using the material-based formulation for topology design of structures, the problem of optimal structural reinforcement for a critical load criterion is formulated. The stability problem is conveniently reduced to a linearized eigenvalue problem, assuming only effective material properties and macroscopic instability modes. The respective optimality criteria are derived by introducing the Lagrangian associated with the optimization problem. Based on this Lagrangian, a first-order method is used as the basis for the numerical update scheme. Two numerical examples that validate the developments are presented and analysed.  相似文献
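The linearized buckling analysis referred to above reduces to a generalized eigenvalue problem K phi = lambda K_G phi, whose smallest positive eigenvalue is the critical load factor. Below is a minimal numerical sketch with hypothetical 2-DOF stiffness and geometric stiffness matrices, not the paper's formulation or code.

```python
import numpy as np

def critical_load_factor(K, Kg):
    """Linearized buckling: solve K @ phi = lam * Kg @ phi and return
    the smallest positive eigenvalue (the critical load factor)."""
    # reduce to a standard eigenproblem (fine for small dense matrices)
    lams, _ = np.linalg.eig(np.linalg.solve(Kg, K))
    positive = sorted(l.real for l in lams if l.real > 0)
    return positive[0]

# Hypothetical 2-DOF stiffness and geometric stiffness matrices
K  = np.array([[4.0, -1.0], [-1.0, 3.0]])
Kg = np.array([[1.0,  0.0], [ 0.0, 1.0]])
print(round(critical_load_factor(K, Kg), 4))   # ≈ 2.382 for these matrices
```

In a topology optimization loop, this eigenvalue (or a smooth bound on it) enters the Lagrangian as a constraint, and its sensitivity with respect to the design variables drives the first-order update scheme.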