  Subscription full text   1975 articles
  Free   69 articles
  Free (domestic)   7 articles
Electrical engineering   32
General   2
Chemical industry   716
Metalworking   65
Machinery and instruments   56
Building science   61
Mining engineering   12
Energy and power engineering   83
Light industry   126
Hydraulic engineering   6
Petroleum and natural gas   1
Weapons industry   3
Radio electronics   147
General industrial technology   280
Metallurgical industry   82
Atomic energy technology   10
Automation technology   369
  2023   3
  2022   29
  2021   110
  2020   25
  2019   38
  2018   51
  2017   48
  2016   73
  2015   50
  2014   71
  2013   144
  2012   102
  2011   100
  2010   114
  2009   82
  2008   101
  2007   72
  2006   55
  2005   58
  2004   55
  2003   52
  2002   47
  2001   34
  2000   24
  1999   33
  1998   23
  1997   28
  1996   34
  1995   23
  1994   24
  1993   29
  1992   15
  1991   19
  1990   23
  1989   12
  1988   24
  1987   20
  1986   15
  1985   18
  1984   24
  1983   26
  1982   22
  1981   26
  1980   15
  1979   16
  1978   10
  1977   10
  1976   5
  1975   8
  1974   4
Sort order: 2051 query results in total; search took 31 ms
61.
A key aspect of resource management is the efficient and effective deployment of available resources whenever they are needed. The issue typically covers two areas: monitoring the resources used by software systems and managing resource consumption. A key property of any monitoring system is its reconfigurability – the ability to limit monitoring at any given time to the resources that are actually needed at that moment. The authors of this article propose a fully dynamic and reconfigurable monitoring system based on the concept of Adaptable Aspect-Oriented Programming (AAOP), in which a set of AOP aspects is used to run an application in a manner specified by an adaptability strategy. The model can be used to implement systems that monitor an application and its execution environment and perform actions such as changing the current set of resource management constraints applied to the application when application or environment conditions change. Any aspect that implements a predefined interface may be used by the AAOP-based monitoring system as a source of information. The system relies on dynamic AOP, meaning that the aspects (the sources of information) may be enabled and disabled at run time.
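To make the idea of dynamic, reconfigurable monitoring aspects concrete, here is a minimal Python sketch under stated assumptions; the `MonitoringAspect` interface, `WallClockAspect`, `AspectRegistry`, and the `monitored` decorator are hypothetical illustrations of the concept, not the authors' AAOP API:

```python
import time
from typing import Callable, Dict


class MonitoringAspect:
    """Common interface that every information-source aspect must implement."""
    def sample(self) -> float:
        raise NotImplementedError


class WallClockAspect(MonitoringAspect):
    """Hypothetical aspect: reports the wall-clock time of the last monitored call."""
    def __init__(self) -> None:
        self.last_elapsed = 0.0

    def sample(self) -> float:
        return self.last_elapsed


class AspectRegistry:
    """Aspects can be enabled/disabled at run time, mimicking dynamic AOP weaving."""
    def __init__(self) -> None:
        self._aspects: Dict[str, MonitoringAspect] = {}
        self._enabled: Dict[str, bool] = {}

    def register(self, name: str, aspect: MonitoringAspect, enabled: bool = True) -> None:
        self._aspects[name] = aspect
        self._enabled[name] = enabled

    def set_enabled(self, name: str, enabled: bool) -> None:
        self._enabled[name] = enabled  # reconfiguration decided by the adaptability strategy

    def enabled_aspects(self) -> Dict[str, MonitoringAspect]:
        return {n: a for n, a in self._aspects.items() if self._enabled[n]}


registry = AspectRegistry()
clock = WallClockAspect()
registry.register("wall_clock", clock)


def monitored(fn: Callable) -> Callable:
    """'Weave' monitoring around a function; only currently enabled aspects are consulted."""
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        clock.last_elapsed = time.perf_counter() - start
        for name, aspect in registry.enabled_aspects().items():
            print(f"{name}: {aspect.sample():.6f}")  # values would feed the strategy
        return result
    return wrapper


@monitored
def work(n: int) -> int:
    return sum(i * i for i in range(n))


work(100_000)                               # monitored
registry.set_enabled("wall_clock", False)   # strategy turns monitoring off
work(100_000)                               # runs without reporting
```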
62.
Two mobile agents, starting from different nodes of a network at possibly different times, have to meet at the same node. This problem is known as rendezvous. Agents move in synchronous rounds. Each agent has a distinct integer label from the set \(\{1,\ldots ,L\}\). Two main efficiency measures of rendezvous are its time (the number of rounds until the meeting) and its cost (the total number of edge traversals). We investigate tradeoffs between these two measures. A natural benchmark for both time and cost of rendezvous in a network is the number of edge traversals needed for visiting all nodes of the network, called the exploration time. Hence we express the time and cost of rendezvous as functions of an upper bound E on the time of exploration (where E and a corresponding exploration procedure are known to both agents) and of the size L of the label space. We present two natural rendezvous algorithms. Algorithm Cheap has cost O(E) (and, in fact, a version of this algorithm for the model where the agents start simultaneously has cost exactly E) and time O(EL). Algorithm Fast has both time and cost \(O(E\log L)\). Our main contributions are lower bounds showing that, perhaps surprisingly, these two algorithms capture the tradeoffs between time and cost of rendezvous almost tightly. We show that any deterministic rendezvous algorithm of cost asymptotically E (i.e., of cost \(E+o(E)\)) must have time \(\varOmega (EL)\). On the other hand, we show that any deterministic rendezvous algorithm with time complexity \(O(E\log L)\) must have cost \(\varOmega (E\log L)\).
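The following toy simulation is not Algorithm Cheap or Algorithm Fast from the paper; it only illustrates, under simplifying assumptions (an n-node ring, simultaneous start, exploration = one trip around the ring, so E = n), how distinct labels break symmetry: agents act in blocks of E rounds, exploring or waiting according to the bits of their labels, and in the first block where the two labels differ exactly one agent traverses the whole ring and meets the waiting one.

```python
# Toy rendezvous on an n-node ring with simultaneous start (illustration only).
# Assumptions: exploration = walking the ring once (E = n edge traversals);
# agents act in synchronous blocks of E rounds; in block j an agent explores
# if bit j of its label is 1 and waits otherwise.  Distinct labels differ in
# some bit, so in that block exactly one agent visits every node and meets
# the waiting one.

def bits(label: int, width: int):
    return [(label >> j) & 1 for j in range(width)]

def rendezvous_on_ring(n: int, start1: int, start2: int,
                       label1: int, label2: int, label_bits: int = 8):
    pos = [start1 % n, start2 % n]
    b = [bits(label1, label_bits), bits(label2, label_bits)]
    round_no = 0
    for block in range(label_bits):
        for _ in range(n):                      # one exploration = n steps
            for a in (0, 1):
                if b[a][block] == 1:            # this agent explores in this block
                    pos[a] = (pos[a] + 1) % n
            round_no += 1
            if pos[0] == pos[1]:
                return round_no                 # rendezvous achieved
    return None                                 # equal labels could never meet here

print(rendezvous_on_ring(n=10, start1=0, start2=7, label1=5, label2=6))
```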
63.
64.
A computational model is a computer program that attempts to simulate an abstract model of a particular system. Computational models involve enormous numbers of calculations and often require supercomputer speed. As personal computers become more and more powerful, more laboratory experiments can be converted into computer models that can be examined interactively by scientists and students without the risk and cost of the actual experiments. The future of programming is concurrent programming. The threaded programming model provides application programmers with a useful abstraction of concurrent execution of multiple tasks. The objective of this release is to address the design of an architecture for a scientific application that executes as multiple threads, as well as the implementation of the related shared data structures.

New version program summary

Program title: GrowthCP
Catalogue identifier: ADVL_v4_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v4_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 32 269
No. of bytes in distributed program, including test data, etc.: 8 234 229
Distribution format: tar.gz
Programming language: Free Object Pascal
Computer: multi-core x64-based PC
Operating system: Windows XP, Vista, 7
Has the code been vectorised or parallelized?: No
RAM: More than 1 GB. The program requires a 32-bit or 64-bit processor to run the generated code. Memory is addressed using 32-bit (on 32-bit processors) or 64-bit (on 64-bit processors with 64-bit addressing) pointers. The amount of addressed memory is limited only by the available amount of virtual memory.
Supplementary material: The figures mentioned in the "Summary of revisions" section can be obtained here.
Classification: 4.3, 7.2, 6.2, 8, 14
External routines: Lazarus [1]
Catalogue identifier of previous version: ADVL_v3_0
Journal reference of previous version: Comput. Phys. Comm. 181 (2010) 709
Does the new version supersede the previous version?: Yes
Nature of problem: Reflection high-energy electron diffraction (RHEED) is an important in-situ analysis technique capable of giving quantitative information about the growth process of thin layers and its control. It can be used to calibrate the growth rate, analyze surface morphology, calibrate surface temperature, monitor the arrangement of the surface atoms, and provide information about growth kinetics. Such control allows the development of structures in which electrons can be confined in space, giving quantum wells or even quantum dots. In order to determine the positions of atoms in the first few layers, the RHEED intensity must be measured as a function of the scattering angles and then compared with dynamical calculations. The objective of this release is to address the design of an architecture for an application that simulates the rocking curves of RHEED intensities during the hetero-epitaxial growth of thin films.
Solution method: GrowthCP is a complex numerical model that uses multiple threads to simulate the epitaxial growth of thin layers. The model consists of two transactional parts. The first part is a mathematical model based on the Runge–Kutta method with adaptive step-size control. The second part is a first-principles one-dimensional RHEED computational model based on solving a one-dimensional Schrödinger equation. Several problems can arise when applications contain a mixture of data-access code, numerical code, and presentation code. Such applications are difficult to maintain, because interdependencies between all the components cause strong ripple effects whenever a change is made anywhere. Adding new data views often requires reimplementing numerical code, which then requires maintenance in multiple places. To avoid problems of this type, the computational and threading layers of the project have been implemented as one design pattern within a Model-View-Controller architecture.
Reasons for new version: Responding to users' feedback, the Growth09 project has been upgraded to a standard that allows sample computations of RHEED intensities for a disordered surface to be carried out for a wide range of single structures and epitaxial hetero-structures. The design pattern on which the project is based has also been improved. It is shown that this model can be used effectively for multithreaded growth simulations of thin epitaxial layers and of the corresponding RHEED intensities for a wide range of single- and hetero-structures. Responding to users' feedback, the present release has been implemented using a well-documented free compiler [1] that does not require special configuration or the installation of additional libraries.
Summary of revisions:
  • 1. 
    The logical structure of the Growth09 program has been modified according to the scheme shown in Fig. 1. The class diagram in Fig. 1 is a static view of the main platform-specific elements of the GrowthCP architecture. Fig. 2 provides a dynamic view by showing a simplified sequence diagram of the creation and destruction of the process.
  • 2. 
    The program requires the user to provide the appropriate parameters in the form of a knowledge base for the crystal structures under investigation. These parameters are loaded from the parameters.ini files at run-time. Instructions to prepare the .ini files can be found in the new distribution.
  • 3. 
    The program enables different growth models and one-dimensional dynamical RHEED calculations to be carried out for an fcc lattice with a three-atom basis, an fcc lattice with a two-atom basis, an fcc lattice with a single-atom basis, and for zinc-blende, sodium-chloride, and wurtzite crystalline structures and hetero-structures; in addition, the Fourier component of the scattering potential in the TRHEEDCalculations.crystPotUgXXX() procedure can be modified and implemented according to users' specific application requirements. The Fourier component of the scattering potential of the whole crystalline hetero-structure can be determined as a sum of contributions coming from all thin slices of the individual atomic layers. To carry out one-dimensional calculations of the scattering potentials, the program uses properly constructed self-consistent procedures.
  • 4. 
    Each component of the system shown in Figs. 1 and 2 is fully extendable and can easily be adapted to new, changing requirements. The two essential logical elements of the system, the TGrowthTransaction and TRHEEDCalculations classes, were designed and implemented so that they pass information between themselves without the need for data-exchange files. In consequence, each of them can be modified and/or extended independently. Implementing other types of differential equations, and different algorithms for solving them, in the TGrowthTransaction class does not require another implementation of the TRHEEDCalculations class. Similarly, implementing other forms of the scattering potential and different algorithms for the RHEED calculation has no influence on the construction of the TGrowthTransaction class.
Unusual features: The program is distributed in the form of the main project GrowthCP.lpr, with associated files, and should be compiled using the Lazarus IDE. The program should be compiled with English/USA regional and language options.
Running time: The typical running time depends on the machine and on the user parameters.
References:
  • [1] 
    http://sourceforge.net/projects/lazarus/files/.
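GrowthCP itself is written in Free Object Pascal; the Python sketch below only illustrates the adaptive step-size control mentioned in the solution method (a step-doubling variant of classical fourth-order Runge–Kutta), with a placeholder ODE and tolerances that are not taken from the program:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate_adaptive(f, t0, y0, t_end, h=1e-2, tol=1e-8):
    """Step-doubling adaptive step-size control (illustrative, not GrowthCP's code)."""
    t, y = t0, np.asarray(y0, dtype=float)
    while t < t_end:
        h = min(h, t_end - t)
        y_big = rk4_step(f, t, y, h)                 # one step of size h
        y_half = rk4_step(f, t, y, h / 2)            # two steps of size h/2
        y_small = rk4_step(f, t + h / 2, y_half, h / 2)
        err = np.max(np.abs(y_small - y_big))        # local error estimate
        if err <= tol:                               # accept the step
            t, y = t + h, y_small
        # grow or shrink h based on the estimate (local error of RK4 ~ h^5)
        h *= min(2.0, max(0.2, 0.9 * (tol / max(err, 1e-16)) ** 0.2))
    return t, y

# Example: simple exponential decay dy/dt = -y (placeholder model equation)
print(integrate_adaptive(lambda t, y: -y, 0.0, [1.0], 5.0))
```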
65.
We study the problem of the amount of information required to perform fast broadcasting in tree networks. The source located at the root of a tree has to disseminate a message to all nodes. In each round each informed node can transmit to one child. Nodes do not know the topology of the tree but an oracle knowing it can give a string of bits of advice to the source which can then pass it down the tree with the source message. The quality of a broadcasting algorithm with advice is measured by its competitive ratio: the worst case ratio, taken over n-node trees, between the time of this algorithm and the optimal broadcasting time in the given tree. Our goal is to find a trade-off between the size of advice and the best competitive ratio of a broadcasting algorithm for n-node trees. We establish such a trade-off with an approximation factor of O(n^ε), for an arbitrarily small positive constant ε. This is the first communication problem for which a trade-off between the size of advice and the efficiency of the solution is shown for arbitrary size of advice.
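For reference, the benchmark used above (the optimal broadcasting time in a given tree when each informed node can inform one child per round) can be computed by the well-known greedy rule of serving the most demanding subtree first; a minimal sketch follows, with a dictionary-based tree encoding chosen purely for illustration:

```python
def broadcast_time(children, root=0):
    """Optimal broadcast time in a rooted tree where an informed node may
    inform one child per round: recursively sort the children by their own
    broadcast times (descending) and serve the most demanding subtree first."""
    def t(v):
        sub = sorted((t(c) for c in children.get(v, [])), reverse=True)
        # the i-th served child (1-based) is reached after i rounds, then needs sub[i-1] more
        return max((i + s for i, s in enumerate(sub, start=1)), default=0)
    return t(root)

# Example: root 0 with children 1 and 2; node 1 has children 3 and 4.
tree = {0: [1, 2], 1: [3, 4]}
print(broadcast_time(tree))   # -> 3 rounds
```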
66.
In this paper, we discuss the importance of information systems in modeling interactive computations performed on (complex) granules, and we propose a formal approach to interactive computations based on generalized information systems and rough sets, which can be combined with other soft computing paradigms such as fuzzy sets or evolutionary computing, as well as with machine learning and data mining techniques. Information systems are treated as dynamic granules used for representing the results of the interaction of attributes with the environment. Two kinds of attributes are distinguished, namely perception attributes, including sensory attributes, and action attributes. Sensory attributes are the basic perception attributes; other perception attributes are constructed on the basis of the sensory ones. Actions are activated when their guards, which are often complex and vague concepts, are satisfied to a satisfactory degree. The guards can be approximated on the basis of measurements performed by sensory attributes rather than defined exactly. Satisfiability degrees for guards are the results of reasoning called adaptive judgment. The approximations are induced using hierarchical modeling. We show that information systems can be used for modeling more advanced forms of interaction in hierarchical modeling. The role of hierarchical interactions is emphasized in the modeling of interactive computations. Some illustrative examples of interactions used in the ACT-R 6.0 system are reported. ACT-R 6.0 is based on a cognitive architecture and can be treated as an example of a highly interactive complex granule that can be involved in hierarchical interactions. For modeling interactive computations, we propose information systems much more general than the dynamic information systems studied previously (see, e.g., Ciucci (2010) [8] and Pałasiński and Pancerz (2010) [32]). For example, dynamic information systems make it possible to consider incremental changes in information systems, but they do not contain the perception and action attributes necessary for modeling interactive computations, in particular intrastep interactions.
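As a concrete reminder of the rough-set machinery the abstract builds on, the short sketch below computes lower and upper approximations of a concept in a toy information system (an attribute-value table); the objects, attributes, and concept are invented for illustration and are unrelated to ACT-R:

```python
from collections import defaultdict

def approximations(objects, attributes, concept):
    """Lower/upper approximation of `concept` (a set of object ids) with respect
    to the indiscernibility relation induced by `attributes` in an information
    system `objects`: {id: {attribute: value}}."""
    # group objects into indiscernibility classes by their attribute vectors
    classes = defaultdict(set)
    for oid, row in objects.items():
        classes[tuple(row[a] for a in attributes)].add(oid)
    lower, upper = set(), set()
    for cls in classes.values():
        if cls <= concept:          # class entirely inside the concept
            lower |= cls
        if cls & concept:           # class overlaps the concept
            upper |= cls
    return lower, upper

# Toy information system with two perception attributes
objs = {1: {"temp": "high", "noise": "low"},
        2: {"temp": "high", "noise": "low"},
        3: {"temp": "low",  "noise": "low"},
        4: {"temp": "high", "noise": "high"}}
print(approximations(objs, ["temp", "noise"], concept={1, 3}))
# lower = {3}, upper = {1, 2, 3}
```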
67.
A Fast Search Algorithm for a Large Fuzzy Database   Total citations: 1 (self-citations: 0, other: 1)
In this paper, we propose a fast search algorithm for a large fuzzy database that stores iris codes or data with a similar binary structure. The fuzzy nature of iris codes and their high dimensionality render many modern search algorithms, mainly relying on sorting and hashing, inadequate. The algorithm that is used in all current public deployments of iris recognition is based on a brute force exhaustive search through a database of iris codes, looking for a match that is close enough. Our new technique, Beacon Guided Search (BGS), tackles this problem by dispersing a multitude of “beacons” in the search space. Despite random bit errors, iris codes from the same eye are more likely to collide with the same beacons than those from different eyes. By counting the number of collisions, BGS shrinks the search range dramatically with a negligible loss of precision. We evaluate this technique using 632,500 iris codes enrolled in the United Arab Emirates (UAE) border control system, showing a substantial improvement in search speed with a negligible loss of accuracy. In addition, we demonstrate that the empirical results match theoretical predictions.
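The collision-counting idea behind BGS can be illustrated in a few lines. The sketch below is a simplified stand-in rather than the deployed algorithm: the beacon definitions (random groups of bit positions), the code length, and all thresholds are hypothetical choices; each enrolled code is indexed by its value on every beacon, and a query shortlists the codes that collide on enough beacons before an exact Hamming-distance check.

```python
import random
from collections import Counter

BITS, N_BEACONS, BEACON_BITS = 256, 32, 10   # illustrative sizes, not the paper's

random.seed(0)
beacons = [random.sample(range(BITS), BEACON_BITS) for _ in range(N_BEACONS)]

def beacon_values(code):
    """Project a code (list of 0/1) onto every beacon's bit positions."""
    return [tuple(code[i] for i in positions) for positions in beacons]

def build_index(database):
    index = [dict() for _ in beacons]            # one hash table per beacon
    for cid, code in database.items():
        for b, val in enumerate(beacon_values(code)):
            index[b].setdefault(val, []).append(cid)
    return index

def search(index, database, query, min_collisions=3, max_hamming=60):
    hits = Counter()
    for b, val in enumerate(beacon_values(query)):
        for cid in index[b].get(val, []):
            hits[cid] += 1                       # count beacon collisions
    shortlist = [cid for cid, c in hits.items() if c >= min_collisions]
    scored = [(sum(x != y for x, y in zip(database[cid], query)), cid) for cid in shortlist]
    matches = [(d, cid) for d, cid in scored if d <= max_hamming]
    return min(matches) if matches else None

db = {cid: [random.randint(0, 1) for _ in range(BITS)] for cid in range(1000)}
probe = db[42][:]                                # noisy copy of an enrolled code
for i in random.sample(range(BITS), 20):
    probe[i] ^= 1
idx = build_index(db)
print(search(idx, db, probe))                    # expected to report code 42 with a small distance
```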
68.
We consider deterministic broadcasting in radio networks whose nodes have full topological information about the network. The aim is to design a polynomial algorithm, which, given a graph G with source s, produces a fast broadcast scheme in the radio network represented by G. The problem of finding a fastest broadcast scheme for a given graph is NP-hard, hence it is only possible to get an approximation algorithm. We give a deterministic polynomial algorithm which produces a broadcast scheme of length O(D + log²n), for every n-node graph of diameter D, thus improving a result of Gąsieniec et al. (PODC 2005) [17] and solving a problem stated there. Unless the inclusion NP ⊆ BPTIME(n^O(log log n)) holds, this length of a polynomially constructible deterministic broadcast scheme is optimal.
A preliminary version of this paper (with a weaker result) appeared in the Proc. 7th International Workshop on Approximation Algorithms for Combinatorial Optimization Problems (APPROX'2004), August 2004, Harvard University, Cambridge, USA, LNCS 3122, 171–182. Research of the second author supported in part by NSERC discovery grant and by the Research Chair in Distributed Computing of the Université du Québec en Outaouais. Part of this work was done during the second author's visit at the Max-Planck-Institut für Informatik.
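To clarify the collision model in which such broadcast schemes are evaluated, here is a small scheme simulator, a sketch under the usual assumptions (synchronous rounds, undirected graph, a node receives the message in a round only if exactly one of its informed neighbours transmits); the example graph and schedule are made up:

```python
def simulate_broadcast(adj, source, schedule):
    """Simulate a radio broadcast scheme.
    adj: {node: set(neighbours)}; schedule: list of sets, schedule[t] = nodes
    scheduled to transmit in round t.  A node hears the message in a round iff
    exactly one of its informed neighbours transmits (two or more collide);
    scheduled but uninformed nodes stay silent.  Returns the informed set."""
    informed = {source}
    for transmitters in schedule:
        active = transmitters & informed           # only informed nodes can send
        newly = set()
        for v in adj:
            if v in informed:
                continue
            sending_neighbours = active & adj[v]
            if len(sending_neighbours) == 1:       # exactly one => no collision
                newly.add(v)
        informed |= newly
    return informed

# 4-node square 0-1, 0-2, 1-3, 2-3.  If 1 and 2 transmitted together, node 3
# would hear only a collision; the naive one-transmitter-per-round scheme works.
adj = {0: {1, 2}, 1: {0, 3}, 2: {0, 3}, 3: {1, 2}}
scheme = [{0}, {1}]
print(simulate_broadcast(adj, source=0, schedule=scheme) == {0, 1, 2, 3})  # True
```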
69.
Kernel PCA for Feature Extraction and De-Noising in Nonlinear Regression   Total citations: 4 (self-citations: 0, other: 4)
In this paper, we propose the application of the Kernel Principal Component Analysis (PCA) technique for feature selection in a high-dimensional feature space, where the input variables are mapped by a Gaussian kernel. The extracted features are employed in the regression problems of chaotic Mackey–Glass time-series prediction in a noisy environment and of estimating human signal detection performance from brain event-related potentials elicited by task-relevant signals. We compared results obtained using either Kernel PCA or linear PCA as the data preprocessing step. On the human signal detection task, we report the superiority of Kernel PCA feature extraction over linear PCA. Similarly to linear PCA, we demonstrate de-noising of the original data by an appropriate selection of the nonlinear principal components. The theoretical relation and an experimental comparison of Kernel Principal Components Regression, Kernel Ridge Regression and ε-insensitive Support Vector Regression are also provided.
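A minimal scikit-learn sketch of the kind of pipeline described above, with Gaussian-kernel Kernel PCA as the feature-extraction and de-noising step feeding kernel ridge regression and ε-insensitive SVR; the synthetic data, kernel width, and component count are placeholders rather than the paper's experimental settings:

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.kernel_ridge import KernelRidge
from sklearn.svm import SVR

# Synthetic nonlinear regression problem with additive noise (placeholder data)
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(400, 5))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(400)

# Feature extraction with a Gaussian (RBF) kernel; the inverse transform gives de-noising
kpca = KernelPCA(n_components=10, kernel="rbf", gamma=0.1, fit_inverse_transform=True)
Z = kpca.fit_transform(X)
X_denoised = kpca.inverse_transform(Z)       # reconstruction from the leading components

# Regressors mentioned in the abstract, here with arbitrary hyper-parameters
for name, model in [("kernel ridge", KernelRidge(kernel="rbf", gamma=0.1, alpha=1.0)),
                    ("eps-SVR", SVR(kernel="rbf", gamma=0.1, epsilon=0.1))]:
    model.fit(Z[:300], y[:300])
    print(name, "R^2 on held-out data:", round(model.score(Z[300:], y[300:]), 3))
```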
70.
Most algorithms for blind source separation/extraction and independent component analysis (ICA) cannot separate mixtures of sources with extremely low kurtosis or colored Gaussian sources. Moreover, to separate mixtures of super- and sub-Gaussian signals, it is necessary to use adaptive (time-varying) or switching nonlinearities that are controlled via computationally intensive measures, such as estimating the sign of the kurtosis of the extracted signals. In this paper, we develop a very simple neural network model and an efficient on-line adaptive algorithm that sequentially extracts temporally correlated sources with arbitrary distributions, including colored Gaussian sources and sources with extremely low (or even zero) kurtosis. The validity and performance of the algorithm have been confirmed by extensive computer simulation experiments.
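The sketch below is not the authors' on-line neural algorithm; it is a batch illustration, in the spirit of second-order methods such as AMUSE, of the fact that temporally correlated sources (including colored Gaussian ones) can be separated using time-lagged covariances alone, without any kurtosis-based nonlinearity:

```python
import numpy as np

def extract_temporal_sources(X, lag=1):
    """Separate temporally correlated zero-mean sources from linear mixtures X
    (shape: sensors x samples) using second-order statistics only (AMUSE-style):
    whiten the data, then diagonalise a symmetrised time-lagged covariance."""
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening
    C0 = X @ X.T / X.shape[1]
    d, E = np.linalg.eigh(C0)
    W_white = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    Z = W_white @ X
    # Time-lagged covariance of the whitened data, symmetrised
    C1 = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
    C1 = (C1 + C1.T) / 2
    _, V = np.linalg.eigh(C1)
    return V.T @ Z           # estimated sources (up to order, scale, and sign)

# Two colored Gaussian sources (low kurtosis) mixed by a random matrix
rng = np.random.default_rng(1)
n = 5000
s1 = np.convolve(rng.standard_normal(n), np.ones(10) / 10, mode="same")
s2 = np.convolve(rng.standard_normal(n), np.array([1.0, -0.9]), mode="same")
S = np.vstack([s1, s2])
A = rng.standard_normal((2, 2))
S_hat = extract_temporal_sources(A @ S)
# correlations between true and recovered sources (close to +/-1 after permutation)
print(np.round(np.corrcoef(np.vstack([S, S_hat]))[:2, 2:], 2))
```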