  Paid full text   5914 articles
  Free   320 articles
  Domestic free   8 articles
Electrical engineering   80 articles
General   9 articles
Chemical industry   1023 articles
Metalworking   69 articles
Machinery and instruments   97 articles
Building science   209 articles
Mining engineering   6 articles
Energy and power   174 articles
Light industry   365 articles
Hydraulic engineering   54 articles
Petroleum and natural gas   41 articles
Radio and electronics   587 articles
General industrial technology   1360 articles
Metallurgical industry   1097 articles
Atomic energy technology   22 articles
Automation technology   1049 articles
  2024   13 articles
  2023   59 articles
  2022   96 articles
  2021   150 articles
  2020   119 articles
  2019   127 articles
  2018   134 articles
  2017   150 articles
  2016   173 articles
  2015   144 articles
  2014   204 articles
  2013   356 articles
  2012   334 articles
  2011   413 articles
  2010   321 articles
  2009   309 articles
  2008   349 articles
  2007   280 articles
  2006   249 articles
  2005   231 articles
  2004   168 articles
  2003   157 articles
  2002   160 articles
  2001   83 articles
  2000   77 articles
  1999   90 articles
  1998   159 articles
  1997   153 articles
  1996   106 articles
  1995   69 articles
  1994   84 articles
  1993   76 articles
  1992   53 articles
  1991   28 articles
  1990   37 articles
  1989   32 articles
  1988   27 articles
  1987   40 articles
  1986   20 articles
  1985   47 articles
  1984   28 articles
  1983   21 articles
  1982   27 articles
  1981   42 articles
  1980   23 articles
  1979   23 articles
  1978   19 articles
  1977   27 articles
  1976   48 articles
  1975   18 articles
A total of 6242 search results were found (search time: 15 ms).
121.
The classical Geiringer theorem addresses the limiting frequency of occurrence of various alleles after repeated application of crossover. It has been adapted to the setting of evolutionary algorithms and, more recently, to reinforcement learning and Monte Carlo tree search, to address the challenging question of action evaluation at chance nodes. The theorem motivates novel dynamic parallel algorithms that are explicitly described in the current paper for the first time. The algorithms involve independent agents traversing a dynamically constructed directed graph that may have loops and multiple edges. A category-theoretic model of cognition in biological neural networks, developed over the last thirty years by the French mathematician Andrée Ehresmann jointly with the neurosurgeon Jan Paul Vanbremeersch, provides a hint at the connection between such algorithms and Hebbian learning.
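The algorithms are only described verbally above; as a rough illustration (not the authors' construction), the following sketch shows independent agents performing random walks over a directed multigraph with loops and parallel edges, tallying edge-traversal frequencies that could serve as crude statistics for action evaluation at a chance node. All names and the uniform transition rule are assumptions.

```python
import random
from collections import defaultdict

def run_agents(edges, start, n_agents=1000, walk_len=20, seed=0):
    """Independent agents walk a directed multigraph (parallel edges and
    self-loops allowed) and tally how often each edge is traversed.

    edges: list of (src, dst, label) tuples; duplicates model parallel edges.
    Returns a dict mapping (src, dst, label) -> relative traversal frequency.
    """
    rng = random.Random(seed)
    out = defaultdict(list)              # adjacency: src -> outgoing edges
    for e in edges:
        out[e[0]].append(e)

    counts = defaultdict(int)
    for _ in range(n_agents):
        node = start
        for _ in range(walk_len):
            if not out[node]:            # dead end: this agent stops
                break
            e = rng.choice(out[node])    # uniform choice over parallel edges
            counts[e] += 1
            node = e[1]

    total = sum(counts.values()) or 1
    return {e: c / total for e, c in counts.items()}

# Example: a tiny multigraph with a self-loop and a doubled edge.
edges = [("s", "a", "x"), ("s", "a", "y"), ("a", "s", "z"), ("a", "a", "loop")]
print(run_agents(edges, "s"))
```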
122.
Aboveground dry biomass was estimated for the 1.3 M km² forested area south of the treeline in the eastern Canadian province of Québec by combining data from airborne and spaceborne LiDAR, a Landsat ETM+ land cover map, a Shuttle Radar Topography Mission (SRTM) digital elevation model, ground inventory plots, and vegetation zone maps. Plot-level biomass was calculated using allometric relationships between tree attributes and biomass. A small-footprint portable laser profiler was then flown over these inventory plots to develop a generic airborne LiDAR-based biomass equation (R² = 0.65, n = 207). The same airborne LiDAR system was flown along four portions of orbits of the ICESat Geoscience Laser Altimeter System (GLAS). A square-root transformed equation was developed to predict airborne profiling LiDAR estimates of aboveground dry biomass from GLAS waveform parameters combined with an SRTM slope index (R² = 0.59, n = 1325). Using the 104,044 quality-filtered GLAS pulses obtained during autumn 2003 from 97 orbits over the study area, we then predicted aboveground dry biomass for the main vegetation areas of Québec as well as for the entire province south of the treeline. Including cover-type covariances both within and between GLAS orbits increased the standard errors of the estimates by two to five times at the vegetation zone level and as much as threefold at the provincial level. Aboveground biomass for the whole study area averaged 39.0 ± 2.2 (standard error) Mg ha⁻¹ and totalled 4.9 ± 0.3 Pg. Biomass distributions were 12.6% northern hardwoods, 12.6% northern mixedwood, 38.4% commercial boreal, 13% non-commercial boreal, 14.2% taiga, and 9.2% treed tundra. Non-commercial forests represented 36% of the estimated aboveground biomass, highlighting the importance of remote northern forests to carbon sequestration. This study has shown that space-based forest inventories of northern forests could be an efficient way of estimating the amount, distribution, and uncertainty of aboveground biomass and carbon stocks at large spatial scales.
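A square-root transformed regression of the kind described can be sketched as follows. The predictor names (a GLAS waveform extent and an SRTM slope index) and the synthetic data are placeholders, not the study's actual variables or coefficients.

```python
import numpy as np

# Placeholder predictors: GLAS waveform extent (m) and an SRTM slope index.
rng = np.random.default_rng(0)
waveform_extent = rng.uniform(5, 40, 200)
slope_index = rng.uniform(0, 1, 200)
biomass = 2.0 * waveform_extent - 15.0 * slope_index + rng.normal(0, 5, 200)
biomass = np.clip(biomass, 0.1, None)            # Mg/ha, kept positive

# Fit sqrt(biomass) = b0 + b1*extent + b2*slope by ordinary least squares.
X = np.column_stack([np.ones_like(waveform_extent), waveform_extent, slope_index])
y = np.sqrt(biomass)
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Back-transform predictions; squaring reintroduces a small bias that
# studies of this kind typically correct for (not shown here).
pred = (X @ coef) ** 2
r2 = 1 - np.sum((biomass - pred) ** 2) / np.sum((biomass - biomass.mean()) ** 2)
print("coefficients:", coef, " R^2 on back-transformed scale:", round(r2, 3))
```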
123.
We consider the problem of approximately integrating a Lipschitz function f (with a known Lipschitz constant) over an interval. The goal is to achieve an additive error of at most ε using as few samples of f as possible. We use the adaptive framework: on all problem instances an adaptive algorithm should perform almost as well as the best possible algorithm tuned for the particular problem instance. We distinguish between the performances of the best possible deterministic and randomized algorithms. We give a deterministic algorithm and show that an asymptotically better deterministic algorithm is impossible; however, on some problem instances any deterministic algorithm requires far more samples than a randomized one. By combining a deterministic adaptive algorithm with Monte Carlo sampling and variance reduction, we give a randomized algorithm and also show a matching lower bound on the expected number of samples required by any algorithm on some problem instance (f, ε), which proves that our algorithm is optimal.
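To make the setting concrete, here is a minimal adaptive deterministic integrator, not the paper's optimal algorithm: it splits an interval only while the worst-case error implied by the Lipschitz constant still exceeds the remaining error budget. The function name and the splitting schedule are illustrative choices.

```python
def integrate_lipschitz(f, a, b, L, eps):
    """Approximate the integral of an L-Lipschitz f on [a, b] to within eps.

    Given f(a) and f(b) on an interval of width w, the trapezoid estimate of
    an L-Lipschitz function can be off by at most (L*w^2 - (f(b)-f(a))^2/L)/4.
    Intervals are split only while that bound exceeds their error budget.
    Assumes L > 0.
    """
    def rec(a, fa, b, fb, eps):
        w = b - a
        bound = (L * w * w - (fb - fa) ** 2 / L) / 4.0
        if bound <= eps:
            return w * (fa + fb) / 2.0           # trapezoid is good enough here
        m = (a + b) / 2.0
        fm = f(m)                                # one new sample per split
        return rec(a, fa, m, fm, eps / 2) + rec(m, fm, b, fb, eps / 2)

    return rec(a, f(a), b, f(b), eps)

# Example: f(x) = |x - 0.3| is 1-Lipschitz; its exact integral on [0, 1] is 0.29.
approx = integrate_lipschitz(lambda x: abs(x - 0.3), 0.0, 1.0, L=1.0, eps=1e-4)
print(approx)
```

Note how adaptivity shows up: on sub-intervals where f is nearly linear the bound collapses immediately, so samples concentrate around the kink at x = 0.3.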
124.
Real-time crowd motion planning requires fast, realistic methods for path planning as well as obstacle avoidance. In a previous work (Morini et al. in Cyberworlds International Conference, pp. 144–151, 2007), we introduced a hybrid architecture to handle real-time motion planning of thousands of pedestrians. In this article, we present an extended version of our architecture, introducing two new features: an improved short-term collision avoidance algorithm, and simple efficient group behavior for crowds. Our approach allows the use of several motion planning algorithms of different precision for regions of varied interest. Pedestrian motion continuity is ensured when switching between such algorithms. To assess our architecture, several performance tests have been conducted, as well as a subjective test demonstrating the impact of using groups. Our results show that the architecture can plan motion in real time for several thousands of characters.
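The short-term collision avoidance layer is not specified above; the following is a deliberately simple sketch of one common approach (constant-velocity prediction plus a repulsive velocity correction), not the algorithm of Morini et al. The parameter values are assumptions.

```python
import numpy as np

def avoid_collisions(pos, vel, radius=0.5, horizon=1.0, push=2.0):
    """One avoidance step for n agents with positions pos and velocities vel
    (both (n, 2) arrays). Agents whose constant-velocity predictions come
    closer than 2*radius within `horizon` seconds get a repulsive correction.
    """
    pred = pos + vel * horizon                   # where agents will be shortly
    new_vel = vel.copy()
    n = len(pos)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = pred[i] - pred[j]
            dist = np.linalg.norm(d)
            if 1e-9 < dist < 2 * radius:
                # steer agent i away from the predicted collision,
                # scaled by how much the predicted spacing is violated
                new_vel[i] += push * (2 * radius - dist) * d / dist
    return new_vel

# Two agents on a slightly offset head-on course: each veers sideways.
pos = np.array([[0.0, 0.0], [2.0, 0.1]])
vel = np.array([[1.0, 0.0], [-1.0, 0.0]])
print(avoid_collisions(pos, vel))
```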
125.
This paper deals with the dynamic question of how recruitment to terror organizations is influenced by counter-terror operations. This is done within an optimal control model, where the key state is the (relative) number of terrorists and the key controls are two types of counter-terror tactics: one (“water”) that does not provoke recruitment of new terrorists and one (“fire”) that does. The model is nonlinear and does not admit analytical solutions, but an efficient numerical implementation of Pontryagin's minimum principle allows for solution with base-case parameters and considerable sensitivity analysis. Generally, this model yields two different steady states, one where the terror organization is nearly eradicated and one with a high number of terrorists. Whereas water strategies are used at almost any time, it can be optimal not to use fire strategies if the number of terrorists is below a certain threshold.
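The paper's equations are not reproduced above. As a purely hypothetical illustration of the qualitative point, the sketch below forward-simulates a made-up recruitment dynamic in which the “fire” control both removes terrorists and provokes recruitment, while “water” only removes them; the functional form and every parameter are assumptions, not the authors' model.

```python
def simulate(x0, u_water, u_fire, T=200.0, dt=0.01,
             tau=0.01, rho=0.5, kw=0.3, kf=0.3, mu=0.05):
    """Forward-simulate a *hypothetical* relative terrorist-population dynamic:
        dx/dt = tau + rho*u_fire*x*(1 - x) - mu*x - (kw*u_water + kf*u_fire)*x
    The rho term is the recruitment provoked by fire; kw/kf are removal rates.
    Simple explicit Euler stepping; returns the state after time T.
    """
    x = x0
    for _ in range(int(T / dt)):
        dx = (tau + rho * u_fire * x * (1 - x)
              - mu * x - (kw * u_water + kf * u_fire) * x)
        x = max(x + dt * dx, 0.0)
    return x

# Water-only policy vs. fire-heavy policy from the same starting point:
# the fire-heavy run settles at a noticeably larger organization.
print("water only :", round(simulate(1.0, u_water=1.0, u_fire=0.0), 3))
print("fire heavy :", round(simulate(1.0, u_water=0.0, u_fire=1.0), 3))
```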
126.
Differencing and merging of architectural views
Differencing and merging architectural views is an important activity in software engineering. However, existing approaches are still based on restrictive assumptions, such as requiring view elements to have unique identifiers or exactly matching types, which is often not the case in many application domains. We propose an approach based on structural information. We generalize a published polynomial-time tree-to-tree correction algorithm, which detects inserts, renames, and deletes, into a novel algorithm that additionally detects restricted moves. Our algorithm also supports forcing and preventing matches between view elements. We incorporate the algorithm into tools to compare and merge Component-and-Connector (C&C) architectural views. We provide an empirical evaluation of the algorithm. We illustrate the tools using extended examples, and use them to detect and reconcile interesting differences between real architectural views. This article is an expanded version of the following paper: Abi-Antoun, M., Aldrich, J., Nahas, N., Schmerl, B., and Garlan, D.: 2006, ‘Differencing and Merging of Architectural Views’. In: Proceedings of the 21st IEEE International Conference on Automated Software Engineering, pp. 47–58.
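As a toy illustration of structural differencing, far simpler than the generalized tree-to-tree correction algorithm described above, the sketch below matches nodes of two labeled trees level by level and reports inserts, deletes, and renames; it does not handle moves, forced matches, or duplicate sibling labels. The tree encoding and example views are invented.

```python
def diff_trees(old, new, path="", edits=None):
    """Naively diff two trees given as (label, [children]) tuples.

    Children are matched by label first; leftovers are paired up in order
    and treated as renames; unpaired children become deletes/inserts.
    """
    if edits is None:
        edits = []
    if old[0] != new[0]:
        edits.append(("rename", path + old[0], "->", new[0]))
    old_by_label = {c[0]: c for c in old[1]}
    new_by_label = {c[0]: c for c in new[1]}
    matched = [(old_by_label[k], new_by_label[k]) for k in old_by_label if k in new_by_label]
    left_old = [c for c in old[1] if c[0] not in new_by_label]
    left_new = [c for c in new[1] if c[0] not in old_by_label]
    matched += list(zip(left_old, left_new))      # positional pairing -> rename
    for o, n in matched:
        diff_trees(o, n, path + old[0] + "/", edits)
    for c in left_old[len(left_new):]:
        edits.append(("delete", path + old[0] + "/" + c[0]))
    for c in left_new[len(left_old):]:
        edits.append(("insert", path + new[0] + "/" + c[0]))
    return edits

# Two hypothetical C&C views: one component renamed, one component added.
old = ("System", [("Client", []), ("Server", [("Cache", [])])])
new = ("System", [("Client", []), ("Server", [("Store", [])]), ("Logger", [])])
print(diff_trees(old, new))
```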
127.
Recent research into user experience has identified the need for a theoretical model to build cumulative knowledge in research addressing how the overall quality or ‘goodness’ of an interactive product is formed. An experiment tested and extended Hassenzahl's model of aesthetic experience. The study used a 2 × 2 × (2) experimental design with three factors: principles of screen design, principles for organizing information on a web page, and experience of using a web site. Dependent variables included hedonic perceptions and evaluations of a web site as well as measures of task performance, navigation behaviour, and mental effort. All measures except Beauty were sensitive to the manipulation of web design. Beauty was influenced by hedonic attributes (identification and stimulation), whereas Goodness was influenced by both hedonic and pragmatic (user-perceived usability) attributes as well as by task performance and mental effort. Hedonic quality was more stable with experience of web-site use than pragmatic quality, and Beauty was more stable than Goodness.
128.
Multisensory VR interaction for protein-docking in the CoRSAIRe project
Proteins take on their function in the cell by interacting with other proteins or biomolecular complexes. To study this process, computational methods, collectively named protein docking, are used to predict the position and orientation of a protein ligand when it is bound to a protein receptor or enzyme, taking into account chemical or physical criteria. This process is intensively studied to discover new biological functions for proteins and to better understand how these macromolecules take on those functions at the molecular scale. Pharmaceutical research also employs docking techniques for a variety of purposes, most notably in the virtual screening of large databases of available chemicals to select likely molecular candidates for drug design. The basic hypothesis of our work is that Virtual Reality (VR) and multimodal interaction can increase efficiency in reaching and analysing docking solutions, beyond what a fully computational docking approach provides. To this end, we conducted an ergonomic analysis of the current protein–protein docking task as it is carried out today. Using these results, we designed an immersive and multimodal application in which VR devices, such as the three-dimensional mouse and haptic devices, are used to interactively manipulate two proteins to explore possible docking solutions. During this exploration, visual, audio, and haptic feedback are combined to render and evaluate chemical or physical properties of the current docking configuration.
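The physical criteria used to score a candidate docking configuration are not detailed above. A minimal, purely illustrative scoring function, a Lennard-Jones-style steric term over atom pairs with made-up parameters and no electrostatics, atom typing, or solvation, might look like this; it is not the CoRSAIRe project's scoring method.

```python
import numpy as np

def steric_score(receptor_xyz, ligand_xyz, sigma=3.5, epsilon=0.2):
    """Toy docking score: sum a 12-6 Lennard-Jones term over all
    receptor/ligand atom pairs (coordinates in angstroms). Lower is better.
    """
    d = np.linalg.norm(receptor_xyz[:, None, :] - ligand_xyz[None, :, :], axis=-1)
    d = np.clip(d, 0.5, None)                    # avoid blow-up at tiny distances
    sr6 = (sigma / d) ** 6
    return float(np.sum(4 * epsilon * (sr6 ** 2 - sr6)))

# Score the same toy ligand at two candidate poses relative to a 3-atom receptor.
receptor = np.array([[0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [0.0, 4.0, 0.0]])
ligand = np.array([[2.0, 2.0, 4.0], [2.0, 2.0, 6.0]])
print("pose A:", steric_score(receptor, ligand))
print("pose B:", steric_score(receptor, ligand + np.array([0.0, 0.0, 2.0])))
```

In an interactive VR setting, a score like this would be re-evaluated continuously as the user moves one protein relative to the other, and rendered as visual, audio, or haptic feedback.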
129.
Visual localization systems that are practical for autonomous vehicles in outdoor industrial applications must perform reliably in a wide range of conditions. Changing outdoor conditions cause difficulty by drastically altering the information available in the camera images. To confront the problem, we have developed a visual localization system that uses a surveyed three-dimensional (3D) edge map of permanent structures in the environment. The map has the invariant properties necessary to achieve long-term robust operation. Previous 3D edge-map localization systems usually maintain a single pose hypothesis, making it difficult to initialize without an accurate prior pose estimate and also making them susceptible to misalignment with unmapped edges detected in the camera image. A multihypothesis particle filter is employed here to perform the initialization procedure with significant uncertainty in the vehicle's initial pose. A novel observation function for the particle filter is developed and evaluated against two existing functions. The new function is shown to further improve the ability of the particle filter to converge given a very coarse estimate of the vehicle's initial pose. An intelligent exposure control algorithm is also developed that improves the quality of the pertinent information in the image. Results gathered over an entire sunny day and also during rainy weather illustrate that the localization system can operate in a wide range of outdoor conditions. The conclusion is that an invariant map, a robust multihypothesis localization algorithm, and an intelligent exposure control algorithm combine to enable reliable visual localization through challenging outdoor conditions. © 2009 Wiley Periodicals, Inc.
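A multihypothesis particle filter update of the kind described can be sketched as follows; the edge-misalignment observation model, the Gaussian likelihood, and all parameters are placeholders, not the paper's novel observation function.

```python
import numpy as np

def particle_filter_update(particles, weights, edge_error, sigma=2.0, seed=0):
    """One measurement update for a pose particle filter.

    particles : (n, 3) array of pose hypotheses (x, y, heading).
    edge_error: function mapping a pose to a scalar misalignment between
                projected map edges and detected image edges (placeholder).
    Weights are multiplied by a Gaussian likelihood of that misalignment,
    then the particle set is resampled in proportion to the weights.
    """
    rng = np.random.default_rng(seed)
    errors = np.array([edge_error(p) for p in particles])
    weights = weights * np.exp(-0.5 * (errors / sigma) ** 2)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy example: the "true" pose is (10, 5, 0); error grows with distance to it.
true_pose = np.array([10.0, 5.0, 0.0])
rng = np.random.default_rng(1)
particles = true_pose + rng.normal(0, 5, size=(500, 3))   # coarse initial guess
weights = np.full(500, 1.0 / 500)
particles, weights = particle_filter_update(
    particles, weights, lambda p: np.linalg.norm(p[:2] - true_pose[:2]))
print("mean pose after one update:", particles.mean(axis=0).round(2))
```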
130.
Correlation between a dithering signal and the prediction error has been used to detect model mismatch in univariate model-based control systems. This paper extends that approach to MIMO control systems. A closed-loop cross-correlation method is presented to detect which specific input-output pairings of a model-based controller are mismatched. This method may be used to screen the complete set of models and to select candidate models for re-identification. The method first finds the rows and columns of the transfer function matrix that contain mismatch; the candidates are then found by the intersection of those rows and columns. Placing the system under partial control, whereby one or more of the manipulated variables are held constant, can further reduce the set of candidate models.
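The pairing-level idea can be illustrated with a simple sketch: inject an independent dither on each input, form the prediction error of each output, and inspect the input-output cross-correlations, where entries well above the noise floor flag candidate mismatched pairings. The 2x2 static-gain plant and model below are made up for illustration and do not reproduce the paper's method in detail.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000
dither = rng.normal(0, 1, size=(N, 2))            # independent dither per input

# Hypothetical 2x2 static plant and the model used by the controller:
# only the (output 1, input 0) gain is mismatched.
G_plant = np.array([[1.0, 0.5],
                    [0.8, 2.0]])
G_model = np.array([[1.0, 0.5],
                    [0.3, 2.0]])

y = dither @ G_plant.T + rng.normal(0, 0.1, size=(N, 2))   # measured outputs
y_hat = dither @ G_model.T                                  # model predictions
err = y - y_hat                                             # prediction errors

# Zero-lag cross-correlation between each dither channel and each error channel.
corr = np.array([[np.corrcoef(dither[:, j], err[:, i])[0, 1]
                  for j in range(2)] for i in range(2)])
print(np.round(corr, 2))    # the large entry at (row 1, col 0) flags that pairing
```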