Subscription full text: 3392 articles
Free: 86 articles
Domestic free: 2 articles
Electrical engineering: 30 articles
Chemical industry: 379 articles
Metalworking: 57 articles
Machinery and instrumentation: 41 articles
Building science: 137 articles
Mining engineering: 40 articles
Energy and power: 83 articles
Light industry: 261 articles
Water conservancy engineering: 30 articles
Petroleum and natural gas: 37 articles
Weapons industry: 4 articles
Radio and electronics: 455 articles
General industrial technology: 439 articles
Metallurgical industry: 859 articles
Atomic energy technology: 30 articles
Automation technology: 598 articles
2023: 12 articles
2021: 37 articles
2020: 25 articles
2019: 37 articles
2018: 47 articles
2017: 50 articles
2016: 65 articles
2015: 51 articles
2014: 77 articles
2013: 179 articles
2012: 99 articles
2011: 173 articles
2010: 122 articles
2009: 109 articles
2008: 150 articles
2007: 138 articles
2006: 133 articles
2005: 110 articles
2004: 94 articles
2003: 99 articles
2002: 75 articles
2001: 60 articles
2000: 52 articles
1999: 77 articles
1998: 191 articles
1997: 136 articles
1996: 154 articles
1995: 78 articles
1994: 76 articles
1993: 78 articles
1992: 49 articles
1991: 28 articles
1990: 49 articles
1989: 43 articles
1988: 37 articles
1987: 34 articles
1986: 28 articles
1985: 38 articles
1984: 41 articles
1983: 29 articles
1982: 22 articles
1981: 22 articles
1980: 26 articles
1979: 21 articles
1978: 24 articles
1977: 30 articles
1976: 49 articles
1975: 17 articles
1973: 22 articles
1971: 14 articles
A total of 3480 results found; search time 15 ms.
91.
We demonstrate that certain large-clique graph triangulations can be useful for reducing computational requirements when making queries on mixed stochastic/deterministic graphical models. This runs counter to the conventional wisdom that triangulations minimizing clique size are always the most desirable for computing queries on graphical models. Many of these large-clique triangulations are non-minimal and are thus unattainable via the popular elimination algorithm. We introduce ancestral pairs as the basis for novel triangulation heuristics and prove that, when searching for state-space-optimal triangulations in such graphs, only the addition of edges between ancestral pairs needs to be considered. Empirical results on random and real-world graphs are given. We also present an algorithm, with a correctness proof, for determining whether a triangulation can be obtained via elimination, and we show that the decision problem associated with finding optimal state space triangulations in this mixed setting is NP-complete.
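For readers unfamiliar with the elimination algorithm mentioned above, the sketch below is an illustrative Python reconstruction of the standard procedure, not code from the paper: vertices are removed one at a time, their remaining neighbours are connected pairwise, and the accumulated "fill-in" edges triangulate the graph. Triangulations reachable this way are exactly the elimination orderings that the abstract contrasts with the non-minimal, large-clique triangulations it advocates.

```python
from itertools import combinations

def eliminate(adj, order):
    """Triangulate an undirected graph by vertex elimination.

    adj   : dict mapping each vertex to the set of its neighbours.
    order : an elimination ordering (a permutation of the vertices).
    Returns the set of fill-in edges; the input graph plus the fill-in
    is chordal.
    """
    adj = {v: set(nbrs) for v, nbrs in adj.items()}        # work on a copy
    fill = set()
    for v in order:
        nbrs = adj[v]
        for a, b in combinations(sorted(nbrs), 2):          # connect remaining
            if b not in adj[a]:                              # neighbours pairwise
                adj[a].add(b)
                adj[b].add(a)
                fill.add((a, b))
        for u in nbrs:                                       # delete v from the
            adj[u].discard(v)                                # remaining graph
        del adj[v]
    return fill

# 4-cycle a-b-c-d: eliminating vertex a first forces the chord (b, d)
cycle = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
print(eliminate(cycle, ["a", "b", "c", "d"]))                # {('b', 'd')}
```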
92.
We give a simple tutorial introduction to the Mathematica package STRINGVACUA, which is designed to find vacua of string-derived or inspired four-dimensional N=1 supergravities. The package uses powerful algebro-geometric methods, as implemented in the free computer algebra system Singular, but requires no knowledge of the mathematics upon which it is based. A series of easy-to-use Mathematica modules are provided which can be used both in string theory and in more general applications requiring fast polynomial computations. The use of these modules is illustrated throughout with simple examples.

Program summary

Program title: STRINGVACUA
Catalogue identifier: AEBZ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBZ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU GPL
No. of lines in distributed program, including test data, etc.: 31 050
No. of bytes in distributed program, including test data, etc.: 163 832
Distribution format: tar.gz
Programming language: "Mathematica" syntax
Computer: Home and office spec desktop and laptop machines, networked or stand alone
Operating system: Windows XP (with Cygwin), Linux, Mac OS, running Mathematica V5 or above
RAM: Varies greatly depending on calculation to be performed
Classification: 11.1
External routines: Linux: the program "Singular" is called from Mathematica. Windows: "Singular" is called within the Cygwin environment from Mathematica.
Nature of problem: A central problem of string-phenomenology is to find stable vacua in the four-dimensional effective theories which result from compactification.
Solution method: We present an algorithmic method, which uses techniques of algebraic geometry, to find all of the vacua of any given string-phenomenological system in a huge class.
Running time: Varies greatly depending on calculation requested.
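As a rough illustration of the underlying idea only (in Python/SymPy rather than the package's actual Mathematica-plus-Singular setup): vacua correspond to solutions of polynomial extremum conditions, which Groebner-basis methods can solve exhaustively. The toy potential below is hypothetical and not string-derived.

```python
from sympy import symbols, diff, solve, groebner, Rational

phi1, phi2 = symbols("phi1 phi2", real=True)

# Toy polynomial "scalar potential" (hypothetical, for illustration only)
V = (phi1**2 + phi2**2 - 1)**2 + Rational(1, 2) * phi1**2 * phi2**2

# Vacua are critical points: dV/dphi_i = 0, a system of polynomial equations
eqs = [diff(V, phi1), diff(V, phi2)]

# A lexicographic Groebner basis triangularises the system, which is what
# makes exhaustive root finding feasible for larger examples
print(groebner(eqs, phi1, phi2, order="lex"))

# Solve exactly and evaluate the potential at each critical point
for sol in solve(eqs, [phi1, phi2], dict=True):
    print(sol, "V =", V.subs(sol))
```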
93.
“Global Interoperability Using Semantics, Standards, Science and Technology” is a concept predicated on the assumption that semantic integration, the frameworks and standards that support information exchange, and advances in science and technology can together enable information-systems interoperability for many diverse users. This paper recommends technologies and approaches for enabling interoperability across a wide spectrum of political, geographical, and organizational levels, e.g. coalition, federal, state, tribal, regional, non-governmental, and private. These recommendations represent steps toward the goal of the Semantic Web, in which computers understand information on web sites through knowledge representations, agents, and ontologies.
94.
This is the first systematic investigation into the assumptions of image fusion using regression Kriging (RK), a geostatistical method, illustrated with Landsat MS (multispectral) and SPOT (Satellite Pour l’Observation de la Terre) panchromatic images. The efficiency of different linear regression and Kriging methods in the fusion process is examined using visual and quantitative indicators. Results indicate a trade-off between spectral fidelity and spatial-detail preservation for the GLS (generalized least squares) and OLS (ordinary least squares) regression methods in the RK process: OLS methods preserve more spatial detail, while GLS methods retain more spectral information from the MS images, but at a greater computational cost. Under either OK (ordinary Kriging) or UK (universal Kriging), with either OLS or GLS, the spherical variogram improves spatial detail from the panchromatic image, while the exponential variogram maintains more spectral information from the MS image. Overall, RK-based fusion methods outperform conventional fusion approaches from both the spectral and the spatial points of view.
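The sketch below is a hedged, self-contained reconstruction of the regression Kriging recipe evaluated above, not the authors' implementation: fit an OLS trend of the target band on the panchromatic covariate, Krige the residuals with a chosen variogram model (exponential here), and add the two parts back together. All data, band relationships, and variogram parameters are synthetic.

```python
import numpy as np

def exp_cov(h, sill=1.0, corr_range=5.0):
    """Exponential covariance model C(h) = sill * exp(-h / range)."""
    return sill * np.exp(-h / corr_range)

def ordinary_kriging(coords, values, targets, cov=exp_cov):
    """Ordinary-Kriging prediction at `targets` from scattered observations."""
    n = len(coords)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = cov(d)
    A[n, :n] = A[:n, n] = 1.0                       # unbiasedness constraint
    preds = []
    for t in targets:
        b = np.append(cov(np.linalg.norm(coords - t, axis=1)), 1.0)
        w = np.linalg.solve(A, b)[:n]               # Kriging weights
        preds.append(w @ values)
    return np.array(preds)

rng = np.random.default_rng(0)
coords = rng.uniform(0, 50, size=(200, 2))          # coarse sample locations
pan_at_coords = rng.normal(size=200)                # pan covariate at those locations
ms_band = 0.8 * pan_at_coords + 0.2 * rng.normal(size=200)   # synthetic MS band

# 1) OLS trend: regress the MS band on the panchromatic covariate
X = np.column_stack([np.ones_like(pan_at_coords), pan_at_coords])
beta, *_ = np.linalg.lstsq(X, ms_band, rcond=None)
resid = ms_band - X @ beta

# 2) Fused prediction at fine-resolution pixels = OLS trend + Kriged residual
fine_coords = rng.uniform(0, 50, size=(5, 2))
fine_pan = rng.normal(size=5)
fused = beta[0] + beta[1] * fine_pan + ordinary_kriging(coords, resid, fine_coords)
print(fused)
```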
95.
Recent robotics efforts have automated simple, repetitive tasks to increase execution speed and lessen an operator's cognitive load, allowing the operator to focus on higher-level objectives. However, an autonomous system will eventually encounter something unexpected, and if this exceeds the tolerance of automated solutions, there must be a way to fall back to teleoperation. Our solution is a largely autonomous system with the ability to determine when it is necessary to ask a human operator for guidance. We call this approach human-guided autonomy. Our design emphasizes human-on-the-loop control, where an operator expresses a desired high-level goal for which the reasoning component assembles an appropriate chain of subtasks. We introduce our work in the context of the DARPA Robotics Challenge (DRC) Finals. We describe the software architecture Team TROOPER developed and used to control an Atlas humanoid robot. We employ perception, planning, and control automation for the execution of subtasks. If subtasks fail, or if changing environmental conditions invalidate the planned subtasks, the system automatically generates a new task chain. The operator is able to intervene at any stage of execution and to provide input and adjustment to any control layer, enabling operator involvement to increase as confidence in automation decreases. We present our performance at the DRC Finals and discuss lessons learned.
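The control flow described above can be summarised in a few lines. The following Python sketch is a schematic of the human-guided-autonomy pattern only (plan a subtask chain, replan on failure, fall back to the operator when automation is exhausted); the subtask names and the planner are placeholders, not Team TROOPER's actual software.

```python
from typing import Callable, List, Optional

class Subtask:
    def __init__(self, name: str, run: Callable[[], bool]):
        self.name, self.run = name, run

def plan(goal: str, attempt: int) -> Optional[List[Subtask]]:
    """Placeholder planner: returns a subtask chain for the goal, or None."""
    if attempt > 2:
        return None                          # automation has no more options
    walk = Subtask("walk_to_door", lambda: attempt > 0)   # fails on first try
    grasp = Subtask("grasp_handle", lambda: True)
    return [walk, grasp]

def ask_operator(goal: str) -> bool:
    print(f"[operator] guidance requested for goal '{goal}'")
    return True                              # e.g. teleoperated recovery

def execute(goal: str) -> bool:
    for attempt in range(5):
        chain = plan(goal, attempt)
        if chain is None:                    # tolerance of automation exceeded
            return ask_operator(goal)
        if all(task.run() for task in chain):
            return True                      # whole chain succeeded autonomously
        print(f"[autonomy] chain failed on attempt {attempt}, replanning")
    return ask_operator(goal)

print("goal reached:", execute("open_door"))
```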
96.
97.
A Faà di Bruno type Hopf algebra is developed for a group of integral operators known as Fliess operators, where operator composition is the group product. Such operators are normally written in terms of generating series over a noncommutative alphabet. Using a general series expansion for the antipode, an explicit formula for the generating series of the compositional inverse operator is derived. The result is applied to analytic nonlinear feedback systems to produce an explicit formula for the feedback product, that is, the generating series for the Fliess operator representation of the closed-loop system, written in terms of the generating series of the Fliess operator component systems. This formula is then used to prove that local convergence is preserved under feedback.
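For context, the "general series expansion for the antipode" referred to above is presumably of the standard form available in any connected graded (or filtered) Hopf algebra, recalled below as a hedged reminder; the Fliess-operator specifics (the alphabet and the composition coproduct) are in the paper itself.

```latex
% Antipode of a connected graded Hopf algebra (H, m, \Delta, \eta, \varepsilon),
% with \pi := \mathrm{id}_H - \eta\varepsilon and * the convolution product;
% the sum is locally finite because \pi is locally nilpotent.
S = (\eta\varepsilon + \pi)^{*-1}
  = \sum_{k \ge 0} (-1)^k \, \pi^{*k}
  = \eta\varepsilon + \sum_{k \ge 1} (-1)^k \, m^{(k-1)} \circ \pi^{\otimes k} \circ \Delta^{(k-1)} ,
```

where m^{(k-1)} and Δ^{(k-1)} denote the iterated product and coproduct.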
98.
Reasoning about software systems at the architectural level is key to effective software development, management, evolution and reuse. All too often, though, the lack of appropriate documentation leads to a situation where architectural design information has to be recovered directly from implemented software artifacts. This is a very demanding process, particularly when it involves recovering runtime abstractions (clients, servers, interaction protocols, etc.) that are typical of the design of distributed software systems. This paper presents an exploratory reverse engineering approach, called X-ray, to aid programmers in recovering architectural runtime information from a distributed system's existing software artifacts. X-ray comprises three domain-based static analysis techniques, namely component module classification, syntactic pattern matching, and structural reachability analysis. These complementary techniques can facilitate the task of identifying a distributed system's implemented executable components and their potential runtime interconnections. The component module classification technique automatically distinguishes source code modules according to the executable components they implement. The syntactic pattern matching technique in turn helps to recognise specific code fragments that may implement typical component interaction features. Finally, the structural reachability analysis technique aids in associating those features with the code specific to each executable component. The paper describes and illustrates the main concepts underlying each technique, reports on their implementation as a suite of new and off-the-shelf tools, and, to give evidence of the utility of the approach, provides a detailed account of a successful application of the three techniques to recover a static approximation of the runtime architecture of Field, a publicly available distributed programming environment.
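As a toy illustration of the syntactic-pattern-matching step (not the X-ray tooling itself), the Python sketch below scans C source modules for library calls that commonly implement component interaction, such as POSIX socket calls, and makes a rough client/server role guess. The pattern table and the role heuristic are assumptions for illustration.

```python
import re
from collections import defaultdict
from pathlib import Path

# Calls that commonly mark interaction features in POSIX socket code
INTERACTION_PATTERNS = {
    "server_setup": re.compile(r"\b(bind|listen|accept)\s*\("),
    "client_setup": re.compile(r"\bconnect\s*\("),
    "data_exchange": re.compile(r"\b(send|recv|read|write)\s*\("),
}

def scan_module(path: Path) -> dict:
    """Return {feature: [line numbers]} for one source module."""
    hits = defaultdict(list)
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for feature, pattern in INTERACTION_PATTERNS.items():
            if pattern.search(line):
                hits[feature].append(lineno)
    return dict(hits)

def classify(module_hits: dict) -> str:
    """Very rough role guess from the matched features."""
    if "server_setup" in module_hits:
        return "likely server-side component"
    if "client_setup" in module_hits:
        return "likely client-side component"
    return "no interaction features found"

for src in Path(".").rglob("*.c"):
    hits = scan_module(src)
    if hits:
        print(src, classify(hits), hits)
```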
99.
100.
Nonparametric neighborhood methods for learning entail estimating class-conditional probabilities from the relative frequencies of samples that are "near-neighbors" of a test point. We propose and explore the behavior of a learning algorithm that uses linear interpolation and the principle of maximum entropy (LIME). We consider some theoretical properties of the LIME algorithm: the LIME weights have exponential form; the estimates are consistent; and the estimates are robust to additive noise. In relation to bias reduction, we show that the near-neighbors asymptotically contain the test point in their convex hull. The common linear interpolation solution used for regression on grids or look-up tables is shown to solve a related maximum entropy problem. LIME simulation results support the use of the method, and performance on a pipeline integrity classification problem demonstrates that the proposed algorithm has practical value.
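The sketch below is an illustrative reconstruction from the abstract (not the authors' code) of how LIME-style weights can be computed: among the k nearest neighbors, the maximum-entropy weights that still linearly interpolate the test point are found by minimizing a convex dual, and they come out in the exponential form the abstract mentions. It assumes the test point lies inside the convex hull of its neighbors.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp, softmax

def lime_weights(neighbors: np.ndarray, x0: np.ndarray) -> np.ndarray:
    """Max-entropy weights w_i >= 0 with sum w_i = 1 and sum w_i x_i = x0."""
    z = neighbors - x0                            # constraint becomes sum w_i z_i = 0
    dual = lambda lam: logsumexp(z @ lam)         # convex dual of the entropy problem
    lam = minimize(dual, np.zeros(x0.shape[0])).x # stationarity enforces the constraint
    return softmax(z @ lam)                       # w_i proportional to exp(lambda . x_i)

def lime_classify(X: np.ndarray, y: np.ndarray, x0: np.ndarray, k: int = 5) -> int:
    """Weight the k nearest neighbors by LIME weights and take a weighted vote."""
    idx = np.argsort(np.linalg.norm(X - x0, axis=1))[:k]
    w = lime_weights(X[idx], x0)
    classes = np.unique(y[idx])
    votes = [w[y[idx] == c].sum() for c in classes]
    return int(classes[np.argmax(votes)])

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)
print(lime_classify(X, y, np.array([0.3, 0.2])))
```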