61.
We describe a fast, data-driven bandwidth selection procedure for kernel conditional density estimation (KCDE). Specifically, we give a Monte Carlo dual-tree algorithm for efficient, error-controlled approximation of a cross-validated likelihood objective. While exact evaluation of this objective has an unscalable O(n²) computational cost, our method is practical and shows speedup factors as high as 286,000 when applied to real multivariate datasets containing up to one million points. In absolute terms, computation times are reduced from months to minutes. This enables applications at much greater scale than previously possible. The core idea in our method is to first derive a standard deterministic dual-tree approximation, whose loose deterministic bounds we then replace with tight, probabilistic Monte Carlo bounds. The resulting Monte Carlo dual-tree algorithm exhibits strong error control and high speedup across a broad range of datasets several orders of magnitude greater in size than those reported in previous work. The cost of this high acceleration is the loss of the formal error guarantee of the deterministic dual-tree framework; however, our experiments show that error is still amply controlled by our Monte Carlo algorithm, and the many-order-of-magnitude speedups are worth this sacrifice in the large-data case, where cross-validated bandwidth selection for KCDE would otherwise be impractical.
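For readers unfamiliar with the objective being accelerated, the Python sketch below spells out the naive O(n²) leave-one-out cross-validated likelihood for a Gaussian-kernel KCDE, together with a simple grid search over bandwidths. The kernel choice, the grid, and all names are illustrative assumptions, not the authors' implementation; their dual-tree method replaces the inner double loop with tree-based Monte Carlo bounds.

```python
import numpy as np

def gaussian_kernel(u, h):
    """Gaussian kernel with bandwidth h, evaluated at distances u."""
    return np.exp(-0.5 * (u / h) ** 2) / (h * np.sqrt(2 * np.pi))

def loo_cv_log_likelihood(x, y, hx, hy):
    """Naive O(n^2) leave-one-out CV log-likelihood for the KCDE f(y | x)."""
    n = len(x)
    total = 0.0
    for i in range(n):
        mask = np.arange(n) != i              # leave point i out
        kx = gaussian_kernel(x[i] - x[mask], hx)
        ky = gaussian_kernel(y[i] - y[mask], hy)
        denom = kx.sum()
        if denom > 0:
            total += np.log((kx * ky).sum() / denom + 1e-300)
    return total

def select_bandwidths(x, y, grid):
    """Pick the (hx, hy) pair on the grid maximising the CV likelihood."""
    return max(((hx, hy) for hx in grid for hy in grid),
               key=lambda h: loo_cv_log_likelihood(x, y, *h))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.normal(size=500)
    y = 2 * x + rng.normal(scale=0.5, size=500)
    print(select_bandwidths(x, y, [0.05, 0.1, 0.2, 0.5, 1.0]))
```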
62.
This article presents an experience report comparing 8 years of experience with product-related usability testing and evaluation against principles of software process improvement (SPI). In theory the product and process views are often seen as complementary, but studies of industry have demonstrated the opposite, so more empirical studies are needed to understand and improve the present situation. We find areas of close agreement as well as areas where our work illuminates new characteristics. Successful SPI has been shown to depend on being combined with a business orientation. Usability and business orientation also have strong connections, although this has not been extensively addressed in SPI publications. One reason could be that usability focuses on product metrics whilst today's SPI mainly focuses on process metrics. Another is that today's SPI is dominated by striving towards a standardized, controllable, and predictable software engineering process, whilst successful usability efforts in organisations are more about creating a creative organisational culture that advocates a useful product throughout the development and product life cycle. We provide a study and discussion that support future development combining usability and product focus with SPI, in particular when such work is related to usability process improvement.
63.
Web applications are fast becoming more widespread, larger, more interactive, and more essential to the international use of computers. It is well understood that web applications must be highly dependable, yet as a field we are just now beginning to understand how to model and test them. One straightforward technique is to model web applications as finite state machines (FSMs). However, large numbers of input fields, input choices, and the ability to enter values in any order combine to create a state space explosion problem. This paper evaluates a solution that uses constraints on the inputs to reduce the number of transitions, thus compressing the FSM. The paper presents an analysis of the potential savings of the compression technique and reports actual savings from two case studies.
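A toy sketch of the state-explosion arithmetic behind this kind of compression follows; it is not the paper's actual FSM model or constraint notation, just an illustration that declaring a group of form fields order-insensitive collapses all k! entry orderings into a single aggregate transition.

```python
from itertools import permutations
from math import factorial

# Three independent form fields: with no constraint, every entry order is a
# distinct path through the FSM, so the model must represent k! orderings.
fields = ["username", "email", "password"]
unconstrained = list(permutations(fields))
print(len(unconstrained))     # 6 transition sequences for k = 3 fields

# An "any order" input constraint lets the model collapse those orderings into
# a single aggregate transition labelled with the set of fields.
constrained = [frozenset(fields)]
print(len(constrained))       # 1 transition

# The compression factor for such a group of k fields is k!, e.g.
print(factorial(10))          # 3,628,800 orderings collapse to 1
```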
64.
Web software applications have become complex, sophisticated programs that are based on novel computing technologies. Their most essential characteristic is that they represent a different kind of software deployment: most of the software is never delivered to customers' computers, but remains on servers, allowing customers to run the software across the web. Although powerful, this deployment model brings new challenges to developers and testers. Checking static HTML links is no longer sufficient; web applications must be evaluated as complex software products. This paper focuses on three aspects of web applications that are unique to this type of deployment: (1) an extremely loose form of coupling that features distributed integration, (2) the ability that users have to directly change the potential flow of execution, and (3) the dynamic creation of HTML forms. Taken together, these aspects allow the potential control flow to vary with each execution; thus the possible control flows cannot be determined statically, prohibiting several standard analysis techniques that are fundamental to many software engineering activities. This paper presents a new way to model web applications, based on software couplings that are new to web applications, dynamic flow of control, distributed integration, and partial dynamic web application development. The model is built on the notion of atomic sections, which allow analysis tools to construct the analog of a control flow graph for web applications. The atomic section model has numerous uses; this paper applies it to the problem of testing web applications.
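As a rough, hypothetical illustration of the atomic-section idea (the class and page names below are invented, not the authors' notation), each statically fixed HTML fragment becomes a node and the server's possible choices of fragments become edges, yielding the analog of a control flow graph:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class AtomicSection:
    """An HTML fragment whose content does not vary at run time."""
    name: str
    html: str = ""

@dataclass
class ComponentGraph:
    """Analog of a control flow graph built from atomic sections."""
    edges: dict = field(default_factory=dict)

    def add_transition(self, src: AtomicSection, dst: AtomicSection):
        self.edges.setdefault(src, set()).add(dst)

    def successors(self, section: AtomicSection):
        return self.edges.get(section, set())

# A login page that, depending on server-side logic, emits either a welcome
# fragment or an error fragment; both branches become explicit edges.
header  = AtomicSection("header")
form    = AtomicSection("login_form")
welcome = AtomicSection("welcome")
error   = AtomicSection("error")

g = ComponentGraph()
g.add_transition(header, form)
g.add_transition(form, welcome)   # successful authentication
g.add_transition(form, error)     # failed authentication

print({s.name for s in g.successors(form)})  # {'welcome', 'error'}
```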
65.
We give an example of a monoid with finitely many left and right ideals, all of whose Schützenberger groups are presentable by finite complete rewriting systems, and so each have finite derivation type, but such that the monoid itself does not have finite derivation type, and therefore does not admit a presentation by a finite complete rewriting system. The example also serves as a counterexample to several other natural questions regarding complete rewriting systems and finite derivation type. Specifically, it allows us to construct two finitely generated monoids M and N with isometric Cayley graphs, where N has finite derivation type (respectively, admits a presentation by a finite complete rewriting system) but M does not. This contrasts with the case of finitely generated groups, for which finite derivation type is known to be a quasi-isometry invariant. The same example is also used to show that neither of these two properties is preserved under finite Green index extensions.
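For readers unfamiliar with the terminology, the toy Python sketch below shows what a presentation by a finite complete rewriting system buys: a terminating, confluent rule set reduces every word to a unique normal form, solving the word problem. The free commutative monoid used here is purely illustrative and is unrelated to the counterexample monoid constructed in the paper.

```python
def normal_form(word, rules):
    """Rewrite until no rule applies; terminates for a complete (confluent,
    terminating) rewriting system, giving a unique normal form."""
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if lhs in word:
                word = word.replace(lhs, rhs, 1)
                changed = True
                break
    return word

# The free commutative monoid on {a, b}: the single rule "ba" -> "ab" is a
# finite complete rewriting system, so every word reduces to a^i b^j.
rules = [("ba", "ab")]
print(normal_form("babab", rules))                                  # 'aabbb'
print(normal_form("abbab", rules) == normal_form("babba", rules))   # True
```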
66.
We demonstrate that certain large-clique graph triangulations can be useful for reducing computational requirements when making queries on mixed stochastic/deterministic graphical models. This is counter to the conventional wisdom that triangulations that minimize clique size are always most desirable for use in computing queries on graphical models. Many of these large-clique triangulations are non-minimal and are thus unattainable via the popular elimination algorithm. We introduce ancestral pairs as the basis for novel triangulation heuristics and prove that no more than the addition of edges between ancestral pairs needs to be considered when searching for state space optimal triangulations in such graphs. Empirical results on random and real world graphs are given. We also present an algorithm and correctness proof for determining if a triangulation can be obtained via elimination, and we show that the decision problem associated with finding optimal state space triangulations in this mixed setting is NP-complete.
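The elimination algorithm referred to above can be sketched in a few lines of Python; the example below is hypothetical code, not the authors' implementation. It triangulates a 4-cycle by eliminating vertices in a fixed order, adding fill-in edges between the remaining neighbours of each eliminated vertex. The paper's point is that some useful large-clique triangulations cannot be produced by any such elimination order.

```python
import itertools

def triangulate_by_elimination(adj, order):
    """Triangulate a graph by eliminating vertices in the given order.

    adj maps each vertex to the set of its neighbours; the returned dict is
    the filled (chordal) graph containing the original edges plus fill-in.
    """
    filled = {v: set(nbrs) for v, nbrs in adj.items()}
    work = {v: set(nbrs) for v, nbrs in adj.items()}
    for v in order:
        nbrs = list(work[v])
        # connect all remaining neighbours of v (the fill-in edges)
        for a, b in itertools.combinations(nbrs, 2):
            work[a].add(b); work[b].add(a)
            filled[a].add(b); filled[b].add(a)
        # remove v from the working graph
        for u in nbrs:
            work[u].discard(v)
        del work[v]
    return filled

# 4-cycle a-b-c-d: eliminating 'a' first adds the chord b-d.
adj = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
tri = triangulate_by_elimination(adj, ["a", "b", "c", "d"])
print(sorted(tri["b"]))   # ['a', 'c', 'd'] -> the chord b-d was added
```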
67.
We give a simple tutorial introduction to the Mathematica package STRINGVACUA, which is designed to find vacua of string-derived or inspired four-dimensional N=1 supergravities. The package uses powerful algebro-geometric methods, as implemented in the free computer algebra system Singular, but requires no knowledge of the mathematics upon which it is based. A series of easy-to-use Mathematica modules are provided which can be used both in string theory and in more general applications requiring fast polynomial computations. The use of these modules is illustrated throughout with simple examples.

Program summary

Program title: STRINGVACUA
Catalogue identifier: AEBZ_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEBZ_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: GNU GPL
No. of lines in distributed program, including test data, etc.: 31 050
No. of bytes in distributed program, including test data, etc.: 163 832
Distribution format: tar.gz
Programming language: “Mathematica” syntax
Computer: Home and office spec desktop and laptop machines, networked or stand alone
Operating system: Windows XP (with Cygwin), Linux, Mac OS, running Mathematica V5 or above
RAM: Varies greatly depending on calculation to be performed
Classification: 11.1
External routines: Linux: the program “Singular” is called from Mathematica. Windows: “Singular” is called within the Cygwin environment from Mathematica.
Nature of problem: A central problem of string phenomenology is to find stable vacua in the four-dimensional effective theories which result from compactification.
Solution method: We present an algorithmic method, which uses techniques of algebraic geometry, to find all of the vacua of any given string-phenomenological system in a huge class.
Running time: Varies greatly depending on calculation requested.
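STRINGVACUA itself is a Mathematica front end driving Singular. Purely as an illustration of the kind of polynomial computation involved, the following Python/SymPy sketch (a toy potential with hypothetical symbols, not the package's method) locates the critical points of a scalar potential by forming the ideal of its first derivatives and computing a Gröbner basis.

```python
import sympy as sp

# Toy scalar potential in two real fields; vacua are its critical points.
x, y = sp.symbols("x y")
V = (x**2 + y**2 - 1) ** 2 + (x - y) ** 2

# Vacuum conditions: set all first derivatives of the potential to zero.
eqs = [sp.diff(V, v) for v in (x, y)]

# A Groebner basis of the ideal generated by the derivatives puts the
# polynomial system into a triangular form from which the vacua can be read off.
G = sp.groebner(eqs, x, y, order="lex")
print(G)

# Solve the zero-dimensional system explicitly.
print(sp.solve(eqs, [x, y], dict=True))
```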
68.
“Global Interoperability Using Semantics, Standards, Science and Technology” is a concept predicated on the assumption that semantic integration, the frameworks and standards that support information exchange, and advances in science and technology can together enable information-systems interoperability for many diverse users. This paper recommends technologies and approaches for enabling interoperability across a wide spectrum of political, geographical, and organizational levels, e.g. coalition, federal, state, tribal, regional, non-governmental, and private. These recommendations represent steps toward the goal of the Semantic Web, where computers understand information on web sites through knowledge representations, agents, and ontologies.
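As a small, hedged illustration of the kind of knowledge representation the paper points to (the vocabularies and URIs below are invented for the example), two organisations' local classes can be mapped onto a shared ontology so that a single query spans both data sets:

```python
from rdflib import Graph, Namespace, Literal, RDF, RDFS

EX  = Namespace("http://example.org/shared#")    # hypothetical shared ontology
FED = Namespace("http://example.org/federal#")   # one agency's vocabulary
ST  = Namespace("http://example.org/state#")     # another agency's vocabulary

g = Graph()
# Map both local classes onto the shared concept so queries interoperate.
g.add((FED.Incident, RDFS.subClassOf, EX.Event))
g.add((ST.Occurrence, RDFS.subClassOf, EX.Event))
g.add((FED.i42, RDF.type, FED.Incident))
g.add((FED.i42, RDFS.label, Literal("Bridge closure")))
g.add((ST.o7, RDF.type, ST.Occurrence))
g.add((ST.o7, RDFS.label, Literal("Road flooding")))

# One SPARQL query over the shared class reaches both agencies' data.
q = """
SELECT ?thing ?label WHERE {
  ?cls rdfs:subClassOf <http://example.org/shared#Event> .
  ?thing a ?cls ; rdfs:label ?label .
}
"""
for row in g.query(q):
    print(row.thing, row.label)
```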
69.
This is the first systematic investigation into the assumptions of image fusion using regression Kriging (RK) – a geostatistical method – illustrated with Landsat MS (multispectral) and SPOT (Satellite Pour l’Observation de la Terre) panchromatic images. The efficiency of different linear regression and Kriging methods in the fusion process is examined using visual and quantitative indicators. Results indicate a trade-off between spectral fidelity and spatial detail preservation for the GLS (generalized least squares) and OLS (ordinary least squares) regression methods in the RK process: OLS methods preserve more spatial detail, while GLS methods retain more spectral information from the MS images but at a greater computational cost. Under either OK (ordinary Kriging) or UK (universal Kriging), with either OLS or GLS, the spherical variogram improves spatial detail from the panchromatic image, while the exponential variogram maintains more spectral information from the MS image. Overall, RK-based fusion methods outperform conventional fusion approaches from both the spectral and the spatial points of view.
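A minimal sketch of the OLS-plus-ordinary-Kriging variant of regression Kriging described above, written with scikit-learn and PyKrige on synthetic stand-ins for the Landsat MS and SPOT panchromatic data (all values, grid sizes, and parameter choices are assumptions, not the paper's setup):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from pykrige.ok import OrdinaryKriging

rng = np.random.default_rng(1)

# Synthetic stand-ins: a fine-resolution panchromatic band and a coarse MS band
# sampled at scattered pixel centres (x, y).
n = 400
x, y = rng.uniform(0, 100, n), rng.uniform(0, 100, n)
pan = np.sin(x / 10) + np.cos(y / 15) + rng.normal(0, 0.05, n)
ms  = 0.8 * pan + 0.3 + rng.normal(0, 0.1, n)   # MS band correlated with pan

# Step 1 (regression): OLS trend of the MS band on the panchromatic predictor.
ols = LinearRegression().fit(pan.reshape(-1, 1), ms)
residuals = ms - ols.predict(pan.reshape(-1, 1))

# Step 2 (Kriging): ordinary Kriging of the OLS residuals with a spherical
# variogram, the combination the paper finds favours spatial detail.
ok = OrdinaryKriging(x, y, residuals, variogram_model="spherical")
gridx = np.linspace(0, 100, 50)
gridy = np.linspace(0, 100, 50)
kriged_resid, _ = ok.execute("grid", gridx, gridy)

# Fused estimate on the grid = regression trend + kriged residuals.
pan_grid = np.sin(gridx[None, :] / 10) + np.cos(gridy[:, None] / 15)
fused = ols.predict(pan_grid.reshape(-1, 1)).reshape(pan_grid.shape) + kriged_resid
print(fused.shape)   # (50, 50)
```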
70.
Recent robotics efforts have automated simple, repetitive tasks to increase execution speed and lessen an operator's cognitive load, allowing the operator to focus on higher-level objectives. However, an autonomous system will eventually encounter something unexpected, and if this exceeds the tolerance of automated solutions, there must be a way to fall back to teleoperation. Our solution is a largely autonomous system with the ability to determine when it is necessary to ask a human operator for guidance. We call this approach human-guided autonomy. Our design emphasizes human-on-the-loop control, where an operator expresses a desired high-level goal for which the reasoning component assembles an appropriate chain of subtasks. We introduce our work in the context of the DARPA Robotics Challenge (DRC) Finals and describe the software architecture Team TROOPER developed and used to control an Atlas humanoid robot. We employ perception, planning, and control automation for the execution of subtasks. If subtasks fail, or if changing environmental conditions invalidate the planned subtasks, the system automatically generates a new task chain. The operator is able to intervene at any stage of execution to provide input and adjustment to any control layer, enabling operator involvement to increase as confidence in automation decreases. We present our performance at the DRC Finals and discuss lessons learned.
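A toy sketch of the fall-back pattern described above, with hypothetical class and task names rather than Team TROOPER's actual architecture: an executor runs the planned subtask chain, replans once on failure, and only then asks the operator for a new chain.

```python
from typing import Callable, List, Optional

class Subtask:
    def __init__(self, name: str, action: Callable[[], bool]):
        self.name, self.action = name, action

    def run(self) -> bool:
        return self.action()

def execute_goal(plan: Callable[[str], List[Subtask]],
                 ask_operator: Callable[[str], Optional[List[Subtask]]],
                 goal: str, max_replans: int = 1) -> bool:
    """Run the planned subtask chain; replan on failure, then fall back to the operator."""
    chain, replans, i = plan(goal), 0, 0
    while i < len(chain):
        if chain[i].run():
            i += 1
            continue
        if replans < max_replans:          # automation tries a new chain first
            replans += 1
            chain, i = plan(goal), 0
        else:                              # automation exhausted: ask for human guidance
            guided = ask_operator(f"Subtask '{chain[i].name}' failed; new chain?")
            if guided is None:
                return False
            chain, i = guided, 0
    return True

# Toy usage: the second subtask always fails, so the operator supplies a detour.
walk  = Subtask("walk_to_door", lambda: True)
turn  = Subtask("turn_valve",   lambda: False)
push  = Subtask("push_door",    lambda: True)
plan  = lambda goal: [walk, turn]
guide = lambda prompt: [walk, push]        # operator-provided alternative chain
print(execute_goal(plan, guide, "open the door"))   # True
```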