Article Search
Paid full text: 694 papers
Free: 25 papers
Domestic free access: 30 papers
Electrical engineering: 39 papers
General: 24 papers
Chemical industry: 62 papers
Metalworking: 16 papers
Machinery and instruments: 32 papers
Building science: 32 papers
Mining engineering: 4 papers
Energy and power: 20 papers
Light industry: 2 papers
Water conservancy engineering: 5 papers
Petroleum and natural gas: 12 papers
Weapons industry: 3 papers
Radio and electronics: 71 papers
General industrial technology: 25 papers
Metallurgical industry: 122 papers
Atomic energy technology: 2 papers
Automation technology: 278 papers
2024: 1 paper
2023: 7 papers
2022: 8 papers
2021: 15 papers
2020: 6 papers
2019: 9 papers
2018: 11 papers
2017: 20 papers
2016: 12 papers
2015: 15 papers
2014: 17 papers
2013: 30 papers
2012: 17 papers
2011: 55 papers
2010: 20 papers
2009: 41 papers
2008: 58 papers
2007: 42 papers
2006: 55 papers
2005: 49 papers
2004: 27 papers
2003: 47 papers
2002: 31 papers
2001: 16 papers
2000: 11 papers
1999: 14 papers
1998: 11 papers
1997: 12 papers
1996: 11 papers
1995: 11 papers
1994: 22 papers
1993: 9 papers
1992: 8 papers
1991: 7 papers
1990: 7 papers
1989: 3 papers
1988: 5 papers
1987: 2 papers
1984: 3 papers
1983: 1 paper
1982: 1 paper
1980: 1 paper
1975: 1 paper
A total of 749 query results were found (search time: 0 ms).
1.
Combinatorial auctions are a useful trading mechanism for transportation service procurement in e-marketplaces. To enhance competition in combinatorial auctions, a novel auction mechanism of two-round bidding with bundling optimization is proposed. In the proposed mechanism, the shipper/auctioneer groups the objects into several bundles based on the bidding results of the first round; carriers/bidders then bid for the object bundles in the second round. The bundling optimization is formulated as a multi-objective model with two criteria: price complementarity and combination consistency. A Quantum Evolutionary Algorithm (QEA) with a β-based rotation gate and an encoding scheme based on the non-zero elements of the complementary coefficient matrix is developed to solve the model. Compared with a benchmark Genetic Algorithm, the QEA achieves better computational performance for small and medium-sized problems.
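The abstract names a QEA with a β-based rotation gate and a problem-specific encoding, neither of which is specified above. The sketch below therefore shows only a generic QEA loop (qubit amplitudes, observation, rotation toward the best solution so far); the rotation step size and the toy bit-counting fitness are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

def observe(alpha):
    """Collapse each qubit to 0/1 with probability beta^2 = 1 - alpha^2 of being 1."""
    return (np.random.rand(alpha.size) < 1.0 - alpha**2).astype(int)

def rotate(alpha, x, best, delta=0.05 * np.pi):
    """Rotate qubit amplitudes toward the best observed bit string."""
    theta = np.arccos(np.clip(alpha, -1.0, 1.0))            # current angle
    step = np.where(x != best, np.where(best == 1, delta, -delta), 0.0)
    theta = np.clip(theta + step, 0.0, np.pi / 2)
    return np.cos(theta)

def qea(fitness, n_bits, pop=20, gens=100):
    """Generic QEA loop: observe, evaluate, keep the best, rotate toward it."""
    alpha = np.full((pop, n_bits), 1.0 / np.sqrt(2))        # uniform superposition
    best_x, best_f = None, -np.inf
    for _ in range(gens):
        for i in range(pop):
            x = observe(alpha[i])
            f = fitness(x)
            if f > best_f:
                best_x, best_f = x.copy(), f
            alpha[i] = rotate(alpha[i], x, best_x)
    return best_x, best_f

# Toy usage: maximise the number of ones in the bit string.
print(qea(lambda x: x.sum(), n_bits=16))
```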
2.
Efficient Spreadsheet Algorithm for First-Order Reliability Method   (total citations: 2; self-citations: 0; citations by others: 2)
A new spreadsheet-cell-object-oriented algorithm for the first-order reliability method is proposed and illustrated for cases with correlated nonnormals and with explicit and implicit performance functions. The new approach differs from the writers' earlier algorithm by obviating the need to compute equivalent normal means and equivalent normal standard deviations. It obtains the solution faster and is more efficient, robust, and succinct. Other advantages include ease of initialization prior to constrained optimization, ease of randomization of initial values for checking robustness, and fewer required optimization constraints during the spreadsheet-automated search for the design point. Two cases with implicit performance functions, namely an asymmetrically loaded beam on a Winkler medium and a strut with complex supports, are analyzed using the new approach and discussed. Comparisons are also made between the proposed approach and one based on the Rosenblatt transformation.
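To illustrate the underlying idea of finding the reliability index by constrained optimization directly in the original variable space, a minimal Python/scipy sketch is given below for correlated normal variables only (the paper also treats nonnormals and spreadsheet automation). The limit-state function, means, standard deviations, and correlation are made-up values.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical limit state g(x) = x1*x2 - 1500 with two correlated normal
# variables; all statistics below are invented for illustration.
mu  = np.array([38.0, 54.0])
sig = np.array([3.8, 2.7])
rho = np.array([[1.0, 0.3],
                [0.3, 1.0]])
cov_inv = np.linalg.inv(rho * np.outer(sig, sig))   # inverse covariance matrix

def g(x):
    """Performance function: failure when g(x) <= 0."""
    return x[0] * x[1] - 1500.0

def beta_sq(x):
    """Squared Hasofer-Lind distance from the mean, in the original space."""
    d = x - mu
    return d @ cov_inv @ d

# Constrained minimisation locates the design point on the limit-state surface.
res = minimize(beta_sq, x0=mu, constraints={"type": "eq", "fun": g})
print("design point:", res.x, "reliability index beta:", np.sqrt(res.fun))
```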
3.
It was found that the discontinuity at the end of an impulse can lead to numerical inaccuracy, because the discontinuity produces an extra impulse and thus an extra displacement in the time history analysis. This extra impulse is proportional to the discontinuity value at the end of the impulse and to the size of the integration time step. To overcome this difficulty, an effective approach is proposed to reduce the extra impulse and hence the extra displacement. The approach is to perform a single small time step immediately upon the termination of the applied impulse, while all other steps use the step size determined from accuracy considerations on the period. The feasibility of this approach is explored analytically, and the analytical results are confirmed by numerical examples. Numerical studies also show that the approach can be applied to other step-by-step integration methods. Its only apparent disadvantage is that it slightly complicates the programming of dynamic analysis codes.
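A minimal sketch of the idea, assuming a single-degree-of-freedom system integrated with the constant-average-acceleration Newmark method: the step size is chosen so that one step lands exactly on the end of the impulse, and one small step is taken immediately afterwards. The step-selection logic and all numerical values are illustrative guesses, not the paper's formulation.

```python
import numpy as np

def newmark_sdof(m, c, k, load, t_end, dt, t_impulse_end, dt_small=1e-4):
    """Constant-average-acceleration integration of m*a + c*v + k*u = p(t),
    with one small step inserted right after the applied impulse ends."""
    beta, gamma = 0.25, 0.5
    t, u, v = 0.0, 0.0, 0.0
    a = (load(0.0) - c * v - k * u) / m
    out = [(t, u)]
    while t < t_end:
        if t < t_impulse_end <= t + dt:          # land exactly on the impulse end
            h = t_impulse_end - t
        elif abs(t - t_impulse_end) < 1e-12:     # single small step just after it
            h = dt_small
        else:
            h = dt
        # Incremental (effective-stiffness) form of the Newmark method.
        keff = k + gamma / (beta * h) * c + m / (beta * h**2)
        dp = (load(t + h) - load(t)
              + (m / (beta * h) + gamma / beta * c) * v
              + (m / (2 * beta) + h * (gamma / (2 * beta) - 1) * c) * a)
        du = dp / keff
        dv = gamma / (beta * h) * du - gamma / beta * v + h * (1 - gamma / (2 * beta)) * a
        da = du / (beta * h**2) - v / (beta * h) - a / (2 * beta)
        u, v, a, t = u + du, v + dv, a + da, t + h
        out.append((t, u))
    return np.array(out)

# Toy usage: a rectangular impulse that ends at t = 0.1 s.
impulse = lambda t: 100.0 if t < 0.1 else 0.0
resp = newmark_sdof(m=1.0, c=0.1, k=400.0, load=impulse,
                    t_end=1.0, dt=0.02, t_impulse_end=0.1)
```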
4.
A procedure based on Kötter's equation is developed for evaluating the bearing capacity factor Nγ with Terzaghi's mechanism. Application of Kötter's equation makes the analysis statically determinate, and the unique failure surface is identified using force equilibrium conditions. The computed Nγ values are found to be higher than Terzaghi's values by 0.25–20%, with a diverging trend for higher values of the angle of internal friction of the soil. Fairly good agreement is observed with other solutions based on finite differences coupled with the associated flow rule, limit analysis, and limit equilibrium. Finally, comparison with available experimental results vis-à-vis other solutions shows that the computed Nγ values give reasonably good predictions.
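For context, the sketch below only shows where the factor Nγ enters Terzaghi's bearing capacity equation for a strip footing; the factor values in the usage line are illustrative numbers of roughly the magnitude tabulated for φ ≈ 30°, not results from the procedure described above.

```python
def terzaghi_qu(c, gamma, q, B, Nc, Nq, Ngamma):
    """Ultimate bearing capacity of a strip footing:
    q_u = c*Nc + q*Nq + 0.5*gamma*B*Ngamma."""
    return c * Nc + q * Nq + 0.5 * gamma * B * Ngamma

# Example: cohesionless soil, 1 m wide footing, no surcharge
# (factor values are illustrative only).
print(terzaghi_qu(c=0.0, gamma=18.0, q=0.0, B=1.0, Nc=37.2, Nq=22.5, Ngamma=19.7))
```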
5.
Lisp and its descendants are among the most important and widely used programming languages. At the same time, parallelism in the architecture of computer systems is becoming commonplace. There is a pressing need to extend the technology of automatic parallelization that has become available to Fortran programmers of parallel machines to the realm of Lisp programs and symbolic computing. In this paper we present a comprehensive approach to the compilation of Scheme programs for shared-memory multiprocessors. Our strategy has two principal components: interprocedural analysis and program restructuring. We introduce procedure strings and stack configurations as a framework in which to reason about interprocedural side-effects and object lifetimes, and develop a system of interprocedural analysis, using abstract interpretation, that is used in the dependence analysis and memory management of Scheme programs. We introduce the transformations of exit-loop translation and recursion splitting to treat the control structures of iteration and recursion that arise commonly in Scheme programs. We propose an alternative representation for s-expressions that facilitates the parallel creation and access of lists. We have implemented these ideas in a parallelizing Scheme compiler and run-time system, and we complement the theory of our work with snapshots of programs during the restructuring process, and some preliminary performance results of the execution of object codes produced by the compiler. This work was supported in part by the National Science Foundation under Grant No. NSF MIP-8410110, the U.S. Department of Energy under Grant No. DE-FG02-85ER25001, the Office of Naval Research under Grant No. ONR N00014-88-K-0686, the U.S. Air Force Office of Scientific Research under Grant No. AFOSR-F49620-86-C-0136, and by a donation from the IBM Corporation.
6.
The hydrodynamics of a two-dimensional gas–solid fluidized bed reactor were studied experimentally and computationally. Computational fluid dynamics (CFD) simulation results from a commercial CFD software package, Fluent, were compared to those obtained from experiments conducted in a fluidized bed containing spherical glass beads of 250– in diameter. A multifluid Eulerian model incorporating the kinetic theory for solid particles was applied to simulate the gas–solid flow. Momentum exchange coefficients were calculated using the Syamlal–O'Brien, Gidaspow, and Wen–Yu drag functions. The solid-phase kinetic energy fluctuation was characterized by varying the restitution coefficient from 0.9 to 0.99. The modeling predictions compared reasonably well with the experimental bed expansion ratio measurements and qualitative gas–solid flow patterns. Pressure drops predicted by the simulations were in relatively close agreement with experimental measurements at superficial gas velocities higher than the minimum fluidization velocity, Umf. Furthermore, the predicted instantaneous and time-averaged local voidage profiles showed similarities with the experimental results. Further experimental and modeling efforts at comparable time and space resolutions are required for the validation of CFD models of fluidized bed reactors.
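As an example of one of the drag closures mentioned, the sketch below evaluates a gas–solid momentum exchange coefficient using the Wen–Yu correlation as it is commonly quoted; the exact form implemented in a given CFD package should be checked against its documentation, and the property values in the usage line are illustrative assumptions.

```python
def wen_yu_drag(eps_g, rho_g, mu_g, d_p, slip):
    """Gas-solid momentum exchange coefficient from the Wen-Yu correlation
    (written as commonly quoted; verify against the solver's manual).
    eps_g : gas volume fraction        rho_g : gas density [kg/m^3]
    mu_g  : gas viscosity [Pa.s]       d_p   : particle diameter [m]
    slip  : |u_g - u_s| slip velocity magnitude [m/s]"""
    eps_s = 1.0 - eps_g
    re = rho_g * d_p * slip / mu_g                       # particle Reynolds number
    re_eff = max(eps_g * re, 1e-12)
    cd = 24.0 / re_eff * (1.0 + 0.15 * re_eff**0.687) if re_eff < 1000 else 0.44
    return 0.75 * cd * eps_s * eps_g * rho_g * slip / d_p * eps_g**(-2.65)

# Example: air and 250-micron glass beads (illustrative property values).
print(wen_yu_drag(eps_g=0.55, rho_g=1.2, mu_g=1.8e-5, d_p=250e-6, slip=0.3))
```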
7.
Many applications need to find the top-k most similar pairs of data in a given database. However, as the scale of data to be processed in application domains keeps growing, computing such top-k most similar pairs is very difficult. This paper proposes a search scheme for the top-k most similar pairs based on sequential computation. First, all data pairs are partitioned into multiple groups; then an all-pairs grouping algorithm and a core-pairs grouping algorithm are proposed. By computing the top-k most similar pairs within each group separately and selecting the k pairs with the highest similarity from the per-group results, the overall top-k most similar pairs are determined correctly. Finally, experiments on synthetic data are conducted, and the performance evaluation results verify the effectiveness and scalability of the proposed algorithms.
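The grouping algorithms themselves are not described in the abstract, so the sketch below only fixes the problem being solved: a brute-force top-k most-similar-pairs baseline against which any grouping or pruning scheme can be compared. The similarity function in the usage example is an assumption.

```python
import heapq
from itertools import combinations

def topk_similar_pairs(points, k, sim):
    """Brute-force O(n^2) baseline for the top-k most similar pairs."""
    heap = []                                        # min-heap of (similarity, i, j)
    for (i, a), (j, b) in combinations(enumerate(points), 2):
        s = sim(a, b)
        if len(heap) < k:
            heapq.heappush(heap, (s, i, j))
        elif s > heap[0][0]:
            heapq.heapreplace(heap, (s, i, j))
    return sorted(heap, reverse=True)

# Toy usage: negative Euclidean distance as the similarity measure.
pts = [(0.0, 0.0), (1.0, 0.1), (5.0, 5.0), (1.1, 0.0)]
sim = lambda a, b: -((a[0] - b[0])**2 + (a[1] - b[1])**2) ** 0.5
print(topk_similar_pairs(pts, k=2, sim=sim))
```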
8.
GEORG SCHWARZ, Connection Science, 1992, 4(3-4): 207-226
Computing devices such as Turing machines resolve the dilemma between the necessary finitude of effective procedures and the potential infinity of a function's domain by distinguishing between a finite-state processing part, defined over finitely many representation types, and a memory sufficiently large to contain representation tokens for any of the function's arguments and values. Connectionist networks have been shown to be (at least) Turing-equivalent if provided with infinitely many nodes or infinite-precision activation values and weights. Physical computation, however, is necessarily finite.

The notion of a processing-memory system is introduced to discuss physical computing systems. What is constitutive of a processing-memory system is that its causal structure supports the functional distinction between processing part and memory that is necessary for employing a type-token distinction for representations, which in turn allows representations to be the objects of computational manipulation. Moreover, the processing part realized by such systems provides a criterion of identity for the function computed and helps to define the competence and performance of a processing-memory system.

Networks, on the other hand, collapse the functional distinction between processing part and memory. Since preservation of this distinction is necessary for employing a type-token distinction for representation, connectionist information processing does not consist in the computational manipulation of representations. Moreover, since we no longer have a criterion of identity for the function processed other than the behaviour of the network itself, we are left without a competence-performance distinction for connectionist networks.

9.
Accelerated life testing (ALT) is widely used in the reliability estimation of high-reliability products to obtain relevant information about an item's performance and its failure mechanisms. To analyse the observed ALT data, reliability practitioners need to select a suitable accelerated life model based on the nature of the stress and the physics involved. A statistical model consists of (i) a lifetime distribution that represents the scatter in product life and (ii) a relationship between life and stress. In practice, several accelerated life models could be used for the same failure mode, and the choice of the best model is far from trivial. For this reason, an efficient selection procedure to discriminate between a set of competing accelerated life models is of great importance for practitioners. In this paper, accelerated life model selection is approached using the Approximate Bayesian Computation (ABC) method, with a likelihood-based approach used for comparison. To demonstrate the efficiency of the ABC method in calibrating and selecting an accelerated life model, an extensive Monte Carlo simulation study is carried out using different distances to measure the discrepancy between the empirical and simulated times-to-failure data. The ABC algorithm is then applied to real accelerated fatigue life data in order to select the most likely model among five plausible models. It is demonstrated that the ABC method outperforms the likelihood-based approach in terms of reliability predictions, mainly at lower percentiles, which are particularly useful in reliability engineering and risk assessment applications. Moreover, it is shown that ABC can mitigate the effects of model misspecification through an appropriate choice of the distance function.
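A minimal rejection-ABC sketch for model choice is given below to illustrate the general mechanism: simulate under each candidate model, accept draws whose summary distance to the observed data falls below a tolerance, and read posterior model probabilities off the acceptance frequencies. The candidate models, priors, distance, and tolerance are all made up; they are not the paper's accelerated-life models or data.

```python
import numpy as np

def abc_model_choice(observed, models, priors, n_draws=20000, eps=10.0,
                     distance=lambda a, b: np.abs(np.mean(a) - np.mean(b))):
    """Rejection-ABC model selection: accept (model, theta) draws whose
    simulated data lie within eps of the observed data under `distance`."""
    rng = np.random.default_rng(0)
    counts = np.zeros(len(models))
    for _ in range(n_draws):
        m = rng.integers(len(models))            # uniform prior over models
        theta = priors[m](rng)                   # draw parameters from model m's prior
        sim = models[m](rng, theta, len(observed))
        if distance(sim, observed) < eps:
            counts[m] += 1
    return counts / max(counts.sum(), 1)

# Toy usage: Weibull vs. lognormal lifetimes (all settings invented).
obs = np.random.default_rng(1).weibull(1.5, 50) * 100.0
models = [lambda rng, th, n: rng.weibull(th[0], n) * th[1],
          lambda rng, th, n: rng.lognormal(th[0], th[1], n)]
priors = [lambda rng: (rng.uniform(0.5, 3.0), rng.uniform(50, 200)),
          lambda rng: (rng.uniform(3.0, 6.0), rng.uniform(0.2, 1.5))]
print(abc_model_choice(obs, models, priors))
```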
10.
The ontology-based transformation from a computation-independent model (CIM) to a platform-independent model (PIM) consists of two parts: the discovery of meta-ontology mapping rules and the execution of the transformation based on those rules, with mapping discovery being the foundation of the transformation. To this end, a mapping-discovery method for model transformation is proposed. Starting from the known metamodels of the CIM and the PIM, the method extracts their respective meta-ontologies and applies similarity-based ontology mapping techniques to establish mapping relations between the two meta-ontologies, which then serve as the basis for semantic matching and reasoning during model transformation. A case study verifies the feasibility and practicality of the method.
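As a toy stand-in for the similarity-based ontology mapping step, the sketch below matches concepts of two hypothetical meta-ontologies by name similarity only; real mapping systems combine name, structural, and instance similarity, and every concept name here is invented.

```python
from difflib import SequenceMatcher

def map_concepts(cim_concepts, pim_concepts, threshold=0.5):
    """Map each CIM concept to its most name-similar PIM concept,
    keeping only matches above the similarity threshold."""
    name_sim = lambda a, b: SequenceMatcher(None, a.lower(), b.lower()).ratio()
    mapping = {}
    for c in cim_concepts:
        best = max(pim_concepts, key=lambda p: name_sim(c, p))
        if name_sim(c, best) >= threshold:
            mapping[c] = best
    return mapping

# Hypothetical concept names on both sides.
print(map_concepts(["BusinessProcess", "Actor", "BusinessRule"],
                   ["Process", "User", "Rule"]))
```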