Subscription full text   338 articles
Free full text   32 articles
Electrical engineering   4 articles
General   1 article
Chemical industry   61 articles
Metalworking   11 articles
Machinery and instruments   8 articles
Building science   5 articles
Mining engineering   1 article
Energy and power   10 articles
Light industry   21 articles
Hydraulic engineering   2 articles
Radio and electronics   30 articles
General industrial technology   49 articles
Metallurgical industry   56 articles
Atomic energy technology   8 articles
Automation technology   103 articles
2023   5 articles
2022   5 articles
2021   11 articles
2020   18 articles
2019   9 articles
2018   17 articles
2017   11 articles
2016   26 articles
2015   13 articles
2014   17 articles
2013   27 articles
2012   22 articles
2011   28 articles
2010   19 articles
2009   19 articles
2008   12 articles
2007   7 articles
2006   6 articles
2005   6 articles
2004   6 articles
2003   3 articles
2002   7 articles
2001   3 articles
2000   3 articles
1999   4 articles
1998   18 articles
1997   8 articles
1996   4 articles
1992   2 articles
1990   1 article
1989   1 article
1987   1 article
1986   1 article
1985   1 article
1983   1 article
1982   2 articles
1981   1 article
1980   2 articles
1979   1 article
1978   3 articles
1977   1 article
1976   2 articles
1975   2 articles
1974   2 articles
1973   1 article
1972   1 article
1971   1 article
1970   1 article
1968   2 articles
1957   2 articles
370 search results found (search time: 15 ms)
1.
We introduce a compact hierarchical procedural model that combines feature-based primitives to describe complex terrains with varying levels of detail. Our model is inspired by skeletal implicit surfaces and defines the terrain elevation function by means of a construction tree. Leaves represent terrain features and are generic parametrized skeletal primitives, such as mountains, ridges, valleys, rivers, lakes or roads. Inner nodes combine leaves and subtrees with carving, blending or warping operators. The elevation of the terrain at a given point is evaluated by traversing the tree and combining the contributions of the primitives. The definition of the tree leaves and operators guarantees that the resulting elevation function is Lipschitz, which speeds up the sphere tracing used to render the terrain. Our model is compact and allows for the creation of large terrains with a high level of detail using a reduced set of primitives. We show the creation of different kinds of landscapes and demonstrate that our model allows the shape and distribution of landform features to be controlled efficiently.
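A minimal sketch of such a construction tree, assuming hypothetical primitive and operator classes (Mountain, Blend) and a Gaussian fall-off weight that is not taken from the paper:

```python
# Sketch of a feature-based terrain construction tree (illustrative names and
# weighting, not the authors' implementation).
import math

class Mountain:
    """Skeletal primitive: a radial bump of given height and radius."""
    def __init__(self, cx, cy, height, radius):
        self.cx, self.cy, self.height, self.radius = cx, cy, height, radius

    def elevation(self, x, y):
        d2 = (x - self.cx) ** 2 + (y - self.cy) ** 2
        w = math.exp(-d2 / self.radius ** 2)   # smooth fall-off weight in (0, 1]
        return self.height * w, w              # (contribution, weight)

class Blend:
    """Inner node: weight-averaged combination of its children."""
    def __init__(self, *children):
        self.children = children

    def elevation(self, x, y):
        pairs = [c.elevation(x, y) for c in self.children]
        wsum = sum(w for _, w in pairs)
        if wsum == 0.0:
            return 0.0, 0.0
        return sum(h * w for h, w in pairs) / wsum, wsum

# The terrain is the root of the tree; elevation is evaluated by traversing it.
terrain = Blend(Mountain(0, 0, 120, 50), Mountain(60, 10, 80, 30))
height, _ = terrain.elevation(25, 5)
print(f"elevation at (25, 5): {height:.2f}")
```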
2.
We generalize Kedlaya and Umans’ modular composition algorithm to the multivariate case. As a main application, we give fast algorithms for many operations involving triangular sets (over a finite field), such as modular multiplication, inversion, or change of order. For the first time, we are able to exhibit running times for these operations that are almost linear, without any overhead exponential in the number of variables. As a further application, we show that, from the complexity viewpoint, Charlap, Coley, and Robbins’ approach to elliptic curve point counting can be competitive with the better known approach due to Elkies.
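As a rough illustration of the objects involved (not of the fast algorithm itself), the sketch below multiplies two polynomials modulo a small, made-up triangular set over GF(7) by taking successive remainders with sympy; the set (T1, T2), the prime and the inputs are all invented for the example:

```python
# Naive multiplication modulo a triangular set T = (T1(x), T2(x, y)) over GF(p):
# reduce with respect to the main variable of each Ti, then reduce coefficients
# modulo p. This is only a definition-level illustration.
from sympy import symbols, expand, rem, trunc

x, y = symbols("x y")
p = 7
T1 = x**3 + 2*x + 1          # monic in x
T2 = y**2 + x*y + 3          # monic in y, coefficients in GF(p)[x]

def mul_mod_triangular(a, b):
    r = expand(a * b)
    r = rem(r, T2, y)        # bring the degree in y below deg_y(T2)
    r = rem(r, T1, x)        # then bring the degree in x below deg_x(T1)
    return trunc(expand(r), p)   # reduce integer coefficients modulo p

print(mul_mod_triangular(x**2 * y + 4, y + x))
```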
3.
The fundamental macroscopic material property needed to quantify the flow in a fibrous medium viewed as a porous medium is the permeability. Composite processing models require the permeability as input data to predict flow patterns and pressure fields. Because permeability reflects both the magnitude and the anisotropy of the fluid/fiber resistance, efficient numerical techniques are needed to solve linear and nonlinear homogenization problems online during the flow simulation. In previous work, expressions for the macroscopic permeability were derived in a double-scale porosity medium for both Newtonian and rheo-thinning resins. In the linear case only a microscopic calculation on a representative volume is required, implying as many microscopic calculations as there are representative microscopic volumes in the whole fibrous structure. In the nonlinear case, even when the porous microstructure can be described by a unique representative volume, the microscopic calculation must be carried out many times, because the microscale resin viscosity depends on the macroscopic velocity, which in turn depends on the permeability resulting from the microscopic calculation. A nonlinear multi-scale problem thus results. In this paper an original offline-online procedure is proposed for the efficient solution of nonlinear flow problems in porous media.
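The nonlinear coupling described above can be pictured as a fixed point between the macroscopic Darcy velocity and a velocity-dependent permeability. The sketch below illustrates the offline-online idea in a mock one-dimensional setting; the microscale_permeability closure and all numerical values are invented stand-ins, not the paper's model:

```python
# Offline: tabulate permeability versus macroscopic velocity once.
# Online: Darcy fixed point using only cheap table look-ups.
import numpy as np

def microscale_permeability(v_macro):
    # Mock stand-in for a microscopic solve on a representative volume:
    # a shear-thinning resin makes the apparent permeability velocity-dependent.
    return 1e-9 * (1.0 + abs(v_macro)) ** -0.3

# ---- offline stage ----
v_grid = np.linspace(0.0, 1e-2, 50)
K_grid = np.array([microscale_permeability(v) for v in v_grid])

# ---- online stage: v = -(K(v)/mu) * grad_p ----
mu, grad_p = 0.1, -1e5          # resin viscosity [Pa.s], pressure gradient [Pa/m]
v = 0.0
for _ in range(50):
    K = np.interp(abs(v), v_grid, K_grid)   # negligible-cost evaluation
    v_new = -(K / mu) * grad_p
    if abs(v_new - v) < 1e-12:
        break
    v = v_new
print(f"converged velocity: {v:.3e} m/s, permeability: {K:.3e} m^2")
```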
4.
This paper proposes a generalized finite element method based on the use of parametric solutions as enrichment functions. These parametric solutions are precomputed off-line and stored in memory in the form of a computational vademecum, so that they can be used on-line at negligible cost. This yields a computational method that is more efficient than traditional finite element methods for process simulation. One key issue of the proposed method is the efficient computation of the parametric enrichments. These are computed and efficiently stored in memory by employing proper generalized decompositions. Although the presented method can be broadly applied, it is particularly well suited to manufacturing processes involving localized physics that depend on many parameters, such as welding. After introducing the vademecum-generalized finite element method formulation, we present some numerical examples related to the simulation of thermal models encountered in welding processes.
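A minimal sketch of the off-line/on-line split, with an SVD of synthetic parametric snapshots standing in for a proper generalized decomposition (the thermal field, parameter range and mode count below are invented, not taken from the paper):

```python
# Store a separated representation u(x, p) ~ sum_i F_i(x) * G_i(p) off-line,
# then evaluate it on-line for new parameter values at negligible cost.
import numpy as np

# ---- offline: snapshots of a synthetic parametric "thermal" field ----
x = np.linspace(0.0, 1.0, 200)               # space
p = np.linspace(1.0, 5.0, 40)                # heat-source parameter
U = np.array([np.exp(-pi_ * (x - 0.5) ** 2) for pi_ in p]).T   # (nx, np)

F, s, Gt = np.linalg.svd(U, full_matrices=False)
r = 5                                        # keep a few modes (the "vademecum")
F, G = F[:, :r] * s[:r], Gt[:r, :].T         # spatial modes F_i(x), parametric modes G_i(p)

# ---- online: evaluate the enrichment for an unseen parameter value ----
p_new = 3.3
G_new = np.array([np.interp(p_new, p, G[:, i]) for i in range(r)])
u_new = F @ G_new                            # r multiply-adds per spatial node
err = np.max(np.abs(u_new - np.exp(-p_new * (x - 0.5) ** 2)))
print(f"max error of the online evaluation: {err:.2e}")
```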
5.
We describe the research and the integration methods we developed to make the HRP-2 humanoid robot climb vertical industrial-norm ladders. We use our multi-contact planner and a multi-objective closed-loop controller formulated as a QP (quadratic program). First, a set of contacts to climb the ladder is planned off-line (automatically or by the user). These contacts are provided as input to a finite state machine, which builds supplementary tasks accounting for geometric uncertainties and specific grasp procedures and adds them to the QP controller. The QP controller provides instantaneous desired states, in terms of joint accelerations and contact forces, to be tracked by the embedded low-level motor controllers. Our trials revealed that hardware changes are necessary and that parts of the software must be made more robust. Yet, we confirmed that HRP-2 has the kinematic and power capabilities to climb real industrial ladders, such as those found in nuclear power plants, large-scale manufacturing facilities (e.g. aircraft and shipyard plants) and construction sites.
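A toy sketch of the multi-objective QP structure mentioned above, reduced to an unconstrained weighted least-squares problem over joint accelerations; the joint count, task Jacobians, weights and targets are made up, and the real controller additionally carries contact-force variables, the equations of motion and friction-cone/torque constraints:

```python
# Each task k contributes a weighted residual ||J_k * ddq - a_k||^2; stacking
# the weighted tasks and solving in the least-squares sense gives ddq.
import numpy as np

n = 6                                    # number of joints (arbitrary)
rng = np.random.default_rng(0)

tasks = [
    # (weight, task Jacobian J_k, desired task acceleration a_k)
    (10.0, rng.standard_normal((3, n)), np.array([0.1, 0.0, -0.2])),   # end-effector task
    ( 1.0, rng.standard_normal((3, n)), np.zeros(3)),                  # secondary task
    ( 0.01, np.eye(n), np.zeros(n)),                                   # regularization
]

A = np.vstack([np.sqrt(w) * J for w, J, _ in tasks])
b = np.concatenate([np.sqrt(w) * a for w, _, a in tasks])
ddq, *_ = np.linalg.lstsq(A, b, rcond=None)
print("joint accelerations:", np.round(ddq, 3))
```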
6.
We consider a Riemann surface X defined by a polynomial f(x,y) of degree d, whose coefficients are chosen randomly. Hence, we can suppose that X is smooth, that the discriminant δ(x) of f has d(d−1) simple roots, Δ, and that δ(0)≠0, i.e. the corresponding fiber has d distinct points {y1,…,yd}. When we lift a loop 0 ∈ γ ⊂ C∖Δ by a continuation method, we get d paths in X connecting {y1,…,yd}, hence defining a permutation of that set. This is called monodromy. Here we present experimentations in Maple to obtain statistics on the distribution of the transpositions corresponding to loops around each point of Δ. Multiplying families of “neighbor” transpositions, we construct permutations and the subgroups of the symmetric group they generate. This allows us to establish and study experimentally two conjectures on the distribution of these transpositions and on the transitivity of the generated subgroups. Assuming that these two conjectures are true, we develop tools allowing fast probabilistic algorithms for absolute multivariate polynomial factorization, under the hypothesis that the factors behave like random polynomials whose coefficients follow uniform distributions.
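The lifting step can be mimicked numerically: track the fiber along a discretized loop and match roots between consecutive steps. The sketch below (in Python rather than Maple) does this for the toy curve y² − x and a loop around the branch point x = 0; the curve, the loop and the step count are chosen only for illustration:

```python
# Track the d roots in y of f(x, y) along a loop in x by nearest-neighbour
# matching, then read off the permutation of the fiber (the monodromy).
import numpy as np

def fiber(x):
    # Roots in y of f(x, y) = y**2 - x, i.e. coefficients [1, 0, -x].
    return np.roots([1.0, 0.0, -x])

base = 1.0 + 0.0j
start = fiber(base)
current = start.copy()

# Walk the loop x(t) = exp(2*pi*i*t) around the branch point x = 0.
for t in np.linspace(0.0, 1.0, 400)[1:]:
    new = fiber(base * np.exp(2j * np.pi * t))
    matched = np.empty_like(current)
    used = set()
    for i, yi in enumerate(current):         # greedy nearest-neighbour matching
        j = min((k for k in range(len(new)) if k not in used),
                key=lambda k: abs(new[k] - yi))
        matched[i] = new[j]
        used.add(j)
    current = matched

# Sheet i ends up on the starting sheet closest to current[i].
perm = [int(np.argmin(np.abs(start - yi))) for yi in current]
print("monodromy permutation:", perm)        # expected: the two sheets are swapped
```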
7.
We have designed a new symbolic-numeric strategy for efficiently and accurately computing floating-point Puiseux series defined by a bivariate polynomial over an algebraic number field. In essence, computations modulo a well-chosen prime number p are used to obtain the exact information needed to guide the floating-point computations. In this paper, we detail the symbolic part of our algorithm. First of all, we study the modular reduction of Puiseux series and give a good reduction criterion ensuring that the information required by the numerical part is preserved. To establish our results, we introduce a simple modification of classical Newton polygons, which we call “generic Newton polygons”, and which turns out to be very convenient. Finally, we estimate the size of the good primes obtained with deterministic and probabilistic strategies. Some of these results were announced without proof at ISSAC’08.
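For reference, the classical Newton polygon that these "generic Newton polygons" modify is the lower convex hull of the points (j, v(a_j)) for f = Σ a_j(x) yʲ, with v the x-adic valuation; the negatives of its edge slopes give the candidate exponents of the Puiseux series. A small self-contained sketch on a made-up polynomial (not the authors' code):

```python
# Lower Newton polygon of f(x, y) = y**4 + x*y**2 + x**3*y + x**5.
from fractions import Fraction

valuations = {4: 0, 2: 1, 1: 3, 0: 5}        # {deg in y: x-adic valuation of a_j}
points = sorted(valuations.items())          # (j, v(a_j)), sorted by j

def lower_hull(pts):
    hull = []
    for px, py in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # drop hull[-1] if it lies on or above the segment hull[-2] -> (px, py)
            if (y2 - y1) * (px - x1) >= (py - y1) * (x2 - x1):
                hull.pop()
            else:
                break
        hull.append((px, py))
    return hull

hull = lower_hull(points)
exponents = [Fraction(-(v2 - v1), j2 - j1) for (j1, v1), (j2, v2) in zip(hull, hull[1:])]
print("lower Newton polygon:", hull)
print("candidate Puiseux exponents:", [str(e) for e in exponents])   # 2 and 1/2
```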
8.
9.
To evaluate the association between pre-natal and post-natal exposure to pet ownership and lung function in children, a cross-sectional study, the Seven Northeastern Cities (SNEC) study, was conducted. In this study, children's lung function indices, including the forced expiratory volume in 1 second (FEV1), forced vital capacity (FVC), maximal mid-expiratory flow (MMEF), and peak expiratory flow (PEF), were measured with spirometers, and pet ownership information was collected by questionnaire. Using multiple logistic regression and generalized linear models, we found that, for all subjects, pet exposure in the first 2 years of life was significantly associated with lung function impairment of FVC<85% predicted (adjusted odds ratio [aOR]=1.28; 95% confidence interval [CI]: 1.01, 1.63). For current pet exposure, the increased odds of lung function impairment ranged from 35% (aOR=1.35; 95% CI: 1.12, 1.62) for FVC<85% predicted to 57% (aOR=1.57; 95% CI: 1.29, 1.93) for FEV1<85% predicted. In utero exposure was not related to lung function impairment. Compared with other pets, higher odds were observed among children with dogs. When stratified by gender, girls with current pet exposure were more likely to have lung function impairment than boys. These results imply that self-reported exposure to pets was negatively associated with lung function among the children under study.
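For readers unfamiliar with how an adjusted odds ratio and its 95% CI come out of a multiple logistic regression, here is a minimal sketch on simulated data; the variables, effect sizes and sample are invented, not the SNEC data:

```python
# Fit a logistic regression for a binary impairment outcome and report the
# exposure coefficient as an adjusted odds ratio with its 95% CI.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 2000
df = pd.DataFrame({
    "pet_exposure": rng.integers(0, 2, n),   # current pet ownership (0/1)
    "age": rng.integers(7, 14, n),
    "male": rng.integers(0, 2, n),
})
# Simulated outcome: lung function impairment (e.g. FVC < 85% predicted)
logit = -2.0 + 0.3 * df["pet_exposure"] - 0.05 * df["age"] + 0.1 * df["male"]
df["impaired"] = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

model = smf.logit("impaired ~ pet_exposure + age + male", data=df).fit(disp=0)
aor = np.exp(model.params["pet_exposure"])
lo, hi = np.exp(model.conf_int().loc["pet_exposure"])
print(f"aOR = {aor:.2f} (95% CI: {lo:.2f}, {hi:.2f})")
```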
10.
A boiling water reactor SVEA-96+ fresh fuel lattice has been used as the basis for a benchmark study of the void reactivity coefficient at assembly level over the full voidage range. Results have been obtained using the deterministic codes CASMO-4, HELIOS, PHOENIX and BOXER and the probabilistic code MCNP4C, combined in almost all cases with different cross-section libraries. A statistical analysis of the results showed that the void reactivity coefficient tends to become less negative beyond 80% void and that the discrepancies between codes tend to increase from less than 15% at voidages below 40% to more than 25% at voidages above 70%. The void reactivity coefficient results and the corresponding differences between codes were decomposed isotopically to interpret the discrepancies. The isotopic decomposition shows that the minimum observed in the void reactivity coefficient between 80% and 90% void is largely due to the decrease in the relative importance of the 157Gd(n, γ) rate with increasing voidage, and that the fundamental discrepancies between codes or libraries are mainly governed by the different predictions of the variation of the 238U(n, γ) rate with voidage.
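As a reminder of the quantity being compared, the assembly-level void reactivity coefficient can be taken as the change in reactivity ρ = (k∞ − 1)/k∞ per percent void between two voidage points. The sketch below evaluates this definition for placeholder multiplication factors; the k∞ values are illustrative only, not benchmark results:

```python
# Void reactivity coefficient between two void fractions, in pcm per % void.
def reactivity(k_inf):
    return (k_inf - 1.0) / k_inf             # reactivity (absolute units)

def void_coefficient(k1, v1, k2, v2):
    return (reactivity(k2) - reactivity(k1)) / (v2 - v1) * 1e5   # pcm / % void

# Placeholder k-infinity values at 40% and 50% void (illustrative only)
print(f"{void_coefficient(1.10, 40.0, 1.09, 50.0):.1f} pcm / % void")
```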