91.
The Layer-Oriented Simulation Tool (LOST) is a numerical simulation code developed to analyze the performance of multiconjugate adaptive optics modules that follow a layer-oriented approach. The LOST code models the atmospheric layers as phase screens and then propagates the phase delays introduced in the natural guide stars' wave fronts using geometrical-optics approximations. These wave fronts are combined optically or numerically, including the effects of wave-front sensors on the measurements in terms of phase noise. The LOST code is described, and two applications to layer-oriented modules are briefly presented. We focus on the Multiconjugate Adaptive Optics Demonstrator to be mounted on the Very Large Telescope and on the Near-IR-Visible Adaptive Interferometer for Astronomy (NIRVANA) interferometric system to be installed at the combined focus of the Large Binocular Telescope.
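The layer-oriented propagation step can be sketched numerically: under the geometrical-optics approximation, each guide star's wave front simply accumulates the phase of every screen, sampled where the star's footprint crosses that layer. The Python sketch below is illustrative only; the function and variable names are assumptions, not taken from the LOST code.

```python
import math

def integrated_phase(phase_screens, star_angle_rad, pupil_x):
    """Sum layer phase contributions along a guide star's line of sight.

    phase_screens: list of (altitude_m, phase_fn) pairs, where phase_fn
    maps a horizontal coordinate (m) to a phase delay (rad).
    Geometrical-optics approximation: each layer is sampled where the
    star's footprint crosses it, offset by altitude * tan(angle).
    """
    total = 0.0
    for altitude, phase_fn in phase_screens:
        offset = altitude * math.tan(star_angle_rad)
        total += phase_fn(pupil_x + offset)
    return total

# Toy example: two sinusoidal phase screens at 0 km and 10 km altitude.
screens = [(0.0, lambda x: 0.1 * math.sin(x)),
           (10_000.0, lambda x: 0.05 * math.sin(0.5 * x))]
phi = integrated_phase(screens, star_angle_rad=1e-4, pupil_x=0.0)
```

An off-axis star (nonzero angle) samples the high layer at a shifted footprint, which is what makes the combination of several guide stars sensitive to the altitude of each layer.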
92.
We address the problem of efficiently visualizing large irregular volume data sets by exploiting a multiresolution model based on tetrahedral meshes. Multiresolution models, also called Level-Of-Detail (LOD) models, allow encoding the whole data set at a virtually continuous range of resolutions. We have identified a set of queries for extracting meshes at variable resolution from a multiresolution model, based on field values, domain location, or the opacity of the transfer function. Such queries allow trading off resolution against speed in visualization. We define a new compact data structure for encoding a multiresolution tetrahedral mesh built through edge collapses that supports selective refinement efficiently, and show that this structure has a storage cost 3 to 5.5 times lower than standard data structures used for tetrahedral meshes. The data structures and variable-resolution queries have been implemented, together with state-of-the-art visualization techniques, in a system for the interactive visualization of three-dimensional scalar fields defined on tetrahedral meshes. Experimental results show that selective refinement queries can support interactive visualization of large data sets.
93.
In this paper we show that statistical properties of the transition graph of the system to be verified can be exploited to improve the memory or time performance of verification algorithms. We show experimentally that protocols exhibit transition locality: with respect to the levels of a breadth-first state-space exploration, state transitions tend to connect states belonging to close levels of the transition graph. We support our claim by measuring transition locality for the set of protocols included in the Murφ verifier distribution. We present a cache-based verification algorithm that exploits transition locality to decrease memory usage, and a disk-based verification algorithm that exploits transition locality to decrease disk read accesses, thus reducing the time overhead due to disk usage. Both algorithms have been implemented within the Murφ verifier. Our experimental results show that our cache-based algorithm can typically save more than 40% of memory, with an average time penalty of about 50% when using (Murφ) bit compression and 100% when using bit compression and hash compaction, whereas our disk-based verification algorithm is typically more than ten times faster than a previously proposed disk-based verification algorithm and, even when using 10% of the memory needed to complete verification, is only between 40% and 530% (300% on average) slower than (RAM-based) Murφ with enough memory to complete the verification task at hand. Using just 300 MB of memory, our disk-based Murφ was able to complete verification of a protocol with about 10^9 reachable states, which would require more than 5 GB of memory using standard Murφ.
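The cache-based idea can be sketched as a breadth-first search whose visited-state set is a bounded FIFO cache: transition locality means that re-reached states are usually still cached, so evicting old states rarely forces re-expansion. This Python sketch is a simplified illustration, not the Murφ implementation; all names are assumptions.

```python
from collections import OrderedDict, deque

def bfs_with_cache(init_states, successors, cache_size):
    """Breadth-first reachability with a bounded visited-state cache.

    Exploits transition locality: since transitions mostly connect states
    in nearby BFS levels, evicting the oldest cached states (FIFO) rarely
    causes re-expansion.  A state that is evicted and reached again is
    simply re-explored, so the search stays sound but may repeat work.
    Returns the number of state expansions performed.
    """
    cache = OrderedDict()            # insertion order ~ BFS order
    frontier = deque(init_states)
    for s in init_states:
        cache[s] = True
    expansions = 0
    while frontier:
        s = frontier.popleft()
        expansions += 1
        for t in successors(s):
            if t not in cache:
                cache[t] = True
                if len(cache) > cache_size:
                    cache.popitem(last=False)   # evict oldest state
                frontier.append(t)
    return expansions

# A chain 0 -> 1 -> ... -> 10 is explored without re-expansion even with
# a tiny cache, because all revisit checks hit recently cached states.
chain_expansions = bfs_with_cache([0], lambda s: [s + 1] if s < 10 else [], 3)
```

With a cache large enough to span the active BFS levels, the expansion count matches an exact visited set; shrinking it below that trades memory for repeated work.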
94.
95.
Computational aspects of prospect theory with asset pricing applications
We develop an algorithm to compute asset allocations for Kahneman and Tversky’s (Econometrica, 47(2), 263–291, 1979) prospect theory. An application to benchmark data as in Fama and French (Journal of Finance, 47(2), 427–465, 1992) shows that the equity premium puzzle is resolved for parameter values similar to those found in the laboratory experiments of Kahneman and Tversky (Econometrica, 47(2), 263–291, 1979). While previous studies such as Benartzi and Thaler (The Quarterly Journal of Economics, 110(1), 73–92, 1995), Barberis, Huang and Santos (The Quarterly Journal of Economics, 116(1), 1–53, 2001), and Grüne and Semmler (Asset prices and loss aversion, Mimeo, Bielefeld University, Germany, 2005) focused on dynamic aspects of asset pricing and used only loss aversion to explain the equity premium puzzle, our paper explains the unconditional moments of asset pricing with a static two-period optimization problem. However, we incorporate asymmetric risk aversion. Our approach allows us to reduce the degree of loss aversion from 2.353 to 2.25, the value found by Tversky and Kahneman (Journal of Risk and Uncertainty, 5, 297–323, 1992), while increasing the risk aversion, with the exponent moving from 1 to 0.894, a value slightly higher than the 0.88 found by Tversky and Kahneman (Journal of Risk and Uncertainty, 5, 297–323, 1992). The equivalence of these parameter settings is robust to incorporating the size and value portfolios of Fama and French (Journal of Finance, 47(2), 427–465, 1992). However, the optimal prospect theory portfolios found on this larger set of assets differ drastically from the optimal mean-variance portfolio.
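The asymmetry between gains and losses enters through the Kahneman–Tversky value function, which the sketch below evaluates with the λ = 2.25 and α = 0.88 parameters cited above. Probability weighting is omitted for brevity, and the paper's full allocation algorithm is not reproduced here; this is a minimal illustration.

```python
def pt_value(x, alpha=0.88, lam=2.25):
    """Kahneman-Tversky value function: concave over gains, convex and
    steeper (loss-averse, factor lam) over losses relative to a
    reference point at zero."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

def prospect_utility(outcomes, probs, alpha=0.88, lam=2.25):
    """Prospect value of a gamble (probability weighting omitted for
    brevity; the full theory also distorts the probabilities)."""
    return sum(p * pt_value(x, alpha, lam) for x, p in zip(outcomes, probs))

# Loss aversion at work: a symmetric 50/50 gamble of +100 / -100 has a
# strictly negative prospect value, so it is rejected.
u = prospect_utility([100.0, -100.0], [0.5, 0.5])
```

This rejection of fair symmetric gambles is the mechanism behind prospect-theory resolutions of the equity premium puzzle: equity returns must carry a premium to compensate the disproportionate weight placed on losses.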
96.
A crack runs steadily in an elastic, isotropic, fluid-saturated porous solid at an intersonic constant speed c. The crack-tip speeds of interest are bounded below by the slower of the slow longitudinal wave-speed and the shear wave-speed, and above by the fast longitudinal wave-speed. Biot’s theory of poroelasticity with inertia forces governs the motion of the mixture. The poroelastic moduli depend on the porosity, and the complete range of porosities n ∈ [0, 1] is investigated. Solids are obtained as the limit case n = 0, and the continuity of the energy release rate as the porosity vanishes is addressed. Three characteristic regions in the (n, c)-plane are delineated, depending on the relative order of the body wave-speeds. Mode II loading conditions are considered, with a permeable crack surface. Cracks with and without process zones are envisaged. In each region, the analytical solution to a Riemann–Hilbert problem provides the stress, pore pressure and velocity fields near the tip of the crack. For subsonic propagation, the asymptotic crack-tip fields are known to be continuous in the body [Loret and Radi (2001) J Mech Phys Solids 49(5):995–1020]. In contrast, for intersonic crack propagation without a process zone, the asymptotic stress and pore pressure may display a discontinuity across two or four symmetric rays emanating from the moving crack tip. Under Mode II loading conditions, the singularity exponent for energetically admissible tip speeds turns out to be weaker than 1/2, except at a special point and along special curves of the (n, c)-plane. The introduction of a finite-length process zone is required so that (1) the energy release rate at the crack tip is strictly positive and finite, and (2) the relative sliding of the crack surfaces has the same direction as the applied loading. The presence of the process zone is shown to wipe out possible first-order discontinuities.
97.
Nanostructured powders of Nb-doped TiO2 (TN) and of SnO2 mixed with Nb-doped TiO2 in two different atomic ratios, 10:1 (TSN 101) and 1:1 (TSN 11), were synthesized using the reverse-micelle microemulsion of a nonionic surfactant (brine solution/1-hexanol/Triton X-100/cyclohexane). The powders were characterized by transmission electron microscopy (TEM) and X-ray diffraction (XRD). Thick films were fabricated for gas sensors and characterized by XRD analysis and field-emission scanning electron microscopy (FE-SEM). The effects of the film morphology and of the firing temperature, in the range 650–850 °C, on CO sensitivity were studied. The best gas response, expressed as the ratio between the resistance in air and the resistance under gas exposure (R_air/R_gas), was measured for TSN 11, which reached a value of 11 under 1,000 ppm CO exposure. All sensor types showed good thermal stability. Electrochemical impedance spectroscopy (EIS) measurements were performed in different gas atmospheres (air, O2, CO and NO2) to better understand the electrical properties of the nanostructured mixed metal oxides.
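The response metric above is a simple resistance ratio. The sketch below computes it with illustrative resistance values chosen to reproduce the reported response of 11; the actual film resistances are not given in the abstract, so these numbers are assumptions.

```python
def gas_response(r_air_ohm, r_gas_ohm):
    """Sensor response for a reducing gas on an n-type oxide film:
    the ratio of the resistance in air to the resistance under gas
    exposure (R_air / R_gas)."""
    return r_air_ohm / r_gas_ohm

# Illustrative only: a film whose resistance drops from 2.2 MOhm in air
# to 0.2 MOhm under CO would give a response of 11, the value reported
# for TSN 11 at 1,000 ppm CO.
resp = gas_response(2.2e6, 0.2e6)
```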
98.
In recent years, the number of Wi-Fi hotspots at public venues has grown substantially, promoting WLAN technologies as the ubiquitous solution for high-speed wireless connectivity in public areas. However, the adoption of a random-access CSMA-based paradigm for the 802.11 MAC protocol makes it difficult to ensure high throughput and a fair allocation of radio resources in 802.11-based WLANs. In this paper we extensively evaluate, via simulation, the interaction between the flow-control mechanisms implemented at the TCP layer and the contention-avoidance techniques used at the 802.11 MAC layer. We conducted our study initially considering M wireless stations performing downloads from the Internet. From our results, we observed that the TCP downlink throughput is limited not by collision events but by the inability of the MAC protocol to give the base station a higher chance of accessing the channel. We propose a simple, easy-to-implement modification of the base station's behavior that increases TCP throughput while reducing useless MAC protocol overheads. With our scheme, the base station is allowed to periodically transmit bursts of data frames towards the mobile hosts. We design a resource allocation protocol aimed at maximizing the success probability of uplink transmissions by dynamically adapting the burst length to the collision probability estimated by the base station. By design, our scheme also helps achieve a fairer allocation of the channel bandwidth between downlink and uplink flows, and between TCP and UDP flows. Simulation results confirm both the improvement in TCP downlink throughput and the reduction in system unfairness.
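The burst-length adaptation can be sketched as a simple heuristic controller. The rule below (grow the burst with the estimated collision probability, up to a cap) is an assumption for illustration, not the paper's exact resource allocation protocol.

```python
def adapt_burst_length(collision_prob, n_stations, max_burst=32):
    """Pick a downlink burst length from the base station's collision
    estimate (illustrative heuristic, not the paper's rule).

    More contention -> longer downlink bursts, so that contending uplink
    stations face fewer contended slots per delivered downlink frame.
    """
    if not 0.0 <= collision_prob < 1.0:
        raise ValueError("collision probability must be in [0, 1)")
    burst = 1 + round(n_stations * collision_prob / (1.0 - collision_prob))
    return min(burst, max_burst)

# No observed collisions: no bursting; heavy contention: capped burst.
light = adapt_burst_length(0.0, n_stations=10)
heavy = adapt_burst_length(0.9, n_stations=10)
```

The cap bounds the extra delay the burst imposes on uplink traffic, which is what keeps the scheme from trading unfairness in the opposite direction.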
99.
A support vector machine (SVM) approach to the classification of transients in nuclear power plants is presented. SVM is a machine-learning algorithm that has been successfully used in pattern recognition for cluster analysis. In the present work, single- and multiclass SVMs are combined into a hierarchical structure for distinguishing among transients in nuclear systems on the basis of measured data. An example of the application of the approach is presented with respect to the classification of anomalies and malfunctions occurring in the feedwater system of a boiling water reactor. The data used in the example are provided by the HAMBO simulator of the Halden Reactor Project.
100.
Spatial contagion between two financial markets X and Y appears when there is more dependence between X and Y when they are doing badly than when they exhibit typical performance. In this paper, we introduce an index to measure such contagion effects. The tool is based on suitable copulas associated with the markets and on the calculation of the related conditional Spearman's correlation coefficients. As an empirical application, the proposed index is used to cluster European stock market indices according to their behavior in recent years. The whole procedure is expected to be useful for portfolio diversification in crisis periods.
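A simplified version of such an index can be sketched by comparing Spearman's rho on the joint lower tail with the rest of the sample; the copula-based machinery of the paper is replaced here by direct rank correlations on subsamples, which is an illustrative assumption (ties are also ignored for brevity).

```python
def rank(xs):
    # rank positions 0..n-1; ties ignored for this sketch
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = float(pos)
    return r

def spearman(x, y):
    """Spearman's rho as the Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

def contagion_index(x, y, q=0.5):
    """Tail-minus-central conditional Spearman's rho: positive values
    suggest contagion, i.e. dependence is stronger when both markets are
    in their lower q-quantile.  Simplified relative to the paper."""
    tx = sorted(x)[int(q * (len(x) - 1))]
    ty = sorted(y)[int(q * (len(y) - 1))]
    pairs = list(zip(x, y))
    tail = [(a, b) for a, b in pairs if a <= tx and b <= ty]
    rest = [(a, b) for a, b in pairs if not (a <= tx and b <= ty)]
    if len(tail) < 3 or len(rest) < 3:
        raise ValueError("too few observations in a subsample")
    return spearman(*zip(*tail)) - spearman(*zip(*rest))

# Example: strong comonotone dependence in the joint lower tail, inverse
# dependence elsewhere -> a clearly positive contagion signal.
x = [-4, -3, -2, -1, 1, 2, 3, 4]
y = [-4, -3, -2, -1, 4, 3, 2, 1]
idx = contagion_index(x, y)
```

Clustering market indices by pairwise values of such an index then groups markets that tend to crash together, which is the information relevant for crisis-period diversification.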