Full-text access type
Paid full text | 45590 articles |
Free | 6053 articles |
Free within China | 3095 articles |
Subject classification
Electrical engineering | 10214 articles |
Technology theory | 1 article |
General | 4650 articles |
Chemical industry | 5761 articles |
Metalworking | 1443 articles |
Machinery and instruments | 2135 articles |
Building science | 3837 articles |
Mining engineering | 1532 articles |
Energy and power | 1992 articles |
Light industry | 1744 articles |
Hydraulic engineering | 1685 articles |
Petroleum and natural gas | 2948 articles |
Weapons industry | 486 articles |
Radio and electronics | 3957 articles |
General industrial technology | 3959 articles |
Metallurgical industry | 1875 articles |
Nuclear technology | 949 articles |
Automation technology | 5570 articles |
Publication year
2024 | 250 articles |
2023 | 701 articles |
2022 | 1350 articles |
2021 | 1652 articles |
2020 | 1846 articles |
2019 | 1594 articles |
2018 | 1520 articles |
2017 | 1861 articles |
2016 | 1908 articles |
2015 | 1999 articles |
2014 | 2911 articles |
2013 | 3199 articles |
2012 | 3286 articles |
2011 | 3484 articles |
2010 | 2502 articles |
2009 | 2686 articles |
2008 | 2554 articles |
2007 | 2876 articles |
2006 | 2654 articles |
2005 | 2126 articles |
2004 | 1907 articles |
2003 | 1657 articles |
2002 | 1354 articles |
2001 | 1107 articles |
2000 | 969 articles |
1999 | 828 articles |
1998 | 679 articles |
1997 | 533 articles |
1996 | 456 articles |
1995 | 411 articles |
1994 | 420 articles |
1993 | 281 articles |
1992 | 223 articles |
1991 | 193 articles |
1990 | 163 articles |
1989 | 141 articles |
1988 | 104 articles |
1987 | 78 articles |
1986 | 58 articles |
1985 | 50 articles |
1984 | 35 articles |
1983 | 24 articles |
1982 | 30 articles |
1981 | 10 articles |
1980 | 15 articles |
1979 | 8 articles |
1977 | 5 articles |
1975 | 4 articles |
1959 | 8 articles |
1951 | 9 articles |
10000 results in total; search time: 15 ms
181.
182.
Because of micro-scale unevenness and specular (mirror) effects, light incident on a machined (cut) surface undergoes multiple scattering and backscattering, which degrades the accuracy of 3D topography reconstruction from workpiece surface images. A surface reconstruction algorithm based on the bidirectional reflectance distribution function (BRDF) is proposed, and a reconstruction equation for metal-cut surfaces expressed with the Hapke model is constructed. The algorithm discretizes the reconstruction equation, computes the tilt and azimuth angles of the light source and the object to obtain their gradient values, and finally recovers the surface height with a total differential equation. A comparison between the 3D topography reconstructed from images of actual machined surfaces and stylus-probe measurements shows that the reconstructed roughness profile closely matches the measured one, demonstrating the accuracy and effectiveness of the proposed algorithm and offering a new approach to 3D topography reconstruction of machined surfaces.
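The abstract's final step, recovering surface height from a gradient field, can be illustrated with a generic gradient-integration sketch. The Hapke-model reconstruction equation and the tilt/azimuth computation from the paper are not reproduced here; the snippet only shows one standard way (Fourier-domain least-squares integration) to go from gradient fields to a height map, using a synthetic surface as a stand-in.

```python
import numpy as np

def height_from_gradients(p, q):
    """Recover a height map z(x, y) from gradient fields p = dz/dx, q = dz/dy.

    Frankot-Chellappa-style least-squares integration in the Fourier domain;
    a generic stand-in for the paper's total-differential height recovery step.
    """
    rows, cols = p.shape
    wx = np.fft.fftfreq(cols) * 2.0 * np.pi   # spatial frequencies along x
    wy = np.fft.fftfreq(rows) * 2.0 * np.pi   # spatial frequencies along y
    WX, WY = np.meshgrid(wx, wy)
    P, Q = np.fft.fft2(p), np.fft.fft2(q)
    denom = WX**2 + WY**2
    denom[0, 0] = 1.0                          # avoid division by zero at DC
    Z = (-1j * WX * P - 1j * WY * Q) / denom
    Z[0, 0] = 0.0                              # height is recovered up to an additive constant
    return np.real(np.fft.ifft2(Z))

# Synthetic example: gradients of a known surface, then reconstruction.
y, x = np.mgrid[0:128, 0:128] / 128.0
z_true = np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)
p = np.gradient(z_true, axis=1)
q = np.gradient(z_true, axis=0)
z_rec = height_from_gradients(p, q)
print("max abs error:", np.abs((z_rec - z_rec.mean()) - (z_true - z_true.mean())).max())
```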
183.
Dominance-based rough set approach and knowledge reductions in incomplete ordered information system (total citations: 6; self-citations: 0; citations by others: 6)
Many rough-set-based methods for dealing with incomplete information systems have been proposed in recent years. However, they are only suitable for incomplete systems with regular attributes, whose domains are not preference-ordered. This paper therefore focuses on a more complex case, the incomplete ordered information system, in which all attributes are treated as criteria; a criterion is an attribute with a preference-ordered domain. To conduct classification analysis in the incomplete ordered information system, the concept of a similarity dominance relation is first proposed. Two types of knowledge reduction are then defined, each preserving a different notion of similarity dominance relation. By introducing the approximate distribution reduct into the incomplete ordered decision system, judgment theorems and discernibility matrices associated with four novel approximate distribution reducts are obtained. A numerical example is employed to substantiate the conceptual arguments.
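As a rough illustration of the kind of relation the abstract refers to, the following sketch builds a dominance relation over a toy incomplete ordered table, treating a missing value as compatible with any value. This is one plausible reading, not the paper's exact definition of the similarity dominance relation, and the table itself is made up.

```python
from typing import List, Optional

# A toy incomplete ordered information table: rows are objects, columns are
# criteria with preference-ordered (larger-is-better) domains; None = missing.
table: List[List[Optional[int]]] = [
    [3, None, 2],
    [2, 1,    2],
    [None, 3, 1],
    [3, 3,    3],
]

def dominates(y: List[Optional[int]], x: List[Optional[int]]) -> bool:
    """y dominates x if y is at least as good as x on every criterion where
    both values are known -- one possible reading of a similarity dominance
    relation for incomplete data, assumed here for illustration."""
    return all(vy >= vx for vy, vx in zip(y, x) if vy is not None and vx is not None)

def dominating_set(i: int) -> set:
    """Indices of objects that dominate object i."""
    return {j for j, row in enumerate(table) if dominates(row, table[i])}

for i in range(len(table)):
    print(f"objects dominating x{i}: {sorted(dominating_set(i))}")
```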
184.
Tsong Yueh Chen, Huai Liu 《Journal of Systems and Software》2008,81(12):2146-2162
Adaptive random testing (ART) has recently been proposed to enhance the failure-detection capability of random testing. In ART, test cases are not only randomly generated but also evenly spread over the input domain, and various ART algorithms have been developed to spread test cases evenly in different ways. Previous studies have shown that some ART algorithms prefer to select test cases from the edge of the input domain rather than from the centre; that is, inputs do not have an equal chance of being selected as test cases. Since we do not know where the failure-causing inputs are prior to testing, it is not desirable for inputs to have different chances of being selected as test cases. Therefore, in this paper, we investigate how to enhance some ART algorithms by offsetting the edge preference, and propose a new family of ART algorithms. A series of simulations has been conducted, showing that the new algorithms not only select test cases more evenly but also have better failure-detection capabilities.
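For orientation, a minimal sketch of a baseline ART algorithm (fixed-size-candidate-set ART) is given below: each new test case is chosen, from k random candidates, as the one farthest from all previously executed tests. The edge-offsetting variants proposed in the paper are not reproduced here.

```python
import math
import random

def fscs_art(n_tests, k=10, dim=2, rng=random.Random(0)):
    """Fixed-Size-Candidate-Set ART over the unit hypercube: each new test case
    is the candidate farthest (in Euclidean distance) from all executed tests.
    A baseline ART algorithm for illustration, not the paper's new family."""
    tests = [[rng.random() for _ in range(dim)]]          # first test: purely random
    while len(tests) < n_tests:
        candidates = [[rng.random() for _ in range(dim)] for _ in range(k)]
        def min_dist(c):
            return min(math.dist(c, t) for t in tests)    # distance to nearest executed test
        tests.append(max(candidates, key=min_dist))       # farthest-from-set candidate wins
    return tests

print(fscs_art(5))
```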
185.
In an organization operating in the bancassurance sector we identified a low-risk IT subportfolio of 84 IT projects comprising 16,500 function points in total, each project varying in size and duration, for which we were able to quantify requirements volatility. This representative portfolio stems from a much larger portfolio of IT projects. We calculated the volatility from the function point counts that were available to us and aggregated these figures into a requirements volatility benchmark. We found that maximum requirements volatility rates depend on size and duration, which refutes currently known industrial averages. For instance, a monthly growth rate of 5% is considered a critical failure factor, yet in our low-risk portfolio more than 21% of successful projects had a volatility larger than 5%. We proposed a mathematical model, taking size and duration into account, that provides a maximum healthy volatility rate more in line with the reality of low-risk IT portfolios. Based on the model, we proposed a tolerance factor expressing the maximal volatility tolerance for a project or portfolio. For a low-risk portfolio the empirically found tolerance is apparently acceptable, and values exceeding it are used to alert IT decision makers. We derived two volatility ratios from this model, the π-ratio and the ρ-ratio, which express how close the volatility of a project has come to the danger zone where requirements volatility reaches a critical failure rate. The volatility data of a governmental IT portfolio were juxtaposed with our bancassurance benchmark, immediately exposing a problematic project, which was corroborated by its actual failure. Where function points are less common, e.g. in the embedded industry, we used daily source-code size measures instead and illustrated how to govern the volatility of a software product line of a hardware manufacturer. With the three real-world portfolios we illustrated that our results serve the purpose of an early-warning system for projects that are bound to fail due to excessive volatility. Moreover, we developed essential requirements volatility metrics that belong on an IT governance dashboard and presented such a volatility dashboard.
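A minimal sketch of the kind of volatility bookkeeping the abstract describes: it computes the compound monthly growth of a project's function-point count and flags it against the 5% rule of thumb quoted above. The paper's size- and duration-dependent model and its π- and ρ-ratios are not given in the abstract and are not reproduced; the project figures are hypothetical.

```python
def monthly_growth_rate(fp_start: float, fp_end: float, months: float) -> float:
    """Compound monthly growth of the requirements volume, measured in
    function points, over the project's duration."""
    return (fp_end / fp_start) ** (1.0 / months) - 1.0

def flag_volatility(fp_start, fp_end, months, threshold=0.05):
    """Flag a project whose monthly requirements growth exceeds a threshold.
    The 5% default mirrors the industrial rule of thumb quoted in the abstract;
    the paper argues the healthy limit should instead depend on size and
    duration, but that model is not reproduced here."""
    rate = monthly_growth_rate(fp_start, fp_end, months)
    return rate, rate > threshold

# Hypothetical project: 800 FP at the start, 1000 FP at delivery, 12 months.
rate, flagged = flag_volatility(800, 1000, 12)
print(f"monthly growth: {rate:.1%}, exceeds 5% threshold: {flagged}")
```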
186.
Recently, a power study of some popular rank-based tests for bivariate independence has been conducted. An alternative class of tests, appropriate for testing not only bivariate but also multivariate independence, is developed, and its small-sample performance is studied. The test statistics exploit the familiar fact that, under independence, the joint characteristic function factorizes into the product of the component characteristic functions, and they may be written in a closed form convenient for computer implementation. Simulations on a distribution-free version of the new test statistic show that the proposed method compares well with standard methods of testing independence via the empirical distribution function. The methods are applied to multivariate observations incorporating data from several major stock-market indices. Issues pertaining to the theoretical properties of the new test are also addressed.
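The factorization idea can be illustrated with a naive grid-based statistic that compares the joint empirical characteristic function with the product of the marginal ones. This is only an illustration of the principle, not the paper's closed-form multivariate statistic; the grid and sample sizes are arbitrary choices.

```python
import numpy as np

def ecf_independence_stat(x, y, grid=np.linspace(-2, 2, 9)):
    """Naive bivariate illustration of characteristic-function-based
    independence testing: average squared modulus of the difference between
    the joint empirical characteristic function and the product of the
    marginal ones over a small grid. The paper derives a closed-form
    multivariate statistic; this grid version only illustrates the idea."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    total = 0.0
    for t in grid:
        for s in grid:
            joint = np.mean(np.exp(1j * (t * x + s * y)))
            prod = np.mean(np.exp(1j * t * x)) * np.mean(np.exp(1j * s * y))
            total += abs(joint - prod) ** 2
    return total / grid.size ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=500)
print("independent:", ecf_independence_stat(x, rng.normal(size=500)))
print("dependent:  ", ecf_independence_stat(x, x + 0.3 * rng.normal(size=500)))
# A p-value would typically be obtained by permuting y (permutation test).
```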
187.
A new likelihood-based AR approximation is given for ARMA models. The usual algorithms for computing the likelihood of an ARMA model require O(n) flops per function evaluation. Using the new approximation, an algorithm is developed which requires only O(1) flops in repeated likelihood evaluations. In most cases, the new algorithm gives results identical to or very close to the exact maximum likelihood estimate (MLE). This algorithm is easily implemented in high-level quantitative programming environments (QPEs) such as Mathematica, MATLAB and R. To obtain reasonable speed, previous ARMA maximum likelihood algorithms are usually implemented in C or some other machine-efficient language; with this algorithm it is easy to do maximum likelihood estimation for long time series directly in the QPE of choice. The new algorithm is extended to obtain the MLE of the mean parameter. Simulation experiments which illustrate the effectiveness of the new algorithm are discussed. Mathematica and R packages which implement the algorithm discussed in this paper are available [McLeod, A.I., Zhang, Y., 2007. Online supplements to “Faster ARMA Maximum Likelihood Estimation”, 〈http://www.stats.uwo.ca/faculty/aim/2007/faster/〉]. Based on these package implementations, it is expected that interested researchers will be able to implement this algorithm in other QPEs.
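To make the AR-approximation idea concrete, the sketch below converts ARMA parameters into truncated AR(∞) coefficients, filters the series to obtain approximate innovations, and returns a conditional Gaussian log-likelihood. This is the plain O(n)-per-evaluation version; the paper's reorganization that makes repeated evaluations O(1) is not reproduced, and the simulated series and truncation length are arbitrary choices.

```python
import numpy as np

def ar_approx_coeffs(phi, theta, m=50):
    """Truncated AR(infinity) coefficients pi_j of an ARMA(p, q) model written
    as pi(B) z_t = a_t with pi(B) = phi(B) / theta(B); Box-Jenkins convention,
    phi(B) = 1 - phi_1 B - ... and theta(B) = 1 - theta_1 B - ... ."""
    phi_c = np.zeros(m + 1); phi_c[0] = 1.0
    phi_c[1:len(phi) + 1] = -np.asarray(phi)
    theta_c = np.zeros(m + 1); theta_c[0] = 1.0
    theta_c[1:len(theta) + 1] = -np.asarray(theta)
    pi = np.zeros(m + 1)
    pi[0] = 1.0
    for j in range(1, m + 1):
        pi[j] = phi_c[j] - np.dot(pi[:j], theta_c[j:0:-1])
    return pi

def approx_loglik(z, phi, theta, m=50):
    """Approximate (conditional) Gaussian log-likelihood: filter the series
    with the truncated AR representation to get innovations, then concentrate
    out the innovation variance. This is the plain O(n)-per-evaluation version;
    the paper's cheaper re-evaluation scheme is not reproduced here."""
    z = np.asarray(z, dtype=float) - np.mean(z)
    pi = ar_approx_coeffs(phi, theta, m)
    a = np.convolve(z, pi, mode="full")[m:len(z)]   # innovations after burn-in
    n = len(a)
    return -0.5 * n * np.log(np.mean(a ** 2))

rng = np.random.default_rng(1)
# Hypothetical ARMA(1,1) series simulated recursively for the demo.
n, phi1, theta1 = 500, 0.6, 0.3
a = rng.normal(size=n); z = np.zeros(n)
for t in range(1, n):
    z[t] = phi1 * z[t - 1] + a[t] - theta1 * a[t - 1]
print(approx_loglik(z, [0.6], [0.3]), approx_loglik(z, [0.0], [0.0]))
```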
188.
Simos G. Meintanis 《Computational Statistics & Data Analysis》2008,52(5):2496-2503
Goodness-of-fit statistics are considered that are appropriate for generalized families of distributions resulting from exponentiation. The tests employ a transformation of the data determined by the cumulative distribution function of the corresponding non-generalized distribution. The resulting test, which makes use of the Mellin transform of the transformed data, is shown to be consistent. Simulation results for the case of the generalized Rayleigh distribution show that the proposed test compares well with standard methods based on the empirical distribution function.
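A sketch of the underlying idea, under stated assumptions: for an exponentiated family G(x) = F(x)^a, the transformed data U = F(X) follow a power distribution whose Mellin transform is a/(a+s-1), so empirical Mellin moments can be compared with that form. The statistic below only illustrates this idea, not the paper's test, and the generalized Rayleigh example assumes a known scale.

```python
import numpy as np

def gof_exponentiated_stat(x, base_cdf, s_grid=np.arange(1.5, 5.5, 0.5)):
    """Illustration of goodness-of-fit testing for exponentiated families
    G(x) = F(x)**a: under the null, U = F(X) follows a power distribution with
    Mellin transform E[U**(s-1)] = a / (a + s - 1). The statistic compares
    empirical Mellin moments of U with that form (a estimated by maximum
    likelihood). Not the paper's exact statistic; critical values would
    normally come from a parametric bootstrap."""
    u = base_cdf(np.asarray(x, dtype=float))
    a_hat = -len(u) / np.sum(np.log(u))                  # MLE of the exponent a
    emp = np.array([np.mean(u ** (s - 1.0)) for s in s_grid])
    theo = a_hat / (a_hat + s_grid - 1.0)
    return len(u) * np.sum((emp - theo) ** 2)

# Hypothetical example: generalized Rayleigh data, G(x) = (1 - exp(-x**2))**a,
# with the scale treated as known so the sketch stays short.
rng = np.random.default_rng(2)
a_true = 2.0
x = np.sqrt(-np.log(1.0 - rng.uniform(size=1000) ** (1.0 / a_true)))
def rayleigh_cdf(t):
    return 1.0 - np.exp(-t ** 2)
print("statistic under the null:", gof_exponentiated_stat(x, rayleigh_cdf))
```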
189.
One of the main problems in operational risk management is the lack of loss data, which affects the parameter estimates of the marginal distributions of the losses. The principal reason is that financial institutions only started to collect operational loss data a few years ago, due to the relatively recent definition of this type of risk. Given this drawback, Bayesian methods and simulation tools could be a natural solution to the problem: Bayesian methods allow the scarce and sometimes inaccurate quantitative data collected by the bank to be integrated with prior information provided by experts. An original proposal is a Bayesian approach for modelling operational risk and for calculating the capital required to cover the estimated risks. Besides this methodological innovation, a computational scheme based on Markov chain Monte Carlo (MCMC) simulations is required. In particular, applying MCMC to estimate the parameters of the marginals shows advantages in terms of a reduced capital charge under different choices of the marginal loss distributions.
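A minimal sketch of the Bayesian ingredient described above: a random-walk Metropolis sampler for the parameters of a lognormal severity distribution with expert-informed priors. The lognormal marginal, the prior hyperparameters, and the tiny loss sample are all assumptions made for this sketch, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical scarce operational-loss sample (severities).
losses = np.array([12_000., 3_500., 48_000., 7_900., 150_000., 22_000., 5_400.])
y = np.log(losses)

def log_posterior(mu, log_sigma):
    """Log posterior for a lognormal severity model with expert-informed normal
    priors on mu and log(sigma). The lognormal choice and the prior
    hyperparameters are assumptions standing in for whatever marginal and
    expert elicitation the paper uses."""
    sigma = np.exp(log_sigma)
    loglik = np.sum(-np.log(sigma) - 0.5 * ((y - mu) / sigma) ** 2)
    logprior = -0.5 * ((mu - 10.0) / 2.0) ** 2 - 0.5 * (log_sigma / 1.0) ** 2
    return loglik + logprior

# Random-walk Metropolis over (mu, log_sigma).
n_iter, step = 20_000, 0.25
chain = np.empty((n_iter, 2))
cur = np.array([10.0, 0.0])
cur_lp = log_posterior(*cur)
for i in range(n_iter):
    prop = cur + step * rng.normal(size=2)
    prop_lp = log_posterior(*prop)
    if np.log(rng.uniform()) < prop_lp - cur_lp:   # Metropolis accept/reject
        cur, cur_lp = prop, prop_lp
    chain[i] = cur

burned = chain[5_000:]
mu_post, sigma_post = burned[:, 0], np.exp(burned[:, 1])
# A capital figure could then be read off a high quantile of aggregated loss
# draws simulated with these posterior samples.
print("posterior mean mu:", mu_post.mean(), "posterior mean sigma:", sigma_post.mean())
```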
190.
The goal of service differentiation is to provide different service quality levels to meet changing system configurations and resource availability, and to satisfy the different requirements and expectations of applications and users. In this paper, we investigate the problem of quantitative service differentiation on cluster-based delay-sensitive servers. The goal is to support system-wide service quality optimization with respect to resource allocation while provisioning proportionality fairness to clients. We first propose and promote a square-root proportional differentiation model. Interestingly, both popular delay factors, queueing delay and slowdown, are reciprocally proportional to the allocated resource usage. We formulate quantitative service differentiation as a generalized resource allocation optimization that minimizes system delay, defined as the weighted sum of the delays of client requests, and we prove that the optimization-based resource allocation scheme essentially provides square-root proportional service differentiation to clients. We then study service differentiation provisioning in terms of an important relative performance metric, slowdown, and give a closed-form expression for the expected slowdown of a popular heavy-tailed workload model with respect to resource allocation on a server cluster. We design a two-tier resource management framework, which integrates a dispatcher-based node partitioning scheme and a server-based adaptive process allocation scheme, and evaluate it with different models via extensive simulations. Results show that the square-root proportional model provides service differentiation at a minimum cost in system delay, and that the two-tier resource allocation framework can provide fine-grained and predictable service differentiation on cluster-based servers.
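The square-root result can be seen from a small worked example: if each class's delay is reciprocal to its allocated share, minimizing the weighted sum of delays under a fixed total resource yields allocations proportional to the square roots of the weights, hence square-root proportional delay ratios. The sketch below solves that simplified optimization in closed form; it does not reproduce the paper's queueing model or its two-tier framework.

```python
import numpy as np

def sqrt_proportional_allocation(weights, total_resource=1.0):
    """Closed-form solution of  min sum_i w_i / r_i  s.t.  sum_i r_i = R,
    under the simplified model where a class's delay is reciprocal to its
    allocated resource share. The optimum is r_i = R * sqrt(w_i) / sum_j sqrt(w_j),
    so delays end up in square-root proportion to the weights. A sketch of that
    observation only, not the paper's cluster resource-management framework."""
    w = np.asarray(weights, dtype=float)
    r = total_resource * np.sqrt(w) / np.sqrt(w).sum()
    delay = 1.0 / r
    return r, delay

weights = [1.0, 4.0, 16.0]       # hypothetical class weights
r, delay = sqrt_proportional_allocation(weights)
print("allocations:", np.round(r, 3))
print("delay ratios vs class 0:", np.round(delay / delay[0], 3))
# Expected ratios: 1 : 1/2 : 1/4, i.e. sqrt(w_0 / w_i).
```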