A total of 3,470 results matched the query (search time: 46 ms); results 991–1000 are shown below.
991.
992.
Improving MCMC, using efficient importance sampling (cited 1 time in total: 0 self-citations, 1 citation by others)
A generic Markov Chain Monte Carlo (MCMC) framework, based upon Efficient Importance Sampling (EIS), is developed which can be used for the analysis of a wide range of econometric models involving integrals without an analytical solution. EIS is a simple, generic and yet accurate Monte Carlo integration procedure based on sampling densities which are global approximations to the integrand. By embedding EIS within MCMC procedures based on Metropolis-Hastings (MH), one can significantly improve their numerical properties, essentially by providing a fully automated selection of critical MCMC components such as auxiliary sampling densities, normalizing constants and starting values. The potential of this integrated MCMC-EIS approach is illustrated with simple univariate integration problems and with the Bayesian posterior analysis of stochastic volatility models and stationary autoregressive processes.
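The abstract's central idea, using importance-sampling estimates of an intractable likelihood integral inside a Metropolis-Hastings chain, can be sketched in a few lines. The toy model, the crude Gaussian importance density (a stand-in for a fitted EIS density), and all parameter values below are illustrative assumptions, not the authors' algorithm.

```python
# Minimal sketch: MH sampling where the likelihood is an integral over a latent
# variable, estimated by importance sampling at every step (toy stand-in for EIS).
import numpy as np

rng = np.random.default_rng(0)
y = 1.3  # a single toy observation

def integrand(x, theta):
    # p(y | x) * p(x | theta): latent x ~ N(theta, 1), observation y | x ~ N(x, 0.5^2)
    return (np.exp(-0.5 * ((y - x) / 0.5) ** 2) / (0.5 * np.sqrt(2 * np.pi))
            * np.exp(-0.5 * (x - theta) ** 2) / np.sqrt(2 * np.pi))

def is_likelihood(theta, n=2000):
    # Importance-sampling estimate of the likelihood integral over the latent x.
    # A crude Gaussian importance density stands in for a fitted EIS density.
    mu, sd = (theta + y) / 2.0, 1.0
    x = rng.normal(mu, sd, size=n)
    q = np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    return np.mean(integrand(x, theta) / q)

def mh_sampler(n_iter=5000, step=0.5):
    theta, like = 0.0, is_likelihood(0.0)
    draws = []
    for _ in range(n_iter):
        prop = theta + step * rng.normal()
        like_prop = is_likelihood(prop)
        # Flat prior on theta, so the MH ratio reduces to a likelihood ratio.
        if rng.uniform() < like_prop / like:
            theta, like = prop, like_prop
        draws.append(theta)
    return np.array(draws)

draws = mh_sampler()
print("posterior mean of theta:", draws[2500:].mean())  # close to y under this toy model
```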
993.
Currently, considerable effort is being spent on developing standards and architectures to achieve more effective interoperability among medical information systems. Despite these efforts, no research so far has directly analysed, with statistical methods, biomedical data represented as eXtensible Markup Language (XML) documents. The paper therefore proposes an architecture that offers a twofold approach to the statistical analysis of XML data: via a web service and by extending the query languages used in XML databases. To show how the architecture can be used, a sample system is also reported. Finally, the paper discusses the advantages and drawbacks of the proposed approach in comparison with classic statistical packages.  相似文献
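As a rough illustration of the idea of computing statistics directly over XML-encoded biomedical data (not the paper's architecture, which exposes this via a web service and extended XML query languages), the following sketch uses only the Python standard library; the record structure and the glucose attribute are hypothetical.

```python
# Minimal sketch: summary statistics computed directly over an XML document.
import statistics
import xml.etree.ElementTree as ET

xml_doc = """
<records>
  <patient id="p1" glucose="5.4"/>
  <patient id="p2" glucose="6.1"/>
  <patient id="p3" glucose="4.9"/>
</records>
"""

root = ET.fromstring(xml_doc)
values = [float(p.get("glucose")) for p in root.findall("patient")]

print("n =", len(values))
print("mean =", statistics.mean(values))
print("stdev =", statistics.stdev(values))
```

In the paper's setting the same computation would be reachable through a web-service call or an extended XML query; this sketch only shows the in-memory analogue.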
994.
This paper aims to give a full presentation of a new word sense disambiguation method that was introduced in Hristea and Popescu (Fundam Inform 91(3–4):547–562, 2009) and has so far been tested in the case of adjectives (Hristea and Popescu in Fundam Inform 91(3–4):547–562, 2009) and verbs (Hristea in Int Rev Comput Softw 4(1):58–67, 2009). We extend the method to the case of nouns and draw conclusions about its performance across all these parts of speech. The method lies at the border between unsupervised and knowledge-based techniques. It performs unsupervised word sense disambiguation based on an underlying Naïve Bayes model, while using WordNet as the knowledge source for feature selection. The performance of the method is compared with that of previous approaches relying on completely different feature sets. Test results for all parts of speech involved show that feature selection using a knowledge source such as WordNet is more effective for disambiguation than local-type features (such as part-of-speech tags).  相似文献
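A minimal sketch of the general technique, Naive Bayes disambiguation restricted to WordNet-derived features, is shown below. The two senses of "bank", their cue-word sets (standing in for features harvested from WordNet synsets and glosses), and the tiny training contexts are all illustrative assumptions, not the authors' setup.

```python
# Minimal sketch: Naive Bayes WSD where only WordNet-derived cue words are used as features.
import math
from collections import Counter

# Cue words one might harvest from the WordNet synsets/glosses of each sense (illustrative).
features = {
    "bank_river": {"river", "water", "shore", "slope"},
    "bank_finance": {"money", "deposit", "loan", "account"},
}
vocabulary = set().union(*features.values())

# Tiny labelled contexts (bags of words around the ambiguous word), purely illustrative.
train = [
    ("bank_river", ["the", "river", "bank", "was", "muddy"]),
    ("bank_river", ["we", "walked", "along", "the", "water", "near", "the", "bank"]),
    ("bank_finance", ["she", "opened", "an", "account", "at", "the", "bank"]),
    ("bank_finance", ["the", "bank", "approved", "the", "loan"]),
]

def train_nb(examples):
    priors = Counter(sense for sense, _ in examples)
    counts = {sense: Counter() for sense in priors}
    for sense, context in examples:
        # Feature selection: keep only words from the WordNet-derived vocabulary.
        counts[sense].update(w for w in context if w in vocabulary)
    return priors, counts

def classify(context, priors, counts):
    total = sum(priors.values())
    best_sense, best_score = None, -math.inf
    for sense in priors:
        score = math.log(priors[sense] / total)
        denom = sum(counts[sense].values()) + len(vocabulary)  # Laplace smoothing
        for w in context:
            if w in vocabulary:
                score += math.log((counts[sense][w] + 1) / denom)
        if score > best_score:
            best_sense, best_score = sense, score
    return best_sense

priors, counts = train_nb(train)
print(classify(["fishing", "near", "the", "water", "by", "the", "bank"], priors, counts))  # -> bank_river
```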
995.
This study set out to investigate in which types of media individuals are more likely to tell self-serving and other-oriented lies, and whether this varies according to the target of the lie. One hundred and fifty participants rated on a Likert scale how likely they would be to tell a lie. Participants were more likely to tell self-serving lies to people not well known to them. They were more likely to tell self-serving lies via email, followed by phone, and finally face-to-face. Participants were more likely to tell other-oriented lies to individuals they felt close to, and this did not vary according to the type of media. Participants were more likely to tell harsh truths to people not well known to them via email.  相似文献
996.
The article addresses the problem of finding a small unsatisfiable core of an unsatisfiable CNF formula. The proposed algorithm, CoreTrimmer, iterates over each internal node d in the resolution graph that 'consumes' a large number of clauses M (i.e., a large number of original clauses are present in the unsat core with the sole purpose of proving d) and attempts to prove d without the M clauses. If this is possible, it transforms the resolution graph into a new graph that does not have the M clauses at its core. CoreTrimmer can be integrated into a fixpoint framework similar to Malik and Zhang's fixpoint algorithm run_till_fix; we call this option trim_till_fix. Experimental evaluation on a large number of industrial unsatisfiable CNF formulas shows that trim_till_fix doubles, on average, the number of reduced clauses in comparison to run_till_fix. It is also better when used as a component in a larger system that enforces short timeouts.  相似文献
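To convey the flavour of core trimming (though not CoreTrimmer's resolution-graph machinery), here is a sketch that repeatedly drops clauses from an unsatisfiable core and keeps the drop whenever unsatisfiability is preserved; the brute-force UNSAT check and the example formula are toy stand-ins for a real solver.

```python
# Minimal sketch: fixpoint-style trimming of an unsatisfiable clause set (toy version).
from itertools import product

def is_unsat(clauses, num_vars):
    # Brute-force UNSAT check over all assignments (toy-sized formulas only); a real
    # implementation would call a SAT solver and inspect the resolution proof instead.
    for assignment in product([False, True], repeat=num_vars):
        if all(any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause) for clause in clauses):
            return False  # found a satisfying assignment
    return True

def trim_core(core, num_vars):
    core = list(core)
    changed = True
    while changed:                       # a fixpoint loop, in the spirit of trim_till_fix
        changed = False
        for clause in list(core):
            if clause not in core:
                continue
            candidate = [c for c in core if c != clause]
            if is_unsat(candidate, num_vars):
                core = candidate         # the clause is not needed for unsatisfiability
                changed = True
    return core

# (x1) (-x1 v x2) (-x2) (x3 v -x3): the tautological last clause is not needed for UNSAT.
clauses = [[1], [-1, 2], [-2], [3, -3]]
print(trim_core(clauses, num_vars=3))    # -> [[1], [-1, 2], [-2]]
```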
997.
New hydrotalcite-like materials containing magnesium, chromium, and/or iron were synthesized by the coprecipitation method and then thermally transformed into mixed metal oxides. The obtained catalysts were characterized with respect to chemical composition (XRF) as well as structural (XRD, Mössbauer spectroscopy) and textural (BET) properties. The catalytic performance of the hydrotalcite-derived oxides was tested in N2O decomposition and in N2O reduction by ethylbenzene. The influence of the N2O/ethylbenzene molar ratio on process selectivity was studied, and the relationship between the catalytic performance and the structure of the catalysts is discussed.  相似文献
998.
Multimedia Tools and Applications - Advanced intelligent surveillance systems are able to automatically analyze surveillance video data without human intervention. These systems allow high...  相似文献
999.

In many Natural Language Processing problems the combination of machine learning and optimization techniques is essential. One of these problems is estimating the human effort needed to improve a text that has been translated using a machine translation method. Recent advances in this area have shown that Gaussian Processes can be effective in post-editing effort prediction. However, Gaussian Processes require a kernel function to be defined, the choice of which strongly influences the quality of the prediction. On the other hand, the extraction of features from the text can be very labor-intensive, although recent advances in sentence embedding have shown that this process can be automated. In this paper, we use a Genetic Programming algorithm to evolve kernels for Gaussian Processes to predict post-editing effort based on sentence embeddings. We show that the combination of evolutionary optimization and Gaussian Processes removes the need for a priori specification of the kernel choice, and, by using a multi-objective variant of the Genetic Programming approach, kernels suitable for predicting several metrics can be learned. We also investigate the effect that the choice of sentence embedding method has on the kernel learning process.  相似文献
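A minimal sketch of the underlying idea, searching over composite kernels for a Gaussian Process regressor on embedding vectors, is given below. Random search over two primitive kernels stands in for the paper's Genetic Programming algorithm, and the "sentence embeddings" and effort scores are synthetic.

```python
# Minimal sketch: score candidate GP kernels on toy embedding data and keep the best one.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(40, 8))                         # toy "sentence embeddings"
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=40)      # toy post-editing effort scores

def rbf(ls):
    return lambda A, B: np.exp(-0.5 * ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1) / ls ** 2)

def linear(c):
    return lambda A, B: A @ B.T + c

def fitness(kernel_fn):
    # Negative log marginal likelihood (up to an additive constant) of a zero-mean GP
    # with fixed noise variance; lower is better.
    K = kernel_fn(X, X) + 0.1 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ alpha + np.log(np.diag(L)).sum()

def random_kernel():
    # A tiny kernel "expression": sum or product of two primitives with random parameters.
    a, b = rbf(rng.uniform(0.5, 3.0)), linear(rng.uniform(0.0, 1.0))
    if rng.random() < 0.5:
        return lambda A, B: a(A, B) + b(A, B)
    return lambda A, B: a(A, B) * b(A, B)

best_score = np.inf
for _ in range(200):                                  # random search stands in for GP evolution
    best_score = min(best_score, fitness(random_kernel()))
print("best fitness:", round(float(best_score), 3))
```

A full Genetic Programming variant would represent kernels as expression trees, apply crossover and mutation, and (in the multi-objective case) keep a Pareto front over several effort metrics rather than a single best score.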
1000.
The results reported in this paper constitute a step toward rough set-based foundations of data mining and machine learning. The approach is based on calculi of approximation spaces. We summarize and extend the results we have obtained since 2003, when we began investigating the foundations of the approximation of partially defined concepts (see, e.g., [2], [3], [7], [37], [20], [21], [5], [42], [39], [38], [40]). We discuss some important issues for modeling granular computations aimed at inducing compound granules relevant for solving problems such as the approximation of complex concepts or the selection of relevant actions (plans) for reaching target goals. The problems discussed in this article are crucial for building computer systems that assist researchers in scientific discovery in many areas, such as biology. We present foundations for modeling granular computations inside a system that is based on granules called approximation spaces. Our approach builds on the rough set approach introduced by Pawlak [24], [25]. Approximation spaces are fundamental granules used in searching for relevant complex granules called data models, e.g., approximations of complex concepts, functions or relations. In particular, we discuss some issues related to generalizations of the approximation space introduced in [33], [34]. We present examples of rough set-based strategies for extending approximation spaces from samples of objects onto a whole universe of objects. This makes it possible to present foundations for inducing data models such as approximations of concepts or classifications, analogous to the approaches for inducing different types of classifiers known in machine learning and data mining. Searching for relevant approximation spaces and data models is formulated as a complex optimization problem. The proposed interactive granular computing systems should be equipped with efficient heuristics that support the search for (semi-)optimal granules.  相似文献
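For readers unfamiliar with the basic construct behind approximation spaces, the sketch below computes lower and upper approximations of a concept from an indiscernibility relation; the objects, attributes and concept are toy examples, and the code does not implement the paper's generalized approximation spaces.

```python
# Minimal sketch: rough set lower/upper approximations of a concept X.
from collections import defaultdict

# Objects described by attribute vectors; the concept X is a subset of objects (toy data).
objects = {
    "o1": ("red", "round"), "o2": ("red", "round"),
    "o3": ("blue", "round"), "o4": ("blue", "square"),
}
X = {"o1", "o3"}  # the (only partially definable) concept

# Indiscernibility classes: objects with identical attribute vectors are indistinguishable.
classes = defaultdict(set)
for obj, attrs in objects.items():
    classes[attrs].add(obj)

lower = {o for c in classes.values() if c <= X for o in c}   # classes fully contained in X
upper = {o for c in classes.values() if c & X for o in c}    # classes overlapping X

print("lower approximation:", lower)     # {'o3'}
print("upper approximation:", upper)     # {'o1', 'o2', 'o3'}
print("boundary region:", upper - lower)
```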