81.
To address the problem of analyzing the fault-tolerance performance of large-scale multi-state computing systems composed of heterogeneous computing nodes, a method for evaluating the fault-tolerance performance of such systems is proposed. The method describes the system with a self-defined two-level formal framework for fault-tolerance performance, builds a multi-valued decision diagram (MDD) model of the system's fault tolerance, and uses the constructed model to efficiently compute the probability that the system operates at a given performance level under component failures, reducing redundant computation. Experimental results show that the method outperforms traditional methods in both model size and construction time. The method is of practical value to system operators and program designers, helping them ensure that a system is suitable for its intended application.
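As a hedged illustration of the underlying computation only: the sketch below brute-forces the probability that a tiny multi-state system of heterogeneous nodes reaches a given performance level, using hypothetical per-node state distributions. The paper's actual contribution, the two-level formal description framework and the MDD construction that avoids exactly this exponential enumeration, is not reproduced here.

```python
# Illustrative sketch only: brute-force enumeration of component states to
# estimate P(system performance >= level). The paper builds a multi-valued
# decision diagram (MDD) to avoid this exponential enumeration; the component
# data below are hypothetical.
from itertools import product

# Each heterogeneous node: list of (performance_contribution, probability) states.
nodes = [
    [(0, 0.05), (5, 0.15), (10, 0.80)],   # node A: failed / degraded / nominal
    [(0, 0.10), (8, 0.90)],               # node B: failed / nominal
    [(0, 0.02), (4, 0.08), (12, 0.90)],   # node C
]

def prob_performance_at_least(nodes, level):
    """Sum the probabilities of all joint states whose total performance >= level."""
    total = 0.0
    for states in product(*nodes):
        perf = sum(p for p, _ in states)
        prob = 1.0
        for _, pr in states:
            prob *= pr
        if perf >= level:
            total += prob
    return total

if __name__ == "__main__":
    for level in (10, 20, 30):
        print(f"P(performance >= {level}) = {prob_performance_at_least(nodes, level):.4f}")
```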
82.
Data mining techniques have been successfully applied in many significant fields, including medical research. Despite the wealth of data available within health-care systems, there is a lack of practical analysis tools for discovering hidden relationships and trends in the data. The complexity of medical data, which is unfavorable for most models, poses a considerable challenge for prediction. The ability of a model to diagnose disease accurately and efficiently is extremely important; the model must therefore be selected to fit the data well, so that learning from previous data is most efficient and the diagnosis is highly accurate. This work is motivated by the limited number of regression analysis tools for multivariate counts in the literature. We propose two regression models for count data based on flexible distributions, namely the multinomial Beta-Liouville and the multinomial scaled Dirichlet, and evaluate the proposed models on the problem of disease diagnosis. Performance is assessed by prediction accuracy, which depends on the nature and complexity of the dataset. Our results show the efficiency of the two proposed regression models: the prediction performance of both is competitive with other regression models previously used for count data and with the best results in the literature.
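As a rough, hedged illustration of how a compound count-data likelihood is evaluated, the sketch below computes the log-likelihood of the closely related Dirichlet-multinomial distribution. The paper's multinomial Beta-Liouville and multinomial scaled Dirichlet regression models are not implemented here, and the counts and parameter values shown are hypothetical.

```python
# Illustrative only: log-likelihood of the Dirichlet-multinomial distribution,
# a standard compound model for multivariate counts. The paper's multinomial
# Beta-Liouville and multinomial scaled Dirichlet models follow the same
# compounding idea with more flexible mixing densities; they are NOT
# implemented here. Data and alpha values are hypothetical.
import numpy as np
from scipy.special import gammaln

def dirichlet_multinomial_loglik(counts, alpha):
    """Log-likelihood of one count vector under Dirichlet-multinomial(alpha)."""
    counts = np.asarray(counts, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    n = counts.sum()
    a0 = alpha.sum()
    # multinomial coefficient term
    coef = gammaln(n + 1.0) - gammaln(counts + 1.0).sum()
    # compound (Polya) term
    comp = (gammaln(a0) - gammaln(n + a0)
            + (gammaln(counts + alpha) - gammaln(alpha)).sum())
    return coef + comp

if __name__ == "__main__":
    symptom_counts = [3, 0, 5, 2]     # hypothetical per-patient symptom counts
    alpha = [1.0, 0.5, 2.0, 1.5]      # hypothetical concentration parameters
    print(dirichlet_multinomial_loglik(symptom_counts, alpha))
```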
83.
Topic modeling is a popular analytical tool for evaluating data. Numerous topic modeling methods have been developed that account for many kinds of relationships and restrictions within datasets; however, these methods are not frequently employed. Instead, many researchers gravitate to Latent Dirichlet Allocation (LDA), which, although flexible and adaptive, is not always suited to modeling more complex data relationships. We present topic modeling approaches that can deal with correlation between topics and with changes of topics over time, as well as handle short texts such as those encountered in social media or other sparse text data. We also briefly review the algorithms used to optimize and infer parameters in topic modeling, which is essential to producing meaningful results regardless of the method. We believe this review will encourage more diversity in topic modeling practice and help determine which topic modeling method best suits the user's needs.
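As a minimal point of reference for the baseline the review discusses, the sketch below fits Latent Dirichlet Allocation with scikit-learn on a toy corpus (the corpus is made up, and scikit-learn is assumed to be available). The correlated, dynamic, and short-text variants surveyed in the article are not shown.

```python
# Minimal sketch: fitting Latent Dirichlet Allocation with scikit-learn on a
# toy corpus. This illustrates the common baseline the review discusses, not
# the correlated / dynamic / short-text variants it surveys.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "patients blood pressure treatment clinical trial",
    "neural network training gradient descent optimization",
    "hospital diagnosis disease symptoms treatment",
    "deep learning model training data optimization",
]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

terms = vectorizer.get_feature_names_out()
for k, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")
```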
84.
Traditionally, in supervised machine learning, a significant part of the available data (usually 50%-80%) is used for training and the rest for validation. In many problems, however, the data are highly imbalanced across classes or do not cover the feasible data space well, which in turn creates problems in the validation and usage phases. In this paper, we propose a technique for synthesizing feasible and likely data to help balance the classes and to boost performance, both overall and in terms of the confusion matrix, especially for the minority classes. The idea, in a nutshell, is to synthesize data samples in the close vicinity of the actual data samples, specifically for the less represented (minority) classes. This also has implications for the so-called fairness of machine learning. The proposed method is generic and can be applied to different base algorithms, for example, support vector machines, k-nearest neighbour classifiers, deep neural networks, rule-based classifiers, decision trees, and so forth. The results demonstrate that (a) significantly more balanced (and fair) classification results can be achieved and (b) both the overall performance and the per-class performance measured by the confusion matrix can be boosted. In addition, this approach can be very valuable when the amount of actually available labelled data is small, which is itself one of the problems of contemporary machine learning.
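As a hedged sketch of the general idea only, the code below generates synthetic minority-class samples by interpolating towards a random near neighbour, in the spirit of SMOTE-style oversampling; it is not the specific synthesis method proposed in the paper, and the data are random placeholders.

```python
# Illustrative sketch of the general idea only: generate synthetic minority-class
# samples in the close vicinity of real ones by interpolating towards a random
# near neighbour (SMOTE-style). This is NOT the specific synthesis method the
# paper proposes; data are random for demonstration.
import numpy as np

def oversample_minority(X_min, n_new, k=5, rng=None):
    """Create n_new synthetic points near the minority samples X_min."""
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        x = X_min[i]
        # indices of the k nearest neighbours of x (excluding x itself)
        d = np.linalg.norm(X_min - x, axis=1)
        neighbours = np.argsort(d)[1:k + 1]
        j = rng.choice(neighbours)
        gap = rng.random()
        synthetic.append(x + gap * (X_min[j] - x))
    return np.array(synthetic)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X_minority = rng.normal(loc=2.0, scale=0.3, size=(20, 2))  # under-represented class
    X_synth = oversample_minority(X_minority, n_new=80, rng=1)
    print(X_synth.shape)  # (80, 2) -> minority class can now be balanced against the majority
```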
85.
Data and software are nowadays one and the same: for this very reason, the European Union (EU) and other governments have introduced frameworks for data protection, a key example being the General Data Protection Regulation (GDPR). However, GDPR compliance is not straightforward: its text is written not by software or information engineers but by lawyers and policy-makers. As a design aid to information engineers aiming for GDPR compliance, as well as an aid to software users' understanding of the regulation, this article offers a systematic synthesis and discussion of it, distilled with the mathematical analysis method known as Formal Concept Analysis (FCA). Following its principles, the GDPR is synthesised as a concept lattice, that is, a formal summary of the regulation, featuring 144372 records, and its uses are manifold. For example, the lattice captures so-called attribute implications, the implicit logical relations across the regulation, and their intensity. These results can be used as drivers during the (re-)design, development, and operation of systems and services, or during the refactoring of information systems towards greater GDPR consistency.
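To make the FCA machinery concrete, the sketch below applies the derivation operators and enumerates the formal concepts of a tiny, made-up context; the article's GDPR-derived context with 144372 records is of course not reproduced, and the object and attribute names used here are purely hypothetical.

```python
# Minimal from-scratch sketch of Formal Concept Analysis on a tiny, made-up
# formal context (objects x attributes). It shows the derivation operators and
# enumerates the formal concepts; the article applies the same machinery to a
# GDPR-derived context far too large to reproduce here.
from itertools import chain, combinations

objects = {"Art5", "Art6", "Art17"}                      # hypothetical GDPR articles
attributes = {"consent", "erasure", "lawfulness"}        # hypothetical attributes
incidence = {                                            # which article mentions what
    ("Art5", "lawfulness"), ("Art6", "consent"),
    ("Art6", "lawfulness"), ("Art17", "erasure"),
}

def intent(objs):
    """Attributes shared by all objects in objs (the derivation operator on object sets)."""
    return {a for a in attributes if all((o, a) in incidence for o in objs)}

def extent(attrs):
    """Objects having all attributes in attrs (the derivation operator on attribute sets)."""
    return {o for o in objects if all((o, a) in incidence for a in attrs)}

def concepts():
    """Enumerate all formal concepts (extent, intent) by brute force."""
    found = []
    subsets = chain.from_iterable(combinations(objects, r) for r in range(len(objects) + 1))
    for objs in subsets:
        A = extent(intent(set(objs)))      # closure of the object set
        B = intent(A)
        if (A, B) not in found:
            found.append((A, B))
    return found

if __name__ == "__main__":
    for A, B in concepts():
        print(sorted(A), "<->", sorted(B))
```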
86.
By leveraging secret-data coding with the remainder-storage-based exploiting modification direction (RSBEMD) method and recording pixel change operations through multi-segment left and right histogram shifting, a novel reversible data hiding (RDH) scheme is proposed in this paper. The secret data are first encoded as specific pixel change operations applied to pixels in groups. After that, multi-segment left and right histogram shifting based on threshold manipulation is used to record the pixel change operations. Furthermore, a multiple-embedding policy based on chess board prediction (CBP) and threshold manipulation is put forward, in which the threshold can be adjusted to achieve adaptive data hiding. Experimental results and analysis show that the scheme is reversible and achieves good capacity and imperceptibility compared with existing methods.
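As a hedged illustration of the histogram-shifting ingredient only, the sketch below implements classic single peak/zero-bin histogram-shifting embedding and its exact recovery. The paper's RSBEMD coding, multi-segment shifting, and chess-board-prediction embedding policy are not shown, and the image data are random placeholders.

```python
# Illustrative sketch of classic single-peak histogram-shifting embedding, the
# basic ingredient behind the multi-segment left/right shifting the paper
# extends (its RSBEMD coding and chess-board prediction are NOT shown). The
# "image" here is random data standing in for pixel values.
import numpy as np

def hs_embed(pixels, bits):
    """Embed bits into a flat integer pixel array via peak/zero histogram shifting."""
    hist = np.bincount(pixels, minlength=256)
    peak = int(hist.argmax())
    zero = int(np.where(hist == 0)[0][-1])          # an empty bin to the right of the peak
    assert zero > peak, "sketch assumes the zero bin lies to the right of the peak"
    out = pixels.copy()
    out[(out > peak) & (out < zero)] += 1           # open a gap at peak + 1
    k = 0
    for i in range(out.size):
        if k == len(bits):
            break
        if out[i] == peak:
            out[i] = peak + bits[k]                 # bit 0 -> peak, bit 1 -> peak + 1
            k += 1
    if k < len(bits):
        raise ValueError("capacity exceeded")
    return out, peak, zero

def hs_extract(marked, peak, zero, n_bits):
    """Recover the bits and restore the original pixels exactly (reversibility)."""
    bits, k = [], 0
    restored = marked.copy()
    for i in range(restored.size):
        if k < n_bits and restored[i] in (peak, peak + 1):
            bits.append(int(restored[i] - peak))
            restored[i] = peak
            k += 1
    restored[(restored > peak + 1) & (restored <= zero)] -= 1   # undo the shift
    return bits, restored

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.integers(0, 200, size=4096)                      # hypothetical pixel data
    payload = [int(b) for b in rng.integers(0, 2, size=16)]
    marked, peak, zero = hs_embed(img, payload)
    bits, restored = hs_extract(marked, peak, zero, len(payload))
    print(bits == payload, bool((restored == img).all()))
```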
87.
Perfluorocarbon gases are widely used in the semiconductor industry. However, perfluorocarbons have a negative effect on the global environment owing to their high global warming potential (GWP), so an alternative is essential. We therefore evaluated the possibility of replacing conventional fluorocarbon etching gases such as CHF3 with C6F12O, which has a low GWP and is liquid at room temperature. In this study, silicon oxynitride (SiON) films were plasma-etched using inductively coupled CF4 + C6F12O + O2 mixed plasmas. The etching characteristics of the films, such as etch rate, etch profile, and selectivity over Si and photoresist, were then investigated. A double Langmuir probe and optical emission spectroscopy were used for plasma diagnostics. In addition, a contact-angle goniometer and X-ray photoelectron spectroscopy were used to examine changes in the surface properties of the etched SiON films. The C6F12O mixed plasma exhibited a lower etch rate, higher SiON/Si selectivity, lower plasma damage, and more vertical etch profiles than the conventional CHF3 mixed plasma. In addition, C6F12O gas can be recovered in the liquid state, thereby reducing its contribution to global warming. These results confirm that the C6F12O precursor can adequately replace the conventional etching gas.
88.
Based on the principle of beat-frequency (difference-frequency) detection, this paper proposes a test method for evaluating the single-event-upset (SEU) tolerance of high-speed, high-precision analog-to-digital converters (ADCs) under high-frequency dynamic input. Taking an 8-bit, 3 GSPS high-speed ADC as the test object, a test system for high-speed ADC single-event-upset effects was designed and developed, and heavy-ion experiments were carried out on the target device. By analyzing the images and error data of the experimental results, the radiation-tolerance parameters of the tested device are evaluated, providing data support for the radiation-hardened design of high-speed, high-precision ADCs.
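As a loose, hedged illustration of the beat-frequency idea only (not the paper's test system), the sketch below simulates ADC codes for an input offset slightly from the sample rate, so the captured codes alias to a slow beat waveform, and flags isolated code jumps as candidate upsets. All frequencies, thresholds, and the injected fault are hypothetical.

```python
# Hedged sketch of the beat-frequency idea only: with the input sine offset
# slightly from the sampling clock, the captured codes trace a slow beat
# waveform, so a single-event upset appears as an isolated code jump. The
# frequencies, threshold, and injected fault below are hypothetical and do not
# reproduce the paper's 3 GSPS test system.
import numpy as np

FS = 3.0e9                  # sample rate (hypothetical 3 GSPS)
FIN = FS + 1.0e5            # input just above the sample rate -> codes alias to a 100 kHz beat
N_BITS = 8

def capture(n_samples):
    """Simulate ideal 8-bit ADC codes of the beat-frequency test signal."""
    t = np.arange(n_samples) / FS
    full_scale = 2 ** N_BITS - 1
    codes = np.round((np.sin(2 * np.pi * FIN * t) * 0.5 + 0.5) * full_scale)
    return codes.astype(int)

def flag_upsets(codes, threshold=100):
    """Flag isolated jumps: samples far from the average of their two neighbours."""
    local = (np.roll(codes, 1) + np.roll(codes, -1)) / 2.0
    dev = np.abs(codes - local)
    return np.where(dev[1:-1] > threshold)[0] + 1      # ignore the array edges

if __name__ == "__main__":
    codes = capture(20000)
    codes[1234] ^= 0x80                                # inject one MSB flip as a stand-in SEU
    print("candidate upset indices:", flag_upsets(codes))
```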
89.
Several modifications and enhancements of control charts have been introduced in quality-control charting techniques to improve performance against small and moderate process shifts. In this paper, a new hybrid control chart for monitoring the process location is proposed by combining two homogeneously weighted moving average (HWMA) control charts. The hybrid homogeneously weighted moving average (HHWMA) statistic is derived using two smoothing constants, λ1 and λ2. The average run length (ARL) and standard deviation of the run length (SDRL) of the HHWMA control chart are obtained and compared with those of several existing control charts for monitoring small and moderate shifts in the process location. The results of the study show that the HHWMA control chart outperforms the existing charts in many situations. The application of the HHWMA chart is demonstrated using simulated data.
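As a hedged sketch only, the code below computes the plain HWMA plotting statistic under the standard definition (the current subgroup mean weighted by λ, the average of all previous subgroup means weighted by 1 − λ, with the target mean used before any history exists). How the paper's HHWMA combines two such smoothers with λ1 and λ2, and its control limits, should be taken from the paper itself; the data are simulated.

```python
# Sketch of the plain HWMA plotting statistic, assuming the standard definition
# H_t = lam * xbar_t + (1 - lam) * mean(xbar_1..xbar_{t-1}), with the target
# mean mu0 used at t = 1. The paper's hybrid HHWMA chart (two smoothers with
# lambda1, lambda2) and its control limits are NOT reproduced here.
import numpy as np

def hwma(sample_means, lam, mu0):
    """Return the HWMA statistic for a sequence of subgroup means."""
    stats = []
    for t, xbar in enumerate(sample_means, start=1):
        past_mean = mu0 if t == 1 else np.mean(sample_means[:t - 1])
        stats.append(lam * xbar + (1.0 - lam) * past_mean)
    return np.array(stats)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mu0, sigma, n = 0.0, 1.0, 5
    samples = rng.normal(mu0, sigma, size=(30, n))
    samples[15:] += 0.5                       # small sustained shift after subgroup 15
    xbars = samples.mean(axis=1)
    print(np.round(hwma(xbars, lam=0.1, mu0=mu0), 3))
```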
90.
This work explores the use of data-visualization techniques to analyze TBT (Technical Barriers to Trade) notification data on children's products, presenting notification hotspots and revealing trend information through visual graphics, and puts forward countermeasures and suggestions. The aim is to provide new ideas for upgrading the children's products industry, for TBT early-warning data analysis, and for information dissemination, and to improve small and medium-sized enterprises' access to foreign markets.