51.
To analyse the fault-tolerance performance of large-scale multi-state computing systems built from heterogeneous computing nodes, this paper proposes an evaluation method for such systems. The method describes the system with a custom two-level formal framework for fault-tolerance performance, builds a multi-valued decision diagram (MDD) model of the system's fault-tolerance behaviour, and uses that model to efficiently compute the probability that the system keeps running at a given performance level when components fail, cutting redundant computation. Experimental results show that the method outperforms traditional approaches in both model size and construction time, making it valuable to system operators and program designers who must ensure a system fits its intended application.
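To make the computed quantity concrete, here is a minimal sketch of what the MDD model evaluates: the probability that a system of independent multi-state nodes runs at or above a performance level. The node definitions are hypothetical, and a real MDD would share isomorphic subgraphs instead of enumerating states exhaustively as this toy does.

```python
# Illustrative sketch only: the paper builds a compact multi-valued decision
# diagram (MDD); here we enumerate states directly for a tiny system.
# All node/state definitions below are hypothetical examples.
from itertools import product

# Each heterogeneous node has several states, each with a probability and a
# performance contribution (e.g., 0 = failed, degraded, full speed).
nodes = [
    {"probs": [0.05, 0.15, 0.80], "perf": [0, 2, 4]},   # node type A
    {"probs": [0.10, 0.90],       "perf": [0, 3]},      # node type B
    {"probs": [0.05, 0.15, 0.80], "perf": [0, 2, 4]},   # node type A
]

def prob_at_least(level):
    """P(total system performance >= level) under independent node states."""
    total = 0.0
    for states in product(*(range(len(n["probs"])) for n in nodes)):
        p, perf = 1.0, 0
        for node, s in zip(nodes, states):
            p *= node["probs"][s]
            perf += node["perf"][s]
        if perf >= level:
            total += p
    return total

print(prob_at_least(6))  # probability the system sustains performance level 6
```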
52.
Data mining techniques have been applied successfully in many important fields, including medical research. Yet despite the wealth of data available within health-care systems, practical analysis tools for discovering hidden relationships and trends in that data remain scarce. The complexity of medical data, which suits few models well, makes prediction a considerable challenge. A model's ability to diagnose disease accurately and efficiently is therefore critical: the model must fit the data well enough that learning from past data is efficient and the resulting diagnosis is highly accurate. This work is motivated by the limited number of regression analysis tools for multivariate counts in the literature. We propose two regression models for count data based on flexible distributions, namely the multinomial Beta-Liouville and the multinomial scaled Dirichlet, and evaluate them on disease-diagnosis problems. Performance is measured by prediction accuracy, which depends on the nature and complexity of the dataset. Our results show that both proposed regression models are competitive with previously used regression models for count data and with the best results in the literature.
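As a rough illustration of regression in this family, the sketch below fits the closely related Dirichlet-multinomial regression (not the paper's Beta-Liouville or scaled Dirichlet models) by maximum likelihood on synthetic data; all variable names and toy dimensions are ours.

```python
# Hedged sketch: fit a Dirichlet-multinomial regression with a log link from
# covariates to concentration parameters; a simpler stand-in for the paper's
# multinomial Beta-Liouville / scaled Dirichlet models.
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                      # covariates (toy data)
true_W = rng.normal(size=(3, 4))
alpha = np.exp(X @ true_W)                         # log link to concentrations
Y = np.stack([rng.multinomial(50, rng.dirichlet(a)) for a in alpha])

def neg_log_lik(w_flat):
    W = w_flat.reshape(3, 4)
    a = np.exp(X @ W)                              # (200, 4) concentrations
    A, n = a.sum(axis=1), Y.sum(axis=1)
    # Dirichlet-multinomial log-likelihood, dropping the multinomial
    # coefficient, which is constant in W.
    ll = (gammaln(A) - gammaln(n + A)
          + (gammaln(Y + a) - gammaln(a)).sum(axis=1))
    return -ll.sum()

res = minimize(neg_log_lik, np.zeros(12), method="L-BFGS-B")
print("converged:", res.success, "NLL:", res.fun)
```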
53.
Topic modeling is a popular analytical tool for evaluating data. Numerous topic modeling methods have been developed that handle many kinds of relationships and restrictions within datasets; however, these methods are rarely employed. Instead, many researchers gravitate to Latent Dirichlet Allocation, which, although flexible and adaptive, is not always suited to modeling more complex data relationships. We present topic modeling approaches that capture correlation between topics and the evolution of topics over time, and that can handle short texts such as those found in social media or other sparse text data. We also briefly review the algorithms used to optimize and infer parameters in topic modeling, which is essential to producing meaningful results regardless of method. We believe this review will encourage more diversity in topic modeling practice and help readers determine which topic modeling method best suits their needs.
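For readers who want the common baseline made concrete, a minimal Latent Dirichlet Allocation run with scikit-learn (assumed available) might look like this; the corpus is a toy example of ours.

```python
# Minimal LDA usage sketch: bag-of-words counts -> topic mixtures.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "power grid outage restoration repair",
    "topic model latent dirichlet allocation text",
    "grid load generation dispatch blackout",
    "short text social media sparse topics",
]
X = CountVectorizer().fit_transform(docs)          # document-term counts
lda = LatentDirichletAllocation(n_components=2, random_state=0)
print(lda.fit_transform(X).round(2))               # per-document topic mixtures
```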
54.
Traditionally, in supervised machine learning, a significant part of the available data (usually 50%-80%) is used for training and the rest for validation. In many problems, however, the data are highly imbalanced across classes or do not cover the feasible data space well, which in turn creates problems in the validation and usage phases. In this paper, we propose a technique for synthesizing feasible and likely data samples that balances the classes and boosts performance, both overall and as measured by the confusion matrix. The idea, in a nutshell, is to synthesize samples in the close vicinity of actual samples, specifically for the under-represented (minority) classes; this also has implications for the so-called fairness of machine learning. The method is generic and can be applied to different base algorithms, for example support vector machines, k-nearest-neighbour classifiers, deep neural networks, rule-based classifiers, decision trees, and so forth. The results demonstrate that (a) significantly more balanced (and fairer) classification can be achieved and (b) both the overall performance and the per-class performance measured by the confusion matrix can be boosted. In addition, the approach is valuable when the amount of labelled data actually available is small, which is itself one of the open problems of contemporary machine learning.
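A minimal sketch of the core idea: interpolate new minority-class samples towards random same-class neighbours (SMOTE-style). This is our illustration, not the paper's exact procedure, and all parameter names are assumptions.

```python
# Synthesize samples in the close vicinity of real minority-class samples by
# interpolating towards a random one of the k nearest same-class neighbours.
import numpy as np

def synthesize_minority(X_min, n_new, k=5, seed=0):
    rng = np.random.default_rng(seed)
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)  # distances from sample i
        neighbours = np.argsort(d)[1:k + 1]           # skip the sample itself
        j = rng.choice(neighbours)
        lam = rng.random()                            # position on the segment
        out.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(out)

X_minority = np.random.default_rng(1).normal(size=(20, 2))
print(synthesize_minority(X_minority, n_new=5).shape)  # (5, 2)
```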
55.
Data and software are nowadays one and the same: for this very reason, the European Union (EU) and other governments have introduced frameworks for data protection, a key example being the General Data Protection Regulation (GDPR). However, GDPR compliance is not straightforward: its text was written not by software or information engineers but by lawyers and policy-makers. As a design aid for information engineers aiming at GDPR compliance, and as an aid to software users' understanding of the regulation, this article offers a systematic synthesis and discussion of it, distilled using the mathematical analysis method known as Formal Concept Analysis (FCA). By its principles, the GDPR is synthesised as a concept lattice, a formal summary of the regulation featuring 144372 records, whose uses are manifold. For example, the lattice captures so-called attribute implications, the implicit logical relations across the regulation, together with their intensity. These results can drive the (re-)design, development, and operation of systems and services, or the refactoring of information systems towards greater GDPR consistency.
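To illustrate FCA's closure operation at toy scale, the sketch below enumerates the formal concepts of a tiny hypothetical object-attribute context; the GDPR lattice in the paper is of course far larger, and the article labels and attributes here are invented.

```python
# Toy Formal Concept Analysis: enumerate all formal concepts (extent, intent)
# of a small binary context. Objects and attributes are hypothetical.
from itertools import combinations

objects = {
    "Art5": {"lawfulness", "transparency"},
    "Art6": {"lawfulness", "consent"},
    "Art7": {"consent", "transparency"},
}
attributes = set().union(*objects.values())

def extent(attrs):   # objects possessing every attribute in attrs
    return {o for o, a in objects.items() if attrs <= a}

def intent(objs):    # attributes shared by every object in objs
    return set.intersection(*(objects[o] for o in objs)) if objs else set(attributes)

concepts = set()
for r in range(len(attributes) + 1):
    for attrs in combinations(sorted(attributes), r):
        e = extent(set(attrs))
        concepts.add((frozenset(e), frozenset(intent(e))))  # closure

for e, i in sorted(concepts, key=lambda c: len(c[0])):
    print(sorted(e), "<->", sorted(i))
```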
56.
By combining secret-data coding based on remainder-storage exploiting modification direction (RSBEMD) with the recording of pixel-change operations through multi-segment left-and-right histogram shifting, this paper proposes a novel reversible data hiding (RDH) scheme. The secret data are first encoded as specific pixel-change operations applied to pixels in groups. Multi-segment left-and-right histogram shifting, driven by threshold manipulation, is then used to record those pixel-change operations. Furthermore, a multiple-embedding policy based on chessboard prediction (CBP) and threshold manipulation is put forward, where the threshold can be adjusted to achieve adaptive data hiding. Experimental results and analysis show that the scheme is reversible and achieves good capacity and imperceptibility compared with existing methods.
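As background, here is a sketch of classic single-peak histogram-shifting embedding, a much-simplified relative of the paper's RSBEMD plus multi-segment scheme. The payload length must be known at extraction, and the toy image uses a small value range so the peak is tall and overflow handling can be ignored.

```python
# Simplified histogram-shifting reversible data hiding: one bit per
# peak-valued pixel; shifting is undone exactly on extraction.
import numpy as np

def embed(img, bits, peak):
    out = img.astype(np.int64)
    out[out > peak] += 1                 # shift histogram right of the peak
    flat = out.ravel()                   # view: writes go through to out
    it = iter(bits)
    for idx in np.flatnonzero(flat == peak):
        b = next(it, None)
        if b is None:
            break
        flat[idx] += b                   # peak encodes 0, peak+1 encodes 1
    return out

def extract(stego, peak, n_bits):
    flat = stego.ravel()
    carriers = flat[(flat == peak) | (flat == peak + 1)]
    bits = [int(v == peak + 1) for v in carriers[:n_bits]]
    restored = stego.copy()
    restored[restored > peak] -= 1       # undo every shift: fully reversible
    return bits, restored

rng = np.random.default_rng(2)
img = rng.integers(0, 8, size=(8, 8))    # tiny value range keeps the peak tall
peak = int(np.bincount(img.ravel()).argmax())
stego = embed(img, [1, 0, 1, 1], peak)
bits, restored = extract(stego, peak, n_bits=4)
print(bits, np.array_equal(restored, img))   # [1, 0, 1, 1] True
```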
57.
Based on the principle of beat-frequency (difference-frequency) detection, this paper proposes a method for evaluating the single-event-upset (SEU) tolerance of high-speed, high-precision analog-to-digital converters (ADCs) under high-frequency dynamic input. Taking an 8-bit 3 GSPS high-speed ADC as the device under test, an SEU test system for high-speed ADCs was designed and built, and heavy-ion experiments were performed on the target device. Analysis of the waveform images and error data from the experiments yields the radiation-tolerance parameters of the tested device, providing data to support the radiation-hardened design of high-speed, high-precision ADCs.
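A hedged sketch of the beat-frequency idea: when the input frequency is slightly offset from a submultiple of the sampling rate, the captured codes sweep the full range slowly, and isolated large deviations from the expected sinusoid flag candidate upsets. The numbers and threshold below are our assumptions, not the paper's settings, and a real test would reconstruct the reference by sine fitting rather than reuse the ideal codes.

```python
# Toy SEU detection on 8-bit ADC codes: compare captured codes to the
# expected sinusoid and flag isolated large residuals as candidate upsets.
import numpy as np

fs, f_in, n = 3.0e9, 3.0e9 / 8 + 1.0e5, 4096      # toy numbers, not the paper's
t = np.arange(n) / fs
ideal = np.round(127.5 + 127.5 * np.sin(2 * np.pi * f_in * t)).astype(int)

codes = ideal.copy()
codes[1000] ^= 0b01000000                         # inject one bit flip (bit 6)

residual = np.abs(codes - ideal)
upsets = np.flatnonzero(residual > 8)             # threshold is an assumption
print("suspected SEUs at samples:", upsets)       # -> [1000]
```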
58.
This paper explores the use of data-visualization techniques to analyse TBT (Technical Barriers to Trade) notification data on children's products, presenting notification hotspots and revealing trends through visual graphics, and puts forward countermeasures and suggestions. The goal is to offer new ideas for upgrading the children's products industry, for TBT early-warning data analysis, and for information dissemination, thereby improving the access of small and medium-sized enterprises to foreign markets.
59.
60.
This paper studies the restoration of a transmission system after a significant disruption such as a natural disaster. It considers the co-optimization of repairs, load pickups, and generation dispatch to produce a sequencing of the repairs that minimizes the size of the blackout over time. The core of this process is a Restoration Ordering Problem (ROP), a non-convex mixed-integer nonlinear program that is outside the capabilities of existing solver technologies. To address this computational barrier, the paper examines two approximations of the power flow equations: the DC model and the recently proposed LPAC model. Systematic, large-scale testing indicates that the DC model is not sufficiently accurate for solving the ROP. In contrast, the LPAC power flow model, which captures line losses, reactive power, and voltage magnitudes, is accurate enough to obtain restoration plans that can be converted into AC-feasible power flows. An experimental study also suggests that the LPAC model offers a robust and appealing tradeoff between accuracy and computational performance for solving the ROP.
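For reference, the DC approximation the paper evaluates reduces power flow to a linear system B θ = P in the bus angles; a three-bus toy solve might look like the sketch below, with susceptances and injections invented by us. The LPAC model, which additionally captures losses, reactive power, and voltage magnitudes, is not shown.

```python
# DC power flow sketch: lossless linear model, solve B * theta = P for bus
# voltage angles, then recover line flows. Three-bus toy network.
import numpy as np

b = {(0, 1): 10.0, (1, 2): 8.0, (0, 2): 5.0}   # line susceptances (per unit)
n = 3
B = np.zeros((n, n))
for (i, j), y in b.items():
    B[i, i] += y; B[j, j] += y                 # build the susceptance matrix
    B[i, j] -= y; B[j, i] -= y

P = np.array([1.5, -0.5, -1.0])                # net injections, must sum to 0
theta = np.zeros(n)
theta[1:] = np.linalg.solve(B[1:, 1:], P[1:])  # bus 0 is the slack (angle 0)

flows = {k: y * (theta[k[0]] - theta[k[1]]) for k, y in b.items()}
print(flows)                                   # per-line active power flows
```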