51.
Though modeling and verifying Multi-Agent Systems (MASs) have long been under study, challenges remain when many different aspects need to be considered simultaneously. Various frameworks have been developed for modeling and verifying MASs with respect to knowledge and social commitments independently; considering them within the same framework, however, still needs further investigation, particularly from the verification perspective. In this article, we present a new technique for model checking the logic of knowledge and commitments (CTLKC+). The proposed technique is fully automatic and reduction-based: we transform the problem of model checking CTLKC+ into the problem of model checking an existing logic of action called ARCTL. Concretely, we construct a set of transformation rules that formally reduce the CTLKC+ model into an ARCTL model and CTLKC+ formulae into ARCTL formulae, so as to benefit from the extended version of the NuSMV symbolic model checker for ARCTL. Compared to a recent approach that reduces the problem of model checking CTLKC+ to another logic of action called GCTL1, our technique has better scalability and efficiency. We also analyze the complexity of the proposed model checking technique. This analysis reveals that the complexity of our reduction-based procedure is PSPACE-complete for local concurrent programs with respect to the size of these programs and the length of the formula being checked. In terms of time, we prove that the complexity of the proposed approach is P-complete with regard to the size of the model and the length of the formula, which makes it efficient. Finally, we implement our model checking approach on top of extended NuSMV and report results for the verification of the NetBill protocol, taken from the business domain, against some desirable properties. The obtained results show the effectiveness of our model checking approach when the system scales up.
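For readers unfamiliar with reduction-based model checking, the sketch below shows the general shape of such a transformation: a recursive, rule-driven rewrite of formulae from the source logic into action-restricted path formulae. The rules given for the knowledge (`Knows`) and commitment (`Commit`) modalities, and the action labels `eps_<agent>` and `gamma_<i>_<j>`, are illustrative placeholders, not the paper's actual transformation rules.

```python
# A rule-driven syntactic reduction in miniature: CTLKC+-style formulae
# (nested tuples) are rewritten into ARCTL-style action-restricted formulae.
# The rules for "Knows" and "Commit" and the action labels "eps_*" and
# "gamma_*_*" are illustrative placeholders, not the paper's actual rules.

def to_arctl(f):
    """Recursively rewrite a formula; atomic propositions pass through."""
    if isinstance(f, str):
        return f
    op, *args = f
    if op == "not":
        return ("not", to_arctl(args[0]))
    if op in ("and", "or", "implies"):
        return (op, to_arctl(args[0]), to_arctl(args[1]))
    if op == "EX":                   # CTL operators keep their shape but are
        return ("E", "run", "X", to_arctl(args[0]))   # restricted to 'run' actions
    if op == "EG":
        return ("E", "run", "G", to_arctl(args[0]))
    if op == "EU":
        return ("E", "run", "U", to_arctl(args[0]), to_arctl(args[1]))
    if op == "Knows":                # Knows(i, phi): placeholder rule --
        agent, phi = args            # "phi holds in every eps_i successor"
        return ("A", f"eps_{agent}", "X", to_arctl(phi))
    if op == "Commit":               # Commit(i, j, phi): placeholder rule
        i, j, phi = args
        return ("A", f"gamma_{i}_{j}", "X", to_arctl(phi))
    raise ValueError(f"unknown operator: {op}")

# AG(Knows(merchant, paid) -> Commit(merchant, customer, deliver)),
# written as not E[ true U not(...) ].
spec = ("not", ("EU", "true",
        ("not", ("implies", ("Knows", "merchant", "paid"),
                            ("Commit", "merchant", "customer", "deliver")))))
print(to_arctl(spec))
```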
52.
In this paper, an adaptive control approach is designed to compensate for actuator faults in chaotic systems while maintaining acceptable system stability. We propose a state-feedback model reference adaptive control scheme for unknown chaotic multi-input systems; only the dimensions of the chaotic systems are required to be known. Based on Lyapunov stability theory, new adaptive control laws are synthesized to accommodate actuator failures and system nonlinearities. An illustrative example is studied, and the simulation results show the effectiveness of the design method. Copyright © 2014 John Wiley & Sons, Ltd.
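As background, the block below writes out a textbook state-feedback model reference adaptive control structure with a Lyapunov-based adaptation law, which is the kind of scheme the abstract describes; the symbols (fault matrix Λ, gains Γ_x, Γ_r, matrices P, Q) are generic, and the paper's specific control and adaptation laws may differ.

```latex
% Textbook state-feedback MRAC structure (illustrative, not the paper's exact laws).
% Plant with unknown actuator effectiveness \Lambda (diagonal, positive):
%   \dot{x} = A x + B \Lambda u, \quad reference model: \dot{x}_m = A_m x_m + B_m r.
\begin{align}
  u &= \Theta_x^{\top}(t)\,x + \Theta_r^{\top}(t)\,r, & e &= x - x_m,\\
  \dot{\Theta}_x &= -\Gamma_x\, x\, e^{\top} P B, & \dot{\Theta}_r &= -\Gamma_r\, r\, e^{\top} P B,\\
  A_m^{\top} P + P A_m &= -Q, & P &= P^{\top}\succ 0,\quad Q = Q^{\top}\succ 0.
\end{align}
% With the Lyapunov candidate
%   V = e^{\top} P e
%     + \operatorname{tr}\!\bigl(\Lambda\,\tilde{\Theta}_x^{\top}\Gamma_x^{-1}\tilde{\Theta}_x\bigr)
%     + \operatorname{tr}\!\bigl(\Lambda\,\tilde{\Theta}_r^{\top}\Gamma_r^{-1}\tilde{\Theta}_r\bigr),
% one obtains \dot{V} \le -e^{\top} Q e, hence a bounded tracking error and
% bounded parameter estimates despite the unknown effectiveness \Lambda.
```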
53.
Model order reduction is a common practice for reducing large-order systems so that their simulation and control become easier. Nonlinearity-aware trajectory piecewise linear approximation is a variation of the trajectory piecewise linearization technique used to reduce nonlinear systems. With this scheme, the reduced approximation of the system is generated as a weighted sum of linearized and reduced sub-models obtained at certain linearization points along the system trajectory; the scheme uses dynamically inspired weight assignment, which makes the approximation nonlinearity-aware. Like weight assignment, the selection of linearization points is also important for generating faithful approximations. This article uses a linearization point selection scheme based on a global maximum error controller, according to which a state is chosen as a linearization point when the error between the current reduced model and the full-order nonlinear system reaches a maximum value. The article thus presents a combination that not only selects linearization points using an error controller but also assigns dynamically inspired weights. The proposed scheme generates approximations with higher accuracy, as demonstrated by applying the method to benchmark nonlinear circuits, including an RC ladder network and an inverter chain circuit, and comparing the results with conventional schemes.
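The sketch below illustrates, on a toy scalar system, the two ingredients highlighted in the abstract: choosing linearization points through an error controller that monitors the deviation between the current piecewise-linear model and the full nonlinear system, and evaluating the approximation as a weighted sum of the linearized sub-models. The projection onto a reduced basis is omitted, and the test system, error threshold, and distance-based weight function are assumptions, not the article's choices.

```python
# Trajectory piecewise linearization with (i) linearization points chosen by
# an error controller and (ii) a weighted sum of linear sub-models. The
# reduction (projection) step is omitted, and the toy system x' = -x - x^3 + u,
# the threshold, and the weight function are illustrative assumptions.
import numpy as np

def f(x, u):            # full nonlinear dynamics (toy example)
    return -x - x**3 + u

def jac(x):             # analytic Jacobian of f with respect to x
    return -1.0 - 3.0 * x**2

def simulate(u_of_t, dt=1e-3, T=5.0, tol=5e-3):
    n = int(T / dt)
    x_full, x_pwl = 0.0, 0.0
    lin_pts = [(0.0, jac(0.0), f(0.0, 0.0))]      # (x_i, A_i, f_i) at u = 0
    pwl_traj = []
    for k in range(n):
        u = u_of_t(k * dt)
        x_full += dt * f(x_full, u)               # training (full-order) trajectory
        # piecewise-linear model: distance-based weights over the sub-models
        d = np.array([abs(x_pwl - xi) for xi, _, _ in lin_pts])
        w = np.exp(-d / (d.min() + 1e-12))
        w /= w.sum()
        dx = sum(wi * (fi + Ai * (x_pwl - xi) + u)
                 for wi, (xi, Ai, fi) in zip(w, lin_pts))
        x_pwl += dt * dx
        # error controller: add a linearization point when the deviation
        # between the current PWL model and the full model exceeds tol
        if abs(x_pwl - x_full) > tol:
            lin_pts.append((x_full, jac(x_full), f(x_full, 0.0)))
            x_pwl = x_full                        # re-synchronise after the update
        pwl_traj.append(x_pwl)
    return np.array(pwl_traj), lin_pts

traj, pts = simulate(lambda t: 0.5 * np.sin(2 * np.pi * t))
print(f"{len(pts)} linearization points selected")
```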
54.
The penalized calibration technique in survey sampling combines usual calibration and soft calibration by introducing a penalty term. Certain relevant estimators in survey sampling can be viewed as penalized calibration estimators obtained as particular cases of an optimization problem with a common basic structure. In this framework, a case-deletion diagnostic is proposed for a class of penalized calibration estimators that includes both design-based and model-based estimators. The diagnostic compares finite population parameter estimates and can be calculated from quantities related to the full data set. The resulting diagnostic is a function of the residual and the leverage, as are other diagnostics in regression models, and of the calibration weight, a feature specific to survey sampling. Moreover, a particular case, which includes the basic unit-level model for small area estimation, is considered. Both a real and an artificial example are included to illustrate the proposed diagnostic. The results clearly show that the proposed diagnostic depends on the calibration and soft-calibration variables, on the penalization term, and on the parameter to be estimated.
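For orientation, one common way to write a penalized (soft) calibration problem is shown below; the notation is generic, and the authors' exact criterion and diagnostic may differ in detail.

```latex
% A common penalized (soft) calibration criterion: design weights d_k,
% calibration variables x_k with benchmark totals t_x, tuning constants q_k,
% and a diagonal cost matrix C that penalises deviations from the soft
% benchmarks (letting C_{jj} grow recovers exact calibration on variable j).
\begin{equation}
  \min_{w}\ \sum_{k \in s} \frac{(w_k - d_k)^2}{q_k\, d_k}
  \;+\; \Bigl(\textstyle\sum_{k \in s} w_k x_k - t_x\Bigr)^{\!\top} C\,
        \Bigl(\textstyle\sum_{k \in s} w_k x_k - t_x\Bigr).
\end{equation}
% A case-deletion diagnostic for the resulting estimator
% \hat{t}_y = \sum_{k \in s} w_k y_k compares \hat{t}_y with the estimate
% obtained after dropping unit k; as in regression, it can be expressed
% through the unit's residual and leverage and, specific to surveys,
% its calibration weight w_k.
```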
55.
An alternative equivalent electrical circuit for proton exchange membrane fuel cells (PEMFCs) is modelled in this study. Both the I–V characteristics and the H2 consumption corresponding to the generated power under load and no-load conditions are investigated. For this purpose, the H2 consumption and I–V characteristics of three differently sized PEMFCs are tested. The model results agree very well with the measured values (relative errors of 0.7%, 6.4%, and 2.5% for FC-A, FC-B, and FC-C, respectively). In the proposed model, at the no-load condition current passes only through the parallel resistance and not through the series resistance; thus, an FC with a higher parallel resistance should be preferred. Another key outcome of this study is that, based on the proposed model, the performance of FCs can be compared using the parameters defined in this work. The proposals made in this study can easily be used for the performance analysis of FCs in both steady-state and transient analyses.
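A minimal numerical sketch of one plausible reading of such a circuit is given below: an ideal source E with a parallel (loss) resistance R_p directly across it and a series resistance R_s toward the terminals, with H2 consumption tied to the total internal current through Faraday's law. Both the topology and the parameter values are assumptions for illustration; the circuit and parameters identified in the study may differ.

```python
# Toy reading of a PEMFC equivalent electrical circuit. The topology assumed
# here (ideal source E, parallel loss resistance R_p across it, series
# resistance R_s to the terminals) and all parameter values are illustrative
# assumptions, not the circuit or the values identified in the study.
F = 96485.0  # Faraday constant, C/mol

def fc_operating_point(E, R_s, R_p, I_load, n_cells=1):
    """Terminal voltage and H2 consumption for a given load current."""
    V_term = E - I_load * R_s        # drop across the series branch
    I_p = E / R_p                    # internal current through the parallel branch;
                                     # it also flows at no-load, which is why a
                                     # larger R_p is preferable
    I_total = I_load + I_p           # current the electrochemical source supplies
    h2_mol_per_s = n_cells * I_total / (2.0 * F)   # Faraday's law: H2 -> 2 e-
    return V_term, h2_mol_per_s

# No-load vs. loaded behaviour for two hypothetical cells that differ in R_p
for name, R_p in (("FC-low-Rp", 5.0), ("FC-high-Rp", 50.0)):
    v0, h0 = fc_operating_point(E=0.95, R_s=0.02, R_p=R_p, I_load=0.0)
    v1, h1 = fc_operating_point(E=0.95, R_s=0.02, R_p=R_p, I_load=10.0)
    print(f"{name}: no-load H2 = {h0:.2e} mol/s, "
          f"10 A H2 = {h1:.2e} mol/s, V(10 A) = {v1:.3f} V")
```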
56.
The identification of Hammerstein–Wiener (H-W) systems from nonuniform input–output data remains a challenging problem. This article studies the identification of a periodically nonuniformly sampled-data H-W system, in which product terms of the parameters are unavoidable. To solve this problem, key-term separation is applied and two algorithms are proposed: the key-term-based forgetting factor stochastic gradient (KT-FFSG) algorithm, based on gradient search, and the key-term-based hierarchical forgetting factor stochastic gradient (KT-HFFSG) algorithm. Compared with the KT-FFSG algorithm, the KT-HFFSG algorithm gives more accurate estimates. The simulation results indicate that the proposed algorithms are effective.
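For context, the forgetting factor stochastic gradient update itself is standard; the sketch below applies it to a generic model that is linear in its parameters. In the article's KT-FFSG and KT-HFFSG algorithms the information vector is assembled from the key-term-separated H-W model using estimated internal signals, a construction not reproduced here; the simulated data, dimensions, and forgetting factor are assumptions.

```python
# Generic forgetting-factor stochastic gradient (FFSG) update for a model
# that is linear in its parameters, y(t) = phi(t)^T theta + noise. The data
# generator, the dimension, and the forgetting factor are assumptions.
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([0.8, -0.4, 1.5])
N, lam = 2000, 0.98          # number of samples and forgetting factor

theta = np.zeros(3)          # parameter estimate
r = 1.0                      # normalising scalar with forgetting

for t in range(N):
    phi = rng.normal(size=3)                        # information vector
    y = phi @ theta_true + 0.05 * rng.normal()      # noisy measurement
    r = lam * r + phi @ phi                         # r(t) = lam*r(t-1) + ||phi||^2
    theta = theta + (phi / r) * (y - phi @ theta)   # gradient step with step 1/r

print("estimate:", np.round(theta, 3), " true:", theta_true)
```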
57.
58.
The architectural choices underlying Linked Data have led to a compendium of data sources that contain both duplicated and fragmented information on a large number of domains. One way to enable non-expert users to access this data compendium is to provide keyword search frameworks that can capitalize on the inherent characteristics of Linked Data. Developing such systems is challenging for three main reasons. First, resources across different datasets, or even within the same dataset, can be homonyms. Second, different datasets employ heterogeneous schemas, and each one may contain only part of the answer to a given user query. Finally, constructing a federated formal query from keywords across different datasets requires exploiting links between the datasets at both the schema and instance levels. We present Sina, a scalable keyword search system that answers user queries by transforming user-supplied keywords or natural-language queries into conjunctive SPARQL queries over a set of interlinked data sources. Sina uses a hidden Markov model to determine the most suitable resources for a user-supplied query from different datasets. Moreover, our framework is able to construct federated queries by using the disambiguated resources and leveraging the link structure underlying the datasets to be queried. We evaluate Sina over three different datasets. We can answer 25 queries from QALD-1 correctly. Moreover, we perform as well as the best question answering system from the QALD-3 competition by answering 32 questions correctly, while also being able to answer queries over distributed sources. We study the runtime of Sina in its single-core and parallel implementations and draw preliminary conclusions on the scalability of keyword search on Linked Data.
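To make the HMM-based disambiguation step concrete, the sketch below runs a generic Viterbi decoder over toy candidate resources for a two-keyword query. The candidate URIs and the emission, transition, and start probabilities are placeholders; in Sina these quantities are derived from the datasets themselves (for example, from string similarity and the link structure of the interlinked sources) rather than hard-coded as here.

```python
# Generic Viterbi decoding of the most likely resource sequence for a list of
# keywords, in the spirit of HMM-based resource disambiguation. Candidates
# and probabilities are toy placeholders, not Sina's actual model.
from math import log

def viterbi(keywords, candidates, emit, trans, start):
    """candidates[kw]: list of resources for keyword kw; scores via prob. functions."""
    best = {r: (log(start(r)) + log(emit(r, keywords[0])), [r])
            for r in candidates[keywords[0]]}          # best path ending in r
    for kw in keywords[1:]:
        new = {}
        for r in candidates[kw]:
            score, path = max(
                ((s + log(trans(p, r)) + log(emit(r, kw)), path)
                 for p, (s, path) in best.items()),
                key=lambda item: item[0])
            new[r] = (score, path + [r])
        best = new
    return max(best.values(), key=lambda item: item[0])[1]

# Toy example: two keywords with homonymous candidates across datasets.
candidates = {
    "berlin": ["dbpedia:Berlin", "geonames:Berlin_NH"],
    "population": ["dbpedia-owl:populationTotal", "dbpedia:Population"],
}
emit = lambda r, kw: 0.8 if kw.lower() in r.lower() else 0.2
trans = lambda p, r: 0.7 if p.split(":")[0] == r.split(":")[0] else 0.3  # favour same/linked dataset
start = lambda r: 0.5
print(viterbi(["berlin", "population"], candidates, emit, trans, start))
```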
59.
60.
Process mining techniques relate observed behavior (i.e., event logs) to modeled behavior (e.g., a BPMN model or a Petri net). Process models can be discovered from event logs, and conformance checking techniques can be used to detect and diagnose differences between observed and modeled behavior. Existing process mining techniques can only uncover these differences; the actual repair of the model is left to the user and is not supported. In this paper we investigate the problem of repairing a process model w.r.t. a log such that the resulting model can replay the log (i.e., conforms to it) and is as similar as possible to the original model. To solve the problem, we use an existing conformance checker that aligns the runs of the given process model to the traces in the log. Based on this information, we decompose the log into several sublogs of non-fitting subtraces. For each sublog, either a loop is discovered that can replay the sublog, or a subprocess is derived that is then added to the original model at the appropriate location. The approach is implemented in the process mining toolkit ProM and has been validated on logs and models from several Dutch municipalities.
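The toy sketch below mimics the decomposition step described in the abstract: from alignments of log traces against the model, it collects maximal runs of log-only moves, groups them into sublogs by the last synchronized activity (a simplistic stand-in for the model location), and proposes either a loop or a subprocess for each sublog. The alignment encoding, the location heuristic, and the repair decision are placeholder simplifications, not the ProM implementation.

```python
# Toy decomposition of non-fitting behaviour from alignments. An alignment is
# a list of moves: ("sync", a), ("log", a) or ("model", a). The grouping key
# (last synchronous activity) and the loop/subprocess decision are simplistic
# placeholders for illustration only.
from collections import defaultdict

alignments = [
    [("sync", "register"), ("log", "check"), ("log", "check"), ("sync", "pay")],
    [("sync", "register"), ("log", "audit"), ("sync", "pay")],
    [("sync", "register"), ("sync", "pay"), ("log", "archive")],
]

def non_fitting_sublogs(alignments):
    sublogs = defaultdict(list)              # model location -> list of subtraces
    for alignment in alignments:
        location, run = "<start>", []
        for kind, act in alignment:
            if kind == "log":                # behaviour the model cannot replay
                run.append(act)
            else:
                if run:
                    sublogs[location].append(tuple(run))
                    run = []
                if kind == "sync":
                    location = act           # last activity model and log agree on
        if run:
            sublogs[location].append(tuple(run))
    return sublogs

for location, sublog in non_fitting_sublogs(alignments).items():
    # If the sublog uses a single activity, a self-loop suffices; otherwise a
    # subprocess would be discovered from the sublog and inserted after `location`.
    acts = {a for trace in sublog for a in trace}
    repair = "loop" if len(acts) == 1 else "subprocess"
    print(f"after {location!r}: add {repair} for sublog {sublog}")
```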