Full-text access type
Paid full text | 20932 articles |
Free | 3243 articles |
Free (domestic) | 2207 articles |
Subject category
Electrical engineering | 439 articles |
Technical theory | 2 articles |
General | 1816 articles |
Chemical industry | 519 articles |
Metalworking | 140 articles |
Machinery and instruments | 383 articles |
Building science | 1327 articles |
Mining engineering | 9417 articles |
Energy and power | 395 articles |
Light industry | 161 articles |
Hydraulic engineering | 216 articles |
Petroleum and natural gas | 102 articles |
Weapons industry | 42 articles |
Radio and electronics | 922 articles |
General industrial technology | 496 articles |
Metallurgical industry | 1455 articles |
Nuclear technology | 22 articles |
Automation technology | 8528 articles |
Publication year
2024 | 60 articles |
2023 | 211 articles |
2022 | 544 articles |
2021 | 785 articles |
2020 | 952 articles |
2019 | 625 articles |
2018 | 627 articles |
2017 | 668 articles |
2016 | 769 articles |
2015 | 836 articles |
2014 | 1457 articles |
2013 | 1216 articles |
2012 | 1906 articles |
2011 | 1780 articles |
2010 | 1381 articles |
2009 | 1536 articles |
2008 | 1652 articles |
2007 | 1718 articles |
2006 | 1507 articles |
2005 | 1292 articles |
2004 | 1111 articles |
2003 | 1035 articles |
2002 | 648 articles |
2001 | 469 articles |
2000 | 413 articles |
1999 | 280 articles |
1998 | 210 articles |
1997 | 135 articles |
1996 | 120 articles |
1995 | 107 articles |
1994 | 95 articles |
1993 | 72 articles |
1992 | 53 articles |
1991 | 45 articles |
1990 | 15 articles |
1989 | 13 articles |
1988 | 16 articles |
1987 | 14 articles |
1986 | 5 articles |
1985 | 1 article |
1984 | 1 article |
1983 | 1 article |
1979 | 1 article |
Sort order: 10,000 query results found; search took 31 ms
101.
For the complex, difficult-to-mine orebody in the western area of the 500 m level of the main orebody at a copper mine, three feasible mining schemes were proposed: drift-type upward horizontal slicing and filling; low-sublevel open stoping with small-step retreat and subsequent backfilling using in-vein drilling and ore-drawing drifts; and the same low-sublevel method using out-of-vein drilling and ore-drawing drifts. The stope layout, development and cutting works, and stoping processes of the three schemes are described in detail. By comparing the advantages, disadvantages, and technical-economic indicators of the three methods, the drift-type upward horizontal slicing and filling method was selected. During stoping this method uses shallow-hole blasting, which better protects the unstable sand-bearing layer in the hanging wall; it provides a safe working environment for miners, has relatively low development ratio and dilution/loss rates, yields a higher drawn ore grade, and shows clear overall benefits.
102.
Expert Systems with Applications, 2014, 41(9): 4337-4348
A goal of this study is to develop a Composite Knowledge Manipulation Tool (CKMT). Some traditional medical activities rely heavily on the oral transfer of knowledge, with the risk of losing important knowledge. Moreover, the activities differ according to regions, traditions, experts' experiences, etc. Therefore, it is necessary to develop an integrated and consistent knowledge manipulation tool. Using such a tool, it becomes possible to extract tacit knowledge consistently, transform different types of knowledge into a composite knowledge base (KB), integrate disseminated and complex knowledge, and complement missing knowledge. For these reasons, I have developed a CKMT called K-Expert, which has four advanced functionalities. First, it can extract and import logical rules from data mining (DM) with minimal effort; I expect this function to complement the oral transfer of traditional knowledge. Second, it transforms the various types of logical rules into database (DB) tables after syntax checking and/or transformation, so that knowledge managers can refine, evaluate, and manage the huge composite KB consistently with the support of a database management system (DBMS). Third, it visualizes the transformed knowledge as a decision tree (DT), with which knowledge workers can evaluate the completeness of the KB and fill gaps in the knowledge. Finally, it provides an SQL-based backward chaining function to knowledge users; this can reduce inference time effectively because it relies on SQL querying and searching rather than the sentence-by-sentence translation used in traditional inference systems. The function will give young researchers and their fellows in the fields of knowledge management (KM) and expert systems (ES) more opportunities to follow up and validate their knowledge.
Finally, I expect that the approach can mitigate knowledge loss and ease the burdens of knowledge transformation and complementation.
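The abstract does not give K-Expert's schema, but the core idea, rules stored as DB tables and queried by an SQL-based backward chainer, can be sketched with sqlite3. The table layout, column names, and medical-style rules below are hypothetical illustrations, not the paper's actual design:

```python
import sqlite3

# Hypothetical composite KB stored as DB tables, in the spirit of the
# abstract. Each rule row links one antecedent to a rule's consequent.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE facts (atom TEXT PRIMARY KEY);
    CREATE TABLE rules (rule_id INTEGER, antecedent TEXT, consequent TEXT);
    INSERT INTO facts VALUES ('fever'), ('cough');
    INSERT INTO rules VALUES (1, 'fever', 'flu'), (1, 'cough', 'flu'),
                             (2, 'flu',   'rest_advised');
""")

def prove(goal, seen=frozenset()):
    """SQL-based backward chaining: a goal holds if it is a stored fact,
    or if every antecedent of some rule concluding it can be proven."""
    if goal in seen:  # guard against cyclic rule chains
        return False
    if conn.execute("SELECT 1 FROM facts WHERE atom=?", (goal,)).fetchone():
        return True
    rule_ids = [r[0] for r in conn.execute(
        "SELECT DISTINCT rule_id FROM rules WHERE consequent=?", (goal,))]
    for rid in rule_ids:
        ants = [a[0] for a in conn.execute(
            "SELECT antecedent FROM rules WHERE rule_id=?", (rid,))]
        if all(prove(a, seen | {goal}) for a in ants):
            return True
    return False

print(prove("rest_advised"))  # True: fever & cough -> flu -> rest_advised
```

Because each chaining step is a set-oriented SQL lookup rather than a sentence-by-sentence scan of a rule file, the DBMS's indexes do the searching, which is the efficiency argument the abstract makes.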
103.
Expert Systems with Applications, 2014, 41(1): 195-209
The processes of logistics service providers are considered highly human-centric, flexible, and complex. Deviations from the standard operating procedures described in the designed process models are not uncommon and may result in significant uncertainties. Acquiring insight into the dynamics of the actual logistics processes can effectively assist in mitigating the uncovered risks and creating strategic advantages, which result from uncertainties with, respectively, a negative and a positive impact on the organizational objectives. In this paper, a comprehensive methodology for applying process mining in logistics is presented, covering event-log extraction and preprocessing as well as the execution of exploratory, performance, and conformance analyses. The applicability of the presented methodology and roadmap is demonstrated with a case study at an important Chinese port that specializes in bulk cargo.
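The paper's roadmap is methodological, but the exploratory step it mentions can be illustrated with a minimal directly-follows analysis of an event log. The tiny log below is invented for illustration; real logs would come from a port's operational systems:

```python
from collections import Counter

# A toy event log: (case_id, activity), already sorted by time within each case.
event_log = [
    ("c1", "arrive"), ("c1", "unload"), ("c1", "store"), ("c1", "depart"),
    ("c2", "arrive"), ("c2", "unload"), ("c2", "depart"),  # deviating case
    ("c3", "arrive"), ("c3", "unload"), ("c3", "store"), ("c3", "depart"),
]

def directly_follows(log):
    """Count how often activity b directly follows activity a within a case;
    the resulting counts form a directly-follows graph, a basic process-mining
    artifact for spotting deviations from the designed model."""
    by_case = {}
    for case, act in log:
        by_case.setdefault(case, []).append(act)
    pairs = Counter()
    for trace in by_case.values():
        for a, b in zip(trace, trace[1:]):
            pairs[(a, b)] += 1
    return pairs

dfg = directly_follows(event_log)
print(dfg[("unload", "store")])   # 2 conforming cases
print(dfg[("unload", "depart")])  # 1 deviation: 'store' was skipped
```

A conformance analysis would compare such observed paths against the designed process model; here the count on ("unload", "depart") exposes a skipped storage step.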
104.
105.
Expert Systems with Applications, 2014, 41(11): 5416-5430
Detecting SQL injection attacks (SQLIAs) is becoming increasingly important for database-driven web sites. Until now, most studies on SQLIA detection have focused on the structured query language (SQL) structure at the application level. Unfortunately, this approach inevitably fails to detect attacks that use stored procedures and data already inside the database system. In this paper, we propose a framework to detect SQLIAs at the database level by using SVM classification with various kernel functions. The key issue for the detection framework is how to represent the internal query tree collected from the database log in a form suitable for an SVM classification algorithm, so as to achieve good detection performance. To solve this issue, we first propose a novel method to convert the query tree into an n-dimensional feature vector by using a multi-dimensional sequence as an intermediate representation; a direct conversion is difficult because of the complexity and variability of the query-tree structure. Second, we propose a method to extract semantic as well as syntactic features when generating the feature vector. Third, we propose a method to transform string feature values into numeric feature values by combining multiple statistical models; the combined model maps one string value to one numeric value while capturing multiple characteristics of each string value. To demonstrate the feasibility of our proposals in practical environments, we implemented the SQLIA detection system on PostgreSQL, a popular open-source database system, and performed experiments.
The experimental results using the internal query trees of PostgreSQL validate that our proposal is effective in detecting SQLIAs: with at least 99.6% probability, the probability that a malicious query is correctly predicted as an SQLIA exceeds the probability that a normal query is incorrectly predicted as an SQLIA. Finally, we performed additional experiments to compare our proposal with syntax-focused feature extraction and with feature transformation based on a single statistical model. The results show that our proposal significantly increases the probability of correctly detecting SQLIAs for various SQL statements compared to the previous methods.
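The abstract does not specify which statistics its combined model uses, but the third step, mapping each string feature value to one numeric value by combining statistical models, can be sketched as follows. The two statistics combined here (relative frequency under a "normal" and a "malicious" log) and the token values are illustrative assumptions:

```python
from collections import Counter

# Hypothetical training logs of string feature values (e.g. node labels
# drawn from parsed query trees), split by class.
normal_values    = ["SELECT", "WHERE", "=", "SELECT", "AND", "WHERE"]
malicious_values = ["OR", "=", "UNION", "OR", "--", "UNION", "OR"]

n_counts, m_counts = Counter(normal_values), Counter(malicious_values)

def to_numeric(value):
    """Combine two simple statistical models into one number: relative
    frequency under the normal model minus relative frequency under the
    malicious model. One string value -> one numeric value."""
    p_normal    = n_counts[value] / len(normal_values)
    p_malicious = m_counts[value] / len(malicious_values)
    return p_normal - p_malicious

# Numeric feature vector for a (hypothetical) query-tree token sequence,
# ready to be fed to an SVM with a chosen kernel:
query_tokens = ["SELECT", "WHERE", "OR", "UNION"]
vector = [to_numeric(t) for t in query_tokens]
print(vector)
```

Tokens typical of injections ("OR", "UNION") map to negative values and benign tokens to positive ones, so even this crude combination already separates the classes along each dimension.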
106.
Expert Systems with Applications, 2014, 41(6): 2703-2712
A concept lattice is an ordered structure over concepts. It is particularly effective for mining association rules. However, a concept lattice is not efficient for large databases, because the lattice size grows with the number of transactions. Finding an efficient strategy for dynamically updating the lattice is an important issue for real-world applications, where new transactions are constantly inserted into databases. To build an efficient storage structure for mining association rules, this study proposes a method for building the initial frequent closed itemset lattice from the original database; the lattice is then updated as new transactions are inserted, and the number of rescans over the entire database is reduced during maintenance. The proposed algorithm is compared with building the lattice in batch mode to demonstrate its effectiveness.
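The paper's incremental algorithm is not described in the abstract, but the object its lattice stores, the closed itemset, can be sketched with a brute-force check (illustrative only; a real miner avoids this full enumeration):

```python
from itertools import combinations

transactions = [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}]

def closure(itemset, db):
    """The closure of an itemset: the intersection of all transactions that
    contain it. An itemset is closed iff it equals its own closure."""
    covering = [t for t in db if itemset <= t]
    if not covering:
        return set(itemset)
    out = set(covering[0])
    for t in covering[1:]:
        out &= t
    return out

def closed_itemsets(db):
    """Brute-force enumeration of all closed itemsets in a tiny database."""
    items = sorted(set().union(*db))
    closed = set()
    for r in range(1, len(items) + 1):
        for combo in combinations(items, r):
            s = set(combo)
            if any(s <= t for t in db) and closure(s, db) == s:
                closed.add(frozenset(s))
    return closed

closed = closed_itemsets(transactions)
print(sorted(sorted(s) for s in closed))
```

Here {b} is not closed because every transaction containing b also contains a, so {a, b} represents it; storing only such closed sets is what keeps the lattice compact. On insertion of a new transaction, only itemsets contained in that transaction can change their closure, which is the intuition behind reducing full-database rescans.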
107.
Expert Systems with Applications, 2014, 41(11): 5009-5016
The weak-signal concept, introduced by Ansoff, aims to advance strategic early warning: it makes it possible to predict in advance the appearance of events that are relevant for an organization, for example the emergence of a new technology relevant to a research organization. Existing approaches detect weak signals with an environmental-scanning procedure that considers textual information from the internet, since about 80% of all data on the internet is textual. The texts are processed by a specific clustering approach in which clusters that represent weak signals are identified. In contrast to these related approaches, we propose a new methodology that investigates a sequence of clusters measured at successive points in time. This makes it possible to trace the development of weak signals over time and thus to identify weak-signal developments relevant to an organization's decision making in a strategic early-warning environment.
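The paper's cluster-based measures are not given in the abstract, so the sketch below substitutes a much simpler heuristic for the "successive points in time" idea: a term is a weak-signal candidate if its frequency stays low in absolute terms yet grows at every scan. The data and the threshold are invented:

```python
# Hypothetical term frequencies from environmental scans at three successive
# time points. The low-but-steadily-growing criterion is an illustrative
# stand-in for the paper's cluster-sequence analysis, not its actual method.
scans = [
    {"cloud": 120, "blockchain": 2,  "erp": 80},
    {"cloud": 125, "blockchain": 5,  "erp": 78},
    {"cloud": 123, "blockchain": 11, "erp": 77},
]

def weak_signal_candidates(snapshots, max_freq=20):
    """Terms that stay below max_freq but grow strictly at every step."""
    terms = set().union(*snapshots)
    out = []
    for term in sorted(terms):
        series = [s.get(term, 0) for s in snapshots]
        low = max(series) <= max_freq
        growing = all(a < b for a, b in zip(series, series[1:]))
        if low and growing:
            out.append(term)
    return out

candidates = weak_signal_candidates(scans)
print(candidates)  # ['blockchain']
```

"cloud" is loud (an established topic, a strong signal) and "erp" is shrinking; only the quiet but accelerating term survives, which is the behaviour a sequence of cluster snapshots is meant to expose.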
108.
To address the problem of missing interest attributes for microblog (Weibo) users, this paper proposes a method for mining user interests based on analysis of post content. Using a phrase-based topic model and an automatically constructed knowledge base of user interests, the method effectively mines high-quality interest phrases from post content and labels them with their categories, thereby realizing interest mining for microblog users. Experimental results on the SMP CUP 2016 dataset show that the topical-phrase model outperforms traditional topic models in both perplexity and phrase quality, and that the precision and recall of user-interest mining reach up to 78% and 82%, respectively.
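The phrase-based topic model itself is beyond a short sketch, but its first ingredient, finding recurring multi-word phrase candidates in posts, can be illustrated with a crude bigram counter. The posts, stopword list, and threshold below are toy assumptions (and in English, whereas the paper mines Chinese Weibo text):

```python
from collections import Counter

# Toy posts standing in for a user's microblog timeline.
posts = [
    "deep learning tutorial for beginners",
    "new deep learning paper on vision",
    "my cat photos and deep learning memes",
]
stopwords = {"for", "on", "and", "my", "new"}

def candidate_phrases(docs, min_count=2):
    """Adjacent word pairs that recur across posts and contain no stopword;
    a crude stand-in for the phrase step of a phrase-based topic model."""
    bigrams = Counter()
    for doc in docs:
        words = doc.split()
        for a, b in zip(words, words[1:]):
            if a not in stopwords and b not in stopwords:
                bigrams[(a, b)] += 1
    return [p for p, c in bigrams.items() if c >= min_count]

phrases = candidate_phrases(posts)
print(phrases)  # [('deep', 'learning')]
```

A real phrase-based topic model would then assign such phrases, rather than single words, to topics, which is what improves both perplexity and phrase quality in the paper's experiments.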
109.
110.
To effectively control the deformation of mining roadways in broken overburden, I-beam shed (steel-set) support was adopted according to the occurrence characteristics of the strata; the formation mechanism of the Protodyakonov (natural) arch in broken overburden was analyzed, and the critical width of the mining roadway was studied. Since such roadways feature broken, low-strength surrounding rock that cannot hold anchors, Protodyakonov arch theory was used to calculate the surrounding-rock pressure; by setting the maximum normal stress in the I-beam shed roof beam equal to the yield limit of the I-beam, a formula for the critical roadway width under broken overburden was derived. Taking the 106 coal mine as an engineering example, the results show that the reasonable critical width of the roadway is 4.4 m; increasing the support strength and reducing the roadway height can effectively increase the critical width of a mining roadway in broken overburden; and if a larger roadway width is required, it can be achieved by reducing the shed spacing or adopting a larger steel section.
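The derived formula is not reproduced in the abstract, but its chain of reasoning, Protodyakonov arch load on the shed roof beam followed by a bending check against the I-beam yield limit, can be sketched numerically. All parameter values below are assumed for illustration; they are not the 106 coal mine case data, so this sketch will not reproduce the 4.4 m result:

```python
from math import tan, radians

# Assumed rock-mass parameters (illustrative only):
GAMMA = 25e3   # unit weight of broken overburden, N/m^3
F     = 0.8    # Protodyakonov firmness coefficient of the broken rock
PHI   = 30.0   # internal friction angle of the side rock, degrees

def max_beam_stress(B, W, spacing, h):
    """Max bending stress in a simply supported I-beam shed roof beam of
    span B (m) loaded by the Protodyakonov natural arch above the roadway."""
    a1 = B / 2 + h * tan(radians(45 - PHI / 2))  # arch half-span incl. side prisms
    b = a1 / F                                   # natural-arch height
    q = GAMMA * b * spacing                      # uniform load per metre of beam
    M = q * B ** 2 / 8                           # mid-span bending moment
    return M / W

def critical_width(W, spacing, h, sigma_y=235e6):
    """Bisection for the span at which beam stress reaches the yield limit
    (sigma_y defaults to Q235 steel). Stress grows monotonically with B."""
    lo, hi = 0.1, 20.0
    for _ in range(60):
        mid = (lo + hi) / 2
        if max_beam_stress(mid, W, spacing, h) < sigma_y:
            lo = mid
        else:
            hi = mid
    return lo

# Example: I16-class beam (section modulus ~141 cm^3), 0.8 m shed spacing:
print(round(critical_width(W=1.41e-4, spacing=0.8, h=3.0), 2))
```

The model reproduces the abstract's qualitative conclusions: a larger section modulus W (stronger support), a smaller shed spacing, or a lower roadway height each raises the critical width.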