101.
To address the low accuracy of design pattern mining, a classification-based design pattern mining method with feature-metric constraint descriptions is proposed. Forty-seven feature metrics are summarized, constraint definitions for design patterns are given, and design pattern features are described. Taking the Adapter, Command, and Factory Method patterns as examples, patterns are mined in three categories: structural, behavioral, and creational. Mining experiments were designed on three benchmark systems and four classic systems. The results show that the proposed method achieves mining accuracies of 96.13%, 91.67%, and 72.23% for the Adapter, Command, and Factory Method patterns on the benchmark systems, and 84.3%, 81.26%, and 73.17% respectively on the classic systems, a clear improvement over traditional methods.
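A hedged sketch of the kind of structural constraint such a method might check for the Adapter pattern: a candidate class should realize a target interface, hold a reference to an adaptee, and delegate calls to that same adaptee. The class model and metric names below are illustrative assumptions, not the paper's actual 47 metrics.

```python
# Illustrative structural-constraint check for the Adapter pattern.
# ClassInfo and the relation sets are a toy model of a class graph.
from dataclasses import dataclass, field

@dataclass
class ClassInfo:
    name: str
    implements: set = field(default_factory=set)    # interfaces realized
    references: set = field(default_factory=set)    # classes held as fields
    delegates_to: set = field(default_factory=set)  # classes called in methods

def looks_like_adapter(cls: ClassInfo) -> bool:
    """Adapter constraint: realizes some target interface and delegates
    to a class it also holds a reference to (the adaptee)."""
    adaptees = cls.references & cls.delegates_to
    return bool(cls.implements) and bool(adaptees)

adapter = ClassInfo("SocketAdapter",
                    implements={"Target"},
                    references={"LegacySocket"},
                    delegates_to={"LegacySocket"})
plain = ClassInfo("Helper", references={"Logger"})

print(looks_like_adapter(adapter))  # True
print(looks_like_adapter(plain))    # False
```

A real miner would extract these relation sets from source code and combine many such metric constraints per pattern category.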
102.
Current management of the floating population stops at data query, comparison, and simple statistics; it lacks deeper analysis of the data and can hardly support decision making and command. To address the analysis of the floating population, this paper proposes an intelligent analysis system based on bio-inspired computing, designed to discover the movement patterns of various groups within the floating population, identify trends, and find abnormal movement information and patterns. The system combines several state-of-the-art bio-inspired computing techniques: a multi-chromosome gene expression programming algorithm, an overlapping gene expression evolutionary algorithm, a concept-similarity-based neural network classification model, and a hierarchical-distance clustering algorithm, to build a floating-population analysis platform for police use. In addition, a new graded alarm model based on the intelligent analysis results is proposed according to practical requirements. Experiments show that the system achieves high performance and practicality.
103.
For the complex, difficult-to-mine ore body in the western area of the 500 m level of a copper mine's main ore body, three feasible mining schemes were proposed: drift-type upward horizontal slicing with backfill; low-sublevel open stoping with short-step retreat and subsequent backfill using in-vein drilling and ore-drawing drifts; and the same low-sublevel open stoping variant using out-of-vein drilling and ore-drawing drifts. The stope layout, development and cutting works, and stoping processes of the three schemes are described in detail. By comparing the advantages, disadvantages, and technical and economic indicators of the three methods, drift-type upward horizontal slicing with backfill was selected. This method uses shallow-hole blasting during stoping, which protects the unstable sand-bearing layer in the hanging wall well; the working environment is safe, the development-cutting ratio and dilution and loss figures are relatively low, the ore grade at drawing is high, and the overall benefit is evident.
104.
A goal of this study is to develop a Composite Knowledge Manipulation Tool (CKMT). Some traditional medical activities rely heavily on the oral transfer of knowledge, with the risk of losing important knowledge. Moreover, the activities differ according to regions, traditions, experts' experiences, etc. Therefore, it is necessary to develop an integrated and consistent knowledge manipulation tool. With such a tool, it becomes possible to extract tacit knowledge consistently, transform different types of knowledge into a composite knowledge base (KB), integrate scattered and complex knowledge, and complement missing knowledge. For these reasons, I have developed the CKMT, called K-Expert, which has four advanced functionalities. First, it can extract and import logical rules from data mining (DM) with minimal effort; I expect this function to complement the oral transfer of traditional knowledge. Second, it transforms various types of logical rules into database (DB) tables after syntax checking and/or transformation, so that knowledge managers can refine, evaluate, and manage the large composite KB consistently with the support of a database management system (DBMS). Third, it visualizes the transformed knowledge as a decision tree (DT), with which knowledge workers can evaluate the completeness of the KB and fill in missing knowledge. Finally, it offers an SQL-based backward-chaining function to knowledge users. This can reduce inference time effectively because it relies on SQL querying and searching rather than the sentence-by-sentence translation used in traditional inference systems, and it gives young researchers and their fellows in the fields of knowledge management (KM) and expert systems (ES) more opportunities to follow up on and validate their knowledge. Overall, I expect the approach to mitigate knowledge loss and the burdens of knowledge transformation and complementation.
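A minimal sketch of the SQL-based backward chaining idea described above: rules are stored as rows in a database table and goals are resolved with SQL lookups rather than sentence-by-sentence rule translation. The table layout and the toy rule set are assumptions for illustration, not K-Expert's actual schema.

```python
# Backward chaining over a rules table queried with SQL (sqlite3 stand-in
# for the DBMS). A row (head, body) means: head holds if every
# comma-separated condition in body holds; an empty body marks a fact.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE rules (head TEXT, body TEXT)")
db.executemany("INSERT INTO rules VALUES (?, ?)", [
    ("fever", ""), ("cough", ""),   # observed facts
    ("flu", "fever,cough"),         # flu :- fever AND cough
    ("cold", "sneezing"),           # cold :- sneezing (unprovable here)
])

def prove(goal: str) -> bool:
    """A goal holds if some rule with that head has all body conditions provable."""
    for (body,) in db.execute("SELECT body FROM rules WHERE head = ?", (goal,)):
        conditions = [c for c in body.split(",") if c]
        if all(prove(c) for c in conditions):
            return True
    return False

print(prove("flu"))   # True
print(prove("cold"))  # False
```

Because each resolution step is an indexed table lookup, the DBMS's query machinery does the searching that a traditional inference engine would do by scanning rule sentences.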
105.
The processes of logistics service providers are highly human-centric, flexible, and complex. Deviations from the standard operating procedures described in the designed process models are not uncommon and may introduce significant uncertainties. Gaining insight into the dynamics of the actual logistics processes can effectively help mitigate the uncovered risks and create strategic advantages, which result from uncertainties with a negative and a positive impact, respectively, on the organizational objectives. In this paper a comprehensive methodology for applying process mining in logistics is presented, covering event log extraction and preprocessing as well as the execution of exploratory, performance, and conformance analyses. The applicability of the presented methodology and roadmap is demonstrated with a case study at a major Chinese port that specializes in bulk cargo.
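The exploratory step of such a methodology can be sketched in a few lines: group an event log by case, order each case's events by timestamp, and count the resulting process variants (distinct activity sequences). The (case id, activity, timestamp) triple is the usual event-log shape; the sample port events below are invented.

```python
# Exploratory process mining: derive trace variants from a flat event log.
from collections import defaultdict, Counter

events = [  # (case_id, activity, timestamp) - invented bulk-cargo handling log
    ("c1", "arrive", 1), ("c1", "unload", 2), ("c1", "depart", 3),
    ("c2", "arrive", 1), ("c2", "wait", 2), ("c2", "unload", 3), ("c2", "depart", 4),
    ("c3", "arrive", 1), ("c3", "unload", 2), ("c3", "depart", 3),
]

# Rebuild each case's trace in timestamp order.
traces = defaultdict(list)
for case, activity, ts in sorted(events, key=lambda e: (e[0], e[2])):
    traces[case].append(activity)

# Count how often each activity sequence (variant) occurs.
variants = Counter(tuple(t) for t in traces.values())
for variant, count in variants.most_common():
    print(count, " -> ".join(variant))
```

Rare variants surfacing in such a count are exactly the deviations from the designed model that the paper's conformance analysis would then investigate.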
106.
The two-step stoping process at the Xinqiao pyrite mine is introduced, and the backfill material and backfill system adopted by the mine are discussed from a technical and economic point of view.
107.
Detecting SQL injection attacks (SQLIAs) is becoming increasingly important for database-driven web sites. Until now, most studies on SQLIA detection have focused on the structured query language (SQL) structure at the application level. Unfortunately, this approach inevitably fails to detect attacks that use stored procedures and data already within the database system. In this paper, we propose a framework to detect SQLIAs at the database level using SVM classification and various kernel functions. The key issue for such a framework is how to represent the internal query tree collected from the database log in a form suitable for the SVM classification algorithm so as to achieve good detection performance. Because the complexity and variability of the query tree structure make it difficult to convert the query tree directly into an n-dimensional feature vector, we first propose a novel method that uses a multi-dimensional sequence as an intermediate representation. Second, we propose a method to extract semantic features in addition to syntactic features when generating the feature vector. Third, we propose a method to transform string feature values into numeric feature values by combining multiple statistical models; the combined model maps each string value to one numeric value that captures the string's multiple characteristics. To demonstrate the feasibility of our proposals in practical environments, we implement the SQLIA detection system on top of PostgreSQL, a popular open-source database system, and perform experiments. The experimental results using the internal query trees of PostgreSQL show that our proposal is effective in detecting SQLIAs: with at least 99.6% probability, a malicious query is more likely to be correctly predicted as an SQLIA than a normal query is to be incorrectly predicted as one. Finally, we perform additional experiments comparing our proposal with syntax-focused feature extraction and with feature transformation based on a single statistical model. The results show that our proposal significantly increases the probability of correctly detecting SQLIAs for various SQL statements compared with the previous methods.
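The third idea, mapping a string feature value to a single number by combining several statistical models, can be sketched as follows. Here two toy stand-in models are combined: a character-distribution likelihood estimated from normal values, and a length deviation term. The model choice and weights are assumptions for illustration, not the paper's actual models.

```python
# Combine two simple statistical models into one numeric score per string,
# so that strings unlike the normal population score lower.
import math
from collections import Counter

normal_values = ["users", "orders", "items", "user_id", "order_id"]

# Model 1: per-character log-probability (Laplace-smoothed) from normal values.
counts = Counter("".join(normal_values))
total = sum(counts.values())

def char_logprob(s: str) -> float:
    return sum(math.log((counts[c] + 1) / (total + 128)) for c in s) / max(len(s), 1)

# Model 2: deviation from the mean length of normal values.
mean_len = sum(map(len, normal_values)) / len(normal_values)

def to_numeric(s: str) -> float:
    # Weighted combination; the 0.1 weight is arbitrary for illustration.
    return char_logprob(s) - 0.1 * abs(len(s) - mean_len)

print(to_numeric("user_id") > to_numeric("' OR 1=1 --"))  # True: injection-like string scores lower
```

A feature vector built from such scores can then be fed to an SVM classifier alongside the structural features.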
108.
A concept lattice is an ordered structure over concepts and is particularly effective for mining association rules. However, a concept lattice is not efficient for large databases because the lattice size grows with the number of transactions. Finding an efficient strategy for dynamically updating the lattice is therefore an important issue for real-world applications, where new transactions are constantly inserted into databases. To build an efficient storage structure for mining association rules, this study proposes a method for building the initial frequent closed itemset lattice from the original database and updating it as new transactions are inserted, reducing the number of rescans over the entire database during maintenance. The proposed algorithm is compared with building the lattice in batch mode to demonstrate its effectiveness.
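The closure operation underlying such a lattice can be illustrated directly: an itemset is closed when it equals the intersection of all transactions containing it. The brute-force enumeration below is only a sketch of the definition on a toy database; the paper's contribution is precisely avoiding this full recomputation when transactions arrive.

```python
# Enumerate closed itemsets by definition: S is closed iff
# S == intersection of all transactions that contain S.
from itertools import combinations

transactions = [{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"a", "b", "c"}]

def closure(itemset: frozenset) -> frozenset:
    covering = [t for t in transactions if itemset <= t]
    return frozenset(set.intersection(*covering)) if covering else frozenset()

items = sorted(set().union(*transactions))
closed = set()
for r in range(1, len(items) + 1):
    for combo in combinations(items, r):
        s = frozenset(combo)
        if any(s <= t for t in transactions) and closure(s) == s:
            closed.add(s)

print(sorted(sorted(s) for s in closed))
# {b} is not closed: every transaction containing b also contains a.
```

An incremental algorithm updates only the lattice nodes whose covering transaction sets change when a new transaction is inserted, instead of re-enumerating as above.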
109.
The weak signal concept according to Ansoff aims to advance strategic early warning. It makes it possible to predict in advance the appearance of events that are relevant to an organization, for example the emergence of a new technology relevant to a research organization. Existing approaches detect weak signals through an environmental scanning procedure that considers textual information from the internet, since roughly 80% of all data on the internet is textual. The texts are processed by a specific clustering approach in which clusters representing weak signals are identified. In contrast to these related approaches, we propose a new methodology that investigates a sequence of clusterings measured at successive points in time. This makes it possible to trace the development of weak signals over time and thus to identify weak signal developments relevant to an organization's decision making in a strategic early warning setting.
110.
To address the missing interest attributes of microblog users, this paper proposes a method for mining user interests based on analyzing posted content. Using a phrase-based topic model together with an automatically constructed knowledge base of user interests, high-quality interest phrases can be effectively mined from posted content and labeled with their categories, thereby realizing interest mining for microblog users. Experimental results on the SMP CUP 2016 dataset show that the topic-phrase model outperforms traditional topic models in both perplexity and phrase quality, and the precision and recall of user interest mining reach up to 78% and 82%, respectively.

Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号