991.
Geo-demographic analysis, one of the most interesting interdisciplinary research topics between Geographic Information Systems and Data Mining, plays a very important role in policy decisions, population migration and service distribution. Among the soft computing methods used for this problem, clustering is the most popular because it offers advantages over the rest, such as fast processing time, good result quality and low memory usage. Nonetheless, the state-of-the-art clustering algorithm for this task, FGWC, has low clustering quality since it was constructed on the basis of traditional fuzzy sets. In this paper, we present a novel interval type-2 fuzzy clustering algorithm, built on an extension of traditional fuzzy sets known as Interval Type-2 Fuzzy Sets, to enhance the clustering quality of FGWC. Additional techniques such as an interval context variable, Particle Swarm Optimization and parallel computing are attached to speed up the algorithm. Experimental evaluation across various case studies shows that the proposed method obtains better clustering quality than some of the best-known alternatives.
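The interval type-2 idea behind this abstract can be sketched in a few lines. The construction below is illustrative, not the authors' exact FGWC variant: a common way to build an interval type-2 membership is to evaluate ordinary type-1 fuzzy c-means memberships under two fuzzifiers and keep the interval they span; the fuzzifier values and the averaging type-reduction are assumptions.

```python
import numpy as np

def it2_memberships(X, centers, m1=1.5, m2=2.5):
    """Interval type-2 fuzzy memberships: evaluate type-1 FCM
    memberships under two fuzzifiers m1, m2 and keep the
    [lower, upper] interval they span (a common IT2 construction)."""
    # pairwise point-to-center distances, shape (n_points, n_centers)
    d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12

    def fcm_u(m):
        # standard FCM membership: u_ij = 1 / sum_k (d_ij/d_ik)^(2/(m-1))
        ratio = (d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))
        return 1.0 / ratio.sum(axis=2)

    u1, u2 = fcm_u(m1), fcm_u(m2)
    return np.minimum(u1, u2), np.maximum(u1, u2)

def defuzzify(lower, upper):
    """Type-reduce by averaging the interval bounds, then renormalize."""
    u = (lower + upper) / 2.0
    return u / u.sum(axis=1, keepdims=True)
```

A crisp assignment then follows by taking the argmax of the type-reduced memberships per row.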
992.
Bioavailability is a major bottleneck in the clinical application of medium molecular weight therapeutics, including protein and peptide drugs. Paracellular transport of these molecules is hampered by intercellular tight junction (TJ) complexes. Therefore, safe chemical regulators for TJ loosening are desired. Here, we showed a potential application of select non-steroidal anti-inflammatory drugs (NSAIDs) as TJ modulators. Based on our previous observation that diclofenac and flufenamic acid directly bound various PDZ domains with a broad specificity, we applied solution nuclear magnetic resonance techniques to examine the interaction of other NSAIDs and the first PDZ domain (PDZ1) of zonula occludens (ZO)-1, ZO-1(PDZ1). Inhibition of ZO-1(PDZ1) is expected to provide loosening of the epithelial barrier function because the domain plays a crucial role in maintaining TJ integrity. Accordingly, diclofenac and indomethacin were found to decrease the subcellular localization of claudin (CLD)-2 but not occludin and ZO-1 at the apicolateral intercellular compartment of Madin–Darby canine kidney (MDCK) II cells. These NSAIDs exhibited 125–155% improved paracellular efflux of fluorescein isothiocyanate insulin for the Caco-2 cell monolayer. We propose that these NSAIDs can be repurposed as drug absorption enhancers for peptide drugs.  相似文献   
993.
The least squares regression (LSR) algorithm is a common subspace segmentation method; because LSR has a closed-form solution, its clustering performance is high. However, LSR relies on spectral clustering to cluster the data, and spectral clustering initializes its cluster centers randomly, which affects the subsequent clustering results. To address this problem, an improved LSR algorithm (LSR-DC) is proposed, based on two characteristics of cluster centers: local density and distance. Experiments on the Extended Yale B dataset show that the algorithm achieves high clustering accuracy, exhibits a degree of robustness, and outperforms LSR and other existing subspace segmentation methods.
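The density-and-distance criterion for picking cluster centers can be sketched as follows. This is a minimal sketch of the generic density-peak idea (Rodriguez–Laio style, with a cutoff-kernel density), assumed here as the kind of initialization LSR-DC uses; the function name and the cutoff parameter `dc` are illustrative.

```python
import numpy as np

def density_peak_centers(X, k, dc=1.0):
    """Pick k cluster centers as points that combine high local
    density with a large distance to any denser point."""
    n = len(X)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    rho = (dist < dc).sum(axis=1) - 1          # local density (cutoff kernel)
    delta = np.zeros(n)                        # distance to nearest denser point
    for i in range(n):
        higher = np.where(rho > rho[i])[0]
        delta[i] = dist[i, higher].min() if len(higher) else dist[i].max()
    score = rho * delta                        # centers score high on both
    return np.argsort(score)[::-1][:k]
```

Points chosen this way are both locally dense and far from any denser rival, so one center tends to emerge per cluster, avoiding the random-initialization sensitivity the abstract criticizes.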
994.
To address the defects of traditional collaborative filtering (CF) recommendation algorithms, namely sparse rating matrices, weak scalability and low recommendation accuracy, an improved fuzzy-partition-clustering collaborative filtering algorithm (GIFP-CCF+) is proposed. On top of the traditional adjusted-cosine similarity computation, a time-difference factor, a hot-item weight factor and a cold-item weight factor are introduced to improve the similarity results. At the same time, the GIFP-FCM algorithm with improved fuzzy partitioning is introduced to cluster items with similar attribute features into one class and build an index matrix; within the same index, an item's nearest neighbors are found according to inter-item similarity to form recommendations, thereby improving the precision of collaborative filtering. Simulation comparisons against the Kmeans-CF, FCM-CF and GIFP-CCF algorithms demonstrate that GIFP-CCF+ has clear advantages in recommendation results and recommendation precision.
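The weighted similarity described above can be sketched like this. This is an illustrative reading, not the paper's exact formula: the exponential time-decay, the logarithmic popularity damping, the `half_life` and `pop_damp` parameters, and the function name are all assumptions standing in for the time-difference and item-popularity weight factors.

```python
import math

def weighted_item_similarity(ratings, times, i, j, half_life=30.0, pop_damp=0.5):
    """Adjusted-cosine item similarity with each co-rating term
    discounted by the time gap between the two ratings, and the
    final score damped for very popular (heavily co-rated) items.
    ratings[u]: item -> score; times[u]: item -> timestamp in days."""
    num = den_i = den_j = 0.0
    pop = 0
    for u in ratings:
        if i in ratings[u] and j in ratings[u]:
            pop += 1
            mean = sum(ratings[u].values()) / len(ratings[u])
            w = math.exp(-abs(times[u][i] - times[u][j]) / half_life)  # time factor
            di, dj = ratings[u][i] - mean, ratings[u][j] - mean
            num += w * di * dj
            den_i += di * di
            den_j += dj * dj
    if den_i == 0 or den_j == 0:
        return 0.0
    sim = num / math.sqrt(den_i * den_j)
    return sim / (1.0 + pop_damp * math.log1p(pop))  # popularity damping
```

Ratings made close together in time reinforce the similarity, while items co-rated by everyone are damped so that popular items do not dominate the neighbor search.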
995.
Road traffic networks are rapidly growing in size with increasing complexity. To simplify their analysis in order to maintain smooth traffic, a large urban road network can be considered as a set of small sub-networks, each exhibiting a distinctive traffic flow pattern. In this paper, we propose a robust framework for spatial partitioning of large urban road networks based on traffic measures. For a given urban road network, we aim to identify the different sub-networks or partitions that exhibit homogeneous traffic patterns internally, but heterogeneous patterns externally. To this end, we develop a two-stage algorithm (referred to as FaDSPa) within our framework. It first transforms the large road graph into a well-structured and condensed density peak graph (DPG) via density based clustering and link aggregation, using traffic density and adjacency connectivity respectively. Thereafter we apply our spectral theory based graph cut (referred to as α-Cut) to partition the DPG and obtain the different sub-networks. Thus the framework applies the locally distributed computations of density based clustering to improve efficiency and the centralized global computations of spectral clustering to improve accuracy. We perform extensive experiments on real as well as synthetic datasets, and compare the framework's performance with that of an existing road network partitioning method. Our results show that the proposed method outperforms the existing normalized cut based method for small road networks and provides impressive results for much larger networks, where other methods may face serious problems of time and space complexity.
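The spectral stage of such a pipeline can be sketched with a standard normalized-Laplacian bipartition; this stands in for the paper's α-Cut, whose exact objective is not given in the abstract, so treat the sign-of-Fiedler-vector rule below as a generic placeholder.

```python
import numpy as np

def spectral_bipartition(W):
    """Split a weighted adjacency matrix into two parts by the sign
    of the Fiedler vector of the symmetric normalized Laplacian,
    L = I - D^{-1/2} W D^{-1/2} (a standard spectral cut)."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(W)) - d_inv_sqrt @ W @ d_inv_sqrt
    _, vecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    fiedler = vecs[:, 1]               # eigenvector of 2nd-smallest eigenvalue
    return (fiedler >= 0).astype(int)  # side of the cut per node
```

On a condensed DPG with few super-nodes, recursing this cut (or extending it to k eigenvectors plus a clustering step) yields the sub-network partition.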
996.
The capacitated redistricting problem (CRP) has the objective of redefining, under a given criterion, an initial set of districts of an urban area represented by a geographic network. Each node in the network has different types of demands and each district has a limited capacity. Real-world applications consider more than one criterion in the design of the districts, leading to a multicriteria CRP (MCRP). Examples are found in political districting, sales territory design, street sweeping, garbage collection and mail delivery. This work addresses the MCRP applied to power meter reading, with two criteria considered: compactness and homogeneity of districts. The proposed solution framework is based on a greedy randomized adaptive search procedure and multicriteria scalarization techniques to approximate the Pareto frontier. The computational experiments show the effectiveness of the method on a set of randomly generated networks and on a real-world network extracted from the city of São Paulo.
997.
The amount of deception taking place via electronic text-based communication is increasing. Research has sought to detect deception automatically by analyzing the text from the communicator; however, the deceptive intent of the communication partner has been ignored. We compare the text from subjects who are trying to deceive each other, subjects trying to deceive truth tellers, subjects telling the truth to truth tellers, and subjects telling the truth to deceivers. We hypothesize that, regardless of the partner's intent, deceitful text will cluster closest to other deceitful text. We cluster each of the four conditions using the text content. The clustering algorithm placed subjects trying to deceive each other closest to subjects telling the truth to each other. In this analysis, the language that led subjects to choose the same outcomes had a stronger effect than the language tied to being deceitful or truthful.
998.
Cardiopulmonary exercise testing (CPET) integrates the body's respiratory and cardiovascular systems into a single assessment: it not only reflects a subject's aerobic exercise capacity and evaluates cardiopulmonary endurance, but also studies the subject's stress response to exercise from a holistic, integrative-medicine perspective. To perform agglomerative hierarchical cluster analysis on CPET data, an algorithm based on the morphological features of time series is proposed. CPET data from 15 amateur middle- and long-distance runners were selected as the clustering objects. Eight indices characterizing aerobic capacity and cardiopulmonary endurance were chosen as clustering indicators: oxygen uptake, carbon dioxide output, heart rate, minute ventilation equivalent, metabolic equivalent, the ratio of physiological dead space to tidal volume, respiratory quotient, and stroke volume, together reflecting the athletes' efficiency in taking up and using oxygen, their pulmonary circulation, and their cardiac function. Cluster analysis revealed large individual differences among subjects, with no obvious "herd distribution"; evaluation by silhouette coefficient can screen out subjects with poor cardiopulmonary endurance. Experimental results show that the algorithm reduces the data compression rate while maintaining clustering accuracy, and performs better on datasets with salient morphological features.
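The silhouette-based screening mentioned above can be sketched as follows. This is a generic per-sample silhouette computation, not the paper's implementation; the feature vectors here stand in for the eight CPET indices, and any threshold for "screening out" a subject is an assumption.

```python
import numpy as np

def silhouette_scores(X, labels):
    """Per-sample silhouette s = (b - a) / max(a, b), where a is the
    mean distance to the sample's own cluster and b the mean distance
    to the nearest other cluster; low s flags subjects fitting poorly."""
    X, labels = np.asarray(X, float), np.asarray(labels)
    n = len(X)
    dist = np.linalg.norm(X[:, None] - X[None, :], axis=2)
    s = np.zeros(n)
    for i in range(n):
        same = labels == labels[i]
        same[i] = False                              # exclude self
        a = dist[i, same].mean() if same.any() else 0.0
        b = min(dist[i, labels == c].mean()
                for c in set(labels.tolist()) if c != labels[i])
        s[i] = 0.0 if max(a, b) == 0 else (b - a) / max(a, b)
    return s
```

Subjects whose silhouette falls below a chosen cutoff would be the candidates for removal before re-clustering.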
999.
沈艺敏, 蒋小波. 《计算机仿真》, 2020(4): 385-388, 445
Covert channel data are scattered in distribution, which hinders their detection. To address the slow detection speed and poor effectiveness of traditional covert channel data detection methods, a covert channel data security detection method based on the SIR model is proposed. An SIR covert channel model is constructed, and an online detection model is used to encode the covert channel data. A density-based clustering algorithm then searches and clusters the encoded data, partitioning it into density regions; judging the validity of the data in each region completes the density clustering of the covert channel data. A decision tree extracts feature attributes from the clustered data, the feature attributes are used to obtain a new information gain rate, and computing the differences between data items completes the security detection. Experimental results show that the proposed method can effectively detect covert channel data, outperforms traditional methods in precision, efficiency and stability, and takes less detection time, giving it a clear advantage.
1000.
Clustering mixed-type data usually evaluates sample attributes separately according to their category. But partitioning the attributes into different subspaces and measuring each separately breaks the original unity of the sample's attributes, introducing an inconsistent measurement bias into the similarity evaluation of individual samples. To address this problem, a new clustering algorithm is proposed that encodes sample attributes in binary and then measures the attribute codes uniformly via Hamming difference. By carrying out similarity measurement for mixed-type data within a unified framework, the new algorithm avoids splitting the sample attributes; on this basis, different attributes are assigned different weights according to their nature, and the similarity between individual samples is evaluated accordingly. Experimental results show that the new algorithm clusters mixed-type data effectively and, compared with other existing clustering algorithms, exhibits better clustering accuracy and stability.
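The unified binary encoding can be sketched as below. This is an illustrative construction under stated assumptions, not the paper's exact scheme: categoricals are one-hot encoded, numerics are equal-width binned and then one-hot encoded (the bin count is an assumption), and similarity is one minus the normalized Hamming distance; per-attribute weights from the paper are omitted for brevity.

```python
def encode_mixed(sample, schema, bins=4):
    """Encode one mixed-type sample as a single bit string so that
    every attribute lives in the same binary space.
    schema: list of ('cat', [categories]) or ('num', (lo, hi))."""
    bits = []
    for (kind, info), v in zip(schema, sample):
        if kind == 'cat':
            bits += [1 if v == c else 0 for c in info]       # one-hot category
        else:
            lo, hi = info
            b = min(int((v - lo) / (hi - lo + 1e-12) * bins), bins - 1)
            bits += [1 if k == b else 0 for k in range(bins)]  # one-hot bin
    return bits

def hamming_similarity(x, y):
    """Similarity = 1 - normalized Hamming distance over the codes."""
    diff = sum(a != b for a, b in zip(x, y))
    return 1.0 - diff / len(x)
```

Because every attribute contributes bits to the same code, one distance measures the whole sample, which is exactly the unity the abstract argues subspace-split measures destroy.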