Article Search Results
  Subscription full text   552 articles
  Free access   49 articles
  Free domestic access   22 articles
Electrical engineering   4 articles
General   3 articles
Chemical industry   28 articles
Metalworking   12 articles
Machinery and instruments   30 articles
Building science   8 articles
Mining engineering   4 articles
Energy and power engineering   24 articles
Light industry   5 articles
Hydraulic engineering   1 article
Petroleum and natural gas   2 articles
Weapons industry   1 article
Radio and electronics   114 articles
General industrial technology   47 articles
Metallurgical industry   2 articles
Atomic energy technology   3 articles
Automation technology   335 articles
  2023   23 articles
  2022   16 articles
  2021   25 articles
  2020   17 articles
  2019   18 articles
  2018   19 articles
  2017   42 articles
  2016   82 articles
  2015   31 articles
  2014   57 articles
  2013   37 articles
  2012   36 articles
  2011   32 articles
  2010   32 articles
  2009   36 articles
  2008   10 articles
  2007   22 articles
  2006   22 articles
  2005   11 articles
  2004   7 articles
  2003   7 articles
  2002   13 articles
  2001   3 articles
  2000   2 articles
  1999   3 articles
  1998   1 article
  1997   2 articles
  1996   2 articles
  1994   3 articles
  1992   2 articles
  1991   1 article
  1990   4 articles
  1989   2 articles
  1988   3 articles
Sort order: 623 query results in total (search time: 15 ms)
1.
The Journal of Supercomputing - Currently, all online social networks (OSNs) are considered to follow a power-law distribution. In this paper, the degree distribution for multiple OSNs has been...
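As a rough illustration of how the power-law claim can be checked, the sketch below fits the tail exponent of a degree sequence with the standard maximum-likelihood estimator. The synthetic degree data, the choice of k_min and the estimator itself are illustrative assumptions and are not taken from the (truncated) abstract above.

```python
# Hypothetical sketch: estimate the power-law exponent of a degree sequence via
# the discrete MLE approximation alpha = 1 + n / sum(ln(k_i / (k_min - 0.5))).
# The degree data below is synthetic; the paper's datasets are not reproduced.
import math
import random

def fit_power_law_exponent(degrees, k_min=2):
    """Estimate the power-law exponent over degrees >= k_min (discrete MLE approximation)."""
    tail = [k for k in degrees if k >= k_min]
    return 1.0 + len(tail) / sum(math.log(k / (k_min - 0.5)) for k in tail)

if __name__ == "__main__":
    random.seed(0)
    # Synthetic "OSN" degrees with a heavy tail (true exponent around 2.5).
    degrees = [max(1, int(random.paretovariate(1.5))) for _ in range(10_000)]
    print("estimated exponent:", round(fit_power_law_exponent(degrees), 2))
```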
2.
Given instances (spatial points) of different spatial features (categories), significant spatial co-distribution pattern discovery aims to find subsets of spatial features whose spatial distributions are statistically significantly similar to each other. Discovering significant spatial co-distribution patterns is important for many application domains, such as identifying spatial associations between diseases and risk factors in spatial epidemiology. Previous methods mostly associated spatial features whose instances are frequently located together; however, this does not necessarily indicate a similarity in the spatial distributions of different features. Thus, this paper defines the significant spatial co-distribution pattern discovery problem and subsequently develops a novel method to solve it effectively. First, we propose a new measure, the dissimilarity index, to quantify the difference between the spatial distributions of different features under the spatial neighbor relation, and then employ it in a distribution clustering method to detect candidate spatial co-distribution patterns. To further remove spurious patterns that occur accidentally, the validity of each candidate spatial co-distribution pattern is verified through a significance test under the null hypothesis that the spatial distributions of different features are independent of each other. To model the null hypothesis, a distribution shift-correction method is presented, which randomizes the relationships between different features while maintaining the spatial structure of each feature (e.g., spatial auto-correlation). Comparisons with baseline methods using synthetic datasets demonstrate the effectiveness of the proposed method. A case study identifying co-morbidities in central Colorado is also presented to illustrate the real-world applicability of the proposed method.
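The abstract does not define the dissimilarity index or the shift-correction null model, so the sketch below is only a loose stand-in for the general idea: it bins two point features into a grid, uses the L1 distance between normalised cell densities as a dissimilarity score, and estimates a p-value with random toroidal shifts that keep each feature's internal structure while randomising the relationship between features. All names, parameters and data are invented for illustration.

```python
# Illustrative stand-in only; not the paper's actual index, clustering step or
# shift-correction model.
import random
from collections import Counter

def grid_density(points, cell=1.0):
    counts = Counter((int(x // cell), int(y // cell)) for x, y in points)
    total = sum(counts.values())
    return {c: v / total for c, v in counts.items()}

def dissimilarity(points_a, points_b, cell=1.0):
    da, db = grid_density(points_a, cell), grid_density(points_b, cell)
    return 0.5 * sum(abs(da.get(c, 0.0) - db.get(c, 0.0)) for c in set(da) | set(db))

def toroidal_shift(points, bbox):
    """Translate all points by a random offset, wrapping around the bounding box."""
    (xmin, xmax), (ymin, ymax) = bbox
    w, h = xmax - xmin, ymax - ymin
    dx, dy = random.uniform(0, w), random.uniform(0, h)
    return [(xmin + (x - xmin + dx) % w, ymin + (y - ymin + dy) % h) for x, y in points]

def similarity_p_value(points_a, points_b, bbox, trials=99, cell=1.0):
    """Fraction of shifted trials at least as similar as observed; small => significantly similar."""
    observed = dissimilarity(points_a, points_b, cell)
    hits = sum(dissimilarity(points_a, toroidal_shift(points_b, bbox), cell) <= observed
               for _ in range(trials))
    return hits / trials

if __name__ == "__main__":
    random.seed(1)
    bbox = ((0.0, 10.0), (0.0, 10.0))
    # Two synthetic features clustered in the same part of the study area.
    a = [(random.uniform(1, 4), random.uniform(1, 4)) for _ in range(300)]
    b = [(random.uniform(1.5, 4.5), random.uniform(1.5, 4.5)) for _ in range(300)]
    print("dissimilarity:", round(dissimilarity(a, b), 3))
    print("p-value under random shifts:", similarity_p_value(a, b, bbox))
```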
3.
The steelmaking-continuous casting (SCC) scheduling problem is an example of a complex hybrid flow shop scheduling problem (HFSSP) with a strong industrial background. This paper investigates the SCC scheduling problem involving controllable processing times (CPT), with multiple objectives concerning the total waiting time, earliness/tardiness and adjusting cost. The SCC scheduling problem with CPT is seldom discussed in the existing literature. This study is motivated by the practical situation of a large integrated steel company in which just-in-time (JIT) and cost-cutting production strategies have become a significant concern. To address this complex HFSSP, the scheduling problem is decomposed into two subproblems: a parallel machine scheduling problem (PMSP) in the last stage and an HFSSP in the upstream stages. First, a hybrid differential evolution (HDE) algorithm combined with a variable neighborhood decomposition search (VNDS) is proposed for the former subproblem. Second, an iterative backward list scheduling (IBLS) algorithm is presented to solve the latter subproblem. The effectiveness of this bi-layer optimization approach is verified by computational experiments on well-designed and real-world scheduling instances. This study provides a new perspective on modeling and solving practical SCC scheduling problems.
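As a much-reduced illustration of the "controllable processing times" idea, the sketch below runs a plain differential evolution over per-job compression amounts and decodes each candidate with a greedy parallel-machine rule. It is not the paper's HDE+VNDS or IBLS algorithm; the job data, machine count and objective are invented.

```python
# Toy differential evolution for parallel-machine scheduling with controllable
# processing times: minimise total completion time plus compression cost.
import random

JOBS = [(10.0, 3.0, 1.5), (8.0, 2.0, 2.0), (12.0, 4.0, 1.0), (9.0, 3.0, 1.2), (11.0, 2.5, 0.8)]
# (normal processing time, maximum compression, cost per unit of compression)
MACHINES = 2

def decode(compression):
    """Greedy earliest-available-machine schedule; returns completion time + adjusting cost."""
    machine_free = [0.0] * MACHINES
    total_completion, adjust_cost = 0.0, 0.0
    for (p, max_c, cost), c in zip(JOBS, compression):
        c = max(0.0, min(c, max_c))                      # clamp to the feasible range
        m = min(range(MACHINES), key=lambda i: machine_free[i])
        machine_free[m] += p - c
        total_completion += machine_free[m]
        adjust_cost += cost * c
    return total_completion + adjust_cost

def differential_evolution(pop_size=20, gens=100, F=0.5, CR=0.9):
    dim = len(JOBS)
    pop = [[random.uniform(0, JOBS[j][1]) for j in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = random.sample([p for k, p in enumerate(pop) if k != i], 3)
            trial = [a[j] + F * (b[j] - c[j]) if random.random() < CR else pop[i][j]
                     for j in range(dim)]
            if decode(trial) <= decode(pop[i]):          # greedy selection
                pop[i] = trial
    return min(pop, key=decode)

if __name__ == "__main__":
    random.seed(2)
    best = differential_evolution()
    print("best compression vector:", [round(x, 2) for x in best])
    print("objective value:", round(decode(best), 2))
```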
4.
A new matching cost computation method based on the nonsubsampled contourlet transform (NSCT) for stereo image matching is proposed in this paper. First, the stereo image is decomposed by the NSCT into high-frequency sub-band images at different scales and along different directions. Second, using the coefficients in the high-frequency domain and the grayscale values in RGB color space, a weighted matching cost between two pixels is designed based on the gestalt laws. Finally, two types of experiments are carried out on standard stereo pairs from the Middlebury benchmark: one determines the optimum values of the NSCT scale and direction parameters, and the other compares the proposed matching cost with nine known matching costs. Experimental results show that the optimum scale and direction parameters are 2 and 3, respectively, and that the matching accuracy of the proposed matching cost is twice as high as that of the traditional NCC cost.
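A real NSCT decomposition is beyond a short example, so the sketch below only shows the general shape of a weighted matching cost that mixes colour differences with high-frequency structure differences; a simple horizontal gradient stands in for the NSCT coefficients, and the weights, truncation values and pixel data are arbitrary rather than taken from the paper.

```python
# Sketch of a truncated, weighted per-pixel matching cost combining colour and
# "high-frequency" (here: gradient) information. Not the paper's NSCT-based cost.
def horizontal_gradient(row):
    return [row[x + 1] - row[x] for x in range(len(row) - 1)] + [0]

def matching_cost(left_rgb_row, right_rgb_row, x_left, x_right,
                  w_color=0.6, w_struct=0.4, trunc_color=30.0, trunc_struct=10.0):
    """Weighted sum of truncated colour and structural differences for one pixel pair."""
    color_diff = sum(abs(l - r) for l, r in zip(left_rgb_row[x_left], right_rgb_row[x_right])) / 3.0
    grad_l = horizontal_gradient([sum(p) / 3.0 for p in left_rgb_row])
    grad_r = horizontal_gradient([sum(p) / 3.0 for p in right_rgb_row])
    struct_diff = abs(grad_l[x_left] - grad_r[x_right])
    return w_color * min(color_diff, trunc_color) + w_struct * min(struct_diff, trunc_struct)

if __name__ == "__main__":
    left = [(120, 110, 100), (130, 118, 105), (200, 190, 180), (60, 55, 50)]
    right = [(118, 109, 99), (128, 117, 104), (198, 189, 179), (62, 57, 52)]
    # Cost of matching the left pixel at x=2 against right candidates at disparities 0 and 1.
    for d in (0, 1):
        print(f"disparity {d}: cost = {matching_cost(left, right, 2, 2 - d):.2f}")
```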
5.
The cooperation between designers, engineers and scientists in the human–computer interaction (HCI) community is often difficult, and can only be explained by investigating the different paradigms by which they operate. This study proposes a paradigm model for designers, engineers and scientists, using three barriers to separate the professions. We then report on an empirical study that attempted to validate the understand/transform-world barrier in the paradigm model using an online questionnaire. We conclude that the ‘Attitude About Reality’ scale used was unsuitable for measuring this barrier, whereas information about the educational background of the participants was a good predictor of the self-reported profession (designer, engineer or scientist). Interestingly, among the three professions, engineers appear to be the cohesive element, since they often have dual backgrounds, whereas very few participants had dual science/design backgrounds. Engineers could, therefore, build a bridge between designers and scientists and, through their integrative role, could guide the HCI community to realizing its full potential.
6.
A multi-protocol RFID reader/writer based on the MSP430 microcontroller is designed. The paper analyzes the RFID concept, its current applications and the system composition, introduces the functional features of the two main chips, presents the hardware architecture of the system, and proposes the software modules and software flowcharts. The system has passed testing, its design meets the corresponding technical specifications, and it has good prospects for market application.
7.
Frequent Itemset Mining (FIM) is one of the most important data mining tasks and the foundation of many others. In the Big Data era, centralized FIM algorithms cannot meet the time and space requirements of mining big data, so Distributed Frequent Itemset Mining (DFIM) algorithms have been designed to meet these challenges. In this paper, the two main paradigms of DFIM algorithms, Local-Global and Redistribution-Mining, are discussed; two algorithms following these paradigms, named LG and RM, are proposed on MapReduce, a popular distributed computing model, and the related work is also reviewed. The experimental results show that the RM algorithm performs better in terms of computation and scalability with the number of sites, and it can serve as the basis for designing MapReduce-based DFIM algorithms. This paper also discusses the main ideas for improving DFIM algorithms based on MapReduce.
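To make the map → shuffle → reduce flow behind MapReduce-based FIM concrete, the sketch below simulates it in a single process for itemsets of size one and two. It is not the LG or RM algorithm from the paper; the transactions and minimum support are made up.

```python
# Single-process simulation of the map/reduce phases for frequent itemset counting.
from itertools import combinations
from collections import defaultdict

def map_phase(transaction):
    """Emit (itemset, 1) pairs for every 1- and 2-itemset in a transaction."""
    items = sorted(set(transaction))
    for size in (1, 2):
        for itemset in combinations(items, size):
            yield itemset, 1

def reduce_phase(pairs, min_support):
    """Aggregate counts per itemset and keep only the frequent ones."""
    counts = defaultdict(int)
    for itemset, one in pairs:
        counts[itemset] += one
    return {k: v for k, v in counts.items() if v >= min_support}

if __name__ == "__main__":
    transactions = [
        ["bread", "milk"],
        ["bread", "diaper", "beer"],
        ["milk", "diaper", "beer"],
        ["bread", "milk", "diaper"],
        ["bread", "milk", "beer"],
    ]
    shuffled = (pair for t in transactions for pair in map_phase(t))
    for itemset, count in sorted(reduce_phase(shuffled, min_support=3).items()):
        print(itemset, count)
```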
8.
Huang Wei, Luo Mingyuan, Zhang Peng, Zha Yufei. Multimedia Tools and Applications, 2021, 80(4): 5945-5975
Multimedia Tools and Applications - The pedestrian re-identification problem (i.e., re-id) is an essential prerequisite in multi-camera video surveillance studies, provided the fact that...
9.
Query language modeling based on relevance feedback has been widely applied to improve the effectiveness of information retrieval. However, intra-query term dependencies (i.e., the dependencies between different query terms and term combinations) have not yet been sufficiently addressed in the existing approaches. This article aims to investigate this issue within a comprehensive framework, namely the Aspect Query Language Model (AM). We propose to extend the AM with a hidden Markov model (HMM) structure to incorporate the intra-query term dependencies and learn the structure of a novel aspect HMM (AHMM) for query language modeling. In the proposed AHMM, the combinations of query terms are viewed as latent variables representing query aspects. They further form an ergodic HMM, where the dependencies between latent variables (nodes) are modeled as the transitional probabilities. The segmented chunks from the feedback documents are considered as observables of the HMM. Then the AHMM structure is optimized by the HMM, which can estimate the prior of the latent variables and the probability distribution of the observed chunks. Our extensive experiments on three large-scale text retrieval conference (TREC) collections have shown that our method not only significantly outperforms a number of strong baselines in terms of both effectiveness and robustness but also achieves better results than the AM and another state-of-the-art approach, namely the latent concept expansion model. © 2014 Wiley Periodicals, Inc.
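The structural idea described above, query aspects as latent states of an ergodic HMM with feedback-document chunks as observations, can be illustrated with a toy forward-algorithm computation. The states, transition and emission probabilities below are invented; none of the paper's AHMM learning is reproduced.

```python
# Toy ergodic HMM with two latent "query aspects" emitting observed text chunks.
STATES = ["aspect_jaguar_animal", "aspect_jaguar_car"]
START = {"aspect_jaguar_animal": 0.5, "aspect_jaguar_car": 0.5}
TRANS = {  # ergodic: every aspect can follow every aspect
    "aspect_jaguar_animal": {"aspect_jaguar_animal": 0.7, "aspect_jaguar_car": 0.3},
    "aspect_jaguar_car": {"aspect_jaguar_animal": 0.3, "aspect_jaguar_car": 0.7},
}
EMIT = {  # probability of emitting an observed chunk given the aspect
    "aspect_jaguar_animal": {"rainforest habitat": 0.6, "engine specs": 0.1, "speed": 0.3},
    "aspect_jaguar_car": {"rainforest habitat": 0.1, "engine specs": 0.6, "speed": 0.3},
}

def forward_likelihood(chunks):
    """P(observed chunk sequence | model) computed with the forward algorithm."""
    alpha = {s: START[s] * EMIT[s][chunks[0]] for s in STATES}
    for chunk in chunks[1:]:
        alpha = {s: sum(alpha[prev] * TRANS[prev][s] for prev in STATES) * EMIT[s][chunk]
                 for s in STATES}
    return sum(alpha.values())

if __name__ == "__main__":
    print(forward_likelihood(["rainforest habitat", "speed"]))
    print(forward_likelihood(["engine specs", "speed"]))
```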