1.
We present an algorithm for detecting periodicity in sequences produced by repeated application of a given function. Our algorithm uses logarithmic memory with high probability, runs in linear time, and is guaranteed to stop within the second loop through the cycle. We also present a partitioning technique that offers a time/memory tradeoff. Our algorithm is especially well suited for sequences where the cycle length is typically small compared to the length of the acyclic prefix.
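For context, the underlying problem is classical cycle detection: given a function f and a start value x0, find the cycle length and the length of the acyclic prefix of the sequence x0, f(x0), f(f(x0)), .... The sketch below implements Brent's well-known constant-memory algorithm as a baseline for that problem; it is not the authors' probabilistic logarithmic-memory algorithm, which the abstract does not spell out.

```python
# Baseline cycle detection (Brent's algorithm), included only as a reference
# implementation of the problem setting; NOT the paper's algorithm.

def brent(f, x0):
    """Return (lam, mu): cycle length and index where the cycle begins."""
    power = lam = 1
    tortoise, hare = x0, f(x0)
    # Phase 1: find the cycle length lam using power-of-two search windows.
    while tortoise != hare:
        if power == lam:          # start a new window
            tortoise = hare
            power *= 2
            lam = 0
        hare = f(hare)
        lam += 1
    # Phase 2: find the length of the acyclic prefix mu.
    tortoise = hare = x0
    for _ in range(lam):
        hare = f(hare)
    mu = 0
    while tortoise != hare:
        tortoise, hare = f(tortoise), f(hare)
        mu += 1
    return lam, mu

# Example: iterate f(x) = (x*x + 1) mod 255 starting from 3.
print(brent(lambda x: (x * x + 1) % 255, 3))
```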
2.
Pointing tasks in human–computer interaction obey certain speed–accuracy tradeoff rules. In general, the more accurate the task to be accomplished, the longer it takes and vice versa. Fitts’ law models the speed–accuracy tradeoff effect in pointing as imposed by the task parameters, through Fitts’ index of difficulty (Id) based on the ratio of the nominal movement distance and the size of the target. Operating with different speed or accuracy biases, performers may utilize more or less area than the target specifies, introducing another subjective layer of speed–accuracy tradeoff relative to the task specification. A conventional approach to overcome the impact of the subjective layer of speed–accuracy tradeoff is to use the a posteriori “effective” pointing precision We in lieu of the nominal target width W. Such an approach has lacked a theoretical or empirical foundation. This study investigates the nature and the relationship of the two layers of speed–accuracy tradeoff by systematically controlling both Id and the index of target utilization Iu in a set of four experiments. Their results show that the impacts of the two layers of speed–accuracy tradeoff are not fundamentally equivalent. The use of We could indeed compensate for the difference in target utilization, but not completely. More logical Fitts’ law parameter estimates can be obtained by the We adjustment, although its use also lowers the correlation between pointing time and the index of difficulty. The study also shows the complex interaction effect between Id and Iu, suggesting that a simple and complete model accommodating both layers of speed–accuracy tradeoff may not exist.
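For reference, a minimal sketch of the two quantities the abstract manipulates: the Shannon-form index of difficulty Id = log2(D/W + 1) and the a posteriori effective width We = 4.133 × SD of the observed selection endpoints. The constant 4.133 and the Shannon formulation are the conventional choices in the Fitts' law literature and are assumptions here; the paper's exact experimental procedure is not reproduced.

```python
import math
import statistics

def index_of_difficulty(distance, width):
    """Shannon formulation of Fitts' index of difficulty: Id = log2(D/W + 1)."""
    return math.log2(distance / width + 1.0)

def effective_width(endpoints):
    """Post-hoc 'effective' width We = 4.133 * SD of the observed endpoints,
    the conventional adjustment for the performer's own speed-accuracy bias."""
    return 4.133 * statistics.stdev(endpoints)

# Hypothetical trial: nominal task D = 256 px, W = 32 px, with observed
# selection endpoints scattered around the target centre at x = 256.
D, W = 256.0, 32.0
endpoints = [250.1, 259.3, 254.8, 261.0, 248.7, 256.4, 252.9]

Id = index_of_difficulty(D, W)                            # nominal difficulty
Ide = index_of_difficulty(D, effective_width(endpoints))  # effective difficulty
print(f"Id = {Id:.2f} bits, Ide = {Ide:.2f} bits")
```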
3.
In a series of experiments, participants were required to keep track of 1 or 2 working memory (WM) objects, having to update their values in 80% of the trials. Updating cost, defined as the difference between update and non-update trials, was larger when 2 objects were involved than when only 1 object was involved. This finding was interpreted as evidence that the updating process encompasses both objects in WM, even though only 1 of them is actually updated. This feature of WM updating is limited to objects defined as "updateable" throughout the trial sequence. The results are explained by the need to reprogram the phonological loop when updating, or by the need for desynchronization followed by resynchronization of WM contents. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
4.
The two basic performance parameters that capture the complexity of any VLSI chip are the area of the chip, A, and the computation time, T. A systematic approach for establishing lower bounds on A is presented. This approach relates A to the bisection flow. A theory of problem transformation based on the bisection flow, which captures both AT² and A complexity, is developed. A fundamental problem, namely element uniqueness, is chosen as a computational prototype. It is shown under general input/output protocol assumptions that any chip that decides whether n elements (each with (1+ε)·log n bits) are unique must have bisection flow Ω(n log n), and thus AT² = Ω(n² log² n) and A = Ω(n log n). A theory of VLSI transformability reveals the inherent AT² and A complexity of a large class of related problems. This work was supported in part by the Semiconductor Research Corporation under contract RSCH 84-06-049-6.
5.
This paper proposes ABEAR, an ant colony optimization (ACO) based routing protocol for Ad Hoc networks that balances network lifetime against other network performance metrics. The protocol sends artificial ants on demand for route discovery and selects the next-hop node for forwarding by jointly considering the residual pheromone concentration, the next hop's remaining energy, the link quality around the node, and congestion, steering traffic away from paths where the channel is heavily used. This reduces the energy wasted on channel collisions, packet loss, and retransmissions, shortens transmission delay, and improves throughput. The protocol also uses a cross-layer mechanism that, based on MAC-layer activity, puts some idle nodes to sleep to save energy while preserving network connectivity. Simulations show that, compared with AODV, ABEAR achieves considerable improvements in network lifetime, packet delivery ratio, and average end-to-end delay.
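A minimal sketch of the kind of next-hop selection rule the abstract describes, combining pheromone, residual energy, link quality, and congestion into a forwarding probability. The product form and the exponents are illustrative assumptions; ABEAR's actual formula is not given in the abstract.

```python
import random

def choose_next_hop(neighbors, alpha=1.0, beta=1.0, gamma=1.0, delta=1.0):
    """Pick a forwarding neighbor with probability proportional to a product of
    pheromone, residual energy, link quality, and (inverse) congestion.
    The weighting exponents and the product form are assumptions for
    illustration only."""
    def score(n):
        return (n["pheromone"] ** alpha *
                n["energy"] ** beta *
                n["link_quality"] ** gamma *
                (1.0 / (1.0 + n["congestion"])) ** delta)

    scores = [score(n) for n in neighbors]
    r = random.uniform(0.0, sum(scores))
    acc = 0.0
    for n, s in zip(neighbors, scores):
        acc += s
        if r <= acc:
            return n
    return neighbors[-1]

# Hypothetical one-hop neighborhood.
neighbors = [
    {"id": "A", "pheromone": 0.8, "energy": 0.5, "link_quality": 0.9, "congestion": 0.2},
    {"id": "B", "pheromone": 0.4, "energy": 0.9, "link_quality": 0.7, "congestion": 0.1},
]
print(choose_next_hop(neighbors)["id"])
```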
6.
In this paper, a novel one-dimensional correlation filter based class-dependence feature analysis (1D-CFA) method is presented for robust face recognition. Compared with the original CFA, which works in the two-dimensional (2D) image space, 1D-CFA encodes the image data as vectors. In 1D-CFA, a new correlation filter called the optimal extra-class origin output tradeoff filter (OEOTF), which is designed in the low-dimensional principal component analysis (PCA) subspace, is proposed for effective feature extraction. Experimental results on benchmark face databases, such as FERET, AR, and FRGC, show that OEOTF-based 1D-CFA consistently outperforms other state-of-the-art face recognition methods. This demonstrates the effectiveness and robustness of the novel method.
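A rough sketch of the general class-dependence feature analysis idea described here: project a face vector into a low-dimensional PCA subspace and use its correlation (inner-product) outputs against a bank of class-specific filters as the feature vector. The OEOTF filter design itself is not reproduced; the filters below are placeholders.

```python
import numpy as np

def pca_basis(train, dim):
    """Return the mean and top-'dim' principal axes of row-vector training data."""
    mean = train.mean(axis=0)
    centered = train - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # rows of vt are axes
    return mean, vt[:dim]

def cfa_features(x, mean, basis, filters):
    """Project a face vector into the PCA subspace and correlate it with each
    class-specific filter; the vector of correlation outputs is the feature."""
    z = basis @ (x - mean)
    return np.array([f @ z for f in filters])

# Hypothetical data: 20 training faces of dimension 1024, 3 class filters.
rng = np.random.default_rng(0)
train = rng.normal(size=(20, 1024))
mean, basis = pca_basis(train, dim=10)
filters = rng.normal(size=(3, 10))   # placeholder for OEOTF-designed filters
probe = rng.normal(size=1024)
print(cfa_features(probe, mean, basis, filters))
```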
7.
Cost optimization for workflow applications described by a Directed Acyclic Graph (DAG) under deadline constraints is a fundamental and intractable problem on Grids. In this paper, an effective and efficient heuristic called DET (Deadline Early Tree) is proposed. An early feasible schedule for a workflow application is defined as an Early Tree. According to the Early Tree, all tasks are grouped and the Critical Path is given. For critical activities, the optimal cost solution under the deadline constraint can be obtained by a dynamic programming strategy, and the whole deadline is segmented into time windows according to the slack time float. For non-critical activities, an iterative procedure is proposed to maximize time windows while maintaining the precedence constraints among activities. Based on the time window allocations, a local optimization method is developed to minimize execution costs. The two local cost optimization methods can lead to a global near-optimal solution. Experimental results show that DET outperforms two other recent leveling algorithms. Moreover, the deadline division strategy adopted by DET can be applied to all feasible deadlines.
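A minimal sketch of the "early schedule" ingredient that DET builds on: earliest start/finish times and one critical path of a task DAG under assumed task durations. The deadline segmentation, dynamic programming, and cost optimization steps of DET are not shown.

```python
from collections import defaultdict

def early_schedule(durations, edges):
    """Earliest start/finish times and one critical path of a task DAG.
    'durations' maps task -> execution time; 'edges' are (pred, succ) pairs."""
    succs, preds = defaultdict(list), defaultdict(list)
    for u, v in edges:
        succs[u].append(v)
        preds[v].append(u)
    # Topological order via Kahn's algorithm.
    indeg = {t: len(preds[t]) for t in durations}
    order, ready = [], [t for t in durations if indeg[t] == 0]
    while ready:
        t = ready.pop()
        order.append(t)
        for s in succs[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    est, eft = {}, {}
    for t in order:
        est[t] = max((eft[p] for p in preds[t]), default=0.0)
        eft[t] = est[t] + durations[t]
    # Trace one critical path back from the task with the latest finish time.
    path, t = [], max(eft, key=eft.get)
    while True:
        path.append(t)
        crit = [p for p in preds[t] if abs(eft[p] - est[t]) < 1e-9]
        if not crit:
            break
        t = crit[0]
    return est, eft, list(reversed(path))

# Hypothetical four-task workflow.
durations = {"a": 2.0, "b": 3.0, "c": 1.0, "d": 4.0}
edges = [("a", "b"), ("a", "c"), ("b", "d"), ("c", "d")]
print(early_schedule(durations, edges))
```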
8.
9.
In this paper, we present the ARIA media processing workflow architecture, which processes, filters, and fuses sensory inputs and actuates responses in real time. The components of the architecture are programmable and adaptable; i.e., the delay, size, and quality/precision characteristics of the individual operators can be controlled via a number of parameters. Each data object processed by qStream components is subject to transformations based on the parameter values. For instance, the quality of an output data object and the corresponding processing delay and resource usage depend on the values assigned to parameters of the operators in the object flow path. In Candan, Peng, Ryu, Chatha, Mayer (Efficient stream routing in quality- and resource-adaptive flow architectures. In: Workshop on multimedia information systems, 2004), we introduced a class of flow optimization problems that promote creation and delivery of small-delay or small-resource-usage objects to the actuators in single-sensor, single-actuator workflows. In this paper, we extend our attention to multi-sensor media processing workflow scenarios. The algorithms we present take into account the implicit dependencies between various system parameters, such as resource consumption and object sizes. We experimentally show the effectiveness and efficiency of the algorithms.
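A toy sketch of the parameterized-operator model the abstract describes: each operator scales an object's quality and size and adds delay according to its parameter settings, so what reaches an actuator depends on the parameters chosen along the flow path. The linear cost model and parameter names are illustrative assumptions, not the ARIA/qStream API.

```python
from dataclasses import dataclass

@dataclass
class ObjState:
    size: float      # bytes
    quality: float   # 0..1
    delay: float     # accumulated seconds

def apply_operator(obj, quality_factor, size_factor, unit_cost):
    """One parameterized operator: scales the object's quality and size and
    adds a processing delay proportional to the incoming size (assumed model)."""
    return ObjState(size=obj.size * size_factor,
                    quality=obj.quality * quality_factor,
                    delay=obj.delay + unit_cost * obj.size)

def run_path(obj, operators):
    """Push one sensory object through an operator path and return its final state."""
    for params in operators:
        obj = apply_operator(obj, **params)
    return obj

# Hypothetical two-operator path: a downsampler followed by an encoder.
path = [
    {"quality_factor": 0.9, "size_factor": 0.5, "unit_cost": 1e-6},
    {"quality_factor": 0.8, "size_factor": 0.3, "unit_cost": 2e-6},
]
print(run_path(ObjState(size=2_000_000, quality=1.0, delay=0.0), path))
```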
10.
A monthly matched-bidding mechanism for regional electricity markets is designed that accounts for transmission prices and transmission-loss discounts and settles at the market clearing price (MCP). It mainly addresses the problem of competitive trading between provincial grids in a regional market where cross-provincial transmission charges apply. Building on a theoretical analysis, the paper describes the detailed trading procedure of the proposed bidding mechanism and presents a numerical example, aiming to provide a theoretical basis and reference for the design of bidding and trading mechanisms in China's regional electricity markets.
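A toy sketch of uniform-price (MCP) matching of monthly bids, with sell offers uplifted by an assumed per-MWh transmission (wheeling) charge and loss factor before matching. The cost adjustment and clearing rule are illustrative assumptions, not the mechanism designed in the paper.

```python
def clear_market(buy_bids, sell_offers, wheeling_charge=0.0, loss_rate=0.0):
    """Uniform-price (MCP) matching of monthly bids.
    buy_bids / sell_offers: lists of (price, quantity).
    Sell offers are uplifted by an assumed per-MWh wheeling charge and a loss
    factor before matching; the clearing price is the last accepted offer."""
    buys = sorted(buy_bids, key=lambda b: -b[0])                    # highest bid first
    sells = sorted(((p / (1.0 - loss_rate) + wheeling_charge, q)
                    for p, q in sell_offers), key=lambda s: s[0])   # cheapest first
    traded, mcp = 0.0, None
    i = j = 0
    bq = sq = 0.0
    while i < len(buys) and j < len(sells) and buys[i][0] >= sells[j][0]:
        if bq == 0.0:
            bq = buys[i][1]
        if sq == 0.0:
            sq = sells[j][1]
        q = min(bq, sq)
        traded += q
        mcp = sells[j][0]
        bq -= q
        sq -= q
        if bq == 0.0:
            i += 1
        if sq == 0.0:
            j += 1
    return mcp, traded

# Hypothetical bids (price in yuan/MWh, quantity in MWh).
buys = [(420, 300), (400, 200), (360, 100)]
sells = [(330, 250), (350, 200), (390, 150)]
print(clear_market(buys, sells, wheeling_charge=15.0, loss_rate=0.03))
```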