253 query results found; search time 312 ms.
1.
This integrated development environment combines editing, compilation, debugging, and execution in a single tool. It simulates the instruction set of the MCS-96 family of single-chip microcontrollers, providing powerful support for teaching, laboratory work, applications, research, and microcontroller development.
2.
陈世保 《计算机时代》2011,(7):16-17,20
This paper first analyzes the execution-cost model for distributed database queries, then studies the join algorithm, the method of transmitting the joined relations, and the choice of execution site for direct joins. The execution cost of every candidate strategy is computed, the strategy with the minimum execution cost is selected, and the execution site, join method, and transmission method are thereby determined.
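The minimum-cost plan selection described above can be pictured as a small enumeration. The cost model, the candidate plan options, and all numbers below are illustrative assumptions, not the paper's actual model:

```python
# Hypothetical sketch of cost-based plan selection for a distributed join:
# enumerate candidate (execution site, join method, shipping method)
# combinations, estimate each plan's cost, and keep the cheapest.
from itertools import product

def plan_cost(site, method, shipping, stats):
    """Toy cost model: data-transfer cost plus local join cost."""
    r, s = stats["R_rows"], stats["S_rows"]
    # Ship the relation that is not stored at the chosen site.
    moved = s if site == "site_R" else r
    transfer = moved * (0.5 if shipping == "semijoin" else 1.0)
    join = r * s * 0.001 if method == "nested_loop" else (r + s) * 0.01
    return transfer + join

def best_plan(stats):
    candidates = product(["site_R", "site_S"],
                         ["nested_loop", "hash_join"],
                         ["full_relation", "semijoin"])
    return min(candidates, key=lambda p: plan_cost(*p, stats))

stats = {"R_rows": 1000, "S_rows": 200}
print(best_plan(stats))
```

With these invented statistics, executing at R's site with a hash join and semijoin-based shipping wins, because it moves the smaller relation and avoids the quadratic join cost.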
3.
曾凡浪  常瑞  许浩  潘少平  赵永望 《软件学报》2023,34(8):3507-3526
TrustZone, the trusted execution environment technology on ARM processors, provides an isolated, independent execution environment for security-sensitive programs and data on a device. However, the trusted operating system and all trusted applications run in the same trusted environment, so exploiting a vulnerability in any one component can affect every other component in the system. Although ARM introduced the S-EL2 virtualization technology, which mitigates this problem by supporting multiple isolated partitions in the secure world, real partition managers may still contain security threats such as inter-partition information leakage, and current partition-manager designs and implementations lack rigorous mathematical proofs of the isolation guarantees. This paper studies the ARM TrustZone multi-partition architecture in detail, proposes a refinement-based modeling and security-analysis method for TrustZone multiple secure partitions, and completes the modeling and formal verification of the partition manager in the theorem prover Isabelle/HOL. First, a multiple-secure-partition model, RMTEE, is built by stepwise refinement: abstract state machines describe the system's execution and the required security policy, an abstract model of the multiple secure partitions is established and then instantiated into a concrete model of the partition manager, and the event specifications in the concrete model follow the FF-A specification. Second, because existing partition-manager designs cannot satisfy information-flow security verification, a DAC-based access control for inter-partition communication is designed and applied to the modeling and verification of the TrustZone secure partition manager. Third, it is proved that the concrete model...
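The DAC-based inter-partition communication control mentioned in the abstract can be illustrated with a small sketch. This is a hypothetical Python toy, not the verified Isabelle/HOL model; the partition names and policy are invented:

```python
# Hypothetical sketch of DAC-style inter-partition communication control:
# each partition owns an access-control list naming which peers may send
# to it, and the manager consults that list before delivering a message.
class PartitionManager:
    def __init__(self):
        self.acl = {}        # receiver -> set of allowed senders
        self.delivered = []  # (sender, receiver, msg) delivery log

    def grant(self, receiver, sender):
        self.acl.setdefault(receiver, set()).add(sender)

    def send(self, sender, receiver, msg):
        if sender not in self.acl.get(receiver, set()):
            return False     # denied: no information flow occurs
        self.delivered.append((sender, receiver, msg))
        return True

pm = PartitionManager()
pm.grant("trusted_os", "keystore")
assert pm.send("keystore", "trusted_os", "key-handle")
assert not pm.send("untrusted_app", "trusted_os", "probe")
```

The point of the formal-verification effort is precisely that a check like this must hold on every reachable state, which is what the refinement proofs establish for the real model.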
4.
Running time is one of the important properties of a computer program. However, the commonly used time-complexity analysis techniques are based on abstract algorithms rather than actual programs, while most program verification techniques are not well suited to verifying the running time of actual programs. To address this problem, a verification framework for running time is proposed: it applies to real code while remaining, like complexity analysis, independent of the programming language. In settings with strict running-time requirements, it can be used to improve software reliability.
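One way to picture a running-time check on real code, as opposed to an abstract algorithm, is to count elementary steps during execution and compare them against a declared bound. This is an illustrative sketch, not the paper's framework; the step-counting discipline and the n² bound are assumptions:

```python
# Hypothetical sketch: instrument a concrete program with a step counter
# and check the count against a declared running-time bound.
def insertion_sort_counted(xs):
    xs = list(xs)
    steps = 0
    for i in range(1, len(xs)):
        j = i
        while j > 0 and xs[j - 1] > xs[j]:
            xs[j - 1], xs[j] = xs[j], xs[j - 1]  # one counted swap
            j -= 1
            steps += 1
        steps += 1  # count the loop-exit comparison
    return xs, steps

def check_bound(n, steps):
    # Declared bound: insertion sort performs at most n * n counted steps.
    return steps <= n * n

data = [5, 3, 1, 4, 2]
sorted_xs, steps = insertion_sort_counted(data)
assert sorted_xs == [1, 2, 3, 4, 5]
assert check_bound(len(data), steps)
```

Counting steps rather than measuring wall-clock time keeps the check independent of the machine, which mirrors the language-independence the abstract claims for the framework.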
5.
This work presents a general mechanism for executing specifications that comply with given invariants, which may be expressed in different formalisms and logics. We exploit Maude’s reflective capabilities and its properties as a general semantic framework to provide a generic strategy that allows us to execute Maude specifications taking into account user-defined invariants. The strategy is parameterized by the invariants and by the logic in which such invariants are expressed. We experiment with different logics, providing examples for propositional logic, (finite future time) linear temporal logic and metric temporal logic.
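The core idea of an invariant-parameterized execution strategy can be sketched in Python (rather than Maude): only take steps whose resulting state satisfies a user-supplied invariant. The state space, rules, and invariant below are invented for illustration:

```python
# Hypothetical sketch: an execution strategy parameterized by an invariant.
# A step is taken only if the successor state still satisfies the invariant.
def execute(state, rules, invariant, max_steps=100):
    """Apply the first enabled rule whose successor keeps the invariant."""
    assert invariant(state), "initial state must satisfy the invariant"
    for _ in range(max_steps):
        for rule in rules:
            nxt = rule(state)
            if invariant(nxt):
                state = nxt
                break
        else:
            return state  # no invariant-preserving step exists: stop
    return state

# Toy system: a counter that may step +3 or -1, with invariant 0 <= x <= 10.
rules = [lambda x: x + 3, lambda x: x - 1]
final = execute(0, rules, lambda x: 0 <= x <= 10, max_steps=5)
print(final)
```

Starting from 0, the strategy takes +3 steps (0, 3, 6, 9) until +3 would violate the invariant, then falls back to -1 steps, ending at 7 after five steps.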
6.
This article is an experience report about the application of a top-down strategy to use and embed an architecture reconstruction approach in the incremental software development process of the Philips MRI scanner, a representative large and complex software-intensive system. The approach is an iterative process to construct execution views without being overwhelmed by the system size and complexity. An execution view contains architectural information that describes what the software of a software-intensive system does at runtime and how it does this. The application of the strategy is illustrated with a case study, the construction of an up-to-date execution view for the start-up process of the Philips MRI scanner. The construction of this view helped the development organization to quickly reduce the start-up time of the scanner by about 30%, and to set up a new system benchmark for assuring system performance through future evolution steps. The report provides detailed information about the application of the top-down strategy, including how it supports top-down analysis, communication within the development organization, and the aspects that influence the use of the top-down strategy in other contexts.
7.
This paper addresses scheduling problems for tasks with release and execution times. We present a number of efficient, easy-to-implement algorithms for constructing schedules of minimum makespan when the number of distinct task execution times is fixed. For a set of independent tasks, our algorithm in the single-processor case runs in time linear in the number of tasks; with precedence constraints, our algorithm runs in time linear in the sum of the number of tasks and the size of the precedence constraints. In the multi-processor case, our algorithm constructs minimum makespan schedules for independent tasks with uniform execution times. The algorithm runs in O(n log m) time where n is the number of tasks and m is the number of processors. Received September 25, 1997; revised June 11, 1998.
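For the multi-processor case with uniform execution times, a heap-based greedy that schedules tasks in release order on the earliest-free processor captures the flavor of the O(n log m) result. A minimal sketch with invented task data (the paper's actual algorithm may differ):

```python
# Hypothetical sketch: independent tasks with release times and a uniform
# execution time p on m identical processors. Each task, taken in release
# order, starts on the earliest-free processor.
import heapq

def makespan(releases, p, m):
    free = [0.0] * m            # min-heap of processor free times
    heapq.heapify(free)
    end = 0.0
    for r in sorted(releases):  # earliest release first
        start = max(r, heapq.heappop(free))
        finish = start + p
        heapq.heappush(free, finish)
        end = max(end, finish)
    return end

# Four tasks, execution time 3, two processors:
print(makespan([0, 0, 0, 2], p=3, m=2))
```

Here two tasks start at time 0, the third waits until a processor frees up at time 3, and the fourth (released at 2) runs alongside it, giving a makespan of 6. Each task costs one heap pop and push, i.e. O(log m) per task once the tasks are in release order.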
8.
Recently, High Performance Computing (HPC) platforms have been employed to realize many computationally demanding applications in signal and image processing. These applications require real-time performance constraints to be met. These constraints include latency as well as throughput. In order to meet these performance requirements, efficient parallel algorithms are needed. These algorithms must be engineered to exploit the computational characteristics of such applications. In this paper we present a methodology for mapping a class of adaptive signal processing applications onto HPC platforms such that the throughput performance is optimized. We first define a new task model using the salient computational characteristics of a class of adaptive signal processing applications. Based on this task model, we propose a new execution model. In the earlier linear pipelined execution model, the task mapping choices were restricted. The new model permits flexible task mapping choices, leading to improved throughput performance compared with the previous model. Using the new model, a three-step task mapping methodology is developed. It consists of (1) a data remapping step, (2) a coarse resource allocation step, and (3) a fine performance tuning step. The methodology is demonstrated by designing parallel algorithms for modern radar and sonar signal processing applications. These are implemented on IBM SP2 and Cray T3E, state-of-the-art HPC platforms, to show the effectiveness of our approach. Experimental results show significant performance improvement over those obtained by previous approaches. Our code is written using C and the Message Passing Interface (MPI). Thus, it is portable across various HPC platforms. Received April 8, 1998; revised February 2, 1999.
9.

Speculative execution is one of the key issues to boost the performance of future generation microprocessors. In this paper, we introduce a novel approach to evaluate the effects of branch and value prediction, which allow the processor to execute instructions beyond the limits of control and true data dependences. Until now, almost all the estimations of their performance potential under different scenarios have been obtained using trace-driven or execution-driven simulation. Occasionally, some simple deterministic models have been used. We employ an analytical model based on recently introduced Fluid Stochastic Petri Nets (FSPNs) in order to capture the dynamic behavior of an ILP processor with aggressive use of prediction techniques and speculative execution. Here we define the FSPN model, derive the state equations for the underlying stochastic process and present performance evaluation results to illustrate its usage in deriving measures of interest. Our implementation-independent stochastic modeling framework reveals considerable potential for further research in this area using numerical solution of systems of partial differential equations and/or discrete-event simulation of FSPN models.
10.
Business processes leave trails in a variety of data sources (e.g., audit trails, databases, and transaction logs). Hence, every process instance can be described by a trace, i.e., a sequence of events. Process mining techniques are able to extract knowledge from such traces and provide a welcome extension to the repertoire of business process analysis techniques. Recently, process mining techniques have been adopted in various commercial BPM systems (e.g., BPM|one, Futura Reflect, ARIS PPM, Fujitsu Interstage, Businesscape, Iontas PDF, and QPR PA). Unfortunately, traditional process discovery algorithms have problems dealing with less structured processes. The resulting models are difficult to comprehend or even misleading. Therefore, we propose a new approach based on trace alignment. The goal is to align traces in such a way that event logs can be explored easily. Trace alignment can be used to explore the process in the early stages of analysis and to answer specific questions in later stages of analysis. Hence, it complements existing process mining techniques focusing on discovery and conformance checking. The proposed techniques have been implemented as plugins in the ProM framework. We report the results of trace alignment on one synthetic and two real-life event logs, and show that trace alignment has significant promise in process diagnostic efforts.
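Pairwise trace alignment can be sketched as a Needleman-Wunsch-style dynamic program over event sequences, inserting gaps ("-") so that matching activities line up. The scoring scheme and the example traces below are illustrative; the ProM implementation performs a more elaborate multiple alignment:

```python
# Hypothetical sketch: align two event traces with a classic global
# alignment DP (match +1, mismatch/gap -1), then backtrack to recover
# the gapped traces.
def align(t1, t2, match=1, mismatch=-1, gap=-1):
    n, m = len(t1), len(t2)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if t1[i - 1] == t2[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + s,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # Backtrack to recover the aligned (gapped) traces.
    a1, a2, i, j = [], [], n, m
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1]
                + (match if t1[i - 1] == t2[j - 1] else mismatch)):
            a1.append(t1[i - 1]); a2.append(t2[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            a1.append(t1[i - 1]); a2.append("-"); i -= 1
        else:
            a1.append("-"); a2.append(t2[j - 1]); j -= 1
    return a1[::-1], a2[::-1]

a, b = align(["register", "check", "pay"], ["register", "pay", "archive"])
print(a)
print(b)
```

The aligned traces make the deviation visible at a glance: one trace skips "check" while the other adds "archive", with "register" and "pay" lined up in the same columns.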