Full-text access type
Paid full text | 280 papers |
Free | 78 papers |
Free (domestic) | 37 papers |
Subject category
Electrical engineering | 35 papers |
General | 35 papers |
Metalworking | 1 paper |
Mechanical engineering & instrumentation | 10 papers |
Building science | 1 paper |
Mining engineering | 1 paper |
Energy & power engineering | 6 papers |
Hydraulic engineering | 1 paper |
Weapons industry | 2 papers |
Radio & electronics | 48 papers |
General industrial technology | 11 papers |
Metallurgical industry | 1 paper |
Automation technology | 243 papers |
Publication year
2025 | 4 papers |
2024 | 30 papers |
2023 | 25 papers |
2022 | 35 papers |
2021 | 30 papers |
2020 | 29 papers |
2019 | 16 papers |
2018 | 8 papers |
2017 | 16 papers |
2016 | 10 papers |
2015 | 10 papers |
2014 | 15 papers |
2013 | 13 papers |
2012 | 15 papers |
2011 | 21 papers |
2010 | 15 papers |
2009 | 17 papers |
2008 | 19 papers |
2007 | 12 papers |
2006 | 11 papers |
2005 | 7 papers |
2004 | 4 papers |
2003 | 6 papers |
2002 | 7 papers |
2001 | 4 papers |
2000 | 1 paper |
1999 | 4 papers |
1998 | 5 papers |
1997 | 2 papers |
1996 | 2 papers |
1994 | 2 papers |
Sort order: 395 results found in total (search time: 15 ms)
322.
Zhengxing Huang, W.M.P. van der Aalst, Xudong Lu, Huilong Duan 《Data & Knowledge Engineering》2011,70(1):127-145
Efficient resource allocation is a complex and dynamic task in business process management. Although a wide variety of mechanisms are emerging to support resource allocation in business process execution, these approaches do not consider performance optimization. This paper introduces a mechanism in which the resource allocation optimization problem is modeled as a Markov decision process and solved using reinforcement learning. The proposed mechanism observes its environment to learn policies that optimize resource allocation in business process execution. The experimental results indicate that the proposed approach outperforms well-known heuristic and hand-coded strategies, and may improve the current state of business process management.
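As a hedged illustration of the MDP-plus-reinforcement-learning idea (the paper's exact state, action, and reward definitions are not reproduced here), a minimal tabular Q-learning sketch in Python that allocates a work item to one of several hypothetical resources:

```python
import random
from collections import defaultdict

# Minimal tabular Q-learning sketch for allocating a work item to a resource.
# Illustrative only: the paper's state, action and reward definitions differ.
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
RESOURCES = ["r1", "r2", "r3"]            # hypothetical resource pool
Q = defaultdict(float)                    # Q[(state, resource)]

def choose_resource(state):
    """Epsilon-greedy allocation of the current work item."""
    if random.random() < EPSILON:
        return random.choice(RESOURCES)
    return max(RESOURCES, key=lambda r: Q[(state, r)])

def update(state, resource, reward, next_state):
    """One-step Q-learning update after observing the allocation's outcome."""
    best_next = max(Q[(next_state, r)] for r in RESOURCES)
    Q[(state, resource)] += ALPHA * (reward + GAMMA * best_next - Q[(state, resource)])
```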
323.
An adaptive fuzzy Q-learning (AFQL) method based on fuzzy inference systems (FIS) is proposed. The FIS, realized by a normalized radial basis function (NRBF) neural network, is used to approximate the Q-value function, whose input is composed of the state and action. The rules of the FIS are created incrementally according to the novelty of each state-action pair. Moreover, the premise and consequent parts of the FIS are updated using an extended Kalman filter (EKF). The action applied to the environment is the one with the maximum FIS output in the current state, generated through an optimization method. Simulation results on the wall-following task for mobile robots and the inverted pendulum balancing problem demonstrate the superiority and applicability of the proposed AFQL method.
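A minimal sketch of the NRBF Q-value approximation over concatenated (state, action) inputs, assuming a fixed set of rule centers; the paper grows rules by novelty and updates premise/consequent parameters with an EKF, which this sketch replaces with a plain gradient step:

```python
import numpy as np

# Sketch of a normalized-RBF Q-function: each RBF center plays the role of a
# fuzzy rule, and the normalized activations weight the consequent parameters.
class NRBFQ:
    def __init__(self, centers, width=0.5):
        self.centers = np.asarray(centers)    # one center per fuzzy rule
        self.width = width
        self.w = np.zeros(len(self.centers))  # consequent weights

    def _phi(self, x):
        d2 = np.sum((self.centers - x) ** 2, axis=1)
        g = np.exp(-d2 / (2.0 * self.width ** 2))
        return g / (g.sum() + 1e-12)          # the "normalized" in NRBF

    def q(self, state, action):
        return float(self.w @ self._phi(np.append(state, action)))

    def update(self, state, action, target, lr=0.1):
        phi = self._phi(np.append(state, action))
        self.w += lr * (target - self.w @ phi) * phi

qfun = NRBFQ(centers=np.random.randn(10, 3))  # e.g. 2-D state + scalar action
```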
324.
This paper proposes two methods that give intelligence to automatically guided vehicles (AGVs). To drive AGVs autonomously, two types of problems need to be overcome: the AGV navigation problem and the collision avoidance problem. The first problem has been well known since the 1980s. A new method based on feature scene recognition and acquisition is proposed, in which a sparse distributed memory (SDM) neural network is employed for scene recognition and acquisition; the navigation route for the AGV is learned with Q-learning based on the recognized and acquired scenes. The second problem is described as mutual understanding of behaviors between AGVs, and a method for such mutual understanding is likewise proposed using Q-learning. The two methods are combined to drive multiple AGVs autonomously to deliver raw materials between machine tools in a factory, and they are incorporated into the AGVs as machine intelligence. Experimental simulations verify that the first method can guide an AGV along a suitable route and that the second method can acquire knowledge for mutual understanding of the AGVs' behaviors.
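A hedged sketch of the navigation half only: Q-learning over scene identifiers, with the SDM scene recognizer reduced to a placeholder discretization stub (action names and the rounding-based discretization are illustrative, not the paper's design):

```python
import random
from collections import defaultdict

# Q-learning over recognized scenes; the SDM recognizer is stubbed out.
ACTIONS = ["forward", "left", "right", "stop"]
Q = defaultdict(float)

def recognize_scene(sensor_readings):
    # Stand-in for the sparse distributed memory (SDM) scene recognizer.
    return tuple(round(v, 1) for v in sensor_readings)

def act(scene, epsilon=0.1):
    """Epsilon-greedy action selection for the current scene."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(scene, a)])

def learn(scene, action, reward, next_scene, alpha=0.1, gamma=0.95):
    """One-step Q-learning update along the navigation route."""
    best = max(Q[(next_scene, a)] for a in ACTIONS)
    Q[(scene, action)] += alpha * (reward + gamma * best - Q[(scene, action)])
```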
325.
This paper studies the cooperation problem between the non-ball-possessing players and the ball-possessing player in robot soccer. A robot decides its target position according to the positions of the robots and the ball in the current state. Q-learning is applied to determine the target position, fuzzy theory is adopted for partitioning the field and assigning roles, and the learning results are verified by simulation experiments.
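The fuzzy field partition and role assignment can be hinted at with a small sketch; the triangular memberships and the three-zone split below are assumptions for illustration, not the paper's actual fuzzy design:

```python
# Triangular memberships over the pitch's x-coordinate decide a robot's role.
def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def assign_role(x, length=100.0):
    """Return the role whose field-zone membership is highest at position x."""
    memberships = {
        "defender":   tri(x, -1.0, 0.0, length / 2),
        "midfielder": tri(x, 0.0, length / 2, length),
        "attacker":   tri(x, length / 2, length, length + 1.0),
    }
    return max(memberships, key=memberships.get)

print(assign_role(90.0))  # -> "attacker" for a robot deep in the opponent half
```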
326.
Taking the minimum annual cost of the heating network as the objective function, an ant colony algorithm based on a Q-learning rule is introduced to establish a heating-network optimization algorithm. In a worked example, the specific frictional resistance method, the simulated annealing algorithm, and the Q-learning-rule ant colony algorithm are compared; the Q-learning-rule ant colony algorithm achieves the lowest minimum annual cost for the heating network.
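A Q-learning-rule ant colony algorithm resembles the Ant-Q family, where pheromone values on edges are revised with a Q-learning-style update; a sketch under that assumption (the heating-network annual-cost objective itself, with pipe sizing and pumping costs, is not modeled here):

```python
import numpy as np

# Ant-Q style update: edge pheromone values double as Q-values.
N = 5                                    # hypothetical number of network nodes
AQ = np.ones((N, N))                     # pheromone / Q value per directed edge

def ant_q_update(r, s, delta, alpha=0.1, gamma=0.3):
    """AQ(r,s) <- (1-alpha)*AQ(r,s) + alpha*(delta + gamma * max_z AQ(s,z))."""
    AQ[r, s] = (1 - alpha) * AQ[r, s] + alpha * (delta + gamma * AQ[s].max())

def choose_next(r, visited):
    """Greedy choice of the next node by pheromone value, skipping visited nodes."""
    scores = AQ[r].copy()
    scores[list(visited)] = -np.inf
    return int(np.argmax(scores))
```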
327.
Vehicle emissions at intersections are complex and difficult to capture in an explicit mathematical model, especially when initial queue lengths are taken into account. Q-learning is a model-free reinforcement learning algorithm that learns an optimal control policy through trial-and-error interaction with the environment. This paper proposes an emission-oriented traffic signal control scheme based on Q-learning. Using the simulation platform USTCMTS2.0, the optimal signal timing under different per-phase queue lengths is found through repeated trial-and-error learning. A fuzzy initialization of the Q-function is added to improve the convergence speed of Q-learning and accelerate the learning process. Simulation results show that the reinforcement learning algorithm performs well: compared with Hideki's method, average vehicle emissions are reduced by 13.9% under high traffic volume, and the fuzzy initialization of the Q-values greatly accelerates the convergence of the Q-function.
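A hedged sketch of the fuzzy initialization idea: Q-values are seeded from coarse fuzzy estimates of queue pressure so learning starts from an informed guess rather than zeros. The queue-length bins, actions, and prior below are illustrative assumptions, not the paper's design:

```python
# Seed a Q-table from a crude fuzzy prior over queue-length bins.
QUEUE_BINS = ["short", "medium", "long"]
ACTIONS = ["extend_green", "switch_phase"]

def fuzzy_prior(queue_served, queue_waiting, action):
    """Crude prior: extending green looks better when the served queue is long."""
    pressure = {"short": 0.2, "medium": 0.5, "long": 0.9}
    if action == "extend_green":
        return pressure[queue_served] - pressure[queue_waiting]
    return pressure[queue_waiting] - pressure[queue_served]

# Q[(served_bin, waiting_bin, action)] starts from the fuzzy estimate and is
# then refined by ordinary Q-learning updates during simulation.
Q = {
    (s, w, a): fuzzy_prior(s, w, a)
    for s in QUEUE_BINS for w in QUEUE_BINS for a in ACTIONS
}
```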
328.
Content Interest forwarding is a prominent research area in Information-Centric Networking (ICN). An efficient forwarding strategy can significantly improve data retrieval latency, origin server load, network congestion, and overhead. State-of-the-art work is either flooding-driven, trying to minimize the adverse effects of Interest flooding, or path-driven, trying to minimize the additional cost of maintaining routing information; both approaches are less efficient due to broadcast-storm issues and excessive overhead. The proposed protocol aims to forward Interests to the nearest cache without requiring FIB construction, with significant improvement in latency and overhead. This paper presents the feasibility of integrating a reinforcement learning based Q-learning strategy for forwarding in ICN. By revising Q-learning to address the inherent challenges, we introduce Q-learning based Interest packet and Data packet forwarding mechanisms, namely IPQ-learning and DPQ-learning. Each node in the network acts as an agent that learns from historical events and forwards a packet to the best next hop according to its Q value, so that content can be fetched along the fastest possible route; every action in turn becomes a learning step that improves the accuracy of the Q values. Performance investigation of the protocol in ndnSIM-2.0 shows improvements in the range of 10%–35% for metrics such as data retrieval delay, server hit rate, network overhead, network throughput, and network load. Outcomes are compared by integrating the proposed protocol with state-of-the-art caching protocols and also against recent forwarding mechanisms.
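A hedged Python sketch of Q-value-driven next-hop selection in the spirit of IPQ-/DPQ-learning; the class, the delay-based reward, and the update rule below are assumptions rather than the paper's exact equations:

```python
from collections import defaultdict

# Each node keeps Q[content_prefix][next_hop] and refines it from the
# retrieval delay observed when the corresponding Data packet returns.
class ForwardingAgent:
    def __init__(self, neighbors, alpha=0.2, gamma=0.9):
        self.neighbors = neighbors
        self.alpha, self.gamma = alpha, gamma
        self.Q = defaultdict(lambda: defaultdict(float))

    def next_hop(self, prefix):
        """Forward the Interest to the neighbor with the highest Q value."""
        q = self.Q[prefix]
        return max(self.neighbors, key=lambda n: q[n])

    def on_data_returned(self, prefix, hop, delay, downstream_best=0.0):
        """Treat faster retrieval as higher reward and update the Q value."""
        reward = -delay
        q = self.Q[prefix]
        q[hop] += self.alpha * (reward + self.gamma * downstream_best - q[hop])
```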
329.
In this paper, the bidding decision-making problem in an electricity pay-as-bid auction is studied from a supplier's point of view. Bidding is a complicated task because of suppliers' uncertain behaviors and demand fluctuation. In the specific case in which the market clearing price (MCP) is considered a continuous random variable with a known probability distribution function (PDF), an analytic solution is proposed. The suggested solution is generalized to consider the effect of supplier market power due to transmission congestion. As a result, an algebraic equation is developed to compute the optimal offering price. The basic assumption in this approach is that the probabilistic model of the MCP is known.
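A standard reconstruction under the stated assumption (known MCP distribution F with density f, and supplier marginal cost c); this is the textbook pay-as-bid profit-maximization condition, not necessarily the paper's exact algebraic equation:

```latex
% Expected profit when an offer b is accepted whenever b <= MCP:
\mathbb{E}[\pi(b)] \;=\; (b - c)\,\Pr(b \le \mathrm{MCP}) \;=\; (b - c)\bigl(1 - F(b)\bigr)
% First-order condition defining the optimal offering price b^*:
\bigl(1 - F(b^*)\bigr) \;-\; (b^* - c)\, f(b^*) \;=\; 0
```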
330.
This paper proposes a combined Virtual Reference Feedback Tuning–Q-learning model-free control approach, which tunes nonlinear static state feedback controllers to achieve output model reference tracking in an optimal control framework. The novel iterative Batch Fitted Q-learning strategy uses two neural networks to represent the value function (critic) and the controller (actor), and is referred to as a mixed Virtual Reference Feedback Tuning–Batch Fitted Q-learning approach. Learning convergence of Q-learning schemes generally depends, among other settings, on efficient exploration of the state-action space, and handcrafting test signals for efficient exploration is difficult even for input-output stable unknown processes. Virtual Reference Feedback Tuning can learn an initial stabilizing controller from few input-output data, which can then be used to collect substantially more input-state data in a controlled mode, in a constrained environment, by compensating the process dynamics. These data are used to learn significantly superior nonlinear state feedback neural network controllers for model reference tracking with the proposed Batch Fitted Q-learning iterative tuning strategy, motivating the original combination of the two techniques. The mixed Virtual Reference Feedback Tuning–Batch Fitted Q-learning approach is experimentally validated for water level control of a multi-input multi-output nonlinear constrained coupled two-tank system. Discussions on the observed control behavior are offered.
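A minimal batch fitted-Q sketch with separate critic and actor networks, under placeholder dynamics and random data; the VRFT initialization step and the two-tank plant are omitted, and sklearn regressors stand in for the paper's neural networks:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Placeholder batch: random states, actions, a hypothetical tracking-error
# reward, and toy transitions (none of this is the paper's two-tank data).
rng = np.random.default_rng(0)
S = rng.normal(size=(200, 2))
A = rng.uniform(-1, 1, size=(200, 1))
R = -np.sum(S ** 2, axis=1)
S2 = S + 0.1 * A                               # toy next states

critic = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500)
actor = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500)
actor.fit(S, A.ravel())                        # would be VRFT-initialized in the paper

gamma = 0.95
for it in range(5):                            # fitted-Q iterations over the batch
    a2 = actor.predict(S2).reshape(-1, 1)
    targets = R if it == 0 else R + gamma * critic.predict(np.hstack([S2, a2]))
    critic.fit(np.hstack([S, A]), targets)     # critic regression on Q targets
    # Greedy policy improvement over a coarse action grid (illustrative).
    grid = np.linspace(-1, 1, 21)
    qs = np.stack([critic.predict(np.hstack([S, np.full((len(S), 1), a)]))
                   for a in grid])
    actor.fit(S, grid[qs.argmax(axis=0)])
```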