31.
The multi-agent cooperative pursuit problem is a classic problem in research on multi-agent coordination and cooperation. For the pursuit of a single evader with learning ability, a multi-agent cooperative pursuit algorithm based on game theory and Q-learning is proposed. First, a cooperative pursuit team is established and a game model of cooperative pursuit is constructed. Second, by learning the evader's strategy choices, the evader's finite Step-T cumulative-reward trajectory is built and incorporated into the pursuers' strategy set. Finally, the cooperative pursuit game is solved to obtain a Nash equilibrium, and each agent executes its equilibrium strategy to complete the pursuit task. To handle the possibility of multiple equilibria, a fictitious-play action-selection procedure is added to choose the optimal equilibrium strategy. C# simulation experiments show that the proposed algorithm effectively solves the pursuit of a single learning evader in an environment with obstacles, and comparative analysis of the experimental data shows that, under the same conditions, its pursuit efficiency is better than that of purely game-theoretic or purely learning-based pursuit algorithms.
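A minimal sketch of the kind of tabular Q-learning update the pursuers could use to model the evader's strategy choices is given below; the state/action encoding, hyperparameter values, and function names (`choose_action`, `q_update`) are illustrative assumptions, not the authors' implementation (which was written in C#).

```python
# Hypothetical sketch: tabular Q-learning a pursuer could use to learn the
# evader's strategy choices; encodings and hyperparameters are assumptions.
import random
from collections import defaultdict

alpha, gamma, epsilon = 0.1, 0.9, 0.1   # assumed learning rate / discount / exploration
q_table = defaultdict(float)            # maps (state, action) -> estimated return

def choose_action(state, actions):
    """Epsilon-greedy selection over the candidate moves."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_table[(state, a)])

def q_update(state, action, reward, next_state, actions):
    """One-step Q-learning backup of the Step-T cumulative reward."""
    best_next = max(q_table[(next_state, a)] for a in actions)
    q_table[(state, action)] += alpha * (reward + gamma * best_next
                                         - q_table[(state, action)])
```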
32.
In automotive paint shops, color changes between consecutive production orders incur costs for cleaning the painting robots. Re-sequencing orders and grouping orders of identical color into color batches to minimize the changeover costs is therefore an important task. In this paper, a Color-batching Resequencing Problem (CRP) with mix-bank buffer systems is considered. We propose a Color-Histogram (CH) model that describes the CRP as a Markov decision process and a Deep Q-Network (DQN) algorithm, integrated with the virtual-car resequencing technique, to solve the CRP. The CH model significantly reduces the number of possible actions of the DQN agent, so that the DQN algorithm can be applied to the CRP at a practical scale. A DQN agent is trained in a deep reinforcement learning environment to minimize the color-changeover costs of the CRP. Two experiments with different assumptions on the order-attribute distributions and cost metrics were conducted and evaluated. The experimental results show that the proposed approach outperforms conventional algorithms under both conditions. The proposed agent runs in real time on a regular personal computer with a GPU, so the approach can be readily applied in the production control of automotive paint shops to resolve order-resequencing problems.
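As a rough illustration of the state and cost signals described above, the sketch below encodes the mix-bank lanes as a colour histogram and charges a changeover cost whenever the released colour differs from the previously painted one; the lane layout, colour names, and cost value are assumptions rather than details from the paper, and the DQN itself is omitted.

```python
# Hypothetical sketch of a colour-histogram style state and the changeover
# cost signal in the CRP; lanes and cost values are illustrative assumptions.
from collections import Counter

def histogram_state(lanes):
    """Summarise the cars waiting in the mix-bank lanes as a colour histogram."""
    counts = Counter(color for lane in lanes for color in lane)
    return dict(counts)

def release_cost(lane, last_released_color, changeover_cost=1.0):
    """Cost is incurred whenever the released colour differs from the colour
    of the previous car sent to the paint shop."""
    if not lane:
        return 0.0, last_released_color
    color = lane.pop(0)
    cost = 0.0 if color == last_released_color else changeover_cost
    return cost, color

# Example: two lanes of queued orders; releasing from lane 0 keeps the batch going.
lanes = [["red", "red", "blue"], ["blue", "white"]]
print(histogram_state(lanes))          # {'red': 2, 'blue': 2, 'white': 1}
print(release_cost(lanes[0], "red"))   # (0.0, 'red')
```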
33.
This paper presents a field-scale experimental track over a poor subgrade, with an unreinforced section and a geocell-reinforced section subjected to in-situ performance tests. Plate load tests and Benkelman beam tests were carried out at several points on the unreinforced and reinforced layers. The objectives were to: (1) examine the variability of the elastic modulus of the unbound granular material (UGM) due to the influence of its thickness and of the poor subgrade at its base, (2) evaluate the modulus improvement factor (MIF) generated by the geocell reinforcement in the UGM, and (3) verify the most appropriate condition for applying the MIF in transport-infrastructure design. The results showed that the thickness of the UGM layer has a significant influence on its elastic modulus when the layer rests directly on a soft subgrade. The MIF values obtained in the field suggest that the MIF is governed mostly by the maximum elastic modulus of the UGM rather than by its reduced values (caused by the poor subgrade or reduced thicknesses), and that the analytical formulation presented for MIF calculation has good predictive capability for pavement design.
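For reference, the modulus improvement factor is simply the ratio of the reinforced to the unreinforced layer modulus; the short sketch below computes it with assumed, purely illustrative plate-load-test values.

```python
# Hypothetical sketch: computing the modulus improvement factor (MIF) from
# back-calculated layer moduli; the numeric values are illustrative only.
def modulus_improvement_factor(e_reinforced_mpa, e_unreinforced_mpa):
    """MIF = elastic modulus of the geocell-reinforced UGM layer divided by
    the modulus of the equivalent unreinforced layer."""
    return e_reinforced_mpa / e_unreinforced_mpa

# Example with assumed plate-load-test moduli (MPa):
print(modulus_improvement_factor(240.0, 150.0))  # 1.6
```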
34.
This paper explores the energy consumption involved in compacting unreinforced and fibre-reinforced samples fabricated in the laboratory. It is well known that, for a fixed soil density, the addition of fibres invariably increases the resistance to compaction. However, peak strength properties similar to those of a dense unreinforced sample can be obtained using looser granular soil matrices mixed with small quantities of fibres. Based on both experimental and discrete element modelling (DEM) procedures, this paper demonstrates that less compaction energy is required to build loose fibre-reinforced sand samples than denser unreinforced sand samples, while both show similar peak strength properties. Beyond corroborating the macro-scale experimental observations, the DEM analyses provide insight into the local micro-scale mechanisms governing fibre-grain interaction. These assessments focus on the evolution of the void-ratio distribution, the re-arrangement of soil particles, the mobilisation of stresses in the fibres, and the evolution of the fibre-orientation distribution during the stages of compaction.
35.
The microstructural features and the consequent mechanical properties were characterized in aluminium borate whisker (ABOw) (5, 10 and 15 wt.%) reinforced commercially pure aluminium composites fabricated by a conventional powder metallurgy technique. The aluminium powder and the whiskers were effectively blended by a semi-powder metallurgy method; the blended powder mixtures were cold compacted and sintered at 600 °C. The sintered composites were characterized by optical microscopy (OM), scanning electron microscopy (SEM), energy dispersive spectroscopy (EDS), transmission electron microscopy (TEM) and X-ray diffraction (XRD) analysis. The porosity of the composites was determined as a function of ABOw content, and the effect of ABOw content on the mechanical properties, viz. hardness, bending strength and compressive strength, was evaluated. The dry sliding wear behaviour was evaluated at varying sliding distances under constant loads. A maximum flexural strength of 172 MPa and a compressive strength of 324 MPa, with an improved hardness of around HV 40.2, are obtained in the composite with 10 wt.% ABOw; a further increase in ABOw content deteriorates the properties. A substantial increase in wear resistance is also observed at 10 wt.% ABOw. The excellent combination of mechanical properties of the Al−10wt.%ABOw composites is attributed to good interfacial bonds, low porosity and a uniform microstructure.
36.
Taking the retrofit of an office building as a case study, this paper investigates the spatial effect of the wrap-plate bonded-steel strengthening method applied to the negative-moment regions of a frame structure. Using the ANSYS finite element software, modal analysis, response-spectrum analysis and time-history analysis were performed on three groups of models, and their spatial seismic responses were compared. All results show that the wrap-plate bonded-steel strengthening method has good seismic performance. The findings provide guidance on the local effects of strengthening the negative-moment regions of beams in a given zone of multi-storey or high-rise structures.
37.
Ceramics International, 2022, 48(11): 15668–15676
The mismatch between the coefficients of thermal expansion (CTE) of carbon fiber reinforced pyrocarbon (Cf/C) composites and their thermal barrier coatings (TBCs) significantly restricts the service life of Cf/C composites in high-temperature environments. Owing to the high CTE of TBCs, it is vital to find a material with similar mechanical properties and a higher CTE than Cf/C composites. In this work, carbon nanotube reinforced pyrocarbon (Ct/C) nanocomposites with high CTEs were prepared to self-adapt to the TBCs. Different CTEs (~4.0–6.5 × 10⁻⁶/°C) were obtained by varying the carbon nanotube (CNT) content of the Ct/C composites. Owing to the decreased CTE mismatch, no cracks formed in the TBCs (SiC and HfB2-SiC-HfC coatings) deposited on the Ct/C composites. After heat treatment at 2100 °C, several wide cracks were found in the TBCs on the Cf/C composite, whereas the TBCs on the Ct/C composites remained intact without cracks. We found that the CTE-tunable Ct/C composites can self-adapt to different TBCs, protecting the composites from oxidation at high temperatures.
The mismatch in the coefficients of thermal expansion (CTE) of the carbon fiber reinforced pyrocarbon (Cf/C) composites and their thermal barrier coatings (TBCs) has significantly restricted the service life of Cf/C composites in high-temperature environments. Owing to the high CTE of TBCs, it is vital to find a material with similar mechanical properties and higher CTE than Cf/C composites. In this work, carbon nanotube reinforced pyrocarbon (Ct/C) nanocomposites with high CTEs were prepared to self-adapt to the TBCs. Different CTEs (~4.0–6.5 × 10?6/°C) were obtained by varying the carbon nanotube (CNT) content of the Ct/C composites. Owing to the decreased mismatch in the CTEs, no cracks were formed in the TBCs (SiC and HfB2-SiC-HfC coatings) deposited on the Ct/C composites. After heat treatment at 2100 °C, several wide cracks were found in the TBCs on the Cf/C composite, whereas the TBCs on the Ct/C composites were intact without cracks. We found that the CTE-tunable Ct/C composites can self-adapt to different TBCs, protecting the composites from oxidation at high temperatures.  相似文献   
38.
In human-robot collaborative manufacturing, the operator's safety is the primary concern during manufacturing operations. This paper presents a deep reinforcement learning approach for real-time collision-free motion planning of an industrial robot in human-robot collaboration. First, the safe human-robot collaborative manufacturing problem is formulated as a Markov decision process, and the mathematical expression of the reward-function design problem is given. The goal is for the robot to autonomously learn a policy that reduces the accumulated risk while assuring the task completion time during human-robot collaboration. To transform this optimization objective into a reward function that guides the robot toward the expected behaviour, a reward-function optimization approach based on the deterministic policy gradient is proposed to learn a parameterized intrinsic reward function; the reward used by the agent to learn the policy is the sum of this intrinsic reward function and the extrinsic reward function. Then, a deep reinforcement learning algorithm, Intrinsic Reward Deep Deterministic Policy Gradient (IRDDPG), which combines the DDPG algorithm with the reward-function optimization approach, is proposed to learn the expected collision-avoidance policy. Finally, the proposed algorithm is tested in a simulation environment, and the results show that the industrial robot can learn the expected policy and achieve safety assurance for industrial human-robot collaboration without missing the original target. Moreover, the reward-function optimization approach helps compensate for shortcomings of the hand-designed reward function and improves policy performance.
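A hedged sketch of the central idea, a learned intrinsic reward added to the extrinsic reward inside a DDPG-style temporal-difference target, is shown below; the linear intrinsic-reward parameterisation, the discount factor, and all numeric values are assumptions for illustration, not the IRDDPG implementation.

```python
# Hypothetical sketch of how a combined intrinsic + extrinsic reward enters a
# DDPG-style TD target; the parameterisation and numbers are assumptions.
import numpy as np

gamma = 0.99  # assumed discount factor

def intrinsic_reward(state, action, theta):
    """Parameterised intrinsic reward r_in(s, a; theta), here a simple linear form."""
    features = np.concatenate([state, action])
    return float(theta @ features)

def td_target(extrinsic_r, state, action, next_q, theta):
    """Critic target uses the sum of the extrinsic and the learned intrinsic reward."""
    total_r = extrinsic_r + intrinsic_reward(state, action, theta)
    return total_r + gamma * next_q

# Example with toy dimensions:
theta = np.zeros(5); theta[0] = 0.1
print(td_target(extrinsic_r=-1.0, state=np.ones(3), action=np.ones(2),
                next_q=0.5, theta=theta))  # ≈ -0.405
```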
39.
This paper develops a relative output-feedback-based solution to the containment control of linear heterogeneous multiagent systems. A distributed optimal control protocol is presented for the followers that not only ensures that their outputs fall into the convex hull of the leaders' outputs but also optimizes their transient performance. The proposed optimal solution is composed of a feedback part, depending on the followers' state, and a feed-forward part, depending on the convex hull of the leaders' state. To comply with most real-world applications, the feedback and feed-forward states are assumed to be unavailable and are estimated using two distributed observers. That is, a distributed observer is designed to estimate each agent's state using only its relative output measurements and the information it receives from its neighbors. Another adaptive distributed observer is designed, which uses the exchange of information between followers over a communication network to estimate the convex hull of the leaders' state. The proposed observer relaxes the restrictive requirement that all followers have access to complete knowledge of the leaders' dynamics. An off-policy reinforcement learning algorithm with an actor-critic structure is then developed to solve the optimal containment control problem online, using relative output measurements and without requiring the leaders' dynamics. Finally, the theoretical results are verified by numerical simulations.
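The consensus-style estimator below is a minimal sketch of the adaptive distributed observer idea, in which each follower refines its estimate of the leaders' convex-hull state using only neighbour information; the communication graph, gains, time step, and the direct-access flag are illustrative assumptions, not the paper's exact observer dynamics.

```python
# Hypothetical sketch of a consensus-type distributed observer: each follower
# estimates the leaders' convex-hull state from neighbour information only.
import numpy as np

def observer_step(estimates, neighbours, hull_state, access, mu=1.0, dt=0.02):
    """One Euler step of the estimator.
    estimates[i]  : follower i's current estimate of the hull state
    neighbours[i] : indices of followers that i communicates with
    hull_state    : a convex combination of the leaders' states (assumed static here)
    access[i]     : 1 if follower i directly measures the leaders, else 0
    """
    new = []
    for i, est in enumerate(estimates):
        consensus = sum(estimates[j] - est for j in neighbours[i])
        pinning = access[i] * (hull_state - est)
        new.append(est + dt * mu * (consensus + pinning))
    return new

# Example: three followers on a line graph, only follower 0 sees the leaders.
ests = [np.zeros(2), np.zeros(2), np.zeros(2)]
nbrs = {0: [1], 1: [0, 2], 2: [1]}
hull = np.array([1.0, 2.0])
for _ in range(5000):
    ests = observer_step(ests, nbrs, hull, access=[1, 0, 0])
print(np.round(ests[2], 2))  # approaches the hull state [1. 2.]
```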