Sort by: 5782 results found; search time: 31 ms
111.

In this paper, we develop a novel non-parametric online actor-critic reinforcement learning (RL) algorithm to solve optimal regulation problems for a class of continuous-time affine nonlinear dynamical systems. To handle value function approximation (VFA) with an inherently nonlinear and unknown structure, a reproducing kernel Hilbert space (RKHS)-based kernelized method is designed through online sparsification, where the dictionary has a fixed size but its elements are updated online. In addition, a linear-independence check, an online criterion, determines whether incoming online data should be inserted into the dictionary. The RKHS-based kernelized VFA therefore has a variable structure that adapts to the online data collection, in contrast to classical parametric VFA methods with a fixed structure. Furthermore, we develop a sparse online kernelized actor-critic learning RL method that learns the unknown optimal value function and the optimal control policy in an adaptive fashion. Convergence of the presented kernelized actor-critic learning method to the optimum is proved, and boundedness of the closed-loop signals during the online learning phase is guaranteed. Finally, a simulation example demonstrates the effectiveness of the presented kernelized actor-critic learning algorithm.

112.
Inherent in project management is the risk that a project fails to meet planned completion deadlines because of delays in individual tasks. Certain critical tasks may therefore be candidates for risk management (e.g., the allocation of additional resources such as labor, materials, and equipment) to prevent delays. A common means of identifying such critical tasks is the critical path method (CPM), which identifies a path of tasks in a project network that, when delayed, delays the project. This work offers a complementary, stochastic approach to CPM that ranks tasks according to their effect on the project completion time distribution when individual task completion time distributions are shifted by a delay. The new hybrid approach combines Monte Carlo simulation with a multi-criteria decision analysis technique. Monte Carlo simulation approximates the cumulative distribution function of the total duration of the project, while the multi-criteria decision analysis technique compares and ranks the tasks across percentiles of the resulting project completion time distributions. Doing so allows different percentile weighting schemes to represent decision-maker risk preferences. The suggested approach is applied to two project network examples. The examples illustrate that the proposed approach highlights as risky some tasks that do not always lie on the critical path identified by CPM. This is valuable for practicing managers, as it allows them to properly consider their risk preferences when determining task criticality based on the distribution of project completion time (e.g., emphasizing median vs. upper-tail completion time).
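The hybrid ranking described above can be sketched in a few lines: sample task durations over a precedence network, delay one task's distribution at a time, and score the resulting shift across weighted percentiles. The network, duration parameters, and percentile weights below are hypothetical; a fixed seed gives common random numbers across runs, so delaying a task can only increase each sampled completion time. This is an illustrative sketch, not the paper's implementation.

```python
import random

# Hypothetical five-task network: task -> predecessors (illustrative only).
PREDS = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"], "E": ["D"]}
# Triangular duration parameters as (low, high, mode) for random.triangular.
DUR = {"A": (2, 5, 3), "B": (1, 4, 2), "C": (3, 8, 4), "D": (2, 3, 2), "E": (1, 6, 2)}
TOPO = ["A", "B", "C", "D", "E"]  # topological order of the network

def completion_time(durations):
    """Longest-path finish time through the precedence network."""
    finish = {}
    for t in TOPO:
        start = max((finish[p] for p in PREDS[t]), default=0.0)
        finish[t] = start + durations[t]
    return max(finish.values())

def simulate(n=4000, delayed=None, delay=2.0):
    """Monte Carlo sample of project completion times; the fixed seed gives
    common random numbers across runs, isolating the effect of the delay."""
    rng = random.Random(0)
    samples = []
    for _ in range(n):
        d = {t: rng.triangular(*DUR[t]) for t in TOPO}
        if delayed is not None:
            d[delayed] += delay
        samples.append(completion_time(d))
    return sorted(samples)

def percentile(sorted_xs, q):
    return sorted_xs[min(len(sorted_xs) - 1, int(q * len(sorted_xs)))]

# Risk preference: weight the median and the upper tail of the distribution.
weights = {0.5: 0.3, 0.9: 0.7}
base = simulate()
scores = {}
for task in TOPO:
    shifted = simulate(delayed=task)
    scores[task] = sum(w * (percentile(shifted, q) - percentile(base, q))
                       for q, w in weights.items())
ranking = sorted(scores, key=scores.get, reverse=True)
print(ranking)
```

Changing the weights (e.g., putting all weight on the 0.5 percentile versus the 0.9 percentile) is how different decision-maker risk preferences are expressed, and it can reorder the ranking.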
113.
Recent advances in computing technology have brought multimedia information processing to prominence. The ability to digitize, store, retrieve, process, and transport analog information in digital form has changed the dimensions of information handling. Several architectural and network configurations have been proposed for efficient and reliable digital video delivery systems; however, these proposals address only subsets of the whole problem. In this paper, we discuss the characteristics of video services, including cable television, pay-per-view, and video repository centers, as well as the requirements of video-on-demand services. With respect to these video services, we analyze two important video properties: image quality and response time. We present configurations of a Digital Video Delivery System (DVDS) built from three general system components: servers, clients, and connectivity, and analyze the pertinent issues in developing each component. We also present a DVDS architecture that can support the functionalities of the various video services. Lastly, we discuss data allocation strategies that affect the performance of interactive video on demand (IVOD), and present preliminary results from a study using a limited form of mirroring to support high-performance IVOD.
114.
Electric arc furnace (EAF) dusts contain significant quantities of zinc, mostly in the form of zinc oxide. This dust has been classified as a hazardous waste due to the presence of lead, cadmium, and hexavalent chromium, so it is important that environmentally acceptable treatment processes be developed. One possible process would react the zinc oxide in the dust with either solid or liquid iron. In addition, in the carbothermic reduction processes designed to treat the dust, metallic iron is formed, and this iron can participate in the reduction of zinc oxide. In the present research, the reduction of zinc oxide by iron according to the reaction $\mathrm{ZnO_{(s)} + Fe_{(s)} = Zn_{(g)} + FeO_{(s)}}$ was studied using a thermogravimetric technique. Briquettes of zinc oxide powder and electrolytic iron were reacted in the temperature range of 1073 to 1423 K in an argon atmosphere. First, a thermodynamic analysis was performed using the Facility for the Analysis of Chemical Thermodynamics (F*A*C*T) computational system, and then the effect of experimental variables on the reaction kinetics was determined. These variables included argon gas flow rate, reaction temperature, reagent particle size, iron-to-zinc-oxide ratio, aspect ratio of the briquette, briquetting pressure, and alkali and alkaline earth additions. It was found that, initially, the reaction was chemically controlled with an activation energy of 230 kJ/mol. Additions such as sodium chloride and calcium fluoride promoted the reaction, lowering the activation energies to 172.5 and 188.7 kJ/mol, respectively. Once a product layer had formed, the reaction was limited by the diffusion of zinc gas away from the reaction interface. The experimental data were fitted to a parabolic rate law, and the parabolic rate constant was found to be $k_p = -2.47 + 0.0021\,T$, with $T$ in kelvin.
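The reported fits lend themselves to a small numerical illustration. The constants below come from the abstract (activation energies of 230, 172.5, and 188.7 kJ/mol, and the fit $k_p = -2.47 + 0.0021\,T$); the assumption of equal Arrhenius pre-exponential factors when comparing promoted and unpromoted rates is ours, purely for illustration, and the evaluation temperature is simply the mid-range of the experiments.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def arrhenius_ratio(Ea_base_kJ, Ea_promoted_kJ, T):
    """Rate ratio k_promoted / k_base at temperature T, assuming equal
    pre-exponential factors (an illustrative assumption, not from the study)."""
    return math.exp((Ea_base_kJ - Ea_promoted_kJ) * 1000.0 / (R * T))

def k_parabolic(T):
    """Empirical fit reported in the study: k_p = -2.47 + 0.0021*T, T in kelvin."""
    return -2.47 + 0.0021 * T

T = 1273.0  # mid-range of the 1073-1423 K experimental window
print(round(arrhenius_ratio(230.0, 172.5, T), 1))   # NaCl-promoted speedup
print(round(arrhenius_ratio(230.0, 188.7, T), 1))   # CaF2-promoted speedup
print(round(k_parabolic(T), 4))                      # parabolic rate constant
```

Under the equal-prefactor assumption, the activation-energy drop from NaCl addition corresponds to a rate enhancement of a couple of hundredfold at 1273 K, which is consistent with the abstract's statement that the additions promoted the reaction.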
115.
A safe flight starts with effective performance of the pre-flight planning and briefing task. However, several problems with the execution of this task can be identified. The introduction of an improved flight plan offers an opportunity to improve the quality and availability of the information provided to flight crews, thereby enhancing the quality of crew briefings. The proposed risk-based, intelligent flight plan is designed from the perspective of the current operational concept (e.g., fixed routes and the ATC managerial role for separation) and the associated airline flight planning and dispatch functions. The focus here is on sharing information across specific airline stakeholders (e.g., the Flight Operations Management and Safety functions) and Maintenance, to support safe and efficient flight operations. Overall, the introduction of this new flight plan will require the definition of new operational and organisational processes, along with a new way of performing the pre-flight planning and briefing task. It is anticipated that this will have a positive impact on the operational and safety outcome of the flight.
116.
We consider the problem of scheduling on uniform machines that may not start processing at the same time, with the objective of minimizing the maximum completion time. We propose a variant of the MULTIFIT algorithm, LMULTIFIT, which generates schedules that end within 1.382 times the optimal maximum completion time for the general problem, and within \(\sqrt{6}/2\) times the optimum for instances with two machines. Both bounds improve on previous results. We prove that LMULTIFIT worst-case bounds for scheduling on simultaneous uniform machines are also LMULTIFIT worst-case approximation bounds for scheduling on nonsimultaneous uniform machines, and show that worst-case approximation bounds of MULTIFIT variants for simultaneous uniform machines from the previous literature also apply to LMULTIFIT. We also comment on how a PTAS for scheduling on a constant number of uniform machines with fixed jobs can be used to obtain a PTAS for scheduling on a constant number of nonsimultaneous uniform machines.
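The MULTIFIT idea underlying LMULTIFIT can be illustrated with a simplified sketch: binary-search a common deadline C, testing feasibility by first-fit-decreasing packing of job sizes into machine capacities, where a machine with speed s that becomes available at time r can process s·(C − r) units of work by the deadline. The helper names and the upper-bound choice are ours, and this sketch omits the refinements (such as machine ordering rules) that give LMULTIFIT its proven worst-case bounds.

```python
def ffd_fits(jobs, capacities):
    """First-fit decreasing: pack job sizes into machine capacities;
    return True if every job fits."""
    remaining = sorted(capacities, reverse=True)
    for job in sorted(jobs, reverse=True):
        for i, cap in enumerate(remaining):
            if job <= cap + 1e-9:
                remaining[i] = cap - job
                break
        else:
            return False
    return True

def multifit(jobs, speeds, starts, iters=40):
    """Binary search on the common deadline C; machine i can process
    speeds[i] * (C - starts[i]) units of work before C."""
    lo = max(starts)
    # Crude feasible upper bound: run everything on the fastest machine.
    i_fast = max(range(len(speeds)), key=lambda i: speeds[i])
    hi = max(starts) + sum(jobs) / speeds[i_fast]
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        caps = [s * max(0.0, mid - r) for s, r in zip(speeds, starts)]
        if ffd_fits(jobs, caps):
            hi = mid
        else:
            lo = mid
    return hi

# Two machines with speeds 2 and 1; the second starts at time 1.
print(round(multifit([4, 3, 3, 2, 2], speeds=[2, 1], starts=[0, 1]), 3))  # → 5.0
```

In the example, the two machines can jointly perform 2C + (C − 1) units of work by deadline C, so 14 units of work need C = 5, and FFD indeed finds a packing at that deadline.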
117.
The majority of microfluidic devices used for cell culture, including Organs-on-Chips (Organ Chips), are fabricated from polydimethylsiloxane (PDMS) because it is flexible, optically clear, and easy to mold. However, PDMS poses significant challenges for high-volume manufacturing, and its tendency to absorb small hydrophobic compounds limits its usefulness in devices used for drug evaluation studies. Here, we demonstrate that a subset of optically clear, elastomeric, styrenic block copolymers based on styrene-ethylene-butylene-styrene (SEBS) exhibit reduced absorption of small hydrophobic molecules and drug compounds compared to PDMS, and that they can be fabricated into microfluidic devices with the fine features and flexibility required for Organ Chips using the mass-production techniques of injection molding and extrusion.
118.
The effective visualization of vascular structures is critical for diagnosis, surgical planning, and treatment evaluation. In recent work, we developed a vessel detection algorithm that examines the intensity profile around each voxel in an angiographic image and determines the likelihood that the voxel belongs to a vessel; we term this the "vesselness coefficient" of the voxel. Our results show that the algorithm works particularly well for visualizing branch points in vessels: compared to standard Hessian-based techniques, which are fine-tuned to identify long cylindrical structures, our technique identifies branches and connections with other vessels. Using the computed vesselness coefficient, we explore a set of techniques for visualizing vasculature. Visualizing vessels is particularly challenging because clinicians need not only the position of each vessel in space but also its spatial relationship to other vessels. We apply visualization techniques that provide both shape cues and depth cues, allowing the viewer to differentiate between vessels that are closer and those that are farther away. We use the computed vesselness coefficient to visualize vasculature in clinical neurovascular X-ray computed tomography angiography images, as well as in images from three different animal studies. We conducted a formal user evaluation of our visualization techniques with radiologists, surgeons, and other expert users. The results indicate that experts preferred distance color blending and tone shading for conveying depth over standard visualization techniques.
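Of the depth cues the experts preferred, distance color blending is the simplest to sketch: a vessel's color is interpolated toward a muted "far" color as its depth increases, so nearer vessels stay vivid. The colors and depth range below are hypothetical; a renderer would apply this per voxel, optionally weighting opacity by the vesselness coefficient.

```python
def distance_color_blend(color_near, color_far, depth, d_min, d_max):
    """Linearly interpolate RGB from color_near to color_far by normalized
    depth, so farther structures fade toward the 'far' color (a depth cue)."""
    if d_max <= d_min:
        t = 0.0
    else:
        t = min(1.0, max(0.0, (depth - d_min) / (d_max - d_min)))
    return tuple((1.0 - t) * cn + t * cf for cn, cf in zip(color_near, color_far))

# Hypothetical example: vivid red near the viewer fading to a dim blue-gray.
near, far = (0.9, 0.1, 0.1), (0.3, 0.3, 0.5)
for depth in (0.0, 5.0, 10.0):
    print(distance_color_blend(near, far, depth, d_min=0.0, d_max=10.0))
```

The clamping of the interpolation parameter keeps vessels outside the chosen depth window from wrapping to unexpected colors.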
119.
120.
Replica Placement Strategies in Data Grid   (Total citations: 1; self-citations: 0; citations by others: 1)
Replication is a technique used in Data Grid environments to reduce access latency and network bandwidth utilization; it also increases data availability, thereby enhancing system reliability. This research addresses the replica placement problem in a Data Grid environment by investigating a set of highly decentralized, dynamic replica placement algorithms. The algorithms are based on heuristics that consider both network latency and user requests when selecting the best candidate sites at which to place replicas. Because of the dynamic nature of the Grid, the sites currently holding replicas may not remain the best sites from which to fetch them in subsequent periods; a replica maintenance algorithm is therefore proposed to relocate replicas to different sites when the performance metric degrades significantly. The study of our replica placement algorithms is carried out using a model of the EU Data Grid Testbed 1 sites [Bell et al., Comput. Appl., 17(4), 2003] and their associated network geometry. We validate the algorithms with total file transfer times, the number of local file accesses, and the number of remote file accesses.
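The flavor of a latency- and request-aware placement heuristic, together with a degradation test that triggers relocation, can be sketched as follows. The scoring function, weights, site names, and threshold are all illustrative assumptions, not the paper's actual algorithms.

```python
def placement_score(latency_ms, requests, w_lat=0.5, w_req=0.5):
    """Favor sites with high local demand and low network latency.
    The linear form and the weights are illustrative only."""
    return w_req * requests - w_lat * latency_ms

def best_candidate(sites):
    """sites: name -> (mean latency to requesters in ms, request count).
    Return the best site at which to place the next replica."""
    return max(sites, key=lambda name: placement_score(*sites[name]))

def should_relocate(old_score, new_score, degrade_frac=0.2):
    """Relocate a replica if its site's score has degraded by more than
    degrade_frac since placement (illustrative maintenance rule)."""
    return new_score < old_score * (1.0 - degrade_frac)

# Hypothetical candidate sites and their observed metrics.
sites = {"SiteA": (12.0, 300), "SiteB": (35.0, 520), "SiteC": (20.0, 410)}
print(best_candidate(sites))  # → SiteB
```

In this toy instance the heavily requested site wins despite its higher latency; re-evaluating `should_relocate` each period captures the maintenance step, since a site chosen now may stop being the best as request patterns shift.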

Copyright©北京勤云科技发展有限公司  京ICP备09084417号