1,838 results found (search time: 0 ms)
21.
Automated formal verification of security protocols has mostly focused on analyzing high-level abstract models which, however, are significantly different from real protocol implementations written in programming languages. Recently, some researchers have started investigating techniques that bring automated formal proofs closer to real implementations. This paper surveys these attempts, focusing on approaches that target the application code that implements the protocol logic, rather than the libraries that implement the cryptography. These approaches assume that the libraries correctly implement given models; the aim is to derive formal proofs that, under this assumption, give assurance about the application code that implements the protocol logic. The two main approaches, model extraction and code generation, are presented, along with the main techniques adopted for each.
22.
Dynamic-mechanical and water sorption properties of porous and non-porous hydrogels have been studied as a function of their porosity and crosslinking density. Porous hydrogels with different crosslinking densities were prepared by co-polymerisation of 2-hydroxyethyl acrylate and ethyleneglycol dimethacrylate in solution in methanol. Pores were formed by segregation of the solvent from the polymer network during the polymerisation process. The porosity of the samples was observed by scanning electron microscopy. The pores collapse during the drying process after polymerisation but reopen when the xerogel is immersed in liquid water. Bulk polymer networks with varying crosslinking densities were also synthesised and used as a reference in the analysis of the porous hydrogels. Water sorption from the gas phase and from the liquid phase was studied by means of equilibrium sorption isotherms and immersion experiments, respectively. Dynamic-mechanical spectroscopy conducted on the xerogels shows that the elastic modulus in the rubber-like state depends strongly on the amount of solvent used in the polymerisation, which is attributed to the presence of discontinuity surfaces in the xerogel even though the pores are closed.
23.
The percolation model has been applied to the study of gelation of the TGDDM–DDS system (tetraglycidyldiaminodiphenylmethane–diaminodiphenylsulfone) at a mass ratio of 100:30. For each temperature, the experimental viscosity curves are satisfactorily described by a percolation law. Using the degree of chemical reaction, X, as a variable, a very clear change in the reaction mechanism with temperature can be shown. A rate of advancement of effective reactions, Y, is then defined. This value takes only intermolecular reactions into account, and is probably the only variable on which viscosity depends in a percolation law: η = B(1 − Y/Yc)^(−p). We obtain Yc = 0.45 and p = 2.0. Comparing Xc and Yc at the gel point gives information on the proportion of intramolecular reactions as a function of temperature. It is also demonstrated that the critical percolation threshold agrees closely with the gel point determined experimentally from log G″ = f(t) curves.
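To make the scaling law concrete, here is a minimal Python sketch that evaluates η = B(1 − Y/Yc)^(−p) with the reported Yc = 0.45 and p = 2.0; the pre-factor B and the sample values of Y are illustrative placeholders, not data from the paper.

```python
# Percolation law for viscosity near gelation: eta = B * (1 - Y/Yc)**(-p).
# Yc and p are the values reported above; B and the sample Y values are
# illustrative placeholders, not the paper's data.
Yc, p = 0.45, 2.0
B = 1.0  # pre-factor, assumed for illustration

def viscosity(Y: float) -> float:
    """Viscosity as a function of the effective reaction advancement Y (Y < Yc)."""
    if Y >= Yc:
        raise ValueError("viscosity diverges at the gel point Y = Yc")
    return B * (1.0 - Y / Yc) ** (-p)

for Y in (0.0, 0.2, 0.4, 0.44):
    print(f"Y = {Y:.2f}  eta/B = {viscosity(Y):.1f}")
```

The divergence as Y approaches Yc mirrors the experimental rise of viscosity at the gel point.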
24.
Over the past decade process mining has emerged as a new analytical discipline able to answer a variety of questions based on event data. Event logs have a very particular structure; events have timestamps, refer to activities and resources, and need to be correlated to form process instances. Process mining results tend to be very different from classical data mining results, e.g., process discovery may yield end-to-end process models capturing different perspectives rather than decision trees or frequent patterns. A process-mining tool like ProM provides hundreds of different process mining techniques ranging from discovery and conformance checking to filtering and prediction. Typically, a combination of techniques is needed and, for every step, there are different techniques that may be very sensitive to parameter settings. Moreover, event logs may be huge and may need to be decomposed and distributed for analysis. These aspects make it very cumbersome to analyze event logs manually. Process mining should be repeatable and automated. Therefore, we propose a framework to support the analysis of process mining workflows. Existing scientific workflow systems and data mining tools are not tailored towards process mining and the artifacts used for analysis (process models and event logs). This paper structures the basic building blocks needed for process mining and describes various analysis scenarios. Based on these requirements we implemented RapidProM, a tool supporting scientific workflows for process mining. Examples illustrating the different scenarios are provided to show the feasibility of the approach.
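As a minimal illustration of scripting process mining so that it is repeatable (a generic sketch, not the actual API of ProM or RapidProM), the following Python fragment computes the directly-follows relation, a common starting point for discovery algorithms, from a toy event log:

```python
from collections import Counter

# Toy event log: each trace is the ordered activity sequence of one case.
# Real logs (e.g. XES files) also carry timestamps and resources; this
# sketch keeps only the control-flow perspective.
event_log = [
    ["register", "check", "decide", "notify"],
    ["register", "check", "recheck", "decide", "notify"],
    ["register", "decide", "notify"],
]

# Directly-follows relation: how often activity a is immediately followed
# by activity b across all traces.
dfg = Counter()
for trace in event_log:
    for a, b in zip(trace, trace[1:]):
        dfg[(a, b)] += 1

for (a, b), n in sorted(dfg.items()):
    print(f"{a} -> {b}: {n}")
```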
25.
26.
Real-world applications of multivariate data analysis often stumble upon the barrier of interpretability. Simple data analysis methods are usually easy to interpret, but they risk providing poor data models. More involved methods may instead yield faithful data models, but limited interpretability. This is the case of linear and nonlinear methods for multivariate data visualization through dimensionality reduction. Even though the latter have provided some of the most exciting visualization developments, their practicality is hindered by the difficulty of explaining them in an intuitive manner. The interpretability, and therefore the practical applicability, of data visualization through nonlinear dimensionality reduction (NLDR) methods would improve if, first, we could accurately calculate the distortion introduced by these methods in the visual representation and, second, if we could faithfully reintroduce this distortion into such representation. In this paper, we describe a technique for the reintroduction of the distortion into the visualization space of NLDR models. It is based on the concept of density-equalizing maps, or cartograms, recently developed for the representation of geographic information. We illustrate it using Generative Topographic Mapping (GTM), a nonlinear manifold learning method that can provide both multivariate data visualization and a measure of the local distortion that the model generates. Although illustrated here with GTM, it could easily be extended to other NLDR visualization methods, provided a local distortion measure could be calculated. It could also serve as a guiding tool for interactive data visualization.
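As a rough illustration of what a per-point distortion measure can look like (a generic stand-in assumed for this sketch, not GTM's own local distortion measure), the following Python fragment scores each point by the ratio between its mean distance to its nearest neighbours in data space and the mean distance to the same points in the 2-D visualization space:

```python
import numpy as np

def local_distortion(X: np.ndarray, Z: np.ndarray, k: int = 5) -> np.ndarray:
    """Crude per-point distortion proxy: mean distance to the k nearest
    neighbours in data space X divided by the mean distance to the same
    points in the embedding Z. Values far from the overall median flag
    regions the 2-D map compresses or stretches. A generic stand-in,
    not GTM's own local distortion measure."""
    out = np.empty(len(X))
    for i in range(len(X)):
        d_data = np.linalg.norm(X - X[i], axis=1)
        nn = np.argsort(d_data)[1:k + 1]              # skip the point itself
        d_vis = np.linalg.norm(Z[nn] - Z[i], axis=1)
        out[i] = d_data[nn].mean() / (d_vis.mean() + 1e-12)
    return out

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))   # toy high-dimensional data
Z = X[:, :2]                     # toy "embedding": first two coordinates
print(local_distortion(X, Z)[:5])
```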
27.
In this paper the system ACOPlan, a planner for domains with non-uniform action costs, is introduced and analyzed. ACOPlan is based on the ant colony optimization framework, in which a colony of planning ants searches for near-optimal solution plans with respect to an overall plan cost metric. This approach is motivated by the strong similarity between the process used by artificial ants to build solutions and the methods used by state-based planners to search for solution plans. Planning ants perform a stochastic, heuristic-based search by interacting through a pheromone model. The proposed heuristic and pheromone models are presented and compared through systematic experiments on benchmark planning domains. Experiments are also provided to compare the quality of ACOPlan solution plans with state-of-the-art satisficing planners. The analysis of the results confirms the good performance of the Action–Action pheromone model and points out the promising performance of the novel Fuzzy–Level–Action pheromone model. The analysis also suggests general principles for designing performant pheromone models for planning and further extensions of ACOPlan to other optimization models.
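For readers unfamiliar with the framework, here is a minimal Python sketch of the generic ant colony machinery the abstract builds on: a pheromone-and-heuristic-biased stochastic choice, followed by pheromone evaporation and deposit. The action names, heuristic scores and parameters are illustrative placeholders, not ACOPlan's actual Action–Action or Fuzzy–Level–Action models.

```python
import random

# Generic ACO step: pick the next component with probability proportional
# to pheromone**alpha * heuristic**beta, then evaporate and deposit.
# Names, scores and parameters are illustrative, not ACOPlan's.
actions = ["a1", "a2", "a3"]
pheromone = {a: 1.0 for a in actions}
heuristic = {"a1": 0.9, "a2": 0.5, "a3": 0.2}  # e.g. from a planning heuristic
alpha, beta, rho = 1.0, 2.0, 0.1               # weights and evaporation rate

def choose_action() -> str:
    weights = [pheromone[a] ** alpha * heuristic[a] ** beta for a in actions]
    return random.choices(actions, weights=weights)[0]

def update_pheromone(chosen: str, quality: float) -> None:
    for a in actions:                # evaporation on every component
        pheromone[a] *= 1.0 - rho
    pheromone[chosen] += quality     # deposit on the component just used

for _ in range(100):
    a = choose_action()
    update_pheromone(a, heuristic[a])  # stand-in for a plan-cost-based quality
```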
28.
The aim of this paper is to show how the hybridization of a multi-objective evolutionary algorithm (MOEA) and a local search method based on the use of rough set theory is a viable alternative to obtain a robust algorithm able to solve difficult constrained multi-objective optimization problems at a moderate computational cost. This paper extends a previously published MOEA [Hernández-Díaz AG, Santana-Quintero LV, Coello Coello C, Caballero R, Molina J. A new proposal for multi-objective optimization using differential evolution and rough set theory. In: 2006 genetic and evolutionary computation conference (GECCO'2006). Seattle, Washington, USA: ACM Press; July 2006], which was limited to unconstrained multi-objective optimization problems. Here, the main idea is to use this sort of hybrid approach to approximate the Pareto front of a constrained multi-objective optimization problem while performing a relatively low number of fitness function evaluations. Since in real-world problems the cost of evaluating the objective functions is the most significant, our underlying assumption is that, by aiming to minimize the number of such evaluations, our MOEA can be considered efficient. As in its previous version, our hybrid approach operates in two stages: in the first one, a multi-objective version of differential evolution is used to generate an initial approximation of the Pareto front. Then, in the second stage, rough set theory is used to improve the spread and quality of this initial approximation. To assess the performance of our proposed approach, we adopt, on the one hand, a set of standard bi-objective constrained test problems and, on the other hand, a large real-world problem with eight objective functions and 160 decision variables. The first set of problems are solved performing 10,000 fitness function evaluations, which is a competitive value compared to the number of evaluations previously reported in the specialized literature for such problems. The real-world problem is solved performing 250,000 fitness function evaluations, mainly because of its high dimensionality. Our results are compared with those generated by NSGA-II, which is a MOEA representative of the state-of-the-art in the area.
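As background for the first stage, the following minimal Python sketch shows the classic DE/rand/1/bin variation step of differential evolution; the parameter values are assumed for illustration, and the paper's multi-objective extension and second-stage rough-set local search are not shown.

```python
import random

# DE/rand/1/bin: mutant v = x_r1 + F * (x_r2 - x_r3), then binomial
# crossover with the target vector x_i. F, CR and the population are
# illustrative placeholders.
F, CR, DIM = 0.5, 0.9, 4

def de_trial(pop: list, i: int) -> list:
    r1, r2, r3 = random.sample([j for j in range(len(pop)) if j != i], 3)
    jrand = random.randrange(DIM)    # guarantees at least one mutated gene
    trial = []
    for j in range(DIM):
        if random.random() < CR or j == jrand:
            trial.append(pop[r1][j] + F * (pop[r2][j] - pop[r3][j]))
        else:
            trial.append(pop[i][j])
    return trial

pop = [[random.uniform(-1.0, 1.0) for _ in range(DIM)] for _ in range(10)]
print(de_trial(pop, 0))
```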
29.
Hop is a multi-tier programming language in which the behavior of interacting servers and clients is expressed by a single program. Hop adheres to the standard web programming style where servers elaborate HTML pages containing JavaScript code. This JavaScript code responds locally to user interactions but also (following the so-called Ajax style) requests services from remote servers. These services bring back new HTML fragments containing additional JavaScript code that replaces or modifies the state of the client. This paper presents a continuation-based denotational semantics for a sequential subset of Hop. Though restricted to a single server and a single client, this semantics takes into account the key feature of Hop, namely that the server elaborates client code to be run in the client's browser. This client code dynamically requests services from the server which, again, elaborates new client code to be run in the client's browser. This semantics details the programming model advocated by Hop and provides a sound basis for future studies such as the security of web applications and web continuations.
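To make the notion of a continuation concrete, here is a minimal continuation-passing-style fragment in Python; this is a generic illustration of the style only, not Hop's actual semantic equations (and Hop itself is not Python).

```python
# In continuation-passing style every function receives an extra argument
# k, the continuation, and "returns" by calling k with its result. A
# continuation-based denotational semantics maps each program phrase to
# such a function of its continuation.

def add_cps(a, b, k):
    k(a + b)

def mul_cps(a, b, k):
    k(a * b)

# (2 + 3) * 4, with "the rest of the computation" passed explicitly:
add_cps(2, 3, lambda s: mul_cps(s, 4, print))  # prints 20
```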
30.
The study of behavioral and neurophysiological mechanisms involved in rat spatial cognition provides a basis for the development of computational models and for robotic experimentation with goal-oriented learning tasks. These models and robotic architectures offer neurobiologists and neuroethologists alternative platforms to study, analyze and predict behaviors based on spatial cognition. In this paper we present a comparative analysis of spatial cognition in rats and robots by contrasting similar goal-oriented tasks in a cyclical maze, where studies of rat spatial cognition are used to develop computational system-level models of the hippocampus and striatum that integrate kinesthetic and visual information to produce a cognitive map of the environment and drive robot experimentation. During training, Hebbian learning and reinforcement learning, in the form of an Actor-Critic architecture, enable the robots to learn the optimal route leading to a goal from a designated fixed location in the maze. During testing, the robots exploit maximum expectations of reward stored within the previously acquired cognitive map to reach the goal from different starting positions. A detailed discussion of comparative experiments in rats and robots is presented, contrasting learning latency and characterizing behavioral procedures during navigation, such as errors associated with the selection of a non-optimal route, body rotations, normalized length of the traveled path, and hesitations. Additionally, we present results from evaluating neural activity in rats through detection of the immediate early gene Arc, verifying the engagement of the hippocampus and striatum in information processing while solving the cyclical maze task, much as the robots use our corresponding models of those neural structures.
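As background, here is a minimal tabular Actor-Critic update of the kind the abstract refers to: the critic's temporal-difference error adjusts both the state-value estimate and the actor's action preferences. States, rewards and learning rates are illustrative placeholders, not the paper's hippocampus/striatum model.

```python
from collections import defaultdict

# Tabular Actor-Critic: the critic learns V(s) by TD learning; the same
# TD error delta reinforces or weakens the actor's preference H(s, a).
# All names and values below are illustrative placeholders.
ALPHA_CRITIC, ALPHA_ACTOR, GAMMA = 0.1, 0.1, 0.95
V = defaultdict(float)   # critic: state values
H = defaultdict(float)   # actor: action preferences

def td_update(s, a, r, s_next):
    delta = r + GAMMA * V[s_next] - V[s]   # TD error
    V[s] += ALPHA_CRITIC * delta           # critic update
    H[(s, a)] += ALPHA_ACTOR * delta       # actor update
    return delta

# One illustrative transition toward a rewarded goal location:
td_update("start", "turn_left", 0.0, "corridor")
td_update("corridor", "forward", 1.0, "goal")
```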