Results by access type: paid full text 924; free 43; domestic free 1.
Results by subject: Electrical engineering 9; Chemical industry 275; Metalworking 20; Machinery and instruments 17; Building science 30; Mining engineering 1; Energy and power 21; Light industry 98; Hydraulic engineering 11; Petroleum and natural gas 8; Radio and electronics 95; General industrial technology 147; Metallurgy 46; Atomic energy technology 12; Automation technology 178.
Results by year: 2024 (2); 2023 (12); 2022 (21); 2021 (29); 2020 (17); 2019 (18); 2018 (28); 2017 (26); 2016 (35); 2015 (26); 2014 (30); 2013 (69); 2012 (51); 2011 (74); 2010 (53); 2009 (53); 2008 (59); 2007 (43); 2006 (47); 2005 (38); 2004 (28); 2003 (19); 2002 (29); 2001 (15); 2000 (18); 1999 (6); 1998 (16); 1997 (15); 1996 (11); 1995 (8); 1994 (13); 1993 (9); 1992 (8); 1991 (4); 1990 (6); 1989 (5); 1988 (7); 1987 (2); 1986 (2); 1985 (3); 1984 (1); 1983 (3); 1982 (1); 1981 (4); 1976 (1); 1973 (2); 1958 (1).
Sort order: 968 results found (search time: 15 ms).
51.
The efficiency of modern optimization methods, coupled with increasing computational resources, has made it possible for real-time optimization algorithms to act in safety-critical roles. A considerable body of mathematical proofs about on-line optimization algorithms can be leveraged to assist in the development and verification of their implementations. In this paper, we demonstrate how theoretical proofs about real-time optimization algorithms can be used to describe functional properties at the level of the code, thereby making them accessible to the formal-methods community. The running example used in this paper is a generic semi-definite programming solver. Semi-definite programs can encode a wide variety of optimization problems and can be solved to a given accuracy in polynomial time. We describe a top-down approach that transforms a high-level analysis of the algorithm into useful code annotations. We formulate some general remarks on how such a task can be incorporated into a convex-programming autocoder. We then take a first step towards the automatic verification of the optimization program by identifying key issues to be addressed in future work.
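As a concrete illustration of the problem class (not of the paper's solver or its code annotations), the following sketch states and solves a small generic semi-definite program with the cvxpy modelling library; the data matrices C and A are hypothetical.

```python
# Minimal sketch of a generic semi-definite program (hypothetical data),
# illustrating the class of problems an SDP solver addresses.
import cvxpy as cp
import numpy as np

n = 3
rng = np.random.default_rng(0)
C = rng.standard_normal((n, n)); C = (C + C.T) / 2   # symmetric cost matrix
A = rng.standard_normal((n, n)); A = (A + A.T) / 2   # symmetric constraint matrix

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0,                     # X positive semi-definite
               cp.trace(A @ X) == 1]       # one linear equality constraint
problem = cp.Problem(cp.Minimize(cp.trace(C @ X)), constraints)
problem.solve()                            # interior-point solvers run in polynomial time

print("optimal value:", problem.value)
```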
52.
The objective of fault-tolerant control (FTC) is to minimise the effect of faults on system performance (stability, trajectory tracking, etc.). However, most existing FTC methods continue to force the system to follow the pre-fault trajectories without considering the reduction in available control resources caused by actuator faults. Forcing the system to follow the same trajectories as before the fault occurred may result in actuator saturation and system instability. Thus, the pre-fault objectives should be redefined as a function of the remaining resources in order to avoid potential saturation. The main contribution of this paper is a flatness-based trajectory planning/re-planning method that can be combined with any active FTC approach. The work considers the case of over-actuated systems, for which a new idea is proposed to evaluate the severity of the faults that have occurred. In addition, the trajectory planning/re-planning approach is formulated as an optimisation problem based on an analysis of the attainable-effort domain in the fault-free and faulty cases. The proposed approach is applied to two satellite systems in a rendezvous mission.
53.
The cover image, by Bruna A. Bregadiolli et al., is based on the Research Article "Towards the synthesis of poly(azafulleroid)s: main chain fullerene oligomers for organic photovoltaic devices", DOI: 10.1002/pi.5419.
54.
We introduce a new geometric method to generate sphere packings with restricted overlap values. Sample generation is an important, but time-consuming, step that precedes a calculation performed with the discrete element method (DEM). At present, there is no software dedicated to the DEM comparable to the mesh generators that exist for finite element methods (FEM). A practical objective of the method is to build very large sphere packings (several hundred thousand spheres) in a few minutes, rather than in the several days required by current dynamic methods. The developed algorithm uses a new geometric procedure to position the polydisperse spheres very efficiently in a tetrahedral mesh. The algorithm, implemented in YADE-OPEN DEM (open-source software), consists of filling tetrahedral meshes with spheres. In addition to the features of the tetrahedral mesh, the input parameters are the minimum and maximum radii (or their size ratio) and the magnitude of the authorised overlaps. The filling procedure stops when a target solid fraction or number of spheres is reached. Based on this method, an efficient tool can be designed for DEM users, both researchers and engineers. The generated packings can be isotropic, and the number of contacts per sphere is very high owing to the geometric procedure. In this paper, various properties of the generated packings are characterised, and examples from real industrial problems are presented to show how the method can be used. The current C++ version of this packing algorithm is part of YADE-OPEN DEM [20], available on the web (https://yade-dem.org).
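The abstract does not spell out the geometric filling procedure itself; as a rough, hypothetical illustration of the inputs it describes (minimum and maximum radii and an authorised overlap magnitude), the following naive rejection-sampling sketch fills a single tetrahedron with polydisperse spheres. It is far simpler and slower than the constructive method implemented in YADE and is shown only to fix ideas.

```python
# Naive rejection-sampling sketch (not the paper's geometric algorithm):
# fill one tetrahedron with polydisperse spheres while bounding their overlap.
import numpy as np

def fill_tetrahedron(vertices, r_min, r_max, max_overlap, n_target,
                     max_attempts=100000, seed=0):
    rng = np.random.default_rng(seed)
    centers, radii = [], []
    attempts = 0
    while len(centers) < n_target and attempts < max_attempts:
        attempts += 1
        w = rng.dirichlet(np.ones(4))      # uniform barycentric coordinates
        c = w @ vertices                   # candidate centre inside the tetrahedron
        r = rng.uniform(r_min, r_max)      # candidate radius
        accept = True
        for c2, r2 in zip(centers, radii):
            overlap = (r + r2) - np.linalg.norm(c - c2)
            if overlap > max_overlap * min(r, r2):   # beyond the authorised overlap
                accept = False
                break
        if accept:
            centers.append(c)
            radii.append(r)
    return np.array(centers), np.array(radii)

# Unit tetrahedron, radii between 0.02 and 0.05, 5% authorised overlap.
tet = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
centers, radii = fill_tetrahedron(tet, 0.02, 0.05, max_overlap=0.05, n_target=50)
print(len(radii), "spheres placed")
```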
55.
This work describes a collaborative effort to define and apply a protocol for the rational selection of a general-purpose screening library, to be used by the screening platforms affiliated with the EU-OPENSCREEN initiative. It is designed as a standard source of compounds for primary screening against novel biological targets, at the request of research partners. Given the general nature of the potential applications of this compound collection, the focus of the selection strategy lies on ensuring chemical stability, absence of reactive compounds, screening-compliant physicochemical properties, loose compliance with drug-likeness criteria (as drug design is a major, but not exclusive, application), and maximal diversity/coverage of chemical space, aimed at providing hits for a wide spectrum of druggable targets. Finally, practical availability and cost issues cannot be avoided. The main goal of this publication is to inform potential future users of this library about its conception, sources, and characteristics. The outline of the selection procedure, notably of the filtering rules designed by a large committee of European medicinal chemists and chemoinformaticians, may be of general methodological interest for the screening/medicinal chemistry community. The task of selecting 200K molecules out of a pre-filtered set of 1.4M candidates was shared by five independent European research groups, each picking a subset of 40K compounds according to their own in-house methodology and expertise. An in-depth analysis of the chemical-space coverage of the library serves not only to characterize the collection but also to compare the various chemoinformatics-driven procedures for selecting maximal-diversity sets. The compound selections contributed by the participating groups were mapped onto general-purpose self-organizing maps (SOMs) built on the basis of marketed drugs and bioactive reference molecules. In this way, the occupancy of chemical space by the EU-OPENSCREEN library could be compared directly with the distributions of known bioactives of various classes. This mapping highlights the relevance of the selection and shows how the consensus reached by merging the five different 40K selections contributes to that relevance. The approach also allows one to readily identify subsets of target- or target-class-oriented compounds from the EU-OPENSCREEN library to suit the needs of the diverse range of potential users. The final EU-OPENSCREEN library, assembled by merging five independent selections of 40K compounds from various expert groups, represents an excellent example of a Europe-wide collaborative effort toward the common objective of building best-in-class European open screening platforms.
56.
Data reconciliation consists of modifying noisy or unreliable data in order to make them consistent with a mathematical model (here, a material flow network). The conventional approach relies on least-squares minimization. Here, we use a fuzzy set-based approach, replacing Gaussian likelihood functions by fuzzy intervals and using a leximin criterion. We show that the fuzzy-set setting provides a generalized approach to the choice of estimated values that is more flexible and less dependent on oftentimes debatable probabilistic justifications. It potentially encompasses interval-based formulations and the least-squares method, obtained by choosing appropriate membership functions and aggregation operations. This paper also makes clear that, under the fuzzy-set approach, data reconciliation is viewed as an information fusion problem, as opposed to the statistical tradition, which solves an estimation problem.
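For reference, the conventional least-squares reconciliation that the fuzzy approach generalises has a closed-form solution; the sketch below reconciles hypothetical flow measurements y so that they exactly satisfy a linear material balance A x = 0, weighting deviations by the measurement variances.

```python
# Conventional least-squares data reconciliation (the baseline the fuzzy approach generalises).
# Hypothetical example: three measured flows around one node, balance f1 - f2 - f3 = 0.
import numpy as np

y = np.array([10.2, 6.1, 3.5])          # noisy flow measurements
W = np.diag([0.5**2, 0.3**2, 0.3**2])   # measurement variances
A = np.array([[1.0, -1.0, -1.0]])       # material balance: f1 = f2 + f3

# Minimise (x - y)^T W^{-1} (x - y) subject to A x = 0 (closed-form solution).
x = y - W @ A.T @ np.linalg.solve(A @ W @ A.T, A @ y)

print("reconciled flows:", x)            # satisfy the balance exactly
print("balance residual:", A @ x)        # ~0
```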
57.
Recasting MLF     
The language MLF is a proposal for a new type system that supersedes both ML and System F and allows for efficient, predictable, and complete type inference for partially annotated terms. In this work, we revisit the definition of MLF, following a more progressive approach and focusing on the design space and expressiveness. We introduce a Curry-style version iMLF of MLF and provide an interpretation of iMLF types as instantiation-closed sets of System-F types, from which we derive the definition of the type-instance relation in iMLF. We give an equivalent syntactic definition of type-instance, presented as a set of inference rules. We also show an encoding of iMLF into the closure of Curry-style System F by let-expansion. We derive the Church-style version eMLF by refining the types of iMLF so as to distinguish between given and inferred polymorphism. We show an embedding of ML in eMLF and a straightforward encoding of System F into eMLF.
58.
In this paper, we present a technique for measuring visibility distances under foggy weather conditions using a camera mounted on board a moving vehicle. Our research has focused in particular on the problem of detecting daytime fog and estimating visibility distances; thanks to these efforts, an original method has been developed, tested and patented. The approach consists of dynamically implementing Koschmieder's law. Our method enables computing the meteorological visibility distance, a measure defined by the International Commission on Illumination (CIE) as the distance beyond which a black object of an appropriate dimension is perceived with a contrast of less than 5%. Our proposed solution is an original one, featuring the advantage of using a single camera and requiring only the presence of the road and sky in the scene. As opposed to other methods that require the explicit extraction of the road, this method imposes fewer constraints, since it is applicable with no more than the extraction of a homogeneous surface containing a portion of the road and sky within the image. This image preprocessing also serves to identify how well the processed image satisfies the hypotheses of Koschmieder's model.

Nicolas Hautière graduated from the École Nationale des Travaux Publics de l'État, France (2002). He received his M.S. and Ph.D. degrees in computer vision in 2002 and 2005, respectively, from Saint-Étienne University (France). Since 2002, he has been a researcher at the Laboratoire Central des Ponts et Chaussées (LCPC), Paris, France. His research interests include traffic engineering, computer vision, and pattern recognition.

Jean-Philippe Tarel graduated from the École Nationale des Ponts et Chaussées, Paris, France (1991). He received his Ph.D. degree in applied mathematics from Paris IX-Dauphine University in 1996 and was with the Institut National de Recherche en Informatique et Automatique (INRIA) from 1991 to 1996. From 1997 to 1998, he was a research associate at Brown University, USA. Since 1999, he has been a researcher at the Laboratoire Central des Ponts et Chaussées (LCPC), Paris, France, and from 2001 to 2003 he was at INRIA. His research interests include computer vision, pattern recognition, and shape modeling.

Jean Lavenant graduated from the École Nationale des Travaux Publics de l'État, Lyon, France (2001). He received an M.S. degree in computer vision from Jean Monnet University of Saint-Étienne in 2001. In 2001, he was a researcher at the Laboratoire Central des Ponts et Chaussées (LCPC). In 2002, he was a system engineer in Chicago (USA). He is currently an engineer for the French ministry of transport.

Didier Aubert received his M.S. and Ph.D. degrees in 1985 and 1989, respectively, from the National Polytechnic Institute of Grenoble (INPG). From 1989 to 1990, he worked as a research scientist on the development of an automatic road-following system for the NAVLAB at Carnegie Mellon University. From 1990 to 1994, he worked in the research department of a private company (ITMI), where he was the project leader of several projects dealing with computer vision. He has been a researcher at INRETS since 1995 and works on road traffic measurement, crowd monitoring, automated highway systems, and driving assistance systems for vehicles. He is an image-processing expert for several companies, teaches at universities (Paris VI, Paris XI, ENPC, ENST), and is on the editorial board of RTS (Research - Transport - Safety).
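The quantity being estimated can be made explicit: under Koschmieder's law the apparent contrast of a black object against the sky decays as exp(-k d), so the CIE meteorological visibility distance at the 5% contrast threshold mentioned above is V_met = -ln(0.05)/k ≈ 3/k. A minimal sketch, with a hypothetical extinction coefficient k:

```python
# Meteorological visibility distance from Koschmieder's law:
# apparent contrast C(d) = C0 * exp(-k * d); for a black object (C0 = 1)
# and the CIE 5% threshold, V_met = -ln(0.05) / k  (approximately 3 / k).
import math

def meteorological_visibility(k):
    """Visibility distance in metres for an extinction coefficient k (1/m)."""
    return -math.log(0.05) / k

print(meteorological_visibility(0.03))   # hypothetical dense fog, k = 0.03 m^-1 -> ~100 m
```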
59.
This article aims to further improve a previously developed design for Acrobot walking based on partial exact feedback linearisation of order 3. Such an exact system transformation leads to an almost linear system in which the error dynamics along the trajectory to be tracked form a four-dimensional linear time-varying system with only three time-varying entries, the remaining entries being either zero or one. In this way, exponentially stable tracking can be obtained by quadratically stabilising a linear system with polytopic uncertainty. The present improvement applies linear matrix inequality (LMI) methods to solve this problem numerically. This careful analysis significantly improves on previously known approaches. Numerical simulations of Acrobot walking based on the above-mentioned LMI design are demonstrated as well.
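The LMI step can be illustrated generically. The following sketch is a textbook quadratic-stability feasibility check for a polytopic linear system with hypothetical vertex matrices, not the paper's specific tracking-error LMIs: it searches for a single matrix P ≻ 0 such that Aᵢᵀ P + P Aᵢ ≺ 0 at every vertex.

```python
# Generic quadratic-stability LMI for a polytopic linear system (hypothetical vertices):
# find P > 0 such that Ai' P + P Ai < 0 for every vertex Ai of the polytope.
import cvxpy as cp
import numpy as np

A_vertices = [np.array([[0.0, 1.0], [-2.0, -1.0]]),
              np.array([[0.0, 1.0], [-3.0, -1.5]])]
n = 2
eps = 1e-6

P = cp.Variable((n, n), symmetric=True)
constraints = [P >> eps * np.eye(n)]
constraints += [Ai.T @ P + P @ Ai << -eps * np.eye(n) for Ai in A_vertices]

prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve()
print(prob.status)   # "optimal" means a common quadratic Lyapunov function exists
```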
60.
Hybrid Approach for Addressing Uncertainty in Risk Assessments (cited 3 times: 0 self-citations, 3 citations by others)
Parameter uncertainty is a major aspect of the model-based estimation of the risk of human exposure to pollutants. The Monte Carlo method, which applies probability theory to address model parameter uncertainty, relies on a statistical representation of the available information. In recent years, other uncertainty theories have been proposed as alternative approaches for addressing model parameter uncertainty in situations where the available information is insufficient to identify statistically representative probability distributions, due in particular to data scarcity. The simplest such theory is possibility theory, which uses so-called fuzzy numbers to represent model parameter uncertainty. In practice, it may occur that certain model parameters can be reasonably represented by probability distributions, because there are sufficient data available to substantiate such distributions by statistical analysis, while others are better represented by fuzzy numbers (due to data scarcity). The question then arises as to how these two modes of representing model parameter uncertainty can be combined for the purpose of estimating the risk of exposure. This paper proposes an approach (termed a hybrid approach) that combines Monte Carlo random sampling of probability distribution functions with fuzzy calculus. The approach is applied to a real case of estimating human exposure, via vegetable consumption, to cadmium present in the surficial soils of an industrial site located in the north of France. The application illustrates the potential of the proposed approach, which allows the uncertainty affecting model parameters to be represented in a way that is consistent with the information at hand. Also, because the hybrid approach takes advantage of the “rich” information provided by probability distributions, while retaining the conservative character of fuzzy calculus, it is believed to hold value in terms of a “reasonable” application of the precautionary principle.
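The hybrid propagation can be sketched generically (hypothetical model and numbers, not the cadmium case study): probabilistic parameters are sampled by Monte Carlo, while for each sample the fuzzy parameters are propagated by interval analysis over alpha-cuts of a triangular fuzzy number, yielding a family of fuzzy intervals for the exposure estimate.

```python
# Hybrid Monte Carlo / fuzzy propagation sketch (hypothetical model and values).
# Probabilistic parameter: soil concentration, sampled by Monte Carlo.
# Fuzzy parameter: a plant-uptake (transfer) factor, a triangular fuzzy number
# propagated through alpha-cuts by interval analysis (the model is monotone).
import numpy as np

rng = np.random.default_rng(0)
n_samples = 1000
alphas = np.linspace(0.0, 1.0, 11)

# Triangular fuzzy transfer factor (low, mode, high) -- a data-scarce parameter.
tf_low, tf_mode, tf_high = 0.05, 0.10, 0.20

def alpha_cut(low, mode, high, a):
    """Interval [lower, upper] of a triangular fuzzy number at membership level a."""
    return low + a * (mode - low), high - a * (high - mode)

concentration = rng.lognormal(mean=np.log(2.0), sigma=0.4, size=n_samples)  # mg/kg soil
intake = 0.3  # kg of vegetables per day (kept fixed here for brevity)

# For each Monte Carlo sample, one fuzzy interval of the dose per alpha level.
lower = np.empty((n_samples, alphas.size))
upper = np.empty((n_samples, alphas.size))
for j, a in enumerate(alphas):
    tf_lo, tf_hi = alpha_cut(tf_low, tf_mode, tf_high, a)
    lower[:, j] = concentration * tf_lo * intake   # monotone model: bounds at interval ends
    upper[:, j] = concentration * tf_hi * intake

# Summarise, e.g., the 95th percentile of the dose at full support (alpha = 0).
print("95th percentile dose, lower/upper:",
      np.percentile(lower[:, 0], 95), np.percentile(upper[:, 0], 95))
```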