21.
This paper discusses an alternative approach to parameter optimization for well-known prototype-based learning algorithms, which conventionally minimize an objective function via gradient search. The proposed approach applies a stochastic optimization technique, the cross-entropy (CE) method, to efficiently tackle the initialization-sensitivity problem of the original generalized learning vector quantization (GLVQ) algorithm and its variants. Results presented in this paper indicate that the CE method can be successfully applied to this kind of problem on real-world data sets. To the best of the authors' knowledge, this is the first use of the CE method in prototype-based learning.
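As a rough illustration of the idea (not the paper's exact formulation), the CE method can be sketched as repeatedly sampling candidate prototype positions from a Gaussian, keeping the best samples, and refitting the Gaussian to them; the data set and quantization objective below are purely illustrative:

```python
import random

def ce_optimize(objective, dim, iters=50, pop=100, n_elite=10, seed=0):
    """Cross-entropy method: sample candidates from a Gaussian,
    keep the best ("elite") fraction, refit the Gaussian to them."""
    rng = random.Random(seed)
    mu = [0.0] * dim
    sigma = [5.0] * dim
    for _ in range(iters):
        samples = [sorted(rng.gauss(mu[d], sigma[d]) for d in range(dim))
                   for _ in range(pop)]   # sorting removes prototype-label symmetry
        samples.sort(key=objective)       # lower objective is better
        elites = samples[:n_elite]
        for d in range(dim):
            vals = [s[d] for s in elites]
            mu[d] = sum(vals) / n_elite
            var = sum((v - mu[d]) ** 2 for v in vals) / n_elite
            sigma[d] = max(var ** 0.5, 1e-6)
    return mu

# Toy objective: quantization error of two prototypes on 1-D data
# clustered around 1.0 and 5.0.
data = [0.9, 1.1, 1.0, 4.9, 5.1, 5.0]
def quant_error(protos):
    return sum(min((x - p) ** 2 for p in protos) for x in data)

protos = ce_optimize(quant_error, dim=2)
```

Because the sampling explores globally rather than following a local gradient, the result does not depend on a good initial prototype placement, which is exactly the sensitivity the paper targets.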
22.
Engineering with Computers - Structural health monitoring (SHM) and Non-destructive Damage Identification (NDI) using responses of structures under dynamic excitation have an imperative role in the...
23.
Six Sigma is a quality philosophy and methodology that aims to achieve operational excellence and delighted customers. The cost of poor quality depends on the sigma quality level and its corresponding failure rate. Six Sigma provides a well-defined target of 3.4 defects per million. This failure rate is commonly evaluated under the assumption that the process is normally distributed and its specifications are two-sided; however, these assumptions may lead to quality-improvement strategies based on inaccurate evaluations of quality costs and profits. This paper defines the relationship between failure rate and sigma quality level for inverse Gaussian processes. The inverse Gaussian distribution has considerable application in describing cycle times, product life, employee service times, and so on. We show that for these processes, attaining the Six Sigma target failure rate requires greater quality effort than for normal processes. A generic model is presented to characterise cycle times in manufacturing systems: the asymptotic production is described by a drifted Brownian motion, and the cycle time is evaluated using first-passage-time theory for a Wiener process reaching a boundary. The proposed method estimates the effort actually required to reach Six Sigma goals.
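For reference, the conventional normal-theory calculation behind the 3.4-defects-per-million figure (a one-sided spec at k sigmas with the customary 1.5-sigma long-term mean shift) can be reproduced with the standard-normal tail; this is the baseline the paper contrasts with the inverse Gaussian case:

```python
from math import erfc, sqrt

def dpm_normal(sigma_level, shift=1.5):
    """Defects per million for a normally distributed characteristic with a
    one-sided upper spec at `sigma_level` sigmas, allowing the conventional
    1.5-sigma long-term mean shift."""
    z = sigma_level - shift
    tail = 0.5 * erfc(z / sqrt(2.0))   # P(X > z) for a standard normal
    return tail * 1e6

print(round(dpm_normal(6), 1))  # ≈ 3.4 defects per million
```

The paper's point is that when the characteristic is inverse Gaussian rather than normal, this tail calculation understates the effort needed to hit the same 3.4 DPM target.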
24.
Retrieving the most relevant video frames that contain the object specified in a given query (query-by-region) remains a challenging task. Two common challenges for region-based retrieval approaches are accurately extracting or segmenting objects and selecting a proper matching strategy. This paper addresses these problems with a retrieval approach that uses a new region-based matching technique equipped with an effective object-representation method. In the first stage, the approach selects the most informative instances of each object appearing in the video by applying an adapted clustering algorithm to the extracted features. In the retrieval stage, the new matching technique returns the most relevant video sequences by matching a given region against the identified representative object instances based on their similarity scores. The proposed approach is evaluated on standard datasets, and the results demonstrate a 31% improvement in retrieval performance over other state-of-the-art methods.
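A minimal sketch of the retrieval-stage idea, assuming each frame has already been reduced to a small set of representative region descriptors (the descriptors and frame IDs below are invented for illustration): score each frame by its best cosine match against the query region and rank frames by that score.

```python
from math import sqrt

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def rank_frames(query, frame_regions):
    """Score each frame by the best cosine similarity between the query
    region descriptor and that frame's representative region descriptors."""
    scores = [(max(cosine(query, r) for r in regions), fid)
              for fid, regions in frame_regions.items()]
    return [fid for s, fid in sorted(scores, reverse=True)]

# Toy 3-D descriptors: frame "f2" contains a region close to the query.
frames = {"f1": [[1.0, 0.0, 0.0]],
          "f2": [[0.1, 0.9, 0.1], [0.0, 0.0, 1.0]],
          "f3": [[0.5, 0.5, 0.0]]}
print(rank_frames([0.0, 1.0, 0.0], frames))  # ['f2', 'f3', 'f1']
```

The clustering step in the paper's first stage serves precisely to keep `frame_regions` small: only a few informative instances per object need to be matched, not every detected region.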
25.
We develop a novel coarse-grained contact model for Discrete Element Method simulations of TiO2 nanoparticle films subjected to mechanical stress. All model elements and parameters are derived in a self-consistent and physically sound way from all-atom Molecular Dynamics simulations of interacting particles and surfaces. In particular, the nature of atomic-scale friction and dissipation effects is taken into account by explicitly modelling the surface features and adsorbed water layers that strongly mediate the particle-particle interactions. The quantitative accuracy of the coarse-grained model is validated against all-atom simulations of TiO2 nanoparticle agglomerates under tensile stress. Moreover, its predictive power is demonstrated with calculations of force-displacement curves of entire nanoparticle films probed with force spectroscopy. The simulation results are compared with Atomic Force Microscopy and Transmission Electron Microscopy experiments.
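For readers unfamiliar with DEM contact models: a generic (textbook) pairwise normal contact is a spring-dashpot force that acts only while two particles overlap. The sketch below shows that generic form, not the authors' coarse-grained model, whose stiffness and dissipation terms are instead fitted to all-atom MD; the stiffness `k` and damping `c` values here are arbitrary.

```python
from math import sqrt

def normal_contact_force(x1, x2, v1, v2, r1, r2, k=1e3, c=5.0):
    """Linear spring-dashpot normal force of a generic DEM contact:
    repulsive when the particles overlap, zero otherwise.
    Returns the force acting on particle 1."""
    d = [b - a for a, b in zip(x1, x2)]
    dist = sqrt(sum(q * q for q in d))
    overlap = (r1 + r2) - dist
    if overlap <= 0.0 or dist == 0.0:
        return [0.0, 0.0, 0.0]
    n = [q / dist for q in d]                      # unit normal, 1 -> 2
    vn = sum((vb - va) * nc for va, vb, nc in zip(v1, v2, n))
    mag = k * overlap - c * vn                     # spring + damping
    return [-mag * nc for nc in n]

# Two unit-radius particles overlapping by 0.5 along x, at rest:
f = normal_contact_force([0, 0, 0], [1.5, 0, 0],
                         [0, 0, 0], [0, 0, 0], 1.0, 1.0)
```

In the paper's model, the role played here by `k` and `c` is instead taken by interaction laws extracted from MD, including the adsorbed-water-mediated adhesion and friction.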
26.
This paper presents the design of a smart humidity sensor. First, a capacitive MEMS-based humidity sensor is modelled using neural networks in the Matlab environment to accurately capture the nonlinearity, hysteresis effect, and cross-sensitivity of the sensor output; training produces an analytical "Capacitive Humidity Sensor" (CHS) model. Because the sensor is capacitive, the model obtained in PSPICE expresses humidity variation as a capacitance variation, which is a passive quantity and must be converted to an active one; a capacitance-to-voltage conversion is therefore realised with a switched-capacitor circuit (SCC). In a second step, a linearization program in Matlab is applied to the CHS response in order to build a database for a correction element, the CORRECTOR. The bias and weight matrices obtained from training are then used to implement both the CHS model and the CORRECTOR model in the PSPICE simulator: the former reproduces the CHS output, while the latter corrects its nonlinear response and eliminates its hysteresis effect and cross-sensitivity. The three blocks (the CHS model, the CORRECTOR model, and the capacitance-to-voltage converter) together constitute the smart sensor.
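The capacitance-to-voltage stage can be summarised by the idealized switched-capacitor transfer relation, in which each clock phase transfers the sensed charge onto a reference capacitor so that the output voltage is proportional to the sensed capacitance; the component values below are illustrative, not taken from the paper:

```python
def scc_output_voltage(c_sense, c_ref, v_ref=5.0):
    """Idealized switched-capacitor converter: each clock phase transfers
    the charge Q = c_sense * v_ref onto a reference feedback capacitor,
    giving V_out = v_ref * (c_sense / c_ref)."""
    return v_ref * c_sense / c_ref

# A 150 pF humidity-dependent capacitance against a 100 pF reference:
v = scc_output_voltage(150e-12, 100e-12)   # -> 7.5 V
```

Any residual nonlinearity between humidity and `c_sense` remains after this ideal conversion, which is why the CORRECTOR block is still needed downstream.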
27.
This paper introduces a self-organizing map dedicated to clustering, analysis, and visualization of categorical data. Usually, when dealing with categorical data, topological maps rely on an encoding stage: categorical data are converted into numerical vectors, and traditional numerical algorithms (SOM) are run on them. In the present paper, we propose a novel probabilistic formalism of the Kohonen map dedicated to categorical data, in which neurons are represented by probability tables; no encoding of the variables is needed. We evaluate the effectiveness of our model in four examples using real data. Our experiments show that the model yields good-quality results when dealing with categorical data.
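One way such a map could work (a toy sketch under assumed update rules, not the authors' exact probabilistic formalism): each neuron holds a probability table per categorical variable, the best-matching unit maximizes the likelihood of the observation, and tables are nudged toward the observed categories with a neighborhood-weighted rate.

```python
import math

def train_categorical_som(data, categories, grid=3, epochs=20, lr0=0.5):
    """Toy categorical SOM on a 1-D neuron grid. `categories[v]` lists the
    possible values of variable v; each neuron stores one probability
    table per variable, initialized uniform."""
    nvars = len(categories)
    neurons = [[{c: 1.0 / len(categories[v]) for c in categories[v]}
                for v in range(nvars)] for _ in range(grid)]
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)
        for obs in data:
            # best-matching unit = neuron with highest observation likelihood
            bmu = max(range(grid),
                      key=lambda i: math.prod(neurons[i][v][obs[v]]
                                              for v in range(nvars)))
            for i in range(grid):
                h = math.exp(-abs(i - bmu))          # neighborhood kernel
                for v in range(nvars):
                    for c in categories[v]:
                        target = 1.0 if c == obs[v] else 0.0
                        neurons[i][v][c] += lr * h * (target - neurons[i][v][c])
    return neurons

data = [("a", "x")] * 5 + [("b", "y")] * 5
cats = [("a", "b"), ("x", "y")]
neurons = train_categorical_som(data, cats)
```

Note that the update preserves normalization: since the one-hot targets sum to 1, each table keeps summing to 1, so no separate encoding or renormalization of the categorical variables is required.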
28.
This paper describes a method for spatial representation, place recognition, and qualitative self-localization in dynamic indoor environments based on omnidirectional images. This is a difficult problem because of the perceptual ambiguity of the acquired images and their weak robustness to noise and to the geometric and photometric variations of real-world scenes. The spatial representation is built from invariant signatures grounded in invariance theory, for which we adapt Haar invariant integrals to the particular geometry and image transformations of catadioptric omnidirectional sensors. Combining simple image features through integration over visual transformations and robot motion can thus build discriminant percepts of the robot's spatial location. We further analyze the invariance properties of the signatures and the apparent relation between their similarity measures and metric distances. These invariance properties can be exploited in a hierarchical process, from global room recognition to local, coarse robot localization.
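The core of a Haar invariant integral is averaging a local feature over the whole transformation group. For a panoramic (omnidirectional) image, rotating the robot circularly shifts the image columns, so averaging over all circular shifts yields a rotation-invariant signature. A minimal 1-D demonstration (the pixel row and kernel are illustrative, not the paper's features):

```python
def haar_invariant_signature(row, kernel):
    """Haar integral over the rotation group of a panoramic image row:
    average a local 2-pixel feature over all circular shifts. Rotating
    the robot shifts the row, so the signature is unchanged."""
    n = len(row)
    return sum(kernel(row[i], row[(i + 1) % n]) for i in range(n)) / n

row = [3, 1, 4, 1, 5, 9, 2, 6]
shifted = row[3:] + row[:3]            # the same view after a rotation
f = lambda a, b: a * b                 # toy local feature
sig1 = haar_invariant_signature(row, f)
sig2 = haar_invariant_signature(shifted, f)
# sig1 == sig2: the signature is invariant to the rotation
```

The paper's contribution is adapting this integration to the actual geometry of catadioptric sensors and extending it over robot motion, so that signatures stay discriminant between places while being invariant within one.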
29.
The design of complex inter-enterprise business processes (IEBP) is generally performed in a modular way: each process is designed separately, and the whole IEBP is obtained by composition. Even if such a modular approach is intuitive and simplifies the design problem, it raises the issue that correct behavior of each business process of the IEBP taken alone does not guarantee correct behavior of the composed IEBP (i.e., properties are not preserved by composition). Proving correctness of the composed process is strongly related to the model-checking problem for a system model. Among other techniques, the symbolic observation graph approach has proven very helpful for efficient model checking in general. Since it relies heavily on abstraction and thus hides detailed information about system components that is irrelevant to the correctness decision, it is promising to transfer this concept to the problem raised in this paper: how can the symbolic observation graph technique be adapted and employed for process composition? Answering this question is the aim of this paper.
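To make the abstraction concrete, here is a simplified, explicit-state observation graph (the real technique stores the state sets symbolically, e.g. as BDDs): nodes are sets of states closed under unobservable moves, and edges carry only the observed actions. The transition system below is invented for illustration.

```python
def observation_graph(transitions, initial, observed):
    """Explicit-state observation graph of a labelled transition system.
    `transitions` is a list of (src, action, dst); actions not in
    `observed` are abstracted away inside the aggregate nodes."""
    def closure(states):
        stack, seen = list(states), set(states)
        while stack:
            s = stack.pop()
            for src, act, dst in transitions:
                if src == s and act not in observed and dst not in seen:
                    seen.add(dst)
                    stack.append(dst)
        return frozenset(seen)

    start = closure({initial})
    nodes, edges, todo = {start}, set(), [start]
    while todo:
        node = todo.pop()
        for act in observed:
            targets = {dst for src, a, dst in transitions
                       if a == act and src in node}
            if targets:
                succ = closure(targets)
                edges.add((node, act, succ))
                if succ not in nodes:
                    nodes.add(succ)
                    todo.append(succ)
    return nodes, edges

trans = [(0, "tau", 1), (1, "a", 2), (2, "tau", 3), (3, "b", 0)]
nodes, edges = observation_graph(trans, 0, {"a", "b"})
# four concrete states collapse to two aggregate nodes linked by "a" and "b"
```

For process composition, only the interface actions of each business process need to be observed, which is exactly why the abstraction scales: each component's internal behavior collapses into the aggregate nodes.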
30.
In this research, a neuro-fuzzy system (NFS) is applied to the problem of time-delay estimation: estimating a constant time delay embedded within a received noisy, delayed replica of a known reference signal. The received signal is filtered and discrete-cosine-transformed into DCT coefficients, into which the time delay is thereby encoded. Only the few DCT coefficients with high sensitivity to time-delay variations are used as input to the NFS. The NFS is suited to time-delay estimation because of its ability to learn highly nonlinear relationships and encode them in its internal structure; this capability is used to learn the nonlinear relationship between the DCT coefficients of a delayed reference signal and the embedded time delay. The NFS is trained with several hundred training sets in which the highly sensitive DCT coefficients are the inputs and the corresponding time delay is the output. In the testing phase, DCT coefficients of delayed signals were applied as input to the NFS, and the system produced accurate time-delay estimates compared with those obtained by the classical cross-correlation technique.
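The coefficient-selection step can be illustrated with a toy reference signal (the rectangular pulse, signal length, and delays below are assumptions for illustration, not the paper's setup): compute the DCT-II at two adjacent delays and rank the coefficients by how much they change.

```python
from math import cos, pi

def dct2(x):
    """Type-II DCT (unnormalized), as computed by standard DCT routines."""
    n = len(x)
    return [sum(x[i] * cos(pi * k * (2 * i + 1) / (2 * n)) for i in range(n))
            for k in range(n)]

def delayed_pulse(n, delay, width=8):
    """Reference rectangular pulse shifted by an integer delay."""
    return [1.0 if delay <= i < delay + width else 0.0 for i in range(n)]

# Sensitivity of each DCT coefficient to a unit change in delay:
n = 64
base = dct2(delayed_pulse(n, 10))
moved = dct2(delayed_pulse(n, 11))
sens = [abs(b - m) for b, m in zip(base, moved)]
top = sorted(range(n), key=lambda k: sens[k], reverse=True)[:5]
# `top` holds the indices of the coefficients most sensitive to delay;
# only these few would be fed to the neuro-fuzzy estimator.
```

Note that the DC coefficient (index 0) has zero sensitivity here, since a pure shift leaves the signal's sum unchanged: the delay information lives entirely in the higher-frequency coefficients.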