Sort order: 1,466 query results in total (search time: 15 ms)
91.
When steady states largely predominate over transitional phases, steady-state simulation seems sufficient to predict the behavior of a complex system. Over the past 20 years, different modeling languages and dedicated tools have been developed to improve steady-state simulation. In this paper, the focus is on steady-state simulation for system control and design. A model combining an emission sub-model with a ship propulsion sub-model was implemented using a constraint programming (CP) approach, in order to determine the efficiency of this approach (i.e. its ability to model and solve the problem) and its complexity of implementation (i.e. the difficulties encountered during implementation). First, requirements for the steady-state simulation of complex systems are defined. Then, the CP approach is shown, through experiments, to be able to address these issues. The approach is then compared to one of the main simulation languages, Modelica. Although both approaches (i.e. Modelica and CP) are able to reverse models, the study shows that using Modelica principles for steady-state simulation involves some crippling limitations, such as the lack of handling of under- or over-constrained systems, or of inequalities. This study also shows that the constraint programming approach makes it possible to meet needs of steady-state simulation not yet covered by current approaches.
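As a rough illustration of the constraint programming style this abstract describes, the sketch below solves a toy steady-state ship model by naive generate-and-test over finite domains. The model, constants, and variable names (cubic resistance law, emission cap) are invented for the example and are not taken from the paper; a real CP tool would use interval propagation rather than enumeration.

```python
from itertools import product

def solve_csp(domains, constraints):
    """Enumerate the Cartesian product of the finite domains and keep
    every assignment satisfying all constraints (naive generate-and-test)."""
    names = list(domains)
    solutions = []
    for values in product(*domains.values()):
        env = dict(zip(names, values))
        if all(c(env) for c in constraints):
            solutions.append(env)
    return solutions

# Toy steady-state ship model (all constants invented):
#   steady state: propulsive power balances cubic hull resistance, P = k*v**3
#   emissions proportional to power, E = cf*P, capped by an inequality E <= 50
k, cf = 0.8, 0.05
domains = {
    "v": [float(i) for i in range(1, 11)],      # ship speed
    "P": [float(i) for i in range(1, 1001)],    # engine power
}
constraints = [
    lambda e: abs(e["P"] - k * e["v"] ** 3) < 0.5,   # equality, with tolerance
    lambda e: cf * e["P"] <= 50.0,                   # inequality constraint
]
solutions = solve_csp(domains, constraints)
for s in solutions:
    print(s)
```

With the tolerance on the equality, the search returns one (v, P) operating point per speed; tightening the cap to, say, E <= 10 would prune the faster operating points, which is exactly the kind of inequality handling the abstract says Modelica-style tools lack.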
92.
Objective: Time series often appear in medical databases, but few machine learning methods exist that process this kind of data properly. Most modeling techniques have been designed with a static data model in mind and are not suitable for coping with the dynamic nature of time series. Recurrent neural networks (RNNs) are often used to process time series, but only a few training algorithms exist for RNNs; these are complex and often yield poor results. Therefore, researchers often turn to traditional machine learning approaches, such as support vector machines (SVMs), which can easily be set up and trained, and combine them with feature extraction (FE) and feature selection (FS) to process the high-dimensional temporal data. Recently, a new approach called echo state networks (ESNs) has been developed to simplify the training process of RNNs. This approach allows modeling the dynamics of a system based on time series data in a straightforward way. The objective of this study is to explore the advantages of using an ESN instead of traditional classifiers combined with FE and FS for classification problems in the intensive care unit (ICU) when the input data consist of time series. While ESNs have mostly been used to predict the future course of a time series, we use the ESN model for classification instead. Although time series often appear in medical data, few medical applications of ESNs have been studied yet.
Methods and material: An ESN is used to predict the need for dialysis between the fifth and tenth day after admission to the ICU. The input time series consist of diuresis and creatinine values measured during the first 3 days after admission. Data from 830 patients were used for the study, of whom 82 needed dialysis between the fifth and tenth day after admission. The ESN is compared to two traditional classifiers, a sophisticated and a simple one, namely the support vector machine and the naive Bayes (NB) classifier. Prior to the use of the SVM and NB classifiers, FE and FS are required to reduce the number of input features and thus alleviate the curse of dimensionality. Extensive feature extraction was applied to capture both the overall properties of the time series and the correlation between the different measurements in the time series. The feature selection method is a greedy hybrid filter-wrapper method using an NB classifier, which selects in each iteration the feature that improves prediction the most and shows little multicollinearity with the already selected set. Least-squares regression with noise was used to train the linear readout function of the ESN, to mitigate sensitivity to noise and overfitting. Fisher labeling was used to deal with the unbalanced data set. Parameter sweeps were performed to determine the optimal parameter values for the different classifiers. The area under the curve (AUC) and maximum balanced accuracy are used as performance measures; the required execution time was also measured.
Results: The classification performance of the ESN shows no significant difference at the 5% level from the performance of the SVM or the NB classifier combined with FE and FS. The NB+FE+FS, with an average AUC of 0.874, has the best classification performance, followed by the ESN with an average AUC of 0.849; the SVM+FE+FS has the worst performance, with an average AUC of 0.838. The computation time needed to pre-process the data and to train and test the classifier is significantly lower for the ESN than for the SVM and NB.
Conclusion: The use of an ESN has added value in predicting the need for dialysis through the analysis of time series data. The ESN requires significantly less processing time, needs no domain knowledge, is easy to implement, and can be configured using rules of thumb.
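A minimal pure-Python sketch of the ESN classification idea described above, not the study's implementation: a small random reservoir, a ridge-regression linear readout solved by Gaussian elimination, and invented toy series (noisy rising vs. falling trends) standing in for the diuresis/creatinine data. All sizes, constants, and the classification task are assumptions for illustration.

```python
import math, random

random.seed(0)
N = 30                                   # reservoir size (arbitrary)

# Random reservoir and input weights, crudely scaled so the spectral
# radius stays below 1 (echo state property): the infinity norm of W
# bounds its spectral radius from above.
W = [[random.uniform(-0.5, 0.5) for _ in range(N)] for _ in range(N)]
norm = max(sum(abs(w) for w in row) for row in W)
W = [[0.9 * w / norm for w in row] for row in W]
Win = [random.uniform(-1.0, 1.0) for _ in range(N)]

def reservoir_state(series):
    """Drive the reservoir with the series; return final state plus a bias."""
    x = [0.0] * N
    for u in series:
        x = [math.tanh(Win[i] * u + sum(W[i][j] * x[j] for j in range(N)))
             for i in range(N)]
    return x + [1.0]

def ridge_fit(X, y, lam=1e-3):
    """Linear readout: solve (X'X + lam*I) w = X'y by Gaussian elimination."""
    d = len(X[0])
    A = [[sum(r[i] * r[j] for r in X) + (lam if i == j else 0.0)
          for j in range(d)] for i in range(d)]
    b = [sum(r[i] * t for r, t in zip(X, y)) for i in range(d)]
    for col in range(d):                          # forward elimination
        p = max(range(col, d), key=lambda r: abs(A[r][col]))
        A[col], A[p], b[col], b[p] = A[p], A[col], b[p], b[col]
        for r in range(col + 1, d):
            f = A[r][col] / A[col][col]
            b[r] -= f * b[col]
            for c in range(col, d):
                A[r][c] -= f * A[col][c]
    w = [0.0] * d                                 # back substitution
    for i in range(d - 1, -1, -1):
        w[i] = (b[i] - sum(A[i][j] * w[j] for j in range(i + 1, d))) / A[i][i]
    return w

def make_example(label):
    """Toy time series: noisy falling (label -1) or rising (label +1) trend."""
    base = [2.0 * t / 49 - 1.0 for t in range(50)]
    if label < 0:
        base.reverse()
    return [v + random.uniform(-0.3, 0.3) for v in base]

train_labels = [1.0 if i % 2 else -1.0 for i in range(60)]
w = ridge_fit([reservoir_state(make_example(y)) for y in train_labels],
              train_labels)

hits = 0
test_labels = [1.0 if i % 2 else -1.0 for i in range(40)]
for y in test_labels:
    s = sum(a * b for a, b in zip(w, reservoir_state(make_example(y))))
    hits += (s > 0) == (y > 0)
acc = hits / len(test_labels)
print("test accuracy:", acc)
```

Only the readout weights are trained, which is the simplification over full RNN training that the abstract credits to ESNs.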
93.
Owing to the dynamic nature of collaborative environments, software intended to support collaborative work should adapt itself to the different situations that may occur. This requirement is related to the concept of “context of use”, which has been considered an important aspect in the design of interactive systems. Nevertheless, current research in context-aware computing has identified two main problems with this concept: (1) most studies have focused on the context of a single user, so the context of multiple users involved in a common endeavor remains little explored, and (2) adaptability in context-aware systems generally takes into account a reduced number of contextual variables (mainly the user’s location and platform). In this paper, we first re-conceptualize the notion of “context of use” to take into account the main characteristics of collaborative environments. Based on this new notion, we then design and implement a framework that allows application developers to specify the adaptability of groupware systems in terms of the state of activities, roles, collaborators’ locations, available resources, and other typical variables of working groups. This framework has been generalized from scenarios that highlight dynamic situations present in real collaborative settings. Finally, we validate our proposal with a set of applications that are able to adapt their user interface and functionality when significant changes occur in the environment, the working group, and/or the devices used.
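As a toy illustration of the idea, not the paper's framework, the sketch below models a group-level “context of use” and a rule table mapping context conditions to UI/functionality adaptations. All class, field, rule, and adaptation names here are invented.

```python
from dataclasses import dataclass, field

@dataclass
class GroupContext:
    """Group-level 'context of use': activity state, roles, locations,
    devices and resources of the collaborators, not just one user."""
    activity_state: str                              # e.g. "editing", "reviewing"
    roles: dict = field(default_factory=dict)        # collaborator -> role
    locations: dict = field(default_factory=dict)    # collaborator -> place
    device: str = "desktop"
    resources: set = field(default_factory=set)

# adaptation rules: (predicate over the context, adaptation to apply)
RULES = [
    (lambda c: c.device == "phone",
     "compact single-pane layout"),
    (lambda c: c.activity_state == "reviewing",
     "enable annotation toolbar"),
    (lambda c: len(set(c.locations.values())) > 1,   # distributed group
     "show presence/awareness widget"),
]

def adapt(ctx):
    """Return every adaptation whose trigger matches the current context."""
    return [action for cond, action in RULES if cond(ctx)]

ctx = GroupContext("reviewing",
                   roles={"ana": "author", "luis": "reviewer"},
                   locations={"ana": "office", "luis": "home"},
                   device="phone")
print(adapt(ctx))
```

The point of the sketch is that the predicates range over group-level variables (activity state, multiple collaborators' locations) rather than just one user's location and platform.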
94.
Microsystem Technologies - This study presents the results on the feasibility of a resonant planar chemical capacitive sensor in the microwave frequency range suitable for gas detection and...
95.
The transmission line matrix (TLM) method permits modification of the excitation of modes in microstrip lines. This property gives access to phase velocities by synchronizing the excitation with the propagating modes; the selection of modes is possible through this synchronization. Group velocities are deduced from the frequency modulation due to variation of the geometrical dimensions. In a second part, two methods for simulating infinite space are proposed. Applications to the radiation patterns of the dipole antenna and the half-dipole are given. The input impedance and the resonance frequency are calculated for a printed strip dipole.
96.
Motion estimation is a computationally demanding operation in the video compression process and significantly affects the output quality of an encoded sequence. Special hardware architectures are required to achieve real-time compression performance. Many fast-search block matching motion estimation (BMME) algorithms have been developed to reduce the number of search positions and speed up computation, but they do not take into account how effectively they can be implemented in hardware. In this paper, we propose three new hardware architectures for a fast-search block matching motion estimation algorithm using Line Diamond Parallel Search (LDPS) for the H.264/AVC video coding system. These architectures use pipelining and parallel processing techniques and offer minimal latency, maximal throughput, and full utilization of hardware resources. The VHDL code has been tested and runs at high frequency on a Xilinx Virtex-5 FPGA for all three proposed architectures.
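The paper's contribution is the hardware architecture itself, but the software side of fast block matching that such designs accelerate can be sketched briefly: a classic diamond search minimizing the sum of absolute differences (SAD). This is an illustration of the family of fast-search BMME algorithms, not the LDPS algorithm of the paper; the frames, block position, and shift are invented test data.

```python
def sad(cur, ref, bx, by, dx, dy, B=8):
    """Sum of absolute differences between the BxB block of `cur` at
    (bx, by) and the candidate block of `ref` displaced by (dx, dy)."""
    total = 0
    for y in range(B):
        for x in range(B):
            ry, rx = by + dy + y, bx + dx + x
            if not (0 <= ry < len(ref) and 0 <= rx < len(ref[0])):
                return float("inf")          # candidate falls outside the frame
            total += abs(cur[by + y][bx + x] - ref[ry][rx])
    return total

# large and small diamond search patterns (offsets around the center)
LDP = [(0, 0), (2, 0), (-2, 0), (0, 2), (0, -2),
       (1, 1), (1, -1), (-1, 1), (-1, -1)]
SDP = [(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)]

def diamond_search(cur, ref, bx, by):
    """Greedy diamond search: follow the large pattern until the center is
    best, then refine once with the small pattern."""
    cx = cy = 0
    while True:
        best = min(LDP, key=lambda d: sad(cur, ref, bx, by, cx + d[0], cy + d[1]))
        if best == (0, 0):
            break
        cx, cy = cx + best[0], cy + best[1]
    best = min(SDP, key=lambda d: sad(cur, ref, bx, by, cx + d[0], cy + d[1]))
    return cx + best[0], cy + best[1]

# demo: the current frame is the reference frame shifted by (3, 2)
ref = [[(x - 24) ** 2 + (y - 24) ** 2 for x in range(48)] for y in range(48)]
cur = [[ref[y + 2][x + 3] for x in range(40)] for y in range(40)]
print("estimated motion vector:", diamond_search(cur, ref, 12, 12))
```

A full search would evaluate every candidate in the window; the diamond patterns visit only a handful of positions per step, which is the cost reduction that hardware architectures like the paper's then pipeline and parallelize.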
97.
Data reconciliation consists of modifying noisy or unreliable data to make them consistent with a mathematical model (here, a material flow network). The conventional approach relies on least-squares minimization. Here, we use a fuzzy set-based approach, replacing Gaussian likelihood functions by fuzzy intervals and using a leximin criterion. We show that the fuzzy set setting provides a generalized approach to the choice of estimated values that is more flexible and less dependent on often debatable probabilistic justifications. It potentially encompasses interval-based formulations and the least-squares method through appropriate choices of membership functions and aggregation operations. This paper also makes clear that, under the fuzzy set approach, data reconciliation is viewed as an information fusion problem, as opposed to the statistical tradition, which solves an estimation problem.
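To make the idea concrete, the sketch below reconciles an invented three-flow balance x1 + x2 = x3 whose measurements (10, 5, 14) are inconsistent. Each measurement is modeled as a triangular fuzzy interval, and a brute-force grid search maximizes the minimal membership degree (the maximin core of the leximin criterion). The flows, spreads, and grid are all assumptions for illustration, not data from the paper.

```python
def tri(m, s):
    """Triangular fuzzy interval centered on measurement m, support m +/- s."""
    return lambda v: max(0.0, 1.0 - abs(v - m) / s)

mu1, mu2, mu3 = tri(10.0, 2.0), tri(5.0, 2.0), tri(14.0, 2.0)

best, best_x = -1.0, None
for i in range(81):                       # grid search over candidate flows
    x1 = 8.0 + 0.05 * i
    for j in range(81):
        x2 = 3.0 + 0.05 * j
        x3 = x1 + x2                      # balance x1 + x2 = x3 holds exactly
        score = min(mu1(x1), mu2(x2), mu3(x3))
        if score > best:
            best, best_x = score, (x1, x2, x3)

print("reconciled flows:", best_x, "membership:", round(best, 3))
```

The raw measurements violate the balance (10 + 5 ≠ 14); the maximin solution spreads the correction over the three flows so that the worst membership degree is as high as possible, 1 − 0.35/2 = 0.825 on this grid. A full leximin treatment would additionally rank solutions that tie on the minimum.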
98.
In this paper we show that thiolated self-assembled monolayers (SAMs) can be used to anchor source–drain gold electrodes on the substrate, leading to excellent electrical performance of the organic field-effect transistor (OFET), on a par with that obtained using a standard electrode process. Using an amorphous semiconductor and a gate dielectric functionalized with SAMs bearing different dipole moments, we demonstrate that we can tune the threshold voltage alone, while keeping the other electrical properties (hole carrier mobility, Ion/Ioff ratio, subthreshold swing) nearly unchanged. This differs from previous studies, in which SAM functionalization induced significant changes in all the OFET electrical characteristics. This result opens the door to designing organic circuits using reproducible amorphous-semiconductor-based OFETs in which only the threshold voltage is tuned on demand.
99.
Polynomial ranges are commonly used for numerically solving polynomial systems with interval Newton solvers. Often, ranges are computed using the convex hull property of the tensorial Bernstein basis, which has exponential size in the number n of variables. In this paper, we consider methods to compute tight bounds for polynomials in n variables by solving two linear programming problems over a polytope. We formulate a polytope defined as the convex hull of the coefficients with respect to the tensorial Bernstein basis, and we formulate several polytopes based on the Bernstein polynomials of the domain. These Bernstein polytopes can be defined by a polynomial number of halfspaces. We give the number of vertices, the number of hyperfaces, and the volume of each polytope for n = 1, 2, 3, 4, and we compare the computed range widths for random n-variate polynomials for n ≤ 10. The Bernstein polytope of polynomial size gives only marginally worse range bounds than those obtained with the tensorial Bernstein basis of exponential size.
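The paper's polytopes are multivariate; the sketch below shows only the classical univariate special case underlying them: converting a polynomial to the Bernstein basis on [0, 1] and using the convex hull property of the coefficients as a range bound. The example polynomial is chosen for illustration.

```python
from math import comb

def bernstein_coeffs(a):
    """Power-basis coefficients a[i] of p(x) = sum_i a[i] * x**i on [0, 1]
    -> Bernstein coefficients b[k] = sum_{i<=k} C(k, i)/C(n, i) * a[i]."""
    n = len(a) - 1
    return [sum(comb(k, i) / comb(n, i) * a[i] for i in range(k + 1))
            for k in range(n + 1)]

def range_bounds(a):
    """Convex hull property: the polynomial's range over [0, 1] is
    enclosed by the min and max Bernstein coefficients."""
    b = bernstein_coeffs(a)
    return min(b), max(b)

# p(x) = x**2 - x; its true range on [0, 1] is [-0.25, 0]
lo, hi = range_bounds([0.0, -1.0, 1.0])
print(lo, hi)   # the enclosure is valid but generally not tight
```

Here the bound is [−0.5, 0] against a true range of [−0.25, 0]; subdivision or degree elevation tightens it. In n variables the tensorial version of this basis has exponentially many coefficients, which is what the paper's polynomial-size Bernstein polytopes avoid.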
100.
Multiple-relaxation-time lattice Boltzmann models in three dimensions
This article provides a concise exposition of the multiple-relaxation-time lattice Boltzmann equation, with examples of 15-velocity and 19-velocity models in three dimensions. Simulation of a diagonally lid-driven cavity flow in three dimensions at Re = 500 and 2000 is performed. The results clearly demonstrate the superior numerical stability of the multiple-relaxation-time lattice Boltzmann equation over the popular lattice Bhatnagar-Gross-Krook equation.
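As a sketch of the collision step the article describes, the code below implements a multiple-relaxation-time (MRT) collision in moment space for a 1-D three-velocity (D1Q3) lattice, chosen here for brevity instead of the article's D3Q15/D3Q19 models; the distribution values and relaxation rates are invented. With all rates equal, MRT reduces exactly to the single-relaxation-time BGK collision.

```python
def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

# D1Q3 moment transform for f = (f_-1, f_0, f_+1):
# rows give the density, momentum, and "energy" moments; Minv is M's inverse.
M = [[1, 1, 1],
     [-1, 0, 1],
     [1, 0, 1]]
Minv = [[0.0, -0.5, 0.5],
        [1.0, 0.0, -1.0],
        [0.0, 0.5, 0.5]]

def mrt_collide(f, feq, rates):
    """Relax each moment toward its equilibrium at its own rate."""
    dm = matvec(M, [a - b for a, b in zip(f, feq)])
    relaxed = [s * m for s, m in zip(rates, dm)]
    return [a - b for a, b in zip(f, matvec(Minv, relaxed))]

def bgk_collide(f, feq, omega):
    """Single-relaxation-time (lattice BGK) collision."""
    return [a - omega * (a - b) for a, b in zip(f, feq)]

f = [0.20, 0.55, 0.25]
rho = sum(f)
feq = [rho / 6, 2 * rho / 3, rho / 6]     # standard D1Q3 equilibrium weights

print(mrt_collide(f, feq, [1.0, 1.2, 0.8]))   # independent moment rates
print(mrt_collide(f, feq, [0.9, 0.9, 0.9]))   # equal rates...
print(bgk_collide(f, feq, 0.9))               # ...reproduce BGK exactly
```

The freedom to choose the non-conserved moments' relaxation rates independently is what gives MRT models the extra numerical stability over BGK that the article demonstrates.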