91.
When steady states largely predominate over transitional phases, steady-state simulation seems sufficient to predict the behavior of a complex system. Over the past 20 years, different modeling languages and dedicated tools have been developed to improve steady-state simulation. In this paper, the focus is on steady-state simulation for system control and design. A model combining an emission sub-model with a ship propulsion sub-model was implemented using a constraint programming (CP) approach. This case study serves to assess the efficiency of the approach (i.e., its ability to model and solve the problem) and the complexity of its implementation (i.e., the difficulties encountered during implementation). First, requirements for the steady-state simulation of complex systems are defined. Then, the CP approach is shown, through experiments, to be able to address these requirements. The approach is then compared to one of the main simulation languages, Modelica. Although both approaches (i.e., Modelica and CP) can invert models, the study shows that using Modelica principles for steady-state simulation involves some crippling limitations, such as the inability to handle under- or over-constrained systems or inequalities. The study also shows that the constraint programming approach meets some needs of steady-state simulation not yet covered by current approaches.
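The declarative nature of CP is what makes model inversion straightforward: the same set of equations can be solved for any unknown, and inequalities are first-class constraints. Below is a minimal sketch of the idea using the Z3 solver over the reals; the variable names and coefficients are purely illustrative assumptions, not the paper's actual sub-models.

```python
# Toy steady-state propulsion/emission balance as constraints (Z3).
# All names and coefficients are illustrative, not the paper's model.
from z3 import Real, Solver, sat

power, fuel_rate, nox = Real("power"), Real("fuel_rate"), Real("nox")

s = Solver()
s.add(fuel_rate == 0.2 * power)   # propulsion sub-model (assumed linear)
s.add(nox == 0.05 * fuel_rate)    # emission sub-model (assumed linear)
s.add(power >= 0, nox <= 1.5)     # operating constraints, incl. an inequality

# Forward use: fix the input, solve for the outputs.
s.push()
s.add(power == 100)
if s.check() == sat:
    print("forward:", s.model())
s.pop()

# Reverse use: fix a target output, let the solver recover the input.
s.add(nox == 0.8)
if s.check() == sat:
    print("reverse:", s.model())
```

The same constraint set serves both directions; an over-constrained variant would simply come back unsatisfiable rather than failing silently.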
92.
Objective: Time series often appear in medical databases, but few machine learning methods exist that process this kind of data properly. Most modeling techniques have been designed with a static data model in mind and are not suited to the dynamic nature of time series. Recurrent neural networks (RNNs) are often used to process time series, but only a few training algorithms exist for RNNs, and they are complex and often yield poor results. Researchers therefore often turn to traditional machine learning approaches, such as support vector machines (SVMs), which can easily be set up and trained, combining them with feature extraction (FE) and feature selection (FS) to process the high-dimensional temporal data. Recently, a new approach, called echo state networks (ESNs), has been developed to simplify the training of RNNs. It allows the dynamics of a system to be modeled from time series data in a straightforward way. The objective of this study is to explore the advantages of using an ESN instead of traditional classifiers combined with FE and FS for classification problems in the intensive care unit (ICU) when the input data consist of time series. While ESNs have mostly been used to predict the future course of a time series, we use the ESN model for classification instead. Although time series often appear in medical data, few medical applications of ESNs have been studied yet.
Methods and material: An ESN is used to predict the need for dialysis between the fifth and tenth day after admission to the ICU. The input time series consist of diuresis and creatinine values measured during the first 3 days after admission. Data from 830 patients were used for the study, of whom 82 needed dialysis between the fifth and tenth day after admission. The ESN is compared to two traditional classifiers, a sophisticated and a simple one: support vector machines and the naive Bayes (NB) classifier. Prior to the use of the SVM and NB classifiers, FE and FS are required to reduce the number of input features and thus alleviate the curse of dimensionality. Extensive feature extraction was applied to capture both the overall properties of the time series and the correlation between the different measurements in the time series. The feature selection method is a greedy hybrid filter-wrapper method using an NB classifier, which in each iteration selects the feature that best improves prediction and shows little multicollinearity with the already selected set. Least-squares regression with noise was used to train the linear readout function of the ESN, to mitigate sensitivity to noise and overfitting. Fisher labeling was used to deal with the unbalanced data set. Parameter sweeps were performed to determine the optimal parameter values for the different classifiers. The area under the curve (AUC) and maximum balanced accuracy are used as performance measures; the required execution time was also measured.
Results: The classification performance of the ESN differs significantly, at the 5% level, from that of the SVM and the NB classifier combined with FE and FS. NB+FE+FS, with an average AUC of 0.874, has the best classification performance, followed by the ESN with an average AUC of 0.849; SVM+FE+FS performs worst, with an average AUC of 0.838. The computation time needed to pre-process the data and to train and test the classifier is significantly lower for the ESN than for the SVM and NB.
Conclusion: The ESN has added value in predicting the need for dialysis from time series data. It requires significantly less processing time, needs no domain knowledge, is easy to implement, and can be configured using rules of thumb.
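The appeal of the ESN is that only the linear readout is trained; the recurrent reservoir stays fixed. A minimal sketch follows, with assumed hyperparameters (reservoir size, spectral radius, and ridge penalty are illustrative, not the study's tuned values):

```python
# Minimal echo state network for time-series classification (sketch only;
# hyperparameters are assumed, not those tuned in the study).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, ridge = 2, 100, 1e-2          # e.g. diuresis + creatinine inputs

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max()   # spectral radius below 1

def reservoir_state(series):
    """Run one (T, n_in) time series through the reservoir; return final state."""
    x = np.zeros(n_res)
    for u in series:
        x = np.tanh(W_in @ u + W @ x)
    return x

def train_readout(series_list, labels):
    """Ridge-regression linear readout, the ESN's only trained part."""
    X = np.stack([reservoir_state(s) for s in series_list])
    y = np.asarray(labels, dtype=float)
    return np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)

# Toy usage: random series standing in for the first-3-day measurements.
train = [rng.normal(size=(72, n_in)) for _ in range(20)]
w_out = train_readout(train, rng.integers(0, 2, 20))
print(reservoir_state(train[0]) @ w_out)   # decision score for one patient
```

Because training reduces to one linear solve, the low pre-processing and training time reported above is what one would expect from this construction.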
93.
Owing to the dynamic nature of collaborative environments, software intended to support collaborative work should adapt itself to the different situations that may occur. This requirement is related to the concept of “context of use”, which has been considered an important aspect of the design of interactive systems. Nevertheless, current research in context-aware computing identifies two main problems with this concept: (1) most studies have focused on the context of a single user, so the context of multiple users involved in a common endeavor remains little explored, and (2) adaptability in context-aware systems generally takes into account only a reduced number of contextual variables (mainly the user’s location and platform). In this paper, we first re-conceptualize the notion of “context of use” to reflect the main characteristics of collaborative environments. Based on this new notion, we then design and implement a framework that allows application developers to specify the adaptability of groupware systems in terms of the state of activities, roles, collaborators’ locations, available resources, and other variables typical of working groups. The framework has been generalized from scenarios that highlight dynamic situations found in real collaborative settings. Finally, we validate our proposal with a set of applications that adapt their user interface and functionality when significant changes occur in the environment, the working group, and/or the devices used.
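To make the group-level notion concrete, here is a small sketch of what a multi-user "context of use" record and an adaptation rule over it might look like; all field names and the rule are illustrative assumptions, not the paper's API.

```python
# Illustrative multi-user "context of use" record and a toy adaptation
# rule over it; every field name here is assumed, not the paper's schema.
from dataclasses import dataclass, field

@dataclass
class Collaborator:
    name: str
    role: str            # e.g. "coordinator", "reviewer"
    location: str        # e.g. "office", "field site"
    device: str          # e.g. "desktop", "tablet"

@dataclass
class GroupContext:
    activity_state: str                                    # e.g. "reviewing"
    collaborators: list[Collaborator] = field(default_factory=list)
    available_resources: list[str] = field(default_factory=list)

def adapt_ui(ctx: GroupContext) -> str:
    """Toy rule: choose a UI layout from the group's devices."""
    if any(c.device == "tablet" for c in ctx.collaborators):
        return "compact-layout"
    return "full-layout"

ctx = GroupContext("reviewing",
                   [Collaborator("Ana", "reviewer", "field site", "tablet")],
                   ["shared-whiteboard"])
print(adapt_ui(ctx))   # -> compact-layout
```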
94.
Microsystem Technologies - This study presents the results on the feasibility of a resonant planar chemical capacitive sensor in the microwave frequency range suitable for gas detection and...
95.
The transmission line matrix method permits modification of the excitation of modes in microlines. This property gives access to phase velocities through synchronization of the excitation with the propagating modes; the same synchronization makes mode selection possible. Group velocities are deduced from the frequency variation caused by changes in the geometrical dimensions. In a second part, two methods for simulating infinite space are proposed, with applications to the radiation patterns of the dipole antenna and the half-dipole. The input impedance and the resonance frequency are calculated for a printed strip dipole.
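For reference, the two velocities the method extracts are related to the dispersion relation by the standard textbook definitions (not a result of the paper):

```latex
% Phase and group velocity of a propagating mode with dispersion \omega(k):
v_p = \frac{\omega}{k}, \qquad
v_g = \frac{\mathrm{d}\omega}{\mathrm{d}k}
```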
96.
97.
Motion estimation is a highly computationally demanding operation in the video compression process and significantly affects the output quality of an encoded sequence. Special hardware architectures are required to achieve real-time compression performance. Many fast-search block matching motion estimation (BMME) algorithms have been developed to reduce the number of search positions and speed up computation, but they do not take into account how they can be implemented effectively in hardware. In this paper, we propose three new hardware architectures for fast-search block matching motion estimation using Line Diamond Parallel Search (LDPS) for the H.264/AVC video coding system. These architectures use pipelining and parallel processing and offer minimal latency, maximal throughput, and full utilization of hardware resources. The VHDL code for the three proposed architectures has been tested and operates at high frequency on a Xilinx Virtex-5 FPGA.
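The core operation every BMME architecture parallelizes is the block cost computation, typically the sum of absolute differences (SAD). Here is a software sketch of that baseline; an exhaustive search stands in for LDPS, whose scan order and hardware mapping are the paper's actual subject.

```python
# Block-matching motion estimation with a SAD cost (exhaustive search
# sketch; LDPS would visit a sparser line-diamond pattern of positions).
import numpy as np

def sad(block, candidate):
    return np.abs(block.astype(int) - candidate.astype(int)).sum()

def best_match(ref, cur, bx, by, bs=16, radius=7):
    """Find the motion vector for the bs x bs block at (bx, by) in `cur`."""
    block = cur[by:by+bs, bx:bx+bs]
    best, best_mv = float("inf"), (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = by + dy, bx + dx
            if 0 <= y and 0 <= x and y+bs <= ref.shape[0] and x+bs <= ref.shape[1]:
                cost = sad(block, ref[y:y+bs, x:x+bs])
                if cost < best:
                    best, best_mv = cost, (dx, dy)
    return best_mv, best

rng = np.random.default_rng(1)
ref = rng.integers(0, 256, (64, 64), dtype=np.uint8)
cur = np.roll(ref, (2, 3), axis=(0, 1))   # shift frame by a known vector
print(best_match(ref, cur, 16, 16))       # recovers (-3, -2), undoing the shift
```

The independence of the candidate costs is what the proposed pipeline and parallel processing units exploit.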
98.
Data reconciliation consists in modifying noisy or unreliable data in order to make them consistent with a mathematical model (here, a material flow network). The conventional approach relies on least-squares minimization. Here, we use a fuzzy set-based approach, replacing Gaussian likelihood functions by fuzzy intervals and using a leximin criterion. We show that the fuzzy-set setting provides a generalized approach to the choice of estimated values, one that is more flexible and less dependent on often debatable probabilistic justifications. It potentially encompasses interval-based formulations and the least-squares method, through appropriate choices of membership functions and aggregation operations. This paper also lays bare the fact that, under the fuzzy set approach, data reconciliation is viewed as an information fusion problem, as opposed to the statistical tradition, which solves an estimation problem.
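For contrast with the fuzzy formulation, the conventional least-squares baseline that the paper generalizes has a closed form. A toy sketch on a single flow-balance node follows; the weights and measured values are illustrative, not taken from the paper.

```python
# Conventional least-squares reconciliation: adjust measurements y
# minimally subject to linear balance constraints A x = 0.
# Toy 3-stream node, not the paper's material flow network.
import numpy as np

def reconcile(y, A, W):
    """argmin (x-y)' W (x-y)  s.t.  A x = 0, via Lagrange multipliers."""
    Winv = np.linalg.inv(W)
    # Standard closed form: x = y - W^-1 A' (A W^-1 A')^-1 A y
    K = Winv @ A.T @ np.linalg.inv(A @ Winv @ A.T)
    return y - K @ (A @ y)

# One node: inflow1 + inflow2 - outflow = 0; the measurements violate it.
A = np.array([[1.0, 1.0, -1.0]])
y = np.array([10.2, 5.1, 14.7])          # noisy measured flows
W = np.eye(3)                            # unit weights (assumed)
x = reconcile(y, A, W)
print(x, A @ x)                          # balanced flows, residual ~ 0
```

The fuzzy approach replaces the quadratic penalty above with fuzzy intervals around each measurement and ranks solutions by leximin rather than by a summed cost.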
99.
100.
Persistent organic pollutants (POPs) affect human and animal health and the wider environment. It is important to determine where POPs are found and the spatial pattern of their variation. The concentrations of 90 molecules belonging to four families of POPs and two families of herbicides were measured within a region of Northern France as part of the French National Soil Monitoring Network (RMQS: Réseau de Mesures de la Qualité des Sols). We also gathered information on five covariates (elevation, soil organic carbon content, road density, land cover, and population density) that might influence POP concentrations. The study region contains 105 RMQS observation sites arranged on a regular square grid with a spacing of 16 km. The observations include hot-spots at sites of POP application, smaller concentrations where POPs have been dispersed, and observations below the limit of quantification (LOQ) where the soil has not been impacted by POPs. Fifty-nine of the molecules were detected at fewer than 50 sites, so those data were unsuitable for spatial analysis. We represent the variation of the remaining 31 molecules by various linear mixed models, which can include fixed effects (i.e., linear relationships between molecule concentrations and covariates) and spatially correlated random effects. The best model for each molecule is selected by the Akaike Information Criterion. For nine of the molecules spatial correlation is evident, so they can potentially be mapped; for four of these, the spatial correlation cannot be wholly explained by fixed effects. It appears that these molecules have been transported away from their application sites and are now dispersed across the study region, with the largest concentrations found in a heavily populated depression. More complicated statistical models and sampling designs are required to explain the distribution of the less dispersed molecules.
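As a schematic of the selection step, AIC comparison of fixed-effects structures can be sketched with ordinary least squares on synthetic data; the paper's actual models additionally include spatially correlated random effects, which this toy omits.

```python
# Toy AIC comparison of nested Gaussian linear models on synthetic data
# (the study's models add spatially correlated random effects).
import numpy as np

def aic_ols(X, y):
    """AIC of a Gaussian linear model fitted by OLS."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / n                 # ML variance estimate
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return 2 * (k + 1) - 2 * loglik            # +1 for the variance parameter

rng = np.random.default_rng(2)
n = 105                                        # one row per RMQS site
elev, soc = rng.normal(size=n), rng.normal(size=n)
y = 0.8 * soc + rng.normal(scale=0.5, size=n)  # synthetic log-concentration

X0 = np.column_stack([np.ones(n)])             # intercept only
X1 = np.column_stack([np.ones(n), elev, soc])  # intercept + covariates
print(aic_ols(X0, y), aic_ols(X1, y))          # the covariate model scores lower
```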