81.

In the fields of pattern recognition and machine learning, data preprocessing algorithms have been used increasingly in recent years to achieve high classification performance. In particular, data preprocessing prior to classification has become essential when classifying medical datasets with nonlinear and imbalanced data distributions. In this study, a new data preprocessing method is proposed for the classification of the Parkinson, hepatitis, Pima Indians, single proton emission computed tomography (SPECT) heart, and thoracic surgery medical datasets, all of which exhibit nonlinear and imbalanced data distributions. These datasets were taken from the UCI machine learning repository. The proposed method consists of three steps. In the first step, the cluster centers of each attribute are calculated using the k-means, fuzzy c-means, and mean shift clustering algorithms. In the second step, the absolute differences between the data in each attribute and the cluster centers are calculated, and the average of these differences is computed for each attribute. In the final step, the weighting coefficients are calculated by dividing the mean of these differences by the cluster centers, and weighting is performed by multiplying the obtained coefficients by the attribute values in the dataset. Three attribute weighting methods are thus proposed: (1) similarity-based attribute weighting in k-means clustering, (2) similarity-based attribute weighting in fuzzy c-means clustering, and (3) similarity-based attribute weighting in mean shift clustering. The aim of the proposed attribute weighting methods is to gather the data in each class together and to reduce the within-class variance.
Thus, by reducing the variance within each class, the data in each class are drawn together while the discrimination between classes is further increased. For comparison with other methods in the literature, random subsampling is used to handle imbalanced dataset classification. After the attribute weighting process, four classification algorithms, namely linear discriminant analysis, the k-nearest neighbor classifier, the support vector machine, and the random forest classifier, are used to classify the imbalanced medical datasets. To evaluate the performance of the proposed models, classification accuracy, precision, recall, area under the ROC curve, the κ value, and the F-measure are used. For training and testing the classifier models, three schemes are used: a 50-50% train-test holdout, a 60-40% train-test holdout, and tenfold cross-validation. The experimental results show that the proposed attribute weighting methods achieve higher classification performance than the random subsampling method in classifying the imbalanced medical datasets.
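As a concrete illustration of the three steps, here is a minimal sketch, not the authors' implementation: it uses a single cluster centre per attribute (k-means with k = 1 reduces to the column mean) in place of the paper's k-means, fuzzy c-means and mean shift centres, takes one literal reading of the weighting rule, and the function names are hypothetical.

```python
import numpy as np

def similarity_based_weights(X):
    """One weighting coefficient per attribute (column) of the data matrix X."""
    # Cluster centre per attribute; with k = 1 this is just the column mean.
    centers = X.mean(axis=0)
    # Mean absolute difference between each attribute's values and its centre.
    mean_abs_diff = np.abs(X - centers).mean(axis=0)
    # Weighting coefficient: mean difference divided by the cluster centre
    # (one reading of the abstract's rule; assumes non-zero centres).
    return mean_abs_diff / centers

def weight_attributes(X):
    # Multiply each attribute by its weighting coefficient.
    return X * similarity_based_weights(X)
```

Substituting the centres from any of the three clustering algorithms (and averaging the differences to each centre) yields the corresponding weighting variant.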

82.
In this work, a novel approach utilizing feature covariance matrices is proposed for time series classification. To adapt feature covariance matrices to the time series classification problem, a feature vector is defined for each point in a time series. The feature vector comprises local and global information such as the value, derivative, rank, deviation from the mean, time index of the point, and cumulative sum up to the point. The feature vectors extracted for the time instances are concatenated to construct feature matrices for overlapping subsequences, and the covariances of these feature matrices are used to describe the subsequences. Our main purpose in this work is to introduce and evaluate the feature covariance representation for time series classification. Therefore, in the classification stage, a 1-NN classifier is used first; after showing the effectiveness of the representation with the 1-NN classifier, the experiments are repeated with an SVM classifier. A further novelty of this work is a new distance measure for time series based on the feature covariance matrix representation. Experiments conducted on the UCR time series datasets show that the proposed method mostly outperforms well-known methods such as DTW, the shapelet transform, and other state-of-the-art techniques.
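The per-point feature vector and the covariance descriptor can be sketched as follows (a minimal illustration with hypothetical function names; the subsequence windowing and the paper's novel distance measure are omitted):

```python
import numpy as np

def point_features(x):
    """Per-point feature vectors for a 1-D (sub)sequence x."""
    x = np.asarray(x, dtype=float)
    t = np.arange(len(x), dtype=float)               # time index of the point
    deriv = np.gradient(x)                           # derivative
    rank = np.argsort(np.argsort(x)).astype(float)   # rank of each value
    dev = x - x.mean()                               # deviation from the mean
    csum = np.cumsum(x)                              # cumulative sum up to the point
    return np.column_stack([x, deriv, rank, dev, t, csum])

def covariance_descriptor(x):
    """6 x 6 covariance matrix describing the subsequence x."""
    return np.cov(point_features(x), rowvar=False)
```

With such descriptors, a 1-NN classifier only needs a distance between covariance matrices; the Frobenius norm of the difference is the simplest stand-in for the paper's measure.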
83.
The purpose of this study is to control the position of an underactuated underwater vehicle manipulator system (U-UVMS). It is possible to control the end-effector using a regular 6-DOF manipulator despite the undesired displacements of the underactuated vehicle, within a certain range. In this study, however, an 8-DOF redundant manipulator is used in order to increase the positioning accuracy of the end-effector. The redundancy is resolved according to the criterion of minimal vehicle and joint motions. The underactuated underwater vehicle redundant manipulator system is modeled including the hydrodynamic forces acting on the manipulator in addition to those acting on the autonomous underwater vehicle (AUV). The shadowing effects of the bodies on each other are also taken into account when computing the hydrodynamic forces. The Newton-Euler formulation is used to derive the system equations of motion, including the thruster dynamics. To establish end-effector trajectory tracking control of the system, an inverse dynamics control law is formulated. The effectiveness of the control law, even in the presence of parameter uncertainties and disturbing ocean currents, is illustrated by simulations.
84.
We study the relation between synchronizing sequences and preset distinguishing sequences, two special kinds of sequences used in finite state machine based testing. We show that problems related to preset distinguishing sequences can be converted into related problems for synchronizing sequences. Using existing results from the literature on synchronizing sequences, we derive several corresponding results for preset distinguishing sequences. Although computing a preset distinguishing sequence is PSPACE-hard, we identify a class of machines for which it can be done in polynomial time and argue that this class is practically relevant. We also present an experimental study comparing the performance of exponential brute-force and polynomial heuristic algorithms for computing a preset distinguishing sequence.
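For intuition, a synchronizing sequence for a small machine can be found by breadth-first search over subsets of states, as in the hypothetical helper below; this brute-force search is exponential in the worst case, unlike the polynomial heuristics evaluated in the paper:

```python
from collections import deque

def synchronizing_sequence(states, inputs, delta):
    """Shortest input word driving every state to a single state, or None.

    delta maps (state, input) -> state. The BFS explores subsets of states,
    so it is exponential in |states| and only suitable for small machines.
    """
    start = frozenset(states)
    words = {start: []}
    queue = deque([start])
    while queue:
        current = queue.popleft()
        if len(current) == 1:
            return words[current]          # all states merged: synchronized
        for a in inputs:
            nxt = frozenset(delta[(s, a)] for s in current)
            if nxt not in words:
                words[nxt] = words[current] + [a]
                queue.append(nxt)
    return None                            # machine is not synchronizing
```

Applying the returned word to the full state set provably collapses it to one state, which is easy to check by replaying the word.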
85.
Poly(methyl methacrylate)-poly(ε-caprolactone) (PMMA/PCL) microparticles were synthesized by suspension polymerization of methyl methacrylate in the presence of PCL. The incorporation of a small amount of a macromonomer, methacryloyl-terminated PCL (M-PCL), into the reaction mixture led to the formation of grafted systems, namely PMMA-g-PCL/PCL. The synthesis of the macromonomer and its characterization by proton nuclear magnetic resonance spectroscopy (1H NMR) are described. The role of M-PCL as an effective compatibilizing agent in the composite was investigated. PMMA/PCL and PMMA-g-PCL/PCL composites were fully characterized by 1H NMR, gel permeation chromatography (GPC) and thermal analysis, including thermogravimetric analysis (TGA), conventional differential scanning calorimetry (DSC), modulated DSC (MDSC) and dynamic mechanical thermal analysis (DMTA). Finally, the morphology of the prepared systems was investigated by scanning electron microscopy (SEM). The addition of the compatibilizing agent led to the formation of a more homogeneous microcomposite with improved mechanical properties.

SEM picture of the PMMA-g-PCL/PCL composite surface.

86.
In Wireless Sensor Networks (WSNs), maintaining connectivity with the sink node is crucial for collecting data from sensors without interruption. While sensors are typically deployed in abundance to tolerate individual node failures, a large number of simultaneous node failures within the same region may partition the network and disrupt its operation significantly. Given that WSNs are deployed in inhospitable environments, such failures are very likely, due to storms, fires, floods, etc. Self-recovery of the network from such large-scale node failures is challenging since the nodes have no information about the location and span of the damage. In this paper, we first present a distributed partition detection algorithm which quickly makes the sensors aware of the partitioning of the network. This process is led by the sensors whose upstream nodes fail due to the damage. Upon partition detection, sensors federate the partitions and restore data communication by utilizing the former routing information to the sink node stored at each sensor and by exploiting sensor mobility. Specifically, the locations of failed sensors on former routes are used to assess the span of the damage, and some of the sensors are relocated to those locations to re-establish the routes to the sink node. Relocation along such former routes is performed so that the movement overhead on the sensors is minimized. Our approach depends solely on local information to ensure autonomy, timeliness and scalability. The effectiveness of the proposed federation approach is validated through realistic simulation experiments and shown to provide these features.
87.
This study investigates the key elements an online service or product provider needs to consider when adopting a new single-factor or two-factor authentication system, and uncovers the conditions under which the new system is preferable. Using the probability of system failure, the study generalizes all possible combinations of authentication systems into four cases. This generalization allows us to compare different systems and to determine the key factors managers need to consider when adopting a new authentication system: (1) additional implementation costs, (2) customer switching, which is determined by market share and customers' preferences, and (3) expected losses when the new system fails. The study also suggests that if the provider chooses an expensive new system, it needs a larger market share to justify the spending. In addition, regulators can encourage the adoption of a more secure authentication system by changing the penalty a firm faces when its system fails. Finally, offering both one-factor and two-factor authentication can be preferable, depending on the customers' characteristics.
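The failure-probability backbone of such a comparison can be sketched as follows; this is a hypothetical simplification that assumes the factors of an authentication system fail independently, and the parameter names are illustrative rather than the paper's:

```python
def failure_prob(factor_probs):
    """Probability that the whole system is defeated, assuming an attacker
    must defeat every factor and the factors fail independently."""
    p = 1.0
    for pf in factor_probs:
        p *= pf
    return p

def expected_adoption_cost(impl_cost, factor_probs, loss_on_failure,
                           switching_cost):
    """Combine the three key factors from the study: implementation cost,
    customer switching, and expected loss when the new system fails."""
    return impl_cost + switching_cost + failure_prob(factor_probs) * loss_on_failure
```

Under these assumptions a two-factor system always has a lower failure probability than either of its factors alone, so the trade-off reduces to whether that reduction in expected loss outweighs the extra implementation and switching costs.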
88.
Variance reduction is of great importance in financial simulation. In this study, we present a new and simple variance reduction technique for pricing discretely monitored lookback and barrier options. It is based on using the corresponding continuously monitored option as an external control variate. To obtain the value of the continuously monitored price, both conditional simulation and conditional expectation can be utilized. Numerical experiments show that the efficiency gains obtained by the new method are significant.
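The control variate idea behind the method can be illustrated on a toy integrand with a known-mean control (not the option-pricing setting of the paper; names and numbers are illustrative):

```python
import numpy as np

def control_variate_estimate(n=20000, seed=0):
    """Estimate E[exp(U)], U ~ Uniform(0, 1); the true value is e - 1."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    y = np.exp(u)          # target quantity
    c = u                  # control variate with known mean E[U] = 0.5
    # Optimal coefficient b* = Cov(Y, C) / Var(C), estimated from the sample.
    b = np.cov(y, c)[0, 1] / np.var(c, ddof=1)
    # Adjust the plain sample mean by the control's deviation from its mean.
    return y.mean() - b * (c.mean() - 0.5)
```

In the paper's setting, the discretely monitored payoff plays the role of exp(U), and the continuously monitored option, whose price is available via conditional simulation or conditional expectation, plays the role of the known-mean control.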
89.
In wireless sensor networks (WSNs), contextual information, such as whether, when, and where data is collected, cannot be protected using only traditional measures (e.g., encryption). Contextual information can, however, be protected against global eavesdroppers by periodic packet transmission combined with dummy traffic filtering at proxy nodes. In this paper, through a Linear Programming (LP) framework, we analyze the lifetime limits of WSNs that preserve event unobservability under different proxy assignment methodologies. We show that, to maximize the network lifetime, data flow should pass through multiple proxies organized as a general directed graph rather than as a tree.
90.
This study was performed to investigate certain major and toxic metal concentrations in different tissues of three demersal fish species (Trigla lucerna, Lophius budegassa, Solea lascaris). In general, skin and liver exhibited higher metal concentrations than muscle. Sodium and arsenic concentrations were higher in muscle tissue than in liver and skin, while zinc and nickel concentrations in the different organs ranked as follows: skin > liver > muscle. The trace metal concentrations in the fish samples indicated that S. lascaris was the most contaminated species, followed by L. budegassa and T. lucerna. It may be concluded that consumption of these species from this region is not likely to pose a threat to human health. However, although the concentrations are below the limit values for fish muscle, a potential danger may emerge in the future from domestic wastewater and industrial activities in the region. Therefore, further monitoring programmes should be conducted.