Results by access type (number of articles):
  Paid full text: 14,926
  Free: 1,287
  Free (domestic): 14
Results by subject:
  Electrical engineering: 141
  Comprehensive: 12
  Chemical industry: 3,774
  Metalworking: 183
  Machinery and instruments: 402
  Building science: 657
  Mining engineering: 39
  Energy and power: 403
  Light industry: 2,962
  Hydraulic engineering: 126
  Petroleum and natural gas: 35
  Radio and electronics: 1,015
  General industrial technology: 2,730
  Metallurgy: 776
  Nuclear technology: 46
  Automation technology: 2,926
Results by publication year:
  2024: 41
  2023: 195
  2022: 295
  2021: 556
  2020: 416
  2019: 438
  2018: 639
  2017: 641
  2016: 709
  2015: 582
  2014: 741
  2013: 1,361
  2012: 1,299
  2011: 1,226
  2010: 812
  2009: 773
  2008: 837
  2007: 739
  2006: 591
  2005: 446
  2004: 399
  2003: 330
  2002: 326
  2001: 180
  2000: 181
  1999: 127
  1998: 163
  1997: 137
  1996: 112
  1995: 116
  1994: 97
  1993: 77
  1992: 54
  1991: 48
  1990: 33
  1989: 38
  1988: 34
  1987: 19
  1986: 25
  1985: 35
  1984: 39
  1983: 27
  1982: 25
  1981: 29
  1980: 19
  1979: 33
  1978: 17
  1977: 16
  1976: 14
  1975: 17
101.
In this paper, a new approach to off-line signature verification is proposed, based on two-class classifiers combined into an ensemble of expert decisions. Different methods to extract sets of local and global features from the target sample are detailed. A normalization-by-confidence voting method is also used in order to decrease the final equal error rate (EER). In one approach, each set of features is processed by a single expert; in the other, the decisions of the individual classifiers are combined using weighted votes. Experimental results are given using a subcorpus of the large MCYT signature database for random and skilled forgeries. The results show that the weighted combination significantly outperforms the individual classifiers. The best EERs obtained were 6.3% for skilled forgeries and 2.31% for random forgeries.
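As a rough illustration of the two ingredients above, the following sketch fuses per-expert scores with normalized weighted votes and scans thresholds for the EER. The score layout, weights, and function names are hypothetical placeholders, not the paper's implementation.

```python
# Minimal sketch: weighted-vote fusion of expert scores and EER computation.
# All inputs are illustrative; the paper's experts and weighting differ.
import numpy as np

def weighted_vote(scores, weights):
    """Fuse per-expert scores (scores[i, j] = expert j's score for
    sample i) into one score per sample via normalized weighted voting."""
    w = np.asarray(weights, dtype=float)
    return np.asarray(scores, dtype=float) @ (w / w.sum())

def equal_error_rate(genuine, forgery):
    """Scan candidate thresholds for the point where the false-rejection
    rate (FRR) and false-acceptance rate (FAR) are (nearly) equal."""
    genuine = np.asarray(genuine, dtype=float)
    forgery = np.asarray(forgery, dtype=float)
    best_gap, eer = 2.0, 1.0
    for t in np.sort(np.concatenate([genuine, forgery])):
        frr = np.mean(genuine < t)    # genuine signatures rejected
        far = np.mean(forgery >= t)   # forgeries accepted
        if abs(frr - far) < best_gap:
            best_gap, eer = abs(frr - far), (frr + far) / 2
    return eer
```

With fused scores for genuine samples and forgeries, `equal_error_rate(weighted_vote(g, w), weighted_vote(f, w))` gives the operating point reported as EER.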
102.
Fuzzy rule-based classification systems (FRBCSs) are known for their ability to handle low-quality data and to obtain good results in such scenarios. However, their application to problems with missing data is uncommon, even though in real-life data mining information is frequently incomplete due to missing attribute values. Several schemes have been studied to overcome the drawbacks produced by missing values in data mining tasks; one of the best known is based on preprocessing, commonly referred to as imputation. In this work, we focus on FRBCSs and present and analyze 14 different approaches to the treatment of missing attribute values. The analysis involves three different methods, distinguishing between Mamdani and TSK models. The results obtained confirm the convenience of using imputation methods for FRBCSs with missing values. The analysis also suggests that each type of model behaves differently, and that choosing a suitable imputation method can improve the accuracy obtained; thus, the imputation method should be selected conditioned on the type of FRBCS.
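For concreteness, here is a minimal sketch of two common imputation schemes (column-mean and k-nearest-neighbour) of the kind such a preprocessing stage might apply before any classifier. It is illustrative only and is not one of the paper's 14 analyzed approaches.

```python
# Illustrative imputation sketch; NaN marks a missing attribute value.
import numpy as np

def mean_impute(X):
    """Replace NaNs in each column with that column's observed mean."""
    X = np.asarray(X, dtype=float).copy()
    col_means = np.nanmean(X, axis=0)
    nan_r, nan_c = np.where(np.isnan(X))
    X[nan_r, nan_c] = col_means[nan_c]
    return X

def knn_impute(X, k=3):
    """Replace each NaN with the mean of that feature over the k complete
    rows nearest in the mutually observed features. Assumes at least k
    rows have no missing values."""
    X = np.asarray(X, dtype=float).copy()
    complete = X[~np.isnan(X).any(axis=1)]
    for i in np.where(np.isnan(X).any(axis=1))[0]:
        row = X[i]                           # view: writes update X
        obs = ~np.isnan(row)
        d = np.linalg.norm(complete[:, obs] - row[obs], axis=1)
        neighbours = complete[np.argsort(d)[:k]]
        row[~obs] = neighbours[:, ~obs].mean(axis=0)
    return X
```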
103.
Adaptive anisotropic refinement of finite element meshes allows one to reduce the computational effort required to achieve a specified accuracy in the solution of a PDE problem. We present a new approach to adaptive refinement and demonstrate that it allows one to construct algorithms that generate very flexible and efficient anisotropically refined meshes, even improving the convergence order compared with adaptive isotropic refinement when the problem permits.
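The core decision in any anisotropic scheme is choosing a split direction per cell rather than refining uniformly. The toy sketch below makes that decision on a rectangular cell from directional second differences of a sampled solution; the error indicators and thresholds are invented for illustration and this is not the paper's algorithm.

```python
# Toy anisotropic refinement decision on axis-aligned rectangular cells.
# u is a callable u(x, y); cell = (x0, x1, y0, y1).
def refine_cell(u, cell, tol):
    """Return child cells, split only along the dominant-error axis,
    or [cell] unchanged if the cell already meets the tolerance."""
    x0, x1, y0, y1 = cell
    xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
    # Directional curvature estimates via second differences.
    err_x = abs(u(x0, ym) - 2 * u(xm, ym) + u(x1, ym))
    err_y = abs(u(xm, y0) - 2 * u(xm, ym) + u(xm, y1))
    if max(err_x, err_y) < tol:
        return [cell]
    if err_x >= 2 * err_y:    # strongly anisotropic in x: split x only
        return [(x0, xm, y0, y1), (xm, x1, y0, y1)]
    if err_y >= 2 * err_x:    # strongly anisotropic in y: split y only
        return [(x0, x1, y0, ym), (x0, x1, ym, y1)]
    # Comparable directional errors: fall back to isotropic refinement.
    return [(x0, xm, y0, ym), (xm, x1, y0, ym),
            (x0, xm, ym, y1), (xm, x1, ym, y1)]
```

Splitting only where the solution varies fastest is what lets anisotropic meshes resolve boundary layers with far fewer cells than isotropic refinement.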
104.
We focus on two aspects of face recognition: feature extraction and classification. We propose a two-component system, introducing Lattice Independent Component Analysis (LICA) for feature extraction and Extreme Learning Machines (ELM) for classification. In previous works we have proposed LICA for a variety of image processing tasks. The first step of LICA is to identify strong lattice independent components from the data. In the second step, the set of strong lattice independent vectors is used for linear unmixing of the data, obtaining a vector of abundance coefficients. The resulting abundance values are used as features for classification, specifically for face recognition. Extreme Learning Machines are accurate and fast-learning classification methods based on the random generation of the input-to-hidden-unit weights, followed by the resolution of a linear system to obtain the hidden-to-output weights. The LICA-ELM system has been tested against state-of-the-art feature extraction methods and classifiers, outperforming them in cross-validation on four large unbalanced face databases.
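The ELM training recipe described above (random, untrained input weights plus one least-squares solve) fits in a few lines. The sketch below assumes a generic feature matrix X and one-hot labels Y, standing in for the LICA abundance coefficients; hyperparameters are illustrative.

```python
# Minimal Extreme Learning Machine: random input weights, linear solve
# (pseudo-inverse) for the output weights. Not the authors' exact code.
import numpy as np

class ELM:
    def __init__(self, n_hidden=200, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, Y):
        n_features = X.shape[1]
        # Input-to-hidden weights and biases are random and never trained.
        self.W = self.rng.standard_normal((n_features, self.n_hidden))
        self.b = self.rng.standard_normal(self.n_hidden)
        H = np.tanh(X @ self.W + self.b)      # hidden-layer activations
        self.beta = np.linalg.pinv(H) @ Y     # least-squares output weights
        return self

    def predict(self, X):
        return np.tanh(X @ self.W + self.b) @ self.beta
```

For face recognition, `predict(X).argmax(axis=1)` recovers class labels from the one-hot targets; the only training cost is the single pseudo-inverse, which is why ELMs are fast to learn.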
105.
Model-based testing focuses on testing techniques that rely on the use of models. The diversity of systems and software to be tested implies the need for research on a variety of models and methods for test automation. We briefly review this research area and introduce several papers selected from the 22nd International Conference on Testing Software and Systems (ICTSS).
106.
Membrane Computing is a discipline aiming to abstract formal computing models, called membrane systems or P systems, from the structure and functioning of living cells, as well as from the cooperation of cells in tissues, organs, and other higher-order structures. This framework provides polynomial-time solutions to NP-complete problems by trading space for time, and its efficient simulation poses challenges in three different aspects: the intrinsic massive parallelism of P systems, the exponential computational workspace, and a non-intensive floating-point nature. In this paper, we analyze the simulation of a family of recognizer P systems with active membranes that solves the Satisfiability problem in linear time, on different instances of Graphics Processing Units (GPUs). For efficient handling of the exponential workspace created by the P system computation, we enable different data policies to increase memory bandwidth and exploit data locality through tiling and dynamic queues. The parallelism inherent to the target P system is also managed to demonstrate that GPUs offer a valid alternative for high-performance computing at a considerably lower cost. Furthermore, scalability is demonstrated up to the largest problem size we were able to run; considering the new hardware generation from Nvidia, Fermi, the total speed-up exceeds four orders of magnitude when running our simulations on the Tesla S2050 server.
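To make the space-for-time trade concrete, the toy sketch below emulates the logical behaviour of such a system sequentially: n membrane divisions create 2^n candidate truth assignments, which are then checked against the clauses. This is only the skeleton of the idea under a DIMACS-style clause encoding, not the CUDA simulator discussed in the paper (where membranes map onto GPU threads and blocks).

```python
# Toy sequential emulation of SAT-by-membrane-division. Each division step
# doubles the workspace; after n_vars steps every membrane holds one
# complete truth assignment, and clause checking is a (conceptually
# parallel) scan over membranes.
def solve_sat_by_division(n_vars, clauses):
    """clauses: iterable of iterables of non-zero ints, DIMACS-style
    (literal v means x_v is true, -v means x_v is false)."""
    membranes = [{}]                          # one initial membrane
    for v in range(1, n_vars + 1):            # linear number of steps
        membranes = [{**m, v: val}            # each membrane divides in two
                     for m in membranes for val in (True, False)]
    def satisfies(assignment):
        return all(any(assignment[abs(l)] == (l > 0) for l in c)
                   for c in clauses)
    return any(satisfies(m) for m in membranes)

# e.g. (x1 or not x2) and (x2 or x3):
# solve_sat_by_division(3, [[1, -2], [2, 3]])  -> True
```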
107.
Nowadays, the impact of technological developments on improving human activities is becoming more evident. E-learning is no different: systems that assist the daily activities of students and teachers are in common use. Typically, e-learning recommender systems are focused on students; however, teachers can also benefit from this type of tool. A recommender system can propose actions and resources that facilitate teaching activities, such as structuring learning strategies. In any case, a complete representation of the user is required. This paper shows how a fuzzy ontology can be used to represent user profiles in a recommender engine and to enhance the user's activities in e-learning environments. A fuzzy ontology is an extension of domain ontologies for solving the problems of uncertainty in sharing and reusing knowledge on the Semantic Web. The user profile is built from learning objects published by the user himself in a learning object repository. The initial experiment confirms that the automatically obtained fuzzy ontology is a good representation of the user's preferences. The experimental results also indicate that the presented approach is useful and warrants further research in recommendation and information retrieval.
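As a rough illustration of how fuzzy degrees can drive recommendation, the sketch below stores a user profile as topic-membership degrees in [0, 1] and ranks resources by a standard sup-min fuzzy match. The topics, degrees, and aggregation choice are hypothetical and much simpler than the paper's ontology-based profile.

```python
# Hypothetical fuzzy-profile matching; not the paper's fuzzy ontology.
def fuzzy_match(profile, resource):
    """profile, resource: dicts mapping topic -> membership degree in [0,1].
    Score = sup over shared topics of min(profile deg, resource deg)."""
    common = set(profile) & set(resource)
    return max((min(profile[t], resource[t]) for t in common), default=0.0)

# Profile inferred from the learning objects a teacher has published.
profile = {"fuzzy-logic": 0.9, "e-learning": 0.7, "databases": 0.2}
candidates = {
    "Intro to fuzzy sets": {"fuzzy-logic": 1.0},
    "SQL tutorial": {"databases": 1.0},
}
ranked = sorted(candidates,
                key=lambda r: fuzzy_match(profile, candidates[r]),
                reverse=True)   # best-matching resource first
```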
108.
The IEEE 802.3az standard provides a new low-power mode that Ethernet network interfaces can use to save energy when there is no traffic to transmit. Simultaneously with the final standard approval, several algorithms were proposed to govern the physical interface's transition between the normal active mode and the new low-power mode. In fact, the standard leaves this sleeping algorithm unspecified to spur competition among vendors and achieve the greatest energy savings. In this paper, we try to bring some light to the best-known sleeping algorithms, providing mathematical models for the expected energy savings and for the average packet delay inflicted on outgoing traffic. We then use the models to derive optimal configuration parameters for them under given efficiency constraints.
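As a back-of-the-envelope example of this kind of model (not the paper's derivation), the sketch below estimates the energy ratio and mean delay of a simple frame-coalescing sleeping policy under light Poisson traffic: sleep until qmax frames are queued, pay a wake transition, transmit the batch, and go back to sleep. All timing and power parameters are illustrative placeholders.

```python
# Light-load approximation of a frame-coalescing sleeping policy.
def coalescing_model(lam, qmax, t_wake=16.5e-6, t_sleep=182e-6,
                     t_frame=1.2e-6, p_low=0.1):
    """lam: frame arrival rate (frames/s); qmax: coalescing threshold.
    Returns (energy ratio vs. always-active, average frame delay in s).
    Approximation: arrivals during wake/transmit/sleep transitions are
    ignored, which is reasonable only at low load."""
    t_gather = qmax / lam                      # asleep, collecting frames
    cycle = t_gather + t_wake + qmax * t_frame + t_sleep
    low_power_frac = t_gather / cycle
    energy_ratio = low_power_frac * p_low + (1 - low_power_frac)
    # A frame waits on average (qmax-1)/(2*lam) for the batch to fill,
    # plus the wake transition and half the batch transmission time.
    avg_delay = (qmax - 1) / (2 * lam) + t_wake + qmax * t_frame / 2
    return energy_ratio, avg_delay

# e.g. 10 000 frames/s in batches of 10:
# coalescing_model(1e4, 10) -> (~0.26 of active energy, ~0.47 ms delay)
```

Sweeping qmax in such a model exposes the basic trade-off the paper formalizes: larger batches save more energy but inflict more delay on outgoing traffic.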
109.
We discuss how standard Cost-Benefit Analysis (CBA) should be modified in order to take risk (and uncertainty) into account. We propose several approaches used in finance (Value at Risk, Conditional Value at Risk, downside risk measures, and the efficiency ratio) as useful tools to model the impact of risk in project evaluation. After introducing the concepts, we show how they can be used in CBA and provide some simple examples to illustrate how such concepts can be applied to evaluate the desirability of a new infrastructure project.
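In the same spirit as those examples, here is a minimal sketch of risk-adjusted appraisal: simulate the project's net present value (NPV) distribution, then report VaR and CVaR at level alpha. The cash-flow model, horizon, discount rate, and cost figures are invented purely for illustration.

```python
# VaR/CVaR of a simulated project NPV distribution (toy example).
import numpy as np

def var_cvar(npv, alpha=0.05):
    """Value at Risk and Conditional Value at Risk of an NPV sample,
    expressed as losses (positive = money lost)."""
    q = np.quantile(npv, alpha)        # alpha-quantile of outcomes
    var = -q
    cvar = -npv[npv <= q].mean()       # mean of the worst alpha tail
    return var, cvar

rng = np.random.default_rng(42)
# Toy project: uncertain yearly benefits, 20-year horizon, 4% discount rate.
years = np.arange(1, 21)
benefits = rng.normal(10.0, 4.0, size=(100_000, years.size))  # M€ per year
discount = 1.04 ** -years
npv = benefits @ discount - 120.0      # minus an up-front cost of 120 M€
print("P(NPV < 0) = %.3f" % (npv < 0).mean())
print("VaR(5%%) = %.1f M€, CVaR(5%%) = %.1f M€" % var_cvar(npv))
```

Whereas classical CBA would report only the expected NPV, VaR and CVaR summarize the downside tail, which is what makes two projects with equal expected NPV comparable in risk terms.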
110.
An assessment was made of the microbiological quality of the final product (different retail cuts) produced by two different retail supermarket groups (A and B). The influence of sanitary conditions on the microbiological quality of the final product was evaluated, and the possible influences on shelf life were determined. Supermarket group A (Sup group A) received carcasses with significantly lower surface counts of micro-organisms than supermarket group B (Sup group B), while the latter had a more efficient overall sanitation programme than the former. Five cuts were monitored for total aerobic counts, psychrotrophic counts, lactobacilli, Enterobacteriaceae and Pseudomonadaceae. A shelf-life study was also carried out by repeating these enumerations on the same meat samples after refrigerated storage at 5°C for 2 and 4 days, respectively. It is generally accepted that a good refrigeration or chilling regime will preserve the inherent meat quality, but in this study it was found that neither served as a guarantee of quality. The more stringent hygiene at retail level of Sup group B yielded consistently lower mean counts of the different bacterial groups for all the meat cuts monitored, and thus meat with an extended shelf life. The total count (at 30°C) on meat cuts was the highest, followed by the psychrotrophs, the Pseudomonadaceae, the Enterobacteriaceae and the lactobacilli. Minced meat generally had the highest mean aerobic total microbial counts; this count on minced meat might therefore be a suitable indicator for monitoring the overall sanitary condition of a retail premises. The results re-emphasized the multi-factorial complexity of fresh meat quality and shelf life. The microbial quality of the raw material (carcasses), the maintenance of the cold chain, the sanitary condition of premises, equipment and personnel surfaces, and general management practices are factors that collectively determine the microbiological quality of the product.