51.
A concept lattice is an ordered structure over concepts and is particularly effective for mining association rules. However, a concept lattice is not efficient for large databases because the lattice size grows with the number of transactions. Finding an efficient strategy for dynamically updating the lattice is therefore an important issue for real-world applications, where new transactions are constantly inserted into the database. To build an efficient storage structure for mining association rules, this study proposes a method for building the initial frequent closed itemset lattice from the original database and updating the lattice as new transactions are inserted. The number of rescans over the entire database during maintenance is reduced. The proposed algorithm is compared with building the lattice in batch mode to demonstrate its effectiveness.
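As a point of reference for the data structure involved, the sketch below computes frequent closed itemsets by brute force (the closure of an itemset is the intersection of all transactions containing it) and then naively rebuilds the result when a new transaction arrives; the toy transactions, the minimum support, and the full rebuild are illustrative assumptions and do not reproduce the incremental maintenance algorithm proposed in the paper.

```python
from itertools import combinations

def closure(itemset, transactions):
    # Closure of an itemset = intersection of all transactions that contain it.
    covering = [t for t in transactions if itemset <= t]
    return frozenset.intersection(*covering) if covering else frozenset(itemset)

def frequent_closed_itemsets(transactions, minsup):
    items = sorted(set().union(*transactions))
    closed = {}
    for r in range(1, len(items) + 1):
        for cand in combinations(items, r):
            c = frozenset(cand)
            support = sum(1 for t in transactions if c <= t)
            if support >= minsup:
                closed[closure(c, transactions)] = support  # a closure has the same support as the itemset
    return closed

# Toy transaction database (illustrative) with minimum support 2.
db = [frozenset(t) for t in (["a", "b"], ["a", "b", "c"], ["b", "c"], ["a", "b", "c"])]
lattice = frequent_closed_itemsets(db, minsup=2)

# A new transaction arrives: here we simply rescan the whole database,
# which is exactly the cost the paper's maintenance strategy aims to avoid.
db.append(frozenset(["a", "c"]))
lattice = frequent_closed_itemsets(db, minsup=2)
print(lattice)
```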
52.
Recommender systems apply data mining and machine learning techniques to filter unseen information and can predict whether a user would like a given item. This paper focuses on the gray-sheep users problem, which is responsible for the increased error rate in collaborative filtering based recommender systems. The paper makes the following contributions: we show that (1) the presence of gray-sheep users can affect the performance – accuracy and coverage – of collaborative filtering based algorithms, depending on the data sparsity and distribution; (2) gray-sheep users can be identified offline using clustering algorithms, where the similarity threshold that isolates these users from the rest of the community is found empirically, and we propose various improved centroid selection approaches and distance measures for the K-means clustering algorithm; (3) the content-based profiles of gray-sheep users can be used to make accurate recommendations, and we offer a hybrid recommendation algorithm that makes reliable recommendations for these users. To the best of our knowledge, this is the first attempt to propose a formal solution to the gray-sheep users problem. Extensive experiments on two datasets (MovieLens and a community of movie fans on the FilmTrust website) show that the proposed approach reduces the recommendation error rate for gray-sheep users while maintaining reasonable computational performance.
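A minimal sketch of the clustering step (flagging as gray sheep the users that lie far from their K-means centroid) might look like the following; the random rating matrix, the number of clusters, and the 90th-percentile distance threshold are assumptions for illustration, whereas the paper's own contribution lies in improved centroid selection and distance measures.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ratings = rng.integers(1, 6, size=(200, 50)).astype(float)  # toy user-item rating matrix

k = 5
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(ratings)

# Distance of each user to the centroid of their assigned cluster.
dist_to_centroid = np.linalg.norm(ratings - km.cluster_centers_[km.labels_], axis=1)

# Empirical threshold (assumption): the farthest 10% of users are treated as gray sheep.
threshold = np.percentile(dist_to_centroid, 90)
gray_sheep = np.where(dist_to_centroid > threshold)[0]
print(f"{len(gray_sheep)} users flagged as gray sheep")
```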
53.
This paper presents a semi-parametric method of parameter estimation for the class of logarithmic ACD (Log-ACD) models using the theory of estimating functions (EF). A number of theoretical results related to the corresponding EF estimators are derived. A simulation study is conducted to compare the performance of the proposed EF estimates with the corresponding ML (maximum likelihood) and QML (quasi maximum likelihood) estimates. It is argued that the EF estimates are relatively easier to evaluate and have sampling properties comparable to those of the ML and QML methods. Furthermore, the suggested EF estimates can be obtained without any knowledge of the error distribution. We apply the suggested methodology to a real financial duration dataset. Our results show that the Log-ACD(1, 1) model fits the data well, giving relatively smaller variation in forecast errors than the linear ACD(1, 1) regardless of the method of estimation. In addition, the Diebold–Mariano (DM) and superior predictive ability (SPA) tests are applied to confirm the performance of the suggested methodology. The new method is slightly better than traditional methods in terms of computation; however, there is no significant difference in forecasting ability across models and methods.
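For reference, one common Log-ACD(1, 1) specification for durations (a standard form in the ACD literature; the paper's exact parameterization may differ) is

$$x_i = \psi_i \, \varepsilon_i, \qquad \ln \psi_i = \omega + \alpha \ln x_{i-1} + \beta \ln \psi_{i-1},$$

where $x_i$ is the $i$-th duration, $\psi_i$ its conditional expectation up to scale, and $\varepsilon_i$ are i.i.d. positive innovations. Modelling the logarithm of $\psi_i$ keeps durations positive without parameter restrictions, which is what distinguishes the Log-ACD from the linear ACD model compared against in the paper.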
54.
55.
The inverse Gaussian distribution has considerable applications in describing product life, employee service times, and so on. In this paper, average-run-length (ARL) unbiased control charts, which monitor the shape and location parameters of the inverse Gaussian distribution respectively, are proposed for the case where the in-control parameters are known. The effects of parameter estimation on the performance of the proposed control charts are also studied. An ARL-unbiased control chart for the shape parameter with the desired ARL0, which takes the variability of the parameter estimate into account, is further developed. The performance of the proposed control charts is investigated in terms of the ARL and the standard deviation of the run length. Finally, an example is used to illustrate the proposed control charts.
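As background on the design criterion (a general property of Shewhart-type charts with independently plotted statistics, not the paper's specific derivation), if $p(\theta)$ is the probability that a single plotted statistic falls outside the control limits when the true parameter is $\theta$, then

$$\mathrm{ARL}(\theta) = \frac{1}{p(\theta)}, \qquad \mathrm{ARL}_0 = \mathrm{ARL}(\theta_0) = \frac{1}{\alpha},$$

and the chart is ARL-unbiased when $\mathrm{ARL}(\theta) \le \mathrm{ARL}(\theta_0)$ for every out-of-control value $\theta \ne \theta_0$, i.e. the ARL curve attains its maximum at the in-control parameter value $\theta_0$.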
56.
Multi-criteria ABC inventory classification (MCIC), which aims to classify inventory items by considering more than one criterion, is one of the most widely employed techniques for inventory control. This paper suggests a cross-evaluation-based weighted linear optimization (CE-WLO) model for MCIC that incorporates a cross-efficiency evaluation method into a weighted linear optimization model for finer classification (or ranking) of inventory items. Using a simulation-based comparative experiment against previous, related investigations, the study demonstrates the inventory-management cost effectiveness and advantages of the proposed model. The proposed model enables more accurate classification of inventory items and better inventory-management cost effectiveness for MCIC, specifically by mitigating the adverse effect of flexibility in the choice of weights and yielding a unique ordering of inventory items.
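A minimal sketch of the underlying idea, assuming a plain weighted linear optimization (each item scored with its own most favourable weights) followed by a simple cross-evaluation, is given below; the toy criteria matrix and the averaging of cross-scores are assumptions and do not reproduce the authors' exact CE-WLO formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Toy data: rows = inventory items, columns = normalized criteria
# (e.g. annual dollar usage, lead time, criticality), scaled to [0, 1].
X = np.array([[0.9, 0.2, 0.7],
              [0.4, 0.8, 0.3],
              [0.6, 0.5, 0.9],
              [0.1, 0.3, 0.2]])
n, m = X.shape

def best_weights(i):
    # maximize X[i] @ w  subject to  X[j] @ w <= 1 for all items j, w >= 0
    res = linprog(c=-X[i], A_ub=X, b_ub=np.ones(n),
                  bounds=[(0, None)] * m, method="highs")
    return res.x

W = np.array([best_weights(i) for i in range(n)])  # each item's self-favourable weights
cross = X @ W.T              # cross[i, j] = item i scored with item j's optimal weights
score = cross.mean(axis=1)   # cross-evaluation score per item
ranking = np.argsort(-score) # finer, unique ordering of the items
print(score, ranking)
```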
57.
The availability of a system or piece of equipment is one of the crucial characteristics that measure customer satisfaction and strongly influence the final choice between competing products. The aim of this work is to provide an approach that improves product availability assessment by taking safety criteria and use situations into account at the design stage. Our work focuses on the routine design of complex products. Availability is often estimated simply from reliability and maintainability: intrinsic availability is the probability that the product is operating satisfactorily at any point in time when used under the conditions stated in the design specifications. The time considered includes operating time and active repair time; intrinsic availability thus excludes all other times in the product lifecycle, such as accident management time, storage time, administrative time, and logistic time. However, many studies show that the loss of availability performance is also due to accidents that occur in unforeseeable utilization situations and force the system to stop in order to ensure user safety according to standards recommendations. We therefore consider the structural product architecture and the different use cases corresponding to the operational states and to the downtimes caused by stop events that may happen during utilization, such as failures, maintenance tasks, and accidents. We then propose a product behavioral analysis, including the use cases, to describe interactions between the product and its users or maintenance operators. Markov chains are used to model the use cases corresponding to operating time (OT), maintenance time (MT), and preparing time after accidents (RT), and these three parameters are combined into a generic approach for improving availability assessment. Such an approach provides traceability of the product behavior along its lifecycle, so the main causes of stops can be identified and guide the designer in improving the availability of future product versions. To validate the approach, an application to a printing line is presented. A simulation of this industrial case study shows good agreement regarding the influence of safety on availability.
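A minimal sketch of the Markov-chain step, assuming a three-state continuous-time chain whose steady-state probability of the operating state is read as availability, is shown below; the state set and all transition rates are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# States: 0 = operating (OT), 1 = maintenance downtime (MT), 2 = post-accident preparation (RT).
# Transition-rate matrix Q (per hour); all rates are illustrative assumptions.
lam_f, lam_a = 0.01, 0.002   # failure rate, accident rate
mu_m, mu_r = 0.5, 0.1        # repair rate, recovery-after-accident rate
Q = np.array([[-(lam_f + lam_a), lam_f,  lam_a],
              [ mu_m,           -mu_m,   0.0  ],
              [ mu_r,            0.0,   -mu_r ]])

# Steady-state distribution: solve pi @ Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0]  # long-run fraction of time spent in the operating state
print(f"steady-state availability = {availability:.4f}")
```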
58.
In C2C communication, all necessary information must be collected promptly when a buyer and a seller communicate; that is, an intelligent C2C agent is needed to provide information to buyers and sellers. With the evolution of computing technology, C2C agents can exploit the efficient delivery capabilities of peer-to-peer (P2P) technology. However, P2P also increases traffic between agents, and communication faults are a fatal problem for C2C business. This study proposes a robust communication architecture based on current P2P content-delivery standards, and its efficiency and robustness are verified experimentally.
59.
The quantity of data available on the Internet has reached such a volume that it has become humanly unfeasible to sift useful information from it efficiently. One solution to this problem is offered by text summarization techniques. Text summarization, the process of automatically creating a shorter version of one or more text documents, is an important way of finding relevant information in large text libraries or on the Internet. This paper presents a multi-document summarization system that concisely extracts the main aspects of a set of documents while trying to avoid the typical problems of this type of summarization: information redundancy and diversity. This is achieved through a new sentence clustering algorithm based on a graph model that makes use of statistical similarities and linguistic treatment. The DUC 2002 dataset was used to assess the performance of the proposed system, which surpasses the DUC competitors by a 50% margin of f-measure in the best case.
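The sketch below illustrates the general sentence-clustering idea (TF-IDF sentence vectors, a cosine-similarity graph, connected components as clusters, one representative sentence per cluster); the example sentences, the 0.2 similarity threshold, and the use of plain connected components are assumptions, and the combined statistical-plus-linguistic treatment described in the paper is not reproduced.

```python
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

sentences = [
    "Text summarization creates a shorter version of one or more documents.",
    "Automatic summarization produces a condensed version of the input texts.",
    "Redundancy is a typical problem in multi-document summarization.",
    "Clustering similar sentences helps reduce redundant information.",
]

tfidf = TfidfVectorizer().fit_transform(sentences)
sim = cosine_similarity(tfidf)

# Build a similarity graph and take connected components as sentence clusters.
g = nx.Graph()
g.add_nodes_from(range(len(sentences)))
for i in range(len(sentences)):
    for j in range(i + 1, len(sentences)):
        if sim[i, j] > 0.2:  # similarity threshold (assumption)
            g.add_edge(i, j, weight=sim[i, j])

summary = []
for comp in nx.connected_components(g):
    comp = list(comp)
    # Representative sentence: the one most similar to the rest of its cluster.
    rep = max(comp, key=lambda s: sim[s, comp].sum())
    summary.append(sentences[rep])
print("\n".join(summary))
```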
60.
Early and accurate diagnosis of Parkinson’s disease (PD) is important for early management, proper prognostication, and for initiating neuroprotective therapies once they become available. Recent neuroimaging techniques, such as dopaminergic imaging using single photon emission computed tomography (SPECT) with 123I-Ioflupane (DaTSCAN), have been shown to detect even early stages of the disease. In this paper, we use the striatal binding ratio (SBR) values calculated from the 123I-Ioflupane SPECT scans (obtained from the Parkinson’s progression markers initiative (PPMI) database) to develop automatic classification and prediction/prognostic models for early PD. We use support vector machines (SVM) and logistic regression in the model building process. The SVM classifier with an RBF kernel produced an accuracy of more than 96% in classifying subjects into early PD and healthy normal, and the logistic model for estimating the risk of PD produced a high degree of fit with statistical significance, indicating its usefulness in PD risk estimation. Hence, we infer that such models have the potential to aid clinicians in the PD diagnostic process.
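A minimal sketch of the classification step (an RBF-kernel SVM on SBR features) might look like the following; the synthetic SBR values, the choice of four striatal regions, and the train/test split are illustrative assumptions standing in for the PPMI data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

# Synthetic stand-in for striatal binding ratios (left/right caudate and putamen);
# the means and spreads are assumptions, not PPMI values.
rng = np.random.default_rng(0)
healthy = rng.normal(loc=[2.9, 2.9, 2.1, 2.1], scale=0.4, size=(100, 4))
early_pd = rng.normal(loc=[2.0, 2.0, 0.9, 0.9], scale=0.4, size=(100, 4))
X = np.vstack([healthy, early_pd])
y = np.array([0] * 100 + [1] * 100)  # 0 = healthy normal, 1 = early PD

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```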