61.
《Expert systems with applications》2014,41(6):3134-3142
Partitioning the universe of discourse and determining intervals that carry useful temporal information and offer better interpretability are critical for forecasting with fuzzy time series. In the existing literature, researchers seldom consider the effect of the time variable when partitioning the universe of discourse, so the resulting temporal intervals lack interpretability. In this paper, we take temporal information into account and partition the universe of discourse into intervals of unequal length, which improves forecasting quality. First, the time variable is incorporated into the partitioning through Gath–Geva clustering-based time series segmentation, which yields prototypes of the data; suitable intervals are then determined from the prototypes by means of information granules. The result is an effective method for partitioning the universe and determining intervals, and we show that these intervals carry well-defined semantics. To verify the effectiveness of the approach, we apply the proposed method to forecasting the student enrollment of the University of Alabama and the Taiwan Stock Exchange Capitalization Weighted Stock Index. The experimental results show that partitioning with temporal information greatly improves forecasting accuracy. Furthermore, the proposed method is not sensitive to its parameters.
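The interval-determination step can be sketched in a few lines. This is only an illustration under the assumption that cluster prototypes are already available (the paper obtains them via Gath–Geva clustering-based segmentation, which is not reproduced here), with cut points placed midway between consecutive prototypes:

```python
# Toy sketch: unequal-length interval partitioning of the universe of
# discourse from given cluster prototypes. The prototypes are assumed
# inputs; the paper derives them from Gath-Geva clustering-based
# time series segmentation (not shown here).

def partition_from_prototypes(prototypes, lo, hi):
    """Split [lo, hi] into unequal intervals whose cut points lie
    midway between consecutive (sorted) prototypes."""
    p = sorted(prototypes)
    cuts = [(a + b) / 2.0 for a, b in zip(p, p[1:])]
    bounds = [lo] + cuts + [hi]
    return list(zip(bounds, bounds[1:]))

# Hypothetical enrollment-like range with three prototypes.
intervals = partition_from_prototypes([14000, 16000, 19000], 13000, 20000)
```

With three hypothetical prototypes, the resulting intervals are unequal in length and each is anchored on one prototype, which is what gives the partition its interpretability.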
62.
《Expert systems with applications》2014,41(7):3444-3449
During the early design and development stages, every engineering system has to meet its specific reliability goals. The target reliability of the system is achieved by applying an effective reliability apportionment technique to its subsystems. Various traditional methods exist for performing reliability allocation based on engineering factors that are assessed subjectively. The conventional approach requires assessing factors such as complexity, cost, and maintenance, which may be unrealistic in practice when domain experts of varied expertise and background assess them in a crisp manner. In this paper, we treat the allocation factors as fuzzy numbers evaluated in fuzzy linguistic terms, and propose fuzzy proportionality factor scales for the subsystems. To carry out the fuzzy division needed to evaluate the fuzzy proportionality factor, we also propose an approximation method based on linear programming for trapezoidal fuzzy numbers. The centroid method of defuzzification is employed to derive weighting factors from the fuzzy proportionality factors, and the allocated reliability of each subsystem is then computed from its weighting factor. An example illustrates the potential application of the proposed fuzzy reliability allocation approach.
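The centroid defuzzification step can be made concrete for trapezoidal fuzzy numbers. The closed form below follows from integrating x·μ(x) over the rising, flat, and falling segments of a trapezoid (a, b, c, d); the normalization into weighting factors is a plausible sketch, not the paper's exact procedure:

```python
# Sketch: centroid defuzzification of a trapezoidal fuzzy number
# (a, b, c, d), i.e. membership rising on [a, b], flat on [b, c],
# falling on [c, d]. The normalization step is an illustrative
# assumption, not the paper's exact weighting procedure.

def trapezoid_centroid(a, b, c, d):
    area = (d + c - a - b) / 2.0
    moment = ((b - a) * (2 * b + a) / 6.0      # rising edge
              + (c * c - b * b) / 2.0          # flat top
              + (d - c) * (2 * c + d) / 6.0)   # falling edge
    return moment / area

def normalized_weights(fuzzy_factors):
    """Crisp weighting factors from trapezoidal fuzzy factors."""
    crisp = [trapezoid_centroid(*f) for f in fuzzy_factors]
    total = sum(crisp)
    return [x / total for x in crisp]
```

For a symmetric trapezoid such as (1, 2, 3, 4) the centroid is the midpoint 2.5, which is a quick sanity check on the piecewise integration.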
63.
《Expert systems with applications》2014,41(11):5509-5519
Postal logistics relies on a complex transportation network for efficient mail delivery. A postal logistics network therefore consists of various functional sites in a hybrid hub-and-spoke structure: multiple Delivery & Pickup Stations (D&PSs), multiple Mail Processing Centers (MPCs), and one Exchange Center (EC). In this paper, we develop two mathematical models with realistic restrictions for Korea Post's current postal logistics network that consider locations and allocations simultaneously. We propose an Integer Linear Programming (ILP) model for transportation network organization and vehicle operation, and a Mixed Integer Linear Programming (MILP) model that also considers potential ECs, simultaneously deciding the EC location, transportation network organization, and vehicle operation. We use modified real data from Korea Post and consider several scenarios to support EC decision makers. The proposed models and scenarios are very useful for postal logistics network designers and operators.
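To make the location-allocation decision concrete, here is a toy enumeration over a hypothetical two-candidate instance (the site names, costs, and distances are invented for illustration). The paper solves the real problem as a MILP; for a tiny instance the same decision can be found by brute force:

```python
# Toy sketch of the EC location + allocation decision: pick one
# Exchange Center among candidate sites and route every Mail
# Processing Center through it, minimizing fixed cost plus transport
# cost. All names and numbers below are hypothetical.

def best_ec(candidates, mpcs, fixed_cost, dist):
    """candidates/mpcs: site names; fixed_cost[ec]; dist[(mpc, ec)]."""
    def total(ec):
        return fixed_cost[ec] + sum(dist[(m, ec)] for m in mpcs)
    return min(candidates, key=total)

fixed = {"SiteA": 10, "SiteB": 14}
d = {("MPC1", "SiteA"): 8, ("MPC2", "SiteA"): 9,
     ("MPC1", "SiteB"): 3, ("MPC2", "SiteB"): 11}
choice = best_ec(["SiteA", "SiteB"], ["MPC1", "MPC2"], fixed, d)
```

Here SiteA wins (10 + 8 + 9 = 27 versus 14 + 3 + 11 = 28); a MILP solver makes the same trade-off at realistic scale, where enumeration is no longer possible.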
64.
《Expert systems with applications》2014,41(11):5416-5430
Detecting SQL injection attacks (SQLIAs) is becoming increasingly important in database-driven web sites. Until now, most studies on SQLIA detection have focused on the structured query language (SQL) structure at the application level. Unfortunately, this approach inevitably fails to detect attacks that exploit stored procedures and data already inside the database system. In this paper, we propose a framework to detect SQLIAs at the database level by using SVM classification with various kernel functions. The key issue is how to represent the internal query tree collected from the database log in a form suitable for the SVM classification algorithm so as to achieve good detection performance. To solve this, we first propose a novel method that converts the query tree into an n-dimensional feature vector via a multi-dimensional sequence as an intermediate representation; a direct conversion is difficult because of the complexity and variability of the query tree structure. Second, we propose a method that extracts semantic features as well as syntactic features when generating the feature vector. Third, we propose a method that transforms string feature values into numeric feature values by combining multiple statistical models; the combined model maps each string value to a single numeric value that captures the string's multiple characteristics. To demonstrate the feasibility of our proposals in practical environments, we implement the SQLIA detection system on PostgreSQL, a popular open-source database system, and perform experiments.
The experimental results on internal query trees of PostgreSQL validate that our proposal is effective in detecting SQLIAs: with at least 99.6% probability, the probability that a malicious query is correctly predicted as an SQLIA exceeds the probability that a normal query is incorrectly predicted as an SQLIA. Finally, additional experiments compare our proposal with syntax-focused feature extraction and with feature transformation based on a single statistical model. The results show that our proposal significantly increases the probability of correctly detecting SQLIAs across various SQL statements compared with the previous methods.
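As a sketch of the syntactic side of the feature-vector idea, the toy function below counts node types in a small tree representation to produce a fixed-length numeric vector. The node-type list and tuple encoding are assumptions for illustration; the actual framework operates on PostgreSQL's internal query trees and additionally uses semantic features and statistical string-to-number transforms:

```python
# Toy sketch: converting a query-tree-like structure into a
# fixed-length numeric feature vector by counting node types.
# NODE_TYPES and the (kind, children) tuple encoding are invented
# for illustration; they are not PostgreSQL's internal format.

NODE_TYPES = ["SELECT", "WHERE", "OR", "AND", "UNION", "COMMENT", "LITERAL"]

def tree_to_vector(tree):
    """tree: (node_type, [children]). Returns counts per node type."""
    counts = dict.fromkeys(NODE_TYPES, 0)
    stack = [tree]
    while stack:
        kind, children = stack.pop()
        if kind in counts:
            counts[kind] += 1
        stack.extend(children)
    return [counts[k] for k in NODE_TYPES]

# A tautology injection ("' OR '1'='1") would inflate the OR count.
benign = ("SELECT", [("WHERE", [("AND", [("LITERAL", []), ("LITERAL", [])])])])
vec = tree_to_vector(benign)
```

Vectors like these can be fed to any off-the-shelf SVM; the hard part the paper addresses is doing this faithfully for real query trees, whose structure is far more complex and variable than this toy encoding.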
65.
《Expert systems with applications》2014,41(6):2703-2712
A concept lattice is an ordered structure over concepts and is particularly effective for mining association rules. However, a concept lattice is not efficient for large databases because the lattice size grows with the number of transactions. Finding an efficient strategy for dynamically updating the lattice is an important issue for real-world applications, where new transactions are constantly inserted into databases. To build an efficient storage structure for mining association rules, this study proposes a method for building the initial frequent closed itemset lattice from the original database; the lattice is then updated as new transactions are inserted, and the number of rescans over the entire database is reduced during maintenance. The proposed algorithm is compared with building the lattice in batch mode to demonstrate its effectiveness.
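A frequent closed itemset is a frequent itemset whose support is strictly greater than that of every frequent proper superset. As a baseline for the incremental lattice maintenance described above, a brute-force pass over a small database can be sketched as follows (this enumerates all itemsets, so it is only viable for toy inputs):

```python
# Brute-force frequent closed itemset mining over a tiny database.
# This is the naive baseline, not the paper's incremental lattice
# maintenance algorithm: it enumerates every candidate itemset.

from itertools import combinations

def closed_itemsets(transactions, minsup):
    items = sorted({i for t in transactions for i in t})
    def support(s):
        return sum(1 for t in transactions if s <= t)
    frequent = {}
    for k in range(1, len(items) + 1):
        for combo in combinations(items, k):
            s = frozenset(combo)
            sup = support(s)
            if sup >= minsup:
                frequent[s] = sup
    # Keep only itemsets with no frequent superset of equal support.
    return {s: sup for s, sup in frequent.items()
            if not any(s < t and frequent[t] == sup for t in frequent)}

db = [frozenset("ab"), frozenset("abc"), frozenset("ac")]
closed = closed_itemsets(db, 2)
```

On this database {a} (support 3), {a,b} (2), and {a,c} (2) are closed, while {b} and {c} are absorbed by supersets of equal support; the incremental approach maintains exactly this set as transactions arrive, without rescanning the whole database.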
66.
《Computer Standards & Interfaces》2014,36(5):844-854
In the proposed advanced computing environment, known as the HoneyBee Platform, various computing devices using single or multiple interfaces and technologies/standards need to communicate and cooperate efficiently with a certain level of security and safety. These devices may run different types of operating systems with different features and levels of security support. To ensure that all operations within the environment can be carried out seamlessly in an ad-hoc manner, a common mobile platform needs to be developed. The purpose of this long-term project is to investigate and implement a new functional layered model of such a common mobile platform with a secured and trusted ensemble computing architecture for an innovative Digital Economic Environment in the Malaysian context. The platform includes a lightweight operating system that provides a common virtual environment, and a middleware that provides basic routing, resource, and network management as well as security, privacy, and a trusted environment. A generic application programming interface lets application developers access the underlying resources. The aim is for the platform to act as the building block of an ensemble environment upon which higher-level applications can be built. Considered the most essential project in a series of related projects towards a more digital socio-economy in Malaysia, this article presents the design of the target computational platform and the conceptual framework of the HoneyBee project.
67.
Xi Mei, Jing Wang, Hong Zhang, Zhi-cheng Liu, Zhen-xi Zhang 《Computer methods and programs in biomedicine》2014
The ionic mechanism behind changes in short-term memory (STM) during acute myocardial ischemia is not well understood. In this paper, an advanced guinea pig ventricular model developed by Luo and Rudy was used to investigate the STM property of ischemic ventricular myocardium. The STM response was calculated as the time to reach steady-state action potential duration (APD) after an abrupt shortening of the basic cycle length (BCL) in the pacing protocol. Electrical restitution curves (RCs), which simultaneously visualize multiple aspects of APD restitution and STM, were obtained from the dynamic and local S1S2 restitution portrait (RP), where the protocol consists of longer-interval stimuli (S1) and a shorter-interval stimulus (S2). The angle between the dynamic RC and the local S1S2 RC reflects the amount of STM. Our results indicate that, compared with the control (normal) condition, the time constant of the STM response decreases significantly under ischemia, and the angle reflecting the amount of STM is smaller in the ischemic model than in the control model. By tracking the effect of ischemia on intracellular ion concentrations and membrane currents, we found that the ischemia-induced changes in membrane currents exert only subtle influences on STM; the decline in intracellular calcium concentration accounts for most of the decrease in STM.
68.
This paper proposes a novel automatic method for moment segmentation and peak detection in heart sound (HS) patterns, paying special attention to the characteristics of HS envelopes and exploiting properties of the Hilbert transform (HT). Moment segmentation and peak location are accomplished in two steps. First, by applying the Viola integral waveform method in the time domain, the envelope (ET) of the HS signal is obtained, emphasizing the first heart sound (S1) and the second heart sound (S2). Then, based on the characteristics of ET and the properties of the HT of convex and concave functions, a novel method, the short-time modified Hilbert transform (STMHT), is proposed to automatically locate the moment segmentation and peak points of the HS at the zero-crossing points of the STMHT. A fast algorithm computes the STMHT of ET by multiplying ET by an equivalent window (WE). Based on the range of heart rates, numerical experiments, and the important parameters of the STMHT, a moving window width of N = 1 s is validated for locating the moment segmentation and peak points. The proposed procedure is validated on sounds from the Michigan HS database and on sounds from clinical heart diseases such as ventricular septal defect (VSD), aortic septal defect (ASD), Tetralogy of Fallot (TOF), and rheumatic heart disease (RHD). For sounds where S2 can be separated from S1, the average accuracies achieved for the peak of S1 (AP1), the peak of S2 (AP2), the moment segmentation points from S1 to S2 (AT12), and the cardiac cycle (ACC) are 98.53%, 98.31%, 98.36%, and 97.37%, respectively. For sounds where S1 cannot be separated from S2, the average accuracies achieved for the peaks of S1 and S2 (AP12) and for the cardiac cycle (ACC) are 100% and 96.69%.
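The envelope-then-peak pipeline can be illustrated with a simplified stand-in: a moving average of the rectified signal plays the role of the Viola-integral envelope, and peaks are taken at local maxima above a threshold rather than at STMHT zero crossings. The signal, window width, and threshold below are invented for the example:

```python
# Simplified sketch of envelope extraction + peak location for a 1-D
# signal. A moving average of |x| stands in for the Viola-integral
# envelope, and local maxima replace STMHT zero crossings; the bursty
# test signal mimics two heart-sound components.

import math

def envelope(x, w):
    """Moving average of |x| over a window of width w (crude envelope)."""
    half = w // 2
    out = []
    for i in range(len(x)):
        lo, hi = max(0, i - half), min(len(x), i + half + 1)
        out.append(sum(abs(v) for v in x[lo:hi]) / (hi - lo))
    return out

def peak_indices(env, thresh):
    """Indices of local envelope maxima above a threshold."""
    return [i for i in range(1, len(env) - 1)
            if env[i] >= thresh and env[i] > env[i - 1] and env[i + 1] <= env[i]]

# Synthetic signal: two high-amplitude bursts on a quiet background.
sig = [math.sin(0.5 * i) * (1.0 if 20 <= i < 40 or 70 <= i < 90 else 0.05)
       for i in range(110)]
env = envelope(sig, 11)
peaks = peak_indices(env, 0.3)
```

Detected peaks fall inside the two burst regions, mirroring how the envelope localizes S1 and S2 before the finer STMHT analysis fixes the exact segmentation points.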
69.
《Expert systems with applications》2014,41(13):5780-5787
The massive quantity of data available on the Internet today has reached such a volume that it is no longer humanly feasible to sift useful information from it efficiently. One solution is offered by text summarization techniques. Text summarization, the process of automatically creating a shorter version of one or more text documents, is an important way of finding relevant information in large text libraries or on the Internet. This paper presents a multi-document summarization system that concisely extracts the main aspects of a set of documents while trying to avoid the typical problems of this type of summarization: information redundancy and lack of diversity. This is achieved through a new sentence clustering algorithm based on a graph model that makes use of statistical similarities and linguistic treatment. The DUC 2002 dataset was used to assess the performance of the proposed system, which surpasses the DUC competitors by up to a 50% margin in F-measure.
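The cluster-then-select idea can be sketched with a greedy, threshold-based toy (the paper's algorithm is graph-based and includes linguistic treatment, neither of which is reproduced here): sentences become term-frequency vectors, a sentence joins an existing cluster if it is cosine-similar to that cluster's representative, and one representative per cluster forms the summary:

```python
# Toy redundancy-aware extractive summarization: greedy clustering of
# sentences by cosine similarity of term-frequency vectors, keeping
# one representative per cluster. The threshold is an assumption; the
# paper's graph-based algorithm with linguistic treatment is not shown.

import math
from collections import Counter

def cosine(a, b):
    num = sum(a[w] * b[w] for w in a if w in b)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def summarize(sentences, threshold=0.5):
    vecs = [Counter(s.lower().split()) for s in sentences]
    reps = []  # indices of cluster representatives
    for i, v in enumerate(vecs):
        if all(cosine(v, vecs[r]) < threshold for r in reps):
            reps.append(i)  # dissimilar to every cluster: keep it
    return [sentences[i] for i in reps]

docs = ["the cat sat on the mat",
        "the cat sat on a mat",
        "stock markets fell sharply today"]
summary = summarize(docs)
```

The near-duplicate second sentence is absorbed into the first cluster, so the summary keeps one sentence per topic, which is exactly the redundancy/diversity trade-off the paper targets at much larger scale.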
70.
《Expert systems with applications》2014,41(13):5788-5803
In this paper, an improved global-best harmony search algorithm, named IGHS, is proposed. IGHS integrates opposition-based learning in initialization to improve the quality of the initial harmony memory, a new improvisation scheme based on differential evolution to enhance local search, a modified random consideration based on the artificial bee colony algorithm to reduce the randomness of the global-best harmony search (GHS) algorithm, and two perturbation schemes to avoid premature convergence. In addition, two parameters of IGHS, the harmony memory consideration rate and the pitch adjusting rate, are dynamically updated by a composite function combining a linear time-varying function, a periodic function, and a sign function, reflecting the approximate periodicity of evolution in nature. Experimental results on twenty-eight benchmark functions indicate that IGHS is far better than the basic harmony search (HS) algorithm and GHS. In a further study, IGHS is also compared with eight other well-known metaheuristics; the results show that IGHS is better than, or at least comparable to, those approaches on most test functions.
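The parameter-scheduling idea can be illustrated on a bare-bones global-best harmony search with linearly time-varying HMCR and PAR (the schedules below are assumptions; IGHS's opposition-based initialization, DE-style improvisation, ABC-style random consideration, and perturbation schemes are not shown):

```python
# Minimal global-best harmony search with time-varying HMCR/PAR.
# The linear schedules are illustrative assumptions, not IGHS's
# composite (linear + periodic + sign) update functions.

import random

def ghs(f, dim, bounds, hms=10, iters=2000, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for t in range(iters):
        frac = t / iters
        hmcr = 0.90 + 0.09 * frac            # memory consideration rises
        par = 0.35 * (1 - frac) + 0.05       # pitch adjusting rate decays
        best = min(hm, key=f)
        new = []
        for j in range(dim):
            if rng.random() < hmcr:
                x = rng.choice(hm)[j]        # harmony memory consideration
                if rng.random() < par:
                    x = best[j]              # global-best pitch adjustment
            else:
                x = rng.uniform(lo, hi)      # random consideration
            new.append(min(hi, max(lo, x)))
        worst = max(hm, key=f)
        if f(new) < f(worst):
            hm[hm.index(worst)] = new        # replace worst harmony
    return min(hm, key=f)

sphere = lambda x: sum(v * v for v in x)
sol = ghs(sphere, dim=5, bounds=(-10.0, 10.0))
```

On the separable sphere function this skeleton already converges well below its random starting values; IGHS's additional schemes exist precisely to keep such convergence from becoming premature on harder multimodal benchmarks.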