8,003 results found (search time: 15 ms)
91.
Ahmed Samet, Eric Lefèvre, Sadok Ben Yahia 《Journal of Intelligent Information Systems》2016,47(1):135-163
Associative classification has been shown to produce interesting results whenever it is used to classify data. With the increasing complexity of new databases, retrieving valuable information and classifying incoming data has become a pressing issue. The evidential database is a new type of database that represents imprecision and uncertainty, and extracting pertinent information from it, such as frequent patterns and association rules, is a task of paramount importance. In this work, we tackle the problem of pertinent information extraction from an evidential database. A new data mining approach, denoted EDMA, is introduced that extracts frequent patterns while overcoming the limits of the pioneering works in the literature. A new classifier based on evidential association rules is then introduced. The obtained association rules, as well as their respective confidence values, are studied and weighted with respect to their relevance. The proposed methods were thoroughly evaluated on several synthetic evidential databases and showed improved performance.
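As a rough illustration of mining over an evidential database, the sketch below computes an itemset's support when each transaction assigns a belief mass to an item's presence. The specific measure used here (mean over transactions of the product of per-item beliefs) is a simplified stand-in chosen for clarity, not necessarily the precise support measure EDMA defines.

```python
# Toy evidential database: each transaction maps an item to the belief mass
# that the item is present; absent keys carry zero belief.

def evidential_support(db, itemset):
    """Mean over transactions of the product of per-item beliefs."""
    total = 0.0
    for transaction in db:
        prod = 1.0
        for item in itemset:
            prod *= transaction.get(item, 0.0)
        total += prod
    return total / len(db)

db = [
    {"bread": 0.9, "milk": 0.6},
    {"bread": 0.4, "milk": 0.8},
    {"milk": 1.0},
]

print(round(evidential_support(db, {"bread"}), 4))          # 0.4333
print(round(evidential_support(db, {"bread", "milk"}), 4))  # 0.2867, joint support is lower
```

A frequent-pattern miner would then keep itemsets whose support exceeds a user-chosen threshold, exactly as in classical Apriori-style mining but with graded rather than binary item presence.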
92.
93.
Eric Pedrol, Javier Martínez, Magdalena Aguiló, Manuel Garcia-Algar, Moritz Nazarenus, Luca Guerrini, Eduardo Garcia-Rico, Francesc Díaz, Jaume Massons 《Microfluidics and nanofluidics》2017,21(12):181
This paper presents an optofluidic device for cell discrimination with two independent interrogation regions. Pump light is coupled into the device, and cell fluorescence is collected from the two interrogation zones by optical fibers embedded in the optofluidic chip. To test the reliability of the device, AU-565 cells (expressing EpCAM and HER2 receptors) and RAMOS cells were mixed in a controlled manner, confined inside a hydrodynamically focused flow in the microfluidic chip, and detected individually so that they could be discriminated as positive events (signal received from the fluorescently labeled antibodies on the AU-565 cells) or negative events (RAMOS cells). A correlation analysis of the two signals reduces the influence of noise on the overall data.
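The correlation step can be illustrated with a minimal sketch: a genuine cell produces matching fluorescence pulses in both interrogation zones, while detector noise does not, so thresholding a correlation coefficient rejects noise-only events. The signal traces and threshold below are invented for illustration.

```python
import math

def pearson(x, y):
    """Pearson correlation of two equal-length signal windows."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A fluorescence pulse seen in both zones correlates strongly; uncorrelated
# detector noise does not, so thresholding the coefficient rejects noise events.
pulse = [0, 1, 4, 9, 4, 1, 0]                       # zone-1 signal (invented)
second_zone = [0.1, 1.2, 3.8, 9.1, 4.2, 0.9, 0.2]   # same cell seen in zone 2
noise = [0.3, -0.1, 0.2, 0.0, -0.2, 0.1, -0.3]      # no cell present

is_cell = pearson(pulse, second_zone) > 0.9
```

In a real device the second trace would also be time-shifted by the transit time between zones, so the correlation would be evaluated at the expected lag.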
94.
Crusta: A new virtual globe for real-time visualization of sub-meter digital topography at planetary scales (total citations: 1; self-citations: 0; citations by others: 1)
Tony Bernardin, Eric Cowgill, Oliver Kreylos, Christopher Bowles, Peter Gold, Bernd Hamann, Louise Kellogg 《Computers & Geosciences》2011,37(1):75-85
Virtual globes are becoming ubiquitous in the visualization of planetary bodies, and of Earth specifically. While many current virtual globes have proven quite useful for remote geologic investigation, they were never designed to serve as virtual geologic instruments. Their shortcomings have become more obvious as earth scientists struggle to visualize recently released digital elevation models of very high spatial resolution (0.5-1 m²/sample) and extent (>2000 km²). We developed Crusta as an alternative virtual globe that allows users to easily visualize their custom imagery and, more importantly, their custom topography. Crusta represents the globe as a 30-sided polyhedron to avoid distortion of the display, in particular the singularities at the poles characteristic of other projections. This polyhedron defines 30 "base patches," each a four-sided region that can be subdivided into an arbitrarily fine grid on the surface of the globe to accommodate input data of arbitrary resolution, from global (BlueMarble) to local (tripod LiDAR), all in the same visualization. We designed Crusta to be dynamic, with the shading of the terrain surface computed on the fly as the user manipulates the point of view. In a similarly interactive fashion, the globe's surface can be exaggerated vertically; the combination of the two effects greatly improves the perception of shape. A convenient pre-processing tool based on the GDAL library facilitates importing a number of data formats into the Crusta-specific multi-scale hierarchies that enable interactive visualization on a range of platforms, from laptops to immersive GeoWalls and CAVEs. The main scientific user community for Crusta is earth scientists, and their needs have been driving its development.
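The base-patch refinement can be caricatured as a quadtree subdivision of a four-cornered cell, with resolution doubling at each level. The sketch below illustrates only that general idea on a flat unit square; it is an assumption for illustration, not Crusta's actual multi-scale data structure.

```python
def subdivide(quad, levels):
    """Recursively split a four-cornered axis-aligned patch into 2x2 children.

    `quad` is four (x, y) corners in order; after `levels` splits a base
    patch holds 4**levels leaf cells, so linear resolution doubles per level.
    """
    if levels == 0:
        return [quad]
    (x0, y0), (x1, y1) = quad[0], quad[2]   # opposite corners of the cell
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    children = [
        ((x0, y0), (mx, y0), (mx, my), (x0, my)),
        ((mx, y0), (x1, y0), (x1, my), (mx, my)),
        ((x0, my), (mx, my), (mx, y1), (x0, y1)),
        ((mx, my), (x1, my), (x1, y1), (mx, y1)),
    ]
    leaves = []
    for child in children:
        leaves.extend(subdivide(child, levels - 1))
    return leaves

base_patch = ((0, 0), (1, 0), (1, 1), (0, 1))
print(len(subdivide(base_patch, 3)))  # 4**3 = 64 leaf cells
```

On the real globe each of the 30 base patches would be subdivided adaptively, only as deep as the locally available data resolution requires.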
95.
This paper describes a novel linearly-weighted gradient smoothing method (LWGSM). The proposed method is based on irregular cells and can therefore be used for problems with arbitrarily complex geometrical boundaries. Based on an analysis of the compactness and the positivity of the coefficients of influence of the stencils used to approximate a derivative, one favorable scheme (VIII) is selected from eight proposed discretization schemes. Scheme VIII is then verified and carefully examined by solving Poisson equations while varying the number of nodes, the shapes of the cells, and the irregularity of the triangular cells. The strong form of the incompressible Navier–Stokes equations, augmented with artificial compressibility terms, is tackled, with the spatial derivatives approximated by consistent and successive application of the gradient smoothing operation over smoothing domains at various locations. In all test cases the LWGSM solver exhibits robust, stable, and accurate behavior, and the incompressible LWGSM solutions agree well with experimental and literature data. The proposed LWGSM can therefore be reliably used to obtain accurate solutions to a wide range of fluid flow problems.
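A gradient smoothing operation of the general kind LWGSM builds on follows from the divergence theorem: the smoothed gradient over a domain of area A is approximated as grad(u) ~ (1/A) * sum over boundary edges of u(midpoint) * n * L, where n is the outward normal and L the edge length. The implementation below is a generic illustration of that operation, not the paper's scheme VIII.

```python
def smoothed_gradient(vertices, u):
    """Approximate grad(u) over a polygonal smoothing domain via the
    divergence theorem, using midpoint edge values.

    `vertices` are counter-clockwise (x, y) corners; `u` is a callable field.
    """
    n = len(vertices)
    area = 0.0
    gx = gy = 0.0
    for i in range(n):
        x0, y0 = vertices[i]
        x1, y1 = vertices[(i + 1) % n]
        area += 0.5 * (x0 * y1 - x1 * y0)        # shoelace area term
        um = u((x0 + x1) / 2, (y0 + y1) / 2)     # field value at edge midpoint
        # outward normal times edge length for a CCW polygon: (dy, -dx)
        gx += um * (y1 - y0)
        gy += um * (x0 - x1)
    return gx / area, gy / area

# A linear field is reproduced exactly -- the basic consistency check
# expected of any gradient smoothing scheme.
tri = [(0, 0), (1, 0), (0, 1)]
gx, gy = smoothed_gradient(tri, lambda x, y: 2 + 3 * x - 5 * y)
print(gx, gy)  # 3.0 -5.0
```

Exact reproduction of linear fields is what the compactness/positivity analysis in the paper is checking in a more general setting.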
96.
Vidroha Debroy, W. Eric Wong 《Journal of Systems and Software》2011,84(4):587-602
Test set size, in terms of the number of test cases, is an important consideration when testing software systems. Using too few test cases might result in poor fault detection, and using too many might be very expensive and introduce redundancy. We define the failure rate of a program as the fraction of test cases in an available test pool that result in execution failure on that program. This paper investigates the relationship between failure rates and the number of test cases required to detect faults. Our experiments, based on 11 sets of C programs, suggest that an accurate estimate of the failure rates of potential faults in a program provides a reliable estimate of adequate test set size with respect to fault detection, and this should therefore be one of the factors kept in mind during test set construction. Furthermore, the proposed model is fairly robust to errors in the estimated failure rates and still provides good predictive quality. Experiments are also performed to observe, through failure rates, the relationship between multiple faults present in the same program. When predicting effectiveness against a program with multiple faults, the results indicate that not knowing the number of faults in the program is not a significant concern, as predictive quality is typically not affected adversely.
97.
Kazem Taghva, Eric Stofsky 《International Journal on Document Analysis and Recognition》2001,3(3):125-137
In this paper, we describe a spelling correction system designed specifically for OCR-generated text that selects candidate words using information gathered from multiple knowledge sources. The system is based on static and dynamic device mappings, approximate string matching, and n-gram analysis. Our statistically based, Bayesian system incorporates a learning feature that collects confusion information at the collection and document levels. An evaluation of the new system is also presented.
Received August 16, 2000 / Revised October 6, 2000
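A statistically based corrector of this general shape scores each lexicon word near the OCR output by a prior probability times a channel model. The sketch below is a deliberately crude stand-in: the (1/4)**distance channel term and the tiny lexicon with its priors are invented, whereas a real system like the one described learns per-character confusion probabilities from the collection and document levels.

```python
def edit_distance(a, b):
    """Levenshtein distance, used to restrict the candidate set."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def rank_candidates(ocr_word, lexicon, priors):
    """Rank lexicon words near the OCR output by prior * channel model.
    The (1/4)**distance channel term is a crude placeholder for learned
    per-character confusion probabilities."""
    scored = []
    for word in lexicon:
        d = edit_distance(ocr_word, word)
        if d <= 2:
            scored.append((priors.get(word, 1e-6) * 0.25 ** d, word))
    return [w for _, w in sorted(scored, reverse=True)]

lexicon = ["form", "farm", "foam", "fort"]
priors = {"form": 0.5, "farm": 0.2, "foam": 0.2, "fort": 0.1}
print(rank_candidates("forn", lexicon, priors))  # "form" ranks first
```

The dynamic-mapping and learning features in the paper amount to updating the channel term as confusions are observed, so the ranking adapts to the scanner and document at hand.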
98.
99.
Internet complexity makes reasoning about traffic equilibrium difficult, partly because users react to congestion. This difficulty calls for an analytic technique that is simple, yet has enough detail to capture user behavior and enough flexibility to address a broad range of issues. This paper presents such a technique. It treats traffic equilibrium as a balance between an inflow controlled by users and an outflow controlled by the network (link capacity, congestion avoidance, etc.). This decomposition is demonstrated with a surfing-session model and validated against a traffic trace and NS2 simulations. The technique's accessibility and breadth are illustrated through an analysis of several issues concerning the location, stability, robustness, and dynamics of traffic equilibrium.
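The inflow/outflow balance can be made concrete with a toy fixed-point computation: user demand falls as delay grows, the network's delay grows with offered load, and equilibrium is the rate at which the two are consistent. The demand and delay curves below are invented for illustration, not taken from the paper.

```python
def equilibrium(demand, delay, lam0=0.1, damping=0.5, iters=2000):
    """Damped fixed-point iteration for traffic equilibrium: users inject
    demand(d) at delay d, while the network imposes delay(lam) at rate lam;
    at equilibrium lam == demand(delay(lam))."""
    lam = lam0
    for _ in range(iters):
        lam = (1 - damping) * lam + damping * demand(delay(lam))
    return lam

mu = 1.0                              # link capacity (pkts/ms), assumed
demand = lambda d: 2.0 / (1.0 + d)    # users back off as delay grows
delay = lambda lam: 1.0 / (mu - lam)  # M/M/1-style congestion delay

lam_star = equilibrium(demand, delay)
print(round(lam_star, 4))  # 0.5858, i.e. 2 - sqrt(2) for these curves
```

The damping term matters: it stands in for the gradual reaction of users to congestion, and without it this particular fixed point would oscillate rather than settle.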
100.
Kees Zoethout, Wander Jager, Eric Molleman 《Autonomous Agents and Multi-Agent Systems》2008,16(1):75-94
Multi-agent simulation is applied to explore how different types of task variety cause workgroups to change their task allocation accordingly. We studied two groups: generalists and specialists. We hypothesised that the performance of the specialists would decrease when task variety increases, whereas the generalists would perform better under high task variety. These hypotheses were only partly supported, because learning and motivational effects changed the task allocation process in a much more complex way. We conclude that although no task variety leads to specialisation and high task variety leads to generalisation, performance is, in general, better when task variety is low. Further, with no task variety, specialists outperform generalists; with moderate variety, the opposite is true. With high task variety there is no room for expertise or motivational development, so the behaviour of specialists and generalists, and consequently their performance, becomes more similar.
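The interaction between learning and allocation can be caricatured in a few lines: each task goes to the currently most attractive agent, skill grows with repetition, and a workload penalty crudely stands in for motivational effects. All parameters and rules here are hypothetical illustrations, not the authors' simulation model.

```python
def allocate(tasks, n_agents=2, learning=0.2, load_penalty=0.05):
    """Greedy allocation with learning-by-doing: each task goes to the agent
    with the best score (task skill minus a workload penalty), and that
    agent's skill on the task then improves."""
    skill = [dict() for _ in range(n_agents)]
    load = [0] * n_agents
    assignment = []
    for t in tasks:
        best = max(range(n_agents),
                   key=lambda a: skill[a].get(t, 1.0) - load_penalty * load[a])
        skill[best][t] = skill[best].get(t, 1.0) + learning  # repetition builds expertise
        load[best] += 1
        assignment.append(best)
    return assignment

# No task variety: the same agent keeps winning, i.e. the group specialises.
print(allocate(["weld"] * 6))          # [0, 0, 0, 0, 0, 0]
# High variety: no expertise can build up, so work spreads across agents.
print(allocate(["a", "b", "c", "d"]))  # [0, 1, 0, 1]
```

Even this toy reproduces the qualitative pattern in the abstract: repetition drives specialisation, while constant novelty pushes the group toward generalisation.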