Full-text access type
Paid full text | 8860 articles |
Free | 559 articles |
Free (domestic) | 12 articles |
Subject classification
Electrical engineering | 68 articles |
General | 11 articles |
Chemical industry | 3813 articles |
Metalworking | 114 articles |
Machinery and instrumentation | 132 articles |
Building science | 345 articles |
Mining engineering | 40 articles |
Energy and power | 209 articles |
Light industry | 1682 articles |
Water resources engineering | 55 articles |
Petroleum and natural gas | 42 articles |
Radio electronics | 358 articles |
General industrial technology | 1257 articles |
Metallurgy | 401 articles |
Nuclear technology | 21 articles |
Automation | 883 articles |
Publication year
2024 | 27 articles |
2023 | 160 articles |
2022 | 913 articles |
2021 | 1047 articles |
2020 | 309 articles |
2019 | 281 articles |
2018 | 341 articles |
2017 | 289 articles |
2016 | 357 articles |
2015 | 299 articles |
2014 | 371 articles |
2013 | 615 articles |
2012 | 518 articles |
2011 | 583 articles |
2010 | 391 articles |
2009 | 395 articles |
2008 | 375 articles |
2007 | 345 articles |
2006 | 292 articles |
2005 | 213 articles |
2004 | 196 articles |
2003 | 157 articles |
2002 | 131 articles |
2001 | 78 articles |
2000 | 63 articles |
1999 | 63 articles |
1998 | 52 articles |
1997 | 59 articles |
1996 | 62 articles |
1995 | 50 articles |
1994 | 41 articles |
1993 | 40 articles |
1992 | 32 articles |
1991 | 19 articles |
1990 | 29 articles |
1989 | 16 articles |
1988 | 20 articles |
1987 | 16 articles |
1986 | 8 articles |
1985 | 20 articles |
1984 | 18 articles |
1983 | 19 articles |
1982 | 9 articles |
1981 | 12 articles |
1980 | 14 articles |
1979 | 9 articles |
1977 | 14 articles |
1976 | 7 articles |
1973 | 7 articles |
1966 | 5 articles |
Sort order: 9431 results found; search took 15 ms
991.
Diffusion Tensor Imaging (DTI) has made it feasible to visualize the fibrous structure of the brain's white matter. In recent decades, several fiber-tracking methods have been developed to reconstruct fiber tracts from DTI data. Usually these tracts are shown individually, selected by criteria such as a region of interest. However, when the white matter is visualized as a whole, directly rendering the individual fiber tracts produces clutter. Users are often actually interested in fiber bundles: anatomically meaningful entities that abstract from the fibers they contain. Several clustering techniques have been developed to group fiber tracts into bundles. However, even when clustering succeeds, the complex nature of white matter still makes it difficult to investigate. In this paper, we propose the use of illustration techniques to ease the exploration of white-matter clusters. We create a technique to visualize an individual cluster as a whole: the number of fibers rendered for the cluster is reduced to just a few hint lines, and silhouettes and contours are used to sharpen the definition of the cluster borders. Multiple clusters can be visualized by combining the single-cluster visualizations, and focus+context concepts extend the multiple-cluster renderings. Exploded views ease the exploration of the focus cluster while keeping the context clusters in abstract form. Real-time performance is achieved by a GPU implementation of the presented techniques.
992.
Anna Koufakou, Knowledge and Information Systems, 2014, 41(1): 77-99
A hyperclique (Xiong et al. in Proceedings of the IEEE International Conference on Data Mining, pp 387-394, 2003) is an itemset whose items are strongly correlated with each other, based on a user-specified threshold. Hypercliques (HCs) have been used successfully in a number of applications, for example clustering (Xiong et al. in Proceedings of the 4th SIAM International Conference on Data Mining, pp 279-290, 2004) and noise removal (Xiong et al. in IEEE Trans Knowl Data Eng 18(3):304-319, 2006). Even though the HC collection has been shown to respond well to datasets with skewed support distributions and low support thresholds, it may still grow very large for dense datasets and low h-confidence thresholds. In this paper, we propose a new pruning method that combines HCs with non-derivable itemsets (NDIs) (Calders and Goethals in Proceedings of the PKDD International Conference on Principles of Data Mining and Knowledge Discovery, pp 74-85, 2002) in order to substantially reduce the number of generated HCs. Specifically, we propose a new collection of HCs, called non-derivable hypercliques (NDHCs). The NDHC collection is a lossless representation of HCs: given the itemsets in NDHCs, we can generate the complete HC collection and their supports without additional scans of the dataset. We present an efficient algorithm to mine all NDHC sets, NDHCMiner, and an algorithm to derive all HC sets and their supports from NDHCs, NDHCDeriveAll. We experimentally compare our collection, NDHC, with HC with respect to runtime performance as well as the total number of generated sets, using real and artificial data. We also compare against another condensed representation of HCs, maximal hyperclique patterns (MHPs). Our experiments show that the NDHC collection offers substantial advantages over HCs, and even over MHPs, especially for dense datasets and low h-confidence values.
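The abstract's central measure, h-confidence, is the support of an itemset divided by the largest support of any single item in it. A minimal sketch of naive hyperclique enumeration over a toy transaction database (the data and threshold below are illustrative assumptions, not from the paper's experiments):

```python
from itertools import combinations

# Toy transaction database (illustrative only).
transactions = [
    {"a", "b", "c"},
    {"a", "b"},
    {"a", "c"},
    {"b", "c"},
    {"a", "b", "c"},
]

def support_count(itemset):
    """Number of transactions containing every item of `itemset`."""
    return sum(itemset <= t for t in transactions)

def h_confidence(itemset):
    """h-confidence(X) = supp(X) / max over i in X of supp({i})."""
    return support_count(itemset) / max(support_count({i}) for i in itemset)

# An itemset is a hyperclique if its h-confidence meets the threshold.
hc_threshold = 0.6
items = sorted({i for t in transactions for i in t})
hypercliques = [set(x)
                for r in range(2, len(items) + 1)
                for x in combinations(items, r)
                if h_confidence(set(x)) >= hc_threshold]
```

The paper's contribution replaces this brute-force enumeration with a much smaller NDHC collection from which the full set can be losslessly derived; the sketch only makes the h-confidence condition concrete.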
993.
Anna Espunya, Language Resources and Evaluation, 2014, 48(1): 33-43
The learner translation corpus developed at the School of Translation and Interpreting of Pompeu Fabra University in Barcelona is a web-searchable resource created for pedagogical and research purposes. It comprises a multiple translation corpus (English-Catalan) featuring automatic linguistic annotation and manual error annotation, complemented with an interface for monolingual or bilingual querying of the data. The corpus can be used to identify common errors in the students' work and to analyse their patterns of language use. It provides easy access to error samples and to multiple versions of the same source text sequence to be used as learning materials in various courses in the translator-training university curriculum.
994.
Anna Monreale, Dino Pedreschi, Ruggero G. Pensa, Fabio Pinelli, Artificial Intelligence and Law, 2014, 22(2): 141-173
The increasing availability of personal data of a sequential nature, such as time-stamped transaction or location data, enables increasingly sophisticated sequential pattern mining techniques. However, privacy is at risk if it is possible to reconstruct the identity of individuals from sequential data. It is therefore important to develop privacy-preserving techniques that support the publication of truly anonymous data without significantly altering the analysis results. In this paper we propose to apply the privacy-by-design paradigm in a technological framework that counters the threats of unlawful privacy violations on sequence data without obstructing the knowledge-discovery opportunities of data mining technologies. First, we introduce a k-anonymity framework for sequence data by defining the sequence linking attack model and its associated countermeasure: a k-anonymity notion for sequence datasets that provides formal protection against the attack. Second, we instantiate this framework and provide a specific method for constructing the k-anonymous version of a sequence dataset that preserves the results of sequential pattern mining, together with several basic statistics and other analytical properties of the original data, including the clustering structure. A comprehensive experimental study on realistic datasets of process logs, web logs and GPS tracks shows empirically that, with our method, the protection of privacy is compatible with analytical utility.
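To make the k-anonymity condition on sequences concrete: a release is k-anonymous when every published sequence is shared by at least k individuals. The sketch below uses the crudest possible countermeasure, full suppression of rare sequences; the paper's actual method is more refined (it generalizes sequences so that pattern-mining results survive), and the data here is a hypothetical toy example:

```python
from collections import Counter

# Hypothetical location sequences, one per individual.
sequences = [
    ("A", "B", "C"),
    ("A", "B", "C"),
    ("A", "B"),
    ("A", "B"),
    ("A", "B"),
    ("C", "D"),
]

def k_anonymize_by_suppression(seqs, k):
    """Keep only sequences shared by at least k individuals.

    This is a baseline, not the paper's algorithm: suppression is the
    simplest defence against the sequence linking attack, at the cost
    of discarding all information about the suppressed individuals.
    """
    counts = Counter(seqs)
    return [s for s in seqs if counts[s] >= k]

released = k_anonymize_by_suppression(sequences, k=2)
# ("C", "D") appears only once, so it is suppressed from the release.
```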
995.
María J. Carreira, Majid Mirmehdi, Barry T. Thomas, Marta Penas, Image and Vision Computing, 2002, 20(13-14)
Directional features extracted from Gabor wavelet responses are used to train a structure of self-organising maps, classifying each pixel in the image within a neuron map. The resulting directional primitives are grouped into perceptual primitives by an extended 4D Hough transform that clusters pixels with similar directional features; these perceptual primitives can then be used to detect salient structures. The proposed method has fixed parameters that do not need to be tuned for images of different kinds or quality. We present results on noisy FLIR images and show that this approach can find line primitives for complex structures, such as bridges, as well as for simple structures, such as runways. We compare the quality of our results with those obtained through a parameter-dependent traditional Canny edge detector followed by a Hough line-finding process.
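The first step the abstract describes, per-pixel directional features from a bank of oriented Gabor filters, can be sketched as follows (filter sizes, wavelengths and the test image are illustrative assumptions; the paper feeds such responses into self-organising maps, which are omitted here):

```python
import numpy as np

def gabor_kernel(theta, size=9, sigma=2.0, lam=4.0):
    """Real part of a Gabor filter oriented at angle `theta` (radians),
    with an isotropic Gaussian envelope and wavelength `lam`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def filter2d(img, ker):
    """Naive 'same'-size cross-correlation (slow; for illustration only)."""
    kh, kw = ker.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)))
    out = np.empty(img.shape)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = (padded[i:i + kh, j:j + kw] * ker).sum()
    return out

def dominant_orientation(img, n_orient=4):
    """Index of the strongest-responding Gabor orientation at each pixel."""
    thetas = [k * np.pi / n_orient for k in range(n_orient)]
    resp = np.stack([np.abs(filter2d(img, gabor_kernel(t))) for t in thetas])
    return resp.argmax(axis=0)

# Vertical stripes (intensity varying along x, period 4 pixels) should
# select the theta = 0 filter, whose carrier oscillates along x.
xs = np.arange(32)
stripes = np.tile(np.cos(2 * np.pi * xs / 4.0), (32, 1))
orient = dominant_orientation(stripes)
```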
996.
Dolors Costal, Cristina Gómez, Anna Queralt, Ruth Raventós, Ernest Teniente, Software and Systems Modeling, 2008, 7(4): 469-486
An important aspect in the specification of conceptual schemas is the definition of general constraints that cannot be expressed by the predefined constructs provided by conceptual modeling languages. This is generally achieved by using general-purpose languages like OCL. In this paper we propose a new approach that facilitates the definition of such general constraints in UML. More precisely, we define a profile that extends the set of predefined UML constraints by adding certain types of constraints that are commonly used in conceptual schemas. We also show how our proposal facilitates reasoning about the constraints and their automatic code generation, study the application of our ideas to the specification of two real-life applications, and present a prototype tool implementation.
997.
998.
999.
The problem of query optimization in object-oriented databases is addressed. We follow the Stack-Based Approach to query languages, which employs the naming-scoping-binding paradigm of programming languages rather than traditional database concepts such as relational/object algebras or calculi. The classical environment stack is the semantic basis for definitions of object query operators, such as selection, projection/navigation, dependent join, and quantifiers. We describe a general object data model and define a formalized OQL-like query language, SBQL. Optimization by rewriting concerns queries containing so-called independent subqueries: it consists of detecting them and then factoring them out of the loops implied by query operators. The idea is based on formal static analysis of the scoping rules and of the binding of names occurring in a query. It is more general than the classical technique of pushing selections/projections before joins.
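The rewrite the abstract describes, hoisting a subquery that binds no name from the enclosing loop, has a direct analogue in ordinary code. A minimal sketch (the employee data and predicate are hypothetical; SBQL itself would express this over an environment stack rather than Python lists):

```python
employees = [
    {"name": "ann", "sal": 900},
    {"name": "bob", "sal": 1100},
    {"name": "eve", "sal": 1600},
]

def well_paid_naive(emps):
    # The average-salary subquery uses no name bound by the outer loop,
    # i.e. it is "independent" -- yet here it is recomputed per employee.
    return [e for e in emps
            if e["sal"] > sum(x["sal"] for x in emps) / len(emps)]

def well_paid_rewritten(emps):
    # Static analysis of name binding shows the subquery can be factored
    # out of the loop and evaluated exactly once.
    avg = sum(x["sal"] for x in emps) / len(emps)
    return [e for e in emps if e["sal"] > avg]
```

Both functions return the same result; the rewritten form evaluates the independent subquery once instead of once per tuple, which is the performance win the paper formalizes.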
1000.
José Luis Crespo, Marta Zorrilla, Pilar Bernardos, Eduardo Mora, The Visual Computer, 2009, 25(4): 309-323
The objective of this paper is to present an overall approach to forecasting the future position of the moving objects in an image sequence after processing the preceding images. The proposed method uses classical techniques such as optical flow to extract the objects' trajectories and velocities, and autoregressive algorithms to build the predictive model. Our method can be used in a variety of applications in which videos are captured with stationary cameras and the moving objects are not deformed and change position over time. One such application is traffic control, which is used in this paper as a case study under different meteorological conditions.
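The autoregressive half of entry 1000's pipeline can be sketched on a single coordinate of a tracked object (the trajectory below is a hypothetical constant-velocity track; in practice each coordinate of each object extracted by optical flow would get its own model):

```python
import numpy as np

def fit_ar(series, p=2):
    """Least-squares AR(p) coefficients: s[t] ~ sum_k a_k * s[t-1-k]."""
    s = np.asarray(series, dtype=float)
    # Column k holds the lag-(k+1) values aligned with the targets s[p:].
    X = np.column_stack([s[p - k - 1: len(s) - k - 1] for k in range(p)])
    y = s[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next(series, coef):
    """One-step-ahead prediction from the most recent p samples."""
    p = len(coef)
    recent = np.asarray(series, dtype=float)[-1:-p - 1:-1]  # s[t-1], s[t-2], ...
    return float(np.dot(coef, recent))

# x-coordinate of one object over 6 frames (hypothetical data).
xs = [3.0, 5.0, 7.0, 9.0, 11.0, 13.0]
coef = fit_ar(xs)          # a constant-velocity track fits a1=2, a2=-1 exactly
x_next = predict_next(xs, coef)
```

An AR(2) model with coefficients (2, -1) reproduces any linear motion exactly, so the predicted next position continues the track; noisy real trajectories would only be approximated in the least-squares sense.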