991.
An order-clique-based approach for mining maximal co-locations   Cited: 2 times in total (0 self-citations, 2 citations by others)
Most algorithms for mining spatial co-locations adopt an Apriori-like approach, generating size-k prevalent co-locations from the size-(k − 1) prevalent co-locations. However, generating and storing these co-locations and their table instances is costly. A novel order-clique-based approach for mining maximal co-locations is proposed in this paper. The efficiency of the approach rests on two techniques: (1) the spatial neighbor relationships and the size-2 prevalent co-locations are compressed into extended prefix-tree structures, which allows the order-clique-based approach to mine candidate maximal co-locations and co-location instances; and (2) the co-location instances do not need to be stored once certain characteristics of the corresponding co-location have been computed, which significantly reduces the execution time and space required for mining maximal co-locations. A performance study shows that the new method is efficient for mining both long and short co-location patterns, and is faster than other methods (in particular the join-based and join-less methods).
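To make the prevalence computation concrete, here is a minimal, hypothetical Python sketch (not the authors' implementation) of the size-2 step that the paper's prefix-tree structures compress: spatial neighbor relationships are materialized as co-location table instances, and a participation-index threshold selects the prevalent size-2 co-locations. The toy data, distance threshold, and prevalence threshold are all assumptions for illustration.

```python
# Illustrative sketch (not the authors' implementation): computing size-2
# prevalent co-locations from spatial neighbor relationships, the input that
# the order-clique-based approach compresses into prefix-tree structures.
from collections import defaultdict
from itertools import combinations

# Each instance: (feature, instance_id, x, y) -- toy data, purely hypothetical.
instances = [
    ("A", 1, 0.0, 0.0), ("A", 2, 5.0, 5.0),
    ("B", 1, 0.5, 0.2), ("B", 2, 5.2, 5.1),
    ("C", 1, 0.3, 0.4),
]
NEIGHBOR_DIST = 1.0   # spatial neighbor threshold (assumed)
MIN_PREV = 0.5        # prevalence (participation index) threshold (assumed)

def neighbors(a, b):
    return ((a[2] - b[2]) ** 2 + (a[3] - b[3]) ** 2) ** 0.5 <= NEIGHBOR_DIST

# Size-2 co-location (table) instances: neighboring pairs of distinct features.
pair_instances = defaultdict(list)
for a, b in combinations(instances, 2):
    if a[0] != b[0] and neighbors(a, b):
        f1, f2 = sorted((a, b), key=lambda t: t[0])
        pair_instances[(f1[0], f2[0])].append((f1[1], f2[1]))

# Participation index = minimum, over the two features, of the fraction of
# that feature's instances taking part in the co-location.
counts = defaultdict(int)
for f, _, _, _ in instances:
    counts[f] += 1

prevalent2 = {}
for (f1, f2), rows in pair_instances.items():
    pr1 = len({r[0] for r in rows}) / counts[f1]
    pr2 = len({r[1] for r in rows}) / counts[f2]
    pi = min(pr1, pr2)
    if pi >= MIN_PREV:
        prevalent2[(f1, f2)] = pi

print(prevalent2)   # e.g. {('A', 'B'): 1.0, ('A', 'C'): 0.5, ('B', 'C'): 0.5}
```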
992.
Decision tree-based classification is a popular approach for pattern recognition and data mining. Most decision tree induction methods assume that the training data are present at one central location. Given the growth of distributed databases at geographically dispersed locations, methods for decision tree induction in distributed settings are gaining importance. This paper describes one such method, which generates compact trees using multifeature splits in place of the single-feature-split trees produced by most existing methods for distributed data. Our method is based on Fisher's linear discriminant function and is capable of dealing with multiple classes in the data. For homogeneously distributed data, the decision trees produced by our method are identical to those generated using Fisher's linear discriminant function with centrally stored data. For heterogeneously distributed data, a certain approximation is involved, leading to a small change in performance with respect to the tree generated from centrally stored data. Experimental results for several well-known datasets are presented and compared with decision trees generated using Fisher's linear discriminant function with centrally stored data.
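As an illustration of the multifeature-split idea, the following sketch computes a single Fisher-discriminant split for two classes on centrally stored data; the paper's method additionally handles multiple classes and distributed (homogeneous or heterogeneous) data, which this toy code does not attempt. The data and the regularization constant are illustrative assumptions.

```python
# Minimal sketch of a single multifeature split via Fisher's linear
# discriminant (two-class case only).
import numpy as np

def fisher_split(X, y):
    """Return (w, threshold): project samples onto w and split at threshold."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter matrix (sum of per-class scatter).
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) + np.cov(X1, rowvar=False) * (len(X1) - 1)
    # Small ridge term keeps the solve stable (assumed, not from the paper).
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X.shape[1]), m1 - m0)
    # Split halfway between the projected class means.
    threshold = 0.5 * (X0 @ w).mean() + 0.5 * (X1 @ w).mean()
    return w, threshold

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)), rng.normal(2, 1, (50, 3))])
y = np.array([0] * 50 + [1] * 50)
w, t = fisher_split(X, y)
left = X @ w <= t          # samples routed to the left child of this node
print(w, t, left.sum())
```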
993.
Mining association rules in relational databases is a significant computational task with many applications. A fundamental ingredient of this task is the discovery of sets of attributes (itemsets) whose frequency in the data exceeds some threshold value. In this paper we describe two algorithms for completing the calculation of frequent sets using a tree structure for storing partial supports, called the interim-support (IS) tree. The first of our algorithms, T-Tree-First (TTF), uses a novel tree-pruning technique based on the notion of (fixed-prefix) potential inclusion, which is specially designed for trees implemented using only two pointers per node; this allows the IS tree to be implemented in a space-efficient manner. The second algorithm, P-Tree-First (PTF), explores the idea of storing the frequent itemsets in a second tree structure, called the total support tree (T-tree); the main innovation lies in the use of multiple pointers per node, which provides rapid access to the nodes of the T-tree and makes it possible to design a new, usually faster, method for updating them. Experimental comparison shows that these techniques yield considerable speedups for both algorithms compared with earlier approaches that also use IS trees (Principles of Data Mining and Knowledge Discovery, Proceedings of the 5th European Conference, PKDD 2001, Freiburg, September 2001, Lecture Notes in Artificial Intelligence, vol. 2168, Springer: Berlin, Heidelberg, 54–66; Journal of Knowledge-Based Systems 2000; 13:141–149). Further comparison between the two new algorithms shows that PTF is generally faster on instances with a large number of frequent itemsets, provided they are relatively short, whereas TTF is more appropriate whenever there are few or quite long frequent itemsets; in addition, TTF behaves well on instances in which the densities of the items in the database have high variance. Copyright © 2008 John Wiley & Sons, Ltd.
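The following hedged sketch illustrates the partial-support idea behind IS trees: each distinct record is stored once with a partial count, and total supports of candidate itemsets are completed from those counts. It uses plain Python dictionaries rather than the paper's two-pointer tree nodes, and the transactions and support threshold are made up for illustration.

```python
# Sketch of the interim-support idea (not the paper's tree implementation):
# partial supports per distinct record, completed into total supports.
from collections import Counter
from itertools import combinations

transactions = [
    frozenset("abc"), frozenset("abc"), frozenset("ab"),
    frozenset("bcd"), frozenset("bd"),
]
MIN_SUP = 2  # assumed support threshold

# Partial supports: one counter entry per distinct record.
partial = Counter(transactions)

def total_support(itemset):
    """Complete the support of an itemset from the partial supports."""
    return sum(c for rec, c in partial.items() if itemset <= rec)

# Enumerate frequent itemsets Apriori-style over the compressed records.
items = sorted({i for t in transactions for i in t})
frequent = {}
for k in range(1, len(items) + 1):
    level = {frozenset(c) for c in combinations(items, k)
             if all(frozenset(s) in frequent for s in combinations(c, k - 1)) or k == 1}
    found = False
    for cand in level:
        sup = total_support(cand)
        if sup >= MIN_SUP:
            frequent[cand] = sup
            found = True
    if not found:
        break

print({tuple(sorted(s)): v for s, v in frequent.items()})
```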
994.
Massimo Ficco, Stefano Russo. Software, 2009, 39(13): 1095–1125
Location-aware computing is a form of context-aware mobile computing that refers to the ability to provide users with services that depend on their position. Locating the user terminal, often called positioning, is essential in this form of computing. Several technologies exist for this purpose, ranging from personal-area networking to indoor, outdoor, and geographic-area systems. Developers of location-aware software applications face a number of design choices that typically depend on the chosen technology. This work addresses the problem of easing the development of pull location-aware applications by allowing uniform access to multiple heterogeneous positioning systems. To this end, the paper proposes an approach to structuring location-aware mobile computing systems in a way that is independent of positioning technologies. The approach consists of structuring the system into a layered architecture that provides application developers with a standard Java Application Programming Interface (the JSR-179 API) and encapsulates location data management and technology-specific positioning subsystems in lower layers with clear interfaces. To demonstrate the proposed approach we present the development of HyLocSys, an open hybrid software architecture designed to support indoor/outdoor applications, which allows the uniform (combined or separate) use of several positioning technologies. HyLocSys uses a hybrid data model that allows the integration of different representations of location information (using symbolic and geometric coordinates). Moreover, it supports both handset- and infrastructure-based positioning approaches while respecting the privacy of the user. The paper presents a prototype implementation of HyLocSys for heterogeneous scenarios; it has been implemented and tested on several platforms and mobile devices. Copyright © 2009 John Wiley & Sons, Ltd.
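The sketch below is only an analogy, written in Python rather than the JSR-179 Java API that HyLocSys exposes: it shows the layering idea of a uniform location interface with pluggable, technology-specific positioning providers returning either geometric or symbolic locations. All class names and return values are hypothetical.

```python
# Analogy-only sketch of technology-independent positioning: applications
# talk to one uniform interface; technology-specific subsystems plug in below.
from abc import ABC, abstractmethod

class PositioningProvider(ABC):
    @abstractmethod
    def get_location(self):
        """Return (latitude, longitude) or a symbolic place name, or None."""

class FakeGPSProvider(PositioningProvider):          # stand-in for an outdoor system
    def get_location(self):
        return (40.8518, 14.2681)                    # geometric coordinates

class FakeWiFiProvider(PositioningProvider):         # stand-in for an indoor system
    def get_location(self):
        return "Building A / Room 12"                # symbolic location

class LocationService:
    """Uniform access point: returns the first provider answer available."""
    def __init__(self, providers):
        self.providers = providers

    def locate(self):
        for p in self.providers:
            loc = p.get_location()
            if loc is not None:
                return loc
        return None

service = LocationService([FakeWiFiProvider(), FakeGPSProvider()])
print(service.locate())      # indoor fix preferred when available
```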
995.
With the advent of multicores, multithreaded programming has acquired increased importance. In order to obtain good performance, the synchronization constructs in multithreaded programs need to be carefully implemented. These implementations can be broadly classified into two categories: busy-wait and schedule-based. For shared-memory architectures, busy-wait synchronizations are preferred over schedule-based synchronizations because they can achieve lower wakeup latency, especially when the expected wait time is much shorter than the scheduling time. While busy-wait synchronizations can improve the performance of multithreaded programs running on multicore machines, they create a challenge in program debugging, especially in detecting and identifying the causes of data races. Although significant research has been done on data race detection, prior works rely on one important assumption: that the debugger is aware of all the synchronization operations performed during a program run. This assumption is a significant limitation, as multithreaded programs, including the popular SPLASH-2 benchmarks, have busy-wait synchronizations such as barriers and flag synchronizations implemented in user code. We show that the lack of knowledge of these synchronization operations leads to unnecessary reporting of numerous races. To tackle this problem, we propose a dynamic technique for identifying user-defined synchronizations performed during a program run; both software and hardware implementations are presented. Furthermore, our technique can easily be exploited by a record/replay system to significantly speed up replay, and it can be leveraged by a transactional memory system to effectively resolve livelock situations. Our evaluation confirms that our synchronization detector is highly accurate, with no false negatives and very few false positives. We further observe that knowledge of synchronization operations results in a 23% reduction in replay time. Finally, we show that with synchronization knowledge, livelocks can be efficiently avoided during runtime monitoring of programs. Copyright © 2009 John Wiley & Sons, Ltd.
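The snippet below illustrates the kind of user-defined flag (busy-wait) synchronization the paper's detector targets; a race detector unaware of the spin loop would report the accesses to the shared variables as data races. It is a Python stand-in for the C/C++-style shared-memory code (e.g., SPLASH-2) that the paper actually addresses.

```python
# Illustration of a flag synchronization implemented in user code:
# the producer publishes data and sets a flag; the consumer busy-waits on the
# flag before reading the data. The spin loop is the synchronization that a
# debugger must recognize to avoid a spurious race report on `data`.
import threading

data = 0
ready = False

def producer():
    global data, ready
    data = 42          # ordinary shared write
    ready = True       # "release": publish the data

def consumer(result):
    # "acquire": busy-wait until the flag is set, then consume the data.
    while not ready:
        pass
    result.append(data)

result = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(result,))
t2.start(); t1.start()
t1.join(); t2.join()
print(result)          # [42]: the spin loop orders the write before the read
```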
996.
Analysis of the low-level usage data collected in empirical studies of user interaction is well known to be a demanding task. Existing techniques for data collection and analysis are either application-specific or data-driven. This paper presents a workspace for the cleaning, transformation, and analysis of low-level usage data that we have developed, and reports our experience with it. Through its five-level architecture, the workspace distinguishes between more general data that can typically be used in initial analysis and data that answer a specific research question. The workspace was used in four studies, in which a total of 6.5M user actions were collected from 238 participants. The collected data proved useful for: (i) validating solution times, (ii) validating process conformance, (iii) exploratory studies on program comprehension for understanding the use of classes and documents, and (iv) testing hypotheses on keystroke latencies. We found workspace creation to be time-consuming; determining the context of actions and dealing with data deficiencies were particularly demanding. However, once these processes were understood, it was easy to reuse the workspace for different experiments and to extend it to answer new research questions. Based on our experience, we give a set of guidelines that may help in setting up studies and in collecting and preparing data. We recommend that designers of data collection instruments add context to each action. Furthermore, we recommend rapid iterations starting early in the process of data preparation and analysis, covering both general and specific data. Copyright © 2009 John Wiley & Sons, Ltd.
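As a toy illustration of the kind of cleaning and transformation such a workspace performs, the sketch below turns timestamped low-level actions (each carrying a context field, as the paper recommends) into inter-keystroke latencies. The field names and data are hypothetical, not the paper's schema.

```python
# Toy cleaning/transformation step: raw low-level actions -> keystroke latencies.
from datetime import datetime

raw_actions = [
    {"ts": "2009-03-01T10:00:00.000", "type": "keystroke", "context": "editor"},
    {"ts": "2009-03-01T10:00:00.180", "type": "keystroke", "context": "editor"},
    {"ts": "2009-03-01T10:00:00.420", "type": "mouse",     "context": "editor"},
    {"ts": "2009-03-01T10:00:00.650", "type": "keystroke", "context": "editor"},
]

def parse(a):
    # Parse timestamps while keeping the action's context attached.
    return {**a, "ts": datetime.fromisoformat(a["ts"])}

# Cleaning/selection: keep keystrokes only, in timestamp order.
keys = sorted((parse(a) for a in raw_actions if a["type"] == "keystroke"),
              key=lambda a: a["ts"])

# Transformation: inter-keystroke latencies in milliseconds.
latencies = [(b["ts"] - a["ts"]).total_seconds() * 1000
             for a, b in zip(keys, keys[1:])]
print(latencies)   # [180.0, 470.0]
```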
997.
Development of a pit filling algorithm for LiDAR canopy height models   Cited: 1 time in total (0 self-citations, 1 citation by others)
LiDAR canopy height models (CHMs) can exhibit unnatural-looking holes or pits, i.e., pixels with a much lower digital number than their immediate neighbors. These artifacts may be caused by a combination of factors, from data acquisition to post-processing, that not only give the CHM a noisy appearance but may also limit semi-automated tree-crown delineation and lead to errors in biomass estimates. We present a highly effective semi-automated pit-filling algorithm that interactively detects data pits based on a simple user-defined threshold and then fills them with a value derived from their neighborhood. We briefly describe this algorithm and its graphical user interface, and show its results on a LiDAR CHM populated with data pits. The method can be applied rapidly to any CHM with minimal user interaction. Visualization confirms that our method removes data pits effectively and quickly.
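A minimal sketch of a threshold-based pit fill is shown below; it is not the paper's exact algorithm or its GUI, but it captures the stated idea: flag a pixel as a pit when it lies more than a user-defined threshold below its neighborhood, and replace it with a value derived from that neighborhood (here, the neighbors' median, an assumed choice).

```python
# Hedged sketch of a threshold-based pit fill on a CHM raster.
import numpy as np

def fill_pits(chm, threshold=2.0):
    filled = chm.astype(float).copy()
    rows, cols = chm.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            window = chm[i - 1:i + 2, j - 1:j + 2].astype(float)
            neighbors = np.delete(window.ravel(), 4)     # drop the center pixel
            med = np.median(neighbors)
            if med - chm[i, j] > threshold:              # much lower than neighbors
                filled[i, j] = med                       # fill from the neighborhood
    return filled

chm = np.array([[20, 21, 20, 19],
                [21,  3, 22, 20],      # the "3" is a data pit
                [20, 22, 21, 20],
                [19, 20, 20, 19]])
print(fill_pits(chm, threshold=2.0))
```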
998.
The evolution of computer science and technology has brought new opportunities for multidisciplinary designers and engineers to collaborate with each other in a concurrent and coordinated manner. The development of computational agents with unified data structures and software protocols contributes to the establishment of a new way of working in collaborative design, which is increasingly becoming an international practice. In this paper, based on an analysis of the dynamic nature of the collaborative design process, a new framework for collaborative design is described. This framework adopts an agent-based approach and places designers, managers, systems, and the supporting agents within a unified knowledge representation scheme for product design. In order to model the constantly evolving design process and the rationales resulting from design collaboration, a Collaborative Product Data Model (CPDM) and a constraint-based Collaborative Design Process Model (CDPM) are proposed to facilitate the management and coordination of the collaborative design process as well as design knowledge management. A prototype system of the proposed framework is implemented, and its feasibility is evaluated using a real design scenario whose objective is to design a dining table and chair set.
999.
Copyright protection and information security have become serious problems due to the ever-growing amount of digital data on the Internet. Reversible data hiding is a special type of data hiding technique that guarantees that not only the secret data but also the cover media can be reconstructed without any distortion. Traditional schemes are based on the spatial, discrete cosine transform (DCT), and discrete wavelet transform (DWT) domains. Recently, some vector quantization (VQ)-based reversible data hiding schemes have been proposed. This paper proposes an improved reversible data hiding scheme based on VQ-index residual value coding. Experimental results show that our scheme outperforms two recently proposed schemes, namely side-match vector quantization (SMVQ)-based data hiding and modified fast correlation vector quantization (MFCVQ)-based data hiding.
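Since the abstract does not spell out the coding, the sketch below shows one generic way reversible embedding in VQ-index residuals can work, using histogram shifting of the prediction residual; it is an assumption-laden illustration, not the proposed scheme, and it omits the overflow handling and location map that practical schemes require.

```python
# Hedged sketch of reversible embedding in VQ-index residuals (histogram
# shifting on the prediction residual r = index - previous_index).
# NOT the paper's coding; codebook-range overflow handling is omitted.

def embed(indices, bits):
    """Embed bits into the residuals; return the stego indices and bit count."""
    stego, k, prev = [], 0, 0
    for idx in indices:
        r = idx - prev
        if r >= 1:                       # shift positive residuals up by 1
            r2 = r + 1
        elif r == 0 and k < len(bits):   # peak bin carries one payload bit
            r2, k = bits[k], k + 1
        else:                            # negative residuals (and leftovers) untouched
            r2 = r
        stego.append(prev + r2)
        prev = idx                       # predict from the original index
    return stego, k

def extract(stego, n_bits):
    """Recover both the payload and the original index sequence exactly."""
    bits, indices, prev = [], [], 0
    for s in stego:
        r2 = s - prev
        if r2 >= 2:
            r = r2 - 1                   # undo the shift
        elif r2 in (0, 1):
            if len(bits) < n_bits:       # payload length assumed known
                bits.append(r2)
            r = 0
        else:
            r = r2
        idx = prev + r
        indices.append(idx)
        prev = idx
    return bits, indices

original = [3, 3, 5, 4, 4, 7, 7, 7]      # toy VQ index table (hypothetical)
payload = [1, 0, 1]
stego, n = embed(original, payload)
recovered_bits, recovered = extract(stego, n)
assert recovered == original and recovered_bits == payload[:n]
```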
1000.
The standard C/C++ implementation of a spatial partitioning data structure, such as an octree or quadtree, is often inefficient in terms of storage, particularly when the memory overhead of maintaining parent-to-child pointers is significant with respect to the amount of actual data in each tree node. In this work, we present a novel data structure that implements uniform spatial partitioning without storing explicit parent-to-child pointer links. Our linkless tree encodes the storage locations of subdivided nodes using perfect hashing while retaining important properties of uniform spatial partitioning trees, such as coarse-to-fine hierarchical representation, efficient storage usage, and efficient random accessibility. We demonstrate the performance of our linkless trees with image compression and path planning examples.
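A hedged sketch of the linkless idea follows: nodes are addressed by computed (level, i, j) keys stored in a hash table, so no parent-to-child pointers are kept and any node can be fetched by key. A Python dict stands in for the perfect hash used in the paper.

```python
# Sketch of a pointer-free ("linkless") quadtree: node locations are computed
# from keys rather than followed through stored links.
class LinklessQuadtree:
    def __init__(self):
        self.nodes = {}                      # (level, i, j) -> payload

    def insert(self, level, i, j, value):
        self.nodes[(level, i, j)] = value

    def child_key(self, level, i, j, quadrant):
        # Child coordinates are computed, not stored as pointers.
        di, dj = divmod(quadrant, 2)         # quadrant in 0..3
        return (level + 1, 2 * i + di, 2 * j + dj)

    def parent_key(self, level, i, j):
        return (level - 1, i // 2, j // 2)

    def get(self, key):
        return self.nodes.get(key)           # O(1) random access by key

# Coarse-to-fine refinement of one cell without storing any links.
qt = LinklessQuadtree()
qt.insert(0, 0, 0, "root")
for q in range(4):
    qt.insert(*qt.child_key(0, 0, 0, q), f"child {q}")
print(qt.get((1, 1, 0)))                     # "child 2"
print(qt.parent_key(1, 1, 0))                # (0, 0, 0)
```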