941.
Han Y, Liu G. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 2012, 42(3): 827-837
Localized multiple kernel learning (LMKL) is an attractive strategy for combining multiple heterogeneous features in terms of their discriminative power for each individual sample. However, a model that fits a specific sample too closely generalizes poorly to unseen data, while a more general form is often insufficient for characterizing diverse localities. Hence, designing an LMKL algorithm must equally address both learning sample-specific local models for each training datum and extending the learned models to unseen test data. In this paper, as an integrative solution, we propose a probability confidence kernel (PCK), which measures per-sample similarity with respect to a probabilistic-prediction-based class attribute: the class-attribute similarity complements the spatial-similarity-based base kernels for more reasonable locality characterization, and the predefined form of the involved class probability density function facilitates extension to the whole input space and ensures its statistical meaning. Incorporating PCK into a support-vector-machine-based LMKL framework, we propose a new PCK-LMKL with an arbitrary ℓp-norm constraint implied in the definition of PCKs, where both the parameters in PCK and the final classifier can be efficiently optimized jointly. Evaluations of PCK-LMKL on both benchmark machine learning data sets (ten University of California, Irvine (UCI) data sets) and challenging computer vision data sets (the 15-scene and Caltech-101 data sets) show that it achieves state-of-the-art performance.
942.
Luo SJ, Liu CL, Chen BY, Ma KL. IEEE Transactions on Visualization and Computer Graphics, 2012, 18(5): 810-821
Graph visualization has been widely used to understand and present both global structural and local adjacency information in relational data sets (e.g., transportation networks, citation networks, or social networks). Graphs with dense edges, however, are difficult to visualize because fast layout and good clarity are not always easily achieved. When the number of edges is large, edge bundling can be used to improve clarity, but in many cases the edges can still be too cluttered to permit correct interpretation of the relations between nodes. In this paper, we present an ambiguity-free edge-bundling method designed especially to improve the local detailed view of a complex graph. Our method makes more efficient use of display space and supports detail-on-demand viewing through an interactive interface. We demonstrate the effectiveness of our method with public coauthorship network data.
943.
Distributed virtual environments (DVEs) have become very popular in recent years, owing to the rapid growth of applications such as massively multiplayer online games (MMOGs). As the number of concurrent users increases, scalability becomes one of the major challenges in designing an interactive DVE system. One solution to this scalability problem is to adopt a multi-server architecture. While some methods focus on the quality of partitioning the load among the servers, others focus on the efficiency of the partitioning process itself. However, all these methods neglect the effect of network delay among the servers on the accuracy of the load balancing solutions. As we show in this paper, changes in server load caused by network delay affect the performance of the load balancing algorithm. In this work, we conduct a formal analysis of this problem and discuss two efficient delay adjustment schemes to address it. Our experimental results show that the proposed schemes can significantly improve the performance of the load balancing algorithm with negligible computation overhead.
944.
In this paper, we present a robust and accurate algorithm for interactive image segmentation. The level set method is clearly advantageous for image objects with a complex topology and fragmented appearance. Our method integrates discriminative classification models and distance transforms with the level set method to avoid local minima and better snap to true object boundaries. The level set function approximates a transformed version of pixelwise posterior probabilities of being part of a target object. The evolution of its zero level set is driven by three force terms, region force, edge field force, and curvature force. These forces are based on a probabilistic classifier and an unsigned distance transform of salient edges. We further propose a technique that improves the performance of both the probabilistic classifier and the level set method over multiple passes. It makes the final object segmentation less sensitive to user interactions. Experiments and comparisons demonstrate the effectiveness of our method.
945.
Register allocation for write activity minimization on non-volatile main memory for embedded systems
Yazhi Huang, Tiantian Liu, Chun Jason Xue. Journal of Systems Architecture, 2012, 58(1): 13-23
Non-volatile memories are good candidates for replacing DRAM as main memory in embedded systems, and they have many desirable characteristics. Nevertheless, the disadvantages of non-volatile memory co-exist with its advantages. First, the lifetime of some non-volatile memories is limited by the number of erase operations. Second, read and write operations have asymmetric speed and power consumption in non-volatile memory. This paper focuses on embedded systems that use non-volatile memory as main memory. We propose a register allocation technique with re-computation to reduce the number of store instructions; when non-volatile memory serves as main memory, fewer store instructions mean less write activity on it. To re-compute the spills effectively during register allocation, a novel potential-spill selection strategy is proposed. During this process, live range splitting is used to split certain long live ranges so that they are more likely to be assigned to registers. In addition, a technique for reducing re-computation overhead is proposed for systems with multiple functional units. With the proposed approach, the lifetime of non-volatile memory is extended accordingly. The experimental results demonstrate that the proposed technique reduces the number of store instructions on systems with non-volatile memory by 33% on average.
946.
Towards a theoretical framework of strategic decision, supporting capability and information sharing under the context of Internet of Things
An effective Internet of Things (IoT) strategy can help firms grasp the emerging opportunities of the IoT and thereby improve their competitive advantage. In this article, based on an organizational capability perspective, we provide a theoretical framework that classifies IoT strategies into four archetypes along two dimensions, managers' strategic intent and industrial driving force. We propose that market-based exploratory capabilities play a more important role for firms adopting a get-ahead strategy in the market, while market-based exploitative capabilities play a more important role for firms adopting a catch-up strategy in the market. Likewise, technology-based exploratory capabilities matter more for firms adopting a get-ahead strategy in technology, and technology-based exploitative capabilities matter more for firms adopting a catch-up strategy in technology. In particular, external industry information sharing contributes more efficiently to the enhancement of both market-based and technology-based exploratory capabilities, while internal industry information sharing contributes more efficiently to the enhancement of both market-based and technology-based exploitative capabilities.
947.
Given a directed graph, the problem of blackhole mining is to identify groups of nodes, called blackhole patterns, such that the average in-weight of a group is significantly larger than its average out-weight. The problem of finding volcano patterns is the dual of mining blackhole patterns, so we focus on discovering blackhole patterns. In this article, we develop a generalized blackhole mining framework. Specifically, we first design two pruning schemes that reduce the computational cost by reducing both the number of candidate patterns and the average computation cost per candidate pattern. The first pruning scheme exploits the concept of combination dominance to cut down the exponentially growing search space; based on this pruning approach, we develop the gBlackhole algorithm. The second pruning scheme is an approximate approach, named approxBlackhole, which strikes a balance between the efficiency and the completeness of blackhole mining. Experimental results on real-world data show that approxBlackhole can be several orders of magnitude faster than gBlackhole, and both have huge computational advantages over the brute-force approach. We also show that the blackhole mining algorithm can capture suspicious financial fraud patterns.
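The in-weight versus out-weight criterion above is easy to evaluate for a single candidate group. The sketch below illustrates just that criterion on a toy weighted digraph; the function name and data layout are our own assumptions, and this is not the paper's gBlackhole or approxBlackhole algorithm, whose contribution is pruning the exponential space of candidate groups.

```python
# Toy illustration of the blackhole criterion: a group scores high when far
# more edge weight flows into it from outside than flows out of it.
def blackhole_score(edges, group):
    """edges: dict mapping (src, dst) -> weight; group: iterable of nodes.

    Returns (total in-weight - total out-weight) across the group boundary,
    averaged over the group size.
    """
    group = set(group)
    in_w = sum(w for (s, d), w in edges.items() if s not in group and d in group)
    out_w = sum(w for (s, d), w in edges.items() if s in group and d not in group)
    return (in_w - out_w) / len(group)

# Node "x" absorbs weight 9.0 but emits only 1.0: a blackhole-like pattern.
edges = {("a", "x"): 5.0, ("b", "x"): 4.0, ("x", "c"): 1.0}
print(blackhole_score(edges, {"x"}))  # 8.0
```

A brute-force miner would rank all node subsets by this score, which is exactly the exponential blow-up the paper's pruning schemes are designed to avoid.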
948.
Cookies are the primary means for web applications to authenticate HTTP requests and to maintain client state. Many web applications (such as those for electronic commerce) demand a secure cookie scheme. Such a scheme needs to provide four services: authentication, confidentiality, integrity, and anti-replay. Several secure cookie schemes have been proposed in the literature; however, none of them is completely satisfactory. In this paper, we propose a secure cookie scheme that is effective, efficient, and easy to deploy. In terms of effectiveness, our scheme provides all four security services. In terms of efficiency, it involves neither database lookups nor public-key cryptography. In terms of deployability, it can be easily deployed on existing web services and requires no change to the Internet cookie specification. We implemented our secure cookie scheme in PHP and conducted experiments. The experimental results show that our scheme is very efficient on both the client side and the server side. A notable industry adoption is that our cookie scheme has been used by WordPress since version 2.4. WordPress is a widely used open-source content management system.
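A minimal sketch in the spirit of the stateless design described above: no database lookup and no public-key cryptography, just an HMAC over the cookie payload plus an expiry timestamp. The function names, key, and cookie layout are illustrative assumptions, not the paper's exact construction, which additionally encrypts the data field to provide confidentiality.

```python
import hashlib
import hmac
import time

SERVER_KEY = b"server-secret-key"  # assumed server-side secret

def issue_cookie(username: str, ttl: int = 3600) -> str:
    """Issue a cookie of the form `username|expiry|HMAC(key, username|expiry)`."""
    expiry = int(time.time()) + ttl
    payload = f"{username}|{expiry}"
    tag = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}|{tag}"

def verify_cookie(cookie: str) -> bool:
    payload, _, tag = cookie.rpartition("|")
    expected = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, tag):
        return False                      # tampered or forged: integrity/authentication
    expiry = int(payload.rsplit("|", 1)[1])
    return time.time() < expiry           # stale cookies rejected: anti-replay

cookie = issue_cookie("alice")
print(verify_cookie(cookie))        # True
print(verify_cookie(cookie + "0"))  # False: the MAC no longer matches
```

Because verification needs only the server key, any front-end server can validate the cookie without a shared session store, which is what makes such schemes cheap to deploy.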
949.
Yiqiao Cai, Jiahai Wang, Jian Yin. Soft Computing - A Fusion of Foundations, Methodologies and Applications, 2012, 16(2): 303-330
Differential evolution (DE) is a simple and powerful population-based search algorithm, successfully used in various scientific and engineering fields. However, DE is not free from the problems of stagnation and premature convergence. Hence, designing more effective search strategies to enhance the performance of DE is one of the most salient and active topics. This paper proposes a new method, called learning-enhanced DE (LeDE), that promotes individuals to exchange information systematically. Distinct from existing DE variants, LeDE adopts a novel learning strategy, namely a clustering-based learning strategy (CLS). CLS comprises two levels of learning, an intra-cluster learning strategy and an inter-cluster learning strategy, adopted for exchanging information within the same cluster and between different clusters, respectively. Experimental studies over 23 benchmark functions show that LeDE significantly outperforms conventional DE. Compared with other clustering-based DE algorithms, LeDE obtains better solutions. In addition, LeDE is shown to be significantly better than, or at least comparable to, several state-of-the-art DE variants as well as some other evolutionary algorithms.
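The two-level idea, learning within a cluster and between clusters, can be sketched in a few lines of DE. The code below is a deliberately simplified illustration on the sphere function, not the authors' LeDE algorithm: all names, parameter values, and the particular mutation rule are our own assumptions.

```python
import random

def sphere(x):
    """Toy objective: sum of squares, minimum 0 at the origin."""
    return sum(v * v for v in x)

def kmeans(pop, k, iters=10):
    """Plain k-means over the population (squared Euclidean distance)."""
    centers = random.sample(pop, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for x in pop:
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2 for a, b in zip(x, centers[j])))
            clusters[i].append(x)
        centers = [[sum(col) / len(c) for col in zip(*c)] if c else centers[i]
                   for i, c in enumerate(clusters)]
    return clusters

def lede_sketch(dim=5, pop_size=30, k=3, gens=100, F=0.5, CR=0.9):
    pop = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        clusters = [c for c in kmeans(pop, k) if c]
        bests = [min(c, key=sphere) for c in clusters]  # intra-cluster exemplars
        gbest = min(bests, key=sphere)                  # inter-cluster exemplar
        new_pop = []
        for c, cbest in zip(clusters, bests):
            for x in c:
                r1, r2 = random.sample(pop, 2)
                # learn from the cluster best, plus a pull toward the global best
                mut = [cb + F * (a - b) + F * (g - cb)
                       for cb, a, b, g in zip(cbest, r1, r2, gbest)]
                trial = [m if random.random() < CR else xi
                         for m, xi in zip(mut, x)]
                new_pop.append(trial if sphere(trial) <= sphere(x) else x)
        pop = new_pop
    return min(sphere(x) for x in pop)

random.seed(1)
print(lede_sketch())  # typically a value very close to 0 on this toy problem
```

The design choice mirrored here is the one the abstract describes: each individual first exploits its own cluster's best member, while the pull toward the global best carries information between clusters.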
950.
Fuzzy local maximal marginal embedding for feature extraction
Cairong Zhao, Zhihui Lai, Chuancai Liu, Xingjian Gu, Jianjun Qian. Soft Computing - A Fusion of Foundations, Methodologies and Applications, 2012, 16(1): 77-87
In graph-based linear dimensionality reduction algorithms, it is crucial to construct a neighbor graph that correctly reflects the relationships between samples. This paper presents an improved algorithm for linear dimensionality reduction called fuzzy local maximal marginal embedding (FLMME). What significantly distinguishes it from existing graph-based algorithms is that FLMME constructs two novel fuzzy gradual graphs, which help pull near-neighbor samples of the same class ever closer and push far-neighbor samples on the margin between different classes ever farther apart when they are projected into the feature subspace. Thanks to the fuzzy gradual graphs, the FLMME algorithm is less sensitive to sample variations caused by varying illumination, expression, viewing conditions, and shape. The proposed FLMME algorithm is evaluated through experiments on the WINE database, the Yale and ORL face image databases, and the USPS handwritten digit database. The results show that FLMME outperforms PCA, LDA, LPP, and local maximal marginal embedding.