71.
Aleksandar Kovačević Branko Milosavljević Zora Konjović Milan Vidaković 《Multimedia Tools and Applications》2010,47(3):525-544
This paper presents a tunable content-based music retrieval (CBMR) system suitable for the retrieval of music audio clips. The audio clips are represented as extracted feature vectors. The CBMR system is expert-tunable by altering the feature space. The feature space is tuned according to expert-specified similarity criteria expressed in terms of clusters of similar audio clips. The main goal of tuning the feature space is to improve retrieval performance, since some features may have more impact on perceived similarity than others. The tuning process utilizes our genetic algorithm. The R-tree index for efficient retrieval of audio clips is based on the clustering of feature vectors. For each cluster a minimal bounding rectangle (MBR) is formed, thus providing objects for indexing. Inserting new nodes into the R-tree is performed efficiently because of the chosen Quadratic Split algorithm. Our CBMR system implements the point query and the n-nearest-neighbors query with O(log n) time complexity. Different objective functions based on cluster similarity and dissimilarity measures are used for the genetic algorithm. We have found that all of them have a similar impact on retrieval performance in terms of precision and recall. The paper includes experimental results measuring retrieval performance, reporting significant improvement over the untuned feature space.
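The feature-space tuning described above can be sketched compactly. The following is a hedged illustration, not the authors' implementation: per-dimension weights are evolved with a simple genetic algorithm so that expert-defined clusters become compact and well separated (a fitness combining cluster similarity and dissimilarity, as the abstract mentions).

```python
import math
import random

def weighted_dist(a, b, w):
    """Distance in a per-dimension weighted feature space."""
    return math.sqrt(sum(wi * (x - y) ** 2 for wi, x, y in zip(w, a, b)))

def fitness(weights, clusters):
    """Inter-cluster separation minus intra-cluster spread, with weights normalized."""
    s = sum(weights) or 1.0
    w = [wi / s for wi in weights]
    intra = inter = 0.0
    for ci, clips in enumerate(clusters):
        for cj in range(ci + 1, len(clusters)):
            for a in clips:
                for b in clusters[cj]:
                    inter += weighted_dist(a, b, w)
        for i in range(len(clips)):
            for j in range(i + 1, len(clips)):
                intra += weighted_dist(clips[i], clips[j], w)
    return inter - intra

def tune(clusters, dims, pop=20, gens=50, seed=0):
    """Evolve feature weights with one-point crossover and point mutation."""
    rng = random.Random(seed)
    population = [[rng.random() for _ in range(dims)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda w: fitness(w, clusters), reverse=True)
        parents = population[: pop // 2]          # keep the fitter half
        children = []
        for _ in range(pop - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dims)          # one-point crossover
            child = a[:cut] + b[cut:]
            k = rng.randrange(dims)               # mutate one weight
            child[k] = min(1.0, max(0.0, child[k] + rng.gauss(0, 0.1)))
            children.append(child)
        population = parents + children
    return max(population, key=lambda w: fitness(w, clusters))
```

On toy data where only the first dimension distinguishes the expert's clusters, the evolved weights concentrate on that dimension, which is the intended effect of the tuning.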
72.
Tatjana Davidović Dušan Ramljak Milica Šelmić Dušan Teodorović 《Computers & Operations Research》2011
Bee colony optimization (BCO) is a relatively new meta-heuristic designed to deal with hard combinatorial optimization problems. It is a biologically inspired method that exploits the collective intelligence honey bees apply during the nectar collecting process. In this paper we apply BCO to the p-center problem in the case of a symmetric distance matrix. In contrast to the constructive variant of the BCO algorithm used in recent literature, we propose a variant of BCO based on the improvement concept (BCOi). The BCOi has not been used significantly in the relevant BCO literature so far. In this paper it is shown that BCOi can be a very useful concept for solving difficult combinatorial problems. The numerical experiments performed on well-known benchmark problems show that BCOi is competitive with other methods and can generate high-quality solutions within negligible CPU times.
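A much-simplified sketch of the improvement-based idea (BCOi) for the p-center problem, purely illustrative and not the authors' algorithm: each bee carries a complete solution (a set of p centers) and repeatedly tries a local swap, and the worst bee abandons its solution to follow the current best one (a crude stand-in for BCO's loyalty/recruitment phase).

```python
import random

def pcenter_cost(centers, dist):
    """Max distance from any node to its nearest chosen center."""
    n = len(dist)
    return max(min(dist[v][c] for c in centers) for v in range(n))

def bcoi_pcenter(dist, p, bees=5, iters=200, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    swarm = [rng.sample(range(n), p) for _ in range(bees)]   # complete solutions
    best = min(swarm, key=lambda s: pcenter_cost(s, dist))
    for _ in range(iters):
        for b in range(bees):
            cand = swarm[b][:]
            cand[rng.randrange(p)] = rng.randrange(n)        # swap one center
            if len(set(cand)) == p and pcenter_cost(cand, dist) <= pcenter_cost(swarm[b], dist):
                swarm[b] = cand                              # improvement step
        best = min(swarm + [best], key=lambda s: pcenter_cost(s, dist))
        worst = max(range(bees), key=lambda b: pcenter_cost(swarm[b], dist))
        swarm[worst] = best[:]                               # recruitment
    return best, pcenter_cost(best, dist)
```

On four points on a line at positions 0, 1, 10, 11 with p = 2, the optimal covering radius is 1 (one center in each pair), which this sketch finds quickly.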
73.
74.
75.
Analysis of low-level usage data collected in empirical studies of user interaction is well known to be a demanding task. Existing techniques for data collection and analysis are either application specific or data-driven. This paper presents a workspace for cleaning, transformation and analysis of low-level usage data that we have developed, and reports our experience with it. Through its five-level architecture, the workspace distinguishes between more general data, typically used in initial analysis, and the data answering a specific research question. The workspace was used in four studies, and in total 6.5M user actions were collected from 238 participants. The collected data have proven useful for: (i) validating solution times, (ii) validating process conformance, (iii) exploratory studies on program comprehension for understanding the use of classes and documents and (iv) testing hypotheses on keystroke latencies. We have found workspace creation to be demanding in time. Particularly demanding were determining the context of actions and dealing with deficiencies. However, once these processes were understood, it was easy to reuse the workspace for different experiments and to extend it to answer new research questions. Based on our experience, we give a set of guidelines that might help in setting up studies and in collecting and preparing data. We recommend that designers of data collection instruments add context to each action. Furthermore, we recommend rapid iterations starting early in the process of data preparation and analysis, covering both general and specific data. Copyright © 2009 John Wiley & Sons, Ltd.
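The recommendation to "add context to each action" and the keystroke-latency analysis can be illustrated with a minimal sketch (field and task names here are hypothetical, not from the paper): raw timestamped actions are enriched with their enclosing task, and inter-key intervals are then derived from the enriched stream.

```python
def add_context(actions, sessions):
    """Attach the enclosing task to each (timestamp, kind, payload) action."""
    out = []
    for t, kind, payload in sorted(actions):
        task = next((s["task"] for s in sessions if s["start"] <= t < s["end"]), None)
        out.append({"t": t, "kind": kind, "payload": payload, "task": task})
    return out

def keystroke_latencies(actions):
    """Inter-key intervals (ms) between consecutive keystroke actions."""
    keys = [a["t"] for a in actions if a["kind"] == "key"]
    return [b - a for a, b in zip(keys, keys[1:])]
```

Attaching context at collection time, as the authors recommend, avoids the costly reconstruction step they describe during data preparation.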
76.
Lena Cibulski Denis Gračanin Alexandra Diehl Rainer Splechtna Mai Elshehaly Claudio Delrieux Krešimir Matković 《The Visual computer》2016,32(6-8):847-857
Widespread use of GPS and similar technologies makes it possible to collect extensive amounts of trajectory data. These data sets are essential for sound decision making in various application domains. Additional information, such as events taking place along a trajectory, makes data analysis challenging due to data size and complexity. We present an integrated solution for interactive visual analysis and exploration of events along trajectories. Our approach supports analysis of event sequences at three different levels of abstraction: spatial, temporal, and the events themselves. Customized views as well as standard views are combined to form a coordinated multiple views system. In addition to trajectories and events, we include on-the-fly derived data in the analysis. We evaluate our integrated solution using the IEEE VAST 2015 Challenge data set. A successful detection and characterization of malicious activity indicates the usefulness and efficiency of the presented approach.
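One concrete example of the "on-the-fly derived data" mentioned above, sketched here as an assumption rather than the paper's actual pipeline: per-segment speed computed from raw (time, x, y) trajectory points, which an analyst could then brush against the spatial and temporal views.

```python
import math

def derive_speeds(track):
    """track: list of (t, x, y) samples; returns the speed of each segment."""
    out = []
    for (t0, x0, y0), (t1, x1, y1) in zip(track, track[1:]):
        d = math.hypot(x1 - x0, y1 - y0)   # segment length
        out.append(d / (t1 - t0))          # distance over elapsed time
    return out
```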
77.
78.
79.
Unit verification, including software inspections and unit tests, is usually the first code verification phase in the software development process. However, the principles of unit verification are weakly explored, mostly due to the lack of data: unit verification data are rarely collected systematically, and only a few studies with such data from industry have been published. Therefore, we explore the theory of fault distributions, originating in the quantitative analysis by Fenton and Ohlsson, in the weakly explored context of unit verification in large-scale software development. We conduct a quantitative case study on a sequence of four development projects on consecutive releases of the same complex software product line system for telecommunication exchanges. We replicate the operationalization from earlier studies and analyze hypotheses related to the Pareto principle of fault distribution, the persistence of faults, the effects of module size, and quality in terms of fault densities, now from the perspective of unit verification. The patterns in unit verification results resemble those of later verification phases, e.g., regarding the Pareto principle, and may thus be used for prediction and planning purposes. Using unit verification results as predictors may improve the quality and efficiency of software verification.
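The Pareto hypothesis the study tests is easy to operationalize: what share of faults falls in the most fault-prone fraction of modules. A minimal sketch (the fault counts below are made up for illustration, not the study's data):

```python
def pareto_share(fault_counts, top_fraction=0.2):
    """Share of all faults found in the top_fraction most fault-prone modules."""
    ranked = sorted(fault_counts, reverse=True)
    k = max(1, round(len(ranked) * top_fraction))
    total = sum(ranked)
    return sum(ranked[:k]) / total if total else 0.0
```

A value well above `top_fraction` (e.g. 0.7 for the top 20% of modules) indicates the Pareto-like concentration of faults that the abstract reports also holds for unit verification results.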
80.
Igor Cverdelj-Fogaraši Goran Sladić Stevan Gostojić Milan Segedinac Branko Milosavljević 《Information Systems and E-Business Management》2017,15(2):257-304
This paper proposes a non-domain-specific metadata ontology as a core component in a semantic model-based document management system (DMS), a potential contender for the enterprise information systems of the next generation. We developed the core semantic component of an ontology-driven DMS, providing a robust semantic base for describing document metadata. We also enabled semantic services such as automated semantic translation of metadata from one domain to another. The core semantic base consists of three semantic layers, each serving a different view of document metadata. The base layer is a non-domain-specific metadata ontology founded on the ebRIM specification; its main purpose is to serve as a meta-metadata ontology for other domain-specific metadata ontologies, and it provides a generic metadata view. To enable domain-specific views of document metadata, we implemented two domain-specific metadata ontologies, semantically layered on top of ebRIM, serving domain-specific views of the metadata. To enable semantic translation of metadata from one domain to another, we established model-to-model mappings between these semantic layers by introducing SWRL rules. Automating the semantic translation of metadata not only allows for effortless switching between different metadata views, but also opens the door to automating the long-term archiving of documents. For the case study, we chose the judicial domain as promising ground for improving the efficiency of the judiciary by introducing semantics in this field.
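The model-to-model mapping idea can be illustrated without an ontology stack. The paper uses SWRL rules over OWL ontologies; here plain dictionary rules stand in, and all field names (`ebrim:name`, `case:title`, etc.) are hypothetical placeholders, not taken from the paper:

```python
# Each rule maps a generic (ebRIM-level) field to a domain-specific field,
# optionally applying a value transform. Names are illustrative only.
RULES = {
    "ebrim:name": ("case:title", str),
    "ebrim:submitted": ("case:filed_on", str),
}

def translate(record, rules=RULES):
    """Translate a generic metadata record into a domain-specific view."""
    out = {}
    for src, (dst, fn) in rules.items():
        if src in record:
            out[dst] = fn(record[src])
    return out
```

The benefit the paper claims follows from the same shape: once rules exist between layers, switching a document's metadata view is a mechanical rewrite rather than manual re-entry.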