Results by access type (articles):
Subscription full text: 3779
Free: 245
Free (domestic): 13
Results by subject (articles):
Electrical engineering: 117
General: 8
Chemical industry: 958
Metalworking: 96
Machinery and instruments: 125
Building science: 130
Mining engineering: 9
Energy and power: 153
Light industry: 600
Water conservancy engineering: 28
Petroleum and natural gas: 18
Radio and electronics: 254
General industrial technology: 898
Metallurgy: 90
Atomic energy technology: 23
Automation technology: 530
Results by year (articles):
2023: 29
2022: 25
2021: 97
2020: 73
2019: 89
2018: 133
2017: 134
2016: 175
2015: 124
2014: 177
2013: 390
2012: 263
2011: 285
2010: 230
2009: 170
2008: 180
2007: 161
2006: 129
2005: 78
2004: 60
2003: 56
2002: 54
2001: 45
2000: 45
1999: 44
1998: 36
1997: 30
1996: 39
1995: 29
1994: 39
1993: 31
1992: 32
1991: 20
1990: 24
1989: 21
1988: 17
1985: 36
1984: 40
1983: 31
1982: 28
1981: 40
1980: 36
1979: 32
1978: 32
1977: 22
1976: 26
1975: 17
1974: 19
1973: 19
1972: 14
4037 query results in total; search time 359 ms.
93.
Analysis of the low-level usage data collected in empirical studies of user interaction is a notoriously demanding task. Existing techniques for data collection and analysis are either application-specific or data-driven. This paper presents a workspace for cleaning, transforming, and analyzing low-level usage data that we have developed, and reports our experience with it. Through its five-level architecture, the workspace distinguishes between more general data, typically used in initial analysis, and data that answer a specific research question. The workspace was used in four studies, in which a total of 6.5 million user actions were collected from 238 participants. The collected data proved useful for: (i) validating solution times, (ii) validating process conformance, (iii) exploratory studies on program comprehension concerning the use of classes and documents, and (iv) testing hypotheses on keystroke latencies. We found workspace creation to be time-consuming; determining the context of actions and dealing with data deficiencies were particularly demanding. However, once these processes were understood, it was easy to reuse the workspace for different experiments and to extend it to answer new research questions. Based on our experience, we give a set of guidelines that may help in setting up studies and in collecting and preparing data. We recommend that designers of data collection instruments attach context to each action. Furthermore, we recommend rapid iterations, starting early in the process of data preparation and analysis and covering both general and specific data. Copyright © 2009 John Wiley & Sons, Ltd.
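The paper's key recommendation, attaching context to every logged action at collection time, can be illustrated with a minimal sketch. The schema below is an assumption of what such an instrument might record; the names `LoggedAction` and `log_action` and the field layout are illustrative, not the authors' workspace:

```python
from dataclasses import dataclass, field
import time

@dataclass
class LoggedAction:
    """One low-level user action with its context attached at collection time."""
    action: str                                   # e.g. "keystroke", "open_document"
    session_id: str                               # groups the actions of one participant session
    timestamp: float = field(default_factory=time.time)
    context: dict = field(default_factory=dict)   # e.g. active document, focused class

def log_action(log: list, action: str, session_id: str, **context) -> None:
    """Append an action together with its context, so later cleaning and
    transformation steps need not reconstruct it from neighboring events."""
    log.append(LoggedAction(action=action, session_id=session_id, context=context))

# Usage: record a keystroke together with the document it occurred in.
log: list[LoggedAction] = []
log_action(log, "keystroke", session_id="p017-s1",
           document="Main.java", caret_line=42)
```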
94.
Widespread use of GPS and similar technologies makes it possible to collect extensive amounts of trajectory data. These data sets are essential for sound decision making in various application domains. Additional information, such as events taking place along a trajectory, makes data analysis challenging due to data size and complexity. We present an integrated solution for interactive visual analysis and exploration of events along trajectories. Our approach supports the analysis of event sequences at three levels of abstraction: the spatial, the temporal, and the events themselves. Customized views and standard views are combined into a coordinated multiple-views system. In addition to trajectories and events, we include on-the-fly derived data in the analysis. We evaluate our integrated solution using the IEEE VAST 2015 Challenge data set. Successful detection and characterization of malicious activity indicate the usefulness and efficiency of the presented approach.
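As a rough illustration of the three abstraction levels and the on-the-fly derived data mentioned above, the following sketch filters events by type, time window, or bounding box and derives speed from raw trajectory points. The `TrackPoint` and `Event` types are invented for this example; the paper's actual system is a coordinated multiple-views application, not this code:

```python
from dataclasses import dataclass

@dataclass
class TrackPoint:
    t: float            # timestamp (seconds)
    x: float            # position
    y: float

@dataclass
class Event:
    t: float
    x: float
    y: float
    kind: str           # e.g. "stop", "meeting", "card_swipe"

def derive_speed(points: list[TrackPoint]) -> list[float]:
    """On-the-fly derived attribute: speed between consecutive points."""
    speeds = [0.0]
    for a, b in zip(points, points[1:]):
        dt = max(b.t - a.t, 1e-9)
        speeds.append(((b.x - a.x) ** 2 + (b.y - a.y) ** 2) ** 0.5 / dt)
    return speeds

def select_events(events, kind=None, t_range=None, bbox=None):
    """Brush events on any of the three levels: event type, time, space."""
    out = []
    for e in events:
        if kind is not None and e.kind != kind:
            continue
        if t_range is not None and not (t_range[0] <= e.t <= t_range[1]):
            continue
        if bbox is not None and not (bbox[0] <= e.x <= bbox[2]
                                     and bbox[1] <= e.y <= bbox[3]):
            continue
        out.append(e)
    return out
```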
97.
Unit verification, including software inspections and unit tests, is usually the first code verification phase in the software development process. However, the principles of unit verification are weakly explored, mostly for lack of data: unit verification data are rarely collected systematically, and only a few studies with such data from industry have been published. We therefore explore the theory of fault distributions, originating in the quantitative analysis by Fenton and Ohlsson, in the weakly explored context of unit verification in large-scale software development. We conduct a quantitative case study on a sequence of four development projects on consecutive releases of the same complex software product line system for telecommunication exchanges. We replicate the operationalization of earlier studies and analyze hypotheses related to the Pareto principle of fault distribution, the persistence of faults, the effects of module size, and quality in terms of fault density, now from the perspective of unit verification. The patterns in unit verification results resemble those of later verification phases, e.g., regarding the Pareto principle, and may thus be used for prediction and planning purposes. Using unit verification results as predictors may improve the quality and efficiency of software verification.
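The Pareto-principle hypothesis mentioned above has a simple operational form: sort modules by fault count and measure how large a share of all faults the most fault-prone fraction of modules holds. A minimal sketch, using made-up fault counts rather than the study's data:

```python
def pareto_share(faults_per_module: list[int], module_fraction: float = 0.2) -> float:
    """Fraction of all faults found in the most fault-prone `module_fraction`
    of modules (the Fenton/Ohlsson-style Pareto check)."""
    counts = sorted(faults_per_module, reverse=True)
    top = counts[: max(1, round(len(counts) * module_fraction))]
    total = sum(counts)
    return sum(top) / total if total else 0.0

# Usage with invented unit-verification fault counts per module:
faults = [34, 21, 13, 8, 5, 3, 2, 1, 1, 0, 0, 0]
print(f"Top 20% of modules hold {pareto_share(faults):.0%} of the faults")  # -> 62%
```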
98.
This paper proposes a non-domain-specific metadata ontology as the core component of a semantic model-based document management system (DMS), a potential contender among the enterprise information systems of the next generation. We developed the core semantic component of an ontology-driven DMS, which provides a robust semantic base for describing document metadata, and enabled semantic services such as automated translation of metadata from one domain to another. The core semantic base consists of three semantic layers, each serving a different view of document metadata. The base layer is a non-domain-specific metadata ontology founded on the ebRIM specification; its main purpose is to serve as a meta-metadata ontology for other, domain-specific metadata ontologies, and it provides a generic metadata view. To enable domain-specific views of document metadata, we implemented two domain-specific metadata ontologies, semantically layered on top of ebRIM, each serving a domain-specific view. To enable semantic translation of metadata from one domain to another, we established model-to-model mappings between these semantic layers by introducing SWRL rules. Automating the semantic translation of metadata not only allows effortless switching between metadata views, but also opens the door to automating long-term document archiving. For the case study, we chose the judicial domain, a promising ground for improving the efficiency of the judiciary by introducing semantics into this field.
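For intuition, the SWRL-based translation between the generic ebRIM layer and a domain layer can be approximated by a declarative rule table that rewrites metadata properties. The sketch below is an analogy only: the property names (`rim:*`, `jud:*`) are invented, and the paper works with OWL ontologies and SWRL rules, not Python dictionaries:

```python
# Hypothetical rule table mapping generic ebRIM-style properties onto an
# invented judicial vocabulary; not taken from the paper.
GENERIC_TO_JUDICIAL = {
    "rim:name":        "jud:caseTitle",
    "rim:description": "jud:caseSummary",
    "rim:author":      "jud:presidingJudge",
}

def translate(metadata: dict, rules: dict) -> dict:
    """Rewrite the keys of a metadata record according to the mapping rules;
    fields without a rule keep their generic key, so the generic view stays valid."""
    return {rules.get(k, k): v for k, v in metadata.items()}

generic = {"rim:name": "Case 2021-113", "rim:author": "J. Doe"}
print(translate(generic, GENERIC_TO_JUDICIAL))
# {'jud:caseTitle': 'Case 2021-113', 'jud:presidingJudge': 'J. Doe'}
```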
100.
Ant-like systems take advantage of agents' situatedness to reduce or eliminate the need for centralized control or global knowledge. This reduces the complexity required of individual agents and leads to robust, scalable systems. Such insect-inspired situated approaches have proven effective both for task performance and for task allocation. The desire for general, principled techniques for situated interaction has led us to study the exploitation of abstract situatedness: situatedness in non-physical environments. The port-arbitrated behavior-based control approach provides a well-structured abstract behavior space in which agents can participate in situated interaction. We focus on the problem of role assumption, a form of distributed task allocation in which each agent selects its own task-performing role. This paper details our general, principled Broadcast of Local Eligibility (BLE) technique for role assumption in such behavior-space-situated systems, and provides experimental results from the CMOMMT target-tracking task.
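A minimal sketch of how one BLE-style allocation round could work, under the assumption that every agent hears every broadcast and that eligibility is a locally computed score. The function and data here are invented for illustration, not the authors' implementation:

```python
import random

def broadcast_of_local_eligibility(agents, roles, eligibility):
    """One BLE-style round: every agent broadcasts its locally computed
    eligibility for each role; for each role the best-scoring unassigned
    agent claims it, which inhibits that behavior in all other agents."""
    assignment = {}
    for role in roles:
        # All agents hear every broadcast, so all agree on the winner.
        candidates = [(eligibility(a, role), a)
                      for a in agents if a not in assignment.values()]
        if candidates:
            _, winner = max(candidates)
            assignment[role] = winner
    return assignment

# Usage: eligibility could be, e.g., a robot's proximity to the target it tracks.
agents = ["r1", "r2", "r3"]
roles = ["track_t1", "track_t2"]
scores = {(a, r): random.random() for a in agents for r in roles}
print(broadcast_of_local_eligibility(agents, roles, lambda a, r: scores[(a, r)]))
```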