3,209 query results found; search took 15 ms.
41.
J. Berdajs, Z. Bosnić. Software, 2010, 40(7): 567-584
When programmers need to modify third-party applications, they frequently do not have access to the source code. In such cases, DLL injection and API hooking are techniques that can be used to modify applications without intervening in their source code. The commonly used varieties of injection and hooking approaches have many practical limitations: they are inconvenient for a programmer to implement, and they do not work reliably with all applications or with certain low-level machine instructions. In this paper we present two novel approaches to DLL injection and API hooking, which we call Debugger-aided DLL injection and Single Instruction Hooking. Our approaches overcome the limitations of the state-of-the-art approaches. Despite incurring greater execution times, they allow extending applications in situations where comparable approaches fail. As such, they have notable practical value for beneficial applications of injection and hooking, such as malware detection programs and computer security tools. Copyright © 2010 John Wiley & Sons, Ltd.
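The abstract describes native Windows techniques; purely as an illustration of the hooking idea (intercepting calls to an existing API without touching its source), here is a minimal Python sketch that wraps a library function at runtime. The `hook` helper and the logging example are hypothetical analogies, not the paper's debugger-aided technique.

```python
import functools
import json

call_log = []

def hook(target_module, name, before=None, after=None):
    """Replace target_module.name with a wrapper that runs optional
    before/after callbacks around the original call."""
    original = getattr(target_module, name)

    @functools.wraps(original)
    def wrapper(*args, **kwargs):
        if before:
            before(args, kwargs)
        result = original(*args, **kwargs)
        if after:
            after(result)
        return result

    setattr(target_module, name, wrapper)
    return original  # keep a handle so the hook can later be removed

# Example: hook json.dumps to record every serialized object.
unhooked = hook(json, "dumps", before=lambda args, kwargs: call_log.append(args[0]))
json.dumps({"x": 1})
# call_log now holds {"x": 1}
```

Callers of `json.dumps` are unaware of the wrapper, which mirrors how API hooks are transparent to the hooked application.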
44.
Analysis of safety in surface coal mines is a very complex process. Published studies on mine safety analysis are usually based on research related to accident statistics and hazard identification with risk assessment within the mining industry. This paper focuses on the application of AI methods to the analysis of safety in the mining environment. The complexity of the subject matter requires a high level of expert knowledge and great experience. The solution was found in the creation of a hybrid system, PROTECTOR, whose knowledge base is a formalization of expert knowledge in the mine safety field. The main goal of the system is the estimation of the mining environment as one of the significant components of the general safety state in a mine. This global goal is subdivided into a hierarchical structure of subgoals, where each subgoal can be viewed as the estimation of a set of parameters (gas, dust, climate, noise, vibration, illumination, geotechnical hazard) which determine the general mine safety state and the category of hazard in the mining environment. Both the hybrid nature of the system and the possibilities it offers are illustrated through a case study using field data from an existing Serbian surface coal mine.
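PROTECTOR's knowledge base is a rule-based expert system; as a heavily simplified stand-in for its hierarchical estimation, the sketch below aggregates per-parameter hazard scores into an environment estimate and a hazard category. The scores, weights, thresholds, and category names are assumptions for illustration only.

```python
def environment_hazard(scores, weights=None):
    """Aggregate per-parameter hazard scores (0.0 = safe, 1.0 = critical)
    into a single environment estimate and a coarse hazard category.
    `scores` maps parameter names (gas, dust, climate, ...) to values."""
    weights = weights or {p: 1.0 for p in scores}
    total = sum(weights[p] * s for p, s in scores.items())
    estimate = total / sum(weights[p] for p in scores)
    if estimate < 0.33:
        category = "low"
    elif estimate < 0.66:
        category = "moderate"
    else:
        category = "high"
    return estimate, category

# Hypothetical readings for three of the paper's parameters.
estimate, category = environment_hazard({"gas": 0.9, "dust": 0.5, "climate": 0.2})
# estimate is about 0.53, so the category is "moderate"
```

In the real system each subgoal would be estimated by expert rules rather than a fixed weighted average; the hierarchy here is collapsed to one level for brevity.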
45.
World lines     
In this paper we present World Lines, a novel interactive visualization that provides complete control over multiple heterogeneous simulation runs. In many application areas, decisions can only be made by exploring alternative scenarios. The goal of the suggested approach is to support users in this decision-making process. In this setting, the data domain is extended to a set of alternative worlds, of which only one outcome will actually happen. World Lines integrate simulation, visualization and computational steering into a single unified system that is capable of dealing with the extended solution space. World Lines represent simulation runs as causally connected tracks that share a common time axis. This setup enables users to interfere and add new information quickly. A World Line is introduced as a visual combination of user events and their effects in order to present a possible future. To quickly find the most attractive outcome, we suggest World Lines as the governing component in a system of multiple linked views and a simulation component. World Lines employ linking and brushing to enable comparative visual analysis of multiple simulations in linked views. Analysis results can be mapped to various visual variables that World Lines provide in order to highlight the most compelling solutions. To demonstrate this technique we present a flooding scenario and show the usefulness of the integrated approach in supporting informed decision making.
46.
This paper presents a tunable content-based music retrieval (CBMR) system suitable for the retrieval of music audio clips. The audio clips are represented as extracted feature vectors. The CBMR system is expert-tunable by altering the feature space. The feature space is tuned according to expert-specified similarity criteria expressed in terms of clusters of similar audio clips. The main goal of tuning the feature space is to improve retrieval performance, since some features may have more impact on perceived similarity than others. The tuning process utilizes our genetic algorithm. The R-tree index for efficient retrieval of audio clips is based on the clustering of feature vectors. For each cluster a minimal bounding rectangle (MBR) is formed, thus providing objects for indexing. Inserting new nodes into the R-tree is performed efficiently thanks to the chosen Quadratic Split algorithm. Our CBMR system implements the point query and the n-nearest-neighbors query with O(log n) time complexity. Different objective functions based on cluster similarity and dissimilarity measures are used for the genetic algorithm. We have found that all of them have a similar impact on retrieval performance in terms of precision and recall. The paper includes experimental results measuring retrieval performance, reporting significant improvement over the untuned feature space.
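The abstract does not specify the genetic algorithm's encoding or objective function; the sketch below assumes one plausible setup: per-feature weights evolved to maximize the ratio of between-cluster to within-cluster weighted distance on toy, expert-labeled data. The data, fitness function, and GA parameters are all hypothetical.

```python
import random

random.seed(0)

# Toy feature vectors with expert-assigned cluster labels.
# Feature 0 separates the clusters; feature 1 is noise.
DATA = [([0.1, 0.9], 0), ([0.2, 0.1], 0), ([0.9, 0.8], 1), ([0.8, 0.2], 1)]

def wdist(a, b, w):
    """Weighted squared Euclidean distance."""
    return sum(wi * (x - y) ** 2 for wi, x, y in zip(w, a, b))

def fitness(w):
    """Ratio of between-cluster to within-cluster distance:
    larger means the expert clusters are better separated."""
    within = between = 1e-9
    for i, (a, ca) in enumerate(DATA):
        for b, cb in DATA[i + 1:]:
            if ca == cb:
                within += wdist(a, b, w)
            else:
                between += wdist(a, b, w)
    return between / within

def evolve(pop_size=20, generations=50, n_features=2):
    """Tiny GA: truncation selection plus Gaussian mutation."""
    pop = [[random.random() for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        children = [[max(0.0, g + random.gauss(0, 0.1))
                     for g in random.choice(survivors)]
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
# The evolved weights should favour the discriminative feature 0
# over the noisy feature 1.
```

Because elitist truncation keeps the best individual each generation, the best fitness is non-decreasing, so the evolved weighting is at least as good as any member of the random initial population.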
48.
Analysis of the low-level usage data collected in empirical studies of user interaction is well known to be a demanding task. Existing techniques for data collection and analysis are either application-specific or data-driven. This paper presents a workspace for cleaning, transforming and analyzing low-level usage data that we have developed, and reports our experience with it. Through its five-level architecture, the workspace distinguishes between more general data, typically usable in initial analysis, and data answering a specific research question. The workspace was used in four studies, in which a total of 6.5M user actions were collected from 238 participants. The collected data proved useful for: (i) validating solution times, (ii) validating process conformance, (iii) exploratory studies on program comprehension concerning the use of classes and documents and (iv) testing hypotheses on keystroke latencies. We found workspace creation to be time-consuming; determining the context of actions and dealing with deficiencies were particularly demanding. However, once these processes were understood, it was easy to reuse the workspace for different experiments and to extend it to answer new research questions. Based on our experience, we give a set of guidelines that may help in setting up studies and in collecting and preparing data. We recommend that designers of data collection instruments add context to each action. Furthermore, we recommend rapid iterations starting early in the process of data preparation and analysis, covering both general and specific data. Copyright © 2009 John Wiley & Sons, Ltd.
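As a toy illustration of the recommendation to attach context to each action, this hypothetical sketch enriches a raw event log with the active document and then derives per-document keystroke latencies, one of the analyses the study performed. The data model and field names are assumptions, not the workspace's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Action:
    t_ms: int      # timestamp in milliseconds
    kind: str      # e.g. "keystroke", "open_doc"
    detail: str    # key pressed or document name

# Raw low-level log: which document is active is only implicit.
RAW = [
    Action(0, "open_doc", "Main.java"),
    Action(120, "keystroke", "p"),
    Action(310, "keystroke", "r"),
    Action(900, "open_doc", "Util.java"),
    Action(1000, "keystroke", "x"),
]

def add_context(actions):
    """Early transformation step: attach the active document to every
    action, so later analyses need no session replay."""
    doc = None
    out = []
    for a in actions:
        if a.kind == "open_doc":
            doc = a.detail
        out.append((a, doc))
    return out

def keystroke_latencies(actions_with_ctx):
    """Latencies between consecutive keystrokes within one document."""
    lat, prev = {}, {}
    for a, doc in actions_with_ctx:
        if a.kind != "keystroke":
            continue
        if doc in prev:
            lat.setdefault(doc, []).append(a.t_ms - prev[doc])
        prev[doc] = a.t_ms
    return lat

print(keystroke_latencies(add_context(RAW)))  # {'Main.java': [190]}
```

Note that `Util.java` produces no latency entry: a single keystroke in a context yields no consecutive pair, one of the deficiencies such pipelines must tolerate.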
49.
Unit verification, including software inspections and unit tests, is usually the first code verification phase in the software development process. The principles of unit verification are nevertheless weakly explored, mostly due to lack of data: unit verification data are rarely collected systematically, and only a few studies with such data from industry have been published. We therefore explore the theory of fault distributions, originating in the quantitative analysis by Fenton and Ohlsson, in the weakly explored context of unit verification in large-scale software development. We conduct a quantitative case study on a sequence of four development projects on consecutive releases of the same complex software product line system for telecommunication exchanges. We replicate the operationalization from earlier studies and analyze hypotheses related to the Pareto principle of fault distribution, the persistence of faults, the effects of module size, and quality in terms of fault densities, now from the perspective of unit verification. The patterns in unit verification results resemble those of later verification phases, e.g., regarding the Pareto principle, and may thus be used for prediction and planning purposes. Using unit verification results as predictors may improve the quality and efficiency of software verification.
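The Pareto-principle hypothesis, that a small share of modules holds most of the faults, can be operationalized as a simple check; this sketch, with made-up fault counts, computes the share of faults contained in the most fault-prone 20% of modules. The data and threshold are illustrative, not the study's.

```python
def pareto_share(fault_counts, module_fraction=0.2):
    """Fraction of all faults found in the most fault-prone
    `module_fraction` of modules (a Fenton & Ohlsson style check)."""
    ranked = sorted(fault_counts, reverse=True)
    k = max(1, round(len(ranked) * module_fraction))
    total = sum(ranked)
    return sum(ranked[:k]) / total if total else 0.0

# Hypothetical unit-verification fault counts for ten modules.
faults = [42, 30, 7, 5, 4, 3, 2, 1, 1, 0]
share = pareto_share(faults)
print(f"Top 20% of modules hold {share:.0%} of faults")  # 76%
```

A result well above 20% for the top 20% of modules, as here, is the kind of skewed distribution the Pareto hypothesis predicts.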
50.
Ant-like systems take advantage of agents' situatedness to reduce or eliminate the need for centralized control or global knowledge. This reduces the required complexity of individuals and leads to robust, scalable systems. Such insect-inspired situated approaches have proven effective both for task performance and for task allocation. The desire for general, principled techniques for situated interaction has led us to study the exploitation of abstract situatedness: situatedness in non-physical environments. The port-arbitrated behavior-based control approach provides a well-structured abstract behavior space in which agents can participate in situated interaction. We focus on the problem of role assumption, a form of distributed task allocation in which each agent selects its own task-performing role. This paper details our general, principled Broadcast of Local Eligibility (BLE) technique for role assumption in such behavior-space-situated systems, and provides experimental results from the CMOMMT target-tracking task. This revised version was published online in August 2006 with corrections to the Cover Date.
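The paper describes BLE at the level of port-arbitrated behaviors; the following sketch captures only its core idea under simplifying assumptions: each agent broadcasts a locally computed eligibility score per role, and the highest scorer claims the role, inhibiting its peers' claims. Agent names, scores, and the tie-breaking rule are hypothetical.

```python
def assign_roles(eligibility, roles):
    """Distributed role assumption, BLE-style: for each role, every free
    agent broadcasts its local eligibility, the best-scoring agent claims
    the role, and that claim suppresses the other agents' candidacies.
    eligibility[agent][role] -> score (higher = more eligible)."""
    assignment = {}
    free = set(eligibility)
    for role in roles:
        if not free:
            break
        # Ties are broken by agent name (lexicographically larger wins),
        # a stand-in for any deterministic arbitration rule.
        winner = max(free, key=lambda a: (eligibility[a].get(role, 0.0), a))
        assignment[winner] = role
        free.remove(winner)
    return assignment

# Hypothetical tracking scenario: eligibility = proximity to a target.
scores = {
    "robot1": {"track_A": 0.9, "track_B": 0.2},
    "robot2": {"track_A": 0.8, "track_B": 0.7},
    "robot3": {"track_A": 0.1, "track_B": 0.3},
}
print(assign_roles(scores, ["track_A", "track_B"]))
# {'robot1': 'track_A', 'robot2': 'track_B'}
```

Note that robot2 loses track_A to robot1 despite a high score, then wins track_B: no agent needs global knowledge beyond the broadcast scores, which is the point of the situated approach.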
Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号