41.
Textual requirements are very common in software projects. However, this format often hides relevant concerns (e.g., performance, synchronization, data access) from the analyst’s view because their semantics are implicit in the text. Analysts must therefore carefully review requirements documents in order to identify key concerns and their effects. Concern mining tools based on NLP techniques can help in this activity. Nonetheless, existing tools cannot always detect all the crosscutting effects of a given concern on different requirements sections, as this detection requires a semantic analysis of the text. In this work, we describe an automated tool called REAssistant that supports the extraction of semantic information from textual use cases in order to reveal latent crosscutting concerns. To enable the analysis of use cases, we apply a tandem of advanced NLP techniques (e.g., dependency parsing, semantic role labeling, and domain actions) built on the UIMA framework, which generates different annotations for the use cases. REAssistant then allows analysts to query these annotations via concern-specific rules in order to identify all the effects of a given concern. The tool has been evaluated on several case studies, showing good results when compared to a manual identification of concerns and a third-party tool. In particular, it achieved remarkable recall in detecting crosscutting concern effects.
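As a rough illustration of the rule-based querying described above, the following minimal sketch flags concern keywords on the main verbs of use-case steps. It is a toy Python/spaCy analogue, not REAssistant's actual UIMA-based implementation; the rule table, concern labels, and example steps are all invented for illustration.

```python
# Toy sketch of rule-based concern mining over parsed use-case steps.
# Requires spaCy and the en_core_web_sm model (pip install spacy &&
# python -m spacy download en_core_web_sm).
import spacy

nlp = spacy.load("en_core_web_sm")

# Illustrative concern rules: verb lemma -> concern label (invented).
CONCERN_RULES = {
    "persistence": {"store", "save", "persist", "retrieve"},
    "notification": {"notify", "alert", "email"},
    "security": {"authenticate", "authorize", "encrypt"},
}

def find_concerns(use_case_steps):
    """Return (step index, concern, verb lemma) triples for steps whose
    verbs match a concern rule."""
    hits = []
    for i, step in enumerate(use_case_steps):
        for token in nlp(step):
            if token.pos_ == "VERB":
                for concern, lemmas in CONCERN_RULES.items():
                    if token.lemma_ in lemmas:
                        hits.append((i, concern, token.lemma_))
    return hits

steps = [
    "The system stores the order in the database.",
    "The system notifies the customer by email.",
]
print(find_concerns(steps))
# e.g. [(0, 'persistence', 'store'), (1, 'notification', 'notify')]
```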
42.
This work aims at discovering and extracting relevant patterns underlying social interactions. To do so, knowledge extracted from Facebook, a social networking site, is formalised by means of an Extended Social Graph, a data structure that goes beyond the original concept of a social graph by also incorporating information on interests. Once the Extended Social Graph is built, state-of-the-art techniques are applied over it to discover communities. When these social communities are found, statistical techniques look for relevant patterns common to each of them, so that each cluster of users is characterised by a set of common features. The resulting knowledge is used to develop and evaluate a social recommender system, which suggests possible friends or interests to users of a social network.
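A minimal sketch of the community-detection step, assuming NetworkX; the node names, edges, and interest attributes below are invented, and the Extended Social Graph's actual structure (which models interests as first-class elements) is richer than this toy graph. Greedy modularity maximisation is only one of several methods the phrase "state-of-the-art techniques" could denote.

```python
# Toy community detection over a small social graph (requires networkx).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_edges_from([("ana", "bob"), ("bob", "carl"), ("ana", "carl"),
                  ("dana", "eve"), ("eve", "fred"), ("dana", "fred"),
                  ("carl", "dana")])  # a weak bridge between two triangles

# Interests attached as node attributes (stand-in for the extended graph).
G.nodes["ana"]["interests"] = {"music", "films"}

communities = greedy_modularity_communities(G)
for i, members in enumerate(communities):
    print(f"community {i}: {sorted(members)}")
# Expected: the two triangles come out as separate communities.
```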
43.
44.
aITALC, a new tool for automating loop calculations in high energy physics, is described. The package automatically creates Fortran code for two-fermion scattering processes, starting from the generation and analysis of the Feynman graphs. We describe the modules of the tool and the communication between them, and illustrate its use with three examples.

Program summary

Title of the program: aITALC version 1.2.1 (9 August 2005)
Catalogue identifier: ADWO
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWO
Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
Computer: PC i386
Operating system: GNU/Linux, tested on distributions SuSE 8.2 to 9.3, Red Hat 7.2, Debian 3.0, Ubuntu 5.04; also on Solaris
Programming language used: GNU Make, Diana, Form, Fortran77
Additional programs/libraries used: Diana 2.35 (Qgraf 2.0), Form 3.1, LoopTools 2.1 (FF)
Memory required to execute with typical data: up to about 10 MB
No. of processors used: 1
No. of lines in distributed program, including test data, etc.: 40,926
No. of bytes in distributed program, including test data, etc.: 371,424
Distribution format: tar gzip file
High-speed storage required: 1.5 to 30 MB, depending on modules present and unfolding of examples
Nature of the physical problem: calculation of differential cross sections for e+e− annihilation in the one-loop approximation.
Method of solution: generation and perturbative analysis of Feynman diagrams, with later evaluation of matrix elements and form factors.
Restrictions on the complexity of the problem: currently limited to 2→2 particle reactions in the electroweak Standard Model.
Typical running time: a few minutes, depending strongly on the complexity of the process and the Fortran compiler.
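For context, the differential cross section such a package computes takes the standard 2→2 form (a textbook relation, not anything specific to aITALC):

\[
\frac{d\sigma}{d\Omega} \;=\; \frac{1}{64\pi^{2} s}\,\frac{|\mathbf{p}_f|}{|\mathbf{p}_i|}\;\overline{|\mathcal{M}|^{2}},
\]

where \(\sqrt{s}\) is the centre-of-mass energy, \(\mathbf{p}_{i}\) and \(\mathbf{p}_{f}\) are the initial- and final-state momenta, and \(\overline{|\mathcal{M}|^{2}}\) is the spin-averaged squared matrix element, here evaluated to one-loop order.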
45.
This paper presents a framework for the weighted fusion of several active shape and active appearance models. The approach is based on the eigenspace fusion method proposed by Hall et al., extended to fuse more than two weighted eigenspaces using unbiased mean and covariance matrix estimates. To evaluate the performance of the fusion, a comparative assessment of segmentation precision, as well as facial verification tests, is performed using the AR, EQUINOX, and XM2VTS databases. Based on the results, it is concluded that fusion is useful when the model needs to be updated online or when the original observations are absent.
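One standard way to fuse K weighted eigenspace models, in the spirit of Hall et al.'s merging algebra, combines the component means and covariances as follows; the paper's exact unbiased estimates may differ in the weighting details:

\[
\bar{x} \;=\; \sum_{i=1}^{K} w_i\,\bar{x}_i, \qquad \sum_{i=1}^{K} w_i = 1,
\]
\[
C \;=\; \sum_{i=1}^{K} w_i\,C_i \;+\; \sum_{i=1}^{K} w_i\,(\bar{x}_i-\bar{x})(\bar{x}_i-\bar{x})^{\top},
\]

where \(\bar{x}_i\) and \(C_i\) are the mean and covariance of the i-th model and \(w_i\) its weight; the fused model's eigenvectors and eigenvalues are then obtained from the eigendecomposition of \(C\).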
46.
Computer Supported Collaborative Learning is a pedagogical approach that can be used for deploying educational games in the classroom. However, there is no clear understanding of which technological platforms are better suited for deploying co-located collaborative games, nor of the general affordances that are required. In this work we explore two different technological platforms for collaborative games in the classroom: one based on augmented reality and the other on multiple-mice technology. In both cases the same game was introduced to teach electrostatics, and the results were compared experimentally in a real class.
47.
Question-answering systems make good use of knowledge bases (KBs, e.g., Wikipedia) for responding to definition queries. Typically, systems extract facts relevant to the question from articles across KBs and project them onto the candidate answers. However, studies have shown that the performance of this kind of method drops sharply whenever the KBs provide only narrow coverage. This work describes a new approach to this problem: constructing context models for scoring candidate answers, namely statistical n-gram language models inferred from lexicalized dependency paths extracted from Wikipedia abstracts. Unlike state-of-the-art approaches, context models are created by capturing the semantics of candidate answers (e.g., "novel," "singer," "coach," and "city"). The work is extended by investigating the impact on context models of extra linguistic knowledge such as part-of-speech tagging and named-entity recognition. Results showed the effectiveness of n-gram lexicalized dependency paths as context models and as promising indicators of the presence of definitions in natural language texts.
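As a toy illustration of scoring a candidate answer with an n-gram model over lexicalized dependency paths: the path strings below are invented stand-ins, whereas the paper extracts real paths from parsed Wikipedia abstracts, so this sketch shows only the shape of the computation.

```python
# Add-one-smoothed bigram language model over dependency-path tokens.
from collections import Counter
from itertools import pairwise  # Python 3.10+
import math

# Training "corpus": paths observed around definitional sentences (invented).
paths = [
    ["ANSWER", "nsubj", "be", "attr", "novel"],
    ["ANSWER", "nsubj", "be", "attr", "singer"],
    ["ANSWER", "nsubj", "be", "attr", "city"],
]

unigrams = Counter(tok for p in paths for tok in p)
bigrams = Counter(bg for p in paths for bg in pairwise(p))
V = len(unigrams)  # vocabulary size for smoothing

def log_prob(path):
    """Smoothed bigram log-probability of a dependency path; higher means
    the path looks more like those seen in definitions."""
    return sum(math.log((bigrams[(a, b)] + 1) / (unigrams[a] + V))
               for a, b in pairwise(path))

print(log_prob(["ANSWER", "nsubj", "be", "attr", "coach"]))
```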
48.
49.
The complexity of the data warehouse (DW) development process requires following a methodological approach in order to be successful. A widely accepted approach is the hybrid one, in which requirements and data sources must be accommodated to a new DW model. The main problem is that the relationships between requirements, elements of the multidimensional (MD) conceptual model, and data sources are lost in the process, since no traceability is explicitly specified. This hurts the ability to validate requirements and increases the complexity of Extraction, Transformation and Loading (ETL) processes. In this paper, we propose a novel trace metamodel for DWs, focusing on the relationships between requirements and MD conceptual models. We propose a set of Query/View/Transformation (QVT) rules to include traceability in DWs automatically, allowing us to obtain an MD conceptual model of the DW as well as a trace model. We are therefore able to trace every requirement to the MD elements, further increasing user satisfaction. Finally, we show the implementation in our Lucentia BI tool.
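A minimal sketch of the kind of requirement-to-MD traceability such a trace model records; the class names, element kinds, and example requirement are hypothetical, and the paper realizes this with a trace metamodel plus QVT transformation rules rather than application code.

```python
# Hypothetical trace model linking requirements to multidimensional elements.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Requirement:
    rid: str
    text: str

@dataclass(frozen=True)
class MDElement:
    name: str
    kind: str  # e.g. "Fact", "Dimension", "Measure"

@dataclass
class TraceModel:
    links: list = field(default_factory=list)

    def add(self, req: Requirement, elem: MDElement) -> None:
        self.links.append((req, elem))

    def trace(self, rid: str) -> list:
        """All MD elements derived from a given requirement."""
        return [e for r, e in self.links if r.rid == rid]

r1 = Requirement("R1", "Analyse sales by product and month")
tm = TraceModel()
tm.add(r1, MDElement("Sales", "Fact"))
tm.add(r1, MDElement("Product", "Dimension"))
tm.add(r1, MDElement("Time", "Dimension"))
print([e.name for e in tm.trace("R1")])  # ['Sales', 'Product', 'Time']
```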
50.
This work presents an optimization of MPI communications, called Dynamic-CoMPI, which uses two techniques to reduce the impact of communications and non-contiguous I/O requests in parallel applications. These techniques are independent of the application and complementary to each other. The first is an optimization of the Two-Phase collective I/O technique from ROMIO, called Locality aware strategy for Two-Phase I/O (LA-Two-Phase I/O). To increase the locality of file accesses, LA-Two-Phase I/O solves a Linear Assignment Problem (LAP) to find an optimal I/O data communication schedule; its main purpose is to reduce the number of communications involved in collective I/O operations. The second technique, called Adaptive-CoMPI, is based on run-time compression of the MPI messages exchanged by applications. Both techniques can be applied to any application, because both are transparent to the users. Dynamic-CoMPI has been validated using several MPI benchmarks and real HPC applications. The results show that, for many of the considered scenarios, important reductions in execution time are achieved by reducing the size and number of messages. Additional benefits of the approach are reduced total communication time and network contention, enhancing not only performance but also scalability.
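The LAP step can be illustrated with SciPy's assignment solver: pick which process serves which file domain so that total data movement is minimised. The cost matrix below is invented; in LA-Two-Phase I/O the costs reflect how much of each file domain's data already resides at each process.

```python
# Locality-aware aggregator assignment as a Linear Assignment Problem.
import numpy as np
from scipy.optimize import linear_sum_assignment

# cost[p][d] = bytes process p would need to receive to serve file domain d
cost = np.array([
    [10, 80, 90],
    [70, 20, 60],
    [85, 75, 15],
])

procs, domains = linear_sum_assignment(cost)  # Hungarian-style optimum
for p, d in zip(procs, domains):
    print(f"process {p} -> file domain {d} (cost {cost[p, d]})")
print("total communication cost:", cost[procs, domains].sum())
```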