91.
As the computer disappears into the environments surrounding our activities, the objects therein become augmented with sensors, actuators, processors, memories and wireless communication modules, enabling them to receive, store, process and transmit information. Beyond individual objects, entire spaces are also becoming smart, eventually turning into Ambient Intelligence (AmI) spaces. To model the way everyday activities are carried out within an AmI environment, we introduce the notion of the "activity sphere". In this paper, we are interested in the ontology-based representation of activity spheres from two different perspectives (as creators and as observers), as well as in modeling and controlling the dynamic nature of activity spheres.
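A minimal sketch, using rdflib, of what an ontology-based description of an activity sphere could look like: an activity tied to the augmented objects and AmI space that realise it. The vocabulary below (ActivitySphere, AugmentedObject, uses, hasSensor, takesPlaceIn) is invented for illustration and is not the paper's actual ontology.

```python
# Hypothetical activity-sphere vocabulary, sketched with rdflib.
from rdflib import Graph, Namespace, Literal, RDF

AMI = Namespace("http://example.org/ami#")  # invented namespace
g = Graph()

# An activity sphere, the smart objects it uses, and its AmI space.
g.add((AMI.MorningCoffee, RDF.type, AMI.ActivitySphere))
g.add((AMI.MorningCoffee, AMI.uses, AMI.SmartKettle))
g.add((AMI.SmartKettle, RDF.type, AMI.AugmentedObject))
g.add((AMI.SmartKettle, AMI.hasSensor, AMI.TemperatureSensor))
g.add((AMI.MorningCoffee, AMI.takesPlaceIn, AMI.Kitchen))
g.add((AMI.Kitchen, AMI.label, Literal("kitchen AmI space")))

print(g.serialize(format="turtle"))
```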
92.
This paper presents a methodology for Mining Association Rules from Code (MARC), aiming at capturing program structure, facilitating system understanding and supporting software management. MARC groups program entities (paragraphs or statements) based on similarities such as variable use, data types and procedure calls. It comprises three stages: code parsing/analysis, association rule mining and rule grouping. Code is parsed to populate a database with records and their attributes. Association rules are then extracted from this database and subsequently processed to abstract programs into groups of interrelated entities: entities are grouped together if their attributes participate in common rules. This abstraction is performed at the program level, or even the paragraph level, in contrast to other approaches that work at the system level. Groups can then be visualised as collections of interrelated entities. The methodology was evaluated using real-life COBOL programs. Results showed that it facilitates program comprehension using source code alone, where domain knowledge and documentation are either unavailable or unreliable.
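A toy sketch of the pipeline's middle stages under assumed inputs: hypothetical COBOL paragraphs act as transactions of attributes, frequent attribute pairs act as association rules, and paragraphs sharing a rule's attributes form a group. The paragraph names, attributes and support threshold are all invented.

```python
# Minimal MARC-style grouping sketch with invented parsed data.
from itertools import combinations
from collections import defaultdict

# Hypothetical parsed COBOL paragraphs -> attributes they reference.
paragraphs = {
    "P-READ-ORDER":  {"ORDER-REC", "ORDER-FILE", "READ-NEXT"},
    "P-CHECK-ORDER": {"ORDER-REC", "ERR-FLAG"},
    "P-WRITE-LOG":   {"LOG-REC", "LOG-FILE"},
    "P-REJECT":      {"ORDER-REC", "ERR-FLAG", "LOG-REC"},
}

min_support = 2  # a rule must hold in at least 2 paragraphs

# Count co-occurring attribute pairs (the candidate rules).
pair_count = defaultdict(int)
for attrs in paragraphs.values():
    for a, b in combinations(sorted(attrs), 2):
        pair_count[(a, b)] += 1
rules = [p for p, c in pair_count.items() if c >= min_support]

# Group paragraphs whose attributes participate in a common rule.
groups = defaultdict(set)
for rule in rules:
    for name, attrs in paragraphs.items():
        if set(rule) <= attrs:
            groups[rule].add(name)

for rule, members in groups.items():
    print(rule, "->", sorted(members))
```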
93.
A repairable queueing model with two successive phases of service, provided by a single server, is investigated. Customers arrive in a single ordinary queue and, after completing the first phase of service, either proceed to the second phase or join a retrial box, from where they retry, after a random amount of time and independently of the other customers in orbit, to find a position for service in the second phase. Moreover, the server is subject to breakdowns and repairs in both phases, and a start-up time is needed before serving a retrial customer. When the server, upon completing a service or a repair, finds no customers waiting to be served, it departs for a single vacation of arbitrarily distributed length. The arrival process is assumed to be Poisson, and all service and repair times are arbitrarily distributed. For this system, the stability conditions and the steady-state analysis are investigated. Numerical results are finally obtained and used to investigate system performance.
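A simulation sketch of a simplified variant of the model (breakdowns, start-up times and vacations omitted), using the simpy library; all rates are hypothetical, chosen only to keep the toy system stable.

```python
# Simplified two-phase service with a retrial orbit, in simpy.
import random
import simpy

LAM, MU1, MU2, RETRY, P_ORBIT = 0.5, 2.0, 1.5, 1.0, 0.4
sojourn = []

def customer(env, server):
    t0 = env.now
    with server.request() as req:              # phase-1 service
        yield req
        yield env.timeout(random.expovariate(MU1))
    if random.random() < P_ORBIT:              # join the retrial box
        yield env.timeout(random.expovariate(RETRY))
        while server.count == 1:               # server busy: retry later
            yield env.timeout(random.expovariate(RETRY))
    with server.request() as req:              # phase-2 service
        yield req
        yield env.timeout(random.expovariate(MU2))
    sojourn.append(env.now - t0)

def source(env, server):
    while True:                                # Poisson arrivals
        yield env.timeout(random.expovariate(LAM))
        env.process(customer(env, server))

env = simpy.Environment()
server = simpy.Resource(env, capacity=1)
env.process(source(env, server))
env.run(until=10_000)
print(f"mean sojourn time ~ {sum(sojourn) / len(sojourn):.2f}")
```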
94.
An efficient novel strategy for color-based image retrieval is introduced: a hybrid approach combining a data compression scheme based on self-organizing neural networks with a nonparametric statistical test for comparing vectorial distributions. First, the color content of each image is summarized by representative RGB vectors extracted using the Neural-Gas network. The similarity between two images is then assessed as the commonality between the corresponding representative color distributions, quantified using the multivariate Wald–Wolfowitz test. Experimental results from the application to a diverse collection of color images show significantly improved performance (approximately 10–15% higher) relative to both the popular, simplistic color-histogram approach and the sophisticated, computationally demanding Earth Mover's Distance.
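A sketch of the pipeline with two stated substitutions: scikit-learn's k-means stands in for the Neural-Gas codebook, and the multivariate Wald–Wolfowitz test is realised, following Friedman and Rafsky's minimum-spanning-tree formulation, by counting cross-sample MST edges over the pooled representative vectors. The image data are random stand-ins.

```python
# Codebook summarization + MST-based multivariate run counting.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist
from sklearn.cluster import KMeans

def codebook(pixels, k=16):
    """Summarize an image's RGB pixels by k representative vectors."""
    return KMeans(n_clusters=k, n_init=4).fit(pixels).cluster_centers_

def ww_cross_edges(a, b):
    """Cross-sample MST edges: more of them means more similar."""
    pooled = np.vstack([a, b])
    labels = np.r_[np.zeros(len(a)), np.ones(len(b))]
    mst = minimum_spanning_tree(cdist(pooled, pooled)).tocoo()
    return int(np.sum(labels[mst.row] != labels[mst.col]))

rng = np.random.default_rng(0)
img1 = rng.random((5000, 3))          # stand-in "pixels" in [0,1]^3
img2 = rng.random((5000, 3)) * 0.5    # a darker stand-in image
print(ww_cross_edges(codebook(img1), codebook(img2)))
```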
95.
Data stream values are often associated with multiple aspects. For example, each value observed at a given time-stamp from environmental sensors may have an associated type (e.g., temperature or humidity) as well as a location. Time-stamp, type and location are three aspects, which can be modeled using a tensor (high-order array). However, the time aspect is special: it has a natural ordering, and successive time-ticks usually have correlated values. Standard multiway analysis ignores this structure. To capture it, we propose 2 Heads Tensor Analysis (2-heads), which provides a qualitatively different treatment of time. Unlike most existing approaches, which use a PCA-like summarization scheme for all aspects, 2-heads treats the time aspect carefully, combining the power of classic multilinear analysis with wavelets and leading to a powerful mining tool. 2-heads has several further advantages: (a) it can be computed incrementally in a streaming fashion, (b) it has a provable error guarantee, and (c) it achieves significant compression ratios against competitors. Finally, we present experiments on real datasets and illustrate how 2-heads reveals interesting trends in the data. This is an extended abstract of an article published in the Data Mining and Knowledge Discovery journal.
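A toy numpy sketch of the "two heads": a one-level Haar transform serves as the wavelet treatment of the time mode, while truncated SVDs of the mode unfoldings give the PCA-like treatment of the other aspects. Tensor sizes and ranks are arbitrary, and the incremental streaming machinery is not shown.

```python
# Wavelet head on time, PCA-like head on the other aspects.
import numpy as np

def haar_time(X):
    """One-level Haar transform along axis 0 (the time mode)."""
    even, odd = X[0::2], X[1::2]
    return np.concatenate([even + odd, even - odd]) / np.sqrt(2)

def mode_unfold(X, mode):
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def mode_factors(X, ranks):
    """Leading left singular vectors of each non-time unfolding."""
    return [np.linalg.svd(mode_unfold(X, m), full_matrices=False)[0][:, :r]
            for m, r in zip((1, 2), ranks)]

X = np.random.rand(64, 5, 10)            # time x type x location
W = haar_time(X)                         # wavelet head for time
U_type, U_loc = mode_factors(W, (2, 3))  # PCA head for other aspects
core = np.einsum("tjk,jr,ks->trs", W, U_type, U_loc)
print(core.shape)                        # compressed representation
```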
96.
A methodology is developed to computationally simulate the uncertain behavior of composite structures. The uncertain behavior includes buckling loads, natural frequencies, displacements, stresses/strains, etc., which are consequences of the random variation (scatter) of the primitive (independent random) variables at the constituent, ply, laminate and structural levels. This methodology is implemented in the computer code IPACS (Integrated Probabilistic Assessment of Composite Structures). A fuselage-type composite structure is analyzed to demonstrate the code's capability. The probability distribution functions of the buckling loads, natural frequencies, displacements, strains and stresses are computed. The sensitivity of a given structural response to each primitive (independent random) variable is also identified from the analyses.
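A minimal Monte Carlo sketch in the spirit of the methodology: scatter in primitive variables is propagated through a response function to yield the response distribution and per-variable sensitivities. The Euler buckling load of a simple column stands in for the actual laminate/structural analysis, and all distributions are hypothetical.

```python
# Probabilistic propagation of primitive-variable scatter.
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
E = rng.normal(70e9, 3.5e9, n)      # modulus [Pa], 5% scatter
I = rng.normal(8e-9, 4e-10, n)      # area moment [m^4]
L = rng.normal(1.0, 0.01, n)        # length [m]

P_cr = np.pi**2 * E * I / L**2      # buckling-load response

# Empirical distribution of the response.
print(f"mean = {P_cr.mean():.3e} N, "
      f"1% quantile = {np.quantile(P_cr, 0.01):.3e} N")

# Crude sensitivity of the response to each primitive variable.
for name, v in [("E", E), ("I", I), ("L", L)]:
    print(name, f"corr = {np.corrcoef(v, P_cr)[0, 1]:+.2f}")
```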
97.
The ever-increasing volume of data emanating from mobile devices and sensors dictates the use of distributed systems for storing and querying these data. Typically, such data sources provide spatio-temporal information alongside other useful data. The RDF data model can be used to interlink and exchange data originating from heterogeneous sources in a uniform manner. Consider, for example, the case where vessels report their spatio-temporal positions on a regular basis using various surveillance systems; a user might be interested in knowing which vessels were moving in a specific area during a given temporal range. In this paper, we address the problem of efficiently storing and querying spatio-temporal RDF data in parallel. We specifically study the case of SPARQL queries with spatio-temporal constraints, proposing the DiStRDF system, which comprises a Storage Layer and a Processing Layer. The DiStRDF Storage Layer is responsible for efficiently storing large amounts of historical spatio-temporal RDF data of moving objects. On top of it, we devise the DiStRDF Processing Layer, which parses a SPARQL query and produces the corresponding logical and physical execution plans. We use Spark, a well-known distributed in-memory processing framework, as the underlying processing engine. Our experimental evaluation, on real data from both the aviation and maritime domains, demonstrates the efficiency of the DiStRDF system under various spatio-temporal range constraints.
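A PySpark sketch of the relational core of such a query, under a hypothetical flat schema: positions of moving objects filtered by a spatio-temporal range constraint. This illustrates only the filtering step a physical plan would contain, not the DiStRDF storage layout or plan generation.

```python
# Spatio-temporal range filter over hypothetical vessel positions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("distrdf-sketch").getOrCreate()

rows = [("vessel:1", 1.0, 10.2, 39.9),
        ("vessel:1", 2.0, 10.4, 40.1),
        ("vessel:2", 1.5, 23.7, 37.9)]
df = spark.createDataFrame(rows, ["subject", "t", "lon", "lat"])

# "Which vessels were inside this box during this interval?"
hits = (df.filter(F.col("t").between(0.5, 1.8) &
                  F.col("lon").between(10.0, 11.0) &
                  F.col("lat").between(39.5, 40.5))
          .select("subject").distinct())
hits.show()
```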
98.
Contemporary distributed systems usually involve the spreading of information by means of ad-hoc dialogs between nodes (peers). This paradigm resembles the spreading of a virus from a biological perspective (epidemics). This abstraction allows us to design and implement information dissemination schemes with increased efficiency. In addition, elementary information generated at a certain node can be further processed to obtain more specific, higher-level and more valuable information. Such information carries specific semantic value that can be further interpreted and exploited throughout the network. This is reflected in the epidemical framework through the idea of virus transmutation, which is a key component of our model. We establish an analytical framework for the study of a multi-epidemical information dissemination scheme in which diverse 'transmuted epidemics' are spread, and we validate the analytical model through simulations. Key outcomes of this study include an assessment of the efficiency of the proposed scheme and a prediction of the characteristics of the spreading process (multi-epidemical prevalence and decay).
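A toy simulation sketch of epidemic-style dissemination with transmutation: nodes gossip an elementary epidemic "A" to random peers, and some nodes process it into a higher-level derivative "B" that spreads the same way. Population size, fan-out and transmutation probability are invented parameters.

```python
# Gossip-style spreading of two epidemics, A transmuting into B.
import random

N, FANOUT, P_TRANSMUTE, ROUNDS = 1000, 2, 0.05, 15
state = {i: set() for i in range(N)}   # epidemics each node carries
state[0].add("A")                      # patient zero for epidemic A

for r in range(ROUNDS):
    for node in range(N):
        for epi in list(state[node]):
            for peer in random.sample(range(N), FANOUT):
                state[peer].add(epi)   # ad-hoc dialog with a peer
            if epi == "A" and random.random() < P_TRANSMUTE:
                state[node].add("B")   # transmuted, higher-level info
    prev = {e: sum(e in s for s in state.values()) / N for e in "AB"}
    print(f"round {r:2d}: prevalence A={prev['A']:.2f} B={prev['B']:.2f}")
```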
99.
The conventional approach to implementing the knowledge base of a planning agent on an intelligent embedded system is purely software-based. It requires a compiler that transforms the initial declarative logic program specifying the knowledge base into an equivalent procedural one to be programmed onto the embedded system's microprocessor. This practice increases the complexity of the final implementation (the declarative-to-procedural transformation adds a great amount of code to simulate declarative execution) and reduces overall system performance (evaluating logic derivations requires a stack and a great number of jump instructions). Specialized hardware implementations that support only logic programs, designed to resolve these problems, are of limited use in applications where logic programs must be intertwined with traditional procedural ones. In this paper, we exploit HW/SW codesign methods to present a microprocessor capable of supporting hybrid applications that use both programming approaches. We take advantage of the close relationship between attribute grammar (AG) evaluation and knowledge engineering methods to present a programmable hardware parser that performs logic derivations, and we combine it with an extension of a conventional RISC microprocessor that performs the unification process to report the success or failure of those derivations. The extended RISC microprocessor can still execute conventional procedural programs, so hybrid applications can be implemented. The presented implementation increases the performance of logic derivations for the control inference process (experimental analysis yields an approximately tenfold increase in performance) and reduces the complexity of the final implemented code by introducing an extended C language, called C-AG, that simplifies the programming of hybrid procedural-declarative applications.
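A small software analogue of the link the paper exploits between attribute-grammar evaluation and logic derivation: a rule succeeds if the attribute bindings of its constituents unify consistently. The rule, fact base and helper names are invented illustrations, not the C-AG language or the hardware parser itself.

```python
# Logic derivation as consistent combination of attribute bindings.
def unify(bindings, var, value):
    """Fail (return None) on a conflicting value, else extend bindings."""
    if var in bindings and bindings[var] != value:
        return None
    return {**bindings, var: value}

# Rule: path(X, Z) :- edge(X, Y), edge(Y, Z).  Facts as tuples.
edges = [("a", "b"), ("b", "c"), ("b", "d")]

def derive_path(x, z):
    """Evaluate the rule over the fact base; report success/failure."""
    for s1, t1 in edges:
        for s2, t2 in edges:
            b = unify({}, "X", s1)
            for var, val in (("Y", t1), ("Y", s2), ("Z", t2)):
                b = b and unify(b, var, val)
            if b and b["X"] == x and b["Z"] == z:
                return b               # success: a derivation was found
    return None                        # failure

print(derive_path("a", "c"))           # {'X': 'a', 'Y': 'b', 'Z': 'c'}
print(derive_path("c", "a"))           # None
```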
100.
We consider Discrete Event Systems (DES) involving tasks with real-time constraints and seek to control processing times so as to minimize a cost function subject to each task meeting its own constraint. It has been shown that the off-line version of this problem can be solved efficiently by the Critical Task Decomposition Algorithm (CTDA) (Mao et al., IEEE Trans Mobile Comput 6(6):678–688, 2007). In the on-line version, random task characteristics (e.g., arrival times) are not known in advance. To bypass this difficulty, worst-case analysis may be used; this, however, makes no use of probability distributions and results in an overly conservative solution. In this paper, we develop a new approach that does not rely on worst-case analysis but instead provides a "best solution in probability", efficiently obtained by estimating the probability distribution of sample-path-optimal solutions. We introduce a condition, termed "non-singularity", under which the best solution in probability leads to the on-line optimal control. Numerical examples are included to illustrate our results and show substantial performance improvements over worst-case analysis.
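A toy numerical sketch of the "best solution in probability" idea: sample many scenarios, compute each sample path's off-line optimal control, and pick the most probable optimum from the estimated distribution. The single-task model below (random arrival, hard deadline, cost increasing in processing speed) is a hypothetical stand-in for the DES problem in the paper.

```python
# Estimating the distribution of sample-path-optimal controls.
import numpy as np

rng = np.random.default_rng(2)
deadline, work = 1.0, 0.5
arrivals = rng.uniform(0.0, 0.4, 100_000)   # random task arrivals

# Off-line optimum per sample path: with cost rising in speed, the
# cheapest feasible speed exactly meets the deadline.
s_opt = work / (deadline - arrivals)

# Estimate the distribution of the optima and take its mode.
hist, edges = np.histogram(s_opt, bins=50)
mode = edges[np.argmax(hist)] + (edges[1] - edges[0]) / 2
print(f"best speed in probability ~ {mode:.3f}")
```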