Sorted results: 318 matches in total (search time: 31 ms)
11.
Cognitive science is the scientific domain that studies, analyzes, simulates, and draws inferences about various aspects, functions, and processes of human mentality, such as thinking, logic, language, knowledge, memory, learning, perception, and the ability to solve problems. E-psychology is closely related to cognitive science but extends beyond it: e-psychology is the convergence of psychology and Information and Communication Technologies (ICTs). E-psychology offers a number of services, such as support, diagnosis, assessment, therapy, counseling, intervention, and testing, through effective exploitation of ICTs. This article presents a user-friendly, flexible, and adaptive electronic platform that supports both synchronous and asynchronous e-psychology activities through informative and communicative tools and services, which can be adapted to support various methods of e-psychology activity. It is important to underline that e-psychology is not an alternative field of psychology, but a resource that enhances the conventional psychological process.
12.
The advent of the World Wide Web has made an enormous amount of information available to everyone, and the widespread use of digital equipment enables end-users (peers) to produce their own digital content. This vast amount of information requires scalable data management systems. Peer-to-peer (P2P) systems have so far been well established in several application areas, with file sharing being the most prominent. The next challenge that needs to be addressed is (more complex) data sharing, management, and query processing, thus facilitating the delivery of a wide spectrum of novel data-centric applications to the end-user while providing high Quality-of-Service. In this paper, we propose a self-organizing P2P system that is capable of identifying peers with similar content and intentionally assigning them to the same super-peer. During content retrieval, fewer super-peers need to be contacted, and therefore efficient similarity search is supported in terms of reduced network traffic and fewer contacted peers. Our approach increases the responsiveness and reliability of a P2P system, and we demonstrate its advantages using large-scale simulations.
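A minimal sketch of the kind of assignment step such a system might use, assuming each peer and super-peer is summarized by a sparse term-frequency profile and similarity is measured by cosine similarity; the function names and profiles below are illustrative, not taken from the paper.

```python
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def assign_to_super_peer(peer_profile: Counter, super_peers: dict[str, Counter]) -> str:
    """Route a joining peer to the super-peer whose content centroid is most similar."""
    return max(super_peers, key=lambda sp: cosine(peer_profile, super_peers[sp]))

# Example: a peer sharing mostly jazz content is grouped under the "music" super-peer,
# so a later similarity query for jazz only needs to contact that super-peer.
super_peers = {"music": Counter(jazz=5, rock=3), "movies": Counter(action=4, drama=2)}
print(assign_to_super_peer(Counter(jazz=2, blues=1), super_peers))  # -> "music"
```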
13.
The international planning competition (IPC) is an important driver for planning research. The general goals of the IPC include pushing the state of the art in planning technology by posing new scientific challenges, encouraging direct comparison of planning systems and techniques, developing and improving a common planning domain definition language, and designing new planning domains and problems for the research community. This paper focuses on the deterministic part of the fifth international planning competition (IPC5), presenting the language and benchmark domains that we developed for the competition, as well as a detailed experimental evaluation of the deterministic planners that entered IPC5, which helps to understand the state of the art in the field. We present an extension of PDDL, called PDDL3, allowing the user to express strong and soft constraints on the structure of the desired plans, as well as strong and soft problem goals. We discuss the expressive power of the new language, focusing on the restricted version that was used in IPC5, for which we give some basic results about its compilability into PDDL2. Moreover, we study the relative performance of the IPC5 planners in terms of solved problems, CPU time, and plan quality; we analyse their behaviour with respect to the winners of the previous competition; and we evaluate them in terms of their capability of dealing with soft goals and constraints, and of finding good-quality plans in general. Overall, the results indicate significant progress in the field, but they also reveal that some important issues remain open and require further research, such as dealing with strong constraints and computing high-quality plans in metric-time domains and domains involving soft goals or constraints.
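To make the role of soft goals concrete, here is a small, hedged sketch of the kind of plan metric that PDDL3-style preferences induce: each unsatisfied soft goal adds its violation weight to the plan cost, and lower values are better. The goal names and weights are illustrative, not taken from the competition domains.

```python
def plan_metric(final_state: set[str], preferences: dict[str, float],
                base_cost: float) -> float:
    """Return the base plan cost plus a penalty for every soft goal
    that does not hold in the plan's final state (lower is better)."""
    penalty = sum(w for goal, w in preferences.items() if goal not in final_state)
    return base_cost + penalty

# Illustrative soft goals: delivering package1 matters more than parking the truck.
prefs = {"package1_delivered": 5.0, "truck_at_depot": 1.0}
print(plan_metric({"package1_delivered"}, prefs, base_cost=12.0))  # 13.0: one violation
```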
14.
Ultra-wideband (UWB) wireless can provide the physical layer for high-throughput personal area networks. When UWB is used for communication between many nodes, relatively long acquisition times are needed when dropping and re-establishing wireless links between the nodes. This paper describes the development and use of mathematical and simulation models to investigate the impact of dropping and reacquiring links between nodes on average packet delay; we also consider the performance of the alternative strategy of forwarding packets through intermediate nodes without breaking the established wireless links. The work presented here assumes that no specific MAC-layer protocol, such as the WiMedia UWB MAC, is in operation. The paper describes the models, explains the selection of the modeling parameters used, compares the average packet delay for a network of three simple UWB nodes and for a ring of ten UWB nodes, and explains how network design engineers can use these results.
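A back-of-the-envelope comparison of the two strategies the abstract contrasts, under deliberately simplified assumptions (fixed per-hop transmission time, fixed link-acquisition time, no queueing delay); all parameter values are illustrative and not taken from the paper's models.

```python
def delay_reacquire(t_service: float, t_acquire: float) -> float:
    """Drop the old link, re-acquire the destination node, then transmit once."""
    return t_acquire + t_service

def delay_forward(t_service: float, hops: int) -> float:
    """Keep the established links and relay through (hops - 1) intermediate nodes."""
    return hops * t_service

t_service, t_acquire = 0.5e-3, 5e-3   # seconds; illustrative values only
print(delay_reacquire(t_service, t_acquire))  # 0.0055 s
print(delay_forward(t_service, hops=2))       # 0.0010 s: forwarding wins in this setting
```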
15.
We explore the automatic generation of test data that respect constraints expressed in the Object-Role Modeling (ORM) language. ORM is a popular conceptual modeling language, primarily targeting database applications, with significant uses in practice. The general problem of even checking whether an ORM diagram is satisfiable is quite hard: restricted forms are easily NP-hard, and the problem is undecidable for some expressive formulations of ORM. Brute-force mapping to input for constraint and SAT solvers does not scale: state-of-the-art solvers fail to find data satisfying uniqueness and mandatory constraints in realistic time, even for small examples. We instead define a restricted subset of ORM that allows efficient reasoning yet contains most constraints overwhelmingly used in practice. We show that the problem of deciding whether these constraints are consistent (i.e., whether we can generate appropriate test data) is solvable in polynomial time, and we produce a highly efficient (interactive-speed) checker. Additionally, we analyze over 160 ORM diagrams that capture data models from industrial practice and demonstrate that our subset of ORM is expressive enough to handle the vast majority of them.
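As a toy illustration of what generating data that respects ORM constraints means, the sketch below populates a single binary fact type, "Person worksFor Company", under a uniqueness constraint and a mandatory constraint on the Person role. The fact type, names, and sizes are hypothetical and far simpler than the diagrams the paper handles.

```python
import random

def populate_works_for(people: list[str], companies: list[str]) -> list[tuple[str, str]]:
    """Each person appears exactly once (mandatory + unique on the Person role);
    companies may repeat, since no constraint restricts the Company role."""
    return [(p, random.choice(companies)) for p in people]

facts = populate_works_for(["ann", "bob", "carol"], ["acme", "globex"])
assert len({p for p, _ in facts}) == len(facts)  # uniqueness on the Person role holds
print(facts)
```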
16.
We define the notion of a controlled hybrid language that allows information sharing and interaction between a controlled natural language (specified by a context-free grammar) and a controlled visual language (specified by a Symbol-Relation grammar). We present the controlled hybrid language INAUT, used to represent nautical charts of the French Naval and Hydrographic Service (SHOM) and their companion texts (Instructions nautiques).
17.
Users expect applications to successfully cope with the expansion of information necessitated by the continuous inclusion of novel types of content. Given that such content may originate from 'not seen thus far' data collections and/or data sources, the challenging issue is to achieve a return on investment in existing services, adapting to new information without changing existing business-logic implementations. To address this need, we introduce DOLAR (Data Object Language And Runtime), a service-neutral framework which virtualizes the information space to avoid invasive, time-consuming, and expensive source-code extensions that frequently break applications. Specifically, DOLAR automates the introduction of new business-logic objects in terms of the proposed virtual 'content objects'. Such user-specified virtual objects align with storage artifacts and help realize uniform 'store-to-user' data flows atop heterogeneous sources, while offering the reverse 'user-to-store' flows with identical effectiveness and ease of use. In addition, the suggested virtual object composition schemes help decouple business logic from any content origin, storage, and/or structural details, allowing applications to support novel types of items without modifying their service provisions. We expect that content-rich applications will benefit from our approach and demonstrate how DOLAR has assisted in the cost-effective development and gradual expansion of a production-quality digital library. Experimentation shows that our approach imposes minimal overheads and that DOLAR-based applications scale as well as any underlying datastore(s). Copyright © 2011 John Wiley & Sons, Ltd.
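The following is a minimal sketch of the "virtual content object" idea as the abstract describes it: a business-level object whose fields are declaratively mapped to attributes in underlying stores, so new content types can be added by changing the mapping rather than the service code. The class names, mapping format, and data are hypothetical, not DOLAR's actual API.

```python
from dataclasses import dataclass

@dataclass
class FieldMapping:
    store: str        # which backing store holds the value
    attribute: str    # attribute name inside that store

class VirtualObject:
    """A business-logic object whose fields resolve through declarative mappings."""
    def __init__(self, mappings: dict[str, FieldMapping], stores: dict[str, dict]):
        self._mappings, self._stores = mappings, stores

    def get(self, field: str, key: str):
        m = self._mappings[field]
        return self._stores[m.store][key][m.attribute]

# Illustrative backing store and mapping; only the mapping changes for new content types.
stores = {"dl_db": {"doc42": {"ttl": "Digital Libraries", "yr": 2011}}}
article = VirtualObject({"title": FieldMapping("dl_db", "ttl")}, stores)
print(article.get("title", "doc42"))  # -> "Digital Libraries"
```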
18.
Content distribution networks (CDNs) improve scalability and reliability by replicating content to the “edge” of the Internet. Apart from the purely networking issues of CDNs relevant to establishing the infrastructure, some very crucial data management issues must be resolved to exploit the full potential of CDNs to reduce “last mile” latencies. A very important issue is the selection of the content to be prefetched to the CDN servers. All the approaches developed so far assume the existence of adequate content popularity statistics to drive the prefetch decisions. Such information, though, is not always available, or it is extremely volatile, rendering such methods problematic. To address this issue, we develop self-adaptive techniques to select the outsourced content in a CDN infrastructure, which require no a priori knowledge of request statistics. We identify clusters of “correlated” Web pages in a site, called Web site communities, and make these communities the basic outsourcing unit. Through a detailed simulation environment, using both real and synthetic data, we show that the proposed techniques are very robust and effective in reducing the user-perceived latency, performing very close to an infeasible, off-line policy that has full knowledge of the content popularity.
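A rough sketch of how "Web site communities" could serve as the outsourcing unit: pages whose requests (or links) are strongly correlated are connected, and each connected component is treated as one community to outsource as a whole. The pairing data and the use of connected components are illustrative; the paper's actual community-identification algorithm may differ.

```python
from collections import defaultdict

def communities(correlated_pairs: list[tuple[str, str]]) -> list[set[str]]:
    """Group pages into communities: connected components of the correlation graph."""
    graph = defaultdict(set)
    for a, b in correlated_pairs:
        graph[a].add(b)
        graph[b].add(a)
    seen, result = set(), []
    for page in list(graph):
        if page in seen:
            continue
        comp, stack = set(), [page]
        while stack:                       # depth-first walk over correlated pages
            p = stack.pop()
            if p not in comp:
                comp.add(p)
                stack.extend(graph[p] - comp)
        seen |= comp
        result.append(comp)
    return result

# -> two communities: {index, news, sports} and {about, contact}
print(communities([("index", "news"), ("news", "sports"), ("about", "contact")]))
```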
19.
DelosDLMS     
DelosDLMS is a novel digital library management system (DLMS) that has been developed as an integration effort within the DELOS Network of Excellence, a European Commission initiative funded under its fifth and sixth framework programs. In this paper, we describe DelosDLMS, which takes into account the recommendations of several activities initiated by DELOS, including the DELOS vision for digital libraries (DLs). A key aspect of DelosDLMS is its novel generic infrastructure, which allows digital library systems to be generated out of a set of basic system services and DL services in a modular and extensible way. DL services such as feature extraction, visualization, intelligent browsing, media-type-specific indexing, support for multilinguality, relevance feedback, and many others can easily be incorporated or replaced. A further key aspect of DelosDLMS is its robustness against failures and its scalability to large collections and many parallel user requests. We discuss the current status of the effort to build DelosDLMS, which integrates in various ways several components developed by DELOS members and showcases a great variety of functionality outlined as part of the DELOS vision.
20.
Association Rule Mining algorithms operate on a data matrix (e.g., customers × products) to derive association rules [AIS93b, SA96]. We propose a new paradigm, namely Ratio Rules, which are quantifiable in that we can measure the “goodness” of a set of discovered rules. We also propose the “guessing error” as a measure of this “goodness”: the root-mean-square error of the reconstructed values of the cells of the given matrix, when we pretend that they are unknown. Another contribution is a novel method to guess missing/hidden values from the Ratio Rules that our method derives. For example, if somebody bought $10 of milk and $3 of bread, our rules can “guess” the amount spent on butter. Thus, unlike association rules, Ratio Rules can perform a variety of important tasks such as forecasting, answering “what-if” scenarios, detecting outliers, and visualizing the data. Moreover, we show that we can compute Ratio Rules in a single pass over the data set with small memory requirements (a few small matrices), in contrast to association rule mining methods, which require multiple passes and/or large memory. Experiments on several real data sets (e.g., basketball and baseball statistics, biological data) demonstrate that the proposed method: (a) leads to rules that make sense; (b) can find large itemsets in binary matrices, even in the presence of noise; and (c) consistently achieves a “guessing error” of up to 5 times less than using straightforward column averages. Received: March 15, 1999 / Accepted: November 1, 1999
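The abstract does not spell out how Ratio Rules are computed or how the missing-value "guess" is made; the sketch below follows one common reading, treating a single rule as the leading principal direction of the data matrix (via SVD) and the guess as a least-squares reconstruction from the known cells. The toy spending data are illustrative only.

```python
import numpy as np

# Rows = customers, columns = spending on (milk, bread, butter) in dollars.
X = np.array([[10.0, 3.0, 2.0],
              [20.0, 6.0, 4.0],
              [ 5.0, 1.5, 1.0],
              [12.0, 3.6, 2.4]])

# Treat the leading right singular vector as a single Ratio Rule,
# i.e. the dominant spending ratio milk : bread : butter.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
rule = Vt[0]

def guess(known: dict[int, float], rule: np.ndarray) -> np.ndarray:
    """Fit the known cells onto the rule by least squares, then reconstruct every cell."""
    idx = np.array(list(known.keys()))
    vals = np.array(list(known.values()))
    coef = (vals @ rule[idx]) / (rule[idx] @ rule[idx])
    return coef * rule

# A customer bought $10 of milk and $3 of bread; estimate the butter spending.
print(guess({0: 10.0, 1: 3.0}, rule)[2])   # ~2.0 on this toy data
```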