  Paid full text   261 articles
  Free   14 articles
Electrical engineering   4 articles
General   1 article
Chemical industry   38 articles
Metalworking   7 articles
Machinery and instrumentation   2 articles
Building science   17 articles
Energy and power   13 articles
Light industry   20 articles
Water conservancy engineering   1 article
Petroleum and natural gas   3 articles
Radio and electronics   17 articles
General industrial technology   83 articles
Metallurgical industry   7 articles
Automation technology   62 articles
  2023   1 article
  2022   2 articles
  2021   7 articles
  2020   2 articles
  2019   10 articles
  2018   1 article
  2017   8 articles
  2016   6 articles
  2015   14 articles
  2014   6 articles
  2013   15 articles
  2012   15 articles
  2011   27 articles
  2010   12 articles
  2009   16 articles
  2008   22 articles
  2007   11 articles
  2006   23 articles
  2005   21 articles
  2004   8 articles
  2003   6 articles
  2002   4 articles
  2001   2 articles
  2000   5 articles
  1999   2 articles
  1998   4 articles
  1997   2 articles
  1996   1 article
  1995   1 article
  1994   3 articles
  1992   1 article
  1991   1 article
  1990   1 article
  1989   1 article
  1988   4 articles
  1987   1 article
  1985   1 article
  1983   2 articles
  1982   1 article
  1981   1 article
  1977   1 article
  1976   1 article
  1974   1 article
  1972   1 article
Sort order: 275 results found (search time: 31 ms)
1.
Poly(vinyl alcohol) is crosslinked in dilute solution (c=0.1 wt%) with glutaraldehyde. The reaction product is characterized by viscometry and gel permeation chromatography (g.p.c.). The intrinsic viscosity decreases with increasing degree of crosslinking and does not depend on temperature. G.p.c. reveals that the reaction product is not homogeneous, but consists of a mixture of particles with different sizes, possibly both intra- and intermolecularly crosslinked molecules. The intramolecularly crosslinked molecules are smaller in size than the initial polymer molecules and their size depends on the degree of crosslinking. They possess a narrow particle size distribution even if the initial polymer sample had a broad molecular weight distribution.
2.
This paper deals with source localization and strength estimation based on EEG and MEG data. It describes an estimation method (inverse procedure) which uses a four-spheres model of the head and a single current dipole. The dependency of the inverse solution on model parameters is investigated. It is found that sphere radii and conductivities influence especially the strength of the EEG equivalent dipole, and not its location or direction. The influence of the gradiometer on the equivalent dipole is also investigated. In general the MEG produces better location estimates than the EEG, whereas the reverse is found for the component estimates. An inverse solution based simultaneously on EEG and MEG data appears slightly better than the average of separate EEG and MEG solutions. Variances of the parameter estimators, which can be calculated on the basis of a linear approximation of the model, were tested by Monte Carlo simulations.
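The last step of this abstract, checking linearization-based parameter variances against Monte Carlo simulations, can be illustrated generically. The sketch below uses a simple exponential-decay model rather than the paper's four-spheres dipole model; the model, parameters and noise level are assumptions made only to show the comparison.

```python
# Hypothetical illustration (not the paper's four-spheres forward model):
# compare parameter variances from a linear approximation (Gauss-Newton
# covariance) with variances estimated by Monte Carlo simulation for a
# simple nonlinear model y = a * exp(-b * t) + noise.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 50)
true_params = np.array([1.5, 0.8])   # a, b
sigma = 0.05                          # noise standard deviation

def forward(p):
    a, b = p
    return a * np.exp(-b * t)

def jacobian(p):
    a, b = p
    return np.column_stack([np.exp(-b * t), -a * t * np.exp(-b * t)])

def fit(y):
    """Gauss-Newton fit of (a, b) to observations y, started near the truth."""
    p = true_params.copy()
    for _ in range(20):
        J = jacobian(p)
        r = y - forward(p)
        p = p + np.linalg.solve(J.T @ J, J.T @ r)
    return p

# Variance from the linear approximation: cov ~ sigma^2 * (J^T J)^-1 at the truth.
J = jacobian(true_params)
cov_linear = sigma**2 * np.linalg.inv(J.T @ J)

# Variance from Monte Carlo simulation: refit on many noisy realizations.
estimates = np.array([fit(forward(true_params) + rng.normal(0, sigma, t.size))
                      for _ in range(2000)])
cov_mc = np.cov(estimates.T)

print("linearized variances :", np.diag(cov_linear))
print("Monte Carlo variances:", np.diag(cov_mc))
```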
3.
Perceiving the affordance of a tool requires the integration of several complementary relationships among actor, tool, and target. Higher order affordance structures are introduced to deal with these forms of complex action from an ecological-realist point of view. The complexity of the higher order affordance structure was used to predict the difficulty of perceiving the tool function. Predictions were tested in 3 experiments involving children between 9 months and 4 years old. In a classical tool use task dating back to W. Köhler (1921), a desirable target was obtained by using a hook as a tool. The relative positions of the hook and the target were systematically varied to obtain structures differing in complexity. The observed difficulty of the task was found to be essentially in accordance with the theoretical complexity of the higher order affordance structures involved in perceiving the tool function. (PsycINFO Database Record (c) 2010 APA, all rights reserved)
4.
Primary and secondary diagnosis of multi-agent plan execution   (total citations: 1; self-citations: 1; citations by others: 0)
Diagnosis of plan failures is an important subject in both single- and multi-agent planning. Plan diagnosis can be used to deal with plan failures in three ways: (i) to provide information necessary for the adjustment of the current plan or for the development of a new plan, (ii) to point out which equipment and/or agents should be repaired or adjusted to avoid further violation of the plan execution, and (iii) to identify the agents responsible for plan-execution failures. We introduce two general types of plan diagnosis: primary plan diagnosis identifying the incorrect or failed execution of actions, and secondary plan diagnosis that identifies the underlying causes of the faulty actions. Furthermore, three special cases of secondary plan diagnosis are distinguished, namely agent diagnosis, equipment diagnosis and environment diagnosis.
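The taxonomy described above (primary diagnosis of failed action executions, secondary diagnosis of their underlying causes, with agent, equipment and environment diagnosis as special cases) can be captured in a small data structure. The sketch below is purely illustrative; the class and field names are not taken from the paper.

```python
# Illustrative sketch of the diagnosis taxonomy described above; the class and
# field names are assumptions, not definitions taken from the paper.
from dataclasses import dataclass
from enum import Enum

class SecondaryKind(Enum):
    AGENT = "agent"              # a malfunctioning or misbehaving agent
    EQUIPMENT = "equipment"      # faulty equipment used by an action
    ENVIRONMENT = "environment"  # unexpected change in the environment

@dataclass
class PrimaryDiagnosis:
    """Identifies which action executions were incorrect or failed."""
    failed_actions: list[str]

@dataclass
class SecondaryDiagnosis:
    """Identifies an underlying cause for a set of faulty actions."""
    kind: SecondaryKind
    cause: str
    explains: PrimaryDiagnosis

primary = PrimaryDiagnosis(failed_actions=["load_cargo", "deliver_cargo"])
secondary = SecondaryDiagnosis(SecondaryKind.EQUIPMENT, "crane_3 hydraulic failure", primary)
print(secondary)
```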
5.
We investigate the approximation ratio of the solutions achieved after a one-round walk in linear congestion games. We consider the social functions Sum, defined as the sum of the players’ costs, and Max, defined as the maximum cost per player, as a measure of the quality of a given solution. For the social function Sum and one-round walks starting from the empty strategy profile, we close the gap between the upper bound of \(2+\sqrt{5}\approx 4.24\) given in Christodoulou et al. (Proceedings of the 23rd International Symposium on Theoretical Aspects of Computer Science (STACS), LNCS, vol. 3884, pp. 349–360, Springer, Berlin, 2006) and the lower bound of 4 derived in Caragiannis et al. (Proceedings of the 33rd International Colloquium on Automata, Languages and Programming (ICALP), LNCS, vol. 4051, pp. 311–322, Springer, Berlin, 2006) by providing a matching lower bound whose construction and analysis require non-trivial arguments. For the social function Max, for which, to the best of our knowledge, no results were known prior to this work, we show an approximation ratio of \(\Theta(\sqrt[4]{n^{3}})\) (resp. \(\Theta(n\sqrt{n})\)), where n is the number of players, for one-round walks starting from the empty (resp. an arbitrary) strategy profile.
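For readers unfamiliar with one-round walks, the following sketch simulates one in a toy linear congestion game: players enter one at a time, starting from the empty strategy profile, and each best-responds to the choices made so far; the resulting Sum cost is then compared against the brute-force optimum. The instance and latency coefficients are made up for illustration and have nothing to do with the paper's lower-bound constructions.

```python
# A small simulation of a one-round walk from the empty strategy profile in a
# linear congestion game (latency of resource e is a_e * x + b_e for load x).
import itertools

# resource -> (a_e, b_e)
resources = {"r1": (1, 0), "r2": (2, 0), "r3": (1, 1)}
# each player picks one of its strategies; a strategy is a set of resources
strategies = [
    [{"r1"}, {"r2", "r3"}],
    [{"r1", "r2"}, {"r3"}],
    [{"r2"}, {"r1", "r3"}],
]

def player_cost(strategy, loads):
    return sum(a * loads[e] + b for e, (a, b) in resources.items() if e in strategy)

def profile_costs(profile):
    loads = {e: sum(e in s for s in profile) for e in resources}
    return [player_cost(s, loads) for s in profile]

# One-round walk: players enter one at a time and best-respond to the
# choices made so far (the remaining players have not chosen yet).
profile = []
for options in strategies:
    def cost_if(choice):
        loads = {e: sum(e in s for s in profile) + (e in choice) for e in resources}
        return player_cost(choice, loads)
    profile.append(min(options, key=cost_if))

sum_after_walk = sum(profile_costs(profile))

# Optimal Sum by brute force over all full profiles (feasible only for tiny games).
optimum = min(sum(profile_costs(p)) for p in itertools.product(*strategies))

print("Sum after one-round walk:", sum_after_walk)
print("optimal Sum:             ", optimum)
print("approximation ratio:     ", sum_after_walk / optimum)
```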
6.
In this paper we present the Name-It-Game, an interactive multimedia game fostering the swift creation of a large data set of region-based image annotations. Compared to existing annotation games, we consider an added semantic structure, by means of the WordNet ontology, the main innovation of the Name-It-Game. Using an ontology-powered game, instead of the more traditional annotation tools, potentially makes region-based image labeling more fun and accessible for every type of user. However, the current games often present the players with hard-to-guess objects. To prevent this from happening in the Name-It-Game, we successfully identify WordNet categories which filter out hard-to-guess objects. To verify the speed of the annotation process, we compare the online Name-It-Game with a desktop tool with similar features. Results show that the Name-It-Game outperforms this tool for semantic region-based image labeling. Lastly, we measure the accuracy of the produced segmentations and compare them with carefully created LabelMe segmentations. Judging from the quantitative and qualitative results, we believe the segmentations are competitive to those of LabelMe, especially when averaged over multiple games. By adding semantics to region-based image annotations, using the Name-It-Game, we have opened up an efficient means to provide precious labels in a playful manner.
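As a rough illustration of ontology-based filtering of hard-to-guess objects, the sketch below uses NLTK's WordNet interface to keep only nouns that descend from physical_entity.n.01. This criterion is an assumption chosen for the example; it is not the category filter actually used in the Name-It-Game.

```python
# Sketch of filtering label candidates with WordNet so that only concrete,
# depictable objects remain; the criterion (descending from physical_entity.n.01)
# is an assumption for illustration, not the filter used in the Name-It-Game.
# Requires: pip install nltk; then nltk.download('wordnet') once.
from nltk.corpus import wordnet as wn

PHYSICAL = wn.synset("physical_entity.n.01")

def is_depictable(word):
    """True if some noun sense of the word is a kind of physical entity."""
    for synset in wn.synsets(word, pos=wn.NOUN):
        for path in synset.hypernym_paths():
            if PHYSICAL in path:
                return True
    return False

for candidate in ["bicycle", "dog", "honesty", "algorithm"]:
    print(f"{candidate:>10}: {'keep' if is_depictable(candidate) else 'filter out'}")
```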
7.
Many economic and social systems are essentially large multi-agent systems. By means of computational modeling, the complicated behavior of such systems can be investigated. Modeling a multi-agent system as an evolutionary agent system, several important choices have to be made for evolutionary operators. Especially, it is to be expected that evolutionary dynamics substantially depend on the selection scheme. We therefore investigate the influence of evolutionary selection mechanisms on a fundamental problem: the iterated prisoner's dilemma (IPD), an elegant model for the emergence of cooperation in a multi-agent system. We observe various types of behavior, cooperation level, and stability, depending on the selection mechanism and the selection intensity. Hence, our results are important for (1) the proper choice and application of selection schemes when modeling real economic situations and (2) assessing the validity of the conclusions drawn from computer experiments with these models. We also conclude that the role of selection in the evolution of multi-agent systems should be investigated further, for instance using more detailed and complex agent interaction models.
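A minimal sketch of the kind of experiment described here is given below: a population of IPD agents, each reduced to a single cooperation probability (so reciprocity is deliberately absent), evolved under two common selection schemes. The agent model, payoffs and parameters are generic assumptions, not the paper's setup; the point is only to show where the selection mechanism and its intensity enter such a simulation.

```python
# Minimal sketch (not the paper's exact model): IPD agents whose strategy is a
# single probability of cooperating, evolved under two common selection schemes,
# so the cooperation level reached under each scheme can be compared.
import random

# Standard IPD payoffs: my payoff for (my_move, opponent_move), C = cooperate.
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}
ROUNDS, POP, GENS = 20, 40, 60

def play(p1, p2):
    """Average payoffs of two probabilistic cooperators over ROUNDS rounds."""
    s1 = s2 = 0
    for _ in range(ROUNDS):
        m1 = "C" if random.random() < p1 else "D"
        m2 = "C" if random.random() < p2 else "D"
        s1 += PAYOFF[(m1, m2)]
        s2 += PAYOFF[(m2, m1)]
    return s1 / ROUNDS, s2 / ROUNDS

def fitnesses(pop):
    fit = [0.0] * len(pop)
    for i in range(len(pop)):
        for j in range(i + 1, len(pop)):
            a, b = play(pop[i], pop[j])
            fit[i] += a
            fit[j] += b
    return fit

def tournament(pop, fit, k=4):          # higher k = higher selection intensity
    contenders = random.sample(range(len(pop)), k)
    return pop[max(contenders, key=lambda i: fit[i])]

def roulette(pop, fit):                  # fitness-proportional selection
    return random.choices(pop, weights=fit, k=1)[0]

def evolve(select):
    pop = [random.random() for _ in range(POP)]
    for _ in range(GENS):
        fit = fitnesses(pop)
        # offspring: selected parent plus a small Gaussian mutation, clipped to [0, 1]
        pop = [min(1.0, max(0.0, select(pop, fit) + random.gauss(0, 0.05)))
               for _ in range(POP)]
    return sum(pop) / POP

random.seed(1)
print("mean cooperation prob., tournament selection:", round(evolve(tournament), 2))
print("mean cooperation prob., roulette selection:  ", round(evolve(roulette), 2))
```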
8.
It is becoming increasingly important for companies to align business strategy and information strategy. The approach developed at Hewlett-Packard is one way to achieve this. Proper alignment and embeddedness within the IT environment are needed to realise business objectives and achieve competitive advantage. Effectiveness and flexibility are the keywords for ensuring competitive advantage: delivering a maximum service level to customers, reacting adequately to market changes and optimising the business processes. Harvesting the creativity of users is an important aspect of the process. ‘Our fundamental goal is to build positive, long-term relationships with our customers, relationships characterised by mutual respect, by courtesy and integrity, by a helpful, effective response to customer needs and concerns, and by a strong commitment to providing products and services of the highest quality.’ (extracted from Hewlett-Packard’s Corporate Objectives for Customers) One of the objectives of Hewlett-Packard is to create a competitive advantage for both the customers and the company by selling and delivering quality business solutions. This objective and the globalisation of our services call for a business solution delivery approach based on standard products and in conformity with internationally accepted standards. The quality of the business solution delivery process is managed through the project delivery process. The customer’s appreciation of the delivered business solution is an implicit result of the quality of each individual project. This quality is based on international standards and generic company methodologies. These standards and methodologies, combined with the quality of the people, are the drivers for embedding quality within the entire organisation. The resulting total quality approach triggers a continuous quality improvement process in order to guarantee a win-win situation for both the customer and Hewlett-Packard. The delivery of quality business solutions requires a homogeneous set of quality objectives, performance goals, quality system implementation guidelines and a strategy for continuous quality improvement. Within Hewlett-Packard, the Project Management Services department is responsible for the project delivery process through which the quality of the business solution delivery is managed. This paper describes how this is done.
9.
Cees Duin. Algorithmica, 2004, 41(2): 131-145
We formulate and study an algorithm for all-pairs shortest paths in a network with $n$ nodes and $m$ arcs of positive length. Using the dynamic programming principle of optimality of subpaths, the algorithm avoids redundant updates of distance labels. A shortest $v$--$w$ path, say $\langle v, r_1, r_2, \ldots, r_k = w \rangle$ with $k$ arcs ($k \geq 1$), is combined with an arc $(w,t) \in A$ to update the distance label of the pair $v$--$t$ only if $(w,t)$ is present on the shortest $r_{\ell}$--$t$ path for each node $r_{\ell}$ $(\ell = k-1, k-2, \ldots, 1)$. The algorithm extracts shortest paths in order of length from a data structure and builds two shortest path trees per node, an extra effort of $O(n^{2})$. In this way it efficiently executes only the aforementioned distance updates, by picking the arcs $(w,t)$ out of these trees. The time complexity per distance update and per path extraction is of the same order as in other algorithms. An implementation with a data structure of heaps is possible, but a bucket-type data structure may be more appropriate. The implied number of distance updates does not exceed $nm_{0}$ ($m_{0}$ being the total number of shortest path arcs), but is frequently much lower. In extreme cases the new algorithm applies $O(n^{2})$ distance updates, whereas known algorithms require $\Omega(n^{3})$ updates. The algorithm is especially suited for undirected graphs; here the construction of one tree per node is sufficient and the computation times halve.
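The sketch below is not the algorithm of this paper; it is a plain all-pairs baseline (Dijkstra run from every node) instrumented to count distance-label updates, included only to make the "number of distance updates" measure concrete as a point of comparison.

```python
# Baseline only (not Duin's algorithm): repeated Dijkstra with a counter for
# distance-label updates, so the update counts discussed above have a reference.
import heapq

def dijkstra(adj, source):
    """Return (distances, number_of_distance_updates) for one source node."""
    dist = {v: float("inf") for v in adj}
    dist[source] = 0.0
    updates = 0
    heap = [(0.0, source)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue                      # stale heap entry
        for w, length in adj[v]:
            if d + length < dist[w]:      # a distance-label update
                dist[w] = d + length
                updates += 1
                heapq.heappush(heap, (dist[w], w))
    return dist, updates

def all_pairs(adj):
    total_updates = 0
    dists = {}
    for v in adj:
        dists[v], u = dijkstra(adj, v)
        total_updates += u
    return dists, total_updates

# Small example graph: adjacency lists of (neighbor, positive arc length).
adj = {
    "a": [("b", 2.0), ("c", 5.0)],
    "b": [("c", 1.0), ("d", 4.0)],
    "c": [("d", 1.0)],
    "d": [("a", 3.0)],
}
dists, updates = all_pairs(adj)
print("d(a, d) =", dists["a"]["d"])
print("total distance updates:", updates)
```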
10.
We propose a novel framework for processing a continuous speech stream that contains a varying number of words, as well as non-speech periods. Speech samples are segmented into word-tokens and non-speech periods. An augmented version of an earlier-proposed, cascaded neuro-computational model is used for recognising individual words within the stream. Simulation studies using both a multi-speaker-dependent and a speaker-independent digit string database show that the proposed method yields recognition performance comparable to that obtained by a benchmark approach using hidden Markov models with embedded training.
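As a generic illustration of the segmentation step only (not the paper's neuro-computational framework), the sketch below splits a synthetic one-dimensional sample stream into word-tokens and non-speech periods with a short-time energy threshold; the sample rate, frame length and threshold are arbitrary assumptions.

```python
# Generic illustration: split a 1-D sample stream into "word" and "non-speech"
# segments using a short-time energy threshold. The synthetic signal, frame
# size and threshold are arbitrary choices made for this sketch.
import numpy as np

rng = np.random.default_rng(3)
sr = 8000                                     # assumed sample rate (Hz)

def silence(sec):
    return 0.01 * rng.standard_normal(int(sr * sec))

def word(sec):
    n = int(sr * sec)
    return np.sin(2 * np.pi * 300 * np.arange(n) / sr) + 0.01 * rng.standard_normal(n)

stream = np.concatenate([silence(0.3), word(0.4), silence(0.2), word(0.6), silence(0.3)])

frame = int(0.02 * sr)                        # 20 ms frames
n_frames = len(stream) // frame
energy = np.array([np.mean(stream[i*frame:(i+1)*frame] ** 2) for i in range(n_frames)])
is_speech = energy > 0.01                     # crude fixed threshold

# Collect contiguous runs of speech frames as word-token boundaries (in seconds).
segments, start = [], None
for i, s in enumerate(is_speech):
    if s and start is None:
        start = i
    if not s and start is not None:
        segments.append((start * frame / sr, i * frame / sr))
        start = None
if start is not None:
    segments.append((start * frame / sr, n_frames * frame / sr))

print("detected word-token segments (s):", [(round(a, 2), round(b, 2)) for a, b in segments])
```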