Results 991-1000 of 1,098 are shown below.
991.
992.
In this paper, we consider the periodic review joint replenishment problem under the class of cyclic policies. For each item, the demand in the protection interval is assumed to be stochastic. Moreover, a fraction of each shortage is lost, while the remaining fraction is backordered. We suppose that lead times and minor ordering costs are controllable. The problem is to determine the cyclic replenishment policy, the lead times, and the minor ordering costs that minimize the long-run expected total cost per time unit. We establish several properties of the cost function, which allow us to derive a heuristic algorithm. A lower bound on the minimum cost is obtained, which helps us evaluate the effectiveness of the proposed heuristic. The heuristic is also compared with a hybrid genetic algorithm developed specifically for benchmarking purposes. Numerical experiments were carried out to investigate the performance of the heuristic.
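As a rough illustration of the kind of cyclic-policy optimization described above, the sketch below minimizes a simplified deterministic joint-replenishment cost in which item i is reordered every k_i*T time units. All symbols (major ordering cost K, minor ordering costs a_i, demand rates d_i, holding cost rates h_i) and the coordinate-descent loop are assumptions for illustration; the paper's stochastic protection-interval demand, lost-sales/backorder split, and controllable lead times are omitted.

```python
import math

def best_T(ks, items, K):
    """For fixed integer multipliers ks, the optimal base cycle has the
    usual EOQ-type closed form T* = sqrt(num / den)."""
    num = K + sum(a / k for k, (a, d, h) in zip(ks, items))
    den = sum(h * d * k / 2.0 for k, (a, d, h) in zip(ks, items))
    return math.sqrt(num / den)

def avg_cost(ks, items, K):
    # Long-run average cost per time unit at the best base cycle T.
    T = best_T(ks, items, K)
    return ((K + sum(a / k for k, (a, _, _) in zip(ks, items))) / T
            + T * sum(h * d * k / 2.0 for k, (_, d, h) in zip(ks, items)))

def heuristic(items, K):
    """Coordinate descent over the integer multipliers k_i >= 1."""
    ks = [1] * len(items)
    improved = True
    while improved:
        improved = False
        for i in range(len(ks)):
            for step in (1, -1):
                trial = ks[:]
                trial[i] = max(1, trial[i] + step)
                if avg_cost(trial, items, K) < avg_cost(ks, items, K) - 1e-12:
                    ks, improved = trial, True
    return ks, best_T(ks, items, K), avg_cost(ks, items, K)

# items: (minor ordering cost a_i, demand rate d_i, holding cost rate h_i)
items = [(10.0, 50.0, 1.0), (2.0, 20.0, 0.5), (1.0, 5.0, 2.0)]
print(heuristic(items, K=100.0))
```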
993.
Developing effective techniques for dealing with data from structured domains is becoming crucial. In this context, kernel methods are the state-of-the-art tool, widely adopted in real-world applications that involve learning on structured data. Conversely, when one has to deal with unstructured domains, deep learning methods represent a competitive, or even better, choice. In this paper we propose a new family of kernels for graphs which exploits an abstract representation of the information inspired by the multilayer perceptron architecture. Our proposal combines the advantages of the two worlds: on one side, it exploits the potential of state-of-the-art graph node kernels; on the other, it develops a multilayer architecture through a series of stacked kernel pre-image estimators, trained in an unsupervised fashion via convex optimization. The hidden layers of the proposed framework are trained in a forward manner, which allows us to avoid the greedy layer-wise training of classical deep learning. Results on real-world graph datasets confirm the quality of the proposal.
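A minimal sketch of the stacking idea, under strong simplifying assumptions: the base layer is an exponential diffusion node kernel on the graph Laplacian, and each subsequent "layer" simply re-kernelizes the implicit feature-space distances of the previous one with an RBF, standing in for the trained kernel pre-image estimators actually used in the paper. All function names and the choice of base kernel are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def diffusion_kernel(A, beta=0.5):
    # Base node kernel: exponential diffusion over the graph Laplacian.
    L = np.diag(A.sum(axis=1)) - A
    return expm(-beta * L)

def rbf_layer(K, sigma=1.0):
    # One "hidden layer": an RBF kernel on the feature-space distances
    # induced by the previous kernel (pure kernel trick, no explicit
    # pre-images -- a stand-in for the paper's trained estimators).
    d = np.diag(K)
    sq = np.clip(d[:, None] + d[None, :] - 2.0 * K, 0.0, None)
    return np.exp(-sq / (2.0 * sigma ** 2))

def deep_node_kernel(A, n_layers=3):
    K = diffusion_kernel(A)
    for _ in range(n_layers):
        K = rbf_layer(K)
    return K

# Toy 4-node path graph
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(deep_node_kernel(A).round(3))
```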
994.
Wireless Sensor Networks (WSNs) are large networks of tiny sensor nodes that are usually randomly distributed over a geographical region. The network topology may vary over time in an unpredictable manner for many reasons. For example, to reduce power consumption, battery-operated sensors undergo cycles of sleeping and active periods; additionally, sensors may be located in hostile environments, increasing their likelihood of failure; furthermore, data might be collected from a range of sources at different times. For this reason, the multi-hop routing algorithms used to route messages from a sensor node to a sink should adapt rapidly to the changing topology. Swarm intelligence has been proposed for this purpose, since it allows a single global behavior to emerge from the interaction of many simple local agents. Swarm-intelligent routing has traditionally been studied through simulation. The present paper aims to show that the recently proposed modeling technique known as the Markovian Agent Model (MAM) is well suited to implementing swarm-intelligent algorithms for large networks of interacting sensors. Various experimental results and quantitative performance indices are evaluated to support this claim. The validity of the approach is further demonstrated by comparing the results with those obtained using a WSN discrete-event simulator.
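To make the swarm-routing idea concrete, here is a minimal ant-colony-style sketch (not the paper's MAM formulation): each node keeps a pheromone level per outgoing link, chooses the next hop with probability proportional to pheromone, and successful deliveries reinforce the traversed links while every link evaporates. The topology, rates, and constants are invented for illustration.

```python
import random

def choose_next_hop(node, graph, pheromone):
    # Next hop drawn with probability proportional to pheromone level.
    neighbors = graph[node]
    weights = [pheromone[(node, n)] for n in neighbors]
    return random.choices(neighbors, weights=weights, k=1)[0]

def route(src, sink, graph, pheromone, max_hops=20):
    path, node = [src], src
    while node != sink and len(path) <= max_hops:
        node = choose_next_hop(node, graph, pheromone)
        path.append(node)
    return path if node == sink else None

def update(pheromone, path, deposit=1.0, evaporation=0.1):
    # Evaporation on every link; reinforcement along a successful path.
    for link in pheromone:
        pheromone[link] *= (1.0 - evaporation)
    if path:
        for a, b in zip(path, path[1:]):
            pheromone[(a, b)] += deposit

# Toy topology: node 0 is the source, node 3 the sink.
graph = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
pheromone = {(a, b): 1.0 for a in graph for b in graph[a]}
for _ in range(200):
    update(pheromone, route(0, 3, graph, pheromone))
print(pheromone)  # links on short source-to-sink paths accumulate pheromone
```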
995.
996.
Background: Test-Driven Development (TDD) is claimed to have positive effects on external code quality and on programmers' productivity. The main driver for these possible improvements is the tests enforced by the test-first nature of TDD, as previously investigated in a controlled experiment (i.e., the original study). Aim: Our goal is to examine the nature of the relationship between tests and external code quality, as well as programmers' productivity, in order to verify or refute the results of the original study. Method: We conducted a differentiated and partial replication of the original setting and the related analyses, with a focus on the role of tests. Specifically, while the original study compared test-first vs. test-last, our replication employed the test-first treatment only. The replication involved 30 students, working in pairs or individually, in the context of a graduate course, and resulted in 16 software artifacts. We performed linear regression to test the original study's hypotheses, and analyses of covariance to test the additional hypotheses imposed by the changes in the replication setting. Results: We found a significant correlation (Spearman coefficient = 0.66, p-value = 0.004) between the number of tests and productivity, and a positive regression coefficient (p-value = 0.011). We found no significant correlation (Spearman coefficient = 0.41, p-value = 0.11) between the number of tests and external code quality (regression coefficient p-value = 0.0513). In both cases we observed no statistically significant interaction caused by the subject units being individuals or pairs. Further, our results are consistent with the original study despite changes in the timing constraints for finishing the task and in the enforced development processes. Conclusions: This replication confirms the results of the original study concerning the relationship between the number of tests and both external code quality and programmer productivity. Moreover, it allows us to identify additional context variables under which the original results still hold, namely the subject unit, the timing constraint, and the isolation of the test-first process. Based on our findings, we recommend that practitioners implement as many tests as possible in order to achieve higher baselines for quality and productivity.
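The kind of analysis reported above can be reproduced in a few lines. The sketch below runs a Spearman correlation and an ordinary linear regression of productivity on the number of tests; the 16 data points are synthetic stand-ins, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Synthetic stand-ins for the 16 artifacts: tests written vs. productivity.
n_tests = rng.integers(5, 60, size=16)
productivity = 0.8 * n_tests + rng.normal(0.0, 8.0, size=16)

rho, p_rho = stats.spearmanr(n_tests, productivity)
reg = stats.linregress(n_tests, productivity)
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.4f})")
print(f"regression slope = {reg.slope:.2f} (p = {reg.pvalue:.4f})")
```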
997.
Negotiation is a collaborative activity that requires the participation of different parties, whose behaviors influence the outcome of the whole process. The work presented here focuses on identifying such behaviors and their impact on the negotiation process. The premise of this study is that identifying and cataloging the behavior of the parties during a negotiation may help clarify the role that stress plays in the process. To this end, an experiment based on a negotiation game was implemented, during which behavioral and contextual information about the participants was acquired. The data from this negotiation game were analyzed to identify the conflict styles used by each party and to extract behavioral patterns from the interactions, useful for developing plans and suggestions for the participants involved. The work highlights the importance of knowledge about social interactions as a basis for informed decision support in situations of conflict.
998.
This paper reports a study demonstrating the advantages of virtual-reality-based systems for training automotive assembly tasks. Sixty participants were randomly assigned to one of three training experiences to learn a car service procedure: (1) observational training through video instruction; (2) experiential virtual training and trial in a CAVE; and (3) experiential virtual training and trial through a portable 3D interactive table. Results show that, after training, the virtually trained participants remembered the correct execution of the steps significantly better (p < .05) than the video-trained participants. No significant differences were identified between the two experiential groups, either in post-training performance or in proficiency, despite the differences in interaction devices. The relevance of these findings for the automotive field and for designers of virtual training applications is discussed; in particular, virtual training experienced through a portable device such as the interactive table can be as effective as training performed in a CAVE. This suggests that automotive companies could invest in advanced portable hardware to deliver long-distance training programs effectively to car service operators all over the world.
999.
This paper proposes an explicit model predictive control design approach for the regulation of linear time-invariant systems subject to both state and control constraints, in the presence of additive disturbances. The proposed control law is implemented as a piecewise-affine function defined on a regular simplicial partition and has two main positive features. First, the regularity of the simplicial partition allows the control law to be implemented efficiently on digital circuits, achieving extremely fast computation times. Second, the asymptotic stability (or the convergence to a set containing the origin) of the closed-loop system can be enforced a priori, rather than checked a posteriori via Lyapunov analysis.
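The efficiency claim rests on the fact that, on a regular simplicial (Kuhn-type) partition, point location needs no region search: the cell follows from floor() and the simplex within the cell from sorting the fractional coordinates. The sketch below evaluates a piecewise-affine law this way; the random gains are stand-ins for the ones an explicit MPC design would compute offline, and all names are illustrative.

```python
import numpy as np
from itertools import permutations, product

def locate_simplex(x, grid):
    # Point location in a regular Kuhn partition of [0,1]^n:
    # the cell index comes from floor(), the simplex within the cell
    # from the ordering of the fractional parts -- O(n log n), no tree.
    z = np.clip(x * grid, 0.0, grid - 1e-9)
    cell = np.floor(z).astype(int)
    perm = tuple(int(i) for i in np.argsort(-(z - cell)))
    return tuple(int(c) for c in cell), perm

def pwa_control(x, gains, grid):
    # Piecewise-affine law u = F x + g, with (F, g) looked up per region.
    F, g = gains[locate_simplex(x, grid)]
    return F @ x + g

# Random stand-in gains for every region (an explicit MPC solver would
# produce these offline, together with the stability guarantees).
n, grid = 2, 4
rng = np.random.default_rng(0)
gains = {(cell, perm): (rng.normal(size=(1, n)), rng.normal(size=1))
         for cell in product(range(grid), repeat=n)
         for perm in permutations(range(n))}

x = np.array([0.30, 0.75])
print(pwa_control(x, gains, grid))
```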
1000.
The international patent corpus is a gigantic source, today containing about 80 million documents. Every patent is manually analyzed by patent officers and then classified with a specific code called a Patent Class (PC). The Cooperative Patent Classification (CPC) is the classification system introduced in January 2013 to standardize the classification systems of all major patent offices. Like keywords for papers, PCs point to the core of the invention, describing concisely what the patent contains. Most patent-search strategies use PCs as filters for results, so the selection of relevant PCs is often a primary and crucial activity. This task is considered particularly challenging, and only a few tools have been developed specifically for this purpose; the most efficient are provided by the patent offices of the EPO and WIPO. This paper analyzes their PC search strategies (mainly keyword-based engines) in order to identify the main limitations in terms of missed relevant PCs (recall) and non-relevant results (precision). Patents have been processed by KOM, a semantic patent search tool developed by the authors. Unlike other PC search tools, KOM uses a semantic parser and many knowledge bases to carry out a conceptual patent search. Its functioning is described step by step through a detailed analysis pointing out the benefits of a concept-based search vis-à-vis a keyword-based search. An exemplary case dealing with the CPCs describing the sterilization of contact lenses is presented. The comparison could likewise be conducted on other PC systems such as the International (IPC), European (ECLA), or United States (USPC) patent classification codes.
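A toy contrast between the two search styles analyzed in the paper. The CPC definition texts and the synonym table below are illustrative stand-ins (KOM's actual semantic parser and knowledge bases are far richer), but the sketch shows why a plain keyword filter loses recall when the classification uses different wording than the query.

```python
# Toy CPC definitions (hypothetical wording, for illustration only).
cpc = {
    "A61L12/08": "disinfection or sterilising of contact lenses using chemical substances",
    "A61L2/07": "sterilisation by steam",
    "G02C13/00": "assembling, repairing or cleaning of spectacles",
}

def keyword_search(query_terms, defs):
    # Plain keyword filter: a class matches only if every term occurs verbatim.
    return [c for c, text in defs.items()
            if all(t in text for t in query_terms)]

# A crude "concept" layer: each query term is expanded with known synonyms,
# standing in for the semantic parser and knowledge bases used by KOM.
synonyms = {"sterilization": {"sterilization", "sterilising",
                              "sterilisation", "disinfection"}}

def concept_search(query_terms, defs):
    return [c for c, text in defs.items()
            if all(any(s in text for s in synonyms.get(t, {t}))
                   for t in query_terms)]

query = ["sterilization", "contact lenses"]
print(keyword_search(query, cpc))  # [] -- misses A61L12/08 (recall loss)
print(concept_search(query, cpc))  # ['A61L12/08']
```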