Sort by: 1207 results found (search time: 15 ms)
101.
102.
In this paper, we present the Adaptive Smooth Simulcast Protocol (ASSP) for simulcast transmission of multimedia data over best-effort networks. ASSP is a new multiple-rate protocol that implements a single-rate TCP-friendly protocol as the underlying congestion control mechanism for each simulcast stream. The key attributes of ASSP are: (a) TCP-friendly behavior, (b) adaptive per-stream transmission rates, (c) adaptive scalability to large sets of receivers, and (d) smooth transmission rates that are suitable for multimedia applications. We evaluate the performance of ASSP in an integrated simulation environment that combines both network and video performance metrics. We also compare ASSP against other proposed solutions, and the results demonstrate that ASSP performs significantly better than the tested alternatives. Finally, ASSP is a practical solution with very low implementation complexity for video transmission over best-effort networks.
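The abstract does not give ASSP's equations, but a common way to realize a single-rate TCP-friendly mechanism is equation-based rate control in the style of TFRC (RFC 5348), where each stream caps its sending rate by the TCP throughput equation. The sketch below is an illustration of that general idea under simplified assumptions (e.g., `t_RTO = 4*RTT`), not ASSP's actual implementation:

```python
import math

def tcp_friendly_rate(s, rtt, p, b=1):
    """TCP throughput equation (TFRC-style, per RFC 5348).

    s: packet size in bytes, rtt: round-trip time in seconds,
    p: loss event rate (0 < p <= 1), b: packets acknowledged per ACK.
    Returns an upper bound on the sending rate in bytes/second.
    """
    t_rto = 4 * rtt  # common simplification for the retransmit timeout
    denom = rtt * math.sqrt(2 * b * p / 3) + \
            t_rto * 3 * math.sqrt(3 * b * p / 8) * p * (1 + 32 * p ** 2)
    return s / denom

# Each simulcast stream would cap its rate by this bound, keeping the
# aggregate fair to competing TCP flows on the same bottleneck.
low_loss = tcp_friendly_rate(s=1200, rtt=0.1, p=0.01)
high_loss = tcp_friendly_rate(s=1200, rtt=0.1, p=0.05)
```

As expected for a TCP-friendly controller, the allowed rate falls as the observed loss event rate rises.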
103.
This paper suggests a modeling framework to investigate the optimal strategy followed by a monopolistic firm aiming to manipulate the process of opinion formation in a social network. The monopolist and a set of consumers communicate to form their beliefs about the underlying product quality. Since the firm’s associated optimization problem can be analytically solved only under specific assumptions, we rely on the sequential quadratic programming computational approach to characterize the equilibrium. When consumers’ initial beliefs are uniform, the firm’s optimal influence strategy always involves targeting the most influential consumer. For the case of non-uniform initial beliefs, the monopolist might target the less influential consumer if the latter’s initial opinion is low enough. The probability of investing more in the consumer with the lower influence increases with the distance between consumers’ initial beliefs and with the degree of trust the firm places in consumers. The firm’s profit is minimized when consumers’ influences become equal, implying that the firm benefits from the presence of consumers with divergent strategic locations in the network. In the absence of a binding constraint on total investment, the monopolist’s incentives to manipulate the network decrease with consumers’ initial beliefs and might either increase or decrease with the trust the firm puts in consumers’ opinions. Finally, the firm’s strategic motivation to communicate persistently high beliefs during the opinion formation process is positively associated with the market size, with the available budget and with the direct influence of the most influential consumer on the other consumer, but negatively associated with consumers’ initial valuation of the good.
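The paper's equilibrium is computed via sequential quadratic programming, which is beyond a short sketch; but the underlying belief dynamics it manipulates are typically DeGroot-style trust-weighted averaging. The toy model below (all trust weights and beliefs are made up for illustration) shows the mechanism the firm exploits: a stubborn firm node keeps broadcasting its message, and connected consumers' beliefs drift toward it.

```python
def degroot_step(W, x):
    """One synchronous belief update: x' = W x, with W row-stochastic."""
    return [sum(w * xj for w, xj in zip(row, x)) for row in W]

# Node 0 is the firm (fully stubborn: trusts only itself); nodes 1-2
# are consumers who put some trust in the firm and in each other.
W = [
    [1.0, 0.0, 0.0],   # firm keeps broadcasting its own belief
    [0.3, 0.5, 0.2],   # consumer 1: trust in firm = 0.3
    [0.1, 0.4, 0.5],   # consumer 2: trust in firm = 0.1
]
x = [1.0, 0.2, 0.6]    # firm's message and initial consumer beliefs
for _ in range(200):
    x = degroot_step(W, x)
# With a single stubborn node in a connected trust network, consumer
# beliefs converge to the firm's message.
```

The firm's "targeting" decision in the paper then amounts to choosing where to increase the trust weights in the first column, subject to its budget.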
104.
Context-based caching and routing for P2P web service discovery
In modern heterogeneous environments, such as mobile, pervasive and ad-hoc networks, architectures based on web services offer an attractive solution for effective communication and interoperation. In such dynamic and rapidly evolving environments, efficient web service discovery is an important task. Usually this task is based on input/output parameters or other functional attributes; however, this does not guarantee the validity or successful utilization of retrieved web services. Instead, non-functional attributes, such as device power features, computational resources and connectivity status, which characterize the context of both service providers and consumers, play an important role in the quality and usability of discovery results. In this paper we introduce context-awareness into web service discovery, enabling the provision of the most appropriate services at the right location and time. We focus on context-based caching and routing for improving web service discovery in a mobile peer-to-peer environment. We conducted a thorough experimental study using our prototype implementation based on the JXTA framework, while simulations are employed to test the scalability of the approach. We illustrate the advantages of this approach both by evaluating the context-based cache performance and by comparing the efficiency of location-based routing to broadcast-based approaches. Recommended by: Zakaria Maamar
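The abstract does not detail the cache design; one plausible reading of "context-based caching" is that a cached discovery result is reusable only when the requester's context is close enough to the context under which it was cached. The sketch below illustrates that idea with location as the sole context attribute (class name, distance threshold, and sample data are all hypothetical):

```python
class ContextCache:
    """Caches discovery results keyed by the functional query; a hit
    additionally requires the requester's context to match closely."""

    def __init__(self, max_distance):
        self.max_distance = max_distance
        self.entries = {}  # query -> (context, results)

    def put(self, query, context, results):
        self.entries[query] = (context, results)

    def get(self, query, context):
        hit = self.entries.get(query)
        if hit is None:
            return None  # functional miss
        cached_ctx, results = hit
        # Context here is an (x, y) location; other non-functional
        # attributes (battery, connectivity) could be added similarly.
        dist = ((context[0] - cached_ctx[0]) ** 2 +
                (context[1] - cached_ctx[1]) ** 2) ** 0.5
        return results if dist <= self.max_distance else None  # context miss

cache = ContextCache(max_distance=5.0)
cache.put("print-service", context=(0, 0), results=["printer-A"])
```

A nearby peer issuing the same query gets the cached answer; a distant peer falls through to routing, which is where the paper's location-based routing would take over.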
105.
Data stream values are often associated with multiple aspects. For example, each value observed at a given time-stamp from environmental sensors may have an associated type (e.g., temperature, humidity) as well as a location. Time-stamp, type and location are three aspects that can be modeled using a tensor (high-order array). However, the time aspect is special: it has a natural ordering, and successive time-ticks usually have correlated values. Standard multiway analysis ignores this structure. To capture it, we propose 2 Heads Tensor Analysis (2-heads), which provides a qualitatively different treatment of time. Unlike most existing approaches, which use a PCA-like summarization scheme for all aspects, 2-heads treats the time aspect carefully, combining the power of classic multilinear analysis with wavelets to yield a powerful mining tool. Furthermore, 2-heads has several other advantages: (a) it can be computed incrementally in a streaming fashion, (b) it has a provable error guarantee, and (c) it achieves significant compression ratios against competitors. Finally, we show experiments on real datasets and illustrate how 2-heads reveals interesting trends in the data. This is an extended abstract of an article published in the Data Mining and Knowledge Discovery journal.
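To make the "wavelets along the time mode" idea concrete: the simplest wavelet is the Haar transform, which replaces each pair of successive values with their average (coarse trend) and half-difference (detail). A minimal one-level sketch is below; 2-heads itself combines a full wavelet transform on time with PCA-like decompositions on the other tensor modes, which this toy omits.

```python
def haar_step(series):
    """One-level Haar transform: per pair of successive values, keep the
    average (approximation) and half-difference (detail). Even length."""
    approx = [(series[i] + series[i + 1]) / 2 for i in range(0, len(series), 2)]
    detail = [(series[i] - series[i + 1]) / 2 for i in range(0, len(series), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact reconstruction: x0 = a + d, x1 = a - d for each pair."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

# A tiny slice of the (time, type, location) tensor: one sensor's
# temperature series over four time-ticks (values are made up).
temperature = [20.0, 22.0, 21.0, 19.0]
approx, detail = haar_step(temperature)
```

Because the transform is invertible and computed pairwise, it can be maintained incrementally as new time-ticks stream in, which is what makes the time mode "special" here.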
106.
Contemporary distributed systems usually involve the spreading of information by means of ad-hoc dialogs between nodes (peers). This paradigm resembles the spreading of a virus from a biological perspective (epidemics). This abstraction allows us to design and implement information dissemination schemes with increased efficiency. In addition, elementary information generated at a certain node can be further processed to obtain more specific, higher-level and more valuable information. Such information carries specific semantic value that can be further interpreted and exploited throughout the network. This is reflected in the epidemical framework through the idea of virus transmutation, which is a key component of our model. We establish an analytical framework for the study of a multi-epidemical information dissemination scheme in which diverse ‘transmuted epidemics’ are spread. We validate our analytical model through simulations. Key outcomes of this study include the assessment of the efficiency of the proposed scheme and the prediction of the characteristics of the spreading process (multi-epidemical prevalence and decay).
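The paper's model is analytical; as a purely illustrative stand-in, the deterministic toy below spreads an "elementary" epidemic A over a ring, lets designated nodes process it into a derived epidemic B (the transmutation step), and spreads B in turn. Transmission probability is fixed at 1 so the outcome is deterministic; the graph and processing nodes are invented for the example.

```python
def spread_round(adj, infected):
    """One synchronous SI round: every neighbor of an infected node
    becomes infected (transmission probability 1, for determinism)."""
    new = set(infected)
    for u in infected:
        new.update(adj[u])
    return new

def transmute(infected_a, processors):
    """Nodes holding elementary info (epidemic A) that can process it
    start spreading the derived, higher-level info (epidemic B)."""
    return infected_a & processors

# Ring of 6 peers.
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
a = {0}                               # epidemic A seeded at node 0
for _ in range(3):
    a = spread_round(adj, a)          # A reaches the whole ring
b = transmute(a, processors={2, 4})   # nodes 2 and 4 derive info B
for _ in range(3):
    b = spread_round(adj, b)          # B prevails as well
```

In the paper's terms, the quantities of interest would be the prevalence curves of A and B over rounds rather than just the final sets.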
107.
A novel, efficient strategy for color-based image retrieval is introduced. It is a hybrid approach that combines a data compression scheme based on self-organizing neural networks with a nonparametric statistical test for comparing vectorial distributions. First, the color content of each image is summarized by representative RGB vectors extracted using the Neural-Gas network. The similarity between two images is then assessed as the commonality between the corresponding representative color distributions, quantified using the multivariate Wald–Wolfowitz test. Experimental results on a diverse collection of color images show significantly improved performance (approximately 10–15% higher) relative to both the popular, simplistic color-histogram approach and the sophisticated, computationally demanding Earth Mover’s Distance technique.
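The multivariate Wald–Wolfowitz test (in its Friedman–Rafsky form) pools the two point samples, builds a Euclidean minimum spanning tree, and counts edges joining points from different samples: few cross-sample edges suggest the distributions differ. The sketch below implements just that counting step with a plain Prim's MST; the Neural-Gas summarization is assumed to have already produced the representative RGB vectors (the sample values are made up), and the full test statistic's normalization is omitted.

```python
def mst_edges(points):
    """Prim's algorithm on the complete Euclidean graph over `points`;
    returns the MST edges as index pairs."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    n, in_tree, edges = len(points), {0}, []
    while len(in_tree) < n:
        best = None
        for u in in_tree:
            for v in range(n):
                if v not in in_tree:
                    d = dist(points[u], points[v])
                    if best is None or d < best[0]:
                        best = (d, u, v)
        edges.append((best[1], best[2]))
        in_tree.add(best[2])
    return edges

def cross_sample_edges(sample_a, sample_b):
    """Friedman-Rafsky count: MST edges joining the two samples.
    A small count is evidence that the two distributions differ."""
    points = sample_a + sample_b
    labels = [0] * len(sample_a) + [1] * len(sample_b)
    return sum(1 for u, v in mst_edges(points) if labels[u] != labels[v])

# Hypothetical representative RGB vectors of a dark and a bright image.
dark = [(10, 12, 9), (20, 18, 22), (15, 16, 14)]
bright = [(240, 238, 241), (230, 233, 229), (245, 244, 246)]
```

Two well-separated color distributions are joined by a single MST edge, while thoroughly interleaved samples produce many cross edges.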
108.
The conventional approach to implementing the knowledge base of a planning agent on an intelligent embedded system is purely software-based. It requires a compiler that transforms the initial declarative logic program specifying the knowledge base into an equivalent procedural one, to be programmed onto the embedded system’s microprocessor. This practice increases the complexity of the final implementation (the declarative-to-sequential transformation adds a great amount of software code for simulating declarative execution) and reduces overall system performance (logic derivations require a stack and a great number of jump instructions for their evaluation). Specialized hardware implementations that support only logic programs resolve these problems but cannot be used in applications where logic programs must be intertwined with traditional procedural ones. In this paper, we exploit HW/SW codesign methods to present a microprocessor capable of supporting hybrid applications that use both programming approaches. We take advantage of the close relationship between attribute grammar (AG) evaluation and knowledge engineering methods to present a programmable hardware parser that performs logic derivations, and we combine it with an extension of a conventional RISC microprocessor that performs the unification process to report the success or failure of those derivations. The extended RISC microprocessor can still execute conventional procedural programs, so hybrid applications can be implemented.
The presented implementation increases the performance of logic derivations for the control inference process (experimental analysis shows an approximately tenfold increase in performance) and reduces the complexity of the final implemented code by introducing an extended C language, called C-AG, that simplifies the programming of hybrid procedural-declarative applications.
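For readers unfamiliar with the logic derivations being accelerated: in the all-software baseline, the embedded processor must execute something like the backward-chaining loop below for every query against the knowledge base, which is exactly the stack-and-branch-heavy workload the hardware parser offloads. This is a minimal propositional sketch (the atoms and rules are hypothetical), not the paper's AG-based machinery.

```python
def derive(goal, facts, rules, depth=20):
    """Backward chaining over propositional Horn clauses.

    facts: set of atoms known true; rules: list of (head, body_atoms).
    Returns True iff `goal` has a successful derivation."""
    if depth == 0:
        return False          # guard against cyclic rule sets
    if goal in facts:
        return True
    for head, body in rules:
        if head == goal and all(derive(g, facts, rules, depth - 1)
                                for g in body):
            return True
    return False

# Toy knowledge base for a planning agent.
facts = {"battery_ok", "path_clear"}
rules = [
    ("can_move", ["battery_ok", "path_clear"]),
    ("can_deliver", ["can_move", "holding_package"]),
]
```

Each recursive call here corresponds to a derivation step; in the paper's design, the parser walks this search in hardware while the extended RISC core handles unification.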
109.
Stochastic Flow Models (SFMs) are stochastic hybrid systems that abstract the dynamics of many complex discrete event systems and provide the basis for their control and optimization. To date, SFMs have been used to study systems with a single user class or some multiclass settings in which performance metrics are not class-dependent. In this paper, we develop an SFM framework for multiple classes and class-dependent performance objectives, where competing classes employ threshold control policies and service is provided on a First Come First Serve (FCFS) basis. In this framework, we analyze new phenomena that result from the interaction of the different classes and give rise to a new class of “induced” events that capture delays in the SFM dynamics. We derive Infinitesimal Perturbation Analysis (IPA) estimators for derivatives of various class-dependent objectives and use them as the basis for on-line optimization algorithms that apply to the underlying discrete event system (not the SFM). This allows us to contrast system-centric and user-centric objectives, thus casting the resource contention problem in a game framework. The unbiasedness of the IPA estimators is established, and numerical results illustrate the effectiveness of our method when there are no constraints on the controllable thresholds and demonstrate the gap between the results of system-centric and user-centric optimization.
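To illustrate what an IPA estimator looks like in the simplest single-class SFM (not the multiclass framework of this paper): a classic result is that the derivative of lost fluid volume with respect to the buffer threshold θ is minus the number of distinct overflow periods on the sample path, so the estimator is a simple event counter. The discretized sketch below (arrival rates and parameters are invented) checks that counter against a finite difference on the same sample path.

```python
def simulate_loss(arrivals, mu, theta, dt=1.0):
    """Fluid buffer with service rate mu and threshold theta.
    Returns (total lost volume, number of distinct overflow periods)."""
    b, loss, periods, overflowing = 0.0, 0.0, 0, False
    for a in arrivals:
        b += (a - mu) * dt
        excess = max(0.0, b - theta)
        if excess > 0 and not overflowing:
            periods += 1                  # a new overflow period starts
        overflowing = excess > 0
        loss += excess
        b = min(theta, max(0.0, b))
    return loss, periods

arrivals = [2, 2, 0, 0, 2, 2, 0, 0]       # piecewise-constant inflow rates
loss, periods = simulate_loss(arrivals, mu=1.0, theta=1.0)
ipa_estimate = -periods                   # IPA: dLoss/dtheta = -#overflow periods
# Finite-difference check on the same sample path:
h = 0.5
fd = (simulate_loss(arrivals, 1.0, 1.0 + h)[0] - loss) / h
```

The appeal, which carries over to the class-dependent estimators in the paper, is that the derivative is obtained by counting events on a single observed sample path, with no model of the arrival process needed.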
110.
Requirements Engineering - Cloud computing is used by consumers to access cloud services. Malicious actors exploit vulnerabilities of cloud services to attack consumers. The link between these two...