  Paid full text   4151 articles
  Free   276 articles
  Free (domestic)   13 articles
Electrical engineering   49 articles
Chemical industry   1190 articles
Metalworking   87 articles
Machinery and instruments   138 articles
Building science   141 articles
Mining engineering   24 articles
Energy and power   137 articles
Light industry   544 articles
Water resources engineering   33 articles
Petroleum and natural gas   30 articles
Weapons industry   1 article
Radio and electronics   291 articles
General industrial technology   749 articles
Metallurgical industry   310 articles
Nuclear technology   45 articles
Automation technology   671 articles
  2024   10 articles
  2023   48 articles
  2022   96 articles
  2021   216 articles
  2020   142 articles
  2019   149 articles
  2018   184 articles
  2017   166 articles
  2016   175 articles
  2015   143 articles
  2014   199 articles
  2013   347 articles
  2012   252 articles
  2011   308 articles
  2010   211 articles
  2009   203 articles
  2008   192 articles
  2007   163 articles
  2006   139 articles
  2005   123 articles
  2004   86 articles
  2003   70 articles
  2002   68 articles
  2001   49 articles
  2000   41 articles
  1999   46 articles
  1998   86 articles
  1997   57 articles
  1996   43 articles
  1995   40 articles
  1994   35 articles
  1993   37 articles
  1992   20 articles
  1991   18 articles
  1990   15 articles
  1989   18 articles
  1988   16 articles
  1987   28 articles
  1986   16 articles
  1985   16 articles
  1984   13 articles
  1983   10 articles
  1982   16 articles
  1981   9 articles
  1979   14 articles
  1978   10 articles
  1976   14 articles
  1975   9 articles
  1973   14 articles
  1972   8 articles
Sort order: 4440 results found, search took 15 ms
111.
Versioning is an important aspect of web service development that has not been adequately addressed so far. In this article, we propose extensions to WSDL and UDDI to support versioning of web service interfaces at development time and run time. We address service-level and operation-level versioning, service endpoint mapping, and version sequencing. We also propose annotation extensions for developing versioned web services in Java. We have tested the proposed solution for versioning in two real-world environments and identified considerable improvements in service development and maintenance efficiency, improved service reuse, and simplified governance.
112.
The bilateral filter is a nonlinear filter that smoothes a signal while preserving strong edges. It has demonstrated great effectiveness for a variety of problems in computer vision and computer graphics, and fast versions have been proposed. Unfortunately, little is known about the accuracy of such accelerations. In this paper, we propose a new signal-processing analysis of the bilateral filter which complements the recent studies that analyzed it as a PDE or as a robust statistical estimator. The key to our analysis is to express the filter in a higher-dimensional space where the signal intensity is added to the original domain dimensions. Importantly, this signal-processing perspective allows us to develop a novel bilateral filtering acceleration using downsampling in space and intensity. This affords a principled expression of accuracy in terms of bandwidth and sampling. The bilateral filter can be expressed as linear convolutions in this augmented space followed by two simple nonlinearities. This allows us to derive criteria for downsampling the key operations and achieving important acceleration of the bilateral filter. We show that, for the same running time, our method is more accurate than previous acceleration techniques. Typically, we are able to process a 2 megapixel image using our acceleration technique in less than a second, and have the result be visually similar to the exact computation that takes several tens of minutes. The acceleration is most effective with large spatial kernels. Furthermore, this approach extends naturally to color images and cross bilateral filtering.
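The core idea of the abstract, expressing the bilateral filter as a Gaussian convolution over a joint space-intensity grid that can be downsampled, can be sketched for a 1-D signal as follows. This is a minimal NumPy illustration of that idea, not the authors' code; the function name, the padding, and the choice of roughly one grid cell per kernel sigma are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def fast_bilateral_1d(signal, sigma_s, sigma_r):
    """Approximate bilateral filter of a 1-D signal via a downsampled
    space-intensity grid: splat, blur with a Gaussian, then slice back.
    A sketch of the idea described in the abstract, not the paper's code."""
    signal = np.asarray(signal, dtype=float)
    lo, hi = signal.min(), signal.max()
    ss, sr = max(sigma_s, 1.0), max(sigma_r, 1e-6)   # grid cell sizes (assumed)

    nx = int(np.ceil(len(signal) / ss)) + 3          # spatial bins (+ padding)
    nr = int(np.ceil((hi - lo) / sr)) + 3            # intensity bins (+ padding)
    data = np.zeros((nx, nr))                        # accumulated intensities
    weight = np.zeros((nx, nr))                      # accumulated counts

    xs = np.arange(len(signal)) / ss + 1.0           # grid coordinates of samples
    rs = (signal - lo) / sr + 1.0
    xi, ri = np.round(xs).astype(int), np.round(rs).astype(int)
    np.add.at(data, (xi, ri), signal)
    np.add.at(weight, (xi, ri), 1.0)

    # In the augmented space the bilateral filter is a plain Gaussian convolution.
    data = gaussian_filter(data, sigma=(sigma_s / ss, sigma_r / sr))
    weight = gaussian_filter(weight, sigma=(sigma_s / ss, sigma_r / sr))

    # Slice: interpolate the normalized grid at each sample's position.
    num = map_coordinates(data, [xs, rs], order=1)
    den = map_coordinates(weight, [xs, rs], order=1)
    return num / np.maximum(den, 1e-12)
```

For images the same construction applies with a three-dimensional (x, y, intensity) grid, and cross/joint bilateral filtering is obtained by building the grid from a second guidance signal, which is the extension the abstract mentions.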
113.
In the present article, we continue the study of the properties of the spectra of structures as sets of degrees initiated in [11]. Here, we consider the relationships between the spectra and the jump spectra. Our first result is that every jump spectrum is also a spectrum. The main result is a jump inversion theorem: we show that if a spectrum is contained in the set of the jumps of the degrees in some spectrum, then there exists a spectrum such that the first spectrum is equal to the set of the jumps of the degrees in it.
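Some symbols in this abstract were lost in extraction; with placeholder structure names and the usual notation (Sp(𝔄) for the degree spectrum of 𝔄 and 𝐚′ for the Turing jump of a degree 𝐚), the two results can be read as:

$$\mathrm{Sp}'(\mathfrak{A}) \;=\; \{\mathbf{a}' : \mathbf{a} \in \mathrm{Sp}(\mathfrak{A})\} \quad \text{(the jump spectrum of } \mathfrak{A}\text{)},$$
$$\text{(1)}\;\; \forall\,\mathfrak{A}\;\exists\,\mathfrak{B}:\ \mathrm{Sp}(\mathfrak{B}) = \mathrm{Sp}'(\mathfrak{A}), \qquad \text{(2)}\;\; \mathrm{Sp}(\mathfrak{B}) \subseteq \mathrm{Sp}'(\mathfrak{A}) \;\Longrightarrow\; \exists\,\mathfrak{C}:\ \mathrm{Sp}(\mathfrak{B}) = \mathrm{Sp}'(\mathfrak{C}).$$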
114.
A comprehensive quality model for service-oriented systems   (cited 2 times: 0 self-citations, 2 citations by others)
In a service-oriented system, a quality (or Quality of Service) model is used (i) by service requesters to specify the expected quality levels of service delivery; (ii) by service providers to advertise the quality levels that their services achieve; and (iii) by service composers to select, among alternative services, those that are to participate in a service composition. Expressive quality models are needed to let requesters specify quality expectations, providers advertise service qualities, and composers finely compare alternative services. Having observed many similarities between the various quality models proposed in the literature, we review them and integrate them into a single quality model, called QVDP. We highlight the need to integrate priority and dependency information within any quality model for services and propose precise submodels for doing so. Our intention is for the proposed model to serve as a reference point for further developments in quality models for service-oriented systems. To this end, we extend the part of the UML metamodel specialized for Quality of Service with QVDP concepts unavailable in UML.

Ivan J. Jureta graduated summa cum laude and received the Master in Management and the Master of International Management at the Université de Louvain, Belgium, and the London School of Economics, respectively, both in 2005. He is currently completing his Ph.D. thesis at the University of Namur, Belgium, under Prof. Stéphane Faulkner's supervision. His thesis focuses on quality management of adaptable and open service-oriented systems enabling the Semantic Web.

Caroline Herssens received a Master Degree in Computer Science in 2005 at the Université de Louvain. In 2006, she graduated with a Master in Business and Administration from the University of Louvain, with a supply chain management orientation. She is currently a teaching and research assistant and has started a Ph.D. thesis at the information systems research unit at the Université de Louvain. Her research interests comprise service-oriented computing, conceptual modeling and information systems engineering.

Stéphane Faulkner is an Associate Professor in Technologies and Information Systems at the University of Namur (FUNDP) and an Invited Professor at the Louvain School of Management of the Université de Louvain (UCL). His current research interests revolve around requirements engineering and the development of modeling notations, systematic methods and tool support for the development of multi-agent systems, database and information systems.
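The abstract does not reproduce the QVDP metamodel itself. Purely as an illustration of what "priority and dependency information within a quality model" can look like in practice, here is a hypothetical, minimal data-structure sketch; all names are invented for this example and are not taken from the paper.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

# Hypothetical illustration of a quality model carrying priority and dependency
# information. These classes are NOT the QVDP metamodel defined in the paper.

class Direction(Enum):
    HIGHER_IS_BETTER = "higher_is_better"   # e.g. throughput
    LOWER_IS_BETTER = "lower_is_better"     # e.g. response time

@dataclass
class QualityDimension:
    name: str            # e.g. "response_time"
    unit: str            # e.g. "ms"
    direction: Direction

@dataclass
class QualityConstraint:
    dimension: QualityDimension
    bound: float                      # level required (requester) or advertised (provider)
    priority: int = 1                 # relative importance when comparing alternative services
    depends_on: List[QualityDimension] = field(default_factory=list)  # dimensions influencing this one
```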
115.
116.
During financial crises investors manage portfolios with low liquidity, where the paper value of an asset differs from the price proposed by the buyer. We consider an optimization problem for a portfolio with an illiquid, a risky and a risk-free asset. We work in Merton's optimal consumption framework in continuous time. The liquid part of the investment is described by a standard Black–Scholes market. The illiquid asset is sold at a random moment with prescribed distribution and generates additional liquid wealth dependent on its paper value. The investor has a hyperbolic absolute risk aversion (HARA) utility function, with the logarithmic utility function as a limiting case. We study two different distributions of the liquidation time of the illiquid asset: a classical exponential distribution and a more practically relevant Weibull distribution. Under certain conditions we show the smoothness of the viscosity solution and obtain closed formulae relevant for numerics.
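For reference, the standard ingredients named in the abstract can be written out. The authors' exact parameterization may differ, so the symbols below follow only one common convention:

$$dS_t = S_t\,(\mu\,dt + \sigma\,dW_t) \quad \text{(Black–Scholes risky asset)},$$
$$U(c) = \frac{1-\gamma}{\gamma}\left(\frac{\beta c}{1-\gamma} + \eta\right)^{\gamma} \quad \text{(HARA utility; logarithmic utility arises as a limiting case)},$$
$$\Pr(\tau > t) = e^{-\lambda t}\;\;\text{(exponential)}, \qquad \Pr(\tau > t) = e^{-(t/\lambda)^{k}}\;\;\text{(Weibull)},$$

where $\tau$ denotes the random liquidation time of the illiquid asset.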
117.
While fractional calculus (FC) is as old as integer calculus, its application has been mainly restricted to mathematics. However, many real systems are better described using FC equations than with integer-order models. FC is a suitable tool for describing systems characterised by their fractal nature, long-term memory and chaotic behaviour. It is a promising methodology for failure analysis and modelling, since the behaviour of a failing system depends on factors that increase the model's complexity. This paper explores the proficiency of FC in modelling complex behaviour by tuning only a few parameters. This work proposes a novel two-step strategy for diagnosis: first, modelling common failure conditions and, second, comparing these models with real machine signals and using the difference to feed a computational classifier. Our proposal is validated using an electrical motor coupled with a mechanical gear reducer.
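The abstract does not give the authors' model equations. As background on what "FC equations" involve numerically, below is a minimal sketch of the Grünwald–Letnikov approximation of a fractional derivative of order α for a uniformly sampled signal; it is a textbook discretization, not the diagnosis method proposed in the paper.

```python
import numpy as np

def gl_fractional_derivative(x, alpha, dt):
    """Grunwald-Letnikov approximation of the order-alpha derivative of a
    uniformly sampled signal x with time step dt.
    Textbook formula shown only as background; not the authors' model."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Weights w_k = (-1)^k * binomial(alpha, k), built with the standard recursion.
    w = np.empty(n)
    w[0] = 1.0
    for k in range(1, n):
        w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
    # D^alpha x(t_j) ~ (1/dt^alpha) * sum_{k=0..j} w_k * x(t_{j-k})
    d = np.empty(n)
    for j in range(n):
        d[j] = np.dot(w[: j + 1], x[j::-1]) / dt**alpha
    return d
```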
118.
119.
We propose a novel algorithm, called REGGAE, for the generation of momenta of a given sample of particle masses, evenly distributed in Lorentz-invariant phase space and obeying energy and momentum conservation. In comparison to other existing algorithms, REGGAE is designed for use in multiparticle production in hadronic and nuclear collisions, where many hadrons are produced and a large part of the available energy is stored in the form of their masses. The algorithm uses a loop simulating multiple collisions, which leads to the production of configurations with reasonably large weights.

Program summary

Program title: REGGAE (REscattering-after-Genbod GenerAtor of Events)
Catalogue identifier: AEJR_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEJR_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 1523
No. of bytes in distributed program, including test data, etc.: 9608
Distribution format: tar.gz
Programming language: C++
Computer: PC Pentium 4, though no particular tuning for this machine was performed.
Operating system: Originally designed on a Linux PC with g++, but it has been compiled and run successfully on OS X with g++ and on MS Windows with Microsoft Visual C++ 2008 Express Edition as well.
RAM: Depends on the number of particles generated. For 10 particles, as in the attached example, it requires about 120 kB.
Classification: 11.2
Nature of problem: Generate momenta of a sample of particles with given masses which obey energy and momentum conservation. Generated samples should be evenly distributed in the available Lorentz-invariant phase space.
Solution method: The algorithm works in two steps. First, all momenta are generated with the GENBOD algorithm, in which particle production is modeled as a sequence of two-body decays of heavy resonances. After all momenta are generated this way, they are reshuffled: each particle undergoes a collision with some other partner such that, in the pair centre-of-mass system, the new directions of the momenta are distributed isotropically. After each particle has collided only a few times, the momenta are distributed evenly across the whole available phase space. Starting with GENBOD is not essential for the procedure, but it improves the performance.
Running time: Depends on the number of particles and the number of events one wants to generate. On a Linux PC with a 2 GHz processor, generation of 1000 events with 10 particles each takes about 3 s.
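Both steps of the solution method (the GENBOD-style chain of two-body decays and the pairwise isotropic "collisions" used for reshuffling) rest on the same elementary operation: splitting an invariant mass isotropically into two given masses in their common rest frame. A minimal sketch of that building block follows; it is not the REGGAE code, and the function name and interface are assumptions.

```python
import numpy as np

def two_body_decay(M, m1, m2, rng=None):
    """Isotropic two-body decay of a system of invariant mass M, at rest,
    into daughters of mass m1 and m2. Returns two four-momenta (E, px, py, pz).
    Elementary building block of GENBOD-style generation and of isotropic
    pair rescattering; a sketch, not the REGGAE implementation."""
    rng = np.random.default_rng() if rng is None else rng
    if M < m1 + m2:
        raise ValueError("invariant mass below threshold")
    # Daughter momentum magnitude in the rest frame (two-body phase space).
    p = np.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2.0 * M)
    # Isotropic direction: uniform in cos(theta) and phi.
    cos_t = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * np.pi)
    sin_t = np.sqrt(1.0 - cos_t**2)
    n = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    p1 = np.concatenate(([np.sqrt(m1**2 + p**2)],  p * n))
    p2 = np.concatenate(([np.sqrt(m2**2 + p**2)], -p * n))
    return p1, p2
```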
120.
Security under man-in-the-middle attacks is extremely important when protocols are executed on asynchronous networks, such as the Internet. Focusing on interactive proof systems, one would also like to achieve unconditional soundness, so that proving a false statement is not possible even for a computationally unbounded adversarial prover. Motivated by such requirements, in this paper we address the problem of designing constant-round protocols in the plain model that enjoy simultaneously non-malleability (i.e., security against man-in-the-middle attacks) and unconditional soundness (i.e., they are proof systems). We first give a construction of a constant-round one-many (i.e., one honest prover, many honest verifiers) concurrent non-malleable zero-knowledge proof (in contrast to argument) system for every NP language in the plain model. We then give a construction of a constant-round concurrent non-malleable witness-indistinguishable proof system for every NP language. Compared with previous results, our constructions are the first constant-round proof systems that in the plain model guarantee simultaneously security against some non-trivial concurrent man-in-the-middle attacks and against unbounded malicious provers.