Full-text access type
Paid full text | 998 articles |
Free | 0 articles |
Free (domestic) | 1 article |
Subject classification
Electrical engineering | 7 articles |
General | 4 articles |
Chemical industry | 129 articles |
Metalworking technology | 278 articles |
Machinery and instruments | 20 articles |
Building science | 7 articles |
Mining engineering | 7 articles |
Energy and power | 4 articles |
Light industry | 82 articles |
Water conservancy engineering | 3 articles |
Petroleum and natural gas | 2 articles |
Radio electronics | 32 articles |
General industrial technology | 102 articles |
Metallurgical industry | 77 articles |
Automation technology | 245 articles |
Publication year
2019 | 1 article |
2018 | 67 articles |
2017 | 64 articles |
2016 | 69 articles |
2015 | 76 articles |
2014 | 16 articles |
2013 | 24 articles |
2012 | 27 articles |
2011 | 586 articles |
2010 | 26 articles |
2009 | 24 articles |
2008 | 18 articles |
2002 | 1 article |
Sorted by: 999 results found, search time 15 ms
21.
Michael Leuschel Jérôme Falampin Fabian Fritz Daniel Plagge 《Formal Aspects of Computing》2011,23(6):683-709
In this paper we describe the successful application of the ProB tool for data validation in several industrial applications. The initial case study centred on the San Juan metro system
installed by Siemens. The control software was developed and formally proven with B. However, the development contains certain
assumptions about the actual rail network topology which have to be validated separately in order to ensure safe operation.
For this task, Siemens has developed custom proof rules for Atelier B. Atelier B, however, was unable to deal with about 80
properties of the deployment (running out of memory). These properties thus had to be validated by hand at great expense,
and they need to be revalidated whenever the rail network infrastructure changes. In this paper we show how we were able to
use ProB to validate all of the approximately 300 properties of the San Juan deployment, automatically detecting in
a few minutes exactly the same faults that had been uncovered manually in about one man-month. We have repeated this task for three ongoing projects at
Siemens, notably the ongoing automatisation of line 1 of the Paris Métro. Here again, about a man-month of effort has
been replaced by a few minutes of computation. This achievement required the extension of the ProB kernel for large sets as well as an improved constraint propagation algorithm. We also outline some of the effort and features
that were required in moving from a tool capable of dealing with medium-sized examples towards a tool able to deal with actual
industrial specifications. We also describe the issue of validating ProB, so that it can be integrated into the SIL4 development chain at Siemens.
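The flavor of this kind of data validation can be sketched in miniature: a property over a finite rail-network model is evaluated exhaustively, reporting counterexamples rather than proving theorems. Everything below (the data model, the property, the names) is invented for illustration and is far simpler than the B properties the paper deals with:

```python
# Illustrative sketch only: exhaustively checking "topology properties"
# over small finite sets, in the spirit of constraint-based data
# validation. All names and data are hypothetical.

def validate(tracks, switches, connected):
    """Report violations of two sample properties: every switch lies on
    some track, and connectivity only relates known tracks."""
    violations = []
    for s in switches:
        if not any(s in segs for segs in tracks.values()):
            violations.append(("orphan_switch", s))
    for a, b in connected:
        if a not in tracks or b not in tracks:
            violations.append(("unknown_track", (a, b)))
    return violations

tracks = {"T1": {"S1"}, "T2": {"S2"}}      # track -> switches on it
switches = ["S1", "S2", "S3"]              # S3 lies on no track
connected = [("T1", "T2"), ("T2", "T9")]   # T9 is undeclared

print(validate(tracks, switches, connected))
# [('orphan_switch', 'S3'), ('unknown_track', ('T2', 'T9'))]
```

A tool like ProB differs from this sketch in scale and in using constraint propagation rather than brute-force enumeration, which is precisely what the kernel extensions mentioned above address.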
22.
We present a system to detect and track moving objects from an airborne platform. Given a global map, such as a satellite
image, our approach can locate and track the targets in geo-coordinates, namely longitude and latitude obtained from geo-registration.
A motion model in geo-coordinates is more physically meaningful than the one in image coordinates. We propose to use a two-step
geo-registration approach to stitch images acquired by satellite and UAV cameras. Mutual information is used to find correspondences
between these two very different modalities. After motion segmentation and geo-registration, tracking is performed in a hierarchical
manner: at the temporally local level, moving image blobs extracted by motion segmentation are associated into tracklets;
at the global level, tracklets are linked by their appearance and spatio-temporal consistency on the global map. To achieve
efficient time performance, graphics processing unit techniques are applied in the geo-registration and motion detection modules,
which are the bottleneck of the whole system. Experiments show that our method can efficiently deal with long term occlusion
and segmented tracks even when targets fall out of the field of view.
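The mutual-information matching step can be sketched as follows: a joint intensity histogram of the two images yields a score that is high when the modalities are statistically dependent, even when their raw intensities differ. This is a generic textbook formulation, not the paper's implementation:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information of two images from their joint intensity
    histogram; higher means stronger statistical dependence."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0                          # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.random((64, 64))
unrelated = rng.random((64, 64))
print(mutual_information(img, img) > mutual_information(img, unrelated))  # True
```

In a real registration loop this score would be maximized over candidate alignments between the satellite map and the UAV frame.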
23.
Sabine Schlabach Rolf Ochs Thomas Hanemann Dorothée Vinga Szabó 《Microsystem Technologies》2011,17(2):183-193
In the development of nanoparticles for polymer matrix composites, the particle/agglomerate size and its distribution
within the composite are often crucial. This is shown exemplarily for optical applications with
measurements of refractive index and transmittance. Classical blending techniques, where nanoparticles are dispersed in polymers
or resins, are compared to a combination of a special gas-phase synthesis method with subsequent in-situ deposition of nanoparticles
in high-boiling liquids. The particles/agglomerates were characterized regarding particle size and particle size distribution
using transmission electron microscopy and dynamic light scattering. Additionally, important material properties like mechanical
properties, relevant for application, or viscosity, relevant for processing, are determined. It is shown that, with in-situ
dispersed nanoparticles synthesized in a microwave plasma process, composites with finely dispersed particles/agglomerates
are attainable.
24.
Sándor Vágvölgyi 《Acta Informatica》2011,48(5-6):317-361
We present an additional feature to the Challenge Handshake Authentication Protocol. It makes the protocol resilient to offline brute-force/dictionary attacks. We base our contribution to the protocol on the concept of a rewrite complement for ground term rewrite systems (GTRSs). We also introduce and study the notion of a type-based complement, which is a special case of a rewrite complement. We show the following decision results. Given GTRSs A, C, and a reduced GTRS B over some ranked alphabet Σ, one can decide whether C is a type-based complement of A for B. Given a GTRS A and a reduced GTRS B over some ranked alphabet Σ, one can decide whether there is a GTRS C such that C is a type-based complement of A for B. If the answer is yes, then we can construct such a GTRS C.
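The underlying machinery, rewriting ground terms to normal form with a GTRS, can be sketched as follows. The term representation (nested tuples) and the rules are hypothetical; the paper's actual contribution, deciding and constructing type-based complements, goes well beyond this sketch:

```python
# Hypothetical sketch: rewriting ground terms with a GTRS given as a
# finite map from ground left-hand sides to right-hand sides. Terms are
# nested tuples ("symbol", child, ...) or constant strings.

def rewrite_once(rules, t):
    """Apply one rule at the outermost-leftmost redex; return (term, changed)."""
    if t in rules:
        return rules[t], True
    if isinstance(t, tuple):
        head, *kids = t
        for i, k in enumerate(kids):
            new, changed = rewrite_once(rules, k)
            if changed:
                return (head, *kids[:i], new, *kids[i + 1:]), True
    return t, False

def normalize(rules, t):
    """Rewrite until no rule applies (assumes the GTRS terminates)."""
    changed = True
    while changed:
        t, changed = rewrite_once(rules, t)
    return t

rules = {("f", "a"): "b", ("g", "b", "b"): "c"}
print(normalize(rules, ("g", ("f", "a"), ("f", "a"))))  # c
```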
25.
Francisco Jesús Martín-Mateos José Luis Ruiz-Reina José Antonio Alonso María José Hidalgo 《Journal of Automated Reasoning》2011,47(3):229-250
Higman's Lemma is an important result in infinitary combinatorics, which has been formalized in several theorem provers. In this paper we present a formalization and proof of Higman's Lemma in the ACL2 theorem prover. Our formalization is based on a proof by Murthy and Russell, where the key termination argument is justified by the multiset relation induced by a well-founded relation. To our knowledge, this is the first mechanization of this proof.
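The relation at the heart of Higman's Lemma, scattered-subword embedding, is easy to state in code. The sketch below checks the embedding and searches a finite list of words for an increasing pair, which the lemma guarantees must appear in any infinite sequence of words over a finite alphabet (the function names are ours):

```python
def embeds(u, v):
    """Scattered-subword embedding u <= v: the letters of u occur in v
    in order (the well-quasi-order of Higman's Lemma over a finite
    alphabet)."""
    it = iter(v)
    return all(ch in it for ch in u)  # each `in` consumes `it` up to a match

def find_embedded_pair(words):
    """Return indices i < j with words[i] embedded in words[j]; Higman's
    Lemma guarantees such a pair in any infinite sequence of words."""
    for j in range(len(words)):
        for i in range(j):
            if embeds(words[i], words[j]):
                return i, j
    return None

print(embeds("ace", "abcde"))                         # True
print(find_embedded_pair(["ba", "ab", "cc", "acb"]))  # (1, 3)
```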
26.
Simultaneous aligning and smoothing of surface triangulations (Times cited: 1; self-citations: 0; cited by others: 1)
José M. Escobar Rafael Montenegro Eduardo Rodríguez Gustavo Montero 《Engineering with Computers》2011,27(1):17-29
In this work we develop a procedure to deform a given surface triangulation to obtain its alignment with interior curves.
These curves are defined by splines in a parametric space and, subsequently, mapped to the surface triangulation. We have
restricted our study to orthogonal mapping, so we require the curves to be included in a patch of the surface that can be
orthogonally projected onto a plane (our parametric space). For example, the curves can represent interfaces between different
materials or boundary conditions, internal boundaries or feature lines. Another setting in which this procedure can be used
is the adaptation of a reference mesh to changing curves in the course of an evolutionary process. Specifically, we propose
a new method that moves the nodes of the mesh, maintaining its topology, in order to achieve two objectives simultaneously:
the piecewise approximation of the curves by edges of the surface triangulation and the optimization of the resulting mesh.
We designate this procedure the projecting/smoothing method; it is based on the smoothing technique that we introduced for surface triangulations in previous works. The mesh quality improvement is obtained by an iterative process in which each free node is moved to a new position that minimizes a certain objective function. The minimization is done on the parametric plane, attending to the piecewise surface approximation and to an algebraic quality measure (mean ratio) of the set of triangles connected to the free node. The 3-D local projecting/smoothing problem is thus reduced to a 2-D optimization problem. Several applications of this method are presented.
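The mean-ratio quality measure mentioned above has a standard 2-D form, q = 4√3·A / (l1² + l2² + l3²), which equals 1 for an equilateral triangle and tends to 0 (or goes negative, for inverted elements) as the triangle degenerates. The sketch below computes it and uses a crude stand-in for the node-movement step, moving a free node to the centroid of its ring of neighbours rather than minimizing the paper's objective function:

```python
import math

def mean_ratio(p0, p1, p2):
    """Mean-ratio quality q = 4*sqrt(3)*A / (l1^2 + l2^2 + l3^2):
    1 for equilateral, near 0 (or negative) when degenerate/inverted."""
    (x0, y0), (x1, y1), (x2, y2) = p0, p1, p2
    area2 = (x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)  # 2 * signed area
    lsq = sum((a - b) ** 2 + (c - d) ** 2
              for (a, c), (b, d) in ((p1, p0), (p2, p1), (p0, p2)))
    return 2.0 * math.sqrt(3.0) * area2 / lsq

def centroid_smooth(ring):
    """Crude stand-in for the paper's optimization: place the free node
    at the centroid of its ring of neighbours."""
    return (sum(p[0] for p in ring) / len(ring),
            sum(p[1] for p in ring) / len(ring))

ring = [(1, 0), (0, 1), (-1, 0), (0, -1)]
fan = list(zip(ring, ring[1:] + ring[:1]))  # triangles around the free node
bad = (0.6, 0.55)                           # badly placed free node
good = centroid_smooth(ring)                # (0.0, 0.0)
print(min(mean_ratio(bad, a, b) for a, b in fan) <
      min(mean_ratio(good, a, b) for a, b in fan))  # True: worst quality improved
```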
27.
In the Horn theory based approach for cryptographic protocol analysis, cryptographic protocols and (Dolev-Yao) intruders are modeled by Horn theories, and security analysis boils down to solving the derivation problem for Horn theories. This approach, and the tools based on it, including ProVerif, have been very successful in the automatic analysis of cryptographic protocols. However, dealing with the algebraic properties of operators such as the exclusive OR (XOR), which are frequently used in cryptographic protocols, has been problematic. In particular, ProVerif cannot deal with XOR. In this paper, we show how to reduce the derivation problem for Horn theories with XOR to the XOR-free case. Our reduction works for an expressive class of Horn theories. A large class of intruder capabilities and protocols that employ the XOR operator can be modeled by these theories. Our reduction allows us to carry out protocol analysis using tools, such as ProVerif, that cannot deal with XOR but are very efficient in the XOR-free case. We implemented our reduction and, in combination with ProVerif, used it for the fully automatic analysis of several protocols that employ the XOR operator. Among others, our analysis revealed a new attack on an IBM security module.
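The algebraic behaviour that makes XOR hard for ProVerif, associativity, commutativity, and cancellation x ⊕ x = 0, can be normalized symbolically by keeping an XOR term as the set of atoms that occur an odd number of times. A minimal sketch (this is a hypothetical helper, not the paper's reduction):

```python
def xor(*terms):
    """Normalize an XOR of atoms modulo associativity, commutativity,
    and cancellation x + x = 0: keep the atoms that occur an odd
    number of times."""
    acc = set()
    for t in terms:
        acc ^= set(t) if isinstance(t, frozenset) else {t}  # symmetric difference
    return frozenset(acc)

key = xor("k1", "k2")            # a composite key k1 + k2
cipher = xor("msg", key)         # "encryption" by XORing with the key
print(sorted(xor(cipher, key)))  # ['msg'] -- re-XORing cancels the key
```

Reasoning modulo exactly this kind of cancellation, inside Horn-theory derivations, is what the paper's reduction makes accessible to XOR-free tools.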
28.
Henrique Moniz Nuno Ferreira Neves Miguel Correia Paulo Veríssimo 《Distributed Computing》2011,24(3-4):165-175
Wireless ad-hoc networks are being increasingly used in diverse contexts, ranging from casual meetings to disaster recovery operations. A promising approach is to model these networks as distributed systems prone to dynamic communication failures. This captures transitory disconnections in communication due to phenomena like interference and collisions, and permits an efficient use of the wireless broadcasting medium. This model, however, is bound by the impossibility result of Santoro and Widmayer, which states that, even with strong synchrony assumptions, there is no deterministic solution to any non-trivial form of agreement if n-1 or more messages can be lost per communication round in a system with n processes. In this paper we propose a novel way to circumvent this impossibility result by employing randomization. We present a consensus protocol that ensures safety in the presence of an unrestricted number of omission faults, and guarantees progress in rounds where such faults are bounded by $f \leq \lceil \frac{n}{2} \rceil (n-k)+k-2$, where k is the number of processes required to decide, eventually assuring termination with probability 1.
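The progress bound can be evaluated directly. For example, with n = 10 processes and k = 6 required to decide, the protocol makes progress in rounds with up to 24 omission faults, whereas the Santoro-Widmayer result already rules out deterministic agreement at n-1 = 9 lost messages per round:

```python
import math

def omission_bound(n, k):
    """Per-round omission-fault bound under which the protocol
    guarantees progress: f <= ceil(n/2) * (n - k) + k - 2."""
    return math.ceil(n / 2) * (n - k) + k - 2

print(omission_bound(10, 6))  # 24
```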
29.
30.
Jérôme Schmid José A. Iglesias Guitián Enrico Gobbetti Nadia Magnenat-Thalmann 《The Visual computer》2011,27(2):85-95
Despite the ability of current GPU processors to handle heavy parallel computation tasks, their use for solving medical image
segmentation problems is still not fully exploited and remains challenging. Many difficulties may arise related to, for
example, the different image modalities, noise and artifacts of source images, or the shape and appearance variability of
the structures to segment. Motivated by practical problems of image segmentation in the medical field, we present in this
paper a GPU framework based on explicit discrete deformable models, implemented on the NVidia CUDA architecture, aimed at
the segmentation of volumetric images. The framework supports the segmentation in parallel of different volumetric structures
as well as interaction during the segmentation process and real-time visualization of the intermediate results. Promising
results in terms of accuracy and speed on a real segmentation experiment have demonstrated the usability of the system.
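An explicit discrete deformable model updates each vertex independently from an internal (smoothness) force and an external (image) force, which is what makes the per-vertex loop embarrassingly parallel on a GPU. The NumPy sketch below is a serial, hypothetical stand-in for one such update step, with a zero external force for brevity:

```python
import numpy as np

def deform_step(verts, neighbors, external_force, alpha=0.5, beta=0.2):
    """One explicit update: each vertex moves toward the centroid of its
    neighbours (internal force) plus an image-derived external force.
    On a GPU this loop would run one thread per vertex."""
    centroid = np.array([verts[list(nb)].mean(axis=0) for nb in neighbors])
    return verts + alpha * (centroid - verts) + beta * external_force(verts)

# Demo: a circular contour with no external force contracts smoothly.
n = 16
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
verts = np.stack([np.cos(theta), np.sin(theta)], axis=1)
neighbors = [((i - 1) % n, (i + 1) % n) for i in range(n)]
out = deform_step(verts, neighbors, lambda v: np.zeros_like(v))
print(np.linalg.norm(out, axis=1).max() < 1.0)  # True: contour moved inward
```

In a segmentation setting, `external_force` would be derived from image gradients or intensity profiles of the volume being segmented.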