Full-text access type
Paid full text | 90 papers |
Free | 0 papers |
Subject classification
Electrical engineering | 2 papers |
General | 1 paper |
Chemical industry | 5 papers |
Metalworking | 3 papers |
Hydraulic engineering | 1 paper |
Radio & electronics | 48 papers |
General industrial technology | 10 papers |
Metallurgical industry | 5 papers |
Nuclear technology | 1 paper |
Automation technology | 14 papers |
Publication year
2018 | 1 paper |
2014 | 2 papers |
2013 | 2 papers |
2012 | 5 papers |
2011 | 3 papers |
2010 | 2 papers |
2009 | 3 papers |
2008 | 5 papers |
2007 | 6 papers |
2006 | 2 papers |
2005 | 2 papers |
2004 | 2 papers |
2003 | 2 papers |
2002 | 2 papers |
2000 | 3 papers |
1999 | 10 papers |
1998 | 8 papers |
1997 | 7 papers |
1996 | 1 paper |
1995 | 1 paper |
1994 | 1 paper |
1993 | 2 papers |
1989 | 1 paper |
1987 | 1 paper |
1985 | 2 papers |
1979 | 3 papers |
1978 | 1 paper |
1977 | 1 paper |
1975 | 1 paper |
1974 | 2 papers |
1970 | 2 papers |
1968 | 2 papers |
1967 | 2 papers |
Sort by: 90 results found, search time 31 ms
81.
Vanderlei Bonato, Eduardo Marques, George A. Constantinides. Journal of Signal Processing Systems, 2009, 56(1): 41-50
Localization and mapping are two of the most important capabilities of autonomous mobile robots and have received considerable attention from the scientific computing community over the last ten years. One of the most efficient methods for addressing these problems is based on the Extended Kalman Filter (EKF). The EKF simultaneously estimates a model of the environment (a map) and the position of the robot from odometric and exteroceptive sensor information. Because this algorithm demands a considerable amount of computation, it is usually executed on a high-end PC coupled to the robot. In this work we present an FPGA-based architecture for the EKF algorithm that is capable of processing two-dimensional maps containing up to 1.8k features in real time (14 Hz), a three-fold improvement over a Pentium M at 1.6 GHz and a 13-fold improvement over an ARM920T at 200 MHz. The proposed architecture also consumes only 1.3% of the Pentium's and 12.3% of the ARM's energy per feature.
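The EKF predict/update cycle that the architecture accelerates can be sketched in miniature. This is a scalar toy model, not the paper's two-dimensional SLAM formulation: the 1D motion and measurement models (both with Jacobian 1) and the noise values below are illustrative assumptions.

```python
# Scalar-toy EKF: one predict/update cycle per control input and measurement.

def ekf_predict(x, P, u, Q):
    # Motion model x' = x + u (Jacobian F = 1), process noise Q.
    return x + u, P + Q

def ekf_update(x, P, z, R):
    # Measurement model z = x (Jacobian H = 1), sensor noise R.
    y = z - x              # innovation
    S = P + R              # innovation covariance
    K = P / S              # Kalman gain
    return x + K * y, (1.0 - K) * P

x, P = 0.0, 1.0            # initial estimate and covariance
for u, z in [(1.0, 1.1), (1.0, 2.0), (1.0, 2.9)]:
    x, P = ekf_predict(x, P, u, Q=0.01)
    x, P = ekf_update(x, P, z, R=0.1)
print(round(x, 2))         # fused position estimate -> 2.99
```

In the full SLAM setting the state vector stacks the robot pose with every map feature, so `P` and `K` become large matrices and the update cost grows quadratically with the number of features, which is what makes a dedicated hardware architecture attractive.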
82.
In fast-fading channels, the constant modulus algorithm (CMA) is unable to properly track the time variations because the magnitude of the received signal changes too rapidly. The Kalman filter (KF), by contrast, works well in time-varying channels but needs a training sequence to operate. A combined CMA and KF algorithm is therefore proposed in order to exploit the advantages of both. The step sizes of the CMA and the KF algorithm are also varied in accordance with the magnitude of the output. Simulations are presented to demonstrate the potential of the combination.
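The CMA half of the proposed combination can be sketched as a stochastic-gradient update on a single complex equalizer tap. The step size `mu`, the target modulus `R`, and the channel gain below are illustrative choices; the KF stage and the paper's magnitude-dependent step-size rule are omitted.

```python
# Bare constant-modulus update for a single complex tap: drive the output
# modulus toward R without any training sequence.

def cma_step(w, x, mu=0.05, R=1.0):
    y = w * x                           # equalizer output
    e = y * (abs(y) ** 2 - R)           # constant-modulus error term
    return w - mu * e * x.conjugate()   # stochastic-gradient update

s = (1 + 1j) / abs(1 + 1j)              # unit-modulus transmitted symbol
w = 1.0 + 0.0j                          # single equalizer tap
for _ in range(500):
    w = cma_step(w, 0.5 * s)            # channel gain 0.5 halves the modulus
print(round(abs(w * 0.5), 2))           # blindly restored output modulus -> 1.0
```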
83.
84.
Su-Shin Ang, George A. Constantinides, Wayne Luk, Peter Y. K. Cheung. Journal of Real-Time Image Processing, 2008, 3(4): 289-302
In an effort to achieve lower bandwidth requirements, video compression algorithms have become increasingly complex. Consequently, the deployment of these algorithms on field programmable gate arrays (FPGAs) is becoming increasingly desirable, because of the computational parallelism of these platforms as well as the flexibility afforded to designers. Typically, video data are stored in large and slow external memory arrays, but the impact of the memory-access bottleneck may be reduced by buffering frequently used data in fast on-chip memories. The order of the memory accesses resulting from many compression algorithms is dependent on the input data (Jain, Proceedings of the IEEE, pp. 349-389, 1981). These data-dependent memory accesses complicate the exploitation of data re-use and consequently reduce the extent to which an application may be accelerated. In this paper, we present a hybrid memory sub-system that captures data re-use effectively in spite of data-dependent memory accesses. This memory sub-system is made up of a custom parallel cache and a scratchpad memory (SPM). Furthermore, the framework is capable of exploiting 2D spatial locality, which is frequently exhibited in the access patterns of image-processing applications. In a case study involving the quadtree-structured difference pulse code modulation (QSDPCM) application, the impact of data dependence on memory accesses is shown to be significant. In comparison with an implementation that employs only an SPM, performance improvements of up to 1.7× and 1.4× are observed through actual implementation on two modern FPGA platforms. These performance improvements are more pronounced for image sequences exhibiting greater inter-frame movement. In addition, reductions of on-chip memory resources by up to 3.2× are achievable using this framework. These results indicate that, on custom hardware platforms, there is substantial scope for improvement in the capture of data re-use when memory accesses are data dependent.
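The data-reuse problem that the hybrid memory sub-system addresses can be illustrated with a toy model: a data-dependent block search revisits overlapping 2D neighbourhoods, so many off-chip fetches repeat, and an on-chip buffer turns the repeats into hits. The grid and window sizes below are illustrative, not the QSDPCM case study, and the buffer is idealized (unbounded) rather than the paper's parallel cache plus scratchpad.

```python
# Count off-chip transfers with and without an idealized on-chip buffer.

def external_fetches(accesses, cached=False):
    seen, fetches = set(), 0
    for addr in accesses:
        if not cached or addr not in seen:
            fetches += 1       # this access must go off-chip
            seen.add(addr)
    return fetches

# Data-dependent pattern: a 4x4 grid of blocks, each reading a 6x6 window,
# so neighbouring windows overlap by two pixels in each direction.
accesses = [(bx + dx, by + dy)
            for bx in range(0, 16, 4) for by in range(0, 16, 4)
            for dx in range(6) for dy in range(6)]

print(external_fetches(accesses))               # 576: every access goes off-chip
print(external_fetches(accesses, cached=True))  # 324: only unique pixels fetched
```

Capturing the overlap cuts off-chip traffic by about 1.8× in this toy pattern; a real design must achieve this with bounded on-chip storage, which is where the cache/scratchpad partitioning matters.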
85.
B. Babbitt, L. Burtis, P. Dentinger, P. Constantinides, L. Hillis, B. McGirl, L. Huang. Canadian Metallurgical Quarterly, 1993, 4(3): 199-205
Large unilamellar liposomes (d approximately 160 nm) composed of dioleoylphosphatidylethanolamine (DOPE) (80-90%), a negatively charged phospholipid stabilizer (10-20%), and a small amount (0.1-1%) of a haptenated lipid are unusually stable in divalent cation-free isotonic buffer at pH 7.4. The liposomes can be stored under this condition at 4 °C for at least 6 months without any detectable leakage of the entrapped fluorescent dye calcein. However, the liposomes undergo a rapid (1 h) aggregation and lysis reaction in the presence of free bivalent anti-hapten antibody. The liposome destabilization was immunospecific in that it did not occur with normal IgG or in the presence of excess free hapten. Liposome lysis was always accompanied by liposome aggregation. Aggregation and lysis of the liposomes were completed in 5 min if the incubation temperature was raised to 70-80 °C. Replacing DOPE with dioleoylphosphatidylcholine in the liposomes did not abolish the liposome aggregation, but no liposome lysis was observed even at 80 °C. Since liposome aggregation appeared to be a necessary (but not sufficient) prerequisite for liposome lysis, we have named this new class of liposome "contact-sensitive liposomes." The immunodiagnostic potential of the contact-sensitive liposome was demonstrated with liposomes containing theophylline-DOPE. The aggregation and lysis of the liposomes induced by a monoclonal anti-theophylline antibody could be inhibited by free theophylline at concentrations of therapeutic significance. This observation could be the basis of a homogeneous assay for theophylline.
86.
87.
B. Baykal, O. Tanrikulu, G. Constantinides, J. A. Chambers. IEEE Communications Letters, 1999, 3(4): 109-110
New blind adaptive channel equalization techniques based on a deterministic optimization criterion are presented. A family of nonlinear functions is proposed which constitutes a generic class of blind algorithms. These are shown to perform better than conventional constant modulus algorithm (CMA)-like approaches. The advantages include a relaxed stability range on the step size, and an automatic gain control unit, which estimates the gain of the channel, is no longer of crucial importance.
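The notion of a generic class of blind algorithms can be sketched as a gradient-style update with a pluggable nonlinearity. The paper's actual deterministic criterion and family of functions are not reproduced here; the CMA nonlinearity below is used only as one familiar member of such a family, with illustrative parameter values.

```python
# Generic blind update: the nonlinearity phi selects the family member.

def blind_step(w, x, phi, mu=0.01):
    y = w * x                               # equalizer output
    return w - mu * phi(y) * x.conjugate()  # gradient-style blind update

def cma_phi(y):
    return y * (abs(y) ** 2 - 1.0)          # constant-modulus member of the family

s = (1 - 1j) / abs(1 - 1j)                  # unit-modulus transmitted symbol
w = 1.0 + 0.0j
for _ in range(300):
    w = blind_step(w, 2.0 * s, cma_phi)     # channel gain 2 doubles the modulus
print(round(abs(w * 2.0), 2))               # restored output modulus -> 1.0
```

Swapping `cma_phi` for a different nonlinearity changes the cost surface, and hence the stability range of `mu`, without altering the update structure.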
88.
Is concrete a poromechanics material? A multiscale investigation of poroelastic properties (Total citations: 1; self-citations: 0; citations by others: 1)
Materials and Structures - There is an ongoing debate in concrete science and engineering as to whether cementitious materials can be viewed as poromechanics materials in the sense of the porous media...
89.
A simple alternative to the conventional method for the synthesis of digital Chebyshev filters is outlined. Conventionally, digital filters are obtained by transforming continuous filters. It is shown that the synthesis of Chebyshev digital filters may be performed directly in the z-plane, without reference to continuous filters.
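Whatever the synthesis route, via a continuous prototype or directly in the z-plane, the filter must realise the Chebyshev type-I magnitude characteristic, whose squared magnitude is 1/(1 + eps^2 * T_n(w/wc)^2). The sketch below only evaluates that defining characteristic via the Chebyshev recurrence; it does not reproduce the paper's z-plane synthesis procedure, and the order and ripple are illustrative choices.

```python
# Chebyshev polynomial via the recurrence T_{n+1}(x) = 2*x*T_n(x) - T_{n-1}(x).

def chebyshev_T(n, x):
    t0, t1 = 1.0, x
    for _ in range(n - 1):
        t0, t1 = t1, 2.0 * x * t1 - t0
    return t1 if n else t0

def mag_sq(n, eps, w, wc=1.0):
    # Type-I magnitude-squared response: equiripple passband, monotone stopband.
    return 1.0 / (1.0 + eps ** 2 * chebyshev_T(n, w / wc) ** 2)

eps = (10 ** 0.1 - 1) ** 0.5            # 1 dB passband ripple
print(round(mag_sq(4, eps, 0.0), 3))    # ripple trough at DC for even order -> 0.794
print(round(mag_sq(4, eps, 2.0), 6))    # deep attenuation in the stopband
```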
90.
Performing computations with a low-bit number representation results in a faster implementation that uses less silicon, and hence allows an algorithm to be implemented in smaller and cheaper processors without loss of performance. We propose a novel formulation to efficiently exploit the low (or non-standard) precision number representation of some computer architectures when computing the solution to constrained LQR problems, such as those that arise in predictive control. The main idea is to include suitably defined decision variables in the quadratic program, in addition to the states and the inputs, to allow for smaller roundoff errors in the solver. This enables one to trade off the number of bits used for data representation against speed and/or hardware resources, so that smaller numerical errors can be achieved for the same number of bits (same silicon area). Because of data dependencies, the algorithm complexity, in terms of computation time and hardware resources, does not necessarily increase despite the larger number of decision variables. Examples show that a 10-fold reduction in hardware resources is possible compared to using double-precision floating point, without loss of closed-loop performance.
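The core trade-off, roundoff error versus word length, can be sketched by solving a tiny box-constrained QP with a projected-gradient iteration whose every arithmetic result is rounded to a fixed-point format. The QP, the solver, the rounding model, and the bit-widths below are illustrative stand-ins, not the paper's formulation or its extra decision variables.

```python
# Projected gradient on a 1-variable QP under simulated fixed-point rounding.

def quantize(v, frac_bits):
    scale = 1 << frac_bits
    return round(v * scale) / scale          # round-to-nearest fixed point

def solve_qp(frac_bits, steps=60):
    # minimise 0.5*H*u**2 + f*u  s.t.  -1 <= u <= 1   (H=3, f=-1, so u* = 1/3)
    H, f, u = 3.0, -1.0, 0.0
    for _ in range(steps):
        grad = quantize(H * u + f, frac_bits)
        u = quantize(u - 0.25 * grad, frac_bits)
        u = min(1.0, max(-1.0, u))           # projection onto the box
    return u

for b in (4, 8, 16):
    print(b, abs(solve_qp(b) - 1 / 3))       # roundoff error shrinks with bits
```

The solver converges to a quantized fixed point near the true minimiser, and the gap shrinks as fractional bits are added; the paper's contribution is a reformulation that shrinks this gap without adding bits.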