20 similar documents found (search time: 20 ms)
1.
The advent of the Internet of Things (IoT) has motivated the use of Field Programmable Gate Array (FPGA) devices with Dynamic Partial Reconfiguration (DPR) capabilities for dynamic, non-invasive modification of circuits implemented on the FPGA. In particular, the ability to perform DPR over the network is essential in the context of a growing number of IoT-based and embedded security applications. However, the use of remote DPR brings with it a number of security threats that could lead to potentially catastrophic consequences in practical scenarios. In this paper, we demonstrate four examples where the remote DPR capability of the FPGA may be exploited by an adversary to launch Hardware Trojan Horse (HTH) attacks on commonly used security applications. We substantiate the threat by demonstrating remotely launched attacks on Xilinx FPGA-based hardware implementations of a cryptographic algorithm, a true random number generator, and two processor-based security applications, namely a software implementation of a cryptographic algorithm and a cash dispensing scheme. The attacks are launched by on-the-fly transfer of malicious FPGA configuration bitstreams over an Ethernet connection to perform DPR and leak sensitive information. Finally, we comment on plausible countermeasures to prevent such attacks.
2.
Journal of Systems and Software, 2004, 71(3): 201-213
If software for embedded processors is based on a time-triggered architecture, using co-operative task scheduling, the resulting system can have very predictable behaviour. Such a system characteristic is highly desirable in many applications, including (but not restricted to) those with safety-related or safety-critical functions. In practice, a time-triggered, co-operatively scheduled (TTCS) architecture is less widely employed than might be expected, not least because care must be taken during the design and implementation of such systems if the theoretically predicted behaviour is to be obtained. In this paper, we argue that the use of appropriate ‘design patterns’ can greatly simplify the process of creating TTCS systems. We briefly explain the origins of design patterns. We then illustrate how an appropriate set of patterns can be used to facilitate the development of a non-trivial embedded system.
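As a hedged illustration of what such a TTCS architecture looks like in code (the task set, tick rate, and all identifiers below are our own sketch, not patterns taken from the paper): a periodic timer interrupt releases tasks at fixed intervals, and a super-loop dispatcher runs each released task to completion.

```c
#include <stdint.h>

/* Illustrative TTCS scheduler sketch; task set, tick rate, and all
   identifiers are assumptions for this example, not code from the paper. */

typedef struct {
    void (*run)(void);      /* task body: must run to completion */
    uint32_t period;        /* ticks between releases */
    uint32_t delay;         /* ticks until next release */
    volatile uint8_t ready; /* set in the tick ISR, cleared by dispatcher */
} task_t;

static void sample_sensor(void)  { /* read and filter an input ... */ }
static void update_display(void) { /* refresh an output device ... */ }

static task_t tasks[] = {
    { sample_sensor,   10, 0, 0 },  /* every 10 ticks */
    { update_display, 100, 5, 0 },  /* every 100 ticks, offset by 5 */
};
enum { NUM_TASKS = sizeof tasks / sizeof tasks[0] };

/* Periodic timer ISR: releases due tasks, keeping release times jitter-free. */
void tick_isr(void)
{
    for (int i = 0; i < NUM_TASKS; i++) {
        if (tasks[i].delay == 0) {
            tasks[i].ready = 1;
            tasks[i].delay = tasks[i].period;
        }
        tasks[i].delay--;
    }
}

/* Super-loop dispatcher: runs each released task to completion
   (co-operative, so no preemption and no locking are required). */
void dispatch_forever(void)
{
    for (;;) {
        for (int i = 0; i < NUM_TASKS; i++) {
            if (tasks[i].ready) {
                tasks[i].ready = 0;
                tasks[i].run();
            }
        }
        /* optionally enter a low-power sleep until the next tick */
    }
}
```

Keeping the release logic in the ISR and the task bodies in the super-loop is what gives such systems the predictable, low-jitter timing the abstract highlights.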
3.
Arslan Munir, Farinaz Koushanfar, Ann Gordon-Ross, Sanjay Ranka. The Journal of Supercomputing, 2013, 66(1): 431-487
Technological advancements in the silicon industry, as predicted by Moore’s law, have resulted in an increasing number of processor cores on a single chip, giving rise to multicore and, subsequently, many-core architectures. This work focuses on identifying key architecture and software optimizations to attain high performance from tiled many-core architectures (TMAs), an architectural innovation in multicore technology. Although embedded systems design is traditionally power-centric, there has been a recent shift toward high-performance embedded computing due to the proliferation of compute-intensive embedded applications. TMAs are suitable for these embedded applications due to the low-power design features present in many of them. We discuss performance optimizations on a single tile (processor core) as well as parallel performance optimizations, such as application decomposition, cache locality, tile locality, memory balancing, and horizontal communication for TMAs. We elaborate on compiler-based optimizations that are applicable to TMAs, such as function inlining, loop unrolling, and feedback-based optimizations. We present a case study with optimized dense matrix multiplication algorithms for Tilera’s TILEPro64 to experimentally demonstrate the performance and performance-per-watt optimizations on TMAs. Our results quantify the effectiveness of algorithmic choices, cache blocking, compiler optimizations, and horizontal communication in attaining high performance and performance per watt on TMAs.
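To make the cache-blocking optimization evaluated in the case study concrete, here is a hedged sketch of a blocked dense matrix multiply in C (the block size and row-major layout are our assumptions, not the authors' TILEPro64 code):

```c
#include <stddef.h>

/* Illustrative cache-blocked matrix multiply, C = A * B, for row-major
   n x n matrices. BS is a placeholder block size: it would be tuned so
   that three BS x BS tiles fit in a tile's local cache. */
#define BS 32

void matmul_blocked(size_t n, const double *A, const double *B, double *C)
{
    for (size_t i = 0; i < n * n; i++)
        C[i] = 0.0;

    /* iterate over blocks so each operand tile stays cache-resident */
    for (size_t ii = 0; ii < n; ii += BS)
        for (size_t kk = 0; kk < n; kk += BS)
            for (size_t jj = 0; jj < n; jj += BS)
                for (size_t i = ii; i < ii + BS && i < n; i++)
                    for (size_t k = kk; k < kk + BS && k < n; k++) {
                        double a = A[i * n + k];
                        for (size_t j = jj; j < jj + BS && j < n; j++)
                            C[i * n + j] += a * B[k * n + j];
                    }
}
```

The blocked loop nest performs the same arithmetic as the naive triple loop, but reuses each loaded tile many times before evicting it, which is the effect the paper's results quantify.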
4.
The design of very small databases for smart cards and for portable embedded systems is deeply constrained by the peculiar features of the physical medium. Privacy concerns are relevant because personal information (e.g. medical records) may be stored on the card. We propose a joint approach to the logical and physical database design phases supporting the required security levels, on the basis that all information is stored on the Flash-EEPROM storage medium, managed as a file system by the smart card operating system.
5.
In this paper, we explore the use of system-specific static analyses of code to improve software quality for specific software systems. Specialized analyses, tailored to a particular system, make it possible to take advantage of system/domain knowledge that is not available to more generic analyses. Furthermore, analyses can be selected and/or developed to best meet the challenges and specific issues of the system at hand. As a result, such analyses can be used as a complement to more generic code analysis tools, because they are likely to have a better impact on (business) concerns such as improving certain software quality attributes and reducing certain classes of failures. We present a case study of a large, industrial embedded system, giving examples of the kinds of analyses that could be realized and demonstrating the feasibility of implementing such analyses. We synthesize lessons learned from our case study and provide recommendations on how to realize system-specific analyses and how to get them adopted by industry.
6.
Uwe Petermann. Journal of Experimental & Theoretical Artificial Intelligence, 2013, 25(4): 489-498
This case study describes the specification and formal verification of the key part of SPaS, a development tool for the design of open-loop programmable controls developed at the University of Applied Sciences in Leipzig. SPaS translates the high-level representation of an open-loop programmable control into a machine-executable instruction list. The produced instruction list has to exhibit the same behaviour as suggested by the high-level representation. We discuss the following features of the case study: characterization of the correctness requirements, design of a verification strategy, the correctness proof, and the relation to the Common Criteria evaluation standard.
7.
In this paper, fixed-point iteration and Newton’s method for iteratively solving nonlinear equations are studied in a control-theoretic framework. This work is motivated by the ever-increasing demand for integrating iterative solutions of nonlinear functions into embedded control systems. The use of well-established control-theoretic methods for this purpose is inspired by recent control-theoretic studies of numerical analysis. Our study consists of two parts. In the first part, the existing fixed-point iteration and Newton’s methods are analysed using the stability theory for sector-bounded Lur'e systems. The second part is devoted to modified iteration methods and the integration of sensor signals into the iterative computations. The major results of our study are, besides some academic examples, applied to the iterative computation of the air-path model embedded in engine control systems.
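A bare-bones sketch of the two iteration schemes under study, applied to a toy equation of our choosing (f(x) = x² − 2, with root √2; the function, step factor, and tolerances are illustrative, not the engine air-path model):

```c
#include <math.h>
#include <stdio.h>

static double f(double x)      { return x * x - 2.0; }
static double fprime(double x) { return 2.0 * x; }

/* Fixed-point iteration x <- g(x) with g(x) = x - 0.25*f(x);
   the factor 0.25 makes g a contraction near sqrt(2). */
static double fixed_point(double x, int max_iter, double tol)
{
    for (int i = 0; i < max_iter; i++) {
        double next = x - 0.25 * f(x);
        if (fabs(next - x) < tol) return next;
        x = next;
    }
    return x;
}

/* Newton's method x <- x - f(x)/f'(x): faster (quadratic) convergence,
   at the cost of a derivative evaluation per step. */
static double newton(double x, int max_iter, double tol)
{
    for (int i = 0; i < max_iter; i++) {
        double next = x - f(x) / fprime(x);
        if (fabs(next - x) < tol) return next;
        x = next;
    }
    return x;
}

int main(void)
{
    printf("fixed point: %.10f\n", fixed_point(1.0, 100, 1e-10));
    printf("newton     : %.10f\n", newton(1.0, 100, 1e-10));
    return 0;
}
```

Viewing each update rule as a discrete-time feedback system is what lets the paper apply stability theory for sector-bounded nonlinearities to these loops.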
8.
This paper presents a modular architecture called DIPSA, intended for building custom-made real-time computer vision systems. It consists of four module types, each of which represents a family of circuits that perform specific visual tasks. Our architectural model proposes an algorithm-dependent methodology and achieves good results through problem-oriented solutions. The desired performance is obtained by choosing the appropriate modules and connecting them by means of heterogeneous pipelining and concurrency. Additionally, two DIPSA-based hardware systems for real-time color recognition are described here.
9.
The embedded systems domain has grown exponentially over the past years. The industry is forced by the market to rapidly improve and release new products to beat the competition. Frenetic development rhythms thus shape this domain and give rise to several new challenges for software design and development. One of them is dealing with trade-offs between run-time and design-time quality attributes. Objective: to study the practices, processes, and tools concerning the management of run-time and design-time quality attributes, as well as the trade-offs among them, from the perspective of embedded systems software engineers. Method: an exploratory case study with two qualitative data collection steps, namely interviews and a focus group, involving six different companies from the embedded systems domain with a total of twenty participants. Results: the interviewed subjects showed a preference for run-time over design-time qualities. Trade-offs between design-time and run-time qualities are very common, but they are often implicit, due to the lack of adequate monitoring tools and practices. Practitioners prefer to deal with trade-offs in the most lightweight way possible, by applying ad-hoc practices, thus avoiding any incurred overhead. Finally, practitioners elaborated on how they envision ideal tool support for dealing with trade-offs. Although it is notoriously difficult to deal with trade-offs, constantly monitoring the quality attributes of interest with automated tools is key to making trade-offs explicit and prudent and to mitigating the risk of incurring technical debt.
10.
There are still many challenging problems in facial gender recognition, mainly due to the complex variance of facial appearance. Although there has been tremendous research effort to develop robust gender recognition over the past decade, none has explicitly exploited the domain knowledge of the difference in appearance between males and females. The moustache contributes substantially to the difference in facial appearance between males and females, and could be a good feature to incorporate into facial gender recognition. Little work on moustache segmentation has been reported in the literature. In this paper, a novel real-time moustache detection method is proposed which combines face feature extraction, image decolorization, and texture detection. Image decolorization, which converts a color image to grayscale, aims to enhance the color contrast while preserving the grayscale appearance. A moustache, on the other hand, normally appears gray surrounded by skin-colored facial tissue; hence decolorization offers a fast and efficient way to segment the moustache. To make the algorithm robust to variations in illumination and head pose, an adaptive decolorization segmentation is proposed, in which both the segmentation threshold selection and the moustache region tracking are guided by special regions defined by their geometric relationship with the salient facial features. Furthermore, a texture-based moustache classifier is developed to compensate for the decolorization-based segmentation, which may misclassify darker skin or shadows around the mouth, caused by smile lines or thicker skin, as a moustache. A face is verified as containing a moustache only when (1) a sufficiently large moustache region is found by the decolorization segmentation, and (2) the segmented region is classified as a moustache by the texture-based detector. Experimental results on the color FERET database show that the proposed approach achieves an 89 % moustache detection rate at a 0.1 % false acceptance rate. By incorporating the moustache detector into a facial gender recognition system, gender recognition accuracy on a large database improved from 91 % to 93.5 %.
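A minimal sketch of the core decolorize-and-threshold step described above (the BT.601 luminance weights and the fixed threshold are our assumptions; the paper selects the threshold adaptively from regions anchored to facial features):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative core of decolorization-based segmentation: convert an RGB
   region (e.g. the area below the nose) to grayscale and keep dark pixels
   as moustache candidates. The luma weights and the fixed threshold are
   assumptions for this sketch; the paper adapts the threshold per face. */
void segment_moustache(const uint8_t *rgb, size_t w, size_t h,
                       uint8_t threshold, uint8_t *mask)
{
    for (size_t i = 0; i < w * h; i++) {
        const uint8_t *p = rgb + 3 * i;  /* R, G, B */
        uint8_t gray = (uint8_t)((299 * p[0] + 587 * p[1] + 114 * p[2]) / 1000);
        mask[i] = (gray < threshold) ? 255 : 0;  /* dark pixel => candidate */
    }
}
```

The resulting binary mask would then be passed to the texture classifier described in the abstract to reject shadows and smile lines.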
11.
12.
In recent years, fog computing has emerged as a new distributed system model for a large class of applications that are data-intensive or delay-sensitive. By exploiting widely distributed computing infrastructure located closer to the network edge, communication cost and service response time can be significantly reduced. However, developing this class of applications is not straightforward and requires addressing three key challenges, i.e., supporting the dynamic nature of the edge network, managing the context-dependent characteristics of application logic, and dealing with the large scale of the system. In this paper, we present a case study in building fog computing applications using our open source platform Distributed Node-RED (DNR). In particular, we show how applications can be decomposed and deployed to a geographically distributed infrastructure using DNR, and how existing software components can be adapted and reused to participate in fog applications. We present a lab-based implementation of a fog application built using DNR that addresses the first two of the issues highlighted earlier. To validate that our approach also deals with large scale, we augment our live trial with a large-scale simulation of the application model, conducted in OMNeT++, which shows the scalability of the model and how it supports the dynamic nature of fog applications.
13.
A significant problem in the application of rule-based expert systems has arisen in the area of re-engineering such systems to support changes in initial requirements. In dynamic performance environments, the rate of change is accelerated and the re-engineering problem becomes significantly more complex. One mechanism for responding to such dynamic changes is to utilize a cultural algorithm (CA). The CA provides self-adaptive capabilities which can generate the information necessary for the expert system to respond dynamically. To illustrate the approach, a fraud detection expert system was embedded inside a CA. To represent a dynamic performance environment, four different application objectives were used: characterizing fraudulent claims, non-fraudulent claims, false-positive claims, and false-negative claims. The results indicate that a culturally enabled expert system can produce the information necessary to respond to dynamic performance environments.
14.
Sol Pedre, Tomáš Krajník, Elías Todorovich, Patricia Borensztejn. Journal of Real-Time Image Processing, 2016, 11(2): 349-374
Many image processing applications need real-time performance while having restrictions on size, weight, and power consumption. Common solutions, including hardware/software co-designs, are based on Field Programmable Gate Arrays (FPGAs). Their main drawback is long development time. In this work, a co-design methodology for processor-centric embedded systems with hardware acceleration using FPGAs is proposed. The goal of this methodology is to achieve real-time embedded solutions using hardware acceleration, but with development times similar to those of software projects. Well-established methodologies, techniques, and languages from the software domain, such as object-oriented design, the Unified Modeling Language, and multithreaded programming, are applied, and semiautomatic C-to-HDL translation tools and methods are used and compared. The methodology is applied to achieve an embedded implementation of a global vision algorithm for the localization of multiple robots in an e-learning robotic laboratory. The algorithm is specifically developed to work reliably 24/7 and to detect the robots’ positions and headings even in the presence of partial occlusions and the varying lighting conditions expectable in a normal classroom. The co-designed implementation of this algorithm processes 1,600 × 1,200 pixel images at a rate of 32 fps with an estimated energy consumption of 17 mJ per frame. It achieves a 16× acceleration and 92 % energy saving, which compares favorably with the most optimized embedded software solutions. This case study shows the usefulness of the proposed methodology for embedded real-time image processing applications.
15.
This article presents a subgrouping approach to the multi-robot, dynamic multi-task allocation problem. It uses percentile values of the distributional information of the tasks to partition the task space into as many subgroups as there are robotic agents. The subgrouping procedure takes place at run-time and at every designated decision cycle, updating the elements of these subgroups using the relocation information of the elements of the task space. Furthermore, it reduces the complexity of the decision-making process in proportion to the number of agents via the introduction of virtual representatives for these subgroups. The coordination strategy then uses the votes of the robotic agents for these virtual representatives to allocate the available subgroups. We use the elapsed time, the distance traveled, and the frequency of the decision cycle as metrics to analyze the performance of this strategy against the prioritization, instantaneous, and time-extended coordination strategies.
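A toy sketch of percentile-based subgrouping on a one-dimensional task distribution (the task values and robot count are made up for illustration; the paper works with spatial tasks that relocate between decision cycles):

```c
#include <stdio.h>
#include <stdlib.h>

/* Sort tasks, then assign each one to a subgroup by its percentile rank,
   producing exactly as many subgroups as there are robots. */
static int cmp_double(const void *a, const void *b)
{
    double x = *(const double *)a, y = *(const double *)b;
    return (x > y) - (x < y);
}

int main(void)
{
    double tasks[] = { 4.2, 0.5, 9.1, 3.3, 7.7, 1.8, 6.0, 2.4 };
    size_t n = sizeof tasks / sizeof tasks[0];
    size_t robots = 3;  /* number of subgroups == number of agents */

    qsort(tasks, n, sizeof tasks[0], cmp_double);

    for (size_t i = 0; i < n; i++) {
        size_t group = (i * robots) / n;  /* percentile bin, 0..robots-1 */
        printf("task %.1f -> subgroup %zu\n", tasks[i], group);
    }
    return 0;
}
```

In the paper's setting this partitioning would be recomputed at every decision cycle, and each subgroup would be summarized by a virtual representative that the agents vote on.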
16.
There is a tendency for accidents and even fatalities to arise when people enter hazardous work areas during the construction of projects in urban areas. A limited amount of research has been devoted to developing vision-based proximity warning systems that can automatically determine when people enter a hazardous area. Such systems, however, are unable to identify specific hazards and the status of a piece of plant (e.g., an excavator) in real time. In this paper, we address this limitation and develop a real-time smart video surveillance system that can detect people and the status of plant (i.e., moving or stationary) in a hazardous area. The application of this approach is demonstrated during the construction of a mega-project, the Wuhan Rail Transit System in China. We show that our combination of computer vision and deep learning can accurately recognize people in a hazardous work area in real time during the construction of transport projects. The developed system can provide instant feedback concerning unsafe behavior and thus enable appropriate actions to be put in place to prevent its recurrence.
17.
Computers & Education, 2001, 37(1): 27-40
Most educational software is designed to foster students' learning outcomes but with little consideration of the teaching framework in which it will be used. This paper presents a significantly different model of educational software that was derived from a case study of two teachers participating in a software design process. It shows the relationship between particular elements of the teachers' pedagogy and the characteristics of the software design. In this model, the ‘classroom atmosphere’ is embedded in the human–computer interface scenarios and elements, the ‘teaching strategy’ in the design of the browsing strategies of the software, and the ‘learning strategy’ in the particular forms of interaction with the software. The model demonstrates significant links between the study of Pedagogy and the study of Information Technology in Education and has implications for the relationship between these two areas of research and consequently for teacher training. The model proposes a perspective on educational software design that takes into consideration not only learning theories, but also teaching theories and practice.
18.
Elyes Zarrouk, Yassine Ben Ayed, Faiez Gargouri. International Journal of Speech Technology, 2014, 17(3): 223-233
This paper presents a new hybrid method for continuous Arabic speech recognition based on triphone modelling. To do this, we apply Support Vector Machines (SVM) as estimators of posterior probabilities within standard Hidden Markov Models (HMM). In this work, we describe a new approach for categorising Arabic vowels into long and short vowels, applied in the labelling phase of the speech signals. Using this new labelling method, we show that the SVM/HMM hybrid model is more efficient than standard HMMs and the hybrid Multi-Layer Perceptron (MLP)/HMM system. The recognition rates obtained for the triphone-based Arabic speech recognition system are 64.68 % with HMMs, 72.39 % with MLP/HMM, and 74.01 % with the SVM/HMM hybrid model. The word error rate (WER) obtained for the recognition of continuous speech by the three systems confirms the performance of SVM/HMM, which achieves the lowest average over the four tested speakers, 11.42 %.
19.
Lloyd E. Peppard. International Journal of Systems Science, 2013, 44(10): 983-999
The role of state-space and feedback concepts in the development of simulation models for ecological systems is discussed. The modelling process as applied to ecology is illustrated by the development of models for three ecological systems: the growth of a single species limited by food supply, the competition between two species for a common food supply, and a predator–prey relationship. Information available in the literature concerning these systems is used to determine the model structures. Digital simulation results for each model are presented and the effect of parameter variations on model behaviour is investigated. While no attempt is made at formal validation, model behaviour is compared to that observed in nature. The three models appear to possess much of the general dynamic behaviour of actual animal populations, most notably that of cyclic oscillations.
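For the predator–prey case in particular, a minimal sketch of such a state-space simulation (the Lotka–Volterra coefficients and step size below are illustrative, not the article's calibrated values):

```c
#include <stdio.h>

/* Minimal Lotka-Volterra predator-prey simulation with forward-Euler
   integration; the two populations are the state variables. All
   coefficients and the step size are illustrative placeholders. */
int main(void)
{
    double prey = 40.0, pred = 9.0;   /* state variables */
    const double a = 0.1, b = 0.02;   /* prey growth / predation rates */
    const double c = 0.3, d = 0.01;   /* predator death / conversion rates */
    const double dt = 0.1;            /* integration step */

    for (int step = 0; step <= 2000; step++) {
        if (step % 200 == 0)
            printf("t=%6.1f  prey=%8.2f  pred=%8.2f\n",
                   step * dt, prey, pred);
        double dprey = ( a * prey - b * prey * pred) * dt;
        double dpred = (-c * pred + d * prey * pred) * dt;
        prey += dprey;
        pred += dpred;
    }
    return 0;
}
```

Running this produces the cyclic oscillations the abstract notes as characteristic of actual animal populations.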
20.
Abdullah Khalili, Ashkan Sami, Mahdi Azimi, Sara Moshtari, Zahra Salehi, Mahboobe Ghiasi, Ali Akbar Safavi. Empirical Software Engineering, 2016, 21(1): 4-16
Industrial Control Systems (ICS) are a vital part of modern critical infrastructures. Recent attacks on ICS indicate that these systems have various types of vulnerabilities, a large number of which are due to insecure coding in industrial applications. Several international and national organizations, such as NIST, DHS, and US-CERT, have provided extensive documentation on securing ICS; however, they give little detail on securing software applications in industrial settings. A notable point that makes securing ICS difficult is the contradiction between security priorities in ICS and in IT systems. In addition, none of the guidelines discusses how general IT security solutions must be modified for industrial settings. Moreover, to the best of our knowledge, the steps needed to develop a successful real-world secure industrial application have not been reported. In this paper, the first attempt to apply secure coding best practices to a real-world industrial application, a Supervisory Control and Data Acquisition (SCADA) platform called OpenSCADA, is presented. Experiments indicate that resolving the vulnerabilities of OpenSCADA, in addition to possibly improving its availability, does not jeopardize other dimensions of security. In addition, all experiments are backed by appropriate statistical tests to determine whether the improvements are statistically significant.