Doctoral dissertations on the topic "Systèmes embarqués (informatique) – Architecture"
Create an accurate citation in APA, MLA, Chicago, Harvard, and many other styles
Consult the 50 best doctoral dissertations for your research on the topic "Systèmes embarqués (informatique) – Architecture".
Next to every dissertation in the bibliography you will find an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic citation to the selected work in the citation style of your choice: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a ".pdf" file and read its abstract online, whenever the relevant details are available in the metadata.
Browse doctoral dissertations from many disciplines and assemble the corresponding bibliographies.
Gatti, Marc. "Évolution des Architectures des Systèmes Avioniques Embarqués". Electronic Thesis or Diss., Paris 6, 2016. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2016PA066725.pdf.
Full text available
Nowadays, embedded systems are key elements of avionic systems. As more and more functions are integrated, their complexity keeps increasing. To keep this complexity under control, avionic system architectures have also evolved so as to minimize the interactions between pieces of equipment. This architectural evolution introduced, at the avionic level, the notion of network that is widespread in the consumer domain. Our research aims at supporting this architectural evolution while minimizing the impact of the technological breakthroughs that had to be introduced to sustain it. To that end, we propose an approach that de-risks every new technological brick, hardware and/or software, before its introduction into embedded systems, by defining beforehand the conditions and limits of use of each new technology.
Saint-jean, Nicolas. "Etude et conception de systèmes multiprocesseurs auto-adaptatifs pour les systèmes embarqués". Montpellier 2, 2008. http://www.theses.fr/2008MON20207.
Full text available
Gemayel, Charbel El. "Approche comportementale pour la validation et le test système des systèmes embarqués : Application aux dispositifs médicaux embarqués". Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0135/document.
Full text available
Biomedical research seeks sound reasoning for solving medical problems, based on intensive work and extensive debate. It often deals with beliefs or theories that can be proven, disproven, or refined after observations or experiments. The problem is how to run tests without risk for patients, given the variability and uncertainty of many parameters (patients, evolution of the disease, treatments, etc.). Nowadays, medical treatment relies more and more on embedded devices such as sensors, actuators, and controllers; treatment depends on the availability and correct functioning of complex electronic systems comprising thousands of lines of code. A mathematical representation of the patient or device is given by a set of variables representing the inputs and outputs, and a set of equations describing the interaction of these variables. The objective of this research is to develop tools and methodologies for the development of embedded systems for the medical field. The goal is to model and jointly simulate the medical device and the human body, or at least the part of the body that interacts with the device, in order to analyze the performance and quality of service (QoS) of that interaction. To achieve this goal, our study focused on the following points. After defining a prototype of a new global and flexible architecture for a mathematical model of the human body, able to hold the required data, we propose a new global methodology for modeling and simulating the human body and medical systems, in order to better understand how to model and simulate these systems and to assess the performance and quality of service of all system components. We use two techniques to evaluate the computed QoS value. The first calculates a severity index that indicates the severity of the case under study. The second uses a normalization function that represents the simulation as a point, in order to build a new error grid and use it to assess the accuracy of the values measured on patients. Using the Keil development tools designed for ARM processors, we defined a new framework to create a tester model for the glucose-insulin system and to define the basic rules for a tester able to check well-established medical decision criteria. The framework begins by simulating a mathematical model of the human body, developed to operate in the closed glucose-insulin loop. A model of an artificial pancreas is then implemented to control the mathematical model of the human body. Finally, a new tester model is created to analyze the performance of all components of the glucose-insulin system. We use partially observable Markov decision processes to formalize the planning of clinical management.
Charra, Olivier. "Conception de noyaux de systèmes embarqués reconfigurables". Grenoble 1, 2004. http://www.theses.fr/2004GRE10047.
Full text available
The vision of the emergence of a global environment for information management, in which most physical objects around us are equipped with processors and communication capabilities and interconnected through various networks, forces us to redesign computing systems. Instead of heavy, monolithic, non-evolutive systems, we must design light, flexible, and reconfigurable ones. This work presents a new architecture for the design and development of flexible and reconfigurable operating-system kernels for embedded systems.
Sbeyti, Hassan. "Un mécanisme de pré-chargement adaptatif pour les applications multimédias dans les systèmes embarqués". Valenciennes, 2005. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/e32e2ad6-5779-4f3a-8919-8f83b1543071.
Full text available
Multimedia applications in general, and MPEG in particular, are increasingly popular and important workloads for future embedded systems. They are based on algorithms that require high computational power and high memory bandwidth. The high memory-bandwidth requirements affect not only the real-time behaviour of such applications but also their energy consumption. In this thesis, we extracted multimedia-specific characteristics from the memory-access behaviour of multimedia applications running on embedded systems. Based on these characteristics, we proposed a new data-prefetch mechanism called Pattern-Driven Prefetching (PDP). PDP inspects the sequence of data-cache misses and detects recurring patterns within that sequence. According to the patterns detected, PDP initiates prefetch actions to anticipate future cache misses. PDP shows interesting features both for existing embedded systems equipped with small caches and for future high-performance embedded systems equipped with large caches.
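The pattern-detection idea behind PDP can be sketched in a few lines. This is a minimal stride-detecting interpretation of "detect recurring patterns in the miss sequence and prefetch ahead", not the thesis's actual mechanism; the window size and prefetch degree are illustrative assumptions.

```python
def detect_stride(misses, window=4):
    """Return the recurring stride in the last `window` miss addresses, or None."""
    if len(misses) < window:
        return None
    recent = misses[-window:]
    deltas = [b - a for a, b in zip(recent, recent[1:])]
    return deltas[0] if all(d == deltas[0] for d in deltas) else None

def prefetch_candidates(misses, degree=2):
    """Addresses to prefetch once a stable pattern has been observed."""
    stride = detect_stride(misses)
    if stride is None:
        return []                    # no recurring pattern: do not prefetch
    return [misses[-1] + stride * i for i in range(1, degree + 1)]
```

For a miss stream walking an array with a fixed stride, the detector locks on after a few misses and the prefetcher anticipates the next cache lines.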
Amar, Abdelkader. "Envrionnement [sic] fonctionnel distribué et dynamique pour systèmes embarqués". Lille 1, 2003. http://www.theses.fr/2003LIL10109.
Full text available
Darouich, Mehdi. "Reefs : une architecture reconfigurable pour la stéréovision embarquée en contexte temps-réel". Rennes 1, 2010. http://www.theses.fr/2010REN1S151.
Full text available
Stereovision extracts depth information from several images taken from different points of view. In computer vision, it is used to evaluate the distance of objects directly and accurately. In Advanced Driver Assistance Systems (ADAS), many applications need accurate knowledge of the surroundings and can therefore benefit from the 3D information provided by stereovision. The tasks involved run in real time and require a level of performance that hardware accelerators can provide. Moreover, as people's safety is at stake, the reliability of the results is critical, so the hardware solution has to be flexible enough to adapt the processing to each application. Finally, in an embedded context, the silicon area of the chosen hardware solution must be limited. The purpose of this thesis is to design a processing architecture for stereovision that meets ADAS performance requirements while being flexible enough to generate depth maps adapted to various applications. A heterogeneous reconfigurable architecture, named REEFS (Reconfigurable Embedded Engine for Flexible Stereovision), is designed and scaled to meet ADAS requirements and to provide the best trade-off between flexibility, performance, and silicon area.
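The depth-map computation this entry refers to is classically block matching on a rectified image pair. As an illustration of that core computation (not the REEFS design itself), a sum-of-absolute-differences (SAD) matcher looks like this; window size and the image layout are assumptions.

```python
def sad(left, right, x, y, d, w=1):
    """SAD cost of matching left pixel (x, y) with right pixel (x - d, y)."""
    return sum(abs(left[y + j][x + i] - right[y + j][x - d + i])
               for j in range(-w, w + 1) for i in range(-w, w + 1))

def disparity(left, right, x, y, max_d, w=1):
    """Disparity minimizing the SAD cost; depth is proportional to 1/disparity."""
    return min(range(min(max_d, x - w) + 1),
               key=lambda d: sad(left, right, x, y, d, w))
```

A pixel whose pattern appears two columns to the left in the right image gets disparity 2, i.e. it is closer to the cameras than a pixel with disparity 1.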
Ventroux, Nicolas. "Contrôle en ligne des systèmes multiprocesseurs hétérogènes embarqués : élaboration et validation d’une architecture". Rennes 1, 2006. https://hal-cea.archives-ouvertes.fr/tel-01790327.
Full text available
Viswanathan, Venkatasubramanian. "Une architecture évolutive flexible et reconfigurable dynamiquement pour les systèmes embarqués haute performance". Thesis, Valenciennes, 2015. http://www.theses.fr/2015VALE0029.
Full text available
In this thesis, we propose a scalable and customizable reconfigurable computing platform, with a parallel full-duplex switched communication network and a software execution model, to redefine the computation, communication, and reconfiguration paradigms in high-performance embedded systems. High-Performance Embedded Computing (HPEC) applications are becoming highly sophisticated and resource-hungry for three reasons. First, they must capture and process real-time data from several I/O sources in parallel. Second, they must adapt their functionality to application or environment variations within given Size, Weight and Power (SWaP) constraints. Third, since they process several parallel I/O sources, applications are often distributed over multiple computing nodes, making them highly parallel. Thanks to the hardware parallelism and I/O bandwidth offered by Field-Programmable Gate Arrays (FPGAs), an application can be duplicated several times to process parallel I/Os, making Single Program Multiple Data (SPMD) the favourite execution model of designers implementing parallel architectures on FPGAs. Furthermore, the Dynamic Partial Reconfiguration (DPR) feature allows efficient reuse of limited hardware resources, making FPGAs a highly attractive solution for such applications. The problem with current HPEC systems is that they are usually built to meet the needs of a specific application, i.e., they lack the flexibility to upgrade the system or reuse existing hardware resources, whereas the applications that run on such hardware are constantly being upgraded. There is therefore a real need for flexible, scalable hardware architectures and parallel execution models that make it possible to upgrade the system and reuse hardware resources within acceptable time bounds; without them, these applications face obsolescence, hardware redesign costs, slow sequential reconfiguration, and wasted computing power. Addressing these challenges, we propose an architecture that allows the customization of computing nodes (FPGAs), the broadcast of data (I/O, bitstreams), and the reconfiguration of several computing nodes, or a subset of them, in parallel. The software environment leverages the potential of the hardware switch to support the SPMD execution model. Finally, to demonstrate the benefits of our architecture, we implemented a scalable, distributed, secure H.264 encoding application along with several avionic communication protocols for data and control transfers between the nodes. We used an FMC-based high-speed serial Front Panel Data Port (sFPDP) data-acquisition protocol to capture, encode, and encrypt raw video streams. The system was implemented on three different FPGAs, respecting the SPMD execution model. In addition, we implemented modular I/Os by swapping I/O protocols dynamically when required by the system. We thus demonstrated a scalable, flexible architecture and a parallel runtime reconfiguration model able to manage several parallel input video sources. These results are a conceptual proof of a massively parallel, dynamically reconfigurable next generation of embedded computers.
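The gain from broadcasting bitstreams over a switched network, rather than sending them node by node, can be made concrete with a back-of-the-envelope model. This is an assumed first-order model, not a measurement from the thesis: it ignores switch latency and per-node setup.

```python
def unicast_reconfig_time(n_nodes, bitstream_bits, link_bps):
    """Point-to-point: the bitstream is sent once per node, sequentially."""
    return n_nodes * bitstream_bits / link_bps

def broadcast_reconfig_time(n_nodes, bitstream_bits, link_bps):
    """Multicast over the switch: one transfer reaches all selected nodes."""
    return bitstream_bits / link_bps if n_nodes else 0.0
```

Under this model, reconfiguring a subset of N identical nodes in parallel is N times faster than sequential DPR, which is the intuition behind the parallel reconfiguration paradigm described above.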
Babau, Jean-Philippe. "Formalisation et structuration des architectures opérationnelles pour les systèmes embarqués temps réel". Habilitation à diriger des recherches, INSA de Lyon, 2005. http://tel.archives-ouvertes.fr/tel-00502510.
Pełny tekst źródłaEustache, Yvan. "Reconfigurations algorithmiques et architecturales régulées : contribution à l'auto-adaptation des systèmes embarqués". Lorient, 2008. http://www.theses.fr/2008LORIS117.
Full text available
Reconfigurable systems are today a way to respond efficiently both to economic constraints, which call for more flexibility and hardware-component reuse, and to the performance constraints of increasingly complex applications. Reconfiguration management is not yet fully mastered, and little research addresses the decision-making and self-adaptation of embedded systems according to their environment. In this thesis, we therefore propose a solution for designing self-reconfigurable embedded systems driven by quality-of-service, performance, and power-consumption objectives. Our approach is based on two original contributions. The first is a decision component based on an adaptive closed-loop model updated with real measurements, which moreover provides a clear separation between application-specific and global decisions. The second is an extension of operating-system services for the transparent management of hardware and software tasks according to configuration decisions. The self-adaptation method has been formalized theoretically and implemented on a real-life demonstrator, which proved its relevance on a complex image-processing application: a smart camera for object tracking. Measurements of power consumption, logic area, and execution time show that the self-adaptation components only weakly perturb system performance.
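A closed-loop reconfiguration decision of the kind this entry describes can be sketched as a selector that compares measured load and a power budget against the known characteristics of each hardware or software implementation. The configuration table, field names, and selection rule here are invented for illustration; the thesis's decision component is richer.

```python
def decide(configs, measured_load, power_budget):
    """Pick the lowest-power configuration that sustains the measured load.

    configs: list of dicts with 'name', 'throughput', 'power' (assumed schema).
    Returns None when no configuration satisfies both objectives.
    """
    feasible = [c for c in configs
                if c["throughput"] >= measured_load and c["power"] <= power_budget]
    return min(feasible, key=lambda c: c["power"]) if feasible else None
```

Called periodically with fresh measurements, such a rule switches a task between its software and hardware implementations as the load evolves, which is the regulated-reconfiguration idea in miniature.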
Petreto, Andrea. "Débruitage vidéo temps réel pour systèmes embarqués". Electronic Thesis or Diss., Sorbonne université, 2020. http://www.theses.fr/2020SORUS060.
Full text available
In many applications, noisy video is a major problem. Some denoising methods are highly effective, but at the cost of very high computational complexity; other, faster methods are limited in their applications because they do not handle high noise levels correctly. For many applications, however, it is very important to preserve good image quality in every situation, sometimes under strong embedded constraints. The goal of this work is to propose an embedded solution for live video denoising that remains effective even under high levels of noise. We limit our work to embedded CPUs under 30 W of power consumption. This work led to a new video denoising algorithm called RTE-VD: Real-Time Embedded Video Denoising. RTE-VD consists of three steps: stabilization, motion compensation by dense optical-flow estimation, and spatio-temporal filtering. On an embedded CPU (Jetson AGX), RTE-VD runs at 30 frames per second on qHD video (960x540 pixels). To achieve such performance, many compromises and optimizations were necessary. We compare RTE-VD to other state-of-the-art methods in terms of both denoising capability and processing time, and show that RTE-VD offers a new, relevant trade-off between quality and speed.
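The three-stage structure named in the abstract (stabilize, motion-compensate, filter) can be sketched as a per-frame pipeline. Only the structure is taken from the abstract: the stabilization and optical-flow stages here are trivial stand-ins, and the temporal filter is a simple recursive blend rather than RTE-VD's actual spatio-temporal filter.

```python
def stabilize(frame):
    return frame                       # stand-in: assume no global camera motion

def compensate(prev_out, flow):
    return prev_out                    # stand-in: assume zero optical flow

def temporal_blend(prev_out, frame, alpha=0.5):
    """Recursive temporal filter: blend compensated history with the new frame."""
    return [[alpha * p + (1 - alpha) * c for p, c in zip(pr, fr)]
            for pr, fr in zip(prev_out, frame)]

def rte_vd_step(prev_out, frame, flow=None):
    """One pipeline iteration over a frame (2D list of pixel intensities)."""
    return temporal_blend(compensate(prev_out, flow), stabilize(frame))
```

Each stage could then be replaced by a real implementation without changing the pipeline's shape, which is how such a decomposition is typically optimized stage by stage.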
Nguyen, Tien Thanh. "Model-driven architecture exploration for fault tolerance improvement". Thesis, Nantes, 2019. http://www.theses.fr/2019NANT4059.
Full text available
Reliability has become a very important feature in the design process of embedded systems, so the development of fault-tolerance strategies is among the priorities of their early design phases. This thesis aims to establish a framework for finding the best platform solution for a given application on heterogeneous Multi-Processor System-on-Chip (MPSoC) systems, where the solution found must integrate fault tolerance. A new platform meta-model integrating fault tolerance is presented, which serves as an infrastructure for building models. These models are then the inputs of a Design Space Exploration (DSE) process. Starting from the user specification, the explored dimensions include hardware choice, task mapping, data mapping, and fault-tolerance-strategy choice. Each new solution is generated and evaluated in terms of execution time, cost, and reliability level, and an optimization process then searches the design space for the best solution. A new tool with a graphical user interface allows the user to model and run the DSE process; it simplifies the process by interacting with the user through the graphical interface and by automating the exploration of the design space. Evaluating a heterogeneous MPSoC platform under the impact of transient and permanent faults is a very important part of the DSE, as it helps designers choose an appropriate fault-tolerance strategy as a compromise with the requirements of the application. Finally, case studies are investigated; experimental results show that the DSE framework provides an effective exploration of large design spaces.
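A design-space exploration of the kind this entry describes can be reduced to a toy form: enumerate task-to-processor mappings, evaluate each, and keep the best one satisfying a fault-tolerance constraint. Everything concrete here is assumed (the "hardened processor" constraint, the cost model, exhaustive enumeration); real DSE uses metaheuristics on far larger spaces.

```python
import itertools

def explore(task_times, proc_speed, proc_cost, hardened, critical_task):
    """Exhaustive toy DSE: minimize (makespan, cost) under a reliability constraint.

    The critical task must be mapped to a hardened processor (index in `hardened`).
    Returns ((makespan, cost), mapping) or None if nothing is feasible.
    """
    best = None
    for mapping in itertools.product(range(len(proc_speed)), repeat=len(task_times)):
        if mapping[critical_task] not in hardened:
            continue                                  # violates fault tolerance
        load = [0.0] * len(proc_speed)
        for t, p in zip(task_times, mapping):
            load[p] += t / proc_speed[p]              # heterogeneous speeds
        cost = sum(proc_cost[p] for p in set(mapping))  # pay only for used procs
        key = (max(load), cost)
        if best is None or key < best[0]:
            best = (key, mapping)
    return best
```

Even this toy version shows the essential trade-off: the cheapest feasible mapping may pack tasks onto the hardened processor, while the fastest one spreads them out at a higher cost.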
Isavudeen, Ali. "Architecture Dynamiquement Auto-adaptable pour Systèmes de Vision Embarquée Multi-capteurs". Thesis, Paris Est, 2017. http://www.theses.fr/2017PESC1071.
Full text available
An embedded multi-sensor vision system involves several types of image sensors, such as colour, infrared, or low-light sensors, whose characteristics often vary (different resolutions, frame rates, and pixel depths). The vision system therefore has to handle several heterogeneous image streams, and this multiplicity and heterogeneity of sensors help it face various environmental contexts. We consider a multi-sensor vision system that has to operate in different areas (city, sea, forest), perform several operations (multispectral fusion, panoramic imaging, multifocus), and cope with various luminosity conditions: day, night, or low light. The challenge in designing an architecture for such a vision system is that the working context can vary dynamically, and the designer has to take this dynamic variation into account. The architecture should be flexible enough to adapt its processing to the requirements of the context, and it must be able to detect any variation of the context and adapt itself accordingly. Above all, the design must satisfy the area and power constraints of an embedded, portable system. In this thesis, we propose an embedded monitor enabling the dynamic self-adaptation of Safran's current multi-stream architecture. The monitor accomplishes two tasks: it continuously observes changes in both the external context (the type of area and the luminosity conditions) and the internal context (the current status of the vision system and its architecture), and it decides which adaptation the architecture needs in response to a context variation. To perform the adaptation, the monitor sends adaptation commands to the controllers of the architecture. We introduce a Network-on-Chip (NoC) based interconnection layer, inspired by our previous work [Ng2011], to carry the monitoring traffic. This layer allows observing and commanding the processing stages without disturbing the existing pixel streams: the routers of the NoC route observation data from the processing stages to the monitor, and adaptation commands from the monitor to the processing stages, while taking the heterogeneity of working frequencies into account. Finally, we present a memory controller that enables dynamic allocation of the frame memory. When the working context changes, memory-resource requirements change too, so for an optimized and economical use of resources we propose to adapt the frame-buffer allocation dynamically. The proposed controller can also manage the bandwidth of the frame memory dynamically: we introduce a weighted round-robin method whose weights can be adapted on the fly. Our proposal has been evaluated on a typical Safran multi-stream architecture and implemented on an FPGA target; area was evaluated through synthesis for an ALTERA Cyclone V FPGA (5CGX), and latency through ModelSim simulations.
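The weighted round-robin bandwidth sharing with on-the-fly weight updates mentioned above can be sketched as a credit-based arbiter. This is an assumed software model of the idea, not the thesis's hardware controller; stream names and the credit-reload policy are illustrative.

```python
class WeightedRoundRobin:
    """Credit-based weighted round-robin arbiter with run-time weight updates."""

    def __init__(self, weights):
        self.weights = dict(weights)   # stream id -> grants per round
        self.credits = dict(weights)   # remaining grants in the current round

    def set_weight(self, stream, w):
        """Adapt a stream's bandwidth share on the fly (takes effect next round)."""
        self.weights[stream] = w

    def grant(self, requesters):
        """Grant the memory port to one requesting stream with credits left."""
        for s in requesters:
            if self.credits.get(s, 0) > 0:
                self.credits[s] -= 1
                return s
        self.credits = dict(self.weights)   # round over: reload from weights
        return self.grant(requesters)
```

With weights {ir: 2, color: 1}, the infrared stream gets two memory accesses for every colour access; calling `set_weight` re-balances the shares from the next round on, which is the "adapt the weights on-the-fly" behaviour.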
Khenfri, Fouad. "Optimisation holistique pour la configuration d’une architecture logicielle embarquée : application au standard AUTOSAR". Thesis, Nantes, 2016. http://www.theses.fr/2016NANT4002/document.
Full text available
AUTOSAR (AUTomotive Open System ARchitecture) was created by automotive manufacturers, suppliers, and tool developers to establish an open industry standard for automotive E/E (Electrical/Electronic) architectures. AUTOSAR provides a set of concepts and defines a common methodology for developing automotive software platforms. The key features of this standard are the modularity and configurability of automotive software, which allow the functional reuse of software modules provided by different suppliers and guarantee the interoperability of these modules through standardized interfaces. However, developing an embedded application according to AUTOSAR requires configuring a large number of parameters related to the many Software Components (SWCs), their allocation to the hardware platform, and the configuration of each Electronic Control Unit (ECU). Different alternatives are possible during the design of such systems, and each implementation decision may impact system performance; it therefore needs to be evaluated and compared against performance constraints and optimization goals. In this thesis, we introduce a holistic optimization approach to synthesize the E/E architecture of an embedded AUTOSAR system, based on heuristic and metaheuristic methods. A metaheuristic (e.g., a genetic algorithm) finds the most satisfactory allocations of SWCs to ECUs. At each allocation step, two heuristics solve the ECU configuration problem (the number of tasks and their priorities, the allocation of runnables to tasks, etc.) and the network configuration problem (the number of messages and their priorities, the allocation of data elements to messages, etc.). To evaluate the performance of each allocation, we propose a new analysis method that computes the response times of tasks, runnables, and end-to-end paths. The architectural exploration approach proposed in this thesis considers a model for periodic applications and is evaluated on generic and industrial applications.
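The response-time computation used to evaluate each allocation can be illustrated by the textbook fixed-priority iteration R_i = C_i + sum over higher-priority tasks of ceil(R_i / T_j) * C_j. This is the classical analysis, shown for orientation; the thesis's own method covers runnables and end-to-end paths and is not reproduced here.

```python
import math

def response_time(tasks, i):
    """Worst-case response time of task i, or None if it misses its deadline.

    tasks: list of (C, T) pairs sorted by decreasing priority; the deadline
    is assumed equal to the period T (an assumption of this illustration).
    """
    c_i, t_i = tasks[i]
    r = c_i
    while True:
        # interference from every higher-priority task j released during r
        r_next = c_i + sum(math.ceil(r / t_j) * c_j for c_j, t_j in tasks[:i])
        if r_next == r:
            return r          # fixed point reached: this is the WCRT
        if r_next > t_i:
            return None       # response exceeds the period: unschedulable
        r = r_next
```

For the task set {(1, 4), (2, 6), (3, 12)} the iteration converges to response times 1, 3, and 10, so all three tasks meet their (implicit) deadlines.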
Garcia, Samuel. "Architecture reconfigurable dynamiquement a grain fin pour le support d'un système d'exploitation temps réel". Paris 6, 2012. http://www.theses.fr/2012PA066495.
Full text available
Most anticipated future applications share four major characteristics: they require increased computing capacity, they must take real time into account, they represent a big step in complexity compared with today's typical applications, and they have to deal with the dynamic nature of the real physical world. Fine-Grained Dynamically Reconfigurable Architectures (FGDRAs) can be seen as the next evolution of today's FPGAs, aimed at handling very dynamic and complex real-time applications while providing comparable potential computing power, thanks to the possibility of fine-tuning the execution architecture at a fine-grain level. To make this kind of device usable by real application designers, its complexity has to be abstracted by an operating-system layer and an adequate tool set; this combination would form an adequate solution to support future applications. This thesis presents an innovative FGDRA architecture called OLLAF, which answers both technical issues of reconfigurable computing and practical problems of application designers. The whole architecture is designed to work in symbiosis with an operating system. The studies presented here focus in particular on hardware task-management mechanisms in a preemptive system. We first present our attempts to implement such mechanisms using existing FPGAs, and show that existing architectures have to evolve to efficiently support an operating system in highly dynamic real-time situations. The OLLAF architecture is then explained, with an emphasis on the hardware task-management mechanism. Finally, we present two studies proving that this approach brings a huge gain over existing platforms in terms of operating-system overhead, even in static application cases where dynamic reconfiguration is used only for sharing computing resources. For highly dynamic real-time cases, we show that OLLAF not only lowers the overhead but also supports cases that existing devices simply cannot handle.
Dessiatnikoff, Anthony. "Analyse de vulnérabilités de systèmes avioniques embarqués : classification et expérimentation". Phd thesis, INSA de Toulouse, 2014. http://tel.archives-ouvertes.fr/tel-01032444.
Full text available
Maillet, Luc. "Spécification et validation d'une architecture de système distribué pour le contrôle d'exécution d'applications temps réel complexes". Toulouse, ENSAE, 1996. http://www.theses.fr/1996ESAE0007.
Full text available
Pouillon, Nicolas. "Modèle de programmation pour applications parallèles multitâches et outil de déploiement sur architecture multicore à mémoire partagée". Paris 6, 2011. http://www.theses.fr/2011PA066389.
Pełny tekst źródłaJovanovic, Slavisa. "Architecture reconfigurable de système embarqué auto-organisé". Thesis, Nancy 1, 2009. http://www.theses.fr/2009NAN10099/document.
Full text available
The growing complexity of computing systems, mostly due to the rapid progress of Information Technology (IT) in the last decade, forces system designers to move from traditional design concepts towards new ones based on self-organizing and self-adaptive architectural solutions. On the one hand, these new architectural solutions should provide a system with sufficient computing power; on the other hand, they should provide great flexibility and adaptivity in order to cope with all the non-deterministic changes and events that may occur in the environment in which the system evolves. Within this framework, a reconfigurable self-organizing MPSoC architecture on FPGA reconfigurable technology is studied and developed in this PhD.
Brunie, Nicolas. "Contribution à l'arithmétique des ordinateurs et applications aux systèmes embarqués". Thesis, Lyon, École normale supérieure, 2014. http://www.theses.fr/2014ENSL0894/document.
Full text available
In the last decades, embedded systems have been challenged with ever more application variety under ever tighter constraints, which implies a growing need for performance and energy efficiency in arithmetic units. This work studies solutions, ranging from hardware to software, to improve arithmetic support in embedded systems; some of them were integrated in Kalray's MPPA processor. The first part of this work focuses on floating-point arithmetic support in the MPPA. It starts with the design of a floating-point unit (FPU) based on the classical FMA (fused multiply-add) operator. The improvements we suggest, implement, and evaluate include a mixed-precision FMA, a three-operand add, and a 2D scalar product, each with a single rounding and support for subnormal numbers. It then considers the implementation of division and square root: the FPU is reused and modified to optimize the software implementations of those primitives at a lower cost. Finally, this first part opens up on the development of a code generator designed for the implementation of highly optimized mathematical libraries in different contexts (architecture, accuracy, latency, throughput). The second part studies a reconfigurable coprocessor, a hardware operator that can be dynamically modified to adapt on the fly to various applicative needs. It aims to provide performance close to an ASIC implementation with some of the flexibility of software. One of the challenges addressed is the integration of such a reconfigurable coprocessor into the low-power embedded cluster of the MPPA; another is the development of a software framework targeting the coprocessor and allowing design-space exploration. The last part of this work leaves micro-architectural considerations to study the efficient use of parallel arithmetic resources. It presents an improvement of regular architectures (Single Instruction Multiple Data), like those found in graphics processing units (GPUs), for the execution of divergent control-flow graphs.
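Why the "single rounding" of an FMA matters can be demonstrated with Python's `decimal` module as a stand-in for binary floating point: an exact product-plus-sum rounded once can differ from a separately rounded multiply then add. This only illustrates the numerical property; it is unrelated to the MPPA FPU's implementation.

```python
from decimal import Decimal, localcontext

def fma_dec(a, b, c, prec=4):
    """a*b + c computed exactly, then rounded ONCE to `prec` significant digits."""
    with localcontext() as ctx:
        ctx.prec = 50                       # wide enough to be exact here
        exact = Decimal(a) * Decimal(b) + Decimal(c)
    with localcontext() as ctx:
        ctx.prec = prec
        return +exact                       # unary plus applies the one rounding

def mul_add_dec(a, b, c, prec=4):
    """Separate multiply then add: TWO roundings, as on a machine without FMA."""
    with localcontext() as ctx:
        ctx.prec = prec
        return +(Decimal(a) * Decimal(b)) + Decimal(c)
```

With 4 significant digits, 1.234 * 1.001 - 1.234 is exactly 0.001234; the fused version keeps all of it, while the two-rounding version first rounds the product to 1.235 and returns only 0.001, losing three digits to cancellation.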
Azar, Céline. "On the design of a distributed adaptive manycore architecture for embedded systems". Lorient, 2012. http://www.theses.fr/2012LORIS268.
Full text available
Chip-design challenges have lately emerged at many levels: the increase in the number of cores at the hardware level, the complexity of parallel programming models at the software level, and the dynamic requirements of current applications. Facing this evolution, this PhD thesis aims to design a distributed adaptive manycore architecture, named CEDAR (Configurable Embedded Distributed ARchitecture), whose main assets are scalability, flexibility, and simplicity. The CEDAR platform is an array of homogeneous, small-footprint RISC processors, each connected to its four nearest neighbours. No global control exists; it is distributed among the cores. Two versions of the platform are designed, along with a user-familiar programming model. A software version, CEDAR-S, is the basic implementation, in which adjacent cores are connected to each other via shared buffers. A co-processor called DMC (Direct Management of Communications) is added in the CEDAR-H version to optimize the routing protocol; the DMCs are interconnected in a mesh fashion. Two novel concepts are proposed to enhance the adaptiveness of CEDAR. First, a distributed dynamic routing strategy, based on a bio-inspired algorithm, handles routing in a non-supervised fashion, independently of the physical placement of communicating tasks. Second, dynamic distributed task migration responds to several system and application requirements. Results show that CEDAR achieves high performance with its optimized routing strategy compared with state-of-the-art networks. The migration cost is evaluated and adequate protocols are presented. CEDAR proves to be a promising design concept for future manycores.
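Placement-independent routing on a mesh, as this entry describes, can be sketched with a hop-count gradient: flood a distance field from the node currently hosting the destination task, then let each router forward greedily downhill. This gradient scheme is a stand-in for CEDAR's bio-inspired algorithm, which the abstract does not detail; note that after a task migrates, only the gradient needs re-flooding, so senders never need to know the task's physical position.

```python
from collections import deque

def neighbours(x, y, w, h):
    """4-connected mesh neighbours of core (x, y) on a w x h grid."""
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < w and 0 <= y + dy < h:
            yield x + dx, y + dy

def gradient(dest, w, h):
    """Hop-count field flooded (BFS) from the node hosting the destination task."""
    dist, q = {dest: 0}, deque([dest])
    while q:
        x, y = q.popleft()
        for n in neighbours(x, y, w, h):
            if n not in dist:
                dist[n] = dist[(x, y)] + 1
                q.append(n)
    return dist

def route(src, dest, w, h):
    """Each hop forwards to any neighbour with a smaller gradient value."""
    dist, path = gradient(dest, w, h), [src]
    while path[-1] != dest:
        x, y = path[-1]
        path.append(min(neighbours(x, y, w, h), key=dist.__getitem__))
    return path
```

On a 3x3 mesh, a packet from core (0,0) to the task on core (2,2) takes a 4-hop shortest path regardless of which core the task was originally placed on.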
Dorie, Laurent. "Modélisation et évaluation de performances en vue de la conception conjointe des systèmes reconfigurables : application à la radio logicielle". Nantes, 2007. http://www.theses.fr/2007NANT2107.
Pełny tekst źródła
The fast evolution of the embedded system context leads to more and more complexity in electronic products, which may support many operating modes and different standards. In these systems, reconfiguration is a solution to cope with such evolution while respecting embedded constraints. This property means that a system is able to modify its behaviour, and it concerns application development as much as technology design. New approaches and tools are needed to take this reconfiguration property into account. The goal of this thesis is thus to provide high-abstraction-level models in order to improve the co-design of reconfigurable systems. The first part of this thesis deals with the reconfiguration mechanisms of radiocommunication systems. It led to the definition of models describing the reconfigurable mechanisms of radiocommunication applications. The second part focuses on reconfigurable architectures. It led to a model able to describe the impact of reconfigurable heterogeneous multi-processor platforms on system behaviour and performance. The interest of these models is illustrated by a study of a typical Software Radio use case.
Boukhechem, Sami. "Contribution à la mise en place d'une plateforme open-source MPSoC sous SystemC pour la Co-simulation d'architectures hétérogènes". Dijon, 2008. http://www.theses.fr/2008DIJOS045.
Pełny tekst źródła
The increasing complexity of embedded systems forces system designers to use higher levels of abstraction than RTL in order to model, validate and analyze system performance. This helps prevent costly redesign efforts at RTL, which can adversely affect time-to-market. In this thesis we propose the methodology we used to construct a simulation environment at the TLM level (Transaction Level Modeling), integrated into our STARSoC tool (Synthesis Tool for Adaptive and Reconfigurable System-On-Chip). The aim of this work is to provide rapid and accurate design space exploration at higher levels of abstraction for multiprocessor system-on-chip architectures. The reference platform design contains several OpenRISC 1200 Instruction Set Simulators (ISSs) wrapped under SystemC, and some basic peripherals such as a bus model based on the Wishbone protocol, memory models, etc. In order to ensure a single development environment, we used SystemC as the modeling and simulation environment for our MPSoC platform at higher levels of abstraction. This tool is integrated under the Eclipse IDE.
Harb, Naim. "Dynamically and Partially Reconfigurable Embedded System Architecture for Automotive and Multimedia Applications". Valenciennes, 2011. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/1810c575-b28e-4817-a3be-f0527631eabd.
Pełny tekst źródła
Short time-to-market windows, high design and fabrication costs, and fast-changing standards make application-specific processors a costly and risky investment for embedded system designers. To overcome these problems, embedded system designers are increasingly relying on Field Programmable Gate Arrays (FPGAs) as target design platforms. FPGAs are generally slower and consume more power than application-specific integrated circuits (ASICs), which can restrict their use to limited application domains. However, recent advances in FPGA architectures, such as dynamic partial reconfiguration (DPR), are helping bridge this gap. DPR reduces area and enables mutually exclusive subsystems to share the same physical space on a chip. It also reduces complexity, which usually results in faster circuits and lower power consumption. The work in this PhD first targets a Driver Assistant System (DAS) based on a Multiple Target Tracking (MTT) algorithm as our automotive base system. We present a dynamically reconfigurable filtering hardware block for MTT applications in DAS. Our system shows that there is no reconfiguration overhead, because the system keeps functioning with the original configuration until it reconfigures itself. The free reconfigurable regions can be implemented as improvement blocks for other DAS functionalities. Two approaches were used to design the filtering block according to driving conditions. We then target another DPR-based application, the H.264 encoder, as a multimedia system. For the H.264 multimedia system, we propose a reconfigurable H.264 Motion Estimation (ME) unit whose architecture can be modified to meet specific energy and image-quality constraints. Using DPR, we were able to support multiple configurations, each with a different level of accuracy and energy consumption. Image accuracy levels were controlled via application demands, user demands or support demands.
Carpov, Sergiu. "Scheduling for memory management and prefetch in embedded multi-core architectures". Compiègne, 2011. http://www.theses.fr/2011COMP1962.
Pełny tekst źródła
This PhD thesis is devoted to the study of several combinatorial optimization problems which arise in the field of parallel embedded computing. Optimal memory management and related scheduling problems for dataflow applications executed on massively multi-core processors are studied. Two memory access optimization techniques are considered: data reuse and prefetch. Memory access management is instantiated into three combinatorial optimization problems. In the first problem, a prefetching strategy for dataflow applications is investigated so as to minimize the application execution time. This problem is modeled as a hybrid flow shop under precedence constraints, an NP-hard problem. A heuristic resolution algorithm together with two lower bounds are proposed so as to conservatively, though fairly tightly, estimate the distance to optimality. The second problem concerns optimal prefetch management strategies for branching structures (data-controlled tasks). Several objective functions, as well as prefetching techniques, are examined. In all these cases polynomial resolution algorithms are proposed. The third problem consists in ordering a set of tasks so as to minimize the number of times memory data are fetched, thereby optimizing data reuse across the task set. This problem is NP-hard, a result we have established, and we propose two heuristic algorithms for it. The optimality gap of the heuristic solutions is estimated using exact solutions, obtained with a branch-and-bound method we have proposed.
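The third problem above (ordering tasks to maximize data reuse) can be given a flavor with a toy greedy heuristic. This is a hypothetical sketch for intuition only, not one of the thesis's algorithms, and its cache model (only the previous task's blocks stay resident) is a deliberate simplification.

```python
# Hypothetical greedy heuristic for the data-reuse ordering problem:
# each task needs a set of data blocks; a block is fetched only when it is
# not already resident from the previously executed task.
def order_tasks_greedy(tasks):
    """tasks: dict mapping task name -> set of data-block ids.
    Returns (execution order, total number of block fetches)."""
    remaining = dict(tasks)
    order, resident, fetches = [], set(), 0
    while remaining:
        # choose the task that reuses the most currently resident blocks
        name = max(remaining, key=lambda t: len(remaining[t] & resident))
        needed = remaining.pop(name)
        fetches += len(needed - resident)  # non-resident blocks must be fetched
        resident = needed                  # simplistic model: memory holds one task's data
        order.append(name)
    return order, fetches

tasks = {"A": {1, 2}, "B": {2, 3}, "C": {3, 4}, "D": {1, 4}}
order, fetches = order_tasks_greedy(tasks)  # 5 fetches instead of 8 without reuse
```

The thesis instead establishes NP-hardness of the exact problem and bounds such heuristics against exact branch-and-bound solutions.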
Bhasin, Shivam. "Contre-mesures au niveau logique pour sécuriser les architectures de crypto-processeurs dans les FPGA". Paris, Télécom ParisTech, 2011. https://pastel.hal.science/pastel-00683079.
Pełny tekst źródła
Modern field programmable gate arrays (FPGAs) are capable of implementing complex systems on chip (SoC) and providing high performance; therefore, FPGAs are finding wide application. A complex SoC generally contains embedded cryptographic cores to encrypt/decrypt data to ensure security. These cryptographic cores are computationally secure, but their physical implementations can be compromised using side-channel attacks (SCA) or fault attacks (FA). This thesis focuses on countermeasures for securing cryptographic cores on FPGAs. First, a register-transfer-level countermeasure called "Unrolling" is proposed. This hiding countermeasure executes multiple rounds of a cryptographic algorithm per clock cycle, which allows deeper diffusion of data. Results show excellent resistance against SCA. This is followed by countermeasures based on dual-rail precharge logic (DPL), which form a major part of this work. Wave dynamic differential logic (WDDL), a commonly used DPL countermeasure well suited to FPGAs, is studied. Analysis of WDDL (and DPL in general) against FA revealed that it is resistant to a majority of faults. Therefore, if the flaws of DPL, namely the early propagation effect (EPE) and technological imbalance, are fixed, DPL can evolve into a common countermeasure against both SCA and FA. Continuing along this line of research, we propose two new countermeasures: DPL without EPE and Balanced-Cell-based DPL (BCDL). Finally, advanced evaluation tools such as stochastic models, mutual information and combined attacks are discussed, which are useful when analyzing countermeasures.
Abutaha, Mohammed. "Real-Time and Portable Chaos-based Crypto-Compression Systems for Efficient Embedded Architectures". Thesis, Nantes, 2017. http://www.theses.fr/2017NANT4010/document.
Pełny tekst źródła
Image and video protection have gained a lot of momentum over the last decades. In this work, we first designed and realized, in an efficient and secure way, a pseudo-chaotic number generator (PCNG) implemented in sequential and parallel (P-threads) versions. Based on these PCNGs, two central applications were designed, implemented and analyzed. The former deals with the realization of a random number generator (RNG) based on the PCNG, and the obtained results are very promising. The latter concerns the realization of a chaos-based stream cipher. The cryptographic analysis and the statistical study of the realized chaotic systems show their robustness against known attacks. This result is due to the proposed recursive architecture, which has strong non-linearity, a disturbance technique, and chaotic multiplexing. The computing performance indicates suitability for real-time applications. Second, based on the previous chaotic system, we designed and implemented, in an effective manner, a real-time joint crypto-compression system for embedded architectures. An end-to-end selective encryption solution that protects privacy in HEVC video content is realized. A ROI encryption is then performed at the CABAC bin-string level for the most sensitive HEVC parameters, including motion vectors and transform coefficients. Format-compliant encryption of Intra Prediction Modes has also been investigated; it slightly increases the bit rate. Subjective evaluation and objective rate-distortion-complexity tests showed that the proposed solution protects privacy in HEVC video content with a small overhead in bit rate and coding complexity.
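To give a flavor of the building blocks named above (strong non-linearity, a disturbance technique and chaotic multiplexing), here is a deliberately toy pseudo-chaotic generator. The actual PCNG of the thesis is a recursive, hardware-oriented design; every map choice and constant below is an assumption for illustration only.

```python
# Toy pseudo-chaotic number generator (illustrative only, NOT the thesis design):
# two logistic maps provide the non-linearity, one map's state multiplexes the
# other's output, and a tiny additive disturbance perturbs the orbit.
def pcng(seed1, seed2, n, r=3.99):
    x, y = seed1, seed2
    out = []
    for i in range(n):
        x = r * x * (1.0 - x)            # logistic map 1 (chaotic for r near 4)
        y = r * y * (1.0 - y)            # logistic map 2
        z = x if y < 0.5 else y          # chaotic multiplexing between the maps
        z = (z + i * 1e-9) % 1.0         # crude disturbance of the orbit
        out.append(int(z * 256) & 0xFF)  # quantize the state into one byte
    return out

stream = pcng(0.123456, 0.654321, 16)    # 16 pseudo-random bytes
```

A real design must additionally pass statistical test suites and resist cryptanalysis, which is precisely what the thesis evaluates.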
Picioroaga, Florentin. "Scalable and efficient middleware for real-time embedded systems : A uniform open service oriented, microkernel based architecture". Université Louis Pasteur (Strasbourg) (1971-2008), 2004. https://publication-theses.unistra.fr/public/theses_doctorat/2004/PICIOROAGA_Florentin_2004.pdf.
Pełny tekst źródłaAlouani, Ihsen. "Conception de systèmes embarqués fiables et auto-réglables : applications sur les systèmes de transport ferroviaire". Thesis, Valenciennes, 2016. http://www.theses.fr/2016VALE0013/document.
Pełny tekst źródła
During the last few decades, tremendous progress in the performance of semiconductor devices has been accomplished. In this emerging era of high-performance applications, machines need not only to be efficient but also dependable at circuit and system levels. Several works have been proposed to increase embedded systems' efficiency by reducing the gap between software flexibility and hardware high performance. Due to their reconfigurable aspect, Field Programmable Gate Arrays (FPGAs) represent a relevant step towards bridging this performance/flexibility gap. Nevertheless, Dynamic Reconfiguration (DR) has continuously suffered from a bottleneck: long reconfiguration time. In this thesis, we propose a novel medium-grained, high-speed dynamic reconfiguration technique for DSP48E1-based circuits. The idea is to take advantage of the runtime reprogrammability of DSP48E1 slices, coupled with a re-routable interconnection block, to change the overall circuit functionality in one clock cycle. In addition to embedded systems' efficiency, this thesis deals with the reliability challenges of new sub-micron electronic systems. Indeed, as new technologies rely on reduced transistor size and lower supply voltages to improve performance, electronic circuits are becoming remarkably sensitive and increasingly susceptible to transient errors. The system-level impact of these errors can be far-reaching, and Single Event Transients (SETs) have become a serious threat to embedded systems' reliability, especially for safety-critical applications such as transportation systems. Reliability enhancement techniques based on overestimated soft error rates (SERs) can lead to unnecessary resource overheads as well as high power consumption.
Considering error masking phenomena is a fundamental element for an accurate estimation of SERs. This thesis proposes a new cross-layer model of circuit vulnerability based on combined modeling of Transistor-Level (TLM) and System-Level Masking (SLM) mechanisms. We then use this model to build a self-adaptive fault-tolerant architecture that evaluates the circuit's effective vulnerability at runtime. Accordingly, the reliability enhancement strategy is adapted to protect only the vulnerable parts of the system, leading to a reliable circuit with optimized overheads. Experiments performed on a radar-based obstacle detection system for railway transportation show that the proposed approach allows relevant reliability/resource utilization trade-offs.
Kofman, Émilien. "Adéquation algorithme architecture automatisée par solveur SMT". Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4009/document.
Pełny tekst źródła
We describe the Symsched methodology and environment for AAA design (Algorithm-Architecture Adequation). It allows evaluating the energy/performance balance of a given embedded system. We translate the different components of the problem (application requirements and architecture provisions) into a system of equations and inequalities made of integer variables, for the modeling of temporal aspects, and boolean variables, for the modeling of admissible task mappings and resource states. We then submit this problem to an automatic SMT (Satisfiability Modulo Theories) solver. We study the scalability of this methodology and its trade-offs with model expressiveness. We then study synthetic, realistic and real scheduling problems using this approach.
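The encoding idea above (integer start times plus boolean-style mapping choices under deadline and resource constraints) can be sketched in miniature. A real flow would hand these constraints to an SMT solver; in this illustrative stand-in, exhaustive search plays the solver's role for two tasks on two processors, and all task names and numbers are made up.

```python
# Tiny stand-in for the constraint-based scheduling idea (illustrative only):
# integer start times + task-to-processor choices, checked against deadline
# and mutual-exclusion constraints, then searched exhaustively.
from itertools import product

DURATION = {"t1": 3, "t2": 2}   # assumed task durations
DEADLINE = 5                    # assumed global deadline

def feasible(start, proc):
    # no two tasks may overlap on the same processor
    for a, b in [("t1", "t2")]:
        if proc[a] == proc[b]:
            if not (start[a] + DURATION[a] <= start[b] or
                    start[b] + DURATION[b] <= start[a]):
                return False
    # every task must finish before the deadline
    return all(start[t] + DURATION[t] <= DEADLINE for t in DURATION)

def solve():
    tasks = list(DURATION)
    for starts in product(range(DEADLINE), repeat=len(tasks)):
        for procs in product(["p0", "p1"], repeat=len(tasks)):
            start = dict(zip(tasks, starts))
            proc = dict(zip(tasks, procs))
            if feasible(start, proc):
                return start, proc
    return None

solution = solve()  # e.g. both tasks start at time 0 on different processors
```

An SMT solver explores the same space symbolically rather than by enumeration, which is what makes the approach scale to realistic problem sizes.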
Tawk, Melhem. "Accélération de la simulation par échantillonnage dans les architectures multiprocesseurs embarquées". Valenciennes, 2009. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/860a8e09-e347-4f85-83bd-d94ca890483d.
Pełny tekst źródła
Embedded system design relies heavily on simulation to evaluate and validate new platforms before implementation. Nevertheless, as technological advances allow the realization of ever more complex circuits, the simulation time of these systems is increasing considerably. This problem arises mostly in the case of embedded multiprocessor architectures (MPSoC), which offer high performance (in terms of instructions/Joule) but require powerful simulators. For such systems, simulation should be accelerated in order to speed up the design flow and thus reduce time-to-market. In this thesis, we propose a series of solutions aimed at accelerating the simulation of MPSoCs. The proposed methods are based on application sampling: the parallel applications are first analyzed in order to detect the different phases which compose them. Thereafter, during simulation, the phases executed in parallel are combined to generate clusters of phases. We developed techniques to generate clusters, detect repeated ones and record their statistics efficiently. Each cluster represents a sample of similar execution intervals of the application; detecting these similar intervals spares us from simulating the same sample several times. To reduce the number of clusters in the applications and to increase the occurrence count of simulated clusters, an optimization of the method was proposed to dynamically adapt the phase size of the applications. This makes it possible to easily detect the scenarios of the executed clusters when the applications' behavior repeats. Finally, to make our methodology viable in an MPSoC design environment, we propose efficient techniques to construct the real system state at the simulation starting point (checkpoint) of a cluster.
Grivault, Ludovic. "Architecture multi-agent pour la conception et l'ordonnancement de systèmes multi-senseur embarqués sur plateformes aéroportées". Electronic Thesis or Diss., Sorbonne université, 2018. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2018SORUS152.pdf.
Pełny tekst źródła
The problem of planning and scheduling the sensors of an airborne platform can be likened to an active perception problem, maximizing knowledge of the environment and optimizing the resulting sensor actions. The decision-making problem entails the need for an architecture within which the decision algorithms, the sensor products and the data allowing their interpretation are organized as efficiently as possible, while conforming to the constraints of the context. The products of the planning phase must then be executed with minimal delay by the sensors, which requires reactive and efficient scheduling adapted to mission criticality and the complexity of the environment. Deciding which sensor actions to perform, with regard to the platform's environment and its command, requires a planning capability spanning all sensors, in order to make decisions according to the perceptions and operating constraints of each sensor. A multi-sensor system, as presented in this manuscript, can be formally related to a job-shop workshop in which the machines correspond to the SMS sensors. In recent years the number of sensors on board airborne platforms has grown continuously. We show in this manuscript how the multi-agent paradigm enables the design of an architecture that responds to this context and its medium-term evolution, and how heuristic-based scheduling optimizes the on-board sensors.
Vaslin, Romain. "Hardware core for off-chip memory security management in embedded system". Lorient, 2008. http://www.theses.fr/2008LORIS119.
Pełny tekst źródła
We offer a secure hardware architecture for system boot-up, secure software execution and in-field update. A new scheme is presented to guarantee data confidentiality and integrity for off-chip memories. The architecture's capabilities are extended to support on-the-fly security-level management of data. The goal is to minimize the overhead due to security, in terms of logic area, performance, memory footprint and power consumption. After careful evaluation through real-time application execution on this secure architecture, the next step was to provide an end-to-end solution. Toward this solution, a secure boot-up mechanism is proposed in order to securely start applications from a flash memory. Further techniques are also introduced to allow in-field software updates for later secure execution on the architecture. A complete set of results has been generated to underline that the proposed solution matches the current needs and constraints of embedded systems. For the first time, the security cost in area, performance, memory and power has been evaluated for embedded systems with an end-to-end solution.
Casalino, Lorenzo. "(On) The Impact of the Micro-architecture on Countermeasures against Side-Channel Attacks". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS036.
Pełny tekst źródła
Side-channel attacks are recognized as a threat to the confidentiality of data, in particular on embedded systems. The masking countermeasure constitutes a provably secure protection approach. Nonetheless, physical non-idealities reduce its proven security guarantees. In particular, in software implementations, the Instruction Set Architecture (ISA) supported by a processor hides from the masking-scheme designer one cause of such physical non-idealities: the micro-architecture. As such, the designer is not aware of the actual micro-architecture-induced side-channel sources and their security impact on a software implementation. Information can leak, for instance, during state transitions of hidden registers, or when signals of combinatorial elements exhibit different propagation times. Furthermore, speculative features and the memory subsystem can play a role in such information leakage. Several methodologies allow mitigating the impact of the micro-architecture on masked software implementations, but these approaches depend on detailed knowledge of the micro-architecture, which implies several shortcomings: limited portability of the security guarantees between different micro-architectures, incomplete knowledge of the micro-architecture, and the complexity of micro-architecture design. Thus, one might wonder whether there exist approaches less dependent on the underlying micro-architecture. With this thesis, we address, along two axes, the problem of developing practically secure masked software. The first axis targets the automated development of masked software resilient to transition-based leakages.
We propose a methodology that takes advantage of optimizing compilers: given as input a software implementation, annotated with sensitive-data-related information, and a description of the target micro-architecture, we show how to exploit instruction scheduling and register allocation to mitigate transition-based leakages in an automated manner. The second axis targets an architecture-independent approach. In the literature, most works focus on mitigating the impact of the micro-architecture on software implementations protected with the so-called Boolean masking scheme. Theoretical studies show the better resilience of alternative types of masking schemes against transition-based leakages, suggesting their employment against micro-architectural leakage; yet their practical resilience has not been explored. Furthermore, the potential exploitation of the information leaked by data parallelism, possibly induced by the micro-architecture, has not been studied for software implementations. As such, we study the practical security offered by first-order Boolean, arithmetic and inner-product masking against micro-architecture-induced leakage, encompassing data parallelism as well. We first show that data parallelism can also manifest on simple scalar micro-architectures. Then, we evaluate the impact of transition-based leakage and data parallelism on values masked with the studied masking schemes. Eventually, we evaluate the impact of such information leakages on different masked implementations of the AES-128 cryptosystem. We show that, despite their different leakage resilience, none of the studied masking schemes can perfectly mitigate the considered micro-architectural leakages.
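The first-order Boolean masking scheme mentioned above can be sketched in a few lines. This is a generic textbook illustration under assumed 8-bit operands, not one of the thesis's evaluated implementations; it shows why XOR-linear operations are cheap to compute share-wise.

```python
# First-order Boolean masking sketch (illustrative, not the thesis code):
# a sensitive byte x is split into shares (m, x ^ m) so that neither share
# alone reveals x; linear (XOR) operations are applied share-wise.
import secrets

def mask(x):
    m = secrets.randbits(8)      # fresh random 8-bit mask
    return (m, x ^ m)            # the pair of shares

def unmask(s0, s1):
    return s0 ^ s1               # recombining the shares recovers x

def masked_xor(a, b):
    # XOR of two masked values, computed without ever recombining either input
    return (a[0] ^ b[0], a[1] ^ b[1])

a, b = mask(0x3A), mask(0xC5)
c = unmask(*masked_xor(a, b))    # equals 0x3A ^ 0xC5 == 0xFF
```

Non-linear operations (e.g. the AES S-box) are the expensive part of any masking scheme, and a register transition that overwrites one share with the other is exactly the kind of micro-architectural effect that can recombine them physically, which is the leakage this thesis studies.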
Krichen, Fatma. "Architectures logicielles à composants reconfigurables pour les systèmes temps réel répartis embarqués (TR²E)". Phd thesis, Université Toulouse le Mirail - Toulouse II, 2013. http://tel.archives-ouvertes.fr/tel-00921209.
Pełny tekst źródłaCargnini, Luís Vitório. "Applications des technologies mémoires MRAM appliquées aux processeurs embarqués". Thesis, Montpellier 2, 2013. http://www.theses.fr/2013MON20091/document.
Pełny tekst źródła
With the advent of sub-micron manufacturing flows below 45 nm, the semiconductor industry began to face new challenges to keep evolving in accordance with Moore's Law. With the widespread adoption of embedded systems, the power consumption of ICs became a major constraint. Moreover, memory technologies such as SRAM, the current standard for the integrated memory hierarchy, or FLASH for non-volatile storage, face extremely intricate constraints to yield memory arrays at technological nodes below 45 nm. One important point is that, up until now, non-volatile memories were not adopted into the memory hierarchy, due to their density and, like flash, the necessity of multi-voltage operation. This thesis works on these constraints and provides some answers. It presents methods, and results extracted from these methods, to support our goal of delineating a roadmap for adopting a new memory technology: non-volatile, low-power, low-leakage, SEU/MEU-resistant, scalable, with performance similar to current SRAM and physically equivalent to SRAM or even better, with an area density between 4 and 8 times that of an SRAM cell, and without the need for multi-voltage domains like FLASH. This memory is MRAM (magnetic RAM), according to the ITRS a candidate to replace SRAM in the near future. Instead of storing charge, MRAM stores the magnetic orientation given by the spin-torque orientation of the free-layer alloy in the MTJ (Magnetic Tunnel Junction). Spin is a quantum state of matter which, in some metallic materials, can have its orientation or its torque switched by applying a current polarized in the sense of the desired field orientation.
Once the magnetic orientation is set, using a sense amplifier and a current flowing through the MTJ (the memory cell element of MRAM), it is possible to measure the orientation from the resistance variation: the higher the resistance, the lower the passing current, and the sense amplifier identifies a logic zero; the lower the resistance, the SA senses a logic one. The information is therefore not a stored charge but a magnetic field orientation, which is why it is not affected by SEUs or MEUs caused by high-energy particles; nor does it rely on voltage variations trapping charges in a floating gate to change the memory cell content. Regarding MRAM applied to the memory hierarchy, this thesis addresses the following aspects: describing the current state of the art in MRAM design and its use in the memory hierarchy; providing an overview of a mechanism to mitigate MRAM write latency at the cache level (the composite memory bank principle); analyzing the power characteristics of a system based on MRAM in the L1 and L2 caches, using a dedicated evaluation flow; proposing a methodology to infer a system's power consumption and performance; and finally, based on the memory bank analysis, describing a composite memory bank, a simple way to generate a memory bank with some compromise in power but latency equivalent to SRAM, keeping similar performance.
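The read mechanism described above (comparing the cell current against a reference, with the abstract's convention that higher resistance reads as logic zero) can be mimicked by a trivial model. The resistance values and threshold below are made-up numbers for illustration, not real device parameters.

```python
# Toy model of an MRAM read (illustrative, assumed numbers only).
# Following the abstract's convention: high MTJ resistance -> logic 0,
# low resistance -> logic 1 (lower resistance lets more current pass).
R_PARALLEL = 2.0e3       # ohms, assumed low-resistance MTJ state
R_ANTIPARALLEL = 4.0e3   # ohms, assumed high-resistance MTJ state
V_READ = 0.1             # volts applied during the read

def sense(r_mtj, r_threshold=3.0e3):
    """Compare the cell current against the sense amplifier's reference."""
    i_cell = V_READ / r_mtj           # higher resistance -> lower current
    i_ref = V_READ / r_threshold      # reference current of the sense amp
    return 0 if i_cell < i_ref else 1 # low current reads as logic 0

bit0 = sense(R_ANTIPARALLEL)  # high-resistance state
bit1 = sense(R_PARALLEL)      # low-resistance state
```

In a real array, the reference current comes from reference cells, and the tunnel magnetoresistance ratio determines how much margin the sense amplifier has.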
Guerre, Alexandre. "Approche hiérarchique pour la gestion dynamique des tâches et des communications dans les architectures massivement parallèles programmables". Paris 11, 2010. http://www.theses.fr/2010PA112102.
Pełny tekst źródła
Nowadays, embedded systems have many uses: cell phones, GPS, etc. Moreover, all these applications are becoming more complex. Hence, the embedded world needs powerful and flexible processors able to manage the execution of dynamic applications. Single processors have reached their limits and cannot provide enough computing power while respecting embedded constraints. To solve this problem, embedded systems use multi-core processors. This thesis focuses on the problem of communication in many-core processors and the management of thousands of tasks on this kind of architecture. It presents an execution model and a many-core architecture able to respect embedded constraints. The architecture is composed of clusters of processors, with hierarchical control to manage the execution of tasks and communications. The application is cut into linear task groups, which are dynamically dispatched on the architecture. We demonstrate that a hierarchical approach can provide a significant benefit in terms of transistor efficiency in embedded systems.
Pierrefeu, Lionel. "Algorithmes et architecture pour l'authentification de visages en situation réelle : système embarqué sur FPGA". Saint-Etienne, 2009. http://www.theses.fr/2009STET4024.
Pełny tekst źródła
This thesis lies at the intersection of the image processing and embedded systems domains. More specifically, the aim of this work is to study and develop an on-chip system capable of efficiently performing face detection, face recognition and face identification. The goal is to design a consumer electronic product while taking into account constraints such as real-time processing and uncontrollable acquisition conditions. This work consists in selecting and developing algorithms suitable for face recognition applications and optimizing them for the best compromise between performance and processing cost in the hardware implementation. This document is composed of three parts. The first part deals with face authentication algorithms, presenting an overview of existing approaches and details of the selected RBF-type neural network solution. The second part studies the system's sensitivity to general face acquisition conditions (range of lighting and positioning of the face in images) and presents the chain of algorithms developed to increase the system's robustness. The final part presents the choices made in view of the potential parallelism of the selected algorithms, and details the results obtained for the integration of the complete system on an FPGA.
Senni, Sophiane. "Exploration of non-volatile magnetic memory for processor architecture". Thesis, Montpellier, 2015. http://www.theses.fr/2015MONTS264/document.
Pełny tekst źródła
With the downscaling of the complementary metal-oxide semiconductor (CMOS) technology, designing dense and energy-efficient systems-on-chip (SoC) is becoming a real challenge. Concerning density, reducing the CMOS transistor size faces up to manufacturing constraints while the cost increases exponentially. Regarding energy, a significant increase of the power density and dissipation obstructs further improvement in performance. This issue is mainly due to the growth of the leakage current of CMOS transistors, which leads to an increase of the static energy consumption. Observing current SoCs, more and more area is occupied by embedded volatile memories, such as static random access memory (SRAM) and dynamic random access memory (DRAM). As a result, a significant proportion of total power is spent in memory systems. In the past two decades, alternative memory technologies have emerged with attractive characteristics to mitigate the aforementioned issues. Among these technologies, magnetic random access memory (MRAM) is a promising candidate as it simultaneously combines high density and very low static power consumption, while its performance is competitive compared to SRAM and DRAM. Moreover, MRAM is non-volatile. This capability, if present in embedded memories, has the potential to add new features to SoCs to enhance energy efficiency and reliability. In this thesis, an area, performance and energy exploration of embedding the MRAM technology in the memory hierarchy of a processor architecture is investigated. A first fine-grain exploration was made at the cache level for multi-core architectures. A second study evaluated the possibility of designing a non-volatile processor integrating MRAM at the register level. Within the context of the internet of things, the new features and benefits brought by non-volatility were investigated.
Maalej, Issam. "Exploration haut niveau des architectures multiprocesseurs : analyse et métrique". Lorient, 2007. http://www.theses.fr/2007LORIS092.
Architecture exploration is a fundamental step in the design flow of embedded systems, since the decisions made at this level have a significant impact on the final performance of the system. Applications and architectures have evolved and are still evolving, which increases the complexity of architecture exploration approaches. Indeed, these approaches have reached their limits and are less efficient at handling applications that include a huge number of tasks on multiprocessor platforms with an increasing number of processors. In this PhD, to address this major obstacle, we tackle the issues raised by the exploration of multiprocessor architectures with a high number of processors for applications that include many tasks. A study has been performed to identify and analyse the design problems caused by these applications, and a new architecture exploration approach has been implemented to overcome them. For that purpose, a multi-PACM (Processor Accelerator Coprocessor Memory) architecture template has been established to represent both the architecture specification and its new parameters (proximity, parallelism and software diversity). Our design space exploration approach is divided into two steps. The exploration step is preceded by a pre-exploration step which takes place at a higher (functional) level in order to reduce the architecture space as well as the exploration costs and complexity. The pre-exploration step consists in distributing tasks into the architecture's PACMs; a distribution of tasks among the PACMs is called a "partition". Pre-exploration is a multi-objective optimisation aiming at maximising six metrics that have been defined and formalized. The purpose of these metrics is to optimise the distribution of data exchanges, data sharing and throughput constraints at the level of partitions, in order to optimise the system's time, area and consumption.
Projecting the time, area and consumption space used in traditional methods into a six-metric-based space reduces the exploration costs, since the metrics are less dependent on technology. This approach, based on a genetic algorithm, is flexible and helps the designer enrich and guide the exploration process. UMTS transmitter, AC3 signal encoding and ICAM object tracking applications have been used to validate the metrics and their analysis through the genetic algorithm. They have also demonstrated the exploration approach and its ability to cope with an extended design space, both from an application and an architecture point of view.
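As a rough illustration of the pre-exploration step, the sketch below distributes tasks among PACMs with a tiny genetic algorithm. The six metrics of the thesis are collapsed into two invented ones (communication locality and load balance), and every name, weight and figure is an assumption for the example, not taken from the thesis.

```python
import random

TASKS = 8          # number of application tasks
PACMS = 3          # number of PACM units in the architecture template
# Data exchanged between task pairs (symmetric, arbitrary example values).
COMM = {(0, 1): 4, (1, 2): 2, (3, 4): 5, (5, 6): 3, (6, 7): 1}

def locality(partition):
    """Reward data exchanges kept inside a single PACM."""
    return sum(w for (a, b), w in COMM.items() if partition[a] == partition[b])

def balance(partition):
    """Penalize uneven task distribution across PACMs."""
    counts = [partition.count(p) for p in range(PACMS)]
    return -(max(counts) - min(counts))

def fitness(partition):
    # A weighted sum stands in for the multi-objective optimisation.
    return locality(partition) + 2 * balance(partition)

def evolve(generations=200, pop_size=30, seed=0):
    """Elitist genetic algorithm over task-to-PACM partitions."""
    rng = random.Random(seed)
    pop = [[rng.randrange(PACMS) for _ in range(TASKS)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # keep the best half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, TASKS)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:              # point mutation
                child[rng.randrange(TASKS)] = rng.randrange(PACMS)
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Because the best individual is always retained, the search cannot regress; a real exploration flow would replace the weighted sum with a Pareto ranking over the six metrics.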
Monthe, Djiadeu Valéry Marcial. "Développement des systèmes logiciels par transformation de modèles : application aux systèmes embarqués et à la robotique". Thesis, Brest, 2017. http://www.theses.fr/2017BRES0113/document.
With the construction of increasingly complex robots, the growth of robotic software architectures and the explosion of an ever greater diversity of applications and robot missions, the design, development and integration of the software entities of robotic systems constitute a major problem for the robotics community. Indeed, robotic software architectures and software development platforms for robotics are numerous, and depend on the type of robot (service, collaborative, agricultural, medical, etc.) and its usage mode (in a cage, outdoors, environments with obstacles, etc.). The maintenance effort of these platforms and their development cost are therefore considerable. Roboticists are thus asking themselves a fundamental question: how can the development costs of robotic software systems be reduced, while increasing their quality and preserving the specificity and independence of each robotic system? This question induces several others: on the one hand, how to describe and encapsulate the various functions that the robot must provide as a set of interacting software entities? And on the other hand, how to give these software entities properties of modularity, portability, reusability, interoperability, etc.? In our opinion, one of the most promising answers to this question is to raise the level of abstraction at which the software entities that make up robotic systems are defined. To do this, we turn to model-driven engineering, specifically the design of a Domain Specific Modeling Language (DSML). In this thesis, we first carry out a comparative study of the modeling languages and methods used in the development of embedded real-time systems in general, in order to see whether any of them can answer the aforementioned questions of the roboticists.
This study not only shows that these approaches are not adapted to the definition of robotic software architectures, but mainly results in a framework, which we propose, that helps choose the method(s) and/or modeling language(s) best suited to the needs of the designer. Subsequently, we propose a DSML called Robotic Software Architecture Modeling Language (RsaML) for the definition of robotic software architectures with real-time properties. To do this, a meta-model is proposed, built from the concepts that roboticists are used to when defining their applications; it constitutes the abstract syntax of the language. Real-time properties are identified and included in the relevant concepts. Semantic rules of the robotics domain are then defined as OCL constraints and integrated into the meta-model, so that non-functional and real-time property checks can be performed on the constructed models. The Eclipse Modeling Framework has been used to implement an editor that supports the RsaML language. The rest of the work done in this thesis involved defining model transformations and using them to implement generators. These generators make it possible, from a constructed RsaML model, to produce its documentation and its source code in the C language. These contributions are validated through a case study describing a scenario based on the Khepera III robot.
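The model-to-text idea behind such generators can be pictured with a minimal sketch. The dictionary-based model, the OCL-like semantic check and the generated C skeleton below are invented stand-ins for illustration only; the actual RsaML generators operate on EMF models.

```python
# Invented RsaML-like model: an architecture with periodic components.
MODEL = {
    "architecture": "ObstacleAvoider",
    "components": [
        {"name": "sonar_driver",   "period_ms": 50},
        {"name": "motion_planner", "period_ms": 100},
    ],
}

def check_model(model):
    """Stand-in for an OCL-style semantic rule: periods must be positive."""
    for c in model["components"]:
        if c["period_ms"] <= 0:
            raise ValueError(f"{c['name']}: period must be > 0")

def generate_c(model):
    """Emit a C skeleton: one periodic step function per component."""
    lines = [f"/* Generated from RsaML model: {model['architecture']} */"]
    for c in model["components"]:
        lines.append(f"void {c['name']}_step(void); /* every {c['period_ms']} ms */")
    return "\n".join(lines)

check_model(MODEL)      # validate before generating, as in the RsaML flow
print(generate_c(MODEL))
```

The point of the pattern is the separation of concerns: constraints are checked on the model, and code generation only ever sees models that passed the checks.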
Lelong, Lionel. "Architecture SoC-FPGA pour la mesure temps réel par traitement d'images. Conception d'un système embarqué : imageur CMOS et circuit logique programmable". Saint-Etienne, 2005. http://www.theses.fr/2005STET4008.
Particle Image Velocimetry (PIV) is a non-intrusive, multi-point technique for measuring a motion vector field. It uses a cross-correlation algorithm between two images to estimate the motion. The amount of computation required by this method limits its use to off-line processing on a computer, and computer performance remains insufficient for this type of application under real-time constraints on high data rates. In view of these specific needs, the definition and design of dedicated architectures appears to be an adequate way to reach significant performance. The evolution of integration levels allows the development of structures dedicated to real-time image processing at low cost. We propose a hardware implementation of the cross-correlation algorithm adapted to the internal architecture of FPGAs, with the aim of achieving real-time PIV. In this thesis, we were interested in the design of a System-on-a-Chip architecture dedicated to the measurement of physical parameters by real-time image processing. It is a hierarchical and modular architecture dedicated to data-flow-dominated applications. This hierarchical description allows the number and/or nature of elements to be modified without changing the architecture. One measurement computation takes 267 µs with an FPGA running at 50 MHz. To estimate the system performance, a CMOS image sensor was connected directly to the FPGA, making it possible to build a compact, dedicated and easily reusable system. An architecture made up of 5 computation modules satisfies the real-time processing constraint with this prototype.
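The cross-correlation step at the heart of PIV can be illustrated in software. The NumPy sketch below (window size and test data are invented for the example) recovers the displacement of an interrogation window as the argmax of a circular cross-correlation, which is the computation the thesis maps onto FPGA modules.

```python
import numpy as np

def displacement(win_a, win_b):
    """Return the (dy, dx) shift that best aligns win_a onto win_b."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    # FFT-based circular cross-correlation of the two windows.
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    h, w = corr.shape
    # Map argmax indices to signed shifts.
    return (int(dy - h) if dy > h // 2 else int(dy),
            int(dx - w) if dx > w // 2 else int(dx))

# Synthetic example: a particle field shifted by (2, 3) pixels between frames.
rng = np.random.default_rng(0)
frame_a = rng.random((32, 32))
frame_b = np.roll(frame_a, shift=(2, 3), axis=(0, 1))
print(displacement(frame_a, frame_b))  # recovers the (2, 3) shift
```

An FPGA implementation would typically replace the FFT with direct correlation over a small search area, but the argmax-of-correlation principle is the same.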
Tournier, Jean-Charles. "Qinna : une architecture à base de composants pour la gestion de la qualité de service dans les systèmes embarqués mobiles". Phd thesis, INSA de Lyon, 2005. http://tel.archives-ouvertes.fr/tel-00009704.
This thesis presents a quality-of-service (QoS) management architecture for component-based mobile embedded systems. This architecture, called Qinna, is defined using Fractal components and enables the implementation, as well as the dynamic management, of QoS contracts between the different components of a system. The originality of the proposed approach is that it takes QoS into account at every level of the system (application, services, operating system and resources).
The Qinna architecture was validated by a qualitative evaluation based on generic architecture patterns, then by a quantitative evaluation showing that the overhead of the architecture remains low.
This work opens many research perspectives, notably generalizing the approach (the definition of an abstract component architecture for managing a non-functional property, here QoS) to other non-functional properties (for example security or fault tolerance), and drawing conclusions from it on the definition and generation of open containers.
Ma, Yue. "Compositional modeling of globally asynchronous locally synchronous (GALS) architectures in a polychronous model of computation". Rennes 1, 2010. https://tel.archives-ouvertes.fr/tel-00675438.
AADL is dedicated to the high-level design and evaluation of embedded systems. It describes the structure of a system and its functional aspects through a component-based approach. Locally synchronous processes are allocated onto a distributed architecture and communicate in a globally asynchronous manner (a GALS system). A specificity of the polychronous model is that it allows the specification of a system whose components may each have their own activation clock: it is well suited to a GALS design methodology. In this framework, the Polychrony toolset provides models and methods for the modeling, transformation and validation of embedded systems. This thesis proposes a methodology for modeling and validating embedded systems specified in AADL via the multi-clock synchronous language Signal. The methodology comprises system-level modeling in AADL, automatic transformations from the AADL model to the polychronous model, code distribution, and formal verification and simulation of the polychronous model. Our transformation takes into account the system architecture, described in an IMA framework, and the functional aspects, the software components being implementable in Signal. AADL components are modeled in the polychronous model using a library of ARINC services. The AADL behavioral annex is interpreted in this model via SSA. Distributed code generation is obtained with Polychrony. Formal verification and simulation are carried out on two case studies that illustrate our methodology for the reliable design of AADL applications.
Dechelotte, Jonathan. "Etude et mise en oeuvre d'un environnement d'exécution pour architecture hétérogène reconfigurable". Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0025.
Today, embedded systems play a leading role in our world. Whether for communication, travel, work or entertainment, their use is preponderant. Together, research and industry efforts constantly develop the various parts that make up these systems: processor, FPGA, memory, operating system. From an architectural point of view, the combination of a general-purpose architecture with a reconfigurable one makes SoC FPGAs popular targets for embedded systems. However, the complexity of their implementation hampers their adoption. Abstracting the low-level layers appears to be an axis of investigation that could reverse this trend. Using an operating system seems suitable at first glance, since it delivers an ecosystem of drivers and services for accessing hardware resources, native scheduling capabilities and libraries for security. However, this solution brings constraints of its own and leads us to evaluate other approaches. This manuscript evaluates the ability of a high-level language, Lua, to provide an execution environment when no operating system is available. It delivers, through an ecosystem named Lynq, the building blocks necessary for managing and allocating the resources present on the SoC FPGA, as well as a method for isolation between applications. Besides the adoption of this execution environment, our work explores the capacity of general-purpose architectures such as CPUs to become specialized when implemented on an FPGA. This is done through a contribution allowing the generation of a RISC-V CPU and its associated microcode.
Leserf, Patrick. "Optimisation de l’architecture de systèmes embarqués par une approche basée modèle". Thesis, Toulouse, ISAE, 2017. http://www.theses.fr/2017ESAE0008/document.
Finding the set of optimal architectures is an important challenge for a designer using Model-Based Systems Engineering (MBSE). Design objectives such as cost and performance are often conflicting. Current methods (OOSEM with SysML, or ARCADIA) focus on the design and analysis of one particular alternative of the system: the topology and the execution platform are frozen before optimization. To improve optimization from MBSE, we propose a methodology combining SysML with the concept of "decision point". An initial SysML model is complemented with decision points to expose the different alternatives for component redundancy, instance selection and allocation. The constraints and objective functions are also added to the initial SysML model, with an optimization context and a parametric diagram. A representation of the constraint satisfaction problem for optimization (CSMOP) is then generated with an algorithm and solved with an existing solver. A demonstrator implements this transformation in an Eclipse plug-in, combining the open-source Papyrus tool with CSP solvers. Two case studies illustrate the methodology: a stereoscopic camera sensor module and a mission controller for an Unmanned Aerial Vehicle (UAV).
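To illustrate the "decision point" idea, the sketch below enumerates hypothetical instance and redundancy choices for two components and keeps the cheapest alternative that meets a reliability constraint. A brute-force search stands in for the CSP solver, and every component name, cost and failure probability is invented for the example.

```python
from itertools import product

# Candidate instances per component: (name, cost, failure probability).
CAMERA = [("cam_basic", 40, 0.020), ("cam_rugged", 90, 0.005)]
CPU    = [("cpu_a", 60, 0.010), ("cpu_b", 120, 0.002)]

def reliability(fail_prob, redundancy):
    """Probability that at least one of the redundant copies works."""
    return 1.0 - fail_prob ** redundancy

def explore(min_reliability=0.999):
    """Enumerate instance choices x redundancy levels, keep cheapest feasible."""
    best = None
    for cam, cpu, r_cam, r_cpu in product(CAMERA, CPU, (1, 2), (1, 2)):
        cost = cam[1] * r_cam + cpu[1] * r_cpu
        rel = reliability(cam[2], r_cam) * reliability(cpu[2], r_cpu)
        if rel >= min_reliability and (best is None or cost < best[0]):
            best = (cost, cam[0], r_cam, cpu[0], r_cpu, rel)
    return best

print(explore())  # cheapest (cost, camera, r, cpu, r, reliability) tuple
```

With these figures, duplicating the cheap instances beats buying the rugged ones, which is exactly the kind of non-obvious trade-off the decision-point exploration is meant to surface.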
Romera, Thomas. "Adéquation algorithme architecture pour flot optique sur GPU embarqué". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS450.
This thesis focuses on the optimization and efficient implementation of pixel motion (optical flow) estimation algorithms on embedded graphics processing units (GPUs). Two iterative algorithms have been studied: the Total Variation-L1 (TV-L1) method and the Horn-Schunck method. The primary objective of this work is to achieve real-time processing, with a target frame processing time of less than 40 milliseconds, on low-power platforms, while maintaining acceptable image resolution and flow estimation quality for the intended applications. Several levels of optimization have been explored. High-level algorithmic transformations, such as operator fusion and operator pipelining, have been implemented to maximize data reuse and enhance spatial/temporal locality. In addition, GPU-specific low-level optimizations have been incorporated, including the use of vector instructions and vector number formats as well as efficient memory access management. The impact of the floating-point representation (single precision versus half precision) has also been investigated. The implementations have been assessed on Nvidia's Jetson Xavier, TX2 and Nano embedded platforms in terms of execution time, power consumption and optical flow accuracy. Notably, the TV-L1 method exhibits higher complexity and computational intensity than Horn-Schunck. The fastest versions of these algorithms achieve a processing rate of 0.21 nanoseconds per pixel per iteration in half precision on the Xavier platform, a 22× time reduction over efficient and parallel CPU versions; energy consumption is also reduced by a factor of 5.3. Among the tested boards, the Xavier platform, being both the most powerful and the most recent, consistently delivers the best results in terms of speed and energy efficiency. Operator fusion and pipelining have proven instrumental in improving GPU performance by enhancing data reuse.
This data reuse is made possible by the GPU shared memory, a small, high-speed memory that enables data sharing among threads within the same GPU thread block. While merging multiple iterations yields performance gains, it is constrained by the size of the shared memory, necessitating trade-offs between resource utilization and speed. The adoption of half-precision numbers accelerates iterative algorithms and achieves better optical flow accuracy within the same time frame than single-precision counterparts: half-precision implementations converge more rapidly thanks to the larger number of iterations possible within a given time window. Specifically, the use of half-precision numbers on the best GPU architecture accelerates execution by up to 2.2× for TV-L1 and 3.7× for Horn-Schunck. This work underscores the significance of GPU-specific optimizations for computer vision algorithms, along with the use and study of reduced floating-point formats. They pave the way for future enhancements through new algorithmic transformations, alternative numerical formats and hardware architectures, and the approach can potentially be extended to other families of iterative algorithms.
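For reference, here is a minimal single-scale Horn-Schunck iteration (one of the two methods studied) written in NumPy rather than CUDA. The parameters, the four-neighbour averaging kernel and the toy input are illustrative choices only and bear no relation to the optimized GPU implementations described above.

```python
import numpy as np

def horn_schunck(img1, img2, alpha=1.0, iters=100):
    """Estimate dense optical flow (u, v) between two grayscale frames."""
    I1 = img1.astype(np.float32)
    I2 = img2.astype(np.float32)
    # Central-difference spatial gradients and temporal derivative.
    Ix = np.gradient(I1, axis=1)
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    # Four-neighbour average used as the smoothness term.
    avg = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                     np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    for _ in range(iters):
        u_bar, v_bar = avg(u), avg(v)
        num = Ix * u_bar + Iy * v_bar + It
        den = alpha ** 2 + Ix ** 2 + Iy ** 2
        u = u_bar - Ix * num / den
        v = v_bar - Iy * num / den
    return u, v

# Toy example: a bright square translated by one pixel to the right.
a = np.zeros((16, 16), np.float32); a[6:10, 4:8] = 1.0
b = np.roll(a, 1, axis=1)
u, v = horn_schunck(a, b)
print(round(float(u[7, 6]), 2))  # horizontal flow inside the square
```

Each iteration touches every pixel and only its four neighbours, which is why fusing iterations and keeping the tiles in GPU shared memory pays off, and why the working set is bounded by the shared-memory size.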
Belaggoun, Amel. "Adaptability and reconfiguration of automotive embedded systems". Electronic Thesis or Diss., Paris 6, 2017. http://www.theses.fr/2017PA066252.
Modern vehicles have become increasingly computerized to satisfy ever stricter safety requirements and to provide a better driving experience. The number of electronic control units (ECUs) in modern vehicles has therefore continuously increased over the last few decades. In addition, advanced applications put a higher computational demand on ECUs and have both hard and soft timing constraints, so a unified approach handling both kinds of constraint is required. Moreover, economic pressures and multi-core architectures are driving the integration of several levels of safety criticality onto the same platform. Such applications have traditionally been designed using static approaches; however, static approaches are no longer feasible in highly dynamic environments, due to increasing complexity and tight cost constraints, and more flexible solutions are required. To cope with dynamic environments, an automotive system must be adaptive; that is, it must be able to adapt its structure and/or behaviour at runtime in response to frequent changes in its environment. These new requirements cannot be met by the current state-of-the-art approaches to automotive software systems. Instead, a new design of the overall Electric/Electronic (E/E) architecture of a vehicle needs to be developed. Recently, the automotive industry agreed to extend the current AUTOSAR platform with the "AUTOSAR Adaptive Platform". This platform is being developed by the AUTOSAR consortium as an additional product alongside the AUTOSAR classic platform; it is an ongoing feasibility study based on a POSIX operating system and uses service-oriented communication to integrate applications into the system at any desired time.
The main idea of this thesis is to develop novel architecture concepts, based on adaptation, to address the needs of a new E/E architecture for Fully Electric Vehicles (FEVs) regarding safety, reliability and cost-efficiency, and to integrate them into AUTOSAR. We define the ASLA (Adaptive System Level in AUTOSAR) architecture, a framework that provides an adaptive solution for AUTOSAR. ASLA incorporates task-level reconfiguration features such as the addition, deletion and migration of tasks in AUTOSAR. The main difference between ASLA and the AUTOSAR Adaptive Platform is that ASLA enables the allocation of mixed-criticality functions on the same ECU, as well as time-bounded adaptations, whereas Adaptive AUTOSAR separates critical, hard real-time functions (running on the classic platform) from non-critical/soft real-time functions (running on the adaptive platform). To assess the validity of the proposed architecture, we provide an early prototype implementation of ASLA and evaluate its performance through experiments.
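The kind of task-level reconfiguration described above can be pictured with a toy reallocation sketch: when an ECU fails, its orphaned tasks are migrated to the remaining ECUs under a utilization bound, safety-critical tasks first. All task names, utilizations and the first-fit policy are invented for the example and are not the ASLA mechanism itself.

```python
TASKS = {  # name: (utilization, is_safety_critical)
    "brake_ctrl":   (0.30, True),
    "battery_mgmt": (0.25, True),
    "infotainment": (0.40, False),
}
ECUS = {"ecu1": ["brake_ctrl"], "ecu2": ["battery_mgmt", "infotainment"]}
CAPACITY = 0.8  # maximum utilization allowed per ECU

def utilization(ecu, alloc):
    return sum(TASKS[t][0] for t in alloc[ecu])

def migrate_on_failure(failed, alloc):
    """First-fit migration of the failed ECU's tasks, critical tasks first."""
    alloc = {e: list(ts) for e, ts in alloc.items()}
    orphans = sorted(alloc.pop(failed), key=lambda t: not TASKS[t][1])
    dropped = []
    for t in orphans:
        for ecu in alloc:
            if utilization(ecu, alloc) + TASKS[t][0] <= CAPACITY:
                alloc[ecu].append(t)
                break
        else:
            dropped.append(t)  # no capacity left: shed non-critical load
    return alloc, dropped

alloc, dropped = migrate_on_failure("ecu2", ECUS)
print(alloc, dropped)
```

In this scenario the critical battery-management task survives on the remaining ECU while the non-critical infotainment task is shed, mirroring the mixed-criticality co-allocation that distinguishes ASLA from the Adaptive Platform's strict separation.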