Dissertations / Theses on the topic 'Systèmes adaptatifs (informatique) – Réseaux logiques programmables par l'utilisateur'
Consult the top 50 dissertations / theses for your research on the topic 'Systèmes adaptatifs (informatique) – Réseaux logiques programmables par l'utilisateur.'
Perez, Castañeda Oscar Leopoldo. "Modélisation des effets de la reconfiguration dynamique sur la flexibilité d'une architecture de traitement temps réel." Nancy 1, 2007. http://www.theses.fr/2007NAN10139.
The principal contribution of wired logic compared to the microprocessor is a degree of parallelism that is several orders of magnitude higher. However, the configurability of these circuits comes at an additional cost in silicon area, delay and power consumption compared to ASICs. The dynamic reconfiguration of FPGAs is often presented in the literature as a means of increasing their flexibility, to approach that of microprocessors, while preserving a level of performance that, if not close to that of ASICs, remains higher than that of microprocessors. While performance is, for a given application, generally easy to quantify, the situation is quite different for flexibility: in the literature this metric has never been defined and quantified, and we found no definition of the flexibility of a data-processing architecture. The principal objective of this work is, on the one hand, to define and quantify flexibility and, on the other hand, to model the influence of dynamic reconfiguration on flexibility. We provide the designer with a metric as well as the basis of a methodology allowing him to adopt or reject this solution according to his constraints and objectives
Garcia, Samuel. "Architecture reconfigurable dynamiquement a grain fin pour le support d'un système d'exploitation temps réel." Paris 6, 2012. http://www.theses.fr/2012PA066495.
Most anticipated future applications share four major characteristics: they require increased computing capacity, they must take real time into account, they represent a big step in complexity compared with today's typical applications, and they have to deal with the dynamic nature of the real physical world. A fine-grained dynamically reconfigurable architecture (FGDRA) can be seen as the next evolution of today's FPGA, aiming at handling very dynamic and complex real-time applications while providing comparable potential computing power thanks to the possibility of tuning the execution architecture at a fine-grain level. To make this kind of device usable by application designers, its complexity has to be abstracted by an operating system layer and an adequate tool set. This combination would form an adequate solution to support future applications. This thesis presents an innovative FGDRA architecture called OLLAF. This architecture answers both the technical issues of reconfigurable computing and the practical problems of application designers. The whole architecture is designed to work in symbiosis with an operating system. The studies presented here focus more particularly on hardware task management mechanisms in a preemptive system. We first present our work on implementing such mechanisms using existing FPGAs and show that those existing architectures have to evolve to efficiently support an operating system in a highly dynamic real-time context. The OLLAF architecture is then explained and its hardware task management mechanism highlighted. We then present two studies proving that this approach constitutes a large gain compared with existing platforms in terms of resulting operating system overhead, even for static application cases where dynamic reconfiguration is used only for computing resource sharing. For highly dynamic real-time cases, we show that it not only lowers the overhead but also supports cases that existing devices simply cannot support
Vidal, Jorgiano. "Dynamic and partial reconfigurable embedded systems design with UML." Lorient, 2010. http://www.theses.fr/2010LORIS203.
Advances in reconfigurable technologies allow entire multiprocessor systems to be implemented in a single FPGA (Multiprocessor System on Programmable Chip, MPSoPC). In order to speed up the design of such heterogeneous systems, new modelling techniques must be developed. Furthermore, dynamic execution is a key point for modern systems, i.e. systems that can partially change their behavior at run time in order to adjust their execution to the environment. UML (Unified Modeling Language) has been used for software modeling since its first version. Recently, with the new modeling concepts added in later versions (UML 2), it has become more and more suitable for hardware modeling. This thesis is a contribution to the MOPCOM project, in which we propose a set of modeling techniques for building complex embedded systems with UML. The modeling techniques proposed here capture the system to be built in one complete model. Moreover, we propose a set of transformations that allows the system implementation to be generated automatically. Our approach allows the modelling of dynamic applications on reconfigurable platforms. Design-time reductions of up to 30% have been measured when using our methodology
Liu, Ting. "Optimisation par synthèse architecturale des méthodes de partitionnement temporel pour les circuits reconfigurables." Thesis, Nancy 1, 2008. http://www.theses.fr/2008NAN10013/document.
The research work presented here concerns methodologies to assist the implementation of data-flow-graph algorithms on dynamically reconfigurable RSoC (Reconfigurable System on Chip) FPGA-based architectures. The main strategy consists in a design approach based simultaneously on dynamic reconfiguration (DR) and architectural synthesis (AS) in order to achieve the best algorithm-architecture adequacy (AAA). The methodology consists in identifying and extracting the parts of an application, described in the form of a DFG, to be implemented either by successive partial reconfigurations, by AS, or by combining the two approaches. To develop our solution towards an optimized and suitable compromise between the DR and AS approaches, we propose a parameter that evaluates the degree of inter-partition implementation based on shared functional units. In order to validate the proposed methodological strategy, we present the results of applying our approach to two real-time applications. A comparative analysis of the implementation results illustrates the interest and the optimisation capability of our method, also for the dynamically reconfigured implementation of complex applications on RSoC
Zhang, Xun. "Contribution aux architectures adaptatives : etude de l'efficacité énergétique dans le cas des applications à parallélisme de données." Thesis, Nancy 1, 2009. http://www.theses.fr/2009NAN10106/document.
My PhD project focuses on dynamic adaptive runtime parallelism and frequency-scaling techniques in coarse-grain reconfigurable hardware architectures. This new architectural approach offers a set of features to increase the flexibility and scalability of applications in an evolving environment at reasonable energy cost. In this architecture, the parallelism granularity and the running frequency can be reconfigured by using partial and dynamic reconfiguration. The adaptive method and architecture have been developed and tested on FPGA platforms. The measurements and result analysis, based on a DWT application, show that the energy efficiency is dynamically adjustable using our approach. The main contribution of the research project is the development of an auto-adaptive method: partial and dynamic reconfiguration is used to reconfigure the parallelism granularity and the running frequency of the application. The adaptive method, adjusting both parallelism granularity and running frequency, is tested on the same application. We present results from implementations of a key image-processing application and analyse the behavior of this architecture on these applications
Fournier, Émilien. "Accélération matérielle de la vérification de sûreté et vivacité sur des architectures reconfigurables." Electronic Thesis or Diss., Brest, École nationale supérieure de techniques avancées Bretagne, 2022. http://www.theses.fr/2022ENTA0006.
Model checking is an automated technique used in industry for verification, a major issue in the design of reliable systems, where performance and scalability are critical. Swarm verification improves scalability through a partial approach based on the concurrent execution of randomized analyses. Reconfigurable architectures promise significant performance gains. However, existing work suffers from a monolithic design that hinders the exploration of reconfigurable architecture opportunities. Moreover, these studies are limited to safety verification. To adapt the verification strategy to the problem, this thesis first proposes a hardware verification framework that achieves, through a modular architecture, semantic and algorithmic genericity, illustrated by the integration of 3 specification languages and 6 algorithms. This framework enables efficiency studies of swarm algorithms in order to obtain a scalable safety verification core. The results, on a high-end FPGA, show gains of an order of magnitude compared to the state of the art. Finally, we propose the first hardware accelerator for both safety and liveness verification. The results show an average speed-up of 4875x compared to software
Jovanovic, Slavisa. "Architecture reconfigurable de système embarqué auto-organisé." Thesis, Nancy 1, 2009. http://www.theses.fr/2009NAN10099/document.
The growing complexity of computing systems, mostly due to the rapid progress of Information Technology (IT) in the last decade, requires system designers to move from traditional design concepts towards new ones based on self-organizing and self-adaptive architectural solutions. On the one hand, these new architectural solutions should provide the system with sufficient computing power; on the other hand, they should provide great flexibility and adaptivity in order to cope with all the non-deterministic changes and events that may occur in the environment in which it evolves. Within this framework, a reconfigurable self-organizing MPSoC architecture on FPGA reconfigurable technology is studied and developed in this PhD
Hentati, Manel. "Reconfiguration dynamique partielle de décodeurs vidéo sur plateformes FPGA par une approche méthodologique RVC (Reconfigurable Video Coding)." Rennes, INSA, 2012. http://www.theses.fr/2012ISAR0027.
The main purpose of this PhD is to contribute to the design and implementation of a reconfigurable decoder using the MPEG-RVC standard. The MPEG-RVC standard, developed by MPEG, aims at providing a unified high-level specification of current and future MPEG video coding technologies using a dataflow model named RVC-CAL. This standard offers the means to overcome the lack of interoperability between the many video codecs deployed in the market. In this work, we propose a rapid prototyping methodology to provide an efficient and optimized implementation of RVC decoders on target hardware. Our design flow is based on dynamic partial reconfiguration (DPR) to validate the reconfiguration approaches allowed by MPEG-RVC. Using the DPR technique, a hardware module can be replaced by another one implementing the same function or the same algorithm with a different architecture. This concept allows the designer to configure various decoders according to the input data or the requirements (latency, speed, power consumption, etc.). The use of MPEG-RVC and DPR improves the development process and the decoder performance. However, DPR poses several problems, such as the placement of tasks and the fragmentation of the FPGA area, which influence application performance. Therefore, methods for placing hardware tasks on the FPGA must be defined. In this work, we propose an off-line placement approach based on a linear programming strategy to find the optimal placement of hardware tasks and to minimize resource utilization. The application of different data combinations and a comparison with a state-of-the-art method show the high performance of the proposed approach
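As a purely illustrative aside (not drawn from the thesis itself), an off-line placement of hardware tasks by integer linear programming can be sketched as follows, where the binary variable x_{i,k} assigns task i to reconfigurable region k; the costs r_{i,k} and capacities R_k are assumptions chosen for illustration, not the model used in the thesis:

```latex
\begin{align}
\min\quad & \sum_{i}\sum_{k} r_{i,k}\,x_{i,k} && \text{total logic resources consumed by the placement}\\
\text{s.t.}\quad & \sum_{k} x_{i,k} = 1 && \forall i\ \text{(each hardware task is mapped to exactly one region)}\\
& \sum_{i} r_{i,k}\,x_{i,k} \le R_k && \forall k\ \text{(capacity of reconfigurable region $k$)}\\
& x_{i,k} \in \{0,1\} && \text{binary placement variables}
\end{align}
```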
Feki, Oussama. "Contribution à l'implantation optimisée de l'estimateur de mouvement de la norme H.264 sur plates-formes multi composants par extension de la méthode AAA." Thesis, Paris Est, 2015. http://www.theses.fr/2015PEST1009/document.
Mixed architectures containing both programmable devices and reconfigurable ones can provide the computing performance necessary to meet the constraints of real-time applications. However, implementing and optimizing these applications on this kind of architecture is a complex, time-consuming task. In this context, we propose a rapid prototyping tool for this type of architecture. The tool is based on our extension of the Adequacy Algorithm Architecture (AAA) methodology. It automatically performs optimized partitioning and scheduling of the application operations on the target architecture components and generates the corresponding code. We used this tool for the implementation of the H.264/AVC motion estimator on an architecture composed of a Nios II processor and an Altera Stratix III FPGA, which allowed us to verify the correct operation of our tool and to validate our automatic generator of mixed code
Bruguier, Florent. "Méthodes de caractérisation et de surveillance des variations technologiques et environnementales pour systèmes reconfigurables adaptatifs." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2012. http://tel.archives-ouvertes.fr/tel-00965377.
Colancon, Stéphane. "Conception de systèmes analogiques : méthodologie et environnement de prototypage." Montpellier 2, 2001. http://www.theses.fr/2001MON20181.
Harb, Naim. "Dynamically and Partially Reconfigurable Embedded System Architecture for Automotive and Multimedia Applications." Valenciennes, 2011. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/1810c575-b28e-4817-a3be-f0527631eabd.
Short time-to-market windows, high design and fabrication costs, and fast-changing standards make application-specific processors a costly and risky investment for embedded system designers. To overcome these problems, embedded system designers are increasingly relying on Field Programmable Gate Arrays (FPGAs) as target design platforms. FPGAs are generally slower and consume more power than application-specific integrated circuits (ASICs), which can restrict their use to limited application domains. However, recent advances in FPGA architectures, such as dynamic partial reconfiguration (DPR), are helping bridge this gap. DPR reduces area and enables mutually exclusive subsystems to share the same physical space on a chip. It also reduces complexity, which usually results in faster circuits and lower power consumption. The work in this PhD first targets a Driver Assistance System (DAS) based on a Multiple Target Tracking (MTT) algorithm as our automotive base system. We present a dynamically reconfigurable filtering hardware block for MTT applications in DAS. Our system shows that there is no reconfiguration overhead, because the system keeps functioning with the original configuration until it reconfigures itself. The freed reconfigurable regions can be used to implement improvement blocks for other DAS functionalities. Two approaches were used to design the filtering block according to driving conditions. We then target another application of DPR, the H.264 encoder, as a multimedia system. For the H.264 multimedia system, we propose a reconfigurable H.264 Motion Estimation (ME) unit whose architecture can be modified to meet specific energy and image-quality constraints. By using DPR, we were able to support multiple configurations, each with a different level of accuracy and energy consumption. Image accuracy levels were controlled via application demands, user demands or support demands
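The abstract does not name the filter used in the tracking block; Kalman-type filters are, however, the textbook choice for state estimation in multiple target tracking, so the following scalar Kalman update in C is offered only as a sketch of what such a filtering block computes. All names and noise values are illustrative assumptions, not taken from the thesis.

```c
#include <stdio.h>

/* Minimal scalar Kalman filter: state x (e.g. one target coordinate),
 * variance p, process noise q, measurement noise r. Illustrative only. */
typedef struct { double x, p, q, r; } kf1d_t;

static void kf1d_step(kf1d_t *kf, double z /* new measurement */)
{
    double p_pred = kf->p + kf->q;        /* predict: uncertainty grows     */
    double k = p_pred / (p_pred + kf->r); /* Kalman gain                    */
    kf->x += k * (z - kf->x);             /* blend prediction & measurement */
    kf->p = (1.0 - k) * p_pred;
}

int main(void)
{
    kf1d_t kf = { .x = 0.0, .p = 1.0, .q = 0.01, .r = 0.5 };
    const double meas[] = { 1.1, 0.9, 1.05, 1.2, 0.95 };
    for (int i = 0; i < 5; i++) {
        kf1d_step(&kf, meas[i]);
        printf("step %d: estimate = %.3f\n", i, kf.x);
    }
    return 0;
}
```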
Boussaid, Lotfi. "Etude et implémentation de descripteurs de contenu AV pour les applications multimedia temps réel." Dijon, 2006. http://www.theses.fr/2006DIJOS049.
The work presented in this thesis contributes to the design of embedded electronic systems dedicated to real-time multimedia applications. It falls within the framework of design methodologies for new hardware and/or software architectures used for the analysis and description of audiovisual content. In this thesis we are first interested in the validation and optimization of shot-boundary detection algorithms and in the extraction of high-level semantic information using low-level audiovisual descriptors. We then present hardware and/or software implementation solutions for cut and dissolve detectors at different abstraction levels (logic, RTL and high-level platform-based). In the last part of this thesis, we propose a generic architecture template for audiovisual content analysis and description. The transposition of this template onto embedded systems has become possible with the evolution of recently marketed FPGAs and the new tools and methodologies used for systems on programmable chip (SoPC)
Marques, Nicolas. "Méthodologie et architecture adaptative pour le placement efficace de tâches matérielles de tailles variables sur des partitions reconfigurables." Thesis, Université de Lorraine, 2012. http://www.theses.fr/2012LORR0139/document.
FPGA-based reconfigurable architectures can deliver appropriate solutions for several applications, as they allow the behaviour of a part of the FPGA to be changed while the rest of the circuit continues to run normally. These architectures, despite their improvements, still suffer from a lack of adaptability when confronted with applications consisting of variable-size hardware tasks. This heterogeneity may cause poor placements leading to a sub-optimal use of resources and therefore a decrease in system performance. The contribution of this thesis focuses on the problem of variable-size hardware task placement and the effective generation of reconfigurable regions. A methodology and an intermediate layer between the FPGA and the application are proposed to allow the effective placement of variable-size hardware tasks on reconfigurable partitions of a predefined size. To validate the method, we propose an architecture based on partial reconfiguration in order to adapt the transcoding from one video compression format to another in a flexible and effective way. A study on the partitioning of the reconfigurable region for the entropy encoder hardware tasks (CAVLC/VLC) is proposed in order to show the contribution of partitioning. An assessment of the gain obtained and of the additional costs of the method is then presented
Favard, Sébastien. "Adéquation granularité opérateur - granularité architecture dans un système de traitement reconfigurable." Compiègne, 2002. http://www.theses.fr/2002COMP1428.
Petit, Éric. "Vers un partitionnement automatique d'applications en codelets spéculatifs pour les systèmes hétérogènes à mémoires distribuées." Rennes 1, 2009. http://www.theses.fr/2009REN1S087.
In light of the increasing development cost, power consumption and silicon area of new single-core architecture optimisations, the path to further performance improvements leads to multicore architectures, with parallel programming and specialised coprocessors. They give the best trade-off between high computing performance and required resources. In order to efficiently address this new kind of architecture, applications have to be split into tasks, also called codelets, which are mapped onto the different computing units of the host system. The purpose of this thesis is to propose an automatic and efficient model to generate speculative codelets from applications. Speculation allows the compiler to apply a number of optimisations that would have been impossible or unavailable without speculative data. My second contribution deals with the optimisation of data transfers between the processor and the coprocessor by using speculation
Sbai, Hugo. "Système de vidéosurveillance intelligent et adaptatif, dans un environnement de type Fog/Cloud." Thesis, Lille, 2018. http://www.theses.fr/2018LIL1I018.
CCTV systems use sophisticated cameras (network cameras, smart cameras) and computer servers for video recording in a fully digital system. They often integrate hundreds of cameras generating a huge amount of data, far beyond the monitoring capabilities of human agents. One of the most important modern challenges in this field is to scale an existing cloud-based video surveillance system with multiple heterogeneous smart cameras and adapt it to a Fog/Cloud architecture to improve performance without a significant cost overhead. Recently, FPGAs have become more and more present in FCIoT (Fog-Cloud-IoT) platform architectures. These components support dynamic and partial configuration modes, allowing platforms to quickly adapt to changes resulting from an event, while increasing the available computing power. Today, such platforms present a number of serious scientific challenges, particularly in terms of deployment and positioning of the Fog nodes. This thesis proposes a video surveillance model composed of plug-and-play smart cameras, equipped with dynamically reconfigurable FPGAs, on a hierarchical Fog/Cloud basis. In this highly dynamic and scalable system, both in terms of smart cameras (resources) and in terms of targets to track, we propose an automatic and optimized approach for camera authentication and their dynamic association with the Fog components of the system. The proposed approach also includes a methodology for an optimal allocation of hardware trackers to the electronic resources available in the system, to maximize performance and minimize power consumption. All contributions have been validated with a real-size prototype
Pierrefeu, Lionel. "Algorithmes et architecture pour l'authentification de visages en situation réelle : système embarqué sur FPGA." Saint-Etienne, 2009. http://www.theses.fr/2009STET4024.
This thesis is concerned with the image processing and embedded systems domains. More specifically, the aim of this work is to study and develop a system on chip capable of efficiently performing face detection, face recognition and face identification. The goal of the study is to design a consumer electronic product while taking into account constraints such as real-time processing and uncontrolled acquisition conditions. This work consists in the selection and development of algorithms suitable for face recognition applications and their optimization, seeking the best compromise between performance and processing cost for the hardware implementation. This document is composed of three parts. The first part deals with face authentication algorithms, presenting an overview of existing approaches and details of the selected RBF-type neural network solution. The second part studies the system's sensitivity to general face acquisition conditions (range of lighting and positioning of the face in images) and also presents the selected chain of algorithms developed to increase the system's robustness. The final section presents the choices made taking into account the potential parallelism of the selected algorithms, and details the results obtained for the integration of the complete system on an FPGA
Kebbati, Youssef. "Développement d'une méthodologie de conception matériel à base de modules génériques VHDL/VHDL-AMS en vue d'une intégration de systèmes de commande électriques." Université Louis Pasteur (Strasbourg) (1971-2008), 2002. http://www.theses.fr/2002STR13194.
Power electronics and electrical drive controllers are generally implemented with microprocessor or Digital Signal Processor (DSP) solutions. Recent progress in hardware solutions such as Very Large Scale Integration (VLSI) has improved the implementation performance of controllers. However, the main problems of integrated circuit design are its complexity and long design time. In this thesis, the author develops a new architectural approach for the integration of electrical controllers on ASIC and FPGA circuits. He proposes to apply a modular methodology based on a library of specific Intellectual Properties (IPs). This methodology was confirmed by a large number of applications, such as a direct torque controller for an AC motor, a sensorless speed controller for a switched reluctance motor, and an active shunt filter. Results in terms of integration and control performance show that the adopted modular methodology matches the requirements of electrical controller integration perfectly. The same approach was used to model global electrical systems. The obtained model is based on a mixed library of analogue and digital parts described in the VHDL and VHDL-AMS languages
Corre, Youenn. "Automated generation of heterogeneous multiprocessor architectures : software and hardware aspects." Lorient, 2013. https://hal.archives-ouvertes.fr/tel-01130482.
The evolution of embedded systems has led to the emergence of H-MPSoCs, which provide a way to meet the cost and performance constraints inherent to embedded systems. However, they also make designing and programming such systems a long and arduous process. It is thus necessary to develop tools that free designers from architectural and programming details, so that they can focus on the tasks where they bring added value. The objective is therefore to automate the tasks that burden the design of H-MPSoCs, in particular on FPGAs, by providing a higher level of abstraction following a method that brings together HLS and hardware/software co-design, beyond the existing solutions, which are either incomplete or unsuitable. The presented work introduces a design framework relying on the automation of tedious tasks and allowing designers to apply their expertise where they want to. For this, we rely on an architecture model defined with a high-level formalism independent from implementation details, providing a solution to the lack of multiprocessor architectures in FPGAs. This specification model also allows designers to provide design constraints according to their level of expertise or involvement. The DSE is implemented as a scalable algorithm relying on fast and accurate estimation techniques. A method for the exploration of hardware accelerators based on HLS, providing fast cost estimations, is introduced. The use of MDE methods enables portability and reuse by generating the final design implementation. The framework is validated through two case studies: an MJPEG video decoder and a face detection application
Bollengier, Théotime. "Du prototypage à l’exploitation d’overlays FPGA." Thesis, Brest, École nationale supérieure de techniques avancées Bretagne, 2018. http://www.theses.fr/2018ENTA0003/document.
Due to their reconfigurability and the performance they offer, FPGAs are good candidates for accelerating applications in the cloud. However, FPGAs have some features that hinder their use in the cloud as well as their adoption by customers: first, FPGA programming is done at a low level and requires expertise that usual cloud clients do not necessarily have; secondly, FPGAs do not have native mechanisms allowing them to easily fit into the dynamic execution model of the cloud. In this work, we propose to use overlay architectures to facilitate FPGA adoption, integration, and operation in the cloud. Overlays are reconfigurable architectures synthesized on FPGAs. As hardware abstraction layers placed between the FPGA and applications, overlays raise the abstraction level of the execution model presented to applications and users, and implement mechanisms making them fit into a cloud infrastructure. This work presents a vertical approach addressing all aspects of overlay operation in the cloud as reconfigurable accelerators programmable by tenants: from designing and implementing overlays, integrating them on commercial FPGA platforms and setting up their operating mechanisms, to developing their programming tools. The environment developed in this work is complete, modular and extensible; it is partially based on several existing tools, and demonstrates the feasibility of our approach
Crenne, Jérémie. "Sécurité Haut-débit pour les Systèmes Embarqués à base de FPGAs." Phd thesis, Université de Bretagne Sud, 2011. http://tel.archives-ouvertes.fr/tel-00655959.
Killian, Cédric. "Réseaux embarqués sur puce reconfigurable dynamiquement et sûrs de fonctionnement." Thesis, Université de Lorraine, 2012. http://www.theses.fr/2012LORR0396.
The performance requirements of embedded Systems-on-Chip (SoCs) are constantly increasing to meet the needs of applications that are becoming more and more complex, and new processing architectures and new computing paradigms have emerged. The integration within a single chip of dozens, or hundreds, of computing and processing elements has given birth to Multi-Processor Systems-on-Chip (MPSoCs), allowing a high level of parallel processing. Nowadays, the performance of these systems relies on the communication medium between the interconnected processing elements. The ability of the communication medium to provide high bandwidth and flexibility is essential in order to efficiently use the parallel processing capacity of the MPSoC. In this context, Networks-on-Chip (NoCs) have been developed, the aim being to interconnect a large number of elements in the same device while maintaining a trade-off between performance and logic resources. Moreover, the emergence of partially reconfigurable FPGA technology allows MPSoCs to adapt their elements during operation in order to meet the system requirements. Given this increasing complexity of electronic systems and the shrinking size of devices, the sensitivity of chips to fault-generating phenomena has increased. Thereby, to design efficient and reliable SoCs, new error detection and localization techniques must be proposed for dynamic NoCs, where the main difficulty is the identification and distinction between real errors and the adaptive behavior of the NoC. In this context, we present new mechanisms and architectural solutions for checking, during system operation, the correctness of dynamic NoCs in order to locate and isolate the faulty components efficiently, avoiding a failure of the system
Pérez, Patricio Madain. "Stéréovision dense par traitement adaptatif temps réel : algorithmes et implantation." Lille 1, 2005. https://ori-nuxeo.univ-lille1.fr/nuxeo/site/esupversions/0c4f5769-6f43-455c-849d-c34cc32f7181.
Krichene, Haná. "SCAC : modèle d'exécution faiblement couplé pour les systèmes massivement parallèles sur puce." Thesis, Lille 1, 2015. http://www.theses.fr/2015LIL10093.
This work proposes an execution model for massively parallel systems aiming at ensuring that communications are overlapped by computations. The execution model defined in this PhD thesis is named SCAC: Synchronous Communication, Asynchronous Computation. This weakly coupled model separates the execution of communication phases from that of computation phases in order to facilitate their overlapping, thus hiding the data transfer time. To allow the simultaneous execution of these two phases, we propose an approach based on three levels: two globally-centralized/locally-distributed hierarchical control levels and a parallel computation level. A generic and parametric implementation of the SCAC model was carried out to fit different applications. This implementation allows the designer to choose the system components (from pre-designed ones) and to set their parameters in order to build the SCAC configuration suited to the target application. An analytical estimation is proposed to evaluate the performance of an application running in SCAC mode. This estimation is used to predict the execution time without going through the physical implementation, in order to facilitate parallel program design and SCAC architecture configuration. The SCAC model was validated by simulation, synthesis and implementation on an FPGA platform, with different examples of parallel computing applications. The comparison of the results obtained by the SCAC model with other models has shown its effectiveness in terms of flexibility and execution-time speed-up
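The overlap principle on which SCAC relies (transfers hidden behind computation) can be pictured with a generic double-buffering loop; the sketch below in C uses hypothetical stub primitives (start_transfer, wait_transfer, compute) that stand in for any asynchronous copy engine and are not SCAC's actual interface:

```c
#include <stddef.h>
#include <stdio.h>

/* Hypothetical asynchronous transfer primitives (illustrative stubs). */
static void start_transfer(float *dst, size_t n) { (void)dst; (void)n; /* would launch a DMA-like copy */ }
static void wait_transfer(void)                  { /* would wait for the copy to complete */ }
static void compute(const float *d, size_t n)    { (void)d; (void)n; /* per-block processing */ }

/* Double buffering: while block i is being processed, block i+1 is being
 * transferred, so the transfer time is hidden whenever it is shorter than
 * the computation time. Assumes block <= 4096 elements. */
static void process_blocks(size_t nblocks, size_t block)
{
    static float buf[2][4096];
    start_transfer(buf[0], block);                       /* prefetch block 0 */
    for (size_t i = 0; i < nblocks; i++) {
        wait_transfer();                                 /* block i is now ready */
        if (i + 1 < nblocks)
            start_transfer(buf[(i + 1) & 1], block);     /* fetch next block in parallel */
        compute(buf[i & 1], block);                      /* process current block meanwhile */
    }
}

int main(void) { process_blocks(8, 1024); puts("done"); return 0; }
```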
Pagani, Marco. "Enabling Predictable Hardware Acceleration in Heterogeneous SoC-FPGA Computing Platforms." Thesis, Lille 1, 2020. http://www.theses.fr/2020LIL1I016.
Modern computing platforms for embedded systems are evolving towards heterogeneous architectures comprising different types of processing elements and accelerators. Such an evolution is driven by the steadily increasing computational demand of modern cyber-physical systems. These systems need to acquire large amounts of data from multiple sensors and process them to perform the required control and monitoring tasks. These requirements translate into the need to execute complex computing workloads, such as machine learning, encryption, and advanced signal processing algorithms, within the timing constraints imposed by the physical world. Heterogeneous systems can meet this computational demand with a high level of energy efficiency by distributing the computational workload among the different processing elements. This thesis contributes to the development of system support for real-time systems on heterogeneous platforms by presenting novel methodologies and techniques for enabling predictable hardware acceleration on SoC-FPGA platforms. The first part of this thesis presents a framework designed to support the development of real-time applications on SoC-FPGAs, leveraging hardware acceleration and logic resource "virtualization" through dynamic partial reconfiguration. The proposed framework is based on a device model that matches the capabilities of modern SoC-FPGA devices, and it is centered around a custom scheduling infrastructure designed to guarantee bounded response times. This characteristic is crucial for making dynamic hardware acceleration viable for safety-critical applications. The second part of this thesis presents a full implementation of the proposed framework on Linux. This implementation allows developing predictable applications that leverage the large number of software systems available on GNU/Linux while relying on dynamic FPGA-based hardware acceleration for heavy computations. Finally, the last part of this thesis introduces a reservation mechanism for the AMBA AXI bus, aimed at improving the predictability of hardware accelerators by regulating bus contention through bandwidth reservation
Ochoa, Ruiz Gilberto. "A high-level methodology for automatically generating dynamically reconfigurable systems using IP-XACT and the UML MARTE profile." Phd thesis, Université de Bourgogne, 2013. http://tel.archives-ouvertes.fr/tel-00932118.
Full textSavary, Yannig. "Étude du potentiel des architectures reconfigurables pour maîtriser la consommation dans les applications embarquées." Lorient, 2007. http://www.theses.fr/2007LORIS093.
Controlling power consumption in electronic systems has become a major issue in embedded systems, because it limits computing performance, battery life and device lifespan. Given the heterogeneity of embedded applications and their new performance requirements, a new type of architecture, the reconfigurable architecture, has been developed. These architectures combine good performance with flexibility. However, despite the increasing use of these architectures, few studies attempt to characterize their potential for controlling power consumption. For this purpose, high-level power modeling and estimation of reconfigurable architectures are proposed in this thesis. They have been validated on real components by confronting power estimations with physical measurements. The last aspect of this work concerns the evaluation of the potential of reconfigurable architectures to control power consumption in a real application: the power consumption of an embedded vision application implemented with and without dynamic reconfiguration of the component is compared. A critical analysis of these results identifies the application, architectural and technological conditions required to control power consumption in reconfigurable architectures
Lelong, Adrien. "Méthodes de diagnostic filaire embarqué pour des réseaux complexes." Thesis, Lille 1, 2010. http://www.theses.fr/2010LIL10121/document.
The research work presented in this thesis deals with the online diagnosis of wire networks. It consists in detecting and locating intermittent or permanent electrical faults on a system's network while this system is running. Such a diagnosis is based on the principle of reflectometry, which until then had been used for offline diagnosis. The aim is the analysis and improvement of reflectometry methods and the implementation of the related processing in order to automate it and embed it in the target system for real-time execution. The first contribution refers to the use of multicarrier signals so as to minimize interference between the running target system and the reflectometry module. Pulse deconvolution algorithms are required for this purpose. These algorithms are also used for the high-resolution processing described subsequently. A low-computational-cost semi-blind deconvolution method is proposed, among others. Distributed reflectometry, consisting in the simultaneous injection of signals at several points of the network, is then studied. An innovative filtering method called "selective average" is proposed as a solution to the problem of interference due to the simultaneous injection by the modules. Finally, several considerations on implementation and automation are studied. An innovative intermittent fault detection algorithm for noisy environments is also proposed
Payet, Matthieu. "Conception de systèmes programmables basés sur les NoC par synthèse de haut niveau : analyse symbolique et contrôle distribué." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSES051/document.
Networks-on-Chip (NoCs) introduce parallelism in communications and have emerged with the growing integration of circuits, as large designs need scalable communication architectures. This introduces a separation between communication tasks and processing tasks, and makes designing with NoCs more complex. High-level synthesis (HLS) tools can help designers quickly generate high-quality HDL (Hardware Description Language) designs, but their control schemes are centralized, usually using finite state machines. To benefit from parallel algorithms and ever-growing FPGAs, HLS tools must properly extract the parallelism from the input representation and use the available resources efficiently. Algorithm designers are used to programming languages; this behavioral specification has to be enriched with architectural details for a correct optimization of the generated design. The C-to-FPGA path is not straightforward, and the need for architectural knowledge limits the adoption of FPGAs and, more generally, of parallel architectures. In this thesis, we present a method that uses a symbolic analysis technique to extract the parallelism of an algorithmic specification written in a high-level language. Parallelization skills are not required from the users. A methodology is then proposed for adding NoCs to the automatically generated design so as to benefit from potential parallelizations. To dimension the design, we estimate its resource consumption using a mathematical model of the NoC. A scalable, hardware-specific application is then generated using a high-level synthesis flow. We provide a distributed mechanism for datapath reconfiguration that allows different applications to run on the same set of processing elements. Thus, the output design is programmable and has a processor-less distributed control. This approach of using NoCs enables us to automatically design generic architectures that can be used on FPGA servers for high-performance reconfigurable computing. The generated design is programmable, which allows users to avoid the logic synthesis step when modifying the algorithm, provided an existing design offers the needed operators
Baklouti, Kammoun Mouna. "Méthode de conception rapide d’architecture massivement parallèle sur puce : de la modélisation à l’expérimentation sur FPGA." Thesis, Lille 1, 2010. http://www.theses.fr/2010LIL10101/document.
The main purpose of this PhD is to contribute to the design and implementation of high-performance Systems-on-Chip to accelerate and facilitate the design and execution of systematic data-parallel applications. A massively parallel SIMD processing System-on-Chip named mppSoC is defined. This system is generic and parametric in order to be adapted to the application requirements. We propose a rapid and modular design method based on IP assembly to construct an mppSoC configuration. To this end, an IP library, mppSoCLib, is implemented. The designer can select the necessary components and define the parameters to implement the SIMD configuration satisfying his needs. An automated generation chain was developed. It allows the automatic generation of the VHDL code corresponding to an mppSoC configuration modeled at a high abstraction level (in UML). The generated code is simulatable and synthesizable on FPGA. The developed chain allows the definition, at a high abstraction level, of an mppSoC configuration adequate for a given application. Based on the simulation of the automatically generated code, the SIMD configuration can be modified in a semi-automatic exploration process. We validate mppSoC in a real FPGA-based video application. In this same context, a comparison between mppSoC and other embedded systems shows the sufficient performance and effectiveness of mppSoC
Bouderbane, Mustapha. "Système de vision à haute gamme dynamique auto adaptable." Thesis, Bourgogne Franche-Comté, 2020. http://www.theses.fr/2020UBFCK048.
High dynamic range (HDR) image generation using temporal exposure bracketing is widely used to recover the whole dynamic range of a filmed scene by fusing two or more low dynamic range (LDR) images. The temporal exposure bracketing technique is intended for static scenes and cannot be applied directly to dynamic scenes: motions introduced by moving objects in the LDR image stack create ghost artifacts in the reconstructed HDR image. In this thesis, we have studied and evaluated a large number of algorithms used to correct or avoid these artifacts, and we made a trade-off between robustness and complexity in order to propose a real-time HDR video generation system. The real-time HDR image generation system is implemented on an FPGA circuit. This FPGA-based smart camera is presented with experimental results to demonstrate the efficiency of the selected method and design. The proposed system generates HDR video streams, including ghost-removal processing, at 60 fps for the full sensor resolution (1280 × 1024)
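For readers unfamiliar with exposure fusion, the following C sketch merges two LDR exposures of one pixel into an HDR radiance estimate, assuming a linear sensor response and a simple triangle weighting; it deliberately omits the ghost-removal stage that is the subject of the thesis, and every constant in it is an illustrative assumption:

```c
#include <stdio.h>

/* Triangle ("hat") weight: trust mid-range values, distrust pixels that are
 * nearly under- or over-exposed. Illustrative choice only. */
static double weight(unsigned char z)
{
    return (z <= 127) ? (double)z + 1.0 : 256.0 - (double)z;
}

/* Merge two LDR samples of the same scene point, assuming a linear sensor
 * response (pixel value ~ radiance * exposure time). t0 and t1 are the
 * exposure times of the two frames. */
static double merge_pixel(unsigned char z0, double t0,
                          unsigned char z1, double t1)
{
    double w0 = weight(z0), w1 = weight(z1);
    return (w0 * (z0 / t0) + w1 * (z1 / t1)) / (w0 + w1);
}

int main(void)
{
    /* Same scene point seen in a short (1/1000 s) and a long (1/60 s) exposure. */
    double radiance = merge_pixel(18, 1.0 / 1000.0, 214, 1.0 / 60.0);
    printf("estimated radiance: %.1f\n", radiance);
    return 0;
}
```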
Cheng, Kevin. "Reconfigurable self-organised systems : architecture and implementation." Thesis, Metz, 2011. http://www.theses.fr/2011METZ039S/document.
Increasing needs for computing power, flexibility and interoperability are making systems more and more difficult to integrate and to control. The high number of possible configurations, alternative design decisions or the integration of additional functionalities in a working system can no longer be handled at the design stage only. In this context, where the evolution of networked systems is extremely fast, different concepts are studied with the objective of providing more autonomy and more computing power. This work proposes a new approach for the use of reconfigurable hardware in a self-organised context. A concept and a working system are presented as Reconfigurable Self-Organised Systems (RSS). The proposed hardware architecture aims to study the impact of reconfigurable FPGA-based systems in a self-organised networked environment, and partial reconfiguration is used to implement hardware accelerators at runtime. The proposed system is designed to observe, at each level, the parameters that impact the performance of the networked self-adaptive nodes. The results presented here assess how reconfigurable computing can be efficiently used to design a complex networked computing system, and the state of the art allowed us to highlight and formalise the characteristics of the proposed self-organised hardware concept. Its evaluation and the analysis of its performance were made possible using a custom board, the Potsdam Intelligent Camera System (PICSy), a complete implementation from the electronic board to the control application. To complete the work, measurements and observations allow analysis of this realisation and contribute to the common knowledge
Dechelotte, Jonathan. "Etude et mise en oeuvre d'un environnement d'exécution pour architecture hétérogène reconfigurable." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0025.
Today, embedded systems have taken a leading role in our world. Whether for communication, travel, work or entertainment, their use is preponderant. Together, research and industry efforts constantly develop the various parts that make up these systems: processor, FPGA, memory, operating system. From an architectural point of view, the combination of a general-purpose architecture with a reconfigurable architecture makes SoC FPGAs popular targets for embedded systems. However, their implementation complexity makes their adoption difficult. Abstracting the low-level layers appears to be a line of investigation that could reverse this trend. The use of an operating system seems suitable at first glance, because operating systems deliver an ecosystem of drivers and services for accessing hardware resources, native scheduling capabilities and libraries for security. However, this solution brings its own constraints and leads to the evaluation of other approaches. This manuscript evaluates the ability of a high-level language, Lua, to provide an execution environment in the case where the implementation does not provide an operating system. It provides, through an ecosystem named Lynq, the necessary building blocks for the management and allocation of the resources present on the SoC FPGA, as well as a method for isolation between applications. Besides the adoption of this execution environment, our work explores the capacity of general-purpose architectures such as CPUs to become specialized when implemented on an FPGA. This is done through a contribution allowing the generation of a RISC-V CPU and its associated microcode
Lorandel, Jordane. "Etude de la consommation énergétique de systèmes de communications numériques sans fil implantés sur cible FPGA." Thesis, Rennes, INSA, 2015. http://www.theses.fr/2015ISAR0036/document.
Wireless communication systems have kept evolving over the last decades, driven by the growing demand of the electronics market for energy-efficient and high-performance devices. Thereby, new design constraints have appeared that aim at taking power consumption into account in order to improve the battery life of circuits. Current wireless communication systems commonly dissipate a lot of power. On the other hand, the complexity of such systems keeps increasing from one generation to the next in order to satisfy more users at a high level of performance. In this highly constrained context, FPGA devices are an attractive technology, able to support complex systems thanks to their large number of resources. Given the nature of FPGAs, designers need to estimate the power consumption and the performance of their wireless communication systems as early as possible in the design flow. In this way, they can perform efficient design space exploration and make decisive implementation and optimization choices. In this thesis, a power estimation methodology for hardware-focused FPGA designs is described, which aims at making design space exploration much easier by providing early and fast power and performance estimation at high level. It also proposes a way to efficiently compare several systems. The methodology relies on an IP characterisation step and the development of corresponding SystemC models. A high-level description of the entire system is then built from the previously developed SystemC models. High-level simulations make it possible to check the functionality and evaluate the power and performance of the system. One of the contributions consists in monitoring the IP time activities during the simulation; we show that this has an important impact on both power and performance. The effectiveness of the methodology has been demonstrated on several baseband processing chains of the wireless communication domain, such as a generic SISO-OFDM chain, LTE transmitters, etc. To conclude, the main limitations of the proposed methodology have been investigated and addressed
Alouani, Ihsen. "Conception de systèmes embarqués fiables et auto-réglables : applications sur les systèmes de transport ferroviaire." Thesis, Valenciennes, 2016. http://www.theses.fr/2016VALE0013/document.
During the last few decades, tremendous progress in the performance of semiconductor devices has been accomplished. In this emerging era of high-performance applications, machines need not only to be efficient but also to be dependable at circuit and system levels. Several works have been proposed to increase embedded systems' efficiency by reducing the gap between software flexibility and hardware high performance. Due to their reconfigurable nature, Field Programmable Gate Arrays (FPGAs) represent a relevant step towards bridging this performance/flexibility gap. Nevertheless, dynamic reconfiguration (DR) has continuously suffered from a bottleneck corresponding to a long reconfiguration time. In this thesis, we propose a novel medium-grained high-speed dynamic reconfiguration technique for DSP48E1-based circuits. The idea is to take advantage of the runtime reprogrammability of DSP48E1 slices, coupled with a re-routable interconnection block, to change the overall circuit functionality in one clock cycle. In addition to embedded systems' efficiency, this thesis deals with reliability challenges in new sub-micron electronic systems. In fact, as new technologies rely on reduced transistor size and lower supply voltages to improve performance, electronic circuits are becoming remarkably sensitive and increasingly susceptible to transient errors. The system-level impact of these errors can be far-reaching, and Single Event Transients (SETs) have become a serious threat to embedded systems' reliability, especially for safety-critical applications such as transportation systems. Reliability enhancement techniques based on overestimated soft error rates (SERs) can lead to unnecessary resource overheads as well as high power consumption; taking error masking phenomena into account is therefore a fundamental element for an accurate estimation of SERs. This thesis proposes a new cross-layer model of circuit vulnerability based on a combined modeling of Transistor-Level Masking (TLM) and System-Level Masking (SLM) mechanisms. We then use this model to build a self-adaptive fault-tolerant architecture that evaluates the circuit's effective vulnerability at runtime. Accordingly, the reliability enhancement strategy is adapted to protect only the vulnerable parts of the system, leading to a reliable circuit with optimized overheads. Experiments performed on a radar-based obstacle detection system for railway transportation show that the proposed approach allows relevant reliability/resource-utilization trade-offs
Ben, Jmaa Chtourou Yomna. "Implémentation temps réel des algorithmes de tri dans les applications de transports intelligents en se basant sur l'outil de synthèse haut niveau HLS." Thesis, Valenciennes, 2019. http://www.theses.fr/2019VALE0013.
Intelligent transport systems play an important role in minimizing accidents, traffic congestion, and air pollution. Among these systems, we can mention the avionics domain, which in several cases uses sorting algorithms, one of the important operations for real-time embedded applications. However, technological evolution is moving towards more and more complex architectures to meet application requirements. In this respect, designers find an ideal solution in reconfigurable computing, based on heterogeneous CPU/FPGA architectures that combine multi-core processors (CPUs) and FPGAs, offering high performance and adaptability to the real-time constraints of the application. The main objective of my work is to develop hardware implementations of sorting algorithms on a heterogeneous CPU/FPGA architecture by using a high-level synthesis tool to generate the RTL design from a behavioral description. This step requires additional effort from the designer in order to obtain an efficient hardware implementation, using several optimizations over different use cases: software, optimized and non-optimized hardware, and for several permutations/vectors generated with a permutation generator based on the Lehmer method. To evaluate performance, we measured the run time, the standard deviation and the number of resources used by the sorting algorithms, considering several data sizes ranging from 8 to 4096 items. Finally, we compared the performance of these algorithms. These algorithms will be integrated into decision-support and flight-plan planning applications
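As an illustration of what handing a sorting routine to a high-level synthesis tool looks like, the sketch below shows an even-odd transposition sort written in the C subset HLS tools typically accept, annotated with Vivado-HLS-style pragmas; the algorithm choice, the fixed size N and the pragmas are assumptions for illustration, not necessarily those studied in the thesis:

```c
#include <stdio.h>

#define N 64  /* HLS favours statically bounded problem sizes */

/* Even-odd transposition sort: N phases of compare-and-swap on alternating
 * pairs. The pragmas ask the tool to partition the array into registers and
 * to unroll the inner exchanges; a plain C compiler simply ignores them. */
void sort_hw(int data[N])
{
#pragma HLS ARRAY_PARTITION variable=data complete
    for (int phase = 0; phase < N; phase++) {
#pragma HLS PIPELINE
        int start = phase & 1;                 /* alternate even/odd pairs */
        for (int i = start; i + 1 < N; i += 2) {
#pragma HLS UNROLL
            if (data[i] > data[i + 1]) {       /* compare-and-swap */
                int t = data[i];
                data[i] = data[i + 1];
                data[i + 1] = t;
            }
        }
    }
}

int main(void)                                 /* simple software testbench */
{
    int v[N];
    for (int i = 0; i < N; i++) v[i] = N - i;  /* worst case: reversed input */
    sort_hw(v);
    printf("first=%d last=%d\n", v[0], v[N - 1]);  /* expect 1 and 64 */
    return 0;
}
```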
Nejat, Arash. "Tirer parti du masquage logique pour faciliter les méthodes de détection des chevaux de Troie hardware." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT004.
The ever-increasing complexity of integrated circuit (IC) design and manufacturing has necessitated the involvement of third parties such as design houses, intellectual property (IP) providers and fabrication foundries to accelerate and economize the development process. The separation of these parties results in security threats. Untrustworthy fabrication foundries are suspected of three of them: hardware Trojans, IP piracy, and IC overproduction. Hardware Trojans are malicious circuit alterations in IC layouts intended for sabotage. Some IC design modifications, known as Design-for-Trust (DfTr), have been proposed to facilitate Trojan detection methods or prevent Trojan insertion. In addition, key-based modifications, known as design masking or obfuscation, have been proposed to protect IPs/ICs from IP piracy and IC overproduction. They obscure a circuit's functionality by modifying it so that it does not work correctly unless fed with the correct key. In this thesis, we propose three DfTr methods that leverage the masking approach to hinder Trojan insertion. The first proposed DfTr method aims to maximize obscurity while minimizing the number of rare signals in the masked circuit. Rare signals barely have transitions during circuit operation, so Trojans that use them are not easily activated and detected during circuit tests. The second proposed DfTr facilitates path-delay-analysis-based Trojan detection. Since the delay of shorter paths varies less than that of longer ones, the objective is to generate fake short paths for nets which only belong to long paths, by repurposing the masking elements. Our experiments show that this DfTr method increases Trojan detectability in modified circuits while also providing the advantages of masking. The aim of the third DfTr method is to facilitate power-analysis-based Trojan detection. In a circuit masked by the proposed method, one has more control over the switching activity of the different circuit parts. For instance, one can target one part of the circuit, increase its switching activity, and simultaneously decrease the switching activity of the other parts; consequently, if the targeted part includes a hardware Trojan, its switching activity and thus its power consumption rise, while the total power consumption of the circuit goes down due to the low switching rates in most of the circuit. When the circuit consumes less power, the power measurement noise abates; this noise otherwise hinders the observation of a Trojan's effect on the power consumption of an infected circuit. In addition, this thesis introduces a CAD tool that can run various masking algorithms on gate-level netlists. The tool can also perform logic simulation and estimate circuit area, power consumption, and performance at the gate level
Wahab, Muhammad Abdul. "Hardware support for the security analysis of embedded softwares : applications on information flow control and malware analysis." Thesis, CentraleSupélec, 2018. http://www.theses.fr/2018CSUP0003.
Full textInformation flow control (also known as Dynamic Information Flow Tracking, DIFT) allows a user to detect several types of software attacks such as buffer overflows or SQL injections. In this thesis, a solution based on the ARM Cortex-A9 processor family is proposed. Our approach relies on ARM CoreSight components, which are able to trace the software executed by the processor, in order to perform the information flow tracking. The DIFT coprocessor proposed in this thesis is implemented in an Artix-7 FPGA, embedded in a Zynq System-on-Chip (SoC) provided by Xilinx. It is shown that using ARM CoreSight components adds no latency overhead while improving the communication time between the ARM processor and the DIFT coprocessor
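As a purely software illustration of the taint-propagation principle behind DIFT (the thesis implements it in an FPGA coprocessor fed by CoreSight traces), a minimal sketch follows; the trace format and register names are assumptions.

```python
def propagate_taint(trace, tainted_sources):
    """One-bit DIFT over a linear instruction trace: a destination becomes
    tainted if any of its sources is tainted."""
    taint = {s: True for s in tainted_sources}
    for dest, sources in trace:
        taint[dest] = any(taint.get(s, False) for s in sources)
    return taint

# Data loaded from an untrusted buffer flows through r1 and r2 into r3.
trace = [("r1", ["untrusted_buf"]), ("r2", ["r1", "r0"]), ("r3", ["r2"])]
print(propagate_taint(trace, {"untrusted_buf"}))   # r3 ends up tainted
```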
Afonso, George. "Vers une nouvelle génération de systèmes de test et de simulation avionique dynamiquement reconfigurables." Phd thesis, Université des Sciences et Technologie de Lille - Lille I, 2013. http://tel.archives-ouvertes.fr/tel-00921874.
Full textCabanes, Quentin. "New hardware platform-based deep learning co-design methodology for CPS prototyping : Objects recognition in autonomous vehicle case-study." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG042.
Full textCyber-Physical Systems (CPSs) are a mature research topic at the intersection of Artificial Intelligence (AI) and Embedded Systems (ES). A CPS can be defined as a networked ES that analyzes a physical environment via sensors and makes decisions from its current state to drive it toward a desired outcome via actuators. These CPSs perform data analysis, which requires powerful algorithms combined with robust hardware architectures. On the one hand, Deep Learning (DL) is proposed as the main algorithmic solution. On the other hand, the standard design and prototyping methodologies for ES are not adapted to modern DL-based CPSs. In this thesis, we investigate AI design for CPSs around embedded DL using a hybrid CPU/FPGA platform. We propose a methodology to develop DL applications for CPSs based on a neural-network accelerator and automation software to speed up prototyping time. We present the design and prototyping of our hardware neural-network accelerator. Finally, we validate our work using a smart LIDAR (LIght Detection And Ranging) application use case with several algorithms for pedestrian detection using a 3D point cloud from a LIDAR
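To make the LIDAR use case more concrete, here is a small, hypothetical pre-processing step that turns a 3D point cloud into an occupancy grid of the kind a neural-network accelerator could consume; the voxel size and grid shape are illustrative assumptions, not values from the thesis.

```python
import numpy as np

def voxelize(points, voxel_size=0.2, grid_shape=(64, 64, 16)):
    """Map an (N, 3) array of LIDAR points (metres) to a binary occupancy grid."""
    grid = np.zeros(grid_shape, dtype=np.uint8)
    idx = np.floor(points / voxel_size).astype(int)
    inside = np.all((idx >= 0) & (idx < np.array(grid_shape)), axis=1)
    grid[tuple(idx[inside].T)] = 1          # mark occupied voxels
    return grid

cloud = np.random.uniform(0.0, 10.0, size=(1000, 3))   # fake point cloud
print(voxelize(cloud).sum(), "occupied voxels")
```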
Chouchene, Wissem. "Vers une reconfiguration dynamique partielle parallèle par prise en compte de la régularité des architectures FPGA-Xilinx." Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10135/document.
Full textThis work proposes two complementary design flows allowing the broadcast of a partial bitstream to a set of identical Partially Reconfigurable Regions (PRRs). Both design flows are applicable to Xilinx FPGAs. The first, called ADForMe (Automatic DPPR Flow For Multi-RPRs Architecture), automates the traditional Xilinx DPR flow by automating its floorplanning phase. This floorplanning is carried out by our AFLORA (Automatic Floorplanning For Multi-RPRs Architectures) algorithm, which gives all PRRs the same geometric footprint while taking into account the technological parameters of the FPGA and the architectural parameters of the design, so as to allow bitstream relocation. The second proposed flow promotes the 1D and 2D relocation technique in order to allow the broadcast of a partial bitstream (a functionality) to a set of PRRs within a system configuration; this flow therefore optimizes the size of the bitstream memory. We have also proposed a suitable hardware architecture capable of performing this broadcast. The experiments were performed on recent Xilinx FPGAs and demonstrate the execution speed of our AFLORA algorithm as well as the efficiency of the automated bitstream relocation flow. Together, these two flows provide flexibility and reusability of the IP components embedded in multi-PRR architectures, reducing design time and improving design productivity
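As a rough illustration of the floorplanning problem AFLORA solves (the real algorithm also accounts for heterogeneous BRAM/DSP columns so that bitstreams remain relocatable), a naive tiling of identical rectangular regions might look like the hypothetical sketch below.

```python
def place_identical_regions(cols, rows, rr_w, rr_h, n_regions):
    """Tile n identical reconfigurable regions of size rr_w x rr_h on a
    rows x cols resource grid, left to right and top to bottom."""
    placements = []
    for y in range(0, rows - rr_h + 1, rr_h):
        for x in range(0, cols - rr_w + 1, rr_w):
            placements.append((x, y, rr_w, rr_h))
            if len(placements) == n_regions:
                return placements
    raise ValueError("not enough room for the requested regions")

print(place_identical_regions(cols=100, rows=50, rr_w=20, rr_h=10, n_regions=6))
```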
Causo, Matteo. "Neuro-Inspired Energy-Efficient Computing Platforms." Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10004/document.
Full textBig Data highlights all the flaws of the conventional computing paradigm. Neuro-Inspired computing and other data-centric paradigms instead treat Big Data as a resource for progress. In this dissertation, we adopt Hierarchical Temporal Memory (HTM) principles and theory as our neuroscientific reference, and we elaborate on how Bayesian Machine Learning (BML) allows apparently quite different Neuro-Inspired approaches to be unified and to meet our two main objectives: (i) simplifying and enhancing BML algorithms and (ii) approaching Neuro-Inspired computing from an ultra-low-power perspective. In this way, we aim to bring intelligence close to data sources and to popularize BML on strictly constrained electronics such as portable, wearable and implantable devices. Nevertheless, BML algorithms demand optimization: their naïve hardware implementation is neither effective nor feasible because of the required memory, computing power and overall complexity. We propose a less complex, on-line, distributed nonparametric algorithm and show better results with respect to state-of-the-art solutions. In fact, we gain two orders of magnitude in complexity reduction through algorithm-level considerations and manipulations alone, and a further order of magnitude through traditional hardware optimization techniques. In particular, we develop a proof of concept on an FPGA platform for real-time stream analytics. Finally, we show that the ultimate findings in Machine Learning can be summarized into a generally valid algorithm that can be implemented in hardware and optimized for strictly constrained applications
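As one concrete example of the kind of low-memory, on-line estimator that suits FPGA stream analytics (not the algorithm developed in this dissertation), a streaming mean/variance update is sketched below.

```python
class OnlineGaussian:
    """Welford's streaming estimate of mean and variance: O(1) memory per feature,
    one small update per incoming sample."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    @property
    def variance(self):
        return self.m2 / self.n if self.n > 1 else 0.0

g = OnlineGaussian()
for sample in (1.0, 2.0, 4.0, 8.0):
    g.update(sample)
print(g.mean, g.variance)
```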
Azzaz, Mohamed Salah. "Implantation paramétrable d'un nouvel algorithme de cryptage symétrique basé Chaos par inclusion au sein d'une architecture reconfigurable de type FPGA." Thesis, Université de Lorraine, 2012. http://www.theses.fr/2012LORR0385.
Full textSince 1980, the idea of using dynamic systems with chaotic behaviour for the design of encryption/decryption algorithms has attracted increasing attention from researchers. The strong dynamics of chaotic systems, such as sensitivity to initial conditions and control parameters, long-term unpredictability and a broad spectrum, can provide important properties such as the confusion and diffusion usually met in standard cryptography. There are two possible approaches to designing chaos-based cryptosystems: analog and digital. Analog encryption techniques are primarily based on chaos synchronization, whereas digital chaotic encryption approaches do not depend on chaos synchronization and can be implemented either in software or in hardware. This thesis focuses on the digital design and implementation of a new cryptosystem based on chaos synchronization. The discovery of the possibility of chaos synchronization in 1990 opened the door to the investigation of digital chaos-based encryption, and many contributions with promising achievements have followed. However, a number of recently proposed digital chaotic ciphers have been shown to be insufficiently secure and have been cryptanalyzed. In order to design more secure digital chaotic ciphers that meet the security requirements of embedded systems, rules and new mechanisms must be carefully considered to make up for the flaws in the design flow. In particular, the degradation of the dynamics of chaotic systems has not been seriously considered by most designers of digital chaotic ciphers, and most of the digital chaos-based cryptosystems proposed in the literature do not address real-time embedded applications. Consequently, this thesis focuses on design solutions providing security suitable for real-time embedded applications. Our contributions are, firstly, the design and hardware implementation, on reconfigurable FPGA technology, of a pseudo-random key generator based on chaotic systems (continuous and discrete); secondly, a detailed statistical and security analysis of the proposed generators; thirdly, the design and integration of a new chaotic generator into a symmetric stream cipher, including the resolution of the chaos-synchronization problem between the transmitter (encryption) and the receiver (decryption); and fourthly, the hardware implementation of the proposed cryptosystem in real encryption applications, i.e., the encryption/decryption of real-time audio, image and video data. In addition, a performance evaluation and comparisons with previous conventional and chaos-based ciphers are carried out in order to identify weaknesses and strengths and to define prospects for future work
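A toy software illustration of the chaos-based stream-cipher idea is given below, using the logistic map as keystream generator; it ignores the fixed-point dynamics-degradation and synchronization issues the thesis actually addresses, and all parameter values are illustrative assumptions.

```python
def logistic_keystream(x0, r, n_bytes):
    """Iterate the logistic map x <- r*x*(1-x) and keep 8 bits per iteration."""
    x, out = x0, bytearray()
    for _ in range(n_bytes):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

def xor_cipher(data, keystream):
    return bytes(d ^ k for d, k in zip(data, keystream))

msg = b"real-time audio frame"
ks = logistic_keystream(x0=0.7, r=3.99, n_bytes=len(msg))
assert xor_cipher(xor_cipher(msg, ks), ks) == msg   # decryption recovers the plaintext
```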
Larouche, Jean-Benoit. "Implémentation d'une couche physique temps réel MIMO-OFDM sur FPGA." Thesis, Université Laval, 2014. http://www.theses.ulaval.ca/2014/30389/30389.pdf.
Full textThis report focuses on a detailed description of a physical layer implemented on an FPGA platform. The physical layer integrates many of the up-to-date technologies used in the latest-generation telecommunication standards. First of all, an overview of the OFDM and MIMO technologies is presented, since both are very important in today's telecommunications. Thereafter, the hardware used to test the proper functioning of the physical layer is described. The major part of the report is devoted to the description of the physical layer itself, for which a detailed block diagram is presented. The physical layer is divided into two main sections: the transmitter and the receiver. Regarding the transmitter, the structure of the generated packet is presented together with the acquisition and channel-estimation symbols. On the receiver side, we focus on the algorithms implemented to decode a packet: the automatic gain control algorithm, the carrier frequency offset estimator, the block boundary detector and the channel estimator are detailed. Finally, bit error rate curves in an additive white Gaussian noise channel are presented and compared to theoretical curves. A discussion of the results obtained follows, as well as a list of future improvements which could take the physical layer further.
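For illustration, the carrier frequency offset estimation step mentioned above can be sketched as an autocorrelation over a preamble made of two identical halves (Schmidl-and-Cox style); the function name, preamble length and sample rate are assumptions, not the exact blocks of this physical layer.

```python
import numpy as np

def estimate_cfo(rx, half_len, fs):
    """CFO estimate (Hz) from a preamble whose two halves of length half_len are identical."""
    corr = np.vdot(rx[:half_len], rx[half_len:2 * half_len])   # sum of conj(first half) * second half
    return np.angle(corr) * fs / (2 * np.pi * half_len)

# Self-check: synthetic preamble, 500 Hz offset, 1 MHz sample rate.
fs, L, cfo = 1e6, 64, 500.0
half = np.exp(2j * np.pi * np.random.rand(L))
rx = np.concatenate([half, half]) * np.exp(2j * np.pi * cfo * np.arange(2 * L) / fs)
print(estimate_cfo(rx, L, fs))   # close to 500 Hz
```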
Viswanathan, Venkatasubramanian. "Une architecture évolutive flexible et reconfigurable dynamiquement pour les systèmes embarqués haute performance." Thesis, Valenciennes, 2015. http://www.theses.fr/2015VALE0029.
Full textIn this thesis, we propose a scalable and customizable reconfigurable computing platform, with a parallel full-duplex switched communication network and a software execution model, to redefine the computation, communication and reconfiguration paradigms in high-performance embedded systems. High Performance Embedded Computing (HPEC) applications are becoming highly sophisticated and resource-consuming for three reasons. First, they must capture and process real-time data from several I/O sources in parallel. Second, they must adapt their functionalities to application or environment variations within given Size, Weight and Power (SWaP) constraints. Third, since they process several parallel I/O sources, applications are often distributed over multiple computing nodes, making them highly parallel. Thanks to the hardware parallelism and I/O bandwidth offered by Field Programmable Gate Arrays (FPGAs), an application can be duplicated several times to process parallel I/Os, making Single Program Multiple Data (SPMD) the favored execution model for designers implementing parallel architectures on FPGAs. Furthermore, the Dynamic Partial Reconfiguration (DPR) feature allows efficient reuse of limited hardware resources, making FPGAs a highly attractive solution for such applications. The problem with current HPEC systems is that they are usually built to meet the needs of a specific application, i.e., they lack the flexibility needed to upgrade the system or reuse existing hardware resources, while the applications that run on such hardware architectures are constantly being upgraded. There is thus a real need for flexible and scalable hardware architectures and parallel execution models in order to easily upgrade systems and reuse hardware resources within acceptable time bounds; without them, these applications face challenges such as obsolescence, hardware redesign cost, sequential and slow reconfiguration, and wasted computing power. Addressing these challenges, we propose an architecture that allows the customization of computing nodes (FPGAs), the broadcast of data (I/O, bitstreams) and the reconfiguration of several computing nodes, or a subset of them, in parallel. The software environment leverages the potential of the hardware switch to support the SPMD execution model. Finally, in order to demonstrate the benefits of our architecture, we have implemented a scalable, distributed, secure H.264 encoding application along with several avionic communication protocols for data and control transfers between the nodes. We used an FMC-based high-speed serial Front Panel Data Port (sFPDP) data acquisition protocol to capture, encode and encrypt raw video streams. The system was implemented on three different FPGAs, respecting the SPMD execution model. In addition, we implemented modular I/Os by swapping I/O protocols dynamically when required by the system. We have thus demonstrated a scalable and flexible architecture and a parallel run-time reconfiguration model able to manage several parallel input video sources. These results constitute a conceptual proof of massively parallel, dynamically reconfigurable, next-generation embedded computers
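To illustrate the SPMD execution model in plain software terms (on the proposed platform each node is an FPGA reached through the switched network, not a process), a hypothetical sketch follows.

```python
from multiprocessing import Pool

def node_program(args):
    """The same program runs on every node; only the data slice differs (SPMD)."""
    node_id, frames = args
    return node_id, [f * 2 for f in frames]     # stand-in for the per-node encode/encrypt work

if __name__ == "__main__":
    # Three parallel I/O sources, one slice of frames per node.
    streams = [(i, list(range(i * 4, i * 4 + 4))) for i in range(3)]
    with Pool(processes=3) as pool:
        print(pool.map(node_program, streams))
```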
Ahmad, Mohamad El. "Investigation of monitoring techniques for self-adaptive integrated systems." Thesis, Montpellier, 2018. http://www.theses.fr/2018MONTS048.
Full textOver the last decade, the miniaturization of semiconductor technologies and large-scale integration have given rise to complex system designs, including the integration of several billion transistors on a single die. This trend poses many manufacturing and reliability challenges, such as power dissipation, technological variability and application versatility. Reliability issues such as thermal hotspots can accelerate transistor degradation and consequently reduce the chip lifetime, a phenomenon referred to as "aging". In order to address these challenges, new solutions are required, based in particular on self-adaptive systems. Such systems are mainly composed of a control loop with three processes: (i) monitoring, which is responsible for observing the state of the system; (ii) diagnosis, which analyzes the collected information and makes decisions to optimize the behavior of the system; and (iii) action, which adjusts the system parameters accordingly. However, effective adaptation depends critically on the monitoring process, which should provide an accurate estimate of the system state in a cost-effective manner. In this thesis, we first investigate the monitoring of power consumption. We develop a method, based on several data-mining algorithms, to monitor the toggling activity of a few relevant signals selected at the RTL level. The proposed method is a generic flow that can be used to model the power consumption of any RTL circuit in any technology. Secondly, we extend the proposed flow to estimate the overall thermal behavior of the chip and develop a new on-die thermal sensor placement technique. The proposed algorithms systematically choose the best trade-off between accuracy and overhead, decomposing the chip surface into several thermally homogeneous regions. Besides the design part, modern embedded systems integrate hardware sensors (analog or digital) that can be used to monitor the system's state. These industrial methods are usually very expensive and require a large number of units to produce precise information at fine-grained resolution. An alternative way to provide an accurate estimate of the system's state is a set of performance counters that can be configured to track logical events at different levels. To this end, we propose a novel algorithm for selecting the relevant performance events among the local, shared and system resources, together with an implementation of a neural-network-based estimation algorithm. The proposed method is robust against external temperature variations. Furthermore, thermal estimation can also be achieved using current and past logical events, and the accuracy is evaluated as a function of the history depth. Finally, once the tracking method and target are defined and the system is configured, the monitoring method is used at run-time: we implemented a complete adaptation loop, with dynamic monitoring of the system's state, in order to achieve better energy efficiency
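As a simplified stand-in for the neural-network estimator described above, the sketch below fits a linear model from performance-counter samples to measured temperature; the data shapes and values are synthetic assumptions, not measurements from the thesis.

```python
import numpy as np

def fit_counter_model(counters, temperature):
    """Least-squares fit of T ≈ W·c + b from (n_samples, n_events) counter data."""
    X = np.hstack([counters, np.ones((counters.shape[0], 1))])   # append bias column
    w, *_ = np.linalg.lstsq(X, temperature, rcond=None)
    return w

def predict_temperature(w, counters):
    return np.hstack([counters, np.ones((counters.shape[0], 1))]) @ w

# Synthetic example: 200 samples of 4 counter values and a noisy temperature reading.
rng = np.random.default_rng(0)
C = rng.uniform(0, 1e6, size=(200, 4))
T = C @ np.array([2e-5, 1e-5, 5e-6, 0.0]) + 35.0 + rng.normal(0, 0.2, 200)
w = fit_counter_model(C, T)
print(predict_temperature(w, C[:3]), T[:3])
```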
Hoang, Van Trinh. "Design under constraints of Dependability and Energy for Wireless Sensor Network." Thesis, Lorient, 2014. http://www.theses.fr/2014LORIS351/document.
Full textThe uncertain contexts in which recent WSN embedded applications evolve have a big impact on these applications. Traditionally, the availability objective is met through hardware and functional redundancy, which doubles the overhead in terms of energy and cost. Moreover, a wireless node is powered by a limited battery, so the power budget restricts the number of components and functionalities to minimum resources. At the same time, due to technology scaling, process variability increases the probability of failures. In order to guarantee an acceptable quality of service for the users over the operating lifetime of the system, studies must be carried out in the early design phases that involve both dependability and consumption constraints. This thesis proposes a novel design for wireless sensor networks, in order to reduce energy consumption and increase network dependability
Devic, Florian. "Securing embedded systems based on FPGA technologies." Thesis, Montpellier 2, 2012. http://www.theses.fr/2012MON20107.
Full textEmbedded systems may contain sensitive data. Such data are usually exchanged in plaintext between the system-on-chip and the memory, but also internally. This is a weakness: an attacker can spy on these exchanges and retrieve information or insert malicious code. The aim of this thesis is to provide a dedicated and suitable solution to these problems by considering the entire life cycle of the embedded system (boot, updates and execution) and all the data (FPGA bitstream, operating-system kernel, critical data and code). Furthermore, it is necessary to optimize the performance of the hardware security mechanisms introduced, in order to match the expectations of embedded systems. This thesis distinguishes itself by offering innovative solutions suited to the world of FPGAs
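To illustrate, in software, the confidentiality-plus-integrity protection that such a system applies to a bitstream or kernel image (the thesis implements dedicated hardware mechanisms; this is only a conceptual analogue using the Python cryptography package, with a placeholder payload), one could write:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
aesgcm = AESGCM(key)

bitstream = b"...partial bitstream or kernel image..."   # placeholder payload
nonce = os.urandom(12)                                    # must be unique per encryption
header = b"boot-stage-1"                                  # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, bitstream, header)     # confidentiality + integrity tag
recovered = aesgcm.decrypt(nonce, ciphertext, header)     # raises if anything was tampered with
assert recovered == bitstream
```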
Petura, Oto. "True random number generators for cryptography : Design, securing and evaluation." Thesis, Lyon, 2019. http://www.theses.fr/2019LYSES053.
Full textRandom numbers are essential for modern cryptographic systems. They are used as cryptographic keys, nonces, initialization vectors and random masks for protection against side-channel attacks. In this thesis, we deal with random number generators in logic devices (Field Programmable Gate Arrays – FPGAs and Application-Specific Integrated Circuits – ASICs). We first present fundamental methods for generating random numbers in logic devices. Then, we discuss different types of TRNGs that use clock jitter as a source of randomness. We provide a rigorous evaluation of various AIS-20/31-compliant TRNG cores implemented in three different FPGA families: Intel Cyclone V, Xilinx Spartan-6 and Microsemi SmartFusion2. We then present the implementation of selected TRNG cores in a custom ASIC and evaluate them. Next, we study the PLL-TRNG in depth in order to provide a secure design of this TRNG together with embedded tests. Finally, we study oscillator-based TRNGs: we compare different randomness extraction methods as well as different oscillator types and the behavior of the clock jitter inside each of them, and we propose methods of embedded jitter measurement for on-line testing of oscillator-based TRNGs
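As an example of one classic randomness-extraction (post-processing) method of the kind compared in such work, a von Neumann corrector is sketched below; it assumes independent, possibly biased input bits and is purely illustrative.

```python
def von_neumann_extract(bits):
    """Pair up raw bits; '01' -> 0, '10' -> 1, equal pairs are discarded.
    Removes bias from an independent bit stream at the cost of throughput."""
    out = []
    for b0, b1 in zip(bits[0::2], bits[1::2]):
        if b0 != b1:
            out.append(b0)
    return out

print(von_neumann_extract([1, 0, 1, 1, 0, 1, 0, 0, 1, 0]))   # -> [1, 0, 1]
```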