Dissertations on the topic „Adaptation à l'utilisateur“
Cite a source in APA, MLA, Chicago, Harvard, and other citation styles
Explore the top 44 dissertations for research on the topic „Adaptation à l'utilisateur“.
Next to every work in the bibliography there is an „Add to bibliography“ option. Use it, and your bibliographic reference for the chosen work will be formatted automatically in the required citation style (APA, MLA, Harvard, Chicago, Vancouver, etc.).
You can also download the full text of the scholarly publication as a PDF and read an online abstract of the work, if the relevant parameters are present in the metadata.
Browse dissertations from a wide variety of disciplines and compile a correctly formatted bibliography.
Artusi, Xavier. „Interface Cerveau Machine avec adaptation automatique à l'utilisateur“. PhD thesis, Ecole centrale de Nantes - ECN, 2012. http://tel.archives-ouvertes.fr/tel-00822833.
Artusi, Xavier. „Interface cerveau machine avec adaptation automatique à l'utilisateur“. PhD thesis, Ecole centrale de Nantes, 2012. http://www.theses.fr/2012ECDN0018.
We study a brain-computer interface (BCI) to control a prosthesis by thought. The aim of the BCI is to decode the movement desired by the subject from electroencephalographic (EEG) signals. The core of the BCI is a classification algorithm characterized by the choice of signal descriptors and decision rules. The purpose of this thesis is to develop an accurate BCI system, able to improve its performance during use and to adapt to the user's evolution without requiring multiple learning sessions. We combine two approaches to achieve this. The first is to increase the precision of the decision system by looking for descriptors relevant to the classification. The second is to include feedback to the user on the system's decision: the idea is to estimate the error of the BCI from evoked brain potentials, which reflect the emotional state of the patient correlated with the success or failure of the decision taken by the BCI, and to correct the decision system of the BCI accordingly. The main contributions are: we have proposed a method to optimize the feature space based on wavelets for multi-channel EEG signals; we theoretically quantified the performance of the complete system improved by the detector; a simulator of the corrected, closed-loop system has been developed to observe the behavior of the overall system and to compare different strategies for updating the learning set; and the complete system has been implemented and works online in real conditions.
Naderi, Hassan. „Accès personnalisé à l'information-adaptation au contexte de l'utilisateur“. Lyon, INSA, 2008. http://theses.insa-lyon.fr/publication/2008ISAL0005/these.pdf.
The information available on the Internet is growing at such a rate that text-retrieval methods based on term frequency will soon no longer be sufficient. One line of research is devoted to personalizing search, i.e. taking into account the user's specific traits and context to answer a query. We believe that the user's profile, community, and context are the three essential concepts to consider in tackling the problem of the growth of the World Wide Web. This thesis studies the combination of these three lines of thought. In the first part of this thesis, we develop a personalized and collaborative information retrieval system (called PERCIRS), which uses the first two concepts (profile and community). PERCIRS creates a ranked list of relevant documents for the query q of user U. This ranking is based on the documents selected for queries similar to q by users whose profiles are similar to that of U. The choice of the method for finding similar users plays an important role in the effectiveness of PERCIRS. To this end, we have proposed three categories of formulas for computing the similarity between two user profiles: an equality-based formula, a similarity-based formula, and finally a graph-based formula. In order to find the optimal category, we proposed two evaluation mechanisms, based on the concepts of categorization and classification respectively. Both mechanisms rely on the graph-based user-profile comparison formulas. Since PERCIRS is a personalized information retrieval (IR) system (because it takes user profiles into account), it cannot be evaluated by Cranfield-style evaluation mechanisms (e.g. TREC).
Consequently, in this thesis we propose a new mechanism that allows it to be evaluated alongside classical IR systems such as BM25 (Okapi). In the second part of this thesis, the user's context is used to adapt a document found by PERCIRS to the user's preferences. We propose to adapt a document physically and semantically according to the user profile and the context profile. A mechanism is also proposed for navigating through adapted documents according to the user's preferences.
Bouzit, Sara. „Plasticité de l'interaction Homme-Machine : présentation à l'utilisateur, une question de compromis“. Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM023/document.
This research contributes to the engineering of Human-Computer Interaction. It deals with the plasticity property, i.e. the ability of user interfaces to withstand variations in the context of use while preserving user-centered properties. More specifically, the object under study is UI transformation for speeding up interaction.
Naderi, Hassan, Jean-Marie Pinon, and Béatrice Rumpler. „Accès personnalisé à l'information : adaptation au contexte de l'utilisateur = Personalized information retrieval and adaptation to user's context“. Villeurbanne : Doc'INSA, 2009. http://docinsa.insa-lyon.fr/these/pont.php?id=naderi.
Galindo Losada, Julian. „Adaptation des interfaces utilisateurs aux émotions“. Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAM021/document.
User interface adaptation using emotions. Perso2U, an approach to personalize user interfaces with user emotions. User experience (UX) is nowadays recognized as an important quality factor in making systems or software successful in terms of user take-up and frequency of usage. UX depends on dimensions like emotion, aesthetics or visual appearance, identification, stimulation, meaning/value, or even fun, enjoyment, pleasure, or flow. Among these dimensions, the importance of usability and aesthetics is recognized, so both need to be considered while designing user interfaces (UI). This raises the question of how designers can check UX at runtime and improve it if necessary. To achieve good UI quality in any context of use (i.e. user, platform and environment), plasticity proposes to adapt the UI to the context while preserving user-centered properties. In a similar way, our goal is to preserve or improve UX at runtime by proposing UI adaptations. Adaptations can concern aesthetics or usability. They can be triggered by the detection of a specific emotion that can express a problem with the UI. So the research question addressed in this PhD is how to drive UI adaptation with a model of the user based on emotions and user characteristics (age and gender) to check or improve UX if necessary. Our approach aims to personalize user interfaces with user emotions at run-time. An architecture, Perso2U, has been designed to adapt the UI according to emotions and user characteristics (age and gender). Perso2U includes three main components: (1) an inferring engine, (2) an adaptation engine and (3) the interactive system. First, the inferring engine recognizes the user's situation and in particular his/her emotions (happiness, anger, disgust, sadness, surprise, fear, contempt, plus neutral), which belong to Ekman's emotion model.
Second, after emotion recognition, the most suitable UI structure is chosen and the set of UI parameters (audio, font size, widgets, UI layout, etc.) is computed based on the detected emotions. Third, this computation of a suitable UI structure and parameters allows the UI to execute run-time changes aiming to provide a better UI. Since emotion recognition is performed cyclically, UI adaptation is possible at run-time. To go further into the examination of the inferring engine, we ran two experiments on (1) the genericity of the inferring engine and (2) the influence of the UI on detected emotions with regard to age and gender. Since this approach relies on emotion recognition tools, we ran an experiment to study the similarity of emotions detected from faces, in order to understand whether this detection is independent of the emotion recognition tool or not. The results confirmed that the tools provide similar emotion values, with a high emotion-detection similarity. As UX depends on user interaction quality factors like aesthetics and usability, and on individual characteristics such as age and gender, we ran a second experimental analysis. It tends to show that: (1) UI quality factors (aesthetics and/or usability) influence user emotions differently depending on age and gender; and (2) the level (high and/or low) of UI quality factors seems to impact emotions differently depending on age and gender. From these results, we define thresholds based on age and gender that allow the inferring engine to detect usability and/or aesthetics problems.
Maïs, Chantal. „L'adaptation de l'aide à l'utilisateur : aider les programmeurs occasionnels à opérationaliser leurs plans sous-optimaux“. Aix-Marseille 1, 1989. http://www.theses.fr/1989AIX10004.
This thesis criticizes, from a psychological point of view, the kind of assistance provided by current help systems to casual users of computing devices. The criticism turns on the appropriateness of the reference model (i.e. the representation of the knowledge acquired or to be acquired by the user) used by those systems for interpreting users' actions and for defining the assistance they need. The reference model generally used in current help systems is mainly an expert model. The thesis shows, from a psychological analysis of the activity of casual programmers, the inappropriateness of the expert model for interpreting the behavior of this category of users. Likewise, the thesis shows the inappropriateness of the expert model for defining assistance that meets casual users' needs and expectations. In fact, the use of an expert model implies that casual users, like experts, attempt to achieve an optimal solution (optimization principle), whereas more often they attempt to achieve a satisficing solution (operationalization principle) by elaborating sub-optimal plans. They therefore cannot exploit help and solutions given in terms of the expert model, because these are too far from their representation of the problem. Thus the thesis suggests designing systems that help casual programmers to realize their sub-optimal plans (help with operationalization). More generally, it is suggested to provide help systems with a reference model that is stereotypical of the category of users to whom the help is addressed.
Dedieu, Sébastien. „Adaptation d'un système de reconstruction de modèles numériques 3D à partir de photographies face aux connaissances de l'utilisateur“. Bordeaux 1, 2001. http://www.theses.fr/2001BOR12481.
Asfari, Ounas. „Personnalisation et Adaptation de L'accès à L'information Contextuelle en utilisant un Assistant Intelligent“. PhD thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00650115.
Der volle Inhalt der QuelleDiallo, Mamadou Tourad. „Quality of experience and video services adaptation“. Electronic Thesis or Diss., Evry, Institut national des télécommunications, 2015. http://www.theses.fr/2015TELE0010.
With network heterogeneity and the increasing demand for multimedia services, Quality of Experience (QoE) becomes a crucial determinant of the success or failure of these services. In this thesis, we first propose to analyze the impact of quality metrics on user engagement, in order to understand the effects of video metrics (video startup time, average bitrate, buffering ratio) and content popularity on user engagement. Our results show that video buffering and content popularity are critical parameters which strongly impact end-user satisfaction and user engagement, while the video startup time appears less significant. On the other hand, we consider subjective approaches such as the Mean Opinion Score (MOS) for evaluating QoE, in which users are required to give their assessment according to contextual information. A detailed statistical analysis of our study shows the existence of non-trivial parameters impacting MOS (the type of device and the content type). We propose mathematical models to develop functional relationships between QoE and context information, which in turn permit us to estimate QoE. A video content optimization technique called MDASH (MOS Dynamic Adaptive Streaming over HTTP) is proposed, which improves the perceived QoE for different video sessions sharing the same local network, while taking QoE fairness among users as a leitmotiv. We also propose a utility-based approach for video delivery optimization, in which a global utility function is computed based on different constraints (e.g. target strategies coming from the actors of the delivery chain).
Carrillo Ramos, Angela Cristina. „Agents ubiquitaires pour un accès adapté aux systèmes d'information : Le Framework PUMAS“. PhD thesis, Université Joseph Fourier (Grenoble), 2007. http://tel.archives-ouvertes.fr/tel-00136931.
Through two proposals, the thesis work presented here attempts to answer this twofold problem. First, we designed and implemented a framework called PUMAS, which offers nomadic users access to information while taking the context of use into account. The approach we chose is agent-based. Thus, the architecture of PUMAS is composed of four Multi-Agent Systems (MAS), respectively dedicated to connecting to information systems, communication between users and information systems, information management, and information adaptation. Second, we developed a Contextual Profile Management System (SGPC) that contributes to adapting the information delivered to a nomadic user in three respects: i) a formalization of the notion of user preference that makes it possible to model the activities performed in the system, the expected results of these activities, and the way these results are presented; ii) a contextual matching algorithm that generates the contextual profile of a nomadic user from the context of use; and iii) a mechanism that manages conflicts that may arise between user preferences. Finally, the SGPC was integrated into PUMAS within the MAS dedicated to information adaptation.
Popineau, Fabrice. „Approche Logique de la Personnalisation dans les Environnements Informatiques pour l'Apprentissage Humain“. Electronic Thesis or Diss., université Paris-Saclay, 2023. http://www.theses.fr/2023UPASG001.
We present here a body of work dealing with learner adaptation in Technology Enhanced Learning (TEL) platforms. This work is based on a classical approach to artificial intelligence, essentially grounded in logic and an agent-based approach. We develop several arguments on the appropriateness of the situation calculus for driving these platforms. We propose a method of transforming logic programs to adapt the agent's behavior to the user, or to give a personality to the agent. We also discuss the possibility of integrating this approach into a MOOC platform, to accompany learners and to provide them with personalized recommendations.
Aubry, Willy. „Etude et mise en place d’une plateforme d’adaptation multiservice embarquée pour la gestion de flux multimédia à différents niveaux logiciels et matériels“. Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14678/document.
On the one hand, technology advances have led to the expansion of the handheld-device market. Thanks to this expansion, people are more and more connected and more and more data are exchanged over the Internet. On the other hand, this huge amount of data imposes drastic constraints in order to achieve sufficient quality, and the Internet is now showing its limits in assuring such quality. To answer today's limitations, a next-generation Internet is envisioned. This new network takes into account the content nature (video, audio, ...) and the context (network state, terminal capabilities, ...) to better manage its own resources. To this end, video manipulation is one of the key concepts highlighted in this emerging context. Video content is more and more consumed and at the same time requires more and more resources. Adapting videos to the network state (reducing the bitrate to match the available bandwidth) or to the terminal capabilities (screen size, supported codecs, ...) appears mandatory and is foreseen to take place in real time in networking devices such as home gateways. However, video adaptation is a resource-intensive task and must be implemented using hardware accelerators to meet the desired low-cost and real-time constraints. In this thesis, content- and context-awareness are first analyzed so as to be considered at the network side. Secondly, a generic low-cost video adaptation system is proposed and compared to existing solutions as a trade-off between system complexity and quality. Then, hardware design is tackled as this system is implemented in an FPGA-based architecture. Finally, this system is used to evaluate the indirect effects of video adaptation; energy consumption at the terminal side is reduced by reducing video characteristics, thus permitting an increased user experience for end-users.
Heil, Mikael. „Conception architecturale pour la tolérance aux fautes d'un système auto-organisé multi-noeuds en réseau à base de NoC reconfigurables“. Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0351.
The need for growing performance and reliability of embedded Systems-on-Chip (SoCs) is increasing constantly. To meet the requirements of applications that are becoming more and more complex, new architectural processing paradigms and communication structures, based in particular on self-adaptive and self-organizing structures, have emerged. These new computing systems integrate hundreds of computing or processing elements within a single chip (Multiprocessor Systems-on-Chip - MPSoC), featuring a high level of parallel processing while providing high flexibility and adaptability. The goal is to make possible the reconfiguration of the distributed processing that characterizes the evolving context of networked systems. Nowadays, the performance of these systems relies on autonomy and intelligence, allowing computing modules to be deployed and redeployed in real time according to the processing and computing-power demand, and on the communication medium and data exchange between interconnected processing elements, in order to provide bandwidth scalability and high efficiency for the potential parallelism of the available computing power of the MPSoC. Moreover, the emergence of partially reconfigurable FPGA technology allows the MPSoC to adapt its elements during operation in order to meet the system requirements. In this context, flexibility, computing-power and high-bandwidth requirements lead to a new approach to the design of self-organized and self-adaptive communication systems based on Networks-on-Chip (NoC). The aim is to allow the interconnection of a large number of elements in the same device while maintaining the fault-tolerance requirement and a compromise between the parallel processing capacity of the MPSoC, communication performance, interconnection resources, and the trade-off between performance and logic resources.
This thesis work aims to provide innovative architectural solutions for networked fault-tolerant MPSoCs based on FPGA technology and configured as a distributed and self-organized structure. The objective is to obtain performant and reliable systems-on-chip incorporating detection, localization and correction of errors in their reconfigurable or adaptive NoC structures, where the main difficulty lies in the identification of, and distinction between, real errors and adaptive behavior in the network nodes. More precisely, this work consists in designing a networked node based on a reconfigurable FPGA, integrating a dynamic or adaptive NoC capable of self-organization and self-test, in order to achieve maximum maintainability of system operation in a networked environment (WSN). In this work, we developed a reconfigurable multi-node system based on MPSoCs which can exchange and interact, allowing efficient task management and self-management of the fault-tolerance mechanisms. Different techniques are combined and used to identify and precisely locate faulty elements of such a structure, in order to correct or isolate them and thus prevent system failures. Validations through numerous hardware simulations, estimating the capacity to detect and locate sources of error within a network, are presented. Likewise, the synthesized logic systems incorporating the various proposed solutions are analyzed in terms of performance and logic resources in the case of FPGA technology.
Ranwez, Sylvie. „Composition Automatique de Documents Hypermédia Adaptatifs à partir d'Ontologies et de Requêtes Intentionnelles de l'Utilisateur“. PhD thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2000. http://tel.archives-ouvertes.fr/tel-00142722.
Chabert-Ranwez, Sylvie. „Composition Automatique de Documents Hypermédia Adaptatifs à partir d'Ontologies et de Requêtes Intentionnelles de l'Utilisateur“. PhD thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2000. http://tel.archives-ouvertes.fr/edutice-00000381.
Fontaine, Emeric. „Programmation d'espace intelligent par l'utilisateur final“. PhD thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00744415.
Guffroy, Marine. „Adaptation de méthodes d'évaluation dans le cadre de la conception d'une application numérique pour un jeune public avec troubles du spectre autistique : étude au cours de la conception et de l'évaluation de l'application çATED au sein d'une ULIS TED“. Thesis, Le Mans, 2017. http://www.theses.fr/2017LEMA3001/document.
Taking into account the specificities of the target audience and the contexts of use of an interactive application implies questioning the generally recommended methodological principles for application design and evaluation. In the case of a tool dedicated to a young audience with Autism Spectrum Disorders (ASD), communication difficulties and alterations in social interaction prevent the implementation of an "ordinary", user-centered design methodology. Beyond considering the established potential offered by digital technology for young people with ASD, the question is how to identify and implement the methods and techniques needed to evaluate the proposed prototypes, taking into account the characteristics of the audience and its everyday context within a school structure. The research presented in this thesis belongs to the field of human-machine interaction (HMI). It questions the methodological principles of user-centered design: How can evaluation objectives be met when verbal communication is not guaranteed? What role for the target user in the design-evaluation cycle? What roles for the humans and the specialized entourage? How can the use of tablet software be observed and analyzed in the daily context of the target audience? More specifically, the research concerns the adaptation of ordinary evaluation methods to the specific characteristics and behaviors of an extraordinary audience.
Zhang, Xun. „Contribution aux architectures adaptatives : etude de l'efficacité énergétique dans le cas des applications à parallélisme de données“. Thesis, Nancy 1, 2009. http://www.theses.fr/2009NAN10106/document.
My PhD project focuses on dynamic adaptive runtime parallelism and frequency-scaling techniques in coarse-grained reconfigurable hardware architectures. This new architectural approach offers a set of new features to increase the flexibility and scalability of applications in an evolving environment at reasonable energy cost. In this architecture, the parallelism granularity and the running frequency can be reconfigured using partial and dynamic reconfiguration. The adaptive method and architecture have been developed and tested on FPGA platforms. Measurements and results analysis based on a DWT application show that energy efficiency is dynamically adjustable using our approach. The main contribution of the research project is the development of an auto-adaptive method: partial and dynamic reconfiguration is used to reconfigure the parallelism granularity and running frequency of the application. The adaptive method of adjusting the parallelism granularity and running frequency is tested with the same application. We present results from implementations of a key image-processing application and analyze the behavior of this architecture on these applications.
Perez, Castañeda Oscar Leopoldo. „Modélisation des effets de la reconfiguration dynamique sur la flexibilité d'une architecture de traitement temps réel“. Nancy 1, 2007. http://www.theses.fr/2007NAN10139.
The principal contribution of wired logic compared to the microprocessor is a degree of parallelism that is several orders of magnitude higher. However, the configurability of these circuits involves an additional cost in terms of silicon area, delay and power consumption compared to ASIC circuits. The dynamic reconfiguration of FPGAs is often presented in the literature as a means of increasing their flexibility, to approach that of microprocessors, while preserving a level of performance that, if not close to that of ASICs, is higher than that of microprocessors. While performance is, in general, rather easy to quantify for a given application, the situation is quite different for flexibility: in the literature this metric has never been defined and quantified. Moreover, we did not find any definition of the flexibility of a data-processing architecture. The principal objective of this work is, on the one hand, to define and quantify flexibility and, on the other hand, to model the influence of dynamic reconfiguration on flexibility. We provide the designer with a metric as well as the basis of a methodology for deciding whether or not to choose this solution according to the designer's constraints and objectives.
Liu, Ting Weber Serge. „Optimisation par synthèse architecturale des méthodes de partitionnement temporel pour les circuits reconfigurables“. S. l. : Nancy 1, 2008. http://www.scd.uhp-nancy.fr/docnum/SCD_T_2008_0013_LIU.pdf.
Garcia, Samuel. „Architecture reconfigurable dynamiquement a grain fin pour le support d'un système d'exploitation temps réel“. Paris 6, 2012. http://www.theses.fr/2012PA066495.
Most anticipated future applications share four major characteristics: they all require increased computing capacity, they imply taking real time into account, they represent a big step in complexity compared with today's typical applications, and they will have to deal with the dynamic nature of the real physical world. Fine-grained dynamically reconfigurable architectures (FGDRA) can be seen as the next evolution of today's FPGAs, aiming at handling very dynamic and complex real-time applications while providing comparable potential computing power, thanks to the possibility of fine-tuning the execution architecture at a fine-grained level. To make this kind of device usable for real application designers, its complexity has to be abstracted by an operating-system layer and an adequate tool set. This combination would form an adequate solution to support future applications. This thesis presents an innovative FGDRA architecture called OLLAF. This architecture answers both technical issues of reconfigurable computing and practical problems of application designers. The whole architecture is designed to work in symbiosis with an operating system. The studies presented here focus more particularly on hardware task management mechanisms in a preemptive system. We first present our work on implementing such mechanisms using existing FPGAs, and show that those existing architectures have to evolve to efficiently support an operating system in a highly dynamic real-time situation. The OLLAF architecture is then explained and the hardware task management mechanism highlighted. We then present two studies showing that this approach constitutes a huge gain compared with existing platforms in terms of resulting operating-system overhead, even for static application cases where dynamic reconfiguration is used only for computing-resource sharing.
For highly dynamic real-time cases, we show that not only could it lower the overhead, but it would also support cases that existing devices simply cannot support.
Cheng, Kevin. „Reconfigurable self-organised systems : architecture and implementation“. Thesis, Metz, 2011. http://www.theses.fr/2011METZ039S/document.
Der volle Inhalt der QuelleIncreasing needs for computation power, flexibility and interoperability are making systems more and more difficult to integrate and to control. Handling the high number of possible configurations, alternative design decisions, or the integration of additional functionalities into a working system can no longer be done at the design stage alone. In this context, where the evolution of networked systems is extremely fast, different concepts are studied with the objective of providing more autonomy and more computing power. This work proposes a new approach for the use of reconfigurable hardware in a self-organised context. A concept and a working system are presented as Reconfigurable Self-Organised Systems (RSS). The proposed hardware architecture aims to study the impact of reconfigurable FPGA-based systems in a self-organised networked environment, and partial reconfiguration is used to implement hardware accelerators at runtime. The proposed system is designed to observe, at each level, the parameters that impact the performance of the networked self-adaptive nodes. The results presented here assess how reconfigurable computing can be used efficiently to design a complex networked computing system, and a review of the state of the art made it possible to highlight and formalise the characteristics of the proposed self-organised hardware concept. Its evaluation and the analysis of its performance were made possible by a custom board, the Potsdam Intelligent Camera System (PICSy), a complete implementation from the electronic board to the control application. To complete the work, measurements and observations allow an analysis of this realisation and contribute to the common knowledge
Pérez, Patricio Madain. „Stéréovision dense par traitement adaptatif temps réel : algorithmes et implantation“. Lille 1, 2005. https://ori-nuxeo.univ-lille1.fr/nuxeo/site/esupversions/0c4f5769-6f43-455c-849d-c34cc32f7181.
Der volle Inhalt der QuelleSbai, Hugo. „Système de vidéosurveillance intelligent et adaptatif, dans un environnement de type Fog/Cloud“. Thesis, Lille, 2018. http://www.theses.fr/2018LIL1I018.
Der volle Inhalt der QuelleCCTV systems use sophisticated cameras (network cameras, smart cameras) and computer servers for video recording in a fully digital system. They often integrate hundreds of cameras generating a huge amount of data, far beyond the monitoring capabilities of human agents. One of the most important modern challenges in this field is to scale an existing cloud-based video surveillance system with multiple heterogeneous smart cameras and adapt it to a Fog/Cloud architecture to improve performance without significant cost overhead. Recently, FPGAs have become more and more present in FCIoT (Fog-Cloud-IoT) platform architectures. These components are characterized by dynamic and partial reconfiguration modes, allowing platforms to adapt quickly to changes resulting from an event while increasing the available computing power. Today, such platforms present a number of serious scientific challenges, particularly in terms of deployment and positioning of fog nodes. This thesis proposes a video surveillance model composed of plug-and-play smart cameras equipped with dynamically reconfigurable FPGAs, on a hierarchical Fog/Cloud basis. In this highly dynamic and scalable system, both in terms of smart cameras (resources) and in terms of targets to track, we propose an automatic and optimized approach for camera authentication and the dynamic association of cameras with the fog components of the system. The proposed approach also includes a methodology for an optimal allocation of hardware trackers to the electronic resources available in the system, in order to maximize performance and minimize power consumption. All contributions have been validated with a full-scale prototype
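The allocation problem this abstract describes — mapping hardware trackers onto the limited resources of fog nodes — can be pictured with a minimal greedy sketch. All names and the greedy policy below are illustrative assumptions; the thesis's actual optimized allocation method is not reproduced here.

```python
def assign_trackers(trackers, nodes):
    """Greedy sketch: place each hardware tracker on the fog node with
    the most free logic resources that can still host it.

    trackers: {tracker_name: resource_need}
    nodes:    {node_name: resource_capacity}
    Returns {tracker: node}; trackers that fit nowhere are omitted."""
    free = dict(nodes)                     # remaining capacity per node
    placement = {}
    # Place the largest trackers first so big tasks are not starved
    for t, need in sorted(trackers.items(), key=lambda kv: -kv[1]):
        best = max((n for n in free if free[n] >= need),
                   key=lambda n: free[n], default=None)
        if best is not None:
            placement[t] = best
            free[best] -= need
    return placement
```

A real system would weigh power consumption and communication latency as well; this sketch only illustrates the capacity-constrained mapping at the heart of the problem.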
Vidal, Jorgiano. „Dynamic and partial reconfigurable embedded systems design with UML“. Lorient, 2010. http://www.theses.fr/2010LORIS203.
Der volle Inhalt der QuelleAdvances in reconfigurable technologies allow entire multiprocessor systems to be implemented in a single FPGA (Multiprocessor System on Programmable Chip, MPSoPC). In order to speed up the design time of such heterogeneous systems, new modelling techniques must be developed. Furthermore, dynamic execution is a key point for modern systems, i.e. systems that can partially change their behavior at run time in order to adjust their execution to the environment. UML (Unified Modeling Language) has been used for software modeling since its first version. Recently, with the new modeling concepts added in later versions (UML 2), it has become more and more suitable for hardware modeling. This thesis is a contribution to the MOPCOM project, in which we propose a set of modeling techniques for building complex embedded systems with UML. The modeling techniques proposed here capture the system to be built in one complete model. Moreover, we propose a set of transformations that allow the system to be generated automatically. Our approach allows the modelling of dynamic applications on reconfigurable platforms. Design-time reductions of up to 30% have been measured using our methodology
Liu, Ting. „Optimisation par synthèse architecturale des méthodes de partitionnement temporel pour les circuits reconfigurables“. Thesis, Nancy 1, 2008. http://www.theses.fr/2008NAN10013/document.
Der volle Inhalt der QuelleThe research work presented here concerns methodologies to assist the implementation of data-flow-graph algorithms on dynamically reconfigurable RSoC (Reconfigurable System on Chip) FPGA-based architectures. The main strategy consists in a design approach based simultaneously on dynamic reconfiguration (DR) and architectural synthesis (AS) in order to achieve the best Algorithm-Architecture Adequacy (A3). The methodology consists in identifying and extracting the parts of an application, described in the form of a DFG, in order to implement them either by temporal partitioning (TP) through successive partial reconfiguration, by AS, or by combining the two approaches. To develop our solution with a view to optimizing and finding a suitable compromise between the DR and AS approaches, we propose a parameter to evaluate the degree of inter-partition implementation based on shared functional units. To validate the proposed methodological strategy, we present the results of the implementation of our approach on two real-time applications. A comparative analysis of the implementation results illustrates the interest and the optimisation capability of our method, also for dynamic reconfiguration implementations of complex applications on RSoC
Lequay, Victor. „Une approche ascendante pour la gestion énergétique d'une Smart-Grid : modèle adaptatif et réactif fondé sur une architecture décentralisée pour un système générique centré sur l'utilisateur permettant un déploiement à grande échelle“. Thesis, Lyon, 2019. http://www.theses.fr/2019LYSE1304.
Der volle Inhalt der QuelleThe field of Energy Management Systems for Smart Grids has been extensively explored in recent years, with many different approaches described in the literature. In collaboration with our industrial partner Ubiant, which deploys smart-home solutions, we identified the need for a highly robust and scalable system that would exploit the flexibility of residential consumption to optimize energy use in the smart grid. At the same time, we observed that the majority of existing works focus on the management of production and storage only, and that none of the proposed architectures are fully decentralized. Our objective was then to design a dynamic and adaptive mechanism to leverage every available flexibility while ensuring the user's comfort and a fair distribution of the load-balancing effort, but also to offer a modular and open platform with which a large variety of devices, constraints and even algorithms could be interfaced. In this thesis we carried out (1) an evaluation of state-of-the-art techniques in real-time individual load forecasting, whose results led us to follow (2) a bottom-up, decentralized approach to distributed residential load shedding relying on a dynamic compensation mechanism to provide stable curtailment. On this basis, we then built (3) a generic user-centred platform for energy management in smart grids allowing the easy integration of multiple devices, quick adaptation to changing environments and constraints, and efficient deployment
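The fairness idea behind such residential load shedding — each home contributes to a grid-level curtailment target in proportion to the flexibility it offers, and never more — can be sketched as follows. Names and the proportional rule are hypothetical illustrations; the thesis's actual dynamic compensation mechanism is not reproduced here.

```python
def allocate_curtailment(target_kw, flexibilities):
    """Split a curtailment target (kW) across homes in proportion to the
    flexibility each one reports, capped by that flexibility.

    flexibilities: {home: max_kw_the_home_can_shed}"""
    total = sum(flexibilities.values())
    if total <= 0:
        return {h: 0.0 for h in flexibilities}
    # Proportional fair share; a home never sheds more than it offered.
    return {h: min(f, target_kw * f / total)
            for h, f in flexibilities.items()}
```

In a compensating scheme, any shortfall (a home failing to deliver its share) would be re-allocated among the remaining homes by calling the same function again on the residual target.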
Bruguier, Florent. „Méthodes de caractérisation et de surveillance des variations technologiques et environnementales pour systèmes reconfigurables adaptatifs“. Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2012. http://tel.archives-ouvertes.fr/tel-00965377.
Der volle Inhalt der QuelleFournier, Émilien. „Accélération matérielle de la vérification de sûreté et vivacité sur des architectures reconfigurables“. Electronic Thesis or Diss., Brest, École nationale supérieure de techniques avancées Bretagne, 2022. http://www.theses.fr/2022ENTA0006.
Der volle Inhalt der QuelleModel-Checking is an automated technique used in industry for verification, a major issue in the design of reliable systems, where performance and scalability are critical. Swarm verification improves scalability through a partial approach based on the concurrent execution of randomized analyses. Reconfigurable architectures promise significant performance gains. However, existing work suffers from a monolithic design that hinders the exploration of the opportunities offered by reconfigurable architectures; moreover, these studies are limited to safety verification. To adapt the verification strategy to the problem, this thesis first proposes a hardware verification framework that achieves, through a modular architecture, semantic and algorithmic genericity, illustrated by the integration of 3 specification languages and 6 algorithms. This framework enables efficiency studies of swarm algorithms, from which a scalable safety-verification core is obtained. The results, on a high-end FPGA, show gains of an order of magnitude compared to the state of the art. Finally, we propose the first hardware accelerator for both safety and liveness verification. The results show an average speed-up of 4875x compared to software
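The swarm principle the abstract builds on — many independent, differently seeded randomized explorations of the state space, any one of which may stumble on a violating state — can be illustrated with a minimal software sketch. The thesis implements this in hardware on FPGA; everything below (function names, walker count, step bound) is an illustrative assumption.

```python
import random

def swarm_search(initial, successors, violates, n_walkers=8, max_steps=200):
    """Swarm verification sketch: independent random walks over a state
    space, each with its own seed. Returns a counterexample trace to a
    violating state, or None (absence of a result proves nothing)."""
    for seed in range(n_walkers):
        rng = random.Random(seed)          # each walker explores differently
        state, trace = initial, [initial]
        for _ in range(max_steps):
            if violates(state):
                return trace               # this walker found a violation
            succ = successors(state)
            if not succ:
                break                      # dead end: try the next seed
            state = rng.choice(succ)       # randomized exploration step
            trace.append(state)
    return None
```

For example, on a toy counter system where each state `s` steps to `s+1` or `s+2` and states above a bound violate the property, every walker quickly produces a trace ending in a violating state.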
Marques, Nicolas. „Méthodologie et architecture adaptative pour le placement efficace de tâches matérielles de tailles variables sur des partitions reconfigurables“. Thesis, Université de Lorraine, 2012. http://www.theses.fr/2012LORR0139/document.
Der volle Inhalt der QuelleFPGA-based reconfigurable architectures can deliver appropriate solutions for several applications, as they allow part of the FPGA to be reconfigured while the rest of the circuit continues to run normally. These architectures, despite their improvements, still suffer from a lack of adaptability when confronted with applications consisting of variable-size hardware tasks. This heterogeneity may cause poor placements, leading to sub-optimal use of resources and therefore a decrease in system performance. The contribution of this thesis focuses on the problem of variable-size hardware task placement and the effective generation of reconfigurable regions. A methodology and an intermediate layer between the FPGA and the application are proposed to allow the effective placement of variable-size hardware tasks on reconfigurable partitions of predefined size. To validate the method, we propose an architecture based on partial reconfiguration in order to adapt the transcoding from one video compression format to another in a flexible and effective way. A study of reconfigurable region partitioning for the entropy encoder hardware tasks (CAVLC/VLC) is presented to show the contribution of partitioning, followed by an assessment of the gain obtained and of the method's additional costs
Feki, Oussama. „Contribution à l'implantation optimisée de l'estimateur de mouvement de la norme H.264 sur plates-formes multi composants par extension de la méthode AAA“. Thesis, Paris Est, 2015. http://www.theses.fr/2015PEST1009/document.
Der volle Inhalt der QuelleMixed architectures containing programmable and reconfigurable devices can provide the computing performance necessary to meet the constraints of real-time applications. But implementing and optimizing these applications on this kind of architecture is a complex, time-consuming task. In this context, we propose a rapid prototyping tool for this type of architecture, based on our extension of the Algorithm-Architecture Adequacy (AAA) methodology. It automatically performs optimized partitioning and scheduling of the application's operations on the target architecture's components, and generates the corresponding code. We used this tool to implement the motion estimator of H.264/AVC on an architecture composed of a Nios II processor and an Altera Stratix III FPGA, which allowed us to verify the correct operation of our tool and validate our automatic generator of mixed code
Dabellani, Éric. „Méthodologie de conception d'architectures reconfigurables dynamiquement, application au transcodage vidéo“. Thesis, Université de Lorraine, 2013. http://www.theses.fr/2013LORR0168/document.
Der volle Inhalt der QuelleDespite clear benefits in terms of flexibility and surface efficiency, dynamic reconfiguration of FPGAs is still finding it hard to break through into large industrial projects. One of the main reasons is the lack of means and methods for evaluating reconfigurable architectures. Worse, the main FPGA vendors do not provide official tools allowing developers to easily determine an optimal reconfiguration scheduling for a specific architecture. Within this framework, the research work described in this thesis proposes a methodology for modeling dynamically reconfigurable architectures based on SystemC. The proposed methodology allows designers to save significant time during the design phases of an application-specific reconfigurable architecture by providing an initial estimate of the performance and resources needed for its development. It also allows the development and validation of reconfiguration scheduling scenarios while respecting the real-time constraints of the given application. To validate our methodology on a real application, video transcoding IPs were developed and tested. This application consists in the realization of an H.264/MPEG-2 transcoder made self-adaptable through the use of dynamic reconfiguration. This work was conducted as part of the ARDMAHN project sponsored by the French National Research Agency (Agence Nationale de la Recherche) under reference ANR-09-SEGI-001
Hentati, Manel. „Reconfiguration dynamique partielle de décodeurs vidéo sur plateformes FPGA par une approche méthodologique RVC (Reconfigurable Video Coding)“. Rennes, INSA, 2012. http://www.theses.fr/2012ISAR0027.
Der volle Inhalt der QuelleThe main purpose of this PhD is to contribute to the design and implementation of a reconfigurable decoder using the MPEG-RVC standard. The MPEG-RVC standard, developed by MPEG, aims at providing a unified high-level specification of current and future MPEG video coding technologies using a dataflow model named RVC-CAL. This standard offers the means to overcome the lack of interoperability between the many video codecs deployed in the market. In this work, we propose a rapid prototyping methodology to provide an efficient and optimized implementation of RVC decoders on target hardware. Our design flow is based on using dynamic partial reconfiguration (DPR) to validate the reconfiguration approaches allowed by MPEG-RVC. With the DPR technique, a hardware module can be replaced by another one which has the same function or the same algorithm but a different architecture. This concept allows the designer to configure various decoders according to the data inputs or the requirements (latency, speed, power consumption, etc.). The use of MPEG-RVC and DPR improves the development process and the decoder performance. But DPR poses several problems, such as the placement of tasks and the fragmentation of the FPGA area, which influence application performance. Therefore, we need to define methods for the placement of hardware tasks on the FPGA. In this work, we propose an off-line placement approach based on a linear programming strategy to find the optimal placement of hardware tasks and to minimize resource utilization. The application of different data combinations and a comparison with a state-of-the-art method show the high performance of the proposed approach
Jovanovic, Slavisa. „Architecture reconfigurable de système embarqué auto-organisé“. Thesis, Nancy 1, 2009. http://www.theses.fr/2009NAN10099/document.
Der volle Inhalt der QuelleThe growing complexity of computing systems, mostly due to the rapid progress of Information Technology (IT) in the last decade, forces system designers to reorient their traditional design concepts towards new ones based on self-organizing and self-adaptive architectural solutions. On the one hand, these new architectural solutions should provide a system with sufficient computing power; on the other hand, they should provide great flexibility and adaptivity in order to cope with all the non-deterministic changes and events that may occur in the environment in which the system evolves. Within this framework, a reconfigurable MPSoC self-organizing architecture on FPGA reconfigurable technology is studied and developed in this PhD
Causo, Matteo. „Neuro-Inspired Energy-Efficient Computing Platforms“. Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10004/document.
Der volle Inhalt der QuelleBig Data highlights all the flaws of the conventional computing paradigm. Neuro-inspired computing and other data-centric paradigms instead treat Big Data as a resource for progress. In this dissertation, we adopt Hierarchical Temporal Memory (HTM) principles and theory as neuroscientific references, and we elaborate on how Bayesian Machine Learning (BML) leads apparently quite different neuro-inspired approaches to unify and meet our main objectives: (i) simplifying and enhancing BML algorithms and (ii) approaching neuro-inspired computing from an ultra-low-power perspective. In this way, we aim to bring intelligence close to data sources and to popularize BML on strictly constrained electronics such as portable, wearable and implantable devices. Nevertheless, BML algorithms demand optimization. Their naïve HW implementation is neither effective nor feasible because of the required memory, computing power and overall complexity. We propose a less complex, on-line, distributed non-parametric algorithm and show better results with respect to state-of-the-art solutions. We gain two orders of magnitude in complexity reduction through algorithm-level considerations and manipulations alone, and a further order of magnitude through traditional HW optimization techniques. In particular, we conceive a proof-of-concept on an FPGA platform for real-time stream analytics. Finally, we show that the latest findings in Machine Learning can be summarized into a generally valid algorithm that can be implemented in HW and optimized for strictly constrained applications
Raimbaud, Pierre. „La réalité virtuelle pour les besoins de l'industrie du bâtiment : guider la conception des interactions utilisateurs grâce à une méthodologie centrée sur les tâches“. Thesis, Paris, HESAM, 2020. http://www.theses.fr/2020HESAE012.
Der volle Inhalt der QuelleThe field of virtual reality (VR) has undergone significant development in recent years due to the growing maturity of the technology, which has allowed its diffusion in many domains, notably building construction. However, a problem resulting from such use of VR is that the interactions provided to users are often not sufficiently adapted to their needs: it is difficult to synthesize the building trades' expertise with VR expertise. A research question then arises: how can experts in a specific field, who have no VR expertise, define and obtain the specifications of a VR interaction technique autonomously? To answer this question, we propose a new task-oriented methodology for the design of VR interactions. It contains semi-automated systems that allow for the decomposition of the user task and for the determination of proposals for VR interaction techniques, i.e. the two types of specifications expected here. This methodology has been tested and evaluated on two case studies from the building sector. The results show that our methodology can be used autonomously by experts from the construction industry, and that they obtained specifications similar to those obtained by following a traditional user-centred design methodology
Pratomo, Istas. „Adaptive NoC for reconfigurable SoC“. Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00980066.
Der volle Inhalt der QuelleMabrouk, Lhoussein. „Contribution à l'implémentation des algorithmes de vision avec parallélisme massif de données sur les architectures hétérogènes CPU/GPU“. Thesis, Université Grenoble Alpes, 2020. http://www.theses.fr/2020GRALT009.
Der volle Inhalt der QuelleMixture of Gaussians (MoG) and Compressive Sensing (CS) are two common algorithms in many image and audio processing systems. Their combination, CS-MoG, was recently used for detecting mobile objects through background subtraction. However, the implementations of CS-MoG presented in previous works do not take full advantage of the evolution of heterogeneous architectures. This thesis proposes two contributions for the efficient implementation of CS-MoG on heterogeneous parallel CPU/GPU architectures. These technologies nowadays offer great programming flexibility, which allows performance as well as energy efficiency to be optimized. Our first contribution consists in offering the best acceleration-precision compromise on CPU and GPU. The second is a new adaptive approach for data partitioning that fully exploits both CPUs and GPUs: whatever their relative performance, this approach, called the Optimal Data Distribution Cursor (ODDC), aims to ensure automatic balancing of the computational load by estimating the optimal proportion of the data to assign to each processor, taking into account its computing capacity. The partitioning is updated online, which makes it possible to account for any influence of irregularity in the content of the processed images. In terms of mobile objects, we mainly target vehicles, whose detection presents several challenges, but to generalize our approach we also test scenes containing other types of targets. Experimental results on different platforms and datasets show that the combination of our two contributions makes it possible to reach 98% of the maximum possible performance on these platforms. These results can also benefit other algorithms in which calculations are performed independently on small grains of data
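The cursor idea described in this abstract can be pictured with a short sketch: after each frame, the share of pixels sent to the GPU is moved toward the split that would equalize the measured CPU and GPU times. The function name and the smoothing scheme are illustrative assumptions, not the thesis's actual update rule.

```python
def update_split(split, t_cpu, t_gpu, smoothing=0.5):
    """Recompute the fraction of each frame's data assigned to the GPU,
    from the times (t_cpu, t_gpu > 0) each device took on the last frame.
    In practice `split` is kept strictly between 0 and 1 so both devices
    keep receiving work and their speeds remain observable."""
    # Throughput of each device per unit of work it received
    speed_cpu = (1.0 - split) / t_cpu
    speed_gpu = split / t_gpu
    # Ideal cursor position: work proportional to measured speed
    target = speed_gpu / (speed_cpu + speed_gpu)
    # Smooth the move so irregular image content does not cause oscillation
    return (1.0 - smoothing) * split + smoothing * target
```

With equal shares and a CPU twice as slow as the GPU, the cursor moves toward giving the GPU a larger share on the next frame; calling it once per frame keeps the two devices' finish times converging.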
Bey, Christophe. „Gestion des ressources cognitives et stratégies d'adaptation court terme chez les pilotes d'aéronefs“. Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0428/document.
Der volle Inhalt der QuelleThe aviation industry has for many years pursued the objective of an optimum level of safetyin the air transport sector. With regard to military aviation, more precisely tactical, thispriority is coupled with an increasingly high and polymorphic search for performance. Whatcharacterize this type of aviation is the relationship between the performance pursued and theaccepted risks. It depends essentially on the context and the stakes of the missions to becarried out.The human factor approach is a major leverage for achieving this challenge. Thus, within theconstrained domain of aeronautics, the design and development of tools to assist crewcognition remains a prospect for the future, even if pilot training also becomes a majorchallenge for the coming years. In this context, the management of cognitive resources, and inparticular the specific management strategies put in place by the pilots, are central to thedecision-making process under constraints.In a research and engineering approach in cognition, we undertook a study involving pilotsand allowing the understanding of these mechanisms as well as the production ofrecommendations for the design of tools to help manage their cognitive resources. On thebasis of the analysis of feedback, and results of a preliminary experimental approach, we havebuilt a protocol to highlight the strategies implemented by the pilots in the context of anactivity during the descent and the final approach on the Clermont-Ferrand airport with acritical breakdown. The experimental results reconciled with our understanding hypotheseson the management of cognitive resources and management strategies, complete our analysisand recommendations for a tool to help manage the resources of the pilots
Bollengier, Théotime. „Du prototypage à l’exploitation d’overlays FPGA“. Thesis, Brest, École nationale supérieure de techniques avancées Bretagne, 2018. http://www.theses.fr/2018ENTA0003/document.
Der volle Inhalt der QuelleDue to their reconfigurability and the performance they offer, FPGAs are good candidates for accelerating applications in the cloud. However, FPGAs have some features that hinder their use in the cloud as well as their adoption by customers: first, FPGA programming is done at a low level and requires expertise that typical cloud clients do not necessarily have; second, FPGAs do not have native mechanisms allowing them to easily fit into the dynamic execution model of the cloud. In this work, we propose to use overlay architectures to facilitate FPGA adoption, integration, and operation in the cloud. Overlays are reconfigurable architectures synthesized on FPGAs. As hardware abstraction layers placed between the FPGA and applications, overlays raise the abstraction level of the execution model presented to applications and users, and implement mechanisms making them fit into a cloud infrastructure. This work presents a vertical approach addressing all aspects of overlay operation in the cloud as reconfigurable accelerators programmable by tenants: from designing and implementing overlays, integrating them on commercial FPGA platforms, and setting up their operating mechanisms, to developing their programming tools. The environment developed in this work is complete, modular and extensible; it is partially based on several existing tools, and demonstrates the feasibility of our approach
Hannachi, Marwa. „Placement des tâches matérielles de tailles variables sur des architectures reconfigurables dynamiquement et partiellement“. Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0297/document.
Der volle Inhalt der QuelleAdaptive systems based on Field-Programmable Gate Array (FPGA) architectures can benefit greatly from the high degree of flexibility offered by dynamic partial reconfiguration (DPR). Thanks to DPR, the hardware tasks composing an adaptive system can be allocated and relocated on demand or in response to a dynamically changing environment. Existing design flows and commercial tools have evolved to meet the requirements of reconfigurable architectures but remain limited in functionality: they do not allow efficient placement and relocation of variable-size hardware tasks. The main objective of this thesis is to propose a new methodology and new approaches that ease the design phase of an adaptive, reconfigurable system and make it operational, valid, optimized and adapted to dynamic changes in the environment. The first contribution of this thesis deals with the relocation of variable-size hardware tasks. A design methodology is proposed to address a major problem of relocation mechanisms: storing a single configuration bitstream, in order to reduce memory requirements and increase the reusability of the generated hardware modules. A reconfigurable-region partitioning technique is applied in this relocation methodology to increase the efficiency of hardware resource use in the case of reconfigurable tasks of variable size. The methodology also takes into account communication between the different reconfigurable regions and the static region. To validate the design method, several case studies are implemented; this validation shows an efficient use of hardware resources and a significant reduction in reconfiguration time. The second part of this thesis presents and details mathematical formulations to automate the floorplanning of reconfigurable regions in FPGAs.
The algorithms presented in this thesis are based on mixed-integer linear programming (MILP). They automatically define the location, size and shape of the dynamically reconfigurable region. In this research we mainly aim to satisfy the placement constraints of the reconfigurable zones and those related to relocation. In addition, we consider the optimization of hardware resources in the FPGA, taking into account tasks of variable size. Finally, an evaluation of the proposed approach is presented
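As a rough illustration of what such a floorplanner must decide — the location, size and shape of a region under placement constraints — here is a toy exhaustive search over axis-aligned rectangles on a resource grid. The thesis formulates this as an MILP, which scales far better than the brute force shown here; all names and the grid model below are hypothetical.

```python
def place_region(fpga_w, fpga_h, needed, forbidden=frozenset()):
    """Toy stand-in for an MILP floorplanner: enumerate rectangles on a
    fpga_w x fpga_h cell grid, keep those providing at least `needed`
    resource cells while avoiding `forbidden` cells (e.g. static logic),
    and return the one wasting the fewest cells (ties broken by position)."""
    best = None
    for x in range(fpga_w):
        for y in range(fpga_h):
            for w in range(1, fpga_w - x + 1):
                for h in range(1, fpga_h - y + 1):
                    if w * h < needed:
                        continue          # too small for the task
                    cells = {(x + i, y + j)
                             for i in range(w) for j in range(h)}
                    if cells & forbidden:
                        continue          # overlaps reserved resources
                    key = (w * h - needed, x, y, w, h)  # minimize waste
                    if best is None or key < best:
                        best = key
    if best is None:
        return None                       # no feasible region exists
    waste, x, y, w, h = best
    return {"x": x, "y": y, "w": w, "h": h, "waste": waste}
```

An MILP solver reaches the same optimum by encoding the rectangle coordinates as integer variables and the non-overlap and capacity conditions as linear constraints, instead of enumerating every candidate.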
Ochoa, Ruiz Gilberto. „A high-level methodology for automatically generating dynamically reconfigurable systems using IP-XACT and the UML MARTE profile“. Phd thesis, Université de Bourgogne, 2013. http://tel.archives-ouvertes.fr/tel-00932118.
Der volle Inhalt der Quelle