Dissertations / Theses on the topic 'Core Reconfiguration'

Consult the top 21 dissertations / theses for your research on the topic 'Core Reconfiguration.'

1

Ballagh, Jonathan Bartlett. "An FPGA-based Run-time Reconfigurable 2-D Discrete Wavelet Transform Core." Thesis, Virginia Tech, 2001. http://hdl.handle.net/10919/33649.

Abstract:
FPGAs provide an ideal template for run-time reconfigurable (RTR) designs. Only recently have RTR-enabling design tools that bypass the traditional synthesis and bitstream generation process for FPGAs become available. The JBits tool suite is an environment that provides support for RTR designs on Xilinx Virtex and 4K devices. This research provides a comprehensive design process description of a two-dimensional discrete wavelet transform (DWT) core using the JBits run-time reconfigurable FPGA design tool suite. Several aspects of the design process are discussed, including implementation, simulation, debugging, and hardware interfacing to a reconfigurable computing platform. The DWT lends itself to a straightforward implementation in hardware, requiring relatively simple logic for control and address generation circuitry. Through the application of RTR techniques to the DWT, this research attempts to exploit certain advantages that are unobtainable with static implementations. Performance results of the DWT core are presented, including speed of operation, resource consumption, and reconfiguration overhead times.
Master of Science
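As a software illustration of the transform such a core computes, here is a one-level, separable 2-D Haar DWT in NumPy. This is a rough sketch only: the thesis implements the transform as run-time reconfigurable FPGA logic via JBits, and the Haar filter choice and all names below are assumptions, not taken from the work.

```python
# Illustrative sketch only: a one-level, separable 2-D Haar DWT in NumPy.
import numpy as np

def haar_1d(x):
    """One Haar level: pairwise averages (lowpass) and differences (highpass)."""
    a = (x[0::2] + x[1::2]) / 2.0
    d = (x[0::2] - x[1::2]) / 2.0
    return np.concatenate([a, d])

def dwt2_level(img):
    """Apply the 1-D transform to every row, then to every column."""
    rows = np.apply_along_axis(haar_1d, 1, img)
    return np.apply_along_axis(haar_1d, 0, rows)

img = np.arange(64, dtype=float).reshape(8, 8)
coeffs = dwt2_level(img)      # quadrants: LL (top-left), LH, HL, HH
print(coeffs[:4, :4])         # LL subband: a coarse 4x4 approximation
```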
2

BALBONI, Marco. "NoC-Centric Partitioning and Reconfiguration Technology for the Efficient Sharing of General-Purpose Programmable Many-core Accelerators." Doctoral thesis, Università degli studi di Ferrara, 2016. http://hdl.handle.net/11392/2403510.

Abstract:
Over the last few decades, unprecedented technological growth has underpinned the rise of embedded systems, with Moore's Law as the dominant factor sustaining this trend. Today an ever greater number of cores can be integrated on the same die, marking the transition from the state of the art represented by multi-core chips to the new many-core chip design paradigms. These many-core chips have a twofold goal: to deliver high computational performance and to increase hardware efficiency in terms of OPS/Watt. Despite their extremely high computing power, the complexity of these new chips raises numerous challenges that designers are facing on both the hardware and the software side, focused above all on runtime management of the computing fabric. The challenge addressed in this thesis is twofold and centred on fully exploiting the potential of these heterogeneous many-core architectures. On the one hand, software parallelism does not scale at the same rate as hardware parallelism, so one problem is how to share the computational resources among a batch of concurrent applications. On the other hand, the management tasks of the many-core system become fundamental runtime operations, which need to be executed transparently and, at the same time, without suspending the computation in progress on the system. This thesis provides a complete set of design methods aimed at mastering the runtime complexity of feature-rich many-core accelerators, relying on hardware extensions of the on-chip interconnection network (Network-on-Chip, NoC). The key idea at the core of this thesis is to exploit a Space-Division Multiplexing (SDM) strategy to schedule the execution of applications that require simultaneous acceleration on the same array of homogeneous computing tiles, thus enabling efficient exploitation of the available hardware resources. The most advanced application of this idea is the virtualization of the embedded system controlling heterogeneous computing architectures, a scenario in which multiple virtual machines active on the same host processor may want to offload part of the computation to a programmable many-core accelerator. In this context, efficient virtualization implies flexible partitioning of computational and memory resources, isolation between concurrent applications for security reasons, and the ability to reconfigure at runtime to adapt to different workloads. While resource management should be the job of a software "control tower" (hypervisor), partitioning, isolation, and reconfiguration need to be assisted in hardware, especially in the platform integration infrastructure, which is the communication architecture. The first contribution of this thesis is the validation of this new resource-sharing paradigm based on the SDM approach, starting with a comparison of SDM against the traditionally used approach based on Time-Division Multiplexing (TDM). To evaluate the different strategies, this thesis uses parallelized image-processing benchmarks whose execution is managed by an optimized version of the OpenMP Runtime Environment, needed to enable their parallel execution. The benchmarks run on different simulation environments (VirtualSoC and gem5), both of which required customization to add the functionality needed to simulate a General-Purpose Programmable Accelerator (GPPA). As a result, this thesis captures the performance impact of parallelism, of partition size and shape (the number of computational clusters reserved for an application and their position in the many-core fabric), and of different memory configuration settings. The second main contribution of the thesis is to enable highly dynamic management of the many-core accelerator's resources. The flexible sharing strategy essentially depends on the ability to reconfigure the routing function of the NoC (which determines how packets are forwarded) at runtime, so this thesis aims at a fast, scalable routing-reconfiguration mechanism with minimal perturbation of the background traffic. A centralized solution to the problem is provided first, and finally a fully distributed one, evaluating the area and performance implications through advanced FPGA prototyping. This contribution paves the way for future systems able to reconfigure themselves at a very fine grain, adapting to varying workloads, as well as for online selective testing of components that remains transparent to the running applications. Furthermore, this thesis extends the developed SDM strategy to more futuristic systems characterized by the integration of emerging technologies into the many-core fabric. In particular, it focuses on the integration of optical (photonic) technology and on the co-design of reconfiguration and partitioning features of programmable accelerators, with the main requirement of minimizing the static power overhead of optical NoCs. This result is achieved by reusing the same laser sources across different computing partitions. Finally, this thesis re-architects the complete hierarchical communication infrastructure, proposing a template for a photonically integrated heterogeneous parallel computing architecture and arriving at a hybrid interconnect fabric that opens the way to future research.
During the last few decades an unprecedented technological growth has been at the center of embedded systems design, with Moore's Law being the leading factor of this trend. Today, in fact, an ever increasing number of cores can be integrated on the same die, marking the transition from state-of-the-art multi-core chips to the new many-core design paradigm. The aim of such many-core chips is twofold: provide high computing performance and increase the energy efficiency of the hardware in terms of OPS/Watt. Despite the extraordinarily high computing power, the complexity of many-core chips opens the door to several challenges for designers, who today face the huge intricacy of both hardware and software, trying to unmask the best solutions to exploit the potential of these heterogeneous many-core architectures. This thesis provides a whole set of design methods to enable and manage the runtime heterogeneity of feature-rich, industry-ready tile-based Networks-on-Chip, and it focuses on virtualization techniques with the goal of mitigating, and overcoming where possible, some of the challenges introduced by the many-core design paradigm. The key idea is to exploit a Space-Division Multiplexing (SDM) strategy to schedule the execution of applications that require acceleration, or of multiple active Virtual Machines, enabling effective virtualization by means of resource sharing and relying on both hardware and software support for this new, highly dynamic environment, thus efficiently exploiting the highly parallel hardware of many-core chips. Virtualization implies flexible partitioning of resources and isolation for protection; it requires a software control tower (hypervisor), but the proper course of action, following the hypervisor, must be taken by the on-chip network (NoC), which is the communication infrastructure best suited to many-core architectures and which is also becoming the real system integration and control framework. The resource-management concept depends mainly on the runtime reconfiguration capability of the NoC routing function, so the first contribution of this thesis tackles this challenge, with the final outcome of a distributed, fast, and scalable reconfiguration mechanism with minimum perturbation of the background traffic; it finally underwent FPGA prototyping, allowing area overhead and critical path to be compared. Another main contribution of my work, related to scheduling the execution of several applications on the many-core, is the comparison of an SDM approach with a TDM one. To evaluate the different strategies I rely on parallelized image-processing benchmarks, whose execution is managed by an optimized version of an OpenMP runtime, needed to enable their parallel execution. I run the benchmarks on different simulation environments (VirtualSoC and gem5), customized and enhanced with new functionalities to emulate a General-Purpose Programmable Accelerator, thus studying the impact on performance of parallelism, of the dimensions and shapes of partitions (the number of computational clusters reserved and their position), and of the memory configuration. Finally, I also focus on emerging technologies, in particular on optical NoCs and a partitioning strategy proposed to decrease their static power consumption, tearing down unused laser sources and relying on re-use of the same wavelengths. I also re-architect the communication infrastructure in a template GPPA architecture, coming up with a hybrid interconnect fabric and thus proposing the first assessment of optical interconnect technology in the context of these devices.
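The space-division idea above can be illustrated with a toy allocator that carves private rectangular tile partitions out of a many-core fabric for concurrent applications. The grid size, request format, and greedy shelf policy are invented for illustration and are not the thesis's algorithms.

```python
# Minimal SDM sketch: each application gets its own rectangle of tiles,
# instead of time-sharing the whole array (all names/sizes are assumptions).
def carve_partitions(grid_w, grid_h, requests):
    """Greedy shelf allocation of (app, tiles_w, tiles_h) rectangles."""
    placements, x, y, shelf_h = {}, 0, 0, 0
    for app, w, h in requests:
        if x + w > grid_w:                 # row full: start a new shelf of tiles
            x, y, shelf_h = 0, y + shelf_h, 0
        if y + h > grid_h:
            raise ValueError(f"no space left for {app}")
        placements[app] = [(x + i, y + j) for i in range(w) for j in range(h)]
        x, shelf_h = x + w, max(shelf_h, h)
    return placements

# Three applications share an 8x8 many-core fabric side by side.
parts = carve_partitions(8, 8, [("fft", 4, 4), ("jpeg", 4, 4), ("cnn", 8, 2)])
for app, tiles in parts.items():
    print(app, "->", len(tiles), "tiles, origin", tiles[0])
```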
3

Khuat, Quang Hai. "Definition and evaluation of spatio-temporal scheduling strategies for 3D multi-core heterogeneous architectures." Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S007/document.

Abstract:
Stacking a multiprocessor layer (MPSoC) and an FPGA layer to form a three-dimensional reconfigurable system-on-chip (3DRSoC) is a promising solution that offers a high level of flexibility by adapting the architecture to the targeted applications. For an application running on such a system, one of the main challenges lies in the high-level management of tasks. This management is performed by the scheduling service of the operating system, which must be able to determine, during the execution of the application, which task is executed in software and/or hardware, when (temporal dimension), and on which resources (spatial dimension, i.e. on which processor or which region of the FPGA) in order to achieve high system performance. In this thesis, we propose spatio-temporal scheduling strategies for 3DRSoC architectures. The first strategy decides whether a hardware task and a software task need to be placed face-to-face so that the communication cost between them is minimized. The second strategy aims to minimize the overall execution time of the application; it exploits the presence of processors in the MPSoC layer to anticipate, at run-time, the execution of a software task when its hardware version cannot be allocated on the FPGA. Finally, a graphical simulation tool was developed to verify the proper functioning of the developed strategies and to allow us to produce results.
Stacking a multiprocessor (MPSoC) layer and an FPGA layer to form a 3D Reconfigurable System-on-Chip (3DRSoC) is a promising solution giving a high level of flexibility in adapting the architecture to the targeted application. For an application defined as a graph of parallel tasks running on the 3DRSoC system, one of the main challenges comes from the high-level management of tasks. This management is done by the scheduling service of the operating system, and it must be able to determine, on the fly, which task should run in software and/or hardware, when (temporal dimension) and where (spatial dimension, i.e. on which processor or which area of the FPGA) in order to achieve high system performance. In this thesis, we propose online spatio-temporal scheduling strategies for 3DRSoCs. The first strategy decides, during task scheduling, whether a SW task and a HW task need to communicate face-to-face so that the communication cost between them is minimized. The second strategy aims at minimizing the overall execution time of the application. It exploits the presence of processors in the MPSoC layer in order to anticipate, at run-time, the SW execution of a task when its HW version cannot be allocated to the FPGA. Finally, a graphical simulation tool has been developed to verify the proper functioning of the developed strategies and to enable us to produce results.
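A minimal sketch of the second strategy described above: try to place a task's hardware version on free FPGA area, and fall back to anticipated software execution on an MPSoC processor when it does not fit. The task parameters and the simple area model are assumptions made for illustration.

```python
# Toy spatio-temporal dispatcher: HW placement first, SW anticipation otherwise.
import heapq

def schedule(tasks, fpga_area, n_cpus):
    """tasks: (name, hw_area, hw_time, sw_time). Returns per-task decisions."""
    cpu_free = [0.0] * n_cpus          # next free time of each processor
    heapq.heapify(cpu_free)
    area_left, plan = fpga_area, []
    for name, hw_area, hw_time, sw_time in tasks:
        if hw_area <= area_left:       # HW version fits: allocate FPGA region
            area_left -= hw_area
            plan.append((name, "HW", hw_time))
        else:                          # fall back to software, earliest CPU
            start = heapq.heappop(cpu_free)
            heapq.heappush(cpu_free, start + sw_time)
            plan.append((name, "SW", start + sw_time))
    return plan

print(schedule([("t1", 40, 2.0, 9.0), ("t2", 50, 3.0, 8.0),
                ("t3", 30, 1.5, 5.0)], fpga_area=70, n_cpus=2))
```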
4

Gammoudi, Aymen. "Stratégie de placement et d'ordonnancement de taches logicielles pour architectures reconfigurables sous contrainte énergétique." Thesis, Rennes 1, 2018. http://www.theses.fr/2018REN1S030/document.

Abstract:
The design of embedded real-time systems is expanding with the growing integration of critical functionalities for monitoring applications, notably in the biomedical, environmental, and home-automation domains. The development of these systems must address various challenges in terms of minimizing energy consumption. Managing such fully autonomous embedded devices requires solving several problems related to the amount of energy available in the battery, to the real-time scheduling of tasks that must complete before their deadlines, to reconfiguration scenarios, particularly when tasks are added, and to the communication constraint needed to guarantee message exchange between processors, so as to ensure lasting autonomy until the next recharge while maintaining an acceptable quality of service of the processing system. To address this problem, we propose a task placement and scheduling strategy for executing real-time applications on an architecture containing heterogeneous cores. In this thesis, we chose to tackle this problem incrementally, progressively dealing with the issues related to real-time, energy, and communication constraints. First, we focus on task scheduling on a single-core architecture. We propose a scheduling strategy based on grouping tasks into packs, so that new task parameters can easily be computed to restore the feasibility of the system. We then extended it to scheduling on a homogeneous multi-core architecture. Finally, this was further extended to reach the main objective: task scheduling for heterogeneous architectures. The idea is to progressively take into account increasingly complex execution constraints. We formalize all the problems as ILP formulations in order to produce optimal results, so that our proposed solutions can be positioned against the optimal solutions produced by a solver and against other state-of-the-art algorithms. Moreover, simulation-based validation of the proposed strategies shows that they yield an appreciable gain with respect to criteria considered important in embedded systems, notably the inter-core communication cost and the task rejection rate.
The design of embedded real-time systems is developing more and more with the increasing integration of critical functionalities for monitoring applications, particularly in the biomedical, environmental, and home-automation domains. The development of these systems faces various challenges, particularly in terms of minimizing energy consumption. Managing such autonomous embedded devices requires solving various problems related to the amount of energy available in the battery, to the real-time scheduling of tasks that must be executed before their deadlines, to the reconfiguration scenarios, especially in the case of adding tasks, and to the communication constraint needed to ensure message exchange between cores, so as to guarantee a lasting autonomy until the next recharge while maintaining an acceptable level of quality of service for the processing system. To address this problem, we propose in this work a new strategy for the placement and scheduling of tasks to execute real-time applications on an architecture containing heterogeneous cores. In this thesis, we have chosen to tackle this problem in an incremental manner, in order to deal progressively with the problems related to real-time, energy, and communication constraints. First of all, we are particularly interested in the scheduling of tasks for a single-core architecture. We propose a new scheduling strategy based on grouping tasks into packs to calculate the new task parameters in order to restore system feasibility. We then extended it to address task scheduling on a homogeneous multi-core architecture. Finally, an extension of the latter achieves the main objective, which is the scheduling of tasks for heterogeneous architectures. The idea is to gradually take into account constraints that are more and more complex. We formalize the proposed strategy as an optimization problem using integer linear programming (ILP) and compare the proposed solutions with the optimal results provided by the CPLEX solver. In addition, the validation by simulation of the proposed strategies shows that they generate a respectable gain with respect to the criteria considered important in embedded systems, in particular the cost of communication between cores and the rejection rate of new tasks.
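The ILP view taken above can be illustrated with a toy placement model: assign tasks to heterogeneous cores under a utilization bound while minimizing inter-core communication cost. This sketch uses PuLP's bundled CBC solver (requires `pip install pulp`) in place of the CPLEX solver used in the thesis, and all task data are invented.

```python
# Toy ILP: place tasks on cores, utilization <= 1 per core, minimize the
# communication cost paid whenever a communicating pair is split.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary

tasks = {"t1": 0.4, "t2": 0.5, "t3": 0.3}      # utilization demand per task
cores = ["big", "little"]
comm = [("t1", "t2", 10), ("t2", "t3", 4)]     # cost if the pair is split

x = {(t, c): LpVariable(f"x_{t}_{c}", cat=LpBinary) for t in tasks for c in cores}
s = {(a, b): LpVariable(f"split_{a}_{b}", cat=LpBinary) for a, b, _ in comm}

prob = LpProblem("placement", LpMinimize)
prob += lpSum(w * s[a, b] for a, b, w in comm)             # communication cost
for t in tasks:
    prob += lpSum(x[t, c] for c in cores) == 1             # each task placed once
for c in cores:
    prob += lpSum(tasks[t] * x[t, c] for t in tasks) <= 1  # schedulability bound
for a, b, _ in comm:                                       # s=1 if cores differ
    for c in cores:
        prob += x[a, c] - x[b, c] <= s[a, b]
prob.solve()
print({t: next(c for c in cores if x[t, c].value() == 1) for t in tasks})
```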
5

Fuguet, Tortolero César. "Introduction de mécanismes de tolérance aux pannes franches dans les architectures de processeur « many-core » à mémoire partagée cohérente." Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066462/document.

Abstract:
The continuous increase in the computing power required by applications such as cryptography, simulation, and signal processing has driven the internal structure of processors toward massively parallel ("many-core") architectures. These architectures may contain hundreds or even thousands of cores in order to provide significant computing power with reasonable energy consumption. Nevertheless, the high transistor density makes these architectures very susceptible to hardware faults. Increasing variability in the manufacturing process and in transistor stress factors degrades both the manufacturing yield and the lifetime of the chips. We therefore propose a complete mechanism for tolerating permanent faults, allowing cache-coherent shared-memory many-core architectures to operate in a degraded mode. This mechanism relies on embedded software distributed in on-chip memories (firmware), which is executed by the cores at each processor start-up. This software implements several distributed algorithms that locate the faulty components (cores, memory banks, and network-on-chip routers), reconfigure the hardware architecture, and provide the operating system with a map of the functional hardware infrastructure. The mechanism handles both manufacturing defects and aging faults occurring after the chip has been deployed in the field. Our proposal is evaluated using a cycle-accurate virtual prototype of an existing many-core architecture.
The ever-increasing performance demands of applications such as cryptography, scientific simulation, network packet dispatching, signal processing or even general-purpose computing have made many-core architectures a necessary trend in processor design. These architectures can have hundreds or thousands of processor cores, so as to provide important computational throughput with a reasonable power consumption. However, their high transistor density makes many-core architectures more prone to hardware failures. Increasing variability in the fabrication process and in the stress factors of transistors impacts both the manufacturing yield and the lifetime. A potential solution to this problem is the introduction of fault-tolerance mechanisms allowing the processor to function in a degraded mode despite the presence of defective internal components. We propose a complete in-the-field reconfiguration-based permanent-failure recovery mechanism for shared-memory many-core processors. This mechanism is based on a firmware (stored in distributed on-chip read-only memories) executed at each hardware reset by the internal processor cores without any external intervention. It consists of distributed software procedures which locate the faulty components (cores, memory banks, and network-on-chip routers), reconfigure the hardware architecture, and provide a description of the functional hardware infrastructure to the operating system. Our proposal is evaluated using a cycle-accurate SystemC virtual prototype of an existing many-core architecture. We evaluate both its latency and its silicon cost.
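A behavioral sketch of the firmware idea above: starting from a boot tile, explore the mesh NoC while avoiding routers diagnosed as faulty, and hand the operating system a map of the reachable, functional tiles. The topology, fault sets, and function names are invented; the real mechanism runs as distributed on-chip software at reset.

```python
# Toy boot-time exploration of a w x h mesh with faulty routers/cores.
from collections import deque

def map_functional(w, h, faulty_routers, faulty_cores, boot=(0, 0)):
    seen, ok = {boot}, []
    queue = deque([boot])
    while queue:
        x, y = queue.popleft()
        if (x, y) not in faulty_cores:     # router works; core may still be bad
            ok.append((x, y))
        for nx, ny in ((x+1, y), (x-1, y), (x, y+1), (x, y-1)):
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in seen \
               and (nx, ny) not in faulty_routers:   # never route through faults
                seen.add((nx, ny))
                queue.append((nx, ny))
    return ok                              # degraded-mode resource map

print(map_functional(4, 4, faulty_routers={(1, 1)}, faulty_cores={(3, 3)}))
```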
6

Grand, Michaël. "Conception d’un crypto-système reconfigurable pour la radio logicielle sécurisée." Thesis, Bordeaux 1, 2011. http://www.theses.fr/2011BOR14388/document.

Abstract:
The research detailed in this document concerns the design and implementation of a hardware component acting as the cryptographic subsystem of a secure software-defined radio. From the early 1990s, radio systems gradually evolved from traditional radio to software-defined radio. The development of software-defined radio enabled the integration of an ever greater number of communication standards on a single hardware platform. The concrete realization of a secure software-defined radio confronts its designer with numerous problems that can be summarized by the following question: how can a maximum number of communication standards be implemented on a single hardware and software platform? This document focuses in particular on the implementation of the cryptographic standards intended to protect radio communications. Ideally, the solution to this problem would rely exclusively on digital processors. However, cryptographic algorithms usually require so much computing power that a software implementation is not feasible. It follows that a software-defined radio must sometimes integrate dedicated hardware components, whose use conflicts with the flexibility that characterizes software-defined radios. In recent years, however, the development of FPGA technology has changed the picture. The latest FPGAs embed enough logic resources to implement the complex digital functions used by software-defined radios. More precisely, the ability of FPGAs to be reconfigured entirely (or even partially for the most recent of them) makes them ideal candidates for implementing hardware components that are flexible and able to evolve over time. Following these observations, research was conducted within the Conception des Systèmes Numériques team of the IMS laboratory. This work first led to the publication of a cryptographic subsystem architecture for secure software-defined radio as defined by the Software Communication Architecture, and then continued with the design and implementation of a dynamically reconfigurable multi-core cryptoprocessor on FPGA.
The research detailed in this document deals with the design and implementation of a hardware integrated circuit intended to be used as a cryptographic sub-system in secure software-defined radios. Since the early 90's, radio systems have gradually evolved from traditional radio to software-defined radio. The improvement of software-defined radio has enabled the integration of an increasing number of communication standards on a single radio device. The designer of a software-defined radio faces many problems that can be summarized by the following question: how to implement a maximum of communication standards into a single radio device? Specifically, this work focuses on the implementation of cryptographic standards aimed at protecting radio communications. Ideally, the solution to this problem is based exclusively on the use of digital processors. However, cryptographic algorithms usually require a large amount of computing power, which makes their software implementation inefficient. Therefore, a secure software-defined radio needs to incorporate dedicated hardware even if this usage conflicts with the flexibility specific to software-defined radios. Yet, in recent years, the improvement of FPGA circuits has changed the deal. Indeed, the latest FPGAs embed a number of logic gates sufficient to meet the needs of the complex digital functions used by software-defined radios. The possibility offered by FPGAs to be reconfigured in their entirety (or even partially for the latest of them) makes them ideal candidates for the implementation of hardware components that have to be flexible and scalable over time. Following these observations, research was conducted within the Conception des Systèmes Numériques team of the IMS laboratory. This work first led to the publication of an architecture of a cryptographic subsystem compliant with the security supplement of the Software Communication Architecture, and then continued with the design and implementation of a partially reconfigurable multi-core cryptoprocessor intended to be used in the latest FPGAs.
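As a rough behavioral sketch of the flexibility argued for above, the following model shows a crypto subsystem with a few dynamically reconfigurable slots that swaps per-waveform cipher "bitstreams" in on demand, instead of hardwiring one accelerator per standard. The class, the naive eviction policy, and the cipher names are all invented for illustration.

```python
# Toy model of on-demand partial reconfiguration of crypto accelerator slots.
class CryptoSubsystem:
    def __init__(self, n_slots):
        self.slots = [None] * n_slots      # currently loaded cipher per slot

    def request(self, cipher):
        """Return a slot running `cipher`, reconfiguring one if needed."""
        if cipher in self.slots:
            return self.slots.index(cipher)            # hit: already loaded
        # naive eviction: take a free slot if any, otherwise always slot 0
        victim = self.slots.index(None) if None in self.slots else 0
        print(f"partial reconfiguration: slot {victim} <- {cipher}")
        self.slots[victim] = cipher                    # load the new bitstream
        return victim

radio = CryptoSubsystem(n_slots=2)
for job in ["aes-gcm", "snow3g", "aes-gcm", "zuc"]:    # mixed waveforms
    print(job, "served by slot", radio.request(job))
```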
7

Das, Satyajit. "Architecture and Programming Model Support for Reconfigurable Accelerators in Multi-Core Embedded Systems." Thesis, Lorient, 2018. http://www.theses.fr/2018LORIS490/document.

Abstract:
The complexity of embedded systems and applications imposes growing demands on computing power and energy consumption. Coupled with the diminishing returns of technology scaling, academia and industry are constantly searching for energy-efficient hardware accelerators. The drawback of a hardware accelerator is that it is not programmable, making it dedicated to a particular function. The proliferation of dedicated accelerators in systems-on-chip leads to poor area efficiency and raises scalability and interconnection problems. Programmable accelerators provide the right trade-off between efficiency and flexibility. Coarse-Grained Reconfigurable Architectures (CGRAs), composed of word-level processing elements, are a promising choice of programmable accelerator. This thesis proposes to exploit the potential of coarse-grained reconfigurable architectures and to push the hardware to its energy limits within a complete design flow. The contributions of this thesis are a CGRA-type architecture, called IPA (Integrated Programmable Array), its implementation and integration in a system-on-chip, together with the associated compilation flow that exploits the unique characteristics of the new component, in particular its ability to support control flow. The effectiveness of the approach is demonstrated through the deployment of several compute-intensive applications. The proposed accelerator is finally integrated into PULP, a Parallel Ultra-Low-Power Processing Platform, to explore the benefits of this kind of ultra-low-power heterogeneous platform.
Emerging trends in embedded systems and applications need high throughput and low power consumption. Due to the increasing demand for low power computing and diminishing returns from technology scaling, industry and academia are turning with renewed interest toward energy efficient hardware accelerators. The main drawback of hardware accelerators is that they are not programmable: their utilization can be low, as each performs one specific function, and increasing the number of accelerators in a system on chip (SoC) causes scalability issues. Programmable accelerators provide flexibility and solve the scalability issues. The Coarse-Grained Reconfigurable Array (CGRA) architecture, consisting of several processing elements with word-level granularity, is a promising choice for a programmable accelerator. Inspired by the promising characteristics of programmable accelerators, the potential of CGRAs in near-threshold computing platforms is studied and an end-to-end CGRA research framework is developed in this thesis. The major contributions of this framework are: CGRA design, implementation, integration in a computing system, and compilation for CGRA. First, the design and implementation of a CGRA named Integrated Programmable Array (IPA) is presented. Next, the problem of mapping applications with control and data flow onto CGRA is formulated. From this formulation, several efficient algorithms are developed using the internal resources of a CGRA, with a vision for low power acceleration. The algorithms are integrated into an automated compilation flow. Finally, the IPA accelerator is augmented in PULP - a Parallel Ultra-Low-Power Processing Platform - to explore heterogeneous computing.
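The mapping problem at the heart of such a compilation flow can be illustrated by binding a small dataflow graph to the processing elements of a word-level array so that producer/consumer pairs land on nearby PEs. The exhaustive search, the Manhattan-distance cost model, and the graph below are toy assumptions, not the thesis's algorithms.

```python
# Toy CGRA binding: exhaustively place a 4-node dataflow chain on a 2x2 PE
# array, minimizing total producer->consumer wire distance.
import itertools

def map_dfg(nodes, edges, grid):
    pes = list(itertools.product(range(grid), range(grid)))
    best, best_cost = None, float("inf")
    for perm in itertools.permutations(pes, len(nodes)):   # feasible: toy sizes
        bind = dict(zip(nodes, perm))
        cost = sum(abs(bind[a][0] - bind[b][0]) + abs(bind[a][1] - bind[b][1])
                   for a, b in edges)
        if cost < best_cost:
            best, best_cost = bind, cost
    return best, best_cost

bind, cost = map_dfg(["ld", "mul", "add", "st"],
                     [("ld", "mul"), ("mul", "add"), ("add", "st")], grid=2)
print(cost, bind)   # the chain maps onto adjacent PEs, total wire cost 3
```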
8

Abdelrahman, Tarig. "Evaluation of Wales Postgraduate Medical and Dental Education Deanery outcomes at core and higher general surgery before and after national reconfiguration, enhanced selection, and Joint Committee on Surgical Training defined curricular standards." Thesis, Cardiff University, 2017. http://orca.cf.ac.uk/100975/.

Abstract:
This thesis examines contemporary outcomes of surgical training in Wales and the UK. The hypotheses tested were: Core Surgical Training (CST) outcome is related to specific curriculum-defined goals, and themed, focused CST rotations improve success at National Training Number (NTN) appointment; CST rotations including rural placements provide training comparable with non-rural placements; General Surgery (GS) Certificate of Completion of Training (CCT) curricular guidelines require focused appraisal and rotation planning; GS Higher Surgical Trainee (HST) indicative procedure targets are not in keeping with competence achievement determined by Procedural Based Assessment (PBA); dedicated Emergency General Surgery (EGS) modules enhance HST training experience; and H-indices are a valid measure of GS consultant academic productivity and identify training research opportunity. Successful ST3 NTN appointment improved from 5.3 to 33.3% (p=0.005) following CST [OR 4.789 (1.666 - 13.763), p=0.004], which is independently associated with success. ST3 appointment was similar irrespective of rural or non-rural CST rotational placement (18.1 vs. 22.1%, p=0.695). Of the 155 UK GS HST CCTs awarded in 2013, global operative logbook and academic achievements varied widely, with two-thirds of trainees achieving elective operative targets, but only half achieving the requisite experience in EGS, and 5% the non-operative targets. Wales' HSTs' level 4 GS operative competencies varied 4-fold, ranging from 0.76 to 3.4 times national targets. The introduction of EGS modular training delivered a high volume of index EGS procedures and higher rates of PBA completion when compared with controls. H-indices were a robust measure of surgeons' academic activity (p < 0.001).
9

Krill, Benjamin. "A reconfigurable environment for IP cores implementation using dynamic partial reconfiguration." Thesis, University of Ulster, 2012. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.556481.

Abstract:
Hardware acceleration is becoming increasingly important in high performance applications due to their computational complexity. Recently, field programmable gate arrays (FPGAs) have gained popularity as a suitable platform for many high performance applications. FPGAs offer low power, reconfigurability, high performance and low design-turnaround time, which enables FPGAs to be used in a number of image and signal processing applications. Due to the increasing complexity of acceleration systems, abstraction of these technologies and power dissipation have become among the most important challenges. Addressing these issues requires awareness at all levels of the system and FPGA design flow. The key achievements of the work presented in this thesis are summarised as follows. Novel architectures based on different design approaches and abstraction techniques - virtual file systems (VFS), the dynamic partial reconfiguration (DPR) mechanism, distributed arithmetic (DA) and parallel digital signal processing - are developed for a generic framework and for three-dimensional (3-D) algorithms. Furthermore, solutions to divide large algorithms into small modules that fit on smaller, less power-consuming FPGAs are investigated. An abstraction layer using a VFS for different platforms is developed, and as a result a partial reconfiguration design flow framework is produced. The ultimate aim of this dissertation is to examine an efficient reconfigurable architecture for generic 3-D cyclic convolution (3-DeC). This is achieved with the previously investigated abstraction layer and framework, to demonstrate the operation of the framework, allowing discussion and evaluation of techniques that are only possible on new FPGA devices. The results obtained have shown the advantages offered by the DPR framework and abstraction layer, and lead to a processing solution for implementing computationally intensive applications. A key section of this work included the development of the complete integration of the dynamic partial reconfiguration design flow and application usage, using the proposed abstraction model. The technique used explores the logic space and power consumption needed to divide the algorithm for optimal application runtime situations.
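As a numerical aside on the 3-D cyclic convolution (3-DeC) targeted above: with cyclic boundaries, convolution is a pointwise product in the 3-D DFT domain, which the short NumPy check below verifies against a direct modulo-indexed sum. Sizes are illustrative only; the thesis maps the computation onto DPR FPGA logic rather than software.

```python
# 3-D cyclic convolution via the convolution theorem, checked directly.
import numpy as np

def cyclic_conv3d(x, h):
    """Circular 3-D convolution: pointwise product of 3-D DFTs."""
    return np.real(np.fft.ifftn(np.fft.fftn(x) * np.fft.fftn(h)))

rng = np.random.default_rng(0)
x, h = rng.random((4, 4, 4)), rng.random((4, 4, 4))
y = cyclic_conv3d(x, h)

# Direct reference for one output sample, indices taken modulo N.
n = 4
ref = sum(x[i, j, k] * h[(1 - i) % n, (2 - j) % n, (3 - k) % n]
          for i in range(n) for j in range(n) for k in range(n))
print(np.isclose(y[1, 2, 3], ref))   # True: both computations agree
```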
10

GIULIANO, Fabrizio. "Supporting code mobility and dynamic reconfigurations over Wireless MAC Processor Prototype." Doctoral thesis, Università degli Studi di Palermo, 2014. http://hdl.handle.net/10447/91036.

Abstract:
Mobile networks for Internet access are a fundamental segment of Internet access networks, where resource optimization is critical because of the limited bandwidth availability. While resource optimizations have traditionally focused on highly efficient modulation and coding schemes, dynamically tuned according to the wireless channel and interference conditions, it has also been shown that medium access schemes can have a significant impact on network performance depending on the application and networking scenarios. This thesis proposes an architectural solution for supporting Medium Access Control (MAC) reconfigurations in terms of dynamic programming and code mobility. Since the MAC protocol is usually implemented in firmware/hardware (being constrained to very strict reaction times and to the rules of a specific standard), our solution is based on a different wireless card architecture, called the Wireless MAC Processor (WMP), where standard protocols are replaced by standard programming interfaces. The control architecture developed in this thesis exploits this novel behavioral model of wireless cards to extend the network intelligence and enable each node to be remotely reprogrammed by means of a so-called "MAC program", i.e. a software element that defines the description of a MAC protocol. This programmable protocol can be remotely injected and executed on running network devices, allowing on-the-fly MAC reconfigurations. This work aims to obtain a formal description of the requirements of a software-defined wireless network and to define a mechanism for reliable MAC-program code mobility through the network elements, transparent to the upper layers and supervised by a global control logic that optimizes radio resource usage; it extends a single-protocol implementation to a programmable protocol abstraction and redefines the overall wireless network view with support for cognitive adaptation mechanisms. The envisioned solutions have been supported by real experiments running on different WMP prototypes, showing the benefits given by a medium control infrastructure that is dynamic, message-oriented, and reconfigurable.
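A hedged sketch of the code-mobility mechanism described above: a "MAC program" shipped as data (here, a tiny state-transition table), integrity-checked, and atomically swapped into a running node. The table format, the hashing step, and all class names are invented for illustration; real WMP MAC programs are state-machine descriptions executed by the wireless card.

```python
# Toy MAC-program injection: ship a protocol description, verify, hot-swap.
import hashlib, json

def make_mac_program(name, table):
    blob = json.dumps({"name": name, "table": table}).encode()
    return blob, hashlib.sha256(blob).hexdigest()      # integrity tag travels too

class WmpNode:
    def __init__(self):
        # default program: a minimal DCF-like state table
        self.program = {"name": "dcf", "table": {"idle": "sense", "sense": "tx"}}

    def inject(self, blob, digest):
        if hashlib.sha256(blob).hexdigest() != digest: # reject corrupted programs
            return False
        self.program = json.loads(blob)                # on-the-fly MAC swap
        return True

node = WmpNode()
blob, tag = make_mac_program("tdma", {"idle": "wait_slot", "wait_slot": "tx"})
print(node.inject(blob, tag), node.program["name"])    # True tdma
```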
11

CHIESA, DAVIDE. "Development and experimental validation of a Monte Carlo simulation model for the Triga Mark II reactor." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2014. http://hdl.handle.net/10281/50064.

Abstract:
In recent years, many computer codes, based on Monte Carlo methods or deterministic calculations, have been developed to separately analyze different aspects of nuclear reactors. Nuclear reactors are very complex systems, which require an integrated analysis of all the variables that are intrinsically correlated: neutron fluxes, reaction rates, neutron moderation and absorption, thermal and power distributions, heat generation and transfer, criticality coefficients, fuel burnup, etc. For this reason, one of the main challenges in the analysis of nuclear reactors is the coupling of neutronics and thermal-hydraulics simulation codes, with the purpose of achieving a good modeling and comprehension of the mechanisms that rule the transient phases and the dynamic behavior of the reactor. This is very important to guarantee the control of the chain reaction, for a safe operation of the reactor. In developing simulation tools, benchmark analyses are needed to prove the reliability of the simulations. The experimental measurements conceived to be compared with the results coming out of the simulations are really precious and can provide useful information to improve the description of the physics phenomena in the simulation models. My PhD research activity was held in this framework, as part of the research project Analysis of Reactor COre (ARCO, promoted by INFN), whose task was the development of modern, flexible and integrated tools for the analysis of nuclear reactors, relying on the experimental data collected at the research reactor TRIGA Mark II, installed at the Applied Nuclear Energy Laboratory (LENA) at the University of Pavia. In this way, once the effectiveness and the reliability of these tools for modeling an experimental reactor have been demonstrated, they can be applied to develop new generation systems. In this thesis, I present the complete neutronic characterization of the TRIGA Mark II reactor, which was analyzed in different operating conditions through experimental measurements and the development of a Monte Carlo simulation tool (relying on the MCNP code) able to take into account the ever increasing complexity of the conditions to be simulated. First of all, after giving an overview of some theoretical concepts that are fundamental for nuclear reactor analysis, a model that reconstructs the first working period of the TRIGA Mark II reactor, in which the "fresh" fuel was not heavily contaminated with fission reaction products, is described. In particular, all the geometries and the materials are described in the MCNP simulation model in good detail, in order to reconstruct the reactor criticality and all the effects on the neutron distributions. The very good results obtained from the simulations of the reactor in the low power condition (in which the fuel elements can be considered to be in thermal equilibrium with the water around them) are then used to implement a model for simulating the full power condition (250 kW), in which the effects arising from the temperature increase in the fuel-moderator must be taken into account. The MCNP simulation model was exploited to evaluate the reactor power distribution, and a dedicated experimental campaign was performed to measure the water temperature within the reactor core. In this way, through a thermal-hydraulic calculation tool, it has been possible to determine the temperature distribution within the fuel elements and to include the description of the thermal effects in the MCNP simulation model.
Thereafter, since the neutron flux is a crucial parameter affecting the reaction rates and thus the fuel burnup, its energy and space distributions are analyzed, presenting the results of several neutron activation measurements. In particular, the neutron flux was first measured in the reactor's irradiation facilities through the neutron activation of many different isotopes. Then, in order to analyze the energy flux spectra, I implemented an analysis tool, based on Bayesian statistics, which allows the experimental data from the different activated isotopes to be combined into a reconstructed multi-group flux spectrum. Subsequently, the spatial neutron flux distribution within the core was measured by activating several aluminum-cobalt samples in different core positions, thus allowing the determination of the integral and fast flux distributions from the analysis of cobalt and aluminum, respectively. Finally, I present the results of the fuel burnup calculations, which were performed to simulate the current core configuration after 48 years of operation. The good accuracy reached in the simulation of the neutron fluxes, as confirmed by the experimental measurements, has allowed the burnup of each fuel element to be evaluated from the knowledge of the operating hours and the different positions occupied in the core over the years. In this way, it has been possible to exploit the MCNP simulation model to determine a new optimized core configuration which could ensure, at the same time, a higher reactivity and the use of fewer fuel elements. This configuration was realized in September 2013, and the experimental results confirm the high quality of the work done. The results of this PhD thesis highlight that it is possible to implement analysis tools (ranging from Monte Carlo simulations to fuel burnup time-evolution software, from neutron activation measurements to the Bayesian statistical analysis of flux spectra, and from temperature measurements to thermal-hydraulic models) that can be appropriately exploited to describe and comprehend the complex mechanisms ruling the operation of a nuclear reactor. In particular, the effectiveness and the reliability of these tools were demonstrated in the case of an experimental reactor, where it was possible to collect many precious data to perform benchmark analyses. Therefore, as these tools have been developed and implemented, they can be used to analyze other reactors and, possibly, to design and develop new generation systems, which will allow the production of high-level nuclear waste to be decreased and the nuclear fuel to be exploited with improved efficiency.
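A toy numerical sketch of the unfolding step described above: measured activation rates R_i = sum_g sigma_ig * phi_g from several isotopes are combined to estimate a multi-group flux phi, with a Gaussian prior (a ridge term) standing in for the full Bayesian machinery of the thesis. The cross sections, fluxes, and noise level are invented.

```python
# Two-group flux unfolding from three activation measurements (toy data).
import numpy as np

sigma = np.array([[5.0, 0.2],      # isotope A: mostly thermal response
                  [0.1, 3.0],      # isotope B: mostly fast response
                  [2.0, 2.0]])     # isotope C: mixed
phi_true = np.array([8.0, 2.0])    # "unknown" thermal/fast group fluxes
rng = np.random.default_rng(1)
rates = sigma @ phi_true * rng.normal(1.0, 0.02, size=3)   # noisy activations

prior_mean, tau = np.array([5.0, 5.0]), 1e-2   # weak Gaussian prior on phi
A = np.vstack([sigma, np.sqrt(tau) * np.eye(2)])           # ridge = MAP estimate
b = np.concatenate([rates, np.sqrt(tau) * prior_mean])
phi_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print(phi_true, phi_hat.round(2))  # posterior-mode estimate close to the truth
```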
12

Arad, Cosmin Ionel. "Programming Model and Protocols for Reconfigurable Distributed Systems." Doctoral thesis, KTH, Programvaruteknik och Datorsystem, SCS, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-122311.

Abstract:
Distributed systems are everywhere. From large datacenters to mobile devices, an ever richer assortment of applications and services relies on distributed systems, infrastructure, and protocols. Despite their ubiquity, testing and debugging distributed systems remains notoriously hard. Moreover, aside from inherent design challenges posed by partial failure, concurrency, or asynchrony, there remain significant challenges in the implementation of distributed systems. These programming challenges stem from the increasing complexity of the concurrent activities and reactive behaviors in a distributed system on the one hand, and the need to effectively leverage the parallelism offered by modern multi-core hardware, on the other hand. This thesis contributes Kompics, a programming model designed to alleviate some of these challenges. Kompics is a component model and programming framework for building distributed systems by composing message-passing concurrent components. Systems built with Kompics leverage multi-core machines out of the box, and they can be dynamically reconfigured to support hot software upgrades. A simulation framework enables deterministic execution replay for debugging, testing, and reproducible behavior evaluation for large-scale Kompics distributed systems. The same system code is used for both simulation and production deployment, greatly simplifying the system development, testing, and debugging cycle. We highlight the architectural patterns and abstractions facilitated by Kompics through a case study of a non-trivial distributed key-value storage system. CATS is a scalable, fault-tolerant, elastic, and self-managing key-value store which trades off service availability for guarantees of atomic data consistency and tolerance to network partitions. We present the composition architecture for the numerous protocols employed by the CATS system, as well as our methodology for testing the correctness of key CATS algorithms using the Kompics simulation framework. Results from a comprehensive performance evaluation attest that CATS achieves its claimed properties and delivers a level of performance competitive with similar systems which provide only weaker consistency guarantees. More importantly, this testifies that Kompics admits efficient system implementations. Its use as a teaching framework as well as its use for rapid prototyping, development, and evaluation of a myriad of scalable distributed systems, both within and outside our research group, confirm the practicality of Kompics.
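A drastically simplified, Python-only sketch of the programming style described above: components interact solely through events on connected channels, and a deterministic FIFO scheduler delivers them, which is what makes simulation-based replay of the same code possible. Kompics itself is a Java framework; every name below is invented.

```python
# Toy event-driven component wiring with a deterministic scheduler.
from collections import deque

class Channel:
    def __init__(self, bus):
        self.bus, self.subs = bus, []
    def connect(self, handler):
        self.subs.append(handler)          # subscribe a component's handler
    def trigger(self, event):
        self.bus.append((self, event))     # events are queued, not called

class Scheduler:
    def __init__(self):
        self.bus = deque()
    def run(self):                         # deterministic FIFO event delivery
        while self.bus:
            chan, event = self.bus.popleft()
            for handler in chan.subs:
                handler(event)

sched = Scheduler()
ping, pong = Channel(sched.bus), Channel(sched.bus)
pong.connect(lambda e: print("got", e))
ping.connect(lambda e: pong.trigger(e + 1))  # "Pinger" component's sole handler
ping.trigger(41)                             # kick off the exchange
sched.run()                                  # prints: got 42
```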


13

Arad, Cosmin. "Programming Model and Protocols for Reconfigurable Distributed Systems." Doctoral thesis, SICS, 2013. http://urn.kb.se/resolve?urn=urn:nbn:se:ri:diva-24202.

Abstract:
Distributed systems are everywhere. From large datacenters to mobile devices, an ever richer assortment of applications and services relies on distributed systems, infrastructure, and protocols. Despite their ubiquity, testing and debugging distributed systems remains notoriously hard. Moreover, aside from inherent design challenges posed by partial failure, concurrency, or asynchrony, there remain significant challenges in the implementation of distributed systems. These programming challenges stem from the increasing complexity of the concurrent activities and reactive behaviors in a distributed system on the one hand, and the need to effectively leverage the parallelism offered by modern multi-core hardware, on the other hand. This thesis contributes Kompics, a programming model designed to alleviate some of these challenges. Kompics is a component model and programming framework for building distributed systems by composing message-passing concurrent components. Systems built with Kompics leverage multi-core machines out of the box, and they can be dynamically reconfigured to support hot software upgrades. A simulation framework enables deterministic execution replay for debugging, testing, and reproducible behavior evaluation for large-scale Kompics distributed systems. The same system code is used for both simulation and production deployment, greatly simplifying the system development, testing, and debugging cycle. We highlight the architectural patterns and abstractions facilitated by Kompics through a case study of a non-trivial distributed key-value storage system. CATS is a scalable, fault-tolerant, elastic, and self-managing key-value store which trades off service availability for guarantees of atomic data consistency and tolerance to network partitions. We present the composition architecture for the numerous protocols employed by the CATS system, as well as our methodology for testing the correctness of key CATS algorithms using the Kompics simulation framework. Results from a comprehensive performance evaluation attest that CATS achieves its claimed properties and delivers a level of performance competitive with similar systems which provide only weaker consistency guarantees. More importantly, this testifies that Kompics admits efficient system implementations. Its use as a teaching framework as well as its use for rapid prototyping, development, and evaluation of a myriad of scalable distributed systems, both within and outside our research group, confirm the practicality of Kompics.
14

Borde, Etienne. "Configuration et Reconfiguration des Systèmes Temps-Reél Répartis Embarqués Critiques et Adaptatifs." Phd thesis, Télécom ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00563947.

Abstract:
Today, more and more industrial systems rely on distributed real-time embedded (TR2E) software applications. Building these applications requires meeting a large set of very heterogeneous, sometimes contradictory constraints. To satisfy these constraints, it is almost always necessary to provide such systems with adaptation capabilities. Moreover, some of these applications control systems whose failure can have dramatic financial, or even human, consequences. Designing such applications, called critical applications, requires rigorous development processes able to detect and eliminate potential design errors. Unfortunately, to our knowledge there is no development process able to handle this problem when the adaptation of the system to its environment involves changing its software configuration. This thesis presents a new methodology that answers this problem by relying on the notion of operational mode: each possible behavior of the system is represented by an operational mode with which a software configuration is associated. Specifying the transition rules between these modes then makes it possible to generate the implementation of the mode-change mechanisms, as well as of the associated software reconfigurations. The generated code respects the implementation constraints of critical systems and implements safe and analyzable reconfiguration mechanisms. To this end, we defined a new architecture description language (COAL: Component Oriented Architecture Language) that combines the advantages of component-based software engineering (of the Lightweight CCM type) with the analysis, deployment, and static-configuration techniques brought by architecture description languages (in particular AADL: Architecture Analysis and Description Language). We then built a new component framework, MyCCM-HI (Make your Component Container Model - High Integrity), which exploits COAL constructs to (i) generate the AADL model used for the deployment and static configuration of the TR2E application, (ii) generate the deployment and configuration code of the Lightweight CCM software components, (iii) generate the code corresponding to the system's adaptation mechanisms, and (iv) formally analyze the behavior of the system, including during adaptation. This component framework is available for download at http://myccm-hi.sourceforge.net.
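A minimal sketch of the mode-based methodology summarized above: each operational mode carries a software configuration, and only declared transition rules can trigger a reconfiguration, which keeps mode changes explicit and analyzable. The modes, components, and events below are invented; in the thesis such logic is generated from COAL by MyCCM-HI rather than hand-written.

```python
# Toy mode machine: a mode change yields the component reconfiguration to apply.
class ModeMachine:
    def __init__(self, modes, transitions, initial):
        self.modes = modes                  # mode -> set of active components
        self.transitions = transitions      # (mode, event) -> next mode
        self.mode = initial

    def fire(self, event):
        nxt = self.transitions.get((self.mode, event))
        if nxt is None:                     # undeclared transition: rejected
            raise ValueError(f"illegal transition on {event!r} from {self.mode}")
        stop = self.modes[self.mode] - self.modes[nxt]
        start = self.modes[nxt] - self.modes[self.mode]
        self.mode = nxt
        return {"stop": stop, "start": start}   # the derived reconfiguration

m = ModeMachine(
    modes={"nominal": {"nav", "telemetry", "payload"},
           "safe": {"nav", "telemetry"}},
    transitions={("nominal", "low_power"): "safe",
                 ("safe", "recovered"): "nominal"},
    initial="nominal")
print(m.fire("low_power"))   # {'stop': {'payload'}, 'start': set()}
```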
APA, Harvard, Vancouver, ISO, and other styles
15

Hentati, Manel. "Reconfiguration dynamique partielle de décodeurs vidéo sur plateformes FPGA par une approche méthodologique RVC (Reconfigurable Video Coding)." Rennes, INSA, 2012. http://www.theses.fr/2012ISAR0027.

Full text
Abstract:
The main purpose of this PhD is to contribute to the design and the implementation of a reconfigurable decoder using the MPEG-RVC standard. The MPEG-RVC standard, developed by MPEG, aims at providing a unified high-level specification of current and future MPEG video coding technologies by using a dataflow model named RVC-CAL. This standard offers the means to overcome the lack of interoperability between the many video codecs deployed in the market. In this work, we propose a rapid prototyping methodology to provide an efficient and optimized implementation of RVC decoders on target hardware. Our design flow is based on dynamic partial reconfiguration (DPR) to validate the reconfiguration approaches allowed by MPEG-RVC. With the DPR technique, a hardware module can be replaced by another one that has the same function or the same algorithm but a different architecture. This concept allows the designer to configure various decoders according to the input data or the requirements (latency, speed, power consumption, etc.). The use of MPEG-RVC and DPR improves the development process and the decoder performance. However, DPR poses several problems, such as the placement of tasks and the fragmentation of the FPGA area, which influence application performance. Therefore, we need to define methods for the placement of hardware tasks on the FPGA. In this work, we propose an off-line placement approach based on a linear programming strategy to find the optimal placement of hardware tasks and to minimize resource utilization. Application to different data combinations and a comparison with a state-of-the-art method show the high performance of the proposed approach.
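The off-line placement step can be illustrated with a toy integer program. The sketch below uses the PuLP modeling library with invented task demands and region capacities (not the thesis' actual formulation): binary variables assign each hardware task to exactly one reconfigurable region while respecting capacity and minimizing the number of active regions.

```python
# Minimal ILP sketch of off-line task placement on FPGA regions
# (hypothetical data and objective; not the thesis' exact model).
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

tasks = {"t1": 120, "t2": 80, "t3": 60}        # resource demand (e.g., slices)
regions = {"r1": 150, "r2": 150}               # region capacities

prob = LpProblem("task_placement", LpMinimize)
x = {(t, r): LpVariable(f"x_{t}_{r}", cat=LpBinary) for t in tasks for r in regions}
used = {r: LpVariable(f"used_{r}", cat=LpBinary) for r in regions}

for t in tasks:                                 # each task placed exactly once
    prob += lpSum(x[t, r] for r in regions) == 1
for r, cap in regions.items():                  # capacity and region activation
    prob += lpSum(tasks[t] * x[t, r] for t in tasks) <= cap * used[r]

prob += lpSum(used[r] for r in regions)         # minimize active regions
prob.solve()
print({(t, r): value(x[t, r]) for t in tasks for r in regions})
```

A real model would add placement coordinates and fragmentation terms, but the assignment-plus-capacity core shown here is the part that linear programming handles directly.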
APA, Harvard, Vancouver, ISO, and other styles
16

Constantin, Nicolas 1964. "Analysis and design of a gated envelope feedback technique for automatic hardware reconfiguration of RFIC power amplifiers, with full on-chip implementation in gallium arsenide heterojunction bipolar transistor technology." Thesis, McGill University, 2009. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=115666.

Full text
Abstract:
In this doctoral dissertation, the author presents the theoretical foundation, the analysis and design of analog and RF circuits, the chip level implementation, and the experimental validation pertaining to a new radio frequency integrated circuit (RFIC) power amplifier (PA) architecture that is intended for wireless portable transceivers.
A method called Gated Envelope Feedback is proposed to allow the automatic hardware reconfiguration of a stand-alone RFIC PA in multiple states for power efficiency improvement purposes. The method uses self-operating and fully integrated circuitry comprising RF power detection, switching and sequential logic, and RF envelope feedback in conjunction with a hardware gating function for triggering and activating current reduction mechanisms as a function of the transmitted RF power level. Because of the critical role that RFIC PA components occupy in modern wireless transceivers, and given their major impact on overall RF performance and energy consumption, very significant benefits stem from the underlying innovations.
The method has been validated through the successful design of a 1.88GHz CDMA RFIC PA with automatic hardware reconfiguration capability, using an industry-renowned state-of-the-art GaAs HBT semiconductor process developed and owned by Skyworks Solutions, Inc., USA. The circuit techniques that have enabled the successful and full on-chip embodiment of the technique are analyzed in detail. The IC implementation is discussed, and experimental results showing significant current reduction upon automatic hardware reconfiguration, gain regulation performance, and compliance with the stringent linearity requirements for CDMA transmission demonstrate that the gated envelope feedback method is a viable and promising approach to automatic hardware reconfiguration of RFIC PAs for current reduction purposes. Moreover, in regard to on-chip integration of advanced PA control functions, it is demonstrated that the method better positions GaAs HBT technologies, which are known to offer very competitive RF performance but inherently have limited integration capabilities.
Finally, an analytical approach for the evaluation of inter-modulation distortion (IMD) in envelope feedback architectures is introduced, and the proposed design equations and methodology for IMD analysis may prove very helpful for theoretical analyses, for simulation tasks, and for experimental work.
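The triggering behavior of such power-gated reconfiguration can be caricatured in a few lines (a behavioral model with invented threshold values only; the actual mechanism is analog on-chip circuitry, not software):

```python
# Behavioral sketch of power-triggered PA reconfiguration with hysteresis
# (invented thresholds; not the actual GaAs HBT circuit behavior).
class GatedPA:
    def __init__(self, up_dbm=16.0, down_dbm=12.0):
        self.up, self.down = up_dbm, down_dbm   # hysteresis avoids toggling
        self.state = "low_power"                # start in reduced-current mode

    def update(self, detected_power_dbm: float) -> str:
        if self.state == "low_power" and detected_power_dbm > self.up:
            self.state = "high_power"           # enable full bias / all stages
        elif self.state == "high_power" and detected_power_dbm < self.down:
            self.state = "low_power"            # gate off current-hungry stages
        return self.state

pa = GatedPA()
for p in (5.0, 18.0, 14.0, 10.0):
    print(p, "->", pa.update(p))    # low, high (crosses up), high (hysteresis), low
```

The two distinct thresholds capture why a gating function is needed at all: without hysteresis, an envelope hovering near a single threshold would reconfigure the PA on every fluctuation.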
APA, Harvard, Vancouver, ISO, and other styles
17

Cardoso, Jason Barbosa. "Reconfiguração ótima para cortes de cargas em sistemas de distribuição de energia elétrica." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-06092016-104021/.

Full text
Abstract:
This research proposes a mathematical model to optimize the load shedding problem in radial distribution power systems. The load shedding problem consists of a topological reconfiguration strategy for the power grid in order to interrupt the power supply. The main goal is to disconnect the minimum amount of system load while respecting the physical and operational restrictions of the grid. A second goal is to modify the initial topological structure of the system as little as possible; to achieve this, the number of switching operations is minimized. The problem was first modeled as a mixed-integer nonlinear program and then transformed into a mixed-integer second-order cone program, which can be solved efficiently by various commercial solvers. The mathematical model was implemented in the mathematical programming environment GAMS and solved using the CPLEX commercial solver. Tests were performed on a 53-node distribution system, and the results show the consistency and efficiency of the proposed model.
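As a generic illustration of what such a conic reformulation looks like (our own notation and simplifications, not necessarily the exact model of the thesis), with $y_i$ a binary keep/shed decision per load and $s_{ij}$ a binary switch status:

```latex
% Generic MISOCP sketch of load shedding with switching minimization.
\begin{align}
\min\quad & \sum_{i \in \mathcal{N}} P^{D}_{i}(1 - y_i)
  + \lambda \sum_{(i,j)\in\mathcal{L}} \lvert s_{ij} - s^{0}_{ij}\rvert \\
\text{s.t.}\quad & \text{nodal power balance with served load } y_i P^{D}_{i},\\
& P_{ij}^{2} + Q_{ij}^{2} \le \ell_{ij}\, v_i
  \quad\text{(second-order cone relaxation)},\\
& \underline{V}^{2} \le v_i \le \overline{V}^{2}, \qquad
  \sum_{(i,j)\in\mathcal{L}} s_{ij} = |\mathcal{N}| - 1
  \quad\text{(radiality)},\\
& y_i \in \{0,1\}, \qquad s_{ij} \in \{0,1\}.
\end{align}
```

Here $v_i$ and $\ell_{ij}$ stand for squared voltage and current magnitudes in the branch-flow model, $s^{0}_{ij}$ is the initial switch status, and $\lambda$ weights switching effort against shed load; the absolute value is linearized with the binaries in practice. Relaxing the quadratic power-flow equality into the cone inequality is what turns the original MINLP into a MISOCP that solvers such as CPLEX handle directly.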
APA, Harvard, Vancouver, ISO, and other styles
18

Borges, Guilherme Pereira. "Metodologia para planejamento de ações de alívio de carregamento em sistemas de distribuição de energia elétrica em média tensão." Universidade de São Paulo, 2016. http://www.teses.usp.br/teses/disponiveis/18/18154/tde-02082017-165127/.

Full text
Abstract:
The objective of this research is to develop and implement a methodology for treating the load shedding problem caused by contingencies that exist (operation) or may occur (planning) in the supply system (high voltage/subtransmission). The methodology is based on the Multiobjective Evolutionary Algorithm in Tables, initially developed for service restoration problems in distribution systems. It aims to minimize the number of customers without electricity supply and the number of switching operations (so that implementation in practice is not impeded), while avoiding overloads in the network and substations, keeping voltage levels within the ranges required by regulation, and maintaining the radiality of the network. To achieve these goals, techniques are used to determine the switching sequence required by the obtained load shedding plan, to prioritize special consumers in service, and to shed load selectively when the possibilities of relocating loads between primary feeders are exhausted. When the proposed methodology is applied to a real large-scale distribution system of the Energy Company of Pernambuco (CELPE), it proves reliable compared with the technique currently in use, producing a feasible sequence of maneuvers, fewer switching operations, and fewer consumers and priority consumers without service, while remaining applicable to similar systems. The methodology has been integrated into a computer system with a graphical environment that supports case studies and stores information in a database.
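The tables-based evolutionary idea can be sketched with toy data. In the sketch below (invented load figures, a simplified relief constraint, and placeholder operators; not the thesis' implementation), each table retains the best individuals for one objective, and parents are drawn from the tables for mutation:

```python
# Sketch of a tables-based multiobjective evolutionary step in the spirit
# of the Multiobjective Evolutionary Algorithm in Tables (simplified).
import random

LOADS = [10, 40, 15, 25, 30, 20, 5, 35]    # customers per load block
BASE = [0] * 8                              # 1 = block shed by a switching action
RELIEF = 60                                 # customers that must be shed for relief

def objectives(ind):
    shed = sum(l for l, b in zip(LOADS, ind) if b)        # customers cut
    switchings = sum(a != b for a, b in zip(ind, BASE))   # operations vs. base
    return shed, switchings

tables = {0: [], 1: []}                     # one table per objective

def insert(ind):
    shed, sw = objectives(ind)
    if shed < RELIEF:                       # infeasible: not enough relief
        return
    for k, score in enumerate((shed, sw)):
        tables[k].append((score, ind))
        tables[k].sort(key=lambda e: e[0])
        del tables[k][5:]                   # each table keeps its 5 best

random.seed(1)
insert([1, 1, 0, 0, 1, 0, 0, 0])            # a feasible seed (80 customers shed)
for _ in range(500):                        # mutate parents drawn from the tables
    _, parent = random.choice(random.choice(list(tables.values())))
    child = parent[:]
    child[random.randrange(len(child))] ^= 1
    insert(child)
print("fewest customers shed:", tables[0][0])
print("fewest switchings:   ", tables[1][0])
```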
APA, Harvard, Vancouver, ISO, and other styles
19

Annamalai, Arunachalam. "A Dynamic Reconfiguration Framework to Maximize Performance/Power in Asymmetric Multicore Processors." 2013. https://scholarworks.umass.edu/theses/1104.

Full text
Abstract:
Recent trends in technology scaling have shifted the processing paradigm to multicores. Depending on the characteristics of the cores, the multicores can be either symmetric or asymmetric. Prior research has shown that Asymmetric Multicore Processors (AMPs) outperform their symmetric (SMP) counterparts within a given resource and power budget. But, due to the heterogeneity in core-types and time-varying workload behavior, thread-to-core assignment is always a challenge in AMPs. As the computational requirements vary significantly across different applications and with time, there is a need to dynamically allocate appropriate computational resources on demand to suit the applications’ current needs, in order to maximize the performance and minimize the energy consumption. Performance/power of the applications could be further increased by dynamically adapting the voltage and frequency of the cores to better fit the changing characteristics of the workloads. Not only can a core be forced to a low power mode when its activity level is low, but the power saved by doing so could be opportunistically re-budgeted to the other cores to boost the overall system throughput. To this end, we propose a novel solution that seamlessly combines heterogeneity with a Dynamic Reconfiguration Framework (DRF). The proposed dynamic reconfiguration framework is equipped with Dynamic Resource Allocation (DRA) and Voltage/Frequency Adaptation (DVFA) capabilities to adapt the core resources and operating conditions at runtime to the changing demands of the applications. As a proof of concept, we illustrate our proposed approach using a dual-core AMP and demonstrate significant performance/power benefits over various baselines.
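A toy version of the reconfiguration loop conveys the two capabilities named above. The sketch uses invented metrics and thresholds (not the thesis' actual controller): dynamic resource allocation maps the most demanding thread to the big core, and voltage/frequency adaptation scales each core, re-budgeting slack power opportunistically.

```python
# Toy sketch of a dynamic reconfiguration loop for an asymmetric dual-core
# (invented metrics/thresholds; illustrates DRA + DVFA, not the thesis' design).
POWER_BUDGET = 10.0   # watts, shared across cores

def reconfigure(threads, cores):
    # DRA: map the most compute-intensive thread to the "big" core.
    ranked = sorted(threads, key=lambda t: t["ipc_demand"], reverse=True)
    mapping = dict(zip((t["name"] for t in ranked), ("big", "little")))
    # DVFA: scale each core's frequency to its thread's demand, then
    # re-budget any slack power to boost the busier core.
    freqs = {}
    for t in ranked:
        core = mapping[t["name"]]
        freqs[core] = min(cores[core]["f_max"],
                          t["ipc_demand"] * cores[core]["f_max"])
    used = sum(cores[c]["p_at_fmax"] * (freqs[c] / cores[c]["f_max"])
               for c in freqs)
    if used < POWER_BUDGET and freqs["big"] < cores["big"]["f_max"]:
        freqs["big"] = cores["big"]["f_max"]      # opportunistic boost
    return mapping, freqs

cores = {"big": {"f_max": 2.0, "p_at_fmax": 6.0},
         "little": {"f_max": 1.0, "p_at_fmax": 2.0}}
threads = [{"name": "A", "ipc_demand": 0.9}, {"name": "B", "ipc_demand": 0.3}]
print(reconfigure(threads, cores))   # A -> big at full speed, B -> little, slowed
```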
APA, Harvard, Vancouver, ISO, and other styles
20

Couture, Stéphane. "Le code source informatique comme artefact dans les reconfigurations d'Internet." Thèse, 2012. http://www.archipel.uqam.ca/5210/1/D2415.pdf.

Full text
Abstract:
Cette thèse en communication porte sur le code source informatique. Le code source est l'objet de la programmation informatique et peut être défini comme un ensemble de commandes informatiques humainement lisibles, « écrites » dans un langage de programmation de haut niveau (Krysia et Grzesiek, 2008). Depuis quelques années, le code source fait l'objet d'une signification sociale et politique grandissante. Le mouvement du logiciel libre place par exemple au cœur de sa politique le libre accès au code source. Ce mouvement a d'ailleurs permis l'émergence de nombreux collectifs articulés autour de la fabrication collective du code source. Si plusieurs études se sont attardées aux différents aspects de ces collectifs et aux usages des technologies numériques en général, force est toutefois de constater que le code source reste un objet remarquablement négligé dans les études en communication. L'objectif principal de cette thèse est donc d'aborder frontalement l'artefact code source, en répondant à cette question centrale de recherche : qu'est-ce que le code source et comment cet artefact agit-il dans les reconfigurations d'Internet? Notre problématique s'articule selon trois axes. D'abord, le constat de la signification sociale et politique grandissante du code source, qui s'exprime notamment dans un discours faisant du code source une forme d'expression. Ensuite, la manière dont, pour certains auteurs, le code informatique agit à la manière d'une loi en prescrivant ou limitant certains comportements. Finalement, un dernier axe concerne les rapports d'autorité et le travail invisible dans la fabrication du code source. Sur le plan théorique, notre étude se situe à l'intersection du champ « Science, technologie et société » (STS) et de celui des études en communication. Elle s'appuie largement sur les travaux récents de Lucy Suchman (2007) qui cherchent à poser le regard sur des dynamiques de reconfigurations mutuelles et permanentes des relations entre humains et machines. Notre étude mobilise également certains travaux français se situant en continuité de la théorie de l'acteur-réseau, et s'attardant au travail nécessaire à la stabilité et la performativité des artefacts. D'un point de vue méthodologique, notre étude prend comme terrain SPIP et symfony, deux logiciels qui ont en commun d'être utilisés comme infrastructures dans le fonctionnement de nombreux sites web interactifs, souvent désignés sous l'appellation « web 2.0 ». Les deux logiciels sont originaires de France et continuent de mobiliser un nombre significatif d'acteurs français. Ces projets se distinguent par les valeurs mises de l'avant, plus militantes et non commerciales dans le cas de SPIP, plus professionnelles et commerciales dans le cas de symfony. La langue utilisée dans l'écriture du code source est également différente : français pour SPIP, anglais pour symfony. L'enquête combine l'analyse de documents et de traces en ligne, des entretiens semi-dirigés avec les acteurs des projets, de même que l'observation de différentes rencontres entre les acteurs. Notre étude fait tout d'abord clairement ressortir une certaine ambiguïté entourant la définition de la notion du « code source ». 
Alors que le code source est souvent appréhendé comme un « texte », « que l'on écrit », l'analyse des définitions plus formelles, ou encore de l'objet désigné par les acteurs par le terme de « code source », montre que cet objet renvoie souvent à différents types de médias, comme des images, et même des artefacts qui ne sont pas directement destinés au fonctionnement des ordinateurs. À l'instar des propos de certains acteurs, nous croyons que la définition de ce qui constitue le code source revêt même une dimension politique, dans ce sens qu'elle tend à valoriser certains types d'activités plutôt que d'autres. L'analyse du processus de fabrication collective du code source dans les deux projets montre également des différences importantes au niveau de l'organisation du code source, de même que dans la mise en œuvre des normes et des « autorisations » d'écriture dans chacun des projets. Ces différences s'articulent avec les valeurs des projets et participent d'une certaine configuration du type d'acteur destiné à interagir avec telle ou telle partie du code source. En conclusion, nous insistons sur le fait que le code source ne doit pas seulement être appréhendé comme étant le noyau des infrastructures d'information. Il doit aussi être appréhendé, dans une perspective communicationnelle et sociologique, comme un artefact à travers duquel des acteurs humains entrent en relation entre eux pour reconfigurer le monde socionumérique au sein duquel ils et elles sont engagés. Suivant l'approche de Suchman, nous proposons donc d'appréhender le code source comme une interface, ou une multiplicité d'interfaces, dans les reconfigurations d'Internet, en insistant sur la manière dont le design de ces interfaces entraîne certaines conséquences, en particulier en privilégiant la participation de certains acteurs plutôt que d'autres. ______________________________________________________________________________ MOTS-CLÉS DE L’AUTEUR : code source, logiciels libres, artefacts, études STS, reconfigurations humain-machine.
APA, Harvard, Vancouver, ISO, and other styles
21

Kai-Chun Lin and 林楷鈞. "Power-Comparison-based Eavesdropping Detection and Signature Reconfiguration for Optical Code-Division Multiple-Access Networks." Thesis, 2018. http://ndltd.ncl.edu.tw/handle/ztpc5a.

Full text
Abstract:
Master's thesis
National Cheng Kung University
Institute of Computer and Communication Engineering
106 (ROC academic year)
In communication networks, security is traditionally divided into three categories: integrity, confidentiality, and availability. Optical code-division multiple-access (OCDMA) systems can potentially provide both confidentiality and availability protection, and OCDMA has therefore been seen as a strong candidate for offering confidentiality. However, OCDMA techniques still suffer from inherent security disadvantages, such as eavesdropping by an attacker who uses a dedicated device to intercept and recover the encoded transmitted signals. In this thesis, a signature-code reconfiguration scheme for OCDMA networks is proposed to enhance the security of multi-user data transmission. The scheme is built on two mechanisms: (1) eavesdropping detection based on power comparison at the local node, and (2) signature-code reconfiguration at each node on command of a central control station. For eavesdropping detection, a significant power change is sensed while a communicating node pair is under malicious attack. For signature reconfiguration, the central station commands the communicating transceiver nodes to change their signature keys. We illustrate the scheme with maximal-length sequence (M-sequence) codes as the signature keys of the network nodes; these signatures are implemented over arrayed-waveguide grating (AWG) devices. Simulation results show that the spectral amplitude drops noticeably after eavesdropping, so a threshold value can be determined to detect eavesdropping effectively. An analysis of the eavesdropping probability further shows that confidentiality is significantly enhanced when the proposed eavesdropping detection is combined with signature reconfiguration.
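The two mechanisms can be illustrated with toy numbers (an assumed detection margin and a small LFSR; illustrative parameters only, not the simulated system): an M-sequence signature generated by a linear-feedback shift register, and a power-comparison test against a calibrated reference.

```python
# Sketch of the two mechanisms: an LFSR-generated M-sequence signature
# and a power-comparison eavesdropping test (illustrative values only).
def m_sequence(taps=(4, 1), n=4):
    """Length 2^n - 1 maximal-length sequence from an n-bit Fibonacci LFSR."""
    state = [1] * n
    seq = []
    for _ in range(2 ** n - 1):
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return seq

def eavesdropping_detected(p_ref_dbm, p_meas_dbm, margin_db=1.5):
    # A tap by an eavesdropper diverts optical power, so the received
    # spectral power drops below the calibrated reference by > margin.
    return (p_ref_dbm - p_meas_dbm) > margin_db

code = m_sequence()
print(code, len(code))                        # 15-chip signature key
print(eavesdropping_detected(-10.0, -12.0))   # True: 2 dB drop exceeds margin
if eavesdropping_detected(-10.0, -12.0):
    print("reconfigure to:", m_sequence(taps=(4, 3)))  # new key on command
```

Both tap settings correspond to primitive degree-4 polynomials, so each yields a full-period 15-chip sequence; in the thesis the reconfiguration command instead selects among signatures structured over the AWG devices.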
APA, Harvard, Vancouver, ISO, and other styles
