Theses on the topic "Optimisation mémoire"
Create an accurate citation in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 theses for your research on the topic "Optimisation mémoire".
Next to every source in the list of references there is an "Add to bibliography" button. Press it, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse theses on a wide variety of disciplines and organise your bibliography correctly.
Julié-Mollo, Catherine. "Optimisation de l'espace mémoire pour l'évaluation de grammaires attribuées". Orléans, 1989. http://www.theses.fr/1989ORLE2013.
Papaix, Caroline. "Optimisation des performances des mémoires EEPROM embarquées". Montpellier 2, 2002. http://www.theses.fr/2002MON20098.
Habhab, Radouane. "Optimisation d'architectures mémoires non-volatiles à piégeage de charges pour les applications microcontrôleur et mémoire autonome". Electronic Thesis or Diss., Université Côte d'Azur, 2023. http://www.theses.fr/2023COAZ4102.
The aim of this thesis work is to evaluate the programming/cycling/retention performance of a SONOS memory cell based on a highly innovative split-gate architecture developed by STMicroelectronics, the eSTM™ (embedded Select in Trench Memory). Firstly, we explain the realization of this SONOS memory, which is based on a process-step modification of the floating-gate eSTM™ memory, carried out without additional cost. Secondly, we investigate the most efficient program and erase mechanisms for this memory, which also leads us to propose a new SONOS memory architecture. Thirdly, we electrically characterize the P/E operations of the SONOS eSTM™ cell for the two available architectures: dual gate and overlap. For the dual-gate memory, both memory cells on either side of the selection transistor have their own "ONO/control gate" stack. For the overlap memory, the ONO layer is common to both memory cells. Even though this layer is shared, the information storage in the ONO is localized only under the relevant control gate, due to the discrete nature of charge trapping. The mechanism implemented for write and erase operations is hot-carrier injection, and we detail the optimization of the drain and select-gate biases (different for the two architectures), which define the written and erased threshold voltages. We then perform endurance tests up to one million cycles for both architectures. Finally, we conduct a study on retention and charge pumping to assess the oxide quality at the interface of our cells. In a fourth phase, we seek to better understand the operation of the memory transistor and the variability of the eSTM™ using TCAD simulations and electrical measurements on structures with various geometries.
Le Bouder, Gabriel. "Optimisation de la mémoire pour les algorithmes distribués auto-stabilisants". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS002.
Self-stabilization is a suitable paradigm for distributed systems, which are particularly prone to transient faults. Errors such as memory or message corruption, or the breaking of a communication link, can put the system in an inconsistent state. A protocol is self-stabilizing if, whatever the initial state of the system, it guarantees a return to normal behavior in finite time. Several constraints apply to algorithms designed for distributed systems; asynchrony is one emblematic example. With the development of networks of connected, autonomous devices, it also becomes crucial to design algorithms with low energy consumption that do not require much in terms of resources. One way to address these problems is to reduce the size of the messages exchanged between the nodes of the network. This thesis focuses on the memory optimization of communication for self-stabilizing distributed algorithms. We establish several negative results that prove the impossibility of solving some problems under a certain limit on the size of the exchanged messages, by showing that unique identifiers in the network cannot be fully exploited below that minimal size. These results are generic and may apply to numerous distributed problems. Secondly, we propose particularly memory-efficient algorithms for two fundamental problems in distributed systems: termination detection and token circulation.
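Token circulation, one of the two problems mentioned in this abstract, has a classic self-stabilizing formulation due to Dijkstra (his K-state token ring, shown here as background, not as the thesis's own algorithm). A minimal Python sketch illustrates the defining property: from an arbitrary corrupted state, the ring converges to exactly one circulating token.

```python
def has_token(states, i):
    """Machine i is privileged (holds the token) under Dijkstra's K-state rule."""
    if i == 0:
        return states[0] == states[-1]
    return states[i] != states[i - 1]

def step(states, K):
    """One sweep: fire each privileged machine in turn (a legal central-daemon schedule)."""
    s = list(states)
    for i in range(len(s)):
        if has_token(s, i):
            s[i] = (s[0] + 1) % K if i == 0 else s[i - 1]
    return s

def count_tokens(states):
    return sum(has_token(states, i) for i in range(len(states)))

states = [3, 1, 4, 1, 5]   # arbitrary (corrupted) initial state, n = 5 machines
K = 7                      # K > n guarantees stabilization
for _ in range(20):
    states = step(states, K)
assert count_tokens(states) == 1   # exactly one token survives
```

Each machine needs ⌈log₂ K⌉ bits of state here; lower bounds like those established in the thesis ask how far such per-node memory can be shrunk.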
Fraboulet, Antoine. "Optimisation de la mémoire et de la consommation des systèmes multimédia embarqués". Lyon, INSA, 2001. http://theses.insa-lyon.fr/publication/2001ISAL0054/these.pdf.
The development of technologies and tools for software compilation and automatic hardware synthesis now makes it possible to design jointly (co-design) electronic systems integrated on a single silicon chip, called "Systems on Chip". In their embedded versions, these systems must answer specific constraints of area, speed and power consumption. Moreover, the ever-increasing capacities of these systems make it possible today to develop complex applications such as multimedia ones. These multimedia applications work, amongst other things, on large images and signals; they generate large memory requirements and data transfers handled by nested loops. It is thus necessary to concentrate on memory optimizations when designing such applications in the embedded world. Two means of action are generally used: the choice of a dedicated memory architecture (memory hierarchy and caches) and the adequacy of the code describing the application to the generated architecture. We develop the second axis of memory optimization: how to automatically transform the implementation code, particularly nested loops, to minimize data transfers (a large consumer of energy) and memory size (a large consumer of area and energy).
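A toy illustration (not the thesis's transformation engine) of the kind of loop rewriting involved: fusing two loops removes a full-size temporary array, so each element makes one round trip to memory instead of two.

```python
def separate(a):
    # Version 1: two passes with a full-size temporary array;
    # every element is written to and read back from memory twice.
    tmp = [x * 2 for x in a]
    return [x + 1 for x in tmp]

def fused(a):
    # Version 2: the two loops fused; the temporary disappears and
    # each element is read once and written once.
    return [x * 2 + 1 for x in a]

data = list(range(8))
assert separate(data) == fused(data) == [1, 3, 5, 7, 9, 11, 13, 15]
```

Both versions compute the same result; only the memory traffic and the footprint of the intermediate storage differ, which is exactly the quantity such transformations target.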
Ninin, Jordan. "Optimisation Globale basée sur l'Analyse d'Intervalles : Relaxation Affine et Limitation de la Mémoire". Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2010. http://tel.archives-ouvertes.fr/tel-00580651.
Gamoudi, Oussama. "Optimisation adaptative appliquée au préchargement de données". Paris 6, 2012. http://www.theses.fr/2012PA066192.
Data prefetching is an effective way to bridge the increasing performance gap between processor and memory. Prefetching can improve performance, but it has side effects which may lead to no performance improvement while increasing memory pressure, or even to performance degradation. Adaptive prefetching aims at reducing the negative effects of prefetching while keeping its advantages. This work proposes an adaptive prefetching method based on runtime activity, which corresponds to the processor and memory activities retrieved by hardware counters, to predict prefetch efficiency. Our approach highlights and relies on the correlation between prefetch effects and runtime activity. Our method learns this correlation throughout the execution to predict prefetch efficiency and filter out prefetches predicted to be inefficient. Experimental results show that the proposed filter is able to cancel the negative impact of prefetching when it is unprofitable, while keeping the performance improvement due to prefetching when it is beneficial. Our filter works similarly well when several threads are running simultaneously, which shows that runtime activity enables an efficient adaptation of prefetching by providing information on the behaviors and interactions of running applications.
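A minimal sketch of the filtering idea: bucket the hardware-counter activity, keep a running average of the observed prefetch benefit per bucket, and drop prefetches whose bucket has a negative history. The bucket names, the benefit metric and the decision rule are illustrative assumptions, not the thesis's actual predictor.

```python
class PrefetchFilter:
    """Learn, per runtime-activity bucket, whether prefetching paid off,
    and filter out prefetches predicted to be inefficient."""

    def __init__(self):
        self.stats = {}  # bucket -> [benefit_sum, count]

    def observe(self, bucket, benefit):
        s = self.stats.setdefault(bucket, [0.0, 0])
        s[0] += benefit
        s[1] += 1

    def allow(self, bucket):
        s = self.stats.get(bucket)
        if s is None or s[1] == 0:
            return True              # no history yet: keep prefetching
        return s[0] / s[1] > 0.0     # prefetch only if the average benefit is positive

f = PrefetchFilter()
for _ in range(5):
    f.observe("high_mem_pressure", -1.0)  # prefetches hurt under pressure
    f.observe("low_mem_pressure", +2.0)   # prefetches helped otherwise
assert not f.allow("high_mem_pressure")
assert f.allow("low_mem_pressure")
```

The same structure works online: observations keep arriving during execution, so the filter adapts if an application changes phase.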
Barreteau, Michel. "Optimisation du placement des scans et des réductions pour machines parallèles à mémoire répartie". Versailles-St Quentin en Yvelines, 1998. http://www.theses.fr/1998VERS0001.
Novytskyi, Dimitri. "Méthodes géométriques pour la mémoire et l'apprentissage". Phd thesis, Université Paul Sabatier - Toulouse III, 2007. http://tel.archives-ouvertes.fr/tel-00285602.
Texto completoZuckerman, Stéphane. "Méthodologie de mesure et optimisation de l'utilisation des hiérarchies mémoire dans les systèmes multicoeur". Versailles-St Quentin en Yvelines, 2010. http://www.theses.fr/2010VERS0063.
Microprocessors embedding multicore technology are nowadays the building blocks of computation nodes for supercomputers. To the classic instruction-level parallelism found in every modern microprocessor, task-level parallelism is now added. The most critical shared resource, memory, becomes even more critical with the advent of caches shared between multiple cores. This dissertation proposes methodological leads to determine where the bottlenecks are situated in a system built on multicore chips, and characterizes some problems specific to multicore, in particular contention in the memory hierarchy: RAM and the last level of cache. The presence of prefetch mechanisms can also lead to cacheline stealing, which can deeply hurt performance in compute- and memory-intensive applications manipulating complex data structures such as multidimensional arrays. Finally, based on building blocks for matrix computations optimized for unicore execution, we propose a methodology to determine the best partitioning to get acceptable performance in a multicore environment.
Agharben, El Amine. "Optimisation et réduction de la variabilité d’une nouvelle architecture mémoire non volatile ultra basse consommation". Thesis, Lyon, 2017. http://www.theses.fr/2017LYSEM013.
The global semiconductor market is experiencing steady growth due to the development of consumer electronics and the rise of the non-volatile memory market. The importance of these memory products has been accentuated since the beginning of the 2000s by the introduction of nomadic products such as smartphones or, more recently, the Internet of Things. Thanks to its performance and reliability, Flash technology is currently the standard for non-volatile memory. However, the high cost of microelectronic equipment makes it impossible to amortize it over a single technology generation. This encourages industry to adapt equipment from an older generation to more demanding manufacturing processes. This strategy is not without consequences for the spread of the physical characteristics (geometric dimensions, thicknesses...) and electrical characteristics (currents, voltages...) of the devices. In this context, the subject of my thesis is "Optimization and reduction of the variability of a new ultra-low-power non-volatile memory architecture". This study aims to continue the work begun by STMicroelectronics on the improvement, study and implementation of Run-to-Run (R2R) control loops on a new ultra-low-power memory cell. In order to ensure the implementation of a relevant regulation, it is essential to be able to simulate the influence of the manufacturing process on the electrical behavior of the cells, using statistical tools as well as electrical characterization.
Novytskyy, Dmytro. "Méthodes géométriques pour la mémoire et l'apprentissage". Toulouse 3, 2007. http://www.theses.fr/2007TOU30152.
This thesis is devoted to geometric methods in optimization, learning and neural networks. In many problems of (supervised and unsupervised) learning, pattern recognition, and clustering, there is a need to take into account the internal (intrinsic) structure of the underlying space, which is not necessarily Euclidean. For Riemannian manifolds we construct computational algorithms for the Newton method, conjugate-gradient methods, and some non-smooth optimization methods like the r-algorithm. For this purpose we develop methods for geodesic calculation in submanifolds based on Hamilton equations and symplectic integration. We then construct a new type of neural associative memory capable of unsupervised learning and clustering. Its learning is based on generalized averaging over Grassmann manifolds. A further extension of this memory involves implicit space transformation and kernel machines. We also consider geometric algorithms for signal processing and adaptive filtering. The proposed methods are tested on academic examples as well as real-life problems of image recognition and signal processing. The application of the proposed neural networks is demonstrated on a complete real-life project of chemical image recognition (electronic nose).
Laga, Arezki. "Optimisation des performance des logiciels de traitement de données sur les périphériques de stockage SSD". Thesis, Brest, 2018. http://www.theses.fr/2018BRES0087/document.
The growing volume of data poses a real challenge to data processing software like DBMSs (DataBase Management Systems) and to data storage infrastructure. New technologies have emerged to face the data-volume challenge. In this thesis we considered the new emerging external memories, namely flash-based storage devices known as SSDs (Solid State Drives). SSD storage devices offer a performance gain compared to traditional magnetic devices. However, SSDs exhibit a new performance model that calls for I/O cost optimization in data processing and management algorithms. We proposed in this thesis an I/O cost model to evaluate data processing algorithms. This model mainly considers SSD I/O performance and data distribution. We also proposed a new external sorting algorithm, MONTRES, which includes optimizations to reduce the I/O cost when the volume of data is greater than the allocated memory space by an order of magnitude. We finally proposed a data prefetching mechanism, Lynx, which uses a machine learning technique to predict and anticipate future accesses to external memory.
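A rough sketch of what such an I/O cost model looks like, with textbook assumptions (asymmetric per-page read/write costs, and the classical pass count of external merge sort) standing in for the thesis's actual model and for MONTRES.

```python
import math

def io_cost(n_reads, n_writes, read_cost=1.0, write_cost=3.0):
    """Flash SSDs read faster than they write: the model weighs the two
    kinds of page accesses differently (illustrative constants)."""
    return n_reads * read_cost + n_writes * write_cost

def sort_passes(n_pages, mem_pages):
    """Passes of a classical external merge sort: one run-formation pass,
    then (mem_pages - 1)-way merge passes until a single run remains."""
    runs = math.ceil(n_pages / mem_pages)
    passes = 1
    while runs > 1:
        runs = math.ceil(runs / (mem_pages - 1))
        passes += 1
    return passes

# 1000 data pages, 11 pages of memory: 91 initial runs, then two 10-way merge passes.
p = sort_passes(1000, 11)
assert p == 3
# every pass reads and writes all 1000 pages once
assert io_cost(p * 1000, p * 1000) == 12000.0
```

An algorithm that saves writes at the expense of a few extra reads wins under this model, which is precisely the kind of trade-off SSD-aware sorting exploits.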
Chekaf, Mustapha. "Capacité de la mémoire de travail et son optimisation par la compression de l'information". Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCC010/document.
Simple span tasks are commonly used to measure short-term memory, while complex span tasks are usually considered typical measures of working memory. Because complex span tasks were designed to include a concurrent task, the average span is usually lower (4 ± 1 items) than in simple span tasks (7 ± 2 items). One possible reason for measuring higher spans in simple span tasks is that participants can take advantage of the spare time between stimuli to detect and recode regularities in the stimulus series (in the absence of a concurrent task), and such regularities can be used to pack a few stimuli into 4 ± 1 chunks. Our main hypothesis was that information compression in immediate memory is an excellent indicator for studying the relationship between immediate-memory capacity and fluid intelligence. The idea is that both depend on the efficiency of information processing and, more precisely, on the interaction between storage and processing. We developed various span tasks measuring a chunking capacity, in which the compressibility of the memoranda was estimated using different algorithmic complexity metrics. The results showed that compressibility can be used to predict working-memory performance, and that fluid intelligence is well predicted by the ability to compress information. We conclude that the ability to compress information in working memory is the reason why both manipulation and retention of information are linked to intelligence.
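One crude way to estimate the compressibility of a memory list, standing in here for the algorithmic-complexity metrics used in the thesis, is the ratio of raw to zlib-compressed length: regular, chunkable series score higher than irregular ones.

```python
import zlib

def compressibility(seq):
    """Ratio of raw length to zlib-compressed length: a rough proxy
    for the algorithmic complexity of the memoranda."""
    raw = "".join(map(str, seq)).encode()
    return len(raw) / len(zlib.compress(raw, 9))

regular = [1, 2, 3, 4] * 6   # a chunkable series of 24 digits
irregular = [7, 1, 4, 9, 2, 8, 5, 3, 6, 0, 1, 7,
             3, 9, 4, 2, 8, 6, 0, 5, 7, 2, 9, 1]  # same length, no obvious pattern
assert compressibility(regular) > compressibility(irregular)
```

Under the thesis's hypothesis, a participant should pack `regular` into far fewer chunks than `irregular`, so the measured span on such lists should track this ratio.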
Guermouche, Abdou. "Étude et optimisation du comportement mémoire dans les méthodes parallèles de factorisation de matrices creuses". Lyon, École normale supérieure (sciences), 2004. http://www.theses.fr/2004ENSL0284.
Direct methods for solving sparse linear systems are known for their large memory requirements, which can be the limiting factor when solving large systems. The work done during this thesis concerns the study and optimization of the memory behaviour of a sparse direct method, the multifrontal method, in both the sequential and the parallel case. Optimal memory-minimization algorithms are proposed for the sequential case. For the parallel case, we introduce new scheduling strategies aimed at improving the memory behaviour of the method, and then extend these approaches to obtain good performance while keeping a good memory behaviour. In addition, when the data to be treated cannot fit into memory, out-of-core factorization schemes have to be designed. To be efficient, such approaches require overlapping I/O operations with computations and reusing the data sets already in memory to reduce the amount of I/O. Another part of the work presented in this thesis therefore concerns the design and study of implicit out-of-core techniques well adapted to the memory access pattern of the multifrontal method. These techniques are based on a modification of the standard paging policies of the operating system using a low-level tool (MMUM&MMUSSEL).
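The sequential memory question can be illustrated with a simplified model of a postorder traversal of the elimination tree, using the classical rule of visiting children in decreasing order of (subtree peak − size of the block they leave behind). This sketch is a stand-in for the thesis's actual cost model, with an invented tiny tree.

```python
def peak_memory(tree, store, root):
    """Peak memory of a postorder traversal; store[v] is the block that
    node v leaves in memory for its parent (simplified model)."""
    sub = [(peak_memory(tree, store, c), store[c]) for c in tree.get(root, [])]
    sub.sort(key=lambda ps: ps[0] - ps[1], reverse=True)  # classical child-ordering rule
    held = peak = 0
    for p, s in sub:
        peak = max(peak, held + p)  # a child's subtree runs on top of blocks already held
        held += s                   # the child leaves its contribution block behind
    return max(peak, held + store[root])  # finally assemble the root's own block

tree = {"r": ["a", "b"], "a": ["c"]}   # a small elimination tree
store = {"r": 1, "a": 1, "b": 4, "c": 9}
assert peak_memory(tree, store, "r") == 10  # visiting "b" before "a" would need 14
```

The point of the ordering rule is visible in the comment: the memory-hungry subtree goes first, while nothing else is held.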
Puma, Sébastien. "Optimisation des apprentissages : modèles et mesures de la charge cognitive". Thesis, Toulouse 2, 2016. http://www.theses.fr/2016TOU20058/document.
Learning allows us to gain the knowledge necessary to adapt to the world. Cognitive load theory (CLT) takes into consideration the cognitive resources invested during school learning. However, two main limitations can be identified, one theoretical and one methodological. From a theoretical perspective, CLT invokes working memory (WM) to describe the cognitive resources used during learning, but its models do not take time into account. The other limit is methodological: CLT does not offer measures of cognitive load that are both reliable and dynamic. Taking these limitations into consideration, we suggest the use of physiological measurement and of a new WM model, the TBRS (Time-Based Resource Sharing) model. Physiological measurement is a means to analyze the temporal variations implied by cognitive load, while the TBRS model takes into account the temporal variation of attentional-focus allocation. However, the TBRS model had not yet been used with meaningful items, which can be gathered into chunks. Thus, the aim of the present work is to study the benefits of using physiological measurement and the TBRS model within CLT. To address the question of cognitive-load measurement, a first experiment used a task included in the recruitment selection process of the ENAC (École Nationale d'Aviation Civile). During the experiment, cerebral activity (EEG) and eye movements (eye-tracking) were recorded. Another series of four experiments addressed the use of the TBRS model in CLT, beginning by replicating a previous study using the TBRS model (exp. 2 & 3) while replacing the items to be held in memory with items that could be chunked; two further experiments extended these results.
Finally, a sixth experiment used physiological measures to assess cognitive-load variations while participants performed a protocol similar to the previous experiments. The results of these six experiments show that the TBRS model and physiological measurement are consistent with CLT and complement its findings.
Ben Fradj, Hanene. "Optimisation de l'énergie dans une architecture mémoire multi-bancs pour des applications multi-tâches temps réel". Phd thesis, Université de Nice Sophia-Antipolis, 2006. http://tel.archives-ouvertes.fr/tel-00192473.
Texto completoDemers, Vincent. "Optimisation des propriétés fonctionnelles des alliages à mémoire de forme suite à l'application de traitements thermomécaniques". Mémoire, École de technologie supérieure, 2009. http://espace.etsmtl.ca/36/1/DEMERS_Vincent.pdf.
Texto completoEisenbeis, Christine. "Optimisation automatique de programmes sur "Array-Processors"". Paris 6, 1986. http://www.theses.fr/1986PA066181.
Texto completoCabout, Thomas. "Optimisation technologique et caractérisation électrique de mémoires résistives OxRRAM pour applications basse consommation". Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4778/document.
Today, the non-volatile memory market is dominated by charge-storage-based technologies. However, this technology is reaching its scaling limits, and solutions to continue miniaturization face major technological roadblocks. Thus, to continue scaling for advanced nodes, new non-volatile solutions are being developed. Among them, oxide-based resistive memories (OxRRAM) are intensively studied. Based on the resistance switching of a Metal/Insulator/Metal stack, this technology shows promising performance and scaling perspectives, but it is not mature and still suffers from a lack of physical understanding of the switching mechanisms. The results presented in this thesis aim to contribute to the development of OxRRAM technology. In a first part, an analysis of the different materials constituting the RRAM allows us to compare unipolar and bipolar switching modes and to select the bipolar one, which benefits from lower programming voltages and better performances. The identified memory stack, TiN/HfO2/Ti, is then integrated in a 1T1R structure in order to evaluate the performances and limitations of this structure. The operation of the 1T1R structure has been carefully studied, and good endurance and retention performances are demonstrated. Finally, in the last part, the thermal activation of the switching characteristics has been studied in order to provide some understanding of the underlying physical mechanisms. The reset operation is found to be triggered by the local temperature, while retention performance depends on the set temperature.
Dupuis, Xavier. "Contrôle optimal d'équations différentielles avec - ou sans - mémoire". Phd thesis, Ecole Polytechnique X, 2013. http://tel.archives-ouvertes.fr/tel-00914246.
Texto completoCoativy, Gildas. "Optimisation des propriétés de mémoire de forme de l’amidon : rôle des procédés thermomécaniques et apport de l’introduction de nanocharges". Nantes, 2013. http://archive.bu.univ-nantes.fr/pollux/show.action?id=dd59b6e5-214f-4120-a9fc-fc73e3210d86.
Starch has shape-memory properties: after hot forming and quenching, it is able to recover its initial shape by crossing the glass transition, through heating and/or moisture uptake. The aim of the present work is to improve the material's thermomechanical performance during shape recovery. Two approaches were studied: the optimization of the hot-forming process, and the introduction of lamellar nanofillers (montmorillonites) into the matrix by twin-screw extrusion. Model processes and specific structural and thermomechanical characterization methods made it possible to optimize the elaboration process and to better understand the shape-memory and stress-relaxation mechanisms. Composites containing 1 to 10% of nanofillers were processed using a twin-screw microcompounder, allowing the extrusion process to be simulated. The best dispersion states were obtained without the addition of a surfactant; indeed, aggregation of the nanoparticles was induced by the cationic starch used. The obtained bionanocomposites showed a significant increase in mechanical performance, without any decrease of the shape-memory properties and with an improvement of the relaxation stress. However, the shape-relaxation kinetics appear to be slowed down, which could be related to a modification of the macromolecular dynamics observed in the presence of the nanofiller by calorimetry and dynamic mechanical thermal analysis.
Beyler, Jean-Christophe. "Dynamic software memory access optimization : Dynamic low-cost reduction of memory latencies by binary analyses and transformations". Université Louis Pasteur (Strasbourg) (1971-2008), 2007. http://www.theses.fr/2007STR13171.
This thesis concerns the development of dynamic approaches for controlling the hardware/software couple. More precisely, the work presented here has the main goal of minimizing program execution times on mono- or multi-processor architectures by anticipating memory accesses through dynamic prefetching of useful data into cache memory, in a way that is entirely transparent to the user. The developed systems consist of a dynamic analysis phase, where memory-access latencies are measured; a binary-transformation phase, where data-prefetching instructions are inserted into the binary code for the accesses evaluated as worth optimizing; a dynamic analysis phase of the optimizations' efficiency; and finally a cancellation phase for transformations that have been evaluated as inefficient. Every phase applies individually to every memory access, and may apply several times if memory accesses have behaviors that vary during the execution of the target software.
Béra, Clément. "Sista : a metacircular architecture for runtime optimisation persistence". Thesis, Lille 1, 2017. http://www.theses.fr/2017LIL10071/document.
Most high-level programming languages run on top of a virtual machine (VM) to abstract away the underlying hardware. To reach high performance, the VM typically relies on an optimising just-in-time compiler (JIT), which speculates on program behavior based on the first runs to generate efficient machine code at runtime and speed up program execution. As multiple runs are required to speculate correctly on program behavior, such a VM needs a certain amount of time at start-up to reach peak performance. The optimising JIT itself is usually compiled ahead-of-time to executable code as part of the VM. This dissertation proposes Sista, an architecture for an optimising JIT in which the optimised state of the VM can be persisted across multiple VM start-ups and the optimising JIT runs in the same runtime as the program it executes. To do so, the optimising JIT is split in two parts. One part is high-level: it performs optimisations specific to the programming language run by the VM and is written in a metacircular style. Staying away from low-level details, this part can be read, edited and debugged while the program is running, using the standard tool set of the programming language executed by the VM. The second part is low-level: it performs machine-specific optimisations and is compiled ahead-of-time to executable code as part of the VM. The two parts of the JIT share the code to optimise through a well-defined intermediate representation. This representation is machine-independent and can be persisted across multiple VM start-ups, allowing the VM to reach peak performance very quickly. To validate the architecture, the dissertation includes the description of an implementation on top of Pharo Smalltalk and its VM. The implementation is able to run a large set of benchmarks, from large application benchmarks provided by industrial users to micro-benchmarks used to measure the performance of specific code patterns.
The optimising JIT is implemented according to the proposed architecture and shows significant speed-ups (up to 5x) over the current production VM. In addition, the large benchmarks show that peak performance can be reached almost immediately after VM start-up if the VM can reuse the optimised state persisted from another run.
Bastos Castro, Márcio. "Optimisation de la performance des applications de mémoire transactionnelle sur des plates-formes multicoeurs : une approche basée sur l'apprentissage automatique". Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00766983.
Texto completoCastro, Márcio. "Optimisation de la performance des applications de mémoire transactionnelle sur des plates-formes multicoeurs : une approche basée sur l'apprentissage automatique". Thesis, Grenoble, 2012. http://www.theses.fr/2012GRENM074/document.
Multicore processors are now the mainstream approach to delivering higher performance to parallel applications. In order to develop efficient parallel applications for those platforms, developers must take care of several aspects, ranging from the architectural level to the application level. In this context, Transactional Memory (TM) appears as a programmer-friendly alternative to traditional lock-based concurrency. It allows programmers to write parallel code as transactions, which are guaranteed to execute atomically and in isolation regardless of possible data races. At runtime, transactions are executed speculatively, and conflicts are solved by re-executing the conflicting transactions. Although TM intends to simplify concurrent programming, the best performance can only be obtained if the underlying runtime system matches the application and platform characteristics. The contributions of this thesis concern the analysis and improvement of the performance of TM applications based on Software Transactional Memory (STM) on multicore platforms. Firstly, we show that the TM model makes the performance analysis of TM applications a daunting task. To tackle this problem, we propose a generic and portable tracing mechanism that gathers specific TM events, allowing us to better understand the performance obtained. The traced data can be used, for instance, to discover whether the TM application presents points of contention or whether the contention is spread out over the whole execution. Our tracing mechanism can be used with different TM applications and STM systems without any changes to their original source code. Secondly, we address the performance improvement of TM applications on multicores. We point out that thread mapping is very important for TM applications and can considerably improve the overall performance achieved.
To deal with the large diversity of TM applications, STM systems and multicore platforms, we propose an approach based on machine learning to automatically predict suitable thread-mapping strategies for TM applications. During a prior learning phase, we profile several TM applications running on different STM systems to construct a predictor. We then use the predictor to perform static or dynamic thread mapping in a state-of-the-art STM system, making it transparent to the users. Finally, we perform an experimental evaluation and show that the static approach is fairly accurate and can improve the performance of a set of TM applications by up to 18%. Concerning the dynamic approach, we show that it can detect different phase changes during the execution of TM applications composed of diverse workloads, predicting thread mappings adapted to each phase. On those applications, we achieve performance improvements of up to 31% in comparison to the best static strategy.
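The prediction step can be sketched as a nearest-neighbour lookup over profiled runs. The features (abort rate, mean transaction length), strategy names and training profiles below are invented for illustration; the thesis uses a proper machine-learning predictor.

```python
# Training profiles gathered in a prior learning phase:
# (abort_rate, mean_tx_length) -> best thread-mapping strategy observed.
profiles = [
    ((0.05, 10), "scatter"),
    ((0.10, 12), "scatter"),
    ((0.70, 90), "compact"),
    ((0.80, 110), "compact"),
]

def predict(features, profiles):
    """1-nearest-neighbour over profiled applications (squared distance)."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(profiles, key=lambda p: d2(features, p[0]))[1]

# A low-contention workload maps like the low-contention profiles, and vice versa.
assert predict((0.08, 11), profiles) == "scatter"
assert predict((0.75, 100), profiles) == "compact"
```

A dynamic variant would recompute the features over a sliding window and re-run `predict` at phase boundaries, which mirrors the phase-detection behaviour described above.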
Pinaud, Bruno. "Contribution à la visualisation des connaissances par des graphes dans une mémoire d'entreprise : application sur le serveur Atanor". Phd thesis, Université de Nantes, 2006. http://tel.archives-ouvertes.fr/tel-00335934.
Moving to a graph model raises the problem of its visual representation. The drawings must remain readable and understandable by users. This notably translates into the respect of aesthetic criteria, which allow the problem to be modelled as a combinatorial optimization problem consisting in finding an optimal order of the vertices within each level. To solve this problem, we developed a genetic algorithm with two particular features: two specific crossover operators and a hybridization with a local search. Experiments show that, for graphs of standard size, the genetic algorithm gives better results than the other methods we know of. The comparison of knowledge-representation models on an industrial example shows that, besides making reading easier, Graph'Atanor makes it easy to follow users' traces and to highlight critical vertices.
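The combinatorial objective, reordering the vertices of a level to reduce edge crossings, can be sketched with the classical barycenter heuristic, shown here as a simple baseline for the same problem (the thesis itself develops a hybrid genetic algorithm):

```python
def crossings(order, edges):
    """Count crossings between a fixed top layer (vertices at positions
    0, 1, 2, ...) and a bottom layer ordered by `order`."""
    pos = {v: i for i, v in enumerate(order)}
    es = [(t, pos[b]) for t, b in edges]
    return sum(1
               for i in range(len(es)) for j in range(i + 1, len(es))
               if (es[i][0] - es[j][0]) * (es[i][1] - es[j][1]) < 0)

def barycenter_order(vertices, edges):
    """Order each bottom vertex by the mean position of its top neighbours."""
    nbrs = {v: [t for t, b in edges if b == v] for v in vertices}
    return sorted(vertices, key=lambda v: sum(nbrs[v]) / len(nbrs[v]))

edges = [(0, "c"), (1, "b"), (2, "a")]        # top layer 0,1,2; bottom layer a,b,c
assert crossings(["a", "b", "c"], edges) == 3  # initial order: every pair crosses
better = barycenter_order(["a", "b", "c"], edges)
assert crossings(better, edges) == 0           # barycenter untangles this instance
```

A genetic algorithm like the one described above explores the same search space of per-level orderings, with `crossings` (plus other aesthetic criteria) as the fitness to minimize.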
Roux, Olivier. "La Mémoire dans les algorithmes à colonie de fourmis : applications à l'optimisation et à la programmation automatique". Phd thesis, Littoral, 2001. http://www.theses.fr/2001DUNK0063.
Texto completoThis thesis presents a meta-heuristic based on the behaviour of natural ants looking for food. These heuristics are known as Ant Colony Optimization (ACO). We compare the ACO paradigm with other well-known heuristics with regard to the use of memory. Then, we introduce two applications of ACO algorithms. The first application, ANTabu, is an ACO scheme for the QAP. ANTabu combines the ants' paradigm with a robust local search technique (tabu search). A parallel model developed for ANTabu is introduced. The second application lies in the machine-learning field. This scheme, called AP (Automatic Programming), applies the cooperative behaviour of ants to automatically build programs. This method is then compared to the classical automatic generation of programs: Genetic Programming.
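The role of pheromone as the colony's shared memory, central to the comparison above, can be sketched with a minimal position-based ACO for permutation problems. This is illustrative only; ANTabu itself targets the QAP and hybridizes with tabu search:

```python
import random

# Minimal ACO sketch in which the pheromone matrix acts as the colony's
# shared memory (illustrative; no tabu-search local phase shown here).
def aco_min_perm(cost, n, ants=10, iters=50, rho=0.1, seed=0):
    random.seed(seed)
    tau = [[1.0] * n for _ in range(n)]  # pheromone: tau[position][item]
    best, best_c = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            perm, avail = [], list(range(n))
            for pos in range(n):
                weights = [tau[pos][i] for i in avail]
                perm.append(avail.pop(random.choices(range(len(avail)), weights)[0]))
            c = cost(perm)
            if c < best_c:
                best, best_c = perm[:], c
        for pos in range(n):  # evaporate everywhere, then reinforce the best
            for i in range(n):
                tau[pos][i] *= 1 - rho
            tau[pos][best[pos]] += 1.0 / (1 + best_c)
    return best, best_c
```

Evaporation (`rho`) makes the memory forget old trails, while reinforcement biases future ants toward the best solution found so far.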
Campigotto, Romain. "Algorithmes d'approximation à mémoire limitée pour le traitement de grands graphes : le problème du Vertex Cover". Phd thesis, Université d'Evry-Val d'Essonne, 2011. http://tel.archives-ouvertes.fr/tel-00677774.
Texto completoBouzidi, Mohamed Chérif. "Étude d'une Décharge à Barrière Diélectrique (DBD) homogène dans l'azote à pression atmosphérique : Effet mémoire et Optimisation du transfert de Puissance". Phd thesis, Institut National Polytechnique de Toulouse - INPT, 2013. http://tel.archives-ouvertes.fr/tel-00925594.
Texto completoCarpov, Sergiu. "Ordonnancement pour la gestion de la mémoire et du préchargement dans les architectures multicoeurs embarquées". Phd thesis, Université de Technologie de Compiègne, 2011. http://tel.archives-ouvertes.fr/tel-00637066.
Texto completoTaillefer, Edith. "Méthodes d'optimisation d'ordre zéro avec mémoire en grande dimension : application à la compensation des aubes de compresseurs et de turbines". Toulouse 3, 2008. http://thesesups.ups-tlse.fr/205/.
Texto completoThis thesis presents the results of a collaboration between Snecma and IMT (Institut de Mathématiques de Toulouse). New efficient optimisation methods were developed at IMT and then applied to blade design at the Technical Department of Snecma. In many industrial applications, the gradient of a cost function is not available, and when it is, its domain of validity is very restricted. This has led to the recent development of numerous zero-order optimisation methods. Two numerical tools for large-dimension optimisation without derivative computation are discussed here. The main idea is to use the cost function evaluations performed during the optimisation process to build a surrogate model. The addition of a new point during the optimisation process must reach a double target: progress towards the optimum and improve the approximation of the cost function for the next step. Among all approximation techniques, we focus on those which easily capture constant behaviour, since other methods introduce false local minima. Consequently we retain two methods: neural networks and sparse grids. Sparse grids in particular are a promising new approach for various scientific topics thanks to their adaptive and hierarchical properties. The efficiency of these methods is proved on analytical functions and confirmed on industrial cases, in particular for the bend momentum balance of compressor and turbine blades.
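The evaluate-memorize-model loop described above can be sketched in one dimension. The piecewise-linear surrogate below is a hypothetical stand-in for the neural-network and sparse-grid models actually studied:

```python
import random

# Hedged sketch of memory-based zero-order optimization: every true
# evaluation is memorized and reused to build a cheap surrogate that
# proposes the next point to evaluate.
def zero_order_opt(f, lo, hi, budget=40, seed=0):
    random.seed(seed)
    memory = [(x, f(x)) for x in (lo, (lo + hi) / 2, hi)]  # initial design
    for _ in range(budget):
        memory.sort()

        def surrogate(x):
            # linear interpolation between memorized evaluations
            for (x0, y0), (x1, y1) in zip(memory, memory[1:]):
                if x0 <= x <= x1:
                    t = (x - x0) / (x1 - x0) if x1 > x0 else 0.0
                    return y0 + t * (y1 - y0)
            return float("inf")

        # score random probes on the cheap surrogate; truly evaluate the best
        cand = min((random.uniform(lo, hi) for _ in range(50)), key=surrogate)
        memory.append((cand, f(cand)))
    return min(memory, key=lambda p: p[1])[0]
```

Each true evaluation both advances the search and refines the model, which is the "double target" mentioned in the abstract.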
Assif, Safa. "Fiabilité et optimisation des structures mécaniques à paramètres incertains : application aux cartes électroniques". Phd thesis, INSA de Rouen, 2013. http://tel.archives-ouvertes.fr/tel-00950354.
Texto completoTillie, Luc. "Etude et optimisation de la stabilité thermique et de la tenue en température de P-STT-MRAM pour des applications industrielles". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAT133.
Texto completoWith the amount of data increasing drastically over the last few decades, the need for new technological solutions has risen. One answer to this problem consists in improving current hardware with emerging Non-Volatile Memories (e-NVM). Among these new solutions, Magnetic Random Access Memory (MRAM) is gaining a lot of attention from the industrial market. With its supposedly unlimited endurance, high-speed switching, low-voltage operation and high data retention at room temperature, MRAM, especially Perpendicular Spin Transfer Torque MRAM (P-STT-MRAM), is seen as one of the best contenders for DRAM, SRAM and embedded Flash replacement. To be used in industrial applications, P-STT-MRAM has to meet a wide range of requirements in terms of data retention (e.g. 10 years) and high operating temperature (more than 200°C). However, as measuring high data retention directly is not practical, solutions have to be found to extract it quickly. This manuscript proposes and compares different thermal stability factor extraction protocols for P-STT-MRAM. The most suitable one is used to model the temperature and size dependence of this factor. Then, the temperature limits of P-STT-MRAM are characterized and different flavours of storage layers are matched with industrial applications. Finally, the dependence of the electrical parameters on an external magnetic field is studied and a linear magnetic sensor based on a P-STT-MRAM device is proposed.
Idrissi Aouad, Maha. "Conception d'algorithmes hybrides pour l'optimisation de l'énergie mémoire dans les systèmes embarqués et de fonctions multimodales". Thesis, Nancy 1, 2011. http://www.theses.fr/2011NAN10029/document.
Texto completoMemory is a major consumer of energy, a sensitive issue, especially in embedded systems. The global optimization of multimodal functions is also a difficult problem because of the large number of local optima of these functions. In this thesis, I present several new hybrid and distributed algorithms to solve these two optimization problems. These algorithms are compared with conventional methods from the literature and the results obtained are encouraging. On the one hand, these results show a reduction in memory energy consumption by about 76% to more than 98% on our benchmarks. On the other hand, in the case of global optimization of multimodal functions, our hybrid algorithms converge more often to the global optimum. Distributed and cooperative versions of these new hybrid algorithms are also proposed; they are faster than their respective sequential versions.
Saadane, Sofiane. "Algorithmes stochastiques pour l'apprentissage, l'optimisation et l'approximation du régime stationnaire". Thesis, Toulouse 3, 2016. http://www.theses.fr/2016TOU30203/document.
Texto completoIn this thesis, we study several stochastic algorithms with different purposes, which is why we begin this manuscript with historical results that define the framework of our work. We then study a bandit algorithm due to Narendra and Shapiro, whose objective was to determine, among several sources, which one is the most profitable without spending too much time on the wrong ones. Our goal is to understand the weaknesses of this algorithm in order to propose a procedure that is optimal for the quantity measuring the performance of a bandit algorithm, the regret. In our results, we propose an algorithm called NS over-penalized which attains a minimax regret bound. A second piece of work is to understand the convergence in law of this process. The particularity of the algorithm is that it converges in law toward a non-diffusive process, which makes the study more intricate than in the standard case. We use coupling techniques to study this process and give rates of convergence. The second work of this thesis falls within the scope of optimizing a function using a stochastic algorithm. We study a stochastic version of the so-called heavy ball method with friction. The particularity of this algorithm is that its dynamics is based on the whole past of the trajectory. The procedure relies on a memory term which dictates the behavior of the procedure through the form it takes. In our framework, two types of memory are investigated: polynomial and exponential. We start with general convergence results in the non-convex case. In the case of strongly convex functions, we provide upper bounds on the rate of convergence. Finally, a convergence in law result is given in the case of exponential memory. The third part concerns the McKean-Vlasov equations, first introduced by Anatoly Vlasov and first studied by Henry McKean in order to model the distribution function of plasma. Our objective is to propose a stochastic algorithm to approximate the invariant distribution of the McKean-Vlasov equation. Methods for diffusion processes (and some more general processes) are known, but the particularity of the McKean-Vlasov process is that it is strongly non-linear. Thus, we have to develop an alternative approach, and we introduce the notion of asymptotic pseudotrajectory in order to obtain an efficient procedure.
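The exponential-memory variant of the stochastic heavy-ball method discussed above can be sketched as gradient descent whose direction is an exponentially weighted average of past noisy gradients; the parameter values in this toy version are illustrative, not the thesis's.

```python
import random

# Sketch of stochastic heavy-ball dynamics with exponential memory: the
# update direction is an exponentially weighted average of past noisy
# gradients (i.e. momentum). Parameter values are illustrative.
def stochastic_heavy_ball(grad, x0, steps=2000, lr=0.01, beta=0.9,
                          noise=0.1, seed=0):
    random.seed(seed)
    x, m = x0, 0.0
    for _ in range(steps):
        g = grad(x) + random.gauss(0.0, noise)  # noisy gradient oracle
        m = beta * m + (1 - beta) * g           # exponential memory of the past
        x -= lr * m
    return x

# strongly convex example: f(x) = (x - 3)^2, so grad f(x) = 2 (x - 3)
x_star = stochastic_heavy_ball(lambda x: 2 * (x - 3.0), x0=0.0)
```

On this strongly convex quadratic the iterates settle near the minimizer, the regime in which the thesis establishes convergence rates.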
Devarenne, Isabelle. "Etudes en recherche locale adaptative pour l'optimisation combinatoire". Besançon, 2007. http://www.theses.fr/2007BESA2012.
Texto completoAll optimization methods have internal parameters that influence their performance. The challenge for users is to find a good adjustment for each problem. In recent years, an important part of research in combinatorial optimization has focused on the design of adaptive methods. The objective of this approach is to define processes that dynamically adapt the parameters of a method to each problem. In this context, this thesis focuses on memory and adaptation mechanisms in order to develop an Adaptive Local Search (ALS) method combining neighborhood extension and restriction mechanisms. The neighborhood extension is a procedure that detects blockages during the search by studying the history of the choices made by the method, in order to act on its behavior. The restriction mechanism is based on an adaptive tabu list to manage access to the variables. The resulting method has been applied to two problems: an academic one, the graph k-coloring problem, and a real one, frequency assignment in radio networks. Several variants of ALS were developed and compared to published results on both problems.
Zaourar, Lilia Koutchoukali. "Recherche opérationnelle et optimisation pour la conception testable de circuits intégrés complexes". Grenoble, 2010. http://www.theses.fr/2010GRENM055.
Texto completoThis thesis is a research contribution interfacing operations research and microelectronics. It considers the use of combinatorial optimization techniques for DFT (Design For Test) of Integrated Circuits (IC). With the growing complexity of current ICs, both quality and cost of manufacturing testing have become important parameters in the semiconductor industry. To ensure proper functioning of the IC, the testing step is more than ever a crucial and difficult step in the overall IC manufacturing process. To answer market requirements, chip testing should be fast and effective in uncovering defects. For this, it becomes essential to apprehend the test phase from the design steps of the IC. In this context, DFT techniques and methodologies aim at improving the testability of ICs. In previous research works, several optimization and decision-making problems were derived from the microelectronics domain. Most previous contributions dealt with combinatorial optimization problems for placement and routing during IC design. In this thesis, a higher design level is considered, where the DFT problem is analyzed at the Register Transfer Level (RTL) before the logic synthesis process starts. This thesis is structured into three parts. In the first part, preliminaries and basic concepts of operations research, IC design and manufacturing are introduced. Next, both our approach and the solution tools which are used in the rest of this work are presented. In the second part, the problem of optimizing the insertion of scan chains is considered. Currently, "internal scan" is a widely adopted DFT technique for sequential digital designs, where the design flip-flops are connected in a daisy-chain manner with full controllability and observability from primary inputs and outputs.
In this part of the work, different algorithms are developed to provide an automated and optimal solution during the generation of an RTL scan architecture, where several parameters are considered: area, test time and power consumption, in full compliance with functional performance. This problem has been modelled as the search for short chains in a weighted graph, and the solution methods are based on finding minimal-length Hamiltonian chains. This work was accomplished in collaboration with DeFacTo Technologies, an EDA start-up close to Grenoble. The third part deals with the problem of sharing BIST (Built-In Self-Test) blocks for testing memories. The problem can be formulated as follows: given memories of various types and sizes, and sharing rules for series and parallel wrappers, we have to identify solutions by associating a wrapper with each memory. The solution should minimize the area, power consumption and test time of the IC. To solve this problem, we designed a prototype called Memory BIST Optimizer (MBO). It consists of two resolution steps and a validation phase. The first step creates compatibility groups in accordance with the abstraction and sharing rules, which depend on technologies. The second step uses genetic algorithms for multi-objective optimization in order to obtain a set of non-dominated solutions. Finally, the validation phase verifies that the solution provided is valid and displays all solutions through a graphical or textual interface, allowing the user to choose the solution that fits best. MBO is currently integrated into an industrial flow within STMicroelectronics.
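The scan-chain step above searches for short Hamiltonian chains in a weighted graph; a greedy nearest-neighbour pass gives the flavour of such a construction (an illustrative sketch, not the algorithms developed with DeFacTo Technologies):

```python
# Greedy nearest-neighbour construction of a short Hamiltonian chain
# over flip-flops, with dist[i][j] a hypothetical wiring cost.
def nearest_neighbour_chain(dist, start=0):
    n = len(dist)
    chain, remaining = [start], set(range(n)) - {start}
    while remaining:
        last = chain[-1]
        nxt = min(remaining, key=lambda j: dist[last][j])  # cheapest next hop
        chain.append(nxt)
        remaining.remove(nxt)
    length = sum(dist[a][b] for a, b in zip(chain, chain[1:]))
    return chain, length
```

Such a greedy chain is a natural baseline; exact minimal-length Hamiltonian chains require the heavier machinery the thesis discusses.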
Alonso, Thierry. "Caractérisation par essais DMA et optimisation du comportement thermomécanique de fils de NiTi - Application à une aiguille médicale déformable". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAI028/document.
Texto completoMany medical procedures use needles. A solution is proposed to control and modify the needle trajectory during its insertion. This steerable needle must be able to avoid an obstacle and reach the target with greater accuracy. The solution uses a Nickel-Titanium (NiTi) shape memory alloy. A new experimental method is proposed to characterize NiTi wires. This method is based on an experimental device which allows performing Dynamic Mechanical Analysis (DMA) during a tensile test or during a temperature sweep under stress. DMA measurements can detect many phenomena: elasticity, phase transformation, reorientation, plasticity. Results for a commercial NiTi wire are presented and analyzed. Analysis of the storage modulus evolution shows multistage phase transformations, for which the stress-temperature diagram has been established. Values of the elastic modulus are determined for austenite, martensite and the R phase. Estimation models are proposed to determine the storage modulus evolution during tensile tests with DMA and temperature sweeps under stress with DMA. The last part of this work studies the effect of heat treatment on a cold-worked NiTi wire. A range of heat treatments was performed, and the effects of the thermomechanical treatments were investigated both with tensile tests and with temperature sweeps under stress with DMA.
Ahmad, Mumtaz. "Stratégies d'optimisation de la mémoire pour le calcul d'applications linéaires et l'indexation de document partagés". Phd thesis, Université Henri Poincaré - Nancy I, 2011. http://tel.archives-ouvertes.fr/tel-00641866.
Texto completoJacquemoud-Collet, Fanny. "Etiquette RFID bas coût sur support papier : optimisation du procédé industriel innovant / intégration d’une fonctionnalité capteur". Electronic Thesis or Diss., Montpellier 2, 2014. http://www.theses.fr/2014MON20194.
Texto completoRFID (Radio Frequency Identification) has grown considerably in recent years, becoming an essential means of traceability and identification. Market players are numerous and, among them, Tageos (Montpellier, France) has run since 2008 an innovative, economical and ecological process for manufacturing RFID tags on paper. However, even if the performances obtained during a previous work (thesis of C. Ramade, 2008-2011) were sufficient to allow mass production, they are not optimal, in particular with respect to results established in the laboratory. This work takes place in this context and was carried out in close collaboration between the Institut d'Électronique du Sud and TAGEOS S.A.S. Our efforts were focused on: optimization of the RFID antenna realization process, working on the analysis, methods, protocols and technical resources for the preparation of the paper substrate; alternative and complementary solutions to realize the RFID antenna and the RFID chip bonding; and the reliability and quality of the finished products. Moreover, in this work we have also demonstrated the valorization of our low-cost RFID tag by integrating a sensor functionality, with an industrial production process that takes the TAGEOS process into account.
Amstel, Duco van. "Optimisation de la localité des données sur architectures manycœurs". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM019/document.
Texto completoThe continuous evolution of computer architectures has been an important driver of research in code optimization and compiler technologies. A trend in this evolution that can be traced back over decades is the growing ratio between the available computational power (IPS, FLOPS, ...) and the corresponding bandwidth between the various levels of the memory hierarchy (registers, cache, DRAM). As a result, reducing the amount of memory communication that a given code requires has been an important topic in compiler research. A basic principle for such optimizations is the improvement of temporal data locality: grouping all references to a single data-point as close together as possible, so that it is only required for a short duration and can be quickly moved to distant memory (DRAM) without any further memory communications. Yet another architectural evolution has been the advent of the multicore era and, in the most recent years, the first generation of manycore designs. These architectures have considerably raised the bar of the amount of parallelism available to programs and algorithms, but this is again limited by the available bandwidth for communication between the cores. This brings some issues that previously were the sole preoccupation of distributed computing into the world of compiling and code optimization techniques. In this document we present a first dive into a new optimization technique which promises both a high-level model for data reuses and a large field of potential applications, a technique which we refer to as generalized tiling. It finds its source in the already well-known loop tiling technique, which has been applied with success to improve data locality for both registers and cache memory in the case of nested loops. This new "flavor" of tiling has a much broader perspective and is not limited to the case of nested loops.
It is built on a new representation, the memory-use graph, which is tightly linked to a new model for both memory usage and communication requirements, and which can be used for all forms of iterative code. Generalized tiling expresses data locality as an optimization problem for which multiple solutions are proposed. With the abstraction introduced by the memory-use graph it is possible to solve this optimization problem in different environments. For experimental evaluation we show how this new technique can be applied in the context of loops, nested or not, as well as for programs expressed in a dataflow language. Anticipating the use of generalized tiling to distribute computations over the cores of a manycore architecture, we also provide some insight into the methods that can be used to model communications and their characteristics on such architectures. As a final point, and in order to show the full expressiveness of the memory-use graph and, even more, of the underlying memory usage and communication model, we turn to the topic of performance debugging and the analysis of execution traces. Our goal is to provide feedback on the evaluated code and its potential for further improvement of data locality. Such traces may contain information about memory communications during an execution and show strong similarities with the previously studied optimization problem. This brings us to a short introduction to the algorithmics of directed graphs and the formulation of new heuristics for the well-studied topic of reachability and the much less known problem of convex partitioning.
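Classical loop tiling, the well-known starting point that generalized tiling extends, can be recalled on a matrix product. This is a textbook sketch of the blocking idea, not an implementation of the memory-use graph formalism:

```python
# Textbook loop tiling: blocking the loops keeps a TxT tile of B hot in
# fast memory while every row of A reuses it, reducing memory traffic.
def tiled_matmul(A, B, T=2):
    n, k, m = len(A), len(B), len(B[0])
    C = [[0] * m for _ in range(n)]
    for ii in range(0, n, T):
        for kk in range(0, k, T):
            for jj in range(0, m, T):  # the B[kk:kk+T][jj:jj+T] tile stays hot
                for i in range(ii, min(ii + T, n)):
                    for p in range(kk, min(kk + T, k)):
                        a = A[i][p]
                        for j in range(jj, min(jj + T, m)):
                            C[i][j] += a * B[p][j]
    return C
```

The computation is identical to the untiled triple loop; only the iteration order, and hence the reuse distance of each datum, changes.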
Longnos, Florian. "Etude et optimisation des performances électriques et de la fiabilité de mémoires résistives à pont conducteur à base de chalcogénure/Ag ou d'oxyde métallique/Cu". Thesis, Grenoble, 2014. http://www.theses.fr/2014GRENT046.
Texto completoNon-volatile memory technology has recently become the key driver for growth in the semiconductor business, and an enabler for new applications and concepts in the field of information and communication technologies (ICT). In order to overcome the limitations of Flash memory in terms of scalability, power consumption and fabrication complexity, the semiconductor industry is currently assessing alternative solutions. Among them, Conductive Bridge Memories (CBRAM) rely on the resistance switching of a solid electrolyte induced by the migration and redox reactions of metallic ions. This technology is appealing due to its simple two-terminal structure and its promising performance in terms of low power consumption and program/erase speed. Furthermore, CBRAM is a memory technology that can easily be integrated with standard CMOS technology in the back end of line (BEOL). In this work we study the electrical performance and reliability of two different CBRAM technologies, using chalcogenide (GeS2) and metal oxide electrolytes respectively. We first focus on GeS2-based CBRAM, where the effect of doping the GeS2 electrolyte with Ag and Sb is extensively investigated through electrical characterization. The physical mechanisms governing the switching kinetics and thermal stability are also addressed by means of electrical measurements, an empirical model and first-principles calculations. The influence of the different set/reset programming conditions is studied on a metal-oxide-based CBRAM technology. Based on this analysis, the programming conditions able to maximize the memory window, improve the endurance and minimize the variability are determined.
Carpov, Sergiu. "Scheduling for memory management and prefetch in embedded multi-core architectures". Compiègne, 2011. http://www.theses.fr/2011COMP1962.
Texto completoThis PhD thesis is devoted to the study of several combinatorial optimization problems which arise in the field of parallel embedded computing. Optimal memory management and related scheduling problems for dataflow applications executed on massively multi-core processors are studied. Two memory access optimization techniques are considered: data reuse and prefetch. Memory access management is instantiated into three combinatorial optimization problems. In the first problem, a prefetching strategy for dataflow applications is investigated so as to minimize the application execution time. This problem is modeled as a hybrid flow shop under precedence constraints, an NP-hard problem. A heuristic resolution algorithm together with two lower bounds is proposed so as to conservatively, though fairly tightly, estimate the distance to optimality. The second problem concerns optimal prefetch management strategies for branching structures (data-controlled tasks). Several objective functions, as well as prefetching techniques, are examined; in all these cases polynomial resolution algorithms are proposed. The third problem consists in ordering a set of tasks so as to minimize the number of times memory data are fetched, thereby optimizing data reuse across tasks. This problem being NP-hard, a result we have established, we propose two heuristic algorithms. The optimality gap of the heuristic solutions is estimated using exact solutions, the latter obtained with a branch-and-bound method we have proposed.
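The third problem above (ordering tasks to maximize data reuse) admits a simple greedy illustration: repeatedly pick the task sharing the most data with what a bounded buffer currently holds. This is a hypothetical sketch, not one of the thesis's heuristics:

```python
# Greedy task ordering for data reuse: choose next the task whose data
# overlaps most with a size-limited LRU buffer, counting memory fetches.
def greedy_task_order(tasks, cache_size):
    # tasks: dict name -> set of data items the task reads
    order, cache, fetches = [], [], 0  # cache kept as an LRU list
    pending = dict(tasks)
    while pending:
        name = max(pending, key=lambda t: len(pending[t] & set(cache)))
        for d in pending.pop(name):
            if d in cache:
                cache.remove(d)        # hit: refresh LRU position
            else:
                fetches += 1           # miss: one more memory fetch
            cache.append(d)
            if len(cache) > cache_size:
                cache.pop(0)           # evict least recently used item
        order.append(name)
    return order, fetches
```

Running tasks with shared data back-to-back keeps their items resident, which is exactly the reuse effect the scheduling problem formalizes.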
Bonarota, Matthieu. "Optimisation de la programmation d'un cristal dopé aux ions de terres rares, opérant comme processeur analogique d'analyse spectrale RF, ou de stockage d'information quantique". Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00766334.
Texto completoBonarota, Matthieu. "Optimisation de la programmation d’un cristal dopé aux ions de terres rares, opérant comme processeur analogique d’analyse spectrale RF, ou de stockage d’information quantique". Thesis, Paris 11, 2012. http://www.theses.fr/2012PA112324/document.
Texto completoThe development of a quantum memory for light involves the most fundamental aspects of the light-matter interaction. To store the quantum information carried by light, the material has to be able to stay in a state of quantum superposition. The storage time is limited by the lifetime of this state, characterized by the coherence time. The first experiments used the well-known cold atomic vapors. More recently, Rare-Earth-Ion-doped Crystals (REIC) have drawn attention because of their remarkably long coherence time, combined with a large interaction bandwidth. Specific protocols have been proposed to make the most of these properties. We have opted for a promising spin-off of the well-known photon echo, named the Atomic Frequency Comb (AFC, proposed in 2008), based on the transmission of the incoming field through a spectrally periodic absorption profile. The first chapters of this manuscript present this protocol and our work aimed at improving its efficiency (the probability of capturing and retrieving the incoming information), increasing its bandwidth and multiplexing capacity, and measuring its noise. The following chapters present a new protocol, proposed in our group during this thesis and called Revival Of Silenced Echo (ROSE). This protocol, similar to the photon echo, has been demonstrated and characterized experimentally. It appears very promising in terms of efficiency, bandwidth and noise.
Glaudin, Lilian. "Stratégies multicouche, avec mémoire, et à métrique variable en méthodes de point fixe pour l'éclatement d'opérateurs monotones et l'optimisation". Thesis, Sorbonne université, 2019. http://www.theses.fr/2019SORUS119.
Texto completoSeveral apparently unrelated strategies coexist to implement algorithms for solving monotone inclusions in Hilbert spaces. We propose a synthetic framework for fixed point construction which makes it possible to capture various algorithmic approaches, clarify and generalize their asymptotic behavior, and design new iterative schemes for nonlinear analysis and convex optimization. Our methodology, which is anchored on an averaged quasinonexpansive operator composition model, allows us to advance the theory of fixed point algorithms on several fronts, and to impact their application fields. Numerical examples are provided in the context of image restoration, where we propose a new viewpoint on the formulation of variational problems
Dahmani, Safae. "Modèles et protocoles de cohérence de données, décision et optimisation à la compilation pour des architectures massivement parallèles". Thesis, Lorient, 2015. http://www.theses.fr/2015LORIS384/document.
Texto completoManycore architectures consist of hundreds to thousands of embedded cores, distributed memories and a dedicated network on a single chip. In this context, and because of the scale of the processor, providing a shared memory system has to rely on efficient hardware and software mechanisms and data consistency protocols. Numerous works have explored consistency mechanisms designed for highly parallel architectures. They lead to the conclusion that no single protocol fits all applications and hardware contexts. In order to deal with consistency issues for this kind of architecture, we propose in this work a multi-protocol compilation toolchain, in which the shared data of the application can be managed by different protocols. Protocols are chosen and configured at compile time, according to the application behaviour and the targeted architecture specifications. The application behaviour is characterized with a static analysis process that helps to guide the assignment of protocols to each data access. The platform offers a protocol library where each protocol is characterized by one or more parameters. The range of possible values of each parameter depends on constraints mainly related to the targeted platform. The protocol configuration relies on a genetic-based engine that instantiates each protocol with appropriate parameter values according to multiple performance objectives. In order to evaluate the quality of each proposed solution, we use different evaluation models. We first use an analytical traffic model which gives some NoC communication statistics but no timing information. Therefore, we propose two cycle-based evaluation models that provide more accurate performance metrics while taking into account the contention effect due to the consistency protocols' communications. We also propose a cooperative cache consistency protocol improving the cache miss rate by sliding data to less stressed neighbours.
An extension of this protocol is proposed in order to dynamically define the sliding radius assigned to each data migration. This extension is based on the mass-spring physical model. The experimental validation of the different contributions compares the sliding-based protocols with a four-state directory-based protocol.
Nizard, Mevyn. "Optimisation d'un vaccin thérapeutique dans les tumeurs des voies aérodigestives supérieures associées aux papillomavirus : rôle de l'induction d'une immunité muqueuse et de la combinaison à la radiothérapie". Thesis, Sorbonne Paris Cité, 2015. http://www.theses.fr/2015PA05T027/document.
Texto completoCancer is the second cause of mortality worldwide, and mucosal cancers (lung, stomach, …) are the leading cause of cancer deaths. Most cancer vaccines against mucosal tumors have not yet yielded significant clinical results. In this work we developed a potent immunotherapy based on the non-toxic B subunit of Shiga toxin and showed for the first time that the localization of the immunization is crucial to induce potent and effective anti-tumoral responses. In a preclinical model, systemic immunization failed to induce therapeutic protection against a mucosal tumor challenge, while intranasal immunization fully succeeded. We identified a CD8 T lymphocyte population required for this protection, more precisely tissue-resident memory T (Trm) cells. These Trm cells showed the classical CD103 phenotype as well as CD49a, which can play a specific role in the retention or migration of these cells in the tumor tissue and might play a role in their survival. We also demonstrated that dendritic cells from the mucosal parenchyma were required to induce CD49a expression on CD8 T cells, while dendritic cells from the spleen were not. Our work shows that the number of Trm cells has an impact on anti-tumoral protection. We were able to reduce the Trm number in vivo using an anti-TGF-β antibody, and this reduction was correlated with a less efficient anti-tumoral protection. Patients with head and neck cancers are treated with radiotherapy; in this setting, we showed that the combination of radiotherapy and our immunotherapy provided better protection than radiotherapy or immunotherapy alone, thanks to vascular normalization. These results might rapidly lead to clinical trials and open new ways to work with immunotherapies.