Dissertations / Theses on the topic 'Implémentation et optimisation'
Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 22 dissertations / theses for your research on the topic 'Implémentation et optimisation.'
Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever these are available in the metadata.
Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.
Cognot, Richard. "La méthode D. S. I. : optimisation, implémentation et applications." Vandoeuvre-les-Nancy, INPL, 1996. http://www.theses.fr/1996INPL003N.
Ferrer, Ludovic. "Dosimétrie clinique en radiothérapie moléculaire : optimisation de protocoles et implémentation clinique." Nantes, 2011. https://archive.bu.univ-nantes.fr/pollux/show/show?id=b7183ac7-6fc1-4281-be4f-8a358c9320fc.
Molecular radiotherapy (MRT) consists in destroying tumour targets with radiolabelled vectors. This nuclear medicine specialty is being considered with increasing interest, for example via the success achieved in the treatment of non-Hodgkin lymphomas by radioimmunotherapy. One of the keys to MRT optimization lies in the personalisation of the absorbed doses delivered to the patient: this is required to ascertain that irradiation is focused on tumour cells while keeping the irradiation of surrounding healthy tissue at an acceptable, non-toxic level. Radiation dose evaluation in MRT requires, on the one hand, the spatial and temporal localization of the injected radioactive sources by scintigraphic imaging and, on the other hand, knowledge of the media through which the emitted radiation propagates, given by CT imaging. Global accuracy relies on the accuracy of each of the steps that contribute to clinical dosimetry. There is no reference, standardized dosimetric protocol to date. Due to heterogeneous implementations, evaluating the accuracy of the absorbed dose is a difficult task. In this thesis, we developed and evaluated different dosimetric approaches that allowed us to find a relationship between the absorbed dose to the bone marrow and haematological toxicity. Besides, we built a scientific project, called DosiTest, which aims at evaluating the impact of the various steps that contribute to the realization of a dosimetric study, by means of a virtual multicentric comparison based on Monte Carlo modelling.
Chaarani, Jamal. "Etude d'une classe d'algorithmes d'optimisation non convexe : implémentation et applications." Phd thesis, Grenoble 1, 1989. http://tel.archives-ouvertes.fr/tel-00333443.
Marina, Sahakyan. "Optimisation des mises à jours XML pour les systèmes main-memory: implémentation et expériences." Phd thesis, Université Paris Sud - Paris XI, 2011. http://tel.archives-ouvertes.fr/tel-00641579.
Labonté, Francis. "Étude, optimisation et implémentation d'un quantificateur vectoriel algébrique encastré dans un codeur audio hybride ACELP/TCX." Mémoire, Université de Sherbrooke, 2002. http://savoirs.usherbrooke.ca/handle/11143/1205.
Paquier, Williams. "Apprentissage ouvert de représentations et de fonctionalités en robotique : analyse, modèles et implémentation." Toulouse 3, 2004. http://www.theses.fr/2004TOU30233.
Autonomous acquisition of representations and functionalities by a machine raises several theoretical questions. Today's autonomous robots are developed around a set of functionalities. Their representations of the world are deduced from the analysis and modeling of a given problem, and are initially given by the developers. This limits the learning capabilities of robots. In this thesis, we propose an approach and a system able to build open-ended representations and functionalities. This system learns through its experimentation with the environment and aims to augment a value function. Its objective consists in acting to reactivate the representations it has already learnt to connote positively. An analysis of the generalization capabilities needed to produce appropriate actions enables us to define a minimal set of properties required by such a system. The open-ended representation system is composed of a network of homogeneous processing units and is based on position coding. The meaning of a processing unit depends on its position in the global network. This representation system presents similarities with the principle of numeration by position. A representation is given by a set of active units. The system is implemented in a suite of software called NeuSter, which can simulate networks of millions of units with billions of connections on heterogeneous clusters of POSIX machines.
Guenard, Nicolas. "Optimisation et implémentation de lois de commande embarquées pour la téléopération de micro drones aériens X4-flyer." Nice, 2007. http://www.theses.fr/2007NICE4066.
Nowadays, interest in small-size Unmanned Aerial Vehicles (UAVs) is very strong. In order to carry out inspection and reconnaissance missions, the French Atomic Energy Commission (CEA) is interested in the use of a rotary-wing aerial vehicle suited to quasi-stationary flight conditions. Consequently, a prototype ideally suited to this type of mission and to stationary flight conditions, known as an "X4-flyer", has been built. This kind of small aerial robot is well known to model-aircraft enthusiasts and can be bought as a toy. However, it is very difficult to control without many hours of training. This document therefore presents several embedded algorithms allowing simple control of the vehicle from the user's translational speed commands. Image-based visual servo controls, computed on a ground station, are also presented; they allow the stabilization of the UAV above a target on the ground. To this end, we first study the different aerodynamic effects acting on the "X4-flyer" and derive a mathematical model of the vehicle. Then, a state feedback and a nonlinear adaptive control, easy to embed and based on the preceding model, are designed. This control law takes the model nonlinearities into account. Finally, 3D and 2D visual servoing schemes derived for the full dynamics of the system are designed. Each theoretical part of the document has been tested and validated on the experimental UAV.
Nguyen, Tung Lam. "Contrôle et optimisation distribués basés sur l'agent dans les micro-réseaux avec implémentation Hardware-in-the-Loop." Thesis, Université Grenoble Alpes (ComUE), 2019. http://www.theses.fr/2019GREAT022/document.
In terms of the control hierarchy of microgrids, the coordination of local controllers is mandatory at the secondary and tertiary levels. Instead of using a central unit as in conventional approaches, in this work distributed schemes are considered. Distributed approaches have attracted wide attention recently due to their advantages in reliability, scalability, and security. The multi-agent system is an advanced technique with properties that make it suitable as a basis for building modern distributed control systems. The thesis focuses on the design of agents implementing distributed control and optimization algorithms in microgrids, with realistic online deployment on a Hardware-in-the-Loop platform. Based on the provided three-layer architecture of microgrids, a laboratory platform with a Hardware-in-the-Loop setup is constructed at the system level. This platform includes two parts: (1) a digital real-time simulator used to simulate test-case microgrids with local controllers in real time; and (2) a cluster of hardware Raspberry Pis representing the multi-agent system operating over a sparse physical communication network. An agent is a Python-based program running on a single Raspberry Pi, able to exchange data with its neighbours and to compute the algorithms that control the microgrid in a distributed manner. In the thesis, we apply distributed algorithms at both the secondary and tertiary control levels. The distributed secondary controls in an islanded microgrid are presented through two approaches, a finite-time consensus algorithm and an average consensus algorithm, with improved performance. An extension of the platform with Power Hardware-in-the-Loop and IEC 61850-based communication is carried out to bring the deployment of agents closer to industrial applications.
At the top control level, the agents execute the Alternating Direction Method of Multipliers (ADMM) to find the optimal operating points of microgrid systems in both islanded and grid-connected states. The secondary and tertiary control objectives are achieved in a single framework, which is rarely reported in other studies. Overall, the agent is explicitly investigated and deployed under realistic conditions to facilitate applications of distributed algorithms for hierarchical control in microgrids. This research takes a further step toward bringing distributed algorithms closer to on-site implementation.
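The average-consensus scheme used for secondary control in this kind of work can be sketched in a few lines: each agent repeatedly nudges its local value toward its neighbours' values, and all agents converge to the global average. The 4-agent ring topology, step size, and initial values below are illustrative assumptions, not taken from the thesis.

```python
# Minimal average-consensus sketch. Topology, step size and initial
# measurements are illustrative assumptions.
import numpy as np

def average_consensus(x0, neighbors, eps=0.25, iters=200):
    x = np.array(x0, dtype=float)
    for _ in range(iters):
        x_new = x.copy()
        for i, nbrs in enumerate(neighbors):
            # Standard consensus update: x_i += eps * sum_j (x_j - x_i)
            x_new[i] += eps * sum(x[j] - x[i] for j in nbrs)
        x = x_new
    return x

# 4 agents on a ring; each starts from its own local measurement.
x0 = [1.0, 3.0, 5.0, 7.0]
ring = [[1, 3], [0, 2], [1, 3], [0, 2]]
print(average_consensus(x0, ring))  # all values converge to the mean 4.0
```

Convergence requires the step size `eps` to be smaller than the inverse of the maximum node degree; here the ring has degree 2, so `eps = 0.25` is safely inside that bound.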
Dudka, Andrii. "Étude, optimisation et implémentation en silicium du circuit de conditionnement intelligent haute-tension pour le système de récupération électrostatique d'énergie vibratoire." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2014. http://tel.archives-ouvertes.fr/tel-01056404.
Dudka, Andrii. "Étude, optimisation et implémentation en silicium du circuit de conditionnement intelligent haute-tension pour le système de récupération électrostatique d'énergie vibratoire." Electronic Thesis or Diss., Paris 6, 2014. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2014PA066054.pdf.
Vibration energy harvesting is a relatively new concept that can be used to power micro-scale embedded devices with the energy of vibrations omnipresent in the surroundings. This thesis contributes to a general study of vibration energy harvesters (VEHs) employing electrostatic transducers. A typical electrostatic VEH consists of a capacitive transducer, conditioning electronics, and a storage element. This work focuses on the auto-synchronous conditioning circuit reported by MIT in 2006, which combines a diode-based charge pump with an inductive flyback energy return driven by a switch. This architecture is very promising since it eliminates the precise gate control of transistors required in synchronous architectures, while the single switch turns on rarely. This thesis addresses the theoretical analysis of the conditioning circuit. We developed an algorithm that, by proper switching of the flyback, enables the optimal energy conversion strategy while taking into account the losses associated with switching. By adding a calibration function, the system becomes adaptive to fluctuations in the environment. This study was validated by behavioral modeling. Another contribution consists in the realization of the proposed algorithm at the circuit level. The major design difficulties were related to the high-voltage requirement and the low-power design priority. We designed a high-voltage analog controller for the switch using AMS035HV technology. Its power consumption varies between several hundred nanowatts and a few microwatts, depending on numerous factors: the parameters of the external vibrations, the voltage levels of the charge pump, the frequency of the flyback switching, the frequency of the calibration function, etc. We also implemented on silicon, fabricated, and tested a high-voltage switch with a novel low-power level-shifting driver.
By building the charge pump and flyback circuit from discrete components and employing the proposed switch, we characterized the wideband high-voltage operation of the MEMS transducer prototype fabricated alongside this thesis at ESIEE Paris. When excited with stochastic vibrations with an acceleration level of 0.8 g rms distributed in the 110-170 Hz band, up to 0.75 µW of net electrical power was harvested.
Vinot, Benoît. "Conception d'un système d'information distribué pour la conduite des flexibilités dans un réseau de distribution électrique : modélisation, simulation et implémentation." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM043/document.
The energy sector, and electrical networks in particular, provide great and indispensable services to our modern societies. Unfortunately, they also bring some serious drawbacks, especially with regard to the environment. These drawbacks are becoming more and more unacceptable; that is why the energy sector is trying to reduce them as much as possible, in the framework of the so-called energy transition. In addition to mandatory efforts in terms of energy efficiency and sobriety, two major directions of improvement have been identified: on the one hand, the progressive replacement of some conventional power plants with renewable production units; and on the other hand, the transfer of several non-electrical usages towards electricity, in particular in the area of mobility. The integration of these new devices into electrical networks raises new technical challenges which, since the early 2000s, have been driving a lot of work on so-called "smart grids": electrical networks compatible with the requirements of the energy transition, i.e. able to host new devices like photovoltaic solar panels and charging stations for electric vehicles, notably through the increasing usage of new information and communication technologies. Among the difficulties mentioned above, which limit the hosting capacity of the network, are congestions, i.e. physical constraints limiting the amount of power that may be transmitted through a given infrastructure. Our work is devoted to the management of congestions.
The fundamental issue therein is to define a sequence of decisions, computations, communications and, in fine, actions that allows moving from a constrained situation on the electrical distribution network to a situation in which the action of local flexibilities has lifted the constraint; in other words, to a situation where increasing or decreasing local generation and/or consumption, or taking some other control action, has relieved the network. The aim of this thesis is to contribute to the development of the conceptual and computing tools that will allow us to answer the aforementioned fundamental issue. Our work thus deals with the modelling of flexible electrical distribution networks, and with the tangible implementation of selected models in the form of ad hoc simulation software, specifically designed for the study of such networks.
Tano, Krongrossi. "Conception et implémentation d'un système intégrant des modèles de simulation et un SIADS (système interactif d'aide à la décision spécifique) de gestion portuaire : application à la gestion du port autonome d'Abidjan." Paris 9, 1994. https://portail.bu.dauphine.fr/fileviewer/index.php?doc=1994PA090020.
Doré, Jean-Baptiste. "Optimisation conjointe de codes LDPC et de leurs architectures de décodage et mise en œuvre sur FPGA." Phd thesis, INSA de Rennes, 2007. http://tel.archives-ouvertes.fr/tel-00191155.
First, a broad presentation of LDPC codes is given, including the notation and the algorithmic tools needed to understand them. This introduction to LDPC codes underlines the benefit of designing the coding/decoding scheme and the hardware architectures jointly. With this in mind, a particularly interesting family of LDPC codes is described. In particular, we propose code construction rules that constrain the Hamming distance spectrum. These constraints are integrated into a new code-design algorithm that works on a compressed representation of the code as a graph.
The structural properties of the code are then exploited to define the decoding algorithm. This algorithm, characterized by the fact that it treats part of the code as a convolutional code, converges faster than the algorithms usually encountered while allowing great flexibility in terms of code rates. Several decoder architectures are then described and discussed. Constraints on the codes are then set out to fully exploit the properties of the architectures.
Finally, one of the proposed architectures is evaluated by integrating a decoder on a programmable device. In various contexts, performance and complexity measurements show the benefit of the proposed architecture.
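The iterative flavour of LDPC decoding can be illustrated with the classic hard-decision bit-flipping algorithm: compute the parity-check syndrome, flip the bit involved in the most failed checks, and repeat. The tiny (7,4) Hamming parity-check matrix below is a stand-in for illustration only; the thesis's structured LDPC codes and convolutional-style decoder are not reproduced here.

```python
# Gallager-style bit-flipping decoding sketch on a (7,4) Hamming
# parity-check matrix (an illustrative stand-in, not the thesis's codes).
import numpy as np

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])

def bit_flip_decode(y, H, max_iters=10):
    y = y.copy()
    for _ in range(max_iters):
        syndrome = H @ y % 2
        if not syndrome.any():
            return y  # all parity checks satisfied: valid codeword
        # For each bit, count the unsatisfied checks it participates in,
        # then flip the bit involved in the most failures.
        counts = H.T @ syndrome
        y[np.argmax(counts)] ^= 1
    return y

received = np.array([0, 0, 1, 0, 0, 0, 0])  # all-zero codeword, one bit error
print(bit_flip_decode(received, H))  # recovers the all-zero codeword
```

Soft-decision algorithms such as min-sum improve on this by propagating reliabilities instead of hard bits, at the cost of the more complex architectures the thesis studies.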
Khoury, Jawad. "Optimisation de dimensionnement et de fonctionnement d’un système photovoltaïque assistant un réseau électrique intermittent pour une application résidentielle." Thesis, Cergy-Pontoise, 2016. http://www.theses.fr/2016CERG0763/document.
This thesis addresses the issue of intermittent primary energy sources in several developing countries and considers, in particular, the case study of Lebanon. A PV-battery backup system is proposed and assessed as a replacement for grid energy during daily power-outage periods for a high-energy-consuming residential house. The proposed system topology introduces more critical conditions and additional constraints on the operation of the system compared to standard on-grid or standalone PV systems. The main concern is to provide a permanent electricity supply to the house, reduce the resulting fees, and ensure high performance and reliability of the backup system while respecting the residents' comfort. This thesis aims at thoroughly assessing the suitability of the proposed backup system by focusing on its various aspects. First, its configuration is optimized through the development of a detailed economic study estimating the resulting fees over its 20-year lifetime. The sizing process is formulated as an optimization problem with the sole objective of minimizing the overall cost of the system. Furthermore, a detailed comparative study of various water-heating techniques is conducted in order to determine the most suitable configuration to be coupled with the proposed backup solution. Second, the thesis targets the operation optimization of the PV-battery system by implementing a Demand Side Management (DSM) program aiming at preventing any loss of power supply to the house while maintaining high comfort levels for the inhabitants and respecting the operating constraints of the system. The control is divided into several layers in order to manage predictable and unpredictable home appliances. The strength of the developed control lies in ensuring complete coordination between all the components of the installation: the grid, the PV panels, the battery storage, and the load demand.
The benefits of the DSM are proven to go beyond the operation optimization of the system, since they strongly affect the sizing of the backup and, by extension, the overall resulting cost. The established program is optimized for hardware implementation by ensuring low memory consumption and fast decision making. The developed C code of the full DSM program is implemented on ARM Cortex-A9 processors. The simulation and implementation results show that the developed management program is highly generic, flexible, accurate, fast, and reliable. The results presented in this thesis validate that the proposed PV-battery backup system is highly suitable for assisting unreliable grids. It outperforms currently installed diesel generators and demonstrates remarkable reliability, especially when coupled with the developed DSM program.
Elloumi, Yaroub. "Parallélisme des nids de boucles pour l’optimisation du temps d’exécution et de la taille du code." Thesis, Paris Est, 2013. http://www.theses.fr/2013PEST1199/document.
Real-time implementation algorithms always include nested loops which require long execution times. Thus, several nested-loop parallelization techniques have been proposed with the aim of decreasing their execution times. These techniques can be classified in terms of granularity: iteration-level parallelism and instruction-level parallelism. In the case of instruction-level parallelism, the techniques aim to achieve full parallelism. However, loop-carried dependencies imply shifting instructions on both sides of the nested loops. Consequently, these techniques yield implementations with non-optimal execution times and large code sizes, which are limiting factors when implemented on embedded real-time systems. In this work, we are interested in enhancing parallelization strategies for nested loops. The first contribution consists in proposing a novel instruction-level parallelism technique, called "delayed multidimensional retiming". It aims at scheduling the nested loops with the minimal cycle period, without achieving full parallelism. The second contribution consists in employing "delayed multidimensional retiming" when producing nested-loop implementations on real-time embedded systems. The aim is to respect an execution-time constraint while using minimal code size. In this context, we propose a first approach that selects the minimal instruction-parallelism level allowing the execution-time constraint to be respected. The second approach employs both instruction-level parallelism and iteration-level parallelism, by using "delayed multidimensional retiming" and "loop striping".
Cassagne, Adrien. "Méthodes d’optimisation et de parallélisation pour la radio logicielle." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0231.
A software-defined radio is a radio communication system where components traditionally implemented in hardware are instead implemented by means of software. With the growing number of complex digital communication standards and the increasing power of general-purpose processors, it becomes interesting to trade the energy efficiency of dedicated architectures for the flexibility and reduced time to market of general-purpose processors. Even if the final implementation of a signal-processing chain is made on an application-specific integrated circuit, the software version of this processing is necessary to evaluate and verify the correct properties of the functionality. This is generally the role of simulation. Simulations are often expensive in terms of computation time: evaluating the global performance of a communication system can require from a few days to a few weeks. In this context, this thesis studies the most time-consuming algorithms in today's digital communication chains. These algorithms are often the channel decoders located in the receivers. The role of channel coding is to improve the error resilience of the system. Indeed, errors can occur at the channel level during the transmission between the transmitter and the receiver. Three main channel-coding families are presented: LDPC codes, polar codes, and turbo codes. These three code families are used in most current digital communication standards, like Wi-Fi, Ethernet, the 3G, 4G and 5G mobile networks, digital television, etc. The resulting decoders offer the best compromise between error resistance and decoding speed known to date. Each of these families comes with specific decoding algorithms. One of the main challenges of this thesis is to propose optimized software implementations for each of them. Specific efficient implementations are proposed, as well as more general optimization strategies.
The idea is to extract generic optimization strategies from a representative subset of decoders. The last part of the thesis focuses on the implementation of a complete digital communication system in software. Thanks to the efficient decoding implementations proposed before, a full transceiver, compatible with the DVB-S2 standard, is implemented. This standard is typically used for broadcasting multimedia content via satellite. To this end, an embedded domain-specific language targeting software-defined radio is introduced. The main objective of this language is to take advantage of the parallel architecture of current general-purpose processors. The results show that the system achieves sufficient throughput to be deployed in real-world conditions. These contributions have been made in a spirit of openness, sharing and reusability, resulting in an open-source library named AFF3CT, for A Fast Forward Error Correction Toolbox. Thus, all the results proposed in this thesis can easily be reproduced and extended. This philosophy is detailed in a specific chapter of the thesis manuscript.
Chaker, Jade. "Développements analytiques pour la caractérisation non-ciblée et par profilage de suspects de l’exposome chimique dans le plasma et le sérum humain par LC-ESI-HRMS : optimisation et implémentation d’un workflow haut débit pour l’identification de nouveaux biomarqueurs d’exposition dans le plasma et le sérum sanguins." Electronic Thesis or Diss., Rennes, École des hautes études en santé publique, 2022. http://www.theses.fr/2022HESP0002.
Chronic exposure to complex mixtures of chemical contaminants (xenobiotics) is suspected to contribute to the onset of chronic diseases. The technological advances in high-resolution mass spectrometry (HRMS), as well as the concept of the exposome, have set the stage for the development of new non-targeted methods to characterize human exposure to xenobiotics without a priori. These innovative approaches may therefore allow a change of scale in identifying chemical risk factors in epidemiological studies. However, non-targeted approaches are still subject to a number of barriers, partly linked to the presence of these xenobiotics at trace levels in biological matrices. An optimization of every analytical (i.e. sample preparation) and bioinformatic (i.e. data processing, annotation) step of the workflow is thus required. The main objective of this work is to implement an HRMS-based non-targeted workflow applicable to epidemiological studies, to provide an operational solution for characterizing the internal chemical exposome at large scale. The developments undertaken led to a simple sample-preparation workflow based on two complementary methods that expand the visible chemical space (up to 80% of features specific to one method). The optimization of various data-processing tools, performed for the first time in an exposomics context, demonstrated the necessity of adjusting key parameters to accurately detect xenobiotics. Moreover, the development of software to automate suspect-screening approaches using MS1 predictors, and of algorithms to compute confidence indices, enabled efficient prioritization of features for manual curation. A large-scale application of this optimized workflow to 125 serum samples from the Pélagie cohort demonstrated the robustness and sensitivity of the new workflow, and enriched the documented chemical exposome with the discovery of new biomarkers of exposure.
Yassine, Adnan. "Etudes adaptatives et comparatives de certains algorithmes en optimisation : implémentations effectives et applications." Grenoble 1, 1989. http://tel.archives-ouvertes.fr/tel-00332782.
Seznec, Mickaël. "From the algorithm to the targets, optimization flow for high performance computing on embedded GPUs." Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG074.
Current digital processing algorithms require more computing power to achieve more accurate results and to process larger data. In the meantime, hardware architectures are becoming more specialized, with highly efficient accelerators designed for specific tasks. In this context, the path from the algorithm to the deployed implementation becomes increasingly complex. It is, therefore, crucial to determine how algorithms can be modified to take advantage of new hardware capabilities. Our study focused on graphics processing units (GPUs), a massively parallel processor. Our algorithmic work was done in the context of radio astronomy and optical-flow estimation, and consisted in finding the best adaptation of the software to the hardware. At the level of a mathematical operator, we modified the traditional image-convolution algorithm to use the matrix units and showed that its performance doubles for large convolution kernels. At the broader method level, we evaluated linear solvers for the combined local-global optical flow to find the most suitable one on GPU. With additional optimizations, such as iteration fusion and memory-buffer reuse, the method is twice as fast as the initial implementation, running at 60 frames per second on an embedded platform (30 W). Finally, we also pointed out the interest of this hardware-aware algorithm design method in the context of deep neural networks. For that, we showed the hybridization of a convolutional neural network for optical-flow estimation with a pre-trained image-classification network, MobileNet, initially designed for efficient image classification on low-power platforms.
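The key idea behind running convolution on GPU matrix units is the classic im2col reformulation: gather every image patch into a row of a matrix so the whole convolution becomes one matrix product. The sketch below shows the principle in plain NumPy; the shapes, the correlation (no kernel flip) convention, and the single-channel case are illustrative choices, not the thesis's actual GPU implementation.

```python
# im2col sketch: expressing 2-D convolution as a matrix product, the
# reformulation that lets matrix units accelerate convolution.
import numpy as np

def im2col_conv2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1  # "valid" output height
    ow = image.shape[1] - kw + 1  # "valid" output width
    # Gather every kh x kw patch into one row of a matrix...
    cols = np.empty((oh * ow, kh * kw))
    for i in range(oh):
        for j in range(ow):
            cols[i * ow + j] = image[i:i + kh, j:j + kw].ravel()
    # ...so the whole convolution becomes a single matrix-vector product.
    return (cols @ kernel.ravel()).reshape(oh, ow)

img = np.arange(16.0).reshape(4, 4)
k = np.ones((2, 2))
print(im2col_conv2d(img, k))  # each output is the sum of a 2x2 patch
```

The trade-off is extra memory for the duplicated patches in exchange for a dense matrix multiply, which is exactly the operation matrix units execute at peak throughput.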
Jaber, Mohamad. "Implémentations Centralisée et Répartie de Systèmes Corrects par construction à base des Composants par Transformations Source-à-source dans BIP." Phd thesis, Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00531082.
Jaber, Mohamad. "Implémentations Centralisée et Répartie de Systèmes Corrects par construction à base des Composants par Transformations Source-à-source dans BIP." Phd thesis, Grenoble, 2010. http://www.theses.fr/2010GRENM062.
The thesis studies theory and methods for automatically generating centralized and distributed implementations from a high-level model of an application software in BIP. BIP (Behavior, Interaction, Priority) is a component framework with formal operational semantics. Coordination between components is achieved by using multiparty interactions and dynamic priorities for scheduling interactions. A key idea is to use a set of correct source-to-source transformations preserving the functional properties of a given application software. By applying these transformations, we can generate a full range of implementations, from centralized to fully distributed. Centralized implementation: the method transforms the interactions of an application software described in BIP and generates a functionally equivalent program. The method is based on the successive application of three types of source-to-source transformations: flattening of components, flattening of connectors, and composition of atomic components. We show that the system of transformations is confluent and terminates. By exhaustive application of the transformations, any BIP component can be transformed into an equivalent monolithic component, from which efficient standalone C++ code can be generated. Distributed implementation: the method transforms an application software described in BIP, for a given partition of its interactions, into a Send/Receive BIP model. Send/Receive BIP models consist of components coordinated by asynchronous message passing (Send/Receive primitives). The method leads to 3-layer architectures. The bottom layer includes the components of the application software, where atomic strong synchronization is implemented by sequences of Send/Receive primitives. The second layer includes a set of interaction protocols; each protocol handles the interactions of one class of the given partition.
The third layer implements a conflict-resolution protocol used to resolve conflicts between conflicting interactions of the second layer. Depending on the given partition, the execution of the obtained Send/Receive BIP model ranges from centralized (all interactions in the same class) to fully distributed (each class has a single interaction). From Send/Receive BIP models and a given mapping of their components onto a platform providing Send/Receive primitives, an implementation is automatically generated. For each class of the partition, we generate C++ code implementing the global behavior of its components. The transformations have been fully implemented and integrated into the BIP toolset. Experimental results on non-trivial examples and case studies show the novelty and efficiency of our approach.