Dissertations / Theses on the topic 'Optimisation de protocoles expérimentaux'
Consult the top 50 dissertations / theses for your research on the topic 'Optimisation de protocoles expérimentaux.'
Doha, Baccar. "Optimisation des protocoles expérimentaux dans les modèles de population." Paris 6, 1996. http://www.theses.fr/1996PA066460.
Lasnon, Charline. "Optimisation des protocoles et de la quantification en tomographie par émission de positon pour l'évaluation thérapeutique dans les domaines de la recherche préclinique et clinique." Caen, 2016. http://www.theses.fr/2016CAEN4083.
Several sources of SUV variability were studied in preclinical research. The use of iodinated contrast media improves PET-CT diagnostic performance, has a low impact on quantitative data and no effect on therapeutic evaluation. Furthermore, although tracer extravasation is quite frequent, it does not impact therapeutic evaluation, and a systematic correction does not seem necessary. Finally, the different reconstructions available (FBP, OSEM and MAP) have identical performance with regard to treatment response and recurrence after treatment. This last result contrasts with those obtained in clinical research, where PSF reconstruction improves diagnostic performance and significantly increases quantitative values compared with the OSEM reconstruction used as a reference in the EANM recommendations. SUVs extracted from PSF and OSEM reconstructions are therefore not comparable, which has a significant impact on therapeutic evaluation and emphasizes the need for a harmonization strategy. This work validated a harmonization strategy based on the use of two sets of images: a PSF reconstruction for diagnosis and a filtered PSF reconstruction for quantification. This strategy, proposed in the EANM and EARL guidelines, was validated for any type of solid tumor. Beyond SUVs, which are widely used in clinical practice, the method was also validated for the harmonization of intra-tumoral heterogeneity data.
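As background to the quantity being harmonized here: the body-weight-normalised SUV is computed from the reconstructed activity concentration, so any reconstruction-dependent bias propagates directly into it. A minimal sketch (standard definition, not code from the thesis):

```python
# Body-weight SUV: reconstruction-dependent activity concentration divided
# by injected activity per unit of body weight (standard definition).
def suv_bw(activity_bq_per_ml: float, injected_bq: float, body_weight_g: float) -> float:
    return activity_bq_per_ml / (injected_bq / body_weight_g)

# 12 kBq/mL in a lesion, 350 MBq injected, 70 kg patient -> SUV of about 2.4;
# a PSF reconstruction raising the measured concentration raises SUV with it.
print(suv_bw(12_000, 350e6, 70_000))
```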
Gross, Viktoriia. "An integrative approach to characterize and predict cell death and escape to beta-lactam antibiotic treatments." Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASB035.
Resistance to first-line antimicrobial drugs is now commonly encountered. In particular, the increasing fraction of commensal and pathogenic Escherichia coli expressing extended-spectrum beta-lactamases and/or carbapenemases is alarming. E. coli is a major cause of common infections such as urinary tract infections, affecting over 150 million people worldwide, and many infections relapse. An in-depth understanding of the susceptibility of E. coli clinical isolates to beta-lactams is therefore essential for proposing effective treatments. Bacteria can escape treatments in many different ways. Resistant bacteria grow and divide normally in the presence of antibiotics, and their characterization is easy using standard diagnostic tests. Resilient bacteria merely survive in the presence of antibiotics and regrow when the antibiotic is removed or degraded; this biphasic behavior complicates the prediction of treatment outcomes. Resilience to treatment is notably observed in collective antibiotic tolerance, where dead cells release beta-lactamases that degrade the antibiotic in the environment. Standard approaches are not adapted to quantifying and understanding the respective roles of resistance and resilience. Our main objectives were to quantify the dynamics of cell death during repeated treatments and the impact of different growth conditions on cell death. First, we developed novel protocols to address variability issues in optical density measurements and to perform colony-forming-unit assays efficiently. Using these techniques, we generated an extensive dataset describing the impact of repeated treatments on different clinical isolates. We then calibrated a model, previously developed in the team, of the population response to antibiotics and of the evolution of the environment in the context of collective antibiotic tolerance. Calibrated on our dataset, the model accounts for the temporal evolution of both biomass and live cell counts, and we demonstrated that it can predict live cell numbers from biomass measurements. In addition, this work highlighted the in vitro - in vivo gap by assessing the effect of different growth conditions on cell survival. To address this challenge, we studied the bacterial response in human urine and in Mueller-Hinton medium (the medium used for standard antibiotic susceptibility tests), as well as in a defined medium with different carbon sources. We observed better survival in urine than in Mueller-Hinton medium, but this result varied depending on the strain and the antibiotic concentration. Interestingly, the experimental data showed that nutrient concentration had no effect on growth rate, but a strong effect on carrying capacity and antibiotic response. Through model calibration and analysis of the identified parameter values, we identified biological processes that could explain the differences in bacterial behavior between media.
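The biphasic "decline then regrowth" dynamics described above can be illustrated with a deliberately simplified ODE sketch of collective antibiotic tolerance (generic model with assumed functional forms and parameters, not the team's calibrated model):

```python
import numpy as np
from scipy.integrate import odeint

def cat_model(y, t, r, kmax, delta, cap):
    live, dead, ab = y               # live cells, dead cells, antibiotic level
    kill = kmax * ab / (1.0 + ab)    # saturating kill rate (assumed form)
    dlive = r * live * (1.0 - (live + dead) / cap) - kill * live
    ddead = kill * live
    dab = -delta * dead * ab         # enzymes from lysed cells degrade the drug
    return [dlive, ddead, dab]

t = np.linspace(0.0, 24.0, 200)      # hours
sol = odeint(cat_model, [0.05, 0.0, 4.0], t, args=(0.8, 2.0, 0.5, 1.0))
# sol[:, 0] first drops while the antibiotic is active, then regrows once
# enough enzyme has been released: the resilience signature described above.
```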
De Araujo Novaes, Magdala. "Automatisation de protocoles expérimentaux : application au séquençage d'ADN." Aix-Marseille 2, 1993. http://www.theses.fr/1993AIX22037.
Flandé, Yves. "Protocoles expérimentaux, tests d'hypothèses et transfert, en sciences, à l'école primaire." Paris 7, 2000. http://www.theses.fr/2000PA070095.
The aim of this work is to study the possibilities open to, and difficulties encountered by, pupils (9 to 11 years old) confronted with the generalisation of hypotheses within the framework of an investigative approach. It deals with teaching aids that help pupils acquire, and even transfer, the capabilities brought into play in the two classes concerned, i.e. 4th and 5th grades. The first part of the research reviews work in the fields of epistemology, psychology, general education and educational software, and outlines the questions used to narrow down the requirements to be met during the learning periods. A second part describes four teaching units on four different subjects. In the course of their investigations, pupils have to put forward hypotheses concerning the dependence of the phenomenon upon one or more factors. They also have to plan the tests (each hypothesis must be tested with at least two conditions assigned to the factor), draw up the procedures, assess the results of the measurements carried out and finally infer conclusions about the dependence upon each factor. The third part brings together the various data collected during the learning periods (minutes of the group sessions, individual or group proposals written by pupils, individual assessments). It stresses the interest of the procedures and analyses put forward and focuses on the types of difficulties that pupils come across. It analyses the impact of the teaching aids offered for acquiring methodological capabilities and how such aids can be used in other studies. It also assesses the evolution of each individual pupil, as well as that of the two classes, over the last two school years, and emphasises the role of the teachers in the way they manage the class (how activities are planned out, situations proposed, techniques brought in and debates dealt with...).
Claudet, Joachim. "Aires marines protégées et récifs artificiels : méthodes d'évaluation, protocoles expérimentaux et indicateurs." Perpignan, 2006. http://www.theses.fr/2006PERP0736.
The management of Marine Protected Areas (MPAs) and Artificial Reefs (ARs) requires complex assessment and monitoring programmes dealing with different sources of variability. We studied and developed experimental designs and analysis methods suited to the monitoring of MPAs and ARs, building the methodology on existing data sets from the Northwestern Mediterranean. We built multi-criteria indicators allowing a statistically testable diagnosis of the impact of MPAs and ARs on reef fish assemblages. Ecological performance indicators make it possible to monitor the assessed system and convey an image of it to managers. It was possible to show the global response of the fish assemblages to protection by an MPA. This response was evidenced by increases in abundance, species richness or diversity, gradually through space and time and among various taxonomic groups and fish size classes. Large fish reacted faster to protection, and shallow habitats were more sensitive to the existence of an MPA. Our results can be useful for the implementation of new MPAs or the immersion of ARs, and for the development of their management plans. Keywords: Marine Protected Areas, Artificial Reefs, Impact Assessment, Temperate Fish, Indicators, Multivariate, Habitat, Monitoring, Statistical Power, Northwestern Mediterranean, Management
Pichot, Antoine. "Co-allocation de ressources distribuées : architectures, protocoles, optimisation." Phd thesis, Paris, ENST, 2008. http://pastel.archives-ouvertes.fr/pastel-00003806.
New computing applications nowadays require a physical distribution of computing resources. These geographically distributed resources, belonging to different organizations, must be associated logically in order to solve a given problem cooperatively or to provide a given service. The virtual infrastructure corresponding to the set of these distributed, remote resources and to the underlying networking facilities is called a Grid. Present models do not enable the network and other resources such as computing or storage to be co-allocated on demand, nor do they guarantee quality of service. The aim of this thesis is first to provide a review of the state of the art on co-allocation. For that purpose, various environments are considered: Web Services distributed resource management systems, the IP Multimedia Subsystem and the Generalized Multi-Protocol Label Switching architecture. We propose extensions to existing Grid toolkits, WS, IMS and GMPLS for dynamic resource co-allocation provisioning, and we investigate the suitability of each of these approaches for Grid service provisioning compared with the alternatives. We then analyze a WS-based protocol between a global resource coordinator (Grid scheduler) and local resource managers (local schedulers). Algorithms are proposed to model the possible interactions between the Grid scheduler, the network resource manager and the local schedulers, and a co-allocation algorithm is proposed to improve the efficiency seen by the end user and the resource providers. An analytical model is developed to predict and understand the performance; simulations are run to verify the validity of the model and of the results.
Roche, Gilles. "L'angiographie dynamique des membres inférieurs : évaluation, optimisation et protocoles." Rennes 1, 1992. http://www.theses.fr/1992REN1M036.
Szelechowski, Caroline. "Développement et validation de protocoles expérimentaux pour la réduction de la traînée aérodynamique d'un tricycle." Mémoire, Université de Sherbrooke, 2014. http://hdl.handle.net/11143/10577.
Full textEl, Chamie Mahmoud. "Optimisation, contrôle et théorie des jeux dans les protocoles de consensus." Thesis, Nice, 2014. http://www.theses.fr/2014NICE4094/document.
Consensus protocols have gained a lot of interest in recent years. In this thesis, we study optimization, control and game-theoretical problems arising in consensus protocols. First, we study optimization techniques for weight selection problems to increase the speed of convergence of discrete-time consensus protocols on networks. We propose to select the weights by applying an approximation algorithm, minimizing the Schatten p-norm of the weight matrix; we characterize the approximation error and show that the resulting problem can be solved in a totally distributed way. Then we propose a game-theoretical framework for an adversary that can add noise to the weights used by averaging protocols to drive the system away from consensus. We give the optimal strategies for the game players (the adversary and the network designer) and show that a saddle-point equilibrium exists in mixed strategies. We also analyze the performance of distributed averaging algorithms where the information exchanged between neighboring agents is subject to deterministic uniform quantization (e.g., when real values sent by nodes to their neighbors are truncated). Consensus algorithms require that nodes exchange messages persistently to reach consensus asymptotically; we propose a distributed algorithm that reduces the communication overhead while still guaranteeing convergence to consensus. Finally, we propose a score metric that evaluates the quality of clusters such that the faster the random walk mixes in the cluster and the slower it escapes, the higher the score, and we propose a local clustering algorithm based on this metric.
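For intuition on the weight-selection step, here is a toy sketch of the Schatten p-norm objective on a three-node path graph (my reading of the approach; an exhaustive grid search stands in for the distributed solver):

```python
import numpy as np

def schatten_p(W: np.ndarray, p: int = 4) -> float:
    """||W||_p = (sum of singular values^p)^(1/p), p even."""
    sigma = np.linalg.svd(W, compute_uv=False)
    return float((sigma ** p).sum() ** (1.0 / p))

def weight_matrix(a: float, b: float) -> np.ndarray:
    """Symmetric weights on a 3-node path graph; rows sum to 1."""
    return np.array([[1 - a, a, 0.0],
                     [a, 1 - a - b, b],
                     [0.0, b, 1 - b]])

grid = np.linspace(0.01, 0.49, 49)
best = min((schatten_p(weight_matrix(a, b)), a, b) for a in grid for b in grid)
print(best)  # edge weights minimising the Schatten-p surrogate of convergence time
```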
Zayani, Hafedh. "Etude et optimisation des protocoles de réseaux de capteurs sans fil." Paris, CNAM, 2009. http://www.theses.fr/2009CNAM0953.
The work presented in this thesis deals with the design of new energy-saving protocols for wireless sensor networks (WSNs). Following an in-depth analysis of the main research on this subject, we propose new multi-layer routing and medium access control protocols (ECo-MAC) that significantly increase the lifetime of these networks. After designing two generic node models, 'sensor' and 'base station', we evaluate the performance of these new proposals with the OPNET simulator. A comparative analysis against reference work shows, for various network configurations, the efficiency of our proposals in terms of energy savings and end-to-end latency. In a second step, after describing the activity of the network nodes with timed automata, we use the UPPAAL tool to verify the behaviour of the MAC protocol and to formally justify the values adopted during the simulation phase for certain parameters, in particular the duration of a time slot. In a final step, starting from a model of the backoff procedure of our ECo-MAC protocol based on discrete-time Markov chains, and using the PRISM probabilistic verification environment, we justify the choices made for the values of certain parameters of this procedure.
Ferrer, Ludovic. "Dosimétrie clinique en radiothérapie moléculaire : optimisation de protocoles et implémentation clinique." Nantes, 2011. https://archive.bu.univ-nantes.fr/pollux/show/show?id=b7183ac7-6fc1-4281-be4f-8a358c9320fc.
Molecular radiotherapy (mrt) consists in destroying tumour targets with radiolabelled vectors. This nuclear medicine specialty is being considered with increasing interest, for example following the success achieved in the treatment of non-Hodgkin lymphomas by radioimmunotherapy. One of the keys to mrt optimization is the personalisation of the absorbed doses delivered to the patient: this is required to ascertain that irradiation is focused on tumour cells while keeping the irradiation of surrounding healthy tissue at an acceptable (non-toxic) level. Radiation dose evaluation in mrt requires, on the one hand, the spatial and temporal localization of the injected radioactive sources by scintigraphic imaging and, on the other hand, knowledge of the media in which the emitted radiation propagates, given by CT imaging. Global accuracy relies on the accuracy of each of the steps that contribute to clinical dosimetry; there is no reference, standardized dosimetric protocol to date, and owing to heterogeneous implementations, evaluating the accuracy of the absorbed dose is a difficult task. In this thesis, we developed and evaluated different dosimetric approaches that allowed us to find a relationship between the absorbed dose to the bone marrow and haematological toxicity. Besides, we built a scientific project, called DosiTest, which aims at evaluating the impact of the various steps that contribute to the realization of a dosimetric study, by means of a virtual multicentric comparison based on Monte Carlo modelling.
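The personalised absorbed-dose calculation outlined above classically follows the MIRD schema, which ties together the two ingredients mentioned (time-activity from scintigraphy, propagation media from CT). As a reminder of the standard formalism (background, not thesis-specific):

```latex
% Mean absorbed dose to target region r_T, summed over source regions r_S:
%   \tilde{A}(r_S) : time-integrated activity in r_S (serial scintigraphy)
%   S(r_T \leftarrow r_S) : absorbed dose per decay (emissions + CT geometry)
\[
  D(r_T) = \sum_{r_S} \tilde{A}(r_S)\, S(r_T \leftarrow r_S)
\]
```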
Yahya, Eslam. "Modélisation, analyse et optimisation des performances des circuits asynchrones multi-protocoles." Grenoble INPG, 2009. http://www.theses.fr/2009INPG0145.
Asynchronous circuits are of potential interest in many respects. However, their modeling, analysis and optimization remain stumbling blocks to spreading the technology at a commercial level. This thesis concerns the development of an asynchronous circuit modeling method based on analytical models of the underlying handshake protocols. Building on this modeling method, a fast and accurate circuit analysis method is developed. This analysis provides full support for statistically variable delays and is able to analyze different circuit structures (linear/nonlinear, unconditional/conditional). In addition, it enables timing analysis, power analysis and process-effect analysis for asynchronous circuits. On top of these modeling and analysis methods, an optimization technique has been developed, based on selecting the minimum number of asynchronous registers required to satisfy the performance constraints. Using the proposed methods, the effect of the asynchronous handshake protocol on speed, on the distribution of power consumption and on sensitivity to process variability is studied. To validate the proposed methods, a set of tools was implemented in C++, Java and Matlab; these tools show high efficiency, high accuracy and fast response times.
Arnal, Fabrice. "Optimisation de la fiabilité pour des communications multipoints par satellite géostationnaire." Phd thesis, Télécom ParisTech, 2004. http://pastel.archives-ouvertes.fr/pastel-00001160.
Tourki, Kamel. "Conception et optimisation de protocoles de coopération pour les communications sans fil." Nice, 2008. http://www.theses.fr/2008NICE4006.
Cooperative mechanisms are becoming increasingly important in wireless communications and networks to substantially enhance system performance in terms of lower power consumption, higher system capacity and smaller packet loss rates. The idea of cooperation can be traced back to information-theoretic investigations of the relay channel in cellular networks. From the system point of view, since a Mobile Station (MS) is limited in antennas, power, cost and hardware, it is infeasible to use MIMO technology in the MS; mobile users with single antennas can nevertheless take advantage of spatial diversity through cooperative space-time encoded transmission. The objective of this thesis is to introduce and discuss various cooperative strategies in wireless communications. In the first part, we present an end-to-end performance analysis of two-hop asynchronous cooperative diversity with regenerative relays over Rayleigh block-flat-fading channels, in which a frame-based precoding scheme with packet-wise encoding is used. This precoding is based on the addition of a cyclic prefix implemented as a training sequence. We derive, for equal and unequal sub-channel gains, the bit-error rate and end-to-end bit-error rate expressions for binary phase-shift keying, and we also present the performance in terms of frame-error rate and end-to-end frame-error rate. Finally, comparisons between three system configurations, differing in the amount of cooperation, are presented. The second part contains two chapters. In the first chapter, we consider a scheme in which a relay chooses to cooperate only if its source-relay channel is of acceptable quality, and we evaluate the usefulness of relaying when the source acts blindly, ignoring the relays' decisions on whether to cooperate. We consider regenerative relays whose decisions to cooperate are based on a signal-to-noise ratio (SNR) threshold, and take into account the impact of data that may be erroneously detected and retransmitted by the relays. We derive the end-to-end bit-error rate (BER) for binary phase-shift keying modulation and examine two strategies for allocating power between the source and the relays in order to minimize the end-to-end BER at the destination at high SNR. In the second chapter, we consider a scheme in which the relay chooses to cooperate only if the source-destination channel is of unacceptable quality. We consider a regenerative relay whose decision to cooperate is based on an SNR threshold, account for the effect of possibly erroneously detected and retransmitted data at the relay, derive an expression for the end-to-end BER of binary phase-shift keying (BPSK) modulation, and determine the optimal strategy minimizing this end-to-end BER at the destination at high SNR. In the third part, we consider a multiple-access (MAC) fading channel with two users communicating with a common destination, where each user mutually acts as a relay for the other as well as wishing to transmit his own information, as opposed to having dedicated relays. We evaluate the usefulness of relaying from the point of view of the system's throughput (sum rate) rather than from the sole point of view of the user benefiting from the cooperation, as is typically done. We do this by allowing a trade-off between relaying and fresh data transmission through a resource allocation framework.
Specifically, we propose a cooperative transmission scheme allowing each user to allocate a certain amount of power to his own transmitted data while the rest is devoted to relaying. The underlying protocol is based on a modification of the so-called non-orthogonal amplify-and-forward (NAF) protocol. We develop capacity expressions for our scheme and derive the rate-optimal power allocation, in closed form, for centralized and distributed frameworks. In the distributed scenario, partially statistical and partially instantaneous channel information is exploited. The centralized power allocation algorithm indicates that even in a mutual cooperation setting like ours, on any given realization of the channel, cooperation is never truly mutual, i.e. one of the users will always allocate zero power to relaying the data of the other, and thus act selfishly. In the distributed framework, however, our results indicate that the sum rate is maximized when both mobiles act selfishly.
Randriatsiferana, Rivo Sitraka A. "Optimisation énergétique des protocoles de communication des réseaux de capteurs sans fil." Thesis, La Réunion, 2014. http://www.theses.fr/2014LARE0019/document.
To increase the lifetime of wireless sensor networks, one solution is to improve the energy efficiency of the communication protocols, and grouping the nodes of the network into clusters is one of the best methods. This thesis proposes several improvements obtained by changing the settings of the reference protocol LEACH. To improve the energy distribution of cluster-heads, we propose two centralized clustering protocols, k-LEACH and an optimized variant, k-LEACH-VAR. A distributed algorithm, called e-LEACH, is proposed to reduce the periodic exchange of information between the nodes and the base station during the election of cluster-heads. Moreover, the concept of energy balance is introduced in the election metric to avoid overloading nodes. We then present a decentralized version of k-LEACH which, in addition to the previous objectives, takes the overall energy consumption of the network into account. This protocol, called k-LEACH-C2D, also aims to promote the scalability of the network. To reinforce the autonomy of the networks, two probabilistic multi-hop routing protocols, denoted CB-RSM and FRSM, build elementary paths between the elected cluster-heads and the base station. The CB-RSM protocol forms a hierarchy of cluster-heads during the cluster-formation phase, with an emphasis on self-scheduling and self-organization between cluster-heads to make the network more scalable. These protocols rest on the basic idea that the nodes with the highest residual energy and the lowest variance of energy consumption become cluster-heads (as sketched below). Node power consumption therefore plays a central role in our proposals, and it is the subject of the last part of this thesis: we propose a methodology to characterize the consumption of a node experimentally, in order to better understand the consumption in the different node states, and we finally propose a global model of node consumption.
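The election idea at the heart of the abstract (highest residual energy, lowest variance of consumption) can be pictured as a score; the weighting and names below are illustrative, not the protocol's actual metric:

```python
import numpy as np

def election_score(residual_energy: float, past_consumption: list[float],
                   alpha: float = 0.5) -> float:
    """Higher score -> better cluster-head candidate (illustrative metric)."""
    return alpha * residual_energy - (1.0 - alpha) * float(np.var(past_consumption))

nodes = {
    "n1": (0.9, [0.02, 0.03, 0.02]),
    "n2": (0.9, [0.01, 0.08, 0.01]),  # same residual energy, burstier consumption
}
head = max(nodes, key=lambda n: election_score(*nodes[n]))
print(head)  # -> n1: at equal energy, the steadier consumer is elected
```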
Diebold, Simon. "Contribution à la définition d'une dosimétrie laser en thérapie photo-dynamique anti-cancéreuse : protocoles expérimentaux, mesures automatisées, modélisation associée." Vandoeuvre-les-Nancy, INPL, 1991. http://www.theses.fr/1991INPL034N.
Castelluccia, Claude. "Génération automatique d'implémentations optimisées de protocoles." Nice, 1996. http://www.theses.fr/1996NICE4957.
Full textBanaouas, Iskander. "Analyse et optimisation des protocoles d'accès dans les réseaux sans fil ad hoc." Paris 6, 2012. http://www.theses.fr/2012PA066005.
In this thesis, we study medium access protocols in wireless ad hoc networks by evaluating their performance against one another, using analytical modeling of these access techniques based on stochastic models. These models, supported by simulations, are useful to estimate a MAC protocol's sensitivity to certain network constraints. The thesis is organized as follows. First, we present the models used to describe the random behavior of some MAC protocols. Second, we apply these models to compare the signal-to-noise ratio in TDMA and CSMA networks. We then compare the most frequently used access protocols, CSMA and Aloha, using an analytical model based on Poisson point processes to calculate the coverage probability of Aloha, and finally we evaluate the gain of CSMA with respect to the two latter protocols. Although CSMA is the most widely used MAC protocol for MANETs, this technique is exposed to an unfairness phenomenon when the network is overloaded. We quantify this problem using a Markovian approach, which lets us predict the distribution of unfairness in the network and then suggest mitigating solutions based on contention-window tuning. Finally, we study the use of Aloha in cognitive networks. In this context, a secondary network tries to optimize its access probability so as to maximize its coverage density while the coverage probability of the primary network is degraded only by an acceptable amount.
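The PPP-based coverage analysis mentioned above can be mimicked numerically; the following Monte-Carlo sketch (standard interference-limited model, illustrative parameters) estimates the probability that a receiver at the origin decodes a transmitter at distance r under slotted Aloha:

```python
import numpy as np

rng = np.random.default_rng(0)
lam, p, r, theta, alpha = 0.1, 0.05, 1.0, 3.0, 4.0  # density, Aloha prob., link length, SIR threshold, path-loss exponent
window, trials, covered = 30.0, 20_000, 0

for _ in range(trials):
    n = rng.poisson(lam * np.pi * window ** 2)
    radii = window * np.sqrt(rng.random(n))   # PPP: uniform points in a disc
    active = radii[rng.random(n) < p]         # independent Aloha thinning
    interf = np.sum(active ** -alpha)
    covered += bool(interf == 0 or (r ** -alpha) / interf >= theta)

print(covered / trials)  # empirical coverage probability of the Aloha link
```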
Kermorvant, Claire. "Optimisation de protocoles d'échantillonage appliqués aux suivis de la biodiversité et des ressources." Thesis, Pau, 2019. http://www.theses.fr/2019PAUU3018/document.
Developing robust and reliable environmental surveys can be a challenge because of the inherent variation in natural environmental systems. This variation, which creates uncertainty in the survey results, can lead to difficulties in interpretation. The objective of this thesis was to develop a general framework, adaptable to environmental surveys, to improve scientific survey results. We developed a method that allows the user, by defining the desired level of accuracy for the survey results, to design an efficient sampling scheme with a minimal sample size. Once the sample size is known, calculating the total cost of the survey becomes straightforward. We start from the definition of sampling-design performance and build a method for comparing and assessing optimal sampling designs. As a rule of thumb, the more efficient a sampling design is, the fewer statistical units are needed to achieve the desired accuracy; with less sampling effort, the procedure becomes more cost-effective. Our method thus assists in identifying cost-efficient sampling procedures. The general methodology developed in this PhD thesis is assessed with three case studies. The first involved the design of an efficient survey when no prior data are available, using the example of the tiger mosquito in the Bayonne-Anglet-Biarritz agglomeration (south-west France). This species has only now started invading this area, so no site-specific data are available; we used data from other French Mediterranean cities to model the probability that mosquitoes are present in the area of interest, and used this modelled population to assess, compare and select an effective sampling procedure. The second case study dealt with survey optimization when only one season of data is available, using the Manila clam survey of Arcachon Bay in western France, a monitoring carried out biennially (i.e. once every two years) since 2006. We applied our general methodology to one year of data and demonstrated that survey costs can be reduced by 30% with no loss of accuracy or reduction in resource-management information. The third case study addressed the optimization of a survey when several seasons of data are available, using the same clam survey as a multi-year dataset, and we propose a long-term spatial and temporal sampling design for monitoring the clam resource.
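The core "accuracy first, then sample size, then cost" step reads, in its simplest textbook form, as below (a sketch under a normal-approximation assumption; the thesis works with full design-specific evaluations):

```python
import math

def min_sample_size(cv: float, target_rel_error: float, z: float = 1.96) -> int:
    """Smallest n keeping the relative error of the mean below the target
    at ~95% confidence. cv: population coefficient of variation (from pilot
    or modelled data)."""
    return math.ceil((z * cv / target_rel_error) ** 2)

# A patchy population (CV = 0.8) surveyed to within 15% relative error:
print(min_sample_size(0.8, 0.15))  # -> 110 sampling units; cost follows directly
```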
Rombaut, Matthieu. "Optimisation des réseaux, routage et dimensionnement." Versailles-St Quentin en Yvelines, 2003. http://www.theses.fr/2003VERS0015.
This study proposes an industrial approach to the problem of routing data over capacity-constrained networks. A number of mathematical studies have defined routing plans by solving linear or integer programs, but approximations must be made to apply these mathematical methods to real problems. Moreover, most of the proposed routings are simple (single-path): the use of shortest-path algorithms often constrains the flows to a single route and generally prevents the use of lightly loaded side links. We propose methods for routing flows over links of finite capacity, the Mille Feuilles routing, together with variants that limit the number of routes. These methods are applicable at the design stage as well as during network operation. Based on successive projections, they accommodate different cost functions and come close to the optimal solutions obtained with projected-gradient methods. Combined with a non-cumulative metric along the route, they make it possible to compute multi-route routing plans, to reduce the load of the most heavily loaded link in the network, and to increase the network's resilience to traffic variations and to single-link failures. We also evaluate the performance of several re-routing methods in the event of a single link failure, as a function of the routing method applied. The impact of re-routing on the network is evaluated, and the variation of link loads and of the average route length are bounded. The routing methods are not equivalent, and they adapt differently to the proposed re-routing policies. In addition, a new re-routing policy applicable to multi-route routing plans is introduced.
Philippe, Bernard. "Système d'aide à la conception des protocoles expérimentaux en communication homme-machine, fondé sur une méthodologie d'analyse orientée par les objets." Valenciennes, 1993. https://ged.uphf.fr/nuxeo/site/esupversions/60866969-4737-4d1c-bb57-bf7f40582f61.
Bouallegue, Mehdi. "Protocoles de communication et optimisation de l'énergie dans les réseaux de capteurs sans fil." Thesis, Le Mans, 2016. http://www.theses.fr/2016LEMA1011/document.
Wireless sensor networks (WSNs) are composed of a large number of sensor nodes that are typically battery-powered and designed to operate for a long period. Application areas are many and varied, including the environmental, medical and military fields. The major advantage of such a network is its large-scale deployment without any maintenance: the sensors do not need an established infrastructure to transmit data vital to the study of the environment. It is also necessary to ensure good quality of service, because wireless sensor networks must incorporate mechanisms that allow users to extend the lifetime of the entire network, as each node is supplied by a limited and generally irreplaceable power source. It is therefore necessary to optimize power consumption at all levels of the design of this type of network; consequently, minimizing power consumption is one of the most important design factors in sensor networks. The aim of this thesis is to study the different existing routing techniques in a multi-hop wireless context in order to obtain better performance. We study the most popular routing protocols and then propose a new routing protocol that optimizes energy consumption in wireless sensor networks while maintaining an optimal quality of service.
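To make the energy argument concrete, the first-order radio model commonly used in this literature is sketched below (textbook constants, not measurements from the thesis); it shows why multi-hop routing can pay off once distance-squared attenuation dominates:

```python
E_ELEC = 50e-9     # J/bit spent in TX/RX electronics (usual textbook value)
EPS_AMP = 100e-12  # J/bit/m^2 free-space amplifier coefficient

def tx_energy(bits: int, d: float) -> float:
    return bits * (E_ELEC + EPS_AMP * d ** 2)

def rx_energy(bits: int) -> float:
    return bits * E_ELEC

# 2000-bit packet over 100 m: one hop vs two 50 m hops (one relay reception)
one_hop = tx_energy(2000, 100.0)
two_hops = 2 * tx_energy(2000, 50.0) + rx_energy(2000)
print(one_hop, two_hops)  # ~2.1e-3 J vs ~1.3e-3 J: relaying wins here
```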
Pavkovic, Bogdan. "Vers le futur Internet d'objets au travers d'une optimisation inter-couches des protocoles standardisés." Phd thesis, Université de Grenoble, 2012. http://tel.archives-ouvertes.fr/tel-00849136.
Medjiah, Samir. "Optimisation des protocoles de routage dans les réseaux multi-sauts sans fil à contraintes." Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14663/document.
Great research efforts have been carried out in the field of challenged multihop wireless networks (MWNs). Thanks to the evolution of Micro-Electro-Mechanical Systems (MEMS) technology and nanotechnologies, multihop wireless networks have become the solution of choice for a plethora of problems. The main advantage of these networks is their low manufacturing cost, which permits one-time application lifecycles. However, while nodes are cheap to produce, they are also less capable in terms of radio range, bandwidth, processing power, memory, energy, etc. Thus applications need to be carefully designed, especially the routing task, because radio communication is the most energy-consuming functionality and energy is the main issue in challenged multihop wireless networks. The aim of this thesis is to analyse the different challenges that govern the design of such networks: application challenges in terms of quality of service (QoS), fault tolerance, data delivery model, etc., but also networking challenges in terms of dynamic network topology, topology voids, etc. Our contributions focus on the optimization of routing under different application requirements and network constraints. First, we propose an online multipath routing protocol for QoS-based applications using wireless multimedia sensor networks. The proposed protocol relies on the construction of multiple paths while transmitting data packets to their destination, i.e. without prior topology discovery and path establishment. It achieves parallel transmissions and enhances end-to-end transmission by maximizing path bandwidth and minimizing delays, thus meeting the requirements of QoS-based applications. Second, we tackle the problem of routing in mobile delay-tolerant networks by studying the intermittent connectivity of nodes and deriving a contact model to forecast future node contacts. Based on this contact model, we propose a routing protocol that makes use of node locations, node trajectories and inter-node contact predictions to make forwarding decisions. The protocol achieves low end-to-end delays while making efficient use of the nodes' constrained resources in terms of memory (packet-queue occupancy) and processing power (forecasting algorithm). Finally, we present a topology control mechanism, together with a packet-forwarding algorithm, for event-driven applications using stationary wireless sensor networks. Topology control is achieved through a distributed duty-cycle scheduling algorithm whose parameters can be tuned according to the desired size of each node's awake neighbourhood. The proposed mechanism ensures a trade-off between event-reporting delay and energy consumption.
Perronnet, Florent. "Régulation coopérative des intersections : protocoles et politiques." Thesis, Belfort-Montbéliard, 2015. http://www.theses.fr/2015BELF0259/document.
The objective of this work is to use the potential offered by wireless communication and autonomous vehicles to improve traffic flow at an isolated intersection and in a network of intersections. We define a protocol, called CVAS (Cooperative Vehicle Actuator System), for managing an isolated intersection. CVAS distributes the right of way separately to each vehicle according to a specific order determined by a computed sequence. In order to optimize the sequence, we define a DCP (Distributed Clearing Policy) to improve the total evacuation time of the intersection. The control strategy is investigated through two modeling approaches: graph theory is first used to calculate the optimal solution according to the arrival times of all vehicles, and a timed Petri net model is then used to propose a real-time control algorithm. Tests with real vehicles were carried out to study the feasibility of CVAS, and simulations of realistic traffic flows were performed to assess our algorithm and compare it with conventional traffic lights. Managing a network of intersections raises the issue of gridlock. We propose the CVAS-NI protocol (Cooperative Vehicle Actuator System for Networks of Intersections), an extension of CVAS that prevents deadlock in the network through occupancy and reservation constraints. With a deadlock-free network, we extend the study to the traffic routing policy. Finally, we generalize the proposed control system to synchronize vehicle velocities at intersections.
Henriet, Julien. "Evaluation, optimisation et validation de protocoles distribués de gestion de la concurrence pour les interactions coopératives." Besançon, 2005. http://www.theses.fr/2005BESA2023.
Cooperative work over the Internet introduces constraints in terms of access to and modification of shared objects. Users need to access the same objects concurrently and in real time: at any moment, the same image of the production area must be shared on every connected site. The first chapter of this thesis is a state of the art of communication protocols, consistency management protocols and telemedicine platforms that allow collaborative work. The second chapter presents a new protocol for consistency management on this kind of platform; a probabilistic study proves and evaluates the optimization and the cost of this new protocol, called the Optimistic Pilgrim. An analysis, optimization and validation of the Chameleon protocol, dedicated to communication management over a collaborative platform, is presented in the third chapter. Finally, the fourth chapter evaluates the performance of these protocols through a prototype implementation and assesses the suitability of each protocol for each tool of the collaborative teleneurology platform of the TéNeCi project.
Rattaz, Aurélie. "Étude de l'oxydation de la cellulose II par le NO2 : caractérisation et optimisation des paramètres expérimentaux." Université Joseph Fourier (Grenoble ; 1971-2015), 2009. http://www.theses.fr/2009GRE10046.
Oxidized cellulose and its biomedical use have been known for more than 50 years. It can be absorbed by the human body, making oxidized cellulose an attractive material for medical applications. The oxidation of cellulose can be achieved by treating cellulose with several oxidizing agents, including meta-periodate, hypochlorite, dichromate and nitrogen dioxide. However, the only method suitable for preparing bio-absorbable, antibacterial and hemostatic materials while maintaining appropriate physical properties is oxidation by nitrogen dioxide. Early work on the oxidation of cellulose to produce bio-absorbable cellulose was conducted by W. O. Kenyon et al. In these pioneering oxidation processes, the authors found that cellulose could be oxidized using either gaseous nitrogen dioxide or a solution of nitrogen dioxide in inert chlorinated hydrocarbon solvents. Then, in 1968, W. H. Ashton et al. improved the process for preparing oxidized cellulose with NO2 in non-aqueous solvents. These chlorinated hydrocarbon and chlorofluorocarbon (CFC) solvents, though suitable for the cellulose oxidation process owing to their stability towards NO2, are known to cause environmental problems. In an effort to oxidize cellulose while minimizing the problems associated with CFCs, Boardman et al. proposed a process for oxidizing cellulose with a solution of nitrogen dioxide in perfluorocarbon solvents. We have developed an alternative process based on the use of supercritical carbon dioxide (SCCO2) as the NO2 solvent. SCCO2, and densified CO2 in general, has the advantage not only of being inert towards NO2 but also of leaving no trace of solvent residue in the final product, a critical issue when the production of medical devices is considered. In this work, the degrees of oxidation were determined by conductometric and pH-metric titration and were in agreement with those deduced from solid-state NMR and FT-IR. Quantitative treatment of X-ray diffraction and solid-state NMR spectra allowed the crystallinity index, size and orientation of the oxidized cellulose crystals to be determined. The results obtained made it possible to assess the influence of temperature, NO2 concentration, pressure and reaction time. A liquid-state NMR study allowed structures formed by secondary reactions to be proposed.
Lorchat, Jean. "Optimisation des communications pour les environnements mobiles embarqués." Université Louis Pasteur (Strasbourg) (1971-2008), 2005. https://publication-theses.unistra.fr/public/theses_doctorat/2005/LORCHAT_Jean_2005.pdf.
Embedded mobile devices are battery-powered devices, including traditional laptops as well as PDAs, cell phones and onboard car computers. During this thesis work, we focused on studying the communication protocols available to these devices in order to optimize their energy efficiency. To remain usable, these optimized protocols must be able to support highly constrained traffic like video and voice. We first developed models of several wireless communication technologies, based on field measurements for the various states of the hardware. We then focused on the IEEE 802.11 family of protocols, in which the only power-saving mechanism effectively disables real-time communication by adding a huge amount of latency to inbound traffic, and works only for inbound traffic. We therefore developed and patented a new mechanism that sits on top of the existing protocol while remaining backwards compatible. This mechanism increases battery life by 10%, with so little latency that real-time communications are not affected, and it behaves the same in both directions, making it bidirectional. Finally, based on our observations of the limits of the IEEE 802.11 medium access layer, from both the energy-efficiency and fairness standpoints, we developed a new medium access layer that replaces the one in the IEEE 802.11 protocol while keeping the same underlying physical layer.
Calcagni, Nicolas. "L’évaluation des prises en charge non-médicamenteuses dans le cadre d’affections chroniques. Etudes interventionnelles basées sur des Protocoles Expérimentaux à Cas Unique." Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0222.
Today, Non-Pharmacological Interventions (NPIs) and the other procedures that may be associated with them (traditional medicine, complementary and alternative medicine) carry a weight that should not be underestimated in the perspective of integrative health. Robust scientific evaluation is necessary to sort harmful or inefficient practices from those that show real benefits. In this field, randomized controlled trials (RCTs) reign supreme, but their intrinsic limitations are debatable. Through a systematic review of the literature focusing on manipulative and body-based practices as supportive care in cancer, we confirmed how difficult it is for RCTs to support a definitive decision. We then presented a different and little-taught intervention method, the single-case experimental design (SCED), and illustrated it through four studies evaluating different NPIs in various health topics (Parkinson's disease and a serious game, musical intervention in palliative care, hypnosis and renal disease, and shiatsu and painful menstruation). These studies reported interesting results and provided an opportunity to discuss the strengths and weaknesses of the method. We then argued in favor of its use, given its legitimate experimental principles and its fit with evidence-based practice. Finally, the low quality of the studies we conducted gave us an opportunity to propose a list of recommendations and pitfalls to consider when using SCED.
Retout, Sylvie. "Optimisation de protocoles en régression non linéaire à effets mixtes : applications en pharmacocinétique de population." Paris 11, 2003. http://www.theses.fr/2003PA11TO16.
Rani, Kaddour. "Stratégies d’optimisation des protocoles en scanographie pédiatrique." Thesis, Université de Lorraine, 2015. http://www.theses.fr/2015LORR0282/document.
For the last ten years, computed tomography (CT) procedures and their increased use have been a major source of concern in the scientific community. This concern has been the starting point for several studies aiming to optimize the dose while maintaining diagnostic image quality. In addition, it is important to pay special attention to dose levels for children (here, from newborns to 16-year-old patients): children are more sensitive to ionizing radiation, and they have a longer life expectancy. Optimizing CT protocols is a very difficult process due to the complexity of the acquisition parameters, starting with the individual patient characteristics and taking into account the available CT device and the required diagnostic image quality. This PhD project contributes to the advancement of knowledge by: (1) developing a new approach that minimizes the number of test CT scans while building a predictive mathematical model allowing radiologists to anticipate prospectively how changes in protocols will affect image quality and delivered dose, for four CT scanner models; (2) setting up a Generic Optimized Protocol (based on the size of the CATPHAN 600 phantom) for four CT scanner models; (3) developing a methodology to adapt the GOP to five sizes of pediatric patients using the Size-Specific Dose Estimate (SSDE) calculation; (4) evaluating subjective and objective image quality between size-based optimized CT protocols and age-based CT protocols; (5) developing a CT protocol optimization tool and a tutorial helping radiologists in the optimization process.
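Point (3) rests on the SSDE idea of AAPM Report 204: scale the scanner-reported CTDIvol by a factor that decreases with patient size. A hedged sketch (the exponential form is standard; the coefficients below are illustrative, not the thesis's calibrated values):

```python
import math

def ssde(ctdi_vol_mgy: float, effective_diameter_cm: float,
         a: float = 3.7, b: float = 0.037) -> float:
    """Size-Specific Dose Estimate: CTDIvol times a size-dependent factor."""
    return ctdi_vol_mgy * a * math.exp(-b * effective_diameter_cm)

# The same 5 mGy CTDIvol maps to a noticeably higher SSDE for a 15 cm
# paediatric diameter than for an adult-sized 30 cm patient.
print(ssde(5.0, 15.0), ssde(5.0, 30.0))
```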
Anouar, Hicham. "Conception et Optimisation de Protocoles d'Accès Multiple au Canal pour les Réseaux Ad Hoc sans Fil." Phd thesis, Télécom ParisTech, 2006. http://pastel.archives-ouvertes.fr/pastel-00001867.
Jung, Laura. "Optimisation de protocoles de reprogrammation de cellules somatiques humaines en cellules souches à pluripotence induite (hiPSC)." Thesis, Strasbourg, 2013. http://www.theses.fr/2013STRAJ066.
In 2006 and 2007, the Yamanaka and Thomson teams achieved the reprogramming of mouse and human somatic cells into pluripotent stem cells through the transfection of two cocktails of genes: OCT4, SOX2, KLF4, cMYC (OSKM) and OCT4, NANOG, SOX2, LIN28 (ONSL). The generated cells, called induced Pluripotent Stem Cells (iPSC), share the fundamental properties of ESC: self-renewal, pluripotency maintenance and the capacity to differentiate into the three germ layers, and they suggest the same application potential in basic research (developmental and epigenetic biology) as well as in therapy (regenerative medicine, disease modeling for drug development). One of the major advantages of iPSC lies in their non-embryonic origin: the use of iPSC resolves the ethical constraints and offers the possibility of working with many cell types derived directly from the patient to treat. Stéphane Viville's research team aims to develop a hiPSC bank from patients suffering from genetic or other diseases, which will be made available to the scientific community. We are experienced in reprogramming human primary fibroblasts, especially with the use of two polycistronic cassettes: ONSL, encoding Thomson's cocktail, and OSKM, encoding Yamanaka's cocktail, separated by 2A peptides. Thanks to the combination of the RV-ONSL and RV-OSKM retroviral vectors (developed with Vectalys), we achieve more than 2% reprogramming efficiency in a highly reproducible way; indeed, we demonstrated the reprogramming synergy of the ONSL and OSKM combination. We are now focusing our efforts on non-integrative strategies (i.e. mRNA), which are more appropriate for clinical use.
Anouar, Hicham. "Conception et optimisation de protocoles d'accès multiple au canal pour les réseaux ad hoc sans fil." Paris : École nationale supérieure des télécommunications, 2006. http://catalogue.bnf.fr/ark:/12148/cb40979941q.
Bonaldi, Vincent-Marie. "Tomodensitométrie spiralée et optimisation des protocoles d'injection du contraste iodé intra-veineux : études expérimentales en pathologie humaine." Lyon 1, 1998. http://www.theses.fr/1998LYO1T072.
Full textDos, santos Nicolas. "Optimisation de l'approche de représentativité et de transposition pour la conception neutronique de programmes expérimentaux dans les maquettes critiques." Phd thesis, Université de Grenoble, 2013. http://tel.archives-ouvertes.fr/tel-00979526.
Dos Santos, Nicolas. "Optimisation de l'approche de représentativité et de transposition pour la conception neutronique de programmes expérimentaux dans les maquettes critiques." Thesis, Grenoble, 2013. http://www.theses.fr/2013GRENI033/document.
The work performed during this thesis focused on the propagation of uncertainties (nuclear data, technological uncertainties, calculation biases, etc.) to integral parameters, and on the development of a novel approach enabling this uncertainty to be reduced a priori, directly from the design phase of a new experimental program. This approach is based on a multi-parameter, multi-criteria extension of representativity and transposition theories. The first part of this PhD work covers an optimization study of sensitivity and uncertainty calculation schemes at different modeling scales (cell, assembly and whole core) for LWRs and FBRs. A degraded scheme, based on standard and generalized perturbation theories, was validated for the calculation of uncertainty propagation to various integral quantities of interest. It demonstrated the good a posteriori representativity of the EPICURE experiment for the validation of mixed UOX-MOX loadings, as well as the importance of some nuclear data in the power-tilt phenomenon in large LWR cores. The second part of this work was devoted to the development of methods and tools for the optimized design of experimental programs in ZPRs. These methods are based on multi-parameter representativity, using several quantities of interest simultaneously. Finally, an original study was conducted on the rigorous estimation of correlations between experimental programs in the transposition process. Coupling experimental correlations with the multi-parameter representativity approach makes it possible to efficiently design new programs able to answer additional qualification requirements on calculation tools.
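For readers unfamiliar with the formalism being extended: the classical single-parameter representativity factor between an experiment e and a reference design r is (standard form; the thesis generalizes it to several parameters and criteria):

```latex
% S_e, S_r : sensitivities of the experiment's and reference design's
%            integral quantities to nuclear data; D : their covariance matrix.
\[
  r_{er} = \frac{S_e^{T} D\, S_r}
                {\sqrt{\left(S_e^{T} D\, S_e\right)\left(S_r^{T} D\, S_r\right)}}
\]
% With a perfectly accurate experiment, transposition multiplies the a priori
% design variance by (1 - r_{er}^2), hence the push for r_{er} close to 1.
```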
Delpon, Grégory. "Optimisation des protocoles d'imagerie quantitative planaire pour la dosimétrie lors des études cliniques de radioimmunothérapie à l'iode 131." Toulouse 3, 2002. http://www.theses.fr/2002TOU30165.
Dahmani, Safae. "Modèles et protocoles de cohérence de données, décision et optimisation à la compilation pour des architectures massivement parallèles." Thesis, Lorient, 2015. http://www.theses.fr/2015LORIS384/document.
Manycore architectures consist of hundreds to thousands of embedded cores, distributed memories and a dedicated network on a single chip. In this context, and because of the scale of the processor, providing a shared memory system has to rely on efficient hardware and software mechanisms and on data consistency protocols. Numerous works have explored consistency mechanisms designed for highly parallel architectures, and they lead to the conclusion that no single protocol fits all applications and hardware contexts. In order to deal with consistency issues for this kind of architecture, we propose in this work a multi-protocol compilation toolchain in which the shared data of the application can be managed by different protocols. Protocols are chosen and configured at compile time, according to the application's behaviour and the targeted architecture's specifications. The application behaviour is characterized by a static analysis process that helps guide the assignment of protocols to each data access. The platform offers a protocol library in which each protocol is characterized by one or more parameters; the range of possible values of each parameter depends on constraints mainly related to the targeted platform. The protocol configuration relies on a genetic engine that instantiates each protocol with appropriate parameter values according to multiple performance objectives. In order to evaluate the quality of each proposed solution, we use different evaluation models. We first use an analytical traffic model, which gives NoC communication statistics but no timing information; we therefore propose two cycle-based evaluation models that provide more accurate performance metrics while taking into account the contention caused by the consistency protocols' communications. We also propose a cooperative cache consistency protocol that improves the cache miss rate by sliding data to less stressed neighbours, together with an extension that dynamically defines the sliding radius assigned to each data migration, based on the mass-spring physical model. Experimental validation of the different contributions compares the sliding-based protocols with a four-state directory-based protocol.
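The compile-time configuration engine can be pictured with a toy genetic loop such as the one below (illustrative encoding, cost table and operators; the real toolchain scores candidates with its traffic and cycle-based models):

```python
import random

random.seed(1)
PROTOCOLS = {"write-invalidate": 3, "write-update": 5, "sliding": 2}  # toy costs

def random_config(n_shared: int) -> list[tuple[str, int]]:
    """One gene per shared datum: (protocol, one integer parameter)."""
    return [(random.choice(list(PROTOCOLS)), random.randint(1, 4))
            for _ in range(n_shared)]

def fitness(cfg) -> float:
    return -sum(PROTOCOLS[p] + radius for p, radius in cfg)  # lower cost is fitter

def evolve(pop: list, gens: int = 50) -> list:
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]                 # keep the fitter half
        pop = parents + [[random.choice(genes)         # uniform crossover
                          for genes in zip(*random.sample(parents, 2))]
                         for _ in range(len(pop) - len(parents))]
    return max(pop, key=fitness)

best = evolve([random_config(6) for _ in range(20)])   # one protocol per shared datum
```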
Besch, Guillaume. "Optimisation du contrôle glycémique en chirurgie cardiaque : variabilité glycémique, compliance aux protocoles de soins, et place des incrétino-mimétiques." Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCE016/document.
Full textStress hyperglycemia and glycemic variability are associated with increased morbidity and mortality in cardiac surgery patients. Intravenous (IV) insulin therapy using complex dynamic protocols is the gold-standard treatment for stress hyperglycemia. While the optimal blood glucose target range remains a matter of debate, blood glucose control using IV insulin therapy protocols has become part of good clinical practice during the postoperative period, but implies a significant increase in nurse workload. In the 1st part of the thesis, we aimed at checking nurse compliance with the insulin therapy protocol used in our Cardiac Surgery Intensive Care Unit 7 years after its implementation. Major deviations were observed, and simple corrective measures restored a high level of nurse compliance. In the 2nd part of the thesis, we aimed at assessing whether blood glucose variability could be related to poor outcome in transcatheter aortic valve implantation (TAVI) patients, as reported for more invasive cardiac surgery procedures. The analysis of data from patients who underwent TAVI in our institution and were included in the multicenter FRANCE and FRANCE-2 registries suggested that increased glycemic variability is associated with a higher rate of major adverse events occurring between the 3rd and the 30th day after TAVI, regardless of hyperglycemia. In the 3rd part of the thesis, we conducted a randomized controlled phase II/III trial to investigate the clinical effectiveness of IV exenatide for perioperative blood glucose control after coronary artery bypass graft surgery. Intravenous exenatide failed to improve blood glucose control and to decrease glycemic variability, but allowed the start of insulin infusion to be delayed and the required insulin dose to be lowered; moreover, it could allow a significant decrease in nurse workload. The ancillary analysis of this trial suggested that IV exenatide neither provided a cardioprotective effect against myocardial ischemia-reperfusion injury nor improved left ventricular function. Strategies aiming at improving nurse compliance with insulin therapy protocols and at reducing blood glucose variability could be suitable to improve blood glucose control in cardiac surgery patients. The use of GLP-1 analogues in cardiac surgery patients needs to be investigated further.
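For readers unfamiliar with the notion, glycemic variability is usually summarized by dispersion indices of the blood glucose series. A minimal sketch, assuming standard deviation and coefficient of variation as the (illustrative) indices, since the abstract does not name the one used:

```python
import statistics

def glycemic_variability(glucose_mmol_l):
    """Two common variability indices: standard deviation and
    coefficient of variation (CV = SD / mean)."""
    mean = statistics.mean(glucose_mmol_l)
    sd = statistics.stdev(glucose_mmol_l)
    return {"mean": mean, "sd": sd, "cv_percent": 100 * sd / mean}

# Hourly postoperative readings (mmol/L); values are made up
print(glycemic_variability([8.2, 9.6, 7.4, 10.1, 6.9, 8.8]))
```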
Blanc de Lanaute, Nicolas. "Développement et optimisation de méthodes de mesures neutroniques par chambre à fission auprès de réacteurs expérimentaux. Maîtrise, traitement et réduction des incertitudes." Thesis, Clermont-Ferrand 2, 2012. http://www.theses.fr/2012CLF22217.
Full textNuclear measurement, and in particular neutronic measurement, plays a key role in nuclear research and industry. Neutrons, when detected, are able to provide key pieces of information on the behavior of, for example, a nuclear reactor core. This allows, among other things, safe operation of the reactor, qualification of the calculation tools used for the conception of future reactors (such as the JHR or 4th-generation reactors) and progress in fundamental research by improving nuclear data libraries. The result of each measurement is affected by an uncertainty which depends on many factors; its estimation is a necessity and its reduction is one of the major challenges taken up by the CEA. Neutrons are not charged particles and are therefore unable to directly ionize the gas of a gas-filled detector; their detection with this kind of measurement tool thus requires a conversion reaction, which is, in the case of the fission chamber, the induced fission reaction. The reduction and mastery of the uncertainties affecting fission chamber measurements are the core of this thesis. This work was achieved within the Experimental Program Laboratory (LPE) of the Experimental Physics Section (SPEx) at CEA Cadarache. It is divided into four parts:
· The first one consists of a state of the art of fission chamber measurements within the framework of zero-power experimental reactors. It compiles knowledge about the measurement techniques, technologies and physics used for neutron detection.
· The second part studies the optimization of two of the key parameters defining the design of a fission chamber:
o the fissile deposit thickness: the results, obtained through simulation, allowed a better understanding of this parameter's impact on measurements, which led to an improvement of the design of future detectors;
o the filling gas type and pressure: a thorough experimental parametric study was carried out in the MINERVE facility, enabling an understanding of the impact of both filling gas characteristics on results. New filling standards were identified and are now taken into account when designing new detectors; these standards halve the measurement uncertainties due to pressure variations and enable fission chambers to be used in a wider variety of experimental setups.
· The third part of this work focuses on the improvement of the electronic equipment and post-treatments used for fission chamber measurements. Three innovative acquisition devices were chosen for testing in MINERVE. The results obtained enable a set of short-term and long-term recommendations concerning the update of the instrumentation used in the SPEx zero-power reactors. In addition, a new dead-time correction method was developed during the thesis and is presented in this part; its positive impact on rod-drop measurements is given as an illustration, as the gap between experimental results and expected values is divided by four thanks to this innovative correction method.
· The last part is about the optimization of spectral index measurements. The most important parameters regarding spectral index assessment are studied, their impact on the spectral index is quantified and their respective acquisition methods are optimized. The study concentrated mainly on the acquisition of calibration data. This work led to significant improvements, most notably concerning the "238U fission / 235U fission" spectral index measured in the MINERVE core. The gap between calculation and experimental results has been greatly reduced (from 35.70% to 0.17%) and the associated uncertainty has also been diminished (from 15.7% to 5.6%). These results also explain abnormal gaps between calculation and experiment observed in measurements performed in the MINERVE facility in 2004. (...)
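For context, the classical non-paralyzable dead-time correction that such methods improve upon can be written in a few lines; this is only the textbook baseline, not the new method developed in the thesis:

```python
def deadtime_correct(measured_rate_cps: float, tau_s: float) -> float:
    """Classical non-paralyzable dead-time correction:
    true_rate = measured_rate / (1 - measured_rate * tau)."""
    loss = measured_rate_cps * tau_s
    if loss >= 1.0:
        raise ValueError("measured rate saturates the non-paralyzable model")
    return measured_rate_cps / (1.0 - loss)

# 1e5 counts/s with a 2 microsecond dead time -> ~25% correction
print(f"{deadtime_correct(1e5, 2e-6):.0f} counts/s")
```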
Incandella, Olga. "Définition de protocoles rationnels d'identification de lois de comportement élastoplastiques : Application à la simulation éléments finis d'opérations industrielles d'emboutissage." Chambéry, 2006. http://www.theses.fr/2006CHAMS021.
Full textThe objective of this thesis, carried out within the framework of the European project "Intelligent system for NET shape FORming of Sheet MEtal Product", is the development of optimized procedures for adjusting numerical simulations of industrial stamping operations. The procedure highlights the key role of the identification of elastoplastic behaviour laws for sheet metal in ensuring a reliable simulation. For this purpose, traditional identification of the constitutive laws from tensile tests is used initially. Various techniques and protocols are retained to take best advantage of this kind of test, in particular the experimental technique of optical strain analysis. These choices lead to a notable improvement of the numerical results concerning elastic spring-back, the state of residual stresses and the prediction of strain localisation. However, the limits of this kind of identification, based on homogeneous tests, appear clearly. This is why we propose an original method for identifying the parameters of the behaviour law from heterogeneous tests close to the real case of sheet metal forming. The optimisation of the coefficients of the mechanical constitutive law is based on artificial neural networks. The advantages of this method are shown on an example of stamping of axisymmetric parts with a blocked blank.
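To make the inverse-identification idea concrete, here is a minimal sketch in which a small neural network learns the map from a simulated material response back to the law's coefficients; the Hollomon hardening law and all numbers are illustrative stand-ins, not the constitutive law or data of the thesis:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
strains = np.linspace(0.01, 0.3, 20)

def hollomon(K, n):
    """Hollomon hardening law sigma = K * eps^n (illustrative stand-in)."""
    return K * strains ** n

# Synthetic "simulated test" database: responses -> parameters
K_samples = rng.uniform(300, 900, 2000)    # strength coefficient (MPa)
n_samples = rng.uniform(0.05, 0.4, 2000)   # hardening exponent
X = np.array([hollomon(K, n) for K, n in zip(K_samples, n_samples)])
y = np.column_stack([K_samples, n_samples])

# The inverse map (measured response -> constitutive parameters)
# approximated by a small multilayer perceptron
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X, y)

# "Measured" curve from an unseen test: K = 600 MPa, n = 0.2
print(model.predict(hollomon(600, 0.2).reshape(1, -1)))
```

In the thesis the training data would come from finite-element simulations of the heterogeneous test rather than a closed-form law.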
Fortuny, Cédric. "Estimation du trafic, planification et optimisation des ressources pour l'ingénierie des réseaux IP/MPLS." Toulouse 3, 2008. http://thesesups.ups-tlse.fr/1198/.
Full textIP networks have become critical systems in the last decade: service interruptions, or even significant service degradations, are less and less tolerable. A new network engineering approach is therefore required to help design, plan and control IP architectures on the basis of supervision information. Our contributions to this approach concern traffic matrix estimation from SNMP link loads, IP routing weight optimization and network dimensioning. The models and algorithms proposed in this thesis take many technological constraints into account in order to provide operational solutions.
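Traffic matrix estimation from link loads is an underdetermined inverse problem: there are far more origin-destination (OD) flows than measured links. A common remedy, sketched below under simplifying assumptions, starts from a gravity-style prior and applies the smallest correction consistent with the SNMP measurements; this tomogravity-style step is a standard technique, not necessarily the estimator developed in the thesis:

```python
import numpy as np

# Routing matrix A (rows = links, cols = OD flows) for a 3-node line
# network 1--2--3, with flows (1->2), (1->3), (2->3).
A = np.array([[1.0, 1.0, 0.0],    # link 1->2 carries (1,2) and (1,3)
              [0.0, 1.0, 1.0]])   # link 2->3 carries (1,3) and (2,3)

y = np.array([70.0, 90.0])        # SNMP link loads (Mb/s)

# Gravity prior: split each link's load evenly (crude stand-in for a
# gravity model built from per-node in/out volumes)
x_gravity = np.array([35.0, 35.0, 45.0])

# Smallest (least-squares) correction of the prior that makes the
# flows consistent with the measured link loads: x = x_g + A^T (A A^T)^+ (y - A x_g)
correction = A.T @ np.linalg.pinv(A @ A.T) @ (y - A @ x_gravity)
x = np.clip(x_gravity + correction, 0.0, None)
print(x)                           # estimated OD traffic matrix
```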
Sadki, Abdellah. "Planification des chimiothérapies ambulatoires avec la prise en compte des protocoles de soins et des incertitudes." Phd thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2012. http://tel.archives-ouvertes.fr/tel-00732983.
Full text
Bertrand, Sébastien. "Optimisation de réseaux multiprotocoles avec encapsulation." Clermont-Ferrand 2, 2004. http://www.theses.fr/2004CLF21486.
Full text
Lampin, Quentin. "Réseaux urbains de capteurs sans-fil : Applications, caractérisation et protocoles." Thesis, Lyon, INSA, 2014. http://www.theses.fr/2014ISAL0001/document.
Full textWireless sensors are small electronic devices made for measuring physical properties of their environment and communicating them wirelessly to an information system. In this thesis, we study existing network architectures to devise the best-suited configuration for typical urban wireless sensor network use-cases. To that effect, we provide comprehensive analytical models to compare the different families of MAC protocols in terms of delivery rate and energy consumption, e.g. synchronous vs asynchronous, contention-based vs direct access, etc. Headline results include a mathematical framework for devising the least energy-cost contention algorithm for a given delivery rate, and closed-form expressions of the energy consumption and delivery rate of popular access control protocols. These results are then synthesised in a comparison study of the two prevailing urban sensor network architectures, i.e. long-range and multihop. We show that long-range sensor networks are best suited for low traffic loads and sparser network topologies, while higher traffic loads and denser network topologies demand switching to a multihop network operating a synchronous MAC protocol on higher-bitrate radios. Based on the analysis of the architectures best suited to each use-case scenario, i.e. low traffic loads/sparse network and high traffic loads/dense network, we identify suitable optimisations to improve the QoS performance and energy efficiency of these architectures. First, we improve the energy efficiency of medium access arbitration by defining a cascading tournament contention algorithm. This protocol, CT-MAC, resolves the allocation of multiple timeslots in a single, energy-efficient contention tournament. Second, we propose an adaptive relaying scheme for the long-range network architecture, named SARI-MAC. This scheme is an attempt to cope with the coverage holes that occur when using long-range radios in a dense urban habitat, by letting sensor nodes relay the communications of nodes whose link budgets are incompatible with the QoS requirements of the network. To that effect, we propose a receiver-initiated MAC protocol that self-adapts to traffic conditions, so that the duty cycle of relayers is kept as low as possible with respect to the load of frames to relay. Finally, we propose an opportunistic relaying scheme named QOR. QOR is a routing protocol that exploits long-range, opportunistic radio links to provide faster and more reliable transmissions. To that effect, QOR combines a joint routing structure and addressing scheme that identifies a limited set of nodes that can act as opportunistic relays between a source sensor and the sink. These nodes then follow an original cascaded acknowledgement mechanism that provides reliable acknowledgment and ensures replication-free forwarding of the data frames.
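To illustrate the kind of closed-form trade-off such analytical models capture, consider a single contention round where each of N contenders picks one of K slots uniformly at random; the sketch below uses a hypothetical energy model and is not the CT-MAC analysis itself:

```python
# Probability that a given node transmits without collision when each
# of N contenders independently picks one of K contention slots
def p_success(n_contenders: int, k_slots: int) -> float:
    return (1 - 1 / k_slots) ** (n_contenders - 1)

# Trade-off: more slots raise the success probability but cost idle
# listening energy; e_slot and e_tx are hypothetical energy units
def energy_per_delivered_frame(n, k, e_slot=1.0, e_tx=10.0):
    p = p_success(n, k)
    return (k * e_slot + e_tx) / p   # expected cost per delivered frame

for k in (4, 8, 16, 32):
    print(k, round(p_success(10, k), 3),
          round(energy_per_delivered_frame(10, k), 1))
```

Running this shows the delivery/energy tension the thesis formalizes: success probability grows with K, but past some point the extra idle slots dominate the energy bill.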
Jaubert, Jean. "Etudes de quelques interactions entre espèces et facteurs de l'environnement (lumière, température et oxygène dissous) mesures in situ en milieu récifal : conception et réalisation d'instruments de mesure et protocoles expérimentaux." Nice, 1987. http://www.theses.fr/1987NICE4110.
Full text
Bhatia, Sapan. "Optimisations de compilateur optimistes pour les systèmes réseaux." Bordeaux 1, 2006. http://www.theses.fr/2006BOR13169.
Full text