Theses on the topic "Prédiction de contact"
Consult the 47 best theses for your research on the topic "Prédiction de contact".
Parent, Marie-Océane. "Prédiction de la stabilité en contact rotor-stator dans les turboréacteurs d'avion". Thesis, Ecully, Ecole centrale de Lyon, 2015. http://www.theses.fr/2015ECDL0004/document.
This work aims to predict the dynamic stability of a turbofan engine subjected to light contacts between blade tips and casing. Reducing the clearance between the rotating blades and the casing indeed improves the performance of turbomachines; however, it also increases the likelihood of contact between rotating and stationary parts, which can cause unstable dynamic behavior. The approach is based on a hybrid model which adds a simplified bladed wheel and a flexible casing to a rotor-shaft model. A 3D contact formulation has also been implemented; it accounts for the model kinematics and introduces the local geometry of the contact area. The behavior of the model with blade-to-casing contacts is analyzed through two approaches: the first assumes permanent contact while the other allows contact intermittence. The results highlight the importance of couplings in the onset of unstable phenomena and the relevance of the 3D contact formulation in predicting the stability of the system.
Legrand, Mathias. "Modèles de prédiction de l'interaction rotor/stator dans un moteur d'avion". Phd thesis, Ecole centrale de nantes - ECN, 2005. http://tel.archives-ouvertes.fr/tel-00011631.
Degré, Fabien. "Prédiction numérique des caractéristiques d'une pièce traitée par galetage : application au secteur du décolletage". Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00729061.
Rousseau, Clément. "Prédiction par transfert inverse d'un champ de conductance thermique de contact dans un mur de réacteur métallurgique". Mémoire, Université de Sherbrooke, 2011. http://savoirs.usherbrooke.ca/handle/11143/1636.
Caignot, Alain. "Prédiction par essais virtuels de l'amortissement dans les structures spatiales". Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2009. http://tel.archives-ouvertes.fr/tel-00422291.
Caignot, Alain. "Prédiction par essais virtuels de l'amortissement dans les structures spatiales". Phd thesis, Cachan, Ecole normale supérieure, 2009. http://www.theses.fr/2009DENS0018.
In the context of a significant cost reduction in the design of space launchers, it is crucial to control all the factors involved in the dimensioning process. The decrease in mass is compensated by an increase in stiffness and results in a decrease of damping, which is the parameter that determines the level of the dynamic response. At present, damping is taken into account in a global model and is most often identified on the final structure. The objective of this work is to improve the launcher design process by introducing the capability to predict damping a priori. To do so, the idea is to develop a database containing the dissipation due to the materials and the dissipation relative to the joints in the launcher for each type and each level of loading... Damping in materials is relatively well known in the case of the composites which make up the launcher. Therefore, the challenge is the prediction of damping in the joints, where the dissipation can be very significant. Experimental approaches are expensive and complex to implement, which is why this work is based on a finite element computation of the joints. This type of simulation is beyond the reach of standard industrial computing codes and has required the development of a specific parallel computational code based on the LATIN method. The robustness of the numerical tool has been studied and its results validated against experimental values obtained in a previous study. Finally, the computation of different joints of the launcher has been carried out, as well as the methodology for integrating these results into the design process of Ariane.
Del, Bufalo Aurelia. "Effets des sensibilisants sur la synthèse de la prostaglandine E2 : Mécanismes et intérêt dans la prédiction de l’allergie de contact". Thesis, Paris, AgroParisTech, 2012. http://www.theses.fr/2012AGPT0003/document.
Contact sensitizers are defined as reactive (electrophilic) molecules which have the ability to modify skin proteins to form an antigen (hapten). In addition to the haptenation mechanism, danger signals, leading to the activation of dendritic cells, are described as crucial for the effective induction of a hapten-specific T cell immune response. In the context of the 7th amendment to the Cosmetics Directive, the cosmetic industry faces the challenge of finding non-animal approaches to assess the sensitizing potential of chemicals. While danger signals induced by sensitizers in steady-state conditions have already been analyzed, we chose to investigate the impact of sensitizers on the course of an inflammatory response. For this purpose we used the U937 cell line differentiated with PMA and activated with LPS. In these conditions, cells produce a large amount of inflammatory mediators (IL-β, TNF-α, IL-6, IL-10, IL-8, PGE2, PGD2, TxB2) through the activation of pathways leading to the activation of the transcription factors NF-κB and Nrf2 and through AA metabolism by the cPLA2/COX-2 cascade. Interestingly, we showed that 6 contact sensitizers of various potency (DNCB, PPD, HQ, PG, CIN, EUG) significantly and specifically decrease the production of prostanoids, in particular PGE2, induced by PMA/LPS. We further demonstrated that there is no unique inhibition profile of the sensitizers, even if the majority of the effects (except for DNCB) apply to COX-2 (i.e. inhibition of its expression and/or activity). For DNCB, the inhibition mechanism appears to depend on its capacity to react with thiol residues, and in particular to deplete intracellular glutathione, possibly leading to the inactivation of the PG-synthases. In parallel, we performed a statistical analysis on 160 molecules that allowed us to define the test parameters (a molecule is a sensitizer if the PGE2 inhibition at 24 h is more than 60%) and to calculate the test performance against the LLNA (78%). Moreover, we demonstrated that the PGE2 test could be complementary to other already existing in vitro tests such as MUSST or Nrf2-HTS. In summary, we add here a new insight into the multiple biochemical effects described so far for sensitizers. Even if the underlying biological relevance remains unclear, the parameter "PGE2 inhibition" is a good test for skin sensitization evaluation. Further studies will clarify how this parameter could be implemented into an alternative testing strategy for the evaluation of skin sensitization.
Del, Bufalo Aurelia. "Effets des sensibilisants sur la synthèse de la prostaglandine E2 : Mécanismes et intérêt dans la prédiction de l'allergie de contact". Phd thesis, AgroParisTech, 2012. http://pastel.archives-ouvertes.fr/pastel-01016611.
Chauvineau, Guillaume. "Modélisation de la dynamique des boîtes de vitesses automobiles soumises à des sollicitations acycliques : applications à la prédiction du bruit de grenaille et validation expérimentale". Thesis, Ecully, Ecole centrale de Lyon, 2014. http://www.theses.fr/2014ECDL0022.
The gearbox is an important component of an automobile and its development is complex. Numerous constraints must be taken into account, particularly its noise and vibration behavior. This aspect of the design is nowadays poorly mastered, and noise nuisances of gearboxes, such as rattle noise, are often discovered late. The aim of this work was the development of a numerical model of gear dynamics adapted to gearbox modelling and allowing the prediction of the conditions under which gear rattle noise appears. The model proposed in this thesis is based on an original combination of different models and is applicable to the vast majority of gearboxes. Flexible components, such as shafts and housings, are modeled by the finite element method. A gear model based on the Kelvin-Voigt contact model is developed. It takes into account contact losses and back-side contacts. To complete this model, mechanical loss models are implemented in order to take into account the influence of the bearings, plain bearings, synchronizers and the gears' immersion in oil. This model, coupled with a noise indicator, allows sensitivity analyses to identify influential parameters on the rattle noise but also to compare the dynamic behavior of different configurations. Finally, a test campaign on an industrial gearbox is conducted and simulation results are compared with measurements.
Belloula, Amar. "Contribution à l’étude de la prédiction de la durée de vie en fretting-fatigue : application à un contact acier-alliage d’aluminium". Thesis, Lille 1, 2013. http://www.theses.fr/2013LIL10029/document.
The purpose of this study is to predict fretting fatigue crack nucleation for a single steel/aluminum contact under different applied loads. An experimental device was first designed and adapted to a multiaxial fatigue apparatus. Tests were conducted on an aluminum alloy under constant-amplitude loading for different load levels and load ratios. As expected, the fretting fatigue life was found to be lower compared to uniaxial fatigue under the same loading conditions. Finite element analysis was conducted using the Abaqus software. The computed stress and strain fields were used to estimate the parameters of different multiaxial fatigue criteria based on the critical plane approach. When using stress and strain values corresponding to the material point exhibiting the maximum value of the considered parameter, we found that, whatever the multiaxial fatigue parameter, the fatigue life estimates are conservative due to the severe gradients in the contact zone. An averaging method of the mechanical quantities over a given reference volume was then used to attenuate these gradient effects. The estimates show a good correlation with experimental results. However, the size of the reference volume depends both on the multiaxial fatigue criterion and on the loading conditions applied, so that it could not be directly linked to the grain size of the material studied. Finally, we made an attempt to extend these criteria and the developed method to variable loadings. Fretting fatigue tests using two- and four-block loadings were performed and the previous criteria were coupled with two damage laws. The estimates we obtained seem very promising.
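For reference, the volume-averaging step described in this abstract can be sketched, for a generic critical-plane fatigue parameter P (generic notation, not taken from the thesis), as

\bar{P} = \frac{1}{V_{\mathrm{ref}}} \int_{V_{\mathrm{ref}}} P \, \mathrm{d}V,

with crack nucleation predicted when \bar{P} exceeds the threshold of the chosen multiaxial criterion; V_{\mathrm{ref}} is the reference volume whose size, as noted above, had to be calibrated for each criterion and loading condition.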
Filali, Oussama. "Approche multi physique du contact frottant en grande déformation plastique : prédiction numérique du grippage d'alliages d'aluminium en mise en forme à froid". Thesis, Valenciennes, Université Polytechnique Hauts-de-France, 2020. http://www.theses.fr/2020UPHF0035.
The thesis proposes a new approach to predict the galling defect encountered during cold forming of aluminum alloys. Numerous experimental studies show that this defect is strongly linked to the contact and friction conditions and depends on the roughness of the manufacturing tools. Models to predict the appearance of this defect are rare and are generally based on indirect observables, such as pressure or temperature fields, without explicitly taking into account the influence of first-order factors such as lubrication and the surface condition of the materials in contact. The proposed methodology assumes that the defect appears when the material of the part near its contact surface reaches a critical level of damage. However, in a previous study, it was shown that damage models based exclusively on hydrostatic pressure, such as the GTN or Lemaitre models, were only able to predict damage if the surface roughness is explicitly modeled. This leads to multi-scale numerical simulations which are very costly in terms of computation time and incompatible with the modeling of real industrial processes. To get around this difficulty, the present study proposes to use damage models that take into account the shear effects generated by the frictional contact. The influence of roughness then relies on a relevant choice of the friction law. First, a bibliographical chapter deals with damage models. Particular attention is paid to models using the Lode parameter to consider the effect of shear stresses on the evolution of damage variables. Secondly, a bibliographical review of friction and lubrication models is presented. The study notably highlights models based on a mesoscopic approach to lubrication, with the modeling of the crushing of asperities during frictional contact. Following these chapters, the damage model developed by L. Xue and a lubrication model explicitly taking into account the value of surface roughness are used to predict galling in different contact configurations. Initially, this numerical methodology is applied to the study of the flat drawing process of 6082-T4 aluminum alloy plates. Then the methodology is applied to a pin/plane contact on 6082-T6 alloy plates. Finally, a forward extrusion process on cylindrical billets is studied with the same numerical tools. These different configurations are tested with or without lubricant and with tools having different roughness values. The results show that, in most of the cases tested, the proposed procedure allows the appearance of the defect to be predicted, whether in configurations with or without lubricant. The predictions are nevertheless optimistic, the sliding distances before the predicted onset of galling being generally greater than the distances measured experimentally. The results are however promising, and several perspectives are presented to improve the precision of the proposed methodology.
Belaid, Abedessalem. "Modélisation tridimensionnelle du comportement mécanique de la garniture de forage dans les puits à trajectoires complexes : application à la prédiction des frottements garniture-puits". Paris, ENMP, 2005. http://www.theses.fr/2005ENMP1323.
Drilling an oil well rapidly and at lower cost is now a challenge in the drilling industry. To meet this challenge, a good design of the drilling installation is essential. Therefore, correctly predicting friction losses, known as "Torque & Drag", between the drill string and the borehole is a major asset, especially for directional wells with complex trajectories. The existing friction-calculation models show some weaknesses in the case of complex-trajectory wells. A new mechanical model of friction calculation inside complex-trajectory wells was developed and validated. This model uses a 3D wellpath-calculation method called "Minimum of Torsion" that includes two geometric parameters: curvature and torsion. Unlike traditional models, which assume that the drilling structure lies, by gravity, on the low side of the borehole and often neglect the stiffness of the drill string, the new "Stiff String" model calculates the deformation state of the drilling assembly inside the hole via an iterative contact algorithm. Furthermore, by using a direct integration method of the equilibrium equations, the new model is much faster than one solved with the finite element method. A comparison with the widely used "Soft String" model was carried out on several actual and theoretical wells. The comparison shows that the new model rectifies many weaknesses of the existing model. Confrontations with measured data, in the case of wells having a smooth trajectory (low tortuosity), show agreement of both models with field measurements. However, the new model gives better friction-loss predictions when wellpath tortuosity increases.
Guda, Vamsi Krishna. "Contributions à l'utilisation de cobots comme interfaces haptiques à contact intermittent en réalité virtuelle". Thesis, Ecole centrale de Nantes, 2022. http://www.theses.fr/2022ECDN0033.
Virtual reality (VR) is evolving and being used in industrial simulations, but the possibility to touch objects is missing, for example to judge perceived quality in the design of a car. Current haptic interfaces do not easily render the notion of texture, so an "intermittent contact interface" approach is considered to achieve this. A cobot positions a mobile surface at the point of contact with a virtual object to allow physical contact with the operator's hand. The contributions of this thesis concern several aspects: the placement of the robot, the modeling of the operator, the management of the displacement and speed of the robot, and the detection of the operator's intentions. The placement of the robot is chosen to allow reaching the different working areas and to ensure passive safety by making it impossible for the robot to hit the head and chest of the operator in a normal working position, i.e. sitting in a chair. A model of the user, including a torso and arms, is designed and tested to follow the user's movements in real time. Interaction is possible on a set of predefined poses that the user chains together as desired. Different strategies are proposed to predict the user's intentions. The key aspects of the prediction are based on the gaze direction and the hand position of the user. An experimental study and the resulting analysis show the benefit of taking the gaze direction into account. The interest of introducing "safety" points to move the robot away from the operator and allow fast robot movements is highlighted.
Zhang, Yuanyuan. "Friction prediction for rough surfaces in an elastohydrodynamically lubricated contact". Thesis, Lyon, 2019. http://www.theses.fr/2019LYSEI063.
The friction of interfacial surfaces greatly influences the performance of mechanical elements. Friction has been investigated experimentally in most studies. In this work, friction is predicted by means of numerical simulation under an elastohydrodynamic lubrication (EHL) rough contact condition. The classical Multigrid technique performs well in limiting computing time and memory requirements. However, the coarse grid choice has an important influence on code robustness and efficiency when solving the rough problem. In the first part of this work, a coarse grid construction method proposed by Alcouffe et al. is implemented in the current time-independent EHL Multigrid code. This modified solver is then extended to transient cases to solve the rough contact problem. The friction curve is usually depicted as a function of the "lambda ratio", the ratio of oil film thickness to root-mean-square surface roughness. However, this parameter is less suitable for plotting friction variations under high-pressure conditions (piezoviscous elastic regime). In the second part of this work, the friction coefficient is computed using the modified EHL code for many operating conditions as well as surface waviness parameters. Simulation results show that there is no single friction curve when the old parameter "lambda ratio" is used. Based on the Amplitude Reduction Theory, a new scaling parameter that depends on operating conditions and waviness parameters is found, which gives a unified friction curve for the high-pressure situation. For more complex rough surfaces, a power spectral density (PSD) based method is proposed to predict friction variations in the third part of this work. Artificial surface roughness is first used to test the rapid prediction method. Good agreement is found between the full numerical simulation and this rapid prediction. The rapid prediction method is then applied to analyze the friction variation of measured surface roughness. Both the new scaling parameter and the friction increase predicted by the PSD method show good engineering accuracy for practical use.
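For reference, the "lambda ratio" mentioned in this abstract is classically defined as (generic notation, not necessarily that used in the thesis)

\Lambda = \frac{h}{\sigma}, \qquad \sigma = \sqrt{\sigma_1^2 + \sigma_2^2},

where h is the lubricant film thickness and \sigma the composite root-mean-square roughness of the two surfaces; the thesis argues that this parameter fails to collapse friction curves in the piezoviscous elastic (high-pressure) regime and proposes a new scaling parameter instead.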
Allain, Fabrice. "Calcul efficace de la structure des protéines à partir de contacts évolutifs". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066366/document.
Structural prediction methods provide a relatively effective alternative to experimental approaches to give a first insight into the native fold of a protein. The gap between the number of structures and the number of protein sequences available in databases has steadily increased since the advent of high-throughput sequencing technologies. This strong growth of genomic information has helped bring to light prediction tools using coevolutionary data. Conservation of a specific function implies strong restraints on interacting residues involved in folding and function. Once detected, these interactions can help to model the conformation of a protein. Some important aspects need to be improved during the modelling process, including the detection of false positives among the predicted contacts. Limitations in the field are similar to those encountered in nuclear magnetic resonance spectroscopy structure determination, where data integration is a clearly established and largely automated process. The Ambiguous Restraints for Iterative Assignment (ARIA) software uses the concept of ambiguous distance restraints and follows an iterative process to assign and refine the list of nuclei that are close in space, in order to compute a set of structural models in accordance with the data. This work aims to adapt this approach to the de novo prediction of protein structure using evolutionary information.
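For reference, the ambiguous distance restraints used by ARIA combine the contributions c of all possible assignments into a single effective distance (standard ARIA formulation; the adaptation to evolutionary contacts may differ in detail):

\bar{d} = \Big( \sum_{c} d_c^{-6} \Big)^{-1/6},

so that a restraint is satisfied as soon as at least one of its contributing atom (or residue) pairs is close in space.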
Belaid, Abdessalem. "Modélisation tridimensionnelle du comportement mécanique de la garniture de forage dans les puits à trajectoires complexes : application à la prédiction des frottements garniture-puits". Phd thesis, École Nationale Supérieure des Mines de Paris, 2005. http://pastel.archives-ouvertes.fr/pastel-00579916.
The usual friction prediction models show certain shortcomings when the well trajectory becomes complex. A new model for computing friction in wells with complex trajectories has been developed and validated. This model uses a three-dimensional trajectory reconstruction method that incorporates both curvature and geometric torsion. Unlike conventional models, which simply assume that the drill string rests by gravity on the low side of the borehole and which often neglect the stiffness of the pipes, the new stiff-string model computes the actual deformed shape of the drill string inside the hole via an iterative unilateral-contact algorithm. Moreover, for a significant saving in computation time, the model relies on direct numerical integration of the local equilibrium equations without resorting to the finite element method.
A comparison with a model commonly used in the drilling industry, the soft-string (LISSE) model, was carried out on several real and theoretical wells. This comparison shows that the new model overcomes several weaknesses of the soft-string model for trajectories with complex geometry (overestimation of contact zones and contact forces in the presence of micro-tortuosity, insensitivity to the sign of the azimuth gradient in the presence of strong warping, and the assumption of contact on the low side of the hole not always being verified). Furthermore, the confrontation with field measurements for most wells with two-dimensional or weakly three-dimensional geometry and small doglegs (not exceeding 2 to 3°/30 m) generally shows agreement between the results of the two types of models and the measured values. On the other hand, in the presence of local tortuosity and doglegs, the new model provides a better prediction of friction losses.
Allain, Fabrice. "Calcul efficace de la structure des protéines à partir de contacts évolutifs". Electronic Thesis or Diss., Paris 6, 2017. http://www.theses.fr/2017PA066366.
Structural prediction methods provide a relatively effective alternative to experimental approaches to give a first insight into the native fold of a protein. The gap between the number of structures and the number of protein sequences available in databases has steadily increased since the advent of high-throughput sequencing technologies. This strong growth of genomic information has helped bring to light prediction tools using coevolutionary data. Conservation of a specific function implies strong restraints on interacting residues involved in folding and function. Once detected, these interactions can help to model the conformation of a protein. Some important aspects need to be improved during the modelling process, including the detection of false positives among the predicted contacts. Limitations in the field are similar to those encountered in nuclear magnetic resonance spectroscopy structure determination, where data integration is a clearly established and largely automated process. The Ambiguous Restraints for Iterative Assignment (ARIA) software uses the concept of ambiguous distance restraints and follows an iterative process to assign and refine the list of nuclei that are close in space, in order to compute a set of structural models in accordance with the data. This work aims to adapt this approach to the de novo prediction of protein structure using evolutionary information.
Fremont, Julien. "Etude des contributions aux surfaces de potentiel et couplages non-adiabatiques par calculs ab initio de structures électroniques et mise aux points des Hamiltoniens effectifs pour les prédictions vibrationnelles : applications aux molécules LiH, H+3 et PH3". Reims, 2010. http://theses.univ-reims.fr/sciences/2010REIMS019.pdf.
This work lies halfway between theoretical chemistry and theoretical molecular spectroscopy, proposing to push quantum chemistry methods to their limits and then make theoretical predictions on the three molecules LiH, PH3 and H3+. First of all, a chapter is devoted to the quantum chemistry methods used in this work to obtain potential energy surfaces and electronic states. For applications in molecular spectroscopy, the potential energy surface needs to be very accurate. The second chapter examines the influence of the basis set, electronic correlation, relativistic corrections and extrapolation methods on the vibrational levels of the LiH molecule. For a molecule with a small number of electrons, it is possible to reach such quality on the potential energy surface that the limit of the Born-Oppenheimer approximation is reached. The third chapter develops the concepts of contact transformation and effective Hamiltonian. It introduces the terms derived from quantum chemistry calculations needed to apply this formalism beyond the Born-Oppenheimer approximation to the LiH molecule. The aim of these two chapters is to explore the limits of ab initio methods for characterizing the vibrational states of all isotopologues near dissociation. The study of molecules with a large number of variables presents other types of difficulties. On the one hand, the large number of electrons makes it difficult for quantum chemistry calculations to obtain a potential energy surface of good quality; on the other hand, the increase in the number of variables complicates the nuclear wave functions and therefore the calculation of vibrational energy levels. The fourth chapter presents the study, within the Born-Oppenheimer framework, of the PH3 molecule, including the derivation of the kinetic energy operator in valence coordinates and the calculation of the potential energy surface by quantum chemistry methods. The molecular symmetries of the Hamiltonian produce effects that are still poorly understood. The H3+ molecule in its C3v configuration has a conical intersection where non-adiabatic effects are revealed. In this final chapter the kinetic energy operator in hyperspherical coordinates and the Jacobian matrix associated with this transformation are derived. The non-adiabatic couplings calculated in Cartesian coordinates are re-expressed in hyperspherical coordinates. After introducing the adiabatic-to-diabatic transformation, the geometric phase effects are studied.
Biboulet, Nans. "Influence of indentations on rolling bearing life". Phd thesis, INSA de Lyon, 2008. http://tel.archives-ouvertes.fr/tel-00663264.
Laporte, Julie. "Etude et modélisation de l'endurance électrique de micro-contacts soumis à des sollicitations de fretting-usure : caractérisation de nouveaux dépôts base Argent". Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEC034/document.
Advanced instrumentation in mechanical systems (aeronautical, automotive, etc.) goes hand in hand with an ever-increasing use of electrical connectors. However, the unfavorable operating environment (chemical attack and vibrational loads) causes more or less severe degradation of electrical contacts, which in turn perturbs their electrical conductivity. Gold plating is usually applied to electrical contacts in order to limit damage and to ensure connector stability. However, economic constraints and the high cost of gold require cheaper alternatives. Amongst conductive metals, silver is the best candidate. Hence, the purpose of this PhD project is to investigate the electrical response and the degradation of silver coatings when subjected to fretting loadings. The study is divided into three main research axes. The first axis consists in carrying out a complete study of a homogeneous silver/silver contact in order to identify the degradation mechanisms responsible for electrical failure, both under fretting loadings and under reciprocating sliding. It was possible to formalize a predictive model, using an energy density approach, allowing the lifetime of the contact to be extrapolated as a function of various loading parameters. A complementary study also showed the impact of a corrosive sulfur atmosphere on these electrical contacts. As part of the second research axis, an investigation of the tribological and electrical behavior of novel silver-based materials, synthesized specifically as a gold replacement, was performed. The analysis of these homogeneous contacts allowed the degradation mechanism and the mechanical behavior of these contacts when subjected to a humid environment to be explained. In the last research axis, a study was conducted on the same silver-based materials but in a heterogeneous configuration against a gold coating, in order to identify the tribological and electrical behavior of contacts composed of materials with similar or opposite properties.
Shimagaki, Kai. "Advanced statistical modeling and variable selection for protein sequences". Electronic Thesis or Diss., Sorbonne université, 2021. http://www.theses.fr/2021SORUS548.
Over the last few decades, protein sequencing techniques have been developed and continuous experiments have been carried out. Thanks to all of these efforts, we have now obtained more than two hundred million protein sequences. In order to deal with such a huge amount of biological data, we need theories and technologies to extract information that we can understand and interpret. The key ideas to resolve this problem are statistical physics and the state of the art of machine learning (ML). Statistical physics is a field of physics that can successfully describe many complex systems by extracting or reducing variables to interpretable variables based on simple principles. ML, on the other hand, can represent data (for tasks such as reconstruction and classification) without assuming how the data were generated, i.e. the physical phenomena behind the data. In this dissertation, we report studies of protein sequence generative modeling and protein residue-contact prediction using statistical-physics-inspired modeling and ML-oriented methods. In the first part, we review the general background of biology and genomics. Then we discuss statistical modeling of protein sequences. In particular, we review Direct Coupling Analysis (DCA), which is the core technology of our research. We also discuss the effects of higher-order statistics contained in protein sequences and introduce deep-learning-based generative models as models that can go beyond pairwise interactions.
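For reference, the Direct Coupling Analysis mentioned in this abstract fits a pairwise (Potts) model to the aligned sequences; in its standard form (generic notation, not necessarily that of the dissertation) the probability of a sequence a = (a_1, ..., a_L) reads

P(a_1, \dots, a_L) = \frac{1}{Z} \exp\Big( \sum_{i} h_i(a_i) + \sum_{i<j} J_{ij}(a_i, a_j) \Big),

where the h_i are local fields, the J_{ij} are the direct couplings used to predict residue-residue contacts, and Z is the normalization constant.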
Coulon, Sandrine. "Prédiction de la durée de vie des contacts ponctuels lubrifiés en présence d'indentations". Lyon, INSA, 2001. http://www.theses.fr/2001ISAL0038.
The purpose of this work is to predict the rolling contact fatigue (RCF) life reduction and analyze the failure process for lubricated indented point contacts. A first part studies the influence of a dent on the pressure and stress field from a numerical and analytical point of view. An analytical relation is obtained for the pressure peak as a function of two geometric dent parameters: the slope, i.e. the ratio of dent depth to dent diameter, and the shoulder radius of curvature. The stress analysis leads to a damage criterion. Based on a local approach, the endurance-limit criterion is representative of the contact severity. Complemented with a stressed-volume analysis, a damage risk is defined. Finally, a damage-risk abacus as a function of the two parameters mentioned previously is presented. The results predicted with these two criteria are compared to RCF tests performed on a two-disk machine. A second part is dedicated to the damage process and its analysis. Two kinds of defects are studied: artificial ones made with a Rockwell penetrator, and natural ones obtained from solid particles passing through the contact. Whatever the considered dent, the plastic deformation due to over-rolling occurs in the very first cycles, quickly leading to a stabilized geometry. Depending on the test conditions (load, speed, slide-to-roll ratio), geometric and hydrodynamic damage effects are identified. The direction of crack propagation is studied using cross-sections. This work represents a step forward in the prediction and understanding of the influence of solid particle contamination on fatigue life.
Kharboutly, Mohamed. "Modélisation, réalisation et commande d'un système de micro-manipulation sans contact par diélectrophorèse". Phd thesis, Université de Franche-Comté, 2011. http://tel.archives-ouvertes.fr/tel-00582992.
Cumin, Julien. "Reconnaissance et prédiction d'activités dans la maison connectée". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM071/document.
Understanding the context of a home is essential in order to provide occupants with services that fit their situations and thus fulfil their needs. One example of a service that such a context-aware smart home could provide is a communication assistant, which can, for example, advise correspondents outside the home on the availability of occupants for communication. In order to implement such a service, the home must understand the situations of occupants, in order to derive their availability. In this thesis, we first propose a definition of context in homes. We argue that one of the primary context dimensions necessary for a system to be context-aware is the activity of occupants. As such, we then study the problem of recognizing activities from ambient smart home sensors. We propose a new supervised place-based approach which improves both activity recognition accuracy and computing times compared to standard approaches. Smart home services, such as our communication assistance example, may often need to anticipate future situations. In particular, they need to anticipate future activities of occupants. Therefore, we design a new supervised activity prediction model, based on previous state-of-the-art work. We propose a number of extensions to improve prediction accuracy based on the specificities of smart home environments. Finally, we study the problem of inferring the availability of occupants for communication, in order to illustrate the feasibility of our communication assistant example. We argue that availability can be inferred from primary context dimensions such as place and activity (which can be recognized or predicted using our previous contributions), and by taking into consideration the correspondent initiating the communication as well as the modality of communication used. We discuss the impact of the activity recognition step on availability inference. We evaluate those contributions on various state-of-the-art datasets, as well as on a new dataset of activities and availabilities in homes which we constructed specifically for the purposes of this thesis: Orange4Home. Through our contributions to these three problems, we demonstrate how an example context-aware communication assistance service can be implemented, advising on the future availability of occupants for communication. More generally, we show how secondary context dimensions such as availability can be inferred from other context dimensions, in particular from activity. Highly accurate activity recognition and prediction are thus mandatory for a smart home to achieve context awareness.
Medjiah, Samir. "Optimisation des protocoles de routage dans les réseaux multi-sauts sans fil à contraintes". Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14663/document.
Great research efforts have been carried out in the field of challenged multihop wireless networks (MWNs). Thanks to the evolution of Micro-Electro-Mechanical Systems (MEMS) technology and nanotechnologies, multihop wireless networks have become the solution of choice for a plethora of problems. The main advantage of these networks is their low manufacturing cost, which permits one-time application lifecycles. However, while nodes are inexpensive to produce, they are also less capable in terms of radio range, bandwidth, processing power, memory, energy, etc. Thus, applications need to be carefully designed, especially the routing task, because radio communication is the most energy-consuming functionality and energy is the main issue for challenged multihop wireless networks. The aim of this thesis is to analyse the different challenges that govern the design of challenged multihop wireless networks, such as application challenges in terms of quality of service (QoS), fault tolerance, data delivery model, etc., but also networking challenges in terms of dynamic network topology, topology voids, etc. Our contributions in this thesis focus on the optimization of routing under different application requirements and network constraints. First, we propose an online multipath routing protocol for QoS-based applications using wireless multimedia sensor networks. The proposed protocol relies on the construction of multiple paths while transmitting data packets to their destination, i.e. without prior topology discovery and path establishment. This protocol achieves parallel transmissions and enhances the end-to-end transmission by maximizing path bandwidth and minimizing delays, and thus meets the requirements of QoS-based applications. Second, we tackle the problem of routing in mobile delay-tolerant networks by studying the intermittent connectivity of nodes and deriving a contact model in order to forecast future node contacts. Based upon this contact model, we propose a routing protocol that makes use of node locations, node trajectories, and inter-node contact prediction in order to make forwarding decisions. The proposed routing protocol achieves low end-to-end delays while efficiently using constrained node resources in terms of memory (packet queue occupancy) and processing power (forecasting algorithm). Finally, we present a topology control mechanism along with a packet forwarding algorithm for event-driven applications using stationary wireless sensor networks. Topology control is achieved by using a distributed duty-cycle scheduling algorithm. Algorithm parameters can be tuned according to the desired size of a node's awake neighbourhood. The proposed topology control mechanism ensures a trade-off between event-reporting delay and energy consumption.
Perrot, Clément. "Imagerie directe de systèmes planétaires avec SPHERE et prédiction des performances de MICADO sur l’E-ELT". Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCC212/document.
This thesis takes place in the context of the study of the formation and evolution of planetary systems using high-contrast imaging, also known as direct imaging in contrast to so-called "indirect" detection methods. The work I present in this manuscript is divided into two distinct parts. The first part concerns the observational component of my thesis, using the SPHERE instrument installed at the Very Large Telescope. This work was done as part of the consortium of the same name. The purpose of the SPHERE instrument is to detect and characterize young and massive exoplanets, but also circumstellar disks ranging from very young protoplanetary disks to older debris disks. In this manuscript, I present my contribution to the SHINE program, a large survey with an integration time of 200 nights' worth of observation, the goal of which is the detection of new exoplanets and the spectral and orbital characterization of some previously known companions. I also present the two studies of circumstellar disks that I carried out, around the stars HD 141569 and HIP 86598. The first study allowed the discovery of concentric rings at about ten AU from the star along with an unusual flux asymmetry in the disk. The second study concerns the discovery of a debris disk that also has an unusual flux asymmetry. The second part concerns the instrumental component of my thesis, carried out within the MICADO consortium, in charge of the design of the camera of the same name, which will be one of the first-light instruments of the European Extremely Large Telescope (ELT). In this manuscript, I present the study in which I define the design of some components of the coronagraphic mode of MICADO while taking into account the constraints of the instrument, which is not dedicated to high-contrast imaging, unlike SPHERE.
Merabet, Samir. "Vers un droit de l'intelligence artificielle". Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0528.
Even if its appearance is recent in the history of technological inventions, artificial intelligence has nevertheless quickly established itself, disrupting the economy and the job market. Yet, upon assessment, it seems that these two forms of intelligence cannot be regarded as equivalent. Even if artificial intelligence borrows some aspects of human intelligence, many others are missing. Consciousness, reason and emotions are unknown to machines, even intelligent ones. Yet law rests upon such qualities. Hence, applying rules created for humans to intelligent computer systems may be inappropriate. Indeed, the confrontation between law and artificial intelligence reveals the existence of a paradigm on which positive law is based. To a large extent, French law relies on the subjectivity proper to humans. All branches of law appear to be concerned, civil law as well as criminal law or intellectual property law. Therefore, the legal regime of artificial intelligence seems very uncertain. Consequently, the purpose of this study is to clear up the doubts surrounding the nature of artificial intelligence in order to distinguish it clearly from human intelligence. Eventually, the acknowledgment of the fundamental difference between these two forms of intelligence should lead to the recognition of a new public order of humanity and the preservation of an exclusive field for human intelligence.
Djebili, Omar. "Contribution à la maintenance prédictive par analyse vibratoire des composants mécaniques tournants. Application aux butées à billes soumises à la fatigue de contact de roulement". Thesis, Reims, 2013. http://www.theses.fr/2013REIMS030/document.
The bearing is one of the most important components of rotating machines. Nevertheless, under normal conditions of use, it is subject to fatigue, which creates a defect called rolling contact fatigue spalling. In this work, we present a follow-up of thrust bearing fatigue on a test bench. Vibration analysis is the method used to characterize the defect. In order to obtain a better-fitted fatigue curve, we have studied the vibration level according to statistical indicators: the Root Mean Square (RMS) value, which is one of the best indicators of the evolution of bearing degradation. The approach follows the operation of the bearing until degradation, with on-line acquisition of vibration measurements in the form of time signals. Through signal processing, we obtain the values of the vibration amplitudes which characterize the vibration state of the bearing. Consequently, these values allow us to plot the fatigue curves. During our experimental work, this procedure was applied to a batch of thrust bearings, for which we obtained similar fatigue curves where the evolution trend follows a mathematical model from the detection of the onset of the first spall. The result of this work will contribute to predicting the remaining operating time before failure.
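For reference, the RMS indicator mentioned in this abstract is computed from the sampled vibration signal x_1, ..., x_N (generic notation) as

\mathrm{RMS} = \sqrt{\frac{1}{N} \sum_{n=1}^{N} x_n^2},

i.e. the standard deviation of a zero-mean vibration signal, which grows as the spall develops and thus traces the fatigue curve followed on the test bench.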
Toumi, Mohamed Yessine. "Étude de l'endommagement des composants mécaniques soumis à de la fatigue de roulement dans le cadre d'une maintenance prédictive : cas des butées à billes". Thesis, Reims, 2015. http://www.theses.fr/2015REIMS033.
The bearing is an essential element in the design of rotating machines. In an industrial context, bearing failure can have extremely costly consequences. Predictive maintenance minimizes intervention costs and warns about the state of fatigue of the mechanical component. In this frame, we propose a study of rolling contact fatigue damage applied to thrust ball bearings. This study is twofold: numerical and experimental. The first axis consists in establishing a dynamic three-dimensional numerical model of the cyclic passage of a ball over a raceway in the presence of an indent, using the finite element method. An estimation of the evolution of the size of a surface-initiated spall as a function of loading cycles is also performed. These results are consistent with laboratory tests executed under the same conditions using a fatigue test rig dedicated to ball bearings. The second axis consists in determining a vibratory indicator using modal analysis to estimate on line the structural damage level of the ball bearing in the presence of an indent. The technique developed in this work enables monitoring the evolution of the modal damping values based on the life cycles determined from tests in static and dynamic modes. This study will contribute to estimating the residual life of the mechanical component after the onset of a spall, using the finite element method and accounting for the structural damage state.
Merabet, Samir. "Vers un droit de l'intelligence artificielle". Electronic Thesis or Diss., Aix-Marseille, 2018. https://buadistant.univ-angers.fr/login?url=https://bibliotheque.lefebvre-dalloz.fr/secure/isbn/9782247201235.
Even if its appearance is recent in the history of technological inventions, artificial intelligence has nevertheless quickly established itself, disrupting the economy and the job market. Yet, upon assessment, it seems that these two forms of intelligence cannot be regarded as equivalent. Even if artificial intelligence borrows some aspects of human intelligence, many others are missing. Consciousness, reason and emotions are unknown to machines, even intelligent ones. Yet law rests upon such qualities. Hence, applying rules created for humans to intelligent computer systems may be inappropriate. Indeed, the confrontation between law and artificial intelligence reveals the existence of a paradigm on which positive law is based. To a large extent, French law relies on the subjectivity proper to humans. All branches of law appear to be concerned, civil law as well as criminal law or intellectual property law. Therefore, the legal regime of artificial intelligence seems very uncertain. Consequently, the purpose of this study is to clear up the doubts surrounding the nature of artificial intelligence in order to distinguish it clearly from human intelligence. Eventually, the acknowledgment of the fundamental difference between these two forms of intelligence should lead to the recognition of a new public order of humanity and the preservation of an exclusive field for human intelligence.
Muscat, Maureen. "Machine learning and co-evolution methods for protein-protein interactions". Electronic Thesis or Diss., Sorbonne université, 2022. http://www.theses.fr/2022SORUS507.
In this thesis, we focus on the use of machine learning to solve the problem of predicting protein-protein interactions (PPIs). The study of PPIs is a central problem in biology, as proteins interact with each other to form complex networks that carry out the biological functions of cells. Experimental techniques to determine when and how proteins interact are very costly and time-consuming, so there is a great need for computational methods that can predict PPIs. We explore the use of machine learning based on coevolution and deep learning for PPI prediction. Coevolutionary methods such as Direct Coupling Analysis have been used successfully for a number of different tasks, such as the prediction of intra-protein contacts, inter-protein contacts, and mutational landscapes. During my PhD, I developed a supervised machine learning algorithm, called FilterDCA, to predict inter-domain and inter-protein contact maps. The aim was to add some supervision, using typical contact patterns, while keeping the tool interpretable. I have also worked on PPIs in the SARS-CoV-2 virus and on a multi-protein complex present in some bacterial membranes.
Marconnet, Bertrand. "Contexte augmenté basé sur les prédictions pour une réutilisation efficace des connaissances métier en conception : application à la conception proactive pour l'assemblage". Thesis, Bourgogne Franche-Comté, 2017. http://www.theses.fr/2017UBFCA018/document.
Work on augmenting the design context in product design makes it possible to raise designers' awareness of their choices in design for assembly. The presented approach concludes with a method for developing a software system, based on case studies of mechanical design. The purpose of the project is to capture the design intent in order to provide useful assistance to stakeholders, such as design support, decision making, verification/validation, and data structuring.
Germain, Dimitri. "Développement d'un modèle d'efforts de coupe intégrant le contact en dépouille : application au tournage de superfinition du cuivre Cu-c2". Phd thesis, Paris, ENSAM, 2011. http://pastel.archives-ouvertes.fr/pastel-00661684.
Flayols, Thomas. "Exploitation du Retour de Force pour l'Estimation et le Contrôle des Robots Marcheurs". Thesis, Toulouse, INSA, 2018. http://www.theses.fr/2018ISAT0025/document.
In this thesis, we are interested in the control of walking robots. Controlling these naturally unstable, non-linear, non-convex, large and contact-dependent systems is a major challenge in mobile robotics. Traditional approaches formulate a control chain as a cascade of sub-problems such as perception, planning, whole-body control and joint servoing. The contributions reported here are all intended to provide state feedback at the whole-body control stage or at the planning stage. Specifically, a first technical contribution is the formulation and experimental comparison of two estimators of the robot base. A second contribution is the implementation of an inverse dynamics controller to control the HRP-2 robot in torque. A variant of this controller is also formulated and tested in simulation to stabilize a robot in flexible contact with its environment. Finally, a motion generator based on predictive control, coupled with a whole-body controller, is presented.
Bensadon, Jérémy. "Applications de la théorie de l'information à l'apprentissage statistique". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLS025/document.
We study two different topics, using insight from information theory in both cases: 1) Context Tree Weighting is a text compression algorithm that efficiently computes the Bayesian combination of all visible Markov models: we build a "context tree", with deeper nodes corresponding to more complex models, and the mixture is computed recursively, starting from the leaves. We extend this idea to a more general context, also encompassing density estimation and regression, and we investigate the benefits of replacing regular Bayesian inference with switch distributions, which put a prior on sequences of models instead of models. 2) Information Geometric Optimization (IGO) is a general framework for black-box optimization that recovers several state-of-the-art algorithms, such as CMA-ES and xNES. The initial problem is transferred to a Riemannian manifold, yielding a parametrization-invariant first-order differential equation. However, since in practice time is discretized, this invariance only holds up to first order. We introduce the Geodesic IGO (GIGO) update, which uses this Riemannian manifold structure to define a fully parametrization-invariant algorithm. Thanks to Noether's theorem, we obtain a first-order differential equation satisfied by the geodesics of the statistical manifold of Gaussians, thus allowing the corresponding GIGO update to be computed. Finally, we show that while GIGO and xNES are different in general, it is possible to define a new "almost parametrization-invariant" algorithm, Blockwise GIGO, that recovers xNES from abstract principles.
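For reference, the recursive mixture at the heart of Context Tree Weighting can be written, for a binary alphabet (standard CTW formulation, not specific to this thesis), as

P_w^{s} = \tfrac{1}{2} P_e^{s} + \tfrac{1}{2} P_w^{0s} P_w^{1s} \ \text{(internal node } s\text{)}, \qquad P_w^{s} = P_e^{s} \ \text{(leaf)},

where P_e^{s} is a local estimator (e.g. Krichevsky-Trofimov) of the symbols seen in context s; the thesis generalizes this construction beyond text compression and studies replacing the Bayesian mixture by switch distributions.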
Risser-Maroix, Olivier. "Similarité visuelle et apprentissage de représentations". Electronic Thesis or Diss., Université Paris Cité, 2022. http://www.theses.fr/2022UNIP7327.
The objective of this CIFRE thesis is to develop an image search engine, based on computer vision, to assist customs officers. Indeed, we observe, paradoxically, an increase in security threats (terrorism, trafficking, etc.) coupled with a decrease in the number of customs officers. The images of cargoes acquired by X-ray scanners already allow the inspection of a load without requiring the opening and complete search of a controlled load. By automatically proposing similar images, such a search engine would help the customs officer in his decision making when faced with infrequent or suspicious visual signatures of products. Thanks to the development of modern artificial intelligence (AI) techniques, our era is undergoing great changes: AI is transforming all sectors of the economy. Some see this advent of "robotization" as the dehumanization of the workforce, or even its replacement. However, reducing the use of AI to the simple search for productivity gains would be reductive. In reality, AI could increase the work capacity of humans rather than compete with them in order to replace them. It is in this context, the birth of Augmented Intelligence, that this thesis takes place. This manuscript, devoted to the question of visual similarity, is divided into two parts. Two practical cases where the collaboration between humans and AI is beneficial are proposed. In the first part, the problem of learning representations for the retrieval of similar images is investigated. After implementing a first system similar to those proposed by the state of the art, one of the main limitations is pointed out: the semantic bias. Indeed, the main contemporary methods use image datasets coupled with semantic labels only. The literature considers that two images are similar if they share the same label. This vision of the notion of similarity, however fundamental in AI, is reductive. It is therefore questioned in the light of work in cognitive psychology in order to propose an improvement: taking visual similarity into account. This new definition allows a better synergy between the customs officer and the machine. This work is the subject of scientific publications and a patent. In the second part, after having identified the key components allowing the performance of the previously proposed system to be improved, an approach mixing empirical and theoretical research is proposed. This second case, augmented intelligence, is inspired by recent developments in mathematics and physics. First applied to the understanding of an important hyperparameter (temperature), then to a larger task (classification), the proposed method provides an intuition on the importance and role of factors correlated to the studied variable (e.g. hyperparameter, score, etc.). The processing chain thus set up has demonstrated its efficiency by providing a highly explainable solution in line with decades of research in machine learning. These findings will allow the improvement of previously developed solutions.
Fourty, Guillaume. "Recherche de contraintes structurales pour la modélisation ab initio du repliement protéique". Paris 7, 2006. http://www.theses.fr/2006PA077101.
Texto completoUnderstanding the protein folding process and predicting protein structures from sequence data only remain two challenging questions for structural biologists. In this work, we first observe highly frequent proximities between the N- and C-termini of protein domains, probably reflecting early stages of folding. We then address the problem of polymer folding on regular lattices: we enumerate Hamiltonian orbits and cyclic Hamiltonian orbits on n x n square lattices to evaluate the conformational space reduction associated with the termini-contact constraint. Exhaustive exploration of those maximally compact structures provides a baseline for minimum-search algorithms in the HP-folding problem. Finally, we study multiple alignments at low sequence identity and introduce a measure of topohydrophobicity conservation. We use it through decision trees to predict structural features such as the central/edge position of beta strands in beta sheets and solvent accessibility (RAPT - Relative Accessibility Prediction Tool). These data can be used in ab initio prediction procedures for protein structures.
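As an illustration of the kind of exhaustive enumeration mentioned above, the sketch below counts Hamiltonian cycles of the n x n square lattice by plain backtracking. It is a generic, assumed implementation of the combinatorial task, not the enumeration procedure used in the thesis.

```python
def count_hamiltonian_cycles(n):
    """Count Hamiltonian cycles on the n x n square lattice by backtracking.
    Cycles are anchored at cell (0, 0) and found once per traversal direction,
    so the result is divided by 2. Only practical for small n."""
    total = n * n
    visited = [[False] * n for _ in range(n)]
    count = 0

    def neighbours(r, c):
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n:
                yield nr, nc

    def extend(r, c, length):
        nonlocal count
        if length == total:
            # close the cycle only if we ended next to the start cell
            if (r, c) in ((0, 1), (1, 0)):
                count += 1
            return
        for nr, nc in neighbours(r, c):
            if not visited[nr][nc]:
                visited[nr][nc] = True
                extend(nr, nc, length + 1)
                visited[nr][nc] = False

    visited[0][0] = True
    extend(0, 0, 1)
    return count // 2

# Hamiltonian cycles exist on the square lattice only for even n
print(count_hamiltonian_cycles(4))  # expected: 6
```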
Crombé, Amandine. "Développement des approches radiomics à visées diagnostique et pronostique pour la prise en charge de patients atteints des sarcomes des tissus mous". Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0059.
Texto completoSoft-tissue sarcomas (STS) are malignant, ubiquitous mesenchymal tumors characterized by their heterogeneity at several levels, i.e. in terms of clinical presentation, radiological presentation, histology, molecular features and prognosis. Contrast-enhanced magnetic resonance imaging (MRI) is the reference imaging modality for these tumors. MRI enables local staging, evaluation of the response to treatment, surgical planning and the detection of local relapse. Furthermore, MRI provides non-invasive access to the whole tumor in situ and in vivo, which is complementary to histopathological and molecular analyses that require invasive biopsy samples at risk of sampling bias. However, no imaging biomarker dedicated to STS has been validated so far. Meanwhile, technical innovations have been developed, namely: (i) alternative imaging modalities or MRI sequences that can quantify intratumoral physiopathological phenomena; (ii) image analysis tools that can quantify radiological phenotypes better than the human eye, through hundreds of quantitative texture and shape features (named radiomics features); and (iii) mathematical algorithms that can integrate all this information into predictive models (machine learning). Radiomics approaches correspond to the development of predictive models based on machine-learning algorithms and radiomics features, possibly combined with other clinical, pathological and molecular features. The aim of this thesis was to put these innovations into practice and to optimize them in order to improve the diagnostic and therapeutic management of patients with STS. In the first part, we combined radiological and radiomics features extracted from the baseline structural MRIs of patients with a locally advanced subtype of STS in order to build a radiomics signature that could help identify patients at higher risk of metastatic relapse who may benefit from neoadjuvant treatments. In the second part, we elaborated a model based on the early changes in intratumoral heterogeneity (delta-radiomics) on structural MRIs of patients with locally advanced high-grade STS treated with neoadjuvant chemotherapy, in order to rapidly identify patients who do not respond to treatment and would benefit from early therapeutic adjustments. In the last part, we sought to better identify and control potential biases in radiomics approaches in order to optimize the predictive models based on radiomics features.
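The following sketch shows, on purely synthetic placeholder data, the general shape of a radiomics-type predictive pipeline: standardized features, a penalized logistic regression, and cross-validated ROC AUC. It is only an assumed illustration of the approach described above, not the models built in the thesis, and feature extraction from MRI volumes is assumed to have been done upstream.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(0)
n_patients, n_features = 120, 300              # hypothetical cohort and feature count
X = rng.normal(size=(n_patients, n_features))  # stand-in radiomics feature matrix
y = rng.integers(0, 2, size=n_patients)        # stand-in relapse labels

# L1-penalized logistic regression as a simple sparse radiomics signature
model = make_pipeline(
    StandardScaler(),
    LogisticRegression(penalty="l1", solver="liblinear", C=0.1, max_iter=1000),
)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
print(f"cross-validated AUC: {aucs.mean():.2f} +/- {aucs.std():.2f}")
```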
Bantegnie, Brice. "Eliminating propositional attitudes concepts". Thesis, Paris, Ecole normale supérieure, 2015. http://www.theses.fr/2015ENSU0020.
Texto completoIn this dissertation, I argue for the elimination of propositional attitude concepts. In the first chapter I sketch the landscape of eliminativism in contemporary philosophy of mind and cognitive science. There are two kinds of eliminativism: eliminative materialism and concept eliminativism. One can further distinguish between folk and science eliminativism about concepts: whereas the former says that the concept should be eliminated from our folk theories, the latter says that the concept should be eliminated from our scientific theories. The eliminativism about propositional attitude concepts I defend is a species of the latter. In the next three chapters I put forward three arguments for this thesis. I first argue that the interventionist theory of causation cannot lend credit to our claims of mental causation. I then support the thesis by showing that propositional attitude concepts aren't natural kind concepts, because they cross-cut the states of the modules posited by the thesis of massive modularity, a thesis which, I contend, is part of our best research program. Finally, my third argument rests on science eliminativism about the concept of mental content. In the two last chapters of the dissertation, I first defend the elimination of the concept of mental content against the success argument, according to which the concept should be conserved because psychologists produce successful science while using it. Then, I dismiss an alternative way of eliminating the concept, namely the way taken by proponents of extended cognition, by refuting what I take to be the best argument for extended cognition, namely the system argument.
Giaoui, Franck S. "Indemnisation du préjudice économique en cas d'inexécution contractuelle : étude comparative en common law américaine, droit civil français et droit commercial international : application aux avant-contrats, atteintes à la réputation commerciale et activités sans base établie". Thesis, Paris 1, 2018. http://www.theses.fr/2018PA01D036.
Texto completoLaw statutes and codes lack a precise definition of the "full compensation" principle and, a fortiori, rules for assessing compensatory damages. The legal doctrine tries to fill in the blank by describing the different types of damages awarded, notably in the United States. Yet the issue remains unresolved when the loss is certain but its valuation is complex or uncertain. The assessment of the economic loss and the calculation methodology for damages are considered to be only matters of fact: trial courts and judges thus retain a sovereign power, resulting in great uncertainty for the parties. Reducing judicial uncertainty requires the choice and creation of a common framework. Based on the results of an empirical law-and-economics analysis of several hundred precedent cases, the dissertation formulates simple and practical suggestions for parties looking to improve their chances of success in recouping lost profits and lost opportunities. It also evidences which improvements of the judicial systems are required in order to actually implement the current right to full compensation. More importantly, the research reaches a fundamental normative conclusion: economic loss, compensatory damages and, hence, the calculation of the quantum granted should be considered not as mere matters of fact but also as matters of law. Henceforth, it would be logical that the Cour de cassation (or the highest court) advise on and control the use of calculation methodologies. Each head of damages would thus be legally qualified, and the principle of full compensation would be extended in order to better compensate the loss when evaluating its quantum is complex. It finally results that referenced compensatory scales can be developed in practice from compiling relevant legal precedents. The introduction of such scales would benefit academics in their debates, parties in the drafting of their contracts and counsels in their pre-trial exchanges. Eventually, judges could use them as tools to assist their rulings. If those scales were to be adopted and shared, they would enable the creation of machine-learning tools whose value - notably their predictive value - would far exceed what is perceived today.
Affes, Hend. "Modélisation au niveau transactionnel de l'architecture et du contrôle relatifs à la gestion d'énergie de systèmes sur puce". Thesis, Nice, 2015. http://www.theses.fr/2015NICE4137/document.
Texto completoEmbedded systems-on-chip (SoC) invade our daily life. With advances in semiconductor technology, these systems integrate more and more complex and energy-intensive features, which generate increasing computation load and memory size requirements. While the complexity of these systems is a key trend, energy consumption has emerged as a critical factor for SoC designers. In this context, we have studied a transaction-level modeling approach that allows a description of a clock tree and its management structure to be associated with a functional model, both described at the same abstraction level. This structure, developed with a separation-of-concerns approach, provides the interface to power-consumption management for both the hardware components and the application software. All the models developed are gathered in a C++ ClkArch library. To apply a clock tree intent and its control part to a SystemC-TLM architecture model, we propose a methodology based on three steps: specification, modeling and simulation. A verification step based on simulation is also considered, using assertion-type contracts. This work aims to build a modeling approach on current design tools, so we propose a representation of a clock and power management structure in the IP-XACT standard, allowing a C++ description of the SoC power management structures to be generated. Finally, a power management strategy based on the global functional states of the components of the system architecture is proposed. This strategy avoids local decision-making unsuited to optimized overall power/energy management.
Braconnier, Jean-Baptiste. "Maintien de l'intégrité de robots mobiles en milieux naturels". Thesis, Clermont-Ferrand 2, 2016. http://www.theses.fr/2016CLF22667/document.
Texto completoThis thesis focuses on preserving the integrity of mobile robots in off-road conditions. The objective is to provide control laws that guarantee the integrity of a vehicle during autonomous displacements in natural environments at high speed (5 to 7 m.s-1), in particular in the framework of precision farming. Integrity is understood here in the broad sense: controlling the movements of a mobile robot can generate commands that affect its physical integrity or prevent the achievement of its task (rollover, spin, control stability, maintaining accuracy, etc.). Moreover, displacement in natural environments leads to problems linked in particular to variable and relatively low adhesion conditions (especially since the speed of the vehicle is high), which result in strong sliding of the wheels on the ground, or to ground geometries that cannot be crossed by the robot. This thesis aims to determine in real time the stability space, in terms of permissible controls, that moderates the actions of the robot. After a presentation of existing models and of the observers that allow these models to be used in predictive control laws for trajectory tracking, a new method for estimating side-slip angles based on kinematic observation is proposed. It addresses the problem of variable vehicle speed (and in particular the case of zero values) and allows observation during a displacement without a reference trajectory. This new observer is essential for the remainder of the thesis, since the rest of the work is concerned with the modulation of the speed of the vehicle. Two predictive control laws acting on the speed of the vehicle were then set up. The first provides a solution to the saturation of the steering actuators when the speed or the side-slip angles make the trajectory inadmissible with respect to the physical capacities of the vehicle. The second addresses the problem of guaranteeing the accuracy of trajectory tracking (keeping the vehicle in a displacement corridor). In both cases, the control strategy is similar: the future state of the vehicle is predicted from the current evolution conditions and from those simulated for the future evolution (obtained by simulating dynamic models of the vehicle), in order to determine the optimum speed so that the target variables (in one case the steering value and in the other the lateral deviation from the trajectory) comply with the imposed conditions (not exceeding a target value). The results presented in this thesis were obtained either in simulation or in real conditions on robotic platforms. The proposed algorithms make it possible, in one case, to reduce the speed of the vehicle in order to avoid saturation of the steering actuator and the resulting over- and under-steering phenomena, thus preserving the vehicle's controllability; and in the other case, to ensure that the lateral deviation from the trajectory remains below a target value.
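The sketch below illustrates, in a deliberately simplified form, the idea of predicting the steering demand over a short horizon with a kinematic model extended by a side-slip angle, and lowering the commanded speed whenever the prediction would saturate the actuator. The model, gains and limits are hypothetical placeholders, not the control law developed in the thesis.

```python
import math

STEER_LIMIT = math.radians(25.0)   # hypothetical actuator saturation
WHEELBASE = 1.2                    # m, illustrative
HORIZON = 10                       # number of upcoming path samples considered

def predicted_steering(curvature_ref, side_slip):
    """Steering needed to follow a path of given curvature with a kinematic
    bicycle model extended by a side-slip angle (illustrative formula)."""
    return math.atan(WHEELBASE * curvature_ref) + side_slip

def admissible_speed(speed, curvature_profile, side_slip):
    """Back the commanded speed off as long as the steering predicted over the
    horizon would saturate; the side-slip angle is assumed to shrink with speed."""
    v = speed
    while v > 0.5:
        saturated = any(
            abs(predicted_steering(c, side_slip * v / speed)) > STEER_LIMIT
            for c in curvature_profile[:HORIZON]
        )
        if not saturated:
            break
        v -= 0.2                    # reduce until the prediction is admissible
    return v

# upcoming path curvatures (1/m) over the horizon, e.g. entering a tight bend
profile = [0.05] * 4 + [0.30] * 6
print(admissible_speed(7.0, profile, side_slip=math.radians(6)))  # ~6.0 m/s
```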
Guillet, Audrey. "Commande locale décentralisée de robots mobiles en formation en milieu naturel". Thesis, Clermont-Ferrand 2, 2015. http://www.theses.fr/2015CLF22609/document.
Texto completoThis thesis focuses on the control of a formation of wheeled mobile robots travelling in off-road conditions. The goal of the application is to follow a reference trajectory, known (entirely or partially) beforehand. Each robot of the fleet has to track this trajectory while coordinating its motion with the other robots in order to maintain a formation described as a set of desired distances between vehicles. The off-road context has to be considered thoroughly, as it creates perturbations in the motion of the robots: the contact of the tire on an irregular and slippery ground induces significant slipping and skidding. These phenomena are hardly measurable with direct sensors; therefore, an observer is set up in order to estimate their value. The skidding effect is included in the evolution of each robot as a side-slip angle, thus creating an extended kinematic model of evolution. From this model, adaptive control laws on steering angle and velocity are designed independently for each robot. These respectively control the lateral distance to the trajectory and the curvilinear distance of the robot to a target. Predictive control techniques then extend these control laws in order to account for the behavior of the actuators, so that positioning errors due to the delay in the robot's response to the commands are cancelled. The elementary control law on velocity ensures an accurate longitudinal positioning of a robot with respect to a target. It serves as a base for a global fleet control strategy which breaks the overall formation-keeping goal down into local positioning objectives for each robot. A bidirectional control strategy is designed, in which each robot defines two targets, the immediately preceding and following robots in the fleet. The velocity control of a robot is finally defined as a linear combination of the two velocity commands obtained by applying the elementary control law to each target. The linear combination parameters are investigated, first defining constant parameters for which the stability of the formation is proved through Lyapunov techniques, then considering the effect of variable coefficients in order to adapt the overall behavior of the formation in real time. The formation configuration can indeed evolve, for application purposes and to guarantee the safety of the robots. To fulfill this latter requirement, each robot of the fleet estimates in real time a minimal stopping distance in case of emergency and two avoidance trajectories to get around the preceding vehicle if it suddenly stops. Given the initial configuration of the formation and the calculated emergency behaviors, the desired distances between the robots can be adapted so that the new configuration ensures the safety of every robot of the formation against potential collisions.
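A minimal sketch of the bidirectional blending idea follows: each robot computes one speed command with respect to its predecessor and one with respect to its follower, then combines them linearly. The proportional law and the weights are illustrative assumptions, not the thesis's elementary control law.

```python
def bidirectional_speed(own_speed, gap_front, gap_rear, desired_gap,
                        alpha=0.6, k=0.8):
    """Blend the command computed w.r.t. the preceding robot (weight alpha)
    with the one computed w.r.t. the following robot (weight 1 - alpha)."""
    # too close to the predecessor -> slow down; too far -> speed up
    v_front = own_speed + k * (gap_front - desired_gap)
    # follower lagging behind -> slow down so that it can catch up
    v_rear = own_speed - k * (gap_rear - desired_gap)
    return alpha * v_front + (1.0 - alpha) * v_rear

# robot at 5 m/s, 9 m behind its predecessor, follower 11 m behind it,
# desired inter-vehicle distance 10 m: both terms ask it to slow down slightly
print(bidirectional_speed(5.0, gap_front=9.0, gap_rear=11.0, desired_gap=10.0))
```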
Bresson, Damien. "Étude de l’écoulement sanguin dans un anévrysme intracrânien avant et après traitement par stent flow diverter : quantification par traitement d’images de séquences angiographiques 2D". Thesis, Compiègne, 2016. http://www.theses.fr/2016COMP2308/document.
Texto completoIntracranial aneurysm treatment based on intra-aneurysmal flow modification tends to replace traditional coiling in many cases, and not only for the complex aneurysms for which it was initially designed. Dedicated stents (low-porosity, high-pore-density stents) called "flow diverter" (FD) stents are deployed across the neck of the aneurysm to achieve this purpose. The summation of three different mechanisms tends to lead to the healing of the aneurysm: immediate flow alteration due to the mechanical screen effect of the stent, physiological triggering of acute or progressive thrombus formation inside the aneurysm pouch, and a long-term biological response leading to neointima formation and arterial wall remodeling. This sequence of processes is also expected to decrease the recanalization rate. Scientific data supporting the flow alteration theory are numerous, especially from computational fluid dynamics (CFD). These approaches are very helpful for improving the biomechanical understanding of the relations between blood flow and pathology, but they do not fit into real-time treatment. Neuroendovascular treatments are performed under a dynamic X-ray modality (digital subtraction angiography, DSA). In daily practice, however, FD stents are sized to the patient's 3D vascular anatomy and then deployed, and the flow modification is evaluated by the clinician in an intuitive manner: the decision whether to deploy another stent is based solely on visual estimation. The lack of tools available in the angio room for quantifying blood flow hemodynamics in real time should be pointed out. It would make sense to take advantage of the functional data contained in contrast bolus propagation, and not only of anatomical data. We therefore proposed to create flow-analysis software based on angiographic image processing. This software was built using algorithms developed and validated on 2D DSA sequences obtained in a swine intracranial aneurysm model. This animal model was also optimized to obtain 3D vascular imaging and experimental hemodynamic data that could be used to perform realistic computational fluid dynamics simulations. In a third step, the software tool was used to analyze flow modification from angiographic sequences acquired during the treatment of unruptured IAs in patients treated with an FD stent. Finally, flow changes were correlated with aneurysm occlusion at long-term follow-up, with the objective of identifying predictive markers of long-term occlusion.
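One common way to quantify flow change from a 2D DSA run is to compute a time-density curve in a region of interest and extract simple descriptors from it. The sketch below does this on a synthetic sequence and is only an assumed illustration of that general idea, not the software developed in this work.

```python
import numpy as np

def time_density_curve(frames, roi_mask):
    """Mean contrast intensity inside the ROI for every frame of a DSA run."""
    return np.array([frame[roi_mask].mean() for frame in frames])

def simple_descriptors(curve, frame_rate):
    """Time to peak (s) and wash-out slope of a time-density curve."""
    t_peak = int(np.argmax(curve))
    tail = curve[t_peak:]
    washout = (tail[0] - tail[-1]) / (len(tail) / frame_rate) if len(tail) > 1 else 0.0
    return t_peak / frame_rate, washout

# synthetic 60-frame, 64x64 sequence with a bolus passing through a circular ROI
rng = np.random.default_rng(2)
t = np.arange(60)
bolus = np.exp(-0.5 * ((t - 15) / 6.0) ** 2)        # idealized bolus curve
frames = rng.normal(0.02, 0.01, size=(60, 64, 64))  # background noise
yy, xx = np.mgrid[:64, :64]
roi = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
frames[:, roi] += bolus[:, None]                    # inject the bolus in the ROI

curve = time_density_curve(frames, roi)
print(simple_descriptors(curve, frame_rate=4.0))    # ~ (3.75 s, wash-out slope)
```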
Paquin-Lafleur, Stéphanie. "Des délits et des hommes : portrait des auteurs d’actions indécentes du Québec et caractéristiques associées à la récidive et à la commission de crimes sexuels avec contacts". Thesis, 2020. http://hdl.handle.net/1866/25756.
Texto completoThe crime of indecent act has long been defined by the scientific community and handled by the justice system more as a nuisance than as a sexual crime. This can be explained by the fact that this type of crime does not involve physical sexual contact with the victim and that the negative consequences of the assault for the victim are often minimized. This study is part of an internship at the Crimes Against the Person Division of the Sûreté du Québec, and its concrete application is to support the process of targeting potential recidivists for indecent acts in investigation files. The goal is therefore to draw a retrospective portrait of the criminal careers of the 3,572 alleged or proven perpetrators of indecent acts reported in Quebec between 2011 and 2018. Univariate, bivariate and multivariate analyses and a ROC analysis were performed on the study population. The results suggest that nearly 22% of the offenders are repeat offenders in indecent acts and that 18% of the study population have committed a contact sexual crime against a victim. Both groups were also found to be overwhelmingly male, younger, more prolific, and more diverse in terms of the variety of crime categories than those who had never reoffended or committed a contact sexual crime. Finally, the best predictors were found to be the gender of the offenders and the presence of a history of sexual crimes, violent crimes, and crimes in the "Other Criminal Offences" category, which may be associated with breaches of orders or failures to comply with court undertakings. However, these individuals remain marginal, since the vast majority of the indecent act offender population has neither reoffended nor committed a sexual crime involving contact with a victim.
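The sketch below shows, on fabricated data, the kind of binary logistic regression and ROC analysis described above: predictors loosely mirroring those named in the abstract, exponentiated coefficients read as odds ratios, and an AUC as a measure of discrimination. None of the names or numbers relate to the actual study data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "prior_sexual_crime": rng.integers(0, 2, n),
    "prior_violent_crime": rng.integers(0, 2, n),
    "other_criminal_offences": rng.integers(0, 2, n),
})
# synthetic outcome loosely driven by the placeholder predictors
logit = (-2 + 0.8 * df["male"] + 1.2 * df["prior_sexual_crime"]
         + 0.6 * df["prior_violent_crime"] + 0.5 * df["other_criminal_offences"])
y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df)
fit = sm.Logit(y, X).fit(disp=False)
print(np.exp(fit.params))                  # odds ratios per predictor
print(roc_auc_score(y, fit.predict(X)))    # discrimination of the fitted model
```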
Lara-Carrasco, Jessica. "Les rêves durant la grossesse : étude de leur nature et de leur rôle prédictif dans l’adaptation psychologique à la maternité". Thèse, 2013. http://hdl.handle.net/1866/11301.
Texto completoDugré, Jules. "L'évaluation du risque de comportements suicidaires et d'automutilation en obéissance aux hallucinations auditives impérieuses". Thèse, 2016. http://hdl.handle.net/1866/19075.
Texto completoThe current research consists, as a first step, in better documenting the relationship between harmful command hallucinations and compliance. More specifically, the study stems from a literature review on this subject aiming to identify the risk factors associated with compliance with self-harm command hallucinations in individuals with a major mental disorder. To accomplish this, secondary analyses were performed using the MacArthur Violence Risk Assessment Study database. Binary logistic regressions revealed that emotional distress, a history of compliance, a current drug abuse disorder, a current major depressive disorder, physical abuse victimization during adolescence and the severity of the hallucinatory behavior were all significant predictors of compliance with self-harm command hallucinations. The study highlights an important predictive model that may guide clinicians in improving the assessment and management of deliberate self-harm and suicidal behaviors in response to command hallucinations in individuals diagnosed with a major mental disorder.