Theses on the topic "Approches basées sur la structure"
Create an accurate citation in APA, MLA, Chicago, Harvard and other styles
Consult the top 50 theses for your research on the topic "Approches basées sur la structure".
Next to every source in the list of references there is an "Add to bibliography" button. Press this button, and we will automatically generate the bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Vancouver, Chicago, etc.
You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.
Browse theses on a wide variety of disciplines and organize your bibliography correctly.
Legendre, Audrey. "Prédiction de structures secondaires d’ARN et de complexes d’ARN avec pseudonoeuds - Approches basées sur la programmation mathématique multi-objectif". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLE031.
In this thesis, we propose new algorithms and tools to predict RNA and RNA complex secondary structures, including particular RNA motifs that are difficult to predict, such as pseudoknots. RNA structure prediction remains a difficult task, and the numerous existing tools do not always give good predictions. In order to predict structures that are as close as possible to the real ones, we propose to develop algorithms that: i) predict the k best structures; ii) combine several prediction models to take advantage of each; iii) are able to take into account user constraints and structural data such as SHAPE. We developed three tools: BiokoP for predicting RNA secondary structures, and RCPred and C-RCPred for predicting RNA complex secondary structures. The tool BiokoP proposes several optimal and sub-optimal structures thanks to the combination of two prediction models, the energy model MFE and the probabilistic model MEA. This combination is done with multi-objective mathematical programming, where each model is associated with an objective function. To this end, we developed a generic algorithm returning the k best Pareto curves of a bi-objective integer linear program. The tool RCPred, based on the MFE model, proposes several sub-optimal structures. It takes advantage of the numerous existing tools for RNA secondary structure prediction and RNA-RNA interaction prediction by taking predicted secondary structures and RNA-RNA interactions as input. The goal of RCPred is to find the best combination among these inputs. The tool C-RCPred is a new version of RCPred that takes into account user constraints and structural data (SHAPE, PARS, DMS). C-RCPred is based on a multi-objective algorithm, where the different objectives are the MFE model, the fulfillment of the user constraints and the concordance with the structural data.
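The bi-objective Pareto idea behind BiokoP can be illustrated at toy scale. The sketch below is not the thesis's algorithm (which solves bi-objective integer linear programs and returns the k best Pareto curves): it brute-forces the Pareto-optimal points of a made-up bi-objective 0/1 problem, with invented coefficients standing in for the MFE and MEA objectives.

```python
from itertools import product

# Toy bi-objective 0/1 problem: choose x in {0,1}^4 under a knapsack-style
# constraint, maximizing two conflicting objectives. All numbers are made up.
f1 = [4, 3, 5, 2]   # objective 1 coefficients (stand-in for an energy score)
f2 = [1, 4, 2, 5]   # objective 2 coefficients (stand-in for a probability score)
w = [2, 3, 4, 1]    # constraint weights
W = 4               # capacity

def feasible(x):
    return sum(wi * xi for wi, xi in zip(w, x)) <= W

def objs(x):
    return (sum(a * b for a, b in zip(f1, x)), sum(a * b for a, b in zip(f2, x)))

points = {objs(x) for x in product((0, 1), repeat=4) if feasible(x)}

# A point is Pareto-optimal if no other feasible point dominates it,
# i.e. is >= in both objectives and differs in at least one.
def dominated(p, q):
    return q[0] >= p[0] and q[1] >= p[1] and q != p

pareto = sorted(p for p in points if not any(dominated(p, q) for q in points))
print(pareto)   # → [(5, 9), (6, 6)]
```

Each Pareto point corresponds to one "best" trade-off between the two scoring models; enumerating successive fronts after removing the first would give the k-best-curves flavor described in the abstract.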
Verdret, Yassine. "Analyse du comportement parasismique des murs à ossature bois : approches expérimentales et méthodes basées sur la performance sismique". Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0010/document.
This thesis presents a study of the seismic behavior of light timber frame walls with stapled and nailed sheathings, through experimental approaches and the development of a methodology for the application of seismic performance-based methods. The experimental approaches consist of three test campaigns: (1) a series of static tests on stapled and nailed connections, (2) a series of static tests performed on light timber frame walls, and (3) a series of dynamic tests performed on light timber frame walls on a shaking table. The database formed by these test results then allows the examination of the strength and stiffness properties of the wall elements according to the loading conditions (strain rate, vertical load). A macro-scale model of the cyclic and dynamic behavior of such elements is also proposed using constitutive law models. A framework for the application of seismic performance-based methods (the N2 method and the MPA method) to light timber frame structures, and a vulnerability analysis based on fragility curves using the N2 method, are proposed.
Tataie, Laila. "Méthodes simplifiées basées sur une approche quasi-statique pour l’évaluation de la vulnérabilité des ouvrages soumis à des excitations sismiques". Thesis, Lyon, INSA, 2011. http://www.theses.fr/2011ISAL0123/document.
In the context of protecting buildings against seismic risk, simplified analysis techniques based on quasi-static pushover analysis have developed strongly over the past two decades. This thesis aims to optimize a simplified method proposed by Chopra and Goel in 2001 and adopted by the American FEMA 273 standard. This method is a nonlinear decoupled modal analysis, called UMRHA (Uncoupled Modal Response History Analysis) by its authors, which is mainly characterized by: modal pushover analysis according to the dominant modes of vibration of the structure; setting up nonlinear single-degree-of-freedom systems drawn from the modal pushover curves; and then determining the history response of the structure by combining the temporal responses associated with each mode of vibration. The decoupling of the nonlinear history responses associated with each mode is the strong assumption of the UMRHA method. In this study, the UMRHA method has been improved by investigating the following points. First, several nonlinear single-degree-of-freedom systems drawn from the modal pushover curves are proposed to enrich the original UMRHA method, in which a simple elastic-plastic model is used: other elastic-plastic models with different envelope curves; the Takeda model, which accounts for the hysteretic behavior characteristic of structures under earthquakes; and finally a simplified model based on frequency degradation as a function of a damage index. The latter model privileges the representation of frequency degradation during the damage process over a realistic description of the hysteresis loops. The total response of the structure is obtained by summing the contributions of the nonlinear dominant modes and those of the linear non-dominant modes.
Finally, the degradation of the modal shapes due to structural damage during seismic loading is taken into account in the new simplified method M-UMRHA (Modified UMRHA) proposed in this study. By generalizing the previous model of frequency degradation as a function of a damage index, the modal shape itself also becomes dependent on a damage index, namely the maximum displacement at the top of the structure; the evolution of the modal shape as a function of this index is obtained directly from the modal pushover analysis. The pertinence of the new M-UMRHA method is investigated for several types of structures, using tested models of structural simulation under earthquakes: a reinforced concrete frame modeled by multifiber elements with uniaxial laws under cyclic loading for concrete and steel; an infill masonry wall with diagonal-strut elements resistant only in compression; and an existing building (Grenoble City Hall) with multilayer shell elements and nonlinear biaxial laws based on the concept of smeared, fixed cracks. The results obtained with the proposed simplified method are compared to reference results derived from nonlinear response history analysis.
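The mode-by-mode combination at the heart of UMRHA can be sketched numerically. The toy example below is not the thesis's model: two linear single-degree-of-freedom oscillators with made-up frequencies, damping ratios and participation factors are integrated independently under the same ground acceleration, and their responses are summed at every time step.

```python
import math

# Toy ground acceleration: a short sine pulse (made-up numbers).
dt, n = 0.01, 1000
accel = [0.3 * math.sin(2 * math.pi * 1.0 * i * dt) if i * dt < 2.0 else 0.0
         for i in range(n)]

def sdof_response(omega, zeta, gamma):
    """Semi-implicit Euler integration of x'' + 2*zeta*omega*x' + omega^2*x = -gamma*a(t)."""
    x, v, hist = 0.0, 0.0, []
    for a in accel:
        v += dt * (-2 * zeta * omega * v - omega ** 2 * x - gamma * a)
        x += dt * v
        hist.append(x)
    return hist

# Two "modes" with invented frequencies, damping and participation factors.
mode1 = sdof_response(omega=2 * math.pi * 1.2, zeta=0.05, gamma=1.3)
mode2 = sdof_response(omega=2 * math.pi * 3.5, zeta=0.05, gamma=-0.4)

# UMRHA-style combination: the total response is the sum of the
# (here purely linear) modal contributions at every time step.
total = [u1 + u2 for u1, u2 in zip(mode1, mode2)]
print(max(abs(u) for u in total))
```

In the actual method each modal oscillator is nonlinear (elastic-plastic, Takeda, etc.), but the decoupling-then-summing structure is the same.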
Dan, Pan. "Nouvelles approches en ingénierie vasculaire basées sur un scaffold fonctionnalisé, une matrice extracellulaire naturelle et une cellularisation intraluminale : de la caractérisation à la validation chez l’animal". Thesis, Université de Lorraine, 2016. http://www.theses.fr/2016LORR0304/document.
Panei, Francesco Paolo. "Advanced computational techniques to aid the rational design of small molecules targeting RNA". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS106.
RNA molecules have recently gained huge relevance as therapeutic targets. The direct targeting of RNA with small-molecule drugs stands out for its wide applicability to different classes of RNAs. Despite this potential, the field is still in its infancy and the number of available RNA-targeted drugs remains limited. A major challenge is the highly flexible and elusive nature of RNA targets. Nonetheless, RNA flexibility also presents unique opportunities that could be leveraged to enhance the efficacy and selectivity of newly designed therapeutic agents. To this end, computer-aided drug design techniques emerge as a natural and comprehensive approach. However, existing tools do not fully account for the flexibility of RNA. This PhD project aims to build a computational framework for the rational design of compounds targeting RNA. The first essential step for any structure-based approach is the analysis of the available structural knowledge. However, a comprehensive, curated and regularly updated repository for the scientific community was lacking. To fill this gap, I curated the creation of HARIBOSS ("Harnessing RIBOnucleic acid - Small molecule Structures"), a database of all the experimentally determined structures of RNA-small molecule complexes retrieved from the PDB. HARIBOSS is available via a dedicated web interface (https://hariboss.pasteur.cloud) and is regularly updated with all the structures resolved by X-ray, NMR and cryo-EM in which ligands with drug-like properties interact with RNA molecules. Each HARIBOSS entry is annotated with the physico-chemical properties of the ligands and RNA pockets. The HARIBOSS repository, constantly updated, will facilitate the exploration of drug-like compounds known to bind RNA, the analysis of ligand and pocket properties and, ultimately, the development of in silico strategies to identify RNA-targeting small molecules.
Coinciding with its release, it was possible to show that the majority of RNA binding pockets are unsuitable for interactions with drug-like molecules, owing to their lower hydrophobicity and increased solvent exposure compared to protein binding sites. However, this picture emerges from a static depiction of RNA, which may not fully capture its interaction mechanisms with small molecules. In a broader perspective, more advanced computational techniques were needed to account effectively for RNA flexibility when characterizing potential binding sites. In this direction, I implemented SHAMAN, a computational technique to identify potential small-molecule binding sites in RNA structural ensembles. SHAMAN explores the conformational landscape of the target RNA through atomistic molecular dynamics while efficiently identifying RNA pockets with small probe compounds, whose exploration of the RNA surface is accelerated by enhanced-sampling techniques. In a benchmark encompassing diverse large, structured riboswitches as well as small, flexible viral RNAs, SHAMAN accurately located experimentally resolved pockets, ranking them as preferred probe hotspots. Notably, SHAMAN's accuracy was superior to that of other tools working on static RNA structures in the realistic drug-discovery scenario where only apo structures of the target are available. This establishes SHAMAN as a robust platform for future drug-design endeavors targeting RNA with small molecules, especially considering its potential applicability in virtual screening campaigns. Overall, my research contributed to enhancing our understanding and use of RNA as a target for small-molecule drugs, paving the way for more effective drug-design strategies in this evolving field.
Belbachir, Faiza. "Approches basées sur les modèles de langue pour la recherche d'opinions". Toulouse 3, 2014. http://thesesups.ups-tlse.fr/2341/.
The evolution of the World Wide Web has brought us various forms of data: factual data, product reviews, arguments, discussions, news, temporal data, blog data, etc. Blogs are considered to be the best medium for expressing one's opinions about anything, from a political subject to a product. These opinions become all the more important when they influence government policies or companies' marketing agendas, not least because of their huge presence on the web. It therefore becomes equally important to have information systems that can process this kind of information. In this thesis, we propose approaches that distinguish between factual and opinion documents for the purpose of further processing opinionated information. Among current opinion-finding approaches, some rely on lexicons of subjective terms while others exploit machine-learning techniques. Within the framework of this thesis, we are interested in both types of approaches, mitigating some of their limits. Our contribution revolves around three main aspects of opinion mining. First, we propose a lexical approach to the opinion-finding task. We exploit various publicly available subjective resources, such as IMDB, ROTTEN, CHESLY and MPQA, which are considered to be opinionated data collections. The idea is that if a document is similar to these, it is most likely an opinionated document. We rely on language-modeling techniques for this purpose: we model both the test document (i.e. the document whose subjectivity is to be evaluated) and the source of opinion with language models, and measure the similarity between the two models. The higher the similarity score, the more subjective the document. Our second contribution to opinion detection is based on machine learning.
For that purpose, we propose and evaluate various features, such as emotivity, subjectivity, addressing and reflexivity, and report results comparing them with current approaches. Our third contribution concerns the polarity of the opinion, which determines whether a subjective document expresses a positive or negative opinion on a given topic. We conclude that the polarity of a term can depend on the domain in which it is used.
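The lexical approach described above can be sketched with a minimal unigram language model. Everything below is invented for illustration: two tiny corpora stand in for collections such as IMDB, ROTTEN, CHESLY and MPQA, and simple add-one smoothing stands in for the thesis's language-modeling machinery.

```python
import math
from collections import Counter

# Tiny stand-in corpora (made up for this sketch).
opinion_corpus = ("i love this movie it is wonderful and i really hate the "
                  "ending terrible acting but great story i think it is amazing").split()
factual_corpus = ("the movie was released in 1994 and directed by the studio "
                  "it runs for two hours and stars several actors").split()
vocab = set(opinion_corpus) | set(factual_corpus)

def unigram_lm(tokens):
    """Unigram model with add-one (Laplace) smoothing over the shared vocabulary."""
    counts, denom = Counter(tokens), len(tokens) + len(vocab)
    return lambda w: (counts[w] + 1) / denom

opinion_lm, factual_lm = unigram_lm(opinion_corpus), unigram_lm(factual_corpus)

def log_likelihood(doc, lm):
    """Log-probability of the in-vocabulary words of `doc` under model `lm`."""
    return sum(math.log(lm(w)) for w in doc.lower().split() if w in vocab)

def is_subjective(doc):
    # Higher likelihood under the opinionated model suggests a subjective document.
    return log_likelihood(doc, opinion_lm) > log_likelihood(doc, factual_lm)

print(is_subjective("i really love this amazing story"))
print(is_subjective("the movie was released in 1994 by the studio"))
```

The thesis compares a test-document model against an opinion-source model; this sketch simplifies that to comparing likelihoods under two reference models, which conveys the same similarity intuition.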
Don, Anthony. "Indexation et navigation dans les contenus visuels : approches basées sur les graphes". Bordeaux 1, 2006. http://www.theses.fr/2006BOR13258.
Texto completoHoarau, Christophe. "Nouvelles approches synthétiques de produits naturels basées sur la réactivité d'aminocarbanions stabilisés". Lille 1, 2000. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2000/50376-2000-354-355.pdf.
New conceptual approaches to the frameworks of alkaloids of the aristolactam family, eupolauramine, the tetrahydroisoindolobenzazepines and the 4,5-dioxoaporphines were then developed. They all rely on exploiting the ylide properties of anions derived from phosphorylated amines, and they share the concomitant formation of an enamide-type unit, acyclic or exocyclic, these reactions falling within the framework of Horner reactions. The use of annulation techniques of various kinds (radical, carbocationic and, finally, photochemical) made it possible to complete the construction of the nitrogen-containing polycyclic aromatic frameworks.
Nguyen, Thi Minh Tam. "Approches basées sur DCA pour la programmation mathématique avec des contraintes d'équilibre". Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0113/document.
In this dissertation, we investigate approaches based on DC (Difference of Convex functions) programming and DCA (DC Algorithm) for mathematical programs with equilibrium constraints. A classical and challenging topic of nonconvex optimization with many important applications, mathematical programming with equilibrium constraints has attracted the attention of many researchers for years. The dissertation consists of four main chapters. Chapter 2 studies a class of mathematical programs with linear complementarity constraints. Using four penalty functions, we reformulate the considered problem as standard DC programs, i.e. minimizing a DC function on a convex set. Appropriate DCA schemes are developed to solve these four DC programs. Two of them are reformulated again as general DC programs (i.e. minimizing a DC function under DC constraints) so that the convex subproblems in DCA are easier to solve. After designing DCA for the considered problem, we show how to develop these DCA schemes for solving the quadratic problem with linear complementarity constraints and the asymmetric eigenvalue complementarity problem. Chapter 3 addresses a class of mathematical programs with variational inequality constraints. We use a penalty technique to recast the considered problem as a DC program. A variant of DCA and its accelerated version are proposed to solve this DC program. As an application, we tackle the second-best toll pricing problem with fixed demands. Chapter 4 focuses on a class of bilevel optimization problems with binary upper-level variables. Using an exact penalty function, we express the bilevel problem as a standard DC program for which an efficient DCA scheme is developed. We apply the proposed algorithm to solve a maximum-flow network interdiction problem. In Chapter 5, we are interested in the continuous equilibrium network design problem.
It was formulated as a Mathematical Program with Complementarity Constraints (MPCC). We reformulate this MPCC problem as a general DC program and then propose a suitable DCA scheme for the resulting problem.
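The basic DCA iteration underlying these chapters can be shown on a one-dimensional toy problem. This is a hedged sketch, not the thesis's schemes: the nonconvex function and its DC decomposition are chosen so the convex subproblem has a closed-form solution.

```python
# Minimal DCA sketch: minimize the nonconvex f(x) = x^4 - 2x^2, written as a
# DC decomposition f = g - h with g(x) = x^4 and h(x) = 2x^2, both convex.
# DCA linearizes h at the current point and minimizes the convex remainder.
def dca(x0, iters=100):
    x = x0
    for _ in range(iters):
        y = 4 * x                      # y_k = h'(x_k), (sub)gradient of h
        # Convex subproblem: x_{k+1} = argmin_x g(x) - y_k * x,
        # solved in closed form here from the optimality condition 4x^3 = y_k.
        x = (abs(y) / 4) ** (1 / 3) * (1 if y >= 0 else -1)
    return x

x_star = dca(2.0)
print(round(x_star, 6))   # converges to the stationary point x = 1, where f = -1
```

Starting from x = -2 the same iteration converges to -1; DCA finds a critical point of the DC function, which for this toy problem is also a global minimizer.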
Allouche, Benyamine. "Modélisation et commande des robots : nouvelles approches basées sur les modèles Takagi-Sugeno". Thesis, Valenciennes, 2016. http://www.theses.fr/2016VALE0021/document.
Every year, more than 5 million people worldwide become hemiplegic as a direct consequence of stroke. This neurological deficiency often leads to a partial or total loss of the ability to stand up and/or to walk. In order to propose new assistive solutions lying between the wheelchair and the walker, this thesis is part of the ANR TECSAN project VHIPOD, a "self-balanced transporter for disabled persons with sit-to-stand function". In this context, this research provides answers to two key issues of the project: sit-to-stand (STS) assistance for hemiplegic people, and their mobility through a two-wheeled self-balanced solution. These issues are addressed from a robotics point of view while focusing on a key question: can we extend the use of the Takagi-Sugeno (TS) approach to the control of complex systems? Firstly, the mobility of disabled persons was treated on the basis of a self-balanced solution. Control laws based on the standard and descriptor TS approaches were proposed for the stabilization of the gyropod in particular situations, such as moving along a slope or crossing small steps. The results led to the design of a two-wheeled transporter that is potentially able to handle steps. On the other hand, these results also highlighted the main challenge in using the TS approach: the conservatism of the LMI (Linear Matrix Inequality) constraints. Secondly, a test bench for STS assistance based on a parallel kinematic manipulator (PKM) was designed. This kind of manipulator, characterized by several closed kinematic chains, often has a complex dynamical model (given as a set of ordinary differential equations, ODEs). The application of control laws based on the TS approach is often doomed to failure given the large number of nonlinear terms in the model. To overcome this problem, a new modeling approach was proposed.
From a particular set of coordinates, the principle of virtual power was used to generate a dynamical model based on differential algebraic equations (DAEs). This approach leads to a quasi-LPV model in which the only varying parameters are the Lagrange multipliers derived from the constraint equations of the DAE model. The results were validated in simulation on a 2-DOF (degrees of freedom) parallel robot (Biglide) and a 3-DOF manipulator (Triglide) designed for STS assistance.
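The sector-nonlinearity construction that yields such TS models can be sketched on a scalar toy system. This is an invented example, not the thesis's robot models: on the region |x| <= 1 the nonlinear system x' = -(1 + x^2)x is rewritten exactly as a convex blend of two linear submodels.

```python
# Takagi-Sugeno sector-nonlinearity sketch: the premise variable z = x^2
# varies in [0, 1], giving local models A1 = -1 (at z = 0) and A2 = -2 (at z = 1).
def ts_derivative(x):
    z = x * x                 # premise variable, z in [0, 1] on this region
    mu1, mu2 = 1.0 - z, z     # convex membership weights, mu1 + mu2 = 1
    A1, A2 = -1.0, -2.0       # local linear models
    return (mu1 * A1 + mu2 * A2) * x

def true_derivative(x):
    return -(1.0 + x * x) * x

# Simulate both with explicit Euler: inside the modeled region the TS blend
# reproduces the nonlinear dynamics exactly (up to floating-point error).
dt, x_ts, x_nl = 0.01, 0.8, 0.8
for _ in range(500):
    x_ts += dt * ts_derivative(x_ts)
    x_nl += dt * true_derivative(x_nl)
print(x_ts, x_nl)
```

The payoff, as in the thesis, is that control design for the blended linear submodels can proceed via LMI machinery even though the original dynamics are nonlinear.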
Plot, Alexandre. "Approches numériques de conception CEM de cartes électroniques basées sur les techniques d'apprentissage". Electronic Thesis or Diss., Rennes, INSA, 2024. http://www.theses.fr/2024ISAR0001.
The constant evolution of electronic system technologies presents a challenge in terms of electromagnetic compatibility (EMC) performance. The need to design equipment compliant with EMC standards from the first iteration requires considering EMC in the early stages of design. This thesis focuses on the EMC design of electronic boards from the perspective of surrogate modeling. The use of such a technique, substituting for a costly-to-calculate physical model, poses a triple challenge: choosing an appropriate method, determining the amount of learning data, and addressing the limits imposed by increasing dimension (the number of design variables). The thesis proposes a comprehensive methodology addressing the training of a reliable metamodel and the challenge of EMC analysis of electronic boards. A systematic learning process is established, based on identifying significant variables and competing multiple metamodels in an iterative learning process. The metamodel is then used as a parametric model of the printed circuit board, able to compute characteristic EMC observables. Conducting sensitivity and criticality analyses of printed circuit parameters helps establish routing rules favoring a healthier EMC design of the board. Several scenarios are studied to validate the learning method and confirm the relevance of the established design rules.
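The surrogate-modeling idea can be sketched far more simply than the thesis's methodology: below, a quadratic polynomial metamodel is fitted by least squares to a few samples of a stand-in "costly" model and then queried in its place. The functions and sample points are invented.

```python
# Hedged surrogate-modeling sketch: replace an expensive model by a cheap
# polynomial fit. `costly_model` is a placeholder, not a real EMC simulation.
def costly_model(x):
    return (x - 1.0) ** 2 + 0.5

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5]          # design of experiments
ys = [costly_model(x) for x in xs]

def fit_quadratic(xs, ys):
    """Least-squares fit of y ~ a + b*x + c*x^2 via the normal equations."""
    A = [[sum(x ** (i + j) for x in xs) for j in range(3)] for i in range(3)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(3)]
    for col in range(3):                      # Gaussian elimination with pivoting
        piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, 3):
            f = A[r][col] / A[col][col]
            A[r] = [ar - f * ac for ar, ac in zip(A[r], A[col])]
            b[r] -= f * b[col]
    coef = [0.0, 0.0, 0.0]
    for i in (2, 1, 0):                       # back substitution
        coef[i] = (b[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
    return coef

a, b_, c = fit_quadratic(xs, ys)
surrogate = lambda x: a + b_ * x + c * x * x
print(surrogate(0.75), costly_model(0.75))
```

Since the stand-in model is itself quadratic, the surrogate reproduces it exactly; for real EMC observables the fit error, the amount of training data and the model competition are precisely the issues the thesis studies.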
Dabladji, Rima. "Classification du cancer du sein par des approches basées sur les systèmes immunitaires artificiels". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLE026/document.
Breast cancer ranks first worldwide in both incidence and mortality among cancer localizations in women. Despite the significant progress made in recent decades to improve the management of this type of cancer, more accurate diagnostic tools are still needed to help experts fight this deadly disease. In this context, considerable research has been carried out to bring new perspectives to the improvement of breast cancer diagnosis by developing Computer-Aided Diagnosis (CAD) systems. Many works have addressed the detection of cancerous tissues in the breast and tumor classification using artificial-intelligence tools, often inspired by natural systems. Artificial Immune Systems (AIS) are one such research field, bridging immunology, computer science and engineering. The main developments in artificial immune systems have focused on three immunological theories: clonal selection, immune networks and negative selection. In this work we focus on the use of clonal selection algorithms for the classification of breast cells as benign or malignant. These approaches are generally based on two main processes: recognition of the antigen's shape, and selection of the memory cell specific to it. The established idea is that only memory cells capable of recognizing the antigen are selected for cloning and mutation. After introducing the principle of these algorithms, we study their performance through various approaches. First, we focus on improving CLONALG, a basic algorithm in the field of artificial clonal selection. To enhance its learning, with better initialization and controlled diversity, three methods are proposed: the Median Filter Clonal ALGorithm (MF-CLONALG), the Average Cells Clonal ALGorithm (AC-CLONALG) and Validity Interval Clonal Selection (VI-CS).
However, although successful, these approaches require significant computing time. In this context, the second proposed approach aims at reducing the computational cost of these algorithms (and of AIS in general) without affecting their performance. The Local Database Categorization Artificial Immune System (LDC-AIS) algorithm uses K-means clustering for local data categorization and an RBF neural network for learning the categories, in order to accelerate the selection process. The last part of the thesis is dedicated to multimodal optimization. After having presented clonal selection algorithms as competitive tools for pattern recognition and classification, we explore this concept to demonstrate the benefits of the cloning and mutation operators in the framework of function optimization. In response to some drawbacks of the MLP (Multi-Layer Perceptron) neural network, a multi-stage optimization procedure is proposed in which back-propagation is assisted by cloning and mutation processes, for fast and accurate convergence of the MLP. Being close to evolutionary techniques, the Multi-Layer Perceptron based Clonal Selection approach (MLP-CS) is compared to an MLP optimized by a genetic algorithm. Each of the approaches proposed in this work is tested and compared to previous works using two breast cancer databases: the Wisconsin Diagnostic Breast Cancer (WDBC) database and the Digital Database for Screening Mammography (DDSM).
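A minimal CLONALG-style loop (a sketch only, not MF-CLONALG, AC-CLONALG or VI-CS) illustrates the cloning-and-mutation principle described above on a made-up one-dimensional objective:

```python
import random

random.seed(0)

# Antibodies are candidate solutions; affinity is the objective to maximize.
def affinity(x):
    return -(x - 3.0) ** 2          # toy objective, maximum at x = 3

pop = [random.uniform(-10.0, 10.0) for _ in range(20)]
for gen in range(100):
    pop.sort(key=affinity, reverse=True)
    selected = pop[:5]                         # clonal selection of the best cells
    clones = []
    for rank, cell in enumerate(selected):
        n_clones = 6 - rank                    # more clones for higher-affinity cells
        for _ in range(n_clones):
            step = 0.5 * (rank + 1) / 5        # lower-affinity cells mutate more
            clones.append(cell + random.gauss(0.0, step))
    # Keep the best of parents and mutated clones (elitist replacement).
    pop = sorted(selected + clones, key=affinity, reverse=True)[:20]

best = max(pop, key=affinity)
print(best)
```

The proportional cloning and affinity-dependent hypermutation are the two operators the thesis's variants refine; here they simply drive the population toward the optimum at x = 3.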
Bureau, Louis 1961. "Ordinateur et applications créatives en musique : étude de deux approches basées sur le logiciel Tuneblocks". Thesis, McGill University, 1995. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=22568.
Results revealed that the type of task does not affect the process-product orientation adopted by children. Pedagogical implications explore the findings in terms of practical applications of both approaches.
Robaczewska, Magdalena. "Développement de nouvelles approches antivirales des hépatites B chroniques basées sur l'utilisation des oligonucléotides antisens". Lyon 1, 2002. http://www.theses.fr/2002LYO10031.
Khelif, Racha. "Estimation du RUL par des approches basées sur l'expérience : de la donnée vers la connaissance". Thesis, Besançon, 2015. http://www.theses.fr/2015BESA2019/document.
Our thesis work is concerned with the development of experience-based approaches for critical-component prognostics and Remaining Useful Life (RUL) estimation. This choice allows us to avoid the problematic issue of setting a failure threshold. Our work is based on Case-Based Reasoning (CBR) to track the health status of a new component and predict its RUL. An Instance-Based Learning (IBL) approach was first developed, offering two experience formalizations. The first is a supervised method that takes into account the status of the component and produces health indicators. The second is an unsupervised method that fuses the sensory data into degradation trajectories. The approach was then evolved by integrating knowledge. Knowledge is extracted from the sensory data and is of two types: temporal, which completes the modeling of instances, and frequential, which, along with the similarity measure, refines the retrieval phase. The latter is based on two similarity measures: a weighted similarity between fixed parallel windows, and a weighted similarity with temporal projection through sliding windows, which allows identification of the actual health status. Another data-driven technique was also tested, built from features extracted from the experiences, which can be either mono- or multi-dimensional. These features are modeled by a Support Vector Regression (SVR) algorithm. The developed approaches were assessed on two types of critical components: turbofans and Li-ion batteries. The obtained results are interesting, but they depend on the type of data treated.
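The retrieval step of such an experience-based approach can be sketched as follows. The trajectories, window length and weights below are invented for illustration; the thesis's formalizations are considerably richer.

```python
# Hedged CBR/RUL sketch: past cases are degradation trajectories run to
# failure; a new component's recent health-indicator window is matched
# against sliding windows of each case, and the remaining life of the
# best-matching case at the matched position is reused as the RUL estimate.
cases = {
    "unit_a": [1.0 - 0.02 * t - 0.0005 * t * t for t in range(40)],
    "unit_b": [1.0 - 0.01 * t - 0.0010 * t * t for t in range(30)],
}

def window_distance(w1, w2, weights):
    return sum(wt * (a - b) ** 2 for wt, a, b in zip(weights, w1, w2)) ** 0.5

def estimate_rul(query, win=5):
    """Match the query's last `win` observations against every stored case."""
    qw = query[-win:]
    weights = [1.0 + i for i in range(win)]    # recent points weigh more
    best = None
    for name, traj in cases.items():
        for start in range(len(traj) - win + 1):
            d = window_distance(traj[start:start + win], qw, weights)
            rul = len(traj) - (start + win)    # cycles left until failure
            if best is None or d < best[0]:
                best = (d, name, rul)
    return best

# New component observed for 20 cycles, degrading like unit_a:
query = [1.0 - 0.02 * t - 0.0005 * t * t for t in range(20)]
dist, matched, rul = estimate_rul(query)
print(matched, rul)   # → unit_a 20
```

Because the retrieved case ran to failure, no explicit failure threshold is needed, which is exactly the motivation stated in the abstract.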
Benyoucef, Rayane. "Contribution à la cinématique à partir du mouvement : approches basées sur des observateurs non linéaires". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG091.
Texto completoThis manuscript outlines the various scientific contributions during this thesis, with the objective of solving the problem of the 3D reconstructure Structure from Motion (SfM) through the synthesis of dynamic models and observation algorithms for thereconstruction of unmeasured variables. Undoubtedly, vision sensors are the most important source providing rich information about the environment and that's where the scene structure estimation begins to find professional uses. There has been significant interest in recovering the 3D structure of a scene from 2D pixels considering a sequence of images. Several solutions have been proposed in the literature for this issue. The objective of the classic “structure from motion (SfM)” problem is to estimate the Euclidean coordinates of tracked feature points attached to an object (i.e., 3-D structure) provided the relative motion between the camera and the object is known. It isthoroughly addressed in literature that the camera translational motion has a strong effect on the performance of 3D structure estimation. For eye-in-hand cameras carried by robot manipulators fixed to the ground, it is straightforward to measure linear velocity. However when dealing with cameras riding on mobile robots, some practical issues may occur. In this context, particular attention has been devoted to estimating the camera linear velocities. Hence another objective of this thesis is to find strategies to develop deterministic nonlinear observers to address the SfM problem and obtain a reliable estimation of the unknown time-varying linear velocities used as input of the estimator scheme which is an implicit requirement for most of active vision. The techniques used are based on Takagi-Sugeno models and Lyapunov-based analysis. 
In the first part, a solution to the SfM problem is presented, where the main purpose is to identify the 3D structure of a tracked feature point observed by a moving calibrated camera, assuming precise knowledge of the camera linear and angular velocities. This work is then extended to investigate depth reconstruction with a partially calibrated camera. In the second part, the thesis focuses on jointly identifying the 3D structure of a tracked feature point and the camera linear velocity. First, the presented strategy aims to recover the 3D information and a partial estimate of the camera linear velocity. Then, the thesis introduces structure estimation together with full estimation of the camera linear velocity, where the proposed observer only requires the camera angular velocity and the corresponding linear and angular accelerations. The performance of each observer is demonstrated through a series of simulations covering several scenarios, as well as experimental tests.
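The projection kinematics at the heart of such SfM observers can be illustrated numerically: for a static point seen by a moving camera, the optical flow of the normalized image coordinates is linear in the inverse depth once the (known) linear and angular velocities are factored out, so depth can be recovered from a single flow measurement. A minimal sketch of this relation (not the thesis's Takagi-Sugeno observer; the motion values and the finite-difference "measurement" are illustrative assumptions):

```python
import numpy as np

def point_dynamics(P, v, w):
    # Static world point seen from a moving camera: Pdot = -w x P - v
    return -np.cross(w, P) - v

def project(P):
    return P[:2] / P[2]  # normalized image coordinates (x, y)

# Arbitrary (assumed) feature point and camera motion
P = np.array([0.2, -0.1, 2.0])       # true depth Z = 2.0
v = np.array([0.10, 0.05, -0.02])    # linear velocity (assumed known)
w = np.array([0.01, -0.02, 0.03])    # angular velocity (assumed known)

# "Measured" optical flow via a small finite-difference step
dt = 1e-6
m0 = project(P)
m1 = project(P + dt * point_dynamics(P, v, w))
flow = (m1 - m0) / dt

# Flow model: flow = A(m) * (1/Z) + b(m, w), linear in the inverse depth
x, y = m0
A = np.array([x * v[2] - v[0],
              y * v[2] - v[1]])
b = np.array([x * y * w[0] - (1 + x**2) * w[1] + y * w[2],
              (1 + y**2) * w[0] - x * y * w[1] - x * w[2]])
inv_depth, *_ = np.linalg.lstsq(A.reshape(-1, 1), flow - b, rcond=None)
print(abs(1.0 / inv_depth[0] - P[2]))  # recovered depth matches Z = 2.0
```

The least-squares step degenerates when the translation-dependent terms vanish, which mirrors the remark above that the camera translational motion strongly conditions the quality of 3D structure estimation.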
Bengharbi, Amar. "Contribution au test et diagnostic des circuits analogiques par des approches basées sur des techniques neuronales". Paris 12, 1997. http://www.theses.fr/1997PA120083.
Bousquet, Cédric. "Raisonnement terminologique et fouille de données en pharmacovigilance : de nouvelles approches basées sur la terminologie MedDRA". Paris 6, 2004. http://www.theses.fr/2004PA066024.
Beatrix, Christopher. "Justifications dans les approches ASP basées sur les règles : application au backjumping dans le solveur ASPeRiX". Thesis, Angers, 2016. http://www.theses.fr/2016ANGE0026/document.
Answer set programming (ASP) is a formalism for representing knowledge in Artificial Intelligence by means of a first-order logic program that may contain default negations. In recent years, several efficient solvers have been proposed to compute the solutions of an ASP program, called answer sets. We are particularly interested in the ASPeRiX solver, which instantiates first-order rules on the fly during the computation of answer sets, applying forward chaining of rules from previously determined literals. The study of this solver leads us to consider the concept of justification in a rule-based approach for computing answer sets. Justifications make it possible to explain why some properties are true or false. Among them, we focus particularly on failure reasons, which justify why some branches of the search tree do not result in an answer set. This motivates us to implement a version of ASPeRiX with backjumping, which uses the information provided by the failure reasons to jump back directly to the last choice point related to the failure in the search tree.
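The role of failure reasons in backjumping can be illustrated on a toy search: when every value of a variable fails, the union of the responsible constraint scopes (minus the variable itself) is the failure reason, and the search can jump directly over choice points that do not appear in it. The sketch below uses a generic boolean constraint search, not ASPeRiX's rule-based computation; the variables and constraints are hypothetical:

```python
# Variables are assigned in a fixed order; a violated constraint yields a
# "failure reason": the set of variables it involves. Backjumping returns
# to the deepest variable of the reason instead of the last choice point.
ORDER = ["a", "b", "c", "d"]
CONSTRAINTS = [  # (scope, predicate) -- hypothetical toy constraints
    ({"a", "d"}, lambda s: s["a"] != s["d"]),
    ({"a", "d"}, lambda s: s["a"] < s["d"]),
]

def solve(level=0, assignment=None):
    assignment = {} if assignment is None else assignment
    if level == len(ORDER):
        return dict(assignment), None     # all variables assigned
    var = ORDER[level]
    conflict = set()
    for value in (1, 0):
        assignment[var] = value
        reason = next((scope for scope, pred in CONSTRAINTS
                       if scope <= assignment.keys() and not pred(assignment)),
                      None)
        if reason is None:
            result, deeper = solve(level + 1, assignment)
            if result is not None:
                return result, None
            if deeper is not None and var not in deeper:
                del assignment[var]
                return None, deeper       # backjump: skip this level entirely
            conflict |= deeper or set()
        else:
            conflict |= reason            # failure reason of this value
    del assignment[var]
    conflict.discard(var)
    return None, conflict

solution, _ = solve()
print(solution)
```

When `a = 1` is tried first, both values of `d` fail with reason `{a, d}`, so the search jumps over the irrelevant choices for `b` and `c` straight back to `a`, which is the behaviour the failure reasons enable in the solver described above.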
Boumaiza, Ameni. "Reconnaissance et localisation de symboles dans les documents graphiques : approches basées sur le treillis de concepts". Thesis, Université de Lorraine, 2013. http://www.theses.fr/2013LORR0028/document.
Computer vision is a field that includes methods for the acquisition, processing, analysis and understanding of images in order to produce numerical or symbolic information. One line of research contributing to this area aims to replicate the capabilities of human vision in perceiving and understanding images. Our thesis is part of this research axis. We propose several original contributions in the context of graphics recognition and symbol spotting. The originality of the proposed approaches lies in an interesting alliance between Formal Concept Analysis (FCA) and computer vision. We study the FCA field, and more precisely the adaptation of the concept lattice structure and its use as the main tool of our work. The main feature of our work lies in its generic aspect, since the proposed methods can be combined with various other tools while keeping the same strategies and following a similar procedure. Our foray into Formal Concept Analysis, and more precisely our choice of the Galois lattice structure, also called the concept lattice, is motivated by the many advantages this tool offers. The main advantage of the concept lattice is its symbolic aspect: it is a concise, accurate and flexible search space that facilitates decision making. Our contributions fall within the recognition and localization of symbols in graphic documents. We propose to recognize and spot symbols in graphical documents (technical drawings, for example) using the alliance between the bag-of-words representation and the Galois lattice formalism, and we draw on various methods from the computer vision field.
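The central FCA construction can be sketched compactly: given a binary object-attribute context, the two derivation operators (extent and intent) compose into a closure, and the formal concepts, i.e. the nodes of the Galois/concept lattice, are exactly its fixed points. A brute-force sketch on a hypothetical toy context (fine only for tiny contexts; real symbol-spotting contexts need dedicated algorithms such as next-closure):

```python
from itertools import combinations

# Toy formal context (hypothetical): which object has which attribute
OBJECTS = ["circle", "square", "triangle", "arrow"]
ATTRS = ["closed", "polygon", "3-sides", "4-sides"]
INCIDENCE = {
    "circle":   {"closed"},
    "square":   {"closed", "polygon", "4-sides"},
    "triangle": {"closed", "polygon", "3-sides"},
    "arrow":    set(),
}

def extent(attrs):   # objects having all attributes in `attrs`
    return {o for o in OBJECTS if attrs <= INCIDENCE[o]}

def intent(objs):    # attributes shared by all objects in `objs`
    shared = set(ATTRS)
    for o in objs:
        shared &= INCIDENCE[o]
    return shared

# A formal concept is a pair (A, B) with A = extent(B) and B = intent(A);
# enumerate closed attribute sets by brute force over all subsets.
concepts = set()
for r in range(len(ATTRS) + 1):
    for B in combinations(ATTRS, r):
        closure = intent(extent(set(B)))
        concepts.add((frozenset(extent(closure)), frozenset(closure)))

for A, B in sorted(concepts, key=lambda c: -len(c[0])):
    print(sorted(A), sorted(B))
```

The printed pairs, ordered by decreasing extent, are the lattice nodes from the top concept (all objects, no shared attribute) down to the bottom concept (no object, all attributes), which is the search space the thesis exploits for symbol spotting.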
Torregrosa, jordan Sergio. "Approches Hybrides et Méthodes d'Intelligence Artificielle Basées sur la Simulation Numérique pour l'Optimisation des Systèmes Aérodynamiques Complexes". Electronic Thesis or Diss., Paris, HESAM, 2024. http://www.theses.fr/2024HESAE002.
The industrial design of a component is a complex, time-consuming and costly process constrained by precise physical, styling and development specifications dictated by its future conditions and environment of use. Indeed, an industrial component is defined and characterized by many parameters which must be optimized to best satisfy all those specifications. However, the complexity of this multi-parametric constrained optimization problem is such that its analytical resolution is compromised. In the recent past, such problems were solved experimentally, by trial and error, leading to expensive and time-consuming design processes. Since the mid-20th century, with the advancement of and widespread access to increasingly powerful computing technologies, ``virtual twins'', i.e. physics-based numerical simulations, became an essential tool for research and development, significantly diminishing the need for experimental measurements. However, despite the computing power available today, ``virtual twins'' are still limited by the complexity of the problem solved and present significant deviations from reality due to the ignorance of certain underlying physics. Since the late 20th century, the volume of data has surged enormously, spreading massively across most fields and leading to a wide proliferation of Artificial Intelligence (AI) techniques, or ``digital twins'', which partially substitute for ``virtual twins'' thanks to their lower intricacy. Nevertheless, they need an important training stage and can provoke some aversion since they operate as black boxes. Today, these technological evolutions have resulted in a framework where theory, experimentation, simulation and data can interact in synergy and reinforce each other. In this context, Stellantis aims to explore how AI can improve the design process of a complex aerodynamic system: an innovative cockpit air vent.
To this end, the main goal of this thesis is to develop a parametric surrogate of the aerator geometry which outputs the norm of the velocity field at the pilot's face, in order to explore the space of possible geometries while evaluating their performance in real time. The development of such a data-based metamodel entails several conceptual problems which can be addressed with AI. The use of classical regression techniques can lead to unphysical interpolation results in some domains, such as fluid dynamics. Thus, the proposed parametric surrogate is based on Optimal Transport (OT) theory, which offers a mathematical approach to measure distances and interpolate between general objects in a novel way. The success of a data-driven model relies on the quality of the training data. On the one hand, experimental data is considered the most realistic but is extremely costly and time-consuming. On the other hand, numerical simulations are cheaper and faster but present a significant deviation from reality. Therefore, a Hybrid Twin approach based on Optimal Transport theory is proposed in order to bridge the ignorance gap between simulation and measurement. The sampling of training data has become a central workload in the development of a data-based model. Hence, an Active Learning methodology is proposed to iteratively and smartly select the training points, based on the industrial objectives expected from the studied component, in order to minimize the number of needed samples. This sampling strategy thus maximizes the performance of the model while converging to the optimal solution of the industrial problem. The accuracy of a data-based model is usually the main concern of its training process. However, reality is complex and unpredictable, leading to input parameters known only with a certain degree of uncertainty.
Therefore, a data-based Uncertainty Quantification methodology, based on Monte Carlo estimators and OT, is proposed to take into account the propagation of uncertainties through the surrogate and to quantify their impact on its precision.
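The Monte Carlo side of such an uncertainty quantification step can be sketched simply: sample the uncertain input, push each sample through the surrogate, and estimate output statistics from the resulting sample. The surrogate function and the input distribution below are hypothetical stand-ins, not the thesis's OT-based metamodel:

```python
import math
import random
import statistics

# Hypothetical surrogate: maps one geometry parameter to a scalar
# performance value (stand-in for the velocity-norm metamodel)
def surrogate(theta):
    return math.exp(-theta**2) + 0.1 * theta

random.seed(0)
# Input known only up to uncertainty: theta ~ N(0.5, 0.05^2) (assumed)
samples = [surrogate(random.gauss(0.5, 0.05)) for _ in range(20000)]

mean = statistics.fmean(samples)   # Monte Carlo estimate of the output mean
std = statistics.pstdev(samples)   # Monte Carlo estimate of the output spread
print(f"output mean ~ {mean:.3f}, output std ~ {std:.3f}")
```

The estimated spread quantifies how much input uncertainty degrades the surrogate's effective precision; the Monte Carlo error itself shrinks as the inverse square root of the sample count, which is why such studies need many (cheap) surrogate evaluations rather than full simulations.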
Marques, Guillaume. "Problèmes de tournées de véhicules sur deux niveaux pour la logistique urbaine : approches basées sur les méthodes exactes de l'optimisation mathématique". Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0199.
The main focus of this thesis is to develop exact methods, based on mathematical optimization, to solve vehicle routing problems in two-level distribution systems. In such a system, the first level involves trucks that ship goods from a distribution center to intermediate depots called satellites. The second level involves city freighters that are loaded with goods at satellites and deliver to the customers. Each customer must be visited exactly once. The two-echelon vehicle routing problem seeks to minimize the total transportation cost in such a distribution system. The first chapter gives an overview of the branch-and-cut-and-price framework that we use throughout the thesis. The second chapter tackles the Two-Echelon Capacitated Vehicle Routing Problem. We introduce a new route-based formulation for the problem which does not use variables to determine product flows in satellites. We propose a new branching strategy which significantly decreases the size of the branch-and-bound tree. Most importantly, we suggest a new family of satellite supply inequalities, and we empirically show that they improve the quality of the dual bound at the root node of the branch-and-bound tree. Experiments reveal that our algorithm can solve all literature instances with up to 200 customers and 10 satellites; we thus double the size of instances that can be solved to optimality. The third chapter tackles the Two-Echelon Vehicle Routing Problem with Time Windows. We consider the variant with precedence constraints at the satellites: products should be delivered by an urban truck to a satellite before being loaded onto a city freighter. This is a relaxation of the synchronisation variant usually considered in the literature. We consider single-trip and multi-trip variants of this problem. In the first, city freighters start from satellites and perform a single trip. In the second, city freighters start from a depot, load products at satellites, and perform several trips.
We introduce a route-based formulation that involves an exponential number of constraints to ensure precedence relations. A minimum-cut based algorithm is proposed to separate these constraints. We also show how these constraints can be taken into account in the pricing problem of the column generation approach. Experiments show that our algorithm can solve instances with up to 100 customers to optimality. The algorithm significantly outperforms another recent approach proposed in the literature for the single-trip variant of the problem. Moreover, the “precedence relaxation” is exact for single-trip instances. The fourth chapter considers vehicle routing problems with knapsack-type constraints in the master problem. For these problems, we introduce new route load knapsack cuts together with separation routines for them. We use these cuts to solve three problems to optimality: the Capacitated Vehicle Routing Problem with Capacitated Multiple Depots, the standard Location-Routing Problem, and the Vehicle Routing Problem with Time Windows and Shifts. These problems arise when the routes at the first level of the two-level distribution system are fixed. Our experiments reveal the computational advantage of our algorithms over those from the literature.
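The pricing step of such a column generation approach can be illustrated on a toy instance: given dual values from the master problem, pricing searches for a route whose reduced cost (routing cost minus collected duals) is negative, i.e. a column that can improve the master LP. The sketch enumerates routes by brute force over three customers with made-up distances and duals; real branch-and-cut-and-price codes solve this pricing problem with labeling algorithms instead:

```python
from itertools import permutations

# Toy pricing data (illustrative numbers): node 0 is the depot/satellite
DIST = {  # symmetric distances
    (0, 1): 4, (0, 2): 5, (0, 3): 3,
    (1, 2): 2, (1, 3): 6, (2, 3): 4,
}
DUALS = {1: 5.0, 2: 6.0, 3: 4.0}   # hypothetical master-problem duals

def d(i, j):
    return DIST[(min(i, j), max(i, j))]

def route_cost(route):
    stops = (0, *route, 0)          # depot -> customers -> depot
    return sum(d(a, b) for a, b in zip(stops, stops[1:]))

# Brute-force pricing: minimize reduced cost over all customer sequences
best = None
for r in range(1, len(DUALS) + 1):
    for route in permutations(DUALS, r):
        reduced = route_cost(route) - sum(DUALS[c] for c in route)
        if best is None or reduced < best[0]:
            best = (reduced, route)
print(best)  # (-2.0, (1, 2, 3)): negative reduced cost, add the column
```

When no route with negative reduced cost remains, the master LP is optimal over all columns, which is the termination condition of the column generation loops used throughout the thesis.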
Lainé, Régis. "Contribution à la conception de cellules robotisées : une approche basée sur les algorithmes génétiques". Poitiers, 2004. http://www.theses.fr/2004POIT2323.
This thesis deals with the kinematic synthesis problem for industrial robots. It presents an optimization method based on a genetic algorithm, which led to the implementation of a design module in the CAD-robotics system SMAR developed at the Laboratoire de Mécanique des Solides. This method automatically determines well-adapted geometric parameters of a robot (kinematic architecture, dimensions and robot placement) for a given task and objective function. The objective function depends on three homogenized coefficients which measure: the distance between the joint values and the joint limits of the robot, the volume of the configuration space necessary to achieve the task, and the robot size. The geometric parameters are determined so that the task is continuously accessible without collision with the environment. Several examples illustrate the proposed kinematic synthesis method; the results show that it is effective, robust and fast.
Lisi, Samuele. "Approches innovantes basées sur la Résonance des Plasmons de Surface pour le diagnostic biomoléculaire de la maladie d’Alzheimer". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAV003/document.
Alzheimer's disease (AD) is a widespread pathogenic condition causing memory and behavior impairment, mostly in the elderly, because of the accumulation of amyloid beta peptide and tau protein in the human brain. Current therapeutic approaches, based on the amyloid hypothesis, are unable to arrest the progression of the disease, hence early diagnosis is crucial for effective intervention. Based on the updated criteria for probable AD diagnosis, and considering the limits associated with current analytical techniques, my work in this thesis was dedicated to developing novel strategies for AD diagnosis. The whole project focused on the analysis of tau protein by Surface Plasmon Resonance (SPR) biosensing. This protein is well known for being a relevant neurodegenerative marker; in particular, if the measurement of tau is combined with that of the amyloid beta peptide and of phosphorylated tau, the clinical specificity of this protein becomes significant for detecting Alzheimer's. Two aspects were studied. First, an immunosensor was developed taking advantage of the well-established antigen-antibody interaction. After characterization of the analytical parameters of the direct assay (with a primary antibody), a sandwich assay (using two monoclonal antibodies mapping different epitopes of the analyte, i.e. tau protein) was developed, achieving a very low limit of detection in artificial Cerebrospinal Fluid (aCSF); in particular, Carbon Nanotubes (CNTs) were used to enhance the analytical signal. Secondly, the research focused on the selection of aptamers for tau. To this aim, two SELEX (Systematic Evolution of Ligands by EXponential enrichment) methods were compared, both based on Capillary Electrophoresis (CE) for the partitioning step of the process. While with CE-SELEX (the first method) no significant affinity improvement was measured, with CE-Non-SELEX (the second method) the affinity of the DNA library for tau protein was consistently improved.
After isolation of a limited population of aptamer candidates, five sequences were chosen and analyzed for their affinity for the target. Fluorescence Anisotropy (FA) measurements and SPR highlight similar behavior for the selected sequences, even though the detection principles of these techniques are significantly different. In conclusion, the work highlights the versatility of SPR technology, used both for quantitative analysis and for characterizing the newly selected aptamers in terms of affinity for the analyte tau. This versatility is of great interest in a rapidly expanding field such as AD. Lowering total tau levels has recently been identified as a new goal for therapy, so many drug candidates are likely to be tested in the near future. SPR technology is already widely used in the pharmaceutical industry to investigate novel molecules, since it gives access to a large panel of information. In this panorama, aptamer technology may improve the overall quality of the analytical data, allowing better comparison among drug candidates. With respect to these receptors, the thesis opens the door to new studies of DNA aptamers recognizing tau, with considerable advantages in terms of receptor stability. Moreover, the full potential of the DNA aptamers selected in this work still remains to be explored: new selection methodologies, combined with the fast progression of bioinformatics tools, might give rise to affinity improvements, which will lead to improved sensitivity for tau detection in the next few years.
Besancon, Marjorie. "Immunothérapie non-spécifique du cancer de la vessie : développement de nouvelles approches basées sur la combinaison d'agents thérapeutiques". Doctoral thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/27996.
Bladder cancer (BCa) is the ninth most common cancer in the world, with 430,000 new cases diagnosed in 2014. Muscle-invasive bladder cancer represents about 25% of bladder tumors, while non-muscle-invasive bladder cancer represents about 75% of these tumors. The latter are usually associated with a favorable prognosis but are characterized by a high rate of recurrence and progression, while the former are aggressive from the onset and are at high risk of progression toward advanced disease. Among the various therapies available for the management of bladder tumors is non-specific immunotherapy, using bacillus Calmette-Guerin (BCG) for the treatment of non-muscle-invasive bladder tumors and, more recently, immune checkpoint (IC) inhibitors for the treatment of advanced bladder tumors. BCa is one of the rare cancers to respond well to immunotherapy, but these treatments nevertheless remain suboptimal. The main objective of my project was to develop new immunotherapeutic approaches to fight BCa more efficiently. To achieve this, three complementary approaches were investigated in murine BCa models. We first assessed in vitro and in vivo the potential of poly(I:C), a TLR3 agonist, used alone or in combination with BCG. While poly(I:C) induced anti-proliferative and apoptotic effects on BCa cells in vitro, the combination of poly(I:C) and BCG induced complete tumor regression in vivo in 28% of treated mice. Then, we evaluated the potential of two combinations of IC inhibitors in two murine BCa models. The first combination studied was the simultaneous blockade of PD-1 and TIM-3, tested in the MBT-2 model because the characterization of MBT-2 tumors showed that these two receptors were among the IC frequently expressed in these tumors.
In vivo blockade of these pathways revealed that, in MBT-2 tumors, PD-1 is associated with pro-tumoral activity whereas TIM-3 is associated with anti-tumoral activity, revealing opposite functions of these IC in these tumors. The second combination studied was PD-1 and LAG-3, tested in the MB49 BCa model. The characterization of MB49 tumors showed that PD-1 and LAG-3 were important IC in these tumors. The in vivo study showed that the simultaneous blockade of PD-1 and LAG-3 increased the survival rate: 67% of mice showed complete tumor regression, while the survival rates were 33% and 0% when anti-PD-1 and anti-LAG-3, respectively, were used as monotherapy. Finally, since androgens seem to play an important role in BCa, we tested an approach combining the inhibition of PD-1 and of the androgen receptor (AR). We showed that enzalutamide and seviteronel, two second-generation antiandrogens, decreased the proliferation of human and murine BCa cells in vitro. In vivo, the combination of enzalutamide with anti-PD-1 yielded a 66% overall survival rate, much higher than the 16% rate observed when enzalutamide or anti-PD-1 were used alone. These studies allowed us to identify various possible ways to increase the anti-tumor immune response that could be tested in clinical trials. They also show that combining therapeutic approaches is very promising for the future of BCa immunotherapy.
Esculier, Jean-François. "Le syndrome fémoropatellaire chez les coureurs : effets de différentes approches de réadaptation basées sur les mécanismes sous-jacents". Doctoral thesis, Université Laval, 2017. http://hdl.handle.net/20.500.11794/27727.
Sixty-nine runners with patellofemoral pain (PFP) took part in a baseline assessment during which their level of symptoms and function, lower-limb strength, running shoes, running kinematics and kinetics, as well as radiological outcomes, were evaluated. During the habitual running pattern, the use of footwear showing a greater level of minimalism, as indicated by the Minimalist Index, was associated with outcomes that have previously been suggested to treat PFP in runners, namely a greater step rate, a lower foot inclination angle and lower patellofemoral joint kinetics. Running gait modifications, which were linked with decreased patellofemoral joint kinetics, were efficient in immediately decreasing symptoms in a subset of runners. In rearfoot strikers, the most efficient modifications were increasing step rate, forefoot striking and running softer, while non-rearfoot strikers benefited from increasing step rate and forefoot striking. Runners were then assigned to one of three 8-week rehabilitation programs: (1) education on training modifications based on symptoms; (2) an exercise program in addition to the education component; (3) gait retraining in addition to the education component. All programs led to significant improvements in the levels of symptoms and functional limitations. Even though gait retraining and exercises improved their targeted mechanisms, their addition to education did not provide additional benefits. Globally, 78% of runners reached clinical success at the follow-up 3 months after the end of the program. A combination of the level of symptoms and function, knee extensor strength and patellar tendon integrity best predicted the clinical success of runners with PFP following an intervention focused on education.
Gan, Changquan. "Une approche de classification non supervisée basée sur la notion des K plus proches voisins". Compiègne, 1994. http://www.theses.fr/1994COMP765S.
Embe Jiague, Michel. "Approches formelles de mise en oeuvre de politiques de contrôle d'accès pour des applications basées sur une architecture orientée services". Thèse, Université de Sherbrooke, 2012. http://hdl.handle.net/11143/6683.
Mazzuca, Lidia. "Préparation, détermination de la structure et des propriétés physiques de composés moléculaires basées sur le formiate". Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAY004.
The synthesis and characterization of new materials are key challenges in chemistry and physics. In particular, metal-organic frameworks (MOFs) with two or more coupled functionalities are still rare and very attractive candidates because of their wide variety of properties and promising applications of interest to many disciplines. The impact of the development of new fascinating materials on our daily life might be considerable; this is also one of the reasons for the intense increase of research in materials science and condensed matter. This thesis is focused on the synthesis and physical characterization of magnetic metal formate frameworks using a combination of neutron and synchrotron X-ray diffraction as well as other techniques. Metal formate frameworks are a specific subgroup of metal-organic frameworks, where the metal centres are linked by formate molecules to form an anionic framework. The negative charge of the framework is balanced by a counter-cation inside the framework's cavities, which can be, for example, a protonated amine. Typically, these compounds are synthesized by reacting formate or formic acid with a metal salt under solvothermal conditions, or by slow evaporation or diffusion techniques. In this work, I investigated the crystal structure, phase transitions and magnetic properties of two families of metal formate frameworks: hetero-metallic or mixed-valence compounds adopting a niccolite-like structure, and homo-metallic compounds adopting a perovskite-like (ABX3) structure. Altogether, the following compounds were synthesized and characterized: [(CH3)2NH3][FeIIIMII(HCOO)6] (M = Mg, Mn, Fe, Co, Ni), [(CH3)2NH3][FeIIIFeII(HCOO)6], [(CH3NH3)[M(HCOO)3] (M = Co, Mn, Fe, Ni, Cu), and [(NH4)[Mn(HCOO)3]. The choice of specific metal ions was motivated by their different electronic configurations and therefore different physical behaviours, i.e.
a large difference in magnetic anisotropy is well known among the different divalent ions used in this study. Besides the effects on the properties when different divalent metal ions were introduced within the framework, the effects of the nature of the counter-ions were investigated. Even though there is no clear correlation between the selected counter-ion and the change of magnetic behaviour, neutron diffraction allows elucidating the differences in the nuclear and magnetic structures when different counter-ions are used. Moreover, this work allows us to compare our neutron results with magnetometry measurements, a complementary technique. A variety of magnetic phenomena, such as ferromagnetic behaviour, antiferromagnetic ordering and spin canting, were observed in the compounds studied. Furthermore, from the nuclear structure point of view, many different kinds of phase transitions were detected, involving for instance the order-disorder of the counter-ion employed (in [(CH3)2NH3][FeIIIMII(HCOO)6], for example), or the transition from a commensurate to an incommensurate phase giving rise to a modulation of the structure (in [(CH3NH3)[Co(HCOO)3], for example).
Tchekiken, El Hadi. "Estimation de l'orientation 3D du cœur : une approche basée sur l'analyse de la structure spatio-temporelle de l'électrocardiogramme orthogonal". Lyon, INSA, 1997. http://www.theses.fr/1997ISAL0008.
This work concerns the estimation of the 3D orientation of the heart from the orthogonal electrocardiogram. The long-term objective is to develop a spatio-temporal ECG-gated echocardiography technique for the acquisition of serial but comparable images during a stress echocardiography test. We first propose a method for determining the position of the successive spatio-temporal QRS and T loops with respect to the body coordinate system. We determine the inertia axes, which define a new coordinate system closely related to the loop's preferential space. We then study the spatial and structural reproducibility of the ECG loops with respect to body posture, respiration and stress. The results obtained confirm the beat-to-beat reproducibility and evidence the correlation between the position of the heart and the spatial position of the ECG loops. Starting from the hypothesis that the QRS and T loops are tied to the heart, we developed a method to estimate the orientation of any anatomic axis of the heart. For the evaluation, we propose two approaches which allow the confrontation of the electric and anatomic positions of the heart: the first is based on the spatial determination of echocardiographic four-chamber planes, and the second on 3D MRI of the thorax and heart. The results show a correlation between the electric and anatomic positions of the heart. We also show that our method is suitable for estimating beat-to-beat changes of the orientation of the heart. Hence, the development of a spatio-temporal ECG-gated echocardiography technique can be considered.
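The inertia axes of a spatio-temporal ECG loop can be computed as the eigenvectors of the covariance matrix of the loop's 3D points, i.e. a principal component analysis: the largest-eigenvalue axis gives the loop's main direction and the smallest gives its normal. A sketch on a synthetic tilted elliptical loop (the loop shape, tilt angle and noise level are made up for illustration):

```python
import numpy as np

# Synthetic "QRS loop": a planar ellipse tilted in 3D, plus small noise
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 200)
loop = np.column_stack([3 * np.cos(t), np.sin(t), np.zeros_like(t)])
tilt = np.array([[1, 0, 0],
                 [0, np.cos(0.5), -np.sin(0.5)],
                 [0, np.sin(0.5),  np.cos(0.5)]])   # rotate about x by 0.5 rad
points = loop @ tilt.T + 0.01 * rng.standard_normal((200, 3))

# Inertia (principal) axes: eigenvectors of the centred covariance matrix
centred = points - points.mean(axis=0)
eigval, eigvec = np.linalg.eigh(centred.T @ centred / len(points))
# eigh returns ascending eigenvalues: the last column is the loop's long
# axis, the first column its normal (both defined up to sign)
long_axis = eigvec[:, -1]
normal = eigvec[:, 0]
print(long_axis, normal)
```

On this synthetic loop the long axis recovers the x direction and the normal recovers the tilted z direction, which is the loop-attached coordinate system whose beat-to-beat reproducibility the thesis exploits.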
Chemak, Chokri. "Elaboration de nouvelles approches basées sur les turbos codes pour augmenter la robustesse du tatouage des images médicales : application au projet PocketNeuro". Besançon, 2008. http://www.theses.fr/2008BESA2016.
Texto completo
In most communication systems, turbo codes are used for transmission over power-limited channels. This Error Correcting Code (ECC) is very efficient against distortions introduced by network transmission. The main goal of this Ph.D. work is to bring channel coding, and more precisely turbo codes, into medical image watermarking. Image watermarking consists in embedding a message in an image and then extracting it with the highest possible fidelity. Here, the message is encoded with a turbo code before being embedded in the image, in order to reduce errors in the embedded message after the image has been transmitted over networks. We first elaborate a new turbo-code-based watermarking scheme that is robust against the most usual attacks on images: noise, JPEG compression, image filtering and geometric transformations such as cropping. In this case, the embedded message is a binary mark identifying the image owner. We then apply these new schemes to telemedicine, and more precisely to the PocketNeuro project, where the message embedded into the image is medical information about the patient. This information is encoded with a turbo code and then embedded into the patient's diagnosis image. Two important contributions to PocketNeuro are studied: the confidentiality of the medical information transmitted between the different participants, and the adaptation of images to the terminals
Prosperi, Paolo. "Mesures de la sécurité alimentaire et de l'alimentation durable en Méditerranée, basées sur les approches de la vulnérabilité et de la résilience". Electronic Thesis or Diss., Montpellier, SupAgro, 2015. http://www.supagro.fr/theses/extranet/15-0003_Prosperi.pdf.
Texto completo
Recurrent food crises and global change, along with habitat loss and micronutrient deficiencies, have placed food security and environmental sustainability at the top of the political agenda. Analyses of the dynamic interlinkages between food consumption patterns and environmental concerns have recently received considerable attention from the international community. Socioeconomic and biophysical changes affect food system functions, including food and nutrition security, and the sustainability of the food system is at risk. Building sustainable food systems has become a key effort to redirect our food systems and policies towards better-adjusted goals and improved societal welfare. Food systems involve multiple interactions between human and natural components; the systemic nature of these interactions calls for systems approaches and integrated assessment tools. Identifying and modeling the intrinsic properties of the food system can help track progress towards sustainability and set policies towards positive transformations. The general objective of this thesis is to analyze and explore the sustainability of the food system by identifying a set of metrics at the Mediterranean region level. The specific aims consist of developing a multidimensional framework to evaluate the sustainability of food systems and diets, identifying the main variables to formalize and operationalize the abstract and multidimensional concept of sustainable food systems, and defining metrics for assessing the sustainability of food systems and diets at a subregional level. Through a broad understanding of sustainability, the methodological approach of this thesis builds on the theories of vulnerability and resilience. Following the steps of global change vulnerability assessment, a causal factor analysis is presented for three Mediterranean countries, namely Spain, France and Italy.
Formulating "what is vulnerable to what" hypotheses, we identified eight causal models of vulnerability. A three-round Delphi survey was then applied to select indicators on the basis of the vulnerability/resilience theoretical framework. A conceptual hierarchical framework was identified for modeling the complex relationships between food and nutrition security and sustainability, and for developing potential indicators of sustainable diets and food systems. A feedback-structured framework of the food system formalized the eight selected causal models of vulnerability and resilience and identified intrinsic properties of the food system, shaping the interactions in which a set of drivers of change (water depletion; biodiversity loss; food price volatility; changes in food consumption patterns) directly affect food and nutrition security outcomes at a subregional level (nutritional quality of food supply; affordability of food; dietary energy balance; satisfaction of cultural food preferences). Each interaction was disentangled into exposure, sensitivity and resilience. This theoretical framework was operationalized through the identification of a set of 136 indicators. The Delphi study revealed low, medium and high consensus and majority levels on indicators in 75% of the 24 initial interactions. The results obtained in terms of global response, expert participation rates and consensus on indicators were therefore satisfactory. Experts also confirmed the appraisal of the components of the framework with positive feedback. This theoretical modeling exercise and the Delphi survey allowed the identification of a first suite of indicators, moving beyond single, subjective evaluations and reaching consensus on metrics of sustainable diets and food systems to support decision-making.
The operationalization of the theories of vulnerability and resilience, through an indicator-based approach, can contribute to further analyses of the socioeconomic and biophysical aspects and interlinkages concerning the sustainability of diets and food systems
Martin, Amaury. "Développement de nouvelles approches antivirales du virus de l'hépatite C basées sur l'utilisation d'interférons alpha variants et d'antisens de type Peptide Nucleic Acids". Phd thesis, Université Claude Bernard - Lyon I, 2007. http://tel.archives-ouvertes.fr/tel-00136018.
Texto completo
El Moutaouakkil, Abdelmajid. "Approche basée sur le squelette pour la segmentation et l'analyse 2D et 3D des images tomographiques de la structure osseuse". Lyon, INSA, 1999. http://www.theses.fr/1999ISAL0027.
Texto completo
The purpose of this thesis is to present a segmentation method for radiological images of trabecular bone, and the extraction of quantification parameters characterising the bone structure. Segmentation methods are typically classified into "region" and "edge" approaches. In this study we developed a new "skeleton"-based approach, particularly well suited to segmenting tomographic images of trabecular bone. First, the skeleton of the structure is extracted using a gray-level generalization of binary skeletonisation; the skeleton is then thickened perpendicularly to detect the whole structure. The proposed method is easily adaptable to tomographic images obtained from different radiological systems and performed well. We also introduced several direct methods for the parametrisation of the trabecular architecture, such as connectivity, directional and morphometric parameters, which are interpreted and compared with parameters described earlier in the literature. The important influence of the segmentation factors on the measured architectural parameters is clearly demonstrated and quantified. The developed method was calibrated and validated on phantoms and anatomical specimens. A generalisation of the methodology to the segmentation and parametrisation of 3D images was also performed
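The two-step idea — extract a skeleton, then thicken it back out to recover the full structure — can be sketched with a binary dilation constrained by a gray-level threshold. This is only an illustrative toy (4-connected dilation on a hand-made image, with hypothetical function names), not the gray-level skeletonisation used in the thesis:

```python
import numpy as np

def dilate(mask):
    # one step of 4-connected binary dilation
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def thicken_skeleton(skeleton, gray, threshold, max_iter=10):
    """Grow the skeleton outward, keeping only pixels whose gray level
    stays above the threshold (i.e. pixels inside the structure)."""
    region = skeleton & (gray >= threshold)
    for _ in range(max_iter):
        grown = dilate(region) & (gray >= threshold)
        if (grown == region).all():
            break
        region = grown
    return region

# Toy image: a bright horizontal band (the "structure") on a dark background.
gray = np.full((7, 7), 50)
gray[2:5, :] = 200
skeleton = np.zeros((7, 7), dtype=bool)
skeleton[3, :] = True                     # medial line of the band
region = thicken_skeleton(skeleton, gray, threshold=100)
# region now covers the whole band (rows 2-4)
```

The threshold plays the role of the segmentation factor whose influence on the measured architectural parameters the thesis quantifies.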
Fraisse, Stéphane. "Structure de la communauté phytoplanctonique des fleuves côtiers en réponse aux contraintes hydrodynamique : une approche basée sur les traits morpho-fonctionnels". Phd thesis, Université Rennes 1, 2013. http://tel.archives-ouvertes.fr/tel-00911634.
Texto completoYaacoub, Tina. "Nouvelles approches pour l'estimation du canal ultra-large bande basées sur des techniques d'acquisition compressée appliquées aux signaux à taux d'innovation fini IR-UWB". Thesis, Brest, 2017. http://www.theses.fr/2017BRES0077/document.
Texto completo
Ultra-wideband impulse radio (IR-UWB) is a relatively new communication technology that provides an interesting solution to the problem of RF spectrum scarcity and meets the high data rate and precise localization requirements of an increasing number of applications, such as indoor communications, personal and body sensor networks, the IoT, etc. Its unique characteristics are obtained by transmitting pulses of very short duration (less than 1 ns), occupying a bandwidth of up to 7.5 GHz and having an extremely low power spectral density (less than -43 dBm/MHz). The best performance of an IR-UWB system is obtained with coherent Rake receivers, at the expense of increased complexity, mainly due to the estimation of the UWB channel, which is characterized by a large number of multipath components. This processing step requires estimating a set of spectral components of the received signal, without being able to adopt usual sampling techniques because of the extremely high Nyquist limit (several GHz). In this thesis, we propose new low-complexity approaches for UWB channel estimation, relying on the sparse representation of the received signal, compressed sampling theory, and the reconstruction of signals with a finite rate of innovation. The complexity reduction thus obtained makes it possible to significantly reduce the cost and power consumption of the IR-UWB receiver. First, two existing compressed sampling schemes, single-channel (SoS) and multi-channel (MCMW), are extended to the case of UWB signals having a bandpass spectrum, taking realistic implementation constraints into account. These schemes allow the acquisition of the spectral coefficients of the received signal at very low sampling frequencies, which are no longer tied to the signal bandwidth but only to the number of UWB channel multipath components.
The efficiency of the proposed approaches is demonstrated through two applications: UWB channel estimation for low-complexity coherent Rake receivers, and precise indoor localization for personal assistance and home care. Furthermore, in order to reduce the complexity of the MCMW approach in terms of the number of channels required for UWB channel estimation, we propose an architecture with a reduced number of channels, obtained by increasing the number of transmitted pilot pulses. The same approach is also shown to be useful for reducing the sampling frequency associated with the MCMW scheme. Another important objective of this thesis is optimizing the performance of the proposed approaches. Although acquiring consecutive spectral coefficients allows a simple implementation of the MCMW scheme, we demonstrate that it does not yield the best performance of the reconstruction algorithms. We then propose to rely on the coherence of the measurement matrix to find the optimal set of spectral coefficients maximizing the signal reconstruction performance, as well as a constrained suboptimal set where the positions of the spectral coefficients are structured so as to facilitate the design of the MCMW scheme. Finally, the approaches proposed in this thesis are experimentally validated using the UWB equipment of Lab-STICC CNRS UMR 6285
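The role of measurement-matrix coherence can be illustrated on a row-subsampled DFT matrix: selecting consecutive spectral coefficients produces highly correlated columns (mutual coherence close to 1), while spreading the selected coefficients lowers it, which favours sparse reconstruction. A toy sketch under arbitrary sizes and index sets (not those of the thesis):

```python
import numpy as np

def mutual_coherence(A):
    """Largest normalized inner product between distinct columns of A."""
    A = A / np.linalg.norm(A, axis=0)        # unit-norm columns
    gram = np.abs(A.conj().T @ A)
    np.fill_diagonal(gram, 0.0)
    return float(gram.max())

def partial_dft(n, rows):
    """Selected rows of the n x n DFT matrix (a compressed-sampling matrix)."""
    k = np.asarray(list(rows))[:, None]
    return np.exp(-2j * np.pi * k * np.arange(n) / n)

n = 64
consecutive = partial_dft(n, range(8))                    # coefficients 0..7
rng = np.random.default_rng(0)
spread = partial_dft(n, rng.choice(n, 8, replace=False))  # random spread

mu_consec = mutual_coherence(consecutive)   # close to 1 (Dirichlet kernel)
mu_spread = mutual_coherence(spread)        # noticeably lower
```

Minimizing this coherence over the admissible index sets is one way to formalize the search for the optimal (or structured suboptimal) set of spectral coefficients.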
Vincent, Bruno. "Investigation fonctionnelle et structurale des mécanismes d'activation ou de régulation de RCPGs opiacés basée sur une approche par fragments". Toulouse 3, 2008. http://www.theses.fr/2008TOU30251.
Texto completo
The opiate receptors (MOP, DOP, KOP, NOP) belong to the G Protein-Coupled Receptor (GPCR) family. They are implicated in the effects of analgesia and tolerance and in all the associated dependence phenomena. The molecular mechanisms of their activation, signal transduction and regulation are still under investigation. During my thesis, a fragment-based approach was used to circumvent the difficulties arising from the intrinsic properties of membrane proteins. The second extracellular loop (EC2) plays a crucial role in the specificity of recognition of its ligand, Nociceptin. Using mimicking peptide fragments, the EC2-Nociceptin interaction was studied by fluorescence and NMR spectroscopy. The results obtained allow us to specify the interaction between the ligand and the extracellular loop and prove the importance of the message part of Nociceptin. The C-terminal part of GPCRs is implicated in the regulation phenomena that occur during physiological processes. The C-terminal peptide of the MOP receptor (352-398) was expressed in E. coli and purified. MOP fragments fused with GST were used in GST or histidine pulldown experiments to find new partner proteins. Biochemical studies of this peptide fragment allowed us to identify two phosphorylation sites (Ser363 and Ser375), specific to PKC and PKA, respectively. Moreover, structural studies, essentially by NMR spectroscopy, revealed two alpha-helical domains surrounding the crucial phosphorylation sites
Cragnolini, Tristan. "Prédire la structure des ARN et la flexibilité des ARN par des simulations basées sur des modèles gros grains". Paris 7, 2014. http://www.theses.fr/2014PA077163.
Texto completo
In contrast to proteins, there are relatively few experimental and computational studies on RNAs. This is likely to change, however, due to the discovery that RNA molecules fulfil a considerable diversity of biological tasks, including, aside from their encoding and translational activity, enzymatic and regulatory functions. Despite the simplicity of its four-letter alphabet, RNA is able to fold into a wide variety of tertiary structures, where dynamic conformational ensembles also appear to be essential for understanding function. In spite of constant experimental efforts and theoretical developments, the gap between sequences and 3D structures is increasing, and our knowledge of RNA flexibility is still limited at an atomic level of detail. In this thesis, I present improvements to the HiRE-RNA model and folding simulations performed with it. After presenting the computational methods used to sample the energy landscapes of RNA and the experimental methods providing information about RNA structures, I present the RNA topologies and the structural data I used to improve the model and to study RNA folding. The improvements of HiRE-RNA in version 2 and version 3 are then described, as well as the simulations performed with each version of the model
Tataie, Laila. "Méthodes simplifiées basées sur une approche quasi-statique pour l'évaluation de la vulnérabilité des ouvrages soumis à des excitations sismiques". Phd thesis, INSA de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00708575.
Texto completoNguyen, Quang Thuan. "Approches locales et globales basées sur la programmation DC et DCA pour des problèmes combinatoires en variables mixtes 0-1 : applications à la planification opérationnelle". Thesis, Metz, 2010. http://www.theses.fr/2010METZ037S/document.
Texto completo
This thesis develops two local and global approaches based on DC programming and DCA for mixed 0-1 combinatorial optimization, and their applications to many problems in operational planning. More particularly, this thesis covers: the improvement of the outer approximation algorithm based on DCA (called DCACUT) introduced by Nguyen V.V. and Le Thi for mixed 0-1 linear programming; combinations of global algorithms and DCA, together with a comparative numerical study of these approaches for mixed 0-1 linear programming; the use of DCA for solving mixed 0-1 programs via an exact penalty technique; and the implementation of the developed algorithms for solving large-scale problems in operational planning: two problems in wireless telecommunication networks, two scheduling problems, a UAV task assignment problem and an inventory routing problem in supply chains
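The DCA iteration is easy to illustrate on a one-dimensional DC decomposition f = g - h with g, h convex: each step linearizes h at the current point and minimizes the resulting convex function. A toy sketch with f(x) = x⁴ - x² (g = x⁴, h = x²), whose minimizers are x = ±1/√2 — this shows only the generic DCA scheme, not the DCACUT algorithm of the thesis:

```python
def dca_quartic(x0, iters=60):
    """DCA for f(x) = x^4 - x^2, with g(x) = x^4 and h(x) = x^2.
    Step: x_{k+1} = argmin_x g(x) - h'(x_k) * x, i.e. solve 4 x^3 = 2 x_k,
    giving x_{k+1} = cbrt(x_k / 2)."""
    x = x0
    for _ in range(iters):
        grad_h = 2 * x                       # linearize the concave part -h
        # closed-form solution of the convex subproblem 4 x^3 = grad_h
        t = grad_h / 4
        x = t ** (1 / 3) if t >= 0 else -((-t) ** (1 / 3))
    return x

x_star = dca_quartic(1.0)    # converges towards 1 / sqrt(2) ≈ 0.7071
```

Each iterate decreases f, and the sequence converges linearly (ratio 1/3 here) to a critical point of the DC program; starting from a negative point yields the symmetric minimizer -1/√2.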
Ahmad, Alexandre. "Animation de structures déformables et modélisation des interactions avec un fluide basées sur des modèles physiques". Limoges, 2007. https://aurore.unilim.fr/theses/nxfile/default/4f73d6f8-b8f0-4794-924b-8f827db44689/blobholder:0/2007LIMO4046.pdf.
Texto completo
The main focus of the presented work is the interaction of liquids with thin shells, such as sheets of paper, fish fins and even clothes. Even though such interactions are an everyday scenario, few research works in the computer graphics community have investigated this phenomenon. I therefore propose an algorithm that resolves contacts between Lagrangian fluids and deformable thin shells. Visual artefacts may appear during the surface extraction procedure due to the proximity of the fluid and the shells; to avoid such artefacts, I propose a visibility algorithm that projects the undesired overlapping volume of liquid onto the thin shells' surface. In addition, an intuitive parametrisation model for the definition of heterogeneous friction coefficients on a surface is presented. I also propose two optimisation methods. The first reduces the well-known dependency between numerical stability and the timestep when using explicit schemes by filtering particle velocities; this reduction is quantified using frequency analysis. The second is a unified dynamic spatial acceleration model, composed of a hierarchical hash table data structure, that speeds up particle neighbourhood queries and the collision broad phase. The proposed unified model is also used to efficiently prune unnecessary computations during the surface extraction procedure
Zhu, Chaobin. "Rôle du facteur de croissance IGF-1 (Insulin-Like Growth Factor-1) sur la progression tumorale invasive et métastatique du mélanome : approches anti-tumorales basées sur l'inhibition du facteur IGF-1". Thesis, Paris 11, 2015. http://www.theses.fr/2015PA11T012.
Texto completo
Metastatic melanoma is the least common form of skin cancer (5-7%), but is responsible for most skin cancer deaths because of its strong resistance to conventional anti-cancer treatments. Although immunogenic, this aggressive form currently has no effective treatment, making it urgent to find new therapeutic targets. In this context, we assessed whether Insulin-like Growth Factor-1 (IGF-1) could represent a target of therapeutic interest in melanoma, by inhibiting the expression of IGF-1 by means of an episome-based vector encoding an IGF-1 antisense, in two cellular models: primary melanoma B16-F0 cells and metastatic B16-F10 cells (designated B16-F0mod and B16-F10mod when IGF-1 expression is inhibited). In experimental models in vivo, our results show that the reduction of IGF-1 expression decreased the tumorigenicity of the melanoma cells, generating smaller tumors under the skin (B16-F0 and B16-F10 in C57BL/6 mice) and totally (C57BL/6 mice) or strongly (NSG mice) inhibiting the development of B16-F10 lung metastases. We sought to understand whether this loss of tumorigenicity following IGF-1 inhibition was due to a change in the immunogenicity/antigenicity of the tumor cells and/or to a modification of the intrinsic tumorigenic potential of the metastatic tumor cells. 1/ Immunization of C57BL/6 mice with B16-F0mod cells induces the formation of humoral effectors, lytic in the presence of complement against the parental line, but also CD8+ effector cells capable of inducing tumor cell lysis in vitro and inhibiting tumor growth in vivo.
Although the analysis of humoral and cellular pathways did not demonstrate IGF-1-dependent mechanisms in B16-F10 cells, immunization of C57BL/6 mice with B16-F0mod cells leads to skin tumor growth inhibition and a reduction in the number of pulmonary metastases, confirming the involvement of IGF-1 in tumor escape from the immune system. 2/ Our results also show that IGF-1 plays a direct role in the intrinsic tumorigenic potential of tumor cells. In addition to its effect on tumor cell proliferation, IGF-1 is involved in the epithelial-mesenchymal transition (increased N-cadherin, vimentin, CD44 and CD29 markers), promoting the maintenance of tumor populations with stemness properties (Sox2, Oct3/4, CD44, CD24, ALDH activity, side population and the ability to form spheroids). Through this mechanism, IGF-1 promotes both migration and the efflux of drugs such as mitoxantrone via ABC transporters, which partly explains the strong resistance of melanoma to conventional therapies. This work shows that inhibiting the IGF-1/IGF1-R pathway might be a good strategy for the development of anti-tumor treatments against melanoma. In addition to enabling immunotherapy strategies, blocking the IGF-1 pathway would also sensitize melanoma cells to conventional therapy and decrease the metastatic potential of tumor cells
Martiny, Virginie. "Vers la prédiction des propriétés ADME-Tox des molécules à visée thérapeutique par une approche basée sur la structure des enzymes du métabolisme". Paris 7, 2013. http://www.theses.fr/2013PA077175.
Texto completo
Predicting the ADME-Tox (Absorption, Distribution, Metabolism, Excretion and Toxicity) properties of drug candidates early in the drug discovery process is critical to detecting compounds with undesirable drug-like profiles. Drug metabolizing enzymes are capable of modifying compounds by interacting with them, or can be involved in drug-drug interactions. Cytochromes P450 (CYPs) and sulfotransferases (SULTs) are key drug metabolizing enzymes catalyzing oxidation and sulfoconjugation, respectively, leading to possible modifications of the ADME-Tox properties of compounds and, in some cases, to toxicity. The goal of this PhD study was to predict interactions of drug candidates with CYPs and SULTs using in silico structure-based methods. We thus developed an approach based on a docking/scoring methodology that accounts for the important flexibility of their active sites by using molecular dynamics (MD) simulations. When applied to CYPs, our protocol identified three MD-derived structures successfully discriminating active from inactive compounds. Regarding SULTs, we also identified several MD-derived structures able to correctly discriminate active and inactive compounds. In addition, we incorporated the docking/scoring results for SULTs to create QSAR models. We plan to make our approach freely accessible to the scientific community so that researchers can predict interactions of their compounds of interest with CYPs and SULTs
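How well a scored structure discriminates actives from inactives is commonly summarized by the ROC AUC: the probability that a randomly chosen active is scored higher than a randomly chosen inactive. A minimal sketch with made-up scores (not data or code from the thesis):

```python
def roc_auc(active_scores, inactive_scores):
    """Probability that an active outranks an inactive (ties count 1/2)."""
    wins = 0.0
    for a in active_scores:
        for i in inactive_scores:
            if a > i:
                wins += 1.0
            elif a == i:
                wins += 0.5
    return wins / (len(active_scores) * len(inactive_scores))

# Hypothetical docking scores: higher = predicted more active.
actives = [7.9, 6.4, 5.8]
inactives = [6.0, 4.2, 3.5]
auc = roc_auc(actives, inactives)   # 8/9 here: one inactive outscores one active
```

An AUC of 0.5 means the scoring of that MD-derived structure is no better than chance; values near 1 correspond to the discriminating structures retained by such a protocol.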
Hoareau, Violette. "Etudes des mécanismes de maintien en mémoire de travail chez les personnes jeunes et âgées : approches computationnelle et comportementale basées sur les modèles TBRS* et SOB-CS". Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAS050/document.
Texto completo
Working memory is a cognitive system essential to our daily life: it allows us to temporarily store information in order to perform a cognitive task. One of its main features is its limited capacity, and the reasons for this limitation are widely debated in the literature. Some models consider that a main cause of forgetting in working memory is a passive temporal decay in the activation of memory representations, whereas other models assume that interference between items is sufficient to explain the limited capacity of this memory. Two recently proposed computational models (TBRS* and SOB-CS) perfectly illustrate this debate: they describe differently what happens during a working memory task involving both storage and processing of information. In addition to positing opposite causes of forgetting, they propose separate maintenance processes: refreshing relevant information according to TBRS*, versus removing irrelevant information according to SOB-CS. This thesis was organized around two main objectives. First, we focused on the study of these two models and their maintenance mechanisms, performing behavioral experiments with the complex span task to test specific hypotheses of each model. Second, using computational models, we investigated the causes of the working memory deficits observed in the elderly, with the long-term aim of creating or improving remediation tools. Regarding the first objective, results showed a discrepancy between human behavior and simulations: TBRS* and SOB-CS did not reproduce a positive effect of the number of distractors, contrary to what was observed experimentally. We propose that this positive effect, not predicted by the models, is related to long-term storage, which neither model takes into account.
Regarding the second objective, the behavioral results suggest that older people have difficulty mainly in refreshing memory traces and in stabilizing information in the long term during a complex task. Overall, the results of this thesis call for deeper research on the links between maintenance mechanisms and long-term storage, for example by proposing a new computational model accounting for our results. Beyond advances in understanding the functioning of working memory, this thesis also shows that the use of computational models is particularly relevant both for studying a theory and for comparing different populations
Sourty, Raphael. "Apprentissage de représentation de graphes de connaissances et enrichissement de modèles de langue pré-entraînés par les graphes de connaissances : approches basées sur les modèles de distillation". Electronic Thesis or Diss., Toulouse 3, 2023. http://www.theses.fr/2023TOU30337.
Texto completo
Natural language processing (NLP) is a rapidly growing field focused on developing algorithms and systems to understand and manipulate natural language data. The ability to effectively process and analyze natural language data has become increasingly important in recent years, as the volume of textual data generated by individuals, organizations and society as a whole continues to grow significantly. One of the main challenges in NLP is representing and processing knowledge about the world. Knowledge graphs are structures that encode information about entities and the relationships between them. They are a powerful tool for representing knowledge in a structured and formalized way, and they provide a holistic understanding of the underlying concepts and their relationships. The ability to learn knowledge graph representations has the potential to transform NLP and other domains that rely on large amounts of structured data. The work conducted in this thesis explores the concept of knowledge distillation and, more specifically, mutual learning for learning distinct and complementary space representations. Our first contribution is a new framework for learning entities and relations on multiple knowledge bases, called KD-MKB. The key objective of multi-graph representation learning is to empower the entity and relation models with different graph contexts that potentially bridge distinct semantic contexts. Our approach is based on the theoretical framework of knowledge distillation and mutual learning. It allows efficient knowledge transfer between KBs while preserving the relational structure of each knowledge graph. We formalize entity and relation inference between KBs as a distillation loss over posterior probability distributions on aligned knowledge.
Building on this, we propose and formalize a cooperative distillation framework in which a set of KB models are jointly learned, using hard labels from their own context and soft labels provided by their peers. Our second contribution is a method for incorporating rich entity information from knowledge bases into pre-trained language models (PLMs). We propose an original cooperative knowledge distillation framework to align the masked language modeling pre-training task of language models with the link prediction objective of KB embedding models. By leveraging the information encoded in knowledge bases, our approach provides a new direction for improving the ability of PLM-based slot-filling systems to handle entities
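A distillation loss over posterior distributions, combining a peer's softened predictions with the hard label, is typically a temperature-scaled KL term plus a cross-entropy term. A generic numpy sketch (the temperature, weighting and function names are illustrative assumptions, not the KD-MKB hyperparameters):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)     # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, hard_label,
                      temperature=2.0, alpha=0.5):
    """alpha * KL(teacher || student) at temperature T
       + (1 - alpha) * cross-entropy with the hard label."""
    p_t = softmax(teacher_logits, temperature)
    p_s = softmax(student_logits, temperature)
    soft = float(np.sum(p_t * (np.log(p_t) - np.log(p_s))))
    hard = float(-np.log(softmax(student_logits)[hard_label]))
    return alpha * soft + (1 - alpha) * hard

# Illustrative logits over three candidate entities/relations.
logits_teacher = np.array([4.0, 1.0, 0.5])
logits_student = np.array([2.5, 1.5, 0.2])
loss = distillation_loss(logits_student, logits_teacher, hard_label=0)
```

In a mutual-learning setting each KB model alternately plays teacher and student, so this loss is applied symmetrically across the aligned knowledge.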
Malinenko, Alla. "Effet d’ion specifique sur l’auto-assemblage d’amphiphiles cationiques : des approches experimentale et informatique". Thesis, Bordeaux, 2015. http://www.theses.fr/2015BORD0065/document.
Texto completo
The present study is a holistic approach to investigating ion-specific effects on the self-assembly properties of cationic gemini surfactants. Our main focus was the effect of various counterions on the self-organization of cationic surfactants in aqueous solution. In order to obtain a more comprehensive understanding of the effect of interfacial ionic and molecular interactions on aggregate properties, we used several approaches: we combined an experimental study of bulk solution properties (critical micelle concentration, ionization degree, aggregation number, etc.) with approaches probing interfacial micellar properties, analyzing interfacial counterion and water concentrations both experimentally (chemical trapping) and computationally (molecular dynamics simulations). Moreover, the impact of the counterion's nature was investigated by studying the growth of wormlike micelles using rheology. Besides examining the surfactants' properties in solution, ion-specific effects on the crystalline structures of gemini surfactants were studied. We found that the ion-specific effects which determine the behavior of micellar aggregates of cationic quaternary ammonium gemini surfactants in aqueous solution strongly depend on the free energy of hydration of the counterions, in other words on their hydrophilic/hydrophobic properties. Contrary to aqueous solution, in crystals the size of the ion becomes the determining factor. Comparison of the results obtained for the same system in aqueous solution and in the solid state showed the importance of ion-water interactions in ion-specific effects. However, one should note that the properties of the substrate (the gemini surfactant in our case) should be taken into account just as carefully in order to fully predict Hofmeister effects
Bensaad, Karim. "Etude des relations structure-fonction du produit du gène suppresseur de tumeur TP53 : une approche basée sur la conservation phylogénétique des membres de la famille p53". Paris 5, 2002. http://www.theses.fr/2002PA05N104.
Texto completo
P53 is a key protein for cellular integrity. After a stress, p53 induces cell cycle arrest or apoptosis. Two approaches have been used to study the relationship between structure and function of p53. i) Characterization of human p53 / Xenopus p53 hybrid proteins enabled the identification of the p53X central domain as responsible for its thermosensitivity. We then demonstrated that the p73 protein interacts specifically with p53 hybrids having an altered conformation; this interaction leads to a loss of p73 activity. These data help explain the gain-of-function effect of p53 mutants in human cancers. ii) p53 mutants at position 98 (Pro), a residue conserved among p53 proteins in all species but never found mutated in human cancers