Theses on the topic « Méthode à patchs » (patch-based methods)
Consult the top 50 theses for your research on the topic « Méthode à patchs ».
Browse theses on a wide variety of disciplines and organise your bibliography correctly.
Mathias, Jean-Denis. « Etude du comportement mécanique de patchs composites utilisés pour le renforcement de structures métalliques aéronautiques ». PhD thesis, Université Blaise Pascal - Clermont-Ferrand II, 2005. http://tel.archives-ouvertes.fr/tel-00159157.
Metallic aeronautical structures can develop defects such as cracks, flaws or impact damage. An alternative to repairing these structures is to reinforce them preventively, before defects appear. This work is set in the context of the preventive maintenance of metallic aeronautical structures with composite reinforcements, with the aim of delaying the initiation or propagation of cracks. Designing the reinforcements requires specialised tools to define the optimal characteristics of the patch: geometry, number of unidirectional plies, relative orientation of the plies, positioning around the zone to be relieved, and so on. To this end, a patch-optimisation program based on a genetic algorithm was written and coupled to the Ansys finite-element software. The genetic algorithm determined patch characteristics whose outer geometry is defined by a closed spline curve, so as to optimally reduce the mechanical stresses in a given zone, for several types of loading. Many failures of bonded patch/substrate assemblies are caused by stress concentrations in the adhesive, due to the existence of a zone of progressive load transfer from the substrate to the patch. One-dimensional load-transfer models are classically used in the literature; they do not, however, account for two-dimensional effects such as the mismatch of Poisson's ratios that can exist between the substrate and the composite. Starting from the equilibrium equations, two-dimensional analytical and numerical models were therefore developed, and two-dimensional coupling phenomena were clearly demonstrated. In parallel, uniaxial tensile tests were carried out on aluminium specimens reinforced with carbon/epoxy patches. The grid method was used to measure kinematic fields on the surface of the composite patch. This method made it possible to study experimentally the load transfer between the substrate and the reinforcement along both dimensions of the problem and to compare the results with the models developed beforehand.
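As a rough illustration of the optimization loop described in this abstract, the sketch below runs a small genetic algorithm over a closed patch outline (control radii) and ply orientations. The fitness function is a crude stand-in for the Ansys finite-element stress evaluation used in the thesis; every name and parameter here is a hypothetical placeholder.

```python
import random

random.seed(42)
N_CTRL, N_PLIES, POP, GENS = 8, 4, 30, 60

def fitness(genome):
    # Stand-in for the FE evaluation delegated to Ansys in the thesis:
    # reward smooth outlines of moderate size and balanced ply orientations.
    radii, angles = genome
    smoothness = sum((radii[i] - radii[i - 1]) ** 2 for i in range(N_CTRL))
    size_penalty = (sum(radii) / N_CTRL - 1.0) ** 2
    balance = abs(sum(1 for a in angles if a in (0, 90)) - N_PLIES / 2)
    return -(smoothness + size_penalty + 0.1 * balance)

def random_genome():
    radii = [random.uniform(0.5, 1.5) for _ in range(N_CTRL)]           # closed-outline control radii
    angles = [random.choice([0, 45, 90, 135]) for _ in range(N_PLIES)]  # ply angles (degrees)
    return (radii, angles)

def crossover(a, b):
    cut = random.randrange(1, N_CTRL)
    radii = a[0][:cut] + b[0][cut:]
    angles = [random.choice(pair) for pair in zip(a[1], b[1])]
    return (radii, angles)

def mutate(g, rate=0.1):
    radii = [r + random.gauss(0, 0.05) if random.random() < rate else r for r in g[0]]
    angles = [random.choice([0, 45, 90, 135]) if random.random() < rate else a for a in g[1]]
    return (radii, angles)

pop = [random_genome() for _ in range(POP)]
for _ in range(GENS):
    pop.sort(key=fitness, reverse=True)
    elite = pop[:POP // 3]                      # keep the best third
    children = [mutate(crossover(*random.sample(elite, 2))) for _ in range(POP - len(elite))]
    pop = elite + children

best = max(pop, key=fitness)
print("best outline radii:", [round(r, 2) for r in best[0]], "plies:", best[1])
```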
Irrera, Paolo. « Traitement d'images de radiographie à faible dose : Débruitage et rehaussement de contraste conjoints et détection automatique de points de repère anatomiques pour l'estimation de la qualité des images ». Thesis, Paris, ENST, 2015. http://www.theses.fr/2015ENST0031/document.
We aim at reducing the ALARA (As Low As Reasonably Achievable) dose limits for images acquired with the EOS full-body system by means of image processing techniques. Two complementary approaches are studied. First, we define a post-processing method that optimizes the trade-off between acquired image quality and X-ray dose. The Non-Local Means filter is extended to restore EOS images, and we study how to combine it with a multi-scale contrast enhancement technique. The image quality for diagnosis is optimized by defining non-parametric noise containment maps that limit the increase of noise depending on the amount of local redundant information captured by the filter. Secondly, we estimate exposure index (EI) values on EOS images, which give immediate feedback on image quality to help radiographers verify the correct exposure level of the X-ray examination. We propose a landmark-detection-based approach that is more robust to potential outliers than existing methods, as it exploits the redundancy of local estimates. Finally, the proposed joint denoising and contrast enhancement technique significantly increases the image quality with respect to an algorithm used in clinical routine, and robust image quality indicators can be automatically associated with clinical EOS images. Given the consistency of the measures assessed on preview images, these indices could be used to drive an exposure management system in charge of defining the optimal radiation exposure.
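The abstract builds on the Non-Local Means filter; as a reference point, a minimal (unoptimized) version of that filter is sketched below, without the noise-containment maps the thesis adds. Parameter values are illustrative only.

```python
import numpy as np

def nlm_denoise(img, patch=3, search=15, h=0.1):
    """Minimal Non-Local Means: each pixel becomes a weighted average of
    pixels whose surrounding patches look similar to its own patch."""
    pad, r = patch // 2, search // 2
    padded = np.pad(img, pad, mode="reflect")
    out = np.zeros_like(img, dtype=float)
    rows, cols = img.shape
    for i in range(rows):
        for j in range(cols):
            ref = padded[i:i + patch, j:j + patch]
            wsum, acc = 0.0, 0.0
            for ii in range(max(0, i - r), min(rows, i + r + 1)):
                for jj in range(max(0, j - r), min(cols, j + r + 1)):
                    cand = padded[ii:ii + patch, jj:jj + patch]
                    d2 = np.mean((ref - cand) ** 2)     # patch similarity
                    w = np.exp(-d2 / h**2)
                    wsum += w
                    acc += w * img[ii, jj]
            out[i, j] = acc / wsum
    return out

rng = np.random.default_rng(0)
clean = np.tile(np.linspace(0, 1, 32), (32, 1))          # toy gradient image
noisy = clean + 0.05 * rng.standard_normal(clean.shape)
# expected: True (the filtered image is closer to the clean one)
print(np.mean((nlm_denoise(noisy) - clean) ** 2) < np.mean((noisy - clean) ** 2))
```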
De Bortoli, Valentin. « Statistiques non locales dans les images : modélisation, estimation et échantillonnage ». Thesis, Université Paris-Saclay, 2020. http://www.theses.fr/2020UPASN020.
In this thesis we study two non-local statistics in images from a probabilistic point of view: spatial redundancy and convolutional neural network features. More precisely, we are interested in the estimation and detection of spatial redundancy in natural images. We also aim at sampling images with neural network constraints. We start by giving a definition of spatial redundancy in natural images. This definition relies on two concepts: a Gestalt analysis of the notion of similarity in images, and a hypothesis testing framework (the a contrario method). We propose an algorithm to identify this redundancy in natural images. Using this methodology we can detect similar patches in images and, with this information, we propose new algorithms for diverse image processing tasks (denoising, periodicity analysis). The rest of this thesis deals with sampling images with non-local constraints. The image models we consider are obtained via the maximum entropy principle, and the target distribution is obtained by minimizing an energy functional. We use tools from stochastic optimization to tackle this problem. More precisely, we propose and analyze a new algorithm: the SOUL (Stochastic Optimization with Unadjusted Langevin) algorithm. In this methodology, the gradient is estimated using Markov Chain Monte Carlo methods; in the case of the SOUL algorithm, an unadjusted Langevin algorithm is used. The efficiency of the SOUL algorithm is related to the ergodic properties of the underlying Markov chains. We are therefore interested in the convergence properties of a certain class of functional autoregressive models, and we characterize precisely the dependency of the convergence rates of these models with respect to their parameters (dimension, smoothness, convexity). Finally, we apply the SOUL algorithm to the problem of exemplar-based texture synthesis with a maximum entropy approach. We draw links between our model and other entropy maximization procedures (macrocanonical models, microcanonical models). Using convolutional neural network constraints we obtain state-of-the-art visual results.
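To make the SOUL iteration concrete, here is a toy one-dimensional version: the inner loop draws approximate samples from the current model with an unadjusted Langevin step, and the outer loop nudges the parameter so that a model statistic matches a target, in the maximum-entropy spirit described above. The Gaussian model and all constants are illustrative assumptions, far simpler than the CNN-constrained image models of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
m_star = 1.5          # statistic the model should reproduce (toy stand-in
                      # for a neural-network feature expectation)
theta, x = 0.0, 0.0   # model parameter and current Langevin state
gamma, n_mcmc = 0.05, 20

for k in range(1, 2001):
    # Inner loop: unadjusted Langevin sampling of p_theta(x) ∝ exp(theta*x - x**2/2)
    samples = []
    for _ in range(n_mcmc):
        grad_log_p = theta - x                 # ∇_x log p_theta(x)
        x = x + gamma * grad_log_p + np.sqrt(2 * gamma) * rng.standard_normal()
        samples.append(x)
    # Outer loop: stochastic gradient ascent on theta (moment matching)
    delta = 1.0 / k                            # decreasing step size
    theta += delta * (m_star - np.mean(samples))

print(theta)  # ≈ m_star: the model mean now matches the target statistic
```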
Samuth, Benjamin. « Hybrid models combining deep neural representations and non-parametric patch-based methods for photorealistic image generation ». Electronic Thesis or Diss., Normandie, 2024. http://www.theses.fr/2024NORMC249.
Image generation has seen great progress thanks to the quick evolution of deep neural models. Their reach went beyond the scientific domain, and multiple legitimate concerns and questions have been raised, in particular about how the training data are treated. Conversely, lightweight and explainable models would be a fitting answer to these emerging problematics, but their quality and range of applications are limited. This thesis strives to build “hybrid models” that efficiently combine the qualities of lightweight or frugal methods with the performance of deep networks. We first study the case of artistic style transfer with a multiscale and constrained patch-based method, and qualitatively establish the potential of perceptual metrics in the process. Besides, we develop two hybrid models for photorealistic face generation, each built around a pretrained auto-encoder. The first model tackles the problem of few-shot face generation with the help of latent patches. Results show notable robustness and convincing synthesis with a simple patch-based sequential algorithm. The second model uses Gaussian mixture models as a way to generalize the previous method to wider varieties of faces. In particular, we show that these models perform similarly to other neural methods, while removing a non-negligible number of parameters and computing steps at the same time.
Salmon, Joseph. « Agrégation d'estimateurs et méthodes à patchs pour le débruitage d'images numériques ». Paris 7, 2010. http://www.theses.fr/2010PA077195.
The problem studied in this thesis is the denoising of images corrupted by additive white Gaussian noise. The methods we use to recover a better image from a noisy one are patch-based and are variations of the well-known Non-Local Means. The contributions of this thesis are both practical and theoretical. First, we study precisely the influence of the various parameters of the method, and we highlight a limit of the usual patch-based methods in the treatment of edges. Then, we give a better method to obtain pixel estimates by combining information from patch estimates. From a theoretical point of view, we provide a non-asymptotic control of our estimators. The results proved are oracle inequalities, holding for a restricted class of estimators, close in form to the Non-Local Means estimates. The techniques we use are based on aggregation of estimators, and more precisely on exponentially weighted aggregates. Typically, the latter method requires a measure of the risk, obtained through an unbiased estimator of the risk. A common way to get such a measure is to use Stein's Unbiased Risk Estimate (SURE). The denoising methods studied are analyzed numerically through simulations.
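A minimal sketch of exponentially weighted aggregation driven by SURE, on a 1D signal with a family of moving-average smoothers standing in for the patch-based estimators of the thesis. The SURE formula for linear smoothers and the temperature choice are standard, but the whole setup is illustrative, not the thesis procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.2
truth = np.sin(np.linspace(0, 2 * np.pi, 200))
noisy = truth + sigma * rng.standard_normal(truth.size)

def moving_average(y, width):
    kernel = np.ones(width) / width
    return np.convolve(y, kernel, mode="same")

widths = [1, 3, 5, 9, 15, 25]
estimates = [moving_average(noisy, w) for w in widths]

# SURE for a linear smoother y -> K y: ||y - Ky||^2 + 2*sigma^2*trace(K) - n*sigma^2.
# For a width-w moving average, trace(K) ≈ n / w (border effects ignored).
n = noisy.size
sures = np.array([np.sum((noisy - est) ** 2) + 2 * sigma**2 * n / w - n * sigma**2
                  for est, w in zip(estimates, widths)])

beta = 2 * sigma**2 * n                       # temperature, a common choice in EWA theory
weights = np.exp(-(sures - sures.min()) / beta)
weights /= weights.sum()
aggregate = sum(w * est for w, est in zip(weights, estimates))
print(np.mean((aggregate - truth) ** 2))      # close to the best single-width smoother
```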
Crouzy, Serge. « Méthodes d'analyse des signaux de Patch-Clamp à temps discret ». Grenoble INPG, 1989. http://www.theses.fr/1989INPG0005.
Salmon, Joseph. « Agrégation d'estimateurs et méthodes à patchs pour le débruitage d'images numériques ». PhD thesis, Université Paris-Diderot - Paris VII, 2010. http://tel.archives-ouvertes.fr/tel-00545643.
Texte intégralMakhlouf, Abdelkader. « Justification et amélioration de modèles d'antennes patch par la méthode des développements asymptotiques raccordés ». Toulouse, INSA, 2008. http://eprint.insa-toulouse.fr/archive/00000277/.
This thesis is devoted to the study and the mathematical justification of some models used in the numerical simulation of patch antennas. The reduction of dimension required for a correct description of the electromagnetic field lying between the patch and the metallic ground plane, and the presence of a boundary layer in the vicinity of the antenna edges, make the direct numerical simulation of the electromagnetic field radiated or emitted by the antenna difficult. A heuristic approach, called the "cavity with magnetic walls" model, is widespread in the engineering literature for this type of simulation. In this work, we give a rigorous mathematical justification of this heuristic model using the matched asymptotic expansions method. Indeed, we show that the heuristic model is in fact a first-order approximation of the true electromagnetic field emitted or radiated by the antenna. We also construct a higher-order model improving the accuracy of the determination of the electromagnetic field. The construction of these models requires the handling of non-standard boundary value problems, which are thoroughly studied.
Vigoureux, Dorian. « Déconfinement de sources acoustiques par utilisation d'une méthode holographique à double information ». PhD thesis, INSA de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00759412.
Adnet, Nicolas. « Modélisation numérique du couplage Mécanique/Electromagnétisme pour l’étude de la sensibilité du comportement électromagnétique d’antennes patch aux déformations mécaniques ». Paris 10, 2012. http://www.theses.fr/2012PA100079.
This study deals with a weak formulation of the coupling between two disciplines, mechanics and electromagnetics. The interaction is modelled numerically by coupling the finite element method with a boundary integral equation, used to take into account the infinite space in which electromagnetic waves are radiated or captured by a patch antenna. The mechanical and electromagnetic fields are computed with a 3D hexahedral finite element, which led us to implement a numerical tool dedicated to the investigation of the coupling and the computation of antenna properties, for both emitting and backscattering cases. The formulation of the coupling and the numerical implementation were benchmarked numerically and experimentally. Numerical tests involving the electromagnetic behaviour of planar and conformal patch antennas were carried out; the results computed with the coupling approach are in good agreement with those reported in the electromagnetic literature. Such benchmarks could not be performed for deformed antennas, however, as no reference related to that topic was found. Thus, a few experimental tests were done in order to complete the validation of this work. A metamaterial-based antenna and a conventional patch antenna were designed and measured while subjected to bending and twisting loads. With regard to the conventional patch antenna, the comparisons between computed and measured results allowed us to conclude that the chosen numerical approach successfully models the "natural" phenomenon that relates the electromagnetic characteristics of an antenna to its geometrical state.
Ocampo, Blandon Cristian Felipe. « Patch-Based image fusion for computational photography ». Electronic Thesis or Diss., Paris, ENST, 2018. http://www.theses.fr/2018ENST0020.
The most common computational techniques to deal with the limited high dynamic range and reduced depth of field of conventional cameras are based on the fusion of images acquired with different settings. These approaches require aligned images and motionless scenes, otherwise ghost artifacts and irregular structures can arise after the fusion. The goal of this thesis is to develop patch-based techniques in order to deal with motion and misalignment for image fusion, particularly in the case of variable illumination and blur. In the first part of this work, we present a methodology for the fusion of bracketed exposure images for dynamic scenes. Our method combines a carefully crafted contrast normalization, a fast non-local combination of patches and different regularization steps. This yields an efficient way of producing contrasted and well-exposed images from hand-held captures of dynamic scenes, even in difficult cases (moving objects, non-planar scenes, optical deformations, etc.). In a second part, we propose a multifocus image fusion method that also deals with hand-held acquisition conditions and moving objects. At the core of our methodology, we propose a patch-based algorithm that corrects local geometric deformations by relying on both color and gradient orientations. Our methods were evaluated on common and new datasets created for the purpose of this work. From the experiments we conclude that our methods are consistently more robust than alternative methods to geometric distortions, illumination variations and blur. As a byproduct of our study, we also analyze the capacity of the PatchMatch algorithm to reconstruct images in the presence of blur and illumination changes, and propose different strategies to improve such reconstructions.
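The local-correction step mentioned above relies on comparing patches with both color and gradient orientations. A bare-bones version of such a patch distance, plus a brute-force search for the best-matching patch, is sketched below; this is an illustration, not the thesis algorithm (a real system would use a PatchMatch-style search rather than exhaustive scanning).

```python
import numpy as np

def patch_distance(a, b, gxa, gya, gxb, gyb, lam=0.5):
    """Mix of raw intensity difference and gradient-orientation difference,
    the latter being more robust to illumination changes and blur."""
    color = np.mean((a - b) ** 2)
    dot = gxa * gxb + gya * gyb
    norm = np.sqrt((gxa**2 + gya**2) * (gxb**2 + gyb**2)) + 1e-8
    orient = np.mean(1.0 - dot / norm)            # 0 when orientations agree
    return (1 - lam) * color + lam * orient

def best_match(imgA, imgB, i, j, patch=5, radius=4):
    """Brute-force search in imgB for the patch most similar to the
    patch of imgA centred at (i, j)."""
    gya, gxa = np.gradient(imgA)
    gyb, gxb = np.gradient(imgB)
    p = patch // 2
    s = np.s_[i - p:i + p + 1, j - p:j + p + 1]
    best, best_d = None, np.inf
    for ii in range(max(p, i - radius), min(imgB.shape[0] - p, i + radius + 1)):
        for jj in range(max(p, j - radius), min(imgB.shape[1] - p, j + radius + 1)):
            t = np.s_[ii - p:ii + p + 1, jj - p:jj + p + 1]
            d = patch_distance(imgA[s], imgB[t], gxa[s], gya[s], gxb[t], gyb[t])
            if d < best_d:
                best, best_d = (ii, jj), d
    return best

rng = np.random.default_rng(0)
A = rng.random((32, 32))
B = np.roll(A, (2, 3), axis=(0, 1))               # B is A shifted by (2, 3)
print(best_match(A, B, 15, 15))                   # expected: (17, 18)
```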
Cheng, Wenqing. « Numerical simulation of shrinkage and cracks in clayey soils on drying paths ». Electronic Thesis or Diss., Université de Lorraine, 2020. http://www.theses.fr/2020LORR0210.
Clay soil is widely distributed on the Earth's surface and, because it is cheap and readily available, has been used as a building material for a very long time. Furthermore, clay can serve not only as a natural barrier in dam cores, but also as a matrix for the storage of radioactive waste, because of its retention properties. The mechanical behavior of clay materials is complex; one of the difficulties is its sensitivity to water. During desiccation, clay soils undergo shrinkage, which can cause cracking. The aim of this thesis is, initially, to develop a numerical approach capable of reproducing the shrinkage phenomenon and the distributions of water content and suction. In a second step, based on Coussy's theory for unsaturated porous media and the mechanics of unsaturated soils, a constitutive law is proposed to describe the behavior observed during desiccation. Finally, the crack distribution is reproduced using the extended finite element method (X-FEM). The numerical simulations are based on the analysis of laboratory desiccation experiments on clay soils; the use of digital image correlation (DIC) in these experiments makes the study of the desiccation process more accurate. The experimental results show that clay soils undergo the theoretical shrinkage deformation caused by their own water loss along the drying path. In the simulation, this deformation can be related to the water content of clays through the Fredlund function. The desiccation shrinkage of clay soils is anisotropic; a shrinkage-ratio coefficient is used to describe this phenomenon in the simulation. One way to construct the constitutive law of initially saturated soft clays during drying is to use two independent stress tensors, which enables the decomposition of the total strain tensor into a strain tensor due to drying shrinkage (the part induced by suction variation) and a "mechanical" strain tensor due to the total stress variation; the mechanical strain tensor can be related to the total stress through the stiffness matrix. In fact, the resistance of initially saturated clay soil increases during desiccation, and cracking in the soil under controlled suction results from the competition between this increased resistance and the damage caused by shrinkage. The crack initiation criterion is therefore based on soil damage and resistance, while the crack propagation criterion is based on the conservation of energy. To reproduce the crack distribution with X-FEM, Weibull's law is used to account for the heterogeneous distribution of the soil. After validation of the numerical model, applications in the geotechnical field are then considered.
Duteil, Philippe. « Mise au point d'une méthode d'analyse microstructurale par voie chimique de fibres cellulosiques : application à l'étude de pâtes à papier ». Bordeaux 1, 1988. http://www.theses.fr/1988BOR10581.
Texte intégralAssal, Reda. « Méthodes de production et étude électrophysiologique de canaux ioniques : application à la pannexine1 humaine et au canal mécanosensible bactérien MscL ». Thesis, Paris 11, 2011. http://www.theses.fr/2011PA11T093.
The production of heterologous membrane proteins is notoriously difficult; this might be due to the fact that insertion of the protein into the host membrane is a limiting step. To bypass this difficulty, two modes of synthesis were tested: 1) production in a cell-free system devoid of biological membranes but supplemented with detergent or liposomes; 2) production in bacteria, with targeting of the membrane protein to inclusion bodies. Both strategies were tested for the production of the human pannexin 1 channel (Px1). The gene coding the protein was fused with an "enhancer" sequence, resulting in the addition of a peptide or short protein at the N-terminus of the protein of interest. This enhancer sequence, which is well produced in vitro or in vivo, is supposed to facilitate the translation of the protein of interest. Three enhancer sequences were chosen: 1) the small porin OmpX of E. coli, which, in addition, should target the protein to inclusion bodies when expressed in bacteria; 2) a peptide of phage T7 for expression in E. coli lysate or E. coli cells; 3) the small protein SUMO for production in a wheat germ cell-free system. In a bacterial cell-free system, neither OmpX nor T7 promoted Px1 production; Px1 was only produced when the SUMO enhancer sequence was used in the wheat germ system. In bacteria, OmpX, known to form inclusion bodies, did not promote the targeting of the fusion protein to inclusion bodies; unexpectedly, the T7 peptide was able to do so. Px1 obtained from inclusion bodies (T7his-Px1) was renatured and reconstituted in liposomes. Similarly, his6-Px1 produced in the wheat germ system was reconstituted in liposomes. Both preparations were used for electrophysiological studies (patch-clamp and planar bilayers). With the refolded T7his-Px1, channel activity reminiscent of that observed with Px1 expressed in Xenopus oocytes (Bao et al., 2004) could be detected, but only in three cases. In the case of his6-Px1, no clear channel activity could be observed. The second part of this work deals with the involvement of the periplasmic loop of the bacterial mechanosensitive channel MscL in its sensitivity to pressure. MscL has become a model system for the investigation of mechanosensitivity. Nearly all functional studies have been performed on MscL from E. coli, while the structure of the protein has been obtained from the Mycobacterium tuberculosis homologue. One functional study showed that MscL from M. tuberculosis is extremely difficult to open, gating at twice the pressure needed for E. coli MscL. The periplasmic loop is the sequence that varies most between the two homologues, being longer in E. coli than in M. tuberculosis. In order to assess the role of the periplasmic loop in the sensitivity to pressure, we compared the activity of E. coli and M. tuberculosis MscL and of a chimeric protein made of the M. tuberculosis protein in which the periplasmic loop has been exchanged for that of the E. coli channel. Unexpectedly, M. tuberculosis and E. coli MscL were observed to gate at a similar applied pressure. The chimeric protein had no functional activity. In conclusion, this study does not allow any conclusion as to the role of the loop in the sensitivity to pressure, but it clearly shows that, in contrast with the results of a previous study, there is no functional difference between E. coli and M. tuberculosis MscL.
Astoul, Julien. « Méthodes et outil pour la conception optimale d'une denture spiroconique ». Thesis, Toulouse, INSA, 2011. http://www.theses.fr/2011ISAT0036.
The performance of a helicopter is closely linked to its weight. Components are lightened to the benefit of the payload carried, which usually involves a reduction in their stiffness and thus an increase in their deformation. Transmission gearboxes are particularly affected: they must transmit high power with minimal mass. Under load, the gear axes become misaligned. The topographies of the spiral bevel gear teeth are corrected in order to tolerate this displacement and optimize the performance of the mechanism. The contact path must not touch the tooth edges, to avoid any overpressure and premature degradation. The distribution of the transmitted load and of the contact pressures must be improved. The transmission error induces vibrations and noise; therefore, it must be minimized. Studying the correction to be applied to the teeth is tedious and requires a long learning period when done manually. The work presented here fits into the scheme of an automated process: the machining and meshing of the teeth are simulated numerically. The proposed methods are simple and robust. Three different optimization problems are discussed and analyzed.
Nguyen, Minh Khoa. « Exploration efficace de chemins moléculaires par approches aussi rigides que possibles et par méthodes de planification de mouvements ». Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM013/document.
Proteins are macromolecules participating in important biophysical processes of living organisms. It has been shown that changes in protein structures can lead to changes in their functions, and such changes have been linked to some diseases, such as those related to neurodegenerative processes. Hence, an understanding of their structures and of their interactions with other molecules such as ligands is of major concern for the scientific community and the medical industry when inventing and assessing new drugs. In this dissertation, we are particularly interested in developing new methods to find, for a system made of a single protein or of a protein and a ligand, the pathways that allow changing from one state to another. During the past decade, a vast number of computational methods have been proposed to address this problem. However, these methods still face two challenges: the high dimensionality of the representation space, associated with the large number of atoms in these systems, and the complexity of the interactions between these atoms. This dissertation proposes two novel methods to efficiently find relevant pathways for such biomolecular systems. The methods are fast, and their solutions can be used, analyzed or improved with more specialized methods. The first method generates interpolation pathways for biomolecular systems using the As-Rigid-As-Possible (ARAP) principle from computer graphics. The method is robust and the generated solutions preserve at best the local rigidity of the original system. An energy-based extension of the method is also proposed, which significantly improves the solution paths. However, in scenarios requiring complex deformations, this geometric approach may still generate unnatural paths. Therefore, we propose a second method called ART-RRT, which combines the ARAP principle for reducing the dimensionality with Rapidly-exploring Random Trees from robotics for efficiently exploring possible pathways. This method not only yields a variety of pathways in reasonable time, but the pathways are also low-energy and clash-free, with the local rigidity preserved as much as possible. The mono-directional and bi-directional versions of the ART-RRT method were applied to finding ligand-unbinding and protein conformational-transition pathways, respectively. The results are in good agreement with experimental data and other state-of-the-art solutions.
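Here is a deliberately low-dimensional sketch of the RRT component of ART-RRT: a tree grown in a 2D space with a disc obstacle standing in for steric clashes. The ARAP dimensionality reduction and the energy terms of the actual method are not shown; all constants are illustrative.

```python
import math, random

random.seed(0)
OBSTACLE = ((0.5, 0.5), 0.2)    # a disc standing in for steric clashes
START, GOAL, STEP = (0.1, 0.1), (0.9, 0.9), 0.05

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def collision_free(p):
    (cx, cy), r = OBSTACLE
    return dist(p, (cx, cy)) > r

def steer(src, dst):
    """Move at most STEP from src towards dst."""
    d = dist(src, dst)
    if d < 1e-12:
        return src
    t = min(1.0, STEP / d)
    return (src[0] + t * (dst[0] - src[0]), src[1] + t * (dst[1] - src[1]))

tree = {START: None}            # node -> parent, for path reconstruction
goal_node = None
for _ in range(5000):
    # sample the free space, with a 10% bias towards the goal
    sample = GOAL if random.random() < 0.1 else (random.random(), random.random())
    nearest = min(tree, key=lambda n: dist(n, sample))
    new = steer(nearest, sample)
    if new in tree or not collision_free(new):
        continue
    tree[new] = nearest
    if dist(new, GOAL) < STEP:
        goal_node = new
        break

path, node = [], goal_node      # walk back to the root to extract the path
while node is not None:
    path.append(node)
    node = tree[node]
print("found" if goal_node else "not found", "-", len(path), "waypoints")
```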
Lucas, François. « Des métaheuristiques pour le guidage d’un solveur de contraintes dédié à la planification automatisée de véhicules ». Thesis, Paris, ENMP, 2012. http://www.theses.fr/2012ENMP0027/document.
This thesis, conducted in collaboration with Sagem Defence & Security, focuses on defining an efficient search strategy to solve vehicle path planning problems. More precisely, it addresses planning problems in which waypoints and "capacity" constraints (energy, radio bandwidth) are applied to vehicles. This document proposes an original approach mixing an Ant Colony algorithm with an existing Constraint Programming solver. The former is used to quickly solve a relaxed version of the problem. The partial solution returned is then employed to guide the search of the latter, through a Probe Backtrack mechanism, towards the most promising areas of the state space. This approach combines the speed of metaheuristics with the completeness of Constraint Programming. We show experimentally that it meets the requirements for an on-line use of the planner.
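As a sketch of the first stage, the ant colony below searches a toy waypoint graph for a short route; in the thesis, the resulting partial solution then seeds a probe for the constraint-programming search, which is not reproduced here. The graph, costs and parameters are invented for illustration.

```python
import random

random.seed(1)
GRAPH = {                       # toy waypoint graph: node -> {neighbor: cost}
    "A": {"B": 2, "C": 5}, "B": {"A": 2, "C": 2, "D": 6}, "C": {"A": 5, "B": 2, "D": 2},
    "D": {"B": 6, "C": 2, "E": 1}, "E": {"D": 1},
}
START, GOAL = "A", "E"
pher = {(u, v): 1.0 for u in GRAPH for v in GRAPH[u]}   # pheromone per directed edge
ALPHA, BETA, RHO = 1.0, 2.0, 0.1

def walk():
    """One ant builds a path at random, biased by pheromone and 1/cost."""
    path, node = [START], START
    while node != GOAL:
        options = [n for n in GRAPH[node] if n not in path]
        if not options:
            return None, float("inf")           # dead end: abandon this ant
        weights = [pher[(node, n)] ** ALPHA * (1 / GRAPH[node][n]) ** BETA
                   for n in options]
        node = random.choices(options, weights)[0]
        path.append(node)
    return path, sum(GRAPH[u][v] for u, v in zip(path, path[1:]))

best_path, best_cost = None, float("inf")
for _ in range(50):                             # colony iterations
    ants = [walk() for _ in range(10)]
    for k in pher:                              # evaporation
        pher[k] *= 1 - RHO
    for path, cost in ants:
        if path is None:
            continue
        if cost < best_cost:
            best_path, best_cost = path, cost
        for u, v in zip(path, path[1:]):        # deposit: shorter paths get more
            pher[(u, v)] += 1.0 / cost
print(best_path, best_cost)    # expected: ['A', 'B', 'C', 'D', 'E'] with cost 7
```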
Lei, Zhen. « Isogeometric shell analysis and optimization for structural dynamics ». Thesis, Ecully, Ecole centrale de Lyon, 2015. http://www.theses.fr/2015ECDL0028/document.
The isogeometric method is promising for bridging the gap between computer-aided design and computer-aided analysis: no information is lost when transferring the design model to the analysis model. This is a great advantage over the traditional finite element method, where the analysis model is only an approximation of the design model. It is also advantageous for structural optimization, since the optimal structure obtained is itself a design model. In this thesis, the research focuses on fast three-dimensional free-shape optimization with isogeometric shell elements. The related topics (the development of isogeometric shell elements, patch coupling in isogeometric analysis, and modal synthesis with isogeometric elements) are also studied. We propose a series of mixed-grid Reissner-Mindlin shell formulations. They adopt both interpolatory basis functions, from traditional FEM, and non-interpolatory basis functions, from IGA, to approximate the unknown fields. This gives a natural way to define the fiber vectors in IGA Reissner-Mindlin shell formulations, where the non-interpolatory nature of the IGA basis functions causes complexity, and it is also advantageous for applying rotational boundary conditions. A modified reduced quadrature scheme is also proposed to improve the quadrature efficiency while relieving the locking of the shell formulations. We give a method for patch coupling in isogeometric analysis, used to connect adjacent patches. The classical modal synthesis method, the fixed-interface Craig-Bampton method, is also used, together with isogeometric Kirchhoff-Love shell elements; the key problem is again the connection between adjacent patches. The modal synthesis method can largely reduce the computation time of structural dynamics analyses. This part of the work lays a foundation for the fast shape optimization of built-up structures, where the design variables are only relevant to certain substructures. We develop a fast shape optimization framework for three-dimensional thin-wall structure design, with the thin-wall structure modelled by isogeometric Kirchhoff-Love shell elements. Analytical sensitivity analysis is the key focus, since gradient-based optimization is usually faster. Most optimization problems involve two models, the design model and the analysis model: the design variables are defined in the design model, whereas the analytical sensitivity is normally obtained from the analysis model. Although it is possible to use the same model for analysis and design in the isogeometric framework, this may yield either a highly distorted optimal structure or an unreliable structural response. We developed a sensitivity mapping scheme to resolve this problem: the design sensitivity is extracted from the mesh-level sensitivity of the analysis model, obtained by discrete analytical sensitivity analysis. This provides flexibility in the definition of the design variables while ensuring the correctness of the structural response. The modal synthesis method is also used to further improve the optimization efficiency for built-up structure optimization under structural dynamics criteria.
Picciani, Massimiliano. « Rare events in many-body systems : reactive paths and reaction constants for structural transitions ». PhD thesis, Université Pierre et Marie Curie - Paris VI, 2012. http://tel.archives-ouvertes.fr/tel-00706510.
Texte intégralOuafi, Rachid. « Analyse et contrôle des réseaux de trafic urbain par la méthode de Frank-Wolfe ». Paris 6, 1988. http://www.theses.fr/1988PA066453.
Texte intégralAïssat, Romain. « Infeasible Path Detection : a Formal Model and an Algorithm ». Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS036/document.
White-box, path-based testing is widely used for the validation of programs. Given the control-flow graph (CFG) of the program under test, a test suite is generated by selecting a collection of paths of interest, then trying to provide, for each path, concrete input values that will make the program follow that path during a run. For the first step, there are various ways to define paths of interest: structural testing methods select a set of paths that fulfils coverage criteria related to elements of the graph; in random-based techniques, paths are selected according to a given probability distribution over these elements (for instance, uniform probability over all paths of length less than a given bound). Both approaches can be combined, as in structural statistical testing. The random-based methods have the advantage of providing a way to assess the quality of a test set as the minimal probability of covering an element of a criterion. The second step requires producing, for each path, its path predicate, i.e. the conjunction of the constraints over the input parameters that must hold for the system to run along that path. This is done using symbolic execution. Then, constraint solving is used to compute test data. If there are no input values for which the path predicate evaluates to true, the path is infeasible. It is very common for a program to have infeasible paths, and such paths can largely outnumber feasible paths. Infeasible paths selected during the first step do not contribute to the final test suite, and there is no better choice than to select another path, hoping for its feasibility. Handling infeasible paths is the serious limitation of structural methods, since most of the time is spent selecting useless paths. It is also a major challenge for all techniques in static analysis of programs, since the quality of the approximations they provide is lowered by data computed for paths that do not correspond to actual program runs. To overcome this problem, different methods have been proposed, such as concolic testing or random testing based on the input domain. In path-biased random testing, paths are drawn according to a given distribution and their feasibility is checked in a second step. We present an algorithm that builds better approximations of the behavior of a program than its CFG, providing a transformed CFG which still over-approximates the set of feasible paths but with fewer infeasible paths. This transformed graph is used for drawing paths at random. We modeled our graph transformations and formally proved, using the interactive theorem proving environment Isabelle/HOL, the key properties that establish the correctness of our approach. Our algorithm uses symbolic execution and constraint solving, which allow detecting whether some paths are infeasible. Since programs can contain loops, their graphs can contain cycles. In order to avoid following a cyclic path indefinitely, we enrich symbolic execution with the detection of subsumptions. A subsumption can be interpreted as the fact that some node met during the analysis is a particular case of another node met previously: there is no need to explore the successors of the subsumed node, as they are subsumed by the successors of the subsumer.
Our algorithm has been implemented in a prototype whose design closely follows this formalization, giving a good level of confidence in its correctness. In this thesis, we introduce the theoretical concepts on which our approach relies, its formalization in Isabelle/HOL, the algorithms our prototype implements, and the various experiments done and results obtained with it.
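A concrete taste of the infeasibility check described above: the path predicate below, for a hypothetical two-branch path, is handed to an SMT solver, which reports unsat, meaning no input can drive the program down that path. The program fragment is invented; the solver is z3 via its Python bindings (pip install z3-solver).

```python
from z3 import Int, Solver, And, unsat

# Hypothetical program fragment, with SSA renaming done by symbolic execution:
#     if (x > 0)  { y = x + 1; }
#     if (y < 0)  { ... }        <- consider the path taking both 'then' branches
x, y1 = Int("x"), Int("y1")
path_predicate = And(x > 0, y1 == x + 1, y1 < 0)

s = Solver()
s.add(path_predicate)
print(s.check())                 # unsat: the path is infeasible, select another one
assert s.check() == unsat
```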
Le, Foll Frank. « Etude électrophysiologique du contrôle GABAergique de la cellule mélanotrope : effets du GABA sur l'activité bioélectrique et modulation du récepteur GABA(A) par les stéroïdes ». Rouen, 1997. http://www.theses.fr/1997ROUES077.
Hadjou, Tayeb. « Analyse numérique des méthodes de points intérieurs : simulations et applications ». Rouen, 1996. http://www.theses.fr/1996ROUES062.
Texte intégralHajjine, Bouchta. « Conception, réalisation et intégration technologique d'un patch électronique : application à la surveillance des personnes âgées ». Thesis, Toulouse, INSA, 2016. http://www.theses.fr/2016ISAT0002/document.
With 30% of the French population expected to be over the age of 60 by 2035, supporting dependent elderly people is a societal challenge, with the imperative of preventing risks at home. It is in this context, with the arrival of integration technologies and the IoT, that we undertook to design and build a miniature electronic patch capable of geolocation, to trigger alarms in case of elopement, fall or wandering. One challenge is the design of antennas on flexible substrates, as key elements of the geolocation and inductive-charging functions. A modeling effort allowed the optimization of printed antennas offering a good integration/performance compromise. A cleanroom technological process was developed to fabricate bilayer antennas on a flexible substrate (polyimide). Several prototypes of the complete patch were tested and validated in an EHPAD nursing home.
Potvin, Léna. « Effet de l'isradipine sur le INa(tr), le ICa(T), et les activités calciques spontanées des cellules ventriculaires des embryons de poulet âgés de trois et dix jours, obtenu par la méthode de patch clamp et de fluorescence ». Mémoire, Université de Sherbrooke, 1993. http://hdl.handle.net/11143/12069.
Louiset, Estelle. « Implication de l'activité électrique des cellules mélanotropes de grenouille dans les processus de couplage stimulus-sécrétion : étude par la technique de patch-clamp ». Rouen, 1989. http://www.theses.fr/1989ROUES033.
Texte intégralXu, Fan. « Étude numérique des modes d'instabilités des systèmes film-substrat ». Thesis, Université de Lorraine, 2014. http://www.theses.fr/2014LORR0309/document.
Surface wrinkling of stiff thin layers attached to soft materials has been widely observed in nature, and these phenomena have raised considerable interest over the last decade. The post-buckling evolution of surface morphological instabilities often involves strong effects of geometrical nonlinearity, large rotation, large displacement, large deformation, loading-path dependence and multiple symmetry-breakings. Because of this notorious difficulty, most nonlinear buckling analyses have resorted to numerical approaches, since only a limited number of exact analytical solutions can be obtained. This thesis proposes a whole framework to study the film/substrate buckling problem numerically: from 2D to 3D modeling, from a classical to a multi-scale perspective. The main aim is to apply advanced numerical methods for multiple-bifurcation analyses to various film/substrate models, focusing especially on post-buckling evolution and surface mode transition. The models incorporate the Asymptotic Numerical Method (ANM) as a robust path-following technique, together with bifurcation indicators well adapted to the ANM, to detect a sequence of multiple bifurcations and the associated instability modes along the post-buckling evolution path. The ANM gives interactive access to semi-analytical equilibrium branches, which offers a considerable reliability advantage compared with classical iterative algorithms. Besides, an original nonlocal coupling strategy is developed to bridge classical models and multi-scale models concurrently, where the strengths of each model are fully exploited and their shortcomings overcome. The transition between different scales is discussed in a general way, which can also serve as a guide for coupling techniques involving other reduced-order models. Lastly, a general macroscopic modeling framework is developed and two specific Fourier-related models are derived from well-established classical models, which can predict pattern formation with far fewer elements and thus significantly reduce the computational cost.
Castel, Alexis. « Comportement vibratoire de structures composites intégrant des éléments amortissants ». PhD thesis, Université de Bourgogne, 2013. http://tel.archives-ouvertes.fr/tel-00983378.
Texte intégralTopart, Hélène. « Etude d’une nouvelle classe de graphes : les graphes hypotriangulés ». Thesis, Paris, CNAM, 2011. http://www.theses.fr/2011CNAM0776/document.
In this thesis, we define a new class of graphs: hypochordal graphs. These graphs satisfy the property that for any path of length two, there exists a chord or another path of length two between its two endpoints. This class can represent robust networks: we show that in such graphs, after an edge or vertex deletion, the distance between any pair of nonadjacent vertices remains unchanged. We then study several properties of this class of graphs. In particular, after introducing a family of specific partitions, we show the relations between some of these partitions and hypochordality. Moreover, thanks to these partitions, we characterise minimum hypochordal graphs, that is, among connected hypochordal graphs, those that minimise the number of edges for a given number of vertices. In a second part, we study the complexity, for hypochordal graphs, of problems that are NP-hard in the general case. We first show that the classical problems of Hamiltonian cycle, colouring, maximum clique and maximum stable set remain NP-hard for this class of graphs. We then analyse graph modification problems: deciding the minimal number of edges to add to or delete from a graph in order to obtain a hypochordal graph. We study the complexity of these problems for several classes of graphs.
Kazemipour, Alireza. « Contribution à l'étude du couplage entre antennes, application à la compatibilité électromagnétique et à la conception d'antennes et de réseaux d'antennes ». Paris, ENST, 2003. http://www.theses.fr/2002ENST0029.
Texte intégralGiraud, Remi. « Algorithmes de correspondance et superpixels pour l’analyse et le traitement d’images ». Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0771/document.
This thesis focuses on several aspects of image analysis and processing with non-local methods. These methods are based on the redundancy of information that occurs across other images, and use matching algorithms, usually patch-based, to extract and transfer information from example data. Such approaches are widely used by the computer vision community, and are generally limited by the computational time of the matching algorithm, applied at the pixel scale, and by the necessity to perform preprocessing or learning steps in order to use large databases. To address these issues, we propose several general methods, learning-free, fast, and easily applicable to different image analysis and processing tasks on natural and medical images. We introduce a matching algorithm that quickly extracts patches from a large library of 3D images, which we apply to medical image segmentation. To exploit a presegmentation into superpixels, which reduces the number of image elements in a way similar to patches, we present a new superpixel neighborhood structure. This novel descriptor makes it possible to use superpixels efficiently in non-local approaches. We also introduce an accurate and regular superpixel decomposition method, show how to evaluate this regularity in a robust manner, and show that this property is necessary to obtain good superpixel-based matching performance.
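To illustrate the superpixel-neighborhood idea, the sketch below describes each superpixel by its mean color concatenated with the mean color of its adjacent superpixels, then matches superpixels across two images by nearest descriptor. This is a crude stand-in for the descriptor of the thesis; label maps and features are assumed given.

```python
import numpy as np

def adjacency(labels):
    """Sets of spatially touching superpixel labels (4-connectivity)."""
    n = labels.max() + 1
    adj = [set() for _ in range(n)]
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        for u, v in zip(a.ravel(), b.ravel()):
            if u != v:
                adj[u].add(v)
                adj[v].add(u)
    return adj

def descriptors(image, labels):
    """Mean color per superpixel, concatenated with the average of the
    neighboring superpixels' mean colors (a 'superpatch'-like context)."""
    n = labels.max() + 1
    means = np.array([image[labels == k].mean(axis=0) for k in range(n)])
    adj = adjacency(labels)
    context = np.array([means[list(adj[k])].mean(axis=0) if adj[k] else means[k]
                        for k in range(n)])
    return np.hstack([means, context])

def match(imgA, labA, imgB, labB):
    """For each superpixel of A, index of the nearest superpixel of B."""
    dA, dB = descriptors(imgA, labA), descriptors(imgB, labB)
    d2 = ((dA[:, None, :] - dB[None, :, :]) ** 2).sum(axis=2)
    return d2.argmin(axis=1)

rng = np.random.default_rng(0)
img = rng.random((8, 8, 3))
labels = np.repeat(np.repeat(np.arange(4).reshape(2, 2), 4, 0), 4, 1)  # 4 blocks
print(match(img, labels, img, labels))   # identity mapping: [0 1 2 3]
```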
Souissi, Omar. « Planification de chemin d'hélicoptères sur une architecture hétérogène CPU FPGA haute performance ». Thesis, Valenciennes, 2015. http://www.theses.fr/2015VALE0003.
Safety issues are today a key differentiator in the aviation sector: it is a matter of ensuring the safety of expensive equipment and, above all, of saving human lives. In this context, it is necessary to offer helicopters a high level of autonomy. Although some studies have been carried out in this area, the dynamic generation of a sequence of maneuvers under hard time constraints in an unknown environment still represents a major challenge for many academic and industrial working groups. AIRBUS HELICOPTERS, as a leader in helicopter manufacturing, aims to integrate an assistance system for mission re-planning into the next generation of aircraft. The work conducted in this PhD thesis falls within a collaboration between AIRBUS HELICOPTERS and UNIVERSITE DE VALENCIENNES ET DU HAINAUT-CAMBRESIS. One of the main purposes of this work is efficient flight plan generation: for intelligent assistance systems, a new path plan must be generated in order to face emergency events such as an equipment failure or adverse weather conditions. The second major objective is the deployment of mission planning tasks onto a high-performance heterogeneous CPU/FPGA architecture in order to meet the real-time requirements of the dynamic optimization process. In the present work, we first studied efficient flight plan generation, developing efficient and effective algorithms for helicopter path planning. Then, in order to obtain a real-time system, we addressed the problem of scheduling optimization on a heterogeneous CPU/FPGA architecture by proposing several scheduling methods, including exact approaches and heuristics.
Andurand, Lewis. « Développement d'une méthode de génération de trajectoire versatile pour la réalisation de pièces par procédés DED multi-axes à partir de surfaces facettisées ». Electronic Thesis or Diss., Toulon, 2023. http://www.theses.fr/2023TOUL0001.
Additive manufacturing is a category of processes that produce mechanical parts by adding material. Directed Energy Deposition (DED) processes can be combined with multi-axis robots and are a promising option to obtain parts with complex structures. However, the path generation methods and the machine structures used remain an issue; with innovations in these areas, the industrial possibilities would increase tenfold. This thesis presents a numerical, systematic path generation method based on meshed surfaces and adapted to DED processes. The method was validated through simulations on triply periodic minimal surfaces and allows the creation of a first deposition path that meets the distance constraint between the part and the tool. This first path can be combined with region-prioritization feedback to obtain a final path adapted to the physical warnings provided by the robot, the manufacturing material and the tool.
Brunet, Antoine Pierre. « Modélisation multi-échelle de l’effet d’un générateur solaire sur la charge électrostatique d’un satellite ». Thesis, Toulouse, ISAE, 2017. http://www.theses.fr/2017ESAE0042/document.
The numerical simulation of spacecraft charging can require resolving widely different geometrical scales. In particular, solar array interconnects on the surface of solar panels have a major impact on a satellite's electrostatic equilibrium. A classical model of this effect would require a mesh refined to sub-millimetre scales on a spacecraft spanning several dozen metres, which would make the simulation computationally expensive. Moreover, the solar array interconnects can have a large positive potential relative to the space plasma, preventing the use of the classical Maxwell-Boltzmann model for the electrons in the plasma. In a first part, we developed an iterative patch method to solve the nonlinear Poisson-Boltzmann equation used in plasma simulations. This multigrid numerical scheme makes it possible to resolve the impact of small-scale components on the surface of a complete spacecraft. In a second part, we developed a corrective scheme for the Maxwell-Boltzmann model to account for the presence of charged surfaces in the simulation. We showed that this simple model is able to precisely compute the currents collected by the spacecraft surfaces.
Berge-Gras, Rébécca. « Analyse expérimentale de la propagation de fissures dans des tôles minces en Al-Li par méthodes de champs ». PhD thesis, Ecole Nationale Supérieure des Mines de Saint-Etienne, 2011. http://tel.archives-ouvertes.fr/tel-00716429.
Texte intégralSchaack, Sofiane. « Nuclear quantum effects in hydrated nanocrystals ». Electronic Thesis or Diss., Sorbonne université, 2019. https://accesdistant.sorbonne-universite.fr/login?url=https://theses-intra.sorbonne-universite.fr/2019SORUS370.pdf.
The quantum nature of nuclei yields unexpected and often paradoxical behaviors. Due to the lightness of its nucleus, hydrogen is a prime candidate for such effects. In this thesis, we focus on complex hydrated systems, namely the mineral brucite (Mg(OH)2), methane hydrate (CH4-H2O) and sodium hydroxide (NaOH), which display complex mechanisms driven by the quantum properties of the proton. Brucite exhibits the coexistence of thermally activated hopping and quantum tunneling, with opposite behaviors as pressure is increased; the unforeseen consequence is a pressure sweet spot for proton diffusion. Simultaneously, pressure gives rise to a "quantum" quasi-two-dimensional hydrogen plane, non-trivially connected with proton diffusion. Upon compression, methane hydrate displays an important increase of the intermolecular interactions between water and the enclosed methane molecules. In contrast with ice, the hydrogen-bond transition is not shifted by H/D isotopic substitution. This is explained by an important delocalization of the proton, which also triggers a transition toward a new MH-IV methane hydrate phase, stable up to 150 GPa, the highest pressure reached to date by any hydrate. Sodium hydroxide has a phase transition below room temperature at ambient pressure only in its deuterated version. This radical isotope effect can be explained by the quantum delocalization of the proton compared with the deuteron, shifting the temperature-induced phase transition of NaOD towards a pressure-induced one in NaOH.
Ahmat, Idriss Hassane Gogo. « Développement d'un banc ellipsométrique hyperfréquence pour la caractérisation de l'anisotropie des indices ». Thesis, Lyon, 2017. http://www.theses.fr/2017LYSES021.
The aim is to characterize non-transparent anisotropic samples in order to determine their refractive indices. The technique used is based on free-space oblique-transmission ellipsometry in the microwave frequency range (26 to 30 GHz) with a vector network analyzer (VNA). Its principle rests on the determination of a complex diagonal tensor, which requires measuring the transmission coefficients of the sample. Calibration of the vector network analyzer is needed to correct these coefficient values; the calibration method used to correct measurement errors is the so-called One Path Two Ports method.
Simeoni, Massimiliano. « Conception, réalisation et test de nouvelles topologies de résonateurs et filtres microondes ». Limoges, 2002. http://www.theses.fr/2002LIMO0006.
Passive microwave filters are a very important component of many modern telecommunication systems. Size and loss reduction are the main goals to achieve, and researchers around the world are looking for new structures to match these needs. The research work summarized in this thesis is set in this context: we propose new passive resonating and filtering structures showing reduced size and energy loss compared to classical ones.
Hett, Kilian. « Multi-scale and multimodal imaging biomarkers for the early detection of Alzheimer’s disease ». Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0011/document.
Alzheimer's disease (AD) is the most common dementia, involving a neurodegenerative process and causing mental dysfunction. According to the World Health Organization, the number of patients with AD will double in 20 years. Neuroimaging studies performed on AD patients have revealed that structural brain alterations are already advanced when the diagnosis is established; indeed, the clinical symptoms of AD are preceded by brain changes. This stresses the need to develop new biomarkers to detect the first stages of the disease. The development of such biomarkers can ease the design of clinical trials and therefore accelerate the development of new therapies. Over the past decades, the improvement of magnetic resonance imaging (MRI) has led to the development of new imaging biomarkers. Such biomarkers have demonstrated their relevance for computer-aided diagnosis but have shown limited performance for AD prognosis. Recently, advanced biomarkers have been proposed to improve computer-aided prognosis. Among them, patch-based grading methods have demonstrated competitive results in detecting subtle modifications at the earliest stages of AD, and have shown their ability to predict AD several years before the conversion to dementia. For these reasons, we took a particular interest in patch-based grading methods. First, we studied patch-based grading methods at different anatomical scales (whole brain, hippocampus, and hippocampal subfields). We adapted the patch-based grading method to different MRI modalities (anatomical MRI and diffusion-weighted MRI) and developed an adaptive fusion scheme. Then, we showed that patch comparisons are improved with the use of multi-directional derivative features. Finally, we proposed a new method based on graph modeling that combines information from inter-subject similarities and intra-subject variability. The conducted experiments demonstrate that our proposed method improves AD detection and prediction.
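The core of patch-based grading is compact enough to sketch: each test patch is graded by a similarity-weighted average of the labels of training patches, and grades are averaged over a structure to give a subject-level score. Synthetic vectors stand in for intensity patches here; the labels, bandwidth and cluster shift are all illustrative assumptions.

```python
import numpy as np

def patch_grading(test_patches, train_patches, train_labels, h=5.0):
    """Each test patch gets a weighted average of training labels
    (+1 = healthy, -1 = AD here); weights decay with patch distance."""
    grades = []
    for p in test_patches:
        d2 = np.sum((train_patches - p) ** 2, axis=1)
        w = np.exp(-d2 / h**2)
        grades.append(np.sum(w * train_labels) / (np.sum(w) + 1e-12))
    return np.array(grades)

rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(200, 27))   # 3x3x3 patches, flattened
disease = rng.normal(0.8, 1.0, size=(200, 27))   # shifted cluster: atrophy stand-in
train = np.vstack([healthy, disease])
labels = np.hstack([np.ones(200), -np.ones(200)])

subject = rng.normal(0.8, 1.0, size=(50, 27))    # unseen 'AD-like' subject
score = patch_grading(subject, train, labels).mean()
print(score)   # typically negative: the structure looks more 'AD' than healthy
```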
Riot, Paul. « Blancheur du résidu pour le débruitage d'image ». Thesis, Paris, ENST, 2018. http://www.theses.fr/2018ENST0006/document.
Texte intégral
We propose an advanced use of the whiteness hypothesis on the noise to improve denoising performance. We show the interest of evaluating the residual whiteness through correlation measures in multiple applications. First, in a variational denoising framework, we show that a cost function locally constraining the residual whiteness can replace the L2 norm commonly used in the white Gaussian case, while significantly improving denoising performance. This term is then complemented by a cost function constraining the residual raw moments, which are a means to control the residual distribution. In the second part of our work, we propose an alternative to the likelihood ratio (which leads to the L2 norm in the white Gaussian case) for evaluating the dissimilarity between noisy patches. The introduced metric, based on the autocorrelation of the difference of the patches, achieves better performance both for denoising and for recognizing similar patches. Finally, we tackle the no-reference quality evaluation and local model choice problems. Once again, the residual whiteness brings meaningful information for locally estimating the truthfulness of the denoising
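A minimal sketch of a whiteness measure in this spirit (our own toy version, not the cost function of the thesis): penalize the energy of the residual's normalized autocorrelation at small non-zero shifts, which stays near zero for white noise and grows when image structure leaks into the residual.

```python
import numpy as np

def whiteness_cost(residual, max_shift=3):
    """Sum of squared normalized autocorrelations of a 2-D residual
    over small non-zero shifts; close to 0 for white noise.
    np.roll wraps around at the borders, a simplification."""
    r = residual - residual.mean()
    norm = np.sum(r * r)
    cost = 0.0
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(r, dy, axis=0), dx, axis=1)
            cost += (np.sum(r * shifted) / norm) ** 2
    return cost

rng = np.random.default_rng(1)
white = rng.normal(size=(64, 64))
structured = white + 0.5 * np.roll(white, 1, axis=1)  # leaked structure
print(whiteness_cost(white), whiteness_cost(structured))
```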
Stanic, Andjelka. « Solution methods for failure analysis of massive structural elements ». Thesis, Compiègne, 2017. http://www.theses.fr/2017COMP2383/document.
Texte intégral
The thesis studies the methods for failure analysis of solids and structures, and the embedded strong-discontinuity finite elements for modelling material failure in quasi-brittle 2d solids. As for the failure analysis, the consistently linearized path-following method with a quadratic constraint equation is first presented and studied in detail. The derived path-following method can be applied in the nonlinear finite element analysis of solids and structures in order to compute a highly nonlinear solution path. However, when analysing nonlinear problems with localized material failures (i.e. material softening), standard path-following methods can fail. For this reason we derived new versions of the path-following method, with other constraint functions, better suited for problems that take into account localized material failures. One version is based on an adaptive one-degree-of-freedom constraint equation, which proved to be relatively successful in analysing problems with material softening that are modelled by embedded-discontinuity finite elements. The other versions are based on controlling the incremental plastic dissipation or plastic work in an inelastic structure. The dissipation due to crack opening and propagation, computed by e.g. embedded-discontinuity finite elements, is taken into account. The advantages and disadvantages of the presented path-following methods with different constraint equations are discussed and illustrated on a set of numerical examples. As for modelling material failure in quasi-brittle 2d solids (e.g. concrete), several embedded strong-discontinuity finite element formulations are derived and studied. The considered formulations are based either (a) on the classical displacement-based isoparametric quadrilateral finite element or (b) on a quadrilateral finite element enhanced with incompatible displacements. In order to describe crack formation and opening, the element kinematics is enhanced by four basic separation modes and the related kinematic parameters. The interpolation functions that describe the enhanced kinematics have a jump in displacements along the crack. Two possibilities were studied for deriving the operators in the local equilibrium equations that relate the bulk stresses to the tractions in the crack. For the crack embedment, the major-principal-stress criterion was used, which is suitable for quasi-brittle materials. The normal and tangential cohesive tractions in the crack are described by two uncoupled, non-associative damage-softening constitutive relations. A new crack-tracing algorithm is proposed for computing crack propagation through the mesh. It allows for crack formation in several elements in a single solution increment. Results of a set of numerical examples are provided in order to assess the performance of the derived embedded strong-discontinuity quadrilateral finite element formulations, the crack-tracing algorithm, and the solution methods
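To fix ideas on the quadratic-constraint path-following method mentioned above, here is a one-degree-of-freedom sketch in the style of the classical arc-length (Crisfield) scheme; the snap-through response fint(u) and the step size are invented for illustration.

```python
import numpy as np

def fint(u): return u**3 - 3.0*u**2 + 2.5*u   # invented snap-through response
def K(u):    return 3.0*u**2 - 6.0*u + 2.5    # its tangent stiffness

def arc_length_path(dl=0.15, psi=1.0, steps=12, fext=1.0):
    """Trace (u, lambda) through limit points using the quadratic
    (spherical) constraint du^2 + psi^2 * dlam^2 = dl^2."""
    u = lam = 0.0
    du_old, dlam_old = 0.0, 1.0          # previous increment direction
    path = [(u, lam)]
    for _ in range(steps):
        # tangent predictor, oriented along the previous increment
        t = fext / K(u)
        dlam = dl / np.sqrt(t*t + psi*psi)
        if t*du_old + psi*psi*dlam_old < 0.0:
            dlam = -dlam
        du = t * dlam
        for _ in range(30):              # Newton corrector on (du, dlam)
            R = (lam + dlam)*fext - fint(u + du)   # equilibrium residual
            g = du*du + psi*psi*dlam*dlam - dl*dl  # constraint residual
            J = np.array([[-K(u + du), fext],
                          [2.0*du, 2.0*psi*psi*dlam]])
            corr = np.linalg.solve(J, [-R, -g])
            du += corr[0]; dlam += corr[1]
            if abs(R) + abs(g) < 1e-10:
                break
        u += du; lam += dlam
        du_old, dlam_old = du, dlam
        path.append((u, lam))
    return np.array(path)

for u, lam in arc_length_path():
    print(f"u = {u:6.3f}   lambda = {lam:6.3f}")  # lambda rises, then falls past the limit point
```

The constraint row keeps the 2x2 Jacobian invertible at the limit point where the tangent stiffness vanishes, which is exactly where pure load control breaks down.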
Mora, Gómez Luis Fernando. « Bifurcations dans des systèmes avec bruit : applications aux sciences sociales et à la physique ». Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4228.
Texte intégral
Bifurcations in continuous dynamical systems, i.e., those described by ordinary differential equations, are found in a multitude of models, such as those used to study physical, chemical, biological, ecological, economic and social systems. Using this concept as a unifying idea, we apply it in this thesis to model and explore both social and physical systems. In the first part we apply tools of statistical physics and bifurcation theory to model a binary-decision problem in the social sciences. We find a scheme to predict the appearance of extreme jumps in these systems based on the notion of precursors, which act as a kind of early-warning signal for the upcoming appearance of these catastrophic events. We also solve a mathematical model of social collapse based on a logistic regrowth equation used to model population growth and how limited resources change growth patterns. This model exhibits subcritical bifurcations, and its relation to the social phenomenon of the sunk-cost effect is studied. This last phenomenon explains how past investments affect current decisions; the combination of both phenomena is used as a model to explain the disintegration of some ancient societies, based on evidence from archaeological records. In the second part of this thesis, we study macroscopic systems described by multidimensional stochastic differential equations or, equivalently, by their deterministic counterpart, the multidimensional Fokker-Planck equation. A new and alternative computational scheme based on path integrals, related to stochastic processes, is introduced in order to calculate the probability distribution function. The computations based on this path-integral scheme are performed on systems in one and two dimensions and contrasted with some soluble models, completely validating the method. We also extended this scheme to the computation of the mean exit time, finding a new expression for each computation in systems of arbitrary dimension. It is worth noting that, in the case of two-dimensional dynamical systems, the computations of both the probability distribution function and the mean exit time validated the path-integral scheme. The prospects for continuing this line of work rest on the fact that the method is valid for arbitrary non-gradient systems and noise intensities, which opens the possibility of exploring new cases for which no known methods exist
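A one-dimensional toy rendition of the path-integral idea (not the scheme developed in the thesis): the Fokker-Planck density is propagated on a grid by repeated application of the short-time Gaussian propagator of the Euler-discretized SDE dx = f(x) dt + sigma dW; the bistable drift, noise level and grid are illustrative.

```python
import numpy as np

def evolve_pdf(f, sigma, x, p0, dt, n_steps):
    """Propagate a 1-D Fokker-Planck density with the short-time
    path-integral (Chapman-Kolmogorov) kernel of dx = f(x) dt + sigma dW."""
    dx = x[1] - x[0]
    var = sigma ** 2 * dt
    # kernel[i, j]: Gaussian transition density from x[j] to x[i] in one step dt
    drifted = x + f(x) * dt
    kernel = np.exp(-(x[:, None] - drifted[None, :]) ** 2 / (2 * var))
    kernel /= np.sqrt(2 * np.pi * var)
    p = p0.copy()
    for _ in range(n_steps):
        p = kernel @ p * dx
        p /= np.trapz(p, x)   # re-normalize against grid truncation error
    return p

# Bistable drift f(x) = x - x^3; density initialized near the left well
x = np.linspace(-3.0, 3.0, 601)
p0 = np.exp(-(x + 1.0) ** 2 / 0.02)
p0 /= np.trapz(p0, x)
p = evolve_pdf(lambda y: y - y ** 3, sigma=0.5, x=x, p0=p0, dt=0.01, n_steps=500)
print(x[np.argmax(p)])  # the density peaks near the well at x = -1
```

The same repeated-kernel construction is what a path integral formalizes in the continuum limit, and it extends directly to higher dimensions at the cost of a larger transition matrix.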
Sun, Zhenzhou. « Amélioration de la localisation de défauts dans les circuits digitaux par diagnostic au niveau transistor ». Thesis, Montpellier 2, 2014. http://www.theses.fr/2014MON20018/document.
Texte intégral
The rapid growth of the semiconductor field results in an increasing complexity of digital circuits. The ability to identify the root cause of a failing digital circuit is becoming critical for defect localization. Logic diagnosis is the process of isolating the source of observed errors in a defective circuit, so that physical failure analysis can determine the root cause of those errors. Effective and precise logic diagnosis is crucial to speed up failure analysis and ultimately to improve yield. "Effect-Cause" and "Cause-Effect" are the two classical approaches to logic diagnosis. Logic diagnosis provides a list of gates as suspects; however, it may not lead to accurate results when the defect lies inside a gate. We propose a new intra-cell diagnosis method based on the "Effect-Cause" approach to improve defect localization accuracy at the transistor level. The proposed approach exploits Critical Path Tracing (CPT) applied at the transistor level. For each suspected cell, we apply CPT to every given failing test vector, yielding a preliminary list of candidates; each candidate can be a net or a transistor drain, gate or source. We then apply CPT to each passing test vector in order to narrow down the list of candidates. The proposed method gives a precise localization of the root cause of the observed errors. Moreover, it does not require the explicit use of a fault model
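The candidate pruning described above can be sketched with plain set operations (a schematic of the Effect-Cause flow, not the thesis's implementation); cpt_cone is a hypothetical routine standing in for critical path tracing inside the cell.

```python
from typing import Callable, Iterable, Set

def diagnose(failing: Iterable[str], passing: Iterable[str],
             cpt_cone: Callable[[str], Set[str]]) -> Set[str]:
    """Intra-cell candidate pruning.

    cpt_cone(vector) returns the internal nets / transistor terminals
    reached by critical path tracing for that test vector (hypothetical).
    Simplification: a candidate sensitized by a passing vector is
    assumed exonerated, since it would have produced an error.
    """
    candidates: Set[str] = set()
    for v in failing:        # every failing vector must explain the error
        cone = cpt_cone(v)
        candidates = cone if not candidates else candidates & cone
    for v in passing:        # passing vectors exonerate their sensitized nodes
        candidates -= cpt_cone(v)
    return candidates

# Toy cell with terminals named 'M1.d' (drain), 'M2.g' (gate), ...
cones = {"F1": {"M1.d", "M2.g", "net3"}, "F2": {"M1.d", "net3"},
         "P1": {"net3"}}
print(diagnose(["F1", "F2"], ["P1"], cones.__getitem__))  # {'M1.d'}
```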
Abdul, Karim Ahmad. « Procedural locomotion of multi-legged characters in complex dynamic environments : real-time applications ». Thesis, Lyon 1, 2012. http://www.theses.fr/2012LYO10181/document.
Texte intégral
Multi-legged characters such as quadrupeds, arachnids and reptiles are an essential part of any simulation, and they greatly contribute to making virtual worlds more lifelike. These multi-legged characters should be capable of moving freely and believably in order to convey a more immersive experience to users. But such locomotion animations are quite rich, owing to the complexity of the navigated environments and the variety of the animated morphologies, gaits, body sizes and proportions, etc. Another challenge when modeling such animations arises from the lack of motion data, whether because such data are difficult to obtain or impossible to capture. This thesis addresses these challenges by presenting a system capable of procedurally generating locomotion animations for dozens of multi-legged characters in real time, without any motion data. Our system is quite generic thanks to the chosen procedural techniques, and it can animate different multi-legged morphologies. On top of that, the simulated characters have more freedom while moving, as we adapt the generated animations to dynamic, complex environments in real time. The main focus is plausible movements that are at once believable and fully controllable. This controllability is one of the strengths of our system, as it gives the user the possibility to control all aspects of the generated animation, thus producing the needed style of locomotion
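As a flavour of procedural locomotion without motion data (a toy gait generator, unrelated to the actual system): each leg follows a phase-shifted duty cycle, keeping the foot planted during stance and tracing a swing arc otherwise; all gait parameters are invented.

```python
import math

def foot_height(t, phase, period=1.0, duty=0.75, clearance=0.05):
    """Vertical foot offset for one leg at time t.

    duty is the stance fraction of the cycle; during swing the foot
    traces a half-sine arc of the given clearance (metres).
    """
    s = ((t / period) + phase) % 1.0
    if s < duty:                       # stance: foot planted on the ground
        return 0.0
    swing = (s - duty) / (1.0 - duty)  # 0..1 inside the swing phase
    return clearance * math.sin(math.pi * swing)

# Lateral-sequence walk phases for a quadruped: LF, RH, RF, LH
phases = {"LF": 0.0, "RH": 0.25, "RF": 0.5, "LH": 0.75}
for leg, ph in phases.items():
    print(leg, round(foot_height(0.9, ph), 3))
```

A full system would feed such foot targets to an inverse-kinematics solver and adjust foot placement to the terrain, which is where the real-time environment adaptation mentioned above comes in.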
Moussaud, Simon. « Etude de l'implication des cellules microgliales et de l'α-synucleine dans la maladie neurodégénérative de Parkinson ». Phd thesis, Université de Bourgogne, 2011. http://tel.archives-ouvertes.fr/tel-00668186.
Texte intégralRoudaut, Yann. « Sensibilité des cellules de Merkel humaines au froid : vers un rôle des complexes de Merkel dans la sensibilité thermique cutanée ? » Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM5016.
Texte intégral
The role of Merkel cells in cutaneous sensitivity remains imprecise. They provide the continuous discharge of the receptor in response to pressure. Nevertheless, the other stimuli able to activate this complex and the mediators regulating its activity are unknown. This ignorance is partly related to the difficulty of isolating these cells, which represent only 3 to 5% of skin cells. In this work, we developed a technique for culturing Merkel cells from human skin, using cell sorting based on the expression of the CD56 receptor. We show that Merkel cells are temperature sensitive and that their cold sensitivity is associated with the TRPM8 channel. This thermal sensitivity does not modulate the discharge of the receptors during tactile stimulation. However, contacts between Merkel cells and the cutaneous Aδ and C fibres known to carry thermal sensations suggest that these receptors may also be involved in thermosensation. We propose for the first time that Merkel receptors are also temperature-sensitive receptors, providing concurrent detection of cutaneous pressure and temperature
Chelda, Samir. « Simulation du parcours des électrons élastiques dans les matériaux et structures. Application à la spectroscopie du pic élastique multi-modes MM-EPES ». Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2010. http://tel.archives-ouvertes.fr/tel-00629659.
Texte intégral
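No abstract is available here, but the title points to Monte Carlo simulation of elastic electron trajectories for elastic-peak electron spectroscopy (EPES). Below is a generic sketch of such a trajectory loop (a standard textbook scheme, in no way the simulator of the thesis): free paths are drawn from an exponential law, deflections from a screened-Rutherford angular law, and an electron contributes to the elastic peak if it exits the surface without suffering an inelastic event; all parameter values are illustrative.

```python
import numpy as np

def elastic_backscatter_fraction(n_electrons=20000, lam_el=1.0, lam_in=2.0,
                                 alpha=0.1, depth_max=50.0, seed=0):
    """Toy Monte Carlo estimate of the elastic-peak yield.

    lam_el / lam_in: elastic and inelastic mean free paths (same unit
    as depth); alpha: screening parameter of the screened-Rutherford
    angular law. All values are illustrative, not material data.
    """
    rng = np.random.default_rng(seed)
    detected = 0
    for _ in range(n_electrons):
        z, cz = 0.0, 1.0                  # depth and direction cosine (into solid)
        while 0.0 <= z < depth_max:
            step = rng.exponential(lam_el)
            # an inelastic event within the step removes the electron from the peak
            if rng.random() > np.exp(-step / lam_in):
                break
            z += cz * step
            if z < 0.0:                    # escaped still elastic -> detected
                detected += 1
                break
            # screened-Rutherford polar deflection, uniform azimuth
            r = rng.random()
            cos_t = 1 - 2 * alpha * r / (1 + alpha - r)
            sin_t = np.sqrt(max(0.0, 1 - cos_t**2))
            phi = 2 * np.pi * rng.random()
            cz = cz * cos_t + np.sqrt(max(0.0, 1 - cz**2)) * sin_t * np.cos(phi)
    return detected / n_electrons

print(elastic_backscatter_fraction())
```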