Theses / dissertations on the topic "Méthodes basées sur des exemples"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the top 50 theses / dissertations for your research on the topic "Méthodes basées sur des exemples".
Next to each source in the list of references there is an "Add to bibliography" button. Press on it, and we will generate automatically the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read its abstract online, whenever one is available in the metadata.
Browse theses / dissertations on a wide variety of disciplines and organise your bibliography correctly.
Sallaberry, Arnaud. "Visualisation d'information : de la théorie sémiotique à des exemples pratiques basés sur la représentation de graphes et d'hypergraphes". Phd thesis, Université Sciences et Technologies - Bordeaux I, 2011. http://tel.archives-ouvertes.fr/tel-00646397.
Barroso, Nicolas. "Génération d'images intermédiaires pour la création d'animations 2D stylisées à base de marques". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES083.
As part of my thesis, I am interested in the creation of traditional 2D animations, where all the images are handcrafted. Specifically, I explore how computers can assist artists in producing animations efficiently without curtailing the artistic creative process. To address this problem, my work falls within the scope of automatic methods in which the animator collaborates iteratively with the computer. I propose a method that takes two keyframe images and a series of 2D vector fields describing the motion, in image space, of the animation, and generates the intermediate images while preserving the style given as an example. My method combines two manual animation techniques, pose-to-pose and frame-by-frame animation, and provides strong control by allowing any generated image to be edited in the same way as the example images provided. My research covers several domains: motion analysis, 2D curve control, mark-based rendering, and paint simulation.
Bas, Patrick. "Méthodes de tatouage d'images basées sur le contenu". Grenoble INPG, 2000. http://www.theses.fr/2000INPG0089.
Ghoumari, Asmaa. "Métaheuristiques adaptatives d'optimisation continue basées sur des méthodes d'apprentissage". Thesis, Paris Est, 2018. http://www.theses.fr/2018PESC1114/document.
Continuous optimization problems are numerous, in economics, signal processing, neural networks, and so on. One of the best-known and most widely used families of solutions is the evolutionary algorithms, metaheuristics based on the theory of evolution that borrow stochastic mechanisms and have shown good performance in solving continuous optimization problems. The use of this family of algorithms is very popular despite the many difficulties that can be encountered in their design: these algorithms have several parameters to adjust and many operators to set according to the problem to solve. The literature describes a plethora of operators, and it becomes complicated for the user to know which one to select in order to obtain the best possible result. In this context, the main objective of this thesis is to propose methods that address these issues without deteriorating the performance of the algorithms. We propose two algorithms: a method based on the maximum a posteriori principle, which maintains probabilities over the operators to apply and regularly puts this choice back into play; and a method based on a dynamic graph of operators representing the transition probabilities between operators, which relies on a model of the objective function built by a neural network to regularly update these probabilities. Both methods are detailed and analyzed on a continuous optimization benchmark.
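A minimal sketch of the adaptive operator selection idea described above (probability matching with a lower bound; this is an illustration, not the thesis's exact scheme, and the operator names and reward values are assumptions):

```python
import random

def select_operator(probs):
    """Roulette-wheel selection of a variation operator."""
    r, acc = random.random(), 0.0
    for op, p in probs.items():
        acc += p
        if r <= acc:
            return op
    return op  # numerical safety net

def update_probs(probs, rewards, p_min=0.05, lr=0.3):
    """Probability matching: move selection probabilities toward
    operators with higher observed reward, while the p_min floor keeps
    every operator selectable so the choice is regularly reconsidered."""
    total = sum(rewards.values()) or 1.0
    k = len(probs)
    for op in probs:
        target = p_min + (1.0 - k * p_min) * rewards[op] / total
        probs[op] += lr * (target - probs[op])
    return probs

probs = {"gaussian_mutation": 0.25, "de_rand_1": 0.25,
         "pcx_crossover": 0.25, "local_search": 0.25}
rewards = {"gaussian_mutation": 0.1, "de_rand_1": 0.7,
           "pcx_crossover": 0.2, "local_search": 0.0}
print(update_probs(probs, rewards))
```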
Bui, Huyen Chi. "Méthodes d'accès basées sur le codage réseau couche physique". Thesis, Toulouse, ISAE, 2012. http://www.theses.fr/2012ESAE0031.
In the domain of satellite networks, the emergence of low-cost interactive terminals motivates the need to develop and implement multiple-access protocols able to support different user profiles. In particular, the European Space Agency (ESA) and the German Aerospace Center (DLR) have recently proposed random access protocols such as Contention Resolution Diversity Slotted ALOHA (CRDSA) and Irregular Repetition Slotted ALOHA (IRSA). These methods rely on physical-layer network coding and successive interference cancellation to resolve collisions on a Slotted ALOHA return channel. This thesis aims to improve existing random access methods. We introduce Multi-Slot Coded ALOHA (MuSCA) as a new generalization of CRDSA. Instead of transmitting copies of the same packet, the transmitter sends several parts of a codeword of an error-correcting code, each preceded by a header allowing the receiver to locate the other parts of the codeword. At the receiver side, all parts transmitted by the same user, including those interfered with by other signals, are involved in the decoding. The decoded signal is then subtracted from the total signal; the overall interference is thus reduced and the remaining signals are more likely to be decoded. Several performance analyses based on theoretical tools (capacity computation, density evolution) and on simulations are proposed. The results show a significant throughput gain compared to existing access methods, and this gain can be increased further by varying the puncturing rate of the codewords. Following the same concepts, we also propose an application of physical-layer network coding based on superposition modulation for deterministic access on a satellite communications return channel, where we observe a throughput gain compared to more conventional strategies such as time division multiplexing.
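The collision-resolution mechanism that CRDSA-type protocols exploit can be reproduced with a small Monte Carlo sketch (an idealization with perfect interference cancellation, not the MuSCA scheme itself; all parameter values are illustrative):

```python
import random

def crdsa_throughput(n_users, n_slots, copies=2, iters=20, trials=200):
    """Each user sends `copies` replicas in random slots; a replica
    alone in its slot is decoded and all its twins are cancelled,
    which may free further slots (iterative SIC)."""
    decoded_total = 0
    for _ in range(trials):
        slots = [[] for _ in range(n_slots)]
        for user in range(n_users):
            for s in random.sample(range(n_slots), copies):
                slots[s].append(user)
        decoded = set()
        for _ in range(iters):
            newly = {s[0] for s in slots if len(s) == 1 and s[0] not in decoded}
            if not newly:
                break
            decoded |= newly
            slots = [[u for u in s if u not in decoded] for s in slots]
        decoded_total += len(decoded)
    return decoded_total / (trials * n_slots)  # packets per slot

print(crdsa_throughput(n_users=60, n_slots=100))  # well above plain slotted ALOHA
```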
Cruz, Rodriguez Lidice. "Méthodes de dynamique quantique ultrarapide basées sur la propagation de trajectoires". Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30254/document.
In this thesis, different trajectory-based methods for the study of quantum mechanical phenomena are developed. The first approach is based on a global expansion of the hydrodynamic fields in Chebyshev polynomials. The scheme is used to study the one-dimensional vibrational dynamics of bound wave packets in harmonic and anharmonic potentials. Furthermore, a different methodology is developed which, starting from a parametrization previously proposed for the density, allows the construction of effective interaction potentials between the pseudo-particles representing the density. Within this approach several model problems are studied, and important quantum mechanical effects such as zero-point energy, tunneling, barrier scattering and over-barrier reflection are found to be correctly described by the ensemble of interacting trajectories. The same approximation is used to study laser-driven atom ionization. A third approach considered in this work consists in the derivation of an approximate many-body quantum potential for cryogenic Ar and Kr matrices with an embedded Na impurity. To this end, a suitable ansatz for the ground-state wave function of the solid is proposed. This allows the construction of an approximate quantum potential which is employed in molecular dynamics simulations to obtain the absorption spectra of the Na impurity isolated in the rare gas matrix.
Bois, Léo. "Méthodes numériques basées sur l'apprentissage pour les EDP hyperboliques et cinétiques". Electronic Thesis or Diss., Strasbourg, 2023. http://www.theses.fr/2023STRAD060.
Different applications of neural networks to numerical methods are explored, in the context of fluid and plasma simulation. A first application is the learning of a closure for a macroscopic model, based on data from a kinetic model; numerical results are given for the Vlasov-Poisson equation in 1D and the Boltzmann equation in 2D. A second application is the learning of problem-dependent parameters in numerical schemes: in this way, an artificial viscosity coefficient is learned for a discontinuous Galerkin scheme, and a relaxation matrix for the lattice Boltzmann method.
Silveira, Filho Geraldo. "Contributions aux méthodes directes d'estimation et de commande basées sur la vision". Phd thesis, École Nationale Supérieure des Mines de Paris, 2008. http://pastel.archives-ouvertes.fr/pastel-00005340.
Moalla, Koubaa Ikram. "Caractérisation des écritures médiévales par des méthodes statistiques basées sur la cooccurrences". Lyon, INSA, 2009. http://theses.insa-lyon.fr/publication/2009ISAL0128/these.pdf.
The purpose of this work is to develop methodologies to describe and compare ancient handwritten writings. The image features developed are global and do not require any segmentation. We propose new robust features based on second-order statistics, introducing the generalized co-occurrence matrix concept, which measures the joint probability of any information extracted from the images. This new statistical measure is an extension of the grey-level co-occurrence matrix used until now to characterize textures. We propose spatial co-occurrence matrices relative to the orientations and local curvatures of the forms, as well as parametric matrices which measure the evolution of an image under successive transformations. Because the number of descriptors obtained is very high, we design methods based on eigen co-occurrence matrices to reduce it. In the application part, we propose clustering methods for medieval writings to test our propositions; the number of groups and their contents depend on the parameters used and the methods applied. We also developed a Content-Based Image Retrieval system to search for similar writings. Within the framework of the ANR-MCD Graphem project, we elaborate methods to analyse and observe the evolution of the writings of the Middle Ages.
Gayraud, Nathalie. "Méthodes adaptatives d'apprentissage pour des interfaces cerveau-ordinateur basées sur les potentiels évoqués". Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4231/document.
Non-invasive Brain Computer Interfaces (BCIs) allow a user to control a machine using only their brain activity. The BCI system acquires electroencephalographic (EEG) signals, characterized by a low signal-to-noise ratio and an important variability both across sessions and across users. Typically, the BCI system is calibrated before each use, in a process during which the user has to perform a predefined task. This thesis studies the sources of this variability, with the aim of exploring, designing, and implementing zero-calibration methods. We review the variability of the event-related potentials (ERP), focusing mostly on a late component known as the P300. This allows us to quantify the sources of EEG signal variability. Our solution to tackle this variability is to focus on adaptive machine learning methods, in particular three transfer learning methods: Riemannian geometry, optimal transport, and ensemble learning. We propose a model of the EEG that takes variability into account. The parameters resulting from our analyses allow us to calibrate this model in a set of simulations, which we use to evaluate the performance of the aforementioned transfer learning methods. These methods are then combined and applied to experimental data. We first propose a classification method based on optimal transport; we then introduce a separability marker which we use to combine Riemannian geometry, optimal transport and ensemble learning. Our results demonstrate that the combination of several transfer learning methods produces a classifier that efficiently handles multiple sources of EEG signal variability.
Moro, Pierre. "Techniques de vérification basées sur des représentations symboliques par automates et l'abstraction guidée par les contre-exemples". Paris 7, 2008. http://www.theses.fr/2008PA077013.
This thesis studies automatic verification techniques where programs are represented symbolically by automata: the set of configurations of a program is represented by an automaton, whereas its instructions are represented by transducers. Computing the set of reachable states of such programs is an important ingredient for the verification of safety properties. This problem is undecidable, and the naive iterative computation does not terminate in general, so one has to use techniques that accelerate the computation. One of the techniques mainly studied consists in over-approximating the set of reachable states to enforce the convergence of the computation. These over-approximation techniques can introduce behaviours that do not exist in the program and for which the property is false. In that case, we check whether the counterexample is present in the real program or is due to the over-approximation; in the latter case, we use refinement techniques to eliminate the spurious counterexample from our abstraction and restart the computation. Using this abstract-check-refine loop, we propose techniques to verify sequential, non-recursive programs manipulating linked lists. We develop methods for representing memory configurations as automata and instructions as transducers, and we propose specific abstraction and refinement techniques for such representations. We then show that this kind of program can also be represented by counter automata: only a finite number of special cells of the heap are relevant, and counters indicating the number of elements between these points suffice to represent the heap of the program. We develop methods for counter automata verification using the abstract-check-refine loop for this special kind of automata, and we have tested our methods in a tool that supports the representations described previously. In another part of the thesis, we study the size of counterexamples for Büchi automata that represent the product between a linear temporal logic formula and a finite automaton. These counterexamples allow the program to be corrected, and it is important to have counterexamples as small as possible to reduce the time needed for the correction. Using SPIN's memory representation for a state, such algorithms have to optimize memory usage while keeping time complexity as small as possible.
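The abstract-check-refine loop itself is compact. A schematic sketch, with the automata operations hidden behind a hypothetical interface (post_star, find_violation, is_concrete_trace and refine are assumed names for this illustration, not an existing library):

```python
def cegar(program, prop, abstraction, max_rounds=100):
    """Abstract-check-refine: over-approximate the reachable states,
    check the property, and refine away spurious counterexamples."""
    for _ in range(max_rounds):
        # Over-approximated reachable set (e.g., automata + transducers).
        reachable = abstraction.post_star(program.initial_states())
        cex = abstraction.find_violation(reachable, prop)
        if cex is None:
            return "property holds"
        if program.is_concrete_trace(cex):
            return f"real counterexample: {cex}"
        # Spurious: introduced by the over-approximation only.
        abstraction = abstraction.refine(cex)
    return "inconclusive"
```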
Messine, Frédéric. "Méthodes d'optimisation globale basées sur l'analyse d'intervalle pour la résolution de problèmes avec contraintes". Toulouse, INPT, 1997. http://www.theses.fr/1997INPT082H.
Sossa, Hasna El. "Quelques méthodes d'éléments finis mixtes raffinées basées sur l'utilisation des champs de Raviart-Thomas". Valenciennes, 2001. https://ged.uphf.fr/nuxeo/site/esupversions/6f5acb08-fa86-418a-bee2-65cc41f30556.
In this work, we study mesh refinement for mixed finite element methods, for two types of problems: the first concerns the Laplace problem and the second the Stokes problem. For these two types of problems in non-regular domains, the methods analysed until now are those which relate to "traditional" mixed formulations, such as the velocity-pressure formulation for the Stokes problem. Here we analyse, for the Laplace equation, the dual mixed formulation in (p := grad u, u) and, for the Stokes system, the dual mixed formulation in ((σ := grad u, p), u). For the Laplace problem, we approximate, on each triangle K of the triangulation, p by a Raviart-Thomas vector field of degree 0 (resp. of degree 1) and u by a constant on each triangle K (resp. by a polynomial of degree 1). To recapture convergence of order 1, we must use a refinement of the meshes according to Raugel's method. We then treat the case of finite elements of quadrilateral type and propose appropriate regular families of quadrangulations in order to obtain the optimal order of convergence. We next investigate the Stokes system. We approximate, on each triangle K, each of the two rows of the tensor σ by a Raviart-Thomas vector field of degree 0 (resp. 1), the pressure p by a constant (resp. by a polynomial of degree 1), and u by a constant vector field (resp. by a vector field each component of which is a polynomial of degree 1). Using an appropriate mesh refinement of Raugel's type, we obtain an error estimate of order h (resp. of order h²), similar to those in the regular case. Finally, we treat finite elements of quadrilateral type, using refined families of quadrangulations analogous to those proposed for the Laplace problem, to obtain the optimal order of convergence.
Ban, Tian. "Méthodes et architectures basées sur la redondance modulaire pour circuits combinatoires tolérants aux fautes". Phd thesis, Télécom ParisTech, 2012. http://pastel.archives-ouvertes.fr/pastel-00933194.
Camelin, Nathalie. "Stratégies robustes de compréhension de la parole basées sur des méthodes de classification automatique". Avignon, 2007. http://www.theses.fr/2007AVIG0149.
The work presented in this PhD thesis deals with automatic Spoken Language Understanding (SLU) in multiple-speaker applications which accept spontaneous speech. The study consists in integrating automatic classification methods into the speech decoding and understanding processes. My work consists in adapting methods which have already shown good performance in the text domain to the particularities of the outputs of an Automatic Speech Recognition system. The main difficulty in processing this type of data is the uncertainty in the input parameters of the classifiers. Among all existing automatic classification methods, we chose to use three of them. The first is based on Semantic Classification Trees; the two other classification methods, considered among the best performing in the machine learning community, are large-margin methods based on boosting and on support vector machines. A sequence labelling method, Conditional Random Fields (CRF), is also studied and used. Two applicative frameworks are investigated. PlanResto is a human-computer dialogue application for tourism: it enables users to ask for information about a restaurant in Paris in natural language, the real-time speech understanding process consists in building a request for a database, and the consensual agreement of the different classifiers, considered as semantic experts, is used as a confidence measure. SCOrange is a spoken telephone survey corpus whose purpose is to collect messages of mobile users expressing their opinion about the customer service: the off-line speech understanding process consists in evaluating the proportions of opinions about a topic and a polarity, and the classifiers enable the extraction of users' opinions in a strategy that can reliably evaluate the distribution of opinions and their temporal evolution.
Ban, Tian. "Méthodes et architectures basées sur la redondance modulaire pour circuits combinatoires tolérants aux fautes". Electronic Thesis or Diss., Paris, ENST, 2012. http://www.theses.fr/2012ENST0038.
In this thesis, we mainly consider the representative technique Triple Modular Redundancy (TMR) as the reliability improvement technique. A voter is a necessary element in this kind of fault-tolerant architecture, and the reliability of the majority voter matters both in conventional fault-tolerant design and in novel nanoelectronic systems. The quality of the voter is therefore a bottleneck, since it directly determines the overall performance of a redundant fault-tolerant digital IP (such as a TMR configuration). The purpose of TMR is to increase the reliability of a digital IP; however, TMR can sometimes result in worse reliability than a simplex function module. We study the functional and signal reliability characteristics of a 3-input majority voter (majority voting in TMR), analyzing them by means of signal probability and the Boolean difference. It is well known that output signal probabilities are much easier to obtain than output reliability. The results derived in this thesis establish the signal probability requirements on the inputs of a majority voter, and thereby reveal the conditions under which the TMR technique pays off. This study shows the critical importance of the error characteristics of the majority voter in fault-tolerant designs. Since the majority voter in TMR is not flawless, we also propose a fault-tolerant and simple 2-level majority voter structure for TMR. This alternative voter architecture is useful in TMR schemes: the proposed solution is robust to single faults and outperforms previous ones in terms of reliability.
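The voter arithmetic referred to above is easy to reproduce. A short sketch of the 3-input majority function and the textbook TMR reliability formula, assuming independent module failures (a standard idealization, not the thesis's full signal-probability analysis):

```python
def majority(a, b, c):
    """3-input majority vote: the output agrees with at least two inputs."""
    return (a and b) or (b and c) or (a and c)

def tmr_reliability(r_module, r_voter=1.0):
    """Probability that a TMR triple with an imperfect voter is correct:
    at least two of three independent modules must work, then the voter."""
    return r_voter * (3 * r_module**2 - 2 * r_module**3)

for r in (0.4, 0.5, 0.9, 0.99):
    print(f"module={r:.2f}  simplex={r:.4f}  TMR={tmr_reliability(r):.4f}")
# For r < 0.5 the TMR output is *less* reliable than a single module,
# matching the observation that TMR can make reliability worse.
```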
Coq, Guilhelm. "Utilisation d'approches probabilistes basées sur les critères entropiques pour la recherche d'information sur supports multimédia". Poitiers, 2008. http://theses.edel.univ-poitiers.fr/theses/2008/Coq-Guilhelm/2008-Coq-Guilhelm-These.pdf.
Model selection problems appear frequently in a wide array of applicative domains such as data compression and signal or image processing. One of the most used tools to solve these problems is a real quantity to be minimized, called an information criterion or penalized likelihood criterion. The principal purpose of this thesis is to justify, on a strong mathematical foundation, the use of such a criterion in response to a given model selection problem, typically set in a signal processing context. To this end, we study the classical problem of determining the order of an autoregression. We also work on Gaussian regression, which allows the extraction of the principal harmonics from a noisy signal. In these two settings we give a criterion whose use is justified by the minimization of the cost resulting from the estimation. Multiple Markov chains model most discrete signals, such as letter sequences or grey-scale images; we consider the determination of the order of such a chain. In continuity, we study the a priori distant problem of estimating an unknown density by a histogram. For these two domains, we justify the use of a criterion by coding notions, applying a simple form of the Minimum Description Length principle. Throughout these application domains, we present alternative ways of using information criteria. These methods, called comparative, have a smaller complexity of use than the usual methods while still allowing a precise description of the model.
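A minimal sketch of penalized-likelihood order selection for an autoregression, the classical problem mentioned above (least-squares AR(k) fits scored by AIC and BIC; the simplified penalties count only the k coefficients):

```python
import numpy as np

def ar_information_criteria(x, max_order=10):
    """Return {order: (AIC, BIC)}; the selected order minimizes the
    chosen criterion."""
    x = np.asarray(x, dtype=float)
    scores = {}
    for k in range(1, max_order + 1):
        # Regressors x_{t-1}, ..., x_{t-k} for t = k .. N-1.
        X = np.column_stack([x[k - j - 1:len(x) - j - 1] for j in range(k)])
        y = x[k:]
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        n = len(y)
        sigma2 = np.mean((y - X @ coef) ** 2)
        loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
        scores[k] = (-2 * loglik + 2 * k,           # AIC
                     -2 * loglik + k * np.log(n))   # BIC
    return scores
```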
Giraldi, Loïc. "Contributions aux méthodes de calcul basées sur l'approximation de tenseurs et applications en mécanique numérique". Phd thesis, Ecole centrale de nantes - ECN, 2012. http://tel.archives-ouvertes.fr/tel-00861986.
Hassani, Bertrand Kian. "Quantification des risques opérationnels : méthodes efficientes de calcul de capital basées sur des données internes". Paris 1, 2011. http://www.theses.fr/2011PA010009.
Asse, Abdallah. "Aide au diagnostic industriel par des méthodes basées sur la théorie des sous-ensembles flous". Valenciennes, 1985. https://ged.uphf.fr/nuxeo/site/esupversions/c72e776b-0420-445e-bc8f-063e67804dad.
Legendre, Sylvie. "Méthodes d'inspection non destructive par ultrasons de réservoirs d'hydrogène basées sur la transformée en ondelettes". Thèse, Université du Québec à Trois-Rivières, 2000. http://depot-e.uqtr.ca/6648/1/000671506.pdf.
Nguyen, Van Quang. "Méthodes d'éclatement basées sur les distances de Bregman pour les inclusions monotones composites et l'optimisation". Thesis, Paris 6, 2015. http://www.theses.fr/2015PA066183/document.
The goal of this thesis is to design splitting methods based on Bregman distances for solving composite monotone inclusions in reflexive real Banach spaces. These results allow us to extend many techniques that were so far limited to Hilbert spaces. Furthermore, even when restricted to Euclidean spaces, they provide new splitting methods that may be more advantageous numerically than the classical methods based on the Euclidean distance. Numerical applications in image processing are proposed.
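The Bregman distance underlying these splitting methods is one line of code; the squared Euclidean distance and the Kullback-Leibler divergence both arise as special cases (a sketch, with f assumed convex and differentiable):

```python
import numpy as np

def bregman(f, grad_f, x, y):
    """D_f(x, y) = f(x) - f(y) - <grad f(y), x - y>."""
    return f(x) - f(y) - np.dot(grad_f(y), x - y)

sq = lambda v: 0.5 * np.dot(v, v)          # f = 0.5 ||.||^2
sq_grad = lambda v: v
negent = lambda p: np.sum(p * np.log(p))   # negative entropy on the simplex
negent_grad = lambda p: np.log(p) + 1.0

x = np.array([0.2, 0.5, 0.3])
y = np.array([0.3, 0.3, 0.4])
print(bregman(sq, sq_grad, x, y))          # = 0.5 * ||x - y||^2
print(bregman(negent, negent_grad, x, y))  # = KL(x || y) for distributions
```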
Sy, Kombossé. "Étude et développement de méthodes de caractérisation de défauts basées sur les reconstructions ultrasonores TFM". Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLS040/document.
In non-destructive testing, with a view to improving defect images and simplifying their interpretation by non-specialized operators, new ultrasonic imaging methods such as TFM (Total Focusing Method) imaging have emerged in recent years as an alternative to conventional imaging methods. They offer realistic images of defects and, from a single acquisition, provide a large number of images, each of which can carry different and complementary information on the characteristics of the same defect. When properly selected, these images are easier to analyze, present less risk of misinterpretation, and allow faster defect characterization by less specialized operators. However, for industrial deployment, it remains necessary to strengthen the robustness and ease of implementation of these imaging techniques. The work carried out during this thesis produced new tools that improve the characterization of defects by TFM imaging in terms of position, orientation and sizing.
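TFM itself is a delay-and-sum over the Full Matrix Capture (FMC). A compact sketch of the standard algorithm under simplifying assumptions (direct contact, constant sound velocity; the Hilbert envelope and apodization used in practice are omitted):

```python
import numpy as np

def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
    """fmc[tx, rx, t]: A-scan matrix; elem_x: element x-coordinates (m);
    grid_x, grid_z: image grid (m); c: velocity (m/s); fs: sampling (Hz)."""
    n_el = len(elem_x)
    image = np.zeros((len(grid_z), len(grid_x)))
    rx = np.arange(n_el)
    for iz, z in enumerate(grid_z):
        for ix, x in enumerate(grid_x):
            tof = np.sqrt((elem_x - x) ** 2 + z ** 2) / c  # one-way times
            for tx in range(n_el):
                idx = np.round((tof[tx] + tof) * fs).astype(int)
                ok = idx < fmc.shape[2]
                image[iz, ix] += np.sum(fmc[tx, rx[ok], idx[ok]])
    return np.abs(image)
```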
Horstmann, Tobias. "Méthodes numériques hybrides basées sur une approche Boltzmann sur réseau en vue de l'application aux maillages non-uniformes". Thesis, Lyon, 2018. http://www.theses.fr/2018LYSEC027/document.
Despite the inherent efficiency and low dissipative behaviour of the standard lattice Boltzmann method (LBM), which relies on a two-step stream-and-collide algorithm, a major drawback of this approach is its restriction to uniform Cartesian grids. The adaptation of the discretization step to varying fluid dynamic scales is usually achieved by multi-scale lattice Boltzmann schemes, in which the computational domain is decomposed into multiple uniform subdomains with different spatial resolutions. For the sake of connectivity, the resolution factor of adjacent subdomains has to be a multiple of two, introducing an abrupt change of the space-time discretization step at the interface that is prone to trigger instabilities and to generate spurious noise sources that contaminate the expected physical pressure signal. In the present PhD thesis, we first elucidate the subject of mesh refinement in the standard lattice Boltzmann method and point out challenges and potential sources of error. Subsequently, we propose a novel hybrid lattice Boltzmann method (HLBM) that combines the stream-and-collide algorithm with an Eulerian flux-balance algorithm obtained from a finite-volume discretization of the discrete-velocity Boltzmann equations. The interest of a hybrid lattice Boltzmann method is the pairing of efficiency and low numerical dissipation with an increase in geometrical flexibility: the HLBM allows for non-uniform grids. On 2D periodic test cases, it is shown that such an approach constitutes a valuable alternative to multi-scale lattice Boltzmann schemes by allowing local mesh refinement of type H; the HLBM properly resolves aerodynamics and aeroacoustics in the interface regions. A further part of the presented work examines the coupling of the stream-and-collide algorithm with a finite-volume formulation of the isothermal Navier-Stokes equations. Such an approach has the advantage that the number of equations of the finite-volume solver is reduced; in addition, stability is increased thanks to a more favorable CFL condition. A major difference with the pairing of two kinetic schemes is that the coupling takes place in moment space: a novel technique is presented to inject the macroscopic solution of the Navier-Stokes solver into the stream-and-collide algorithm using a central moment collision. First results on 2D test cases show that such an algorithm is stable and feasible, and numerical results are compared with those of the previous HLBM.
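The stream-and-collide algorithm this work builds on fits in a few lines for the uniform-grid D2Q9/BGK case (lattice units with c_s² = 1/3, periodic boundaries; this is the standard scheme, not the hybrid coupling developed in the thesis):

```python
import numpy as np

c = np.array([[0,0],[1,0],[0,1],[-1,0],[0,-1],[1,1],[-1,1],[-1,-1],[1,-1]])
w = np.array([4/9] + [1/9]*4 + [1/36]*4)

def equilibrium(rho, u):
    """BGK equilibrium distributions, shape (nx, ny, 9)."""
    cu = np.einsum('qd,xyd->xyq', c, u)
    usq = np.einsum('xyd,xyd->xy', u, u)
    return rho[..., None] * w * (1 + 3*cu + 4.5*cu**2 - 1.5*usq[..., None])

def stream_and_collide(f, tau):
    """One lattice Boltzmann step: BGK collision, then streaming."""
    rho = f.sum(axis=-1)
    u = np.einsum('xyq,qd->xyd', f, c) / rho[..., None]
    f = f - (f - equilibrium(rho, u)) / tau            # collide
    for q in range(9):                                 # stream (periodic)
        f[..., q] = np.roll(f[..., q], tuple(c[q]), axis=(0, 1))
    return f

f = equilibrium(np.ones((64, 64)), np.zeros((64, 64, 2)))  # fluid at rest
f = stream_and_collide(f, tau=0.8)
```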
Kucerova, Anna. "Identification des paramètres des modèles mécaniques non-linéaires en utilisant des méthodes basées sur intelligence artificielle". Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2007. http://tel.archives-ouvertes.fr/tel-00256025.
Delest, Sébastien. "Segmentation de maillages 3D à l'aide de méthodes basées sur la ligne de partage des eaux". Phd thesis, Université François Rabelais - Tours, 2007. http://tel.archives-ouvertes.fr/tel-00211378.
We first propose a fairly broad survey of polygonal mesh segmentation methods, covering the algorithms of the two main families of methods: segmentation into surface patches and segmentation into significant parts. We concentrated our work on the watershed transform (LPE, ligne de partage des eaux) and formulated original propositions for the watershed height function, together with strategies to limit the over-segmentation that the watershed naturally produces.
Viel, Stéphane. "Méthodes de Résonance Magnétique Nucléaire, basées sur la mobilité moléculaire, appliquées à l'étude de systèmes chimiques". Aix-Marseille 3, 2004. http://www.theses.fr/2004AIX30034.
Nuclear Magnetic Resonance (NMR) is one of the most powerful analytical techniques in chemistry, and many NMR methodologies are now available to elucidate molecular structures and dynamics. In this context, Pulsed Gradient Spin Echo (PGSE) and Diffusion Ordered NMR Spectroscopy (DOSY) are two closely related NMR methodologies, based on molecular mobility, which allow molecular diffusion to be encoded into NMR data sets by means of pulsed field gradients (PFG) of the magnetic field. In this thesis, PGSE and DOSY are applied to study micelles and micellar phases, chiral molecules, and uncharged mono- and polysaccharides. Finally, a new analytical method is proposed in which High Resolution Magic Angle Spinning and chromatographic phases are combined to enhance the resolution of diffusion-edited NMR studies of model mixtures.
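A PGSE measurement reduces to fitting the Stejskal-Tanner attenuation to the observed signal decay. A sketch with synthetic data (the pulse timings delta/Delta and all numerical values are illustrative):

```python
import numpy as np
from scipy.optimize import curve_fit

GAMMA_H = 2.675e8  # 1H gyromagnetic ratio, rad s^-1 T^-1

def stejskal_tanner(g, i0, D, delta=2e-3, Delta=50e-3):
    """I = I0 exp(-D * (gamma g delta)^2 * (Delta - delta/3))."""
    b = (GAMMA_H * g * delta) ** 2 * (Delta - delta / 3)
    return i0 * np.exp(-D * b)

g = np.linspace(0.01, 0.5, 12)                 # gradient strengths, T/m
signal = stejskal_tanner(g, 1.0, 5e-10) * (1 + 0.01 * np.random.randn(12))
popt, _ = curve_fit(stejskal_tanner, g, signal, p0=[1.0, 1e-9])
print(f"fitted diffusion coefficient: {popt[1]:.2e} m^2/s")
```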
Cadet, Héloïse. "Utilisation combinée des méthodes basées sur le bruit de fond dans le cadre du microzonage sismique". Phd thesis, Grenoble 1, 2007. http://www.theses.fr/2007GRE10152.
Recent destructive earthquakes have repeatedly shown that site effects can drastically exacerbate damage. Improving the way such local hazard modifications are accounted for in earthquake risk mitigation policies is therefore a major concern, balanced however by tight economic constraints, which emphasize the need for inexpensive, though reliable, methods. Noise measurements, an original geophysical method using ambient natural and anthropic vibrations, correspond to this need. The goal of this thesis is to develop and validate physically sound methodologies based on seismic noise that allow site effects to be accounted for in a regulatory context, combining simplicity, robustness and reliability. The idea is, in a first stage, to couple information from H/V measurements (resonance frequency f0) and array measurements (shear-wave velocity, at least at shallow depth) to characterize the site conditions, and, in a second stage, to empirically develop statistical correlations between such limited site information and amplification functions, on the basis of the largest available high-quality data set, i.e., the Japanese KiK-net data. This thesis is therefore divided into two main sections dedicated, respectively, to each of these two steps. The first section is mainly targeted at proposing a field and processing protocol for the combined use of methods based on seismic noise. A series of investigations on synthetic and real data allows the identification of the key factors controlling the reliability of estimates of the resonance frequency f0 and of the mean S-wave velocity of the top z meters, Vsz (with z varying from 5 to 30 meters according to site conditions and array aperture). The proposed protocol is then intended to warrant, as much as possible, good control of these key factors. The goal of the second section is to develop a simple method for proposing a site-specific spectrum on the basis of the regional hazard and the site conditions characterized by f0 and Vsz. A subset of the KiK-net strong motion data is first selected, corresponding to nearly 500 sites with reliable P- and S-wave velocity profiles down to an average depth far larger than 50 m, and more than 4000 pairs of surface and down-hole seismic recordings. For each site, the site conditions can be characterized by reliable estimates of f0 and Vsz, and the borehole amplification function is estimated with the spectral ratio between surface and down-hole recordings. Considering the large variability of depths and velocities for the borehole sensors, these "raw" functions are then normalized with respect to a carefully chosen "standard" reference rock, and corrected for depth effects, in order to approximate the amplification function with respect to outcropping, standard rock. A statistical analysis then allows the derivation of empirical relationships between these normalized and corrected empirical amplification functions and the site conditions described by the f0 and Vsz parameters. These amplification functions lead to significantly improved ground motion estimates compared to present earthquake regulations (such as EC8): our results could be readily applied in microzonation studies, and could as well pave the way for the next generation of building codes, with new site classifications and associated amplification functions.
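The H/V measurement at the core of the first section is simple to compute. A sketch of the spectral ratio from three-component noise records (Hann-windowed averaged amplitude spectra; the smoothing and anti-trigger windowing used in practice are omitted):

```python
import numpy as np

def hv_ratio(ns, ew, ud, fs, nfft=4096):
    """H/V ratio from north-south, east-west and vertical records;
    the peak frequency is the resonance frequency estimate f0."""
    win = np.hanning(nfft)
    def avg_spectrum(x):
        segs = [x[i:i + nfft] * win for i in range(0, len(x) - nfft, nfft // 2)]
        return np.mean([np.abs(np.fft.rfft(s)) for s in segs], axis=0)
    h = np.sqrt(avg_spectrum(ns) * avg_spectrum(ew))  # geometric mean
    ratio = h / avg_spectrum(ud)
    freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
    f0 = freqs[1:][np.argmax(ratio[1:])]              # skip the DC bin
    return freqs, ratio, f0
```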
Cadet, Héloïse. "Utilisation combinée des méthodes basées sur le bruit de fond dans le cadre du microzonage sismique". Phd thesis, Université Joseph Fourier (Grenoble), 2007. http://tel.archives-ouvertes.fr/tel-00271292.
The goal of this thesis is to develop and validate operational methodologies using ambient noise for a robust and satisfactory account of site effects in a regulatory context.
This thesis comprises two main parts:
- First, the development of a protocol for the methods using ambient noise, in order to qualify the determination, at a given site, of the fundamental frequency f0 ("H/V" measurements) and of the average shear-wave velocity over the first z meters, Vsz (array measurements);
- Then, the establishment of an empirical function describing the amplification of a site as a function of the two parameters f0 and Vsz alone. This study is based on the Japanese KiK-net data: the parameters f0 and Vsz could be reliably estimated for nearly 500 sites, as could their amplification functions between surface and depth recordings. After correction for the depth effect and normalization to an amplification function relative to a "standard" outcropping rock, a statistical analysis defines the sought function, which proves appreciably better than the site coefficients proposed in current regulations such as EC8.
Delest, Sébastien. "Ségmentation de maillages 3D à l'aide de méthodes basées sur la ligne de partage des eaux". Tours, 2007. http://www.theses.fr/2007TOUR4025.
Mesh segmentation is a necessary tool for many applications: the mesh is decomposed into several regions from surface or shape information. Over the last several years, many algorithms have been proposed in this growing area, with applications in many different domains such as 3D shape matching and retrieval, compression, metamorphosis, collision detection, texture mapping, simplification, etc. First, we propose a review of mesh segmentation methods, discussing the algorithms of the two main types of methods: patch-type segmentation and part-type segmentation. We then focus on the watershed transform, propose new approaches for the height function, and present strategies to avoid the over-segmentation that the watershed naturally produces.
Chabory, Alexandre. "Modélisation électromagnétique des radômes par des techniques basées sur les faisceaux gaussiens". Toulouse 3, 2004. http://www.theses.fr/2004TOU30240.
Texto completo da fonteSchwarzenberg, Adrian. "Développement de méthodes basées sur la spectrométrie de masse pour la détection de composés organophosphorés et d'explosifs". Thesis, Paris 6, 2014. http://www.theses.fr/2014PA066598.
Over the years, the widespread use of harmful compounds has increased exponentially, which is the main reason to develop methods for the identification of dangerous compounds such as organophosphorus (OP) compounds and organic nitroaromatic explosives. The analysis of OP compounds and explosives is an important issue in homeland security, forensic and environmental sciences. To this aim, it is crucial to develop reliable, sensitive and efficient analytical methods to accurately identify these compounds. In this context, the goal of this research work was to develop accurate mass-spectrometry-based methods for their unambiguous identification. An identification tree was developed for the structural elucidation of OP compounds, and this approach was assessed on a biological matrix. Nitroaromatic explosives were also investigated, and several new findings are reported. Furthermore, the application of Direct Analysis in Real Time coupled to the Orbitrap mass spectrometer (DART-FTMS) is discussed for the fast screening and characterization of cotton swab samples obtained from military weapons.
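Accurate-mass identification of this kind rests on theoretical m/z values for candidate ions. A sketch for the deprotonated ion typically observed for nitroaromatics in negative mode (monoisotopic masses; TNT is shown only as a familiar example compound, not as a result from this work):

```python
MONO = {"C": 12.0, "H": 1.00782503, "N": 14.00307401, "O": 15.99491462}
ELECTRON = 0.00054858

def mz_deprotonated(formula):
    """[M-H]- : remove one proton (an H atom minus its electron)
    from the monoisotopic mass of the neutral molecule."""
    mass = sum(MONO[el] * n for el, n in formula.items())
    return mass - MONO["H"] + ELECTRON

tnt = {"C": 7, "H": 5, "N": 3, "O": 6}  # 2,4,6-trinitrotoluene
print(f"TNT [M-H]-: {mz_deprotonated(tnt):.4f}")  # ~226.0105
```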
Dubois, Mathieu. "Méthodes probabilistes basées sur les mots visuels pour la reconnaissance de lieux sémantiques par un robot mobile". Phd thesis, Université Paris Sud - Paris XI, 2012. http://tel.archives-ouvertes.fr/tel-00679650.
Moradi, Marani Farid. "Développement de méthodes non destructives basées sur l'acoustique non linéaire pour évaluer l'état des ouvrages en béton". Thèse, Université de Sherbrooke, 2014. http://savoirs.usherbrooke.ca/handle/11143/5357.
Torregrosa, jordan Sergio. "Approches Hybrides et Méthodes d'Intelligence Artificielle Basées sur la Simulation Numérique pour l'Optimisation des Systèmes Aérodynamiques Complexes". Electronic Thesis or Diss., Paris, HESAM, 2024. http://www.theses.fr/2024HESAE002.
The industrial design of a component is a complex, time-consuming and costly process constrained by precise physical, styling and development specifications driven by its future conditions and environment of use. Indeed, an industrial component is defined and characterized by many parameters which must be optimized to best satisfy all those specifications; the complexity of this multi-parametric constrained optimization problem is such that its analytical resolution is compromised. In the recent past, such problems were solved experimentally, by trial and error, leading to expensive and time-consuming design processes. Since the mid-20th century, with the advancement of and widespread access to increasingly powerful computing technologies, "virtual twins" (physics-based numerical simulations) became an essential tool for research and development, significantly diminishing the need for experimental measurements. However, despite the computing power available today, virtual twins are still limited by the complexity of the problem solved and present some significant deviations from reality due to the ignorance of certain underlying physics. In the late 20th century, the volume of data surged enormously, spreading massively into most fields and leading to a wide proliferation of Artificial Intelligence (AI) techniques, or "digital twins", which partially substitute for virtual twins thanks to their lower intricacy; nevertheless, they require an important training stage and can provoke some aversion since they operate as black boxes. Today, these technological evolutions have resulted in a framework where theory, experimentation, simulation and data can interact in synergy and reinforce each other. In this context, Stellantis aims to explore how AI can improve the design process of a complex aerodynamic system: an innovative cockpit air vent. To this purpose, the main goal of this thesis is to develop a parametric surrogate of the aerator geometry which outputs the norm of the velocity field at the pilot's face, in order to explore the space of possible geometries while evaluating their performance in real time. The development of such a data-based metamodel entails several conceptual problems which can be addressed with AI. First, classical regression techniques can lead to unphysical interpolation results in domains such as fluid dynamics; the proposed parametric surrogate is therefore based on Optimal Transport (OT) theory, which offers a mathematical approach to measuring distances and interpolating between general objects in a novel way. Second, the success of a data-driven model relies on the quality of the training data: experimental data are the most realistic but extremely costly and time-consuming to obtain, while numerical simulations are cheaper and faster but deviate significantly from reality. A Hybrid Twin approach based on Optimal Transport theory is therefore proposed to bridge the ignorance gap between simulation and measurement. Third, since sampling training data has become a central workload in the development of a data-based model, an Active Learning methodology is proposed to iteratively and smartly select the training points, based on the industrial objectives expected from the studied component, in order to minimize the number of needed samples. This sampling strategy maximizes the performance of the model while converging to the optimal solution of the industrial problem. Finally, reality is complex and unpredictable, and the input parameters are known only with a certain degree of uncertainty; a data-based Uncertainty Quantification methodology, based on Monte Carlo estimators and OT, is therefore proposed to take into account the propagation of uncertainties through the surrogate and to quantify their impact on its precision.
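In one dimension, the displacement interpolation that makes optimal transport attractive for such surrogates has a closed form through quantile functions. A sketch (the Gaussian samples are stand-ins for actual velocity-field data):

```python
import numpy as np

def displacement_interpolation(samples_a, samples_b, t):
    """McCann interpolation between two 1-D distributions via their
    quantile functions (exact for 1-D OT with quadratic cost)."""
    q = np.linspace(0.005, 0.995, 200)   # common quantile levels
    qa = np.quantile(samples_a, q)
    qb = np.quantile(samples_b, q)
    return (1 - t) * qa + t * qb         # quantiles of the interpolant

a = np.random.normal(2.0, 0.3, 5000)
b = np.random.normal(5.0, 1.0, 5000)
mid = displacement_interpolation(a, b, t=0.5)
print(mid.mean())  # ~3.5: mass is transported, not blended as a mixture
```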
Marques, Guillaume. "Problèmes de tournées de véhicules sur deux niveaux pour la logistique urbaine : approches basées sur les méthodes exactes de l'optimisation mathématique". Thesis, Bordeaux, 2020. http://www.theses.fr/2020BORD0199.
The main focus of this thesis is to develop exact methods, based on mathematical optimization, to solve vehicle routing problems in two-level distribution systems. In such a system, the first level involves trucks that ship goods from a distribution center to intermediate depots called satellites; the second level involves city freighters that are loaded with goods at satellites and deliver to the customers. Each customer must be visited once. The two-echelon vehicle routing problem seeks to minimize the total transportation cost in such a distribution system. The first chapter gives an overview of the branch-and-cut-and-price framework that we use throughout the thesis. The second chapter tackles the Two-Echelon Capacitated Vehicle Routing Problem. We introduce a new route-based formulation for the problem which does not use variables to determine product flows in satellites. We propose a new branching strategy which significantly decreases the size of the branch-and-bound tree. Most importantly, we suggest a new family of satellite supply inequalities, and we empirically show that they improve the quality of the dual bound at the root node of the branch-and-bound tree. Experiments reveal that our algorithm can solve all literature instances with up to 200 customers and 10 satellites; we thus double the size of instances that can be solved to optimality. The third chapter tackles the Two-Echelon Vehicle Routing Problem with Time Windows. We consider the variant with precedence constraints at the satellites: products should be delivered by an urban truck to a satellite before being loaded into a city freighter. This is a relaxation of the synchronisation variant usually considered in the literature. We consider single-trip and multi-trip variants of this problem: in the first, city freighters start from satellites and perform a single trip; in the second, city freighters start from a depot, load products at satellites, and perform several trips. We introduce a route-based formulation that involves an exponential number of constraints to ensure the precedence relations, propose a minimum-cut-based algorithm to separate these constraints, and show how they can be taken into account in the pricing problem of the column generation approach. Experiments show that our algorithm can solve instances with up to 100 customers to optimality and significantly outperforms another recent approach proposed in the literature for the single-trip variant of the problem; moreover, the "precedence relaxation" is exact for single-trip instances. The fourth chapter considers vehicle routing problems with knapsack-type constraints in the master problem. For these problems, we introduce new route load knapsack cuts and separation routines for them. We use these cuts to solve three problems to optimality: the Capacitated Vehicle Routing Problem with Capacitated Multiple Depots, the standard Location-Routing Problem, and the Vehicle Routing Problem with Time Windows and Shifts. These problems arise when the routes at the first level of the two-level distribution system are fixed. Our experiments reveal the computational advantage of our algorithms over those from the literature.
Denimal, Emmanuel. "Détection de formes compactes en imagerie : développement de méthodes cumulatives basées sur l'étude des gradients : Applications à l'agroalimentaire". Thesis, Bourgogne Franche-Comté, 2018. http://www.theses.fr/2018UBFCK006/document.
Counting chambers (Malassez, Thoma ...) are designed to allow the enumeration of cells under a microscope and the determination of their concentration, thanks to the calibrated volume of the grid appearing in the microscopic image. Manual counting has major disadvantages: subjectivity, non-repeatability, etc. Commercial automatic counting solutions exist, but their drawback is that they require a well-controlled environment, which cannot be obtained in certain studies (e.g., glycerol greatly degrades the quality of the images). The objective of the project is therefore twofold: a cell count that is automated, and robust enough to be feasible regardless of the acquisition conditions. In a first step, a method based on the Fourier transform was developed to detect, characterize and erase the grid of the counting chamber. The grid characteristics extracted by this method serve to determine a region of interest, and erasing the grid makes the cells to be counted easier to detect. To perform the count, the main problem is to obtain a cell detection method robust enough to adapt to variable acquisition conditions. Methods based on gradient accumulation were improved by adding structures that allow a finer detection of accumulation peaks. The proposed method allows accurate detection of cells and limits the appearance of false positives. The results obtained show that the combination of these two methods yields a count that is repeatable and representative of a consensus of manual counts made by operators.
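The gradient-accumulation step is Hough-like voting along gradient directions: every edge pixel of a disc of radius r points at its centre. A minimal sketch for a single known radius (no peak post-processing; parameter values are illustrative):

```python
import numpy as np

def gradient_accumulation(grad_x, grad_y, radius, grad_min=1e-3):
    """Vote at distance `radius` along +/- the gradient direction;
    accumulation peaks mark candidate cell centres."""
    h, w = grad_x.shape
    acc = np.zeros((h, w))
    mag = np.hypot(grad_x, grad_y)
    ys, xs = np.nonzero(mag > grad_min)
    for y, x in zip(ys, xs):
        for sign in (1, -1):  # dark-on-light or light-on-dark cells
            cy = int(round(y + sign * radius * grad_y[y, x] / mag[y, x]))
            cx = int(round(x + sign * radius * grad_x[y, x] / mag[y, x]))
            if 0 <= cy < h and 0 <= cx < w:
                acc[cy, cx] += mag[y, x]
    return acc
```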
Salman, Solafa. "Développement de nouvelles méthodes de préservation du bois basées sur l'utilisation combinée d'un traitement thermique et de borax". Thesis, Université de Lorraine, 2017. http://www.theses.fr/2017LORR0005/document.
Environmental pressures that emerged in France and in Europe over the last decades have substantially changed the methods for wood protection. In this context, the Biocidal Products Directive and the Biocidal Products Regulation have led to the development of more environmentally friendly preservation methods and to growing interest in non-biocidal alternatives such as thermal treatment or chemical modification. Wood heat treatment at temperatures of 180 to 220 °C leads to the chemical modification of the wood cell wall polymers, conferring new properties on the material, such as increased decay resistance and high dimensional stability. Despite these improvements, the durability of heat-treated wood is not sufficient to envisage use class 3 and 4 applications, where the wood is in contact with soil or termites. Boron compounds present fungicidal and termiticidal properties; however, they have the drawback of being very easily leached out of wood, making them unusable for applications in outdoor conditions. Wood chemical modification, carried out by impregnation with aqueous solutions (10%) of a maleic anhydride polyglycerol adduct, of polyglycerol methacrylates or of a phenol-formaldehyde resin, with or without borax, followed by heat treatment at 220 °C, has shown some improvement of the properties of thermally modified wood, particularly its resistance to termites whether leached or not.
Verdret, Yassine. "Analyse du comportement parasismique des murs à ossature bois : approches expérimentales et méthodes basées sur la performance sismique". Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0010/document.
This thesis presents a study of the seismic behavior of light timber frame walls with stapled and nailed sheathing, through experimental approaches and the development of a methodology for the application of seismic performance-based methods. The experimental work consists of three test campaigns: (1) a series of static tests on stapled and nailed connections, (2) a series of static tests on light timber frame walls, and (3) a series of dynamic tests on light timber frame walls on a shaking table. The database formed by these test results allows the examination of the strength and stiffness properties of the wall elements according to the loading conditions (strain rate, vertical load). A macro-scale model of the cyclic and dynamic behavior of such elements is also proposed, using constitutive law models. A framework for applying seismic performance-based methods (the N2 method and the MPA method) to light timber frame structures is proposed, together with a vulnerability analysis, through fragility curves, using the N2 method.
Nguyen, Van Vinh. "Méthodes exactes pour l'optimisation DC polyédrale en variables mixtes 0-1 basées sur DCA et des nouvelles coupes". INSA de Rouen, 2006. http://www.theses.fr/2006ISAM0003.
Texto completo da fonteTouzani, Samir. "Méthodes de surface de réponse basées sur la décomposition de la variance fonctionnelle et application à l'analyse de sensibilité". Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00614038.
Texto completo da fonteLonjou, Christine. "HLA et analyse de données-méthodes basées sur la vraisemblance : applications en transplantation de moëlle osseuse et en anthropologie". Toulouse 3, 1998. http://www.theses.fr/1998TOU30130.
Texto completo da fonteMoretton, Cédric. "Analyse des caramels liquides : développement et validation de nouvelles méthodes basées sur la chromatographie en phase liquide bidimensionnelle (LC-LC)". Phd thesis, Université Claude Bernard - Lyon I, 2009. http://tel.archives-ouvertes.fr/tel-00675449.
Texto completo da fonteTataie, Laila. "Méthodes simplifiées basées sur une approche quasi-statique pour l’évaluation de la vulnérabilité des ouvrages soumis à des excitations sismiques". Thesis, Lyon, INSA, 2011. http://www.theses.fr/2011ISAL0123/document.
In the context of the protection of buildings against seismic risk, simplified analysis techniques based on quasi-static pushover analysis have developed strongly over the past two decades. This thesis aims to optimize a simplified method proposed by Chopra and Goel in 2001 and adopted by the American FEMA 273 guidelines. This method is a nonlinear decoupled modal analysis, called UMRHA (Uncoupled Modal Response History Analysis) by its authors, which is mainly characterized by: modal pushover analysis according to the dominant modes of vibration of the structure; the construction of nonlinear single-degree-of-freedom systems drawn from the modal pushover curves; and the determination of the history response of the structure by combining the temporal responses associated with each mode of vibration. The decoupling of the nonlinear history responses associated with each mode is the strong assumption of the UMRHA method. In this study, the UMRHA method is improved on the following points. First of all, several nonlinear single-degree-of-freedom systems drawn from the modal pushover curves are proposed to enrich the original method, in which a simple elastic-plastic model is used: other elastic-plastic models with different envelope curves; the Takeda model, which takes into account a hysteretic behavior characteristic of structures under earthquakes; and, finally, a simplified model based on frequency degradation as a function of a damage index. The latter single-degree-of-freedom model privileges tracking the frequency degradation during the damage process of the structure over a realistic description of the hysteresis loops. The total response of the structure is obtained by summing the contributions of the nonlinear dominant modes and those of the linear non-dominant modes. Finally, the degradation of the modal shapes due to structural damage during seismic loading is taken into account in the new simplified method M-UMRHA (Modified UMRHA) proposed in this study, by generalizing the previous model of frequency degradation as a function of a damage index: the modal shape itself also becomes dependent on a damage index, the maximum displacement at the top of the structure, and the evolution of the modal shape as a function of this index is obtained directly from the modal pushover analysis. The pertinence of the new method M-UMRHA is investigated for several types of structures, using established models for simulating structures under earthquakes: a reinforced concrete frame modeled by multifibre elements with uniaxial laws under cyclic loading for concrete and steel; an infill masonry wall with diagonal bar elements resistant only in compression; and an existing building (Grenoble City Hall) with multilayer shell elements and nonlinear biaxial laws based on the concept of smeared, fixed cracks. The results obtained by the proposed simplified method are compared to the reference results derived from nonlinear response history analysis.
Tataie, Laila. "Méthodes simplifiées basées sur une approche quasi-statique pour l'évaluation de la vulnérabilité des ouvrages soumis à des excitations sismiques". Phd thesis, INSA de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00708575.
Mahboubi, Amal Kheira. "Méthodes d'extraction, de suivi temporel et de caractérisation des objets dans les vidéos basées sur des modèles polygonaux et triangulés". Nantes, 2003. http://www.theses.fr/2003NANT2036.
Texto completo da fonteBernard, Francis. "Méthodes d'analyse des données incomplètes incorporant l'incertitude attribuable aux valeurs manquantes". Mémoire, Université de Sherbrooke, 2013. http://hdl.handle.net/11143/6571.
Texto completo da fonteEl, Tannoury Charbel. "Développement d'outils de surveillance de la pression dans les pneumatiques d'un véhicule à l'aides des méthodes basées sur l'analyse spectrale et sur la synthèse d'observateurs". Phd thesis, Ecole centrale de nantes - ECN, 2012. http://tel.archives-ouvertes.fr/tel-00693340.
Texto completo da fonteLeboucher, Julien. "Développement et évaluation de méthodes d'estimation des masses segmentaires basées sur des données géométriques et sur les forces externes : comparaison de modèles anthropométriques et géométriques". Valenciennes, 2007. http://ged.univ-valenciennes.fr/nuxeo/site/esupversions/e2504d99-e61b-4455-8bb3-2c47771ac853.
Use of body segment parameters close to reality is of the utmost importance in order to obtain reliable kinetics in human motion analysis. In the majority of human movement studies, the human body is modeled as a set of rigid solids. This research aims at developing and testing two methods for the estimation of the masses of these solids, also known as segment masses. Both methods are based on the static equilibrium principle for several solids. The first method provides limb masses using the displacements of the total limb centre of mass and of the centre of pressure, the projection on the horizontal plane of the subject's total body centre of gravity. The ratio between these displacements being the same as the ratio of limb mass to total body mass, knowledge of the latter allows the calculation of the former. The second method estimates all segment masses simultaneously by solving a series of static equilibrium equations, making the same assumption that the centre of pressure is the projection of the total body centre of mass, and using segment centre of mass estimations. The interest of the new methods used in this research lies in combining individual segment centre of mass estimations from a geometrical model with material routinely used in human motion analysis in order to obtain estimates of body segment masses. The limb mass estimation method performs better than other methods at predicting a posteriori the centre of mass displacement. Some potential causes of the second method's failure are investigated through the study of the uncertainty in centre of pressure location.
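The first method's mass estimate reduces to one line once the displacements are measured. A sketch with hypothetical numbers (the limb centre of mass shift would come from the geometric model, the centre of pressure shift from a force plate):

```python
def limb_mass(total_mass, cop_shift, limb_com_shift):
    """Static equilibrium: when only the limb moves, the centre of
    pressure shifts by the limb CoM shift scaled by the mass ratio,
    so m_limb / m_total = cop_shift / limb_com_shift."""
    return total_mass * cop_shift / limb_com_shift

# 70 kg subject; CoP moves 12 mm while the limb CoM moves 210 mm.
print(limb_mass(70.0, 0.012, 0.210))  # ~4.0 kg
```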
Palagi, Emilie. "Évaluation des moteurs de recherche exploratoire : élaboration d'un corps de méthodes centrées utilisateurs, basées sur une modélisation du processus de recherche exploratoire". Thesis, Université Côte d'Azur (ComUE), 2018. http://www.theses.fr/2018AZUR4116/document.
Exploratory search systems are search engines that help users explore a topic of interest. A shortcoming of current evaluation methods is that they cannot determine whether an exploratory search system effectively helps the user perform exploratory search tasks. Indeed, the assessment cannot be the same for classic search systems (such as Google, Bing or Yahoo!) and for exploratory search systems. The complexity of the exploratory search concept and process, and the difficulty of reaching a consensus definition for them, are reflected in the difficulty of evaluating such systems: they combine several specific features and behaviors, forming an alchemy that is difficult to evaluate. The main objective of this thesis is to propose, for the designers of these systems (i.e. computer scientists), user-centered evaluation methods for exploratory search systems. These methods are based on a model of the exploratory search process, in order to help evaluators verify whether a given system effectively supports that process. Thus, after elaborating a model of the exploratory search process, we propose two model-based methods, with and without users, which can be used all along the design process. The first method, without users, can be used from the first sketch of the system; it consists of a set of heuristics of exploratory search and a procedure for using them, and we propose two tools facilitating their use: an online form and a Google Chrome plugin, CheXplore. The second method involves real end-users of exploratory search systems, who test a functional prototype or version of an exploratory search system. In this thesis, we mainly focus on two model-based elements of a customizable user-testing procedure: a protocol for the elaboration of exploratory search tasks, and a video analysis grid for the evaluation of recorded exploratory search sessions.