Dissertations on the topic "Méthodes efficaces"
Format your source in APA, MLA, Chicago, Harvard, and other citation styles
Browse the top 50 dissertations for research on the topic "Méthodes efficaces."
Next to each work in the reference list there is an "Add to bibliography" button. Use it, and we will automatically generate a bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the publication as a .pdf file and read its abstract online, when these are available in the metadata.
Browse dissertations across a variety of disciplines and organize your bibliography correctly.
Ben, Zineb Tarik. "Méthodes numériques efficaces pour la valorisation des GMWB." Palaiseau, Ecole polytechnique, 2012. http://www.theses.fr/2012EPXX0091.
Ferran, Ghislain. "Nouvelles méthodes numériques pour le traitement des sections efficaces nucléaires." Palaiseau, Ecole polytechnique, 2014. https://tel.archives-ouvertes.fr/tel-01077764/document.
Nuclear data describe how a particle interacts with matter. These data are therefore at the basis of neutron transport and reactor physics calculations. Once measured and evaluated, they are given in libraries as a list of parameters. Before they can be used in neutron transport calculations, processing is required, which includes taking several physical phenomena into account. This can be done by several software packages, such as NJOY, which all have the drawback of using old numerical methods derived from the same algorithms. For nuclear safety applications, it is important to rely on independent methods, to have a point of comparison and to isolate the effects of the processing on the final results. Moreover, it is important to properly master the processing accuracy during its different steps. The objective of this PhD is therefore to develop independent numerical methods that can guarantee nuclear data processing within a given precision, and to implement them practically, with the creation of the GAIA software. Our first step was the reconstruction of cross sections from the parameters given in libraries, with different approximations of the R-matrix theory. Reconstruction using the general formalism, without any approximation, has also been implemented, which required the development of a new method to calculate the R-matrix. Tests have been performed on all existing formalisms, including the newest one, and have shown good agreement between GAIA and NJOY. Reconstruction of angular differential cross sections directly from R-matrix parameters, using the Blatt-Biedenharn formula, has also been implemented and tested. The cross sections obtained at this point correspond to a target nucleus at absolute zero temperature. Because of thermal agitation, these cross sections are subject to a Doppler effect that is taken into account by integrating them with Solbrig's kernel. Our second step was then to calculate this integral. First, we elaborated and validated a reference method that is precise but slow. Then, we developed a new method based on the Fast Fourier Transform algorithm. Comparisons with the reference method suggest that the precision of our method is better than the one achieved with NJOY, with comparable computation times. Besides, we have adapted this method to the case where target nuclei are in a condensed state (solid or liquid). For this latter case, an alternative implementation was done to obtain cross sections by integrating the S(a,b) law that characterizes the chemical binding effect on collisions between neutrons and matter. Finally, a method was developed to generate an energy grid fine enough to allow a linear interpolation of cross sections between its points. At this point, we have at our disposal the minimum amount of information required to produce input files for the Monte-Carlo transport code MCNP. Such data have been translated into the correct format thanks to a module of NJOY. Calculations have been performed using our input files on several configurations, to demonstrate that our methods can actually be used to process modern evaluated files. In parallel, as part of a collaboration with the Institut Laue-Langevin, we have participated in the treatment of experimental measurements of the S(a,b) law for light and heavy water. With GAIA, we have combined experimental values with values from a molecular dynamics simulation, with the objective of avoiding the use of a molecular model in the domain where experimental values are available. This is only a first step, but the values obtained improve the predictions of the ILL reactor model. In conclusion, new numerical methods were developed during this PhD, and we have shown that they can be used in practical cases.
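To make the FFT idea above concrete, here is a minimal, hedged sketch (not the GAIA algorithm itself): Doppler broadening of a toy 0 K cross section by convolution with a Gaussian free-gas kernel on a uniform energy grid, where the grid, the resonance and the kernel width are all illustrative assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

# Toy 0 K cross section: a single Lorentzian resonance on a uniform grid.
E = np.linspace(1.0, 100.0, 4096)          # eV (assumed grid)
E0, gamma = 50.0, 0.5                       # assumed resonance position/width
sigma_0K = 1.0 + 100.0 / (1.0 + ((E - E0) / gamma) ** 2)

# Gaussian kernel standing in for Solbrig's kernel in the free-gas limit.
kT, A = 0.0253, 16.0                        # assumed temperature (eV), mass ratio
width = np.sqrt(4.0 * E0 * kT / A)          # Doppler width near the resonance
dE = E[1] - E[0]
x = np.arange(-10 * width, 10 * width + dE, dE)
kernel = np.exp(-x**2 / width**2)
kernel /= kernel.sum()                      # discrete normalization

# FFT-based convolution: O(N log N) instead of O(N^2) direct quadrature.
sigma_T = fftconvolve(sigma_0K, kernel, mode="same")
```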
Bonazzoli, Marcella. "Méthodes d'ordre élevé et méthodes de décomposition de domaine efficaces pour les équations de Maxwell en régime harmonique." Thesis, Université Côte d'Azur (ComUE), 2017. http://www.theses.fr/2017AZUR4067/document.
The time-harmonic Maxwell's equations present several difficulties when the frequency is large, such as the sign-indefiniteness of the variational formulation, the pollution effect and the problematic construction of iterative solvers. We propose a precise and efficient solution strategy that couples high order finite element (FE) discretizations with domain decomposition (DD) preconditioners. High order FE methods make it possible, for a given precision, to significantly reduce the number of unknowns of the linear system to be solved. DD methods are then used as preconditioners for the iterative solver: the problem defined on the global domain is decomposed into smaller problems on subdomains, which can be solved concurrently and using robust direct solvers. The design, implementation and analysis of both these methods are particularly challenging for Maxwell's equations. The FEs suited for the approximation of the electric field are the curl-conforming or edge finite elements. Here, we revisit the classical degrees of freedom (dofs) defined by Nédélec to obtain a new, more convenient expression in terms of the chosen high order basis functions. Moreover, we propose a general technique to restore duality between dofs and basis functions. We explicitly describe an implementation strategy, which we embedded in the open source language FreeFem++. Then we focus on the preconditioning of the linear system, starting with a numerical validation of a one-level overlapping Schwarz preconditioner, with impedance transmission conditions between subdomains. Finally, we investigate how two-level preconditioners recently analyzed for the Helmholtz equation behave in the Maxwell case, from both the theoretical and numerical points of view. We apply these methods to the large scale problem arising from the modeling of a microwave imaging system for the detection and monitoring of brain strokes. In this application, accuracy and computing speed are indeed of paramount importance.
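As a toy illustration of the one-level overlapping Schwarz idea mentioned above (applied here algebraically to a 1D Laplacian rather than to Maxwell's equations; the matrix, subdomains and overlap are assumptions), the sketch below applies the additive Schwarz preconditioner M^{-1} r = sum_i R_i^T A_i^{-1} R_i r:

```python
import numpy as np

n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # toy 1D Laplacian

# Overlapping index sets for two subdomains (assumed overlap of 10 unknowns).
subdomains = [np.arange(0, 60), np.arange(50, n)]

def apply_one_level_schwarz(r):
    """Solve a local problem per subdomain and sum the contributions."""
    z = np.zeros_like(r)
    for idx in subdomains:
        Ai = A[np.ix_(idx, idx)]          # local subdomain matrix
        z[idx] += np.linalg.solve(Ai, r[idx])
    return z

r = np.ones(n)
z = apply_one_level_schwarz(r)   # one preconditioner application
# Typical use: pass this operator as the preconditioner of a Krylov
# iterative solver (e.g., GMRES) instead of solving A x = b directly.
```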
Campos, Ciro Guillermo. "Développement de méthodes d'ordonnancement efficaces et appliquées dans un système de production mécanique." Thesis, Troyes, 2015. http://www.theses.fr/2015TROY0035/document.
The continuous evolution of manufacturing environments and growing customer needs lead to a faster and more efficient production process that controls an increasing number of parameters. This thesis is focused on the development of decision-making methods to improve production scheduling. The industrial partner (Norelem) produces standardized mechanical elements, so many different resource constraints (humans and tools) are present in its workshop. We study an open shop scheduling problem where a job can follow multiple production sequences, since there is no fixed production sequence, and the objective is to minimize the total flow time. In addition, multi-skilled personnel assignment and tool availability constraints are involved. Mathematical models with linear and non-linear formulations have been developed to describe the problem. Knowing the limitations of exact methods in terms of instance size, because of running times, heuristic methods have been proposed and compared. Besides that, multi-objective optimization was considered to deal with three objectives: total flow time minimization and workload balancing for both humans and machines. The efficiency of these methods was proved by several theoretical instance tests and by application to the real industrial case.
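As a hedged sketch of one ingredient of such scheduling work, the following greedy dispatching rule builds a feasible open shop schedule for a toy instance and reports its total flow time; the instance and the shortest-operation tie-breaking are assumptions, and the thesis itself relies on exact models, heuristics and personnel/tool constraints not modeled here.

```python
# Toy open shop instance: p[j][m] = processing time of job j on machine m.
p = [[4, 2, 3],
     [1, 5, 2],
     [3, 2, 4]]
n_jobs, n_machines = len(p), len(p[0])

remaining = {(j, m) for j in range(n_jobs) for m in range(n_machines)}
job_free = [0] * n_jobs        # earliest time each job is available
mach_free = [0] * n_machines   # earliest time each machine is available
completion = [0] * n_jobs

while remaining:
    # Pick the operation that can start earliest; break ties by shortest
    # processing time (SPT), a common rule for flow time objectives.
    j, m = min(remaining,
               key=lambda op: (max(job_free[op[0]], mach_free[op[1]]),
                               p[op[0]][op[1]]))
    start = max(job_free[j], mach_free[m])
    job_free[j] = mach_free[m] = start + p[j][m]
    completion[j] = job_free[j]
    remaining.remove((j, m))

print("total flow time:", sum(completion))
```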
Ranwez, Vincent. "Méthodes efficaces pour reconstruire de grandes phylogénies suivant le principe du maximum de vraisemblance." Phd thesis, Université Montpellier II - Sciences et Techniques du Languedoc, 2002. http://tel.archives-ouvertes.fr/tel-00843175.
Benki, Aalae. "Méthodes efficaces de capture de front de pareto en conception mécanique multicritère : applications industrielles." Phd thesis, Université Nice Sophia Antipolis, 2014. http://tel.archives-ouvertes.fr/tel-00959099.
Garstecki, Lukasz. "Méthodes efficaces de test de conformité fonctionelle pour les bibliothèques de programmation parallèle et distribuée." Grenoble INPG, 2004. http://www.theses.fr/2004INPG0068.
The thesis presents a complete methodology for creating conformance test suites for programming languages, libraries and APIs, with special attention to parallel and distributed programming libraries. The author started this research in the field of conformance testing for parallel and distributed programming libraries, but the Consecutive Confinements Method (CoCoM), invented by the author, turned out to be general enough to be applied to a wider class of programming libraries, languages and APIs, including both sequential and non-sequential programming models based on the imperative programming paradigm. The CoCoM methodology is based on notions defined in the international standard ISO/IEC 13210, Information Technology: Requirements and Guidelines for Test Methods Specifications and Test Method Implementations for Measuring Conformance to POSIX Standards, and attempts to extend it. CoCoM can be seen as a framework where many different methodologies and tools can be incorporated, allowing for expansion towards any specific formalism for which a supporting processing engine can be found. For the purpose of the thesis, the author developed a prototype tool called CTS Designer, which implements significant parts of the ISO/IEC 13210 standard and the CoCoM methodology, and which shall be considered the top integrating component of the dedicated formal framework postulated before.
Geffroy, Thomas. "Vers des outils efficaces pour la vérification de systèmes concurrents." Thesis, Bordeaux, 2017. http://www.theses.fr/2017BORD0848/document.
The goal of this thesis is to solve, in practice, the coverability problem in Petri nets and lossy channel systems (LCS). These systems are interesting to study because they can be used to model concurrent and distributed systems. The coverability problem in a transition system is to decide whether it is possible, from an initial state, to reach a state greater than a target state. In the first part, we discuss how to solve this problem for well-structured transition systems (WSTS), of which Petri nets and LCS are instances, and we present a general method to solve it quickly in practice. This method uses coverability invariants, which are over-approximations of the set of coverable states. The second part studies Petri nets. We present comparisons of coverability invariants, both in theory and in practice. Particular attention is paid to the combination of the classical state inequation and a simple sign analysis. LCS are the focus of the third part. We present a variant of the state inequation for LCS and two invariants that compute properties of the order in which messages are sent. Two tools, ICover and BML, were developed to solve the coverability problem in Petri nets and LCS respectively.
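To illustrate the flavor of the state inequation invariant mentioned above (a hedged sketch with assumed data, not the ICover implementation): if the continuous relaxation m0 + Cx >= target with x >= 0 is infeasible, the target is certainly not coverable.

```python
import numpy as np
from scipy.optimize import linprog

# Toy Petri net: incidence matrix C (rows: places, columns: transitions),
# initial marking m0 and target marking. All of these are assumptions.
C = np.array([[-1,  1],
              [ 1, -1],
              [ 0,  1]])
m0 = np.array([1, 0, 0])
target = np.array([0, 0, 2])

# Feasibility LP: find x >= 0 with m0 + C x >= target,
# i.e. -C x <= m0 - target. Infeasibility proves non-coverability.
res = linprog(c=np.zeros(C.shape[1]),
              A_ub=-C, b_ub=m0 - target,
              bounds=[(0, None)] * C.shape[1], method="highs")
print("coverability not excluded" if res.success
      else "target provably not coverable")
```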
Catella, Adrien. "Schémas d'intégration en temps efficaces pour la résolution numérique des équations de Maxwell instationnaires par des méthodes Galerkin discontinues d'ordre élevé en maillages non-structurés." Nice, 2008. http://www.theses.fr/2008NICE4106.
The general objective of this study is the development and assessment of efficient time integration schemes for the Discontinuous Galerkin Time Domain (DGTD) method on unstructured tetrahedral meshes, for the numerical resolution of Maxwell's equations. In the first part of this thesis, we recall Maxwell's equations and summarize the main numerical methods used to solve this system. In the second part, we present the Discontinuous Galerkin method based on centred approximations of generic order; in this chapter, we focus on explicit time schemes. In the third chapter, we detail the main part of this work, namely implicit time schemes: especially the Crank-Nicolson scheme, which is the most studied in the scientific literature, and then a scheme of order 4 obtained by the defect correction technique. We present a comparative study of both solvers (iterative and direct) for the linear system in chapter 4. For memory space reasons, we apply the implicit scheme on a subdomain only; to do this, we use a hybrid explicit/implicit scheme. In chapter 6, we present the 3D results obtained with this method. The problems considered have several million unknowns.
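As a minimal, hedged illustration of the Crank-Nicolson scheme discussed above (applied to a generic linear semi-discrete system du/dt = Au rather than to an actual DG discretization of Maxwell's equations; the matrix and step size are assumptions):

```python
import numpy as np

def crank_nicolson(A, u0, dt, n_steps):
    """(I - dt/2 A) u^{k+1} = (I + dt/2 A) u^k: unconditionally stable,
    second-order accurate; each step solves one linear system."""
    n = len(u0)
    lhs = np.eye(n) - 0.5 * dt * A
    rhs = np.eye(n) + 0.5 * dt * A
    u = u0.copy()
    for _ in range(n_steps):
        u = np.linalg.solve(lhs, rhs @ u)
    return u

# Toy skew-symmetric A (energy-conserving, like a centred-flux DG operator).
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
u = crank_nicolson(A, np.array([1.0, 0.0]), dt=0.1, n_steps=100)
```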
Averseng, Martin. "Méthodes efficaces pour la diffraction acoustique en 2 et 3 dimensions : préconditionnement sur des domaines singuliers et convolution rapide." Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLX083.
In this thesis, we are concerned with the numerical resolution of the problem of acoustic wave scattering by an obstacle in dimensions 2 and 3, with the boundary element method. In the first three chapters, we consider objects with singular geometries. We focus on the case of objects with edge singularities: first open curves in the plane, and then open surfaces in dimension 3. We present a formalism that allows us to restore the good properties that held for smooth objects. A weight function is defined on the scattering object, and the usual layer potentials (single-layer and hypersingular) are adequately rescaled by this weight function. Suitable preconditioners are proposed, which take the form of square roots of local operators. In dimension 2, we give a complete theoretical and numerical analysis of the problem. We show in particular that the weighted layer potentials belong to a class of pseudo-differential operators on open curves that we define and analyze here. The pseudo-differential calculus thus developed allows us to compute parametrices for the weighted layer potentials, which correspond to the continuous versions of our preconditioners. In dimension 3, we show how those ideas can be extended theoretically and numerically, for the particular case of scattering by an infinitely thin disk. In the last chapter, we present a new method for the rapid evaluation of discrete convolutions with radial functions in dimension 2; such convolutions represent a computational bottleneck in boundary element methods. Our algorithm relies on the non-uniform fast Fourier transform and generalizes to dimension 2 an analogous algorithm available in dimension 3, namely the sparse cardinal sine decomposition.
Bounliphone, Wacha. "Tests d’hypothèses statistiquement et algorithmiquement efficaces de similarité et de dépendance." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLC002/document.
The dissertation presents novel statistically and computationally efficient hypothesis tests for relative similarity and dependency, and for precision matrix estimation. The key methodology adopted in this thesis is the class of U-statistic estimators, which yield minimum-variance unbiased estimation of a parameter. The first part of the thesis focuses on relative similarity tests applied to the problem of model selection. Probabilistic generative models provide a powerful framework for representing data, but model selection in this generative setting can be challenging. To address this issue, we provide a novel non-parametric hypothesis test of relative similarity, testing whether a first candidate model generates a data sample significantly closer to a reference validation set than a second candidate does. Subsequently, the second part of the thesis focuses on developing a novel non-parametric statistical hypothesis test for relative dependency. Tests of dependence are important tools in statistical analysis, and several canonical tests for the existence of dependence have been developed in the literature. However, the question of whether a dependency exists at all is often secondary: determining whether one dependence is stronger than another is frequently necessary for decision making. We present a statistical test which determines whether one variable is significantly more dependent on a first target variable or on a second. Finally, a novel method for structure discovery in a graphical model is proposed. Making use of the result that zeros of a precision matrix can encode conditional independencies, we develop a test that estimates and bounds an entry of the precision matrix. Methods for structure discovery in the literature typically make restrictive distributional (e.g., Gaussian) or sparsity assumptions that may not apply to a data sample of interest. Consequently, we derive a new test that makes use of results for U-statistics and applies them to the covariance matrix, which then implies a bound on the precision matrix.
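As a hedged sketch of the U-statistic machinery underlying such relative similarity tests (kernel, bandwidth and the naive decision rule are assumptions; the thesis derives the variance estimates and the asymptotic joint distribution needed for a proper significance test), the unbiased MMD² estimator can be used to compare two candidate samples against a reference:

```python
import numpy as np

def mmd2_unbiased(X, Y, bandwidth=1.0):
    """Unbiased MMD^2 U-statistic with a Gaussian kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * bandwidth**2))
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    m, n = len(X), len(Y)
    # U-statistic: diagonal terms are excluded, which removes the bias.
    term_x = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_y = (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
    return term_x + term_y - 2.0 * Kxy.mean()

rng = np.random.default_rng(0)
ref = rng.normal(0.0, 1.0, (200, 2))        # reference validation sample
model1 = rng.normal(0.1, 1.0, (200, 2))     # sample from candidate model 1
model2 = rng.normal(1.0, 1.0, (200, 2))     # sample from candidate model 2
closer = 1 if mmd2_unbiased(model1, ref) < mmd2_unbiased(model2, ref) else 2
print("model", closer, "generates samples closer to the reference")
```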
Vincent, Thomas. "Caractérisation des solutions efficaces et algorithmes d’énumération exacts pour l’optimisation multiobjectif en variables mixtes binaires." Nantes, 2013. http://archive.bu.univ-nantes.fr/pollux/show.action?id=c984a17c-6904-454d-9b3a-e63846e9fb9b.
The purpose of this work is the exact solution of multiple objective binary mixed integer linear programmes. The mixed nature of the variables implies significant differences with purely continuous or purely discrete programmes. Thus, we propose to take these differences into account using a proper representation of the solution sets and a dedicated update procedure. These propositions allow us to adapt for the biobjective case two solution methods commonly used for combinatorial problems: the Branch & Bound algorithm and the two phase method. Several improvements are proposed, such as bound sets or visiting strategies. We introduce a new routine for the second phase of the two phase method that takes advantage of all the relevant features of the previously studied methods. In the 3-objective context, the solution set representation is extended by analogy with the biobjective case. Solution methods are extended and studied as well; in particular, the decomposition of the search area during the second phase is thoroughly described. The proposed software solution has been applied to a real-world problem: the evaluation of a vehicle choice policy, where the possible choices range from classical vehicles to electric vehicles powered by grid or solar power.
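As a small, hedged illustration of the kind of update procedure such methods need (points only; the mixed binary/continuous case in the thesis also handles segments of solutions), here is a routine maintaining a set of nondominated points for biobjective minimization:

```python
def dominates(a, b):
    """a dominates b if it is at least as good on both objectives and not equal."""
    return a[0] <= b[0] and a[1] <= b[1] and a != b

def update_nondominated(archive, candidate):
    """Insert candidate, dropping any point it dominates; reject it if
    some archived point dominates it."""
    if any(dominates(p, candidate) for p in archive):
        return archive
    return [p for p in archive if not dominates(candidate, p)] + [candidate]

archive = []
for point in [(4, 2), (1, 5), (3, 3), (2, 2), (5, 1)]:
    archive = update_nondominated(archive, point)
print(sorted(archive))   # [(1, 5), (2, 2), (5, 1)]
```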
Ben Amor, Nadia. "Développement de méthodes ab initio size-extensives et d'algorithmes efficaces pour le calcul de la corrélation électronique." Toulouse 3, 1997. http://www.theses.fr/1997TOU30246.
Poyer, Salomé. "Développement de méthodes d'analyse par spectrométrie de masse à mobilité ionique pour l'identification des analogues de saxitoxine." Rouen, 2015. http://www.theses.fr/2015ROUES053.
This thesis work focused on the separation and identification of saxitoxin analogues, natural neurotoxic compounds. Modern separation techniques such as hydrophilic interaction liquid chromatography (HILIC) and ion mobility spectrometry (IMS) coupled to mass spectrometry (MS) were developed for the analysis of saxitoxins. HILIC-MS and IMS-MS couplings were developed, showing their complementarity for the separation of the various saxitoxin analogues. The HILIC-IMS-MS coupling was then optimized and allowed fast separation of the toxin analogues. It was also repeatable when complex mixtures were injected, although less sensitive than the coupling of HILIC with a triple quadrupole instrument operated in targeted mode. The application of IMS to saxitoxins also allowed determination of the collision cross section values of each analogue. The calculation of theoretical structures permitted the determination of theoretical collision cross sections, which were correlated to the experimental values and gave access to the gas-phase conformations. Saxitoxin analogue characterization was carried out by tandem mass spectrometry of the [M+H]+, [M+Li]+, [M+Na]+, [M+K]+ and [M−H]− species. The different product ions gave information about the stability of chemical functions depending on the species studied.
Piscitelli, David. "Simulation de la pulvérisation cathodique dans les écrans à plasma." Toulouse 3, 2002. http://www.theses.fr/2002TOU30184.
Cai, Li. "Condensation et homogénéisation des sections efficaces pour les codes de transport déterministes par la méthode de Monte Carlo : Application aux réacteurs à neutrons rapides de GEN IV." Thesis, Paris 11, 2014. http://www.theses.fr/2014PA112280/document.
In the framework of neutronics research for Generation IV reactors, new core calculation tools are implemented in the APOLLO3® code system for the deterministic part. These calculation methods are based on the concept of discretized nuclear energy data (called multi-group, and generally produced by deterministic codes) and should be validated and qualified against Monte-Carlo reference calculations. This thesis aims to develop an alternative technique for producing multi-group nuclear properties with a Monte-Carlo code (TRIPOLI-4®). First, after testing the existing homogenization and condensation functionalities with the best precision obtainable nowadays, some inconsistencies were revealed. Several new multi-group parameter estimators were developed and validated for the TRIPOLI-4® code with the aid of the code itself, since it can use multi-group constants in a core calculation. Secondly, the scattering anisotropy effect, which is necessary for handling the neutron leakage case, was studied. A correction technique concerning the diagonal of the first-order moment of the scattering matrix is proposed. It is named the IGSC technique and is based on the usage of an approximate current introduced by Todorova. An improvement of this IGSC technique is then presented for geometries with strong heterogeneity. This improvement uses a more accurate current quantity, its projection on the X axis; the latter current represents the real situation better but is limited to 1D geometries. Finally, a B1 leakage model is implemented in the TRIPOLI-4® code for generating multi-group cross sections with a critical spectrum based on the fundamental mode. This leakage model is analyzed and rigorously validated by comparison with other codes, Serpent and ECCO, as well as with an analytical case. The whole development work introduced in the TRIPOLI-4® code allows producing multi-group constants which can then be used in the SNATCH core calculation solver of the PARIS code platform. The latter uses transport theory, which is indispensable for the analysis of new generation fast reactors. The principal conclusions are as follows. First, Monte-Carlo assembly calculation is an interesting way (in the sense of avoiding the difficulties of self-shielding calculations, the limited-order development of anisotropy parameters, and approximations of exact 3D geometries) to validate deterministic codes like ECCO or APOLLO3® and to produce multi-group constants for deterministic or Monte-Carlo multi-group calculation codes. Second, the results obtained so far with the multi-group constants calculated by the TRIPOLI-4® code are comparable with those produced by ECCO, but have not shown remarkable advantages.
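To illustrate the basic condensation operation underlying multi-group constant generation (a hedged sketch with toy data; the thesis estimators are computed inside a Monte-Carlo transport code), a fine-group cross section can be collapsed to a coarse-group one by flux weighting:

```python
import numpy as np

sigma_fine = np.array([2.1, 1.8, 1.5, 1.2, 0.9, 0.7])  # fine-group XS (barn)
flux_fine  = np.array([0.5, 1.0, 2.0, 2.5, 1.5, 0.5])  # fine-group flux weight
coarse_of  = np.array([0, 0, 0, 1, 1, 1])              # fine -> coarse mapping

# Flux-weighted average per coarse group preserves reaction rates.
sigma_coarse = np.array([
    np.sum(sigma_fine[coarse_of == g] * flux_fine[coarse_of == g])
    / np.sum(flux_fine[coarse_of == g])
    for g in range(coarse_of.max() + 1)
])
print(sigma_coarse)
```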
Pan, Cihui. "Diffraction électromagnétique par des réseaux et des surfaces rugueuses aléatoires : mise en œuvre deméthodes hautement efficaces pour la résolution de systèmes aux valeurs propres et de problèmesaux conditions initiales." Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLV020/document.
We study electromagnetic diffraction by gratings and random rough surfaces. The C-method is an exact method developed for this aim. It is based on Maxwell's equations in covariant form, written in a nonorthogonal coordinate system. The C-method leads to an eigenvalue problem, the solution of which gives the diffracted field. We focus on the numerical aspects of the C-method, trying to develop an efficient implementation of this exact method. For gratings, we have developed a new version of the C-method which leads to a differential system with initial conditions. This new version of the C-method can be used to study multilayer gratings with homogeneous media. We implemented high performance algorithms in the original versions of the C-method. In particular, we developed a specifically designed parallel QR algorithm for the C-method, and a spectral projection method to solve the eigenvalue problem more efficiently. Experiments have shown that the computation time can be reduced significantly.
Loiseau, Romain. "Real-World 3D Data Analysis : Toward Efficiency and Interpretability." Electronic Thesis or Diss., Marne-la-vallée, ENPC, 2023. http://www.theses.fr/2023ENPC0028.
This thesis explores new deep-learning approaches for modeling and analyzing real-world 3D data. 3D data processing is helpful for numerous high-impact applications such as autonomous driving, territory management, industrial facility monitoring, forest inventory, and biomass measurement. However, annotating and analyzing 3D data can be demanding. Specifically, matching constraints regarding computing resources or annotation efficiency is often challenging. The difficulty of interpreting and understanding the inner workings of deep learning models can also limit their adoption. The computer vision community has made significant efforts to design methods to analyze 3D data and to perform tasks such as shape classification, scene segmentation, and scene decomposition. Early automated analysis relied on hand-crafted descriptors and incorporated prior knowledge about real-world acquisitions. Modern deep learning techniques demonstrate the best performances but are often computationally expensive, rely on large annotated datasets, and have low interpretability. In this thesis, we propose contributions that address these limitations. The first contribution of this thesis is an efficient deep-learning architecture for analyzing LiDAR sequences in real time. Our approach explicitly considers the acquisition geometry of rotating LiDAR sensors, which many autonomous driving perception pipelines use. Compared to previous work, which considers complete LiDAR rotations individually, our model processes the acquisition in smaller increments. Our proposed architecture achieves accuracy on par with the best methods while reducing processing time by more than five times and model size by more than fifty times. The second contribution is a deep learning method to summarize extensive 3D shape collections with a small set of 3D template shapes. We learn end-to-end a small number of 3D prototypical shapes that are aligned and deformed to reconstruct input point clouds. The main advantage of our approach is that its representations live in 3D space and can be viewed and manipulated. They constitute a compact and interpretable representation of 3D shape collections and facilitate annotation, leading to state-of-the-art results for few-shot semantic segmentation. The third contribution further expands unsupervised analysis for parsing large real-world 3D scans into interpretable parts. We introduce a probabilistic reconstruction model to decompose an input 3D point cloud using a small set of learned prototypical shapes. Our network determines the number of prototypes to use to reconstruct each scene. We outperform state-of-the-art unsupervised methods in terms of decomposition accuracy while remaining visually interpretable. We offer significant advantages over existing approaches as our model does not require manual annotations. This thesis also introduces two open-access annotated real-world datasets, HelixNet and the Earth Parser Dataset, acquired with terrestrial and aerial LiDARs, respectively. HelixNet is the largest LiDAR autonomous driving dataset with dense annotations and provides point-level sensor metadata crucial for precisely measuring the latency of semantic segmentation methods. The Earth Parser Dataset consists of seven aerial LiDAR scenes, which can be used to evaluate the performance of 3D processing techniques in diverse environments. We hope that these datasets and reliable methods considering the specificities of real-world acquisitions will encourage further research toward more efficient and interpretable models.
Boutoux, Guillaume. "Sections efficaces neutroniques via la méthode de substitution." Phd thesis, Bordeaux 1, 2011. http://tel.archives-ouvertes.fr/tel-00654677.
Szames, Esteban Alejandro. "Few group cross section modeling by machine learning for nuclear reactor." Thesis, université Paris-Saclay, 2020. http://www.theses.fr/2020UPASS134.
Modern nuclear reactors rely on core calculations that implement a thermo-hydraulic feedback requiring accurate homogenized few-group cross sections. These describe the interactions of neutrons with matter, and are endowed with smoothness and regularity properties stemming from the underlying physical phenomena. This thesis is devoted to the modeling of these functions by industry state-of-the-art and innovative machine learning techniques. Mathematically, the subject can be defined as the analysis of convenient mapping techniques from one multi-dimensional space to another, conceptualized as the aggregated sum of these functions, whose quantity and domain depend on the simulation objectives. "Convenient" is intended in terms of computational performance, such as the model's size, evaluation speed, accuracy, robustness to numerical noise, complexity, etc., always with respect to the engineering modeling objectives that specify the multi-dimensional spaces of interest. In this thesis, a standard UO₂ PWR fuel assembly is analyzed for three state variables: burnup, fuel temperature, and boron concentration. Library storage requirements are optimized while meeting the evaluation speed and accuracy targets, in view of microscopic and macroscopic cross sections and the infinite multiplication factor. Three approximation techniques are studied. The first is state-of-the-art spline interpolation using a computationally convenient B-spline basis, which generates high-order local approximations; a full grid is used, as is usually done in industry. The second is kernel methods, a very general machine learning framework able to pose, in a normed vector space, a large variety of regression or classification problems. Kernel functions can reproduce different function spaces using an unstructured support, which is optimized with pool active learning techniques. The approximations are found through a convex optimization process simplified by the kernel trick. The intrinsically modular character of the method facilitates segregating the modeling phases: function space selection, application of numerical routines, and support optimization through active learning. The third is artificial neural networks, which are "model free" universal approximators able to approach continuous functions to an arbitrary degree without formulating explicit relations among the variables. With adequate training settings, intrinsically parallelizable multi-output networks minimize storage requirements while offering the highest evaluation speed. These strategies are compared to each other and to multi-linear interpolation on a Cartesian grid, the industry standard in core calculations. The data set, the developed tools, and the scripts are freely available under an MIT license.
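As a hedged sketch of the kernel-methods option described above (Gaussian kernel ridge regression with toy data; the kernel, its width, the regularization and the synthetic cross section are assumptions, and the thesis additionally optimizes the support with active learning):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, (50, 3))                 # normalized state variables
y = np.sin(3 * X[:, 0]) + 0.1 * X[:, 1] * X[:, 2]  # stand-in for a cross section

def gaussian_kernel(A, B, length=0.3):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * length**2))

# Ridge-regularized kernel regression: solve (K + lam I) alpha = y once,
# then evaluation is a single kernel product (smooth and fast).
lam = 1e-6
alpha = np.linalg.solve(gaussian_kernel(X, X) + lam * np.eye(len(X)), y)

def predict(X_new):
    return gaussian_kernel(X_new, X) @ alpha

print(predict(np.array([[0.5, 0.5, 0.5]])))
```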
Ciccoli, Marie Claude. "Schémas numériques efficaces pour le calcul d'écoulements hypersoniques réactifs." Nice, 1992. http://www.theses.fr/1992NICE4574.
Le, Poupon Axel. "Méthodes optimales et sous-optimales d'allocation de ressources efficace en codage numérique." Phd thesis, Télécom ParisTech, 2009. http://pastel.archives-ouvertes.fr/pastel-00005503.
Berthet, Antoine Olivier. "Méthodes itératives appliquées au décodage efficace de combinaisons de codes en treillis." Paris 6, 2001. http://www.theses.fr/2001PA066498.
Berthet, Antoine Olivier. "Méthodes itératives appliquées au décodage efficace de combinaisons de codes sur treillis." Paris, ENST, 2001. http://www.theses.fr/2001ENST0036.
Far from being confined to the theory of error-correcting codes (e.g., turbo codes), the renewed interest in iterative methods has spread to the entire theory of communications and led to the advent of a real turbo principle. In the classical theory, the different elements which make up the receiver (detector, equalizer, demodulator, channel decoder, source decoder) are activated sequentially, only once, in a given order. The propagation of soft decisions between those elements (as opposed to hard decisions) preserves all the information available at the channel output about the variables to estimate. But the fundamental sub-optimality induced by the partitioning of the receiver chain into distinct specific functions remains, each of them acting with only partial knowledge of the others (especially the earlier ones about the later ones). The so-called turbo principle aims at recovering optimality. It substitutes for the classical approach an iterative approach where the different functions of the receiver chain, formally identified with serially concatenated decoders and activated several times according to a given schedule, accept, deliver, and exchange constantly refined probabilistic information (referred to as extrinsic information) about the variables to estimate. Turbo detection is a first instance of the turbo principle. The basic idea consists in modelling the intersymbol interference channel (IIC) as a rate-1 time-varying convolutional code defined by a generator polynomial with complex coefficients. The serial concatenation of the error-correcting code and the IIC suggests the application of an iterative procedure between the two corresponding decoders, which, in effect, allows removing the intersymbol interference completely. Exploiting the highly structured nature of interfering signals, the turbo principle provides excellent results in multiuser detection as well. Other recent and promising applications are the demodulation of nonlinear continuous phase modulations and the decoding of joint source-channel codes. This PhD thesis is mainly focused on the identification and analysis of new instances of the turbo principle. The first part of the thesis is devoted to the design and iterative decoding of highly spectrally-efficient multilevel codes for the Gaussian channel. The proposed schemes involve a multitude of small linear component codes, convolutional or block, concatenated or not. The optimal symbol-by-symbol decoding of linear block codes, for which finding a representative trellis of complexity as reduced as possible constitutes a fundamental issue, is thoroughly investigated (chapter 2, in French). The parametrization (length, rates at each level, etc.) and the performance of the multilevel codes are optimized under iterative multistage decoding (chapter 3, in English). The second part of the thesis deals with near-optimal decoding of serially concatenated modulations, bit-interleaved or not, when transmission occurs over frequency-selective channels. We investigate different reduced-complexity approaches to perform detection/equalization, channel decoding and channel estimation in a completely or partially disjoint and iterative fashion (chapter 4, in English). These approaches are then extended to serially concatenated space-time trellis-coded modulations and frequency-selective multiple-input multiple-output (MIMO) channels (chapter 5, in English).
Roux, Claude Arthur Gaëtan. "Une méthode de parsage efficace pour les grammaires syntagmatiques généralisées." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1996. http://www.collectionscanada.ca/obj/s4/f2/dsk3/ftp04/nq21510.pdf.
Mezmaz, Mohand. "Une approche efficace pour le passage sur grilles de calcul de méthodes d'optimisation combinatoire." Lille 1, 2007. https://pepite-depot.univ-lille.fr/LIBRE/Th_Num/2007/50376-2007-Mezmaz.pdf.
The exact resolution of large combinatorial optimization problems is a challenge for grids. Indeed, it is necessary to rethink the resolution algorithms to take into account the characteristics of such environments, in particular their large scale, the heterogeneity and dynamic availability of their resources, and their multi-domain administration. In this thesis, we propose a new approach, called B&B@Grid, to adapt exact methods to grids. This approach is based on coding work units as intervals in order to minimize the cost of the communications caused by load balancing, fault tolerance and termination detection. This approach, about 100 times more efficient than the best known approach in terms of communication cost, led to the optimal resolution on Grid5000 of a standard instance of the Flow-Shop problem that had remained unsolved for fifteen years. To accelerate the resolution, we also deal with the cooperation on the grid of exact methods with meta-heuristics. Two cooperation modes have been considered: the relay mode, where a meta-heuristic is performed before an exact method, and the co-evolutionary mode, where both methods are executed in parallel. The implementation of this cooperation on a grid led us to propose an extension of the Linda coordination model.
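To make the interval coding idea concrete, here is a hedged sketch (a standard permutation ranking construction, not the exact B&B@Grid implementation): work units of a permutation-based Branch & Bound are intervals of lexicographic ranks, so load balancing and checkpointing exchange just two integers per unit.

```python
from math import factorial

def unrank_permutation(rank, n):
    """Return the permutation of 0..n-1 with the given lexicographic rank."""
    items, perm = list(range(n)), []
    for i in range(n, 0, -1):
        f = factorial(i - 1)
        perm.append(items.pop(rank // f))
        rank %= f
    return perm

def split(interval):
    """Split a work unit in two for load balancing (work stealing)."""
    a, b = interval
    mid = (a + b) // 2
    return (a, mid), (mid, b)

n = 4
whole = (0, factorial(n))          # all 24 permutations of 4 jobs
left, right = split(whole)         # two independent work units
print(unrank_permutation(left[0], n), unrank_permutation(right[0], n))
```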
Garnier, Robert. "Une méthode efficace d'accélération de la simulation des réseaux de Petri stochastiques." Bordeaux 1, 1998. http://www.theses.fr/1998BOR10593.
Stanczak, Arnaud. "La méthode de la "classe puzzle" est-elle efficace pour améliorer l'apprentissage ?" Thesis, Université Clermont Auvergne (2017-2020), 2020. http://www.theses.fr/2020CLFAL013.
The objective of this thesis is to test the effect of the Jigsaw classroom on learning. The Jigsaw classroom is a cooperative technique created by Aronson and his colleagues in the 1970s to promote the inclusion of ethnic minorities (e.g., Mexican-American and African-American students) in desegregated schools. Although this method is presented by its developers as an effective tool for improving student learning, empirical evidence is lacking. According to the social interdependence theory, the structure of interactions between individuals determines the effects of cooperative learning (Deutsch, 1949; Johnson & Johnson, 1989). In Jigsaw, this structure comes from the distribution of complementary resources: each individual owns a "jigsaw piece", namely a piece of information, and answering a problem requires the coordination of efforts among group members. With the help of the other group members, promotive interactions (e.g., helping behaviors, explanations and questioning) should emerge, which results in better learning for the members. In this thesis, Jigsaw's effectiveness is evaluated through a review of the scientific literature, a meta-analysis of recent research, and a set of experimental studies conducted among French sixth graders. To our knowledge, the experimental study of Jigsaw's effects on learning in student populations is almost non-existent in the scientific literature, and even though some research testing these effects is compiled in meta-analyses (Kyndt et al., 2013), no meta-analysis to date specifically addresses the question of Jigsaw's effects on learning. Hence, the research presented in this manuscript attempts to evaluate the effectiveness of the Jigsaw method on learning. In Chapter 1, we present social interdependence theory (Johnson & Johnson, 1989, 2002, 2005), several definitions and ways of structuring cooperation between students, as well as a review of their effects on learning. Chapter 2 examines one of these cooperative techniques in detail: Jigsaw (Aronson et al., 1978; Aronson & Patnoe, 2011). We describe the evolution of empirical studies conducted from its conception to the present day. Chapter 3 points out some of the limitations of this literature, particularly in terms of statistical power, and the impact they may have on the estimation of Jigsaw's effectiveness on learning. We also develop our main hypothesis, its operationalization and the statistical tools and procedures we use in the empirical chapters: equivalence tests (Lakens, 2017), smallest effect sizes of interest (Hattie, 2009) and meta-analyses (Borenstein et al., 2010; Goh et al., 2016). Chapter 4 presents the results of a meta-analysis of Jigsaw's effects on learning, synthesizing empirical articles published between 2000 and 2020. We test several moderators (e.g., grade level, discipline, type of Jigsaw, location of research) in order to quantify the dispersion of Jigsaw effects and to assess heterogeneity between studies. Chapter 5 compiles five studies conducted among French sixth graders in which we test the effectiveness of Jigsaw on learning, compared to an "individual" condition (studies 1 and 2) or a "teaching as usual" condition (studies 3A, 3B and 3C). The results of this chapter are interpreted with regard to the meta-analysis and the debates related to the structure of Jigsaw. In the last chapter of this manuscript, we summarize the main results developed throughout the theoretical and empirical chapters. The contributions and limitations of our research are discussed, as well as theoretical and practical perspectives for overcoming them in future research.
Pebernet, Laura. "Etude d'un modèle Particle-In-Cell dans une approximation Galerkin discontinue pour les équations de Maxwell-Vlasov : recherche d'une solution hybride non conforme efficace." Toulouse 3, 2010. http://thesesups.ups-tlse.fr/1080/.
This thesis presents the study and development of an efficient numerical simulation tool for the modeling of plasma/microwave interaction in electromagnetic software based upon a Discontinuous Galerkin (DG) scheme. This work is organized in two main steps. First, we develop a Particle-In-Cell (PIC) model appropriate for the DG scheme. For this, on the one hand, we propose a hyperbolic corrector method to take the charge conservation law into account and, on the other hand, we integrate physical plasma models such as high power microwave sources, particle emission surfaces and electron beams. Then, we also seek optimal performance for the coupling of the Maxwell-Vlasov equations, in order to increase the efficiency and the size of the applications to treat. This leads to the study of a non-conformal hybridization of methods to solve the Maxwell-Vlasov problem. First, we work on a hybrid method between different numerical schemes to solve a 1D Maxwell problem on non-conformal meshes. Second, we consider a 2D TE Maxwell problem, in order to introduce a PIC model. Finally, we realize an FDTD/FDTD hybridization on two non-coincident meshes for the 2D Maxwell-Vlasov system.
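As a hedged sketch of one standard PIC ingredient (the Boris particle push; the fields, charge-to-mass ratio and time step below are toy assumptions, and the thesis couples the PIC model to a Discontinuous Galerkin Maxwell solver rather than to static fields):

```python
import numpy as np

def boris_push(v, E, B, q_over_m, dt):
    """Advance a particle velocity by one step in fields E, B
    (second-order, the standard pusher in electromagnetic PIC codes)."""
    t = 0.5 * q_over_m * dt * B                 # rotation vector
    s = 2.0 * t / (1.0 + t @ t)
    v_minus = v + 0.5 * q_over_m * dt * E       # first half electric kick
    v_prime = v_minus + np.cross(v_minus, t)    # magnetic rotation
    v_plus = v_minus + np.cross(v_prime, s)
    return v_plus + 0.5 * q_over_m * dt * E     # second half electric kick

v = np.array([1.0e5, 0.0, 0.0])                 # m/s
E = np.array([0.0, 1.0e3, 0.0])                 # V/m
B = np.array([0.0, 0.0, 1.0e-2])                # T
for _ in range(10):
    v = boris_push(v, E, B, q_over_m=-1.76e11, dt=1.0e-9)
```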
Galhaut, Bastien. "Etude de la mesure de la section efficace de la réaction 16O(n,alpha)¹³C du seuil à 10 MeV." Thesis, Normandie, 2017. http://www.theses.fr/2017NORMC231/document.
SCALP (Scintillating ionization Chamber for ALpha particle production in neutron induced reactions) is an experimental device conceived to measure the cross section of the neutron-induced reaction on oxygen, O-16(n,alpha)C-13. The latter reaction belongs to the NEA High Priority Request List (HPRL) and is relevant in reactor physics because the associated helium production affects important parameters of fast and thermal neutron reactors. Monte Carlo simulations with Geant4 showed that the device (a scintillating ionization chamber surrounded by four photomultiplier tubes) can measure and discriminate the different reactions inside the chamber. Cross sections of the O-16(n,alpha)C-13 and F-19(n,alpha)N-16 (used for cross section normalization) reactions between the energy threshold and 10 MeV could be experimentally measured with a 15% relative accuracy. However, some improvements will be necessary to obtain the lower uncertainties requested by the NEA: an O-16(n,alpha)C-13 cross section measurement with an accuracy better than 10%.
Boillod-Cerneux, France. "Nouveaux algorithmes numériques pour l'utilisation efficace des architectures de calcul multi-coeurs et hétérogènes." Thesis, Lille 1, 2014. http://www.theses.fr/2014LIL10137/document.
Supercomputer architectures and programming paradigms have dramatically evolved during the last decades. Having reached the petaflopic scale, we now aim to overcome the exaflopic scale. Crossing this new scale implies many drastic changes, concerning all High Performance Computing scientific fields. In this thesis, we focus on eigenvalue problems, involved in most industrial simulations. We first study and characterize the convergence of the Explicitly Restarted Arnoldi Method (ERAM). Based on this algorithm, we efficiently re-use the computed Ritz eigenvalues to accelerate the ERAM convergence onto the desired eigensubspace. We then propose two matrix generators, starting from a user-imposed spectrum. Such matrix collections are used to numerically check and approve extreme-scale eigensolvers, as well as to measure and improve their parallel performance on ultra-scale supercomputers.
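As a hedged sketch of the Explicitly Restarted Arnoldi Method for one dominant eigenpair (subspace size, tolerance, restart choice and the test matrix are assumptions, and no breakdown handling is included; production ERAM computes several eigenpairs and exploits parallelism):

```python
import numpy as np

def arnoldi(A, v0, m):
    """Build an m-step Krylov basis V and Hessenberg matrix H."""
    n = len(v0)
    V = np.zeros((n, m + 1)); H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.linalg.norm(v0)
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

def eram(A, m=10, restarts=50, tol=1e-10):
    """Restart from the best Ritz vector until the residual is small
    (assumes the dominant eigenvalue is real, for simplicity)."""
    v = np.random.default_rng(0).normal(size=A.shape[0])
    for _ in range(restarts):
        V, H = arnoldi(A, v, m)
        evals, evecs = np.linalg.eig(H[:m, :m])
        k = np.argmax(np.abs(evals))           # target: largest magnitude
        ritz_vec = V[:, :m] @ np.real(evecs[:, k])
        resid = np.linalg.norm(A @ ritz_vec - np.real(evals[k]) * ritz_vec)
        if resid < tol:
            break
        v = ritz_vec                            # explicit restart
    return np.real(evals[k]), ritz_vec

A = np.diag(np.arange(1.0, 101.0)) \
    + 0.01 * np.random.default_rng(1).normal(size=(100, 100))
lam, x = eram(A)
```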
Sicot, Frédéric. "Simulation efficace des écoulements instationaires périodiques en turbomachines." Ecully, Ecole centrale de Lyon, 2009. http://www.theses.fr/2009ECDL0019.
Many industrial applications involve flows that are periodic in time; flutter prediction and turbomachinery flows are some examples. Such flows are not simulated efficiently by classical unsteady techniques, as a transient regime must be bypassed. New techniques, dedicated to time-periodic flows and based on Fourier analysis, have been developed recently. These methods, called harmonic balance, cast a time-periodic flow computation into several coupled steady computations, corresponding to a uniform sampling of the period. Their efficiency allows a precision good enough for engineering, much faster than classical nonlinear time-marching algorithms. The present study aims at implementing one of these, the Time Spectral Method (TSM), in the ONERA solver elsA. It is extended to an arbitrary Lagrangian/Eulerian formulation to take mesh deformation into account for aeroelasticity applications. New implicit algorithms are developed to improve robustness. The TSM is successfully validated on external aerodynamics applications. Turbomachinery flows necessitate complex space and time interpolations to reduce the computational domain to a single blade passage per row, regardless of its geometry. Some applications to rotor/stator interactions and aeroelasticity are presented.
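To make the harmonic balance idea concrete, here is a hedged sketch of the Time Spectral Method's central object, the spectral time-derivative matrix coupling N (odd) uniform samples over the period; the period and sample count are assumptions:

```python
import numpy as np

def tsm_derivative_matrix(N, T=2.0 * np.pi):
    """Spectral differentiation matrix D such that du/dt at the N uniform
    samples of a T-periodic signal is approximated by D @ u (N odd)."""
    D = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if i != j:
                D[i, j] = (np.pi / T) * (-1.0) ** (i - j) \
                          / np.sin(np.pi * (i - j) / N)
    return D

N, T = 5, 2.0 * np.pi
t = np.arange(N) * T / N
u = np.cos(t)
print(np.allclose(tsm_derivative_matrix(N, T) @ u, -np.sin(t)))  # True
# In a TSM solver, D couples the N steady-state problems in pseudo-time.
```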
Griset, Rodolphe. "Méthodes pour la résolution efficace de très grands problèmes combinatoires stochastiques : application à un problème industriel d'EDF." Thesis, Bordeaux, 2018. http://www.theses.fr/2018BORD0219/document.
The purpose of this Ph.D. thesis is to study optimization techniques for large-scale stochastic combinatorial problems. We apply those techniques to the problem of scheduling EDF nuclear power plant maintenance outages, which is of significant importance due to the major share of nuclear energy in the French electricity system. We build on a two-stage extended formulation, the first level of which fixes nuclear outage dates and production profiles for nuclear plants, while the second evaluates the cost of meeting the demand. This formulation enables solving deterministic industrial instances to optimality using a MIP solver. However, the computational time increases significantly with the number of scenarios. Hence, we resort to a procedure combining column generation in a Dantzig-Wolfe decomposition with Benders cut generation, to handle the linear relaxation of stochastic instances. We then obtain integer solutions of good quality via a heuristic, for up to fifty scenarios. We further assume that outage durations are uncertain and that unexpected shutdowns of plants may occur. We investigate robust optimization methods in this context, ignoring possible recourse on power plant outage dates. We report on several approaches, using bi-objective or probabilistic methods, to ensure the satisfaction of constraints which might be relaxed in the operating process. For other constraints, we apply a budget-of-uncertainty approach to limit future re-organizations of the schedule. Adding probabilistic information leads to better control of the price of robustness.
Younis, Georges. "Modélisation électrique des décharges RF dans des mélanges N2O/He pour applications aux dépôts PECVD utilisés en micro-électronique." Toulouse 3, 2005. http://www.theses.fr/2005TOU30145.
This thesis is dedicated to the modelling of the electric and energetic behaviour of RF discharges at low pressure and temperature, in N2O/He mixtures, under different voltages and pressures. A particle model based on the Monte Carlo technique coupled to the Poisson equation is developed to reach this target. Among the results of this model, one can mention the dissipated power, the electric field and potential, the distribution functions, the reaction coefficients and reaction rates, the deposited energy, the reaction and transport parameters, as well as the densities of the charged particles. In pure N2O as in the N2O/He mixture, the roles of the different collisional processes in the equilibrium and maintenance of the discharge have been highlighted. We showed that the N2O/He discharge, at least up to 85% of helium, behaves like that of pure N2O, but with electronic and ionic average energies that increase with the percentage of helium. The energetic study of the discharge is made with the help of a new technique of power calculation, based on the counting of the energies deposited in the medium after every collisional process.
Nguyen, Minh Khoa. "Exploration efficace de chemins moléculaires par approches aussi rigides que possibles et par méthodes de planification de mouvements." Thesis, Université Grenoble Alpes (ComUE), 2018. http://www.theses.fr/2018GREAM013/document.
Proteins are macromolecules participating in important biophysical processes of living organisms. It has been shown that changes in protein structures can lead to changes in their functions, and such changes have been found to be linked to some diseases, such as those related to neurodegenerative processes. Hence, an understanding of their structures and of their interactions with other molecules, such as ligands, is of major concern for the scientific community and the medical industry for inventing and assessing new drugs. In this dissertation, we are particularly interested in developing new methods to find, for a system made of a single protein or a protein and a ligand, the pathways that allow changing from one state to another. During the past decade, a vast number of computational methods have been proposed to address this problem. However, these methods still face two challenges: the high dimensionality of the representation space, associated with the large number of atoms in these systems, and the complexity of the interactions between these atoms. This dissertation proposes two novel methods to efficiently find relevant pathways for such biomolecular systems. The methods are fast and their solutions can be used, analyzed or improved with more specialized methods. The first proposed method generates interpolation pathways for biomolecular systems using the As-Rigid-As-Possible (ARAP) principle from computer graphics. The method is robust and the generated solutions preserve at best the local rigidity of the original system. An energy-based extension of the method is also proposed, which significantly improves the solution paths. However, in scenarios requiring complex deformations, this geometric approach may still generate unnatural paths. Therefore, we propose a second method called ART-RRT, which combines the ARAP principle for reducing the dimensionality with Rapidly-exploring Random Trees (RRT) from robotics for efficiently exploring possible pathways. This method not only gives a variety of pathways in reasonable time, but the pathways are also low-energy and clash-free, with the local rigidity preserved as much as possible. The mono-directional and bi-directional versions of the ART-RRT method were applied to finding ligand-unbinding and protein conformational transition pathways, respectively. The results are found to be in good agreement with experimental data and other state-of-the-art solutions.
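As a hedged sketch of the RRT building block used by ART-RRT (a basic Rapidly-exploring Random Tree in the plane; the trivial validity check stands in for the energy and clash tests of the actual method):

```python
import numpy as np

rng = np.random.default_rng(0)

def rrt(start, goal, n_iters=2000, step=0.05, goal_tol=0.05):
    """Grow a tree in [0,1]^2 toward random samples until the goal is reached."""
    nodes, parents = [np.asarray(start, dtype=float)], [0]
    goal = np.asarray(goal, dtype=float)
    for _ in range(n_iters):
        sample = rng.uniform(0.0, 1.0, size=2)       # random configuration
        near = min(range(len(nodes)),
                   key=lambda i: np.linalg.norm(nodes[i] - sample))
        direction = sample - nodes[near]
        new = nodes[near] + step * direction / np.linalg.norm(direction)
        nodes.append(new); parents.append(near)      # extend toward sample
        if np.linalg.norm(new - goal) < goal_tol:    # goal reached: backtrack
            path, i = [new], len(nodes) - 1
            while i != 0:
                i = parents[i]; path.append(nodes[i])
            return path[::-1]
    return None

path = rrt(start=(0.05, 0.05), goal=(0.9, 0.9))
print(len(path) if path else "no path found")
```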
Burrus, Nicolas. "Apprentissage a contrario et architecture efficace pour la détection d'évènements visuels significatifs." Phd thesis, Université Pierre et Marie Curie - Paris VI, 2008. http://pastel.archives-ouvertes.fr/pastel-00610243.
Thebault, Loïc. "Algorithmes Parallèles Efficaces Appliqués aux Calculs sur Maillages Non Structurés." Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLV088/document.
The growing need for numerical simulations results in larger and more complex computing centers and more HPC software. Current HPC system architectures have increasing requirements for energy efficiency and performance. Recent advances in hardware design result in an increasing number of nodes and an increasing number of cores per node. However, some resources do not scale at the same rate. The increasing number of cores and parallel units implies less memory per core, higher requirements for concurrency, higher coherency traffic, and a higher cost for the coherency protocol. Most applications and runtimes currently in use struggle to scale with the present trend. In the context of finite element methods, exposing massive parallelism on unstructured mesh computations with efficient load balancing and minimal synchronization is challenging. To make efficient use of these architectures, several parallelization strategies have to be combined to exploit the multiple levels of parallelism. This Ph.D. thesis proposes several contributions aimed at overcoming this limitation by addressing irregular codes and data structures in an efficient way. We developed a hybrid parallelization approach combining the distributed, shared, and vectorial forms of parallelism in a fine-grain, task-based approach applied to irregular structures. Our approach has been ported to several industrial applications developed by Dassault Aviation and has led to important speedups using standard multicores and the Intel Xeon Phi manycore.
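As a hedged illustration of one classic technique for exposing parallelism in unstructured mesh computations (greedy element coloring; the tiny element-to-node incidence is an assumption, and the thesis uses finer-grain task-based strategies), elements that share no node can be processed concurrently:

```python
# Toy mesh: element id -> set of node ids it touches.
elements = {0: {0, 1, 2}, 1: {1, 2, 3}, 2: {3, 4, 5}, 3: {5, 6, 0}}

def color_elements(elements):
    """Greedy coloring: elements sharing a node get different colors."""
    colors = {}
    for e, nodes in elements.items():
        used = {colors[f] for f, m in elements.items()
                if f in colors and nodes & m}
        c = 0
        while c in used:
            c += 1
        colors[e] = c
    return colors

print(color_elements(elements))
# All elements of one color can be assembled by different threads (or
# tasks) in parallel, with no write conflicts inside a color.
```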
Massas, Pierre Guironnet de. "Etude de méthodes et mécanismes pour un accès transparent et efficace aux données dans un système multiprocesseur sur puce." Grenoble INPG, 2009. https://tel.archives-ouvertes.fr/tel-00434379.
In order to provide ever more computational power, architects integrate dozens of processors on the same chip. The main goal of our work is to enhance data accesses using software-transparent solutions. Our context targets NoC-based multiprocessor systems which contain L1 caches and distributed shared memory. In the first part, we show that the evolution of constraints in embedded systems makes possible the usage of a write-through invalidate coherence protocol in such systems. We also present a novel method to evaluate and compare memory coherence protocols. In the second part, we present a novel solution for on-chip data migration. It is hardware driven, and it dynamically and wisely places the data in order to decrease the mean cost of memory accesses.
Bossé, Mathieu. "La réalisation audionumérique DIY de groupes rock au Québec : pour une méthode de travail plus efficace." Master's thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/36713.
Full text of source
In this master's thesis, we address the issue of creative processes and recording techniques in different DIY (do-it-yourself) settings, as implemented through my experience with different rock, punk, hardcore, and alternative bands in Quebec City. Musical styles vary greatly from one project to another, but all of the groups share three common aspects: 1) they come from the underground scene around Quebec City; 2) they have a rock-type instrumentation (for us, a rock-type instrumentation comprises at least two of the following instruments: drums, electric guitar, electric bass, and vocals); 3) they record in a DIY setting, up to a certain point. With the rise of new technologies, DIY recording is more and more common among music bands. In our experience, artists from scenes close to punk and underground cultures are often limited by their budget and resources, or simply desire creative liberty when recording their music. Seeking independence, they will often take the DIY approach when the time comes to record a song or an album, which often lends a punk aesthetic to their audio tracks. This master's thesis seeks to analyse working methods that contribute to obtaining good-quality results in DIY recording and to find an efficient methodology that could help the self-taught DIYer record music. We believe that by reading about these projects, with the analysis and feedback, artists who want to work in a DIY setting will be able to make better decisions with regard to their own recording project and situation. I also offer advice to help DIY recording artists analyse their own unique recording reality and that of their different recording projects, in order to benefit from a faster and more favorable learning curve.
Liatard, Éric. "Mesures de sections efficaces totales de réaction avec des faisceaux d'ions lourds stables et radioactifs par la méthode du rayonnement associé." Grenoble 1, 1989. http://www.theses.fr/1989GRE10143.
Full text of source
Dudouet, Jérémie. "Etude de la fragmentation du 12C sur cible mince à 95 MeV/A pour la hadronthérapie." Caen, 2014. http://www.theses.fr/2014CAEN2058.
Full text of source
Nuclear models need to be improved in order to reach the accuracy required for a reference simulation code for hadrontherapy. In this context, two experiments were performed by our collaboration in May 2011 and September 2013 at GANIL to study nuclear reactions of 95 MeV/u 12C ions on thin targets of medical interest, measuring the double-differential fragmentation cross sections of each produced isotope. These experimental data have been compared with Monte Carlo simulations. Different nuclear models provided by the GEANT4 simulation toolkit were tested first; these simulations revealed discrepancies of up to one order of magnitude. The phenomenological model HIPSE was then used and showed that the overlap region of the two colliding nuclei needs to be taken into account in order to reproduce the kinematics of the emitted fragments at intermediate energies. Given the difficulties these models encounter in reproducing the data, a new model is currently under development: SLIIPIE, a semi-microscopic model built on a participant-spectator geometrical approach. This model was also compared to the experimental data for 12C+12C reactions at 95 MeV/A, giving promising results.
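For orientation, the double-differential cross section measured in such an experiment has the standard experimental form below; the symbols are generic textbook notation, not the collaboration's own.

```latex
\frac{d^{2}\sigma}{d\Omega\,dE}(\theta,E)
  = \frac{N_{\mathrm{frag}}(\theta,E)}
         {N_{\mathrm{beam}}\;n_{\mathrm{target}}\;\Delta\Omega\;\Delta E\;\varepsilon}
```

Here N_frag(θ, E) is the number of fragments detected at angle θ with energy E, N_beam the number of incident ions, n_target the areal density of target nuclei, ΔΩ and ΔE the solid-angle and energy bins, and ε the detection efficiency.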
Portelenelle, Brice. "La méthode LS-STAG avec schémas diamants pour l'approximation de la diffusion : une méthode de type "cut-cell" précise et efficace pour les écoulements incompressibles en géométries 3D complexes." Thesis, Université de Lorraine, 2019. http://www.theses.fr/2019LORR0136/document.
Full text of source
The LS-STAG method is a Cartesian method for the computation of incompressible flows in complex geometries. It consists of an accurate discretisation of the Navier-Stokes equations in cut-cells, polyhedral cells of complex shape formed by the intersection of the Cartesian mesh with the immersed boundary. Originally developed for 2D geometries, where only three types of generic cut-cells appear, its extension to 3D geometries has to deal with a large number of cut-cell types (108). Recently, the LS-STAG method was extended to 3D complex geometries whose boundary is parallel to an axis of the Cartesian coordinate system, where only the extruded counterparts of the 2D cut-cells appear. That study highlighted two points to address in order to develop a fully 3D method. First, computing the diffusive fluxes with a simple two-point scheme proved insufficiently accurate in 3D-extruded cut-cells because of their non-orthogonality. Second, the implementation of these fluxes at the immersed boundary, done with a case-by-case discretisation according to cut-cell type, would be too cumbersome to extend to the many additional 3D cut-cell types, and needed to be simplified and rationalized. In this thesis, the first point is addressed with the diamond-scheme tool, first studied in 2D for the heat equation, then for the Navier-Stokes equations in the Boussinesq approximation, and finally extended to 3D. Moreover, the diamond schemes are used to fully revisit the discretisation of the shear stresses in the Navier-Stokes equations, removing the case-by-case procedure. These modifications lead to a systematic discretisation that is accurate and algorithmically efficient for flows in fully 3D geometries. The numerical validation of the LS-STAG method with diamond schemes is presented for a series of test cases in 2D and 3D complex geometries. The accuracy is first assessed by comparison with analytical solutions in 2D, then in 3D by the simulation of Stokes flow between two concentric spheres. The robustness of the method is highlighted by simulations of the flow past a rotating sphere, in laminar regimes (steady and unsteady) as well as in a weakly turbulent regime.
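To see why the two-point scheme mentioned above loses accuracy on non-orthogonal cut-cells, recall the generic two-point diffusive flux through a face f separating cell centres P and N (standard finite-volume notation, not the thesis's own):

```latex
F_f \;\approx\; \mu\,A_f\,\frac{u_N - u_P}{\lVert \mathbf{x}_N - \mathbf{x}_P \rVert}
```

This expression is consistent only when the segment joining x_P and x_N is aligned with the face normal; otherwise the cross-diffusion contribution of the tangential gradient is silently dropped. Diamond schemes recover it by reconstructing the full gradient on a "diamond" stencil built from the face vertices and the two adjacent cell centres.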
Soua, Mahmoud. "Extraction hybride et description structurelle de caractères pour une reconnaissance efficace de texte dans les documents hétérogènes scannés : Méthodes et Algorithmes parallèles." Thesis, Paris Est, 2016. http://www.theses.fr/2016PESC1069/document.
Full text of source
Optical character recognition (OCR) is a process that converts text images into editable text documents. Today, these systems are widely used in dematerialization applications such as mail sorting, bill management, etc. In this context, the aim of this thesis is to propose an OCR system that provides a better compromise between recognition rate and processing speed, allowing reliable, real-time document dematerialization. To be recognized, the text is first extracted from the background. It is then segmented into disjoint characters, which are described through their structural characteristics. Finally, the characters are recognized by comparing their descriptors with predefined ones. Text extraction based on binarization methods remains difficult in heterogeneous scanned documents with complex, noisy backgrounds, where the text may be confused with a textured background or corrupted by noise. On the other hand, character description and segment extraction are often complex, relying on geometrical transformations or polygonal approximations, involving a large number of features, or yielding poor discrimination when the chosen features are sensitive to variations of scale, style, etc. We therefore adapt our algorithms to heterogeneous scanned documents, and we obtain high discrimination between characters with a description based on the structure of the characters through their horizontal and vertical projections. To ensure real-time processing, we parallelise the developed algorithms on the graphics processor (GPU). The main contributions of our proposed OCR system are as follows. (1) A new binarisation method for heterogeneous scanned documents containing text regions with complex or homogeneous backgrounds. In this method, an image analysis step is followed by a classification of the document areas into image regions (text on a complex background) and text regions (text on a homogeneous background). For text regions, text extraction is performed using a hybrid method based on K-means classification (CHK) that we developed for this purpose. This method combines local and global approaches; it improves the quality of text/background separation while minimizing the distortion of text extracted from scanned documents that are noisy because of the digitization process. Image regions are enhanced with gamma correction (CG) before applying CHK. According to our experiments, our text extraction method yields a 98% character recognition rate on heterogeneous scanned documents. (2) A unified character descriptor based on the study of character structure. It employs a sufficient number of features, obtained by unifying the descriptors of the horizontal and vertical projections of the characters, for efficient discrimination. The advantages of this descriptor are its high performance and its simple computation; it supports the recognition of alphanumeric and multiscale characters and provides 100% character recognition for a given typeface and font size. (3) A parallelization of the proposed character recognition system. The GPU was used as the parallelization platform: flexible and powerful, this architecture provides an effective way to accelerate intensive image processing algorithms. Our implementation combines coarse- and fine-grained parallelization strategies to speed up the steps of the OCR chain; CPU-GPU communication overheads are avoided and good memory management is ensured. The effectiveness of our implementation is validated through extensive experiments.
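As a rough sketch of a projection-based character descriptor of this kind (the unified descriptor in the thesis is more elaborate; the function names below are invented for illustration):

```python
import numpy as np

def projection_descriptor(glyph, size=16):
    """Describe a binary character image by its normalized
    horizontal and vertical projection profiles."""
    img = np.asarray(glyph, dtype=float)
    # resample to a fixed grid so the descriptor tolerates scale changes
    ys = np.linspace(0, img.shape[0] - 1, size).astype(int)
    xs = np.linspace(0, img.shape[1] - 1, size).astype(int)
    img = img[np.ix_(ys, xs)]
    h_proj = img.sum(axis=1)          # ink per row
    v_proj = img.sum(axis=0)          # ink per column
    desc = np.concatenate([h_proj, v_proj])
    n = np.linalg.norm(desc)
    return desc / n if n else desc

def recognize(glyph, templates):
    """Nearest-template classification by Euclidean distance.

    templates -- dict mapping a character label to its descriptor
    """
    d = projection_descriptor(glyph)
    return min(templates, key=lambda k: np.linalg.norm(d - templates[k]))
```

Each projection profile is cheap to compute (one pass over the pixels), which is what makes this family of descriptors attractive for GPU-parallel OCR pipelines.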
Jedouaa, Meriem. "Une méthode efficace de capture d'interface pour la simulation de suspensions d'objets rigides et de vésicules immergées dans un fluide." Thesis, Université Grenoble Alpes (ComUE), 2017. http://www.theses.fr/2017GREAM042/document.
Full text of source
In this work, we propose a method to efficiently capture an arbitrary number of fluid/solid or fluid/fluid interfaces in a level-set framework. This technique, borrowed from image analysis, is introduced in the context of several bodies immersed in a fluid. A configuration of the bodies in the fluid/structure domain is described by three label maps, providing the first and second neighbours, and their associated distance functions. A single level-set function captures the union of all interfaces and is transported with the fluid velocity, or with a global velocity field that accounts for the velocity of each structure. A multi-label fast-marching method is then performed in a narrow band around the interfaces, updating the label and distance functions. Within this framework, the numerical treatment of contacts between the structures is achieved by a short-range repulsive force depending on the distance between the closest bodies. The method is validated through the simulation of a dense suspension of rigid bodies immersed in an incompressible fluid. A global penalization model uses the label maps to follow all the solid bodies together, without a separate computation of each body's velocity. Consequently, the method proves efficient when dealing with a large number of rigid bodies. We also investigate the numerical simulation of vesicle suspensions, which requires computing elastic and bending forces on the membranes. In the present model, a single elastic force and a single bending force are computed for the whole set of membranes from the level-set function and the label maps.
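A minimal sketch of such a distance-based contact force between the two closest bodies is given below; this is a generic form (constants and names are hypothetical), and the thesis's precise expression may differ.

```python
import numpy as np

def repulsion(d, direction, d_act=0.05, k=1e3):
    """Short-range repulsive force between two nearby bodies.

    d         -- distance between the interfaces of the two closest bodies,
                 read off the label maps' distance functions
    direction -- unit vector pointing from one body toward the other
    d_act     -- activation distance below which the force switches on
    k         -- stiffness constant
    """
    if d >= d_act:
        return np.zeros_like(direction)       # bodies far apart: no force
    return k * (d_act - d) ** 2 * np.asarray(direction)  # pushes bodies apart

print(repulsion(0.01, np.array([1.0, 0.0])))  # close bodies -> nonzero force
```

Because the distance to the nearest neighbouring body is already maintained by the multi-label fast-marching step, evaluating such a force costs essentially nothing extra per grid point.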
Petit, Sophie. "Étude des méthodes de prédiction de taux d'erreurs en orbite dans les mémoires : nouvelle approche empirique." Toulouse, ENSAE, 2006. http://www.theses.fr/2006ESAE0015.
Full text of source
Sens, Nicolas. "Développement d'une méthode de type "velocity map imaging" pour la mesure de sections efficaces d'émission d'électrons par des molécules d'intérêt biologique en collision avec des ions." Thesis, Normandie, 2020. http://www.theses.fr/2020NORMC213.
Full text of source
In recent decades, ion collisions with biologically relevant molecules have received increasing interest due to applications in radiation biology. The aim of this PhD is the development of a new crossed-beam experimental set-up dedicated to the measurement of absolute total, single and double differential (in angle and/or energy) cross sections for electron emission from biologically relevant molecules. The electrons emitted into the 4π steradian solid angle after the collision between the projectile ion and the target molecule are extracted and analyzed by a velocity map imaging (VMI) spectrometer. The cross sections are derived from measurements of the target beam density, the projectile beam intensity, and the beam overlap, by means of a quartz crystal microbalance and an ion profiler. This thesis details the experimental set-up, the method used to determine the absolute cross sections, and the complete characterization of the set-up. The first absolute cross sections for electron emission from adenine and uracil molecules upon collision with C4+ and N4+ ions, measured with this new set-up, are also shown and discussed. The good agreement between these results and previous experimental and theoretical data confirms the correct functioning of the experimental set-up.
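Schematically, the absolute normalization described above amounts to dividing the electron count by the beam and target factors (generic notation; the actual calibration chain of the set-up is more detailed):

```latex
\sigma \;=\; \frac{N_e}{N_{\mathrm{ion}}\;n_{\mathrm{target}}\;L_{\mathrm{eff}}\;\varepsilon}
```

with N_e the number of detected electrons, N_ion the number of projectile ions, n_target the target beam density, L_eff an effective interaction length set by the beam overlap, and ε the detection efficiency.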
Maneval, Daniel. "Conception d'un formalisme de pouvoir d'arrêt équivalent et accélération graphique : des simulations Monte Carlo plus efficaces en protonthérapie." Doctoral thesis, Université Laval, 2019. http://hdl.handle.net/20.500.11794/34601.
Full text of source
In radiotherapy, treatment planning is the optimization of the ballistics that administer the prescribed dose to the treated lesions while minimizing the collateral dose received by healthy tissue. The dose calculation algorithm is at the heart of this numerical simulation: it must be both precise and computationally efficient. The antagonism between these two features has led to the development of fast analytical algorithms whose dosimetric accuracy has now reached its limit. The accuracy of the dose calculation algorithm is particularly important in proton therapy, in order to fully exploit the ballistic potential of protons. The Monte Carlo proton transport method is the most accurate, but also the least efficient. This thesis deals with the development of a Monte Carlo dose calculation platform efficient enough for routine clinical use. The main objective of the project is to accelerate Monte Carlo proton transport without compromising the precision of the dose deposition. Two lines of research were pursued. The first was to establish a new variance reduction technique, the equivalent restricted stopping power formalism (Leq formalism), which reduces the algorithmic time complexity from linear (O(n)) in current Monte Carlo algorithms to constant (O(1)). The second focused on graphics processing units to improve the execution speed of Monte Carlo proton transport. The developed platform, named pGPUMCD, transports protons on graphics processors in a voxelized geometry, using both condensed and discrete interaction techniques. Low-energy-transfer inelastic interactions are modeled as continuous proton slowing-down using the Leq formalism, and energy straggling is taken into account. Elastic interactions are based on multiple Coulomb scattering. The discrete interactions are inelastic interactions and nuclear elastic and non-elastic proton-nucleus interactions. pGPUMCD is compared to Geant4, and the implemented physical processes are validated one by one. For dose calculation in a clinical context, 27 materials are defined for tissue segmentation from the CT scan. The dosimetric accuracy of the Leq formalism is better than 0.31% for materials ranging from water to gold. The intrinsic efficiency gain factors of the Leq formalism are greater than 30, and between 100 and 630 for a similar dosimetric accuracy. Combined with the GPU acceleration, the efficiency gain exceeds 10⁵. Dose differences between pGPUMCD and Geant4 are smaller than 1% in the Bragg peak region and below 3% in its distal fall-off for the various simulation configurations, with homogeneous phantoms and clinical cases. In addition, 99.5% of the dose points pass the 1% criterion, and the computed ranges match those of Geant4 to within 0.1%. The computing times of pGPUMCD are below 0.5 seconds per million transported protons, compared to several hours with Geant4. The dosimetric and efficiency performance of pGPUMCD makes it a good candidate for use in a clinical dosimetric planning environment; the expected medical benefit is better control of the delivered dose, allowing significant reductions in treatment margins and toxicity.
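For context, efficiency comparisons between Monte Carlo codes are conventionally made with the figure of merit below (a standard definition, not necessarily the exact metric used in the thesis):

```latex
\varepsilon = \frac{1}{\sigma^{2}T}, \qquad
G = \frac{\varepsilon_{\mathrm{new}}}{\varepsilon_{\mathrm{ref}}}
  = \frac{\sigma_{\mathrm{ref}}^{2}\,T_{\mathrm{ref}}}{\sigma_{\mathrm{new}}^{2}\,T_{\mathrm{new}}}
```

where σ² is the variance of the scored quantity and T the computation time; at equal statistical accuracy the gain reduces to the speed-up T_ref/T_new.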
Furstoss, Christophe. "Conception et développement d'un fantôme anthropomorphe équipé de détecteurs dans le but d'évaluer la dose efficace à un poste de travail : étude de faisabilité." Paris 11, 2006. http://www.theses.fr/2006PA112186.
Full text of source
Boffy, Hugo. "Techniques multigrilles et raffinement pour un modèle 3D efficace de milieux hétérogènes sous sollicitations de contact." PhD thesis, INSA de Lyon, 2012. http://tel.archives-ouvertes.fr/tel-00823694.
Full text of source
Leleu, Samuel. "Vers une nouvelle méthode efficace et respectueuse de l'environnement pour la protection contre la corrosion des alliages de magnésium pour l'industrie aéronautique." PhD thesis, Toulouse, INPT, 2018. http://oatao.univ-toulouse.fr/20134/1/Leleu_Samuel-20134.pdf.
Full text of source