Theses on the topic "Précision de calcul"
Consult the top 50 theses for your research on the topic "Précision de calcul".
Gratton, Serge. "Outils théoriques d'analyse du calcul à précision finie". Toulouse, INPT, 1998. http://www.theses.fr/1998INPT015H.
Brunin, Maxime. "Étude du compromis précision statistique-temps de calcul". Thesis, Lille 1, 2018. http://www.theses.fr/2018LIL1I001/document.
In the current context, we need to develop algorithms able to process voluminous data within a short computation time. For instance, dynamic programming applied to the change-point detection problem in the distribution cannot quickly process data with a sample size greater than $10^{6}$. Iterative algorithms provide an ordered family of estimators indexed by the number of iterations. In this thesis, we have studied this family of estimators statistically, in order to select one with good statistical performance and a low computation cost. To this end, we have followed the approach of using stopping rules to suggest an estimator within the framework of the change-point detection problem in the distribution and the linear regression problem. Usual estimators are computed with a large number of iterations; a stopping rule is the iteration at which we stop the algorithm in order to limit the overfitting that some usual estimators suffer from. By stopping the algorithm earlier, stopping rules also save computation time. Under a time constraint, we may have no time to iterate until the stopping rule. In this context, we have studied the optimal choice of the number of iterations and the sample size to reach an optimal accuracy. Simulations highlight the trade-off between the number of iterations and the sample size in order to reach an optimal accuracy under a time constraint.
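The stopping-rule idea is easy to illustrate. Below is a minimal sketch (ours, not the thesis's procedure) of gradient descent on a least-squares problem that halts once the per-iteration loss decrease falls under a threshold tau, a crude stand-in for the statistically justified rules studied in the thesis:

    import numpy as np

    def early_stopped_least_squares(X, y, lr=0.01, tau=1e-6, max_iter=10_000):
        """Gradient descent on mean squared error, stopped early."""
        w = np.zeros(X.shape[1])
        prev_loss = np.inf
        for k in range(max_iter):
            w -= lr * X.T @ (X @ w - y) / len(y)
            loss = np.mean((X @ w - y) ** 2)
            if prev_loss - loss < tau:   # stopping rule: halt before overfitting
                return w, k
            prev_loss = loss
        return w, max_iter

    X = np.random.default_rng(0).normal(size=(200, 3))
    y = X @ np.array([1.0, -2.0, 0.5])
    w, k = early_stopped_least_squares(X, y)
    print(w, "stopped at iteration", k)

Stopping earlier returns a slightly less fitted estimator but saves the remaining iterations, which is exactly the accuracy/time trade-off the abstract describes.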
Vaccon, Tristan. "Précision p-adique". Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S032/document.
P-adic numbers form a field in arithmetic analogous to the real numbers. The advent of arithmetic geometry during the last few decades has yielded many algorithms using these numbers. Such numbers can only be handled with finite precision. We design a method, which we call differential precision, to study the behaviour of precision in a p-adic context. It reduces the study to a first-order problem. We also study the question of which Gröbner bases can be computed over a p-adic number field.
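As a toy illustration of finite p-adic precision (not Vaccon's differential method), one can track an absolute precision O(p^N) alongside each value; addition keeps the weaker precision, and the simple multiplication rule below assumes both operands are p-adic units (valuation 0):

    P = 5

    class PadicApprox:
        """A value known modulo P**N, i.e. x + O(P**N)."""
        def __init__(self, value, N):
            self.N = N
            self.value = value % P**N

        def __add__(self, other):
            N = min(self.N, other.N)   # the less precise operand wins
            return PadicApprox(self.value + other.value, N)

        def __mul__(self, other):
            # unit-operand rule; the general rule shifts N by the valuations
            N = min(self.N, other.N)
            return PadicApprox(self.value * other.value, N)

        def __repr__(self):
            return f"{self.value} + O({P}^{self.N})"

    x = PadicApprox(7, 4)        # 7 + O(5^4)
    y = PadicApprox(3, 2)        # 3 + O(5^2)
    print(x + y, x * y)          # both results known only modulo 5^2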
Braconnier, Thierry. "Sur le calcul des valeurs propres en précision finie". Nancy 1, 1994. http://www.theses.fr/1994NAN10023.
Pirus, Denise. "Imprécisions numériques : méthode d'estimation et de contrôle de la précision en C.A.O". Metz, 1997. http://docnum.univ-lorraine.fr/public/UPV-M/Theses/1997/Pirus.Denise.SMZ9703.pdf.
The object of this thesis is to provide a solution to the numerical problems caused by the use of floating-point arithmetic. The first chapter tackles the problems induced by floating-point arithmetic and reviews the different existing methods and tools for solving them. The second chapter is devoted to the study of the propagation of errors through algorithms. Differential analysis is not adequate to obtain a good approximation of the errors affecting the results of a calculation. We then determine an estimate of the loss of precision during the calculation of the intersection point of two lines, according to the angle they form. The third chapter presents the CESTAC method (stochastic checking of rounding in calculations) [Vig93], which makes it possible to estimate the number of significant digits in the result of a numerical calculation. The fourth chapter deals with computer algebra, in particular rational arithmetic and the use of the Pari software, in order to avoid the problems caused by large integers. The fifth chapter describes our methodology, which determines the precision of a calculation with the help of the CESTAC method and switches to rational arithmetic if the precision is not sufficient. We also modify conditional instructions so that tests are executed according to the precision of each datum.
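The random-rounding idea behind CESTAC can be caricatured in a few lines: run the same computation several times with tiny random perturbations standing in for random rounding, then read the number of significant decimal digits off the spread of the results. This is a loose emulation, not the actual method, which perturbs every floating-point operation and uses a Student's t based formula:

    import math, random

    def significant_digits(compute, n_runs=5, eps=2**-53):
        """Crude estimate of the significant decimal digits of compute(perturb)."""
        results = []
        for _ in range(n_runs):
            perturb = lambda v: v * (1 + random.choice([-1, 1]) * eps)
            results.append(compute(perturb))
        mean = sum(results) / n_runs
        var = sum((r - mean) ** 2 for r in results) / (n_runs - 1)
        if var == 0:
            return float("inf")   # no spread detected (limitation of the sketch)
        return max(0.0, math.log10(abs(mean) / math.sqrt(var)))

    # a cancellation-prone difference keeps almost no significant digits
    print(significant_digits(lambda pert: pert(1.0 + 1e-15) - pert(1.0)))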
Nguyen, Hai-Nam. "Optimisation de la précision de calcul pour la réduction d'énergie des systèmes embarqués". Phd thesis, Université Rennes 1, 2011. http://tel.archives-ouvertes.fr/tel-00705141.
Boucher, Mathieu. "Limites et précision d'une analyse mécanique de la performance sur ergocycle instrumenté". Poitiers, 2005. http://www.theses.fr/2005POIT2260.
In biomechanics, the modelling of the human body is a major challenge for estimating, in an imposed task, the muscular effort and the underlying metabolic expenditure. In parallel, the evaluation of physical abilities in sports medicine needs to characterize the athletes' motion and their interactions with the external environment, in order to compare physiological measurements more objectively. Both of these orientations rely mainly on cycling activities. The objective of this work is thus to study the limits of the mechanical analysis of performance on an ergocycle using the inverse dynamics technique. These limits depend on the measuring instruments and on the adequacy between the input data of the cycling model and the measured data. Evaluating the uncertainty of the quantities used in the calculation of inter-segment forces makes it possible to estimate their consequences on the precision of each mechanical parameter used in the analysis of performance.
Khali, Hakim. "Algorithmes et architectures de calcul spécialisés pour un système optique autosynchronisé à précision accrue". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0019/NQ53535.pdf.
Fall, Mamadou Mourtalla. "Contribution à la détermination de la précision de calcul des algorithmes de traitements numériques". Châtenay-Malabry, Ecole centrale de Paris, 1991. http://www.theses.fr/1991ECAP0173.
Haddaoui, Khalil. "Méthodes numériques de haute précision et calcul scientifique pour le couplage de modèles hyperboliques". Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066176/document.
The adaptive numerical simulation of multiscale flows is generally carried out by means of a hierarchy of different models according to the specific scale at play and the level of precision required. This kind of numerical modeling involves complex multiscale coupling problems. This thesis is thus devoted to the development, analysis and implementation of efficient methods for solving coupling problems involving hyperbolic models. In the first part, we develop and analyze a coupling algorithm for one-dimensional Euler systems. Each system of conservation laws is closed with a different pressure law, and the coupling interface separating these models is assumed to be fixed and thin. The transmission conditions linking the systems are modelled thanks to a measure source term concentrated at the coupling interface. The weight associated with this measure models the losses of conservation, and its definition allows the application of several coupling strategies. Our method is based on Suliciu's relaxation approach. The exact resolution of the Riemann problem associated with the relaxed system allows us to design an extremely accurate scheme for the coupling model. This scheme preserves equilibrium solutions of the coupled problem and can be used for general pressure laws. Several numerical experiments assess the performance of our scheme. For instance, we show that it is possible to control the flow at the coupling interface when solving constrained optimization problems for the weights. In the second part of this manuscript, we design two high-order numerical schemes based on the discontinuous Galerkin method for the approximation of the initial-boundary value problem associated with Jin and Xin's model. Our first scheme involves only discretization errors, whereas the second approximation involves both modeling and discretization errors. Indeed, in the second approximation, we replace in some regions the resolution of the relaxation model by the resolution of its associated scalar equilibrium equation. Under the assumption of a possibly characteristic coupling interface, we exactly solve the Riemann problem associated with the coupled model. This resolution allows us to design a high-order numerical scheme which captures the possible boundary layers at the coupling interface. Finally, the implementation of our methods enables us to analyze quantitatively and qualitatively the modeling and discretization errors involved in the coupled scheme. These errors are functions of the mesh size, the degree of the polynomial approximation and the position of the coupling interface.
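For reference, the relaxation model of Jin and Xin mentioned above approximates a scalar conservation law $\partial_t u + \partial_x f(u) = 0$ by a semilinear hyperbolic system with a stiff source term (standard form; the manuscript's notation may differ):

    \partial_t u + \partial_x v = 0,
    \partial_t v + a\,\partial_x u = \frac{1}{\varepsilon}\,\bigl(f(u) - v\bigr),

where $\varepsilon > 0$ is the relaxation time. Under the subcharacteristic condition $a \ge f'(u)^2$, $v$ relaxes to $f(u)$ as $\varepsilon \to 0$ and the equilibrium equation is recovered, which is what makes coupling the two descriptions meaningful.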
Rizzo, Axel. "Amélioration de la précision du formulaire DARWIN2.3 pour le calcul du bilan matière en évolution". Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0306/document.
The DARWIN2.3 calculation package, based on the JEFF-3.1.1 nuclear data library, is devoted to nuclear fuel cycle studies. It is experimentally validated for fuel inventory calculation thanks to dedicated isotopic-ratio measurements performed on in-pile irradiated fuel rod cuts. For some nuclides of interest for the fuel cycle, this experimental validation work points out that the concentration calculation could be improved. The PhD work was done in this framework: having verified that the calculation-to-experiment (C/E) biases are mainly due to nuclear data, two ways of improving fuel inventory calculation are proposed and investigated. On the one hand, nuclear data are improved using the integral data assimilation technique: data from the experimental validation of the DARWIN2.3 fuel inventory calculation are assimilated thanks to the CONRAD code devoted to nuclear data evaluation, and recommendations for nuclear data evaluations are provided on the basis of the analysis of the assimilation work. On the other hand, new experiments should be proposed to validate the nuclear data involved in the buildup of nuclides for which no post-irradiation examination is available to validate the DARWIN2.3 fuel inventory calculation. To that end, the feasibility of an experiment dedicated to the validation of the formation paths of 14C, namely the 14N(n,p) and 17O(n,α) reaction cross sections, was demonstrated.
Hasni, Hamadi. "Logiciels vectoriels d'optimisation de problèmes non contraints de grandes tailles et calcul de la précision". Paris 6, 1986. http://www.theses.fr/1986PA066477.
Madeira, de Campos Velho Pedro Antonio. "Evaluation de précision et vitesse de simulation pour des systèmes de calcul distribué à large échelle". Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENM027/document.
Large-Scale Distributed Computing (LSDC) systems are in production today to solve problems that require huge amounts of computational power or storage. Such systems are composed of a set of computational resources sharing a communication infrastructure. In such systems, as in any computing environment, specialists need to conduct experiments to validate alternatives and compare solutions. However, due to the distributed nature of the resources, performing experiments in LSDC environments is hard and costly. In such systems, the execution flow depends on the order of events, which is likely to change from one execution to another. Consequently, it is hard to reproduce experiments, which hinders the development process. Moreover, resources are very likely to fail or go off-line. Furthermore, LSDC architectures are shared, and interference among different applications, or even among processes of the same application, affects the overall application behavior. Last, LSDC applications are time consuming, so conducting many experiments with several parameter settings is often unfeasible. For all these reasons, experiments in LSDC often rely on simulations. Today we find many simulation approaches for LSDC. Most of them target specific architectures, such as cluster, grid or volunteer computing, and each simulator claims to be more adapted to a particular research purpose. Nevertheless, those simulators must address the same problems: modeling the network and managing computing resources. Moreover, they must satisfy the same requirements: providing fast, accurate, scalable and repeatable simulations. To meet these requirements, LSDC simulations use models to approximate the system behavior, neglecting some aspects to focus on the desired phenomena. However, models may be wrong, and when this is the case, trusting models leads to unsound conclusions. In other words, we need evidence that the models are accurate in order to accept the conclusions supported by simulated results. Although many simulators exist for LSDC, studies of their accuracy are rarely found. In this thesis, we are particularly interested in analyzing and proposing accurate models that respect the requirements of LSDC research. To pursue this goal, we propose an accuracy evaluation study to verify common and new simulation models. Throughout this document, we propose model improvements to mitigate the simulation error of LSDC simulation, using SimGrid as a case study. We also evaluate the effect of these improvements on scalability and speed. As a main contribution, we show that intuitive models have better accuracy, speed and scalability than other state-of-the-art models. These better results are achieved by performing a thorough and systematic analysis of problematic situations. This analysis reveals that many small yet common phenomena had been neglected in previous models and had to be accounted for to design sound models.
Tisseur, Françoise. "Méthodes numériques pour le calcul d'éléments spectraux : étude de la précision, la stabilité et la parallélisation". Saint-Etienne, 1997. http://www.theses.fr/1997STET4006.
Khadraoui, Sofiane. "Calcul par intervalles et outils de l'automatique permettant la micromanipulation à précision qualifiée pour le microassemblage". Thesis, Besançon, 2012. http://www.theses.fr/2012BESA2027/document.
Micromechatronic systems integrate functions of different natures in a very small volume. The trend towards miniaturization and the complexity of the functions to achieve lead to 3-dimensional microsystems. These 3-dimensional systems are formed by microrobotic assembly of various microfabricated and incompatible components. To achieve the assembly operations with high accuracy and high resolution, sensors adapted to the microworld and special tools for the manipulation are required. Microactuators are the main elements that constitute micromanipulation systems. These actuators are often based on smart materials, in particular piezoelectric materials. Piezoelectric materials are characterized by their high resolution (nanometric), large bandwidth (more than a kHz) and high force density. This is why piezoelectric actuators are widely used in micromanipulation and microassembly tasks. However, the behavior of piezoelectric actuators is non-linear and very sensitive to the environment. Moreover, the development of micromanipulation and microassembly tasks is limited by the lack of precise sensors compatible with the microworld dimensions. In the presence of the difficulties related to sensor realization and the complex characteristics of the actuators, it is difficult to obtain the required performance for micromanipulation and microassembly tasks. For that, it is necessary to develop a specific control approach that achieves the wanted accuracy and resolution. The work in this thesis deals with this problem. In order to make micromanipulation and microassembly tasks succeed, robust control approaches such as H∞ have already been tested to control piezoelectric actuators. However, the main drawback of these methods is that they produce high-order controllers. In the case of embedded microsystems, such high-order controllers are time consuming, which limits their embedding possibilities. To address this problem, we propose in our work an alternative solution to model and control microsystems by combining interval techniques with tools from control theory. We also seek to show that the use of these techniques allows deriving robust and low-order controllers.
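The guaranteed-enclosure primitive underlying such interval techniques fits in a few lines (an illustrative sketch, not the thesis's controller synthesis; outward rounding is omitted for brevity):

    class Interval:
        """Closed interval [lo, hi]; each operation encloses the true result."""
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi

        def __add__(self, o):
            return Interval(self.lo + o.lo, self.hi + o.hi)

        def __sub__(self, o):
            return Interval(self.lo - o.hi, self.hi - o.lo)

        def __mul__(self, o):
            p = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
            return Interval(min(p), max(p))

        def __repr__(self):
            return f"[{self.lo}, {self.hi}]"

    gain = Interval(0.9, 1.1)      # actuator gain known to within 10%
    command = Interval(2.0, 2.0)
    print(gain * command)          # guaranteed output enclosure: [1.8, 2.2]

Propagating model uncertainty this way is what allows a controller to come with a qualified, rather than merely nominal, precision.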
Benmouhoub, Farah. "Optimisation de la précision numérique des codes parallèles". Thesis, Perpignan, 2022. http://www.theses.fr/2022PERP0009.
In high performance computing, nearly all implementations and published experiments use floating-point arithmetic. However, since floating-point numbers are finite approximations of real numbers, computations may go wrong because of accumulated errors. These round-off errors may cause damage whose severity varies depending on how critical the application is. Parallelism introduces new numerical accuracy problems due to the order of operations in such systems. The proposed thesis subject concerns this last point: improving the accuracy of massively parallel scientific computing codes such as those found in the field of HPC (High Performance Computing).
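The root cause is easy to reproduce: floating-point addition is not associative, so the reduction order chosen by a parallel runtime changes the result. In IEEE-754 double precision:

    a, b, c = 1e16, -1e16, 1.0
    print((a + b) + c)   # 1.0
    print(a + (b + c))   # 0.0, same operands in a different reduction order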
Rey, Valentine. "Pilotage de stratégies de calcul par décomposition de domaine par des objectifs de précision sur des quantités d’intérêt". Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLN018/document.
This research work aims at contributing to the development of verification tools for linear mechanical problems within the framework of non-overlapping domain decomposition methods. * We propose to improve the quality of the statically admissible stress field required for the computation of the error estimator, thanks to a new methodology of stress reconstruction in the sequential context and to optimizations of the computation of nodal reactions in the substructured context. * We prove guaranteed upper and lower bounds of the error that separate the algebraic error (due to the iterative solver) from the discretization error (due to the finite element method), for both global error measurement and goal-oriented error estimation. This enables the definition of a new stopping criterion for the iterative solver which avoids over-resolution. * We exploit the information provided by the error estimator and the Krylov subspaces built during the resolution to set up an auto-adaptive strategy. This strategy consists in a sequence of resolutions and takes advantage of adaptive remeshing and recycling of search directions. We apply the steering of the iterative solver by objectives of precision on two-dimensional mechanical examples.
Muller, Antoine. "Contributions méthodologiques à l'analyse musculo-squelettique de l'humain dans l'objectif d'un compromis précision performance". Thesis, Rennes, École normale supérieure, 2017. http://www.theses.fr/2017ENSR0007/document.
Musculoskeletal analysis is becoming popular in application fields such as ergonomics, rehabilitation or sports. This analysis enables an estimation of the joint reaction forces and muscle tensions generated during motion. The models and methods used in such analyses give more and more accurate results, but as a consequence software performance is limited: computation time increases, and the experimental protocols and associated post-processing needed to define subject-specific models are long and tedious. Finally, such software requires a high level of expertise to be driven properly. In order to democratize the use of musculoskeletal analysis for a wide range of users, this thesis proposes contributions enabling better performance of such analyses while preserving accuracy, as well as contributions enabling easy subject-specific model calibration. Firstly, in order to control the whole analysis process, the thesis develops a global approach to all the analysis steps: kinematics, dynamics and muscle force estimation. For all of these steps, fast analysis methods have been proposed. In particular, a fast resolution method for the muscle-force sharing problem, based on interpolated data, has been proposed. Moreover, a complete calibration process has been developed, relying on classical motion-analysis tools available in a biomechanics lab: motion capture and force-platform data.
Bouraoui, Rachid. "Calcul sur les grands nombres et VLSI : application au PGCD, au PGCD étendu et à la distance euclidienne". Phd thesis, Grenoble INPG, 1993. http://tel.archives-ouvertes.fr/tel-00343219.
Rolland, Luc Hugues. "Outils algébriques pour la résolution de problèmes géométriques et l'analyse de trajectoire de robots parallèles prévus pour des applications à haute cadence et grande précision". Nancy 1, 2003. http://www.theses.fr/2003NAN10180.
Parallel robots have been introduced in flight simulators because of their high dynamics. Research is now focused on their application as machine tools, where the requirements on accuracy are more stringent. The first objective is to find a resolution method for the kinematics problems. Only a few implementations have succeeded in solving the general case (the Gough platform). We have cataloged 8 algebraic formulations of the geometric model. The selected exact method is based on the computation of Gröbner bases and the Rational Univariate Representation. This method is too slow for trajectory tracking. The second objective is the realization of a certified numerical iterative method (Newton) based on the Kantorovich theorem and interval arithmetic. The third objective is milling task feasibility. A trajectory simulator includes tool accuracy estimations for a given feedrate. One can determine the impact of a given architecture, of the selected sensors and of the controller. The thesis ends with a trajectory certification method, verifying whether the tool can follow a trajectory within a given zone around the nominal trajectory. A convergence theorem is applied to ensure that the forward kinematics model can be solved everywhere in the tube.
Salhi, Yamina. "Étude et réalisation de logiciels d'optimisation non contrainte avec dérivation numérique et estimation de la précision des résultats". Paris 6, 1985. http://www.theses.fr/1985PA066412.
Roch, Jean-Louis. "Calcul formel et parallélisme : l'architecture du système PAC et son arithmétique rationnelle". Phd thesis, Grenoble INPG, 1989. http://tel.archives-ouvertes.fr/tel-00334457.
Magaud, Nicolas. "Changements de Représentation des Données dans le Calcul des Constructions". Phd thesis, Université de Nice Sophia-Antipolis, 2003. http://tel.archives-ouvertes.fr/tel-00005903.
Texto completopreuves formelles en théorie des types. Nous traitons cette question
lors de l'étude
de la correction du programme de calcul de la racine carrée de GMP.
A partir d'une description formelle, nous construisons
un programme impératif avec l'outil Correctness. Cette description
prend en compte tous les détails de l'implantation, y compris
l'arithmétique de pointeurs utilisée et la gestion de la mémoire.
Nous étudions aussi comment réutiliser des preuves formelles lorsque
l'on change la représentation concrète des données.
Nous proposons un outil qui permet d'abstraire
les propriétés calculatoires associées à un type inductif dans
les termes de preuve.
Nous proposons également des outils pour simuler ces propriétés
dans un type isomorphe. Nous pouvons ainsi passer, systématiquement,
d'une représentation des données à une autre dans un développement
formel.
Bosser, Pierre. "Développement et validation d'une méthode de calcul GPS intégrant des mesures de profils de vapeur d'eau en visée multi-angulaire pour l'altimétrie de haute précision". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2008. http://tel.archives-ouvertes.fr/tel-00322404.
The NIGPS project, carried out jointly by IGN and the SA (CNRS), aims to develop a correction of these atmospheric effects based on humidity sounding with a multi-angle water-vapour Raman lidar. This thesis work continues the work presented in 2005 by Jérôme Tarniewicz and pursues the methodological study, the instrumental developments and the experimental validation of the joint analysis of GPS and lidar observations.
After a numerical-simulation study of the effect of the troposphere on GPS and of its correction, we focus on the precise retrieval of water-vapour measurements by Raman lidar. The data acquired during the VAPIC campaign make it possible to verify the impact of the troposphere on GPS. Comparing the lidar observations with those from other instruments validates the lidar measurement and highlights the ability of this technique to capture rapid water-vapour variations. A first evaluation of the correction of GPS observations by zenith-pointing lidar measurements is carried out on 6-hour GPS sessions and shows the benefit of this technique in the cases considered. These results should nevertheless be improved by taking oblique lidar lines of sight into account.
Daumas, Marc. "Contributions à l'arithmétique des ordinateurs : vers une maîtrise de la précision". Phd thesis, Lyon, École normale supérieure (sciences), 1996. http://www.theses.fr/1996ENSL0012.
Boucard, Stéphane. "Calcul de haute précision d'énergies de transitions dans les atomes exotiques et les lithiumoïdes : corrections relativistes, corrections radiatives, structure hyperfine et interaction avec le cortège électronique résiduel". Phd thesis, Université Pierre et Marie Curie - Paris VI, 1998. http://tel.archives-ouvertes.fr/tel-00007148.
Texto completodans les ions lithiumoïdes et les atomes exotiques : 1) Les nouvelles
sources rendent possible la fabrication d'ions lourds fortement
chargés. Nous nous sommes intéressés à l'étude de la structure
hyperfine des ions lithiumoïdes. Cela nous permet d'examiner les
problèmes relativistes à plusieurs corps et la partie magnétique des
corrections d'Electrodynamique Quantique (QED). Dans les ions lourds,
ces dernières sont de l'ordre de quelques pour-cents par rapport à
l'énergie totale de la transition hyperfine. Nous avons également
évalué l'effet de Bohr-Weisskopf lié à la distribution du moment
magnétique dans le noyau. Nous avons calculé puis comparé ces
différentes contributions en incluant les corrections radiatives
(polarisation du vide et self-énergie) ainsi que l'influence du
continuum négatif. 2) Un atome exotique est un atome dans lequel un
électron du cortège est remplacé par une particule de même charge :
$\mu^(-)$, $\pi^(-)$, $\bar(p)$\ldots Des expériences récentes ont
permis de gagner trois ordres de grandeur en précision et en
résolution. Nous avons voulu améliorer la précision des calculs
d'énergies de transitions nécessaires à la calibration et à
l'interprétation dans deux cas : la mesure de paramètres de
l'interaction forte dans l'hydrogène anti-protonique ($\bar(p)$H) et
la détermination de la masse du pion grâce à l'azote pionique
($\pi$N). Nos calculs prennent en compte la structure hyperfine et le
volume de la distribution de charge de la particule. Nous avons
amélioré le calcul de la polarisation du vide qui ne peut plus être
traitée au premier ordre de la théorie des perturbations dans le cas
des atomes exotiques. Pour les atomes anti-protoniques, nous avons
également ajouté la correction du g-2. Elle provient du caractère
composite de l'anti-proton qui de ce fait possède un rapport
gyromagnétique g $\approx$ -5.5856 .
Allart, Emilie. "Abstractions de différences exactes de réseaux de réactions : améliorer la précision de prédiction de changements de systèmes biologiques". Thesis, Lille, 2021. http://www.theses.fr/2021LILUI013.
Change predictions for reaction networks with partial kinetic information can be obtained by qualitative reasoning with abstract interpretation. A typical change prediction problem in systems biology is which gene knockouts may, or must, increase the outflow of a target species at a steady state. Answering such questions for reaction networks requires reasoning about abstract differences such as "increases" and "decreases". A task fundamental to change prediction was introduced by Niehren, Versari, John, Coutte, and Jacques (2016): the problem of computing, for a given system of linear equations with nonlinear difference constraints, the difference abstraction of the set of its positive solutions. Previous approaches provided overapproximation algorithms for this task based on various heuristics, for instance by rewriting the linear equations. In this thesis, we present the first algorithms that solve this task exactly for the two difference abstractions used in the literature so far. As a first contribution, we show how to characterize, for a linear equation system, the boolean abstraction of its set of positive solutions. This abstraction maps any strictly positive real number to 1 and 0 to 0. The characterization is given by the set of boolean solutions of another equation system, which we compute based on elementary modes. The boolean solutions of the characterizing equation system can then be computed in practice based on finite-domain constraint programming. We believe that this result is also relevant for the analysis of functional programs with linear arithmetic. As a second contribution, we present two algorithms that compute, for a given system of linear equations and nonlinear difference constraints, the exact difference abstraction into Delta_3 and Delta_6 respectively. These algorithms rely on the characterization of boolean abstractions of linear equation systems from the first contribution. The bridge between these abstractions is defined in first-order logic, so that the difference abstraction can be computed by finite-set constraint programming too. We implemented our exact algorithms and applied them to predicting gene knockouts that may lead to leucine overproduction in B. subtilis, as needed for surfactin overproduction in biotechnology. Computing the precise predictions with the exact algorithm may take several hours, though. Therefore, we also present a new heuristic for computing difference abstractions based on elementary modes, which provides a good compromise between precision and time efficiency.
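The boolean abstraction in the first contribution can be made concrete on a tiny example (ours, not the thesis's algorithm). Over nonnegative reals, a linear equation such as x = y + z abstracts to the boolean constraint b_x = b_y OR b_z, whose solutions can be enumerated by brute force:

    from itertools import product

    # boolean abstraction of x = y + z over nonnegative reals:
    # x > 0 holds iff y > 0 or z > 0
    solutions = [(bx, by, bz)
                 for bx, by, bz in product([0, 1], repeat=3)
                 if bx == (by or bz)]
    print(solutions)   # [(0, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]

The thesis's contribution is to compute such abstractions exactly for whole systems, where equations interact and naive per-equation reasoning overapproximates.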
Chotin-Avot, Roselyne. "Architectures matérielles pour l'arithmétique stochastique discrète". Paris 6, 2003. http://hal.upmc.fr/tel-01267458.
Tan, Pauline. "Précision de modèle et efficacité algorithmique : exemples du traitement de l'occultation en stéréovision binoculaire et de l'accélération de deux algorithmes en optimisation convexe". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX092/document.
This thesis is split into two relatively independent parts. The first part is devoted to the binocular stereovision problem, specifically to occlusion handling. An analysis of this phenomenon leads to a regularity model which includes a convex visibility constraint. The resulting energy functional is minimized by convex relaxation. The occluded areas are then detected thanks to the horizontal slope of the disparity map, and densified. Another method with occlusion handling was proposed by Kolmogorov and Zabih. Because of its efficiency, we adapted it to two auxiliary problems encountered in stereovision, namely the densification of sparse disparity maps and the subpixel refinement of pixel-accurate maps. The second part of this thesis studies two convex optimization algorithms for which an acceleration is proposed. The first one is the Alternating Direction Method of Multipliers (ADMM); a slight relaxation in the parameter choice is shown to enhance the convergence rate. The second one is an alternating proximal descent algorithm, which allows a parallel approximate resolution of the Rudin-Osher-Fatemi (ROF) pure denoising model in the color-image case. A FISTA-like acceleration is also proposed.
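For readers unfamiliar with ADMM, here is a minimal loop on a toy consensus problem, minimize (1/2)||x - b||^2 + lam*||z||_1 subject to x = z (an illustration of the algorithm itself, unrelated to the thesis's experiments); rho is the penalty parameter whose relaxed choice the thesis analyzes:

    import numpy as np

    def admm_l1(b, lam=0.5, rho=1.0, n_iter=200):
        x = np.zeros_like(b); z = np.zeros_like(b); u = np.zeros_like(b)
        for _ in range(n_iter):
            x = (b + rho * (z - u)) / (1 + rho)   # prox of the quadratic term
            t = x + u                             # soft-thresholding (prox of l1)
            z = np.sign(t) * np.maximum(np.abs(t) - lam / rho, 0.0)
            u += x - z                            # scaled dual update
        return z

    print(admm_l1(np.array([3.0, 0.2, -1.0])))    # [ 2.5  0.  -0.5]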
Laizet, Sylvain. "Développement d'un code de calcul combinant des schémas de haute précision avec une méthode de frontières immergées pour la simulation des mouvements tourbillonnaires en aval d'un bord de fuite". Poitiers, 2005. http://www.theses.fr/2005POIT2339.
Carrying out simulations of the vortex dynamics behind a trailing edge remains a difficult task in fluid mechanics. Numerical developments have been performed in a computer code which solves the incompressible Navier-Stokes equations with high-order compact finite-difference schemes on a Cartesian grid. The specificity of this code is that the Poisson equation is solved in spectral space with the modified spectral formalism. This code can be combined with an immersed boundary method in order to simulate flows with complex geometry. Particular work was done to improve the resolution of the Poisson equation in order to use a stretched mesh and a staggered grid for the pressure. Two mixing-layer flows, with a blunt and a bevelled trailing edge, were simulated in order to determine the influence of the shape of the separating plate on the vortex dynamics.
Peou, Kenny. "Computing Tools for HPDA : a Cache-Oblivious and SIMD Approach". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG105.
This work presents three contributions to the fields of CPU vectorization and machine learning. The first contribution is an algorithm for computing an average over half-precision floating-point values. In this work, performed with limited half-precision hardware support, we use an existing software library to emulate half-precision computation. This allows us to compare the numerical precision of our algorithm to that of various commonly used algorithms. Finally, we perform runtime benchmarks using single- and double-precision floating-point values in order to anticipate the potential gains from applying CPU vectorization to half-precision values. Overall, we find that our algorithm has slightly worse best-case numerical performance in exchange for significantly better worst-case numerical performance, all while providing runtime performance similar to other algorithms. The second contribution is a fixed-point computation library designed specifically for CPU vectorization. Existing libraries rely on compiler auto-vectorization, which fails to vectorize arithmetic multiplication and division operations. In addition, these two operations require cast operations, which reduce vectorizability and have a real computational cost. To alleviate this, we present a fixed-point data storage format that does not require any cast operations to perform arithmetic operations. In addition, we present a number of benchmarks comparing our implementation to existing libraries, and we report the CPU vectorization speedup on a number of architectures. Overall, we find that our fixed-point format allows runtime performance equal to or better than all compared libraries. The final contribution is a neural network inference engine designed to perform experiments varying the numerical data types used in the inference computation. This inference engine allows layer-specific control of which data types are used to perform inference. We use this level of control to perform experiments determining how aggressively it is possible to reduce the numerical precision used in inferring the PVANet neural network. In the end, we determine that a combination of the standardized float16 and bfloat16 data types is sufficient for the entire inference.
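The pitfall that motivates a dedicated half-precision averaging algorithm is easy to demonstrate with emulated float16 (a sketch of the problem, not of the thesis's algorithm): a naive accumulator stops growing once the increment falls below the accumulator's rounding unit, while a running mean whose update is carried in wider precision stays usable:

    import numpy as np

    xs = np.full(100_000, 1.0, dtype=np.float16)

    acc = np.float16(0.0)              # naive float16 sum
    for x in xs:
        acc = acc + x
    print(acc)                         # 2048.0: the sum stalls, far from 100000.0

    mean = np.float16(0.0)             # running mean, update in float32
    for i, x in enumerate(xs):
        delta = (np.float32(x) - np.float32(mean)) / (i + 1)
        mean = np.float16(np.float32(mean) + delta)
    print(mean)                        # ~1.0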
Barotto, Béatrice. "Introduction de paramètres stochastiques pour améliorer l'estimation des trajectoires d'un système dynamique par une méthode de moindres carrés : application à la détermination de l'orbite d'un satellite avec une précision centimétrique". Toulouse 3, 1995. http://www.theses.fr/1995TOU30196.
Fontbonne, Cathy. "Acquisition multiparamétrique de signaux de décroissance radioactive pour la correction des défauts instrumentaux : application à la mesure de la durée de vie du 19Ne". Thesis, Normandie, 2017. http://www.theses.fr/2017NORMC204/document.
The aim of this thesis is to propose a method for precise half-life measurements adapted to nuclides with half-lives of a few seconds. The FASTER real-time digital acquisition system gives access to the physical characteristics of the signal induced by the detection of each decay during the counting period following beam implantation. The selection of the counting data can thus be carried out by an optimized post-experiment offline analysis. After establishing the influence factors affecting the measurement (pile-up, gain and baseline fluctuations), we are able to estimate, a posteriori, their impact on the half-life estimation. This way, we can choose the deposited-energy threshold and the dead time so as to minimize their effect. This thesis also proposes a method for measuring, and then compensating for, variations of the influence factors. This method was applied to estimate the 19Ne half-life with a relative uncertainty of 1.2 × 10⁻⁴, leading to T1/2 = 17.2569(21) s. This is the most precise measurement to date for this isotope.
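For orientation, the statistics underneath such a measurement reduces, in the ideal case, to a textbook estimator: for a pure exponential decay, the maximum-likelihood estimate of the half-life is ln(2) times the mean of the observed decay times. A simulation sketch (the thesis's point is precisely the instrumental corrections this idealization ignores):

    import numpy as np

    rng = np.random.default_rng(0)
    T_half = 17.2569                                   # seconds
    t = rng.exponential(T_half / np.log(2), 100_000)   # simulated decay times
    print(np.log(2) * t.mean())                        # ML estimate of T_half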
Lopes, Quintas Christian Louis. "Système ultra précis de distribution du temps dans le domaine de la picoseconde". Paris, CNAM, 2001. http://www.theses.fr/2001CNAM0387.
Beaudoin, Normand. "Méthode mathématique et numérique de haute précision pour le calcul des transformées de Fourier, intégrales, dérivées et polynômes splines de tout ordre ; Déconvolution par transformée de Fourier et spectroscopie photoacoustique à résolution temporelle". Thèse, Université du Québec à Trois-Rivières, 1999. http://depot-e.uqtr.ca/6708/1/000659516.pdf.
Chohra, Chemseddine. "Towards reproducible, accurately rounded and efficient BLAS". Thesis, Perpignan, 2017. http://www.theses.fr/2017PERP0065.
Numerical reproducibility failures arise in parallel computation because floating-point summation is non-associative. Massively parallel systems dynamically modify the order of floating-point operations; hence, numerical results might change from one run to another. We propose to ensure reproducibility by extending, as far as possible, the IEEE-754 correct rounding property to larger computing sequences. We introduce RARE-BLAS, a reproducible and accurate BLAS library that benefits from recent accurate and efficient summation algorithms. Solutions for level 1 (asum, dot and nrm2) and level 2 (gemv and trsv) routines are designed. Implementations relying on parallel programming APIs (OpenMP, MPI) and SIMD extensions are proposed. Their efficiency is studied in comparison with an optimized library (Intel MKL) and with other existing reproducible algorithms.
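The building block of such accurate summations is an error-free transformation: two floating-point numbers are replaced by their rounded sum plus the exact rounding error. A sketch with Knuth's TwoSum and a compensated loop (illustrative; RARE-BLAS combines more elaborate, vectorized variants):

    def two_sum(a, b):
        """Return (s, e) with s = fl(a + b) and a + b = s + e exactly."""
        s = a + b
        bv = s - a
        av = s - bv
        return s, (a - av) + (b - bv)

    def compensated_sum(values):
        s = c = 0.0
        for v in values:
            s, e = two_sum(s, v)
            c += e              # sum of the exact local rounding errors
        return s + c

    data = [1e16, 1.0, -1e16, 1.0]
    print(sum(data), compensated_sum(data))   # 1.0 versus the exact 2.0

Because the compensation captures every local error, the result becomes far less sensitive to the summation order, which is the property reproducible BLAS routines build on.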
Mezzarobba, Marc. "Autour de l'évaluation numérique des fonctions D-finies". Phd thesis, Ecole Polytechnique X, 2011. http://pastel.archives-ouvertes.fr/pastel-00663017.
Bizouard, Vincent. "Calculs de précision dans un modèle supersymétrique non minimal". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAY075/document.
Although the Standard Model has been very successful so far, it presents several limitations showing that it is only an effective low-energy theory. For example, neutrino masses and dark matter are not predicted in this model. Gravity is also not taken into account, and we expect it to play a quantum role at energies around the Planck mass. Moreover, the radiative corrections to the Higgs boson mass suffer from quadratic divergences. All these problems underline the fact that new physics should appear, and that it has to be described by an extension of the Standard Model. One well-motivated possibility is to add a new space-time symmetry, called Supersymmetry, which links bosons and fermions. In its minimal extension, Supersymmetry can already solve the dark matter paradox with a natural candidate, the neutralino, and provide a cancellation of the dangerous quadratic corrections to the Higgs boson mass. In this thesis, we focused on the Next-to-Minimal Supersymmetric extension of the Standard Model, the NMSSM. To compare theoretical predictions with experiments, physical observables must be computed precisely. Since these calculations are long and complex, automation is desirable. This was done by developing SloopS, a program to compute decay widths and cross-sections at one-loop order in Supersymmetry. With this code, we first analysed the decay of the Higgs boson into a photon and a Z boson. This decay mode is induced at the quantum level and is thus an interesting probe of new physics. Its measurement started during Run 1 of the LHC and continues now in Run 2. The possibility of a deviation between the measured signal strength and the one predicted by the Standard Model motivates a careful theoretical analysis in beyond-Standard-Model theories, which we carried out within the NMSSM. Our goal was to compute radiative corrections for any process in this model. To cancel the ultraviolet divergences appearing in higher-order computations, we had to carry out and implement the renormalisation of the NMSSM in SloopS. Finally, it was possible to use the renormalised model to compute radiative corrections to the masses and decay widths of Higgs bosons and supersymmetric particles in the NMSSM, and to compare the results between different renormalisation schemes.
Rico, Fabien. "Fonctions élémentaires : algorithmes et précisions". Montpellier 2, 2001. http://www.theses.fr/2001MON20052.
Tisserand, Arnaud. "Étude et conception d'opérateurs arithmétiques". Habilitation à diriger des recherches, Université Rennes 1, 2010. http://tel.archives-ouvertes.fr/tel-00502465.
Bocco, Andrea. "A variable precision hardware acceleration for scientific computing". Thesis, Lyon, 2020. http://www.theses.fr/2020LYSEI065.
Most floating-point (FP) hardware units support the formats and operations specified in the IEEE 754 standard. These formats have fixed bit lengths, defined on 16, 32, 64 and 128 bits. However, some applications, such as linear system solvers and computational geometry, benefit from different formats which can express FP numbers of different sizes, with different trade-offs between the exponent and mantissa fields. The class of Variable Precision (VP) formats meets these requirements. This research proposes a VP FP computing system based on three computation layers. The external layer supports legacy IEEE formats for input and output variables. The internal layer uses variable-length internal registers for inner-loop multiply-add operations. Finally, an intermediate layer supports loads and stores of intermediate results to cache memory without losing precision, with a dynamically adjustable VP format. The VP unit exploits the UNUM type I FP format and proposes solutions to address some of its pitfalls, such as the variable latency of the internal operations and the variable memory footprint of the intermediate variables. Unlike IEEE 754, in UNUM type I the size of a number is stored within its representation. The unit implements a fully pipelined architecture and supports up to 512 bits of precision, internally and in memory, for both interval and scalar computing. The user can configure the storage format and the internal computing precision at 8-bit and 64-bit granularity. The system is integrated as a RISC-V coprocessor. It has been prototyped on an FPGA (Field-Programmable Gate Array) platform and also synthesized for a 28 nm FDSOI process technology; the respective working frequencies of the FPGA and ASIC implementations are 50 MHz and 600 MHz. Synthesis results show an estimated chip area of 1.5 mm² and an estimated power consumption of 95 mW. The experiments emulated in the FPGA environment show that the latency and the computation accuracy of this system scale linearly with the memory format length set by the user. In cases where legacy IEEE-754 formats do not converge, this architecture can achieve up to 130 decimal digits of precision, increasing the chances of obtaining output data with an accuracy similar to that of the input data. This high accuracy opens the possibility of using direct methods, which are more sensitive to computational error, instead of iterative methods, which always converge but whose latency is about ten times higher than that of direct methods. Compared to low-precision FP formats, in iterative methods the use of high-precision VP formats helps to drastically reduce the number of iterations required for the iterative algorithm to converge, reducing the application latency by up to 50%. Compared with the MPFR software library, the proposed unit achieves speedups between 3.5x and 18x, with comparable accuracy.
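A software stand-in for the idea (using mpmath's arbitrary precision rather than UNUM hardware, and our own example): raising the working precision lets a direct LU solve succeed on an ill-conditioned system where double-like precision fails:

    from mpmath import mp, hilbert, lu_solve, matrix, norm

    for digits in (16, 64, 256):        # emulate a configurable-precision unit
        mp.dps = digits
        A = hilbert(12)                 # notoriously ill-conditioned 12x12 system
        x_true = matrix([1] * 12)
        x = lu_solve(A, A * x_true)
        print(digits, float(norm(x - x_true)))   # error shrinks with precision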
Hilico, Laurent. "Mesures de fréquences et calculs de haute précision en physique atomique et moléculaire". Habilitation à diriger des recherches, Université d'Evry-Val d'Essonne, 2002. http://tel.archives-ouvertes.fr/tel-00001922.
Graillat, Stef. "Fiabilité des algorithmes numériques : pseudosolutions structurées et précisions". Perpignan, 2005. http://www.theses.fr/2005PERP0674.
The results summarized in this document deal with the stability and accuracy of some numerical algorithms. The contributions of this work are divided into four parts: 1) Improvement of accuracy: we present a compensated Horner scheme that computes a result as if it were computed in twice the working precision. 2) Applications of pseudozero sets: we propose some applications of pseudozeros in computer algebra (approximate coprimeness) and in control theory (stability radius and pseudoabscissa). 3) Real perturbations: we give computable formulas for the real condition number and the real backward error for the problems of polynomial evaluation and the computation of zeros. We show that there is little difference between the real and complex condition numbers. In contrast, we show that the real backward error can be significantly larger than the complex one. 4) Structured matrix perturbations: we study the notion of structured pseudospectra for Toeplitz, Hankel and circulant matrices. We show that for these structures there is no difference between the structured and the unstructured pseudospectra. We also study structured condition numbers for linear systems, inversion and distance to singularity for structures deriving from Lie and Jordan algebras. We show that, under mild assumptions, there is little or no difference between the structured and the unstructured condition numbers.
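Point 1 can be sketched concretely. The compensated Horner scheme combines two error-free transformations, TwoSum for additions and TwoProd (via Dekker splitting, absent a fused multiply-add) for multiplications, and adds the accumulated local errors back at the end. A double-precision sketch in the spirit of the scheme, not the thesis's exact code:

    _SPLIT = 2.0**27 + 1.0   # Dekker splitting constant for IEEE doubles

    def two_prod(a, b):
        """Return (p, e) with p = fl(a * b) and a * b = p + e exactly."""
        p = a * b
        ah = a * _SPLIT; ah = ah - (ah - a); al = a - ah
        bh = b * _SPLIT; bh = bh - (bh - b); bl = b - bh
        return p, ((ah * bh - p) + ah * bl + al * bh) + al * bl

    def two_sum(a, b):
        s = a + b
        bv = s - a
        return s, (a - (s - bv)) + (b - bv)

    def comp_horner(coeffs, x):
        """Horner evaluation, about as accurate as doubled working precision."""
        s, c = coeffs[0], 0.0
        for a in coeffs[1:]:
            p, pi = two_prod(s, x)
            s, sigma = two_sum(p, a)
            c = c * x + (pi + sigma)   # propagate the local rounding errors
        return s + c

    # (x - 1)**5 expanded, evaluated near its root, where plain Horner loses digits
    cs = [1.0, -5.0, 10.0, -10.0, 5.0, -1.0]
    print(comp_horner(cs, 0.9998), (0.9998 - 1.0)**5)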
Gallois-Wong, Diane. "Formalisation en Coq des algorithmes de filtre numérique calculés en précision finie". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG016.
Digital filters have numerous applications, from telecommunications to aerospace. To be used in practice, a filter needs to be implemented using finite precision (floating- or fixed-point arithmetic). The resulting rounding errors may become especially problematic in embedded systems: tight time, space and energy constraints mean that we often need to cut into the precision of computations in order to improve their efficiency. Moreover, digital filter programs are strongly iterative: rounding errors may propagate and accumulate through many successive iterations. As some of the application domains are critical, I study rounding errors in digital filter algorithms using formal methods in order to provide stronger guarantees. More specifically, I use Coq, a proof assistant that ensures the correctness of this numerical behavior analysis. I aim at providing certified bounds on the error between the outputs of an implemented filter (computed using finite precision) and those of the original model filter (theoretically defined with exact operations). Another goal is to guarantee that no catastrophic behavior (such as unexpected overflows) will occur. Using Coq, I define linear time-invariant (LTI) digital filters in the time domain. I formalize a universal form called SIF: any LTI filter algorithm may be expressed as a SIF while retaining its numerical behavior. I then prove the error-filter theorem and the Worst-Case Peak Gain theorem. These two theorems allow us to analyze the numerical behavior of the filter described by a given SIF. This analysis also involves the sum-of-products algorithm used during the computation of the filter. Therefore, I formalize several sum-of-products algorithms that offer various trade-offs between output precision and computation speed, including a new algorithm whose output is correctly rounded to nearest. I also formalize modular overflows, and prove that one of the previous sum-of-products algorithms remains correct even when such overflows are taken into account.
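The Worst-Case Peak Gain theorem mentioned above states that, for a stable LTI filter with impulse response h and input bounded by |u[k]| <= M, the output satisfies |y[k]| <= M * sum over k of |h[k]|. A numerical sketch with a truncated sum (illustrative only; the thesis's point is to certify such bounds formally):

    def impulse_response(b, a, n):
        """First n samples of h for a[0]*y[k] = sum b[i]*u[k-i] - sum_{j>=1} a[j]*y[k-j]."""
        h = []
        for k in range(n):
            acc = b[k] if k < len(b) else 0.0   # the input is the unit impulse
            for j in range(1, len(a)):
                if k - j >= 0:
                    acc -= a[j] * h[k - j]
            h.append(acc / a[0])
        return h

    def worst_case_peak_gain(b, a, n=10_000):
        return sum(abs(v) for v in impulse_response(b, a, n))

    # first-order lowpass y[k] = 0.1*u[k] + 0.9*y[k-1]: exact gain 0.1/(1-0.9) = 1.0
    print(worst_case_peak_gain([0.1], [1.0, -0.9]))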
Marié, Simon. "Etude de la méthode Boltzmann sur Réseau pour les simulations en aéroacoustique". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2008. http://tel.archives-ouvertes.fr/tel-00311293.
First, the historical and theoretical elements of the LBM are presented, together with the development leading from the Boltzmann equation to the Navier-Stokes equations. The construction of discrete-velocity models is also described, and two models based on different collision operators are presented: the LBM-BGK model and the LBM-MRT model. To study the aeroacoustic capabilities of the LBM, a von Neumann analysis is carried out for the LBM-BGK and LBM-MRT models, as well as for the discrete-velocity Boltzmann equation (DVBE), and a comparison with high-order Navier-Stokes schemes is conducted. To remedy the numerical instabilities of the Lattice Boltzmann Method that appear during propagation in particular directions at M > 0.1, selective filters are used and their effect on dissipation is studied.
Second, the L-BEAM computation code is presented. Its general structure and the different computation techniques are described, and a resolution-transition algorithm is developed. Turbulence modeling is addressed, and the Meyers-Sagaut model is implemented in the code. Finally, numerical test cases are used to validate the code, and the simulation of a complex turbulent flow is carried out.
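For reference, the collision-propagation update at the core of both models can be written, in the BGK (single relaxation time) case and in standard notation (not necessarily the manuscript's),

    f_i(x + c_i \Delta t, t + \Delta t) = f_i(x, t) - \frac{\Delta t}{\tau} \left[ f_i(x, t) - f_i^{eq}(x, t) \right],

where the $f_i$ are the particle distributions along the discrete velocities $c_i$, $f_i^{eq}$ is the local equilibrium, and the relaxation time $\tau$ sets the viscosity; the MRT variant replaces the single rate $1/\tau$ by a relaxation matrix acting on moments.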
Romera, Thomas. "Adéquation algorithme architecture pour flot optique sur GPU embarqué". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS450.
This thesis focuses on the optimization and efficient implementation of pixel-motion (optical flow) estimation algorithms on embedded graphics processing units (GPUs). Two iterative algorithms have been studied: the Total Variation - L1 (TV-L1) method and the Horn-Schunck method. The primary objective of this work is to achieve real-time processing, with a target frame-processing time of less than 40 milliseconds, on low-power platforms, while maintaining acceptable image resolution and flow-estimation quality for the intended applications. Various levels of optimization strategies have been explored. High-level algorithmic transformations, such as operator fusion and operator pipelining, have been implemented to maximize data reuse and enhance spatial/temporal locality. Additionally, GPU-specific low-level optimizations, including the use of vector instructions and vector number formats, as well as efficient memory-access management, have been incorporated. The impact of the floating-point representation (single precision versus half precision) has also been investigated. The implementations have been assessed on Nvidia's Jetson Xavier, TX2 and Nano embedded platforms in terms of execution time, power consumption and optical-flow accuracy. Notably, the TV-L1 method exhibits higher complexity and computational intensity than Horn-Schunck. The fastest versions of these algorithms achieve a processing rate of 0.21 nanoseconds per pixel per iteration in half precision on the Xavier platform, representing a 22x time reduction over efficient and parallel CPU versions; energy consumption is also reduced by a factor of 5.3. Among the tested boards, the Xavier embedded platform, being both the most powerful and the most recent, consistently delivers the best results in terms of speed and energy efficiency. Operator merging and pipelining have proven instrumental in improving GPU performance by enhancing data reuse. This data reuse is made possible through GPU shared memory, a small high-speed memory that enables data sharing among threads within the same GPU thread block. While merging multiple iterations yields performance gains, it is constrained by the size of the shared memory, necessitating trade-offs between resource utilization and speed. The adoption of half-precision numbers accelerates iterative algorithms and achieves better optical-flow accuracy within the same time frame than single-precision counterparts, since half-precision implementations can run more iterations within a given time window. Specifically, the use of half-precision numbers on the best GPU architecture accelerates execution by up to 2.2x for TV-L1 and 3.7x for Horn-Schunck. This work underscores the significance of GPU-specific optimizations for computer vision algorithms, together with the use and study of reduced-precision floating-point numbers. It paves the way for future enhancements through new algorithmic transformations, alternative numerical formats and hardware architectures. This approach can potentially be extended to other families of iterative algorithms.
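For concreteness, the Horn-Schunck iteration that such implementations optimize looks as follows (standard update derived from the Euler-Lagrange equations, written with a simplified four-neighbour average; illustrative NumPy, not the thesis's GPU code):

    import numpy as np

    def horn_schunck(Ix, Iy, It, alpha=1.0, n_iter=100):
        """Estimate optical flow (u, v) from image gradients Ix, Iy, It."""
        u = np.zeros_like(Ix); v = np.zeros_like(Ix)
        avg = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                         np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
        for _ in range(n_iter):
            u_bar, v_bar = avg(u), avg(v)
            num = Ix * u_bar + Iy * v_bar + It
            den = alpha**2 + Ix**2 + Iy**2
            u = u_bar - Ix * num / den
            v = v_bar - Iy * num / den
        return u, v

Each iteration is a handful of elementwise operations plus a neighbourhood average, which is why operator fusion, shared-memory reuse and half-precision storage pay off so directly on GPUs.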
Charles, Joseph. "Amélioration des performances de méthodes Galerkin discontinues d'ordre élevé pour la résolution numérique des équations de Maxwell instationnaires sur des maillages simplexes". Phd thesis, Université Nice Sophia Antipolis, 2012. http://tel.archives-ouvertes.fr/tel-00718571.
Roudjane, Mourad. "Etude expérimentale et théorique des spectres d'émission et d'absorption VUV des molécules H2, D2 et HD". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2007. http://tel.archives-ouvertes.fr/tel-00208073.
The objective of this thesis is to carry out a high-resolution experimental study of the emission and absorption spectra of the D2 and HD isotopologues of molecular hydrogen in the VUV, and to complete it with a theoretical study of the excited electronic states related to the observed transitions. Such a study had previously been carried out in our laboratory and had led to the production of a VUV atlas covering the 78-170 nm range.
The emission spectra of HD and D2 are produced by a Penning discharge source operating at low pressure, and are recorded in the 78-170 nm spectral region using the 10-metre high-resolution (~150,000) vacuum spectrograph of the Meudon Observatory, either on photographic plates or on photostimulable phosphor image plates for intensity measurements. The recorded spectra contain more than 20,000 lines. The wavelengths are measured with an accuracy of Δλ/λ = 10⁻⁶. Since lines of the D2 and H2 molecules are inevitably present in the HD spectrum, we first carried out the analysis of the D2 spectrum, which consists in identifying the lines and assigning them to electronic transitions between energy levels of the molecule.
We also carried out an absorption study of the HD and D2 molecules at the LCVU laser centre in Amsterdam. Using 1 XUV + 1 UV two-photon laser spectroscopy, we measured new wavelengths with an unprecedented accuracy of Δλ/λ = 10⁻⁸ in the 99.9-104 nm spectral range made accessible by the tunability of the XUV laser.
These new wavelengths will constitute a database of reference lines for the calibration of molecular spectra, but their interest is not limited to the laboratory. Indeed, the new HD lines measured by laser spectroscopy, added to the H2 lines already measured with similar accuracy, will be used as references to search for a possible cosmological variation of the proton-to-electron mass ratio μ = mp/me, by comparison with the wavelengths of H2 or HD lines observed in the absorption spectra of quasars at high redshift. This study requires the knowledge of the sensitivity coefficients of the wavelengths with respect to a possible variation of μ, which we computed by solving a system of coupled equations for the electronic states B, B', C and D of the H2 and HD molecules, for various values of μ.
During this thesis work, we also studied transitions between free-free and free-bound states of the H2 molecule. These transitions occur during an H-H collision forming a quasi-molecule, and are responsible for the appearance of satellites in the wings of the hydrogen-atom lines. We performed a quantum study of the quasi-molecular satellite of the Lyman-β line and computed the absorption profile of the satellite as a function of temperature. This variation is an important diagnostic tool for determining the characteristics of white-dwarf atmospheres.
Durochat, Clément. "Méthode de type Galerkin discontinu en maillages multi-éléments (et non-conformes) pour la résolution numérique des équations de Maxwell instationnaires". Thesis, Nice, 2013. http://www.theses.fr/2013NICE4005.
This thesis is concerned with the study of a Discontinuous Galerkin Time-Domain (DGTD) method for the numerical resolution of the unsteady Maxwell equations on hybrid tetrahedral/hexahedral (triangular/quadrangular in 2D) and non-conforming meshes, denoted the DGTD-PpQk method. As in several studies on various hybrid time-domain methods (such as combinations of Finite Volume with Finite Difference methods, or Finite Element with Finite Difference methods, etc.), our general objective is to mesh objects with complex geometry with tetrahedra, for high precision, and to mesh the surrounding space with square elements, for simplicity and speed. In the discretization scheme of the DGTD method considered here, the electromagnetic field components are approximated by high-order nodal polynomials, using a centered approximation for the surface integrals. Time integration of the associated semi-discrete equations is achieved by a second- or fourth-order leap-frog scheme. After introducing the historical and physical context of the Maxwell equations, we present the details of the DGTD-PpQk method. We prove the L2 stability of this method by establishing the conservation of a discrete analog of the electromagnetic energy, and a sufficient CFL-like stability condition is exhibited. The theoretical convergence of the scheme is also studied; this leads to an a priori error estimate that takes into account the hybrid nature of the mesh. Afterwards, we perform a complete numerical study in 2D (TMz waves) for several test problems, on hybrid and non-conforming meshes, and for homogeneous or heterogeneous media. We do the same for the 3D implementation, with more realistic simulations, for example the propagation in a heterogeneous human head model. We show the consistency between the mathematical and numerical results of this DGTD-PpQk method, and its contribution in terms of accuracy and CPU time.