Dissertations / Theses on the topic "Calcul en précision mixte"
Below are the top 50 dissertations / theses (graduate or doctoral) for research on the topic "Calcul en précision mixte".
Rappaport, Ari. "Estimations d'erreurs a posteriori et adaptivité en approximation numérique des EDPs : régularisation, linéarisation, discrétisation et précision en virgule flottante". Electronic Thesis or Diss., Sorbonne université, 2024. http://www.theses.fr/2024SORUS057.
This thesis concerns a posteriori error analysis and adaptive algorithms to approximately solve nonlinear partial differential equations (PDEs). We consider PDEs of both elliptic and degenerate parabolic type. We also study adaptivity in the floating-point precision of a multigrid solver for systems of linear algebraic equations. In the first two chapters, we consider elliptic PDEs arising from an energy minimization problem. The a posteriori analysis therein is based directly on the difference of the energy between the true and approximate solutions. The nonlinear operators of the elliptic PDEs we consider are strongly monotone and Lipschitz continuous. In this context, an important quantity is the “strength of the nonlinearity” given by the ratio L/α, where L is the Lipschitz continuity constant and α is the (strong) monotonicity constant. In Chapter 1 we study an adaptive algorithm comprising adaptive regularization, discretization, and linearization. The algorithm is applied to an elliptic PDE with a nonsmooth nonlinearity. We derive a guaranteed upper bound based on a primal-dual gap estimator. Moreover, we isolate components of the error corresponding to regularization, discretization, and linearization, which lead to adaptive stopping criteria. We prove that the component estimators converge to zero in the respective limits of the regularization, discretization, and linearization steps of the algorithm. We present numerical results demonstrating the effectiveness of the algorithm, as well as numerical evidence of robustness with respect to the aforementioned ratio L/α, which motivates the work of the second chapter. In Chapter 2, we consider the question of efficiency and robustness of the primal-dual gap error estimator. In particular, we consider an augmented energy difference, for which we establish independence of the ratio L/α (robustness) for the Zarantonello linearization, and only patch-local and computable dependence for other linearization methods, including the Newton linearization. Numerical results are presented to substantiate the theoretical developments. In Chapter 3 we turn our attention to the problem of adaptive regularization for the Richards equation, which appears in the context of porous media modeling. It contains nonsmooth nonlinearities, which are amenable to the same approach we adopt in Chapter 1. We develop an adaptive algorithm whose estimators are inspired by estimators based on the dual norm of the residual. We test our algorithm on a series of numerical examples coming from the literature. In Chapter 4 we provide details for an efficient implementation of the equilibrated flux, a crucial ingredient in computing the error estimators discussed so far. The implementation relies on the multi-threading paradigm of the Julia programming language; an additional loop is introduced to avoid memory allocations, which is crucial to obtain parallel scaling. In Chapter 5 we consider a mixed precision iterative refinement algorithm with a geometric multigrid method as the inner solver. The multigrid solver inherently provides an estimator of the algebraic error, which we use in the stopping criterion for the iterative refinement. We present a benchmark to demonstrate the speedup obtained by using single precision representations of the sparse matrices involved.
We also design an adaptive algorithm that uses the aforementioned estimator to identify when iterative refinement in single precision fails, and that then recovers and solves the problem fully in double precision.
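A minimal sketch of the mixed-precision iterative refinement pattern described in this abstract, assuming a generic inner solver in place of the geometric multigrid (the function solve_low, the tolerance, and the iteration cap are illustrative, not the thesis's estimator-based stopping criterion):

    import numpy as np

    def mixed_precision_refinement(A, b, solve_low, tol=1e-12, max_it=50):
        """Inner solves in float32; residuals and corrections kept in float64."""
        A32 = A.astype(np.float32)              # single-precision copy of the matrix
        x = np.zeros_like(b, dtype=np.float64)
        for _ in range(max_it):
            r = b - A @ x                       # residual computed in double precision
            e = solve_low(A32, r.astype(np.float32)).astype(np.float64)
            x = x + e                           # apply the correction
            if np.linalg.norm(b - A @ x) <= tol * np.linalg.norm(b):
                return x                        # converged
        return x    # refinement failed: fall back to a full double-precision solve

    # usage sketch:
    # x = mixed_precision_refinement(A, b, lambda A32, r32: np.linalg.solve(A32, r32))

The point of the pattern is that the expensive inner solve touches only single-precision data, while the outer loop restores double-precision accuracy through the residual.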
Robeyns, Matthieu. "Mixed precision algorithms for low-rank matrix and tensor approximations". Electronic Thesis or Diss., université Paris-Saclay, 2024. http://www.theses.fr/2024UPASG095.
Data management often relies on mathematical objects such as matrices and tensors, the generalization of matrices to more than two dimensions. Some application domains require too many elements to be stored, creating tensors that are too large; this problem is known as the curse of dimensionality. Mathematical methods such as low-rank approximations have been developed to reduce the dimensionality of these objects, despite a very high cost in computation time. Moreover, new computer architectures such as GPUs allow us to perform computations quickly, especially when computing in low precision. Combining these new architectures with low-rank approximation is a solution, although the quality of the results is impaired by low precision. This thesis aims to propose low-rank approximation algorithms that are stable in low precision while maintaining the speedup inherent in low-precision computation, which is feasible thanks to mixed-precision computation. We have developed a general method for mixed-precision tensor approximation that first computes a low-precision approximation and then iteratively refines it in higher precision to maintain the quality of the result. Since this speedup comes mainly from GPU architectures, more precisely from specialized computing units called tensor cores, we have developed a general matrix approximation method for mixed-precision GPU architectures using these tensor cores. Our method maintains the quality of the result, but at the expense of a higher-dimensional approximation than standard approaches. To compensate for this gap, dimension recompression methods exist for different tensor formats. Our final contribution proposes a recompression method encompassing the different tensor and matrix formats while proving its stability analytically.
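A toy rendition of the compute-low/refine-high idea on a plain matrix, with a truncated SVD standing in for the tensor formats and GPU tensor cores of the thesis (rank, precisions and step count are placeholders):

    import numpy as np

    def lowrank_mixed(A, rank, refine_steps=2):
        # initial truncated SVD in float32: fast but less accurate
        U, s, Vt = np.linalg.svd(A.astype(np.float32), full_matrices=False)
        B = ((U[:, :rank] * s[:rank]) @ Vt[:rank]).astype(np.float64)
        for _ in range(refine_steps):
            R = A - B                            # residual in double precision
            U, s, Vt = np.linalg.svd(R, full_matrices=False)
            B = B + (U[:, :rank] * s[:rank]) @ Vt[:rank]   # low-rank correction
        return B

Note that each correction raises the rank of the stored representation, which is exactly why the recompression step mentioned at the end of the abstract is needed.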
Gratton, Serge. "Outils théoriques d'analyse du calcul à précision finie". Toulouse, INPT, 1998. http://www.theses.fr/1998INPT015H.
Brunin, Maxime. "Étude du compromis précision statistique-temps de calcul". Thesis, Lille 1, 2018. http://www.theses.fr/2018LIL1I001/document.
In the current context, we need to develop algorithms able to treat voluminous data with a short computation time. For instance, dynamic programming applied to the change-point detection problem in distributions cannot quickly treat data with a sample size greater than $10^{6}$. Iterative algorithms provide an ordered family of estimators indexed by the number of iterations. In this thesis, we have studied this family of estimators statistically, in order to select one of them with good statistical performance and a low computation cost. To this end, we have followed the approach of using stopping rules to suggest an estimator within the framework of the change-point detection problem and of the linear regression problem. It is common to perform many iterations to compute a usual estimator. A stopping rule is the iteration at which we stop the algorithm in order to limit the overfitting that some usual estimators suffer from. By stopping the algorithm earlier, stopping rules also save computation time. Under a time constraint, we may not have time to iterate until the stopping rule. In this context, we have studied the optimal choice of the number of iterations and of the sample size to reach optimal accuracy. Simulations highlight the trade-off between the number of iterations and the sample size in order to reach optimal accuracy under a time constraint.
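As an illustration of what a stopping rule looks like for an iterative estimator, here is gradient descent for least squares stopped by the classical discrepancy principle (one standard choice of rule, shown only for intuition and not necessarily the rule studied in the thesis; sigma is an assumed known noise level):

    import numpy as np

    def early_stopped_ls(X, y, sigma, max_it=10_000):
        """Stop iterating once the residual reaches the noise level:
        later iterations would mostly fit noise and waste computation time."""
        n, p = X.shape
        step = 1.0 / np.linalg.norm(X, 2) ** 2          # safe step size
        beta = np.zeros(p)
        for t in range(max_it):
            r = y - X @ beta
            if np.linalg.norm(r) ** 2 <= n * sigma ** 2:   # stopping rule
                return beta, t                           # estimator and iterations used
            beta = beta + step * (X.T @ r)               # gradient step
        return beta, max_it

Stopping at iteration t both regularizes the estimator and saves the cost of all remaining iterations, which is the trade-off the abstract describes.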
Vaccon, Tristan. "Précision p-adique". Thesis, Rennes 1, 2015. http://www.theses.fr/2015REN1S032/document.
P-adic numbers form a field in arithmetic analogous to the real numbers. The advent of arithmetic geometry during the last few decades has yielded many algorithms using these numbers. Such numbers can only be handled with finite precision. We design a method, which we call differential precision, to study the behaviour of precision in a p-adic context. It reduces the study to a first-order problem. We also study the question of which Gröbner bases can be computed over a p-adic number field.
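Stated informally, the first-order reduction behind differential precision is the following standard lemma (our paraphrase, not a quotation from the thesis): if $f$ is differentiable at $x$ and its differential is surjective, then, to first order,

\[ f\big(x + O(p^N)\big) = f(x) + f'(x) \cdot O(p^N), \]

so the precision attached to the output is the image under $f'(x)$ of the precision (a lattice) attached to the input.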
Braconnier, Thierry. "Sur le calcul des valeurs propres en précision finie". Nancy 1, 1994. http://www.theses.fr/1994NAN10023.
Pirus, Denise. "Imprécisions numériques : méthode d'estimation et de contrôle de la précision en C.A.O". Metz, 1997. http://docnum.univ-lorraine.fr/public/UPV-M/Theses/1997/Pirus.Denise.SMZ9703.pdf.
The object of this thesis is to provide a solution to the numerical problems caused by the use of floating-point arithmetic. The first chapter tackles the problems induced by floating-point arithmetic and reviews the different existing methods and tools to solve them. The second chapter is devoted to the study of the propagation of errors during algorithms. Differential analysis is not adequate to obtain a good approximation of the errors affecting the results of a calculation. We next determine an estimate of the loss of precision during the calculation of the intersection point of two lines, according to the angle they form. The third chapter presents the CESTAC method (stochastic checking of rounding in calculations) [Vig 93], which allows one to estimate the number of significant digits of the result of a numerical calculation. The fourth chapter deals with computer algebra, with rational arithmetic, and especially with the use of the Pari software in order to avoid the problems caused by large integers. The fifth chapter describes our methodology, which consists in determining the precision of a calculation with the help of the CESTAC method and, if the precision is not sufficient, falling back on rational arithmetic. We also amend conditional instructions so that tests are executed according to the precision of each datum.
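A toy rendition of the CESTAC idea (the real method randomizes the rounding of every floating-point operation and applies a Student correction; this sketch merely perturbs the input data by one unit in the last place, which is a loud simplification):

    import numpy as np

    def cestac_digits(compute, n_runs=3, rel_ulp=2**-52):
        """Run the computation several times under random perturbations and
        estimate the number of significant decimal digits of the result."""
        def perturb(x):                        # randomly nudge the last bit
            return x * (1.0 + rel_ulp * np.random.choice([-1.0, 1.0]))
        samples = np.array([compute(perturb) for _ in range(n_runs)])
        mean, std = samples.mean(), samples.std(ddof=1)
        if std == 0.0:
            return 15.0                        # all runs agree to full double precision
        return float(np.log10(abs(mean) / std))

    # usage sketch: estimate the significant digits of a long summation
    # digits = cestac_digits(lambda p: sum(p(0.1) for _ in range(100_000)))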
Nguyen, Hai-Nam. "Optimisation de la précision de calcul pour la réduction d'énergie des systèmes embarqués". Phd thesis, Université Rennes 1, 2011. http://tel.archives-ouvertes.fr/tel-00705141.
Boucher, Mathieu. "Limites et précision d'une analyse mécanique de la performance sur ergocycle instrumenté". Poitiers, 2005. http://www.theses.fr/2005POIT2260.
In biomechanics, modelling the human body is a major challenge for estimating, in an imposed task, muscular effort and the underlying metabolic expenditure. In parallel, the evaluation of physical abilities in sports medicine needs to characterize athletes' motion and their interactions with the external environment, in order to compare physiological measurements more objectively. These two orientations are based mainly on cycling activities. The objective of this work is thus to study the limits of the mechanical analysis of performance on an ergocycle using the inverse dynamics technique. These limits depend on the measuring instruments and on the adequacy between the input data of the cycling model and the measured data. Evaluating the uncertainty of the quantities used in the calculation of intersegment efforts allows one to estimate their consequences on the precision of each mechanical parameter used in the analysis of performance.
Khali, Hakim. "Algorithmes et architectures de calcul spécialisés pour un système optique autosynchronisé à précision accrue". Thesis, National Library of Canada = Bibliothèque nationale du Canada, 2000. http://www.collectionscanada.ca/obj/s4/f2/dsk1/tape4/PQDD_0019/NQ53535.pdf.
Fall, Mamadou Mourtalla. "Contribution à la détermination de la précision de calcul des algorithmes de traitements numériques". Châtenay-Malabry, Ecole centrale de Paris, 1991. http://www.theses.fr/1991ECAP0173.
Haddaoui, Khalil. "Méthodes numériques de haute précision et calcul scientifique pour le couplage de modèles hyperboliques". Thesis, Paris 6, 2016. http://www.theses.fr/2016PA066176/document.
The adaptive numerical simulation of multiscale flows is generally carried out by means of a hierarchy of different models according to the specific scale in play and the level of precision required. This kind of numerical modeling involves complex multiscale coupling problems. This thesis is thus devoted to the development, analysis and implementation of efficient methods for solving coupling problems involving hyperbolic models. In a first part, we develop and analyze a coupling algorithm for one-dimensional Euler systems. Each system of conservation laws is closed with a different pressure law, and the coupling interface separating these models is assumed fixed and thin. The transmission conditions linking the systems are modelled thanks to a measure source term concentrated at the coupling interface. The weight associated with this measure models the loss of conservation, and its definition allows the application of several coupling strategies. Our method is based on Suliciu's relaxation approach. The exact resolution of the Riemann problem associated with the relaxed system allows us to design an extremely accurate scheme for the coupling model. This scheme preserves equilibrium solutions of the coupled problem and can be used for general pressure laws. Several numerical experiments assess the performance of our scheme. For instance, we show that it is possible to control the flow at the coupling interface when solving constrained optimization problems for the weights. In the second part of this manuscript, we design two high-order numerical schemes based on the discontinuous Galerkin method for the approximation of the initial-boundary value problem associated with Jin and Xin's model. Our first scheme involves only discretization errors, whereas the second approximation involves both modeling and discretization errors: in the second approximation, we replace in some regions the resolution of the relaxation model by the resolution of its associated scalar equilibrium equation. Under the assumption of a possibly characteristic coupling interface, we exactly solve the Riemann problem associated with the coupled model. This resolution allows us to design a high-order numerical scheme which captures the possible boundary layers at the coupling interface. Finally, the implementation of our methods enables us to analyze quantitatively and qualitatively the modeling and discretization errors involved in the coupled scheme. These errors are functions of the mesh size, the degree of the polynomial approximation and the position of the coupling interface.
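In schematic form, the coupled model with a thin interface at $x = 0$ reads (our paraphrase of the framework, with $\mathcal{M}$ the weight carried by the measure source term):

\[ \partial_t u + \partial_x\big( f_L(u)\,\mathbf{1}_{x<0} + f_R(u)\,\mathbf{1}_{x>0} \big) = \mathcal{M}\,\delta_0(x), \]

where $f_L$ and $f_R$ are the fluxes closed with the two pressure laws; the definition chosen for $\mathcal{M}$ encodes the coupling strategy and the admissible loss of conservation at the interface.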
Rizzo, Axel. "Amélioration de la précision du formulaire DARWIN2.3 pour le calcul du bilan matière en évolution". Thesis, Aix-Marseille, 2018. http://www.theses.fr/2018AIXM0306/document.
The DARWIN2.3 calculation package, based on the JEFF-3.1.1 nuclear data library, is devoted to nuclear fuel cycle studies. It is experimentally validated for fuel inventory calculation thanks to dedicated isotopic ratio measurements performed on fuel rod cuts irradiated in reactors. For some nuclides of interest for the fuel cycle, the experimental validation work points out that the concentration calculation could be improved. The PhD work was done in this framework: having verified that calculation-to-experiment (C/E) biases are mainly due to nuclear data, two ways of improving the fuel inventory calculation are proposed and investigated. On the one hand, nuclear data can be improved using the integral data assimilation technique: data from the experimental validation of DARWIN2.3 fuel inventory calculations are assimilated with the CONRAD code, devoted to nuclear data evaluation, and recommendations for nuclear data evaluations are provided on the basis of the analysis of this assimilation work. On the other hand, new experiments should be proposed to validate the nuclear data involved in the buildup of nuclides for which no post-irradiation examination is available to validate the DARWIN2.3 fuel inventory calculation. To that end, the feasibility of an experiment dedicated to validating the formation paths of 14C, namely the 14N(n,p) and 17O(n,α) reaction cross sections, was demonstrated.
Hasni, Hamadi. "Logiciels vectoriels d'optimisation de problèmes non contraints de grandes tailles et calcul de la précision". Paris 6, 1986. http://www.theses.fr/1986PA066477.
Madeira, De Campos Velho Pedro Antonio. "Evaluation de précision et vitesse de simulation pour des systèmes de calcul distribué à large échelle". Phd thesis, Université de Grenoble, 2011. http://tel.archives-ouvertes.fr/tel-00625497.
Madeira, de Campos Velho Pedro Antonio. "Evaluation de précision et vitesse de simulation pour des systèmes de calcul distribué à large échelle". Thesis, Grenoble, 2011. http://www.theses.fr/2011GRENM027/document.
Large-Scale Distributed Computing (LSDC) systems are in production today to solve problems that require huge amounts of computational power or storage. Such systems are composed of a set of computational resources sharing a communication infrastructure. In such systems, as in any computing environment, specialists need to conduct experiments to validate alternatives and compare solutions. However, due to the distributed nature of the resources, performing experiments in LSDC environments is hard and costly. In these systems, the execution flow depends on the order of events, which is likely to change from one execution to another. Consequently, it is hard to reproduce experiments, hindering the development process. Moreover, resources are very likely to fail or go off-line. Furthermore, LSDC architectures are shared, and interference among different applications, or even among processes of the same application, affects the overall application behavior. Last, LSDC applications are time consuming, so conducting many experiments with several parameters is often unfeasible. For all these reasons, experiments in LSDC often rely on simulations. Today we find many simulation approaches for LSDC. Most of them target specific architectures, such as cluster, grid or volunteer computing, and each simulator claims to be better adapted for a particular research purpose. Nevertheless, these simulators must address the same problems: modeling the network and managing computing resources. Moreover, they must satisfy the same requirements, providing fast, accurate, scalable, and repeatable simulations. To meet these requirements, LSDC simulations use models that approximate the system behavior, neglecting some aspects to focus on the desired phenomena. However, models may be wrong, and when this is the case, trusting models leads to unfounded conclusions. In other words, we need evidence that the models are accurate in order to accept the conclusions supported by simulated results. Although many simulators exist for LSDC, studies of their accuracy are rarely found. In this thesis, we are particularly interested in analyzing and proposing accurate models that respect the requirements of LSDC research. Toward this goal, we propose an accuracy evaluation study to verify common and new simulation models. Throughout this document, we propose model improvements to mitigate the simulation error of LSDC simulation, using SimGrid as a case study. We also evaluate the effect of these improvements on scalability and speed. As a main contribution, we show that intuitive models have better accuracy, speed and scalability than other state-of-the-art models. These better results are achieved by a thorough and systematic analysis of problematic situations, which reveals that many small yet common phenomena had been neglected in previous models and had to be accounted for to design sound models.
Tisseur, Françoise. "Méthodes numériques pour le calcul d'éléments spectraux : étude de la précision, la stabilité et la parallélisation". Saint-Etienne, 1997. http://www.theses.fr/1997STET4006.
Khadraoui, Sofiane. "Calcul par intervalles et outils de l’automatique permettant la micromanipulation à précision qualifiée pour le microassemblage". Thesis, Besançon, 2012. http://www.theses.fr/2012BESA2027/document.
Micromechatronic systems integrate, in a very small volume, functions of different natures. The trend towards miniaturization and the complexity of the functions to achieve lead to 3-dimensional microsystems. These 3-dimensional systems are formed by microrobotic assembly of various microfabricated and incompatible components. To achieve the assembly operations with high accuracy and high resolution, sensors adapted to the microworld and special tools for manipulation are required. Microactuators are the main elements of micromanipulation systems. These actuators are often based on smart materials, in particular piezoelectric materials, which are characterized by their high resolution (nanometric), large bandwidth (more than a kHz) and high force density. This is why piezoelectric actuators are widely used in micromanipulation and microassembly tasks. However, the behavior of piezoelectric actuators is nonlinear and very sensitive to the environment. Moreover, the development of micromanipulation and microassembly tasks is limited by the lack of precise sensors compatible with the microworld dimensions. Given the difficulties related to sensor realization and the complex characteristics of the actuators, it is difficult to obtain the required performance for micromanipulation and microassembly tasks. It is therefore necessary to develop a specific control approach that achieves the wanted accuracy and resolution. The work in this thesis deals with this problem. To make micromanipulation and microassembly tasks succeed, robust control approaches such as H∞ have already been tested to control piezoelectric actuators. However, the main drawback of these methods is the derivation of high-order controllers; in the case of embedded microsystems, such controllers are time consuming, which limits their embedding possibilities. To address this problem, we propose in our work an alternative solution to model and control microsystems by combining interval techniques with tools from control theory. We also seek to show that the use of these techniques allows one to derive robust, low-order controllers.
Oumaziz, Paul. "Une méthode de décomposition de domaine mixte non-intrusive pour le calcul parallèle d’assemblages". Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLN030/document.
Assemblies are critical elements for industrial structures. Strong nonlinearities such as frictional contact, as well as poorly controlled preloads, make accurate sizing complex. Present in large numbers in industrial structures (a few million for an A380), they entail numerical problems of very large size. The numerous frictional contact interfaces are sources of convergence difficulties for numerical simulations. It is therefore necessary to use robust and reliable methods. Iterative methods based on domain decomposition make it possible to handle extremely large numerical models, but they need to be coupled with adapted techniques in order to take into account the contact nonlinearities at the interfaces between subdomains. These domain decomposition methods are still scarcely used in industry, as internal developments in finite element codes are often necessary, which restrains their transfer from the academic world to the industrial world. In this thesis, we propose a non-intrusive implementation of these domain decomposition methods: that is, without development within the source code. In particular, we are interested in the Latin method, whose philosophy is particularly adapted to nonlinear problems. It consists in decomposing the structure into subdomains that are connected through interfaces. With the Latin method the nonlinearities are solved separately from the linear differential aspects. The resolution is then based on an iterative scheme with two search directions that make the global linear problems and the nonlinear local problems dialogue. During this thesis, a totally non-intrusive tool was developed in Code_Aster to solve assembly problems by a mixed domain decomposition technique. The difficulties posed by the mixed aspect of the Latin method are solved by the introduction of a non-local search direction. Robin conditions on the subdomain interfaces are taken into account simply, without modifying the sources of Code_Aster. We propose an algebraic rewriting of the multiscale approach ensuring the extensibility of the method. We are also interested in coupling the Latin method in domain decomposition with a Krylov algorithm. Applied to a substructured problem with perfect interfaces, this coupling accelerates convergence. Preloaded structures with numerous contact interfaces have been processed, and simulations that could not be carried out by a direct computation with Code_Aster were performed via this non-intrusive domain decomposition strategy.
Desmeure, Geoffrey. "Une stratégie de décomposition de domaine mixte et multiéchelle pour le calcul des assemblages". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLN011/document.
Mechanical industries' need for reliability in numerical simulations leads to ever finer and more complex models taking into account complicated physical behaviours. With the aim of modelling large complex structures, a non-overlapping mixed domain decomposition method based on a LaTIn-type iterative solver is proposed. The method relies on splitting the studied domain into substructures and interfaces which can both bear mechanical behaviors, so that perfect cohesion, contact, and delamination can be modelled by the interfaces. The associated solver enables one to treat nonlinear phenomena at small scales and, as commonly done, scalability is ensured by a coarse problem. The method presented uses the Riesz representation theorem to represent interface tractions in $H^{1/2}$ in order to discretize them consistently with the displacements. Independence of the convergence and of the search direction's optimal value from the mesh size is evidenced, and high precision can be reached in few iterations. Different test cases assess the method for perfect and contact interfaces.
Rey, Valentine. "Pilotage de stratégies de calcul par décomposition de domaine par des objectifs de précision sur des quantités d’intérêt". Thesis, Université Paris-Saclay (ComUE), 2015. http://www.theses.fr/2015SACLN018/document.
This research work aims at contributing to the development of verification tools for linear mechanical problems within the framework of non-overlapping domain decomposition methods. * We propose to improve the quality of the statically admissible stress field required for the computation of the error estimator, thanks to a new methodology of stress reconstruction in the sequential context and to optimizations of the computation of nodal reactions in the substructured context. * We prove guaranteed upper and lower bounds of the error that separate the algebraic error (due to the iterative solver) from the discretization error (due to the finite element method), for both global error measurement and goal-oriented error estimation. This enables the definition of a new stopping criterion for the iterative solver which avoids over-resolution. * We exploit the information provided by the error estimator and the Krylov subspaces built during the resolution to set up an auto-adaptive strategy. This strategy consists in a sequence of resolutions and takes advantage of adaptive remeshing and of the recycling of search directions. We apply the steering of the iterative solver by an objective of precision on two-dimensional mechanical examples.
Benmouhoub, Farah. "Optimisation de la précision numérique des codes parallèles". Thesis, Perpignan, 2022. http://www.theses.fr/2022PERP0009.
In high performance computing, nearly all implementations and published experiments use floating-point arithmetic. However, since floating-point numbers are finite approximations of real numbers, computations may go wrong because of accumulated rounding errors. These round-off errors may cause damage whose gravity varies depending on the criticality of the application. Parallelism introduces new numerical accuracy problems due to the order of operations in this kind of system. This thesis concerns this last point: improving the accuracy of massively parallel scientific computing codes such as those found in the field of HPC (High Performance Computing).
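The order-of-operations effect is easy to reproduce: the same float32 array summed left-to-right, summed by chunks (as a parallel reduction would), and summed with compensation gives three different round-off errors. A small self-contained illustration (not taken from the thesis):

    import numpy as np

    def kahan_sum32(a):
        """Compensated (Kahan) summation carried out entirely in float32."""
        s = np.float32(0.0)
        c = np.float32(0.0)          # running compensation for lost low-order bits
        for x in a:
            y = x - c
            t = s + y
            c = (t - s) - y
            s = t
        return s

    rng = np.random.default_rng(0)
    a = rng.normal(size=10**6).astype(np.float32)

    seq = np.float32(0.0)
    for x in a:                      # sequential left-to-right sum
        seq = seq + x

    par = np.float32(0.0)
    for chunk in np.array_split(a, 64):        # stand-in for a 64-thread reduction
        par = par + chunk.sum(dtype=np.float32)

    ref = a.sum(dtype=np.float64)    # double-precision reference
    print(seq - ref, par - ref, kahan_sum32(a) - ref)   # three distinct errors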
Bergach, Mohamed Amine. "Adaptation du calcul de la Transformée de Fourier Rapide sur une architecture mixte CPU/GPU intégrée". Thesis, Nice, 2015. http://www.theses.fr/2015NICE4060/document.
Multicore Intel Core architectures (IvyBridge, Haswell…) contain both general-purpose CPU cores (4) and dedicated GPU cores embedded on the same chip (16 and 40 respectively). As part of the activity of Kontron (the company partially funding this CIFRE scholarship), an important objective is to efficiently compute arrays and sequences of fast Fourier transforms (FFT), such as one finds in radar applications, on this architecture. While native (but proprietary) libraries exist for the Intel CPU, nothing is currently available for the GPU part. The aim of the thesis was to define the efficient placement of FFT modules, and to study theoretically the optimal form for grouping the computing stages of such FFTs according to data locality on a single computing core. This choice should allow processing efficiency, by adjusting the memory size available to the required application data size. The multiplicity of cores can then be exploited to compute several FFTs in parallel, without interference (except for possible bus contention between the CPU and the GPU). We have achieved significant results, both in the implementation of an FFT (1024 points) on a SIMD CPU core, expressed in C, and in the implementation of an FFT of the same size on a GPU SIMT core, expressed in OpenCL. In addition, our results allow us to define rules to automatically synthesize such solutions, based solely on the size of the FFT (more specifically its number of stages) and the size of the local memory of a given computing core. The performance obtained is better than the native Intel library for CPU, and demonstrates a significant gain in consumption on GPU. All these points are detailed in the thesis document.
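A toy version of such a synthesis rule, under our own illustrative assumptions (this is not the thesis's actual formula): a radix-2 FFT of n points has log2(n) butterfly stages, and stages can be processed in groups sized so that the data tile they touch fits in the local memory of the core.

    import math

    def group_fft_stages(n_points, local_mem_bytes, bytes_per_point=8):
        """Return (total stages, stages per group, number of groups)."""
        total = int(math.log2(n_points))               # 10 stages for 1024 points
        s = int(math.log2(local_mem_bytes // bytes_per_point))
        s = max(1, min(s, total))                      # clamp to a valid group size
        return total, s, math.ceil(total / s)

    print(group_fft_stages(1024, 64 * 1024))   # (10, 10, 1): the whole FFT fits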
Muller, Antoine. "Contributions méthodologiques à l'analyse musculo-squelettique de l'humain dans l'objectif d'un compromis précision performance". Thesis, Rennes, École normale supérieure, 2017. http://www.theses.fr/2017ENSR0007/document.
Musculoskeletal analysis is becoming popular in application fields such as ergonomics, rehabilitation and sports. This analysis enables an estimation of the joint reaction forces and muscle tensions generated during motion. The models and methods used in such analyses give more and more accurate results. As a consequence, however, software performance is limited: computation time increases, and the experimental protocols and associated post-processing needed to define subject-specific models are long and tedious. Finally, such software requires a high level of expertise to be driven properly. In order to democratize the use of musculoskeletal analysis for a wide range of users, this thesis proposes contributions enabling better performance of such analyses while preserving accuracy, as well as contributions enabling easy subject-specific model calibration. First, in order to control the whole analysis process, the thesis is developed within a global approach to all the analysis steps: kinematics, dynamics and muscle force estimation. For each of these steps, quick analysis methods have been proposed. In particular, a quick resolution method for the muscle force sharing problem has been proposed, based on interpolated data. Moreover, a complete calibration process has been developed, based on classical motion analysis tools available in a biomechanics lab, namely motion capture and force platform data.
Bouraoui, Rachid. "Calcul sur les grands nombres et VLSI : application au PGCD, au PGCD étendu et à la distance euclidienne". Phd thesis, Grenoble INPG, 1993. http://tel.archives-ouvertes.fr/tel-00343219.
Huber, Vincent. "Contribution au calcul d’écoulements de fluides complexes". Thesis, Bordeaux 1, 2012. http://www.theses.fr/2012BOR14579/document.
The first contribution of this thesis is the analysis and discretization of multiphase flows in a microfluidic framework. We propose a new scheme which is second-order accurate, based on a mixed finite volume / finite element method, for the two-phase Stokes system with surface tension. We then compare the accuracy of this new scheme with the MAC discretization in 2D and 3D axisymmetric settings. The second contribution relates to the study of numerical schemes in time for viscoelastic fluids. We present the current limitations in this area by studying the case of flows of wormlike micelles, polymers having the ability to reorganize themselves according to the shear rate. We show that a very restrictive stability condition on the time step, related to the ratio of the polymer and solvent viscosities, exists. A new scheme is then proposed to overcome this limitation, and stability studies are conducted to demonstrate our results.
Rolland, Luc Hugues. "Outils algébriques pour la résolution de problèmes géométriques et l'analyse de trajectoire de robots parallèles prévus pour des applications à haute cadence et grande précision". Nancy 1, 2003. http://www.theses.fr/2003NAN10180.
Parallel robots have been introduced in flight simulators because of their high dynamics. Research is now focused on their application as machine tools, where the requirements on accuracy are more stringent. The first objective is to find a resolution method for the kinematics problems. Only a few implementations have succeeded in solving the general case (Gough platform). We have cataloged 8 algebraic formulations of the geometric model. The selected exact method is based on the computation of Gröbner bases and the Rational Univariate Representation; this method is too slow for trajectory pursuit. The second objective is the realization of a certified numeric iterative method (Newton) based on the Kantorovich theorem and interval arithmetic. The third objective is milling task feasibility. A trajectory simulator includes tool accuracy estimations at a given feed rate, from which one can determine the impact of a given architecture, of the selected sensors and of the controller. The thesis concludes with a trajectory certification method, verifying whether the tool can follow a trajectory included in a zone around the nominal trajectory. A convergence theorem is applied to ensure that the forward kinematics model can be solved everywhere in the tube.
Roch, Jean-Louis. "Calcul formel et parallélisme : l'architecture du système PAC et son arithmétique rationnelle". Phd thesis, Grenoble INPG, 1989. http://tel.archives-ouvertes.fr/tel-00334457.
Magaud, Nicolas. "Changements de Représentation des Données dans le Calcul des Constructions". Phd thesis, Université de Nice Sophia-Antipolis, 2003. http://tel.archives-ouvertes.fr/tel-00005903.
Testo completopreuves formelles en théorie des types. Nous traitons cette question
lors de l'étude
de la correction du programme de calcul de la racine carrée de GMP.
A partir d'une description formelle, nous construisons
un programme impératif avec l'outil Correctness. Cette description
prend en compte tous les détails de l'implantation, y compris
l'arithmétique de pointeurs utilisée et la gestion de la mémoire.
Nous étudions aussi comment réutiliser des preuves formelles lorsque
l'on change la représentation concrète des données.
Nous proposons un outil qui permet d'abstraire
les propriétés calculatoires associées à un type inductif dans
les termes de preuve.
Nous proposons également des outils pour simuler ces propriétés
dans un type isomorphe. Nous pouvons ainsi passer, systématiquement,
d'une représentation des données à une autre dans un développement
formel.
Bosser, Pierre. "Développement et validation d'une méthode de calcul GPS intégrant des mesures de profils de vapeur d'eau en visée multi-angulaire pour l'altimétrie de haute précision". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2008. http://tel.archives-ouvertes.fr/tel-00322404.
The NIGPS project, conducted jointly by IGN and SA (CNRS), aims to develop a correction of the atmospheric effects that limit GPS accuracy, based on humidity sounding with a multi-angle water-vapour Raman lidar. This thesis work follows on from the work presented in 2005 by Jérôme Tarniewicz and consists in pursuing the methodological study, the instrumental developments and the experimental validation of the joint analysis of GPS and lidar observations. After studying, from numerical simulations, the effect of the troposphere on GPS and its correction, we focus on the precise retrieval of water-vapour measurements by Raman lidar. The data acquired during the VAPIC campaign make it possible to verify the impact of the troposphere on GPS. The comparison of the lidar observations with those from other instruments validates the lidar measurement and underlines the capacity of this technique to retrieve rapid variations of water vapour. A first evaluation of the correction of GPS observations by zenith lidar measurements is performed on 6 h GPS sessions and shows the contribution of this technique in the cases considered. These results should, however, be improved by taking slanted lidar measurements into account.
Huber, Vincent. "Contribution au calcul d'écoulements de fluides complexes". Phd thesis, Université Sciences et Technologies - Bordeaux I, 2012. http://tel.archives-ouvertes.fr/tel-00745794.
Hinojosa, Rehbein Jorge Andrés. "Sur la robustesse d'une méthode de décomposition de domaine mixte avec relocalisation non linéaire pour le traitement des instabilités géométriques dans les grandes structures raidies". Thesis, Cachan, Ecole normale supérieure, 2012. http://www.theses.fr/2012DENS0010/document.
The thesis work focuses on the evaluation and the robustness of strategies adapted to the simulation of large structures with nonlinearities that are not equitably distributed, such as local buckling, and with global nonlinearities, as on aeronautical structures. The nonlinear relocalization strategy allows the introduction of nonlinear solving schemes in the substructures of classical domain decomposition methods. At a first step, the performance and the robustness of the method are analysed on academic examples. Then, the strategy is parallelized and studies of speed-up and extensibility are carried out. Finally, the method is tested on larger and more realistic structures.
Kourdey, Alaa. "Une approche mixte (numérique/équilibre limite) pour le calcul de stabilité des ouvrages en terre : développement et application aux barrages et talus miniers". Vandoeuvre-les-Nancy, INPL, 2002. http://www.theses.fr/2002INPL044N.
Krayani, Abbas. "Approche non locale d'un modèle élasto-plastique endommagable pour le calcul des structures en béton précontraint". Phd thesis, Ecole centrale de nantes - ECN, 2007. http://tel.archives-ouvertes.fr/tel-00334001.
Daumas, Marc. "Contributions à l'arithmétique des ordinateurs : vers une maîtrise de la précision". Phd thesis, Lyon, École normale supérieure (sciences), 1996. http://www.theses.fr/1996ENSL0012.
Testo completoBoucard, Stéphane. "Calcul de haute précision d'énergies de transitions dans les atomes exotiques et les lithiumoïdes : corrections relativistes, corrections radiatives, structure hyperfine et interaction avec le cortège électronique résiduel". Phd thesis, Université Pierre et Marie Curie - Paris VI, 1998. http://tel.archives-ouvertes.fr/tel-00007148.
Testo completodans les ions lithiumoïdes et les atomes exotiques : 1) Les nouvelles
sources rendent possible la fabrication d'ions lourds fortement
chargés. Nous nous sommes intéressés à l'étude de la structure
hyperfine des ions lithiumoïdes. Cela nous permet d'examiner les
problèmes relativistes à plusieurs corps et la partie magnétique des
corrections d'Electrodynamique Quantique (QED). Dans les ions lourds,
ces dernières sont de l'ordre de quelques pour-cents par rapport à
l'énergie totale de la transition hyperfine. Nous avons également
évalué l'effet de Bohr-Weisskopf lié à la distribution du moment
magnétique dans le noyau. Nous avons calculé puis comparé ces
différentes contributions en incluant les corrections radiatives
(polarisation du vide et self-énergie) ainsi que l'influence du
continuum négatif. 2) Un atome exotique est un atome dans lequel un
électron du cortège est remplacé par une particule de même charge :
$\mu^(-)$, $\pi^(-)$, $\bar(p)$\ldots Des expériences récentes ont
permis de gagner trois ordres de grandeur en précision et en
résolution. Nous avons voulu améliorer la précision des calculs
d'énergies de transitions nécessaires à la calibration et à
l'interprétation dans deux cas : la mesure de paramètres de
l'interaction forte dans l'hydrogène anti-protonique ($\bar(p)$H) et
la détermination de la masse du pion grâce à l'azote pionique
($\pi$N). Nos calculs prennent en compte la structure hyperfine et le
volume de la distribution de charge de la particule. Nous avons
amélioré le calcul de la polarisation du vide qui ne peut plus être
traitée au premier ordre de la théorie des perturbations dans le cas
des atomes exotiques. Pour les atomes anti-protoniques, nous avons
également ajouté la correction du g-2. Elle provient du caractère
composite de l'anti-proton qui de ce fait possède un rapport
gyromagnétique g $\approx$ -5.5856 .
Karaseva, Olga. "Déformations élastiques des presses de forgeage et calcul parallèle". Phd thesis, École Nationale Supérieure des Mines de Paris, 2005. http://pastel.archives-ouvertes.fr/pastel-00001513.
Martins, Paulo Chaves de Rezende. "Modélisation du comportement jusqu'à rupture en flexion de poutres en béton à précontrainte extérieure ou mixte". Châtenay-Malabry, Ecole centrale de Paris, 1989. http://www.theses.fr/1989ECAP0096.
Guérin, Pierre. "Méthodes de décomposition de domaine pour la formulation mixte duale du problème critique de la diffusion des neutrons". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2007. http://tel.archives-ouvertes.fr/tel-00210588.
Allart, Emilie. "Abstractions de différences exactes de réseaux de réactions : améliorer la précision de prédiction de changements de systèmes biologiques". Thesis, Lille, 2021. http://www.theses.fr/2021LILUI013/document.
Change predictions for reaction networks with partial kinetic information can be obtained by qualitative reasoning with abstract interpretation. A typical change prediction problem in systems biology is which gene knockouts may, or must, increase the outflow of a target species at a steady state. Answering such questions for reaction networks requires reasoning about abstract differences such as "increases" and "decreases". A task fundamental for change predictions was introduced by Niehren, Versari, John, Coutte, and Jacques (2016): the problem of computing, for a given system of linear equations with nonlinear difference constraints, the difference abstraction of the set of its positive solutions. Previous approaches provided overapproximation algorithms for this task based on various heuristics, for instance by rewriting the linear equations. In this thesis, we present the first algorithms that can solve this task exactly for the two difference abstractions used in the literature so far. As a first contribution, we show how to characterize, for a linear equation system, the boolean abstraction of its set of positive solutions. This abstraction maps any strictly positive real number to 1 and 0 to 0. The characterization is given by the set of boolean solutions of another equation system, which we compute based on elementary modes. The boolean solutions of the characterizing equation system can then be computed in practice with finite domain constraint programming. We believe that this result is relevant to the analysis of functional programs with linear arithmetic. As a second contribution, we present two algorithms that compute, for a given system of linear equations and nonlinear difference constraints, the exact difference abstraction into Delta_3 and Delta_6 respectively. These algorithms rely on the characterization of boolean abstractions for linear equation systems from the first contribution; the bridge between these abstractions is defined in first-order logic. In this way, the difference abstraction can be computed by finite set constraint programming too. We implemented our exact algorithms and applied them to predicting gene knockouts that may lead to leucine overproduction in B. subtilis, as needed for surfactin overproduction in biotechnology. Computing the precise predictions with the exact algorithm may take several hours, though. Therefore, we also present a new heuristics for computing difference abstractions based on elementary modes, which provides a good compromise between precision and time efficiency.
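To make the boolean abstraction concrete, here is a brute-force check on the single equation x + y = z over the nonnegative reals (the thesis computes such abstractions symbolically, via elementary modes; this enumeration is only for intuition):

    from itertools import product

    # the boolean abstraction maps 0 to 0 and any strictly positive real to 1
    def consistent(bx, by, bz):
        """With x, y, z >= 0 and x + y = z, z is positive iff x or y is."""
        return bz == int(bx or by)

    abstraction = [t for t in product((0, 1), repeat=3) if consistent(*t)]
    print(abstraction)   # [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]

For instance, (1, 1, 0) is excluded: two strictly positive summands cannot have a zero sum.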
Salhi, Yamina. "Étude et réalisation de logiciels d'optimisation non contrainte avec dérivation numérique et estimation de la précision des résultats". Paris 6, 1985. http://www.theses.fr/1985PA066412.
Hinojosa, Rehbein Jorge Andres. "Sur la robustesse d'une méthode de décomposition de domaine mixte avec relocalisation non linéaire pour le traitement des instabilités géométriques dans les grandes structures raidies". Phd thesis, École normale supérieure de Cachan - ENS Cachan, 2012. http://tel.archives-ouvertes.fr/tel-00745225.
Vernet, Raphaël. "Approche mixte théorie / expérimentation pour la modélisation numérique de chambres réverbérantes à brassage de modes". Phd thesis, Clermont-Ferrand 2, 2006. https://theses.hal.science/docs/00/69/11/24/PDF/2006CLF21654.pdf.
Vernet, Raphaël. "Approche mixte théorie / expérimentation pour la modélisation numérique de chambres réverbérantes à brassage de modes". Phd thesis, Université Blaise Pascal - Clermont-Ferrand II, 2006. http://tel.archives-ouvertes.fr/tel-00691124.
Laizet, Sylvain. "Développement d'un code de calcul combinant des schémas de haute précision avec une méthode de frontières immergées pour la simulation des mouvements tourbillonnaires en aval d'un bord de fuite". Poitiers, 2005. http://www.theses.fr/2005POIT2339.
Simulating the vortex dynamics behind a trailing edge remains a difficult task in fluid mechanics. Numerical developments have been performed in a computer code which solves the incompressible Navier-Stokes equations with high-order compact finite difference schemes on a Cartesian grid. The specificity of this code is that the Poisson equation is solved in spectral space with the modified spectral formalism. This code can be combined with an immersed boundary method in order to simulate flows with complex geometry. Particular work was done to improve the resolution of the Poisson equation in order to use a stretched mesh and a staggered grid for the pressure. Two mixing-layer flows, with a blunt and a bevelled trailing edge, were simulated in order to determine the influence of the shape of the separating plate on the vortex dynamics.
Tan, Pauline. "Précision de modèle et efficacité algorithmique : exemples du traitement de l'occultation en stéréovision binoculaire et de l'accélération de deux algorithmes en optimisation convexe". Thesis, Université Paris-Saclay (ComUE), 2016. http://www.theses.fr/2016SACLX092/document.
This thesis is split into two relatively independent parts. The first part is devoted to the binocular stereovision problem, specifically to occlusion handling. An analysis of this phenomenon leads to a regularity model which includes a convex visibility constraint. The resulting energy functional is minimized by convex relaxation. The occluded areas are then detected thanks to the horizontal slope of the disparity map, and densified. Another method with occlusion handling was proposed by Kolmogorov and Zabih; because of its efficiency, we adapted it to two auxiliary problems encountered in stereovision, namely the densification of sparse disparity maps and the subpixel refinement of pixel-accurate maps. The second part of this thesis studies two convex optimization algorithms, for which an acceleration is proposed. The first one is the Alternating Direction Method of Multipliers (ADMM); a slight relaxation in the parameter choice is shown to enhance the convergence rate. The second one is an alternating proximal descent algorithm, which allows a parallel approximate resolution of the Rudin-Osher-Fatemi (ROF) pure denoising model in the color-image case. A FISTA-like acceleration is also proposed.
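For orientation, the relaxation device in ADMM blends the new x-iterate with the previous z-iterate through a parameter alpha; values around 1.5-1.8 are classically reported to speed up convergence. A textbook sketch on the lasso problem (a standard formulation, not the thesis's algorithm or its parameter analysis):

    import numpy as np

    def soft(v, k):                          # soft-thresholding: prox of the l1 norm
        return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

    def admm_lasso(A, b, lam, rho=1.0, alpha=1.6, n_iter=200):
        n = A.shape[1]
        x = z = u = np.zeros(n)
        M = np.linalg.inv(A.T @ A + rho * np.eye(n))    # cached solve operator
        for _ in range(n_iter):
            x = M @ (A.T @ b + rho * (z - u))           # x-update
            x_hat = alpha * x + (1.0 - alpha) * z       # over-relaxation step
            z = soft(x_hat + u, lam / rho)              # z-update
            u = u + x_hat - z                           # dual update
        return z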
Lagarde, Laurent. "Recherche de méthode simplifiée pour le calcul de poutres multicouches en grandes transformations planes". Phd thesis, Ecole Nationale des Ponts et Chaussées, 2000. http://tel.archives-ouvertes.fr/tel-00838664.
Barotto, Béatrice. "Introduction de paramètres stochastiques pour améliorer l'estimation des trajectoires d'un système dynamique par une méthode de moindres carrés : application à la détermination de l'orbite d'un satellite avec une précision centimétrique". Toulouse 3, 1995. http://www.theses.fr/1995TOU30196.
Peou, Kenny. "Computing Tools for HPDA : a Cache-Oblivious and SIMD Approach". Electronic Thesis or Diss., université Paris-Saclay, 2021. http://www.theses.fr/2021UPASG105.
This work presents three contributions to the fields of CPU vectorization and machine learning. The first contribution is an algorithm for computing an average over half-precision floating-point values. In this work, performed with limited half-precision hardware support, we use an existing software library to emulate half-precision computation, which allows us to compare the numerical precision of our algorithm to that of various commonly used algorithms. Finally, we perform runtime benchmarks using single- and double-precision floating-point values in order to anticipate the potential gains from applying CPU vectorization to half-precision values. Overall, we find that our algorithm has slightly worse best-case numerical performance in exchange for significantly better worst-case numerical performance, all while providing runtime performance similar to that of other algorithms. The second contribution is a fixed-point computation library designed specifically for CPU vectorization. Existing libraries rely on compiler auto-vectorization, which fails to vectorize arithmetic multiplication and division operations. In addition, these two operations require cast operations, which reduce vectorizability and have a real computational cost. To alleviate this, we present a fixed-point data storage format that does not require any cast operations to perform arithmetic operations. We also present a number of benchmarks comparing our implementation to existing libraries, and report the CPU vectorization speedup on a number of architectures. Overall, we find that our fixed-point format allows runtime performance equal to or better than all compared libraries. The final contribution is a neural network inference engine designed to perform experiments varying the numerical data types used in the inference computation. This engine allows layer-specific control of which data types are used to perform inference. We use this level of control to determine how aggressively numerical precision can be reduced when inferring the PVANet neural network. In the end, we determine that a combination of the standardized float16 and bfloat16 data types is sufficient for the entire inference.
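A small experiment showing why averaging in half precision needs a dedicated algorithm (the running mean below is the classical textbook alternative, not necessarily the thesis's algorithm): a naive sum of float16 values overflows long before 50,000 terms, while a running mean keeps every intermediate value on the scale of the data.

    import numpy as np

    a = np.full(50_000, 100.0, dtype=np.float16)

    naive = np.float16(0.0)
    for x in a:
        naive = naive + x                     # overflows: 50_000 * 100 >> 65504
    mean_naive = naive / np.float16(a.size)   # inf

    run = np.float16(0.0)
    for i, x in enumerate(a, start=1):
        run = run + (x - run) / np.float16(i) # intermediates stay near 100

    print(mean_naive, run)                    # inf vs ~100.0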
Fontbonne, Cathy. "Acquisition multiparamétrique de signaux de décroissance radioactive pour la correction des défauts instrumentaux : application à la mesure de la durée de vie du 19Ne". Thesis, Normandie, 2017. http://www.theses.fr/2017NORMC204/document.
The aim of this thesis is to propose a method for precise half-life measurements adapted to nuclides with half-lives of a few seconds. The FASTER real-time digital acquisition system gives access to the physical characteristics of the signal induced by the detection of each decay during the counting period following beam implantation. The selection of the counting data can then be carried out by an optimized post-experiment offline analysis. Thus, after establishing the influence factors impacting the measurement (pile-up, gain and baseline fluctuations), we are able to estimate, a posteriori, their impact on the half-life estimation. In this way, we can choose the deposited-energy threshold and the dead time so as to minimize their effect. This thesis also proposes a method for measuring, and then compensating for, variations of the influence factors. This method was applied to estimate the 19Ne half-life with a relative uncertainty of 1.2 × 10⁻⁴, leading to T1/2 = 17.2569(21) s, the most precise measurement to date for this isotope.
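For orientation, the standard algebra linking a fitted decay constant to the quoted half-life, with made-up numbers (this is not the experiment's event-by-event analysis chain):

    import numpy as np

    # N(t) = N0 * exp(-lam * t), and T_half = ln(2) / lam
    lam = np.log(2.0) / 17.2569              # decay constant for T_half = 17.2569 s
    t = np.linspace(0.0, 60.0, 7)
    counts = 1e6 * np.exp(-lam * t)          # ideal decay curve

    slope, intercept = np.polyfit(t, np.log(counts), 1)   # linear fit of log-counts
    print(np.log(2.0) / -slope)              # recovers ~17.2569 s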