Theses on the topic "Codes de calcul chaînés"
Browse the top 46 dissertations (master's and doctoral theses) on the research topic "Codes de calcul chaînés".
Balde, Oumar. "Calage bayésien sous incertitudes des outils de calcul scientifique couplés : application en simulation numérique du combustible". Electronic Thesis or Diss., Université de Toulouse (2023-....), 2024. http://www.theses.fr/2024TLSES035.
Nowadays, numerical models have become essential tools for modeling, understanding, analyzing and predicting the physical phenomena involved in complex physical systems such as nuclear power plants. Such numerical models often take a large number of uncertain input parameters, thus leading to uncertain outputs as well. Before any industrial use of those numerical models, an important step is therefore to reduce and quantify these uncertainties as much as possible. In this context, the goal of model calibration is to reduce and quantify the uncertainties of the input parameters based on available experimental and simulated data. There are two types of model calibration: deterministic calibration and Bayesian calibration. The latter quantifies parameter uncertainties by probability distributions. This thesis deals with the conditional Bayesian calibration of two chained numerical models. The objective is to calibrate the uncertain parameters of the second model while taking into account the uncertain parameters of the first model. To achieve this, a new Bayesian inference methodology called GP-LinCC (Gaussian Process and Linearization-based Conditional Calibration) was proposed. In practice, the deployment of this new approach has required a preliminary step of global sensitivity analysis to identify the most significant input parameters to calibrate in the second model, while considering the uncertainty of the parameters of the first model. To do this, an integrated version of the HSIC (Hilbert-Schmidt Independence Criterion) was used to define well-suited sensitivity measures, and the theoretical properties of their nested Monte Carlo estimators were investigated.
Finally, these two methodological contributions have been applied to the multi-physics application ALCYONE, to quantify the uncertain parameters of the CARACAS code (second model), which simulates the behavior of fission gases under pressurized water reactor conditions, conditionally on the uncertainty of the thermal conductivity parameter of the thermal model (first model).
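The HSIC dependence measure used for the screening step above can be estimated from samples with a few lines of linear algebra. A minimal sketch of the standard biased V-statistic estimator with Gaussian kernels (illustrative bandwidths; this is not the integrated or nested-estimator variant developed in the thesis):

```python
import numpy as np

def hsic(x, y, sx=1.0, sy=1.0):
    """Biased V-statistic estimate of HSIC with Gaussian kernels."""
    n = len(x)
    K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2 * sx ** 2))
    L = np.exp(-(y[:, None] - y[None, :]) ** 2 / (2 * sy ** 2))
    H = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    # inner product of the centered kernel matrices; near 0 under independence
    return np.trace(K @ H @ L @ H) / n ** 2

rng = np.random.default_rng(0)
x = rng.normal(size=200)
dependent = hsic(x, x ** 2)                  # y is a function of x
independent = hsic(x, rng.normal(size=200))  # unrelated samples
```

A screening step would rank each input parameter by such a score against the code output and keep only the most influential ones for calibration.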
Skersys, Gintaras. "Calcul du groupe d'automorphismes des codes : détermination de l'équivalence des codes". Limoges, 1999. http://www.theses.fr/1999LIMO0021.
Montan, Séthy. "Sur la validation numérique des codes de calcul industriels". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2013. http://tel.archives-ouvertes.fr/tel-00913570.
Montan, Séthy Akpémado. "Sur la validation numérique des codes de calcul industriels". Paris 6, 2013. http://www.theses.fr/2013PA066751.
Numerical verification of industrial codes, such as those developed at EDF R&D, is required to estimate the precision and the quality of computed results, even more so for codes running in HPC environments where millions of instructions are performed each second. These programs usually use external libraries (MPI, BLACS, BLAS, LAPACK). In this context, it is necessary to have a tool that is as non-intrusive as possible, to avoid rewriting the original code. In this regard, the CADNA library, which implements Discrete Stochastic Arithmetic, appears to be a promising approach for industrial applications. In the first part of this work, we are interested in an efficient implementation of the BLAS routine DGEMM (General Matrix Multiply) using Discrete Stochastic Arithmetic. A basic matrix-product algorithm using stochastic types leads to an overhead greater than 1000 for a 1024x1024 matrix compared to the standard and commercial versions of xGEMM. Here, we detail different solutions to reduce this overhead and the results we have obtained. A new routine, DgemmCADNA, has been designed; it reduces the overhead from 1100 to 35 compared to optimized BLAS implementations (GotoBLAS). Then, we focus on the numerical verification of Telemac-2D computed results. A numerical validation with the CADNA library shows that more than 30% of the numerical instabilities occurring during an execution come from the dot product function. A more accurate implementation of the dot product with compensated algorithms is presented in this work. We show that implementing this kind of algorithm to improve the accuracy of computed results does not alter the code performance.
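The compensated dot product mentioned in this abstract follows the classic Ogita-Rump-Oishi scheme: each product and each partial sum is split into its rounded value plus an exact error term, and the accumulated error is added back at the end. A minimal double-precision sketch using Knuth's TwoSum and Dekker's TwoProd (an illustration of the technique, not EDF's actual DgemmCADNA code):

```python
def two_sum(a, b):
    """Error-free transformation: a + b = s + err exactly."""
    s = a + b
    bp = s - a
    err = (a - (s - bp)) + (b - bp)
    return s, err

def two_prod(a, b):
    """Error-free transformation of the product via Dekker's splitting (binary64)."""
    p = a * b
    f = 134217729.0  # 2**27 + 1
    aa = f * a; ah = aa - (aa - a); al = a - ah
    bb = f * b; bh = bb - (bb - b); bl = b - bh
    err = ((ah * bh - p) + ah * bl + al * bh) + al * bl
    return p, err

def dot2(x, y):
    """Compensated dot product, accurate as if computed in twice the precision."""
    s, c = 0.0, 0.0
    for a, b in zip(x, y):
        p, ep = two_prod(a, b)
        s, es = two_sum(s, p)
        c += es + ep           # accumulate the exact local errors
    return s + c
```

On an ill-conditioned case such as x = [1e16, 1.0, -1e16], y = [1.0, 1.0, 1.0], the naive sum loses the middle term while `dot2` returns exactly 1.0.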
Baladron Pezoa, Javier. "Exploring the neural codes using parallel hardware". Phd thesis, Université Nice Sophia Antipolis, 2013. http://tel.archives-ouvertes.fr/tel-00847333.
Picot, Romain. "Amélioration de la fiabilité numérique de codes de calcul industriels". Electronic Thesis or Diss., Sorbonne université, 2018. http://www.theses.fr/2018SORUS242.
Many studies are devoted to the performance of numerical simulations. However, it is also important to take into account the impact of rounding errors on the results produced. These rounding errors can be estimated with Discrete Stochastic Arithmetic (DSA), implemented in the CADNA library. Compensated algorithms improve the accuracy of results without changing the numerical types used. They have been designed to be generally executed with rounding to nearest. We have established error bounds for these algorithms with directed rounding and shown that they can be used successfully with the random rounding mode of DSA. We have also studied the impact of a target precision of the results on the numerical types of the different variables. We have developed the PROMISE tool which automatically performs these type changes while validating the results thanks to DSA. The PROMISE tool has thus provided new configurations of types combining single and double precision in various programs and in particular in the MICADO code developed at EDF. We have shown how to estimate with DSA the rounding errors generated in quadruple precision. We have proposed a version of CADNA that integrates quadruple precision and that allowed us in particular to validate the computation of multiple roots of polynomials. Finally, we have used this new version of CADNA in the PROMISE tool so that it can provide configurations with three types (single, double and quadruple precision).
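Discrete Stochastic Arithmetic, on which the abstract above relies, estimates the number of exact significant digits of a result by running the computation several times with randomly perturbed roundings and comparing the samples. A deliberately crude Python imitation of the idea (random relative jitter standing in for CADNA's random rounding mode; the real library instruments every floating-point operation):

```python
import math
import random

def perturb(x, rel=2.0 ** -52):
    # crude stand-in for random rounding: jitter the result by up to one ulp
    return x * (1.0 + random.uniform(-rel, rel))

def unstable_sum(values):
    s = 0.0
    for v in values:
        s = perturb(s + v)
    return s

def significant_digits(samples):
    """Estimate the digits shared by repeated randomly-perturbed runs."""
    mean = sum(samples) / len(samples)
    spread = max(samples) - min(samples)
    if spread == 0.0:
        return 15                      # full binary64 accuracy
    return max(0, int(-math.log10(abs(spread / mean))))

random.seed(0)
ill = [unstable_sum([1e10, 3.14, -1e10]) for _ in range(20)]   # cancellation
well = [unstable_sum([1.0, 2.0, 3.0]) for _ in range(20)]      # benign sum
```

The cancellation-prone sum keeps only a handful of significant digits, while the benign one keeps nearly all fifteen; a tool like PROMISE uses this kind of diagnostic to decide which variables may safely be demoted to single precision.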
Duclos-Cianci, Guillaume. "Outils de calcul quantique tolérant aux fautes". Thèse, Université de Sherbrooke, 2015. http://hdl.handle.net/11143/6770.
Schmitt, Maxime. "Génération automatique de codes adaptatifs". Thesis, Strasbourg, 2019. http://www.theses.fr/2019STRAD029.
In this thesis we introduce a new application programming interface to help developers optimize an application with approximate computing techniques. This interface is provided as a language extension to advise the compiler about the parts of the program that may be optimized with approximate computing and what can be done about them. The code transformations of the targeted regions are entirely handled by the compiler to produce adaptive software. The produced adaptive application allocates more computing power to the locations where more precision is required, and may use approximations where the precision is secondary. We automate the discovery of the optimization parameters for the special class of stencil programs, which are common in signal/image processing and numerical simulations. Finally, we explore the possibility of compressing the application data using the wavelet transform, and we use information found in this basis to locate the areas where more precision may be needed.
Damblin, Guillaume. "Contributions statistiques au calage et à la validation des codes de calcul". Thesis, Paris, AgroParisTech, 2015. http://www.theses.fr/2015AGPT0083.
Code validation aims at assessing the uncertainty affecting the predictions of a physical system by using both the outputs of a computer code which attempts to reproduce it and the available field measurements. On the one hand, the code may not be a perfect representation of reality. On the other hand, some code parameters can be uncertain and need to be estimated: this issue is referred to as code calibration. After having provided a unified view of the main procedures of code validation, we propose several contributions for solving some issues arising with computer codes which are both costly and considered as black-box functions. First, we develop a Bayesian testing procedure to detect whether or not a discrepancy function, called code discrepancy, has to be taken into account between the code outputs and the physical system. Second, we present new algorithms for building sequential designs of experiments in order to reduce the error occurring in the calibration process based on a Gaussian process emulator. Lastly, a validation procedure for a thermal code is conducted as the preliminary step of a decision problem where an energy supplier has to commit to an overall energy consumption forecast to customers. Based on Bayesian decision theory, some optimal plug-in estimators are computed.
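In its simplest form, the Bayesian calibration discussed above reduces to Bayes' rule on a parameter grid: a prior times the Gaussian likelihood of the field measurements given the code output. A toy sketch with a linear stand-in for the code and a uniform prior (a real study would add a Gaussian process emulator and a discrepancy term, as the abstract describes):

```python
import numpy as np

def posterior(theta_grid, y_obs, model, sigma):
    """Grid posterior: uniform prior, Gaussian measurement error of std sigma."""
    loglik = np.array([-0.5 * np.sum((y_obs - model(t)) ** 2) / sigma ** 2
                       for t in theta_grid])
    w = np.exp(loglik - loglik.max())   # subtract max for numerical stability
    return w / w.sum()

rng = np.random.default_rng(1)
model = lambda t: t * np.linspace(0.0, 1.0, 20)   # toy "code": linear response
true_theta, sigma = 2.5, 0.05
y = model(true_theta) + rng.normal(0.0, sigma, 20)  # synthetic field data
grid = np.linspace(0.0, 5.0, 501)
post = posterior(grid, y, model, sigma)
estimate = grid[post.argmax()]          # maximum a posteriori value
```

The posterior concentrates around the true parameter value, and its spread quantifies the residual parameter uncertainty after calibration.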
Régnier, Gilles. "Aide à l'utilisation des codes de calcul des structures : une programmation déclarative". Paris 6, 1990. http://www.theses.fr/1990PA066664.
Chemin, Sébastien. "Etude des interactions thermiques fluide-structure par un couplage de codes de calcul". Reims, 2006. http://theses.univ-reims.fr/exl-doc/GED00000555.pdf.
In this thesis, a conjugate heat transfer procedure between a finite-volume Navier-Stokes solver and a finite-element conduction solver is presented. The coupling has been performed through the MpCCI library and thermal boundary conditions on the coupling surfaces. These conditions define two coupling coefficients connecting the fluid and the solid domains. The first part describes the fluid-solid thermal steady-state coupling. The stability analysis of the boundary conditions highlights the most efficient coefficients in terms of stability and convergence. As a consequence, a steady-state algorithm has been implemented. It corresponds to an iterative procedure between the Navier-Stokes solver and the heat conduction solver. Thanks to the MpCCI library, the thermal quantities (heat flux, temperature) are exchanged between the solvers until the thermal steady state is reached in both the fluid and the solid domains. This coupling method has been validated on a simple case, namely a flat plate, and on two industrial cases: a flow around a turbine blade and an effusion cooling system. The second part of this thesis is dedicated to the fluid-solid thermal transient coupling. An original coupling algorithm applied to industrial problems is described. This algorithm corresponds to an iterative procedure between a steady-state fluid description and a transient solid description. The experimental setup consists of an interaction between a steady flow field and transient heat conduction in a flat plate.
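The steady conjugate coupling described above is, at its core, a fixed-point iteration that exchanges temperature and flux across the interface until both domains agree. A one-dimensional sketch with two conducting slabs and a relaxed Dirichlet-Neumann exchange (illustrative material values; not the MpCCI-coupled solvers of the thesis):

```python
def fluid_like_solver(t_interface, t_left=100.0, k1=1.0, L1=1.0):
    """Slab 1: returns the heat flux for a prescribed interface temperature."""
    return k1 * (t_left - t_interface) / L1

def solid_solver(flux, t_right=0.0, k2=2.0, L2=1.0):
    """Slab 2: returns the interface temperature consistent with the received flux."""
    return t_right + flux * L2 / k2

def couple(t0=50.0, omega=0.7, tol=1e-9, itmax=1000):
    """Relaxed fixed-point exchange until the interface temperature converges."""
    t = t0
    for _ in range(itmax):
        q = fluid_like_solver(t)      # send temperature, receive flux
        t_new = solid_solver(q)       # send flux, receive temperature
        if abs(t_new - t) < tol:
            return t_new
        t += omega * (t_new - t)      # under-relaxation for stability
    return t
```

With these values the converged interface temperature matches the analytic series-resistance result, (k1/L1·T_left + k2/L2·T_right)/(k1/L1 + k2/L2) = 100/3; the relaxation factor plays the same stabilizing role as the coupling coefficients analyzed in the thesis.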
Ouederni, Marouane. "Modelisation et simulation des regulations dans les codes de calcul thermique du batiment". Paris, ENMP, 1990. http://www.theses.fr/1990ENMP0236.
Chemin, Sébastien Lachi Mohammed. "Etude des interactions thermiques fluide-structure par un couplage de codes de calcul". Reims : S.C.D. de l'Université, 2006. http://scdurca.univ-reims.fr/exl-doc/GED00000555.pdf.
Testo completoGrospellier, Antoine. "Décodage des codes expanseurs quantiques et application au calcul quantique tolérant aux fautes". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS575.
Fault-tolerant quantum computation is a technique to perform reliable quantum computation using noisy components. In this context, quantum error correcting codes are used to keep the amount of errors under a sustainable threshold. One of the main problems of this field is to determine the minimum cost, in terms of memory and time, which is needed in order to transform an ideal quantum computation into a fault-tolerant one. In this PhD thesis, we show that the family of quantum expander codes and the small-set-flip decoder can be used in the construction of ref. [arXiv:1310.2984] to produce a fault-tolerant quantum circuit with constant space overhead. The error correcting code family and the decoder that we study were introduced in ref. [arXiv:1504.00822], where an adversarial error model was examined. Based on the results of this article, we analyze quantum expander codes subjected to a stochastic error model which is relevant for fault-tolerant quantum computation [arXiv:1711.08351], [arXiv:1808.03821]. In addition, we show that the decoding algorithm can be parallelized to run in constant time. This is very relevant to prevent errors from accumulating while the decoding algorithm is running. Beyond the theoretical results described above, we perform a numerical analysis of quantum expander codes to measure their performance in practice [arXiv:1810.03681]. The error model used during these simulations generates X and Z type errors on the qubits with an independent and identically distributed probability distribution. Our results are promising because they reveal that these constant-rate codes have a decent threshold and good finite-length performance.
Luu, Thi Hieu. "Amélioration du modèle de sections efficaces dans le code de cœur COCAGNE de la chaîne de calculs d'EDF". Thesis, Paris 6, 2017. http://www.theses.fr/2017PA066120/document.
In order to optimize the operation of its nuclear power plants, EDF's R&D department is currently developing a new calculation chain to simulate nuclear reactor cores with state-of-the-art tools. These calculations require a large amount of physical data, especially the cross-sections. In the full core simulation, the number of cross-section values is of the order of several billion. These cross-sections can be represented as multivariate functions depending on several physical parameters. The determination of cross-sections is a long and complex calculation; we can therefore pre-compute them at some values of the parameters (offline calculations) and then evaluate them at all desired points by interpolation (online calculations). This process requires a model of cross-section reconstruction between the two steps. In order to perform a more faithful core simulation in the new EDF chain, the cross-sections need to be better represented by taking into account new parameters. Moreover, the new chain must be able to calculate the reactor in more extensive situations than the current one. Multilinear interpolation is currently used to reconstruct cross-sections and to meet these goals. However, with this model, the number of points in its discretization increases exponentially as a function of the number of parameters, or significantly when adding points to one of the axes. Consequently, the number and time of online calculations as well as the storage size for these data become problematic. The goal of this thesis is therefore to find a new model in order to meet the following requirements: (i) (offline) reduce the number of pre-calculations, (ii) (online) reduce the stored data size for the reconstruction, and (iii) (online) maintain (or improve) the accuracy obtained by multilinear interpolation. From a mathematical point of view, this problem involves approximating multivariate functions from their pre-calculated values.
We based our research on the Tucker format, a low-rank tensor approximation, in order to propose a new model called the Tucker decomposition. With this model, a multivariate function is approximated by a linear combination of tensor products of one-variable functions. These one-variable functions are constructed by a technique called higher-order singular value decomposition (a "matricization" combined with an extension of the Karhunen-Loève decomposition). The so-called greedy algorithm is used to select the points involved in determining the coefficients of the Tucker decomposition. The results obtained show that our model satisfies the criteria required for data reduction as well as accuracy. With this model, we can also eliminate coefficients of the Tucker decomposition, both a priori and a posteriori, in order to further reduce the data storage in the online steps without significantly reducing the accuracy.
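The higher-order SVD behind the Tucker model can be sketched in a few lines of NumPy: each factor matrix comes from the SVD of one mode unfolding ("matricization"), and the core tensor is the projection of the data onto those factors. An illustrative sketch of plain truncated HOSVD (the thesis combines this with a greedy point-selection algorithm not shown here):

```python
import numpy as np

def unfold(T, mode):
    """Mode-k matricization: mode k becomes the rows."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Multiply tensor T by matrix M along the given mode."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T, ranks):
    """Truncated HOSVD: factors from the unfolding SVDs, then the core tensor."""
    U = [np.linalg.svd(unfold(T, k), full_matrices=False)[0][:, :r]
         for k, r in enumerate(ranks)]
    core = T
    for k, Uk in enumerate(U):
        core = mode_mult(core, Uk.T, k)   # project mode k onto its factor
    return core, U

def reconstruct(core, U):
    T = core
    for k, Uk in enumerate(U):
        T = mode_mult(T, Uk, k)
    return T

# demo: a tensor of exact multilinear rank (2, 2, 2) is recovered exactly
rng = np.random.default_rng(0)
T = (np.einsum('i,j,k->ijk', rng.normal(size=4), rng.normal(size=5), rng.normal(size=6))
     + np.einsum('i,j,k->ijk', rng.normal(size=4), rng.normal(size=5), rng.normal(size=6)))
core, U = hosvd(T, (2, 2, 2))
```

The storage gain is the point: the 4x5x6 tensor is replaced by a 2x2x2 core plus three thin factor matrices, and the gap widens rapidly with the number of axes.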
Zappatore, Ilaria. "Simultaneous Rational Function Reconstruction and applications to Algebraic Coding Theory". Thesis, Montpellier, 2020. http://www.theses.fr/2020MONTS021.
This dissertation deals with a computer algebra problem which has significant consequences in algebraic coding theory and error correcting codes: simultaneous rational function reconstruction (SRFR). Indeed, an accurate analysis of this problem leads to interesting results in both of these scientific domains. More precisely, simultaneous rational function reconstruction is the problem of reconstructing a vector of rational functions with the same denominator given its evaluations (or, more generally, given its remainders modulo different polynomials). The peculiarity of this problem consists in the fact that the common denominator constraint reduces the number of evaluation points needed to guarantee the existence of a solution, possibly losing uniqueness. One of the main contributions of this work is the proof that uniqueness is guaranteed for almost all instances of this problem. This result was obtained by elaborating other contributions and techniques derived from the applications of SRFR, from polynomial linear system solving to the decoding of Interleaved Reed-Solomon codes. In this work, we also study and present another application of the SRFR problem, concerning the construction of fault-tolerant algorithms: algorithms resilient to computational errors. These algorithms are constructed by introducing redundancy and using error correcting code tools to detect and possibly correct errors which occur during computations. In this application context, we improve an existing fault-tolerant technique for polynomial linear system solving by interpolation-evaluation, by focusing on the related SRFR problem.
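Rational function reconstruction can be phrased as a linear system: writing f = p/q with q monic, each evaluation point contributes one linear equation p(x_i) - y_i·q(x_i) = 0 in the unknown coefficients. A numerical sketch for a single function, using floating point and least squares for readability (the thesis works in exact arithmetic, and the simultaneous variant stacks such equations for several numerators sharing the denominator unknowns):

```python
import numpy as np

def rational_reconstruct(xs, ys, dp, dq):
    """Recover p, q with deg p <= dp and q monic of degree dq from y_i = p(x_i)/q(x_i)."""
    A, rhs = [], []
    for x, y in zip(xs, ys):
        # unknowns: p coefficients (ascending), then the dq non-leading q coefficients
        A.append([x ** j for j in range(dp + 1)] + [-y * x ** j for j in range(dq)])
        rhs.append(y * x ** dq)          # the monic leading term moves to the right side
    coeffs, *_ = np.linalg.lstsq(np.array(A), np.array(rhs), rcond=None)
    p = coeffs[:dp + 1]
    q = np.append(coeffs[dp + 1:], 1.0)  # re-attach the monic leading coefficient
    return p, q

# demo: recover f = (x + 1) / (x^2 + 1) from six evaluations
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
ys = (xs + 1.0) / (xs ** 2 + 1.0)
p, q = rational_reconstruct(xs, ys, dp=1, dq=2)
```

Six points determine the four unknowns with room to spare; the common-denominator constraint studied in the thesis is precisely what lowers this point count when several functions are reconstructed at once.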
Content, Cédric. "Méthode innovante pour le calcul de la transition laminaire-turbulent dans les codes Navier-Stokes". Toulouse, ISAE, 2011. http://www.theses.fr/2011ESAE0006.
Testo completoMöller, Nathalie. "Adaptation de codes industriels de simulation en Calcul Haute Performance aux architectures modernes de supercalculateurs". Thesis, Université Paris-Saclay (ComUE), 2019. http://www.theses.fr/2019SACLV088.
For many years, the stability of the architecture paradigm has facilitated the performance portability of large HPC codes from one generation of supercomputers to another. The announced end of Moore's Law, which governs progress in microprocessor lithography, ends this model and requires new efforts on the software side. Code modernization, based on algorithms well adapted to future systems, is mandatory. This modernization relies on well-known principles such as computation concurrency (degree of parallelism) and data locality. However, implementing these principles in large industrial applications, which often are the result of years of development efforts, turns out to be far more difficult than expected. The contributions of this thesis are twofold. On the one hand, we explore a methodology of software modernization based on the concept of proto-applications and compare it with the direct approach, while optimizing two simulation codes developed in a similar context. On the other hand, we focus on identifying the main challenges for the architectures, the programming models and the applications. The two chosen application fields are Computational Fluid Dynamics and Computational Electromagnetics.
Cliquet, Julien. "Calcul de la transition laminaire-turbulent dans les codes Navier-Stokes : application aux géométries complexes". Toulouse, ISAE, 2007. http://www.theses.fr/2007ESAE0010.
Pecquet, Lancelot. "Décodage en liste des codes géométriques". Paris 6, 2001. http://www.theses.fr/2001PA066561.
Testo completoHugo, Andra-Ecaterina. "Composability of parallel codes on heterogeneous architectures". Thesis, Bordeaux, 2014. http://www.theses.fr/2014BORD0373/document.
To face the ever more demanding requirements in terms of accuracy and speed of scientific simulations, the High Performance community constantly increases the demand for parallelism, adding tremendous value to parallel libraries strongly optimized for highly complex architectures. Enabling HPC applications to perform efficiently when invoking multiple parallel libraries simultaneously is a great challenge. Even if a uniform runtime system is used underneath, scheduling tasks or threads coming from different libraries over the same set of hardware resources introduces many issues, such as resource oversubscription, undesirable cache flushes or memory bus contention. In this thesis, we present an extension of StarPU, a runtime system specifically designed for heterogeneous architectures, that allows multiple parallel codes to run concurrently with minimal interference. Such parallel codes run within scheduling contexts that provide confined execution environments which can be used to partition computing resources. Scheduling contexts can be dynamically resized to optimize the allocation of computing resources among concurrently running libraries. We introduced a hypervisor that automatically expands or shrinks contexts using feedback from the runtime system (e.g. resource utilization). We demonstrated the relevance of this approach by extending an existing generic sparse direct solver (qr_mumps) to use these mechanisms, and introduced a new decomposition method based on proportional mapping that is used to build the scheduling contexts. In order to cope with the very irregular behavior of the application, the hypervisor manages the allocation of resources dynamically. By means of the scheduling contexts and the hypervisor, we improved the locality and thus the overall performance of the solver.
Nanty, Simon. "Quantification des incertitudes et analyse de sensibilité pour codes de calcul à entrées fonctionnelles et dépendantes". Thesis, Université Grenoble Alpes (ComUE), 2015. http://www.theses.fr/2015GREAM043/document.
This work relates to the framework of uncertainty quantification for numerical simulators, and more precisely studies two industrial applications linked to the safety studies of nuclear plants. These two applications have several common features. The first one is that the computer code inputs are functional and scalar variables, the functional ones being dependent. The second feature is that the probability distribution of the functional variables is known only through a sample of their realizations. The third feature, relative to only one of the two applications, is the high computational cost of the code, which limits the number of possible simulations. The main objective of this work was to propose a complete methodology for the uncertainty analysis of numerical simulators for the two considered cases. First, we have proposed a methodology to quantify the uncertainties of dependent functional random variables from a sample of their realizations. This methodology makes it possible both to model the dependency between the variables and their link to another variable, called a covariate, which could be, for instance, the output of the considered code. Then, we have developed an adaptation of a visualization tool for functional data, which enables the uncertainties and features of dependent functional variables to be visualized simultaneously. Second, a method to perform the global sensitivity analysis of the codes used in the two studied cases has been proposed. In the case of a computationally demanding code, the direct use of quantitative global sensitivity analysis methods is intractable. To overcome this issue, the retained solution consists in building a surrogate model or metamodel, a fast-running model approximating the computationally expensive code. An optimized uniform sampling strategy for scalar and functional variables has been developed to build a learning basis for the metamodel.
Finally, a new approximation approach for expensive codes with functional outputs has been explored. In this approach, the code is seen as a stochastic code, whose randomness is due to the functional variables, assumed uncontrollable. In this framework, several metamodels have been developed and compared. All the methods proposed in this work have been applied to the two nuclear safety applications.
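The metamodel idea above replaces the expensive code with a fast approximation trained on a small design of experiments; Gaussian process regression is a common choice. A minimal NumPy sketch for a scalar input, with a fixed RBF kernel, an illustrative lengthscale, and a sine standing in for the costly code (functional inputs would first be reduced, as in the methodology the abstract describes):

```python
import numpy as np

def rbf(A, B, ell=0.3):
    """Squared-exponential kernel between two sets of 1-D points."""
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_fit_predict(x_train, y_train, x_test, noise=1e-8):
    """GP posterior mean; the small noise term keeps the solve well conditioned."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    alpha = np.linalg.solve(K, y_train)
    return rbf(x_test, x_train) @ alpha

x = np.linspace(0.0, 1.0, 15)
y = np.sin(2 * np.pi * x)           # stand-in for an expensive simulation code
xs = np.linspace(0.0, 1.0, 101)
pred = gp_fit_predict(x, y, xs)     # thousands of cheap evaluations
```

Fifteen "code runs" suffice here to predict the response everywhere; sensitivity indices are then estimated on the cheap surrogate instead of the expensive code.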
Colin de Verdière, Guillaume. "A la recherche de la haute performance pour les codes de calcul et la visualisation scientifique". Thesis, Reims, 2019. http://www.theses.fr/2019REIMS012/document.
This thesis aims to demonstrate that algorithms and coding, in a high performance computing (HPC) context, cannot be envisioned without taking into account the hardware at the core of supercomputers, since those machines evolve dramatically over time. After setting a few definitions relating to scientific codes and parallelism, we show that the analysis of the different generations of supercomputers used at CEA over the past 30 years exhibits a number of attention points and best practices for code developers. Based on some experiments, we show how to achieve code performance suited to supercomputer usage, how to obtain portable performance, and possibly extreme performance in the world of massive parallelism, potentially using GPUs. We explain that graphical post-processing software and hardware follow the same parallelism principles as large scientific codes, requiring one to master a global view of the simulation chain. Last, we describe tendencies and constraints that will be imposed on the new generations of exaflop-class supercomputers. These evolutions will, yet again, impact the development of the next generations of scientific codes.
Wen, Erzhuang. "Contribution à l'étude des codes correcteurs et des corps finis". Toulouse 3, 1994. http://www.theses.fr/1994TOU30255.
Dumas, Jean-Guillaume. "Contributions au calcul exact intensif". Habilitation à diriger des recherches, Université de Grenoble, 2010. http://tel.archives-ouvertes.fr/tel-00514925.
Jaeger, Julien. "Transformations source-à-source pour l'optimisation de codes irréguliers et multithreads". Phd thesis, Université de Versailles-Saint Quentin en Yvelines, 2012. http://tel.archives-ouvertes.fr/tel-00842177.
Lampoh, Komlanvi. "Différentiation automatique de codes mécaniques : application à l'analyse de sensibilité des tôles sandwich aux paramètres de modélisation". Thesis, Université de Lorraine, 2012. http://www.theses.fr/2012LORR0220/document.
In engineering, for a better understanding of the mechanical behavior of a structure subjected to some perturbation of the modeling parameters, one often proceeds to a sensitivity analysis. This provides quantitative and qualitative information on the behavior of the model under study and gives access to gradients that may be used in identification and optimization methods. In this thesis, we demonstrate that this information may be obtained with little development effort by applying an Automatic Differentiation (AD) tool to the computer code that implements the model. We adapt the AD techniques to the Asymptotic Numerical Method (ANM), in its Diamant version, for sensitivity computations of numerical solutions of nonlinear problems discretized through a finite element method. We discuss in a generic manner both the theoretical aspects and the implementation of several algorithms written in Matlab. Applications concern sandwich beams and sandwich plates in both the static and dynamic (free vibration) cases. Sensitivities are computed with respect to geometric and mechanical parameters, and with respect to the elementary stiffness matrix. The generality of our developments makes it possible to take into account several viscoelastic laws with no additional effort. Three kinds of viscoelastic models are studied: constant complex modulus, low damping and higher damping. In comparison with the finite difference approximation often used in mechanics, our approach provides more accurate results for the sensitivity of the structure response to a perturbation of the modeling parameters. It also allows a reduction of the computational effort.
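Forward-mode automatic differentiation, the family of techniques applied by the AD tool above, propagates a derivative alongside each value through operator overloading, yielding sensitivities exact to rounding rather than finite-difference approximations. A minimal dual-number sketch (an illustration of the principle, not the tool used in the thesis):

```python
class Dual:
    """Forward-mode AD value: carries f and df/dx together."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)
    __rmul__ = __mul__

def derivative(f, x):
    """Seed dx/dx = 1 and read off df/dx after evaluating f."""
    return f(Dual(x, 1.0)).dot
```

For f(x) = 3x² + 2x, `derivative(f, 2.0)` returns 14.0 exactly, whereas a finite-difference quotient would trade off truncation against rounding error through its step size.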
Benmouhoub, Farah. "Optimisation de la précision numérique des codes parallèles". Thesis, Perpignan, 2022. http://www.theses.fr/2022PERP0009.
In high performance computing, nearly all implementations and published experiments use floating-point arithmetic. However, since floating-point numbers are finite approximations of real numbers, the accumulated errors may lead to hazards. These round-off errors may cause damage whose gravity varies depending on how critical the application is. Parallelism introduces new numerical accuracy problems due to the order of operations in this kind of system. This thesis concerns this last point: improving the precision of massively parallel scientific computing codes such as those found in the field of HPC (High Performance Computing).
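A standard remedy for the accumulated rounding errors described above is compensated summation, which carries the low-order error of each addition into the next one and makes the result far less sensitive to the order of operations that parallel reductions perturb. A sketch of Kahan's algorithm with illustrative data (one classical technique in this area, not the specific method of the thesis):

```python
def kahan_sum(values):
    """Kahan compensated summation: track the error lost by each addition."""
    s, c = 0.0, 0.0
    for v in values:
        y = v - c          # re-inject the previously lost low-order bits
        t = s + y
        c = (t - s) - y    # what this addition just rounded away
        s = t
    return s

# small terms vanish one by one in a naive left-to-right sum
data = [1e16] + [1.0] * 10000
```

Here every individual `1e16 + 1.0` rounds back to `1e16`, so the naive sum never moves, while the compensated sum recovers the full contribution of the ten thousand small terms.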
Duplex, Benjamin. "Transfert de déformations géométriques lors des couplages de codes de calcul - Application aux dispositifs expérimentaux du réacteur de recherche Jules Horowitz". Phd thesis, Université de la Méditerranée - Aix-Marseille II, 2011. http://tel.archives-ouvertes.fr/tel-00679015.
Testo completoSenigon, de Roumefort Ravan de. "Approche statistique du vieillissement des disques optiques CD-Audio, CD-R, CD-RW". Paris 6, 2011. http://www.theses.fr/2011PA066109.
Testo completoDemeure, Nestor. "Managing the compromise between performance and accuracy in simulation codes". Thesis, université Paris-Saclay, 2021. http://www.theses.fr/2021UPASM004.
Floating-point numbers represent only a subset of the real numbers. As such, floating-point arithmetic introduces approximations that can compound and have a significant impact on numerical simulations. We introduce a new way to estimate and localize the sources of numerical error in an application, and provide a reference implementation, the Shaman library. Our method uses a dedicated arithmetic over a type that encapsulates both the result the user would have had with the original computation and an approximation of its numerical error. We can thus measure the number of significant digits of any result or intermediate result in a simulation. We show that this approach, while simple, gives results competitive with state-of-the-art methods. It has a smaller overhead and is compatible with parallelism, which makes it suitable for the study of large-scale applications.
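The encapsulated-type idea can be imitated in a few lines: overload arithmetic on a (value, error estimate) pair and update the estimate with both the propagated errors and the new rounding error of each operation. A deliberately naive sketch using a first-order bound with unit roundoff 2⁻⁵³ (Shaman itself uses a much sharper error model):

```python
import math

class Approx:
    """A value paired with a running estimate of its rounding error."""
    EPS = 2.0 ** -53   # binary64 unit roundoff

    def __init__(self, val, err=0.0):
        self.val, self.err = val, err

    def __add__(self, other):
        other = other if isinstance(other, Approx) else Approx(other)
        v = self.val + other.val
        # propagated errors plus the rounding error of this addition
        return Approx(v, self.err + other.err + abs(v) * Approx.EPS)

    def __sub__(self, other):
        other = other if isinstance(other, Approx) else Approx(other)
        v = self.val - other.val
        return Approx(v, self.err + other.err + abs(v) * Approx.EPS)

    def digits(self):
        """Estimated number of significant decimal digits of the value."""
        if self.err == 0.0:
            return 16
        if self.val == 0.0:
            return 0
        return max(0, int(math.log10(abs(self.val) / self.err)))

ok = Approx(1.0) + Approx(1.0)                       # benign addition
bad = (Approx(1e16) + Approx(1.0)) - Approx(1e16)    # catastrophic cancellation
```

The benign sum keeps essentially all its digits, while the cancellation leaves an error estimate as large as the value itself, flagging the result as meaningless.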
Boukari, Nabil. "Modélisation du mouvement à l'aide de codes de calcul par éléments finis en 3D : application à la machine homopolaire et au microactionneur électrostatique". Toulouse, INPT, 2000. http://www.theses.fr/2000INPT008H.
Testo completoAuder, Benjamin. "Classification et modélisation de sorties fonctionnelles de codes de calcul : application aux calculs thermo-hydrauliques accidentels dans les réacteurs à eau pressurisés (REP)". Paris 6, 2011. http://www.theses.fr/2011PA066066.
Testo completoSmith, Guillaume. "Concevoir des applications temps-réel respectant la vie privée en exploitant les liens entre codes à effacements et les mécanismes de partages de secrets". Thesis, Toulouse, ISAE, 2014. http://www.theses.fr/2014ESAE0045/document.
Testo completoData from both individuals and companies is increasingly aggregated and analysed to provide new and improved services. There is a corresponding research effort to enable processing of such data in a secure and privacy-preserving way, in line with increasing public concern and more stringent regulatory requirements for the protection of such data. Secure Multi-Party Computation (MPC) and secret sharing are mechanisms that can enable both secure distribution of and computation on private data. In this thesis, we address the inefficiencies of these mechanisms by drawing on results from a theoretically related and rich area, erasure codes. We derive links between erasure codes and secret sharing, and use Maximum Distance Separable (MDS) codes as a basis for real-time applications relying on users' private data, revealing this data only to a selected group (which can be empty). The thesis makes three contributions. A new class of erasure codes, called on-the-fly codes, has been introduced for its improvements in terms of recovery delay and achievable capacity. However, little is known about the complexity of the systematic and non-systematic variants of this code, notably for live multicast transmission of multimedia content, which is their ideal use case. The evaluation of both variants demonstrates that the systematic code outperforms the non-systematic one with regard to both buffer sizes and computational complexity. Then, we propose a new layered secret sharing scheme and its application to Online Social Networks (OSNs). In current OSNs, access to a user's profile information is managed by the service provider based on a limited set of rules. The proposed scheme enables automated profile sharing in OSN groups with fine-grained privacy control, via a multi-secret sharing scheme comprising layered shares, without relying on a trusted third party.
We evaluate the security of the scheme and the resulting level of profile protection in an OSN scenario. Finally, after showing that erasure codes are efficient for real-time applications and that the security offered by secret sharing schemes can be applied to real-world applications, we derive the theoretical links between MDS codes and secret sharing, enabling the implementation of efficient secret sharing schemes built from MDS codes. To illustrate this efficiency, we implement two such schemes and evaluate their benefits with regard to computation and communication costs in an MPC application.
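The link between MDS codes and secret sharing mentioned here has a textbook instance: Shamir's (t, n) threshold scheme, whose shares are symbols of a Reed-Solomon codeword and whose reconstruction is erasure decoding. A minimal sketch (not the thesis's constructions, and with an illustrative field size):

```python
import random

P = 2**31 - 1  # a prime, so Z/PZ is a field; chosen only for illustration

def share(secret: int, t: int, n: int):
    """Shamir (t, n) sharing: evaluate a random degree-(t-1) polynomial whose
    constant term is the secret. The n shares form a Reed-Solomon codeword."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0, i.e. erasure decoding of the codeword."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P  # Fermat inverse
    return secret

shares = share(42, t=3, n=5)
print(reconstruct(shares[:3]))   # any 3 of the 5 shares recover 42
print(reconstruct(shares[-3:]))  # 42 again, from a different subset
```

The MDS property is exactly what makes the threshold exact: any t shares determine the polynomial, while t - 1 shares reveal nothing about the secret.
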
Gouicem, Mourad. "Conception et implantation d'algorithmes efficaces pour la résolution du dilemme du fabricant de tables sur architecture parallèles". Paris 6, 2013. http://www.theses.fr/2013PA066468.
Testo completoSince its standardization in 1985, floating-point arithmetic has been commonly used to approximate computations over the real numbers in a portable and predictable way. This predictability of the result is enabled by a strong requirement on the functions specified by the IEEE Std 754: they must return a correctly rounded result. Even though the implementation of basic operations is made mandatory, that of elementary functions is only recommended. This is mainly due to a computationally hard problem called the table maker's dilemma (TMD). In this thesis, we provide algorithms, along with their deployment on massively parallel architectures, in particular GPUs (Graphics Processing Units), to solve this problem in practice for some elementary functions and floating-point formats. These deployments enable a speedup by a factor greater than 50 on GPU compared to a sequential execution on CPU. The main algorithmic tools we use are number systems based on continued fraction expansions. The latter allow us to efficiently perform arithmetic over the real numbers modulo 1, and to find the hard cases for correct rounding. We then generalize the use of these number systems to modular arithmetic over the integers. This provides a framework for building algorithms for modular multiplication and modular division based only on the classical Euclidean algorithm.
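The underlying tool, the continued fraction expansion of a number, is itself just the Euclidean algorithm in disguise. A minimal sketch (not the thesis's GPU algorithms), using exact rationals:

```python
from fractions import Fraction
from math import floor

def continued_fraction(x: Fraction, max_terms: int):
    """Partial quotients [a0; a1, a2, ...] of x, computed by repeatedly
    taking the integer part and inverting the remainder (Euclid's algorithm)."""
    terms = []
    for _ in range(max_terms):
        a = floor(x)
        terms.append(a)
        frac = x - a
        if frac == 0:  # x was rational and the expansion terminates
            break
        x = 1 / frac
    return terms

# 355/113, the classical rational approximation of pi, expands in 3 steps.
print(continued_fraction(Fraction(355, 113), 10))  # [3, 7, 16]
```
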
Ejjaaouani, Ksander. "Conception du modèle de programmation INKS pour la séparation des préoccupations algorithmiques et d’optimisation dans les codes de simulation numérique : application à la résolution du système Vlasov/Poisson 6D". Thesis, Strasbourg, 2019. http://www.theses.fr/2019STRAD037.
Testo completoThe InKS programming model aims to improve the readability, portability and maintainability of simulation codes, as well as to boost developer productivity. To fulfill these objectives, InKS proposes two languages, each dedicated to a specific concern. First, InKS PIA provides concepts to express simulation algorithms with no concern for optimization. Once this foundation is set, InKS PSO enables optimization specialists to reuse the algorithm in order to specify the optimization part. The model makes it possible to write numerous versions of the optimizations, typically one per architecture, from a single algorithm. This strategy limits the rewriting of code for each new optimization specification, boosting developer productivity. We have evaluated the InKS programming model by using it to implement the 6D Vlasov-Poisson solver and compared our version with a Fortran one. This evaluation highlighted that, in addition to the separation of concerns, the InKS approach is no more complex than traditional ones while offering the same performance. Moreover, using the algorithm alone, it is able to generate valid code for the non-critical parts of the application, leaving optimization specialists more time to focus on the computation-intensive parts.
Chabane, Hinde. "Contribution à la validation expérimentale de l'approche Monte-Carlo de l'interaction neutron-silicium utilisée dans des codes de physique nucléaire dédiées au calcul de SER des mémoires SRAM". Montpellier 2, 2006. http://www.theses.fr/2006MON20164.
Testo completoLegeay, Matthieu. "Utilisation du groupe de permutations d'un code correcteur pour améliorer l'efficacité du décodage". Rennes 1, 2012. http://www.theses.fr/2012REN1S099.
Testo completoError correcting codes and the associated decoding problem are one of the approaches considered in post-quantum cryptography. In general, a random code most often has a trivial permutation group. However, the codes involved in the construction of cryptosystems and cryptographic functions based on error correcting codes usually have a non-trivial permutation group. Moreover, few cryptanalysis articles use the information contained in these permutation groups. We aim at improving decoding algorithms by using the permutation group of error correcting codes. There are many ways to exploit it. The first one we focus on in this thesis uses the cyclic permutation called the "shift" in information set decoding. Thus, we build on work initiated by MacWilliams and put forward a detailed complexity analysis. The other approach we investigate is to use a permutation of order two to algebraically construct a subcode of the original code. Decoding in this subcode, which has smaller parameters, is easier and allows information to be recovered with a view to decoding in the original code. Finally, we apply this last technique to a well-known family of correcting codes, i.e. Reed-Muller codes, extending the work initiated by Sidel'nikov and Pershakov.
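The shift-based approach rests on the defining property of cyclic codes: every cyclic shift of a codeword is again a codeword, so the shift belongs to the code's permutation group. A toy check (not from the thesis; the [7,4] cyclic Hamming code with generator g(x) = x^3 + x + 1 is used purely for illustration):

```python
G = 0b1011  # g(x) = x^3 + x + 1, a generator of the cyclic [7,4] Hamming code

def mul_gf2(a: int, b: int) -> int:
    """Carry-less multiplication of GF(2)[x] polynomials stored as bitmasks."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

# All 16 codewords: m(x) * g(x) for every message polynomial of degree < 4.
code = {mul_gf2(m, G) for m in range(16)}

def shift(w: int) -> int:
    """Cyclic shift of a length-7 word, i.e. multiplication by x mod x^7 - 1."""
    return ((w << 1) | (w >> 6)) & 0b1111111

print(len(code))                              # 16
print(all(shift(w) in code for w in code))    # True: the shift preserves the code
```
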
Briaud, Pierre. "Algebraic cryptanalysis of post-quantum schemes and related assumptions". Electronic Thesis or Diss., Sorbonne université, 2023. http://www.theses.fr/2023SORUS396.
Testo completoThis thesis studies the effect of algebraic techniques on certain post-quantum cryptosystems. We give attacks on multivariate and code-based schemes in the rank metric, some of which had been submitted to the NIST standardization process. Most of these works involve the MinRank problem or structured versions of it. We have devised new polynomial modelings for some of these versions and contributed to the analysis of existing ones, in particular the Support-Minors modeling (Bardet et al., EUROCRYPT 2020). Our break of a recent multivariate encryption scheme (Raviv et al., PKC 2021) is also a MinRank attack. Finally, we studied other algebraic systems, no longer related to MinRank, arising from the cryptanalysis of Regular Syndrome Decoding (Augot et al., Mycrypt 2005) and of a symmetric primitive tailored to zero-knowledge proofs (Bouvier et al., CRYPTO 2023).
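For readers unfamiliar with it, the MinRank problem asks: given matrices M1, ..., Mk over a field and a target rank r, find a nonzero combination x1*M1 + ... + xk*Mk of rank at most r. A brute-force toy instance over GF(2) (the real attacks model this algebraically, e.g. via Support-Minors, rather than enumerating):

```python
import numpy as np
from itertools import product

def gf2_rank(A: np.ndarray) -> int:
    """Rank of a 0/1 matrix over GF(2) by Gaussian elimination."""
    A = A.copy() % 2
    rank = 0
    rows, cols = A.shape
    for c in range(cols):
        pivot = next((r for r in range(rank, rows) if A[r, c]), None)
        if pivot is None:
            continue
        A[[rank, pivot]] = A[[pivot, rank]]
        for r in range(rows):
            if r != rank and A[r, c]:
                A[r] ^= A[rank]  # XOR is addition in GF(2)
        rank += 1
    return rank

# Plant a low-rank combination: M1 ^ M2 ^ M3 equals the all-ones rank-1 matrix.
M1 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]])
M2 = np.array([[0, 1, 0], [0, 0, 1], [1, 0, 0]])
low = np.ones((3, 3), dtype=int)  # rank 1
M3 = (M1 + M2 + low) % 2
mats, r = [M1, M2, M3], 1

solutions = [x for x in product([0, 1], repeat=3)
             if any(x) and gf2_rank(sum(xi * Mi for xi, Mi in zip(x, mats)) % 2) <= r]
print(solutions)  # [(1, 1, 1)] — exactly the planted combination
```
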
Roux, Antoine. "Etude d’un code correcteur linéaire pour le canal à effacements de paquets et optimisation par comptage de forêts et calcul modulaire". Electronic Thesis or Diss., Sorbonne université, 2019. http://www.theses.fr/2019SORUS337.
Testo completoReliably transmitting information over a transmission channel is a recurrent problem in computer science. Whatever channel is used to transmit information, erasures, or outright losses, of this information inevitably occur. Different solutions can be used to address this problem; using forward error correction codes is one of them. In this thesis, we study a corrector code developed in 2014 and 2015 for the Thales company during the second year of my master's apprenticeship. It is currently used to ensure the reliability of a transmission based on the UDP protocol passing through a network diode, Elips-SD. Elips-SD is an optical diode that can be plugged onto an optical fiber to physically guarantee that the transmission is unidirectional. The main use case of such a diode is to enable supervision of a critical site while ensuring that no information can be transmitted to that site. Conversely, another use case is the transmission from one or multiple unsecured emitters to one secured receiver who wants to ensure that no information can be stolen. The corrector code that we present is a linear corrector code for the binary packet erasure channel, which obtained NATO certification from the DGA (the French "Direction Générale de l'Armement"). We named it Fauxtraut, for "Fast algorithm using Xor to repair altered unidirectional transmissions". In order to study this code, presenting how it works, its performance and the modifications we added during this thesis, we first establish a state of the art of forward error correction, focusing on non-MDS linear codes such as LDPC codes. Then we present Fauxtraut's behavior and analyse it theoretically and with simulations.
Finally, we present different versions of this code that were developed during this thesis, leading to other use cases, such as transmitting reliable information that can be altered instead of being erased, or over a bidirectional channel, as in the H-ARQ protocol, as well as various results on the number of cycles in particular graphs. In the last part, we present results obtained during this thesis that ultimately led to an article in theoretical computer science. It concerns an NP-hard problem in graph theory: maximum matching in temporal graphs. In this article, we propose two algorithms with polynomial complexity: a 2-approximation algorithm and a kernelization algorithm for this problem.
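The basic building block behind XOR-based packet erasure codes like the one studied here can be shown in a few lines (a generic sketch of the parity idea, not Fauxtraut itself): one XOR repair packet recovers any single erased source packet.

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length packets."""
    return bytes(x ^ y for x, y in zip(a, b))

packets = [b"alpha", b"bravo", b"delta"]   # equal-length source packets
parity = reduce(xor_bytes, packets)        # repair packet sent alongside the data

# The erasure channel drops packet 1; XORing everything that arrived,
# including the parity packet, restores it.
received = [packets[0], None, packets[2]]
repaired = reduce(xor_bytes, [p for p in received if p is not None] + [parity])
print(repaired)  # b'bravo'
```
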
Daou, Mehdi Pierre. "Développement d’une méthodologie de couplage multimodèle avec changements de dimension : validation sur un cas-test réaliste". Thesis, Université Grenoble Alpes (ComUE), 2016. http://www.theses.fr/2016GREAM061/document.
Testo completoFor decades, progress in physical knowledge, numerical techniques and computing power has made it possible to address more and more complex simulations. The modelling of river and marine flows is no exception to this rule. For many applications, engineers now have to implement complex "modelling systems", coupling several models and software packages that represent various parts of the physical system. Such modelling systems make it possible to address numerous studies, like quantifying the impact of industrial constructions or highway structures, or evaluating the consequences of an extreme event. In the framework of the present thesis, we address model coupling techniques using the Schwarz methodology, which is based on domain decomposition methods. The basic principle is to reduce the resolution of a complex problem to several simpler sub-problems, thanks to an iterative algorithm. These methods are particularly well suited to industrial codes, since they are only slightly intrusive. This thesis was carried out within the framework of a CIFRE contract, with funding from the European CRISMA project, and was thus greatly influenced by this industrial context. It was performed within the Artelia company, in collaboration with the AIRSEA team of the Jean Kuntzmann Laboratory, with the main objective of transferring to Artelia knowledge and expertise regarding coupling methodologies. In this thesis, we develop a methodology for multi-model coupling with heterogeneous dimensions, based on Schwarz methods, in order to allow the modelling of complex problems in operational cases. From the industrial viewpoint, the coupled models developed must use software meeting Artelia's needs (Telemac-3D, Mascaret, InterFOAM, Open-PALM). We first study a test case coupling 1-D and 3-D free-surface flows, using the same software system, Telemac-Mascaret. The advantage of such a coupling is a reduction of the computational cost, thanks to the use of a 1-D model.
However, the change in model dimension makes it difficult to properly define the notion of coupling, leading to a coupled solution that is not defined in a unique way but depends on the choice of the interface operators. Then we study a coupling case between a single-phase model and a two-phase model (1-D/3-D and 3-D/3-D), using the Telemac-Mascaret and InterFOAM software systems. Once again, the main difficulty lies in the definition of the interface operators, due to the change in the physics (single-phase / two-phase). Such a coupling makes it possible to solve complex flows that the Telemac-Mascaret system alone cannot address (breaking waves, water blades, closed-conduit flow, etc.), by locally using InterFOAM where necessary (InterFOAM being very expensive in terms of computation). Finally, we implement such a single-phase/two-phase coupling in an operational engineering study. In addition, we also present the work done during the CRISMA project. The overall objective of the CRISMA project was to develop a simulation-based decision support system for operational crisis management in different domains of natural or industrial risk (floods, forest fires, accidental pollution, etc.). In this context, Artelia coordinated the development of an application for simulating various aspects of crises linked to flood risks in Charente-Maritime.
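The Schwarz iteration underlying this coupling methodology can be sketched in a deliberately tiny setting (an illustrative assumption, not Artelia's coupled models): two overlapping 1-D subdomains for u'' = 0 on [0, 1] with u(0) = 0 and u(1) = 1, exchanging Dirichlet interface values until the iterates agree with the global solution u(x) = x.

```python
# Schwarz alternating method for u'' = 0 on [0,1], u(0)=0, u(1)=1,
# with overlapping subdomains [0, b] and [a, 1]. Each local solve of
# u'' = 0 with Dirichlet data is just linear interpolation, so only the
# interface values u(a) and u(b) need to be iterated.
a, b = 0.4, 0.6            # overlap region [a, b]
u_at_a, u_at_b = 0.0, 0.0  # initial guesses for the interface values

for _ in range(30):
    # Left solve on [0, b] with u(0)=0, u(b)=u_at_b; evaluate at x=a.
    u_at_a = (a / b) * u_at_b
    # Right solve on [a, 1] with u(a)=u_at_a, u(1)=1; evaluate at x=b.
    u_at_b = u_at_a + (1 - u_at_a) * (b - a) / (1 - a)

print(round(u_at_a, 6), round(u_at_b, 6))  # 0.4 0.6 — the exact solution u(x)=x
```

Each pass contracts the interface error by a fixed factor that shrinks as the overlap grows, which is the classical convergence behavior of the alternating Schwarz method.
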
Legaux, Joeffrey. "Squelettes algorithmiques pour la programmation et l'exécution efficaces de codes parallèles". Phd thesis, Université d'Orléans, 2013. http://tel.archives-ouvertes.fr/tel-00990852.
Testo completoHe, Guanlin. "Parallel algorithms for clustering large datasets on CPU-GPU heterogeneous architectures". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG062.
Testo completoClustering, which aims at achieving natural groupings of data, is a fundamental and challenging task in machine learning and data mining. Numerous clustering methods have been proposed in the past, among which k-means is one of the most famous and commonly used, due to its simplicity and efficiency. Spectral clustering is a more recent approach that usually achieves higher clustering quality than k-means. However, classical algorithms for spectral clustering suffer from a lack of scalability due to their high complexity in terms of the number of operations and the memory space requirements. This scalability challenge can be addressed by applying approximation methods or by employing parallel and distributed computing. The objective of this thesis is to accelerate spectral clustering and make it scalable to large datasets by combining representatives-based approximation with parallel computing on CPU-GPU platforms. Considering different scenarios, we propose several parallel processing chains for large-scale spectral clustering. We design optimized parallel algorithms and implementations for each module of the proposed chains: parallel k-means on CPU and GPU, parallel spectral clustering on GPU using a sparse storage format, parallel filtering of data noise on GPU, etc. Our various experiments reach high performance and validate the scalability of each module and of the complete chains.
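The k-means building block parallelized in this work is Lloyd's algorithm, whose two steps are natural GPU kernels. A minimal NumPy sketch (not the thesis's CPU-GPU implementation; the seeding here is simplified to one point per region for determinism, where real code would use random or k-means++ initialization):

```python
import numpy as np

def kmeans(X: np.ndarray, centers: np.ndarray, n_iter: int = 20):
    """Plain Lloyd's algorithm: the assignment and update steps below are the
    data-parallel kernels that a GPU implementation accelerates."""
    for _ in range(n_iter):
        # Assignment: nearest center for every point (embarrassingly parallel).
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        # Update: per-cluster mean (a segmented reduction).
        centers = np.array([X[labels == j].mean(axis=0)
                            for j in range(len(centers))])
    return centers, labels

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.1, (50, 2)),   # a tight blob near (0, 0)
               rng.normal(5.0, 0.1, (50, 2))])  # a tight blob near (5, 5)
centers, labels = kmeans(X, X[[0, -1]].copy())  # one seed per blob
print(np.round(centers))  # ≈ [[0. 0.] [5. 5.]]
```
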
Mounsif, Mostafa. "Le problème des moments par la méthode de l'entropie maximale". Montpellier 2, 1992. http://www.theses.fr/1992MON20171.
Testo completoHamidi, Hamid-Reza. "Couplage à hautes performances de codes parallèles et distribués". Phd thesis, 2005. http://tel.archives-ouvertes.fr/tel-00010971.
Testo completophysiques en même temps, est apparu. Ce type d'application est appelé "couplage de code". En effet, plusieurs codes (physiques) sont couplés ou interconnectés an qu'ils communiquent pour réaliser la simulation.
This thesis addresses the issues involved in the high-performance coupling of parallel and distributed codes. Achieving performance relies on designing distributed applications in which some components are parallelized and communications are efficient. The basic idea of this thesis is to use a data-flow-oriented parallel programming language (here, Athapascan) within two models for designing distributed applications: the "remote procedure call (RPC) model" and the "stream-oriented model". The contributions of this research work are the following:
- Use of a data-flow language in an RPC computing grid;
Within the HOMA project, the extensions to the RPC model concerned, on the one hand, the control and communication semantics and, on the other hand, the runtime systems, so as to better exploit parallelism. The theoretical results of these extensions, for an implementation on the CORBA software bus using Athapascan's KAAPI runtime engine and for a homogeneous architecture such as a PC cluster, are presented in the form of an execution cost model. Experiments (both elementary ones and one on a real application) validated this cost model.
- Extension of a shared-memory model for code coupling;
In order to extend the shared-data access semantics of the Athapascan language, we proposed the notion of a "temporal collection". This concept describes stream-like (data-flow) access semantics. The "spatial collection" makes it possible to better exploit parallel data. To specify the semantics associated with these new notions, we gave a new definition of shared data. Then, within the scope of this definition, we defined three types of shared data: "sequential", "temporal collection" and "spatial collection".
Touchette, Dave. "Interactive quantum information theory". Thèse, 2015. http://hdl.handle.net/1866/12341.
Testo completoQuantum information theory has developed tremendously over the past two decades, with analogues and extensions of the source coding and channel coding theorems for unidirectional communication. Meanwhile, for interactive communication, a quantum analogue of communication complexity has been developed, for which quantum protocols can provide exponential savings over the best possible classical protocols for some classical tasks. However, quantum information is much more sensitive to noise than classical information. It is therefore essential to make the best use possible of quantum resources. In this thesis, we take an information-theoretic point of view on interactive quantum protocols and study the interactive analogues of source compression and noisy channel coding. The setting we consider is that of quantum communication complexity: Alice and Bob want to perform some joint quantum computation while minimizing the required amount of communication. Local computation is deemed free. Our results are split into three distinct chapters, and these are organized in such a way that each can be read independently. Given its central role in the context of interactive compression, we devote a chapter to the task of quantum state redistribution. In particular, we prove lower bounds on its communication cost that are robust in the context of interactive communication. We also prove one-shot, one-message achievability bounds. In a subsequent chapter, we define a new, fully quantum notion of information cost for interactive protocols and a corresponding notion of information complexity for bipartite tasks. It characterizes how much quantum information, rather than quantum communication, Alice and Bob must exchange in order to implement a given bipartite task. We prove many structural properties for these quantities, and provide an operational interpretation for quantum information complexity as the amortized quantum communication complexity. 
In the special case of classical inputs, we provide an alternate characterization of information cost that provides an answer to the following question about quantum protocols: what is the cost of forgetting classical information? Two applications are presented: the first general multi-round direct-sum theorem for quantum protocols, and a tight lower bound, up to polylogarithmic terms, for the bounded-round quantum communication complexity of the disjointness function. In a final chapter, we initiate the study of the interactive quantum capacity of noisy channels. Since techniques to distribute entanglement are well-studied, we focus on a model with perfect pre-shared entanglement and noisy classical communication. We show that even in the harder setting of adversarial errors, we can tolerate a provably maximal error rate of one half minus epsilon, for an arbitrarily small epsilon greater than zero, at positive communication rates. It then follows that random noise channels with positive capacity for unidirectional transmission also have positive interactive quantum capacity. We conclude with a discussion of our results and further research directions in interactive quantum information theory.