Contents
A selection of scholarly literature on the topic "Modélisation d'erreur"
Consult the lists of relevant articles, books, dissertations, theses, and other scholarly sources on the topic "Modélisation d'erreur".
Journal articles on the topic "Modélisation d'erreur"
Piednoir, E. "Maladie de Lyme : modélisation du risque d’erreur diagnostique en pratique courante." Médecine et Maladies Infectieuses 50, no. 6 (September 2020): S108. http://dx.doi.org/10.1016/j.medmal.2020.06.221.
Esso, Loesse Jacques. "La dépendance démographique est-elle un obstacle à l’épargne et à la croissance en Côte d’Ivoire?" Articles 85, no. 4 (December 8, 2010): 361–82. http://dx.doi.org/10.7202/045069ar.
Dinga, Bruno, Jimbo Henry Claver, Kum Kwa Cletus, and Shu Felix Che. "Modeling and Predicting Exchange Rate Volatility: Application of Symmetric GARCH and Asymmetric EGARCH and GJR-GARCH Models." Journal of the Cameroon Academy of Sciences 19, no. 2 (August 3, 2023): 155–78. http://dx.doi.org/10.4314/jcas.v19i2.6.
Cantin, Richard, and Cédric Bereaud. "Differentes sources d’erreurs dans le diagnostic de performance énergétique pour les bâtiments." Acta Europeana Systemica 8 (July 10, 2020): 231–40. http://dx.doi.org/10.14428/aes.v8i1.56393.
Geoffre, Thierry. "Enseignement-apprentissage de la morphographie du français : de la modélisation praxéologique aux référentiels de compétences et d’erreurs." SHS Web of Conferences 143 (2022): 01007. http://dx.doi.org/10.1051/shsconf/202214301007.
Krecké, Carine, and Patrice Pieretti. "Degré de dépendance face aux prix étrangers d’un secteur exportateur d’un petit pays : une application à l’industrie du Luxembourg." Économie appliquée 50, no. 4 (1997): 153–75. http://dx.doi.org/10.3406/ecoap.1997.1192.
Bouchekourte, Mustapha, and Norelislam El Hami. "Modélisation mathématique à correction d’erreurs de la liquidité du marché boursier. Application : Impact de la structure du marché et du drainage de l’épargne institutionnelle sur le marché des capitaux." Incertitudes et fiabilité des systèmes multiphysiques 2, no. 1 (May 2018). http://dx.doi.org/10.21494/iste.op.2018.0259.
Повний текст джерелаДисертації з теми "Modélisation d’erreur"
Moura, Ferreira Florian. "Budget d’erreur en optique adaptative : Simulation numérique haute performance et modélisation dans la perspective des ELT." Thesis, Sorbonne Paris Cité, 2018. http://www.theses.fr/2018USPCC032/document.
In a few years, a new class of giant telescopes will appear. Their diameters will exceed 20 m, up to 39 m for the European Extremely Large Telescope (ELT). However, images obtained from ground-based observations are severely degraded by the atmosphere: without correction, the resolution of these giant telescopes is equivalent to that of an amateur telescope a few tens of centimeters in diameter. Adaptive optics (AO) therefore becomes essential, as it aims to correct in real time the disturbance due to atmospheric turbulence and to recover the theoretical resolution of the telescope. Nevertheless, AO systems are not perfect: a residual wavefront error remains and still degrades image quality. The latter is measured by the point spread function (PSF) of the system, and this PSF depends on the residual wavefront error. Hence, identifying and understanding the various contributors to the AO residual error is essential. For these extremely large telescopes, dimensioning the AO systems is challenging. In particular, the numerical complexity affects the simulation tools needed for AO design. High-performance computing techniques relying on massive parallelization are required. General-Purpose computing on Graphics Processing Units (GPGPU) makes GPUs usable for this purpose: this architecture is suitable for massive parallelization as it leverages a GPU's several thousand cores, instead of a few tens for a classical CPU. In this context, this PhD thesis is composed of three parts. The first presents the development of COMPASS, a GPU-based high-performance end-to-end simulation tool for AO systems that scales to the ELT; its performance allows AO systems for the ELT to be simulated in a few minutes. In the second part, an error-breakdown estimation tool, ROKET, is added to the end-to-end simulation in order to study the various contributors to the AO residual error.
Finally, an analytical model is proposed for these error contributors, leading to a new way to estimate the PSF. Possible on-sky applications are also discussed.
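The relationship between residual wavefront error and PSF quality discussed in this abstract is often captured, to first order, by the Maréchal approximation, with independent error terms added quadratically. The sketch below illustrates this in Python; the function names and numeric values are illustrative, not taken from COMPASS or ROKET.

```python
import numpy as np

def rss_error_budget(contributors_nm):
    """Total residual error: independent terms add quadratically (nm RMS)."""
    return float(np.sqrt(sum(e ** 2 for e in contributors_nm)))

def strehl_marechal(sigma_nm, wavelength_nm):
    """Marechal approximation: Strehl ratio from the RMS residual
    wavefront error at a given imaging wavelength."""
    sigma_rad = 2.0 * np.pi * sigma_nm / wavelength_nm  # phase error, rad RMS
    return float(np.exp(-sigma_rad ** 2))

# Hypothetical budget: fitting, temporal and noise errors, H band (1650 nm)
total_nm = rss_error_budget([90.0, 60.0, 40.0])
sr = strehl_marechal(total_nm, 1650.0)
```

This is precisely why an error breakdown is useful: once the contributors are known individually, their quadratic sum predicts the achievable Strehl ratio.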
Bidaj, Klodjan. "Modélisation du bruit de phase et de la gigue d'une PLL, pour les liens séries haut débit." Thesis, Bordeaux, 2016. http://www.theses.fr/2016BORD0355/document.
Bit rates of high-speed serial links (USB, SATA, PCI Express, etc.) have reached multiple gigabits per second and continue to increase. Two of the major electrical parameters used to characterize SerDes integrated circuit performance are the transmitted jitter at a given bit error rate (BER) and the receiver's capacity to track jitter at a given BER. Modeling the phase noise of the different SerDes components, extracting the time jitter and decomposing it, helps designers achieve the desired figure of merit (FoM) for future SerDes versions. Generating synthetic jitter patterns from white and colored noise allows better analysis of the effect of jitter in a system during design verification. The phase-locked loop (PLL) is one of the contributors of random and periodic clock jitter in the system. This thesis presents a method for modeling the PLL with phase noise injection and estimating the time-domain jitter. A time-domain model including PLL loop nonlinearities is created in order to estimate jitter. A novel method for generating Gaussian-distributed synthetic jitter patterns from colored noise profiles is also proposed. Standards organizations specify random and deterministic jitter budgets. In order to decompose the PLL output jitter (or the jitter generated by the proposed method), a new technique for jitter analysis and decomposition is proposed. Modeling and simulation results correlate well with measurements, and this technique helps designers properly identify and quantify the sources of deterministic jitter and their impact on the SerDes system. We have also developed a method for specifying PLLs in terms of phase noise. This method works for any standard (USB, SATA, PCIe, …) and defines phase noise profiles for the different parts of the PLL, in order to ensure that the standard's jitter requirements are satisfied.
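One common way to generate Gaussian synthetic jitter with a prescribed colored spectrum, in the spirit of the method described above, is to shape white Gaussian noise in the frequency domain. The sketch below is a generic illustration under that assumption; the 1/f profile and all names are ours, not the thesis's actual algorithm.

```python
import numpy as np

def colored_jitter(n, amplitude_profile, seed=None):
    """Zero-mean Gaussian jitter sequence with a prescribed spectral shape.

    White Gaussian noise is shaped in the frequency domain by
    `amplitude_profile` (one-sided weights, length n // 2 + 1); the
    inverse FFT of a shaped Gaussian spectrum is still Gaussian."""
    rng = np.random.default_rng(seed)
    spectrum = np.fft.rfft(rng.standard_normal(n)) * amplitude_profile
    return np.fft.irfft(spectrum, n)

n = 4096
freqs = np.fft.rfftfreq(n, d=1.0)
# ~1/f power profile (amplitude ~ f^(-1/2)); DC bin zeroed to keep zero mean
profile = np.where(freqs > 0.0, 1.0 / np.sqrt(np.maximum(freqs, freqs[1])), 0.0)
jitter = colored_jitter(n, profile, seed=0)
```

Different phase noise regions (1/f³, 1/f², flat) can be modeled by changing the exponent of the profile per frequency band.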
Legrand, Hélène. "Algorithmes parallèles pour le traitement rapide de géométries 3D." Electronic Thesis or Diss., Paris, ENST, 2017. http://www.theses.fr/2017ENST0053.
Over the last twenty years, the main signal processing concepts have been adapted to digital geometry, in particular to 3D polygonal meshes. However, the processing time required for large models is significant. This computational load becomes an obstacle in the current context, where the massive amounts of data generated every second may need to be processed with several operators. The ability to run geometry processing operators under strong time constraints is a critical challenge in dynamic 3D systems. In this context, we seek to speed up some of the current algorithms by several orders of magnitude, and to reformulate or approximate them in order to reduce their complexity or make them parallel. In this thesis, we build on a compact and effective object for analyzing 3D surfaces at different scales: error quadrics. In particular, we propose new high-performance algorithms that maintain error quadrics on the surface to represent the geometry. One of the main challenges lies in effectively generating the right structures for parallel processing, in order to take advantage of the GPU.
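The error quadrics this abstract builds on represent squared point-to-plane distance as a quadratic form, and they combine by simple addition, which is what makes them amenable to parallel reduction. A minimal sketch (names and storage layout are illustrative):

```python
import numpy as np

def plane_quadric(n, d):
    """Fundamental error quadric of the plane n.x + d = 0 (|n| = 1).

    Stored as (A, b, c) with A = n n^T, b = d n, c = d^2, so that
    Q(v) = v^T A v + 2 b^T v + c is the squared point-to-plane distance."""
    n = np.asarray(n, dtype=float)
    return np.outer(n, n), d * n, d * d

def quadric_error(Q, v):
    """Evaluate the quadric at point v."""
    A, b, c = Q
    v = np.asarray(v, dtype=float)
    return float(v @ A @ v + 2.0 * b @ v + c)

def add_quadrics(Q1, Q2):
    """Quadrics combine by component-wise addition, which is what makes
    them convenient for parallel (e.g. GPU) reductions over faces."""
    return tuple(x + y for x, y in zip(Q1, Q2))

# Squared distance from the point (0, 0, 2) to the plane z = 0 is 4
Q = plane_quadric([0.0, 0.0, 1.0], 0.0)
err = quadric_error(Q, [0.0, 0.0, 2.0])
```

Because `add_quadrics` is associative and commutative, accumulating per-face quadrics over a mesh maps directly onto a GPU reduction.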
Gokpi, Kossivi. "Modélisation et Simulation des Ecoulements Compressibles par la Méthode des Eléments Finis Galerkin Discontinus." Thesis, Pau, 2013. http://www.theses.fr/2013PAUU3005/document.
The aim of this thesis is to deal with compressible Navier-Stokes flows discretized by the Discontinuous Galerkin Finite Element Method (DGFEM). Several aspects have been considered. The first is to show the optimal convergence of the DGFEM when using high-order polynomials. The second is to design shock-capturing methods, such as slope limiters and artificial viscosity, to suppress the numerical oscillations that occur when p > 0 schemes are used. The third is to design an a posteriori error estimator for adaptive mesh refinement, in order to optimize the mesh over the computational domain. Finally, we want to show the accuracy and robustness of the implemented DG method at very low Mach numbers. When simulating compressible flows at very low Mach numbers, at the limit of incompressible flow, many problems arise, such as loss of accuracy and poor convergence of the solution. Solutions such as preconditioning exist for running low-Mach-number problems, but this approach usually modifies the Euler equations. Here the Euler equations are not modified: with a robust time scheme and well-imposed boundary conditions, one can obtain efficient and accurate results.
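As an example of the slope-limiter family of shock-capturing methods mentioned here, a minimal minmod limiter might look as follows; this is a generic textbook sketch, not the limiter designed in the thesis.

```python
def minmod(*args):
    """Return the argument of smallest magnitude when all arguments share
    the same sign, and 0 otherwise (the slope is flattened near extrema)."""
    if all(a > 0 for a in args):
        return min(args)
    if all(a < 0 for a in args):
        return max(args)
    return 0.0

def limited_slope(u_left, u_center, u_right, raw_slope):
    """Limit a cell's reconstructed slope against the one-sided
    differences with its neighbors, suppressing spurious oscillations."""
    return minmod(raw_slope, u_center - u_left, u_right - u_center)
```

In a DG setting the same idea is applied to the higher-order modes of the polynomial solution in each element, reducing them where the reconstruction would overshoot the neighboring cell averages.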
Yaoumi, Mohamed. "Energy modeling and optimization of protograph-based LDPC codes." Thesis, Ecole nationale supérieure Mines-Télécom Atlantique Bretagne Pays de la Loire, 2020. http://www.theses.fr/2020IMTA0224.
There are different types of error-correcting codes, each of which offers different trade-offs in terms of decoding performance and energy consumption. We address this problem for Low-Density Parity-Check (LDPC) codes. In this work, we considered LDPC codes constructed from protographs together with a quantized Min-Sum decoder, for their good performance and efficient hardware implementation. We used a method based on Density Evolution to evaluate the finite-length performance of the decoder for a given protograph. Then, we introduced two models to estimate the energy consumption of the quantized Min-Sum decoder. From these models, we developed an optimization method to select protographs that minimize the decoder's energy consumption while satisfying a given performance criterion. The proposed optimization method is based on a genetic algorithm called differential evolution. In the second part of the thesis, we considered a faulty LDPC decoder, assuming that the circuit introduces faults in the memory units used by the decoder. We then updated the memory energy model to take the noise in the decoder into account, and proposed an alternating method to optimize the model parameters so as to minimize the decoder's energy consumption for a given protograph.
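The Min-Sum decoder named in this abstract is built around a check-node update in which each outgoing message takes the minimum magnitude over the *other* incoming messages and the product of their signs. The floating-point sketch below illustrates the normalized variant; the factor `alpha` is a common choice of ours, and the fixed-point quantization used in hardware is omitted.

```python
import numpy as np

def min_sum_check_update(llrs, alpha=0.75):
    """Normalized Min-Sum check-node update.

    For each edge i, the output sign is the product of the signs of the
    other incoming LLRs, and the output magnitude is the minimum of the
    other incoming magnitudes, scaled by a normalization factor alpha."""
    llrs = np.asarray(llrs, dtype=float)
    signs = np.sign(llrs)
    mags = np.abs(llrs)
    total_sign = np.prod(signs)
    order = np.argsort(mags)  # order[0] -> index of smallest magnitude
    min1, min2 = mags[order[0]], mags[order[1]]
    out = np.empty_like(llrs)
    for i in range(len(llrs)):
        other_min = min2 if i == order[0] else min1
        # For +/-1 signs, dividing signs[i] out of the product equals multiplying by it
        out[i] = alpha * total_sign * signs[i] * other_min
    return out

out = min_sum_check_update([2.0, -1.0, 3.0])  # illustrative LLR values
```

Only the two smallest magnitudes and the overall sign need to be stored per check node, which is exactly what makes the update cheap in hardware.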
Fabre, Léa. "Contributions and Opportunities of Wi-Fi Data to Improve Transport Demand Knowledge / Utilisation de données Wi-Fi, quels apports pour la connaissance de la demande de transport?" Electronic Thesis or Diss., Lyon 2, 2024. http://www.theses.fr/2024LYO20011.
Due to its social, environmental, and economic importance, mobility plays a key role in urban landscapes. In particular, public transportation is critical to the smooth functioning of cities, so public transportation systems must be planned to operate properly and efficiently. To this end, it is of paramount importance to have a thorough knowledge of mobility demand, especially in an evolving world. The world today is facing significant demographic growth along with urban sprawl, which implies an increasing demand for transportation in cities. In addition, travel patterns are diversifying and becoming less regular, mainly due to the emergence of new modes of transport. The data traditionally used for public transportation planning are inadequate to reflect these changes in mobility behaviors. The development of information technologies, digitization, and the data science boom can bring interesting benefits to the forecasting of transport demand. The development of new tools and algorithms, such as artificial intelligence, contributes to the diversification and sophistication of models for predicting mobility behaviors. In parallel, we are currently witnessing a diversification of the data sources used in mobility analyses. Among them, Wi-Fi data are very promising. These data have significant advantages when used in transportation planning (they provide information on Origin-Destination trips, and they are collected continuously and passively…). However, Wi-Fi data also have drawbacks and require further processing before they can be used in demand forecasting models. Because this is a new way of collecting mobility data, questions remain about data quality, the data's contribution, and how they can be used. The objective of this thesis is to provide a data-driven approach to the use of Wi-Fi data for the study of mobility behaviors, and we therefore propose solutions for processing this promising data source.
A methodology is presented to filter the parasitic signals detected by Wi-Fi sensors, in order to keep only passenger signals and construct relevant Origin-Destination matrices. Scaling of the Wi-Fi data, to avoid errors in the predicted total number of trips due to undetected Wi-Fi devices, is also handled. In the end, we provide Origin-Destination matrices that are faithful to the structure of the trips and complete in trip volumes. In addition, we propose a model to quantify the error between the Origin-Destination matrix produced from Wi-Fi data and the real Origin-Destination trips, despite the non-continuous availability of the latter. Some applications of the use of Wi-Fi data are also presented. In conclusion, the results of this thesis show that interesting insights into mobility behaviors can be derived from Wi-Fi data, continuously and at low cost.
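The filtering and scaling steps described above can be sketched as follows. The dwell-time criterion, the threshold, and all names are our illustrative assumptions, not the thesis's actual methodology; the key ideas are dropping non-passenger signals and rescaling the OD total against an external ridership count.

```python
import numpy as np

def filter_dwellers(detections, max_dwell_s=1800.0):
    """Keep only devices whose presence time is plausible for a passenger.

    `detections` maps a device identifier to (first_seen, last_seen)
    timestamps in seconds; devices present longer than the threshold
    (e.g. staff phones, fixed electronics) are treated as parasitic."""
    return {dev: (t0, t1) for dev, (t0, t1) in detections.items()
            if t1 - t0 <= max_dwell_s}

def scale_od_matrix(od_wifi, total_ridership):
    """Rescale a Wi-Fi-derived OD matrix so its total matches an external
    ridership count (e.g. ticketing data), preserving the OD structure."""
    od_wifi = np.asarray(od_wifi, dtype=float)
    return od_wifi * (total_ridership / od_wifi.sum())

kept = filter_dwellers({"dev-a": (0.0, 120.0), "dev-b": (0.0, 7200.0)})
od = scale_od_matrix([[10.0, 5.0], [3.0, 2.0]], 40.0)
```

Uniform scaling corrects the total trip volume (devices without Wi-Fi enabled are never detected) while leaving the relative OD structure, which is what the Wi-Fi data capture well, untouched.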
Najjari, Hamza. "Power Amplifier Design Based on Electro-Thermal Considerations." Thesis, Bordeaux, 2019. http://www.theses.fr/2019BORD0422.
The aim of this work is to design a power amplifier based on electrothermal considerations. It describes the Dynamic Error Vector Magnitude (DEVM) challenge and the long-packet issue that arise when designing a power amplifier with heterojunction bipolar transistors. Based on the circuit's electrothermal behavior, an optimization method for both static and dynamic linearity is proposed. A complete RF front-end (PA + coupler + switch + LNA) is designed for the latest WLAN standard, Wi-Fi 6. The dynamic temperature distribution in the circuit is analyzed, and its impact on performance is quantified. Finally, a programmable temperature-dependent bias is designed to compensate for performance degradation. Measurements show a significant linearity improvement with this compensation, allowing the PA to maintain a DEVM lower than -47 dB at 14.5 dBm output power over a large ambient temperature range, from -40 °C to 85 °C.
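The EVM figure this abstract optimizes is, in its static form, the mean error-vector power relative to the mean reference-constellation power, expressed in dB. A minimal sketch follows; dynamic EVM additionally gates the measurement over packet bursts (the self-heating effect studied in the thesis), which is not modeled here.

```python
import numpy as np

def evm_db(reference, measured):
    """Error Vector Magnitude in dB: mean error-vector power relative to
    mean reference-constellation power."""
    reference = np.asarray(reference, dtype=complex)
    measured = np.asarray(measured, dtype=complex)
    err_power = np.mean(np.abs(measured - reference) ** 2)
    ref_power = np.mean(np.abs(reference) ** 2)
    return float(10.0 * np.log10(err_power / ref_power))

# A pure 1 % gain error on a QPSK constellation gives an EVM of -40 dB
ref = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j])
evm = evm_db(ref, ref * 1.01)
```

A -47 dB DEVM target, as quoted in the abstract, corresponds to an RMS error vector of about 0.45 % of the reference magnitude, which is why thermal drift of gain and phase over a packet matters so much.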
Fontana, Ilaria. "Interface problems for dam modeling." Thesis, Université de Montpellier (2022-….), 2022. http://www.theses.fr/2022UMONS020.
Engineering teams often use finite element numerical simulations for the design, study, and analysis of the behavior of large hydraulic structures. For concrete structures, models of increasing complexity must be able to take into account the nonlinear behavior of discontinuities at the various interfaces located in the foundation, in the body of the dam, or between structure and foundation. Besides representing the nonlinear mechanical behavior of these interfaces (rupture, sliding, contact), one should also be able to account for hydraulic flow through these openings. In this thesis, we first focus on interface behavior modeling, which we address through the Cohesive Zone Model (CZM). This model has been introduced in various finite element codes (with joint elements), and it is a relevant approach for describing the physics of cracking and friction problems at the level of geometrical discontinuities. Although the CZM was initially introduced to account for rupture, we show in this thesis that it can be extended to sliding problems, possibly relying on an elasto-plastic formalism coupled with damage. In addition, nonlinear hydro-mechanical constitutive relations can be introduced to model crack opening and the coupling with fluid flow laws. At the mechanical level, we work in the Standard Generalized Materials (SGM) framework, which provides a class of models that automatically satisfy certain thermodynamic principles while having good mathematical and numerical properties, useful for robust numerical modeling. We adapt the formalism of volumetric SGM to the description of interface zones.
In this first part of the thesis, we present our developments under the SGM hypotheses adapted to the CZM, capable of reproducing the physical phenomena observed experimentally: rupture, friction, adhesion. In practice, the nonlinear behavior of interface zones is dominated by the presence of contact, which generates significant numerical difficulties for the convergence of finite element computations. The development of efficient numerical methods for the contact problem is thus a key step toward robust industrial numerical simulators. Recently, the weak enforcement of contact conditions à la Nitsche has been proposed as a means to reduce numerical complexity. This technique has several advantages, of which the most important for our work are: 1) it can handle a wide range of conditions (slip with or without friction, non-interpenetration, etc.); 2) it lends itself to a rigorous a posteriori error analysis. In this work, this scheme based on weak contact conditions is the starting point for a posteriori error estimation via equilibrated stress reconstruction. This analysis is then used to estimate the different error components (e.g., spatial, nonlinear) and to develop an adaptive resolution algorithm, as well as stopping criteria for iterative solvers and automatic tuning of possible numerical parameters. The main goal of this thesis is thus to make the finite element numerical simulation of structures with geometrical discontinuities robust. We address this question from two angles: on one side, we revisit existing methods for crack representation, working on the mechanical constitutive relation for joints; on the other, we introduce a new a posteriori method for the contact problem and propose its adaptation to generic interface models.
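For reference, Nitsche-type weak enforcement of frictionless unilateral contact typically starts from a reformulation of the contact conditions as a single nonlinear equation, in the spirit of Chouly and Hild. The notation below is a schematic sketch, not the thesis's exact formulation: $u_n$ is the normal displacement (gap), $\sigma_n$ the normal stress, $\gamma > 0$ a stabilization parameter, and $[x]_+ = \max(x, 0)$.

```latex
% KKT contact conditions (non-penetration, compressive stress, complementarity)
u_n \le 0, \qquad \sigma_n(u) \le 0, \qquad \sigma_n(u)\, u_n = 0
% are equivalent, for any \gamma > 0, to the single nonlinear equation
\sigma_n(u) = -\frac{1}{\gamma}\left[\, u_n - \gamma\, \sigma_n(u) \,\right]_{+}
```

This equation is then imposed weakly in the variational formulation rather than through Lagrange multipliers, which is what removes the inequality constraints and makes the scheme amenable to a posteriori error analysis.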
Hoffmann, Sabine. "Approche hiérarchique bayésienne pour la prise en compte d’erreurs de mesure d’exposition chronique et à faible doses aux rayonnements ionisants dans l’estimation du risque de cancers radio-induits : Application à une cohorte de mineurs d’uranium." Thesis, Université Paris-Saclay (ComUE), 2017. http://www.theses.fr/2017SACLS532/document.
In radiation epidemiology, exposure measurement error and uncertain input parameters in the calculation of absorbed organ doses are among the most important sources of uncertainty in modelling the health effects of ionising radiation. As the structures of exposure and dose uncertainty arising in occupational cohort studies may be complex, these uncertainty components are only rarely accounted for in this domain. However, when exposure measurement error is not, or only poorly, accounted for, it may lead to biased risk estimates, a loss of statistical power, and a distortion of the exposure-response relationship. The aim of this work was to promote the use of the Bayesian hierarchical approach to account for exposure and dose uncertainty in the estimation of the health effects associated with exposure to ionising radiation in occupational cohorts. More precisely, we proposed several hierarchical models and conducted Bayesian inference for these models in order to obtain corrected risk estimates for the association between exposure to radon and its decay products and lung cancer mortality in the French cohort of uranium miners. The hierarchical approach, based on the combination of sub-models linked via conditional independence assumptions, provides a flexible and coherent framework for modelling complex phenomena that may be prone to multiple sources of uncertainty. In order to compare the effects of shared and unshared exposure uncertainty on risk estimation and on the exposure-response relationship, we conducted a simulation study in which we assumed complex and potentially time-varying error structures likely to arise in an occupational cohort study. We elicited informative prior distributions for the average breathing rate, an important input parameter in the calculation of absorbed lung dose, based on the knowledge of three experts on conditions in French uranium mines.
In this context, we implemented and compared three approaches for combining expert opinion. Finally, Bayesian inference for the different hierarchical models was conducted via a Markov chain Monte Carlo algorithm implemented in Python, to obtain corrected estimates of the risk of lung cancer mortality associated with exposure to radon and its progeny in the French cohort of uranium miners.
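A simple simulation illustrates the attenuation bias that motivates such corrections: with classical additive error of the same variance as the true exposure, the naive regression slope is roughly halved. This is an illustrative sketch only; the thesis's hierarchical models instead treat the true exposure as latent and infer it within the Bayesian model.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000
true_beta = 0.5

x_true = rng.normal(0.0, 1.0, n)           # true (unobserved) exposure
x_obs = x_true + rng.normal(0.0, 1.0, n)   # classical additive measurement error
y = true_beta * x_true + rng.normal(0.0, 0.1, n)

# Naive regression on the error-prone exposure: slope = cov(x_obs, y) / var(x_obs)
cov = np.cov(x_obs, y)
beta_naive = cov[0, 1] / cov[0, 0]
# Expected attenuation factor: var(x_true) / (var(x_true) + var(error)) = 0.5,
# so beta_naive should land near 0.25 rather than the true 0.5
```

Shared (systematic) error components, studied in the simulation part of the thesis, behave differently from this unshared case and can distort the exposure-response shape rather than simply attenuating it.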
Turbis, Pascal. "Modèles de flammelette en combustion turbulente avec extinction et réallumage : étude asymptotique et numérique, estimation d’erreur a posteriori et modélisation adaptative." Thèse, 2011. http://hdl.handle.net/1866/4916.
We are interested here in the modeling errors of subgrid flamelet models in non-premixed turbulent combustion. The goal of this thesis is to develop an a posteriori error estimation strategy to determine the best model within a hierarchy, with a numerical cost at most that of using the models in the first place. Firstly, we develop and test a dual-weighted residual estimator strategy on a system of advection-diffusion-reaction equations. Secondly, we test that methodology on another system of equations, where quenching and ignition effects are added. In the absence of advection, a rigorous asymptotic analysis shows the existence of many combustion regimes already observed in numerical simulations. We obtain approximations of the quenching and ignition parameters, alongside the S-shaped curve, a plot of the maximal flame temperature as a function of the Damköhler number, consisting of three branches and two bends. When advection effects are added, we still obtain an S-shaped curve corresponding to the known combustion regimes. We compare the modeling errors of the asymptotic approximations in the two stable regimes and establish new model hierarchies for each combustion regime. These errors are compared with the estimates obtained using the error estimation strategy. When only one stable combustion regime exists, the error estimator correctly identifies that regime; when two or more regimes are possible, it gives a systematic way of choosing one. For regimes where more than one model is appropriate, the hierarchy predicted by the error estimator is correct.
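The dual-weighted residual (DWR) strategy named in this abstract estimates the error in a goal functional $J$ by weighting the residual of the approximate solution with an adjoint solution. Schematically, in a standard textbook form (not the thesis's exact notation), with $a(\cdot)(\cdot)$ the semilinear form, $\ell$ the load functional, $u_h$ the discrete solution, and $z$ the solution of the linearized adjoint problem:

```latex
J(u) - J(u_h) \;\approx\; \rho(u_h)(z), \qquad
\rho(u_h)(v) \;=\; \ell(v) - a(u_h)(v)
```

In practice $z$ is itself approximated numerically, and the weighted residual is localized element by element to drive mesh adaptivity or, as here, to rank the models of a hierarchy by their contribution to the error in the quantity of interest.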