Theses / dissertations on the topic "Méthode ComBat"
Create an accurate reference in APA, MLA, Chicago, Harvard, and other styles
Consult the 17 best theses / dissertations for research on the topic "Méthode ComBat".
Next to each source in the list of references there is an "Add to bibliography" button. Click it and we will automatically generate the bibliographic citation of the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.
You can also download the full text of the scientific publication in .pdf format and read the abstract of the work online, if it is available in the metadata.
Browse theses / dissertations from a wide variety of scientific fields and compile an accurate bibliography.
Li, Yingping. "Artificial intelligence and radiomics in cancer diagnosis". Electronic Thesis or Diss., université Paris-Saclay, 2022. http://www.theses.fr/2022UPASG053.
Artificial intelligence (AI) has been widely used in AI-assisted diagnosis, treatment, and personalized medicine. This manuscript focuses on the application of artificial intelligence methods, including deep learning and radiomics, to cancer diagnosis. First, effective image segmentation is essential for cancer diagnosis and further radiomics-based analysis. We proposed a new approach for automatic lesion segmentation in ultrasound images, based on a multicentric, multipathology dataset covering different types of cancers. By introducing group convolutions, we proposed a lightweight U-net network without sacrificing segmentation performance. Second, we processed clinical Magnetic Resonance Imaging (MRI) images to noninvasively predict the glioma subtype as defined by tumor grade, isocitrate dehydrogenase (IDH) mutation, and 1p/19q codeletion status, proposing a radiomics-based approach. The prediction performance improved significantly when different settings in the radiomics pipeline were tuned, and the characteristics of the radiomic features that best distinguish the glioma subtypes were also analyzed. This work not only provided a radiomics pipeline that works well for predicting the glioma subtype, but also contributed to model development and interpretability. Third, we tackled the challenge of reproducibility in radiomics methods. We investigated the impact of different image preprocessing and harmonization methods (including intensity normalization and ComBat harmonization) on radiomic feature reproducibility in MRI radiomics. The conclusion was that the ComBat method is essential to remove the nonbiological variation caused by different image acquisition settings (namely, scanner effects) and to improve feature reproducibility in radiomics studies. Meanwhile, intensity normalization is also recommended because it leads to more comparable MRI images and more robust harmonization results. Finally, we investigated improving the ComBat harmonization method by changing its assumption to the very common case in which scanner effects differ between classes (such as tumors and normal tissues). Although the proposed model yielded disappointing results, most likely due to the lack of sufficient constraints to identify the parameters, it paved the way for the development of new harmonization methods.
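The location/scale adjustment at the heart of ComBat can be illustrated with a minimal sketch. This is a simplification, not the thesis's implementation: the function name `combat_simple` is hypothetical, and the empirical-Bayes shrinkage of the real ComBat (which pools information across features) is deliberately omitted.

```python
import numpy as np

def combat_simple(X, batch):
    """Simplified ComBat: remove per-batch (scanner) location/scale effects.

    X: (n_samples, n_features) feature matrix; batch: (n_samples,) labels.
    No empirical-Bayes shrinkage, unlike the full ComBat method.
    """
    X = np.asarray(X, dtype=float)
    grand_mean = X.mean(axis=0)
    pooled_sd = X.std(axis=0, ddof=1)
    Z = (X - grand_mean) / pooled_sd            # standardize each feature
    out = np.empty_like(Z)
    for b in np.unique(batch):
        idx = batch == b
        gamma = Z[idx].mean(axis=0)             # batch location effect
        delta = Z[idx].std(axis=0, ddof=1)      # batch scale effect
        out[idx] = (Z[idx] - gamma) / delta     # remove the scanner effect
    return out * pooled_sd + grand_mean         # restore the original scale
```

After harmonization, every batch shares the same per-feature mean, so a constant scanner offset added to one batch is removed.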
Masson, Anne-Sophie. "Le droit de la guerre confronté aux nouveaux conflits asymétriques : généralisation à partir du conflit Afghan (2001-2013)". Thesis, Normandie, 2017. http://www.theses.fr/2017NORMLH03.
The Afghan war (since 2001) may be seen as a new asymmetric conflict. It has all the characteristics of earlier asymmetric conflicts except territoriality, which has been replaced by ideology; the battlefield has therefore shifted to cognitive warfare. The distinction between war and peace has become so thin that it is now impossible to delimit the law of war by the intensity of a conflict or by the involvement of several states. The law of war, for lack of adaptation, has ceased to ease the recovery of peace and has become a hindrance in combat. As a consequence, some combatants have been tempted to use forbidden methods of combat, whose effects have been widely publicized and have contributed to the legitimacy crisis of Western states (calling into question the division of the world into sovereign states). The failure to settle such conflicts could lead to a worldwide civil war, unless the law of war is harmonized through a universal core of rights binding on states and on new international actors, which a "World Parliament" could protect. Furthermore, the moral integrity of combatants is expected; it should be reflected in military law and in their position within civil society.
Lamouroux, Raphaël. "Méthodes compactes d’ordre élevé pour les écoulements présentant des discontinuités". Thesis, Toulouse, ISAE, 2016. http://www.theses.fr/2016ESAE0035/document.
Following the recent development of high-order compact schemes such as discontinuous Galerkin or spectral differences, this thesis investigates the issues encountered in the simulation of discontinuous flows. High-order compact schemes use polynomial representations, which tend to introduce spurious oscillations around discontinuities that can lead to computational failure. To prevent these numerical issues, the scheme must be supplemented with a procedure that detects and controls its behaviour in the neighbourhood of discontinuities, usually referred to as a limiting procedure or limiter. The most common limiters include the WENO procedure, TVB schemes, and artificial viscosity. All of these solutions have already been adapted to high-order compact schemes, but none of them takes real advantage of the richness offered by the polynomial structure; moreover, the original compactness of the scheme is generally deteriorated and losses of scalability can occur. This thesis investigates the concept of a compact limiter based on the polynomial structure of the solution. A one-dimensional study allows us to define algebraic projections that can be used as a high-order tool for the limiting procedure. The extension of this methodology is then evaluated through the simulation of several 2D and 3D test cases. These results were obtained with a parallel solver developed on top of an existing unstructured finite-volume CFD code. The studies presented culminate in the numerical simulation of a shock/turbulent-boundary-layer interaction.
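The thesis pursues a polynomial-projection limiter; as a generic illustration of what a limiting procedure does (not the method developed in the thesis), here is the classic minmod slope limiter, which returns a zero slope at local extrema so that the reconstruction cannot create new oscillations:

```python
import numpy as np

def minmod(a, b):
    """Return 0 at extrema (opposite signs), else the smaller-magnitude slope."""
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def limited_slopes(u):
    """Cell-wise limited slopes for a 1D array of cell averages."""
    du = np.diff(u)  # one-sided differences between neighbouring cells
    return np.array([minmod(du[i], du[i + 1]) for i in range(len(u) - 2)])
```

At the peak of a profile the left and right differences have opposite signs, so the limited slope vanishes and the spurious overshoot is suppressed.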
Andres, Nicolas. "Optimisation de la chaîne d'analyse MBTA et développement d'une méthode d'étalonnage de la réponse fréquentielle du détecteur d'onde gravitationnelle Virgo". Electronic Thesis or Diss., Chambéry, 2023. http://www.theses.fr/2023CHAMA029.
The LIGO-Virgo collaboration marked the beginning of gravitational-wave astronomy by providing direct evidence of gravitational waves in September 2015. The detection of a gravitational wave from a binary black-hole merger led to a Nobel Prize in Physics. The field has grown considerably since, with each discovery enabling advances in disciplines such as astrophysics, cosmology, and fundamental physics. At the end of each observation period, the detectors are stopped and many aspects are improved. This work is part of the preparation phase between runs O3 and O4 (beginning in May 2024), configuring the interferometers in their advanced states in order to optimize their sensitivities. Calibration then becomes crucial for accurately reconstructing the signal containing the gravitational-wave information, enabling detection and the production of scientific results such as the measurement of the Hubble constant. Instrumentation work was carried out to allow an accurate and regular measurement of the time stamping (timing) of the readout chain of the interferometer signal, which must be known to better than 0.01 ms for the purpose of a joint analysis of the detector network's data. Many calibration devices of the interferometer rely on the reading of control signals by photodetectors whose frequency response had been assumed to be flat. To avoid introducing bias into the reconstruction of the signal, a measurement method must be developed for the frequency calibration of each photodetector involved; two methods are compared for use in the O5 period. In addition, the increasing sensitivity of the detectors means more detections, and the collaboration's analysis chains need to follow the instrumental improvements by developing new tools to optimize real-time and offline signal searches.
The MBTA low-latency analysis chain is one of the four collaboration analysis pipelines focusing on the search for compact binary coalescences, combining independent analyses of the data from all three detectors. It has many powerful noise-rejection tools but does not take any astrophysical information into account a priori. Through the accumulation of data over previous observation periods, the collaboration was able to establish more accurate mass-distribution models for compact-binary-coalescence populations. During my thesis, the MBTA team developed a new tool using this information, aimed at estimating the probability that an event is of astrophysical origin and at classifying the nature of the astrophysical source. This tool ultimately made it possible to restructure the global analysis chain by using it as the main parameter for classifying events by level of significance. The collaboration produces low-latency public alerts for multi-messenger astronomy, providing information on detected signals common to the different analysis pipelines. As the preferences of the different partner experiments of the LIGO-Virgo collaboration for the parameters that optimize multi-messenger detections were not known in advance, it was decided to test another way of incorporating similar astrophysical information into the MBTA analysis chain. A technique for including astrophysical information directly in the ranking statistic that orders candidate events by significance level is presented. This method improves the search by better discriminating astrophysical events from background noise: over the O3 observation period, it increases the number of MBTA detections by 10%, detections that were confirmed by the other analysis pipelines.
Bouhaya, Lina. "Optimisation structurelle des gridshells". Phd thesis, Université Paris-Est, 2010. http://tel.archives-ouvertes.fr/tel-00583409.
Hocine, Farida. "Approximation spectrale d'opérateurs". Saint-Etienne, 1993. http://www.theses.fr/1993STET4007.
Grimich, Karim. "Schémas compacts basés sur le résidu d'ordre élevé pour des écoulements compressibles instationnaires. Application à de la capture de fines échelles". Thesis, Paris, ENSAM, 2013. http://www.theses.fr/2013ENAM0033/document.
Computational Fluid Dynamics (CFD) solvers have reached maturity in terms of solution accuracy as well as computational efficiency. However, progress remains to be made for unsteady flows, especially those governed by large coherent structures: for such flows, current CFD solvers do not provide accurate solutions unless very fine meshes are used. Moreover, high accuracy is a crucial requirement for advanced turbulence simulation strategies such as Large Eddy Simulation (LES). In order to apply high-order methods to complex unsteady flows, several issues need to be addressed, among which numerical robustness and the capability of handling complex geometries. In the present work, we study a family of compact approximations that provide high accuracy not for each space derivative treated separately but for the complete residual r, i.e. the sum of all the terms in the governing equations. For steady problems solved by time marching, r is the residual at steady state and involves space derivatives only; for unsteady problems, r also includes the time derivative. Schemes of this type are referred to as Residual-Based Compact (RBC). Specifically, we design high-order finite-difference RBC schemes for unsteady compressible flows and provide a comprehensive study of their dissipation properties. The dissipation and dispersion errors introduced by RBC schemes are investigated to quantify their capability of resolving a given wavelength with a minimal number of grid points. The capability of RBC dissipation to drain energy only at small, ill-resolved scales is also discussed in view of applying RBC schemes to implicit LES (ILES). Finally, RBC schemes are extended to the Finite Volume (FV) framework in order to handle complex geometries: a high-order accuracy-preserving FV formulation of the third-order RBC scheme for general irregular grids is presented and analysed. Numerical applications, including complex Reynolds-Averaged Navier-Stokes unsteady simulations of turbomachinery flows and ILES of turbulent flows dominated by coherent-structure dynamics or decay, support the theoretical results.
Outtier, Pierre-Yves. "Architecture novatrice de code dynamique : application au développement d'un solveur compact d'ordre élevé pour l'aérodynamique compressible dans des maillages recouvrants". Thesis, Paris, ENSAM, 2014. http://www.theses.fr/2014ENAM0029.
High-order numerical schemes are usually restricted to research applications involving highly complex physical phenomena but simple geometries and regular Cartesian or mildly deformed meshes. A demand exists for a new generation of industrial codes of increased accuracy. In this work, we address the general question of how to design a CFD code architecture that can take into account a variety of possibly geometrically complex configurations; remains simple and modular enough to facilitate the introduction and testing of new ideas (numerical methods, models) with minimal development effort; and uses high-order numerical discretizations and advanced physical models. This required innovative choices in terms of programming languages, data structure and storage, and code architecture, going beyond the mere development of a specific family of numerical schemes. A solution mixing the Python and Fortran languages is proposed, with details on the concepts underlying the code architecture. The numerical methods are validated on test cases of increasing complexity, demonstrating at the same time the variety of physics and geometry currently achievable with DynHoLab. Then, based on this computational framework, the work presents a way to handle complex geometries while increasing the order of accuracy of the numerical methods: in order to apply high-order RBC schemes to complex geometries, the strategy consists of a multi-domain implementation on overlapping structured meshes.
Gérald, Sophie. "Méthode de Galerkin Discontinue et intégrations explicites-implicites en temps basées sur un découplage des degrés de liberté : Applications au système des équations de Navier-Stokes". Phd thesis, Université Pierre et Marie Curie - Paris VI, 2013. http://tel.archives-ouvertes.fr/tel-00943621.
Texto completo da fonteZnidarcic, Anton. "Un nouvel algoritme pour la simulation DNS et LES des ecoulements cavitants". Thesis, Paris, ENSAM, 2016. http://www.theses.fr/2016ENAM0056/document.
Cavitation-turbulence interactions are a problematic aspect of cavitating flows and impose limitations on the development of better cavitation and turbulence models. DNS simulations with a homogeneous-mixture approach are proposed to overcome this and to offer more insight into the phenomena. As DNS simulations are highly demanding and a variety of cavitation models exists, a tool devoted specifically to them is needed. Such tools usually demand highly accurate discretization schemes, direct solvers, and multi-domain methods enabling good scaling of the codes. As typical cavitating-flow geometries constrain the choice of discretization methods, compact finite differences offer the most appropriate discretization tool. The need for fast solvers and good code scalability leads to the requirement for an algorithm capable of stable and accurate cavitating-flow simulations in which the implicitly treated variables are multiplied only by constant coefficients. A novel algorithm with this property was developed in the scope of this work by introducing the Concus and Golub method into projection methods, through which the governing equations of homogeneous-mixture modeling of cavitating flows can be solved. The work also proposes a new and effective approach for verifying new and existing algorithms on the basis of the Method of Manufactured Solutions.
Zenoni, Florian. "Study of Triple-GEM detectors for the CMS muon spectrometer upgrade at LHC and study of the forward-backward charge asymmetry for the search of extra neutral gauge bosons". Doctoral thesis, Universite Libre de Bruxelles, 2016. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/229354.
This PhD thesis takes place within the CMS experiment at CERN's Large Hadron Collider (LHC). The LHC enabled the discovery of the Brout-Englert-Higgs boson in 2012 and is designed to run for at least 20 years with increasing luminosity, reaching by 2025 a value of 7.5 x 10^34 cm^-2 s^-1, a yield five times greater than initially intended. As a consequence, the experiments must adapt and upgrade many of their components and particle detectors. One of the foreseen upgrades of the CMS experiment concerns the Triple Gas Electron Multiplier (GEM) detectors, currently in development for the forward muon spectrometer. These detectors will be installed in CMS during the second long LHC shutdown (LS2), in 2018-2019. The aim of this upgrade is to better control the Level-1 event trigger rate for muon detection, thanks to the high performance of the Triple GEM detectors in the presence of very high particle rates (>1 kHz/cm^2). Moreover, thanks to its excellent spatial resolution (~250 um), the GEM technology can improve muon track reconstruction and the identification capability of the forward detector. The goal of my research is to estimate the sensitivity of Triple GEMs to the hostile background radiation in CMS, essentially made of neutrons and photons generated by the interaction between the particles and the CMS detectors. The accurate evaluation of this sensitivity is very important, as an underestimation could have ruinous effects on the Triple GEM efficiency once the detectors are installed in CMS. To validate my simulations, I reproduced experimental results obtained with similar detectors already installed in CMS, such as the Resistive Plate Chambers (RPC). The second part of my work concerns the study of the CMS experiment's capability to discriminate between different models of new physics predicting the existence of neutral vector bosons called Z'. These models belong to plausible extensions of the Standard Model.
In particular, the analysis focuses on simulated samples in which the Z' decays into two muons, and on the impact that the Triple GEM detector upgrades will have on these measurements during the high-luminosity phase of the LHC, called Phase II. My simulations show that more than 20% of the simulated events have at least one muon in the CMS pseudo-rapidity (eta) region covered by the Triple GEM detectors. Preliminary results show that, for 3 TeV/c^2 models, it will already be possible at the end of Phase I to discriminate a Z'_I from a Z'_SSM at a significance level alpha > 3 sigma.
Doctorate in Sciences
Ghaffari, Dehkharghani Seyed Amin. "Simulations numériques d’écoulements incompressibles interagissant avec un corps déformable : application à la nage des poissons". Thesis, Aix-Marseille, 2014. http://www.theses.fr/2014AIXM4369/document.
We present an efficient algorithm for the simulation of deformable bodies interacting with two-dimensional incompressible flows. The temporal and spatial discretizations of the Navier-Stokes equations in vorticity stream-function formulation are based on the classical fourth-order Runge-Kutta scheme and compact finite differences, respectively. Using a uniform Cartesian grid, we benefit from a new fourth-order direct solver for the Poisson equation, ensuring the incompressibility constraint down to machine zero over an optimal grid. To introduce a deformable body into the fluid flow, the volume penalization method is used. A Lagrangian structured grid with prescribed motion covers the deformable body, which interacts with the surrounding fluid through the hydrodynamic forces and torque calculated on the Eulerian reference grid. An efficient law for controlling the curvature of an anguilliform fish swimming toward a prescribed goal is proposed, based on the geometrically exact theory of nonlinear beams and on quaternions. Validation of the developed method shows the efficiency and expected accuracy of the algorithm for fish-like swimming and for a variety of fluid/solid interaction problems.
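The classical fourth-order Runge-Kutta time discretization mentioned above can be sketched generically (this is the textbook scheme, not the authors' solver):

```python
import numpy as np

def rk4_step(f, t, y, dt):
    """One classical fourth-order Runge-Kutta step for dy/dt = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + dt / 2, y + dt / 2 * k1)   # slope at the midpoint, using k1
    k3 = f(t + dt / 2, y + dt / 2 * k2)   # slope at the midpoint, using k2
    k4 = f(t + dt, y + dt * k3)           # slope at the end of the step
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
```

Integrating dy/dt = -y over [0, 1] with ten such steps reproduces exp(-1) to roughly six digits, consistent with the scheme's fourth-order global accuracy.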
Chevillon, Nicolas. "Etude et modélisation compacte du transistor FinFET ultime". Phd thesis, Université de Strasbourg, 2012. http://tel.archives-ouvertes.fr/tel-00750928.
Texto completo da fonteBouffanais, Yann. "Bayesian inference for compact binary sources of gravitational waves". Thesis, Sorbonne Paris Cité, 2017. http://www.theses.fr/2017USPCC197/document.
The first detection of gravitational waves in 2015 opened a new window onto the astrophysics of compact binaries. Thanks to the data taken by the ground-based detectors Advanced LIGO and Advanced Virgo, it is now possible to constrain the physical parameters of compact binaries using a full Bayesian analysis, increasing our physical knowledge of these systems. However, performing such analyses requires efficient algorithms both to search for the signals and for parameter estimation. The main part of this thesis is dedicated to the implementation of a Hamiltonian Monte Carlo (HMC) algorithm suited to the parameter estimation of gravitational waves emitted by compact binaries composed of neutron stars. The algorithm was tested on a selection of sources and produced better performance than other MCMC methods such as Metropolis-Hastings and Differential Evolution Monte Carlo. Implementing the HMC algorithm in the data-analysis pipelines of the LIGO/Virgo collaboration could greatly increase the efficiency of parameter estimation. It could also drastically reduce the computation time associated with the parameter estimation of such gravitational-wave sources, which will be of particular interest in the near future, when the ground-based network of gravitational-wave detectors will produce many detections. Another part of this work is dedicated to the implementation of a search algorithm for gravitational-wave signals emitted by monochromatic compact binaries as observed by the space-based detector LISA. The developed algorithm is a mixture of several evolutionary algorithms, including Particle Swarm Optimisation. It was tested on several test cases and was able to find all the sources buried in a signal. Furthermore, the algorithm was able to find sources over a frequency band as wide as 1 mHz, which had not been done at the time of this thesis.
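A minimal Hamiltonian Monte Carlo transition, with leapfrog integration and a Metropolis accept/reject step, can be sketched as follows. This illustrates the general technique only; the thesis's pipeline, likelihood, and tuning are far more elaborate, and the function name `hmc_step` is hypothetical.

```python
import numpy as np

def hmc_step(q, log_prob_grad, log_prob, eps=0.1, n_leapfrog=20, rng=None):
    """One HMC transition targeting exp(log_prob), with unit mass matrix."""
    rng = rng or np.random.default_rng()
    p = rng.standard_normal(q.shape)                 # sample auxiliary momentum
    q_new, p_new = q.copy(), p.copy()
    p_new += 0.5 * eps * log_prob_grad(q_new)        # initial momentum half-step
    for _ in range(n_leapfrog - 1):
        q_new += eps * p_new                         # full position step
        p_new += eps * log_prob_grad(q_new)          # full momentum step
    q_new += eps * p_new
    p_new += 0.5 * eps * log_prob_grad(q_new)        # final momentum half-step
    # Metropolis correction on the Hamiltonian H = -log_prob + kinetic energy
    h_old = -log_prob(q) + 0.5 * p @ p
    h_new = -log_prob(q_new) + 0.5 * p_new @ p_new
    if rng.random() < np.exp(min(0.0, h_old - h_new)):
        return q_new
    return q
```

Because proposals follow simulated Hamiltonian trajectories, distant moves are accepted with high probability, which is the source of HMC's advantage over random-walk Metropolis-Hastings noted in the abstract.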
Pujol, Hadrien. "Antennes microphoniques intelligentes : localisation de sources acoustiques par Deep Learning". Thesis, Paris, HESAM, 2020. http://www.theses.fr/2020HESAC025.
For my PhD thesis, I propose to explore supervised learning for the task of localizing acoustic sources. To do so, I developed a new deep-neural-network architecture. Optimizing the millions of learnable parameters of this network requires a large database of examples, so two complementary approaches are proposed to build it. The first is to carry out numerical simulations of microphone-array recordings. The second is to place a microphone array at the center of a sphere of loudspeakers, which makes it possible to spatialize sounds in 3D and to record directly on the array the signals emitted by this experimental 3D sound-field simulator. The neural network could thus be tested under different conditions, and its performance compared with that of conventional acoustic-source-localization algorithms. The results show that this approach yields generally more precise localization, and is also much faster, than conventional algorithms in the literature.
Dinh, Van Duong. "Strichartz estimates and the nonlinear Schrödinger-type equations". Thesis, Toulouse 3, 2018. http://www.theses.fr/2018TOU30247/document.
This dissertation is devoted to the study of linear and nonlinear aspects of the Schrödinger-type equations i∂_t u + |∇|^σ u = F, where |∇| = √(−Δ) and σ ∈ (0, ∞). When σ = 2, it is the well-known Schrödinger equation arising in many physical contexts such as quantum mechanics, nonlinear optics, quantum field theory, and Hartree-Fock theory. When σ ∈ (0, 2) \ {1}, it is the fractional Schrödinger equation, discovered by Laskin (see e.g. [Laskin 2000] and [Laskin 2002]) through the extension of the Feynman path integral from Brownian-like to Lévy-like quantum mechanical paths; this equation also appears in the water-wave model (see e.g. [Ionescu-Pusateri] and [Nguyen]). When σ = 1, it is the half-wave equation, which arises in the water-wave model (see [Ionescu-Pusateri]) and in gravitational collapse (see [Elgart-Schlein], [Fröhlich-Lenzmann]). When σ = 4, it is the fourth-order or biharmonic Schrödinger equation introduced by Karpman [Karpman] and by Karpman-Shagalov [Karpman-Shagalov], taking into account the role of a small fourth-order dispersion term in the propagation of intense laser beams in a bulk medium with Kerr nonlinearity. This thesis is divided into two parts. The first part studies Strichartz estimates for Schrödinger-type equations on manifolds, including flat Euclidean space, compact manifolds without boundary, and asymptotically Euclidean manifolds; these Strichartz estimates are known to be useful in the study of nonlinear dispersive equations at low regularity. The second part concerns nonlinear aspects such as local well-posedness, global well-posedness below the energy space, and blowup of rough solutions for nonlinear Schrödinger-type equations. [...]
Scipioni, Angel. "Contribution à la théorie des ondelettes : application à la turbulence des plasmas de bord de Tokamak et à la mesure dimensionnelle de cibles". Thesis, Nancy 1, 2010. http://www.theses.fr/2010NAN10125.
The necessary scale-based representation of the world leads us to explain why wavelet theory is the best-suited formalism; its performance is compared with other tools, namely R/S analysis and the empirical mode decomposition (EMD) method. The great diversity of analyzing bases in wavelet theory leads us to propose a morphological approach to the analysis. The study is organized into three parts. The first chapter is dedicated to the constituent elements of wavelet theory. We then show the surprising link between the concept of recurrence and scale analysis (Daubechies polynomials) by using Pascal's triangle, and a general analytical expression for the Daubechies filter coefficients is proposed from the polynomial roots. The second chapter covers the first application domain: edge plasmas of tokamak fusion reactors. We describe how, for the first time on experimental signals, the Hurst coefficient was measured with a wavelet-based estimator. Starting from fBm-like processes (fractional Brownian motion), we detail how we established an original model perfectly reproducing the joint fBm and fGn statistics that characterize magnetized plasmas. Finally, we point out the reasons why high values of the Hurst coefficient do not imply long-range correlations. The third chapter is dedicated to the second application domain, the analysis of the backscattered echo of an immersed target insonified by an ultrasonic plane wave. We explain how a morphological approach combined with scale analysis can extract the diameter information.