To see the other types of publications on this topic, follow the link: Reconstruction de la densité.

Dissertations / Theses on the topic 'Reconstruction de la densité'

Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Reconstruction de la densité.'

Next to every source in the list of references, there is an 'Add to bibliography' button. Click it, and we will automatically generate the bibliographic reference to the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever it is available in the metadata.

Browse dissertations / theses on a wide variety of disciplines and organise your bibliography correctly.

1

Yan, Zeyin. "Reconstruction de densité d'impulsion et détermination de la matrice densité réduite à un électron." Thesis, Université Paris-Saclay (ComUE), 2018. http://www.theses.fr/2018SACLC001/document.

Full text
Abstract:
High-resolution X-ray diffraction (XRD) and polarized neutron diffraction (PND) are commonly used to model charge and spin densities in position space. Additionally, Compton scattering (CS) and magnetic Compton scattering (MCS) are the main techniques for observing the most diffuse and the unpaired electrons, by providing the "directional Compton profiles" (DCPs) and "directional magnetic Compton profiles" (DMCPs), respectively. A set of such DCPs (DMCPs) can be used to reconstruct the two- or three-dimensional electron momentum density. Since all these techniques describe the same electrons in different space representations, we concentrate on associating the electron momentum density reconstructed from DCPs (resp. DMCPs) with the electron density refined from XRD (resp. PND) data. Confronting theory with experiment, or different experimental techniques with one another, generally requires representing the reconstructed densities in both position and momentum space. The challenge of comparing results obtained by ab initio computations and by experimental approaches (in the Nit(SMe)Ph case) shows the necessity of jointly refining multiple experiments and of improving theoretical models. It proves that, in the case of a spin-resolved electron density, a mere Hartree-Fock or DFT approach is not sufficient. In the YTiO3 case, a joint analysis of position and momentum space (PND & MCS) highlights a possible ferromagnetic pathway along Ti--O1--Ti. To this end, a "super-position" spin density is proposed and proves to allow cross-checking the coherence between experimental electron densities in position and momentum space, without recourse to ab initio results. Furthermore, an "isolated Ti model" based on PND-refined orbital coefficients emphasizes the importance of metal-oxygen coherent coupling to properly account for observations in momentum space. A one-electron reduced density matrix (1-RDM) approach is proposed as a fundamental basis for systematically combining position and momentum spaces. To reconstruct the 1-RDM from a periodic ab initio computation, an "iterative cluster" approach is proposed. On this basis, it becomes possible to obtain a theoretical spin-resolved 1-RDM along specific chemical bonding paths, which clarifies the difference between the Ti--O1--Ti and Ti--O2--Ti spin couplings in YTiO3. It shows that the interaction contributions between atoms (metal and oxygen) differ depending on whether the property is represented in position or momentum space. This is clearly observed in metal-oxygen chemical bonds and can be illustrated by an orbital-resolved contribution analysis. Quantities describing electrons in phase space, such as the Moyal function, can also be determined by this "cluster" construction, which might be of particular interest if Compton scattering at Bragg positions could be generalized. The preliminary results of a simple spin-resolved 1-RDM refinement model are presented. The model respects N-representability and accommodates various experimental data (e.g. XRD, PND, CS, MCS, XMD). The potential of this model is not limited to spin analysis, but its use here is restricted to the description of unpaired electrons; its limitations are identified and possible future improvements are proposed.
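Since each DCP is, in essence, a one-dimensional projection of the momentum density, the 2D reconstruction step mentioned above can be illustrated with a textbook filtered backprojection. This is only a minimal numpy sketch of that general idea, not the reconstruction code used in the thesis; the function name, uniform momentum grid and normalisation are illustrative assumptions.

```python
import numpy as np

def fbp_momentum_density(profiles, angles, q):
    """Filtered backprojection of directional Compton profiles onto a
    2D momentum-density map. profiles[i] samples J(q) along direction
    angles[i] (radians); q is a uniform momentum grid."""
    n = q.size
    freqs = np.fft.fftfreq(n, d=q[1] - q[0])
    ramp = np.abs(freqs)                      # Ram-Lak (ramp) filter
    px, py = np.meshgrid(q, q)                # 2D momentum grid
    rho = np.zeros((n, n))
    for J, theta in zip(profiles, angles):
        Jf = np.real(np.fft.ifft(np.fft.fft(J) * ramp))   # filter the profile
        s = px * np.cos(theta) + py * np.sin(theta)       # projected coordinate
        rho += np.interp(s, q, Jf)                        # backproject
    return rho * np.pi / len(angles)
```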
APA, Harvard, Vancouver, ISO, and other styles
2

Klose, Gerd. "Density matrix reconstruction of a large angular momentum." Diss., The University of Arizona, 2001. http://hdl.handle.net/10150/290012.

Full text
Abstract:
A complete description of the quantum state of a physical system is the fundamental knowledge necessary to statistically predict the outcome of measurements. Turning this statement around, Wolfgang Pauli asked as early as 1933 whether an unknown quantum state could be uniquely determined by appropriate measurements--a problem that has gained new relevance in recent years. In order to harness the prospects of quantum computing, secure communication, teleportation, and the like, the development of techniques to accurately control and measure quantum states has now become a matter of practical as well as fundamental interest. However, there is no general answer to Pauli's very basic question, and quantum state reconstruction algorithms have been developed and experimentally demonstrated only for a few systems so far. This thesis presents a novel experimental method to measure the unknown and generally mixed quantum state of an angular momentum of arbitrary magnitude. The (2F + 1) × (2F + 1) density matrix describing the quantum state is thereby completely determined from a set of Stern-Gerlach measurements with (4F + 1) different orientations of the quantization axis. This protocol is implemented for laser-cooled Cesium atoms in the 6S₁/₂(F = 4) hyperfine ground state manifold, and is applied to a number of test states prepared by optical pumping and Larmor precession. A comparison of the input and the measured states shows successful reconstructions with fidelities of about 0.95.
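The inversion behind such a protocol is linear: each Stern-Gerlach population is the trace of the density matrix against a projector onto a rotated |F, m⟩ state, so stacking the measured populations gives a least-squares problem for ρ. A minimal numpy/scipy sketch of this generic linear tomography, not the thesis's actual reconstruction code; the axis parametrisation and function names are assumptions.

```python
import numpy as np
from scipy.linalg import expm

def spin_ops(F):
    """Spin matrices Fx, Fy, Fz in the |F,m> basis, m = F..-F."""
    dim = int(2 * F + 1)
    m = F - np.arange(dim)
    Fm = np.diag(np.sqrt(F * (F + 1) - m[:-1] * (m[:-1] - 1)), -1)  # lowering operator
    return (Fm.T + Fm) / 2, (Fm.T - Fm) / 2j, np.diag(m)

def reconstruct_rho(F, axes, populations):
    """axes: list of (theta, phi) quantization axes; populations[k][m]:
    measured probability of outcome m along axes[k]."""
    Fx, Fy, Fz = spin_ops(F)
    dim = int(2 * F + 1)
    A, b = [], []
    for (theta, phi), pops in zip(axes, populations):
        R = expm(-1j * phi * Fz) @ expm(-1j * theta * Fy)  # rotate z onto (theta, phi)
        for m in range(dim):
            P = np.outer(R[:, m], R[:, m].conj())          # projector on rotated |m>
            A.append(P.T.reshape(-1))                      # Tr(rho P) = row . vec(rho)
            b.append(pops[m])
    A, b = np.array(A), np.asarray(b, dtype=complex)
    rho = np.linalg.lstsq(A, b, rcond=None)[0].reshape(dim, dim)
    rho = (rho + rho.conj().T) / 2                         # enforce Hermiticity
    return rho / np.trace(rho).real
```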
APA, Harvard, Vancouver, ISO, and other styles
3

Bianchetti, Morales Rennan. "Density profile reconstruction methods for extraordinary mode reflectometry." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0031/document.

Full text
Abstract:
The goal of this PhD is to improve the data analysis techniques of frequency-swept reflectometry for determining the density profile of fusion plasmas. There have been significant improvements in the last two decades in hardware design and signal extraction techniques, but the data analysis lags behind and requires further improvement to meet the standards required for continuous operation in future reactors. The improvements obtained in this thesis on the reconstruction of density profiles provide better accuracy in a shorter time, even in the presence of a density hole, enabling measurements of turbulence properties that are sufficiently precise to validate numerical models, and allowing real-time monitoring of the shape and position of the plasma.
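For orientation, in the simpler ordinary-mode case with a monotonic profile, the position of the cutoff layer follows from the measured group delay \( \tau_g(f) \) by a closed-form Abel inversion; the extraordinary-mode problem treated in this thesis requires an iterative generalisation of the same idea. Measuring the depth \( x_c \) of the cutoff layer from the plasma edge,

\[ x_c(F) = \frac{c}{\pi}\int_0^{F}\frac{\tau_g(f)}{\sqrt{F^{2}-f^{2}}}\,df , \]

where \( F \) is the probing frequency whose cutoff position is sought.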
APA, Harvard, Vancouver, ISO, and other styles
4

Khalaf, Reem. "Image reconstruction for optical tomography using photon density waves." Thesis, University of Hertfordshire, 1999. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.302304.

Full text
APA, Harvard, Vancouver, ISO, and other styles
5

Lehovich, Andre. "List-mode SPECT reconstruction using empirical likelihood." Diss., Tucson, Arizona : University of Arizona, 2005. http://etd.library.arizona.edu/etd/GetFileServlet?file=file:///data1/pdf/etd/azu%5Fetd%5F1098%5F1%5Fm.pdf&type=application/pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
6

Zhang, Yu. "Further application of hydroxyapatite reinforced high density polyethylene composite - skull reconstruction implants." Thesis, Queen Mary, University of London, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.414001.

Full text
APA, Harvard, Vancouver, ISO, and other styles
7

Tafas, Jihad. "An algorithm for two-dimensional density reconstruction in proton computed tomography (PCT)." CSUSB ScholarWorks, 2007. https://scholarworks.lib.csusb.edu/etd-project/3281.

Full text
Abstract:
The purpose of this thesis is to develop an optimized and effective iterative reconstruction algorithm, together with hardware acceleration methods that work in concert with it during reconstruction in proton computed tomography, which accurately maps the electron density.
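Iterative schemes of this kind are typically variants of the algebraic reconstruction technique (ART), in which the density image is updated ray by ray from the measured proton path integrals. A minimal sketch of plain ART (Kaczmarz iteration) assuming a precomputed system matrix; it is illustrative, not the thesis's algorithm.

```python
import numpy as np

def art(A, b, n_iter=10, lam=0.5):
    """Algebraic Reconstruction Technique for A x = b.
    A[i] holds the intersection lengths of proton path i with each
    pixel; b[i] is the corresponding measured path integral."""
    x = np.zeros(A.shape[1])
    row_norm2 = (A * A).sum(axis=1)
    for _ in range(n_iter):
        for i in np.where(row_norm2 > 0)[0]:
            # project x onto the hyperplane of ray i, relaxed by lam
            x += lam * (b[i] - A[i] @ x) / row_norm2[i] * A[i]
    return x
```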
APA, Harvard, Vancouver, ISO, and other styles
8

Meredith, Kelly Robyn. "The Influence of Soil Reconstruction Methods on Mineral Sands Mine Soil Properties." Thesis, Virginia Tech, 2007. http://hdl.handle.net/10919/31006.

Full text
Abstract:
Significant deposits of heavy mineral sands (primarily ilmenite and zircon) are located in Virginia in Dinwiddie, Sussex and Greensville counties. Most deposits are located under prime farmland, and thus require intensive reclamation when mined. The objective of this study was to determine the effect of four different mine soil reconstruction methods on soil properties and associated row-crop productivity. Treatments compared were 1) Biosolids-No Tillage, 2) Biosolids-Conventional Tillage, 3) Lime+NPK fertilized tailings (Control), and 4) 15-cm Topsoil over lime+P treated tailings. Treated plots were cropped to corn (Zea mays L.) in 2005 and wheat (Triticum aestivum L.) in 2006. Yields were compared to nearby unmined prime farmland yields. Over both growing seasons, the two biosolids treatments produced the highest overall crop yields. The Topsoil treatment produced the lowest corn yields due to relatively poor physical and chemical conditions, but the effect was less obvious for the following wheat crop. Reclaimed land corn and wheat yields were higher than long-term county averages, but they were consistently lower than unmined plots under identical management. Detailed morphological study of 20 mine soil pedons revealed significant root-limiting subsoil compaction and textural stratification. The mine soils classified as Typic Udorthents (11), Typic Udifluvents (4) and Typic Dystrudepts (5). Overall, mined lands can be successfully returned to intensive agricultural production with comparable yields to long-term county averages provided extensive soil amendment and remedial tillage protocols are implemented. However, a significant decrease (~25 to 35%) in initial productivity should be expected relative to unmined prime farmland.
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
9

Martin, Lorca Dario. "Implementation And Comparison Of Reconstruction Algorithms For Magnetic Resonance." Master's thesis, METU, 2007. http://etd.lib.metu.edu.tr/upload/12608250/index.pdf.

Full text
Abstract:
In magnetic resonance electrical impedance tomography (MR-EIT), cross-sectional images of a conductivity distribution are reconstructed. When current is injected into a conductor, it generates a magnetic field, which can be measured by a magnetic resonance imaging (MRI) scanner. MR-EIT reconstruction algorithms can be grouped into two classes: current density based reconstruction algorithms (Type-I) and magnetic flux density based reconstruction algorithms (Type-II). The aim of this study is to implement a series of reconstruction algorithms for MR-EIT, proposed by several research groups, and compare their performance under the same circumstances. Five direct and one iterative Type-I algorithms, and an iterative Type-II algorithm, are investigated. Reconstruction errors and spatial resolution are quantified and compared. Noise levels corresponding to system SNRs of 60, 30 and 20 are considered. Iterative algorithms provide the lowest errors for the noise-free case. For the noisy cases, the iterative Type-I algorithm yields a lower error than the Type-II, although it can diverge for SNR lower than 20. Both suffer from significant blurring effects, especially at SNR 20. Another two algorithms make use of integration in the reconstruction, producing intermediate errors, but with strong blurring effects. Equipotential lines are calculated for two reconstruction algorithms; these lines may not be found accurately when the SNR is lower than 20, and some pixels may not be covered and therefore cannot be reconstructed. Finally, the algorithm involving the solution of a linear system provides the least blurred images with intermediate error values. It is also very robust against noise.
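As a pointer to what a Type-I scheme looks like (a sketch of the widely cited J-substitution iteration, not necessarily the exact variant implemented in the thesis): the forward problem is solved for the current conductivity estimate, and the conductivity is then reset from the measured current density magnitude,

\[ \nabla\cdot\big(\sigma^{(k)}\nabla V^{(k)}\big)=0 \ \text{in } \Omega, \qquad \sigma^{(k+1)}=\frac{\lvert\mathbf{J}^{\mathrm{meas}}\rvert}{\lvert\nabla V^{(k)}\rvert}, \]

iterated until convergence, with the overall multiplicative scale fixed by the boundary voltage measurements.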
APA, Harvard, Vancouver, ISO, and other styles
10

Sun, Hongyan. "An investigation into the use of scattered photons to improve 2D Positron Emission Tomography (PET) functional imaging quality." Hindawi Publishing Corporation, 2012. http://hdl.handle.net/1993/31031.

Full text
Abstract:
Positron emission tomography (PET) is a powerful metabolic imaging modality, which is designed to detect two anti-parallel 511 keV photons originating from a positron-electron annihilation. However, it is possible that one or both of the annihilation photons undergo a Compton scattering in the object. This is more serious for a scanner operated in 3D mode or with large patients, where the scatter fraction can be as high as 40-60%. When one or both photons are scattered, the line of response (LOR) defined by connecting the two relevant detectors no longer passes through the annihilation position. Thus, scattered coincidences degrade image contrast and compromise quantitative accuracy. Various scatter correction methods have been proposed, but most of them are based on estimating and subtracting the scatter from the measured data or incorporating it into an iterative reconstruction algorithm. By accurately measuring the scattered photon energy and taking advantage of the kinematics of Compton scattering, two circular arcs (TCA) in 2D can be identified, which describe the locus of all the possible scattering positions and encompass the point of annihilation. In the limiting case where the scattering angle approaches zero, the TCA approach the LOR for true coincidences. Based on this knowledge, a Generalized Scatter (GS) reconstruction algorithm has been developed in this thesis, which can use both true and scattered coincidences to extract the activity distribution in a consistent way. The annihilation position within the TCA can be further confined by adding a patient outline as a constraint in the GS algorithm. An attenuation correction method for the scattered coincidences was also developed in order to remove imaging artifacts. A geometrical model that characterizes the different probabilities of the annihilation positions within the TCA was also proposed. This can speed up image convergence and improve reconstructed image quality. Finally, the GS algorithm has been adapted to deal with non-ideal energy resolutions. In summary, an algorithm that implicitly incorporates scattered coincidences into the image reconstruction has been developed. Our results demonstrate that this eliminates the need for scatter correction and can improve system sensitivity and image quality.
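The kinematic relation that defines those two circular arcs is the Compton formula specialised to 511 keV annihilation photons: the measured energy of the scattered photon fixes the scattering angle. A small illustrative helper (hypothetical name, numpy assumed):

```python
import numpy as np

MEC2 = 511.0  # electron rest energy in keV (approximate)

def scatter_angle(e_scattered_kev):
    """Scattering angle of a 511 keV annihilation photon from the
    measured scattered energy: E' = m_e c^2 / (2 - cos(theta))."""
    cos_theta = 2.0 - MEC2 / np.asarray(e_scattered_kev, dtype=float)
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))
```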
February 2016
APA, Harvard, Vancouver, ISO, and other styles
11

Boyacioglu, Rasim. "Performance Evaluation Of Current Density Based Magnetic Resonance Electrical Impedance Tomography Reconstruction Algorithms." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12611016/index.pdf.

Full text
Abstract:
Magnetic Resonance Electrical Impedance Tomography (MREIT) reconstructs the conductivity distribution from internal current density (MRCDI) and boundary voltage measurements. Many algorithms have been proposed for the solution of the MREIT inverse problem, and they can be divided into two groups: current density (J) based and magnetic flux density (B) based reconstruction algorithms. In this thesis, J-based MREIT reconstruction algorithms are implemented and optimized with modifications. These algorithms are simulated with five conductivity models which have different geometries and conductivity values. The simulation results are discussed and the reconstruction algorithms are compared according to their performance. The Equipotential-Projection algorithm has lower error percentages than the other algorithms for the noise-free case, whereas the Hybrid algorithm has the best performance for noisy cases. Although the J-substitution and Hybrid algorithms have relatively long reconstruction times, they produced the perceptually best images. The Integration along Cartesian Grid Lines and Integration along Equipotential Lines algorithms diverge as the noise level increases. The Equipotential-Projection algorithm produces erroneous lines starting from the corners of the FOV, especially for noisy cases, whereas Solution as a Linear Equation System shows a typical grid artifact. Considering performance on the data of experiment 1, only the Solution as a Linear Equation System algorithm partially reconstructed all elements, which shows that it is robust to noise. The Equipotential-Projection algorithm reconstructed the resistive element partially, and the other algorithms failed to reconstruct the conductivity distribution. Experimental results obtained with a higher conductivity contrast show that the Solution as a Linear Equation System, J-Substitution and Hybrid algorithms reconstructed both phantom elements, and the Hybrid algorithm is superior to the other algorithms in percentage error comparison.
APA, Harvard, Vancouver, ISO, and other styles
12

Terlan, Bürgehan. "Experimental electron density reconstruction and analysis of titanium diboride and binary vanadium borides." Doctoral thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-119046.

Full text
Abstract:
Intermetallic borides are characterized by a great variety of crystal structures and bonding interactions; however, a comprehensive rationalisation of their electronic structure is missing. A more general interpretation is targeted by comparing, on the one hand, several boride phases of one particular transition metal and, on the other, isostructural borides of various metals. Finally, a concise model should result from a detailed analysis of excellent data from both experimental charge density analysis and quantum chemical methods. The ultimate target is a transferability model based on typical building blocks. Experimental investigations of the electron density derived from diffraction data are very rare for intermetallic compounds. One of the main reasons is that the suitability of such compounds for charge density analysis is estimated to be relatively low as compared to organic compounds. In the present work, X-ray single-crystal diffraction measurements up to high resolution were carried out on TiB2, VB2, V3B4, and VB crystals. The respective experimental electron densities were reconstructed using the multipole model introduced by Hansen and Coppens [1]. The topological aspects of the experimental electron density were analysed on the basis of the multipole parameters using Bader's Quantum Theory of Atoms in Molecules [2] and compared with theoretical calculations. References: [1] Hansen, N. K.; Coppens, P. Acta Crystallogr. 1978, A34, 909. [2] Bader, R. F. W. Atoms in Molecules--A Quantum Theory; Oxford University Press: Oxford, 1990.
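For reference, the Hansen-Coppens pseudoatom model mentioned above expands each atomic density as

\[ \rho_{\mathrm{atom}}(\mathbf r)=\rho_{\mathrm{core}}(r)+P_{v}\,\kappa^{3}\rho_{\mathrm{valence}}(\kappa r)+\sum_{l=0}^{l_{\max}}\kappa'^{3}R_{l}(\kappa' r)\sum_{m=0}^{l}P_{lm\pm}\,d_{lm\pm}(\theta,\varphi), \]

where the populations \( P_v, P_{lm\pm} \) and the expansion-contraction parameters \( \kappa, \kappa' \) are refined against the diffraction data.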
APA, Harvard, Vancouver, ISO, and other styles
13

Eker, Gokhan. "Performance Evaluation Of Magnetic Flux Density Based Magnetic Resonance Electrical Impedance Tomography Reconstruction Algorithms." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/12610940/index.pdf.

Full text
Abstract:
Magnetic Resonance Electrical Impedance Tomography (MREIT) reconstructs images of the electrical conductivity distribution based on magnetic flux density (B) measurements. The magnetic flux density is generated by a current applied externally to the object and is measured by a Magnetic Resonance Imaging (MRI) scanner. From the measured data and peripheral voltage measurements, the conductivity distribution of the object can be reconstructed. There are two types of reconstruction algorithms. The first type uses current density distributions to reconstruct the conductivity distribution; the object must be rotated in the MRI scanner to measure all three components of the magnetic flux density. These are called J-based reconstruction algorithms. The second type uses only the component of the magnetic flux density parallel to the main magnetic field of the MRI scanner, which eliminates the need for subject rotation. These are called B-based reconstruction algorithms. In this study four B-based reconstruction algorithms, proposed by several research groups, are examined. The algorithms are tested on different computer models with noise-free and noisy data. For noise-free data, the algorithms work successfully. System SNRs of 30, 20 and 13 are used for noisy data, for which the performance is not as satisfactory as for noise-free data. Two of the algorithms use a double differentiation of the z component of B (Bz) and are therefore very sensitive to noise. One algorithm uses only a single differentiation of Bz, so it is more robust to noise. The remaining algorithm uses a sensitivity matrix to reconstruct the conductivity distribution.
APA, Harvard, Vancouver, ISO, and other styles
14

Haines, Benjamin A. "A GPU parallel approach improving the density of patch based multi-view stereo reconstruction." Thesis, University of Nottingham, 2016. http://eprints.nottingham.ac.uk/32746/.

Full text
Abstract:
Multi-view stereo is the process of recreating three-dimensional data from a set of two or more images of a scene. The ability to acquire 3D data from 2D images is a core concept in computer vision, with wide-ranging applications throughout areas such as 3D printing, robotics, recognition, navigation and a vast number of other fields. While 3D reconstruction has been increasingly well studied over the past decades, it is only with the recent evolution of CPU and GPU technologies that practical implementations, able to accurately, robustly and efficiently capture 3D data of photographed objects, have begun to emerge. Whilst current research has been shown to perform well under specific circumstances and for a subset of objects, there are still many practical and implementation issues that remain an open problem for these techniques, most notably the ability to robustly reconstruct objects from sparse image sets or objects with low texture. Alongside a review of algorithms within the multi-view field, the work proposed in this thesis outlines a massively parallel patch-based multi-view stereo pipeline for static scene recovery. By utilising advances in GPU technology, a particle swarm algorithm implemented on the GPU forms the basis for improving the density of patch-based methods. The novelty of such an approach is that it removes the reliance on feature matching and gradient descent, to better account for the optimisation of patches within textureless regions, with which current methods struggle. An enhancement to the photo-consistency matching metric, which is used to evaluate the optimisation of each patch, is then defined, specifically targeting the shortcomings of the photo-consistency metric when used inside a particle swarm optimisation and increasing its effectiveness over textureless areas. Finally, a multi-resolution reconstruction system based on a wavelet framework is presented to further improve the robustness of reconstruction over low-textured regions.
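Patch optimisation of this kind replaces gradient descent with a particle swarm search over patch parameters (for example depth and normal angles), scored by photo-consistency. A generic, CPU-side numpy sketch of such an optimiser; the thesis's GPU kernel and exact parametrisation are not reproduced here, and the names and hyperparameters are assumptions. The `cost` callable would evaluate the (enhanced) photo-consistency of the patch each particle encodes.

```python
import numpy as np

def pso(cost, bounds, n_particles=32, n_iter=50, w=0.7, c1=1.5, c2=1.5, seed=None):
    """Minimal particle swarm optimiser. cost maps an (n_particles, dim)
    array of candidate patch parameters to a per-particle score array."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0], float), np.asarray(bounds[1], float)
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pcost = x.copy(), cost(x)
    g = pbest[np.argmin(pcost)]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)  # velocity update
        x = np.clip(x + v, lo, hi)
        c = cost(x)
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]    # personal bests
        g = pbest[np.argmin(pcost)]                            # global best
    return g
```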
APA, Harvard, Vancouver, ISO, and other styles
15

Chighvinadze, Tamar. "A spectroscopic Compton scattering reconstruction algorithm for 2D cross-sectional view of breast CT geometry." Journal of X-Ray Science and Technology, IOS press, 2014. http://hdl.handle.net/1993/23846.

Full text
Abstract:
X-ray imaging exams are widely used procedures in medical diagnosis. Whenever an x-ray imaging procedure is performed, it is accompanied by scattered radiation. Scatter is a significant contributor to the degradation of image quality in breast CT. This work uses our understanding of the physics of Compton scattering to overcome the reduction in image quality that typically results from scattered radiation. By measuring the energy of the scattered photons at various locations around the object, an electron density (ρe) image of the object can be obtained. This work investigates a system modeled using a 2D cross-sectional view of a breast CT geometry. The ρe images can be obtained using filtered backprojection over isogonic curves. If the detector had ideal energy and spatial resolution, a single projection would enable a high-quality image to be reconstructed. However, these ideal characteristics cannot be achieved in practice, and as the detector size and energy resolution diverge from the ideal, the image quality degrades. To compensate for realistic detector specifications, a multi-projection Compton scatter tomography (MPCST) approach was introduced, in which an x-ray source and an array of energy-sensitive photon-counting detectors, located just outside the edge of the incident fan-beam, rotate around the object while acquiring scattering data. The ρe image quality is affected by the size of the detector, the energy resolution of the detector and the number of projections. These parameters, their tradeoffs and methods for improving image quality were investigated. The work has shown that increasing the energy and spatial resolution of the detector improves the spatial resolution of the reconstructed ρe image, but these changes also increase the noise; optimizing the image quality thus becomes a tradeoff between blurring and noise. We established that a suitable balance is achieved with a 500 eV energy resolution and a 2×2 mm² detector, and that using a multi-projection approach can offset the increase in noise.
APA, Harvard, Vancouver, ISO, and other styles
16

Heng, Sovanchandara. "Thinning Effects on Forest Stands and Possible Improvement in a Stand Reconstruction Technique." Kyoto University, 2019. http://hdl.handle.net/2433/242684.

Full text
APA, Harvard, Vancouver, ISO, and other styles
17

Seck, Bassirou. "Display and Analysis of Tomographic Reconstructions of Multiple Synthetic Aperture LADAR (SAL) images." Wright State University / OhioLINK, 2018. http://rave.ohiolink.edu/etdc/view?acc_num=wright1547740781773769.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Koo, Stephanie C. "3D reconstruction of synaptic and nuclear corticosteroid receptors distribution density in the amygdala: a feasibility study." Thesis, Queensland University of Technology, 2017. https://eprints.qut.edu.au/103629/1/Stephanie_Koo_Thesis.pdf.

Full text
Abstract:
This project tested a new way of looking at different types of stress receptors (corticosteroid receptors) in the brain. The method used fluorescent markers to label the stress receptors in brain tissue, and reconstructed microscope images in 3D using computer software. The thesis details this new method and demonstrates how it can be used to compare different types of stress receptors at different locations in the amygdala.
APA, Harvard, Vancouver, ISO, and other styles
19

Kern, Alexander Marco. "Quantification of the performance of 3D sound field reconstruction algorithms using high-density loudspeaker arrays and 3rd order sound field microphone measurements." Thesis, Virginia Tech, 2017. http://hdl.handle.net/10919/77516.

Full text
Abstract:
The development and improvement of 3D immersive audio is gaining momentum through the growing interest in virtual reality. Possible applications range from recreating real-world environments to immersive concerts and performances to exploring big data acoustically. Several measures can be taken to improve the immersive experience; the recording of the sound field, the spatialization and the development of the loudspeaker arrays are some of the greatest challenges. In this thesis, these challenges for improving immersive audio will be explored. First, there will be a short introduction to 3D audio and a review of state-of-the-art technology and research. Next, the thesis will provide an introduction to 3D loudspeaker arrays and describe the systems used during this research. Furthermore, the development of a new 16-element 3rd-order sound field microphone will be described. Afterwards, different spatial audio algorithms such as higher-order ambisonics, wave field synthesis and vector base amplitude panning will be described, analyzed and compared. For each spatialization algorithm, the quality of sound field reproduction will be quantified using listener perception tests for clarity and sound source localization.
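Of the spatialization algorithms compared, vector base amplitude panning (VBAP) is the most compact to illustrate: the gains of a loudspeaker triplet follow from inverting the matrix of speaker directions. A minimal numpy sketch of the standard formulation, illustrative only; array geometry and normalisation conventions vary between systems.

```python
import numpy as np

def vbap_gains(triplet, source_dir):
    """Vector Base Amplitude Panning gains for one loudspeaker triplet.
    triplet: 3x3 matrix whose rows are unit vectors toward the speakers;
    source_dir: unit vector toward the virtual source."""
    g = np.linalg.solve(triplet.T, source_dir)   # solve p = L^T g
    if np.any(g < 0):
        raise ValueError("source lies outside this loudspeaker triplet")
    return g / np.linalg.norm(g)                 # normalise for constant power
```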
Master of Science
APA, Harvard, Vancouver, ISO, and other styles
20

Soares-Ferreira, Sofia Knapic. "Aptidão do sobreiro como produtor de matéria-prima para a indústria de madeira e de painéis compósitos com vista a produtos de qualidade." Doctoral thesis, ISA/UTL, 2011. http://hdl.handle.net/10400.5/3576.

Full text
Abstract:
Doctorate in Forestry Engineering - Instituto Superior de Agronomia
The objective of this work is the study of the diversification of cork oak stands as a source of raw material for cork oak wood flooring. The wood of Quercus suber L. has high density values (0.86 g cm⁻³ to 0.98 g cm⁻³) with small ring variation (both axial and radial) as well as between trees. The wood of Q. faginea L., studied for comparison purposes, has an average density of 0.85 g cm⁻³. Cork oak presented an average growth of 3.9 mm/year which, together with its high density, makes it an interesting species when it comes to carbon storage. Modeling and simulation techniques were used to study the industrial transformation of the cork oak stems. The maximization of production yields was achieved with small logs and components of short dimensions (parquet and components for multilayer composites). Relevant properties for flooring applications (hardness, wear and dimensional stability) were assessed. Results indicate that cork oak wood is suitable for high-traffic flooring applications. The conclusions show the technological feasibility of cork oak wood for flooring applications, making it a strong alternative to other oak and tropical species.
APA, Harvard, Vancouver, ISO, and other styles
21

Baiocco, Giorgio. "Vers une reconstruction des propriétés thermiques des noyaux légers par le biais de réactions de fusion-évaporation." Caen, 2012. http://www.theses.fr/2012CAEN2003.

Full text
Abstract:
This thesis work was developed in the framework of a new experimental campaign, proposed by the NUCL-EX Collaboration (INFN III Group), to progress in the understanding of the statistical properties of light nuclei at excitation energies above the particle emission threshold, by measuring exclusive data from fusion-evaporation reactions. The main physics goals of this work are the determination of the nuclear level density in the A~20 region, the understanding of the statistical behavior of light nuclei at excitation energies of ~3 A.MeV, and the measurement of observables linked to the presence of cluster structures in excited nuclear levels. On the theory side, the contribution of this work to the project lies in the development of a dedicated Monte Carlo Hauser-Feshbach code for the evaporation of the compound nucleus. The experimental part of this thesis consisted of participation in the 12C+12C measurement at 95 MeV beam energy at Laboratori Nazionali di Legnaro - INFN, using the GARFIELD+Ring Counter (RCo) set-up, from the beam-time request through data taking, data reduction, detector calibration and data analysis. Various results of the data analysis are presented in this thesis, together with a theoretical study of the system performed with the new statistical decay code. As a result of this work, constraints are placed on the nuclear level density at high excitation energy for light systems ranging from C up to Mg. Moreover, pre-equilibrium effects, tentatively interpreted as alpha-clustering effects, are put in evidence, both in the entrance channel of the reaction and in the dissipative dynamics on the path towards thermalisation.
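For context, the level densities being constrained are conventionally parameterised starting from Bethe's Fermi-gas formula (the refined parameterisations actually used in a Hauser-Feshbach code add energy shifts and spin cut-off factors),

\[ \rho(E)\simeq\frac{\sqrt{\pi}}{12\,a^{1/4}E^{5/4}}\exp\!\big(2\sqrt{aE}\big), \]

with \( a \) the level-density parameter and \( E \) the excitation energy.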
APA, Harvard, Vancouver, ISO, and other styles
22

Petretto, Guido. "Density functional simulation of chalcogen doped silicon nanowires." Doctoral thesis, Università degli Studi di Milano-Bicocca, 2012. http://hdl.handle.net/10281/28938.

Full text
Abstract:
In this thesis we report a first-principles investigation of silicon nanowires. Density functional theory has been used to study the electronic properties of pristine and chalcogen-doped nanowires. Nanowires with different surface structures and orientations have been considered. We show that substitutional chalcogen atoms have favorable configurations at positions close to the surface of the nanowire. We also show that hyperfine interactions increase at small diameters, as long as the nanowire is large enough to prevent the surface distortion which modifies the symmetry of the donor wave function. Moreover, surface effects lead to strong differences in the hyperfine parameters depending on the Se location inside the nanowire, allowing the identification of an impurity site on the basis of EPR spectra.
APA, Harvard, Vancouver, ISO, and other styles
23

Ozsut, Murat Esref. "Design And Implementation Of Labview Based Data Acquisition And Image Reconstruction Environment For Metu-mri System." Master's thesis, METU, 2005. http://etd.lib.metu.edu.tr/upload/12606662/index.pdf.

Full text
Abstract:
Data acquisition and image reconstruction tasks of the METU Magnetic Resonance Imaging (MRI) System used to be performed by 15-year-old technology. That system was incapable of transmitting control signals simultaneously and had memory limitations. Its control software was written mostly in assembly language, which is hard to modify, had very limited user interface functionality, and was time consuming to use. In order to improve the system, a LabVIEW-based data acquisition system consisting of an NI-6713 D/A card (to generate the RF envelope, gradients, etc.) and an NI-6110E A/D card (to digitize echo signals), both from National Instruments, was programmed and integrated into the system, and a pulse sequence design, data acquisition and image reconstruction front-end was designed and implemented. In addition, a new method that can be used in Magnetic Resonance Current Density Imaging (MRCDI) experiments is proposed. In this method the readily built gradient coil of the MRI scanner is utilized to induce current in the imaging volume. The magnetic fields created by the induced currents are measured for various amplitude levels, and it is shown that inducing current with this method is possible.
APA, Harvard, Vancouver, ISO, and other styles
24

Sarpa, Elena. "Velocity and density fields on cosmological scales : modelisation via non-linear reconstruction techniques and application to wide spectroscopic surveys." Thesis, Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0524.

Full text
Abstract:
A new fully non-linear reconstruction algorithm, based on the least-action principle and extending the Fast Action Minimisation method of Nusser & Branchini (2000), is presented, intended for applications with the next-generation massive spectroscopic surveys. Its capability of recovering the velocity field from the observed density field is tested on dark-matter halo catalogues, tracing the trajectories of up to 10^6 haloes backward in time, well beyond the first-order Lagrangian approximation. Both in real and redshift space it successfully recovers the peculiar velocities. The new algorithm is first employed for the accurate recovery of the Baryon Acoustic Oscillation (BAO) scale in two-point correlation functions. Tests on dark-matter halo catalogues show how the new algorithm successfully recovers the BAO feature in real and redshift space, also for anomalous samples showing a misplaced or absent BAO signature. A comparison with first-order Lagrangian reconstruction is presented, showing that this technique outperforms the linear approximation in recovering an unbiased measurement of the BAO scale. A second version of the algorithm, accounting for the survey geometry and the bias of tracers, is then tested on low-redshift galaxy samples extracted from mocks specifically designed to match the SDSS-DR12 LRG clustering. The analysis of the anisotropic clustering indicates that non-linear reconstruction is a fundamental tool to break the degeneracy between redshift-space distortions and the Alcock-Paczynski effect. Finally, the application to cosmic-void analysis is introduced, showing the great potential of non-linear modelling of the velocity field in restoring the intrinsic isotropy of voids.
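The linear building block that such least-action methods extend is the Zel'dovich displacement, obtainable from a gridded density contrast with FFTs, as in standard first-order BAO reconstruction. A minimal numpy sketch of that baseline only, not of the thesis's non-linear algorithm; grid conventions are assumptions, and in practice the density field would first be smoothed.

```python
import numpy as np

def zeldovich_displacement(delta, boxsize):
    """First-order (Zel'dovich) displacement field from a gridded density
    contrast delta on an n^3 mesh: psi = -grad(phi) with laplacian(phi) = delta,
    solved in Fourier space."""
    n = delta.shape[0]
    k = 2 * np.pi * np.fft.fftfreq(n, d=boxsize / n)
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                       # avoid dividing the mean mode by zero
    phi_k = -np.fft.fftn(delta) / k2        # laplacian(phi) = delta
    phi_k[0, 0, 0] = 0.0
    psi = [np.real(np.fft.ifftn(-1j * kk * phi_k)) for kk in (kx, ky, kz)]
    return np.stack(psi)                    # shape (3, n, n, n); psi = -grad(phi)
```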
APA, Harvard, Vancouver, ISO, and other styles
25

El, Abdi Fouad. "Méthodes de géométrie différentielle dans les modèles statistiques et applications : modèles exponentiels et modèles normaux multidimensionnels : reconstruction des densités de probabilité et des densités spectrales." Paris 11, 1988. http://www.theses.fr/1988PA112253.

Full text
Abstract:
This work is a broad application of differential geometry methods to statistics. In chapter one we introduce the differential-geometric structures used in statistical models, essentially a Riemannian manifold equipped with a pair of connections that are dual with respect to particular metrics. These methods have already been used by Amari, Chentsov, Efron and Lauritzen. In the two following chapters we study the projection estimators introduced by Amari and Lauritzen, especially their asymptotic properties, which in the case of exponential families have a geometric expression. We prove that first-order efficiency is equivalent to the underlying Riemannian geometry being conformally equivalent to the Fisher geometry, and that second-order efficiency corresponds to an analogous property of the underlying connections. Note that almost all the usual estimators (maximum likelihood, minimum contrast) belong to this class. The last part of this work applies the preceding results: (1) to the multivariate normal model, in particular non-linear multivariate Gaussian regression; (2) to projection in the setting of Hilbert manifolds for the reconstruction of probability densities and spectral densities. Keywords: differential manifold, metric, connections, divergences, exponential models, projection estimation, statistical inference, spectral densities.
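Concretely, the dual structure referred to above is built on the Fisher information metric and Amari's \( \alpha \)-connections: with \( \ell(x,\theta)=\log p(x;\theta) \),

\[ g_{ij}(\theta)=\mathbb{E}_\theta\!\left[\partial_i\ell\,\partial_j\ell\right],\qquad \Gamma^{(\alpha)}_{ij,k}(\theta)=\mathbb{E}_\theta\!\left[\Big(\partial_i\partial_j\ell+\tfrac{1-\alpha}{2}\,\partial_i\ell\,\partial_j\ell\Big)\partial_k\ell\right], \]

the connections with parameters \( \alpha \) and \( -\alpha \) (in particular the exponential, \( \alpha=1 \), and mixture, \( \alpha=-1 \), connections) being dual with respect to \( g \).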
APA, Harvard, Vancouver, ISO, and other styles
26

Terlan, Bürgehan [Verfasser], Juri [Akademischer Betreuer] Grin, and Michael [Akademischer Betreuer] Ruck. "Experimental electron density reconstruction and analysis of titanium diboride and binary vanadium borides / Bürgehan Terlan. Gutachter: Juri Grin ; Michael Ruck." Dresden : Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://d-nb.info/1068153385/34.

Full text
APA, Harvard, Vancouver, ISO, and other styles
27

Terlan, Bürgehan [Verfasser], Juri [Akademischer Betreuer] Grin, and Michael [Akademischer Betreuer] Ruck. "Experimental electron density reconstruction and analysis of titanium diboride and binary vanadium borides / Bürgehan Terlan. Gutachter: Juri Grin ; Michael Ruck." Dresden : Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2013. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-119046.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Todoroff, Violaine. "Mesure d'un champ de masse volumique par Background Oriented Schlieren 3D. Étude d'un dispositif expérimental et des méthodes de traitement pour la résolution du problème inverse." PhD thesis, Toulouse, INPT, 2013. http://oatao.univ-toulouse.fr/11303/1/todoroff.pdf.

Full text
Abstract:
This work consists of setting up a BOS3D (3D Background Oriented Schlieren) experimental apparatus allowing the reconstruction of the instantaneous density field of a flow, and of developing a reconstruction algorithm that delivers results quickly and remains robust when only a small number of viewpoints is available. The first step was to develop a BOS3D reconstruction algorithm applicable to all experimental configurations. To this end, the direct problem was reformulated in algebraic form and a criterion was defined. This formulation, together with the equations of the optimization methods needed to minimize the criterion, was parallelized to allow a GPU implementation. The algorithm was then tested on reference cases from numerical simulation to check whether the field reconstructed by the algorithm agreed with the one supplied. In this context, we developed a tool to simulate a BOS3D setup numerically in order to obtain the deviation fields associated with the numerical flows. These deviation fields were then supplied as input to the code and allowed us to study the sensitivity of our algorithm to numerous parameters such as noise on the data, calibration errors and mesh discretization. Then, to test our code on real data, we set up a BOS3D experimental bench for the reconstruction of the instantaneous density field. This required the study of a new measurement setup, involving multi-camera calibration techniques and new background illumination strategies. Finally, the experimental data were used as input to our algorithm in order to validate its behaviour on real data.
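The inverse problem solved here links the measured apparent background displacements to the density field through the ray deflection and the Gladstone-Dale relation,

\[ \boldsymbol{\varepsilon}=\int_{\mathrm{ray}}\frac{\nabla_{\!\perp}n}{n}\,ds\;\approx\;\int_{\mathrm{ray}}\nabla_{\!\perp}n\,ds,\qquad n-1=K\rho, \]

so that discretising these line integrals over a voxel grid yields precisely the algebraic formulation of the direct problem described in the abstract.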
APA, Harvard, Vancouver, ISO, and other styles
29

Vernet, Kinson. "Imagerie densitométrique 3D des volcans par muographie." Electronic Thesis or Diss., Université Clermont Auvergne (2021-...), 2022. http://www.theses.fr/2022UCFAC112.

Full text
Abstract:
La muographie est une technique d’imagerie en physique des particules où les muons atmosphériques traversant une cible sont utilisés pour déterminer des informations de l’intérieur de la cible : distribution de la densité ou composition chimique via le numéro atomique. En fonction de l’énergie des muons et de la quantité de matière à traverser, il y en a qui vont survivre et d’autres qui vont être arrêtés par la cible. Et, la diffusion des muons dépend, en première approximation, de leur impulsion et du numéro atomique moyen le long de leur parcours de vol. La muographie propose, à partir de la mesure de la transmission et/ou de la diffusion des muons à travers la cible, de fournir des informations sur son intérieur.Il existe actuellement deux types de muographie : la muographie par transmission où le flux transmis des muons à travers la cible est mesuré pour inférer la distribution de densité de la cible et la muographie par diffusion où la diffusion des muons à travers la cible est utilisée pour déterminer la distribution du numéro atomique de la cible. Cette thèse traite de la muographie par transmission pour radiographier les volcans.Dans le cas de la muographie par transmission, un télescope à muons est utilisé pour mesurer le flux transmis des muons atmosphériques à travers la cible. Ce flux est, en première approximation, une fonction bijective de la quantité de matière rencontrée par les muons. L’idée est d’inverser le nombre de muons mesurés en une estimation de la densité de la cible.Il existe d’autres méthodes d’imagerie en géophysique permettant de reconstruire la densité d’une cible. C’est le cas, par exemple, de la gravimétrie et de l’imagerie par sismicité. Ces méthodes dites conventionnelles présentent des faiblesses. Pour ces méthodes, le problème d’inversion est soit mal posé, c’est-à-dire il n’existe pas de solution unique ou la solution présente de grandes variations pour de petites variations des paramètres dont elle dépend. Un ensemble de contraintes supplémentaires sont alors ajoutées pour enlever la non-unicité.En muographie par contre, le problème d’inversion est bien posé et la solution est unique. Les méthodes conventionnelles en géophysique ne permettent pas, à elles seules, de déterminer la densité de la cible. Jointes avec la muographie, elles présentent de gros potentiel, soit en fournissant d’autres informations sur la roche et/ou sur la nature de l’eau, soit en améliorant la précision sur la reconstruction de la densité de la cible.Plusieurs expériences utilisent l’approximation CSDA (Continuous Slowing Down Approximation) pour estimer la probabilité de survie des muons à travers une cible. Le fait d’utiliser cette approximation, donc de négliger le caractère stochastique de l’interaction des muons avec la matière, sous-estime la probabilité de survie des muons et par conséquent induit des effets systématiques sur la reconstruction de la densité. Dans les kilomètres de roche standard l’effet est de 3% - 8% en fonction de la modélisation de l’interaction des muons de hautes énergies avec la matière. En outre, une mauvaise estimation du bruit de fond des muons de basse impulsion qui affectent la mesure du signal résulte en une sous-estimation de la densité de la cible par rapport à la gravimétrie. Cela vient probablement de l’utilisation de l’approximation analytique pour simuler la propagation des muons à travers la cible et de la difficulté de rejeter dans la mesure ceux de basse impulsion. 
Pour ces raisons, dans l’expérience MIM (Muon IMaging) (où cette thèse a été réalisée), nous utilisons un traitement Monte Carlo pour simuler le transport des muons à travers la cible. Dans ce cas, nous pouvons estimer précisément l’effet de ces muons de basse impulsion sur la reconstruction de la densité. (...)
Muography is an imaging technique in particle physics where atmospheric muons passing through a target are used to determine information about the interior of the target : density distribution or chemical composition via the atomic number. Depending on the energy of the muons and the amount of matter they have to cross, some of them will survive and others will be stopped by the target. And, the diffusion of the muons depends, to a first approximation, on their momentum and the average atomic number along their flight path. Muography proposes, from the measurement of the transmission and/or diffusion of muons through a target, to provide information about its interior.There are currently two types of muography : transmission muography, where the transmitted flux of muons through the target is measured to infer the density distribution of that target, and diffusion muography, where the diffusion of muons through the target is used to determine the distribution of the atomic number of the target. This thesis discusses transmission muography in order to radiography volcanoes.In the case of transmission muography, a muon telescope is used to measure the transmitted flux of atmospheric muons through the target. This flux is, to a first approximation, a bijective function of the amount of matter encountered by the muons. The idea is to invert the measured number of muons into a density estimation of the target.There are other imaging methods in geophysics that can be used to reconstruct the density of a target. This is the case, for example, of gravimetry and seismic imaging. These so-called conventional methods have weaknesses. For these methods, the inversion problem is either ill-posed, i.e. there is no unique solution, or the solution presents large variations for small variations of the parameters on which it depends. A set of additional constraints are then added to remove the non-uniqueness.In muography however, the inversion problem is well posed and the solution is unique. Conventional geophysical methods alone cannot determine the density of a target. Combined with muography, they have great potential, either by providing other information on the rock and/or on the nature of the water, or by improving the accuracy of the target density reconstruction.Several experiments use the CSDA (Continuous Slowing Down Approximation) approximation to estimate the survival probability of muons through a target. Using this approximation, thus neglecting the stochastic character of the interaction of muons with matter, underestimates the muon survival probability and therefore induces systematic effects on the density reconstruction. In standard rock kilometers the effect is 3% - 8% depending on the modeling of the interaction of high energy muons with matter. In addition, a bad estimation of the background of the low momentum muons affecting the measurement of the signal results in an underestimation of the density of the target with respect to the gravimetry. This probably comes from the use of the analytical approximation to simulate the propagation of the muons through the target and the difficulty of rejecting in the measurement those with low momentum. For these reasons, in the Muon IMaging (MIM) experiment (where this thesis was conducted), we use a Monte Carlo treatment to simulate the muon transport through the target. In this case, we can accurately estimate the effet of these low momentum muons on the density reconstruction. 
One of the techniques used in our experiment to make the low-momentum muons scatter, so that they can be statistically rejected, is to insert a thickness of lead between the telescope detection planes. (...)
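The inversion at the heart of transmission muography can be sketched in a few lines of Python: since the transmitted flux is, to a first approximation, a strictly decreasing (bijective) function of the opacity crossed, a measured flux can be inverted numerically, for instance by bisection. The flux model and all numerical values below are toy placeholders, not the parameterization used by MIM.

def surviving_flux(opacity_mwe):
    # Toy integrated flux (arbitrary units) after crossing `opacity_mwe`
    # meters of water equivalent; hypothetical parameters, monotone decreasing.
    return 1.0 / (1.0 + 0.05 * opacity_mwe) ** 2.7

def invert_opacity(measured_flux, lo=0.0, hi=1e5, tol=1e-9):
    # Bisection is valid precisely because the flux is a strictly decreasing
    # function of the opacity: the bijectivity assumption quoted above.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if surviving_flux(mid) > measured_flux:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

opacity = invert_opacity(0.25)   # opacity (m.w.e.) for a toy measured flux
mean_density = opacity / 500.0   # divided by the path length (m) known from
print(opacity, mean_density)     # the topography: mean density along the ray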
APA, Harvard, Vancouver, ISO, and other styles
30

Whitmarsh, Tristan. "3D reconstruction of the proximal femur and lumbar vertebrae from dual-energy x-ray absorptiometry for osteoporotic risk assessment." Doctoral thesis, Universitat Pompeu Fabra, 2012. http://hdl.handle.net/10803/94492.

Full text
Abstract:
In this thesis a method was developed to reconstruct both the 3D shape and the BMD distribution of bone structures from Dual-energy X-ray Absorptiometry (DXA) images. The method incorporates a statistical model built from a large dataset of Quantitative Computed Tomography (QCT) scans together with a 3D-2D intensity based registration process. The method was evaluated for its ability to reconstruct the proximal femur from a single DXA image. The resulting parameters of the reconstructions were subsequently evaluated for their hip fracture discrimination ability. The reconstruction method was finally extended to the reconstruction of the lumbar vertebrae from anteroposterior and lateral DXA, thereby incorporating a multi-object and multi-view approach. These techniques can potentially improve the fracture risk estimation accuracy over current clinical practice.
En esta tesis se desarrolló un método para reconstruir tanto la forma 3D de estructuras óseas como la distribución de la DMO a partir de una sola imagen de DXA. El método incorpora un modelo estadístico construido a partir de una gran base de datos de QCT junto con una técnica de registro 3D-2D basada en intensidades. Se ha evaluado la capacidad del método para reconstruir la parte proximal del fémur a partir de una imagen DXA. Los parámetros resultantes de las reconstrucciones fueron evaluados posteriormente por su capacidad en discriminar una fractura de cadera. Por fin, se extendió el método a la reconstrucción de las vértebras lumbares a partir de DXA anteroposterior y lateral incorporando así un enfoque multi-objeto y multi-vista. Estas técnicas pueden potencialmente mejorar la precisión en la estimación del riesgo de fractura respecto a la estimación que ofrece la práctica clínica actual.
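The 3D-2D intensity-based registration at the core of this method can be illustrated with a deliberately small sketch: a statistical model (a mean volume plus principal modes of variation, synthetic here rather than learned from QCT) is deformed until its simulated projection matches the measured image. All names, dimensions and values are invented for illustration; the real pipeline uses a far richer projection model, similarity measure and optimizer.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical statistical model: mean volume plus two principal modes.
mean_vol = rng.random((16, 16, 16))
modes = 0.05 * rng.standard_normal((2, 16, 16, 16))

def project(volume):
    # Crude DXA-like projection: integrate the volume along one axis.
    return volume.sum(axis=0)

# A "measured" DXA image generated from known coefficients (ground truth).
true_c = np.array([1.2, -0.7])
target = project(mean_vol + np.tensordot(true_c, modes, axes=1))

def cost(c):
    # Sum of squared differences between simulated and measured projections.
    vol = mean_vol + np.tensordot(c, modes, axes=1)
    return np.sum((project(vol) - target) ** 2)

fit = minimize(cost, x0=np.zeros(2), method="Nelder-Mead")
print(fit.x)  # approaches true_c when the modes remain distinguishable in projection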
APA, Harvard, Vancouver, ISO, and other styles
31

Valentine, Helen Elizabeth Mary. "Reconstructing cosmological density and velocity fields." Thesis, University of Edinburgh, 2001. http://hdl.handle.net/1842/27570.

Full text
Abstract:
I present a new quasi-linear method for reconstructing cosmological density and velocity fields from all-sky redshift surveys. The method is used to reconstruct the velocity field, dipole, bulk flows and distortion parameter β = Ω^0.6/b from the PSCz survey. Analytic expressions for the cosmic variance and shot noise uncertainties on the reconstructed velocity field are presented. It is found that the uncertainties are reduced if the reconstruction is carried out in the Local Group frame. The uncertainty on the dipole is also found. A generalised version of the Path Interchange Zel'dovich Approximation (PIZA) is presented. PIZA is a simple Lagrangian reconstruction method based on the Zel'dovich Approximation and the Least Action Principle, which reconstructs cosmological fields given the present-day real-space positions of galaxies. The generalisations take account of redshift-space distortions, incomplete sky coverage, and the selection function. The method can be used to estimate β from radial velocities, bulk flows and the dipole. Generalised PIZA has been tested using a set of PSCz-like simulations. The reconstructed radial peculiar velocity field is compared with that of the simulation and with that reconstructed by linear theory. The generalised PIZA is applied to the IRAS PSCz Survey. The dipole, bulk velocities and peculiar velocity field, and the derived value of β are presented. The Local Group is found to have an average displacement of 1225 km s⁻¹ in the direction of (l, b) = (264°, 42°). From this it is found that β = 0.512 ± 0.141.
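The quoted numbers are mutually consistent under the standard assumption (external to this abstract) that the Local Group moves at about 627 km/s with respect to the CMB; in the Zel'dovich picture the peculiar velocity is β times the displacement expressed in velocity units:

v_lg_cmb = 627.0       # km/s, assumed CMB dipole value, not from the thesis
displacement = 1225.0  # km/s, PIZA-reconstructed Local Group displacement
print(v_lg_cmb / displacement)  # ~0.512, matching beta = 0.512 +/- 0.141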
APA, Harvard, Vancouver, ISO, and other styles
32

Alpuche, Aviles Jorge Edmundo. "The Development and Validation of a First Generation X-Ray Scatter Computed Tomography Algorithm for the Reconstruction of Electron Density Breast Images Using Monte Carlo Simulation." Journal of X-ray Science and Technology, IOS Press, 2011. http://hdl.handle.net/1993/4970.

Full text
Abstract:
Breast CT is a promising modality whose inherent scatter could be used to reconstruct electron density (ρ_e) images. This has led us to investigate the benefits of reconstructing linear attenuation coefficient (μ) and ρ_e images of the breast. First-generation CT provides a cost-effective and simple approach to reconstructing ρ_e images in a laboratory, but is limited by the anisotropic probability of scatter, attenuation, noise and contaminating scatter (coherent and multiple scatter). These issues were investigated using Monte Carlo (MC) simulations of a first-generation breast scatter-enhanced CT (B-SECT) system. A reconstruction algorithm was developed for the B-SECT system; it is based on a ring of detectors, which eliminates the dependence of the scatter on the relative position of the scattering centre. The algorithm incorporates an attenuation correction based on the μ image and was tested against analytical and MC simulations. MC simulations were also used to quantify the dose per scan. The ring measures a fraction of the total single incoherent scatter which is proportional to ray integrals of ρ_e and can be quantified even when electron binding is non-negligible. The algorithm typically reconstructs accurate ρ_e images using a single correction for attenuation, but is capable of multiple iterations if required. MC simulations show that the dose coefficients are similar to those of cone-beam breast CT. Coherent and multiple scatter cannot be directly related to ρ_e; they lead to capping artifacts and to ρ_e overestimated by a factor greater than 2. This issue can be addressed using empirical corrections based on the radiological path of the incident beam, resulting in ρ_e images of breast soft tissue with 1% accuracy and 3% precision at a mean glandular dose of 4 mGy for a 3D scan. The reconstructed ρ_e image was more accurate than the ρ_e estimate derived from the μ image. An alternative correction, based on the thickness of breast traversed by the beam, provides an enhanced-contrast image reflecting the scatter properties of the breast. These results demonstrate the feasibility of detecting small ρ_e changes in the intact breast and show that further experimental evaluation of this technique is warranted.
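The scatter-to-density inversion can be sketched with a toy first-order forward model: the single-scatter signal for a ray is taken proportional to the ray integral of ρ_e times an attenuation factor computed from the μ image, and the reconstruction divides that factor out. The yield constant and phantom values below are hypothetical, not those of the B-SECT system.

import numpy as np

# Toy phantom: electron density rho_e and a crudely proportional mu map.
rho_e = np.ones((64, 64))
rho_e[20:40, 20:40] = 1.1
mu = 0.02 * rho_e

def ray_integral(image, row):
    # One horizontal ray of a first-generation (translate-rotate) geometry.
    return image[row, :].sum()

k = 1e-3  # hypothetical scatter yield per unit integrated rho_e

def measured_scatter(row):
    # First-order forward model: single incoherent scatter proportional to the
    # ray integral of rho_e, attenuated along the incident path.
    return k * ray_integral(rho_e, row) * np.exp(-ray_integral(mu, row))

def rho_e_ray_integral_from_scatter(row):
    # Attenuation correction based on the mu image, then division by the yield.
    return measured_scatter(row) / (k * np.exp(-ray_integral(mu, row)))

print(rho_e_ray_integral_from_scatter(30), ray_integral(rho_e, 30))  # equal by construction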
APA, Harvard, Vancouver, ISO, and other styles
33

Philippe, Morgane. "Reconstruction of the density profile, surface mass balance history and vertical strain profile on the divide of the Derwael Ice Rise in coastal Dronning Maud Land, East Antarctica." Doctoral thesis, Universite Libre de Bruxelles, 2017. http://hdl.handle.net/2013/ULB-DIPOT:oai:dipot.ulb.ac.be:2013/254506.

Full text
Abstract:
Antarctic mass balance is mainly controlled by surface mass balance (SMB, i.e. the net effect of precipitation at the surface of the ice sheet) and by ice discharge at its margins, mostly through ice shelves. These floating ice bodies, made of ice flowing from the continent to the ocean, are buttressed by ice rises (elevations of the sea floor on which the ice shelf re-grounds) such as the Derwael Ice Rise (DIR) in Dronning Maud Land (DML). In addition to this role, important to consider in the future contribution of Antarctica to sea-level rise, ice rises are also "climate dipsticks" helping to reconstruct the climate of the past centuries to millennia at high resolution. Due to their coastal location, they witness changes more rapidly than inland sites. Furthermore, their internal stratigraphy forms arches that make it possible to assess their stability, to date their formation and therefore, in some cases, to constrain the past extension of the ice sheet on the scale of several millennia. As part of the IceCon project (Constraining ice mass changes in Antarctica), this thesis aimed to drill a 120 m ice core (named IC12, for the IceCon project, 2012) at the divide of the DIR and to perform physico-chemical analyses of its density and its internal annual layering, with the aim of reconstructing the SMB of the last two centuries. We also recorded a virtual image of the borehole using an optical televiewer (OPTV) to assess the ability of this instrument to reconstruct a density profile and to measure vertical strain rates when the logging is repeated in the same borehole after a sufficient period of time (here, 2 years). The results show a general increase in snow accumulation rates (SMB) of 30-40% during the 20th century, particularly marked during the last 20-50 years. SMB variability is governed to a large extent by atmospheric circulation and to a lesser extent by variations in sea-ice cover. The vertical velocity profile measured from repeat borehole OPTV was applied to refine the SMB correction, and the results fall within the error range of the corrections made using a model previously developed to study the DIR's stability. This thesis also contributed to characterizing the spatial variability of SMB across the DIR by dating internal reflection horizons (IRHs), former surfaces of the DIR buried under subsequent snow layers and detected using radio-echo sounding, and by measuring the density profile of IC12. SMB is found to be 2.5 times higher on the upwind slope than on the downwind slope due to the orographic effect. This pattern is regularly observed on ice rises in DML and stresses the importance of adopting a sufficient spatial resolution (5 km) in climate models. Finally, the technical developments allowing a density profile to be rapidly reconstructed from the OPTV image of a borehole contributed to improving our knowledge of two features of Antarctic ice shelves: melt ponds, which influence surface mass balance, and subglacial channels, which influence basal mass balance. Specifically, the results show that density is 5% higher in surface trenches associated with subglacial channels, and that ice below melt ponds can reach the density of bubble-free ice due to melting and refreezing processes, with implications for ice-shelf viscosity.
Doctorat en Sciences
info:eu-repo/semantics/nonPublished
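The repeat-logging idea is simple enough to state as arithmetic: if the same pair of layers is located in both OPTV logs, the relative thinning of the interval per unit time gives the vertical strain rate. The depths below are invented for illustration; only the 2-year logging interval follows the abstract.

# Matched layers in the two OPTV logs (depths in metres, invented):
z_top_1, z_bot_1 = 50.0, 60.0   # first log
z_top_2, z_bot_2 = 50.4, 60.3   # same layers two years later
dt_years = 2.0

thickness_1 = z_bot_1 - z_top_1
thickness_2 = z_bot_2 - z_top_2
vertical_strain_rate = (thickness_2 - thickness_1) / (thickness_1 * dt_years)
print(vertical_strain_rate)  # negative: the interval thins under vertical compression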
APA, Harvard, Vancouver, ISO, and other styles
34

Philippe, Morgane. "Reconstruction of the density profile, surface mass balance history and vertical strain profile on the divide of the Derwael Ice Rise in coastal Dronning Maud Land, East Antarctica." Doctoral thesis, Universite Libre de Bruxelles, 2017. https://dipot.ulb.ac.be/dspace/bitstream/2013/254506/7/Appendix.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
35

Li, Lingwei. "Understanding Antarctic Circumpolar Current Transport at the LGM Using an Isotope-enabled Ocean Model." The Ohio State University, 2019. http://rave.ohiolink.edu/etdc/view?acc_num=osu1555594394056462.

Full text
APA, Harvard, Vancouver, ISO, and other styles
36

Seghier, Abdellatif. "Matrices de Toeplitz dans le cas d-dimensionnel : développement asymptotique à l'ordre d. Extension de fonctions de type positif dans le cas d-dimensionnel et maximum d'entropie : application à la reconstruction de densités." Paris 11, 1988. http://www.theses.fr/1988PA112038.

Full text
Abstract:
I. Les deux premiers articles traitent de la prédiction d'un processus stationnaire du second ordre. La prédiction s'effectue relativement à une information provenant d'une partie du passé. L'aspect le plus important de ce travail est l'introduction d'opérateurs (de Toeplitz et de Hankel) qui permettent d'appliquer des techniques de géométrie hilbertienne. II. Les trois articles qui suivent reprennent un problème de Szegö lié à la prédiction linéaire de processus dépendant d'un paramètre discret. Nous considérons le problème dans le cas d-dimensionnel. Nous donnons un développement asymptotique de la trace de l'inverse de la matrice de Toeplitz correspondante jusqu'à l'ordre d. Les coefficients du développement dépendent alors du symbole (densité spectrale) et de mesures positives ayant pour support, selon l'ordre, le domaine de troncature de l'opérateur de Toeplitz (volume), les faces d'ordre d-1 (arêtes) et les sommets. III. Les deux derniers articles sont consacrés à la reconstruction de densités de probabilité et de densités spectrales à l'aide d'extensions de fonctions de type positif et du principe du maximum d'entropie. Ce problème provient de la cristallographie, dont l'un des objectifs est la reconstruction de la densité électronique de molécules. Nous montrons, dans le cas d'informations partielles (nombre fini de coefficients de Fourier) et de phases connues, que la reconstruction est possible dans le cas multidimensionnel (implantable dans les ordinateurs)
In the first two papers we are concerned with the prediction of a second-order stationary process, where the information depends on a part of the past. The main aspect of these papers is the use of Hilbertian techniques based on Toeplitz and Hankel operators. In the following three papers, we deal with an old problem of Szegö on the expansion of the determinant of a Toeplitz matrix. We give, in the multidimensional case, a more precise expansion of the trace of the inverse, up to order d. Moreover, the new coefficients which appear are strongly related to geometrical invariants of the domain on which the Toeplitz operators are truncated. In the last two papers, new results about the reconstruction of spectral densities in the multidimensional case are given. The methods are based on extensions of positive definite functions and on the maximum entropy principle. This work is motivated by the problem of the determination of the phases of the electron density function in crystal analysis. Nevertheless, there is still a great amount of work to be done in order to solve this problem
APA, Harvard, Vancouver, ISO, and other styles
37

O'Donnell, Alison J., Kathryn J. Allen, Robert M. Evans, Edward R. Cook, and Valerie Trouet. "Wood density provides new opportunities for reconstructing past temperature variability from southeastern Australian trees." Elsevier B.V, 2016. http://hdl.handle.net/10150/621340.

Full text
Abstract:
Tree-ring based climate reconstructions have been critical for understanding past variability and recent trends in climate worldwide, but they are scarce in Australia. This is particularly the case for temperature: only one tree-ring width based temperature reconstruction – based on Huon Pine trees from Mt Read, Tasmania – exists for Australia. Here, we investigate whether additional tree-ring parameters derived from Athrotaxis cupressoides trees growing in the same region have potential to provide robust proxy records of past temperature variability. We measured wood properties, including tree-ring width (TRW), mean density, mean cell wall thickness (CWT), and tracheid radial diameter (TRD) of annual growth rings in Athrotaxis cupressoides, a long-lived, high-elevation conifer in central Tasmania, Australia. Mean density and CWT were strongly and negatively correlated with summer temperatures. In contrast, the summer temperature signal in TRW was weakly positive. The strongest climate signal in any of the tree-ring parameters was maximum temperature in January (mid-summer; JanTmax) and we chose this as the target climate variable for reconstruction. The model that explained most of the variance in JanTmax was based on TRW and mean density as predictors. TRW and mean density provided complementary proxies with mean density showing greater high-frequency (inter-annual to multi-year) variability and TRW showing more low-frequency (decadal to centennial-scale) variability. The final reconstruction model is robust, explaining 55% of the variance in JanTmax, and was used to reconstruct JanTmax for the last five centuries (1530–2010 C.E.). The reconstruction suggests that the most recent 60 years have been warmer than average in the context of the last ca. 500 years. This unusually warm period is likely linked to a coincident increase in the intensity of the subtropical ridge and dominance of the positive phase of the Southern Annular Mode in summer, which weaken the influence of the band of prevailing westerly winds and storms on Tasmanian climate. Our findings indicate that wood properties, such as mean density, are likely to provide significant contributions toward the development of robust climate reconstructions in the Southern Hemisphere and thus toward an improved understanding of past climate in Australasia.
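The transfer model described here is, in essence, a multiple linear regression of JanTmax on ring width and mean density. A sketch with synthetic data (all coefficients invented; only the signs follow the abstract) shows the calibration and the explained-variance check:

import numpy as np

rng = np.random.default_rng(1)
n = 200  # years of overlap with instrumental temperatures (synthetic)

trw = rng.standard_normal(n)       # tree-ring width index
density = rng.standard_normal(n)   # mean wood density index

# Synthetic target: density negatively, ring width weakly positively, related to JanTmax.
jan_tmax = 0.2 * trw - 0.7 * density + 0.5 * rng.standard_normal(n)

# Calibration: ordinary least squares with both predictors.
X = np.column_stack([np.ones(n), trw, density])
coef, *_ = np.linalg.lstsq(X, jan_tmax, rcond=None)

pred = X @ coef
r2 = 1 - ((jan_tmax - pred) ** 2).sum() / ((jan_tmax - jan_tmax.mean()) ** 2).sum()
print(coef, r2)  # the study's model of this form explained 55% of the variance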
APA, Harvard, Vancouver, ISO, and other styles
38

Coulangeon, Renaud. "Réseaux quaternioniens et problèmes de densité." Bordeaux 1, 1994. http://www.theses.fr/1994BOR10629.

Full text
Abstract:
In a first chapter, a family of quaternionic lattices is studied by means of an invariant (the Venkov invariant, or "nachbachdefekt") related to coding theory. Chapter 2 then studies a classification problem for quaternionic unimodular lattices; the essential tool is a notion of neighbourhood à la Kneser adapted to this context. The third chapter is devoted to the study of density functions for Euclidean lattices, introduced by Rankin, which generalize the classical Hermite invariant. In particular, a simple characterization is established, in terms of k-perfection and k-eutaxy, of the lattices achieving a local maximum of these invariants. The techniques used involve the exterior algebra of a lattice
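For reference, the classical Hermite invariant that these density functions generalize can be written, for a lattice L of rank n with covolume vol(ℝⁿ/L), as

\gamma(L) \;=\; \min_{x \in L \setminus \{0\}} \frac{\|x\|^{2}}{\operatorname{vol}(\mathbb{R}^{n}/L)^{2/n}}

Rankin's invariants replace the shortest nonzero vector by the smallest, suitably normalized, covolume of a rank-k sublattice; the precise normalization varies by author and is not fixed by this abstract.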
APA, Harvard, Vancouver, ISO, and other styles
39

Bordes, Guillaume. "Sommes d'ensembles de petite densité supérieure." Bordeaux 1, 2005. http://www.theses.fr/2005BOR13091.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Férin, Guillaume. "Optimisation de Réseaux Ultrasonores Haute Densité." Tours, 2006. http://www.theses.fr/2006TOUR3313.

Full text
APA, Harvard, Vancouver, ISO, and other styles
41

Vural, Kivilcim Basak. "Adsorption Of Gold Atoms On Anatase TiO2 (100)-1x1 Surface." Master's thesis, METU, 2009. http://etd.lib.metu.edu.tr/upload/3/12610962/index.pdf.

Full text
Abstract:
In this work the electronic and structural properties of the anatase TiO2 (100) surface, and of gold adsorption on it, have been investigated using first-principles calculations based on density functional theory (DFT). TiO2 is a wide band-gap material and as such finds numerous applications in technology, such as water cleaning, self-cleaning coatings, solar cells and so on. First, the relation between the surface energy of the anatase (100)-1x1 phase and the number of TiO2 layers is examined. After an appropriate number of atomic layers has been chosen according to the convergence of the TiO2 slab, the adsorption behavior of a single Au atom in different configurations is investigated, both on the bare surface and on a surface already supporting one or more Au atoms. It has been observed that a single Au atom tends to adsorb on a surface that already carries an Au atom or atoms. Although a high metal concentration on the surface increases the strength of the adsorption, the system acquires a metallic character, which is believed to cause problems in applications. In addition, gold clusters, the dimer (Au2) and the trimer (Au3), have been adsorbed on the surface and their behavior is investigated. It is observed that the interaction between the Au atoms within a cluster is stronger than that between the cluster and the surface.
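The adsorption strengths compared in such a study come from the usual DFT total-energy bookkeeping; the energies below are placeholders, not values from the thesis.

# Total energies in eV (placeholders):
e_slab_plus_au = -1234.56  # relaxed slab with one adsorbed Au atom
e_slab = -1230.10          # clean relaxed slab
e_au_atom = -0.28          # isolated Au atom in the same supercell

e_ads = e_slab_plus_au - (e_slab + e_au_atom)
print(e_ads)  # negative means adsorption is energetically favorable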
APA, Harvard, Vancouver, ISO, and other styles
42

Ratel, Sébastien. "Densité, VC-dimension et étiquetages de graphes." Electronic Thesis or Diss., Aix-Marseille, 2019. http://www.theses.fr/2019AIXM0427.

Full text
Abstract:
Une partie des résultats de cette thèse sont initialement motivés par l'élaboration de schémas d'étiquetage permettant de répondre à l'adjacence, à la distance ou au routage. Ce document traite cependant de problèmes d'intérêt plus généraux tels que l'étude de bornes sur la densité de graphes, de la VC-dimension de familles d'ensembles, ou de propriétés métriques et structurelles. Nous établissons dans un premier temps des bornes supérieures sur la densité des sous-graphes de produits cartésiens de graphes, puis des sous-graphes de demi-cubes. Pour ce faire, nous définissons des extensions du paramètre classique de VC-dimension. De ces bornes sur la densité, nous déduisons des bornes supérieures sur la longueur des étiquettes attribuées par un schéma d'adjacence à ces deux familles de graphes. Dans un second temps, nous nous intéressons à des schémas de distance et de routage pour deux familles importantes de la théorie métrique des graphes : les graphes médians et les graphes pontés. Nous montrons que la famille des graphes médians, sans cube, avec n sommets, admet des schémas de distance et de routage utilisant des étiquettes de O(log^3 n) bits. Ces étiquettes sont décodées en temps constant pour calculer, respectivement, la distance exacte entre deux sommets, ou le port vers un sommet rapprochant une source d'une destination. Nous décrivons ensuite un schéma de distances 4-approchées pour la famille des graphes pontés, sans K_4, avec n sommets, utilisant des étiquettes de O(log^3 n) bits. Ces dernières peuvent être décodées en temps constant pour obtenir une valeur entre la distance exacte et quatre fois celle-ci
Constructing labeling schemes supporting adjacency, distance or routing queries was the initial motivation of most of the results of this document. However, this manuscript concerns problems of more general interest, such as bounding the density of graphs, studying the VC-dimension of set families, or investigating metric and structural properties of graphs. As a first contribution, we upper bound the density of the subgraphs of Cartesian products of graphs, and of the subgraphs of halved cubes. To do so, we extend the classical notion of VC-dimension (already used in 1994 by Haussler, Littlestone, and Warmuth to upper bound the density of the subgraphs of hypercubes). From our results, we deduce upper bounds on the size of the labels used by an adjacency labeling scheme on these graph classes. We then investigate distance and routing labeling schemes for two important families of metric graph theory: median graphs and bridged graphs. We first show that the class of cube-free median graphs on n vertices enjoys distance and routing labeling schemes, both using labels of O(log^3 n) bits. These labels can be decoded in constant time to return, respectively, the exact distance between two vertices, or a port to take from a source vertex in order to get (strictly) closer to a target one. We then describe an approximate distance labeling scheme for the family of K_4-free bridged graphs on n vertices. This scheme also uses labels of O(log^3 n) bits, which can be decoded in constant time to return a value between the exact distance between two vertices and four times that distance
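The VC-dimension that drives these density bounds can be computed by brute force for small families, which makes the definition concrete: a set S is shattered when every subset of S is the trace of some member of the family. The example family below is illustrative only.

from itertools import combinations

def vc_dimension(universe, family):
    # Brute-force VC-dimension of a finite set family: the largest size of a
    # subset S of the universe such that every subset of S is realized as the
    # intersection of S with some member of the family ("S is shattered").
    def shattered(S):
        return len({S & f for f in family}) == 2 ** len(S)
    best = 0
    for r in range(len(universe) + 1):
        if any(shattered(frozenset(S)) for S in combinations(universe, r)):
            best = r
    return best

# Illustrative family: closed neighbourhoods of the path a - b - c.
family = [frozenset("ab"), frozenset("abc"), frozenset("bc")]
print(vc_dimension("abc", family))  # 1: no pair of vertices is shattered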
APA, Harvard, Vancouver, ISO, and other styles
43

Fall, Fama. "Sur l'estimation de la densité des quantiles." Paris 6, 2005. http://www.theses.fr/2005PA066051.

Full text
APA, Harvard, Vancouver, ISO, and other styles
44

Sadeghkhani, Abdolnasser. "Estimation d'une densité prédictive avec information additionnelle." Thèse, Université de Sherbrooke, 2017. http://hdl.handle.net/11143/11238.

Full text
Abstract:
Dans le contexte de la théorie bayésienne et de théorie de la décision, l'estimation d'une densité prédictive d'une variable aléatoire occupe une place importante. Typiquement, dans un cadre paramétrique, il y a présence d’information additionnelle pouvant être interprétée sous forme d’une contrainte. Cette thèse porte sur des stratégies et des améliorations, tenant compte de l’information additionnelle, pour obtenir des densités prédictives efficaces et parfois plus performantes que d’autres données dans la littérature. Les résultats s’appliquent pour des modèles avec données gaussiennes avec ou sans une variance connue. Nous décrivons des densités prédictives bayésiennes pour les coûts Kullback-Leibler, Hellinger, Kullback-Leibler inversé, ainsi que pour des coûts du type α-divergence et établissons des liens avec les familles de lois de probabilité du type skew-normal. Nous obtenons des résultats de dominance faisant intervenir plusieurs techniques, dont l’expansion de la variance, les fonctions de coût duaux en estimation ponctuelle, l’estimation sous contraintes et l’estimation de Stein. Enfin, nous obtenons un résultat général pour l’estimation bayésienne d’un rapport de deux densités provenant de familles exponentielles.
In the context of Bayesian theory and decision theory, the estimation of a predictive density of a random variable represents an important and challenging problem. Typically, in a parametric framework, there usually exists some additional information that can be interpreted as constraints. This thesis deals with strategies and improvements that take the additional information into account, in order to obtain effective and sometimes better-performing predictive densities than others in the literature. The results apply to normal models with a known or unknown variance. We describe Bayesian predictive densities for Kullback-Leibler, Hellinger and reverse Kullback-Leibler losses as well as for α-divergence losses, and establish links with skew-normal densities. We obtain dominance results using several techniques, including expansion of variance, dual loss functions in point estimation, restricted parameter space estimation, and Stein estimation. Finally, we obtain a general result for the Bayesian estimator of a ratio of two exponential family densities.
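A classical benchmark behind such results (flat prior, known variance; not the constrained settings studied in the thesis) is that the Kullback-Leibler Bayes predictive density is normal with variance expanded to σ²(1 + 1/n), which is where the "expansion of variance" technique gets its name:

import math

def predictive_pdf(y, xbar, sigma2, n):
    # Benchmark predictive density for n normal observations with mean xbar:
    # normal with mean xbar and variance sigma2 * (1 + 1/n).
    var = sigma2 * (1.0 + 1.0 / n)
    return math.exp(-(y - xbar) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

print(predictive_pdf(0.5, xbar=0.0, sigma2=1.0, n=10))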
APA, Harvard, Vancouver, ISO, and other styles
45

Voufack, Ariste Bolivard. "Modélisation multi-technique de la densité électronique." Thesis, Université de Lorraine, 2018. http://www.theses.fr/2018LORR0168/document.

Full text
Abstract:
Il est désormais possible, en utilisant le modèle de densité électronique résolue en spin (CRM2), de combiner les diffractions des rayons X et des neutrons (polarisés) pour déterminer les distributions électroniques de charge et de spin de matériaux magnétiques cristallins. Cette méthode permet la mise en évidence des chemins d’interactions rendant compte de l’ordre magnétique. Le modèle résolu en spin a été appliqué aux complexes de coordination avec un métal de transition portant la majorité du moment magnétique, il a été ensuite utilisé pour étudier les radicaux purs organiques contenant des électrons non appariés délocalisés sur un groupement chimique et les matériaux inorganiques. Dans le radical Nit(SMe)Ph, la modélisation des densités de charge et de spin a permis, en accord avec les résultats antérieurs, de montrer que le spin est délocalisé sur le groupe O-N-C-N-O (fonction nitronyle nitroxyde). Elle a également permis de montrer l’implication des liaisons hydrogène dans les interactions magnétiques ferromagnétiques observées en dessous de 0.6K. Cette étude a mis en évidence une répartition dissymétrique de la population de spin sur les deux groupes N—O dont seuls les calculs CASSCF permettent de reproduire l’amplitude. Cette dissymétrie proviendrait d’une combinaison d’effets moléculaires et cristallins. Dans le radical p-O2NC6F4CNSSN de la famille des dithiadiazolyles, la modélisation par affinement joint montre que la majorité du spin est portée par le groupement –CNSSN en accord avec les travaux antérieurs. Grâce aux propriétés topologiques de la densité de charge, des interactions halogène, chalcogène et π ont été mises en évidence. Certaines de ces interactions favorisent des couplages magnétiques, notamment les contacts S…N2 entre molécules voisines pouvant contribuer à l’ordre ferromagnétique observé à très basse température (1.3K). Quant au matériau inorganique, YTiO3, les densités de charge en phases paramagnétique et ferromagnétique ont été déterminées ainsi que la densité de spin dans la phase ferromagnétique. Les résultats de cette étude montrent que les orbitales d les plus peuplées en électrons de l’atome de Ti sont dxz et dyz. L’ordre orbital présent dans ce matériau est observé à 100 et à 20 K, suggérant que l’ordre orbitalaire est lié à la distorsion des octaèdres. La fonction d’onde de l’électron non apparié est une combinaison linéaire de ces orbitales t2g
X-ray and neutron diffraction methods can be combined to determine simultaneously the electron charge and spin densities in crystals, based on the spin-resolved electron density model developed at CRM2. This method enables the study of the interaction paths leading to the observed ferromagnetic order. The first applications of this model were to coordination complexes, where the unpaired electron is mainly located on the transition metal; it was then generalized to organic radicals and to inorganic materials. In the radical Nit(SMe)Ph, the modeling of the experimental charge and spin densities showed a localization of the spin density on the O-N-C-N-O group (nitronyl nitroxide function), in agreement with previous works. It also evidenced the involvement of the hydrogen bonds in the magnetic interactions leading to the ferromagnetic transition at very low temperature (0.6 K). This study revealed a dissymmetrical spin population of the two N-O groups that only CASSCF-type calculations (not DFT) can reproduce in amplitude. This dissymmetry originates from both molecular and crystal effects. In the radical p-O2NC6F4CNSSN, belonging to the dithiadiazolyl family, the joint refinement showed that the majority of the spin is distributed on the -CNSSN group, in agreement with previous works. From the topological properties of the charge density, halogen, chalcogen and π interactions have been highlighted. The most important magnetic interactions are observed through the network formed by S...N2 contacts between neighboring molecules, leading to the ferromagnetic order below 1.3 K. Concerning the inorganic material, YTiO3, the charge densities in both the paramagnetic and ferromagnetic phases, as well as the spin density, were modelled. The results show that the most populated d orbitals of the Ti atom are dxz and dyz. The orbital ordering evidenced in this material is observed at 100 and 20 K, due to the orthorhombic distortion. The wave function of the unpaired electron is a linear combination of these particularly populated t2g orbitals
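The spin-resolved model rests on simple bookkeeping: given up- and down-spin densities on a grid, the charge density is their sum and the spin density their difference, and integrating the latter over a region gives that region's spin population (e.g. per N-O group). The arrays below are synthetic.

import numpy as np

rng = np.random.default_rng(2)
rho_up = rng.random((8, 8, 8))  # spin-up electron density on a grid (toy)
rho_down = 0.8 * rho_up         # spin-down density (toy)

charge_density = rho_up + rho_down
spin_density = rho_up - rho_down

# Summing the spin density over a region (times the voxel volume, omitted
# here) gives the region's spin population.
print(charge_density.sum(), spin_density.sum())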
APA, Harvard, Vancouver, ISO, and other styles
46

Häfner, Stephan Georg. "Mandibular reconstruction /." [S.l.] : [s.n.], 2009. http://opac.nebis.ch/cgi-bin/showAbstract.pl?sys=000281107.

Full text
APA, Harvard, Vancouver, ISO, and other styles
47

Riester, Markus. "Genealogy Reconstruction." Doctoral thesis, Universitätsbibliothek Leipzig, 2010. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-38656.

Full text
Abstract:
Genealogy reconstruction is widely used in biology when relationships among entities are studied. Phylogenies, or evolutionary trees, show the differences between species. They are of profound importance because they help to obtain a better understanding of evolutionary processes. Pedigrees, or family trees, on the other hand visualize the relatedness between individuals in a population. The reconstruction of pedigrees and the inference of parentage in general is now a cornerstone in molecular ecology. Applications include the direct inference of gene flow, estimation of the effective population size and parameters describing the population's mating behaviour such as rates of inbreeding. In the first part of this thesis, we construct genealogies of various types of cancer. Histopathological classification of human tumors relies in part on the degree of differentiation of the tumor sample. To date, there is no objective systematic method to categorize tumor subtypes by maturation. We introduce a novel algorithm to rank tumor subtypes according to the dissimilarity of their gene expression from that of stem cells and fully differentiated tissue, and thereby construct a phylogenetic tree of cancer. We validate our methodology with expression data of leukemia and liposarcoma subtypes and then apply it to a broader group of sarcomas and of breast cancer subtypes. This ranking of tumor subtypes resulting from the application of our methodology allows the identification of genes correlated with differentiation and may help to identify novel therapeutic targets. Our algorithm represents the first phylogeny-based tool to analyze the differentiation status of human tumors. In contrast to asexually reproducing cancer cell populations, pedigrees of sexually reproducing populations cannot be represented by phylogenetic trees. Pedigrees are directed acyclic graphs (DAGs) and therefore more closely resemble phylogenetic networks, where reticulate events are indicated by vertices with two incoming arcs. We present a software package for pedigree reconstruction in natural populations using co-dominant genomic markers such as microsatellites and single nucleotide polymorphisms (SNPs) in the second part of the thesis. If available, the algorithm makes use of prior information such as known relationships (sub-pedigrees) or the age and sex of individuals. Statistical confidence is estimated by Markov chain Monte Carlo (MCMC) sampling. The accuracy of the algorithm is demonstrated for simulated data as well as an empirical data set with known pedigree. The parentage inference is robust even in the presence of genotyping errors. We further demonstrate the accuracy of the algorithm on simulated clonal populations. We show that the joint estimation of parameters of interest such as the rate of self-fertilization or clonality is possible with high accuracy even with marker panels of moderate power. Classical methods can only assign a very limited number of statistically significant parentages in this case and would therefore fail. The method is implemented in a fast and easy-to-use open-source software that scales to large datasets with many thousands of individuals.
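At the core of parentage inference is a Mendelian compatibility constraint, which likelihood-based methods such as the one described above relax to tolerate genotyping error. In its hard form, for one co-dominant locus, it is just the following check (toy genotypes, not the package's API):

def mendel_ok(child, mother, father):
    # One co-dominant locus, genotypes as allele pairs: the child must be able
    # to inherit one allele from each putative parent.
    a, b = child
    return (a in mother and b in father) or (b in mother and a in father)

# Microsatellite alleles coded by fragment length (toy data):
print(mendel_ok((101, 105), (101, 103), (105, 105)))  # True
print(mendel_ok((101, 105), (103, 103), (105, 105)))  # False: allele 101 unexplained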
APA, Harvard, Vancouver, ISO, and other styles
48

Bradley, Judah C. "Iraq Reconstruction." Thesis, Georgia Institute of Technology, 2006. http://hdl.handle.net/1853/11492.

Full text
Abstract:
The invasion planning, execution and ongoing reconstruction operations in Iraq are extremely complex. Using research, personal experience and the experience of deployed members, this paper documents the reconstruction events which led to the current situation in Iraq, discusses reconstruction lessons learned, and offers alternative approaches which may decrease time and budget requirements for future reconstruction operations.
APA, Harvard, Vancouver, ISO, and other styles
49

Wemyss, Michael. "Reconstruction algebras." Thesis, University of Bristol, 2008. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.445895.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Verrez, Gilles. "Reconstruction palpébrale." Toulouse 3, 1990. http://www.theses.fr/1990TOU31222.

Full text
APA, Harvard, Vancouver, ISO, and other styles