Dissertations / Theses on the topic 'Tomography'


Create a spot-on reference in APA, MLA, Chicago, Harvard, and other styles


Consult the top 50 dissertations / theses for your research on the topic 'Tomography.'

Next to every source in the list of references there is an 'Add to bibliography' button. Click it, and we will automatically generate a bibliographic reference for the chosen work in the citation style you need: APA, MLA, Harvard, Chicago, Vancouver, etc.

You can also download the full text of the academic publication as a PDF and read its abstract online whenever this information is available in the metadata.

Browse dissertations / theses in a wide variety of disciplines and organise your bibliography correctly.

1

Andrews, Thomasin Catharine. "Positron emission tomographic [tomography] studies in Huntington's disease." Thesis, University of London, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.271604.

Full text
APA, Harvard, Vancouver, ISO, and other styles
2

Trumbo, Matthew Lee Marks Robert J. Jean B. Randall. "A new modality for microwave tomographic imaging : transit time tomography /." Waco, Tex. : Baylor University, 2006. http://hdl.handle.net/2104/3905.

Full text
APA, Harvard, Vancouver, ISO, and other styles
3

Gavilanez, Franklin. "Network tomography." College Park, Md. : University of Maryland, 2006. http://hdl.handle.net/1903/3938.

Full text
Abstract:
Thesis (Ph. D.) -- University of Maryland, College Park, 2006.
Thesis research directed by: Mathematics. Title from t.p. of PDF. Includes bibliographical references. Published by UMI Dissertation Services, Ann Arbor, Mich. Also available in paper.
APA, Harvard, Vancouver, ISO, and other styles
4

Nugroho, Agung Tjahjo. "Microwave tomography." Thesis, University of Manchester, 2016. https://www.research.manchester.ac.uk/portal/en/theses/microwave-tomography(1000bea8-f286-42dc-9def-8aa09411160e).html.

Full text
Abstract:
This thesis reports on research carried out in the area of Microwave Tomography (MWT). The study aims to develop inversion algorithms that obtain cheap and stable solutions of MWT inverse scattering problems, which are mathematically formulated as nonlinear ill-posed problems. Two algorithms based on Newton's method are developed, namely the Inexact Newton Backtracking Method (INBM) and the Newton Iterative-Conjugate Gradient on Normal Equation (NI-CGNE) method. These algorithms apply implicit solutions of the Newton equations, which act as the regularized step size of the Newton iteration. The two developed methods were tested on numerical examples and on experimental data acquired with the MWT system of the University of Manchester. The numerical experiments were performed on samples with dielectric contrast objects containing different kinds of materials, including lossy materials. The quality of the methods is evaluated by comparing them with the Levenberg-Marquardt (LM) method. Under the natural assumption that INBM is a regularized method and CGNE is a semi-regularized method, the experimental results show that INBM and NI-CGNE improve on the speed, spatial resolution and image quality of direct regularization as implemented by the LM method. The experiments also show that the developed algorithms are more robust to the effects of noise and lossy materials than the LM algorithm.
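As a point of reference for the comparison above, the Levenberg-Marquardt baseline can be illustrated with a minimal damped Gauss-Newton loop. The sketch below is a generic Python illustration on a toy exponential-fitting problem; it is not the thesis's INBM or NI-CGNE algorithm, and the forward model, damping schedule and data are invented for the example.

import numpy as np

def levenberg_marquardt(residual, jacobian, x0, lam=1e-2, n_iter=25):
    # Damped Gauss-Newton: solve (J^T J + lam*I) step = -J^T r each iteration.
    x = x0.copy()
    for _ in range(n_iter):
        r, J = residual(x), jacobian(x)
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.linalg.norm(residual(x + step)) < np.linalg.norm(r):
            x, lam = x + step, lam * 0.7    # accept step, relax damping
        else:
            lam *= 2.0                      # reject step, increase damping
    return x

# Toy problem: recover (a, b) in y = a*exp(-b*t) from noisy samples.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 4.0, 40)
data = 2.5 * np.exp(-1.3 * t) + 0.01 * rng.standard_normal(t.size)
resid = lambda x: x[0] * np.exp(-x[1] * t) - data
jac = lambda x: np.column_stack([np.exp(-x[1] * t),
                                 -x[0] * t * np.exp(-x[1] * t)])
print(levenberg_marquardt(resid, jac, np.array([1.0, 1.0])))  # approx [2.5, 1.3]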
APA, Harvard, Vancouver, ISO, and other styles
5

Desai, Naeem. "Tensor tomography." Thesis, University of Manchester, 2018. https://www.research.manchester.ac.uk/portal/en/theses/tensor-tomography(9df5f611-ef5f-4b20-b8e4-7aa42de5a4e1).html.

Full text
Abstract:
Rich tomography is becoming increasingly popular since we have seen a substantial increase in computational power and storage. Instead of measuring one scalar for each ray, multiple measurements are needed per ray for various imaging modalities. This advancement has allowed the design of experiments and equipment which facilitate a broad spectrum of applications. We present new reconstruction results and methods for several imaging modalities including x-ray diffraction strain tomography, photoelastic tomography and Polarimetric Neutron Magnetic Field Tomography (PNMFT). We begin with a survey of the Radon and x-ray transforms, discussing several procedures for inversion. Furthermore we highlight the Singular Value Decomposition (SVD) of the Radon transform and consider some stability results for reconstruction in Sobolev spaces. We then move on to define the Non-Abelian Ray Transform (NART), Longitudinal Ray Transform (LRT), Transverse Ray Transform (TRT) and the Truncated Transverse Ray Transform (TTRT), where we highlight some results on the complete inversion procedure and the SVD, and mention stability results in Sobolev spaces. Thereafter we derive some relations between these transforms. Next we discuss the imaging modalities of interest and relate the transforms to their specific inverse problems, which are primarily linear. Specifically, NART arises in the formulation of PNMFT, where we want to image magnetic structures within magnetic materials with the use of polarized neutrons. After some initial numerical studies we extend the known Radon inversion presented by experimentalists, reconstructing fairly weak magnetic fields, to reconstruct PNMFT data up to phase wrapping. We can recover the strain field tomographically for a polycrystalline material using diffraction data and deduce that a certain moment of that data corresponds to the TRT. Quite naturally the whole strain tensor can be reconstructed from diffraction data measured using rotations about six axes. We develop an innovative explicit plane-by-plane filtered back-projection reconstruction algorithm for the TRT, using data from rotations about three orthogonal axes, and state the reasoning why two-axis data is insufficient. We give the first published results of TRT reconstruction. To complete our discussion we present photoelastic tomography, which relates to the TTRT, and implement the algorithm, discussing the difficulties that arise in reconstructing data. Ultimately we return to PNMFT, highlighting the nonlinear inverse problem due to phase wrapping. We propose an iterative reconstruction algorithm, namely the Modified Newton-Kantorovich (MNK) method, in which we keep the Jacobian (Fréchet derivative) fixed at the first step. However, this is shown to fail for large angles, which motivates the Newton-Kantorovich (NK) method, in which we update the Jacobian at each step of the iteration.
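As background to the Radon transform survey summarised above, a discrete Radon transform (sinogram) can be computed simply by rotating an image and summing along one axis. The short sketch below is a generic illustration using NumPy and SciPy; the phantom and the angle set are arbitrary choices and are not taken from the thesis.

import numpy as np
from scipy.ndimage import rotate

def radon(image, angles_deg):
    # For each angle, rotate the image and sum over rows to obtain one
    # projection (one row of the sinogram).
    return np.stack([rotate(image, angle, reshape=False, order=1).sum(axis=0)
                     for angle in angles_deg])

# Simple square phantom and a coarse set of projection angles.
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0
sinogram = radon(phantom, np.arange(0, 180, 10))
print(sinogram.shape)  # (18, 64): 18 projections, 64 detector bins each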
APA, Harvard, Vancouver, ISO, and other styles
6

Ku, Jason (Jason Stoutsenberger). "Origami tomography." Thesis, Massachusetts Institute of Technology, 2011. http://hdl.handle.net/1721.1/68948.

Full text
Abstract:
Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2011.
Cataloged from PDF version of thesis.
Includes bibliographical references (p. 65-66).
This thesis analyzes two dimensional tomographic imaging of surface objects with negligible volume, concentrating on piecewise linear surfaces similar to folded origami. In contrast to the large number of projections usually necessary in traditional tomographic imaging, information is extracted directly from a small number of Radon projections. Furthermore, piecewise linear chains are shown to be fully characterized from just two sampled Radon projections, assuming perfect sampling resolution of these projections.
by Jason Ku.
S.M.
APA, Harvard, Vancouver, ISO, and other styles
7

Nahvi, Manoochehr. "Wideband electrical impedance spectro-tomographic imaging." Thesis, University of Leeds, 2008. http://etheses.whiterose.ac.uk/11281/.

Full text
Abstract:
The underlying opportunity for this study is that process materials may show considerable change in their electrical properties in response to an injected signal over a wide frequency range. The use of this concept to demonstrate the construction of tomographic images for a range of frequency bands is described. These can then provide a deeper understanding and interpretation of a process under investigation. The thesis presents an in-depth review of the characteristics of the various wideband signals that could be used for simultaneous spectral measurements. This includes an objective selection process that demonstrates that a chirp signal form offers key advantages. It then addresses the details of the developed method and algorithms for WEIT systems that deploy a chirp wideband excitation signal, and a further aspect of the method, based on time-frequency analysis, in particular the wavelet transform, which is used to reveal spectral data sets. The method has been verified by simulation studies, which are described. To provide measurements over a required frequency range, a linear chirp is deployed as the excitation signal and the corresponding peripheral measurements are synthesised using a 2D model. The measurements are then analysed using a wavelet transform algorithm to reveal spectral datasets, which are exemplified in the thesis. The thesis then examines the feasibility of the presented method through various experimental trials; an overview of the implementation of the electronic system is included. This provides a single-channel EIT chirp excitation implementation, in essence simulating a real-time parallel data collection system, through the use of pseudo-static tests on foodstuff materials. The experimental data were then analysed and tomographic images reconstructed using the frequency-banded data. These results illustrate the promise of this composite approach in exploiting sensitivity to variations over a wide frequency range. They indicate that the described method can augment an EIT sensing procedure to support spectroscopic analysis of the process materials.
APA, Harvard, Vancouver, ISO, and other styles
8

Plissonneau, Louis. "Network tomography from an operator perspective." Thesis, Paris, ENST, 2012. http://www.theses.fr/2012ENST0033/document.

Full text
Abstract:
Network tomography is the study of a network's traffic characteristics using measurements. This subject has already been addressed by a whole community of researchers, especially to answer the need of ISPs to know what kind of residential Internet traffic they have to carry. One of the main aspects of the Internet is that it evolves very quickly, so that there is a never-ending need for Internet measurements. In this work, we address the issue of residential Internet measurement from two different perspectives: passive measurements and active measurements. In the first part of this thesis, we passively collect and analyse statistics of residential users' connections spanning a whole week. We use this data to update and deepen our knowledge of residential Internet traffic. We then use clustering methods to form groups of users according to the applications they use. This shows that the vast majority of customers now use the Internet mainly for Web browsing and watching streaming video. This data is also used to evaluate new opportunities for managing the traffic of a local ADSL platform. As the main part of the traffic is video streaming, we use multiple snapshots of packet captures of this traffic over a period of several years to accurately understand its evolution. Moreover, we analyse its performance, defined in terms of quality-of-service indicators, and correlate it with the behaviour of the users of this service. In the second part of this thesis, we take advantage of this knowledge to design a new tool for actively probing the quality of experience of video streaming sites. We model the playback of streaming videos so that we are able to assess their quality as perceived by the users. With this tool, we can understand the impact of video server selection and of DNS servers on the user's perception of video quality. Moreover, the ability to perform the experiments on different ISPs allows us to dig further into the delivery policies of video streaming sites.
APA, Harvard, Vancouver, ISO, and other styles
9

Ma, Lu. "Magnetic induction tomography for non-destructive evaluation and process tomography." Thesis, University of Bath, 2014. https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.642036.

Full text
Abstract:
Magnetic induction tomography (MIT) is an exciting yet challenging research topic. It is sensitive to all passive electromagnetic properties, and as such it has great appeal to many industries. This thesis presents an experimental investigation of MIT within two broad areas of application: non-destructive evaluation (NDE) and industrial process tomography. Within both areas, MIT is presented as a low cost and non-invasive inspection tool with considerable developmental potential with regard to commercial applicability. Experimental investigations into the use of MIT demonstrate its versatility in imaging conductive substances ranging from metallic structures, such as pipelines (σ ≈ 10^6–10^8 S/m), to new composite materials, such as carbon fibre reinforced polymers (σ ≈ 10^4–10^5 S/m), as well as substances in a state of flow (σ < 10 S/m). Research innovations presented in this thesis constitute (i) the first experimental evaluation of MIT for pipeline inspection, an application never before attempted in the area of NDE, (ii) the development of a novel limited-region algorithm, which can improve the traditional resolution from 10% to 2%, (iii) the first experimental 3D planar MIT study for subsurface imaging, which opens many opportunities for MIT as a limited-access tomography technique, (iv) an in-depth experimental evaluation of the MIT system response towards various fluid measurements for the first time, while also reporting some of the first flow rig tests in this field. In addition, for each specific application, the capabilities of the prototype MIT systems are assessed with regard to (v) their flexibility in accommodating different sensor geometries, including circular, dual planar, planar and arc, (vi) situations in which the imaging subject has limited access, and (vii) their capacity to reconstruct a viable image of the subject given limited measurement data. Altogether, the results provide an evidential basis for future exploitation of this technique. From the experimental investigations, it is concluded that the major limitations of this technique lie in both the hardware development needed to meet the standards of widespread commercial applications and the software capability for fully automated real-time image reconstruction and structural analysis of the imaging subject. Nevertheless, with consistent development in both aforementioned areas, MIT could eventually be used as a rapid NDE technique for structural health monitoring and process tomography, as such contributing both to the social economy and public safety.
APA, Harvard, Vancouver, ISO, and other styles
10

Van, de Sompel Dominique. "Limited view tomography." Thesis, University of Oxford, 2009. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.515008.

Full text
APA, Harvard, Vancouver, ISO, and other styles
11

Matarese, Joseph R. (Joseph Richard). "Nonlinear traveltime tomography." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/12665.

Full text
Abstract:
Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Earth, Atmospheric, and Planetary Sciences, 1993.
Includes bibliographical references (p. 249-254).
by Joseph R. Matarese.
Ph.D.
APA, Harvard, Vancouver, ISO, and other styles
12

Lo, Tien-when. "Seismic borehole tomography." Thesis, Massachusetts Institute of Technology, 1988. http://hdl.handle.net/1721.1/54325.

Full text
APA, Harvard, Vancouver, ISO, and other styles
13

Huang, David. "Optical coherence tomography." Thesis, Massachusetts Institute of Technology, 1993. http://hdl.handle.net/1721.1/12675.

Full text
APA, Harvard, Vancouver, ISO, and other styles
14

De, Villiers Mattieu. "Limited angle tomography." Doctoral thesis, University of Cape Town, 2004. http://hdl.handle.net/11427/5271.

Full text
Abstract:
Includes bibliographical references (p. 127-130).
This thesis investigates the limited angle tomography problem where axial reconstructions are produced from few measured projection views covering a 100° angular range. Conventional full angle tomography requires at least a 180° range of projection views of the patient at a fine angular spacing. Inference techniques presented in the literature, such as Bayesian methods, perform inadequately on the information-starved problem of interest.
APA, Harvard, Vancouver, ISO, and other styles
15

Muscat, Sarah. "Optical coherence tomography." Thesis, Connect to e-thesis, 2003. http://theses.gla.ac.uk/630/.

Full text
Abstract:
Thesis (Ph.D.) - University of Glasgow, 2003.
Ph.D. thesis submitted to the Department of Cardiovascular and Medical Sciences, Faculty of Medicine, University of Glasgow, 2003. Includes bibliographical references. Print version also available.
APA, Harvard, Vancouver, ISO, and other styles
16

Quiñones, Catherine Thérèse. "Proton computed tomography." Thesis, Lyon, 2016. http://www.theses.fr/2016LYSEI094/document.

Full text
Abstract:
The use of protons in cancer treatment has been widely recognized thanks to the precise stopping range of protons in matter. In proton therapy treatment planning, the uncertainty in determining the range mainly stems from the inaccuracy in the conversion of the Hounsfield units obtained from x-ray computed tomography into proton stopping power. Proton CT (pCT) has been an attractive solution because this modality directly reconstructs the relative stopping power (RSP) map of the object. The conventional pCT technique is based on measurements of the energy loss of protons to reconstruct the RSP map of the object. In addition to energy loss, protons also undergo multiple Coulomb scattering and nuclear interactions, which could reveal other interesting properties of the materials not visible in the RSP maps. This PhD work investigates proton interactions through Monte Carlo simulations in GATE and uses this information to reconstruct a map of the object through filtered back-projection along the most likely proton paths. Aside from the conventional energy-loss pCT, two pCT modalities have been investigated and implemented. The first one, called attenuation pCT, uses the attenuation of protons to reconstruct the linear inelastic nuclear cross-section map of the object. The second pCT modality, called scattering pCT, exploits proton scattering by measuring the angular variance to reconstruct the relative scattering power map, which is related to the radiation length of the material. The accuracy, precision and spatial resolution of the images reconstructed with the two pCT modalities were evaluated qualitatively and quantitatively and compared with the conventional energy-loss pCT. While energy-loss pCT already provides the information needed to calculate the proton range for treatment planning, attenuation pCT and scattering pCT give complementary information about the object. For one, scattering pCT and attenuation pCT images provide additional information intrinsic to the materials in the object. Moreover, in some of the studied cases, attenuation pCT images demonstrate a better spatial resolution and show features that would supplement energy-loss pCT reconstructions.
APA, Harvard, Vancouver, ISO, and other styles
17

Daly, Peter M. (Peter Michael). "Cramér-Rao bounds for matched field tomography and ocean acoustic tomography." Thesis, Massachusetts Institute of Technology, 1997. http://hdl.handle.net/1721.1/43920.

Full text
APA, Harvard, Vancouver, ISO, and other styles
18

Duhant, Alexandre. "Contrôle non destructif par reconstruction en tomographie térahertz." Thesis, Montpellier, 2019. http://www.theses.fr/2019MONTS006/document.

Full text
Abstract:
Tomography and its associated algorithms are now well known in the field of X-rays. However, all these tools are based on a modeling that differs from what could be envisaged in the field of terahertz (THz) waves. The state of the art contains models of the propagation of a THz wave within an object, but these models generate a THz wave that is either far from ground truth or of an algorithmic complexity too high to be used within a tomographic reconstruction in acceptable computing times. One objective of this thesis is therefore to obtain a propagation model of the THz wave that allows better modeling of the acquisition process and that can be computed in relatively short times. When measuring a projection of an object, absorption is not the only phenomenon responsible for the attenuation of the THz wave: refraction and reflection also attenuate the measured wave. During a THz tomographic reconstruction, if these phenomena are not taken into account, the algorithm attributes this attenuation to absorption, which results in reconstructed absorption coefficients that are far from their real values. Under the effect of these phenomena, the THz tomographic reconstruction problem is non-linear. This prevents the direct use of classical reconstruction methods, since these methods assume that the relationship between an object and its projections is linear.
APA, Harvard, Vancouver, ISO, and other styles
19

Song, Ningning. "Quantitative photoacoustic tomography for breast cancer screening." Thesis, Ecole centrale de Marseille, 2014. http://www.theses.fr/2014ECDM0005/document.

Full text
Abstract:
The present work was motivated by the development of alternative imaging techniques for the early diagnosis of breast cancer, namely photoacoustic imaging, which potentially couples the merits of optical imaging and ultrasound imaging: high optical functional contrast brought by optical probing and high spatial resolution brought by ultrasound detection. Our work aims at modeling the photoacoustic multiwave phenomenon and incorporating it into an efficient reconstruction algorithm to solve the inverse problem, which consists in recovering interior maps of physical properties of the breast. The forward model couples optical and acoustic propagation. The Finite Element Method (FEM) was chosen for solving the optical propagation equation, while a semi-analytical method based on Fourier transform calculations (the k-space method) was preferred for solving the acoustic propagation equation. For the inverse problem, a time-reversal method was adopted to reconstruct the initial pressure distribution. An active approach to the inverse problem was also developed, which decouples the optical properties from the measured photoacoustic pressure; this approach is called quantitative photoacoustic tomography (QPAT). Within this approach, the illumination/detection protocol was studied, taking the experimental set-up into consideration. In the last step, photoacoustic pressure measurements obtained from experiment and from simulation are studied and compared.
APA, Harvard, Vancouver, ISO, and other styles
20

Guggenheim, James A. "Multi-modal diffuse optical tomography and bioluminescence tomography system for preclinical imaging." Thesis, University of Birmingham, 2014. http://etheses.bham.ac.uk//id/eprint/5278/.

Full text
Abstract:
The development, characterisation and testing of a novel all-optical, multi-modal preclinical biomedical imaging system is presented. The system aims to provide a new way of accurately visualising the spatial distribution and activity of molecular structures and processes in small animals by combining 3D bioluminescence tomography (BLT; reconstruction-based 3D imaging of internal bioluminescent reporter distributions), diffuse optical tomography (DOT; reconstruction-based imaging of optical parameter distributions) and optical surface capture techniques. The key principle of the imaging system is to use surface capture results to enhance the accuracy of DOT image reconstruction, and to use the results of both surface capture and DOT to enhance the accuracy of BLT. Presented experiments show that the developed system can reconstruct luminescent source distributions and optical parameters accurately and that small animal imaging is feasible with the system.
APA, Harvard, Vancouver, ISO, and other styles
21

Joubert, Cécile. "Étude et calibration d’un hydrophone embarqué sur un flotteur dérivant - application à la sismologie." Thesis, Nice, 2015. http://www.theses.fr/2015NICE4022/document.

Full text
Abstract:
In this work, we propose a general study of hydrophones, focusing on their operation, based on the piezoelectric principle, the different elements that compose them, and the available hydrophones and patents. We model reception and emission sensitivity curves with the COMSOL software and compare them with hydrophones at our disposal, allowing us to qualitatively estimate the sensitivity. We propose a design for a potential broadband hydrophone, viable at large depths (> 6000 m). We test new methods of hydrophone calibration at low frequencies (< 2 Hz) and apply them to the hydrophone of the MERMAID floats. In the 'dynamic' method, the hydrophone response is studied with a brief pressure variation (1000 Pa, lasting less than 1 s), performed by a winch which vertically moves the hydrophone in the water. The 'static' method allows us to study the response of the full acquisition system. The hydrophone is placed in a calibration chamber in which a pressure variation is produced by an additional water column. We determine the poles and zeros applicable to the MERMAID acquisition chain. The correction of seismograms recorded by three MERMAID floats deployed in the Mediterranean Sea allows us to estimate the pressure variation produced by the Barcelonnette earthquake (April 7, 2014, Mw = 4.8) at around 400 Pa. We validate the data acquired by the MERMAIDs in a seismological study. We study six months of recordings from the three floats deployed in the Ligurian Basin, develop a preprocessing protocol for these data and validate it with a tomographic study.
APA, Harvard, Vancouver, ISO, and other styles
22

Sharp, Joanne. "Electron tomography of defects." Thesis, University of Cambridge, 2010. https://www.repository.cam.ac.uk/handle/1810/228638.

Full text
Abstract:
Tomography of crystal defects in the electron microscope was first attempted in 2005 by the author and colleagues. This thesis further develops the technique, using a variety of samples and methods. Use of a more optimised, commercial tomographic reconstruction program on the original GaN weak beam dark-field (WBDF) tilt series gave a finer reconstruction with lower background, line width 10-20 nm. Four WBDF tilt series were obtained of a microcrack surrounded by dislocations in a sample of indented silicon, tilt axes parallel to g = 220, 220, 400 and 040. Moiré fringes in the defect impaired alignment and reconstruction. The effect on reconstruction of moiré fringe motion with tilt was simulated, resulting in an array of rods, not a flat plane. Dislocations in a TiAl alloy were reconstructed from WBDF images with no thickness contours, giving an exceptionally clear reconstruction. The effect of misalignment of the tilt axis with the systematic row g(ng) was assessed by simulating tilt series with diffraction condition variation across the tilt range of Δn = 0, 1 and 2. Misalignment changed the inclination of the reconstructed dislocation with the foil surfaces, and elongated the reconstruction in the foil normal direction; this may explain elongation additional to the missing wedge effect in experiments. Tomography from annular dark-field (ADF) STEM dislocation images was also attempted. A tilt series was obtained from the GaN sample; the reconstructed dislocations had a core of bright intensity of comparable width to WBDF reconstructions, with a surrounding region of low intensity to 60 nm width. An ADF STEM reconstruction was obtained from the Si sample at the same microcrack as for WBDF; here automatic specimen drift correction in tomography acquisition software succeeded, a significant improvement. The microcrack surfaces in Si reconstructed as faint planes and dislocations were recovered as less fragmented lines than from the WBDF reconstruction. ADF STEM tomography was also carried out on the TiAl sample, using a detector inner angle (βin) that included the first-order Bragg spots (in other series βin had been 4-6θB). Extinctions occurred which were dependent on tilt; this produced only weak lines in the reconstruction. Bragg scattering in the ADF STEM image was estimated by summing simulated dark-field dislocation images from all Bragg beams at a zone axis; a double line was produced. It was hypothesised that choosing the inner detector angle to omit these first Bragg peaks may preclude most dynamical image features. Additional thermal diffuse scattering (TDS) intensity due to dilatation around an edge dislocation was estimated and found to be insignificant. The Huang scattering cross section was estimated and found to be 9 Å, ten times thinner than experimental ADF STEM dislocation images. The remaining intensity may be from changes to TDS from Bloch wave transitions at the dislocation; assessing this as a function of tilt is left for further work. On simple assessment, only three possible axial channeling orientations were found over the tilt range for GaN; if this is typical, dechanneling contrast probably does not apply to defect tomography.
APA, Harvard, Vancouver, ISO, and other styles
23

Westmore, Michael S. "Coherent-scatter computed tomography." Thesis, National Library of Canada = Bibliothèque nationale du Canada, 1998. http://www.collectionscanada.ca/obj/s4/f2/dsk2/ftp02/NQ31125.pdf.

Full text
APA, Harvard, Vancouver, ISO, and other styles
24

Perlo, Juan. "Single sided NMR tomography /." Aachen : Shaker, 2006. http://bvbr.bib-bvb.de:8991/F?func=service&doc_library=BVB01&doc_number=016031200&line_number=0001&func_code=DB_RECORDS&service_type=MEDIA.

Full text
APA, Harvard, Vancouver, ISO, and other styles
25

Kantartzis, Panagiotis. "Multilevel soft-field tomography." Thesis, City University London, 2011. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.537593.

Full text
APA, Harvard, Vancouver, ISO, and other styles
26

Enright, S. A. "Towards quantitative computed tomography." Thesis, University of Canterbury. Electrical and Electronic Engineering, 1992. http://hdl.handle.net/10092/6886.

Full text
Abstract:
Computed tomography is introduced along with an overview of its diverse applications in many scientific endeavours. A unified approach for the treatment of scattering from linear scalar wave motion is introduced. The assumptions under which wave motion within a medium can be characterised by concourses of rays are presented along with comment on the validity of these assumptions. Early and conventional theory applied for modelling the behaviour of rays, within media for which ray assumptions are valid, are reviewed. A new computerised method is described for reconstruction of a refractive index distribution from time-of-flight measurements of radiation/waves passing through the distribution and taken on a known boundary surrounding it. The reconstruction method, aimed at solving the bent-ray computed tomography (CT) problem, is based on a novel ray description which doesn't require the ray paths to be known. This allows the refractive index to be found by iterative solution of a set of linear equations, rather than through the computationally intensive procedure of ray tracing, which normally accompanies iterative solutions to problems of this type. The preliminary results show that this method is capable of handling appreciable spatial refractive index variations in large bodies. A review containing theory and techniques for image reconstruction from projections is presented, along with their historical development. The mathematical derivation of a recently developed reconstruction technique, the method of linograms is considered. An idea, termed the plethora of views idea, which aims to improve quantitative CT image reconstruction, is introduced. The theoretical foundation for this is the idea that when presented with a plethora of projections, by which is meant a number greater than that required to reconstruct the known region of support of an image, so that the permissible reconstruction region can be extended, then the intensity of the reconstructed distribution should be negligible throughout the extended region. Any reconstruction within the extended region, that departs from what would be termed negligible, is deduced to have been caused by imperfections of the projections. The implicit expectation of novel schemes which are presented for improving CT image reconstruction, is that contributions within the extended region can be utilised to ameliorate the effects of the imperfections on the reconstruction where the distribution is known to be contained. Preliminary experimental results are reported for an iterative algorithm proposed to correct a plethora of X-ray CT projection data containing imperfections. An extended definition is presented for the consistency of projections, termed spatial consistency, that incorporates the region with which the projection data is consistent. Using this definition and an associated definition, spatial inconsistency, an original technique is proposed and reported on for the recovery of inconsistencies that are contained in the projection data over a narrow range of angles.
APA, Harvard, Vancouver, ISO, and other styles
27

Smith, Joshua Reynolds. "Toward electric field tomography." Thesis, Massachusetts Institute of Technology, 1995. http://hdl.handle.net/1721.1/62334.

Full text
APA, Harvard, Vancouver, ISO, and other styles
28

Bayraktar, Ömer. "Quantum-Polarization State Tomography." Thesis, KTH, Tillämpad fysik, 2016. http://urn.kb.se/resolve?urn=urn:nbn:se:kth:diva-185797.

Full text
APA, Harvard, Vancouver, ISO, and other styles
29

Ding, Yijun. "Charged-Particle Emission Tomography." Diss., The University of Arizona, 2016. http://hdl.handle.net/10150/621830.

Full text
Abstract:
Conventional charged-particle imaging techniques--such as autoradiography--provide only two-dimensional (2D) images of thin tissue slices. To get volumetric information, images of multiple thin slices are stacked. This process is time-consuming and prone to distortions, as registration of 2D images is required. We propose a direct three-dimensional (3D) autoradiography technique, which we call charged-particle emission tomography (CPET). This 3D imaging technique enables imaging of thick sections, thus increasing laboratory throughput and eliminating distortions due to registration. In CPET, molecules or cells of interest are labeled so that they emit charged particles without significant alteration of their biological function. Therefore, by imaging the source of the charged particles, one can gain information about the distribution of the molecules or cells of interest. Two special cases of CPET include beta emission tomography (BET) and alpha emission tomography (𝛼ET), where the charged particles employed are fast electrons and alpha particles, respectively. A crucial component of CPET is the charged-particle detector. Conventional charged-particle detectors are sensitive only to the 2D positions of the detected particles. We propose a new detector concept, which we call the particle-processing detector (PPD). A PPD measures attributes of each detected particle, including location, direction of propagation, and/or the energy deposited in the detector. Reconstruction algorithms for CPET are developed, and reconstruction results from simulated data are presented for both BET and 𝛼ET. The results show that, in addition to position, direction and energy provide valuable information for 3D reconstruction in CPET. Several designs of particle-processing detectors are described. Experimental results for one detector are discussed. With appropriate detector design and careful data analysis, it is possible to measure the direction and energy, as well as the position, of each detected particle. The null functions of CPET with PPDs that measure different combinations of attributes are calculated through singular-value decomposition. In general, the more particle attributes are measured from each detection event, the smaller the null space of CPET is. In other words, the higher the dimension of the data space, the more information about an object can be recovered from CPET.
APA, Harvard, Vancouver, ISO, and other styles
30

Xu, Weiming. "Offset Optical Coherence Tomography." Miami University / OhioLINK, 2021. http://rave.ohiolink.edu/etdc/view?acc_num=miami1626870603439104.

Full text
APA, Harvard, Vancouver, ISO, and other styles
31

Nurhandoko, Bagus Endar Bachtiar. "Fresnel zone seismic tomography." Kyoto University, 2000. http://hdl.handle.net/2433/180954.

Full text
APA, Harvard, Vancouver, ISO, and other styles
32

Merker, James. "Micro-autoradiographic fusion tomography." [Tampa, Fla] : University of South Florida, 2008. http://purl.fcla.edu/usf/dc/et/SFE0002417.

Full text
APA, Harvard, Vancouver, ISO, and other styles
33

Nam, Haewon. "Ultrasound-modulated optical tomography." Texas A&M University, 2002. http://hdl.handle.net/1969/448.

Full text
APA, Harvard, Vancouver, ISO, and other styles
34

Malyarenko, Eugene V. "Lamb wave diffraction tomography." W&M ScholarWorks, 2000. https://scholarworks.wm.edu/etd/1539623991.

Full text
Abstract:
As the worldwide aviation fleet continues to age, methods for accurately predicting the presence of structural flaws, such as hidden corrosion and disbonds, that compromise airworthiness become increasingly necessary. Ultrasonic guided waves, Lamb waves, allow large sections of aircraft structures to be rapidly inspected. However, extracting quantitative information from Lamb wave data has always involved highly trained personnel with a detailed knowledge of mechanical waveguide physics. In addition, the human inspection process tends to be highly subjective, slow and prone to errors. The only practical alternative to the traditional inspection routine is a software expert system capable of interpreting data with minimum error and maximum speed and reliability. Such a system would use the laws of guided wave propagation and material parameters to help signal processing algorithms automatically extract information from digitized waveforms. This work discusses several practical approaches to building such an expert system. The next step in the inspection process is data interpretation, and imaging is the most natural way to represent two-dimensional structures. Unlike conventional ultrasonic C-scan imaging, which requires access to the whole inspected area, tomographic algorithms work with data collected over the perimeter of the sample. Combined with the ability of Lamb waves to travel over large distances, tomography becomes the method of choice for solving NDE problems. This work explores different tomographic reconstruction techniques to graphically represent the Lamb wave data in quantitative maps that can be easily interpreted by technicians. Because the velocity of Lamb waves depends on the thickness, the traveltimes of the fundamental modes can be converted into a thickness map of the inspected region. Lamb waves cannot penetrate through holes and other strongly scattering defects, and the assumption of straight wave paths, essential for many tomographic algorithms, fails. Diffraction tomography is a way to incorporate scattering effects into tomographic algorithms in order to improve image quality and resolution. This work describes the iterative reconstruction procedure developed for Lamb wave tomography, which allows for ray-bending correction for imaging of moderately scattering objects.
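The thickness-mapping step described above (traveltime to velocity to thickness) can be sketched as a simple lookup, assuming a monotonic velocity-thickness relation for the mode of interest. In the snippet below the dispersion table, path length and traveltime are invented placeholder values, not data or code from the dissertation.

import numpy as np

# Hypothetical group-velocity dispersion samples for a fundamental Lamb mode:
# plate thickness in mm versus group velocity in m/s (placeholder values).
thickness_mm = np.array([1.0, 1.5, 2.0, 2.5, 3.0])
group_vel_ms = np.array([5300.0, 5150.0, 4950.0, 4700.0, 4400.0])

def thickness_from_traveltime(path_length_m, traveltime_s):
    # Convert a measured traveltime into an average velocity, then invert the
    # (assumed monotonic) velocity-thickness relation by interpolation.
    v = path_length_m / traveltime_s
    # np.interp requires increasing x values, hence the reversed arrays.
    return np.interp(v, group_vel_ms[::-1], thickness_mm[::-1])

# Example: a 0.30 m propagation path crossed in 62 microseconds.
print(thickness_from_traveltime(0.30, 62e-6))  # ~2.2 mm for this toy table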
APA, Harvard, Vancouver, ISO, and other styles
35

Wils, Patricia. "Tomographie par rayons X : correction des artefacts liés à la chaîne d'acquisition." Phd thesis, INSA de Lyon, 2011. http://tel.archives-ouvertes.fr/tel-00708545.

Full text
Abstract:
Cone-beam computed tomography (CBCT) is a non-destructive testing methodology that produces volumetric images of an object. The acquisition system consists of an X-ray tube and a digital flat-panel detector. The research developed in this manuscript takes place in an industrial context. The object is placed on a rotation stage and a sequence of 2D images is acquired. A reconstruction algorithm provides volumetric data of the attenuation of the object. This information makes it possible to carry out a metrological study and to validate, or not, the conformity of the imaged part. The quality of the 3D image is degraded by various artefacts inherent to the acquisition platform. The objective of this thesis is to develop a correction method suited to an X-ray micro-tomography platform for manufactured multi-material objects. The first chapter describes the physical and algorithmic bases of X-ray CBCT imaging as well as the various artefacts that degrade the quality of the final image. The work presented here focuses on two types of artefact in particular: the secondary radiation originating from the object and the detector, and beam hardening. The second chapter presents a state of the art of methods aimed at correcting secondary radiation. In order to quantify the secondary radiation, a simulation tool based on hybrid Monte Carlo techniques is developed; it makes it possible to characterise the acquisition system installed in the laboratory in a realistic way. The third chapter details the implementation and validation of this tool. Since Monte Carlo calculations are particularly prohibitive in terms of computation time, optimisation and acceleration techniques are described. The behaviour of the detector is studied carefully, and it turns out that a 2D representation is sufficient to model the secondary radiation. The simulation model allows a faithful reproduction of the projections acquired with the real system. Finally, the last chapter presents the correction methodology that we propose. A first, noisy reconstruction of the imaged object is segmented in order to obtain a voxelised model of densities and materials. The simulation environment then provides the projections associated with this volume, and the volume is corrected iteratively. Correction results on experimental tomographic images are presented for a single-material object and a multi-material object. Our correction routine reduces cupping artefacts and improves the description of the reconstructed volume.
APA, Harvard, Vancouver, ISO, and other styles
36

Yang, Ting. "Seismic constraints on structure beneath hotspots : earthquake tomography & finite frequency tomography approaches /." View online ; access limited to URI, 2006. http://0-wwwlib.umi.com.helin.uri.edu/dissertations/dlnow/3232466.

Full text
APA, Harvard, Vancouver, ISO, and other styles
37

Ireland, Robert Henry. "Anatomically constrained image reconstruction applied to emission computed tomography & magnetic impedance tomography." Thesis, University of Sheffield, 2002. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.269337.

Full text
APA, Harvard, Vancouver, ISO, and other styles
38

Tong, Jenna Rose. "Towards multi-scale tomography : advances in electron tomography and allied 3D imaging methods." Thesis, University of Cambridge, 2010. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.608733.

Full text
APA, Harvard, Vancouver, ISO, and other styles
39

Soleimani, Manuchehr. "Image and shape reconstruction methods in magnetic induction tomography and electrical impedance tomography." Thesis, University of Manchester, 2005. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.512290.

Full text
APA, Harvard, Vancouver, ISO, and other styles
40

Ziemann, Astrid, Klaus Arnold, and Armin Raabe. "Acoustic tomography in the atmospheric surface layer." Universitätsbibliothek Leipzig, 2016. http://nbn-resolving.de/urn:nbn:de:bsz:15-qucosa-213411.

Full text
Abstract:
The presented method of acoustic tomography (Simultaneous Iterative Reconstruction Technique) and a special analysis algorithm can directly provide area-averaged values of meteorological quantities. As a result, rather consistent data will be delivered for the validation of numerical atmospheric micro-scale models. The procedure uses the horizontal propagation of sound waves in the atmospheric surface layer. To obtain a general overview of the sound propagation under various atmospheric conditions, a two-dimensional ray-tracing model is used. The state of the crossed atmosphere can be estimated from measurements of acoustic travel time between sources and receivers at different points in a tomographic array. Area-averaged values of the sound speed, and from these of the air temperature, are derived by inverting the travel-time values for all possible acoustic paths. The applied straight-ray two-dimensional tomographic model is thereby characterised as a method with small computational requirements and simple handling, especially during online operation.
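The straight-ray travel-time inversion described above can be illustrated with a small SIRT-style loop. The sketch below is a toy example: the array geometry, grid size, ray sampling and iteration count are invented for illustration and do not reproduce the authors' implementation; the temperature conversion uses the standard dry-air relation c ≈ 331.3·sqrt(T/273.15 K).

import numpy as np

# Straight-ray SIRT sketch on a small 2D grid (illustrative geometry).
nx = ny = 8                       # 8 x 8 pixels over a 100 m x 100 m array
h = 100.0 / nx                    # pixel size in metres

def path_lengths(src, rec, n_samples=400):
    # Approximate ray-pixel intersection lengths by dense sampling along the ray.
    L = np.zeros(nx * ny)
    pts = src + np.linspace(0.0, 1.0, n_samples)[:, None] * (rec - src)
    seg = np.linalg.norm(rec - src) / n_samples
    ix = np.clip((pts[:, 0] // h).astype(int), 0, nx - 1)
    iy = np.clip((pts[:, 1] // h).astype(int), 0, ny - 1)
    np.add.at(L, iy * nx + ix, seg)
    return L

# Sources on the left edge, receivers on the right edge (toy layout).
sources = [np.array([0.0, y]) for y in np.linspace(5.0, 95.0, 8)]
receivers = [np.array([100.0, y]) for y in np.linspace(5.0, 95.0, 8)]
A = np.array([path_lengths(s, r) for s in sources for r in receivers])  # m per pixel

# Synthetic "measured" travel times for a uniform field at c = 343 m/s (about 20 deg C).
t_obs = A @ np.full(nx * ny, 1.0 / 343.0)

# SIRT iterations: update the slowness field from travel-time residuals.
s = np.full(nx * ny, 1.0 / 330.0)                          # initial guess
col_sum, row_sum = A.sum(axis=0), A.sum(axis=1)
for _ in range(50):
    s += (A.T @ ((t_obs - A @ s) / row_sum)) / col_sum

c = 1.0 / s                                                # sound speed, m/s
T_celsius = 273.15 * (c / 331.3) ** 2 - 273.15             # Laplace relation, dry air
# Pixels with good ray coverage approach roughly 20 deg C; poorly covered corners lag.
print(T_celsius.reshape(ny, nx).round(1))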
APA, Harvard, Vancouver, ISO, and other styles
41

Abdallah, A. Ellabban. "Three-Dimensional Tomographic Features of Dome-Shaped Macula by Swept-Source Optical Coherence Tomography." Kyoto University, 2015. http://hdl.handle.net/2433/199164.

Full text
APA, Harvard, Vancouver, ISO, and other styles
42

Plissonneau, Louis. "Network tomography from an operator perspective." Electronic Thesis or Diss., Paris, ENST, 2012. http://www.theses.fr/2012ENST0033.

Full text
Abstract:
Network tomography is the study of a network's traffic characteristics from measurements. The subject has been addressed by a large community of researchers, above all to answer internet service providers' need to know what kind of traffic they have to carry. Because the Internet evolves very quickly, the need for measurements of residential traffic never dries up. In this work, we address the measurement of residential Internet traffic from two perspectives: passive measurements and active measurements. In the first part of this thesis, we passively collect and analyse statistics on residential users' connections over a whole week. We use this data to update and deepen our knowledge of residential Internet traffic, and then apply clustering methods to group users according to the applications they use. This shows that the vast majority of customers now use the Internet mainly for web browsing and video streaming. The data is also used to evaluate new opportunities for managing the traffic of a local ADSL platform. Since video streaming accounts for the main part of the traffic, we take multiple packet-capture snapshots of this traffic over a period of several years in order to understand its evolution precisely. Moreover, we analyse its performance, defined through quality-of-service indicators, and correlate it with the behaviour of the users of this service. In the second part of this thesis, we exploit this knowledge to design an active probe that measures the quality of experience of video streaming sites. We model the playback of streaming videos so as to estimate their quality as perceived by users. With this tool, we can understand the impact of video server selection and of DNS servers on the user's perception of video quality. Furthermore, the ability to run experiments from different ISPs allows us to examine in detail the delivery policies of video streaming sites.
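As a rough illustration of the user-grouping step mentioned in the abstract, the snippet below clusters subscribers by their per-application traffic mix with k-means; the application categories, traffic volumes and number of clusters are invented for the example and do not come from the thesis.

```python
# Illustrative sketch: group users by the share of traffic per application,
# then cluster the resulting mix vectors with k-means.
import pandas as pd
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

# per-user traffic volumes (bytes over one week), invented example values
df = pd.DataFrame({
    "web":       [5e8, 1e9, 2e7, 3e8],
    "streaming": [4e9, 2e8, 1e7, 5e9],
    "p2p":       [1e7, 3e9, 5e6, 2e7],
})

X = normalize(df.values, norm="l1")        # application mix per user (shares)
df["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(df)
```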
APA, Harvard, Vancouver, ISO, and other styles
43

Krehl, Jonas. "Incorporating Fresnel-Propagation into Electron Holographic Tomography." Master's thesis, Saechsische Landesbibliothek- Staats- und Universitaetsbibliothek Dresden, 2017. http://nbn-resolving.de/urn:nbn:de:bsz:14-qucosa-217919.

Full text
Abstract:
Tomographic electron holography combines tomography, the reconstruction of three-dimensionally resolved data from multiple measurements at different specimen orientations, with electron holography, an interferometric method for measuring the complex wave function inside a transmission electron microscope (TEM). Because of multiple scattering and free-space wave propagation, conventional, ray-projection-based tomography performs poorly when approaching atomic resolution. This is remedied by incorporating propagation effects into the projection while maintaining linearity in the object potential. Using the Rytov approach, an approximation is derived in which the logarithm of the complex wave is linear in the potential; the ray projection then becomes a convolution with a Fresnel propagation kernel, which is considerably more expensive computationally. A framework for such calculations has been implemented in Python, as has a multislice electron-scattering algorithm optimised for large fields of view and large numbers of atoms, for simulating scattering at nanoparticles. In the tested system, a tungsten disulfide nanotube, the Rytov approximation gives a remarkable increase in resolution and signal quality over the conventional approach. The response to noise appears to be similar to that of conventional tomography, and thus rather benign. The downside is a much longer calculation time per iteration.
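The free-space propagation that replaces the straight-ray projection can be sketched with a generic FFT-based Fresnel propagator; the function below is an illustration under the paraxial approximation, with illustrative sampling and wavelength values, and is not the Python framework developed in the thesis.

```python
# Sketch of free-space Fresnel propagation of a 2-D complex wave using the
# transfer-function (angular spectrum, paraxial) method.
import numpy as np

def fresnel_propagate(psi, wavelength, dx, dz):
    """Propagate complex wave psi by distance dz with the Fresnel kernel."""
    ny, nx = psi.shape
    fx = np.fft.fftfreq(nx, d=dx)              # spatial frequencies (1/m)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    # Fresnel transfer function; the constant phase exp(i*k*dz) is omitted
    H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(psi) * H)

# e.g. 300 kV electrons (wavelength ~ 1.97 pm), 0.05 nm pixels, 5 nm step:
# psi_out = fresnel_propagate(psi_in, 1.97e-12, 0.05e-9, 5e-9)
```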
APA, Harvard, Vancouver, ISO, and other styles
44

Giel, Dominik M. "Hologram tomography for surface topometry." [S.l. : s.n.], 2003. http://deposit.ddb.de/cgi-bin/dokserv?idn=968530842.

Full text
APA, Harvard, Vancouver, ISO, and other styles
45

Marashdeh, Qussai Mohammad. "Advances in electrical capacitance tomography." Columbus, Ohio : Ohio State University, 2006. http://rave.ohiolink.edu/etdc/view?acc%5Fnum=osu1148591259.

Full text
APA, Harvard, Vancouver, ISO, and other styles
46

Ramotar, Alexei. "General Geometry Computed Tomography Reconstruction." Thesis, University of Waterloo, 2006. http://hdl.handle.net/10012/2914.

Full text
Abstract:
The discovery of carbon nanotubes and their ability to produce X-rays can usher in a new era in computed tomography (CT) technology. These devices will be lightweight, flexible and portable. The proposed device, currently under development, is envisioned as a flexible band of tiny X-ray emitters and detectors that is wrapped around an appendage to obtain a CT image. However, current CT reconstruction algorithms can only be used if the geometry of the CT device is regular (usually circular). We present an efficient and accurate reconstruction technique that is unconstrained by the geometry of the CT device; the geometry can be regular or highly irregular. To evaluate the feasibility of reconstructing a CT image from such a device, a simulated test bed was built to generate CT ray sums of an image, and this data was used in our reconstruction method. The output data is regridded onto the sampling pattern expected from a parallel-beam CT scanner, after which filtered back projection is used to perform the reconstruction. Data inaccuracies, as expected in real-world situations, were also included. Visual inspection of the reconstructions, as well as quantitative results, suggests that this simple method is efficient and accurate.
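The regrid-then-reconstruct idea can be sketched as follows, assuming each simulated ray sum is tagged with parallel-beam coordinates (projection angle and signed ray offset); the interpolation onto a regular sinogram and the filtered back projection rely on SciPy and scikit-image and are an illustration, not the author's implementation.

```python
# Sketch: interpolate ray sums from an arbitrary emitter/detector geometry
# onto a regular parallel-beam sinogram, then apply filtered back projection.
import numpy as np
from scipy.interpolate import griddata
from skimage.transform import iradon

def reconstruct(theta_meas, s_meas, p_meas, n_s=128, n_theta=180):
    """theta_meas (deg), s_meas (ray offset), p_meas (ray sums): 1-D arrays."""
    theta_grid = np.linspace(0.0, 180.0, n_theta, endpoint=False)
    s_grid = np.linspace(s_meas.min(), s_meas.max(), n_s)
    T, S = np.meshgrid(theta_grid, s_grid)     # rows: offsets, cols: angles
    sinogram = griddata((theta_meas, s_meas), p_meas, (T, S),
                        method="linear", fill_value=0.0)
    return iradon(sinogram, theta=theta_grid, circle=False)
```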
APA, Harvard, Vancouver, ISO, and other styles
47

Lees, Jonathan Matthew. "Seismic tomography in western Washington /." Thesis, Connect to this title online; UW restricted, 1989. http://hdl.handle.net/1773/6829.

Full text
APA, Harvard, Vancouver, ISO, and other styles
48

Yang, Tongning. "Wavefield tomography using extended images." Thesis, Colorado School of Mines, 2013. http://pqdtopen.proquest.com/#viewpdf?dispub=3557791.

Full text
Abstract:

Estimating an accurate velocity model is crucial for seismic imaging to obtain a good understanding of the subsurface structure. The objective of this thesis is to investigate methods of velocity analysis by optimizing seismic images.

A conventional seismic image is obtained by zero-lag crosscorrelation of wavefields extrapolated from a source wavelet and from data recorded on the surface, using a velocity model. The velocity model provides the kinematic information needed by the imaging algorithm to position the reflectors at correct locations and to focus the image. In complex geology, wave-equation migration is a powerful tool for accurately imaging the earth's interior; the quality of the output image, however, depends on the accuracy of the velocity model. Given such a dependency between the image and the model, analyzing the velocity information from the image is still not intuitive and is often ambiguous. If the nonzero space- and time-lag information is preserved in the crosscorrelation, the output is an image hypercube referred to as the extended image. Compared to the conventional image, extended images provide a straightforward way to analyze image quality and to characterize the accuracy of the velocity model.
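For orientation, one common form of the extended imaging condition used in this literature is written below; the notation (source wavefield W_s, receiver wavefield W_r, space lag lambda, time lag tau) is generic and not necessarily that of the thesis, and the conventional image is recovered as the zero-lag slice I(x) = I(x, 0, 0).

```latex
\begin{equation}
I(\mathbf{x}, \boldsymbol{\lambda}, \tau) =
\sum_{\text{shots}} \sum_{t}
W_s\!\left(\mathbf{x} - \boldsymbol{\lambda},\, t - \tau\right)\,
W_r\!\left(\mathbf{x} + \boldsymbol{\lambda},\, t + \tau\right)
\end{equation}
```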

Understanding the reflection moveout is the key to developing velocity model building methods that use extended images. In the extended image space, reflections form coherent objects that depend on the space lags and the time lag. These objects resemble cones whose symmetry axis lies along the time-lag axis and which ideally have their apex at zero space and time lags. The apex is located at zero lags only if the velocity model is accurate; this corresponds to reflection energy focusing at the origin in both the space-lag and time-lag common-image gathers (the slices at zero time lag and zero space lag, respectively). When the velocity model is inaccurate, the cone shifts along the time-lag axis, resulting in residual moveout in space-lag gathers (the zero time-lag slice) and defocusing in time-lag gathers (the zero space-lag slice). These phenomena are correlated, and they are a rich source of information for velocity model updates.

The extended image distortions caused by velocity model errors can be used to design velocity model building algorithms. When the extended image cones shift, the distance and direction of their apex away from zero time lag constrain the model errors. This information can be used to construct an image perturbation, from which a slowness perturbation is inverted in the framework of linearized wave-equation migration velocity analysis. Alternatively, one can formulate a non-linear optimization problem that reconstructs the model by minimizing this image error; this approach uses the adjoint-state method to compute the gradient of the objective function and iteratively updates the model in the steepest-descent direction.

The space-lag subset of extended images has been used to reconstruct the velocity model by differential semblance optimization for a decade. The basis of the method is to penalize the defocusing in the gathers and to focus the reflection energy at zero lags by optimizing the model. The assumption that defocusing is caused by velocity model error is violated where the subsurface illumination is uneven. To improve the robustness and accuracy of the technique, the illumination compensation must be incorporated into the model building. The illumination compensation effectively isolates the defocusing due to uneven illumination or missing data. The key is to construct an illumination-based penalty operator by illumination analysis. Such a penalty automatically downweights the defocusing from illumination effects and allows the inversion to suffer less from the effects of uneven illumination and to take into account only the image error due to inaccurate velocity models.
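A generic form of the differential semblance objective with a penalty operator P, of which the illumination-based penalty discussed above is a refinement, is written below; the conventional choice is P = |lambda|, which simply penalises energy away from zero space lag, and the exact operator used in the thesis may differ.

```latex
\begin{equation}
J(m) = \frac{1}{2} \sum_{\mathbf{x}, \boldsymbol{\lambda}}
\left| P(\mathbf{x}, \boldsymbol{\lambda})\,
I(\mathbf{x}, \boldsymbol{\lambda}) \right|^{2}
\end{equation}
```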

One major issue for differential semblance optimization with space-lag gathers is the cost of computing and storing the gathers. To address this problem, extended space- and time-lag point gathers can be used as an alternative to the costlier common-image gathers. The point gathers are subsets of the extended images constructed sparsely in the subsurface, on reflectors. They share similar reflection-moveout characteristics with space-lag gathers, so differential semblance optimization can be implemented with them as well. The point gathers reduce the computational and storage cost required by space-lag gathers, especially in 3-D applications. Furthermore, they avoid the dip limitation of space-lag gathers and more accurately characterize the velocity information for steep reflections.

APA, Harvard, Vancouver, ISO, and other styles
49

Strother, S. C. (Steven Charles) 1955. "Quantitation in positron emission tomography." Thesis, McGill University, 1986. http://digitool.Library.McGill.CA:80/R/?func=dbin-jump-full&object_id=72815.

Full text
APA, Harvard, Vancouver, ISO, and other styles
50

Schneider, Ingo D. "Investigations in electrical impedance tomography." Thesis, Cardiff University, 2000. http://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.427798.

Full text
Abstract:
This thesis presents an investigation of various design and implementation aspects of multi-frequency Electrical Impedance Tomography (EIT) systems for medical applications. EIT is a relatively new imaging modality in which the complex impedance of a body is measured through voltage measurements around the body's surface while the body is subjected to electrical excitation. The primary region of interest involves excitation in the frequency range from several kHz to a few MHz. EIT system design objectives were defined, and these form the starting point of a detailed error analysis of an EIT system. The analysis introduced new aspects concerning the multiplexers' on-resistance, the CMV error analysis and the investigation of feedthrough errors incorporating the frequency dependence of the skin-electrode interface. A specification for a novel multi-frequency EIT system was derived through careful consideration of the design objectives, based on the results of the error analysis. The merits and drawbacks of different types of stimulus signal for bio-impedance measurements are reviewed, and a novel multi-frequency signal for the in vivo measurement of biological impedances is introduced. An active electrode was built for differential voltage measurement which combines superior CMRR performance, compared with previously reported implementations, with high input impedance; the implemented circuit was designed to allow further miniaturisation by means of hybrid semiconductor technologies. Prototypes of several digital subsystem components of the specified EIT system were designed and validated the concept of the novel multi-frequency EIT system. For testing and calibrating the developed front-end electronics, a novel EIT phantom system employing active impedance elements is presented. Using active impedance elements enables computer control of the actual impedance values, which simplifies and automates the measurement of phantom impedances over a wide range.
APA, Harvard, Vancouver, ISO, and other styles
